LINKING ADMINISTRATIVE AND IT PRODUCTIVITY IN HIGHER EDUCATION

by Lynn A. DeNoia and John L. Swearingen

Copyright 1992 CAUSE. From _CAUSE/EFFECT_ Volume 15, Number 3, Fall 1992. Permission to copy or disseminate all or part of this material is granted provided that the copies are not made or distributed for commercial advantage, the CAUSE copyright and its date appear, and notice is given that copying is by permission of CAUSE, the association for managing and using information resources in higher education. To disseminate otherwise, or to republish, requires written permission. For further information, contact CAUSE, 4840 Pearl East Circle, Suite 302E, Boulder, CO 80301, 303-449-4430, e-mail [email protected]

************************************************************************
Lynn DeNoia is Executive Director for Information Technology at Bryant College, responsible for strategic planning and coordination, selection, acquisition, implementation, and allocation of information technology resources throughout the campus. Her background also includes extensive experience as both an internal and an independent consultant in telecommunications systems and as a CIS/MIS department faculty member.

John Swearingen is Assistant Professor of Computer Information Systems at Bryant College, specializing in strategic applications of information technology. His background also includes consulting and research in acoustics and vehicle control systems.
************************************************************************

ABSTRACT: Before we can fully take advantage of the potential that IT offers to improve the efficiency of higher education administration, we must find a way to measure its value. Traditional metrics fail to offer effective measures. This article examines an alternative.

In an era of rapid change, increasing costs, and decreasing enrollments, the effective and efficient performance of academic administration becomes increasingly important. Information technology (IT) may, because of the myriad goals toward which it can be employed, be an especially valuable resource. If IT is to be used, however, its value must be tied to administrative productivity and organizational outcomes. Traditional methods of valuing IT fail to provide this linkage. In the following, we demonstrate the use of the Return on Management (ROM) metric suggested by Paul Strassmann.[1] This metric provides at least a first step toward appropriately valuing strategic applications of information technology so that the necessary organizational impact can be assessed.

THE PROBLEM

Colleges and universities face a growing need for more effective and efficient administration in the face of:

* population-based decreases in the pool of traditional applicants

* greater competition for resources, brought on by escalating costs coupled with decreasing enrollments

* growing resistance to continuing to raise tuition at current rates

* the need for increasing investment in information technology to support academic activities.

While IT has the potential to improve administrative efficiency, its effective application is often frustrated by four factors. First, examples of successful applications usually carry high dollar costs. (We hasten to add, however, that high cost is not a prerequisite for success!) Second, it is seldom obvious which one(s) of several applications, all competing for limited and scarce resources, will contribute most to achieving specified organizational objectives. Third, the actual "value" of an application is difficult to measure. And, finally, we have come to recognize, as demonstrated by Cron and Sobol, that the level of spending on IT is not necessarily related directly to administrative productivity.[2] These factors suggest that careful and complete justification of IT applications is especially important if scarce resources are to be allocated in the most effective manner.

In addition to these rather daunting factors, we need to recognize that the nature of information systems is changing. Traditional systems have been used primarily to enhance operations and process transactions.[3] Penrod and Dolence have suggested that systems are changing from promotion of transition or innovation to promotion of transformation, while Anthony's framework would suggest the change is from operational/tactical to strategic applications.[4] In any view, the nature of the systems is changing: complexity is increasing; support is required for multiple linkages within the organization and between the organization and its environment; secondary, indirect, or intangible impacts are becoming more significant (the direct effect of a new financial aid system may be to improve financial control; the indirect effect, to improve competitive position by improving service to applicants); emphasis is shifting toward the "management of relationships" rather than the "management of things"; and the most important impacts of systems are often unanticipated. Benefits stem not from the nature or application of the technology itself, but rather from our ability to use the technology to derive significant benefits from other, existing resources.[5] In addition, systems that would in the past have enabled strategic advantage are now becoming strategic necessities. As a consequence of such change, measuring the "value" of information technology is becoming more necessary and increasingly more difficult--and this value must be linked to administrative productivity.

TRADITIONAL METHODS OF JUSTIFYING IT

Strassmann suggests that traditional methods of justifying information technology:[6]

* use the same procedures as in buying machine tools;

* focus on high-volume, homogeneous transactions;

* concern themselves with operational applications;

* employ a bottom-up approach and a technique based on unit costs;

* emphasize cost reductions;

* adopt a short-term and payback orientation;

* emphasize justification by the analyst;

* rely on proposals by providers of the tools;

* apply to routine clerical work.

Cost/benefit analysis is one of the more popular traditional and pragmatic approaches to valuing information technology. The procedure typically consists of five steps: (1) define the scope of the project, (2) evaluate the direct and secondary costs and benefits of the project, (3) define the life of the project, (4) discount the dollar values, and (5) perform a sensitivity analysis.[7]

There are a number of problems with this approach. Step 2, for example, requires an attempt to identify in detail all tangible and intangible, direct and indirect, costs and benefits. Tangible costs and benefits are often easy--costs for staff and purchases from vendors, benefits such as reductions in staff or delivery time. The intangible and/or indirect costs and benefits (e.g., the impact of user sophistication, increased customer satisfaction, or client confidence) are often far more important, however, and are not easily defined or measured for inclusion in the analysis.

In step 3, the lifetime chosen for a project is often arbitrary. Benefits that continue to accrue throughout the expected life automatically make longer projects look better. However, lifetimes chosen on the basis of "investment" criteria may bear no relation to what is appropriate for strategic necessity. Furthermore, the very concept of "lifetime" implies a point in time where project or system utility ends--a concept irrelevant to systems that provide strategic infrastructure for an organizational future.

In step 4, an arbitrary choice of discount rate can mask the effect of errors made in predicting costs and benefits. In particular, prediction errors have less effect on results at higher discount rates. (A brief numerical sketch of this effect appears at the end of this section.)

This traditional cost/benefit approach may be extended in various ways by using techniques such as:

* incremental analysis--predicting expected work load and considering alternate ways of meeting this load;

* expected value--estimating costs or benefits and assigning probabilities in order to compute "expected" costs or benefits;

* value analysis--developing and testing a prototype, then extrapolating costs and benefits from prototype results;

* benefit profile--developing an extensive list of benefits, noting degree of improvement or potential improvement;[8] or

* sensitivity analysis to include the effects of uncertainty.

There is, however, no inherent structure for including "risk" within cost/benefit analysis--a major shortcoming for strategic systems. In addition, assessing the risk of not doing a project can sometimes be the most important aspect of a strategic systems investment decision. Such an assessment is particularly difficult with traditional tools. Finally, cost/benefit analysis emphasizes the "cost" basis, relying on the fundamental notion that benefits must "outweigh" costs in order for an investment to be worthwhile. We suggest that this aspect makes cost/benefit analysis most useful in deciding in which of several ways something should be done, rather than whether it should be done at all. We would also suggest that, philosophically, cost is a poor measure upon which to base a strategic decision.

Other approaches, such as the grid-based impact/value framework of Hammer and Mangurian, provide guidance with respect to general investment strategies but are less useful for evaluating a specific investment decision.[9] The Critical Success Factors approach has the potential for developing unified priorities for investment decisions, but tends to fail when most sorely needed, i.e., at times when members of a management team cannot agree on a single set of appropriate objectives or goals.[10]

A number of these traditional methodologies may be satisfactory for evaluation or justification of operational or tactical information technology systems (cost/benefit analysis), or may provide general guidelines for strategic investments (the impact/value framework or Critical Success Factors). With respect to the justification of strategic or transformation-oriented information technology investments, however, they have several limitations:

(1) they are typically cost (input) based rather than benefit (output) based;

(2) they fail to adequately include intangible or secondary impacts, or the effects of risk;

(3) they operate at an inappropriate level of detail, either very detailed (cost/benefit analysis) or very general (the impact/value framework);

(4) they place value on the technology or the acquisition of information, rather than on the derived organizational outcome;

(5) they may require a level of mathematical knowledge and a degree of decision-making sophistication not widely available; and

(6) they fail to link the notion of IT value to either organization-based outcomes or administrative productivity.

In short, traditional methodologies are inadequate for justification of strategic or transformation-oriented information technology investments. They are inadequate particularly in that they encourage cost-based decision making, focus on analysis at the detail rather than the organization level, fail to adequately include the impact of risk, and fail to link IT to overall organization performance.
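Before moving on, here is a small, purely hypothetical numerical sketch of the discount-rate effect noted under step 4: the same prediction error in future benefits shrinks, in discounted terms, as the discount rate rises. The cash-flow figures and the helper function are ours, for illustration only.

```python
def present_value(cash_flows, rate):
    """Discount a list of end-of-year cash flows back to year zero."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows, start=1))

# Benefits predicted at $100,000 per year for five years, versus the same
# stream with a $20,000-per-year prediction error.
predicted = [100_000] * 5
actual = [80_000] * 5

for rate in (0.05, 0.15):
    gap = present_value(predicted, rate) - present_value(actual, rate)
    print(f"discount rate {rate:.0%}: discounted impact of the error = ${gap:,.0f}")
# At 5 percent the error is worth roughly $86,600; at 15 percent, roughly $67,000.
# The higher rate thus masks more of the same prediction error.
```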

ROM(TM)

To justify a strategic IT investment, we must link together IT performance, managerial performance, and organizational performance. Strassmann's Return on Management (ROM) methodology provides such linkage. We will consider ROM to be a management productivity measure based upon three fundamental concepts:

* Value added by management is any value of a product over and above the cost of raw materials, operations involved in, and services required for its production.

* The fundamental purpose and effect of IT investments is to improve the performance of management.

* The value of information comes only from use, not simply from acquisition.

Comparison of total revenues to total cost provides a measure of organizational performance. Management productivity can be measured by comparing the value added by management to the cost of management. Comparing the value added by management to the cost of the information technology used by management provides a measure of IT productivity.
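To make these three measures concrete, the following is a minimal sketch in Python; the dollar figures are entirely hypothetical, and the function and variable names are ours rather than part of Strassmann's formulation.

```python
def organizational_performance(total_revenues, total_costs):
    """Organizational performance: total revenues compared to total costs."""
    return total_revenues / total_costs

def management_productivity(management_value_added, cost_of_management):
    """Return on Management: value added by management per dollar spent on management."""
    return management_value_added / cost_of_management

def it_productivity(management_value_added, cost_of_management_it):
    """IT productivity: value added by management per dollar spent on the
    information technology used by management."""
    return management_value_added / cost_of_management_it

# Hypothetical figures for a single institution (dollars per year).
total_revenues = 52_000_000
non_management_costs = 46_000_000      # capital charge, suppliers, non-management operations
management_value_added = total_revenues - non_management_costs  # still includes the cost of management
cost_of_management = 5_500_000
cost_of_management_it = 600_000

print(management_productivity(management_value_added, cost_of_management))   # about 1.09
print(it_productivity(management_value_added, cost_of_management_it))        # about 10.0
```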

Taking a most simplistic view, we first calculate "management value added" by subtracting all non-management costs from total revenues; note that management value added still includes the cost of management. We then calculate ROM by dividing this management value added by the cost of management. The resulting ratio corresponds to an overall measure of the "productivity of management." We compute a similar figure for "IT productivity" by dividing the management value added by the cost of the information technology utilized by management.

The calculation of management value added is not a straightforward process of simply totaling a set of numbers representing management's contribution in some sector of an organization. Rather, we must start with the total revenues of the organization and subtract the value added from capital, the costs incurred to all suppliers, and all non-management operational costs; what remains is what we define to be management value added. In the following illustration and discussion, we use the term "management" to include both the management and administrative functions.

AN ILLUSTRATION

We have found that different schools use rather different accounting systems and charts of accounts. To compare institutions, we chose data from the Integrated Post-Secondary Education Data System (IPEDS) Financial Survey for the years 1988, 1989, and 1990.[11] While these data do not provide the detail one might desire, they appear to be relatively consistent across institutions and political divisions; are public data, available equally to all institutions; and provide a reasonable and useful first approximation to the ideal data.

Calculation of management value added proceeds as follows (see Table 1 for an illustration of the formula for a sample school):

[TABLE 1 NOT AVAILABLE IN ASCII TEXT VERSION]

(1) Total revenue is taken directly from Part A, CURRENT FUNDS REVENUES BY SOURCE, line 16 of the IPEDS survey (Total Current Funds Revenues). This includes tuition and fees; federal, state, and local government monies; private gifts; endowment income; and monies from other sales and service activities.

(2) Value added due to capital is taken from Part J, PHYSICAL PLANT ASSETS, lines 01 (Land), 02 (Building), and 03 (Equipment), summed and discounted at the estimated cost of capital for that year. In each case, we estimated the average cost of capital from the experience of our own institution, by dividing the total interest paid during that year by the total value of bonds outstanding.

(3) A first approximation to the total of supplier and non-management operational expenses is taken from Part B, CURRENT FUNDS EXPENDITURES AND TRANSFERS, line 22 (Total Current Funds Expenditures and Transfers). This figure, however, includes both the non-management operational expenses we want (faculty salaries and the cost of the IT resources directly supporting operations, for example) and the management costs. Management costs in this instance include all salaries, wages, and bonuses, office space, and costs for technology directly supporting management activities. Institutional support (Part B, line 07) includes expenses for general management services, executive direction and planning, legal and fiscal operations, public relations, and development. We chose this institutional support figure to approximate administrative costs. The non-management operational expenses figure, then, is calculated by deducting institutional support from total current funds expenditures and transfers.

RESULTS

A full understanding of the administrative productivity at any one institution requires both comparing productivity at that institution with productivity at other, similar institutions, and looking at changes in productivity within that institution over time.

Comparing your school with others

Figure 1 displays administrator productivity for a sample of twenty-three New England colleges and universities. Productivity for each institution is depicted by three consecutive bars for the years 1988, 1989, and 1990. One need not pore over this figure to see that administrator productivity varies widely across schools. We may, however, make several additional observations.

First, our definition of the productivity index as

    Productivity = Management Value Added / Cost of Management

where we have not yet removed the cost of management from the management value added, leads us to expect a ratio greater than one. But this seldom occurs! This suggests there may be considerable room for improvement in administrator productivity. Remember, however, that we used an aggregate measure based on approximations made with survey data. While the approximations were made consistently within that data set and data within the sets appear consistent, individual schools may have responded differently to the survey. It is therefore inappropriate to

place excessive confidence in the specific values of the productivity index. It is safe to conclude, however, that as measured by ROM, a difference in administrator productivity exists across schools.

Second, we have suggested that you compare your school with "similar" schools. The choice of comparison schools must be made carefully to provide any validity. Even within our loosely defined "groups" of schools, wide variation in administrator productivity still exists. Comparison with overall or group averages may seem reassuring if the productivity of your school lies "above" the average, but it fails to tell you whether you are obtaining the most from your resources. A productivity index falling much below the average, however, could suggest that your situation warrants further investigation.

Third, comparison across schools may suggest that your school is experiencing unique difficulties. For example, the data presented in Figure 1 suggest that the middle year (1989) was a "good" year for most schools--administrator productivity either rose from 1988 or stayed approximately the same--and that productivity in 1990 tended to decrease somewhat. Schools #1 and #10, however, showed a decrease in productivity in both 1989 and 1990, while school #21 showed consistently negative productivity. If any of these were our school, we would want to seek further explanation. We might begin by comparing detailed IPEDS data for our school with that of similar institutions. In any case, ROM may suggest that a problem exists and point us in the general direction of its source; a detailed examination of the data is required if we are to understand the problem more fully and develop a useful solution.

Finally, administrator productivity, as measured by Return on Management, is a function of many parameters. We must recognize that a significant financial investment will lower this measure of productivity if the benefits are not immediately realized. That does not make the investment unwise. In addition, the calculation is relatively sensitive to the "cost of capital." We assumed in Figure 1 that the cost of capital is constant across schools, in fact being equal to the average cost of capital experienced by our home institution for the particular year. We realize, however, that the cost of capital changes as the financial position of an institution changes. Let us suggest, therefore, that the really interesting comparison is not the year-to-year absolute value of administrator productivity, nor a comparison across schools, but rather the manner and degree of change over time.

Looking within your own institution over time

Figure 2 portrays administrator and IT productivity over time at a single institution. Notice that the variables are plotted on different scales to make visual comparison easier. The index for IT productivity is roughly an order of magnitude greater than the index for overall administrator productivity. This difference is to be expected because information technology costs are only a portion of the total "cost of management."

[FIGURE 2 NOT AVAILABLE IN ASCII TEXT VERSION]

Notice that the two productivities appear to track fairly well, and that both have a slightly downward trend from 1983 on. It would also appear that changes in IT productivity (increases or decreases) tend to lead those in administrator productivity. We have, however, not yet answered a question of fundamental interest: "Do our investments in information technology contribute to improved productivity?"
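As a rough sketch of how such a time series, and the year-to-year changes examined in Figure 3 below, can be assembled from the IPEDS approximation described earlier, consider the following. The field mapping follows our approximation; the dollar figures, per-year IT costs, and cost-of-capital estimates are hypothetical, and the variable names are ours.

```python
# Hypothetical IPEDS-derived figures for one institution, by fiscal year.
#   total_revenue         -- Part A, line 16 (total current funds revenues)
#   plant_assets          -- Part J, lines 01-03 summed (land, buildings, equipment)
#   total_expenditures    -- Part B, line 22 (total current funds expenditures and transfers)
#   institutional_support -- Part B, line 07 (our proxy for the cost of management)
#   it_cost               -- IT cost attributable to management (not in IPEDS; estimated locally)
#   cost_of_capital       -- estimated average cost of capital for that year
years = {
    1988: dict(total_revenue=50_000_000, plant_assets=80_000_000,
               total_expenditures=48_000_000, institutional_support=5_000_000,
               it_cost=500_000, cost_of_capital=0.08),
    1989: dict(total_revenue=53_000_000, plant_assets=84_000_000,
               total_expenditures=50_500_000, institutional_support=5_300_000,
               it_cost=650_000, cost_of_capital=0.08),
    1990: dict(total_revenue=55_000_000, plant_assets=86_000_000,
               total_expenditures=53_000_000, institutional_support=5_700_000,
               it_cost=800_000, cost_of_capital=0.08),
}

def productivity_indices(d):
    capital_value_added = d["plant_assets"] * d["cost_of_capital"]
    non_mgmt_operational = d["total_expenditures"] - d["institutional_support"]
    mgmt_value_added = d["total_revenue"] - capital_value_added - non_mgmt_operational
    return (mgmt_value_added / d["institutional_support"],  # administrator productivity
            mgmt_value_added / d["it_cost"])                # IT productivity

indices = {year: productivity_indices(d) for year, d in sorted(years.items())}

# Year-to-year changes, the quantity plotted in Figure 3.
previous = None
for year, (admin, it) in indices.items():
    if previous is not None:
        print(year, "change in administrator index:", round(admin - previous[0], 3),
              "change in IT index:", round(it - previous[1], 3))
    previous = (admin, it)
```

With figures like these, the administrator index falls well below one and the IT index is roughly an order of magnitude higher, consistent with the patterns discussed above.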

From the year-to-year changes shown in Figure 3, we see several important features in both administrator and IT productivity. First, from 1984 on, most changes appear to be negative. This observation confirms the impression obtained from Figure 2 that productivity is decreasing. Second, in 1986 the change in IT productivity is more negative than that in general administration. This could suggest that the decrease in IT productivity contributed significantly to, or was even the predominant cause of, the general decrease in administrator productivity for that year. Alternatively, the decrease in IT and/or administrative productivity may simply be the result of a large expenditure on IT resources in that year. Third, during 1987-91, the change in IT productivity is more positive than that for general administration. This might suggest that, at worst, IT was not the entire cause of any decrease in administrator productivity, and at best, improvements in IT productivity kept overall administrator productivity from decreasing still further.

At this point, we must turn to other tools if we wish to identify specific causes of productivity changes. Standard ratio analysis might be used in the present example to suggest whether the observed decreases in productivity were due to increases in the number of personnel or to changes in the work process. As an aggregate measure, ROM cannot provide such detail.

The two examples presented--looking at a snapshot in time across schools, and looking within a single institution over time--both focus on the result of some action, after the fact. A similar approach, using forecast rather than historical data, should enable us to compare the potential benefit of proposed investments.

SOME LIMITATIONS

The Return on Management metric is itself limited in several ways. First, since we have not operated within this perspective in the past, we have to change--change the way in which we consider the problem and the manner in which we collect data. A complaint raised by businesses attempting to apply this methodology, and, we suspect, of equal concern to educational institutions, is that it is difficult to take existing data and transform them into the required format.

Second, the Return on Management metric, because of its aggregate nature, is most readily applied over an entire organization. In order to apply this metric on a departmental or other sub-organizational basis, we must develop "transfer prices" to value the services or information provided by one department to another. To date, there appears to have been little thought given to defining the value produced by "admissions," "financial aid," or "the registrar's office." This value produced is required as the starting point in our calculation of management value added. Development of the "transfer price" model, we suggest, should be a next step in extending the concept of Return on Management.

We may follow a similar approach in order to link a specific strategic or tactical outcome to the measure of productivity. A specific outcome can be considered as long as we are able to assign an appropriate dollar value to both the increase in revenues caused by the outcome and the

operational cost of producing it. These values are then entered into the calculations for management value added or the cost of management, as appropriate.

Third, it is also clear that "productivity," as measured by ROM, varies as a function of factors not obviously addressed by the model--for public schools, an erosion of the revenue base due to tax cutbacks or politically imposed budget restrictions; for all schools, the inflation of the dollar over time, or changes in costs or prices over which management has no control. Including the impacts of some of these factors is quite straightforward; inflation, for example, may be handled by adjusting dollar values based upon published economic indices. This same approach will suffice for changes in costs over which management has no control--faculty salary increases, for example. The impact of an eroding revenue base is computationally straightforward to include (the composition of total revenues is adjusted), but interpretation of the resulting change in productivity is difficult. We must make a judgment as to whether we are willing to ascribe the resulting change in productivity to a decrease in financial flexibility due to the loss of non-customer-based revenues, to a change in the performance of management, or to factors beyond the control of management. The most important consideration is that one be consistent over time in the definition of factors and in the application of the model.

Return on Management provides only a philosophical framework and the rudiments of a computational and interpretive approach. Each organization must mold the actual implementation to its own needs and characteristics. Return on Management is by no means a panacea. It provides "the answer" for no one. We suggest, however, that it does provide a new perspective within which to view information technology investments in relation to overall organization performance; a necessary link between information technology, administrative productivity, and organizational productivity; and a first step toward a more enlightened view of the role of information technology in the organization--all in an intuitively acceptable manner. These accomplishments, especially that of encouraging a new perspective, promise an important beginning.

========================================================================

Footnotes:

1 P. A. Strassmann, "Management Productivity as an IT Measure," in P. Berger, J. G. Kobielus, and D. E. Sutherland (Eds.), Measuring Business Value of Information Technologies (Washington, DC: International Center for Information Technologies, 1988), pp. 17-55.

2 W. L. Cron and M. G. Sobol, "The Relationship Between Computerization and Performance: A Strategy for Maximizing the Economic Benefits of Computerization," Information and Management 6: 171-181.

3 A. L. Lederer and A. G. Putnam, "Connecting Systems Objectives to Business Strategy with BSP," Information Strategy: The Executive's Journal (Winter 1986): 12-19.

4 J. I. Penrod and M. G. Dolence, "Reengineering: A Concept for Higher Education," CAUSE/EFFECT, Summer 1991, pp. 10-17; and R. N. Anthony, Planning and Control Systems: A Framework for Analysis (Cambridge, Mass.: Harvard University Graduate School of Business Administration, Studies in Management Control, 1965).

5 E. K. Clemons, "Corporate Strategies for Information Technology: A Resource-Based Approach," Computer, November 1991, pp. 23-31.

6 Strassmann, p. 27.

7 P. M. W. May, "Beware of the Cost/Benefit Model for IS Project Evaluation," Journal of Systems Management 36:6 (June 1985): 30-35.

8 R. D. Smith, "Measuring the Intangible Benefits of Computer-based Information Systems," Journal of Systems Management 34:9 (September 1983): 22-27.

9 M. Hammer and G. E. Mangurian, "The Changing Value of Communications Technology," Sloan Management Review 28:2 (Winter 1987): 65-71.

10 J. Rockart, "Chief Executives Define Their Own Data Needs," Harvard Business Review, March/April 1979, pp. 81-92.

11 United States Department of Education, National Center for Education Statistics, Integrated Post-Secondary Education Data System Financial Survey (Washington, DC: United States Department of Education, 1988, 1989, 1990).

========================================================================
