Policy Sciences, Volume 4, Issue 2, pp. 171–195

A critical review of some basic considerations in post-secondary education evaluation

  • P. F. Gross


Following a short review of North American program evaluation experience in elementary, secondary and post-secondary education, a summary is given of the major problems observed in such evaluation efforts. Against this background, the remainder of the paper attempts to specify objectives and criteria that seem appropriate for post-secondary program evaluation in the 1970's. Some attention is devoted to the problems of implementing efficiency gains through increased productivity under a number of alternative strategies. Finally, some tentative suggestions are made as to possible routes to implementing program evaluation in post-secondary education.






  1. Wynne, E., “School Output Measures as Tools for Change,” Education and Urban Society 2: 1 (November 1969), p. 3.
  2. Trow, M., Methodological Problems in the Evaluation of Innovation. Los Angeles: University of California, Center for the Study of Evaluation of Instructional Programs, May 1969.
  2a. Other problems in the evaluation of innovation are discussed in Gross, N., Giaquinta, J. B. and Bernstein, M., Implementing Organizational Innovations: A Sociological Analysis of Planned Educational Change. New York: Basic Books, 1971; Betts, F. M., “Evaluation of Innovation Programs in Urban Education,” paper presented at the Annual Meeting of the Operations Research Society of America (ORSA), Anaheim, California, 28–30 October 1971, 34 pp.
  3. Brooks, M., “The Community Action Program as a Setting for Applied Research,” Journal of Social Issues 21 (1965), pp. 29–40.
  4. Suchman, E. A., Evaluative Research. New York: Russell Sage Foundation, 1967, pp. 31–32.
  5. Suchman, E. A., “A Model for Research and Evaluation on Rehabilitation.” In Sussman, M., ed., Sociology and Rehabilitation. Washington, D.C.: American Sociological Association, 1966, p. 68.
  6. The original source of this four-way classification was the U.S. Office of Economic Opportunity Instruction No. 72–78, issued in Fiscal Year 1968 at the height of the internal strife in OEO over the evaluation of Program Headstart. See Wholey et al., Federal Evaluation Policy. Washington, D.C.: The Urban Institute, June 1970, pp. 23–27 and pp. 61–69. See also Williams, Walter, Social Policy Research and Analysis. New York: Elsevier, 1971, p. 109.
  7. It should be noted that although the apparently value-loaded term “effectiveness” has been used in the description of the four types of PE, “effectiveness” is but one criterion for the evaluation of programs, as is noted further on.
  8. The description is attributed to Donald Campbell in his address to the 79th Annual Convention of the American Psychological Association. See: “Campbell Outlines ‘Experimenting Society’ Plan,” APA Monitor 2: 12 (December 1971), p. 8.
  9. This description is used by Alice Rivlin in her Systematic Thinking for Social Action. Washington, D.C.: Brookings Institution, 1971, p. 87.
  10. Flanagan, J., et al., Project Talent. Survey conducted by the University of Pittsburgh for the U.S. Office of Education in 1960, with results published in a number of volumes from 1960–64.
  11. Coleman, J., et al., Equality of Educational Opportunity. Washington, D.C.: U.S. Office of Education, U.S.D.H.E.W., 1966.
  12. Bowles, S. and Levin, H., “The Determinants of Scholastic Achievement: An Appraisal of Some Recent Evidence,” Journal of Human Resources 3 (Winter 1968), pp. 3–24; ibid., “More on Multicollinearity and the Effectiveness of Schools,” Journal of Human Resources 3 (Summer 1968), pp. 393–400; Coleman, J., “The Concept of Equality of Educational Opportunity,” Harvard Educational Review 38 (Winter 1968), pp. 7–22. The most enlightening criticism is: Cain, G. and Watts, H., “Problems in Making Inferences from the Coleman Report.” Madison, Wisconsin: Institute for Research on Poverty, Discussion Paper No. 28–68, 1970.
  13. A concise summary of these studies is contained in Cohen, D. K., “Politics and Research: Evaluation of Social Action Programs in Education,” Review of Educational Research 40: 2 (April 1970), pp. 213–238. A more detailed perspective on the evaluation problems of Headstart is contained in Chapter 7 of Williams, op. cit. (see footnote 6, supra).
  14. See, for example, Educational Vouchers: A Report on Financing Elementary Education by Grants to Parents. Cambridge: Center for the Study of Public Policy, December 1970.
  16. An example of the earlier studies of scale economies is Hirsch, Werner Z., “Determinants of Public Education Expenditures,” National Tax Journal 13 (March 1960), pp. 29–40.
  17. An example of the more sophisticated approach is Bowles, Samuel, “Towards an Education Production Function.” In Hansen, W. L., ed., Education, Income and Human Capital, National Bureau of Economic Research Studies in Income and Wealth, Volume 35. New York: Columbia University Press, 1970, pp. 11–61; Katzman, Martin T., “Distribution and Production in a Big City Elementary School System,” Yale Economic Essays 8 (1968), pp. 201–256. A less sophisticated approach is Rajpal, Puran L., “Relationship Between Expenditures and Quality Characteristics of Education in Public Schools,” The Journal of Educational Research 63: 2 (October 1969), pp. 57–59.
  18. We have to be careful to define the time period over which these responses to income and price changes occurred. There is some evidence that the long-run income elasticity of U.S. non-defense expenditures (including education) is large, perhaps 1.4. The short-run income elasticity of the same public expenditures is 0.4 in the first year, 0.7 in the first two years, 1.0 in three years, rising to the 1.4 above in the long run. Public educational expenditures might be expected to follow the same patterns, since U.S. Federal expenditures on education and manpower were nearly 8% of non-defense GNP in 1969. Defense expenditures were 44% of GNP. See: Economic Report of the President, Transmitted to the Congress, February 1971, p. 100. In a number of empirical studies of U.S. public education expenditures, the range of estimates of the income elasticity of demand for secondary education is considerable. Hirsch's study (footnote 16, supra) of St. Louis revealed that a one-dollar increase in the average assessed valuation of real property per pupil in average daily attendance was associated with a 1.9% increase in total per-pupil current expenditures (plus debt retirement) for public education. The corresponding income elasticity is 0.56. Brazer's studies of 40 large U.S. cities revealed an income elasticity of 0.73, while Fabricant's analysis of state-level data suggests an income elasticity of 0.78. Owen's more recent evaluation of elementary schools in the nine large U.S. cities studied by Coleman suggests an income elasticity of 0.43, about half of the above values. See Owen, John D., “The Distribution of Educational Resources in Large American Cities,” Journal of Human Resources 7: 1 (1972), pp. 26–38.
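    The elasticity estimates quoted in this footnote all rest on the same arithmetic: an income elasticity is the percentage change in expenditures divided by the percentage change in income. A minimal sketch of that calculation follows; every figure in it is hypothetical, chosen only to illustrate the arithmetic behind the cited range of estimates (0.43 to 0.78), and none comes from the studies above.

    ```python
    # Sketch of a point income-elasticity calculation, with hypothetical figures.

    def income_elasticity(exp_before, exp_after, inc_before, inc_after):
        """Percentage change in expenditure over percentage change in income,
        both measured from the base period."""
        pct_exp = (exp_after - exp_before) / exp_before
        pct_inc = (inc_after - inc_before) / inc_before
        return pct_exp / pct_inc

    # Hypothetical city: income rises 10%, per-pupil spending rises 7.3%,
    # giving an elasticity of 0.73 (spending grows more slowly than income).
    e = income_elasticity(1000.0, 1073.0, 10000.0, 11000.0)
    print(round(e, 2))
    ```

    An elasticity below 1 (as in all the estimates cited) means public education spending grows, but less than proportionally to income; the dispersion across studies reflects differences in data (city vs. state level) and specification.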
  19. Owen, John D., op. cit., p. 38.
  20. See for example: Perspective 1975, Sixth Annual Review of the Economic Council of Canada, Ottawa, September 1969 (particularly Chapters 8 and 10).
  21. Lacombe, J. B., Some Economic Aspects of Education in Canada, Staff Study No. 34. Ottawa: Economic Council of Canada, 1972.
  22. Hoepfner, R., “Characteristics of Standardized Tests: Evaluation Instruments,” UCLA Evaluation Comment 3: 1 (September 1971), pp. 1–6.
  23. The controversy commenced with the publication of the results of a study of the California higher educational finance system in Weisbrod, B. and Hansen, W. L., Benefits, Costs and Finance of Public Higher Education. Chicago: Markham Publishing Company, 1970. A shorter version by the same authors appeared as “The Distribution of Costs and Direct Benefits of Public Higher Education: The Case of California,” Journal of Human Resources (Spring 1969), pp. 176–191. Subsequent communications by Cohn et al., Pechman and Hartman on the latter article appear in Journal of Human Resources 5 (Spring 1970), pp. 222–236; Journal of Human Resources 5 (Summer 1970), pp. 361–370; Journal of Human Resources 5 (Fall 1970), pp. 519–523. Pechman's analysis tended to show that the system was redistributive from rich to poor. A reply by Weisbrod and Hansen is in JHR 6: 3 (1970), pp. 363–374. In Canada, Judy found that the public financing of higher education in Canada was not redistributing income. See: Judy, R. W., “On the Income Redistributive Effects of Public Aid to Higher Education in Canada.” Toronto: Institute for Quantitative Analysis of Social and Economic Policy, University of Toronto, September 1969. However, elsewhere, Judy has argued that the same system is mildly redistributive in favor of the poorer income classes because “... the taxation structure is somewhat more progressive in Canada than is the distribution of college and university students by income class of family.” See Judy, R. W., “Costs: Theoretical and Methodological Issues.” In Somers, G. and Woods, W., eds., Cost-Benefit Analysis of Manpower Policies. Kingston (Ontario): Industrial Relations Center, Queen's University, 1969, pp. 16–29 (at p. 29).
  24. Improved methods of financing secondary education are discussed in Garfinkel, I., “Values and Efficiency in Financing Education.” Madison, Wisconsin: Institute for Research on Poverty, Discussion Paper 78–70, September 1970; Hansen, W. L. and Weisbrod, B. A., “A New Approach to Higher Education Finance.” In Orwig, M., ed., Financing Public Education. Iowa City: American College Testing Program, 1971, pp. 117–142; Hinson, J. P., “Higher Education—How to Pay,” New England Economic Review (March–April 1971), pp. 3–22; Denison, E. F., “An Aspect of Unequal Opportunity,” Brookings Bulletin 8: 1 (1971), pp. 7–10.
  25. “College to Base Tuition on Income,” New York Times, October 12, 1971.
  26. This question, inter alia, is discussed in the Canadian context in Wright, D. T., “The Financing of Post-Secondary Education: Basic Issues and Distribution of Costs,” Canadian Public Administration 14: 4 (1971), pp. 595–607.
  27. Cage, B. N. and Manatt, R. P., “Cost Analysis of Selected Educational Programs in the Community Colleges of Iowa,” Journal of Educational Research 63: 2 (October 1969), pp. 66–70.
  28. Jenny, H. and Wynn, G., “Short-Run Cost Variations in Institutions of Higher Learning.” In The Economics and Financing of Higher Education in the United States, A Compendium of Papers submitted to the Joint Economic Committee, U.S. 91st Congress, 1st Session, 1969. Washington: U.S. Government Printing Office, 1969.
  29. Hettich, W., Expenditures, Output and Productivity in Canadian University Education. Special Study No. 14 prepared for the Economic Council of Canada, Ottawa, January 1971.
  30. Hettich, W., op. cit., pp. 66–67.
  31. This decline in university productivity is suggested by recent studies in Britain and in Canada. See Woodhall, M. and Blaug, M., “Productivity Trends in British University Education 1938–62,” Minerva 3: 4 (Summer 1965), pp. 483–488; Hettich, op. cit.
  32. Design for Decision-Making: 8th Annual Review of the Economic Council of Canada, Ottawa, September 1971 (Chapter 9: The Changing Education Scene).
  33. Design for Decision-Making: 8th Annual Review of the Economic Council of Canada, Ottawa, September 1971 (Chapter 9: The Changing Education Scene).
  34. See, for example, Trotter, B., Television and Technology in University Teaching, A Report to the Committee on University Affairs and the Committee of Presidents of Universities of Ontario, December 1970.
  35. See, for example, Gross, P. F. and Cropley, A. J., “Educational Technology, Educational Effectiveness, and Productivity: A Report on an Experimental Use of Computer-Assisted Instruction in Saskatchewan.” Paper presented at the First Annual Conference, Saskatchewan Education Research Association, Regina, Saskatchewan, October 29, 1971. Revised version in press.
  36. Gross and Cropley, op. cit.
  37. “Tech Schools Suffer as Students Turn Off,” Business Week, April 10, 1971, p. 91; “Applicants to Ivy Colleges Off 7%,” New York Times, April 23, 1971, p. 1; “Why the Graduate Degree No Longer Appeals,” New York Times, May 2, 1971.
  38. Galper, H. and Dunn, R., “A Short-Run Demand Function for Higher Education in the United States,” Journal of Political Economy 77: 5 (September–October 1969), pp. 765–776.
  39. Schaafsma, J., “The Demand for Higher Education in Canada,” Working Paper 6903. University of Toronto: Institute for the Quantitative Analysis of Social and Economic Policy, September 1968.
  40. The following critique is a synthesis of material in Rivlin, op. cit. (note 9, supra); Cohen, op. cit. (note 13, supra); Betts, op. cit. (note 2, supra); and Caro, F., “Issues in the Evaluation of Social Programs,” Review of Educational Research 41: 2 (April 1971), pp. 87–114.
  41. Robinson, W. S., “Ecological Correlations and the Behavior of Individuals,” American Sociological Review XV (June 1950), pp. 351–357. Subsequent discussion of this article is contained in articles by Goodman and by Duncan and Davis in the December 1953 edition of the same journal.
  42. I owe these last two points to Ackoff, Russell, “Toward an Idealized University,” Management Science 15: 4 (December 1968), pp. B121–B131.
  43. This inductive approach has been proposed implicitly in many recent books and articles, most recently in Betts, op. cit., in his discussion of pathways to innovation in education.
  44. Table 3 is a composite of at least two sources. The seven objectives listed are a modified version of those used by the North Carolina Community College System, and quoted in Cruze, A., “Long Range Planning for the North Carolina Community College System,” paper presented at the 38th National Meeting of ORSA, October 28–30, 1970. The seven decision types are modified versions of those presented in Spindler, A., “Decision Making: The Target of Systems Science in Social Programs,” paper presented at the 1971 Joint National Conference on Major Systems (ORSA–IEEE), Anaheim, California, October 29, 1971, 16 pp., mimeo.
  45. For example, education theoretically improves the individual's chances of being employed, which, in turn, improves aggregate national employment. This is not the case for all individuals, particularly certain racial-ethnic groups in North America.
  46. These criteria have been used in both transportation and income maintenance program evaluation. See: Halushynsky, George D., Evaluation of Transportation Plans: A Survey of Methodologies, IEEE/ORSA National Conference on Major Systems, October 25–29, 1971, Anaheim, California; Marmor, T., “On Comparing Income Maintenance Alternatives,” American Political Science Review 65 (1971), pp. 83–96.
  47. Some of these issues are discussed in UCLA Evaluation Comment 3: 2 (November 1971), p. 10.
  48. These criteria are currently being used for evaluation in the student evaluation project of the UCLA Center for the Study of Evaluation.

Copyright information

© Elsevier Scientific Publishing Company 1973

Authors and Affiliations

  • P. F. Gross, N.S.W. Health Commission, Sydney, Australia
