On the application of program evaluation designs: Sorting out their use and abuse

Abstract

Increasingly complex methodological options, as well as the growing sophistication of users, mean that formulating a research design prior to conducting an evaluation study is likely to be more demanding and time-consuming than it once was. In fact, one of the most difficult problems in the entire evaluation endeavor is the development of an appropriate design. But the issue is not only one of complexity; it is also one of the appropriateness of the designs to the questions at hand.

The concern of this article is with tightening the linkage between the questions asked and the answers given, ensuring that the design organizes and directs the evaluation effort to provide information germane to the needs of policymakers. By tightening this linkage, it is presumed that the findings from evaluation studies can gain increased legitimacy and use. The appropriate uses and abuses of seven program evaluation designs are analyzed, stressing the designs most appropriate to the types of informational questions policymakers ask.

Additional information

Ray C. Rist is director of operations in the General Government Division of the United States General Accounting Office. He was previously a professor at Cornell University and has authored or edited sixteen books and written nearly one hundred articles. He is chair of the Working Group on Policy Evaluation, whose members prepared the articles for this special symposium.

About this article

Cite this article

Rist, R.C. On the application of program evaluation designs: Sorting out their use and abuse. Knowledge in Society 2, 74–96 (1989). https://doi.org/10.1007/BF02687235

Keywords

  • Program Monitoring
  • Evaluation Strategy
  • Program Effect
  • Impact Evaluation
  • Evaluation Community