Cronbach’s Designing Evaluations: A Synopsis

Part of the Evaluation in Education and Human Services book series (EEHS, volume 8)


Over the past 40 years, Lee J. Cronbach has concerned himself with many aspects of the evaluation of social science programs. Much of his thinking in these areas culminated in a book entitled Designing Evaluations of Educational and Social Programs (Cronbach, 1982), a lengthy and erudite work whose preliminary version was completed in April 1978. In its 374 pages, the book introduces new approaches to the design of educational evaluations while discussing the pros and cons of design concepts already in use.



References

  1. Campbell, D.T. 1957. Factors relevant to the validity of experiments in social settings. Psychological Bulletin, 54, 297–312.
  2. Campbell, D.T. 1973. The social scientist as methodological servant of the experimenting society. Policy Studies Journal, 2, 72–75.
  3. Campbell, D.T. 1975. Assessing the impact of planned social change. In G.M. Lyons (ed.), Social research and public policies. Hanover, N.H.: Public Affairs Center, Dartmouth College.
  4. Cronbach, L.J. 1963. Course improvement through evaluation. Teachers College Record, 64, 672–683.
  5. Cronbach, L.J. 1982. Designing evaluations of educational and social programs. San Francisco: Jossey-Bass.
  6. Cronbach, L.J., et al. 1976. Research on classrooms and schools: Formulation of questions, design, and analysis. Occasional paper, Stanford Evaluation Consortium, Stanford University, Calif.
  7. Cronbach, L.J., and Associates. 1980. Toward reform of program evaluation. San Francisco: Jossey-Bass.
  8. House, E.R. 1976. Justice in evaluation. In G.V. Glass (ed.), Evaluation studies review annual, Vol. 1. Beverly Hills, Calif.: Sage, pp. 75–99.
  9. MacDonald, B. 1976. Evaluation and the control of education. In D. Tawney (ed.), Curriculum evaluation today: Trends and implications. London: Macmillan Education, pp. 125–136.
  10. Riecken, H.W., and Boruch, R.F. (eds.) 1974. Social experimentation. New York: Academic Press.
  11. Rossi, P.H., and Williams, W. 1972. Evaluating social programs: Theory, practice and politics. New York: Seminar Press.
  12. Scriven, M. 1967. The methodology of evaluation. In R.E. Stake, et al., Perspectives on curriculum evaluation. AERA Monograph Series on Curriculum Evaluation, No. 1. Chicago: Rand McNally, pp. 39–83.
  13. Scriven, M. 1974. Pros and cons about goal-free evaluation. Evaluation Comment, 3, 1–4.
  14. Stake, R.E. 1967. The countenance of educational evaluation. Teachers College Record, 68 (April), 523–540.
  15. Suchman, E.A. 1970. Action for what? A critique of evaluative research. In R. O’Toole (ed.), The organization, management, and tactics of social research. Cambridge, Mass.: Schenkman.
  16. Suchman, E.A. 1967. Evaluative research. New York: Russell Sage.
  17. Weiss, C.H. (ed.) 1972. Evaluating action programs: Readings in social action and education. Boston: Allyn and Bacon.
  18. Wilensky, H. 1967. Organizational intelligence: Knowledge and policy in government and industry. New York: Basic Books.

Copyright information

© Kluwer-Nijhoff Publishing 1985

Authors and Affiliations

Western Michigan University, Kalamazoo, USA
