Part of the book series: Kluwer International Handbooks of Education ((SIHE,volume 9))

Abstract

This chapter presents the CIPP Evaluation Model, a comprehensive framework for guiding evaluations of programs, projects, personnel, products, institutions, and evaluation systems. This model was developed in the late 1960s to help improve and achieve accountability for U.S. school programs, especially those keyed to improving teaching and learning in urban, inner city school districts. Over the years, the model has been further developed and applied to educational programs both inside and outside the U.S. Also, the model has been adapted and employed in philanthropy, social programs, health professions, business, construction, and the military. It has been employed internally by schools, school districts, universities, charitable foundations, businesses, government agencies, and other organizations; by contracted external evaluators; and by individual teachers, educational administrators, and other professionals desiring to assess and improve their services.1 This chapter is designed to help educators around the world grasp the model’s main concepts, appreciate its wide-ranging applicability, and particularly consider how they can apply it in schools and systems of schools. The model’s underlying theme is that evaluation’s most important purpose is not to prove, but to improve.



Copyright information

© 2003 Kluwer Academic Publishers

About this chapter

Cite this chapter

Stufflebeam, D.L. (2003). The CIPP Model for Evaluation. In: Kellaghan, T., Stufflebeam, D.L. (eds) International Handbook of Educational Evaluation. Kluwer International Handbooks of Education, vol 9. Springer, Dordrecht. https://doi.org/10.1007/978-94-010-0309-4_4

  • DOI: https://doi.org/10.1007/978-94-010-0309-4_4

  • Publisher Name: Springer, Dordrecht

  • Print ISBN: 978-1-4020-0849-8

  • Online ISBN: 978-94-010-0309-4

  • eBook Packages: Springer Book Archive
