Evaluations of Educational Practice, Programs, Projects, Products, and Policies

  • Jonathan Michael Spector
Living reference work entry

Abstract

There are well-established evaluation methods that can be applied to programs, projects, products, practice, and policies in many domains. However, evaluations of educational efforts and of technologies to support learning, instruction, and performance have received less support than evaluations in other domains such as health care or marketing. Education is a complex enterprise, which makes evaluating efforts to improve it a challenge. Conducting evaluations and constructing a body of knowledge about what works (and what does not), when, and why is essential for the progressive development and ongoing improvement of learning, instruction, and performance. This contribution describes what is known in general about a variety of evaluation approaches, and it summarizes findings pertinent to the evaluation of interventions and innovations in education, especially those involving technology. Both formative and summative evaluations are addressed, with particular emphasis on formative evaluations, as they are generally more complex. The use of a logic model is described, and fidelity-of-implementation and impact studies are illustrated. The relationship between evaluation studies and research is also discussed.

Keywords

Fidelity of implementation · Formative evaluation · Impact study · Logic model · Summative evaluation · Theory of change

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  1. Department of Learning Technologies, College of Information, University of North Texas, Denton, TX, USA