Program and Project Evaluation



Educational technology research and development nearly always involves an intervention of some kind aimed at solving a problem or improving a situation pertaining to learning and instruction. Those involved—the stakeholders—naturally want to know whether the problem was solved and, if so, the extent to which the situation was improved. Attributing outcomes to the intervention is not as easy as it may appear, because many factors beyond the technology itself are typically at play. This chapter describes a holistic approach to educational technology project and program evaluation. The emphasis is on evaluating the entire process, from needs assessment through design, development, deployment, and support, with particular attention to evaluating every aspect of the process so as to increase the likelihood of successful technology integration. The use of a logic model to organize both evaluation and research is also described.


Keywords: Confirmatory evaluation · Formative evaluation · Logic model · Needs assessment · Program evaluation · Summative evaluation · Theory of change



Copyright information

© Springer Science+Business Media New York 2014

Authors and Affiliations

  1. Department of Learning Technologies, College of Information, The University of North Texas, Denton, USA
