Evaluating Assumptions

  • Apollo M. Nkwake
Chapter

Abstract

Examining the assumptions that hold a program theory together is a vital part of evaluating program outcomes. Probing implicit and explicit program assumptions helps explain program results, both intended and unintended. This chapter outlines evaluation approaches for testing program assumptions. The best time to begin integrating assumptions into an evaluation is at the conceptualization stage, when evaluation questions are being formulated. Ideally, well-framed questions lead to methods, tools, and data that produce highly useful answers and solutions, and examining assumptions is a necessary element of that process.
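To make one of these approaches concrete, the sketch below shows how a simple path analysis (one of the assumption-testing techniques the chapter covers) might probe an assumed causal chain. It is a minimal, hypothetical illustration, not material from the chapter: the program theory (training builds skills, skills improve employment), the variable names, and the simulated data are all invented, and the two-regression decomposition is a standard textbook formulation rather than the author's own procedure.

    # A minimal path-analysis sketch for probing a program-theory assumption.
    # Hypothetical throughout: the theory (training -> skills -> employment),
    # the variable names, and the simulated data are illustrative stand-ins.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(seed=42)
    n = 500

    # Simulated program data embodying the assumed causal chain.
    training = rng.binomial(1, 0.5, size=n).astype(float)
    skills = 0.8 * training + rng.normal(0.0, 1.0, size=n)
    employment = 0.6 * skills + rng.normal(0.0, 1.0, size=n)

    # Path a: does program exposure move the intermediate outcome?
    model_a = sm.OLS(skills, sm.add_constant(training)).fit()

    # Path b: does the intermediate outcome move the final outcome once
    # exposure is held constant (the direct path is also estimated here)?
    exog = sm.add_constant(np.column_stack([training, skills]))
    model_b = sm.OLS(employment, exog).fit()

    a = model_a.params[1]  # training -> skills
    b = model_b.params[2]  # skills -> employment, controlling for training
    print(f"path a estimate:         {a:.2f}")
    print(f"path b estimate:         {b:.2f}")
    print(f"implied indirect effect: {a * b:.2f}")

If an estimated path is weak or signed against expectation, it is the theorized link between program activities and outcomes, rather than implementation alone, that becomes the focus of inquiry.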

Keywords

  • Evaluating assumptions
  • Question framing
  • Evaluating program theory
  • Path analysis
  • Pattern matching
  • Assumptions Based Comprehensive Development Evaluation Framework
  • Evaluating the counter-theory
  • Integrative process outcome approach


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Apollo M. Nkwake
  1. Questions LLC, Maryland, USA
