
Abstract

Examining the assumptions that hold a program theory together is a vital part of evaluating program outcomes. Scrutinizing these assumptions, whether implicit or explicit, facilitates understanding of program results, both intended and unintended. This chapter outlines evaluation approaches to testing program assumptions. The best time to begin integrating assumptions into an evaluation is at the conceptualization stage, when the evaluation questions are being formulated. Ideally, well-framed questions lead to methods, tools, and data that produce useful answers and solutions, and examining assumptions is a necessary element of that process.

Notes

  1. https://www.merriam-webster.com/dictionary/intention

Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Cite this chapter

Nkwake, A.M. (2020). Evaluating Assumptions. In: Working with Assumptions in International Development Program Evaluation. Springer, Cham. https://doi.org/10.1007/978-3-030-33004-0_11

  • DOI: https://doi.org/10.1007/978-3-030-33004-0_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-33003-3

  • Online ISBN: 978-3-030-33004-0

  • eBook Packages: Social Sciences, Social Sciences (R0)
