Abstract

Evaluators express preferences for certain methods over others. This chapter examines the debates and assumptions that underlie these preferences, including the long-running dispute over which methods constitute the “gold standard” for evaluation. Its central argument is that the true “gold standard” for evaluation methodology is appropriateness: the fit of methods to the evaluation questions and to the contextual realities of complex programs.

In God We Trust; All Others Must Have Credible Evidence

Donaldson et al. 2009, p. 2

Neither the quantitative hook set out for the big fish nor the qualitative net scaled for the little fish adequately captures life in most seas. We need a paradigm to help us become scuba divers.

Datta 1994, pp. 61–70

It has been said that economics is a box of tools. But we shall have to resist the temptation of the law of the hammer, according to which a boy, given a hammer, finds everything worth pounding, not only nails but also Ming vases. We shall have to look, in the well-known metaphor, where the [lost] key was dropped rather than where the light happens to be. We shall have to learn not only how to spell ‘banana’ but also when to stop. The professionals, whom a friend of mine calls ‘quantoids’ and who are enamored of their techniques, sometimes forget that if something is not worth doing, it is not worth doing well.

Streeten 2002, p. 110, quoted in Eade 2003, p. ix

Within any given approach, how do we choose the methods we use? And what factors influence how we apply them? These choices are not always clear or conscious. If we are really honest about it, many of us probably often choose methods because they are familiar and draw on skills that we feel confident about, or because we perceive that the funders require them… realistically, most ‘practitioners’ operate within constraints of time and resources that affect the choices we make; we also operate within political contexts that shape our choices.

Rowlands 2003, p. 3.

References

  • American Association for the Advancement of Science (AAAS). (1990). The nature of science. Retrieved December 12, 2011, from www.project2061.org

  • Anderson, P. (1999). Complexity theory and organization science. Organization Science, 10(3), 216–232.

  • Ayala, F. (1994). On the scientific method, its practice and pitfalls. History and Philosophy of Life Sciences, 16(1), 205–240.

  • Ball, S. J. (1995). Intellectuals or technicians? The urgent role of theory in educational studies. British Journal of Educational Studies, 43(3), 255–271.

  • Chatterji, M. (2007). Grades of evidence: Variability in quality of findings in effectiveness studies of complex field interventions. American Journal of Evaluation, 28(3), 239–255.

  • Chen, H. T., & Garbe, P. (2011). Assessing program outcomes from the bottom-up approach: An innovative perspective to outcome evaluation. New Directions for Evaluation, 130, 93–106.

  • Chelimsky, E. (2012). Valuing, evaluation methods, and the politicization of the evaluation process. In G. Julnes (Ed.), Promoting valuation in the public interest: Informing policies for judging value in evaluation. New Directions for Evaluation, 133(Spring), 77–83.

  • Cronbach, L. J. (1982). Designing evaluations of educational and social programs. San Francisco: Jossey-Bass.

  • Datta, L. E. (1994). Paradigm wars: A basis for peaceful coexistence and beyond. New Directions for Evaluation, 61(Spring), 61–70.

  • Desrosières, A. (1998). The politics of large numbers: A history of statistical reasoning. Cambridge, MA: Harvard University Press.

  • Donaldson, S. I., Christie, C. A., & Mark, M. M. (2009). What counts as credible evidence in applied research and evaluation practice? Los Angeles: Sage.

  • Dunn, W. N. (1998). Campbell’s experimenting society: Prospect and retrospect. In W. N. Dunn (Ed.) The experimenting society: Essays in honor of Donald T. Campbell (pp. 20–21). New Brunswick, NJ: Transaction Publishers.

  • Dupré, J. (2001). Human nature and the limits of science. Oxford: Clarendon Press.

  • Eade, D. (2003). Development methods and approaches: Critical reflections. A Development in Practice reader. London: Oxfam GB.

  • Handa, S., & Maluccio, J. A. (2010). Matching the gold standard: Comparing experimental and nonexperimental evaluation techniques for a geographically targeted program. Economic Development and Cultural Change, 58(3), 415–447.

  • Hughes, K., & Hutchings, C. (2011). Can we obtain the required rigor without randomization? Oxfam GB’s non-experimental Global Performance Framework. International Initiative for Impact Evaluation Working Paper 13. Retrieved October 10, 2011, from www.3ieimpact.org

  • House, E. R. (1984). Factional disputes in evaluation. American Journal of Evaluation, 5(1), 19–21.

  • Greene, J. C., Lipsey, M. W., & Schwandt, T. A. (2007). Method choice: Five discussant commentaries. New Directions for Evaluation, 113(Spring), 111–118.

  • Guba, E. G., & Lincoln, Y. S. (1989). Fourth generation evaluation. Newbury Park: Sage.

  • Lay, M., & Papadopoulos, I. (2007). An exploration of fourth generation evaluation in practice. Evaluation, 13(4), 495–504.

  • Lincoln, Y. S. (1991). The arts and sciences of program evaluation. Evaluation Practice, 12(1), 1–7.

  • Julnes, G. (2012a). Editor’s note. New Directions for Evaluation, 133(Spring), 1–2.

  • Julnes, G. (2012b). Managing valuation. New Directions for Evaluation, 133(Spring), 3–15.

  • Murphy, N., Ellis, G. F. R., & O’Connor, T. (Eds.). (2009). Downward causation and the neurobiology of free will. Berlin: Springer.

  • Newman, J., Rawlings, L., & Gertler, P. (1994). Using randomized control design in evaluating social sector programs in developing countries. The World Bank Research Observer, 9(2), 181–201.

  • Nowotny, H. (2005). The increase of complexity and its reduction: Emergent interfaces between the natural sciences, humanities and social sciences. Theory, Culture & Society, 22(5), 15–31.

  • Roberts, A. (2002). A principled complementarity of method: In defence of methodological eclecticism and the qualitative-quantitative debate. The Qualitative Report, 7(3). Retrieved on July 3, 2011 from http://www.nova.edu/ssss/QR/QR7-3/roberts.html

  • Rowlands, J. (2003). Beyond the comfort zone: Some issues, questions, and challenges in thinking about development approaches and methods. In D. Eade (Ed.), Development methods and approaches: Critical reflections. A Development in Practice reader (pp. 1–20). London: Oxfam GB.

  • Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton Mifflin Company.

  • Shadish, W. R., Cook, T. D. & Leviton, L. C. (1991). Donald T. Campbell: Methodologist of the experimenting society. In W. R. Shadish, T. D. Cook, & L. C. Leviton (Eds.), Foundations of program evaluation (pp. 73–119). London: Sage.

  • Smith, M. F. (1994). On past, present and future assessments of the field of evaluation. Evaluation Practice, 15(3).

  • Smith, N. L. (2010). Characterizing the evaluand in evaluating theory. American Journal of Evaluation, 31(3), 383–389.

  • Toulmin, S. (2001). Return to reason. Cambridge, MA: Harvard University Press.

  • Wrigley, T. (2004). ‘School effectiveness’: The problem of reductionism. British Educational Research Journal, 30(2), 227–244.

  • Warren, A. (2011) The myth of the plan. Retrieved December 08, 2011, from http://stayingfortea.org/2011/06/27/the-myth-of-the-plan/

  • Zhu, S. (1999). A method to obtain a randomized control group where it seems impossible. Evaluation Review, 23(4), 363–377.

Author information

Correspondence to Apollo M. Nkwake.

Copyright information

© 2013 Springer Science+Business Media New York

Cite this chapter

Nkwake, A. M. (2013). Evaluating Complex Development Programs. In: Working with Assumptions in International Development Program Evaluation. Springer, New York, NY. https://doi.org/10.1007/978-1-4614-4797-9_4