The European Journal of Development Research, Volume 31, Issue 2, pp 174–179

Towards Appropriate Impact Evaluation Methods

  • Valérie Pattyn
Commentary

The choice of evaluation methods is one of the questions that have most plagued evaluators (Szanyi et al. 2012). The question is especially pressing in development evaluation, where interventions tend to be very complex and multiple stakeholders hold competing interests (Holvoet et al. 2018). While one can discern an emerging consensus among evaluation scholars that (quasi-)experimental evidence cannot lay claim to a monopoly on the production of the best effectiveness evidence (Stern et al. 2012), this idea is definitely not yet commonly shared among all evaluators, let alone among commissioners of impact evaluation studies. The article by Olsen (2019) presents a strong, persuasive case for considering alternative impact evaluation methods that can help overcome the shortcomings of randomised controlled trials (RCTs). The question is, however, under which conditions one should opt for such alternative methods, or, to put it differently, under which conditions it can be ‘unwise’...

References

  1. Befani, B., and M. O’Donnell. 2016. Choosing appropriate evaluation methods tool. London: Bond. Retrieved from https://www.bond.org.uk/resources/evaluation-methods-tool. Accessed 20 Feb 2019.
  2. Befani, B. 2016. Pathways to change: Evaluating development interventions with Qualitative Comparative Analysis (QCA). Report of Expertgruppen för Biståndsanalys (EBA). Retrieved from http://eba.se/wp-content/uploads/2016/07/QCA_BarbaraBefani-201605.pdf. Accessed 20 Feb 2019.
  3. Chelimsky, E., and W.R. Shadish. 1997. Evaluation for the 21st century: A handbook. Thousand Oaks: Sage.
  4. Dahler-Larsen, P. 2012. The evaluation society. Stanford: Stanford University Press.
  5. Holvoet, N., D. Van Esbroeck, L. Inberg, L. Popelier, B. Peeters, and E. Verhofstadt. 2018. To evaluate or not: Evaluability study of 40 interventions of Belgian development cooperation. Evaluation and Program Planning 67: 189–199. https://doi.org/10.1016/j.evalprogplan.2017.12.005.
  6. OECD-DAC. 2002. Glossary of key terms in evaluation and results based management. Paris: OECD. Retrieved from http://www.oecd.org/dataoecd/29/21/2754804.pdf.
  7. Olsen, W. 2019. Bridging to action requires mixed methods, not only randomised control trials. European Journal of Development Research. https://doi.org/10.1057/s41287-019-00199-2.
  8. Pattyn, V., A. Molenveld, and B. Befani. 2019. Qualitative comparative analysis as an evaluation tool. American Journal of Evaluation 40 (1): 55–74. https://doi.org/10.1177/1098214017710502.
  9. Pattyn, V., and S. Verweij. 2014. Beleidsevaluaties tussen methode en praktijk: Naar een meer realistische evaluatie benadering [Policy evaluations between method and practice: Towards a more realistic evaluation approach]. Bestuur en Beleid. Tijdschrift voor Bestuurskunde en Bestuursrecht 8 (4): 260–267.
  10. Pawson, R., and N. Tilley. 1997. Realistic evaluation. London: Sage.
  11. Ragin, C. 1987. The comparative method. Moving beyond qualitative and quantitative strategies. London: University of California Press.
  12. Ragin, C. 2000. Fuzzy set social science. Chicago: University of Chicago Press.
  13. Rihoux, B., and C. Ragin. 2009. Configurational comparative methods. Qualitative comparative analysis (QCA) and related techniques. Thousand Oaks: Sage.
  14. Sager, F., and C. Andereggen. 2012. Dealing with complex causality in realist synthesis: The promise of qualitative comparative analysis. American Journal of Evaluation 33 (1): 60–78. https://doi.org/10.1177/1098214011411574.
  15. Stern, E., N. Stame, J. Mayne, K. Forss, R. Davies, and B. Befani. 2012. Broadening the range of designs and methods for impact evaluations. London: Department for International Development. Retrieved from http://www.dfid.gov.uk/Documents/publications1/design-method-impact-eval.pdf. Accessed 20 Feb 2019.
  16. Szanyi, M., T. Azzam, and M. Galen. 2012. Research on evaluation: A needs assessment. Canadian Journal of Program Evaluation 27 (1): 39–64.
  17. van der Knaap, P. 2004. Theory-based evaluation and learning: Possibilities and challenges. Evaluation 10 (1): 16–34. https://doi.org/10.1177/1356389004042328.
  18. Vedung, E. 1997. Public policy and program evaluation. Piscataway: Transaction.
  19. Weiss, C.H. 1977. Research for policy’s sake: The enlightenment function of social research. Policy Analysis 3: 531–545. https://doi.org/10.2307/42783234.
  20. Weiss, C.H. 1993. Where politics and evaluation research meet. American Journal of Evaluation 14 (1): 93–106. https://doi.org/10.1177/109821409301400119.
  21. Wildavsky, A. 1987. Speaking truth to power: Art and craft of policy analysis. London: Routledge. Retrieved from https://www.taylorfrancis.com/books/9781351488471.

Copyright information

© European Association of Development Research and Training Institutes (EADI) 2019

Authors and Affiliations

  1. Institute of Public Administration, Leiden University, The Hague, The Netherlands
