Towards Appropriate Impact Evaluation Methods
The choice of evaluation methods is one of the questions that has most plagued evaluators (Szanyi et al. 2012). The question is especially pressing in development evaluation, where interventions tend to be highly complex and multiple stakeholders hold competing interests (Holvoet et al. 2018). While one can discern an emerging consensus among evaluation scholars that (quasi-)experimental evidence cannot lay claim to a monopoly on producing the best effectiveness evidence (Stern et al. 2012), this idea is not yet commonly shared among all evaluators, let alone among commissioners of impact evaluation studies. The article by Olsen (2019) presents a persuasive case for considering alternative impact evaluation methods that can help overcome the shortcomings of randomised controlled trials (RCTs). The question is, however, under which conditions one should opt for such alternative methods, or, to put it differently, under which conditions it can be ‘unwise’...
- Befani, B., and M. O’Donnell. 2016. Choosing appropriate evaluation methods tool. London: Bond. Retrieved from https://www.bond.org.uk/resources/evaluation-methods-tool. Accessed 20 Feb 2019.
- Befani, B. 2016. Pathways to change: Evaluating development interventions with Qualitative Comparative Analysis (QCA). Report of the Expertgruppen för Biståndsanalys (EBA). Retrieved from http://eba.se/wp-content/uploads/2016/07/QCA_BarbaraBefani-201605.pdf. Accessed 20 Feb 2019.
- Dahler-Larsen, P. 2012. The evaluation society. Stanford: Stanford University Press.
- Holvoet, N., D. Van Esbroeck, L. Inberg, L. Popelier, B. Peeters, and E. Verhofstadt. 2018. To evaluate or not: Evaluability study of 40 interventions of Belgian development cooperation. Evaluation and Program Planning 67: 189–199. https://doi.org/10.1016/j.evalprogplan.2017.12.005.
- OECD-DAC. 2002. Glossary of key terms in evaluation and results based management. Paris: OECD. Retrieved from http://www.oecd.org/dataoecd/29/21/2754804.pdf.
- Pattyn, V., and S. Verweij. 2014. Beleidsevaluaties tussen methode en praktijk: Naar een meer realistische evaluatie benadering [Policy evaluations between method and practice: Towards a more realistic evaluation approach]. Bestuur en Beleid. Tijdschrift voor Bestuurskunde en Bestuursrecht 8 (4): 260–267.
- Pawson, R., and N. Tilley. 1997. Realistic evaluation. London: Sage.
- Ragin, C. 1987. The comparative method: Moving beyond qualitative and quantitative strategies. London: University of California Press.
- Ragin, C. 2000. Fuzzy-set social science. Chicago: University of Chicago Press.
- Stern, E., N. Stame, J. Mayne, K. Forss, R. Davies, and B. Befani. 2012. Broadening the range of designs and methods for impact evaluations. London: Department for International Development. Retrieved from http://www.dfid.gov.uk/Documents/publications1/design-method-impact-eval.pdf. Accessed 20 Feb 2019.
- Szanyi, M., T. Azzam, and M. Galen. 2012. Research on evaluation: A needs assessment. Canadian Journal of Program Evaluation 27 (1): 39–64.
- Vedung, E. 1997. Public policy and program evaluation. Piscataway: Transaction.
- Wildavsky, A. 1987. Speaking truth to power: Art and craft of policy analysis. London: Routledge. Retrieved from https://www.taylorfrancis.com/books/9781351488471.