The European Journal of Development Research, Volume 31, Issue 2, pp 163–168

A Debate that Fatigues…: To Randomise or Not to Randomise; What’s the Real Question?

  • Ralitza Dimova
Commentary

Abstract

This commentary reviews key arguments on both sides of a heated methodological debate that has dominated the development economics literature for more than a decade. It argues that the debate is increasingly fatiguing and tends to overemphasise empirical-methodological peculiarities at the expense of conceptual issues, the resolution of which is crucial for successful policy making.

Keywords

Randomised controlled trials · Instrumental variables · Conceptual debate

Résumé

This commentary addresses a lively methodological debate that has dominated economic research on developing countries for more than a decade. The author argues that the debate is increasingly tiresome in that its emphasis falls on methodological details at the expense of conceptual questions whose resolution would have a significant bearing on the design of successful policies.

Copyright information

© European Association of Development Research and Training Institutes (EADI) 2019

Authors and Affiliations

  1. University of Manchester, Manchester, UK