Measuring Progress Towards Sustainability: A View of the Main Approaches to Evaluation

  • Keith Child
Chapter
Part of the Natural Resource Management in Transition book series (NRMT, volume 2)

Abstract

In this chapter, we examine the broad contours of the science of sustainability evaluation by surveying some of the dominant approaches in use today. By approaches, we mean an "integrated set of options used to do some or all of the evaluation". While there are many elaborate ways to classify evaluations (e.g., by purpose or methodology), here we restrict our review to those most commonly used to measure sustainability, subdividing approaches as primarily experimental, quasi-experimental, or observational. This is not intended as a definitive guide to all evaluation approaches, but rather a concise attempt to navigate the tricky world of sustainability evaluation through a pragmatic and selective review of those most frequently employed within an agricultural development context.

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. Impact Works Inc., Chelsea, Canada