
Evaluating Complex Programs

  • Apollo M. Nkwake

Abstract

Evaluators often express preferences for certain methods over others. This chapter highlights the debates and the assumptions that underlie these preferences, and argues that the real gold standard for evaluation methodology is appropriateness to the evaluation questions and to the contextual realities of complex programs.

Keywords

Evaluating complex programs · Mixed methods · Qualitative methods · Quantitative methods · Simplification · Paradigm fights · Evaluation questions · Methodological triangulation · Evaluation assumptions

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  • Apollo M. Nkwake
  1. Questions LLC, Maryland, USA
