A complex adaptive system approach to evaluation: application to a pay-for-performance program in the USA

Educational Assessment, Evaluation and Accountability

Abstract

Evaluators frequently confront situations in which local programs struggle to meet the expectations and requirements specified by the external program funder. How can evaluators meaningfully evaluate programs (for both the funder and grantee) in situations in which the external program logic clashes with local complexities? This paper discusses complex adaptive system (CAS) evaluations as one method that addresses this question. To exemplify a CAS evaluation approach, we use the case of a pay-for-performance program, the Teacher Incentive Fund (TIF) program, a United States federal program implemented in numerous jurisdictions. Evaluation findings generated through a complex adaptive system approach have the potential to inform policy as well as assist the local program with ongoing improvements.

Notes

  1. In other disciplines, causal diagrams refer to a priori specified diagrams that inform quantitative analyses. The term “causal diagram” as used in this paper refers to an evaluation tool that changes as the program evolves. Despite these definitional differences, both interpretations are means of reflecting on complexity (see the first sketch after these notes).

  2. Value-added scores are a way to link student test scores to teacher or school effectiveness. The term refers to student growth, or academic gain, attributed to a teacher or school, as opposed to unadjusted mean levels of achievement or the percentage of students scoring proficient (see the second sketch after these notes).
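Note 1 can be made concrete with a small sketch. The following Python fragment is a minimal illustration, not the paper’s actual diagrams; the node names, the networkx representation, and the revision step are all hypothetical. It treats a causal diagram as a mutable directed graph that evaluators revise as the program evolves, in contrast to an a priori-fixed diagram:

import networkx as nx

# Initial program logic, as an external funder might specify it up front
diagram = nx.DiGraph()
diagram.add_edges_from([
    ("bonus pay", "teacher motivation"),
    ("teacher motivation", "instructional change"),
    ("instructional change", "student achievement"),
])

# Mid-evaluation revision: suppose fieldwork reveals a locally emergent
# influence, so the diagram is updated rather than held fixed
diagram.add_edge("evaluation workload", "teacher motivation")

# Everything the revised diagram claims feeds into achievement
print(nx.ancestors(diagram, "student achievement"))

For note 2, a similarly minimal sketch of the value-added idea on synthetic data: regress current-year scores on prior-year scores, then average each teacher’s residuals. This only illustrates “growth attributed to a teacher” versus unadjusted mean levels; it is not one of the estimators used in actual TIF evaluations:

import numpy as np

rng = np.random.default_rng(0)
n_teachers, n_students = 20, 30

true_effect = rng.normal(0, 3, n_teachers)            # simulated teacher effects
teacher_id = np.repeat(np.arange(n_teachers), n_students)

prior = rng.normal(50, 10, n_teachers * n_students)   # last year's scores
current = (5 + 0.9 * prior + true_effect[teacher_id]
           + rng.normal(0, 5, teacher_id.size))       # this year's scores

# OLS of current on prior: expected score given where a student started
X = np.column_stack([np.ones_like(prior), prior])
coef, *_ = np.linalg.lstsq(X, current, rcond=None)
residual = current - X @ coef                         # growth beyond expectation

# A teacher's value-added estimate is the mean residual of their students
value_added = np.array([residual[teacher_id == t].mean()
                        for t in range(n_teachers)])

# The unadjusted mean level the note contrasts it with
raw_mean = np.array([current[teacher_id == t].mean()
                     for t in range(n_teachers)])

print(np.corrcoef(value_added, true_effect)[0, 1])    # tracks simulated effects
print(np.corrcoef(raw_mean, true_effect)[0, 1])       # noisier proxy

On this synthetic setup, the residual-based estimates should track the simulated effects more closely than the raw means do, which is the contrast the note draws.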

Author information

Correspondence to Rick Mintrop.

Appendix

Table 2 Main codes for each complex

Cite this article

Mintrop, R., Pryor, L. & Ordenes, M. A complex adaptive system approach to evaluation: application to a pay-for-performance program in the USA. Educ Asse Eval Acc 30, 285–312 (2018). https://doi.org/10.1007/s11092-018-9276-6
