
A Procedure for Assessing Intervention Fidelity in Experiments Testing Educational and Behavioral Interventions

The Journal of Behavioral Health Services & Research

Abstract

An intervention's effectiveness is judged by whether it produces positive outcomes for participants, with the randomized experiment being the gold standard for determining intervention effects. However, the intervention-as-implemented in an experiment frequently differs from the intervention-as-designed, making it unclear whether unfavorable results are due to an ineffective intervention model or the failure to implement the model fully. It is therefore vital to accurately and systematically assess intervention fidelity and, where possible, incorporate fidelity data in the analysis of outcomes. This paper elaborates a five-step procedure for systematically assessing intervention fidelity in the context of randomized controlled trials (RCTs), describes the advantages of assessing fidelity with this approach, and uses examples to illustrate how this procedure can be applied.
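
One element highlighted above, incorporating fidelity data into the analysis of outcomes, can be sketched concretely. The snippet below is a minimal, hypothetical illustration rather than the authors' own procedure: it assumes component-level adherence scores (comp1 to comp3) averaged into a unit-level fidelity index, contrasts mean fidelity between study arms, and enters the index into an outcome regression. All variable names and the simulated data are invented for illustration.

```python
# Minimal, hypothetical sketch (not the authors' procedure): build a
# unit-level fidelity index from component scores, contrast fidelity
# across study arms, and adjust an outcome model for fidelity.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 40  # illustrative number of classrooms

# Simulated data: treatment assignment plus adherence scores (0-1)
# on three assumed core intervention components.
df = pd.DataFrame({
    "treat": np.repeat([1, 0], n // 2),
    "comp1": rng.uniform(0.2, 1.0, n),
    "comp2": rng.uniform(0.2, 1.0, n),
    "comp3": rng.uniform(0.2, 1.0, n),
})

# Unweighted mean of component scores as a simple fidelity index.
df["fidelity"] = df[["comp1", "comp2", "comp3"]].mean(axis=1)

# Simulated outcome that improves with faithfully delivered treatment.
df["outcome"] = 0.5 * df["treat"] * df["fidelity"] + rng.normal(0, 0.2, n)

# Difference in mean fidelity between arms: a crude index of how much
# stronger the intervention-as-implemented was in the treatment group.
contrast = (df.loc[df.treat == 1, "fidelity"].mean()
            - df.loc[df.treat == 0, "fidelity"].mean())
print(f"Treatment-control fidelity contrast: {contrast:.2f}")

# Fidelity-adjusted outcome model; the treat:fidelity interaction asks
# whether effects grow with implementation quality (descriptive only,
# since fidelity itself is not randomized).
model = smf.ols("outcome ~ treat * fidelity", data=df).fit()
print(model.summary())
```

In practice, the index would be constructed from the intervention's core components, with measurement benchmarks derived from the intervention model; the unweighted average here is only a placeholder for that work.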



Notes

  1. It is also possible that an intervention’s developer would specify, as part of its change model, one or more moderators: constructs thought to influence the nature (strength) of the causal relationship between two or more other constructs. However, we have omitted discussion of moderators because they are exogenous to the intervention as designed.

  2. Note that most models also involve assumptions (e.g., student characteristics) that may not be included in the graphic representation but that should be elaborated narratively.

  3. While this example illustrates the problem in principle, it is unlikely to have inflated fidelity in this particular study given that the proportion of non-core items was relatively small and significant results were obtained.
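
As a concrete illustration of the inflation mechanism described in note 3 (the numbers below are invented, not the study's data), averaging a few easily satisfied non-core items into an overall index raises the apparent fidelity score:

```python
# Hypothetical arithmetic for footnote 3: mixing easy, non-core items
# into a fidelity index inflates it relative to a core-only index.
core = [0.5] * 8       # eight core items, moderately implemented
non_core = [1.0] * 2   # two non-core items, trivially easy to satisfy

core_only_index = sum(core) / len(core)                            # 0.50
mixed_index = sum(core + non_core) / (len(core) + len(non_core))   # 0.60

# With only 2 of 10 items non-core, inflation is modest (0.50 -> 0.60);
# a larger share of non-core items would distort the index further.
print(core_only_index, mixed_index)
```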


Acknowledgments

The authors received support from the Institute of Education Sciences as follows: E.C. Sommer, #R305B04110; M.C. Nelson, #R305B04110; D.S. Cordray, #R305U060002; C.S. Hulleman, #R305B050029 and #144-NL14; C.L. Darrow, #R305B080025. However, the contents do not necessarily represent the positions or policies of the Institute of Education Sciences or the U.S. Department of Education.

Conflict of Interest

Each of the authors affirms that there are no actual or perceived conflicts of interest, financial or nonfinancial, that would bias any part of this manuscript.

Author information

Corresponding author

Correspondence to Michael C. Nelson, BS.

Additional information

An earlier version of this paper was presented at the Society for Research on Educational Effectiveness 2010 Conference.


Cite this article

Nelson, M.C., Cordray, D.S., Hulleman, C.S. et al. A Procedure for Assessing Intervention Fidelity in Experiments Testing Educational and Behavioral Interventions. J Behav Health Serv Res 39, 374–396 (2012). https://doi.org/10.1007/s11414-012-9295-x
