Abstract
An intervention's effectiveness is judged by whether it produces positive outcomes for participants, with the randomized experiment being the gold standard for determining intervention effects. However, the intervention-as-implemented in an experiment frequently differs from the intervention-as-designed, making it unclear whether unfavorable results are due to an ineffective intervention model or the failure to implement the model fully. It is therefore vital to accurately and systematically assess intervention fidelity and, where possible, incorporate fidelity data in the analysis of outcomes. This paper elaborates a five-step procedure for systematically assessing intervention fidelity in the context of randomized controlled trials (RCTs), describes the advantages of assessing fidelity with this approach, and uses examples to illustrate how this procedure can be applied.
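The fidelity-assessment logic summarized in the abstract can be illustrated with a toy calculation. This sketch is our own illustration, not the paper's operational procedure: the component scores, the simple mean-proportion fidelity index, and the treatment-control difference used as a rough "achieved relative strength" value are all illustrative assumptions.

```python
# Illustrative sketch: score fidelity as the mean proportion of core
# intervention components observed in each classroom, then compare
# treatment and control groups. All data here are invented.

def fidelity_index(component_scores):
    """Mean of per-component implementation scores (each in 0-1)."""
    return sum(component_scores) / len(component_scores)

# Hypothetical observations of three core components per classroom
treatment_classrooms = [[1, 1, 0], [1, 1, 1], [1, 0, 1]]
control_classrooms = [[0, 0, 0], [1, 0, 0], [0, 0, 0]]

treatment_fidelity = sum(fidelity_index(c) for c in treatment_classrooms) / len(treatment_classrooms)
control_fidelity = sum(fidelity_index(c) for c in control_classrooms) / len(control_classrooms)

# The treatment-control gap indicates how much more of the intervention
# model the treatment group actually received than the control group.
achieved_relative_strength = treatment_fidelity - control_fidelity

print(round(treatment_fidelity, 2))          # 0.78
print(round(control_fidelity, 2))            # 0.11
print(round(achieved_relative_strength, 2))  # 0.67
```

A difference near zero would suggest that, whatever the outcome results, the experiment did not contrast meaningfully different conditions, which is precisely the interpretive problem fidelity assessment is meant to address.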
Notes
It is also possible that an intervention’s developer would specify, as part of its change model, one or more moderators: constructs thought to influence the nature (strength) of the causal relationship between two or more other constructs. However, we have omitted discussion of moderators because they are exogenous to the intervention as designed.
Note that most models also involve assumptions (e.g., student characteristics) that may not be included in the graphic representation but that should be elaborated narratively.
While this example illustrates the problem in principle, it is unlikely to have inflated fidelity in this particular study given that the proportion of non-core items was relatively small and significant results were obtained.
Acknowledgments
The authors received support from the Institute of Education Sciences as follows: E.C. Sommer, #R305B04110; M.C. Nelson, #R305B04110; D.S. Cordray, #R305U060002; C.S. Hulleman, #R305B050029 and #144-NL14; C.L. Darrow, #R305B080025. However, the contents do not necessarily represent the positions or policies of the Institute of Education Sciences or the U.S. Department of Education.
Conflict of Interest
Each of the authors affirms that there are no actual or perceived conflicts of interest, financial or nonfinancial, that would bias any part of this manuscript.
Additional information
An earlier version of this paper was presented at the Society for Research on Educational Effectiveness 2010 Conference.
Cite this article
Nelson, M.C., Cordray, D.S., Hulleman, C.S. et al. A Procedure for Assessing Intervention Fidelity in Experiments Testing Educational and Behavioral Interventions. J Behav Health Serv Res 39, 374–396 (2012). https://doi.org/10.1007/s11414-012-9295-x