
Contemporary School Psychology, Volume 18, Issue 1, pp 1–12

Applied Empiricism: Ensuring the Validity of Causal Response to Intervention Decisions

  • Stephen P. Kilgus
  • Melissa A. Collier-Meek
  • Austin H. Johnson
  • Rose Jaffery

Abstract

School personnel make a variety of decisions within multi-tiered problem-solving frameworks, including the decisions to assign a student to group-based support, to design an individualized support plan, or to classify a student as eligible for special education. Each decision is founded upon a judgment regarding whether the student has responded to intervention. These and other conclusions are inherently causal, and thus require that educators carefully consider the internal, construct, and conclusion validity of each decision to ensure its defensibility. Researchers have identified multiple variables that are likely to moderate these validities, including the integrity with which interventions are implemented, the psychometric adequacy of progress-monitoring tools, the extent to which interventions and supports are matched to a student's needs, and the approach taken to single-case research design. We therefore review each of these variables in the interest of assisting practitioners in designing acceptable and valid multi-tiered frameworks of prevention and service delivery.
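To make the role of single-case data analysis concrete, the minimal Python sketch below quantifies one hypothetical student's "response to intervention" using a simple non-overlap index (percentage of non-overlapping data) and a mean phase change. This is an illustrative assumption on our part, not a method described in the article; the function name, data values, and improvement direction are all hypothetical.

```python
# Illustrative sketch only: summarizing single-case progress-monitoring data.
# All values below are hypothetical weekly scores, not data from the article.

def percent_nonoverlapping(baseline, intervention, higher_is_better=True):
    """Percentage of intervention-phase points that exceed the most extreme
    baseline point (a common non-overlap index for single-case data)."""
    if not baseline or not intervention:
        raise ValueError("Both phases need at least one data point.")
    if higher_is_better:
        threshold = max(baseline)
        exceeds = [x for x in intervention if x > threshold]
    else:
        threshold = min(baseline)
        exceeds = [x for x in intervention if x < threshold]
    return 100.0 * len(exceeds) / len(intervention)

# Hypothetical baseline and intervention phases from a progress-monitoring tool
baseline = [12, 14, 13, 11, 14]
intervention = [15, 17, 16, 18, 19]

pnd = percent_nonoverlapping(baseline, intervention)
mean_change = sum(intervention) / len(intervention) - sum(baseline) / len(baseline)
print(f"PND = {pnd:.0f}%, mean change = {mean_change:+.1f} points")
```

A summary statistic such as this supplements, rather than replaces, visual analysis and an adequately controlled single-case design; on its own it says nothing about treatment integrity or measurement quality, the other moderators discussed in the article.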

Keywords

Response to intervention · Research methods · Validity


Copyright information

© California Association of School Psychologists 2014

Authors and Affiliations

  • Stephen P. Kilgus (1)
  • Melissa A. Collier-Meek (2)
  • Austin H. Johnson (2)
  • Rose Jaffery (3)

  1. East Carolina University, Greenville, USA
  2. University of Connecticut, Mansfield, USA
  3. EASTCONN, Hampton, USA
