
Journal of Behavioral Education, Volume 23, Issue 2, pp 287–311

A Comparison of Rubrics for Identifying Empirically Supported Practices with Single-Case Research

  • Daniel M. Maggin
  • Amy M. Briesch
  • Sandra M. Chafouleas
  • Tyler D. Ferguson
  • Courtney Clark
Original Paper

Abstract

The use of single-case research methods for validating academic and behavioral interventions has gained considerable attention in recent years. Consequently, there has been a proliferation of methods for evaluating whether, and to what extent, primary research reports provide evidence of intervention effectiveness. Despite the recent interest in harnessing single-case research to identify empirically supported strategies, examination of these tools has revealed a lack of consistency in the methodological criteria sampled and the scoring procedures used to evaluate primary research reports. The present study examined the extent to which various evidence rubrics addressed specific methodological features of single-case research and classified studies into similar evidence categories. Results indicated that the methodological criteria included within rubrics tended to vary, particularly criteria related to determining the generality of the intervention under study. Moreover, substantial discordance was observed in the evidence classifications assigned to reviewed studies. These findings are discussed in the context of the still-developing nature of single-case evidence reviews. Recommendations for both research and practice are provided.
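The abstract reports substantial discordance in the evidence classifications that different rubrics assign to the same studies. As a purely illustrative aside, the sketch below shows one common way such cross-rubric agreement can be quantified: chance-corrected agreement (Cohen's kappa) on categorical evidence ratings. The study labels, categories, and data are invented for demonstration and are not taken from the paper, which may rely on different agreement indices.

```python
# Hypothetical illustration: quantifying agreement between two evidence rubrics
# that classify the same set of single-case studies. All data below are invented
# for demonstration; the paper's own studies, categories, and indices may differ.

from collections import Counter

# Evidence categories assigned to ten (hypothetical) studies by two rubrics.
rubric_a = ["strong", "strong", "moderate", "weak", "moderate",
            "strong", "weak", "moderate", "strong", "weak"]
rubric_b = ["strong", "moderate", "moderate", "weak", "weak",
            "strong", "weak", "strong", "strong", "weak"]

def cohens_kappa(ratings_1, ratings_2):
    """Chance-corrected agreement between two raters on categorical labels."""
    n = len(ratings_1)
    observed = sum(a == b for a, b in zip(ratings_1, ratings_2)) / n
    freq_1 = Counter(ratings_1)
    freq_2 = Counter(ratings_2)
    # Expected agreement if the two rubrics classified studies independently.
    categories = set(ratings_1) | set(ratings_2)
    expected = sum(freq_1[c] * freq_2[c] for c in categories) / n ** 2
    return (observed - expected) / (1 - expected)

agreement = sum(a == b for a, b in zip(rubric_a, rubric_b)) / len(rubric_a)
print(f"Percent agreement: {agreement:.2f}")          # 0.70 for the toy data
print(f"Cohen's kappa:     {cohens_kappa(rubric_a, rubric_b):.2f}")  # ~0.55
```

High raw agreement can coexist with a modest kappa when a few categories dominate, which is one reason chance-corrected indices are often preferred when comparing classification tools.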

Keywords

Evidence-based practice · Methodological quality · Special education · Single-case research · Systematic review


Copyright information

© Springer Science+Business Media New York 2013

Authors and Affiliations

  • Daniel M. Maggin (1)
  • Amy M. Briesch (2)
  • Sandra M. Chafouleas (3)
  • Tyler D. Ferguson (2)
  • Courtney Clark (1)

  1. University of Illinois at Chicago, Chicago, USA
  2. Northeastern University, Boston, USA
  3. University of Connecticut, Storrs, USA
