
Neuropsychology Review, Volume 16, Issue 4, pp 161–169

Evaluating Single-Subject Treatment Research: Lessons Learned from the Aphasia Literature

  • Pélagie M. Beeson
  • Randall R. Robey
Original Paper

Abstract

The mandate for evidence-based practice has prompted careful consideration of the weight of the scientific evidence regarding the therapeutic value of various clinical treatments. In the field of aphasia, a large number of single-subject research studies have been conducted, providing clinical outcome data that are potentially useful for clinicians and researchers; however, it has been difficult to discern the relative potency of these treatments in a standardized manner. In this paper we describe an approach to quantify treatment outcomes for single-subject research studies using effect sizes. These values provide a means to compare treatment outcomes within and between individuals, as well as to compare the relative strength of various treatments. Effect sizes also can be aggregated in order to conduct meta-analyses of specific treatment approaches. Consideration is given to optimizing research designs and providing adequate data so that the value of treatment research is maximized.
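One widely used way to quantify a single-subject treatment outcome of the kind the abstract describes is the Busk and Serlin (1992) d-statistic: the change in mean performance from the baseline (A) phase to the treatment (B) phase, scaled by the standard deviation of the baseline probes. The sketch below illustrates that computation; the probe scores are hypothetical and the function name is ours, not from the paper.

```python
import statistics

def single_subject_effect_size(baseline, treatment):
    """Busk & Serlin (1992) d: mean gain from baseline to treatment,
    divided by the standard deviation of the baseline probes.
    Illustrative sketch only; assumes >= 2 baseline probes with
    nonzero variability."""
    sd_baseline = statistics.stdev(baseline)  # sample SD of A-phase probes
    mean_gain = statistics.mean(treatment) - statistics.mean(baseline)
    return mean_gain / sd_baseline

# Hypothetical probe scores (e.g., percent correct naming) across sessions
baseline = [10, 12, 11, 13, 10]   # A-phase (pre-treatment) probes
treatment = [35, 42, 48, 50, 52]  # B-phase (treatment) probes

d = single_subject_effect_size(baseline, treatment)
```

Because each participant's d is computed on a common scale, such values can be compared across individuals and treatments, or averaged (with appropriate weighting) in a meta-analysis, as the paper discusses.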

Keywords

Effect size · Treatment · Rehabilitation · Outcomes · Evidence-based practice · Stroke · Meta-analysis

Notes

Acknowledgement

The authors thank Susan Carnahan and Ronald Lazar for their helpful comments and suggestions regarding this paper. This work was supported by the Academy of Neurologic Communication Disorders and Sciences (ANCDS), the American Speech-Language-Hearing Association (ASHA), and Division 2: Neurophysiology and Neurogenic Communication Disorders of ASHA. The first author is also supported in part by R01DC007646 and R01DC008286 from the National Institute on Deafness and Other Communication Disorders.


Copyright information

© Springer Science+Business Media, LLC 2006

Authors and Affiliations

  1. Department of Speech, Language, and Hearing Sciences, and Department of Neurology, The University of Arizona, Tucson, USA
  2. Communication Disorders Program, University of Virginia, Charlottesville, USA
