Single Subject Research

  • Kurt A. Freeman
  • May Lim
Reference work entry

Abstract

In this chapter, we describe basic and expert competencies for the use of single subject designs. Basic competencies include repeated observation of the behavioral phenomenon of interest, the participant serving as his or her own control, and measurement and visual analysis strategies that allow one to determine whether change occurs over time. Expert competencies include the basic competencies plus repeated demonstration of experimental effect and altering only one variable at a time when introducing treatment. Through numerous examples, we illustrate how single subject research designs lend themselves well to various stages of research, from theory development (Level I research) to more formal and systematic tests of those theories (Level II and Level III research). We also describe how basic and expert competencies of single subject designs can be incorporated into clinical practice to evaluate the effects of intervention. The reader will learn how single subject designs constitute a unique approach to scientific inquiry due to several features: repeated observation of the dependent variable(s), replication of treatment effects, intrasubject and intersubject comparisons, visual analysis of individual participant data, and systematic manipulation of the independent variable(s). This chapter should be useful for clinicians and researchers alike who are interested in answering questions about clinical phenomena in a manner that meets the requirements of the scientific method.
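To make the comparison logic concrete, here is a minimal Python sketch (ours, not from the chapter; the phase data and helper names are hypothetical) of how repeated observations of one dependent measure might be summarized across a baseline (A) phase and a treatment (B) phase, with the participant serving as his or her own control.

```python
# Minimal illustrative sketch (not from the chapter): an A-B
# single-subject comparison with hypothetical repeated observations.
# The data values and helper names are assumptions for illustration.

from statistics import mean

# Hypothetical repeated observations of one dependent measure for a
# single participant: baseline (A) phase, then treatment (B) phase,
# with only the treatment variable altered between phases.
baseline = [12, 14, 13, 15, 14]   # A phase: participant as own control
treatment = [9, 7, 6, 5, 4]       # B phase: intervention introduced

def level(phase):
    """Within-phase level: the mean of the repeated observations."""
    return mean(phase)

def trend(phase):
    """Crude within-phase trend: change from first to last observation."""
    return phase[-1] - phase[0]

print(f"A level: {level(baseline):.1f}, A trend: {trend(baseline):+d}")
print(f"B level: {level(treatment):.1f}, B trend: {trend(treatment):+d}")
print(f"Level change A->B: {level(treatment) - level(baseline):+.1f}")
```

In practice, such numerical summaries only supplement graphing each observation and visually inspecting level, trend, and variability within and across phases, which is the primary analytic strategy the chapter describes.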

Keywords

Dependent Measure · Data Pattern · Extraneous Variable · Multiple Baseline · Individual Participant Data


Copyright information

© Springer Science+Business Media LLC 2010

Authors and Affiliations

  • Kurt A. Freeman (1)
  • May Lim (1)

  1. Oregon Health & Science University, Portland, USA
