
Sports Medicine, Volume 34, Issue 15, pp 1035–1050

Single-Subject Research Designs and Data Analyses for Assessing Elite Athletes’ Conditioning

  • Taisuke Kinugasa
  • Ester Cerin
  • Sue Hooper
Leading Article

Abstract

Research in conditioning (all the processes of preparation for competition) has used group research designs, in which multiple athletes are observed at one or more points in time. However, empirical reports of large inter-individual differences in response to conditioning regimens suggest that applied conditioning research would greatly benefit from single-subject research designs. Single-subject research designs make it possible to determine the extent to which a specific conditioning regimen works for a specific athlete, as opposed to the average athlete, who is the focal point of group research designs. The aim of this review is to outline the strategies and procedures of single-subject research as they pertain to the assessment of conditioning for individual athletes. The four main experimental designs in single-subject research are: the AB design, reversal (withdrawal) designs and their extensions, multiple baseline designs and alternating treatment designs. The visual and statistical analyses commonly used to analyse single-subject data are discussed, together with their advantages and limitations. Modelling of multivariate single-subject data using techniques such as dynamic factor analysis and structural equation modelling may identify individualised models of conditioning, leading to better prediction of performance. Despite problems associated with data analyses in single-subject research (e.g. serial dependency), sports scientists should use single-subject research designs in applied conditioning research to understand how well an intervention (e.g. a training method) works and to predict performance for a particular athlete.
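
As a concrete illustration of the statistical analyses the review discusses, the minimal sketch below applies a single-case randomisation test to a hypothetical AB design, following the start-point randomisation logic associated with Edgington's approach. The athlete's jump scores, the window of admissible intervention start points and all variable names are invented for illustration.

```python
import numpy as np

# Hypothetical daily countermovement-jump scores (cm) for one athlete:
# sessions 1-10 form the baseline (A) phase; sessions 11-20 follow a new
# conditioning regimen (B phase). All values are invented for illustration.
scores = np.array([52.1, 51.8, 52.5, 51.9, 52.3, 52.0, 51.7, 52.4, 52.2, 51.6,
                   53.0, 53.4, 52.9, 53.6, 53.2, 53.8, 53.1, 53.5, 53.3, 53.7])
actual_start = 10  # index of the first intervention (B) session

def mean_shift(data, start):
    """Mean difference between the B phase and the A phase for a given start point."""
    return data[start:].mean() - data[:start].mean()

observed = mean_shift(scores, actual_start)

# Randomisation test: if the study had randomly drawn the intervention
# start point from a pre-specified window (here, sessions 6-16), the
# observed phase difference can be compared with the difference obtained
# at every admissible start point. Unlike a full shuffle of the
# observations, this keeps the serial order of the data intact.
admissible_starts = range(5, 16)  # 0-based indices: sessions 6 to 16
null_distribution = np.array([mean_shift(scores, s) for s in admissible_starts])
p_value = np.mean(null_distribution >= observed)

print(f"Observed A->B shift: {observed:+.2f} cm, one-sided p = {p_value:.3f}")
```

Because the test treats the intervention start point, rather than the individual observations, as the randomised unit, it does not assume that successive sessions are independent, which is exactly the serial-dependency problem noted above.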

Keywords

Randomisation Test · Elite Athlete · Serial Dependency · Multiple Baseline Design · Imagery Training

Notes

Acknowledgements

The authors wish to gratefully acknowledge Professor Will G. Hopkins, Auckland University of Technology, Auckland, New Zealand, for his invaluable contribution to this manuscript.

The authors have provided no information on sources of funding or on conflicts of interest directly relevant to the content of this review.

Copyright information

© Adis Data Information BV 2004

Authors and Affiliations

  1. School of Human Movement Studies, The University of Queensland, Brisbane, Australia
  2. School of Population Health, The University of Queensland, Brisbane, Australia
  3. Centre of Excellence for Applied Sport Science Research, Queensland Academy of Sport, Sunnybank, Australia
