Sports Medicine, Volume 30, Issue 1, pp 1–15

Measures of Reliability in Sports Medicine and Science

Current Opinion

Abstract

Reliability refers to the reproducibility of values of a test, assay or other measurement in repeated trials on the same individuals. Better reliability implies better precision of single measurements and better tracking of changes in measurements in research or practical settings. The main measures of reliability are within-subject random variation, systematic change in the mean, and retest correlation. A simple, adaptable form of within-subject variation is the typical (standard) error of measurement: the standard deviation of an individual’s repeated measurements. For many measurements in sports medicine and science, the typical error is best expressed as a coefficient of variation (percentage of the mean). A biased, more limited form of within-subject variation is the limits of agreement: the 95% likely range of change of an individual’s measurements between 2 trials. Systematic changes in the mean of a measure between consecutive trials represent such effects as learning, motivation or fatigue; these changes need to be eliminated from estimates of within-subject variation. Retest correlation is difficult to interpret, mainly because its value is sensitive to the heterogeneity of the sample of participants. Uses of reliability include decision-making when monitoring individuals, comparison of tests or equipment, estimation of sample size in experiments and estimation of the magnitude of individual differences in the response to a treatment. Reasonable precision for estimates of reliability requires approximately 50 study participants and at least 3 trials. Studies aimed at assessing variation in reliability between tests or equipment require complex designs and analyses that researchers seldom perform correctly. A wider understanding of reliability and adoption of the typical error as the standard measure of reliability would improve the assessment of tests and equipment in our disciplines.


Copyright information

© Adis International Limited 2000

Authors and Affiliations

1. Department of Physiology, School of Medical Sciences and School of Physical Education, University of Otago, Dunedin, New Zealand
