
Power and Sample Size

  • Stephen Lyman
Chapter

Abstract

Power is "the probability of rejecting the null hypothesis in a statistical test when a particular alternative hypothesis happens to be true" (Merriam-Webster Dictionary). Here, we discuss why power is needed in clinical research; the properties that determine power, including sample size, variability, outcome frequency, p-value, and effect size; whether statistical power can ever be truly known; and how to calculate statistical power. Power is a theoretical concept, but it has practical benefit. Overpowered studies are inefficient and may waste resources that could be used to further our knowledge in other ways. Conversely, underpowered studies not only waste the time of participants and researchers but may also violate the responsibility to do no harm, since the study intervention may not be risk-free. All studies, especially those involving invasive interventions, should therefore be adequately powered as a matter of ethics.
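The abstract notes that power depends on sample size, variability, effect size, and the significance threshold, but does not show a worked calculation. As an illustrative sketch only (not taken from the chapter), the following uses the standard normal approximation for a two-sided two-sample test of a standardized effect size d; the function names are hypothetical.

```python
from math import sqrt
from statistics import NormalDist  # Python >= 3.8 standard library

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test,
    for standardized effect size d with n subjects per group."""
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # critical value, e.g. 1.96
    ncp = d * sqrt(n_per_group / 2)                # noncentrality under H1
    # probability the test statistic lands beyond either critical value
    return (1 - NormalDist().cdf(z_crit - ncp)) + NormalDist().cdf(-z_crit - ncp)

def sample_size(d, target_power=0.80, alpha=0.05):
    """Smallest n per group whose approximate power reaches the target."""
    n = 2
    while power_two_sample(n_per_group=n, d=d, alpha=alpha) < target_power:
        n += 1
    return n
```

For example, detecting a medium standardized effect (d = 0.5) with 80% power at a two-sided alpha of 0.05 requires roughly 63 subjects per group under this approximation; exact t-based calculations give a slightly larger answer.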

References

  1. Adcock CJ. Sample size determination: a review. Statistician. 1997;46(2):261–83.
  2. Blackless M, Charuvastra A, Derryck A, Fausto-Sterling A, Lauzanne K, Lee E. How sexually dimorphic are we? Review and synthesis. Am J Hum Biol. 2000;12:151–66.
  3. Carley S, Dosman S, Jones SR, Harrison M. Simple nomograms to calculate sample size in diagnostic studies. Emerg Med J. 2005;22:180–1.
  4. Jones SR, Carley S, Harrison M. An introduction to power and sample size estimation. Emerg Med J. 2003;20:453–8.
  5. Mcglothlin AE, Lewis RJ. Minimal clinically important difference: defining what really matters to patients. JAMA. 2014;312:1342–3.
  6. "Power." Merriam-Webster.com. 2018. https://www.merriam-webster.com. Accessed 1 July 2018.
  7. Saito Y, Sozu T, Hamada H, Yoshimura I. Effective number of subjects and number of raters for inter-rater reliability studies. Stat Med. 2006;25:1547–60.
  8. Sim J, Wright CC. The Kappa statistic in reliability studies: use, interpretation, and sample size requirements. Phys Ther. 2005;85(3):257–68.
  9. Theodore BR. Methodological problems associated with the present conceptualization of the minimum clinically important difference and substantial clinical benefit. Spine J. 2010;10:507–9. https://doi.org/10.1016/j.spinee.2010.04.003.
  10. Walter SD, Eliasziw M, Donner A. Sample size and optimal designs for reliability studies. Stat Med. 1998;17:101–10.

Copyright information

© ISAKOS 2019

Authors and Affiliations

  1. Hospital for Special Surgery, New York, USA
