# Power Calculations for Statistical Design

## Abstract

Power is the probability that a statistical analysis of experimental data will detect a true effect. An experiment's power is high or low because of decisions made at the planning stage. Although a carefully chosen method of analysis is more likely to find interesting results than a routine or thoughtlessly chosen one, nothing can be done to increase the power of a particular analysis once the data have been collected. Sufficiently high power (there is no universal definition of "sufficient") gives the experimenter good reason to hope that if an experimental effect exists, the analysis will find it. Conversely, low power makes negative results impossible to interpret: was no effect found because none exists, or because the experiment was unlikely to find one? Because this question cannot be answered in low-power studies, statisticians hold that such experiments should not be conducted.
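To make the planning-stage calculation concrete, here is a minimal sketch (not from the original text) of the standard normal-approximation power formula for a two-sided, two-sample z-test comparing population means, where `delta` is the assumed true difference in means, `sigma` the common standard deviation, and `n` the per-group sample size; the shift term plays the role of the noncentrality parameter:

```python
from math import sqrt
from statistics import NormalDist

def two_sample_power(delta, sigma, n, alpha=0.05):
    """Approximate power of a two-sided two-sample z-test with
    n subjects per group, assuming a true mean difference `delta`
    and common standard deviation `sigma` (normal approximation)."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)          # critical value bounding the acceptance region
    shift = delta / (sigma * sqrt(2 / n))       # standardized effect: the noncentrality shift
    # Probability the test statistic falls outside the acceptance region
    return nd.cdf(shift - z_crit) + nd.cdf(-shift - z_crit)

# Detecting a half-standard-deviation difference with 64 subjects per group:
print(round(two_sample_power(delta=0.5, sigma=1.0, n=64), 2))  # prints 0.81
```

This illustrates the abstract's point: power is fixed by `delta`, `sigma`, `n`, and `alpha` before any data are collected, so only planning-stage choices (chiefly sample size) can raise it.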

## Keywords

Statistical design · Acceptance region · Statistical concern · Population means · Noncentrality parameter

