Tutorial: Small-N Power Analysis
Power analysis is an overlooked and underreported aspect of study design. A priori power analysis involves estimating the sample size required for a study based on predetermined maximum tolerable Type I and II error rates and the minimum effect size that would be clinically, practically, or theoretically meaningful. Power is more often discussed within the context of large-N group designs, but power analyses can be used in small-N research and within-subjects designs to maximize the probative value of the research. In this tutorial, case studies illustrate how power analysis can be used by behavior analysts to compare two independent groups, behavior in baseline and intervention conditions, and response characteristics across multiple within-subject treatments. After reading this tutorial, the reader will be able to estimate just noticeable differences using means and standard deviations, convert them to standardized effect sizes, and use G*Power to determine the sample size needed to detect an effect with desired power.
Keywords: Experimental design · A priori power analysis · Effect size · Sample size · Tests of statistical significance · Hypothesis testing · G*Power
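The workflow the abstract describes — start from means and standard deviations, convert the difference to a standardized effect size (Cohen's d), then solve for the sample size that achieves the desired power — can be sketched in a few lines of Python. The pilot means, standard deviations, and group sizes below are hypothetical illustration values, not data from the tutorial, and the sample-size step uses a normal approximation rather than G*Power's exact noncentral-t computation, so G*Power's answer is typically a participant or two larger.

```python
import math
from statistics import NormalDist

def cohens_d(mean1, sd1, mean2, sd2, n1, n2):
    """Standardized mean difference using the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    )
    return (mean2 - mean1) / pooled_sd

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-tailed, two-sample t test.

    Normal approximation: n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2.
    G*Power's exact noncentral-t solution runs slightly larger.
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# Hypothetical pilot data: a 2-point group difference, SD = 4 in each group
d = cohens_d(10, 4, 12, 4, n1=20, n2=20)  # d = 0.5, a "medium" effect
print(d, n_per_group(d))                   # 0.5 63 (G*Power reports 64)
```

Entering the same d, alpha, and power into G*Power (t tests → Means: difference between two independent groups → A priori) reproduces the textbook answer of 64 per group, confirming the approximation is close.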