Recommended effect size statistics for repeated measures designs

Abstract

Investigators, who are increasingly implored to present and discuss effect size statistics, might comply more often if they understood more clearly what is required. When investigators wish to report effect sizes derived from analyses of variance that include repeated measures, past advice has been problematic. Only recently has a generally useful effect size statistic been proposed for such designs: generalized eta squared (η²G; Olejnik & Algina, 2003). Here, we present this method, explain that η²G is preferred to eta squared and partial eta squared because it provides comparability across between-subjects and within-subjects designs, show that it can easily be computed from information provided by standard statistical packages, and recommend that investigators provide it routinely in their research reports when appropriate.
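To make the computation concrete, the sketch below shows how η²G can be assembled from the sums of squares that a standard ANOVA table reports, following the general approach of Olejnik and Algina (2003) for designs in which all factors are manipulated rather than measured: the effect's sum of squares is divided by that same sum of squares plus the sums of squares of every subject-related (error) source of variance. The function names, variable names, and numerical values are illustrative assumptions, not taken from the article or from any particular statistical package.

    # Minimal sketch (illustrative, not from the article's materials): generalized
    # eta squared from the sums of squares in a standard ANOVA table, assuming a
    # design in which all factors are manipulated rather than measured.
    # Denominator = SS_effect + SS of every subject-related (error) source.

    def generalized_eta_squared(ss_effect, subject_error_ss):
        """eta²G = SS_effect / (SS_effect + sum of all subject-related SS)."""
        return ss_effect / (ss_effect + sum(subject_error_ss))

    # Hypothetical sums of squares for a mixed design with one between-subjects
    # factor A and one repeated measures factor B.
    ss = {
        "A": 120.0,                       # between-subjects effect
        "subjects_within_A": 480.0,       # error term for A
        "B": 60.0,                        # repeated measures effect
        "A_x_B": 30.0,                    # interaction
        "B_x_subjects_within_A": 240.0,   # error term for B and A x B
    }

    # Every subject-related source enters every effect's denominator.
    error_ss = [ss["subjects_within_A"], ss["B_x_subjects_within_A"]]

    for effect in ("A", "B", "A_x_B"):
        print(effect, round(generalized_eta_squared(ss[effect], error_ss), 3))
        # A 0.143, B 0.077, A_x_B 0.04

By contrast, partial eta squared divides each effect's sum of squares by that effect plus only its own error term, which is one reason the two statistics differ and why η²G remains comparable across between-subjects and within-subjects designs.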

References

  • Adamson, L. B., Bakeman, R., & Deckner, D. F. (2004). The development of symbol-infused joint engagement. Child Development, 75, 1171–1187.

  • American Psychological Association (2001). Publication manual of the American Psychological Association (5th ed.). Washington, DC: Author.

  • Bakeman, R. (1992). Understanding social science statistics: A spreadsheet approach. Hillsdale, NJ: Erlbaum.

  • Bakeman, R., & Robinson, B. F. (2005). Understanding statistics in the behavioral sciences. Mahwah, NJ: Erlbaum.

  • Bond, C. F., Jr., Wiitala, W. L., & Richard, F. D. (2003). Meta-analysis of raw mean differences. Psychological Methods, 8, 406–418.

  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

  • Cohen, J., & Cohen, P. (1983). Applied multiple regression/correlation analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

  • Fidler, F., Thomason, N., Cumming, G., Finch, S., & Leeman, J. (2004). Editors can lead researchers to confidence intervals, but can't make them think. Psychological Science, 15, 119–126.

  • Gillett, R. (2003). The metric comparability of meta-analytic effect-size estimators from factorial designs. Psychological Methods, 8, 419–433.

  • Glass, G. V., McGaw, B., & Smith, M. L. (1981). Meta-analysis in social science research. Newbury Park, CA: Sage.

  • Hays, W. L. (1963). Statistics. New York: Holt, Rinehart & Winston.

  • Keppel, G. (1991). Design and analysis: A researcher's handbook (3rd ed.). Englewood Cliffs, NJ: Prentice-Hall.

  • Keppel, G., & Wickens, T. D. (2004). Design and analysis: A researcher's handbook (4th ed.). Upper Saddle River, NJ: Pearson Prentice-Hall.

  • Olejnik, S., & Algina, J. (2003). Generalized eta and omega squared statistics: Measures of effect size for some common research designs. Psychological Methods, 8, 434–447.

  • Rosenthal, R., Rosnow, R. L., & Rubin, D. B. (2000). Contrasts and effect sizes in behavioral research: A correlational approach. Cambridge: Cambridge University Press.

  • Schmidt, F. (1996). Statistical significance testing and cumulative knowledge in psychology: Implications for training of researchers. Psychological Methods, 1, 115–129.

  • Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4th ed.). Boston: Allyn and Bacon.

  • Wilkinson, L., & the Task Force on Statistical Inference, American Psychological Association Board of Scientific Affairs (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604.

  • Winer, B. J., Brown, D. R., & Michels, K. M. (1991). Statistical principles in experimental design. New York: McGraw-Hill.

Author information

Corresponding author

Correspondence to Roger Bakeman.

Additional information

My appreciation to my colleague, Christopher Henrich, and to two anonymous reviewers who provided useful and valuable feedback on earlier versions of this article.

About this article

Cite this article

Bakeman, R. Recommended effect size statistics for repeated measures designs. Behavior Research Methods 37, 379–384 (2005). https://doi.org/10.3758/BF03192707

Keywords

  • Repeated Measures Design
  • Lowercase Letter
  • Repeated Measures Factor
  • Effect Size Measure
  • Standard Statistical Package