
On the misinterpretation of effect size

Notes

  1. The fact that Kraft is somewhat inconsistent in his contextualising indicates how hard it is to pin down context. On the same page of his article (p. 20) he argues that his benchmarks are for “effect sizes from causal studies of pre-K–12 education interventions evaluating effects on student achievement” and, later, that his benchmarks are for “causal research that evaluates the effect of education interventions on standardised student achievement”. Elsewhere he notes that still further contextualisation could be focussed on a particular grade within pre-K–12 and on subject matter! I should also note that, given the unpublished nature of Kraft's paper, we need to be careful in referring to its arguments: there are multiple versions in circulation which make slightly different claims.

  2. Note that averaging across studies, as in meta-analysis, does not allow these factors to “wash out” and somehow make the resulting effect sizes comparable measures of the types of intervention. For that comparison to be valid, the factors (sample homogeneity, measure proximity, measure length, question design, etc.) would need to be distributed equally across the sets of studies being compared, and that too is vanishingly unlikely (and normally goes unchecked by meta-analysts and meta-meta-analysts in education); the sketch following these notes illustrates the point.
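As an illustration of Note 2, the following minimal simulation sketch shows how a methodological factor that is unevenly distributed across two sets of studies can produce different averaged effect sizes even when the underlying intervention effects are identical. All numbers here (TRUE_EFFECT, PROXIMAL_BOOST and the proportions of proximal measures) are purely hypothetical assumptions for illustration, not figures from the article.

```python
# Minimal sketch, assuming hypothetical values throughout: an unevenly distributed
# methodological factor (measure proximity) prevents averaged effect sizes from
# being comparable measures of two intervention types.
import numpy as np

rng = np.random.default_rng(0)

TRUE_EFFECT = 0.20      # identical underlying effect for both intervention types (assumed)
PROXIMAL_BOOST = 0.30   # assumed inflation when a study uses a proximal, researcher-aligned measure


def mean_effect(n_studies: int, p_proximal: float) -> float:
    """Average observed standardised effect size across a set of studies in which
    a proportion p_proximal of the studies happen to use proximal measures."""
    uses_proximal = rng.random(n_studies) < p_proximal
    observed = TRUE_EFFECT + PROXIMAL_BOOST * uses_proximal + rng.normal(0.0, 0.05, n_studies)
    return float(observed.mean())


# Studies of intervention type A mostly use proximal measures; studies of type B mostly
# use distal, standardised measures. Nothing else differs between the two sets.
mean_a = mean_effect(200, p_proximal=0.8)
mean_b = mean_effect(200, p_proximal=0.2)

print(f"Type A average effect size: {mean_a:.2f}")  # roughly 0.44
print(f"Type B average effect size: {mean_b:.2f}")  # roughly 0.26
```

Under these assumed numbers the two averages differ by roughly 0.2 despite identical true effects, which is the sense in which an unevenly distributed methodological factor does not “wash out” when effect sizes are averaged within each set.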

References

  • Bakker, A., Cai, J., English, L., Kaiser, G., Mesa, V., & Van Dooren, W. (2019). Beyond small, medium, or large: Points of consideration when interpreting effect sizes. Educational Studies in Mathematics, 102, 1–8.

  • Bergeron, P. J., & Rivard, L. (2017). How to engage in pseudoscience with real data: A criticism of John Hattie’s arguments in visible learning from the perspective of a statistician. McGill Journal of Education/Revue des sciences de l'éducation de McGill, 52(1), 237–246.

  • Cheung, A. C., & Slavin, R. E. (2016). How methodological features affect effect sizes in education. Educational Researcher, 45(5), 283–292.

  • Cohen, J. (1962). The statistical power of abnormal-social psychological research: A review. Journal of Abnormal and Social Psychology, 65(3), 145–153.

  • Cohen, J. (1973). Brief notes: Statistical power analysis and research results. American Educational Research Journal, 10(3), 225–229.

  • Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale: Lawrence Erlbaum Associates.

  • Hattie, J. (2009). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Abingdon: Routledge.

  • Higgins, S., & Katsipataki, M. (2016). Communicating comparative findings from meta-analysis in educational research: Some examples and suggestions. International Journal of Research & Method in Education, 39(3), 237–254.

  • Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings. Newbury Park: Sage.

  • IES (2017). What works clearinghouse: Procedures handbook v4.0. Retrieved from Institute of Education Sciences: https://ies.ed.gov/ncee/wwc/Docs/referenceresources/wwc_standards_handbook_v4.pdf

  • Joyce, K. E., & Cartwright, N. (2019). Bridging the gap between research and practice: Predicting what will work locally. American Educational Research Journal. https://doi.org/10.3102/0002831219866687

  • Kornell, N., & Bjork, R. A. (2008). Learning concepts and categories: Is spacing the “enemy of induction”? Psychological Science, 19(6), 585–592.

  • Kraft, M. (2019). Interpreting effect sizes of education interventions. (EdWorkingPaper: 19-10). Retrieved from Annenberg Institute at Brown University: http://edworkingpapers.com/ai19-10

  • Li, Q., & Ma, X. (2010). A meta-analysis of the effects of computer technology on school students’ mathematics learning. Educational Psychology Review, 22, 215–243.

  • Lloyd, C., Edovald, T., Morris, S., Kiss, Z., Skipp, A., & Haywood, S. (2015). Durham Shared Maths Project: Evaluation report and executive summary. London: Education Endowment Foundation.

  • Lortie-Forgues, H., & Inglis, M. (2019). Rigorous large-scale educational RCTs are often uninformative: Should we be concerned? Educational Researcher, 48(3), 158–166.

  • Savelsbergh, E. R., Prins, G. T., Rietbergen, C., Fechner, S., Vaessen, B. E., Draijer, J. M., & Bakker, A. (2016). Effects of innovative science and mathematics teaching on student attitudes and achievement: A meta-analytic study. Educational Research Review, 19, 158–172.

  • Simpson, A. (2017). The misdirection of public policy: Comparing and combining standardised effect sizes. Journal of Education Policy, 32(4), 450–466.

  • Simpson, A. (2018). Princesses are bigger than elephants: Effect size as a category error in evidence-based education. British Educational Research Journal, 44(5), 897–913.

  • Simpson, A. (2019). Separating arguments from conclusions: The mistaken role of effect size in educational policy research. Educational Research and Evaluation, 25(1-2), 99–109.

  • Slavin, R. E. (1986). Best-evidence synthesis: An alternative to meta-analytic and traditional reviews. Educational Researcher, 15(9), 5–11.

  • Terrin, N., Schmid, C. H., Lau, J., & Olkin, I. (2003). Adjusting for publication bias in the presence of heterogeneity. Statistics in Medicine, 22(13), 2113–2126.

  • Verkoeijen, P., & Bouwmeester, S. (2014). Is spacing really the “friend of induction”? Frontiers in Psychology, 5, 259.

Author information

Correspondence to Adrian Simpson.

About this article

Cite this article

Simpson, A. On the misinterpretation of effect size. Educ Stud Math 103, 125–133 (2020). https://doi.org/10.1007/s10649-019-09924-4
