Abstract
Berk criticizes meta-analysis on a host of statistical grounds. The criticisms are thoughtful and may even seem compelling on first read. However, they are really a special case of the kinds of criticisms that can be leveled at all scientific endeavor. Science is rife with assumptions of greater or lesser plausibility, no more in meta-analysis than in primary research. We trust those assumptions only temporarily and heuristically. The answer is not to abandon ship, but rather to work to improve the endeavor at the margin, hoping that science is self-correcting in the long run, and that our answers are not too far off despite the weaknesses of the enterprise.
References
Berk, R. A., & DeLeeuw, J. (1999). An evaluation of California’s inmate classification system using a generalized regression discontinuity design. Journal of the American Statistical Association, 94, 1045–1052.
Berk, R. A., Lenihan, K. J., & Rossi, P. H. (1980). Crime and poverty: Some experimental evidence from ex-offenders. American Sociological Review, 45, 766–786.
Berk, R. A., & Rauma, D. (1983). Capitalizing on nonrandom assignment to treatment: A regression discontinuity evaluation of a crime control program. Journal of the American Statistical Association, 78, 21–27.
Campbell, D. T. (1960). Blind variation and selective retention in creative thought as in other knowledge processes. Psychological Review, 67, 380–400.
Campbell, D. T. (1969). Ethnocentrism of disciplines and the fish-scale model of omniscience. In M. Sherif & C. W. Sherif (Eds.), Interdisciplinary relationships in the social sciences (pp. 328–348). Chicago: Aldine.
Campbell, D. T. (1988). In E. S. Overman (Ed.), Methodology and epistemology for social science: Selected papers. Chicago: University of Chicago Press.
Cooper, H. M., & Hedges, L. V. (Eds.). (1994). Handbook of research synthesis. New York: Russell Sage Foundation.
Cooper, H. M., & Rosenthal, R. (1980). Statistical versus traditional procedures for summarizing research findings. Psychological Bulletin, 87, 442–449.
Cowman, C., & Conroy, R. M. (2004). A response to “Misrepresenting random sampling? A systematic review of research papers”. Journal of Advanced Nursing, 46, 221–222.
Eysenck, H. J. (1978). An exercise in meta-silliness. American Psychologist, 33, 517.
Glass, G. V. (1995). Review of H. M. Cooper and L. V. Hedges (Eds.), Handbook of research synthesis. Contemporary Psychology, 40, 736–738.
Glass, G. V. & Hopkins, K. D. (1996). Statistical Methods in Education & Psychology (3rd Ed.). Boston: Allyn & Bacon.
Harlow, L. A., Mulaik, S. A. & Steiger, J. H. (Eds.). (1997). What if there were no significance tests? Mahwah, New Jersey: Lawrence Erlbaum Associates.
Holton, G. (1986). The advancement of science, and its burdens. Cambridge, England: Cambridge University Press.
Ioannidis, J. P. A. (2005a). Contradicted and initially stronger effects in highly cited clinical research. Journal of the American Medical Association, 294, 218–228.
Ioannidis, J. P. A. (2005b). Why most published research findings are false. PLoS Medicine, 2, 696–701. (accessed online at http://www.plosmedicine.org, February 18, 2007).
Ioannidis, J. P. A., & Lau, J. (2001). Evolution of treatment effects over time: Empirical insight from recursive cumulative metaanalyses. Proceedings of the National Academy of Sciences of the United States of America, 98, 831–836.
Ioannidis, J. P. A., & Trikalinos, T. A. (2005). Early extreme contradictory estimates may appear in published research: The Proteus phenomenon in molecular genetics research and randomized trials. Journal of Clinical Epidemiology, 58, 543–549.
Malzahn, U., Bohning, D., & Holling, H. (2000). Nonparametric estimation of heterogeneity variance for the standardized difference used in meta-analysis. Biometrika, 87, 619–632.
McCullough, B. D., & Wilson, B. (1999). On the accuracy of statistical procedure in Microsoft Excel 97. Computational Statistics & Data Analysis, 31, 27–37.
Montori, V. M., Devereaux, P. J., Adhikari, N. K. J., Burns, K. E. A., Eggert, C. H. et al. (2005). Randomized trials stopped early for benefit: A systematic review. Journal of the American Medical Association, 294, 2203–2209.
Petrosino, A., Turpin-Petrosino, C., & Buehler, J. (2003). “‘Scared Straight’ and other juvenile awareness programs for preventing juvenile delinquency” (Updated C2 Review). In: The Campbell Collaboration Reviews of Intervention and Policy Evaluations (C2-RIPE), November, 2003. Philadelphia, Pennsylvania: Campbell Collaboration. (Retrieved from http://www.campbellcollaboration.org/doc-pdf/ssrupdt.pdf on February 18, 2007).
Raudenbush, S.W. (1994). Random effects models. In H.M. Cooper and L.V. Hedges (Eds.), Handbook of Research Synthesis (pp. 301–321). New York: Russell Sage Foundation.
Rubin, D. B. (1992). Literature synthesis or effect-size surface estimation? Journal of Educational Statistics, 17, 363–374.
Schulze, R. (2004). Meta-analysis: a comparison of approaches. Cambridge, Massachusetts: Hogrefe and Huber.
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston: Houghton-Mifflin.
Simonton, D. K. (1988). Scientific genius: a psychology of science. Cambridge, England: Cambridge University Press.
Simonton, D. K. (1993). Blind variations, chance configurations, and creative genius. Psychological Inquiry, 4, 225–228.
Trikalinos, T. A., Churchill, R., Ferri, M., Leucht, S., Tuunainen, A., Wahlbeck, K., & Ioannidis, J. P. A. (2004). Effect sizes in cumulative meta-analyses of mental health randomized trials evolved over time. Journal of Clinical Epidemiology, 57, 1124–1130.
Van Den Noortgate, W., & Onghena, P. (2005). Parametric and nonparametric bootstrap methods for meta-analysis. Behavior Research Methods, 37, 11–22.
Watson, R. (2004). A response to “Misrepresenting random sampling? A systematic review of research papers”. Journal of Advanced Nursing, 46, 220–221.
Wilkinson, L., & the Task Force on Statistical Inference. (1999). Statistical methods in psychology journals: Guidelines and explanations. American Psychologist, 54, 594–604.
Williamson, G. R. (2003). Misrepresenting random sampling? A systematic review of research papers. Journal of Advanced Nursing, 44, 278–288.
Williamson, G. R. (2004). A response to Watson’s and Cowman and Conroy’s critiques. Journal of Advanced Nursing, 46, 222–223.
Shadish, W.R. A world without meta-analysis. J Exp Criminol 3, 281–291 (2007). https://doi.org/10.1007/s11292-007-9034-0