Abstract
Purpose
The purpose of this study was to review the Meta-Analysis Reporting Standards (MARS) of the American Psychological Association (APA) and highlight opportunities for improvement of meta-analytic reviews in the organizational sciences.
Design/Methodology/Approach
The paper reviews MARS, describes “best” meta-analytic practices across two schools of meta-analysis, and shows how implementing such practices helps achieve the aims set forth in MARS. Examples of best practices are provided to aid readers in finding models for their own research.
Implications/Value
Meta-analytic reviews are a primary avenue for the accumulation of knowledge in the organizational sciences as well as many other areas of science. Unfortunately, many meta-analytic reviews in the organizational sciences do not follow professional guidelines and standards as closely as they should. Such deviations from best practice undermine the transparency and replicability of the reviews and thus their usefulness for the generation of cumulative knowledge and evidence-based practice. This study shows how implementing “best” meta-analytic practices helps to achieve the aims set forth in MARS. Although the paper is written primarily for organizational scientists, its recommendations are not limited to any particular scientific domain.
Notes
We note that the table includes some exemplar models that do not fully comply with a specific recommendation. However, even these models provide more information than the typical meta-analytic review in the organizational sciences and, as such, can still serve as exemplars.
In meta-analytic studies, the terms studies and samples are often used interchangeably. We use the term samples throughout this manuscript because a single study can contain multiple samples.
Most statistical considerations we discuss in the context of correlation coefficients also apply to other effect size statistics such as standardized mean differences. Formulae for other effect sizes are available in the respective literatures (e.g., Borenstein et al. 2009; Hedges and Olkin 1985; Hedges and Vevea 1998; Hunter and Schmidt 2004).
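To make the correlation case concrete, the Fisher z transformation of a correlation and its sampling variance take a simple closed form (as given in, e.g., Borenstein et al. 2009). A minimal Python sketch; the sample values (r = .50, n = 100) are hypothetical:

```python
import math

def fisher_z(r: float) -> float:
    """Fisher z transformation of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def fisher_z_variance(n: int) -> float:
    """Large-sample sampling variance of Fisher z for a sample of size n."""
    return 1.0 / (n - 3)

# Hypothetical sample: r = .50 observed in a sample of n = 100
z = fisher_z(0.50)          # ≈ 0.549
v = fisher_z_variance(100)  # ≈ 0.0103
```

The meta-analytic computations are then typically carried out on the z values, which are back-transformed to r for reporting.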
We note that there are also models described as ‘mixed-effects’ (e.g., Raudenbush and Bryk 1985). Because these models contain terms for the residual variance in underlying effect sizes, we classify them as random-effects models for the purposes of this paper.
We note that, for the majority of effect size statistics in the organizational sciences (e.g., correlations and standardized mean differences), there is little difference in estimates of the mean between the two weighting schemes. For effect size indices based on binary data, such as the odds ratio, the determinants of the variance can make the differences noticeable, favoring the inverse variance weights (Borenstein et al. 2009; Indrayan 2008; Sutton et al. 2000).
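The near-equivalence of the two weighting schemes for correlations can be illustrated with a small sketch; the (r, n) pairs are hypothetical, and the inverse-variance weights use the large-sample variance estimate (1 − r²)²/(n − 1):

```python
def mean_by_sample_size(effects):
    """Hunter-Schmidt-style mean: weight each correlation by its sample size."""
    return sum(n * r for r, n in effects) / sum(n for _, n in effects)

def mean_by_inverse_variance(effects):
    """Hedges-Olkin-style mean: weight each correlation by the inverse of its
    estimated sampling variance, (1 - r^2)^2 / (n - 1)."""
    weights = [(n - 1) / (1 - r ** 2) ** 2 for r, n in effects]
    return sum(w * r for w, (r, _) in zip(weights, effects)) / sum(weights)

# Hypothetical (r, n) pairs
effects = [(0.20, 50), (0.30, 100), (0.40, 150)]
m_n = mean_by_sample_size(effects)       # ≈ .333
m_iv = mean_by_inverse_variance(effects) # ≈ .341
```

For these correlations the two means differ by less than .01, consistent with the note above; larger discrepancies would be expected for variance structures like that of the odds ratio.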
We recommend reporting confidence intervals for the meta-analytic mean effect size as well as computing the REVC and its confidence interval (Viechtbauer 2007). We prefer the reporting of the prediction interval over the credibility interval.
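A minimal sketch of these quantities, using the DerSimonian and Laird (1986) estimator for the REVC and a simple normal-approximation prediction interval (published treatments often use a t-based version); the effect sizes and variances are hypothetical:

```python
import math

def random_effects_summary(ys, vs, z=1.96):
    """DerSimonian-Laird random-effects mean with a confidence interval for the
    mean and a (normal-approximation) prediction interval for a new sample."""
    k = len(ys)
    w = [1.0 / v for v in vs]
    fixed_mean = sum(wi * yi for wi, yi in zip(w, ys)) / sum(w)
    q = sum(wi * (yi - fixed_mean) ** 2 for wi, yi in zip(w, ys))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)  # REVC estimate
    w_star = [1.0 / (v + tau2) for v in vs]
    mu = sum(wi * yi for wi, yi in zip(w_star, ys)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    ci = (mu - z * se, mu + z * se)                 # CI for the mean
    half = z * math.sqrt(tau2 + se ** 2)
    pi = (mu - half, mu + half)                     # prediction interval
    return mu, tau2, ci, pi

# Hypothetical effect sizes and sampling variances from three samples
mu, tau2, ci, pi = random_effects_summary([0.1, 0.3, 0.5], [0.01, 0.01, 0.01])
```

The prediction interval is wider than the confidence interval whenever the REVC is positive, which is why the two convey different information.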
For instance, conceptually and computationally, meta-analyses with smaller sample sizes per sample will, on average, have smaller I² indices than meta-analyses with larger sample sizes per sample, because smaller samples carry greater sampling error variance, even if the between-sample variability (e.g., moderator variance) is identical. Hunter and Schmidt's 75% rule shares this problem because a given 'true' variance of the meta-analytically derived effect size becomes a larger percentage of the total variance as the sample sizes of the primary samples increase.
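This dependence of I² on within-sample sampling error can be seen in the approximate relation I² ≈ τ²/(τ² + σ²), where σ² is a 'typical' within-sample sampling variance (Higgins and Thompson 2002). A sketch with hypothetical values:

```python
def i_squared(tau2: float, typical_within_var: float) -> float:
    """Approximate I² (as a percentage): the share of total variance that is
    between-sample variance rather than sampling error."""
    return 100.0 * tau2 / (tau2 + typical_within_var)

# Identical between-sample variance (tau² = .02), but different per-sample
# sizes and hence different sampling error variances (hypothetical values)
i2_small_samples = i_squared(0.02, 0.04)   # small n -> large sampling variance
i2_large_samples = i_squared(0.02, 0.005)  # large n -> small sampling variance
# i2_small_samples ≈ 33.3, i2_large_samples ≈ 80.0
```

The between-sample variance is identical in both scenarios, yet I² differs sharply, which is the interpretational hazard the note describes.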
We note that the WLS regression procedures in the H&O approach use the inverse variance weight, and the estimation procedure for the standard errors differs from that of regular WLS regression estimation procedures. Thus, standard WLS techniques in software packages such as SPSS and SAS cannot be used to accurately estimate the regression model (Hedges and Olkin 1985), even when appropriate weighting is used. D. Wilson provides computationally correct macros for meta-regression (http://mason.gmu.edu/~dwilsonb/ma.html).
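The computational difference can be seen in a single-moderator weighted meta-regression: in the H&O approach the standard error of the slope comes directly from the inverse-variance weights, whereas a standard WLS routine would additionally rescale it by the residual mean square. A minimal sketch with hypothetical data:

```python
import math

def meta_regression_slope(ys, vs, xs):
    """Weighted simple meta-regression of effect sizes (ys, with sampling
    variances vs) on one moderator (xs). Returns the slope and its
    meta-analytic standard error: the SE is taken from the weights alone,
    NOT rescaled by the residual mean square as standard WLS routines do."""
    w = [1.0 / v for v in vs]
    sw = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, xs)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, ys)) / sw
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, xs))
    sxy = sum(wi * (xi - xbar) * (yi - ybar)
              for wi, xi, yi in zip(w, xs, ys))
    slope = sxy / sxx
    se = math.sqrt(1.0 / sxx)  # weights-based (meta-analytic) SE
    return slope, se

# Hypothetical effect sizes, sampling variances, and moderator values
slope, se = meta_regression_slope(
    ys=[0.10, 0.20, 0.30, 0.40], vs=[0.01] * 4, xs=[0, 1, 2, 3])
```

This sketch illustrates the fixed-effects case only; dedicated macros such as D. Wilson's also handle the mixed-effects extension.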
References
Aguinis, H., & Pierce, C. A. (1998). Testing moderator variable hypotheses meta-analytically. Journal of Management, 24, 577–592. doi:10.1016/s0149-2063(99)80074-9.
Aguinis, H., Dalton, D. R., Bosco, F. A., Pierce, C. A., & Dalton, C. M. (2011). Meta-analytic choices and judgment calls: Implications for theory building and testing, obtained effect sizes, and scholarly impact. Journal of Management, 37, 5–38. doi:10.1177/0149206310377113.
American Psychological Association. (2008). Reporting standards for research in psychology: Why do we need them? What might they be? American Psychologist, 63, 839–851. doi:10.1037/0003-066X.63.9.839.
American Psychological Association. (2010). Publication manual of the American Psychological Association (6th ed.). Washington, DC: American Psychological Association.
Aytug, Z. G., Rothstein, H. R., Zhou, W., & Kern, M. C. (2011). Revealed or concealed? Transparency of procedures, decisions, and judgment calls in meta-analyses. Organizational Research Methods, 15, 103–133. doi:10.1177/1094428111403495.
Baltes, B. B., Briggs, T. E., Huff, J. W., Wright, J. A., & Neuman, G. A. (1999). Flexible and compressed workweek schedules: A meta-analysis of their effects on work-related criteria. Journal of Applied Psychology, 84, 496–513. doi:10.1037/0021-9010.84.4.496.
Banks, G. C., & McDaniel, M. A. (2011). The kryptonite of evidence-based I–O psychology. Industrial and Organizational Psychology: Perspectives on Science and Practice, 4, 40–44. doi:10.1111/j.1754-9434.2010.01292.x.
Banks, G. C., Batchelor, J. H., & McDaniel, M. A. (2010). Smarter people are (a bit) more symmetrical: A meta-analysis of the relationship between intelligence and fluctuating asymmetry. Intelligence, 38, 393–401. doi:10.1016/j.intell.2010.04.003.
Banks, G. C., Kepes, S., & Banks, K. P. (2012a). Publication bias: The antagonist of meta-analytic reviews and effective policy making. Educational Evaluation and Policy Analysis, 34, 259–277. doi:10.3102/0162373712446144.
Banks, G. C., Kepes, S., & McDaniel, M. A. (2012b). Publication bias: A call for improved meta-analytic practice in the organizational sciences. International Journal of Selection and Assessment, 20, 182–196. doi:10.1111/j.1468-2389.2012.00591.x.
Bax, L., Yu, L.-M., Ikeda, N., Tsuruta, N., & Moons, K. G. (2006). Development and validation of MIX: Comprehensive free software for meta-analysis of causal research data. BMC Medical Research Methodology, 6, 50. doi:10.1186/1471-2288-6-50.
Bax, L., Yu, L.-M., Ikeda, N., & Moons, K. G. (2007). A systematic comparison of software dedicated to meta-analysis of causal studies. BMC Medical Research Methodology, 7, 40–48. doi:10.1186/1471-2288-7-40.
Beal, D. J., Corey, D. M., & Dunlap, W. P. (2002). On the bias of Huffcutt and Arthur’s (1995) procedure for identifying outliers in the meta-analysis of correlations. Journal of Applied Psychology, 87, 583–589. doi:10.1037/0021-9010.87.3.583.
Becker, B. J. (2005). The failsafe N or file-drawer number. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta analysis: Prevention, assessment, and adjustments (pp. 111–126). West Sussex: Wiley.
Berlin, J. A., & Ghersi, D. (2005). Preventing publication bias: Registries and prospective meta-analysis. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta analysis: Prevention, assessment, and adjustments. West Sussex: Wiley.
Berman, N. G., & Parker, R. A. (2002). Meta-analysis: Neither quick nor easy. BMC Medical Research Methodology, 2, 10. doi:10.1186/1471-2288-2-10.
Bobko, P., & Roth, P. L. (2008). Psychometric accuracy and (the continuing need for) quality thinking in meta-analysis. Organizational Research Methods, 11, 114–126. doi:10.1177/1094428107303155.
Böhning, D. (2000). Computer-assisted analysis of mixtures and applications: Meta-analysis, disease mapping and others. Boca Raton: Chapman and Hall/CRC.
Bonett, D. G. (2008). Meta-analytic interval estimation for Pearson correlations. Psychological Methods, 13, 173–189. doi:10.1037/a0012868.
Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2005). Comprehensive meta-analysis (Version 2). Englewood: Biostat.
Borenstein, M., Hedges, L. V., Higgins, J. P., & Rothstein, H. R. (2009). Introduction to meta-analysis. West Sussex: Wiley.
Borenstein, M., Hedges, L. V., Higgins, J. P. T., & Rothstein, H. (in press). Computing effect sizes for meta-analysis. West Sussex: Wiley.
Brannick, M. T. (2001). Implications of empirical Bayes meta-analysis for test validation. Journal of Applied Psychology, 86, 468–480. doi:10.1037/0021-9010.86.3.468.
Brannick, M. T., Yang, L.-Q., & Cafri, G. (2011). Comparison of weights for meta-analysis of r and d under realistic conditions. Organizational Research Methods, 14, 587–607. doi:10.1177/1094428110368725.
Briner, R. B., & Denyer, D. (2012). Systematic review and evidence synthesis as a practice and scholarship tool. In D. M. Rousseau (Ed.), The Oxford handbook of evidence-based management. New York: Oxford University Press.
Briner, R. B., & Rousseau, D. M. (2011). Evidence-based I–O psychology: Not there yet. Industrial and Organizational Psychology: Perspectives on Science and Practice, 4, 3–22. doi:10.1111/j.1754-9434.2010.01287.x.
Cafri, G., Kromrey, J. D., & Brannick, M. T. (2010). A meta-meta-analysis: Empirical review of statistical power, type I error rates, effect sizes, and model selection of meta-analyses published in psychology. Multivariate Behavioral Research, 45, 239–270. doi:10.1080/00273171003680187.
Carey, K. B., Scott-Sheldon, L. A. J., Carey, M. P., & DeMartini, K. S. (2007). Individual-level interventions to reduce college student drinking: A meta-analytic review. Addictive Behaviors, 32, 2469–2494. doi:10.1016/j.addbeh.2007.05.004.
Cholesterol Treatment Trialists’ Collaborators. (2005). Efficacy and safety of cholesterol-lowering treatment: Prospective meta-analysis of data from 90,056 participants in 14 randomised trials of statins. The Lancet, 366, 1267–1278. doi:10.1016/s0140-6736(05)67394-1.
Conn, V. S., Hafdahl, A. R., & Mehr, D. R. (2011). Interventions to increase physical activity among healthy adults: Meta-analysis of outcomes. American Journal of Public Health, 101, 751–758. doi:10.2105/ajph.2010.194381.
Cooper, H. (1998). Synthesizing research: A guide for literature reviews (3rd ed.). Thousand Oaks: Sage.
Cooper, H., & Hedges, L. V. (2009). Research synthesis as a scientific process. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 4–15). New York: Russell Sage Foundation.
Cooper, H., Hedges, L. V., & Valentine, J. C. (2009). The handbook of research synthesis and meta-analysis (2nd ed.). New York: Russell Sage Foundation.
Costanza, D. P., Badger, J. M., Fraser, R. L., Severt, J. B., & Gade, P. A. (2012). Generational differences in work-related attitudes: A meta-analysis. Journal of Business and Psychology, 27, 375–394. doi:10.1007/s10869-012-9259-4.
Crane, D. (1967). The gatekeepers of science: Some factors affecting the selection of articles for scientific journals. The American Sociologist, 2, 195–201. doi:10.2307/27701277.
Dalton, D. R., Aguinis, H., Dalton, C. M., Bosco, F. A., & Pierce, C. A. (2012). Revisiting the file drawer problem in meta-analysis: An assessment of published and non-published correlation matrices. Personnel Psychology, 65, 221–249. doi:10.1111/j.1744-6570.2012.01243.x.
DerSimonian, R., & Laird, N. (1986). Meta-analysis in clinical trials. Controlled Clinical Trials, 7, 177–188. doi:10.1016/0197-2456(86)90046-2.
Dickersin, K. (2005). Publication bias: Recognizing the problem, understanding its origins and scope, and preventing harm. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta analysis: Prevention, assessment, and adjustments (pp. 11–34). West Sussex: Wiley.
Dulebohn, J. H., Bommer, W. H., Liden, R. C., Brouer, R. L., & Ferris, G. R. (2012). A meta-analysis of antecedents and consequences of leader-member exchange: Integrating the past with an eye toward the future. Journal of Management, 38, 1715–1759. doi:10.1177/0149206311415280.
Edwards, W., Lindman, H., & Savage, L. J. (1963). Bayesian statistical inference for psychological research. Psychological Review, 70, 193–242. doi:10.1037/h0044139.
Egger, M., Smith, G. D., & Altman, D. (2001a). Systematic reviews in health care: Meta-analysis in context (2nd ed.). London: BMJ Books.
Egger, M., Smith, G. D., & O’Rourke, K. (2001b). Rationale, potentials, and promise of systematic reviews. In M. Egger, G. D. Smith, & D. Altman (Eds.), Systematic reviews in health care: Meta-analysis in context (2nd ed., pp. 3–22). London: BMJ Books.
Einhorn, H. J., & Hogarth, R. M. (1975). Unit weighting schemes for decision making. Organizational Behavior & Human Performance, 13, 171–192. doi:10.1016/0030-5073(75)90044-6.
Else-Quest, N. M., Hyde, J. S., & Linn, M. C. (2010). Cross-national patterns of gender differences in mathematics: A meta-analysis. Psychological Bulletin, 136, 103–127. doi:10.1037/a0018053.
Evangelou, E., Trikalinos, T. A., & Ioannidis, J. P. (2005). Unavailability of online supplementary scientific information from articles published in major journals. The FASEB Journal, 19, 1943–1944. doi:10.1096/fj.05-4784lsf.
Field, A. P. (2001). Meta-analysis of correlation coefficients: A Monte Carlo comparison of fixed- and random-effects methods. Psychological Methods, 6, 161–180. doi:10.1037/1082-989X.6.2.161.
Field, A. P. (2005). Is the meta-analysis of correlation coefficients accurate when population correlations vary? Psychological Methods, 10, 444–467. doi:10.1037/1082-989x.10.4.444.
Geyskens, I., Krishnan, R., Steenkamp, J.-B. E. M., & Cunha, P. V. (2009). A review and evaluation of meta-analysis practices in management research. Journal of Management, 35, 393–419. doi:10.1177/0149206308328501.
Greenhouse, J. B., & Iyengar, S. (2009). Sensitivity analysis and diagnostics. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 417–433). New York: Russell Sage Foundation.
Greenwald, A. G. (1975). Consequences of prejudice against the null hypothesis. Psychological Bulletin, 82, 1–20. doi:10.1037/h0076157.
Gurusamy, K., Aggarwal, R., Palanivelu, L., & Davidson, B. R. (2008). Systematic review of randomized controlled trials on the effectiveness of virtual reality training for laparoscopic surgery. British Journal of Surgery, 95, 1088–1097. doi:10.1002/bjs.6344.
Hafdahl, A. R., & Williams, M. A. (2009). Meta-analysis of correlations revisited: Attempted replication and extension of Field’s (2001) simulation studies. Psychological Methods, 14, 24–42. doi:10.1037/a0014697.
Halbert, R. J., Natoli, J. L., Gano, A., Badamgarav, E., Buist, A. S., & Mannino, D. M. (2006). Global burden of COPD: Systematic review and meta-analysis. European Respiratory Journal, 28, 523–532. doi:10.1183/09031936.06.00124605.
Hall, S. M., & Brannick, M. T. (2002). Comparison of two random-effects methods of meta-analysis. Journal of Applied Psychology, 87, 377–389. doi:10.1037/0021-9010.87.2.377.
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. New York: Academic Press.
Hedges, L. V., & Vevea, J. L. (1998). Fixed- and random-effects models in meta-analysis. Psychological Methods, 3, 486–504. doi:10.1037/1082-989x.3.4.486.
Hermelin, E., Lievens, F., & Robertson, I. T. (2007). The validity of assessment centres for the prediction of supervisory performance ratings: A meta-analysis. International Journal of Selection and Assessment, 15, 405–411. doi:10.1111/j.1468-2389.2007.00399.x.
Higgins, J. P. T., & Green, S. (Eds.). (2009). Cochrane handbook for systematic reviews of interventions (Version 5.0.2, updated September 2009). The Cochrane Collaboration. Retrieved from www.cochrane-handbook.org.
Higgins, J. P. T., & Thompson, S. G. (2002). Quantifying heterogeneity in a meta-analysis. Statistics in Medicine, 21, 1539–1558. doi:10.1002/sim.1186.
Higgins, J. P. T., Thompson, S. G., Deeks, J. J., & Altman, D. G. (2003). Measuring inconsistency in meta-analyses. British Medical Journal, 327, 557–560. doi:10.1136/bmj.327.7414.557.
Hoffman, B. J., Blair, C. A., Meriac, J. P., & Woehr, D. J. (2007). Expanding the criterion domain? A quantitative review of the OCB literature. Journal of Applied Psychology, 92, 555–566. doi:10.1037/0021-9010.92.2.555.
Huedo-Medina, T. B., Sánchez-Meca, J., Marín-Martínez, F., & Botella, J. (2006). Assessing heterogeneity in meta-analysis: Q statistic or I² index? Psychological Methods, 11, 193–206. doi:10.1037/1082-989x.11.2.193.
Hunter, J. E., & Schmidt, F. L. (1994). Estimation of sampling error variance in the meta-analysis of correlations: Use of average correlation in the homogeneous case. Journal of Applied Psychology, 79, 171–177. doi:10.1037/0021-9010.79.2.171.
Hunter, J. E., & Schmidt, F. L. (2000). Fixed effects vs. random effects meta-analysis models: Implications for cumulative research knowledge. International Journal of Selection and Assessment, 8, 275–292. doi:10.1111/1468-2389.00156.
Hunter, J. E., & Schmidt, F. L. (2004). Methods of meta-analysis: Correcting error and bias in research findings (2nd ed.). Newbury Park: Sage.
Hunter, J. E., Schmidt, F. L., & Le, H. (2006). Implications of direct and indirect range restriction for meta-analysis methods and findings. Journal of Applied Psychology, 91, 594–612. doi:10.1037/0021-9010.91.3.594.
Indrayan, A. (2008). Medical biostatistics (2nd ed.). Boca Raton: Chapman & Hall/CRC.
Ioannidis, J. P. A. (2010). Meta-research: The art of getting it wrong. Research Synthesis Methods, 1, 169–184. doi:10.1002/jrsm.19.
Jensen, A. R. (1998). The g factor: The science of mental ability. Westport: Praeger.
Judge, T. A., Thoresen, C. J., Bono, J. E., & Patton, G. K. (2001). The job satisfaction–job performance relationship: A qualitative and quantitative review. Psychological Bulletin, 127, 376–407. doi:10.1037/0033-2909.127.3.376.
Kamdi, A. S., Kandavalli, N. B., Emusu, D., Jain, N., Mamtani, M., & Porterfield, J. R. (2011). Weak or absent evidence for the association of HLA-DR antigens with risk of thyroid carcinoma: A meta-analysis of observational studies. Tissue Antigens, 78, 382–389. doi:10.1111/j.1399-0039.2011.01754.x.
Kemery, E. R., Mossholder, K. W., & Dunlap, W. P. (1989). Meta-analysis and moderator variables: A cautionary note on transportability. Journal of Applied Psychology, 74, 168–170. doi:10.1037/0021-9010.74.1.168.
Kepes, S., & McDaniel, M. A. (in press). How trustworthy is the scientific literature in I-O psychology? Industrial and Organizational Psychology: Perspectives on Science and Practice.
Kepes, S., Banks, G. C., McDaniel, M. A., & Whetzel, D. L. (2012). Publication bias in the organizational sciences. Organizational Research Methods, 15, 624–662. doi:10.1177/1094428112452760.
Kepes, S., Banks, G. C., & Oh, I.-S. (in press). Avoiding bias in publication bias research: The value of “null” findings. Journal of Business and Psychology. doi:10.1007/s10869-012-9279-0.
Kisamore, J. L. (2008). Distributional shapes and validity transport: A comparison of lower bounds. International Journal of Selection and Assessment, 16, 27–29. doi:10.1111/j.1468-2389.2008.00406.x.
Kisamore, J. L., & Brannick, M. T. (2008). An illustration of the consequences of meta-analysis model choice. Organizational Research Methods, 11, 35–53. doi:10.1177/1094428106287393.
Kuncel, N. R., Hezlett, S. A., & Ones, D. S. (2001). A comprehensive meta-analysis of the predictive validity of the graduate record examinations: Implications for graduate student selection and performance. Psychological Bulletin, 127, 162–181. doi:10.1037/0033-2909.127.1.162.
Laine, C., et al. (2007). Clinical trial registration: Looking back and moving ahead. New England Journal of Medicine, 356, 2734–2736. doi:10.1056/NEJMe078110.
Law, K. S., Schmidt, F. L., & Hunter, J. E. (1994a). Nonlinearity of range corrections in meta-analysis: Test of an improved procedure. Journal of Applied Psychology, 79, 425–438. doi:10.1037/0021-9010.79.3.425.
Law, K. S., Schmidt, F. L., & Hunter, J. E. (1994b). A test of two refinements in procedures for meta-analysis. Journal of Applied Psychology, 79, 978–986. doi:10.1037/0021-9010.79.6.978.
Le, H., Oh, I.-S., Shaffer, J., & Schmidt, F. L. (2007). Implications of methodological advances for the practice of personnel selection: How practitioners benefit from meta-analysis. Academy of Management Perspectives, 21, 6–15.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks: Sage.
Marín-Martínez, F., & Sánchez-Meca, J. (2010). Weighting by inverse variance or by sample size in random-effects meta-analysis. Educational and Psychological Measurement, 70, 56–73. doi:10.1177/0013164409344534.
McDaniel, M. A. (2005). Big-brained people are smarter: A meta-analysis of the relationship between in vivo brain volume and intelligence. Intelligence, 33, 337–346. doi:10.1016/j.intell.2004.11.005.
McDaniel, M. A., Whetzel, D. L., Schmidt, F. L., & Maurer, S. D. (1994). The validity of employment interviews: A comprehensive review and meta-analysis. Journal of Applied Psychology, 79, 599–616. doi:10.1037/0021-9010.79.4.599.
McDaniel, M. A., Rothstein, H. R., & Whetzel, D. L. (2006). Publication bias: A case study of four test vendors. Personnel Psychology, 59, 927–953. doi:10.1111/j.1744-6570.2006.00059.x.
McDaniel, M. A., Hartman, N. S., Whetzel, D. L., & Grubb, W. L. (2007). Situational judgment tests, response instructions, and validity: A meta-analysis. Personnel Psychology, 60, 63–91. doi:10.1111/j.1744-6570.2007.00065.x.
Miller, A. J., Worthington, E. L., & McDaniel, M. A. (2008). Gender and forgiveness: A meta-analytic review and research agenda. Journal of Social and Clinical Psychology, 27, 843–876. doi:10.1521/jscp.2008.27.8.843.
Orwin, R. G. (1983). A fail-safe N for effect size in meta-analysis. Journal of Educational Statistics, 8, 157–159. doi:10.2307/1164923.
Overton, R. C. (1998). A comparison of fixed-effects and mixed (random-effects) models for meta-analysis tests of moderator variable effects. Psychological Methods, 3, 354–379. doi:10.1037/1082-989x.3.3.354.
Park, T.-Y., & Shaw, J. D. (in press). Turnover rates and organizational performance: A meta-analysis. Journal of Applied Psychology. doi:10.1037/a0030723.
Puts, D. A., McDaniel, M. A., Jordan, C. L., & Breedlove, S. M. (2008). Spatial ability and prenatal androgens: Meta-analyses of congenital adrenal hyperplasia and digit ratio (2D:4D) studies. Archives of Sexual Behavior, 37, 100–111. doi:10.1007/s10508-007-9271-3.
Raju, N. S., Burke, M. J., Normand, J., & Langlois, G. M. (1991). A new meta-analytic approach. Journal of Applied Psychology, 76, 432–446. doi:10.1037/0021-9010.76.3.432.
Raudenbush, S. W. (1994). Random effects models. In H. Cooper & L. V. Hedges (Eds.), The handbook of research synthesis (pp. 301–321). New York: Russell Sage Foundation.
Raudenbush, S. W., & Bryk, A. S. (1985). Empirical Bayes meta-analysis. Journal of Educational Statistics, 10, 75–98. doi:10.2307/1164836.
Renkewitz, F., Fuchs, H. M., & Fiedler, S. (2011). Is there evidence of publication biases in JDM research? Judgment and Decision Making, 6, 870–881.
Richardson, K. M., & Rothstein, H. R. (2008). Effects of occupational stress management intervention programs: A meta-analysis. Journal of Occupational Health Psychology, 13, 69–93. doi:10.1037/1076-8998.13.1.69.
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86, 638–641. doi:10.1037/0033-2909.86.3.638.
Rosenthal, R. (1991). Meta-analytic procedures for social research (Revised ed.). Newbury Park: Sage.
Roth, P. L. (2008). Software review: Hunter-Schmidt meta-analysis programs 1.1. Organizational Research Methods, 11, 192–196. doi:10.1177/1094428106298972.
Rothstein, H. R. (2003). Progress is our most important product: Contributions of validity generalization and meta-analysis to development communication of knowledge in I/O psychology. In K. R. Murphy (Ed.), Validity generalization: A critical review (pp. 115–154). Mahwah: Lawrence Erlbaum.
Rothstein, H. R. (2012). Accessing relevant literature. In H. M. Cooper (Ed.), APA handbook of research methods in psychology: Foundations, planning, measures, and psychometrics (Vol. 1, pp. 133–144). Washington: American Psychological Association.
Rothstein, H. R., Sutton, A. J., & Borenstein, M. (2005a). Publication bias in meta-analyses. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta analysis: Prevention, assessment, and adjustments (pp. 1–7). West Sussex: Wiley.
Rothstein, H. R., Sutton, A. J., & Borenstein, M. (2005b). Publication bias in meta analysis: Prevention, assessment, and adjustments. West Sussex: Wiley.
Sackett, P. R., Harris, M. M., & Orr, J. M. (1986). On seeking moderator variables in the meta-analysis of correlational data: a Monte Carlo investigation of statistical power and resistance to type I error. Journal of Applied Psychology, 71, 302–310. doi:10.1037/0021-9010.71.2.302.
Schmidt, F. L., & Hunter, J. E. (1977). Development of a general solution to the problem of validity generalization. Journal of Applied Psychology, 62, 529–540. doi:10.1037/0021-9010.62.5.529.
Schmidt, F. L., & Hunter, J. E. (2003). History, development, evolution, and impact of validity generalization and meta-analysis methods, 1975–2001. In K. R. Murphy (Ed.), Validity generalization: A critical review (pp. 31–65). Mahwah: Lawrence Erlbaum.
Schmidt, F. L., Oh, I.-S., & Hayes, T. L. (2009). Fixed- versus random-effects models in meta-analysis: Model properties and an empirical comparison of differences in results. British Journal of Mathematical and Statistical Psychology, 62, 97–128. doi:10.1348/000711007x255327.
Schulze, R. (2004). Meta-analysis: A comparison of approaches. Cambridge: Hogrefe & Huber.
Slavin, R. E. (1986). Best-evidence synthesis: An alternative to meta-analytic and traditional reviews. Educational Researcher, 15, 5–11. doi:10.3102/0013189X015009005.
Spector, P. E., & Levine, E. L. (1987). Meta-analysis for integrating study outcomes: A Monte Carlo study of its susceptibility to type I and type II errors. Journal of Applied Psychology, 72, 3–9. doi:10.1037/0021-9010.72.1.3.
Steel, P. D., & Kammeyer-Mueller, J. D. (2002). Comparing meta-analytic moderator estimation techniques under realistic conditions. Journal of Applied Psychology, 87, 96–111. doi:10.1037/0021-9010.87.1.96.
Steel, P. D., & Kammeyer-Mueller, J. D. (2008). Bayesian variance estimation for meta-analysis: Quantifying our uncertainty. Organizational Research Methods, 11, 54–78. doi:10.1177/1094428107300339.
Stone-Romero, E. F., & Anderson, L. E. (1994). Relative power of moderated multiple regression and the comparison of subgroup correlation coefficients for detecting moderating effects. Journal of Applied Psychology, 79, 354–359. doi:10.1037/0021-9010.79.3.354.
Sutton, A. J. (2005). Evidence concerning the consequences of publication and related biases. In H. R. Rothstein, A. J. Sutton, & M. Borenstein (Eds.), Publication bias in meta analysis: Prevention, assessment, and adjustments (pp. 175–192). West Sussex: Wiley.
Sutton, A. J., Abrams, K. R., Jones, D. R., Sheldon, T. A., & Song, F. (2000). Methods for meta-analysis in medical research. London: Wiley.
Terrizzi, J. A., Shook, N. J., & McDaniel, M. A. (2013). The behavioral immune system and social conservatism: A meta-analysis. Evolution and Human Behavior, 34, 99–108. doi:10.1016/j.evolhumbehav.2012.10.003.
Thomas, H. (1988). What is the interpretation of the validity generalization estimate Sρ² = Sr² − Se²? Journal of Applied Psychology, 73, 679–682. doi:10.1037/0021-9010.73.4.679.
Thompson, S. G., & Higgins, J. P. T. (2002). How should meta-regression analyses be undertaken and interpreted? Statistics in Medicine, 21, 1559–1573. doi:10.1002/sim.1187.
Van Iddekinge, C. H., Roth, P. L., Raymark, P. H., & Odle-Dusseau, H. N. (2012). The criterion-related validity of integrity tests: An updated meta-analysis. Journal of Applied Psychology, 97, 499–530. doi:10.1037/a0021196.
Viechtbauer, W. (2007). Confidence intervals for the amount of heterogeneity in meta-analysis. Statistics in Medicine, 26, 37–52. doi:10.1002/sim.2514.
Viswesvaran, C., & Sanchez, J. I. (1998). Moderator search in meta-analysis: A review and cautionary note on existing approaches. Educational and Psychological Measurement, 58, 77–87. doi:10.1177/0013164498058001007.
White, H. D. (2009). Scientific communication and literature retrieval. In H. Cooper, L. V. Hedges, & J. C. Valentine (Eds.), The handbook of research synthesis and meta-analysis (2nd ed., pp. 51–71). New York: Russell Sage Foundation.
Whitener, E. M. (1990). Confusion of confidence intervals and credibility intervals in meta-analysis. Journal of Applied Psychology, 75, 315–321. doi:10.1037/0021-9010.75.3.315.
Kepes, S., McDaniel, M.A., Brannick, M.T. et al. Meta-analytic Reviews in the Organizational Sciences: Two Meta-analytic Schools on the Way to MARS (the Meta-analytic Reporting Standards). J Bus Psychol 28, 123–143 (2013). https://doi.org/10.1007/s10869-013-9300-2