A demonstration and evaluation of the use of cross-classified random-effects models for meta-analysis

  • Belén Fernández-Castilla
  • Marlies Maes
  • Lies Declercq
  • Laleh Jamshidi
  • S. Natasha Beretvas
  • Patrick Onghena
  • Wim Van den Noortgate

Abstract

It is common for the primary studies in meta-analyses to report multiple effect sizes, generating dependence among them. Hierarchical three-level models have been proposed as a means to deal with this dependency. Sometimes, however, dependency may be due to multiple random factors, and random factors are not necessarily nested, but rather may be crossed. For instance, effect sizes may belong to different studies, and, at the same time, effect sizes might represent the effects on different outcomes. Cross-classified random-effects models (CCREMs) can be used to model this nonhierarchical dependent structure. In this article, we explore by means of a simulation study the performance of CCREMs in comparison with the use of other meta-analytic models and estimation procedures, including the use of three- and two-level models and robust variance estimation. We also evaluated the performance of CCREMs when the underlying data were generated using a multivariate model. The results indicated that, whereas the quality of fixed-effect estimates is unaffected by any misspecification in the model, the standard error estimates of the mean effect size and of the moderator variables’ effects, as well as the variance component estimates, are biased under some conditions. Applying CCREMs led to unbiased fixed-effect and variance component estimates, outperforming the other models. Even when a CCREM was not used to generate the data, applying the CCREM yielded sound parameter estimates and inferences.
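The cross-classified structure described above can be made concrete with a minimal Python sketch of the data-generating model, in which each effect size receives both a study random effect and a crossed (non-nested) outcome random effect, plus sampling error. All parameter values below (mean effect, variance components, sampling standard deviation) are illustrative assumptions, not values from the article's simulation study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative CCREM data-generating model: every effect size d
# belongs to one study AND one outcome, and the two factors are
# crossed rather than nested. All numeric values are assumptions.
n_studies, n_outcomes = 200, 50
mu = 0.5           # overall mean effect size (assumed)
tau_study = 0.2    # SD of study random effects (assumed)
tau_outcome = 0.1  # SD of outcome random effects (assumed)
sigma_e = 0.15     # sampling SD, held constant for simplicity

# One random effect per study and one per outcome
u_study = rng.normal(0.0, tau_study, n_studies)
u_outcome = rng.normal(0.0, tau_outcome, n_outcomes)

# Fully crossed design: every study reports every outcome
study_idx = np.repeat(np.arange(n_studies), n_outcomes)
outcome_idx = np.tile(np.arange(n_outcomes), n_studies)

# Effect size = mean + study effect + outcome effect + sampling error
d = (mu
     + u_study[study_idx]
     + u_outcome[outcome_idx]
     + rng.normal(0.0, sigma_e, n_studies * n_outcomes))
```

Because the study and outcome effects are crossed, two effect sizes from the same study share `u_study` while two effect sizes for the same outcome (from different studies) share `u_outcome`; a purely hierarchical three-level model cannot represent both sources of dependence at once, which is the misspecification the simulation study evaluates.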

Keywords

Meta-analysis · Multiple effect sizes · Cross-classified random-effects model

Notes

Author note

This research was supported by the Research Foundation–Flanders (FWO), through Grant G.0798.15N to the University of Leuven, Belgium. The opinions expressed are those of the authors and do not represent views of the FWO. For the simulations, we used the infrastructure of the VSC–Flemish Supercomputer Center, funded by the Hercules Foundation and the Flemish Government, Department EWI.


Copyright information

© Psychonomic Society, Inc. 2018

Authors and Affiliations

  1. Faculty of Psychology and Educational Sciences, KU Leuven, University of Leuven, Kortrijk, Belgium
  2. Imec-ITEC, KU Leuven, University of Leuven, Leuven, Belgium
  3. Research Foundation Flanders (FWO), Brussels, Belgium
  4. University of Texas at Austin, Austin, USA
