Meta-analyses that statistically synthesize evidence from multiple research studies identified using systematic review methods can play a critical role in advancing evidence-informed prevention science. Decisions about prevention policies, programs, and practices should be informed by cumulative bodies of empirical evidence rather than single studies (Brownson et al., 1999; Gottfredson et al., 2015). When embedded in a well-conducted systematic review, meta-analysis is a powerful tool for synthesizing evidence and exploring research questions that are difficult to address in individual studies, such as the association of individual study limitations with intervention effect estimates, the replicability of empirical findings, and variation in effect estimates across populations and settings.

The current paper serves as a brief introduction to this special issue of Prevention Science, entitled “Modern Meta-Analytic Methods in Prevention Science,” which aims to highlight recent developments in meta-analytic methods and demonstrate their application to prevention research. The manuscripts in this special issue present original findings from meta-analyses that implement contemporary meta-analytic methods on timely prevention science topics. In addition, several of the included manuscripts provide methodological guidance on applying these meta-analytic approaches, including open data, code, and materials to help promote uptake of these methods in the community. In this brief introduction, we first provide a historical perspective on the use and evolution of meta-analysis in prevention science and related fields; we then highlight several common themes across the papers and invited commentaries included in the special issue; and finally we conclude with recommendations for accelerating the development and uptake of modern meta-analytic methods to support prevention-related efforts.

Evolution of Meta-Analytic Methods

Systematic reviews are comprehensive literature reviews that employ systematic, transparent, and reproducible methods for identifying, appraising, and summarizing the research literature. Systematic reviews often aim to synthesize quantitative evidence from the studies included in the review. This component of evidence synthesis is known as meta-analysis: the statistical synthesis of empirical findings from multiple research studies. Although widely applied today as an evidence synthesis technique, meta-analysis is a relatively modern scholarly invention (Hunt, 1997). The statistician Karl Pearson used meta-analytic techniques as early as 1904 to statistically combine results across 11 studies examining the effects of typhoid vaccines (Pearson, 1904). However, the term “meta-analysis” is thought to have been coined by Gene Glass in his 1976 presidential address to the American Educational Research Association (Glass, 1976). The term was subsequently popularized following Larry Hedges and Ingram Olkin’s seminal text on the statistical methods of meta-analysis (Hedges & Olkin, 1985). Since that time, meta-analysis has continued to grow in popularity in the social, behavioral, and health sciences (Gurevitch et al., 2018; Page et al., 2018), to the point that many published reviews may be redundant and unnecessary (Ioannidis, 2016).

With this rapid growth in the number of published reviews and meta-analyses, there has been parallel growth in the development of meta-analytic techniques to handle the increasingly complex quantitative questions that researchers wish to answer and the increasingly complex evidence that researchers wish to synthesize (Tanner-Smith & Grant, 2018). For example, over the past two decades, there has been increasing emphasis on the development and application of meta-analytic approaches for understanding heterogeneity in effects (Parr et al., 2019). Especially when individual participant data are available, contemporary techniques allow meta-analyses to move beyond simple questions such as “does this prevention program work” or “what is a risk factor for this maladaptive outcome” to more nuanced questions about variability in effects across contexts, settings, and populations, contributing to an emerging “heterogeneity revolution” in the behavioral sciences (Bryan et al., 2021). Despite this rapid evolution of meta-analytic techniques and approaches, and despite growing access to the datasets and research materials needed to answer complex questions, there is still a lag between the development of new techniques and their uptake by researchers in the field.

Contributions of and Future Directions Inspired by this Special Issue

Several common themes connect the manuscripts in the special issue, with important implications for summarizing empirical evidence in prevention science as well as for future methodological development in evidence synthesis approaches. First, modern meta-analytic methods can assist prevention scientists in both their theory testing and theory building efforts. Meta-analytic structural equation modeling (MASEM) approaches may be particularly useful in this regard (Cheung & Chan, 2005; Jak et al., 2021). MASEM can be used by researchers interested in examining different theoretical models and hypothesized causal pathways, or in addressing questions around measurement and construct validity of proposed theoretical constructs (Hagger et al., 2018; Morren & Grinstein, 2021). For example, Shen et al. (2021; in this issue) use MASEM to test different theories about the associations between hot and cool executive functions as they relate to children’s injury risk. MASEM can also be useful for addressing questions about mediation and mechanisms of action (Bergh et al., 2016), as exemplified by Valentine et al.'s (2021; in this issue) use of MASEM to evaluate whether and how dysfunctional attitudes might have an indirect effect on depression via automatic thoughts. Sequential Bayesian data synthesis (SBDS) approaches offer another useful tool for examining questions of mediation; Wurpts et al. (2021; in this issue) demonstrate the flexibility of SBDS for synthesizing evidence about mediation effects and provide users with an easy-to-use SAS macro.
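
To illustrate the basic two-stage MASEM workflow, the sketch below uses the metaSEM package in R with hypothetical correlation matrices among a predictor (X), a putative mediator (M), and an outcome (Y): stage 1 pools the correlation matrices under a random-effects model, and stage 2 fits hypothesized mediation paths to the pooled matrix. This is a minimal sketch under those assumptions, not the specification used by any of the papers in this issue, and the variable names, matrices, and sample sizes are invented for illustration.

```r
## Minimal two-stage MASEM sketch using the metaSEM package (hypothetical data).
library(metaSEM)

vars <- c("X", "M", "Y")
r1 <- matrix(c(1, .30, .20,  .30, 1, .40,  .20, .40, 1), 3, 3, dimnames = list(vars, vars))
r2 <- matrix(c(1, .25, .15,  .25, 1, .35,  .15, .35, 1), 3, 3, dimnames = list(vars, vars))
r3 <- matrix(c(1, .35, .25,  .35, 1, .45,  .25, .45, 1), 3, 3, dimnames = list(vars, vars))

## Stage 1: pool the correlation matrices under a random-effects model
stage1 <- tssem1(Cov = list(r1, r2, r3), n = c(250, 310, 180),
                 method = "REM", RE.type = "Diag")
summary(stage1)

## Stage 2: fit the hypothesized paths (X -> M -> Y, plus a direct X -> Y path)
## to the pooled correlation matrix. A holds directed paths; S holds variances
## and residual variances. Entries such as "0.2*a" give a start value and label.
A <- matrix(c(0,       0,       0,
              "0.2*a", 0,       0,
              "0.2*c", "0.2*b", 0),
            nrow = 3, ncol = 3, byrow = TRUE, dimnames = list(vars, vars))
S <- matrix(c(1, 0,          0,
              0, "0.5*errM", 0,
              0, 0,          "0.5*errY"),
            nrow = 3, ncol = 3, byrow = TRUE, dimnames = list(vars, vars))
stage2 <- tssem2(stage1, Amatrix = A, Smatrix = S, diag.constraints = TRUE)
summary(stage2)
```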

Many prevention scientists are likely familiar with meta-analysis techniques as they apply to the synthesis of aggregate study-level data and average effects across the population. Some meta-analyses also use study-level information to explore associations between the average effects of interventions or exposures and the average values of individual-level moderators (e.g., average age of participants, average baseline severity, percentage of girls in the sample, percentage of African Americans in the sample). Although prevalent in the published literature, these types of meta-analyses are susceptible to ecological bias when making inferences about individual characteristics using aggregate statistics (Fisher et al., 2017). They are severely limited in their ability to address questions about participant-level variability in effects and might offer misleading answers to questions about for whom an intervention works best or for whom a risk/protective factor is most important.
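
For concreteness, the sketch below shows the kind of aggregate-data moderator analysis described above, using the metafor package in R with hypothetical effect sizes and a hypothetical study-level moderator (mean participant age). The estimated slope describes how average effects covary with average age across studies; it does not license inferences about how effects vary with age across individual participants.

```r
## Study-level (aggregate data) meta-regression sketch with metafor.
## dat holds one effect size per study: yi = standardized mean difference,
## vi = sampling variance, mean_age = average participant age (all hypothetical).
library(metafor)

dat <- data.frame(
  study    = paste0("S", 1:8),
  yi       = c(0.31, 0.18, 0.42, 0.05, 0.27, 0.36, 0.12, 0.22),
  vi       = c(0.020, 0.015, 0.030, 0.010, 0.025, 0.018, 0.012, 0.022),
  mean_age = c(12.5, 13.0, 11.8, 14.2, 12.9, 11.5, 13.6, 12.2)
)

## Random-effects meta-regression on the study-level moderator. The slope
## reflects an across-study (ecological) association, not a within-study,
## participant-level moderation effect.
res <- rma(yi, vi, mods = ~ mean_age, data = dat, method = "REML")
summary(res)
```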

As demonstrated in several manuscripts included in this special issue, individual participant data (IPD) meta-analyses that synthesize individual-level data can advance prevention science theory and practice while overcoming the aforementioned limitations of aggregate data meta-analysis methods. There are numerous practical barriers to conducting IPD meta-analyses, foremost among them the time and resources needed to obtain and harmonize multiple individual-level datasets (Nevitt & Tudur Smith, 2022; Ventresca et al., 2020). Nevertheless, IPD can offer unique benefits above and beyond aggregate data, including the development of clinical prediction models (Riley et al., 2021), provision of causally interpretable results (Barker et al., 2021; in this issue), and reduced risk of ecological bias when assessing participant-level variability in effects (Cooper & Patall, 2009). For example, Huh et al. (2021; in this issue) use IPD and MASEM methods to investigate the potential mediating role of protective behavioral strategies in the effects of brief motivational interventions on college students’ alcohol-related problems. This IPD meta-analysis was the result of an impressive collaborative project that obtained and harmonized data from 24 studies, a process that was nonetheless laborious and time-intensive given attitudinal and structural barriers to data sharing across investigators (Mun et al., 2015). With the growing open science movement that values and promotes accessibility, transparency, and reproducibility in social and behavioral research (Grant et al., 2022), IPD meta-analyses could become more prevalent as data sharing becomes normative and structurally incentivized (Polanin & Terzian, 2019).
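
As a rough illustration of how IPD change the modeling possibilities, the sketch below fits a one-stage IPD meta-analysis with the lme4 package in R, using simulated participant-level data and hypothetical variable names. Centering the moderator within studies separates the participant-level (within-study) interaction from the ecological (across-study) association that aggregate-data moderator analyses capture.

```r
## One-stage IPD meta-analysis sketch using lme4 (simulated, hypothetical data).
## Each row is a participant nested within a study; the model pools the
## treatment effect across studies and tests participant-level moderation.
library(lme4)

set.seed(2022)
k <- 10; n <- 60
ipd <- data.frame(
  study = rep(paste0("S", 1:k), each = n),
  treat = rep(0:1, length.out = k * n),
  age   = round(rnorm(k * n, mean = 15, sd = 2), 1)
)
ipd$y <- 0.3 * ipd$treat + 0.05 * ipd$age +
  0.02 * ipd$treat * (ipd$age - 15) +
  rnorm(k * n) + rep(rnorm(k, sd = 0.2), each = n)

## Center the covariate within studies so the treatment-by-age interaction
## reflects participant-level (within-study) moderation rather than an
## ecological, across-study association.
ipd$age_c <- ipd$age - ave(ipd$age, ipd$study)

## Random intercepts and random treatment effects across studies.
fit <- lmer(y ~ treat * age_c + (1 + treat | study), data = ipd)
summary(fit)
```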

Another pressing issue in prevention science is understanding the comparative effectiveness of different types of programs or interventions for addressing a target outcome or condition. Network meta-analysis methods—which are widely used in medicine but still underutilized in prevention science—can be used to address questions about the comparative effectiveness of different types of interventions and comparators (Mavridis et al., 2015), or even the comparative effects of different intervention components or active ingredients (Miklowitz et al., 2021; Welton et al., 2009). In addition to the transportability methods for IPD meta-analysis presented in this special issue (Barker et al., 2021), network meta-analysis techniques can also prompt future research discovery and theory building by permitting examination of indirect comparisons, namely contrasts between interventions that have never been compared head-to-head in individual studies. For instance, Seitidis et al. (2021; in this issue) demonstrate the application of network meta-analysis to examine the comparative effectiveness of branded brief alcohol interventions, permitting indirect comparisons of interventions that were never directly compared in the primary research literature.
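
The sketch below illustrates the core idea using the netmeta package in R with hypothetical contrast-level data: interventions B and C are each compared only against a control condition, yet the network supplies an indirect estimate of B versus C. The study labels, intervention labels, and effect sizes are invented for illustration.

```r
## Network meta-analysis sketch using the netmeta package (hypothetical data).
## Each row is a pairwise contrast from one study: TE = effect estimate
## (here a standardized mean difference), seTE = its standard error.
library(netmeta)

d <- data.frame(
  studlab = paste0("S", 1:6),
  treat1  = c("B", "B", "B", "C", "C", "C"),
  treat2  = rep("control", 6),
  TE      = c(0.32, 0.25, 0.40, 0.18, 0.22, 0.10),
  seTE    = c(0.12, 0.10, 0.15, 0.11, 0.13, 0.09)
)

nma <- netmeta(TE = TE, seTE = seTE, treat1 = treat1, treat2 = treat2,
               studlab = studlab, data = d, sm = "SMD",
               reference.group = "control")
summary(nma)

## Ranking of interventions via P-scores
netrank(nma)
```

Note that the indirect B versus C estimate rests on the transitivity assumption, that is, that the B trials and C trials are similar enough in their populations and comparison conditions for the indirect contrast to be meaningful.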

Modern meta-analytic methods also offer new options for handling multiple effect size estimates drawn from the same sample of participants. Multiple data sources and multiplicity in effect size estimates are the norm rather than the exception (Mayo-Wilson et al., 2018), which creates several problems, including opportunities for cherry-picking (Mayo-Wilson et al., 2017a, b). For instance, studies may have multiple measurements of the same outcome construct, multiple follow-up points, multiple model estimation approaches, and shared comparison group arms within multi-arm trials. It is also common for effect size data to be clustered in larger units (e.g., geographic regions, research groups or labs, funding mechanisms) or to be clustered at multiple levels. Where multiple results could be used for analysis, meta-analysts might choose a single result to include or use one of several statistical methods available to handle multiplicity (López-López et al., 2018). While there are methodological options for addressing selection effects (e.g., pre-specifying ranked preferences of measures for a selected construct), modern meta-analytic options include the robust variance estimation approaches demonstrated in this special issue (Pustejovsky & Tipton, 2021; applied in Polanin et al., 2021), full multivariate approaches that jointly model multiple outcomes (Riley et al., 2017), and multilevel meta-analysis models (Fernández-Castilla et al., 2020).
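
The sketch below approximates this general workflow using the metafor and clubSandwich packages in R: a multilevel working model with an assumed within-study correlation among effect sizes, followed by cluster-robust (sandwich) standard errors with small-sample corrections. The data and the assumed correlation of r = .60 are hypothetical, and the sketch is not intended to reproduce the specific models reported in the special issue papers.

```r
## Handling multiple (dependent) effect sizes per study: a multilevel working
## model combined with cluster-robust variance estimation, in the spirit of a
## correlated-and-hierarchical effects approach. Data are hypothetical.
library(metafor)
library(clubSandwich)

dat <- data.frame(
  study = rep(paste0("S", 1:6), times = c(3, 2, 4, 1, 2, 3)),
  esid  = c(1:3, 1:2, 1:4, 1, 1:2, 1:3),
  yi    = c(0.35, 0.28, 0.41, 0.10, 0.15, 0.22, 0.30, 0.18,
            0.25, 0.44, 0.05, 0.12, 0.33, 0.27, 0.20),
  vi    = c(0.02, 0.03, 0.02, 0.01, 0.02, 0.03, 0.02, 0.02,
            0.01, 0.04, 0.02, 0.01, 0.03, 0.02, 0.02)
)

## Working covariance matrix assuming effect sizes within a study are
## correlated at r = 0.6 (an assumed, not estimated, value).
V <- impute_covariance_matrix(vi = dat$vi, cluster = dat$study, r = 0.6)

## Multilevel model: random effects for studies and for effect sizes within studies.
fit <- rma.mv(yi, V, random = ~ 1 | study / esid, data = dat)

## Cluster-robust standard errors with small-sample (CR2) corrections, which
## guard against misspecification of the assumed working covariance structure.
coef_test(fit, vcov = "CR2", cluster = dat$study)
```

The appeal of this approach is that the robust standard errors remain approximately valid even if the assumed within-study correlation is wrong, while the multilevel working model keeps the point estimates reasonably efficient.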

Although both frequentist and Bayesian methods can be applied to meta-analyses (Gronau et al., 2021; Thompson & Semma, 2020), Bayesian approaches can be particularly useful when researchers want to incorporate prior knowledge into model estimation. Incorporating informative priors can be useful when parameters would otherwise be imprecisely estimated because of a small number of included studies (Valentine et al., 2017), or when estimating the effects of both intervention classes and their constituent interventions (Mayo-Wilson et al., 2014). Bayesian approaches can also play an important role in updating meta-analytic findings in light of new information, which will be increasingly relevant for assessing stability or change in the findings of “living” systematic reviews or other meta-analyses with periodic updates (Simmonds et al., 2017). Lastly, the interpretation of parameter intervals in Bayesian analyses is more intuitive for non-technical audiences: a credible interval can be interpreted directly as the probability that the population value lies within that interval. In this issue, Thompson et al. (2022) demonstrate these advantages of Bayesian methods for meta-regression, providing example R code to facilitate future adoption of these methods.
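
As a generic illustration (not the example code provided by Thompson et al., 2022), the sketch below fits a Bayesian random-effects meta-regression with the brms package in R, using hypothetical effect sizes, a hypothetical dose moderator, and weakly informative priors, and closes with a posterior probability statement of the kind described above.

```r
## Bayesian random-effects meta-regression sketch using brms (hypothetical data).
## yi = effect size, sei = its standard error, dose = a study-level moderator
## (e.g., number of intervention sessions); all values are invented.
library(brms)

dat <- data.frame(
  study = paste0("S", 1:8),
  yi    = c(0.30, 0.22, 0.41, 0.08, 0.26, 0.35, 0.14, 0.19),
  sei   = c(0.14, 0.12, 0.17, 0.10, 0.15, 0.13, 0.11, 0.14),
  dose  = c(4, 6, 8, 2, 5, 8, 3, 4)
)

## Weakly informative priors on the average effect, the moderator slope, and
## the between-study heterogeneity; these could be replaced with informative
## priors derived from earlier meta-analyses.
fit <- brm(
  yi | se(sei) ~ 1 + dose + (1 | study),
  data = dat,
  prior = c(set_prior("normal(0, 1)", class = "Intercept"),
            set_prior("normal(0, 0.5)", class = "b"),
            set_prior("cauchy(0, 0.3)", class = "sd")),
  chains = 4, iter = 4000, seed = 123
)
summary(fit)

## Posterior probability that the dose-response slope is positive
hypothesis(fit, "dose > 0")
```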

This special issue also includes two invited commentaries, which highlight how the included papers advance meta-analytic contributions to prevention science research, practice, and policy. Melendez-Torres (2021) describes the important role of these modern meta-analytic techniques in addressing the complex multivariate relationships of interest regarding how, why, and under what conditions preventive interventions may be optimized, and in ensuring those findings are usable by and informative to key decision makers. Malin and Fortunato (2022) further emphasize the critical role that rigorous meta-analyses can play in evidence-based policy and decision making, informing the work of federal agencies such as the Administration for Children and Families in the US Department of Health and Human Services. Together, these commentaries underscore the complementary roles meta-analyses can play in advancing scientific prevention research while influencing prevention policy and practice at local, regional, and national levels.

Conclusions and Recommendations

This special issue aims to accelerate the uptake of modern meta-analytic techniques in the field of prevention science. By presenting applied demonstrations of these techniques, the included manuscripts make them more widely accessible to prevention science researchers, facilitating their use in future meta-analyses. Looking forward, we recommend that meta-analysts in the field of prevention continue to take advantage of open data and code and embrace open science principles by registering protocols prospectively, distinguishing between confirmatory and exploratory analyses, and documenting and archiving project materials, code, and workflows (Grant et al., 2022). We also recommend that researchers engage key stakeholders throughout the meta-analysis and evidence synthesis process, which is essential for ensuring that meta-analytic findings will have relevance and utility for prevention policy and practice (Tanner-Smith & Grant, 2018). Finally, we recommend that methodologists and statisticians continue to develop techniques and tools that will accelerate the rapid execution, translation, and dissemination of rigorous meta-analyses intended to inform prevention-related efforts.