Implementation processes encompass a range of unique and shared strategies and tools that are used or facilitated by people or systems to support implementation of an innovation (Proctor et al., 2013). Whether they are events, ongoing activities, interactions, or phenomena, implementation processes can be disentangled into elements of implementation that are present, occur, or emerge in time and space. These elements may or may not influence implementation mechanisms and outcomes in linear and non-linear ways. What we term elements of implementation can include what implementation science theorizes as discrete implementation strategies (Powell et al., 2015), implementation determinants (Nilsen & Bernhardsson, 2019), implementation competencies (Metz et al., 2021), or any other relevant part of implementation processes. Implementation science has largely adopted conceptualizations of discrete implementation strategies as the actionable activities and events we use to facilitate implementation (Powell et al., 2015), similar to what intervention science conceptualizes as discrete practice elements in interventions (i.e., “we define a practice element as a discrete clinical technique or strategy used as part of a larger intervention plan”; Chorpita et al., 2005). Consequently, we tend to explain implementation in terms of regulatory causal pathways in which implementation strategies exert effects on determinants and outcomes by activating mechanisms of change (Lewis et al., 2018).

Some implementation strategies are relatively “static” and granular, such as a calendar reminder or an implementation checklist. Other strategies, however, may be more dynamic, emergent, and ecological processes that work as several interconnected elements when they facilitate implementation, such as tailoring to contextual needs, co-creation, leadership and organizational development to improve implementation climate, or learning collaboratives. The interconnected elements that make up these dynamic strategies are not necessarily just disentangled parts of what we typically describe as a discrete implementation strategy. These elements may, for instance, include specific contextual circumstances or properties, relationships, cultural values, and competencies of actors in implementation, and these, rather than the discrete events or activities, may be the elements of the strategy that catalyze or determine causal effects (Wagner, 1999). Thus, when implementation processes drive change, the causes can also be attributed to configurations and patterns of different types of elements of implementation acting together, or on each other, as dynamic processes or systems (Braithwaite et al., 2018; Sterman, 2002). Different conceptualizations and causal theories do not necessarily exclude each other. Instead, different explanations of observations may complement each other and enrich our view of how implementation works. Doing so, however, necessitates considering the relevant elements that can be at play. Given the vast number of potentially relevant elements in implementation, we may benefit from scientific tools that help us narrow them down to those most likely to be influential in a given context. Common elements approaches offer methods for such distillation.

Several taxonomies, reporting standards, and frameworks exist across the implementation literature to help specify elements and outcomes (e.g., the ERIC taxonomy, Powell et al., 2015; the AACTT framework, Presseau et al., 2019; the Behavior Change Technique Taxonomy, Michie et al., 2013; the “name it, define it, specify it” recommendations, Proctor et al., 2013; StaRI, Pinnock et al., 2017; TIDieR, Hoffmann et al., 2014; and AGReMA, Cashin et al., 2022). Implementation frameworks and syntheses based on comprehensive theoretical and empirical work arguably articulate elements that are commonly important for implementation (e.g., Cane et al., 2012; Damschroder et al., 2009; Moullin et al., 2019; Fixsen et al., 2005; Greenhalgh et al., 2004). However, the importance or utility of one or more common elements can depend on other elements, such as contextual and process factors. These dependencies suggest the need to ask more specific questions: When and under what circumstances do certain implementation strategies tend to work, and when do they not? Do they work as discrete strategies, or do they need to be interconnected parts of a blended, ecological, or sequential strategy? When do specific contextual determinants implicate a particular combination of strategies, and does this depend on the characteristics of the innovation being implemented or the competencies of implementation practitioners? What are the core elements of effective implementation strategies, and which strategies tend to be adaptable, and when? Answers to such questions are complex and challenging to model generically, in part because they have conditions attached to them and constantly change. However, more advanced common elements methodologies can potentially provide useful answers, and in this paper, we propose language, methodology, and recommendations for the field to realize such potential.

Common elements approaches have established a foothold in research on psychosocial, academic, and physical interventions (i.e., intervention science), and the concept of common elements proliferated as an approach to identify intervention practices common to effective interventions (Chorpita et al., 2005). This conceptual methodology paper discusses how the concept also applies to implementation science and how developments in common elements conceptualization and methods can facilitate practical implications for both intervention and implementation research and practice. We provide a step-by-step guide for using an advanced common elements methodology to synthesize and distill the intervention and implementation science literature into evidence-informed hypotheses about what, how, when, and where interventions and implementations tend to succeed. Our specific goals are to:

  1. Provide a narrative review of the common elements concept and how its applications may advance implementation research and usability for implementation practice.

  2. Provide a step-by-step guide to a systematic common elements methodology that synthesizes and distills the intervention and implementation literature together.

  3. Offer recommendations for advancing element-level evidence in implementation science.

A Narrative Review of the Common Elements Concept

Similarities Between Intervention Science and Implementation Science

Intervention science is an umbrella term for research focused on human interventions within health and social sciences. Complex health and social interventions (Hawe et al., 2004) across service systems are likely to share commonalities such as practices, processes, structures, and delivery formats. Complex interventions (e.g., evidence-based programs) often also promote the same competencies in practitioners. There are debates about whether the focal points of intervention research should be intervention techniques (e.g., discrete practice elements; McLeod et al., 2017), common factors (e.g., therapeutic alliance, empathy, and expectations; Wampold, 2015), therapist skills (e.g., interpersonal capacities, cognitive processing, self-assessment; Heinonen & Nissen-Lie, 2020; Hill et al., 2017), principles and processes of change (e.g., process-based therapy; Hofmann & Hayes, 2019), or more complex manualized programs (e.g., Parent Child Interaction Therapy; Eyberg & Funderburk, 2011), even though scholars generally recognize that all of the above are more or less relevant to the outcomes of interventions (Mulder et al., 2017).

Implementation science mirrors intervention science in many regards, partly because they share philosophical and theoretical foundations (Engell, 2021). For example, blended implementation strategies resemble complex health programs; discrete implementation strategies resemble practice elements and techniques; implementation competencies resemble common factors and therapist skills; and implementation drivers and processes resemble intervention principles and processes of change. There are also similar debates about what is more critical for successful implementation outcomes (e.g., a greater focus on discrete implementation strategies, blended strategies, dynamic systems, implementation competencies and relational skills, or guiding frameworks). Intervention and implementation science also share similar and connected research-to-practice gaps and call for the same types of research and methods to close them (e.g., more pragmatic applications of scientific evidence, more contextual alignment, and advancing training; Lyon et al., 2020b; Westerlund et al., 2019). These similarities indicate that methodological innovations in intervention science, such as common elements approaches, are likely to generalize to the younger but rapidly developing implementation science. However, implementation research need not replicate the limitations that intervention research has encountered, such as limited implementability of interventions (Lyon et al., 2020a) and a lacking understanding of how intervention practices work (e.g., active ingredients and mechanisms of change; Huibers et al., 2021; Kazdin, 2009); early accommodation of advanced common elements approaches may help address and avoid these limitations. There may also be benefits to purposefully connecting intervention and implementation science in conceptualizations of common elements.

Common Elements Approaches in Intervention Science

The many shared features among complex mental health interventions led some scholars to argue that the elements found to be common across several effective interventions are more likely to contribute to positive outcomes than less common elements (Chorpita et al., 2005). That is, they are more likely to be the active or potent ingredients of interventions contributing to positive outcomes, or what Embry and Biglan referred to as evidence-based kernels (Embry & Biglan, 2008). Simply defined, common elements are practices or processes frequently shared by a selection of interventions (Engell et al., 2020). Depending on the philosophical orientation and the methodology used to identify them, common elements are assumed to have certain qualities or characteristics (e.g., active ingredients, essential elements, evidence-informed elements). Common elements approaches disentangle complex interventions into discrete elements and then describe or evaluate the relative merits of common elements across the scientific literature using varying levels of refinement.

Typical common elements approaches include mapping and distilling practice elements through literature reviews (Garland et al., 2008), identifying (potentially) active ingredients (e.g., Abry et al., 2015), using common elements for dissemination and implementation (e.g., Centre for Evidence & Implementation, 2020), and using identified common elements to inform the design or re-design of interventions (e.g., Engell et al., 2021). Originally, the analytical approaches used in common elements work were considered purely descriptive (Chorpita et al., 2007). However, work on common elements and similar concepts has developed in several directions. Traditional common elements methods are usually based on expert opinions (e.g., Delphi methods; Garland et al., 2008), descriptive commonalities or frequencies (e.g., in systematic reviews, van der Pol et al., 2019; or practice-based observation, Hogue et al., 2019), statistical testing of associations (e.g., meta-regressions; Leijten et al., 2019), or combinations of these methods. See Leijten et al. (2021) for a scoping review of methods to evaluate active ingredients in interventions. Recent developments involve methods and statistics for reviewing and testing conjunctions of client characteristics, practice elements, and delivery formats (Furukawa et al., 2021), and for reviewing combinations of different types of intervention practices, processes, contextual characteristics, and discrete implementation strategies to identify configurations that tend to work (Engell, 2021). These different but related methodologies share the common goals of generating hypotheses about which intervention contents are more likely to be effective and, subsequently, enabling fine-grained testing of intervention contents and mechanisms (Chorpita et al., 2011; Engell et al., 2020).
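To make the basic descriptive logic concrete, a minimal sketch is shown below. The element labels and studies are entirely hypothetical; frequency values here are simply the proportion of selected interventions that include each element.

```python
# A minimal sketch of descriptive frequency-based distillation, using
# hypothetical study data. Element labels and counts are illustrative only.
from collections import Counter

# Each entry: the set of practice elements coded in one selected intervention.
selected_interventions = [
    {"psychoeducation", "exposure", "homework", "parent praise"},
    {"psychoeducation", "exposure", "cognitive restructuring"},
    {"exposure", "homework", "relaxation"},
    {"psychoeducation", "exposure", "homework"},
]

element_counts = Counter(e for study in selected_interventions for e in study)
n = len(selected_interventions)

# Frequency value: proportion of selected interventions that include the element.
for element, count in element_counts.most_common():
    print(f"{element}: {count}/{n} = {count / n:.0%}")
```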

Common elements approaches have, in part, evolved as a response to frequently encountered issues with the implementation of evidence-based psychosocial care in the form of comprehensive programs with standardized protocols and models of implementation. The issues raised include inflexible programs and standardized protocols lacking contextual fit and sensitivity to unpredictable variation and change (Stirman & Comer, 2018). Comprehensive programs can also be demanding to implement, sustain, and coordinate in the numbers needed to cover the full range of client needs that organizations providing psychosocial care encounter (Chorpita & Daleiden, 2018). Common elements approaches have been linked with a range of benefits for implementation that can counter such issues (Barth et al., 2020; Bolton et al., 2014; Conroy et al., 2019; Murray et al., 2020; Weiss et al., 2015; Weisz et al., 2012). For instance, increasing our understanding of effective intervention elements, and the mechanisms they potentiate, can focus implementation of interventions on the elements most likely to contribute to positive outcomes and discard elements that are likely unnecessary or harmful. Doing so may reduce the complexity of interventions and increase their implementability as well as their efficiency and effectiveness. Combining element approaches with research using multiple causal theories can also help increase our understanding of how, why, and when interventions tend to work, complementing the primacy of the whether and how often efficacy/effectiveness questions that have traditionally dominated the evidence-based paradigm (Engell, 2021). Such understandings may help reconcile static population-based evidence with implementation of more personalized, ecological, and dynamic approaches to care (Engell et al., 2021), and unveil adaptations to interventions that tend to be favorable under different circumstances (e.g., Park et al., 2022). Common elements can also be “building blocks” to be reassembled and tailored into new or adapted interventions or other models and forms of implementation in practice (Chorpita et al., 2021; Engell et al., 2021). The common elements concept still has important limitations, which we discuss later, but it appears useful for illuminating what tends to drive effective interventions and for making intervention evidence implementable and accessible to complex care settings.

Common Elements Logic and Language

There is no agreed-upon nomenclature in common elements research, and many terms are used interchangeably and differently. In our common elements work, we review and synthesize intervention and implementation elements together, and to do so, we use a language founded in established logic about the relationships between “parts” and “wholes” to keep the common elements concept itself as atheoretical as we can (Engell, 2021; Varzi, 1996). That is why generic terms such as elements, components, and processes are preferred over more theoretically loaded terms such as discrete strategies or competencies when describing the concept and planning reviews. This enables us to systematically review and synthesize research from a wide range of theoretical perspectives, as well as reduce the risk of actively or unknowingly excluding relevant elements of the intervention or implementation. We can, of course, apply more theory in our choices of taxonomies when conducting the reviews or explaining their results. Table 1 provides a glossary of terms for common elements research and our understanding of what makes elements “core,” “common,” “evidence-informed,” and “evidence-based.” We explain them in terms of implementation elements, but the logic is the same for intervention elements.

Table 1 Explanations of key terms used in common elements thinking and research

A key initial step in several common elements approaches is deconstructing “the thing” to be implemented (e.g., the clinical intervention, program, or policy) or “the stuff that we do to get people and places to do the thing” (implementation strategies; Curran, 2020) into its smaller meaningful entities (e.g., elements and components). Deconstruction is done to discern all the parts/ingredients that go into the “thing” so that we can use various methods to understand their contributions to outcomes. These contributions can be as discrete parts and sets of parts (e.g., when the thing is a sum of one or more active parts/ingredients). They can also be as parts of a more ecological contribution (e.g., when the thing or system is more than the sum of its parts), or as parts in other meronymic relations (i.e., relationships between parts and wholes; Varzi, 1996), such as when elements have dispositional causal powers that may be “triggered” by specific configurations of other elements (i.e., “dispositional partner elements”; Anjum & Mumford, 2018). The appropriate level of deconstruction can depend on the objectives of the work and on perspectives/theory about the nature and composition of the “thing.”
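As a loose illustration of this part-whole logic, the sketch below represents a “thing” as a hierarchy of parts that can be deconstructed to whatever level of granularity the objectives require. The strategy, element, and component names are illustrative only and not taken from the paper.

```python
# A minimal sketch of representing part-whole (meronymic) relations when
# deconstructing a "thing" into elements and components. Names are illustrative.
from dataclasses import dataclass, field

@dataclass
class Part:
    name: str
    level: str                        # e.g., "strategy", "element", "component"
    parts: list["Part"] = field(default_factory=list)

    def leaves(self):
        """Return the most granular parts (the current leaves) of this whole."""
        if not self.parts:
            return [self]
        return [leaf for p in self.parts for leaf in p.leaves()]

consultation = Part("external consultation", "strategy", [
    Part("case review", "element",
         [Part("feedback on audio-recorded sessions", "component")]),
    Part("problem solving around barriers", "element"),
])
print([p.name for p in consultation.leaves()])
```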

Common Elements Approaches Applied to Implementation Science

Implementation science is in the early stages of establishing an evidence base for implementation comparable to that in intervention science. As noted in Table 1, in terms of evidence for implementation strategies, audit and feedback is an empirically supported and relatively “unpacked” implementation strategy to date (Brown et al., 2019; Ivers et al., 2012; Tuti et al., 2017). However, the common elements concept can be applied to other implementation strategies, guiding implementation frameworks, implementation competencies, training curricula, and the literature on barriers and facilitators, both separately and conjunctively. Note that even though we suggest that the active content of implementation processes (i.e., elements) can go beyond discrete implementation strategies, element-level evidence would likely identify practice elements within discrete implementation strategies that are commonly key elements of implementation processes that succeed. By accommodating such element-level evidence while implementation science is still in its early stages, we may avoid some of the limitations that hold detailed evidence back in common elements work in intervention science today, such as insufficient reporting of details about interventions and implementation processes.

Methods that resemble common elements approaches have already had pragmatic and valuable implications for implementation research and practice. For example, the Expert Recommendations for Implementing Change (ERIC) project (Powell et al., 2015) used Delphi methods and concept mapping to discern the discrete implementation strategies that a group of implementation science experts identified as most important and feasible (Waltz et al., 2015). Common elements methodologies can build on such taxonomies, disentangle them further, review large accumulations of the implementation literature at a refined level of detail, and extract the elements, components, and configurations most commonly used in successful implementation in specific contexts and circumstances. For instance, using frequency counts, we can discern the most common elements and configurations and then use meta-regression analyses to test how they are associated with implementation outcomes and/or clinical outcomes. Some of these common elements may be discrete implementation strategies as per the ERIC taxonomy, some may be more granular, some may tend to be interconnected with specific determinants, and some may be important parts of implementation that implementation science has yet to theorize.
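As a rough illustration of that two-step logic, the sketch below uses hypothetical study-level data and an inverse-variance weighted regression as a simplified stand-in for a meta-regression; a full analysis would typically use random-effects meta-regression models, and all values here are invented.

```python
# A rough sketch of testing whether presence of a common element is associated
# with study effect sizes, using inverse-variance weighted regression as a
# simple stand-in for a meta-regression. All data below are hypothetical.
import numpy as np
import statsmodels.api as sm

# One row per study: effect size, its variance, and a 0/1 indicator for whether
# the study's implementation included a given common element.
effect_size = np.array([0.35, 0.10, 0.52, 0.21, 0.44, 0.05])
variance    = np.array([0.02, 0.04, 0.03, 0.05, 0.02, 0.06])
has_element = np.array([1, 0, 1, 0, 1, 0])

X = sm.add_constant(has_element)          # intercept + element indicator
weights = 1.0 / variance                  # inverse-variance weights
model = sm.WLS(effect_size, X, weights=weights).fit()

print(model.params)    # coefficient on the indicator ~ difference in effect size
print(model.pvalues)
```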

In our common elements work, we found current taxonomies in intervention and implementation science helpful for the operationalization and categorization of elements, but frequently also insufficiently detailed for fine-grained deconstruction. Table 2 shows an example from a common elements review of academic interventions and their implementation strategies (Engell et al., 2020) in which we chose a discrete parts approach and used the ERIC taxonomy to code conjunctions between intervention elements, implementation elements, and contextual characteristics. During coding, we found that several discrete implementation strategies varied considerably in their activities and in how these activities were practiced. For instance, the level of refinement for the implementation strategy “make training dynamic” was insufficient to discern the differences in dynamic training activities used by interventionists (e.g., dynamic due to the use of role plays, use of feedback, combining several activities, or other interactive activities). To capture more of the variation, we further deconstructed the strategy into more discrete elements for coding. However, precise details were difficult to ascertain because of limited descriptions of the training programs in the studies. More detailed reporting in primary studies, for instance adherence to standards for reporting implementation strategies (Presseau et al., 2019; Proctor et al., 2013), would likely have provided a more accurate, detailed, and exhaustive list and allowed us to code more of their processual aspects (e.g., time and intensity, timing, sequencing, actors).

Table 2 Deconstruction of the implementation strategy “make training dynamic”
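The sketch below illustrates, with invented labels rather than the actual Table 2 content, how such a deconstructed coding scheme can be represented so that studies are coded at the element and component level.

```python
# An illustrative coding scheme (not the actual Table 2 content) showing how a
# discrete ERIC strategy can be deconstructed into more granular elements for
# coding, with each element broken into the components observed in studies.
make_training_dynamic = {
    "strategy": "Make training dynamic (ERIC)",
    "elements": {
        "role play": ["behavioral rehearsal", "peer role play", "live coaching"],
        "feedback": ["trainer feedback on rehearsal", "peer feedback"],
        "interactive exercises": ["group discussion", "case-based problem solving"],
        "modeling": ["live demonstration", "video modeling"],
    },
}

# Coding a study then becomes recording which elements/components were present.
study_code = {
    "study_id": "hypothetical_2020",
    "elements_present": {"role play": ["behavioral rehearsal"], "feedback": []},
}
print(sorted(make_training_dynamic["elements"]))
```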

An example of how common elements thinking can be used to open the “black box” of implementation can be found in Albers et al.’s (2021) integrative review of common implementation strategies used by professionals providing implementation support (implementation support practitioners [ISPs]). Looking at a large number of implementation studies across sectors, settings, and study designs, they first identified the discrete implementation strategies most commonly used by ISPs to support implementation, coded per the ERIC taxonomy. Next, they identified the most common activities and techniques (i.e., elements and components) used to carry out these discrete implementation strategies. They observed that the elements within each discrete implementation strategy varied considerably, which raises the question of whether some elements are likely to be more useful or effective than others. They also observed that ISPs nearly always combined several implementation strategies, which raises questions about which combinations, and which sequencing of them, are most likely to be useful. Common elements approaches can help answer such questions.

A recent review by Tugendrajch et al. (2021) demonstrates the feasibility of descriptive common elements approaches in implementation science by reviewing the evidence for clinical supervision. First, they discerned the common elements of three different professional guidelines for providing clinical supervision of trainees and identified 17 elements that were common across the three guidelines. Next, they reviewed the literature to identify how the inclusion of these common elements in supervision was associated with therapist and client outcomes in therapy. They found that certain supervision elements (e.g., modeling ethical practices and documentation of supervision) were used in supervision with nearly exclusively positive outcomes, while other elements (e.g., self-evaluation and goal setting) were used in supervision with mixed outcomes. They also identified a general lack of attention to providing multicultural supervision, even though it is recommended in professional guidelines. Their results were, however, limited by scarce reporting of supervision details and potential publication bias. Despite these limitations, Tugendrajch et al. (2021) exemplify how common elements approaches not only operationalize the content of implementation strategies but also pragmatically review what content tends to be included when strategies work, examine associations with their use across the literature, and identify content-specific knowledge gaps. However, the potential of common elements approaches stretches beyond such mapping, although realizing it requires more experimental primary studies, greater use of current and new reporting standards, and better data availability.

Methods for Distilling and Testing Common Elements in Interventions and Implementations

Instead of using guidelines as the source for defining common elements, we could use a common elements approach across the extant literature on interventions and implementations. Supplementary file 1 provides a detailed step-by-step guide to how such a common elements approach can be conducted when sufficient data are reported and available. As an example, we explain how the approach could be used to review common elements of external consultation as an implementation strategy for psychosocial interventions (i.e., facilitation from outside of an organization or clinical unit; Nadeem et al., 2013). This approach will also discern the specific contexts and circumstances in which these common elements are used successfully and unsuccessfully. In recent common elements reviews that include coding common implementation elements (Engell et al., 2020; Helland et al., 2022), we have found that insufficiently reported implementation data from intervention and implementation studies limit the full potential of conducting a review at this level of detail. For instance, in a review of common combinations of practices, processes, and implementation elements in academic interventions, we identified 62 practice elements, 49 process elements, and only 36 implementation elements (Engell et al., 2020). However, as reporting standards and data sharing become more detailed and widely used, the feasibility and utility of such reviews will grow rapidly. The results from such reviews can inform evidence-informed hypotheses about key elements and mechanisms in the successful implementation of interventions, identify detailed evidence gaps, and provide specific practical implications for intervention and implementation researchers and practitioners.

The approach we present is based on a systematic common elements review methodology developed by Engell et al. (2020), which has been used in several reviews of interventions in recent years (Helland et al., 2022; Mellblom et al., 2023; Winje, 2019; Bækken, 2021). We acknowledge that other methodologies can also be used, and we direct interested readers to the work of Chorpita and Daleiden (2009), Leijten et al. (2019), and McLeod et al. (2017) for prominent examples. Also, although it is not explicitly a common elements methodology, we acknowledge the approach Brown et al. (2019) took to unpack core elements and mechanisms of audit and feedback by systematically reviewing and analyzing qualitative studies. Using theory-based meta-review methods with similarities to common elements reviews, Brown et al. systematically coded and synthesized qualitative studies of audit and feedback into a comprehensive theory of how audit and feedback works and of the elements and factors that influence its effects. We chose the common elements review methodology because it is, to our knowledge, the only methodology that purposefully connects common intervention elements, implementation elements, and context elements. Table 3 is a short summary of key steps in this advanced common elements review methodology, based on a manual available in Engell et al. (2020). Elaboration of the steps and practical advice for use are available in supplementary file 1.

Table 3 Summary of step-by-step guide to advanced systematic common elements review methodology

Some common elements reviews restrict the study selection to effective studies or interventions that have outperformed a comparison condition (i.e., “winning interventions”; Chorpita & Daleiden, 2009; Barth et al., 2014; Okamura et al., 2020). Although doing so can be appropriate depending on the aims of the review (e.g., describing the literature on effective interventions), as a general principle we recommend not excluding ineffective or less effective experimental conditions, because doing so discards information relevant for interpretation. As demonstrated in a common elements review of academic interventions (Engell et al., 2020), highly common elements in effective interventions may also be common in ineffective or less effective interventions. Thus, excluding studies without positive effects or with iatrogenic effects may skew results (e.g., popularity bias; Engell et al., 2020). Further, as recently demonstrated by Solheim-Kvamme et al. (2022), highly common elements of winning interventions may not necessarily be more associated with intervention effects than less common elements once their inclusion in ineffective and less effective interventions is statistically accounted for.
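The toy example below, with hypothetical data, illustrates the point: comparing element frequencies in effective and ineffective conditions can reveal elements that are merely popular rather than plausibly potent.

```python
# A small sketch illustrating why ineffective conditions matter: an element that
# is very common in effective interventions may be just as common in ineffective
# ones, suggesting popularity rather than potency. Data are hypothetical.
from collections import Counter

effective = [
    {"psychoeducation", "exposure"}, {"psychoeducation", "exposure", "homework"},
    {"psychoeducation", "relaxation"}, {"exposure", "homework"},
]
ineffective = [
    {"psychoeducation", "relaxation"}, {"psychoeducation"},
    {"psychoeducation", "homework"},
]

def freq(studies):
    counts = Counter(e for s in studies for e in s)
    return {e: c / len(studies) for e, c in counts.items()}

f_eff, f_ineff = freq(effective), freq(ineffective)
for element in sorted(set(f_eff) | set(f_ineff)):
    print(f"{element}: effective {f_eff.get(element, 0):.0%}, "
          f"ineffective {f_ineff.get(element, 0):.0%}")
```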

Results from Integrated Common Elements Reviews

Table 4 summarizes the type of results we could potentially extract from a common elements review of more than 100 richly reported intervention and implementation studies in which external consultation was experimentally tested or used. The most basic algorithms in Step 5, calculating frequency values of singular elements, provide information about whether and how often common elements are used in implementations and interventions producing effects, accounting for their use in ineffective implementations/interventions. The more advanced algorithms, calculating frequency values for common combinations of practices, processes, and context, provide more information about the how and when of successful implementation and intervention. For instance, we can extrapolate answers to specific questions such as: What are the most commonly successful consultation elements used in implementation of transdiagnostic mental health interventions for children ages 6–12 in community clinics, how are these elements most commonly used successfully, and which implementation determinants most commonly facilitate or inhibit successful implementation when these consultation elements are used? Combined, the results can be formulated as evidence-informed hypotheses and/or implications about key elements, mechanisms, or processes in interventions and implementations. Further, some of these hypotheses can be tested using the analyses in Step 6, which can strengthen or weaken their “evidence-informedness” and their implications for experimental testing or use in practice.

Table 4 Potential results from the example review of consultation strategies
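As an illustration of the more advanced algorithms described above, the sketch below (with hypothetical records and element labels) filters studies by context and compares how often combinations of consultation elements co-occur in successful versus unsuccessful implementations.

```python
# A rough sketch of the more advanced Step 5 logic: filter studies by context,
# then count how often combinations of consultation elements co-occur in
# successful versus unsuccessful implementations. All records are hypothetical.
from collections import Counter
from itertools import combinations

studies = [
    {"context": "community clinic", "ages": "6-12", "success": True,
     "elements": {"case review", "modeling", "fidelity feedback"}},
    {"context": "community clinic", "ages": "6-12", "success": True,
     "elements": {"case review", "fidelity feedback"}},
    {"context": "community clinic", "ages": "6-12", "success": False,
     "elements": {"case review", "didactic refresher"}},
    {"context": "school", "ages": "6-12", "success": True,
     "elements": {"modeling", "fidelity feedback"}},
]

def pair_frequencies(records):
    counts = Counter()
    for r in records:
        for pair in combinations(sorted(r["elements"]), 2):
            counts[pair] += 1
    return counts

in_context = [s for s in studies if s["context"] == "community clinic"]
successful = pair_frequencies([s for s in in_context if s["success"]])
unsuccessful = pair_frequencies([s for s in in_context if not s["success"]])

for pair, count in successful.most_common():
    print(f"{pair}: {count} successful vs {unsuccessful.get(pair, 0)} unsuccessful")
```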

Advancing and Disseminating Common Elements in Implementation Science

Completing the algorithms described in Table 3 manually is rather cumbersome. However, the underlying logic can be computerized to automate the sorting based on different configurations and patterns (Engell, 2021). Herein lies the potential for rapidly synthesizing the literature into pragmatic implications and predictions for research and practice. Artificial intelligence techniques such as natural language processing (NLP) can further enhance the automation and usability of such systems (Olivetti et al., 2020), although they rely on semi-standardized reporting of intervention and implementation details. Machine learning and other advanced analyses can be used to learn even more from such systems of big datasets, including using individual participant data to improve precision and idiosyncratic implications (e.g., Furukawa et al., 2021). In addition, feeding end-user data back into big data ecosystems can further improve predictions (Celi et al., 2020), and elements can serve as inputs to neural network architectures (Weng, 2020). For interested readers, the Human Behavior Change Project (Michie et al., 2020) is a pioneering example from behavior change science pursuing several of these possibilities.
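A toy example of the direction this could take is sketched below: a simple rule-based tagger that flags element mentions in a reported methods excerpt. Actual NLP pipelines would use trained models rather than keyword rules, and the patterns and excerpt here are purely illustrative.

```python
# A toy sketch of how semi-standardized reporting could be machine-read: a
# rule-based tagger that flags element mentions in a methods excerpt. Real NLP
# pipelines (e.g., trained classifiers) would replace the keyword rules.
import re

element_patterns = {
    "role play": r"\brole[- ]?play(s|ing)?\b",
    "feedback": r"\bfeedback\b",
    "audit": r"\baudit(s|ed|ing)?\b",
    "reminders": r"\breminder(s)?\b",
}

methods_excerpt = (
    "Clinicians received monthly consultation that included role-play of "
    "difficult cases and individual feedback on audio-recorded sessions."
)

detected = [name for name, pattern in element_patterns.items()
            if re.search(pattern, methods_excerpt, flags=re.IGNORECASE)]
print(detected)   # ['role play', 'feedback']
```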

There are existing dissemination tools available where common elements results can be useful. For example, evidence gap maps (EGMs) consolidate what we know and do not know from evaluation research about policies and practices and visually present the results in interactive maps (see, for example, the Mega EGM by the Campbell Collaboration and UNICEF; Saran et al., 2020). Similar maps can be made for evidence about common intervention and implementation elements and, in time, may be integrated as subcategories of policies and practices in joint EGMs.

There are several online repositories and clearinghouses disseminating guides for evidence-based programs (EBPs), and such guides could also disseminate which implementation elements commonly work to implement and sustain the EBPs. PracticeWise (PracticeWise LLC, n.d.) is a commercial example of an online repository and clinical decision-making tool where users can find short guides for using common elements of EBPs for children’s mental health, sorted by specific populations and contexts. Similar searchable repositories, preferably publicly available (e.g., through government funding), could in time be made for common elements of implementation and be integrated with evidence about EBPs. Although current formats for scientific publishing are predominantly manuscript-based, scientific publishing can (and likely will) evolve toward other interactive and more real-time formats, such as libraries and repositories with open datasets.

Practical Application of Common Elements in Implementation Practice

Dissemination of common elements results offers advantages for pragmatic use. A primary practical application of common elements for implementation science could be co-designing implementation strategies in partnership with implementation researchers and stakeholders. Indeed, while recommendations have been made about what to report regarding implementation strategies (e.g., action, actor, goal, duration; Proctor et al., 2013), implementation strategies require tailoring and adaptations to fit within diverse contexts (Miller et al., 2021) and for specific intervention characteristics (Lyon et al., 2020a). As such, a common elements approach could allow for increased understanding of what “building blocks” are needed and influential when co-designing an implementation effort, with guidance as to which elements tend to be combined, what can be adapted, and what is likely required for successful outcomes in a given setting and process. Ideally, the elements used and the planned and ad hoc adaptations should be tracked to continue to enhance our understanding of the necessary ingredients to promote high-quality implementation (Finley et al., 2018; Miller et al., 2021).

Common elements can also provide implications for training and education in implementation science. A well-articulated benefit of common elements approaches is that, compared to knowledge in the form of complex models and programs, elements can be more easily learned, retained, and integrated with practitioners’ more fluid knowledge and expertise (Chen et al., 2021; Engell et al., 2021). Focusing the training of implementation practitioners and stakeholders on the implementation elements that commonly work (and discarding those that commonly do not), both generically and context-specifically, may be an efficient approach to educating and training for breadth and depth in implementation expertise. Element-based training curricula also lend themselves to stepwise, sequential, and needs-based approaches to training and implementation (Engell et al., 2021).

Common elements approaches can also be leveraged in system and service design (Chorpita et al., 2021). As mentioned in the introduction, the unit of an element (i.e., a “meaningful whole”) does not necessarily have to be an implementation strategy or an intervention. Elements can also be systems, organizations, and services, which can in turn be deconstructed into their parts (elements and components). Conversely, the construction or alteration of systems and implementation models can be informed by the parts that are commonly associated with desired determinants and outcomes. Chorpita and Daleiden’s (2014, 2018) work on models of coordinated strategic mental health systems provides early examples along those lines of thinking.

Limitations

Common elements reviews are limited by the studies available in the literature and by the amount of detail and data about implementation, intervention, and outcomes reported in those studies. Based on experience from systematically reviewing thousands of studies, we are encouraged by movement in the field toward researchers and authors being substantially more attentive to reporting standards and to details about intervention content and implementation processes. Another important limitation is the challenge of inferring element-level causality when the interventions or implementations tested “packages” of several elements (e.g., multi-faceted implementation strategies). Systematic common elements reviews are also subject to the same biases as most traditional systematic reviews, particularly publication bias and the uncertain discrepancy between the intended experimental condition in primary studies (i.e., the implementation manual) and the actual experimental condition (i.e., adherence to the manual and quality of use). We also caution against popularity bias, the tendency for some elements to be frequently included in interventions and implementations because popular opinion deems them important, regardless of their (unknown) effectiveness (Engell et al., 2020). Popularity bias may lead to certain elements commonly being “free agents” in effective interventions without necessarily contributing to their effects. Several methodological steps have been taken in recent years to reduce these biases and improve inferences from element-level reviews of multi-element interventions (e.g., Engell et al., 2020; Furukawa et al., 2021; Solheim-Kvamme et al., 2022), and the precision will continue to improve with greater use of reporting standards, developments in fidelity and process measurement, data availability, and advanced statistical analyses (Engell, 2021). For instance, computational linguistics and NLP will likely help improve the precision of fidelity measures (e.g., Flemotomos et al., 2021; Gallo et al., 2015; Imel et al., 2019), although there are challenges to overcome before such systems are in widespread use (Ahmadi et al., 2021). Nevertheless, highly common elements of effective interventions and implementations may best be described as evidence-informed, as they are derived from empirically tested interventions across contexts (i.e., informed by them) but not necessarily tested in isolation. They can also be viewed as hypothesis-generating for element-level experimental testing to establish evidence-based elements and mechanisms. At present, and as with all types of synthesized evidence, we recommend caution in interpreting common elements results and careful evaluation of their merit when they are considered for introduction into practice.

Recommendations for Next Steps

Our first recommendation is to track and report element-level details in intervention and implementation studies. It is essential to report all practices, processes, and context characteristics that are relevant to the interventions and implementations, and to report results from all proximal and distal outcomes measured. In other words, we need to capture data about how important elements and outcomes in interventions and implementations occur, unfold, and emerge in time and space. In addition to traditional data collection methods, this will likely include active and passive capture of process data, such as ecological momentary assessments (EMA) and audio, geo, and bio feedback from devices and wearables (see Bettis et al., 2022 for a review). Such detailed process data (e.g., flexible fidelity to functions; Engell, 2021; or fluctuating psychological processes; Russell & Gajos, 2020) are of particular importance for making more precise inferences about elements, mechanisms, and dynamic processes, and may warrant adjustments or additions to current reporting standards. The use of EMA and other experience sampling methods has rapidly increased in psychosocial research over the last decade and offers high ecological validity and low recall bias (Bettis et al., 2022; van Roekel et al., 2019). For instance, EMA strategically coordinated with recordings of intervention sessions and information from wearables can provide detailed data on fidelity to intervention elements and on proximal behavioral, emotional, and physical processes, indexing the dynamics of immediate and short-term responses to intervention. There are also practical, scientific, and ethical limitations related to real-time process data, such as response burden, data noise, and health data privacy, that require mindful navigation.

We can use supplementary files to provide sufficient reporting of details and data, which can then be used by NLP systems (or human coders). Researchers should follow current reporting standards and taxonomies, such as the ones referenced throughout this paper, when appropriate (for a review of these and other relevant standards, see Rudd et al., 2020), provide explanations when deviating from them, and add to them when relevant and appropriate. What we can learn from common elements reviews will be influenced by the theories and taxonomies researchers use in primary studies and in conducting reviews. We are unlikely to agree on one common ontology and taxonomy for each intervention and implementation phenomenon, nor should that necessarily be a goal. We should use current reporting standards and taxonomies to enable comparisons and integration across studies and contexts. However, taxonomies operating as disciplinary standards must be subject to continuous debate, testing, and refinement, and intervention and implementation science should remain open to diverse and novel theory. While some differing ontological perspectives may be reconcilable in common theories and taxonomies, others may require multiple taxonomies because of fundamentally different perspectives.
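For instance, a supplementary file could report element-level details in a machine-readable structure along the lines of the hypothetical sketch below; the field names are illustrative and would ideally map onto existing reporting standards rather than replace them.

```python
# A hypothetical example of a machine-readable supplementary file reporting
# element-level implementation details; field names are illustrative and would
# ideally map onto existing reporting standards (e.g., AACTT, StaRI, TIDieR).
import json

report = {
    "study_id": "hypothetical_trial_2024",
    "implementation_strategies": [
        {
            "eric_label": "Audit and provide feedback",
            "elements": [
                {"name": "monthly fidelity feedback report",
                 "actor": "external consultant",
                 "action_target": "clinicians",
                 "dose": "monthly, 30 min",
                 "period": "12 months"},
            ],
        }
    ],
    "context": {"setting": "community mental health clinic", "country": "NO"},
    "outcomes": [{"name": "fidelity", "measure": "observer-rated",
                  "timepoints": ["3m", "6m", "12m"]}],
}

print(json.dumps(report, indent=2))
```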

Our second recommendation is to publish as many details about unsuccessful implementation as about successful implementation. As described in the step-by-step guide to common elements reviews in Table 3 and supplementary file 1, much can be learned from accumulating details about implementation failures and including data from unsuccessful implementation in our reviews, models, and analyses. Open preprint repositories such as arXiv can facilitate reporting of unsuccessful implementation efforts that are vital to share but difficult to publish in peer-reviewed journals. The file drawer problem is an unethical waste of opportunities for scientific learning.

Our third recommendation is to make data available when ethically appropriate. Plan for data availability early so that it can be accommodated in applications to ethics boards. Use online repositories or create routines for sharing data with review researchers who inquire about it. Major publishers (e.g., Springer, Wiley, Elsevier) and funding institutions (e.g., the National Institutes of Health in the US and the National Institute for Health and Care Research in the UK) encourage and support data availability. The Research Council of Norway now requires grant applications to include plans for data availability or a strong justification for why data cannot be publicly shared.

Our fourth recommendation is to test common element-based hypotheses experimentally. By testing hypotheses generated through advanced common elements reviews with different experimental designs (e.g., factorial trials, dismantling trials, time series, natural experiments), we will both test the common elements concept as a theory and gain causal knowledge about the effects of elements and mechanisms in implementation. Complementing these trials with other methods (e.g., realist evaluation, system dynamics modeling, phenomenological studies) can help study processes, mechanisms, systems, and narratives from multiple perspectives to enrich our understanding of causal relations.

We call on implementation journals such as Global Implementation Research and Applications, Implementation Science, Implementation Science Communications, and Implementation Research and Practice, as well as other journals publishing implementation and intervention studies, to accommodate these recommendations for authors.

Conclusions

We proposed a research agenda for using advanced common elements methodology to improve the precision and pragmatism of implications from intervention and implementation research. For implementation science specifically, common elements methodologies are tools that can help synthesize and distill the literature into practical implementation applications, generate evidence-informed hypotheses about key elements, processes, and mechanisms in implementation, and promote evidence-informed precision tailoring of implementation strategies to contexts. We offered guidance and recommendations for researchers and journals on how we can realize those potentials, and we emphasized accommodating diverse approaches to understanding and studying implementation processes.