Below we describe strategies for abstracting ten implementation factors from intervention studies during evidence synthesis. In cases where included studies cite previous papers that describe the intervention, these papers should also be reviewed. Table 1 presents a definition of each factor, examples of relevant information that can be abstracted from intervention studies, and the sections in which the relevant information is most often found.
Acceptability
During implementation, acceptability is the “perception among implementation stakeholders that a given treatment, service, practice, or innovation is agreeable, palatable, or satisfactory.”1 Acceptability of an intervention by the target population is strongly associated with likelihood of adoption, as described in seminal work by Everett Rogers.12 Acceptability is typically measured through a survey of stakeholders and often focuses on provider and/or patient satisfaction.13, 14 If acceptability is not assessed in an intervention trial, it might be inferred in part from retention of providers and patients throughout the study, as dropout rates may indicate that participants do not find the intervention to be worthwhile, satisfactory, or agreeable.15 Caution should be taken when extrapolating retention rates to acceptability, because multiple factors influence study retention; in addition, comparing retention rates across studies that place varying emphasis on retention (e.g., highly controlled studies vs. pragmatic studies) could be misleading. However, if dropouts are surveyed or interviewed, the reported findings might provide a source of information about acceptability.
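Where a study's flow diagram reports enrollment and completion counts by arm, the retention-based signal described above can be computed directly. A minimal sketch, using hypothetical counts (no values here come from any cited study):

```python
def retention_rate(enrolled: int, completed: int) -> float:
    """Proportion of enrolled participants retained through follow-up."""
    if enrolled <= 0:
        raise ValueError("enrolled must be a positive count")
    return completed / enrolled

# Hypothetical counts abstracted from a trial's flow diagram.
arm_retention = {
    "intervention": retention_rate(enrolled=120, completed=96),
    "control": retention_rate(enrolled=118, completed=106),
}
for arm, rate in arm_retention.items():
    print(f"{arm}: {rate:.2f}")

# Markedly lower retention in the intervention arm *may* hint at an
# acceptability problem, but retention is only an imperfect proxy.
```

Comparing arms within one study sidesteps some (not all) of the cross-study comparability concerns noted above.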
Adoption
Adoption is the “intention, initial decision, or action to try or employ an innovation or evidence-based practice.”1 Adoption is often influenced by other implementation factors (e.g., acceptability, feasibility, implementation cost)11 and is an outcome measure of implementation. Adoption can be extrapolated from the proportion of eligible providers (or clinics) that participate in part or all of an intervention.
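The proportion described above reduces to a simple ratio; a sketch with hypothetical counts (the clinic numbers are illustrative, not drawn from any study):

```python
def adoption_rate(eligible: int, adopters: int) -> float:
    """Proportion of eligible providers (or clinics) that tried or
    employed at least part of the intervention."""
    if not 0 <= adopters <= eligible or eligible == 0:
        raise ValueError("adopters must be between 0 and eligible (> 0)")
    return adopters / eligible

# Hypothetical example: 18 of 24 eligible clinics agreed to deliver
# some or all of the intervention during the study period.
print(f"adoption: {adoption_rate(eligible=24, adopters=18):.0%}")
```

The denominator matters: eligible providers who were approached but declined should be counted, since excluding them inflates apparent adoption.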
Appropriateness
Appropriateness (sometimes referred to as compatibility)9 is the “perceived fit, relevance, or compatibility of an innovation or evidence-based practice for a given practice setting, provider, consumer, or problem.”1 Interventions perceived as more appropriate by stakeholders are more likely to be adopted.11, 16 Appropriateness can be determined through provider satisfaction surveys that measure provider opinions about intervention usefulness or impact on workflow, as well as pre-implementation considerations about fit (i.e., Does the hospital have the proper resources to implement the intervention? Does the intervention make sense for the population of a given setting?).
Feasibility
Feasibility is the “extent to which a new treatment, or an innovation, can be successfully used or carried out within a given agency or setting.”1 Settings with adequate time and resources (both actual and perceived) to adopt and continue an intervention are more likely to initiate the use of that intervention.1, 11 Feasibility considerations mostly concern the demands placed on a provider or a system, including the cost of the intervention and the time and staffing required to implement it properly. Papers may address feasibility by describing the training time and staffing required, start-up costs, materials needed for the intervention, and the need for systems-level changes for implementation.
Fidelity
Fidelity is the “degree to which an intervention was implemented as it was prescribed in the original protocol or as it was intended by the program developers.”1 Higher levels of fidelity are associated with greater improvement in clinical outcomes.16 Fidelity is most relevant when an intervention is tested in multiple trials. It is typically measured through a fidelity checklist or post-implementation assessment of the degree to which the intervention was implemented as planned. In an evidence synthesis project, fidelity can be evaluated by examining multiple studies of the same intervention to determine whether later versions of the intervention resemble the original intervention or whether adaptations were made (and if so, the reasons why).
Implementation Cost
Implementation cost is the “cost impact of an implementation effort”1 and is often influenced by intervention complexity, implementation strategy, and setting. High implementation costs are frequently a barrier to adoption.17 Implementation costs refer specifically to the costs of the implementation process and are distinct from cost (or cost-effectiveness) outcomes of the intervention itself. Implementation costs are rarely reported in efficacy studies but may be estimated by reviewing reports of personnel requirements (including the need for new staff, or time demands for training/participation) and technology or equipment.
Intervention Complexity
Intervention complexity is the “perceived difficulty of implementation, reflected by duration, scope, radicalness, disruptiveness, centrality, and intricacy and number of steps required to implement.”9 Complexity can also influence the reproducibility of an intervention and the external validity of trials of the intervention.18 Intervention complexity can be assessed by examining the time period over which an intervention is carried out, the number of steps and personnel required to carry out the intervention, or the number of trainings required to teach an intervention.
Penetration
Penetration is the “integration of a practice within a service setting and its subsystems.”1 Interventions that have buy-in from an organization and/or greater organizational support have greater uptake.11, 19 Penetration shares elements with reach, but is a system-level measure of the quality and scope of reach, defined as the proportion of providers who use an intervention out of those anticipated to use it.20 Penetration can be observed through multi-site data (if available) or through a description of intervention integration within a system.
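When multi-site data are available, the proportion above can be computed per site and pooled across the system. A sketch with entirely hypothetical site counts:

```python
# Hypothetical multi-site data: for each site, the number of providers
# using the intervention and the number anticipated to use it.
sites = {
    "site_a": (8, 10),   # (using, anticipated)
    "site_b": (3, 12),
    "site_c": (9, 9),
}

def penetration(using: int, anticipated: int) -> float:
    """Proportion of anticipated providers actually using the intervention."""
    if anticipated <= 0:
        raise ValueError("anticipated must be a positive count")
    return using / anticipated

# Site-level penetration highlights uneven integration across the system.
for site, (using, anticipated) in sites.items():
    print(f"{site}: {penetration(using, anticipated):.2f}")

# System-level penetration pools counts across all sites.
total_using = sum(u for u, _ in sites.values())
total_anticipated = sum(a for _, a in sites.values())
print(f"system: {total_using / total_anticipated:.2f}")
```

Reporting both levels matters: a respectable pooled figure can mask a site (like the hypothetical site_b) where the intervention never took hold.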
Reach
Reach examines “the absolute number, proportion, and representativeness of individuals who are willing to participate in a given initiative [or intervention].”9 The greater the population that an efficacious intervention reaches, the greater its positive impact on population health outcomes. Certain measures of reach can also help ensure that underserved populations receive appropriate care and services. Demographics tables typically present the characteristics of the population that received an intervention. In some cases, flow diagrams may present information about the characteristics of patients who dropped out of a study. Combining these data with information about the target population can provide insight about the representativeness of the study population and the patients who received the full intervention.
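A crude representativeness check can be made by comparing subgroup proportions in the demographics table against the same proportions in the target population. A sketch with hypothetical subgroup names and proportions (none drawn from any study):

```python
# Hypothetical subgroup shares: the study sample (from the demographics
# table) vs. the intervention's target population (e.g., from clinic
# registries or census data). All names and values are illustrative.
study_sample = {"age_65_plus": 0.22, "rural": 0.10, "medicaid": 0.15}
target_population = {"age_65_plus": 0.30, "rural": 0.25, "medicaid": 0.20}

# The gap between each subgroup's share in the study and its share in
# the target population; large negative gaps flag subgroups the
# intervention may not be reaching.
for group in target_population:
    gap = study_sample[group] - target_population[group]
    print(f"{group}: {gap:+.2f}")
```

The same comparison can be repeated against the subset of patients who completed the full intervention, using dropout characteristics from the flow diagram where reported.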
Sustainability
Sustainability (or maintenance)9 is the “extent to which a newly implemented treatment is maintained or institutionalized within a service setting’s ongoing, stable operations.”1 Organizations with resources to sustain an intervention are more likely to adopt and continue an intervention.11 An intervention that is more easily sustained will also have greater longevity. Sustainability is rarely described in early intervention studies, but the discussion section may consider characteristics of the intervention or implementation process that influence sustainability. In an evidence synthesis, follow-up studies or reports may provide information about whether an intervention was expanded or sustained after initial financial and workforce support ended.