Concerns about poor consistency in implementation science (IS) terminology have led researchers to characterize the field as a “Tower of Babel” [1]. Inconsistent terminology complicates literature searches, and researchers have found that search strategy yield and precision indices for implementation and quality improvement studies are moderate at best [2, 3]. This limits meta-analytic and replication efforts aimed at rigorously evaluating the effectiveness of implementation strategies, as well as the value of the existing literature for those enacting implementation initiatives. The science and practice of implementation would be greatly facilitated by a parsimonious nomenclature of conceptually distinct implementation strategies [2, 4–7].

Recently, Powell et al. [8] reviewed the health and mental health literature (including 41 compilations and reviews) and proposed a compilation of 68 discrete implementation strategies, i.e., strategies involving a single action or process. This compilation served as the starting point for a subsequent multi-stage project called Expert Recommendations for Implementing Change (ERIC) [9]. The ERIC project’s first stage involved expert panelists (N = 71) using a modified Delphi process to revise the compilation, which resulted in an updated compilation of 73 discrete implementation strategies [10].

The aim of the ERIC project’s second stage, presented here, was to obtain preliminary validation of the compilation of 73 implementation strategies by studying the relationships between the strategies and obtaining relative importance and feasibility ratings for each strategy. The study of the relationships among the strategies supports evaluating whether the strategies are conceptually distinct from one another as well as how they can be organized into conceptually relevant groupings. The latter also serves the practical purpose of making it easier for stakeholders to consider the range of implementation strategies by thematic cluster. The importance and feasibility ratings provide insight into the perceived applicability of the strategies; of general interest is which strategies experts rate relatively high or low.


A purposive sampling procedure was used to recruit an expert panel of implementation science and clinical experts (N = 35) to participate in concept mapping and rating tasks [9, 10]. A detailed description of procedures has been published [9], and a summary is provided here. Concept mapping is a mixed-method procedure for engaging stakeholder groups in a structured conceptualization process [11]. This process supports visually representing the relationships among a set of related concepts, empirically clustering the concepts into conceptually distinct categories, and rating them on multiple dimensions.

The Concept Systems Global MAX™ [12] web platform was used for the panel’s sorting and rating tasks and data analysis. A more detailed introduction to concept mapping can be found in Trochim and Kane [13]. For the sorting task, participants were asked to sort virtual cards for each of the 73 strategies, accompanied by their definitions, into piles as they deemed appropriate. Participants were also asked to rate each strategy’s importance and feasibility on scales ranging from 1 (relatively unimportant/not at all feasible) to 5 (extremely important/extremely feasible). These global ratings were prefaced by the following instructions: “Please select a number from 1 to 5 for each discrete implementation strategy to provide a rating in terms of how important (feasible) you think it is. Keep in mind that we are looking for relative importance (feasibility), use all the values in the rating scale to make distinctions.” Participants were able to select which set of activities they wanted to do first and were also able to work on the sorting and rating activities over multiple online sessions, at their convenience, before submitting their responses.
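The sorting data collected in this way are conventionally summarized as a co-occurrence matrix that records, for each pair of strategies, how many panelists placed the two in the same pile; this matrix is the input to the multidimensional scaling described below. A minimal sketch in Python, using hypothetical toy sorts rather than the study’s actual data:

```python
from itertools import combinations

def cosort_matrix(sorts, n_items):
    """Count, for each pair of items, how many participants placed
    them in the same pile. `sorts` holds one sorting per participant;
    each sorting is a list of piles (lists of item indices)."""
    counts = [[0] * n_items for _ in range(n_items)]
    for piles in sorts:
        for pile in piles:
            for i, j in combinations(pile, 2):
                counts[i][j] += 1
                counts[j][i] += 1
    return counts

# Toy example: 4 strategies, 2 participants (hypothetical data).
sorts = [
    [[0, 1], [2, 3]],   # participant A made two piles
    [[0, 1, 2], [3]],   # participant B made two different piles
]
m = cosort_matrix(sorts, 4)
```

The Concept Systems Global MAX™ platform performs this aggregation internally; the sketch only illustrates the underlying bookkeeping.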

Multidimensional scaling and hierarchical cluster analyses were conducted to produce visual representations of the relationships among the strategies. Descriptive statistics for the importance and feasibility ratings were calculated. Each strategy’s importance and feasibility score was plotted on a graph. The resulting scatterplot was divided into four quadrants or “Go-zones” (i.e., I, II, III, IV) using the mean of each dimension. For example, quadrant I contains strategies with values above the means on both dimensions. The Go-zone quadrants column in Table 1 reflects the combined relative importance and feasibility of each strategy.
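The Go-zone assignment described above amounts to a mean split on each rating dimension. A minimal sketch, using hypothetical ratings rather than the study’s data:

```python
def go_zone(importance, feasibility):
    """Assign each strategy to a Go-zone quadrant by mean split:
    I = above both means; II = high feasibility, low importance;
    III = below both means; IV = high importance, low feasibility."""
    mi = sum(importance) / len(importance)
    mf = sum(feasibility) / len(feasibility)
    quads = []
    for imp, fea in zip(importance, feasibility):
        if imp >= mi and fea >= mf:
            quads.append("I")
        elif imp < mi and fea >= mf:
            quads.append("II")
        elif imp < mi and fea < mf:
            quads.append("III")
        else:
            quads.append("IV")
    return quads

# Toy ratings for four hypothetical strategies.
quads = go_zone([4.5, 2.0, 4.2, 2.1], [4.0, 4.3, 2.0, 2.2])
```

Strategies exactly at a mean are treated here as above it; the study’s software may break ties differently.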

Table 1 A summary of the 73 implementation strategies, organized by cluster with mean importance and feasibility ratings


Experts who participated in the concept mapping and rating tasks were affiliated with academic or healthcare institutions in the United States (n = 34) or in Canada (n = 1). Thirty-two of the 35 experts provided valid sorts (>75 % of strategies sorted), and 30 provided importance and feasibility ratings for all strategies. Sixty-three percent of participants had exclusive expertise in IS, 29 % were experts in both IS and clinical practice, and 8 % indicated clinical practice expertise only. Sixty-nine percent of participants had some affiliation with the US Department of Veterans Affairs (VA), most of whom also held academic appointments in social science or health-related schools or departments.

Figure 1 presents a point map that visually represents the relationships among the 73 implementation strategies, with each point on the map representing a strategy. The strategies are numbered to aid in cross-referencing the spatial relationships of the points on the map with their labels enumerated in Table 1. All but two strategies were sorted as being conceptually distinct. Strategies #66 (Use capitated payments) and #70 (Use other payment schemes) were always sorted together. Two other strategies were proximal to one another though they were sorted together by only 4 of 32 panelists (#35 Identify and prepare champions and #57 Recruit, designate, and train for leadership), indicating that their proximity on the map reflects similarity in how they relate to the other strategies rather than direct similarity to one another.

Fig. 1
figure 1

Point and cluster map of all 73 strategies identified in the ERIC process. The map reflects the product of an expert panel (valid responses, n = 32) sorting 73 discrete implementation strategies into groupings by similarity; each strategy is depicted by a yellow dot accompanied by a number for cross-referencing with the strategies enumerated in Table 1. Spatial distances reflect how frequently the strategies were sorted together as similar. In general, the closer two points are together, the more frequently those strategies were sorted together. Strategies distal from one another were infrequently, if at all, sorted together. These spatial relationships are relative to the sorting data obtained in this study, and distances do not reflect an absolute relationship (i.e., a 5-mm distance in the present map does not reflect the same relationship as a 5-mm distance on a map from a different data set). The legend provides the label for each of the nine clusters of strategies. Dotted lines within the Develop stakeholder interrelationships cluster indicate how two separate clusters were merged into one large cluster due to conceptual similarity among their items. Dotted lines extending between other clusters archive the reassignment of strategies from their original cluster to a neighboring cluster with which there was a better conceptual fit (i.e., strategies #48, #58, and #62)

The final clusters were developed over 3 weeks of deliberations by the ERIC investigative team. A 13-cluster starting point was selected because it is one standard deviation above the mean number of clusters typically obtained in concept mapping [14]. In this study, 69 % of respondents sorted statements into 13 or fewer piles. We sequentially reviewed cluster merges and achieved consensus to merge clusters down to nine conceptually distinct clusters. For example, two clusters shown in pale green at the center bottom in Fig. 1 (separated by dashed lines) were merged to form a single cluster labeled Develop stakeholder interrelationships, as the original clusters were judged as not sufficiently conceptually distinct.
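The sequential review of cluster merges follows the logic of agglomerative hierarchical clustering: starting from many small clusters, the two most similar clusters are merged at each step until the desired number remains. A minimal average-linkage sketch on a toy distance matrix (illustrative only; the study used the Concept Systems software for the actual analysis):

```python
def agglomerate(dist, k):
    """Average-linkage agglomerative clustering on a symmetric
    distance matrix `dist`, merging until `k` clusters remain.
    Returns the clusters as sorted lists of item indices."""
    clusters = [[i] for i in range(len(dist))]

    def linkage(a, b):
        # mean pairwise distance between the members of two clusters
        return sum(dist[i][j] for i in a for j in b) / (len(a) * len(b))

    while len(clusters) > k:
        # find and merge the closest pair of clusters
        _, p, q = min((linkage(clusters[p], clusters[q]), p, q)
                      for p in range(len(clusters))
                      for q in range(p + 1, len(clusters)))
        merged = sorted(clusters[p] + clusters[q])
        clusters = [c for r, c in enumerate(clusters) if r not in (p, q)]
        clusters.append(merged)
    return sorted(clusters)

# Toy matrix: items 0/1 are close, items 2/3 are close, pairs are far apart.
dist = [[0, 1, 9, 9],
        [1, 0, 9, 9],
        [9, 9, 0, 1],
        [9, 9, 1, 0]]
```

Stopping the merge loop at successively smaller `k` values corresponds to the team’s review of candidate cluster solutions from 13 down to 9.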

When the team reviewed the clusters for conceptual clarity, three proposals came forward to move individual strategies to neighboring clusters. First, #62 (Start a dissemination organization) was moved from the Engage consumers cluster to the Change infrastructure cluster, as it was judged more similar to infrastructure support for a practice change than engaging consumers. Second, #48 (Organize clinician implementation team meetings) was moved to the Develop stakeholder interrelationships cluster from Adapt and tailor to the context, as the former has greater interpersonal focus than the latter. And finally, #58 (Remind clinicians) was moved to the Support clinicians cluster from Provide interactive assistance because it is more administrative than interactive in focus. Unanimous consensus was reached for the final cluster arrangements. Additional file 1 provides a cluster-by-cluster visual tour of the concept map.

A multi-step process was used to determine labels for the final clusters. The list began with labels provided by expert panel members for their clusters that were most similar to the final cluster solutions. This list was supplemented with highly descriptive labels identified from the investigative team’s meeting minutes from cluster solution deliberations. Proposed criteria for developing cluster labels (Table 2) were introduced for team comment by one of the authors (LJD) along with suggested label revisions. These criteria were helpful in structuring iterative discussion among team members, the result of which was voted upon by the team and unanimously adopted.

Table 2 Guidelines for cluster labels

Table 1 presents a summary of the 73 implementation strategies, organized by cluster with mean importance and feasibility ratings. There was a strong relationship (r = 0.7) between the feasibility and importance ratings, meaning that most strategies fell within either quadrant I (high importance and feasibility) or III (low importance/feasibility). However, there were still a number of strategies that were viewed as important but not as feasible (12 %, e.g., Access new funding), or feasible but less important (15 %, e.g., Remind clinicians). Clusters of strategies that are more immediate and concrete and are potentially more in the control of those tasked with supporting change (e.g., Use evaluative and iterative strategies, Train and educate stakeholders) tended to have higher importance and feasibility ratings. Clusters that are more strategic, but also potentially involve changing well-established systems (e.g., Change infrastructure, Utilize financial strategies), tended to have lower ratings. Figure 2 presents a graphic of the Go-zone data.
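The reported association between the two rating dimensions is a Pearson correlation; a minimal sketch of the computation on toy vectors (not the study’s ratings):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length rating vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den

# Perfectly linear toy data yields r = 1.0; a strong positive r, as
# in the study, means strategies rated important also tend to be
# rated feasible, concentrating points in quadrants I and III.
r = pearson_r([1, 2, 3], [2, 4, 6])
```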

Fig. 2
figure 2

Go-zone plot for all 73 strategies based on expert ratings. Note. The ranges of the x and y axes reflect the mean values obtained for all 73 of the discrete implementation strategies for each of the rating scales. The plot is divided into quadrants on the basis of the overall mean values for each of the rating scales. Quadrant labels are depicted with roman numerals next to the plot. Strategies in quadrant I fall above the mean for both the importance and the feasibility ratings; thus, these are the strategies with the highest consensus regarding their relatively high importance and feasibility. Conversely, quadrant III reflects the strategies where there was consensus regarding their relatively low importance and feasibility. Quadrants II and IV reflect strategies that were relatively high in feasibility or importance, respectively, but low on the other rating scale


Results from this study provide initial validation for viewing the 73 implementation strategies as conceptually distinct. Cluster analyses of the concept mapping data support grouping strategies into nine clusters which have practical heuristic value for those looking to the ERIC compilation of implementation strategies for guidance. The importance and feasibility ratings for the strategies supported the formation of Go-zone quadrants that can be used to help decision makers prioritize which strategies to use when planning an implementation initiative.

While the concept mapping strategy used in this study represents a strong methodological approach to evaluating whether the 73 implementation strategies are conceptually distinct and to organizing them by theme and potential applicability (i.e., Go-zone analysis), there are notable limitations. Recruitment was restricted to time zones within the continental United States to minimize scheduling conflicts for elements of the ERIC project that required real-time interactions among participants. Thus, all but one of the 35 participants were from the United States, and 69 % had some affiliation with the VA. While concept maps with 30 or more participants are considered to be highly reliable [14], different results might be obtained if stakeholders from outside the United States have practice contexts that alter perceptions of these strategies’ interrelationships or ratings of their perceived importance and feasibility.