Background

Implementation scientists recognize that determinants (barriers or facilitators) within the local context affect implementation efforts. Assessing context before, during, and/or after implementation is important so that implementers can use this information to identify optimal strategies to address barriers and leverage facilitators [1]. Easy-to-use quantitative context assessment tools, rooted in the concepts and evidence base of implementation science, need to be developed. Such tools rely on frontline clinicians and staff accurately understanding what is being asked within assessment instruments. However, these individuals are often unfamiliar with the language used in these assessments or how it applies to their own situation. Assessments should be rooted in theoretical constructs yet also be conceptually clear, using everyday language.

The Consolidated Framework for Implementation Research (CFIR) is a determinant framework designed to identify barriers and facilitators that potentially impact implementation outcomes. Though frameworks like the CFIR seek to provide clarity and consistency in the terms and definitions for each construct, the language used can be highly technical. The dominant approach to identifying barriers and facilitators has relied on researchers conducting assessments based on information elicited through qualitative interviews; this information is analyzed, interpreted, and used to develop tailored strategies, with guidance for local practitioners to help them navigate their context for successful implementation [1,2,3,4,5]. Measurement instruments seek to elicit quantitative assessments of barriers and facilitators because this can be a more efficient way to assess context. However, these instruments are often exceedingly long or require expertise and training to use [6,7,8,9,10,11]. Frontline clinicians and staff who do the work of implementation may misunderstand or misapply questions designed to elicit potential barriers and facilitators; they are often more familiar with quality improvement language [12,13,14,15,16].

Pragmatic measures of context are needed. Glasgow and Riley define pragmatic measures as being important to stakeholders, low burden (usually indicated by a low number of survey items), actionable, and sensitive to change [17]. Stanick et al. add that pragmatic measures are feasible, low cost, and brief [18]. Guided by these principles, an abbreviated pragmatic context assessment tool (pCAT) was developed based on the CFIR. This instrument has been available online (www.CFIRguide.org) and has attracted a high level of interest, with nearly 50 requests over approximately 18 months (2021–2022). Thus, the purpose of this paper is to document the methods used to develop the pCAT.

Methods

Our research team developed an abbreviated context assessment tool based on CFIR constructs that repeatedly arose as potential barriers or facilitators in implementation (see Table 1) [19,20,21,22,23]. This tool was piloted with six frontline improvement teams; the teams collectively comprised 21 individuals who participated in the Learn. Engage. Act. Process. (LEAP) Program [23]. LEAP is a 26-week, virtual, coach-led, structured learning program designed to develop competency in applying quality improvement methods and techniques among frontline clinicians and staff. The goal was for teams to use the assessment tool to identify potential barriers and facilitators to implementing improvements, so they could better understand the micro-level context within which they were working to improve processes and programs. We had concerns with the piloted version, however, because many responses did not reflect the actual barriers and facilitators observed by, and reported to, the LEAP coaches who worked closely with the frontline teams. We took the opportunity to pause, reflect, and update the pCAT.

Table 1 List of CFIR constructs included in Think Aloud survey development

Think Aloud method

The updated version of the pCAT (see Table 1) was incorporated into the interview guide with the goal of engaging individuals using a Think Aloud method [24], which asks participants to verbalize their thoughts as they consider how to respond to questions in the assessment tool. Specifically, as participants responded, we asked them to verbalize their considerations and interpretations and to ask questions or seek clarifications as needed. We encouraged participants to verbally identify areas of disconnect, misinterpretation, and misunderstanding with the language and concepts being used. Interviewees were instructed to read each item out loud and to say everything that came to mind. This included thoughts about the CFIR construct itself, the formatting of the tool, the language used to frame each construct, and their actual response as it related to their local quality improvement context. Interviewees were informed that the interviewer might periodically ask follow-up questions but that capturing their stream-of-consciousness interpretation of the tool was the primary goal. Iterative changes to the pCAT were made based on interviewee feedback (see Fig. 1).

Fig. 1 Think Aloud interview procedure

Participants

Participants were members of teams that took part in the LEAP quality improvement learning program after its initial pilot. Potential participants were invited to a telephone interview approximately 6 months after completing LEAP.

Interviews

Interviews were conducted from March 2018 through August 2019, lasted about an hour, and were audio recorded and transcribed verbatim.

Coding and analysis

Qualitative descriptions of barriers and facilitators in the transcripts were coded using CFIR constructs as preliminary codes. Additional codes were developed to capture more specificity when needed (e.g., adding Time as a subconstruct of Available Resources). As each interview was completed, language in the pCAT was iteratively updated as needed, based on input from each participant.
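To make this coding structure concrete, the sketch below shows one way the codebook described above could be represented, with CFIR constructs as preliminary codes and subconstructs added as they emerge during coding. This is a hypothetical illustration in Python, not the project's actual NVivo codebook, and the specific construct names are drawn from the text only as examples.

```python
# Minimal sketch of the codebook logic described above (illustrative only):
# CFIR constructs serve as preliminary codes; subconstructs are added when
# more specificity is needed during coding.

codebook = {
    "Patient Needs & Resources": [],
    "Networks & Communications": [],
    "Available Resources": [],
}

def add_subconstruct(codebook, construct, subconstruct):
    """Register a new subconstruct under a preliminary code, creating the code if needed."""
    subs = codebook.setdefault(construct, [])
    if subconstruct not in subs:
        subs.append(subconstruct)

# e.g., early interviews prompted adding Time as a subconstruct of Available Resources
add_subconstruct(codebook, "Available Resources", "Time")
print(codebook["Available Resources"])  # ['Time']
```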

NVivo 12 Pro was used to facilitate coding [25]. Interviews were conducted by CHR. CHR and LJD independently examined early interview transcripts and held consensus discussions to establish initial coding and preliminary findings; all subsequent coding and iterative updates of the pCAT were done by CHR [26]. The Consolidated Criteria for Reporting Qualitative Studies checklist was used to guide the reporting of data collection and analysis activities [27].

Human protections

This work was developed as a non-research activity (i.e., conducted under the authority of Veterans Health Administration (VHA) operations and therefore without Institutional Review Board approval) and complies with the guidance on authorization of non-research manuscripts outlined in VHA Program Guide 1200.21: VHA Operations Activities That May Constitute Research [28]. All authors attest that the activities that resulted in this manuscript were conducted as non-research activities under the authority of the VHA National Center for Health Promotion and Disease Prevention.

Results

Thirty-eight invitations were sent to individuals on 34 teams that participated in LEAP after the initial pilot; 27 interviews were completed (71% response rate). Two interviews included two individuals from the same team at their request; the rest were one-on-one. Interviews averaged 47 min (range 27–63 min), and all participants completed their interview. Additional file 1 contains the final version of the abbreviated pragmatic context assessment tool (pCAT) based on results from the interviews. The pCAT evolved as interviews progressed, based on the experiences and input of the first nine people interviewed; the remaining 18 people expressed no challenges in responding to the questions, and their responses were in line with the intent of each question, indicating stability of the tool. The following sections highlight key themes that influenced changes made to the context assessment tool.

Specificity of the change: question stem

The first task for participants was to describe the change or improvement being implemented. Initially, the guidance was: “Please enter your problem area (area for improvement). This should reflect whatever topic you and your team are currently considering. It does not have to be final (e.g., The majority of patients fail to show up for scheduled orientation).” However, participants found this guidance too broad and speculative, and they struggled to provide assessments. It was easier for participants to anchor their responses to a specific, recent, or ongoing improvement or implementation effort as they considered each construct. Participants observed that a given construct could be a facilitator in one improvement effort and a barrier in another, affirming that context, and knowing exactly what the change is, matters. For example, communication may be a facilitator when the implementation involves people from the same service line but becomes a barrier when the change requires communication and cooperation across service lines. Attempting to rate CFIR constructs in general terms was much more difficult and far less useful than critically assessing the context of a specific planned or ongoing implementation.

Thus, we edited the “stem” to be more specific and concrete. The final guidance read: “We’ve found that it’s best to think concretely about a planned or on-going implementation (as opposed to the more general implementation environment). Include the specifics of the implementation/improvement project here.” We allowed flexibility in interpreting “changes” as either “implementation” or “improvement” because both involve implementing a planned change.

Identifying barriers versus facilitators

For each construct, participants were asked whether they agreed or disagreed with a statement; agreeing meant the construct was a facilitator, and disagreeing meant it was a barrier. Participants could also respond “neutral.” However, participants had difficulty indicating a level of agreement and instead wanted to answer yes/no. To address this, we added explanatory text for Agree (this means the item is a potential facilitator) and Disagree (this means the item is a potential barrier). This change helped participants respond more accurately.

Response options

After indicating whether each construct was a barrier, facilitator, or neutral, participants were asked to assess its potential impact on implementation. Choices included three levels of impact (low, moderate, and high). Participants had difficulty differentiating between three levels and understanding how to assess impact (or influence); they were more comfortable assessing the effect (or consequence). Thus, we simplified the responses to two options: “Weak/no effect” and “Strong effect.”
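As a concrete illustration of the final two-part response format, the sketch below encodes an item response as a direction (Agree = potential facilitator, Disagree = potential barrier, or Neutral) plus an effect level. The class and field names are our own invention, assuming a simple electronic implementation; they do not reflect any actual pCAT software.

```python
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):
    AGREE = "potential facilitator"    # agreeing flags the construct as a facilitator
    NEUTRAL = "neutral"
    DISAGREE = "potential barrier"     # disagreeing flags the construct as a barrier

class Effect(Enum):
    WEAK_OR_NONE = "weak/no effect"
    STRONG = "strong effect"

@dataclass
class ItemResponse:
    construct: str
    direction: Direction
    effect: Effect

# Example: a respondent flags Available Resources as a strong barrier.
response = ItemResponse("Available Resources", Direction.DISAGREE, Effect.STRONG)
print(f"{response.construct}: {response.direction.value} ({response.effect.value})")
```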

CFIR construct assessments

Six of the ten CFIR constructs in the final version of the pCAT were unchanged from the version initially used in the Think Aloud interviews, including Patient Needs & Resources, Networks & Communications, Compatibility, Goals & Feedback, and Reflecting & Evaluating. The remaining four constructs shifted from a future focus (e.g., “we will have…”) to the current state (e.g., “we have…”). Additional changes are described below.

Relative advantage and tension for change

References to “key people” in these constructs were too vague for respondents. We revised the language to refer to “people here” so respondents could tailor their responses based on their knowledge of the people most relevant to assessing each construct; this appeared to resolve difficulties in subsequent interviews.

Leadership engagement

The pCAT initially had a single question about “leaders here.” Participants had difficulty responding to this question without first considering the levels and types of leaders they work with (who may or may not have been involved in the improvement) and then determining what they knew about each leader’s degree of engagement. Based on this feedback, we split CFIR’s Leadership Engagement construct into two levels of leadership: (1) “leaders I work with most closely” and (2) “higher level leaders.” This change enabled more accurate responses.

Available resources

pCAT Version 1.0 included a single question about Available Resources. Based on coaches’ experiences with LEAP teams prior to our Think Aloud interviews, we separated this single question into three separate questions in pCAT Version 2.0. With this change, respondents had no difficulty answering separate questions about time and space. For “other needed resources,” respondents identified a range of resources that might be needed, including incentives for program participants and a discretionary budget. Version 2.0 also incorporated current-state language instead of future-focused language, as described above.
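The sketch below illustrates how one CFIR construct can map to multiple pCAT items after these revisions, with Leadership Engagement split into two items and Available Resources into three. The item wordings are paraphrased from the descriptions above, not quoted from the instrument itself.

```python
from collections import defaultdict

# Illustrative construct-to-item mapping after the revisions described above.
# Item wording is paraphrased, not the instrument's actual text.
pcat_items = [
    ("Leadership Engagement", "Leaders I work with most closely are engaged in this change."),
    ("Leadership Engagement", "Higher level leaders are engaged in this change."),
    ("Available Resources", "We have the time needed for this change."),
    ("Available Resources", "We have the space needed for this change."),
    ("Available Resources", "We have other needed resources (e.g., a discretionary budget)."),
]

items_by_construct = defaultdict(list)
for construct, wording in pcat_items:
    items_by_construct[construct].append(wording)

print(len(items_by_construct["Leadership Engagement"]))  # 2
print(len(items_by_construct["Available Resources"]))    # 3
```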

Other suggested improvements

Participants were asked about any additional barriers or facilitators. One participant suggested asking about longer-term sustainment instead of focusing on short-term change. Another participant suggested adding open-text space to allow respondents to explain and justify their responses and to reflect on variation or disagreement among team members.

Discussion

Our Think Aloud approach engaged frontline clinicians in developing an abbreviated practical context assessment tool that uses plain language. The pCAT comprises 14 questions assessing ten CFIR constructs that span four of the five framework domains: Innovation Characteristics, Outer Setting, Inner Setting, and Process (a copy is provided in Additional file 1). These constructs are among those most frequently reported as key determinants of implementation outcomes using the CFIR [2, 29]. Some of these constructs are also important to Lean quality improvement principles, such as Goals and Feedback (i.e., alignment with objectives), Reflecting and Evaluating (e.g., using data to track outcomes), and Networks and Communications (e.g., open lines of dialogue) [30].

Context assessments are rarely done by practitioners within their own setting [31]. One reason is that measurement instruments often require expertise and are burdensome to apply [18, 31]. In deference to the expertise and knowledge of frontline clinicians within their own settings [32], and in acknowledgment of their limited time, practical context assessment tools are needed. Such tools should provide brief ratings of context that prompt reflection and problem-solving by frontline teams engaged in improvement, and they may help increase response rates for researchers and implementers who rely on these assessments to design strategies for successful implementation [1, 33].

Stanick et al. developed objective criteria for assessing the pragmatism of a measurement instrument [18], dividing them into “stakeholder-facing” and “objective” criteria. We applied each of the five objective criteria, each of which uses a six-point rating scale (− 1 to + 4; see Table 2). Based on these objective criteria, the pCAT is relatively pragmatic, with scores of + 3 or + 4 on all criteria.

Table 2 Objective pragmatic rating criteria
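As a worked illustration of this rating exercise, the sketch below scores a measure on five objective criteria using the − 1 to + 4 scale and checks whether every score reaches + 3. The criterion labels are placeholders (see Table 2 for the actual criteria), and the scores simply mirror the statement above.

```python
# Placeholder criterion labels; see Table 2 for the actual objective criteria.
# Scores mirror the text: the pCAT scored +3 or +4 on every criterion.
pcat_scores = {
    "criterion_1": 4,
    "criterion_2": 4,
    "criterion_3": 3,
    "criterion_4": 4,
    "criterion_5": 3,
}

# Each criterion uses a six-point scale from -1 to +4.
assert all(-1 <= score <= 4 for score in pcat_scores.values())

# "Relatively pragmatic" here means every criterion scored +3 or higher.
print(all(score >= 3 for score in pcat_scores.values()))  # True
```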

The pCAT is available online [30]. It requires no specialized training to administer and can be completed electronically or on paper.

The pCAT has limitations. First, it is an abbreviated assessment and is not designed to comprehensively assess all CFIR constructs; though construct coverage is limited, the constructs included align with the updated version of the CFIR [34]. Second, the pCAT does not provide guidance about what respondents should do with the information elicited. Within the LEAP program [23], coaches worked with teams and highlighted the value of identifying barriers and facilitators when implementing changes, so that barriers can be avoided or minimized and facilitators can be leveraged for success. Waltz et al. list recommended strategies that may best address each CFIR construct that manifests as a barrier [1]; Table 3 lists the most highly endorsed implementation strategies for each of the ten pCAT constructs. Third, each CFIR construct is assessed with a single question, and the tool does not follow a psychometric development paradigm. The pCAT is offered as a brief practical tool for use by frontline teams and/or coaches or facilitators to encourage collective understanding of local barriers and facilitators and to generate discussion about potential strategies based on this information. Finally, the content and structure of the final version are based on the experiences of 27 individuals engaged in a quality improvement learning program; all respondents were frontline clinicians who were members of quality improvement teams embedded in a VHA medical center-based weight management program.

Table 3 List of implementation strategies recommended to address pCAT constructs
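To illustrate how pCAT results might feed into strategy selection, the sketch below looks up candidate implementation strategies for constructs flagged as barriers, in the spirit of Waltz et al. [1]. The mapping entries are invented placeholders, not the actual contents of Table 3.

```python
# Invented placeholder mapping; the real construct-to-strategy pairings are in Table 3.
strategy_lookup = {
    "Available Resources": ["<most-endorsed strategy for resource barriers>"],
    "Leadership Engagement": ["<most-endorsed strategy for leadership barriers>"],
}

def suggest_strategies(construct, is_barrier):
    """Return candidate strategies only for constructs assessed as barriers."""
    return strategy_lookup.get(construct, []) if is_barrier else []

print(suggest_strategies("Available Resources", is_barrier=True))
```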

Conclusion

The pragmatic context assessment tool (pCAT) offers an abbreviated, pragmatic approach to assessing barriers and facilitators in clinical settings. It is short (14 items), available online (www.cfirguide.org), and designed to draw on the expertise and knowledge of the people who work on the frontline and are most familiar with their own clinical context.