Introduction

In the United States, chronic disease is the most common cause of disability and death, with cancer, heart disease, and cerebrovascular disease alone accounting for over half of all deaths [1] and consuming over 80% of the healthcare budget [2]. While community-level programs and policies for prevention and control hold great promise for reducing the burden of chronic disease [3-5], community prevention programs are increasingly pressured to stretch staff and budgets across a broad range of activities and responsibilities, underscoring the need to direct scarce resources to the most effective programs.

Evidence-based interventions (EBIs) in chronic disease prevention comprise a growing number of programs and policies that have been research-tested and are ready for dissemination to community and public health practice settings (e.g., the Guide to Community Preventive Services [a.k.a. the Community Guide], the Guide to Clinical Preventive Services, Cancer Control PLANET, and Research-tested Intervention Programs) [6-9]. However, there is reason to believe that the best available evidence is not reaching or being integrated into all practice settings. In a survey of state and local public health practitioners, only 30% of practitioners at the local level had heard of the Community Guide [10], a standard for recommended EBIs in community health [11]. Among state-level practitioners, almost 90% of whom had heard of the Community Guide, much smaller proportions reported making changes to existing programs (20%) or new programs (35%) on the basis of Community Guide recommendations [10]. These data indicate that the challenge of improving uptake of EBIs extends beyond simply spreading awareness to include integration into practice settings, and they suggest that readiness for EBIs may vary across settings.

Conceptualizing public health organizational readiness for EBIs requires drawing from a broader understanding of how evidence is used in public health practice as well as the processes underlying the movement of research evidence into practice settings. Theoretical frameworks that lay the groundwork for these components of readiness include evidence-based public health [12] and the stages of innovation diffusion [13-17]. Evidence-based public health practice has been described as applying the best available, scientifically rigorous, and peer-reviewed evidence (i.e., EBIs), but it also includes using data and information systems systematically, applying program-planning frameworks, engaging the community in assessment and decision making, conducting sound evaluation, and disseminating what is learned to key stakeholders and decision makers [12]. The conceptualization of stages characterizing the process of uptake and integration of EBIs is grounded in Rogers’ theory of Diffusion of Innovations [14, 16] and the application of these tenets in program planning and evaluation, most notably as exemplified by the RE-AIM framework (i.e., reach, effectiveness, adoption, implementation, and maintenance) [15, 17]. Together, these frameworks and theoretical models can be applied to the development of constructs for characterizing the stages of a public health organization’s readiness for EBIs, from awareness to adoption, implementation, and maintenance.

One of the central challenges for moving EBIs for chronic disease prevention into community and public health practice settings may be the variability in organizational readiness, but the lack of a tool to measure organizational readiness for EBIs currently hinders the ability to test this hypothesis. The aim of this article is to describe the development, measurement properties, and potential uses of a novel survey instrument used to measure stages of organizational readiness for EBIs, designed to be brief and generalizable across chronic disease prevention program areas.

Methods

Study participants

Respondents were sampled to represent program areas in chronic disease prevention, on the premise that they could assess supports within the larger organization as they pertain to the ability to incorporate EBIs into practice. Because, to our knowledge, there is no current database of staff in specific chronic disease program areas in local public health departments and community organizations across the country, we relied on a combination of purposive and snowball sampling to recruit participants. An initial set of respondents was recruited directly through purposive sampling of members of the National Association of Chronic Disease Directors (NACDD), state health department contacts, and their coworkers in the asthma control, diabetes prevention, obesity prevention, and tobacco control fields. After initial contacts, further respondents were selected through snowball sampling [18]. A subset of obesity prevention respondents represented organizations to which the authors are providing technical assistance on dissemination. These 19 organizations were funded through the Missouri Foundation for Health under the Healthy & Active Communities grant to create Model Practice Building (MPB) interventions. The survey (hosted and administered by Qualtrics, Inc.) was distributed by email to all respondents known to the researchers. To avoid having the invitation ignored as email from an unknown sender, organizations that had not had previous contact with the researchers were first sent a personal email or contacted by telephone to identify the relevant staff members to complete the online survey.

The final sample, collected between February and June 2010, included respondents from state health departments, local health departments, and community-based organizations. Of the 393 individuals contacted, 277 responded to the survey. After removing 34 respondents with incomplete data, the final analytic sample comprised 243 individuals representing 164 organizations, for a response rate of 62%. A subset of respondents completed the survey a second time one to two months after the initial administration (n = 92 [65 organizations], response rate 59%) in order to assess test-retest reliability. The study was reviewed and approved by the Washington University in St. Louis Institutional Review Board.

Measures

The survey instrument was developed based on the underlying staged frameworks articulated in the Diffusion of Innovations theory [16], RE-AIM [17], and a hybrid of these frameworks in Briss et al. [19]. The framework was further developed in the current project through a report on the qualitative factors that facilitate or inhibit movement along a staged framework [20].

Specific items for the survey instrument were developed primarily from three sources (which themselves cited Rogers [16] in addition to the other frameworks mentioned above): Steckler and colleagues’ study of the dissemination of tobacco prevention curricula [21], Jacob and colleagues’ study of evidence-based interventions among state-level public health practitioners [22], and the Center for Tobacco Policy Research Program Sustainability Assessment Tool [23]. From these questionnaires, and based on input from a group of experts assembled for this study, an initial draft of 54 items was developed. The questionnaire was further refined based on the results of cognitive response testing [24-27] with a group of key stakeholders from funding agencies and practice settings (n = 11). In cognitive response testing, we sought to determine the following: (1) question comprehension (i.e., What does the respondent think the question is asking? What do specific words or phrases in the question mean to the respondent?); (2) information retrieval (i.e., What information does the respondent need to recall from memory in order to answer the question? How do they retrieve this information?); and (3) decision processing (i.e., How do they choose their answer?).

Stages were operationalized based on the items selected during the survey development process described above in order to measure stages as latent constructs, with a focus on using as few items as possible to retain the user-friendliness of the questionnaire. The resulting four stages were defined as follows: (1) awareness as recognition of need and availability of sources for EBIs, which included four items that assessed whether the community considered the health issue to be a problem, its view of solutions, and the extent of awareness of EBIs among agency leadership and staff; (2) adoption as decision making based on evidence, which included five items that assessed the extent of using evidence in decision making, support from leadership, and access to technical assistance; (3) implementation as carrying out and adapting interventions to meet community needs, which included five items that assessed the extent to which the agency is able to adopt EBIs, has the resources and skills needed for implementation, and has support from leadership and the community for implementing EBIs; and (4) maintenance as the embedded activities and resources the organization has to support ongoing EBIs, which included nine items that assessed the extent to which the agency assesses community health needs, conducts evaluation of interventions and disseminates findings, has a network of partners and diverse funding sources, and has policies and procedures to ensure proper allocation of funding. A final contextual domain, “organizational climate” (three items), separate from the readiness scale domains, assessed the ability of the organization (independent of any particular intervention) to react, change, and adapt to new challenges, needs of the community, and a changing evidence base. In total, 26 items comprised the four stage domains and the additional contextual domain; all were rated on a seven-point Likert scale. The full survey instrument is available as an appendix (Additional file 1).
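For readers who want to work with the resulting data, a minimal scoring sketch is given below. It assumes a flat table of 1-7 Likert responses and uses placeholder item codes (aw1, ad1, etc.) rather than the actual item wording in Additional file 1; it is illustrative only.

```python
import pandas as pd

# Hypothetical item-to-domain map mirroring the item counts described above
# (4 awareness, 5 adoption, 5 implementation, 9 maintenance, 3 climate items).
# The item codes are placeholders, not the published survey items.
DOMAINS = {
    "awareness":      [f"aw{i}" for i in range(1, 5)],
    "adoption":       [f"ad{i}" for i in range(1, 6)],
    "implementation": [f"im{i}" for i in range(1, 6)],
    "maintenance":    [f"mn{i}" for i in range(1, 10)],
    "climate":        [f"cl{i}" for i in range(1, 4)],
}

def score_scales(responses: pd.DataFrame) -> pd.DataFrame:
    """Return mean 1-7 scale scores per respondent (rows) for each domain (columns)."""
    return pd.DataFrame(
        {domain: responses[items].mean(axis=1) for domain, items in DOMAINS.items()}
    )
```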

Confirmatory factor analysis

Data were analyzed using a series of confirmatory factor analyses (CFAs) in SPSS AMOS 16.0 (IBM, Armonk, NY). We chose a confirmatory rather than an exploratory approach because we identified items for each stage a priori and preferred a more theory-driven test of our model [28, 29]. Full-information maximum likelihood (FIML) estimation was used to include all available data. FIML is the recommended estimation method when data are missing at random and may be less biased than other multivariate approaches when missing data are not ignorable [30-32].
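To make the modeling step concrete, the sketch below specifies an a priori correlated four-factor CFA in lavaan-style syntax using the open-source semopy package; this package choice is an assumption for illustration (the analysis reported here was conducted in SPSS AMOS with FIML), and the item codes are the same placeholders used above rather than the actual survey items.

```python
import pandas as pd
from semopy import Model, calc_stats  # assumes the semopy SEM package is installed

# A priori four-factor CFA: each readiness stage is a latent factor measured by its
# items; covariances among the latent factors are estimated (correlated factors).
MODEL_DESC = """
awareness      =~ aw1 + aw2 + aw3 + aw4
adoption       =~ ad1 + ad2 + ad3 + ad4 + ad5
implementation =~ im1 + im2 + im3 + im4 + im5
maintenance    =~ mn1 + mn2 + mn3 + mn4 + mn5 + mn6 + mn7 + mn8 + mn9
awareness ~~ adoption
awareness ~~ implementation
awareness ~~ maintenance
adoption ~~ implementation
adoption ~~ maintenance
implementation ~~ maintenance
"""

def fit_cfa(data: pd.DataFrame):
    """Fit the CFA and return a table of fit statistics (chi-square, CFI, RMSEA, ...)."""
    model = Model(MODEL_DESC)
    model.fit(data)  # default ML estimation here; the paper used FIML in AMOS
    return calc_stats(model)
```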

We tested an a priori four-factor model, with each stage modeled as a latent factor, but also allowed for improvements and modifications, including alternative factor structures, adding error covariances, and removing poorly performing items (i.e., those with low factor loadings or cross-loadings). Correlations between factors also were examined. We used multiple fit indices to evaluate model fit: the chi-square/degrees-of-freedom ratio, the comparative fit index (CFI), and the root mean square error of approximation (RMSEA) with its associated 90% confidence interval. CFI values of 0.90-0.95 or above suggest adequate to good fit [33, 34], and RMSEA values <0.06 suggest good model fit [33]. Finally, after determining the final model structure for stages of readiness, we examined the correlations of organizational climate (modeled as a three-item latent factor) and having a university affiliation (modeled as a binary exogenous variable) with the factors from the final model.
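For reference, one common formulation of these two indices, as a sketch of what SEM software typically reports, is given below; here M denotes the fitted model, B the baseline (independence) model, and N the sample size.

```latex
\mathrm{RMSEA} = \sqrt{\frac{\max\left(\chi^2_{M} - df_{M},\, 0\right)}{df_{M}\,(N-1)}},
\qquad
\mathrm{CFI} = 1 - \frac{\max\left(\chi^2_{M} - df_{M},\, 0\right)}
                        {\max\left(\chi^2_{B} - df_{B},\; \chi^2_{M} - df_{M},\; 0\right)}
```

Some implementations use N rather than N-1 in the RMSEA denominator; this does not change the interpretation against the cutoffs cited above.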

Additional analyses were performed using Stata version 11 (StataCorp LP, College Station, TX) [35]. Descriptive statistics on characteristics of survey respondents were based on frequency distributions. Test-retest reliability was assessed with the intraclass correlation coefficient (ICC) [36]. Cronbach’s alpha was also computed for each of the scales to provide a commonly used metric of internal consistency. Finally, mean scale scores were compared across program types to provide a preliminary assessment of construct validity.
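As a concrete illustration of these reliability statistics, the sketch below computes Cronbach's alpha for a scale directly from its item responses and, for the test-retest subsample, a two-way random-effects ICC via the pingouin package (an assumption; any standard ICC routine would do). Column names are hypothetical.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for one scale; columns are the scale's items, rows are respondents."""
    items = items.dropna()  # listwise deletion for simplicity
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

def test_retest_icc(long_df: pd.DataFrame) -> float:
    """Two-way random-effects ICC (single measurement) for test-retest data in long
    format, with columns 'respondent', 'occasion' (1 or 2), and 'score'."""
    import pingouin as pg  # assumes pingouin is installed
    icc = pg.intraclass_corr(data=long_df, targets="respondent",
                             raters="occasion", ratings="score")
    return icc.loc[icc["Type"] == "ICC2", "ICC"].item()
```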

Results

Descriptive statistics of the respondents and their respective organizations and programs are provided in Table 1.

Table 1 Description of surveyed programs and respondents (n = 243)

Measurement model development

The measurement model was first tested based on the initially hypothesized correlated four-factor structure representing the four stages of organizational readiness. As Table 2 shows, the initial model had poor fit across all fit indices. In model 1, most modification indices were relevant to the awareness and maintenance scale items, which led us to recognize a “natural” split of those items (i.e., community and agency subscales of awareness, resource and evaluation subscales of maintenance). Second-order factors were considered but could not be tested with only two two-item factors per scale. Model 2 included six correlated factors: two awareness factors, two maintenance factors, and the initial adoption and implementation scales. Model fit improved but remained poor (Table 2). In model 3, additional modifications were made to model 2 based on modification indices. We deleted two items with low factor loadings (<0.40) and one additional item because of the large number of modification indices suggesting correlated errors with other items. Two error covariances were added where doing so improved model fit and where inspection of the items revealed possible instrument or method effects (e.g., items containing the same word or phrase). Fit indices for model 3 indicated good fit (Table 2).
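For illustration only, the respecification described above can be expressed in the same lavaan-style syntax used in the Methods sketch: awareness and maintenance are each split into two factors, three placeholder items are dropped, and residual covariances are added between item pairs with overlapping wording. The item assignments below are hypothetical; the actual final item-to-factor assignments are those reported in Table 3.

```python
# Illustrative model 3 (six correlated factors): three placeholder items (im5, mn8,
# mn9) are dropped and two residual covariances are added; covariances among the six
# latent factors are again estimated. Item codes are hypothetical (see Table 3 for
# the published assignments).
MODEL3_DESC = """
community_awareness    =~ aw1 + aw2
agency_awareness       =~ aw3 + aw4
adoption               =~ ad1 + ad2 + ad3 + ad4 + ad5
implementation         =~ im1 + im2 + im3 + im4
resource_maintenance   =~ mn1 + mn2 + mn3 + mn4
evaluation_maintenance =~ mn5 + mn6 + mn7
im2 ~~ mn2
aw3 ~~ ad1
"""
# Fitting and fit statistics proceed as in the Methods sketch:
#     m = Model(MODEL3_DESC); m.fit(data); calc_stats(m)
```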

Table 2 Measurement model development for scales based on stages of organizational readiness (n = 243)

All scale items and standardized factor loadings from model 3 are reported in Table 3. All items were significant indicators of their respective factor (p < .001). Intercorrelations between readiness scales in model 3 are shown in Table 4. Intercorrelations were generally highest between factors in adjacent stages, with the highest correlations between agency awareness and adoption (r = .83, p < .05) and between implementation and resource maintenance (r = .95, p < .05). Based on the high correlations, we explored the possible existence of a combined factor for these two sets of scales, but since model fit was worse they were retained as separate factors. The community awareness factor had the lowest correlations with other factors in the model.

Table 3 Standardized factor loadings from final structural equation model, with initial model factor loadings included for comparison
Table 4 Intercorrelations among readiness scales

Performance of readiness scales

Internal reliability estimates based on Cronbach’s alpha are presented in Table 5 for the final, modified scales. Alphas were generally acceptable for all scales, except for the relatively low value for the resource maintenance scale. Test-retest reliability, as measured by the ICC, was found to be good to moderate across scales [37], ranging from 0.71 (adoption) to 0.47 (agency awareness).

Table 5 Characteristics of revised readiness scales and mean summary scores across program types (n = 243)

Mean scores derived from the revised scales of the final model were examined across program types (Table 5). The comparisons were interpreted against a priori expectations of program status with respect to the readiness scales. Because of prior work with the MPB grantee obesity program group (as described in Methods above), this group was expected to have higher readiness scores than most other groups, particularly for the later stages. Tobacco control programs were expected to have the next-highest level of readiness, given the longer history of established programs in this area. However, there were no differences across groups in awareness and adoption scores, and all groups scored relatively high on all scales except the community awareness scale, which was slightly lower but did not differ across groups. The MPB grantee group had higher implementation scores than all other groups (p < .05).

An analysis of the association of the readiness factors with organizational climate and university affiliation provided an additional assessment of the performance of the scales in relation to possible mediators of readiness (Table 6). The significant positive correlations between organizational climate and most of the readiness scales (weaker correlations were observed for community awareness) indicate that an organization with a climate favorable to evidence-based practice is more likely to have higher readiness. Having a university affiliation was not associated with any of the readiness factors.

Table 6 Associations between readiness stages from the final measurement model with organizational climate and university affiliation (n = 243)

Discussion

This article describes the initial assessment of the measurement properties of a novel survey instrument for staging organizational readiness for EBIs in community-based chronic disease practice settings. We found support for our theory-based measures of readiness stages using CFA, though some modification was needed before arriving at a final model. Most notably, the awareness scale was split into two separate factors representing community and agency awareness, and the maintenance scale was split into two separate factors representing resource and evaluation maintenance. The mean scale scores followed a hypothesized pattern across program types, with the group of grant-funded programs that had received additional support exhibiting higher mean scores, particularly for the later stages.

This staging survey is based on the underlying assumption that moving from one stage to another occurs successively, with strength in earlier stages serving as accumulated capacity to move to the next stage. Our analysis found some support for this, as correlations between adjacent stages were generally stronger than correlations between nonadjacent stages. However, this model may oversimplify the variation in the processes involved in moving from one stage to another. It may be that the organizational processes that lead to awareness and initial adoption not only differ from those that lead to implementation and maintenance but also that the underlying system itself differs. For example, it is possible that awareness is driven by a diffusion process, via linked networks of practitioners and organizations, whereas implementation and maintenance are driven more by the multilevel systems and structures that build capacity [38]. In our analysis, while the resource maintenance scale distinguished between programs according to a priori expectations of readiness, it comprised items with moderate to low factor loadings, which likely contributed to the moderate alpha for this final scale. It is possible that our scale did not fully capture the domains that comprise maintenance. Likewise, the awareness domain, which was split into subscales to achieve better model fit for the measurement instrument as a whole, resulted in two-item subscales, which is considered less than ideal for measuring latent constructs. Future studies that can afford to increase the number of survey items may benefit from adding items designed to measure the content domains within each stage more comprehensively.

We found that an organizational climate more favorable to using research evidence was related to all stages of readiness for evidence-based practice, with the caveat that our measure of implementation climate is new and its measurement properties have yet to be examined. This is in line with prior work suggesting that an organizational culture favorable to using evidence may moderate the effectiveness of strategies to increase the implementation of evidence-informed policies and programs in public health settings [39]. Organizational factors such as prioritizing the use of evidence in practice have been found to predict adoption of evidence-based practice in state health departments [40]. This suggests that, in addition to staging readiness for evidence-based practice, assessing organizational context with respect to the culture and climate toward using evidence will be equally important in determining appropriate implementation strategies in a given setting. If organizational culture and climate toward using evidence are weak, strategies to enhance that culture in general (e.g., engaging leadership) may be a reasonable starting point for implementation.

This survey instrument may be grouped with other types of organizational assessments in the dissemination and implementation literature [41], the most relevant being assessments of culture and climate [42] and readiness for change [43], although some distinctions are worth noting. Other organizational assessments include domains that do not have a clear sequenced relationship, whereas, by definition, our framework’s stages are sequentially dependent on each other. While our assessment of readiness for EBIs in chronic disease prevention settings may be similar in orientation to instruments that measure readiness for change, our survey is more specific to organizational readiness for EBIs anchored in particular chronic disease program areas rather than the more global orientation of readiness for change. As Weiner notes [43, 44], the question of whether our instrument is measuring “readiness” or “determinants of readiness” may be important to consider when determining appropriate applications.

Another key point of departure for our instrument is that we targeted public health chronic disease prevention settings rather than clinical healthcare settings. The emphasis in public health settings is on delivering interventions using a community-based approach rather than an individual, patient-oriented approach, which is reflected in the structure and function of the organization, from the program level on up [45]. These types of interventions tend to be carried out by teams within the organization and often extend to community partners. Our instrument reflects this, as the items comprising each stage include some assessment of agency staff, agency leadership, and the broader community (which could include individual community members as well as partners). Previous scales used to assess stages of EBI (i.e., “innovation”) diffusion in public health organizations have been substantially longer than the one created for this project (e.g., the Steckler et al. instrument [21] had more than 125 items).

This study has a number of strengths and limitations worth noting. To our knowledge, our survey is the first of its kind to be used in community and public health chronic disease prevention settings. The instrument is brief (less than six minutes average completion time in our sample) and easy to complete, as evidenced by the response rate, which suggests the feasibility of data collection for longitudinal assessments; our response rate was in line with previous work among local health officials (73% in Brownson et al.) [10]. The limited time and attention that chronic disease program staff can devote to a survey is no doubt shared across professional settings; this demonstration of the utility and validity of a brief instrument may therefore bolster efforts in other areas. We used a rigorous confirmatory analytic approach to confirm which items belonged to which content domains or scales. Additional testing is required to determine whether this measure of organizational readiness for EBIs is indeed predictive of actual adoption and implementation of EBIs. Our final model resulted in six scales, two of which (community and agency awareness) had only two items each, which is considered less than ideal for measurement of latent constructs. The final alphas for our scales were generally acceptable, especially considering the brevity of the scales, with the exception of the relatively low alpha for the resource maintenance scale. This suggests room for improvement in future versions of the instrument measuring these domains. We did not have a sufficient sample size in this study to confirm our final model with a hold-out sample or to test for item invariance across multiple subgroups (e.g., program type, urban vs. rural setting). It is possible that readiness for evidence-based practice varies by type of practice setting and that different measures may be needed. In many cases there was a single respondent representing an organization; it remains unclear how many respondents are needed in a particular setting to precisely measure these issues and track changes over time. Future work could explore conceptual and analytical issues around level of measurement and clustering with multilevel data collection and analysis.

Another important consideration in interpreting and using this instrument is the basis on which evidence-based practice was assessed. For the current survey, we define evidence-based practice as “an evidence-based intervention, evidence-based policy, or evidence-based clinical care…that integrates science-based interventions with community preferences to improve the health of populations.” Examples of sources of evidence-based interventions provided to respondents in the body of the survey were based on systematic research reviews (e.g., the Community Guide, Cochrane reviews). There may be additional information about community and organizational context, as well as knowledge derived from professional experience (practice-based evidence), that also comprises elements of evidence applied in practice settings [46] but was not explicitly addressed.

Conclusions

The lack of adequate measurement tools hinders progress in dissemination and implementation research [47]. To help fill this void, these results describe the reliability and measurement properties of a theory-based tool; the short, user-friendly instrument may be useful to researchers and practitioners seeking to assess organizational readiness for evidence-based practice across a variety of chronic disease prevention programs and settings. Although the field is young, it is likely that intervention strategies to enhance dissemination and implementation will need to be tailored to the stage of readiness. This parallels the recognized need in clinical settings to generate more evidence to guide the choice of implementation strategies based on their expected relative effectiveness given the characteristics of a particular setting [48].

There are many potential applications of a survey instrument such as this, in addition to tailoring intervention strategies to stage of readiness. For example, the survey could be used as a gateway to initiating contact with practice settings and as a basis from which to seek additional, in-depth information using more qualitative approaches. In this light, this staging survey may be placed within the spectrum of participatory approaches [49], in which assessment of organizational readiness serves as one part of the process of integrating knowledge about the practice setting by incorporating input from practitioners into strategies to increase the use of evidence in that setting. There are numerous analytical applications of the quantitative measures of readiness stage, including examination of the relationship between stages and measures of program success, and of how this relationship may differ across program areas. Finally, the scales derived from this survey instrument may also be considered part of a spectrum of implementation outcomes [50] and could be explored as markers of the success of an implementation strategy (i.e., to evaluate a shift from low implementation to high implementation/maintenance).