Toward optimal implementation of cancer prevention and control programs in public health: a study protocol on mis-implementation
Much of the cancer burden in the USA is preventable through application of existing knowledge. State-level funders and public health practitioners are in ideal positions to affect programs and policies related to cancer control. Mis-implementation refers to ending effective programs and policies prematurely or continuing ineffective ones. Greater attention to mis-implementation should lead to the use of effective interventions and more efficient expenditure of resources, which, in the long term, will lead to more positive cancer outcomes.
This is a three-phase study that takes a comprehensive approach, leading to the elucidation of tactics for addressing mis-implementation. Phase 1: We assess the extent to which mis-implementation is occurring among state cancer control programs in public health. This initial phase will involve a survey of 800 practitioners representing all states. The programs represented will span the full continuum of cancer control, from primary prevention to survivorship. Phase 2: Using data from phase 1 to identify organizations in which mis-implementation is particularly high or low, the team will conduct eight comparative case studies to get a richer understanding of mis-implementation and to understand contextual differences. These case studies will highlight lessons learned about mis-implementation and identify hypothesized drivers. Phase 3: Agent-based modeling will be used to identify dynamic interactions between individual capacity, organizational capacity, use of evidence, funding, and external factors driving mis-implementation. The team will then translate and disseminate findings from phases 1 to 3 to practitioners and practice-related stakeholders to support the reduction of mis-implementation.
This study is innovative and significant because it will (1) be the first to refine and further develop reliable and valid measures of mis-implementation of public health programs; (2) bring together a strong, transdisciplinary team with significant expertise in practice-based research; (3) use agent-based modeling to address cancer control implementation; and (4) use a participatory, evidence-based, stakeholder-driven approach that will identify key leverage points for addressing mis-implementation among state public health programs. This research is expected to provide replicable computational simulation models that can identify leverage points and public health system dynamics to reduce mis-implementation in cancer control and may be of interest to other health areas.
Keywords: Mis-implementation; Cancer control; Agent-based models
Centers for Disease Control and Prevention
National Association of Chronic Disease Directors
Cancer continues to be the second most common cause of death in the USA [1, 2]; however, much of this burden is preventable through evidence-based interventions. Substantial potential for cancer control exists at the state level [4, 5], in which all states retain enormous authority to protect the public’s health. States shoulder their broad public health responsibilities through work carried out by state and local health agencies. Over $1.1 billion annually is expended on state cancer control programs (i.e., primary and secondary prevention) [7, 8], which is significantly higher than any other area of chronic disease prevention and control. However, cancer control covers a broad spectrum of programs, and funding can be limited in areas and population groups with high cancer burdens. With the limited resources available to state-level programs, the need to utilize the best available evidence to implement and sustain these programs is key to the efficiency of cancer control at the state level.
Evidence-based approaches to cancer control can significantly reduce the burden of cancer [10, 11, 12, 13]. This approach begins with an estimate of the preventable burden. Depending on the methods, between one third and one half of deaths due to cancer are preventable [3, 14, 15]. Large-scale efforts such as Cancer Control P.L.A.N.E.T. and the Community Guide have now placed a wide array of evidence-based interventions in the hands of cancer control practitioners [13, 16, 17]. Despite those efforts, a set of agency-level structures and processes (e.g., leadership, organizational climate and culture, access to research information) needs to be present for evidence-based decision-making (EBDM) to grow and thrive [10, 18, 19, 20]. While efforts are building to ensure practitioners have access to and the capacity for EBDM, the need to explore the mis-implementation of these programs in public health is growing.
Importance and potential impact of mis-implementation
The scientific literature has begun to highlight the importance of considering de-implementation in health care and public health [22, 23]. While de-implementation examines the retraction of unnecessary or overused care [23, 24], it does not fully capture the processes that sustain non-evidence-based programs or the de-implementation of programs that are, in fact, evidence-based. A notable example of the discontinuation of an evidence-based program is the VERB campaign in the USA, which demonstrated effectiveness in increasing children's physical activity but was then discontinued [25, 26]. On the other end of the mis-implementation spectrum is the continuation of non-evidence-based programs, such as the DARE (Drug Abuse Resistance Education) program, which continues despite many evaluations demonstrating its limited effectiveness [27, 28]. Researchers have therefore defined mis-implementation as the process whereby effective interventions are ended or ineffective interventions are continued in health settings (i.e., EBDM is not occurring) [22, 24]. Most of the current literature focuses on the overuse and underuse of clinical interventions and the cultural and organizational shifts needed toward the acceptance of de-adoption within medicine. Currently, over 150 commonly used medical practices have been deemed ineffective or unsafe. Despite this discovery within the medical realm, there is still sparse literature on mis-implementation in the field of public health or cancer control.
It is already known that a number of cancer control programs continue without a firm basis in scientific evidence. Hannon and colleagues reported that less than half of cancer control planners had ever used evidence-based resources. Previous studies have suggested that between 58 and 62% of public health programs are evidence-based [30, 31]. Even among programs that are evidence-based, 37% of chronic disease prevention staff in state health departments reported that programs are often or always discontinued when they should continue.
Factors likely to affect mis-implementation
In delivery of mental health services, Massatti and colleagues made several key points regarding mis-implementation: (1) the right mix of contextual factors (e.g., organizational support) is needed for the continuation of effective programs in real-world settings; (2) there is a significant cost burden of the programs to the agency; and (3) understanding the nuances of early adopters promotes efficient dissemination of effective interventions. Management support for EBDM in public health agencies is associated with improved public health performance [18, 33], but little is known about the processes and factors that affect mis-implementation specifically. Pilot data indicate that organizational supports for EBDM (e.g., leadership support for EBDM, having a work unit with the necessary EBDM skills) may be protective against mis-implementation. In addition, engaging a diverse set of partners may also lower the likelihood of mis-implementation.
The utility of agent-based modeling for studying public health practice
Agent-based modeling (ABM) is a powerful tool being used to inform practice and policy decisions in numerous health-related fields. ABM is a type of computational simulation modeling in which individual agents—who may be people, organizations, or other entities—are defined according to mathematical rules and interact with one another and with their environment over time. ABM is a useful tool to observe the dynamic and interdependent relationships between heterogeneous agents within a complex system and how system-level behavior and outcomes evolve over time from the interaction between these individual agents (emergent behavior) [35, 36, 37].
Agent-based models have a strong track record in social and biological sciences and have been widely used to study infectious disease control [38, 39] and health care delivery [40, 41]. More recently, ABM has begun to be applied to chronic disease control, ranging from the study of etiology, to intervention design, to policy implementation [35, 40, 42, 43, 44]. In topics specific to cancer prevention, ABMs are now being used to show how individuals respond to tobacco control policies and to better understand the community-level, contextual factors involved in implementing childhood obesity interventions. Some of the advantages of employing ABMs include that they can (1) model bi-directional and non-linear interactions between individuals, organizations, and external contextual factors; (2) describe dynamic decision-making processes; (3) simulate adaptation, counterfactuals, and relational structures such as networks [34, 42]; (4) consider and capture extensive heterogeneity across different entities or populations; and (5) act as “policy laboratories” for researchers when real-world experimentation is not feasible or is too costly. ABM has the potential to pinpoint both the factors within state health departments that have the greatest effect on the mis-implementation of cancer control programs and the leverage points that may be good targets to improve successful implementation.
Early on, ABMs primarily relied on simple heuristic rules as models of human behavior and were generally limited in their ability to predict behaviors of larger populations and complex interactions. In recent years, however, ABMs have provided (1) more refined representations of behavior and decision-making [49, 50], (2) increasingly sophisticated representations of relational/environmental structures such as geography and networks [34, 42], and (3) greater focus on “co-evolution” across levels of scale across settings, including organizational dynamics as in political science [34, 47].
Phase 1, assessing mis-implementation (Aim 1)
The measures to assess the scope and patterns of the mis-implementation problem are vastly under-developed. There has been limited pilot work in this area; therefore, phase 1 will focus on the refinement of measures and assessment of the patterns of mis-implementation in cancer control in the USA. The project begins by refining and pilot testing measures to assess mis-implementation within state health departments. The foundation of these measures comes from a pilot survey previously completed by members of the team, the Mis-Implementation Survey for Cancer Control [22, 51, 52, 53, 54, 55, 56, 57, 58]. The project team has engaged a group of public health practitioners who will serve as an advisory group throughout the duration of the project and will help inform the development of the measures.
Using the evidence tables, a draft instrument will be developed. It is likely to cover seven main domains: (1) biographical information; (2) frequency of mis-implementation; (3) reasons for mis-implementation; (4) barriers in overcoming mis-implementation; (5) specific programs being mis-implemented; (6) use of management supports for EBDM; and (7) ratings on current level of individual skills essential for implementing evidence-based interventions.
New measures will undergo expert review for content validity, relying on the advisory group of state health department practitioners. Before the instrument goes into the field, a series of individual interviews will be completed for cognitive response testing of newly developed items. Cognitive response testing is routinely used in refining questionnaires to improve the quality of data collection [60, 61, 62]. Cognitive response testing will be used to determine: (1) question comprehension (what does the respondent think the question is asking?); (2) information retrieval (what information does the respondent need to recall from memory in order to answer the question?); and (3) decision processing (how does the respondent choose their answer?).
Once cognitive testing is completed, additional edits will be made to the survey, and a test-retest will be employed to assess the reliability of the instrument. The team intends to recruit around 100 practitioners via the advisory group to complete the survey and then complete it again 2 weeks after the initial administration. Appropriate statistics will be calculated for each type of question to assess the reliability between the two test time points [63, 64].
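For items with categorical response options, test-retest agreement is commonly summarized with Cohen's kappa. The following sketch illustrates the calculation; the item values and responses are hypothetical, not drawn from the study instrument:

```python
from collections import Counter

def cohens_kappa(ratings_t1, ratings_t2):
    """Cohen's kappa for test-retest agreement on one categorical item."""
    n = len(ratings_t1)
    # Observed agreement: proportion of identical answers at both time points
    p_obs = sum(a == b for a, b in zip(ratings_t1, ratings_t2)) / n
    # Chance agreement expected from the marginal response frequencies
    f1, f2 = Counter(ratings_t1), Counter(ratings_t2)
    p_exp = sum(f1[c] * f2[c] for c in f1) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical responses to one item at test and 2-week retest
t1 = ["often", "often", "sometimes", "never", "sometimes", "often"]
t2 = ["often", "sometimes", "sometimes", "never", "sometimes", "often"]
print(round(cohens_kappa(t1, t2), 2))  # 0.74
```

For continuous scale scores, the intraclass correlation coefficient referenced in the sample size calculations would be used in place of kappa.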
Study participants will include cancer control public health practitioners, i.e., those individuals who direct and implement population-based intervention programs in state health departments. These practitioners may be directly involved in program delivery, set priorities, or allocate resources for programs related to cancer risk factors. The target audience will be inter-disciplinary; that is, they will be drawn from diverse backgrounds including health educators, epidemiologists, and evaluators. Examples of the individuals in the target audience include (1) the director of a Centers for Disease Control and Prevention (CDC)-funded comprehensive cancer control program for the state; (2) the director of a state program addressing primary prevention of cancer (tobacco, inactivity, diet, sun protection); (3) the director of state programs promoting early detection of breast, cervical, and colorectal cancers among underserved populations; or (4) state health department epidemiologists, evaluators, policy officers, and health educators supporting cancer control programs.
Participants will be randomly drawn from the 3000-person membership of the National Association of Chronic Disease Directors (NACDD) and program manager lists from key CDC-supported programs in cancer and cancer risk factors. The team has an established partnership with NACDD and has worked extensively with them on previous projects [30, 54, 55, 56, 57, 65, 66, 67, 68]. In phase 1, the team will recruit 1040 individuals for a final sample size of 800. The team anticipates a 60% response rate based on evidence that supports a series of emails, follow-up phone calls, and endorsements from NACDD leadership and officials in the states with enrolled participants [51, 52, 54, 56, 69, 70, 71, 72, 73, 74]. Similar to successful approaches in previous studies [56, 74], data will be collected using an online survey (Qualtrics software) that will be delivered via email. The survey will remain open for a 2-month time period with four email reminders and two rounds of phone calls to bolster response rates. All respondents will be offered a gift card incentive.
Analyzing the survey data
Survey data will be analyzed in three ways. First, descriptive statistics (e.g., frequencies, central tendencies, and variabilities) and diagnostic plots (e.g., stem and leaf plots, q-q plots) will be completed on all variables. Data will be examined for outliers and tested as appropriate for normality, linearity, and homoscedasticity. Appropriate corrective strategies will be used if problems are identified. Bivariate and multivariate analyses will rely on data at the single time point of the phase 1 survey. These preliminary analyses are necessary to ensure high-quality data and to test assumptions of the proposed models. The team will also compare demographic and regional variations between respondents and non-respondents to assess potential response bias.
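As an illustration of this screening step, a common interquartile-range rule can flag candidate outliers in a survey scale score; the data below are hypothetical:

```python
import statistics

def iqr_outliers(values, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR], a common screening rule."""
    q1, _, q3 = statistics.quantiles(values, n=4)  # quartiles
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

# Hypothetical 0-10 skill ratings from survey respondents
scores = [6.2, 7.1, 5.8, 6.6, 7.4, 6.9, 5.5, 6.3, 9.9, 0.4, 6.7]
print(iqr_outliers(scores))  # [9.9, 0.4]
```

Flagged values would then be inspected individually rather than dropped automatically, consistent with the corrective strategies described above.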
Next, the measurement properties of the instrument will be comprehensively assessed. To do so, the team will conduct confirmatory factor analysis within an explanatory framework using structural equation models [76, 77]. Factor analysis seeks to reduce the anticipated large number of items in the survey tool to a smaller number of underlying latent variables while examining the construct validity of the measures. The initial domains to be used in factor analysis are shown in Fig. 2.
The team will fit a mixed-effects model of the form

Y_ij = β0 + β1 P_ij + β2 S_ij + β3 R_ij + β4 (PR)_ij + μ_j + ε_ij

where Y_ij is the mis-implementation outcome for respondent i in state j; P_ij and S_ij are the fixed effects for program type and state population size; R_ij is a fixed effect for reason for mis-implementation; (PR)_ij is the program-by-reason interaction; and μ_j and ε_ij are the variance components at the state and individual levels, respectively. The mixed-effects modeling allows for random effects at the state (μ_j) level. A mixed-effects model will examine state-level variability and account for the nested design of the study.
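To make the nested structure of this model concrete, data consistent with it can be simulated in a few lines; all coefficients and variance components below are illustrative assumptions, not study estimates:

```python
import random

random.seed(42)

# Illustrative coefficients for
# Y_ij = b0 + b1*P_ij + b2*S_ij + b3*R_ij + b4*(PR)_ij + u_j + e_ij
b0, b1, b2, b3, b4 = 0.2, 0.5, -0.1, 0.3, 0.15
sd_state, sd_indiv = 0.4, 1.0  # state- and individual-level variance components (as SDs)

n_states, n_per_state = 50, 16  # 800 respondents nested in 50 states
records = []
for j in range(n_states):
    u_j = random.gauss(0, sd_state)  # shared random effect for everyone in state j
    s = random.choice([0, 1])        # state population size category (constant within state)
    for i in range(n_per_state):
        p = random.choice([0, 1])    # program type
        r = random.choice([0, 1])    # reason for mis-implementation present
        e_ij = random.gauss(0, sd_indiv)
        y = b0 + b1 * p + b2 * s + b3 * r + b4 * p * r + u_j + e_ij
        records.append((j, y))

print(len(records))  # 800
```

Because μ_j is shared by all respondents within a state, responses from the same state are correlated, which is exactly what the random intercept accounts for.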
Sample size calculations
For the factor analysis, given a minimum of four items per factor and expected factor loadings of 0.6 or higher, the survey will need a sample of 400. For reliability testing, for statistically significant (p < 0.05) kappa values of 0.50 and 0.70, the sample size requirements are 50 and 25 pairs, respectively, in each of the two groups. To estimate an intraclass correlation coefficient of 0.90 or above (power 0.80, p < 0.05), 45 pairs are required in each subgroup. Sample size estimates are based on Dunn’s recommendations. Therefore, a sample of 100 for reliability testing will provide high power.
For population-level estimates of mis-implementation and multivariate modeling, sample sizes are based on a power ≥ 90% with two-sided α = 5%. To estimate a prevalence of mis-implementation of 37% (± 3%), a sample of 750 is needed. To compare rates of mis-implementation by program area (e.g., cancer screening estimated at 19% versus primary prevention of cancer estimated at 29%), a sample size of 800 is required at power > 0.90 and p < 0.05.
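These calculations can be checked with standard normal-approximation formulas. The sketch below assumes simple random sampling with equal allocation and applies a finite-population correction based on the roughly 3000-person NACDD membership:

```python
import math
from statistics import NormalDist

z = NormalDist().inv_cdf  # standard normal quantile function

def n_for_proportion(p, margin, population=None, alpha=0.05):
    """Sample size to estimate a prevalence p within +/- margin."""
    z_a = z(1 - alpha / 2)
    n0 = z_a ** 2 * p * (1 - p) / margin ** 2
    if population:
        n0 = n0 / (1 + n0 / population)  # finite-population correction
    return math.ceil(n0)

def n_per_group(p1, p2, alpha=0.05, power=0.90):
    """Per-group sample size for comparing two proportions (normal approximation)."""
    z_a, z_b = z(1 - alpha / 2), z(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Prevalence of 37% within +/- 3%, drawing from a membership of ~3000
print(n_for_proportion(0.37, 0.03, population=3000))  # 748, close to the protocol's 750
# Comparing program areas: 19% vs 29% at two-sided alpha = 0.05, power = 0.90
print(2 * n_per_group(0.19, 0.29))  # 764 total, broadly consistent with the planned 800
```

Small differences from the figures above are expected, since exact results depend on the approximation and rounding conventions used.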
Phase 2, comparative case studies (Aim 2)
Often, the key issue in understanding the translation of research to practice is not the evidence-based intervention itself, but rather the EBDM processes by which knowledge is transferred (i.e., contextual evidence as described by Rychetnik et al. and Brownson et al.). Building on data collected in phase 1, the goal of phase 2 is to better understand the context for mis-implementation via case studies, which will involve key informant interviews. These interviews will involve sites that are successful or less than successful in addressing mis-implementation. The purpose of key informant interviews is to collect information from a range of people who have first-hand knowledge of an issue within a specific organization or community.
Sampling, recruitment, and interview domains
The team will utilize purposive sampling to select participants. Based on phase 1 data, eight states will be selected—that is, four states where mis-implementation is high and four where mis-implementation is low. Participants will be state health department practitioners who work in the identified states. By examining extreme cases, the study can maximize the likelihood that the qualitative approach will provide deeper understanding of mis-implementation, building on Aim 1 activities. While Aim 1 will determine the extent (the “how much”) and some underlying reasons (the “why”), Aim 2 will give a deeper understanding of the “why” and “how” of mis-implementation. The interviews will focus on several major areas (organized around domains in Fig. 2): (1) inputs related to mis-implementation, (2) factors affecting mis-implementation (individual, organizational, external), and (3) methods for reducing mis-implementation.
Study staff will make initial contact via email and by telephone to invite identified participants to the study and arrange an appointment for the telephone interview. Participants will receive consent information in accordance with the Washington University Institutional Review Board standards. The team anticipates approximately 12 interviews from each of the eight state health departments (a total of 96 interviews) and will conduct interviews until the team reaches thematic saturation. Participants will be offered a gift card incentive.
Data coding and analysis
Digital audio recordings of the interviews will be transcribed verbatim. Two project team members will analyze each of these transcripts via consensus coding. After reviewing the research questions [86, 87], the team members will read five of the transcripts using a first draft of a codebook. Each coder will be asked to systematically review the data and organize each of the statements into categories that summarize the concept or meaning articulated. Once the first five transcripts are coded, they will be discussed in detail to ensure the accuracy of the codebook and inter-coder consistency. The codebook will be edited as needed prior to coding the remainder of the transcripts. All transcripts will be analyzed using NVivo 11. After refinement of the codebook, each transcript will be coded independently by two team members. The two team members will then review non-overlapping coding in the text blocks and reach agreement on text blocking and coding.

Themes from the coded transcripts will be summarized and highlighted with exemplary quotes from participants. Data analysis may also include quantification or some other form of data aggregation. The study team will use the interview guide questions to establish major categories (e.g., individual factors, organizational factors). All information that does not fit into these categories will be placed in an “other” category and then analyzed for new themes. Comparisons will be made to identify key differences in thematic issues between those in high and low mis-implementation settings. After initial analyses by research team members, one focus group will be conducted remotely or in person with state health department staff in each of the eight states to get input on the interim theme summaries.
Phase 3, agent-based modeling
Table 1 Sample agents and contextual factors
- Appeal of evidence-based practices; openness to innovation (Aarons et al.)
- Use economic evaluation in the decision-making process; adapt programs for different communities and settings (Gibbert et al.; Jacob et al.)
- Lack of knowledge of EBDM; general resistance to changing old practices (Maylahn et al.; Jacobs et al.)
- Program funding; expertise of available staff (Erwin et al.; Brownson et al.)
- Agency leadership values EBDM; management practices of direct supervisor (Brownson et al.; Jacob et al.)
- Lack of incentives in the agency for EBDM; organizational culture does not support EBDM (Jacobs et al.; Gibbert et al.)
- Maintains a diverse set of partners; compatible processes between partners (Brownson et al.; Massatti et al.)
- Supportive state legislature; supportive governor (Brownson et al.)
Defining the agents and contextual factors
Agents in an ABM are generally defined by individual characteristics (properties), behavior rules that govern choices or actions (possibly dependent on both the agent’s own state and that of the environment) [36, 37], and a social environment that characterizes relationships between agents. In this case, both individual agents (the public health professionals working in cancer prevention and control within state health departments) and organizational agents (the state health departments in which decisions are being made) will be influenced by elements outside the state health department, as well as by interactions with each other.
The complexity of the systems in which the organizational and external factors operate and influence public health practitioner decision-making is such that ABM will be able to provide greater insight than traditional experimental designs or epidemiological and econometric analytic tools, which require assumptions about homogeneity and linearity that are not appropriate for complex organizational systems [36, 90]. By developing a model that is informed by survey and case study data, we will be able to help explain how mis-implementation arises from decisions made by individuals within specific organizational and external contexts. Additionally, ABM can provide insight into potential counterfactuals and the implications these may have for intervention design and targeting (e.g., if a particular organizational climate had been present, mis-implementation would not have occurred). The potential benefits of using an ABM approach include anticipating both the individual impact of modifiable influences on decision-making and the organizational impacts of mis-implementation in varying conditions.
In our planned initial model design, individual-level agents will work within a health department, making periodic choices about whether to continue or discontinue specific intervention strategies within cancer prevention and control programs. The team will explore the impact that the underlying factors of state health departments have on the patterns of mis-implementation. The team has developed the initial set of individual and organizational agent constructs and external influences from pilot research in mis-implementation and in evidence-based decision-making [18, 21, 22, 31, 56, 57, 74] and draws on previous literature from systems science on organizational dynamics both within health systems and other fields [91, 92, 93, 94, 95, 96]. Findings from phases 1 and 2 will allow the team to refine the core list of agent factors and also estimate the relative importance of different agent behaviors and attributes using a “bottom-up” approach (i.e., real-world data collected from individuals and organizations). Several of the agent factors listed in Table 1 have predicted mis-implementation in pilot research.
Individual-level agents in the initial models will represent the staff members who work within state health departments. These individuals, who exist in a hierarchy within a state health department, include leaders, managers, and coordinators (Fig. 3). The potential characteristics for modeling include their attitudes (e.g., openness to innovation), skills and knowledge (e.g., ability to use economic evaluation) [31, 56], and barriers (e.g., lack of knowledge of EBDM processes) [57, 98], specifically those that may contribute to mis-implementation. Individual agents will also have social connections with other agents that affect how information flows through the organization and how organization-level decisions about implementation are made.
The second set of planned agents comprises organizations representing state health departments. These organizations have characteristics that influence mis-implementation. Several organizational agent characteristics likely to affect mis-implementation are management supports [18, 56], resources [52, 99], and organizational barriers [31, 57].
Contextual (external) factors
In addition, the team plans to model external contextual factors. While these are not directly present within health departments, context can have a significant influence on whether program strategies are continued or discontinued. The initial set of external factors has been drawn from pilot research and previous conceptual frameworks and includes variables such as a diverse, multi-disciplinary set of partners with EBDM skills [18, 32], funding climate, policy inputs, and political support. The exploration of external factors within the models will allow the team to observe how agents adapt to different contexts—including through departmental and organizational communication channels—and how adaptation affects outcomes related to mis-implementation.
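As a toy sketch of the model structure described above (all parameter values and decision rules here are illustrative assumptions, not study findings), individual agents with varying EBDM skill can make continue/discontinue decisions under different levels of organizational support:

```python
import random

random.seed(1)

class Practitioner:
    """Individual agent in a state health department (illustrative)."""
    def __init__(self, ebdm_skill):
        self.ebdm_skill = ebdm_skill  # 0..1, capacity for evidence-based decision-making

    def keeps_program(self, program_effective, org_support):
        # Illustrative rule: skilled agents in supportive organizations tend to
        # follow the evidence; otherwise the choice drifts toward chance.
        p_follow_evidence = 0.5 + 0.5 * self.ebdm_skill * org_support
        if random.random() < p_follow_evidence:
            return program_effective       # continue the program iff it works
        return not program_effective       # mis-implement

def mis_implementation_rate(org_support, n_agents=1000):
    """Share of decisions that end an effective program or continue an ineffective one."""
    errors = 0
    for _ in range(n_agents):
        agent = Practitioner(ebdm_skill=random.random())
        effective = random.random() < 0.5  # whether the program is evidence-based
        if agent.keeps_program(effective, org_support) != effective:
            errors += 1
    return errors / n_agents

for support in (0.2, 0.5, 0.9):
    print(f"organizational support {support}: "
          f"mis-implementation rate ~{mis_implementation_rate(support):.2f}")
```

Even this toy version shows the emergent, population-level pattern of interest: mis-implementation declines as organizational support rises. The study's models would replace these hand-set rules with behaviors calibrated from the phase 1 and 2 data.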
Developing the computational simulation
The development of the ABM will follow established computational modeling best practices. The modeling process will have four key steps:
Step 1: model design and internal consistency testing
Design of the models begins with the identification of key concepts and structures from the literature and pilot studies, as well as from phases 1 and 2, and their operationalization into appropriate model constructs. Table 1 is a starting point based on the pilot data and will be refined from what the team learns in phases 1 and 2. The models are then implemented in a computational architecture, with each piece undergoing testing to ensure appropriate representation of concepts and implementation. Revisions of the initial model implementations are undertaken as needed based on partial model testing and consultation with experts.
Step 2: test for explanatory insights
Once the initial models are complete, the generative explanatory power of the models to reproduce real-world observations about mis-implementation can be tested under a variety of different conditions. The testing procedure will have two parts. In part one, the ABM will be used to examine mis-implementation related to ending programs that should continue. In part two, the team will examine mis-implementation related to continuing programs that should end. For each type of mis-implementation, the team will focus on the ability of the models to reproduce “stylized facts” about mis-implementation obtained from pilot studies, activities in phases 1 and 2, and the advisory group. These “stylized facts” are used to calibrate the models and include variables such as how skilled individuals are in EBDM and how strongly the organizational climate and culture support EBDM. The engagement of the advisory group will be essential in this step.
Step 3: sensitivity analyses
Model behavior will be systematically explored as key parameters and assumptions are varied. During this step, the model’s contextual environmental factors will be held constant, allowing the team to explore the sensitivity and dependency of outcomes (patterns of mis-implementation) to changes in assumptions about agent behavior and characteristics. Leveraging computational power to build a robust statistical portrait of model dynamics and parameters, where assumptions are systematically varied, will allow for appropriate interpretation of model behavior, including the relationships between individual, organizational, and external contextual factors.
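In practice, this step amounts to a parameter sweep: run the model many times per parameter setting and summarize the distribution of outcomes. A generic sketch follows; the model function and parameter grid are placeholders, not the study's actual model:

```python
import random
import statistics

def run_model(skill_mean, org_support, seed):
    """Placeholder for a single ABM run; returns a simulated mis-implementation rate."""
    rng = random.Random(seed)
    # Stand-in dynamics: outcome declines with skill and support, plus run-to-run noise
    base = 0.5 - 0.2 * skill_mean - 0.2 * org_support
    return min(1.0, max(0.0, base + rng.gauss(0, 0.03)))

# Systematically vary two key parameters, with 30 replicate runs per setting
results = {}
for skill in (0.2, 0.5, 0.8):
    for support in (0.2, 0.5, 0.8):
        runs = [run_model(skill, support, seed) for seed in range(30)]
        results[(skill, support)] = (statistics.mean(runs), statistics.stdev(runs))

for (skill, support), (mean, sd) in sorted(results.items()):
    print(f"skill={skill}, support={support}: mean rate={mean:.3f} (sd={sd:.3f})")
```

Reporting both the mean and spread of outcomes per parameter cell is what makes the "statistical portrait" of model behavior interpretable.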
Step 4: model analysis to generate insights by manipulating potential levers
In this step, the team will introduce changes in individual and organizational agent knowledge and behavior and in contextual variables to explore effects on mis-implementation. This will help point to modifiable individual and organizational agent characteristics that, if addressed, can reduce mis-implementation in a variety of external conditions (i.e., essential leverage points). The ABMs can provide not only aggregate outcomes, but also information about how the relative importance and effect of different factors may vary across contexts and state health departments.
Mis-implementation has not been well studied, nor have adequate measures been developed to fully assess its impact in the field of population-based cancer control and prevention. This study is timely and important because (1) cancer is highly preventable, yet existing evidence-based interventions are not being adequately applied despite their potential impact; (2) pilot research shows that mis-implementation in cancer control is widespread; and (3) ABM is a useful tool to more fully understand mis-implementation complexities and dynamics. Results from this study can help shape and inform how state health departments guide and implement effective cancer control programs as well as continue to test the utilization of agent-based modeling to inform chronic disease and cancer control research.
Furthering the debate about terminology
As the multi-disciplinary field of implementation science has developed over the past 15 years, scholars from diverse professions have attempted to document the many overlapping and sometimes inconsistently defined terms. Many important contributions to implementation science originate from non-health sectors (e.g., agriculture, marketing, communications, management), increasing the breadth of literature and terminology. To bridge these sectors, a common lexicon will assist in accelerating progress by facilitating comparisons of methods and findings, supporting methods’ development and communication across disciplinary areas, and identifying gaps in knowledge [102, 103].
- De-implementation: “abandoning ineffective medical practices and mitigating the risks of untested practices”;
- De-adoption: “rejection of a medical practice or health service found to be ineffective or harmful following a previous period of adoption…”;
- Termination: “the deliberate conclusion or cessation of specific government functions, programs, policies or organizations”;
- Overuse: “clinical care in the absence of a clear medical basis for use or when the benefit of therapy does not outweigh risks”;
- Underuse: “the failure to deliver a health service that is likely to improve the quality or quantity of life, which is affordable”;
- Misuse: typically synonymous with “medical errors”.
While planning the current project, we also recognized that the term mis-implementation may make some practitioners uneasy, in that our study might appear to reflect negatively on their day-to-day programs in the eyes of decision-makers (e.g., public health leaders, policy makers) rather than identifying areas for improvement. These concerns have been addressed with a practice advisory group from the beginning of the project, and the team intends to frame the study goals carefully, including use of a project title and description that imparts positive, actionable outcomes: “Public Health in Appropriate Continuation of Tested Interventions” (Public Health in ACTION).
Utility of agent-based modeling
A strength of the study is the use of ABMs in combination with quantitative and qualitative methods to provide insights about how and why mis-implementation occurs and how specific policies may affect mis-implementation. These ABMs will explicate the dynamic interaction between individual decision-makers operating within organizations and influenced by external factors. An important policy utility comes from using the models to capture how variation in existing individual and organizational factors may reduce mis-implementation. Experiments using the models will allow comparison of baseline outcomes to outcomes as individual and organizational factors are changed to represent potential intervention scenarios. Insights generated by the models can be used to design novel strategies and policies that have the potential to effectively reduce mis-implementation. As these strategies become evident, the team will frequently seek input from the advisory group in order to maximize the real-world utility of the findings.
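As a hedged illustration of this scenario-comparison idea (again, a sketch rather than the project's model), the code below compares a baseline to a hypothetical intervention scenario in which individual capacity is boosted, e.g., by training, and reports the change in an aggregate mis-implementation rate. The factor weights, the `capacity_boost` parameter, and the decision rule are all invented for illustration.

```python
import random
import statistics

def org_mis_rate(capacity, climate, rng, programs=20):
    """Share of one organization's continue/end decisions that are wrong,
    given individual capacity and organizational climate in [0, 1].
    The weights are hypothetical."""
    p_correct = 0.4 + 0.3 * capacity + 0.3 * climate
    wrong = sum(1 for _ in range(programs) if rng.random() > p_correct)
    return wrong / programs

def experiment(capacity_boost=0.0, reps=200, seed=7):
    """Compare scenarios: capacity_boost is a hypothesized increase in
    individual capacity (e.g., from evidence-based decision-making
    training) applied on top of each organization's baseline capacity."""
    rng = random.Random(seed)
    rates = []
    for _ in range(reps):
        capacity = min(1.0, rng.random() + capacity_boost)
        climate = rng.random()
        rates.append(org_mis_rate(capacity, climate, rng))
    return statistics.mean(rates)

baseline = experiment()
trained = experiment(capacity_boost=0.3)
print(f"mis-implementation rate: baseline {baseline:.2f}, "
      f"training scenario {trained:.2f}")
```

Because both scenarios use the same random seed (common random numbers), the difference between the two rates reflects the intervention itself rather than sampling noise, which is the same logic the planned model experiments will use when comparing baseline outcomes to intervention scenarios.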
Like all ABMs, the planned models will require the team to make certain stylized assumptions about real-world structures and processes, especially when relevant data are not available. Translating the data collected in the initial phases of the project into the models, in order to characterize individual and organizational behavior, will require the team to identify the most appropriate factors for inclusion and to develop appropriate ways to quantify them in a computational framework. Many elements included in the models (including those related to external support and opposition for programs, barriers to decision-making, organizational climate, and individual capacity), although theoretically justified, are inherently difficult to measure and quantify. Therefore, while the models will elucidate important dynamics involved in mis-implementation and identify key leverage points for improving decision-making, they are not intended to provide precise, quantitative forecasts of how organizational changes will affect implementation decisions. Additionally, this is the first known attempt to computationally model the processes involved in the implementation and mis-implementation of programs within health departments. As such, future modeling may be warranted to further refine the findings and apply the models to other specific contexts.
The study has a few limitations. Although the team has plans to ensure the highest possible response rates [72, 73], high turnover and workload demands among state health department workers may impede data collection efforts [109, 110]. Based on previous research and state-of-the-art methods [51, 52, 54, 72, 73], the team will take multiple steps to ensure a high response rate and will compare respondents with non-respondents. In addition, survey self-reports may not fully or accurately capture the frequency and patterns of mis-implementation across complex, multi-faceted statewide programs.
A richer understanding of mis-implementation will help allocate already limited resources more efficiently, especially among health departments, where a significant portion of cancer control work in the USA is contracted or performed. This knowledge will also help researchers and practitioners prevent the continuation of ineffective programs and the discontinuation of effective ones. The team anticipates that the study will result in replicable models that can significantly reduce mis-implementation in cancer control and can be applied to other health areas.
We use the term “programs” broadly to include a wide range of interventions from structured behavioral change strategies to broad policy approaches to cancer screening promotion initiatives.
We acknowledge our Practice Advisory Group members, Paula Clayton, Heather Dacus, Julia Thorsness, and Virginia Warren, who have provided integral guidance for this project. We thank Alexandra Morshed for her help with the implementation science terminology and development of Fig. 4. We also acknowledge our partnership with NACDD in developing, implementing, and disseminating findings from this study.
This study is funded by the National Cancer Institute of the National Institutes of Health under award number R01CA214530. The findings and conclusions in this article are those of the authors and do not necessarily represent the official positions of the National Institutes of Health.
Availability of data and materials
RCB conceived the original idea, with input from the team, and led the initial grant writing. MP wrote the sections of the original grant, is coordinating the study, and coordinated input from the team on the protocol. PA, PCE, RAH, DAL, and SMR were all involved in the initial grant writing and provided continual support and input to the progress of the study as well as provided substantial edits and revisions to this paper. MF, BH, and MK provided continual support and input to the progress of the study and provided substantial edits and revisions to this paper. RCB is the principal investigator of this study. All authors read and approved the final manuscript.
Ethics approval and consent to participate
The study was approved by the institutional review board of Washington University in St. Louis (reference number: 201611078). This study also received approval under Washington University’s Protocol Review and Monitoring Committee.
Consent for publication

Competing interests
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
- 1. Cancer Facts & Figures 2015. http://www.cancer.org.
- 2. American Cancer Society. Cancer facts and figures 2016. Atlanta: American Cancer Society; 2016.
- 4. DeGroff A, Carter A, Kenney K, et al. Using evidence-based interventions to improve cancer screening in the National Breast and Cervical Cancer Early Detection Program. J Public Health Manag Pract. 2016;22(5):442–49. https://doi.org/10.1097/PHH.0000000000000369.
- 6. McGowan A, Brownson R, Wilcox L, Mensah G. Prevention and control of chronic diseases. In: Goodman R, Rothstein M, Hoffman R, Lopez W, Matthews G, editors. Law in public health practice. 2nd ed. New York: Oxford University Press; 2006.
- 7. FY2014 Grant Funding Profiles. http://wwwn.cdc.gov/fundingprofiles/.
- 8. Sustaining State Funding for Tobacco Control. https://www.cdc.gov/tobacco/stateandcommunity/tobacco_control_programs/program_development/sustainingstates/index.htm.
- 17. Guide to Community Preventive Services. https://www.thecommunityguide.org/.
- 19. Allen P, Brownson R, Duggan K, Stamatakis K, Erwin P. The makings of an evidence-based local health department: identifying administrative and management practices. Front Public Health Serv Syst Res. 2012;1(2). https://doi.org/10.13023/FPHSSR.0102.02.
- 21. Jacobs JA, Duggan K, Erwin P, Smith C, Borawski E, Compton J, D’Ambrosio L, Frank SH, Frazier-Kouassi S, Hannon PA, et al. Capacity building for evidence-based decision making in local health departments: scaling up an effective training approach. Implement Sci. 2014;9(1):124.
- 25. Huhman ME, Potter LD, Nolin MJ, et al. The influence of the VERB campaign on children’s physical activity in 2002 to 2006. Am J Public Health. 2010;100(4):638–45. https://doi.org/10.2105/AJPH.2008.142968.
- 31. Gibbert WS, Keating SM, Jacobs JA, Dodson E, Baker E, Diem G, Giles W, Gillespie KN, Grabauskas V, Shatchkute A, et al. Training the workforce in evidence-based public health: an evaluation of impact among US and international practitioners. Prev Chronic Dis. 2013;10:E148.
- 34. Hammond R. Considerations and best practices in agent-based modeling to inform policy. In: Committee on the Assessment of Agent-Based Models to Inform Tobacco Product Regulation, editor. Assessing the use of agent-based models for tobacco regulation. Appendix A. Washington, DC: Institute of Medicine of The National Academies; 2015. p. 161–93.
- 38. Brown ST, Tai JH, Bailey RR, Cooley PC, Wheaton WD, Potter MA, Voorhees RE, LeJeune M, Grefenstette JJ, Burke DS, et al. Would school closure for the 2009 H1N1 influenza epidemic have been worth the cost?: a computational simulation of Pennsylvania. BMC Public Health. 2011;11:353.
- 39. Lee BY, Brown ST, Korch GW, Cooley PC, Zimmerman RK, Wheaton WD, Zimmer SM, Grefenstette JJ, Bailey RR, Assi TM, et al. A computer simulation of vaccine prioritization, allocation, and rationing during the 2009 H1N1 influenza pandemic. Vaccine. 2010;28(31):4875–9.
- 41. Marshall DA, Burgos-Liz L, IJzerman MJ, Osgood ND, Padula WV, Higashi MK, Wong PK, Pasupathy KS, Crown W. Applying dynamic simulation modeling methods in health care delivery research—the SIMULATE checklist: report of the ISPOR simulation modeling emerging good practices task force. Value Health. 2015;18(1):5–16.
- 48. Epstein J. Why model? Journal of Artificial Societies and Social Simulation. 2008;11(4):12.
- 49. Bruch E, Hammond R, Todd P. Co-evolution of decision-making and social environments. In: Scott R, Kosslyn S, editors. Emerging trends in the social and behavioral sciences. Hoboken: John Wiley and Sons; 2014.
- 50. Hammond RA, Ornstein JT, Fellows LK, Dubé L, Levitan R, Dagher A. A model of food reward learning with dynamic reward exposure. Front Comput Neurosci. 2012;6:82. https://doi.org/10.3389/fncom.2012.00082.
- 60. Forsyth BH, Lessler JT. Cognitive laboratory methods: a taxonomy. In: Biemer PP, Groves RM, Lyberg LE, Mathiowetz NA, Sudman S, editors. Measurement errors in surveys. New York: Wiley-Interscience; 1991. p. 395–418.
- 65. Brownson RC, Baker EA, Leet TL, Gillespie KN. Evidence-based public health. New York: Oxford University Press; 2003.
- 68. Yarber L, Brownson CA, Jacob RR, Baker EA, Jones E, Baumann C, Deshpande AD, Gillespie KN, Scharff DP, Brownson RC. Evaluating a train-the-trainer approach for improving capacity for evidence-based decision making in public health. BMC Health Serv Res. 2015;15(1):547.
- 72. Dillman D, Smyth J, Melani L. Internet, mail, and mixed-mode surveys: the tailored design method. 3rd ed. Hoboken: John Wiley & Sons, Inc; 2009.
- 75. Qualtrics: Survey Research Suite. http://www.qualtrics.com/.
- 87. Strauss A, Corbin J. Basics of qualitative research: grounded theory procedures and techniques. Newbury Park: Sage; 1990.
- 88. Patton MQ. Qualitative evaluation and research methods. 3rd ed. Thousand Oaks: Sage; 2002.
- 89. NVivo 10 for Windows. http://www.qsrinternational.com/products_nvivo.aspx.
- 90. Miller JH, Page SE. Complex adaptive systems: an introduction to computational models of social life. Princeton: Princeton University Press; 2007.
- 91. Morgan GP, Carley KM. Modeling formal and informal ties within an organization: a multiple model integration. In: The garbage can model of organizational choice: looking forward at forty. 2012. p. 253–92.
- 92. Noriega P. Coordination, organizations, institutions, and norms in agent systems II: AAMAS 2006 and ECAI 2006 International Workshops, COIN 2006, Hakodate, Japan, May 9, 2006, Riva del Garda, Italy, August 28, 2006: revised selected papers. Berlin; New York: Springer; 2007.
- 100. Tesfatsion L, Judd K, editors. Handbook of computational economics: agent-based computational economics, vol. 2. Amsterdam: North-Holland; 2006.
- 103. Rabin B, Brownson R. Terminology for dissemination and implementation research. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York: Oxford University Press; 2018. p. 19–45.
- 104. Rogers EM. Diffusion of innovations. 5th ed. New York: Free Press; 2003.
- 105. Brewer GD, deLeon P. The foundations of policy analysis. Homewood: Dorsey Press; 1983.
- 106. Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001.
- 108. Kohn L, Corrigan J, Donaldson M. To err is human: building a safer health system. Washington, DC: National Academies Press; 2000.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.