Background

Cervical cancer is a high-burden global health issue, with an estimated 528,000 new cases and 266,000 deaths among women worldwide in 2012 [1]. Most of the global burden (85%) falls on less developed countries, with regions in sub-Saharan Africa (SSA) having the highest age-standardized incidence and mortality rates [1]. Developed countries, such as the USA, have achieved significant decreases in cervical cancer burden since the introduction of organized Pap smear programs in the 1960s [2, 3]. However, many countries in SSA have been unable to attain such reductions due to implementation barriers and resource limitations [4,5,6,7,8]. In fact, cervical cancer rates are expected to continue rising despite efforts to implement national screening and treatment programs [9]. Cervical cancer remains the most commonly diagnosed cancer and the leading cause of cancer-related death among women in sub-Saharan Africa [1].

Untangling the causes of the high cervical cancer burden in SSA is difficult due to a complex interplay of biological, organizational, economic, and sociocultural factors. For example, HIV infection is associated with an increased risk of developing cervical cancer [10]: it compromises the immune system, making women more susceptible to infection with HPV, the principal precursor to cervical cancer [10]. SSA also carries a high HIV/AIDS burden, accounting for 71% of the global population living with HIV [11], and young women bear a disproportionate HIV burden compared to their male peers [11]. Other contributory factors include the aging and growth of the population, limited access to medical facilities, poor nutrition, severity of disease at presentation, and insufficient facilities for treatment [12,13,14,15]. While these factors all contribute to the rise in cervical cancer in this region, this paper focuses on the need for improved implementation of existing prevention programs and the promise that increased access to preventive services holds for decreasing burden.

Prevention is key. With adequate resources, precancerous cervical lesions are easily detected and treated, making cervical cancer largely preventable [16, 17]. Progression from HPV infection to cervical cancer typically takes 10 to 20 years, which allows ample opportunity to screen, track, and treat across the course of the disease [18]. In addition, numerous technologies have been developed to detect and treat precancerous lesions, including Pap smear, colposcopy, visual inspection with acetic acid or Lugol’s iodine (VIA/VILI), HPV DNA testing, cone biopsy, cryotherapy, and the loop electrosurgical excision procedure (LEEP) [2, 19]. Although these tools have been proven safe and effective [20], significant challenges remain in integrating them into comprehensive national screening and treatment programs.

For decades, developed countries have used cytology-based programs with Pap smear as the standard screening protocol [2, 3, 8]. However, these programs require laboratory infrastructure that is not readily available in many SSA countries and is often prohibitively expensive to sustain at scale [21]. Alternative screening methods have been developed with the hope of being more sustainable in resource-limited settings [8]. Visual inspection with acetic acid or Lugol’s iodine (VIA/VILI) identifies precancerous lesions with the naked eye. VIA and VILI are advantageous because they can be performed by non-physician providers (addressing provider shortages) and provide immediate results (reducing loss to follow-up) [22,23,24]. They have sensitivity similar to Pap smear and can provide screening at much lower cost with fewer staff [20, 24, 25]. However, these visual tests are less specific and can lead to overtreatment [20, 24]. HPV DNA testing is another alternative screening method, used to identify high-risk, carcinogenic HPV (typically types 16 and 18). The test can be performed at home with self-sampling kits and has been acceptable to many surveyed women [26,27,28,29,30,31]. It can also serve as a preliminary triage, saving time and resources on women who screen HPV negative and do not require follow-up testing [32, 33]. HPV DNA testing does not require the same level of laboratory infrastructure as Pap smear, but it nonetheless involves laboratory processing and a wait to receive results [8].

Despite the development of alternatives to Pap smear, a significant research-to-practice gap remains. Lack of trained providers, overburdened health facilities, insufficient supplies, inadequate lab infrastructure, loss to treatment follow-up, high costs, and cultural beliefs are some of the implementation barriers experienced in SSA [4,5,6,7,8]. In addition to seeking alternative screening methods, SSA countries can further improve their prevention efforts by developing and employing implementation strategies to overcome these barriers. An implementation strategy is defined as “a systematic intervention process to adopt and integrate evidence-based health innovations into usual care” [34]. The purpose of this systematic review is to uncover the breadth and diversity of implementation strategies used to improve the uptake and sustainability of cervical cancer prevention programs in SSA. By highlighting different strategies, we aim to assist researchers, practitioners, managers, and policy makers in scaling up and evaluating new and existing programs.

Methods

Search strategy

Figure 1 outlines the search strategy, which has been reported according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [35, 36]. A reviewer (LJ) independently searched PubMed, Ovid/MEDLINE, Scopus, and Web of Science databases with the following approximate search terms: (cervical cancer OR HPV) AND (prevention OR screening OR program OR implementation OR scale-up OR Pap smear OR VIA OR VILI OR see-and-treat OR HPV vaccine OR HPV DNA test OR self-sampling OR colposcopy OR cryotherapy OR LEEP) AND (sub-Saharan Africa OR country-specific terms for each SSA country). Search strategies with specific terminology for each database are included as Additional file 1.
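As a rough illustration, the core Boolean query could be issued programmatically. The sketch below is a hypothetical reconstruction using Biopython’s Entrez interface against PubMed only; the email address and retmax value are placeholders, and the authoritative per-database strategies are those given in Additional file 1.

```python
# Hypothetical sketch of the PubMed arm of the search, assuming Biopython's
# Entrez interface. The real, database-specific strategies are in
# Additional file 1.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact email (placeholder)

# Simplified version of the Boolean search described above
query = (
    '("cervical cancer" OR HPV) AND '
    '(prevention OR screening OR program OR implementation OR "scale-up" OR '
    '"Pap smear" OR VIA OR VILI OR "see-and-treat" OR "HPV vaccine" OR '
    '"HPV DNA test" OR self-sampling OR colposcopy OR cryotherapy OR LEEP) '
    'AND ("sub-Saharan Africa")'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=5000)
record = Entrez.read(handle)
handle.close()
print(f"{record['Count']} records found; first IDs: {record['IdList'][:5]}")
```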

Fig. 1 Search strategy, reported according to PRISMA guidelines

Eligibility criteria

Inclusion and exclusion criteria were developed to identify original research that empirically evaluated or tested implementation strategies to improve cervical cancer prevention in SSA. Articles were eligible for inclusion if written in English, peer reviewed, and published between 1996 and 2017. Non-empirical studies (reviews, commentaries, editorials, etc.) and studies that did not explicitly assess implementation strategies (e.g., studies of knowledge, attitudes, and beliefs; incidence and prevalence; safety and efficacy; cost-effectiveness and modeling) were excluded from the review.

Study selection

The initial database search yielded 4575 results. Two reviewers (CJ, LJ) conducted the study selection process. Titles and abstracts of the identified articles were screened to exclude duplicates (n = 2465) and studies not relevant to the topic (n = 1264). The remaining articles (n = 846) were reviewed in full text. Fifty-three studies met the eligibility criteria and an additional 793 articles were excluded.
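The counts above form the PRISMA flow; a quick arithmetic check (illustrative only, not part of the review methodology) confirms they are internally consistent:

```python
# Sanity check of the PRISMA flow counts reported above (illustrative only).
initial = 4575
duplicates = 2465
irrelevant = 1264
full_text = initial - duplicates - irrelevant   # 846 articles reviewed in full text
included = 53
excluded_at_full_text = full_text - included    # 793 articles excluded

assert full_text == 846
assert excluded_at_full_text == 793
```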

Data extraction

Data were extracted from the 53 included articles on the following implementation-related content: title, author, publication year, purpose, country, study design, prevention tools, implementation strategies, implementation outcomes, and results. The primary reviewer (LJ) and two additional reviewers (AA, CJ) completed data extraction on an initial sample of articles (n = 11, 20%) to ensure accuracy. Inconsistencies were resolved through consensus before the primary reviewer proceeded with the remaining articles. Results were summarized in frequency tables.
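As a minimal sketch of how such frequency tables might be produced, assuming the extracted fields were tabulated in a pandas DataFrame (column names and rows below are hypothetical, not the actual extraction sheet):

```python
# Minimal sketch of the frequency summaries, assuming one row per article.
import pandas as pd

articles = pd.DataFrame({
    "study_design": ["cross-sectional", "pre-posttest", "RCT",
                     "cross-sectional", "non-randomized trial"],
    "region": ["Eastern", "Western", "Southern", "Middle", "Eastern"],
})

# Counts and percentages for each extracted field, as in Table 2
for field in ["study_design", "region"]:
    counts = articles[field].value_counts()
    pct = (counts / len(articles) * 100).round(1)
    print(pd.DataFrame({"n": counts, "%": pct}), "\n")
```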

Two seminal articles from implementation science, Proctor et al. [37] and Powell et al. [34], were used to define and categorize implementation outcomes and strategies, respectively. Based on the Conceptual Model of Implementation Research [38], Proctor et al. developed a taxonomy of implementation outcomes that are conceptually distinct from service system outcomes and clinical treatment outcomes. Implementation outcomes were defined as “the effects of deliberate and purposive actions to implement new treatments, practices, and services.” Through an iterative process of reading and discussing relevant literature in the behavioral and health sciences, their working group of implementation researchers defined eight implementation outcomes: acceptability, adoption, appropriateness, costs, feasibility, fidelity, penetration, and sustainability. Powell et al. used the Consolidated Framework for Implementation Research [39] to compile a list of implementation strategies, or “systematic intervention processes to adopt and integrate evidence-based health innovations into usual care.” Their working group of researchers and clinicians from health and mental health services used narrative review to develop six categories: educate, restructure, quality, finance, plan, and attend to policy context. A complete list of categories and definitions for implementation outcomes and strategies can be found in Table 1.

Table 1 Implementation outcomes and strategies

Quality screening

Quality assessment tools from the National Heart, Lung, and Blood Institute (NHLBI) were used to assess each study for internal validity [40]. There are separate NHLBI Quality Assessment Tools for each study type (controlled trials, pre-posttest, and cross-sectional). Each tool includes specific questions to assess bias, confounding, power, and strength of association between intervention and outcomes. The answer to each question can be “yes,” “no,” “cannot determine,” “not reported,” or “not applicable.” Instead of using a numeric scoring system, the rater is asked to consider the potential risk of bias in the study design whenever a “no” is selected. Overall quality is rated as “good” (low risk of bias, valid results), “fair” (some risk of bias, not sufficient to invalidate results), or “poor” (significant risk of bias, may invalidate results). One reviewer (LJ) independently screened all studies, and two additional reviewers (AA, CJ) screened a 20% sample (n = 11) to double-check for accuracy.
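Because the NHLBI tools deliberately avoid numeric scoring, any automation is limited to bookkeeping. The hypothetical helper below merely tallies item responses to flag potential sources of bias for the rater’s qualitative judgment; it is not part of the NHLBI tools themselves.

```python
# Illustrative bookkeeping only: the NHLBI tools do not score numerically,
# so this sketch just tallies responses to flag items for rater judgment.
from collections import Counter

def summarize_responses(responses):
    """responses: one study's checklist answers, e.g. 'yes', 'no',
    'cannot determine', 'not reported', or 'not applicable'."""
    tally = Counter(responses)
    flags = tally["no"] + tally["cannot determine"] + tally["not reported"]
    return tally, flags

tally, flags = summarize_responses(
    ["yes", "yes", "no", "not reported", "yes", "cannot determine"]
)
print(f"{flags} items flag potential risk of bias: {dict(tally)}")
```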

Results

Of the initial 4575 articles (2110 after duplicates were removed), 53 met inclusion criteria and were included in the following synthesis of results. Study characteristics are summarized in Table 2. The table of evidence is included as Table 3. Most studies were published within the last 7 years. Studies were well represented across all regions of sub-Saharan Africa: 16 (30.2%) were conducted in Southern Africa, 16 (30.2%) in Western Africa, 14 (26.4%) in Eastern Africa, and 7 (13.2%) in Middle Africa.

Table 2 Study characteristics
Table 3 Table of evidence

Study design

The majority of studies included in the review were cross-sectional (n = 34, 64.2%). Ten of the cross-sectional studies evaluated the impact of changing service providers on how well the screening test is performed. Using sensitivity and specificity rates, some compared nurses’ VIA assessments against an expert physician’s [22, 24, 25, 41], while others compared self- versus physician-collected samples for HPV DNA testing [27, 28, 30, 31, 42, 43]. Sixteen studies examined whether screening coverage increases when changing service sites [44,45,46,47,48,49], combining screening with an already established program (e.g., HIV/STI screening) [50,51,52,53,54,55,56,57], or providing financial incentives [58, 59]. Four studies evaluated the effect of educational interventions on knowledge, attitudes, and screening behaviors for patients [60] and providers [61,62,63]. Three studies examined whether reminder systems, delivered through community health workers [64, 65] or phone-based tracking [23], can help decrease loss to follow-up. One study, Michelow et al. [66], used rapid review of reportedly negative cervical smears as an internal quality assurance modality.
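For reference, the sensitivity and specificity measures underpinning these provider comparisons are straightforward to compute. The sketch below uses made-up counts, not data from any included study.

```python
# Worked definitions of the test-performance measures used in these
# comparisons (illustrative counts only).
def sensitivity(tp, fn):
    # Proportion of true disease cases the test correctly flags positive
    return tp / (tp + fn)

def specificity(tn, fp):
    # Proportion of disease-free women the test correctly flags negative
    return tn / (tn + fp)

# e.g., a nurse's VIA reads scored against an expert physician's reference
tp, fn, tn, fp = 45, 5, 80, 20
print(f"sensitivity = {sensitivity(tp, fn):.2f}")  # 0.90
print(f"specificity = {specificity(tn, fp):.2f}")  # 0.80
```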

Ten studies (18.9%) were conducted with a pre-posttest design. All of the pre-post studies evaluated the effectiveness of educational interventions in improving awareness and screening behaviors for patients [67,68,69,70,71,72,73,74] or knowledge and skills retention for providers [75, 76]. Only three studies included a control group [67, 69, 70].

There were eight randomized controlled trials (15.1%). Six tested strategies to increase screening uptake through educational interventions [26, 77,78,79], financial incentives [80], or changing service sites [81]. Two compared HPV DNA self-sampling to the current standard of physician collection via speculum exam [29, 82].

Only one study was a non-randomized controlled trial (1.9%): Mutyaba et al. [83] evaluated whether male partner involvement is effective in reducing loss to follow-up after a positive VIA screening test.

Prevention tools

Primary prevention with the HPV vaccine was included in 9 studies (17.0%). VIA was the most frequently used secondary screening method (n = 19, 35.8%). Less commonly, secondary screening was completed with HPV DNA/mRNA testing (n = 15, 28.3%), Pap smear (n = 13, 24.5%), VILI (n = 9, 17.0%), colposcopy (n = 7, 13.2%), biopsy (n = 5, 9.4%), and unspecified screening (n = 5, 9.4%). Digital imaging to supplement visual screening methods (VIA/VILI) was used in 9 studies (17.0%). Where follow-up treatment of precancerous lesions was conducted, it was performed with either LEEP (n = 5, 9.4%) or cryotherapy (n = 5, 9.4%).

Implementation strategies

Researchers used educate (n = 38, 71.7%), restructure (n = 26, 49.1%), and quality (n = 13, 24.5%) strategies most frequently. For patients and their families, education strategies aimed to increase awareness of cervical cancer and the importance of prevention. For providers, education strategies were used to improve knowledge and skills retention in conducting screening and treatment services such as VIA, cryotherapy, and LEEP. Example educate strategies include community outreach, individual patient teaching and counseling, provider training, mass media campaigns, and development of educational materials. Restructure strategies facilitated implementation by changing service sites (established vs. mobile clinic for Pap smear), changing delivery models (age- vs. class-based for HPV vaccine), or changing providers (nurse vs. physician for VIA, patient vs. physician for HPV DNA test). Several studies also used restructure strategies to combine cervical cancer prevention with other services (e.g., HIV/STI testing, marriage counseling, family planning) to draw on the financial and infrastructural support of already established programs. The quality strategies in these studies were ongoing consultation, patient reminder systems, and audit-feedback mechanisms. Five studies (9.4%) included a finance strategy that incentivized patients to take up screening services. Only one study (1.9%) utilized a plan strategy: Kapambwe et al. [60] spent time developing trust with alangizi (traditional marriage counselors) to encourage them to integrate cervical cancer screening messaging into their counseling sessions with women. No included study used an attend to policy context strategy (0%).

Implementation outcomes

The most studied implementation outcomes were penetration (n = 33, 62.3%), acceptability (n = 15, 28.3%), and fidelity (n = 14, 26.4%). Penetration was often measured as vaccine or screening coverage, calculated by dividing the number of women who participated by the total eligible or targeted population. Additional measures of penetration included rates of loss to follow-up for cryotherapy or LEEP treatment and three-dose adherence for HPV vaccination. Acceptability was most commonly measured by surveying patients on their reasons for accepting or refusing participation. Among providers, acceptability was measured as comfort with performing newly learned skills and reported satisfaction with training and program implementation. Fidelity was measured in studies that compared nurses’ VIA assessments or self-collected HPV DNA samples to those of expert physicians. These comparisons indicated whether patients and nurses could perform the tests with reasonable reliability, which would help address physician shortages by shifting who delivers screening.
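As a worked illustration of the coverage calculation described above (all numbers hypothetical):

```python
# Minimal sketch of the penetration (coverage) calculation; numbers are
# hypothetical, not drawn from any included study.
women_screened = 350
eligible_population = 1000
coverage = women_screened / eligible_population
print(f"screening coverage = {coverage:.1%}")  # 35.0%
```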

Other, less frequently studied outcomes included feasibility (n = 8, 15.1%), adoption (n = 6, 11.3%), sustainability (n = 2, 3.8%), and cost (n = 1, 1.9%). To measure feasibility, many researchers assessed providers’ perceived barriers and facilitators to implementation; others quantified circumstances that impeded successful operation of the program, such as rates of equipment malfunction, poor picture quality for digital images, invalid lab results, and expired vaccines. Adoption was measured as the willingness or intent of patients to participate in screening or HPV vaccination. Only two studies included measures of sustainability: Moon et al. [54] quantified sustainability as the number of providers still performing VIA 1 year after initial training, and Levine et al. [75] determined VIA skill and knowledge retention with a 6-month follow-up assessment. One study, Goldhaber-Fiebert et al. [65], measured costs associated with cervical cancer screening, specifically community health worker home visits.

No studies measured appropriateness (0%).

Quality assessment

Few studies (n = 11, 20.8%) were rated “good” quality using the NHLBI Quality Assessment Tools. The remaining studies were “fair” (n = 22, 41.5%) or “poor” (n = 20, 37.7%). Overall, many studies did not sufficiently describe their methodology, which made it difficult to make determinations for items on the NHLBI tools; items were often marked as “not reported” or “cannot determine.” A common weakness specifically for controlled intervention studies was a lack of adequate randomization. Some randomized controlled trials (RCTs) used a preset plan for allocating patients to intervention or control groups (e.g., even vs. odd ID numbers) instead of computer-generated lists; other RCTs did not describe how participants were allocated at all. Adequate randomization is important because it provides confidence that results are attributable to the intervention rather than to baseline differences between groups. For pre-posttests, only 3 of the 10 studies included a control group [67, 69, 70]. Without a control group for comparison, there is less confidence that an improvement between pre- and post-assessments is due to the intervention rather than chance. The cross-sectional studies were mainly descriptive: few used statistical analyses to determine associations between intervention and outcomes, confounders were rarely measured and included in the analyses, and outcome measures frequently lacked validity and reliability.

Discussion

The challenges of establishing and sustaining cervical cancer prevention programs in SSA have been identified in several recent reviews [4,5,6,7]. However, the authors found no review to date that addresses implementation strategies for overcoming these identified barriers. Safe and effective prevention tools exist but are not reaching the women who need these services most. This review is an attempt to bring cervical cancer prevention into the implementation science conversation and propel the state of the science forward. Finocchario-Kessler et al. [84] conducted a systematic review of the literature between 2004 and 2014 to characterize cervical cancer research in SSA according to four public health categories (primary prevention, secondary prevention, tertiary prevention, and quality of life). They determined that most studies focused on secondary prevention and concluded that there is a need for “implementation science research to inform feasible and sustainable strategies to maximize the number of women reached with services” [84].

Implementation science is an emerging field that aims to bridge research and practice in order to achieve desired patient and population health outcomes [85]. Historically, a significant amount of efficacy and effectiveness research conducted in controlled settings has not translated into “real-world” impact, and the traditional, passive methods of dissemination (i.e., journal publishing) have not proven effective. By one estimate, it takes an average of 17 years for just 14% of original research to reach practice [86]. Implementation science seeks to address this “quality chasm” by explicitly studying the processes of implementing evidence-based programs in clinical and public health settings [87]. Implementation strategies are instrumental in bridging this gap and improving the speed and rigor of research translation. The results of this review provide insight into how study designs, strategies, and outcomes have been used to study the implementation of cervical cancer prevention in SSA. Since sub-Saharan Africa faces some of the highest cervical cancer rates worldwide, it is important to evaluate what has been done so far to address these challenges and to consider how these efforts can be improved through the use of implementation strategies.

Study design

While randomized controlled trials are the “gold standard” in efficacy and effectiveness research, they are difficult to conduct feasibly in implementation research because of its multi-level, multi-strategy interventions [85]. Random assignment is harder when the level of analysis is the organization, community, and/or country rather than the individual, and it is difficult to recruit samples large enough for adequate statistical power. For these reasons, Brownson et al. [85] conclude in Dissemination and Implementation Research in Health, one of the seminal works advancing the field of implementation science, that quasi-experimental designs without randomization are reasonable for implementation research. They argue, however, that rigorous quasi-experimental design is essential to producing quality data with practical use. While quasi-experimental studies may be more feasible to conduct, these designs do not support the same confidence in causation as randomized controlled trials and make it more difficult to compare effectiveness across studies.

In the absence of randomization, researchers can incorporate control groups, measure confounders, and statistically compare baseline group characteristics to greatly increase the rigor of implementation study designs. In their assessment of 66 Cochrane reviews on implementation research, Brownson et al. [85] concluded that “many publications in the literature are still merely descriptive in nature or have weak designs without comparison or control conditions to answer critical research questions.” This systematic review produced similar results: the majority of included studies were cross-sectional, descriptive studies assessed as “poor” or “fair” quality. This review echoes the argument that more rigorous research designs are needed to answer implementation science questions.

Implementation strategies

Evaluating the effectiveness of the various implementation strategies is difficult due to the descriptive nature of most studies, the overall poor quality of study designs, and variation in the outcomes measured. While educate strategies were the most popular method leveraged in an attempt to improve implementation, implementation science suggests that dissemination of information is not the most effective method for creating sustainable change [88]. Within this review, education likewise often failed to produce intended outcomes. Many studies employing educate strategies showed improvements in awareness, but these strategies in isolation did not always catalyze better uptake, acceptability, and/or confidence [61, 63, 64, 78], and even when a significant difference was observed, uptake remained low [67, 77, 83]. These results suggest a need to diversify the implementation strategies used to improve cervical cancer prevention in this context. Restructure, finance, and attend to policy context strategies can provide the organizational support required to improve implementation and overcome barriers particular to resource-limited settings.

Implementation outcomes

While these studies did include implementation outcomes, the overwhelming majority of reported outcomes were patient-level outcomes, such as symptomatology, cancer rates, and cervical lesion typology. For implementation studies, it is crucial to measure implementation outcomes specifically [37]. If the desired health outcomes are not achieved after an evidence-based program is implemented, the failure is typically attributed to the program itself, without consideration of how well (or poorly) the practice was implemented in that particular setting [86, 88]. If we do not measure implementation outcomes, there is no way to deduce what is ultimately influencing patient or population health outcomes. Additionally, continued effort is needed to operationalize and measure implementation outcomes. One of the eight outcomes (appropriateness) was not measured in any included study and should be considered for inclusion in future work.

Limitations

A major limitation of this systematic review is the overall quality of the evidence. “Poor” and “fair” quality ratings for the majority of studies make it difficult to draw conclusions about implementation strategies and their effectiveness, and risk of bias in study design and implementation greatly decreases confidence in the validity of results. Another limitation is that only a sample of initial articles, rather than the entire dataset, was abstracted and quality assessed by a second reviewer. However, inconsistencies were resolved through consensus before the primary reviewer proceeded with the remaining articles.

Conclusions

This systematic review highlights the need to diversify the strategies used to improve implementation of cervical cancer prevention programs. While education is important, the implementation science literature shows that dissemination of information in isolation is not effective in generating change [88]; additional organizational support is needed to further incentivize and sustain change [85, 89]. Implementation research is difficult because interventions are multifaceted and conducted at different levels of analysis [85]. Many studies in this review included patient-level outcomes but did not include implementation-specific outcomes to assess the success of implementation strategies. This review calls for increased use of implementation science frameworks to inform the design of studies that aim to improve cervical cancer prevention in SSA, and for increased use of common terminology from implementation science for outcomes and strategies. A shared vocabulary can help researchers communicate results, and more rigorous research designs can better isolate the impact of implementation strategies on intended outcomes.