Background

Description of the condition

Decentralization has been a common healthcare reform in low- and middle-income countries (LMICs) since the 1950s and early 1960s [1]. By decentralization we mean the transfer of authority or delegation of power in public planning, management, and decision-making from the national level to sub-national levels [13].

A common form of decentralization in LMICs is deconcentration, which has been defined as the handing over of some administrative authority from the central level of government to the district level of, for instance, a ministry of health. Deconcentration in the health sector aims at establishing a local district management team with clearly defined administrative duties and a degree of discretion that enables local officials to manage a limited administrative area (e.g. a health district) without constant reference to ministry headquarters [1, 4].

However, inadequate leadership and management capacities of district health managers often hamper their ability to improve the quality, effectiveness, and efficiency of health service delivery, which in turn may contribute to a decreased use of healthcare services by the local population [5, 6].

Description of the intervention

There has been significant investment in capacity-building programmes aimed at developing and maintaining the essential competencies required for optimal public health and effective health service delivery [2, 6]. These capacity-building programmes are activities and processes that improve the ability of staff within organizations to carry out stated objectives [7].

In many LMICs, site-based training and mentoring programmes have been implemented to strengthen competencies and provide supportive mentorship for local district health managers (see Fig. 1).

Fig. 1 Logic model of site-based training, mentoring, and operational research intervention in a district health system

Mentoring (M) is a flexible learning and teaching process that serves specific objectives of a health programme. Health management mentoring involves spending time with managers in their local environment to assist them in their day-to-day challenges. Mentorship is defined as the dynamic, reciprocal relationship in a work environment between an advanced career incumbent (mentor) and a beginner (mentee) [8]. It aims at promoting the development of both the mentor and mentee. Mentoring is recognized as a catalyst for facilitating career selection, advancement, and productivity [8, 9].

Site-based training (SBT) (also called in-service training) involves training current district health managers in their work settings. Such training may enable health managers to run health districts in an integrated manner through sound management of primary and secondary health services, team building, and supervision [10]. SBT has been, and will remain, a significant investment in developing and maintaining essential competencies required for optimal public health in local district level service settings [11].

Operational research (OR) is “the search for knowledge on interventions, strategies, or tools that can enhance the quality, effectiveness, or coverage of programmes in which the research is being done” [12].

Action research (AR) is “a method used for improving practice. It involves action, evaluation, and critical reflection and – based on the evidence gathered – changes in practice are then implemented.” [13].

Both operational research and action research are understood in this review as tools for capacity-building and as close collaborative investigations between researchers and district health managers (DHM). In this paper, we will refer to both strategies as operational research.

Complex interventions are combinations of interacting and interdependent actions. The overall effect of the intervention may be greater or smaller than the sum of its components' effects when interactions are synergistic or antagonistic, respectively. Our review will examine the effectiveness of the site-based training intervention alone (single intervention) versus site-based training combined with mentoring and/or operational research activities (multicomponent intervention): i.e. (SBT+M+OR), (SBT+OR), or (SBT+M).

This review will also examine the relationship between the intensity of the intervention (number of components) and the effect size. It will seek to reconcile conflicting evidence on the effectiveness of single versus multicomponent interventions [14, 15].

How the intervention might work

The World Health Organization (WHO) has defined four factors that enable the improvement of district health systems management: (1) there should be an adequate number of trained managers; (2) managers should have appropriate competencies; (3) there should be support systems to provide managers with the resources they need to carry out their responsibilities, including systems for planning and budgeting as well as human resource management; and (4) the environment in which the managers function should enable them to carry out their responsibilities.

Site-based training will improve DHM competencies, while mentoring will provide the supportive environment DHM need to carry out their day-to-day responsibilities (factors 2 and 4 of the WHO framework) [6]. Meanwhile, operational research fosters the ability of DHM to make sense of national policies, to translate them into operational terms, and to integrate them into their day-to-day practices [16]. Operational research connects researchers with DHM in their work settings: researchers reflect on their ways of supporting managers, while managers identify work-related issues, analyse them, take action, and reflect on that action [17, 18].

We built a logic model (see Fig. 1) based on previous research to help us focus our review question and guide data collection [19–21]. Our review is focused on:

  • P: professionals working at district health management level

  • I: site-based training with or without mentoring AND/OR operational research

  • C: normal institutional arrangements

  • O: district health management functions (see Table 1)

Table 1 District management and leadership functions

Why it is important to do this review

A recent systematic review [11] found no evidence, or only low-quality evidence, of an effect of site-based training on the improvement of knowledge, competencies, and health outcomes. However, specific district management and leadership functions were not covered by that review.

This review will inform policymakers and educational institutions involved in site-based training and mentoring about the appropriate educational methods to use, constraints to avoid, and key mechanisms that enhance the effectiveness of these interventions, as well as the contextual factors that improve the performance of district health management.

Objectives

Our objective is to evaluate the available evidence on the effectiveness of site-based training, mentoring, and operational research on the improvement of district health system management and leadership (see Table 1). We will specifically evaluate site-based training intervention alone (single intervention) versus site-based training interventions associated with mentoring and/or operational research activities (multicomponent intervention).

We acknowledge that this intervention is multifaceted [14, 15] and complex in nature, as it is implemented in social systems characterized by human agency, uncertainty, and unpredictability. Hence, our secondary objectives are to identify the contexts and mechanisms that enable or constrain the effectiveness of this intervention. We will report broader generalizable trends across multiple settings and will devise a "best fit framework" that will help policymakers understand how and why the intervention works.

We acknowledge that we may not review other relevant outcomes due to practical considerations within a limited timeframe (see Additional file 1).

Criteria for considering studies for this review

Population

Inclusion criteria

The population of interest in this review is DHM. We define DHM as health officers involved in district health management who spend some of their time on management and/or administrative functions within the health district. These include district medical officers, nursing officers, health inspectors, administrators, counsellors from the district health committee, representatives of a hospital (hospital directors), district administrators, representatives of clinics, medical assistants, and local government promotion officers. Also included are health workers who carry out administrative tasks alongside their clinical practice, such as medical doctors, nurse practitioners, clinical officers, or primary healthcare officers.

Exclusion criteria

Medical and nursing students will be excluded from the review. Community health workers are also excluded because they are not involved in the management of health districts.

Intervention(s)

Intensity

Inclusion criteria

The intervention is a complex, multifaceted intervention (i.e. a combination of different strategies) composed of site-based training, with or without mentoring and/or operational research, at district level. Site-based training is the principal component; mentoring and operational research are optional components. We will therefore consider studies where site-based training is accompanied by mentoring or operational research programmes.

Exclusion criteria

Traditional in-class training, pre-service training, and medical education will be excluded. Training and mentoring focused on vertical programmes will also be excluded because of our focus on the systemic functioning of health districts.

Who delivers the intervention

Independent researchers, academics, or local managers might deliver the intervention.

Comparators

Control groups are districts delivering regular care that are not sites of site-based training, mentoring, or operational research interventions.

Outcomes

Primary outcomes

Intermediate outcomes, as depicted in the logic model (red box in Fig. 1), are the major focus of this review. We have identified major outcomes that are crucial for the effective functioning of local health systems and validated them with content experts and with end users of this review. These outcomes are grouped into two categories, district health management and leadership functions (see Table 1).

Secondary outcomes

This review intends to address the following secondary outcomes:

  • Mechanisms underlying the effect of the intervention

  • Enabling and constraining contextual factors

  • Comparison of effectiveness of single intervention versus multicomponent intervention

Types of study

In order to assess the effectiveness of site-based training, mentoring, and operational research interventions on the performance of district health system managers, we will rely on study designs capable of demonstrating a causal relationship, namely: cluster randomized controlled trials, controlled before-and-after (CBA) studies, interrupted time series (ITS), quasi-experimental designs, cohort studies, and longitudinal studies.

We will also collect and extract information from qualitative studies linked to, or associated with, included studies. Qualitative research and economic evaluation studies will be used in the review to help contextualize findings and identify barriers and facilitators. Qualitative research will help to identify how the intervention might work (underlying mechanisms), within a particular local historical and institutional context.

Reviews and meta-analyses will be excluded, but eligible studies identified from within existing reviews will be included.

Context

Inclusion criteria

Only interventions that are delivered in district health systems in LMICs will be included in this review.

Exclusion criteria

We will exclude studies carried out in high-income countries (HICs) because their health district models differ substantially from those of LMICs in both functioning and staffing; evidence from HICs would therefore hardly be transferable to LMICs. Studies that were not conducted within decentralized health district levels will also be excluded.

Search strategy

Our search strategy relies on three elements (population, intervention, and context) from the PICOC (Population, Intervention, Comparison, Outcome, Context) framework [22]. We adopted a sensitive rather than a specific search strategy in order to capture the whole range of studies in this field of health systems research.

Since we will also be gathering qualitative evidence, the search strategy has to be inclusive so as not to miss relevant papers. Therefore, we used a combination of thesaurus terms and free-text terms (Shaw et al. 2004; Fretheim et al. 2009; see Appendix 1). Search limits and sources to be searched are depicted in Tables 2 and 3, respectively.

Table 2 Search limits
Table 3 Sources to be searched

Data collection and analysis

Study selection

A team of two reviewers will be involved in selecting studies. We are using EndNote as reference management software. Title and abstract screening will be recorded using Microsoft Excel sheets, including the reasons for each exclusion. Full-text articles will be obtained in cases of doubt. In cases of disagreement between the reviewers, we will consult a third reviewer.

A kappa coefficient will be calculated to ensure that discordance does not compromise the validity of the selection process. The review team includes two experts in evidence-informed decision-making. Since we are interested in implementation gaps, qualitative studies and process evaluations will be obtained and considered alongside the included studies. By process evaluation we mean research used to assess the fidelity, quality, and reach of implementation; to clarify causal mechanisms; and to identify contextual factors associated with variation in outcomes [23]. Data from multiple reports of the same study will be collated.
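As an illustration, the kappa coefficient for two reviewers' screening decisions can be computed directly from observed and chance-expected agreement. The sketch below uses hypothetical include/exclude decisions, not data from this review:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters decided independently
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical include/exclude decisions for ten screened abstracts
a = ["include", "exclude", "exclude", "include", "exclude",
     "exclude", "include", "exclude", "exclude", "exclude"]
b = ["include", "exclude", "include", "include", "exclude",
     "exclude", "include", "exclude", "exclude", "exclude"]
print(round(cohens_kappa(a, b), 2))  # 9/10 observed agreement -> kappa 0.78
```

Values well below the raw agreement (here 0.78 versus 0.90) show how kappa discounts agreement expected by chance alone.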

Data extraction

Two reviewers will carry out data extraction, with external validation by the mentors of the review. Additional searches for process evaluations and qualitative and implementation studies will be performed using citation searching to inform an understanding of the context, mechanisms, barriers, facilitators, cost, and sustainability of the intervention implementation. Authors will be contacted if implementation data are not included in the published articles or in case of missing data. Relevant data will be collected for each type of intervention (site-based training, mentoring, operational research) according to the data extraction form presented below.

Data extraction form

  1. Author names, journal, year
  2. Study design
  3. Unit of analysis
  4. Sampling methods
  5. Type of intervention:
     (a) Organizational intervention
     (b) Professional (educational) intervention
  6. Participants (profession, administrative position, level of training, clinical specialty, age, time since graduation)
  7. Setting (location; country; district level, primary or secondary level; rural or urban area)
  8. Intervention characteristics

For each type of intervention (site-based training, mentoring, and operational research), data will be collected as depicted in Table 4.

Table 4 Intervention characteristics

We will report the frequency and timing of outcome measurement. The length of post-intervention follow-up will also be collected. Duplicate or multiple reports of the same study will be assembled and compared for duplication, completeness, and possible contradictions. Data will be extracted using a Microsoft Excel spreadsheet form. EndNote and RevMan software will be used for data storage and analysis. Authors will be contacted by mail in case of missing data. Measures of effect to be reported are relative risks (RR) for dichotomous variables, summary statistics for categorical measures such as Likert scales, and means or changes in means over time for continuous data.
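For a dichotomous outcome, the relative risk and its confidence interval can be derived from the two arms' event counts using the standard log-normal approximation. The counts below are hypothetical, chosen only to show the calculation:

```python
import math

def relative_risk(events_int, n_int, events_ctl, n_ctl, z=1.96):
    """Relative risk with a 95% CI via the log-normal approximation."""
    rr = (events_int / n_int) / (events_ctl / n_ctl)
    # Standard error of log(RR) from the 2x2 counts
    se_log_rr = math.sqrt(1 / events_int - 1 / n_int + 1 / events_ctl - 1 / n_ctl)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical data: 30/100 districts meet a management target in the
# intervention arm vs 20/100 in the control arm
rr, lower, upper = relative_risk(30, 100, 20, 100)
print(f"RR = {rr:.2f} (95% CI {lower:.2f} to {upper:.2f})")
```

With these invented counts the interval crosses 1 (RR 1.50, 95% CI 0.92 to 2.46), i.e. the apparent benefit would not reach conventional statistical significance.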

Proposed quantitative data synthesis

We will use RevMan software to perform meta-analysis of quantitative data if similar outcomes and measurement scales are used. A test of heterogeneity (the I² statistic) will be carried out to assess whether meta-analysis is appropriate and to inform the choice between fixed-effects and random-effects models. We will also explore whether effect size differs between the single intervention (SBT) and multicomponent interventions (SBT with mentoring and/or operational research). Meta-regression, using RevMan 5, could be used to examine the conditional relationship between predictors (intensity of the intervention and other subgroup variables) and effect size magnitude.
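The I² statistic is derived from Cochran's Q under inverse-variance weighting. The sketch below illustrates the calculation on hypothetical log relative risks; RevMan computes the same quantities internally:

```python
def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 statistic under inverse-variance weighting."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: share of total variation attributable to between-study heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Hypothetical log relative risks and their variances from four studies
q, i2 = heterogeneity([0.30, 0.45, 0.10, 0.60], [0.04, 0.05, 0.03, 0.06])
print(f"Q = {q:.2f}, I^2 = {i2:.1f}%")
```

A low I² (these invented data give about 8%) would support a fixed-effects model, whereas substantial values would argue for random effects or for exploring heterogeneity through subgroup analysis or meta-regression.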

Proposed qualitative data synthesis

We will first use best fit framework (BFF) synthesis to synthesize the qualitative evidence gathered from included studies. This methodology suits the analysis of organizational policies and procedures. It is also appropriate for research within a limited time frame, with specific questions, and with heterogeneous outcomes [24, 25]. The main purpose of the BFF is to describe and interpret what is happening in specific contexts. It goes beyond identifying insights from individual case studies to reporting broader generalizable trends across multiple settings. Therefore, BFF helps practitioners design suitable action plans that address system-wide issues.

BFF involves choosing a conceptual framework that suits the question of the review and using it as the basis for an initial coding framework. In response to the evidence gathered, the framework is subsequently altered, so that the final model is a modified framework that includes both modified factors, for example those achieving additional granularity, and new factors not anticipated by the a priori framework. The revised framework thus becomes a more generalizable framework [25, 26]. The steps of BFF are depicted in Fig. 2 [25]. We will use the logic model, depicted in Fig. 1, as our a priori framework, as an increasing number of systematic reviews have done [27–29]. Logic models help to highlight key contextual elements and to focus on the underlying programme theory. We do not claim that it is an ideal model, only that it fits the purpose of this review.

Fig. 2 Best fit framework synthesis steps [25]

We will identify relevant framework publications using truncated terms (theor*, concept*, framework*, or model*) in our reference management database of included studies. A supplementary search will be carried out on external databases using the BeHEMoTh (Behaviour of interest, Health context, Exclusions, Models or Theories) template (see search strategy in Appendix 2) [25]. Study data will be extracted against the concepts and subcategories of the framework.
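The truncated-term screen can be approximated with a simple pattern match over reference titles; in practice it would run inside the reference manager. The titles below are invented for illustration only:

```python
import re

# Truncated terms from the framework search: theor*, concept*,
# framework*, model* (case-insensitive, anchored at word starts)
PATTERN = re.compile(r"\b(theor|concept|framework|model)\w*", re.IGNORECASE)

# Invented reference titles for illustration only
titles = [
    "A conceptual framework for district health management",
    "Cost analysis of rural clinics",
    "Logic models in capacity-building programmes",
]

framework_candidates = [t for t in titles if PATTERN.search(t)]
print(framework_candidates)  # the first and third titles match
```

The word-boundary anchor (`\b`) makes each truncated stem behave like a left-truncation wildcard, so "conceptual" and "models" match while unrelated words do not.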

Quality assessment strategy

For RCTs and non-RCT studies (CBA, ITS), the Cochrane Collaboration tool [30] and the EPOC tools [31], respectively, will be used for assessing risk of bias. Critical Appraisal Skills Programme (CASP) checklists will be used for qualitative studies [32]. We will assess the quality of studies and categorize them as low, unclear, or high risk of bias: low if plausible bias is unlikely to alter confidence in the results, unclear if plausible bias raises some doubt about the validity of the results, and high if plausible bias seriously weakens confidence in the results.

In addition, the overall strength of recommendations will be assessed using the Grades of Recommendation, Assessment, Development and Evaluation (GRADE) approach. The quality of the body of evidence will be categorized as high, moderate, low, or very low based on the likelihood of specific risks of bias, publication bias, indirectness (surrogate outcomes, indirect comparisons, differences in population, differences in intervention), inconsistency, and imprecision.

We will identify risk of bias using specific criteria [33]; estimate publication bias using funnel plots of study results [34]; and assess the likelihood of inconsistency using criteria such as variation in point estimates, lack of overlap of confidence intervals, statistical tests for heterogeneity, and the I² statistic [35, 36].

For qualitative studies, the CASP tool will allow us to question the validity of the results, the quality of the analysis process, and the relevance for local context. To ensure consistency with the GRADE approach, we will use the corresponding CERQual approach for the qualitative studies. This will allow us to categorize the evidence into the same four categories (high, moderate, low, and very low) based on the likely confidence in the findings (methodological limitations, publication bias, relevance, coherence, and adequacy of data) [37].

Reporting

This protocol is reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Protocols (PRISMA-P) statement for reporting systematic review protocols (see Additional file 2) [38].