Background

In the drive to achieve universal health coverage (UHC), the importance of quality of care has been accentuated by the 2030 Sustainable Development Agenda. Despite recent progress, challenges in service delivery, efficiency and resource utilization in the health sector remain.

In recent years, quality of health care has risen high on the international health agenda, especially in the context of Health System Strengthening (HSS) and UHC. Although health care utilization rates have increased in some low- and middle-income countries (LMICs) [2, 3], mortality and morbidity rates have not declined accordingly [1]. This discrepancy might be explained by the low quality of care provided in both the public and the private sector [4].

In Kenya, as on a global scale, large and often unexplained differences in quality assessments can be observed between hospitals, facilities and providers. This raises the question of whether these are true differences or the result of weak measurement methods or quality auditors’ biases [5, 6]. Given the multitude of quality improvement (QI) tools and approaches in use, one of today’s major challenges is to improve their compatibility with specific health systems and to take into account the instruments, procedures and data that already exist in the respective health information systems. There is an increasing demand, not only in Kenya, to implement evidence-based QI across health systems and to ensure that QI approaches, standards and indicators adhere to scientific standards [5].

In Kenya, as in many other LMICs, remarkable efforts have been made by the government, development partners, faith-based organizations and the private sector to improve service delivery, efficiency and resource utilization. However, service performance and health indicators in the Kenyan health sector continue to lag behind.

Besides deficient infrastructure and shortages of equipment, drugs and staff, problems with quality of care persist. These are particularly pronounced in the areas of maternal and neonatal care, family planning and the provision of services for survivors of sexual and gender-based violence [7]. The maternal mortality ratio remains intolerably high at 362 per 100,000 live births [8]. Whereas health facility data indicated that 95.7% of pregnant women in Kenya attended at least one antenatal care (ANC) visit in 2014, survey data showed that only 57.6% completed the minimum of four ANC visits recommended by the World Health Organization (WHO). More than half of pregnant women (61%) delivered at a health facility in 2014 [9], but even these facility-based deliveries are often performed under inadequate professional supervision [10]. The availability and use of essential guidelines at facility level is not guaranteed [11]. In 2014, contraceptive prevalence was still low, with only 58% of married women in Kenya using any method, and contraceptives are often out of stock [7]. Women’s increased vulnerability to HIV infection has been closely linked to gender-based violence, the seriousness of which has been repeatedly demonstrated in the Kenyan context [12,13,14].

In 2001, the Kenya Quality Model (KQM) was launched by the Ministry of Health (MoH) [15]. KQM defined quality management as a process to better comply with standards and guidelines, to improve structures, processes and results in health care through quality management (QM) tools, and to meet patient needs. However, KQM was not implemented in a participatory way and remained a frozen tool [16]. KQM was therefore revised, extended and renamed the Kenya Quality Model for Health (KQMH). KQMH is supposed to serve as the national framework unifying existing approaches to improve quality of care at all facilities of the health system. Although KQMH has been further developed into a comprehensive conceptual framework for QM, challenges remain in operationalizing it. In response to this implementation gap, and as part of its support for the Kenyan health sector, the Deutsche Gesellschaft für Internationale Zusammenarbeit (GIZ) sought to support the Kenyan Ministry of Health’s Department of Standards and Regulatory Services (DSRS) in establishing a practical modality to operationalize the KQMH and make it the point of reference for all facilities working to improve the quality of their services. An integrative methodology was needed to reduce fragmentation, while an evidence-based approach was sought to strengthen knowledge about how improving the quality of care can strengthen health systems.

A consortium comprising evaplan GmbH at the University of Heidelberg, the Institute for Applied Quality Improvement & Research in Health Care in Germany (aQua), and the Institute of Health Policy, Management and Research (IHPMR) in Nairobi was contracted to develop and implement an Integrated Quality Management System (IQMS). A first assessment of quality improvement activities, which included a stakeholder mapping, revealed a rather piecemeal approach to quality improvement in Kenya. Moreover, although traditional tools such as supervision, the use of a Health Management Information System (HMIS) and continuous professional training were widely applied, these efforts did not produce the expected results in terms of improved health outcomes.

Supervision was carried out erratically, and the full potential of the approach was not exploited. Modern quality tools such as self-assessment were not well known and little used. Reporting forms were often completed late, the data itself was of questionable quality, and the extent to which the data was used to inform health facility and sector planning was limited [17].

This paper describes the development and implementation of the IQMS and demonstrates how such an integrated quality management approach can serve as a powerful tool for decision-making in resource-poor settings and hence significantly improve the quality of care.

Methods

The aQua-Institute has developed a systemic, comprehensive and evidence-based quality management tool for the German health system. This integrated quality management approach has been formalized as the European Practice Assessment (EPA) and, since 2013, has been implemented in more than 3000 health facilities in Germany, the Netherlands, Belgium, Romania, Austria, Switzerland and Slovenia. It is a multiperspective, multifaceted, indicator-based approach built around five domains (infrastructure, people, information, finance, and quality & safety) that cover most of the six WHO health system building blocks. These domains can be modified according to the needs of the country and its health facilities.

The specially developed software VISOTOOL® visualizes the results in an easily understandable way to stimulate discussion with facility staff and to facilitate the development of highly tailored improvement plans. Furthermore, the software allows facilities to benchmark their results against the average result of all participating facilities [18, 19].
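To make the benchmarking idea concrete, the following is a minimal sketch of such a comparison, assuming facilities are scored on a 0 to 100 scale per domain. It does not represent VISOTOOL® itself, whose internals are not described here; all facility names and scores are invented.

```python
# Minimal benchmarking sketch (illustrative only, not VISOTOOL® code):
# compare one facility's domain scores against the average of all
# participating facilities. All names and scores are invented.
from statistics import mean

# Domain scores (0-100) per facility, keyed by domain
scores = {
    "Clinical Care":    {"facility_a": 62, "facility_b": 55, "facility_c": 70},
    "Management":       {"facility_a": 68, "facility_b": 61, "facility_c": 66},
    "Quality & Safety": {"facility_a": 51, "facility_b": 58, "facility_c": 49},
}

def benchmark(domain_scores, facility):
    """Return (facility score, peer average, difference) for one domain."""
    own = domain_scores[facility]
    avg = mean(domain_scores.values())
    return own, avg, own - avg

for domain, by_facility in scores.items():
    own, avg, diff = benchmark(by_facility, "facility_a")
    print(f"{domain}: {own} vs. average {avg:.1f} ({diff:+.1f})")
```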

To operationalize the Kenya Quality Model for Health (KQMH), the integrative, evidence-based and indicator-based EPA approach was adapted in collaboration with the Ministry of Health, including the Department of Standards and Regulations, the Department of Clinical Services, the Division of HMIS, the Division of Reproductive Health, the Division of Child Health and the Unit of Monitoring & Evaluation.

The adaptation process made use of a ten-step modified RAND/UCLA appropriateness method; this systematic method for validating indicators is described in detail by Prytherch et al. [5]. The steps included a scoping workshop, the definition of five critical domains of quality in the Kenyan context, and a review of more than 50 policy and planning documents, standards, management and clinical guidelines, and grey and scientific literature to identify indicators in use in the Kenyan health system. An expert panel adapted and validated the five proposed domains and assessed the candidate indicators against the Specific, Measurable, Achievable, Relevant and Time-bound (SMART) criteria, before rating their validity and feasibility using a modified Delphi method. The resulting 303 structure, process and outcome indicators, clustered across the five domains (Clinical Care, People, Management, Interface In/out-patients, and Quality and Safety), were grouped into 29 dimensions. For the domain Clinical Care, illustrative dimensions include antenatal care, delivery, postnatal care, family planning, and survivors of gender-based violence; for the domain People: patient satisfaction, staff satisfaction, staff general, staff appraisal, and staff support; for the domain Management: leadership and governance, financial, maintenance, supplies, drugs, data, equipment, amenities, transport, and waiting times; for the domain Interface In/out-patients: community, general, and referral; for the domain Quality and Safety: general, guidelines etc., critical incident reporting, emergency management, infection control, and laboratory. Finally, a set of five data collection tools based upon the final register of indicators was developed. Following the principle of triangulation of methods, these tools comprised patient and staff surveys, a self-assessment, a facilitator assessment and a manager interview guide. The data collection tools were then incorporated into the specially developed software VISOTOOL®. The use of quality indicators is described in detail in Goetz et al. [20] and Herrler et al. [21] (Figs. 1 and 2).
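As an illustration of the resulting register, the following sketch shows one way the indicators could be represented in code. The field names, rating scale and example indicator are our assumptions for illustration, not the actual IQMS data model.

```python
# Hypothetical sketch (not the actual IQMS register) of how the 303
# indicators could be organized: each indicator belongs to one of the
# five domains and one of the 29 dimensions and carries its
# structure/process/outcome type plus the panel's validity and
# feasibility ratings. Field names and the example entry are invented.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Indicator:
    domain: str         # one of the five domains, e.g. "Clinical Care"
    dimension: str      # one of the 29 dimensions, e.g. "Antenatal care"
    text: str           # indicator wording
    kind: str           # "structure", "process" or "outcome"
    validity: float     # expert panel rating (scale assumed, e.g. 1-9)
    feasibility: float  # expert panel rating

register = [
    Indicator(
        domain="Clinical Care",
        dimension="Antenatal care",
        text="Proportion of clients with at least four documented ANC visits",
        kind="process",
        validity=8.0,
        feasibility=7.5,
    ),
    # ... further indicators, up to 303
]

# Cluster indicators by (domain, dimension) for scoring and reporting
by_dimension = defaultdict(list)
for indicator in register:
    by_dimension[(indicator.domain, indicator.dimension)].append(indicator)
```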

Fig. 1 Map of Kenya with distribution of facilities

Fig. 2 Diagram outlining the IQMS process

Design and sampling

A longitudinal study design was used. Ten facilities were selected from 36 applications, representing a variety of Kenyan facilities at all levels of care and from both rural and urban settings (n = 10; six hospitals, including four district hospitals and two county hospitals: Kisumu County Hospital, St. Monica’s Hospital, Bondo Sub County Hospital, Bomachage Chache Hospital, Manyala Sub County Hospital and Vihiga County Hospital; four health centres: Lynaginga Model Health Centre, Kenyerere Model Dispensary, Nyan’goma Dispensary and Shikunga Health Centre). The selection was purposive but based on criteria such as a specified minimum level of infrastructure and medical equipment for reproductive health-care provision, service provision for survivors of gender-based violence, previous experience in the field of QM, and motivation to invest in quality assurance in the long run. These selection criteria were intended to ensure comparability between health facilities and to reduce structural variables that might affect the generalizability of the findings. The selected facilities were first visited in 2013 (T1) and re-visited between September 2015 and February 2016 (T2). Based on the gaps identified at T1, target-oriented improvement plans were developed with the facilities, and between the two measurements several interventions were carried out at the facilities under continuous supervision.

The domain People, with 85 indicators, was excluded from this analysis because it contains personal data; it will be published separately. All calculated values are based on the percentage achievements of the remaining 218 indicators, 24 dimensions and four domains (Clinical Care, Management, Interface In/out-patients, and Quality and Safety) at T1 and T2, on a scale of 0 to 100 for each facility. All indicator, dimension and domain values at T1 and T2 for each facility were exported from the VISOTOOL® software. Data were summarized using means and standard deviations (SDs); categorical data were presented as frequency counts and percentages. Since paired data were not available for every indicator at every facility, the percentage changes were calculated only for those indicators with values available at both T1 and T2 at each facility. A z-test was first considered, but owing to the low variance of the data a t-test was deemed more appropriate. p-values were calculated in Microsoft Excel 2010 by applying a one-sample t-test to the mean change (difference: T2 minus T1 value) and its standard deviation. Since both positive and negative changes are possible, a two-tailed test was chosen, with the expected value defined as a change of 0 under the null hypothesis that the IQMS quality improvement has no impact on the T2 value of each indicator with both values. A significance level of α = 0.05 (95% confidence level) was chosen, and a change (an improvement if positive, a deterioration if negative) is significant if p < α, leading to rejection of the null hypothesis.
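For illustration, the same test can be expressed in a few lines of code. The following is a minimal sketch using SciPy rather than the Excel workbook actually used, and the example scores are invented.

```python
# Illustrative sketch (not the original Excel workbook): a one-sample,
# two-tailed t-test on the per-indicator changes (T2 - T1), as described
# above. Indicator scores are assumed to be percentages on a 0-100 scale.
import numpy as np
from scipy import stats

ALPHA = 0.05  # significance level for the 95% confidence level

def test_mean_change(t1_scores, t2_scores):
    """Test H0: the mean change (T2 - T1) equals 0, two-tailed."""
    t1 = np.asarray(t1_scores, dtype=float)
    t2 = np.asarray(t2_scores, dtype=float)
    # Keep only indicators with values at both assessments
    mask = ~np.isnan(t1) & ~np.isnan(t2)
    diff = t2[mask] - t1[mask]
    t_stat, p_value = stats.ttest_1samp(diff, popmean=0.0)
    return diff.mean(), diff.std(ddof=1), p_value, p_value < ALPHA

# Hypothetical example: eight indicators with scores at T1 and T2
mean_change, sd, p, significant = test_mean_change(
    [55, 60, 48, 70, 62, 51, 66, 58],
    [68, 71, 60, 75, 70, 64, 77, 63],
)
print(f"mean change = {mean_change:.2f}, SD = {sd:.2f}, p = {p:.4f}")
```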

Assessment process

The data collection tools were field-tested in two facilities. Ethical clearance was obtained from the Institutional Research Ethics Committee (IREC) at Moi University, Kenya. The confidentiality of the analysis process and the fact that all responses would be depersonalized were emphasized, and all participants provided informed consent.

The project was executed in two phases between 2013 and 2016: a baseline assessment was carried out during the first phase (T1), followed by a reassessment (T2) after 1.5 years. All 10 facilities enrolled completed both the first (baseline) and the second assessment.

Each assessment was implemented in two rounds using the tools described above: patient and staff surveys, a self-assessment, a facilitator assessment and a manager interview guide. Experienced research assistants administered the patient survey orally in English and local languages. At least 100 responses per facility were sought from patients attending antenatal care, postnatal, family planning and maternity services. These surveys were complemented by the information received from the facility managers via their self-assessment. The data from these surveys were entered remotely into the VISOTOOL® software by a research assistant for analysis. A trained facilitator oversaw the facility assessment process and the training of national “quality facilitators”. The self-assessment and the patient and staff surveys were followed by a visit from a trained facilitator, who followed a checklist and conducted a management interview; data were immediately entered into VISOTOOL® and analysed on site.

Using VISOTOOL®, each assessment was followed by immediate and comprehensive feedback to the health facility staff. This enabled facilities to identify and focus on priority areas. Concrete and highly tailored action plans were elaborated, preferably making use of locally available resources and of existing quality improvement approaches such as KAIZEN-5S, coaching and quality circles (Tables 1 and 2).

Table 1 Indicators from the Domain Clinical Care, Dimension Delivery & Newborn Care with source [5]
Table 2 Measurement of indicators of Table 1 across the different assessment tools [5]

Between T1 and T2, the facilities used the analysis and feedback of the T1 assessment results to decide which interventions should be given priority and implemented. Each of the 10 facilities then conducted between one and five improvement interventions based on the gaps identified, accompanied by facility-driven tutoring and coaching, targeting five main topics: neonatal mortality, the completeness of partographs, waiting times, infection prevention and control (IPC), and shortages of staffing and transportation in remote areas.

Facilities were grouped according to whether or not a concrete improvement intervention was conducted. Only those improvement intervention topics with group sizes of at least two participating and two non-participating facilities were considered eligible for a comparison of their T1 and T2 results, with respect to those IQMS indicators matching the stated inducements and intervention contents, as shown in the sketch below (Table 3).
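The eligibility rule can be illustrated with a short sketch. The participation matrix below is hypothetical: the facility names are invented and the topic columns show only two of the five intervention topics for brevity.

```python
# Hypothetical sketch of the eligibility rule described above: an
# intervention topic is only compared if at least two facilities
# conducted the intervention and at least two did not.
import pandas as pd

participation = pd.DataFrame({
    "facility": [f"facility_{i}" for i in range(1, 11)],
    "neonatal_mortality": [1, 1, 0, 0, 1, 0, 0, 0, 0, 0],
    "waiting_times":      [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],
})

MIN_GROUP_SIZE = 2

for topic in ["neonatal_mortality", "waiting_times"]:
    n_in = int(participation[topic].sum())          # intervention group
    n_out = len(participation) - n_in               # non-intervention group
    eligible = n_in >= MIN_GROUP_SIZE and n_out >= MIN_GROUP_SIZE
    print(f"{topic}: intervention={n_in}, "
          f"non-intervention={n_out}, eligible={eligible}")
```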

Table 3 Summarized inducements, number of relevant and final IQMS indicators and intervention contents

Results

The characteristics of the study population are listed in Table 4.

Table 4 Characteristics of the study population

Excluding the fifth domain ‘People’, changes in the scores of the four domains and all 24 dimensions for the ten facilities at the two assessments are shown in Table 5.

Table 5 Total number of indicators, T1 (first assessment) and T2 (re-assessment) mean scores, percentage change, standard deviation and p values

Significant improvements were found in all four domains, with higher scores measured in the domains ‘Clinical Care’ (10.08%; p = 0.0108), ‘Management’ (13.10%; p < 0.0001), ‘Interface In/out-patients’ (13.87%; p = 0.0246) and ‘Quality and Safety’ (20.02%; p < 0.0001), as well as in total (14.64%; p < 0.0001).

In the domain ‘Clinical Care’, significant improvements were observed in the dimensions ‘Antenatal care’ (26.84%; p = 0.0059) and ‘Survivors of gender-based violence’ (11.20%; p = 0.0092). The least marked changes, or even a non-significant decline, were found in the dimensions ‘Delivery’ and ‘Postnatal care’.

For the domain ‘Management’, significant improvements were observed in the dimensions ‘Supplies’ (26.17%; p = 0.0145), ‘Drugs’ (12.78%; p = 0.0051), ‘Data’ (14.94%; p = 0.0403), ‘Amenities’ (12.79%; p = 0.0003) and ‘Waiting times’ (15.78%; p = 0.0369).

For the domain ‘Interface In/out-patients’, significant improvements were observed in the dimension ‘Referral’ (17.22%; p = 0.0133). The least marked changes, or even a non-significant decline, were found in the dimension ‘General’.

In the domain ‘Quality and Safety’, significant improvements were observed in the dimensions ‘Guidelines etc.’ (34.18%; p = 0.0336), ‘Infection control’ (23.61%; p < 0.0001) and ‘Laboratory’ (17.30%; p < 0.0001).

The following box plot (Fig. 3) shows the variation of all indicator changes within each domain. All mean values lie within the interquartile range and are therefore representative of the overall distribution of values.
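For readers wishing to reproduce such a figure, the following is a minimal sketch of a comparable box plot. The per-indicator change values are simulated around the reported domain means; they are not the study data.

```python
# Illustrative sketch of how a box plot like Fig. 3 could be produced;
# the per-indicator change values below are simulated, not study data.
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(42)
domains = ["Clinical Care", "Management",
           "Interface In/out-patients", "Quality and Safety"]
# Simulated per-indicator changes (T2 - T1) per domain, centred on the
# reported mean domain improvements
changes = [rng.normal(loc=m, scale=10.0, size=40)
           for m in (10.08, 13.10, 13.87, 20.02)]

fig, ax = plt.subplots()
ax.boxplot(changes)
ax.set_xticklabels(domains, rotation=15)
# Connect the per-domain mean values with a line, as in the figure
ax.plot(range(1, len(domains) + 1), [c.mean() for c in changes], marker="o")
ax.set_ylabel("Indicator change (percentage points)")
plt.show()
```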

Fig. 3 Box plot showing the variation of the average indicator changes of each domain. The line connects mean values

As health centres and hospitals are discussed together, the average improvements of all 4 health centres were compared with those of all 6 hospitals to verify that structural characteristics were not decisive for the achievement of a significant improvement (Table 6).

Table 6 A comparison of the percentage changes with p-values for each domain and in total for the mean of health centres and hospitals

The T1 values and the improvement values (change, T2 minus T1) of the intervention and the non-intervention groups are compared below (Fig. 4).

Fig. 4 Percentage T1 and improvement (change, T2 minus T1) values compared for facilities with (intervention group) and without (non-intervention group) the concrete improvement interventions

Looking at the results of the interventions, the analysis showed, for example, that the improvement interventions conducted to reduce neonatal mortality achieved a higher improvement (change) (42.33%) than the non-intervention group, in which the improvement of the comparable indicators was also significant (15.57%). Facilities that implemented concrete activities to improve their IPC achieved significantly higher improvements (28%) than those that did not, although marked improvement (18%) was also observed in the facilities without concrete IPC interventions.

Discussion

Improving quality of care depends on various factors and a combination of methods, such as evidence-based measurement from different perspectives, extensive feedback to staff and prioritized improvement activities. Intrinsic staff motivation could be assumed, given the selection process for participation. In the medium term, there may also be external incentives to embark on a systemic quality improvement process, such as accreditation [20].

Our assumption is that a precise, detailed and participative measurement and gap analysis, as a tool for good decision-making, is a basic requirement for setting the improvement process in motion and leads to effective, targeted improvement interventions accompanied by facility-driven coaching and tutoring. Analysis of the effectiveness of EPA, the precursor of the Kenyan IQMS, in Germany and Switzerland has shown significant improvements in three of four analysed domains and demonstrated that EPA is an effective and efficient quality management programme [6, 20].

The significantly higher achievements of the improvement interventions relative to the comparison group demonstrate the effectiveness of targeted interventions performed under facility-driven coaching and tutoring. The integral IQMS quality improvement approach demonstrated that facilities combining measurement, including gap analysis, with decision-making and the conduct of supervised, targeted interventions achieved greater improvements than facilities with the same starting conditions that underwent measurement alone. We can thus assume that the actual improvement can be attributed to the systemic nature of the approach.

Lower T1 values at the participating facilities, for example in the improvement interventions on waiting times and on shortages of staffing and transport in remote areas, underline the validity of the integral IQMS approach for revealing deficiencies in these areas. Despite these initial deficiencies relative to the non-intervention group, all improvement interventions achieved significant changes and even higher T2 values than the non-intervention group, which shows that a prior gap analysis and the prioritization of concrete interventions enable facilities not just to catch up but to overtake [6].

Furthermore, the methodological approach chosen serves different purposes of quality assessment: internal improvement, external accountability and scientific evidence. It is therefore paramount to measure the structure, process and outcome of healthcare.

Without examining all details of the actual improvement process between T1 and T2, we assume that the precise measurement of quality problems helps sensitize health staff to recognize and accept quality problems, a view also endorsed by other authors [6, 19, 22]. As a very precise measurement method, it proved to be an effective way to improve quality without significant additional resources. Even the best T1 values, in the domain ‘Management’ (68.4%), followed by the domains ‘Clinical Care’ (62.66%), ‘Interface In/out-patients’ (58.64%) and finally ‘Quality & Safety’ (51.13%), still show potential for improvement and therefore demonstrate the necessity of continuous quality improvement, one of the principles of KQMH. On the other hand, the strong prioritization of certain interventions over others could also explain why some dimensions and their respective indicators reached lower levels of improvement than others.

The approach has the power to integrate different, pre-existing and possibly competing quality improvement initiatives and to reduce the risk of indicators being reinvented. With the exception of the 44 international indicators that were retained through the review and rating process, 234 of the 303 indicators used had previously existed in the Kenyan health system. In addition to exploring clinical areas, the approach offers the possibility to illuminate health system bottlenecks such as drug distribution and facility accounting issues. The specially developed software VISOTOOL® generates real-time results for immediate feedback to the facility team as an integral part of the facility visit. Precise measurement as well as the detailed display of results empower health facility teams to better analyse underlying problems, set their quality objectives and ensure the optimal use of existing resources according to the Pareto principle. Facilities can also track their progress with the software by comparing results after each assessment. Furthermore, the software allows benchmarking: health facilities can compare their results against the average results of other facilities taking part in the assessments.

Our experience showed that this indicator-based approach can be adapted to and used in different contexts and health systems.

Nevertheless, the study has limitations. T2 results might not be a pure reflection of the IQMS quality improvement process but may also have been influenced by structural differences between the facilities regarding staff qualification, availability, resources and attitude. Regarding the driving factors for improvement, an attribution gap may exist despite the structural similarities among the selected health facilities, and confounders, e.g. interference from other health system strengthening activities, could not be excluded. Moreover, it cannot be clearly determined which factors and improvement activities produced the better results; this is the subject of a separate analysis currently in progress.

Conclusion

There is a need for validated methods to measure quality of care in LMICs. In accordance with the existing literature, our results demonstrate that implementing a quality management system based on systematic performance monitoring of health facilities, including a continuous improvement process, not only breathes life into the process of collecting data for indicators but also creates motivation for change and ownership among users and providers of health services, and can serve as a powerful tool to improve health outcomes in LMICs.

As such, it offers a reflection on the relevance of evidence-based quality improvement for health system strengthening and has the potential to lay a solid foundation for further certification and accreditation.