Background and Setting

Voluntary medical male circumcision (VMMC), also known in Uganda as safe male circumcision (SMC), was approved as an HIV prevention intervention by the World Health Organization (WHO) and UNAIDS in 2007 and was added by the Ministry of Health (MOH) of Uganda to its comprehensive HIV prevention strategy in 2010. VMMC reduces the risk of HIV transmission from an HIV-positive female to an HIV-negative male by approximately 60% (WHO 2007). VMMC involves removal of the foreskin, together with health education of clients, especially on the need for continued use of other HIV prevention methods, such as condoms, because the procedure is not 100% protective.

VMMC has the potential to significantly reduce HIV transmission in Uganda. If implemented to reach 80% coverage of males aged 15–49 years (4.2 million circumcisions by the end of 2015), it would prevent 300,000 new HIV infections between 2011 and 2025 (Njeuhmeli et al. 2011). But to reach such an outcome, services need to be of high quality, meeting or exceeding minimum quality standards.

Before 2013, VMMC programming in Uganda was focused on meeting the 80% coverage target; quality issues were not given much consideration. In 2012, an external quality assessment conducted by the United States’ President’s Emergency Plan for AIDS Relief (PEPFAR) brought to the country’s attention quality gaps in VMMC service delivery, some of which needed immediate remediation. Among them were: use of general anesthesia as opposed to the WHO recommendation of using only local anesthesia, lack of standardized client data collection and reporting tools, and poor preparedness for management of adverse events, should they occur. Health facilities found to have severe deficits in meeting the expected standards were advised to suspend VMMC service delivery until the gaps needing immediate remedies were fixed.

USAID, one of the major funders of VMMC service providers operating in Uganda, requested that a USAID improvement technical assistance project already operating in Uganda provide technical support to the MOH and to the 10 implementing partners (and the health units those partners supported with supplies and training) to address the identified gaps. USAID agreed to support a pilot collaborative improvement activity aimed at addressing the main VMMC service quality gaps, initially in a selection of 30 sites and later scaled up to over 150 health units funded by USAID to deliver VMMC services in Uganda.

This case study describes the journey of the 30 pilot health units to address gaps in VMMC service delivery and the kind of support that they received. The 30 health units consisted of 29 static health units and one mobile van, spread across all regions of the country. They represented high-volume sites chosen by USAID’s 10 VMMC implementing partners—three sites chosen by each implementing partner. The static health units included three regional referral hospitals, 15 general hospitals (100–200-bed-capacity hospitals), eight health center IVs (50-bed-capacity health units with a surgical theater mainly for obstetric emergencies), two health center IIIs (10-bed capacity, with maternity services but no operating theater), and two health center IIs (day care health units). Of the 30 health units, 18 were public, six military, and six private not-for-profit.

Developing and Applying a Locally Adapted VMMC Quality Assessment Tool

After the external quality assessment, the implementing partners and health units complained that the tools used in the external assessment had been designed for high-income countries and were not appropriate for a low-income country like Uganda; their sites had therefore been assessed against standards that did not apply to them. As a result, they did not own the findings.

To address the complaint related to the assessment tools, the MOH called a meeting of VMMC stakeholders that included the implementing partners, the technical support project, and PEPFAR in-country team representatives in January 2013 to chart a way forward. The meeting reviewed the tools used and agreed to revise them and align them to existing MOH policies and guidelines.

A smaller team led by the MOH and including the technical support project and implementing partners was tasked with developing a VMMC quality assessment tool aligned to the Ugandan setting. The process of aligning the tool involved reviewing the external assessment tool section by section and identifying areas already addressed by other existing guidelines, so as to harmonize with them and avoid creating new guidance while maintaining the expected standards of care. The revised tool was piloted at five sites to assess its usability and its ability to collect the intended information. In January and February 2013, several meetings were convened by the MOH, involving VMMC stakeholders such as service providers, implementing partners, and MOH departments, to gain feedback on the revised tool. Comments gained through these meetings were incorporated in the drafts by the team that reviewed the tool as the drafts went through the three levels of approval at the MOH.

Because the process of addressing quality gaps could not be delayed, the MOH agreed, after getting and incorporating feedback from the various stakeholder meetings, to go ahead and use the draft tool to conduct baseline quality assessments while the final tool went through the full MOH approval process for adoption as the national MOH VMMC tool. The MOH-led National VMMC Task Force eventually approved the tool with minor amendments in May 2013.

Each of the 10 implementing partners was requested to identify three high-volume sites to be part of the improvement collaborative. The plan was that the technical assistance project would work jointly with the partners and staff at these health units and that each implementing partner would spread the learning to the rest of its VMMC sites. A schedule for conducting baseline assessments at partner sites, to establish the baseline level of performance using the new tool developed in Uganda, was agreed upon with each implementing partner.

In each site, the baseline assessment was conducted by a team comprising technical assistance project staff, implementing partner staff, a representative from the district health office, and site team members. Sometimes MOH staff were able to join the assessment teams. The implementing partners were given the responsibility of making appointments with the health facilities for onsite assessments. The 30 baseline assessments were conducted from March through May 2013 (Byabagambi et al. 2015).

On the day of the assessment, the assessment team started by meeting with the health facility in-charge and members of the VMMC team. Members of the assessment team introduced themselves, as did the health unit staff present. Next, the assessment team leader explained the purpose of the visit and the format it would take. The team leader also usually gave a brief overview of the seven components of the assessment tool and explained the purpose of each section. The seven components were: (1) management systems; (2) supplies, equipment, and environment; (3) registration of clients and group education; (4) individual counseling sessions for HIV testing and for medical male circumcision; (5) VMMC surgical procedure; (6) monitoring and evaluation; and (7) infection prevention and control (including waste management). At this meeting, the team leader also mentioned that preliminary feedback would be provided to the facility and partners at the end of the assessment exercise. The leader of the assessment team made a point of emphasizing that the assessment was not a fault-finding mission but rather one conducted in good faith, aimed at identifying any gaps so that they could be jointly addressed. This information was very important because it reduced resistance, allayed any fears the health facility team had, and thus improved their cooperation.

The facility team leader then described to the assessors the process of VMMC service delivery at that site so that they could have a clear understanding of how the service was organized and thus better plan the assessment. The introductory meeting lasted only about 30 minutes to allow ample time for conducting the assessment, which itself could take 3–4 hours. Enough copies of the assessment tool were prepared so that the technical assistance project, the health unit, the district health office, the implementing partner, and the MOH official each had their own copy, completed with the same information; this minimized the need for photocopying and made follow-up of action items easy.

During the assessment, the facility in-charge would assign a health unit staff member to guide the assessment team; this individual also actively participated in the assessment and was considered one of the assessors. In most cases, this was the technical person working in that section. Once a particular section was completed, the assessment team would give immediate feedback to the person(s) responsible and leave them to continue with their work. This was done in a way that maintained respect and privacy for both client and service provider. For example, if the service provider did not segregate waste at the time of generation (e.g., mixing glove wrappings with blood-soiled gauze), this feedback would be given immediately so that the mistake was not repeated during the next surgical procedure. The assessment team would then move on to assess another area. Once all sections were completed, a debriefing meeting, chaired by the site in-charge and led by the leader of the assessors, would be convened, and detailed feedback given to all the staff members able to attend.

For most of the baseline assessments, the assessment team assessed all sections of the clinic as one team. This was done because few people were familiar with the use of the tool and also to avoid disagreements at the end of the assessment. The entire team would agree on whether the provider met or did not meet a specific standard. This approach promoted ownership of the findings by the health facility team, the implementing partner, and the district health office. It helped to resolve one of the complaints raised about the external quality assessment: that some items had been marked as missing when they were in fact present but had not been seen by the assessors.

At the end of each site assessment, the assessment team, together with the site staff, developed an action plan. This process lasted about 30 minutes to 1 hour and was done as part of the debriefing meeting. The action plan detailed the gaps identified, the intervention selected to fill each gap, the person responsible, and the timeline for implementing the intervention. Some gaps could be addressed by the site staff themselves, such as ensuring that team members were aware of various reference guidelines and policies. The representative from the district health office was very instrumental in addressing gaps related to missing supplies, such as drugs supplied through the national medical stores, because the district had the power to redistribute drugs within the district from a health unit that had excess to one that did not have enough. The district also supported teams to make timely requisitions for supplies. The implementing partner helped address gaps that required procurement of missing equipment for emergency preparedness and of other VMMC consumables such as surgical kits. The technical assistance project built the skills of the health facility staff in addressing gaps through training in the continuous quality improvement approach, onsite coaching and mentorship, and organizing learning sessions. The MOH supported sites to acquire items that are centrally procured, such as information and communication materials. It was therefore quite beneficial to have a team of assessors who could all contribute to addressing the gaps.

Organizing the Improvement Effort

Formation of Quality Improvement Teams

At the end of the assessment and as part of the development of the action plan, the team members were also guided through the process of forming a quality improvement (QI) team that would be at the forefront of addressing the gaps. At the debrief, the technical assistance project explained who should be on the team. The guidance for team formation mainly concerned composition: sites were advised to ensure that all staff involved in VMMC were represented on the team, including counsellors, data clerks, circumcisers, assistant circumcisers, community mobilizers, and cleaners.

In some instances, the site decided on the spot who would make up the QI team; others said they would meet in the next few days to decide. The team members were selected through consensus by the health unit staff. From amongst the team members, a team leader and a secretary were selected. It was emphasized that the team leader did not necessarily have to be the health unit manager. Sites were advised that they could change the composition of the team if there was need.

During the assessments, it was noted that the majority of service providers were not familiar with the continuous quality improvement approach. Therefore, after completing baseline assessments at all 30 sites, the technical assistance project and the MOH conducted training in the use of the continuous quality improvement approach. Four members of each site team, including a circumciser, an assistant circumciser, a counsellor, and any other member of the team, were invited to attend the training, in which they were taught how to make changes to service delivery processes using the continuous quality improvement approach to address the gaps. Owing to the large number of participants, they were split into three groups based on geographical location, and three regional trainings were conducted, each lasting 3 days. The trainers were drawn from the MOH and the technical assistance project. The training covered the principles of QI and basic QI concepts such as identifying gaps, testing changes, and measuring improvement.

Choosing Improvement Priorities

On the third day of the QI training, the participants reviewed the findings of their baseline assessments and the action plans they had started at the baseline debrief, to determine which gaps were still outstanding since the completion of the assessment and to identify priority areas of focus for improvement. The priority areas were selected based on what was in the direct control of the site team, implementing partner, or district team. For instance, gaps such as lack of a functional QI team, no waste segregation, and incomplete records were prioritized over gaps that needed external support, such as missing policy documents or equipment and the need to train staff on how to conduct circumcision. Each team was free to take on as much as it could, as long as it had the means to do so.

Development of Aims and Measures of Improvement

Participants selected improvement objectives both for the quality standards, which mainly address the work environment to ensure it is safe, and for the processes of care, which focus on individual clients. After agreeing on the improvement objectives, they developed indicators to monitor performance against those objectives.

As part of the QI training, the participants conducted a process mapping exercise in which they drew flow charts depicting the flow of events for a client who comes to the site for VMMC. Using the flow chart diagrams, they identified areas or activities in the VMMC care process where quality was likely to be compromised. These included points where uniform information was unlikely to be given out: for instance, some clients might not receive all the information given out during the group education session, some might miss the session altogether, and some might reach the theater without providing consent. In most cases these were points where an action needed to be taken by a service provider as a client moved from one point to another.

As part of the QI training, indicators to measure various process activities were developed, drawing on the technical assistance project’s understanding of the main problem areas affecting VMMC services. These indicators included:

  • HIV counseling and testing: proportion of VMMC clients who are counseled and tested and receive HIV test results.

  • History taking and physical exam: proportion of VMMC clients who are screened for sexually transmitted infections prior to circumcision. Screening for sexually transmitted infections was used as a proxy for history taking and physical exam because it was a major gap at the majority of the sites.

  • Provision of informed consent: proportion of circumcised clients with documented informed consent prior to circumcision.

  • Surgical procedure: proportion of circumcised clients who experience moderate or severe adverse events.

  • Postoperative care: proportion of circumcised clients who return for follow-up within 48 hours of circumcision.

  • Postoperative care: proportion of circumcised clients who return for follow-up beyond 48 hours but within 7 days of circumcision.
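Each of these indicators is a simple proportion computed monthly from client records. As an illustration only (the field names below are hypothetical and not from the program's actual forms or registers), the calculation for the informed consent indicator can be sketched as:

```python
# Illustrative sketch: computing one monthly process indicator -- the
# proportion of circumcised clients with documented informed consent.
# Field names ("circumcised", "consent_documented") are hypothetical.

def consent_indicator(client_records):
    """Return the percentage of circumcised clients with documented consent."""
    circumcised = [r for r in client_records if r.get("circumcised")]
    if not circumcised:
        return 0.0
    with_consent = sum(1 for r in circumcised if r.get("consent_documented"))
    return round(100.0 * with_consent / len(circumcised), 1)

records = [
    {"circumcised": True, "consent_documented": True},
    {"circumcised": True, "consent_documented": False},
    {"circumcised": True, "consent_documented": True},
    {"circumcised": False, "consent_documented": False},  # screened out, not circumcised
]
print(consent_indicator(records))  # 66.7
```

The other indicators follow the same numerator/denominator pattern, with the adverse-events indicator being the one where a lower value is better.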

The pilot sites participating in the improvement collaborative had two main ways of tracking improvement. First, improvement in compliance with quality standards (based on the assessment tool developed at the start of the activity) was measured using a color-coded dashboard that characterized performance as green (good), yellow (fair), or red (poor). After every assessment, scores were awarded to each standard, and a percentage computed based on how many standards the team got right out of the total number assessed. Early on, the coaches noted that raw scores were difficult to understand, which gave rise to the idea of developing a dashboard. A score of less than 50% was coded red, signifying poor performance; 50% to below 80% was coded yellow, signifying fair performance; and 80% or more was coded green, signifying good performance. A dashboard template with preprinted colors was used to display the findings at site level: if, for instance, a site scored 85% on management systems, a mark (X) would be made in the green section of the template. This template was used because color printing services were not available at the sites.
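The dashboard rule described above is a straightforward threshold classification. A minimal sketch (purely illustrative; the original dashboard was a paper template, and the handling of a score of exactly 50% is an assumption here):

```python
# Sketch of the color-coded dashboard rule: <50% -> red (poor),
# 50% to below 80% -> yellow (fair), 80% or more -> green (good).
# NOTE: treating exactly 50% as yellow is an assumption.

def dashboard_color(score_percent):
    if score_percent >= 80:
        return "green"   # good performance
    if score_percent >= 50:
        return "yellow"  # fair performance
    return "red"         # poor performance

print(dashboard_color(85))  # green  (e.g., 85% on management systems)
print(dashboard_color(62))  # yellow
print(dashboard_color(40))  # red
```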

The second way of tracking improvement was through measuring care processes within VMMC service delivery, using the indicators developed at the first quality improvement training. Prior to starting the collaborative, the teams monitored no indicators apart from the number of clients circumcised. Initial support focused on guiding the teams to understand how to collect the data from individual client forms and transfer it into registers. Most sites were not used to this process and often compiled reports by collecting data from individual client forms rather than from the register. Because many sites were not used to using registers, the technical assistance project developed a standard register. Service providers at each site were mentored on the process of getting the data from client forms on the days they attended to clients and transferring it to the registers. At the end of each month, a summary of the data for each indicator was made in the register.

Various records were kept to document the quality improvement work. At each site, teams maintained documentation journals which contained all the information about their improvement work. In these journals, teams recorded the changes tested to improve processes of care and the results achieved. During QI team meetings, the data for each indicator was abstracted from the summary in the register and transferred to the documentation journals in order to track improvement using line graphs. The abstraction of data was done by the team members themselves. Each of the six indicators had a separate journal. Since there were many indicators, at some sites a specific team member was assigned the task of collecting data for a particular indicator and sharing the information with the rest of the team during QI team meetings; at other sites this was done jointly during the QI meetings. External coaches would verify the data collected by the teams during coaching visits to ensure accuracy and completeness. For some indicators, such as client follow-up, this was done by taking a sample of records if they were many, or by looking at all the records if they were few. The coach would crosscheck the figures against the primary source documents (individual client forms). For indicators such as obtaining informed consent, this was done for all the individual records, both because it was easy to conduct and because this indicator was of utmost importance.

Carrying Out the Improvement Effort

Development, Implementation, and Testing of Change Ideas

At one of the general hospitals, the team identified a gap of poor client follow-up within 48 hours after circumcision through analysis of their baseline data. The team reviewed data on the number of clients circumcised and compared it with how many of those returned for review within 48 hours, as per the MOH guidelines. The team noted that only 10/1201 (0.8%) of clients circumcised in March 2013 and 8/1426 (0.6%) in April 2013 came back for review within 48 hours. The team was alarmed by these statistics.

The team discussed the reasons for this gap and noted several causes: service providers were not actually passing on information to the clients that they needed to return for review; some clients lived far away from the clinic and did not have money for transport to return 2 days later; and other clients could not afford to be away from work for another day within the same week. The team discussed these underlying causes to determine which problem affected the majority of the clients. After a thorough discussion aimed at understanding the biggest cause of poor client follow-up, the team agreed that the main problem was inadequate information given to clients, which left the majority of them with the option of not returning. The information given was not convincing enough to persuade clients who lived far away, or who could not afford to be away from work for 2 days in the same week, to return to the clinic. This discussion was facilitated by a coach, whose main role was to keep the discussion focused and logical; the coach did not try to determine the underlying cause for the team.

The team agreed to change the information given to clients and planned how this change would be implemented. This included specifically telling clients that they needed to return, as opposed to telling them to return only if they had a problem. The information was to be given to all circumcised clients at all service delivery points: group education, individual counseling, the surgical procedure, and postoperative care. The team tried out this change in May 2013, and when they analyzed their performance at the end of the month, 619/1953 (32%) of clients had returned for review. The team was impressed by the result of their efforts but was still not satisfied with 32%.

During the next coaching visit, the coach reviewed what the team had done and was also pleased with their efforts. In the same meeting, the team discussed ways of improving further. The staff member (part of the improvement team) who attended to clients returning for follow-up reported that the majority of returning clients said they came back because they wanted to have their bandage removed. The team agreed to add this to the information given to clients and changed the message from “come back if you have a problem” to “you need to come back for bandage removal and review.” This led to further improvement, and by July 2013, 69% of clients were returning for review.

Over the next 3 months, improvement leveled off. The team was invited to a learning session in August 2013, where they learned more about the importance of improving documentation. They also noted that at their own site, some clients were returning for review, but this information was not being recorded. The team decided to ensure that all clients who returned were actually reviewed and that their records were updated to reflect that they had indeed returned. A focal point for returning clients was designated, and the forms of recently circumcised clients were placed at this point in a folder labeled “48-hour follow-up.” When the team analyzed the data for November, the 20-percentage-point increase in performance showed that poor documentation had been causing them to underreport client follow-up. The team continued to monitor progress on this indicator and maintained the good performance, as shown in Fig. 11.1. Examples of changes tested by teams to improve this indicator are shown in Table 11.1.

Fig. 11.1

Percentage of clients who return for follow-up within 48 hours at one general hospital

Table 11.1 Illustrative changes tested by pilot sites to improve client follow-up within 48 hours

Support for Improvement

After the QI training, every health unit was supported to improve the composition of the QI team formed at the end of the baseline assessment by building the team around the newly trained staff. The support involved working with service providers to identify who else should be on the improvement team so that the team was fully functional and all sections of VMMC were well covered.

Since the main aim of the collaborative was improvement, QI team members were trained to use the assessment tools to conduct self-assessments on their own, in the absence of external coaches. Teams were encouraged to identify gaps on their own and then make changes. During subsequent coaching visits, the external coaches would review the self-assessment findings and the changes being tested, to establish whether they were in line with the gaps identified, and make amendments where necessary. On a quarterly basis, the coaches conducted an objective assessment at each site.

The teams were visited once every month by a team of coaches from the technical assistance project, the district health office, and their assigned implementing partner, who helped them refine the changes they were testing. During the coaching visits, the teams were supported to implement the interventions they had selected. The support included reviewing the changes being tested to ensure that they were aimed at addressing the identified gaps and that the teams were able to accurately measure the impact of the changes tested. For example, at one site the team had a problem of poor documentation of clients who returned for review, yet it was planning to use members of the village health teams to remind clients to come for follow-up. Such a change would not solve the problem: in reality, clients would return and be attended to, but no records were made to show that they had come back. In such a situation, the coach would systematically analyze the problem with the team to ensure that the change being tested could logically address the gap. Teams were also supported to plot their data and make decisions by analyzing the trends in the graphs. The analysis was kept simple: one merely looked at the trend of the graph to determine whether there was improvement, depending on the type of indicator. For the majority of indicators, such as client follow-up and obtaining consent, improvement was shown by an ascending trend toward 100%; for adverse events, improvement was shown by a descending trend toward 0%. This support involved providing teams with documentation journals and mentoring them in their use. In addition to the documentation journals, some sites opted to plot their performance on flip charts hung on the wall so that all team members could easily visualize the team’s performance.
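The trend rule the coaches taught can be stated compactly. A minimal sketch (illustrative only; the teams worked from hand-drawn graphs, and the parameter name `higher_is_better` is introduced here, not a term from the program):

```python
# Hedged sketch of the simple trend check used during coaching:
# for most indicators improvement means an ascending trend toward 100%;
# for adverse events it means a descending trend toward 0%.

def is_improving(monthly_values, higher_is_better=True):
    """Crude trend check: compare the most recent value with the first."""
    if len(monthly_values) < 2:
        return False  # not enough data points to see a trend
    if higher_is_better:
        return monthly_values[-1] > monthly_values[0]
    return monthly_values[-1] < monthly_values[0]

# Follow-up rates rising (0.8% -> 32% -> 69%): improving.
print(is_improving([0.8, 32, 69]))                      # True
# Adverse event rates falling: improving for a lower-is-better indicator.
print(is_improving([5, 3, 1], higher_is_better=False))  # True
```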

A coaching guide was developed to aid the onsite coaching visits, based on experience gained in developing such guides for other technical areas. The guide went through various iterations; the current version captures information about the improvement team’s composition, any changes in team membership, and what actions the team took between coaching visits (e.g., whether the team met in the absence of external coaches and, if so, whether any documentation was made). The coaching guide also gathered information on the changes the team planned to test and their impact on processes of care, whether the implementing partner and district health official were present during the coaching visit, and whether the district health office was given a report afterward. It also summarized the key findings from the coaching visit and the action plan for addressing the identified gaps.

Three months after completion of the baseline assessment, four members of each improvement team were invited to attend the first learning session. Because the 30 teams were too numerous to convene in one learning session, they were grouped into three based on geographical proximity. Each learning session lasted 3 days, during which teams shared their performance.

On the first day of the learning session, the teams shared their performance on quality standards. This was done by summarizing their performance on flip charts that stated the gap, the changes tested, and the results achieved. The site’s dashboard was also presented, comparing baseline and current performance to depict the improvement that had been achieved. Because many sites did not have access to color printers, the technical assistance project coaches gathered the dashboard information from the sites prior to the learning session and printed it.

On the second day of the learning session, the teams shared performance on client-level process indicators. Although every attempt was made to give each site an opportunity to present, given the many indicators and an average of 10 teams attending each learning session, the coaches reviewed each site’s presentation and identified those that should be given priority: the worst and best performers for each indicator. The worst performers were selected so that they could share their challenges and get peer support, while the best performers were selected so that they could share what had enabled them to achieve good results. If time allowed, other teams were given the opportunity to present afterward. Each presentation was basically a time series chart on a flip chart (very few sites had PowerPoint presentations) showing the changes tested and the impact of each tested change on the indicator.

The third day was used to conduct refresher training in quality improvement for people already trained and initial training for any new members attending their first QI session. Improvement areas of focus were also identified for the next implementation period. As the teams presented, the coaches took note of common gaps in compliance with the standards and process indicators, and these were shared with the entire group. They formed the basis for selecting improvement focus areas, without neglecting site-specific gaps. Still on the third day, the coaches provided the teams with technical updates and addressed common gaps.

Learning sessions were supposed to be conducted quarterly, but in some quarters they were not held because coaching visits showed the teams were not ready for them. If coaches noted during their visits that few new changes had emerged during the implementation period and all the teams seemed to be at the same level of performance, with no very good or very poor site, they would put off the learning session until there was value in bringing teams together. Learning sessions were thus convened when there was a need, rather than according to a fixed quarterly schedule.

During coaching visits and learning sessions, whenever coaches shared experiences from other sites, several team members expressed the need for onsite exchange visits to see in person how other teams were performing and organizing processes of care. Exchange visits were therefore conducted for some teams, either by taking staff from well-performing sites to struggling sites or vice versa. Pairings were based on identifying two teams in the same region (to limit cost and travel time), one performing well on a given indicator or standard and the other performing poorly on the same one. In most cases, one or two members from the well-performing site went to the poorly performing site, since one or two individuals could support an entire team, whereas transporting the whole poorly performing team would not have been practical. This arrangement not only motivated the well-performing team but was also less costly.

Not all teams participated in the exchange visits, as this was an activity based on need. The weaker teams that participated reported that the visits were quite useful and helped them close gaps they had initially thought were unsolvable.


Figure 11.2 shows the performance of the 30 pilot sites on quality standards. Over time, the sites reduced the number of standards scored poor (red) and fair (yellow). By October 2013, no site had any standard scored as poor, and the number of standards scored fair was also gradually reduced.

Fig. 11.2

Performance of the 30 sites on the seven MOH VMMC quality areas

Sustaining and Scaling Up the Improvement Effort

Right from the start, institutionalizing the continuous quality improvement approach in the VMMC program was a top priority. The main plan was to transfer skills to frontline health workers, to district-based public health officers whose day-to-day work was to support health facilities through supervision, to regional coaches, and to implementing partner staff at the national level. At the health unit level, QI teams were established based on the guidance in the National Quality Improvement Framework and Strategic Plan developed by the MOH. Service providers were mentored on how to collect client information using standardized tools and report it through the national system.

At the start of the VMMC QI work, sites did not report their performance data through the MOH health management information system; rather, the implementing partner would pick up the information from the site. This was not sustainable because, in the absence of the implementing partner, there was a risk that the information might not be reported. The support involved mentoring the site teams on how to collect the data using standard MOH reporting tools, complete those tools, and submit them to the district.

Health units were supported to integrate requisitions for VMMC supplies into the national system. At first, the majority of sites would not include the needs of the VMMC clinic in the health facility’s drug order because they were used to the implementing partner bringing the supplies; they would simply wait for whatever amount the partner delivered. For example, VMMC clinics use a lot of paracetamol for pain management and therefore need to order amounts over and above the health facility’s average monthly consumption. If this is not done, the health unit will run out of paracetamol before it is time to place the next order, which for the majority of health units is once every 2 months. The support involved mentoring the teams on how to estimate the required supplies, monitor them using the standard MOH stock cards, and integrate them into the health unit’s orders. This was aimed at ensuring that the steady supply of consumables would continue beyond the period of the implementing partner’s support.

At the district level, staff from the district health office were supported to join all site-based activities so that they would acquire QI and coaching skills and could then coach and, where possible, mentor the teams at VMMC sites. To achieve this, all planned activities were conducted jointly with district staff, starting with the baseline assessments. Each district health officer was requested to identify one person from their team to participate in the VMMC baseline assessment; this was also aimed at promoting district ownership of the improvement activity. The district staff participated in the baseline assessments and contributed to the development of action plans. To build their coaching skills, they were also invited to attend and actively participate in the monthly onsite coaching sessions, and they attended the QI training organized for service providers as well as the learning sessions. District staff welcomed the joint onsite activities because these helped them build their skills, and they also used the opportunity to look at other activities beyond VMMC. In some districts, the district health office sent a new person every month, which made mentorship difficult; in others, some of the people selected were not popular with the site teams, and this slowed down the mentorship.

At the regional level, coaches were trained and supported to join improvement activities. For most regions, the existing regional coaches were oriented on VMMC continuous quality improvement, while in regions with very few existing coaches, new ones were identified and trained. At the national level, staff of the AIDS Control Program and Quality Assurance Department were actively engaged in baseline and subsequent reassessments, onsite coaching visits, learning sessions, and QI trainings. This was aimed at building their skills and knowledge so that they would transfer them to all the districts and health units the technical assistance project was not supporting.

The VMMC national task force of the MOH was supported to convene oversight meetings in which issues related to the quality of VMMC were discussed. The support involved providing a venue, inviting participants, and providing the stationery and supplies needed for a successful meeting. Although it was the desire of the MOH to convene the task force quarterly, more than a year had passed without a meeting prior to the support provided by the technical assistance project.

Prior to the collaborative, there was a stand-alone, implementing partner-led reporting system which would not survive if the implementing partners stopped supporting the health units; moreover, not all VMMC sites were reporting through it. With MOH leadership, standardized tools for reporting client information were developed, including client forms, a theater register, and a client card; these forms were later integrated into the national health management information system. The purpose was to institutionalize the processes supporting improvement. The VMMC quality assessment tool used for the baseline assessment was adopted by the MOH as the standard tool for assessing VMMC quality standards and is now used to identify quality gaps across all sections of VMMC, which are then addressed by the improvement teams. In July 2016, the MOH adopted all six VMMC process indicators in the District Health Information System 2 for mandatory reporting to the district level.

Scaling Up VMMC QI Activities

Throughout the period of the collaborative, the implementing partners kept requesting support for more sites. In May 2014, 19 more sites were brought on board, bringing the total number of supported sites to 49. The implementing partners were again requested to identify two new sites to join the second phase of the collaborative. Once again, the sites selected were high-volume sites with known gaps. Because the implementing partners had already conducted assessments at these sites, they were well positioned to identify the sites most in need of the support.

Figure 11.3 shows the percentage of clients who returned for follow-up at the pilot sites and scale-up sites. The pilot sites took more time to reach 80% of clients returning for review, while the scale-up sites got there in a shorter period. This is attributed to the spread of knowledge from the old sites to the new sites and to the experience the coaches had gained.

Fig. 11.3

Client follow-up at 48 hours for pilot and scale-up sites

Spreading the Knowledge

From the work done by the first 30 teams, several documents were developed summarizing the learning that had taken place, including a list of changes tested to improve the quality of SMC in Uganda. This document was shared with all the new sites so that they could learn from what the old teams had done. The new teams were also trained in applying the continuous quality improvement approach.

Several case studies were developed from the work done by the first 30 sites and shared with the new teams. Improvement team members from old sites were asked to join some coaching visits to the new sites so that they could share their experience of starting the improvement work. A learning session was conducted to which both old and new teams were invited to share experiences; the new teams presented first, followed by the old teams. One of the key messages the old teams passed on was the importance of summarizing performance data on each VMMC service day and on a monthly basis to avoid a backlog. The old team members shared experiences of improving specific indicators as well as of getting started.


Author’s Perspective

Looking back, I say what an interesting journey this was, albeit with lots of challenges and learning on my side. The most challenging period was at the start of the collaborative, when all stakeholders were skeptical of each other. The Ministry of Health was alarmed by the findings from the external assessments, as were the implementing partners. The implementing partners thought [the technical assistance project] had come to police them and report them to their funders, and as such they did not fully cooperate at the start. My best lesson was that we need to include all key stakeholders in the planning of an improvement collaborative. The initial plan for this collaborative was made without the implementing partners, and this resulted in resistance that made us lose some time. It is important to identify key stakeholders and include them in the planning, define the roles of each, and be open and transparent. We should be prepared to change our plans as need arises.

–John Byabagambi

Implementing Partner Perspective

It gives [Ugandan faith-based organization] great joy to see the success [the hospital] has registered in providing quality SMC services despite the gaps and challenges identified at the baseline assessment. The continuous QI mentorship, support, and training conducted by [the technical assistance project], involving us at all stages, has been an excellent learning experience that enabled us to internalize and adopt the QI approaches even beyond [the hospital]. Each site is quite unique, but once the principles of QI are followed, they can create positive change, as demonstrated at [the hospital]. The mentorship and support have been continuous and allowed the actual implementers to suggest and test solutions and see their results. The involvement of the top leadership of both the implementing partner and the site, as demonstrated by both [Ugandan faith-based organization] and [the hospital], has been invaluable. Great thanks to USAID for the funds provided to facilitate the process and to [the technical assistance project] for the QI technical support provided to [Ugandan faith-based organization]. With the lessons learnt, we will continue to support [the hospital] and all other sites to improve the quality of services provided to clients.

Clinical Services Specialist, [Ugandan faith-based organization]

Site Perspective

The counselor offering individual counseling admitted to hurriedly going through the post-test counseling messages and mainly focusing on HIV test results. She was not recapping the advantages and benefits of SMC to the client because she was worried about the long waiting time for the whole SMC exercise. After the QI training, the counselor was able to develop and use a checklist to guide her during the post-test counseling session; she noted that there were many issues that were not clear to SMC clients during the group health education which she had to address in the individual session. She also learnt that she may sometimes forget key talking points, and that is why it is important to stick to the checklist, however experienced she may be.

–HIV Counseling Program Manager at one of the supported sites