Background

"We have at our disposal the tools necessary for achieving control – elimination – eradication of a particular disease", is a common refrain of public health bodies and practitioners. Vaccination programmes have had a major impact on a few key diseases, even in developing countries, but why are there relatively few examples of successful disease control programmes, particularly of non-vaccine preventable endemic diseases in developing settings? The inequitable global distribution of resources available for health care is certainly an important contributory factor. However, even interventions deemed cost-effective for developing environments often fail to perform as expected. Apology, justification or condemnation is customary, and a recommendation for further research commonly suggested as the way forward. Although this suggestion occasionally stimulates complaints that available resources should be channelled into service delivery rather than research, there is generally an acknowledgement that immediate problem-solving should not be at the expense of discovering sustainable longer-term solutions [1]. However, research per se is not a panacea for an ineffectual health programme. To be relevant and ethical, the research conducted must produce locally applicable answers [2].

The "operational research" model is becoming increasingly popular for addressing perplexing questions at public health programme level. It has even been suggested that success in combating communicable diseases in Africa depends upon each country having the ability to conduct appropriate operational research [2, 3]. Optimum use of operational research for improving communicable disease control is contingent on an appreciation of its inherent strengths and potential weaknesses.

In the Lowveld Region of Mpumalanga Province in the rural northeast of South Africa (population approximately 850,000), malaria is an important seasonal public health problem, with between 3,000 and 5,000 malaria patients notified annually in recent years [4]. Plasmodium falciparum parasites account for more than 90% of infections. Traditionally, detection of parasites in the peripheral blood by Giemsa-stained thick blood films (GTF) preceded the initiation of therapy, as rickettsial and viral febrile illnesses that commonly occur in the area mimic malaria. In 1995 only two of the 72 clinics in the Region had an on-site microscopist skilled in malaria diagnosis. The remaining clinics depended on four centralized laboratories for examining GTFs, as adequate facilities and technically skilled laboratory personnel were not available in more remote areas.

A series of operational research studies were conducted between 1995 and 1999 to assess the accuracy and appropriateness of this approach, and explore alternative diagnostic methods. Studies included:

  • A random survey of 40% (30/72) of Lowveld clinics, which found that only 20 clinics were still preparing thick blood films and that, of these, only three received microscopy results within 24 hours, while 11 never received results [5]. Of the nine clinics that had blood slides available for scrutiny, only four had prepared slides of acceptable quality;

  • A study to assess the diagnostic accuracy of the four laboratories responsible for examining malaria blood films in the Province, which found marked diagnostic disagreement. Kappa, the measure classically used for summarizing agreement beyond chance, had a value of only 0.11 (95% confidence interval 0.0–0.23) for a series of quality assurance GTFs [6]. This value signified only minimal agreement amongst the laboratories: kappa ranges from -1 to 1, where 1 represents perfect agreement, 0 no agreement beyond chance, and negative values agreement worse than chance;

  • Evaluation of a new diagnostic approach, in which two rapid malaria card tests were compared at a number of pilot clinics in the malaria-endemic area, following an escalation of malaria cases and political demands for action [7]. Participating nurses indicated a clear preference for one of the rapid tests, the ICT P.f. test, citing time to diagnosis and ease of use as the reasons for this choice;

  • A field study that demonstrated excellent field accuracy of the ICT P.f. test and precipitated introduction of the rapid card test for first-line malaria diagnosis in all Lowveld Region clinics during 1996 [8];

  • A confidential inquiry into malaria deaths. An important finding was that delays in accessing GTF results in hospitals, particularly after-hours and during weekends, may have contributed to fatal outcomes among hospitalised malaria patients [9]. This finding provided the impetus for deploying ICT P.f. tests in hospital accident and emergency units throughout the Mpumalanga malaria area;

  • A field evaluation of a multiple-species immunochromatographic test, which established that the multiple-species test matched the sensitivity and specificity of the ICT P.f. but was more complicated to interpret [10]. In addition, therapy for non-falciparum malaria was not available at clinic level, owing to the predominance of P. falciparum malaria in Mpumalanga. This prompted a policy decision to continue using the older rapid test. Quantitative and qualitative field evaluation of new-generation rapid tests is now routinely conducted in Mpumalanga to provide the information necessary for procurement of affordable and accurate tests.
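The agreement and accuracy statistics cited in these studies can be computed directly from 2 × 2 tables. The minimal sketch below uses purely illustrative counts (not the raw data from the studies above) to show Cohen's kappa for inter-laboratory agreement and sensitivity/specificity for a rapid test judged against reference microscopy:

```python
def cohen_kappa(a, b, c, d):
    """Cohen's kappa from a 2x2 agreement table:
    a = both raters positive, b = rater 1 only positive,
    c = rater 2 only positive, d = both raters negative."""
    n = a + b + c + d
    p_obs = (a + d) / n                                     # observed agreement
    p_exp = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2  # agreement expected by chance
    return (p_obs - p_exp) / (1 - p_exp)

def sensitivity_specificity(tp, fp, fn, tn):
    """Accuracy of a test against a reference standard."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical pair of laboratories reading the same 100 slides:
print(round(cohen_kappa(45, 15, 25, 15), 2))  # 0.13 -- barely better than chance
# Hypothetical rapid test versus reference microscopy on 200 patients:
print(sensitivity_specificity(tp=90, fp=5, fn=10, tn=95))  # (0.9, 0.95)
```

Note how a pair of raters can agree on 60 of 100 slides yet still score a kappa near zero, because much of that agreement is expected by chance alone.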

We provided a compilation of these six studies to experienced students on Masters-level public health courses in two countries for formal qualitative analysis. The students used this case-study to assess the role of operational research in influencing control programme policy and practice.

Methods

A synopsis of the six studies mentioned above, including their rationale, methodology, results and outcomes, was compiled for case-study use in a half-day session entitled "Strengths and Weaknesses of Operational Research". This module is a component of the Masters of Public Health subject "Introduction to Communicable Disease Control" offered by the Thusano School of Public Health, South Africa, and the "Disease Control" subject of the Masters of Public Health and Tropical Medicine offered by James Cook University, Australia. These subjects are currently offered by attendance mode once per annum in Pretoria, South Africa and Townsville, Australia, respectively.

This material was provided as pre-reading the evening before the session. At the session's commencement the principles of grounded theory, the inductive process of identifying analytical categories from available data, were briefly reviewed. The students were then introduced to their task, which was to apply the basic steps associated with the grounded theory approach in analysing the material provided, in order to derive positive and negative attributes of operational research for influencing communicable disease policy and practice in southern Africa. Individual students initially tackled this exercise, and then small groups of three to four students discussed and synthesised their findings. Finally, a plenary session was held during which students had the opportunity to list the positive and negative attributes that had emerged during their analysis, provide clarification where required, and discuss the themes derived. This method was successfully piloted in July 2000 at the Townsville Disease Control course, and then employed in Pretoria during October 2000 and Townsville in May 2001.

The majority of the 12 female and 7 male participants on the Thusano 2000 course were experienced professional nurses (n = 11), while the remainder were doctors or medical specialists. Most of these students (n = 15) were actively involved in managing communicable disease control programmes in South Africa at national (n = 3), provincial (n = 6) or district level (n = 6). The James Cook University students were predominantly medical doctors (n = 15), although there were also 2 nurses and 3 allied health professionals. Twelve students were employed in rural Australian settings, while five worked in developing countries. Twelve Townsville students were female and four students were employed in public health positions involving control of communicable diseases.

Thematic analysis was used to analyse the data generated during this exercise and catalogue strengths and weaknesses of operational research from the perspective of these students.
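The coding step of such a thematic analysis can be thought of as mapping raw participant statements ("codes") onto counts per group and then comparing the groups. A minimal sketch, using hypothetical code labels rather than the actual study data, might look like:

```python
from collections import Counter

# Hypothetical codes assigned to participant statements, by group
codes = {
    "Pretoria":   ["relevance", "relevance", "evidence-based policy", "capacity building"],
    "Townsville": ["relevance", "evidence-based policy", "relevance", "staff vulnerability"],
}

# Constant comparison: tally each code's frequency within each group ...
per_group = {group: Counter(items) for group, items in codes.items()}

# ... and keep codes that emerge independently in both groups (triangulation)
shared = set(per_group["Pretoria"]) & set(per_group["Townsville"])
print(sorted(shared))  # ['evidence-based policy', 'relevance']
```

In the actual exercise this comparison was of course done by hand and on richer qualitative material; the sketch only illustrates the logic of comparing code frequencies across two independent analyst groups.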

Results

Positive attributes of operational research

The "high relevance of operational research" was the most prominent theme that emerged, and was discussed extensively by both Thusano School of Public Health and James Cook University Masters students. Codes, or analytical categories, derived by the Pretoria students supporting this theme included "research driven by real problems", "allowed in-depth exploration of real reasons for problems" and "set in actual context, therefore provides specific answers". The Townsville students concluded that operational research was "real world research", "research that addresses real problems", and "draws on local knowledge". In addition, the Australian students mentioned that operational research had the advantage of "detecting locally unanticipated factors".

Both groups cited "high relevance" as a major reason why "operational research successfully affects policy and practice". The Australian analysts found that operational research "provides an evidence base for policy and procedures", while the South Africans concluded that operational research "leads to improved policy that is based on evidence."

Additional features identified by the Thusano students to account for the ability of operational research to effectively engage and convince policymakers and programme managers included "a focus (of operational research) on impact", "involvement of implementers in the research process", "decreased resistance to change, as policy based on operational research findings is seen (by policy-makers) to provide an opportunity to improve services", and "more likely to positively affect attitudes about the necessity to alter control measures – rather than just knowledge". They stated that "as senior management are aware that field personnel know the research outcomes, there is pressure on them to actively implement findings", that there was "less suspicion (amongst managers) because the research question, process and outcomes are open to local observers" and that "every manager enjoys success and being seen as proactive". The Townsville students derived similar features, "operational research by a health program creates immediate demand, by highlighting problems and demanding intervention" and "resistance to change may be less, as local health department management are more closely involved in the research".

Both groups mentioned the relatively short period between generation of research findings and implementation of resulting recommendations as a positive attribute of operational research. The South African group emphasised the "immediate benefit" and "fast application, even simultaneous, of lessons learned", while the Australian students indicated that operational research resulted in a "short lead-time to implementing findings" and that "changes occur in real-time".

Economic benefits of this research approach were highlighted by the South African students, "cost-effective, as existing resources are used" and "high quality research would attract additional resources from outside", and the Australian students, "led to better use of resources", and "focus of staff and management (would) shift to the areas where operational research was occurring and additional resources would flow in".

Analysts believed that conducting successful operational research established a commitment to ongoing research for evidence-based policy. Thusano students' themes included "once perceived to have positively influenced service delivery, the buy-in from management for investing in future research and continued interest in the particular program area was more likely", "incremental", "continues into the future", "ongoing", and "identified new problems setting a relevant research agenda". Townsville students emphasised the "highly adaptable (operational research) agenda".

Advantages accruing to health program staff fully involved in planning and implementing operational research were also noted. These included the development of staff capacity: the Pretoria group reported "capacity to conduct research is being built" and "improving research ability", while Townsville students mentioned that "people in field get research skills" and "acquire excellence in report-writing skills".

Negative attributes of operational research

The major concerns expressed by the Townsville students were that operational research, "may be of poor quality if there is a lack of local research skills" and that "poor quality research may have more weight as evidence and lead to inappropriate changes to practice and policy". The Pretoria students noted that there was an attendant "opportunity cost – other functions of field staff may be compromised". They expressed disquiet that routine programme evaluation may be compromised, "may lead to down-scaling of routine service evaluation because of a perception that service evaluation was also research, needing researchers, budget and ethics approval". They also indicated the reservation that "approval (to conduct operational research) may not be granted by senior management if they predict unflattering outcomes of research".

Both groups mentioned potential vulnerability of programme staff participating in operational research. Codes included "vulnerable if practices that have been long entrenched are found to be wasteful or useless" (Pretoria), and "researcher more vulnerable to unfavourable outcomes" and "field staff may feel threatened by results" (Townsville).

At a practical level, Townsville respondents were concerned that "resources, money and interest are diverted from other areas where no operational research was happening", and "field staff may object to additional work associated with conducting research".

Discussion

Experienced health policymakers have noted a widening gap between scientific knowledge and health policy, and between theoretical health policy and practice [11–13]. Many public health tools and strategies with proven laboratory or field trial efficacy do not realise tangible benefits in terms of disease control. Unfortunately there is a dearth of research on successfully translating results from field efficacy trials into field effectiveness [14–16].

For the purposes of this study we defined operational research as the systematic search for knowledge on interventions, tools or strategies that enhance programme effectiveness, with the rider that the research should be planned and conducted by or in equal partnership with the local control programme. A distinction should be made between operational research as we define it and operations/health systems research, which traditionally involves an incursion of external analysts, usually from a university environment, into a field setting. The latter approach focuses, almost exclusively, on developing information systems and technology to support planning, and has been criticised for "pursuing theory at the expense of practice" [17–19].

Specific features of operational research increase the likelihood that research-derived recommendations will successfully influence local control programme policy and practice. The principal feature highlighted by this study is the relevance of local operational research. Public health interventions studied in the setting in which they will be applied, are more likely to take account of the vagaries of local disease epidemiology, and available material and human resources. Local studies are also ideally suited to consider native context (biological, political, socio-economic and technological), a key determinant of the success of communicable disease control and eradication strategies [20–25].

High relevance is assured by forging a close link between researchers and local control programme management, or by equipping the control programme to conduct its own research [26]. A seamless research-control interface increases the value of research topics chosen for enhancing programme effectiveness [27, 28]. The chasm between the agendas of research organizations and consumers of research findings is well documented [29]. Considerable disparity exists between the volume of work published on specific interventions, and their inherent interest to health programme managers [30]. Intimate involvement of field staff in setting the research agenda should address this disparity.

Better use of available resources emerged as a dominant theme. The number of instances where operational research has generated practical affordable local solutions, in stark contrast with the extravagantly expensive and impractical measures more generally recommended, continues to grow [31, 32]. It should come as little surprise that analysis of local problems by informed stakeholders should generate locally appropriate solutions consistent with available resources [33, 34].

An interesting feature elucidated was the suitability of the operational research model for studying locally occurring aberrations that follow adoption of guidelines and policy developed elsewhere. A prominent example of this aberration is the strikingly divergent impact on HIV transmission observed when a similar approach to controlling sexually transmitted infections was implemented in two African countries [35, 36]. Even the central doctrine of tuberculosis control, directly observed short-course chemotherapy (DOTS), has recently come under the spotlight, with the publication of conflicting results from the only randomised controlled evaluations conducted at programme level, in vastly different social settings [37–39]. Policy-makers should fastidiously guard against automatically extrapolating positive findings from control strategy evaluations in one environment, to other settings. Increasingly complex and disparate contexts around the world demand context-specific solutions and local operational research should inform adaptation and precede wide-scale implementation [40].

Inherent attributes of the operational research approach facilitate implementation of findings at senior health management and policy-making level. A candid research agenda and involvement of local health staff avert senior management suspicion of "research", give greater credibility to research findings and recommendations, and diminish the inevitable resistance to change [41]. This is important as the human factor in organisations has a major influence on the nature of policy and its implementation, with political factors often carrying more weight than formal evidence [12]. Outside researchers are often viewed with suspicion by government officials charged with policy making, as it is perceived that academic research fails to address priority issues [30]. Differences between consumers and researchers in values and life experience, understanding of science and access to decision-making structures, fuel the perception that researchers primarily have a selfish agenda [29]. This perception is fostered by the common failure of academic and research institutions in developing countries to communicate research findings to local policymakers, practising health professionals, and the public [42]. Many researchers have been trained to believe that they have no direct role to play in improving health and that such involvement may devalue the independence of their research; they optimistically assume that key decision-makers will source relevant publications, understand the research language and results, and apply them to improving local health programmes. Delays in publication of research results in professional journals, or even exclusion by print journals owing to competition for space, may effectively preclude results from influencing current policy decisions [43]. Even when outside researchers do present research results to local decision-makers, there is little pressure on health management to implement recommended changes, as the researchers will soon depart.

In contrast, where research is conducted by or in partnership with the local health programme, then results are often immediately accessible and decision-makers more accountable [44, 45]. The rapid, even simultaneous, introduction of research findings in practice was a noted benefit of operational research. Shorter turn-around may reflect the relative ease of expanding effective measures from a field research site to neighbouring areas if limited additional resources are required. Programme management also shares the language of local policy-makers [42].

The respondents identified operational research as a dynamic process, setting a continuously evolving agenda. The rapid generation of new knowledge and technology with potential benefits for the local population makes a dynamic research approach obligatory. Environmental vicissitude, and demographic and epidemiological changes demand continuous review of disease control policy and an unrelenting search for better control strategies. The success of this incremental research agenda should be measured by its ability to effect appropriate changes in control programme policy and practice [46, 47].

A manifest benefit emerging from the data was the development of a research culture within a health programme conducting operational research. Encouraging a healthy inquisitiveness in programme leaders will ensure that the right questions are posed and answered. Investing in providing senior programme staff with the research design and analytical tools necessary for framing and answering these questions, addresses the concern raised that poor quality results may engender false credibility and inappropriate programme modifications [40].

The potential weaknesses of the operational research approach highlighted by this study deserve close scrutiny. Firstly, although operational research and routine programme evaluation are part of the same spectrum of public health methodologies, programme evaluation may be viewed as ongoing audit of routine service delivery, while operational research aims to solve identified operational problems and to develop and evaluate pilot interventions [48]. Thus, when operational research findings lead to changes in communicable disease control policy, routine evaluation must be built in to properly monitor their impact [39, 49]. Secondly, unlike routine programme evaluation, operational research requires preparation and submission of a research protocol for technical and ethical review. This rigorous process should assist in preventing inexperienced researchers from reaching invalid conclusions and implementing inappropriate policy. Field-dominated partnerships between research/academic institutions and health programmes should be encouraged, as this will serve as an additional safeguard.

The data were examined using an adaptation of the grounded theory approach [50]. Grounded theory is the term used to describe the inductive process of identifying analytical categories from data. The data are read and re-read to allow the researchers to identify and develop concepts, categories and themes while the research is being conducted. The approach promotes the development of theoretical explanations that conform closely to the data. Constant comparison allows key themes to be selected [51]. Similar themes emerged in two different geographical regions, amongst two separate groups of Public Health Masters students with diverse life experiences. This triangulation provided support for the validity of the thematic analysis [52, 53]. These analysts had relevant experience in the area of interest, increasing the potential for identifying pertinent themes. In addition, a comprehensive review of the published literature on operational research in communicable disease control yields themes similar to those derived by the two groups of analysts from the case study of malaria diagnosis research in Mpumalanga Province, South Africa. However, caution should be exercised to avoid exceeding the bounds of the data, and further research in other programmes and settings is encouraged to test the integrity and credibility of this analysis.

Conclusions

Efficacy is only one prerequisite for a public health intervention to be successful [54]. Selection of appropriate public health measures should be based on an assessment of their effectiveness and feasibility in the local setting. This will ensure that their impact, measured in another environment and context, is locally reproducible [55]. Local research at operational level is also essential for optimising the delivery of effective public health interventions [56]. The analysis of operational research conducted into malaria diagnosis in Mpumalanga Province, South Africa, suggests that the operational research approach can influence disease control policy and practice, and accelerate the inclusion of effective measures into local communicable disease control efforts. This analysis also endorses the value of equipping health programme staff with the knowledge and skills to efficiently conduct essential operational research at control programme level.