This study has produced a set of PIs that are feasible and easily assessed within current SA EM systems, and that can be used to evaluate and improve the quality of care in emergency centres. These indicators may need to be refined for local circumstances, although standardisation of at least some of them will allow direct comparison, audit and future studies. The list is not exhaustive, but provides a useful starting point for pilot studies and further research.
Historically, physicians have not prioritised quality measurement and improvement, but over the last few decades the quality improvement movement has shifted from an external regulatory requirement into an internally driven operation at the core of ECs in the developing world. This transition to a quality-driven culture remains one of the greatest challenges facing emergency medicine in both the developed and developing world [11].
In their article on quality improvement in EM, Graff et al. [11] concluded that the definition of medical quality should encompass the aspects of care that matter to all stakeholders within the EC, including doctors, nurses and patients. The most frequently used framework is that of the Institute of Medicine [12], which states that any quality improvement intervention should aim for safety, effectiveness, patient-centredness, timeliness, efficiency and equity. The quality of public services in developing countries has been neglected, with little emphasis placed on quality improvement [13].
The Delphi technique has been used successfully elsewhere to develop PIs [4, 5]. Consensus methodology is a means of obtaining expert opinion and turning it into a reliable measure. This multifaceted and heterogeneous Delphi panel, whose members all have considerable experience in the SA EM setting, has provided indicators with good generalisability and validity. There are no universally accepted or evidence-based criteria for defining consensus; an 80% positive (or negative) response was chosen as a reasonable threshold given the nature of the statements in this study [14, 15].
PIs that assess structural components of ECs are more applicable to the current situation in SA, and the study reflects this. These should provide a good baseline and could be developed alongside national guidelines specifying which indicators apply to each level of EC.
The process PIs are useful and practical. They consist of timed measures of patient flow and of the performance of vital clinical tasks, together with documentation requirements that provide evidence that clinical processes and protocols have been followed.
Outcome measures are difficult in the emergency environment, where we seldom have information on outcomes beyond the EC. A single measure of missed injuries is proposed as a feasible PI, but, as in other studies, it may not be a meaningful global measure of EM outcomes. Patient satisfaction is perhaps a better outcome measure; it is largely weighted towards timeliness and appropriateness of treatment, which should be gauged by the process PIs [5].
Werner and Asch express concern that although performance measures do improve performance, many are designed to improve compliance with guidelines, which does not necessarily translate into clinical benefit [6]. This needs to be borne in mind, especially for the process-based PIs. Adherence to PIs, and improvement against them, should not detract from the priorities of clinical care, which the PIs do not necessarily reflect: the PIs are not prioritised, and clearly not all address life- or even limb-threatening issues. Sheldon notes the importance of having good evidence to back indicators, as well as of integrating PIs with local and national policies on quality initiatives [16]. He also emphasises the importance of considering how the results of PIs will be analysed and what actions will be taken to improve performance. Kruk et al. [17] have emphasised that performance indicators need to be relevant, reliable, feasible and evidence-based before they can be implemented locally. Thus, developed and developing countries may use very different indicators based on local conditions and policies.
Graff et al. [11] have identified a number of barriers to the measurement and implementation of quality improvement programmes. Most important is the lack of reliable, accurate data acquisition and analysis in ECs, which is especially challenging in resource-constrained developing world hospitals. For effective and accurate measurement, data need to be captured in a digital format. Most ECs in developing world settings have limited, if any, electronic records of EC patients; data must instead be retrieved from paper medical records, which is time consuming. Most ECs run a paper-based log book of EC admissions and discharges, which limits the usefulness and quality of the data. Secondly, the lack of senior administrative and clinical commitment to quality improvement within the EC is a major challenge. Traditionally, the EC has never been a priority within the hierarchical structure of the health care institution, and quality improvement has not formed part of its core aims. Furthermore, a lack of understanding of quality measurement and improvement among senior staff and colleagues does not foster a team approach to prioritising the goal of improving patient care within the EC. The burden of diseases such as HIV/AIDS, malaria and trauma, together with a lack of qualified manpower, has placed tremendous strain on already overstretched health care systems. Many critics argue that scarce resources should be directed towards solving these problems rather than highlighting further problems through measurement interventions [13]. For performance monitoring to succeed, sound leadership from emergency physicians is essential to create a multidisciplinary team approach to improving patient care [11].
PIs need to be clearly defined, and tested for validity, reliability and responsiveness before they can be put into common practice [5]. Further refinement and research are needed to guide this process.
Finally, the Centre for Health Economics at York [18] has highlighted some of the main types of unintended consequences of performance indicators that may be detrimental to patient care; these need to be considered when choosing indicators and analysing the results. Firstly, indicators may promote tunnel vision, where managers concentrate on a set of PIs while ignoring other important unmeasured aspects of health care. Secondly, suboptimisation involves pursuing narrow local goals while ignoring the overall objectives of the health system, while myopia means concentrating only on short-term goals and targets. Probably the most detrimental is misrepresentation: the deliberate manipulation of data to satisfy target requirements. Finally, gaming is the altering of behaviour to obtain a strategic advantage.
Creating a list of proposed indicators is one thing; rolling them out to the ECs is the most difficult task. The PIs need to be further refined so that all emergency physicians share the same understanding of each indicator's definition, ensuring an acceptable level of compliance. Further research is needed on how to approach and solve this issue. For example, the process of EC triage has always been a controversial issue in South Africa. The need to prioritise the care of patients within South African ECs in response to long waiting times and overcrowding became obvious [20–23]. A staffed triage area has been identified as an essential process indicator of quality in this study. The Cape Triage Group was convened in response to the variable level of triage practised within South African ECs, with the goal of developing and validating a new triage tool for use within South Africa. Using this platform, a multidisciplinary panel of experts in the field of emergency care developed the Cape Triage Score, which was rolled out across the Western Cape in January 2006 and will hopefully extend to the rest of South Africa in the near future. Extensive campaigns and training of health care providers in the use of the triage system have taken place under the auspices of the Cape Triage Group, the Division of Emergency Medicine of the Universities of Cape Town and Stellenbosch (UCT/US), and the Emergency Medicine Society of South Africa (EMSSA). This campaign has shown positive results in terms of waiting times and mortality in many units within the Western Cape. It is an example of how, through umbrella bodies such as EMSSA and the UCT/US Division of Emergency Medicine, the PIs identified in this study can serve as a starting point for further debate and discussion.
In this way we can create benchmarking standards of good quality of care within our ECs and ensure that all health care workers in the ECs have a common understanding of these quality indicators. This will promote compliance with the performance indicators and improve the quality of care delivered. However, this is easier said than done, and we are still a long way from achieving this goal. A concerted effort will be needed to bring all those involved in emergency care under one roof to clarify and further refine these indicators. Governmental legislation and accreditation standards set out by the Department of Health and the Health Professions Council of South Africa will be needed to drive and enforce the process.
Emergency medicine is a rapidly developing speciality in the developing world, but the systems and processes in place remain largely immature. Clear guidelines are needed for the development of the speciality in this setting. Recent South African research [24] has identified key consensus areas for EM development in the developing world with respect to scope of practice, staffing needs, training and research. The next step is to translate these principles, through focus group discussions, into clear and practical guidelines that drive policy change, protocols, training and further research into EM development in the developing world.