Background

Globally, there is growing concern about the need for quality health care, with a view that poor-quality care provision is not only wasteful but also ineffective and unethical [1]. Measurement of quality indicators is central to improvement efforts aimed at promoting accountability in healthcare and professional practice. Quality indicators arise from the increasing demand for measures of quality across the healthcare continuum, from the community to the tertiary level [2]. Nurses form the largest component of the health professional workforce and are recognised as essential to the delivery of safe and effective care. Understanding, measuring and reporting the quality of their work is, therefore, critical.

Quality assurance in nursing requires that nurses have the ability to measure their care, to define standards and to change their professional practice [3]. Therefore, measuring what nurses do is important in maintaining standards, supporting nursing management and understanding outcomes and their variation that is linked to nursing. This requires development of sensitive, nursing-specific indicators [4]. Nurse-sensitive indicators (NSIs) have been identified and used by healthcare organisations and researchers to measure how much nurses contribute to patient outcomes [5, 6]. Although there are varied definitions of NSIs, the most comprehensive one defines NSIs as measures of things that are about nursing (structure), about what nurses do (process) or about outcomes that can be linked to structure and process issues. These measures must be quantifiably influenced by nursing personnel, but the relationship between these measures and nursing is not necessarily causal [7].

The use of appropriate and relevant key performance indicators for nursing provides an opportunity to (i) demonstrate the unique contribution nurses make in delivering outcomes for patients and clients [8], (ii) highlight the gaps that might exist in nursing care provision, (iii) inform intervention design for improving nursing care provision and (iv) promote accountability for the care that nurses provide. With a focus on the inpatient setting and the potential use of NSIs for evaluating and improving quality in low- and middle-income countries (LMICs), our aims were (i) to use a scoping review to identify NSIs reported in the literature, (ii) through a stakeholder-led approach, to adapt and, if needed, expand NSIs for potential use in Kenyan hospitals and (iii) to develop a set of indicators with potential use in wider LMIC contexts to support future evaluations of nursing care provision.

Methods

Review of literature

A scoping review [9, 10] was conducted using the EMBASE, CINAHL, MEDLINE and Google Scholar databases to identify literature on metrics for nursing quality of care, the quality of nursing care and their measurement methods (tools and data collection approaches). The literature search used the following terms: nurs* care metrics, nurs* care indicators, nurs* services indicators, nurs* metrics, nurs* care measures, and quality of care or nursing care.

Study selection criteria

We searched for all relevant literature published in the English language (due to time constraints) between 1900 and April 2017. Bibliographic references of retrieved studies were searched for additional articles that reported nursing quality indicators or nursing metrics. All study designs from all settings (LMICs and high-income countries (HICs)) were included if they reported on nursing care services, explained the concept of the quality of nursing care and described its measurement methods. Studies that reported on ambulatory nursing care were excluded since the focus of the study was to develop indicators for the inpatient setting.

All titles and abstracts of identified articles were screened independently by two reviewers (DG and MZ), and any disagreements were resolved by discussion. Full texts of potentially relevant papers were retrieved, read and subjected to the inclusion/exclusion criteria. The authors did not assess the quality of the selected studies, as our interest was in capturing a full list of indicators rather than how, or how well, they had been used. The process and reporting, including the step-wise retrieval, review, appraisal and inclusion of literature (Fig. 1), followed the PRISMA extension for scoping reviews (PRISMA-ScR) statement [11].

Fig. 1

PRISMA flow chart of the literature search process, showing the selection of studies and the reasons for exclusion

Data extraction and synthesis

Data on study characteristics (e.g. study design, settings, objectives, sample size, discipline/unit), nursing indicators reported, study location and tools used (including availability) were abstracted on a standardised form and are summarised in Additional file 1. The abstraction was completed by one reviewer (MZ); a second reviewer (DG) counter-checked the extracted data. The primary reviewers (DG and MZ) discussed and resolved any differences in perspective that arose during the review to arrive at the final studies for inclusion. Agreement was achieved by consensus.

All indicators mentioned in the identified publications (159 indicators across the 23 included studies) were listed. The data requirements for these indicators were also explored in terms of the data source and how each indicator is calculated (numerator/denominator). The indicators were then categorised narratively into three broad, overlapping themes (allowing an indicator to fall into one or more categories) to inform the stakeholder-led process for selecting indicators potentially applicable to Kenyan hospitals. The three thematic areas were (i) commonly reported indicators (identical indicators in four or more studies), (ii) indicators characterised into the respective domains of the Donabedian quality of care model (structure, process and outcome) and (iii) indicators that, in the opinion of the authors (DG and MZ are both nurses familiar with public hospital settings in Kenya), were relevant and directly applicable to Kenya with minor modifications. Indicators in the literature that were linked to other classifications/domains of quality (for instance, compassion, safety or the patient perspective) were re-categorised into the Donabedian framework based on the authors' judgement of which domain each indicator best represented.
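For illustration, most rate-based indicators of this numerator/denominator form can be computed in the same way. The sketch below is purely illustrative: the example indicator (falls per 1,000 inpatient days), the figures and the function name are hypothetical and are not drawn from any of the reviewed studies.

```python
# Minimal sketch of a numerator/denominator nurse-sensitive indicator.
# The indicator, figures and names below are illustrative only.

def indicator_rate(numerator: int, denominator: int, per: int = 1000) -> float:
    """Express a numerator/denominator indicator per `per` units of exposure."""
    if denominator <= 0:
        raise ValueError("denominator must be positive")
    return per * numerator / denominator

falls = 4               # numerator: patient falls recorded in the month
inpatient_days = 2500   # denominator: inpatient days over the same period

rate = indicator_rate(falls, inpatient_days)
print(f"{rate:.2f} falls per 1,000 inpatient days")  # 1.60 falls per 1,000 inpatient days
```

The same function applies to proportion-style indicators (e.g. per 100 admissions) by changing the `per` argument.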

Stakeholder engagement to adopt/adapt indicators for the Kenyan context

To develop and contextualise a set of NSIs to support evaluations of nursing care provision in Kenyan hospitals and wider LMIC settings, we established an expert advisory group (described below) to provide recommendations on which indicators would be contextually appropriate for measuring nursing care in an LMIC setting. We presented findings from the scoping review and used the National Quality Forum (NQF) framework [12] on developing indicators for public reporting to guide the advisory panel in selecting indicators from those identified in the review or developing new ones where necessary. The NQF is a consensus-based health care organisation in the United States of America that defines measures or health practices that are the best, evidence-based approaches to improving care [13].

Selection of stakeholders

Drawing on our prior work with a broad neonatal stakeholder group [14, 15], we established an expert advisory group comprising individuals responsible for delivery of nursing care in major public hospitals, neonatal nurse training and nursing services policy in the Ministry of Health and County Governments. We also included major nursing stakeholder groups including the National Nurses Association of Kenya, the Nursing Council of Kenya and development partners (WHO, UNICEF).

The nursing advisory group aimed to achieve broad representation of the nursing community rather than a statistically representative sample. We constituted panels from the nursing advisory group, which met on two occasions for a full day of consultations. In the first meeting, a high-level group (n = 26) involved in policy-making, drawn from the nursing directorate at the national level, training and regulatory institutions, and development partners, met to review indicators identified through the scoping review, with discussions focused on a pre-identified list of possibly relevant indicators for LMICs selected by the authors. After a plenary session, smaller groups of at least five members, organised so that each group had broad representation in expertise and institutional affiliation, were formed to discuss indicators relevant to inpatient care for the five major inpatient disciplines (surgery, medicine, paediatrics, neonatal care, and obstetrics and gynaecology). These discipline-specific groups were tasked with recommending a list of indicators for use in Kenya for their respective disciplines, based on the authors' pre-identified list drawn from the literature. On average, each group reviewed 10–15 of the pre-identified indicators. Additionally, group members were allowed to propose new indicators that were not captured in the literature but were deemed appropriate for the Kenyan context based on their experience and expertise; these would then be considered by the entire panel.
The discussions on indicator selection and prioritisation drew on guidance from the National Quality Forum (NQF) [12] and focussed on (i) which indicators were relevant and important to these disciplines in representing the quality of nursing care, (ii) acceptability by the nursing profession that the indicator was an important aspect of their work and that its measurement would be credible as an assessment of their work, (iii) availability of existing data sources that could support evaluations and (iv) where data were not routinely available, whether it would be feasible and realistic to introduce new data elements. After deliberations, each of the discipline-specific groups presented its propositions to the wider advisory group, and consensus on which indicators should finally be proposed was sought through discussion and a show of hands.

In the second meeting, the final list of indicators proposed from the initial high-level stakeholder group was presented to a group of 10 front line nurses (two nurses practising in each of the disciplines) for further refinement and prioritisation. This group was not mandated to reject indicators but advised on how to measure these indicators in practice.

The final list of indicators arising from the stakeholder-led process was categorised against the International Patient Safety Goals (IPSG) domains [16] and the Donabedian framework in instances where no suitable domain on the IPSG criteria was identified. The IPSG criteria were developed by the Joint Commission International (JCI) which is a recognised leader in international health care accreditation and focuses on identifying, measuring and sharing best practices in quality and patient safety [17].

Results

Overview of the studies included in this review

Overall, we identified 23,170 articles from database searches and an additional 14 articles from reference lists and Google Scholar. After screening titles and abstracts, 66 articles were considered for full-text review; however, 10 articles were not reviewed because the full texts were inaccessible to us (n = 6) or were not available in English (n = 4). Of the 56 full-text articles retrieved, 23 met our inclusion criteria. The main reasons for exclusion were that articles reported on ambulatory care indicators, described the process of developing and testing NSIs, or were descriptions of how the NQF-endorsed indicators might be implemented in practice and their potential impact. The article selection process is presented in the PRISMA flow chart (Fig. 1).

The reviewed studies included ten that collected primary data, two systematic reviews, three reports, one expert opinion and seven narrative reviews. A detailed description of the studies reviewed is provided in Additional file 1. The primary studies focussed on different settings, including specialist units (inpatient cardiovascular and critical care units, n = 3) and more general settings (acute care settings, medical/surgical units, swing bed units and transitional care, n = 7). The countries in which the studies were conducted varied: most (n = 10) were conducted in the United States of America, followed by Europe (n = 6), Asia (n = 5) and Australia (n = 2). Within a single study, the minimum number of indicators was 6, the maximum was 44 and the median was 11 (IQR 7–17). Study type, setting, number of indicators reported and country where each study was done are reported in Table 1.

Table 1 The characteristics of the studies included in this review

Different authors had different approaches for classifying nurse-sensitive indicators. In a study conducted by Foulkes aimed at enhancing the understanding of nursing metrics in clinical practice in the United Kingdom, nursing indicators were categorised into safety, effectiveness and compassion in nursing care [18]. The High-Quality Care Metrics for Nursing report categorised quality outcomes into safety, effectiveness and experience of care provision (both nurses' and patients') [19]. In the review by Koy and colleagues, indicators were classified into nurse perspectives, patient perspectives and nurse-patient perspectives, based on whose perception of quality the indicator was measuring [20]. McCance et al. also reported patient and nurse perceptions of caring based on the patient-centred nursing framework [8]. The approach most commonly adopted by authors was Donabedian's empirical framework for quality of care assessment of health systems, which focuses on the structure, process and outcome domains [21]. There were variations in the domains reported, with studies reporting indicators in all three domains [22,23,24] or in only one of the three without explicitly mentioning which domain the indicators belonged to [6, 25,26,27]. A summary of the indicators reviewed and the domain each was categorised into as per the Donabedian quality care model is presented in Table 2.

Table 2 Nurse-sensitive indicators identified from the literature and classified as per the Donabedian quality framework (indicators have been extracted as reported in the literature, and indicators with similar definitions or measuring the same construct are included)

Indicators relevant for LMICs

Of the 159 indicators identified from the literature, the authors identified 70 as relevant to LMIC settings based on their understanding of and experience in this context. These were then presented to the stakeholder group for consideration for use in LMIC hospitals. Of these, 31 indicators were adopted by stakeholders through the consensus process; they were revised and clarified to take into account the Kenyan context. An additional 34 indicators were proposed by the stakeholder group based on the need and priority to monitor specific aspects of nursing care in LMICs. Of these, 21 indicators were adopted after deliberation, based on panel consensus. In total, 52 NSIs potentially relevant to LMIC settings were identified, including 14 of the 25 commonly reported indicators (reported in four or more studies) presented in Additional file 2. A detailed description of the indicators adapted from existing indicators (literature), those recommended as additional indicators, and the proposed methods for measuring the indicators as suggested by the stakeholder group is provided in Table 3.
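As a quick arithmetic check, the selection flow described in this paragraph can be tallied as follows (the counts are those reported above; the variable names are ours):

```python
# Arithmetic check of the indicator selection flow reported in the text.
identified_in_literature = 159
judged_relevant_by_authors = 70   # subset pre-selected for stakeholder review
adopted_from_literature = 31      # retained by stakeholder consensus
proposed_by_stakeholders = 34     # new indicators suggested by the panel
new_proposals_adopted = 21        # adopted after deliberation

final_set = adopted_from_literature + new_proposals_adopted
print(final_set)  # 52
```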

Table 3 LMIC relevant Nursing sensitive indicators aligned with International Patient Safety Goals

Discussion

The aim of this study was to identify 'nurse-sensitive indicators' (NSIs) from the literature and, using a stakeholder-led approach, to develop and contextualise potential indicators to support evaluations of nursing care provision in Kenyan hospitals and potentially similar LMIC settings. Although several studies reported NSIs, there were inconsistencies in the terminologies/definitions used to describe nursing quality indicators, including nurse-sensitive indicators, nursing key performance indicators, nurse-sensitive quality indicators and nursing metrics [2, 5, 6, 20, 28]. In addition, the definitions used for indicators varied by tool and data source despite the indicators aiming to assess the same practice or outcome. For instance, nosocomial infections were considered in the aggregate in some studies, whilst others described them by the system affected or the resulting disease, such as urinary tract infections, pneumonia and upper respiratory infections. For example, some studies reported pneumonia and ventilator-associated pneumonia as separate indicators (Table 4) [6, 29]. Consequently, there is considerable overlap in measurement approaches and limited standardisation across indicators, undermining comparison between organisations or hospitals. Given the costs of measurement and the limited resources in LMICs, it will be important to develop a consistent and standard approach to indicator definition and measurement to support the evaluation of nursing care in these settings.

Table 4 Indicators with similar definitions or measuring similar construct

Using a stakeholder-driven approach, indicators identified from the literature were reviewed for relevance to an LMIC setting and, where necessary, initially adapted by discipline-specific groups (surgery, medicine, paediatrics, neonatal care, and obstetrics and gynaecology). Of the 159 indicators identified, 70 were considered by researchers familiar with the local context and with quality measurement as potentially relevant to LMIC hospital settings. Of these, 31 were selected (and often adapted) by local stakeholders as likely to be useful for the Kenyan context. Reasons for excluding indicators included differences in patient case-mix and hospital settings, including the availability of technology and infrastructure in HICs that is often lacking in LMICs. An additional 21 indicators that were not identified in the literature were recommended by stakeholders to measure aspects of nursing care provision considered a priority for the Kenyan context. These additional indicators spanned the domains of structure (e.g. availability of resources to support infection prevention and control activities/practices) and process (e.g. monitoring of phototherapy, communication and coordination of care through documented doctors' ward rounds, and consenting for surgical procedures). Our final set of indicators (n = 52) was classified based on the International Patient Safety Goals (IPSG) framework [16] (Table 3) and spanned the domains of patient identification (n = 1); effective communication (n = 9); safety of high-alert medication (n = 5); correct site, procedure and patient for surgery (n = 5); risk of health care-acquired infections (n = 19); and patient harms resulting from falls (n = 3).
Developing measurements of the work done by nurses and a link to patient safety may be important in helping us understand the consequences of workforce shortages, and such measures could be helpful in accreditation programmes emerging in LMICs [30,31,32] whilst drawing lessons from global programmes such as the Joint Commission International (JCI) [33].

Progress has been made in defining, refining and testing NSIs in HICs, with the development of nursing networks that use NSIs for quality improvement. Examples include the adoption and widespread use of the American Nurses Association National Database of Nursing Quality Indicators (NDNQI) in evaluating the quality of nursing care [34] and the creation of minimum datasets for nursing quality indicators [35], but all of these are limited to HICs. When the commonly reported NSIs from HIC settings were explored for transferability to LMICs, on the premise that their prevalent use marks them as the most robust, only 14 of the 25 commonly reported indicators were adopted by stakeholders (Additional file 2). This suggests varying contexts and needs that should be considered when adapting recommendations from other settings. Nonetheless, the approaches used and progress made in HICs provide important lessons for LMICs as they consider indicators for adoption and operationalisation, helping them avoid pitfalls that HICs may have experienced while setting up these systems. We hope that developing NSIs for an LMIC setting, and drawing on lessons from HICs on their implementation, will help demonstrate the value, importance and broader contribution of nursing to high-quality care at both local and wider levels, whilst exploring what might constitute a minimum data set that allows quality monitoring and risk adjustment.

Nurses, the largest component of the health professional workforce, are essential to the delivery of safe and effective care, as very few interventions (both clinical and nurse-initiated) occur without nursing involvement. Whilst nurses comprise the largest workforce and are considered the 'glue' that holds the health care system together, they are too often undervalued and their contribution to the quality of care agenda underestimated [36]. This is probably because most of what they do is rarely measured, particularly in LMIC health care settings, where most measures of the quality of care provided focus almost exclusively on more medical aspects of care [37, 38]. Therefore, measuring what nurses do and the quality of the care they deliver is essential in demonstrating the value of nurses and their work in promoting safety. These measurements will also be useful in highlighting the implications of workforce shortages and identifying opportunities for improving care whilst building improvement networks to promote nurse-led initiatives.

Our proposed set of indicators needs to be considered in light of the following limitations. Firstly, our review methods and stakeholder engagement differed from the more formal, structured approaches of undertaking a systematic review and a Delphi process for indicator development. However, the process of developing and selecting indicators involved a wide range of stakeholders, and indicators were agreed upon through a consensus-based approach, hence providing face validity. Although our final list of indicators (n = 52) has not been formally validated with a wider stakeholder group, we feel it provides an initial indicator set for testing in future studies of nursing care provision. We recognise that some indicators might be considered more critical than others, such as those linked to patient outcomes (e.g. mortality) or those making a larger overall contribution to quality care. We adopted a simple approach giving each indicator equal weight, which was deemed easiest for the diverse expert group to understand. The aim was to generate an initial set of indicators that can be further evaluated, with the potential for introducing weighting based on further work. As such, this list is only indicative of what aspects of nursing should be measured and does not take into account the relative importance of the various indicators. Secondly, anecdotal evidence and the literature [39,40,41,42] suggest that nursing documentation is often fragmented, completed on several forms, sometimes in triplicate, and often in free text. This may undermine the application of the proposed indicators that are based on document review. As such, piloting the proposed indicators in routine practice to evaluate their feasibility, reliability and construct validity will be important. Monitoring and tracking the proposed NSIs may require better tools to support nursing care documentation, for instance, structured nursing notes.
Similar efforts of co-designing structured nursing forms in Uganda and the United Kingdom have shown improvements in communication between nurses and other professionals whilst reducing time spent on documentation [43, 44].

Conclusion

Our proposed nurse-sensitive indicators, informed by the literature and developed with stakeholders, provide an opportunity for identifying gaps, developing targeted interventions for investment and improved care, and supporting governance and accountability mechanisms that improve quality in LMIC health systems. The proposed NSIs for Kenya help address the global dearth of information on NSIs for monitoring the quality of nursing care, particularly in LMICs. Further work on their validation through implementation, refinement and adaptation is required to generate a widely agreed set of standardised indicators. The latter provides an opportunity for LMICs to establish or join national or regional professional learning networks, such as those in HICs [34, 45] or those emerging in LMICs [46], that are showing success in achieving high-quality care through quality improvement and learning. Finally, measures of nursing quality might strengthen the voice of nurses in policy and practice and their position in planning and management roles, where the nursing voice is often lacking.