Background

Healthcare is a complex [1] and key sector [2] that now faces global challenges of rising costs, low service efficiency, competition, and equity, as well as responsiveness to users [3]. One WHO estimate puts the yearly waste at approximately 20–40% of total healthcare resources because of inefficiency [4]. European countries spent on average 9.6% of their gross domestic product (GDP) on healthcare in 2017 and 9.92% in 2019. Germany, France, and Sweden reported the highest healthcare expenditures in Europe in 2018 (between 10.9% and 11.5% of GDP) [5]. In the U.S., healthcare spending consumes 18% of GDP and is projected to eclipse $6 trillion by 2027 [6].

Hospitals are the biggest consumers of health system budgets [7] and the major component of the health system [8]. In many countries, 50–80% of the health sector budget is dedicated to hospitals [8, 9]. As a result, hospital performance analysis is becoming a routine task for every hospital manager. On the one hand, hospital managers worldwide face difficult decisions regarding cost reduction, increasing service efficiency, and equity [10]. On the other hand, measuring hospital efficiency is of interest to researchers because patients demand high-quality care at lower expense [11].

To address the above-mentioned need to measure hospital performance, implementing an appropriate hospital performance evaluation system is crucial in any hospital. To this end, hospital administrators use various tools to analyse and monitor hospital activities [1], which require well-defined objectives, standards, and quantitative indicators [12]. The latter are used to evaluate the care provided to patients both quantitatively and qualitatively and typically relate to inputs, outputs, processes, and outcomes. These indicators can be used for continuous quality improvement by monitoring, benchmarking, and prioritizing activities [13]. They are developed to improve health outcomes and to provide comparative information for monitoring, managing, and formulating policy objectives within and across health services [12]. Studies thus far have used their own sets of indicators when evaluating hospital performance, which may be context dependent. In addition, those studies have mostly used a limited set of indicators covering only a few dimensions (2–6) of hospital performance [14,15,16,17,18].

Therefore, comprehensive knowledge of the potential indicators that can be used for hospital performance evaluation is necessary. Such knowledge would help practitioners choose appropriate indicators when evaluating hospital performance in different contexts, and would help researchers extend their analyses to evaluate performance from a wider perspective by considering more dimensions. Although performance is a very commonly used term, it has several definitions [19, 20] and is often misunderstood [21]. As a result, some researchers have conflated related terms and treated them as interchangeable: effectiveness, efficiency, productivity, quality, flexibility, creativity, sustainability, evaluation, and piloting [21,22,23]. Thus, this scoping review aimed to categorize and present a comprehensive set of indicators that can be used for hospital performance evaluation at any needed level of analysis, i.e., clinical, para-clinical, logistical, or departmental, and to relate those indicators to the appropriate performance dimensions. The uniqueness of this paper is that it provides readers with a comprehensive collection of indicators that have been used in different performance analysis studies.

Materials and methods

We conducted a scoping review of the literature. A scoping review is particularly useful when a topic has not yet been extensively reviewed or is complex or heterogeneous in nature. This type of review is commonly undertaken to examine the extent, range, and nature of research activity in a topic area; determine the value, potential scope, and cost of undertaking a full systematic review; summarize and disseminate research findings; and identify research gaps in the existing literature. As a scoping review provides a rigorous and transparent method for mapping areas of research, it can be used as a standalone project or as a preliminary step to a systematic review [24]. By contrast, a systematic review (qualitative or quantitative) usually addresses a narrower topic and integrates or compares findings from previous studies [25].

In our study, we used the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) Checklist, following the methods outlined by Arksey and O’Malley [26] and Tricco et al. [27]. A systematic search for published, English-language literature on hospital performance evaluation models was conducted in three databases, i.e., PubMed, Scopus, and Web of Science, covering 2013 to January 2023. Candidate keywords were first identified by the authors through a brainstorming process and then refined and validated by a team of experts. The search strategy was formulated by combining these terms with Boolean operators, and the resulting strings were run against the title and abstract fields of the online databases. The search query for each database is presented in Table 1.
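To illustrate how such a search string can be assembled, the following minimal Python sketch joins synonyms with OR within each concept group and joins the groups with AND. The keyword groups shown here are hypothetical examples for illustration only; the actual queries run against each database are those listed in Table 1.

```python
# Illustrative sketch only: assembles a Boolean title/abstract query from
# keyword groups. The keyword groups below are hypothetical examples; the
# actual queries used for each database are listed in Table 1.

def build_query(concept_groups):
    """Join synonyms with OR within each concept, and concepts with AND."""
    clauses = ["(" + " OR ".join(f'"{kw}"' for kw in group) + ")"
               for group in concept_groups]
    return " AND ".join(clauses)

concepts = [
    ["hospital", "hospitals"],                      # setting
    ["performance", "efficiency", "productivity"],  # evaluation terms
    ["indicator", "index", "metric"],               # measurement terms
]

print(build_query(concepts))
# ("hospital" OR "hospitals") AND ("performance" OR "efficiency" OR
# "productivity") AND ("indicator" OR "index" OR "metric")
```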

Table 1 Database query

In the screening process, relevant references related to hospital performance evaluation were screened and abstracted into researcher-developed Microsoft® Excel forms by dual independent reviewers, and conflicts were resolved by additional reviewers.

The inclusion criteria were as follows: focused only on the hospital setting, full text available, and written in English. We excluded studies that focused on health organization indicators rather than specifically on hospital indicators; articles without appropriate data (those focused only on models rather than indicators, or on qualitative checklist questionnaires); and articles that focused only on clinical or disease-related indicators rather than hospital performance dimensions, or that provided very general items as indicators rather than the indicators themselves. The PRISMA-ScR Checklist was then used to improve the transparency of our review [28].

To extract the data, researcher-developed Microsoft® Excel forms (data tables) were designed. The following data were extracted into Microsoft® Excel for synthesis and evaluation: title, author, publication year, country, indicator category, study environment (number of hospitals studied), study time frame, indicator name, number of indicators, indicator level (hospital level, department level), evaluation perspective (performance, productivity, efficiency, effectiveness, quality, cost, safety, satisfaction, etc.), study type (quantitative or qualitative), indicator subtype (input (structure), process, output (result), outcome, and impact), and other explanations. To create a descriptive summary of the results addressing the objectives of this scoping review, numerical summarization was also used.
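As a minimal sketch, the extraction form can be pictured as one record per included study, mirroring the fields listed above. The Python dataclass below is illustrative only; the field names are our own shorthand, not the exact column headers of the researcher-developed Excel forms.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ExtractionRecord:
    """One row of the extraction form (field names are illustrative
    shorthand, not the exact Excel column headers)."""
    title: str
    author: str
    year: int
    country: str
    indicator_category: str            # e.g., hospital department or ward
    hospitals_studied: Optional[int]   # study environment (number of hospitals)
    time_frame: str
    indicator_names: List[str] = field(default_factory=list)
    indicator_count: int = 0
    indicator_level: str = ""          # "hospital level" or "department level"
    evaluation_perspective: str = ""   # performance, efficiency, quality, ...
    study_type: str = ""               # "quantitative" or "qualitative"
    indicator_subtype: str = ""        # input/process/output/outcome/impact
    notes: str = ""                    # other explanations
```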

The ‘Category’ and ‘Evaluation perspective’ fields were created so that they could later be developed into new categories focused on the types of indicators related to the term performance. In the ‘Category’ field, the names of hospital departments or wards (such as hospital laboratories, pharmacies, clinical departments, and warehouses) were recorded, and in the ‘Evaluation perspective’ field, the various terms related to the evaluation of hospital performance were extracted. After extraction, these two fields were combined under the heading ‘performance dimension’.

The indicators’ levels were collected to determine the level of performance evaluation associated with each indicator. Some indicators were used to evaluate the performance of the entire hospital, some to evaluate the performance of hospital departments, and some to evaluate performance at the level of a specific project. For example, several indicators (such as bed occupancy ratio, length of stay, and waiting time) were used to evaluate the performance of the entire hospital, whereas others (such as laboratory department indicators, energy consumption indicators, and neonatal department indicators) were used only to measure the performance of specific departments. These levels were recorded under the heading ‘category’. The ‘category’ and ‘indicator’s name’ fields were defined according to the results of the ‘subcategory’ field.

The subtypes of indicators (input (structure), process, output (result), outcome, and impact) were defined based on the chain model, and each of the selected indicators was linked to it (Appendix 1). In the chain model, inputs are used to carry out activities, activities lead to the delivery of services or products (outputs), outputs begin to bring about change (outcomes), and eventually this (hopefully) contributes to impact [29]. The set of input, process, output, outcome, and impact indicators was classified so that readers can access these categories as needed according to their chosen evaluation models. This classification appears under the heading ‘Indicators by types’.
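As a sketch of how each indicator can be linked to a stage of this chain, the snippet below tags a few indicators mentioned in this review with a chain-model stage. The specific assignments are illustrative guesses on our part; in the review itself, indicators were linked to stages by expert judgment (Appendix 1).

```python
from enum import Enum

class ChainStage(Enum):
    INPUT = "input (structure)"
    PROCESS = "process"
    OUTPUT = "output (result)"
    OUTCOME = "outcome"
    IMPACT = "impact"

# Illustrative assignments only; in the review, indicators were linked
# to stages by expert judgment, not by rule.
indicator_stage = {
    "medical staff numbers": ChainStage.INPUT,
    "waiting time": ChainStage.PROCESS,
    "length of stay": ChainStage.OUTPUT,
    "mortality rate": ChainStage.OUTCOME,
    "life expectancy": ChainStage.IMPACT,
}

for name, stage in indicator_stage.items():
    print(f"{name}: {stage.value}")
```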

The study type was recorded as quantitative or qualitative to indicate whether an indicator could be calculated. In this way, readers can choose articles that use quantitative or qualitative indicators to evaluate hospital performance.

Results

We included 91 full-text studies (out of 7475 records) in English published between 2013 and January 2023 (Fig. 1), approximately 40% of which were published between 2020 and 2023. More than 20% of the retrieved studies were conducted in Iran and the USA.

Fig. 1

Study selection and data abstraction

Table 2 Study characteristics

Study characteristics

As shown in Table 2, 85% of the reviewed studies evaluated a defined number of hospitals (1 to 3828 per study, 13,221 hospitals in total). More than 90% of the studies used a quantitative approach. In more than 70% of the studies, hospital evaluation occurred at the department level, which can be further divided into three levels: administrative, clinical ward, and paramedical department. The administrative departments consisted of 13 departments, including financial management [48, 55, 61, 67, 68, 80, 83, 109, 113], supply chain management and warehouse [15, 43, 84], value-based purchasing [33, 85], human resource management [97, 101], medical equipment [32, 87], health information management [90], information systems [106], nutritional assessment [93], energy management [30, 45, 92], facility management [52, 53], building sustainability and resilience [35], research activities [44], and education [107].

The clinical wards consisted of 8 wards, namely, emergency departments (EDs) [16, 39, 56, 57, 69, 70, 89], surgery departments [58, 62, 63, 91, 102], intensive care units (ICUs) [47, 64, 65], operating rooms (ORs) [38, 88, 108], surgical intensive care units (SICUs) [111], obstetrics and gynecology departments [59], neonatal intensive care units (NICUs) [74, 103], and quality of care [18, 31, 40, 50, 72, 92, 95, 112] indicators. The paramedical departments consisted of 3 departments: pharmacy [60, 76, 98], laboratory and blood bank [37, 42, 43, 49], and outpatient assessment [86] indicators.

Table 3 Performance dimensions and related indicators

With regard to data categorization, a total of 1204 indicators were first extracted from the 91 studies; after detailed examination, 43 indicators (such as hospital ownership, level of care, admission process, and personal discipline) were removed because they were too general or could not be calculated in the hospital environment. The remaining 1161 performance indicators were included in this research and categorized based on the performance criteria (more details can be found in Appendix 1). Next, 145 functional dimensions, including divisions based on the different departments and units of the hospital, were defined through several focus group discussions with 5 health experts. After re-categorization and functional summarization, 21 performance dimensions were finalized.
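The reduction from 1204 extracted indicators to a categorized set can be viewed as a filter-then-count pipeline. The Python sketch below shows the general shape of such a pipeline on a few hypothetical rows; in the review itself, the removal of the 43 overly general or non-calculable indicators and the grouping into dimensions were done by expert judgment, not by code.

```python
from collections import Counter

# Hypothetical extracted rows: (study_id, indicator_name, calculable_flag).
# In the actual review, 1204 indicators were extracted, 43 overly general
# or non-calculable ones (e.g., "hospital ownership") were removed by
# expert review, and the remaining 1161 were grouped into dimensions.
rows = [
    (1, "length of stay", True),
    (1, "hospital ownership", False),   # too general -> removed
    (2, "length of stay", True),
    (2, "mortality rate", True),
]

kept = [(study, name) for study, name, calculable in rows if calculable]

# Frequency of use across studies, the basis for the numerical summary.
freq = Counter(name for _, name in kept)
for name, n in freq.most_common():
    print(f"{name}: used in {n} record(s)")
```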

As shown in Table 3, the 21 performance dimensions were divided into three parts: category, subcategory, and related indicators. Additionally, according to hospital levels, there were three categories: ‘organizational management’, ‘clinical management’, and ‘administrative management’. Then, according to the type of indicators, fourteen subcategories were defined for the 110 selected main indicators.

Performance dimensions

The ‘productivity’ dimension focuses on indicators reflecting the macro-level performance of the hospital, given that this index encompasses both effectiveness and efficiency. The ‘efficiency’ dimension focuses on general performance indicators for the optimal use of resources to create optimal output in the hospital. The ‘effectiveness’ dimension covers general performance indicators with an outcome view. The ‘speed’ dimension focuses on indicators of service delivery time and the speed of procedures. The ‘development’ dimension focuses on matters related to the training of employees and students and related training courses. The ‘safety’ dimension covers issues related to patient safety, unwanted and harmful events, and hospital infections.

The ‘quality of work life’ dimension emphasizes matters related to staffing levels and working conditions. The ‘quality’ dimension relates to the quality of services provided in different parts of the hospital and to possible complications in improving service quality. The ‘satisfaction’ dimension focuses on the satisfaction of patients and employees and on their complaints. The ‘innovation’ dimension relates to the research process and its output. The ‘appropriateness’ dimension involves appropriate service delivery by clinical departments, pharmaceutical services, and patient treatment. The ‘evaluation’ dimension focuses on indicators related to the assessment scores of the hospital’s para-clinical departments.

The ‘profitability’ dimension focuses on overall output indicators for income and profitability. The ‘cost’ dimension focuses on indicators related to general expenditures, the average cost per bed and per patient, and budgeting. The ‘economy’ dimension relates to financial rates and their indicators. The ‘coherence’ dimension emphasizes indicators related to the continuity of the service delivery process. The ‘patient-centeredness’ dimension focuses on indicators related to the patient’s experience of the facility, environment, treatment processes, communications, and relevant patient support. The ‘equity’ dimension covers indicators related to social and financial justice and life expectancy. The ‘relationship’ dimension evaluates the consultations and discussions required during patient care by the treatment team. The ‘sustainability’ dimension focuses on indicators related to energy standards. The ‘flexibility’ dimension focuses on the hospital’s response to crises.

According to Table 3, most studies focused on ‘efficiency’, ‘productivity’, ‘safety’ and ‘effectiveness’ as performance dimensions, addressed in 54, 53, 38 and 37 studies, respectively (approximately 40–60% of the studies). Within the ‘efficiency’ dimension, resource management, supportive unit assessment, and human resource management indicators were the first to third most common, used in 26, 23 and 22 studies, respectively (approximately 25% of the studies).

In addition, for the ‘efficiency’ dimension, ‘medical staff numbers’, ‘emergency department bed numbers’, and ‘nonmedical staff numbers’ were reported in 16, 13, and 11 studies, respectively (between 20 and 30% of the 54 efficiency studies). Within the ‘productivity’ dimension, the ‘bed utilization rate’ and ‘service delivery and treatment’ subcategories were reported in 50% and 20% of the studies, respectively (46 and 19 out of 91).

Additionally, for the ‘productivity’ dimension, the ‘length of stay’ indicator was used most often, reported in approximately 80% of those studies (43 out of 53), followed by the ‘bed occupancy rate’ in approximately 40% (21 out of 53). The ‘bed turnover ratio’ and ‘hospitalization rate’ were each reported in 12 studies. Furthermore, for the ‘safety’ dimension, all indicators fell into the ‘patient safety’ subcategory, reported in 38 studies; ‘complications’, ‘accidents or adverse events’, and ‘incidents or error rates’ were the indicators most often examined, in 13, 12, and 11 studies, respectively. The ‘effectiveness’ dimension was present in 37 studies (40%), with only two indicators: ‘mortality rate’ in 29 studies and ‘readmission rate’ in 23 studies.
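For readers who want to reuse these frequencies, the figures quoted above can be restated as a nested mapping from dimension to indicator-use counts. The Python structure below simply transcribes the counts reported in this section; it is a convenience restatement for illustration, not additional data.

```python
# Indicator-use counts quoted above, restated as a nested mapping.
# Values are numbers of studies; per-dimension totals are the study
# counts reported for each dimension (91 studies reviewed in total).
dimension_totals = {"efficiency": 54, "productivity": 53,
                    "safety": 38, "effectiveness": 37}

indicator_counts = {
    "efficiency": {
        "medical staff numbers": 16,
        "emergency department bed numbers": 13,
        "nonmedical staff numbers": 11,
    },
    "productivity": {
        "length of stay": 43,
        "bed occupancy rate": 21,
        "bed turnover ratio": 12,
        "hospitalization rate": 12,
    },
    "safety": {
        "complications": 13,
        "accidents or adverse events": 12,
        "incidents or error rates": 11,
    },
    "effectiveness": {
        "mortality rate": 29,
        "readmission rate": 23,
    },
}

for dim, counts in indicator_counts.items():
    top, n = max(counts.items(), key=lambda kv: kv[1])
    share = 100 * n / dimension_totals[dim]
    print(f"{dim}: '{top}' in {n}/{dimension_totals[dim]} studies ({share:.0f}%)")
```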

Performance categories

Considering the three categories shown in Table 3, ‘organizational management’ indicators were the most commonly used of the three (ahead of ‘clinical’ and ‘administrative’) and were present in more than 85% of the studies (78 out of 91). The ‘clinical management’ and ‘administrative management’ categories were reported in 62 and 51 studies, respectively.

Performance subcategories

Considering the 14 subcategories shown in Table 3, the ‘bed utilization rate’ and ‘patient safety’ subcategories were each mentioned in 46 studies, making them the most common. The ‘financial management’ subcategory was the second most common, reported in 38 studies. In third place, the ‘human resource management’ and ‘time management’ subcategories were each present in 31 studies. The ‘paramedical’ subcategory indicators were present in less than 10% of the studies [60, 96,97,98, 106, 113].

Performance indicators

According to the indicator columns in Table 3, the most frequently used indicators in the reviewed studies were length of stay, mortality rate, and readmission rate, in 47%, 32%, and 25% of the studies, respectively. Bed occupancy rate and non-personnel costs were each reported in 23% of the studies. Additionally, among the 110 indicators, 16, namely, the lab cancellation rate, exam-physician ratios, number of coded diagnoses, number of medical records, laboratory sample/report intervals, medical information request time, safety standards in the archives, nutritional risk screening, imaging quality control failures, errors in medical reports, average impact factor, nutritional measures, laboratory scoring, imaging inspection, discharge process, and emergency response rate, were each reported in only about 1% of the studies.

The classification of the indicators in Table 4 was performed based on the chain model, which includes input, process, output, outcome, and impact. Indicators were assigned to each category according to the experts’ opinions. For instance, the number of publications by the academic members of an academic hospital and the average impact factor of those publications were considered outcome indicators. As depicted in Table 4, most studies (80%) focused on output indicators. Additionally, fifteen studies introduced and extracted indicators across several of the input, process, output, outcome, and impact types; among those, only one study [96] examined input, process, output, and impact indicators simultaneously.

Table 4 Indicators by types

Additionally, in approximately 40% (36 out of 91) of the studies, the indicators’ definitions, formulas, or descriptions were provided, while less than 10% of the studies defined measurement units or standard/benchmark values for all studied indicators [15, 43, 45, 51, 52, 57, 67].

Overall, nine of the included studies on hospital performance evaluation were themselves conducted using review methodologies (five systematic reviews [16, 29, 30, 56, 113], two literature reviews [79, 80], one narrative review [98], and one brief review [92]). Most of these focused on extracting performance indicators for one or more hospital departments or functions, e.g., the emergency department [16, 56], hospital laboratory and radiology information systems [106], supply chain performance [29], resources, financial results and activity [113], hospital water consumption [30], and the pharmaceutical sector [98]. Other reviews applied a three-step process to review, evaluate, and rank hospital indicators in a systematic approach [16], or evaluated performance indicator models to create an interactive network and visualize the causal relationships between performance indicators [79]; moreover, some focused on the importance of indicators in ensuring adequate coverage of the relevant areas of health care services to be evaluated [92].

Only one scoping review aimed to identify current assessments of hospital performance and compared quality measures from each method in the context of the Institute of Medicine’s (IOM) six STEEEP quality domains (safety, timeliness, effectiveness, efficiency, equity, and patient-centeredness), in accordance with Donabedian’s framework, and formulated policy recommendations [115].

In addition, 21 studies divided performance indicators into 2 to 6 performance dimensions, and the reviewed studies included 2–40 indicators spanning from zero [29, 30, 98] up to 6 domains [34]. Moreover, none of the studies attempted to comprehensively summarize and categorize the performance indicators into several categories covering all the indicators that reflect the performance of the entire hospital organization as well as those of administrative units or clinical departments.

Discussion

In this scoping review, a unique set of hospital performance evaluation indicators related to the various performance dimensions was compiled and categorized from 91 studies published over the past ten years.

Similarly, one previous study extracted 19 performance dimensions, 32 sub-dimensions, and 138 indicators, but from only six studies. Those dimensions were described by all the studies included in that review, yet only three of them specified the relevant indicators, and the list provided was not comprehensive. Also, unlike the current review, indicators were not classified by hospital level: managerial, clinical, or organizational [116]. Another study similarly investigated hospital performance evaluation indicators: across 42 studies, 111 indicators were presented in four categories (input, output, outcome, and impact), but there was no classification of indicators based on performance dimensions or hospital levels [117].

In this study, the importance of the categorized indicators was, for the first time to our knowledge, determined based on their frequency of use in the published literature (Appendix 2). The ‘organizational management’ indicators were the most common compared with the other two categories (‘clinical’ and ‘administrative’). This may be because indicators such as ‘bed occupancy rate’, ‘average length of stay’, ‘mortality rate’, ‘hospital infection rate’, and ‘patient safety’ are easier to record in hospital software than other indicators and better reflect the overall performance of the hospital; researchers are therefore more interested in using them.

Considering the 14 subcategories, indicators related to three subcategories, i.e., bed utilization, patient safety, and financial management, were the most frequently used for hospital performance evaluation. This reflects hospital managers’ need to increase hospital profitability on the one hand and to control costs on the other. As a result, researchers have paid special attention to ‘cost’, ‘income’, ‘profitability’, ‘economy’, and similar indicators when evaluating hospital performance.

When considering indicators by type, more studies focused on output indicators, whereas input indicators were the least commonly used. This may be because, at the hospital level, it is difficult for managers to change inputs such as ‘beds’, ‘human resources’, and ‘equipment and facilities’. In addition, given the complexity of interdepartmental relationships in hospitals, process indicators seemed to offer more variety for analysis than input indicators, so they were used more often. As mentioned above, output indicators were the most used for hospital performance evaluation because of their ease of calculation and interpretation.

The main purpose of this paper was to identify a comprehensive set of indicators that can be used to evaluate hospital performance in various hospital settings once distilled into a smaller, more relevant set for each hospital or department. Future studies could be designed to validate each such set of indicators in a specific context. In addition, they could investigate the relationships between the indicators, their outcomes of interest, and the performance dimension each addresses. This would enable hospital managers to build their own sets of indicators for performance evaluation at both the organizational and departmental levels.

Although some previous studies have provided definitions for each indicator and determined standard criteria for them, this was not done here because the focus of this study was to compile all the indicators used in hospital performance evaluation, which resulted in the identification of more than a thousand indicators without limitation to a specific country or context. Therefore, when preparing a smaller set of indicators, the specific conditions of each country, such as the type of health system and its policies, the type of financing system, and the structure of services, should be taken into account to select appropriate indicators.

In addition, although it would be valuable to examine the scope of each article to compare the lists of indicators, and to relate hospital size and type to the number and type of selected indicators, this was considered beyond the scope of this review because the high number of indicators made such investigations impractical. Future studies could undertake them while working with a smaller set of indicators.

Conclusion

This review aimed to categorize and present a comprehensive set of indicators for evaluating overall hospital performance in a systematic way. A total of 1161 hospital performance indicators were drawn from 91 studies published over the past ten years. They were then summarized into 110 main indicators and categorized into three categories, 14 subcategories, and 21 performance dimensions. This scoping review also highlighted the most frequently used indicators in performance evaluation studies, which may reflect their importance for that purpose. The results of this review can help hospital managers build their own sets of indicators for performance evaluation at both the organizational and departmental levels, with regard to the various performance dimensions.

As the results of this review were not limited to any specific country or context, the specific conditions of each country, such as the type of health system and its policies, the type of financing system, and the structure of services, should be taken into account when selecting an appropriate smaller set of indicators for hospital performance evaluation in a specific context.