
Performance Management Strategy: Waiting Time in the English National Health Services

  • Shimaa Elkomy
  • Graham Cookson

Open Access Article

Abstract

The difficulty of measuring public service outcomes leads governments to adopt quality performance measures to oversee the standards and characteristics of the services provided. This study empirically tests the theory of performance management and the extent to which the waiting time policy results in higher-quality services and better health outcomes across 161 trusts in England from 2010/2011 to 2013/2014. The results show that a higher share of waiting admissions has a significant adverse effect on quality standards in terms of the mortality rate. The findings also show that shorter waiting times are statistically associated with higher patient-reported health gains. However, the paper finds evidence of the output distortion effect of the performance management strategy: hospitals with a lower mean waiting time have a significantly higher readmission rate within 28 days of discharge.

Keywords

Performance management · Waiting time · Quality outcome · Panel analysis

Introduction

In the public sector, targets are a common method of focusing attention on areas of performance that are of interest to the public and politicians. As Drucker (1974) states, “what gets measured gets managed.” A major concern is whether focusing attention, and therefore effort, on meeting targets comes at the cost of poorer performance in areas which are not measured (Bevan and Hood 2006a; Christensen et al. 2006). This is even more problematic where those areas of quality are difficult to observe or are multifaceted. The paper examines whether any single measure can reflect the diversified outputs, and sometimes inconsistent stakeholder objectives, of healthcare.

Within the National Health Service (NHS) in England, waiting times have been high-profile targets since 2001, in both accident and emergency and hospital elective care admission settings (Rowan et al. 2004). In response to chronic problems within the NHS, including excessive waiting times, the new government adopted a performance management strategy known as star rating, whereby hospitals were rated between 0 and 3 stars for the achievement of a few key targets. Seven of the ten key targets of the performance rating system focused on waiting time. Hospitals that breached waiting time targets risked senior management change, or being merged with or taken over by a more successful hospital (Department of Health 2005). Consequently, this system was dubbed “targets and terror”, as it subjected top leadership to possible public humiliation and reputational damage whilst granting highly scoring hospitals greater autonomy and access to finance (Friedman and Kelman 2007). The maximum waiting time for hospital elective care was gradually reduced from 18 months in 2000 to 15 months in 2002, 12 months in 2003, 9 months in 2004 and 6 months in 2005, reaching a target of 18 weeks in 2008, while patients with cancer or cardiovascular disease should wait no more than two weeks to receive treatment (Propper et al. 2010).

Performance management strategies represent schemes that incentivise employees in the public sector and seek to raise their productivity (Friedman and Kelman 2007; Dewatripont et al. 1999). While Davis et al. (2000) argue that improvements in the quality of the healthcare sector were mainly driven by intrinsic incentives related to staff motivation rather than by any policy rewards, Dewatripont et al. (1999) argue that policy targets address the multiplicity and fuzziness of goals in the public sector and establish accountability and monitoring channels between the agency and its principals.

When public organisations have multiple policy objectives, performance management strategies are widely used (Osborne 2006). Linking performance to rewards or punishment is deemed a method of raising the productivity of the public sector in line with the private sector (Osborne and Gaebler 1992). This is despite the well-known problem that such schemes create incentives to focus on the quality dimensions that are monitored at the expense of less-monitored aspects or non-incentivised tasks. The latter idea is well established in the theory as the “output distortion effect” (Christopher and Hood 2006). There is an academic and policy consensus on the success of the waiting time target in reducing waiting times in the NHS. Yet little attention has been paid to assessing the output distortion effects of this high-profile policy. While Propper et al. (2008) empirically support the dramatic reduction in waiting times since the policy's adoption, Goddard et al. (2000) contend that it is difficult to judge the success of the waiting time policy, as other aspects of quality could have been negatively affected. This paper examines the extent to which the adopted schemes for measuring performance in English healthcare have the output distortion effects that are theoretically argued by a plethora of literature (for example Bevan and Hood 2006a; Hood 2002).

This paper empirically examines the theory of the output distortion effect of performance management strategies in the public sector. Using data on all acute NHS hospital trusts from 2010/2011 to 2013/2014, this paper studies the behaviour of target-driven hospitals on a set of healthcare outcomes: the observed in-hospital mortality rate, the readmission rate within 28 days of discharge, and patient-reported health gains. The in-hospital mortality rate is one of the most widely used measures of quality in healthcare (Propper et al. 2010; Cooper et al. 2009; Siciliani et al. 2009), while the readmission rate reflects the success of the hospital in affecting patients' health status after discharge. This paper also uses health gains as reported by patients as one of the main outcome indicators for healthcare quality as assessed by its users. The study focuses on three indicators of the waiting targets: the average waiting time per hospital, the share of waiting admissions in total admissions, and patient perceptions of waiting time before admission based on the Overall Patient Experience Score, which allows patients to score how they evaluate the length of time. Although the average waiting time conceals many differences across specialities in the same hospital, senior managers, whose hospitals' performance against targets is monitored monthly, are rewarded or penalised based on the overall performance of their hospital. The share of waiting admissions, meanwhile, reveals the rigour of the hospital's effort to keep queues short, reflecting its commitment to the waiting time targets. This is one of the few papers that empirically assesses the output distortion effect of a performance management strategy using a large data set of all acute hospital trusts in the NHS, and it captures the dynamics of the analysis by working on a panel of more than one year.

The results show that the adoption of performance management in the form of waiting time policies has a significant positive effect on some aspects of quality standards, in this case the in-hospital mortality rate and health gains. This implies that the performance management strategy has had a degree of success in improving quality standards and quantifying policy objectives, as discussed by Dewatripont et al. (1999) and Bevan and Hood (2006b). Yet, in line with the theory of the output distortion effect, this study finds that focusing on the waiting time policy to hit policy targets leads to deterioration in less-observed dimensions of quality, namely the readmission rate of patients after receiving treatment and being discharged. The empirical findings show that hospitals with a lower mean waiting time experience a significantly higher readmission rate within 28 days of discharge. Thus, this paper shows evidence that adopting performance management indicators as outcome measures per se, rather than as tools to assess the final health outcome, results in a form of gaming. This implies that hospital managers have strong incentives to achieve the measurable target at the expense of the quality standards of healthcare.

Theoretical Framework

Performance Management

Policymakers use performance targets to evaluate the quality of output because hospital activity depends on various sets of objectives and multiple outputs. In the public sector, officials and bureaucrats are paid regardless of their performance and their failures. Structural reforms of public service provision aim at creating incentives by introducing a set of rewards and penalties for performance. The lack of measurable outcomes and reliable output standards makes it quite difficult to base public sector officials' pay on their performance (Prendergast 2003; Di Mascio and Natalini 2013).

Classical organisational theory, as articulated by Max Weber and Frederick Taylor, holds that public organisations behave rationally whenever goals are specific and well formalised. Organisation theory argues that the public organisation is characterised by a multiplicity of purposes, functions and objectives, which creates a complex system that struggles to optimise the social value added. Yet the theory recognises the relevance of a performance management strategy as a goal-setting framework in which public managers have clear objectives with less ambiguity in policy targets. This system of organisational behaviour creates a clear framework for rewards, a sense of accomplishment and a clear process for identifying problems and designing solutions through the management-by-objectives approach (Osborne 2006). However, this approach emphasises the importance of identifying measures of performance that reflect the real outcomes of the public organisation (Lemieux-Charles et al. 2003).

A plethora of public management studies, for example Heinrich (2002) and Latham et al. (2008), discuss the problems of performance management design and argue that such schemes have little effectiveness as policy instruments for increasing public sector accountability. Osborne (2007) discusses three main methods of creating incentives in the public sector. The first is entrepreneurial management, where policymakers introduce profit-maximisation motives into the public sector. The second is introducing competition between public (in-house) teams and private service providers, known as managed competition. The third is the performance management strategy, which sets clear performance measures and specific policy objectives with quantifiable targets.

The first two alternatives for creating incentives in the public sector might not be congruent with the nature of some public services, or might be politically contentious. Performance management is a method of creating a system of incentives within which a reward/penalty scheme is applied to bring accountability and measurability to public services. This strategy requires the pre-designation of policy target(s) in a measurable form, the monitoring of performance standards and the application of some method of feedback (Bevan and Hood 2006a). Linking performance to a set of incentives might take the form of financial rewards, such as a pay rise, a bonus or a share of budget savings, or psychological motives, for example acknowledgements, reputational gains, awards or autonomy in management. One form of psychological reward for successful commitment to the targets is more relaxed monitoring and greater empowerment of the bureaucrats. These performance standards and governance by targets might be remarkably beneficial whenever there is a pressing need to improve quality in a complex system such as the public services (Beer 1985; Propper et al. 2010).

How effective is performance standard management in providing incentives for improving quality in public services? Carter et al. (1995) posit that performance indicators are not solutions in themselves; rather, they open room for investigating and understanding the social benefits and costs of the public provision of services. Some are good performance standards that reflect certain aspects of policy outcomes, while others are short-term and limited. Holmstrom and Milgrom (1991) argue that setting the measures of performance to which incentives are linked is a difficult task. In the healthcare sector, it is difficult for a single measurable policy objective to reflect the diversified output and many aspects of quality, to contribute to the achievement of the policy outcome, and to attain the highest social gains. The principal-agent problem acknowledges the degree to which performance standards can be incomplete, and the intricacy of the responses of managers in the public sector (Goddard et al. 2000). However, the inefficiencies of public organisations, the separation between ownership and management, the immeasurability of some public service outcomes, the poor definition of goals and the sometimes clashing and inconsistent objectives necessitate performance standard schemes (Burgess and Ratto 2003). Heckman et al. (1997) argue that any performance target should systematically reflect the real value added by the public service. To do otherwise may result in less value and higher social cost, as in the case of a job training scheme where the use of employability and earnings levels as standards for measuring the performance of the programme led to “cream skimming”, significant gaming costs and deteriorated efficiency (Courty and Marschke 1997). Propper et al. (2010) suggest that these performance standards and governance by targets might be beneficial whenever there is a pressing need to improve quality in a complex system such as the healthcare sector.

Bevan and Hood (2006a) discuss the ratchet effect, the threshold effect and output distortion as three types of target gaming. Ratchet effects refer to the incentive of public managers to report low performance levels to avoid the penalties of not attaining high policy expectations (Bird et al. 2005; Goddard et al. 2000). Threshold effects arise when a performance threshold is set that a public unit is not motivated to exceed, even if it could do better. Output distortion is a type of gaming in which managers seek to achieve policy targets at the expense of the many aspects of outcome and quality that are not easily measured, as was remarkably evident in the Soviet regime.

Therefore, the outcome of a performance management strategy can be framed as one of the following alternatives, as discussed by Bevan and Hood (2006a). First, the policy indicators succeed in reflecting all aspects of public service quality. Second, the performance management strategy succeeds in hitting policy targets but at the cost of deteriorated quality in other relevant but unmeasured aspects of the policy outcome (the output distortion effect). Third, the performance indicators are wholly imperfect measures of the policy outcome, the case of “hitting the target and missing the point”. Fourth, public managers fail to meet the performance standards. Given their tentative nature, performance standards should therefore serve as monitoring tools rather than as outputs in themselves (Bird et al. 2005).

Bevan and Hood (2006a) claim that the NHS in the UK has made little effort to reveal gaming by hospitals in reporting waiting times, and that incidences of gaming surface through inquiries rather than systematic monitoring. The National Audit Office (2001) investigated the incidence of gaming in outpatient appointments and inpatient admissions and revealed that nine NHS trusts had manipulated their patient waiting lists, affecting the records of 6,000 patients. This inquiry was followed by an Audit Commission (2003) study of 41 trusts, which showed evidence of deliberate manipulation and fabrication of waiting time lists by three trusts. These incidences show that public organisations under pressure to meet performance indicators may have a stronger incentive to game.

On the other hand, policy targets in the NHS are deemed to have significantly improved one dimension of the quality of healthcare, with no patient waiting more than four hours in the A&E department or more than 18 months for admission. Before the adoption of the policy targets, the percentage of patients exceeding these figures was above 20% (Bevan and Hood 2006b). It is generally believed that the star rating system has significantly contributed to performance in the quality aspects targeted by the policy. Similarly, Wilson (1989) and Dewatripont et al. (1999) highlight the significant effect of designating high-profile targets on the productivity of public organisations: they develop a sense of ‘mission’ to achieve a set of critical tasks and to focus on policy priorities, even at the expense of sacrificing other objectives.

Under a high-powered incentive system, hospitals that focus on reducing waiting time, the rewarded and well-monitored policy objective, might compromise some degree of quality that is less monitored. This study evaluates whether this was the case. The paper tests the output distortion theory of performance management strategies adopted in a high-powered incentive regime. The analysis empirically tests whether hospitals with lower waiting times, a higher share of waiting admissions, or better patient-assessed waiting time scores compromised other aspects of quality that are less monitored or not subject to targets.

Literature Review

Waiting admission is a rationing mechanism that brings the supply of healthcare services into equilibrium with demand in the NHS, a public service in which patients face a zero price at the point of demand (Januleviciute et al. 2013). On the supply side, over the last decade the government has addressed supply bottlenecks to expand treatment capacity by providing extra finance for elective surgery, involving the private sector to increase capacity and introduce contestability, and improving the management of theatres and diagnostic equipment (Harrison and Appleby 2009). Policies aimed at managing the demand side introduced guidelines for the patient referral system and methods of prioritisation (Dimakou et al. 2009).

Rowan et al. (2004) could not find empirical evidence that the performance standards of the star rating scheme affected the quality of clinical output in adult critical care in NHS hospitals. Propper et al. (2008) show that the English healthcare system, which adopted waiting-time targets, achieved a significant reduction in waiting admissions compared with Scotland, which did not adopt similar policies. Siciliani et al. (2009) find that the relationship between waiting time and cost is non-linear and that the optimal waiting time that would minimise hospital cost is ten days, in a sample of 137 hospitals in the English NHS; however, in many specifications waiting time appears insignificant for hospital cost. Cooper et al. (2009) show that NHS reforms since 2001, including waiting time targets, increased competition and patient choice, improved equality, and that by 2007 the association between waiting time and levels of deprivation had become less pronounced. Propper et al. (2010) find that waiting time targets in English healthcare led to a significant reduction in the length of waiting time without decreasing the quality of care, and without any gaming or reduction of effort on less-monitored activities. Propper (1995), using valuations obtained from trade-offs in an experimental setup, estimates the monetary value of a 30-day reduction in the waiting time for elective surgeries at £35 on average for high-income groups. With a total of 13 million waiting admissions from 1990 to 1991, the study estimated the cost of 30 days of waiting at £650 million.

Siciliani and Hurst (2005) investigate means of reducing waiting times in OECD countries from the supply and demand sides. On the supply side, where the public health system operates at or near full capacity, cooperating with local or international private providers could be a short-run solution to excessive waiting times. On the demand side, management of waiting admissions could be an effective policy for reducing waiting time, including a clinical prioritisation system and financial incentives for providers to reduce waiting time. Gravelle and Siciliani (2008) discuss waiting time as a rationing mechanism for the supply of healthcare. Their study finds that the optimal waiting time is higher for patients with a smaller marginal disutility from waiting, implying that longer waiting would not significantly deteriorate their health condition. Nikolova et al. (2016), using conditional density estimation, show that waiting time policies resulted in the prioritisation of patients who had waited longer at the expense of patients who had waited less. This shows that priorities in admitting patients to elective treatment have been shaped by waiting time targets rather than by clinical prioritisation. This is supported by Januleviciute et al. (2013), who find that the adoption of waiting time policy targets in both Norway and Scotland resulted in shorter waiting times for patients who had waited a long period, at the expense of clinical priorities. In Norway, meeting the maximum waiting time limit meant that clinically high-priority patients had to wait longer, whereas in Scotland the urgent patient group was not statistically affected.

Policy targets are acknowledged for their success in enhancing the performance of the public sector whenever there is significant room for improvement. However, Appleby et al. (2003) argue that policy initiatives aimed at reducing waiting time have been effective in shortening extremely long waits but have not successfully affected the average waiting time. Their findings show that trusts which effectively reduced waiting times had a good understanding of the whole healthcare system and adopted other measures to make the reduction sustainable. In this small sample, a survey of consultants working in three departments showed that 40% of consultants observed positive health outcome gains for patients waiting shorter periods because of waiting time targets. Oliver (2005) discusses the development of waiting time policies in the English NHS and argues that further pressure to reduce waiting times, such as that proposed by the Wanless Review for 2022/23, must be balanced against the outcome objectives of the healthcare system.

This study examines the extent to which the adopted schemes for measuring performance in healthcare have a positive effect on policy outcomes and the quality of services in the healthcare sector. Is it possible to attribute the improvement (or deterioration) in health outcomes to the performance management strategy and policy incentives? The paper empirically examines the impact of focusing on waiting time targets on other aspects of quality and performance which are not measured by the target, as proposed by Bevan and Hood (2006a) and Propper et al. (2010). The paper is structured as follows. The next section describes the methods, including the econometric model, estimation techniques and data; the subsequent sections present the results and then discuss the findings and their policy implications.

Methods

The paper contributes to the literature that examines the effectiveness of waiting time as a performance standard (Nikolova et al. 2016; Propper et al. 2010; Siciliani et al. 2009). This research adopts different outcome measures to better capture the quality of output in the English healthcare system. Mortality and readmission rates are widely adopted measures of hospital quality (Nikolova et al. 2016) and are considered among the best indicators of quality failure (Bird et al. 2005). Observed mortality is calculated as the share of in-hospital deaths in finished provider spells, to normalise for the size of the trust. The readmission rate is the percentage of emergency readmissions to hospital within 28 days of discharge, for all adults above 16 years, of the total number of discharges. This study also uses health gains from hip replacement operations, as perceived by patients and reported through Patient Reported Outcome Measures (PROMs), as an outcome measure. The EQ-5D index is a general and simple, though controversial, health outcome measure based on five dimensions evaluated before and after the operation: mobility; self-care, such as washing and dressing; usual activities, such as work, study and leisure; pain and discomfort; and psychological attributes, such as anxiety and depression. Health gain is the difference between the post-operative and pre-operative scores as perceived and evaluated by patients.
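The health gain measure described above amounts to a post-minus-pre difference in EQ-5D index scores, aggregated to the trust level. A minimal sketch of that calculation (function and variable names are illustrative, not taken from the paper's code):

```python
from statistics import mean

def eq5d_health_gain(pre_op_score: float, post_op_score: float) -> float:
    # Health gain = post-operative score minus pre-operative score,
    # so a positive value indicates patient-reported improvement.
    return post_op_score - pre_op_score

def trust_average_gain(patient_records):
    # patient_records: iterable of (pre_op, post_op) pairs for one trust's
    # hip replacement patients; returns the trust-level average gain.
    return mean(eq5d_health_gain(pre, post) for pre, post in patient_records)
```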

Three variables are used to examine the effectiveness of the waiting time targets in enhancing the quality of health services. First, waiting admission is calculated as the ratio of waiting list admissions to total admissions. Second, the Overall Patient Experience Score is used, which allows patients to score how they evaluate the length of time they had to stay on the waiting list before their admission to hospital. Finally, mean waiting time is used, which indicates the average waiting time in each hospital trust. Trusts with a trend of higher waiting times and less commitment to the performance standards tend to have a higher average waiting time (Siciliani et al. 2009).
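The first and third indicators reduce to a simple ratio and an average over a trust's admission records; a sketch with illustrative names:

```python
def waiting_admission_share(waiting_list_admissions: int, total_admissions: int) -> float:
    # Share of admissions drawn from the waiting list, as a
    # percentage of total admissions.
    return 100.0 * waiting_list_admissions / total_admissions

def mean_waiting_time(waits_in_days):
    # Average wait in days across a trust's admitted patients.
    return sum(waits_in_days) / len(waits_in_days)
```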

Average waiting time measures the hospital's overall focus on meeting the target. A reduction in average waiting time could reflect genuine success in managing the waiting list, through reducing inefficient activities and eliminating unnecessary practices, or it could reflect gaming, the reclassification of patients between the monitored waiting list and the unmonitored planned admissions, or reprioritisation practices. It is therefore only a crude indicator of how much effort has been dedicated to meeting the targets. The indicator might also carry a degree of bias, since evidence (e.g. Dimakou et al. 2009; Januleviciute et al. 2013) shows that the management of waiting time differs across specialities. Nevertheless, average waiting time, and the extent of breaching the target at the hospital level, is still a relevant indicator of hospital managers' success. Over the last decade, hospital managers and senior clinicians have come to regard meeting waiting time targets as one of the main policy indicators against which their performance is assessed (Harrison and Appleby 2009).

Following Propper et al. (2004), the paper accounts for hospital performance by including the median length of stay, urgent cancelled operations and patients not treated within 28 days of cancellation. These variables are relevant indicators of the performance and quality of the care process that should be controlled for when assessing the impact of waiting time targets on health outcomes. Other hospital characteristics include the percentage of emergency admissions, the number of critical care patients transferred for non-medical reasons, the percentage of patients aged 75 and above and the number of operating theatres. The percentage of admitted patients assessed for Venous Thromboembolism (VTE) risk is also incorporated in the analysis. VTE is a blood clot that forms in a vein and is claimed to cause 25,000 deaths per year. Assessing this risk reduces the financial costs, length of stay and health burden of mistreatment, and improves health outcomes by raising the quality of services and reducing the in-hospital mortality rate. Occupancy rates of critical care beds, day case beds and overnight beds are also controlled for, to account for hospital capacity and excess demand. A dummy variable controls for foundation trusts, which have more autonomy and a privileged status because they meet certain quality and management standards.

The data form a panel of 161 acute hospital trusts in the NHS from 2010/11 to 2013/14. The readmission rate is only available until 2011/2012, which leaves the analysis with the same number of hospital trusts but only two years of data for this outcome indicator (218 observations, compared with 389 and 398 observations for the mortality rate and health gains respectively). Panel data techniques are adopted to control for the average unobserved heterogeneity between trusts that is time invariant. For the observed in-hospital mortality rate, the Hausman (1978) specification test rejects the null hypothesis of zero correlation between the error term and the regressors, which suggests that the fixed effects method is preferable. However, for the readmission rate and health gains estimations, the null hypothesis that the random effects specification is correct could not be rejected, so the random effects estimator is consistent and efficient. No independent variables are significantly correlated with one another, and all regressions reported below use standard errors clustered by trust; this controls for heteroskedasticity and correlation of the error term within trusts (White 1980).
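The fixed effects estimator can be illustrated by the within (entity-demeaning) transformation, which strips out each trust's time-invariant component before ordinary least squares. A minimal numpy sketch under the paper's setup (names are illustrative; the published estimates also use trust-clustered standard errors, omitted here for brevity):

```python
import numpy as np

def fixed_effects_within(y, X, entity):
    """Within-transformation fixed effects estimator.

    y: (n,) outcome (e.g. in-hospital mortality rate)
    X: (n, k) regressors (e.g. waiting time indicators and controls)
    entity: (n,) trust identifiers
    Demeaning by trust absorbs time-invariant unobserved heterogeneity.
    """
    y = np.asarray(y, dtype=float).copy()
    X = np.asarray(X, dtype=float).copy()
    entity = np.asarray(entity)
    for g in np.unique(entity):
        idx = entity == g
        y[idx] -= y[idx].mean()
        X[idx] -= X[idx].mean(axis=0)
    # OLS on the demeaned data yields the fixed effects slopes.
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta
```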

The waiting time indicators may be endogenous, in the sense that an exogenous factor might affect both the outcome measure and waiting time simultaneously. In this case, the coefficient estimates of the waiting time indicators would be biased and inconsistent. To check for the endogeneity of the waiting time indicators, the Two-Stage Least Squares (2SLS) method is adopted, using Instrumental Variables (IVs) that are highly correlated with waiting time and uncorrelated with the error term. Day case episode data are used, together with the other exogenous variables, as instruments. The F-statistic of the excluded instruments in the first-stage regression shows that the instruments are highly correlated with the instrumented variables.1 The Sargan test for overidentification does not reject the null hypothesis of no correlation between the instruments and the error term, suggesting that the instruments are valid (Sargan 1958).2 A Durbin-Wu-Hausman test is performed to compare the results of the instrumental variables regression with those of the OLS regression, and to test the null hypothesis of exogeneity of the instrumented variables (Durbin 1954; Hausman 1978; Wu 1973).3 The test cannot reject the null hypothesis, so the analysis does not suffer from an endogeneity problem in the different outcome specifications.
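The 2SLS check can be sketched as two sequential least-squares fits: regress the endogenous waiting time indicator on the instruments, then regress the outcome on the fitted values. A minimal numpy illustration (assumes demeaned variables so the constant is omitted; names are illustrative, not the paper's code):

```python
import numpy as np

def two_stage_least_squares(y, X_endog, Z):
    # First stage: project the endogenous regressor(s) on the instruments.
    gamma, *_ = np.linalg.lstsq(Z, X_endog, rcond=None)
    X_hat = Z @ gamma
    # Second stage: regress the outcome on the first-stage fitted values.
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)
    return beta
```

On synthetic data where the regressor is correlated with the error term, the first-stage projection purges the endogenous variation, so the second-stage coefficient is consistent where plain OLS would be biased.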

Table 1 displays descriptive statistics for the data. The mean value of observed mortality is 3.30% of finished spells, with a standard deviation of 0.73, while the mean readmission rate is 11.04% of total discharges, with a standard deviation of 1.76. The health gains index ranges from 23.5 to 57.7 for hip replacement. The Overall Patient Experience Score, which reports patients' scoring of how they evaluate the length of waiting time from the decision to admit until their admission, has an average of 80.23 points across trusts, which implies “very good” according to patients' perceptions. However, there is considerable variation between trusts, with Newham University Hospital NHS Trust having the lowest score of 63, implying the longest waiting times according to patients' evaluations. Mean waiting time has an average of 53 days, with a 40-day standard deviation across trusts. The waiting admission share is on average 36.56% of total inpatient admissions, with a 9.70% standard deviation. The source of observed mortality, readmission rate and health gains is the Health and Social Care Information Centre; the Hospital Episode Statistics are the data source for emergency admissions, day case episodes, patient age, length of stay, waiting admissions and mean waiting time; all other variables are available from NHS England.
Table 1 Descriptive statistics

| Variable | Obs. | Mean | Std. Dev. | Min | Max |
|---|---|---|---|---|---|
| Mortality rate (% of finished spells) | 537 | 3.30 | 0.73 | 1.04 | 5.58 |
| Readmission rate (% of emergency readmissions to hospital within 28 days of discharge of total discharges) | 322 | 11.04 | 1.76 | 0.00 | 17.15 |
| Health gains (average health gain, EQ-5D) | 524 | 42.54 | 4.79 | 23.50 | 57.70 |
| Urgent cancelled operations (number) | 561 | 16.53 | 38.31 | 0.00 | 412.00 |
| Number of non-medical critical transfers | 561 | 1.36 | 5.29 | 0.00 | 70.00 |
| Patients not treated within 28 days of cancellation (number) | 610 | 17.35 | 31.94 | 0.00 | 294.00 |
| VTE-assessed patients (% of total admissions) | 578 | 87.05 | 15.71 | 9.45 | 100.00 |
| Emergency admissions (% of total admissions) | 560 | 35.08 | 10.37 | 1.65 | 79.84 |
| Patients aged 75 and above (% of total number of patients) | 550 | 23.42 | 7.73 | 0.00 | 68.14 |
| Number of operating theatres | 612 | 18.72 | 10.51 | 0.00 | 55.50 |
| Median length of stay | 560 | 1.73 | 2.61 | 0.00 | 30.00 |
| Occupied adult critical care beds (% of total available critical care beds) | 541 | 81.47 | 11.18 | 25.00 | 100.00 |
| Occupied daycase beds (% of total available daycase beds) | 620 | 86.23 | 13.61 | 13.25 | 100.00 |
| Occupied overnight beds (% of total available overnight beds) | 624 | 85.25 | 7.05 | 43.06 | 98.68 |
| Patient score of length of waiting time before admission (score) | 612 | 80.23 | 15.19 | 63.00 | 97.60 |
| Mean waiting time (days) | 560 | 53.00 | 40.00 | 3.00 | 461.00 |
| Waiting admission (% of total admissions) | 560 | 36.56 | 9.70 | 6.67 | 91.81 |
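Summary statistics of the kind reported in Table 1 can be assembled directly from a trust-year panel. A minimal sketch, assuming a long-format pandas DataFrame with hypothetical column names and simulated values loosely calibrated to the table's mortality and waiting-time figures:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical trust-year panel; column names and values are
# illustrative, not the paper's actual variables.
panel = pd.DataFrame({
    "trust": np.repeat([f"T{i}" for i in range(40)], 4),
    "year": list(range(2010, 2014)) * 40,
    "mortality_rate": rng.normal(3.30, 0.73, 160).clip(0),
    "mean_wait_days": rng.gamma(2.0, 26.5, 160),  # mean ~53 days
})

# Obs., mean, std. dev., min and max -- the columns of Table 1.
stats = (panel[["mortality_rate", "mean_wait_days"]]
         .describe()
         .loc[["count", "mean", "std", "min", "max"]]
         .T.round(2))
print(stats)
```

With the real data, grouping by financial year before `describe()` would additionally reveal how the distributions shifted over 2010/11 to 2013/14.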

Results

Table 2 reports estimates for the three indicators of the waiting time performance standards. Mean waiting time might affect health outcomes and quality standards in two opposing ways. According to Siciliani et al. (2009), waiting time may adversely affect outcomes by exacerbating a patient's condition and prolonging their suffering, reducing the gains from treatment; long waits may also reduce efficiency through the cost of managing waiting admissions. On the other hand, waiting time can act as a mechanism for rationing excess demand and may enhance outcomes by prioritising admissions according to medical need and clinical urgency, or by allowing longer episodes of care that reduce the risk of discharging patients prematurely (Iversen 1993). The waiting admission share, by contrast, is a pressure indicator quantifying excess demand for health services; in the estimation it captures the relationship between the size of this structural disequilibrium and the outcome measures. So while waiting time as a performance standard could have an ambiguous effect on the quality of health services, the waiting admission share is expected to affect the outcome measures negatively.
Table 2 The effect of waiting time policy on in-hospital observed mortality rate, readmission rate and health gains

| | (1) Mortality rate (Fixed Effects) | (2) Readmission rate (Random Effects) | (3) Health gains (Random Effects) |
|---|---|---|---|
| Urgent cancelled operations | −0.0001 (0.000) | 0.0036** (0.002) | 0.0017 (0.005) |
| Number of non-medical critical transfers | 0.0029 (0.003) | −0.0181 (0.014) | 0.0229 (0.027) |
| Patients not treated within 28 days of cancellation | 0.0001 (0.000) | −0.0014 (0.001) | −0.0014 (0.006) |
| VTE-assessed patients | −0.0017** (0.001) | 0.0004 (0.002) | −0.0018 (0.019) |
| Emergency admission | −0.0073 (0.011) | 0.0787*** (0.014) | −0.0668 (0.050) |
| Patients aged 75 and above | 0.0813*** (0.017) | −0.0728*** (0.024) | 0.1053 (0.076) |
| Number of operating theatres | 0.0076 (0.014) | 0.0322*** (0.009) | −0.0092 (0.030) |
| Median length of stay | 0.1518*** (0.046) | 0.1034 (0.184) | −1.0606* (0.598) |
| Occupied adult critical care beds | 0.0035** (0.001) | −0.0095* (0.005) | 0.0312* (0.019) |
| Occupied daycase beds | −0.0047*** (0.002) | 0.0045 (0.006) | 0.0502* (0.026) |
| Occupied overnight beds | 0.0031 (0.003) | −0.0096 (0.012) | 0.0184 (0.050) |
| Patient score of length of waiting time | 0.0018 (0.002) | −0.0184** (0.009) | 0.0173 (0.044) |
| Mean waiting time | −0.0015 (0.002) | −0.0032*** (0.001) | −0.0825** (0.034) |
| Waiting admission | 0.0151* (0.008) | −0.0054 (0.013) | −0.0573 (0.063) |
| R-squared | 0.3098 | 0.3692 | 0.1874 |
| Number of obs. | 389 | 218 | 378 |
| Model significance (p value) | (0.000) | (0.000) | (0.000) |

Note: Standard errors clustered by trust in parentheses. *, **, *** indicate significance at the ten, five and one percent levels respectively
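The column headers in Table 2 name fixed-effects and random-effects panel estimators. A minimal sketch of the fixed-effects (within) estimator, on simulated trust-year data rather than the study's, shows why demeaning by trust matters when a regressor is correlated with unobserved trust effects:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trusts, n_years = 40, 4
N = n_trusts * n_years

# Simulated trust-year panel (illustrative only): the outcome depends
# on one regressor plus an unobserved trust-specific effect, and the
# regressor is itself correlated with that effect.
trust = np.repeat(np.arange(n_trusts), n_years)
alpha = rng.normal(size=n_trusts)                 # trust fixed effects
x = rng.normal(size=N) + 0.8 * alpha[trust]       # regressor, correlated with alpha
y = 0.015 * x + alpha[trust] + rng.normal(scale=0.1, size=N)

def demean_by_group(v, g, k):
    """Within transformation: subtract each group's mean."""
    means = np.bincount(g, weights=v, minlength=k) / np.bincount(g, minlength=k)
    return v - means[g]

# Fixed-effects (within) estimator: demean y and x by trust, then OLS.
y_w = demean_by_group(y, trust, n_trusts)
x_w = demean_by_group(x, trust, n_trusts)
beta_fe = (x_w @ y_w) / (x_w @ x_w)

# Pooled OLS ignores the trust effects and is biased here, because
# x is correlated with them.
xc = x - x.mean()
beta_pooled = (xc @ (y - y.mean())) / (xc @ xc)

print(f"Within (FE) estimate: {beta_fe:.3f}")
print(f"Pooled OLS estimate:  {beta_pooled:.3f}")
```

A random-effects estimator, as in columns (2) and (3), instead treats the trust effects as uncorrelated with the regressors and uses a quasi-demeaned GLS transformation; the Hausman test chooses between the two.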

The explanatory power of the estimations ranges from approximately 19% for patient-reported health gains to 37% for the readmission rate, and the regressors are jointly significant at the 1% level. The empirical findings in column (1) corroborate the adverse effect of the waiting admission share on the quality of health care services in England: it has a positive and significant effect on the in-hospital mortality rate at the 10% significance level. This implies that trusts facing excess demand and a higher share of waiting admissions deliver lower-quality health services, consistent with the prolonged suffering and deteriorating health status of waiting patients. A one-percentage-point increase in the waiting admission share of total admissions is associated with an increase in the in-hospital mortality rate of approximately 0.015 percentage points, holding all other factors constant. Mean waiting time and the patient score of waiting time do not have a significant effect on the mortality rate.
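As a back-of-envelope check on the magnitude, assuming the linear specification so that effects scale, the Table 2 coefficient can be combined with the Table 1 descriptive statistics:

```python
# Reading the Table 2 coefficient on waiting admission (column 1):
# a one-percentage-point rise in the waiting-admission share is
# associated with a ~0.0151-percentage-point rise in mortality,
# other factors held constant.
coef_waiting_admission = 0.0151   # Table 2, column (1)
mean_mortality = 3.30             # Table 1 mean, % of finished spells
sd_waiting_share = 9.70           # Table 1 std. dev. of waiting share

# Implied change for a one-standard-deviation shift in waiting share.
delta = coef_waiting_admission * sd_waiting_share
print(f"Implied mortality change: {delta:.3f} percentage points "
      f"({100 * delta / mean_mortality:.1f}% of the mean rate)")
```

So a one-standard-deviation rise in the waiting admission share implies roughly a 0.15-percentage-point rise in mortality, about 4% of the mean rate, under the linearity assumption.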

Column (2) displays the effect of the waiting time indicators on the readmission rate within 28 days of discharge, which the literature treats as a quality indicator reflecting the improvement or deterioration in patients' physical capabilities after treatment. The results support the argument that a longer mean waiting time may indicate higher quality in some aspects of the service: trusts with higher mean waiting times have lower readmission rates. This suggests that hospitals with lower mean waiting times may tend to discharge patients prematurely, before they are medically ready, which adversely affects the quality of care. It supports the argument of Iversen (1993) and Siciliani et al. (2009) that waiting time can in some cases act as an adjustment mechanism that increases the efficiency of health care delivery. In this specification, the patient score measure has the expected negative and significant effect on the readmission rate: trusts whose patients rate their waiting experience more highly exhibit significantly lower readmission rates. Column (3) shows the adverse effect of prolonged waiting on patient-assessed health gains from hip replacement: hospitals with lower waiting times deliver significantly higher-quality care, with better post-treatment physical capabilities and health status.

The findings in column (1) show that hospitals treating patients of higher severity (older patients and longer lengths of stay) exhibit significantly higher mortality rates. The results also show the effectiveness of the VTE assessment process in lowering in-hospital mortality. A higher share of occupied daycase beds is associated with significantly lower in-hospital mortality, highlighting the relevance of daycase treatment, which reduces exposure to the hospital-acquired infections responsible for medical complications and many in-hospital deaths. Column (2) shows a positive and significant association between emergency admissions and the readmission rate, implying that hospitals facing higher unexpected demand exhibit lower quality standards and higher readmission rates within 28 days of discharge.

For the readmission rate, patient age has a negative effect: trusts with a higher share of older patients have a lower readmission rate (column 2) but a higher mortality rate (column 1). Trusts with more urgent cancelled operations, and hence lower efficiency and poorer quality standards, exhibit significantly higher readmission rates. Larger hospitals, measured by the number of operating theatres, also have significantly higher readmission rates, perhaps because large health care units face more complicated management problems that affect service quality. The occupancy rate of adult critical care beds has an adverse effect on quality in terms of mortality, although higher demand for critical care is associated with a lower readmission rate. Column (3) shows that hospitals with longer lengths of stay, implying either higher patient severity or lower management efficiency, and lower occupancy of critical care and daycase beds exhibit significantly lower health gains from hip replacement operations.

The empirical findings show that the waiting time policy, viewed by policy makers as one scheme within the performance management strategy, has a positive and significant impact on some dimensions of healthcare quality. On average, patients with longer waiting times report lower health gains across the five assessed aspects: mobility, self-care, usual activities, pain and discomfort, and psychological condition after hospital treatment. Hospitals with a higher share of waiting admissions also have significantly higher in-hospital mortality at the 10% significance level.

On the other hand, the results show evidence of the output distortion effect: hospitals with lower mean waiting times experience significantly higher readmission rates within 28 days of discharge. This implies that the waiting time policy resulted in a degree of gaming. Senior managers, highly incentivised to hit a policy target that is a measurable and quantifiable output indicator, have a strong motive to compromise other dimensions of healthcare quality. The findings suggest that lower mean waiting times led to patients being discharged prematurely to free capacity for new admissions, to the detriment of those patients, who were then readmitted within 28 days of discharge.

The paper suggests that a performance management strategy that focuses strictly on a single output measure, regardless of its relevance to service outcome quality, diverts attention from other aspects of quality that are less observable or harder to measure and monitor. In the NHS case, the star-rating performance management strategy mainly emphasised waiting time as the key policy target (Rowan et al. 2004). The immeasurability of some public service outcomes created a pressing need for New Public Management schemes, with performance management as a leading pillar for dealing with the poor definition, and sometimes fuzziness, of objectives in public organisations. Yet principal-agent theory identifies a central problem: performance management creates strong incentives for public managers to game the policy objectives at the expense of other dimensions of quality (Bevan and Hood 2006a). This is known as the output distortion effect, whereby a performance management strategy succeeds in hitting policy targets but at the cost of deteriorated quality in relevant but unmeasured aspects of the policy outcome, as was the case under the Soviet regime. The high-powered incentive system created by performance management led English hospitals to focus on reducing waiting time, the rewarded and well-monitored policy objective, while compromising dimensions of quality that are less closely monitored.

Discussion and Policy Implications

This paper is one of the few studies that empirically examines the effect of introducing a performance management strategy, in particular the waiting time policy, on healthcare outcomes while controlling for various hospital-specific effects and control variables. Colin-Thomé (2009) argues that an over-focus on policy targets and activity/process indicators has taken its toll on quality standards and outcome measures in healthcare, yet little scholarship has examined how severe this effect is. What policy lessons can be drawn from the results? First, the paper found that waiting time as a performance indicator can reflect some aspects of healthcare quality: longer waiting times are associated with lower health gains as assessed by patients themselves. Second, and in contrast, the study shows evidence of the output distortion effect of performance management: hospitals focused on shortening their waiting times are those with significantly higher 28-day readmission rates. This implies that hospitals keen on shorter waiting times are significantly more likely to discharge patients prematurely, resulting in deteriorating health conditions and productivity loss, and hence poorer healthcare outcomes.

Regarding policy lessons, these findings should not be understood as an argument against the effectiveness of performance management in setting policy objectives and designing benchmarks against which public administration can be held accountable. The adoption of performance management, one of the main tenets of new public management in the public sector, can yield useful information for public managers, indicating organisational performance, flagging problems, and signalling when organisational change is required.

Yet the results show that focusing on an activity-level measure, waiting time in this case, can adversely affect healthcare outcomes, which are the ultimate goal of healthcare units. Waiting time is a quantifiable measure that is closely monitored by policy makers, so senior managers have a strong incentive to prioritise patients' access to healthcare not on the basis of clinical urgency or medical condition but out of fear of breaching waiting time targets that are closely monitored and rewarded or penalised. A clear example is holding emergency patients on trolleys in waiting areas, or keeping them in ambulances outside emergency departments, to avoid 'starting the clock' on the four-hour waiting time target in Accident and Emergency (A&E) departments (Campbell 2008). An NHS report shows that 66% of admitted A&E patients are transferred to inpatient departments in the last ten minutes of the four-hour waiting time target, for fear of breaching the benchmark (NHS Digital 2009).

This study suggests that the waiting time policy should be placed within an outcome-based performance management strategy that measures the change in patients' physical capabilities after treatment. Healthcare policy targets that fail to account for hospital characteristics, geographical location and patient characteristics will fall short of enhancing the quality of healthcare systems.

The success of performance management in the English healthcare sector is evident in keeping waiting times below the 12-month target in 2003, whereas over the same period the health services of Scotland, Wales and Northern Ireland, which had not adopted such a target policy, saw 10, 16 and 22% of patients respectively waiting more than a year for an elective procedure (Bevan and Hood 2006a). Yet Harrison and New (2000) describe this improvement as ad hoc rather than structural, with any past record of improvement proving short-lived. Accordingly, these policies reduced the very long waits, but average waiting times improved little. In this light, the paper suggests continuing to develop performance management strategies designed to meet escalating demand for healthcare swiftly and to respond effectively to advancing technology, demographic change, developments in medicine and clinical opinion, and rising patient expectations.

Conclusions

In light of the increasing attention to the effectiveness of new public management, and particularly performance management, this paper assessed the effect of the waiting time policy on healthcare outcomes using data on all acute hospital trusts in England from 2010/11 to 2013/14. Several quality standards were used to reflect the multiplicity of objectives in healthcare: the in-hospital mortality rate, the readmission rate and patient-assessed health gains. The empirical investigation showed that trusts with a higher share of waiting admissions exhibit significantly higher in-hospital mortality, while higher patient scores for shorter perceived waits are associated with lower readmission rates. In the same vein, the findings corroborate that hospitals with lower average waiting times exhibit higher quality outcomes in terms of patient-reported health gains. This is in line with the theory and previous empirical work contending that quality standards can address the fuzziness of public sector goals and establish accountability and monitoring channels between an agency and its principals (Burgess and Ratto 2003; Dewatripont et al. 1999; Osborne 2006; Propper et al. 2010). Yet the theory also demonstrates that performance-related incentives in the public sector can be highly problematic and incomplete, owing to the intricacy of public managers' responses to those incentives (Bird et al. 2005; Courty and Marschke 1997; Goddard et al. 2000). This was evident in the finding that hospitals with shorter mean waiting times exhibit significantly higher readmission rates, implying that the waiting time targets may have resulted in patients being discharged prematurely.

This article concludes that waiting time is an imperfect performance standard for the quality of health care, and that further pressure to reduce waiting times might adversely affect the quality of health care services in England. The analysis shows that the waiting time target policy is effective in managing some aspects of quality, such as patient-reported health gains. However, the findings also reveal the policy's output distortion effect: such targets can be highly imperfect measures of the policy outcome, a case of "hitting the target and missing the point" (Bevan and Hood 2006a).

Footnotes

  1. The p value of the first-stage F-statistic on the excluded instruments is significant at the 1% level in the five quality outcome specifications, and the F-statistic exceeds 10.

  2. The null hypothesis of the Sargan test for overidentification is zero correlation between the error term and the IVs. The results cannot reject this null hypothesis in the different specifications.

  3. The null hypothesis of the Durbin-Wu-Hausman test is that the difference between the OLS and IV estimators is not significant. If it is rejected, the OLS estimators are biased and inconsistent owing to a significant endogeneity problem. In the different specifications the null hypothesis cannot be rejected, so the difference between the coefficient estimates is not systematic.
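The Sargan overidentification test referred to above can be sketched as the familiar n·R² statistic: regress the 2SLS residuals on the full instrument set and compare against a chi-square with degrees of freedom equal to the number of overidentifying restrictions. The example below uses simulated data with two valid instruments for one endogenous regressor (illustrative names, not the study's variables):

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(3)
n = 1000

# Overidentified toy model: two valid instruments z1, z2 for one
# endogenous regressor w.
u = rng.normal(size=n)
z1, z2 = rng.normal(size=n), rng.normal(size=n)
w = z1 + 0.5 * z2 + u + rng.normal(size=n)
y = -0.5 * w + u + rng.normal(size=n)

const = np.ones(n)
Z = np.column_stack([const, z1, z2])

def ols(X, v):
    return np.linalg.lstsq(X, v, rcond=None)[0]

# 2SLS: fit stage 1, then stage 2, then form residuals at the
# ACTUAL regressor values (as the Sargan test requires).
w_hat = Z @ ols(Z, w)
beta = ols(np.column_stack([const, w_hat]), y)
e = y - np.column_stack([const, w]) @ beta

# Sargan statistic: n * R^2 from regressing the 2SLS residuals on
# the full instrument set; chi-square with (#instruments -
# #endogenous regressors) = 1 degree of freedom here.
e_fit = Z @ ols(Z, e)
r2 = 1 - np.sum((e - e_fit) ** 2) / np.sum((e - e.mean()) ** 2)
sargan = n * r2

# Chi-square(1) upper-tail p-value via the standard normal CDF.
p = 2 * (1 - 0.5 * (1 + erf(sqrt(sargan) / sqrt(2))))
print(f"Sargan statistic: {sargan:.3f}, p-value: {p:.3f}")
```

With valid instruments, as simulated here, the statistic should typically be small and the test should fail to reject, mirroring the paper's footnote 2; an exogeneity violation in either instrument would inflate it.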

References

  1. Appleby, J., Harrison, A., & Devlin, N. (2003). What is the real cost of more patient choice? London: King's Fund.
  2. Audit Commission. (2003). Waiting list accuracy. London: The Stationery Office (http://www.audit-commission.gov.uk/health/index.asp?catId=english^HEALTH).
  3. Beer, S. (1985). Diagnosing the system for organizations. Chichester: Wiley.
  4. Bevan, G., & Hood, C. (2006a). What's measured is what matters: Targets and gaming in the English public health care system. Public Administration, 84(3), 517–538.
  5. Bevan, G., & Hood, C. (2006b). Have targets improved performance in the English NHS? British Medical Journal, 332, 419–422.
  6. Bird, S., Cox, D., Farewell, V., et al. (2005). Performance indicators: Good, bad, and ugly. Journal of the Royal Statistical Society, Series A, 168(1), 1–27.
  7. Burgess, S., & Ratto, M. (2003). The role of incentives in the public sector: Issues and evidence. Oxford Review of Economic Policy, 19(2), 285–300.
  8. Campbell, D. (2008). Scandal of patients left for hours outside A&E. The Observer, 17(2008), 1.
  9. Carter, N., Day, P., & Klein, R. (1995). How organisations measure success: The use of performance indicators in government. London and New York: Routledge.
  10. Christensen, T., Laegreid, P., & Stigen, I. (2006). Performance management and public sector reform: The Norwegian hospital reform. International Public Management Journal, 9(2), 113–139.
  11. Hood, C. (2006). Gaming in targetworld: The targets approach to managing British public services. Public Administration Review, 66(4), 515–521.
  12. Colin-Thomé, D. (2009). A review of lessons learnt for commissioners and performance managers following the Healthcare Commission investigation. London: HMSO.
  13. Cooper, Z., McGuire, A., Jones, S., & Le Grand, J. (2009). Equity, waiting times, and NHS reforms: Retrospective study. British Medical Journal, 339, b3264.
  14. Courty, P., & Marschke, G. (1997). Measuring government performance: Lessons from a federal job-training program. American Economic Review, 87(2), 383–388.
  15. Davis, J., Schoorman, F., Mayer, R., & Tan, H. (2000). The trusted general manager and business unit performance: Empirical evidence of a competitive advantage. Strategic Management Journal, 21(5), 563–576.
  16. Department of Health. (2005). Healthcare output and productivity: Accounting for quality change. London: Department of Health.
  17. Dewatripont, M., Jewitt, I., & Tirole, J. (1999). The economics of career concerns, part I: Comparing information structures. The Review of Economic Studies, 66(1), 183–198.
  18. Di Mascio, F., & Natalini, A. (2013). Context and mechanisms in administrative reform processes: Performance management within Italian local government. International Public Management Journal, 16(1), 141–166.
  19. Dimakou, S., Parkin, D., Devlin, N., & Appleby, J. (2009). Identifying the impact of government targets on waiting times in the NHS. Health Care Management Science, 12(1), 1–10.
  20. Drucker, P. (1974). Management: Tasks, responsibilities, practices. New York: Harper and Row.
  21. Durbin, J. (1954). Errors in variables. Review of the International Statistical Institute, 22(1/3), 23–32.
  22. Friedman, J., & Kelman, S. (2007). Effort as investment: Analyzing the response to incentives. Working Paper Series rwp07-024. Harvard University, John F. Kennedy School of Government.
  23. Goddard, M., Mannion, R., & Smith, P. (2000). Enhancing performance in health care: A theoretical perspective on agency and the role of information. Health Economics, 9(2), 95–107.
  24. Gravelle, H., & Siciliani, L. (2008). Is waiting-time prioritisation welfare improving? Health Economics, 17(2), 167–184.
  25. Harrison, A., & Appleby, J. (2009). Reducing waiting times for hospital treatment: Lessons from the English NHS. Journal of Health Services Research & Policy, 14(3), 168–173.
  26. Harrison, A., & New, B. (2000). Access to elective care: What should really be done about waiting lists. London: King's Fund.
  27. Hausman, J. (1978). Specification tests in econometrics. Econometrica, 46(6), 1251–1271.
  28. Heckman, J., Heinrich, C., & Smith, J. (1997). Assessing the performance of performance standards in public bureaucracies. The American Economic Review, 87(2), 389–395.
  29. Heinrich, C. (2002). Outcomes-based performance management in the public sector: Implications for government accountability and effectiveness. Public Administration Review, 62(6), 712–725.
  30. Holmstrom, B., & Milgrom, P. (1991). Multitask principal-agent analyses: Incentive contracts, asset ownership, and job design. Journal of Law, Economics and Organization, 7(special issue), 24–52.
  31. Hood, C. (2002). Control, bargains and cheating: The politics of public-service reform. Journal of Public Administration Research and Theory, 12(3), 309–332.
  32. Iversen, T. (1993). A theory of hospital waiting lists. Journal of Health Economics, 12(1), 55–71.
  33. Januleviciute, J., Askildsen, J., Kaarboe, O., Holmås, T., & Sutton, M. (2013). The impact of different prioritisation policies on waiting times: Case studies of Norway and Scotland. Social Science & Medicine, 97, 1–6.
  34. Latham, G., Borgogni, L., & Petitta, L. (2008). Goal setting and performance management in the public sector. International Public Management Journal, 11(4), 385–403.
  35. Lemieux-Charles, L., McGuire, W., Champagne, F., Barnsley, J., Cole, D., & Sicotte, C. (2003). The use of multilevel performance indicators in managing performance in health care organizations. Management Decision, 41(8), 760–770.
  36. National Audit Office. (2001). Inpatient and outpatient waiting in the NHS. Report by the Comptroller and Auditor General HC211. London.
  37. NHS Digital. (2009). A&E attendances and emergency admissions. London: Department of Health.
  38. Nikolova, S., Harrison, M., & Sutton, M. (2016). The impact of waiting time on health gains from surgery: Evidence from a national patient-reported outcome dataset. Health Economics, 25(8), 955–968.
  39. Oliver, A. (2005). The English National Health Service: 1979–2005. Health Economics, 14(S1), S75–S99.
  40. Osborne, S. (2006). The new public governance? Public Management Review, 8(3), 377–387.
  41. Osborne, D. (2007). Reinventing government: What a difference a strategy makes. In 7th Global Forum on Reinventing Government: Building Trust in Government.
  42. Osborne, D., & Gaebler, T. (1992). Reinventing government. Reading, MA: Addison-Wesley.
  43. Prendergast, C. (2003). The limits of bureaucratic efficiency. Journal of Political Economy, 111(5), 929–958.
  44. Propper, C. (1995). The disutility of time spent on the United Kingdom's National Health Service waiting lists. Journal of Human Resources, 30(4), 677–700.
  45. Propper, C., Burgess, S., & Green, K. (2004). Does competition between hospitals improve the quality of care? Hospital death rates and the NHS internal market. Journal of Public Economics, 88(7), 1247–1272.
  46. Propper, C., Sutton, M., Whitnall, C., & Windmeijer, F. (2008). Did 'targets and terror' reduce waiting times in England for hospital care? The B.E. Journal of Economic Analysis & Policy, 8(2), article 5.
  47. Propper, C., Sutton, M., Whitnall, C., & Windmeijer, F. (2010). Incentives and targets in hospital care: Evidence from a natural experiment. Journal of Public Economics, 94(3), 318–335.
  48. Rowan, K., Harrison, D., Brady, A., & Black, N. (2004). Hospitals' star ratings and clinical outcomes: Ecological study. British Medical Journal, 328(7445), 924–925.
  49. Sargan, J. (1958). The estimation of economic relationships using instrumental variables. Econometrica, 26(3), 393–415.
  50. Siciliani, L., & Hurst, J. (2005). Tackling excessive waiting times for elective surgery: A comparative analysis of policies in 12 OECD countries. Health Policy, 72(2), 201–215.
  51. Siciliani, L., Stanciole, A., & Jacobs, R. (2009). Do waiting times reduce hospital costs? Journal of Health Economics, 28(4), 771–780.
  52. White, H. (1980). A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica, 48(4), 817–838.
  53. Wilson, J. (1989). An optimal tax treatment of Leviathan. Economics and Politics, 1(2), 97–117.
  54. Wu, D. (1973). Alternative tests of independence between stochastic regressors and disturbances. Econometrica, 41(4), 733–750.

Copyright information

© The Author(s) 2018

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. School of Economics, Surrey Business School, University of Surrey, Surrey, UK
  2. Office of Health Economics, London, UK