Volume 32, Issue 7, pp 613–615

NICE’s Cost-Effectiveness Range: Should it be Lowered?

  • J. P. Raftery

This question goes to the heart of the use of the cost per quality-adjusted life-year (QALY) in healthcare decision making, notably by UK agencies including the National Institute for Health and Care Excellence (NICE), the National Screening Committee and the bodies advising on immunisation, but with implications for other health systems that use the cost per QALY. NICE has indicated the range within which its threshold lies: £20k to £30k per QALY gained [1, 2]. The worry is that if these thresholds are too high, NICE’s recommendations could be doing more harm than good. This would happen if, say, recommending a drug on the basis of its incremental cost-effectiveness ratio (ICER) of £30k led to other National Health Service (NHS) services with lower ICERs being displaced. Do no harm (‘non nocere’) should apply to health economists as well as to doctors.
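The displacement worry can be made concrete with a back-of-envelope sketch. All figures below are illustrative (the £13k displaced-services figure anticipates the York estimate discussed later): with a fixed budget, funding a technology whose ICER exceeds the ICER of the services displaced to pay for it reduces total health.

```python
# Illustrative sketch (hypothetical figures): net population health effect when
# a fixed NHS budget funds a new drug at the expense of existing services.

def net_qaly_change(budget_spent, new_icer, displaced_icer):
    """QALYs gained from funding the new technology minus QALYs lost
    from the services displaced to pay for it (fixed overall budget)."""
    gained = budget_spent / new_icer       # QALYs bought at the new drug's ICER
    lost = budget_spent / displaced_icer   # QALYs forgone at the displaced services' ICER
    return gained - lost

# £10m spent on a drug at £30k/QALY, displacing services at £13k/QALY:
delta = net_qaly_change(10_000_000, 30_000, 13_000)
print(round(delta))  # → -436: the NHS loses roughly 436 QALYs overall
```

The sign of the result depends only on whether the new technology's ICER exceeds that of the displaced services, which is why the threshold matters.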

No theoretical basis exists for NICE’s current thresholds. The reluctance of NICE to recommend technologies with ICERs over £30k was identified through analysis of its decisions [3]. The threshold itself grew out of precedents, particularly NICE’s long struggle with the multiple sclerosis (MS) drugs, whose ICER of £70k seemed too high.

Two approaches have been taken to establishing what the threshold should be: surveying the public and estimating the current NHS ICER. Much effort has gone into the former but to relatively little effect: the results are sensitive to the methods used [4], and the questions are difficult. Even if the public favoured an ICER higher than the current one, whether they would be prepared to pay for it remains unclear. The alternative approach, strongly supported by those around NICE, takes the NHS budget as fixed and tries to estimate the NHS ICER from variations in spend and performance. This approach was first reported in relatively short papers [5, 6, 7] and later worked up into a major project led by a York team [8].

The York team’s best estimate put the NHS ICER at just under £13k, implying that NICE’s thresholds were much too high and should be reduced. This work was criticised by the Office of Health Economics (OHE), leading to a standoff. A plenary session on the topic at the large European International Society for Pharmacoeconomics and Outcomes Research (ISPOR) conference in late 2013 had to be cancelled after the York team claimed that their work was about to be misrepresented by the OHE. The OHE critique has since been published [9]. This article steps gingerly into that contested space.

To understand the issues in contention, some background is necessary. The York approach links differences in spend by disease group across 152 Primary Care Trusts (PCTs) to differences in life-years, via differences in mortality. In brief, expenditure and mortality data exist for ten disease groups, from which a cost per life-year can be estimated. This was extrapolated first to cover the other 13 disease groups and then converted into QALYs. Mortality data covering half the NHS were thus used to generate QALYs for the entire NHS. Many assumptions were required to estimate life-years from these data, and even more to get to QALYs. The debate is largely about the plausibility or otherwise of these assumptions (Table 1).
Table 1

Key differences in assumptions between the York team and the Office of Health Economics [9, 10]

York assumptions:

1. Deaths averted by a change in expenditure return individuals to the mortality risk of the general population (matched for age and gender)

2. Expenditure and outcome elasticities are uncorrelated

3. Mortality effects of changes in expenditure (reported at PCT level) can be applied to all mortality recorded in a PBC

4. The PBC QALY effects are a weighted average of the effects within each of the ICDs contributing to the PBC, based on the proportion of the total PBC population within each contributing ICD code

5. Health effects of changes in expenditure are restricted to the population at risk during 1 year

6. Health effects are restricted to the PBC in which expenditure changes; no health effects are associated with changes in GMS expenditure (or PBC 22, Social Care)

7. The proportional effect on the QALY burden of disease is the same as the estimated proportional effect on the life-year burden of disease

8. Life-year effects are lived at a quality of life that reflects a proportionate improvement to the quality of life with disease

9. The proportional effect on the QALY burden of disease in PBCs where mortality effects could not be estimated is assumed to be the same as the overall proportional effect on the life-year burden of disease across those PBCs where mortality effects could be estimated

Additional assumptions required according to the OHE:

10. Programme budgeting data are reliable

11. A PCT’s response can be estimated from other PCTs with the same expenditure and outcomes

12. The 28 % of spending not accounted for can be distributed pro rata

13. Past and future spend effects cancel out

14. Quality-of-life gains are enjoyed now (as York assume), so do not need to be discounted

15. Rising NHS productivity offsets the rise in the threshold due to increased NHS spending

16. Given the uncertainty of the estimates, the lower estimate should be chosen

GMS General Medical Services, ICD International Classification of Diseases, NHS National Health Service, OHE Office of Health Economics, PBC programme budget category, PCT Primary Care Trust, QALY quality-adjusted life-year
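The chain of inference underlying the York estimate can be caricatured in a few lines. Everything below is invented for illustration (the actual York model is an econometric analysis of spend and mortality variation across PCTs, not simple arithmetic): a cost per life-year is derived for the disease groups with mortality data, extrapolated to the groups without, and converted to a cost per QALY via a quality-of-life weight.

```python
# Toy sketch of the York chain: spend differences -> deaths averted ->
# life-years -> QALYs. All numbers are hypothetical.

measured = {
    # disease group: (extra spend per death averted (£), life-years per death averted)
    "circulatory": (250_000, 25),
    "cancer": (400_000, 20),
}

def cost_per_life_year(spend_per_death, life_years_per_death):
    return spend_per_death / life_years_per_death

clys = [cost_per_life_year(s, ly) for s, ly in measured.values()]
avg_cly = sum(clys) / len(clys)   # extrapolated to groups lacking mortality data

quality_weight = 0.8              # hypothetical quality-of-life adjustment
cost_per_qaly = avg_cly / quality_weight
print(round(cost_per_qaly))       # → 18750
```

Each step in this toy version corresponds to one of the contested assumptions in Table 1: the extrapolation to unmeasured groups (assumption 9) and the quality-of-life conversion (assumptions 7 and 8) carry the bulk of the uncertainty.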

The row over the reasonableness of the assumptions is conducted in the language of economics, with emphasis on elasticities, diminishing marginal returns, and so on.

The York report lists nine key assumptions and justifies them by the lack of alternatives. This is reasonable only if one insists on generating an ICER for the NHS; failure was not an option for the York team. The OHE queries most of the York team’s assumptions but also points to a further seven (Table 1) that need to hold for the estimate to be valid. More could readily be added, notably around the adjustment of local spending by the NHS needs index. The York work is pathbreaking in showing how the NHS ICER might be estimated, and the assumptions required indicate the research needed for a more robust model.

Rather than discuss each assumption in detail, I ask whether the NICE threshold should be reduced on the basis of this work. The answer, I think, must be ‘no’, for two reasons. First, the assumptions required are too many and too sweeping to form the basis of a major policy change. Second, the threshold may matter less than commonly thought.

In practice, NICE almost never says no on grounds of cost effectiveness. Of the 512 technologies with recommendations listed on the NICE website, 15 % (79) were not recommended [10]. Of these, only 29 were not cancer drugs (fundable through the Cancer Drugs Fund). Of the 29 non-cancer refusals, 14 were later accepted. Ten were rejected for lacking evidence, for being outmoded technologies, or for being less effective than their alternatives. Two high-cost drugs for MS were rejected but then funded by the Department of Health through a special scheme. The three remaining drugs were rejected only at particular doses or in favour of close substitutes. Factors explaining these results, besides the Cancer Drugs Fund and the Multiple Sclerosis Scheme [11], include the end-of-life criteria [12] and patient access schemes. A rise in NICE’s threshold to around £40k [13] has also taken place. This is not to say that NICE’s threshold does not matter, but it plays a less important role than commonly thought.
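The counts in this paragraph can be cross-checked directly; a minimal sketch using only the figures quoted above (not recomputed from the NICE website):

```python
# Cross-check of the decision counts cited above (all figures from the text).
total_appraised = 512
not_recommended = 79
non_cancer_refusals = 29
cancer_refusals = not_recommended - non_cancer_refusals  # fundable via the Cancer Drugs Fund

later_accepted = 14
weak_evidence_or_outmoded = 10
ms_special_scheme = 2
dose_or_substitute = 3
assert (later_accepted + weak_evidence_or_outmoded
        + ms_special_scheme + dose_or_substitute) == non_cancer_refusals

print(f"{not_recommended / total_appraised:.0%}")  # → 15%
print(cancer_refusals)                             # → 50
```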

Estimates exist for the NHS cost per QALY gained for the most common elective surgical procedures. Hip [14] and knee [15] replacements and hernia [16] and varicose vein [17] repairs cost less than £10k per QALY gained. Elective procedures such as these are often the first to be cut when the NHS is short of resources, and they starkly illustrate the potential opportunity cost to the NHS of NICE guidance. Worryingly, the costs of these procedures varied widely by hospital, in ways that were not linked to outcomes [18]; yet the York work assumes that variations in NHS spend are linked to outcomes. One way of minimising the opportunity cost would be for the NHS to protect treatments of proven cost effectiveness.

Basing the NHS opportunity cost on services displaced raises the question of whether these should be services potentially or actually displaced [19]. Maximising QALYs from a fixed budget requires the displacement of all services with a sub-optimal cost per QALY. But given NICE’s narrower remit of appraising the clinical and cost effectiveness of technologies, using the services actually displaced ensures its recommendations improve efficiency [20].

If, as projected, NHS spending falls over the next few years, the voice of the NHS may begin to be heard on the opportunity cost of NICE guidance. One study of the effect of NICE’s recommendation on Herceptin pointed to the oncology services that a local hospital had to forgo [21]. More such examples are needed. Opportunity costs are likely to be disease and/or specialty specific, as they are often apparent only to those close to clinical decision making. Those are the voices that need to join this debate.

If an NHS threshold cost per QALY gained cannot be agreed, other approaches may be needed. The most promising alternative may be capping the pharmaceutical budget, and the 2014 Pharmaceutical Price Regulation Scheme did just that for 2014–2019 [22]. Instead of NICE assessing each highly priced drug in isolation, with the inevitable reaction from those who stand to lose, the NHS may save more from an across-the-board approach.


Conflict of interest

The author declares no conflict of interest in relation to the topic of this editorial.


  1. NICE. Guide to the methods of technology appraisal. Ref: N1618; 2008.
  2. NICE. Guide to the methods of technology appraisal. Ref: N0514; 2004.
  3. Devlin N, Parkin D. Does NICE have a cost-effectiveness threshold and what other factors influence its decisions? A binary choice analysis. Health Econ. 2004;13(5):437–52.
  4. Baker R, Bateman I, Donaldson C, Jones-Lee M, Lancsar E, Loomes G, Mason H, Odejar M, Pinto Prades JL, Robinson A, Ryan M, Shackley P, Smith R, Sugden R, Wildman J; the SVQ Research Team. Weighting and valuing quality-adjusted life-years using stated preference methods: preliminary results from the Social Value of a QALY Project. Health Technol Assess. 2010;14(27).
  5. Martin S, Rice N, Smith PC. The link between health care spending and health outcomes in the new English primary care trusts. York: Centre for Health Economics; 2008.
  6. Martin S, Rice N, Smith P. Does health care spending improve health outcomes? J Health Econ. 2008;27:826–42.
  7. Martin S, Rice N, Smith P. Comparing costs and outcomes across programmes of health care. Health Econ. 2012;21:316–37.
  8. Claxton K, Martin S, Soares M, Rice N, Spackman E, Hinde S, Devlin N, Smith PC, Sculpher M. Methods for the estimation of the NICE cost-effectiveness threshold: final report. York: University of York, Centre for Health Economics; 2013.
  9. Barnsley P, Towse A, Karlberg S, Sussex J. Critique of CHE Research Paper 81: methods for the estimation of the NICE cost effectiveness threshold. OHE Occasional Paper 13/01; 2013.
  10.
  11. Raftery J. Costly failure of risk sharing scheme. BMJ. 2010;340:c1672.
  12. Latimer C. NICE’s end of life decision making scheme: impact on population health. BMJ. 2013;346:f1363. doi: 10.1136/bmj.f1363.
  13. Dakin H, Devlin N, Feng Y, Rice N, O’Neil P, Parkin D. The influence of cost effectiveness and other factors on NICE decisions. OHE Research Paper 13/06; 2013.
  14. Appleby J, Poteliakhoff E, Shah K, Devlin N. Using patient-reported outcome measures to estimate cost-effectiveness of hip replacements in English hospitals. J R Soc Med. 2013;106(8):323–31. doi: 10.1177/0141076813489678.
  15. Dakin H, Gray A, Fitzpatrick R, MacLennan G, Murray D; the KAT Trial Group. Rationing of total knee replacement: a cost-effectiveness analysis on a large trial data set.
  16. Coronini-Cronberg S, Appleby J, Thompson J. Application of patient-reported outcome measures (PROMs) data to estimate cost-effectiveness of hernia surgery in England. J R Soc Med. 2013;106:323–31.
  17. Michaels JA, et al. Randomised clinical trial, observational study and assessment of cost-effectiveness of the treatment of varicose veins (REACTIV trial). Health Technol Assess. 2006;10(13):1–96.
  18. Street A, Gutacker N, Bojke C, Devlin N, Daidone S. Variations in outcome and costs among NHS providers for common surgical procedures: econometric analyses of routinely collected data. Health Serv Deliv Res. 2014;2(1). doi: 10.3310/hsdr02010.
  19. Eckermann S, Pekarsky B. Can the real opportunity cost stand up: displaced services, the straw man outside the room. PharmacoEconomics. 2014;32(4):319–25.
  20. Paulden M, McCabe C, Karnon J. Achieving allocative efficiency in healthcare: nice in theory, not so NICE in practice? PharmacoEconomics. 2014;32(4):315–8.
  21. Barrett A, Roques T, Small M, Smith R. How much will Herceptin really cost? BMJ. 2006;333. doi: 10.1136/bmj.39008.624051.BE.
  22. Department of Health. Pharmaceutical Price Regulation Scheme 2014; 2013.

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. University of Southampton, Southampton, UK
