The paper by Assistant Professor Gagnon, published in this issue of PharmacoEconomics [1], succinctly summarizes the current status of hospital-based health technology assessment activities around the world. Such a summary is timely given the multiplicity of new health technologies clamoring for a place in our public hospitals, which are often awash in red ink. How is it possible to maintain the dual goals of optimal patient care and fiscal prudence? Apart from the ever-elusive ‘Wisdom of Solomon’¹, one practical tool is the appropriate use of health technology assessment ‘at the coalface’, as it were. Here in New Zealand, as elsewhere, that coalface is often the tertiary hospital, where caring and eager clinicians are enthusiastic protagonists of novel cutting-edge technology. Where those innovations can potentially improve outcomes whilst reducing costs, they are greeted with open arms. Sadly, a far more common scenario is one in which the innovation is a genuine improvement over current therapies, but whilst the improvements may be measurable and real (reduced morbidity and/or mortality), the costs are often eye-watering when compared with the quantum of improvement. The metric for this, in health technology terms, is the incremental cost-effectiveness ratio (ICER), and it is not uncommon for new technologies to be presented with tentative ICERs of tens of thousands of dollars for every added quality-adjusted life-year. When healthcare dollars are in short supply, as they have been since the global financial crisis of 2008, novel health technologies can seem like desirable but unaffordable luxury cars. Ernest Rutherford, that great Nobel prizewinner from New Zealand, was a master of bluntness when he accurately stated the New Zealand stance on technological matters:

‘We’ve got no money so we’ve got to think’.
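For readers who want the arithmetic behind the ICER mentioned above: it is simply the difference in cost divided by the difference in effect between the new technology and current care. The following sketch uses entirely hypothetical numbers (the costs and quality-adjusted life-year figures are illustrative, not drawn from any actual submission):

```python
def icer(cost_new: float, cost_current: float,
         qaly_new: float, qaly_current: float) -> float:
    """Incremental cost-effectiveness ratio: extra dollars per extra QALY."""
    return (cost_new - cost_current) / (qaly_new - qaly_current)

# Hypothetical example: a technology costing NZD$60,000 rather than
# NZD$20,000 per patient, yielding 0.8 extra quality-adjusted life-years.
print(icer(60_000, 20_000, 5.0, 4.2))  # 50000.0, i.e. NZD$50,000 per QALY gained
```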

For the past 9 years, we, the Auckland District Health Board (ADHB), with an annual budget of around NZD$2 billion and 10,000 staff, have operated a hospital-based health technology assessment committee (somewhere between the ‘Internal Committee’ and the ‘HTA unit’ described by Assistant Professor Gagnon) that has evaluated a wide variety of submissions concerning the implementation of new health technologies. The committee is made up of 12 clinicians, all well respected in their own spheres of activity and all capable of analyzing medical literature in a dispassionate manner. During that time we have evaluated 73 submissions from a multitude of disciplines. The modus operandi has been pathway comparison (current vs. proposed): current costs and outcomes are compared with those anticipated once the new health technology is applied. This technology might be a new drug, medical device, diagnostic test, or service. We have judiciously avoided more distantly related health technologies, such as improvements in information systems or support services such as human resources.

To assist in the comparison of dissimilar health technologies applied in different medical disciplines, we developed, from the outset, a scoring tool (Fig. 1) based on incremental costs, predicted health improvements, and the quality of evidence for those anticipated costs and improvements. A further advantage of this tool was that it could be used without cohorting submissions or ranking within those cohorts. To date, scores have ranged from 0 to 115.² We added editorial notes where we thought them appropriate, but it has been rewarding for the members of this committee [we call ourselves the Clinical Practice Committee (CPC)] to see that, for the most part, the decision makers in the ADHB have made decisions in line with the scores assigned by our analyses.

Fig. 1 The scoring tool
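The precise weights in Fig. 1 are not reproduced here, but the following sketch shows the general shape of such a tool: a weighted additive score over the three dimensions described above. All category labels and point values are hypothetical, chosen only so that the maximum (40 + 45 + 30 = 115) echoes the observed 0–115 range:

```python
# Hypothetical re-creation of a three-dimension scoring tool; the real
# categories and weights in Fig. 1 are not published in this text.

COST_POINTS = {          # incremental cost of the new technology
    "major saving": 40,
    "cost neutral": 25,
    "moderate increase": 10,
    "major increase": 0,
}
BENEFIT_POINTS = {       # predicted health improvement
    "large": 45,
    "moderate": 25,
    "small": 10,
    "none": 0,
}
EVIDENCE_POINTS = {      # quality of evidence for costs and outcomes
    "randomized trials": 30,
    "observational studies": 15,
    "case series/opinion": 5,
}

def score_submission(cost: str, benefit: str, evidence: str) -> int:
    """Sum the three dimension scores (maximum 40 + 45 + 30 = 115)."""
    return COST_POINTS[cost] + BENEFIT_POINTS[benefit] + EVIDENCE_POINTS[evidence]

# Example: a cost-neutral technology with a large predicted benefit
# supported by randomized trials scores 25 + 45 + 30 = 100.
print(score_submission("cost neutral", "large", "randomized trials"))
```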

Examples of high-scoring submissions include: sacral nerve stimulation for fecal incontinence; bevacizumab treatment of diabetic macular edema; fetoscopic surgery for twin-to-twin transfusion syndrome; and the HALO system for radiofrequency ablation of the lower esophagus.

Examples of submissions receiving mid-range scores include: renal denervation for hypertension (before the publication of the negative study in March 2014)³; photodynamic therapy for cholangiocarcinoma; long QT syndrome genetic testing; IgE testing for food allergies; and outpatient ORL (otorhinolaryngology) laser treatment of polypoid lesions.

Examples of low-scoring submissions include: pre-filled midazolam syringes (in the pre-operative area); percutaneous pulmonary valve placement; rituximab treatment for systemic lupus erythematosus (SLE); home humidification for xerostomia; and high-dose intravenous vitamin C for severe viral pneumonia.

After being evaluated and scored by the CPC, each submission resulted in a letter to the Chief Medical Officer of the ADHB. The decisions made consisted of four options: implementation (IMP); approved but not yet funded (NYF); interim approval for a fixed number of cases or time interval, with data collection (IAD); and declined (DEC). Figure 2 shows the distribution of outcomes as percentages of submissions.

Fig. 2 Distribution of decision outcomes as percentages of submissions

Unsurprisingly, low-scoring submissions were often declined (more than 60 % of submissions scoring less than 30), whereas no high-scoring submission was ever declined (0 % of those scoring more than 60; p = 0.002 when comparing the DEC rates of high- and low-scoring submissions, Fisher’s exact test). Similarly, when submissions scored more than 60, the rate of IMP or NYF was over 90 %, but when the scores were less than 30, the IMP or NYF rate was less than 10 % (p = 0.002 when comparing the IMP or NYF rates for high- and low-scoring submissions, Fisher’s exact test). Mid-scoring technologies were the most problematic, with a propensity to be allocated IAD. The interim approval strategy has had variable outcomes, depending on the willingness of the implementing clinicians to capture, and subsequently provide to us, accurate data about both costs and outcomes. Promising technologies, for example, vagal nerve stimulation for intractable epilepsy, have sometimes eventually been declined after an initial IAD decision because of poor data collection rather than any absolute conviction that the technology itself was not beneficial or cost effective.
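The underlying counts are not tabulated here, but the Fisher’s exact tests above operate on 2 × 2 contingency tables of score band versus decision. A minimal sketch, with a hypothetical table chosen only to mirror the reported pattern (no high scorers declined; most low scorers declined), not our actual data:

```python
from scipy.stats import fisher_exact

# Hypothetical counts of submissions by score band and decision;
# the paper's actual counts are not given in the text.
#                declined  not declined
table = [[0, 12],   # submissions scoring more than 60
         [7,  4]]   # submissions scoring less than 30

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
# Prints a small p-value for these made-up counts; the editorial
# reports p = 0.002 for the committee's real data.
print(p_value)
```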

The members of the ADHB CPC would not pretend that it has netted all new health technologies proposed or, indeed, implemented over the past 9 years. However, there has been a clear message from decision makers that the process helps both by selecting promising technologies and by giving them the analytical support to decline suboptimal ones. This perception has been pervasive enough that the activities of the CPC have, since July 2014, become regional, covering three adjacent District Health Boards. It should be noted, in passing, that information gathered for investment decisions has, from time to time, identified targets for disinvestment, mostly by way of constraining eligibility. We have encountered 15 such opportunities during our tenure and, in 13/15 (87 %), access could quite legitimately be dramatically curtailed without concern about impaired patient outcomes, with cost savings of more than NZD$1 million per annum. In 2/15 (13 %), access was removed altogether, with immediate savings of more than NZD$300,000 per annum.

In summary, I congratulate Assistant Professor Gagnon on the timely summary she has provided of hospital-based health technology assessment activities. Our experience encourages us to continue to expand such activities because of their recognized value in the difficult business of deciding amongst competing, and sometimes dazzling, new health technologies. Our simple clinical pathway comparison methodology, combined with a scoring tool, has allowed us to advise our public healthcare organization fairly rapidly on what are sometimes quite contentious issues.