Volume 36, Issue 2, pp 127–130

Towards a New Framework for Addressing Structural Uncertainty in Health Technology Assessment Guidelines

  • Salah Ghabri
  • Irina Cleemput
  • Jean-Michel Josselin

1 Introduction

Providing scientific advice and recommendations for public decision making entails identifying, selecting and weighing evidence derived from multiple sources of information through a systematic approach, while taking into account ethical, cultural and societal factors. Integrated into the evaluation process are exchanges among regulatory agencies, private firms, scientific experts and government representatives.

In the case of drugs and medical devices, health technology assessment (HTA) agencies are increasingly commissioned to evaluate innovations in order to provide government with recommendations and advice on reimbursement and/or pricing. To undertake this task, HTA agencies [1, 2, 3, 4, 5, 6] in Europe and elsewhere have developed methodological guidelines on the economic evaluation of health technologies [7]. One component of these guidelines deals with ways for both manufacturers (pharmaceutical and medical device firms) and HTA agency evaluators (modelers, economists and public health experts) to address uncertainty. Several types of uncertainty have indeed been identified in HTA: methodological, parameter and structural uncertainty. Most guidelines describe quite well how to deal with the first two categories, although there is still room for improvement. However, recommendations about how to tackle structural uncertainty remain largely elusive. HTA agencies and decision makers may thus be exposed to oversimplified assessments and recommendations that put aside complex forms of uncertainty, such as structural ‘deep’ uncertainty [8].

This editorial is not intended to promote new approaches to exploring structural uncertainty, but rather to emphasize concerns related to the topic, such as its definition and analysis. Our aim is therefore to highlight the need to renew the analytical framework guidance for HTA.

2 From Knightian to Bayesian Uncertainty

Uncertainty and its exploration have been investigated across many disciplines, including statistics, operations research, philosophy and economics. Defining uncertainty is a complex and challenging task, both theoretically and practically [9].

The theoretical debate frequently focuses on the quantification of ‘immeasurable’ or ‘deep’ uncertainty. As early as 1921, Frank Knight paved the way for an essential debate on uncertainty, how it can be defined, and the extent to which it should be taken into account at either microeconomic or macroeconomic levels of economic analysis. Knight [10] placed particular emphasis on the distinction between ‘known risk’, which may be measured, and ‘true uncertainty’, which is unlikely to be measured: “Uncertainty must be taken in a sense radically distinct from the familiar notion of risk, from which it has never been properly separated. The essential fact is that ‘risk’ means in some cases a quantity susceptible of measurement”.

Today, arguments against the Knightian view and in favor of Bayesian decision theory are increasingly developed. According to the latter, all forms of uncertainty are theoretically quantifiable. It is admittedly difficult and uncomfortable to quantify uncertainty in many contexts because “people find it difficult to express their knowledge and beliefs in probabilistic form, so that elicitation of probability distribution is a far from perfect process” [11]. However, if ultimately we are to make a decision under uncertainty, our optimal action will be that which maximizes our utility given our subjective probability specification over all relevant unknowns.
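This Bayesian decision rule can be sketched compactly. The notation below is introduced purely for illustration and does not appear in the cited sources: \(a\) denotes an action (e.g. a reimbursement decision) from a set \(A\), \(\theta\) the unknown quantities, \(U\) a utility function and \(p(\theta)\) the decision maker's subjective probability distribution:

```latex
a^{*} \;=\; \arg\max_{a \in A} \, \mathbb{E}_{p(\theta)}\!\left[ U(a,\theta) \right]
      \;=\; \arg\max_{a \in A} \int U(a,\theta)\, p(\theta)\, \mathrm{d}\theta .
```

The crux of the divergence from the Knightian view is that all uncertainty, however ‘deep’, enters through the single distribution \(p(\theta)\): nothing is treated as immeasurable in principle, only as difficult to elicit in practice.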

The Bayesian approach has been progressively included in the formal assessment of uncertainty [12, 13, 14, 15] and applied in the value-of-information framework [16]. Nevertheless, its handling in HTA guidelines remains tricky and challenging.

3 Taking into Account Uncertainty in the Economic Evaluation of Health Technologies

Handling uncertainty is a concern shared by national HTA guidelines (e.g. Australia [Pharmaceutical Benefits Advisory Committee; PBAC] [1], Belgium [Belgian Health Care Knowledge Centre; KCE 2015] [2], Canada [Canadian Agency for Drugs and Technologies in Health; CADTH 2017] [3], England and Wales [National Institute for Health and Care Excellence; NICE 2015] [4], France [Haute Autorité de Santé; HAS 2012] [5] and the Netherlands [6]), academic research [17] and international associations (e.g. the International Society for Pharmacoeconomics and Outcomes Research–Society for Medical Decision Making [ISPOR-SMDM] Modeling Good Research Practices Task Force-6 2012 [18] and the European Network for Health Technology Assessment [EUnetHTA] 2015 [19]). In particular, the ISPOR-SMDM Modeling Good Research Practices Task Force-6 distinguished stochastic, parameter and structural uncertainty. Broadly, stochastic uncertainty refers to ‘random variability in outcomes between identical patients’. Parameter uncertainty relates to the uncertainty around the ‘true’ costs and effects of each decision option, because we are uncertain about the ‘true’ values of the inputs of the model. Structural uncertainty is the most difficult type of uncertainty to define and to grasp; according to Briggs et al. [18], it is associated with ‘the assumptions inherent in the decision model’. Today, although academic research suggests several analytical frameworks allowing the exploration of structural uncertainty [29, 30, 31, 32, 33, 34, 35], there are as yet no explicit HTA recommendations on how to operationally address structural uncertainty in national or international guidelines on health economic evaluation [20, 32].

4 Managing Structural Uncertainty in Health Economic Evaluation: A Further Need for a Straightforward Definition and Practical Health Technology Assessment (HTA) Guidance

A decision context is always characterized by limited knowledge of either the condition under study or the effect of the technology under evaluation. For example, drug outcomes and health gains based on surrogate endpoints [21, 22], incomplete data generated for the outcomes of medical devices [23, 24] or partial understanding of disease progression [25] force decision makers to decide on the basis of incomplete information. In such limited-knowledge environments, the lack of in-depth exploration of structural uncertainty may prevent both manufacturers and HTA agencies from optimally assessing the magnitude of prediction errors in health economic outputs and, consequently, the confidence associated with the economic evaluation (e.g. cost-effectiveness analysis).

To our knowledge, there is as yet no standard definition of structural uncertainty. Admittedly, it was identified in the late 2000s [26, 27], but no ‘operational’ definition of structural uncertainty itself has yet been proposed. For example, the report of the ISPOR-SMDM Modeling Good Research Practices Task Force-6 [18] and Ghabri et al. [20] associated it with the uncertainty related to ‘the assumptions inherent in the decision model’ or ‘model uncertainty’ and mentioned some analogies with concerns identified in the analysis of standard multivariate regression.

Exploring structural uncertainty primarily requires a renewed framework. Such a framework should first overcome the difficulty of defining exactly what structural uncertainty means. It cannot be defined as ‘uncertainty about the model’, because the model is built and is therefore not itself uncertain. Nor can it be ‘uncertainty about the assumptions or the structure of the model’, because that would imply there is a ‘true’ model about which we are uncertain. By definition, however, models are simplified constructs of reality based on the best available knowledge. A frequently used quote in relation to models is that “all models are wrong, but some are useful” [28]. But if they are wrong, what does model error mean? We are uncertain about the ‘true’ costs and effects of each decision option because our model produces a prediction that we know is doubtful: the model is not perfect and cannot perfectly reflect the actual future outcomes of the assessed intervention. Consequently, following Strong and Oakley [34], structural uncertainty might be defined as the uncertainty that arises from uncertain model error in relation to the target quantities, or more generally the outcomes, that the model is trying to predict.
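Under this reading, the distinction between parameter and structural uncertainty can be sketched with a discrepancy term. The notation below is ours, added for illustration only: \(y\) is the target quantity (e.g. incremental cost or effect), \(f(\theta)\) the model prediction given inputs \(\theta\), and \(\delta\) the model error:

```latex
y \;=\; f(\theta) \;+\; \delta .
```

In this sketch, parameter uncertainty is uncertainty about \(\theta\), while structural uncertainty is uncertainty about the discrepancy \(\delta\): the model is “wrong” insofar as \(\delta \neq 0\), and structural uncertainty is our ignorance of its size and sign.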

Once the issue of defining structural uncertainty has been taken up, the question arises of how to recommend addressing it within HTA agencies. Considering the genuine need to establish a renewed HTA framework, we suggest the following questions. Should HTA agencies go as far as proposing ‘analytical’ tools to assess structural uncertainty, or at least explicitly recommend the use of approaches to structural uncertainty already explored in academic research? Should health economic journals and peer reviewers stress the lack of exploration of structural uncertainty? Much as with the validation of health economic decision models and its related issues [36], is it not time to put forward a dedicated tool for structural uncertainty, one that would propose a commonly accepted definition and good practices on widely accepted ways to include structural uncertainty in HTA?



This editorial partially draws on previous work presented at the 2016 Health Technology Assessment International (HTAi) Congress in Tokyo, Japan, and at the 2016 meeting of the European Public Choice Society in Freiburg, Germany. The authors especially thank Mark Strong (University of Sheffield) and Jaime Caro (McGill University) for their suggestions and insightful comments on an early draft of the manuscript. We also thank Nathalie Merle (HAS), Françoise Hamers (Santé Publique France) and all conference participants for their helpful comments on this work. The opinions expressed in this editorial are those of the authors and do not necessarily represent the views of the French National Authority for Health, the Belgian Health Care Knowledge Centre or the University of Rennes 1.

Compliance with Ethical Standards


No funding was received for the preparation of this manuscript.

Conflict of interest

The authors have no conflicts of interest. Salah Ghabri is employed by the French National Authority for Health, Irina Cleemput is employed by the Belgian Health Care Knowledge Centre and Jean-Michel Josselin is employed by the University of Rennes 1.


  1. Pharmaceutical Benefits Advisory Committee. Guidelines for preparing submissions to the Pharmaceutical Benefits Advisory Committee (version 5.0). Australian Government Department of Health. 2016. Accessed 15 Oct 2017.
  2. Belgian guidelines for economic evaluations and budget impact analysis. KCE. 2015. Accessed 15 Oct 2017.
  3. Canadian Agency for Drugs and Technologies in Health. Guidelines for the economic evaluation of health technologies: Canada. 4th ed. Ottawa: Canadian Agency for Drugs and Technologies in Health; 2017. Accessed 15 Oct 2017.
  4. National Institute for Health and Care Excellence (NICE). Single technology appraisal: user guide for company evidence submission template. 2015. Accessed 15 Oct 2017.
  5. Haute Autorité de Santé (HAS). Choices in methods for economic evaluation. 2012. Accessed 15 Oct 2017.
  6. Guideline for the conduct of economic evaluations in health care (Dutch version, February 2016). Accessed 15 Oct 2017.
  7. Heintz E, Gerber-Grote A, Ghabri S, Hamers F, Rupel V, Slabe-Erker R, Davidson T. Is there a European view on health economic evaluations? Results from a synopsis of methodological guidelines used in the EUnetHTA partner countries. Pharmacoeconomics. 2015;34:59–76.
  8. Cox LA. Confronting deep uncertainties in risk analysis. Risk Anal. 2012;32:1607–29.
  9. French S. Cynefin: uncertainty, small worlds and scenarios. J Oper Res Soc. 2015;66:1635–45.
  10. Knight F. Risk, uncertainty and profit. Boston: Houghton Mifflin; 1921.
  11. O’Hagan A, Oakley JE. Probability is perfect, but we can’t elicit it perfectly. Reliab Eng Syst Saf. 2004;85:239–48.
  12. McCarron CE, Pullenayegum EM, Marshall DA, Goeree R, Tarride JE. Handling uncertainty in economic evaluations of patient level data: a review of the use of Bayesian methods to inform health technology assessments. Int J Technol Assess Health Care. 2009;25:546–54.
  13. Negrin MA, Vasquez-Polo FJ. Incorporating model uncertainty in cost-effectiveness analysis: a Bayesian model averaging approach. J Health Econ. 2008;27:1250–9.
  14. Oakley J, O’Hagan A. Probabilistic sensitivity analysis of complex models: a Bayesian approach. J R Stat Soc B. 2004;66:751–69.
  15. Jackson CH, Sharples LD, Thompson SG. Structural and parameter uncertainty in Bayesian cost-effectiveness models. J R Stat Soc Ser C Appl Stat. 2010;59:233–53.
  16. Claxton K, Neumann PJ, Araki S, Weinstein MC. Bayesian value-of-information analysis: an application to a policy model of Alzheimer’s disease. Int J Technol Assess Health Care. 2001;17:38–55.
  17. Drummond M, Sculpher M, Torrance G, O’Brien B, Stoddart G. Methods for the economic evaluation of health care programmes. 4th ed. New York: Oxford University Press; 2015.
  18. Briggs A, Weinstein M, Fenwick E, et al. Model parameter estimation and uncertainty: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-6. Value Health. 2012;15:835–42.
  19. EUnetHTA. Methods for health economic evaluations: a guideline based on current practices in Europe. 2015. Accessed 15 Oct 2017.
  20. Ghabri S, Hamers F, Josselin J-M. Exploring uncertainty in economic evaluations of new drugs and medical devices: lessons from the first review of pharmaceutical companies’ submissions to the French National Authority for Health. Pharmacoeconomics. 2016;34:617–24.
  21. Ciani O, Buyse M, Drummond M, Rasi G, Saad ED, Taylor RS. Time to review the role of surrogate end points in health policy: state of the art and the way forward. Value Health. 2017;20(3):487–95.
  22. Raimond V, Rochaix L, Josselin J-M. HTA agencies facing model biases: the case of type 2 diabetes. Pharmacoeconomics. 2014;32:815–936.
  23. Sorenson C, Tarricone R, Siebert M, Drummond M. Applying health economics for policy decision making: do devices differ from drugs? Europace. 2011;13:ii54–8.
  24. Rothery C, Claxton K, Palmer S, Epstein D, Tarricone R, Sculpher M. Characterising uncertainty in the assessment of medical devices and determining future research needs. Health Econ. 2017;26(Suppl 1):109–23.
  25. Frederix G, Van Hasselt J, et al. Structural uncertainty on cost-effectiveness models for adjuvant endocrine breast cancer treatments: the need for disease-specific model standardization and improved guidance. Pharmacoeconomics. 2014;32:47–61.
  26. Bojke L, Claxton K, Sculpher M, Palmer S. Characterizing structural uncertainty in decision evaluations. Pharmacoeconomics. 2009;33:435–43.
  27. Strong M, Pilgrim H, Oakley J, Chilcott J. Structural uncertainty in health economic decision models. ScHARR Occasional Paper. 2009.
  28. Box G, Draper N. Empirical model-building and response surfaces. New Jersey: Wiley; 1987.
  29. Jackson C, Thompson S, Sharples L. Accounting for uncertainty in health economic decision models by using model averaging. J R Stat Soc A. 2009;172:383–404.
  30. Jackson CH, Bojke L, Thompson SG, Claxton K, Sharples LD. A framework for addressing structural uncertainty in decision models. Med Decis Mak. 2011;31:662–74.
  31. Kadane JB, Lazar NA. Methods and criteria for model selection. J Am Stat Assoc. 2004;99:279–90.
  32. Afzali H, Karnon J. Exploring structural uncertainty in model-based economic evaluations. Pharmacoeconomics. 2015;33:435–43.
  33. Price MJ, Welton NJ, Briggs AH, Ades AE. Model averaging in the presence of structural uncertainty about treatment effects: influence on treatment decision and expected value of information. Value Health. 2011;14:205–18.
  34. Strong M, Oakley J. When is a model good enough? Deriving the expected value of model improvement via specifying internal model discrepancies. J Uncertain Quantif. 2014;2:106–25.
  35. Le Q. Structural uncertainty of Markov models for advanced breast cancer: a simulation study of lapatinib. Med Decis Mak. 2016;36:629–40.
  36. Vemer P, Corro Ramos I, van Voorn GA, Al MJ, Feenstra TL. AdViSHE: a validation-assessment tool of health-economic models for decision makers and model users. Pharmacoeconomics. 2016;34(4):349–61.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2017

Authors and Affiliations

  • Salah Ghabri (1)
  • Irina Cleemput (2)
  • Jean-Michel Josselin (3)

  1. Department of Economic and Public Health Evaluation, French National Authority for Health (HAS), Saint-Denis La Plaine cedex, France
  2. Belgian Health Care Knowledge Centre (KCE), Brussels, Belgium
  3. Faculty of Economics, University of Rennes 1, CREM-CNRS, Rennes cedex, France
