Volume 33, Issue 8, pp 777–781

Development and Use of Disease-Specific (Reference) Models for Economic Evaluations of Health Technologies: An Overview of Key Issues and Potential Solutions

  • Gerardus W. J. Frederix
  • Hossein Haji Ali Afzali
  • Erik J. Dasbach
  • Robyn L. Ward

1 Introduction

Decision-analytic models are increasingly used to inform decisions about whether or not to publicly fund new health technologies such as pharmaceuticals. Few would therefore argue against the need for developing and using high-quality models [1]. In recent years, significant efforts have been made to improve the quality of decision-analytic models (e.g., through the improvement and use of good practice guidelines [2]); however, important challenges facing decision-analytic modelling remain. Recently, Caro and Moller [3] outlined some of these challenges, including, among others, “validation”, “transparency”, “uncertainty”, and “implementing the model”, each with a potential impact on the credibility of model outcomes to decision makers. Of these critical points, the uncertainty of model outcomes is a broadly studied topic. Three major types of uncertainty influence the results of decision-analytic models: (1) parameter uncertainty; (2) methodological uncertainty; and (3) structural uncertainty [4].

The impact of both parameter uncertainty (uncertainty in numerical values assigned to model inputs such as transition probabilities) and methodological uncertainty (uncertainty in the choice of analytical methods, e.g., discount rate or the perspective taken) has been addressed substantially in the literature through, for example, using probabilistic sensitivity analyses to address parameter uncertainty [5] and using a ‘reference case’ to deal with methodological uncertainty [6, 7, 8]. In contrast, in the absence of a sufficient level of detail around the effect of and approaches to dealing with structural uncertainty, concerns regarding the credibility of model outcomes are frequently raised [9].
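For readers less familiar with probabilistic sensitivity analysis, the approach can be sketched as follows: uncertain inputs are drawn repeatedly from distributions, and the spread of the resulting outcomes is summarized. The sketch below is a minimal illustration, not any specific published model; all distributions, costs, and QALY values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 5000

# Hypothetical inputs for two treatments (all values purely illustrative).
# Costs drawn from Gamma and utilities from Beta distributions, as is conventional.
cost_new = rng.gamma(shape=100, scale=250, size=n_draws)   # mean ~ 25,000
cost_old = rng.gamma(shape=100, scale=180, size=n_draws)   # mean ~ 18,000
qaly_new = rng.beta(a=80, b=20, size=n_draws) * 10         # mean ~ 8 QALYs
qaly_old = rng.beta(a=70, b=30, size=n_draws) * 10         # mean ~ 7 QALYs

delta_cost = cost_new - cost_old
delta_qaly = qaly_new - qaly_old

# Summarize via net monetary benefit at a willingness-to-pay threshold,
# which avoids the instability of per-draw ICERs when delta_qaly is near zero.
threshold = 20_000  # illustrative willingness to pay per QALY
net_benefit = threshold * delta_qaly - delta_cost
prob_ce = (net_benefit > 0).mean()

print(f"ICER (ratio of means): {delta_cost.mean() / delta_qaly.mean():.0f} per QALY")
print(f"P(cost effective at {threshold}/QALY): {prob_ce:.2f}")
```

Repeating the final step over a range of thresholds yields a cost-effectiveness acceptability curve, the usual way such results are presented to decision makers.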

Structural uncertainty arises because a model is necessarily a simplification of reality: it is often unclear whether one set of structural assumptions fits reality better than another. Recent research has shown that this structural uncertainty can have a large impact on the outcomes of economic evaluations [10, 11]. For instance, the different structures incorporated in breast cancer models led to large differences in incremental cost-effectiveness ratios (ICERs). Several of the implemented structures did not even correctly reflect the natural history of breast cancer progression; these should therefore be regarded not as instances of structural uncertainty but as biased structures. These findings demonstrate the effect of structural uncertainty on model outcomes and, hence, on funding decisions. To reduce the impact of varying model structures on outcomes for a specific disease, the use of standardized disease-specific (reference) models has recently been proposed [10, 12, 13, 14].

In 1996, Gold et al. [6] introduced the use of a reference case by describing a set of core requirements such as the need for discounting, sensitivity analyses, and time horizon, thereby enabling better comparison between economic evaluations and decreasing methodological uncertainty. Our proposal for the use of standardized reference models goes a step further than Gold et al. and focuses on uniform modelling methods, structures, and even parameterization and should therefore be called a disease-specific reference model. Such disease-specific (reference) models accurately represent both the knowledge and uncertainty about states/events relating to the disease progression on the basis of the best available evidence. A reference model can be applied to a wide set of interventions for a specific disease (e.g., drugs and procedures that may target alternative mechanisms or stages of disease).

A few disease-specific models have been previously published [15, 16, 17, 18, 19]. These are large-scale models with different boundaries, providing a consistent framework for the economic evaluation of a wide range of health technologies for a specific condition (e.g., rheumatoid arthritis, colorectal cancer). Our proposed disease-specific (reference) model is not a system-level model, such as a whole disease model simulating disease and treatment pathways (i.e., prevention, diagnosis, and post-diagnosis pathways) [18]. Rather, a disease-specific model focuses on disease progression, capturing all key clinical states/events, the transitions between them, and the patient attributes that influence disease and/or response to treatment. This enables the disease-specific (reference) model to be used for a range of interventions that may target disease progression.

Within the health technology funding decision process, the uptake of these models is often lacking, likely due to a lack of general agreement between key health technology assessment (HTA) stakeholders (e.g., funding bodies and industry) on the need for disease-specific models and a framework to develop and use such models. Other practical issues such as the time and technical complexities associated with the development of reference models may also contribute to resistance to using reference models in decision making [18].

During a panel discussion at the recent Health Technology Assessment International (HTAi) conference in Washington, DC, USA, the value of such disease-specific reference models was discussed from different HTA stakeholder viewpoints: academia, industry, and HTA agencies (e.g., national funding bodies). This commentary aims to briefly outline and share key points from our panel at HTAi, focusing on practical issues around model development and on overcoming barriers that prohibit the use of standardized disease-specific models. Our motivation for sharing this commentary is to stimulate further dialogue and progress in making models easier to access and understand, and in improving their validity and credibility.

2 Key Issues and Potential Solutions

Although disease-specific models are a promising aid for reducing structural uncertainty, and therefore for increasing credibility, several issues were raised when discussing them. These issues can be grouped into three categories: (1) development; (2) characteristics; and (3) implementation.

2.1 Development

First, multidisciplinary collaboration is needed to develop such models. A methodological framework proposed by Haji Ali Afzali et al. [20] was discussed during the panel discussion. This framework illustrates the development process for reference models and allows for inputs from a wide range of HTA stakeholders (e.g., government, industry, clinicians). Briefly, the proposal involves the establishment of a “reference model research unit” (e.g., within independent academic HTA centers commissioned by national funding bodies) that is responsible for undertaking and overseeing the development of disease-specific reference models. In line with best practice guidelines for modelling, the proposed framework consists of five main steps: the choice of a conceptual framework using the best available evidence and clinical inputs; the choice of an appropriate modelling technique (e.g., cohort-based vs. individual-based models); construction of the model; model performance evaluation (i.e., validation and calibration); and updating the model structure when new evidence (i.e., about the natural history of the condition, or data to populate or validate the model) becomes available.
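To make the “choice of an appropriate modelling technique” step concrete, a minimal cohort-based (Markov) sketch is shown below. The three states, transition probabilities, costs, utilities, and discount rate are all hypothetical and not taken from any published reference model.

```python
import numpy as np

# Illustrative three-state cohort Markov model: stable / progressed / dead.
# Rows of P give the annual transition probabilities FROM each state.
P = np.array([
    [0.85, 0.10, 0.05],   # from stable
    [0.00, 0.80, 0.20],   # from progressed
    [0.00, 0.00, 1.00],   # dead is absorbing
])
costs = np.array([1_000.0, 5_000.0, 0.0])   # hypothetical annual cost per state
utilities = np.array([0.85, 0.55, 0.0])     # hypothetical utility weight per state
discount = 0.035                            # annual discount rate

state = np.array([1.0, 0.0, 0.0])           # whole cohort starts in 'stable'
total_cost = total_qalys = 0.0
for cycle in range(40):                     # 40 annual cycles
    df = 1.0 / (1.0 + discount) ** cycle    # discount factor for this cycle
    total_cost += df * (state @ costs)
    total_qalys += df * (state @ utilities)
    state = state @ P                       # advance the cohort one cycle

print(f"Discounted lifetime cost:  {total_cost:,.0f}")
print(f"Discounted lifetime QALYs: {total_qalys:.2f}")
```

An individual-based (microsimulation) alternative would instead track sampled patients one at a time, allowing transition probabilities to depend on patient attributes and history, at the cost of longer run times.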

Although any party can initiate a disease-specific model, it would be most effective if one party took the lead in setting up the logistics for the development of such models. As different diseases and often different parties (e.g., pharmaceutical companies, non-governmental organizations, and academic institutions) are involved, it would be valuable for the field if HTA agencies took the lead and organized independent review groups. Both clinical and health economic conferences are well suited to hosting meetings among all the different stakeholders and parties to discuss the reference model at a high level and early in the development of new health technologies.

2.2 Characteristics

Although the development of disease-specific models involves a large group of experts in the field and all key HTA stakeholders (including industry representatives), users may disagree with certain structural aspects or assumptions. One option for such users is to first estimate outcomes using the disease-specific model and then estimate outcomes using an alternative model structure. Users should transparently document the changes they made to the disease-specific model and compare the outcomes against those of the reference model. This provides a basis for confirming any claimed advantages of the new model structure, so that decision makers know the impact of the changes on the outcomes.

In addition, to increase trust, it is essential to make models as transparent and valid as possible. Transparency refers to the extent to which involved parties can review a model’s structure, parameters, equations, and assumptions. Validation refers to methods for judging whether a model makes sufficiently accurate and relevant predictions. A recent International Society For Pharmacoeconomics and Outcomes Research (ISPOR) taskforce has described methods and recommended best practices for making models transparent and for validating them [21]. The taskforce discussed issues such as the availability of technical and non-technical model documentation, which should increase transparency, as well as different validation techniques, which should enhance the validity of the model. Adherence to such guidance will lead to greater trust in the models produced among other modelers, users, and decision makers. Making all technical documentation available will also foster collaboration to increase the validity of existing models.

The transparency and validity of models can also be strengthened by preparing a model development plan in advance and providing open access to the model both during development and when complete (see Sect. 2.3). The model development plan would pre-specify a reference standard model structure, test suites for evaluating the model (model performance evaluation), and the analyses to be performed (e.g., health outcome analyses, sensitivity analyses), much like a statistical analysis plan is prepared for clinical trials. The plan would also describe methods for identifying experts, engaging decision makers, seeking agreement on methods and analyses, and organizing publication plans. In addition, the model development plan could be used as a living document to guide future model development. As with statistical analysis plans for clinical trials, changes to the model would be officially amended and submitted to a repository such as

Moreover, the outcomes of economic evaluations may differ between countries because of country-specific characteristics, including data inputs. For a successful adaptation of the disease-specific model, a key starting point is to reach consensus across jurisdictions on the factors that contribute to the transferability of findings. The next step would be to set up transparent jurisdiction-specific modules using available transferability checklists [22]. This process enables different jurisdictions to assess the extent to which the findings of (model-based) economic evaluations from another jurisdiction can be adapted. For example, while parameters such as baseline patient characteristics and risk factors are jurisdiction specific, data on treatment effect (from clinical trials) and utility values may be more generalizable across jurisdictions [23]. Using disease-specific models, applicants can update jurisdiction-specific data. By doing so, the use of disease-specific models can be expanded (e.g., nationally or globally), potentially increasing the efficiency of the decision-making process.

2.3 Implementation

These reference models can and will only be implemented if they are formally adopted by decision makers. Without formal uptake of these models in the decision-making process, new models will constantly be built. To ensure wide use, exchange of knowledge, and increasing model strength, it is essential that the model and its technical and non-technical documentation are published. Publication could occur via a Wikipedia-like platform. Precedents include the provision of user licenses, as used by the developers of the United Kingdom Prospective Diabetes Study (UKPDS) Outcomes Model. The UKPDS Outcomes Model is a simulation model designed to assess the total burden of disease over an extrapolated lifetime for populations with type 2 diabetes mellitus [24]. A copy of the model software and a license to use it are available to third parties [25].

An alternative discussed during the panel was to provide a web link that allows others to access, run, inspect, and contribute to the model, much like social coding is accomplished through GitHub. One interesting example of such a platform for sharing models has been developed at Merck and is called WebModel. To date, the platform has been used to share a human papillomavirus vaccine cost-effectiveness model with governments, academic institutions, and HTA agencies [26].

3 Conclusions and Future Steps

To summarize, a reference model should reflect the underlying biological process of the disease, should be developed using a multidisciplinary framework, and should be adaptable, transparent, transferable, and published and shared in an open-source environment. Such models can facilitate the use of an evidence-based structure and a validated model within HTA. These aims are in line with a recent review report by the Agency for Healthcare Research and Quality (AHRQ) on existing guidance and future research needs in the field of modeling [27].

Currently, the European network for Health Technology Assessment (EUnetHTA) is developing and setting up frameworks to support collaboration between European HTA organizations and to bring added value to healthcare systems at the European, national, and regional levels [28]. Such frameworks would be well suited to setting up and implementing disease-specific reference models. Developing a disease-specific reference model for every single condition would, however, require further investment of HTA stakeholders’ time (e.g., industry and funding bodies). We therefore propose developing reference models first for chronic conditions that pose a significant health and economic burden on the health system (e.g., obesity, osteoporosis, or depression). For these conditions, the choice of model structure to reflect their complex natural history is likely to have a significant impact on model outcomes, given their chronic nature and the influence of baseline characteristics and prior events on disease progression (e.g., the effect of a previous fracture on increasing the risk of further fractures in osteoporosis).

To demonstrate the usefulness of a reference model and the feasibility of the development process, pilot studies should be initiated in collaboration with national funding bodies and industry. These pilot studies should adhere to current development frameworks and include all characteristics deemed essential for implementation and usefulness. Eventually, the investment of time and money by all stakeholders should prove cost effective given the future benefits of disease-specific reference models. Policy makers could then rely on valid and comparable model-based economic evaluations, resulting in better-informed funding decisions, and HTA evaluation groups would not need to appraise new model structures for similar disease areas. Pharmaceutical companies would benefit from guidance and disease-specific reference cases because they would not have to develop a completely new model for each new pharmaceutical and each new indication. Instead, they and other stakeholders could invest in collecting evidence from relative effectiveness studies, as well as country-specific characteristics, to populate the reference model.

Although structural uncertainty will always remain, compliance with best practice guidelines for modelling that describe an evidence-based process to develop disease-specific (reference) models [11], as well as establishing open forums in which to discuss the design of reference models, will decrease the potential impact of alternative model structures on model outcomes. This will also make the outcomes of future models more comparable and differences in outcomes more understandable. By reflecting the natural history of the disease under study more accurately, the use of reference models within the decision-making process will reduce modeler uncertainty resulting from not using the available evidence [29].

We emphasize that reference models are not intended to replace structural sensitivity analyses. In the presence of uncertain or conflicting evidence on model structure, structural uncertainty should be characterized using approaches such as model averaging and parameterization [30]. Moreover, established agreements between HTA stakeholders on a framework will address issues such as changing the model, disagreeing with the model, and refusal of data access.
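Model averaging, as mentioned above, can be sketched very simply: outcomes from alternative plausible structures are combined using weights that reflect the credibility of each structure. The structure names, weights, and incremental outcomes below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Hypothetical incremental outcomes from three candidate model structures.
structures = {
    "three_state":  {"d_cost": 7_000.0, "d_qaly": 0.40},
    "four_state":   {"d_cost": 8_500.0, "d_qaly": 0.35},
    "tunnel_state": {"d_cost": 6_200.0, "d_qaly": 0.45},
}
# Weights reflecting the relative credibility of each structure (must sum to 1);
# in practice these might come from model fit statistics or expert elicitation.
weights = {"three_state": 0.5, "four_state": 0.3, "tunnel_state": 0.2}

avg_cost = sum(weights[s] * structures[s]["d_cost"] for s in structures)
avg_qaly = sum(weights[s] * structures[s]["d_qaly"] for s in structures)
print(f"Model-averaged incremental cost: {avg_cost:,.0f}")
print(f"Model-averaged incremental QALYs: {avg_qaly:.3f}")
print(f"Model-averaged ICER: {avg_cost / avg_qaly:,.0f} per QALY")
```

The parameterization approach mentioned in the text instead folds the structural choice into the model itself as an uncertain parameter, so that standard probabilistic sensitivity analysis propagates it alongside the other inputs.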

In addition, the use of similar models between countries could decrease the time to market for new and promising therapies, thereby increasing the availability and value of new interventions to patients. Reference models therefore have great potential as a standardized tool for informing decision making. To realize these advantages, further research into overcoming the remaining barriers and issues is needed.



Erik Dasbach is an employee of Merck & Co Inc. Robyn Ward is a Commonwealth member of PBAC (Pharmaceutical Benefits Advisory Committee) and chair of MSAC (Medical Services Advisory Committee). Gerardus Frederix and Hossein Haji Ali Afzali declare no conflicts of interest.


  1. Petrou S, Gray A. Economic evaluation using decision analytical modelling: design, conduct, analysis, and reporting. BMJ. 2011;342:d1766.
  2. Husereau D, Drummond M, Petrou S, Carswell C, Moher D, Greenberg D, et al. Consolidated health economic evaluation reporting standards (CHEERS) statement. Pharmacoeconomics. 2013;31(5):361–7.
  3. Caro JJ, Moller J. Decision-analytic models: current methodological challenges. Pharmacoeconomics. 2014;32(10):943–50.
  4. Bilcke J, Beutels P, Brisson M, Jit M. Accounting for methodological, structural, and parameter uncertainty in decision-analytic models: a practical guide. Med Decis Making. 2011;31(4):675–92.
  5. Briggs A, Sculpher MJ, Claxton K. Decision modelling for health economic evaluation. Oxford: Oxford University Press; 2006.
  6. Gold MR, Siegel JE, Russel LB, Weinstein MC. Cost-effectiveness in health and medicine. New York: Oxford University Press; 1996.
  7. Siegel JE, Weinstein MC, Russell LB, Gold MR. Recommendations for reporting cost-effectiveness analyses. Panel on Cost-Effectiveness in Health and Medicine. JAMA. 1996;276(16):1339–41.
  8. NICE. Guide to the methods of technology appraisal 2013. Accessed 16 Jan 2015.
  9. Mauskopf J. A methodological review of models used to estimate the cost effectiveness of antiretroviral regimens for the treatment of HIV infection. Pharmacoeconomics. 2013;31(11):1031–50.
  10. Frederix GW, van Hasselt JG, Schellens JH, Hovels AM, Raaijmakers JA, Huitema AD, et al. The impact of structural uncertainty on cost-effectiveness models for adjuvant endocrine breast cancer treatments: the need for disease-specific model standardization and improved guidance. Pharmacoeconomics. 2014;32(1):47–61.
  11. Kim LG, Thompson SG. Uncertainty and validation of health economic decision models. Health Econ. 2010;19(1):43–55.
  12. Frederix GW, Severens JL, Hovels AM, Raaijmakers JA, Schellens JH. Reviewing the cost-effectiveness of endocrine early breast cancer therapies: influence of differences in modeling methods on outcomes. Value Health. 2012;15(1):94–105.
  13. Haji Ali Afzali H, Karnon J. Addressing the challenge for well informed and consistent reimbursement decisions: the case for reference models. Pharmacoeconomics. 2011;29(10):823–5.
  14. Hiligsmann M, Cooper C, Guillemin F, Hochberg MC, Tugwell P, Arden N, et al. A reference case for economic evaluations in osteoarthritis: an expert consensus article from the European Society for Clinical and Economic Aspects of Osteoporosis and Osteoarthritis (ESCEO). Semin Arthritis Rheum. 2014;44(3):271–82.
  15. Drummond M, Maetzel A, Gabriel S, March L. Towards a reference case for use in future economic evaluations of interventions in osteoarthritis. J Rheumatol Suppl. 2003;68:26–30.
  16. Gabriel S, Drummond M, Maetzel A, Boers M, Coyle D, Welch V, et al. OMERACT 6 Economics Working Group report: a proposal for a reference case for economic evaluation in rheumatoid arthritis. J Rheumatol. 2003;30(4):886–90.
  17. Palmer AJ, Clarke P, Gray A, Leal J, Lloyd A, Grant D, et al. Computer modeling of diabetes and its complications: a report on the Fifth Mount Hood challenge meeting. Value Health. 2013;16(4):670–85.
  18. Tappenden P, Chilcott J, Brennan A, Squires H, Glynne-Jones R, Tappenden J. Using whole disease modeling to inform resource allocation decisions: economic evaluation of a clinical guideline for colorectal cancer using a single model. Value Health. 2013;16(4):542–53.
  19. Eddy DM, Schlessinger L. Archimedes: a trial-validated model of diabetes. Diabetes Care. 2003;26(11):3093–101.
  20. Haji Ali Afzali H, Karnon J, Merlin T. Improving the accuracy and comparability of model-based economic evaluations of health technologies for reimbursement decisions: a methodological framework for the development of reference models. Med Decis Making. 2013;33(3):325–32.
  21. Eddy DM, Hollingworth W, Caro JJ, Tsevat J, McDonald KM, Wong JB. Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force-7. Med Decis Making. 2012;32(5):733–43.
  22. Goeree R, He J, O’Reilly D, Tarride JE, Xie F, Lim M, et al. Transferability of health technology assessments and economic evaluations: a systematic review of approaches for assessment and application. Clinicoecon Outcomes Res. 2011;3:89–104.
  23. Drummond M, Barbieri M, Cook J, Glick HA, Lis J, Malik F, et al. Transferability of economic evaluations across jurisdictions: ISPOR Good Research Practices Task Force report. Value Health. 2009;12(4):409–18.
  24. Clarke PM, Gray AM, Briggs A, Farmer AJ, Fenn P, Stevens RJ, et al. A model to estimate the lifetime health outcomes of patients with type 2 diabetes: the United Kingdom Prospective Diabetes Study (UKPDS) Outcomes Model (UKPDS no. 68). Diabetologia. 2004;47(10):1747–59.
  25. University of Oxford. UKPDS Outcomes Model. Accessed 31 Oct 2014.
  26. Pillsbury M, Weiss T, Dasbach EJ. WebModel: web-based HPV dynamic transmission modeling. In: 2014 International Papillomavirus Conference, 20–25 Aug 2014; Seattle.
  27. Agency for Healthcare Research and Quality. Decision and simulation modeling: review of existing guidance, future research needs, and validity assessment. 2014. Accessed 18 Jan 2015.
  28. Kristensen FB, Makela M, Neikter SA, Rehnqvist N, Haheim LL, Morland B, et al. European network for health technology assessment, EUnetHTA: planning, development, and implementation of a sustainable European network for health technology assessment. Int J Technol Assess Health Care. 2009;25(Suppl 2):107–16.
  29. Haji Ali Afzali H, Karnon J. Exploring structural uncertainty in model-based economic evaluations. Pharmacoeconomics. Epub 2015 Jan 20.
  30. Bojke L, Claxton K, Sculpher M, Palmer S. Characterizing structural uncertainty in decision analytic models: a review and application of methods. Value Health. 2009;12(5):739–49.

Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Gerardus W. J. Frederix (1)
  • Hossein Haji Ali Afzali (2)
  • Erik J. Dasbach (3)
  • Robyn L. Ward (4, 5)

  1. Division of Pharmacoepidemiology and Clinical Pharmacology, Department of Pharmaceutical Sciences, Science Faculty, Utrecht University, Utrecht, The Netherlands
  2. Department of Public Health, The University of Adelaide, Adelaide, Australia
  3. Health Economic Statistics, Merck Research Laboratories, North Wales, USA
  4. Prince of Wales Clinical School, University of New South Wales, Sydney, Australia
  5. University of Queensland, Brisbane, Australia
