Calibrating Models in Economic Evaluation
Background: The importance of assessing the accuracy of health economic decision models is widely recognized. Many applied decision models (implicitly) assume that the process of identifying relevant values for a model’s input parameters is sufficient to ensure the model’s accuracy. The selection of infeasible combinations of input parameter values is most likely in the context of probabilistic sensitivity analysis (PSA), where parameter values are drawn from independently specified probability distributions for each model parameter. Model calibration involves the identification of input parameter values that produce model outputs that best predict observed data.
Methods: An empirical comparison of three key calibration issues is presented: the applied measure of goodness of fit (GOF); the search strategy for selecting sets of input parameter values; and the convergence criteria for determining acceptable GOF. The comparisons are presented in the context of probabilistic calibration, a widely applicable approach to calibration that can be easily integrated with PSA. The appendix provides a user’s guide to probabilistic calibration, with the reader invited to download the Microsoft® Excel-based model reported in this article.
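The probabilistic calibration procedure described above can be sketched in a few steps: draw candidate parameter sets from independent PSA distributions, score each set against observed calibration targets with a goodness-of-fit (GOF) measure, and retain only the sets that satisfy a convergence criterion. The sketch below is illustrative only: the toy two-parameter model, the beta priors, the target values, and the chi-squared acceptance threshold are all assumptions for demonstration, not the model or data reported in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration targets (e.g. survival at 1 and 5 years)
# and their standard errors; values are illustrative only.
observed = np.array([0.80, 0.45])
se = np.array([0.04, 0.05])

def run_model(p_progress, p_death):
    """Toy two-parameter model returning predicted target values."""
    surv_1y = (1 - p_progress) * (1 - p_death)
    surv_5y = surv_1y ** 5
    return np.array([surv_1y, surv_5y])

# Step 1 (random search strategy): draw candidate parameter sets
# from independently specified PSA distributions (assumed priors).
n_draws = 10_000
p_progress = rng.beta(2, 18, n_draws)
p_death = rng.beta(1, 19, n_draws)

# Step 2: chi-squared GOF for each candidate set, summed over targets.
preds = np.array([run_model(a, b) for a, b in zip(p_progress, p_death)])
gof = (((preds - observed) / se) ** 2).sum(axis=1)

# Step 3 (convergence criterion): accept sets within the 95th percentile
# of a chi-squared distribution with 2 degrees of freedom (5.99) -- a
# common, though arbitrary, threshold choice.
accepted = gof <= 5.99
print(f"accepted {accepted.sum()} of {n_draws} candidate sets")

# Calibrated output: summarise a model output over accepted sets only,
# so output uncertainty reflects only parameter sets consistent with
# the observed data.
calibrated_5y = preds[accepted, 1]
print(f"mean 5-year survival (calibrated): {calibrated_5y.mean():.3f}")
```

A guided search strategy would replace the independent random draws in step 1 with an optimisation routine that proposes new candidate sets based on the GOF of previous ones; the GOF scoring and acceptance steps are unchanged.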
Results: The calibrated models consistently provided higher mean estimates of the models’ output parameter, illustrating the potential gain in accuracy derived from calibrating decision models. Model uncertainty was also reduced. The chi-squared GOF measure differentiated between the accuracy of different parameter sets to a far greater degree than the likelihood GOF measure. The guided search strategy produced higher mean estimates of the models’ output parameter, as well as a narrower range of predicted output values, which may reflect greater precision in the identification of candidate parameter sets or more limited coverage of the parameter space. The broader convergence threshold resulted in lower mean estimates of the models’ output, and slightly wider ranges, which were closer to the outputs associated with the non-calibrated approach.
Conclusions: Probabilistic calibration provides a broadly applicable method that will improve the relevance of health economic decision models, and simultaneously reduce model uncertainty. The analyses reported in this paper inform the more efficient and accurate application of calibration methods for health economic decision models.
No sources of funding were used to conduct this study or prepare this manuscript. The authors have no conflicts of interest that are directly relevant to the content of this article.