Transparency in decision modeling remains a topic of rigorous debate among healthcare stakeholders, given tensions between the potential benefits of external access during model development and the need to protect intellectual property and reward research investments [1,2,3]. Recognizing that decision modeling is conducted by various organizations, this article focuses on transparency issues from the perspective of university-based researchers and academic institutions, drawing on recent experience in conducting collaborative research with the Institute for Clinical and Economic Review (ICER), a US-based health technology assessment organization that is actively engaged in model development for multiple audiences in the USA. Importantly, some level of transparency already exists in terms of methods and other technical specifications for published models; ICER public reports also include a technical appendix with details on model structure, parameter estimates, risk equations, and syntheses of the clinical data that inform the model. Therefore, increased transparency in decision modeling is taken here to mean mechanisms that allow direct external access to a model’s structure, source code, and data. This is similar to the definition provided by Eddy et al. [4] in an International Society for Pharmacoeconomics and Outcomes Research (ISPOR) good practices report, which states that transparency “… refers to the extent to which interested parties can review a model’s structure, equations, parameter values, and assumptions.” As such, transparency is treated as separate from the process of model building, although transparency mechanisms may interact with, and themselves form a step in, that process. The current debate on transparency has arisen in the USA at a time when the use of decision models is becoming more prominent in healthcare decision making through the emergence of value-based formularies and the efforts of groups such as ICER [5].
Strategies to allow direct external access to models can take many forms, ranging from the status quo (e.g., detailed methods reporting) to free, publicly available open-source models. Balancing transparency with practicality and the interests of all involved parties is a daunting task with many ethical, legal, and infrastructure-related hurdles, and it is the subject of intense ongoing debate [2, 6,7,8]. Establishing a suitable level of transparency requires balancing the pursuit of model validity and reliability against the protection of intellectual property rights and the need to reward research investments. As in many such situations, the ultimate goal is a practical approach that meets the goals of the activity while balancing the incentives and constraints of the interested parties.
A variety of different stakeholders are engaged in this topic, including model developers (both the developers of a specific model and the larger modeling community), model commissioners or funders, model users (i.e., healthcare decision makers, including healthcare payers and clinical guidelines groups), developers of modeling methods and supportive software, pharmaceutical and medical device manufacturers, and patients and other healthcare consumers. Although each group will have its own perspective, there is typically a shared goal of producing timely, accurate, valid, and reliable evidence about the comparative clinical and economic impact of healthcare interventions. The primary audience for most decision models is healthcare payers and, perhaps to a lesser extent, clinical guidelines groups. The process of developing and validating the relevant decision models should ensure that these audiences trust the results of the decision model. We note that transparency in the development process is a separate consideration from the subsequent activities of accessing the model to produce custom results, update model inputs, train future modelers, and repurpose models for a different research question. Although these activities have merit, they are not the primary purpose of developing a de novo decision model to inform decision makers about a specific research question. The ISPOR–SMDM (Society for Medical Decision Making) guidance appears to promote model transparency for the purpose of public validation rather than a broader set of purposes [4]. This encapsulates a central tension in the debate: Opponents of broader release worry about model appropriation or misuse for nefarious purposes as well as a detrimental effect on future funding prospects, whereas proponents argue that limited release undercuts the potential for further innovation and improvement through experimentation by others [1, 3, 9, 10].
The purpose of this paper is to provide an overview of the issues surrounding transparency from the perspective of university-based researchers and academic institutions, informed by recent joint experiences with ICER, and to offer key considerations for the field of health economics and outcomes research in the future. We believe these issues apply regardless of whether the work is funded by governments, payers, health technology assessment (HTA) bodies, or industry. Our focus is on strategies for providing “direct model access”—in other words, allowing one or more interested parties to obtain and review the source code, data, and technical documentation of a model developed by another party.
Acknowledging existing standards for the description of methods and inputs used in academic models, this paper seeks to delineate potential strategies to help move toward increased transparency, as well as challenges related to balancing the needs and interests of academic model developers against those of other parties with a stake in model outputs. Finally, the paper details an initiative to increase the transparency of models that is being developed to support ICER reviews of new and emerging technologies. In the spirit of the ISPOR–SMDM guidance [4], this initiative has a narrow scope, focusing on model development, with the goal of providing direct access to a working version of the model for the stakeholders most qualified to review the model structure, key assumptions, parameter estimates, and other features.