Volume 37, Issue 11, pp 1321–1327

Achieving Appropriate Model Transparency: Challenges and Potential Solutions for Making Value-Based Decisions in the United States

  • Josh J. Carlson
  • Surrey M. Walton
  • Anirban Basu
  • Richard H. Chapman
  • Jonathan D. Campbell
  • R. Brett McQueen
  • Steven D. Pearson
  • Daniel R. Touchette
  • David Veenstra
  • Melanie D. Whittington
  • Daniel A. Ollendorf
Open Access
Practical Application


Transparency in decision modeling remains a topic of rigorous debate among healthcare stakeholders, given tensions between the potential benefits of external access during model development and the need to protect intellectual property and reward research investments. Strategies to increase decision model transparency by allowing direct external access to a model’s structure, source code, and data can take on many forms but are bounded between the status quo and free publicly available open-source models. Importantly, some level of transparency already exists in terms of methods and other technical specifications for published models. The purpose of this paper is to delineate pertinent issues surrounding efforts to increase transparency via direct access to models and to offer key considerations for the field of health economics and outcomes research moving forward from a US academic perspective. Given the current environment faced by modelers in academic settings, expected benefits and challenges of allowing direct model access are discussed. The paper also includes suggestions for pathways toward increased transparency as well as an illustrative real-world example used in work with the Institute for Clinical and Economic Review to support assessments of the value of new health interventions. Potential options to increase transparency via direct model access during model development include adequate funding to support the additional effort required and mechanisms to maintain security of the underlying intellectual property. Ultimately, the appropriate level of transparency requires balancing the interests of several groups but, if done right, has the potential to improve models and better integrate them into healthcare priority setting and decision making in the US context.

Key Points

Greater transparency in economic models can help improve modeling efforts and ensure their accuracy and relevance.

Transparency requires effort by modelers and a balance between protecting intellectual property and assessing model validity and reliability.

The Institute for Clinical and Economic Review has piloted structured mechanisms to allow for model validation efforts while protecting the work product of the modeling groups involved, which can serve as a springboard for future innovations to increase model transparency.

1 Introduction

Transparency in decision modeling remains a topic of rigorous debate among healthcare stakeholders, given tensions between the potential benefits of external access during model development and the need to protect intellectual property and reward research investments [1, 2, 3]. Recognizing that decision modeling is conducted by various organizations, this article focuses on issues in transparency from the perspective of university-based researchers and academic institutions and recent experience in conducting collaborative research with the Institute for Clinical and Economic Review (ICER), a US-based health technology assessment organization that is actively engaged in model development for multiple audiences in the USA. Importantly, some level of transparency already exists in terms of methods and other technical specifications for published models; ICER public reports also include a technical appendix with details on model structure, parameter estimates, risk equations, and syntheses of clinical data that inform the model. Therefore, increased transparency in decision modeling is taken here to mean mechanisms to allow direct external access to a model’s structure, source code, and data. This is similar to the definition provided by Eddy et al. [4] in an International Society for Pharmacoeconomics and Outcomes Research (ISPOR) good practices report, in which it is stated that transparency “… refers to the extent to which interested parties can review a model’s structure, equations, parameter values, and assumptions.” As such, transparency is separated from the process of model building, although a process of transparency may interact with and itself be a step in the process of building the model. The current debate on transparency has arisen in the USA at a time when the use of decision models is becoming more prominent in healthcare decision making through the emergence of value-based formularies and the efforts of groups such as ICER [5]. 
Strategies to allow direct external access to models can take on many forms but are bounded between the status quo (e.g., detailed methods reporting) and free, publicly available open-source models. Balancing transparency with practicality and the interests of all involved parties is a daunting task with many ethical, legal, and infrastructure-related hurdles and is the subject of intense ongoing debate [2, 6, 7, 8]. To establish a suitable level of transparency, there is a need to balance the pursuit of model validity and reliability with protecting intellectual property rights and allowing for rewards for research investments. As with many such situations, a practical approach that meets the goals of the activity and balances the incentives and constraints of the interested parties is the ultimate goal.

A variety of different stakeholders are engaged in this topic, including model developers (both the developers of a specific model and the larger modeling community), model commissioners or funders, model users (i.e., healthcare decision makers, including healthcare payers and clinical guidelines groups), developers of modeling methods and supportive software, pharmaceutical and medical device manufacturers, and patients and other healthcare consumers. Although each group will have its own perspective, there is typically a shared goal of producing timely, accurate, valid, and reliable evidence about the comparative clinical and economic impact of healthcare interventions. The primary audience for most decision models is healthcare payers and, perhaps to a lesser extent, clinical guidelines groups. The process of developing and validating the relevant decision models should ensure that these audiences trust the results of the decision model. We note that transparency in the development process is a separate consideration from the subsequent activities of accessing the model to produce custom results, update model inputs, train future modelers, and repurpose models for a different research question. Although these activities have merit, they are not the primary purpose of developing a de novo decision model to inform decision makers about a specific research question. The ISPOR–SMDM (Society for Medical Decision Making) guidance appears to promote model transparency for the purpose of public validation rather than a broader set of purposes [4]. This encapsulates a central tension in the debate: Opponents of broader release worry about model appropriation or misuse for nefarious purposes as well as a detrimental effect on future funding prospects, whereas proponents argue that limited release undercuts the potential for further innovation and improvement through experimentation by others [1, 3, 9, 10].

The purpose of this paper is to provide an overview of the issues surrounding transparency from the perspective of university-based researchers and academic institutions, informed by recent joint experiences with ICER, and to offer key considerations for the field of health economics and outcomes research in the future. We believe these issues apply regardless of whether the work is being funded by governments, payers, health technology assessment (HTA) bodies, or industry. Our focus is on strategies for providing “direct model access”—in other words, allowing one or more interested parties to obtain and review the source code, data, and technical documentation of a model developed by another party.

Acknowledging existing standards for the description of methods and inputs used in academic models, this paper seeks to delineate potential strategies to help move toward increased transparency as well as challenges related to balancing the needs and interests of the academic model developers and other parties with interests in model outputs. Finally, the paper also details an initiative to increase the transparency of models that is being developed to support ICER reviews of new and emerging technologies. In the spirit of the ISPOR–SMDM guidance [4], this initiative has a narrow scope, focusing on model development, with the goal of providing direct access to a working version of the model for the stakeholders most qualified to review the model structure, key assumptions, parameter estimates, and other features.

2 Potential Benefits of Direct Model Access

A key potential benefit of increased model transparency via direct access to the model would be, put simply, higher-quality models. Direct access during the development of a model could facilitate and encourage review by interested parties into the structure, underlying assumptions, and key inputs of the model and facilitate attempts to replicate model findings. This could enable the identification of errors and/or suboptimal choices for sources of information and data collection used to inform model inputs. There would also be the opportunity to assess assumptions and methodological decisions related to how the model is designed and populated. In addition, direct access could help identify and characterize sources and levels of uncertainty in model estimates informing the construction of sensitivity analyses and important scenario analyses to consider. At its best, providing external groups direct access can serve as a thorough, high-quality peer review of the structure, source code, and data used in the model, with the attendant goals of maximized validity, reliability, and credibility as well as achieving the best possible understanding of the current level of robustness and potential for error in using the model.
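As a concrete illustration of what direct access enables, consider a deliberately simplified two-state (alive/dead) Markov cohort model. Everything below is hypothetical—the states, parameter values, and function names are invented for demonstration and do not come from any ICER model—but it shows the kinds of checks a reviewer with source-code access could perform: verifying the transition logic, recomputing the incremental cost-effectiveness ratio (ICER), and rerunning a one-way sensitivity analysis.

```python
# Illustrative sketch only: a hypothetical two-state Markov cohort model
# comparing a new treatment with standard care. All parameter values are
# invented for demonstration; a reviewer with direct model access could
# inspect every formula and vary any input.

def markov_ce(p_death, cost_cycle, utility, cycles=10, cohort=1000):
    """Return (total cost, total QALYs) for a simple two-state cohort."""
    alive = cohort
    total_cost = 0.0
    total_qalys = 0.0
    for _ in range(cycles):
        total_cost += alive * cost_cycle   # costs accrue while alive
        total_qalys += alive * utility     # QALYs accrue while alive
        alive *= (1 - p_death)             # transition: a fraction of the cohort dies
    return total_cost, total_qalys

def icer(treat, control):
    """Incremental cost-effectiveness ratio: (delta cost) / (delta QALYs)."""
    return (treat[0] - control[0]) / (treat[1] - control[1])

# Base case: treatment lowers the per-cycle death probability at higher cost.
control = markov_ce(p_death=0.10, cost_cycle=1_000, utility=0.70)
treated = markov_ce(p_death=0.05, cost_cycle=5_000, utility=0.75)
print(f"Base-case ICER: {icer(treated, control):,.0f} per QALY")

# One-way sensitivity analysis on the treatment death probability --
# exactly the kind of check that direct model access makes possible.
for p in (0.03, 0.05, 0.07):
    t = markov_ce(p_death=p, cost_cycle=5_000, utility=0.75)
    print(f"p_death={p:.2f} -> ICER {icer(t, control):,.0f} per QALY")
```

Real health-economic models are far more elaborate (more states, discounting, half-cycle corrections, probabilistic analyses), but the point stands at any scale: with only a published methods description, a reviewer must take such calculations on trust; with the source, each line can be checked and re-run.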

Beyond the primary goals of model development, some have suggested that open access models could serve to increase efficiency in the modeling community by reducing duplication of work [2, 9]. Currently, several parties may simultaneously be designing models without knowledge of other efforts, leading to redundancy. Future modelers may also benefit from free models, as they would only need to update and repurpose rather than develop a de novo model. However, it is also useful to have more than one modeling effort for a decision problem as this allows for the assessment of structural uncertainty, that is, how modeling assumptions and underlying structural choices can affect the outcomes. If the purpose is for the field in general to arrive at a gold standard for a specific model in a particular therapeutic area, the question becomes whether a single publicly released model should become the starting point or whether the field is better served by striving for some form of convergent validity through the production of multiple models, academic discourse, and frank conversation, which has been the general approach to date. The Mt. Hood Diabetes Challenge Network [20], which brought together experts from around the world to promote an open exchange of ideas on economic simulation modeling in diabetes, is an example of a formalized version of this latter approach. Hence, an important consideration of transparency efforts is to balance gains from competition and diversity of ideas with efficiency and improved oversight into any one model [3, 6].

A further potential benefit of open source models could be improved ability to cite model developers/authors in derivative works [2]. This can serve to increase awareness and to foster collaborative relationships between investigators and other leaders interested in the general modeling process. For academics, this can lead to more citations in published works related to the model or to the treatment area. This form of collaboration, should it manifest, could also lead to diffusion of best practices and educational spill-overs across individuals and groups that, over time, could lead to more rapid and innovative advances in modeling. While outside the scope of this paper, similar potential benefits have been described in relation to sharing algorithms and results from basic science studies and clinical trials [11]. All this said, these public good benefits will not necessarily accrue to the original model developers and thus may require a system that can balance and incentivize the optimal distribution of benefits in line with time, effort, and resources used in the process. Ultimately, valid, well-publicized models will be more relevant to decision makers. Therefore, transparency is essential to a robust and valid approach to model development and is central to the continued growth of organizations actively utilizing decision modeling to improve healthcare decision making.

3 Challenges to Direct Model Access

With the above-mentioned potential benefits to direct model access come several challenges. A fundamental barrier stems from the currently low levels of non-industry funding in the USA for model development, which typically only covers time related to specific project objectives, leaving little room for other activities, even if deemed meritorious or considered a public good. In this context, the potentially unfunded time and effort required to share models with external groups, who often require a detailed technical guide and/or a simplified user interface, may not be viewed as a priority. The amount of extra effort required increases for stakeholders lacking modeling expertise. Even with National Institutes of Health-funded models, which are required to be made public, transparency processes are seldom, if ever, followed in a decision-relevant timeframe [12], an issue that also persists with publicly funded clinical trial data [13]. Models funded by industry come with a unique set of issues, as contractual arrangements with academic researchers may limit the ability to share the model with external audiences.

Another concern is that allowing direct access can lead to delays, or even bias, created by input on model assumptions and parameters from groups with different sets of incentives. Allowing external groups to access a model during the development process opens the process to external influence. Much of the input received would likely consist of valid critiques and suggestions for improvement. However, groups such as manufacturers of the products in question often prefer modeling techniques and interpretations of data that favor their products and may attempt to steer the model development process toward a set of assumptions or inputs that match their interests. Reviews of the cost-effectiveness literature support such findings [14, 15, 16]. In the extreme and given the amount of potential profits in play for the developers and manufacturers of the interventions in question, especially when the model results have a potential impact on subsequent approval and pricing of the technology, some may view transparency initiatives as an opportunity to undermine the model development process altogether. This potential for delays and introduction of bias is directly at odds with the goals of providing timely and valid results and increases the resources required to develop models. Finally, a specific challenge when models are developed to support HTA decisions relates to the acceptability of prepublication or otherwise confidential data from manufacturers, which are often redacted in public documents. The presence of these data will naturally limit the ability to replicate or fully interrogate models, even if released publicly.

A general reluctance about calls for free open source models within academia stems from several contextual considerations. Model-building activities in academic settings are considered research, and the standard for reporting research findings is to describe the process sufficiently to allow replication. This allows for confirmation of the research findings when another research group sets about answering the same research question but using their own processes and resources and with sufficient technical details regarding the methodology. Healthcare research dollars are a limited resource, and researchers and their institutions compete for research funding. Therefore, academic researchers and related institutions consider model development to be a research investment that can yield short- and long-term returns in the form of future research funding, collaborations, and publications. Career prospects are directly linked to the ability to attain research funding and produce scholarly works. Therefore, especially as a voluntary effort, there is little enthusiasm for options such as making models completely open to the public if that could decrease funding and/or publication opportunities. For example, other research groups competing for the same pool of research funds could submit a more competitive research proposal because fewer resources would be required if a robust, validated model is freely available. An analogy can be made to laboratory-based research. If a research group spent time and effort developing a microbial strain, giving the strain away for free may allow other groups far greater opportunity to leverage the innovation and potentially outcompete the innovator group for future research dollars. Scientific progress is made through incremental improvements on previous researchers’ works but also through competition and incentives for innovators. The key to producing short- and long-term scientific gains is to find the appropriate balance.

4 Potential Solutions and Way Forward

To alleviate some of the concerns and risks of open access to models, innovative data sharing agreements between research groups can be used to establish acceptable parameters for model sharing. Through contracts and financial incentives, interested parties can be allowed access to the model for understanding and technical review but with restrictions on future use of the model or the intellectual property contained therein. In fact, the ISPOR good practices for outcomes research paper on model transparency and validity suggests the use of formal data use agreements to facilitate model transparency efforts [4]. In addition to improved rules and agreements via contracts, portals for sharing models could be designed better to help foster transparency while protecting against copying valuable aspects of the model.

The most feasible type of model-sharing agreement will depend on multiple factors, such as incentives for scholarly activities, future funding, ownership of the model, and jurisdiction of model development. A key component of the discussion will naturally be how the work involved in model sharing will be funded. As with most aspects of transparency initiatives generally, the possible range exists on a broad spectrum between cost recovery and the typically six-figure contracts to develop models for commercial clients.

Model-sharing agreements can range from relatively simple versions such as confidentiality agreements or Creative Commons licenses to more advanced licensing or data use agreements. The benefits of confidentiality agreements include relatively quick execution between parties along with a relatively low but still relevant protection of intellectual property. Creative Commons licenses provide free open access while also allowing model developers to further specify the use of their model by outside parties [21]. With Creative Commons licensing, modelers can allow or deny commercial use of their model and specify whether new users can adapt the model for other applications. However, Creative Commons licenses may be difficult to enforce [17]. More detailed and robust licensing agreements through universities or other entities can allow access under more specific rules that provide further protection and consequences for licensing infractions and the means to support the extra effort required to share models through licensing fees. Each of these arrangements requires a different level of effort to design and execute, so their use should be aligned with the model-sharing goals, parties involved, and decision-making context.

5 Real-World Example: Model-Sharing Initiative by the Institute for Clinical and Economic Review

A prominent real-world example of providing direct model access to interested stakeholders is a transparency project launched by ICER in conjunction with the academic collaborators who develop and specify economic models to inform the cost-effectiveness and budget impact evaluations contained in ICER appraisals of both new and established technologies. The initial pilots were associated with migraine [18] and endometriosis [19] reviews that were ongoing at the time. Rather than a focus on broad public release for its own sake, this transparency effort was intended to answer questions about whether the models were fit for purpose to inform the policy decisions of interest for the specific ICER review.

The pilots involved direct contracting between the academic groups developing the models and the manufacturers of the products under review, given that the intellectual property being shared resided with the collaborators. Under these agreements, manufacturers paid a small fee to the relevant academic institution to cover the added costs of preparing the models for review, including the development of user documentation. Manufacturers were also asked to sign confidentiality and/or licensing agreements that prevented copying and/or distributing of the models. Access was time limited and targeted to fall within the 4-week public comment period following ICER’s posting of its draft reports.

Results from the pilots were disparate. The endometriosis pilot involved a single manufacturer whose general engagement during the review was limited, and the invitation to participate in the pilot was declined. The migraine pilot involved three manufacturers and featured an Excel-based model that was released on a Box [22] platform to allow controlled access from any location. All calculations and formulas were available to reviewers. Overall, this pilot was deemed a success. The manufacturers considered the contracting and model release process to be relatively smooth and described communication about the release as clear and consistent. In addition, stakeholders identified minor but relevant errors in the model that could be corrected in time for the final report. Still, there were several logistical challenges worth mentioning. First, company firewalls created problems with access to Box in some cases, requiring individuals to work outside of their preferred and secured information technology environment. In addition, the migraine model was populated in part by data submitted as “in confidence” by the manufacturers, which necessitated data redaction along with a concurrent worry that back calculation of confidential results, while prohibited, was nonetheless potentially feasible. Manufacturers requested customized versions of the model with their own data unredacted, but the nominal fee charged would not have covered the additional effort required for this change. Finally, the manufacturers were interested in more detail in the technical documentation and greater opportunity for interaction with the modelers. However, they also reported that the model structure, estimation, and documentation were reasonably straightforward; the benefit that additional interaction could provide was therefore unclear.

Subsequent reviews have produced similar outcomes. ICER’s review of treatments for hereditary angioedema involved two manufacturers, both of whom declined to participate in a model release. However, the organization developed an internal model of medication-assisted treatments for opioid use disorder, which was shared with three participating manufacturers as a “live” web app on heRo3 [23], an online modeling tool that works with a cloud-based, open-source health economics modeling package in the programming language R. In this case, authorized users were sent a secure weblink to access the model in the heRo3 environment. Users could modify certain parameters to assess changes in model results but could not make any permanent model changes. One manufacturer submitted confidential data and had exclusive access to a separate version of the model populated with the confidential information. As with the migraine pilot, feedback from the manufacturers focused primarily on logistical issues. First and foremost, while the use of R-based modeling is increasing, the participating manufacturers had limited exposure to R, making a technical evaluation and understanding of the available code challenging. Some manufacturers also reported difficulty in identifying certain parameters and reviewing sensitivity analyses, citing a lack of familiarity with the platform. While the heRo3 vendor did offer a tutorial session for reviewers, this did not appear to mitigate all concerns.
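The hosted model itself was built in R on the heRo3 platform; as a language-agnostic sketch of the access model described above (written here in Python purely for illustration, with invented parameter names and values), the idea of letting reviewers vary a whitelisted set of parameters without making permanent changes can be thought of as a thin wrapper around the model:

```python
# Illustrative sketch only (hypothetical, and in Python rather than the R
# used on heRo3): a wrapper that lets reviewers run what-if scenarios over
# a whitelisted set of parameters without persisting any change to the
# base model. All names and values are invented for demonstration.

BASE_PARAMS = {"p_response": 0.40, "drug_cost": 12_000, "utility_gain": 0.08}
EDITABLE = {"p_response", "drug_cost"}   # parameters a reviewer may vary

def run_model(params):
    """Stand-in for the real model: net monetary benefit at $100k/QALY."""
    return 100_000 * params["p_response"] * params["utility_gain"] - params["drug_cost"]

def review_run(overrides):
    """Run the model with reviewer overrides; the base model is untouched."""
    illegal = set(overrides) - EDITABLE
    if illegal:
        raise ValueError(f"not editable: {sorted(illegal)}")
    scenario = {**BASE_PARAMS, **overrides}   # copy; never mutate the base
    return run_model(scenario)

print(review_run({}))                    # base case
print(review_run({"p_response": 0.50}))  # reviewer what-if scenario
print(BASE_PARAMS["p_response"])         # base model unchanged: 0.4
```

The design choice mirrors the trade-off reported in the pilots: reviewers gain enough access to interrogate results and test assumptions, while the model owner retains control over the code and over which inputs are exposed.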

Moving forward, ICER intends to work with its collaborators to routinely offer the opportunity for model examination to manufacturers of the products under review for every future topic. The hope is that, given the nominal fee and the fact that manufacturers already devote substantial resources to the review of ICER models, this will become a more predictable and consistent exercise. ICER is also willing to extend invitations for model review to patient advocacy groups, payers, and other stakeholders relevant to the model review process; to date, such interest has been limited.

6 Conclusions

Overall, model access during development, if viewed primarily in the contexts of efficiency and validation that have driven the ICER model transparency initiative, seems feasible and could be quite beneficial to all involved parties. Indeed, the importance of this discussion is international in scope, given that nearly all mature HTA bodies develop or critique manufacturer-submitted models to assess the value of new health interventions. Direct and openly public access after model development may be more difficult to resolve because of the lack of funding and incentives. Certainly, achieving model transparency will require improved stakeholder engagement, increased funding by interested parties, and further development of legal assurances to protect intellectual property. The ongoing debate about model transparency is important as we collectively work to improve the development and use of economic evidence to support healthcare decision making. As this process evolves, there is a strong impetus to work together with the interests and constraints of all stakeholders considered. As all health economists—formal and amateur—know, incentives matter. Hence, the key to moving forward is to develop a sustainable approach to reap the benefits of transparency that is robust, objective, and responsive to the various needs of the involved stakeholders.



Drs. Carlson, McQueen, Ollendorf, and Walton were responsible for preparation of the draft manuscript, manuscript revision, and responses to reviewer comments. Drs. Basu, Campbell, Chapman, Pearson, Touchette, Veenstra, and Whittington provided critical review of both the draft and revised manuscripts as well as a draft of the response to reviewer comments. Additionally, Drs. Touchette, Walton, Campbell, and McQueen provided personal reflections regarding their experiences with ICER’s pilot transparency initiative.

Compliance with Ethical Standards


No sources of funding were used to develop this manuscript.

Conflicts of Interest

The work described in the manuscript is relevant to Dr. Carlson’s current grant from ICER. Dr. Carlson has carried out consulting work with pharmaceutical companies that were included in the work described in the manuscript. Some of the work described in the manuscript is relevant to Dr. Walton’s and Dr. Touchette’s current contract-related funding from ICER. Specific transparency efforts related to a project with ICER described here that involved Dr. Walton and Dr. Touchette were funded by Allergan, Amgen, and Teva Pharmaceuticals. Drs. Campbell, McQueen, and Whittington have received project-related funding from ICER. Drs. Basu and Veenstra work at the University of Washington, which has a contract with ICER. Drs. Chapman and Pearson are employees, and Dr. Ollendorf is a former employee, of ICER, whose efforts are funded by the Laura and John Arnold Foundation and the California Healthcare Foundation. Over the past 3 years, ICER as an organization has received membership dues—unrelated to these activities—from a variety of payer and life science companies.


  1. Cohen AB. Point-counterpoint: cost-effectiveness analysis in medical care and the issue of economic model transparency. Med Care. 2017;55(11):907–8.
  2. Cohen JT, Neumann PJ, Wong JB. A call for open-source cost-effectiveness analysis. Ann Intern Med. 2017;167(6):432–3.
  3. Padula WV, McQueen RB, Pronovost PJ. Can economic model transparency improve provider interpretation of cost-effectiveness analysis? Evaluating tradeoffs presented by the second panel on cost-effectiveness in health and medicine. Med Care. 2017;55(11):909–11.
  4. Eddy DM, Hollingworth W, Caro JJ, et al. Model transparency and validation: a report of the ISPOR-SMDM Modeling Good Research Practices Task Force–7. Value Health. 2012;15(6):843–50.
  5. Sullivan SD, Yeung K, Vogeler C, et al. Design, implementation, and first-year outcomes of a value-based drug formulary. J Manag Care Spec Pharm. 2015;21(4):269–75.
  6. Dunlop WCN, Mason N, Kenworthy J, Akehurst RL. Benefits, challenges and potential strategies of open source health economic models. Pharmacoeconomics. 2017;35(1):125–8.
  7. Palmer AJ, Si L, Tew M, et al. Computer modeling of diabetes and its transparency: a report on the Eighth Mount Hood Challenge. Value Health. 2018;21(6):724–31.
  8. McCabe C, Dixon S. Testing the validity of cost-effectiveness models. Pharmacoeconomics. 2000;17(5):501–13.
  9. Cohen JT, Wong JB. Can economic model transparency improve provider interpretation of cost-effectiveness analysis? A response. Med Care. 2017;55(11):912–4.
  10. Sampson CJ, Arnold R, Bryan S, et al. Transparency in decision modelling: what, why, who and how? Pharmacoeconomics. 2019. [Epub ahead of print].
  11. Allen C, Mehler DMA. Open science challenges, benefits and tips in early career and beyond. PLoS Biol. 2019;17(5):e3000246.
  12. Iqbal SA, Wallach JD, Khoury MJ, et al. Reproducible research practices and transparency across the biomedical literature. PLoS Biol. 2016;14(1):e1002333.
  13. Koenig F, Slattery J, Groves T, et al. Sharing clinical trial data on patient level: opportunities and challenges. Biom J. 2015;57(1):8–26.
  14. Neumann PJ, Thorat T, Shi J, et al. The changing face of the cost-utility literature, 1990–2012. Value Health. 2015;18(2):271–7.
  15. Neumann PJ, Fang CH, Cohen JT. 30 years of pharmaceutical cost-utility analyses: growth, diversity and methodological improvement. Pharmacoeconomics. 2009;27(10):861–72.
  16. Bell CM, Urbach DR, Ray JG, et al. Bias in published cost effectiveness studies: systematic review. BMJ. 2006;332(7543):699–703.
  17. Loren LP. Building a reliable semicommons of creative works: enforcement of creative commons licenses and limited abandonment of copyright. Lewis and Clark School of Law. 2019. Accessed Aug 2019.
  18. Institute for Clinical and Economic Review. Final evidence report: calcitonin gene-related peptide (CGRP) inhibitors as preventive treatments for patients with episodic or chronic migraine: effectiveness and value. 2019. Accessed Aug 2019.
  19. Institute for Clinical and Economic Review. Final evidence report: elagolix for treating endometriosis. 2019. Accessed Aug 2019.
  20. Mt Hood Diabetes Challenge Network. Economics, simulation modelling and diabetes. 2019. Accessed Aug 2019.
  21. Creative Commons. About the licenses. Accessed Aug 2019.
  22. Box for individuals and teams. Accessed Aug 2019.
  23. Policy Analysis, Inc. heRo3 support. Accessed Aug 2019.

Copyright information

© The Author(s) 2019

Open Access: This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License, which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  • Josh J. Carlson (1)
  • Surrey M. Walton (2)
  • Anirban Basu (1)
  • Richard H. Chapman (3)
  • Jonathan D. Campbell (4)
  • R. Brett McQueen (4)
  • Steven D. Pearson (3)
  • Daniel R. Touchette (2)
  • David Veenstra (5)
  • Melanie D. Whittington (4)
  • Daniel A. Ollendorf (6)

  1. Department of Pharmacy, Comparative Health Outcomes, Policy, and Economics (CHOICE) Institute, University of Washington, Seattle, USA
  2. Department of Pharmacy Systems, Outcomes, and Policy, University of Illinois-Chicago, Chicago, USA
  3. Institute for Clinical and Economic Review (ICER), Boston, USA
  4. University of Colorado School of Pharmacy, Aurora, USA
  5. University of Washington, Seattle, USA
  6. Center for the Evaluation of Value and Risk in Health, Tufts Medical Center, Boston, USA
