
Modeling and designing health care payment innovations for medical imaging


Payment innovations that better align incentives in health care are a promising approach to reduce health care costs and improve quality of care. Designing effective payment systems, however, is challenging due to the complexity of the health care system with its many stakeholders and their often conflicting objectives. There is a lack of mathematical models that can comprehensively capture and efficiently analyze the complex, multi-level interactions and thereby predict the effect of new payment systems on stakeholder decisions and system-wide outcomes. To address the need for multi-level health care models, we apply multiscale decision theory (MSDT) and build upon its recent advances. In this paper, we specifically study the Medicare Shared Savings Program (MSSP) for Accountable Care Organizations (ACOs) and determine how this incentive program affects computed tomography (CT) use, and how it could be redesigned to minimize unnecessary CT scans. The model captures the multi-level interactions, decisions and outcomes for the key stakeholders, i.e., the payer, ACO, hospital, primary care physicians, radiologists and patients. Their interdependent decisions are analyzed game theoretically, and equilibrium solutions, which represent stakeholders' normative decision responses, are derived. Our results provide decision-making insights for the payer on how to improve MSSP, for ACOs on how to distribute MSSP incentives among their members, and for hospitals on whether to invest in new CT imaging systems.






Acknowledgements

The authors thank the editor and referees for handling this paper and for providing constructive comments. This research was funded by the National Science Foundation under award number CMMI-1335407 and by the Harvey L. Neiman Health Policy Institute of the American College of Radiology.

Author information



Corresponding author

Correspondence to Hui Zhang.



A1. Representation of agents’ UMPs

In agent P’s UMP, its monetary payoff is

$$\begin{array}{@{}rcl@{}} &&{{\Pi}^{P1}}(\theta_{P}|a_{h})={{N}_{1}}\cdot \{\frac{1}{2}({{c}_{H,A,N}}+{{c}_{S,A,N}})\\ &&-\theta_{P}\cdot [{{c}_{S,A,N}}-\tilde{q}{{c}_{S,I,T}}-(1-\tilde{q}){{c}_{S,I,N}}]\\ &&+{\theta_{P}^{2}}\cdot \frac{1}{2}[({{c}_{S,A,N}}-{{c}_{H,A,N}})+(1-\tilde{q})({{c}_{H,I,T}}-{{c}_{S,I,N}})\\ &&+\tilde{q}({{c}_{H,I,N}}-{{c}_{S,I,T}})]\}. \end{array} $$

Similarly, agent P’s health benefit is

$$\begin{array}{@{}rcl@{}} &&{{B}^{P1}}(\theta_{P}|a_{h})={{N}_{1}}\cdot \{\frac{1}{2}({{\mu}_{H,A,N}}+{{\mu}_{S,A,N}})\\ &&-\theta_{P}\cdot [{{\mu}_{S,A,N}}-\tilde{q}{{\mu}_{S,I,T}}-(1-\tilde{q}){{\mu}_{S,I,N}}] \\ &&+{\theta_{P}^{2}}\cdot \frac{1}{2}[({{\mu}_{S,A,N}}-{{\mu}_{H,A,N}})+(1\!-\tilde{q})({{\mu}_{H,I,T}}\!-{{\mu}_{S,I,N}})\\ &&+\tilde{q}({{\mu}_{H,I,N}}-{{\mu}_{S,I,T}})]\}. \end{array} $$

In agent R’s UMP, its monetary payoff is

$${{\Pi}^{R1}}(\theta_{P})+{{\Pi}^{R2}}(\theta_{R})={{N}_{1}}\cdot \theta_{P}\cdot {{c}_{I}}+{{N}_{2}}\cdot \theta_{R}\cdot {{c}_{I}}. $$

Agent R’s health benefit is

$$\begin{array}{@{}rcl@{}} &&{{B}^{R2}}(\theta_{R}|a_{h})={{N}_{2}}\cdot \{\frac{1}{2}{{\mu}_{H,A,N}}(2-r)+\frac{1}{2}{{\mu}_{S,A,N}}\cdot r \\ &&-\theta_{R}\cdot \{r\cdot [{{\mu}_{S,A,N}}-\tilde{q}{{\mu}_{S,I,T}}-(1-\tilde{q}){{\mu}_{S,I,N}}]\\ &&+(1-r)[{{\mu}_{H,A,N}}-\tilde{q}{{\mu}_{H,I,N}}-(1-\tilde{q}){{\mu}_{H,I,T}}]\} \\ &&+{\theta_{R}^{2}}\cdot \frac{1}{2}r\cdot [({{\mu}_{S,A,N}}-{{\mu}_{H,A,N}})\\ &&+(1-\tilde{q})({{\mu}_{H,I,T}}-{{\mu}_{S,I,N}})+\tilde{q}({{\mu}_{H,I,N}}-{{\mu}_{S,I,T}})]\}. \end{array} $$

A2. Proof of Theorem 1

The expressions for \(\theta _{P}^{*h}\) and \(\theta _{R}^{*h}\) follow from the first-order conditions of \(B^{P1}(\theta_{P}|a_{h})\) and \(B^{R2}(\theta_{R}|a_{h})\). It is also straightforward to check that \(\theta _{P}^{*m}=\theta _{R}^{*m}=1\).
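As an illustration, the first-order condition can be checked numerically. The Python sketch below uses hypothetical values for \(\tilde{q}\) and the \(\mu\) parameters (chosen only so that \(X_{1}<0\), \(X_{2}<0\) and \(0<X_{1}/X_{2}<1\); they are not the paper's calibration) and compares the closed-form stationary point \(X_{1}/X_{2}\) of \(B^{P1}\) with a brute-force grid search:

```python
# Hypothetical benefit parameters (not the paper's calibration), chosen so
# that X1 < 0, X2 < 0 and 0 < X1/X2 < 1 as required by Theorem 1.
Q = 0.3  # q_tilde
MU = {"SAN": 5.0, "HAN": 8.0, "SIT": 8.0, "SIN": 6.0, "HIT": 9.0, "HIN": 4.0}

X1 = MU["SAN"] - Q * MU["SIT"] - (1 - Q) * MU["SIN"]
X2 = (MU["SAN"] - MU["HAN"]
      + (1 - Q) * (MU["HIT"] - MU["SIN"])
      + Q * (MU["HIN"] - MU["SIT"]))

def b_p1(theta, n1=1.0):
    """Agent P's health benefit B^P1(theta | a_h) from Appendix A1."""
    return n1 * (0.5 * (MU["HAN"] + MU["SAN"]) - theta * X1 + 0.5 * theta ** 2 * X2)

theta_closed = X1 / X2                                         # stationary point
theta_grid = max((i / 10000 for i in range(10001)), key=b_p1)  # brute force on [0, 1]
```

Since \(X_{2}<0\), \(B^{P1}\) is strictly concave in \(\theta_{P}\) and the grid maximizer agrees with \(X_{1}/X_{2}\).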

Next, we show that \(\theta _{P}^{*}\) satisfies \(\theta _{P}^{*h}<\theta _{P}^{*}\le 1\). Denote

$$\begin{array}{@{}rcl@{}} {{X}_{1}}&=&{{\mu}_{S,A,N}}-\tilde{q}{{\mu}_{S,I,T}}-(1-\tilde{q}){{\mu}_{S,I,N}}, \\ {{X}_{2}}&=&{{\mu}_{S,A,N}}-{{\mu}_{H,A,N}}+(1-\tilde{q})({{\mu}_{H,I,T}}-{{\mu}_{S,I,N}})\\ &&+\tilde{q}({{\mu}_{H,I,N}}-{{\mu}_{S,I,T}}). \end{array} $$

We have \(0<\theta _{P}^{*h}=\frac {{{X}_{1}}}{{{X}_{2}}}<1\), \({{X}_{1}}<0\), \({{X}_{2}}<0\). The first derivative of \(U^{P}(\theta_{P}|a_{h})\) with respect to \(\theta_{P}\) is

$$\begin{array}{@{}rcl@{}} &\frac{\partial {{U}^{P}}(\theta_{P}|a_{h})}{\partial \theta_{P}}=-{{N}_{1}}[{{c}_{S,A,N}}-\tilde{q}{{c}_{S,I,T}}-(1-\tilde{q}){{c}_{S,I,N}}\\ &+{{\lambda}^{P}}{{X}_{1}}]+{{N}_{1}}\theta_{P}[(1-2\tilde{q})({{c}_{H,I,T}}-{{c}_{H,I,N}})+{{\lambda} ^{P}}{{X}_{2}}]. \end{array} $$

Given the inequalities \({{c}_{S,A,N}}-\tilde {q}{{c}_{S,I,T}}-(1-\tilde {q}){{c}_{S,I,N}}<0\), \((1-2\tilde {q})({{c}_{H,I,T}}-{{c}_{H,I,N}})<0\), \({{X}_{1}}<0\), \({{X}_{2}}<0\), we have

$$\theta_{P}^{*}=\left\{\begin{array}{ll} \frac{{{c}_{S,A,N}}-\tilde{q}{{c}_{S,I,T}}-(1-\tilde{q}){{c}_{S,I,N}}+{{\lambda}^{P}}{{X}_{1}}}{(1-2\tilde{q})({{c}_{H,I,T}}-{{c}_{H,I,N}})+{{\lambda}^{P}}{{X}_{2}}}, & \text{if } \frac{{{c}_{S,A,N}}-\tilde{q}{{c}_{S,I,T}}-(1-\tilde{q}){{c}_{S,I,N}}+{{\lambda}^{P}}{{X}_{1}}}{(1-2\tilde{q})({{c}_{H,I,T}}-{{c}_{H,I,N}})+{{\lambda}^{P}}{{X}_{2}}}<1, \\ 1, & \text{if } \frac{{{c}_{S,A,N}}-\tilde{q}{{c}_{S,I,T}}-(1-\tilde{q}){{c}_{S,I,N}}+{{\lambda}^{P}}{{X}_{1}}}{(1-2\tilde{q})({{c}_{H,I,T}}-{{c}_{H,I,N}})+{{\lambda}^{P}}{{X}_{2}}}\ge 1. \end{array}\right. $$

Because \({{c}_{S,A,N}}-\tilde {q}{{c}_{S,I,T}}-(1-\tilde {q}){{c}_{S,I,N}}-(1-2\tilde {q})({{c}_{H,I,T}}-{{c}_{H,I,N}})<0\), the result \(\theta _{P}^{*h}<\theta _{P}^{*}\le 1\) is then immediate.

By the same reasoning, we have \(\theta _{R}^{*h}<\theta _{R}^{*}\le 1\). \(\square \)
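The ordering \(\theta_{P}^{*h}<\theta_{P}^{*}\le 1\) can also be probed numerically. In the sketch below, A and B abbreviate the cost aggregates \(c_{S,A,N}-\tilde{q}c_{S,I,T}-(1-\tilde{q})c_{S,I,N}\) and \((1-2\tilde{q})(c_{H,I,T}-c_{H,I,N})\); all draws are hypothetical and constrained only by the sign conditions used in the proof (\(A<B<0\), \(X_{2}<X_{1}<0\)):

```python
import random

# Randomized check of the ordering in Theorem 1 (hypothetical draws, not the
# paper's calibration). A and B abbreviate the two cost aggregates from the
# derivative of U^P; the draws enforce A < B < 0 and X2 < X1 < 0.
random.seed(0)

def sample_case():
    B = -random.uniform(0.1, 2.0)
    A = B - random.uniform(0.1, 2.0)                 # A < B < 0
    X2 = -random.uniform(0.1, 2.0)
    X1 = X2 + random.uniform(0.01, -X2 * 0.99)       # X2 < X1 < 0
    lam = random.uniform(0.1, 5.0)                   # lambda^P > 0
    return A, B, X1, X2, lam

def optima(A, B, X1, X2, lam):
    theta_h = X1 / X2                                # health-only optimum
    theta_star = min(1.0, (A + lam * X1) / (B + lam * X2))
    return theta_h, theta_star

cases = [optima(*sample_case()) for _ in range(1000)]
```

Every sampled case satisfies \(\theta_{P}^{*h}<\theta_{P}^{*}\le 1\), consistent with the proof's cross-multiplication argument.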

A3. Proof of Theorem 2

For agent P: denote

$$\begin{array}{@{}rcl@{}} {{X}_{3}}&=&{{\mu}_{S,A,N}}-q{{\mu}_{S,I,T}}-(1-q){{\mu}_{S,I,N}}, \\ {{X}_{4}}&=&{{\mu}_{S,A,N}}-{{\mu}_{H,A,N}}+(1-q)({{\mu}_{H,I,T}}-{{\mu}_{S,I,N}})\\ &&+q({{\mu}_{H,I,N}}-{{\mu}_{S,I,T}}),\\ {{X}_{5}}&=&-\delta {{\mu}_{S,I,T}}+\delta {{\mu}_{S,I,N}}, \\ {{X}_{6}}&=&-\delta ({{\mu}_{H,I,T}}-{{\mu}_{S,I,N}})+\delta ({{\mu}_{H,I,N}}-{{\mu}_{S,I,T}}). \end{array} $$

We have \(\theta _{P}^{*h}(a_{2})=\frac {{{X}_{3}}}{{{X}_{4}}}\) and \(\theta _{P}^{*h}(a_{1})=\frac {{{X}_{3}}+{{X}_{5}}}{{{X}_{4}}+{{X}_{6}}}\). Since \(X_{3}<0\), \(X_{4}<0\), \(X_{3}>X_{4}\), \(X_{5}-X_{6}<0\), \(X_{5}<0\), \(X_{6}<0\), we have

$$\theta_{P}^{*h}(a_{1})-\theta_{P}^{*h}(a_{2})=\frac{{{X}_{4}}{{X}_{5}}-{{X}_{3}}{{X}_{6}}}{({{X}_{4}}+{{X}_{6}}){{X}_{4}}}>0. $$

Therefore, when agent H switches from \(a_{2}=1\) to \(a_{1}=1\), \(\theta _{P}^{*h}\) increases.
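This step is essentially a mediant inequality and is easy to probe numerically. The sketch below draws hypothetical values (not the paper's data) satisfying \(X_{4}<X_{3}<0\) and \(X_{5}<X_{6}<0\) and confirms that \((X_{3}+X_{5})/(X_{4}+X_{6})>X_{3}/X_{4}\):

```python
import random

# Hypothetical draws satisfying the sign conditions X4 < X3 < 0 and
# X5 < X6 < 0 used in the proof of Theorem 2.
random.seed(1)

def check_once():
    X4 = -random.uniform(1.0, 3.0)
    X3 = X4 + random.uniform(0.1, -X4 * 0.9)   # X4 < X3 < 0
    X6 = -random.uniform(0.1, 1.0)
    X5 = X6 - random.uniform(0.1, 1.0)         # X5 < X6 < 0
    lhs = (X3 + X5) / (X4 + X6)                # theta_P^{*h}(a1)
    rhs = X3 / X4                              # theta_P^{*h}(a2)
    return lhs > rhs

results = [check_once() for _ in range(1000)]
```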

Next we consider the change in \(\theta _{P}^{*}\) when agent H switches from \(a_{2}=1\) to \(a_{1}=1\). Denote

$$X_{7}=\frac{\left( \begin{array}{l} c_{S,A,N}-(q+\delta)c_{S,I,T}-(1-q-\delta)c_{S,I,N}\\ \quad+\lambda^{P}[\mu_{S,A,N}-(q+\delta)\mu_{S,I,T}-(1-q-\delta)\mu_{S,I,N}] \end{array}\right)}{\left( \begin{array}{l} [1-2(q+\delta)](c_{H,I,T}-c_{H,I,N})+\lambda^{P}[(\mu_{S,A,N}-\mu_{H,A,N})\\ \quad+(1-q-\delta)(\mu_{H,I,T}-\mu_{S,I,N})+(q+\delta)(\mu_{H,I,N}-\mu_{S,I,T})] \end{array}\right)}. $$

When \(\theta _{P}^{*}(a_{2})<1\), \(\theta _{P}^{*}(a_{1})-\theta _{P}^{*}(a_{2})>0\) is equivalent to

$$\left\{\begin{array}{ll} {{X}_{7}}\ge 1>\theta_{P}^{*}(a_{2}), & \text{if } \theta_{P}^{*}(a_{1})=1, \\ {{X}_{7}}>\theta_{P}^{*}(a_{2}), & \text{if } \theta_{P}^{*}(a_{1})<1. \end{array}\right. $$

Hence, \(\theta _{P}^{*}(a_{1})-\theta _{P}^{*}(a_{2})>0\) is equivalent to \({{X}_{7}}-\theta _{P}^{*}(a_{2})>0\). Expanding this inequality yields condition Eq. 14 in Theorem 2.

For agent R: denote \(\theta _{R}^{*h}(a_{2})=\frac {{{X}_{8}}}{{{X}_{9}}}\), where \(X_{8}<0\), \(X_{9}<0\), \(X_{8}>X_{9}\), and \(\theta _{R}^{*h}(a_{1})=\frac {{{X}_{8}}+{{X}_{10}}}{{{X}_{9}}+{{X}_{11}}}\), where

$$\begin{array}{@{}rcl@{}} {{X}_{10}}&=&\delta \cdot r\cdot (-{{\mu}_{S,I,T}}+{{\mu}_{S,I,N}})\\ &&+\delta (1-r)(-{{\mu}_{H,I,N}}+{{\mu}_{H,I,T}}), \\ {{X}_{11}}&=&\delta \cdot r\cdot (-{{\mu}_{H,I,T}}+{{\mu}_{S,I,N}}+{{\mu}_{H,I,N}}-{{\mu}_{S,I,T}}). \end{array} $$

Because \(X_{10}-X_{11}<0\), \(X_{10}<0\), \(X_{11}<0\), we have

$$\theta_{R}^{*h}(a_{1})-\theta_{R}^{*h}(a_{2})=\frac{{{X}_{9}}{{X}_{10}}-{{X}_{8}}{{X}_{11}}}{({{X}_{9}}+{{X}_{11}}){{X}_{9}}}>0. $$

Therefore, when agent H switches from \(a_{2}=1\) to \(a_{1}=1\), \(\theta _{R}^{*h}\) increases.

Next we consider the change in \(\theta _{R}^{*}\) when agent H switches from \(a_{2}=1\) to \(a_{1}=1\). Denote

$$X_{12}=\frac{\left( \begin{array}{l} -{{c}_{I}}+\lambda^{R}\{r\cdot[\mu_{S,A,N}-(q+\delta)\mu_{S,I,T}\\ \quad -(1-q-\delta)\mu_{S,I,N}]+(1-r)[\mu_{H,A,N}\\ \qquad -(q+\delta)\mu_{H,I,N}-(1-q-\delta)\mu_{H,I,T}]\} \end{array}\right)}{\left( \begin{array}{l} \lambda^{R}\cdot r \cdot[\mu_{S,A,N}-\mu_{H,A,N}+(1-q-\delta)(\mu_{H,I,T}\\ \qquad -\mu_{S,I,N})+(q+\delta)(\mu_{H,I,N}-\mu_{S,I,T})] \end{array}\right)}. $$

Similarly, \(\theta _{R}^{*}(a_{1})-\theta _{R}^{*}(a_{2})>0\) is equivalent to \({{X}_{12}}-\theta _{R}^{*}(a_{2})>0\). By the same reasoning as for the change in \(\theta _{R}^{*h}\), when agent H switches from \(a_{2}=1\) to \(a_{1}=1\) and \(\theta _{R}^{*}(a_{2})<1\), \(\theta _{R}^{*}\) increases. \(\square \)

A4. Proof of Theorem 3

First, we provide the mathematical expressions for \(\theta _{P}^{**}\) and \(\theta _{R}^{**}\).

$$\theta_{P}^{**}=\left\{\begin{array}{ll} \widetilde{\theta}_{P}, & \text{if } 0<\widetilde{\theta}_{P}<1, \\ 0, & \text{if } \widetilde{\theta}_{P}\le 0, \\ 1, & \text{if } \widetilde{\theta}_{P}\ge 1, \end{array}\right. \quad \text{where} $$
$$\widetilde{\theta}_{P}= \frac{\left( \begin{array}{r}\eta \alpha (1+{{\gamma}_{p}}){{c}_{I}}+(1-\eta \alpha)[c_{S,A,N}-\tilde{q}{c}_{S,I,T}\\-(1-\tilde{q})c_{S,I,N}]+\lambda^{P}[\mu_{S,A,N}\\-\tilde{q}\mu_{S,I,T}- (1-\tilde{q})\mu_{S,I,N}]\end{array} \right)}{\left( \begin{array}{l}(1-\eta \alpha)(1-2\tilde{q})({{c}_{H,I,T}}-{{c}_{H,I,N}})\\\quad+{{\lambda}^{P}}[({{\mu}_{S,A,N}}-{{\mu} _{H,A,N}})\\ \quad\quad+(1-\tilde{q})({{\mu}_{H,I,T}}-{{\mu}_{S,I,N}})+\tilde{q}({{\mu}_{H,I,N}}-{{\mu}_{S,I,T}})]\end{array} \right)}; $$
$$\theta_{R}^{**}=\left\{\begin{array}{ll} \widetilde{\theta}_{R}, & \text{if } 0<\widetilde{\theta}_{R}<1, \\ 0, & \text{if } \widetilde{\theta}_{R}\le 0, \\ 1, & \text{if } \widetilde{\theta}_{R}\ge 1, \end{array}\right. \quad \text{where} $$
$$\widetilde{\theta}_{R}= \frac{\left( \begin{array}{l} -{{c}_{I}}+\eta \beta (1+{{\gamma}_{p}}){{c}_{I}} \\ \quad+{{\lambda}^{R}}\{r\cdot [{{\mu}_{S,A,N}}-\tilde{q}{{\mu}_{S,I,T}}-(1-\tilde{q}){{\mu}_{S,I,N}}]\\ \qquad+(1-r)[{{\mu} _{H,A,N}}-\tilde{q}{{\mu}_{H,I,N}}-(1-\tilde{q}){{\mu}_{H,I,T}}]\} \end{array}\right)} {\left( \begin{array}{l}{{\lambda} ^{R}}\cdot r\cdot [{{\mu}_{S,A,N}}-{{\mu}_{H,A,N}}\\ \quad+(1-\tilde{q})({{\mu}_{H,I,T}}-{{\mu} _{S,I,N}})+\tilde{q}({{\mu}_{H,I,N}}-{{\mu}_{S,I,T}})]\end{array} \right)}. $$

Next, we prove Theorem 3(a). For agent P: denote

$$\begin{array}{@{}rcl@{}} {{X}_{13}}&=&{{c}_{S,A,N}}-\tilde{q}{{c}_{S,I,T}}-(1-\tilde{q}){{c}_{S,I,N}},\\ {{X}_{14}}&=&(1-2\tilde{q})({{c}_{H,I,T}}-{{c}_{H,I,N}}),\\ {{X}_{15}}&=&{{\lambda}^{P}}[{{\mu}_{S,A,N}}-\tilde{q}{{\mu}_{S,I,T}}-(1-\tilde{q}){{\mu}_{S,I,N}}],\\ {{X}_{16}}&=&{{\lambda}^{P}}[({{\mu}_{S,A,N}}-{{\mu}_{H,A,N}})+(1-\tilde{q})({{\mu}_{H,I,T}}-{{\mu}_{S,I,N}})\\ &&+\tilde{q}({{\mu}_{H,I,N}}-{{\mu}_{S,I,T}})]. \end{array} $$

By Theorem 1, we have \(X_{13}<X_{14}<0\) and \(X_{16}<X_{15}<0\). When \(\theta _{P}^{**}\in (0,1)\), we have

$$\theta_{P}^{**}=\frac{\eta \alpha (1+{{\gamma}_{p}}){{c}_{I}}+(1-\eta \alpha ){{X}_{13}}+{{X}_{15}}}{(1-\eta \alpha ){{X}_{14}}+{{X}_{16}}}. $$

Taking the first derivative of \(\theta _{P}^{**}\) with respect to \(\alpha\) yields the inequality

$$\frac{\partial \theta_{P}^{**}}{\partial \alpha} \,=\,\frac{[{{X}_{14}}{{X}_{15}}\,-\,{{X}_{13}}{{X}_{16}}\,+\,({{X}_{14}}\,+\,{{X}_{16}})(1\,+\,{{\gamma}_{p}}){{c}_{I}}]\eta} {{{({{X}_{14}}\,+\,{{X}_{16}}\,-\,{{X}_{14}}\eta \alpha )}^{2}}}<0, $$
$$\text{for } \eta \in (0,1],\ \alpha \in [0,1],\ {{\gamma}_{p}}\in {{\mathbb{R}}^{+}}. $$

Hence \(\theta _{P}^{**}\) is strictly decreasing in \(\alpha\) when \(\theta _{P}^{**}\in (0,1)\).
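The monotonicity claim can be illustrated with a small numerical check. The constants below are hypothetical (chosen only to satisfy \(X_{13}<X_{14}<0\), \(X_{16}<X_{15}<0\) and to keep \(\theta_{P}^{**}\) interior on \([0,1]\)); the sketch evaluates \(\theta_{P}^{**}(\alpha)\) on a grid and confirms it decreases:

```python
# Hypothetical aggregates: X13 < X14 < 0, X16 < X15 < 0 (not the paper's data).
X13, X14 = -1.3, -1.2
X15, X16 = -1.6, -2.1
ETA, GAMMA_P, C_I = 0.8, 0.2, 0.5  # eta, gamma_p, c_I

def theta_pp(alpha):
    """Interior expression for theta_P^{**} as a function of alpha."""
    num = ETA * alpha * (1 + GAMMA_P) * C_I + (1 - ETA * alpha) * X13 + X15
    den = (1 - ETA * alpha) * X14 + X16
    return num / den

vals = [theta_pp(a / 100) for a in range(101)]
```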

For agent R: similarly, when \(\theta _{R}^{**}\in (0,1)\), denote \(\theta _{R}^{**}=\frac {-{{c}_{I}}+\eta \beta (1+{{\gamma }_{p}}){{c}_{I}}+{{X}_{17}}}{{{X}_{18}}}\), where \(X_{17}<0\) and \(X_{18}<0\). Taking the first derivative of \(\theta _{R}^{**}\) with respect to \(\beta\) yields the inequality

$$\frac{\partial \theta_{R}^{**}}{\partial \beta} =\frac{\eta (1+{{\gamma}_{p}}){{c}_{I}}}{{{X}_{18}}}<0, \quad \eta \in (0,1],\ \beta \in [0,1],\ {{\gamma}_{p}}\in {{\mathbb{R}}^{+}}. $$

Hence \(\theta _{R}^{**}\) is strictly decreasing in \(\beta\) when \(\theta _{R}^{**}\in (0,1)\). \(\square \)

Lastly, we prove Theorem 3(b).

For agent P: using previous notations, we have

$$\frac{\eta \alpha (1+{{\gamma}_{p}}){{c}_{I}}+(1-\eta \alpha ){{X}_{13}}+{{X}_{15}}}{(1-\eta \alpha ){{X}_{14}}+{{X}_{16}}}\le \frac{{{X}_{13}}+{{X}_{15}}}{{{X}_{14}}+{{X}_{16}}}, $$

and equality is attained at \(\alpha =0\) (by monotonicity).

Recall that \(\theta _{P}^{*}=\min \{1,\ \frac {{{X}_{13}}+{{X}_{15}}}{{{X}_{14}}+{{X}_{16}}}\}\).

When \(\frac {{{X}_{13}}+{{X}_{15}}}{{{X}_{14}}+{{X}_{16}}}\le 1\), \(\theta _{P}^{**}\le \frac {{{X}_{13}}+{{X}_{15}}}{{{X}_{14}}+{{X}_{16}}}=\theta _{P}^{*}\).

When \(\frac {{{X}_{13}}+{{X}_{15}}}{{{X}_{14}}+{{X}_{16}}}>1\), \(\theta _{P}^{**}\le 1=\theta _{P}^{*}\).

Hence we always have \(\theta _{P}^{**}\le \theta _{P}^{*}\).
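A quick numerical check of this bound for agent P, using hypothetical aggregates (values chosen only to satisfy the sign conditions of Theorem 1): \(\theta_{P}^{**}(\alpha)\le\theta_{P}^{*}\) for all \(\alpha\in[0,1]\), with equality at \(\alpha=0\):

```python
# Hypothetical aggregates satisfying X13 < X14 < 0 and X16 < X15 < 0.
X13, X14 = -1.3, -1.2
X15, X16 = -1.6, -2.1
ETA, GAMMA_P, C_I = 0.8, 0.2, 0.5  # eta, gamma_p, c_I

# theta_P^* = min(1, (X13 + X15)/(X14 + X16)), per Theorem 1.
theta_star = min(1.0, (X13 + X15) / (X14 + X16))

def theta_pp(alpha):
    """theta_P^{**}(alpha), clipped to [0, 1] as in the piecewise definition."""
    num = ETA * alpha * (1 + GAMMA_P) * C_I + (1 - ETA * alpha) * X13 + X15
    den = (1 - ETA * alpha) * X14 + X16
    return min(1.0, max(0.0, num / den))
```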

For agent R: following previous notations, we have

$$\frac{-{{c}_{I}}+\eta \beta (1+{{\gamma}_{p}}){{c}_{I}}+{{X}_{17}}}{{{X}_{18}}}\le \frac{-{{c}_{I}}+{{X}_{17}}}{{{X}_{18}}}, $$

and equality is attained at \(\beta =0\) (by monotonicity).

Recall that \(\theta _{R}^{*}=\min \{1,\ \frac {-{{c}_{I}}+{{X}_{17}}}{{{X}_{18}}}\}\).

When \(\frac {-{{c}_{I}}+{{X}_{17}}}{{{X}_{18}}}\le 1\), \(\theta _{R}^{**}\le \frac {-{{c}_{I}}+{{X}_{17}}}{{{X}_{18}}}=\theta _{R}^{*}\).

When \(\frac {-{{c}_{I}}+{{X}_{17}}}{{{X}_{18}}}>1\), \(\theta _{R}^{**}\le 1=\theta _{R}^{*}\).

Hence we always have \(\theta _{R}^{**}\le \theta _{R}^{*}\). \(\square \)

A5. Parameter values for numerical analysis

The parameter values used in Section 5 (Numerical analysis) are listed in Table 5.

Table 5 Parameter Values for Numerical Analysis


Cite this article

Zhang, H., Wernz, C. & Hughes, D.R. Modeling and designing health care payment innovations for medical imaging. Health Care Manag Sci 21, 37–51 (2018).



Keywords

  • Multiscale decision theory
  • Health care incentives
  • Health care payment systems
  • Accountable Care