
Elements of Program Design in Medicare’s Value-based and Alternative Payment Models: a Narrative Review


Increasing emphasis on value in health care has spurred the development of value-based and alternative payment models. Inherent in these models are choices around program scope (broad vs. narrow); selecting absolute or relative performance targets; rewarding improvement, achievement, or both; and offering penalties, rewards, or both. We examined and classified current Medicare payment models—the Hospital Readmissions Reduction Program (HRRP), Hospital Value-Based Purchasing Program (HVBP), Hospital-Acquired Conditions Reduction Program (HACRP), Medicare Advantage (MA) Quality Star Rating program, Physician Value-Based Payment Modifier (VM) and its successor, the Merit-Based Incentive Payment System (MIPS), and the Medicare Shared Savings Program (MSSP)—on these elements of program design and reviewed the literature to place findings in context. We found that current Medicare payment models vary significantly across each parameter of program design examined. For example, in terms of scope, the HRRP focuses exclusively on risk-standardized excess readmissions and the HACRP on patient safety. In contrast, HVBP includes 21 measures in five domains, including both quality and cost measures. Choices regarding penalties versus bonuses are similarly variable: HRRP and HACRP are penalty-only; HVBP, VM, and MIPS are penalty-or-bonus; and the MSSP and MA quality star rating programs are largely bonus-only. Each choice has distinct pros and cons that impact program efficacy. Unfortunately, there are scant data to inform which program design choice is best. While no one approach is clearly superior to another, the variability contained within these programs provides an important opportunity for Medicare and others to learn from these undertakings and to use that knowledge to inform future policymaking.

Take-Home Points
• Increasing emphasis on value in health care has spurred the development of value-based and alternative payment models. Inherent in these models are choices around program scope; selecting absolute or relative performance targets; rewarding improvement, achievement, or both; and offering penalties, rewards, or both.
• We examined current Medicare value-based and alternative payment models on these elements of program design and found that while current models vary significantly across each parameter, there are scant prior data to inform which choice is best.
• Such variability in program design may represent an important opportunity to learn from existing models and create new ones in the future.


Historic change in the way we pay for health care is underway. Public and private payers alike are increasingly moving toward alternative payment models (APMs) and value-based purchasing (VBP) models that link provider payment to the quality and/or costs of care delivered. The US Department of Health and Human Services has set a goal of tying 50% of Medicare reimbursements to APMs by 2018, with 90% of remaining fee-for-service payments tied to quality or value.1 These goals and other reforms, including the bipartisan Medicare and CHIP Reauthorization Act, or MACRA, which establishes widespread value-based physician payment and provides incentives for participation in APMs, are broadly spurring the development of new models. Emphasis on value in health care seems likely to persist, even given the recent change in administration.

In the creation of new payment models, policymakers face choices about program design: in particular, how to measure and reward quality and cost-savings. Alternative approaches differ fundamentally, and each has pros and cons.2 In this article, we examine four dimensions of program design central to program function (Table 1): first, the scope of a program’s measures; second, whether performance targets are absolute or relative; third, whether a program rewards improvement or achievement or both; fourth, whether incentives are framed as penalties or rewards.

Table 1 Elements of Program Design in Current Medicare Value-Based and Alternative Payment Programs

Using this framework, we examined seven Medicare programs (Table 1). These included three hospital programs—the Hospital Readmissions Reduction Program (HRRP),3 Hospital Value-Based Purchasing Program (HVBP),4 and Hospital-Acquired Conditions Reduction Program (HACRP)5—and three ambulatory programs—the Medicare Advantage (MA) Quality Star Rating program6 and the Physician Value-Based Payment Modifier (VM)7 and its successor, the Merit-Based Incentive Payment System (MIPS).8 We also examined one large voluntary APM, the Medicare Shared Savings Program (MSSP).9 The design choices made for these programs have been widely divergent, and thus a critical examination may offer insights to inform the design of new payment and delivery models in the future.

Program Scope (Broad vs. Narrow)

One basic program design decision is the number and diversity of measures on which providers will be evaluated and payment will depend. Table 1 demonstrates that VBP/APM programs in Medicare vary widely in this regard. For example, in the hospital setting, the HRRP focuses exclusively on risk-standardized excess readmissions and the HACRP on patient safety. In contrast, HVBP will include 21 measures in five domains in fiscal year (FY) 2017, including both quality and cost measures; the Physician VM program lets clinicians choose from over 200 quality measures and includes cost measures as well.

There are likely tradeoffs between a broad versus narrow scope for the performance measures included in a program. Programs that use a broader set of measures may spur providers to undertake more intensive systems-based approaches to overall quality improvement. On the other hand, targeted programs may be less administratively burdensome and could make critical areas for improvement especially salient.

There are few data to support either a broad or a targeted approach in terms of impact on outcomes. Support for a broad approach may come from results of the United Kingdom Quality and Outcomes Framework, a financial incentive program for general practitioners aimed at measures of disease control and prevention (diabetes treatment, immunizations, etc.). Studies demonstrated that performance improved for most incented indicators, though many gains were modest.10 The broad-based HVBP program has been associated with improvements in processes of care, but has had little demonstrated impact on the outcomes included in the program.11 On the other hand, the highly targeted HRRP, which focuses only on readmissions, has been associated with a significant drop in readmission rates that was largest at poorly performing hospitals and for targeted conditions.12–14

Another important issue is that of “teaching to the test”: when only a limited number of outcomes are measured, others—which may be equally important to patients and clinicians—may be neglected. In the UK program mentioned above, for example, improvement slowed for quality measures that were not specifically incented.15 Interestingly, in the same program, performance remained high for some incentivized quality indicators even after the indicators were retired,16 a pattern that has also been seen in an incentive program in US Veterans Affairs hospitals.17 This suggests that phasing a broad set of measures in and out, rather than choosing a small static set, may be a useful strategy.

The appropriate scope of measures for a given program will depend on its goals. As noted above, the HRRP and HACRP have a specific thematic focus, and thus a narrow set of measures is appropriate. In contrast, HVBP, MSSP, and VM are programs intended to change the way care is delivered across conditions, and thus a broader set is necessary. One strategy might be to have specific, targeted programs for the highest-priority conditions or issues and broad-based, frequently updated programs to improve care more generally.

Setting Targets Based on Absolute or Relative Performance

Another program design element is whether benchmarks are set on a relative or absolute basis—whether providers are graded on actual performance or “on a curve.” This element differs across current Medicare VBP programs and APMs. For example, under the HRRP, there is no target readmission rate—rather, whether or not a hospital is penalized depends on its performance relative to others in any given year. Similarly, for the HACRP, the hospitals in the highest quartile of adverse patient safety events and infection rates annually are penalized, regardless of the absolute performance those hospitals achieve. Conversely, for MSSP, performance targets are set based on the distribution of performance in the prior year, so participating providers know ahead of time that achieving a specific compliance rate will earn full points for that measure. In theory, all ACOs participating in the MSSP could earn perfect quality scores if all met the pre-set benchmarks.

There are advantages and disadvantages to these approaches, but few data to support either tactic. One consideration is the way in which relative vs. absolute targets are perceived by clinicians and contribute to behavior change. Absolute benchmarks give providers specific targets to meet, which may be more meaningful to clinical leaders and front-line staff and may encourage collaboration across providers. On the other hand, relative benchmarks may feel more abstract and discourage collaboration. One criticism of the HRRP has been that its relative benchmarks mean that even if all hospitals improve their readmission rates, the majority will still receive penalties; this may be suboptimal for helping clinicians to feel that their efforts in improvement are meaningful.

From the payer perspective, however, the tradeoffs between absolute and relative benchmarks are much different. Relative performance assessment allows the payer to prospectively assure budget neutrality by ensuring that the number of “winners,” or at least their winnings, can balance losses by the “losers,” while absolute benchmarking has much less financial certainty. Relative benchmarking may also be more easily implemented because it allows the distribution of observed performance to determine rewards and penalties and does not require a significant duration of pre-data with which to set parameters for expected performance.
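The budget-neutrality advantage of relative benchmarking can be made concrete with a stylized redistribution scheme (the symbols below are illustrative, not drawn from any program's regulations): withhold a fixed fraction of every provider's base payments, then pay the pooled withhold back in proportion to performance scores.

```latex
% Stylized budget-neutral redistribution under relative scoring.
% B_i = base payments for provider i, w = withhold fraction, s_i = performance score.
\text{Pool} = w \sum_i B_i,
\qquad
\text{Payout}_i = \text{Pool} \cdot \frac{s_i B_i}{\sum_j s_j B_j}
% Net incentive for provider i (bonus if positive, penalty if negative):
\Delta_i = \text{Payout}_i - w B_i
% Summing over all providers:
\sum_i \Delta_i = \text{Pool} - w \sum_i B_i = 0
```

Because the net transfers sum to zero by construction, whatever the realized distribution of scores, the payer's outlay is fixed in advance; an absolute benchmark offers no such guarantee, since the number of providers clearing the target is unknown ahead of time.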

Rewarding Improvement or Achievement

Whether to reward improvement or achievement is another important dimension of program design. Again, programs vary: the HRRP, HACRP, and VM programs do not explicitly reward improvement, whereas HVBP and MA do. The MIPS program has signaled it will reward improvement, though details on how this will be done have not been released. The MSSP focuses heavily on improvement: during their first 3 years in the program, ACOs in the MSSP are evaluated against their own historical spending rather than any external benchmark.

Whether a program rewards improvement or achievement can significantly impact which providers do well and poorly under the program. If providers are evaluated only on achievement, the highest-performing providers at baseline will likely do best.18 For example, under the achievement-only HRRP, the highest-performing hospitals at baseline were the most likely to avoid penalties. The lowest-performing hospitals actually improved more quickly over the first 3 years of the program, but many still received penalties in every program year because they started out far behind the best performers and did not fully catch up.13,14 Baseline low performers, on the other hand, may benefit the most from improvement opportunities. The purest form of rewarding improvement, evaluating providers against their own historical performance, may give baseline poor performers the best opportunity, assuming that there is “low-hanging fruit” that can be addressed. Early experience from the MSSP as well as the Pioneer ACO program suggests that the most expensive baseline providers were the most likely to save money, supporting this possibility.19,20 However, rewarding only improvement may mean giving financial rewards to providers who have improved but are nonetheless delivering suboptimal or even substandard care, or failing to reward persistently excellent performers whose performance changes little from year to year.

Another related issue is that of risk adjustment. Improvement-based comparisons depend far less than achievement-based comparisons on accurate risk adjustment, since each hospital or clinician serves as its own comparison group. This may be of particular salience to providers that serve medically or socially complex populations, which have been shown previously to perform more poorly under many existing VBP programs, in part because of characteristics of the patients they serve.21–24

Transparency for consumers is also a key consideration in the achievement versus improvement debate. If providers are only judged on improvement, a patient viewing a hospital’s rating might not know whether a good score was based on high absolute performance or on poor performance with high improvement over time. Given prior evidence suggesting that public reports may influence consumer choice in meaningful ways,25,26 such considerations are key. Public reporting and financial rewards could be de-coupled to avoid this particular problem.

Some combination of rewarding achievement and improvement may be optimal in most cases, which is how many current programs, including HVBP, are constructed. This offers an incentive to organizations to participate even if initial performance is low, while also recognizing high absolute levels of achievement and acknowledging that continued improvement is relatively more difficult at high levels of performance. In addition, including some absolute measures would likely help consumers directly compare provider quality, increasing transparency and promoting consumer-driven care. Some data suggest that providers may prefer this dual approach as well27 and may respond better to mixed-strategy compensation models than single-strategy ones.28,29

Framing Incentives as Penalties or Bonuses

Insights from behavioral economics suggest that how incentives are framed—in particular, whether they are framed as penalties or bonuses—can affect how providers respond. As in the other elements of program design, incentive framing in current Medicare programs varies. Two of the three hospital-based programs, HRRP and HACRP, are penalty-only; HVBP, VM, and the forthcoming MIPS program are penalty-or-bonus; the MSSP (Track 1, which includes more than 90% of MSSP participants) and MA quality star rating programs are largely bonus-only, though MA imposes non-financial penalties on poor performers.

There are pros and cons to the use of penalties versus bonuses. Prospect theory holds that more value is placed on losses than on equivalent gains (“loss aversion”),30 suggesting that penalties may provide a more powerful behavioral incentive than bonuses. A related concept is that “willingness to accept” is often significantly greater than “willingness to pay”: people require much more to give something up than they would be willing to pay to acquire it.31 While there are few data in this area directly related to payment models, one hospital pay-for-performance program applied prospect theory by sending an advance incentive payment to eligible providers, on the expectation that they would be more motivated to avoid losing the payment than to achieve an equivalent gain.32 Penalties may also be more economically efficient than bonuses, since bonus programs require paying additional money to high performers in order to incent change among low performers.33
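Loss aversion is usually illustrated with the Kahneman–Tversky value function; the parameter values below are the classic laboratory estimates from the prospect theory literature, offered only as a sketch of why a penalty of a given size may motivate more strongly than an equal bonus, not as estimates derived from the payment programs discussed here.

```latex
% Prospect theory value function over gains and losses x:
v(x) =
\begin{cases}
  x^{\alpha} & x \ge 0 \\
  -\lambda\,(-x)^{\beta} & x < 0
\end{cases}
\qquad \alpha \approx \beta \approx 0.88,\;\; \lambda \approx 2.25
% With \alpha = \beta, the loss-aversion coefficient \lambda gives
% |v(-x)| = \lambda \, v(x) for any stake x:
% a penalty is felt roughly 2.25 times as strongly as an equal bonus.
```

Under this framing, a 2% withhold that a hospital must earn back may change behavior more than a 2% bonus of identical expected value.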

On the other hand, bonus programs may be preferred by providers,27 and thus more likely to meet political acceptance, either because they are perceived as more fair or because of loss aversion—and the larger the penalty, the larger the concern.

Other innovative financial approaches that have been used to encourage physician behavior include the lottery,34 in which participants are offered the possibility of a large reward rather than a smaller, more certain one. Lotteries have been trialed more extensively in the patient behavior literature,35,36 but have not been systematically studied in the performance improvement setting.

There is little empirical evidence to suggest whether bonuses or penalties are more effective at scale. Recent evidence demonstrating that the HRRP (penalty-only) has been associated with reductions in readmission rates,12 while HVBP (penalty-or-bonus) has had little impact on quality of care, patient experience, or mortality rates,11,37 might support the theory that penalties lead to a stronger behavioral response from providers, though there are many other differences (including scope, as noted above) between the two programs that make drawing a solid conclusion difficult.

A related question is the size of the incentive. Historically, bonus payments to hospitals have been in the range of 1–5% and to physicians 5–10%, but we know of no evidence that clearly links the size of an incentive to the behavioral response. A large pay-for-performance program in UK hospitals that offered up to 4% bonuses was associated with reductions in mortality,38 while the Hospital Quality Incentive Demonstration (1–2% bonus, 1–2% penalty)39 and HVBP (1–2% bonus, 1–2% penalty) programs in the US were not.11 This finding might suggest that larger incentives have a larger effect on performance, but again there are other differences between these programs that preclude firm conclusions based on these examples alone.

Other Contextual Considerations

Variation across Medicare’s payment programs also reflects the different contextual background of the programs. For example, the HACRP was established as a single-issue, penalty-only program, presumably reflecting a perception among members of Congress that unacceptable lapses were resulting in complications and poor patient outcomes. In this case, a narrow focus on safety, with a penalty-only construct, met programmatic goals. On the other hand, the forthcoming MIPS program was established as a broad effort to incent providers in multiple areas—quality, costs, use of electronic health records, and practice improvement activities. Providing bonuses as well as penalties in MIPS was critical for widespread acceptance of such a sea change in physician payment and appropriately reflective of the fact that on many quality measures there may be a range of acceptable performance. Statutory frameworks differ as well, which impacts how programs are ultimately implemented: the legislation creating the HACRP has detailed statutory language around program design, while under MIPS, CMS was given a great deal of latitude through rulemaking in determining the specifics of measurement, bonuses and penalties, and implementation. Evaluating program design options therefore requires an understanding of not only the design elements themselves, but also of program genesis and intent.

Conclusions and Future Directions

As health care moves rapidly into an era dominated by APMs and VBP, program design is central, yet our current knowledge base is inadequate. Studying the effects of program design elements in existing and future federal programs will require complex data analysis to untangle which, if any, program design features maximize goal attainment, understanding that goals differ from program to program. The use of randomized trials of different payment models40 is another powerful tool that has historically been under-utilized in this area, but holds immense potential for creating the type of knowledge about clinician behavior that could help shape future policies.41 The Center for Medicare and Medicaid Innovation (CMMI) has recently launched models that include assignment to intervention and control groups, including the Million Hearts Model42 and the Home Health Value-Based Purchasing Model,43 and these will shed important light on strategies for payment reform.


References

1. Burwell SM. Setting value-based payment goals—HHS efforts to improve US health care. N Engl J Med. 2015;372(10):897–9.
2. Rosenthal MB, Dudley RA. Pay-for-performance: will the latest payment trend improve care? JAMA. 2007;297(7):740–4.
3. Centers for Medicare and Medicaid Services. FY 2013 IPPS Final Rule: Hospital Readmissions Reduction Program. Baltimore, MD: Centers for Medicare and Medicaid Services; 2012.
4. Centers for Medicare and Medicaid Services. Hospital Value-Based Purchasing: The Official Website for the Medicare Hospital Value-Based Purchasing Program. 2014.
5. Centers for Medicare and Medicaid Services. Hospital-Acquired Condition (HAC) Reduction Program. Baltimore, MD: Centers for Medicare and Medicaid Services; 2014.
6. US Department of Health & Human Services. Star Ratings. 2016.
7. Centers for Medicare and Medicaid Services. Medicare FFS Physician Feedback Program/Value-Based Payment Modifier. 2014.
8. Centers for Medicare and Medicaid Services. Quality Payment Program: Delivery System Reform, Medicare Payment Reform, & MACRA. 2016.
9. Centers for Medicare and Medicaid Services. Shared Savings Program. Baltimore, MD: Centers for Medicare and Medicaid Services; 2014.
10. Gillam SJ, Siriwardena AN, Steel N. Pay-for-performance in the United Kingdom: impact of the Quality and Outcomes Framework: a systematic review. Ann Fam Med. 2012;10(5):461–8.
11. Figueroa JF, Tsugawa Y, Zheng J, Orav EJ, Jha AK. Association between the Value-Based Purchasing pay for performance program and patient mortality in US hospitals: observational study. BMJ. 2016;353:i2214.
12. Zuckerman RB, Sheingold SH, Orav EJ, Ruhter J, Epstein AM. Readmissions, observation, and the Hospital Readmissions Reduction Program. N Engl J Med. 2016.
13. Wasfy JH, Zigler CM, Choirat C, Wang Y, Dominici F, Yeh RW. Readmission rates after passage of the Hospital Readmissions Reduction Program: a pre-post analysis. Ann Intern Med. 2016.
14. Desai NR, Ross JS, Kwon JY, et al. Association between hospital penalty status under the Hospital Readmission Reduction Program and readmission rates for target and nontarget conditions. JAMA. 2016;316(24):2647–56.
15. Doran T, Kontopantelis E, Valderas JM, et al. Effect of financial incentives on incentivised and non-incentivised clinical activities: longitudinal analysis of data from the UK Quality and Outcomes Framework. BMJ. 2011;342:d3590.
16. Kontopantelis E, Springate D, Reeves D, Ashcroft DM, Valderas JM, Doran T. Withdrawing performance indicators: retrospective analysis of general practice performance under UK Quality and Outcomes Framework. BMJ. 2014;348:g330.
17. Benzer JK, Young GJ, Burgess JF Jr, et al. Sustainability of quality improvement following removal of pay-for-performance incentives. J Gen Intern Med. 2014;29(1):127–32.
18. Rosenthal MB, Frank RG, Li Z, Epstein AM. Early experience with pay-for-performance: from concept to practice. JAMA. 2005;294(14):1788–93.
19. McWilliams JM, Hatfield LA, Chernew ME, Landon BE, Schwartz AL. Early performance of accountable care organizations in Medicare. N Engl J Med. 2016;374(24):2357–66.
20. McWilliams JM, Chernew ME, Landon BE, Schwartz AL. Performance differences in year 1 of Pioneer accountable care organizations. N Engl J Med. 2015;372(20):1927–36.
21. Joynt KE, Jha AK. Characteristics of hospitals receiving penalties under the Hospital Readmissions Reduction Program. JAMA. 2013;309(4):342–3.
22. Gilman M, Hockenberry JM, Adams EK, Milstein AS, Wilson IB, Becker ER. The financial effect of value-based purchasing and the Hospital Readmissions Reduction Program on safety-net hospitals in 2014: a cohort study. Ann Intern Med. 2015;163(6):427–36.
23. Rajaram R, Chung JW, Kinnier CV, et al. Hospital characteristics associated with penalties in the Centers for Medicare & Medicaid Services Hospital-Acquired Condition Reduction Program. JAMA. 2015;314(4):375–83.
24. Weiss H, Pescatello S. Medicare Advantage: Stars System’s disproportionate impact on MA plans focusing on low-income populations. Health Affairs Blog. 2014.
25. Mukamel DB, Mushlin AI. Quality of care information makes a difference: an analysis of market share and price changes after publication of the New York State Cardiac Surgery Mortality Reports. Med Care. 1998;36(7):945–54.
26. Wang J, Hockenberry J, Chou SY, Yang M. Do bad report cards have consequences? Impacts of publicly reported provider quality information on the CABG market in Pennsylvania. J Health Econ. 2011;30(2):392–407.
27. Chen TT, Lai MS, Chung KP. Participating physician preferences regarding a pay-for-performance incentive design: a discrete choice experiment. Int J Qual Health Care. 2016;28(1):40–6.
28. Landon BE, O’Malley AJ, McKellar MR, Reschovsky JD, Hadley J. Physician compensation strategies and quality of care for Medicare beneficiaries. Am J Manag Care. 2014;20(10):804–11.
29. Lee TH, Bothe A, Steele GD. How Geisinger structures its physicians’ compensation to support improvements in quality, efficiency, and volume. Health Aff (Millwood). 2012;31(9):2068–73.
30. Kahneman D, Tversky A, eds. Choices, Values, and Frames. New York: Russell Sage Foundation; Cambridge, UK: Cambridge University Press; 2000.
31. O’Brien BJ, Gertsen K, Willan AR, Faulkner LA. Is there a kink in consumers’ threshold value for cost-effectiveness in health care? Health Econ. 2002;11(2):175–80.
32. Torchiana DF, Colton DG, Rao SK, Lenz SK, Meyer GS, Ferris TG. Massachusetts General Physicians Organization’s quality incentive program produces encouraging results. Health Aff (Millwood). 2013;32(10):1748–56.
33. Custers T, Hurley J, Klazinga NS, Brown AD. Selecting effective incentive structures in health care: a decision framework to support health care purchasers in finding the right incentives to drive performance. BMC Health Serv Res. 2008;8(1):1–14.
34. Halpern SD, Kohn R, Dornbrand-Lo A, Metkus T, Asch DA, Volpp KG. Lottery-based versus fixed incentives to increase clinicians’ response to surveys. Health Serv Res. 2011;46(5):1663–74.
35. Kimmel SE, Troxel AB, French B, et al. A randomized trial of lottery-based incentives and reminders to improve warfarin adherence: the Warfarin Incentives (WIN2) Trial. Pharmacoepidemiol Drug Saf. 2016.
36. Kimmel SE, Troxel AB, Loewenstein G, et al. Randomized trial of lottery-based incentives to improve warfarin adherence. Am Heart J. 2012;164(2):268–74.
37. Ryan AM, Burgess JF Jr, Pesko MF, Borden WB, Dimick JB. The early effects of Medicare’s mandatory hospital pay-for-performance program. Health Serv Res. 2014.
38. Sutton M, Nikolova S, Boaden R, Lester H, McDonald R, Roland M. Reduced mortality with hospital pay for performance in England. N Engl J Med. 2012;367(19):1821–8.
39. Jha AK, Joynt KE, Orav EJ, Epstein AM. The long-term effect of Premier pay for performance on patient outcomes. N Engl J Med. 2012;366(17):1606–15.
40. Hickson GB, Altemeier WA, Perrin JM. Physician reimbursement by salary or fee-for-service: effect on physician practice behavior in a randomized prospective study. Pediatrics. 1987;80(3):344–50.
41. Newhouse JP, Normand ST. Health policy trials. N Engl J Med. 2017;376(22):2160–7.
42. Sanghavi DM, Conway PH. Paying for prevention: a novel test of Medicare value-based payment for cardiovascular risk reduction. JAMA. 2015;314(2):123–4.
43. Centers for Medicare and Medicaid Services. Home Health Value-Based Purchasing Model. 2016.


Author information



Corresponding author

Correspondence to Karen E. Joynt Maddox MD, MPH.

Ethics declarations




Funding: No specific project funding was received. Some authors were supported through employment by the United States Department of Health and Human Services.



Conflicts of Interest

The authors declare no conflicts of interest. The authors either were or are employed at the United States Department of Health and Human Services for some of the time during which this work was completed. The opinions expressed herein are those of the authors and not the federal government.


About this article


Cite this article

Joynt Maddox, K.E., Sen, A.P., Samson, L.W. et al. Elements of Program Design in Medicare’s Value-based and Alternative Payment Models: a Narrative Review. J GEN INTERN MED 32, 1249–1254 (2017).



  • Alternative Payment Models (APMs)
  • Medicare Shared Savings Program (MSSP)
  • Hospital Value-Based Purchasing Program (HVBP)
  • Hospital Readmissions Reduction Program (HRRP)
  • Merit-Based Incentive Payment System (MIPS)