The CareFirst Patient-Centered Medical Home Program: Cost and Utilization Effects in Its First Three Years



Background

Enhanced primary care models have diffused slowly and shown uneven results. Their structural features are costly and challenging for small practices to implement, the rewards they offer for improved performance are modest, and improvement takes time.


Objective

To test whether a patient-centered medical home (PCMH) model that significantly rewarded cost savings and accommodated small primary care practices was associated with lower spending, fewer hospital admissions, and fewer emergency room visits.


Design

We compared medical care expenditures and utilization among adults who participated in the PCMH program with those of adults who did not participate. We computed difference-in-difference estimates using two-part multivariate generalized linear models for expenditures and negative binomial models for utilization. Control variables included patient demographics, county, chronic condition indicators, and illness severity.


Participants

A total of 1,433,297 adults aged 18–64 years, residing in Maryland, Virginia, and the District of Columbia, and insured by CareFirst for at least 3 consecutive months between 2010 and 2013.


Intervention

CareFirst implemented enhanced fee-for-service payments to the practices, offered a large retrospective bonus if annual cost and quality targets were exceeded, and provided information and care coordination support.


Main Measures

Outcomes were quarterly claims expenditures per member for all covered services, inpatient care, emergency care, and prescription drugs, and quarterly inpatient admissions and emergency room visits.


Key Results

By the third intervention year, annual adjusted total claims payments were $109 lower per participating member (95 % CI: −$192, −$27), a 2.8 % reduction relative to the pre-program period and to members who did not participate. Forty-two percent of the overall decline in spending was accounted for by lower inpatient care, emergency care, and prescription drug spending. Much of the reduction in inpatient and emergency spending was explained by lower utilization of services.


Conclusions

A PCMH model that does not require practices to make infrastructure investments and that rewards cost savings can reduce spending and utilization.


Numerous models have been proposed for enhancing primary care and improving care coordination, while pursuing the triple aim of greater access, lower costs, and improved quality. These models range from patient-centered medical homes (PCMH) to accountable care organizations (ACOs).1 Many small physician practices, which provide most of the primary care services delivered in the United States, struggle to meet the requirements of even a standard PCMH model, citing large investments in infrastructure such as electronic medical records, retraining, workflow redesign, ongoing certification, and additional care coordination personnel, which can cost up to $100,000 per physician by some estimates.2–4 Some observers have argued that policy initiatives aimed at promoting these models could unintentionally lead to greater consolidation of physician practices and spell the end of small-scale practices.5

Most PCMH programs to date have relied on per-member per-month (PMPM) case management fees to finance the additional resources needed.6–9 While such models are suited to both large and small practices, they may not be sufficient to cover the increased practice costs necessary to perform PCMH functions or to explicitly reward performance. In at least one PCMH program, practices were given access to additional staffing from a community health team, potentially benefitting smaller practices.10 Other PCMH programs have required third-party PCMH accreditation or have paid practices up front to meet certification criteria as a PCMH.11–13 Practices that do not receive financial support to become PCMH-certified are otherwise disadvantaged.

The Comprehensive Primary Care initiative (CPCI) required substantial PMPM payments from multiple payers, and offered shared savings based on quality and cost performance, but was not limited to practices with PCMH recognition.14 The initiative required changes in care delivery to enhance access, care planning, chronic care management, care coordination, and patient engagement. Despite some initial promising results, in the second year practices on average showed no savings in Medicare spending after accounting for the PMPM.15

Other PCMH initiatives have relied on modified fee-for-service payments that embed quality and spending incentives.16–21 Still other initiatives have relied on global budgets and “two-sided” financial risk, meaning that practices face the prospect of financial reward or penalty, depending on whether spending targets—and potentially quality targets—are or are not met.22 One recent example is the Massachusetts Alternative Quality Contract (AQC) program, which tied rewards to both quality metrics and spending targets, and which was directed at larger multispecialty groups or integrated systems that were in a strong position to bear financial risk.23 Small primary care practices, however, are not able to take on the same financial risk as large practices.

We studied CareFirst BlueCross BlueShield’s PCMH program, which began in 2011 and, as of 2016, had over 1 million enrollees in Maryland, the District of Columbia, and northern Virginia. In contrast to other PCMH programs nationwide, the CareFirst program did not require large up-front investments by participating practices, a feature that made the program particularly well suited for adoption by small, independent practices. By 2013, just over 4000 primary care physicians and nurse practitioners had joined the program, representing more than 81 % of all primary care providers in the plan’s networks in Maryland, northern Virginia, and DC.24 This number of participating primary care physicians compares favorably with the 2222 providers nationwide who have participated in the CPCI program.

Methods

Study Setting

For statistical validity, CareFirst grouped smaller practices together to create “panels” or clusters of approximately 10 physicians. Because performance was measured and rewarded at the panel level, each practice had an incentive to communicate with other practices in the same panel. Primary care practices with more than 20 physicians were subdivided into panels of 10. From 2011 to 2013, the number of participating physicians grew steadily, from 3476 to 4037, while the number of attributed CareFirst members increased from 987,000 to 1,169,000.

Importantly, the program did not require external certification by a PCMH-accrediting organization, although it did contain the core PCMH attributes defined by AHRQ: comprehensive and coordinated care through an array of nurse coordinators, along with hospital transition, chronic, complex, behavioral, and substance abuse care managers; accessible services through same-day appointments and 24/7 phone triage; patient-centeredness through care plans developed by nurses, clinicians, and patients together; and quality through objective performance metrics required for earning shared savings.25

CareFirst offered participating practices a one-time 12 % increase in fee-for-service payments for services provided by the practices immediately upon joining the program, which averaged to approximately $10.34 per member per month. Panels were not put at financial risk, but were offered additional financial rewards—up to 80 % of annual fee-for-service billings—depending on their joint quality of care and spending growth performance each year. In addition, providers could receive separate payments for developing and maintaining care plans for selected patients. The insurer provided nurse coordinators and lists of members likely to benefit from care coordination. Nurse coordinators and physicians identified and focused on subsets of each panel’s 50 most severely ill patients. The nurse coordinators developed care plans, coordinated with families, and provided follow-up support.

CareFirst also provided an electronic portal through which panels could monitor their financial and claims-based quality performance and compare the efficiency of referrals across specialists and hospitals. The detailed information captured in the portal was based on members’ medical claims with a 1-month lag so that providers could track their cost performance continuously throughout the year. Using this portal, physicians were able not only to view information on specialist costs to inform referrals, but also to obtain patient-level reports to identify gaps in care and review care plans, with notes written by care coordinators with input from providers engaged in a patient’s care.

About 70 % of eventually participating members were attributed to participating panels in 2011, the first year of operation. Thus, the program was not fully implemented at a single starting point. In addition to delays in physician participation, program features were rolled out over the first 2 years. First, nurse care coordinators had to contact roughly 400 participating panels, an effort that was hampered by high initial rates of turnover in care coordinator staff. Second, the electronic physician portal was introduced in 2012, and was underutilized until program consultants were hired to assist practices in the use of the performance data through the portal. For these reasons, our evaluation of the CareFirst PCMH program is best understood as an effectiveness study of a large-scale program that was faced with the usual challenges of real-world implementation.

We examined whether a member’s attribution to a participating PCMH panel was associated with lower total payments and lower payments for inpatient care, emergency department visits, and prescription drugs. Given the program’s focus, we hypothesized that its impact on payments would be larger for patients with chronic conditions. We also tested whether the program was associated with reductions in inpatient admissions and emergency room visits. We evaluated results in all 3 years of the program’s operation. We focus our discussion on outcomes in the third year, since the literature has shown that PCMH programs typically take a few years to reach maturity and produce measurable effects.


Study Population

The study population included all adults aged 18 to 64 years who were covered by CareFirst for at least 3 consecutive months between 2010 and 2013. Individuals were included in the analysis if CareFirst held their medical and prescription drug claims. Individuals who had prescription drug coverage outside CareFirst were excluded. Monthly claims data were collapsed to quarterly observations to smooth monthly fluctuations but still capture seasonal trends. Online Appendix 1 illustrates our sample construction. The study was approved by the George Mason University institutional review board.

Definition of Intervention and Comparison Groups

Practices were able to join the program beginning January 1, 2011. Insured members were attributed to the participating primary care panel considered most responsible for that member’s primary care, based on the previous 12 months of evaluation and management claims for office and preventive care visits in an outpatient setting.
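The attribution rule described above amounts to a per-member argmax over prior-year visit counts. The pandas sketch below is an illustration only; the table layout and column names (`member_id`, `panel_id`) are assumptions, since the paper does not publish its claims schema:

```python
import pandas as pd

# Hypothetical evaluation-and-management claims: one row per office or
# preventive visit in the 12 months preceding attribution.
claims = pd.DataFrame({
    "member_id": [1, 1, 1, 2, 2],
    "panel_id":  ["A", "A", "B", "B", "B"],
})

# Count qualifying visits per member-panel pair.
visit_counts = (claims
                .groupby(["member_id", "panel_id"])
                .size()
                .reset_index(name="n_visits"))

# Attribute each member to the panel with the most qualifying visits.
attribution = (visit_counts
               .sort_values("n_visits", ascending=False)
               .drop_duplicates("member_id")
               .set_index("member_id")["panel_id"])
print(attribution.to_dict())
```

Member 1 has two visits to panel A and one to panel B, so the rule attributes that member to panel A; ties would need an explicit tie-breaking convention, which the sketch leaves unspecified.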


Medical and prescription drug claims data were provided by CareFirst. For each member and quarter, we summed the allowed amounts for medical and prescription drug claims. We also included members’ out-of-pocket payments. We calculated quarterly allowed amounts separately for inpatient care, emergency department visits, and prescription drug claims. In addition, we calculated the number of emergency department visits and inpatient admissions per member-quarter. Chronic conditions were measured using diagnoses in the claims data. The illness burden was measured as a prospective risk score using DxCG Intelligence software (Verisk Health, Waltham, MA) based on the previous 12 months of claims, and was provided for each member-month by the insurer.26
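As an illustration of the member-quarter construction described above, the following pandas sketch collapses hypothetical monthly allowed amounts into quarterly totals; the column names and values are invented for the example, not the insurer's actual schema:

```python
import pandas as pd

# Hypothetical monthly allowed amounts for one member during 2010.
monthly = pd.DataFrame({
    "member_id": [7] * 6,
    "month": pd.to_datetime(["2010-01-15", "2010-02-15", "2010-03-15",
                             "2010-04-15", "2010-05-15", "2010-06-15"]),
    "allowed": [100.0, 0.0, 250.0, 80.0, 0.0, 40.0],
})

# Collapse to member-quarter totals: this smooths monthly fluctuations
# while still capturing seasonal trends.
monthly["quarter"] = monthly["month"].dt.to_period("Q")
quarterly = (monthly
             .groupby(["member_id", "quarter"], as_index=False)["allowed"]
             .sum())
print(quarterly)
```

The same aggregation would be applied separately to inpatient, emergency department, and prescription drug allowed amounts, and to visit and admission counts.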

Statistical Approach

The member-quarter was our unit of analysis. The primary dependent variable was the total claims allowed amount. We used a difference-in-differences estimator to capture changes in participant spending relative to changes in non-participant spending. We addressed observed differences between treatment and comparison members with treatment-on-the-treated propensity score weighting. The weighting models predicted the probability of being in the treatment group in the base year as a function of demographic characteristics, whether the covered individual was an employee or dependent, group size, whether the individual had a chronic condition, and illness burden. In addition to these covariates, all models included quarter and county fixed effects.
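The treatment-on-the-treated weighting step can be sketched as follows. This is an illustration on simulated covariates, substituting scikit-learn's gradient boosting for the R `twang` package used in the study; the covariate list and data-generating process are invented for the example:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 2000

# Simulated baseline covariates: age, illness-burden score, chronic flag.
# These stand in for the member-level covariates used in the paper.
X = np.column_stack([
    rng.uniform(18, 64, n),      # age
    rng.gamma(2.0, 1.0, n),      # illness burden (risk score)
    rng.integers(0, 2, n),       # chronic-condition indicator
])
# Treatment more likely for older members, to create observable imbalance.
treated = rng.random(n) < 1.0 / (1.0 + np.exp(-(0.02 * X[:, 0] - 1.5)))

# Boosted model of the probability of treatment (twang's approach).
ps_model = GradientBoostingClassifier(random_state=0).fit(X, treated)
p = ps_model.predict_proba(X)[:, 1]

# Treatment-on-the-treated weights: treated members keep weight 1;
# comparison members are reweighted by the odds p / (1 - p).
att_weight = np.where(treated, 1.0, p / (1.0 - p))
```

With these weights, the comparison group is reweighted to resemble the treated population at baseline, which is what allows balance to be assessed with standardized mean differences afterward.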

We also weighted each year of treatment and control observations to the baseline year for the treatment group in order to control for any compositional changes over time.27 The weighting models were estimated using boosted regression, as implemented in the "twang" package in R.28 We estimated two-part multivariate generalized linear models with a log link and gamma distribution to isolate the association between a member’s attribution to a participating primary care practice and quarterly spending.29 For inpatient admissions and emergency room visits, we estimated zero-inflated negative binomial or hurdle models with the same set of control variables. We clustered standard errors at the panel level.

Members who were continuously attributed to a participating panel were defined as the intervention group. We refer to this group as “always PCMH” (N = 592,886 individuals). Because some physician panels joined the program as early as January 2011, a member could be attributed to participating practices for as many as 3 years during our study period. Thus, we measured the association of spending with program participation in the first, second, and third years. Some members were ineligible for attribution, either because their primary care provider was in a non-participating practice or because their employer declined to have its employee members participate in the program. The members who were never attributed to the PCMH model during our study period constituted the comparison group.

Robustness Checks

As a robustness check, we defined a second, more expansive intervention group of members who were attributed for at least one quarter, but may not have been continuously attributed to a participating panel thereafter. This second group, referred to as “ever PCMH” (N = 804,758 individuals), included individuals with less exposure to the PCMH program than the “always PCMH” group.


Results

After weighting, the characteristics of members in the “always PCMH” intervention group were similar to those of the comparison group in the baseline year (Table 1). The covariate balance from propensity score weighting across all years, as measured by the standardized mean differences for each pair of covariates, is shown in Online Appendix 2. Balance was achieved, with all 100 covariate pairs having a standardized difference of less than 0.10. Continuously attributed members, the “always PCMH” intervention group, had lower spending in the baseline quarter than the comparison group of members who were never attributed (Table 1; $966 vs. $1107, p < 0.001). Their expenditures for prescription drugs were higher ($131 vs. $108, p < 0.001), while their emergency room ($44 vs. $46, p = 0.039) and inpatient care expenditures ($114 vs. $135, p < 0.001) were lower.

Table 1 CareFirst Enrollee Descriptive Characteristics – 2010 Quarter 1, Propensity Score-Weighted

Continuously attributed members recorded lower expenditures by the second and third years relative to the comparison group (Table 2). There were no statistically significant differences in total expenditures between the intervention and comparison groups in the first program year. For the third year, we estimated a reduction in total spending per member of $109 (95 % CI: −$191.82, −$26.96), equivalent to a decline of 2.8 % relative to the base year. The total 3-year savings was $297 (95 % CI: −$471.41, −$123.69) per PCMH participant relative to comparisons. Figure 1 illustrates the regression-adjusted means for both the treatment and comparison groups for all 4 years (baseline and the 3 intervention years). Full regression results for the expenditure models are provided in Online Appendix 3.

Table 2 Total Expenditures – Annual Marginal Effects
Figure 1

Regression-adjusted mean total allowed amount, PCMH, and comparison

We estimated analogous average reductions for year 3 of $23 in inpatient spending (95 % CI: −$35, −$11), $8 in emergency department spending (95 % CI: −$11, −$5), and $14 in prescription drug spending (95 % CI: −$20, −$9) (Table 3). The percentage reductions relative to 2010 were 5.0 % for inpatient care, 4.5 % for emergency care, and 2.7 % for prescription drugs.

Table 3 Inpatient, Emergency Room, and Drug Expenditures – Annual Marginal Effects

Among individuals with chronic conditions (Table 2), the absolute reduction in total spending in year 3 was greater than that for all members ($144 vs. $109), but as a percentage of annualized spending it was equivalent (2.8 %). Twenty percent of the total reduction was due to inpatient spending, which declined by $32 (95 % CI: −$56, −$9) by year 3 (Table 4). Also by year 3, reductions in emergency room spending were larger for individuals with chronic conditions than for all individuals ($10 vs. $8), as were reductions in prescription drug spending ($18 vs. $14); neither difference between PCMH participants with chronic conditions and all participants was statistically significant.

Table 4 Inpatient, Emergency Room, and Drug Expenditures, Annual Marginal Effects – Chronic Group Only

The program was associated with reductions in inpatient admissions by the third year (Table 5). In year 3, members experienced 2.4 (95 % CI: −2.8, −2.2) fewer inpatient admissions per 1000 on average, representing a 2.4 % reduction. They also had 9.9 (95 % CI: −9.0, −7.7) fewer emergency room visits per 1000 in year 3, a decline of 3.2 %. Full regression results for the utilization models are provided in Online Appendix 4.

Table 5 Inpatient Admissions and Emergency Room Visits – Annual Marginal Effects

Robustness Results

When we expanded the intervention group to also include “ever” members who were attributed to participating practices only intermittently, the estimated impact of the program was in the same direction but larger in magnitude than in the main models using “always” participants. “Ever” robustness results are shown in column 2 of Table 2 for total allowed amounts, with full results in Online Appendix 5.


Discussion

Implementation of the CareFirst PCMH program was associated with lower costs by year 2 and with 2.8 % lower total payments by year 3. This compares favorably with most early PCMH programs with quality and spending incentives, which showed small or no effects on spending.30 Other PCMH programs have also been shown to reduce overall expenditures, inpatient care, or emergency room care, but they required meeting the full catalogue of PCMH accreditation criteria or making substantial up-front investments, which are particularly onerous for small physician practices.31–33

The magnitude of the reduction was greatest for members with chronic conditions, consistent with other studies of coordinated care interventions.34 The gross decline in spending is comparable to that of the AQC program.35 By comparison, CMMI’s combined CPCI demonstrations lowered payments for medical services and/or utilization in some regions in year 1, but had no statistically measurable effects on cost or use on average in year 2. Since CareFirst’s incentive payments were offered as fee-for-service enhancements, they were captured by the claims data and spending calculations used in our analysis. Therefore, the results we report should be construed as net of participation fees. However, we do not have data on the amount spent by CareFirst on the information and care coordination infrastructure to implement the program. Our estimates suggest that it did reduce medical spending compared to a control group by year 2 of implementation.

The one region in the CPCI demonstration that experienced reductions in net spending in year 1 also experienced reductions in quality. Our study has not yet examined changes in quality, but minimum thresholds of quality performance—as measured mostly by claims data—were required for shared savings bonuses to be awarded by CareFirst.

In contrast to the Massachusetts AQC intervention, which was also associated with reductions in spending, 40 % of the overall decline in spending in the CareFirst program is explained by reductions in inpatient care, emergency care, and prescription drugs.

In our study, much of the reduction in inpatient and emergency care was explained by lower utilization of these services, indicating that the program may have succeeded in encouraging primary care physicians to manage both admissions and emergency visits. This could be due to lower volume of service, shifts to lower-priced settings, lower prices from acute care providers worried about volume, or lower intensity of services conditional on an admission or visit as a result of more conservative practice styles of referred specialists.


Conclusions

Early experience shows that an intervention aimed at realigning primary care practice incentives could be effective in curbing spending growth and utilization. The intervention studied here is noteworthy in that it avoided burdening participating practices with the costly infrastructure investments and short-term downside risk that many other PCMH interventions have imposed. As such, this type of intervention should appeal to small practices in particular. Moreover, these results suggest that certain structural PCMH elements may not be required for good results, a lesson that could inform alternative payment models by other payers, such as Medicare.

Total spending declined more than the sum of reductions in inpatient care, emergency room care, and prescription drugs. It is possible that these extra reductions could be explained by other covered services, including outpatient specialty care, laboratory tests, imaging, and home care, or by lower prices. Lower spending on outpatient specialty care would point to the possibility that referral management was an important contributor to the results reported here. The physician portal offered by this program allowed primary care physicians to identify less expensive specialists more easily. Future work should address specialty care referral outcomes and quality outcomes.


References

1. Edwards ST, Bitton A, Hong J, Landon B. Patient-centered medical home initiatives expanded in 2009–13: providers, patients, and payment incentives increased. Health Aff. 2014;33(10):1823–1831. doi:10.1377/hlthaff.2014.0351.

2. Raffoul M, Petterson S, Moore M, Bazemore A, Peterson L. Smaller practices are less likely to report PCMH certification. Am Fam Physician. 2015;91(7):440.

3. Ho L, Antonucci J. The dissenter’s viewpoint: there has to be a better way to measure a medical home. Ann Fam Med. 2015;13:269–272. doi:10.1370/afm.1783.

4. Magill MK, Ehrenberger D, Scammon DL, et al. The cost of sustaining a patient-centered medical home: experience from two states. Ann Fam Med. 2015;13(5):429–435. doi:10.1370/afm.1851.

5. Casalino LP, Pesko MF, Ryan AM, et al. Small primary care physician practices have low rates of preventable hospital admissions. Health Aff. 2014;33(9):1680–1688. doi:10.1377/hlthaff.2014.0434.

6. Merrell K, Berenson RA. Structuring payment for medical homes. Health Aff. 2010;29(5):852–858. doi:10.1377/hlthaff.2009.0995.

7. Rosenthal MB, Alidina S, Friedberg MW, et al. A difference-in-difference analysis of changes in quality, utilization and cost following the Colorado Multi-Payer Patient-Centered Medical Home Pilot. J Gen Intern Med. 2015. doi:10.1007/s11606-015-3521-1.

8. Friedberg MW, Rosenthal MB, Werner RM, Volpp KG, Schneider EC. Effects of a medical home and shared savings intervention on quality and utilization of care. JAMA Intern Med. 2015;175(8):1362–1368. doi:10.1001/jamainternmed.2015.2047.

9. Bitton A, Martin C, Landon BE. A nationwide survey of patient centered medical home demonstration projects. J Gen Intern Med. 2010;25(6):584–592. doi:10.1007/s11606-010-1262-8.

10. Jones C, Finison K, McGraves-Lloyd K, et al. Vermont’s community-oriented all-payer medical home model reduces expenditures and utilization while delivering high-quality care. Popul Health Manag. 2015. doi:10.1089/pop.2015.0055.

11. Werner RM, Duggan M, Duey K, Zhu J, Stuart EA. The patient-centered medical home: an evaluation of a single private payer demonstration in New Jersey. Med Care. 2013;51(6):487–493. doi:10.1097/MLR.0b013e31828d4d29.

12. Raskas RS, Latts LM, Hummel JR, Wenners D, Levine H, Nussbaum SR. Early results show WellPoint’s patient-centered medical home pilots have met some goals for costs, utilization, and quality. Health Aff (Millwood). 2012;31(9):2002–2009. doi:10.1377/hlthaff.2012.0364.

13. David G, Gunnarsson C, Saynisch PA, Chawla R, Nigam S. Do patient-centered medical homes reduce emergency department visits? Health Serv Res. 2015;50(2):418–439. doi:10.1111/1475-6773.12218.

14. Taylor EF, Dale S, Peikes D, et al. Evaluation of the Comprehensive Primary Care Initiative: First Annual Report. Mathematica Policy Research. Accessed June 28, 2016.

15. Dale SB, Ghosh A, Peikes D, et al. Two-year costs and quality in the Comprehensive Primary Care Initiative. N Engl J Med. 2016. doi:10.1056/NEJMsa1414953.

16. Rosenthal MB, de Brantes FS, Sinaiko AD, Frankel M, Robbins RD, Young S. Bridges to Excellence--recognizing high-quality care: analysis of physician quality and resource use. Am J Manag Care. 2008;14(10):670–677.

17. Christianson JB, Leatherman S, Sutherland K. Lessons from evaluations of purchaser pay-for-performance programs: a review of the evidence. Med Care Res Rev. 2008;65(6 Suppl):5S–35S. doi:10.1177/1077558708324236.

18. Peikes D, Zutshi A, Genevro JL, Parchman ML, Meyers DS. Early evaluations of the medical home: building on a promising start. Am J Manag Care. 2012;18(2):105–116.

19. Pourat N, Davis A, Chen X, Vrungos S, Kominski G. In California, primary care continuity was associated with reduced emergency department use and fewer hospitalizations. Health Aff. 2015;34(7). doi:10.1377/hlthaff.2014.1165.

20. Maeng DD, Khan N, Tomcavage J, Graf TR, Davis TR, Steele GD. Reduced acute inpatient care was largest savings component of Geisinger Health System’s patient-centered medical home. Health Aff. 2015;34(7):634–644. doi:10.1377/hlthaff.2014.0855.

21. RTI International. Evaluation of the Multi-payer Advanced Primary Care Practice (MAPCP) Demonstration: First Annual Report. Accessed June 28, 2016.

22. Edwards ST, Abrams MK, Baron RJ, et al. Structuring payment to medical homes after the Affordable Care Act. J Gen Intern Med. 2014;29(10):1410–1413. doi:10.1007/s11606-014-2848-3.

23. Chernew M, Mechanic RE, Landon BE, et al. Private-payer innovation in Massachusetts: the ’alternative quality contract’. Health Aff. 2011;30(1):51–61. doi:10.1377/hlthaff.2010.0980.

24. CareFirst BlueCross BlueShield. Program description and guidelines for CareFirst Patient-Centered Medical Home Program and Total Care and Cost Improvement Program. Accessed June 28, 2016.

25. Nielsen M, Olayiwola JN, Grundy P, et al. The Patient-Centered Medical Home’s impact on cost and quality: an annual update of the evidence, 2012–2013. Accessed June 28, 2016.

26. Chen J, Ellis RP, Toro KH, Ash AS. Mispricing in the Medicare Advantage risk adjustment model. Inquiry. 2015;52. doi:10.1177/0046958015583089.

27. Stuart EA, Huskamp HA, Duckworth K, et al. Using propensity scores in difference-in-differences models to estimate the effects of a policy change. Health Serv Outcomes Res Methodol. 2014;14(4):166–182.

28. Griffin BA, Ridgeway G, Morral AR, et al. Toolkit for Weighting and Analysis of Nonequivalent Groups (TWANG) website. Santa Monica, CA: RAND Corporation; 2014.

29. Manning WG, Mullahy J. Estimating log models: to transform or not to transform? J Health Econ. 2001;20(4):461–494. doi:10.1016/S0167-6296(01)00086-8.

30. Jackson GL, Powers BJ, Chatterjee R, et al. The patient-centered medical home: a systematic review. Ann Intern Med. 2013;158(3):169–178. doi:10.7326/0003-4819-158-3-201302050-00579.

31. Fifield J, Forrest DD, Burleson JA, et al. Quality and efficiency in small practices transitioning to patient-centered medical homes: a randomized trial. J Gen Intern Med. 2013;28(6):778–786. doi:10.1007/s11606-013-2386-4.

32. Maryland Health Care Commission. Evaluation of the Maryland Multi-Payer Patient Centered Medical Home Program: first annual report. Accessed June 28, 2016.

33. Reid RJ, Coleman K, Johnson EA, et al. The Group Health medical home at year two: cost savings, higher patient satisfaction, and less burnout for providers. Health Aff. 2010;29(5):835–843. doi:10.1377/hlthaff.2010.0158.

34. Brown RS, Peikes D, Peterson G, Schore J, Razafindrakoto CM. Six features of Medicare coordinated care demonstration programs that cut hospital admissions of high-risk patients. Health Aff. 2012;31(6):1156–1166. doi:10.1377/hlthaff.2012.0393.

35. Song Z, Safran DG, Landon BE, et al. The ’Alternative Quality Contract,’ based on a global budget, lowered medical spending and improved quality. Health Aff. 2012. doi:10.1377/hlthaff.2012.0327.



Contributors: Drs. Cuellar and Helmchen contributed to the design and statistical analyses, supervision of the study, and drafting of the manuscript. Dr. Gimm contributed to the acquisition and interpretation of the data, revision of the manuscript, and obtaining funding. Dr. Want contributed to interpretation of the data, revision of the manuscript, and obtaining funding. Mr. Burla, Mr. Kells and Dr. Kicinger each contributed to acquiring the data, critical revisions to the manuscript, and administrative and technical support. Dr. Nichols (PI) led the design of the study and contributed to the drafting of the manuscript, obtaining funding, and supervision. Dr. Nichols had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.

Author information



Corresponding author

Correspondence to Alison Cuellar, PhD.

Ethics declarations


Funding

CareFirst BlueCross BlueShield

Prior Presentations

AcademyHealth, Minneapolis, MN, June 16, 2015

Conflict of Interest

Dr. Len Nichols discloses receiving honoraria from the non-profit Rocky Mountain Health Plans (Grand Junction, CO) for organizing content about national trends and facilitating board retreat discussions, and from the American Medical Association for speaking at their annual state advocacy conference on antitrust issues. Dr. Nichols is also the director of George Mason University’s Center for Health Policy Research and Ethics, whose 501(c)(3) foundation account received a grant from America’s Health Insurance Plans to support graduate student work on health care market issues. Dr. Nichols is a member of the unpaid board of trustees of the National Committee for Quality Assurance, and an unpaid advisor on payment reform matters to the Patient-Centered Primary Care Collaborative. All other authors declare no conflict of interest.

Electronic supplementary material

Below are the links to the electronic supplementary material.

Online Appendix 1

(DOCX 51 kb)

Online Appendix 2

(DOCX 32 kb)

Online Appendix 3

(DOCX 34 kb)

Online Appendix 4

(DOCX 28 kb)

Online Appendix 5

(DOCX 38 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


Cite this article

Cuellar, A., Helmchen, L.A., Gimm, G. et al. The CareFirst Patient-Centered Medical Home Program: Cost and Utilization Effects in Its First Three Years. J GEN INTERN MED 31, 1382–1388 (2016).



  • patient-centered care
  • primary care redesign
  • program evaluation