INTRODUCTION

Patient experiences with healthcare have become a critical measure of quality of care since implementation of the Affordable Care Act (ACA).1 The patient experience evaluates whether what should have happened during a healthcare encounter actually did happen (e.g., timely access to appointments and communication with providers that was effective and patient-centered).2,3

Several studies have examined access to and quality of care, but few have considered the impact of Medicaid expansion on enrollees' satisfaction. Published research has focused primarily on rating differences between public health plan models (e.g., Medicaid enrollees in managed care or fee-for-service4,5 or Medicare6). Research is lacking on satisfaction differences within a Medicaid expansion population between enrollees in a commercial insurance plan and those in a public plan like Medicaid.

The Arkansas Medicaid expansion program created a unique natural experiment allowing us to address this gap. In 2014, Medicaid eligibility in the state was expanded to over 225,000 individuals aged 19–64 with incomes at or below 138% of the federal poverty level (FPL). The Arkansas legislation authorizing expansion provided a private insurance option for "low-risk" adults.7 The state received a Section 1115 demonstration waiver from the Centers for Medicare & Medicaid Services (CMS) to use Medicaid funds to purchase this private insurance through the ACA Marketplace.8 Consistent with legislative requirements, participants were automatically enrolled in either Medicaid or a qualified Marketplace health plan (QHP) based on a risk assessment.9

This automatic assignment to plan type based on risk, with demographic factors otherwise comparable across plans, allowed us to assess any public-private insurance satisfaction gaps. Our objective in this study was to examine patient experience, a core dimension of healthcare quality, to determine whether insurance type (Medicaid vs QHP) affected scores.

METHODS

Study Design

Regression discontinuity (RD) was used to evaluate differences in patient experience scores between the QHP and Medicaid programs.10 The expansion population completed a medical needs questionnaire to evaluate health risk (herein referred to as the need score).10 Need scores, ranging from 0.02 to 0.61, determined assignment to Medicaid or QHP: scores < 0.18 (low medical need) were assigned to QHP and scores ≥ 0.18 (medically frail) to Medicaid. The RD approach allowed comparison of individuals at the threshold (0.18) to estimate treatment effects, that is, satisfaction differences between programs. The Consumer Assessment of Healthcare Providers and Systems (CAHPS) survey was used to assess patient experiences.11
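
As a minimal sketch of this sharp assignment rule (the function and variable names here are ours, for illustration only, not the study's actual code):

```r
# Sharp RD assignment: need scores below 0.18 were assigned to QHP,
# scores at or above 0.18 to Medicaid. `need_score` is a hypothetical
# numeric vector of enrollee need scores (observed range 0.02-0.61).
assign_program <- function(need_score) {
  ifelse(need_score < 0.18, "QHP", "Medicaid")
}

assign_program(c(0.02, 0.17, 0.18, 0.61))
#> [1] "QHP"      "QHP"      "Medicaid" "Medicaid"
```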

Study Population

As summarized in the flow diagram (Figure A1, Appendix),12 225,168 persons were eligible and enrolled under Medicaid expansion in 2014. Our sampling frame included 181,206 enrollees, from which a sample of 29,164 was selected: 5000 enrollees each in Medicaid and QHP, plus an additional 19,164 participants selected to represent the full range of income and need scores. A survey response rate of 26.4% yielded 6568 completed surveys, from which we excluded respondents with missing need scores, insurance plan changes due to eligibility changes, or no provider visits in the preceding 6 months. Our final analytic sample was 3156 (1759 QHP and 1397 Medicaid).

Data Sources

Data for this study were obtained from four 2014–2015 data sources: (1) Arkansas Medicaid enrollment files from the Arkansas Department of Human Services; (2) administrative claims data; (3) the exceptional health needs questionnaire (Appendix); and (4) the CAHPS survey administered July–September of 2015. All data were linked using a unique, encrypted identifier.

Measures

Outcomes were CAHPS ratings of care and care providers and the availability of electronic communication with a provider. Ratings of personal provider, specialist provider, and all health care received ranged from 0 to 10 (0 = worst possible, 10 = best possible). A personal provider is seen for checkups or health advice, while a specialist has expertise in one area (e.g., heart or kidney). Ratings were based on services received in the preceding 6 months. Consistent with other research,13 responses were dichotomized as scores of 9 or 10 (a top-box score) versus all other scores. Respondents were also asked whether providers offered electronic communication through email, smartphone, or patient portals (yes/no).
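
As an illustration, the top-box dichotomization could be coded as below (a sketch; `cahps` and its column names are assumptions, not the study's actual variable names):

```r
# Dichotomize 0-10 CAHPS ratings into top-box (9 or 10) vs. all other
# scores. `cahps` is a hypothetical data frame of survey responses.
cahps$topbox_overall    <- as.integer(cahps$rating_overall    >= 9)
cahps$topbox_personal   <- as.integer(cahps$rating_personal   >= 9)
cahps$topbox_specialist <- as.integer(cahps$rating_specialist >= 9)
```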

Covariates included age (continuous, 19–64), sex (male or female), race/ethnicity (White non-Hispanic, Black non-Hispanic, Other, and Hispanic), educational attainment (< high school, high school graduate/GED, some college, ≥ college graduate, and education missing), marital status (married/partnered, widowed/divorced/separated, never married, and missing), mental health or substance abuse diagnosis, and the Charlson Comorbidity Index (CCI). Urban status was assigned using rural-urban commuting area codes based on patient ZIP code and dichotomized as rural versus urban.14 The CCI, an ordinal measure, is based on diagnostic codes from claims data and ranged from 0 (no comorbidities) to 4+ (at least four comorbidities).

Statistical Methods

RD is a quasi-experimental approach whose results compare favorably to those of randomized controlled trials, the gold standard, while being more practical and cost-effective.15 RD requires a continuous measure to assign participants to treatment groups (Medicaid vs QHP); in this case, the need score. Assignment was based on a discrete cut-point in the need score (0.18), at which the probability of assignment jumped from 0 to 1. This is referred to as sharp assignment because the probability of assignment is deterministic rather than continuous (see Figure A2, Appendix).11 We used McCrary's test16 to check whether the density of need scores was continuous at the cut-point and thereby rule out manipulation of program assignment. Results (see Appendix Figure A2) indicate a continuous distribution with no manipulation.
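
The McCrary test is implemented as DCdensity in the same rdd package used for the non-parametric models below; a minimal sketch, assuming `need_score` holds the running variable:

```r
library(rdd)

# McCrary density test: a significant discontinuity in the density of
# the running variable at the cut-point would suggest manipulation of
# program assignment. DCdensity returns the p-value for the
# discontinuity (a large p-value is consistent with no manipulation).
mccrary_p <- DCdensity(need_score, cutpoint = 0.18, plot = TRUE)
mccrary_p
```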

RD also requires that independent variables be continuously distributed across need scores, with no breaks at the threshold. We tested this by visual inspection of scatter plots for age and CCI, our continuous variables; no discontinuity was observed (Figures A3 and A4, Appendix). Because of the way assignment to insurance programs was made, enrollees on each side of the threshold are expected to have similar characteristics. We examined Arkansas enrollees on sex, race, marital status, education, and urbanicity, finding similar demographic characteristics on either side of the cut-point (Tables A2-A6, Appendix).
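
A visual continuity check of this kind can be sketched as follows (assuming a data frame `dat` with `need_score` and `age` columns; ggplot2 is our choice for illustration, not necessarily the authors' tool):

```r
library(ggplot2)

# Continuity check for a continuous covariate: age plotted against
# need score, with a separate LOESS smooth fit on each side of the
# 0.18 cut-point. A jump between the two smooths at the threshold
# would signal a covariate discontinuity.
ggplot(dat, aes(x = need_score, y = age)) +
  geom_point(alpha = 0.2) +
  geom_smooth(data = subset(dat, need_score < 0.18), method = "loess") +
  geom_smooth(data = subset(dat, need_score >= 0.18), method = "loess") +
  geom_vline(xintercept = 0.18, linetype = "dashed")
```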

RD outcomes were modeled as a function of the need score and insurance program assignment, controlling for covariates. We estimated both parametric and non-parametric models because they trade off bias and precision differently. Parametric RD uses all available data, yielding high precision but potentially biased estimates if the functional form of the model is misspecified. In contrast, non-parametric RD uses only observations close to the cut-point, which produces less biased estimates with lower precision.11 We minimized bias in our parametric models by testing different functional forms for the need score (e.g., linear, quadratic, and cubic) as well as interactions between need score and insurance program through a series of F-tests. We selected the simplest adequate model based on F-test p values and the Akaike information criterion (AIC) (Table A7, Appendix). We also conducted robustness checks of significant models by sequentially dropping the outermost 1%, 5%, and 10% of data points, plus the lowest and highest values of the need score. Because the lowest 1%, 5%, and 10% of data points all had the minimum need score of 0.02, we additionally dropped the lowest 17% of data points, the smallest trim that reached the next lowest observed value. Results of these tests can be found in Table A8, Appendix; all suggest our models were both reliable and robust. Logistic regression models were run using SAS 9.4 (SAS Institute Inc., Cary, NC).
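
Although the parametric models were fit in SAS 9.4, an equivalent specification in R might look like the sketch below (the data frame `dat`, outcome, and covariate names are illustrative assumptions):

```r
# Parametric sharp-RD logistic model. Centering the need score at the
# 0.18 cut-point makes the `medicaid` coefficient the program effect
# at the threshold.
dat$need_c   <- dat$need_score - 0.18
dat$medicaid <- as.integer(dat$need_score >= 0.18)

m_linear <- glm(topbox_overall ~ medicaid + need_c + age + sex + race +
                  education + marital + rural + cci,
                family = binomial, data = dat)

# Competing functional forms and a program-by-score interaction,
# compared via nested tests and AIC; the simplest adequate model wins.
m_quad  <- update(m_linear, . ~ . + I(need_c^2))
m_inter <- update(m_linear, . ~ . + medicaid:need_c)
anova(m_linear, m_quad, test = "LRT")
AIC(m_linear, m_quad, m_inter)

# Program effect at the cut-point as an odds ratio with 95% CI.
exp(cbind(OR = coef(m_linear), confint(m_linear)))["medicaid", ]
```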

Non-parametric models were estimated as a complement to the parametric models; consistent results across the two approaches give greater confidence in our findings. We used the standard local linear regression approach (i.e., fitting regressions to data points near the cut-off) to estimate the non-parametric models, ensuring a sufficient number of observations was included to reduce bias while maintaining precision.11 The bandwidth around the cut-point was determined using the Imbens-Kalyanaraman method,15 specifying a triangular kernel function for all models. The local average treatment effect (LATE) was reported for the program comparisons. The rdd package in R 3.3.3 was used for these analyses.17 Sensitivity analyses were run as local linear regression models with the RDHonest package in R, retaining the same kernel function and bandwidths.18 Locally estimated scatterplot smoothing (LOESS) curves were superimposed to visualize program effects on outcomes.
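
A sketch of the non-parametric estimation using the rdd and RDHonest packages named above (the formula syntax follows those packages; `dat` and its columns remain illustrative, and the RDHonest argument names are our reading of that package's documentation):

```r
library(rdd)

# Imbens-Kalyanaraman optimal bandwidth with a triangular kernel.
bw <- IKbandwidth(dat$need_score, dat$topbox_overall,
                  cutpoint = 0.18, kernel = "triangular")

# Local linear regression around the cut-point; RDestimate reports the
# local average treatment effect (LATE) at the threshold.
np_fit <- RDestimate(topbox_overall ~ need_score, data = dat,
                     cutpoint = 0.18, bw = bw, kernel = "triangular")
summary(np_fit)

# Sensitivity analysis with RDHonest, retaining the same kernel and
# bandwidth (treat these argument names as an assumption).
library(RDHonest)
honest_fit <- RDHonest(topbox_overall ~ need_score, data = dat,
                       cutoff = 0.18, kern = "triangular", h = bw)
honest_fit
```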

RESULTS

Table 1 reports baseline characteristics of the sample population. Apart from differences in medical need, characteristics of the population, including income, were similar. Most of the cohort were female, approximately 45 years of age, non-Hispanic White, high school graduates, married, and lived outside an urban area.

Table 1. Bivariate Analysis by Program Indicator (Medicaid vs. QHP), All Variables (n, %)

Table 2 provides results of the parametric and non-parametric RD models; parametric results are reported as odds ratios and mean probability differences. The program effect at the cut-point was significant for top-box scoring of overall healthcare and of the personal doctor: Medicaid enrollees at the cut-point were less likely than QHP enrollees to give top-box scores for overall healthcare (OR = 0.71, 95% CI = 0.56–0.90) and for their personal doctor (OR = 0.68, 95% CI = 0.53–0.87). Non-parametric local linear regression models showed similar results: the LATE was −0.12 (p = 0.04) for overall healthcare and −0.13 (p = 0.04) for the personal doctor rating. No significant program effect was seen for the rating of specialty provider or for whether a personal doctor offered electronic communication.

Table 2. Models for CAHPS Outcomes

Figure 1 plots the adjusted predicted probability, from the parametric model, of highly rating all health care received (y-axis) against need scores (x-axis). A program effect is suggested by the discontinuity of about 10 percentage points in the regression curve at the cut-point, with QHP enrollees more likely to highly rate the health care they received.

Figure 1. Parametric model: predicted probability of rating all healthcare a 9 or 10.

In Figure 2, the y-axis shows the adjusted predicted probability of highly rating one's personal doctor. Among QHP enrollees, this probability is stable across need scores as they approach the cut-point; among Medicaid participants, it increases with increasing need scores. There is roughly a 10-percentage-point difference between Medicaid and QHP participants in the probability of highly rating a personal doctor. The significant results suggest a relatively stable and robust program effect.

Figure 2. Parametric model: predicted probability of rating personal provider a 9 or 10.

Figures 3 and 4 are provided for comparison, though their results are not significant: there is no jump at the threshold, and the data are instead continuous across need scores. Even so, the trend favored QHP participants, who were more likely to highly rate specialists (67.9% versus 59.1%).

Figure 3. Parametric model: predicted probability of rating specialist provider a 9 or 10.

Figure 4. Parametric model: predicted probability of personal provider offering electronic communication.

DISCUSSION

Among newly insured Arkansans, QHP participants were more likely than Medicaid enrollees to highly rate their personal providers and overall health care. Comparisons with other states are not possible because no other state has examined differences in experience scores for a Medicaid expansion population, particularly between enrollees in Medicaid and those in a commercial plan. In part, this is because the Arkansas program was unique at the time in using Medicaid monies to purchase commercial insurance in the Marketplace. However, a Commonwealth Fund study published in 2017 did find that Medicaid enrollees were as likely as commercially insured enrollees to rate the quality of their healthcare as excellent or very good (57% vs 52%).19 Because these data do not focus solely on a Medicaid expansion population, the comparison is not exact; still, it suggests that a later assessment of experience scores might yield results different from ours.

Healthcare ratings are an important indicator of perceived quality. While we controlled for demographic and clinical characteristics, other factors may play a role in our findings, notably access to care and utilization. Analyses conducted by the Arkansas Center for Health Improvement (ACHI)12 found that 98% of the expansion population met distance-to-care standards, but Medicaid enrollees had more difficulty finding and engaging with providers. Nationally, the number of providers accepting Medicaid has remained stable over time despite the influx of new Medicaid enrollees, and this already overburdened system made realized access more difficult for Medicaid enrollees.20 The Medicaid provider shortage is due, in part, to low reimbursement rates, which in 2018 were 50% of the rates for the privately insured in Arkansas.9

Research has also shown that connecting with a provider when needed is essential to both long-term engagement and satisfaction with providers.21 Arkansas Medicaid enrollees indicated they had more difficulty than QHP enrollees getting care when needed.12 Basseyn and colleagues22 found that among Arkansas primary care practices, QHP providers had higher new-patient appointment rates than Medicaid providers, consistent with findings from national studies.23 QHP participants also had consistently better access to and utilization of primary, secondary, and tertiary prevention, care, and treatment than Medicaid participants.12 Only 8.2% of Medicaid enrollees had accessed outpatient care at 30 days compared to 21.2% of QHP enrollees, and by 90 days the difference was wider (29.6% vs 41.8%).12 The connection between getting care when needed, utilization more generally, and Medicaid reimbursement is highlighted by research showing that increasing Medicaid reimbursement also increased appointment availability.24 Additionally, improving healthcare utilization fosters patient trust in and engagement with a provider, increases compliance, and improves health status, all of which lead to higher patient experience ratings.21,25,26

Taken together, the differences in patient experience scores and the patterns of reduced access and utilization are consistent with gaps in both health insurance literacy and general health literacy. Health insurance literacy is the ability to understand key insurance terms and to purchase and use insurance,27,28 while health literacy is "…the capacity to obtain, process, and understand basic health information and services needed to make appropriate health decisions."29 Individuals with higher health needs, like our Medicaid cohort, experience greater deficits in both health insurance literacy and health literacy than those who are relatively healthier.30,31 Furthermore, health-related literacy deficits can drive both delayed care and emergent care,32,33 impairing the ability to build interpersonal relationships and communicate effectively with providers,34,35 factors associated with patient satisfaction.35

Our results were not significant for specialty care providers, although Medicaid participants generally report less access to specialty providers.36 Regarding the electronic communication results, the literature suggests that patient knowledge and use of healthcare portals is limited and varies, as a result of the "digital divide," by age, education, race/ethnicity, and income.37,38,39,40,41,42

LIMITATIONS

Our first limitation concerns patient ratings generally. The problems associated with capturing patient ratings at a single point in time or as part of a single construct have been documented.43,44,45,46,47,48 However, the CAHPS survey measuring patient experiences has been well validated.49 More specific to the Arkansas deployment of the CAHPS, our response rate of 26.4% was below the 40% the Agency for Healthcare Research and Quality (AHRQ)50 suggests can be attained, creating potential nonresponse bias in our results. However, survey response rates in the USA generally51 and CAHPS response rates specifically52 have declined over time. The 2014–2015 national Medicaid CAHPS had a response rate of 23.6%, with data estimates still considered valid.4

Second, weighting CAHPS responses might have addressed survey non-response51; however, weights for the Arkansas CAHPS were developed without consideration of sample strata. Given this, we elected to use unweighted data in our analyses, which may limit the generalizability of our conclusions. Third, dichotomizing our outcomes, while consistent with established methods, does obscure the full variation in the ratings. Fourth, the study took place relatively soon after expansion began, so enrollee knowledge might have changed over time. A second CAHPS was fielded in 2016; future research will examine differences between the 2014 and 2016 surveys.

Fifth, we excluded participants from our final sampling frame primarily because of dual Medicare-Medicaid eligibility, inaccurate address information, or non-continuous enrollment. While these exclusions might have introduced some bias, we do not believe they substantively affected our results. Sixth, these data represent the experience of a single state, and our findings may not generalize to other states since Arkansas was the first state to use the premium assistance mechanism.10 Seventh, while our models controlled for numerous patient factors, a key element, utilization, was not included as a predictor; future research will examine the impact of utilization on experience scores. Finally, because our analyses included only those who indicated they had a visit with either a primary or specialist provider, those who might have had the greatest difficulty finding providers were excluded.

CONCLUSION

Patient experience is an important, policy-relevant topic for Medicaid expansion. We found that Medicaid enrollees were significantly less likely than QHP enrollees to highly rate their healthcare and personal provider. Understanding the factors driving experience scores is critical to improving them, and multiple pathways were suggested for our results, including access, utilization, and health-related literacy. First, increasing health insurance literacy and health literacy is crucial for establishing a regular source of care, using healthcare appropriately, increasing trust in providers, and communicating effectively with providers; improvements in these areas will ultimately benefit patient experiences. There are no plans within Arkansas government to implement health insurance or health literacy programs for the Medicaid expansion population. However, individual providers and practices may act on their own to implement practice-based screening for health literacy, and multiple instruments are available for that purpose.53,54 Tailoring oral and written materials to language at or below a sixth-grade reading level, while incorporating visual aids when possible, can improve two-way communication between patients and providers.55 Other non-literacy interventions have been suggested for practices, including frequent and respectful communication, decreased appointment wait times, seeing patients at their appointment times, and implementing a culture of service.55,56 Finally, programs like Ask Me 3® are available; their purpose is to ensure patients maximize medical visits by asking three questions: What is my main health problem? What do I need to do? Why is this action important?57 These interventions improve patient engagement and, consequently, patient satisfaction, which is critical in an increasingly market-driven world where physician reimbursement is driven in part by patient experiences.