INTRODUCTION

Physicians have been targeted for price transparency efforts because they have the expertise needed to distinguish when medical spending is necessary versus wasteful.1–4 Physician-targeted price transparency is considered a promising cost-control strategy because the vast majority of controlled studies have found that when clinicians are shown the prices of tests, they lower test-ordering rates.5–20

Available evidence is not without limitations. Nearly all studies have presented clinicians with charge information rather than paid prices, the prices that health plans actually pay.5–20 Charge information is typically used in contract negotiation and can be 4–40-fold higher than paid prices; presenting clinicians with charges can therefore exaggerate their price response.21 Existing studies have also presented price information to trainee clinicians on inpatient rotations rather than to fully licensed clinicians in routine outpatient practice.5–20 Actively practicing, fully licensed clinicians may not know exact prices but may be aware of relative pricing (e.g., that ultrasounds are cheaper than MRIs), and they may already combine that knowledge with evidence-based practice to make high-value ordering decisions.

It is also important to study the effect of price information within an accountable care organization (ACO) because this type of organization is proliferating. ACOs—health care providers responsible for the cost and quality of care for a defined population of patients—are also of interest because they can benefit financially from lower spending.18, 22 As a result, ACOs may develop or promote cultures that lead clinicians to respond to price information differently than they would in hospital or emergency department settings (e.g., they may already have reduced unnecessary care to low rates or be more interested in shifting the location of care than in lowering ordering rates).18 The one study done within the ACO setting, by members of our team, suggests that price transparency involving laboratory tests can have a variable effect.18

The extant literature also leaves unexplored several domains that are relevant today. It does not examine how alternate presentations of price information may differentially affect clinician ordering rates.5–20 Presenting a Single Median Price may allow clinicians to focus on whether the test they are considering is “worth” the price they see; they may lower or raise ordering rates accordingly. Within a global payment contract or ACO, however, information about the differential paid price of tests performed “internally” versus “externally” to the ACO’s risk-bearing entity (i.e., the entity financially responsible if a patient population’s spending exceeds estimated or budgeted amounts) may allow cost savings to be achieved by shifting the location of imaging or procedures rather than by lowering ordering levels.23

To our knowledge, no study has assessed whether price information reduces test ordering in clinical scenarios in which ordering would be considered “inappropriate” (e.g., advanced brain imaging for simple headaches) while preserving ordering thought to be “appropriate” (e.g., recommended screening colonoscopies). If price information affects clinician ordering rates, it would be important for the effect to be limited to testing considered inappropriate and for price to have no effect on testing considered appropriate.

This study uses a block-randomized controlled design to evaluate the effect of displaying a single median price or a pair of “internal/external” median prices on how often clinicians caring for adult patients order imaging studies and procedures: (1) overall, (2) to be completed internally within an ACO, (3) in test-ordering circumstances considered “inappropriate,” and (4) in test-ordering scenarios reflecting “appropriate” orders.

METHODS

Study Setting

Atrius Health (Atrius) is a large multispecialty medical group with over 35 practice locations in eastern and central Massachusetts. At the time of our study, Atrius’ more than 1200 primary care and specialist clinicians (84 % MDs/DOs; 16 % nurse practitioners/physician assistants) delivered care to nearly 400,000 patients aged 21 and older annually. About 10 % of patients were from Black or non-White Hispanic backgrounds; 8 % and 13 % were insured by Medicaid and Medicare, respectively. Approximately half of the contracts and patients cared for by Atrius were risk-bearing. Boston Children’s Hospital Institutional Review Board approved this study, including a waiver of informed consent for clinicians.

Price Education Intervention (PEI)

Early in January 2014, Atrius introduced the PEI, focused on commonly ordered imaging studies and procedures (Appendix Table 3), to all of its eligible clinicians; all clinicians received memo-based price information prior to the randomized start of EHR-based price display. For each test, Atrius calculated a single median paid price from the insurer-paid amounts across all of the risk-bearing commercial, Medicaid, and Medicare contracts Atrius held in the year prior to the PEI. Atrius also calculated a set of “internal/external” median paid prices reflecting prices if the test was conducted within or outside of Atrius, respectively. Atrius prices were lower than non-Atrius prices in 92 % of cases, with a mean difference of $365 (SD $914). Paper and electronic memos introduced the intent of the PEI, which was to provide price information without adjunctive clinical decision support or patient education materials.
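The paper does not reproduce Atrius’ price-calculation procedure; the following is a minimal sketch of how single and paired internal/external median paid prices could be derived from a claims extract. The file name and columns (test_code, paid_amount, performed_internally) are hypothetical.

import pandas as pd

# Hypothetical one-year extract of paid claim lines from risk-bearing
# commercial, Medicaid, and Medicare contracts.
claims = pd.read_csv("risk_bearing_claims_2013.csv")

# Single median paid price per test, pooled across all contracts.
single_median = claims.groupby("test_code")["paid_amount"].median()

# Paired internal/external median paid prices, split by whether the
# test was performed within Atrius (True) or outside of it (False).
paired_medians = (
    claims
    .groupby(["test_code", "performed_internally"])["paid_amount"]
    .median()
    .unstack("performed_internally")
    .rename(columns={True: "internal_median", False: "external_median"})
)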

Study Design

Starting January 26, 2014, and continuing through December 31, 2014, we block randomized clinicians who could independently place orders in Atrius’ Epic-based electronic health record (EHR) to one of three study arms: Control (no EHR price display), Single Median Price, or Paired Internal/External Median Prices (Appendix Table 4). We first obtained Atrius’ list of 1509 clinician employees who could independently place orders in the Epic-based EHR. We drew practices in random order and randomized all physicians and eligible non-physician clinicians within each practice (or block) before moving on to the next practice. We block randomized clinicians because practice locations varied substantially in terms of size (5–50 providers), setting (urban, suburban), and patient population characteristics (e.g., race/ethnicity, insurance).
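For concreteness, here is a minimal sketch of within-practice (block) randomization as described above; the function, data structures, and seed are illustrative assumptions, not the study’s actual procedure.

import random

def block_randomize(clinicians_by_practice, arms, seed=2014):
    """Randomize clinicians to study arms within each practice (block).

    clinicians_by_practice: dict mapping practice id -> list of clinician ids.
    arms: list of arm labels.
    """
    rng = random.Random(seed)
    assignment = {}
    practices = list(clinicians_by_practice)
    rng.shuffle(practices)  # draw practices in random order
    for practice in practices:
        clinicians = list(clinicians_by_practice[practice])
        rng.shuffle(clinicians)
        # Deal clinicians round-robin so arms stay balanced within the block.
        for i, clinician in enumerate(clinicians):
            assignment[clinician] = arms[i % len(arms)]
    return assignment

# Example with two practices of unequal size.
assignment = block_randomize(
    {"practice_a": ["c1", "c2", "c3", "c4", "c5"],
     "practice_b": ["c6", "c7", "c8"]},
    arms=["control", "single_median_price", "paired_prices"],
)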

Clinicians randomized to the Single Median Price arm received a single median price display next to the test while they were placing that order in their EHR. Those in the Paired Internal/External Median Price arm had internal and external median prices appear next to the test in the ordering screen in their EHR.

The study sample consisted of 1205 clinicians who had at least one direct encounter with a patient aged ≥21 years during 2014, of whom 407 were randomized to the Control arm, 396 to the Single Median Price arm, and 402 to the Paired Internal/External Median Prices arm. Among eligible clinicians, 728 were primary care providers (e.g., internists, family practitioners) and 477 were specialists (e.g., obstetrician/gynecologists, cardiologists, orthopedists). Study team members were blinded to study arm assignment until our initial analysis was complete.

Data Source

Atrius’ Epic Systems©-based Stage 7 EHR records all clinicians’ ordering actions (e.g., orders placed, whether an order was to be completed internally within Atrius) and has served as the chief repository for research data.24–27 We used Atrius data for calendar years 2013 and 2014. We used post-intervention (2014) data to measure the effect of the intervention because the pre-intervention (2013) data verified that study arms were balanced in our outcomes of interest prior to the intervention (Appendix Tables 5 and 6). Atrius’ EHR data were enhanced with electronically abstracted information designed to capture whether “Choosing Wisely” recommendations were being followed28, 29 and whether recommended cervical and colorectal cancer screenings were being completed.

We followed Choosing Wisely criteria to identify clinical circumstances under which an imaging study or procedure test order would be considered “inappropriate.”30 Our analysis focused on the subset of orders being placed for patients: (1) at low-risk for cervical cancer (e.g., those with hysterectomies) who had Pap smear orders placed; (2) with Framingham Risk scores ≤12 points for men or ≤19 points for women who were receiving cardiac test orders (e.g., EKGs, ECHOs); (3) with simple syncope or simple headache who had head CT or MRI orders; (4) with uncomplicated low-back pain within 6 weeks of the initial diagnostic encounter who were having lumbar CTs or MRIs; (5) with acute, uncomplicated rhinosinusitis who were having sinus CTs; (6) at low risk for osteoporosis (e.g., normal weight women <50 years old without history of fracture, smoking, or heavy drinking) who had dual-energy X-ray absorptiometry orders placed.
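To illustrate how such criteria can be operationalized against EHR abstracts, here is a toy sketch implementing two of the rules above; the thresholds follow the text, but the Order fields and value encodings are hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    test: str                        # e.g., "ekg", "head_ct", "pap"
    patient_sex: str                 # "M" or "F"
    framingham_score: Optional[int]  # points, if computed
    diagnosis: str                   # encounter diagnosis label

def is_inappropriate(order: Order) -> bool:
    """Flag two of the Choosing Wisely-style rules described above."""
    # Rule 2: cardiac tests for low-risk patients
    # (Framingham score <=12 points for men, <=19 for women).
    if order.test in {"ekg", "echo"} and order.framingham_score is not None:
        limit = 12 if order.patient_sex == "M" else 19
        if order.framingham_score <= limit:
            return True
    # Rule 3: head CT/MRI for simple syncope or simple headache.
    if order.test in {"head_ct", "head_mri"} and order.diagnosis in {
        "simple_syncope", "simple_headache"
    }:
        return True
    return False

# Example: an EKG for a man with a Framingham score of 10 is flagged.
print(is_inappropriate(Order("ekg", "M", 10, "routine_exam")))  # True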

We followed modified Healthcare Effectiveness Data and Information Set (HEDIS) criteria to identify “appropriate” ordering rates, which included: (1) orders for women aged 21–64 years who had not had a Pap within the prior 3 years; (2) orders for women aged 30–64 years who had not had cervical cytology/HPV co-testing within the prior 5 years; (3) orders for colonoscopy and flexible sigmoidoscopy for men or women aged 50–75 years who had not had a colonoscopy in the past 10 years or flexible sigmoidoscopy in the past 5 years.31

Main Independent Variable

Our main independent variable was an indicator of whether the clinician was randomized to the Control, Single Median Price, or Paired Internal/External Median Prices study arm.

Outcome Variables

Our main dependent variables were ordering rates for the price-revealed tests, generally specified as each clinician’s total volume of price-displayed tests of a given type divided by their total volume of encounters (i.e., orders per 100 patient encounters). We examined four types of ordering rates: (1) overall (i.e., all orders for price-displayed tests); (2) internal; (3) inappropriate; and (4) appropriate orders.
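In concrete terms, the outcome is a simple per-clinician rate; a toy illustration follows (the function is ours, not study code).

def ordering_rate(n_orders, n_encounters):
    """A clinician's orders of a given type per 100 encounters."""
    return 100.0 * n_orders / n_encounters

# A clinician placing 45 price-displayed orders across 300 encounters
# has an overall ordering rate of 15.0 per 100 encounters.
print(ordering_rate(45, 300))  # 15.0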

Statistical Analysis

For all analyses, our unit of analysis was the unit of randomization—the clinician. We first analyzed data for all clinicians together. We then analyzed data for primary care clinicians separately from specialists because the two groups order tests under different clinical circumstances. In general, we used one-way analysis of variance (ANOVA) and pairwise t-tests (if ANOVAs were significant) to compare clinicians in the Control arm with those in the intervention arms.
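A minimal sketch of this comparison strategy using scipy, with illustrative (made-up) per-clinician ordering rates:

from itertools import combinations
from scipy import stats

# Illustrative per-clinician ordering rates (orders per 100 encounters).
arms = {
    "control":       [14.2, 15.9, 13.1, 16.4, 15.5, 14.8],
    "single_median": [15.1, 14.8, 16.0, 13.9, 15.3, 15.7],
    "paired":        [15.8, 16.2, 14.5, 15.0, 16.9, 14.1],
}

f_stat, p_anova = stats.f_oneway(*arms.values())
if p_anova < 0.05:
    # Pairwise t-tests only when the omnibus ANOVA is significant.
    for (name_a, a), (name_b, b) in combinations(arms.items(), 2):
        _, p_pair = stats.ttest_ind(a, b)
        print(f"{name_a} vs {name_b}: p = {p_pair:.3f}")
else:
    print(f"No overall difference across arms (ANOVA p = {p_anova:.2f})")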

We examined the total volume and composition of each clinician’s patient panel: average age, percent female, percent White, percent commercially insured, and number of chronic conditions per patient.32 We conducted sensitivity analyses to examine whether our results were robust to including or excluding orders placed in non-face-to-face encounters.

Our study was designed to detect an effect size of 25 % of a standard deviation in test ordering with 80 % power and 5 % Type I error (i.e., a decrease or increase of roughly 3 orders per 100 encounters).
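As a back-of-the-envelope check of this design target, assuming a two-sided, two-sample t-test between any pair of arms:

from statsmodels.stats.power import TTestIndPower

# Per-arm sample size needed to detect a standardized effect of
# d = 0.25 with 80 % power at alpha = 0.05 (two-sided).
n_per_arm = TTestIndPower().solve_power(effect_size=0.25, power=0.80, alpha=0.05)
print(round(n_per_arm))  # ~252, below the ~400 clinicians enrolled per arm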

Although differences in ordering rates across study arms during the intervention period represent the effect of paid-price information under a randomized design, we also estimated a generalized linear mixed model with a difference-in-differences regression specification in case the one significant difference in patient panel characteristics (percent commercially insured) or low ordering rates affected our analyses.
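The paper’s exact specification is not reproduced here; the sketch below approximates the idea with a linear mixed model (random intercept per clinician) fit to a tiny synthetic clinician-period panel. All variable names and values are purely illustrative.

import pandas as pd
import statsmodels.formula.api as smf

# Synthetic panel: one row per clinician per period (post = 0 for 2013,
# post = 1 for 2014), with three arms of four clinicians each.
panel = pd.DataFrame({
    "clinician": list(range(12)) * 2,
    "arm": (["control"] * 4 + ["single_median"] * 4 + ["paired"] * 4) * 2,
    "post": [0] * 12 + [1] * 12,
    "ordering_rate": [15, 13, 17, 14, 16, 18, 12, 15, 14, 16, 13, 17,
                      14, 15, 16, 13, 15, 17, 13, 14, 15, 17, 12, 16],
})

# Random intercept per clinician; the arm-by-post interaction terms are
# the difference-in-differences estimates of the price display effect.
did = smf.mixedlm("ordering_rate ~ C(arm) * post", panel,
                  groups=panel["clinician"]).fit()
print(did.summary())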

Data were analyzed using the Stata statistical package, version 13.1.

RESULTS

Study Population

In 2014, clinicians across the three study arms did not differ significantly with respect to the volume of unique patients they cared for within the year, the composition of their patient panels, the volume of face-to-face encounters they had with patients, or the volume of orders they placed during face-to-face and non-face-to-face encounters (Table 1). On average across the three arms, clinicians cared for 770 [standard deviation (SD) 675; ANOVA p = 0.22] unique patients within the year through 1235 (SD 1116; ANOVA p = 0.19) face-to-face encounters. On average, clinicians’ patients were 46 years old (SD 14; ANOVA p = 0.74); 64 % (SD 23 %; ANOVA p = 0.80) were female, 78 % (SD 17 %; ANOVA p = 0.95) were White, and 72 % (SD 14 %; ANOVA p = 0.50) were commercially insured; and patients had an average of 0.51 (SD 0.44; ANOVA p = 0.29) chronic conditions.

Table 1 Clinicians’ and Panel Characteristics, 2014

Ordering Rates: Overall

We found no significant difference in overall ordering rates among clinicians randomized to the Control, Single Median Price, or Paired Prices study arms (Fig. 1 and Table 2). Figure 1 presents the overall ordering rates graphically and illustrates how wide the variation in ordering rates can be relative to ordering levels. Table 2 shows that for every 100 face-to-face encounters, clinicians in the Control arm ordered 15.0 (SD 31.1) of the targeted tests, those in the Single Median Price arm ordered 15.0 (SD 16.2) tests, and those in the Paired Prices arm ordered 15.7 (SD 20.5) tests; ANOVA p = 0.88.

Figure 1 Boxplots of ordering rates by arm, clinician type, and order type, 2014

Table 2 Ordering Rates by Arm, Clinician Type, and Order Type, 2014

Ordering Rates: Internal, Inappropriate, and Appropriate

We also found no significant difference across arms with respect to orders designated to be completed internally or under clinical circumstances considered inappropriate or appropriate (Table 2).

For every 100 face-to-face encounters, clinicians in the Control arm designated that 4.0 (SD 6.9) orders be completed internally, those in the Single Median Price arm designated that 4.3 (SD 7.6) orders occur within Atrius, and those in the Paired Internal/External Median Prices arm specified that 4.5 (SD 8.2) orders be completed internally; ANOVA p = 0.63.

For the clinical circumstances in which we could assess whether orders were inappropriate, clinicians in the Control arm ordered 0.3 (SD 0.6) tests, those in the Single Median Price arm ordered 0.3 (SD 0.5) tests, and those in the Paired Prices arm ordered 0.3 (SD 0.8) tests per 100 face-to-face encounters; ANOVA p = 0.60. This pattern extended to the two clinical scenarios where orders were appropriate: clinicians in the Control arm ordered 1.9 (SD 4.7) tests, those in the Single Median Price arm ordered 1.8 (SD 3.6) tests, and those in the Paired Prices arm ordered 2.0 (SD 4.1) tests per 100 face-to-face encounters; ANOVA p = 0.82.

Primary Care versus Specialist Clinicians

We found the same non-significant differences in ordering rates between study arms when we analyzed the 728 primary care clinicians separately from the 477 specialists (Table 2). However, the two groups exhibited different ordering levels and variation in their ordering rates. Specialists ordered nearly twice as many imaging studies or procedures overall per 100 face-to-face encounters as primary care clinicians: mean overall ordering rates for specialists ranged from 19.6 to 21.1 across the three study arms, compared with 11.6 to 12.2 for primary care clinicians; p-values 0.75 and 0.92, respectively. Variation in overall ordering rates was also greater among specialists, with SDs ranging from 23.5 to 45.8, versus 8.1 to 10.6 for primary care clinicians.

Our sensitivity analyses demonstrated that the findings were robust to operationalizing ordering rates as inclusive or exclusive of non-face-to-face encounters. We also found no difference between study arms when using a difference-in-differences regression model (Appendix Table 7).

DISCUSSION

To our knowledge, this is the largest randomized study of prospectively sharing paid-price information on imaging studies and procedures with clinicians at the point of care.5–20 It suggests that, in contrast to the prior literature, which mainly presented charge information to trainees in hospital settings, displaying paid prices to fully trained clinicians in an ACO setting does not necessarily lower ordering rates.

We also show that clinicians—at least those working in this particular ACO—do not respond differentially to price information presented as a Single Median Price versus a Paired Internal/External Price. Nor does price information appear to be differentially applied to clinical scenarios in which ordering would be considered “inappropriate”; it may be reassuring that price information appears to have no impact on ordering in clinical scenarios considered “appropriate.”

Our non-significant findings persist despite the intensity of the price transparency intervention, the use of actual ordering data for assessment (not clinician self-report or claims, which only represent care that has been completed and billed), an evaluation duration twice as long as in prior studies, and power to detect a change one-fifth the size of what other studies have been powered to detect.5–20

Our findings are timely because ACOs are a type of delivery system that continues to proliferate across the US. The capabilities within the ACO we studied—calculating its own paid prices and inserting them into the EHR so that clinicians see the prices of services while placing orders—are capabilities that other ACOs have or are acquiring.

Three major factors likely explain our non-significant findings. First and foremost, Atrius clinicians work for an organization that has been involved in risk-bearing contracts for decades, and these clinicians have indicated that they see themselves as stewards of health care costs.18 Second, even though fully licensed active clinicians do not know specific prices, they may recognize the relative cost of services.33 Third, the clinicians in this intervention received paid-price information, not charges, so prices may not have seemed as high as clinicians expected; this notion is substantiated by the qualitative interviews that we conducted.27 Separately, we were not able to systematically collect information on the degree to which paid prices may have engendered different types of clinical interactions between clinicians and patients before orders were placed.

There are limitations to our study. It was conducted at a single ACO, so findings may not generalize to other health care organizations or clinicians; displaying price information in settings where clinicians have not been acculturated to value-conscious care could still affect ordering rates. Our ability to identify some orders as inappropriate or appropriate, while novel, is still rudimentary; findings may differ if additional orders could be so classified. Similarly, although we had information on when clinicians designated orders to be completed internally, this designation was not a required part of ordering, so findings could differ with more granular ordering details. Control group contamination may be a concern, but our qualitative interviews confirmed that clinicians did not confer with one another about price information.27, 34 This lack of contamination is not surprising: even in prior hospital-based price transparency interventions, where trainees work in teams, cross-cover, and constantly sign out to one another, contamination was not found to be a significant factor.6–15 Some may be concerned that the initial memo constituted a co-intervention, but numerous studies find that clinicians need repeated and ongoing exposure to information to change ordering behavior or practice patterns, so an intervention effect from the one-time distribution of a memo is unlikely; our qualitative interviews also confirmed that clinicians did not recall the contents of the memo.16, 35 Lastly, we did not study how clinicians may respond to patient out-of-pocket spending, which may be an alternate price transparency strategy to consider.36

Conclusions

Clinicians are increasingly expected to act as good stewards of health care resources.37–40 To assist clinicians in value-conscious ordering, organizations may be considering price displays in their EHRs. However, providing clinicians with price information does not necessarily lower test-ordering rates. Further study is needed to understand the contextual, motivational, and behavioral factors that explain this result. Those with a particular interest in removing waste from the health care system may want to consider strategies beyond physician-targeted price transparency.27 Price transparency’s other potential benefits, such as improving patient and provider shared decision-making, are an important direction for future research.