The United States consistently outspends other industrialized countries on health care, yet achieves worse health outcomes.1 Health care costs in the United States were 17.9 % of the gross domestic product (GDP) in 2012 according to the World Bank, the highest share of GDP spent on health care in the world.2 At the same time, US life expectancy lags behind that of other developed nations.1 One contributing factor to rising health care costs is that physicians rarely know the charges for the services, tests, and procedures they order or perform.3,4 The disconnect between rising health care costs and physicians’ lack of knowledge about the financial impact of their management decisions suggests an obvious intervention: show physicians a currency amount for the test or medication in question at the moment they decide to order or prescribe it. Yet this has hardly become commonplace practice in the United States.5–8

A number of studies have, however, tested the hypothesis that price information reduces ordering and costs, but there has been no synthesis of these studies. Such a synthesis could be useful to policymakers, patient advocates, insurance companies, hospitals, and medical groups, all of whom are trying to find ways to reduce overuse and control costs.

The purpose of this systematic review was to determine the type of charge display studies that have been published, the quality of these studies, and their findings, and to synthesize this information in the form of a literature review. We set out to identify studies in which the intervention provided medical practitioners with a currency amount reflecting the charge of what they were ordering in real time, and then analyzed the differences in ordering behavior. We chose to focus on interventions specifically examining real-time charge display and its effect on physician decisions, rather than the broader topic of performance feedback, which was thoroughly explored in a recent Cochrane Review by Ivers et al.,9 as well as in a more general literature review by Axt-Adam et al. in the early 1990s.10

A comment on terminology is warranted at this juncture. The terms “price,” “cost,” “charge,” and “fee” are often used interchangeably in this literature, even though there are nuanced differences among them. All interventions we examined included a currency value that physicians could incorporate into their management decisions. The source of this currency value was not consistently disclosed, as we will discuss later. In every health system, there are many layers to how costs are generated and how services are paid for, thereby generating prices, fees, charges, and so on. This systematic review did not attempt to reconcile these differences, but rather to assess the interventions as described above. Throughout this review, we refer to displayed currency amounts using the same term as the authors of each paper used in their description. For our discussion, we chose the term “charge display” to describe the concept as it pertains to physician decision-making.


Design, Data Sources, and Search Criteria

We performed a systematic review of English-language articles published between 1982 and October 2013 using MEDLINE, Web of Knowledge, ABI-Inform, and Academic Search Premier, the details of which are outlined in Figure 1. The search strategy was developed by two of the authors (C.G. and H.A.B.E.). Search terms included medical descriptors, financial terms, behavior descriptors, and medical action as detailed in Table 1. We also manually searched reference lists in relevant articles.

Figure 1.

Review process.

Table 1. Search Terms

Study Selection

We included articles that studied the effect of charge display interventions (including educational interventions) on the use of services, cost of care, or changes in physician decisions. The intervention had to provide charge data in “real time”—meaning that a currency value was displayed to the provider at the time of ordering. We included studies that had both a concurrent comparison group and those that used a pre-intervention vs. post-intervention design with no concurrent comparison group. We only included studies that provided quantitative results. We did not include studies where the outcome was change in attitudes, but did include studies where the outcome was change in case-based decisions. One reviewer (C.G.) assessed titles for relevance. Two reviewers (C.G. and S.R.R.) assessed selected abstracts for relevance and full articles for inclusion. When the reviewers disagreed, an additional reviewer (T.F.B.) resolved the discrepancy.

Data Extraction

Two authors (C.G. and G.H.) extracted the following data from selected articles: study design, setting, type of intervention, type of participants, number of participants, bias considerations, type of outcome measures, and results. These data are organized in Table 2.

Table 2. Summary of Evidence

Data Synthesis and Analysis

We grouped studies into two categories: 1) laboratory and radiology test ordering, and 2) medication choices. For each study, we focused on three types of outcomes: 1) use of specific medical services or treatments, 2) cost of care, and 3) physician decisions. We were unable to perform a meta-analysis because the studies were too heterogeneous.


Of the 4,513 articles identified through electronic search (search terms are outlined in Table 1), 71 articles were selected after title review, and from those articles, eight articles were selected by two reviewers after full article review. We identified nine more articles through reference review, for a total of 17 articles (Fig. 1).

Twelve studies were conducted in a clinical environment11–22 while five were survey or simulation studies (i.e., studies that asked physicians how they might behave in a clinical setting).23–27

Of the seventeen studies, seven were randomized controlled trials,11,12,14,19,23,24,26 eight were pre-intervention vs. post-intervention studies,15,16,18,20–22,25,27 and two had concurrent control and intervention groups but were not randomized.13,17 Eleven studies examined physician ordering of laboratory or radiology testing,11–19,23,24 while six looked at medication choice.20–22,25–27 The details of the study design, study size, bias considerations, and follow-up period for the included studies are summarized in Table 2.
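As a quick sanity check, the counts reported above are internally consistent; a minimal script illustrates this (the numbers are taken directly from the text; the dictionary labels are our own shorthand, not terms from the included studies):

```python
# Screening funnel: 4,513 identified -> 71 after title review
# -> 8 after full-article review, plus 9 from reference lists.
screening = {
    "identified": 4513,
    "after_title_review": 71,
    "after_full_review": 8,
    "from_reference_lists": 9,
}
total = screening["after_full_review"] + screening["from_reference_lists"]

# Each grouping of the 17 included studies should sum to the same total.
by_design = {"randomized_controlled": 7, "pre_vs_post": 8, "non_randomized_controlled": 2}
by_setting = {"clinical": 12, "survey_or_simulation": 5}
by_topic = {"test_ordering": 11, "medication_choice": 6}

assert total == 17
assert sum(by_design.values()) == total
assert sum(by_setting.values()) == total
assert sum(by_topic.values()) == total
print(total)  # 17
```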

Interventions in a Clinical Setting

Effects on Radiology and Laboratory Test Ordering

There were a total of nine papers in this category that looked at test ordering: four randomized controlled trials,11,12,14,19 two non-randomized controlled trials,13,17 and three pre-intervention vs. post-intervention studies.15,16,18 Four interventions were conducted on inpatient wards,11–14 two in emergency departments,15,17 two in intensive care units,16,18 and one in an internal medicine outpatient clinic.19 Six were conducted in the United States11,12,14–16,19; studies were also conducted in South Africa,13 Sweden,17 and France.18 Two studies came from the pediatric literature.15,16

The clinical interventions themselves included four electronic medical record (EMR)-based11,12,14,19 and five paper-based interventions.13,15–18 The EMR interventions were very similar: the window for a patient’s orders included the charge amounts.12,14,19 Bates et al. added a “cash register” component that totaled the charges.11 Among the paper-based methods, Hampers et al. and Seguin et al. placed charges next to the items ordered on paper order forms in a pediatric ED15 and in a French intensive care unit,18 respectively. Sachdeva et al. posted itemized charges from the prior day’s tests every morning at the location where orders were subsequently placed.16 Ellemdin et al. gave physicians a pocket-sized brochure with laboratory costs; physicians then had to write that amount on the order requisition.13 Schilling distributed price lists via email to physicians and then had the lists displayed at the physicians’ workstations.17

Only four studies reported the sources of displayed charges. Two studies stated that the currency amounts reflected what the clinic or hospital charged to the insurer or to the patient if the patient did not have insurance,11,19 and two other studies used the Medicare allowable fee for the test.12,14

Three studies were designed with a quality metric in place. Hampers et al.’s design included follow-up phone calls to determine whether the patient had been medically re-evaluated and to assess satisfaction with care; differences between the control and intervention groups on both measures were not statistically significant.15 Sachdeva et al. collected data on occurrences of pediatric ICU-related complications during the control and intervention periods to measure quality of care; length of stay in the ICU and mortality did not differ significantly between the two groups.16 Tierney et al. reviewed patients’ computer records for 26 weeks following the intervention period to compare rates of hospitalization, emergency room visits, and outpatient visits, and found no significant differences.19 The other six studies did not report a quality metric.

Of the nine clinically based interventions that examined test ordering, seven had statistically significant reductions in cost and/or the number of tests ordered. These results are fully detailed in Table 2. Feldman et al. reported decreases in the number of tests ordered in both the intervention and control arms compared to pre-intervention rates; the larger decrease in the intervention arm was statistically significant and resulted in a 10 % decrease in fees.14 Tierney et al. found that 14.9 % fewer tests were ordered and that testing charges were 12.9 % lower per visit during the intervention period.19 Ellemdin et al. found a 27 % reduction in mean cost per admitted patient.13 Hampers et al. noted a 27 % decrease in charges during the intervention period compared with the control period.15

Effects on Medication Choice

Interestingly, all three clinically based interventions looking at medication choice came from the anesthesiology literature.20–22 All three studies were pre-intervention vs. post-intervention designs. Two interventions involved supermarket-style price stickers on medications20,21; one intervention used lists of drug costs.22 One study focused on muscle relaxants alone,21 whereas two included a broader spectrum of medications used in the operating room setting.20,22 Two interventions included an educational component.20,21

Two of the three studies found a significant reduction in total medication expenditures; one study did not. Lin et al. reported a shift in muscle relaxant choice towards the less costly option that resulted in a total expenditure decrease of 12.5 %.21 McNitt et al. reported an average savings of $32/case after the intervention.22 Both Lin et al. and McNitt et al. reported that PACU and SICU admissions, used as proxies for quality of care, did not increase as a result of medication choice.21,22

Surveys and Simulation Studies

Of the five studies that looked at physician decisions in surveys or simulated settings, two looked at test ordering and three looked at medication ordering.

Effects on Radiology and Laboratory Test Ordering

Cummings et al. and Rudy et al. presented resident physicians with clinical scenarios and randomized surveyees to receive charge information in the portion of the survey that assessed the work-up for each clinical scenario. Both studies noted a decrease in ordering when charge information was presented. Cummings et al. found that the cost of tests ordered for each hypothetical patient was 31.1 % lower when price information was provided.23 The study design specified a minimum work-up for each scenario to preserve quality of care, and both the intervention and control groups met that standard.23 Rudy et al. found that residents with access to charge data spent less on tests ($1,297 versus $2,205), but also had lower “appropriateness” scores, meaning that the quality of care was affected by the modified test ordering.24

Effects on Medication Choice

The three survey studies that examined medication choice surveyed non-US physicians about management of urinary tract infections,25 chronic obstructive pulmonary disease,26 and hypertension.27 Hux et al. surveyed primary care physicians in Canada, randomizing participants to receive information on drug prices and/or patient insurance coverage, or no information; each surveyee was provided with a clinical scenario and asked about his/her choice of medication and management.26 Hart et al. and Salman et al. surveyed physicians in Israel with scenarios involving urinary tract infections and hypertensive patients, respectively.25,27 In both studies, participants received an initial survey that did not include price information; two months later, they received the same survey with price information, and the differences in medication choice were compared.

Hux et al. found that the percentage of physicians prescribing the expensive antibiotic option dropped from 38 to 18 % when insurance coverage and prices were disclosed.26 Hart et al. and Salman et al. both reported statistically significant differences in medication choices after prices were disclosed. In Hart et al., surveyed physicians prescribed the less expensive antibiotic 56 % of the time initially, compared to 83 % when price was disclosed.25 In Salman et al., cost disclosure prompted 57 % of family practice physicians and 87 % of hospitalists surveyed to choose the less expensive medication, both statistically significant differences.27


In this systematic review of charge transparency interventions, we found that having real-time access to charges changed ordering and prescribing behavior in the majority of studies. Of the clinically based interventions examining laboratory and radiology ordering, seven of the nine studies reported statistically significant cost reductions when charges were displayed. Interestingly, of the six studies that reported differences in the number of tests ordered, only three reported a statistically significant decrease. This may indicate that awareness of cost leads a practitioner to order less expensive tests rather than fewer tests.

The clinically based interventions that focused on medication choice likewise trended towards a decrease in cost when currency amounts were displayed on medications; two of the three reported statistically significant reductions. All three survey studies also showed a trend towards choosing less expensive medication options when price was displayed, though these were hypothetical situations.

It is worth noting that both clinically based studies with non-significant findings examined ordering patterns for radiology tests. Bates et al. reported a decrease in laboratory ordering, though not a statistically significant one, and no difference in radiology ordering when price was displayed.11 Durand et al. focused only on radiology ordering, randomizing the various modalities that could be ordered, and found no difference.12

There was considerable heterogeneity in the clinical setting, patient population (pediatric vs. adult), health care system (international vs. US), study design, and outcomes measured. The majority of interventions took place in the inpatient setting, with two studies based in emergency medicine. Tierney et al. stands alone as the one outpatient clinically based study included in this analysis.19 All of these studies were conducted at a single site. Even among the clinically based randomized controlled interventions, there were differences in design: Feldman et al. and Durand et al. randomized the tests themselves, whereas Bates et al. and Tierney et al. randomized the patient encounters.


To our knowledge, no other literature review has specifically looked at real-time charge display and its impact on physician practice patterns. While this synthesis of the literature points toward potential cost savings when prices are displayed, it is unclear whether universal availability of a currency amount would have enough impact to significantly bend the cost curve on a system-wide or national level. Indeed, as several recent articles have pointed out,4,6 finding exact charges for tests and medications can be very challenging; the resources necessary to find and integrate this information in real time may outweigh the savings gained.

Another unanswered question is whether changes in practice from charge display affect quality of care. While some studies did incorporate a quality metric, the majority did not. A primary concern of physicians modifying practice patterns is that the quality of patient care will be compromised. Clearly, this is an area for further study.

Bias is another consideration in synthesizing these data. Because the intervention in question is one of transparency, blinding subjects and assessors to the intervention is not possible. Several papers disclose that subjects were not aware that they were being studied; others specifically included an educational component as part of the intervention. The danger of performance bias and detection bias is inherent to these interventions. Reporting bias is another consideration, though we are reassured that studies with both significant and non-significant results have been published in the literature. Another limitation to acknowledge is that our review may not have captured all articles on this subject. Indeed, only articles from the medical literature were ultimately included. We used search terms and search engines chosen with the hope of finding studies from the policy, economics, and lay literature, but no additional interventions were identified.

Were charge data to be more broadly adopted, a significant issue to consider is what charge the ordering practitioner should use. There is often great discrepancy in the currency amount among what a hospital or clinic charges, what an insurance company reimburses, what a patient pays, and the cost to the larger medical system. These studies do not address which of these costs a clinician should consider when making ordering decisions. Indeed, the source of the charge presented was not consistently reported in these studies.

Finally, the decreases in costs reported in these studies focus primarily on the cost savings to the hospital or clinical provider. What remains to be seen is whether charge transparency decreases medical expenses for the patient. Potentially, the doctor–patient relationship could benefit from increased transparency about medical costs, though this has yet to be established. There are growing calls for physicians to factor financial consequences into their medical decisions.28–30 Charge data offer additional information with which physicians can make the most educated decisions for a patient’s care.