Introduction

Blood transfusions have long been a common component of the therapy of critically ill patients, yet it has not always been clear when a particular patient will benefit from a transfusion [1]. Until recently, the 'optimal' hemoglobin concentration in critically ill patients was empirically set at 10 g/dl, and most patients in critical care units received transfusions during their stay in the unit [2,3]. There was no evidence from clinical trials to support this practice, but some studies had demonstrated a pathologic dependence of oxygen consumption (VO2) on oxygen delivery (DO2) in conditions such as sepsis and acute respiratory distress syndrome [4]. These observations spawned the hope that increasing DO2 might improve tissue oxygenation and ultimately decrease mortality. Most of the clinical trials that attempted to increase DO2 did so using inotropes or vasoactive drugs, and demonstrated no benefit in clinical outcomes [5,6]. In both the experimental and control groups of these studies, hemoglobin was maintained at 10 g/dl (or hematocrit >0.30), reflecting the widely accepted transfusion threshold at the time. In the few studies that specifically examined the effect of blood transfusions on oxygen delivery and consumption, blood tended to increase oxygen delivery but not consumption [7,8]. To complicate matters, other investigators suggested that the measurements demonstrating pathologic dependence of oxygen consumption on delivery might in fact be artifactual, a result of mathematical coupling [4]. There were also concerns about possible immunosuppressive [9] and microcirculatory [10] effects of blood transfusions.
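Both the physiological rationale for transfusion and the mathematical coupling objection can be made concrete with the standard oxygen transport relationships (a brief sketch using conventional textbook constants; the illustrative values below are assumptions chosen for arithmetic convenience, not data from the cited studies):

\[
\mathrm{CaO_2} = (1.34 \times \mathrm{Hb} \times \mathrm{SaO_2}) + (0.003 \times \mathrm{PaO_2})
\]
\[
\mathrm{DO_2} = \mathrm{CO} \times \mathrm{CaO_2} \times 10, \qquad
\mathrm{VO_2} = \mathrm{CO} \times (\mathrm{CaO_2} - \mathrm{CvO_2}) \times 10
\]

with Hb in g/dl, cardiac output (CO) in l/min, and DO2 and VO2 in ml/min. Ignoring the small dissolved oxygen term, a patient with CO of 5 l/min, SaO2 of 0.98 and Hb of 10 g/dl has DO2 of roughly 5 × (1.34 × 10 × 0.98) × 10 ≈ 657 ml/min; transfusing to a Hb of 12 g/dl raises DO2 proportionally, to roughly 788 ml/min, which is the basis for the hope that transfusion improves tissue oxygenation. The coupling argument follows from the same equations: when VO2 is calculated from the Fick relationship rather than measured independently (for example by indirect calorimetry), DO2 and VO2 share the terms CO and CaO2, so error in those shared measurements can produce an apparent correlation between the two quantities even in the absence of a true physiological dependence.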

Naturally, this uncertainty in the literature about the respective benefits and harms of transfusion spilled over into clinical practice. As recently as the mid-1990s, papers documented strikingly heterogeneous transfusion practices among intensivists and suggested that a high proportion of critically ill patients were being transfused without any clearly identifiable predisposing factors [3,11]. This debate led to a landmark investigation, the transfusion requirements in critical care (TRICC) trial, which we believe has brought about a change in practice [12]. This study was a multi-center, randomized, controlled trial in which euvolemic patients in the intensive care unit were randomized to either a restrictive or a liberal transfusion policy. In the restrictive group, patients were transfused when the hemoglobin level was less than 7.0 g/dl, with a target range of 7.0 to 9.0 g/dl. In the liberal group, transfusions were given when the hemoglobin level was less than 10.0 g/dl, with a target range of 10.0 to 12.0 g/dl. There were a number of exclusion criteria, including ongoing bleeding, chronic anemia, and cardiac surgery. The study enrolled over 800 patients and demonstrated no difference in 30-day mortality between the two groups. In-hospital mortality was significantly lower in the restrictive transfusion group, and subgroup analyses in patients <55 years of age or in those with acute physiology and chronic health evaluation (APACHE) II scores ≤20 favored the restrictive transfusion strategy. The average number of units of blood transfused was 54% lower in the restrictive group than in the liberal group. The implications of this study were that the classic transfusion threshold of 10 g/dl [13] was unnecessarily high for many patients in the critical care unit, and that excessive transfusion might be harmful.

Although the results of this trial cannot be generalized to patients with acute coronary syndromes [14] or to those meeting its exclusion criteria, its practical effect has been to lower the transfusion threshold to 7 g/dl for many patients. For patients with a hemoglobin level above 7 g/dl, the trial has put the onus on clinicians to justify blood transfusion.

Notwithstanding the momentous and influential nature of this study, it is likely that this shift in attitudes towards blood transfusion had its roots earlier and elsewhere. No clinician in practice in the last 20 years could have missed the hesitation and frank apprehension in the public consciousness engendered by the widely publicized infectious hazards of the transfusion of blood products. This underlying trepidation, spanning continents and cultures [15,16,17], spurred the careful examination of blood transfusion practices that would culminate in the TRICC trial. Even before the TRICC trial, physicians were becoming more reluctant to transfuse blood; the critical care literature from the mid-1990s demonstrates striking reductions in transfusion use in patients with burns [18] or trauma [19]. Indeed, if there was any key moment that changed clinical practice, one could argue that it was a simple two-page report that appeared almost 20 years ago in Morbidity and Mortality Weekly Report. Entitled 'Possible transfusion-associated Acquired Immune Deficiency Syndrome (AIDS) – California', the case report [20] was a harbinger of the transfusion-associated AIDS epidemic. Two decades later, even after significant improvements in transfusion medicine that have made transfusion safer than ever, patients and their physicians will never view transfusion of blood and blood products in quite the same way.