Introduction

Liver transplantation has been used as a therapy for end-stage liver disease since its approval by the National Institutes of Health (USA) in 1983; it aims to prolong survival and improve quality of life [1, 2]. Its continued use is supported by the good results of the procedure: the 1-year survival rate increased from about 70 % in the early 1980s to 90 % in the late 1990s, and the 5-year survival rate currently exceeds 60 % [3].

With the growing demand for liver transplantation worldwide, and the consequent increase in waiting-list mortality, strategies have been adopted in an attempt to increase the number of surgeries performed. One of them is the use of expanded criteria that accept donors known to be nonoptimal, marginal, or borderline. Other such strategies include split-liver and living-donor transplantations [4–8].

It is known that the survival of transplantation patients is related to donor, recipient, and surgical variables [9–15].

The impact of nonconventional donors on transplantation survival has been investigated, and some studies point to a deterioration in graft and recipient survival. Other variables, such as cold ischemia time and blood loss, have also been shown to be important in posttransplantation survival [16–18].

Several predictive systems are currently employed in medical practice, particularly in the area of surgery. These models are used to assess disease severity and estimate patient survival, and are useful to select individual therapeutic strategies [19].

The balance of risk (BAR) score is a recently developed prediction system [20, 21]. Analyses of the liver transplantation population in the USA based on the United Network for Organ Sharing (UNOS) database, and by the University of Zurich, Switzerland, have confirmed the superiority of BAR over other predictive systems such as Model for End-Stage Liver Disease (MELD), Donor Age MELD (D-MELD), Donor Risk Index (DRI), and Survival Outcomes Following Liver Transplantation (SOFT) [11, 22–24].

We are not aware of any transplantation centers in Brazil using the BAR system. BAR was created and validated in American and European populations, and no studies on Brazilian populations have been found in the literature. If it is to be incorporated into the procedures of Brazilian transplantation centers, it is necessary to verify its potential and check for possible limitations.

The aim of the present study is to examine the performance of the BAR score in predicting the survival time of liver transplantation patients.

Methods

This is a single-center retrospective observational study of 537 liver transplantations performed on 512 patients between March 1997 and December 2012 at the Unit of Liver Transplantation, State University of Campinas (UNICAMP), SP, Brazil.

Study population

Included patients were adults (over 18 years), with no distinction of race or sex, who underwent liver transplantation with preservation of the recipient vena cava (“piggy-back” technique), regardless of the type of superior hepatic vein reconstruction.

Of the 512 patients, 110 were excluded: children (n = 33); use of the traditional technique, dextrocardia, or reduced/split livers (n = 69); and incomplete medical records (n = 8).

All 402 studied patients received hepatic grafts from deceased donors. Donor variables supplied by the procurement organization (Organização de Procura de Órgãos, OPO-UNICAMP) [25] were sex, age (years), race, weight (kg), height (cm), blood type, history of alcoholism (yes/no), infection at the admission preceding brain death (yes/no), arterial hypotension at the admission preceding brain death (yes/no), cause of brain death, predonation cardiac arrest, location, and distance to the transplantation center (km).

Recipient presurgical variables were sex, age (years), weight (kg), height (cm), body mass index (BMI, kg/m²), underlying disease, surgery date, international normalized ratio (INR), total serum bilirubin (mg/dL), serum creatinine (mg/dL), MELD score immediately before transplantation, serum sodium (mEq/L), blood glucose (mg/dL), serum albumin (g/dL), previous liver transplantation, and artificial life support.

Recipient intrasurgical variables were warm and cold ischemia times (min) and transfusion of packed red blood cells (RBC, units), fresh frozen plasma, platelets, albumin, cryoprecipitate, and salvaged blood (mL).

The BAR score was computed for each patient on 9 March 2013 according to the formula available at http://www.assessurgery.com/bar-score/bar-score-calculator/. The variables that constitute the score are donor age (years), cold ischemia time (h), retransplantation (yes/no), days in the intensive care unit (ICU) with artificial life support (mechanical ventilation), recipient age (years), and MELD score (without exception points) [20]. The BAR score ranges from 0 to 27 points; the factor with the greatest weight is the recipient MELD score (0–14 points), followed by retransplantation (0 or 4 points), recipient age (0–3 points), ICU with life support (0 or 3 points), cold ischemia time (0–2 points), and donor age (0–1 point). The relationship between the BAR score and posttransplantation survival time is also available at http://www.assessurgery.com/bar-score/bar-score-calculator.
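As an illustration only, the sketch below shows how the six components combine additively into the total BAR score. The exact mapping from raw values (e.g., donor age in years) to points is defined by Dutkowski et al. [20] and the online calculator and is not reproduced here; the function takes per-component points as input and enforces only the ranges stated above.

```python
# Minimal sketch (not the authors' code) of how the six BAR components
# combine additively into the total score (0-27 points). The per-component
# points are supplied by the caller; only the ranges stated in the text
# are enforced.
POINT_RANGES = {
    "meld": (0, 14),               # recipient MELD without exception points
    "retransplantation": (0, 4),
    "recipient_age": (0, 3),
    "icu_life_support": (0, 3),
    "cold_ischemia_time": (0, 2),
    "donor_age": (0, 1),
}

def bar_score(points):
    """Sum the component points into the total BAR score."""
    total = 0
    for component, (low, high) in POINT_RANGES.items():
        value = points[component]
        if not low <= value <= high:
            raise ValueError(f"{component} points must be between {low} and {high}")
        total += value
    return total

# Hypothetical example: maximal MELD points, no retransplantation, 2 points for
# recipient age, ICU with life support, 1 point for CIT, older donor -> 21.
print(bar_score({"meld": 14, "retransplantation": 0, "recipient_age": 2,
                 "icu_life_support": 3, "cold_ischemia_time": 1, "donor_age": 1}))
```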

Statistical analysis

To investigate which BAR cutoff point may be most appropriate for transplantation decisions in our study population, a receiver operating characteristic (ROC) curve was constructed, the area under the curve (AUROC) was calculated, and the cutoff with the highest Youden index was identified (Youden index = sensitivity + specificity − 1). The Hosmer–Lemeshow χ² goodness-of-fit test was applied to assess model calibration [26, 27].
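For illustration only, the following sketch (not the authors' analysis code, which was run in SPSS) reproduces this cutoff selection and calibration check in Python; the array names `bar_scores`, `death_3mo`, and `predicted_prob` are assumptions.

```python
# Illustrative sketch of cutoff selection by the Youden index and a
# decile-based Hosmer-Lemeshow calibration test. Assumes numpy arrays:
# `bar_scores`, `death_3mo` (1 = death within 3 months), and
# `predicted_prob` (model-predicted probability of 3-month death).
import numpy as np
from scipy.stats import chi2
from sklearn.metrics import roc_curve, roc_auc_score

def youden_cutoff(scores, outcome):
    """Return (cutoff, sensitivity, specificity, AUROC) at the highest Youden index."""
    fpr, tpr, thresholds = roc_curve(outcome, scores)
    j = tpr - fpr                      # Youden index = sensitivity + specificity - 1
    best = int(np.argmax(j))
    return thresholds[best], tpr[best], 1 - fpr[best], roc_auc_score(outcome, scores)

def hosmer_lemeshow(predicted_prob, outcome, groups=10):
    """Decile-based Hosmer-Lemeshow chi-square statistic and p value."""
    order = np.argsort(predicted_prob)
    statistic = 0.0
    for idx in np.array_split(order, groups):
        observed = outcome[idx].sum()
        expected = predicted_prob[idx].sum()
        n = len(idx)
        statistic += (observed - expected) ** 2 / (expected * (1 - expected / n) + 1e-12)
    return statistic, 1 - chi2.cdf(statistic, groups - 2)
```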

Survival analysis was performed with the Kaplan–Meier method, and survival distributions were compared with the log-rank test. For comparisons between groups, the Mann–Whitney test was used.
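As a sketch of this comparison, assuming the `lifelines` Python package and a data frame `df` with hypothetical columns `time` (months), `event` (1 = death), and `bar` (the analysis itself was not done this way), the Kaplan–Meier curves for the two BAR groups and the log-rank test could be obtained as follows.

```python
# Sketch of the Kaplan-Meier comparison between BAR groups (cutoff of 11),
# assuming the `lifelines` package and a pandas DataFrame `df` with columns
# `time` (months), `event` (1 = death), and `bar` (BAR score).
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

high = df[df["bar"] >= 11]
low = df[df["bar"] < 11]

kmf = KaplanMeierFitter()
kmf.fit(high["time"], event_observed=high["event"], label="BAR >= 11")
ax = kmf.plot_survival_function()
kmf.fit(low["time"], event_observed=low["event"], label="BAR < 11")
kmf.plot_survival_function(ax=ax)

result = logrank_test(high["time"], low["time"],
                      event_observed_A=high["event"],
                      event_observed_B=low["event"])
print(result.p_value)
```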

The study of associated factors was performed by multiple logistic regression and the χ² test. The significance level adopted in all analyses was 5 %. Statistical tests were run in IBM® SPSS® Statistics version 21.0 (Chicago, USA, 2012).
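A minimal sketch of such a regression, assuming the `statsmodels` package and numpy arrays `bar_scores`, `rbc_units`, and `death_3mo` (the actual analysis was performed in SPSS), dichotomizes the predictors as in Table 3 and reports odds ratios with 95 % confidence intervals.

```python
# Sketch of the multiple logistic regression for 3-month mortality, assuming
# statsmodels and numpy arrays `bar_scores`, `rbc_units`, and `death_3mo`.
import numpy as np
import statsmodels.api as sm

predictors = np.column_stack([
    (bar_scores >= 11).astype(int),   # BAR score >= 11 points
    (rbc_units > 6).astype(int),      # intrasurgical packed RBC > 6 units
])
X = sm.add_constant(predictors)
model = sm.Logit(death_3mo, X).fit()

print(model.summary())
print(np.exp(model.params))      # odds ratios
print(np.exp(model.conf_int()))  # 95 % confidence intervals for the odds ratios
```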

Results

Of the 402 patients under study, 296 (73.6 %) were male and 106 (26.4 %) were female, with a mean age of 48.82 ± 11.43 years. Hepatitis C virus (HCV) infection was the most frequent indication for transplantation (n = 210, 52.2 %). The mean MELD score without exception points was 20.16 ± 7.41, with a median of 19 (range 7–50). Twenty-five patients (6.2 %) required retransplantation (Table 1).

Table 1 Baseline characteristics of the patients

The mean donor age was 35.63 ± 13.92 years; 234 (58.2 %) donors were male and 168 (41.8 %) were female. The most common cause of brain death was trauma (46 %), followed by cerebrovascular accident (43 %) and other causes (11 %). The mean cold ischemia time was 10.03 ± 3.2 h (Table 1).

The BAR score cutoff point calculated in our study was 11. The sensitivity, specificity, Youden index, and area under the ROC curve at the best cutoff point are presented in Fig. 1. ROC analysis revealed an AUROC of 0.65 (95 % CI 0.59–0.71). An AUROC >0.7 is generally taken to indicate a clinically useful prognostic model, and values between 0.8 and 0.9 indicate excellent diagnostic accuracy; the BAR score therefore failed to reach an AUROC >0.7 for prediction of 3-month survival after liver transplantation. The Hosmer–Lemeshow goodness-of-fit test yielded a p value of 0.063 (Table 2); p values >0.05 suggest good calibration [26, 27].

Table 2 Hosmer–Lemeshow test result to assess model calibration for prediction of 3-month mortality

Kaplan–Meier analysis estimated 3-month survival of 46 % for patients with a BAR score at or above the cutoff and 77 % for those below it; the corresponding 12-month survival estimates were 44 % and 69 % (Fig. 2).

Fig. 1 Receiver operating characteristic (ROC) curve

Fig. 2 Survival curve (Kaplan–Meier) of patients with BAR score ≥11 and <11 (log-rank test, p = 0.01)

The variables used in the multiple regression analysis are presented in Table 3. The following factors were independently associated with survival of less than 3 months: BAR score greater than or equal to 11 points (OR 3.08; 95 % CI 1.75–5.42; p = 0.001) and intrasurgical use of more than 6 units of packed RBC (OR 4.49; 95 % CI 2.73–7.39; p = 0.001). For survival of less than 12 months, the significant factors were BAR score greater than or equal to 11 points (OR 2.94; 95 % CI 1.67–5.16; p = 0.001) and packed RBC >6 units (OR 2.99; 95 % CI 1.92–4.64; p = 0.001).

Table 3 Multiple logistic regression analysis: study of factors associated with 3- and 12-month survival

Discussion

To our knowledge, the present study is the first to evaluate the effectiveness of the BAR predictor score in a Brazilian population, and to study it in conjunction with other predictive factors known from the literature.

Several systems have been created in an attempt to reduce waiting-list mortality and predict posttransplantation survival. However, only two of them actually changed the decision process of liver transplantations: the MELD score and the Expanded Criteria for Donors (ECD) [28].

The introduction of the MELD score has improved organ allocation for liver transplantation. However, although the MELD model is an accurate predictor of mortality in patients on waiting lists for liver transplantation, other studies have shown that it does not achieve the same accuracy in predicting survival after liver transplantation [29, 30].

In the study by Dutkowski et al. [20], the BAR score was reported as accurate in predicting 3-month survival, but some studies on the MELD predictor system have observed that a short-term survival analysis may not reflect the long-term reality [31, 32]. To find out whether this also applies to the BAR score, we evaluated both 3-month and 12-month survival; at both time points the BAR score remained a significant predictor of survival in liver transplantation patients.

The ECD model has been increasingly used by transplantation centers because it increases the number of donors and reduces waiting-list time [33]. However, some studies have shown that survival declines rapidly in the first 12 months after transplantation and tends to stabilize thereafter [34]; reported values are 83 % at 12 months, 72 % at the fifth year, and 58 % at 10 years. In Brazil, 12-month survival has been 71 % [35], and this lower figure may be due to the large number of expanded-criteria donors [36]. In the present study, the DRI calculated according to Feng et al. [23] had a mean value of 1.68 ± 0.36, which indicates suboptimal donor organ quality [23, 37]. Surgeons are often faced with the difficult question of whether or not to accept liver offers from high-risk donors for high-risk recipients, and there is a lack of studies offering guidance for the decision in such scenarios.

In the study population, the BAR score proved useful for finding good donor–recipient matches in a quick, easy, and reproducible way, and thus for helping to increase recipient survival.

In other studies, the BAR score displayed good ability to estimate patient survival following liver transplantation in American (UNOS/OPTN-USA) [38] and European (University Hospital Zurich) [20, 21] populations, but it showed suboptimal ability in the present study population. This research also identified an additional survival prediction factor (massive transfusion), confirming previous reports in the literature [14, 39].

A point that remains to be discussed is how to determine the BAR cutoff point for the decision of whether or not to transplant. In the study by Dutkowski et al. [20], a BAR threshold of 18 was postulated, based on the observation that survival values begin to deteriorate beyond this point. However, our observations, based on the ROC curve and the highest Youden index, lead us to believe that a BAR cutoff of 11 may be more appropriate for transplantation decisions in our study population. The cutoff determined for the investigated cohort is characterized by low sensitivity (39 %) and high specificity (87 %). As with any diagnostic test, lowering the cutoff increases sensitivity but reduces specificity; in other words, with a lower cutoff more of the index cases are detected, but there are also more false positives.

A systematic review and validation of prognostic models in liver transplantation performed by Jacob et al. [40] evaluated several models against a list of quality criteria, and on validation all models showed poor discriminatory ability; that is, no model achieved an AUROC larger than 0.7. Widely accepted international standards require an area under the receiver operating characteristic curve (AUROC) >0.700 for clinical tests, diagnostics, and prognostic models. This threshold was not reached by the BAR score for prediction of 3-month mortality in the Brazilian cohort investigated in this study (AUROC 0.65). In the study by Dutkowski et al. [20], the AUROC was 0.70; the lower predictive ability for 3-month mortality in the Brazilian population may reflect the fact that the model was not developed in a Brazilian population. The suboptimal ability of the BAR score and of the models evaluated by Jacob et al. [40] shows that an international consensus on the design and validation of prognostic models in liver transplantation is still lacking.

It is known that prevention of excessive blood loss during liver transplantation is important, since its occurrence can lead to prolonged recovery and reduced survival [14, 39]. In the present study, only 10 % of patients did not receive a blood transfusion during surgery, and those who underwent polytransfusion had shorter survival times. It was also found that patients with a BAR score greater than or equal to 11 required larger amounts of blood products and vasopressors (χ² = 22.54; p = 0.001).

Feng et al. [23], studying 20,023 adult liver transplantations (UNOS/OPTN-USA) [38], used a Cox regression model to identify seven donor characteristics that independently predicted the risk of graft failure: age over 40 years, donation after cardiac death, split/partial grafts, African-American race, height, cerebrovascular accident, and “other” causes of brain death. In the present study, the most common donor cause of death was trauma (47 %), followed by cerebrovascular accident (44 %), although no statistically significant association with survival was found.

Donor location varied widely, with only 30 % of organs coming from within a radius of 100 km. Therefore, notification and active search should be encouraged to reduce organ travel distances and achieve shorter cold ischemia times (CITs). The relationship between CIT and survival time was analyzed, and no significant linear relationship was found (r² = 0.033).

A significant advantage of the BAR score is that it uses objective variables readily available at the time of organ offer, except for cold ischemia time, which can, however, be estimated.

The present study contributes to the verification and validation of the potential of the BAR system in a Brazilian population, with a view to its incorporation into the procedures of transplantation centers, and to the study of other predictive factors. It demonstrates suboptimal ability of the BAR score to predict survival after liver transplantation in the investigated cohort and stresses the importance of polytransfusion as a determinant factor of survival.