Role of steroids in kidney transplantation

Current therapeutic protocols for kidney transplant patients have markedly reduced the incidence of acute rejection; however, only modest improvement in long-term outcomes has been achieved. A major challenge in the application of current immunosuppressive agents is the narrow therapeutic window between efficacy and toxicity. In attempts to minimize toxicity, multiple studies have sought to minimize or eliminate specific therapeutic agents, most commonly steroids. Currently, controversy exists over the need for glucocorticoids in rejection prophylaxis, and there is no consensus on the appropriate dosing strategy. This chapter will review the pertinent pharmacokinetic, pharmacodynamic, and clinical parameters as they relate to glucocorticoid dosing and efficacy. Many relevant studies and commentaries have been offered over decades on the effects of steroid minimization or elimination on clinical outcomes, including the incidence of chronic rejection, renal function, and side effects. In this review, references are cited selectively from this larger body when they illustrate or summarize an important issue.

Reduction or elimination of immunosuppression without jeopardizing graft function has long been a goal of the transplant profession. The point of such dose reduction is to induce a state of immune “hyporesponsiveness,” “acceptance,” or “tolerance” that reduces the complications of immunosuppression without increasing the risk of rejection [1]. Strictly speaking, true immune “tolerance” would permit permanent withdrawal of immunosuppression. Any such permissive state would either have to be permanent, or practical tests would have to be available [2] to anticipate loss of such adaptation before irreversible immunologic or parenchymal change occurred. Immunologically, generation of memory B and T cells reactive to graft antigens is thought to be an irreversible step, although long-term control of the efferent responses of these systems, e.g., by regulatory T cells, could be a mechanism establishing a hyporesponsive or acceptance state. A more modest and more common goal of minimalist steroid regimens is not to reduce overall immunosuppression but simply to substitute one or more alternative agents for steroids, without increased risk of graft loss. Whatever the underlying immunologic theory or goal, steroids are usually the first agents to be withdrawn or markedly tapered in an effort to avoid steroid side effects. However, many recognize that current standard maintenance steroid doses may have markedly fewer side effects than previous higher-dose regimens [3]. While true immune accommodation allows minimization of immunosuppression, it does not follow that simply tapering immunosuppression will bring about these states. Rather, in the general transplant population, tapering immunosuppression will usually increase the risk of rejection or graft loss [4]. An insidious immune response may mimic an accommodative state (“pseudo-accommodation”), and, for unclear reasons, a true hyporesponsive state may unpredictably disappear at a later time [5, 6]. Also, chronic rejection can develop unpredictably, insidiously, and irreversibly, so that “cautious tapering of chronic immunosuppression” is, in a sense, an oxymoron given these limitations. Once rejection ensues and is clinically appreciated, steroid-free patients are usually placed back on steroids [7], with relative benefit [8], an event that complicates the interpretation of many clinical studies.

Glucocorticoids were one of the first classes of medications used to prevent rejection after solid organ transplantation. Two specific agents, methylprednisolone and prednisone, are used frequently as part of the immunosuppressive regimen. Corticosteroids have immunosuppressive, anti-inflammatory, and lympholytic effects [9]. Unbound steroid passively diffuses through the cell membrane into the cell and binds to cytosolic glucocorticoid receptors, releasing the active receptor; active receptor dimers then interact with glucocorticoid response elements in promoter sequences [9]. The net result is decreased cytokine production, decreased lymphocyte proliferation, and changes in cellular trafficking. Steroids may preferentially block so-called proinflammatory Th1 and perhaps Th17 cytokines [10]. They do not block the release of IL-10 [11], a cytokine important for T regulatory cell function that has been suggested to favor hyporesponsiveness to the allograft [12].

A common protocol in kidney transplantation is to administer fixed-dose intravenous methylprednisolone perioperatively, taper the dosing postoperatively, and transition to fixed-dose oral prednisone by postoperative day 3–5. Oral prednisone is typically tapered to physiological doses or a fixed dose of 5 mg for all recipients. In a study of patients with kidney disease (not transplanted), it was documented that the pharmacokinetics of intravenous methylprednisolone are linear with lower doses (1–5 mg/kg) and nonlinear when higher doses are employed (15 mg/kg) [13]. Despite this variability in exposure, lymphocyte suppression was not affected by dose, suggesting that dosing above 1 mg/kg exceeds the critical threshold for suppressing total lymphocyte counts [13]. In studies of kidney transplant recipients, the pharmacokinetics of intravenous methylprednisolone have been found to be linear [14]. Additionally, there was a correlation between exposure and lymphocyte suppression [14]. This pharmacokinetic–pharmacodynamic relationship resulted in a statistically significant increase in rejection episodes in patients who had higher clearances of methylprednisolone [14]. This suggests that therapeutic drug monitoring may be warranted for methylprednisolone in the early posttransplant period to identify patients with significant deviations in their pharmacokinetic parameters placing them at risk for rejection. In the chronic setting, methylprednisolone clearance is significantly slower, and less intrapatient variability has been documented [15]. The mean AUC normalized to a 1-mg dose of methylprednisolone was found to be 36.6 ng·h/mL in the acute setting and 62.1 ng·h/mL in the chronic setting [15].
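
These dose-normalized AUC values imply the slower chronic-phase clearance directly, since apparent clearance is simply dose divided by AUC. The short sketch below works through that arithmetic; it is purely illustrative, and the unit handling is our assumption rather than a calculation taken from the cited study.

```python
# Back-of-the-envelope apparent clearance from the dose-normalized AUC values
# quoted above. ng*h/mL is numerically equal to ug*h/L, so a 1-mg dose converts
# cleanly to ug for the division.

def apparent_clearance_L_per_h(dose_mg: float, auc_ng_h_per_mL: float) -> float:
    """CL = Dose / AUC; ug divided by ug*h/L gives L/h."""
    dose_ug = dose_mg * 1000.0
    return dose_ug / auc_ng_h_per_mL

acute_cl = apparent_clearance_L_per_h(1.0, 36.6)    # ~27 L/h early posttransplant
chronic_cl = apparent_clearance_L_per_h(1.0, 62.1)  # ~16 L/h in the chronic setting
print(f"acute ~{acute_cl:.0f} L/h, chronic ~{chronic_cl:.0f} L/h")
```

The roughly 40% lower apparent clearance in the chronic setting is consistent with the slower clearance and reduced intrapatient variability described above.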

A marker or panel of markers to predict and monitor immunologic risk is essential to responsible drug tapering [2]; however, any conclusions must be somewhat speculative because no markers have been developed and confirmed for reliable immune monitoring during steroid withdrawal. Progress in this field is discussed elsewhere in this volume. In kidney transplant recipients, a transcriptional biomarker panel reflecting reduced costimulatory signaling, immune quiescence, apoptosis, and memory T cell responses has been suggested to correlate with an operational state of hyporesponsiveness [16]. The presence of CD8+CD28− suppressor T cells was associated with lower risk of rejection during tapering of mycophenolate (MMF) and steroids in patients on calcineurin inhibitors (CNIs) [16]. Conversely, in another study, cytotoxic T lymphocyte effector molecules (granulysin, perforin, and granzyme B) were upregulated in adult and pediatric kidney recipients who had been steroid free during the first posttransplant year [17]. As these patients had been entirely stable, the authors interpreted this finding as paradoxical rather than as a predictor of eventual poor outcome. It has been suggested that early steroid withdrawal (e.g., in the first week) may be more immunologically beneficial [3, 18], both because apoptosis of reactive donor-specific T cells is increased [3] and because cytokine receptors may not be upregulated [3, 10]. An effective strategy for minimizing immunosuppression will require an accurate method to monitor immunologic risk.

Steroid dosing, minimization, and response

A discussion of the efficacy of steroids and their proper place in most transplant treatment protocols is hampered by (1) individual variability in metabolism, (2) the lack of clinically available steroid blood levels, (3) little emphasis on pharmacologic interactions, and (4) the tendency, even for centers prescribing steroids, to use very low steroid doses that may be subtherapeutic. For example, many protocols give 5 mg of prednisone to all chronic patients, without regard to differences in patient weight or body surface area (BSA), which can markedly affect exposure and efficacy. Meta-analyses, registry studies, and clinical trials comparing steroid to non-steroid regimens are usually discussed without reference to dose or metabolic parameters. A central tenet of this review is that the dose of steroids determines both side effects and efficacy. Similarly, the doses of other drugs that are used for maintenance immunosuppression in non-steroid protocols will affect the immunologic risks of steroid withdrawal and should be considered [19, 20].
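
As a rough illustration of the point about fixed dosing, the sketch below shows how the same 5-mg dose maps onto very different weight- and BSA-normalized doses across patient sizes. The two patients are hypothetical, and the DuBois BSA formula is used only as a standard illustrative estimate; nothing here comes from a cited study.

```python
# Illustrative only: how a fixed 5-mg prednisone dose scales with patient size.

def bsa_dubois(weight_kg: float, height_cm: float) -> float:
    """Body surface area (m^2), DuBois & DuBois: 0.007184 * W^0.425 * H^0.725."""
    return 0.007184 * (weight_kg ** 0.425) * (height_cm ** 0.725)

fixed_dose_mg = 5.0
patients = [("50 kg / 155 cm", 50, 155), ("120 kg / 185 cm", 120, 185)]

for label, wt, ht in patients:
    bsa = bsa_dubois(wt, ht)
    print(f"{label}: {fixed_dose_mg/wt:.3f} mg/kg, {fixed_dose_mg/bsa:.2f} mg/m^2")
# The 50-kg recipient receives ~0.100 mg/kg, the 120-kg recipient only ~0.042 mg/kg,
# i.e., more than a twofold difference in weight-normalized dose from the same tablet.
```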

Prednisone is the most widely used steroid and is metabolized in the liver, primarily to prednisolone, its major active metabolite. The shape of the post-dose time–concentration curve is similar to that of cyclosporine (CSA) and tacrolimus (TAC), with peak levels occurring 2–3 h after ingestion and no measurable levels by 24 h. The timing of the multiplicity of biologic effects during the dosing cycle, beyond relative lymphopenia and changes in lymphocyte surface markers at 2–3 h [21], has not been established. Steroid effect can be grossly assessed by total lymphocyte or eosinophil count [13, 21, 22] or by fasting a.m. pre-dose cortisol levels [23], but no measure of steroid exposure has been clinically validated, and virtually no center attempts one. In the usual kidney transplant patient, a Cushingoid appearance is a poor predictor of increased steroid exposure [24].
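
The curve shape described above can be reproduced qualitatively with a one-compartment model with first-order absorption and elimination (the Bateman function). The sketch below is only illustrative: the rate constants and volume of distribution are assumed values chosen to give a peak at roughly 2–3 h and near-unmeasurable levels by 24 h, not fitted pharmacokinetic parameters.

```python
# A minimal sketch of the post-dose concentration-time shape described above.
# ka, ke, and vd_L are illustrative assumptions, not measured values.
import math

def concentration(t_h, dose_mg=5.0, f=1.0, vd_L=35.0, ka=0.7, ke=0.2):
    """Plasma concentration (mg/L with these units) at t_h hours post-dose."""
    scale = (f * dose_mg / vd_L) * ka / (ka - ke)
    return scale * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

t_max = math.log(0.7 / 0.2) / (0.7 - 0.2)   # ~2.5 h with these assumed constants
for t in (1, t_max, 6, 12, 24):
    print(f"t = {t:>4.1f} h  C = {concentration(t):.3f}")
# By 24 h the concentration falls to roughly 2% of the peak value, consistent with
# the near-unmeasurable trough levels described above.
```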

In some studies in stable long-term kidney transplant recipients, gender and African-American race did not affect the pharmacokinetics of prednisolone [25]. However, an early small study suggested a near-significant 50% decrease in methylprednisolone clearance in black patients [26]. Prednisolone clearance appears to be decreased in kidney transplant patients relative to normal subjects [27]. Chronic renal failure is associated with an increased free fraction of prednisolone, likely due to hypoalbuminemia and competitive inhibition of protein binding by uremic toxins [25]. In obese patients, steroid dosing is complicated by the question of whether to dose based on ideal or total body weight. Cortisol production in obesity is increased and has been found to be a consequence of greater lean body mass. Adipose tissue has been identified as a specific site for distribution and clearance of prednisolone [28]. Increased clearance of prednisolone in obesity may be the result of secondary effects of obesity, including increased cardiac output, hepatic blood flow, and liver size [28]. However, the increased clearance may be accompanied by increased sensitivity to adrenal suppression [28]. Prednisolone dosing adjustment using total body weight is most appropriate [25]. Liver disease impairs conversion of prednisone to prednisolone, yet prednisone-dose-normalized prednisolone exposure is increased in liver transplant patients [27]. Such metabolic differences in liver and kidney recipients should be considered, as they are not accounted for by current dosing protocols.
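
To make the ideal- versus total-body-weight question concrete, the sketch below shows how large the difference can be in an obese recipient. The Devine ideal-body-weight formula is a standard estimate, but the 0.1 mg/kg dose level and the example patient are hypothetical, chosen only to show the size of the gap.

```python
# Illustrative comparison of total- vs ideal-body-weight dosing in an obese patient.

def ideal_body_weight_kg(height_cm: float, male: bool) -> float:
    """Devine estimate: 50 kg (men) or 45.5 kg (women) + 2.3 kg per inch over 5 feet."""
    inches_over_5_ft = max(height_cm / 2.54 - 60.0, 0.0)
    return (50.0 if male else 45.5) + 2.3 * inches_over_5_ft

dose_per_kg = 0.1           # mg/kg, hypothetical weight-based regimen
tbw, height = 110.0, 170.0  # kg, cm: hypothetical obese male recipient
ibw = ideal_body_weight_kg(height, male=True)

print(f"IBW ~{ibw:.0f} kg -> {dose_per_kg * ibw:.1f} mg/day")   # ~6.6 mg/day
print(f"TBW  {tbw:.0f} kg -> {dose_per_kg * tbw:.1f} mg/day")   # 11.0 mg/day
# Reference [25] favors dosing on total body weight, i.e., the larger dose here.
```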

Whether part of a minimization protocol or not, steroids are more and more often tapered or discontinued in current practice [3]. In general clinical practice, selected patients at relatively lower risk of acute rejection or graft loss are the preferred populations for steroid withdrawal [3]. Even in the low dose ranges (e.g., 5 mg daily) that are employed in current regimens, important dose–response relationships exist. This should reinforce the almost inescapable proposition that in attempts to taper steroid immunosuppression to avoid side effects, we also reduce efficacy. The Canadian study, which showed markedly reduced graft function with steroid withdrawal, was technically an ultra-low-dose vs. standard-dose trial, as the off-prednisone group received on average about a milligram of prednisone every other day [29]. While alternate-day steroids may produce fewer side effects, early studies demonstrated that they also provided less immunosuppression than equivalent daily dosing [30]. In patients who receive steroids, exposure–effect relationships have largely been validated indirectly. Prednisolone is hydroxylated by cortisol 6β-hydroxylase, and concomitant therapy with anticonvulsants such as phenytoin results in approximately a 50% increase in clearance [25]. Additionally, phenobarbital and diphenylhydantoin accelerate metabolism of prednisone and have been associated with markedly reduced graft survival with standard steroid regimens (Fig. 1) [31]. Indeed, many drugs that influence CSA and TAC metabolism similarly affect prednisone [32]. Oral contraceptives significantly increase steroid exposure [33]. In one study, diltiazem increased prednisolone exposure approximately 20% in normal subjects [21]. Consistent with these metabolic effects, diltiazem has been associated with better long-term graft survival [34], but the relationship to prednisone exposure is speculative due to the absence of blood levels. Interactions between steroid metabolism and CSA or TAC are of lesser clinical relevance [35].
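
Because oral exposure scales inversely with clearance (AUC = F·Dose/CL), a 50% rise in clearance translates into roughly a one-third fall in exposure at an unchanged dose. The sketch below makes that arithmetic explicit; the baseline clearance and bioavailability are arbitrary placeholders, and only the ratio matters.

```python
# Rough translation of a clearance change into an exposure (AUC) change, assuming
# oral bioavailability is unchanged: AUC = F * Dose / CL.

def auc_ug_h_per_L(dose_mg: float, cl_L_per_h: float, f: float = 1.0) -> float:
    return f * dose_mg * 1000.0 / cl_L_per_h

baseline = auc_ug_h_per_L(5.0, 10.0)                   # arbitrary baseline clearance
with_anticonvulsant = auc_ug_h_per_L(5.0, 10.0 * 1.5)  # ~50% higher clearance [25]

print(f"relative exposure on an enzyme inducer: {with_anticonvulsant / baseline:.2f}")
# -> 0.67, i.e., about a one-third fall in exposure from an unchanged oral dose,
# in line with the poorer graft survival reported with anticonvulsants (Fig. 1).
```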

Fig. 1 Effect of anticonvulsant therapy on allograft survival

Notably, no significant interaction between ketoconazole and prednisolone has been demonstrated [25, 27]. This may also hold for other azole antifungals, but the question has not been well studied with other agents. No significant change in prednisolone pharmacokinetics was found in patients who were coadministered cyclosporine, sirolimus, macrolide antibiotics, or nifedipine [36, 37].

Even though current oral daily doses of 5–10 mg/day of prednisone are low by historical standards, they nevertheless appear to have measurable effects. They are roughly equivalent to replacement doses for complete glucocorticoid deficiency, but even in the era of higher steroid dosing, chronic therapy usually did not induce clinically important adrenal suppression [38]. Even when “stress doses” of steroids are truly needed (in higher-dose, chronically immunosuppressed populations), arguably only small transient increases can be justified [39, 40]. A degree of partial adrenal suppression of uncertain significance has been suggested in a more recent report [23]. Reinstitution of low, seemingly “physiologic” doses of steroids can abolish the rejection-fever-myalgia syndrome in transplanted patients who have recently returned to dialysis and stopped immunosuppression. Likewise, withdrawal of such low “physiologic” doses in ostensibly stable renal transplant patients in whom graft hyporesponsiveness seems to have been induced is associated with new antibody formation (Fig. 2) [41]. Withdrawal of chronic, low prednisone doses of 0.1 mg/kg was dramatically associated with acute or subacute rejection in 11 of 11 patients transplanted across ABO barriers [42]. Taken together, these observations suggest that low-dose steroids do not importantly suppress adrenal function but do have significant dose-related therapeutic effects.

Fig. 2 Antibody formation after withdrawal of prednisone in stable kidney transplant recipients

Registry data

Over the last 50 years, most routine, postoperative and maintenance transplant immunosuppression protocols have included steroids, although they have been given in lower and lower doses. Steroid-based regimens were the standard of care in the azathioprine (AZA) era, throughout most of the cyclosporine era, and still are for the large majority of centers in the United States. However, many centers currently omit steroids ad hoc in selected patients, and some centers are comfortable doing so in all immunologically lower risk patients [3]. Steroid avoidance centers cite studies using the newer agents TAC, MMF, and antibody induction, which are felt to make steroid immunosuppression unnecessary for many patients. These latter regimens, which have the goal of “safe substitution” for steroids rather than induction of hyporesponsiveness or tolerance, will be discussed below.

Despite the widespread use of TAC, MMF, and induction agents, improvements in long-term kidney transplant survival have been modest. Most of the improvement in past decades appears to have been due to more effective medical and immunologic management within the first year. For example, the 10-year survival of deceased donor allografts improved from about 20% in the seventies to 40% in the nineties, but the half-life of grafts surviving beyond the first year changed only from 7.5 years to 8.6 years [43]. Although these early losses have been reduced, significant improvements in long-term survival have not been seen [44]. Consistent with the lack of an expected improvement in long-term graft survival in protocols using MMF and TAC, registry studies may not detect a benefit of TAC over CSA or of MMF over AZA [45]. This question is discussed in more detail in subsequent sections.

Because registry studies suggest that TAC and MMF have limited benefit compared to CSA and AZA, many transplant physicians consider that our major remaining problems are in chronic, not perioperative, immunosuppression. The plasticity and adaptability of the immune system need to be controlled day by day, not only during the perioperative period. For this reason, many observers feel that steroid withdrawal has been “studied often, but not studied well,” i.e., not well studied with relevance to long-term allograft survival, where a low-risk patient should enjoy particularly prolonged graft survival, not just be documented to do well for a year or two. The short-term studies that begin perioperatively are also less relevant when they permit induction agents. As survival curves of induced and noninduced kidney transplants are parallel in the long term, the early effects of induction may not provide a sustained ongoing long-term advantage [46]. Induction may delay the emergence of differences in outcome, but day-to-day immunosuppression in the chronic phase of therapy may in fact determine long-term outcome. Registry studies suggest that tapering immunosuppression in the long run risks graft loss [4]. It is likely that there is not a sharp dichotomy between “safe” and “unsafe” regimens, but that, as with most epidemiologic phenomena, a continuum exists. Thus, an insidious low-level immune response can be misdiagnosed as an “accommodative” state as immunosuppression is tapered.

Registry studies are retrospective and attempt, with varying degrees of success, to control for unintended variables that may confound a hypothesis. Some registry studies show no detriment of steroid avoidance to graft survival [47]. However, many patients who were initially steroid free will return to steroid therapy [47, 48]. When analyzed by intention to treat, the large number of patients returning to steroid treatment may obscure the results of registry studies. Registry studies are also complicated by the fact that steroid withdrawal is usually attempted in patients perceived as “safer,” and it is difficult to adjust the analysis to account for this selection bias. This is strikingly illustrated by retrospective analyses attempting to identify matched controls who remained on steroids; those analyses suggested that steroids were actually deleterious to graft function [49, 50], an “observation” that prompted the Canadian trial (see below) [29].

Steroid avoidance studies

Many “successful” steroid avoidance protocols were reported well before the advent of Neoral, TAC, or MMF. Protocols have variously prescribed no steroids postoperatively, stopped steroids in the first weeks after transplantation, or tapered and discontinued them after several months. Study end points are usually rejection episodes, for-cause biopsies, and/or changes in renal function. Historically, many studies have found more histological evidence of rejection with steroid withdrawal, especially in the higher immunologic risk groups, but with little effect on graft function [51]. This has been interpreted by critics to indicate that graft survival will eventually be reduced by steroid avoidance for most patients and has prompted cautious recommendations that steroid avoidance be treated as unproven or even experimental [3, 52]. Meta-analyses, of course, are only as good as the studies they summarize. A recent meta-analysis of nine steroid withdrawal studies attempted to neutralize the ability of induction therapy to obscure short-term differences by reviewing only studies that did not include induction in their protocols [53]. It concluded that steroid withdrawal was acceptable with TAC immunosuppression but not with CSA [53]. However, while not part of any study protocol, induction was in fact liberally and/or selectively used in several studies, and only two of the nine studies were TAC based [54]. The meta-analysis also included a large CSA study that used higher steroid doses and higher-risk patients and had to be terminated because of increased rejection in the off-steroid arm [54]. A comparable TAC study was not included with the two TAC studies that were analyzed. Thus, this well-received meta-analysis [55] included a diverse set of study protocols that may confound any interpretation of the analysis. Another careful meta-analysis concluded that steroid withdrawal was associated with an increased rate of rejection as well as increased side effects, but most studies included in the analysis were of poor quality [56].

Whether we are committing a “subgroup fallacy” by treating lower risk patients as different from higher risk patients, especially in light of the persistent difficulty in improving long-term allograft survival, is discussed elsewhere in this chapter.

Relevant United Network for Organ Sharing (UNOS) and/or single center Organ Procurement and Transplantation Network (OPTN) data are useful in evaluating published experience from the USA with cohorts of patients in whom steroids have been withdrawn. These public outcome data should be compared to the graft survival data reported in steroid withdrawal studies to assess the overall risk of the study cohorts. In uncontrolled single center studies, in which, e.g., all patients at the center are supposed to be steroid free, the center’s UNOS or OPTN data can be compared with the data that appear in the study report. In some studies, these data do not seem concordant [57–59]. The exceedingly well-done multicenter Astellas steroid sparing study, discussed below, reported 5-year graft survival in both cohorts that far exceeded UNOS national 3-year averages [60], suggesting that, perhaps unintentionally, participating centers preferentially entered low-risk patients.

The Canadian study from over two decades ago [29] and the recent Astellas-sponsored trial [60] are the two best steroid avoidance studies in several respects: placebo controlled, prospective, randomized, and reasonably long term. The seminal study of the efficacy of chronic steroid therapy and the risks of withdrawal was the randomized, placebo-controlled Canadian study published in 1992 [29]. Eleven Canadian centers enrolled patients from 1982 to 1985. Enrollment commenced at 90 days posttransplant, but only in stable patients with serum creatinine of ≤2.2 mg/dl. Participants were randomized 1:1, and all received the original widely used formulation of cyclosporine (Sandimmune). Trough levels averaged 100–110 ng/ml. Importantly, prednisone was dosed by weight, tapered to 0.2 mg/kg every other day by 6 months, and discontinued per protocol. No induction or third drug (e.g., azathioprine) was given as part of the protocol. Additional steroids were optional at the discretion of the center’s physicians, but the placebo vs. prednisone outcomes were analyzed based on intention to treat.

Forty-three percent of all center patients were enrolled (those with serum creatinine ≤2.2 mg/dl and no active rejection). Increased immunologic risk was not an exclusion criterion, in part because some previous uncontrolled experience had suggested that steroids given with CSA increased the risk of rejection [29]. By 1.3 years, 120 of 260 patients had been withdrawn from the placebo group and treated with prednisone but, per study design, remained in the placebo group for 2 years for analysis. Thirty-three subjects (13% of the total) in the placebo group received prednisone by mistake, and they were also analyzed in the placebo group. Graft survival at 5 years was 73% in the placebo group and 85% in the steroid group. Interestingly, differences were even more striking in higher risk patients such as retransplants. In the manuscript, renal function was compared only between subjects with functioning grafts in the on-steroid and off-steroid groups. Thus, the exclusion of nonfunctioning grafts would markedly diminish the apparent overall effect of steroid withdrawal on graft function. As reported, serum creatinine and calculated Cockcroft–Gault creatinine clearances differed at 6 months into the protocol, and the difference stayed relatively constant, with creatinine clearances lower in the placebo group by approximately 10%. Data were presented graphically and do not allow precise analysis. However, if a creatinine value of 8 mg/dl is assigned to patients in either arm who returned to dialysis and these patients are then included in a cumulative analysis, the long-term difference in serum creatinine between the two cohorts approaches 0.3–0.4 mg/dl, with the lower creatinine in the steroid therapy group.
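
For reference, the Cockcroft–Gault estimate used in that renal-function comparison is easy to reproduce. In the sketch below the formula is standard, but the example patient and creatinine values are hypothetical, chosen only to show how a 0.3–0.4 mg/dl creatinine difference maps onto estimated clearance.

```python
# Cockcroft-Gault creatinine clearance estimate; the example patient is hypothetical.

def cockcroft_gault(age_y: float, weight_kg: float, scr_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance in ml/min."""
    crcl = (140.0 - age_y) * weight_kg / (72.0 * scr_mg_dl)
    return crcl * 0.85 if female else crcl

on_steroid  = cockcroft_gault(45, 75, 1.50, female=False)  # ~66 ml/min
off_steroid = cockcroft_gault(45, 75, 1.85, female=False)  # ~53 ml/min
print(f"on steroid ~{on_steroid:.0f} ml/min, off steroid ~{off_steroid:.0f} ml/min")
# Here a 0.35 mg/dl higher creatinine lowers the estimate from ~66 to ~53 ml/min.
```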

In summary, the Canadian steroid withdrawal study showed a benefit from steroid therapy. It showed significant differences in long-term graft survival and also in renal function in patients who had not yet lost their grafts; that is, it showed an ongoing effect of steroid avoidance. The Canadian study did not use TAC or MMF. Azathioprine was not part of the protocol, nor was any AZA use reported in the analysis. Thus, a potential criticism of the study is that newer agents, e.g., tacrolimus and mycophenolate mofetil, would be more potent and diminish the importance of these striking results. In fact, some editorials have suggested that TAC (to replace CSA) and MMF (to supplement TAC and/or replace AZA) may have made the Canadian study irrelevant to current practices [18, 60, 61].

This possibility was addressed by the more recent Astellas steroid withdrawal study [60]. Published in 2008, the Astellas study was only the second long-term randomized, placebo-controlled trial of steroid withdrawal in kidney transplant recipients. Three hundred ninety-seven patients transplanted between 1999 and 2002 were enrolled. The study involved lower risk patients with initially functioning grafts, who either had prednisone stopped by 7 days posttransplant or had it tapered to 5 mg per day by 6 months as maintenance therapy. Both groups received postoperative antibody induction, TAC, and MMF. At 5 years, there was no significant difference in patients reaching the composite end point of death, acute rejection, or graft loss. Side effects were remarkably similar in both groups, with significant but small steroid-associated increases in triglyceride values and in weight gain, averaging about a pound a year. For-cause biopsies showed chronic allograft nephropathy in 9.9% of the steroid withdrawal group vs. 4.1% of the steroid-treated group. At 5 years, serum creatinine in both groups remained at about 1.5 mg/dl. Five-year graft survival was not reported separately but was over 90% in both groups. In comparison, from 1997 to 2004 the average OPTN 3-year survival for living donor kidneys was about 88%, for deceased donor kidneys about 78%, and for all primary kidney transplants about 82% (www.ustransplant.org). Thus, the 5-year outcomes of both the steroid-treated and off-steroid groups in the Astellas study were superior to the overall 3-year results reported in the USA to the OPTN, suggesting disproportionate, unintended inclusion of low-risk patients in both arms of the Astellas study.

While the sophisticated and careful trial design and analysis of the results were singularly helpful in assessing efficacy and side effects attributable to chronic steroid immunosuppression, interpretation of the Astellas trial is complicated by the fact that low-risk patients were entered and steroid doses were low (i.e., “almost no steroid”). The few documented steroid-related side effects suggest that drug exposure was indeed low. Thus, the Astellas study might be interpreted to support low-dose prednisone therapy, as the risks were minimal. Other reports have questioned whether the minimal side effects associated with low-dose steroids are sufficient to justify withdrawal of prednisone [3, 62, 63].

In epidemiological studies, one needs a population at reasonable risk, some of whom have undergone a strong intervention, and differ only with respect to that intervention. To the extent that risk and/or the strength of the intervention are low, a longer observation period is needed to evaluate a hypothesis of equivalency of outcome. This low-risk/minimal intervention factor may complicate the generalizability of the Astellas study to the clinical use of steroids in long-term maintenance immunosuppression.
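
To illustrate why low risk and a weak intervention make equivalence hard to demonstrate, the sketch below runs a standard two-proportion sample-size calculation (normal approximation, 80% power, two-sided α = 0.05). The inputs are illustrative only: the 85% vs. 73% figures echo the 5-year graft survival reported for the Canadian study, while the 90% vs. 86% pair is a hypothetical smaller difference of the kind expected in lower-risk cohorts; this is not a formal power analysis of any particular trial.

```python
# Illustrative sample-size estimate for comparing two graft-survival proportions.
from math import sqrt, ceil

Z_ALPHA, Z_BETA = 1.96, 0.84   # two-sided 5% significance, 80% power

def n_per_arm(p1: float, p2: float) -> int:
    pooled = (p1 + p2) / 2.0
    num = (Z_ALPHA * sqrt(2 * pooled * (1 - pooled))
           + Z_BETA * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

print(n_per_arm(0.85, 0.73))   # ~180 per arm for a 12-point survival difference
print(n_per_arm(0.90, 0.86))   # ~1000 per arm for a 4-point difference
```

Shrinking the expected difference by a factor of three inflates the required sample size several-fold, which is the same reason a low-risk cohort needs a longer observation period before equivalence can be claimed.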

Steroid side effects

Steroids have many undesirable side effects, and many small, suboptimally controlled studies report less risk of exacerbating diabetes, hypertension, and hypercholesterolemia with steroid avoidance. However, many do not report the steroid doses used, and, as previously discussed, dose is central to efficacy and the incidence of side effects. Also, it is important to consider that side effects from steroids have been of lesser magnitude in the current era of overall lower doses [3, 62, 63]. Steroid overexposure (or underexposure) might be plausibly deduced in the few patients who are taking other drugs that affect the metabolism of steroids [32]. But until we have assays for steroid exposure, we cannot accurately assess it. Therefore, a steroid side effect does not give one a clear rationale to reduce the dose “safely.” For most patients who are receiving target-range exposures of critical drugs, control of a side effect while keeping the dose unchanged would seem more prudent. In fact, studies that show the most beneficial immunologic effect of steroids also use higher doses, but still in the “low” dose range [64]. The Canadian trial [29] used weight-based (i.e., higher) doses and higher risk groups, but the side effect profiles were not well described. As discussed in previous sections, the well-controlled Astellas trial [60] used lower doses with lower risk groups, showing a small immunologic disadvantage to steroid avoidance. The side effect profiles were so similar on and off steroids that the trial argues as easily against pursuing steroid avoidance as it argues for it. The Astellas trial used a relatively small, fixed daily dose of 5 mg of prednisone in all patients, regardless of size, and found no difference in the incidence of diabetes or hypertension, an exceedingly small difference in triglycerides, and a differential weight gain of only 5.1 vs. 7.7 kg at 5 years (or about a pound a year). In higher dose eras, others have reported similar findings [65]. Steroids increase the HDL content of serum lipids, which may complicate the assessment of cardiovascular risks [66]. Protocols that do not involve steroid dose reduction have been developed to control, for example, weight gain [67] and bone disease [68]. A possible positive side effect of steroids might be a reduction in CNI nephrotoxicity [69]. Omitting steroids may result in reduced expression of P-glycoprotein [35]. Inhibition of that system at the surface of renal tubular cells may lead to increased intracellular drug accumulation and increased fibrosis [70]. For use during pregnancy, glucocorticoids, TAC, and CSA are rated class C teratogens, with MMF, sirolimus, and AZA rated at higher risk as class D [71].

Have tacrolimus and mycophenolate made steroids unnecessary?

On the basis of data generated prior to the Astellas study, whether TAC and MMF would be likely to “make the Canadian study irrelevant” is open to question. Three early, prospective, well designed, and pivotal studies [72–74] defined MMF as a new and more potent agent for acute and chronic immunosuppression [61]. With CSA and prednisone as baseline immunosuppression, MMF was directly compared to AZA in two studies and to placebo in a third. Subjects were entered at the time of transplant, and primary nonfunction was important in determining differences in graft survival [75]. In all three studies, the MMF group experienced less early mild rejection [76] and less graft loss. However, in none of the three studies was there a finding of an ongoing effect of MMF beyond 6 months, only a maintained early difference that was generated by the early graft losses in all three trials. The Canadian study did not use AZA; therefore, the experience in the early and influential MMF vs. placebo study would be particularly relevant in assessing the contribution of MMF when added to a CSA and steroid regimen. This study compared CSA, MMF, and steroids to CSA, steroids, and a placebo, but serum creatinine values in the two groups were identical at 1 year [77], and, inexplicably, no creatinine or other renal function data were included in the 3-year report [72]. There was no difference in graft loss between the study groups beyond 1 year, as the survival curves remained parallel. The two other studies comparing MMF to AZA showed a similar lack of ongoing long-term effect, either in graft loss or in a separation of the serum creatinine values in the two cohorts of patients with functioning grafts [73, 74]. Defenders of MMF as a major advance in immunosuppression argue that these three early pivotal studies were not powered to show long-term effects on graft survival or graft function [61], but each study was about as large as the Canadian study, which did show striking differences in both. If we apply the same standards, it is hard to accept the MMF data as even partially convincing without giving even more credence to the Canadian data that suggest an ongoing beneficial effect for steroid immunosuppression. In summary, we might accept MMF as an improvement in current immunosuppression, but not, by the evidence, a sufficient improvement to justify dismissing the Canadian study or to make the cessation of low-dose steroid therapy safer.

Tacrolimus has also been held to be a sufficient improvement over CSA to justify steroid-free therapy in many patients in spite of the findings of the Canadian study [29]. However, many early pivotal, prospective studies comparing CSA and TAC can be found that show minimal to no differences in serum creatinine over the first few years, and even less effect on graft survival [78, 79]. Registry studies suggest equivalence of CSA and TAC [80]. A superiority of TAC over CSA has not been seen by the University of Minnesota [81], which consistently reports excellent survival results with steroid-free protocols that are not worse than on-steroid historical controls, i.e., by these accounts TAC provides steroid-free outcomes equivalent to those with CSA. Considering the totality of TAC–CSA studies, the evidence for some degree of superiority of TAC over CSA appears better than the evidence for relative superiority of MMF. But even the best evidence supporting TAC as an improvement over CSA in chronic immunosuppression showed a smaller benefit than that offered by steroids over no steroids in the Canadian study. MMF was a costly drug, and the incidence of diabetes was markedly increased when TAC was used in place of CSA (package inserts). As discussed in the next section, despite these drawbacks, no one attempted the typical “steroid-type” studies that probably would have readily identified low-risk subgroups in which these newer drugs could have been avoided.

Is a “safe” steroid-free subgroup a defensible concept?

In any study, whether it is a registry analysis or any other clinical comparison, some patients in various subgroups will usually do well regardless of the intervention. Certainly, under the conditions of the study (short term, induction, etc.), there can be little doubt of acceptable outcomes when steroids are not part of transplant immunosuppression. But as a whole, the transplant profession has applied different standards to steroid use than it has to its newer, proprietary agents. When TAC and MMF were first introduced, in spite of their respective drawbacks of increased diabetic risk and cost, we did not try to identify “safe” subgroups in which the newer agent was “unnecessary.” Given the strength of the evidence, this probably would not have been hard to do. But we instead recommended the newer drugs for all patients when they showed benefit in mixed-risk cohorts. Presumably, this was to maximize protection against long-term, insidious rejection in groups that were at low short-term risk. Efficacy in higher risk groups in the short run was thought to imply benefit for lower risk groups in the longer run. Even by its defenders, steroid withdrawal in patients at higher immunologic risk has long been recognized as riskier and less advisable [28, 29, 82, 83]. But we attempt steroid withdrawal in lower risk patients as if they were somehow fundamentally different from higher risk patients, who instead might be so-called “canaries in the mine” warning us of what is to come for the rest. We may have overestimated the protection afforded by MMF and TAC over the past two decades. As use of these agents has increased and the practice of steroid withdrawal has increased [3], we have not seen the improvement in long-term graft survival that many expected [44]. Yet, we continue to pursue steroid withdrawal for “safer” groups, as if lessons learned in one subpopulation of transplant recipients should not be carried over to the whole.

Summary

Many clinicians accept that steroids were an essential part of maximally effective chronic immunosuppression in past decades but some suggest that they are no longer necessary, at least in low-risk populations when our newer more potent drugs are used. This chapter summarizes the data that may support or undermine this view. (1) Chronic immunosuppression trials need to be chronic. Potential differences in long-term outcome will be minimized by short-term antibody induction studies in low-risk patients, particularly if the on-steroid arm receives very low doses. (2) In the early trials as well as in many subsequent studies, the widely accepted evidence for the superiority of MMF and TAC was not as strong as the evidence for the benefit of steroids in the Canadian study. The comparable Astellas trial found small differences favoring steroids in outcome and minimal side effects at 5 years, which may be attributable to the inclusion of low-risk patients and low doses of steroid. (3) Despite good reason to try, because of belief in overall long-term efficacy for all our patients, we have not attempted to find subgroups for which our newer agents, TAC and MMF, were not needed. (4) The lack of improvement in graft survival in the era of MMF and TAC suggests suboptimal additional protection afforded by current strategies. As the practice of steroid withdrawal has increased, we have not seen the improvement in long-term graft survival that many expected with our newer agents. (5) The basic principles of exposure, efficacy, and outcome epidemiology apply to all our immunosuppressive agents and this includes steroids. The transplant effort will benefit if steroids are dosed and steroid studies evaluated using the same standards that have been applied to our proprietary drugs.

A heterogeneous transplant population will always provide heterogeneity of responses to all of our drug regimens. Clearly, some patients continue to do well when followed for several years while steroid free, and some have clearly benefitted long term from steroid avoidance. It is up to the individual clinician and center to discuss and decide with properly informed patients on the correct approach to steroid avoidance, with the realization that steroids seem to “add something” to transplant populations as a whole and that we have no accepted way to reverse fully established chronic rejection. If steroids are to be avoided in low-risk patients, the decision must weigh the benefits of, e.g., perhaps 25 years of graft function on low-dose steroids against the real risk of perhaps only 20 years of function off steroids. The side effects of low-dose chronic steroid regimens must be weighed against the risk that, off steroids, the “side effects” of reduced renal function, a failing transplant, and an eventual return to chronic dialysis will occur sooner [84, 85].