I was born and raised in Le Mars, Iowa (population 5,000). After graduating from high school and serving in the Navy (1944–1945), I earned a BA degree from Westminster College (Missouri) and MD and PhD degrees from Northwestern University (1947–1952). My PhD thesis in neurophysiology under the preceptorship of Horace W Magoun consisted of four interrelated articles (1–4) that collectively continue to gather 5–10 citations per year after nearly 65 years (Web of Science, Thomson Reuters, Philadelphia, PA, USA).

After completing a 1-year surgical internship at Johns Hopkins Hospital in June 1953, I suspended clinical training for 18 months to do an experimental study of complete heart block, a complication encountered with some of the first human open heart operations. After perfecting a model of heart block in dogs and elucidating its pathophysiology (5,6), I invented a safe method to treat the condition with repetitive low-voltage ventricular stimulation (7). After this demonstration of its efficacy, cardiac pacemaking was promptly applied clinically.

These early research experiences, first in neuroscience and then in cardiac physiology, epitomized the two ends of the spectrum within which I labored all the rest of my professional life: a search at one extreme for fundamental biologic mechanisms and at the other for practical remedies with which to treat human diseases. After I returned to surgical resident duties (1954–1959), a stage large enough to accommodate both kinds of activity emerged from my new interest in the liver, its double blood supply and ultimately its transplantation (www.starzl.pitt.edu).

The specific issue at first was whether portal venous blood had qualities that were important for optimal liver function and overall metabolic homeostasis. This question had been a subject of periodic debate for nearly 80 years, largely because of the confusing literature generated by experimental and clinical studies of portacaval shunt. In 1955, the question resurfaced in the context of the canine auxiliary liver transplant model described by C Stuart Welch at Albany Medical College (8).

With Welch’s operation, the double blood supply of the native liver was left intact, whereas an extra liver (an allograft) was rearterialized and provided with a high volume of portal venous inflow from an alternative source. The “substitute” portal blood consisted of inferior vena caval venous return from the dog’s lower body and hind legs (Figure 1, left). When the transplanted hepatic allografts shrank to almost half their original size within a few days, the acute atrophy was attributed to rejection.

Figure 1

Three kinds of liver transplantation developed between 1955 and 1958 in dogs. Left: Welch’s auxiliary liver transplantation. Middle: Complete liver replacement. Right: Multivisceral transplantation. The allograft organs are colored blue. Illustration by Jon Coulter, M.A., C.M.I.

Based on off-duty studies of portacaval shunt in dogs, my alternative interpretation was that Welch’s auxiliary allografts had been deprived of liver-supporting (hepatotrophic) factors in the portal blood, to which the native liver had primary access and high avidity. To test the hypothesis, I developed two more procedures during 1958–1959: simple liver replacement (Figure 1, middle) (9,10) and replacement of the liver plus all of the other intraabdominal organs (Figure 1, right) (11). The three models in combination fueled two avenues of research.

The first investigative pathway concerned the metabolic cross-regulation of the different abdominal organs. By demonstrating that primary hepatic access to endogenous insulin and other molecules in portal blood played a crucial role in the control of liver size, ultrastructure, function and the capacity for regeneration (12,13), the “hepatotrophic” research contributed importantly to the scientific base of liver transplantation while fostering fundamental nontransplant research that continues to the present day in regenerative medicine (14) and intermediary metabolism (15).

The second developmental pathway involved the potential use of liver, kidney and other kinds of organ transplantation for the treatment of human diseases. Although the initial prospects seemed bleak, this became the dominant objective by the end of 1958. The only evidence that tissue or organ rejection might be avoidable had come from experiments reported 5 years earlier by Billingham, Brent and Medawar showing that allogeneic spleen cells could be transplanted into immunologically immature mice (16).

Animals permanently endowed with donor cells (donor leukocyte chimerism) could accept other tissues from the same donor but from no other donor (donor-specific tolerance). This “neonatal tolerance” model was the experimental surrogate of human bone marrow transplantation for the treatment of immune deficiencies. In a second model that presaged clinical bone marrow transplantation for many other indications, Main and Prehn (17) obtained similar tolerance in adult mice by enfeebling their immune reactivity with irradiation before transplanting them with donor bone marrow cells.

Such experiments were feasible only when there was a close match of mouse donor and recipient tissue (histocompatibility) antigens. Without such a match, the immunocompetent donor cells were either rejected or caused graft versus host disease. Another 15 years would pass before enough human leukocyte antigens were identified to permit the prerequisite matching.

Consequently, clinical bone marrow transplantation was not accomplished until 1968. In contrast, during the same 1959–1962 period in which I was developing the canine liver transplant models, kidney allografts were transplanted into eight human recipients who survived 1 year or longer without tissue matching, leukocyte infusion or evidence of chimerism (Table 1).

Table 1 Characteristics of the first successful transplantations of kidney allografts with ≥6 months’ survival.

Patients 1–7 had been sublethally irradiated in advance of transplantation with virtually no immunosuppression afterward. The nonirradiated eighth recipient was treated daily after transplantation with azathioprine, a drug for which preclinical testing was done in dogs by Roy Calne (with Joseph Murray in Boston, MA) and Charles Zukoski (with David Hume in Richmond, VA). Although the possibility of pharmacologic immunosuppression had been greeted with optimism, that optimism collapsed when, in a clinical renal transplant trial of azathioprine at the Peter Bent Brigham Hospital (Boston), the only survival exceeding 6 months in more than a dozen cases was that of the eighth patient in Table 1.

By this time, I had moved to the University of Colorado (Denver) and obtained a supply of azathioprine for its evaluation in our canine kidney and liver transplant models. In conformity with studies done elsewhere, only 5–10% of the dogs survived for 100 d. When daily treatment was then stopped, a step not taken by other investigators, some of the canine liver recipients and a smaller number of kidney recipients did not reject their grafts.

Other crucial observations in our dog studies had not been apparent under the testing conditions of other laboratories. First, pretreatment with daily azathioprine for several weeks before canine kidney transplantation nearly doubled survival compared with starting the drug on the day of operation. Most importantly, acute rejections that developed in dogs despite daily azathioprine could almost always be reversed by administration of large doses of prednisone. These readily reproducible canine findings were used to design clinical protocols in which prednisone was added only to treat rejections that developed under azathioprine.

This incremental use of the drugs exposed two characteristic features of the alloimmune response that ultimately were exploited for the transplantation of all kinds of organs with the aid of widely variable immunosuppressants. These features made up the title of my report in the October 1963 issue of Surgery, Gynecology & Obstetrics (now Journal of the American College of Surgeons) of the world’s first series of repetitively successful kidney transplantations: “The reversal of rejection in human renal homografts with subsequent development of homograft tolerance” (18).

Although none of the kidney recipients were off immunosuppression at the time of the report, tolerance was inferred from a declining need for treatment after rejections were reversed. In some cases, the need eventually declined to zero, exemplified by five patients in the original Colorado series who currently bear the world’s longest surviving kidney allografts after 50–52 posttransplant years. All have been off immunosuppression for 14–50 years (19).

At the time the iconic Denver clinical series was inaugurated in 1962, the only other kidney transplant program in America with clinical activity was at the Peter Bent Brigham Hospital in Boston. Guided by the freely shared information from Colorado about the combined use of azathioprine and prednisone, David Hume began a third clinical program at the Medical College of Virginia. During the next 2 years, nearly 50 more renal centers were founded in the United States or were gearing up while a similar race was on in Europe. Development of these new centers was facilitated by my 1964 textbook, Experience in Renal Transplantation (20), based on personal laboratory and clinical experience. Immunosuppression with azathioprine and prednisone remained the worldwide standard for nearly 20 years.

Encouraged by our kidney successes, I attempted five human liver replacements between March and October 1963 by using the same two drugs but with foreshortened pretreatment (21,22). The first recipient bled to death during the operation. The longest survival of the next four patients was 23 d. All four of their hepatic grafts functioned throughout and showed little evidence at autopsy of rejection or of preservation injury. Death had been caused by spreading infections that originated in the lungs. In addition, pulmonary emboli originating in the veno-venous bypasses, an essential component of the canine liver operation, had migrated to the lungs. After single failed attempts in Boston (at “the Brigham”) and in Paris, all human liver transplant activity ceased worldwide until the summer of 1967.

During the nearly 4-year moratorium, problems that contributed to the 1963 failures were systematically addressed: blood coagulation, organ preservation, infection, complications of veno-venous bypasses and immunosuppression. In addition, organ graft availability was revolutionized by acceptance of brain death. After reopening the hepatic transplant program in July 1967, multiple examples of liver recipient survival for more than 1 year were produced (23). In the autumn of 1969, I published a second book, this one entitled Experience in Hepatic Transplantation (24).

During the next dozen years, I continued to do a small number of liver transplantations. The longest survivor, a child with biliary atresia, has now borne her transplanted liver for 45 years. Although four new European centers were founded during this period, 1-year survival never exceeded 50% in any of the five programs. Thus, liver transplantation continued to bear the label “feasible but impractical” until the advent of cyclosporine in 1979 and our demonstration that the drug’s optimal use required its combination with prednisone in the “steroids as needed” strategy originally used with azathioprine-prednisone. Kidney recipients were the first to benefit (25). In addition, 11 of our first 12 liver recipients treated with the new baseline drug survived for more than 1 year (26).

At the end of December 1980, I moved from Colorado to Pittsburgh, where the efficacy of cyclosporine-based treatment was established for all transplanted vital organs. In December 1981, the promising developments, with emphasis on liver replacement (27), were made known to the United States Surgeon General, C Everett Koop, who, with the support of President Ronald Reagan, initiated steps leading to a Consensus Development Conference on liver transplantation that included input from the four European centers. The conclusion on 23 June 1983 was that liver transplantation had become a “clinical service” rather than an experimental procedure.

The resulting worldwide stampede to develop liver transplant centers was even more dramatic than that of kidney transplantation 2 decades earlier. Only 6 years later, a 17-page article divided between two October 1989 issues of the New England Journal of Medicine (NEJM) began with the following statement: “The conceptual appeal of liver transplantation is so great that the procedure may come to mind as a last resort for virtually every patient with lethal hepatic disease” (28,29). Most of the legitimate indications for transplant candidacy were obvious, including inheritable disorders that had known biochemical explanations (for example, Wilson disease).

In addition, liver transplantation as an instrument of clinical research received worldwide attention in 1984 with the case of Stormie Jones, a 6-year-old child with congenital hyperlipoproteinemia. Her heart had been irreparably damaged by myocardial infarctions caused by the rapidly evolving coronary atherosclerosis that typifies the lipid disorder. The circumstances surrounding her combined liver and heart replacement and my rationale for proposing the drastic operation are described in the chapter titled “The Little Drummer Girls” in my book, The Puzzle People: Memoirs of a Transplant Surgeon (30).

The revolution in all kinds of organ transplantation during the 1980s had been driven by cyclosporine. However, by the time of our 1989 two-part NEJM publication, our preclinical and clinical studies of tacrolimus already had led to its replacement of cyclosporine as the baseline immunosuppressant (31,32). With tacrolimus, further improvements in survival were possible with liver, kidney and ultimately all kinds of organ transplantation. In addition, the multivisceral transplant procedures developed more than 3 decades earlier in dogs, as well as the transplantation of the intestine alone, were elevated to the status of “clinical service.” The world’s longest surviving multivisceral recipient, now a school teacher, has lived for 24 years after transplantation. Tacrolimus remains the baseline immunosuppressant of choice for all kinds of organ transplantation to the present day.

By the 1990s, organ transplantation was widely acknowledged to be one of the most significant medical advances of all time. It also was one of the most enigmatic. For >30 years, the unchallenged dogma had been that donor leukocyte chimerism played no role in organ alloengraftment. The consequence of this pervasive error was a never-ending search for chimerism-independent mechanisms of organ alloengraftment. The futile exercise ended with our discovery of a small population of multilineage donor leukocytes (microchimerism) in all surviving organ recipients in whom a search was done (33,34).

The microchimerism lay at the tipping point between immunity and tolerance (Figure 2, middle graphic). With our detection of the donor cells by sensitive immunocytochemical and molecular probes, the connection was apparent between organ alloengraftment, the mouse models of acquired tolerance, and clinical bone marrow transplantation. Successful organ transplantation was explained by “responses of coexisting donor and recipient immune cells, each to the other, causing reciprocal clonal expansion followed by peripheral clonal deletion” (33).

Figure 2

Expressions of allotolerance. Top: Experimental and clinical models in which tolerance is associated with donor leukocyte macrochimerism. Bottom: Organ and composite tissue transplants for which alloengraftment is contingent on persistent donor leukocyte microchimerism. Middle: Tolerance permutations defined as balances between the quantity of persisting donor leukocytes with access to host lymphoid organs (solid lines) and the number of antidonor T cells induced at these lymphoid sites (dotted lines). Stabilizing factors (asterisks) may include special cells (for example, T regulatory, suppressor), cytokine profiles, enhancing antibodies, graft secretions, and endogenous or exogenous cytoprotective molecules. How continuous immunosuppression tilts the balance against donor-specific T cells is depicted by small down-directed open arrows. GVH, graft versus host; HVG, host versus graft.

The double immune response of organ transplantation is initiated by the migration of the allograft’s lymphoid cells (“passenger leukocytes”) through vascular routes to host lymphoid tissues. There, the immunocompetent donor cells induce a host versus graft (HVG) antidonor response while mounting a graft versus host (GVH) response. If sufficient reciprocal clonal exhaustion is not achieved, one cell population will destroy the other.

Organ alloengraftment was de facto a variable form of donor leukocyte chimerism-dependent tolerance, the completeness of which could be inferred from the amount of maintenance immunosuppression required. Contemporaneously, Donna Przepiorka et al. reported from Seattle that essentially all bone marrow cell recipients previously thought to have complete hematolymphopoietic cell replacement actually had a small residual population of their own lymphoid cells (35). Organ engraftment and bone marrow cell engraftment were nearly perfect mirror images.

In contrast to the dominant HVG reaction of organ recipients, the GVH reaction is dominant in bone marrow recipients whose immune system is weak or deliberately weakened. Therapeutic failure with either kind of transplantation was explained by the inability to control one, the other or both of the responses.

Independently and at almost the same time, Rolf Zinkernagel published formal proof that the T-cell response against noncytopathic microparasites could be exhausted and deleted (36). He then proposed that clonal exhaustion-deletion and immune ignorance were the seminal mechanisms of tolerance to noncytopathic microorganisms (37). In this view, disease carrier states (for example, hepatitis, cytomegalovirus) were manifestations of specific tolerance to the different intracellular pathogens. The term “immune ignorance” referred to an antigen that fails to reach host lymphoid organs and therefore is not recognized to be present. The phenomenon was first described in the context of transplantation by Clyde Barker and Rupert Billingham (38).

With our mutual recognition that donor leukocytes and intracellular pathogens are mobile antigen equivalents and that transplantation involved a double immune response (host versus graft and graft versus host) that could be reciprocally exhausted and deleted, Zinkernagel and I prepared a review for NEJM. After describing a spectrum of transplant scenarios from outright rejection to durable tolerance and their infection analogs (39), we proposed two generalizable conclusions.

The first conclusion was that “the migration and localization of antigen are the governing factors in immunologic responsiveness or unresponsiveness against infections, tumors, and self and against xenografts and allografts” (39). The second conclusion was that “all … outcomes [of adaptive immunity] are determined by the balance reached between the quantity of antigen with access to host lymphoid organs and the number of antigen-specific T cells induced at these lymphoid sites” (39 [quotation]; 40–45).

These conclusions mandated a paradigm shift in transplantation immunology and in immunology overall. Given that balances between the quantity of mobile antigen and the number of antigen-specific T cells govern immunologic responsiveness and nonresponsiveness, the actual role of transplant immunosuppression is simply to allow establishment of numerical supremacy of the donor leukocytes (Figure 2, middle graphic). Conditions favoring the mobile donor cells for bone marrow cell transplantation either preexist (that is, immune deficiency diseases) or are created in advance of transplantation by reduction of global immune reactivity with irradiation or drugs. Exhaustion-deletion of the impending donor-specific response is thereby made easier.

In the contrasting treatment of organ transplant recipients, dominance of the donor cells relative to host antidonor T cells had been empirically achieved after rather than before transplantation. This step was accomplished with continuous immunosuppression that permitted clonal exhaustion and deletion of the host donor-specific response while avoiding immune injury to the allograft. The greatest threat of acute rejection (and risk of graft versus host disease) coincided with the mass migration of the organ’s passenger leukocytes during the first few posttransplant weeks. This period also constituted a one-time-only window of opportunity for nullification of the host-versus-graft and graft-versus-host responses by reciprocal clonal exhaustion-deletion (Figure 2).

In 2001, Zinkernagel and I pointed out that the window would be closed to the extent that over-immunosuppression subverted the seminal tolerance mechanism of clonal activation → exhaustion → deletion (41), leaving the recipient permanently committed to heavy maintenance immunosuppression. Two therapeutic principles were proposed to reduce these self-defeating consequences: recipient pretreatment and minimal postoperative immunosuppression (41). The strategy was much the same as that used in the canonical Colorado kidney cases of 1962–1963 (18) and in conventional bone marrow transplant protocols.

Thus, insight about the meaning and mechanisms of tolerance derived from the microchimerism discoveries has made it possible to interconnect all existing examples of acquired transplantation tolerance (Figure 2). It has also set the stage for the more discriminating use in organ recipients of the numerous currently available immunosuppressants (42,43). The possibility remains that appropriately timed infusions of donor leukocytes could enhance the level of deletional tolerance beyond that achievable with the organ’s donor passenger leukocytes alone (41–45).

Disclosure

The author declares that he has no competing interests as defined by Molecular Medicine, or other interests that might be perceived to influence the results and discussion reported in this paper.