Introduction

Animal experiments date back to ancient times. At the very beginning, experiments carried out on animals were driven by curiosity to learn about the living body, later followed by the need to better recognize various pathologies and improve our knowledge of diseases. Today, animal experiments are widely used in medical, biomedical and veterinary research and in drug development, including toxicology and safety studies. However, the number of experimental animals used in research cannot be increased without limit, and recently great efforts have been made to replace animal experiments with in vitro organoid culture methods and in silico predictions. Here, we give an overview of the past and present of animal experiments and discuss alternative strategies that may replace them in the future.

The past

Greek scholars such as Aristotle and Erasistratus are known to have performed animal experiments as early as the fourth and third centuries BC. Galen, the famous Greek-Roman physician, regularly performed experiments on animals, including dissections, to improve his anatomical, physiological and pathological knowledge (Hajar 2011).

However, the greatest value of animal testing relates to drug development. Two well-known examples clearly demonstrate why it is necessary to test drugs on animals before human trials are started. In 1937, an American company launched a product called “Elixir Sulfanilamide,” containing sulfanilamide dissolved in diethylene glycol (DEG) and flavored with raspberry. It was marketed without having been tested on animals and was blamed for hundreds of deaths and a major public scandal (Hajar 2011). After the incident, under pressure from the public, the US Congress passed the Federal Food, Drug, and Cosmetic Act of 1938, which gave the US Food and Drug Administration (FDA) the authority to require that all drug candidates be tested for toxicity on animals before being placed on the market. Europe also had its own lesson after thalidomide (Contergan®) entered the market in 1957. It was recommended as a sedative and analgesic for the treatment of insomnia, fatigue and abdominal pain, as well as for the treatment of pregnancy-induced nausea and vomiting. As a consequence, from 1959 until its withdrawal, about 10,000–20,000 “Contergan babies” were born with limb abnormalities (hands and/or feet) in 46 countries worldwide, with the exception of the USA, where the drug was not approved. Contergan was withdrawn from the market in 1961 (West Germany) and 1962 (worldwide) (Hajar 2011). These tragic experiences highlighted the necessity of well-designed animal testing.

The present

Today, animal experiments are widely utilized in medical and veterinary research and drug development, in fundamental biomedical research, and in preclinical toxicological and safety studies (Fig. 1). Animal models are suitable for studying the pathomechanisms of diseases in detail, utilizing various in vivo tests and imaging techniques. During this process, affected organs and tissues can also be isolated at any stage in the development of the pathology and subjected to in-depth studies using histological, immunohistochemical, electrophysiological and molecular biological methods, as well as imaging techniques (MRI, MEMRI, PET). This not only improves our knowledge of diseases but also facilitates earlier and more reliable diagnosis. Medical students still acquire basic knowledge and practical skills on animals (dogs, rabbits) in many fields of medicine (e.g., cardiovascular surgery).

Fig. 1

Distribution of animal experiments (based on an article in The Economist published on August 31, 2007)

Various animal species are used for experiments, such as nematodes (C. elegans), flies (D. melanogaster), fish (e.g., the zebrafish, Danio rerio), frogs, birds, rabbits, dogs and monkeys; however, in drug development rats (15%) and mice (68%) are the most common species. In the last century, rats were widely used in drug research; thus, classical behavioral and physiological tests were developed for rats. To date, the entire rat genome has been sequenced (Gibbs et al. 2004) and transgenic rats have been generated (Bäck et al. 2019). From the end of the twentieth century, however, the use of mice exceeded that of rats in drug development. The widespread use of mice for animal testing is justified by numerous reasons: (a) their relatively inexpensive breeding, (b) fast reproduction (20-day gestation), (c) large litters (8–12 offspring), (d) early maturity (4–6 weeks), (e) a well-characterized genome (the entire genome sequence is known) and (f) the fact that about 99% of mouse genes have human homologs. Well-developed genetic and molecular biological techniques (e.g., transgenesis, gene editing) are available for the experimental genomic manipulation of mice.

The pathomechanisms of infectious diseases are also modeled by the artificial introduction of pathogenic microorganisms (fungi, viruses, mycoplasmas, bacteria) into experimental animals. In other cases, chemical exposure is employed to reproduce the pathology characteristic of the particular disease studied. A good example of this is 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) exposure, which selectively damages dopaminergic neurons, thereby making it possible to model Parkinson’s disease. As another approach, pathological alterations can be induced surgically (e.g., by artery/vein ligation) in order to model and study various pathologies (e.g., ischemic lesions, brain hypoxia). Animal models of human cancers are generated by graft transplantation, viral/chemical/physical induction or genetic engineering.

Random gene modifications

Chemical mutagenesis

Genetic diseases can be studied via the laboratory induction of gene mutations in living organisms. Mutations also occur spontaneously in nature, but only rarely (at a frequency of about 5–10 × 10⁻⁶ per gene locus) (O’Brien and Frankel 2004). Point mutations can be randomly induced in the genome by chemical compounds such as base analogues (5-bromouracil, 2-aminopurine), acridines, alkylating agents (sulfur mustard, nitrogen mustard, ethyl methanesulfonate (EMS)), deaminating agents, hydroxylamine or free radicals. Initially, N-ethyl-N-nitrosourea (ENU) was used in male mice to mutate the germ cells in vivo; it is now also used for the in vitro mutation of embryonic stem (ES) cells. The resultant mutant ES cell lines allow the generation of mutant mice, offering the opportunity to localize the site of mutation in an individual animal that shows a characteristic phenotype (O’Brien and Frankel 2004). Another method of random mutagenesis is based on X-ray irradiation of gametes, which is primarily used for genetic screening in D. melanogaster.
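To put the spontaneous rate into perspective, a back-of-the-envelope sketch (based only on the frequency quoted above; the colony sizes are arbitrary) shows how few new mutants at a given locus one would expect without induced mutagenesis:

```python
# Expected number of new spontaneous mutants at a single gene locus,
# using the midpoint of the 5-10 x 10^-6 per-locus frequency quoted above.
spontaneous_rate = 7.5e-6

for n_offspring in (1_000, 10_000, 100_000):          # arbitrary colony sizes
    expected = spontaneous_rate * n_offspring
    print(f"{n_offspring:>7} offspring screened -> "
          f"{expected:.2f} expected spontaneous mutants at one locus")
```

Chemical mutagens such as ENU, which raise the per-locus mutation rate far above this spontaneous level, are therefore needed to make phenotype-driven screens practical.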

Insertional mutagenesis

Insertional mutagenesis is another means of inducing random mutations. In this case, recombinant viral vectors (adenovirus, retrovirus, lentivirus) or transposons are inserted into the genome. Significant progress was made in mammalian gene transfer when the first DNA transposons functional in mammalian cells were described at the end of the twentieth century (Ivics et al. 1996). Using the transposase enzyme, transposons can be “cut and pasted” essentially anywhere in the genome, so that a virtually unlimited number of mutations can be generated. Although most of the known transposons have been isolated from fish (Sleeping Beauty, Tol2, Passport), the cabbage looper moth (PiggyBac) and frogs (Frog Prince), all of them work in mammalian cells without exception. Pioneering work in this field has been done by the Hungarian scientists Z. Ivics and his coworker Z. Izsvák (Ivics et al. 1996, 1997). The newest types of transposons (PiggyBac, PiggyBat) no longer leave “footprints” once they are excised (i.e., sequences such as C(A/T)GTA do not remain at their pre-insertion sites), so no mutations arise at the original locus (Skipper et al. 2013).
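As a rough illustration of the “footprint” concept, the sketch below simulates the widely described behaviour of PiggyBac-type elements, which insert at TTAA target sites and restore a single TTAA on excision; the sequences and the simplified insert/excise logic are invented for the example and are not drawn from the cited papers.

```python
# Toy model of footprint-free transposition: PiggyBac-type elements insert at a
# TTAA target site (duplicating it on both sides of the cargo) and, on excision,
# remove the cargo together with one TTAA copy, restoring the original sequence.

SITE = "TTAA"

def insert_transposon(genome: str, cargo: str) -> str:
    """Insert cargo at the first TTAA site; the site is duplicated around it."""
    i = genome.index(SITE)
    return genome[:i + len(SITE)] + cargo + SITE + genome[i + len(SITE):]

def excise_transposon(genome: str, cargo: str) -> str:
    """Seamless excision: remove the cargo plus the duplicated TTAA copy."""
    return genome.replace(cargo + SITE, "", 1)

genome = "GGCATTAACGTACG"          # made-up sequence containing one TTAA site
cargo = "[transposon]"
modified = insert_transposon(genome, cargo)
restored = excise_transposon(modified, cargo)
print(modified)
print(restored == genome)          # True: no footprint remains after excision
```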

Creating transgenic animals

It was a major breakthrough when scientists first became able to artificially change the genome of living organisms using cloned genes to create genetically modified animals. The first milestone in this respect was the generation of transgenic mice, later followed by the establishment of the knock-out technique for targeted gene deletion. Today, it is even possible to perform subtle genetic manipulations, such as inducible or tissue-specific gene targeting, or gene silencing with siRNA or miRNA molecules.

The introduction of various gene-transfer methods was a major step forward in the development of relevant animal models, in particular those that harbor not only somatic but also heritable genomic mutations. These methods include viral (adenovirus- and retrovirus-mediated) gene transfer into the reproductive organs, as well as sperm-mediated gene transfer through artificial insemination. However, the most prevalent techniques are the pronuclear microinjection of ova with recombinant DNA constructs and targeted gene knock-out (KO).

Pronuclear microinjection of egg cells with DNA constructs

The heritable genetic modification of the mouse genome was first reported in the early 1980s, based on the microinjection of purified recombinant DNA into the pronucleus of fertilized mouse ova (Gordon et al. 1980; Harbers et al. 1981; Costantini and Lacy 1981). The technique spread rapidly, and by the turn of the century thousands of transgenic mouse models had been created worldwide. In Hungary, József Zákány introduced this technique in 1988 after returning home from Columbia University, New York, and taught several young researchers who are still engaged in transgenesis-based research in Szeged, Gödöllő and Budapest. The essence of the method is the isolation of fertilized oocytes from superovulated female mice, followed by the injection of the purified DNA construct into the male pronucleus of the ova. After a short recovery period, the microinjected oocytes are reimplanted into the oviduct of foster mothers. The inserted transgene can be detected in the newborns by Southern hybridization and/or polymerase chain reaction (PCR). About 15–20% of the newborns are transgenic. Using this method, even large DNA fragments (75–100 kb) cloned into an appropriate vector (e.g., a P1 phagemid) can be inserted into the mouse genome (Callow et al. 1994).
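Based only on the 15–20% founder rate quoted above, a simple binomial sketch (the litter sizes are arbitrary, and the calculation ignores losses during reimplantation and birth) shows roughly how many pups one needs to genotype to expect at least one transgenic founder:

```python
# Probability of obtaining at least one transgenic founder among n pups,
# assuming each pup is independently transgenic with the quoted probability.

def p_at_least_one_founder(n_pups: int, founder_rate: float) -> float:
    return 1.0 - (1.0 - founder_rate) ** n_pups

for founder_rate in (0.15, 0.20):                  # range quoted in the text
    for n_pups in (5, 10, 20):                     # arbitrary numbers of pups
        p = p_at_least_one_founder(n_pups, founder_rate)
        print(f"founder rate {founder_rate:.0%}, {n_pups:>2} pups: "
              f"P(at least one founder) = {p:.2f}")
```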

Targeted gene editing

Classical gene knock-out technology

The elements of the gene targeting technique were developed independently by several laboratories. In the laboratories of Capecchi and Smithies, researchers studied gene exchange occurring via homologous recombination in mammalian cells (Folger et al. 1984; Kucherlapati et al. 1984). Martin Evans and coworkers isolated and successfully maintained mouse embryonic stem (ES) cells in vitro (Evans and Kaufman 1981). When injected into blastocysts, these cultured stem cells can differentiate into cells of all three germ layers (ectoderm, mesoderm, endoderm), including the germline. The methodology of blastocyst injection had been established earlier, in 1968 (Gardner 1968). Capecchi, Smithies and Evans were awarded the Nobel Prize in Physiology or Medicine in 2007 for their breakthrough discoveries in the field of genetic modifications that can be performed in mice using embryonic stem cells.

The technique is quite complex and has many potential pitfalls. As the first step, a targeting vector is created, which is designed to replace the homologous DNA sequences in the host cell. This is followed by the introduction of the targeting vector into pluripotent ES cells, mainly via electroporation, where sequences are exchanged between the targeting vector and the endogenous gene via homologous recombination. Next, mutant ES cells are selected using an appropriate positive selection marker (e.g., the neo gene) and enriched with a negative selection marker [e.g., the thymidine kinase (TK) or diphtheria toxin (DT-A) gene] carried by the targeting vector. Selected clones are expanded and propagated. The mutant ES cells are then injected into mouse blastocysts, which are re-implanted into the uterus of recipient females. The offspring show various degrees of chimerism depending on the origin of their somatic cells. A high degree of chimerism indicates that most of the cells (including somatic and germ cells) originate from the mutant ES cell line. Next, male chimeras are crossed with wild-type black females, and when the gonads are adequately colonized by mutant ES cells, the mutation is transmitted to the progeny (germline transmission). Most importantly, transformed stem cells retain their pluripotency during cultivation, transformation and selection, so that they can differentiate into cells of any type (ectoderm, endoderm and mesoderm) after re-introduction into the blastocyst. Newborns are then tested for inheritance of the desired mutation using Southern hybridization and PCR. As a final step, a transgenic line is established.

An alternative to blastocyst injection is the aggregation of mutant ES cells with zona pellucida-free wild-type embryos in a Petri dish, in vitro. This method was developed by András Nagy and his coworker Elen Gócza (Nagy et al. 1990).

Novel methods of gene editing

Random gene modification has inevitably been replaced by methods in which mutations can be introduced at a desired genetic locus, i.e., genetic manipulation can be targeted. Targeting genes at specific loci can be achieved using, for example, zinc finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), the clustered regularly interspaced short palindromic repeats (CRISPR)-Cas9 system and base editing. The various gene modification methods are summarized and compared in Table 1.

Table 1 Comparison of gene modification methods

The zinc finger nuclease (ZFN) method

Zinc finger nucleases (ZFNs) are a class of engineered DNA-binding proteins that facilitate targeted gene editing by creating double-strand breaks in DNA at user-specified locations. Each ZFN consists of two functional domains: a zinc finger protein (ZFP) domain that provides specific DNA binding and a DNA-cleavage domain. Zinc finger proteins (ZFPs) are among the most abundant proteins in eukaryotes. They have multiple roles in DNA binding, RNA packaging, the regulation of gene transcription and apoptosis, protein folding and lipid binding (Laity et al. 2001). Each zinc finger consists of two antiparallel β-strands and an α-helix and coordinates a central zinc ion through two cysteine and two histidine residues (Cys2His2). A zinc finger comprises approximately 30 amino acid residues, of which the residues at positions −1, +3 and +6 bind three bases on the DNA strand, assuring specific DNA binding. For example, the residues Arg (−1), Glu (+3) and Thr (+6) specifically bind the bases G, C and T, respectively (Fig. 2a) (Klug 2010). The ZFP DNA-binding domain is then linked to the FokI endonuclease, which cleaves both DNA strands when it forms dimers (Pavletich and Pabo 1991).
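The modular logic described above (one finger per 3-bp subsite, with specificity set by the residues at positions −1, +3 and +6) can be illustrated with a small sketch. The finger names and the triplet-to-finger table below are hypothetical placeholders; real ZFN design relies on experimentally validated module archives, and the base order of the example triplet simply follows the Arg/Glu/Thr example in the text.

```python
# Illustrative only: a hypothetical lookup table pairing zinc finger modules
# (named here by their -1/+3/+6 contact residues) with the 3-bp subsite they
# recognize. A ZFN pair needs two such arrays on opposite strands so that the
# fused FokI domains can dimerize and cleave between the binding sites.

FINGER_FOR_TRIPLET = {
    "GCT": "Arg(-1)/Glu(+3)/Thr(+6)",   # follows the example given in the text
    "GGA": "finger_A",                  # hypothetical placeholder modules
    "TGC": "finger_B",
    "AAC": "finger_C",
}

def design_zf_array(target: str) -> list[str]:
    """Split a target site into 3-bp subsites and pick one finger per subsite."""
    if len(target) % 3 != 0:
        raise ValueError("target length must be a multiple of 3")
    triplets = [target[i:i + 3] for i in range(0, len(target), 3)]
    return [FINGER_FOR_TRIPLET[t] for t in triplets]

print(design_zf_array("GCTGGATGC"))     # three fingers for a 9-bp half-site
```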

Fig. 2

Schematic illustration of gene editing techniques. a Zinc finger method (based on Klug 2010); b TALEN method (based on Boch et al. 2009); c CRISPR-Cas9 method (based on Mali et al. 2013). PAM (protospacer adjacent motif), tracrRNA (trans-activating crRNA)

The TALEN method

The transcription activator-like effector nuclease (TALEN) technology uses artificial restriction enzymes generated by fusing a TAL effector DNA-binding domain to a DNA-cleavage domain. Transcription activator-like effectors (TALEs) are virulence factors delivered by the type III secretion system (injectisome) of Xanthomonas bacteria, and restriction enzymes are enzymes capable of cutting DNA strands at specific sequences. The main component of the TAL effector molecule is a central domain typically consisting of 17.5 repeats. Each repeat contains 34 amino acid residues, of which the 12th and 13th are called the repeat-variable diresidue (RVD) (Fig. 2b). These two residues are highly variable and replaceable, while the rest of the repeat is highly conserved. The RVD determines specific nucleotide recognition. This relationship between amino acid sequence and DNA recognition has allowed the engineering of specific DNA-binding domains by selecting a combination of repeat segments containing the appropriate RVDs. If the RVD is Asn-Gly (NG), the repeat binds a T nucleotide, whereas it binds C when the RVD is His-Asp (HD) (Fig. 2b). This specific DNA-binding domain is fused to the FokI endonuclease, which upon dimerization cuts the recognized DNA sequences to create a double-strand break. The code of DNA-binding specificity of TAL type III effectors was deciphered by Boch et al. (2009).
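A minimal sketch of this RVD “code” is shown below. The NG and HD assignments come from the example in the text; the NI (A) and NN (G) assignments are the commonly cited ones from the Boch et al. cipher, included here only for illustration, not as a design tool.

```python
# Each TALE repeat reads one nucleotide via its repeat-variable diresidue (RVD).
# Mapping: NI -> A, HD -> C, NG -> T, NN -> G (commonly cited assignments).

RVD_FOR_BASE = {"A": "NI", "C": "HD", "T": "NG", "G": "NN"}

def tale_repeats_for(target: str) -> list[str]:
    """Return the ordered list of RVDs for a TALE array binding the target site."""
    return [RVD_FOR_BASE[base] for base in target.upper()]

# One TALEN arm; the partner arm is built the same way on the opposite strand
# so that the two fused FokI domains can dimerize and cut between the sites.
print(tale_repeats_for("TCCAGT"))   # -> ['NG', 'HD', 'HD', 'NI', 'NN', 'NG']
```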

The CRISPR-Cas9 method

The bacterial genome includes a specific gene cluster called CRISPR (clustered regularly interspaced short palindromic repeats), which, along with the Cas9 nuclease, represents an efficient bacterial defense system against invasive phage DNA (Ishino et al. 1987). Upon phage infection, nucleases cut the foreign DNA, and the resultant DNA pieces are inserted and stored as spacers in the repeat region of the CRISPR locus. Upon repeated phage infection, the bacterial CRISPR locus is activated, transcribed and processed into smaller RNA units (crRNAs). The crRNAs, harboring sequences complementary to the phage DNA, then bind to the Cas9 endonuclease, which cleaves the foreign DNA.

Groups led by F. Zhang and G. Church demonstrated for the first time that the CRISPR-Cas9 system works in mammalian cells as well (Cong et al. 2013). In the mammalian system, a 20-mer crRNA, which provides sequence specificity, is fused to an auxiliary trans-activating RNA (tracrRNA) to form a chimeric single-guide RNA (sgRNA). The sgRNA is then loaded onto the CRISPR-associated protein 9 (Cas9), which cuts the target DNA at specific sites, generating double-strand breaks that induce DNA repair in the cells (Fig. 2c) (Cong et al. 2013). For practical use, most components of the CRISPR-Cas9 system have been cloned into vectors; thus, users only need to insert their specific guide RNA sequences into a commercially available vector. RNA editing is also possible using a similar system, but in this case the Cas9 endonuclease is replaced by the Cas13 nuclease, which cuts single-stranded RNA (Mali et al. 2013).
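As a simple illustration of how targets are chosen for SpCas9, the sketch below scans a DNA string for 20-nt protospacers immediately followed by an NGG protospacer adjacent motif (PAM); the example sequence is invented, and real guide design also considers the opposite strand, GC content and off-target scores.

```python
# Find candidate SpCas9 target sites on the + strand: a 20-nt protospacer
# immediately followed by an NGG PAM. Overlapping sites are all reported.

import re

def find_sgrna_targets(seq: str) -> list[tuple[int, str, str]]:
    """Return (position, 20-nt protospacer, PAM) for every NGG PAM found."""
    hits = []
    for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq.upper()):
        hits.append((m.start(), m.group(1), m.group(2)))
    return hits

example = "ATGCTGACCTTGGACTACAAAGACCATGACGGTGATTATAAAGATCATGACATCGG"  # invented
for pos, protospacer, pam in find_sgrna_targets(example):
    print(f"position {pos}: protospacer {protospacer}, PAM {pam}")
```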

Unfortunately, these engineered nuclease technologies (ZFN, TALEN and CRISPR-Cas9) share a disadvantage known as “off-target” events, which occur when the complexes bind to and cleave non-target sites, causing undesirable mutations. The frequency of off-target effects can reach 50% in some cases, which is a major concern for therapeutic and clinical applications (Zhang et al. 2015). In the past few years, several laboratories have made serious efforts to overcome this problem and reduce the number of off-target events (Doench et al. 2016; Li et al. 2019).
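The simplest way to think about off-target risk is mismatch counting: sites that differ from the guide by only a few bases may still be cleaved. The toy sketch below flags such near-matches in a made-up reference sequence; real off-target predictors additionally weight mismatch position, bulges and PAM variants.

```python
# Flag genomic windows that differ from the guide by only a few bases
# (potential off-target sites). Sequences below are invented examples.

def off_target_candidates(guide: str, reference: str, max_mismatches: int = 3):
    guide, reference = guide.upper(), reference.upper()
    n = len(guide)
    for i in range(len(reference) - n + 1):
        window = reference[i:i + n]
        mismatches = sum(a != b for a, b in zip(guide, window))
        if 0 < mismatches <= max_mismatches:    # skip the perfect on-target site
            yield i, window, mismatches

guide = "GACCATGACGGTGATTATAA"
reference = "TTGACCATGACGGTCATTATAACCGACCATGACGGTGATTATAAGG"
for pos, site, mm in off_target_candidates(guide, reference):
    print(f"position {pos}: {site} ({mm} mismatch(es))")
```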

These novel gene editing techniques have been used successfully in several Hungarian laboratories, including the laboratories of G. Szabó and F. Erdélyi (Institute of Experimental Medicine (KOKI), Budapest), A. Dinnyés (Szent István University, Gödöllő), and Z. Bősze, E. Gócza and L. Hiripi (National Agricultural Research and Innovation Centre, Gödöllő), to generate gene-modified animals.

Base editing

Base editing is one of the latest breakthroughs in gene technology and gene therapy. The spontaneous deamination of cytosine is a major source of C-G to T-A transitions, which account for about half of all known human pathogenic point mutations (Gaudelli et al. 2017). The methodology of base editing was introduced by Liu and coworkers (Komor et al. 2016). Using base editing, they demonstrated that reverting a T-A mutation to C-G, and thus correcting the mutation, is feasible. In this system, the guide RNA that provides sequence specificity is bound to a catalytically impaired Cas9, which unwinds the DNA and makes a nick on the opposite (non-edited) strand to trigger DNA repair. The impaired Cas9 is fused to a base-editing enzyme (e.g., an adenosine deaminase) that converts A into inosine (I). During DNA repair, I is read as G on the edited strand, and C is inserted by DNA polymerase on the opposite strand during DNA replication (Komor et al. 2016). In some cases, researchers also link a DNA glycosylase inhibitor to the base-editing complex to prevent the repair of the deaminated base. Using base editing, A-T can be converted into G-C and C-G into T-A in the mammalian genome, correcting existing genetic point mutations.
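The outcome of adenine base editing described above can be mimicked with a toy simulation: within a small window of the protospacer, every A is treated as deaminated to inosine and therefore read and repaired as G. The window coordinates and the example sequence are arbitrary choices for illustration.

```python
# Toy adenine base-editing simulation: A -> I (read as G) within an editing
# window of the protospacer; the complementary strand then acquires T -> C.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def adenine_base_edit(protospacer: str, window: tuple[int, int] = (4, 8)) -> tuple[str, str]:
    """Return the edited strand and its reverse complement after A->G conversion
    inside the 1-based editing window."""
    bases = list(protospacer.upper())
    for i in range(window[0] - 1, window[1]):
        if bases[i] == "A":
            bases[i] = "G"                     # deaminated A is read as G
    edited = "".join(bases)
    return edited, edited.translate(COMPLEMENT)[::-1]

before = "CTAAGTACCTGATCGTACGT"                # hypothetical 20-nt protospacer
after, after_rc = adenine_base_edit(before)
print(before)
print(after)                                   # A's inside the window now read as G
```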

Investigating the phenotype of genetically modified animals

In recent decades, several “factories” have been established to produce genetically modified mice. Hundreds of mutant mice have been produced by Lexicon Pharmaceuticals (The Woodlands, TX, USA) using gene targeting, by the Sanger Institute (Hinxton, UK) using ethyl methanesulfonate (EMS) mutagenesis, and by the Jackson Laboratory (Bar Harbor, ME, USA) using TALEN and CRISPR-Cas9 gene editing. Creating mutations in the mouse genome is relatively easy; however, determining the phenotypic changes induced by a particular mutation is an extremely difficult and complex task. Detailed histological, biochemical, genetic and molecular biological studies must be performed to confirm that the genetically modified animals show phenotypic changes similar to those characteristic of the modeled disease. Studying the phenotype is therefore a complex and multifaceted task, requiring expertise in immunological, electrophysiological and behavioral tests and imaging techniques, as well as in-depth and up-to-date knowledge of brain-, heart-, kidney-, liver-, pancreas-, spleen- or gonad-specific functions. Consequently, several international consortia [e.g., EUMODIC and the European Conditional Mouse Mutagenesis Program (EUCOMM)] have been established, at considerable financial cost, to analyze mutant phenotypes in detail. The primary phenotype analysis, based on a standardized experimental panel, is carried out by four large European laboratories. Their results are public and accessible to anyone via the EuroPhenome Web site (http://www.europhenome.org/). In 2013, at the Helmholtz Center in Munich, a non-profit company, Infrafrontier GmbH, was established with the support of five countries (Germany, France, Finland, the Czech Republic and Greece) to coordinate the work of European laboratories engaged in phenotype analyses of transgenic mouse models. The German Mouse Clinic (GMC) was among the first phenotyping facilities to establish a widespread collaboration-based platform for the phenotypic characterization of mouse lines (Fuchs et al. 2018). The GMC has advanced phenotyping pipelines and offers a wide range of standardized and comprehensive phenotypic analyses of mouse mutants for various disease areas, such as screens for cognitive, memory, sensory and motor deficits, neurobehavioral assessment, glucose metabolism, kidney function, energy metabolism, immune function, allergy and lung function (Fuchs et al. 2018).

How relevant are animal models in characterizing human diseases?

Dozens of human diseases are studied in mouse models. For example, a natural mouse mutation that results in a small-eyed phenotype is an excellent model of human aniridia. Genetic research has revealed that a mutation in the PAX6 gene (haploinsufficiency of PAX6) is responsible for disease development in both mice and humans (Porteous and Dorin 1993). The first human disease modeled in mice was Lesch–Nyhan syndrome, a progressive neurological disorder with a striking feature of self-mutilating behavior. Mutation of the hypoxanthine phosphoribosyltransferase (HPRT) gene, which is involved in purine metabolism, is responsible for the development of the human disease. HPRT-deficient mice were produced by ES cell technology, and surprisingly, the knock-out mice appeared to be unaffected, showing normal behavior (Hooper et al. 1987; Kuehn et al. 1987). It was hypothesized that quantitative differences in purine metabolism between humans and mice might explain this finding. Wu and Melton then revealed that mice rely mainly on another enzyme, adenine phosphoribosyltransferase (APRT), for purine salvage. Indeed, administration of an APRT inhibitor induced persistent self-harming behavior in HPRT-deficient mice (Wu and Melton 1993). Currently, many human diseases, such as Gaucher disease, cystic fibrosis, Waardenburg syndrome, Duchenne muscular dystrophy, sickle-cell anemia, atherosclerosis, Alzheimer’s disease, Parkinson’s disease and Li–Fraumeni syndrome, have adequate mouse models. Although the full range of pathological alterations may differ between mice and humans, mouse models are currently the best means of understanding diseases and developing novel therapies.

There is no doubt, however, that great advances have recently been made in the development of in vitro methods. These include stem cell differentiation, metastasis tracking and artificial blood–brain barrier models, but in many cases we are not yet able to study living processes without the whole organism.

The future

The future of animal experiments and drug development in the twenty-first century

A recent study carried out by two animal welfare groups found that the number of animal tests requested by the US Environmental Protection Agency (EPA) increased dramatically in 2017, involving about 75,000 rats, rabbits and other vertebrates (Zainzinger 2018). Such heavy use of experimental animals has drawn the attention of scientists, and great efforts are being made to replace animal experiments with alternative methods. Tissue cultures, perfused organs, histological sections and cell infection models are promising in vitro alternatives. In animal testing, the recommendations issued by the European Commission encourage adherence to the 3Rs strategy (replacement, reduction, refinement) outlined by Russell and Burch (1959). Specifically, replacement refers to the absolute or relative replacement of experimental animals either by inanimate systems (e.g., in vitro organoids or computer programs) or by other species considered to have a significantly lower capacity for pain perception, such as some invertebrates. Reduction refers to the rational reduction of the number of experimental animals used and/or the maximization of the amount of information obtained per animal. Refinement refers to the procedures applied in experiments, comprising efforts to alleviate or minimize the potential pain, suffering or distress of the animals used and to assure enhanced animal welfare. Thus, the 3Rs strategy aims to make animal experiments more acceptable and humane (Russell and Burch 1959). Recently, the original 3Rs have been extended with Reuse, referring to using the same animals in a later experiment where possible, and Rehabilitation of the animals after their use.

3D in vitro organoid cultures

Animal disease models are constructed with a dual purpose. On the one hand, they offer a means to monitor the development and course of a disease by detailed histological, molecular biological and imaging studies. On the other hand, they are utilized to test the in vivo toxicity, efficacy and pharmacokinetic characteristics of candidate drug molecules. However, toxicity testing requires a large number of experimental animals, and it is thus an area where the principle of replacement can and should be followed in the future. Recently, great efforts have been made to develop human 3D tissue chips, organs-on-a-chip (Pitsalidis et al. 2018), 3D transwell cultures, self-assembling organ cultures (Li et al. 2017) and air–liquid interface cerebral organoid (ALI-CO) cultures (Giandomenico et al. 2019) to replace experimental animals. Such 3D cultures modeling the lung, liver, heart, intestine, lymph nodes and brain are already available (Proceedings of a Workshop, Chapter 5, 2018). The in vitro ALI-CO culture developed by Giandomenico et al. was maintained for 10–12 months and exhibited active neuronal networks and subcortical projecting tracts innervating mouse spinal cord explants. This organoid culture was even capable of eliciting coordinated muscle contractions when co-cultured with mouse spinal cord–muscle explants (Giandomenico et al. 2019). Furthermore, the development of in vitro human brain organoid models of the blood–brain barrier (BBB) offers the possibility of investigating the penetration of various drugs through the BBB (Nakagawa et al. 2009; Kamal and Waldau 2019). Using photopolymerizable hydrogels, Miller’s group has successfully printed an artificial 3D “lung” to explore the oxygenation and flow of human red blood cells in a vascularized alveolar model (Grigoryan et al. 2019).

Recently, promising experiments have been reported using scaffold-free 3D cultures of cell lines or primary cells grown as spheroids in suspension. Spheroid cultures can be easier to establish and simpler to work with than 3D-printed models that require a pre-designed scaffold (Tanabe 2019).

Molecularly refined, personalized treatments may also become possible in the near future based on information gained from 3D organ cultures established from the patient’s own tissues. Noor et al. (2019) isolated omental tissue samples from patients; the cells were reprogrammed to pluripotent stem cells, differentiated into cardiomyocytes and endothelial cells, and combined with hydrogels to form “bioinks”. The researchers then 3D-printed a complete small heart with blood vessels and chambers. An alternative technique is to obtain induced pluripotent stem cells (iPSCs) from the patient’s blood, which can be differentiated in chemically well-defined media in vitro to produce the patient’s own 3D organ culture (Burridge et al. 2016a, b).

In silico toxicology

To reduce the vast number of experimental animals required for toxicological and safety studies, a very important step forward is the emergence of so-called twenty-first-century toxicology, which replaces in vivo animal testing with the processing of in vitro and in silico toxicological data (Hartung 2010). The core of this novel methodology is to compare the chemical structures and physiological activities (potential toxic effects) of compounds using various computer models (Raies and Bajic 2016; Kling 2019) (Fig. 3).

Fig. 3

Development of predictive toxicology through in vivo, in vitro and in silico data analysis

Hartung and his coworkers created a map of the known chemicals by comparing about 10 million structurally similar molecules (Kling 2019). In a detailed computational analysis, predictions were generated for 190,000 chemicals already classified as toxic or non-toxic, and the accuracy of the model in predicting toxicity was found to be 87%. By comparison, when an animal experiment was repeated with the same drug molecule, the same result was obtained in only 81% of cases, indicating that novel computational approaches can outperform the reproducibility of animal tests (Kling 2019; Luechtefeld et al. 2018).
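The underlying idea, predicting the toxicity of a new compound from its most structurally similar neighbours with known labels (read-across), can be sketched in a few lines. The fingerprints, compound names and labels below are entirely invented, and real RASAR models are built on millions of curated records with far richer descriptors.

```python
# Minimal read-across sketch: binary structural fingerprints, Tanimoto
# similarity and a k-nearest-neighbour vote over known toxicity labels.

def tanimoto(a: set[int], b: set[int]) -> float:
    return len(a & b) / len(a | b) if (a | b) else 0.0

def predict_toxic(query: set[int],
                  knowns: dict[str, tuple[set[int], bool]],
                  k: int = 3) -> bool:
    """Majority vote of the k compounds most similar to the query."""
    ranked = sorted(knowns.items(),
                    key=lambda item: tanimoto(query, item[1][0]),
                    reverse=True)
    votes = [label for _, (_, label) in ranked[:k]]
    return sum(votes) > k / 2

# Invented fingerprint bit sets and toxicity labels.
knowns = {
    "cmpd_A": ({1, 4, 7, 9}, True),
    "cmpd_B": ({1, 4, 8}, True),
    "cmpd_C": ({2, 3, 5, 6}, False),
    "cmpd_D": ({2, 5, 6, 9}, False),
    "cmpd_E": ({3, 5, 6}, False),
}
print(predict_toxic({1, 4, 7}, knowns))   # resembles A and B -> predicted toxic
```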

Conclusions for future biology

We can hypothesize that the time is not far off when the currently essential animal experiments may be at least partly replaced by in vitro and in silico methodologies. By processing hundreds of thousands of in vivo, in vitro and in silico data points using mathematical models, such as quantitative structure–activity relationships (QSAR), read-across structure–activity relationships (RASAR) and artificial intelligence (AI), we may be able to accurately predict the human toxicity and physiological effects of most chemicals. This is in agreement with the strategy to “reduce, refine and replace” animals in experimental testing, as outlined by the European Commission as well as by the US Environmental Protection Agency.