
1 Introduction

1.1 Synthetic Life Preamble

The basic unit of life is the cell, with the capacities of genetic heredity and evolution as unique hallmarks. For centuries, biology has been the science of life focused on the analysis of the microscopic (membranes, cells, tissues, etc.) and macroscopic (insects, animals, plants, etc.) worlds, but it has gradually adopted synthesis as a means to understand biological systems since the late 19th century. In 1899, for instance, in a manner that is reminiscent of today’s media, the Boston Herald newspaper sensationally reported the work of the German-American biologist Jacques Loeb as the “creation of life” (Ball 2010). Loeb (1899) is known for his invention of artificial parthenogenesis: embryonic development was induced by treating sea urchin eggs with inorganic salts. Loeb conceived of living organisms as chemical machines, and he aimed for a synthetic science of life capable of forming new combinations from the elements of living nature, similar to the way in which an engineer sees his work, as practical, useful and controlled (Fangerau 2009). If we also recall the “Synthetic method to understand life” by the French professor of medicine, Stéphane Leduc (1853–1939), the “Creation of new species by experimental evolution” by the Dutch botanist Hugo de Vries (1848–1935), or the “Synthetic new species by genetic engineering” by the American botanist Albert Blakeslee (1874–1954), we can see that ideas of “synthetic life” resounded as early as the 1930s (Campos 2009).

During the second half of the 20th century the field of genetic engineering consolidated, allowing for the emergence of a new era in biotechnology. This was made possible by major breakthroughs including:

  1. The elucidation of the structure of the molecule responsible for heredity in all living organisms, the antiparallel double helix of deoxyribonucleic acid (DNA), by James Watson and Francis Crick in the 1950s (Nobel Prize in Physiology or Medicine 1962).

  2. The discovery of restriction enzymes by Werner Arber, Daniel Nathans and Hamilton Smith in the 1960s (Nobel Prize in Physiology or Medicine 1978).

  3. The application of restriction enzymes in recombinant DNA technology by Paul Berg and colleagues in the 1970s (Nobel Prize in Chemistry 1980).

  4. The development of the technology for oligonucleotide synthesis in the 1980s (fundamental work for modern molecular biology not yet awarded a Nobel Prize).

  5. The development of the Polymerase Chain Reaction (PCR) for the specific in vitro amplification of DNA by Kary Mullis in the 1980s (Nobel Prize in Chemistry 1993).

During these decades two journal articles referred to the term “synthetic biology”: one to highlight the potential impact of recombinant DNA in biotechnological applications (Szybalski and Skalka 1978) and the other to bring up its relevance in the political debate (Roblin 1979). By the 1980s, synthetic biology was defined as “the synthesis of artificial forms of life” (Hobom 1980), but it was also considered synonymous with “bioengineering” by some scientists (Benner and Sismour 2005). During the 1990s, while “designing synthetic molecules” (Rawls 2000) became a common practice in biological research, especially in the US, the field of metabolic engineering was emerging (Bailey et al. 1990). From a historical perspective, synthetic biology has been evolving from an old genetic (one gene) era towards a younger metabolic (two or more genes) phase, followed by a current genome (dozens of genes) engineering era that will drive us to a “biosystems engineering” (more than one organism) future (Carr and Church 2009).

In fact, with the increasing sequencing of genomes from different species at the end of the 1990s, the genetic program of complex living systems increasingly came to be regarded as ‘digital’, a view that had been common among some researchers working in the information technology (IT) sector much earlier (Danchin 2009). Since then, biology has increasingly depended on computers and mathematics for analysing huge amounts of DNA sequencing data (Shendure and Ji 2008). But it was not until the new millennium that the contemporary field of synthetic biology re-emerged from a community of engineers interested in biology. Their migration into biology (Brent 2004) has enabled the application of “engineering principles” like design, modelling, abstraction and modularity of “circuits” in living systems for useful purposes (Endy 2005). Some bioengineers consider the first international meeting of the BioBricks Foundation, held in 2004 at the Massachusetts Institute of Technology (MIT) in Cambridge, USA, to mark the culmination of synthetic biology as an “engineering” discipline (Heinemann and Panke 2009).

1.2 Contemporary Synthetic Biology

In 2014, a scientific committee on behalf of the European Union (EU) suggested defining contemporary synthetic biology as “the application of science, technology and engineering to facilitate and accelerate the design, manufacture and/or modification of genetic materials in living organisms” (Breitling et al. 2015).Footnote 1 In theory, synthetic biology is about engineering and not about science (de Lorenzo and Danchin 2008), but in practice it is composed of different “research tribes” (Nature Editorials 2014): large interdisciplinary groups of biologists, chemists, engineers, computer scientists, physicists, mathematicians, scholars from the social sciences and humanities, artists (Reardon 2011) and Do-It-Yourself (DIY) biologists.Footnote 2 Figure 1 illustrates the main disciplinary synthetic biology “tribes”, but the reader is referred to previous work in which the first research networks in synthetic biology were investigated (Oldham et al. 2012). Importantly, an open dialogue among natural scientists, social scientists, scholars from the humanities and other stakeholders has recently played an important role in shaping a more inclusive development of synthetic biology (Agapakis 2014).

Fig. 1

The different research “tribes” of synthetic biology, arranged according to the shape of the “penacho” (crest made out of bird feathers) of Moctezuma, the antepenultimate ruler of the Aztec Empire of prehispanic Mexico

Some practitioners of these fields use their own jargon and metaphors according to their agenda (de Lorenzo 2011). For instance, while the assembly of “parts” (e.g. promoters) into “circuits” (e.g. plasmids) and their implementation in a “chassis” (e.g. bacteria) suggests a predictable engineering view of biology, it is evident that biology is not predictable but rather context-dependent. That is, what makes a living cell is not its parts, but the interactions and relationships among them:

The oracle at Delphi posed a question concerning a boat: If, in time, every plank has rotted and been replaced, is the boat the same boat? Yes, the owner will say, the vessel is not its planks but the relationship between them. (Danchin 2003, backcover page).

Another example is the metaphor of “genome writing” (Bedau et al. 2010), referring to the full synthesis of a bacterial genome and its use for the cellular reprogramming of a related bacterium (Gibson et al. 2010). It is clear that we are able to copy (DNA sequencing) and print (DNA synthesis) genomes, but we are far from writing and designing genomes de novo because, among other limitations, most genomes are not completely understood and the function of many protein-coding genes is still unknown (Porcar and Pereto 2012).

Admittedly, the diversity of synthetic biology practitioners is vast, and each research “tribe” has an agenda, but instead of promoting particular agendas that could hamper the development of others (i.e. tribalism), it is important to critically assess the various approaches that will evolve into interdependent methodologies in the coming years of research. To this end, synthetic biology can be broadly divided into four main engineering approaches (Fig. 2).

Fig. 2

Proposition of four engineering approaches encompassing all synthetic biology research

1.2.1 Top-Down Engineering

This approach aims to reduce the complexity of extant cells by comparing universal genes and deleting non-essential ones in order to construct a minimal genome (Juhas et al. 2011, 2012). These efforts rest on the idea that a universal minimal genome gave rise to all living beings, based on the assumption of a single origin of cellular life, the so-called Last Universal Common Ancestor (LUCA) (Ouzounis and Kyrpides 1996). However, recent research points to the possibility that the tree of life composed of eukaryotic and prokaryotic cells (Eubacteria and Archaea) emerged from a community of primordial cells rather than from a single cell (Kim and Caetano-Anolles 2012). The Holy Grail-like quest for the “minimal genome” has thus become elusive; moreover, eliminating many non-essential functions compromises the fitness of an organism and results in “fragile” genomes (Acevedo-Rocha et al. 2013a).

Another approach in this area involves genome streamlining, whereby dispensable DNA elements (mistakenly dubbed “junk DNA” in the past) are deleted to stabilize genomes for optimal performance in many biotechnological applications involving microbes (Leprince et al. 2012; Pal et al. 2014). These efforts coexist with the engineering of genomes (Carr and Church 2009) and epigenomes (Keung et al. 2015) in a multiplex or combinatorial manner (Gallagher et al. 2014; Wang et al. 2009; Woodruff and Gill 2011). Importantly, top-down synthetic biology relies on comparative genomics (Abby and Daubin 2007) and proteomics (Nasir and Caetano-Anolles 2013) as well as systems (Lanza et al. 2012) and quantitative (Ouyang et al. 2012) biology. For an example of top-down engineering for the production of hydrogen as a clean energy carrier, see the chapter by R. Wünschiers in this book.

1.2.2 Bottom-up Engineering

The “bottom-up” synthetic biology approach primarily aims to create “protocells” and to find the transition between non-living and living matter by assembling three components: first, a metabolism for extracting energy from the environment and for constructing, salvaging and discarding aged building blocks; second, an informational program like nucleic acids to control the system; and third, a container bringing these components together to allow their coordination (Rasmussen et al. 2009). To achieve this, three important principles of self-organization are taken into consideration: reproduction, replication, and assembly. Reproduction (Sole et al. 2007) refers to the ability of a system to reproduce a similar copy of itself: cells (composed of container and metabolism), or “hardware”, reproduce; whereas replication (Paul and Joyce 2004) happens when a system replicates an exact copy of itself: the genetic program (such as DNA), or “software”, is copied. Assembly occurs upon aggregation of vesicles or containers (e.g., Oparin’s coacervates) made of small droplets of organic molecules like lipids (Vasas et al. 2012) or liposomes, membrane-like structures containing phospholipids (Oberholzer and Luisi 2002).

Protocell research coexists with other in vitro synthetic biology projects aiming at synthesizing minimal cells (Jewett and Forster 2010), metabolic pathways (Billerbeck et al. 2013) or “never-born proteins” (Chiarabelli et al. 2012), as well as at imitating cellular processes (Forlin et al. 2012) such as cellular division (Schwille 2011) and growth (Blain and Szostak 2014). Although no longer covered by the current EU definition given above (Breitling et al. 2015), this mostly fundamental research deserves proper recognition as synthetic biology because it has potential impact on other areas such as metabolic engineering through the in vitro optimization of synthetic pathways.

1.2.3 Parallel Engineering or Bioengineering

Parallel engineering research is based on the canonical genetic code and employs standard biomolecules, including nucleic acids and the twenty canonical amino acids, for engineering biological systems. It includes the standardization of DNA parts (Endy 2005) and the engineering of switches (Benenson 2012), biosensors (Salis et al. 2009; Zhang and Keasling 2011), genetic circuits (Brophy and Voigt 2014), logic gates (Win and Smolke 2008), and cellular communication operators (Bacchus and Fussenegger 2013) for a wide range of applications in biocomputing (Wang and Buck 2012), bioenergy (Malvankar et al. 2011), biofuels (Kung et al. 2012; Peralta-Yahya et al. 2012), bioremediation (de Lorenzo 2010; Schmidt and de Lorenzo 2012), optogenetics (Bacchus et al. 2013) and medicine (Ruder et al. 2011; Weber and Fussenegger 2012). Most of these applications conventionally rely on the use of one or more vectors (or plasmids) to control the expression of two or more genes and/or proteins. Plasmids are small, circular, double-stranded DNA molecules that can replicate independently of chromosomal DNA and are found mostly in prokaryotic but sometimes also in eukaryotic cells.

A large number of practitioners in this field are engineers aiming to abstract the complexity of biological systems into “parts”, “devices” and “systems” whose interactions can be predicted, according to Richard Feynman’s dictum “what I cannot create, I do not understand” (Keller 2009). The migration of engineers into biology resulted in the first model-based designs of genetic circuits (i.e., “circuit engineering”) based on simple mathematical models such as the toggle switch and the “repressilator” (Cameron et al. 2014). In some of these collaborations, biologists and engineers work together with computer scientists to develop the next generation of computer-aided design (CAD) software for engineering-based synthetic biology (MacDonald et al. 2011). In summary, the main goal of bioengineers is to predict the behaviour of living systems for the sake of safe applications, as Drew Endy says:

Engineers hate complexity. I hate emergent properties. I like simplicity. I don’t want the plane I take tomorrow to have some emergent property while it’s flying.Footnote 3

1.2.4 Orthogonal or Perpendicular Engineering

Also known as “chemical synthetic biology” (Chiarabelli et al. 2012), this approach primarily aims to modify or expand the genetic codes of living systems with unnatural DNA bases (Benner and Sismour 2005; Pinheiro and Holliger 2012) and/or amino acids (Budisa 2004; Liu and Schultz 2010). This subarea also relates to xenobiology, an emergent area at the interface of synthetic biology, exobiology, systems chemistry and origin-of-life research (Schmidt 2010). In recent decades, scientists have synthesized molecules structurally related to the canonical bases of DNA to test whether those “alien” or xeno nucleic acid (XNA) molecules could be used as carriers of genetic information (Kwok 2012). Similarly, the DNA sugar (deoxyribose) has also been replaced by noncanonical moieties.

The genetic code can also be modified or expanded to express information beyond the 20 canonical amino acids of proteins. One strategy uses orthogonal enzymes and a transfer RNA adaptor from an unrelated organism to incorporate a given unnatural, noncanonical or xeno amino acid (XAA) into one or more proteins at one or more specific sites (Budisa 2014). The orthogonal enzymes are generated by “directed evolution”, a method that consists of repeated cycles of gene mutagenesis (genotypic diversity generation), screening or selection (of a particular phenotypic trait), and amplification of an improved variant for the next iterative round (Reetz 2013). Dozens of XAAs have been successfully incorporated into proteins in bacteria, yeast and human cell lines (Liu and Schultz 2010), but also in more complex organisms like worms and flies (Chin 2014). Directed evolution also enables the development of orthogonal ribosomes (based on canonical DNA sequence changes) to facilitate the incorporation of XAAs into proteins (Wang et al. 2007) or of “mirror life”, i.e., biological systems endowed with biomolecules composed of enantiomers of opposite chirality (Renders and Pinheiro 2015; Zhao and Lu 2014).
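To make the iterative logic of directed evolution concrete, the following toy sketch mimics the cycle of mutagenesis, screening/selection and amplification on a string standing in for a gene. The fitness function, sequences and parameters are hypothetical illustrations only; real campaigns screen wet-lab libraries of genes and enzymes, not strings, and this is not an implementation of any published protocol.

```python
import random

# Toy in silico sketch of a directed evolution campaign (illustrative only):
# repeated cycles of mutagenesis (genotypic diversity generation), screening/
# selection (of a phenotypic trait, here a made-up fitness function) and
# amplification of the best variant for the next round. All names and numbers
# below are hypothetical.

BASES = "ACGT"
TARGET = "ATGGCCAGCACCGAAAGA"  # toy "optimal" gene, used only to define fitness


def mutagenize(gene, rate=0.05):
    """Introduce random point mutations at the given per-base rate."""
    return "".join(random.choice(BASES) if random.random() < rate else base
                   for base in gene)


def fitness(gene):
    """Toy phenotype: fraction of positions matching the target sequence."""
    return sum(a == b for a, b in zip(gene, TARGET)) / len(TARGET)


def directed_evolution(parent, rounds=10, library_size=200):
    best = parent
    for _ in range(rounds):
        library = [mutagenize(best) for _ in range(library_size)]  # diversity
        candidate = max(library, key=fitness)                      # screening
        if fitness(candidate) > fitness(best):                     # selection
            best = candidate                                       # amplification
    return best


evolved = directed_evolution("A" * len(TARGET))
print(round(fitness(evolved), 2))  # approaches 1.0 as rounds/library size grow
```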

Another method, dubbed “experimental evolution” (Kawecki et al. 2012), pushes microorganisms to incorporate XNAs into their genomes (Marlière et al. 2011) or XAAs into their proteomes by serial culturing (Yu et al. 2014). Orthogonal engineering based on experimental evolution also aims at engineering cells that can survive on asteroids, on the moon and even on Mars (Menezes et al. 2015). Although the changes at the DNA level are likely to be canonical, cells suitable for non-terrestrial habitats could be considered orthogonal life. For more details regarding the colonization of Mars with the aid of synthetic microbes, see the chapter by C. Verseux et al. in this book.

2 Synthetic Life Forms

Thanks to the technological advances in molecular biology, organic chemistry, and engineering in recent decades, the ease and speed of genetic modification have enabled the development of emergent Genetically Modified Organisms (GMOs). For example, the engineering-inspired design of GMOs has resulted in the crowdsourcing of Genetically Engineered Machines (GEMs), while the decreasing costs of synthetic DNA have enabled the reprogramming of Genomically Designed Organisms (GDOs). More recently, cutting-edge molecular tools have allowed for the arrival of Genomically Edited Organisms (GEOs). Finally, evolutionary approaches have accelerated the development not only of Genomically Recoded Organisms (GROs) harbouring expanded genetic codes, but also of Chemically Modified Organisms (CMOs) endowed with unnatural DNA bases or amino acids. In this work, the first three definitions are introduced, whereas the latter two have already been proposed by others (Table 1).

Table 1 The various types of emergent GMOs

In the following sections, the various types of GMOs (mostly microbialFootnote 4) are introduced together with methods for their development and potential applications in biotechnology. Since GEMs, GEOs and GROs are modified using synthetic DNA, all of these fall under the GMO definition (see next subsection). CMOs may or may not be considered GMOs, depending on whether their genetic changes are induced by genetic modification or only by serial cultivation, respectively. The main purpose of showing existing and emergent types of GMOs and non-GMOs (e.g. CMOs) is to illustrate the most recent efforts undertaken by particular research “tribes” and to categorize them according to their engineering approach(es) (for an overview, see Fig. 3). The biosafety and biosecurity issues that these synthetic life forms may represent are subsequently highlighted. In this manner, it is easier to explore the context of a particular GMO/CMO regarding its history, origin, methodology, possible applications and risks, enabling a better understanding of the organism and possibly a more accurate technology assessment.

Fig. 3

The synthetic nature of biology. Genetically modified organisms (GMOs) are produced using old and modern genetic engineering tools based on standard nucleic and amino acids as building blocks via top-down (top), bottom-up (bottom), parallel (left) and orthogonal (right) engineering. Engineering-based GMO design and construction has resulted in the crowdsourcing of Genetically Engineered Machines (GEMs) through the famous international GEMs competition (see below). In the bottom-up approach, Genomically Designed Viruses (GDVs) and Organisms (GDOs) can be reprogrammed by assembling commercial synthetic DNA, which can subsequently be transplanted into living cells. The first GDO reported was Mycoplasma, whose genetic code (not to be confused with its genetic program) was not altered, in contrast to yeast, whose genetic code (and program) is being modified in order to accommodate unnatural amino acids (see below). Genomically Edited Organisms (GEOs) are GMOs whose genetic material has been modified employing cutting-edge molecular tools. Genomically Recoded Organisms (GROs) are GEOs whose genetic code has been modified to accommodate unnatural amino acids (and perhaps unnatural DNA bases in the future). Finally, Chemically Modified Organisms (CMOs) are composed of unnatural nucleic acids and/or amino acids. Note that the overlap between top-down (top) and bottom-up (bottom) is not shown because no truly synthetic cell, in which all components are synthesized in the lab and assembled into a living organism, exists thus far. There is also no overlap between engineering-based (left) and evolution-based (right) approaches because the former aims to predict function from structure, whereas the latter does not, since such prediction is extremely challenging once non-additive effects are taken into account (for a discussion of this topic, see Silver et al. 2014)

2.1 Genetically Modified Organisms

EU law defines Genetically Modified Organisms (GMOs) as living entities whose “genetic material has been changed in a way that does not occur under natural conditions through cross-breeding or natural recombination”.Footnote 5 Internationally, the “Cartagena Protocol on Biosafety to the Convention on Biological Diversity”, which includes 170 countries, legally considers GMOs as Living Modified Organisms (LMOs), that is, “any living organism that possesses a novel combination of genetic material obtained through the use of modern biotechnology”.Footnote 6 The forerunners of genetic engineering, Paul Berg, Stanley Norman Cohen and Herbert Boyer, developed molecular tools to introduce foreign genes into bacteria, providing the basis for the development of GMOs. In 1972, Berg combined DNA from the gram-negative model bacterium Escherichia coli and the Simian Virus 40 in an exogenous plasmid or vector (Jackson et al. 1972). The following year, Cohen demonstrated that DNA from the gram-positive bacterium Staphylococcus could be introduced and stably propagated in E. coli (Chang and Cohen 1974). This experiment gave rise to the first GMO. The next year, Cohen and Boyer reported the creation of the first E. coli endowed with a transgene originally from the South African clawed frog Xenopus (Cohen 2013), resulting in the first transgenic organism. By using plasmids as gene vectors, these proof-of-principle studies showed that genetic material could be transferred not only between closely related species but also across unrelated ones.

These experiments prompted Paul Berg to call for a moratorium on recombinant DNA technology to assess its risks (see below), while Cohen and Boyer moved to file a patent to exploit recombinant DNA technology. At the same time, many biotechnology companies were founded. In 1976, for instance, Boyer co-founded Genentech, one of the first biotech companies, which was able to “engineer” GMOs to produce human insulin (Goeddel et al. 1979b) and growth hormone (Goeddel et al. 1979a), among other blockbuster substances including alpha interferon, erythropoietin, and tissue plasminogen activator (Rasmussen 2014). Nowadays, many patients around the globe with type 1 diabetes, growth problems and immunological disorders benefit from taking these hormones, sometimes on a daily basis. “For his fundamental studies of the biochemistry of nucleic acids, with particular regard to recombinant-DNA”, Paul Berg was awarded the Nobel Prize in Chemistry 1980, the other half of which was shared by Walter Gilbert and Frederick Sanger “for their contributions concerning the determination of base sequences in nucleic acids”.Footnote 7

This brief introduction to the history of genetic engineering shows not only that scientists have been tinkering with the genomes of microorganisms for almost half a century, but also that this would not have been possible without the development of molecular biology, which was essentially the result of the interdisciplinary collaboration between biologists, physicists, and mathematicians during the second half of the 20th century (Morange 2009).

During the 1990s, the field of metabolic engineering (two or more genes) emerged as an extension of genetic engineering (one gene) when the chemical engineer James Bailey realized that the microbial production of chemicals and antibiotics could be optimised if the metabolic resources of a cell could be adjusted, concluding that

[…] the emergence of a systematic paradigm for metabolic engineering will transform the present pharmaceutical, food, and chemical industries. (Bailey et al. 1990, p. 15)

The following year, Bailey analysed useful molecules that could be successfully produced in microorganisms (Bailey 1991), while others proposed that modifications of the carbon metabolic flux should occur only at the principal nodes of the primary metabolic networks to overproduce desired metabolites (Stephanopoulos and Vallino 1991). These remarkable reviews clearly identified metabolic engineering as an extension of genetic engineering, laying a foundation for the successful field that now allows the production of biofuels (Alper and Stephanopoulos 2009), pharmaceuticals, and fine and bulk chemicals (Keasling 2010).

Jay Keasling’s most recent breakthrough has been the production of a precursor of artemisinin in engineered yeast. Artemisinin, chemically a sesquiterpene lactone endoperoxide, is the drug of choice to treat malaria, preferentially in combination with other derivatives. Every year, 500 million people become infected with this disease, and 1 million people die of it in the developing world, mostly in sub-Saharan Africa, Southeast Asia and Latin America. On average, a child dies from malaria every 30 s worldwide. Artemisinin is obtained from the leaves of the plant Artemisia annua, also known as sweet wormwood. In traditional Chinese medicine, a simple tea prepared from its leaves was used to treat fever, including malaria, but it was not until the 1970s that the plant was rediscovered by modern medical science, when Chinese scientists extracted the active compound artemisinin (known as arteannuin in the past, or qinghaosu in Chinese). However, this drug remained largely unknown to the Western scientific community until 1979, when a review of its application was published in English (Group 1979). In fact, it took a while for the rest of the world, and especially the World Health Organization (WHO), to discover the potential of the antimalarial drug (Tu 2011). Unfortunately, it has been argued that there is a global supply shortage of artemisinin owing to increasing demand and to environmental factors affecting the harvest of the plant, which takes 14 weeks to grow, primarily on Chinese and Vietnamese but more recently also on Indian and East African farms (Enserink 2005), yet this is a matter of controversy.Footnote 8 Artemisinin can also be chemically synthesized, but this is very expensive and unaffordable for most patients.

To come up with another source of artemisinin, Keasling and colleagues ingeniously inserted several genes from different species into the baker’s yeast Saccharomyces cerevisiae, which, fed solely with sugar and basic nutrients, is pushed to produce, in a 3-day culture, artemisinic acid, a precursor that can be chemically oxidized to artemisinin by standard procedures (Ro et al. 2006). The process of producing artemisinin has been shortened to only 14 days, but this took Keasling and colleagues many years of research with both successes and failures (Keasling 2008). For example, problems such as suboptimal expression of exogenous genes (owing to differences in codon usage) or the volatility of intermediates were encountered, but these were solved by optimizing synthetic genes (Martin et al. 2003) and reaction conditions (Newman et al. 2006). In the end, Keasling’s group was able to produce significant levels (>100 mg/L) of artemisinic acid in E. coli by engineering the mevalonate metabolic pathway with a total of 11 enzymes from E. coli, the yeast S. cerevisiae and the plant A. annua (Fig. 4) (Chang et al. 2007).

Fig. 4

Keasling’s metabolic pathway for semi-synthetic artemisinin. Upon glucose uptake, E. coli transforms it into acetyl-CoA via glycolysis. The introduction of 11 enzymes from E. coli (brown; 1, 7, 8), S. cerevisiae (orange; 2–6) and A. annua (green; 9–11) allows the conversion of acetyl-CoA via the mevalonate pathway into artemisinic acid, which can thereafter be chemically converted to artemisinin. Enzymes: 1 AtoB, acetoacetyl-CoA thiolase; 2 HMGS, hydroxymethylglutaryl-CoA (HMG-CoA) synthase; 3 tHMGR, truncated HMG-CoA reductase; 4 MK, mevalonate kinase; 5 PMK, phosphomevalonate kinase; 6 MPD, mevalonate diphosphate decarboxylase; 7 idi, isopentenyl diphosphate isomerase; 8 ispA, farnesyl pyrophosphate synthase; 9 ADS, amorpha-4,11-diene synthase; 10 CPR, cytochrome P450 redox partner; and 11 P450, monooxygenase (CYP71AV1) (Color figure online)
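For readers who find a compact summary useful, the heterologous pathway of Fig. 4 can also be written down as an ordered data structure, as pathway design tools typically do. The sketch below merely restates the enzymes and source organisms listed in the caption (Chang et al. 2007); the representation itself is an illustrative choice, not part of the original work.

```python
from collections import Counter

# Ordered summary of the heterologous mevalonate pathway of Fig. 4, as reported
# by Chang et al. (2007): acetyl-CoA -> ... -> artemisinic acid. Each entry is
# (step, enzyme abbreviation, full name, source organism); only the data from
# the figure caption are used.
MEVALONATE_PATHWAY = [
    (1,  "AtoB",  "acetoacetyl-CoA thiolase",              "E. coli"),
    (2,  "HMGS",  "HMG-CoA synthase",                      "S. cerevisiae"),
    (3,  "tHMGR", "truncated HMG-CoA reductase",           "S. cerevisiae"),
    (4,  "MK",    "mevalonate kinase",                     "S. cerevisiae"),
    (5,  "PMK",   "phosphomevalonate kinase",              "S. cerevisiae"),
    (6,  "MPD",   "mevalonate diphosphate decarboxylase",  "S. cerevisiae"),
    (7,  "idi",   "isopentenyl diphosphate isomerase",     "E. coli"),
    (8,  "ispA",  "farnesyl pyrophosphate synthase",       "E. coli"),
    (9,  "ADS",   "amorpha-4,11-diene synthase",           "A. annua"),
    (10, "CPR",   "cytochrome P450 redox partner",         "A. annua"),
    (11, "P450",  "monooxygenase (CYP71AV1)",              "A. annua"),
]

# Tally how many of the 11 steps are borrowed from each organism.
print(Counter(entry[3] for entry in MEVALONATE_PATHWAY))
# -> S. cerevisiae: 5, E. coli: 3, A. annua: 3
```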

The metabolic pathway engineered in E. coli was later transferred to a genetically stable yeast strain that can efficiently transport up to 115 mg of artemisinic acid per litre of culture out of the cells, thereby allowing a simple and inexpensive purification process (Ro et al. 2006). Recently, German scientists reported the development of an optimized system in which the three-step chemical synthesis from artemisinic acid to artemisinin can be reduced to a more economical and efficient single step based solely on oxygen and light (Levesque and Seeberger 2012). This achievement should contribute to increasing the yields (hormones are produced in the gram range) while reducing artemisinin costs. A dose cost US $2.40 several years ago, but an alliance of the first non-profit pharmaceutical company, the “Institute for OneWorld Health”, and the company co-founded by Keasling, “Amyris Biotechnologies”, planned to decrease the dose cost ten-fold (to ca. $0.25) with the aid of $42.6 million from the Bill and Melinda Gates Foundation (Towie 2006) and a cooperation with the French international company Sanofi-Aventis for scaling up the industrial process. Thus far, more than 1.7 million semi-synthetic artemisinin doses have been shipped to malaria-endemic countries in Africa, including Burkina Faso, Burundi, the Democratic Republic of the Congo, Liberia, Niger, and Nigeria.Footnote 9

2.2 Genetically Engineered Machines

Genetically Engineered Machines (GEMs) are herein defined as GMOs that emerge yearly thanks to a crowdsourcing approach: the international Genetically Engineered Machines (iGEM) competition. Undergraduate students pay a large registration fee to obtain standardized DNA parts or “BioBricks” (BioBricks can be thought of as a type of DNA Lego; for a discussion of the “playing” component in synthetic biology, see the chapter by L. Litterst in this book), mostly originating from the bacterium E. coli, from the Registry of Standard Biological Parts (http://parts.igem.org) at MIT in order to construct at their home institutions, in less than a year, biological systems composed of “parts”, “devices” and “systems” (Smolke 2009). For example, the bacterium E. coli has been converted into “Eau D’e coli”, bacteria that smell like wintergreen or bananas depending on the growth state (MIT iGEM team in 2006); E. chromi, bacteria that glow in different colours (Cambridge iGEM team in 2009); or E. cryptor, a “hard-drive” device to store information (CU-Hong Kong iGEM team in 2010). The main difference between GEMs and GMOs is that the former are built with BioBricks and by following engineering principles, whereas the latter are constructed without these two requirements. The idea behind this distinction is to emphasize the development of potential applications based on GEMs that can be predictable, in contrast to “standard” GMOs, for which this endeavour is not per se attempted from the outset.

Besides promoting creativity and innovation, iGEM serves as an international platform for fostering in young students self-confidence and awareness of the ethical, legal, and social implications of their synthetic biology projects beyond the bench, i.e., “human practices” or, more recently, “policy and practices”. The first real iGEM competition took place in 2004, when 5 teams (Boston University, Caltech, MIT, Princeton University, and The University of Texas at Austin) participated, depositing a total of 50 parts. The University of Texas team designed the first biological photographic film, with a bacterial lawn displaying the phrase “Hello World”Footnote 10 (Levskaya et al. 2005). Since its first meeting, the iGEM competition has grown steadily in the number of countries, teams and delivered parts (Table 2).

Table 2 Evolution of the iGEM competition

Interestingly, most of the successful iGEM teams tend to avoid using the registry parts and prefer to deposit new parts, perhaps because a major portion of the parts have not been tested or do not work as expected (Vilanova and Porcar 2014). These facts illustrate not only the context-dependency of biology and the importance of molecular relationships beyond synthetic DNA, but also the challenges that the iGEM competition will face in the coming years: there is a need to better characterize the existing parts and to increase their quality, and there are additional issues regarding industrial applicability, team judgement and research funding transparency (Vilanova and Porcar 2014). Of all the winners of each competition, only a few projects have been published, and even fewer have a potential industrial application (Vilanova and Porcar 2014). This is why the application of engineering principles in biology based on standardized DNA parts is a challenging endeavour that requires understanding the complexity of gene networks and the variability of each cell within heterogeneous cellular populations (Kwok 2010).

2.3 Genomically Designed Viruses and Organisms

Genomically Designed Viruses (GDVs) and Genomically Designed Organisms (GDOs) are viruses or living entities, respectively, that have been reprogrammed using a genome that was copied from nature into a computer and later designed to be synthesized bottom-up by chemical means. In this work, these two new definitions are introduced to differentiate them from other GMOs in which the genome is borrowed from existing organisms or viruses and is not entirely synthetic (i.e., top-down engineering). The first genome to be chemically synthesized was that of the poliovirus (Cello et al. 2002), which infects humans, followed by that of the bacteriophage Phi X174 (Smith et al. 2003), which normally infects E. coli. The genome of the 1918 “Spanish” influenza pandemic virus was then synthesized (Tumpey et al. 2005), followed by other human retroviruses (Wimmer et al. 2009). All these GDVs were capable of infecting cells, suggesting that chemically synthesized genomes of higher organisms would be functional if they could be inserted into living cells.

In 2008, the Venter lab reported the synthesis of the complete genome of 582,970 base pairs (bp) of the bacterium Mycoplasma genitalium (Gibson et al. 2008), giving rise to the first GDO. In parallel trials, Venter’s team transplanted the natural genome of M. mycoides into M. capricolum, conferring on the latter cell the identity of the former upon cell division and genetic selection (Lartigue et al. 2007). Mycoplasmas are parasitic bacteria that cause respiratory and inflammatory diseases in humans. They lack a cell wall, but the reason these bacteria were chosen as model organisms is that they bear the smallest genomes among all bacteria that can support cellular growth in the laboratory. After the two aforementioned breakthroughs, the next logical step was to combine them: in 2010, Gibson et al. reported the synthesis of the M. mycoides genome (1,080,000 bp) and its transplantation into M. capricolum, again reprogramming the latter cell into the former, the difference being that the genome of the reprogrammed cells contained four encrypted watermark sequences (Fig. 5) encoding the names of 46 people involved in the project, an email address, a website and three famous quotations: (1) James Joyce: “To live, to err, to fall, to triumph, to recreate life out of life”; (2) Robert Oppenheimer: “See things not as they are, but as they might be”; and (3) Richard Feynman: “What I cannot build, I cannot understand” (Gibson et al. 2010).Footnote 11

Fig. 5

Venter’s GDO. The genome of M. mycoides JCVI-syn1.0 was built in vitro by assembling purchased DNA fragments, followed by in vivo assembly in yeast. The designed genome was then transplanted into the parent bacterium M. capricolum, which upon replication in selective media acquired the phenotype of M. mycoides. The JCVI-syn1.0 genome contained four encrypted watermark sequences, indicated by numbers

Venter (2013) himself called these the first synthetic cells whose parent is a computer. However, the computer did not create the genome sequence; it served to store the sequence retrieved from nature by DNA sequencing. Nor are the cells synthetic: only their DNA, which makes up about 1 % of the cell dry weight, is synthetic. Nevertheless, this experiment has been regarded as “a defining moment in the history of biology and biotechnology”Footnote 12 because DNA controls the hereditary information, and this raises the possibility of controlling and understanding life by using synthetic DNA (Bedau et al. 2010).

The synthesis of genomes is possible via “synthetic genomics” (Montague et al. 2012), an established field that emerged from the technological synergies between synthetic organic chemistry and engineering for high-throughput DNA synthesis (Carlson 2009). Synthetic genomics, in turn, has enabled the emergence not only of other GDOs, including the baker’s yeast (Annaluru et al. 2014) and the bacterium Vibrio cholerae (Messerschmidt et al. 2015), but also of tools in basic research for efficiently assembling synthetic DNA (Gibson 2011). Last but not least, synthetic genomics promises to revolutionize the medical sector by reducing the time needed for the production of synthetic flu vaccines in case of pandemics from months to days (Okie 2011). Another health care application may be the development of synthetic bacteriophages (bacteria-killing viruses), given the recent emergence of antibiotic resistance, a huge global public health concern. Phage therapy is an old treatment dating back more than a century, but with the discovery of antibiotics during the first half of the 20th century its application remained limited in the Western world (Reardon 2014). Thus, synthetic genomics has potential for developing innovative phage therapies, but challenges such as limited host range and the side effects of bacterial lysis remain, among other non-technical issues (Citorik et al. 2014).

2.4 Genomically Edited Organisms

Genomically Edited Organisms (GEOs) are herein defined as those GMOs whose genomes have been modified with advanced molecular engineering tools. Genome editing can be performed on a small or a large scale: for example, mutating a single-nucleotide polymorphism to correct a disease genotype (e.g. sickle-cell anemia) in human cells (Charpentier and Doudna 2013; Doudna and Charpentier 2014; Gaj et al. 2013), or modifying genome regions in multiplex for the combinatorial optimization of metabolic pathways in microbes (Gallagher et al. 2014; Wang et al. 2009; Woodruff and Gill 2011). GEOs are also GMOs, but the reason behind this differentiation is to indicate the methodological differences: GMOs are usually modified using natural or synthetic DNA sequences encoded in plasmids, whereas modifying the chromosomal DNA (with or without plasmids) in GEOs usually involves synthetic DNA that is bought online from biotech companies.Footnote 13

Chromosomal modifications typically include DNA deletions (knock-out), additions (knock-in), or replacements, which are crucial in fundamental research for understanding the function of a given gene, protein and/or genetic element in a physiological context. The foundations of genome editing can be traced back to the discovery of the cellular systems in yeast and mammalian cells that repair double-strand DNA breaks (DSBs), which would otherwise lead to cell death or oncogenic mutations (Doudna and Charpentier 2014). The repair of DSBs is possible through the activation of homologous recombination (HR), a “copy and paste” mechanism that requires an undamaged copy of the homologous DNA segment as a template for copying the DNA sequence across the break (Porteus and Carroll 2005).

In the past, UV radiation, chemicals and restriction enzymes were used to induce DSBs, but these were random and could not be directed to predetermined sites (Jasin 1996). Nevertheless, the pioneering work of Mario Capecchi, Martin Evans and Oliver Smithies led to a basic understanding of the HR mechanism for repairing DSBs. In 2007, they shared the Nobel Prize in Physiology or Medicine “for their discoveries of principles for introducing specific gene modifications in mice by the use of embryonic stem cells”.Footnote 14 Since the 1990s, new genetic tools have been developed for modifying genomes more precisely, including rare-cutting homing endonucleases (Jasin 1996), Zinc-Finger Nucleases (ZFNs) (Porteus and Carroll 2005) and Transcription Activator-Like Effector Nucleases (TALENs) (Sun and Zhao 2013). By specifically inducing DSBs, these tools enable high rates of HR that can be exploited in basic biology and to correct various disease-causing mutations associated with haemophilia, sickle-cell disease, and other deficiencies (Gaj et al. 2013).

Although ZFNs and TALENs provide access to most of the recent health care applications (Gaj et al. 2013), both depend on custom-made proteins for each DNA target, which limits their use owing to the high costs when multiple genes are mutated simultaneously, as is necessary in more complex diseases involving multiple genes (Cox et al. 2015). Nonetheless, a new tool dubbed CRISPR-Cas9 has recently emerged for editing genes in multiplex with excellent HR efficiencies comparable to those of ZFNs and TALENs, but at lower cost thanks to its programmability at the RNA level (Mali et al. 2013). Composed of clustered regularly interspaced short palindromic repeats (CRISPRs) of DNA and CRISPR-associated (Cas) genes along the genome, CRISPR-Cas is an immune system that evolved in bacteria and archaea to combat hostile viruses and foreign plasmids by inducing site-specific DSBs (Doudna and Charpentier 2014). The type-II CRISPR-Cas9 system has been used for deleting, adding, activating and suppressing target genes with great efficiency in many organisms (Fig. 6) (Charpentier and Doudna 2013; Doudna and Charpentier 2014). The technology has allowed the correction of genetic mutations in diseases such as cataracts and cystic fibrosis, as well as the development of cancer models in animal tissues (Doudna and Charpentier 2014) and, more recently, the generation of human cells resistant to infection by HIV, the Human Immunodeficiency Virus (Liao et al. 2015).

Fig. 6

The CRISPR-Cas9 technology has been used to modify the genomes of bacteria, yeast, fungi, nematodes, salamanders, frogs, fruit flies, zebrafish, mice, rats, plants, crops (rice, wheat, sorghum, tobacco), pigs, and animal and human cell lines as well as embryonic stem cells. The Streptococcus pyogenes Cas9 nuclease (dark blue) in complex with a single-guide RNA (green) and its target DNA (red) was rendered using the 3D crystal structure PDB (Protein Data Bank) file 4OO8 (Nishimasu et al. 2014) and the PyMOL Molecular Graphics System, version 1.5.0.4, Schrödinger, LLC (Color figure online)
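To give a flavour of what “programmability at the RNA level” means in practice, the following sketch scans a DNA sequence for candidate Cas9 target sites, assuming the commonly used S. pyogenes rule of a 20-nucleotide protospacer immediately followed by an NGG PAM. It is a deliberately simplified illustration with a hypothetical example sequence; it ignores the reverse strand, off-target scoring and other criteria that real guide-design tools handle.

```python
# Simplified single-strand search for candidate Cas9 target sites, assuming the
# commonly used S. pyogenes rule: a 20-nt protospacer immediately followed by
# an NGG PAM. Real guide-design tools also check the reverse strand, off-target
# sites, GC content, etc. The example sequence is hypothetical.

def find_guides(dna, protospacer_len=20):
    """Return (position, protospacer, PAM) tuples for NGG PAM sites."""
    dna = dna.upper()
    guides = []
    for i in range(protospacer_len, len(dna) - 2):
        pam = dna[i:i + 3]
        if pam[1:] == "GG":  # N-G-G protospacer-adjacent motif
            guides.append((i - protospacer_len, dna[i - protospacer_len:i], pam))
    return guides


example = "ATGGCAAGCACCGAGCGGTTTACCTGGCATCGATCGATCGAGGCTAGCTAGG"
for position, protospacer, pam in find_guides(example):
    print(position, protospacer, pam)
```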

In just two years (from the beginning of 2013 until the end of 2014), examples of the application of the CRISPR-Cas9 technology in both basic and applied research have been reported and reviewed in more than 1000 publications (Doudna and Charpentier 2014). The revolutionary Cas9 technology has already been compared to restriction enzymes and PCR, essential tools in modern molecular biology research, because it promises to accelerate the editing of genomes across the medical, agricultural, environmental, pharmaceutical, chemical, and biotechnological sectors.

2.5 Genomically Recoded Organisms

Genomically Recoded Organisms (GROs), a term introduced by the Church lab at Harvard (see below), are microbes whose genetic codes have been recoded using advanced genomic tools like those used for GEOs. The genetic code describes a set of rules relating the order of three DNA bases (a codon) to a corresponding amino acid in protein synthesis across all life forms, thus establishing a universal link between information storage and execution (Fig. 7). For example, a small peptide composed of the amino acids MASTER can be encoded at the RNA level by the codons AUG/GCC/AGC/ACC/GAA/AGA, but this order can be changed to synonymous codons (AUG/GCA/UCU/ACG/GAG/CGG) with the same amino acid meaning: MASTER. In addition, non-synonymous changes can also be introduced when another codon (e.g., the amber stop codon UAG) is used to incorporate a non-standard amino acid X, as follows: MAXTER (AUG/GCA/UAG/ACG/GAG/CGG).

Fig. 7

The universal genetic code in RNA format (bases: AUGC) [RNA uses adenine (A), guanine (G), cytosine (C) and uracil (U) as bases, whereas DNA uses A, G, C and thymine (T)]. The 20 canonical amino acids are encoded by 61 sense codons. Translation starts (▶) at the AUG codon and terminates (■) at the stop codons UAA (ochre), UGA (opal), or UAG (amber). Amino acids are arranged according to physicochemical properties: polar (green; T, N, S, G, Q, Y, C), nonpolar (red; M, I, A, V, P, L, F, W), basic (blue; K, R, H) and acidic (pink; D, E): M methionine (AUG); I isoleucine (AUU/C/A); T threonine (ACG/A/C/U); K lysine (AAA/G); N asparagine (AAC/U); S serine (AGU/C and UCG/A/C/U); R arginine (AGA/G and CGG/A/C/U); G glycine (GGU/C/A/G); D aspartate (GAU/C); E glutamate (GAA/G); A alanine (GCU/C/A/G); V valine (GUU/C/A/G); H histidine (CAU/C); Q glutamine (CAA/G); P proline (CCG/A/C/U); L leucine (CUG/A/C/U and UUA/G); F phenylalanine (UUU/C); Y tyrosine (UAU/C); C cysteine (UGU/C); and W tryptophan (UGG). Numbers indicate posttranslational modifications; for more details see Acevedo-Rocha (2010) (Color figure online)
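The MASTER/MAXTER example above can be made concrete with a few lines of code. The sketch below uses a deliberately tiny codon table covering only the codons of the example (the complete code of Fig. 7 has 64 entries) and shows both synonymous recoding and the reassignment of the amber stop codon UAG to a hypothetical non-standard amino acid “X”; it is an illustration of the mapping, not a model of translation.

```python
# Minimal codon table covering only the codons of the MASTER example; the
# complete genetic code of Fig. 7 has 64 codons. '*' marks a stop codon.
CODON_TABLE = {
    "AUG": "M",
    "GCC": "A", "GCA": "A",
    "AGC": "S", "UCU": "S",
    "ACC": "T", "ACG": "T",
    "GAA": "E", "GAG": "E",
    "AGA": "R", "CGG": "R",
    "UAA": "*", "UGA": "*", "UAG": "*",  # ochre, opal, amber stop codons
}


def translate(rna, table=CODON_TABLE):
    """Translate an RNA string codon by codon using the given table."""
    return "".join(table[rna[i:i + 3]] for i in range(0, len(rna), 3))


print(translate("AUGGCCAGCACCGAAAGA"))  # MASTER
print(translate("AUGGCAUCUACGGAGCGG"))  # MASTER (synonymous codons)

# Reassigning the amber stop codon UAG to a non-standard amino acid "X",
# as in an expanded genetic code, turns MASTER into MAXTER.
expanded = dict(CODON_TABLE, UAG="X")
print(translate("AUGGCAUAGACGGAGCGG", expanded))  # MAXTER
```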

In 1968, Holley, Khorana and Nirenberg shared the Nobel Prize in Physiology or Medicine “for their interpretation of the genetic code and its function in protein synthesis”.Footnote 15 In the same year, Crick called the genetic code a “frozen accident”, implying that “no new amino acid could be introduced without disrupting too many proteins” (Crick 1968, p. 375). Since the 1990s, however, the incorporation of XAAs into proteins has been thawing the “universal” genetic code. There are two basic ways to engineer the genetic code (Bacher et al. 2004). In the first, the components involved in the synthesis of proteins are engineered by directed evolution to recognize specific XAAs (Liu and Schultz 2010); in these cases the genetic code changes are usually non-heritable. In the second approach, organisms (so far bacteria) are pushed to incorporate XAAs into their proteomes via experimental evolution, resulting in progeny with heritable changes (see CMOs below). Both approaches, nonetheless, could be combined to render offspring dependent on XAAs for survival.

GROs are relatively new organisms that were introduced by George Church and colleagues at Harvard by exploiting the mechanism of HR in microbes. His team built a device, called “Multiplex Automated Genome Engineering” (MAGE), that automates the process of gene delivery, targeting and replacement as well as microbial recovery and growth (Wang et al. 2009). MAGE allows performing

[…] up to 50 different genome alterations at nearly the same time, producing combinatorial genomic diversity. In one instance, Church and Wyss researchers were able to make the bacteria Escherichia coli (E. coli) synthesize five times the normal quantity of lycopene, an antioxidant, in a matter of days and just $1000 in reagents.Footnote 16

Although this machine could accelerate the fields of metabolic and genome engineering for the microbe-based production of biofuels, pharmaceuticals and other chemicals, it is very expensive, and it is still very challenging to know with accuracy which gene(s) and/or protein(s) to target for mutagenesis.

Since 2011, nonetheless, Church and colleagues have reported five “tour-de-force” studies using MAGE with another application in mind. First, they exchanged 314 out of 321 UAG stop codons for synonymous UAA stop codons in several E. coli strains, albeit with some technical hurdles (Isaacs et al. 2011). Second, after overcoming the technical difficulties, they were able to recombine all strains and completely remove the 321 UAG codons, as well as the protein that terminates protein synthesis at this signal, yielding the first GRO: E. coli C321.ΔA. This bacterium exhibited improved efficiencies for incorporating XAAs into various proteins and increased resistance to infection by the bacteriophage T7 compared with the parental E. coli strain MG1655 (Lajoie et al. 2013b). Third, they probed the limits of MAGE by eliminating the rarest codons present in 42 essential genes involved in E. coli translation as a means to “emancipate” codons for incorporating further XAAs (Lajoie et al. 2013a). In this work, it was realized that all non-essential genes could be modified with synonymous codons without compromising cellular fitness. Finally, building on the previous work and the E. coli C321.ΔA strain, Church and colleagues on the one hand, and Isaacs and colleagues on the other, engineered essential genes whose function rationally depends on XAAs, thereby making the cells metabolically dependent on an external supply of XAAs (Rovner et al. 2015; Mandell et al. 2015). Both studies showed that E. coli can be contained in physical isolation, with no detectable growth in liquid media for up to 14 or 20 days, as long as the XAA was not added to the culture. The authors argue that their strategy “is a significant improvement over existing biocontainment approaches” (Rovner et al. 2015) and that it “provides a foundation for safer GMOs that are isolated from natural ecosystems by a reliance on synthetic metabolites” (Mandell et al. 2015). Beyond biocontainment, other applications of GROs would be the production of proteins endowed with XAAs for basic research and perhaps biocatalysis (Budisa 2014).
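The codon-replacement idea behind C321.ΔA can be illustrated with a short sketch in which the amber stop codon (UAG) of each coding sequence is swapped for the synonymous ochre codon (UAA), freeing UAG for reassignment to an XAA. The sequences and function names below are hypothetical toy examples, and the sketch says nothing about how such edits are physically introduced (e.g., by MAGE) or verified.

```python
# Illustrative codon-level recoding: swap the amber stop codon (UAG) at the end
# of each coding sequence for the synonymous ochre codon (UAA), so that UAG is
# freed for reassignment to an unnatural amino acid. Sequences are hypothetical
# toy examples; nothing here models how the edits are made in vivo.

def recode_amber(cds):
    """Replace a terminal UAG stop codon with UAA; other codons are untouched."""
    codons = [cds[i:i + 3] for i in range(0, len(cds), 3)]
    if codons and codons[-1] == "UAG":
        codons[-1] = "UAA"
    return "".join(codons)


genes = {
    "toyA": "AUGGCCAGCACCGAAAGAUAG",  # ends in amber (UAG): will be recoded
    "toyB": "AUGGCAUCUACGGAGCGGUAA",  # already ends in ochre (UAA): unchanged
}
recoded = {name: recode_amber(seq) for name, seq in genes.items()}
changed = sum(genes[name] != recoded[name] for name in genes)
print(f"{changed} of {len(genes)} coding sequences recoded")  # 1 of 2
```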

2.6 Chemically Modified Organisms

Chemically Modified Organisms (CMOs) are living systems endowed with unnatural, noncanonical or xeno building blocks. The CMO term was introduced by Marlière et al. (2011), whose work was highlighted by Acevedo-Rocha and Budisa (2011) in reference to “E. chlori” (see below). There are two basic approaches for creating CMOs. One strategy tackles proteins, the main executors of information, whereas the other deals with the information carriers, the nucleic acids (NAs). As indicated in the previous section, the cultivation of microbes in the presence of unnatural building blocks, together with some genetic tricks, allows their incorporation in lieu of canonical amino acids or DNA bases. In 1983, Bacillus subtilis strain QB928 was shown to grow on 4-fluorotryptophan, a synthetic analogue of tryptophan (Trp), one of the 20 canonical amino acids (Wong 1983). Thirty years later, the same microbe no longer requires Trp for propagation, but only 4-fluorotryptophan, thanks to an adaptation that was accompanied by a relatively small number of genomic changes (Yu et al. 2014). This is the first CMO whose proteome has been chemically modified by experimental evolution. E. coli has also been shown to grow on similar fluorinated Trp analogues (Bacher et al. 2004), and it also has the potential to accommodate other XAAs in its proteome (Bohlke and Budisa 2014).

CMOs with unnatural building blocks as genetic polymers have also been reported. DNA is composed of the bases adenine (A), guanine (G), cytosine (C) and thymine (T), each linked to a sugar (deoxyribose), which in turn is connected to phosphate groups. Scientists have synthesized molecules structurally related to the components of A, T, C and G to test whether these “alien” molecules could also be used as carriers of genetic information, including iso-C, iso-G, K, X, Q, F, P, Z, NaM and 5SICS (Fig. 8) (Benner and Sismour 2005; Kwok 2012). Likewise, the sugar deoxyribose has been replaced by threose (TNA), arabinose (ANA), glycerol (GNA), hexitol (HNA), cyclohexene (CeNA), fluoro arabinose (FANA), etc. (Pinheiro and Holliger 2012).

Fig. 8

Alternative genetic systems. At the top, the canonical DNA base pairs are shown: thymine (T) and adenine (A) forming two hydrogen bonds, and cytosine (C) and guanine (G) forming three hydrogen bonds. In the middle and bottom layers, unnatural DNA bases obtained by classical synthetic chemistry are depicted, including Z and P, V and J, K and X, as well as isoC and isoG, all interacting via three hydrogen bonds

Although most xeno nucleic acids (XNAs) are incompatible with living systems, Holliger and colleagues recently described the directed-evolution-based engineering of polymerases (enzymes that copy nucleic acids) capable of synthesizing XNA from a DNA template and of copying XNA back into DNA (Pinheiro et al. 2012). This work is very important because it showed that genetic information can be stored in diverse unnatural polymers capable of heredity and evolution. In the same vein, but using experimental evolution, Marlière and colleagues practically replaced all T bases with 5-chlorouracil in E. coli, with concomitant dependence on this alien substance for survival (Marlière et al. 2011). These modifications resulted in morphological alterations, yet the E. coli cells were able to survive, giving rise to “E. chlori”, the first CMO bearing a chlorinated genome and potentially endowed with a “genetic firewall” (Acevedo-Rocha and Budisa 2011).

More recently, Romesberg and colleagues reported the addition of the unnatural base pair NaM–5SICS to E. coli by introducing an algal membrane transporter that allowed the import of the unnatural nucleotides into the cells, followed by their replication in a plasmid (Malyshev et al. 2014). Although this CMO has been regarded as the first living being bearing a six-letter genetic alphabet, with many potential applications in basic and applied research (Thyer and Ellefson 2014), it should be mentioned that only one unnatural base pair per plasmid was present, corresponding to less than 0.0001 % of the total genetic content of that bacterium, which in turn is about 1 % of the total cell dry weight. Thus, the in vivo replication and propagation of a truly six-letter genetic alphabet controlling some biological process remains to be shown.

Several applications of orthogonal engineering have been envisaged. Basic research on “artificial genetic information systems” has already resulted in more accurate diagnosis of viruses in patients: the inclusion of the bases isoC and isoG (Fig. 8) in “branched DNA assays”, which detect pathogenic NAs, significantly improves the signal-to-noise detection ratio. These assays are important because the viral load is critical for determining the amount and type of drug necessary in patients with HIV and hepatitis C, two major health problems with 35 and 180 million cases worldwide, respectively. Using these assays, about 400,000 patients per year can be assigned to more appropriate treatment and personalized medical care (Benner and Sismour 2005). Similar types of research on other XNAs are being pursued for diagnosing cystic fibrosis, severe acute respiratory syndrome (SARS) triggered by a coronavirus, and other diseases.

Synthetic amino acids have also found applications in human health: clinical trials using a human growth hormone containing a modified XAA recently demonstrated the required safety and efficacy as well as increased therapeutic potency and reduced injection frequency in adults (Cho et al. 2011). In scientific research, proteins endowed with XAAs have become invaluable tools for unraveling complex cellular processes within living cells (Davis and Chin 2012), while unnatural enzymes have potential applications in biocatalysis (Acevedo-Rocha et al. 2013b; Hoesl et al. 2011), that is, the use of enzymes and/or microbes to synthesize useful compounds like food additives, flavors, fragrances, fine chemicals, biofuels, bioplastics, biomaterials (e.g., silk) and many other molecules (Reetz 2013).

3 Biosafety and Biosecurity

3.1 Biosafety: Asilomar Meeting

In February 1975, scientists called for a worldwide moratorium on genetic engineering, given concerns about the potential accidental release of GMOs into the environment. During the moratorium, some institutions stopped all genetic engineering research, while others took the lead (Danchin 2010), deterring basic science and the development of innovative products in certain regions of the world (ter Meulen 2014). Nevertheless, the moratorium was critical because it allowed the planning of an international meeting at the Asilomar Conference Grounds in Pacific Grove, California, USA, where scientists agreed that genetic engineering should continue under stringent guidelines (Berg et al. 1975). This important meeting led to international guidelines on research involving recombinant DNA (Berg 2008), giving rise to physical containment requirements for biological agents at four “biosafety levels” (BSL-1 to BSL-4).

At the Asilomar meeting, scientists themselves took the initiative to come together. Given the media exposure of synthetic biology and concerns about harmful intentional or accidental misuses of synthetic microorganisms (Ferber 2004), several groups of bioethicists and members of non-governmental organizations have suggested holding another Asilomar-style meeting to discuss the responsibility of synthetic biology research to society. Likewise, debates on synthetic biology under the Convention on Biological Diversity have considered a potential moratorium on the release of synthetic organisms, cells and genomes into the environment (Oldham et al. 2012). Although these debates were triggered by the publication of the first GDO by Venter and colleagues in 2010, biosafety concerns can be justified by the earlier synthesis of pathogenic viruses, such as the deadly 1918 H1N1 “Spanish flu” virus (estimated to have killed up to 50 million people), which was “resurrected” from preserved tissue samples (Tumpey et al. 2005), or the poliovirus, which was synthesized even without a natural template (Cello et al. 2002). Even though it has been argued that GDVs offer an opportunity to better understand infectious diseases (Wimmer et al. 2009), these pathogenic viruses also pose potential biosafety risks to the health of laboratory workers and the public. Two incidents in 2014 at the Centers for Disease Control and Prevention (CDC) in Atlanta, USA (where the Spanish flu virus was synthesized) illustrate this: samples of a low-virulence flu virus were contaminated with the lethal H5N1 avian flu strain, and bacteria causing deadly anthrax were not properly inactivated before being transferred to a lower-biosafety-level laboratory (Butler 2014), potentially exposing staff to these dangerous pathogens (Owens 2014).

3.2 Biosecurity: Beyond Asilomar

Whereas biosafety deals with the inherent capability of organisms or viruses to cause disease, biosecurity is mainly concerned with their misuse as biological weapons for bioterrorism. Biological warfare is nothing new: the intentional release of pathogenic toxins, viruses, bacteria, fungi and insects against humans and animals has been practised by many countries in the past. For example, during World War I, a German secret agent travelled to the USA to infect horses with glanders, a severe infectious disease caused by the bacterium Burkholderia mallei that provokes respiratory ulcers, septicemia and death in horses, donkeys and mules. For this reason, the Geneva Protocol was introduced in 1925 to prohibit the use of chemical and biological weapons in international armed conflicts. Later on, the 1972 Biological Weapons Convention (BWC) and the 1993 Chemical Weapons Convention (CWC) were introduced to prohibit the development, production and stockpiling of biological and chemical weapons.

Concerns over the mishandling of harmful biological agents are nothing new to synthetic biology: the CDC in Atlanta, USA, besides harbouring the synthesized “Spanish flu” virus, also possesses dangerous human pathogens such as Francisella tularensis, Yersinia pestis and Bacillus anthracis, and has had to consider their potential misuse. Indeed, international regulations for gene synthesis were lacking for several years. It was only after 2007, when a journalist ordered synthetic DNA from the smallpox virus online from a biotech company and had it delivered to his home (Grushkin 2010), that the synthetic biology community proposed a set of policies related to nucleic acid synthesis (Bugl et al. 2007). A first set of proposed policies applies to firms supplying synthetic DNA: using a database of pathogenic genomes and toxin gene sequences, the firms should use dedicated software to screen orders for potentially harmful DNA, as sketched below. A second set of proposed rules aims to regulate the purchase of DNA synthesizers and of the reagents used in synthesis, by requiring the registration of the machines or licenses for purchasing specific chemicals needed for DNA synthesis.
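The screening step can be illustrated with a toy example. The sketch below uses hypothetical sequences and function names; real biosecurity screening relies on curated databases of regulated pathogen and toxin sequences and on far more sophisticated alignment tools, not exact substring matching:

```python
# Toy illustration of screening synthetic-DNA orders against a database of
# "sequences of concern". Real pipelines use curated databases and
# alignment-based searches; the entries below are purely illustrative.

HARMFUL_SEQUENCES = {
    "toxin_gene_fragment": "ATGGCGTTACCGGATAAACTG",
    "viral_marker_fragment": "TTGACCTACGGGAAGTCCAT",
}

def screen_order(order_seq: str, k: int = 20) -> list[str]:
    """Return names of database entries sharing any k-mer with the order."""
    order_seq = order_seq.upper()
    order_kmers = {order_seq[i:i + k] for i in range(len(order_seq) - k + 1)}
    hits = []
    for name, seq in HARMFUL_SEQUENCES.items():
        for i in range(len(seq) - k + 1):
            if seq[i:i + k] in order_kmers:
                hits.append(name)
                break
    return hits

# Example: an order containing one flagged fragment is reported for review.
order = "CCCC" + HARMFUL_SEQUENCES["toxin_gene_fragment"] + "GGGG"
print(screen_order(order) or "No hits: order can proceed to synthesis")
```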

3.3 New Synthetic Life Forms: Beyond Existing Regulations

Beyond contributing to the development of rules and regulations, the research community has also played a role in shaping biosafety and biosecurity policies, usually on the conjecture that what is natural is potentially much more dangerous and uncontrollable than what is artificial (Marlière 2009). Because organisms close to their progenitors are pre-adapted to their native environments, it is logical to assume that GMOs, GEMs and GEOs would pose a greater threat than GDOs, GROs and CMOs if all of these were accidentally released into the environment. However, since the latter organisms use, or potentially can use, synthetic building blocks that can only be synthesized in the lab, they would quickly be outcompeted by other organisms provided the chemicals are disposed of correctly. Therefore, orthogonal and bottom-up engineering should also follow the guidelines that chemistry labs have established for chemical disposal, together with those for biosafety labs working with GMOs, which are the norm for top-down and parallel engineering approaches. Given that synthetic biology is an extension of genetic engineering, current legislation on GMOs also applies to GEMs, GDOs, GEOs, GROs and those CMOs that have previously been modified to incorporate unnatural DNA bases and/or amino acids, at both the EUFootnote 17 and internationalFootnote 18 level. But there have been doubts about whether current risk assessment procedures will be overburdened by the increasing pace of genetic modification, as is the case for GDOs, GEOs and GROs. In consequence, the latest EU recommendations “call for research to improve the ability to predict the behavior of complex engineered organisms”, and for the “development of additional approaches, including genetic firewalls based on noncanonical genetic material” (Breitling et al. 2015).

3.4 Genetic Firewall: Data Are Missing

It has been advocated that unnatural DNA would prevent information exchange between natural and synthetic organisms by acting as a genetic firewall (Schmidt 2010). Indeed, Church and colleagues showed that the first E. coli GRO was capable of resisting, to some extent, infection by the T7 bacteriophage because their genetic codes are not compatible (Lajoie et al. 2013b). However, infection of the GRO by the virus was only slightly attenuated compared with the parental strain, implying that other factors account for the lack of complete immunity.

So how many changes in the genetic code are needed to render a GRO completely immune to viruses without affecting its fitness? Given the high mutability of viruses, could they hijack the new genetic code if given sufficient time in a non-controlled environment? If GROs and CMOs are not able to survive outside the lab, what would be the response of a natural organism to their unnatural building blocks? On the other hand, if a GRO or CMO dependent on XAAs managed to survive in the environment by any means, would it pose threats to other natural organisms, and if so, what kind of threats? Would GROs be more robust than CMOs in the case of an accidental release into the environment? Are there other means of cellular communication beyond horizontal gene exchange that GROs or CMOs could use to persist or even proliferate? Clearly, many questions remain to be answered before a “genetic firewall” can be considered an efficient means of separating the natural from the synthetic world, especially since XAA-based biocontainment is believed to be a way of allaying public fears about applying GMOs in agriculture, medicine and environmental clean-up (Dolgin 2015).

3.5 Genetically Modified Humans: Napa Meeting

In March 2015, two groups of scientists called for a moratorium (Lanphier et al. 2015) and for a framework for open discourse (Baltimore et al. 2015) on any experiments involving genome editing in human embryos or in cells that could give rise to sperm or eggs. The articles were responses to rumours from referees who had reviewed work by Chinese scientists describing the use of the CRISPR-Cas9 system in human embryos and who feared it “could trigger a public backlash that would block legitimate uses of the technology” (Vogel 2015). The main problem with the technology is its secondary effects (off-target and on-target mutational events with unintended consequences), which are not yet completely understood. For example: “Monkeys have been born from CRISPR-edited embryos, but at least half of the 10 pregnancies in the monkey experiments ended in miscarriage. In the monkeys that were born, not all cells carried the desired changes, so attempts to eliminate a disease gene might not work” (Vogel 2015). Despite this, the referees’ rumours became reality a few months later: work on genome editing of human embryos was published, with the authors arguing that it was necessary to illustrate that the CRISPR-Cas9 technology is still far from clinical application (Liang et al. 2015).Footnote 19

During a meeting in Napa in early 2015, Berg, Church and other scientists called urgently for a framework to discuss openly the safe and ethical use of the CRISPR-Cas9 technology to manipulate the human genome (Baltimore et al. 2015). The proposals for regulating “germline engineering” broadly include:

  1. 1.

    The discouragement of any attempts at genome modification for clinical application in humans, even in those countries where it is allowed (some countries do not allow this kind of research or regulate it tightly).

  2. 2.

    The encouragement of transparent research to evaluate the efficacy and specificity of genome editing in human and non-human models relevant to gene therapy, as well as the implementation of standardized methods to determine the frequency of off-target effects and the physiology of cells and tissues upon genome editing.

  3. 3.

    The creation of forums for exchange among scientists, bioethicists, governments, interest groups and the general public to shape policy while discussing not only the risks and benefits, but also the ethical, legal and social implications (ELSI) of curing human genetic disease by genome editing (Baltimore et al. 2015).

The fundamental issue with human germline engineering is that, beyond treating genetic disorders such as Huntington's disease to eliminate human suffering, designer or “genetically modified” babies could likewise be engineered, facilitating the arrival of a new eugenics era (Pollack 2015). Although human germline engineering is banned in several countries (not including the United States), a few labs and a company (primarily based in the United States) are pursuing this line of research (Regalado 2015). One plan is to edit the DNA of a man's sperm or a woman's egg and use the cells in an in vitro fertilization (IVF) clinic to produce an embryo, which is then implanted in the woman's uterus to establish a pregnancy.Footnote 20 Another strategy that promises to be more efficient aims to edit stem cells, which can divide rapidly in the lab, and then turn them into sperm or eggs. Although CRISPR-Cas9 technology is still too immature to offer babies “à la carte”, 15 % of the adults in a recent survey indicated that it would be appropriate to genetically modify a baby to be more intelligent (Regalado 2015). Besides the engineering of healthier and more intelligent babies, some transhumanists think that the human genome is not perfect and could be engineered not only to protect against Alzheimer's disease, but also to create “super-enhanced” individuals to solve complex issues like “climate change” (Regalado 2015).

3.6 Gene-Drive Engineering: Will There Be a Meeting?

The willingness of many scientists to engage in public discussion with other scholars shows an awareness of the potential negative effects of an emerging technology that has not yet been shown to be mature enough for clinical applications, especially when dealing with a topic as delicate as human embryonic stem cells. However, other, less concerned scientists not working with human stem cells have already devised a plan to create an “auto-catalytic” genetic system based on CRISPR-Cas9 that spreads mutated genes across populations of GEOs with high efficiency. For example, mosquitoes could be engineered so that they can no longer transmit the pathogens that cause malaria and dengue fever (Bohannon 2015). Beyond the potential benefits, the problem with this “mutagenic chain reaction” technology is that unintended off-target mutations in essential (or non-essential) genes could be triggered, spreading irreparable genetic defects (or traits) across natural populations and potentially driving populations with limited genetic diversity to extinction.
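The “auto-catalytic” spread described above can be illustrated with a toy calculation. The sketch below is a minimal deterministic model under simplifying assumptions (random mating, no fitness cost, no resistance alleles), not a description of any actual gene-drive construct; the conversion efficiency is an illustrative value:

```python
# Toy model of how a homing gene drive spreads faster than a Mendelian allele.
# In heterozygotes, the drive converts the wild-type copy with probability
# `conversion`, so carriers transmit it to more than half of their offspring.

def next_frequency(p: float, conversion: float) -> float:
    """Drive-allele frequency after one generation of random mating."""
    # Heterozygotes transmit the drive with probability (1 + conversion) / 2,
    # which yields the recursion p' = p * (1 + conversion * (1 - p)).
    return p * (1 + conversion * (1 - p))

p_drive, p_mendel = 0.01, 0.01          # released at 1 % of the population
for generation in range(1, 16):
    p_drive = next_frequency(p_drive, conversion=0.95)
    p_mendel = next_frequency(p_mendel, conversion=0.0)   # ordinary inheritance
    print(f"gen {generation:2d}: drive {p_drive:.3f} vs Mendelian {p_mendel:.3f}")
```

Under these assumptions the drive allele approaches fixation within roughly a dozen generations, whereas the ordinary Mendelian allele remains at its release frequency, which is why an accidental or malicious release is so difficult to reverse.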

Lunshof (2015) has recently warned of the potentially devastating effects of this technology in the wild. Church commented that this technology “is a step too far” (Bohannon 2015), yet he filed a patent for a more secure gene-drive technology that, he argues, “would offer substantial benefits to humanity and the environment” (Esvelt et al. 2014) by eradicating disease vectors, insect pests and invasive species, while also calling for “thoughtful, inclusive, and well-informed public discussions to explore the responsible use of this currently theoretical technology” (Esvelt et al. 2014). In response to an ethical analysis of its regulation (Oye et al. 2014), it has been argued that the dual-use potential of this technology raises strong concerns because gene drives carrying lethal toxins could be designed to eliminate particular human populations and attack their crops (Gurwitz 2014). In fact, Gurwitz (2014, p. 1010) concluded:

“just as the exact technical instructions for making nuclear weapons remain classified 70 years after the Manhattan Project—as they rightfully should—the gene drive methodological details do not belong in the scientific literature.”

However, Oye and Esvelt (2014, p. 1011) disagreed with that response, arguing that

“classifying information required to build gene drives cannot target potential misuses without also impeding development of defenses, as well as environmental, health, agricultural, and safety applications of CRISPR technology”.

These debates show the need for an international meeting involving all stakeholders to regulate the development and deployment of GEOs endowed with gene drives, which cannot be physically confined to a single country.

4 Assessing Synthetic Biology Beyond the Bench

In various surveys, synthetic biology has been perceived by society as an extension of genetic engineering, particularly with regard to the production of GMOs. Synthetic biology, however, aims to avoid the kind of criticism that genetically modified plants and animals have triggered in various societies. Beyond the technology itself, seven topics are discussed in what follows in which the challenges, dilemmas and paradoxes surrounding synthetic biology become apparent.

4.1 Global Social Justice

Keasling's artemisinin breakthrough is perhaps the most widely used example to show that synthetic biology (in this case, the metabolic engineering “tribe”) can provide solutions to global health issues. However, social justice challenges have emerged because economies and employment in the South can be destabilized by synthetic biology carried out in the North (Engelhard 2009). In other words, although Keasling's breakthrough has been welcomed by almost everyone in the synthetic biology community and by other advocates fighting malaria, there is rising concern that the farmers who traditionally harvest A. annua could lose their jobs, which would affect the families of more than 100,000 farmers worldwide.Footnote 21 Furthermore, the introduction of semi-synthetic artemisinin could further destabilise the already variable prices of botanical artemisinin, which are subject to market fluctuations (Peplow 2013). This example illustrates the dilemmas and paradoxes that surround the development of synthetic biology in a globalized world.Footnote 22

4.2 Synthetic Biology Democratization

It has been argued that DIY biology will help in the worldwide democratization of synthetic biology in the same way that IT was democratized in the garages of computer hobbyists during the last century, prior to the emergence of Silicon Valley as an innovation hub (Wolinsky 2009). DIY biologists or “biohackers” have equipped their garages, closets and kitchens with inexpensive laboratory equipment (Ledford 2010). Although DIY biologists have so far performed simple experiments, such as inserting a fluorescent protein gene into bacteria to produce glow-in-the-dark yogurt, applications in health, energy and environmental monitoring are expected to emerge (Seyfried et al. 2014). In some instances, however, there has been rising concern that home-brew drugs could also be produced by DIY biologists using synthetic microbes. This fear was recently strengthened when metabolic engineers made yeast strains capable of synthesizing one of the precursors of the opiate morphine, itself a precursor of heroin, a drug in high demand on the illegal market (Ehrenberg 2015).Footnote 23 Although this research is intended to enable the long-term, centralized and legal production of opiates for pain relief, engineered yeasts for opiate biosynthesis could be diverted by criminal networks in the USA and Europe, where drug demand is highest, because yeast is extremely easy to send (a few dried cells in the post would suffice to start a culture), to grow (only water and basic nutrients are required) and to process for drug extraction (basic lab equipment such as centrifuges and simple chromatography columns and resins would suffice). Thus, the democratization of molecular biology might be more difficult to regulate than initially thought.Footnote 24

4.3 Environmental Concern and Policy Regulation

Medicines, pharmaceuticals, fertilizers, pesticides, additives, cosmetics, plastics, cleaners, clothing, pigments, detergents, electronic parts and many other products essential for human needs are conventionally produced by synthetic chemistry. In fact, chemical companies produce billions of tons of chemicals every year from about 90,000 different substances. One goal of synthetic biology is to produce enzymes and microbes that could replace the conventional production of many of those compounds, whose precursors are usually obtained from fossil fuels and are thus unsustainable (Nielsen and Moon 2013). A recent sampling of European rivers, however, illustrates the environmental damage that has already been caused by toxic cocktails of hormones, pesticides and hazardous chemicals (Malaj et al. 2014), not to mention population declines of bees, birds and insects that are essential to the global food supply chain (Chagnon et al. 2015). In fact, irreversible damage caused by many toxic, non-degradable and persistent chemicals to the immune, reproductive, endocrine and nervous systems of many animals (and likely humans) has been documented.Footnote 25 To avoid similar environmental catastrophes with synthetic biology, proper legislation and regulation of the industrial production and waste of engineered microbes have to be implemented from the outset. A challenge for synthetic biology will be to regulate and assess the long-term effects of GMOs in the environment. To enhance biocontainment, various genetic safeguard mechanisms could be implemented in each engineered microbe, together with a “risk analysis and biosafety data” sheet (Moe-Behrens et al. 2013).
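As a rough illustration of what such a per-strain “risk analysis and biosafety data” sheet might contain, the hypothetical structure below sketches the kind of fields one could record; all field names and values are illustrative and are not taken from Moe-Behrens et al. (2013):

```python
# Hypothetical structure for a per-strain "risk analysis and biosafety data"
# sheet; every field name and value is illustrative only.
from dataclasses import dataclass

@dataclass
class BiosafetyDataSheet:
    strain_id: str
    host_organism: str
    biosafety_level: int                  # BSL-1 to BSL-4
    genetic_modifications: list[str]
    containment_safeguards: list[str]     # e.g. auxotrophy, kill switch
    escape_frequency: float               # measured escapees per cell plated
    disposal_procedure: str

sheet = BiosafetyDataSheet(
    strain_id="DEMO-001",
    host_organism="Escherichia coli K-12 (lab strain)",
    biosafety_level=1,
    genetic_modifications=["reporter plasmid with fluorescent protein"],
    containment_safeguards=["diaminopimelate auxotrophy", "inducible kill switch"],
    escape_frequency=1e-8,
    disposal_procedure="autoclave all cultures before disposal",
)
print(sheet.strain_id, "-> BSL", sheet.biosafety_level)
```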

4.4 The GMO Debate

The GMO debate has existed since the Asilomar conference, when biotechnology first began to be regulated, but in the last two decades it has shifted towards the risks posed by genetically modified food and crops, owing to the food crisis in Europe during the late 1990s. Nowadays, GMOs are commonly used in industrial (white) and medical (red) biotechnology for the production of chemicals and pharmaceuticals, respectively (under physical containment), but the genetic modification of plants and animals in agricultural (green) biotechnology has met more resistance in society owing to the potential risks of GMOs for humans, animals and the environment. There are different kinds of risks, real versus perceived, which can vary significantly depending on the stakeholder (Torgersen 2004). Regardless of the risks and semantics (Holme et al. 2013; Hunter 2014; Nagamangala Kanchiswamy et al. 2015), it is clear that the GMO debate is mostly associated with multicellular organisms such as plants (Boyle et al. 2012) and animals (Markson and Elowitz 2014). Given that synthetic biology is increasingly targeting these multicellular organisms for genomic modification, public acceptance could be more difficult to gain. The main reason is that plants and animals affect human life directly, in contrast to microbes, which affect human life more indirectly: “As long as synthetic biology creates only new microbial life and does not directly affect human life, it will in all likelihood be considered acceptable” (van den Belt 2009, p. 257). Beyond the potential of synthetic biology for solving global issues, society may only accept GMOs, GEMs, GEOs, GDOs, GROs and CMOs if their products are labelled transparently, so that consumers retain freedom of choice.

4.5 Media Hype, Metaphors and Promises

One of the most common mistakes scientists make is to claim that they have created something and that this creation will be a panacea for humankind. First of all, scientists do not “create”, because this word can be understood in different contexts: creation or “creatio ex nihilo” means to bring someone or something into existence out of nothing, and in religious terms it is usually reserved for a divine force, a sense that scholars in the social sciences and humanities might use more often. In reality, scientists invent or design something that has not existed before by using something that already exists; they are closer to “manipulatio” (manipulating) and “creatio ex existendo” (creating something out of existent parts) than to “creatio ex nihilo” (Boldt and Muller 2008). This is a subtle yet important difference: scientists claiming the creation of synthetic life have been accused of “Playing God in Frankenstein’s Footsteps” (van den Belt 2009), with unfortunate consequences for the reputation of the field and its social acceptance (Schummer 2011). Perhaps natural scientists do this on purpose to challenge the ethical (Link 2012) and religious (Dabrock 2009) views of other scholars working on synthetic biology. On the other hand, overstating that GMOs, GEMs, GEOs, GDOs, GROs and CMOs will solve all human problems is not beneficial either, because such exaggeration generates mistrust among the public, as many emergent technologies have shown throughout history (Torgersen 2009). Hence, a lesson for synthetic biologists is to be more cautious with their metaphors and promises if they want to gain public acceptance of their technologies. For a more insightful discussion of the impact of metaphors in synthetic biology, see the chapter by D. Falkner in this book.

4.6 Semantics and the Public

Synthetic biology is a term that is not well known to the public (for further information, see the chapters by Ancillotti and Eriksson; Rerimassie; Seitz; Steurer). In fact, a recent survey in Germany confirmed this, but also revealed that people spontaneously perceive the field as an abstract and contradictory expression that they associate with interference with Nature.Footnote 26 How can nature be synthetic? Synthesis means to put parts together to form a new whole. This definition suggests that any minor modification of an extant organism would render it ‘synthetic’, which would be the case for all GMOs, GEMs, GEOs, GDOs, GROs and possibly CMOs. However, a bacterium with a synthetic plasmid is not the same as a bacterium with a synthetic genome, so a distinction should be made. For example, the synthetic genome of the GDO M. mycoides JCVI-syn1.0 represents about 1 % of the whole dry cell mass. The cell reprogramming was performed using a genome borrowed from a related species, into which about 0.1 % of genetic changes (watermarks) were introduced. So why should this bacterium be called synthetic when roughly 99 % of its components are natural? In reality, this bacterium is more natural than synthetic: although genomes control gene expression, they are only useful if there are proteins that can process their information, as was the case for M. mycoides JCVI-syn1.0. In a contrasting example, when the genome of the cyanobacterium Synechocystis was cloned into the bacterium B. subtilis, genes from the former could not be expressed in the latter organism because of “incompatibility” or context-dependency issues (Itaya et al. 2005). Thus, synthetic genomes represent only a minor part of cells, in contrast to lipids (which are not encoded in the genome) and proteins, of which there are millions of molecules in a single E. coli bacterium, let alone other important components (Fig. 9).

Fig. 9

Pie chart displaying the composition (cell dry mass) of a typical E. coli bacterium growing with a doubling time of 40 min. Metabolites include building blocks and vitamins. Note that there are about 2,400,000 protein, 257,500 RNA, 22,000,000 lipid, 1,200,000 lipopolysaccharide, 4400 glycogen, 2 DNA and 1 peptidoglycan molecule(s) per cell (http://book.bionumbers.org/what-is-the-macromolecular-composition-of-the-cell/)
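Using only the molecule counts listed in the caption above (counts, not the mass fractions shown in the pie chart), a minimal sketch makes explicit how small a share of the cell's molecular inventory the DNA represents:

```python
# Molecule counts per E. coli cell, taken from the caption of Fig. 9
# (these are counts, not mass fractions).
molecules_per_cell = {
    "protein": 2_400_000,
    "RNA": 257_500,
    "lipid": 22_000_000,
    "lipopolysaccharide": 1_200_000,
    "glycogen": 4_400,
    "DNA": 2,
    "peptidoglycan": 1,
}

total = sum(molecules_per_cell.values())
for name, count in sorted(molecules_per_cell.items(), key=lambda kv: -kv[1]):
    print(f"{name:>18}: {count:>10,} ({100 * count / total:.5f} % of molecules)")
```

By molecule count the DNA is a vanishingly small fraction of the cell; even by dry mass the genome remains a minor component, which is the point developed in the following paragraphs.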

A truly synthetic organism would be one in which all the components shown in Fig. 9 were synthesized and assembled to generate a living organism. Indeed, synthesizing all the molecules of life was the dream of the famous German chemist Emil Fischer at the beginning of the 20th century, when synthetic chemistry experienced a major revolution almost a century after Friedrich Wöhler's breakthrough of 1828: the synthesis of urea, the first organic molecule synthesized in the lab (Yeh and Lim 2007). Thus, a real challenge for bottom-up synthetic biologists will be to synthesize a living microbe completely from scratch (Porcar et al. 2011).

Given that top-down, parallel and perpendicular synthetic biology researchers mostly borrow microbes for tinkering (i.e. improving something by making small changes), the challenge will be to make the public aware that most of this research is not based on entirely synthetic organisms, but rather that synthetic DNA enables researchers to reprogram certain organisms for useful purposes. This endeavour could be facilitated by clearly communicating that DNA is not equal to life (de Lorenzo 2010), but is just another minor component of life, as important as the others (Fig. 9). Another way to engage the public with synthetic biology is by fostering cultural activities around the technology (see the chapter by B. Wray).

4.7 Resurrecting Life

The resurrection of an extinct species has so far been reported only for “Celia”, the last bucardo (Pyrenean ibex), which died on 6 January 2000. It was resurrected for 7 min on 30 July 2003 by nuclear transfer cloning (its cells had been frozen). Nuclear transfer involves injecting the genetic material to be cloned into an unfertilized, DNA-free egg, resulting in cells that have the potential to divide when placed in the uterus of an adult female mammal. This is how Dolly the sheep (5 July 1996–14 February 2003) was cloned from an adult somatic cell (Wilmut et al. 1997), and dozens of other speciesFootnote 27 have also been cloned despite the low efficiency of the technique. Besides nuclear cloning, it has been suggested that synthetic biology could cooperate with the biodiversity conservation community to protect or even resurrect extinct organisms (Redford et al. 2013). However, several technical and nontechnical issues first have to be resolved, which is clearly reflected in even Venter being cautious:

I have read too many articles that breezily discuss the reconstruction of a Neanderthal or a woolly mammoth with the help of cloning, even though the DNA sequences that have been obtained for each are highly fragmented, do not cover the entire genome and – as a result of being so degraded – are substantially less accurate than what is routinely obtained from fresh DNA. (Venter 2013, p. 87)

Further, even if fresh, intact DNA or “software” of the woolly mammoth or of Neanderthals were available, the appropriate cells or “hardware” that could interpret the genetic information would also be necessary (Danchin 2009). For example, when the components (nucleus, cytoplasm and cell membrane) of amoebas of different species were combinatorially reassembled, the only viable organisms were those whose components originated from the same strain (Jeon et al. 1970). This old yet ingenious study shows that the reprogramming of life goes beyond pure (synthetic) DNA.

Although it is difficult for scholars from different fields to agree on a definition of life, since no consensus has been reached among the up to 123 current definitions (Trifonov 2011), it seems that life requires at least three components: a genetic program, a metabolism and a container (Acevedo-Rocha et al. 2013a). For a discussion of the concept of life, see the chapter by J. Steizinger. Regardless of the definitions, any attempt to reprogram or resurrect life, or to construct life-like systems from the bottom up, should consider establishing the connection among these three components (Fig. 10).

Fig. 10

Life prerequisites. Living systems as we know them have a genetic program (DNA), a metabolism fuelled by enzymatic reactions and by proteins that execute this program using RNA as an intermediate in protein translation (mRNA, tRNA, rRNA), and a container or membrane formed of lipids. Viruses consist of DNA or RNA encased in proteins (and sometimes lipids), but since they have no metabolism, they are not alive, a fact that has puzzled scientists for decades (Villarreal 2004). Organelles such as mitochondria and chloroplasts contain metabolites and usually a minimal genetic program, but they are contained within a larger organism, so on their own they cannot be alive. Finally, the cytoplasm containing proteins, metabolites and lipids would not be alive without genetic material

Regarding non-technical issues, it is useful to weigh the pros and cons in order to better understand the dilemma of resurrecting extinct species with synthetic biology, as an expert puts it: “The ‘de-extinction’ movement – a prominent group of scientists, futurists and their allies – argues that we no longer have to accept the finality of extinction” (Minteer 2014). The most persuasive argument is to “appeal to our sense of justice: de-extinction is our opportunity to right past wrongs and to atone for our moral failings” (Minteer 2014). De-extinctionists also argue that by resurrecting species, ecological functions could be recovered, increasing the diversity of ecosystems. However, the introduction of revived species could pose disease threats to native species, in a manner reminiscent of the introduction of invasive species into new environments.Footnote 28 In addition, some conservationists worry that “de-extinguished” species would have limited genetic diversity.

In summary, one should ponder whether it is worthwhile to invest huge amounts of human and financial resources in ambitious projects to resurrect life, given not only the technical and nontechnical difficulties, but also the paradoxes of our world: devastating epidemics, hunting, habitat loss and degradation caused by human industrial activities and climate change are triggering an unprecedented loss of biodiversity, with estimates of 500 to 36,000 species of amphibians, birds and mammals disappearing every year (Monastersky 2014). Thus, instead of resurrecting life, synthetic biologists should help conservation biologists to develop innovative ideas for protecting the thousands of species that are already endangered: “Attempting to revive lost species is in many ways a refusal to accept our moral and technological limits in nature” (Minteer 2014).

5 Conclusion

I have attempted to outline the most important areas of current research in synthetic biology within a general, inclusive framework organized according to different engineering approaches. Almost all engineering efforts produce protocells, CMOs or GMOs, some of which are developed by well-known genetic engineering methods and others by more sophisticated tools. For example, the production of Genetically Engineered Machines (GEMs) is outsourced every year to undergraduate students across the world, with the advantage of fostering creativity and innovation in younger generations for solving complex problems. Similarly, researchers working in synthetic genomics have produced genomically designed organisms (GDOs) as a result of basic research, spinning off revolutionary applications for the rapid assembly of synthetic DNA. More recently, the ground-breaking CRISPR-Cas9 tool, itself a product of basic research, has allowed the emergence of an impressive number of genomically edited organisms (GEOs), enabling researchers in less than two years to better understand disease models and cellular biology, as well as to engineer multiple traits in microorganisms for the optimal production of drugs, biofuels and chemicals. However, the CRISPR-Cas9 system should be used cautiously, especially when engineering human embryonic stem cells or wild populations of organisms in the open environment (Ledford 2015). Finally, evolutionary approaches have accelerated the construction of microbial GROs and CMOs that promise to shed light on the meaning and evolution of life by endowing living systems with unnatural building blocks.

Importantly, any sort of genetic modification, such as a single DNA base mutation, the optimization of a metabolic pathway or the recoding of a whole genome, in any of these synthetic life forms will create a novel combination of genes and their products at the level of (pre- and post-)transcription (RNA) and translation (protein), thus triggering a new network of gene interactions (at the transcriptome and proteome level) within the host organism and across organisms, which is difficult to understand and predict even with the most advanced mathematical algorithms and technological tools. These phenomena, which fall under the scope of epistasis (Phillips 2008), emphasise the evolutionary complexity of gene interactions inherent to biology. For this reason, synthetic biologists should be aware of the limitations that must be overcome to predict the behaviour of living systems. The purpose of illustrating the benefits, but also the potential risks, of these emergent life forms is to give scholars from the social sciences and humanities, as well as non-scientists, a glimpse of what synthetic biologists are actually doing. The challenges, dilemmas and paradoxes surrounding the field also show that there are already big challenges in a globalized world that cannot be solved exclusively by technological means. Whether synthetic biology will deliver health, food and energy to societies with very different needs in the South and the North, without further affecting an already polluted environment in which biodiversity loss is an everyday phenomenon, will be seen in the coming century.