Introduction

The achievement of a simplified synthetic chassis with only a fraction of the functions of a natural cell, but keeping the very essence of life (the ability to perpetuate itself in time), is at the core of the research agenda of Synthetic Biology (SB). This field holds great promise for the design, construction and development of artificial (i.e. man-made) biological systems, offering viable new routes to 'genetically modified' organisms, smart drugs, and artificial genomes and proteomes, and may ultimately lead the way towards the creation of artificial and programmable living matter not limited to a biochemical substrate. The informed manipulation of such biological systems could have an enormous positive impact on our societies by contributing to healthcare provision, environmental protection, new materials, etc. The basic premise of SB is that methods commonly used to design and construct non-biological systems, such as those employed in the computational sciences and the engineering disciplines, could also be used to model and program novel synthetic biological systems. SB is thus intrinsically transdisciplinary and draws expertise from Biology, Chemistry, Physics, Computer Science, Mathematics and Engineering.

Synthetic biologists are attempting to develop 'artificial life', both for its tremendous applications in biotechnology and as a proxy for shedding light on the question of the origins of life. This is attempted by following two separate and competing routes: the 'top-down' and 'bottom-up' approaches to minimal cells. In the former, a primordial or minimal cell is approached by systematically reducing a biological cell's genome until any further deletion abolishes viability (Glass et al. 2006; Lartigue et al. 2007). The bottom-up methodology, on the other hand, seeks to assemble components or information units from scratch until an aspect of life emerges (Bedau 2003; Rasmussen et al. 2009). The overall intellectual and experimental challenges of implementing artificial life remain relatively long-term goals. However, along the way, guiding principles, experimental methodologies and theoretical insights from Biomimetic Chemistry and SB can be adopted in new ways for practical applications on a realistic, yet not necessarily immediate, time-frame.

Recent reports on the chemical synthesis of whole bacterial chromosomes (Gibson et al. 2008) and, especially, the successful transfer of a synthetic genome to a receptive cytoplasm (Gibson et al. 2010) demonstrate that the goal of creating living systems in the laboratory is drawing near. Both reports show how powerful DNA synthesis and transplantation techniques have become. Paradoxically, however, this success in synthesizing (copying) DNA also highlights our very poor ability to design (write) genomes de novo, which is certainly a consequence of our limited knowledge of the inherent complexity of living forms. The combination of this high complexity with the dependence on biological fitness, and with the novelty of the framework needed to achieve the ambitious goal of building life, makes the endeavor an unprecedented challenge that requires, in our opinion, deep reflection. The present article is an attempt to foster dialogue among synthetic biologists in order to lay the foundations on which to develop robust streamlined or purely synthetic life forms. Reaching a general consensus on the key points listed in Table 1 and described below would be a step forward in this direction.

Table 1 Challenges synthetic biologists will have to deal with

1. Reaching a consensus on synthetic and streamlined genomes
2. Cooking from scratch
3. Learning from nature: naturally evolved reduced minimal genomes
4. Refining and making real the notion of a biological chassis
5. Manufacturing engineered biosystems
6. Overcoming physical and chemical constraints
7. From models to cells and back
8. Replication and reproduction
9. Towards an integrated design strategy of synthetic organisms
10. Coupling scientific development and public opinion information

Reaching a consensus on synthetic and streamlined genomes

The first milestone is thus a meta-challenge: assembling a critical mass of scientists, conscious of the potential and limitations of their research in SB, to set out the key points of this new field as the basis for the construction of synthetic/streamlined genomes. As with other new technologies, the expensive research that SB requires is performed only in developed countries, mainly in the US and, to a lesser extent, Europe, along with scattered research centers in Japan and China (http://www.synbioproject.org/). A huge endeavor such as the construction of a synthetic cell would certainly benefit from pooling the knowledge and efforts of all these research centers, and from establishing a constructive discussion on the bottlenecks that might hamper such a task. Some of these bottlenecks correspond to the following grand challenges.

Cooking from scratch

A minimal living system needs to be able to process resources and turn them into building blocks so that it can grow and divide. Inheritable information needs, at least in part, to control these growth and division processes; furthermore, this information needs to be modifiable between generations and open to the selection of novel information, and thus to evolution. It is a grand challenge to implement such a system in the laboratory.

A minimal living system consists of an informational component, a metabolic component and a container keeping both linked to each other (Rasmussen et al. 2004). The "recipe" for making a bottom-up minimal self-replicating, evolving machine thus requires: (1) an inheritable information system that replicates (see "Replication and reproduction" section) and in part controls (2) a metabolic system that converts resources into building blocks, together with (3) a container that localizes the genes and the metabolism and supports resource uptake and system replication through division. These "ingredients" (systems/components) should be coupled: information-container, metabolism-container, information-metabolism, and information-metabolism-container. Indeed, the real challenges of the bottom-up approach to synthetic life lie in coupling the set of minimal processes required to construct the protocellular life cycle (Sunami et al. 2010; Rasmussen 2010). These bottlenecks include: (a) self-assembly of the components into a proto-organism; (b) uptake or fusion of the resources into the container (feeding); (c) container-associated replication of the informational system (gene replication); (d) metabolic transformation of resources into building blocks (growth); as well as (e) fission (division) of the proto-organism into two or more "fertile" copies.
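
As an illustration only, the couplings (b)-(e) can be written down as a toy kinetic model. The following Python sketch assumes invented rate laws and parameter values and is not a validated protocell design:

```python
# Toy protocell life cycle coupling information (I), metabolism and a
# container (C). Euler integration of assumed kinetics; all rate
# constants are illustrative, not measured values.

def simulate_protocell(k_feed=0.8, k_metab=0.6, k_grow=0.4, k_rep=0.2,
                       dt=0.01, t_max=60.0):
    C = 1.0   # container (membrane area)
    I = 1.0   # copies of the informational polymer
    R = 0.0   # internalized resources
    B = 0.0   # building blocks
    divisions = 0
    t = 0.0
    while t < t_max:
        uptake = k_feed * C       # (b) feeding scales with container surface
        metab = k_metab * I * R   # (d) information-catalyzed metabolism
        grow = k_grow * B         # container growth consumes building blocks
        rep = k_rep * B           # (c) gene replication consumes building blocks
        R += (uptake - metab) * dt
        B += (metab - grow - rep) * dt
        C += grow * dt
        I += rep * dt
        if C >= 2.0:              # (e) fission once the container has doubled
            C, I, R, B = C / 2, I / 2, R / 2, B / 2
            divisions += 1
        t += dt
    return divisions

print(simulate_protocell())  # number of "fertile" divisions within t_max
```

Even in this caricature the couplings are essential: setting the informational term I to zero stalls metabolism and hence container growth, and a container that cannot grow never divides.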

It should be emphasized that, because of the simplicity required in any bottom-up protocell design, some of the current protocell models are partly based on non-biological building blocks and on simpler organizations than we find in modern cells (DeClue et al. 2009). Therefore, one of the long-term promises of the bottom-up approach to synthetic life is also to explore and develop living processes in other hardware, including computer networks and robotic systems, e.g. for applications in information and production technology (Bedau et al. 2010).

Learning from nature: naturally evolved reduced minimal genomes

The minimal cell concept states that, for a particular kind of cell in a defined environment, there is a minimum number of features or functions necessary to keep the cell alive (Peterson and Fraser 2001). However, a minimal cell is only meaningful in relation to a particular environment (and, of course, to the kind of cell under study). In consequence, a plethora of minimal cells may exist (Huynen 2000). In this respect, top-down efforts aim to simplify extant cells to their minimal expression. The search for a minimal cell usually involves engineering a reduced genome through successive rounds of gene deletion (Pósfai et al. 2006). The dispensable genes can be identified through in silico comparative genome analysis or, alternatively and in a complementary way, by singly inactivating all genes in the genome and identifying those essential for cell survival (Gil et al. 2004). But there is a third research avenue to approach minimal living systems: naturally reduced genomes. Under this category we can distinguish two subcategories: free-living microorganisms and mutualistic endosymbionts. Among the smallest sequenced genomes from free-living prokaryotes are an uncultured ocean β-proteobacterium (Giovannoni et al. 2008), the cosmopolitan oceanic bacterium Candidatus Pelagibacter ubique with 1,394 genes (Giovannoni et al. 2005), the dehalorespirant Dehalococcoides sp. with 1,436 genes (He et al. 2003) and the hyperthermophilic crenarchaeon Ignicoccus hospitalis with 1,494 genes (Podar et al. 2008). An interesting finding is that these organisms, belonging to very different clades and living in clearly different environments, have evolved towards genomes with a similar number of genes. Taking these organisms as benchmarks, it has been proposed that the minimum number of genes for a free-living prokaryote should be approximately 1,400 (Podar et al. 2008). Can we consider building a cell based on a contemporary biochemistry with an even more reduced number of genes? The genomes of prokaryotes whose biology involves endosymbiosis are the product of particularly intensive natural genome reduction. Genome sizes for host-associated prokaryotes are much smaller, and there is a large repertoire of bacterial genomes with fewer than 1,400 protein-coding genes. The smallest genomes are the outcome of endosymbiosis (Moya et al. 2009). Bacterial endosymbionts from a range of insect species, including aphids, carpenter ants, psyllids, tsetse flies, singing cicadas and cockroaches, have been studied, and they all exhibit genomes ranging from 150 to around 700 kb.
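
The in silico comparative analysis mentioned above can be caricatured in a few lines: given per-genome sets of orthologous gene families, their intersection approximates a shared core. The gene family identifiers below are hypothetical placeholders; a real analysis rests on curated orthology assignments (Gil et al. 2004):

```python
# Naive core-genome estimate by intersecting orthologous gene families.
# The family identifiers are hypothetical placeholders, not real
# annotations of the genomes named in the text.

genomes = {
    "P_ubique":        {"rpoB", "dnaA", "ftsZ", "gyrA", "trpA"},
    "Dehalococcoides": {"rpoB", "dnaA", "ftsZ", "gyrA", "cbiA"},
    "I_hospitalis":    {"rpoB", "dnaA", "ftsZ", "topA", "gyrA"},
}

core = set.intersection(*genomes.values())
print(f"shared core ({len(core)} families):", sorted(core))
# -> shared core (4 families): ['dnaA', 'ftsZ', 'gyrA', 'rpoB']
```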

These naturally streamlined genomes demonstrate the feasibility of genome reduction as a means to yield simple yet functional cells. Endosymbionts could serve both as a short cut for the top-down approach, by supplying already simplified cells for further synthetic reduction, if necessary, and as a proof of concept of successful reduction events from which we can learn which genes and subsystems can be deleted and what the robust genetic and functional core of the cell is that must persist as a chassis.

Refining and making real the notion of a biological chassis

The concept of a simple genomic chassis on top of which forward-engineered systems can be implanted is one of the most appealing metaphors of SB. Alas, the very metaphor suggests an autonomy of the peripheral, implanted genetic circuits with respect to basic cell physiology which at best seems extremely difficult to achieve. This has not, however, been an obstacle to the extraordinary success of the notion, which assumes a functional autonomy of the central functions of the core cellular system with respect to implanted biological parts, modules and devices (de Lorenzo and Danchin 2008; de Lorenzo 2011). In reality, unlike parts used in engineering, extant biological components are extremely context-dependent, interact with metabolites and chemicals, are subject to Darwinian evolution, give rise to emergent phenomena when combined and, last but not least, the cells that harbor them do not remain the same size but grow and multiply.

Instead of the chassis, the metaphor that might be closer to the organization of biological objects is that of the Delphic boat (Danchin 1998). What makes a boat a boat is not the nature of its parts but the interactions and relationships between them. Is it then possible to isolate the chassis completely from the implanted properties? A side aspect of the chassis issue is the connection between genomic simplicity and growth rate capacity. What makes a cell adopt a slow-growing lifestyle vs. a fast-growing one? There may be master genes that control this aspect, or perhaps there is a general evolutionary adaptation to slow or fast growth regardless of stress or nutritional conditions. One simple example: some people are small because they do not eat enough (stress, lack of nutrients), while others are small because they are genetically programmed to be so, and it is certainly a serious mistake not to distinguish between the two causes of small size/growth. This is relevant for future re-factored genomes if they are to have any application in Biotechnology. The best biotechnological agent is the one that delivers maximum catalytic activity with minimum biomass; slow-growing bacteria, on the other hand, are of little use and are difficult to manipulate. Ideally, one would like to have a chassis in which growth rate, metabolism and catalytic ability could be dissociated and ultimately controlled at the user's will (de Lorenzo 2011). Furthermore, the chassis has to be robust rather than delicate, and capable of anabolism from simple carbon sources (better still if they are industrial waste) rather than relying on the provision of external nutrients of all sorts.

Manufacturing engineered biosystems

The term orthogonal, borrowed from mathematics and computer science, is again a powerful metaphor (de Lorenzo 2011) that brings to mind a factual independence between otherwise co-existing systems (in the SB literature, A is orthogonal to B if A does not influence B). Orthogonal ribosomes recognize alternative genetic codes or messenger RNAs; orthogonal expression systems bring about transcription initiation regardless of the specific biological host, and so on. Note that while the term orthogonal means independent, when used in the SB literature it simply denotes a lesser dependence on the host's native programs. Orthogonal ribosomes may recognize a separate genetic code, but they are still heavily connected to the rest of the molecular and metabolic network of the cell (Neumann et al. 2010). That the current meaning of orthogonal in SB is much more metaphorical than literal does not mean that we cannot entertain bona fide modules, or even complete living systems, engineered not to interact with any naturally occurring biological entity. Such orthogonal objects could be the ultimate solution to the problem of the possible risks associated with engineered microbes: the more orthogonal a system is, the less risky it could be from many points of view.

In the meantime, the interplay between a more or less refactored chassis and more or less independent implanted genes for a given purpose will remain at the core of current research in SB. It should be noted that transplantation of a whole foreign genome into that of an organism of a completely different clade can be stable as long as the foreign genome does not express its genes (Itaya et al. 2008). This paves the road for the implementation of orthogonal systems whose expression could be triggered by dedicated regulatory circuits in the transplanted genome (e.g. using alternative RNA polymerases). Needless to say, orthogonal systems of this sort could be re-used to work in various hosts. At some point it might be useful to define an orthogonality index (e.g. 0-1) to qualify and quantify the degree of dependence that each module implanted in a given chassis has on the general functioning of the entire system.
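
Purely as an illustration of what such an index might look like, one could score a module by the fraction of its molecular interactions that cross into the host network. Both the definition and the toy interaction list below are our own assumptions, not an established metric:

```python
# Hypothetical orthogonality index: 1 minus the fraction of a module's
# interaction edges that touch host components. Definition and example
# data are illustrative assumptions only.

def orthogonality_index(interactions, module):
    """Return a 0-1 score: 1 = no module-host cross-talk (fully orthogonal)."""
    module_edges = [(a, b) for a, b in interactions
                    if a in module or b in module]
    if not module_edges:
        return 1.0
    cross = sum(1 for a, b in module_edges if (a in module) != (b in module))
    return 1.0 - cross / len(module_edges)

# Toy example: an "orthogonal" ribosome that still drains host tRNAs.
interactions = [("o-rib", "o-mRNA"), ("o-rib", "host-tRNA"),
                ("o-mRNA", "o-rib"), ("host-RNAP", "host-gene")]
module = {"o-rib", "o-mRNA"}
print(round(orthogonality_index(interactions, module), 2))  # 0.67
```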

Overcoming physical and chemical constraints

Several physical factors pose major problems for a synthetic cell chassis to be functional and thus need to be addressed, particularly for bottom-up strategies aiming to construct cells de novo with biochemistries and designs similar to those of modern cells. For example, the shape of the cell is essential since, in most cases, the surface-to-volume ratio influences gene expression. Typically, for a given function, membrane proteins should be expressed at a lower level than the cytoplasmic components of that function. A consequence of this is that a synthetic cell might need to bear a transcription attenuation signal placed between the promoter-proximal genes for cytoplasmic components and the genes for membrane components. Osmotic pressure and electro-chemical gradients also have to be considered. Note that each time an efficient permease is implemented, a safety valve is needed to relieve excess pressure, either by exporting a modified form of the imported product (to avoid futile cycles) or, alternatively, by polymerizing it via an appropriate polymerase. Electro-chemical gradients, for their part, impose the implementation of specific transporters that discriminate between ions inside and outside the cell.
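
The surface-to-volume argument can be made concrete with a back-of-the-envelope calculation: for an idealized spherical cell, S/V = 3/r, so the membrane area available per unit of cytoplasm falls as the cell grows, which is why membrane components should be expressed substoichiometrically relative to cytoplasmic ones. A minimal sketch, assuming spherical geometry and illustrative radii:

```python
import math

# Surface-to-volume ratio of an idealized spherical cell: S/V = 3/r,
# so doubling the radius halves the membrane area available per unit
# of cytoplasm. Radii (in micrometres) are illustrative.

for r in (0.5, 1.0, 2.0):
    surface = 4 * math.pi * r ** 2
    volume = (4 / 3) * math.pi * r ** 3
    print(f"r = {r:3.1f} um  ->  S/V = {surface / volume:4.1f} per um")
# S/V drops from 6.0 to 1.5 across this range, constraining how much
# membrane machinery a growing synthetic cell can accommodate.
```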

A further constraint is linked to the very nature of the chemical components of the cell. Many metabolic intermediates are highly reactive and may lead to unwanted side reactions (Danchin and Sekowska 2009). This is at the root of many ageing processes. In particular, all molecules containing an alpha-dicarbonyl moiety are highly reactive toward free amino groups (Gobert and Glomb 2009). Many eukaryotic organisms have used compartmentalization in organelles to overcome this hurdle (D'Angelo et al. 2008; Go and Jones 2008).

Finally, there are several constraints associated with the ability of the cell to perpetuate itself. Cell division implies that the organism must have some sort of mechanical sensor indicating stretching of the membrane/envelope. Additionally, division implies ageing, since all components of the cell age, each with a different propensity. Three major ways of coping with this process should be considered: degradation and re-synthesis; aggregation and disposal in some waste bin; and export and replacement. In each case there is a need for some kind of measurement device that tells the aged entity from the young one (Danchin 2009a). A major challenge of SB is the implementation of artificial biosystems enabling artificial cells to deal with all these constraints.

From models to cells and back

Living matter functionally couples metabolism (e.g. production of energy through catabolism and the use of energy to build cellular structures through anabolism), information storage and processing (e.g. DNA transcription, mRNA translation, etc.) and compartmentalization processes and entities (e.g. cell wall formation, membrane transport, etc.), each with its characteristic length, time, energy and mass scales (Milo et al. 2010). These vastly different scales pose tremendous challenges for computational modeling and simulation, not dissimilar to the multi-scale difficulties found in climate modeling. Unlike the latter, however, SB allows experimentation, which can help smooth the interfaces between the modeling techniques involved.

Bridging the gap between top-down and bottom-up SB also offers formidable opportunities for advancement. For example, top-down SB requires the integration of sophisticated multi-scale modeling techniques encompassing at least genetic (transcriptional) networks, signaling networks and metabolic networks. Of the three subsystems just mentioned, the last is the best understood, and its computational modeling and simulation technology is maturing fast (e.g. Oberhardt et al. 2009). The remaining two, especially signaling networks, remain more of a challenge, and some emergent trends in computational modeling, variously called "Executable Biology", "Algorithmic Systems Biology" or "Infobiotics" (e.g. Priami 2009; Romero-Campero et al. 2009), seem promising. Infobiotics techniques define rules that describe how the modeled system moves from one state to the next (Fisher and Henzinger 2007). These rules are executed; that is, starting from an initial state, a procedure determines the next biochemical reaction (i.e. rule) to apply, which in turn specifies the state to which the system evolves. Once a new state has been reached, the process is iterated. For large numbers of molecules, the intrinsic stochasticity of biological and chemical systems is averaged out, and deterministic modeling is adequate. However, when a relatively small number of molecules is involved, stochastic effects become more prominent and must be taken into consideration. This can be accomplished by using discrete stochastic simulations which, in some cases, may show dramatic differences from a (wrongly applied) continuous deterministic model with regard to the computed behavior of the system under study (Twycross et al. 2010). Besides, infobiotics methodologies have the additional advantage of making the mechanisms postulated within a model more explicit and more clearly recognizable.
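
As a concrete illustration of such a discrete stochastic simulation, the sketch below implements Gillespie's direct method for a hypothetical birth-death gene expression process (synthesis at rate k_syn, degradation at rate k_deg per molecule); the parameter values are arbitrary. At a deterministic steady state of only k_syn/k_deg = 5 molecules, individual trajectories fluctuate widely around the mean, precisely the regime where a continuous deterministic model would mislead:

```python
import math
import random

def gillespie_birth_death(k_syn=0.5, k_deg=0.1, t_max=200.0, seed=1):
    """Gillespie direct method for: 0 -> X at rate k_syn (synthesis),
    X -> 0 at rate k_deg per molecule (degradation)."""
    rng = random.Random(seed)
    t, x = 0.0, 0
    trajectory = [(t, x)]
    while t < t_max:
        a_syn = k_syn                  # zeroth-order synthesis propensity
        a_deg = k_deg * x              # first-order degradation propensity
        a_tot = a_syn + a_deg          # always > 0 since k_syn > 0
        u = 1.0 - rng.random()         # uniform in (0, 1], safe for log
        t += -math.log(u) / a_tot      # exponential waiting time
        if rng.random() * a_tot < a_syn:   # pick reaction by propensity
            x += 1
        else:
            x -= 1
        trajectory.append((t, x))
    return trajectory

# The deterministic model predicts a steady state of k_syn/k_deg = 5
# molecules; single stochastic runs wander well above and below it.
final_states = [gillespie_birth_death(seed=s)[-1][1] for s in range(5)]
print(final_states)
```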

Recent reports on reduced minimal cells (e.g. Kühner et al. 2009) and advances in bottom-up and semi-synthetic cells (Gardner et al. 2009; Pasparakis et al. 2009), together with the above-mentioned computational techniques, suggest a series of (sub)milestones. The first, within the context of a reduced (either synthetic or naturally evolved) bacterium, would be the implementation of a complete, verifiable mechanistic simulation of all relevant molecular interactions involved in any of its metabolic, transcriptional or signaling networks. This milestone implies simulating the 4-dimensional trajectory of perhaps tens of millions of particles. Critically, this particle set will be organized in key interaction networks in which just a few molecules (out of the millions that are simulated) govern system-wide behavior. Hence, an explicit understanding of individual (as opposed to averaged) causal mechanisms will provide fundamental insights into the stochasticity of biological and chemical systems. From the bottom-up point of view, the challenge would be the complete, verifiable mechanistic simulation of all relevant molecular interactions involved in a bottom-up implementation of membrane formation and other dynamic processes (e.g. reproduction), as well as of each of the proto-{metabolic/transcriptional/signaling} networks (Fellermann et al. 2007).

The second milestone would be the integration of the above-mentioned models into a comprehensive simulation of the entire organism's life cycle. This will require smoothing out simulation interfaces that deal with very different scales, e.g. membrane formation, TF binding, etc. The bottom-up perspective on this milestone offers the unique possibility of developing a co-design strategy (Staunstrup and Wolf 1997), namely the simultaneous, step-by-step development of experimental techniques and computational models in which complete knowledge of what goes into the experiments is possible (in contrast with top-down approaches, where a billion-year biological "legacy system" must be dealt with). That is, although several potential routes towards protocell (sub)systems might be available, a co-design strategy will ensure that only those routes that have a reliable simulation counterpart are followed (and vice versa). This strategy could substantially change the way in which bottom-up research is performed.

Ultimately, one is interested in harnessing and engineering collective multi-cellular behavior. This requires a detailed understanding of the sources of noise in (proto)cellular systems, as well as practical strategies for programming both "patterned" noise and cell-to-cell communication. In particular, multi-cellular synthetic systems would require an exquisite noise control strategy (Rao et al. 2002) that could sustain a specific engineered behavior but also, paradoxically, the ability to harness biological noise so as to allow the system sufficient plasticity to compensate for, e.g., changing environments or faulty components. Recent advances in the top-down engineering of multi-cellular behavior (Tamsir et al. 2010; Regot et al. 2010) suggest that a careful orchestration of intercellular communication within and across cell colonies, using a suitable spatial or temporal compartmentalization, facilitates the averaging-out of intrinsic noise (Pedraza and van Oudenaarden 2005; Rosenfeld et al. 2005) and achieves phenotypic robustness. However impressive these demonstrations are, they remain simplistic and rigid in their spatial and temporal arrangements. In order to substantially scale up our capacity to build truly programmable multi-cellular systems, a third milestone must be met: the integrative multicellular modeling (from the colony level up) of (reduced) bacteria and the simulation of a colony- or tissue-like ensemble of artificial cells built from the bottom up and undergoing collective behavior (e.g. quorum-sensing-like processes, swarming, etc.). A key aspect of this milestone (Cronin et al. 2006) will be the integrated simulation of very large hybrid protocell-cell systems. These large-scale hybrid artificial-biological multicellular systems will, in turn, require substantial advances in our ability to perform realistic integrative simulations of the evolutionary processes that such systems might undergo, thus giving us, for the first time, the ability to better understand their long-term behavior. These simulations (and their experimental counterparts) will be crucial milestones towards the engineering of robust, reliable and efficient smart drug delivery systems and tissue/organ enhancement techniques, as well as more general applications of artificial living matter.
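
The averaging-out of intrinsic noise invoked above can be illustrated with elementary statistics: if each cell's output fluctuates independently, the standard deviation of the colony-level mean shrinks as 1/sqrt(N). A minimal sketch, assuming independent Gaussian intrinsic noise (a strong simplification of the referenced experimental work):

```python
import random
import statistics

# Averaging-out of intrinsic noise across a colony: with N independent
# cells, the fluctuation of the colony mean shrinks as 1/sqrt(N).
# Independent Gaussian noise is an illustrative assumption.

random.seed(0)

def colony_means(n_cells, mean=100.0, sd=20.0, trials=2000):
    """Distribution of the colony-average output over many trials."""
    return [statistics.fmean(random.gauss(mean, sd) for _ in range(n_cells))
            for _ in range(trials)]

for n in (1, 10, 100):
    spread = statistics.stdev(colony_means(n))
    print(f"N = {n:3d} cells  ->  sd of colony mean ~ {spread:5.2f}")
# The spread falls roughly as 20/sqrt(N): ~20, ~6.3, ~2.0.
```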

Replication and reproduction

Cell multiplication has been taken for granted as a property of biological systems. However, it is essential to discriminate between reproduction (making a similar copy) and replication (making an identical copy). Freeman Dyson convincingly argued that, while reproduction can accumulate novel information, replication is doomed to accumulate errors (Dyson 1985). During the process of multiplication the program replicates, but the cell chassis reproduces, and this introduces considerable variation (often wrongly interpreted as noise) that needs to be taken into account. We note that this is exactly what happens in the genome transplantation experiments (Lartigue et al. 2007), where the initial host chassis differs from that found at the end of the experiment: the program has replicated, while the chassis has reproduced. A happy consequence of this variation is the paradoxical, but rarely noticed, fact that the construction of a young progeny from old parents is a built-in property of all living organisms. This implies that there exist genetically encoded functions to cope with the process, which involve restoring, recruiting or even creating novel information (Danchin 2009b).

How can this be? Information is central to SB but is systematically used in a loose way (see SB as a means to "manipulate information", whatever that is; Endy 2005). By contrast, information has for several decades been the core business of research and applications in computer science and engineering. In this field, information is considered an authentic currency of reality that complements matter, energy, space and time (Zurek 1989). Specific processes must articulate all five categories together. In particular, information and energy are related in a non-intuitive way, as shown by Landauer and Bennett when they set out to evaluate the theoretical limits of computation for computers that were expected to become ever faster while working in ever smaller volumes (Landauer 1961; Bennett 1988). Briefly, they concluded that the creation of information does not consume energy, while the accumulation of information, because it needs to make room by erasing the memory of past events, is energy-consuming. We extended this view by identifying energy-dependent degradative processes as Maxwell's demons that use energy to prevent the degradation of what is rich in "useful" information (Danchin 2009b).
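
Landauer's bound can be stated concretely: erasing one bit dissipates at least kB·T·ln 2, about 3 × 10⁻²¹ J at physiological temperature. The short calculation below uses standard physical constants and, for scale only, the standard-conditions free energy of ATP hydrolysis (~5 × 10⁻²⁰ J per molecule):

```python
import math

# Landauer's bound: erasing one bit costs at least kB * T * ln(2).
kB = 1.380649e-23      # Boltzmann constant, J/K (exact SI value)
T = 310.0              # physiological temperature, K

e_bit = kB * T * math.log(2)
print(f"minimum erasure cost: {e_bit:.2e} J per bit")   # ~2.97e-21 J

# For scale: ATP hydrolysis releases ~5e-20 J per molecule under
# standard conditions, i.e. one ATP could in principle pay for
# erasing roughly 17 bits at the Landauer limit.
atp = 5e-20
print(f"bits erasable per ATP: {atp / e_bit:.0f}")
```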

Reflection along these lines is of prime importance for the future of SB, as a construct lacking the corresponding genes and devices will slowly decay as it multiplies. By contrast, implementing energy-dependent systems to restore youth will open the door to unwanted innovation. A central challenge of SB will be to harness this remarkable aptitude to create fitness by managing novel information towards the goals of the investigators.

Towards an integrated design strategy of synthetic organisms

The potential of SB-based approaches rests on the engineering principles of abstraction, decoupling and standardization, as well as on modeling, but experience reveals that rationally designed genetically engineered organisms may in fact be less adaptive than those shaped by natural selection (Chan et al. 2005). The astounding complexity and diversity of natural living beings are the best demonstration of the superiority of natural selection over rational design. Thus, combining rational design (modeling-based and using standard biological parts) with selection strategies such as directed evolution, adaptive evolution and other Darwinian approaches might dramatically accelerate the achievement of artificial life forms. Selection-based strategies are already used in SB to identify or refine the "best" clones for a desired function (Loakes and Holliger 2009; Porcar 2010). However, systematic Darwinian "dead or alive" selection approaches on rapidly multiplying organisms such as bacteria and viruses will certainly play a central role in the creation of complex and viable artificial life forms, not only by detecting the fittest clones for a particular gene product but by selecting functional metabolic networks as a whole. Indeed, using Darwinian selection for making life is a "to be or not to be" decision, since even the most complex rationally designed cell, if it is able to reproduce, cannot be prevented from evolving.
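
A caricature of such a "dead or alive" selection loop can be written as a toy genetic algorithm in which only clones above a viability threshold survive each generation; the bit-string genomes, fitness function and parameters are invented for illustration:

```python
import random

# Toy "dead or alive" Darwinian selection on a population of genomes,
# reduced here to bit-strings scored against a hypothetical target
# phenotype. Fitness function and parameters are illustrative.

random.seed(42)
TARGET = [1] * 20                        # hypothetical optimal design

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET)) / len(TARGET)

def mutate(genome, rate=0.05):
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(30):
    # "dead or alive": only clones above the viability threshold survive
    survivors = [g for g in population if fitness(g) >= 0.5] or population
    population = [mutate(random.choice(survivors)) for _ in range(50)]

best = max(population, key=fitness)
print(f"best fitness after selection: {fitness(best):.2f}")
```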

Coupling scientific development and public opinion information

The experience with the public perception of GMOs, particularly in Europe, should serve as a lesson for implementing a solid information platform to accompany the development of synthetic systems (de Lorenzo 2010). The association by public opinion between SB and biotechnology is to be expected. Moreover, the very name of the discipline, Synthetic Biology, seems calculated to produce a strong negative reaction (http://www.synbiosafe.eu/). Indeed, the very recent reference to the "creation" of the first synthetic bacterium (Gibson et al. 2010), despite its limited artificiality and its obvious lack of peril, provoked an unprecedented negative reaction, as demonstrated by hundreds of comments in Internet forums. However, the reaction the term evokes can in fact be considered both a difficulty for its acceptance and an opportunity to popularize a positive view of artificiality by publicizing the enormous potential benefits of man-made biomachines. We are convinced that establishing a transparent and fluid debate among scientists, decision makers and the general public is imperative for the acceptance of SB as a useful and positive technology. It would be a mistake to forget that the main dangers associated with life lie in natural organisms, especially those recognized as invasive species and those that cause emerging diseases (de Lorenzo 2010). Crying wolf has always led to catastrophes when the real predator is forgotten.

After the grand challenges, the great expectations

The famous American physicist and Nobel laureate Richard Phillips Feynman wrote this on his blackboard in 1988, at the time of his death: 'What I cannot create I do not understand'. By reversing it, we get the more obvious 'What I do not understand I cannot create', which might in fact be the perfect metaphor for the paradoxical nature of SB, because we should only be able to accomplish the challenge of creating life if we understand the mechanisms of life well enough (which is doubtful) to reproduce it in the lab. But, somewhat unexpectedly, we are now astoundingly close to creating living beings with only a glimpse of the complexity of the interactions behind their living nature. This complexity is exemplified by three recent reports by the group of Luis Serrano on the naturally reduced bacterium Mycoplasma pneumoniae. The authors used genome-scale screening for soluble protein complexes, estimated the number of such molecular biomachines at some 200, and concluded that even this minimal organism exhibits a proteome complexity that 'could not be directly inferred from its genome composition and organization or from extensive transcriptional analysis' (Kühner et al. 2009). In a second work, the reactions catalyzed by 129 enzymes were characterized through more than 1,300 growth curves, which revealed a relatively linear metabolic topology compared to more complex genomes, but similar metabolite concentrations, cellular energetics, adaptability and global gene expression responses (Yus et al. 2009). Finally, a holistic study of the transcriptome of M. pneumoniae revealed an unexpected complexity, hardly understandable with current models of cell functioning (Güell et al. 2009).

In other words, the closer we come to creating life, the more intricate life appears to be, and this complexity, particularly that of protein interactions, makes machine-like orthogonalization a utopian, oversimplified metaphor for SB. Therefore, if we apply Feynman's philosophy, there is no hope for truly synthetic life in the near future. But the fact is that we should be able to create what we do not fully understand, by using already functional parts and by integrating them in rational processes under the constant guidance of (natural-like) selection and evolution. This 'assisted biological design', combined with the already available high-throughput DNA synthesis and transplantation techniques, might prove a revolutionary tool for dramatically improving a range of biotechnological applications such as sustainable energy production, bioremediation strategies and biomedicine.