Towards Autonomous Robot Evolution

We outline a perspective on the future of evolutionary robotics and discuss a long-term vision regarding robots that evolve in the real world. We argue that such systems offer significant potential for advancing both science and engineering. For science, evolving robots can be used to investigate fundamental issues about evolution and the emergence of embodied intelligence. For engineering, artificial evolution can be used as a tool that produces good designs in difficult applications in complex unstructured environments with (partially) unknown and possibly changing conditions. This implies a new paradigm, second-order software engineering, where instead of directly developing a system for a given application, we develop an evolutionary system that will develop the target system for us. Importantly, this also holds for the hardware; with a complete evolutionary robot system, both the software and the hardware are evolved. In this chapter, we discuss the long-term vision, elaborate on the main challenges, and present the initial results of an ongoing research project concerned with the first tangible implementation of such a robot system.


The Grand Vision of Robot Evolution
The long-term vision regarding robot evolution foresees systems of robots that reproduce and undergo evolution in the real world. The interest in such systems is based on perceiving evolution as a generic mechanism that has driven the emergence of life on Earth and created a vast diversity of life forms adapted to all kinds of environmental conditions. Thus, the rationale behind the grand vision can be stated as follows.
As natural evolution has produced successful life forms for practically all possible environmental niches on Earth, it is plausible that artificial evolution can also produce specialised robots for various environments and tasks. Robot evolution offers advantages in different areas depending on where the emphasis lies, on robots or on evolution. To put it simply, we can distinguish two perspectives: engineering and science.
As for engineering, artificial evolution can deliver good robot designs for a specific application. In this approach, evolution is (ab)used as an optimiser that is halted when a satisfactory solution is found. Real evolution, however, is not about optimisation but about adaptation that never stops. This potential can be unlocked through open-ended artificial evolution where a robot population continually changes and adapts to possibly unforeseen and changing conditions. In such systems, robots are improving in two dimensions: they are becoming fit for the environment and fit for purpose (as defined by their users).
Robots that evolve in an open-ended 'hands-free' fashion are also very interesting from a scientific perspective. Specifically, they can be regarded as hardware models of natural evolution that are programmable, tuneable, and observable. Hence, they can be employed as a novel research instrument to investigate fundamental issues about evolution and the emergence of embodied intelligence.

Researching Evolution Through Robots
Evolution is a natural phenomenon that has driven the emergence of life on Earth. Studying evolution is traditionally a subject within biological science. However, the invention of the computer and the ability to create artificial (digital) worlds opened the possibility of establishing artificial evolution, that is, computational processes that mimic Darwinian principles. The fundamental insight behind this is the observation that the reproduction-selection cycle that is pivotal to natural evolution is analogous to the generate-and-test loop of search algorithms. This was first postulated a long time ago [7,42], and by the end of the twentieth century, evolutionary computing (EC) had become a vibrant research area with many applications [21]. Over the past half-century, various evolutionary algorithms (EAs) have been developed under different names including genetic algorithms, genetic programming, and evolution strategies. They have proven their power on hard problems without analytical models, with non-linear relations among variables and complex objective functions with multiple local optima.
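The analogy between the reproduction-selection cycle and a generate-and-test loop can be made concrete with a minimal sketch (not any specific algorithm from the literature): bit-string genotypes, binary tournament parent selection, one-point crossover, and bit-flip mutation, here applied to the toy OneMax problem.

```python
import random

def evolve(fitness, genome_length=10, pop_size=20, generations=50, seed=0):
    """Minimal generate-and-test evolutionary loop: generate offspring by
    crossover and mutation, test them with the fitness function, select."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            # Parent selection: the fitter of two random individuals wins.
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        offspring = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, genome_length)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            # Bit-flip mutation with probability 1/length per gene.
            child = [g ^ (rng.random() < 1 / genome_length) for g in child]
            offspring.append(child)
        pop = offspring                                  # generational replacement
    return max(pop, key=fitness)

# OneMax: fitness counts the 1-bits; evolution should approach all-ones.
best = evolve(fitness=sum)
```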
Evolutionary computing mimics natural evolution, but it takes place in a virtual space, whereas natural evolution happens in the real world. The advantage of evolutionary computing systems is that they are programmable, configurable, and observable. However, computer models and simulators are based on abstractions and approximations and may miss crucial aspects of the real world. This leads to the so-called reality gap, the phenomenon that the performance of a solution evolved in simulation decreases once it is transferred to a real robot. This gap can be significant, and observations and conclusions about system behaviour may be mere artefacts [28]. Natural evolutionary systems are quite the opposite. They are certainly real but hardly programmable, configurable, and observable. The combination of the two offering the best of both worlds is called the Evolution of Things. The notion and the term was introduced by Eiben et al. in [20] and further discussed in [15] and [22].
The key idea behind the Evolution of Things concept is to have a programmable evolutionary system that works with physical artefacts (see Fig. 2.1). These artefacts can be passive, e.g. sunglasses or airplane wings, or active, e.g. animate things: robots for short. Robots that are able to reproduce and evolve in the real world can be perceived as a physically embodied model of natural evolution [34] and used as a research instrument to study evolutionary phenomena. Fundamental questions that can be investigated include the evolution of (embodied) intelligence, the interplay between the body and the brain, and the impact of the environment on the evolved organisms. To date, such issues can be observed and studied in wetware (living systems) and software (computing systems); the Evolution of Things will realise them in hardware (robotic systems).
Using real robots instead of simulations is interesting, because this guarantees that the observed effects are real and not just artefacts of the simulator. Research with robots also offers advantages with respect to living organisms, because robots are easily observable (e.g. internal processes and communication can be logged) and controllable [15]. This allows systematic studies under strictly regulated conditions and many repetitions for solid statistics.

Supervised Robot Evolution: Breeding Farms
Evolutionary algorithms have been successful in solving various design problems, typically regarding inanimate artefacts with a functional or aesthetic value [2,3,27]. The potential for designing robots is evident; robots for a certain application can be developed through iterated selection and reproduction cycles until they satisfy the users' criteria.
Obviously, designing robots for structured environments with known and predictable conditions can be done by classic engineering. However, complex unstructured environments with (partially) unknown and possibly changing conditions represent a completely different challenge. Think, for instance, of robots for environmental monitoring in rain forests, exploration of ocean floors, or terraforming on other planets. In such cases, it is hard to determine the optimal morphologies and the control systems driving them. For example, should a robot that operates in the jungle have wheels, legs, or both? What is the optimal arrangement of its sensors? Should that robot be small to manoeuvre through narrow openings or should it be big and heavy to trample down obstacles? The number of options considering materials, components, shapes, and sizes is huge, the link between a specific robot makeup and its task performance is poorly understood, and theoretical models and proven design heuristics are lacking.
In such cases, a good robot design can be evolved on a 'robot breeding farm'. This means a human-controlled and human-operated facility that has a mock-up environment resembling the real application conditions and an evolutionary engine that implements the two main components of any evolutionary process: reproduction and selection.
The first one can be realised by a (re)production facility that constructs the robots (phenotypes) based on specification sheets (genotypes). Currently, in 2020, this is a significant engineering challenge because the available technology for rapid prototyping and 3D printing is not able to produce all essential robot ingredients. For instance, printing servo motors and CPUs is beyond reach at the moment.
Realising the second component, selection, is much easier. The quality of a robot can be measured by its performance in the test environment, and parent selection can be implemented by using this quality as fitness. Importantly, the users can steer and accelerate evolution by selecting/deselecting robots for reproduction as they see fit. Doing so, users act akin to farmers who breed animals or plants; this explains the breeding farm metaphor. If needed, the user can also influence (re)production by the robotic equivalent of genetic modifications or directly injecting hand-designed robots into the evolving population. The evolutionary design process on a breeding farm stops after obtaining a good robot. This robot is then put forward as the solution and several copies of it can be produced to be deployed in the real application environment.
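The user-steerable parent selection described above could be sketched as fitness-proportionate selection with a hypothetical `user_bias` hook through which the human breeder deselects or favours individual robots; all names and parameters here are illustrative assumptions, not a specific implementation.

```python
import random

def select_parents(population, fitness, user_bias=None, n_pairs=5, seed=0):
    """Fitness-proportionate parent selection for a 'breeding farm'.
    user_bias maps a population index to a multiplier: 0.0 deselects a
    robot entirely, values above 1.0 favour it (the breeder's hand)."""
    rng = random.Random(seed)
    weights = []
    for i, robot in enumerate(population):
        w = fitness(robot)
        if user_bias:
            w *= user_bias.get(i, 1.0)
        weights.append(max(w, 0.0))
    # Draw n_pairs parent pairs, each proportionally to (biased) fitness.
    return [tuple(rng.choices(population, weights=weights, k=2))
            for _ in range(n_pairs)]
```

A robot the user has vetoed (bias 0.0) can never be drawn as a parent, while unbiased selection reduces to plain fitness-proportionate selection.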

Open-Ended Robot Evolution: Out in the Wild
Robot breeding farms represent one particular approach to artificial evolution. In this approach, evolution is employed as a particular design method, and if the conditions or requirements change, the whole process needs to be repeated.
Enabling robot populations to adapt to previously unknown or changing conditions is essential in applications, where the environment is inaccessible, dangerous, or too costly for humans to approach and work in. Examples include remote areas on Earth, deep seas, and other planets. An evolutionary engine operating autonomously on the spot can mitigate this problem. The first component of this system is a (re)production facility that can make use of local resources and construct a large variety of robots. The second is a twofold selection drive, such that robots become fit for the environment and fit for purpose. Environmental selection (for viability) is for free, as robots with a poor feature set will not be able to operate adequately. Sexual selection, in turn, can be pre-programmed such that robots have a 'basic instinct' to choose mating partners with a high task performance (utility). The evolving robot population will then become increasingly adapted to the given environment and simultaneously become better and better in performing the task(s) predefined by the users. Importantly, over consecutive generations, the bodies and brains of the robot population adjust when the conditions change. In other words, evolution allows the robot population to optimise itself while on the job.
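The twofold selection drive could be sketched as follows, under illustrative assumptions: environmental selection is a viability filter that costs nothing extra, and the pre-programmed 'basic instinct' is modelled as mate choice weighted by task performance (utility). The field names are hypothetical.

```python
import random

def mating_step(population, viable, utility, rng=None):
    """One selection step of the twofold drive: non-viable robots are
    removed 'for free' by the environment; among survivors, parents are
    chosen with probability proportional to their task utility."""
    rng = rng or random.Random()
    survivors = [r for r in population if viable(r)]   # fit for the environment
    if len(survivors) < 2:
        return None                                     # population dies out
    weights = [utility(r) for r in survivors]           # fit for purpose
    return tuple(rng.choices(survivors, weights=weights, k=2))
```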
Such an evolving robot colony is very different from a breeding farm because the evolutionary system must be able to operate for extended periods of time without direct human oversight. In contrast to the breeding farm scenario, the evolutionary process never stops, and there is no such thing as the final solution in the form of any given robot makeup. In this respect, open-ended robot evolution is closer to biological evolution, while a breeding farm is more a (r)evolutionary approach to designing and optimising robots.

Evolutionary Robotics
The discipline that is concerned with robot evolution is evolutionary robotics (ER), a research area that applies EAs to design and optimise the bodies (morphology, hardware), the brains (controller, software), or both for simulated or real autonomous robots [5,13,14,23,37,38,43]. Over the last 20 years, the ER field has addressed the evolution of robot controllers in fixed morphologies with considerable success, but evolving the bodies has received much less attention. This is somewhat understandable, given the difficulty of implementing such systems in the real world, i.e. the lack of technologies for automated (re)production of robots. However, advances in robotics, 3D printing, and automated assembly mean it is now timely to work on physical robot systems with evolvable morphologies [45].
Systems within Artificial Life have addressed the evolution of morphologies (and control systems) of virtual creatures in simulated worlds, e.g. in the pioneering works by Sims, Bongard, and Pfeifer [4,39,41] and several subsequent studies [1,9]. This was brought closer to the physical instantiation by jointly evolving robotic shapes and control in a simulator and 3D printing the evolved shapes [33]. However, evolution took place in simulation and only the final product was materialised; furthermore, the robot had no sensors. The self-reproducing machines of Zykov et al. were modular robots that were designed or evolved to be able to make exact clones of themselves without variation, and they did not undergo evolution in the real world [46,47]. More recent work has evolved morphologies composed of novel, soft materials, but once again only the final morphologies were constructed, and in this case they were confined to operating within a pressurised chamber rather than operating in a real-world environment [26]. A related subfield of evolutionary design has concerned itself with constructible objects, but here again evolution generally has taken place in software, with only the end result being constructed. A few projects employed fitness evaluations on the hardware itself, but these systems produced inert objects with no controllers that passively underwent evolution [32,40].
A separate category of related work contains biologically motivated studies with evolution implemented in populations of robots [24,44]. Using real hardware is extremely challenging, and, to the best of our knowledge, there has been only one successful project, that of Long et al. investigating the evolution of vertebrae through swimming robot fish [10,34]. The project faced huge practical problems, for instance, manual construction of new generations took weeks, which severely limited the number of experiments and the number of generations per experiment.
One of the most relevant related works with respect to the autonomous robot evolution concept is the study of Brodbeck et al., which investigated the morphological evolution of physical robots through model-free phenotype development [8]. Being model-free means that the system does not employ simulations: all robots are physically constructed. As noted by the authors, this avoids the reality gap but raises two new problems, the birth problem and the speed problem, identified previously as challenge 2 and challenge 4 in [20]. The system demonstrates a solution to the birth problem in real hardware based on modular robot morphologies. Two types of cubic modules (active and passive) form the raw materials, and robot bodies are constructed by stacking and gluing a handful of such modules. The robots do not have an on-board controller; they are driven by an external PC, and their task is to locomote. The EA also runs on the external PC, with populations of size 10 and fitness defined by the distance travelled in a given time interval. Robot genomes encode the bodies implicitly by specifying the sequence of operations by which a robotic arm, dubbed the mother robot, builds them. The construction of new robots (the 'birth process') is hands-free in some of the reported experiments but requires human assistance in others.
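An implicit, constructive encoding of this kind can be illustrated in spirit (operation names and fields are assumptions, not Brodbeck et al.'s actual format): the genome is not a blueprint of the body but a sequence of build operations, and the phenotype is whatever body results from executing them.

```python
import random

def develop(genome, max_modules=6):
    """Execute the build sequence; the phenotype is the resulting stack
    of (module_type, rotation) pairs. Invalid or excess operations are
    simply ignored, as a physical builder would skip infeasible steps."""
    body = []
    for op, module_type, rotation in genome:
        if op == "stack" and len(body) < max_modules:
            body.append((module_type, rotation % 4))
    return body

def mutate(genome, rng=None):
    """Point-mutate one gene: swap the module type or re-roll its rotation."""
    rng = rng or random.Random()
    g = list(genome)
    i = rng.randrange(len(g))
    op, mtype, rot = g[i]
    if rng.random() < 0.5:
        mtype = "active" if mtype == "passive" else "passive"
    else:
        rot = rng.randrange(4)
    g[i] = (op, mtype, rot)
    return g
```

Because the mapping is constructive, a small genotypic change (one operation) can produce a disproportionately large phenotypic change, which is characteristic of implicit encodings.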
Finally, the RoboGen system and the Robot Baby Project represent a significant stepping stone towards autonomous robot evolution [1,30]. The RoboGen system features modular robots as phenotypes, a corresponding space of genotypes that specify the morphology and the controller of a robot, and a simulator that can simulate the behaviour of one single robot in a given environment [1]. Evolution is implemented by a classic evolutionary algorithm that maintains a population of genotypes; executes selection, crossover, and mutation; and calls the simulator for each fitness evaluation. The system is applied to evolve robots in simulation, and physical counterparts of the simulated robots can be easily constructed by 3D printing and manually assembling their components.
RoboGen was not meant to and has never been used to physically create each robot during an evolutionary process, but it inspired the robot design of the Robot Baby Project [30]. This project is a proof-of-concept study to demonstrate robot reproduction, and ultimately evolution, in a real-world habitat populated by robots.
A key feature is that robots coexist and (inter)act in the same physical space where they can 'live and work', 'meet and mate', thus producing offspring and, over the long run, many generations of evolving robots. This feature is unique, since in other related works a traditional EA performs evolution, where robots are manifested one by one and evaluated in isolation during the fitness evaluation steps. The Robot Baby Project was based on the Triangle of Life system model (discussed in Sect. 4) and implemented all system components in the simplest possible form, cutting several corners. The experiment started with an initial population of two robots and ran a complete life cycle resulting in a new robot, parented by the first two. Even though the individual steps were simplified to the greatest extent possible, the whole system validated the underlying concepts, illuminated the generic workflow for the creation of more complex incarnations, and identified the most important open issues (Fig. 2.2).

Fig. 2.2
The 'first family' produced by the Robot Baby Project. The blue 'spider' and the green 'gecko' in the background are the parents, the robot in front of them is their offspring. See [30] for details.

Evolvable Robot Hardware: Challenges and Directions
Consider now a long-term vision for evolvable robot hardware, freed from the limitations of current materials and fabrication technologies. We see two possible futures: one in which a universal robot manufacturing cell evolves and fabricates robots to order, and a second in which smart proto-robot matter evolves itself into more complex, interesting (and possibly useful) forms. The first of these visions falls squarely within the engineering paradigm; it implies supervised evolution against a specification and requires a (possibly compact and self-contained) manufacturing infrastructure. The second vision is decidedly bio-inspired, even bio-mimetic, and falls within the Artificial Life paradigm: specifications might be loose or nonexistent, and hence evolution open-ended rather than optimising towards some fitness. Here the infrastructure would consist of some environment in which the robot matter 'lives', recombining or replicating and collectively adapting in response to its environment; in this vision, robots would, in some sense, physically self-reproduce.

What Would We Evolve?
A robot is a complex interconnected system of subsystems, and any strategy for evolving a robot's body must be based upon the physical characteristics of those subsystems. It is helpful to categorise a robot's physical subsystems according to their function and then consider the implication of each of these for evolving robot bodies. Table 2.1 identifies seven functions, each of which would normally be present in a mobile robot able to both sense its environment and act on that environment: sensing, signalling, actuation, energy, control (micro-controllers, or equivalent, to provide robot control), the physical structure, skeleton, or chassis (the metal and/or plastic structure which determines and maintains the organisation of the robot's functional subsystems), and the interconnections (the wiring between the sensing, signalling, actuation, energy, and control subsystems). Consider the physical components required for each of the seven functions outlined above. In principle any or all of these could be evolved. In practice, however, we would not (and perhaps could not) evolve the actual subsystems for some of these functions: sensing, signalling, energy (i.e. the batteries), or control (i.e. the controller hardware), for example. An exception to this general rule would be when we are interested in evolving a particular sensing modality, in which case all other physical aspects would be fixed; an example can be found in [36], in which the auditory system for a Khepera mobile robot was evolved, modelled on cricket audition.
For sensing and signalling elements, we would normally predetermine and fix the actual sensing and signalling devices and their subsystems. The number, type, and especially the position of sensing and signalling elements are, however, a different matter. It would be of great interest to allow evolution to discover, from the space of sensing and signalling subsystems, which ones, how many, and where to position them in order to optimise the robot's overall functionality. For the energy and controller hardware, the major decisions must also be predetermined; the power source might be singular or distributed, for instance. From an evolutionary perspective, provided the energy and controller elements are out of the way, mounted perhaps towards the centre of the physical chassis, their precise position is of no real interest.
Consider now the actuation elements. Although the motors, or equivalent devices, will need to be predetermined, the physical structures they drive are likely to be a particular focus for evolution. Take mobility, for example; much depends on the kind of motility we are seeking. For a simple wheeled robot, for instance, we might want to evolve the size of the wheels, the number of drive wheels, and their position on the robot's body. For example, Lund [35] describes the evolution of a Lego robot with 3 different wheel types and 25 possible wheel positions. But for walking, flying, or swimming robots with much more complex actuated structures (legs, wings, or fins), or snake-like robots in which the whole of a multi-segmented body is actuated, then it is likely that there will be many parameters we would want evolution to explore and optimise. Similarly, if we want to evolve structures for gripping or manipulating objects, then the number, size, and physical arrangement of the articulated parts would need to be evolvable parameters.
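For a simple wheeled robot such as Lund's, the evolvable body parameters can be written down as a small parameter genome. The counts below (3 wheel types, 25 wheel positions, 11 sensor positions, giving 825 bodies) follow the figures reported elsewhere in this chapter for Lund's Lego robot; the field names and structure are illustrative assumptions.

```python
import random
from dataclasses import dataclass

WHEEL_TYPES = 3        # from the text
WHEEL_POSITIONS = 25   # from the text
SENSOR_POSITIONS = 11  # from Lund's setup, as reported in this chapter

@dataclass
class BodyGenome:
    wheel_type: int    # which of the 3 wheel types
    wheel_pos: int     # which of the 25 mounting positions
    sensor_pos: int    # which of the 11 sensor positions

def random_body(rng: random.Random) -> BodyGenome:
    """Sample one point from the discrete body search space."""
    return BodyGenome(rng.randrange(WHEEL_TYPES),
                      rng.randrange(WHEEL_POSITIONS),
                      rng.randrange(SENSOR_POSITIONS))

# The body space alone has 3 * 25 * 11 = 825 candidate bodies; co-evolving
# controller parameters multiplies this enormously.
```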
Consider next the robot's physical skeleton or chassis. This passive structure defines the shape of the robot. Although the function of the robot may appear to be primarily determined by its sensing, actuation, and control (software), this is not so: having the correct physical structure, and hence the physical separation and organisation of sensors and actuators, is critical. This 3D structure provides the robot's body-centric frame of reference, within which the position and orientation of the rest of the robot's subsystems are defined. A robot's physical chassis might, at its simplest, be a single stiff component, like the one-piece stiff plastic moulded chassis of the e-puck educational mobile robot, or it may be a complex skeleton with a large number of individual structural elements flexibly connected, like the compliant anthropomimetic humanoid ECCE robot [12]. In either case, it is likely that we would want evolution to explore the space of possible morphologies for the robot's physical structure. However, the way we would parameterise the robot's physical structure would be different in each case. A relatively simple stiff single-piece chassis can be described with a voxel map, for instance. A more complex skeleton-like structure would be better parameterised in the same way as actuated structures: as a dataset specifying the number, shape, and physical arrangement of its component parts.
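A voxel-map parameterisation of a one-piece chassis could look like the following sketch: the genome is a 3D boolean grid marking which voxels contain material, and mutation flips voxels to add or remove material. The grid dimensions, fill probability, and mutation rate are illustrative assumptions.

```python
import random

def random_chassis(nx, ny, nz, fill=0.5, rng=None):
    """A random voxel-map genome: True means the voxel is material."""
    rng = rng or random.Random()
    return [[[rng.random() < fill for _ in range(nz)]
             for _ in range(ny)] for _ in range(nx)]

def mutate_chassis(vox, rate=0.02, rng=None):
    """Flip each voxel independently with a small probability."""
    rng = rng or random.Random()
    return [[[v ^ (rng.random() < rate) for v in row]
             for row in plane] for plane in vox]

def volume(vox):
    """Count the material voxels (a crude proxy for mass/cost)."""
    return sum(v for plane in vox for row in plane for v in row)
```

In practice such a genome would need repair or constraints (e.g. enforcing a single connected component) before it could be printed as a physical chassis.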
Finally, consider the interconnect-the wiring between the electronic subsystems. It is safe to assume a 'standard' wiring model in which (a) every sensing, signalling, and actuation subsystem is wired to the controller subsystem and (b) the controller is wired to the energy source (unless power is distributed among the electronic subsystems). There would seem to be no need for this interconnect to be evolved.
In summary, the key elements of the robot we would, in the general case, seek to evolve are:
• The number, type, and position of sensing, signalling, and actuation subsystems
• For the actuation subsystems, the number, shape, and physical arrangement of the articulated parts
• The 3D shape of the robot's physical structure or chassis or, for more complex skeleton-like structures, the number, shape, and physical arrangement of the parts of the skeleton

How Would We Physically Build the Evolved Robot's Body?
Consider now the question of how we would physically instantiate the evolved robot body. We are presented with a number of options, at least in principle:
(1) Hand-crafted method. As specified by the genotype, fashion the evolved shapes and structures (i.e. the chassis) by hand from raw materials, then hand-assemble these structures together with the predetermined subsystems (i.e. sensors, motors, controller, energy, and wiring), using the genotype to determine the position and orientation of these subsystems.
(2) Hand-constructed method. As specified by the genotype, hand-construct the evolved shapes and structures from pre-manufactured parts (e.g. Lego), then hand-assemble these structures together with the predetermined subsystems, using the genotype to determine the position and orientation of these subsystems.
(3) Semi-automated method. Automate part of the process by CNC machining or 3D printing the evolved shapes and structures according to their specification in the genotype, then hand-assemble these structures together with the predetermined subsystems, using the genotype to determine the position and orientation of these subsystems.
(4) Automated construction method. Fully automate method 2 with a machine that can undertake the construction of evolved shapes and structures (from, e.g., Lego or equivalent) while also selecting and correctly placing the predetermined subsystems.
(5) Automated manufacture method. Fully automate method 3 by integrating the process of (say) 3D printing the evolved shapes and structures with a system for selecting and correctly placing the predetermined subsystems. This method matches the vision of the universal robot manufacturing cell suggested at the start of Sect. 3.
To the best of our knowledge, method 5 has not, to date, been reported in the evolutionary robotics literature. There are several interesting examples of evolved and then hand-constructed robots using method 2. For instance, [35] describes the co-evolution of a Lego robot body and its controller in which the evolved robot is physically constructed and tested. Here evolution explores a robot body space with 3 different wheel types, 25 possible wheel positions, and 11 sensor positions. Lund observes that although the body search space is small, with 825 possible solutions, the search space is actually much larger when taking into account the co-evolved controller parameters. Method 3 has been demonstrated in the Golem project [33], although in that project the artificial creatures evolved and physically realised contained no sensing or signalling systems, nor controller or energy source, so they were not self-contained autonomous robots. Recently, method 4 has been used in [8], where the automated construction method stacked and glued a handful of modules on top of each other to form a robot. These robots, too, have no sensing or signalling systems, nor an on-board controller; they are driven by an external PC, and their task is to locomote.

A Bio-inspired (Modular or Multi-cellular) Approach
Biological systems are inherently multi-cellular. It could be argued that modular robotic systems are a natural analogue for multi-cellular systems: a single robotic unit in a modular system can be thought of as a cell, and a modular system as a multi-cellular collective. When considering the evolution of these systems, much of what we have discussed in Sect. 3.1 stands, but there is a further consideration: we are no longer concerned with a single robot acting on its own but with a number of robots that have joined together to make a larger single unit that can perform tasks a single unit cannot. Such a modular system may well be able to share resources, such as energy, and modules will have a common communication bus with which they can share data. For instance, the SYMBRION robotic platform described by Kernbach et al. in [31] consists of three different types of robots that can join together into a single unit, as shown in Fig. 2.3. A review by Eiben et al. [19] is a good example of starting to view the modular robotic collective as a single entity: an 'organism'. Eiben et al. discuss a variety of mechanisms that might be used to evolve not only single-robot systems but also modular (and swarm) robotic systems, and provide an evolutionary framework to evolve the controllers for robots within that collective while taking into account the fact that it is a modular collective: in effect, evolving at the level of a single modular collective. They demonstrate the on-board evolutionary approach using a number of robots, though not when functioning as a single, modular unit.
When evolving a controller at the organism level, we are not only concerned with the performance of an individual: we must now consider the performance of the organism with respect to some goal (which may be something as simple as surviving in the environment) and also its performance with respect to the environment, for instance by implicitly deriving fitness from the environment rather than having an explicit fitness measure, as shown in the work by Bredeche et al. [6], where the environment is taken as the model to drive the performance of the collective.
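Implicit, environment-driven selection of this kind can be sketched as follows. This is a loose one-dimensional toy in the spirit of environment-driven distributed evolution (e.g. Bredeche et al.'s mEDEA), not their implementation: there is no fitness function, and a genome spreads only if its robot stays operational and within broadcast range of others. All field names and the world model are assumptions.

```python
import random

def epoch(robots, rng=None):
    """One lifetime. Robots that ran out of energy broadcast nothing
    (environmental selection, 'for free'); the rest transmit their genome
    to every robot within range. Each robot then adopts one received
    genome at random, or drops out if it received none."""
    rng = rng or random.Random()
    inboxes = {i: [] for i in range(len(robots))}
    for i, r in enumerate(robots):
        if r["energy"] <= 0:
            continue                      # dead robots spread no genes
        for j, other in enumerate(robots):
            if i != j and abs(r["pos"] - other["pos"]) <= r["range"]:
                inboxes[j].append(r["genome"])
    next_gen = []
    for i, r in enumerate(robots):
        if inboxes[i]:
            next_gen.append({**r, "genome": rng.choice(inboxes[i])})
    return next_gen
```

Note that nothing in the loop scores a genome; reproductive success is purely a side effect of staying alive and meeting others, which is exactly the implicit-fitness idea.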
The work by Zykov et al. [47] is probably the closest to an evolving modular robotic system, demonstrated on the Molecubes platform. In this work, the authors make use of genetic algorithms to evolve various morphologies of the modular system and then, given those morphologies, evolve suitable controllers. This was done to provide a suitable gradient for the evolutionary algorithm to permit the (ultimate) discovery of self-replicating modular systems.
However, as yet, there is still no work that has fully demonstrated the evolution of both structure and function of a modular robotic system, in which evolution is fully embodied in the modular units themselves.

The Autonomous Robot Evolution Project
The autonomous robot evolution (ARE) project is concerned with developing the first real robot system with evolvable morphologies. It exceeds its precursors, the works of Long [34], Brodbeck et al. [8], and the Robot Baby Project [30], in several important aspects. The key features that distinguish the ARE system are as follows:
1. Complex, truly autonomous robots with an on-board controller, several sensors, and various actuators undergo open-ended evolution in the real world.
2. The (re)production mechanism is capable of both manufacturing and assembling a diverse range of physical robots autonomously, without human interaction.
3. Evolution is asynchronous, in that robots from different generations coexist in the real world, since producing a new robot does not imply the removal of an existing robot.
4. Evolution at the population level is combined with individual learning mechanisms which enable a robot to improve its inherited controller during its lifetime in a real environment.
5. Robots are evolved not for a toy application but for a challenging task: for example, the exploration of a nuclear cave. This requires that robots develop several morphology-dependent skills, for instance, (targeted) locomotion and object manipulation.
A particularly interesting aspect of the ARE system is the manner in which the controller software for the robots is developed. Currently, the common approach is to have the software for robots developed by human programmers through the usual development cycle. A system where robots evolve and learn represents a significantly different method, in which the controller software of the robots is produced algorithmically.
Specifically, in an evolutionary robot system, both the hardware and the software of a robot, that is, both its bodyplan and its controller, are encoded by its genotype. This genotype is the artificial equivalent of the DNA in living organisms, and it forms the specification sheet that determines the phenotype, the actual robot. By the very nature of an evolutionary system, these genotypes undergo crossover and mutation, resulting in new genotypes that encode new robots including new pieces of controller software. Over consecutive generations, selection (and learning) will increase the level of fitness as defined by the given application, thus increasing the functionality of the underlying software.
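As an illustration of this encoding, the following sketch shows a toy genotype that stores body genes and brain (controller) parameters side by side, together with crossover and mutation operators. All names, the module alphabet, and the operators are invented for illustration; they are not the ARE representation.

```python
import random

# Hypothetical illustration: a genotype encoding both a bodyplan
# (a string of module genes) and a controller (a list of weights).

def crossover(parent_a, parent_b, rng):
    """One-point crossover applied to body and brain genes separately."""
    cut_body = rng.randrange(1, len(parent_a["body"]))
    cut_brain = rng.randrange(1, len(parent_a["brain"]))
    return {
        "body": parent_a["body"][:cut_body] + parent_b["body"][cut_body:],
        "brain": parent_a["brain"][:cut_brain] + parent_b["brain"][cut_brain:],
    }

def mutate(genotype, rng, rate=0.1):
    """Point mutation: re-draw body genes, perturb brain weights."""
    body = [rng.choice("SWJ") if rng.random() < rate else g
            for g in genotype["body"]]           # S=sensor, W=wheel, J=joint
    brain = [w + rng.gauss(0, 0.1) if rng.random() < rate else w
             for w in genotype["brain"]]
    return {"body": body, "brain": brain}

rng = random.Random(42)
a = {"body": list("SSWW"), "brain": [0.1, 0.2, 0.3, 0.4]}
b = {"body": list("JJWW"), "brain": [0.9, 0.8, 0.7, 0.6]}
child = mutate(crossover(a, b, rng), rng)
print(child["body"], child["brain"])
```

The child genotype encodes a new robot, including a new piece of controller software, exactly as the text describes: variation operators act on the genotype, and morphogenesis later turns it into a phenotype.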
This implies a new paradigm, second-order software engineering, where instead of directly developing a system for a given application, we develop an evolutionary system that will develop the target system for us. Note that this also holds for the hardware; with a full-blown evolutionary robot system, both the software and the hardware can be evolved.

Overall System Architecture
A framework for 'evolving robots in real time and real space' has been suggested in Eiben et al. [18]. This framework, named the Triangle of Life, lays out a generic system architecture. A tangible implementation of it is envisaged by the notion of an EvoSphere as introduced and discussed in [17]. An EvoSphere represents a generic design template for an evolutionary robot habitat and forms the basis of the physical environment in the ARE project.
The Triangle of Life consists of three stages: morphogenesis, infancy, and mature life, as illustrated in Fig. 2.4. Consequently, an EvoSphere consists of three components, the Robot Fabricator, the Training Facility, and the Arena. The Robot Fabricator is where new robots are created (morphogenesis). This will be discussed in the next section in detail.
The Training Facility provides a suitable environment for individuals to learn during infancy, providing feedback, perhaps via a computer vision system or a human user, so individual robots can learn to control their (possibly unique) body to acquire basic skills (locomotion, object manipulation) and to perform some simple tasks. (Fig. 2.4 caption, right panel: Illustration of an EvoSphere after [17]. It consists of three main components belonging to the three edges of the triangle, plus a recycling facility and equipment for observation.) Let us note that the learning methods in the Infancy stage/Training Facility can be of any type, e.g. evolutionary (neuro-evolution, genetic programming, CMA-ES) or non-evolutionary (reinforcement learning, simulated annealing, Bayesian optimisation). In the first case, we get a system with two evolutionary loops: the outer loop that forms the Triangle of Life, evolving bodies and brains, and the inner loop 'under the hood' of the learning method, improving the brain in the given body of a newborn robot.
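The two-loop structure described above can be sketched schematically: an outer evolutionary loop over (body, brain) pairs and an inner lifetime-learning loop that refines the brain of each newborn robot before selection. The fitness function and representations below are toy placeholders, not the ARE implementation.

```python
import random

def task_performance(body, brain):
    # Toy fitness: how well the brain matches the demands of its body.
    return -abs(body - brain)

def lifetime_learning(body, brain, trials, rng):
    """Inner loop (Infancy): hill-climb the brain for a fixed body."""
    for _ in range(trials):
        candidate = brain + rng.gauss(0, 0.2)
        if task_performance(body, candidate) > task_performance(body, brain):
            brain = candidate
    return brain

def evolve(generations, pop_size, rng):
    """Outer loop (Triangle of Life): evolve (body, brain) pairs."""
    pop = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        # Every newborn first learns, then competes for reproduction.
        pop = [(b, lifetime_learning(b, w, trials=20, rng=rng)) for b, w in pop]
        pop.sort(key=lambda p: task_performance(*p), reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [(rng.gauss(b, 0.1), w) for b, w in parents]
    return pop[0]

rng = random.Random(0)
best_body, best_brain = evolve(generations=10, pop_size=8, rng=rng)
print(task_performance(best_body, best_brain))
```

Note that the inner loop here is non-evolutionary (a simple hill-climber); replacing it with, say, CMA-ES would give the two-evolutionary-loop variant mentioned in the text.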
The Training Facility increases the chances of success in the Arena and plays an important role: it prevents reproduction of poorly performing robots and saves resources. It also enables refinement of controllers learned in a simulated environment that do not transfer properly due to the reality gap. If a robot successfully acquires the required set of skills, it is declared a fertile adult and can start its mature life. To this end, it enters the Arena, which represents the world where the robots must survive and perform user-defined tasks and may be selected for reproduction. The selection mechanism can be innate in the robots (by choosing mating partners) or executed by an overseer, which can be algorithmic or a human 'breeder'.
Here it is important to note that different types of robot evolutionary systems can be distinguished depending on the human involvement in selection and reproduction. The involvement of humans can be a technical necessity, for instance, if automated construction is infeasible or if adopting a breeding farm approach. Table 2.2 shows the four possible combinations and the corresponding types of evolutionary robot systems.
The ARE project has started with a system of Type 3 by building a Robot Fabricator that can autonomously produce new robots [25]. For a Type 2 implementation, the information needed by the selection mechanism(s) needs to be obtained and processed automatically. For instance, an overhead camera system can observe the robots' behaviour and play the role of the matchmaker. Alternatively, robots can monitor their own performance and query each other before engaging in reproduction. A Type 4 system, like an autonomously evolving robot colony working on terraforming another planet, is more than a simple addition of Types 2 and 3. For instance, it requires that 'newborn' robots are moved from the Robot Fabricator to the Arena and activated and observed there as they go about the given task automatically, without humans in the loop.
An essential feature of the EvoSphere concept and the ARE system is centralised, externalised reproduction. For reasons of ethics and safety, we reject distributed reproduction systems, e.g. self-replicators or the robotic equivalents of cell division, eggs, or pregnancy, and deliberately opt for one single facility that can produce new robots.

A general problem with evolving robots in the real world is the speed of evolution. Producing enough generations for significant progress can take much time and many resources. In principle, this problem can be mitigated by a hybrid evolutionary system, where physical robot evolution is combined with virtual robot evolution in a simulator. This hybridisation combines the advantages of the real and virtual worlds. Physical evolution is accelerated by the virtual component, which can find good robot features with less time and fewer resources than physical evolution, while simulated evolution benefits from the influx of genes that have been tested favourably in the real world. In an advanced system, the physical trials can also help improve the accuracy of the simulator, thus reducing the reality gap.
In the ARE system, deep integration of virtual and physical robot evolution is possible. In essence, two concurrently running implementations of the Triangle of Life are foreseen, one in a virtual environment and one in the physical environment.
A key feature behind the integrated system is using the same genetic representation in both worlds. This enables cross-breeding so a new robot in either environment could have physical or virtual parents, or a combination of both. Additionally, it enables copying robots between environments simply by transferring a genotype from one environment to another and applying the appropriate morphogenesis protocol to create the corresponding phenotype (a virtual or a physical robot). The integration of the virtual and the physical subsystems requires a specific system component, the Ecosystem Manager, which optimises the working of the hybrid evolutionary system, maximising task performance.
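The shared genetic representation can be illustrated as follows: copying a robot between the virtual and the physical world amounts to transferring an unchanged genotype and re-running the appropriate morphogenesis protocol in the target environment. All class and function names here are hypothetical.

```python
# Sketch of cross-environment transfer enabled by a shared genetic
# representation. Each environment carries its own morphogenesis
# protocol (simulation vs. the RoboFab), mocked here as functions.

class Environment:
    def __init__(self, name, morphogenesis):
        self.name = name
        self.morphogenesis = morphogenesis   # genotype -> phenotype
        self.population = []

    def create_robot(self, genotype):
        phenotype = self.morphogenesis(genotype)
        self.population.append(phenotype)
        return phenotype

def transfer(genotype, target_env):
    """Copying a robot between worlds: re-run morphogenesis on an
    unchanged genotype in the target environment."""
    return target_env.create_robot(genotype)

virtual = Environment("virtual", lambda g: f"simulated:{g}")
physical = Environment("physical", lambda g: f"fabricated:{g}")

genotype = "BODY|BRAIN"                  # placeholder encoding
sim_robot = virtual.create_robot(genotype)
real_robot = transfer(genotype, physical)
print(sim_robot, real_robot)             # simulated:BODY|BRAIN fabricated:BODY|BRAIN
```

Cross-breeding works the same way: since both worlds share one representation, recombination can mix a virtual parent's genotype with a physical parent's genotype before morphogenesis in either environment.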

The ARE Robot Fabricator
The ARE project development roughly follows the robotic life cycle represented by the Triangle of Life. The first priority is to establish a way to physically (re)produce robots. Our approach follows method 5 in Sect. 3.1.2 in that we have designed an automated robot fabricator, which we call RoboFab. The robot's genome encodes for both its hardware and software, that is, both its bodyplan and controller. The part of the genome that encodes the bodyplan determines both the 3D shape of the robot's skeleton and the number, type, and disposition of 'organs' on the skeleton.
We define organs as 'active components which individual robots can use to perform their task(s). It is expected that each robot will have a single "brain" organ, which contains the electronic control hardware and a battery to provide the robot with power. Other organs provide sensing and actuation, with their type and location being specified in the genome written by the evolutionary algorithm' [25]. The use of hand-designed organs in this way does not diminish the biological plausibility of ARE: complex organisms are assemblages of pre-evolved components. Thus, although crude by comparison to their biological counterparts, our organs represent a pragmatic engineering compromise which does reflect the modularity evident in biological evolution, in materio [11]. As shown in Fig. 2.5, an ARE RoboFab has four major subsystems: (i) up to three 3D printers, (ii) an organ bank, (iii) an assembly fixture, and (iv) a centrally positioned robot arm (multi-axis manipulator). The purpose of each of these subsystems is as follows:
(i) The 3D printers are used to print the evolved robot's skeleton, which might be a single part or several. With more than one 3D printer, we can speed up the process by printing skeletons for several different evolved robots in parallel, or, for robots with multi-part skeletons, each part can be printed in parallel.
(ii) The organ bank contains a set of prefabricated organs, organised so that the robot arm can pick organs ready for placing within the part-built robot.
(iii) The assembly fixture is designed to hold (and if necessary rotate) the robot's core skeleton while organs are placed and wired up.
(iv) The robot arm is the engine of RoboFab. Fitted with a special gripper, it is responsible for assembling the complete robot.
A RoboFab equipped with one 3D printer is shown in Fig. 2.6. The fabrication and assembly sequence has six stages, listed below:
1. RoboFab receives the required coordinates of the organs and one or more mesh files of the shape of the skeleton.
2. The skeleton is 3D printed.
3. The robot arm fetches the core 'brain' organ from the organ bank and clips it into the skeleton on the print bed.
4. The robot arm then lifts the core organ and skeleton assemblage off the print bed and attaches it to the assembly fixture.
5. The robot arm then picks and places the required organs from the organ bank, clipping them into place on the skeleton.
6. Finally, the robot arm wires each organ to the core organ to complete the robot.
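The six-stage sequence can be sketched as a simple pipeline; in the real RoboFab these steps drive 3D printers and a robot arm, which are mocked below as log messages. The function and argument names are invented for this sketch.

```python
# Hypothetical sketch of the six-stage RoboFab sequence as a pipeline.
# Hardware actions (printing, picking, wiring) are mocked as log entries.

def fabricate(skeleton_meshes, organ_list):
    log = []
    log.append(f"received {len(skeleton_meshes)} mesh file(s) and "
               f"{len(organ_list)} organ placement(s)")          # stage 1
    log.append("3D printing skeleton")                           # stage 2
    log.append("clipping core 'brain' organ into skeleton")      # stage 3
    log.append("moving assemblage to assembly fixture")          # stage 4
    for organ in organ_list:                                     # stage 5
        log.append(f"placing organ: {organ}")
    log.append("wiring organs to core organ")                    # stage 6
    return log

steps = fabricate(["skeleton.stl"], ["wheel-left", "wheel-right"])
for s in steps:
    print(s)
```

With multiple printers, stage 2 would be parallelised across skeleton parts or across robots, as noted in subsystem (i) above.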
We have demonstrated the successful automated fabrication of several robot bodyplans (stages 1-6 of the sequence above), and one example robot is shown in Fig. 2.7. For a more detailed description of the ARE RoboFab, see [25]. (Fig. 2.7 caption: A complete robot, fabricated, assembled, and wired by the RoboFab. This evolved robot has a total of three organs: the core 'brain' organ and two wheel organs.)

Concluding Remarks
Autonomous robot evolution is a long-term prospect with great potential advantages for science and engineering. Conceptually, evolving robot systems belong to the more generic category of embodied artificial evolution as discussed in [20] and must indeed meet the challenges outlined therein.
1. Body types: An appropriate design space of all possible robot makeups needs to be specified. This defines the search space that evolution will traverse in search of solutions with high values of (application-dependent) fitness.
2. How to start: The implementation of a robot (re)production system that delivers physical robot offspring with some form of inheritance is a critical prerequisite.
3. How to stop: To prevent the 'Jurassic Park problem', the system must have a kill switch to stop evolution if necessary.
4. Evolvability and rate of evolution: To be useful, an evolving robot system must exhibit a high degree of evolvability and a high rate of evolution. In practice, the system must make good progress in real time, with short reproduction cycles and large improvements per generation.
5. Process control and methodology: Evolving robot systems combine open-ended and directed evolution on the fly. An evolution manager, be it human, algorithmic, or combined, should be able to perform online process monitoring and steering in line with the given application objectives and user preferences.
6. Body-mind co-evolution and lifetime learning: Randomised reproduction of robot bodies and brains can lead to a mismatch between the body and the brain of newborn robots. This implies the need for a lifetime learning process to align the inherited brain with the inherited body and increase fitness and task performance quickly [16].
Arguably, challenges 1 and 2 form the first priority for any real implementation. In terms of the Triangle of Life model, they are positioned on the first side of the triangle. Thus, the initial steps on the road towards autonomous robot evolution must deal with the Robot Fabricator.
Therefore, the ARE project started by creating the ARE RoboFab. These efforts uncovered previously unknown issues regarding manufacturability and viability. In short, the problem is that a genotype, that is, the specification sheet for a new robot, can be formally correct but infeasible in different ways. Here we can distinguish structural and behavioural infeasibility. For instance, the genotype can describe a robot that is not constructable because its body parts overlap, or one that is constructable in principle but cannot be manufactured by the given machinery, the 3D printer and the automated assembler. Another form of infeasibility occurs when a robot can be manufactured, but its bodyplan lacks essential components for reasonable functioning, for instance, if the robot has no sensors or no actuators. Then the robot is manufacturable, but not viable. These issues are strongly rooted in the morphogenesis of physical robots and have, therefore, not been encountered before.
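The distinction between structural infeasibility and non-viability can be illustrated with a toy feasibility check on a decoded bodyplan: overlapping parts make a robot unconstructable, while a buildable robot without sensors or actuators is not viable. The bodyplan format and organ names below are invented for illustration.

```python
# Toy feasibility check on a decoded bodyplan: a list of organs, each
# with a type and a 2D bounding box (x, y, w, h). Invented format.

SENSORS = {"camera", "proximity"}
ACTUATORS = {"wheel", "joint"}

def overlaps(a, b):
    """Axis-aligned bounding boxes (x, y, w, h) that intersect."""
    return (a[0] < b[0] + b[2] and b[0] < a[0] + a[2] and
            a[1] < b[1] + b[3] and b[1] < a[1] + a[3])

def check(bodyplan):
    # Structural infeasibility: body parts overlap, so the robot
    # cannot be constructed at all.
    boxes = [organ["box"] for organ in bodyplan]
    for i in range(len(boxes)):
        for j in range(i + 1, len(boxes)):
            if overlaps(boxes[i], boxes[j]):
                return "structurally infeasible: overlapping parts"
    # Behavioural infeasibility: manufacturable, but not viable.
    kinds = {organ["type"] for organ in bodyplan}
    if not kinds & SENSORS:
        return "not viable: no sensors"
    if not kinds & ACTUATORS:
        return "not viable: no actuators"
    return "feasible"

robot = [{"type": "camera", "box": (0, 0, 1, 1)},
         {"type": "wheel",  "box": (2, 0, 1, 1)}]
print(check(robot))  # feasible
```

A real system would add further structural tests, e.g. whether the skeleton fits the printer's build volume and whether the assembly arm can reach every organ site.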
Ongoing research is concerned with discovering the essence of these issues and understanding how they affect the evolutionary process.
As stated by challenge 6 above, a morphologically evolving robot system needs lifetime learning. Such a component sits on the second side of the Triangle of Life and forms a logical second phase in the development. A fundamental issue here is that the learning methods must work on arbitrary robot morphologies that can possibly be produced by evolution, regardless of the size, the shape, and the arrangement of body parts. Furthermore, the newborn robots in a realistic system need to learn multiple skills. This is largely uncharted territory, since most of the existing work in this area concerns learning methods for a given, fixed robot morphology and only one or two tasks. An additional challenge is the need for extreme sample efficiency, as the learning algorithm should work in a few trials and with little prior information. A possibly helpful trick here is to implement a Lamarckian combination of evolution and learning. This means that individually learned skills are coded into the genome and thus become inheritable. This requires more research, but the first results are promising [29].

Last but not least, evolution and learning are capable of delivering working controller software and of performing what we call second-order software engineering. However, this software is only subject to functional testing and does not undergo the usual quality testing, debugging, and verification protocols. Clearly, further thought is required to understand how testing and improving the quality of software could be integrated into an autonomous evolutionary system in the future.
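The Lamarckian combination mentioned above can be sketched as follows: a toy lifetime-learning routine improves the controller parameters, and the result is written back into the heritable genome, so offspring inherit learned skills rather than re-learning from scratch. The representation and learning rule are placeholders, not the method of [29].

```python
import random

def learn(brain, target, steps, rng):
    """Toy lifetime learning: hill-climb brain parameters on a task."""
    for _ in range(steps):
        candidate = [w + rng.gauss(0, 0.1) for w in brain]
        if sum((w - target) ** 2 for w in candidate) < \
           sum((w - target) ** 2 for w in brain):
            brain = candidate
    return brain

def lamarckian_update(genome, learned_brain):
    """Write the learned controller back into the heritable genome,
    making the acquired skill inheritable."""
    genome = dict(genome)
    genome["brain"] = list(learned_brain)
    return genome

rng = random.Random(1)
genome = {"body": "SWW", "brain": [0.0, 0.0]}
learned = learn(genome["brain"], target=1.0, steps=50, rng=rng)
genome = lamarckian_update(genome, learned)
# Offspring now start from the learned brain instead of the raw one,
# which should shorten their own Infancy stage.
print(genome["brain"])
```

In a purely Darwinian setup, by contrast, only the original genome would be inherited and each newborn would repeat the learning from its inherited starting point.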
Ultimately, the ambition of the ARE project is to deliver the first-ever integrated system that can produce generations of robots in a real-world setting. In the long term, the aim is to lay a foundation for a type of robotic design and manufacture that can change our thinking about robot production and open up possibilities for deploying robots in remote, inaccessible, and challenging environments.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.