Abstract
We explored the role of modularity as a means to improve evolvability in populations of adaptive agents, performing two sets of artificial life experiments. In the first, the adaptive agents were neural networks controlling the behavior of simulated garbage-collecting robots; modularity referred to the networks' architectural organization, and evolvability to the capacity of the population to adapt to environmental changes, as measured by the agents' performance. In the second, the agents were programs that control the changes in a network's synaptic weights (learning algorithms), the modules were emergent clusters of symbols with a well-defined function, and evolvability was measured through the level of symbol diversity across programs. We found that the presence of modularity (whether imposed by construction or arising as an emergent property in a favorable environment) is strongly correlated with the presence of very fit agents that adapt effectively to environmental changes. In the case of learning algorithms we also observed that symbol diversity and modularity are strongly correlated quantities.
References
Wagner GP, Altenberg L (1996) Complex adaptations and the evolution of evolvability. Evolution 50:967–976
Kirschner M, Gerhart J (1998) Evolvability. Proc Natl Acad Sci USA 95:8420–8427
Calabretta R, Di Ferdinando A, Wagner GP, Parisi D (2003) What does it take to evolve behaviorally complex organisms? BioSyst 69:245–262
Calabretta R (2007) Genetic interference reduces the evolvability of modular and non-modular visual neural networks. Philos Trans R Soc B 362:403–410
Calabretta R, Nolfi S, Parisi D, Wagner GP (1997) An artificial life model for investigating the evolution of modularity. In: Bar-Yam Y (ed) Proceedings of the International Conference on Complex Systems. Addison-Wesley, Boston
Calabretta R, Nolfi S, Parisi D, Wagner GP (2000) Duplication of modules facilitates the evolution of functional specialization. Artificial Life 6:69–84
Di Ferdinando A, Calabretta R, Parisi D (2000) Evolving modular architectures for neural networks. In: French R, Sougné J (eds) Proceedings of the Sixth Neural Computation and Psychology Workshop, pp 253–262
Neirotti J, Caticha N (2003) Dynamics of the evolution of learning algorithms by selection. Phys Rev E 67:041912
Rumelhart D, McClelland J (1986) Parallel distributed processing: explorations in the microstructure of cognition. MIT Press, Cambridge
Kashtan N, Alon U (2005) Spontaneous evolution of modularity and network motifs. Proc Natl Acad Sci USA 102:13773
Sun J, Deem MW (2007) Spontaneous emergence of modularity in a model of evolving individuals. Phys Rev Lett 99:228107
Nolfi S (1997) Using emergent modularity to develop control systems for mobile robots. Adapt Behav 5:343–363
Hornby GS (2005) Measuring, enabling and comparing modularity, regularity and hierarchy in evolutionary design. In: Proceedings of the 2005 Conference on Genetic and Evolutionary Computation, Washington, DC, pp 1729–1736
Miglino O, Lund HH, Nolfi S (1995) Evolving mobile robots in simulated and real environments. Artificial Life 4:417–434
Holland JH (1992) Adaptations in natural and artificial systems: an introductory analysis with applications to biology, control and artificial intelligence. University of Michigan Press, Ann Arbor
Koza J (1992) Genetic programming: on the programming of computers by means of natural selection. MIT Press, Cambridge
Neirotti JP (2010) Can a student learn optimally from two different teachers? J Phys A 43:015101
Caticha N, Kinouchi O (1998) Time ordering in the evolution of information processing and modulation systems. Philos Mag 77:1565
Neirotti JP, Franco L (2010) Computational capabilities of multilayer committee machines. J Phys A 43:445103
Acknowledgments
The financial support of the Italian National Research Council (Short-Term Mobility Program at Yale University to RC) is gratefully acknowledged. The authors thank the referees for valuable suggestions on the manuscript. RC would also like to thank Gunter Wagner, Stefano Nolfi and Freek Duynstee for their contribution to the early stages of this work.
Appendix A: Algorithm Construction by Genetic Programming (GP)
In this section we briefly describe our implementation of GP for the problem at hand. A conventional genetic algorithm (GA) works by manipulating fixed-length character strings that represent candidate solutions of a given problem. For some problems, however, fixed-length character structures are not flexible enough to represent candidate solutions. In those cases a program representing the solution is a better alternative.
The simulation starts with a population of randomly created programs, all constructed from pre-determined sets of variables and operators. The construction process respects certain rules in order to avoid creating programs that cannot be evaluated. The programs are ranked by their fitness, and the GP operations are then used to create the population of the next generation. These two steps are iterated.
Probably the most didactic way to illustrate the functioning of GP is by using the computing language LISP. We call a Faithful symbolic expression (FSE) a LISP program that does not return an error message when evaluated. Program components can be either functional operators or variables. Let \(\mathcal{F}\) be the set of all the operators and \(\mathcal{V}\) the set of all the variables used to write down the FSEs. The choice of these sets depends upon the nature of the problem being solved. For instance, if the solution of a problem can be represented by quotients of polynomials, a suitable choice of operators and variables is \(\mathcal{F}\) = {+ - * /} and \(\mathcal{V}\) = {x 1}. An example of an FSE is (+ (+ x x) (* x (- x (- x x)))), which is a (non-unique) LISP representation of the function \(2x+x^{2}\). The simplest FSE is a variable. The next simplest FSE is an operator followed by the appropriate number of variables (two in the example above). All FSEs are either a variable or a list with an operator followed by an appropriate number of FSEs. Examples of unfaithful symbolic expressions are (x x), (+ x *) and (x - x).
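The recursive definition above can be made concrete in a short sketch (not the authors' code): FSEs are represented as nested tuples, using the quotient-of-polynomials sets \(\mathcal{F}\) = {+ - * /} and \(\mathcal{V}\) = {x 1} from the text.

```python
import operator

# Operator set: name -> (arity, function); variable set as plain strings.
OPS = {"+": (2, operator.add), "-": (2, operator.sub),
       "*": (2, operator.mul), "/": (2, operator.truediv)}
VARS = {"x", "1"}

def is_fse(expr):
    """Check recursively that expr is a faithful symbolic expression:
    either a variable, or an operator followed by the right number of FSEs."""
    if isinstance(expr, str):
        return expr in VARS                      # a lone variable is the simplest FSE
    if not expr or expr[0] not in OPS:
        return False
    arity, _ = OPS[expr[0]]
    return len(expr) == 1 + arity and all(is_fse(sub) for sub in expr[1:])

def evaluate(expr, x):
    """Evaluate an FSE at a given value of x (all operators here are binary)."""
    if expr == "x":
        return x
    if expr == "1":
        return 1.0
    _, fn = OPS[expr[0]]
    return fn(evaluate(expr[1], x), evaluate(expr[2], x))

# The example from the text, (+ (+ x x) (* x (- x (- x x)))) = 2x + x^2:
e = ("+", ("+", "x", "x"), ("*", "x", ("-", "x", ("-", "x", "x"))))
```

The unfaithful expressions from the text, such as (x x) or (+ x *), are rejected by `is_fse` because a list must start with an operator and carry exactly its arity of sub-expressions.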
The GP operations considered in the present work are asexual reproduction, mutation and cross-over. During asexual reproduction a certain fraction of the top-ranked individuals, the Top set, are copied without any modification into the new generation, ensuring the preservation of the structures that made them successful. Mutation is implemented on an individual by changing an atom at a random position. The new and old atoms are different but of the same kind, to ensure faithfulness. Finally the modified tree is copied into the new generation. In order to accelerate the dynamics, different mutation rates can be used for different atom types.
There are no sexes associated with the programs, and cross-over is a hermaphroditic sexual GP operation. Cross-over parents are chosen by tournament, which works as follows. First, consider a subset of the population; then select an age \(a\) such that \(1\le a\le P\) (where \(P\) is the maximum age) with probability proportional to \(a\). From the chosen set, a certain number of individuals (e.g. ten) are selected at random. The program with the smallest generalization error at age \(a\) is selected for cross-over. In our experiments, the first parent is chosen from the Top set by tournament [16]. The second parent is chosen by tournament among the entire population. From each parent, an atom of the same type is selected at random, and the FSEs rooted at the selected atoms are interchanged to generate two offspring. In order to avoid uncontrolled growth, if the depth of either offspring exceeds a given threshold, that program is deleted. The mutation and cross-over operations are represented in Figs. 11 and 12.
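The tournament just described can be sketched as below. The layout of the error records is an assumption: `errors[i][a-1]` is taken to hold individual `i`'s generalization error at age `a`.

```python
import random

def tournament(population, errors, P, k=10):
    """Select one cross-over parent by tournament: draw an age a in [1, P]
    with probability proportional to a, sample k individuals at random, and
    return the one with the smallest generalization error at age a."""
    ages = list(range(1, P + 1))
    a = random.choices(ages, weights=ages)[0]        # older ages are more likely
    contestants = random.sample(range(len(population)), k)
    best = min(contestants, key=lambda i: errors[i][a - 1])
    return population[best]
```

The first parent would be drawn with `population` set to the Top set, the second with the entire population.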
The GP parameters used in the simulation are shown in Table 1. A short discussion of the parameter values is presented in the conclusions. At generation zero a population of 500 FSEs is created at random. The programs have (in agreement with Table 1) a maximum depth of \(7\) nested parentheses. The variable set is as presented in (5) and the operator set is

\(\mathcal{F}\) = {Psqr Pexp Plog % abs + - * p. pN. ev\(\mathsf * \) vv\(\mathsf + \) vv\(\mathsf - \)},

where Psqr, Pexp, Plog and % are the protected square root, exponential, logarithm and division; abs, +, - and * are the usual absolute value, addition, subtraction and multiplication; and p., pN., ev\(\mathsf * \), vv\(\mathsf + \) and vv\(\mathsf - \) are, respectively, the inner product (e.g. \(\mathbf{x}\cdot \mathbf{y}\) for \(\mathbf {x},\mathbf {y}\in \mathbb {R}^{N}\)), the normalized inner product (\(\mathbf{x}\cdot \mathbf{y}/N\)), the product of a scalar and a vector (\(a\mathbf{x}\)), and the addition and subtraction of two vectors (\(\mathbf{x}\pm \mathbf{y}\)). Protected functions are functions whose definition domains have been extended in order to accept a larger set of arguments. The definitions of these functions appear in Table 2.
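The paper's exact definitions of the protected functions are in Table 2, which is not reproduced here; the sketch below shows one common GP convention for such extensions, as an assumption rather than the authors' definitions.

```python
import math

def psqrt(x):
    """Protected square root: operate on |x| so negative arguments are accepted."""
    return math.sqrt(abs(x))

def plog(x):
    """Protected logarithm: log of |x|, with a fallback value at x = 0."""
    return math.log(abs(x)) if x != 0 else 0.0

def pexp(x):
    """Protected exponential: clip the argument to avoid floating-point overflow."""
    return math.exp(min(x, 700.0))

def pdiv(x, y):
    """Protected division (%): return a fallback value when dividing by zero."""
    return x / y if y != 0 else 1.0
```

Whatever convention is used, the point is that any FSE built from these operators evaluates without raising an error, preserving faithfulness.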
If either offspring has a depth greater than 17, it is deleted. With a mutation rate of 0.01 % (roughly one mutation every 20 generations) a mutation is applied to an offspring. The process is repeated until the new population reaches its full size, fixed here at 500.
After the new population is created, the fitness of each individual is measured and a new ranking is built.
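Putting the steps together, one generation can be sketched as below. The genetic operators are passed in as functions, and the Top-set fraction is an assumption (the paper's value is in Table 1, not reproduced here).

```python
import random

def next_generation(ranked, fitness, select, crossover, mutate, depth,
                    pop_size=500, top_frac=0.1, mutation_rate=1e-4, max_depth=17):
    """One GP generation following the steps in the text: copy the Top set
    unchanged, fill the rest with cross-over offspring, delete over-deep
    offspring, mutate rarely, then re-rank by the supplied fitness key."""
    top = ranked[: max(1, int(top_frac * len(ranked)))]
    new_pop = list(top)                          # asexual reproduction of the Top set
    while len(new_pop) < pop_size:
        p1 = select(top)                         # first parent: tournament in the Top set
        p2 = select(ranked)                      # second parent: whole population
        for child in crossover(p1, p2):
            if depth(child) > max_depth:         # uncontrolled growth: delete
                continue
            if random.random() < mutation_rate:  # rare mutation of an offspring
                child = mutate(child)
            new_pop.append(child)
    return sorted(new_pop[:pop_size], key=fitness)   # re-rank (ascending in the key)
```

A toy run with strings in place of FSEs (crossover concatenates a fragment of the other parent, mutation is the identity) shows the population refilling and re-ranking:

```python
pop = ["ab", "cd", "ef", "gh"]
gen = next_generation(pop, fitness=len, select=random.choice,
                      crossover=lambda a, b: [a + b[0], b + a[0]],
                      mutate=lambda s: s, depth=len,
                      pop_size=4, top_frac=0.5, max_depth=3)
```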
Calabretta, R., Neirotti, J. Adaptive Agents in Changing Environments, the Role of Modularity. Neural Process Lett 42, 257–274 (2015). https://doi.org/10.1007/s11063-014-9355-8