
Adaptive Agents in Changing Environments, the Role of Modularity


Abstract

We explored the role of modularity as a means to improve evolvability in populations of adaptive agents. We performed two sets of artificial life experiments. In the first, the adaptive agents were neural networks controlling the behavior of simulated garbage-collecting robots; modularity referred to the networks' architectural organization, and evolvability to the capacity of the population to adapt to environmental changes, measured by the agents' performance. In the second, the agents were programs that control the changes in a network's synaptic weights (learning algorithms), the modules were emergent clusters of symbols with a well-defined function, and evolvability was measured through the level of symbol diversity across programs. We found that the presence of modularity (either imposed by construction or arising as an emergent property in a favorable environment) is strongly correlated with the presence of very fit agents that adapt effectively to environmental changes. In the case of learning algorithms we also observed that symbol diversity and modularity are strongly correlated quantities.


References

  1. Wagner GP, Altenberg L (1996) Complex adaptations and the evolution of evolvability. Evolution 50:967–976

  2. Kirschner M, Gerhart J (1998) Evolvability. Proc Natl Acad Sci USA 95:8420–8427

  3. Calabretta R, Di Ferdinando A, Wagner GP, Parisi D (2003) What does it take to evolve behaviorally complex organisms? BioSystems 69:245–262

  4. Calabretta R (2007) Genetic interference reduces the evolvability of modular and non-modular visual neural networks. Philos Trans R Soc B 362:403–410

  5. Calabretta R, Nolfi S, Parisi D, Wagner GP (1997) An artificial life model for investigating the evolution of modularity. In: Bar-Yam Y (ed) Proceedings of the International Conference on Complex Systems. Addison-Wesley, Boston

  6. Calabretta R, Nolfi S, Parisi D, Wagner GP (2000) Duplication of modules facilitates the evolution of functional specialization. Artif Life 6:69–84

  7. Di Ferdinando A, Calabretta R, Parisi D (2000) Evolving modular architectures for neural networks. In: French R, Sougné J (eds) Proceedings of the Sixth Neural Computation and Psychology Workshop, pp 253–262

  8. Neirotti J, Caticha N (2003) Dynamics of the evolution of learning algorithms by selection. Phys Rev E 67:041912

  9. Rumelhart D, McClelland J (1986) Parallel distributed processing: explorations in the microstructure of cognition. MIT Press, Cambridge

  10. Kashtan N, Alon U (2005) Spontaneous evolution of modularity and network motifs. Proc Natl Acad Sci USA 102:13773

  11. Sun J, Deem MW (2007) Spontaneous emergence of modularity in a model of evolving individuals. Phys Rev Lett 99:228107

  12. Nolfi S (1997) Using emergent modularity to develop control systems for mobile robots. Adapt Behav 5:343–363

  13. Hornby GS (2005) Measuring, enabling and comparing modularity, regularity and hierarchy in evolutionary design. In: Proceedings of the 2005 Conference on Genetic and Evolutionary Computation, Washington, DC, pp 1729–1736

  14. Miglino O, Lund HH, Nolfi S (1995) Evolving mobile robots in simulated and real environments. Artif Life 4:417–434

  15. Holland JH (1992) Adaptation in natural and artificial systems: an introductory analysis with applications to biology, control and artificial intelligence. University of Michigan Press, Ann Arbor

  16. Koza J (1992) Genetic programming: on the programming of computers by means of natural selection. MIT Press, Cambridge

  17. Neirotti JP (2010) Can a student learn optimally from two different teachers? J Phys A 43:015101

  18. Caticha N, Kinouchi O (1998) Time ordering in the evolution of information processing and modulation systems. Philos Mag 77:1565

  19. Neirotti JP, Franco L (2010) Computational capabilities of multilayer committee machines. J Phys A 43:445103


Acknowledgments

The financial support of the Italian National Research Council (Short-Term Mobility Program at Yale University to RC) is gratefully acknowledged. The authors thank the referees for valuable suggestions on the manuscript. RC would also like to thank Gunter Wagner, Stefano Nolfi and Freek Duynstee for their contribution to the early stages of this work.

Author information

Corresponding author

Correspondence to Juan Neirotti.

Appendix A: Algorithm construction by Genetic Programming (GP)

In this section we briefly describe our implementation of GP for the problem at hand. A conventional genetic algorithm (GA) works by manipulating fixed-length character strings that represent candidate solutions of a given problem. There are problems, however, for which fixed-length character structures are not flexible enough to represent candidate solutions. In those cases a program representing the solution is a better alternative.

The simulation starts with a population of randomly created programs, all constructed from predetermined sets of variables and operators. The construction process follows rules that prevent the creation of programs that cannot be evaluated. The programs are ranked by their fitness, and the GP operations are then applied to create the population of the next generation. These two steps are iterated.

Probably the most didactic way to illustrate the functioning of GP is by using the computing language LISP. We call a faithful symbolic expression (FSE) a LISP program that does not return an error message when evaluated. Program components can be either functional operators or variables. Let \(\mathcal{F}\) be the set of all the operators and \(\mathcal{V}\) the set of all the variables used to write down the FSEs. The choice of these sets depends upon the nature of the problem being solved. For instance, if the solution of a problem can be represented by quotients of polynomials, a suitable choice of operators and variables is \(\mathcal{F}\) = {+ - * /} and \(\mathcal{V}\) = {x 1}. An example of an FSE is (+ (+ x x) (* x (- x (- x x)))), which is a (non-unique) LISP representation of the function \(2x+x^{2}\). The simplest FSE is a variable. The next simplest FSE is an operator followed by the appropriate number of variables (two in the example above). All FSEs are either a variable or a list with an operator followed by an appropriate number of FSEs. Examples of unfaithful symbolic expressions are (x x), (+ x *) and (x - x).
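To make the representation concrete, the sketch below (an illustration of ours, not the paper's implementation) encodes FSEs as nested Python lists and evaluates them recursively, using the operator set {+ - * /} and variable set {x 1} of the polynomial example above.

```python
# Illustrative sketch: FSEs from the polynomial example, written as nested
# Python lists instead of LISP lists, with a recursive evaluator.
OPERATORS = {
    '+': lambda a, b: a + b,
    '-': lambda a, b: a - b,
    '*': lambda a, b: a * b,
    '/': lambda a, b: a / b if b != 0 else 1.0,  # naive protection against division by zero
}

def evaluate(expr, x):
    """Evaluate an FSE given as a nested list, e.g. ['+', 'x', '1']."""
    if expr == 'x':
        return x
    if expr == '1':
        return 1.0
    op, *args = expr
    return OPERATORS[op](*(evaluate(arg, x) for arg in args))

# The FSE from the text, (+ (+ x x) (* x (- x (- x x)))), i.e. 2x + x^2:
fse = ['+', ['+', 'x', 'x'], ['*', 'x', ['-', 'x', ['-', 'x', 'x']]]]
assert evaluate(fse, 3.0) == 2 * 3.0 + 3.0 ** 2

# Unfaithful expressions such as ['x', 'x'] or ['+', 'x', '*'] raise an error
# when evaluated, mirroring the LISP error message mentioned above.
```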

The GP operations considered in the present work are asexual reproduction, mutation and cross-over. During asexual reproduction a certain fraction of the top-ranked individuals, the Top set, is copied without any modification into the new generation, ensuring the preservation of the structures that made them successful. Mutation is implemented on an individual by changing an atom at a random position: the new and old atoms are different but of the same kind, to ensure faithfulness. Finally, the modified tree is copied into the new generation. In order to accelerate the dynamics, different mutation rates can be used for different atom types.
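A minimal sketch of this point mutation, assuming the same nested-list encoding as above (the operator and variable atoms below are those of the polynomial example, not the full set of Eq. (11), and the per-atom-type mutation rates mentioned in the text are omitted):

```python
import copy
import random

OPERATOR_ATOMS = ['+', '-', '*', '/']   # binary operators from the polynomial example
VARIABLE_ATOMS = ['x', '1']

def atom_positions(expr, path=()):
    """Yield (path, atom, kind) for every atom in a nested-list FSE."""
    if isinstance(expr, list):
        yield path + (0,), expr[0], 'operator'
        for i, sub in enumerate(expr[1:], start=1):
            yield from atom_positions(sub, path + (i,))
    else:
        yield path, expr, 'variable'

def mutate(expr):
    """Replace one randomly chosen atom by a different atom of the same kind."""
    tree = copy.deepcopy(expr)
    path, atom, kind = random.choice(list(atom_positions(tree)))
    pool = OPERATOR_ATOMS if kind == 'operator' else VARIABLE_ATOMS
    new_atom = random.choice([a for a in pool if a != atom])
    if not path:                       # the whole FSE is a single variable
        return new_atom
    node = tree
    for index in path[:-1]:
        node = node[index]
    node[path[-1]] = new_atom
    return tree

print(mutate(['+', ['*', 'x', 'x'], '1']))   # e.g. ['+', ['-', 'x', 'x'], '1']
```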

There are no sexes associated with the programs, and cross-over is a hermaphrodite sexual GP operation. Cross-over parents are chosen by tournament, which works as follows. First, consider a subset of the population and select an age \(a\) such that \(1\le a\le P\) (where \(P\) is the maximum age) with a probability proportional to \(a\). From the chosen subset, a certain number of individuals (e.g. ten) are selected at random, and the program with the smallest generalization error at age \(a\) is selected for cross-over. In our experiments, the first parent is chosen from the Top set by tournament [16], and the second parent is chosen by tournament from the entire population. From each parent, an atom of the same type is selected at random, and the FSEs with roots at the selected atoms are interchanged to generate two offspring. In order to avoid uncontrolled growth, any offspring whose depth exceeds a given threshold is deleted. The mutation and cross-over operations are represented in Figs. 11 and 12.
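The sketch below illustrates the subtree exchange and the depth threshold. It is an illustration, not the authors' code: the age-dependent tournament of the text is simplified to picking the candidate with the smallest stored error among ten random individuals, and each individual is assumed to be a dictionary with a 'tree' (nested-list FSE) and an 'error' field.

```python
import copy
import random

MAX_DEPTH = 17      # depth threshold used in the simulations (see below)

def depth(expr):
    """Nesting depth of a nested-list FSE; a bare atom has depth 0."""
    if not isinstance(expr, list):
        return 0
    return 1 + max(depth(sub) for sub in expr[1:])

def subtrees(expr, path=()):
    """Yield (path, subtree, kind) for every node of a nested-list FSE."""
    kind = 'operator' if isinstance(expr, list) else 'variable'
    yield path, expr, kind
    if isinstance(expr, list):
        for i, sub in enumerate(expr[1:], start=1):
            yield from subtrees(sub, path + (i,))

def replace_subtree(expr, path, new_sub):
    """Return a copy of expr with the subtree at `path` replaced by new_sub."""
    if not path:
        return copy.deepcopy(new_sub)
    tree = copy.deepcopy(expr)
    node = tree
    for index in path[:-1]:
        node = node[index]
    node[path[-1]] = copy.deepcopy(new_sub)
    return tree

def tournament(pool, k=10):
    """Simplified tournament: smallest stored error among k random candidates."""
    candidates = random.sample(pool, min(k, len(pool)))
    return min(candidates, key=lambda ind: ind['error'])

def crossover(top_set, population):
    """Exchange same-kind subtrees between a Top-set parent and a population parent."""
    parent1 = tournament(top_set)['tree']
    parent2 = tournament(population)['tree']
    kind = random.choice(['operator', 'variable'])
    points1 = [(p, s) for p, s, knd in subtrees(parent1) if knd == kind]
    points2 = [(p, s) for p, s, knd in subtrees(parent2) if knd == kind]
    if not points1 or not points2:      # e.g. a bare-variable parent and kind == 'operator'
        return []
    path1, sub1 = random.choice(points1)
    path2, sub2 = random.choice(points2)
    offspring = [replace_subtree(parent1, path1, sub2),
                 replace_subtree(parent2, path2, sub1)]
    # offspring above the depth threshold are deleted, as described in the text
    return [child for child in offspring if depth(child) <= MAX_DEPTH]

# Usage with hypothetical individuals of the assumed {'tree': ..., 'error': ...} form:
population = [{'tree': ['+', 'x', '1'], 'error': 0.3},
              {'tree': ['*', ['+', 'x', 'x'], 'x'], 'error': 0.1},
              {'tree': ['-', 'x', '1'], 'error': 0.5}]
top_set = sorted(population, key=lambda ind: ind['error'])[:2]
print(crossover(top_set, population))
```

Because the crossover points are required to be of the same kind, an operator-rooted subtree is always exchanged with another operator-rooted subtree (and a variable with a variable), which preserves faithfulness of the offspring.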

Fig. 11 LISP programs as parse trees before and after a GP mutation. A randomly selected atom in the parse tree is changed to another randomly selected atom of the same type. In this example a multiplication operation is replaced by an addition

Fig. 12 GP cross-over. Two parents are selected from the population, a random point in each tree is selected, and the branches that grow from these points are interchanged in order to generate two offspring

The GP parameters used in the simulation are shown in Table 1. A short discussion about the parameter values will be presented in the conclusions. At generation zero a population of 500 FSEs is created at random. The programs have (in agreement with Table 1) a maximum depth of \(7\) nested parentheses. The variable set is as presented in (5) and the operator set is

$$\begin{aligned} \mathcal{F}=\left\{ \mathsf{Psqr\; Pexp\; Plog\; abs\;+\;-\;*\;\%\; p.\; pN.\; ev\!*\; vv\!+\; vv\!-}\right\} \,, \end{aligned}$$
(11)

where Psqr, Pexp, Plog, and % are the protected square root, exponential, logarithm and division; abs, +, -, and * are the usual absolute value, addition, subtraction and multiplication; and p., pN., \(\mathsf{ev\!*}\), \(\mathsf{vv\!+}\), and \(\mathsf{vv\!-}\) are, respectively, the inner product (\(\mathbf{x}\cdot \mathbf{y}\) for \(\mathbf{x},\mathbf{y}\in \mathbb{R}^{N}\)), the normalized inner product (\(\mathbf{x}\cdot \mathbf{y}/N\)), the product of a scalar and a vector (\(a\mathbf{x}\)), and the addition and subtraction of two vectors (\(\mathbf{x}\pm \mathbf{y}\)). Protected functions are functions whose definition domains have been extended in order to accept a larger set of arguments. The definitions of these functions appear in Table 2.
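Table 2 gives the exact definitions used in the paper. The sketch below shows one common convention for such domain extensions (absolute values for the square root and logarithm, a clipped argument for the exponential, and division by zero mapped to 1); these particular choices are an assumption for illustration, not a transcription of Table 2, and the vector atoms are shown operating on plain Python lists.

```python
import math

def psqr(x):
    """Protected square root: sqrt(|x|), defined for negative arguments (assumed convention)."""
    return math.sqrt(abs(x))

def pexp(x):
    """Protected exponential: the argument is clipped to avoid overflow (assumed convention)."""
    return math.exp(min(x, 700.0))

def plog(x):
    """Protected logarithm: log|x|, with plog(0) set to 0 (assumed convention)."""
    return math.log(abs(x)) if x != 0 else 0.0

def pdiv(a, b):
    """Protected division (the % atom of Eq. (11)): division by zero returns 1 (assumed)."""
    return a / b if b != 0 else 1.0

# The vector atoms, operating on lists of floats (x, y in R^N):
def inner(x, y):                 # p.  : x . y
    return sum(a * b for a, b in zip(x, y))

def normalized_inner(x, y):      # pN. : x . y / N
    return inner(x, y) / len(x)

def scale(a, x):                 # ev* : a x
    return [a * xi for xi in x]

def vadd(x, y):                  # vv+ : x + y
    return [a + b for a, b in zip(x, y)]

def vsub(x, y):                  # vv- : x - y
    return [a - b for a, b in zip(x, y)]
```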

Table 1 Control parameters for the GP simulation in our experiments.
Table 2 Definition of the protected functions as FSEs

If either of the offspring has a depth greater than 17, it is deleted. With a mutation rate of 0.01 % (one every 20 generations), a mutation is applied to the offspring. The process is repeated until the new population reaches its full size, fixed here at 500.

After the new population is created, the fitness of each individual is measured and a new ranking is built.
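As a closing illustration of this ranking step, the sketch below ranks nested-list programs and extracts a Top set. The symbolic-regression fitness (error against the target \(2x+x^{2}\)) and the Top-set fraction of 0.1 are invented for the example; in the paper the programs are ranked by the generalization error of the learning algorithms they encode, with the Top-set size given in Table 1.

```python
import random

def evaluate(expr, x):
    """Recursive evaluator for nested-list FSEs over {+ - * %} and {x 1}."""
    if expr == 'x':
        return x
    if expr == '1':
        return 1.0
    op, a, b = expr
    left, right = evaluate(a, x), evaluate(b, x)
    if op == '+':
        return left + right
    if op == '-':
        return left - right
    if op == '*':
        return left * right
    return left / right if right != 0 else 1.0   # '%': protected division

def fitness(expr, samples):
    """Toy fitness: negative mean squared error against the target 2x + x^2."""
    return -sum((evaluate(expr, x) - (2 * x + x ** 2)) ** 2 for x in samples) / len(samples)

def rank(population, samples, top_fraction=0.1):
    """Sort programs by fitness (best first) and return the ranking and the Top set."""
    ranking = sorted(population, key=lambda p: fitness(p, samples), reverse=True)
    top_set = ranking[:max(1, int(top_fraction * len(ranking)))]
    return ranking, top_set

samples = [random.uniform(-1.0, 1.0) for _ in range(20)]
population = [['+', 'x', '1'],
              ['*', 'x', 'x'],
              ['+', ['+', 'x', 'x'], ['*', 'x', 'x']]]   # exactly 2x + x^2
ranking, top_set = rank(population, samples)
print(ranking[0])    # the exact program ranks first
```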


Cite this article

Calabretta, R., Neirotti, J. Adaptive Agents in Changing Environments, the Role of Modularity. Neural Process Lett 42, 257–274 (2015). https://doi.org/10.1007/s11063-014-9355-8
