
Evolution of swarm robotics systems with novelty search


Abstract

Novelty search is a recent artificial evolution technique that challenges traditional evolutionary approaches. In novelty search, solutions are rewarded based on their novelty, rather than their quality with respect to a predefined objective. The lack of a predefined objective precludes premature convergence caused by a deceptive fitness function. In this paper, we apply novelty search combined with NEAT to the evolution of neural controllers for homogeneous swarms of robots. Our empirical study is conducted in simulation, and we use a common swarm robotics task—aggregation, and a more challenging task—sharing of an energy recharging station. Our results show that novelty search is unaffected by deception, is notably effective in bootstrapping evolution, can find solutions with lower complexity than fitness-based evolution, and can find a broad diversity of solutions for the same task. Even in non-deceptive setups, novelty search achieves solution qualities similar to those obtained in traditional fitness-based evolution. Our study also encompasses variants of novelty search that work in concert with fitness-based evolution to combine the exploratory character of novelty search with the exploitatory character of objective-based evolution. We show that these variants can further improve the performance of novelty search. Overall, our study shows that novelty search is a promising alternative for the evolution of controllers for robotic swarms.
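
To make the selection mechanism concrete, the sketch below (in Java, since the NEAT implementation referenced in the notes is Java-based) illustrates how a novelty score is typically computed in novelty search, following Lehman and Stanley's formulation: an individual's novelty is its mean distance, in behaviour space, to its k nearest neighbours among the current population and an archive of previously novel behaviours. The behaviour characterisation, the value of k, and the archive-admission threshold are illustrative assumptions, not the exact settings used in this study.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Minimal sketch of the novelty metric: the novelty of an individual is the
// mean distance from its behaviour characterisation to its k nearest
// neighbours, drawn from the current population and from an archive of
// previously novel behaviours. Behaviour vectors, k, and the archive
// threshold are illustrative assumptions.
public class NoveltyArchive {

    private final List<double[]> archive = new ArrayList<>();
    private final int k;                 // number of nearest neighbours
    private final double addThreshold;   // novelty required to enter the archive

    public NoveltyArchive(int k, double addThreshold) {
        this.k = k;
        this.addThreshold = addThreshold;
    }

    /** Euclidean distance between two behaviour characterisations. */
    private static double distance(double[] a, double[] b) {
        double sum = 0.0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    /** Novelty = mean distance to the k nearest behaviours in population + archive. */
    public double novelty(double[] behaviour, List<double[]> population) {
        List<Double> distances = new ArrayList<>();
        for (double[] other : population) {
            if (other != behaviour) {           // skip the individual itself
                distances.add(distance(behaviour, other));
            }
        }
        for (double[] archived : archive) {
            distances.add(distance(behaviour, archived));
        }
        Collections.sort(distances);
        int neighbours = Math.min(k, distances.size());
        double sum = 0.0;
        for (int i = 0; i < neighbours; i++) {
            sum += distances.get(i);
        }
        return neighbours > 0 ? sum / neighbours : 0.0;
    }

    /** Sufficiently novel behaviours are stored so the novelty measure stays dynamic. */
    public void maybeAdd(double[] behaviour, double noveltyScore) {
        if (noveltyScore > addThreshold) {
            archive.add(behaviour);
        }
    }
}
```

The variants mentioned above that combine novelty search with fitness-based evolution can, for instance, rank individuals by a weighted blend of normalised novelty and fitness scores; the specific combination schemes evaluated in this work are described in the full text.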


Notes

  1. NeuroEvolution of Augmenting Topologies for Java—http://neat4j.sourceforge.net.

  2. The plotted values are fitness scores measured by the fitness function F_a, and not the actual scores used for selection in novelty search or in random evolution. Note that even random evolution has an ascending fitness trajectory, because each data point plotted is the highest score achieved so far in the evolutionary process. The trajectory for random evolution can thus increase from time to time when an individual, by chance, scores higher than any previously evaluated individual (a minimal sketch of this best-so-far computation follows these notes).

  3. We empirically determined the probability of randomly generating a solution where at least one robot survives until the end (in any of the 10 trials) to be approximately 1 %.
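
As a minimal illustration of the plotting described in note 2 (the class and method names are hypothetical, and one best-of-generation score per generation is assumed), the plotted curve is a running maximum over the per-generation fitness scores:

```java
// Minimal sketch: the plotted trajectory is the best fitness achieved so far,
// so it never decreases, even for random evolution.
public class BestSoFar {
    public static double[] bestSoFarTrajectory(double[] generationBest) {
        double[] trajectory = new double[generationBest.length];
        double best = Double.NEGATIVE_INFINITY;
        for (int g = 0; g < generationBest.length; g++) {
            best = Math.max(best, generationBest[g]);  // keep the highest score seen so far
            trajectory[g] = best;
        }
        return trajectory;
    }
}
```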


Acknowledgements

This research has been supported by Fundação para a Ciência e a Tecnologia (FCT) grants PTDC/EEACRO/104658/2008, PEst-OE/EEI/LA0008/2011, and SFRH/BD/89095/2012.

Author information


Corresponding author

Correspondence to Jorge Gomes.

Electronic Supplementary Material

Below are the links to the electronic supplementary material.

  • (MP4 12.1 MB)

  • (MP4 11.0 MB)

  • (MP4 15.9 MB)


About this article

Cite this article

Gomes, J., Urbano, P. & Christensen, A.L. Evolution of swarm robotics systems with novelty search. Swarm Intell 7, 115–144 (2013). https://doi.org/10.1007/s11721-013-0081-z

