Multirobot Behavior Synchronization through Direct Neural Network Communication

  • David B. D’Ambrosio
  • Skyler Goodell
  • Joel Lehman
  • Sebastian Risi
  • Kenneth O. Stanley
Part of the Lecture Notes in Computer Science book series (LNCS, volume 7507)

Abstract

Many important real-world problems, such as patrol or search and rescue, could benefit from the ability to train teams of robots to coordinate. One major challenge to achieving such coordination is determining the best way for robots on such teams to communicate with each other. Typical approaches employ hand-designed communication schemes that often require significant effort to engineer. In contrast, this paper presents a new communication scheme called the hive brain, in which the neural network controller of each robot is directly connected to internal nodes of other robots and the weights of these connections are evolved. In this way, the robots can evolve their own internal “language” to speak directly brain-to-brain. This approach is tested in a multirobot patrol synchronization domain where it produces robot controllers that synchronize through communication alone in both simulation and real robots, and that are robust to perturbation and changes in team size.
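To make the hive-brain idea concrete, below is a minimal sketch of one way teammates' hidden-node activations could feed directly into a robot's recurrent controller through a set of evolvable cross-robot weights. This is not the authors' implementation (the paper evolves this connectivity with HyperNEAT); the class name, network sizes, update rule, and random initialization here are illustrative assumptions only.

```python
import numpy as np

class RobotController:
    """Toy recurrent controller whose hidden layer also receives
    direct 'brain-to-brain' input from teammates' hidden nodes."""

    def __init__(self, n_inputs, n_hidden, n_outputs, rng):
        # Within-robot weights (in a real system these would also be evolved).
        self.w_in = rng.normal(scale=0.5, size=(n_hidden, n_inputs))
        self.w_rec = rng.normal(scale=0.5, size=(n_hidden, n_hidden))
        self.w_out = rng.normal(scale=0.5, size=(n_outputs, n_hidden))
        self.hidden = np.zeros(n_hidden)

    def step(self, sensors, teammate_hidden, w_comm):
        # Cross-robot term: teammates' hidden activations enter directly,
        # weighted by the evolvable communication weights w_comm.
        comm = sum(w @ h for w, h in zip(w_comm, teammate_hidden))
        self.hidden = np.tanh(self.w_in @ sensors
                              + self.w_rec @ self.hidden
                              + comm)
        return np.tanh(self.w_out @ self.hidden)

# Usage: two coupled controllers stepping in lockstep, each reading the
# other's previous hidden state through its own brain-to-brain weights.
rng = np.random.default_rng(0)
a = RobotController(n_inputs=3, n_hidden=4, n_outputs=2, rng=rng)
b = RobotController(n_inputs=3, n_hidden=4, n_outputs=2, rng=rng)
w_ab = [rng.normal(scale=0.1, size=(4, 4))]  # evolvable weights from b into a
w_ba = [rng.normal(scale=0.1, size=(4, 4))]  # evolvable weights from a into b
for _ in range(10):
    ha, hb = a.hidden.copy(), b.hidden.copy()
    a.step(np.zeros(3), [hb], w_ab)
    b.step(np.zeros(3), [ha], w_ba)
```

An evolutionary algorithm would treat the cross-robot weight matrices (and, in the paper, the rest of the connectivity) as the genome, selecting for team-level behaviors such as patrol synchronization.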

Keywords

Evolutionary Algorithms · HyperNEAT · Multirobot Teams · Coordination · Communication · Artificial Neural Networks

Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • David B. D’Ambrosio¹
  • Skyler Goodell¹
  • Joel Lehman¹
  • Sebastian Risi¹
  • Kenneth O. Stanley¹
  1. Department of Electrical Engineering and Computer Science, University of Central Florida, Orlando, USA
