
Continuous Collaboration for Changing Environments

Part of the book series: Lecture Notes in Computer Science (TFMC, volume 9960)

Abstract

Collective autonomic systems (CAS) are distributed collections of agents that collaborate to achieve the system’s goals while autonomously adapting their behavior. We present the teacher/student architecture for locally coordinated distributed learning and show that in certain scenarios a swarm using teacher/student learning can perform significantly better than agents learning individually. Teacher/student learning serves as the foundation for the continuous collaboration (CC) development approach. We introduce CC, relate it to the EDLC, a life-cycle model for CAS, and show that CC embodies many of the principles proposed for developing CAS.
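The chapter itself does not reproduce its implementation here, but the teacher/student idea can be sketched as follows. In this hypothetical Python sketch (all class and method names are illustrative assumptions, not the authors' API), students learn value estimates from their own experience and periodically synchronise with a local teacher, which merges the estimates and hands the aggregated table back, so that one student's discovery propagates to the rest of the swarm:

```python
import random

class Teacher:
    """Local hub that aggregates value estimates from nearby students
    and hands the merged table back on request."""
    def __init__(self, n_states, n_actions):
        self.q = [[0.0] * n_actions for _ in range(n_states)]

    def merge(self, student_q):
        # optimistic merge: keep the larger estimate per state-action pair
        for s, row in enumerate(student_q):
            for a, v in enumerate(row):
                self.q[s][a] = max(self.q[s][a], v)

class Student:
    """Agent that learns from its own experience and periodically
    synchronises its knowledge with a teacher."""
    def __init__(self, n_states, n_actions, alpha=0.5, epsilon=0.2):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.alpha, self.epsilon = alpha, epsilon

    def act(self, state):
        # epsilon-greedy action selection over the local estimates
        row = self.q[state]
        if random.random() < self.epsilon:
            return random.randrange(len(row))
        return max(range(len(row)), key=row.__getitem__)

    def update(self, state, action, reward):
        # one-step value update (bandit-style, for brevity)
        self.q[state][action] += self.alpha * (reward - self.q[state][action])

    def sync(self, teacher):
        teacher.merge(self.q)                    # share local knowledge
        self.q = [row[:] for row in teacher.q]   # adopt merged knowledge
```

Because students sync at different times (see note 1), the tables they hold after synchronisation are similar but not necessarily identical; the optimistic max-merge shown here is only one of several plausible aggregation rules.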



Notes

  1. Because students exchange information with teachers at different times, the knowledge and strategies they share are typically similar rather than identical. This does not change the gist of the following discussion.

  2. The careful reader may observe that the single robot takes only about 6 times as long as the swarm to reach its maximal performance, not more than 10 times as might be expected. This is an artifact of our learning schedule, which learns only at the end of each episode: the single agent performs many more iterations of the DP algorithm before reaching its maximum performance than the DP-learner does, and thus exploits its available data better. The single agent can therefore focus a larger share of its exploration on promising parts of the graph, negating some of the advantages the swarm has over a single learner. However, a swarm of 10 individual learners would use 10 times the computational resources of a swarm with a DP-learner, which would justify running the DP-learner 10 times as frequently, with corresponding improvements to the swarm’s performance.
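The schedule described in this note can be made concrete with a small sketch (the function and its arguments are hypothetical, not the chapter's code): experience accumulates during an episode, but the planning sweep over all collected data runs only when the episode ends, so an agent that needs more episodes to converge also performs proportionally more sweeps over its data.

```python
def train(n_episodes, collect_episode, plan):
    """Episode-end learning schedule: experience accumulates during each
    episode, and the DP-style planning sweep runs only at its end."""
    data, sweeps = [], 0
    for _ in range(n_episodes):
        data.extend(collect_episode())  # gather experience for one episode
        plan(data)                      # one full sweep over all data so far
        sweeps += 1
    return sweeps

# A single robot that needs ~6x as many episodes to converge also gets
# ~6x as many planning sweeps over its accumulated data.
single = train(60, lambda: ["step"], lambda data: None)
swarm_member = train(10, lambda: ["step"], lambda data: None)
```

This is why the comparison in the note is skewed: running the shared DP-learner more frequently (justified by the swarm's pooled computational budget) would restore the expected advantage.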

  3. Between episodes 30 and 50, the random modifications result in a graph in which some of the routes computed by the non-learning teachers remain viable; the performance is therefore slightly better than in the other episodes in which the graph is damaged.



Author information

Correspondence to Matthias Hölzl.


Copyright information

© 2016 Springer International Publishing AG

About this chapter

Cite this chapter

Hölzl, M., Gabor, T. (2016). Continuous Collaboration for Changing Environments. In: Steffen, B. (ed.) Transactions on Foundations for Mastering Change I. Lecture Notes in Computer Science, vol. 9960. Springer, Cham. https://doi.org/10.1007/978-3-319-46508-1_11


  • DOI: https://doi.org/10.1007/978-3-319-46508-1_11


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-46507-4

  • Online ISBN: 978-3-319-46508-1

  • eBook Packages: Computer Science (R0)
