Learning by Experience from Others — Social Learning and Imitation in Animals and Robots

Chapter in: Adaptivity and Learning

Abstract

A challenging current research direction is the design of intelligent software systems ('agents') that are able to autonomously solve certain tasks within their environment. Application areas for software agents can be found in robotics, for example agents that control robots to rescue people in dangerous environments, and also in virtual worlds such as electronic markets, where intelligent agents have to compete against other market participants that pursue their own goals.
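The agent concept described above, an entity that repeatedly observes its environment, chooses an action, and acts in pursuit of a goal, can be illustrated with a minimal sense-decide-act loop. The toy environment and policy below are invented purely for illustration and are not taken from the chapter:

```python
# Minimal sketch of an autonomous agent: a sense-decide-act loop
# on a toy one-dimensional corridor (positions 0..4, goal at 4).
# All names here are illustrative assumptions, not from the chapter.

def sense(position):
    """The agent's observation: here simply its current position."""
    return position

def decide(observation, goal=4):
    """A trivial goal-directed policy: step toward the goal."""
    return 1 if observation < goal else 0

def act(position, action):
    """The environment applies the chosen action, clipped to the corridor."""
    return max(0, min(4, position + action))

def run_agent(start=0, goal=4, max_steps=10):
    """Run the sense-decide-act loop until the goal or a step limit."""
    position, steps = start, 0
    while position != goal and steps < max_steps:
        observation = sense(position)
        action = decide(observation, goal)
        position = act(position, action)
        steps += 1
    return position, steps

final_position, steps_taken = run_agent()
```

Real agents of the kind the chapter studies replace the hand-coded `decide` function with a policy learned from experience, e.g. by reinforcement learning.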




Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Riedmiller, M., Merke, A. (2003). Learning by Experience from Others — Social Learning and Imitation in Animals and Robots. In: Kühn, R., Menzel, R., Menzel, W., Ratsch, U., Richter, M.M., Stamatescu, IO. (eds) Adaptivity and Learning. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-05594-6_17


  • DOI: https://doi.org/10.1007/978-3-662-05594-6_17

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-05510-2

  • Online ISBN: 978-3-662-05594-6

  • eBook Packages: Springer Book Archive
