Continual Robot Learning with Constructive Neural Networks

  • Conference paper

Learning Robots (EWLR 1997)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1545)

Abstract

In this paper, we present an approach that combines reinforcement learning, learning by imitation, and incremental hierarchical development. We apply this approach to a realistic simulated mobile robot that learns to perform a navigation task by imitating the movements of a teacher and then continues to learn by receiving reinforcement. The robot's behaviours are represented as sensation-action rules in a constructive high-order neural network. Preliminary experiments show that incremental, hierarchical development, bootstrapped by imitative learning, allows the robot to adapt efficiently to changes in its environment throughout its lifetime, even when only delayed reinforcements are given.
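The general idea described in the abstract, bootstrapping a reinforcement learner with teacher demonstrations and then refining behaviour from (possibly delayed) rewards, can be illustrated with a minimal tabular sketch. This is not the paper's constructive high-order network; `RuleAgent` and all parameter names are illustrative assumptions, and standard one-step Q-learning stands in for the paper's learning rule:

```python
# Minimal sketch: imitation-bootstrapped reinforcement learning over
# discrete sensation-action rules. All names are illustrative; the
# paper's constructive high-order network is not reproduced here.
import random


class RuleAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}  # (sensation, action) -> estimated value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def imitate(self, demonstrations, bonus=1.0):
        # Bootstrap: give teacher-demonstrated (sensation, action)
        # pairs an initial positive value so they are preferred early.
        for sensation, action in demonstrations:
            self.q[(sensation, action)] = bonus

    def act(self, sensation):
        # Epsilon-greedy choice over the current sensation-action values.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions,
                   key=lambda a: self.q.get((sensation, a), 0.0))

    def update(self, sensation, action, reward, next_sensation):
        # One-step Q-learning update; value bootstrapping propagates
        # delayed reinforcement back to earlier sensation-action rules.
        best_next = max(self.q.get((next_sensation, a), 0.0)
                        for a in self.actions)
        key = (sensation, action)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.alpha * (reward
                                          + self.gamma * best_next - old)
```

In this sketch, imitation merely seeds the value table, so demonstrated rules dominate early action selection while reinforcement updates remain free to override them later, which mirrors the "bootstrapped, then continues to learn" structure the abstract describes.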

Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Großmann, A., Poli, R. (1998). Continual Robot Learning with Constructive Neural Networks. In: Birk, A., Demiris, J. (eds) Learning Robots. EWLR 1997. Lecture Notes in Computer Science, vol 1545. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-49240-2_7

  • DOI: https://doi.org/10.1007/3-540-49240-2_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65480-3

  • Online ISBN: 978-3-540-49240-5

  • eBook Packages: Springer Book Archive
