
Part of the book series: Autonomic Systems (ASYS, volume 1)

Abstract

In this article we present an approach that enables robots to learn how to act and react robustly in continuous and noisy environments without losing track of the overall feasibility, i.e. we minimise the execution time in order to keep up continuous learning. We do so by combining reinforcement learning mechanisms with techniques from multivariate statistics on three different levels of abstraction: the motivation layer and the two simultaneously learning strategy and skill layers. The motivation layer allows occasionally contradictory goals to be modelled as drives in a very intuitive fashion. A drive represents a single goal that a robot wants to have satisfied, such as charging its battery when it is nearly exhausted, or transporting an object to a target position. The strategy layer encapsulates the main reinforcement learning algorithm, which operates on an abstracted and dynamically adjusted Markovian state space. By means of state abstraction, we minimise the overall size of the state space in order to keep the learning process feasible in a dynamically changing environment. The skill layer, finally, realises a generalised learning method for reactive low-level behaviours that enable a robot to interact with its environment.
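To make the layered architecture described in the abstract more concrete, the following Python sketch shows one possible reading of the three layers. It is not the authors' implementation: all names (Drive, MotivationLayer, StrategyLayer, SkillLayer and their methods) are hypothetical, and the strategy layer is reduced to plain tabular Q-learning over a handful of abstract states rather than the abstracted, dynamically adjusted state space used in the chapter; the skill layer is a stub instead of a learned reactive behaviour.

# Illustrative sketch only (hypothetical names, not the authors' code).
import random
from collections import defaultdict

class Drive:
    """One goal the robot wants satisfied, e.g. keeping its battery charged."""
    def __init__(self, name, satisfaction=1.0):
        self.name = name
        self.satisfaction = satisfaction  # 1.0 = fully satisfied, 0.0 = urgent

class MotivationLayer:
    """Holds possibly contradicting drives and selects the most urgent one."""
    def __init__(self, drives):
        self.drives = drives
    def most_urgent(self):
        return min(self.drives, key=lambda d: d.satisfaction)

class StrategyLayer:
    """Plain tabular Q-learning over a small, abstracted state space."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)          # Q-values keyed by (state, action)
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
    def select_action(self, state):
        if random.random() < self.epsilon:   # epsilon-greedy exploration
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])
    def update(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])

class SkillLayer:
    """Maps an abstract action to a low-level behaviour (stub in this sketch)."""
    def execute(self, action, drive):
        return f"executing '{action}' to help satisfy drive '{drive.name}'"

# Tiny usage example with made-up drives, states and actions:
drives = [Drive("charge battery", 0.2), Drive("transport object", 0.8)]
motivation = MotivationLayer(drives)
strategy = StrategyLayer(actions=["approach_charger", "push_object"])
skills = SkillLayer()

drive = motivation.most_urgent()                      # pick the most pressing goal
action = strategy.select_action(state="near_charger") # choose an abstract action
print(skills.execute(action, drive))                  # hand it to the skill layer
strategy.update("near_charger", action, reward=1.0, next_state="charging")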



Author information


Correspondence to Alexander Jungmann.



Copyright information

© 2011 Springer Basel AG

About this chapter

Cite this chapter

Jungmann, A., Kleinjohann, B., Richert, W. (2011). A Fast Hierarchical Learning Approach for Autonomous Robots. In: Müller-Schloer, C., Schmeck, H., Ungerer, T. (eds) Organic Computing — A Paradigm Shift for Complex Systems. Autonomic Systems, vol 1. Springer, Basel. https://doi.org/10.1007/978-3-0348-0130-0_36


  • DOI: https://doi.org/10.1007/978-3-0348-0130-0_36

  • Publisher Name: Springer, Basel

  • Print ISBN: 978-3-0348-0129-4

  • Online ISBN: 978-3-0348-0130-0

  • eBook Packages: Computer Science, Computer Science (R0)
