
Echo State Networks for Mobile Robot Modeling and Control

  • Paul G. Plöger
  • Adriana Arghir
  • Tobias Günther
  • Ramin Hosseiny
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3020)

Abstract

Applications of recurrent neural networks (RNNs) tend to be rare because training is difficult. A recent theoretical breakthrough [Jae01b] called Echo State Networks (ESNs) has made RNN training easy and fast, turning RNNs into a versatile tool for many problems. The key idea is to train only the output weights of an otherwise topologically unrestricted but contractive network. After outlining the mathematical basics, we apply ESNs to two examples: first, the generation of a dynamical model for a differential drive robot using supervised learning, and second, the training of a corresponding motor controller.
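
To illustrate the output-weights-only training scheme described above, the following is a minimal NumPy sketch of our own; the reservoir size, input scaling, spectral radius, washout length, and ridge-regression readout are illustrative assumptions, not the exact setup used in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Sizes and scalings below are illustrative assumptions.
n_in, n_res, n_out = 2, 100, 1
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))   # fixed input weights
W = rng.uniform(-0.5, 0.5, size=(n_res, n_res))     # fixed reservoir weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))     # rescale to spectral radius 0.9 < 1

def run_reservoir(U):
    """Drive the fixed reservoir with an input sequence U (T x n_in) and collect states."""
    x = np.zeros(n_res)
    states = []
    for u in U:
        x = np.tanh(W @ x + W_in @ u)
        states.append(x.copy())
    return np.asarray(states)

def train_readout(U, Y, washout=50, ridge=1e-6):
    """Learn the output weights W_out by ridge regression of targets Y on reservoir states."""
    X = run_reservoir(U)[washout:]
    Yt = Y[washout:]
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ Yt).T

# Toy usage: fit the readout to a teacher signal derived from a random input sequence.
U = rng.uniform(-1.0, 1.0, size=(1000, n_in))
Y = np.tanh(U @ np.array([[0.3], [0.7]]))
W_out = train_readout(U, Y)
pred = run_reservoir(U) @ W_out.T
print("training MSE:", np.mean((pred[50:] - Y[50:]) ** 2))

Because only W_out is adapted while the reservoir stays fixed, training reduces to a single linear regression, which is what makes ESN training fast compared with gradient-based RNN training.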

Keywords

Mobile Robot · Spectral Radius · Extended Kalman Filter · Recurrent Neural Network · Pulse Width Modulation

References

  1. [A00]
    Christaller, T., Jaeger, H., Kobialka, H.-U., Schoell, P., Bredenfeld, A.: Robot behavior design using dual dynamics. Technical report, GMD (2000)
  2. [Ark98]
    Arkin, R.C.: Behavior-Based Robotics. The MIT Press, Cambridge (1998)
  3. [BK01]
    Bredenfeld, A., Kobialka, H.-U.: Team cooperation using dual dynamics. In: Hannebauer, M. (ed.) Balancing Reactivity and Social Deliberation in Multi-Agent Systems. Lecture Notes in Computer Science, pp. 111–124 (2001)
  4. [BSF94]
    Bengio, Y., Simard, P., Frasconi, P.: Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks 5(2), 157–166 (1994), Special Issue on Dynamic and Recurrent Neural Networks
  5. [Gal80]
    Gallistel, C.R.: The Organization of Action: A New Synthesis. Lawrence Erlbaum Associates, Hillsdale (1980)
  6. [Hou03]
    Hosseiny, R.: Echo state networks used for the classification and filtering of silicon retina signals. Master's thesis, RWTH Aachen (2003)
  7. [Jae01a]
    Jaeger, H.: The "echo state" approach to analysing and training recurrent neural networks. GMD Report 148, pp. 1–43 (2001)
  8. [Jae01b]
    Jaeger, H.: The echo state approach to analysing and training recurrent neural networks. Technical report, GMD - Forschungszentrum Informationstechnik GmbH (2001)
  9. [Jae01c]
    Jaeger, H.: Short term memory in echo state networks. Technical report, GMD Forschungszentrum Informationstechnik GmbH (2001)
  10. [Jae02]
    Jaeger, H.: Tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and the "echo state network" approach. Technical report, GMD Forschungszentrum Informationstechnik GmbH (2002)
  11. [JD92]
    Jordan, M.I., Rumelhart, D.E.: Forward models: Supervised learning with a distal teacher (1992)
  12. [Jor95]
    Jordan, M.I.: Computational aspects of motor control and motor learning. In: Heuer, H., Keele, S. (eds.) Handbook of Perception and Action: Motor Skills. Academic Press, New York (1995)
  13. [KBM98]
    Kortenkamp, D., Bonasso, R.P., Murphy, R.: Artificial Intelligence and Mobile Robots. AAAI Press / The MIT Press (1998)
  14. [KS87]
    Furukawa, K., Kawato, M., Suzuki, R.: A hierarchical neural network model for the control learning of voluntary movements (1987)
  15. [Mea89]
    Mead, C.: Analog VLSI and Neural Systems. Addison-Wesley, Reading (1989)
  16. [MNM02]
    Maass, W., Natschläger, T., Markram, H.: Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation (2002)
  17. [MW96]
    Miall, R.C., Wolpert, D.M.: Forward models for physiological motor control (1996)
  18. [Pom93]
    Pomerleau, D.A.: Neural Network Perception for Mobile Robot Guidance. Kluwer, Dordrecht (1993)
  19. [PTVF92]
    Press, W.H., Teukolsky, S.A., Vetterling, W.T., Flannery, B.P.: Numerical Recipes in C: The Art of Scientific Computing, 2nd edn. Cambridge University Press, Cambridge (1992)
  20. [SB81]
    Sutton, R.S., Barto, A.G.: Toward a modern theory of adaptive networks: expectation and prediction (1981)
  21. [Sch02]
    Schoenherr, F.: Learning to ground fact symbols in behavior-based robots. In: van Harmelen, F. (ed.) Proceedings of the 15th European Conference on Artificial Intelligence (ECAI), pp. 708–712. IOS Press, Amsterdam (2002)
  22. [SV98]
    Enhanced multi-stream Kalman filter training for recurrent networks. In: Suykens, J.A.K., Vandewalle, J. (eds.) Nonlinear Modeling. Kluwer Academic Publishers, Dordrecht (1998)

Copyright information

© Springer-Verlag Berlin Heidelberg 2004

Authors and Affiliations

  • Paul G. Plöger (1)
  • Adriana Arghir (1)
  • Tobias Günther (1)
  • Ramin Hosseiny (1)

  1. FHG Institute of Autonomous Intelligent Systems, St. Augustin, Germany
