
KI - Künstliche Intelligenz, Volume 28, Issue 1, pp 15–20

Towards Learning of Generic Skills for Robotic Manipulation

  • Jan Hendrik Metzen
  • Alexander Fabisch
  • Lisa Senger
  • José de Gea Fernández
  • Elsa Andrea Kirchner
Research Project

Abstract

Learning versatile, reusable skills is one of the key prerequisites for autonomous robots. Imitation and reinforcement learning are among the most prominent approaches for learning basic robotic skills. However, the learned skills are often very specific and cannot be reused in different but related tasks. In the project "Behaviors for Mobile Manipulation", we develop hierarchical and transfer learning methods which allow a robot to learn a repertoire of versatile skills that can be reused in different situations. The development of new methods is closely integrated with the analysis of complex human behavior.

Keywords

Multi-task learning · Skill learning · Movement primitives · Transfer learning · Reinforcement learning
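
Illustration

As a rough illustration of the kind of basic skill representation the abstract and keywords refer to (a movement learned from demonstration that can be reused for new goals), the following is a minimal sketch of a one-dimensional dynamical movement primitive in the spirit of Ijspeert et al.'s formulation. It is an assumption for illustration only: the class name, parameter values, the simplified least-squares fit of the forcing term, and the synthetic demonstration are hypothetical and do not reflect the project's actual implementation.

```python
# Minimal, illustrative one-dimensional discrete DMP sketch (not project code).
import numpy as np

class DMP1D:
    """Discrete dynamical movement primitive with a learnable forcing term."""

    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_x=8.0):
        self.alpha_z, self.beta_z, self.alpha_x = alpha_z, beta_z, alpha_x
        # Basis centers spread over the canonical variable x in (0, 1].
        self.c = np.exp(-alpha_x * np.linspace(0.0, 1.0, n_basis))
        self.h = 1.0 / np.gradient(self.c) ** 2
        self.w = np.zeros(n_basis)

    def _features(self, x):
        # Normalized radial basis activations, weighted by the canonical variable.
        psi = np.exp(-self.h * (x - self.c) ** 2)
        return psi * x / (psi.sum() + 1e-10)

    def imitate(self, y_demo, dt):
        """Fit forcing-term weights to a demonstrated trajectory (imitation learning)."""
        T = len(y_demo)
        self.tau = (T - 1) * dt
        self.y0, self.g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(T) * dt / self.tau)
        # Forcing term that would reproduce the demonstration exactly.
        f_target = (self.tau ** 2 * ydd
                    - self.alpha_z * (self.beta_z * (self.g - y_demo) - self.tau * yd))
        # Simplified linear least-squares fit instead of locally weighted regression.
        Phi = np.array([self._features(xi) for xi in x]) * (self.g - self.y0 + 1e-10)
        self.w, *_ = np.linalg.lstsq(Phi, f_target, rcond=None)

    def rollout(self, dt, g=None):
        """Generate a trajectory; a new goal g reuses the learned shape (generalization)."""
        g = self.g if g is None else g
        y, z, x = self.y0, 0.0, 1.0
        traj = [y]
        for _ in range(int(round(self.tau / dt))):
            f = self._features(x) @ self.w * (g - self.y0)
            zd = (self.alpha_z * (self.beta_z * (g - y) - z) + f) / self.tau
            z += zd * dt
            y += z / self.tau * dt
            x += -self.alpha_x * x / self.tau * dt
            traj.append(y)
        return np.array(traj)

# Usage: learn from a hypothetical demonstrated joint trajectory, then reuse it.
t = np.linspace(0.0, 1.0, 200)
demo = np.sin(0.5 * np.pi * t)                       # synthetic demonstration
dmp = DMP1D()
dmp.imitate(demo, dt=t[1] - t[0])
reproduction = dmp.rollout(dt=t[1] - t[0])           # reproduce the demonstration
generalized = dmp.rollout(dt=t[1] - t[0], g=2.0)     # same skill, new goal
```

In such representations, the weights of the forcing term (and meta-parameters such as the goal) form a compact parameter vector, which is what policy-search reinforcement learning and transfer methods like those mentioned in the abstract typically operate on.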


Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Jan Hendrik Metzen (1)
  • Alexander Fabisch (1)
  • Lisa Senger (1)
  • José de Gea Fernández (2)
  • Elsa Andrea Kirchner (1, 2)

  1. Robotics Group, Universität Bremen, Bremen, Germany
  2. Robotics Innovation Center, German Research Center for Artificial Intelligence (DFKI), Bremen, Germany
