Encyclopedia of Systems and Control

Living Edition
Editors: John Baillieul, Tariq Samad

Robot Learning

  • Jens Kober
Living reference work entry
DOI: https://doi.org/10.1007/978-1-4471-5102-9_100027-1

Abstract

With increasingly complex robot hardware and the push to enable robots to work in unstructured environments, there has been growing interest in applying machine learning and artificial intelligence techniques to systems and control problems in robotics. At the same time, many machine learning approaches have been developed with robotic problems as motivating use cases. Robot learning spans the problems of learning models, sensing, and acting, as well as integrated approaches, and has been applied to many types of robot embodiments. The algorithms cover most fields of machine learning; however, due to the specific challenges of robotics, they either need to be adapted or new approaches have to be developed.
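To make the notion of "learning models" from the abstract concrete, the following is a minimal illustrative sketch (not taken from the entry itself) of model learning posed as supervised regression: an unknown forward dynamics map x_{t+1} = f(x_t, u_t) is approximated from logged state-action transitions using ridge regression on random Fourier features. All names, dimensions, and the synthetic data are illustrative assumptions; only NumPy is used.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical logged data: states x_t (dim 4), controls u_t (dim 2),
# and the successor states x_{t+1} observed on the real system.
X = rng.standard_normal((500, 4))      # states
U = rng.standard_normal((500, 2))      # controls
X_next = (X
          + 0.1 * np.tanh(X @ rng.standard_normal((4, 4)))
          + 0.1 * U @ rng.standard_normal((2, 4)))   # stand-in for real logs

Z = np.hstack([X, U])                  # regression inputs (x_t, u_t)

# Random Fourier features provide a simple nonlinear function class.
W = rng.standard_normal((Z.shape[1], 200))
b = rng.uniform(0, 2 * np.pi, 200)
Phi = np.cos(Z @ W + b)

# Ridge regression: closed-form regularized least squares.
lam = 1e-3
theta = np.linalg.solve(Phi.T @ Phi + lam * np.eye(200), Phi.T @ X_next)

# The learned model predicts the next state for a new state-action pair,
# e.g., for use inside a model-based controller or planner.
x_new, u_new = rng.standard_normal(4), rng.standard_normal(2)
phi_new = np.cos(np.hstack([x_new, u_new]) @ W + b)
x_pred = phi_new @ theta
print("predicted next state:", x_pred)

In practice the synthetic transitions would be replaced by data recorded on the robot, and the fixed feature map by whichever regression model suits the task; the point is only that model learning reduces to fitting f from input-output pairs.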

Keywords

Supervised learning · Unsupervised learning · Reinforcement learning · End-to-end learning · Model learning

Bibliography

  1. Argall BD, Chernova S, Veloso M, Browning B (2009) A survey of robot learning from demonstration. Robot Auton Syst 57(5):469–483
  2. Arulkumaran K, Deisenroth MP, Brundage M, Bharath AA (2017) Deep reinforcement learning: a brief survey. IEEE Signal Process Mag 34(6):26–38
  3. Bengio Y, Courville A, Vincent P (2013) Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell 35(8):1798–1828
  4. Billard A, Calinon S, Dillmann R (2016) Learning from humans. In: Handbook of robotics, 2nd edn. Springer, Secaucus, pp 1995–2014
  5. Bishop CM (2006) Pattern recognition and machine learning. Springer, London
  6. Bohg J, Hausman K, Sankaran B, Brock O, Kragic D, Schaal S, Sukhatme GS (2017) Interactive perception: leveraging action in perception and perception in action. IEEE Trans Robot 33(6):1273–1291
  7. Calinon S, Lee D (2019) Learning control. In: Humanoid robotics: a reference. Springer, Dordrecht
  8. Calinon S, D'halluin F, Sauser EL, Caldwell DG, Billard AG (2010) Learning and reproduction of gestures by imitation. IEEE Robot Autom Mag 17(2):44–54
  9. Celemin CE, Maeda G, Ruiz-del-Solar J, Peters J, Kober J (2019) Reinforcement learning of motor skills using policy search and human corrective advice. Int J Robot Res 38(14):1560–1580
  10. Chatzilygeroudis K, Vassiliades V, Stulp F, Calinon S, Mouret JB (2019) A survey on policy search algorithms for learning robot controllers in a handful of trials. IEEE Trans Robot (accepted)
  11. Deisenroth MP, Neumann G, Peters J (2013) A survey on policy search for robotics. Found Trends Robot 2(1–2):1–142
  12. García J, Fernández F (2015) A comprehensive survey on safe reinforcement learning. J Mach Learn Res 16(1):1437–1480
  13. Goodfellow I, Bengio Y, Courville A (2016) Deep learning. MIT Press, Cambridge
  14. Hastie T, Tibshirani R, Friedman JH (2009) The elements of statistical learning: data mining, inference, and prediction, 2nd edn. Springer series in statistics. Springer, New York
  15. Kober J, Bagnell JA, Peters J (2013) Reinforcement learning in robotics: a survey. Int J Robot Res 32(11):1238–1274
  16. Kooij JFP, Flohr F, Pool EAI, Gavrila DM (2019) Context-based path prediction for targets with switching dynamics. Int J Comput Vis 127(3):239–262
  17. Lesort T, Díaz-Rodríguez N, Goudou JF, Filliat D (2018) State representation learning for control: an overview. Neural Netw 108:379–392
  18. Levine S, Finn C, Darrell T, Abbeel P (2016) End-to-end training of deep visuomotor policies. J Mach Learn Res 17(1):1334–1373
  19. Murphy KP (2012) Machine learning: a probabilistic perspective. The MIT Press, Cambridge
  20. Ng AY, Coates A, Diel M, Ganapathi V, Schulte J, Tse B, Berger E, Liang E (2006) Autonomous inverted helicopter flight via reinforcement learning. In: Experimental robotics IX. Springer, Berlin/Heidelberg, pp 363–372
  21. Nguyen-Tuong D, Peters J (2011) Model learning in robotics: a survey. Cogn Process 12(4):319–340
  22. Osa T, Pajarinen J, Neumann G, Bagnell J, Abbeel P, Peters J (2018) An algorithmic perspective on imitation learning. Found Trends Robot
  23. Premebida C, Ambrus R, Marton ZC (2018) Intelligent robotic perception systems. In: Applications of mobile robots. IntechOpen, London, pp 111–127
  24. Russell S, Norvig P (2009) Artificial intelligence: a modern approach, 3rd edn. Prentice Hall Press, Upper Saddle River
  25. Schwarting W, Alonso-Mora J, Rus D (2018) Planning and decision-making for autonomous vehicles. Annu Rev Control Robot Auton Syst 1(1):187–210
  26. Sigaud O, Stulp F (2019) Policy search in continuous action domains: an overview. Neural Netw 113:28–40
  27. Sünderhauf N, Brock O, Scheirer W, Hadsell R, Fox D, Leitner J, Upcroft B, Abbeel P, Burgard W, Milford M, Corke P (2018) The limits and potentials of deep learning for robotics. Int J Robot Res 37(4–5):405–420
  28. Ruiz-del-Solar J, Loncomilla P, Soto N (2018) A survey on deep learning methods for robot vision. arXiv preprint arXiv:1803.10862
  29. Sutton RS, Barto AG (2018) Reinforcement learning: an introduction, 2nd edn. The MIT Press, Cambridge
  30. Tai L, Zhang J, Liu M, Boedecker J, Burgard W (2016) A survey of deep network solutions for learning control in robotics: from reinforcement to imitation. arXiv preprint arXiv:1612.07139

Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2020

Authors and Affiliations

  1. Cognitive Robotics Department, Delft University of Technology, Delft, The Netherlands

Section editors and affiliations

  • Bruno Siciliano
  1. Dipartimento di Ingegneria Elettrica e Tecnologie dell'Informazione, Università degli Studi di Napoli Federico II, Napoli, Italy