
Experiments with Motor Primitives in Table Tennis

  • Chapter
Experimental Robotics

Part of the book series: Springer Tracts in Advanced Robotics (STAR, volume 79)

Abstract

Efficient acquisition of new motor skills is among the most important abilities for making robot applications more flexible, reducing the amount and cost of human programming, and making future robots more autonomous. However, most machine learning approaches to date are not capable of meeting this challenge, as they do not scale into the domain of high-dimensional anthropomorphic and service robots. Instead, robot skill learning needs to rely upon task-appropriate approaches and domain insights. A particularly powerful approach is driven by the concept of re-usable motor primitives. These have been used to learn a variety of “elementary movements” such as striking movements (e.g., hitting a T-ball, striking a table tennis ball), rhythmic movements (e.g., drumming, gaits for legged locomotion, paddling balls on a string), grasping, jumping, and many others. Here, we take this approach to the next level and show experimentally how most elements required for table tennis can be addressed using motor primitives. We present four important components: (i) a motor primitive formulation that can deal with hitting and striking movements; (ii) how these primitives can be initialized by imitation learning and (iii) generalized by reinforcement learning; and (iv) how selection, generalization, and pruning of motor primitives can be handled using a mixture of motor primitives. The resulting experimental prototypes are shown to work well in practice.
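
The formulation in (i) builds on dynamic movement primitives (DMPs) in the style of Ijspeert, Nakanishi, and Schaal: a stable, goal-directed second-order system whose shape is modulated by a learned forcing term, fitted to a single demonstration by linear regression and then generalized to new goals. The Python sketch below is a minimal illustration of that idea, covering imitation (fit) and goal generalization (rollout). It is written under standard textbook assumptions and is not the authors' hitting-specific formulation, which additionally handles a desired nonzero velocity at the hitting point, nor their reinforcement-learning step; the class name DiscreteDMP, the gains, and the basis-function layout are illustrative choices, not values from the chapter.

import numpy as np

class DiscreteDMP:
    """Minimal one-dimensional discrete dynamic movement primitive (illustrative sketch)."""

    def __init__(self, n_basis=20, alpha_z=25.0, beta_z=6.25, alpha_s=8.0, tau=1.0):
        # Transformation-system gains (alpha_z, beta_z), canonical decay (alpha_s), duration scale (tau).
        self.alpha_z, self.beta_z, self.alpha_s, self.tau = alpha_z, beta_z, alpha_s, tau
        # Gaussian basis functions placed in the phase variable s, which decays from 1 toward 0.
        self.centers = np.exp(-alpha_s * np.linspace(0.0, 1.0, n_basis))
        self.widths = 1.0 / np.diff(self.centers, append=self.centers[-1] * 0.5) ** 2
        self.weights = np.zeros(n_basis)          # shape parameters learned from data
        self.y0, self.g = 0.0, 1.0                # start and goal, overwritten by fit()

    def _forcing(self, s):
        # Weighted sum of basis functions, gated by the phase so the forcing vanishes at the goal.
        psi = np.exp(-self.widths * (s - self.centers) ** 2)
        return s * (psi @ self.weights) / (psi.sum() + 1e-10)

    def fit(self, y_demo, dt):
        """Imitation learning: fit the forcing term to one demonstrated trajectory by regression."""
        self.y0, self.g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        s = np.exp(-self.alpha_s / self.tau * dt * np.arange(len(y_demo)))   # canonical phase
        # Forcing term implied by the demonstration, solved from the transformation-system equation
        # tau^2 * ydd = alpha_z * (beta_z * (g - y) - tau * yd) + f(s).
        f_target = (self.tau ** 2 * ydd
                    - self.alpha_z * (self.beta_z * (self.g - y_demo) - self.tau * yd))
        psi = np.exp(-self.widths * (s[:, None] - self.centers) ** 2)
        X = psi * s[:, None] / (psi.sum(axis=1, keepdims=True) + 1e-10)
        self.weights = np.linalg.lstsq(X, f_target, rcond=None)[0]

    def rollout(self, dt, T, g=None):
        """Reproduce the movement, optionally generalized to a new goal g, by Euler integration."""
        g = self.g if g is None else g
        y, yd, s, traj = self.y0, 0.0, 1.0, []
        for _ in range(int(T / dt)):
            ydd = (self.alpha_z * (self.beta_z * (g - y) - self.tau * yd)
                   + self._forcing(s)) / self.tau ** 2
            yd += ydd * dt
            y += yd * dt
            s += -self.alpha_s / self.tau * s * dt
            traj.append(y)
        return np.array(traj)

In such a scheme, the weights obtained by regression serve as the starting point for reinforcement learning of the striking movement, and step (iv) would combine several such primitives through a state-dependent gating function; both of those steps are only described in the chapter and are not implemented in this sketch.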

Copyright information

© 2014 Springer-Verlag GmbH Berlin Heidelberg

About this chapter

Cite this chapter

Peters, J., Mülling, K., Kober, J. (2014). Experiments with Motor Primitives in Table Tennis. In: Khatib, O., Kumar, V., Sukhatme, G. (eds) Experimental Robotics. Springer Tracts in Advanced Robotics, vol 79. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-28572-1_24

Download citation

  • DOI: https://doi.org/10.1007/978-3-642-28572-1_24

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-28571-4

  • Online ISBN: 978-3-642-28572-1

  • eBook Packages: Engineering, Engineering (R0)
