Humanoids skill learning based on real-time human motion imitation using Kinect

  • Original Research Paper
  • Published in: Intelligent Service Robotics

Abstract

This paper proposes a novel framework that enables humanoid robots to learn new skills from demonstration. The framework uses a real-time human motion imitation module as a demonstration interface, providing the desired motion to the learning module in an efficient and user-friendly way. This interface overcomes many problems of currently used interfaces such as direct motion recording, kinesthetic teaching, and immersive teleoperation, and it gives the human demonstrator real-time control over almost all body parts of the humanoid robot, including the hand shape and orientation that are essential for object grasping. The robot is controlled remotely without any sophisticated haptic devices, relying only on an inexpensive Kinect sensor and two additional force sensors. To the best of our knowledge, this is the first time a Kinect sensor has been used to estimate hand shape and orientation for object grasping in the field of real-time human motion imitation. The observed motions are then projected onto a latent space using the Gaussian process latent variable model (GPLVM) to extract the relevant features, which are used to train regression models with the variational heteroscedastic Gaussian process regression algorithm, a highly accurate and fast regression method. The proposed framework is validated on activities involving both upper and lower human body parts as well as object grasping.
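As a rough illustration of the learning pipeline described above (latent-space projection followed by Gaussian process regression), the sketch below uses scikit-learn's PCA as a linear stand-in for GPLVM and a standard homoscedastic `GaussianProcessRegressor` in place of variational heteroscedastic GP regression; the joint-angle data, dimensionality, and time parameterization are invented for illustration only.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy stand-in for a recorded motion: 200 frames x 15 joint angles.
T, D, Q = 200, 15, 2
t = np.linspace(0, 2 * np.pi, T)
Y = np.stack([np.sin(t + 0.3 * d) for d in range(D)], axis=1)
Y += 0.01 * rng.standard_normal(Y.shape)

# Step 1: project the observed motion onto a low-dimensional latent space.
# (The paper uses GPLVM; PCA is a linear stand-in for illustration.)
latent = PCA(n_components=Q).fit_transform(Y)

# Step 2: learn a regression model from time to latent coordinates.
# (The paper uses variational heteroscedastic GP regression; an RBF +
# white-noise kernel GP is a homoscedastic stand-in.)
gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gpr.fit(t.reshape(-1, 1), latent)

# Reproduce the skill: predict the latent trajectory at new time steps,
# which would then be mapped back to joint angles on the robot.
t_new = np.linspace(0, 2 * np.pi, 50).reshape(-1, 1)
latent_pred = gpr.predict(t_new)
print(latent_pred.shape)  # → (50, 2)
```

In the paper itself, the latent trajectory would be decoded back into full joint-angle commands for the humanoid; this sketch stops at latent prediction.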


Figures 1–27 are available in the full article.

Notes

  1. The training datasets are visualized after reducing their dimensionalities from 15 to 2 dimensions using t-distributed stochastic neighbor embedding (t-SNE) [25].

  2. The video is available at: http://www.videosprout.com/video?id=a69781a4-6817-42a0-9079-66dc2250c69d.
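Note 1's visualization step can be sketched as follows; the 15-dimensional training data here is random placeholder data, and the perplexity value is an assumed illustrative choice, not the paper's setting.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Placeholder for a 15-dimensional joint-angle training set (cf. Note 1).
X = rng.standard_normal((100, 15))

# Reduce from 15 to 2 dimensions with t-SNE for visualization,
# as done for the figures referenced in Note 1.
emb = TSNE(n_components=2, perplexity=20, random_state=0).fit_transform(X)
print(emb.shape)  # → (100, 2)
```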

References

  1. Akgun B, Cakmak M, Yoo JW, Thomaz AL (2012) Trajectories and keyframes for kinesthetic teaching: A human-robot interaction perspective. In: Proceedings of the seventh annual ACM/IEEE international conference on human–robot interaction. ACM, pp 391–398

  2. Aldebaran (2015) http://doc.aldebaran.com/1-14/. Accessed on 1 May 2017

  3. Billard A, Calinon S, Dillmann R, Schaal S (2008) Robot programming by demonstration. In: Siciliano B, Khatib O (eds) Handbook of Robotics, chap 59. Springer, New York, pp 1371–1394

  4. Billard A, Grollman D (2013) Robot learning by demonstration. Scholarpedia 8(12):3824

  5. Calinon S, Billard A (2005) Recognition and reproduction of gestures using a probabilistic framework combining PCA, ICA and HMM. In: Proceedings of the 22nd international conference on machine learning. ACM, pp 105–112

  6. Calinon S, D’halluin F, Sauser EL, Caldwell DG, Billard AG (2010) Learning and reproduction of gestures by imitation. IEEE Robot Autom Mag 17(2):44–54

  7. Calinon S, Guenter F, Billard A (2007) On learning, representing, and generalizing a task in a humanoid robot. IEEE Trans Syst Man Cybern Part B Cybern 37(2):286–298

  8. Camps-Valls G, Gómez-Chova L, Muñoz-Marí J, Lázaro-Gredilla M, Verrelst J (2013) simpleR: a simple educational matlab toolbox for statistical regression. https://www.uv.es/gcamps/code/simpler-2-1.zip. Accessed 30 Apr 2017

  9. Cela A, Yebes JJ, Arroyo R, Bergasa LM, Barea R, López E (2013) Complete low-cost implementation of a teleoperated control system for a humanoid robot. Sensors 13(2):1385–1401

  10. Chalodhorn R, Grimes DB, Grochow K, Rao RP (2007) Learning to walk through imitation. IJCAI 7:2084–2090

  11. Chen N, Chew CM, Tee KP, Han BS (2012) Human-aided robotic grasping. In: RO-MAN, 2012 IEEE. IEEE, pp 75–80

  12. Dariush B, Gienger M, Arumbakkam A, Zhu Y, Jian B, Fujimura K, Goerick C (2009) Online transfer of human motion to humanoids. Int J Hum Robot 6(02):265–289

  13. Ekvall S, Kragic D (2006) Learning task models from multiple human demonstrations. In: The 15th IEEE international symposium on robot and human interactive communication, 2006. ROMAN 2006. IEEE, pp 358–363

  14. Evrard P, Gribovskaya E, Calinon S, Billard A, Kheddar A (2009) Teaching physical collaborative tasks: object-lifting case study with a humanoid. In: 9th IEEE-RAS international conference on humanoid robots, 2009. Humanoids 2009. IEEE, pp 399–404

  15. Han J, Shao L, Xu D, Shotton J (2013) Enhanced computer vision with microsoft kinect sensor: a review. IEEE Trans Cybern 43(5):1318–1334

  16. Khansari-Zadeh SM, Billard A (2011) Learning stable nonlinear dynamical systems with Gaussian mixture models. IEEE Trans Robot 27(5):943–957

  17. Koenemann J, Burget F, Bennewitz M (2014) Real-time imitation of human whole-body motions by humanoids. In: 2014 IEEE international conference on robotics and automation (ICRA). IEEE, pp 2806–2812

  18. Kulić D, Takano W, Nakamura Y (2008) Incremental learning, clustering and hierarchy formation of whole body motion patterns using adaptive hidden Markov chains. Int J Robot Res 27(7):761–784

  19. Kuniyoshi Y, Inaba M, Inoue H (1994) Learning by watching: extracting reusable task knowledge from visual observation of human performance. IEEE Trans Robot Autom 10(6):799–822

  20. Lawrence N (2005) Probabilistic non-linear principal component analysis with Gaussian process latent variable models. J Mach Learn Res 6:1783–1816

  21. Lawrence ND (2004) Gaussian process latent variable models for visualisation of high dimensional data. Adv Neural Inf Process Syst 16(3):329–336

  22. Lazaro-Gredilla M, Titsias M (2011) Variational heteroscedastic Gaussian process regression. In: Getoor L, Scheffer T (eds) Proceedings of the 28th international conference on machine learning (ICML-11), ICML ’11. ACM, New York, pp 841–848

  23. Lei J, Song M, Li ZN, Chen C (2015) Whole-body humanoid robot imitation with pose similarity evaluation. Signal Process 108:136–146

  24. Luo RC, Shih BH, Lin TW (2013) Real time human motion imitation of anthropomorphic dual arm robot based on Cartesian impedance control. In: 2013 IEEE international symposium on robotic and sensors environments (ROSE). IEEE, pp 25–30

  25. van der Maaten L, Hinton G (2008) Visualizing data using t-SNE. J Mach Learn Res 9:2579–2605

  26. Microsoft (2017) https://www.xbox.com/en-US/xbox-one. Accessed on 1 May 2017

  27. Mülling K, Kober J, Kroemer O, Peters J (2013) Learning to select and generalize striking movements in robot table tennis. Int J Robot Res 32(3):263–279

  28. Nakaoka S, Nakazawa A, Kanehiro F, Kaneko K, Morisawa M, Hirukawa H, Ikeuchi K (2007) Learning from observation paradigm: leg task models for enabling a biped humanoid robot to imitate human dances. Int J Robot Res 26(8):829–844

  29. Nakaoka S, Nakazawa A, Yokoi K, Hirukawa H, Ikeuchi K (2003) Generating whole body motions for a biped humanoid robot from captured human dances. In: IEEE international conference on robotics and automation, 2003. Proceedings. ICRA’03, vol 3. IEEE, pp 3905–3910

  30. Nguyen-Tuong D, Peters J (2011) Model learning for robot control: a survey. Cognit Process 12(4):319–340

  31. Ott C, Lee D, Nakamura Y (2008) Motion capture based human motion recognition and imitation by direct marker control. In: 8th IEEE-RAS international conference on humanoid robots, 2008. Humanoids 2008. IEEE, pp 399–405

  32. Ou Y, Hu J, Wang Z, Fu Y, Wu X, Li X (2015) A real-time human imitation system using kinect. Int J Soc Robot 7(5):587–600

  33. Pardowitz M, Knoop S, Dillmann R, Zollner RD (2007) Incremental learning of tasks from user demonstrations, past experiences, and vocal comments. IEEE Trans Syst Man Cybern Part B Cybern 37(2):322–332

  34. Peternel L, Babic J (2013) Humanoid robot posture-control learning in real-time based on human sensorimotor learning ability. In: 2013 IEEE international conference on robotics and automation (ICRA), pp 5329–5334

  35. Quirion S, Duchesne C, Laurendeau D, Marchand M (2008) Comparing GPLVM approaches for dimensionality reduction in character animation. J WSCG 16(1–3):41–48

  36. Ramos OE, Saab L, Hak S, Mansard N (2011) Dynamic motion capture and edition using a stack of tasks. In: 2011 11th IEEE-RAS international conference on humanoid robots (humanoids). IEEE, pp 224–230

  37. Riley M, Ude A, Wade K, Atkeson CG (2003) Enabling real-time full-body imitation: a natural way of transferring human movement to humanoids. In: IEEE international conference on robotics and automation, 2003. Proceedings. ICRA’03, vol. 2. IEEE, pp 2368–2374

  38. Shon AP, Grochow K, Rao RP (2005) Robotic imitation from human motion capture using Gaussian processes. In: 2005 5th IEEE-RAS international conference on humanoid robots. IEEE, pp 129–134

  39. Stanton C, Bogdanovych A, Ratanasena E (2012) Teleoperation of a humanoid robot using full-body motion capture, example movements, and machine learning. In: Proceedings of Australasian conference on robotics and automation

  40. Suleiman W, Yoshida E, Kanehiro F, Laumond JP, Monin A (2008) On human motion imitation by humanoid robot. In: IEEE international conference on robotics and automation, 2008. ICRA 2008. IEEE, pp 2697–2704

  41. Titsias M, Lawrence N (2010) Bayesian Gaussian process latent variable model. In: Teh YW, Titterington DM (eds) Proceedings of the 13th international workshop on artificial intelligence and statistics, vol 9. JMLR W&CP, Chia Laguna Resort, Sardinia, Italy, pp 844–851

  42. Ude A, Atkeson CG, Riley M (2004) Programming full-body movements for humanoid robots by observation. Robot Auton Syst 47(2):93–108

  43. Yamane K, Hodgins J (2009) Simultaneous tracking and balancing of humanoid robots for imitating human motion capture data. In: IEEE/RSJ international conference on intelligent robots and systems, 2009. IROS 2009. IEEE, pp 2510–2517

Acknowledgements

This research has been supported by the Ministry of Higher Education (MoHE) of Egypt through a PhD fellowship. Our sincere thanks to Egypt-Japan University of Science and Technology (E-JUST) for guidance and support.

Author information

Corresponding author: Reda Elbasiony.

About this article

Cite this article

Elbasiony, R., Gomaa, W. Humanoids skill learning based on real-time human motion imitation using Kinect. Intel Serv Robotics 11, 149–169 (2018). https://doi.org/10.1007/s11370-018-0247-z
