Autonomous framework for segmenting robot trajectories of manipulation task

Abstract

In manipulation tasks, motion trajectories are characterized by a set of key phases (i.e., motion primitives), so it is important to learn the motion primitives embedded in a complete demonstration of such a task. In this paper, we propose a core framework that autonomously segments motion trajectories to support the learning of motion primitives. For this purpose, a set of segmentation points is estimated from a Gaussian Mixture Model (GMM) learned in a low-dimensional subspace obtained by Principal Component Analysis (PCA). The segmentation points can be acquired by two alternative approaches: (1) using a geometrical interpretation of the Gaussians in the learned GMM, and (2) using the weights estimated along the time component of the learned GMM. The main contribution of this paper is the autonomous estimation of segmentation points based on a GMM learned in a reduced-dimensional space. Such an estimation offers three advantages: (1) segmentation points can be estimated from a single training demonstration, without internal parameters that must be manually predefined or tuned to the given tasks and/or motion trajectories; (2) the reduced space characterizes non-linear motion trajectories better than the original trajectories, yielding better segmentation points; and (3) natural motion trajectories can be retrieved by temporally rearranging the motion segments. The capability of this autonomous segmentation framework is validated in four experiments. In the first, motion segments are evaluated against those of a human expert on a publicly available kitchen dataset. In the second, motion segments are evaluated against an existing approach on an open handwriting database. In the third, segmentation performance is evaluated by retrieving motion trajectories from reorganized motion segments. In the fourth, segmentation performance is evaluated by clustering motion segments.
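The core idea can be sketched in a few lines (a minimal illustration, not the authors' implementation): reduce the spatial dimensions of a trajectory with PCA, fit a GMM over joint (time, reduced-space) data, and take the frames where the dominant Gaussian changes as candidate segmentation points. The synthetic trajectory, the helper name `segment_trajectory`, and all parameter choices below are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

def segment_trajectory(traj, times, n_primitives):
    """Fit a GMM in a PCA-reduced space augmented with time and return
    the frame indices where the dominant Gaussian component changes."""
    reduced = PCA(n_components=2).fit_transform(traj)
    # Joint (time, reduced-space) data, as in time-based GMM encodings.
    X = np.column_stack([times, reduced])
    gmm = GaussianMixture(n_components=n_primitives, random_state=0).fit(X)
    labels = gmm.predict(X)
    return [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]

# Hypothetical demonstration: two straight-line motion phases in 3-D,
# standing in for two motion primitives of a manipulation task.
t = np.linspace(0.0, 1.0, 200)
phase1 = np.outer(t[:100], [1.0, 0.0, 0.5])
phase2 = phase1[-1] + np.outer(t[:100], [0.0, 1.0, -0.5])
traj = np.vstack([phase1, phase2])

cuts = segment_trajectory(traj, t, n_primitives=2)
```

With two clearly distinct phases, the dominant component switches once near the middle of the trajectory, so `cuts` recovers the phase boundary; the paper's geometrical and time-weight criteria refine this naive component-change rule.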



Acknowledgments

This work was supported by the Global Frontier R&D Program on "Human-centered Interaction for Coexistence," funded by a National Research Foundation of Korea grant from the Korean Government (MEST) (NRF-M1AXA003-2011-0028553).

Author information

Corresponding author

Correspondence to Il Hong Suh.


About this article


Cite this article

Lee, S.H., Suh, I.H., Calinon, S. et al. Autonomous framework for segmenting robot trajectories of manipulation task. Auton Robot 38, 107–141 (2015). https://doi.org/10.1007/s10514-014-9397-9

Keywords

  • Autonomous segmentation
  • Motion segments
  • Gaussian mixture model
  • Principal component analysis