
Hierarchical Method for Segmentation by Classification of Motion Capture Data

  • Chapter in: Virtual Realities

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 8844)

Abstract

In this paper, we present a novel, simple, and efficient method for automatic, highly accurate segmentation by classification of motion capture data. Classifying motion capture data requires searching a high-dimensional space, owing to the high dimensionality of the data itself. The main contribution of this paper is a method that reduces this search space by applying the divide-and-conquer principle in the form of a taxonomy tree, i.e., a multi-level segmentation-by-classification algorithm in which the highest level classifies motion capture data into dynamic and static segments and the lowest level uses features of individual body parts to recognize a wide range of human movements. A first implementation of this algorithm has produced very promising results and shown that the method is fast enough to be integrated into real-time systems such as robotics and surveillance applications.
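To make the two-level idea concrete, here is a minimal sketch in Python of how such a taxonomy-tree segmentation by classification could be structured. The velocity threshold, the body-part grouping, the displacement "feature", and all function names are illustrative assumptions for this sketch, not the features or parameters actually used in the paper.

```python
# Minimal sketch of a two-level taxonomy-tree segmentation by classification.
# Threshold, body-part grouping, and the displacement feature are assumptions.
import numpy as np

def static_dynamic_split(frames, vel_threshold=0.05):
    """Top level of the tree: label each frame static or dynamic based on
    the mean joint speed between consecutive frames.

    frames: (T, J, 3) array of J joint positions over T frames.
    Returns a boolean array of length T (True = dynamic).
    """
    speeds = np.linalg.norm(np.diff(frames, axis=0), axis=2)  # (T-1, J)
    mean_speed = speeds.mean(axis=1)                          # (T-1,)
    return np.concatenate([[False], mean_speed > vel_threshold])

def contiguous_segments(labels):
    """Group consecutive frames with equal labels into (start, end, label) runs."""
    runs, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            runs.append((start, t, bool(labels[start])))
            start = t
    return runs

def classify_dynamic_segment(segment, body_parts):
    """Lowest level of the tree: pick the body part whose joints moved most
    over the segment (a stand-in for richer per-body-part features)."""
    displacement = np.linalg.norm(segment[-1] - segment[0], axis=1)  # (J,)
    scores = {name: displacement[idx].mean() for name, idx in body_parts.items()}
    return max(scores, key=scores.get)

# Hypothetical skeleton: 4 joints grouped into two body parts.
body_parts = {"arms": [0, 1], "legs": [2, 3]}
frames = np.zeros((100, 4, 3))
frames[40:70, 2:4, :] = np.cumsum(np.full((30, 2, 3), 0.1), axis=0)  # legs move
frames[70:, 2:4, :] = frames[69, 2:4, :]                             # then hold

for start, end, is_dynamic in contiguous_segments(static_dynamic_split(frames)):
    label = classify_dynamic_segment(frames[start:end], body_parts) if is_dynamic else "static"
    print(f"frames {start}-{end}: {label}")
```

The sketch shows how the tree prunes the search space: the cheap top-level test runs on every frame, while the more expensive per-body-part classification runs only on the segments marked as dynamic, which is what makes a hierarchy of this kind attractive for real-time use.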


Notes

  1. This data was captured at Chemnitz University of Technology.

References

  1. Barbic, J., Safonova, A., Pan, J.-Y., Faloutsos, C., Hodgins, J.K., Pollard, N.S.: Segmenting motion capture data into distinct behaviors. In: Proceedings of Graphics Interface, pp. 185–194 (2004)

  2. Kulic, D., Nakamura, Y.: Scaffolding on-line segmentation of full body human motion patterns. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nice, France (2008)

  3. Yun, S., Park, A., Jung, K.: Graph-based high level motion segmentation using normalized cuts. World Academy of Science, Engineering and Technology 20 (2008)

  4. Zhou, F., De la Torre, F., Hodgins, J.K.: Aligned cluster analysis for temporal segmentation of human motion. In: IEEE International Conference on Automatic Face and Gesture Recognition (2008)

  5. Lin, J.F., Kulic, D.: Segmenting human motion for automated rehabilitation exercise analysis. In: 34th Annual International Conference of the IEEE EMBS (2012)

  6. Gong, D., Medioni, G., Zhu, S., Zhao, X.: Kernelized temporal cut for online temporal segmentation and recognition. In: Fitzgibbon, A., Lazebnik, S., Perona, P., Sato, Y., Schmid, C. (eds.) ECCV 2012, Part III. LNCS, vol. 7574, pp. 229–243. Springer, Heidelberg (2012)

  7. Nakata, T.: Temporal segmentation and recognition of body motion data based on inter-limb correlation analysis. In: Proceedings of the 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems, San Diego, CA, Oct 29–Nov 2 (2007)

  8. Müller, M., Röder, T.: Motion templates for automatic classification and retrieval of motion capture data. In: Eurographics/ACM SIGGRAPH Symposium on Computer Animation (2006)

  9. Zhao, L., Sukthankar, G.: A semi-supervised method for segmenting multi-modal data. In: Proceedings of the International Symposium on Quality of Life Technology (2009)

  10. Cho, K., Chen, X.: Classifying and visualizing motion capture sequences using deep neural networks. arXiv preprint arXiv:1306.3874 (2013)

  11. Lv, F., Nevatia, R.: Recognition and segmentation of 3-D human action using HMM and multi-class AdaBoost. In: Leonardis, A., Bischof, H., Pinz, A. (eds.) ECCV 2006. LNCS, vol. 3954, pp. 359–372. Springer, Heidelberg (2006)

  12. Li, C., Kulkarni, P.R., Prabhakaran, B.: Segmentation and recognition of motion capture data stream by classification. Multimedia Tools Appl. 35, 55–70 (2007)

  13. Bruderlin, A., Williams, L.: Motion signal processing. In: Proceedings of the 22nd Annual Conference on Computer Graphics and Interactive Techniques, pp. 97–104 (1995)

  14. CMU Graphics Lab Motion Capture Database, Subject #86. http://mocap.cs.cmu.edu/search.php?subjectnumber=86. Accessed 24 March 2014


Acknowledgments

The data used in this work was obtained from mocap.cs.cmu.edu.

Author information

Corresponding author

Correspondence to Samer Salamah.


Electronic supplementary material

Below are the links to the electronic supplementary material.

Supplementary material 1 (AVI 40089 kb)

Supplementary material 2 (AVI 12119 kb)


Copyright information

© 2015 Springer International Publishing Switzerland

About this chapter

Cite this chapter

Salamah, S., Zhang, L., Brunnett, G. (2015). Hierarchical Method for Segmentation by Classification of Motion Capture Data. In: Brunnett, G., Coquillart, S., van Liere, R., Welch, G., Váša, L. (eds) Virtual Realities. Lecture Notes in Computer Science, vol. 8844. Springer, Cham. https://doi.org/10.1007/978-3-319-17043-5_10

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-17043-5_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-17042-8

  • Online ISBN: 978-3-319-17043-5

  • eBook Packages: Computer Science, Computer Science (R0)
