Semantic Description and Recognition of Human Body Poses and Movement Sequences with Gesture Description Language

  • Tomasz Hachaj
  • Marek R. Ogiela
Part of the Communications in Computer and Information Science book series (CCIS, volume 353)

Abstract

In this article we introduce a new approach to the recognition of human body poses and movement sequences. Our concept is based on a syntactic description with the so-called Gesture Description Language (GDL). The implementation of GDL requires a dedicated semantic reasoning module with an additional heap-like memory. In the following paragraphs we briefly describe our initial concept. We also present the software and hardware architecture that we created to test our solution, together with very promising results of early experiments.
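The abstract only sketches the reasoning mechanism, so the following minimal Python sketch illustrates one possible reading of it: a rule-based reasoner that keeps a bounded, heap-like memory of recent skeleton frames and re-evaluates pose rules as new frames arrive. This is our illustration under stated assumptions, not the authors' GDL syntax or implementation; all names (SkeletonFrame, GDLReasoner, hands_above_head) are hypothetical.

```python
from collections import deque

# Hypothetical skeleton frame: joint name -> (x, y, z) in camera coordinates,
# with the y axis pointing up (as in OpenNI/Kinect skeletal data).
SkeletonFrame = dict

class GDLReasoner:
    """Illustrative reasoner: stores recent skeleton frames in a bounded,
    heap-like memory and re-evaluates registered pose rules per frame."""

    def __init__(self, memory_size=30):
        self.memory = deque(maxlen=memory_size)  # newest frame kept at index 0
        self.rules = {}                          # rule name -> predicate over memory

    def add_rule(self, name, predicate):
        self.rules[name] = predicate

    def push_frame(self, frame: SkeletonFrame):
        self.memory.appendleft(frame)
        # Return the names of all rules satisfied by the current memory state.
        return [name for name, rule in self.rules.items() if rule(self.memory)]

# Example static-pose rule: both hands above the head in the newest frame.
def hands_above_head(memory):
    if not memory:
        return False
    f = memory[0]
    return (f["left_hand"][1] > f["head"][1] and
            f["right_hand"][1] > f["head"][1])

reasoner = GDLReasoner()
reasoner.add_rule("HandsAboveHead", hands_above_head)
frame = {"head": (0.0, 1.6, 2.0),
         "left_hand": (-0.3, 1.9, 2.0),
         "right_hand": (0.3, 1.8, 2.0)}
print(reasoner.push_frame(frame))  # -> ['HandsAboveHead']
```

A movement-sequence rule would follow the same pattern but inspect several frames of the memory (e.g., require that one pose rule held a few frames ago and another holds now), which is why a frame history rather than a single frame is passed to each predicate.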

Keywords

Pose recognition, movement sequences recognition, syntactic description, semantic reasoning, natural interface



Copyright information

© Springer-Verlag Berlin Heidelberg 2012

Authors and Affiliations

  • Tomasz Hachaj (1)
  • Marek R. Ogiela (2)
  1. Institute of Computer Science and Computer Methods, Pedagogical University of Krakow, Krakow, Poland
  2. AGH University of Science and Technology, Krakow, Poland
