Non-intrusive Gesture Recognition in Real Companion Environments

  • Sebastian Handrich
  • Omer Rashid
  • Ayoub Al-Hamadi
Chapter
Part of the Cognitive Technologies book series (COGTECH)

Abstract

Automatic gesture recognition pushes Human-Computer Interaction (HCI) closer to human-human interaction. Although gesture recognition technologies have been successfully applied in real-world applications, several problems still need to be addressed before HCI systems can be deployed more widely. First, gesture recognition systems require robust tracking of the relevant body parts, which is challenging since the human body is capable of an enormous range of poses. Therefore, a pose estimation approach is proposed that identifies body parts based on geodesic distances. Furthermore, the generation of synthetic data, which is essential for training and evaluation purposes, is presented. A second problem is that gestures are spatio-temporal patterns that can vary in shape, trajectory or duration, even for the same person. Static patterns are recognized using geometric and statistical features that are invariant to translation, rotation and scaling, whereas dynamic patterns are classified with stochastic models such as Hidden Markov Models and Conditional Random Fields applied to quantized trajectories. Finally, a spotting approach based on a non-gesture model is proposed that separates meaningful gestures from random hand movements.
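
As a rough illustration of the trajectory quantization mentioned above (a minimal Python sketch, not the authors' implementation), the snippet below converts a tracked 2-D hand path into discrete orientation codewords of the kind typically fed to an HMM or CRF classifier. The 18-bin codebook and the helper name quantize_trajectory are assumptions made for this example.

    # Minimal sketch: quantize a 2-D hand trajectory into orientation
    # codewords (a symbol sequence) for an HMM/CRF classifier.
    # The 18-bin codebook size is an illustrative assumption.
    import numpy as np

    def quantize_trajectory(points, n_bins=18):
        """Map consecutive (x, y) hand positions to orientation codewords."""
        pts = np.asarray(points, dtype=float)
        deltas = np.diff(pts, axis=0)                    # frame-to-frame motion vectors
        angles = np.arctan2(deltas[:, 1], deltas[:, 0])  # orientation in [-pi, pi]
        angles = np.mod(angles, 2 * np.pi)               # shift to [0, 2*pi)
        return (angles / (2 * np.pi) * n_bins).astype(int) % n_bins

    if __name__ == "__main__":
        # A circular hand movement should sweep through all codewords.
        t = np.linspace(0, 2 * np.pi, 40)
        circle = np.stack([np.cos(t), np.sin(t)], axis=1)
        print(quantize_trajectory(circle))

Because the codewords encode only relative motion direction, the resulting symbol sequence is largely invariant to the position and scale of the gesture, which is what makes it a convenient input for stochastic sequence models.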

Acknowledgements

This work was done within the Transregional Collaborative Research Centre SFB/TRR 62 “Companion-Technology for Cognitive Technical Systems” funded by the German Research Foundation (DFG).

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Sebastian Handrich (1)
  • Omer Rashid (1)
  • Ayoub Al-Hamadi (1)
  1. Otto-von-Guericke University Magdeburg, Magdeburg, Germany
