
Study on User-Generated 3D Gestures for Video Conferencing System with See-Through Head Mounted Display

  • Guangchuan Li
  • Yue Liu
  • Yongtian Wang
  • David Rempel
Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 875)

Abstract

As video conferencing systems transition to head-mounted displays (HMDs), non-contact (3D) hand gestures are likely to replace conventional input devices by providing more efficient interaction at lower cost. This paper presents the design of an experimental video conferencing system built around an optical see-through HMD, a Leap Motion hand tracker, and RGB cameras. Both skeleton-based dynamic hand gesture recognition and ergonomics-based gesture lexicon design were studied. The proposed recognition algorithm fuses hand-shape and hand-direction features, applies a Temporal Pyramid to pool them into a single high-dimensional descriptor, and classifies gestures with a linear SVM. Subjects (N = 16) self-generated hand gestures for 25 tasks related to video conferencing and object manipulation, then rated each gesture on ease of performance, match to the command, and arm fatigue. Based on these ratings, a gesture lexicon is proposed for controlling a video conferencing system and for manipulating virtual objects.
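To make the recognition stage concrete, below is a minimal sketch of the pipeline the abstract describes: per-frame hand-shape and hand-direction features are fused, pooled with a temporal pyramid into one high-dimensional vector, and classified with a linear SVM. The joint layout, the wrist-to-fingertip feature definitions, the pyramid depth, and the use of scikit-learn's LinearSVC in place of LIBSVM are all illustrative assumptions, not the authors' published implementation.

```python
# Sketch of a skeleton-based dynamic gesture recognizer: fused shape/direction
# features per frame, temporal-pyramid pooling, linear SVM classification.
# Joint indices and feature choices are assumptions for illustration only.
import numpy as np
from sklearn.svm import LinearSVC

N_JOINTS = 22  # assumed Leap Motion skeleton size (wrist/palm + finger joints)

def frame_features(joints):
    """Per-frame descriptor from one (N_JOINTS, 3) array of joint positions."""
    wrist = joints[0]
    # Hand-shape feature: distances from the wrist to every other joint,
    # normalized so the descriptor is scale-invariant across hand sizes.
    dists = np.linalg.norm(joints[1:] - wrist, axis=1)
    shape = dists / (dists.max() + 1e-8)
    # Hand-direction feature: unit vector from the wrist to the middle
    # fingertip (index 12 is an assumed joint layout, not from the paper).
    v = joints[12] - wrist
    direction = v / (np.linalg.norm(v) + 1e-8)
    return np.concatenate([shape, direction])

def temporal_pyramid(seq, levels=3):
    """Mean-pool frame descriptors over 1, 2, 4, ... temporal segments and
    concatenate, keeping coarse-to-fine ordering of the motion."""
    feats = np.stack([frame_features(f) for f in seq])
    pooled = []
    for level in range(levels):
        for segment in np.array_split(feats, 2 ** level):
            pooled.append(segment.mean(axis=0))
    return np.concatenate(pooled)

# Usage with synthetic data standing in for recorded gesture sequences
# (40 sequences of 60 frames each, four dummy gesture classes):
rng = np.random.default_rng(0)
X = np.stack([temporal_pyramid(rng.normal(size=(60, N_JOINTS, 3)))
              for _ in range(40)])
y = rng.integers(0, 4, size=40)
clf = LinearSVC(C=1.0).fit(X, y)
print(clf.predict(X[:5]))
```

The pyramid pooling is what lets a frame-level descriptor capture dynamics: averaging over the whole sequence alone would discard temporal order, while the finer segments at deeper levels preserve when in the gesture each hand configuration occurred.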

Keywords

3D gestures · Gesture recognition · Video conferencing system · Optical see-through HMD

Notes

Acknowledgment

This work was supported in part by the National High Technology Research and Development Program of China (2015AA016303), the National Natural Science Foundation of China (61631010), and the Office Ergonomics Research Committee.

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  • Guangchuan Li (1, 2)
  • Yue Liu (1, 2)
  • Yongtian Wang (1, 2)
  • David Rempel (3)

  1. Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
  2. AICFVE of Beijing Film Academy, Beijing, China
  3. University of California, Berkeley, USA
