Unfamiliar Dynamic Hand Gestures Recognition Based on Zero-Shot Learning

  • Jinting Wu
  • Kang Li
  • Xiaoguang Zhao
  • Min Tan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11305)

Abstract

Most existing robots can recognize trained hand gestures to interpret a user's intent, but untrained dynamic hand gestures are difficult to understand correctly. This paper presents a dynamic hand gesture recognition approach based on Zero-Shot Learning (ZSL), which can recognize untrained hand gestures and predict the user's intention. To this end, we utilize a Bidirectional Long-Short-Term Memory (BLSTM) network to extract hand gesture features from skeletal joint data collected by a Leap Motion Controller (LMC). This data is also used to construct a novel dynamic hand gesture dataset for human-robot interaction applications, covering twenty common hand gestures described by fifteen concrete semantic attributes. Based on these features and semantic attributes, a Semantic Autoencoder (SAE) is employed to learn a mapping from the feature space to the semantic space; an unfamiliar hand gesture is then recognized by matching its projected feature to the most similar semantic description. Experimental results on our dataset indicate that the proposed approach can effectively identify unfamiliar hand gestures.
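To make the pipeline concrete, the sketch below shows one plausible reading of the two learned components in Python. It is an illustration under stated assumptions, not the authors' implementation: the input dimensionality (21 Leap Motion joints times 3 coordinates), the hidden size, the regularization weight lam, and all class and function names are placeholders, and the SAE projection is obtained in the standard closed form by solving a Sylvester equation.

    import torch
    import torch.nn as nn

    class GestureBLSTM(nn.Module):
        """Bidirectional LSTM that maps a skeletal joint sequence to a
        fixed-length gesture feature (final hidden states of both
        directions, concatenated). Dimensions are assumptions."""
        def __init__(self, joint_dim=63, hidden=128):
            super().__init__()
            self.lstm = nn.LSTM(joint_dim, hidden,
                                batch_first=True, bidirectional=True)

        def forward(self, seq):            # seq: (batch, frames, joint_dim)
            _, (h, _) = self.lstm(seq)     # h: (2, batch, hidden)
            return torch.cat([h[0], h[1]], dim=1)   # (batch, 2*hidden)

    import numpy as np
    from scipy.linalg import solve_sylvester

    def train_sae(X, S, lam=0.2):
        """Learn the projection W (k x d) from feature space to semantic
        space, minimizing ||X - W.T @ S||^2 + lam * ||W @ X - S||^2.
        X : (d, N) BLSTM features, one column per training sample
        S : (k, N) semantic attribute vectors for the same samples
        The optimality condition is the Sylvester equation
        (S S^T) W + W (lam X X^T) = (1 + lam) S X^T."""
        A = S @ S.T
        B = lam * (X @ X.T)
        C = (1.0 + lam) * (S @ X.T)
        return solve_sylvester(A, B, C)    # solves A W + W B = C

    def recognize_unseen(W, x, unseen_attrs):
        """Recognize an unfamiliar gesture by nearest semantic prototype.
        x            : (d,) feature of a test gesture
        unseen_attrs : (n_unseen, k) attribute vectors of untrained classes
        Returns the index of the most similar unseen class (cosine)."""
        s_hat = W @ x
        s_hat = s_hat / (np.linalg.norm(s_hat) + 1e-12)
        protos = unseen_attrs / (np.linalg.norm(unseen_attrs, axis=1,
                                                keepdims=True) + 1e-12)
        return int(np.argmax(protos @ s_hat))

At test time a gesture never seen during training is thus classified purely from its fifteen-dimensional attribute description, which is what allows the system to handle untrained gestures at all.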

Keywords

Dynamic hand gesture recognition · Bidirectional Long-Short-Term Memory (BLSTM) · Zero-Shot Learning (ZSL) · Semantic Autoencoder (SAE) · Leap Motion Controller (LMC)

Acknowledgments

This work is partially supported by the National Natural Science Foundation of China under Grants 61673378 and 61421004.

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Jinting Wu (1, 2)
  • Kang Li (1, 2)
  • Xiaoguang Zhao (1, 2)
  • Min Tan (1, 2)
  1. The State Key Laboratory of Management and Control for Complex Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
  2. University of Chinese Academy of Sciences, Beijing, China