
Hand Gesture Recognition Based on Saliency and Foveation Features Using Convolutional Neural Network

  • Earnest Paul Ijjina
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1090)

Abstract

The rapid growth of multimedia technology has opened new possibilities for effective human–computer interaction (HCI) using multimedia devices. Gesture recognition is an important task in developing an effective HCI system, and this work is an attempt in that direction. It presents an approach for recognizing hand gestures with a deep convolutional neural network architecture, using features derived from saliency and foveation information. In contrast to the classical approach of feeding the raw video stream to the network, low-level features driven by saliency and foveation information, extracted from depth video frames, are used for gesture recognition. The temporal variation of these features is given as input to a convolutional neural network for classification. Owing to the discriminative feature-learning capability of deep learning models and the invariance of depth information to visual appearance, the model achieves high recognition accuracy: 98.27% on the SKIG hand gesture recognition dataset.
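To make the feature-extraction pipeline concrete, the sketch below shows one plausible reading of the approach: a crude center-surround saliency map is computed per depth frame, each map is collapsed into a fixed-length histogram, and the histograms are stacked over time into a 2D feature image that could serve as CNN input. The saliency operator, histogram size, and stacking scheme are illustrative assumptions, not the paper's exact method.

```python
import numpy as np

def saliency_map(depth_frame, k=5):
    """Crude center-surround saliency: |pixel - local k-by-k mean|.
    A stand-in for the paper's saliency features (details assumed)."""
    pad = k // 2
    padded = np.pad(depth_frame.astype(float), pad, mode="edge")
    h, w = depth_frame.shape
    local = np.zeros((h, w))
    for dy in range(k):          # box filter via shifted sums
        for dx in range(k):
            local += padded[dy:dy + h, dx:dx + w]
    local /= k * k
    return np.abs(depth_frame - local)

def temporal_feature_image(depth_video, n_bins=32):
    """Collapse each frame's saliency map into a normalized histogram
    row; stacking rows over time yields a 2D image for a CNN."""
    rows = []
    for frame in depth_video:
        hist, _ = np.histogram(saliency_map(frame), bins=n_bins)
        rows.append(hist / max(hist.sum(), 1))
    return np.stack(rows)        # shape: (num_frames, n_bins)

# usage with synthetic depth frames
video = np.random.rand(10, 48, 48)
feat = temporal_feature_image(video)
print(feat.shape)  # (10, 32)
```

The resulting `(num_frames, n_bins)` array varies along one axis with time and along the other with the saliency distribution, matching the abstract's idea of classifying the temporal variation of low-level features rather than raw video.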

Keywords

Hand gesture recognition · Human–computer interaction (HCI) · Convolutional neural network (CNN) · Depth information


Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. Department of Computer Science and Engineering, National Institute of Technology Warangal, Warangal, India
