
Attention-based hand semantic segmentation and gesture recognition using deep networks

  • Original Paper
Evolving Systems

Abstract

The ability to discern the shape of the hand is vital to improving the performance of hand gesture recognition for human–computer interaction. Segmentation itself is a challenging problem, subject to constraints such as illumination variation and complex backgrounds. The objective of this paper is to incorporate semantic segmentation into the classification pipeline and to exploit deep neural models to achieve improved results for both static and dynamic gestures. The paper uses a UNet architecture with an attention module to obtain semantically segmented masks of the input images, which are then fed to a classifier for recognition; the attention mechanism improves segmentation accuracy. For static gestures, the top classifier layer of the VGG16 model is replaced with a classifier designed specifically for the gestures at hand. For dynamic gestures, a 3D-CNN (C3D) architecture is used as the classifier, since it can capture both spatial and temporal information from a gesture video. Data augmentation is applied during preprocessing to generate a sufficient number of training images for these CNN-based models. Significant improvements in recognition are achieved for both static and dynamic hand gesture databases through the inherent feature-learning capability of CNNs and the refined segmentation.
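
A minimal sketch of the pipeline described above is given below. It is not the authors' released code: the layer widths, the additive attention-gate design, the input sizes, and the number of gesture classes are illustrative assumptions, written in Keras only to make the segmentation-then-classification idea concrete.

```python
# Hedged sketch (not the authors' code): an attention-gated U-Net predicts a hand mask,
# and a VGG16 backbone with a custom top classifies the (masked) static-gesture image.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

def attention_gate(skip, gating, inter_channels):
    """Additive attention gate: re-weights encoder skip features with a spatial
    attention map derived from the coarser decoder (gating) signal."""
    theta = layers.Conv2D(inter_channels, 1)(skip)
    phi = layers.Conv2D(inter_channels, 1)(gating)
    phi = layers.UpSampling2D(size=(skip.shape[1] // gating.shape[1],
                                    skip.shape[2] // gating.shape[2]))(phi)
    attn = layers.Activation("relu")(layers.Add()([theta, phi]))
    attn = layers.Conv2D(1, 1, activation="sigmoid")(attn)   # 1-channel attention map
    # Broadcast-multiply the attention map over the skip-connection channels.
    return layers.Lambda(lambda t: t[0] * t[1])([skip, attn])

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def attention_unet(input_shape=(256, 256, 3)):
    """Small attention U-Net producing a 1-channel hand-segmentation mask."""
    inp = layers.Input(input_shape)
    e1 = conv_block(inp, 32); p1 = layers.MaxPooling2D()(e1)
    e2 = conv_block(p1, 64);  p2 = layers.MaxPooling2D()(e2)
    b = conv_block(p2, 128)                                   # bottleneck
    a2 = attention_gate(e2, b, 64)
    d2 = conv_block(layers.Concatenate()([layers.UpSampling2D()(b), a2]), 64)
    a1 = attention_gate(e1, d2, 32)
    d1 = conv_block(layers.Concatenate()([layers.UpSampling2D()(d2), a1]), 32)
    mask = layers.Conv2D(1, 1, activation="sigmoid", name="hand_mask")(d1)
    return Model(inp, mask, name="attention_unet")

def vgg16_gesture_classifier(num_classes=10, input_shape=(224, 224, 3)):
    """VGG16 backbone with its original top removed and a small gesture head added."""
    base = VGG16(include_top=False, weights="imagenet", input_shape=input_shape)
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return Model(base.input, out, name="vgg16_gesture_classifier")
```

In this sketch the predicted mask would be multiplied with (or concatenated to) the input frame before classification. For dynamic gestures, the same masks would instead be applied frame by frame and the masked clip passed to a 3D-CNN (C3D-style) classifier; that branch is omitted here for brevity.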

Data availability

The databases used in the experiments are publicly available: the Brazilian Sign Language (LIBRAS) dataset (http://sites.ecomp.uefs.br/lasic/projetos/libras-dataset) (Bastos et al. 2015), the HGR dataset (https://sun.aei.polsl.pl/mkawulok/gestures/) (Kawulok et al. 2014), and the IPN Hand dataset (https://gibranbenitez.github.io/IPN_Hand/) (Benitez-Garcia et al. 2021).

References

  • Abdul W, Alsulaiman M, Amin SU, Faisal M, Muhammad G, Albogamy FR, Bencherif MA, Ghaleb H (2021) Intelligent real-time Arabic sign language classification using attention-based inception and bilstm. Comput Electric Eng 95:107395

  • Bastos IL, Angelo MF, Loula AC (2015) Recognition of static gestures applied to Brazilian sign language (libras). In: 2015 28th SIBGRAPI Conference on Graphics, Patterns and Images, pp. 305–312. IEEE

  • Benitez-Garcia G, Olivares-Mercado J, Sanchez-Perez G, Yanai K (2021) Ipn hand: a video dataset and benchmark for real-time continuous hand gesture recognition. In: 2020 25th International Conference on pattern recognition (ICPR), pp 4340–4347. IEEE

  • Chakraborty BK, Sarma D, Bhuyan M, MacDorman KF (2017) Review of constraints on vision-based gesture recognition for human-computer interaction. IET Comput Vis 12(1):3–15

  • Chen L-C, Papandreou G, Kokkinos I, Murphy K, Yuille AL (2014) Semantic image segmentation with deep convolutional nets and fully connected crfs. arXiv preprint arXiv:1412.7062

  • Chen L-C, Papandreou G, Schroff F, Adam H (2017a) Rethinking atrous convolution for semantic image segmentation. arXiv preprint arXiv:1706.05587

  • Chen L-C, Papandreou G, Kokkinos I, Murphy K, Yuille AL (2017b) Deeplab: semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans Pattern Anal Mach Intell 40(4):834–848

  • Chen L, Zhang H, Xiao J, Nie L, Shao J, Liu W, Chua T-S (2017c) Sca-cnn: spatial and channel-wise attention in convolutional networks for image captioning. In: Proceedings of the IEEE Conference on computer vision and pattern recognition, pp 5659–5667

  • Chen L-C, Zhu Y, Papandreou G, Schroff F, Adam H (2018) Encoder-decoder with atrous separable convolution for semantic image segmentation. In: Proceedings of the European Conference on computer vision (ECCV), pp 801–818

  • D’Eusanio A, Simoni A, Pini S, Borghi G, Vezzani R, Cucchiara R (2020) A transformer-based network for dynamic hand gesture recognition. In: 2020 International Conference on 3D Vision (3DV), pp. 623–632. IEEE

  • Dhingra N, Kunz, A (2019) Res3atn-deep 3d residual attention network for hand gesture recognition in videos. In: 2019 International Conference on 3D vision (3DV), pp 491–501. IEEE

  • Dutta HPJ, Sarma D, Bhuyan MK, Laskar RH (2020) Semantic segmentation based hand gesture recognition using deep neural networks. In: 2020 National Conference on Communications (NCC), pp 1–6. IEEE

  • Fu J, Liu J, Tian H, Li Y, Bao Y, Fang Z, Lu H (2019) Dual attention network for scene segmentation. In: Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, pp 3146–3154

  • Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2014) Generative adversarial nets. In: Advances in neural information processing systems, pp 2672–2680

  • He K, Gkioxari G, Dollár P, Girshick R (2017) Mask r-cnn. In: Proceedings of the IEEE International Conference on computer vision, pp 2961–2969

  • Huang H, Lin L, Tong R, Hu H, Zhang Q, Iwamoto Y, Han X, Chen Y-W, Wu J (2020) Unet 3+: A full-scale connected unet for medical image segmentation. In: ICASSP 2020-2020 IEEE International Conference on acoustics, speech and signal processing (ICASSP), pp 1055–1059. IEEE

  • Hu J, Shen L, Sun G (2018) Squeeze-and-excitation networks. In: Proceedings of the IEEE Conference on computer vision and pattern recognition, pp 7132–7141

  • Jaderberg M, Simonyan K, Zisserman A et al (2015) Spatial transformer networks. Adv Neural Inf Process Syst 28:2017–2025

  • Karpathy A, Toderici G, Shetty S, Leung T, Sukthankar R, Fei-Fei L (2014) Large-scale video classification with convolutional neural networks. Proceedings of the IEEE Conference on computer vision and pattern recognition, pp 1725–1732

  • Kavyasree V, Sarma D, Gupta P, Bhuyan M (2020) Deep network-based hand gesture recognition using optical flow guided trajectory images. In: 2020 IEEE Applied Signal Processing Conference (ASPCON), pp 252–256. IEEE

  • Kawulok M, Kawulok J, Nalepa J, Smolka B (2014) Self-adaptive algorithm for segmenting skin regions. EURASIP J Adv Signal Process 2014:1–22

  • Kingma DP, Ba J (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980

  • Lea C, Flynn MD, Vidal R, Reiter A, Hager GD (2017) Temporal convolutional networks for action segmentation and detection. In: Proceedings of the IEEE Conference on computer vision and pattern recognition, pp 156–165

  • Li H, Xiong P, An J, Wang L (2018) Pyramid attention network for semantic segmentation. arXiv preprint arXiv:1805.10180

  • Li C, Tan Y, Chen W, Luo X, He Y, Gao Y, Li F (2020) Anu-net: attention-based nested u-net to exploit full resolution features for medical image segmentation. Comput Graph 90:11–20

  • Li X, Hou Y, Wang P, Gao Z, Xu M, Li W (2021) Trear: transformer-based rgb-d egocentric action recognition. IEEE Trans Cogn Dev Syst 14(1):246–252

  • Long J, Shelhamer E, Darrell T (2015) Fully convolutional networks for semantic segmentation. In: Proceedings of the IEEE Conference on computer vision and pattern recognition, pp 3431–3440

  • Narasimhaswamy S, Wei Z, Wang Y, Zhang J, Hoai M (2019) Contextual attention for hand detection in the wild. In: Proceedings of the IEEE/CVF International Conference on computer vision, pp 9567–9576

  • Narayana P, Beveridge R, Draper BA (2018) Gesture recognition: focus on the hands. In: Proceedings of the IEEE Conference on computer vision and pattern recognition, pp 5235–5244

  • Pisharady PK, Vadakkepat P, Loh AP (2013) Attention based detection and recognition of hand postures against complex backgrounds. Int J Comput Vis 101(3):403–419

  • Ren S, He K, Girshick R, Sun J (2015) Faster r-cnn: towards real-time object detection with region proposal networks. Adv Neural Inf Process Syst 28:91–99

  • Dai J, Li Y, He K, Sun J (2016) R-FCN: object detection via region-based fully convolutional networks. Adv Neural Inf Process Syst 29

  • Ronneberger O, Fischer P, Brox T (2015) U-net: convolutional networks for biomedical image segmentation. In: International Conference on medical image computing and computer-assisted intervention, pp 234–241. Springer

  • Sarma D, Bhuyan MK (2018) Hand gesture recognition using deep network through trajectory-to-contour based images. In: Proceedings of the IEEE India Council International Conference (INDICON)

  • Sarma D, Bhuyan M (2021) Methods, databases and recent advancement of vision-based hand gesture recognition for hci systems: a review. SN Comput Sci 2(6):1–40

  • Sarma D, Bhuyan M (2022) Hand detection by two-level segmentation with double-tracking and gesture recognition using deep-features. Sens Imaging 23(1):1–29

  • Sarma D, Kavyasree V, Bhuyan M (2022) Two-stream fusion model using 3d-cnn and 2d-cnn via video-frames and optical flow motion templates for hand gesture recognition. Innov Syst Softw Eng pp 1–14

  • Sharma S, Kumar K (2021) Asl-3dcnn: American sign language recognition technique using 3-d convolutional neural networks. Multimed Tools Appl 80(17):26319–26331

  • Simonyan K, Zisserman A (2014) Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556

  • Souly N, Spampinato C, Shah M (2017) Semi supervised semantic segmentation using generative adversarial network. In: Proceedings of the IEEE International Conference on computer vision, pp 5688–5696

  • Tran D, Bourdev L, Fergus R, Torresani L, Paluri M (2015) Learning spatiotemporal features with 3d convolutional networks. In: Proceedings of the IEEE International Conference on computer vision, pp 4489–4497

  • Vaswani A, Ramachandran P, Srinivas A, Parmar N, Hechtman B, Shlens J (2021) Scaling local self-attention for parameter efficient visual backbones. In: Proceedings of the IEEE/CVF Conference on computer vision and pattern recognition, pp 12894–12904

  • Wang F, Jiang M, Qian C, Yang S, Li C, Zhang H, Wang X, Tang X (2017) Residual attention network for image classification. In: Proceedings of the IEEE Conference on computer vision and pattern recognition, pp 3156–3164

  • Woo S, Park J, Lee J-Y, Kweon IS (2018) Cbam: convolutional block attention module. In: Proceedings of the European Conference on computer vision (ECCV), pp 3–19

  • Yu C, Wang J, Peng C, Gao C, Yu G, Sang N (2018) Learning a discriminative feature network for semantic segmentation. In: Proceedings of the IEEE Conference on computer vision and pattern recognition, pp 1857–1866

  • Zhang X, Zhu X, Zhang N, Li P, Wang L et al (2018) Seggan: semantic segmentation with generative adversarial network. In: 2018 IEEE Fourth International Conference on multimedia big data (BigMM), pp 1–5. IEEE

  • Zhou Z, Rahman Siddiquee MM, Tajbakhsh N, Liang J (2018) Unet++: a nested u-net architecture for medical image segmentation. In: Deep learning in medical image analysis and multimodal learning for clinical decision support (DLMIA 2018, ML-CDS 2018), pp 3–11. Springer

Funding

No funding was received for this work.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Debajit Sarma.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Human and animal rights

This research did not involve human participants or animals in any part of the experimentation.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Sarma, D., Dutta, H.P.J., Yadav, K.S. et al. Attention-based hand semantic segmentation and gesture recognition using deep networks. Evolving Systems 15, 185–201 (2024). https://doi.org/10.1007/s12530-023-09512-1
