
MRCS: multi-radii circular signature based feature descriptor for hand gesture recognition

Published in: Multimedia Tools and Applications

Abstract

Deaf and hearing-impaired persons communicate by means of signs and gestures. Over time, this form of communication has evolved into natural languages with their own grammars and lexicons. Automatic hand gesture recognition is therefore an important task in the development of human-computer interaction systems for the deaf and mute community. In this paper, we report the development of a novel feature descriptor named Multi-Radii Circular Signature (MRCS) and an associated automatic hand gesture recognition pipeline. The descriptor has several desirable properties, including invariance to translation, scale and rotation, extraction of a variable number of features, and support for symbol reconstruction. Multiple sets of experiments covering various feature combinations and classifiers have been carried out on three publicly available benchmark datasets, viz. the NTU 10-gesture dataset, the HKU EEE DSP dataset and the Senz3D dataset. Consistently high performance across datasets and feature combinations demonstrates the robustness and generality of the descriptor. Its code and usage guidelines are released at https://github.com/iilabau/MRCS.
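This excerpt does not include the algorithmic details of MRCS, but the invariance properties claimed in the abstract can be illustrated with a minimal sketch, assuming the descriptor samples a segmented binary hand mask along concentric circles about the hand centroid. The function name `mrcs_signature`, the per-circle foreground fraction, and all parameter choices below are illustrative assumptions, not the authors' actual formulation:

```python
import numpy as np

def mrcs_signature(mask, n_radii=8, n_samples=180):
    """Illustrative multi-radii circular signature (hypothetical sketch).

    For each of `n_radii` concentric circles centred on the hand centroid,
    sample the binary mask at `n_samples` angles and record the fraction of
    samples that land on the hand region.  Centring on the centroid gives
    translation invariance, scaling the radii by the hand extent gives scale
    invariance, and a per-circle foreground fraction is unaffected by rotation.
    """
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()                                 # hand centroid
    r_max = np.sqrt(((ys - cy) ** 2 + (xs - cx) ** 2).max())      # hand extent
    angles = np.linspace(0, 2 * np.pi, n_samples, endpoint=False)
    signature = []
    for k in range(1, n_radii + 1):
        r = r_max * k / n_radii
        py = np.clip(np.round(cy + r * np.sin(angles)).astype(int),
                     0, mask.shape[0] - 1)
        px = np.clip(np.round(cx + r * np.cos(angles)).astype(int),
                     0, mask.shape[1] - 1)
        signature.append(mask[py, px].mean())  # foreground fraction on circle k
    return np.array(signature)
```

The resulting fixed-length vector could then be fed to any standard classifier, in line with the multi-classifier experiments reported in the abstract.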



Conflict of interest

The authors declare that they have no conflict of interest.

Author information

Corresponding author

Correspondence to Ayatullah Faruk Mollah.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article

Cite this article

Sahana, T., Basu, S., Nasipuri, M. et al. MRCS: multi-radii circular signature based feature descriptor for hand gesture recognition. Multimed Tools Appl 81, 8539–8560 (2022). https://doi.org/10.1007/s11042-021-11743-w

