
Multi-fusion feature pyramid for real-time hand detection

  • 1195: Deep Learning for Multimedia Signal Processing and Applications
  • Published:
Multimedia Tools and Applications

Abstract

Real-time human-interface (HI) systems need accurate and efficient hand-detection models that fit their tight limits on cost, size, memory, computation, and power. Hand detection is also important for other applications such as homecare systems, fine-grained action recognition, movie interpretation, and even the understanding of dance gestures. In recent years, object detection has become far more tractable thanks to deep CNN-based state-of-the-art models such as R-CNN, SSD, and YOLO. However, these models cannot reach the desired efficiency and accuracy on HI-oriented embedded devices because of their complex, time-consuming architectures. Another critical issue in hand detection is that small hands (< 30 × 30 pixels) remain challenging for all of the above methods. To deal with these problems, we propose a shallow model, the Multi-fusion Feature Pyramid, for real-time hand detection. Experimental results on the Oxford hand dataset combined with the skin dataset show that the proposed method outperforms other state-of-the-art (SoTA) methods in accuracy, efficiency, and real-time speed. On the COCO dataset, the proposed CFPN model likewise achieves the highest efficiency and accuracy among the compared SoTA methods. We therefore conclude that the proposed model is well suited to real-life small-hand detection on embedded devices.
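To make the idea of multi-scale feature fusion concrete, the sketch below shows a minimal, generic feature-pyramid block in PyTorch. It is an illustration only: the class name `MultiFusionPyramid`, the channel widths, and the fusion-by-concatenation step are assumptions made for this sketch, not the authors' published CFPN architecture, whose exact layer layout is described in the full paper.

```python
# Illustrative sketch of a generic FPN-style multi-scale fusion block.
# Names, channel widths, and the concatenation-based fusion are assumptions
# for this example, NOT the authors' CFPN design.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiFusionPyramid(nn.Module):
    """Combines fine spatial detail from early backbone stages with semantic
    information from deeper stages, which is what keeps very small objects
    (e.g. hands under 30 x 30 pixels) detectable at prediction time."""

    def __init__(self, in_channels=(128, 256, 512), out_channels=64):
        super().__init__()
        # 1x1 lateral convolutions compress each backbone stage to a common width.
        self.lateral = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels]
        )
        # 3x3 convolutions smooth each fused map before the detection heads.
        self.smooth = nn.ModuleList(
            [nn.Conv2d(2 * out_channels, out_channels, kernel_size=3, padding=1)
             for _ in in_channels[:-1]]
        )

    def forward(self, feats):
        # feats: backbone maps ordered from high resolution to low resolution.
        laterals = [conv(f) for conv, f in zip(self.lateral, feats)]
        outputs = [laterals[-1]]  # start from the coarsest, most semantic level
        for i in range(len(laterals) - 2, -1, -1):
            # Upsample the previous (coarser) output and fuse it with the
            # current lateral map by concatenation -- one plausible reading of
            # "multi-fusion"; classic FPN uses element-wise addition instead.
            up = F.interpolate(outputs[0], size=laterals[i].shape[-2:], mode="nearest")
            outputs.insert(0, self.smooth[i](torch.cat([laterals[i], up], dim=1)))
        return outputs  # multi-scale maps, finest first


if __name__ == "__main__":
    # Fake backbone outputs at strides 8, 16, and 32 for a 256 x 256 input.
    c3 = torch.randn(1, 128, 32, 32)
    c4 = torch.randn(1, 256, 16, 16)
    c5 = torch.randn(1, 512, 8, 8)
    for p in MultiFusionPyramid()([c3, c4, c5]):
        print(p.shape)  # [1, 64, 32, 32], [1, 64, 16, 16], [1, 64, 8, 8]
```

The design point the abstract relies on is visible here: upsampled semantic maps are merged back into the high-resolution maps, so even a shallow backbone can keep very small hands represented at the prediction stage.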



Author information


Corresponding author

Correspondence to Jun-Wei Hsieh.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Chang, CW., Santra, S., Hsieh, JW. et al. Multi-fusion feature pyramid for real-time hand detection. Multimed Tools Appl 81, 11917–11929 (2022). https://doi.org/10.1007/s11042-021-11897-7


  • Received:

  • Revised:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s11042-021-11897-7

Keywords
