Multimedia Tools and Applications, Volume 75, Issue 22, pp 14991–15015

Hand gesture recognition with jointly calibrated Leap Motion and depth sensor

Abstract

Novel 3D acquisition devices such as depth cameras and the Leap Motion have recently reached the market. Depth cameras provide a complete 3D description of the framed scene, while the Leap Motion is a sensor explicitly targeted at hand gesture recognition and provides only a limited set of relevant points. This paper shows how to jointly exploit the two types of sensors for accurate gesture recognition. An ad-hoc solution for the joint calibration of the two devices is presented first. Then a set of novel feature descriptors is introduced for both the Leap Motion and the depth data. Various schemes based on the distances of the hand samples from the centroid, on the curvature of the hand contour, and on the convex hull of the hand shape are employed, and the use of Leap Motion data to aid feature extraction is also considered. The proposed feature sets are fed to two different classifiers, one based on multi-class SVMs and one exploiting Random Forests. Different feature selection algorithms have also been tested in order to reduce the complexity of the approach. Experimental results show that the proposed method achieves very high accuracy, and the current implementation runs in real time.
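As a rough illustration of the distance-from-centroid descriptor and the multi-class SVM stage mentioned above, the following Python sketch builds a normalized histogram of sample distances from the hand centroid and classifies it with an SVM. The use of NumPy and scikit-learn, the number of histogram bins, the SVM parameters, and the toy data are assumptions for illustration only and do not reproduce the authors' actual implementation.

```python
# Minimal sketch (not the paper's code): distance-from-centroid feature
# histogram for a segmented hand point cloud, fed to a multi-class SVM.
import numpy as np
from sklearn.svm import SVC


def centroid_distance_features(points, n_bins=32):
    """Normalized histogram of sample distances from the hand centroid.

    points: (N, 3) array of 3D hand samples (e.g. from the depth sensor).
    Returns an n_bins-dimensional descriptor.
    """
    centroid = points.mean(axis=0)
    dists = np.linalg.norm(points - centroid, axis=1)
    # Normalize by the maximum distance so the descriptor is scale-invariant.
    hist, _ = np.histogram(dists / (dists.max() + 1e-9),
                           bins=n_bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)


# Toy usage: random point clouds stand in for segmented hand depth data,
# and labels stand in for gesture classes.
rng = np.random.default_rng(0)
X = np.vstack([centroid_distance_features(rng.normal(size=(500, 3)))
               for _ in range(40)])
y = rng.integers(0, 10, size=40)  # 10 hypothetical gesture classes

clf = SVC(kernel="rbf", C=10.0, gamma="scale")  # one-vs-one multi-class SVM
clf.fit(X, y)
print(clf.predict(X[:5]))
```

In practice such a descriptor would be concatenated with the curvature and convex-hull features described in the paper before classification.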

Keywords

Depth · Gesture recognition · Calibration · Kinect · Leap Motion · SVM

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

Department of Information Engineering, University of Padova, Padova, Italy