Towards a Professional Gesture Recognition with RGB-D from Smartphone

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11754)

Abstract

The goal of this work is to build the basis for a smartphone application that provides functionalities for recording human motion data, training machine learning algorithms, and recognizing professional gestures. First, we take advantage of new mobile-phone cameras, either infrared or stereoscopic, to record RGB-D data. Then, a bottom-up pose estimation algorithm based on deep learning extracts the 2D human skeleton, and the third dimension is recovered from the depth map. Finally, we use a gesture recognition engine based on K-means and Hidden Markov Models (HMMs). The performance of the machine learning algorithm has been tested on professional gestures using a silk-weaving dataset and a TV-assembly dataset.
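To make the pipeline described above concrete, the sketch below illustrates the two algorithmic steps the abstract names: back-projecting 2D skeleton keypoints to 3D using the aligned depth map, and classifying gesture sequences with a K-means codebook feeding per-class discrete HMMs. This is a minimal sketch, not the authors' implementation: the function names, the camera intrinsics (fx, fy, cx, cy), and the use of scikit-learn and a recent hmmlearn (CategoricalHMM) are all assumptions made for illustration.

```python
# Hypothetical sketch of the pipeline described in the abstract, not the
# paper's implementation: lift 2D skeleton keypoints to 3D with an aligned
# depth map, then classify sequences with a K-means codebook + discrete HMMs.
# Camera intrinsics and the choice of scikit-learn/hmmlearn are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from hmmlearn import hmm


def lift_to_3d(keypoints_2d, depth_map, fx, fy, cx, cy):
    """Back-project (u, v) pixel keypoints to camera-space (x, y, z)."""
    points_3d = []
    for u, v in keypoints_2d:
        z = depth_map[int(v), int(u)]   # depth value at the keypoint
        x = (u - cx) * z / fx           # pinhole-camera back-projection
        y = (v - cy) * z / fy
        points_3d.append((x, y, z))
    return np.asarray(points_3d)


def train_recognizer(sequences_by_class, n_symbols=32, n_states=5):
    """Fit one discrete HMM per gesture class over K-means pose symbols.

    sequences_by_class: {label: [seq]}, each seq an (n_frames, n_features)
    array of flattened skeleton coordinates.
    """
    all_frames = np.vstack(
        [s for seqs in sequences_by_class.values() for s in seqs])
    codebook = KMeans(n_clusters=n_symbols, n_init=10).fit(all_frames)
    models = {}
    for label, seqs in sequences_by_class.items():
        symbols = [codebook.predict(s).reshape(-1, 1) for s in seqs]
        model = hmm.CategoricalHMM(n_components=n_states, n_iter=50)
        model.fit(np.vstack(symbols), lengths=[len(s) for s in symbols])
        models[label] = model
    return codebook, models


def recognize(sequence, codebook, models):
    """Return the gesture label whose HMM gives the highest log-likelihood."""
    symbols = codebook.predict(sequence).reshape(-1, 1)
    return max(models, key=lambda label: models[label].score(symbols))
```

Training one HMM per gesture class and picking the class with the highest log-likelihood at recognition time is one standard way to realize the K-means + HMM engine the abstract mentions; the actual feature encoding and model topology used by the authors may differ.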



Acknowledgement

The research leading to these results has received funding from the EU Horizon 2020 Research and Innovation Programme under grant agreement No. 820767 (CoLLaboratE project) and No. 822336 (Mingei project). We also thank the Arçelik factory and the Museum Haus der Seidenkultur for providing us with the use cases, as well as the Foundation for Research and Technology – Hellas for contributing to the motion capturing.

Author information

Corresponding author

Correspondence to Pablo Vicente Moñivar.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Moñivar, P.V., Manitsaris, S., Glushkova, A. (2019). Towards a Professional Gesture Recognition with RGB-D from Smartphone. In: Tzovaras, D., Giakoumis, D., Vincze, M., Argyros, A. (eds) Computer Vision Systems. ICVS 2019. Lecture Notes in Computer Science, vol 11754. Springer, Cham. https://doi.org/10.1007/978-3-030-34995-0_22

  • DOI: https://doi.org/10.1007/978-3-030-34995-0_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-34994-3

  • Online ISBN: 978-3-030-34995-0

  • eBook Packages: Computer Science, Computer Science (R0)
