
Towards a Professional Gesture Recognition with RGB-D from Smartphone

  • Pablo Vicente Moñivar
  • Sotiris Manitsaris
  • Alina Glushkova
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11754)

Abstract

The goal of this work is to build the basis for a smartphone application that provides functionalities for recording human motion data, training machine learning algorithms, and recognizing professional gestures. First, we take advantage of the new mobile phone cameras, either infrared or stereoscopic, to record RGB-D data. Then, a bottom-up pose estimation algorithm based on deep learning extracts the 2D human skeleton, and the third dimension is recovered from the depth map. Finally, we use a gesture recognition engine based on K-means clustering and Hidden Markov Models (HMMs). The performance of the machine learning pipeline has been tested on professional gestures from a silk-weaving and a TV-assembly dataset.
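To make the three stages of the pipeline concrete, here is a minimal sketch of how they could fit together: back-projecting 2D keypoints to 3D with an aligned depth map, quantising poses with a K-means codebook, and scoring symbol sequences with one discrete HMM per gesture class. It assumes a pinhole camera model with intrinsics fx, fy, cx, cy, and the scikit-learn and hmmlearn libraries (hmmlearn ≥ 0.2.8 for CategoricalHMM); all names and parameter values are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of the pipeline described in the abstract, not the
# authors' code: depth-based 3D lifting + K-means codebook + per-class HMMs.
import numpy as np
from sklearn.cluster import KMeans
from hmmlearn import hmm


def lift_to_3d(keypoints_2d, depth_map, fx, fy, cx, cy):
    """Back-project 2D keypoints (pixel coords) to camera-space 3D using a
    depth map aligned to the RGB frame and pinhole intrinsics (assumed)."""
    points = []
    for (u, v) in keypoints_2d:
        z = depth_map[int(v), int(u)]   # depth (metres) at the joint pixel
        x = (u - cx) * z / fx           # pinhole back-projection
        y = (v - cy) * z / fy
        points.append((x, y, z))
    return np.asarray(points)


class GestureRecognizer:
    """K-means codebook over pose frames + one discrete HMM per gesture."""

    def __init__(self, n_clusters=16, n_states=5):
        self.codebook = KMeans(n_clusters=n_clusters, n_init=10)
        self.n_states = n_states
        self.models = {}

    def fit(self, sequences, labels):
        # sequences: list of (n_frames, n_features) pose arrays
        self.codebook.fit(np.vstack(sequences))
        for label in set(labels):
            seqs = [self.codebook.predict(s).reshape(-1, 1)
                    for s, l in zip(sequences, labels) if l == label]
            model = hmm.CategoricalHMM(n_components=self.n_states, n_iter=50)
            model.fit(np.vstack(seqs), lengths=[len(s) for s in seqs])
            self.models[label] = model

    def predict(self, sequence):
        # Classify by the HMM with the highest sequence log-likelihood.
        symbols = self.codebook.predict(sequence).reshape(-1, 1)
        return max(self.models,
                   key=lambda lbl: self.models[lbl].score(symbols))
```

At recognition time, each incoming pose sequence is quantised with the shared codebook and assigned to the gesture class whose HMM gives it the highest log-likelihood; the cluster and state counts shown here are placeholders that would be tuned per dataset.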

Keywords

Pose estimation · Depth map · Gesture recognition · Hidden Markov Models · Smartphone

Acknowledgement

The research leading to these results has received funding from the EU Horizon 2020 Research and Innovation Programme under grant agreements No. 820767 (CoLLaboratE project) and No. 822336 (Mingei project). We also thank the Arçelik factory and the Museum Haus der Seidenkultur for providing us with the use cases, as well as the Foundation for Research and Technology – Hellas for contributing to the motion capture.


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Pablo Vicente Moñivar (1)
  • Sotiris Manitsaris (1)
  • Alina Glushkova (1)

  1. Center for Robotics, MINES ParisTech, PSL Research University, Paris, France
