Abstract
The goal of this work is to build the basis for a smartphone application that provides functionalities for recording human motion data, training machine learning algorithms, and recognizing professional gestures. First, we take advantage of the new mobile phone cameras, either infrared or stereoscopic, to record RGB-D data. Then, a bottom-up pose estimation algorithm based on Deep Learning extracts the 2D human skeleton, and the third dimension is recovered from the depth data. Finally, we use a gesture recognition engine based on K-means clustering and Hidden Markov Models (HMMs). The performance of the machine learning pipeline has been tested on professional gestures using a silk-weaving dataset and a TV-assembly dataset.
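The pipeline described above has two steps that a short sketch can make concrete. First, lifting the 2D skeleton to 3D: given camera intrinsics and a depth map aligned with the RGB frame, each detected keypoint can be back-projected with the pinhole camera model. The function below is a minimal sketch under those assumptions, not the authors' code; the intrinsics fx, fy, cx, cy and the metric depth map stand in for whatever the phone's calibration actually provides.

```python
import numpy as np

def lift_keypoints_to_3d(keypoints_2d, depth_map, fx, fy, cx, cy):
    """Back-project 2D pixel keypoints (u, v) to camera-space 3D points.

    keypoints_2d: (N, 2) pixel coordinates from the 2D pose estimator.
    depth_map:    (H, W) depth in metres, assumed aligned with the RGB frame.
    fx, fy, cx, cy: pinhole intrinsics from the phone's camera calibration.
    """
    h, w = depth_map.shape
    points_3d = np.zeros((len(keypoints_2d), 3))
    for i, (u, v) in enumerate(keypoints_2d):
        u = int(np.clip(round(u), 0, w - 1))
        v = int(np.clip(round(v), 0, h - 1))
        z = depth_map[v, u]                  # depth at the keypoint pixel
        points_3d[i] = ((u - cx) * z / fx,   # X: horizontal offset scaled by depth
                        (v - cy) * z / fy,   # Y: vertical offset scaled by depth
                        z)                   # Z: the depth itself
    return points_3d
```

Second, the K-means + HMM recognition engine. A common formulation, and the one the abstract's wording suggests, discretizes each skeleton frame into a posture symbol with K-means, trains one discrete-observation HMM per gesture class, and labels a new sequence by maximum log-likelihood. The sketch below uses scikit-learn and hmmlearn (version 0.3+, for CategoricalHMM) as assumed stand-ins; the paper's actual features and HMM topology may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from hmmlearn.hmm import CategoricalHMM

def train_recognizer(train_sequences, n_symbols=16, n_states=5):
    """train_sequences: {gesture_label: [(T_i, D) arrays of skeleton features]}."""
    # Fit one shared K-means codebook over every frame of every training sequence.
    all_frames = np.vstack([s for seqs in train_sequences.values() for s in seqs])
    codebook = KMeans(n_clusters=n_symbols, n_init=10).fit(all_frames)

    # Train one discrete-observation HMM per gesture class on its symbol sequences.
    models = {}
    for label, seqs in train_sequences.items():
        symbols = [codebook.predict(s).reshape(-1, 1) for s in seqs]
        hmm = CategoricalHMM(n_components=n_states, n_iter=50)
        hmm.fit(np.vstack(symbols), lengths=[len(s) for s in symbols])
        models[label] = hmm
    return codebook, models

def classify(sequence, codebook, models):
    """Label an unseen (T, D) sequence by the HMM with the highest log-likelihood."""
    obs = codebook.predict(sequence).reshape(-1, 1)
    return max(models, key=lambda label: models[label].score(obs))
```

One practical property of per-class HMMs is that the vocabulary extends cheaply: adding a new gesture class only requires training one more model, without retraining the others.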
Acknowledgement
The research leading to these results has received funding from the EU Horizon 2020 Research and Innovation Programme under grant agreements No. 820767 (CoLLaboratE project) and No. 822336 (Mingei project). We also thank the Arçelik factory and the Museum Haus der Seidenkultur for providing us with the use cases, as well as the Foundation for Research and Technology – Hellas for contributing to the motion capturing.
Cite this paper
Moñivar, P.V., Manitsaris, S., Glushkova, A.: Towards a Professional Gesture Recognition with RGB-D from Smartphone. In: Tzovaras, D., Giakoumis, D., Vincze, M., Argyros, A. (eds.) Computer Vision Systems. ICVS 2019. Lecture Notes in Computer Science, vol. 11754. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-34995-0_22