Kinect vs. Low-cost Inertial Sensing for Gesture Recognition
In this paper, we investigate efficient recognition of human gestures and movements from multimedia and multimodal data, including the Microsoft Kinect and translational and rotational acceleration and velocity from wearable inertial sensors. We first present a system that automatically classifies a large range of activities (17 different gestures) using a random forest classifier. Our system achieves near real-time recognition by selecting the sensors that contribute most to a particular task. Features extracted from the multimodal sensor data were used to train and evaluate a customized classifier. This technique successfully classifies a variety of gestures with up to 91% overall accuracy on a publicly available data set. Second, we investigate a wide range of motion capture modalities and compare their gesture recognition accuracy under our proposed approach. We conclude that gesture recognition can be performed effectively by an approach that overcomes many of the limitations associated with the Kinect, potentially paving the way for low-cost gesture recognition in unconstrained environments.
Keywords: Gesture recognition · Decision tree · Random forest · Inertial sensors · Kinect