
Kinect vs. Low-cost Inertial Sensing for Gesture Recognition

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8325)

Abstract

In this paper, we investigate efficient recognition of human gestures and movements from multimedia and multimodal data, including the Microsoft Kinect and the translational and rotational acceleration and velocity reported by wearable inertial sensors. We first present a system that automatically classifies a large range of activities (17 different gestures) using a random forest of decision trees. Our system achieves near real-time recognition by selecting the sensors that contribute most to a particular task. Features extracted from the multimodal sensor data were used to train and evaluate a customised classifier, which successfully classifies the gestures with up to 91% overall accuracy on a publicly available data set. Second, we investigate a wide range of motion capture modalities and compare their gesture recognition accuracy under our proposed approach. We conclude that gesture recognition can be performed effectively by an approach that overcomes many of the limitations associated with the Kinect, potentially paving the way for low-cost gesture recognition in unconstrained environments.
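The abstract names the key pipeline stages: extract statistical features from windows of multimodal sensor data, then train a random forest over the 17 gesture classes. The sketch below illustrates that pipeline with scikit-learn; the window length, channel layout, and feature set (per-channel mean, standard deviation, minimum, maximum) are illustrative assumptions rather than the authors' exact configuration, and the data here is synthetic.

```python
# A minimal sketch of the gesture-classification pipeline described in the
# abstract: windowed statistical features from multimodal sensor streams fed
# to a random forest. Feature choices, window length, and channel layout are
# assumptions for illustration, not the paper's exact setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def extract_features(window: np.ndarray) -> np.ndarray:
    """Summarise one window (samples x channels) of sensor data.

    Channels might combine Kinect joint coordinates with inertial
    acceleration/angular-velocity axes; the statistics below are a
    common, hypothetical choice for such classifiers.
    """
    return np.concatenate([
        window.mean(axis=0),   # per-channel mean
        window.std(axis=0),    # per-channel standard deviation
        window.min(axis=0),    # per-channel minimum
        window.max(axis=0),    # per-channel maximum
    ])

# Synthetic stand-in data: 500 gesture windows of 128 samples over
# 9 channels (e.g. tri-axial accelerometer + gyroscope + one Kinect joint).
rng = np.random.default_rng(0)
windows = rng.normal(size=(500, 128, 9))
labels = rng.integers(0, 17, size=500)   # 17 gesture classes, as in the paper

X = np.array([extract_features(w) for w in windows])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

A fitted random forest also exposes per-feature importances (feature_importances_ in scikit-learn), which is one plausible mechanism for the sensor-selection step the abstract describes: channels whose features carry little importance can be dropped to speed up recognition.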

Keywords

Gesture recognition · Decision tree · Random forest · Inertial sensors · Kinect



Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. INSIGHT Centre for Data Analytics, Dublin City University, Ireland
  2. Applied Sports Performance Research, School of Health and Human Performance, Dublin City University, Ireland
