Automatic Segmentation of Therapeutic Exercises Motion Data with a Predictive Event Approach

  • S. Spasojevic
  • R. Ventura
  • J. Santos-Victor
  • V. Potkonjak
  • A. Rodić
Conference paper
Part of the Mechanisms and Machine Science book series (Mechanisms and Machine Science, volume 38)

Abstract

We propose a novel approach for detecting events in data sequences, based on a predictive method using Gaussian processes. We apply this approach to detect relevant events in therapeutic exercise sequences; the detected events, combined with a suitable classifier, can be used directly for gesture segmentation. While an exercise is performed, motion data, namely the 3D positions of characteristic skeleton joints in each frame, are acquired with an RGBD camera. The trajectories of the joints relevant to upper-body therapeutic exercises for Parkinson's patients are modelled as Gaussian processes. Our event detection procedure, which uses an adaptive Gaussian process predictor, has been shown to outperform a first-derivative-based approach.

Keywords

Gesture segmentation · Predictive event approach · Gaussian processes · Physical rehabilitation · RGBD camera

Acknowledgments

This work was partially funded by the bilateral project COLBAR between Instituto Superior Técnico, Lisbon, Portugal, and the Mihailo Pupin Institute, Belgrade, Serbia; by FCT [PEst-OE/EEI/LA0009/2013]; by grants III44008 and TR35003; and by SNSF IP SCOPES IZ74Z0_137361/1.


Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • S. Spasojevic (1, 2, 3)
  • R. Ventura (3)
  • J. Santos-Victor (3)
  • V. Potkonjak (1)
  • A. Rodić (2)
  1. Faculty of Electrical Engineering, University of Belgrade, Belgrade, Serbia
  2. Mihailo Pupin Institute, University of Belgrade, Belgrade, Serbia
  3. Institute for Systems and Robotics, Instituto Superior Técnico, University of Lisbon, Lisbon, Portugal
