Detecting Missed and Anomalous Action Segments Using Approximate String Matching Algorithm

Conference paper
Part of the Communications in Computer and Information Science book series (CCIS, volume 841)


As amateur performers, we often forget action steps or introduce unwanted movements during daily exercise routines, dance performances, and similar activities. To improve proficiency, it is important to receive feedback on where a performance went wrong. In this paper, we propose a framework for analyzing a performed action sequence and reporting the action segments that were missed or performed anomalously. The framework compares the performed sequence with a standard action sequence and issues a notification when misalignments occur. We propose an exemplar-based Approximate String Matching (ASM) technique for detecting such anomalous and missing segments, and compare its results with those of the conventional Dynamic Time Warping (DTW) algorithm for sequence alignment. We observe that conventional DTW fails to align action sequences containing missed or anomalous segments because of its boundary-condition constraints. Both techniques are evaluated on a complex, aperiodic human-action dataset of warm-up exercise sequences that we built from correct and incorrect executions by multiple people. The proposed ASM technique shows promising alignment and missed/anomalous notification results on this dataset.
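The core idea of the comparison can be illustrated with a minimal sketch (not the authors' implementation): each detected action segment is encoded as a symbol, the performed sequence is aligned to the reference sequence with edit-distance dynamic programming, and deletions and insertions in the optimal alignment are reported as missed and anomalous segments respectively. The `align` function and the single-letter segment labels below are hypothetical placeholders for illustration only.

```python
def align(reference, performed):
    """Align two symbol sequences by edit distance; return the reference
    symbols that were missed and the performed symbols that were anomalous."""
    m, n = len(reference), len(performed)
    # dp[i][j] = edit distance between reference[:i] and performed[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == performed[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion: segment missed
                           dp[i][j - 1] + 1,          # insertion: anomalous segment
                           dp[i - 1][j - 1] + cost)   # match / substitution

    # Backtrace the optimal alignment and label each mismatch.
    missed, anomalous = [], []
    i, j = m, n
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and reference[i - 1] == performed[j - 1]
                and dp[i][j] == dp[i - 1][j - 1]):
            i, j = i - 1, j - 1                       # segments match
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            missed.append(reference[i - 1])           # wrong segment performed
            anomalous.append(performed[j - 1])        # in place of the expected one
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            missed.append(reference[i - 1])           # reference segment skipped
            i -= 1
        else:
            anomalous.append(performed[j - 1])        # extra, unexpected segment
            j -= 1
    return missed[::-1], anomalous[::-1]

# Example: segment "C" of the reference routine was skipped by the performer.
missed, anomalous = align("ABCDE", "ABDE")
```

In contrast to DTW, which must match both sequence endpoints and every intermediate sample, the edit-distance formulation explicitly models deletions and insertions, which is what makes missed and anomalous segments directly recoverable from the alignment.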


Keywords: Missed action · Anomalous action · Dynamic Time Warping · Approximate String Matching



Acknowledgements

We thank Divya Srivastava, Arnav Chopra, Aayush Sarda, Kushagra Surana and Rohil Surana for contributing towards creating the dataset of warm-up exercise videos and for their constant support throughout this work.



Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Indian Institute of Technology Jodhpur, Jodhpur, India
