Human activity recognition via optical flow: decomposing activities into basic actions

  • Ammar Ladjailia
  • Imed Bouchrika
  • Hayet Farida Merouani
  • Nouzha Harrati
  • Zohra Mahfouf
IAPR-MedPRAI

Abstract

Recognizing human activities using automated methods has recently emerged as a pivotal research theme for security-related applications. In this paper, an optical flow descriptor is proposed for the recognition of human actions using only features derived from motion. The signature for a human action is composed as a histogram of kinematic features covering both local and global traits. Experiments on the Weizmann and UCF101 databases confirmed the potential of the proposed approach, with classification rates of 98.76% and 70%, respectively, for distinguishing between different human actions. For comparative and performance analysis, several types of classifiers, including k-NN, decision trees, SVM and deep learning, are applied to the proposed descriptors. Further analysis assesses the proposed descriptors under different resolutions and frame rates. The obtained results align with early psychological studies reporting that human motion alone is adequate for the perception of human activities.
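The abstract describes the action signature as a histogram of motion-derived features. As a rough illustration of that idea (not the authors' exact descriptor), the sketch below builds a generic HOOF-style signature from a dense optical flow field: each per-pixel flow vector votes into an orientation bin, weighted by its magnitude, and the histogram is normalised. The function name, bin count, and the assumption that a precomputed `(H, W, 2)` flow array is available are all illustrative.

```python
import numpy as np

def flow_histogram(flow, n_bins=8):
    """Build an orientation histogram from a dense optical flow field.

    flow: (H, W, 2) array of per-pixel (dx, dy) displacements.
    Each vector votes into one of n_bins orientation bins, weighted by
    its magnitude; the result is L1-normalised so the signature does not
    depend on the overall amount of motion.
    """
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()
    mag = np.hypot(dx, dy)                          # vector magnitudes
    ang = np.arctan2(dy, dx) % (2 * np.pi)          # orientations in [0, 2*pi)
    bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins, weights=mag, minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist

# Toy flow field: every pixel moves one unit to the right, so all the
# histogram mass falls into the first orientation bin.
flow = np.zeros((4, 4, 2))
flow[..., 0] = 1.0
h = flow_histogram(flow)
```

In a full pipeline such per-frame histograms would be accumulated over a video clip (and typically concatenated with global motion statistics) before being fed to a classifier such as k-NN or an SVM.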

Keywords

Action recognition · Motion descriptor · Optical flow · Decomposing activities

Notes

Compliance with ethical standards

Conflict of interest

The authors declare that they have no conflict of interest.


Copyright information

© Springer-Verlag London Ltd., part of Springer Nature 2019

Authors and Affiliations

  • Ammar Ladjailia (1)
  • Imed Bouchrika (2)
  • Hayet Farida Merouani (1)
  • Nouzha Harrati (2)
  • Zohra Mahfouf (2)
  1. Department of Computer Science, University of Annaba, Annaba, Algeria
  2. Faculty of Science and Technology, University of Souk Ahras, Souk Ahras, Algeria
