Human Interaction Recognition Using Improved Spatio-Temporal Features

Conference paper
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 43)

Abstract

Human Interaction Recognition (HIR) plays a major role in building intelligent video surveillance systems. This paper proposes a new interaction recognition mechanism that identifies the activity or interaction of a person using improved spatio-temporal feature extraction techniques that are robust against occlusion. To identify the interaction between two persons, tracking is a necessary first step to follow each person's movement. After tracking, local spatio-temporal interest points are detected using a corner detector, and the motion of each corner point is analysed using optical flow. A feature descriptor captures the motion information and the locations of the body parts where motion is exhibited within the blobs. Actions are predicted from the pose information together with the temporal information from the optical flow. A hierarchical SVM (H-SVM) recognizes the interaction, and occlusion between blobs is determined from the intersection of the regions lying in their paths. The performance of this system has been tested over different datasets, and the results are promising.
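The corner-detection and optical-flow step described above can be illustrated with a minimal NumPy sketch: a Harris corner response to pick interest points, and a single-point Lucas-Kanade solve for the motion at each corner. This is an illustrative assumption about the kind of pipeline meant, not the authors' implementation; the function names, window sizes, and thresholds are hypothetical choices.

```python
import numpy as np

def harris_corners(img, k=0.04, thresh=0.1):
    """Harris corner response; returns (row, col) points above thresh * max.
    A toy stand-in for the paper's corner detector (parameters are assumptions)."""
    Iy, Ix = np.gradient(img.astype(float))      # image gradients (rows, cols)
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box(a):
        # crude 3x3 box filter used as the Harris window
        out = np.zeros_like(a)
        out[1:-1, 1:-1] = sum(a[1 + dy:a.shape[0] - 1 + dy,
                                1 + dx:a.shape[1] - 1 + dx]
                              for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        return out

    Sxx, Syy, Sxy = box(Ixx), box(Iyy), box(Ixy)
    R = Sxx * Syy - Sxy ** 2 - k * (Sxx + Syy) ** 2   # Harris response
    ys, xs = np.where(R > thresh * R.max())
    return list(zip(ys, xs))

def lucas_kanade(prev, curr, y, x, win=2):
    """Estimate (dy, dx) at one corner by solving the Lucas-Kanade
    normal equations over a (2*win+1)^2 window."""
    prev = prev.astype(float)
    curr = curr.astype(float)
    Iy, Ix = np.gradient(prev)
    It = curr - prev                                  # temporal derivative
    sl = (slice(y - win, y + win + 1), slice(x - win, x + win + 1))
    A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
    b = -It[sl].ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)  # least-squares flow
    return dy, dx

# Synthetic demo: a bright square shifted one pixel right and down.
prev = np.zeros((20, 20)); prev[5:12, 5:12] = 1.0
curr = np.zeros((20, 20)); curr[6:13, 6:13] = 1.0
corners = harris_corners(prev)        # corners of the square respond
dy, dx = lucas_kanade(prev, curr, 8, 5)
```

On this synthetic pair, the left edge of the square yields a positive horizontal flow `dx`, matching the rightward shift; a real system would of course run this on tracked person blobs rather than synthetic squares.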
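The occlusion test described above (intersection of regions lying in the blobs' paths) can be sketched as a simple axis-aligned bounding-box overlap check between tracked blobs. The representation and function names below are assumptions for illustration, not the paper's exact method.

```python
def boxes_intersect(a, b):
    """True if two axis-aligned boxes (x, y, w, h) overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def occlusion_state(blobs):
    """Flag every pair of tracked blob boxes that currently intersect,
    i.e. candidates for occlusion handling."""
    flags = []
    for i in range(len(blobs)):
        for j in range(i + 1, len(blobs)):
            if boxes_intersect(blobs[i], blobs[j]):
                flags.append((i, j))
    return flags
```

For example, two persons whose boxes `(0, 0, 10, 10)` and `(5, 5, 10, 10)` overlap would be flagged as an occluding pair, while a distant third blob would not.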

Keywords

Video surveillance · Blob tracking · Spatio-temporal features · Interaction recognition


Copyright information

© Springer India 2016

Authors and Affiliations

  1. Department of Information Science and Technology, College of Engineering, Anna University, Chennai, India
