Automatic Multi-view Action Recognition with Robust Features

  • Kuang-Pen Chou
  • Mukesh Prasad
  • Dong-Lin Li
  • Neha Bharill
  • Yu-Feng Lin
  • Farookh Hussain
  • Chin-Teng Lin
  • Wen-Chieh Lin
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10636)

Abstract

This paper proposes view-invariant features to address multi-view action recognition, in which different actions are performed and observed from different views. The view-invariant features are holistic features extracted from clouds of interest points at varying temporal scales, modeled to explicitly exploit the global spatial and temporal distribution of the interest points. These features are highly discriminative and remain robust for recognizing actions as the view changes. The paper also proposes a mechanism for real-world application that follows the actions of a person through an image sequence and separates those actions according to the given training data. With this mechanism, the beginning and end of an action sequence are labeled automatically, without manual setting. The approach does not require re-training when the scenario changes, so the trained database can be applied in a wide variety of environments. Experimental results show that the proposed approach outperforms existing methods on the KTH and WEIZMANN datasets.
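To make the idea of holistic cloud features concrete, the sketch below shows one way to pool global spatial statistics of interest-point clouds over several temporal window sizes. This is a minimal illustration, not the authors' implementation: the window sizes, the five statistics per cloud, and the function names are assumptions chosen for the example; in practice the resulting descriptor would be fed to a classifier trained on the labeled action sequences.

    import numpy as np

    def cloud_statistics(xy):
        """Global spatial statistics of one cloud of interest points, shape (N, 2).
        Illustrative choice of statistics; not the paper's exact feature set."""
        centroid = xy.mean(axis=0)
        spread = xy.std(axis=0)
        return np.array([
            len(xy),                                       # cloud density
            spread[0], spread[1],                          # horizontal / vertical spread
            spread[0] / (spread[1] + 1e-6),                # aspect ratio of the cloud
            np.linalg.norm(xy - centroid, axis=1).mean(),  # mean radial scatter
        ])

    def holistic_features(points, temporal_scales=(8, 16, 32)):
        """Pool cloud statistics over temporal windows of several sizes.

        `points` is an (N, 3) array of (x, y, t) interest-point coordinates for
        one clip; the returned vector concatenates the window-averaged statistics
        of each temporal scale (window sizes here are assumed, in frames).
        """
        points = np.asarray(points, dtype=float)
        t = points[:, 2]
        descriptor = []
        for win in temporal_scales:
            per_window = []
            for start in np.arange(t.min(), t.max() + 1, win):
                cloud = points[(t >= start) & (t < start + win)]
                if len(cloud):
                    per_window.append(cloud_statistics(cloud[:, :2]))
            descriptor.append(np.mean(per_window, axis=0) if per_window else np.zeros(5))
        return np.concatenate(descriptor)

Because the statistics describe the distribution of the whole cloud rather than individual point positions, descriptors of this kind are comparatively insensitive to moderate viewpoint changes, which is the property the abstract refers to as view invariance.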

Keywords

Action recognition · Feature extraction · Background subtraction · Classification · Tracking


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Kuang-Pen Chou (1)
  • Mukesh Prasad (2)
  • Dong-Lin Li (4)
  • Neha Bharill (3)
  • Yu-Feng Lin (1)
  • Farookh Hussain (2)
  • Chin-Teng Lin (2)
  • Wen-Chieh Lin (1)
  1. Department of Computer Science, National Chiao Tung University, Hsinchu, Taiwan
  2. Centre for Artificial Intelligence, School of Software, FEIT, University of Technology Sydney, Sydney, Australia
  3. Department of Computer Science and Engineering, Birla Institute of Technology and Science, Pilani, Hyderabad, India
  4. Department of Electrical Engineering, National Chiao Tung University, Hsinchu, Taiwan