International Journal of Computer Vision, Volume 119, Issue 3, pp 219–238

A Robust and Efficient Video Representation for Action Recognition

  • Heng Wang
  • Dan Oneata
  • Jakob Verbeek
  • Cordelia Schmid

Abstract

This paper introduces a state-of-the-art video representation and applies it to efficient action recognition and detection. We first propose to improve the popular dense trajectory features by explicit camera motion estimation. More specifically, we extract feature point matches between frames using SURF descriptors and dense optical flow. The matches are used to estimate a homography with RANSAC. To improve the robustness of homography estimation, a human detector is employed to remove outlier matches originating from the human body, since human motion is not constrained by the camera. Trajectories consistent with the homography are considered to be due to camera motion and are removed. We also use the homography to cancel out camera motion from the optical flow, which yields a significant improvement in the motion-based HOF and MBH descriptors. We further explore the recent Fisher vector as an alternative to the standard bag-of-words (BOW) histogram for feature encoding, and consider different ways to include spatial layout information in these encodings. We present a large and varied set of evaluations, considering (i) classification of short basic actions on six datasets, (ii) localization of such actions in feature-length movies, and (iii) large-scale recognition of complex events. We find that our improved trajectory features significantly outperform previous dense trajectories, and that Fisher vectors are superior to BOW encodings for video recognition tasks. In all three tasks, we show substantial improvements over the state of the art.
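
The camera motion estimation step (SURF matches pooled with dense optical-flow matches, human regions masked out, homography fit with RANSAC) can be illustrated with a short OpenCV sketch. This is not the authors' implementation: the function name, the Hessian threshold, the ratio-test constant, the grid stride, and the RANSAC reprojection threshold are illustrative assumptions, SURF requires the opencv-contrib build, and the human-detector mask is taken as given.

    # Hedged sketch: SURF + dense-flow matches -> RANSAC homography.
    # Parameter values are illustrative; the human mask is assumed given.
    import cv2
    import numpy as np

    def estimate_camera_homography(prev_gray, curr_gray, human_mask=None):
        # SURF keypoint matches between consecutive frames (opencv-contrib).
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
        kp1, des1 = surf.detectAndCompute(prev_gray, None)
        kp2, des2 = surf.detectAndCompute(curr_gray, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
        # Lowe's ratio test keeps only distinctive matches.
        good = [m[0] for m in matches
                if len(m) == 2 and m[0].distance < 0.75 * m[1].distance]
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 2)

        # Complement SURF with dense optical-flow correspondences on a grid.
        flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        ys, xs = np.mgrid[8:prev_gray.shape[0]:16, 8:prev_gray.shape[1]:16]
        grid = np.stack([xs.ravel(), ys.ravel()], 1).astype(np.float32)
        src = np.vstack([src, grid])
        dst = np.vstack([dst, grid + flow[ys.ravel(), xs.ravel()]])

        # Discard matches on detected humans, whose motion is independent
        # of the camera, to make the homography estimate more robust.
        if human_mask is not None:
            keep = human_mask[src[:, 1].astype(int), src[:, 0].astype(int)] == 0
            src, dst = src[keep], dst[keep]

        # RANSAC fit; inlier matches are attributed to camera motion.
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H, inliers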
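
Given the homography, camera motion can be cancelled from the optical flow by warping the second frame into the first frame's coordinates and recomputing the flow, so that the residual flow reflects object motion only; this residual flow is what feeds the improved HOF and MBH descriptors. Again a hedged sketch, with illustrative Farnebäck parameters:

    # Hedged sketch: remove camera motion from the flow by stabilizing
    # frame t+1 with the estimated homography H (mapping frame t to t+1).
    import cv2
    import numpy as np

    def warped_flow(prev_gray, curr_gray, H):
        h, w = prev_gray.shape
        # Align the current frame to the previous one; the motion explained
        # by H (i.e. the camera) is removed by the inverse warp.
        stabilized = cv2.warpPerspective(curr_gray, np.linalg.inv(H), (w, h))
        # Residual flow between the aligned frames = object motion.
        return cv2.calcOpticalFlowFarneback(prev_gray, stabilized, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)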
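
For the encoding comparison, the Fisher vector of a video aggregates its local descriptors as gradients with respect to the means and variances of a diagonal-covariance Gaussian mixture model, followed by power and L2 normalization. Below is a minimal sketch of this standard formulation; the use of a scikit-learn GMM and the component count in the usage note are choices made here for illustration, not details from the paper.

    # Hedged Fisher-vector sketch: GMM gradients w.r.t. means and
    # variances, then signed-square-root and L2 normalization.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fisher_vector(X, gmm):
        # X: (N, D) local descriptors; gmm: fitted diag-covariance GMM.
        N = X.shape[0]
        gamma = gmm.predict_proba(X)                   # (N, K) posteriors
        w, mu = gmm.weights_, gmm.means_               # (K,), (K, D)
        sigma = np.sqrt(gmm.covariances_)              # (K, D) std devs
        parts = []
        for k in range(len(w)):
            diff = (X - mu[k]) / sigma[k]              # standardized residuals
            parts.append((gamma[:, k:k+1] * diff).sum(0)
                         / (N * np.sqrt(w[k])))        # gradient w.r.t. mean
            parts.append((gamma[:, k:k+1] * (diff**2 - 1)).sum(0)
                         / (N * np.sqrt(2 * w[k])))    # gradient w.r.t. var
        fv = np.concatenate(parts)
        fv = np.sign(fv) * np.sqrt(np.abs(fv))         # power normalization
        return fv / (np.linalg.norm(fv) + 1e-12)       # L2 normalization

    # Usage (illustrative sizes): fit the GMM on training descriptors, then
    # encode each video's descriptor set into one fixed-length vector.
    #   gmm = GaussianMixture(256, covariance_type="diag").fit(train_X)
    #   v = fisher_vector(video_descriptors, gmm)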

Keywords

Action recognition · Action detection · Multimedia event detection

Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  • Heng Wang (1)
  • Dan Oneata (2)
  • Jakob Verbeek (2)
  • Cordelia Schmid (2)

  1. Amazon Research, Seattle, USA
  2. INRIA, Grenoble, France
