Recurrent Residual Learning for Action Recognition

  • Ahsan Iqbal
  • Alexander Richard
  • Hilde Kuehne
  • Juergen Gall
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10496)

Abstract

Action recognition is a fundamental problem in computer vision with many potential applications such as video surveillance, human-computer interaction, and robot learning. Given pre-segmented videos, the task is to recognize the actions occurring within them. Historically, hand-crafted video features were used to address the task of action recognition. With the success of deep ConvNets as an image analysis method, many extensions of standard ConvNets have been proposed to process variable-length video data. In this work, we propose a novel recurrent ConvNet architecture, called recurrent residual networks, to address the task of action recognition. The approach extends ResNet, a state-of-the-art model for image classification. While the original formulation of ResNet aims at learning spatial residuals in its layers, we extend the approach by introducing recurrent connections that allow the network to learn a spatio-temporal residual. In contrast to fully recurrent networks, our temporal connections only allow a limited range of preceding frames to contribute to the output for the current frame, enabling efficient training and inference as well as limiting the temporal context to a reasonable local range around each frame. On a large-scale action recognition dataset, we show that our model improves over both the standard ResNet architecture and a ResNet extended by a fully recurrent layer.
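To illustrate the idea, below is a minimal PyTorch sketch of a residual block with temporally limited recurrent connections. The class name RecurrentResidualBlock, the use of 1x1 convolutions for the temporal connections, and the window size are illustrative assumptions made for this sketch; the paper's exact layer placement and weight sharing may differ.

    # Minimal sketch of a recurrent residual block with a limited temporal
    # context (assumed design, not the paper's exact architecture).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RecurrentResidualBlock(nn.Module):
        """Residual block whose residual branch also sees preceding frames.

        Input:  x of shape (T, C, H, W), per-frame feature maps of one video.
        Output: same shape; frame t receives a spatial residual computed from
                frame t plus temporal residuals from frames t-1, ..., t-K.
        """

        def __init__(self, channels: int, window: int = 2):
            super().__init__()
            # Standard two-layer spatial residual branch, as in ResNet.
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)
            # One 1x1 convolution per temporal offset 1..K (assumption: the
            # temporal residual is a learned linear map of earlier features).
            self.temporal = nn.ModuleList(
                [nn.Conv2d(channels, channels, 1, bias=False) for _ in range(window)]
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Spatial residual, computed independently per frame.
            r = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))
            for k, conv in enumerate(self.temporal, start=1):
                # Frame t additionally receives a residual computed from frame
                # t-k; the first k frames, which lack a predecessor at that
                # offset, see zero-padded features and receive no contribution.
                shifted = F.pad(x, (0, 0, 0, 0, 0, 0, k, 0))[:-k]
                r = r + conv(shifted)
            return F.relu(x + r)

    if __name__ == "__main__":
        block = RecurrentResidualBlock(channels=64, window=2)
        frames = torch.randn(16, 64, 28, 28)  # 16 consecutive frames
        out = block(frames)
        print(out.shape)  # torch.Size([16, 64, 28, 28])

Because each output frame depends only on a fixed window of predecessors rather than on the entire history, all frames can be processed in parallel; this bounded temporal context is what makes training and inference efficient compared to a fully recurrent layer.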

Notes

Acknowledgement

The authors have been financially supported by the DFG projects KU 3396/2-1 and GA 1927/4-1 and the ERC Starting Grant ARCA (677650). Further, this work was supported by the AWS Cloud Credits for Research program.


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Ahsan Iqbal (1)
  • Alexander Richard (1)
  • Hilde Kuehne (1)
  • Juergen Gall (1)

  1. University of Bonn, Bonn, Germany
