Human Action Recognition with 3D Convolution Skip-Connections and RNNs

  • Jiarong Song
  • Zhong Yang
  • Qiuyan Zhang
  • Ting Fang
  • Guoxiong Hu
  • Jiaming Han
  • Cong Chen
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11301)

Abstract

This paper proposes a novel network architecture for human action recognition. First, we employ a pre-trained spatio-temporal feature extractor to extract spatio-temporal features from videos. Then, spatio-temporal features from several levels are concatenated via 3D convolution skip-connections, and a batch normalization layer is applied to normalize the concatenated features. Subsequently, we feed the normalized features into an RNN architecture to model temporal dependencies, which enables our network to handle long-term information. In addition, we divide each video into three parts, each of which is split into non-overlapping 16-frame clips, as a form of data augmentation. Finally, the proposed method is evaluated on the UCF101 dataset and compared with existing state-of-the-art methods. Experimental results demonstrate that our method achieves the highest recognition accuracy among the compared methods.
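
The pipeline described above can be summarized in a short sketch. The following is a minimal, illustrative PyTorch rendering, not the authors' implementation: the two small 3D-conv stages stand in for levels of a pre-trained extractor such as C3D, all channel sizes and the choice of an LSTM as the RNN are assumptions, and video_to_clips approximates the three-part, non-overlapping 16-frame clip splitting described for data augmentation.

```python
import torch
import torch.nn as nn

def video_to_clips(video, clip_len=16, num_parts=3):
    """Split a video (frames, C, H, W) into num_parts segments, then cut each
    segment into non-overlapping clip_len-frame clips (data augmentation)."""
    clips = []
    part_len = video.shape[0] // num_parts
    for p in range(num_parts):
        part = video[p * part_len:(p + 1) * part_len]
        for start in range(0, part.shape[0] - clip_len + 1, clip_len):
            # (clip_len, C, H, W) -> (C, clip_len, H, W) as expected by Conv3d
            clips.append(part[start:start + clip_len].permute(1, 0, 2, 3))
    return torch.stack(clips)

class SkipConnected3DNet(nn.Module):
    """Two 3D-conv stages standing in for levels of a pre-trained extractor,
    fused by a 3D-conv skip-connection, batch-normalized, then fed to an RNN."""
    def __init__(self, num_classes=101, hidden=256):
        super().__init__()
        self.stage1 = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),          # keep all 16 frames, halve H and W
        )
        self.stage2 = nn.Sequential(
            nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),                  # halve time, H, and W
        )
        # 3D-conv skip-connection: bring stage-1 features to stage-2 resolution
        # so the two levels can be concatenated along the channel axis.
        self.skip = nn.Conv3d(64, 64, kernel_size=3, stride=2, padding=1)
        self.bn = nn.BatchNorm3d(128 + 64)    # normalize concatenated features
        self.pool = nn.AdaptiveAvgPool3d((8, 1, 1))  # one vector per time step
        self.rnn = nn.LSTM(128 + 64, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, num_classes)

    def forward(self, clip):                  # clip: (batch, 3, 16, H, W)
        low = self.stage1(clip)
        fused = torch.cat([self.stage2(low), self.skip(low)], dim=1)
        feats = self.pool(self.bn(fused)).flatten(2)      # (batch, 192, 8)
        _, (h, _) = self.rnn(feats.transpose(1, 2))       # sequence over time
        return self.fc(h[-1])                             # per-clip class scores

clips = video_to_clips(torch.randn(96, 3, 112, 112))      # dummy 96-frame video
scores = SkipConnected3DNet()(clips)                      # (num_clips, 101)
```

In this sketch, per-clip scores would still need to be aggregated (for example, averaged) across a video's clips to produce a video-level prediction; the paper's exact fusion scheme is not specified in the abstract.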

Keywords

Human action recognition · 3D CNNs · 3D convolution skip-connections · RNNs · Feature concatenation

Notes

Acknowledgements

This work was supported in part by the National Science Foundation of China (No. 61473144), the Aeronautical Science Foundation of China (Key Laboratory) (No. 20162852031), the Jiangsu Postdoctoral Funding (No. 1402036C), the Special Scientific Instrument Development program of the Ministry of Science and Technology of China (No. 2016YFF0103702), and the science and technology project of China Southern Grid Corp (No. 066600KK52170074).

Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Jiarong Song¹
  • Zhong Yang¹
  • Qiuyan Zhang²
  • Ting Fang³
  • Guoxiong Hu¹ ² ³
  • Jiaming Han¹
  • Cong Chen¹

  1. Nanjing University of Aeronautics and Astronautics, Nanjing, China
  2. Electric Power Research Institute of Guizhou Power Grid Co., Ltd., Guiyang, China
  3. Hefei University of Technology, Hefei, China
