
Temporal Distinct Representation Learning for Action Recognition

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12352)

Abstract

Motivated by the success of Two-Dimensional Convolutional Neural Networks (2D CNNs) in image recognition, researchers have sought to apply them to video characterization. However, one limitation of analyzing videos with a 2D CNN is that all frames of a video share the same kernels, which can lead to repeated and redundant information extraction, especially of spatial semantics, and thereby neglect the critical variations among frames. In this paper, we tackle this issue in two ways. 1) We design a sequential channel filtering mechanism, the Progressive Enhancement Module (PEM), which excites the discriminative channels of features from different frames step by step and thus avoids repeated information extraction. 2) We create a Temporal Diversity Loss (TD Loss) that forces the kernels to concentrate on and capture the variations among frames rather than image regions with similar appearance. We evaluate our method on the benchmark temporal reasoning datasets Something-Something V1 and V2, where it improves over the best competitor by 2.4% and 1.3%, respectively. It also outperforms the 2D-CNN-based state of the art on the large-scale Kinetics dataset.
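The two components described above lend themselves to compact sketches. Below is a minimal PyTorch sketch of a PEM-style sequential channel gate, written only to illustrate the idea of progressive, memory-conditioned channel excitation; the class name, the additive memory update, and all layer sizes are our own assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveChannelGate(nn.Module):
    """Sketch of a PEM-style gate: each frame's channel excitation is
    conditioned on a running memory of what earlier frames already used,
    pushing later frames toward under-used (novel) channels.
    Hypothetical design; details differ from the paper's PEM."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        hidden = channels // reduction
        self.squeeze = nn.AdaptiveAvgPool2d(1)        # global spatial pooling
        self.fc = nn.Sequential(
            nn.Linear(2 * channels, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, channels), nn.Sigmoid()
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width)
        b, t, c, h, w = x.shape
        memory = x.new_zeros(b, c)                    # accumulated excitation
        gated = []
        for i in range(t):
            desc = self.squeeze(x[:, i]).view(b, c)   # per-frame descriptor
            gate = self.fc(torch.cat([desc, memory], dim=1))
            gated.append(x[:, i] * gate.view(b, c, 1, 1))
            memory = memory + gate                    # remember used channels
        return torch.stack(gated, dim=1)
```

Similarly, here is a hedged sketch of a TD-Loss-style regularizer: it penalizes channel-wise cosine similarity between adjacent frames so consecutive frames are pushed toward distinct representations. The hinge form and the margin hyperparameter are illustrative assumptions.

```python
def temporal_diversity_loss(feat: torch.Tensor, margin: float = 0.0) -> torch.Tensor:
    """feat: (batch, frames, channels, height, width).
    Penalizes channel-wise cosine similarity between adjacent frames.
    A sketch of the idea, not the paper's exact loss."""
    b, t, c, h, w = feat.shape
    cur = feat[:, :-1].reshape(b, t - 1, c, h * w)
    nxt = feat[:, 1:].reshape(b, t - 1, c, h * w)
    sim = F.cosine_similarity(cur, nxt, dim=-1)       # (b, t-1, c)
    return F.relu(sim - margin).mean()
```

With margin = 0 any positive similarity between adjacent frames is penalized; a small positive margin would instead tolerate the agreement that naturally arises from shared static background.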

Keywords

Video representation learning · Action recognition · Progressive Enhancement Module · Temporal Diversity Loss

Notes

Acknowledgement

We thank Dr. Wei Liu from Tencent AI Lab for his valuable advice.


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Tencent AI Lab, Shenzhen, China
  2. Tencent Youtu Lab, Shanghai, China
  3. School of EEE, Nanyang Technological University, Singapore, Singapore
  4. Department of CSE, The State University of New York, Buffalo, USA
