
HybridNet: Integrating GCN and CNN for skeleton-based action recognition

Published in Applied Intelligence.

Abstract

Graph convolutional networks (GCNs) preserve the structural information of the human body well and have achieved outstanding performance in skeleton-based action recognition. Nevertheless, existing GCN-based methods still have some issues. First, all channels share the same adjacency matrix. However, the correlations between joints are complex and may change drastically depending on the action; such correlations are difficult to fit with channel-shared adjacency matrices alone. Second, the interframe edges of the graphs connect only the same joints, neglecting the dependencies between different joints. Fortunately, convolutional neural networks (CNNs) can simultaneously establish the interdependence of all the points in a spatial-temporal patch. Furthermore, CNNs use different kernels across channels, which makes them more adaptable for modeling complicated dependencies. In this work, we design a hybrid network (HybridNet) to integrate GCNs and CNNs. HybridNet not only utilizes structural information well but also properly models the complicated relationships between interframe joints. Extensive experiments are conducted on three challenging datasets: NTU-RGB+D, NTU-RGB+D 120, and Skeleton-Kinetics. The proposed model achieves state-of-the-art performance on all these datasets by a considerable margin, demonstrating the superiority of our method. The source code is available at https://github.com/kraus-yang/HybridNet.
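The abstract's core contrast — a GCN spatial step in which every channel shares one adjacency matrix versus a CNN step in which each output channel has its own kernel — can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' implementation; all shapes and names are assumptions (25 joints as in NTU-RGB+D, arbitrary channel counts).

```python
import numpy as np

# Minimal sketch (not the paper's code): contrast a channel-shared
# adjacency in a GCN layer with per-channel kernels in a CNN layer.
rng = np.random.default_rng(0)
V, C_in, C_out, T = 25, 3, 8, 4     # joints, in/out channels, frames

# --- GCN spatial step: every channel uses the SAME adjacency A ---
A = rng.random((V, V))
A = A / A.sum(axis=1, keepdims=True)   # row-normalized adjacency
W = rng.standard_normal((C_in, C_out)) # channel-mixing weights
X = rng.standard_normal((V, C_in))     # one frame of joint features
gcn_out = A @ X @ W                    # (V, C_out); A is channel-shared

# --- CNN temporal step: each output channel has its OWN kernel ---
# A 1-D convolution over time mixes a window of frames with
# channel-specific weights, so dependencies differ per channel.
k = 3
kernels = rng.standard_normal((C_out, C_in, k))
seq = rng.standard_normal((C_in, T))   # one joint's features over time
conv_out = np.empty((C_out, T - k + 1))
for o in range(C_out):
    for t in range(T - k + 1):
        conv_out[o, t] = np.sum(kernels[o] * seq[:, t:t + k])

print(gcn_out.shape)   # (25, 8)
print(conv_out.shape)  # (8, 2)
```

The GCN output depends on a single learned (or fixed) `A` regardless of channel, which is the limitation the abstract highlights; the convolution's `kernels` tensor carries a distinct weight pattern per output channel, which is the flexibility HybridNet borrows from CNNs.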



Author information


Corresponding author

Correspondence to Jianlin Zhang.



About this article


Cite this article

Yang, W., Zhang, J., Cai, J. et al. HybridNet: Integrating GCN and CNN for skeleton-based action recognition. Appl Intell 53, 574–585 (2023). https://doi.org/10.1007/s10489-022-03436-0

