Abstract
Benefiting from the widespread availability of depth sensors, human action recognition plays an important role in many applications, including human-computer interaction, security monitoring, motion-sensing games, and medical care. In most recent works, depth cameras are used to capture the action data required for recognition. In particular, feature extraction based on skeleton information has achieved satisfactory results in action recognition and has gradually been extended to various algorithms. However, research on view invariance remains insufficient. To improve the performance of skeleton-based recognition, this paper proposes an improved asymmetric convolution adaptive network, which achieves desirable results on the public benchmark dataset NTU RGB+D 60. The model combines an advanced view-adaptive module with asymmetric convolution blocks, which effectively extract features from the raw skeleton data. Ablation studies and comparative experiments show that the model outperforms many state-of-the-art algorithms.
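The asymmetric convolution blocks mentioned in the abstract (in the style of ACNet) replace a single square convolution with three parallel branches — a square, a horizontal, and a vertical kernel — whose outputs are summed during training; at inference time the three kernels can be fused into one equivalent square kernel at no extra cost. A minimal NumPy sketch of this additivity property, under the assumption of zero-padded 3×3/1×3/3×1 kernels (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def conv2d_same(x, k):
    """2-D cross-correlation with zero padding ('same' output size)."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def acb_forward(x, k3x3, k1x3, k3x1):
    """Training-time asymmetric convolution block: sum of three branches."""
    return (conv2d_same(x, k3x3)
            + conv2d_same(x, k1x3)
            + conv2d_same(x, k3x1))

def acb_fuse(k3x3, k1x3, k3x1):
    """Fold the 1x3 and 3x1 kernels into the 3x3 kernel's middle row/column."""
    fused = k3x3.copy()
    fused[1, :] += k1x3[0, :]
    fused[:, 1] += k3x1[:, 0]
    return fused

rng = np.random.default_rng(0)
x = rng.standard_normal((6, 6))
k3x3 = rng.standard_normal((3, 3))
k1x3 = rng.standard_normal((1, 3))
k3x1 = rng.standard_normal((3, 1))

# The fused single convolution reproduces the three-branch output exactly,
# because convolution is linear in the kernel.
assert np.allclose(acb_forward(x, k3x3, k1x3, k3x1),
                   conv2d_same(x, acb_fuse(k3x3, k1x3, k3x1)))
```

Because the fusion is exact, a model trained with the three-branch blocks can be deployed with the same inference cost as a plain square convolution.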
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Ma, T., Yu, J., Gao, H., Ju, Z. (2022). Asymmetric Convolution View Adaptation Networks for Skeleton-Based Human Action Recognition. In: Jansen, T., Jensen, R., Mac Parthaláin, N., Lin, C.-M. (eds.) Advances in Computational Intelligence Systems. UKCI 2021. Advances in Intelligent Systems and Computing, vol. 1409. Springer, Cham. https://doi.org/10.1007/978-3-030-87094-2_17
DOI: https://doi.org/10.1007/978-3-030-87094-2_17
Published:
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-87093-5
Online ISBN: 978-3-030-87094-2
eBook Packages: Intelligent Technologies and Robotics (R0)