
Context Enhancement Methodology for Action Recognition in Still Images

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2023 (ICANN 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14254)


Abstract

Action recognition in still images is a popular research topic in computer vision, but it remains challenging due to the lack of motion information. Contextual information, which is inseparable from a predefined action class, is a significant factor in recognizing actions in images, yet existing approaches do not make adequate use of it. To address this issue, we propose a Context Enhancement Module (CEM) that combines a self-attention mechanism with a contextual attention mechanism. Specifically, the module first uses self-attention to learn pixel-level contextual information, then separates the image into parts and uses contextual attention to learn region-level contextual information. In this way, the model can emphasize the significance of individual pixels and regions in the image and substantially improve the feature representation. We conducted extensive experiments on the PASCAL VOC 2012 Action dataset and the Stanford 40 Actions dataset; the results demonstrate that our method is effective, achieving state-of-the-art results on both datasets.
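The abstract describes a two-stage idea: pixel-level self-attention over the whole feature map, followed by region-level contextual attention over image parts. The paper's exact CEM architecture is not reproduced here; the following is a minimal NumPy sketch of that two-stage idea, in which all function names, shapes, the grid split, and the use of a global pooled descriptor as the region-attention query are our assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def pixel_self_attention(feat, Wq, Wk, Wv):
    """Pixel-level context: standard self-attention over all H*W positions.
    feat: (H, W, C) feature map; Wq/Wk/Wv: (C, C) learned projections."""
    H, W, C = feat.shape
    x = feat.reshape(H * W, C)               # flatten spatial positions
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(C))     # (HW, HW) pairwise affinities
    return (attn @ v).reshape(H, W, C)       # context-enhanced feature map

def region_context_attention(feat, grid=2):
    """Region-level context: split the map into grid x grid parts and weight
    each region by the similarity of its pooled feature to a global descriptor."""
    H, W, C = feat.shape
    gh, gw = H // grid, W // grid
    regions = [feat[i*gh:(i+1)*gh, j*gw:(j+1)*gw].mean(axis=(0, 1))
               for i in range(grid) for j in range(grid)]  # pooled per region
    R = np.stack(regions)                    # (grid*grid, C)
    g = feat.mean(axis=(0, 1))               # global context descriptor
    w = softmax(R @ g / np.sqrt(C))          # one attention weight per region
    return (w[:, None] * R).sum(axis=0)      # weighted region summary, (C,)

rng = np.random.default_rng(0)
f = rng.normal(size=(4, 4, 8))               # toy 4x4 map with 8 channels
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
enhanced = pixel_self_attention(f, Wq, Wk, Wv)
summary = region_context_attention(enhanced, grid=2)
print(enhanced.shape, summary.shape)  # (4, 4, 8) (8,)
```

In this reading, the pixel stage lets every position attend to every other, while the region stage reweights coarser parts, so both fine and coarse context contribute to the final representation; in the actual model these stages would sit inside a CNN backbone with learned parameters rather than random matrices.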



Acknowledgement

This work is supported by the Inner Mongolia Science and Technology Project (No. 2021GG0166).

Author information


Corresponding author

Correspondence to Wei Wu.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

He, J., Wu, W., Li, Y. (2023). Context Enhancement Methodology for Action Recognition in Still Images. In: Iliadis, L., Papaleonidas, A., Angelov, P., Jayne, C. (eds) Artificial Neural Networks and Machine Learning – ICANN 2023. ICANN 2023. Lecture Notes in Computer Science, vol 14254. Springer, Cham. https://doi.org/10.1007/978-3-031-44207-0_10


  • DOI: https://doi.org/10.1007/978-3-031-44207-0_10


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-44206-3

  • Online ISBN: 978-3-031-44207-0

  • eBook Packages: Computer Science, Computer Science (R0)
