
Human-Object Interaction Detection Based on Multi-scale Attention Fusion

  • Conference paper
Image and Graphics (ICIG 2021)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12888)


Abstract

Human-object interaction detection is a key problem in scene understanding, with widespread applications in advanced computer vision. However, the diversity of human postures, the uncertainty of object shapes and sizes, and the complexity of the relationships between people and objects make detecting these interactions very challenging. To address this problem, this paper proposes a multi-scale attention fusion method that adapts to people and objects of different sizes and shapes. The method enlarges the range of attention, enabling more accurate judgment of the relationships between people and objects. In addition, we propose a weighting mechanism that better characterizes interactions between people and nearby objects and expresses people's intention to interact. We evaluated the proposed method on the HICO-DET and V-COCO datasets; the results verify its effectiveness and flexibility and show an improvement in accuracy.
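The core idea of multi-scale attention fusion can be sketched as follows: spatial attention maps are computed at several pooling scales, upsampled back to full resolution, and averaged to reweight a feature map, so that both small and large objects can draw attention. This is a minimal illustrative sketch only; the choice of scales, the channel-mean scoring function, and fusion by simple averaging are assumptions, not the paper's exact architecture.

```python
import numpy as np

def multiscale_attention_fusion(features, scales=(1, 2, 4)):
    """Reweight a (C, H, W) feature map with attention fused across scales.

    Illustrative sketch: each scale average-pools the features, scores every
    pooled location, softmax-normalizes the scores into a spatial attention
    map, and upsamples it back to (H, W); the per-scale maps are averaged.
    Assumes H and W are divisible by every scale.
    """
    C, H, W = features.shape
    fused_att = np.zeros((H, W))
    for s in scales:
        # Average-pool with an s x s window (stride s).
        pooled = features.reshape(C, H // s, s, W // s, s).mean(axis=(2, 4))
        # Per-location score: mean activation energy across channels.
        score = pooled.mean(axis=0)
        # Softmax over all spatial positions of this scale.
        att = np.exp(score - score.max())
        att /= att.sum()
        # Nearest-neighbour upsample back to full resolution.
        fused_att += np.repeat(np.repeat(att, s, axis=0), s, axis=1)
    fused_att /= len(scales)
    # Reweight the features by the fused multi-scale attention map.
    return features * fused_att[None, :, :]

feat = np.random.rand(8, 16, 16)
out = multiscale_attention_fusion(feat)
print(out.shape)  # (8, 16, 16)
```

Coarser scales let attention cover large objects with a few pooled cells, while the finest scale preserves detail for small objects; averaging the maps combines both views, which is the intuition behind enlarging the range of attention.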




Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 61672268).

Author information


Corresponding author

Correspondence to Yongzhao Zhan.



Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wu, Q., Zhan, Y. (2021). Human-Object Interaction Detection Based on Multi-scale Attention Fusion. In: Peng, Y., Hu, S.M., Gabbouj, M., Zhou, K., Elad, M., Xu, K. (eds) Image and Graphics. ICIG 2021. Lecture Notes in Computer Science, vol 12888. Springer, Cham. https://doi.org/10.1007/978-3-030-87355-4_24

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-87355-4_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-87354-7

  • Online ISBN: 978-3-030-87355-4

  • eBook Packages: Computer Science (R0)
