Mining Inter-Video Proposal Relations for Video Object Detection

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12366)


Recent studies have shown that aggregating contextual information from proposals in different frames can clearly enhance the performance of video object detection. However, these approaches mainly exploit proposal relations within a single video, while ignoring proposal relations among different videos, which can provide important discriminative cues for recognizing confusing objects. To address this limitation, we propose a novel Inter-Video Proposal Relation module. Based on a concise multi-level triplet selection scheme, this module learns effective object representations by modeling relations among hard proposals from different videos. Moreover, we design a Hierarchical Video Relation Network (HVR-Net) that integrates intra-video and inter-video proposal relations in a hierarchical fashion. This design progressively exploits both intra-video and inter-video contexts to boost video object detection. We evaluate our method on the large-scale video object detection benchmark ImageNet VID, where HVR-Net achieves state-of-the-art results. Code and models are available at
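As an illustration only (not the authors' implementation), the two ingredients the abstract names can be sketched in NumPy: attention-style aggregation of proposal features, and selection of a hard positive/negative proposal pair across videos for a triplet objective. The function names, the cosine-similarity hardness measure, and the margin value are all assumptions for this sketch.

```python
import numpy as np

def relation_module(feats):
    """Aggregate proposal features with scaled dot-product self-attention,
    a standard stand-in for relation-based proposal aggregation."""
    d = feats.shape[1]
    scores = feats @ feats.T / np.sqrt(d)          # (N, N) pairwise affinities
    scores -= scores.max(axis=1, keepdims=True)    # numerically stable softmax
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)
    return feats + w @ feats                       # residual aggregation

def hard_triplet(feats, video_ids, labels, anchor):
    """Pick a hard positive (same class, different video, least similar to the
    anchor) and a hard negative (different class, most similar)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = f @ f[anchor]                            # cosine similarity to anchor
    pos_mask = (labels == labels[anchor]) & (video_ids != video_ids[anchor])
    neg_mask = labels != labels[anchor]
    pos = np.where(pos_mask, sim, np.inf).argmin()   # least similar positive
    neg = np.where(neg_mask, sim, -np.inf).argmax()  # most similar negative
    return pos, neg

def triplet_loss(feats, a, p, n, margin=0.3):
    """Hinge triplet loss on L2 distances (margin value is an assumption)."""
    d_ap = np.linalg.norm(feats[a] - feats[p])
    d_an = np.linalg.norm(feats[a] - feats[n])
    return max(0.0, margin + d_ap - d_an)

# Toy example: 6 proposals from 3 videos, 2 classes.
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 8))
video_ids = np.array([0, 0, 1, 1, 2, 2])
labels = np.array([0, 1, 0, 1, 0, 1])
agg = relation_module(feats)
p, n = hard_triplet(agg, video_ids, labels, anchor=0)
loss = triplet_loss(agg, 0, p, n)
```

The paper's hierarchical design would apply such relation modeling first within each video and then across videos on the triplet-selected hard proposals; this sketch shows only one level.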


Keywords: Video object detection · Inter-video proposal relation · Multi-level triplet selection · Hierarchical Video Relation Network



This work is partially supported by the Science and Technology Service Network Initiative of the Chinese Academy of Sciences (KFJ-STS-QYZX-092), the Guangdong Special Support Program (2016TX03X276), the Shenzhen Basic Research Program (CXB201104220032A), the National Natural Science Foundation of China (61876176, U1713208), and the Joint Lab of CAS-HK. This work is also partially supported by an Australian Research Council Discovery Early Career Award (DE190100626).

Supplementary material

Supplementary material 1 (PDF, 455 KB)



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Guangdong-Hong Kong-Macao Joint Laboratory of Human-Machine Intelligence-Synergy Systems, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
  2. Faculty of Information Technology, Monash University, Melbourne, Australia
