Learning Linguistic Association Towards Efficient Text-Video Retrieval

  • Conference paper
  • First Online:
Computer Vision – ECCV 2022 (ECCV 2022)

Abstract

Text-video retrieval has attracted growing attention recently. A dominant approach is to learn a common space in which the two modalities are aligned. However, videos generally deliver richer content than text, and captions usually miss certain events or details in the video. This information imbalance between the two modalities makes it difficult to align their representations. In this paper, we propose a general framework, LINguistic ASsociation (LINAS), which utilizes the complementarity between captions corresponding to the same video. Concretely, we first train a teacher model that takes extra relevant captions as inputs and aggregates their language semantics to obtain more comprehensive text representations. Since these additional captions are inaccessible during inference, Knowledge Distillation is employed to train a student model with a single caption as input. We further propose an Adaptive Distillation strategy, which allows the student model to adaptively learn knowledge from the teacher model while suppressing the spurious relations introduced during linguistic association. Extensive experiments demonstrate the effectiveness and efficiency of LINAS with various baseline architectures on benchmark datasets. Our code is available at https://github.com/silenceFS/LINAS.
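To make the scheme above concrete, the following PyTorch sketch pairs a frozen multi-caption teacher with a single-caption student and weights the distillation term per training pair. The toy Encoder, the mean-pooling caption aggregator, and the teacher-confidence weight standing in for the Adaptive Distillation strategy are all assumptions made for illustration, not the authors' implementation; their actual code is in the repository linked above.

# Minimal sketch (PyTorch) of the teacher-student scheme described in the
# abstract. Encoder architecture, mean-pooling aggregation, and the
# confidence-based adaptive weight are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy encoder projecting pre-extracted features into the common space."""
    def __init__(self, in_dim, embed_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, embed_dim)

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)

def logits(text_emb, video_emb, temp=0.05):
    # Similarity matrix over the batch; row i scores caption i against all videos.
    return text_emb @ video_emb.t() / temp

def adaptive_distill_loss(s_logits, t_logits, target):
    # Per-sample KL distillation, weighted by the teacher's confidence on the
    # ground-truth video (an assumed form of the adaptive strategy: pairs where
    # the associated captions misled the teacher contribute less).
    t_prob = t_logits.softmax(dim=-1)
    kl = F.kl_div(s_logits.log_softmax(dim=-1), t_prob, reduction='none').sum(dim=-1)
    weight = t_prob.gather(1, target.unsqueeze(1)).squeeze(1)
    return (weight * kl).mean()

# One student training step on dummy features (B videos, K captions each).
B, K, D, E = 8, 4, 512, 256
video_feats, caption_feats = torch.randn(B, D), torch.randn(B, K, D)
video_enc, teacher_txt, student_txt = Encoder(D, E), Encoder(D, E), Encoder(D, E)
target = torch.arange(B)          # caption i is paired with video i

v_emb = video_enc(video_feats)
with torch.no_grad():             # teacher is trained in advance, then frozen
    # Teacher sees all K associated captions and mean-pools their embeddings.
    t_emb = F.normalize(teacher_txt(caption_feats).mean(dim=1), dim=-1)
    t_logits = logits(t_emb, v_emb)
# Student sees only the single caption available at inference time.
s_logits = logits(student_txt(caption_feats[:, 0]), v_emb)
loss = (F.cross_entropy(s_logits, target)
        + adaptive_distill_loss(s_logits, t_logits, target))
loss.backward()

The weight shrinks the distillation signal exactly where the teacher itself is unsure about the paired video, which is one simple way to damp spurious relations introduced by associating extra captions.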


Notes

  1. ‘TeachText-CE+’ achieves the best performance on VATEX. However, the authors have not provided the corresponding multi-modal features of the VATEX dataset.

References

  1. Arnab, A., Dehghani, M., Heigold, G., Sun, C., Lucic, M., Schmid, C.: Vivit: a video vision transformer. In: 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, 10–17 October 2021, pp. 6816–6826. IEEE (2021). https://doi.org/10.1109/ICCV48922.2021.00676

  2. Bain, M., Nagrani, A., Varol, G., Zisserman, A.: Frozen in time: a joint video and image encoder for end-to-end retrieval. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 1728–1738 (2021)

  3. Bertasius, G., Wang, H., Torresani, L.: Is space-time attention all you need for video understanding? In: Meila, M., Zhang, T. (eds.) Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18–24 July 2021, Virtual Event. Proceedings of Machine Learning Research, vol. 139, pp. 813–824. PMLR (2021). http://proceedings.mlr.press/v139/bertasius21a.html

  4. Buciluǎ, C., Caruana, R., Niculescu-Mizil, A.: Model compression. In: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 535–541 (2006)

  5. Chen, S., Zhao, Y., Jin, Q., Wu, Q.: Fine-grained text-video retrieval with hierarchical graph reasoning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 10638–10647 (2020)

  6. Chen, W., Li, G., Zhang, X., Yu, H., Wang, S., Huang, Q.: Cascade cross-modal attention network for video actor and action segmentation from a sentence. In: Proceedings of the 29th ACM International Conference on Multimedia, pp. 4053–4062 (2021)

  7. Croitoru, I., et al.: Teachtext: crossmodal generalized distillation for text-video retrieval. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 11583–11593 (2021)

  8. Dong, J., Li, X., Snoek, C.G.: Predicting visual features from text for image and video caption retrieval. IEEE Trans. Multimedia 20(12), 3377–3388 (2018)

  9. Dong, J., et al.: Dual encoding for zero-example video retrieval. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9346–9355 (2019)

  10. Dong, J., et al.: Dual encoding for video retrieval by text. IEEE Trans. Pattern Anal. Mach. Intell. 44, 4065–4080 (2021)

  11. Faghri, F., Fleet, D.J., Kiros, J.R., Fidler, S.: VSE++: improving visual-semantic embeddings with hard negatives. In: British Machine Vision Conference 2018, BMVC 2018, Newcastle, UK, 3–6 September 2018, p. 12. BMVA Press (2018). http://bmvc2018.org/contents/papers/0344.pdf

  12. Markatopoulou, F., et al.: Iti-certh participation in trecvid 2016. In: TRECVID 2016 Workshop (2016)

  13. Gabeur, V., Sun, C., Alahari, K., Schmid, C.: Multi-modal transformer for video retrieval. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.-M. (eds.) ECCV 2020. LNCS, vol. 12349, pp. 214–229. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-58548-8_13

  14. Hinton, G., Vinyals, O., Dean, J.: Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531 (2015)

  15. Zhuo, J., et al.: Zero-shot video classification with appropriate web and task knowledge transfer. In: Proceedings of the 30th ACM International Conference on Multimedia (2022)

  16. Le, D.D., et al.: Nii-hitachi-uit at trecvid 2016. In: TRECVID, vol. 25 (2016)

  17. Lei, J., et al.: Less is more: clipbert for video-and-language learning via sparse sampling. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7331–7341 (2021)

  18. Li, Q., Jin, S., Yan, J.: Mimicking very efficient network for object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6356–6364 (2017)

  19. Li, X., Xu, C., Yang, G., Chen, Z., Dong, J.: W2vv++: fully deep learning for ad-hoc video search. In: Proceedings of the 27th ACM International Conference on Multimedia, pp. 1786–1794 (2019)

  20. Li, Z., Hoiem, D.: Learning without forgetting. IEEE Trans. Pattern Anal. Mach. Intell. 40(12), 2935–2947 (2017)

  21. Liu, H., Simonyan, K., Yang, Y.: DARTS: differentiable architecture search. In: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, 6–9 May 2019. OpenReview.net (2019). https://openreview.net/forum?id=S1eYHoC5FX

  22. Liu, S., Fan, H., Qian, S., Chen, Y., Ding, W., Wang, Z.: Hit: hierarchical transformer with momentum contrast for video-text retrieval. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 11915–11925 (2021)

  23. Liu, Y., Albanie, S., Nagrani, A., Zisserman, A.: Use what you have: video retrieval using representations from collaborative experts. In: 30th British Machine Vision Conference 2019, BMVC 2019, Cardiff, UK, 9–12 September 2019, p. 279. BMVA Press (2019). https://bmvc2019.org/wp-content/uploads/papers/0363-paper.pdf

  24. Luo, H., et al.: Clip4clip: an empirical study of CLIP for end to end video clip retrieval. CoRR abs/2104.08860 (2021). https://arxiv.org/abs/2104.08860

  25. Markatopoulou, F., Galanopoulos, D., Mezaris, V., Patras, I.: Query and keyframe representations for ad-hoc video search. In: Proceedings of the 2017 ACM on International Conference on Multimedia Retrieval, pp. 407–411 (2017)

  26. Miech, A., Alayrac, J.B., Laptev, I., Sivic, J., Zisserman, A.: Thinking fast and slow: efficient text-to-visual retrieval with transformers. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 9826–9836 (2021)

  27. Miech, A., Laptev, I., Sivic, J.: Learning a text-video embedding from incomplete and heterogeneous data. arXiv preprint arXiv:1804.02516 (2018)

  28. Miech, A., Zhukov, D., Alayrac, J.B., Tapaswi, M., Laptev, I., Sivic, J.: Howto100m: learning a text-video embedding by watching hundred million narrated video clips. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 2630–2640 (2019)

  29. Mithun, N.C., Li, J., Metze, F., Roy-Chowdhury, A.K.: Learning joint embedding with multimodal cues for cross-modal text-video retrieval. In: Proceedings of the 2018 ACM on International Conference on Multimedia Retrieval, pp. 19–27 (2018)

  30. Nguyen, P.A., et al.: Vireo@ trecvid 2017: video-to-text, ad-hoc video search, and video hyperlinking. In: TRECVID (2017)

  31. Park, W., Kim, D., Lu, Y., Cho, M.: Relational knowledge distillation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 3967–3976 (2019)

  32. Patrick, M., et al.: Support-set bottlenecks for video-text representation learning. In: 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, 3–7 May 2021. OpenReview.net (2021). https://openreview.net/forum?id=EqoXe2zmhrh

  33. Qi, Z., Wang, S., Su, C., Su, L., Huang, Q., Tian, Q.: Towards more explainability: concept knowledge mining network for event recognition. In: Proceedings of the ACM International Conference on Multimedia (ACM MM), pp. 3857–3865 (2020)

  34. Qi, Z., Wang, S., Su, C., Su, L., Huang, Q., Tian, Q.: Self-regulated learning for egocentric video activity anticipation. IEEE Trans. Pattern Anal. Mach. Intell. (2021). https://doi.org/10.1109/TPAMI.2021.3059923

  35. Qi, Z., Wang, S., Su, C., Su, L., Zhang, W., Huang, Q.: Modeling temporal concept receptive field dynamically for untrimmed video analysis. In: Proceedings of the ACM International Conference on Multimedia (ACM MM), pp. 3798–3806 (2020)

  36. Fang, S., et al.: Concept propagation via attentional knowledge graph reasoning for video-text retrieval. In: Proceedings of the 30th ACM International Conference on Multimedia (2022)

  37. Snoek, C.G., Li, X., Xu, C., Koelma, D.C.: University of amsterdam and renmin university at trecvid 2017: searching video, detecting events and describing video. In: TRECVID (2017)

  38. Song, Y., Soleymani, M.: Polysemous visual-semantic embedding for cross-modal retrieval. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 1979–1988 (2019)

  39. Ueki, K., Hirakawa, K., Kikuchi, K., Ogawa, T., Kobayashi, T.: Waseda_meisei at trecvid 2017: ad-hoc video search. In: TRECVID (2017)

  40. Vaswani, A., et al.: Attention is all you need. Adv. Neural Inf. Process. Syst. 30, 5998–6008 (2017)

  41. Wang, L., Yoon, K.J.: Knowledge distillation and student-teacher learning for visual intelligence: a review and new outlooks. IEEE Trans. Pattern Anal. Mach. Intell. 44, 3048–3068 (2021)

  42. Wang, T., Zhang, R., Lu, Z., Zheng, F., Cheng, R., Luo, P.: End-to-end dense video captioning with parallel decoding. In: 2021 IEEE/CVF International Conference on Computer Vision, ICCV 2021, Montreal, QC, Canada, 10–17 October 2021, pp. 6827–6837. IEEE (2021). https://doi.org/10.1109/ICCV48922.2021.00677

  43. Wang, X., Zhu, L., Yang, Y.: T2vlad: global-local sequence alignment for text-video retrieval. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 5079–5088 (2021)

  44. Wray, M., Larlus, D., Csurka, G., Damen, D.: Fine-grained action retrieval through multiple parts-of-speech embeddings. In: Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 450–459 (2019)

  45. Yang, X., Dong, J., Cao, Y., Wang, X., Wang, M., Chua, T.: Tree-augmented cross-modal encoding for complex-query video retrieval. In: Huang, J., et al. (eds.) Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, SIGIR 2020, Virtual Event, China, 25–30 July 2020, pp. 1339–1348. ACM (2020). https://doi.org/10.1145/3397271.3401151

Acknowledgements

This work was supported in part by the National Key R&D Program of China under Grant 2018AAA0102000, in part by the National Natural Science Foundation of China under Grants 62022083, U21B2038, and 61931008, and in part by the Fundamental Research Funds for the Central Universities.

Author information

Corresponding author

Correspondence to Shuhui Wang.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (PDF 6776 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Fang, S., Wang, S., Zhuo, J., Han, X., Huang, Q. (2022). Learning Linguistic Association Towards Efficient Text-Video Retrieval. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13696. Springer, Cham. https://doi.org/10.1007/978-3-031-20059-5_15

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-20059-5_15

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20058-8

  • Online ISBN: 978-3-031-20059-5

  • eBook Packages: Computer Science, Computer Science (R0)
