Dual-Stream Knowledge-Preserving Hashing for Unsupervised Video Retrieval

  • Conference paper
  • First Online:
Computer Vision – ECCV 2022 (ECCV 2022)

Abstract

Unsupervised video hashing usually optimizes binary codes by learning to reconstruct input videos. Such a reconstruction constraint spends much effort on frame-level temporal changes while overlooking the video-level global semantics that are more useful for retrieval. We address this problem by decomposing video information into reconstruction-dependent and semantic-dependent components, which disentangles semantic extraction from the reconstruction constraint. Specifically, we first design a simple dual-stream structure consisting of a temporal layer and a hash layer. Then, guided by semantic-similarity knowledge obtained from self-supervision, the hash layer learns to capture information for semantic retrieval, while the temporal layer learns to capture information for reconstruction. In this way, the model naturally preserves the disentangled semantics in binary codes. Comprehensive experiments show that our method consistently outperforms state-of-the-art methods on three video benchmarks.
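The dual-stream idea in the abstract can be sketched in code. The following is a minimal illustrative PyTorch sketch, not the authors' implementation: it assumes a shared frame encoder feeding two heads, a temporal layer trained to reconstruct frame features (reconstruction-dependent information) and a hash layer trained to preserve a self-supervised pairwise similarity matrix in relaxed binary codes (semantic-dependent information). All module names, dimensions, and losses here are hypothetical choices for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualStreamHash(nn.Module):
    """Illustrative dual-stream sketch: one encoder, two heads."""

    def __init__(self, feat_dim=2048, hidden_dim=256, code_bits=64):
        super().__init__()
        # Shared temporal encoder over per-frame features.
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Temporal stream: captures reconstruction-dependent information.
        self.temporal = nn.Linear(hidden_dim, feat_dim)
        # Hash stream: captures semantic-dependent information.
        self.hash = nn.Linear(hidden_dim, code_bits)

    def forward(self, frames):               # frames: (B, T, feat_dim)
        h, _ = self.encoder(frames)          # (B, T, hidden_dim)
        recon = self.temporal(h)             # frame-level reconstruction
        logits = self.hash(h.mean(dim=1))    # video-level code logits
        codes = torch.tanh(logits)           # relaxed binary codes in (-1, 1)
        return recon, codes

model = DualStreamHash()
frames = torch.randn(2, 8, 2048)
recon, codes = model(frames)

# Reconstruction loss drives the temporal stream only.
rec_loss = F.mse_loss(recon, frames)

# Similarity-preserving loss drives the hash stream: S is a toy
# self-supervised pairwise similarity target in {-1, +1}.
S = torch.tensor([[1.0, -1.0], [-1.0, 1.0]])
sim = codes @ codes.t() / codes.shape[1]
hash_loss = F.mse_loss(sim, S)
```

At retrieval time only the hash stream would be used (e.g. `torch.sign(codes)` for binary codes); the temporal stream exists solely so that reconstruction pressure does not contaminate the codes.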



Acknowledgements

This work is supported by the National Natural Science Foundation of China (62121002, 62022076, U1936210), the Fundamental Research Funds for the Central Universities under Grant WK3480000011, and the Youth Innovation Promotion Association Chinese Academy of Sciences (Y2021122). We acknowledge the support of the GPU cluster built by MCC Lab of Information Science and Technology Institution, USTC.

Author information

Corresponding author

Correspondence to Hongtao Xie.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 705 KB)

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Li, P., Xie, H., Ge, J., Zhang, L., Min, S., Zhang, Y. (2022). Dual-Stream Knowledge-Preserving Hashing for Unsupervised Video Retrieval. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13674. Springer, Cham. https://doi.org/10.1007/978-3-031-19781-9_11

  • DOI: https://doi.org/10.1007/978-3-031-19781-9_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19780-2

  • Online ISBN: 978-3-031-19781-9

  • eBook Packages: Computer Science (R0)
