
Online Continual Learning with Contrastive Vision Transformer

  • Conference paper
Computer Vision – ECCV 2022 (ECCV 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13680)


Abstract

Online continual learning (online CL) studies the problem of learning sequential tasks from an online data stream without task boundaries, aiming to adapt to new data while alleviating catastrophic forgetting of past tasks. This paper proposes Contrastive Vision Transformer (CVT), a framework that couples a transformer architecture with a focal contrastive learning strategy to achieve a better stability-plasticity trade-off for online CL. Specifically, we design a new external attention mechanism for online CL that implicitly captures information about previous tasks. In addition, CVT maintains learnable focuses for each class, which accumulate the knowledge of previous classes to alleviate forgetting. Based on these learnable focuses, we design a focal contrastive loss that rebalances contrastive learning between new and past classes and consolidates previously learned representations. Moreover, CVT adopts a dual-classifier structure that decouples learning the current classes from balancing all observed classes. Extensive experimental results show that our approach achieves state-of-the-art performance with even fewer parameters on online CL benchmarks and effectively alleviates catastrophic forgetting.
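
The focal contrastive loss is the component that rebalances learning between new and past classes around the per-class learnable focuses. The paper's exact formulation is not reproduced on this page, so the snippet below is only a minimal PyTorch-style sketch of the idea under stated assumptions: the class FocalContrastiveLoss, the focuses parameter, and the temperature-scaled cosine similarity are hypothetical choices for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): contrast embeddings against learnable
# per-class "focuses" with a temperature-scaled cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FocalContrastiveLoss(nn.Module):
    """Hypothetical contrastive loss against learnable per-class focuses."""

    def __init__(self, num_classes: int, feat_dim: int, temperature: float = 0.1):
        super().__init__()
        # One learnable focus vector per class; intended to accumulate class
        # knowledge across tasks (assumed parameterization).
        self.focuses = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.temperature = temperature

    def forward(self, features: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # features: (B, D) embeddings from the backbone; labels: (B,) class ids.
        z = F.normalize(features, dim=1)
        f = F.normalize(self.focuses, dim=1)
        # Temperature-scaled cosine similarity between samples and all focuses.
        logits = z @ f.t() / self.temperature
        # Attract each sample to its own class focus and repel it from the
        # focuses of all other (new and past) classes.
        return F.cross_entropy(logits, labels)


# Usage sketch on a replay batch that mixes current-task and buffered samples.
loss_fn = FocalContrastiveLoss(num_classes=10, feat_dim=128)
feats = torch.randn(32, 128)
labels = torch.randint(0, 10, (32,))
loss = loss_fn(feats, labels)
```

In this sketch the focuses act as class prototypes that persist across tasks, so a replay batch mixing current and buffered samples pulls old classes back toward their accumulated focuses while separating them from the new ones.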



Acknowledgment

This work is supported by ARC FL-170100117, DP-180103424, IC-190100031, and LE-200100049.

Author information


Corresponding author

Correspondence to Zhen Wang.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, Z., Liu, L., Kong, Y., Guo, J., Tao, D. (2022). Online Continual Learning with Contrastive Vision Transformer. In: Avidan, S., Brostow, G., Cissé, M., Farinella, G.M., Hassner, T. (eds) Computer Vision – ECCV 2022. ECCV 2022. Lecture Notes in Computer Science, vol 13680. Springer, Cham. https://doi.org/10.1007/978-3-031-20044-1_36


  • DOI: https://doi.org/10.1007/978-3-031-20044-1_36

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-20043-4

  • Online ISBN: 978-3-031-20044-1

  • eBook Packages: Computer Science, Computer Science (R0)
