
Self-supervised Contrastive Feature Refinement for Few-Shot Class-Incremental Learning

  • Conference paper
Computer-Aided Design and Computer Graphics (CADGraphics 2023)

Abstract

Few-Shot Class-Incremental Learning (FSCIL) aims to learn novel classes incrementally from only a few data points, without forgetting old classes. Capturing the underlying patterns and traits of these few-shot classes is particularly difficult. To meet these challenges, we propose a Self-supervised Contrastive Feature Refinement (SCFR) framework that tackles the FSCIL problem from three aspects. First, we employ a self-supervised learning framework that enables the network to learn richer representations and promotes feature refinement. Meanwhile, we design virtual classes to improve the model's robustness and generalization during training. To prevent catastrophic forgetting, we add Gaussian noise to the prototypes of previously encountered classes to recall the distribution of known classes and maintain stability in the embedding space. SCFR offers a systematic solution that effectively mitigates catastrophic forgetting and over-fitting. Experiments on widely used datasets, including CUB200, miniImageNet, and CIFAR100, show that SCFR performs remarkably well compared with other mainstream methods.
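
As a hedged illustration of the prototype-recall idea described in the abstract, the Python (PyTorch) sketch below samples pseudo-features for previously learned classes by perturbing their stored prototypes with Gaussian noise and mixes them with the few new-class samples in an incremental session. All names and hyperparameters (feature_dim, noise_std, n_replay, encoder, classifier) are assumptions for illustration, not the authors' implementation, and the self-supervised and virtual-class components are omitted.

# Minimal sketch (assumed names and values), not the paper's actual code.
import torch
import torch.nn.functional as F

feature_dim = 512   # assumed embedding size of the backbone
noise_std = 0.1     # assumed standard deviation of the Gaussian perturbation


def replay_from_prototypes(prototypes: torch.Tensor, n_replay: int):
    """Sample pseudo-features for old classes by adding Gaussian noise to prototypes.

    prototypes: (num_old_classes, feature_dim) class means stored from earlier sessions.
    Returns pseudo-features and their (old-class) labels.
    """
    num_old = prototypes.size(0)
    # Repeat each prototype n_replay times and perturb it with isotropic Gaussian noise,
    # so the classifier keeps seeing an approximation of old-class feature distributions.
    feats = prototypes.repeat_interleave(n_replay, dim=0)
    feats = feats + noise_std * torch.randn_like(feats)
    labels = torch.arange(num_old).repeat_interleave(n_replay)
    return feats, labels


def incremental_step(encoder, classifier, new_images, new_labels, old_prototypes):
    """One illustrative training step for a new few-shot session.

    new_labels are assumed to be global class indices, i.e. offset past the old classes.
    """
    new_feats = encoder(new_images)  # features of the few new-class samples
    old_feats, old_labels = replay_from_prototypes(old_prototypes, n_replay=8)
    all_feats = torch.cat([new_feats, old_feats], dim=0)
    all_labels = torch.cat([new_labels, old_labels], dim=0)
    logits = classifier(all_feats)
    # Joint loss over new samples and replayed old-class pseudo-features.
    return F.cross_entropy(logits, all_labels)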



Acknowledgements

This work is supported by the National Key Research and Development Program of China (2019YFC1521104), the National Natural Science Foundation of China (Nos. 61972157, 72192821), the Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), and the Shanghai Science and Technology Commission (21511101200, 22YF1420300).

Author information


Corresponding author

Correspondence to Lizhuang Ma.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Ma, S., Yuan, W., Wang, Y., Tan, X., Zhang, Z., Ma, L. (2024). Self-supervised Contrastive Feature Refinement for Few-Shot Class-Incremental Learning. In: Hu, SM., Cai, Y., Rosin, P. (eds) Computer-Aided Design and Computer Graphics. CADGraphics 2023. Lecture Notes in Computer Science, vol 14250. Springer, Singapore. https://doi.org/10.1007/978-981-99-9666-7_19


  • DOI: https://doi.org/10.1007/978-981-99-9666-7_19

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-9665-0

  • Online ISBN: 978-981-99-9666-7

  • eBook Packages: Computer Science, Computer Science (R0)
