End-to-End Incremental Learning

  • Francisco M. Castro
  • Manuel J. Marín-Jiménez
  • Nicolás Guil
  • Cordelia Schmid
  • Karteek Alahari
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11216)


Although deep learning approaches have stood out in recent years due to their state-of-the-art results, they continue to suffer from catastrophic forgetting, a dramatic decrease in overall performance when training with new classes added incrementally. This is because current neural network architectures require the entire dataset, consisting of all the samples from the old as well as the new classes, to update the model—a requirement that quickly becomes unsustainable as the number of classes grows. We address this issue with an approach that learns deep neural networks incrementally, using the new data and only a small exemplar set of samples from the old classes. It is based on a loss that combines a distillation measure, which retains the knowledge acquired from the old classes, with a cross-entropy loss to learn the new classes. Our incremental training keeps the entire framework end-to-end, i.e., the data representation and the classifier are learned jointly, unlike recent methods with no such guarantees. We evaluate our method extensively on the CIFAR-100 and ImageNet (ILSVRC 2012) image classification datasets, and show state-of-the-art performance.
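As an illustration of the loss described above, the following NumPy sketch combines a cross-entropy term over all classes with a temperature-scaled distillation term over the old-class outputs. The temperature value, the small numerical floor, and the assumption that the first n_old logit columns correspond to the old classes are choices made here for the example, not details taken from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis (numerically stable)."""
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def incremental_loss(logits, old_logits, labels, n_old, T=2.0):
    """Cross-entropy on all classes plus distillation on the old classes.

    logits     -- current model outputs, shape (batch, n_old + n_new)
    old_logits -- stored outputs of the previous model, shape (batch, >= n_old)
    labels     -- ground-truth class indices, shape (batch,)
    n_old      -- number of classes the previous model was trained on
    T          -- distillation temperature (softens both distributions)
    """
    # Classification term: standard cross-entropy over all classes.
    p = softmax(logits)
    ce = -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

    # Distillation term: match the softened old-model distribution
    # on the old classes, preserving previously acquired knowledge.
    q_old = softmax(old_logits[:, :n_old], T)
    q_new = softmax(logits[:, :n_old], T)
    dist = -np.mean(np.sum(q_old * np.log(q_new + 1e-12), axis=1))

    return ce + dist
```

In a training loop, `old_logits` would come from a frozen copy of the model before the new classes were added, evaluated on the current batch (new data plus exemplars of the old classes); the two terms are then minimized jointly, keeping representation and classifier learning end-to-end.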


Incremental learning · CNN · Distillation loss · Image classification



This work was supported in part by the projects TIC-1692 (Junta de Andalucía), TIN2016-80920R (Spanish Ministry of Science and Tech.), ERC advanced grant ALLEGRO, and EVEREST (no. 5302-1) funded by CEFIPRA. We gratefully acknowledge the support of NVIDIA Corporation with the donation of a Titan X Pascal GPU used for this research.

Supplementary material

474200_1_En_15_MOESM1_ESM.pdf (PDF, 302 KB)



Copyright information

© Springer Nature Switzerland AG 2018

Authors and Affiliations

  • Francisco M. Castro (1) — corresponding author
  • Manuel J. Marín-Jiménez (2)
  • Nicolás Guil (1)
  • Cordelia Schmid (3)
  • Karteek Alahari (3)
  1. Department of Computer Architecture, University of Málaga, Málaga, Spain
  2. Department of Computing and Numerical Analysis, University of Córdoba, Córdoba, Spain
  3. Univ. Grenoble Alpes, Inria, CNRS, Grenoble INP, LJK, Grenoble, France
