DeeSIL: Deep-Shallow Incremental Learning

  • Eden Belouadah
  • Adrian Popescu
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11130)

Abstract

Incremental Learning (IL) is an interesting AI problem when the algorithm is assumed to work on a budget. This is especially true when IL is modeled with a deep learning approach, where two complex challenges arise: catastrophic forgetting, induced by the limited memory available for past data, and the delays caused by retraining the network whenever new classes are incorporated. Here we introduce DeeSIL, an adaptation of a known transfer learning scheme that combines a fixed deep representation used as feature extractor with independently learned shallow classifiers to increase recognition capacity. This scheme tackles both challenges: it works well with a limited memory budget, and each new concept can be added within a minute. Moreover, since no deep retraining is needed when the model is incremented, DeeSIL can integrate larger amounts of initial data, which provide more transferable features. Performance is evaluated on ImageNet LSVRC 2012 against three state-of-the-art algorithms. Results show that, at scale, DeeSIL performance is 23 and 33 points higher than the best baseline when using the same and more initial data, respectively.

Keywords

Incremental learning · SVM · ImageNet
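The scheme described in the abstract can be illustrated with a minimal sketch: a frozen feature extractor (here a fixed random projection standing in for the deep CNN) plus one independently trained shallow classifier per class, so that adding a class never touches previously learned classifiers. The paper trains linear SVMs on deep features; the perceptron-style classifier, class names, and toy data below are assumptions made only to keep the example dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(images, w_frozen):
    """Stand-in for the fixed deep representation: a frozen random
    projection followed by a ReLU. In DeeSIL this would be a CNN
    trained once on the initial data and never updated afterwards."""
    return np.maximum(images @ w_frozen, 0.0)

class DeeSIL:
    """One independent shallow classifier per class, added incrementally."""

    def __init__(self, feat_dim):
        self.feat_dim = feat_dim
        self.classifiers = {}  # class label -> (weights, bias)

    def add_class(self, label, pos, neg, epochs=100, lr=0.1):
        # Train a one-vs-rest linear classifier for the new class only;
        # classifiers of previously added classes are left untouched.
        x = np.vstack([pos, neg])
        y = np.concatenate([np.ones(len(pos)), -np.ones(len(neg))])
        w, b = np.zeros(self.feat_dim), 0.0
        for _ in range(epochs):
            for xi, yi in zip(x, y):
                if yi * (xi @ w + b) <= 0:  # misclassified: perceptron update
                    w, b = w + lr * yi * xi, b + lr * yi
        self.classifiers[label] = (w, b)

    def predict(self, feats):
        # Return the label whose classifier gives the highest raw score.
        labels = list(self.classifiers)
        scores = np.stack([feats @ w + b
                           for w, b in self.classifiers.values()], axis=1)
        return [labels[i] for i in np.argmax(scores, axis=1)]

# Toy usage: two "image" classes incorporated one increment at a time.
dim_in, dim_feat = 8, 32
w_frozen = rng.normal(size=(dim_in, dim_feat))  # fixed, never retrained
imgs_a = rng.normal(loc=3.0, scale=0.3, size=(40, dim_in))
imgs_b = rng.normal(loc=-3.0, scale=0.3, size=(40, dim_in))
feats_a = extract_features(imgs_a, w_frozen)
feats_b = extract_features(imgs_b, w_frozen)

model = DeeSIL(dim_feat)
model.add_class("A", feats_a, feats_b)  # first increment
model.add_class("B", feats_b, feats_a)  # second increment; "A" untouched
preds = model.predict(np.vstack([feats_a, feats_b]))
```

Because each increment only fits the new class's classifier against the fixed features, there is no catastrophic forgetting by construction, which is the core design choice the paper exploits.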


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. CEA, LIST, Vision and Content Engineering Lab, Gif-sur-Yvette, France
