Impact of Base Dataset Design on Few-Shot Image Classification

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12361)


The quality and generality of deep image features are crucially determined by the data they were trained on, yet this often-overlooked effect remains poorly understood. In this paper, we systematically study the effect of variations in the training data by evaluating deep features trained on different image sets in a few-shot classification setting. The experimental protocol we define makes it possible to explore key practical questions. What is the influence of the similarity between base and test classes? Given a fixed annotation budget, what is the optimal trade-off between the number of images per class and the number of classes? Given a fixed dataset, can features be improved by splitting or combining different classes? Should simple or diverse classes be annotated? In a wide range of experiments, we provide clear answers to these questions on the miniImageNet, ImageNet and CUB-200 benchmarks. We also show that base dataset design can improve few-shot classification performance more drastically than replacing a simple baseline with an advanced state-of-the-art algorithm.
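As context for the evaluation setting described above, the sketch below shows one common way to score an N-way K-shot episode on frozen deep features: a nearest-class-mean classifier over support-set prototypes. This is a generic illustration of the protocol family the paper studies, not the authors' exact implementation; the function name and inputs are hypothetical.

```python
import numpy as np

def nearest_centroid_accuracy(support_feats, support_labels, query_feats, query_labels):
    """Score one N-way K-shot episode with a nearest-class-mean classifier.

    support_feats : (N*K, D) array of frozen features for labeled examples
    support_labels: (N*K,)   integer class labels of the support set
    query_feats   : (Q, D)   features of the unlabeled queries to classify
    query_labels  : (Q,)     ground-truth labels used only for scoring
    """
    classes = np.unique(support_labels)
    # One prototype per class: the mean feature of its K support examples.
    prototypes = np.stack(
        [support_feats[support_labels == c].mean(axis=0) for c in classes]
    )
    # Euclidean distance from every query to every prototype, via broadcasting.
    dists = np.linalg.norm(query_feats[:, None, :] - prototypes[None, :, :], axis=-1)
    # Predict the class of the nearest prototype.
    preds = classes[dists.argmin(axis=1)]
    return float((preds == query_labels).mean())
```

In practice such episode accuracies are averaged over many randomly sampled episodes, which is what makes the protocol sensitive to how the base dataset used to train the feature extractor was designed.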


Keywords: Dataset labeling · Few-shot classification · Meta-learning · Weakly-supervised learning



This work was supported in part by ANR project EnHerit ANR-17-CE23-0008, project Rapid Tabasco. We thank Maxime Oquab, Diane Bouchacourt and Alexei Efros for helpful discussions and feedback.

Supplementary material

Supplementary material 1 (PDF, 2.2 MB): 504471_1_En_35_MOESM1_ESM.pdf



Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Facebook AI Research, Paris, France
  2. LIGM (UMR 8049), École des Ponts, UPE, Champs-sur-Marne, France
