Prototype Rectification for Few-Shot Learning

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12346)

Abstract

Few-shot learning requires recognizing novel classes from scarce labeled data. Prototypical networks are widely used in existing work; however, prototypes estimated from the narrow distribution of scarce data tend to be biased. In this paper, we identify two key factors behind this bias: the intra-class bias and the cross-class bias. We then propose a simple yet effective approach for prototype rectification in the transductive setting, which uses label propagation to diminish the intra-class bias and feature shifting to diminish the cross-class bias. We also conduct a theoretical analysis that justifies the approach and derives a lower bound on its performance. Effectiveness is demonstrated on three few-shot benchmarks. Notably, our approach achieves state-of-the-art performance on both miniImageNet (70.31% on 1-shot and 81.89% on 5-shot) and tieredImageNet (78.74% on 1-shot and 86.92% on 5-shot).
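The minimal sketch below is not from the paper; it only illustrates, under assumptions inferred from the abstract, how the two rectification steps could look on pre-computed features. Cross-class bias is reduced by shifting query features toward the support-set mean, and intra-class bias is reduced by folding confidently pseudo-labeled queries into each class prototype (a hard-label stand-in for label propagation). All function and parameter names are hypothetical.

```python
import numpy as np

def rectify_prototypes(support, support_labels, query, n_way,
                       k_confident=5, temperature=10.0):
    """Assumed, simplified sketch of transductive prototype rectification.

    support: (n_support, d) features of labeled support samples
    support_labels: (n_support,) integer labels in [0, n_way)
    query: (n_query, d) features of unlabeled query samples
    """
    # Cross-class bias: shift query features by the difference between the
    # support and query feature means (an assumed, simple form of shifting).
    query = query + (support.mean(axis=0) - query.mean(axis=0))

    # Basic prototypes: per-class means of the support features.
    prototypes = np.stack([support[support_labels == c].mean(axis=0)
                           for c in range(n_way)])

    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    # Intra-class bias: pseudo-label queries by cosine similarity to the
    # basic prototypes, then add the most confident ones to each class.
    sims = normalize(query) @ normalize(prototypes).T      # (n_query, n_way)
    probs = np.exp(temperature * sims)
    probs /= probs.sum(axis=1, keepdims=True)
    pseudo = probs.argmax(axis=1)
    conf = probs.max(axis=1)

    rectified = []
    for c in range(n_way):
        members = [support[support_labels == c]]
        idx = np.where(pseudo == c)[0]
        if idx.size:
            top = idx[np.argsort(-conf[idx])][:k_confident]  # most confident queries
            members.append(query[top])
        rectified.append(np.concatenate(members).mean(axis=0))
    return np.stack(rectified), query
```

Classification would then proceed as in a standard prototypical network, assigning each (shifted) query to its nearest rectified prototype.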

Keywords

Few-shot learning · Prototype rectification · Intra-class bias · Cross-class bias

Supplementary material

Supplementary material 1: 500725_1_En_43_MOESM1_ESM.pdf (141 KB)


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

AInnovation Technology Co., Ltd., Beijing, China