
In-use calibration: improving domain-specific fine-grained few-shot recognition

Original Article · Neural Computing and Applications

Abstract

Learning to recognize novel visual classes from only a few samples is challenging but promising. Previous studies have shown that few-shot models tend to overfit and generalize poorly because they estimate a biased distribution from only a few samples. Agriculture-specific domains pose further challenges: imbalanced disease distributions, one-shot representation bias, fine-grained recognition, and granularity shift. To the best of our knowledge, this is the first work on fine-grained "coarse-to-fine" few-shot plant disease classification, which classifies "fine-grained novel classes" (specific disease severities) based on "coarse-grained base classes" (plant species). We present a complete two-stage in-use calibration strategy. First, we propose an attention-based inverse Mahalanobis distance weighted prototype calibration module (AIPCM). By transferring statistics from sample-rich coarse-grained base classes to sample-scarce fine-grained novel classes, it calibrates the 1-shot prototype and yields an unbiased distribution in the feature space. Second, to generate more reasonable decision boundaries, we propose a prior-driven task-adapted decision boundary calibration module (TDBCM) based on a class-covariance metric: the original Euclidean/cosine distance is replaced with the Mahalanobis distance by introducing the prior mean and covariance of the high-dimensional features. Experimental results on several datasets demonstrate that our model outperforms state-of-the-art (SOTA) models, making this work a valuable supplement to domain-specific agricultural applications.
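The two-stage strategy in the abstract can be sketched in NumPy. This is a minimal illustration, not the authors' implementation: the function names (`calibrate_prototype`, `mahalanobis_classify`), the `k`/`alpha` hyperparameters, and the inverse-distance weighting scheme are assumptions standing in for the attention-based weighting of AIPCM; the regularization term added to each covariance before inversion is likewise an assumption for numerical stability.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance from x to a distribution with given mean / inverse covariance."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

def calibrate_prototype(support, base_means, base_covs, k=2, alpha=0.2):
    """Stage 1 (AIPCM-style sketch): shift a 1-shot prototype toward the statistics
    of the k nearest base classes, weighted by inverse Mahalanobis distance."""
    dim = len(support)
    # distance from the single support sample to each base-class distribution
    dists = np.array([
        mahalanobis(support, m, np.linalg.inv(c + 1e-3 * np.eye(dim)))
        for m, c in zip(base_means, base_covs)
    ])
    nearest = np.argsort(dists)[:k]
    # inverse-distance weights: closer base classes contribute more statistics
    w = 1.0 / (dists[nearest] + 1e-8)
    w /= w.sum()
    transferred_mean = sum(wi * base_means[i] for wi, i in zip(w, nearest))
    transferred_cov = sum(wi * base_covs[i] for wi, i in zip(w, nearest))
    # calibrated prototype: blend the biased 1-shot sample with transferred statistics
    proto = (1 - alpha) * support + alpha * transferred_mean
    return proto, transferred_cov

def mahalanobis_classify(query, protos, covs):
    """Stage 2 (TDBCM-style sketch): score queries with the class-covariance
    Mahalanobis distance instead of Euclidean/cosine, and pick the nearest class."""
    dim = len(query)
    scores = [mahalanobis(query, p, np.linalg.inv(c + 1e-3 * np.eye(dim)))
              for p, c in zip(protos, covs)]
    return int(np.argmin(scores))
```

For example, with two well-separated base classes, a 1-shot support sample near each produces a calibrated prototype and covariance, and queries are then assigned by Mahalanobis distance rather than raw Euclidean distance to the single support sample.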


(The full text includes Figs. 1–16 and Algorithm 1.)


Data availability

Due to the nature of this research, the participants of this study did not agree to their data being shared publicly, so supporting data are not available.


Acknowledgements

This work was supported by the National Science and Technology Major Project (2021ZD0110901).

Author information

Corresponding author: Hongxun Yao.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest related to this work, and no commercial or associative interest that represents a conflict of interest in connection with the submitted work.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Li, M., Yao, H. In-use calibration: improving domain-specific fine-grained few-shot recognition. Neural Comput & Applic 36, 8235–8255 (2024). https://doi.org/10.1007/s00521-024-09501-8

