
Learning representative features via constrictive annular loss for image classification

Published in: Applied Intelligence

Abstract

Deep convolutional neural networks (DCNNs) have achieved remarkable performance on image classification tasks, and designing more powerful loss functions to train robust DCNNs has become a recent trend in the community. In this paper, we present an elegant yet effective loss function, the Constrictive Annular Loss (CA-Loss), to boost the classification performance of DCNNs. CA-Loss adaptively constricts features to a suitable scale, yielding more representative features even on imbalanced datasets. It can be easily combined with the softmax loss to jointly supervise a DCNN, requires no additional supervisory information, and can be optimized with standard algorithms such as stochastic gradient descent. We conduct extensive experiments on two large-scale classification benchmarks and three artificially imbalanced datasets; CA-Loss achieves state-of-the-art accuracy on all of them, strongly demonstrating the effectiveness of the proposed loss function.
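The abstract's exact formulation of CA-Loss is not reproduced on this page; as a minimal sketch, assuming the "constriction to a suitable scale" penalizes the deviation of each feature vector's L2 norm from a target radius (the function name `ca_loss`, the `radius` argument, and the `weight` coefficient are all illustrative, not the authors' notation):

```python
import numpy as np

def ca_loss(features, radius, weight=0.01):
    """Hypothetical annular-constriction penalty: pull each feature's
    L2 norm toward a target radius, so features settle in a narrow annulus.

    features : (batch, dim) array of deep features
    radius   : target norm (in the paper this scale is learned adaptively)
    weight   : balance coefficient against the softmax loss
    """
    # L2 norm of each row (one feature vector per sample)
    norms = np.linalg.norm(features, axis=1)
    # mean squared deviation from the target radius over the batch
    return weight * np.mean((norms - radius) ** 2)
```

Under this reading, the joint objective described in the abstract would be `total_loss = softmax_loss + ca_loss(features, radius)`, with the penalty differentiable everywhere so plain stochastic gradient descent applies.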



Acknowledgements

This work is supported by the Natural Science Foundation of China (Grant No. 61273364).

Author information

Correspondence to Ya-Ping Huang.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Cite this article

Liu, JB., Huang, YP., Zou, Q. et al. Learning representative features via constrictive annular loss for image classification. Appl Intell 49, 3082–3092 (2019). https://doi.org/10.1007/s10489-019-01434-3

