Multi-branch Aggregate Convolutional Neural Network for Image Classification

  • Rui Fan
  • Pinqun Jiang
  • Shangyou Zeng
  • Peng Li
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11434)


In image classification, achieving higher accuracy requires extracting feature information at multiple levels from the image. Convolutional neural networks are increasingly applied to image classification; however, traditional convolutional neural networks extract insufficient feature information, achieve poor classification accuracy, and are prone to over-fitting. This paper proposes a multi-branch aggregation network framework, based on a deep convolutional neural network, for image classification. Building on the traditional convolutional network, it increases network width and depth without increasing the number of parameters, further enhancing the network's feature expression ability, enriching the diversity of feature sampling, improving classification accuracy, and preventing over-fitting. The proposed framework was compared with the traditional framework and other frameworks in a series of experiments on two standard databases, CIFAR-10 and CIFAR-100, demonstrating its validity.
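The abstract does not specify how the branches are combined, so the following is only a generic sketch of the multi-branch idea it describes: several parallel convolutional branches with different receptive fields process the same input, and their feature maps are aggregated (here by stacking along a channel axis). The `conv2d`, `multi_branch_block`, and kernel choices are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def conv2d(x, kernel):
    """Naive 'same'-padded 2D convolution of a single-channel map."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    h, w = x.shape
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * kernel)
    return out

def multi_branch_block(x, kernels):
    """Run each branch in parallel on the same input and aggregate
    the per-branch feature maps along a new channel axis."""
    # ReLU after each branch, then stack: one output channel per branch.
    return np.stack([np.maximum(conv2d(x, k), 0.0) for k in kernels])

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
# Branches with different receptive fields (1x1, 3x3, 5x5) sample
# features at different scales without deepening any single path.
kernels = [np.ones((1, 1)),
           rng.standard_normal((3, 3)),
           rng.standard_normal((5, 5))]
features = multi_branch_block(x, kernels)
print(features.shape)  # (3, 8, 8): one same-sized map per branch
```

Because the spatial size is preserved in every branch, the aggregated maps can feed the next layer directly; widening the block this way adds branches rather than lengthening any single path.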


Keywords: Image classification · Convolutional neural network · Classification accuracy · Convergence



The authors acknowledge the support of the National Natural Science Foundation of China (Grant No. 11465004). The authors are also thankful to the anonymous reviewers, whose constructive suggestions helped improve and clarify this manuscript.



Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. College of Electronic Engineering, Guangxi Normal University, Guilin, China
