Evaluation of Group Convolution in Lightweight Deep Networks for Object Classification

  • Arindam Das
  • Thomas Boulay
  • Senthil Yogamani
  • Shiping Ou
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11264)

Abstract

Deploying a neural network model on a low-power embedded platform is a challenging task. In this paper, we present our study on the efficacy of the aggregated residual transformation (introduced in ResNeXt, which secured 2nd place in the ILSVRC 2016 classification task) for lightweight deep networks. The major contributions of this paper are (i) an evaluation of group convolution and (ii) a study of the impact of skip connections and of varying network width in lightweight deep networks. Our extensive experiments on different topologies show that employing an aggregated convolution operation followed by a point-wise convolution degrades accuracy significantly. Furthermore, our study indicates that skip connections are not a suitable candidate for smaller networks and that width is an important attribute for improving accuracy. Our embedded-friendly networks are tested on the ImageNet 2012 dataset, where 3D convolution proves a better alternative to aggregated convolution, yielding a 10% improvement in classification accuracy.
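
For readers unfamiliar with the operation evaluated above: the block combines a grouped 3x3 convolution (the "aggregated" transformation of ResNeXt, where the filters are split into parallel paths) with a point-wise 1x1 convolution that mixes information across the groups, optionally wrapped by a skip connection. The following is a minimal PyTorch sketch of such a block, not the authors' implementation; the module name GroupConvBlock and its parameters (channels, groups, use_skip) are illustrative assumptions.

```python
# Minimal sketch of a grouped-convolution block (not the authors' code):
# grouped 3x3 convolution followed by a point-wise 1x1 convolution,
# with an optional skip connection as toggled in the paper's ablations.
import torch
import torch.nn as nn


class GroupConvBlock(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, channels: int, groups: int = 8, use_skip: bool = False):
        super().__init__()
        # 3x3 convolution split into `groups` parallel paths (cardinality);
        # each path sees only channels // groups input channels.
        self.group_conv = nn.Conv2d(channels, channels, kernel_size=3,
                                    padding=1, groups=groups, bias=False)
        # 1x1 point-wise convolution mixes information across the groups.
        self.pointwise = nn.Conv2d(channels, channels, kernel_size=1,
                                   bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)
        self.use_skip = use_skip

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.group_conv(x)))
        out = self.bn2(self.pointwise(out))
        if self.use_skip:
            out = out + x  # identity skip connection
        return self.relu(out)


if __name__ == "__main__":
    block = GroupConvBlock(channels=64, groups=8, use_skip=True)
    y = block(torch.randn(1, 64, 56, 56))
    print(y.shape)  # torch.Size([1, 64, 56, 56])
```

Setting groups > 1 divides the 3x3 convolution's parameter count and multiply-adds by the group count, which is what makes the operation attractive for embedded targets in the first place; the paper's finding is that this saving comes at a significant accuracy cost in small networks.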

Keywords

Object classification · Convolutional Neural Network · Group convolution · Efficient networks


Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  • Arindam Das
    • 1
  • Thomas Boulay
    • 2
  • Senthil Yogamani
    • 3
  • Shiping Ou
    • 4
  1. Valeo, Chennai, India
  2. Valeo, Paris, France
  3. Valeo, Galway, Ireland
  4. Valeo, Beijing, China