Regularized Deep Convolutional Neural Networks for Feature Extraction and Classification

  • Khaoula Jayech
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10635)

Abstract

Deep Convolutional Neural Networks (DCNNs) are the state of the art in fields such as visual object recognition, handwriting recognition, and speech recognition. DCNNs comprise many layers and a very large number of units and connections, and with so many parameters they are prone to overfitting. To guard against this problem, regularization techniques are applied at different positions in the network. In this paper, we show that with the right combination of regularization techniques, namely fully connected dropout, max pooling dropout, L2 regularization, and He initialization, it is possible to achieve good object recognition results with small networks and without data augmentation.
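The sketch below illustrates how the combination named in the abstract can be assembled in a small network. It assumes a TensorFlow/Keras implementation; the layer sizes, dropout rates, and L2 coefficient are illustrative assumptions rather than the paper's exact configuration, and dropout placed around the pooling stage stands in for the max pooling dropout discussed in the paper.

# Minimal sketch: small CNN combining He initialization, L2 weight decay,
# dropout near the pooling stages, and fully connected dropout.
# All hyperparameters here are assumptions, not the paper's settings.
from tensorflow.keras import layers, models, regularizers

l2 = regularizers.l2(1e-4)   # L2 regularization on conv/dense weights
init = "he_normal"           # He initialization

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),            # e.g. CIFAR-10-sized inputs
    layers.Conv2D(32, 3, activation="relu", padding="same",
                  kernel_initializer=init, kernel_regularizer=l2),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),                       # dropout at the pooling stage
    layers.Conv2D(64, 3, activation="relu", padding="same",
                  kernel_initializer=init, kernel_regularizer=l2),
    layers.MaxPooling2D(),
    layers.Dropout(0.25),                       # dropout at the pooling stage
    layers.Flatten(),
    layers.Dense(128, activation="relu",
                 kernel_initializer=init, kernel_regularizer=l2),
    layers.Dropout(0.5),                        # fully connected dropout
    layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

With this setup, no data augmentation is used; the regularizers alone limit overfitting in the small network.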

Keywords

Deep learning · Deep convolutional neural networks · Object recognition · Fully connected dropout · Max pooling dropout · L2 regularization

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. LATIS Research Lab, National Engineering School of Sousse, University of Sousse, Sousse, Tunisia
