Image Classification Using Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN): A Review

  • Patel Dhruv
  • Subham Naskar
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 1101)

Abstract

With the advent of new technologies, real-time data has become essential for future development. Every day, a huge amount of visual data is collected, but to use it efficiently we must recognize, understand, and organize it. Neural networks were introduced to find patterns in images, a form of visual data, in a manner loosely modeled on the functioning of neurons in the human brain. They are a biologically inspired programming approach that allows machines to learn from observational data. Neural networks have provided solutions to several image recognition problems and are actively used in the medical field owing to their efficiency. This paper concentrates on the use of CNNs and RNNs for feature extraction from images and the associated challenges, and also presents a brief literature review of these networks.
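To make the idea of CNN-based feature extraction concrete, the following is a minimal sketch of the convolution operation at the heart of a CNN. The 3×3 kernel and the tiny binary "image" below are illustrative assumptions for this sketch, not taken from the paper.

```python
def convolve2d(image, kernel):
    """Valid (no-padding) 2D convolution of a 2D list of numbers by a 2D kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Element-wise multiply the kernel with the image patch and sum.
            acc = 0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge-detecting kernel applied to a 4x4 image whose left
# half is dark (0) and right half is bright (1).
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]
features = convolve2d(image, kernel)
print(features)  # strong responses where the vertical edge lies
```

In a trained CNN the kernel weights are learned rather than hand-set, and many such kernels are stacked in layers, but the sliding dot-product shown here is the same operation.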


Keywords: Image classification · CNN · RNN · Feature extraction



Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  1. School of Computer Engineering, KIIT, Bhubaneswar, India