Convolutional Neural Networks Implementations for Computer Vision

  • Paweł Michalski
  • Bogdan Ruszczak
  • Michał Tomaszewski
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 720)


The paper covers the current state of the art in machine learning, and in particular the deep convolutional neural networks used in the field of computer vision. It presents a current definition of deep learning and the relationships between the related fields of machine learning and artificial intelligence. The practical part of the work consists of three components: a description of the structure of a convolutional neural network, distinguishing its key elements and the operations they perform; a compilation of information about the available learning sets used in network testing and verification; and a review of the convolutional neural network implementations that have had a significant impact on the development of the discipline. To illustrate the great potential of the presented tools for solving computer vision tasks, the study highlights examples of their applications and indicates the possibility of using convolutional neural networks to identify technical objects in digital images.
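The key elements of a convolutional network discussed in the paper (the convolution layer, the non-linear activation, and the pooling layer) can be sketched in plain Python. This is an illustrative toy example, not code from the paper; the function names and the example kernel are the author's own, and real networks use optimized tensor libraries.

```python
# Minimal sketch of one conv -> ReLU -> max-pool stage of a CNN.
# Pure Python on lists of lists; illustrative only.

def conv2d(image, kernel):
    """'Valid' 2-D convolution (no padding, stride 1) of one channel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            row.append(sum(image[r + i][c + j] * kernel[i][j]
                           for i in range(kh) for j in range(kw)))
        out.append(row)
    return out

def relu(fmap):
    """Element-wise rectified linear activation: max(0, x)."""
    return [[max(0.0, v) for v in row] for row in fmap]

def maxpool(fmap, size=2):
    """Non-overlapping max pooling with a size x size window."""
    return [[max(fmap[r + i][c + j] for i in range(size) for j in range(size))
             for c in range(0, len(fmap[0]) - size + 1, size)]
            for r in range(0, len(fmap) - size + 1, size)]

# One forward pass: 5x5 input -> 4x4 feature map -> 2x2 pooled map.
image = [[1, 2, 0, 1, 2],
         [0, 1, 3, 1, 0],
         [2, 1, 0, 0, 1],
         [1, 0, 1, 2, 3],
         [0, 2, 1, 0, 1]]
edge_kernel = [[1, -1],
               [1, -1]]  # a crude hand-crafted vertical-edge detector
features = maxpool(relu(conv2d(image, edge_kernel)))
```

In a trained network the kernel weights are not hand-crafted as above but learned by gradient descent, which is the central point of the gradient-based learning approach the paper reviews.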


Deep learning · Convolutional neural networks · Computer vision · Artificial intelligence · Machine learning


References

  1. Batra, S., Sachdeva, S.: Suitability of data models for electronic health records database. In: Srinivasa, S., Mehta, S. (eds.) BDA 2014. LNCS, vol. 8883, pp. 14–32. Springer, Cham (2014)
  2. Bagloee, S.A., Tavana, M., Asadi, M., et al.: Autonomous vehicles: challenges, opportunities, and future implications for transportation policies. J. Mod. Transport. 24(4), 284–303 (2016)
  3. Pal, S.K., Meher, S.K., Skowron, A.: Data science, big data and granular mining. Pattern Recogn. Lett. 67(2), 109–112 (2015)
  4. Häne, C., Sattler, T., Pollefeys, M.: Obstacle detection for self-driving cars using only monocular cameras and wheel odometry. In: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg (2015)
  5. Salman, Y.D., Ku-Mahamud, K.R., Kamioka, E.: Distance measurement for self-driving cars using stereo camera. In: Proceedings of the 6th International Conference on Computing and Informatics, ICOCI 2017, Kuala Lumpur (2017)
  6. Hohm, A., Lotz, F., Fochler, O., Lueke, S., Winner, H.: Automated driving in real traffic: from current technical approaches towards architectural perspectives. SAE Technical Paper (2014)
  7. Karami, E., Prasad, S., Shehata, M.: Image matching using SIFT, SURF, BRIEF and ORB: performance comparison for distorted images. In: Newfoundland Electrical and Computer Engineering Conference, IEEE Newfoundland and Labrador Section, St. John's, NL (2015)
  8. Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., Mané, D.: Concrete Problems in AI Safety (2016)
  9. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press, Cambridge (2016)
  10. LeCun, Y., Bottou, L., Bengio, Y., Haffner, P.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
  11. Hochreiter, S., Bengio, Y., Frasconi, P., Schmidhuber, J.: Gradient flow in recurrent nets: the difficulty of learning long-term dependencies. In: Kremer, S.C., Kolen, J.F. (eds.) A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press, Hoboken (2001)
  12. Hinton, G.E.: To recognize shapes, first learn to generate images. Prog. Brain Res. 165, 535–547 (2007)
  13. Bengio, Y.: Learning Deep Architectures for AI. Now Publishers, Boston (2009)
  14. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems (2012)
  15. Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative Adversarial Networks (2014)
  16. He, K., Zhang, X., Ren, S., Sun, J.: Delving deep into rectifiers: surpassing human-level performance on ImageNet classification. In: Proceedings of the IEEE International Conference on Computer Vision (2015)
  17. Russakovsky, O., Deng, J., Su, H., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
  18. Szegedy, C., et al.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2015)
  19. ImageNet Project
  20. Cao, J., et al.: A parallel Adaboost-Backpropagation neural network for massive image dataset classification. Sci. Rep. 6, 38201 (2016)
  21. Oquab, M., Bottou, L., Laptev, I., Sivic, J.: Learning and transferring mid-level image representations using convolutional neural networks. In: IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH (2014)
  22. Marszalek, M., Schmid, C., Harzallah, H., Weijer, J.: Learning object representations for visual object class recognition. In: Visual Recognition Challenge Workshop, ICCV (2007)
  23. Yan, S., Dong, J., Chen, Q., Song, Z., Pan, Y., Xia, W., Huang, Z., Hua, Y., Shen, S.: Generalized hierarchical matching for sub-category aware object classification. In: Visual Recognition Challenge Workshop, ECCV (2012)
  24.
  25. Papert, S., Minsky, M.: Perceptrons: An Introduction to Computational Geometry. MIT Press, Cambridge (1988)
  26. Fukushima, K.: Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol. Cybern. 36, 193–202 (1980)
  27. Srivastava, N., et al.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(1), 1929–1958 (2014)
  28. Zeiler, M.D., Fergus, R.: Visualizing and understanding convolutional networks. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8689, pp. 818–833. Springer, Cham (2014)
  29. Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large-Scale Image Recognition (2014)
  30. Szegedy, C., et al.: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning (2016)
  31. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas (2016)
  32. Kim, Y.-D., Park, E., Yoo, S., Choi, T., Yang, L., Shin, D.: Compression of Deep Convolutional Neural Networks for Fast and Low Power Mobile Applications (2016)

Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

  1. Faculty of Electrical Engineering, Automatic Control and Informatics, Institute of Computer Science, Opole University of Technology, Opole, Poland
  2. Faculty of Economy and Management, Opole, Poland
