
A Comprehensive Classification of Deep Learning Libraries

  • Conference paper
  • In: Third International Congress on Information and Communication Technology

Part of the book series: Advances in Intelligent Systems and Computing (AISC, volume 797)

Abstract

Deep learning (DL) networks are composed of multiple processing layers that learn representations of data at multiple levels of abstraction. In recent years, DL networks have significantly advanced the state of the art across domains including speech processing, text mining, pattern recognition, object detection, robotics, and big data analytics. A researcher or practitioner planning to use DL networks for the first time typically faces difficulty in selecting suitable software tools. The present article provides a comprehensive list and taxonomy of current programming languages and software tools that can be used to implement DL networks. Its motivation is to create awareness among researchers, especially beginners, of the various languages and interfaces available for implementing deep learning, and to provide a simplified ontological basis for selecting among them.
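
The taxonomy itself appears in the full text; as a minimal sketch of the kind of implementation the surveyed tools enable, the snippet below stacks several processing layers into a feed-forward network using TensorFlow/Keras, one widely used DL library of the kind the article classifies. It is purely illustrative and not drawn from the paper; the layer widths and the 784-input, 10-class task shape are assumptions.

    # Minimal illustrative sketch (not from the paper): a feed-forward
    # network of stacked processing layers in TensorFlow/Keras.
    # Layer widths and the 784-input / 10-class shape are assumptions.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()  # prints the layer stack and parameter counts

Each Dense layer transforms the representation produced by the layer before it, which is what the abstract means by learning representations at multiple levels of abstraction.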



Acknowledgements

The authors would like to acknowledge financial support from the Horizon 2020 European Research project DREAM4CARS (#731593).

Author information

Correspondence to Hari Mohan Pandey.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Pandey, H.M., Windridge, D. (2019). A Comprehensive Classification of Deep Learning Libraries. In: Yang, XS., Sherratt, S., Dey, N., Joshi, A. (eds) Third International Congress on Information and Communication Technology. Advances in Intelligent Systems and Computing, vol 797. Springer, Singapore. https://doi.org/10.1007/978-981-13-1165-9_40


  • DOI: https://doi.org/10.1007/978-981-13-1165-9_40

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-1164-2

  • Online ISBN: 978-981-13-1165-9

  • eBook Packages: Engineering, Engineering (R0)
