
A Comprehensive Classification of Deep Learning Libraries

Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 797)

Abstract

Deep learning (DL) networks are composed of multiple processing layers that learn representations of data at multiple levels of abstraction. In recent years, DL networks have significantly advanced the state of the art across domains including speech processing, text mining, pattern recognition, object detection, robotics, and big data analytics. A researcher or practitioner planning to use DL networks for the first time, however, typically faces difficulty in selecting suitable software tools. The present article provides a comprehensive list and taxonomy of the programming languages and software tools currently available for implementing DL networks. Its aim is to make researchers, especially beginners, aware of the various languages and interfaces that can be used to implement deep learning, and to provide a simplified ontological basis for selecting among them.
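
To make the abstract's notion of stacked processing layers concrete, the sketch below defines a small feed-forward network in TensorFlow's Keras API, one of the libraries the paper surveys. It is a minimal illustration only: the layer sizes, activations, and synthetic training data are assumptions chosen for demonstration, not taken from the paper.

    # A small multi-layer network in TensorFlow/Keras (illustrative sketch).
    import numpy as np
    import tensorflow as tf

    # Each Dense layer is one "processing layer"; stacking them lets the
    # network learn representations at increasing levels of abstraction.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),  # low-level features
        tf.keras.layers.Dense(64, activation="relu"),    # higher-level abstractions
        tf.keras.layers.Dense(10, activation="softmax"), # class probabilities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Random placeholder data, used only to show the end-to-end training API.
    x = np.random.rand(256, 784).astype("float32")
    y = np.random.randint(0, 10, size=(256,))
    model.fit(x, y, epochs=1, batch_size=32, verbose=0)

Equivalent models can be expressed in any of the other libraries the paper surveys (e.g., Torch, MXNet, Theano); choosing among them is precisely what the paper's taxonomy is intended to inform.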

Keywords

  • Deep learning
  • Deep learning libraries
  • Machine learning
  • Deep belief network



Acknowledgements

The authors would like to acknowledge financial support from the Horizon 2020 European research project DREAM4CARS (#731593).

Author information


Corresponding author

Correspondence to Hari Mohan Pandey.



Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Pandey, H.M., Windridge, D. (2019). A Comprehensive Classification of Deep Learning Libraries. In: Yang, XS., Sherratt, S., Dey, N., Joshi, A. (eds) Third International Congress on Information and Communication Technology. Advances in Intelligent Systems and Computing, vol 797. Springer, Singapore. https://doi.org/10.1007/978-981-13-1165-9_40


  • DOI: https://doi.org/10.1007/978-981-13-1165-9_40


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-1164-2

  • Online ISBN: 978-981-13-1165-9

  • eBook Packages: Engineering, Engineering (R0)