A Comprehensive Classification of Deep Learning Libraries

  • Hari Mohan Pandey
  • David Windridge
Conference paper
Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 797)

Abstract

Deep learning (DL) networks are composed of multiple processing layers that learn representations of data at multiple levels of abstraction. In recent years, DL networks have significantly advanced the state of the art across domains including speech processing, text mining, pattern recognition, object detection, robotics, and big data analytics. A researcher or practitioner planning to use DL networks for the first time, however, typically faces difficulty in selecting suitable software tools. The present article provides a comprehensive list and taxonomy of the current programming languages and software tools that can be used to implement DL networks. Its motivation is to make researchers, especially beginners, aware of the various languages and interfaces available for implementing deep learning, and to provide a simplified ontological basis for selecting between them.

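For illustration only (this example is not part of the original paper): the deep-learning frameworks the article classifies share the core idea of stacked processing layers. The minimal sketch below uses TensorFlow's Keras API as one representative choice; the layer widths and the ten-class output are placeholder assumptions, not values taken from the article.

    # Minimal sketch of a multi-layer (deep) network in TensorFlow's Keras API.
    # Illustrative only: layer widths and the 10-class output are placeholders.
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(784,)),                     # raw input features
        tf.keras.layers.Dense(128, activation="relu"),    # first learned representation
        tf.keras.layers.Dense(64, activation="relu"),     # higher level of abstraction
        tf.keras.layers.Dense(10, activation="softmax"),  # class probabilities
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()

Equivalent definitions can be written in the other libraries the article surveys (e.g., Torch or MXNet); choosing among such near-interchangeable interfaces is precisely the selection problem the taxonomy addresses.
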
Keywords

Deep learning · Deep learning libraries · Machine learning · Deep belief network

Acknowledgements

The authors would like to acknowledge financial support from the Horizon 2020 European research project DREAM4CARS (#731593).

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Department of Computer Science, School of Science and Technology, Middlesex University, London, UK
