Abstract
Deep learning (DL) networks are composed of multiple processing layers that learn representations of data at multiple levels of abstraction. In recent years, DL networks have significantly advanced the state of the art across domains including speech processing, text mining, pattern recognition, object detection, robotics, and big data analytics. A researcher or practitioner planning to use DL networks for the first time, however, typically faces difficulty in selecting suitable software tools. This article provides a comprehensive list and taxonomy of the current programming languages and software tools that can be used to implement DL networks. Its aim is to make researchers, especially beginners, aware of the various languages and interfaces available for implementing deep learning and to provide a simplified ontological basis for choosing among them.
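To make the notion of "multiple processing layers" concrete, the following minimal sketch (ours, not taken from the paper) stacks three fully connected layers, each learning a progressively more abstract representation of its input. It assumes the TensorFlow library with its bundled Keras API; the layer widths (128, 64, 10) and the 784-dimensional input are illustrative choices, not drawn from the paper.

    # Minimal multi-layer network sketch (illustrative assumptions: TensorFlow/Keras,
    # 784-dimensional inputs, 10 output classes).
    import tensorflow as tf

    model = tf.keras.Sequential([
        # First processing layer: low-level features of the raw input
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        # Second layer: a more abstract re-representation of those features
        tf.keras.layers.Dense(64, activation="relu"),
        # Output layer: class probabilities over 10 categories
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    model.summary()

Most of the libraries classified in the paper expose an analogous layer-stacking interface; they differ mainly in host language, level of control, and execution backend, which is precisely the choice the proposed taxonomy is meant to simplify.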
Acknowledgements
The authors would like to acknowledge financial support from the Horizon 2020 European Research project DREAM4CARS (#731593).
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
Cite this paper
Pandey, H.M., Windridge, D. (2019). A Comprehensive Classification of Deep Learning Libraries. In: Yang, XS., Sherratt, S., Dey, N., Joshi, A. (eds) Third International Congress on Information and Communication Technology. Advances in Intelligent Systems and Computing, vol 797. Springer, Singapore. https://doi.org/10.1007/978-981-13-1165-9_40
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-1164-2
Online ISBN: 978-981-13-1165-9