Abstract
We evaluate the performance of two well-known deep learning frameworks, Caffe and TensorFlow, on two different types of computing devices, a GPU and a NUMA CPU architecture, using two popular network models as benchmarks: AlexNet and GoogLeNet. We vary batch sizes between training runs and estimate the average training time per iteration and per image for each configuration. Both frameworks present similar times for the AlexNet model, while TensorFlow outperforms Caffe on the GoogLeNet model, with training times up to 2 times lower. The work also shows the impact of the frameworks' lack of support for NUMA architectures, and reports a problem observed in the loss computation of the Caffe framework.
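For context, the measurement the abstract describes reduces to timing a fixed number of training iterations and dividing by the batch size. The sketch below is a minimal illustration of that methodology, not the authors' actual harness: `run_iteration`, `num_iters`, and `warmup` are hypothetical names, and the warm-up exclusion is our assumption (it keeps one-time costs from skewing the average).

```python
import time

def time_training(run_iteration, batch_size, num_iters=50, warmup=5):
    # Hypothetical harness: run_iteration executes one training step
    # on a batch (e.g. sess.run(train_op) in TensorFlow 1.x or
    # solver.step(1) in pycaffe); these names are illustrative only.
    for _ in range(warmup):      # assumed warm-up: exclude one-time costs
        run_iteration()          # such as graph building or cuDNN autotuning
    start = time.perf_counter()
    for _ in range(num_iters):
        run_iteration()
    elapsed = time.perf_counter() - start
    per_iter = elapsed / num_iters        # average time per iteration
    per_image = per_iter / batch_size     # each iteration processes one batch
    return per_iter, per_image
```

Since each iteration processes exactly one batch, the time per image is the time per iteration divided by the batch size, which is what makes results measured at different batch sizes comparable.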
Acknowledgments
We thank CNPq for supporting the development of this work, and NVIDIA for the donation of the NVIDIA GTX Titan X GPU used in our experiments.
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Trindade, R.G., Lima, J.V.F., Charão, A.S.: Performance evaluation of deep learning frameworks over different architectures. In: Senger, H., et al. (eds.) High Performance Computing for Computational Science – VECPAR 2018. LNCS, vol. 11333. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-15996-2_7
DOI: https://doi.org/10.1007/978-3-030-15996-2_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-15995-5
Online ISBN: 978-3-030-15996-2
eBook Packages: Computer Science (R0)