Benchmarking Deep Neural Network Training Using Multi- and Many-Core Processors

  • Conference paper
Computer Information Systems and Industrial Management (CISIM 2020)

Abstract

In this paper we provide a thorough benchmarking of deep neural network (DNN) training on modern multi- and many-core Intel processors in order to assess performance differences across various deep learning and parallel computing parameters. We present DNN training performance for Alexnet, Googlenet, Googlenet_v2, and Resnet_50, for various engines used by the deep learning framework and for various batch sizes. Furthermore, we measure results for various numbers of threads, with ranges depending on the given processor(s), as well as for compact and scatter thread affinities. Based on the results, we formulate conclusions regarding optimal parameters and relative performance, which can serve as hints for researchers training similar networks on modern processors.
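The parameter sweep described in the abstract (thread counts and compact vs. scatter affinities) can be sketched as a small run script. This is only an illustrative sketch, not the authors' actual harness: the thread counts are arbitrary examples, and the training invocation is replaced by an `echo` placeholder to be substituted with the framework's real training command. With Intel's OpenMP runtime, thread placement is typically controlled via the `KMP_AFFINITY` environment variable.

```shell
#!/bin/sh
# Hypothetical benchmarking sweep over thread counts and thread affinities.
# OMP_NUM_THREADS sets the number of OpenMP threads; KMP_AFFINITY
# (Intel OpenMP runtime) selects compact or scatter placement on cores.
for threads in 4 8 16; do
  for affinity in compact scatter; do
    # Placeholder: replace `echo ...` with the actual training command,
    # e.g. the framework's timing/benchmark invocation for a given model.
    OMP_NUM_THREADS=$threads KMP_AFFINITY=$affinity \
      echo "run: threads=$threads affinity=$affinity"
  done
done
```

Compact affinity packs threads onto neighboring hardware contexts (favoring cache sharing), while scatter spreads them across cores (favoring memory bandwidth); which wins depends on the network and batch size, which is precisely what such a sweep measures.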


Notes

  1. http://ai-benchmark.com/.


Acknowledgments

This work is partially supported by Intel and the Intel Labs Academic Compute Environment. The work was also partially performed within the statutory activities of the Dept. of Computer Architecture, Faculty of ETI, Gdańsk University of Technology.

Author information

Correspondence to Paweł Czarnul.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Jabłońska, K., Czarnul, P. (2020). Benchmarking Deep Neural Network Training Using Multi- and Many-Core Processors. In: Saeed, K., Dvorský, J. (eds) Computer Information Systems and Industrial Management. CISIM 2020. Lecture Notes in Computer Science, vol 12133. Springer, Cham. https://doi.org/10.1007/978-3-030-47679-3_20


  • DOI: https://doi.org/10.1007/978-3-030-47679-3_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-47678-6

  • Online ISBN: 978-3-030-47679-3

  • eBook Packages: Computer Science, Computer Science (R0)
