Peer-to-Peer Networking and Applications, Volume 11, Issue 5, pp 1012–1021

Training deep neural network on multiple GPUs with a model averaging method

  • Qiongjie Yao
  • Xiaofei Liao
  • Hai Jin
Article
Part of the following topical collections:
  1. Special Issue on Big Data Networking

Abstract

Deep learning has shown considerable promise in numerous practical machine learning applications. However, training deep learning models is highly time-consuming. To address this problem, many studies design distributed deep learning systems with multiple graphics processing units (GPUs) on a single machine or across machines. Data parallelism is the usual method for exploiting multiple GPUs. However, because of its transfer overhead, this method is not suitable for all deep learning models, such as the fully connected deep neural network (DNN). In this paper, we analyze this transfer overhead and identify parameter synchronization as its key cause. To reduce parameter synchronization, we propose a multi-GPU framework based on model averaging, in which each GPU trains a complete model until convergence and the CPU then averages the models to obtain the final model. The only parameter synchronization occurs once all GPUs have finished training, which dramatically reduces the transfer overhead. Experimental results show that the model averaging method achieves speedups of 1.6x with two GPUs and 1.8x with four GPUs compared with training on a single GPU. Compared with the data parallelism method, it achieves speedups of 17x and 25x on two and four GPUs, respectively.
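
The core idea can be illustrated with a short sketch. The code below is a minimal illustration only, written with PyTorch (the paper does not prescribe this framework), and the helper names train_replica and average_models are assumptions made for the sketch: each GPU trains an independent replica of the full model on its own data shard, and the CPU performs the single parameter synchronization by averaging the learned weights.

    # Minimal sketch of the model-averaging scheme described in the abstract
    # (assumption: PyTorch; helper names are illustrative, not from the paper).
    import copy
    import torch

    def train_replica(base_model, loader, device, epochs=10):
        # Train one full replica on one GPU; "convergence" is approximated
        # here by a fixed number of epochs for simplicity.
        model = copy.deepcopy(base_model).to(device)
        opt = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in loader:
                x, y = x.to(device), y.to(device)
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        return model.cpu()

    def average_models(replicas):
        # The single parameter synchronization: average all replicas' weights
        # on the CPU to produce the final model.
        avg = copy.deepcopy(replicas[0])
        avg_state = avg.state_dict()
        for key in avg_state:
            avg_state[key] = torch.stack(
                [r.state_dict()[key].float() for r in replicas]).mean(dim=0)
        avg.load_state_dict(avg_state)
        return avg

    # Usage (assumed names): shard the data, train one replica per GPU, average once.
    # replicas = [train_replica(dnn, loaders[i], f"cuda:{i}")
    #             for i in range(torch.cuda.device_count())]
    # final_model = average_models(replicas)

Because the replicas exchange no gradients during training, the only cross-device transfer is the one-time copy of each replica's final weights back to the CPU, which is what the abstract identifies as the source of the speedup over data parallelism.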

Keywords

Deep neural network · Multiple GPUs · Data parallelism · Model averaging

Acknowledgments

This paper is supported by the National High-tech Research and Development Program of China (863 Program) under grant No. 2015AA015303, the National Natural Science Foundation of China under grants No. 61322210, 61272408, and 61433019, and the Doctoral Fund of the Ministry of Education of China under grant No. 20130142110048.

Copyright information

© Springer Science+Business Media New York 2017

Authors and Affiliations

  1. Services Computing Technology and System Lab, Cluster and Grid Computing Lab, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
