Asynchronous, Data-Parallel Deep Convolutional Neural Network Training with Linear Prediction Model for Parameter Transition

  • Ikuro Sato
  • Ryo Fujisaki
  • Yosuke Oyama
  • Akihiro Nomura
  • Satoshi Matsuoka
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10635)

Abstract

Recent studies have revealed that Convolutional Neural Networks that require a very large number of sum-of-product operations but relatively few parameters tend to achieve strong model performance. Asynchronous Stochastic Gradient Descent makes large-scale distributed computation feasible for training such networks. However, asynchrony introduces stale gradients, which are considered to slow down training. In this work, we propose a method that predicts future parameters during training to mitigate the drawback of staleness. We show that the proposed method achieves parameter prediction accuracies high enough to improve the speed of asynchronous training. Experimental results on ImageNet demonstrate that, compared to a synchronous training method, the proposed asynchronous training method reduces the training time needed to reach a given model accuracy by a factor of 1.9 with 256 GPUs used in parallel.
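
To illustrate the idea described above, the sketch below shows linear parameter prediction for staleness compensation in asynchronous SGD: a worker extrapolates its stale parameter copy forward by the expected staleness using an estimate of the recent per-step update, and evaluates its gradient at the predicted point. This is a minimal NumPy sketch under assumed names (predict_future_params, worker_step, recent_update); it is an illustration of the general technique, not the authors' implementation.

import numpy as np


def predict_future_params(w, recent_update, staleness):
    """Linearly extrapolate the parameters `staleness` steps ahead.

    w             -- stale parameter copy held by a worker
    recent_update -- estimate of the per-step parameter change
                     (e.g., a momentum-like running average of updates)
    staleness     -- expected number of server updates applied before
                     this worker's gradient arrives
    """
    return w + staleness * recent_update


def worker_step(w_stale, recent_update, staleness, grad_fn, lr=0.01):
    """Compute one worker's update against the predicted future parameters."""
    w_pred = predict_future_params(w_stale, recent_update, staleness)
    grad = grad_fn(w_pred)   # gradient evaluated at the predicted point
    return -lr * grad        # update sent back to the parameter server


if __name__ == "__main__":
    # Toy demonstration on a quadratic objective ||w||^2 with fixed staleness.
    rng = np.random.default_rng(0)
    w = rng.normal(size=4)
    recent_update = np.zeros(4)
    grad_fn = lambda v: 2.0 * v  # gradient of ||v||^2
    for _ in range(200):
        update = worker_step(w, recent_update, staleness=4, grad_fn=grad_fn)
        recent_update = 0.9 * recent_update + 0.1 * update
        w = w + update
    print(w)  # approaches the optimum at the origin despite the staleness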

Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Ikuro Sato (1)
  • Ryo Fujisaki (1)
  • Yosuke Oyama (2)
  • Akihiro Nomura (2)
  • Satoshi Matsuoka (2)
  1. Denso IT Laboratory, Inc., Tokyo, Japan
  2. Tokyo Institute of Technology, Tokyo, Japan