Abstract
Distributed stochastic gradient descent (SGD) algorithms are widely used to train large-scale deep learning models, but the communication overhead among workers has become a major system bottleneck. Two main categories of gradient compression techniques have been proposed to reduce this overhead: gradient quantization and gradient sparsification. Gradient quantization can achieve a compression ratio of at most 32 with little impact on model convergence accuracy, whereas gradient sparsification can reach much higher compression ratios at the cost of some accuracy loss. To obtain a higher communication compression ratio with minimal loss of model accuracy, we propose a mixed compression strategy named Hybrid Gradient Compression (HGC), which combines the merits of quantization and sparsification. We validate the efficiency of HGC on a GPU cluster by training complex models with millions of parameters (e.g., ResNet, VGG, and LSTM) on the CIFAR-10, CIFAR-100, and Penn Treebank datasets. Our experiments show that HGC achieves a much higher gradient compression ratio at the cost of only a small accuracy loss.
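The hybrid idea sketched in the abstract can be illustrated with a short example. The following is a minimal sketch in PyTorch, not the authors' actual HGC implementation: the function names (hybrid_compress, hybrid_decompress), the top-k sparsification step, and the uniform low-bit quantization of the surviving values are assumptions made only to show how the two compression styles can be chained.

# Minimal sketch, assuming PyTorch; NOT the authors' exact HGC algorithm.
# Idea: sparsify the gradient first (keep only the largest-magnitude
# entries), then quantize the kept values to a low bit width before
# communicating them.
import math
import torch

def hybrid_compress(grad: torch.Tensor, k_ratio: float = 0.01, bits: int = 8):
    """Top-k sparsification followed by uniform low-bit quantization (bits <= 8)."""
    flat = grad.flatten()
    k = max(1, int(k_ratio * flat.numel()))
    # Sparsification: indices of the k largest-magnitude gradient entries.
    _, idx = torch.topk(flat.abs(), k)
    vals = flat[idx]
    # Quantization: map the kept values onto 2**(bits-1) - 1 symmetric levels.
    scale = vals.abs().max().clamp(min=1e-12)
    levels = 2 ** (bits - 1) - 1
    q = torch.round(vals / scale * levels).to(torch.int8)  # int8 holds bits <= 8
    return idx, q, scale

def hybrid_decompress(idx, q, scale, shape, bits: int = 8):
    """Rebuild a dense gradient tensor from the compressed representation."""
    levels = 2 ** (bits - 1) - 1
    flat = torch.zeros(math.prod(shape))
    flat[idx] = q.float() / levels * scale
    return flat.view(shape)

In a data-parallel setting, each worker would compress its local gradient, exchange the (idx, q, scale) triple with the other workers (e.g., via an all-gather), and decompress before applying the averaged update; the hypothetical k_ratio and bits parameters control the trade-off between compression ratio and accuracy that the abstract describes.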
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Hu, K., Wu, C., Zhu, E. (2021). HGC: Hybrid Gradient Compression in Distributed Deep Learning. In: Sun, X., Zhang, X., Xia, Z., Bertino, E. (eds) Advances in Artificial Intelligence and Security. ICAIS 2021. Communications in Computer and Information Science, vol 1422. Springer, Cham. https://doi.org/10.1007/978-3-030-78615-1_2
DOI: https://doi.org/10.1007/978-3-030-78615-1_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-78614-4
Online ISBN: 978-3-030-78615-1