Abstract
Convolutional neural networks (CNNs) have shown very appealing performance for many computer vision applications. CNNs are generally trained with stochastic gradient descent (SGD)-based optimization techniques, and adaptive momentum-based SGD optimizers are the recent trend. However, the existing optimizers do not maintain a zero mean in the first-order moment and struggle during optimization. In this paper, we propose a moment centralization-based SGD optimizer for CNNs. Specifically, we impose a zero-mean constraint on the first-order moment explicitly. The proposed moment centralization is generic and can be integrated with any existing adaptive momentum-based optimizer. The idea is tested with three state-of-the-art optimizers, namely Adam, RAdam, and AdaBelief, on the benchmark CIFAR10, CIFAR100, and TinyImageNet datasets for image classification. The performance of the existing optimizers generally improves when integrated with the proposed moment centralization. Further, the results of the proposed moment centralization are also better than those of the existing gradient centralization. An analysis on a toy example shows that the proposed method leads to a shorter and smoother optimization trajectory. The source code is publicly available at https://github.com/sumanthsadhu/MC-optimizer.
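A minimal sketch of the idea follows, assuming the centralization simply subtracts the mean of the first-order moment of each multi-dimensional parameter tensor at every step; the exact rule, and whether the centralized moment is written back into the buffer, should be taken from the official repository above. The function and variable names here are illustrative only, not the authors' implementation.

```python
import torch

def adam_mc_step(param, m, v, step, lr=1e-3, betas=(0.9, 0.999), eps=1e-8):
    """One Adam step with an (assumed) moment-centralization term.
    Sketch only; the authors' exact formulation is in their repository."""
    grad = param.grad
    beta1, beta2 = betas

    # Standard Adam exponential moving averages of the gradient and its square.
    m.mul_(beta1).add_(grad, alpha=1 - beta1)
    v.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)

    # Moment centralization (assumption): impose a zero mean on the first-order
    # moment of each multi-dimensional parameter tensor. Here the centralized
    # moment is used only for the update, not written back into the buffer.
    m_c = m - m.mean() if m.dim() > 1 else m

    # Bias correction and parameter update, as in Adam.
    m_hat = m_c / (1 - beta1 ** step)
    v_hat = v / (1 - beta2 ** step)
    param.data.addcdiv_(m_hat, v_hat.sqrt().add_(eps), value=-lr)

# Toy usage on a single weight matrix (illustrative only).
w = torch.randn(4, 3, requires_grad=True)
m_buf, v_buf = torch.zeros_like(w), torch.zeros_like(w)
for t in range(1, 11):
    loss = (w ** 2).sum()
    loss.backward()
    adam_mc_step(w, m_buf, v_buf, step=t)
    w.grad = None
```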
References
Basha, S.S., Ghosh, S., Babu, K.K., Dubey, S.R., Pulabaigari, V., Mukherjee, S.: RCCNet: An efficient convolutional neural network for histological routine colon cancer nuclei classification. In: 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV). pp. 1222–1227. IEEE (2018)
Bernstein, J., Wang, Y.X., Azizzadenesheli, K., Anandkumar, A.: signSGD: Compressed optimisation for non-convex problems. In: International Conference on Machine Learning. pp. 560–569 (2018)
Bottou, L.: Large-scale machine learning with stochastic gradient descent. In: Proceedings of the COMPSTAT, pp. 177–186 (2010)
Chen, Z., Badrinarayanan, V., Lee, C.Y., Rabinovich, A.: GradNorm: Gradient normalization for adaptive loss balancing in deep multitask networks. In: International Conference on Machine Learning. pp. 794–803. PMLR (2018)
Choi, S., Kim, T., Jeong, M., Park, H., Kim, C.: Meta batch-instance normalization for generalizable person re-identification. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 3425–3435 (2021)
Dubey, S.R., Chakraborty, S., Roy, S.K., Mukherjee, S., Singh, S.K., Chaudhuri, B.B.: diffGrad: An optimization method for convolutional neural networks. IEEE Transactions on Neural Networks and Learning Systems 31(11), 4500–4511 (2019)
Dubey, S., Basha, S., Singh, S., Chaudhuri, B.: Curvature injected adaptive momentum optimizer for convolutional neural networks. arXiv preprint arXiv:2109.12504 (2021)
Duchi, J., Hazan, E., Singer, Y.: Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 12(Jul), 2121–2159 (2011)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. pp. 770–778 (2016)
Hinton, G., Srivastava, N., Swersky, K.: Neural networks for machine learning. Lecture 6a overview of mini-batch gradient descent course (2012)
Huang, H., Wang, C., Dong, B.: Nostalgic Adam: Weighting more of the past gradients when designing the adaptive learning rate. In: Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. pp. 2556–2562 (2019)
Ioffe, S., Szegedy, C.: Batch normalization: Accelerating deep network training by reducing internal covariate shift. In: International Conference on Machine Learning. pp. 448–456. PMLR (2015)
Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. In: International Conference on Learning Representations (2015)
Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images. Tech. Report (2009)
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Adv. Neural Inf. Process. Syst. 25, 1097–1105 (2012)
Le, Y., Yang, X.: Tiny imagenet visual recognition challenge. CS 231N 7(7), 3 (2015)
LeCun, Y., Bengio, Y., Hinton, G.: Deep learning. Nature 521(7553), 436–444 (2015)
Liu, L., Jiang, H., He, P., Chen, W., Liu, X., Gao, J., Han, J.: On the variance of the adaptive learning rate and beyond. In: International Conference on Learning Representations (2019)
Repala, V.K., Dubey, S.R.: Dual CNN models for unsupervised monocular depth estimation. In: International Conference on Pattern Recognition and Machine Intelligence. pp. 209–217. Springer (2019)
Roy, S., Paoletti, M., Haut, J., Dubey, S., Kar, P., Plaza, A., Chaudhuri, B.: AngularGrad: A new optimization technique for angular convergence of convolutional neural networks. arXiv preprint arXiv:2105.10190 (2021)
Salimans, T., Kingma, D.P.: Weight normalization: A simple reparameterization to accelerate training of deep neural networks. Adv. Neural Inf. Process. Syst. 29, 901–909 (2016)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. In: International Conference on Learning Representations (2015)
Singh, D., Singh, B.: Investigating the impact of data normalization on classification performance. Appl. Soft Comput. 97, 105524 (2020)
Srivastava, Y., Murali, V., Dubey, S.R.: Hard-mining loss based convolutional neural network for face recognition. In: International Conference on Computer Vision and Image Processing. pp. 70–80. Springer (2020)
Sutskever, I., Martens, J., Dahl, G., Hinton, G.: On the importance of initialization and momentum in deep learning. In: Proceedings of the International Conference on Machine Learning. pp. 1139–1147 (2013)
Yong, H., Huang, J., Hua, X., Zhang, L.: Gradient centralization: A new optimization technique for deep neural networks. In: European Conference on Computer Vision. pp. 635–652. Springer (2020)
Zhuang, J., Tang, T., Ding, Y., Tatikonda, S.C., Dvornek, N., Papademetris, X., Duncan, J.: AdaBelief optimizer: Adapting stepsizes by the belief in observed gradients. Adv. Neural Inf. Process. Syst. 33 (2020)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Sadu, S., Dubey, S.R., Sreeja, S.R. (2023). Moment Centralization-Based Gradient Descent Optimizers for Convolutional Neural Networks. In: Tistarelli, M., Dubey, S.R., Singh, S.K., Jiang, X. (eds) Computer Vision and Machine Intelligence. Lecture Notes in Networks and Systems, vol 586. Springer, Singapore. https://doi.org/10.1007/978-981-19-7867-8_5
DOI: https://doi.org/10.1007/978-981-19-7867-8_5
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-7866-1
Online ISBN: 978-981-19-7867-8
eBook Packages: Intelligent Technologies and Robotics