Abstract
The goal of this paper is to compare the most commonly used first-order optimization techniques with a proposed enhanced gradient-descent-based optimization method. Gradient descent is the simplest optimization technique; the other methods address the optimization difficulties that arise when training deep neural networks (DNNs). The technique most commonly used in deep neural network training is Stochastic Gradient Descent (SGD). SGD, however, suffers from high variance in its stochastic gradient estimates, which slows convergence and degrades performance. To address this, a non-convex optimization technique with faster convergence, based on an enhanced stochastic variance reduced ascension approach, is implemented. It improves performance in terms of faster convergence.
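The variance-reduction idea the abstract alludes to can be illustrated with the classic SVRG update, in which each stochastic gradient is corrected by a control variate anchored at a periodic full-gradient snapshot. This is only a sketch of the standard scheme, not the authors' enhanced "ascension" variant; the toy least-squares problem, the function names, and all hyperparameters below are illustrative assumptions.

```python
import numpy as np

def svrg(grad_i, n, w0, lr=0.1, outer_iters=20, inner_iters=50, seed=0):
    """Minimal SVRG sketch (illustrative, not the paper's exact algorithm).

    grad_i(w, i): gradient of the i-th sample's loss at w
    n:            number of samples
    """
    rng = np.random.default_rng(seed)
    w = w0.copy()
    for _ in range(outer_iters):
        w_snap = w.copy()
        # Full gradient at the snapshot point (computed once per outer loop).
        mu = np.mean([grad_i(w_snap, i) for i in range(n)], axis=0)
        for _ in range(inner_iters):
            i = rng.integers(n)
            # Variance-reduced stochastic gradient: the control variate
            # grad_i(w_snap, i) - mu shrinks the estimate's variance
            # as w approaches w_snap, allowing a constant step size.
            g = grad_i(w, i) - grad_i(w_snap, i) + mu
            w -= lr * g
    return w

# Toy noiseless least-squares problem (assumed for demonstration only).
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
grad_fn = lambda w, i: X[i] * (X[i] @ w - y[i])
w_hat = svrg(grad_fn, n=100, w0=np.zeros(3))
```

Because the correction term vanishes near the snapshot, SVRG avoids the decaying step-size schedule plain SGD needs to control variance, which is the source of the faster convergence the abstract claims.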
© 2020 Springer Nature Singapore Pte Ltd.
Cite this paper
Shikalgar, A., Sonavane, S. (2020). An Enhanced Stochastic Gradient Descent Variance Reduced Ascension Optimization Algorithm for Deep Neural Networks. In: Iyer, B., Rajurkar, A., Gudivada, V. (eds) Applied Computer Vision and Image Processing. Advances in Intelligent Systems and Computing, vol 1155. Springer, Singapore. https://doi.org/10.1007/978-981-15-4029-5_38
DOI: https://doi.org/10.1007/978-981-15-4029-5_38
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-4028-8
Online ISBN: 978-981-15-4029-5
eBook Packages: Engineering (R0)