Abstract
Learning problems are invariably affected by a certain amount of risk, which is measured through risk functionals whose empirical estimates consist of averages over tuples of data points. Motivated by this, the present work focuses on a stochastic approximation method for solving risk minimization problems. In the large-dataset setting, gradient estimates are obtained by sampling tuples of data points with replacement. Building on this, a mathematical proposition is presented that accounts for the considerable impact of this strategy on the generalization ability of prediction models trained with stochastic gradient descent with momentum. The method achieves an optimal trade-off between accuracy and cost. Experimental results on area under the curve (AUC) maximization and metric learning provide strong support for the approach.
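To make the sampling scheme concrete, here is a minimal sketch, not the paper's implementation: the full empirical risk for AUC maximization averages a pairwise loss over all positive-negative pairs, and each gradient step instead uses a fixed number of pairs drawn with replacement, combined with a momentum update. All names, the hinge surrogate, and the hyperparameter values below are illustrative assumptions.

```python
import numpy as np

def sgd_momentum_pairwise(X_pos, X_neg, lr=0.01, beta=0.9,
                          batch_pairs=64, steps=1000, seed=0):
    """SGD with momentum on a pairwise (U-statistic) risk estimate.

    The complete empirical risk averages a hinge surrogate over all
    positive-negative pairs; each step below instead estimates the
    gradient from `batch_pairs` pairs sampled with replacement
    (an incomplete U-statistic), as the abstract describes.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X_pos.shape[1])   # linear scorer s(x) = w @ x
    v = np.zeros_like(w)           # momentum buffer
    for _ in range(steps):
        i = rng.integers(len(X_pos), size=batch_pairs)  # indices drawn with replacement
        j = rng.integers(len(X_neg), size=batch_pairs)
        diff = X_pos[i] - X_neg[j]                      # pair differences
        active = diff @ w < 1.0                         # hinge: max(0, 1 - w.(x+ - x-))
        grad = -diff[active].sum(axis=0) / batch_pairs  # incomplete gradient estimate
        v = beta * v + grad                             # momentum accumulation
        w -= lr * v
    return w
```

On held-out data, the AUC of the returned scorer can be estimated as the fraction of positive-negative pairs with w @ (x_pos - x_neg) > 0.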
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Chaudhuri, A. (2019). The Minimization of Empirical Risk Through Stochastic Gradient Descent with Momentum Algorithms. In: Silhavy, R. (ed.) Artificial Intelligence Methods in Intelligent Algorithms. CSOC 2019. Advances in Intelligent Systems and Computing, vol 985. Springer, Cham. https://doi.org/10.1007/978-3-030-19810-7_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-19809-1
Online ISBN: 978-3-030-19810-7
eBook Packages: Intelligent Technologies and Robotics