Inexact restoration with subsampled trust-region methods for finite-sum minimization

Published in Computational Optimization and Applications (2020)

Abstract

Convex and nonconvex finite-sum minimization arises in many scientific computing and machine learning applications. Recently, first-order and second-order methods in which the objective function, gradient and Hessian are approximated by randomly sampling components of the sum have received great attention. We propose a new trust-region method that employs suitable approximations of the objective function, gradient and Hessian built via random subsampling techniques. The choice of the sample size is deterministic and governed by the inexact restoration approach. We discuss local and global convergence properties for finding approximate first- and second-order optimal points, together with function-evaluation complexity results. Numerical experience shows that the new procedure is more efficient, in terms of overall computational cost, than the standard trust-region scheme with subsampled Hessians.
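To make the setting concrete, the following minimal Python sketch shows one iteration of a generic subsampled trust-region step for a finite sum of n components. It is an illustration only, not the paper's algorithm: the callables fi, gi, hi, the Cauchy-point model minimizer, and the sample-size growth heuristic at the end are all assumptions standing in for the deterministic inexact-restoration rule analyzed in the article.

```python
import numpy as np

def subsampled_tr_step(fi, gi, hi, x, n, sample_size, delta,
                       eta=0.1, rng=None):
    """One step of a generic subsampled trust-region iteration.

    fi(i, x), gi(i, x), hi(i, x) are hypothetical callables returning
    the i-th component function value, gradient and Hessian of the sum.
    """
    rng = rng or np.random.default_rng(0)

    # Estimate f, g, B by averaging over a random subsample of components.
    idx = rng.choice(n, size=sample_size, replace=False)
    f = np.mean([fi(i, x) for i in idx])
    g = np.mean([gi(i, x) for i in idx], axis=0)
    B = np.mean([hi(i, x) for i in idx], axis=0)

    gnorm = np.linalg.norm(g)
    if gnorm == 0.0:  # subsampled model is stationary; nothing to do
        return x, delta, sample_size

    # Cauchy step: minimize the quadratic model along -g within radius delta.
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0.0 else min(gnorm**3 / (delta * gBg), 1.0)
    s = -tau * (delta / gnorm) * g

    # Ratio of actual to predicted reduction, both on the subsampled model.
    pred = -(g @ s + 0.5 * s @ (B @ s))
    ared = f - np.mean([fi(i, x + s) for i in idx])
    rho = ared / pred if pred > 0.0 else -np.inf

    if rho >= eta:                 # successful step: accept, enlarge radius
        x, delta = x + s, 2.0 * delta
    else:                          # unsuccessful step: shrink radius
        delta = 0.5 * delta
        # Crude stand-in for the paper's inexact-restoration rule:
        # after a rejected step, trust the estimates less and sample more.
        sample_size = min(int(np.ceil(1.1 * sample_size)), n)
    return x, delta, sample_size
```

The key difference from this sketch is that in the paper the sample size is not adjusted by an ad hoc heuristic: the inexact restoration framework treats the sampling error as an infeasibility measure and updates the sample size deterministically, which is what enables the convergence and complexity guarantees stated in the abstract.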




Acknowledgements

Dedicated with friendship to José Mario Martínez for his outstanding scientific contributions.

Author information


Corresponding author

Correspondence to Nataša Krejić.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

S. Bellavia, B. Morini: Members of the INdAM Research Group GNCS.

The work of Bellavia and Morini was supported by the Gruppo Nazionale per il Calcolo Scientifico (GNCS-INdAM) of Italy. The work of the second author was supported by the Serbian Ministry of Education, Science and Technological Development, Grant No. 451-03-68/2020-14/200125. Part of the research was conducted during a visit of the second author to the Dipartimento di Ingegneria Industriale, supported by the Piano di Internazionalizzazione of the Università degli Studi di Firenze.


About this article


Cite this article

Bellavia, S., Krejić, N. & Morini, B. Inexact restoration with subsampled trust-region methods for finite-sum minimization. Comput Optim Appl 76, 701–736 (2020). https://doi.org/10.1007/s10589-020-00196-w

