Global convergence rate analysis of unconstrained optimization methods based on probabilistic models

Abstract

We present global convergence rates for a line-search method based on random first-order models and directions whose quality is ensured only with certain probability. We show that in terms of the order of the accuracy, the evaluation complexity of such a method is the same as that of its counterparts that use deterministic, accurate models; the use of probabilistic models increases the complexity only by a constant, which depends on the probability of the models being good. We particularize and improve these results in the convex and strongly convex cases. We also analyze a probabilistic cubic regularization variant that allows approximate probabilistic second-order models, and show improved complexity bounds compared to probabilistic first-order methods; again, as a function of the accuracy, the probabilistic cubic regularization bounds are of the same (optimal) order as in the deterministic case.
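
To make the flavor of such a scheme concrete, here is a minimal, illustrative sketch of a line search driven by random first-order models. It is not the paper's algorithm: the helper grad_estimate and the constants alpha0, gamma, theta are assumptions chosen for the example.

```python
import numpy as np

def probabilistic_line_search(f, grad_estimate, x0,
                              alpha0=1.0, gamma=2.0, theta=0.5, n_iters=200):
    """Backtracking-style line search driven by random gradient models.
    Each iteration draws a fresh model gradient; the trial step is
    accepted only under an Armijo-type sufficient-decrease test on the
    true function values, otherwise the step size is shrunk and a new
    model is drawn at the same point."""
    x = np.asarray(x0, dtype=float)
    alpha = alpha0
    for _ in range(n_iters):
        g = grad_estimate(x)                  # model gradient: good only with some probability
        x_trial = x - alpha * g
        if f(x_trial) <= f(x) - theta * alpha * np.dot(g, g):
            x, alpha = x_trial, gamma * alpha # successful step: accept, grow step size
        else:
            alpha /= gamma                    # unsuccessful: shrink and resample
    return x

# Example: f(x) = 0.5 * ||x||^2 with a gradient oracle that is exact
# with probability 0.8 and pure noise otherwise.
rng = np.random.default_rng(0)
f = lambda x: 0.5 * np.dot(x, x)
grad = lambda x: x if rng.random() < 0.8 else rng.standard_normal(x.shape)
print(probabilistic_line_search(f, grad, np.ones(5)))
```

Because the sufficient-decrease test uses true function values, a bad model can only trigger an unsuccessful (shrinking) iteration, which is intuitively why such models affect the complexity bound only through a constant depending on the probability of the models being good.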


Notes

  1. Note that throughout, \(f(x^k)\ne f_k\), since \(f_k=F_k(\omega_k)\) is a related measure of progress towards optimality.

  2. Note that a recently proposed cubic regularization variant [2] can dispense with the approximate global minimization condition altogether while maintaining the optimal complexity bound of ARC. A probabilistic variant of [2] can be constructed similarly to probabilistic ARC, and our analysis here can be extended to provide complexity bounds of the same order; a sketch of the underlying cubic model step follows these notes.
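
For context on the cubic regularization discussion above, the ARC-type model for a step \(s\) is \(m(s) = f(x) + g^\top s + \tfrac{1}{2} s^\top H s + \tfrac{\sigma}{3}\|s\|^3\) [7, 18]. The sketch below (assuming NumPy/SciPy are available; the function name cubic_regularization_step is hypothetical) computes an approximate minimizer of this model with a generic local solver, in the spirit of the note that only approximate model minimization is required.

```python
import numpy as np
from scipy.optimize import minimize

def cubic_regularization_step(g, H, sigma):
    """Approximately minimize the cubic-regularized model
        m(s) = g^T s + 0.5 * s^T H s + (sigma / 3) * ||s||^3
    over s (the constant f(x) is dropped). ARC-type methods need
    only an approximate minimizer, so a generic local solver
    warm-started along steepest descent suffices for a sketch."""
    m = lambda s: g @ s + 0.5 * s @ (H @ s) + (sigma / 3.0) * np.linalg.norm(s) ** 3
    s0 = -g / max(np.linalg.norm(g), 1.0)  # steepest-descent warm start
    return minimize(m, s0).x

# One step on a model with an indefinite Hessian: the cubic term
# keeps the subproblem bounded below even though H is not.
g = np.array([1.0, -2.0])
H = np.array([[2.0, 0.0], [0.0, -1.0]])
print(cubic_regularization_step(g, H, sigma=1.0))
```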

References

  1. Bandeira, A., Scheinberg, K., Vicente, L.N.: Convergence of trust-region methods based on probabilistic models. SIAM J. Optim. 24, 1238–1264 (2014)

  2. Birgin, E.G., Gardenghi, J.L., Martínez, J.M., Santos, S.A., Toint, P.L.: Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models. Technical report naXys-05-2015, Department of Mathematics, University of Namur (2015)

  3. Byrd, R.H., Nocedal, J., Oztoprak, F.: An inexact successive quadratic approximation method for convex \(\ell_1\)-regularized optimization. Technical report (2013)

  4. Byrd, R.H., Chin, G.M., Nocedal, J., Wu, Y.: Sample size selection in optimization methods for machine learning. Math. Program. 134, 127–155 (2012)

  5. Cartis, C., Gould, N.I.M., Toint, P.L.: Optimal Newton-type methods for nonconvex smooth optimization problems. Technical report, Optimization Online (2011)

  6. Cartis, C., Gould, N.I.M., Toint, P.L.: On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization. SIAM J. Optim. 22, 66–86 (2012)

  7. Cartis, C., Gould, N.I.M., Toint, P.L.: Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results. Math. Program. 127, 245–295 (2011)

  8. Cartis, C., Gould, N.I.M., Toint, P.L.: Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity. Math. Program. 130, 295–319 (2011)

  9. Chen, R.: Stochastic derivative-free optimization of noisy functions. Ph.D. thesis, Department of Industrial and Systems Engineering, Lehigh University, Bethlehem, USA (2015)

  10. Chen, R., Menickelly, M., Scheinberg, K.: Stochastic optimization using a trust-region method and random models. Technical report, ISE Dept., Lehigh University

  11. Devolder, O., Glineur, F., Nesterov, Y.: First-order methods of smooth convex optimization with inexact oracle. Math. Program. 146, 37–75 (2014)

  12. Ghadimi, S., Lan, G.: Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM J. Optim. 23, 2341–2368 (2013)

  13. Gratton, S., Royer, C.W., Vicente, L.N., Zhang, Z.: Direct search based on probabilistic descent. SIAM J. Optim. 25, 1515–1541 (2015)

  14. Gratton, S., Royer, C.W., Vicente, L.N., Zhang, Z.: Complexity and global rates of trust-region methods based on probabilistic models. Technical report 17-09, Dept. Mathematics, Univ. Coimbra (2017)

  15. Lee, J.D., Sun, Y., Saunders, M.A.: Proximal Newton-type methods for convex optimization. In: NIPS (2012)

  16. Nesterov, Y.: Introductory Lectures on Convex Optimization. Kluwer, Dordrecht (2004)

  17. Nesterov, Y.: Random gradient-free minimization of convex functions. Technical report 2011/1, CORE (2011)

  18. Nesterov, Y., Polyak, B.T.: Cubic regularization of Newton method and its global performance. Math. Program. 108, 177–205 (2006)

  19. Pasupathy, R., Glynn, P.W., Ghosh, S., Hashemi, F.: On sampling rates in stochastic recursion (under review) (2016)

  20. Robbins, H., Monro, S.: A stochastic approximation method. Ann. Math. Stat. 22, 400–407 (1951)

  21. Schmidt, M.W., Roux, N.L., Bach, F.: Convergence rates of inexact proximal-gradient methods for convex optimization. In: NIPS, pp. 1458–1466 (2011)

  22. Schmidt, M.W., Roux, N.L., Bach, F.: Minimizing finite sums with the stochastic average gradient. CoRR, arXiv:1309.2388 (2013)

  23. Shiryaev, A.: Probability. Graduate Texts in Mathematics. Springer, New York (1995)

  24. Spall, J.: Multivariate stochastic approximation using a simultaneous perturbation gradient approximation. IEEE Trans. Autom. Control 37, 332–341 (1992)


Acknowledgements

We would like to thank Alexander Stolyar for helpful discussions on stochastic processes. We would also like to thank Zaikun Zhang, who was instrumental in helping us significantly simplify the analysis of the stochastic process in Sect. 2.

Author information

Correspondence to K. Scheinberg.

Additional information

The work of C. Cartis was partially supported by the Oxford University EPSRC Platform Grant EP/I01893X/1. The work of K. Scheinberg was partially supported by NSF Grants DMS 10-16571, DMS 13-19356, CCF-1320137, AFOSR Grant FA9550-11-1-0239, and DARPA Grant FA 9550-12-1-0406 negotiated by AFOSR.



Cite this article

Cartis, C., Scheinberg, K. Global convergence rate analysis of unconstrained optimization methods based on probabilistic models. Math. Program. 169, 337–375 (2018). https://doi.org/10.1007/s10107-017-1137-4


Keywords

  • Line-search methods
  • Cubic regularization methods
  • Random models
  • Global convergence analysis

Mathematics Subject Classification

  • 90C30
  • 90C56
  • 49M37