Updating the regularization parameter in the adaptive cubic regularization algorithm

Abstract

The adaptive cubic regularization method (Cartis et al. in Math. Program., Ser. A 127(2):245–295, 2011; Math. Program., Ser. A 130(2):295–319, 2011) has recently been proposed for solving unconstrained minimization problems. At each iteration of this method, the objective function is replaced by a cubic model that includes an adaptive regularization parameter whose role is analogous to that of the local Lipschitz constant of the objective’s Hessian. We present new updating strategies for this parameter based on interpolation techniques, which improve the overall numerical performance of the algorithm. Numerical experiments on large nonlinear least-squares problems are reported.
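
For context, the standard ARC framework of Cartis et al. (refs. 3 and 4) computes a trial step s_k by approximately minimizing a cubic model of the objective; the sketch below recalls that model and the basic acceptance ratio, whereas the interpolation-based updates for the regularization parameter proposed in this paper are described only in the full text.

\[
  m_k(s) \;=\; f(x_k) + s^\top g_k + \tfrac{1}{2}\, s^\top B_k s + \tfrac{\sigma_k}{3}\,\|s\|^3,
  \qquad
  \rho_k \;=\; \frac{f(x_k) - f(x_k + s_k)}{f(x_k) - m_k(s_k)},
\]

where g_k is the gradient of f at x_k, B_k is the Hessian or a symmetric approximation to it, and \sigma_k > 0 is the adaptive regularization parameter. The step is accepted when \rho_k exceeds a small positive threshold, and \sigma_k is typically decreased after very successful iterations, kept after successful ones, and increased otherwise.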



References

  1. Audet, C., Orban, D.: Finding optimal algorithmic parameters using derivative-free optimization. SIAM J. Optim. 17(3), 642–664 (2006)

  2. Bellavia, S., Cartis, C., Gould, N.I.M., Morini, B., Toint, Ph.L.: Convergence of a regularized Euclidean residual algorithm for nonlinear least-squares. SIAM J. Numer. Anal. 48, 1–29 (2010)

  3. Cartis, C., Gould, N.I.M., Toint, Ph.L.: Adaptive cubic overestimation methods for unconstrained optimization. Part I: Motivation, convergence and numerical results. Math. Program., Ser. A 127(2), 245–295 (2011)

  4. Cartis, C., Gould, N.I.M., Toint, Ph.L.: Adaptive cubic overestimation methods for unconstrained optimization. Part II: Worst-case function- and derivative-evaluation complexity. Math. Program., Ser. A 130(2), 295–319 (2011). doi:10.1007/s10107-009-0337-y

  5. Cartis, C., Gould, N.I.M., Toint, Ph.L.: Complexity bounds for second-order optimality in unconstrained optimization. J. Complex. 28(1), 93–108 (2012). doi:10.1016/j.jco.2011.06.001

  6. Cartis, C., Gould, N.I.M., Toint, Ph.L.: On the complexity of steepest descent, Newton’s and regularized Newton’s methods for nonconvex unconstrained optimization. SIAM J. Optim. 20(6), 2833–2852 (2010)

  7. Cartis, C., Gould, N.I.M., Toint, Ph.L.: Trust-region and other regularisations of linear least-squares problems. BIT 49(1), 21–53 (2009)

  8. Conn, A.R., Gould, N.I.M., Toint, Ph.L.: Trust-Region Methods. SIAM, Philadelphia (2000)

  9. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)

  10. Dennis, J.E., Schnabel, R.B.: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice Hall, Englewood Cliffs (1983)

  11. Erway, J.B., Gill, P.E.: A subspace minimization method for the trust-region step. SIAM J. Optim. 20, 1439–1461 (2009)

  12. Erway, J.B., Gill, P.E., Griffin, J.D.: Iterative methods for finding a trust-region step. SIAM J. Optim. 20, 1110–1131 (2009)

  13. Golub, G.H., Kahan, W.: Calculating the singular values and pseudo-inverse of a matrix. SIAM J. Numer. Anal. 2(2), 205–224 (1965)

  14. Gould, N.I.M., Orban, D., Sartenaer, A., Toint, Ph.L.: Sensitivity of trust-region algorithms to their parameters. 4OR 3(3), 227–241 (2005)

  15. Gould, N.I.M., Lucidi, S., Roma, M., Toint, Ph.L.: Solving the trust-region subproblem using the Lanczos method. SIAM J. Optim. 9(2), 504–525 (1999)

  16. Gould, N.I.M., Orban, D., Toint, Ph.L.: CUTEr, a constrained and unconstrained testing environment, revisited. ACM Trans. Math. Softw. 29(4), 373–394 (2003)

  17. Gould, N.I.M., Orban, D., Toint, Ph.L.: GALAHAD—a library of thread-safe Fortran 90 packages for large-scale nonlinear optimization. ACM Trans. Math. Softw. 29(4), 353–372 (2003)

  18. Griewank, A.: The modification of Newton’s method for unconstrained optimization by bounding cubic terms. Technical Report NA/12, Department of Applied Mathematics and Theoretical Physics, University of Cambridge, United Kingdom (1981)

  19. Hager, W.W., Park, S.C.: Global convergence of SSM for minimizing a quadratic over a sphere. Math. Comput. 74, 1413–1423 (2005)

  20. Hager, W.W.: Minimizing a quadratic over a sphere. SIAM J. Optim. 12, 188–208 (2001)

  21. Nesterov, Yu., Polyak, B.T.: Cubic regularization of Newton’s method and its global performance. Math. Program. 108(1), 177–205 (2006)

  22. Paige, C.C., Saunders, M.A.: ALGORITHM 583: LSQR: an algorithm for sparse linear equations and sparse least squares. ACM Trans. Math. Softw. 8(2), 195–209 (1982)

  23. Sartenaer, A.: Automatic determination of an initial trust region in nonlinear programming. SIAM J. Sci. Comput. 18(6), 1788–1803 (1997)

  24. Steihaug, T.: The conjugate gradient method and trust regions in large scale optimization. SIAM J. Numer. Anal. 20, 626–637 (1983)

  25. Weiser, M., Deuflhard, P., Erdmann, B.: Affine conjugate adaptive Newton methods for nonlinear elastomechanics. Optim. Methods Softw. 22(3), 413–431 (2007)

  26. Yuan, Y.: On the truncated conjugate-gradient method. Math. Program., Ser. A 87(3), 561–573 (1999)


Acknowledgements

The work of the first author was supported by EPSRC grant EP/E053351/1. The second author wishes to thank Stefania Bellavia and Benedetta Morini for several helpful discussions and for their continued encouragement and support.

Author information

Corresponding author

Correspondence to M. Porcelli.

Electronic Supplementary Material

Below is the link to the electronic supplementary material.

Appendix (PDF 67.8 kB)


Cite this article

Gould, N.I.M., Porcelli, M. & Toint, P.L. Updating the regularization parameter in the adaptive cubic regularization algorithm. Comput Optim Appl 53, 1–22 (2012). https://doi.org/10.1007/s10589-011-9446-7

