
Quadratic regularization methods with finite-difference gradient approximations

Computational Optimization and Applications

Abstract

This paper presents two quadratic regularization methods with finite-difference gradient approximations for smooth unconstrained optimization problems. One method is based on forward finite-difference gradients, while the other is based on central finite-difference gradients. In both methods, the accuracy of the gradient approximations and the regularization parameter in the quadratic models are jointly adjusted using a nonmonotone acceptance condition for the trial points. When the objective function is bounded from below and has a Lipschitz continuous gradient, it is shown that the method based on forward finite-difference gradients needs at most \({\mathcal{O}}\left( n\epsilon ^{-2}\right) \) function evaluations to generate an \(\epsilon \)-approximate stationary point, where n is the problem dimension. Under the additional assumption that the Hessian of the objective is Lipschitz continuous, an evaluation complexity bound of the same order is proved for the method based on central finite-difference gradients. Numerical results are also presented; they confirm the theoretical findings and illustrate the relative efficiency of the proposed methods.
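To make the construction concrete, the Python sketch below implements forward and central finite-difference gradients and a basic quadratic regularization loop. It is only an illustration under simplifying assumptions: the acceptance test, the update rules for the regularization parameter sigma and the difference step h, and all parameter values are placeholders chosen for readability, not the jointly adjusted nonmonotone scheme analyzed in the paper.

import numpy as np

def forward_diff_grad(f, x, h):
    # Forward finite-difference gradient: n extra function evaluations.
    n = x.size
    fx = f(x)
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        g[i] = (f(x + h * e) - fx) / h
    return g

def central_diff_grad(f, x, h):
    # Central finite-difference gradient: 2n extra function evaluations.
    n = x.size
    g = np.empty(n)
    for i in range(n):
        e = np.zeros(n)
        e[i] = 1.0
        g[i] = (f(x + h * e) - f(x - h * e)) / (2.0 * h)
    return g

def quad_reg_fd(f, x0, sigma0=1.0, h0=1e-4, eps=1e-6, max_iter=500, central=False):
    # Illustrative quadratic regularization loop with finite-difference gradients.
    # The quadratic model m(d) = f(x) + g^T d + (sigma/2)||d||^2 is minimized
    # exactly by d = -g / sigma; the trial point is accepted if it yields a
    # simple sufficient decrease, otherwise sigma is increased and the
    # difference step h is reduced.  This is a sketch of the general idea,
    # not the method analyzed in the paper.
    x, sigma, h = np.asarray(x0, dtype=float), sigma0, h0
    grad = central_diff_grad if central else forward_diff_grad
    for _ in range(max_iter):
        g = grad(f, x, h)
        if np.linalg.norm(g) <= eps:
            break
        d = -g / sigma                                         # minimizer of the regularized model
        if f(x + d) <= f(x) - 1e-4 * sigma * np.dot(d, d):     # placeholder acceptance test
            x = x + d
            sigma = max(0.5 * sigma, 1e-8)                     # successful step: relax regularization
        else:
            sigma *= 2.0                                       # unsuccessful: strengthen regularization
            h = max(0.5 * h, 1e-12)                            # and tighten the gradient accuracy
    return x

# Example usage on the Rosenbrock function (no convergence claim intended):
rosen = lambda x: 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2
print(quad_reg_fd(rosen, [-1.2, 1.0], central=True))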


Data Availability

The data defining the test problems considered in this work are available in [16].

Notes

  1. The MATLAB/Octave codes of the test problems are freely available at https://www.mat.univie.ac.at/~neum/glopt/test.html#test_unconstr and https://people.sc.fsu.edu/~jburkardt/octave_src/test_nonlin/test_nonlin.html.

  2. http://www.maths.manchester.ac.uk/~higham/mctoolbox.

  3. The derivative-free trust-region method in [5] is designed to minimize functions of the form \(f(x)=h(F(x))\), where \(F:{\mathbb {R}}^{n}\rightarrow {\mathbb {R}}^{m}\) and \(h:{\mathbb {R}}^{m}\rightarrow {\mathbb {R}}\) is convex. In the code DFNLS we consider \(h(z)=\Vert z\Vert _{2}^{2}\). The parameters are the same as those considered in [5]; a small illustrative sketch of this composite objective is given after these notes.

  4. The data profiles were generated using the code data_profile.m, freely available at https://www.mcs.anl.gov/~more/dfo/.
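As a small illustrative sketch (not taken from [5] or from the paper's code), the composite structure \(f(x)=h(F(x))\) with \(h(z)=\Vert z\Vert _{2}^{2}\) mentioned in note 3 amounts to an ordinary sum-of-squares objective; the residual map F below is a hypothetical example, not one of the test problems from [16].

import numpy as np

# Hypothetical residual map F: R^2 -> R^2, used only for illustration.
def F(x):
    return np.array([10.0 * (x[1] - x[0] ** 2), 1.0 - x[0]])

# Composite objective f(x) = h(F(x)) with h(z) = ||z||_2^2, the form handled by DFNLS.
def f(x):
    z = F(x)
    return float(np.dot(z, z))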

References

  1. Larson, J., Menickelly, M., Wild, S.M.: Derivative-free optimization methods. Acta Numer. 28, 287–404 (2019)

  2. Vicente, L.N.: Worst case complexity of direct search. EURO J. Comput. Optim. 1, 143–153 (2013)

  3. Konecny, J., Richtárik, P.: Simple complexity analysis of simplified direct search. arXiv:1410.0390 [math.OC] (2014)

  4. Dodangeh, M., Vicente, L.N., Zhang, Z.: On the optimal order of worst case complexity of direct search. Optim. Lett. 10, 699–708 (2016)

  5. Grapiglia, G.N., Yuan, J., Yuan, Y.: A derivative-free trust-region algorithm for composite nonsmooth optimization. Comput. Appl. Math. 35, 475–499 (2016)

  6. Garmanjani, R., Júdice, D., Vicente, L.N.: Trust-region methods without using derivatives: worst-case complexity and the non-smooth case. SIAM J. Optim. 26, 1987–2011 (2016)

  7. Nesterov, Y., Spokoiny, V.: Random gradient-free minimization of convex functions. Found. Comput. Math. 17, 527–566 (2017)

  8. Bergou, E.H., Gorbunov, E., Richtárik, P.: Stochastic three points method for unconstrained smooth minimization. SIAM J. Optim. 30, 2726–2749 (2020)

  9. Gratton, S., Royer, C.W., Vicente, L.N., Zhang, Z.: Direct search based on probabilistic descent. SIAM J. Optim. 25, 1515–1541 (2015)

  10. Kimiaei, M., Neumaier, A.: Efficient global unconstrained black box optimization. Optimization Online (2021)

  11. Cartis, C., Roberts, L.: Scalable subspace methods for derivative-free nonlinear least-squares optimization. arXiv:2102.12016 [math.OC] (2021)

  12. Nocedal, J., Wright, S.J.: Numerical Optimization, 2nd edn. Springer, Berlin (2006)

  13. Grapiglia, G.N., Gonçalves, M.L.N., Silva, G.N.: A cubic regularization of Newton’s method with finite-difference Hessian approximations. Numer. Algorithms 90, 607–630 (2022)

  14. Cartis, C., Gould, N.I.M., Toint, Ph.L.: On the oracle complexity of first-order and derivative-free algorithms for smooth nonconvex minimization. SIAM J. Optim. 22, 66–86 (2012)

  15. Cartis, C., Gould, N.I.M., Toint, Ph.L.: Adaptive cubic regularisation methods for unconstrained optimization: Part I: motivation, convergence and numerical results. Math. Program. 127, 245–295 (2011)

  16. Moré, J.J., Garbow, B.S., Hillstrom, K.E.: Testing unconstrained optimization software. ACM Trans. Math. Softw. 7, 17–41 (1981)

  17. Nelder, J.A., Mead, R.: A simplex method for function minimization. Comput. J. 7, 308–313 (1965)

  18. Grapiglia, G.N., Sachs, E.W.: On the worst-case evaluation complexity of non-monotone line search algorithms. Comput. Optim. Appl. 68, 555–577 (2017)

  19. Moré, J.J., Wild, S.M.: Benchmarking derivative-free optimization algorithms. SIAM J. Optim. 20, 172–191 (2009)

  20. Berahas, A.S., Byrd, R.H., Nocedal, J.: Derivative-free optimization of noisy functions via quasi-Newton methods. SIAM J. Optim. 29, 965–993 (2019)

  21. Berahas, A.S., Cao, L., Choromanski, K., Scheinberg, K.: A theoretical and empirical comparison of gradient approximations in derivative-free optimization. Found. Comput. Math. 22, 507–560 (2021)

  22. Berahas, A.S., Sohab, O., Vicente, L.N.: Full-low evaluation methods for derivative-free optimization. arXiv:2107.11908 [math.OC] (2021)

  23. Shi, H-J.M., Xuan, M.Q., Oztoprak, F., Nocedal, J.: On the numerical performance of derivative-free optimization methods based on finite-difference approximations. arXiv:2102.09762 [math.OC] (2021)

Acknowledgements

The author is very grateful to two anonymous referees, whose comments helped to improve the paper.

Author information

Corresponding author

Correspondence to Geovani Nunes Grapiglia.

Additional information

This work is dedicated to Stela Angelozi Leite.

G. N. Grapiglia was partially supported by CNPq - Brazil Grant 312777/2020-5.

Cite this article

Grapiglia, G.N. Quadratic regularization methods with finite-difference gradient approximations. Comput Optim Appl 85, 683–703 (2023). https://doi.org/10.1007/s10589-022-00373-z
