
A Gradient-Based Globalization Strategy for the Newton Method

  • Daniela di Serafino
  • Gerardo Toraldo
  • Marco Viola
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 11973)

Abstract

The Newton method is one of the most powerful methods for solving smooth unconstrained optimization problems. It converges locally at a quadratic rate in a neighborhood of a local minimum where the Hessian is positive definite and Lipschitz continuous. Several strategies have been proposed to achieve global convergence; they are mainly based either on modifying the Hessian, combined with a line search, or on adopting a restricted-step (trust-region) strategy. We propose a globalization technique that combines the Newton and gradient directions, producing a descent direction along which a backtracking Armijo line search is performed. Our work is motivated by the effectiveness of gradient methods that use suitable spectral step-length selection rules. We prove global convergence of the resulting algorithm and a quadratic rate of convergence under suitable second-order optimality conditions. A numerical comparison with a Newton method globalized through Hessian modification shows the effectiveness of our approach.
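The abstract describes the scheme only at a high level: whenever the pure Newton step is not a sufficiently good descent direction, a multiple of the negative gradient is mixed in, and a backtracking Armijo line search is applied along the resulting direction. As a rough illustration, the Python sketch below implements one plausible variant of this idea; the specific mixing rule, the sufficient-descent threshold `theta`, and the line-search parameters are our own assumptions for illustration, not the rule proposed in the paper.

```python
import numpy as np

def globalized_newton(f, grad, hess, x0, tol=1e-8, max_iter=200,
                      theta=1e-4, beta=0.5, c1=1e-4):
    """Newton method globalized by mixing the gradient into the Newton
    direction (illustrative sketch; the mixing rule and all parameter
    values are assumptions, not the paper's method)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        try:
            d = np.linalg.solve(hess(x), -g)   # pure Newton direction
        except np.linalg.LinAlgError:
            d = -g                             # singular Hessian: fall back to steepest descent
        # Mix in -g until d is a sufficient descent direction:
        # require g.d <= -theta * ||g|| * ||d||.
        tau = 1.0
        while g @ d > -theta * np.linalg.norm(g) * np.linalg.norm(d):
            d = d - tau * g
            tau *= 2.0                         # increase the gradient weight
        # Backtracking Armijo line search along d.
        alpha, fx, gTd = 1.0, f(x), g @ d
        while f(x + alpha * d) > fx + c1 * alpha * gTd and alpha > 1e-16:
            alpha *= beta
        x = x + alpha * d
    return x

# Example: the 2-D Rosenbrock function.
f = lambda x: 100 * (x[1] - x[0]**2)**2 + (1 - x[0])**2
grad = lambda x: np.array([-400 * x[0] * (x[1] - x[0]**2) - 2 * (1 - x[0]),
                           200 * (x[1] - x[0]**2)])
hess = lambda x: np.array([[1200 * x[0]**2 - 400 * x[1] + 2, -400 * x[0]],
                           [-400 * x[0], 200.0]])
print(globalized_newton(f, grad, hess, [-1.2, 1.0]))  # -> approx. [1. 1.]
```

In this sketch the iteration behaves like a damped gradient-corrected Newton method far from the minimizer, while near a minimizer with positive definite Hessian the unmodified Newton step passes the descent test and the unit step is accepted, which is what allows the quadratic local rate to be preserved.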

Keywords

Newton method · Gradient method · Global convergence

Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Mathematics and Physics, University of Campania “L. Vanvitelli”, Caserta, Italy
  2. Department of Mathematics and Applications, University of Naples Federico II, Naples, Italy
