Regularized Newton method for unconstrained convex optimization

Full Length Paper · Mathematical Programming

Abstract

We introduce the regularized Newton method (RNM) for unconstrained convex optimization. For any convex function with a bounded optimal set, the RNM generates a sequence that converges to the optimal set from any starting point. Moreover, the RNM requires neither strong convexity nor smoothness of the objective on the entire space. If the function is strongly convex and sufficiently smooth in a neighborhood of the solution, then the RNM sequence converges to the unique solution at an asymptotically quadratic rate. We characterize the neighborhood of the solution in which the quadratic rate occurs.
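
The abstract does not spell out the paper's update rule, so the sketch below is an illustrative assumption only: a Levenberg–Marquardt-style regularized Newton step x_{k+1} = x_k - (∇²f(x_k) + λ_k I)^{-1} ∇f(x_k) with λ_k = ‖∇f(x_k)‖. It conveys the general idea of shifting the Hessian so the step remains well defined where the objective is not strongly convex; the function names and the specific choice of λ_k are assumptions, not the scheme analyzed in the paper.

```python
import numpy as np

def regularized_newton(grad, hess, x0, tol=1e-10, max_iter=100):
    """Illustrative regularized Newton iteration (not the paper's exact scheme).

    Each step shifts the Hessian by lam * I with lam = ||grad(x)||, so the
    Newton system stays solvable even where the Hessian is singular or
    ill-conditioned; lam vanishes as the iterates approach a stationary point.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:                       # stationarity test
            break
        H = hess(x) + np.linalg.norm(g) * np.eye(x.size)  # regularized Hessian
        x = x - np.linalg.solve(H, g)                     # regularized Newton step
    return x

# Example: f(x) = sum_i sqrt(1 + x_i^2) is convex but strongly convex only
# near its minimizer at the origin; undamped Newton can diverge from distant
# starting points, while the regularized step makes steady progress.
grad = lambda x: x / np.sqrt(1.0 + x**2)
hess = lambda x: np.diag((1.0 + x**2) ** (-1.5))
print(regularized_newton(grad, hess, x0=[10.0, -5.0]))    # close to the origin
```

In this sketch the regularization term shrinks with the gradient norm: far from the minimizer it damps the step, and near a strongly convex minimizer it fades so the iteration behaves like the pure Newton method, which is consistent with the asymptotic quadratic rate described above.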

Author information

Correspondence to Roman A. Polyak.

About this article

Cite this article

Polyak, R.A. Regularized Newton method for unconstrained convex optimization. Math. Program. 120, 125–145 (2009). https://doi.org/10.1007/s10107-007-0143-3
