Journal of Optimization Theory and Applications

Volume 147, Issue 3, pp 443–453

On a Global Complexity Bound of the Levenberg-Marquardt Method

  • Kenji Ueda
  • Nobuo Yamashita


In this paper, we investigate a global complexity bound of the Levenberg-Marquardt method (LMM) for the nonlinear least squares problem. The global complexity bound for an iterative method solving the unconstrained minimization of φ is an upper bound on the number of iterations required to obtain an approximate solution x such that ‖∇φ(x)‖ ≤ ε. We show that the global complexity bound of the LMM is O(ε⁻²).


Keywords: Levenberg-Marquardt methods · Global complexity bound · Scale parameter
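To make the setting concrete, the sketch below shows a textbook Levenberg-Marquardt iteration for min φ(x) = ½‖F(x)‖², terminating when ‖∇φ(x)‖ ≤ ε as in the abstract's stopping rule. The residual function (Rosenbrock in least-squares form), the μ-update constants, and the starting point are illustrative assumptions, not details taken from the paper.

```python
# Minimal Levenberg-Marquardt sketch for min phi(x) = 0.5 * ||F(x)||^2,
# stopping once ||grad phi(x)|| <= eps. The residual F, the mu-update
# rule, and the starting point are illustrative choices (assumptions),
# not the parameter policy analyzed in the paper.

def F(x1, x2):
    # Rosenbrock function written as a least-squares residual vector.
    return [10.0 * (x2 - x1 * x1), 1.0 - x1]

def J(x1, x2):
    # Jacobian of F (rows: residuals, columns: variables).
    return [[-20.0 * x1, 10.0], [-1.0, 0.0]]

def lm(x1, x2, eps=1e-8, mu=1.0, max_iter=500):
    for k in range(max_iter):
        r = F(x1, x2)
        Jm = J(x1, x2)
        # Gradient of phi: grad = J^T F.
        g1 = Jm[0][0] * r[0] + Jm[1][0] * r[1]
        g2 = Jm[0][1] * r[0] + Jm[1][1] * r[1]
        if (g1 * g1 + g2 * g2) ** 0.5 <= eps:
            return x1, x2, k
        # LM step: solve (J^T J + mu I) d = -grad, here in 2x2 closed form.
        a = Jm[0][0] ** 2 + Jm[1][0] ** 2 + mu
        b = Jm[0][0] * Jm[0][1] + Jm[1][0] * Jm[1][1]
        c = Jm[0][1] ** 2 + Jm[1][1] ** 2 + mu
        det = a * c - b * b
        d1 = (-c * g1 + b * g2) / det
        d2 = (b * g1 - a * g2) / det
        # Accept the step if phi decreases; otherwise enlarge mu.
        phi_old = 0.5 * (r[0] ** 2 + r[1] ** 2)
        rn = F(x1 + d1, x2 + d2)
        phi_new = 0.5 * (rn[0] ** 2 + rn[1] ** 2)
        if phi_new < phi_old:
            x1, x2, mu = x1 + d1, x2 + d2, max(mu * 0.5, 1e-12)
        else:
            mu *= 4.0
    return x1, x2, max_iter

x1, x2, iters = lm(-1.2, 1.0)
```

The μ parameter plays the "scale parameter" role from the keywords: large μ makes the step behave like a short gradient step, small μ makes it behave like a Gauss-Newton step. The paper's contribution is bounding the number of such iterations by O(ε⁻²) under its particular parameter-update rule.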





Copyright information

© Springer Science+Business Media, LLC 2010

Authors and Affiliations

  1. Advanced Technology R&D Center, Mitsubishi Electric Corporation, Hyogo, Japan
  2. Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University, Kyoto, Japan
