
An inexact proximal regularization method for unconstrained optimization

  • Original Article
  • Published in Mathematical Methods of Operations Research

Abstract

We present a regularization algorithm for solving a smooth unconstrained minimization problem. The algorithm is suited to degenerate problems, in which the Hessian is singular at a local optimal solution. Its main feature is an outer/inner iteration scheme. We show that the algorithm has a strong global convergence property under mild assumptions, and a local convergence analysis shows that it converges superlinearly under a local error bound condition. Numerical experiments are reported.
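The paper itself is not reproduced on this page, so only the outer/inner structure described in the abstract can be illustrated here. The Python sketch below is a hypothetical, minimal example of a generic inexact proximal regularization scheme of that type, not the authors' algorithm: each outer iteration forms the proximal subproblem min_y f(y) + (mu/2)||y - x_k||^2 and solves it inexactly with an inner loop of regularized Newton steps and Armijo backtracking. The function name inexact_prox_reg, all parameter values, the safeguard for nonconvex f, and the update rule for mu are illustrative assumptions.

```python
# Illustrative sketch (not the authors' method): outer/inner inexact proximal
# regularization for min f(x), with f smooth.
import numpy as np

def inexact_prox_reg(f, grad, hess, x0, mu=1.0, tol=1e-8,
                     theta=0.5, max_outer=200, max_inner=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_outer):
        gnorm = np.linalg.norm(grad(x))
        if gnorm <= tol:                        # outer stopping test on ||grad f||
            break
        y = x.copy()
        inner_tol = theta * gnorm               # inexact inner solve: relative accuracy
        for _ in range(max_inner):
            gy = grad(y) + mu * (y - x)         # gradient of the proximal subproblem
            if np.linalg.norm(gy) <= inner_tol:
                break
            H = hess(y) + mu * np.eye(x.size)   # regularization keeps the system solvable
            d = np.linalg.solve(H, -gy)
            if gy @ d >= 0:                     # safeguard for nonconvex f:
                d = -gy                         # fall back to steepest descent
            phi = lambda z: f(z) + 0.5 * mu * np.dot(z - x, z - x)
            t, phi_y = 1.0, phi(y)
            while t > 1e-12 and phi(y + t * d) > phi_y + 1e-4 * t * (gy @ d):
                t *= 0.5                        # Armijo backtracking on the subproblem
            y = y + t * d
        x = y
        # Shrink the proximal parameter with ||grad f(x)||; an update of this kind is
        # the usual device for fast local convergence under an error bound.
        mu = max(1e-12, min(mu, np.linalg.norm(grad(x))))
    return x

# Example on a degenerate problem: f(x) = (x'x)^2 has a singular Hessian at the
# solution x = 0, the situation targeted by the abstract.
f = lambda x: np.dot(x, x) ** 2
grad_f = lambda x: 4.0 * np.dot(x, x) * x
hess_f = lambda x: 8.0 * np.outer(x, x) + 4.0 * np.dot(x, x) * np.eye(x.size)
print(inexact_prox_reg(f, grad_f, hess_f, np.array([1.0, -2.0])))
```

Driving mu to zero at the rate of the gradient norm is the standard device in the regularized Newton and Levenberg-Marquardt literature for obtaining superlinear local convergence under a local error bound, which is the kind of result stated in the abstract.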



Author information

Correspondence to Paul Armand.


Cite this article

Armand, P., Lankoandé, I. An inexact proximal regularization method for unconstrained optimization. Math Meth Oper Res 85, 43–59 (2017). https://doi.org/10.1007/s00186-016-0561-1

