
A globally convergent trust-region algorithm for unconstrained derivative-free optimization

Published in Computational and Applied Mathematics.


Abstract

In this work, we propose a derivative-free trust-region algorithm for unconstrained optimization, based on the ideas of Powell (Comput Optim Appl 53:527–555, 2012). We turn his general ideas into a step-by-step algorithm that avoids unnecessary reductions of the trust-region radius. Global convergence results for the algorithm are presented in detail: we prove that, under reasonable hypotheses, every accumulation point of the sequence generated by the algorithm is stationary.


References


  • Arouxét MB, Echebest N, Pilotta EA (2011) Active-set strategy in Powell’s method for optimization without derivatives. Comput Appl Math 30(1):171–196


  • Bazaraa MS, Sherali HD, Shetty CM (2006) Nonlinear programming: theory and algorithms, 3rd edn. Wiley, New York


  • Bueno LF, Friedlander A, Martínez JM, Sobral FNC (2013) Inexact restoration method for derivative-free optimization with smooth constraints. SIAM J Optim 23(2):1189–1213


  • Conejo PD, Karas EW, Ribeiro AA, Pedroso LG, Sachine M (2013) Global convergence of trust-region algorithms for convex constrained minimization without derivatives. Appl Math Comput 220:324–330


  • Conn AR, Gould NIM, Toint PL (2000) Trust-region methods. MPS-SIAM series on optimization. SIAM, Philadelphia

  • Conn AR, Scheinberg K, Vicente LN (2009) Global convergence of general derivative-free trust-region algorithms to first- and second-order critical points. SIAM J Optim 20(1):387–415


  • Conn AR, Scheinberg K, Toint PL (1997) On the convergence of derivative-free methods for unconstrained optimization. In: Iserles A, Buhmann M (eds) Approximation theory and optimization: tributes to M.J.D. Powell, pp 83–108

  • Conn AR, Scheinberg K, Vicente LN (2009) Introduction to derivative-free optimization. MPS-SIAM series on optimization. SIAM, Philadelphia

  • Conn AR, Toint PL (1996) An algorithm using quadratic interpolation for unconstrained derivative free optimization. In: Pillo GD, Gianessi F (eds) Nonlinear optimization and applications, pp 27–47. Plenum, New York

  • Diniz-Ehrhardt MA, Martínez JM, Pedroso LG (2011) Derivative-free methods for nonlinear programming with general lower-level constraints. Comput Appl Math 30:19–52


  • Gumma EAE, Hashim MHA, Montaz Ali M (2014) A derivative-free algorithm for linearly constrained optimization problems. Comput Optim Appl 57(3):599–621

  • Hare WL, Lucet Y (2014) Derivative-free optimization via proximal point methods. J Optim Theory Appl 160(1):204–220

  • Nocedal J, Wright SJ (1999) Numerical optimization. Springer series in operations research. Springer, New York

  • Powell MJD (1964) An efficient method for finding the minimum of a function of several variables without calculating derivatives. Comput J 7:155–162


  • Powell MJD (2002) UOBYQA: unconstrained optimization by quadratic approximation. Math Program 92:555–582


  • Powell MJD (2006) The NEWUOA software for unconstrained optimization without derivatives. In: Pillo GD, Roma M (eds) Large-scale nonlinear optimization. Springer, New York, pp 255–297

  • Powell MJD (2008) Developments of NEWUOA for minimization without derivatives. IMA J Numer Anal 28:649–664


  • Powell MJD (2009) The BOBYQA algorithm for bound constrained optimization without derivatives. Tech. rep. DAMTP 2009/NA06, Department of Applied Mathematics and Theoretical Physics, Cambridge, England

  • Powell MJD (2012) On the convergence of trust region algorithms for unconstrained minimization without derivatives. Comput Optim Appl 53:527–555


  • Rios LM, Sahinidis NV (2013) Derivative-free optimization: a review of algorithms and comparison of software implementations. J Glob Optim 56:1247–1293


Acknowledgements


The authors are grateful to the editor and the anonymous referees whose suggestions led to improvements in the paper.

Author information



Corresponding author

Correspondence to Priscila S. Ferreira.

Additional information

Communicated by Ernesto G. Birgin.

This work was supported by CNPq under Grants 133154/2011-4, 472313/2011-8, 477611/2013-3 and 307714/2011-0.



Appendix

Consider the quadratic function

$$\begin{aligned} f(x)=\dfrac{1}{2}x^\mathrm{T}Ax+b^\mathrm{T}x+c, \end{aligned}$$

with \(A\in \mathbb {R}^{n\times n}\) symmetric, \(b \in \mathbb {R}^n\) and \(c\in \mathbb {R}\).
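For concreteness, such a quadratic can be evaluated as follows (a minimal sketch with hypothetical values for \(A\), \(b\) and \(c\), using NumPy):

```python
import numpy as np

# Hypothetical example: a symmetric PSD matrix A with one null eigenvalue.
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0]])
b = np.array([1.0, -1.0, 0.0])
c = 3.0

def f(x):
    """Quadratic f(x) = 1/2 x^T A x + b^T x + c."""
    return 0.5 * x @ A @ x + b @ x + c

x = np.array([1.0, 2.0, 5.0])
print(f(x))  # 0.5*6 + (-1) + 3 = 5.0
```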

Lemma 23

If \(A\) is positive semidefinite and \(f\) is unbounded below, then \(b\) does not belong to the subspace generated by the eigenvectors associated with the positive eigenvalues of \(A\). Equivalently, there is a vector \(\omega \in \mathcal {N}(A)\) such that \(\Vert \omega \Vert =1\) and \(b^\mathrm{T}\omega >0\).


Proof. Let \(\left\{ v_1,\ldots ,v_n \right\} \) be an orthonormal basis of eigenvectors of \(A\) such that \(v_1,\ldots ,v_{\ell -1}\) are the eigenvectors corresponding to the null eigenvalues and \(v_\ell ,\ldots ,v_n\) are those corresponding to the positive eigenvalues \(\lambda _\ell ,\ldots ,\lambda _n\). Then, for \(i=1,\ldots ,\ell -1\) and \(j=\ell ,\ldots ,n\), we have

$$\begin{aligned} Av_i=0 \quad \hbox {and}\quad v_j=A\left( \dfrac{1}{\lambda _j}v_j \right) . \end{aligned}$$

Thus, \([v_1,\ldots ,v_{\ell -1}] = \mathcal {N}(A)\) and \([v_{\ell },\ldots ,v_n] = \mathcal {R}(A)\). Since \(f\) is convex and unbounded below, it has no stationary point, so \(\nabla f(x)=Ax+b\ne 0\) for all \(x \in \mathbb {R}^n\), and hence \(b\notin \mathcal {R}(A)\). Equivalently, \(b\notin \mathcal {N}(A)^\perp \), which means that there is a vector \(\omega \in \mathcal {N}(A)\) such that \(b^\mathrm{T}\omega \ne 0\). Normalizing \(\omega \) and changing its sign, if necessary, we may assume \(\Vert \omega \Vert =1\) and \(b^\mathrm{T}\omega >0\), which concludes the proof. \(\square \)
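A quick numerical illustration of Lemma 23 (a sketch with hypothetical data, using NumPy): when \(A\) is positive semidefinite and \(b\) has a nonzero component in \(\mathcal {N}(A)\), the function \(f\) decreases without bound along the corresponding direction \(\omega \).

```python
import numpy as np

# Hypothetical data: A is PSD with N(A) spanned by e3, and b has a
# nonzero component there, so f is unbounded below.
A = np.diag([2.0, 1.0, 0.0])
b = np.array([1.0, -1.0, 0.5])

def f(x):
    return 0.5 * x @ A @ x + b @ x

# omega in N(A) with ||omega|| = 1, sign chosen so that b^T omega > 0.
omega = np.array([0.0, 0.0, 1.0])
if b @ omega < 0:
    omega = -omega
assert np.allclose(A @ omega, 0.0) and b @ omega > 0

# Along -t*omega the quadratic term vanishes, so f(-t*omega) = -t * b^T omega
# decreases without bound as t grows.
vals = [float(f(-t * omega)) for t in (1e2, 1e4, 1e6)]
print(vals)  # [-50.0, -5000.0, -500000.0]
```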

Lemma 24

If \(f\) is bounded below, then there is a unique \(x^*\in \mathcal{{R}}(A)\) such that \(Ax^*+b=0\). Moreover, if \(\lambda _\ell \) is the smallest positive eigenvalue of \(A\), then \(\Vert Ax^*\Vert \ge \lambda _\ell \Vert x^*\Vert \).


Proof. Since \(\mathcal{{R}}(A^2)\subset \mathcal{{R}}(A)\) and, by the symmetry of \(A\), \(\dim (\mathcal{{R}}(A^2))=\dim (\mathcal{{R}}(A^\mathrm{T}A))=\dim (\mathcal{{R}}(A))\), we have that \(\mathcal{{R}}(A^2)=\mathcal{{R}}(A)\). As \(f\) is bounded below, \(A\) is positive semidefinite and, by Lemma 23, \(b\in \mathcal{{R}}(A)=\mathcal{{R}}(A^2)\). Thus, there is \(u \in \mathbb {R}^n\) such that \(A^2u=b\). This means that \(A(-Au)+b=0\), i.e., \(x^* =-Au\in \mathcal{{R}}(A)\) satisfies \(Ax^* +b=0\). To prove the uniqueness, note that if \(x^*,\bar{x}\in \mathcal{{R}}(A)\) are such that \(Ax^*+b=0\) and \(A\bar{x}+b=0\), then \(x^*-\bar{x}\in \mathcal{{R}}(A)=\mathcal {N}(A)^\perp \) and \(A(x^*-\bar{x})=0\), so \(x^*-\bar{x}\in \mathcal {N}(A)\cap \mathcal {N}(A)^\perp =\{0\}\).

To establish the inequality, first consider the case where \(A\) is positive definite. Thus,

$$\begin{aligned} \Vert A x^* \Vert ^2=(x^*)^\mathrm{T}A^2 x^* \ge \lambda _\ell ^2 \Vert x^* \Vert ^2. \end{aligned}$$

In the case where \(A\) has null eigenvalues, consider \(\left\{ v_1,\ldots ,v_n \right\} \) an orthonormal basis of eigenvectors such that \(v_1,\ldots ,v_{\ell -1}\) are the eigenvectors corresponding to the null eigenvalues and \(v_\ell ,\ldots ,v_n\) the eigenvectors corresponding to the positive eigenvalues. Defining \(P=(v_1\ldots v_n)\), \(D=\mathrm{diag}(\lambda _1,\ldots ,\lambda _n)\), where \(\lambda _i\) is the eigenvalue associated with the eigenvector \(v_i\), and \(\widehat{c}=P^\mathrm{T}b\), we have that if

$$\begin{aligned} z^*\in \mathcal{{R}}(D) \quad \hbox {and}\quad Dz^*+\widehat{c}=0, \end{aligned}$$

then \(x^*=Pz^*\in \mathcal{{R}}(A)\) and \(Ax^*+b=0\). Indeed, since \(z^*\in \mathcal{{R}}(D)\), we can write \(z^*=Dw^*\) for some \(w^*\in \mathbb {R}^n\) and, using \(AP=PD\), we obtain

$$\begin{aligned} x^*=Pz^*=PDw^*=APw^*\in \mathcal{{R}}(A) \end{aligned}$$

and

$$\begin{aligned} Ax^*+b=APz^*+b=P(Dz^*+\widehat{c})=0. \end{aligned}$$

It remains to construct \(z^*\) satisfying the two conditions above. Define \(w^* \in \mathbb {R}^n\) by

$$\begin{aligned} w^*_i=\left\{ \begin{array}{ll} 0, & \hbox {if } i=1,\ldots ,\ell -1, \\ -\dfrac{\widehat{c}_i}{\lambda _i^2}, & \hbox {if } i=\ell ,\ldots ,n, \end{array}\right. \end{aligned}$$

and \(z^*=Dw^*\). Since \(b\in {\mathcal {N}}(A)^\perp =[v_1,\ldots ,v_{\ell -1}]^\perp \), \(\widehat{c}_i=v_i^\mathrm{T}b=0\), for \(i=1,\ldots ,\ell -1\) and, consequently, \(Dz^*+\widehat{c}=0\). To complete the proof, note that

$$\begin{aligned} \Vert z^*\Vert ^2=\sum _{i=\ell }^{n}\left( \dfrac{\widehat{c}_i}{\lambda _i} \right) ^2\le \dfrac{1}{\lambda _\ell ^2}\sum _{i=\ell }^{n}\widehat{c}_i^2=\dfrac{1}{\lambda _\ell ^2}\Vert \widehat{c}\Vert ^2. \end{aligned}$$

Moreover, since \(x^*=Pz^*\), \(Ax^*+b=0\) and \(\widehat{c}=P^\mathrm{T}b\), we obtain that \(\Vert x^*\Vert =\Vert z^*\Vert \) and \(\Vert Ax^*\Vert =\Vert b\Vert =\Vert \widehat{c}\Vert \). Therefore,

$$\begin{aligned} \Vert x^*\Vert ^2\le \dfrac{1}{\lambda _\ell ^2}\Vert b\Vert ^2=\dfrac{1}{\lambda _\ell ^2}\Vert Ax^*\Vert ^2. \end{aligned}$$

\(\square \)
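Lemma 24 can likewise be checked numerically (a sketch with hypothetical data, using NumPy): when \(b\in \mathcal {R}(A)\), the unique \(x^*\in \mathcal {R}(A)\) with \(Ax^*+b=0\) is \(-A^{+}b\), where \(A^{+}\) denotes the Moore–Penrose pseudoinverse, and the inequality \(\Vert Ax^*\Vert \ge \lambda _\ell \Vert x^*\Vert \) holds.

```python
import numpy as np

# Hypothetical PSD matrix with eigenvalues {0, 1, 2}; b lies in R(A)
# (its component in N(A) = span{e3} is zero), so f is bounded below.
A = np.diag([2.0, 1.0, 0.0])
b = np.array([4.0, -3.0, 0.0])

# The unique x* in R(A) with A x* + b = 0 is -A^+ b.
x_star = -np.linalg.pinv(A) @ b
assert np.allclose(A @ x_star + b, 0.0)

# ||A x*|| >= lambda_ell ||x*||, lambda_ell the smallest positive eigenvalue.
eigs = np.linalg.eigvalsh(A)
lam_ell = min(lam for lam in eigs if lam > 1e-12)  # = 1.0 here
assert np.linalg.norm(A @ x_star) >= lam_ell * np.linalg.norm(x_star)
print(x_star)  # x* = [-2, 3, 0]
```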


About this article


Cite this article

Ferreira, P.S., Karas, E.W. & Sachine, M. A globally convergent trust-region algorithm for unconstrained derivative-free optimization. Comp. Appl. Math. 34, 1075–1103 (2015).


