On fast trust region methods for quadratic models with linear constraints

Quadratic models Q_k(x), x ∈ R^n, of the objective function F(x), x ∈ R^n, are used by many successful iterative algorithms for minimization, where k is the iteration number. Given the vector of variables x_k ∈ R^n, a new vector x_{k+1} may be calculated that satisfies Q_k(x_{k+1}) < Q_k(x_k), in the hope that it provides the reduction F(x_{k+1}) < F(x_k). Trust region methods include a bound of the form ‖x_{k+1} − x_k‖ ≤ Δ_k. Also we allow general linear constraints on the variables that have to hold at x_k and at x_{k+1}. We consider the construction of x_{k+1}, using only of magnitude n² operations on a typical iteration when n is large.

The linear constraints are treated by active sets, which may be updated during an iteration, and which decrease the number of degrees of freedom in the variables temporarily, by restricting x to an affine subset of R^n. Conjugate gradient and Krylov subspace methods are addressed for adjusting the reduced variables, but the resultant steps are expressed in terms of the original variables. Termination conditions are given that are intended to combine suitable reductions in Q_k(·) with a sufficiently small number of steps. The reason for our work is that x_{k+1} is required in the LINCOA software of the author, which is designed for linearly constrained optimization without derivatives when there are hundreds of variables. Our studies go beyond the final version of LINCOA, however, which employs conjugate gradients with termination at the trust region boundary.

In particular, we find that, if an active set is updated at a point that is not the trust region centre, then the Krylov method may terminate too early due to a degeneracy. An extension to the conjugate gradient method for searching round the trust region boundary receives attention too, but it was also rejected from LINCOA, because of poor numerical results. The given techniques of LINCOA seem to be adequate in practice.


Introduction
Let an iterative algorithm be applied to seek the least value of a general objective function F(x), x ∈ R^n, subject to the linear constraints

    a_j^T x ≤ b_j,   j = 1, 2, ..., m,                                (1.1)

the value m = 0 being reserved for the unconstrained case. We assume that, for any vector of variables x ∈ R^n, the function value F(x) can be calculated. The vectors x of these calculations are generated automatically by the algorithm after some preliminary work. A feasible point x_1 and the value F(x_1) are required for the first iteration, where feasible means that the constraints (1.1) are satisfied. For every iteration number k, we define x_k to be the point that, at the start of the k-th iteration, has supplied the least calculated value of the objective function so far, subject to x_k being feasible. If this point is not unique, we choose the candidate that occurs first, so x_{k+1} ≠ x_k implies the strict reduction F(x_{k+1}) < F(x_k).
The main task of the k-th iteration is to pick a new vector of variables, x_k^+ say, or to decide that the sequence of iterations is complete. On some iterations, x_k^+ may be infeasible, in order to investigate changes to the objective function when moving away from a constraint boundary, an extreme case being when the boundary is the set of points that satisfy a linear equality constraint that is expressed as two linear inequalities. We restrict attention, however, to an iteration that makes x_k^+ feasible, and that tries to achieve the reduction F(x_k^+) < F(x_k). We also restrict attention to algorithms that employ a quadratic model Q_k(x) ≈ F(x), x ∈ R^n, and a trust region radius Δ_k > 0. We let the model function of the k-th iteration be the quadratic

    Q_k(x) = F(x_k) + (x − x_k)^T g_k + ½ (x − x_k)^T H_k (x − x_k),   x ∈ R^n,   (1.2)

the vector g_k ∈ R^n and the n × n symmetric matrix H_k being chosen before the start of the iteration. The trust region radius Δ_k is also chosen in advance. The ideal vector x_k^+ would be the vector x that minimizes Q_k(x) subject to the linear constraints (1.1) and the trust region bound

    ‖x − x_k‖ ≤ Δ_k.                                                  (1.3)

The purpose of our work, however, is to generate a vector x_k^+ that is a useful approximation to this ideal vector, and whose construction requires only O(n²) operations on most iterations. This is done by the LINCOA Fortran software [12], developed by the author for linearly constrained optimization when derivatives of F(x), x ∈ R^n, are not available. LINCOA has been applied successfully to several test problems that have hundreds of variables without taking advantage of any sparsity, which would not be practicable if the average amount of work on each iteration were O(n³), this amount of computation being typical if an accurate approximation to the ideal vector is required.
When first derivatives of F are calculated, the choice g_k = ∇F(x_k) is usual for the model function (1.2). Furthermore, after choosing the second derivative matrix H_1 for the first iteration, the k-th iteration may construct H_{k+1} from H_k by the symmetric Broyden formula [see equation (3.6.5) of [3], for instance]. There is also a version of that formula for derivative-free optimization [9]. It generates both g_{k+1} and H_{k+1} from g_k and H_k by minimizing ‖H_{k+1} − H_k‖_F subject to some interpolation conditions, where the subscript "F" denotes the Frobenius matrix norm. The techniques that provide g_k, H_k and Δ_k are separate from our work, however, except for one feature. It is that, instead of requiring H_k to be available explicitly, we assume that, for any v ∈ R^n, the form of H_k allows the matrix vector product H_k v to be calculated in O(n²) operations. Thus our work is relevant to the LINCOA Fortran software, where the expression for H_k includes a linear combination of about 2n + 1 outer products of the form y y^T, y ∈ R^n. Table 4 of [11] gives some numerical results on the efficiency of the symmetric Broyden formula in derivative-free unconstrained optimization when F is a strictly convex quadratic function. In all of these tests, the optimal vector of variables is calculated to high accuracy, using only about O(n) values of F for large n, which shows that the updating formula is successful at capturing enough second derivative information to provide fast convergence. There is no need for ‖H_k − ∇²F(x_k)‖ to become small, as explained by [1] when ∇F is available. In the tests of [11] with a quadratic F, every initial matrix H_1 satisfies ‖H_1 − ∇²F‖_F > ½ ‖∇²F‖_F. Further, failure of the sequence ‖H_k − ∇²F‖_F, k = 1, 2, ..., K, to decrease monotonically, where K is the final value of k, is not unusual when n is large.
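The idea behind the symmetric Broyden formula can be illustrated in a few lines. The sketch below is the derivative-based form of the update, which supplies the symmetric matrix closest to H_k in the Frobenius norm subject to a secant condition H_{k+1} s = y; the derivative-free variant of [9] replaces the secant condition by interpolation conditions. The function name and test data here are illustrative only, not taken from LINCOA.

```python
import numpy as np

def psb_update(H, s, y):
    """Powell symmetric Broyden update: the symmetric matrix closest to H
    in the Frobenius norm subject to the secant condition H_new @ s = y."""
    r = y - H @ s                        # residual of the secant condition
    ss = float(s @ s)
    return (H + (np.outer(r, s) + np.outer(s, r)) / ss
              - (float(r @ s) / ss ** 2) * np.outer(s, s))

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
H = (B + B.T) / 2                        # an arbitrary symmetric start
s = rng.standard_normal(n)
y = rng.standard_normal(n)
H_plus = psb_update(H, s, y)
```

The two defining properties are easy to confirm: H_plus is symmetric and maps s to y exactly, whatever symmetric H is supplied.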
Therefore the setting for our choice of x_k^+ is that we seek a vector x that provides a relatively small value of Q_k(x) subject to the constraints (1.1) and (1.3), although we expect the accuracy of the approximation Q_k(x) ≈ F(x), x ∈ R^n, to be poor. After choosing x_k^+, the value F(x_k^+) is calculated usually, and then, in derivative-free optimization, the next quadratic model satisfies the interpolation condition Q_{k+1}(x_k^+) = F(x_k^+), even if x_k^+ is not the best vector of variables so far. Thus, if |Q_k(x_k^+) − F(x_k^+)| is large, then Q_{k+1} is substantially different from Q_k. Fortunately, the symmetric Broyden formula has the property that, if F is quadratic, then ‖H_{k+1} − H_k‖ tends to zero as k becomes large, so it is usual on the later iterations for the error |Q_k(x_k^+) − F(x_k^+)| to be much less than a typical error |Q_k(x) − F(x)|, ‖x − x_k‖ ≤ Δ_k. Thus the reduction F(x_k^+) < F(x_k) is inherited from Q_k(x_k^+) < Q_k(x_k) much more often than would be predicted by theoretical analysis, if the theory employed a bound on |Q_k(x_k^+) − F(x_k^+)| that is derived only from the errors ‖g_k − ∇F(x_k)‖ and ‖H_k − ∇²F(x_k)‖, assuming that F is twice differentiable.
Suitable ways of choosing x_k^+ in the unconstrained case (m = 0) are described by [2] and by [10], for instance. They are addressed in Sect. 2, where both truncated conjugate gradient and Krylov subspace methods receive attention. They are pertinent to our techniques for linear constraints (m > 0), because, as explained in Sect. 3, active set methods are employed. Those methods generate sequences of unconstrained problems, some of the constraints (1.1) being satisfied as equations, which can reduce the number of variables, while the other constraints (1.1) are ignored temporarily. The idea of combining active set methods with Krylov subspaces is considered in Sect. 4. It was attractive to the author during the early development of the LINCOA software, but two severe disadvantages of this approach are exposed there. The present version of LINCOA combines active set methods with truncated conjugate gradients, which is the subject of Sect. 5. The conjugate gradient calculations are complete if they generate a vector x on the boundary of the trust region constraint (1.3), but moves round this boundary that preserve feasibility may provide a useful reduction in the value of Q_k(x). This possibility is studied in Sect. 6. Some further remarks, including a technique for preserving feasibility when applying the Krylov method, are given in Sect. 7.

The unconstrained case
There are no linear constraints on the variables throughout this section, m being zero in expression (1.1). We consider truncated conjugate gradient and Krylov subspace methods for constructing a point x_k^+ at which the quadratic model Q_k(x), ‖x − x_k‖ ≤ Δ_k, is relatively small. These methods are iterative, a sequence of points p_ℓ, ℓ = 1, 2, ..., L, being generated, with p_1 = x_k, with Q_k(p_{ℓ+1}) < Q_k(p_ℓ), ℓ = 1, 2, ..., L − 1, and with x_k^+ = p_L. The main difference between them is that the conjugate gradient iterations are terminated if the ℓ-th iteration generates a point p_{ℓ+1} that satisfies ‖p_{ℓ+1} − x_k‖ = Δ_k, but both p_ℓ and p_{ℓ+1} may be on the trust region boundary in a Krylov subspace iteration. The second derivative matrix H_k of the quadratic model (1.2) is allowed to be indefinite. We keep in mind that we would like the total amount of computation for each k to be O(n²).
If the gradient g_k of the model (1.2) were zero, then, instead of investigating whether H_k has any negative eigenvalues, we would abandon the search for a vector x_k^+ that provides Q_k(x_k^+) < Q_k(x_k). In this case LINCOA would try to improve the model by changing one of its interpolation conditions. We assume throughout this section, however, that g_k is nonzero.
The first iteration (ℓ = 1) of both the conjugate gradient and Krylov subspace methods picks the vector

    p_2 = p_1 − α_1 g_k,                                              (2.1)

where α_1 is the value of α that minimizes the quadratic function of one variable Q_k(p_1 − α g_k), α ∈ R, subject to ‖p_1 − α g_k − x_k‖ ≤ Δ_k. Further, whenever a conjugate gradient iteration generates p_{ℓ+1} from p_ℓ, a line search from p_ℓ along the direction

    d_ℓ = −∇Q_k(p_ℓ) + β_ℓ d_{ℓ−1}                                    (2.2)

is made, the value of β_ℓ being defined by the conjugacy condition d_ℓ^T H_k d_{ℓ−1} = 0. Thus p_{ℓ+1} is the vector

    p_{ℓ+1} = p_ℓ + α_ℓ d_ℓ,                                          (2.3)

where α_ℓ is the value of α that minimizes Q_k(p_ℓ + α d_ℓ), α ∈ R, subject to ‖p_ℓ + α d_ℓ − x_k‖ ≤ Δ_k. In theory, these conditions provide termination after at most n iterations (see [3], for instance).
The only task of a conjugate gradient iteration that requires O(n²) operations is the calculation of the product H_k d_ℓ. All of the other tasks of the ℓ-th iteration can be done in only O(n) operations, by taking advantage of the availability of H_k d_{ℓ−1} when ∇Q_k(p_ℓ), β_ℓ and d_ℓ are formed. Therefore the target of O(n²) work for each k is maintained if L, the final value of ℓ, is bounded above by a number that is independent of both k and n. Further, because the total work of the optimization algorithm depends on the average value of L over k, a few large values of L may be tolerable.
The two termination conditions that have been mentioned are not suitable for keeping L small. Indeed, if H_k is positive definite, then, for every ℓ ≥ 1, the property Q_k(p_{ℓ+1}) < Q_k(x_k) implies ‖p_{ℓ+1} − x_k‖ < Δ_k for sufficiently large Δ_k. Furthermore, if ∇Q_k(p_{ℓ+1}) = 0 is achieved in exact arithmetic, then ℓ may have to be close to n, but it is usual for computer rounding errors to prevent ∇Q_k(p_{ℓ+1}) from becoming zero. Instead, the termination conditions of LINCOA, which are described below, are recommended for use in practice. They require a positive constant η_1 < 1 to be prescribed, the LINCOA value being η_1 = 0.01.
Termination occurs immediately with x_k^+ = p_ℓ if the calculated d_ℓ is not a descent direction, which is the condition

    d_ℓ^T ∇Q_k(p_ℓ) ≥ 0.                                              (2.4)

Otherwise, the positive steplength α̂_ℓ, say, that puts p_ℓ + α̂_ℓ d_ℓ on the trust region boundary is found, and termination occurs also with x_k^+ = p_ℓ if this move is expected to give a relatively small reduction in Q_k(·), which is the test

    α̂_ℓ |d_ℓ^T ∇Q_k(p_ℓ)| ≤ η_1 { Q_k(x_k) − Q_k(p_ℓ) }.              (2.5)

In the usual situation, when both inequalities (2.4) and (2.5) fail, the product H_k d_ℓ is generated for the construction of the new point (2.3). The calculation ends with x_k^+ = p_{ℓ+1} if p_{ℓ+1} is on the trust region boundary, or if the reduction in Q_k by the ℓ-th iteration has the property

    Q_k(p_ℓ) − Q_k(p_{ℓ+1}) ≤ η_1 { Q_k(x_k) − Q_k(p_{ℓ+1}) }.        (2.6)

It also ends in the case ℓ = n, which is unlikely unless n is small. In all other cases, ℓ is increased by one for the next conjugate gradient iteration. Condition (2.4) is equivalent to ∇Q_k(p_ℓ) = 0 in exact arithmetic, because, if ∇Q_k(p_ℓ) is nonzero, then the given choices of p_ℓ and d_ℓ give the property

    d_ℓ^T ∇Q_k(p_ℓ) = −‖∇Q_k(p_ℓ)‖²,                                  (2.7)

the vector norm being Euclidean. The requirement d_ℓ^T ∇Q_k(p_ℓ) < 0 before α̂_ℓ is calculated provides α̂_ℓ > 0. For ℓ ≥ 2, the test (2.5) causes termination when ‖∇Q_k(p_ℓ)‖ is small. Specifically, as ‖p_ℓ − x_k‖ < Δ_k and ‖p_ℓ + α̂_ℓ d_ℓ − x_k‖ = Δ_k imply α̂_ℓ ‖d_ℓ‖ < 2 Δ_k, we deduce from the Cauchy–Schwarz inequality that condition (2.5) holds in the case

    ‖∇Q_k(p_ℓ)‖ ≤ η_1 { Q_k(x_k) − Q_k(p_ℓ) } / (2 Δ_k).              (2.8)

The test (2.6) gives L = ℓ + 1 when the reduction Q_k(p_ℓ) − Q_k(p_{ℓ+1}) is relatively small. We find in Sect. 5 that the given conditions for termination are also useful for search directions d_ℓ that satisfy linear constraints on the variables.
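In outline, the truncated conjugate gradient procedure with stopping rules in the spirit of the tests (2.4)–(2.6) can be sketched as follows. This is a minimal illustration, not the LINCOA implementation: the trust region centre is taken as the origin, H_k is supplied as a product routine `hessp` in keeping with the O(n²) assumption on matrix vector products, and `eta1` plays the part of η_1.

```python
import numpy as np

def truncated_cg(g, hessp, delta, eta1=0.01):
    """Truncated conjugate gradients for min g.x + 0.5 x.H.x, ||x|| <= delta,
    with steps measured from the trust region centre (the origin here).
    The stopping rules follow the spirit of conditions (2.4)-(2.6)."""
    n = g.size
    p = np.zeros(n)            # current point p_l, shifted so that x_k = 0
    grad = g.copy()            # gradient of the model at p
    d = -grad                  # first search direction is steepest descent
    q = 0.0                    # Q_k(p_l) - Q_k(x_k), the reduction so far
    for _ in range(n):
        slope = d @ grad
        if slope >= 0.0:                        # (2.4): not a descent direction
            break
        # steplength alpha_hat putting p + alpha*d on the trust region boundary
        dd, pd, pp = d @ d, p @ d, p @ p
        alpha_hat = (-pd + np.sqrt(pd * pd + dd * (delta ** 2 - pp))) / dd
        if alpha_hat * (-slope) <= eta1 * (-q):  # (2.5): little left to gain
            break
        Hd = hessp(d)                            # the single O(n^2) task
        curv = d @ Hd
        alpha = alpha_hat if curv <= 0.0 else min(-slope / curv, alpha_hat)
        q_new = q + alpha * slope + 0.5 * alpha * alpha * curv
        p = p + alpha * d
        grad = grad + alpha * Hd
        boundary = alpha == alpha_hat
        small_gain = q - q_new <= eta1 * (-q_new)  # (2.6): small reduction
        q = q_new
        if boundary or small_gain:
            break
        beta = (grad @ Hd) / curv    # conjugacy condition d_new^T H_k d = 0
        d = -grad + beta * d
    return p, q

# a small convex instance: the model minimum (1, 1) lies inside the region
H = np.diag([2.0, 4.0])
step, red = truncated_cg(np.array([-2.0, -4.0]), lambda v: H @ v, delta=10.0)
```

When the model Hessian is positive definite and the region is large, the routine reproduces plain conjugate gradients; when a step reaches the boundary, it stops there, as the text prescribes.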
In the remainder of this section, we consider a Krylov subspace method for calculating x_k^+ from x_k, where k is any positive integer such that the gradient g_k of the model (1.2) is nonzero. For ℓ ≥ 1, the Krylov subspace K_ℓ is defined to be the span of the vectors H_k^{j−1} g_k ∈ R^n, j = 1, 2, ..., ℓ, the matrix H_k^{j−1} being the (j − 1)-th power of H_k = ∇²Q_k, and H_k^0 being the unit matrix even if H_k is zero. Let ℓ* be the greatest integer such that K_{ℓ*} has dimension ℓ*, which implies K_ℓ = K_{ℓ*}, ℓ ≥ ℓ*. We retain p_1 = x_k. The ℓ-th iteration of our method constructs p_{ℓ+1}, and the number of iterations is at most ℓ*. Our choice of p_{ℓ+1} is a highly accurate estimate of the vector x that minimizes Q_k(x), subject to ‖x − x_k‖ ≤ Δ_k and subject to the condition that x − x_k is in K_ℓ. We find later in this section that the calculation of p_{ℓ+1} for each ℓ requires only O(n²) operations. An introduction to Krylov subspace methods is given by [2], for instance.
It is well known that each p_{ℓ+1} of the conjugate gradient method has the property that p_{ℓ+1} − x_k is in the Krylov subspace K_ℓ, which can be deduced by induction from Eqs. (2.2) and (2.3). It is also well known that the search directions of the conjugate gradient method satisfy d_ℓ^T H_k d_j = 0, 1 ≤ j < ℓ. It follows that the points p_ℓ, ℓ = 1, 2, 3, ..., of the conjugate gradient and Krylov subspace methods are the same while they are strictly inside the trust region {x : ‖x − x_k‖ ≤ Δ_k}. We recall that the conjugate gradient method is terminated when its p_{ℓ+1} is on the trust region boundary, but the iterations of the Krylov subspace method may continue, in order to provide a smaller value of Q_k(x_k^+). Our Krylov subspace method has a standard form with termination conditions for use in practice. For ℓ = 1, 2, ..., ℓ*, we let v_ℓ be a vector of unit length in K_ℓ that, for ℓ ≥ 2, satisfies the orthogonality conditions

    v_ℓ^T v_j = 0,   j = 1, 2, ..., ℓ − 1,                            (2.9)

which defines every v_ℓ except its sign. Thus {v_j : j = 1, 2, ..., ℓ} is an orthonormal basis of K_ℓ. Further, because H_k v_j is in the subspace K_{j+1}, the conditions (2.9) provide the highly useful property

    v_i^T H_k v_j = 0,   |i − j| ≥ 2.                                 (2.10)

The usual way of constructing v_ℓ, ℓ ≥ 2, is described below. It is extended in Sect. 4 to calculations with linear constraints on the variables.
After setting p_1 = x_k, the vector p_2 of our Krylov subspace method is given by Eq. (2.1) as mentioned already. For ℓ ≥ 2, we write p_{ℓ+1} as the sum

    p_{ℓ+1} = x_k + Σ_{i=1}^{ℓ} θ_i v_i.                              (2.11)

Therefore, assuming that every vector norm is Euclidean, we seek the vector of coefficients θ ∈ R^ℓ that minimizes the quadratic function

    Φ_ℓ(θ) = Q_k( x_k + Σ_{i=1}^{ℓ} θ_i v_i ),   θ ∈ R^ℓ,             (2.12)

subject to ‖θ‖ ≤ Δ_k. This function has the second derivatives

    ∂²Φ_ℓ / ∂θ_i ∂θ_j = v_i^T H_k v_j,   1 ≤ i, j ≤ ℓ,                (2.13)

and Eq. (2.10) shows that the matrix ∇²Φ_ℓ(·) is tridiagonal. Moreover, the gradient ∇Φ_ℓ(0) has the components v_i^T g_k, i = 1, 2, ..., ℓ, so only the first component is nonzero, its value being ±‖g_k‖. Hence, after forming the tridiagonal part of ∇²Φ_ℓ(·), the required θ can be calculated accurately, conveniently and rapidly by the algorithm of [6].
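To illustrate the reduced calculation, a simple multiplier search solves a small trust region subproblem with a tridiagonal Hessian; the sketch below uses an eigendecomposition and bisection for clarity, which is adequate for the tiny matrices of an illustration, while the algorithm of [6] is more careful, and the hard case of the trust region subproblem is ignored here.

```python
import numpy as np

def tridiag_trust_region(T, c, delta):
    """Minimise 0.5 t^T T t + c^T t subject to ||t|| <= delta for a small
    symmetric (tridiagonal) T, by bisection on the multiplier lam of the
    boundary equation (T + lam I) t = -c, ||t|| = delta."""
    evals, U = np.linalg.eigh(T)
    cu = U.T @ c
    step_norm = lambda lam: np.linalg.norm(cu / (evals + lam))
    # interior solution when T is positive definite and the step fits
    if evals[0] > 0 and step_norm(0.0) <= delta:
        return U @ (-cu / evals)
    lo = max(0.0, -evals[0]) + 1e-12     # lam must make T + lam I definite
    hi = lo + 1.0
    while step_norm(hi) > delta:         # bracket the boundary multiplier
        hi *= 2.0
    for _ in range(200):                 # bisection: ||t(lam)|| is monotone
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if step_norm(mid) > delta else (lo, mid)
    return U @ (-cu / (evals + 0.5 * (lo + hi)))

t_in = tridiag_trust_region(np.diag([2.0, 4.0]), np.array([-2.0, -4.0]), 10.0)
t_bd = tridiag_trust_region(np.zeros((2, 2)), np.array([-1.0, 0.0]), 2.0)
```

The first call has its unconstrained minimizer inside the region; the second has a linear objective, so the solution lies on the boundary.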
At the beginning of the ℓ-th iteration of our Krylov subspace method, where ℓ ≥ 2, the vectors v_i, i = 1, 2, ..., ℓ − 1, are available, with the tridiagonal part of the second derivative matrix ∇²Φ_{ℓ−1}(·), its last diagonal element being v_{ℓ−1}^T H_k v_{ℓ−1}, so the product H_k v_{ℓ−1} has been calculated already. A condition for termination at this stage is suggested in the next paragraph. If it fails, then the Arnoldi formula

    v_ℓ ∝ H_k v_{ℓ−1} − Σ_{j=1}^{ℓ−1} ( v_j^T H_k v_{ℓ−1} ) v_j,      (2.14)

the right hand side being scaled so that v_ℓ has unit length, is applied, the amount of work being only O(n), because Eq. (2.10) provides v_j^T H_k v_{ℓ−1} = 0, j ≤ ℓ − 3. The calculation of H_k v_ℓ requires O(n²) operations, however, which is needed for the second derivatives v_{ℓ−1}^T H_k v_ℓ and v_ℓ^T H_k v_ℓ. Then p_{ℓ+1} is generated by the method of the previous paragraph, followed by termination with x_k^+ = p_{ℓ+1} if inequality (2.6) is satisfied. Otherwise, ℓ is increased by one for the next iteration.
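The construction of the orthonormal vectors and of the tridiagonal matrix can be sketched as follows. The routine is an illustration of the recurrence (2.14) for a symmetric H_k, with the orthogonalisation restricted to the previous two basis vectors as the property (2.10) allows; the names and test data are illustrative, not taken from LINCOA.

```python
import numpy as np

def lanczos_basis(H, g, m):
    """Orthonormal basis v_1,...,v_m of the Krylov subspace
    span{g, Hg, ..., H^{m-1} g} for symmetric H.  Each new vector is H
    times its predecessor, orthogonalised against the previous two basis
    vectors only, which suffices because of property (2.10)."""
    V = np.zeros((g.size, m))
    V[:, 0] = g / np.linalg.norm(g)
    for l in range(1, m):
        w = H @ V[:, l - 1]                  # the single O(n^2) product
        for j in range(max(0, l - 2), l):    # at most two correction terms
            w -= (V[:, j] @ w) * V[:, j]
        V[:, l] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(3)
B = rng.standard_normal((6, 6))
H = (B + B.T) / 2                            # a symmetric test matrix
g = rng.standard_normal(6)
V = lanczos_basis(H, g, 4)
T = V.T @ H @ V          # the matrix of second derivatives, cf. (2.13)
```

The checks below confirm the three facts used in the text: the basis is orthonormal, V^T H V is tridiagonal, and only the first component of V^T g is nonzero.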
The left hand side of the test (2.5) is the reduction that occurs in the linear function

    Q_k(p_ℓ) + (x − p_ℓ)^T ∇Q_k(p_ℓ),   x ∈ R^n,                      (2.15)

if a step is taken from p_ℓ along the direction d_ℓ to the trust region boundary. For Krylov subspace methods, we replace that left hand side by the greatest reduction that can be achieved in the function (2.15) by any step from p_ℓ to the trust region boundary. We see that this step is to the point

    x = x_k − Δ_k ∇Q_k(p_ℓ) / ‖∇Q_k(p_ℓ)‖,                            (2.16)

assuming ∇Q_k(p_ℓ) ≠ 0, so inequality (2.5) becomes the termination condition

    (p_ℓ − x_k)^T ∇Q_k(p_ℓ) + Δ_k ‖∇Q_k(p_ℓ)‖ ≤ η_1 { Q_k(x_k) − Q_k(p_ℓ) }.   (2.18)

If it is satisfied, then, instead of forming v_ℓ and ∇²Φ_ℓ(·) for the calculation of p_{ℓ+1}, the choice x_k^+ = p_ℓ is made, which completes the Krylov subspace iterations. The vectors p_ℓ and ∇Q_k(p_ℓ) are required for the termination condition (2.18) when ℓ ≥ 2. The construction of p_ℓ provides the parameters γ_i, say, of the sum

    p_ℓ = x_k + Σ_{i=1}^{ℓ−1} γ_i v_i.                                (2.19)

Thus it is straightforward to calculate ∇Q_k(p_ℓ) = g_k + Σ_{i=1}^{ℓ−1} γ_i H_k v_i, and again we take advantage of the property (2.10).

Active sets
We now turn to linear constraints on the variables, m being positive in expression (1.1). We apply an active set method, this technique being more than 40 years old (see [4], for instance). The active set A, say, is a subset of the indices {1, 2, ..., m} such that the constraint gradients a_j, j ∈ A, are linearly independent. Usually, until A is updated, the variables x ∈ R^n are restricted by the equations

    a_j^T x = b_j,   j ∈ A,                                           (3.1)

but, if j is not in A, the constraint a_j^T x ≤ b_j is ignored, until updating of the active set is needed to prevent a constraint violation. This updating may occur several times during the search on the k-th iteration for a vector x_k^+ = x that provides a relatively small value of Q_k(x), subject to the inequality constraints (1.1) and the bound ‖x − x_k‖ ≤ Δ_k. As in Sect. 2, the search generates a sequence of points p_ℓ, ℓ = 1, 2, ..., L, in the trust region with p_1 = x_k, x_k^+ = p_L, and Q_k(p_{ℓ+1}) < Q_k(p_ℓ), ℓ = 1, 2, ..., L − 1. Also, every p_ℓ is feasible. Let A be the current active set when p_{ℓ+1} is constructed. We replace the Eq. (3.1) by the conditions

    a_j^T ( p_{ℓ+1} − p_ℓ ) = 0,   j ∈ A,                             (3.2)

throughout this section. This replacement brings a strong advantage if one (or more) of the residuals b_j − a_j^T x, j = 1, 2, ..., m, is very small and positive, because it allows the indices of those constraints to be in A. The Eq. (3.1), however, require the choice of A at x = x_k to be a subset of { j : b_j − a_j^T x_k = 0 }. Let b_i − a_i^T x_k be tiny and positive, where i is one of the constraint indices. If i is not in the set A, then the condition b_i − a_i^T p_2 ≥ 0 is ignored in the first attempt to construct p_2, so it is likely that the p_2 of this attempt violates the i-th constraint. Then p_2 would be shifted somehow to a feasible position in R^n, and usually the new p_2 would satisfy a_i^T p_2 = b_i, with i being included in a new active set for the construction of p_3.
Thus the active set of the k-th iteration may be updated after a tiny step from p_1 = x_k to p_2, which tends to be expensive if there are many tiny steps, the work of a typical updating being O(n²). Therefore the conditions (3.2) are employed instead of the Eq. (3.1) in the LINCOA software, the actual choice of A at x_k being as follows.
For every feasible x ∈ R^n, and for the current Δ_k, we let J(x) be the set

    J(x) = { j : b_j − a_j^T x ≤ η_2 Δ_k ‖a_j‖ },                     (3.3)

where η_2 is a positive constant, the value η_2 = 0.2 being set in the LINCOA software.
In other words, the constraint index j is in J(x) if and only if the distance from x to the boundary of the j-th constraint is at most η_2 Δ_k. The choice of A at x_k is a subset of J(x_k). As in Sect. 2, the step from p_1 = x_k to p_2 is along a search direction d_1, which is defined below. This direction has the property

    a_j^T d_1 ≤ 0,   j ∈ J(x_k),                                      (3.4)

in order that every positive step along d_1 goes no closer to the boundary of the j-th constraint for every j ∈ J(x_k). It follows that, if the length of the step from p_1 to p_2 is governed by the need for p_2 to be feasible, then the length ‖p_2 − p_1‖ is greater than η_2 Δ_k, which prevents the tiny step that is addressed in the previous paragraph. This device is taken from the TOLMIN algorithm of [7]. The direction d_1 is the unique vector d that minimizes ‖g_k + d‖² subject to the conditions (3.4), and the choice of A at x_k is derived from properties of d_1. Let I(x_k) be the set

    I(x_k) = { j ∈ J(x_k) : a_j^T d_1 = 0 }.                          (3.5)

If j is in J(x_k) but not in I(x_k), then a sufficiently small perturbation to the j-th constraint does not alter d_1. It follows that d_1 is also the vector d that minimizes ‖g_k + d‖² subject to a_j^T d = 0, j ∈ I(x_k), and the active set A at x_k is chosen so that the gradients a_j, j ∈ A, are a basis of the linear subspace spanned by a_j, j ∈ I(x_k). The actual choice of basis does not matter, because the conditions (3.2) are always equivalent to a_j^T ( p_{ℓ+1} − p_ℓ ) = 0, j ∈ I(x_k). In the LINCOA software, d_1 is calculated by the Goldfarb–Idnani algorithm [5] for quadratic programming. A subset A of J(x_k) is updated until it becomes the required active set. Let d(A) be the vector d that minimizes ‖g_k + d‖² subject to a_j^T d = 0, j ∈ A. The vectors a_j, j ∈ A, are linearly independent for every A that occurs. Also, by employing the signs of some Lagrange multipliers, every A is given the property that d(A) is the vector d that minimizes ‖g_k + d‖² subject to a_j^T d ≤ 0, j ∈ A. It follows that the Goldfarb–Idnani calculation is complete when d = d(A) satisfies all the inequalities (3.4).
Otherwise, a strict increase in ‖g_k + d(A)‖² is obtained by picking an index j ∈ J(x_k) with a_j^T d(A) > 0 and adding it to A, combined with deletions from A if necessary to achieve the stated conditions on A. For k ≥ 2, the initial A of this procedure is derived from the previous active set, which is called a "warm start". Usually, when the final A is different from the initial A, the amount of work of this part of LINCOA is within our target, namely O(n²) operations for each k.
Let A be the n × |A| matrix that has the columns a_j, j ∈ A, where A is any of the sets that occur in the previous paragraph. When A is updated in the Goldfarb–Idnani algorithm, the QR factorization of A is updated too. We let Q R be this factorization, where Q is n × |A| with orthonormal columns and where R is square, upper triangular and nonsingular. Furthermore, an n × (n − |A|) matrix Q̌ is calculated and updated such that the n × n matrix ( Q | Q̌ ) is orthogonal. We employ the matrices Q, R and Q̌ that are available when the choice of the active set at x_k is complete.
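In terms of standard linear algebra tools, the factorization and the matrix Q̌ can be illustrated as follows: a complete QR factorization supplies both the columns Q that span the range of A and the complementary columns that play the part of Q̌. This is a sketch with made-up data, not LINCOA code, which maintains these factors by updating rather than refactorizing.

```python
import numpy as np

rng = np.random.default_rng(1)
n, mA = 6, 2                           # n variables, |A| active constraints
A = rng.standard_normal((n, mA))       # columns are the gradients a_j, j in A
g = rng.standard_normal(n)             # plays the part of g_k

# complete QR factorization: the first |A| columns give Q, and the
# remaining n - |A| columns take the role of the matrix called Q-check
Qfull, Rfull = np.linalg.qr(A, mode='complete')
Q, Qc = Qfull[:, :mA], Qfull[:, mA:]
R = Rfull[:mA, :]                      # square, upper triangular, nonsingular

d1 = -Qc @ (Qc.T @ g)                  # the projected direction of (3.7)
```

The assertions confirm that Q R reproduces A, that d1 satisfies the active constraints as equations, and that g + d1 is orthogonal to the column space of Q̌, which is the first order condition for minimizing ‖g + d‖² there.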
Indeed, the direction d that minimizes ‖g_k + d‖² subject to the equations

    a_j^T d = 0,   j ∈ A,                                             (3.6)

is given by the formula

    d_1 = −Q̌ Q̌^T g_k.                                                 (3.7)

Further, the step p_{ℓ+1} − p_ℓ satisfies the conditions (3.2) if and only if p_{ℓ+1} − p_ℓ is in the column space of Q̌. Thus, until A is changed from its choice at x_k to avoid a constraint violation, our search for a small value of Q_k(x), x ∈ R^n, subject to the active constraints and the bound ‖x − x_k‖ ≤ Δ_k, is equivalent to seeking a small value of the quadratic function Q_k(x_k + Q̌ σ), σ ∈ R^{n−|A|}, subject to ‖σ‖ ≤ Δ_k. The calculation is now without linear constraints, so we can employ some of the techniques of Sect. 2. We address the construction of Q̌ σ by Krylov subspace and truncated conjugate gradient methods in Sects. 4 and 5, respectively. Much cancellation is usual in the product Q̌^T g_k for large k, due to the first order conditions at the solution of a smooth optimization problem, and this cancellation occurs in the second term of the product d_1 = −Q̌ ( Q̌^T g_k ). Fortunately, because the definition of Q̌ provides Q̌ Q̌^T = I − Q Q^T, the product can also be expressed as d_1 = −g_k + Q ( Q^T g_k ).

Here is a simple example in two dimensions, illustrated in Fig. 1, that makes an important point. Letting x_1 and x_2 be the components of x ∈ R², and letting x_k be at the origin, we seek a relatively small value of the linear function (3.8) subject to the constraints (3.9). The feasible region of Fig. 1 contains the points that satisfy the constraints (3.9), and the steepest descent direction −∇Q_k(·) is shown too. We pick the LINCOA value η_2 = 0.2, which gives the active set {1} at x_k (Fig. 1: A change to the active set in two dimensions). Thus p_2 is at (2, 0) as shown in Fig. 1, the length of the step to p_2 being restricted by the constraints because Q_k(·) is linear. Further progress can be made from p_2 only by deleting the index 1 from A, but an empty set is still excluded by the direction of −∇Q_k(·). Therefore A is updated from {1} to {2}, which causes p_3 to be the feasible point on the trust region boundary that satisfies a_2^T ( p_3 − p_2 ) = 0.
We find that x = p_3 = (3, −1) is the x that minimizes the function (3.8) subject to the constraints (3.9), and that the algorithm supplies the strictly decreasing sequence of values Q_k(p_1) > Q_k(p_2) > Q_k(p_3). The important point of this example is that, when A is updated at p_2, it is necessary not only to add a constraint index to A but also to make a deletion from A. Furthermore, in all cases when A is updated at x = p_ℓ, say, we want the length of the step from p_ℓ to p_{ℓ+1} to be at least η_2 Δ_k if it is restricted by the linear constraints. Therefore, if a new A is required at the feasible point p_ℓ, it is generated by the procedure that is described earlier in this section, after replacing g_k by ∇Q_k(p_ℓ) and x_k by p_ℓ. We are reluctant to update the active set at p_ℓ, however, when p_ℓ is close to the boundary of the trust region. Indeed, if a new active set at p_ℓ is under consideration in the LINCOA software, then the change to A is made if and only if the distance from p_ℓ to the trust region boundary is at least a prescribed positive multiple of Δ_k.
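The displayed data (3.8) and (3.9) are not reproduced above, but the geometry of the example can be checked numerically on an assumed instance that is consistent with the stated points: the linear model Q_k(x) = −x_1 with x_k at the origin, the constraints x_2 ≤ 0 and x_1 + x_2 ≤ 2, and Δ_k = √10, so that p_3 = (3, −1) lies on the trust region boundary. These data are an illustrative reconstruction, not necessarily those of the paper.

```python
import numpy as np

# assumed instance (see the lead-in): Q_k(x) = -x_1 with x_k at the origin
def Qk(x):
    return -x[0]

a = np.array([[0.0, 1.0],      # a_1: gradient of the constraint x_2 <= 0
              [1.0, 1.0]])     # a_2: gradient of the constraint x_1 + x_2 <= 2
b = np.array([0.0, 2.0])
delta = np.sqrt(10.0)

p1 = np.array([0.0, 0.0])
p2 = np.array([2.0, 0.0])      # step along -grad Q_k, stopped by constraint 2
p3 = np.array([3.0, -1.0])     # after the active set is updated from {1} to {2}

def feasible(x):
    return np.all(a @ x <= b + 1e-12) and np.linalg.norm(x) <= delta + 1e-12

# crude check of the optimality of p3 by dense sampling of the feasible region
grid = np.linspace(-delta, delta, 201)
best = min(Qk(np.array([u, v])) for u in grid for v in grid
           if feasible(np.array([u, v])))
```

Under these assumed data, all the stated properties hold: the points are feasible, p_3 is on the trust region boundary, the model values decrease strictly, the step from p_2 to p_3 keeps the second constraint active, and no sampled feasible point improves on p_3.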

Krylov subspace methods
Let the active set A be chosen at the trust region centre p_1 = x_k as in Sect. 3. We recall from the paragraph that includes Eq. (3.7) that the minimization of Q_k(x), x ∈ R^n, subject to the active and trust region constraints is equivalent to the minimization of the quadratic function

    Q_k^red(σ) = Q_k( x_k + Q̌ σ ),   σ ∈ R^{n−|A|},                   (4.2)

subject only to the trust region bound ‖σ‖ ≤ Δ_k, where Q̌ is an n × (n − |A|) matrix with orthonormal columns that are orthogonal to a_j, j ∈ A. Because there is a trust region bound but no linear constraints on σ, either the Krylov subspace method or the conjugate gradient method can be taken from Sect. 2 to construct a relatively small value of Q_k^red(σ), ‖σ‖ ≤ Δ_k. The Krylov subspace alternative receives attention in this section. First we express it in terms of the original variables x ∈ R^n.
Recalling that the Krylov subspace K_ℓ in Sect. 2 is the span of the vectors H_k^{j−1} g_k ∈ R^n, j = 1, 2, ..., ℓ, we now let K_ℓ be the subspace of R^{n−|A|} that is spanned by the vectors

    ( Q̌^T H_k Q̌ )^{j−1} Q̌^T g_k,   j = 1, 2, ..., ℓ.

Further, corresponding to the sequence p_ℓ ∈ R^n, ℓ = 1, 2, 3, ..., in Sect. 2, we consider the sequence s_ℓ ∈ R^{n−|A|}, ℓ = 1, 2, 3, ..., where s_1 is zero, and where the ℓ-th iteration of the Krylov subspace method sets s_{ℓ+1} to the vector σ in K_ℓ that minimizes Q_k^red(σ) subject to ‖σ‖ ≤ Δ_k. The analogue of Eq. (2.11) is that we write s_{ℓ+1} as the sum

    s_{ℓ+1} = Σ_{i=1}^{ℓ} θ_i w_i,

where the vectors w_i ∈ R^{n−|A|}, i = 1, 2, ..., ℓ, are an orthonormal basis of K_ℓ, and, for ℓ ≥ 2, the Arnoldi formula (2.14) supplies the vector w_ℓ, with H_k and g_k replaced by Q̌^T H_k Q̌ and Q̌^T g_k. Equation (2.10), which is very useful for trust region calculations without linear constraints, takes the form

    w_i^T ( Q̌^T H_k Q̌ ) w_j = 0,   |i − j| ≥ 2,

so again there are at most two nonzero terms in the sum of the Arnoldi formula. Instead of generating each w_i explicitly, however, we work with the vectors v_i = Q̌ w_i ∈ R^n, i ≥ 1, which are analogous to the vectors v_i in the second half of Sect. 2. In particular, because Q̌^T Q̌ is the (n − |A|) × (n − |A|) unit matrix, they enjoy the orthonormality property

    v_i^T v_j = w_i^T w_j,                                            (4.5)

for all relevant positive integers i and j. Furthermore, Eqs. (4.2) and (4.5) give Q_k^red( Σ_{i=1}^{ℓ} θ_i w_i ) = Q_k( x_k + Σ_{i=1}^{ℓ} θ_i v_i ).
It follows that the required values of the parameters θ_i, i = 1, 2, …, ℓ, are given by the minimization of a function of θ ∈ R^ℓ subject to the trust region bound ‖θ‖ ≤ Δ_k, which is like the calculation in the paragraph that includes expressions (2.11)-(2.13).
In order to construct v_i, i ≥ 1, we replace w_i in the definition v_i = Q̌w_i by a vector that is available. The quadratic (4.2) supplies the required first and second derivatives, and, as before, only the last two terms of its sum can be nonzero, due to Eqs. (4.5) and (4.12).
The Krylov subspace method with the active set A is now very close to the method described in the two paragraphs that include Eqs. (2.11)-(2.14). The first change to the description in Sect. 2 is that p₂ no longer takes the form (2.1). Tridiagonality of the second derivative matrix of the function of θ is retained, being a consequence of the properties (4.5) and (4.12). The last ℓ − 1 components of its gradient at θ = 0 are still zero, but the first component is now ±‖Q̌Q̌ᵀg_k‖ = ±‖Q̌ᵀg_k‖. Finally, the Arnoldi formula (2.14) is replaced by expression (4.13). We retain termination if inequality (2.6) holds.
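For a symmetric H_k, the property that at most two terms of the Arnoldi sum are nonzero is the classical Lanczos three-term recurrence, and it is what makes the projected Hessian tridiagonal. A minimal sketch of this property, with illustrative names and random test data rather than anything taken from LINCOA:

```python
import numpy as np

def lanczos_basis(H, g, m):
    """Arnoldi process for a symmetric H: orthogonalising H v_i against the
    basis needs only the last two vectors, because the earlier coefficients
    vanish in exact arithmetic (illustrative sketch, not LINCOA code)."""
    V = np.zeros((len(g), m))
    V[:, 0] = g / np.linalg.norm(g)
    for i in range(m - 1):
        w = H @ V[:, i]
        w -= (V[:, i] @ w) * V[:, i]                 # last term of the sum
        if i > 0:
            w -= (V[:, i - 1] @ w) * V[:, i - 1]     # second-to-last term
        V[:, i + 1] = w / np.linalg.norm(w)
    return V

rng = np.random.default_rng(0)
B = rng.standard_normal((6, 6))
H = B + B.T                                   # symmetric model Hessian
g = rng.standard_normal(6)
V = lanczos_basis(H, g, 4)
T = V.T @ H @ V                               # projected Hessian
assert np.allclose(V.T @ V, np.eye(4), atol=1e-8)   # orthonormal v_i
assert np.allclose(np.triu(T, 2), 0.0, atol=1e-8)   # tridiagonal
```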
The Krylov method was employed in an early version of the LINCOA software. If the calculated point (4.7) violates an inactive linear constraint, then p_{ℓ+1} has to be replaced by a feasible point, and often the active set is changed. Some difficulties may occur, however, which are addressed in the remainder of this section. Because of them, the current version of LINCOA applies the version of conjugate gradients given in Sect. 5, instead of the Krylov method.
Here is an instructive example in only two variables with an empty active set, where both p₁ = x_k and p₂ are feasible, but the p₃ of the Krylov method violates a constraint. We seek a small value of the quadratic model (4.14) subject to ‖x‖ ≤ 1 and x₂ ≤ 0.2, the trust region centre being at the origin, which is well away from the boundary of the linear constraint. The point p₂ has the form (2.1), where p₁ = x_k = 0 and g_k = ∇Q_k(x_k) = (−50, 0)ᵀ. The function Q_k(p₁ − αg_k), α ∈ R, is linear, so p₂ is on the trust region boundary at (1, 0)ᵀ, which is also well away from the boundary of the linear constraint. Therefore p₃ has the form (2.11) with ℓ = 2, the coefficients θ₁ and θ₂ being calculated to minimize expression (2.12) subject to ‖θ‖ ≤ 1. In other words, because n = 2, the point p₃ is the vector x that minimizes the model (4.14) subject to ‖x‖ ≤ 1, and one can verify that this point is the unconstrained minimizer of the strictly convex quadratic function Q_k(x) + 47‖x‖², x ∈ R². Thus the sequence p₁, p₂, p₃ gives strictly decreasing values of Q_k(·). The point p₃ is unacceptable, however, as it violates the constraint x₂ ≤ 0.2. The condition Q_k(p₃) < Q_k(p₂) suggests that it may be helpful to consider the function (4.16) of one variable in order to move to the feasible point on the straight line through p₂ and p₃ that provides the least value of Q_k(·) subject to the trust region bound. Feasibility demands α ≤ 0.25, but the function (4.16) increases strictly monotonically for 0 ≤ α ≤ 17/64, so all positive values of α are rejected. Furthermore, all negative values of α are excluded by the trust region bound. In this case, therefore, the point p₃ of the Krylov method fails to provide an improvement over p₂, even when p₃ − p₂ is used as a search direction from p₂. On the other hand, p₂ can be generated easily by the conjugate gradient method, and then x_k^+ = p₂ is chosen, because p₂ is on the trust region boundary.
The minimization of the function (4.14) subject to the constraints (4.17) is also instructive. The data p₁ = x_k = 0 and g_k = (−50, 0)ᵀ imply again that the step from p₁ to p₂ is along the first coordinate direction, and now p₂ is at (0.6, 0)ᵀ, because of the last of the conditions (4.17). The move from p₂ is along the boundary of x₁ + 0.1x₂ ≤ 0.6, the index of this constraint being put into the active set A, so p₃ and Q_k(p₃) take the values (4.18). A further example minimizes the function (4.19) subject to the constraints (4.20), where again the trust region centre x_k is at the origin. The step to p₂ from p₁ = x_k is along the direction −∇Q_k(x_k) = (0, 4, 1)ᵀ, and, because Q_k(·) has negative curvature along this direction, the step is the longest one allowed by the constraints, giving p₂ = (0, 4, 1)ᵀ. Further, the active set becomes nonempty, in order to follow the boundary of x₃ ≤ 1, which, after the step from p₁ to p₂, reduces the calculation to the minimization of the function (4.21) of two variables, Q_k(x̃) say, subject to ‖x̃‖ ≤ 5, starting at the point p̃₁ = (0, 4)ᵀ, although the centre of the trust region is still at the origin. The Krylov subspace K_ℓ, ℓ ≥ 1, for the minimization of Q_k(x̃), x̃ ∈ R², is the space spanned by the vectors (∇²Q_k)^{j−1} ∇Q_k(p̃₁), j = 1, 2, …, ℓ. The example has been chosen so that the matrix ∇²Q_k is diagonal and so that ∇Q_k(p̃₁) is a multiple of the first coordinate vector. It follows that all searches in Krylov subspaces, starting at p̃₁ = (0, 4)ᵀ, cannot alter the value x̃₂ = 4 of the second variable, assuming that computer arithmetic is without rounding errors. The search from p̃₁ to p̃₂ is along the direction −∇Q_k(p̃₁) = (10, 0)ᵀ. It goes to the point p̃₂ = (3, 4)ᵀ on the trust region boundary, which corresponds to p₃ = (3, 4, 1)ᵀ. Here Q_k(·) is least subject to ‖x̃‖ ≤ 5 and x̃₂ = 4, so the iterations of the Krylov method are complete.
They generate a monotonically decreasing sequence of values of Q_k(·). In this case, the Krylov method brings the remarkable disadvantage that Q_k(p̃₂) is not the least value of Q_k(x̃) subject to ‖x̃‖ ≤ 5. Indeed, Q_k(x̃) = −25 is achieved by x̃ = (5, 0)ᵀ, for example. The conjugate gradient method would also generate the sequence p_ℓ, ℓ = 1, 2, 3, of the Krylov method, with termination at p₃ because it is on the boundary of the trust region. A way of extending the conjugate gradient alternative by searching round the trust region boundary is considered in Sect. 6. It can calculate the vector x̃ that minimizes the quadratic (4.21) subject to ‖x̃‖ ≤ 5, and it is included in the NEWUOA software for unconstrained optimization without derivatives [10].

Conjugate gradient methods
Taking the decision to employ conjugate gradients instead of Krylov subspace methods in the LINCOA software provided much relief, because the difficulties in the second half of Sect. 4 are avoided. In particular, if a step of the conjugate gradient method goes from p_ℓ ∈ R^n to p_{ℓ+1} ∈ R^n, then the quadratic model along this step, which is the function Q_k(p_ℓ + α{p_{ℓ+1} − p_ℓ}), 0 ≤ α ≤ 1, decreases strictly monotonically. The initial point of every step is feasible. It follows that, if one or more of the linear constraints (1.1) are violated at x = p_{ℓ+1}, then it is suitable to replace p_{ℓ+1} by the point p_new of Eq. (5.1), where α* is the greatest α ∈ R such that p_new is feasible. Thus 0 ≤ α* < 1 holds, and p_new is on the boundary of a constraint whose index is not in the current A.
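The computation of α* is a standard ratio test over the linear constraints. A minimal sketch, assuming constraints of the form a_jᵀx ≤ b_j with the gradients a_j stored as columns of a matrix (all names and the toy data are illustrative, not LINCOA's):

```python
import numpy as np

def largest_feasible_alpha(p, step, A, b):
    """Greatest alpha in [0, 1] with A^T (p + alpha * step) <= b, assuming
    that the starting point p itself is feasible (a sketch of the ratio
    test behind the choice of alpha* in Eq. (5.1))."""
    alpha = 1.0
    for a_j, b_j in zip(A.T, b):
        slope = a_j @ step
        if slope > 0.0:        # only constraints increasing along the step can bind
            alpha = min(alpha, (b_j - a_j @ p) / slope)
    return max(alpha, 0.0)

# constraints x1 <= 1 and x2 <= 0.2, with a step that overshoots the second one
A = np.array([[1.0, 0.0],
              [0.0, 1.0]])     # columns a_j
b = np.array([1.0, 0.2])
p = np.zeros(2)
step = np.array([0.5, 0.4])
alpha_star = largest_feasible_alpha(p, step, A, b)
assert np.isclose(alpha_star, 0.5)   # p_new lands on the boundary of x2 <= 0.2
```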
The conjugate gradient method of Sect. 2 without linear constraints can be applied to the reduced function (5.2) after generating the active set A at p₁, as suggested in the paragraph that includes Eq. (3.7). Thus, starting at σ = σ₁ = 0, and until a termination condition is satisfied, we find the conjugate gradient search directions of the function (5.2) in R^{n−|A|}. Letting these directions be d_ℓ^red, ℓ = 1, 2, 3, …, they have the downhill property (5.3), and the new point is σ_{ℓ+1} = σ_ℓ + α_ℓ d_ℓ^red, where α_ℓ is the value of α ≥ 0 that minimizes Q_k^red(σ_ℓ + α d_ℓ^red) subject to a trust region bound. Equation (2.2) for the reduced problem provides the directions (5.4), where β_ℓ is defined by the conjugacy condition (5.5). As in Sect. 4, we prefer to work with the original variables x ∈ R^n, instead of with the reduced variables σ ∈ R^{n−|A|}, so we express the techniques of the previous paragraph in terms of the original variables. In particular, the line search of the ℓ-th conjugate gradient iteration sets α_ℓ to the value of α ≥ 0 that minimizes Q_k(p_ℓ + α d_ℓ), and there is no change to the value of β_ℓ. Because Eqs. (5.6) and (4.9) give the identities (5.8), the condition (5.5) that defines β_ℓ is just d_ℓᵀ H_k d_{ℓ−1} = 0, which agrees with the choice of β_ℓ in formula (2.2). Therefore, until a termination condition holds, the conjugate direction method for the active set A is the same as the conjugate direction method in Sect. 2, except that the gradients ∇Q_k(p_ℓ), ℓ ≥ 1, are multiplied by the projection operator Q̌Q̌ᵀ. Thus the directions (5.7) have the property a_jᵀ d_ℓ = 0, j ∈ A, in order to satisfy the constraints (3.2).
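The projected form of the iteration can be sketched as follows, with the projection Q̌Q̌ᵀ passed in as a matrix P. This is an illustrative reconstruction that simply truncates at the trust region boundary instead of performing LINCOA's boundary step; all names and the toy data are assumptions.

```python
import numpy as np

def projected_cg(H, g, P, delta, max_iter=50):
    """Truncated conjugate gradients in the original variables, with every
    gradient multiplied by the projector P = Qc Qc^T (illustrative sketch;
    the boundary and negative-curvature cases are simply truncated here)."""
    p = np.zeros_like(g)
    r = P @ g                        # projected gradient at p_1
    d = -r
    for _ in range(max_iter):
        if np.linalg.norm(r) < 1e-12:
            break
        curv = d @ (H @ d)
        if curv <= 0.0:
            break                    # LINCOA would step to the boundary here
        alpha = (r @ r) / curv
        if np.linalg.norm(p + alpha * d) >= delta:
            break                    # LINCOA would cut the step at the boundary
        p = p + alpha * d
        r_new = P @ (g + H @ p)      # projected gradient at the new point
        d = -r_new + ((r_new @ r_new) / (r @ r)) * d
        r = r_new
    return p

# one active constraint with gradient a = (0, 0, 1)^T
a = np.array([0.0, 0.0, 1.0])
P = np.eye(3) - np.outer(a, a)       # projector onto the null space of a^T
H = np.diag([4.0, 2.0, 1.0])
g = np.array([-8.0, -2.0, 5.0])
p = projected_cg(H, g, P, delta=10.0)
assert abs(a @ p) < 1e-10            # every step satisfies a^T d = 0
assert np.allclose(p, [2.0, 1.0, 0.0])   # reduced minimizer, interior here
```

Because each direction is built from projected gradients, every step automatically satisfies a_jᵀd = 0 for the active constraints, as stated above for the directions (5.7).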
The conditions of LINCOA for terminating the conjugate gradient steps for the current active set A are close to the conditions in the paragraph that includes expressions (2.4)-(2.6). Again there is termination with x_k^+ = p_ℓ if inequality (2.4) or (2.5) holds, where ᾱ is still the nonnegative value of α such that p_ℓ + α d_ℓ is on the trust region boundary. Alternatively, the new point p_{ℓ+1} = p_ℓ + α_ℓ d_ℓ is calculated. If the test (2.6) is satisfied, or if p_{ℓ+1} is a feasible point on the trust region boundary, or if p_{ℓ+1} is any feasible point with ℓ = n − |A|, then the conjugate gradient steps for the current iteration number k are complete, the value x_k^+ = p_{ℓ+1} being chosen, except that x_k^+ = p_new is preferred if p_{ℓ+1} is infeasible, as suggested in the first paragraph of this section. Another possibility is that p_{ℓ+1} is a feasible point that is strictly inside the trust region with ℓ < n − |A|. Then ℓ is increased by one in order to continue the conjugate gradient steps for the current A. In all other cases, p_{ℓ+1} is infeasible, and we let p_new be the point (5.1).
In these other cases, a choice is made between ending the conjugate gradient steps with x_k^+ = p_new, or generating a new active set at p_new. We recall from the last paragraph of Sect. 3 that the LINCOA choice is to update A if and only if the distance from p_new to the trust region boundary is at least η₂Δ_k. Furthermore, after using the notation p₁ = x_k at the beginning of the k-th iteration as in Sect. 2, we now revise the meaning of p₁ to p₁ = p_new whenever a new active set is constructed at p_new. Thus the description in this section of the truncated conjugate gradient method is valid for every A.
The number of conjugate gradient steps is finite for each A, even if we allow ℓ to exceed n − |A|. Indeed, failure of the termination test (2.6) gives the condition (5.9), which cannot happen an infinite number of times, as Q_k(x), ‖x − x_k‖ ≤ Δ_k, is bounded below. Also, the number of new active sets is finite for each iteration number k, which can be proved in the following way. Let p₁ be different from x_k due to a previous change to the active set, and let the work with the current A be followed by the construction of another active set at the point (5.1) with ℓ = ℓ*, say. Then condition (5.9) and the strict decrease in Q_k(·) along the steps supply the bound (5.10), which is helpful to our proof for ℓ* ≥ 2, because 0 < η₁ < 1, 0 < η₂ < 1 and ℓ* ≥ 2. In the alternative case ℓ* = 1, we recall that the choice of A gives the search direction d₁ the property a_jᵀd₁ ≤ 0, j ∈ J(p₁), where J(x), x ∈ R^n, is the set (3.3). Thus the infeasibility of p₂ and Eq. (5.1) yield ‖p_new − p₁‖ > η₂Δ_k. Moreover, the failure of the termination condition (2.5) for ℓ = 1 and the bound α₁‖d₁‖ < 2Δ_k supply the inequality (5.11), the first line being true because p_new − p₁ is a multiple of d₁. Now the function φ(α) = Q_k(p₁ + α{p_new − p₁}), 0 ≤ α ≤ 1, is a quadratic that decreases monotonically, which implies φ(0) − φ(1) ≥ ½|φ′(0)|, and |φ′(0)| is the left hand side of expression (5.11). Thus, remembering ‖p_new − p₁‖ > η₂Δ_k, we deduce the property (5.12), which gives the bound (5.13) in the case ℓ* = 1. It follows from expressions (5.10) and (5.13) that, whenever consecutive changes to the active set occur, the new value of Q_k(x_k) − Q_k(p₁) is greater than the old one multiplied by (1 + ¼η₁η₂). Again, due to the boundedness of Q_k(x), ‖x − x_k‖ ≤ Δ_k, this cannot happen an infinite number of times, so the number of changes to the active set is finite for each k.
An extension of the given techniques for truncated conjugate gradients subject to linear constraints is included in LINCOA, which introduces a tendency for x_k^+ to be on the boundaries of the active constraints. Without the extension, more changes than necessary are often made to the active set in the following way. Let the j-th constraint be in the current A, which demands the condition

b_j − a_jᵀp₁ ≤ η₂Δ_k‖a_j‖ (5.14)

for the current Δ_k, let p_curr be the current p₁, and let (b_j − a_jᵀp_curr)/(η₂‖a_j‖) be greater than the final trust region radius. Further, let the directions of a_j and ∇F(x) for every relevant x be such that it would be helpful to keep the j-th constraint active for the remainder of the calculation. The j-th constraint cannot be in the final active set, however, unless changes to p₁ cause (b_j − a_jᵀp₁)/(η₂‖a_j‖) to be at most the final value of Δ_k. On the other hand, while j is in A, condition (3.2) shows that all the conjugate gradient steps give b_j − a_jᵀp_{ℓ+1} = b_j − a_jᵀp_ℓ. Thus the index j may remain in A until Δ_k becomes less than (b_j − a_jᵀp_curr)/(η₂‖a_j‖). Then j has to be removed from A, which allows the conjugate gradient method to generate a new p₁ that supplies the reduction b_j − a_jᵀp₁ < b_j − a_jᵀp_curr. If j is the index of a linear constraint that is important to the final vector of variables, however, then j will be reinstated in A by yet another change to the active set.
The projected steepest descent direction −Q̌Q̌ᵀ∇Q_k(p₁) is calculated as before when the new active set A is constructed at p₁, but now we call this direction d_old, because the main feature of the extension is that it picks a new direction d₁ for the formula p₂ = p₁ + α₁d₁, in order that the residuals of the active constraints can be reduced by the move from p₁ to p₂. We still require d_old to be a direction of descent, so the conjugate gradient steps are terminated by setting x_k^+ = p₁ if d_oldᵀ∇Q_k(p₁) ≥ 0 holds in practice, which is the test (2.4). Also, if all the residuals b_j − a_jᵀp₁, j ∈ A, are sufficiently small, which means in LINCOA that they satisfy the bounds (5.15), then the conjugate gradient steps from p₁ are as before, with d₁ = d_old.
In all other cases, the extension begins by calculating the shortest step from p₁ to the boundaries of the active constraints. This step is the vector d_perp, say, in the column space of A that satisfies Aᵀd_perp = r, where A has the columns a_j, j ∈ A, and where r has the components b_j − a_jᵀp₁, j ∈ A. We recall from near the middle of Sect. 3 that the QR factorization A = QR is available, which assists the construction of d_perp. Indeed, it has the form d_perp = Qs, and s is defined by the equations (5.16). The direction d₁ of formula (5.17), with θ = 1, would bring the advantages that the length of the step from p₁ to p₂ is greater than η₂Δ_k, and p₂ would be on the boundaries of all the active constraints. This point p₂, however, can be infeasible and can violate the trust region bound, although p₁ + d₁ would satisfy all the constraints in the case θ = 0. Therefore the extension picks the direction (5.17), after setting θ to the greatest number in [0, 1] such that all the constraints hold at p₁ + d₁. The value θ = 0 may occur, and then d₁ is a multiple of d_old, so we would generate the sequence of steps from p₁ by the conjugate gradient method without the extension. When θ is positive in formula (5.17), we pick p₂ = p₁ + α₁d₁, where α₁ is the value of α that minimizes Q_k(p₁ + αd₁), 0 ≤ α ≤ 1. The point p₂ is feasible due to the choice of θ, but, for every α > 1, at least one constraint is violated at p₁ + αd₁, in the usual case when p₁ + d₁ is on the boundary of a constraint that is satisfied as a strict inequality at p₁. Our step from p₁ to p₂ achieves the reductions (5.18) in the residuals of the active constraints.
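The computation of d_perp from the factorization A = QR can be sketched directly: writing d_perp = Qs gives Aᵀ(Qs) = RᵀQᵀQs = Rᵀs, so s solves the lower triangular system Rᵀs = r. A minimal illustration (the function name and data are hypothetical):

```python
import numpy as np

def shortest_boundary_step(A, r):
    """Shortest d_perp in the column space of A with A^T d_perp = r:
    with A = QR, write d_perp = Q s, so that R^T s = r (illustrative)."""
    Q, R = np.linalg.qr(A)           # reduced factorization, Q is n x |A|
    s = np.linalg.solve(R.T, r)      # lower triangular system
    return Q @ s

# two active constraints in R^3 and their residuals b_j - a_j^T p_1
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])           # columns a_j
r = np.array([0.3, 0.1])
d_perp = shortest_boundary_step(A, r)
assert np.allclose(A.T @ d_perp, r)  # reaches every constraint boundary
# minimality: d_perp has no component in the null space of A^T
null_vec = np.array([1.0, 1.0, -1.0])
assert abs(null_vec @ d_perp) < 1e-10
```

Restricting d_perp to the column space of A is what makes it the shortest such step, since any component in the null space of Aᵀ would increase the length without changing Aᵀd_perp.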
The following argument shows that α₁ is positive in Eq. (5.18). We have to rule out α₁ = 0, so it is sufficient to establish d₁ᵀ∇Q_k(p₁) < 0, and we recall that termination with x_k^+ = p₁ would have happened earlier in the case d_oldᵀ∇Q_k(p₁) ≥ 0. Hence, because of the choice (5.17), it is sufficient to prove d_perpᵀ∇Q_k(p₁) ≤ 0. We find in the paragraph that includes expressions (3.4)-(3.6) that d_old is the vector d that minimizes ‖∇Q_k(p₁) + d‖ subject to a_jᵀd ≤ 0, j ∈ A, and the first order conditions at the solution of this quadratic programming problem provide the relation ∇Q_k(p₁) + d_old = Σ_{j∈A} λ_j a_j for some multipliers λ_j that are all nonpositive. Moreover, d_perp is derived from the equations a_jᵀd_perp = r_j, j ∈ A, each r_j = b_j − a_jᵀp₁ being nonnegative because p₁ is feasible. Thus, remembering that d_perp is orthogonal to d_old, we deduce the required inequality d_perpᵀ∇Q_k(p₁) = Σ_{j∈A} λ_j r_j ≤ 0. We now know from the property (5.18) that, with the extension, the step from p₁ to p₂ achieves a strict reduction in every positive residual of an active constraint.
The conjugate gradient steps end with x_k^+ = p₂ if Q_k(p₂) is the least value of Q_k(p₁ + αd₁), α ≥ 0, and if condition (2.6) holds for ℓ = 1. Otherwise, a search direction d₂ is chosen for a step from p₂ to p₃. The constraints (3.2) with ℓ = 2 require d₂ to be in the column space of Q̌, but the direction (5.17) is not in this space for θ > 0. Therefore β₂ has to be zero in formula (5.7), giving d₂ = −Q̌Q̌ᵀ∇Q_k(p₂). It is convenient to call this direction d₁ = −Q̌Q̌ᵀ∇Q_k(p₁), so we replace p₂ by p₁ without making a further change to A. Thus the given description of the truncated conjugate gradient method for the current A becomes valid for the continuation of the method. Formula (5.7) still provides search directions that are mutually conjugate, because the new d₁ is a projected steepest descent direction.

Moves round the trust region boundary
The conjugate gradient method of Sect. 5 terminates at x_k^+ = p_{ℓ+1} if p_{ℓ+1} is a feasible point on the boundary of the trust region, but usually a move round the boundary can generate another feasible point, x_k^{++} say, that provides the strict reduction (6.1). In the case (4.19)-(4.20), for example, the conjugate gradient method yields a point that admits such a further reduction, as mentioned at the end of Sect. 4. The early versions of the LINCOA software included an extension to the conjugate gradient method that seeks reductions in Q_k(·) by searching round the trust region boundary, which is described briefly below. Now, however, LINCOA has been made simpler by the removal of the extension, because of some numerical results that are also given in this section.
A move from x_k^+ to x_k^{++} round the trust region boundary is made by the early versions of LINCOA only if the Taylor series linear function suggests that Q_k(x_k^{++}) is going to be substantially less than Q_k(x_k^+). Indeed, a move is made only if condition (6.2) holds, which corresponds to the test (2.17). The vector x_k^{++} is not calculated explicitly, however, because, using ‖x_k^+ − x_k‖ = Δ_k, it can be shown that the predicted reduction is available from quantities that have been calculated already. Thus condition (6.2) causes termination if the vectors Q̌ᵀ(x_k^+ − x_k) and Q̌ᵀ∇Q_k(x_k^+) are parallel with opposite signs. Also, a difficulty is avoided by termination in the highly unusual case when these two vectors are parallel (or nearly parallel) with the same sign. Specifically, there is a search round the trust region boundary from x_k^+ only if x_k^+ has the property (6.4). When condition (6.4) holds, the vectors Q̌Q̌ᵀ(x_k^+ − x_k) and Q̌Q̌ᵀ∇Q_k(x_k^+) are linearly independent, and we let S ⊂ R^n be the two dimensional linear space that is spanned by them. Each move has the form x_k^{++} = x_k^+ + s(θ) of Eq. (6.5), where v is the vector in S that is orthogonal to Q̌Q̌ᵀ(x_k^+ − x_k), and that satisfies ‖v‖ = ‖Q̌Q̌ᵀ(x_k^+ − x_k)‖ and vᵀ∇Q_k(x_k^+) < 0. The value of θ is calculated by seeking the first local minimum of the function (6.6), φ(θ) say, subject to the feasibility of x_k^+ + s(θ). The inequality vᵀ∇Q_k(x_k^+) < 0 supplies φ′(0) < 0.
Equations (1.2) and (6.5) show that the function (6.6) is a trigonometric polynomial of degree two. The coefficients of this polynomial are generated, the amount of work when n is large being dominated by the need for the product H_k v, the product H_k Q̌Q̌ᵀ(x_k^+ − x_k) being available. We calculate an estimate, θ* say, of the least positive value of θ that satisfies φ′(θ) = 0, the relative error of the estimate being at most 0.01. By considering every inactive constraint whose boundary is within distance Δ_k of x_k, the value of θ* is reduced if necessary so that all the points x_k^+ + s(θ), 0 ≤ θ ≤ θ*, are feasible. Then we make the choice x_k^{++} = x_k^+ + s(θ*).

We take the view that x_k^{++} is a new vector x_k^+. There are no more searches round the trust region boundary for the current k if an inactive constraint causes a decrease in θ* in the previous paragraph, or if the change above to x_k^+ reduces Q_k(x_k^+) by an amount that is at most the new value of η₁{Q_k(x_k) − Q_k(x_k^+)}, which corresponds to the test (2.6), or if condition (6.4) fails for the new x_k^+. Otherwise, another move is made from x_k^+ to a new x_k^{++} in the way that has been described already. This procedure is continued until termination occurs.
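The parameterisation of the boundary search, and the degree-two trigonometric character of φ(θ), can be illustrated as follows. The rotation formula s(θ) = cos θ (x_k^+ − x_k) + sin θ v is an assumption consistent with the stated properties of v (orthogonality, equal length, downhill); the quadratic model and the interpolation nodes are toy data.

```python
import numpy as np

def boundary_arc(xk, x_plus, v, theta):
    """Rotate x_plus - xk towards v; since v is orthogonal to x_plus - xk
    and of the same length, the point stays on the trust region boundary
    (an assumed form of the move s(theta) of Sect. 6)."""
    return xk + np.cos(theta) * (x_plus - xk) + np.sin(theta) * v

xk = np.zeros(2)
x_plus = np.array([3.0, 0.0])        # on the boundary, Delta_k = 3
v = np.array([0.0, 3.0])             # orthogonal to x_plus - xk, same length
for theta in np.linspace(0.0, 1.5, 7):
    x = boundary_arc(xk, x_plus, v, theta)
    assert np.isclose(np.linalg.norm(x - xk), 3.0)   # stays on the boundary

# phi(theta) = Q_k(x(theta)) is a trigonometric polynomial of degree two,
# so five interpolation nodes reproduce it exactly (toy quadratic model)
H = np.array([[2.0, 1.0], [1.0, 4.0]])
g = np.array([-1.0, -2.0])
def phi(theta):
    x = boundary_arc(xk, x_plus, v, theta)
    return g @ x + 0.5 * x @ H @ x
nodes = np.linspace(0.0, 2.0, 5)
M = np.column_stack([np.ones(5), np.cos(nodes), np.sin(nodes),
                     np.cos(2 * nodes), np.sin(2 * nodes)])
coeffs = np.linalg.solve(M, np.array([phi(t) for t in nodes]))
basis = lambda t: np.array([1.0, np.cos(t), np.sin(t),
                            np.cos(2 * t), np.sin(2 * t)])
assert np.isclose(basis(2.7) @ coeffs, phi(2.7))
```

The exact five-coefficient representation is what allows the zeros of φ′(θ) to be estimated cheaply once the two matrix-vector products are known.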
The success of searches round the trust region boundary was investigated by numerical experiments, including the application of versions of LINCOA to the following problem. We let n be even, and, for each x ∈ R^n, we set points in two dimensions whose coordinates are taken from the components of x. We seek the least value of a function of these points. LINCOA requires not only an initial vector of variables but also the initial and final values of Δ_k, which are set to 0.1 and 10⁻⁶ in these calculations. The value of NPT is also required, which is the number of interpolation conditions satisfied by each approximation Q_k(x) ≈ F(x), x ∈ R^n. The amount of routine work for each k is of magnitude NPT², due to a linear system of equations that supplies each new quadratic model, so it is helpful for NPT to be of magnitude n when n is large. The values NPT = n + 6 and NPT = 2n + 1 are compared in the numerical results of this section. Tables 1 and 2 give these results for n = 10 × 2^i, i = 0, 1, 2, 3, 4, 5. There are five cases for each n, the cases being distinguished by different random choices of the

initial vector of variables. For each application of LINCOA, we let #F be the number of calculations of the objective function, and we let #TRI (Trust Region Iterations) be the number of iterations that construct x_k^+ by the truncated conjugate gradient method of Sect. 5, with or without searches round the trust region boundary. The second and third columns of the tables show the averages to the nearest integer of #F and #TRI over the five cases for each n. We recall that every step in the construction of x_k^+ requires a vector to be multiplied by the matrix H_k = ∇²Q_k, which is the most expensive part of every step. For each k, the number of multiplications is in the range [1, 3], [4, 10] or [11, ∞). The number of occurrences of each range is counted for each test problem, the sum of these numbers being #TRI. These counts are also averaged to the nearest integer over the five cases for each n, the results being shown in the last three columns of the tables. Two versions of LINCOA are employed, the first one being the current version that is without searches round the trust region boundary, and the second one being the extension of the current version that includes the boundary searches and termination conditions of this section. The main entries in the tables have the form p/q, where p and q are the averages for the version of LINCOA without and with boundary searches, respectively. Good accuracy is achieved throughout the experiments of Tables 1 and 2, the greatest residual of a KKT first order condition at a final vector of variables being about 3 × 10⁻⁵.
The results in the last rows of both tables are highly unfavourable to searches round the trust region boundary. We find in the n = 320 row of Table 1 that the extra work of the searches causes #F to become worse by 27 %, while the 0.7 % improvement in #F in the n = 320 row of Table 2 is very expensive. Indeed, although the conjugate gradient method is truncated after at most 3 steps in 39,116 out of 48,483 applications, the method with boundary searches requires more than 10 multiplications of a vector by H k = ∇ 2 Q k in about half of its 49,428 applications. Furthermore, the boundary searches in the n = 80 and n = 160 rows of Table 2 also take much more effort, and they cause #F to increase by about 7 and 35%, respectively. Table 2 is more  relevant than Table 1 for n ≥ 80 because its values of #F are smaller. Moreover, the #F entries in the n = 40 rows of both tables suggest strongly that boundary searches are unhelpful. We give less attention to smaller values of n, because LINCOA is designed to be particularly useful for minimization without derivatives when there are hundreds of variables, by taking advantage of the discovery that the symmetric Broyden updating method makes such calculations possible. Thus Tables 1 and 2 provide excellent support for the decision, taken in 2013, to terminate the calculation of x + k by LINCOA when the steps of the conjugate gradient method reach the trust region boundary.

Further remarks and discussion
We begin our conclusions by noting that, when there are no linear constraints on the variables, the Krylov subspace method provides searches round the boundary of the trust region that compare favourably with the searches of Sect. 6. Let p_{ℓ+1} ∈ R^n and p̌_{ℓ+1} ∈ R^n be the points that are generated by the ℓ-th step of the Krylov method and by the ℓ-th step of the conjugate gradient method augmented by boundary searches, respectively, starting at the trust region centre p₁ = p̌₁ = x_k, but the greatest values of ℓ for the two methods may be different. Because p̌_{ℓ+1} − x_k is in the linear space spanned by the gradients ∇Q_k(p̌_j), j = 1, 2, …, ℓ, it is also in the Krylov subspace K_ℓ, defined in the complete paragraph between expressions (2.8) and (2.9), even if the dimension of K_ℓ is less than ℓ. Also the choice of p_{ℓ+1} by the ℓ-th step of the Krylov method satisfies p_{ℓ+1} − x_k ∈ K_ℓ, with the additional property that Q_k(p_{ℓ+1}) is the least value of Q_k(x), ‖x − x_k‖ ≤ Δ_k, subject to x − x_k ∈ K_ℓ. Thus the Krylov method enjoys the advantage Q_k(p_{ℓ+1}) ≤ Q_k(p̌_{ℓ+1}) for every ℓ that occurs for both methods. Moreover, the Krylov method terminates when K_{ℓ+1} = K_ℓ holds, which gives ℓ ≤ n even if the other conditions for termination are ignored. Usually, however, the number of boundary searches in Sect. 6 can be made arbitrarily large by letting the parameter η₁ in the test (6.4) be sufficiently small. These remarks suggest that the very poor results for searches in the last three columns of Tables 1 and 2 may be avoided if the Krylov method is applied.
Much of the effort in the development of LINCOA was spent on attempts to include the Krylov subspace method of Sect. 4 when there are linear constraints on the variables. Careful attention was given to the situation when, having chosen an active set at the point p₁ ∈ R^n, which need not be the trust region centre x_k, the method generates the sequence p_{j+1}, j = 1, 2, …, ℓ, in the trust region, and p_{ℓ+1} is the first point in the sequence that violates a linear constraint. If the function φ(α) = Q_k(p_ℓ + α{p_{ℓ+1} − p_ℓ}), 0 ≤ α ≤ 1, decreases monotonically, then the method in the first paragraph of Sect. 5 is recommended, which allows either termination with x_k^+ = p_new, or a change to A with p_new becoming the starting point p₁ of the Krylov method for the new active set, p_new being the vector (5.1).
Examples in Sect. 4 show, however, that occasionally the first derivative φ′(0) is positive, although the Krylov method gives φ(1) < φ(0) and φ′(1) ≤ 0, where φ(·) is defined above. A way of making further progress in this case is to pick θ > 0, and to let p_{ℓ+1}(θ) be the vector x that minimizes the extended quadratic function (7.1) subject to ‖x − x_k‖ ≤ Δ_k and to x − p₁ being in the current Krylov subspace. For every θ > 0, the calculation of p_{ℓ+1}(θ) is like the calculation of p_{ℓ+1} = p_{ℓ+1}(0), due to ∇²Q_k^+(·, θ) − ∇²Q_k being a multiple of the unit matrix. The function φ(α, θ) = Q_k^+(p_ℓ + α{p_{ℓ+1}(θ) − p_ℓ}, θ), 0 ≤ α ≤ 1, does decrease monotonically if θ is sufficiently large, in particular when Q_k^+(·, θ) is convex, because of the property φ′(1, θ) ≤ 0. The author has considered finding a relatively small value of θ that supplies the downhill condition φ′(0, θ) ≤ 0, followed by a change in p_{ℓ+1} from p_{ℓ+1}(0) to p_{ℓ+1}(θ). Equation (7.1) with the Krylov method shows that the move from p_ℓ to the changed p_{ℓ+1} achieves the reduction (7.2). Having replaced p_{ℓ+1} by p_{ℓ+1}(θ), where θ gives the required monotonicity, the method in the second paragraph of this section is applied if p_{ℓ+1} is infeasible. Otherwise, termination with x_k^+ = p_{ℓ+1} is suitable if condition (2.6) holds, the alternative being to continue the steps of the Krylov method for the calculation of p_{ℓ+2}, with the current active set and the original quadratic model. These techniques provide an iterative procedure that terminates for the current A, due to the argument in the paragraph that includes inequality (5.9).
We give further consideration to the Krylov method when the active set is chosen at a point p₁ that is different from the centre of the trust region. Then attention is restricted to vectors x ∈ R^n that satisfy a_jᵀ(x − p₁) = 0, j ∈ A, which allows the trust region bound ‖x − x_k‖ ≤ Δ_k to be written as the inequality ‖x − x̂_k‖ ≤ Δ̂_k (7.3), say, where x̂_k is the shifted trust region centre (7.4). Indeed, as x − p₁ is restricted to the column space of Q̌, Eq. (7.4) shows that x − x̂_k and x̂_k − x_k are in the column spaces of Q̌ and Q, respectively. Thus the identity ‖x − x_k‖² = ‖x − x̂_k‖² + ‖x̂_k − x_k‖² gives the form (7.3) of the trust region bound. Further, because Eq. (7.4) implies ‖x̂_k − x_k‖ ≤ ‖p₁ − x_k‖, and because active sets are chosen only at points p₁ that satisfy ‖p₁ − x_k‖ < Δ_k, the trust region radius Δ̂_k in expression (7.3) is positive. The form (7.3) of the trust region bound exposes some deficiencies of the Krylov method in the case p₁ ≠ x_k. The description of the method in Sect. 4 is not convenient, however, because some of the details of the computation, like the use of the Arnoldi formula (4.13), are not relevant to our discussion. Instead, we employ a definition of the Krylov subspace K_ℓ, ℓ ≥ 1, that depends on the sequence of points p_j, j = 1, 2, …, ℓ. It can be shown to be equivalent to the previous one, because the Krylov and conjugate gradient methods generate the same points while strictly inside the trust region, the direction (5.7) being in the Krylov subspace K_ℓ. Thus K₁ is the subspace (7.5) in the case ℓ = 1. Having found K₁, we let p₂ be the point given by the usual construction of p_{ℓ+1} in the case ℓ = 1. Indeed, p_{ℓ+1} is the vector x that minimizes Q_k(x), x ∈ R^n, subject to x − p₁ ∈ K_ℓ and to ‖x − x_k‖ ≤ Δ_k. The definition (7.5) and this construction are applied iteratively for ℓ = 1, 2, 3, …, until a condition for termination is satisfied, one of them being K_ℓ = K_{ℓ−1}.
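The shifted centre and radius can be reconstructed from the orthogonality just described: x̂_k is the projection of x_k onto the affine set p₁ + span(Q̌), and Δ̂_k² = Δ_k² − ‖x̂_k − x_k‖². A sketch, using data patterned on the last example of Sect. 4 (the function names are illustrative):

```python
import numpy as np

def shifted_trust_region(xk, p1, Qc, delta):
    """Shifted centre x_hat and radius delta_hat of the trust region
    restricted to the affine set p1 + span(Qc) (illustrative names)."""
    x_hat = p1 + Qc @ (Qc.T @ (xk - p1))     # projection of xk onto the set
    delta_hat = np.sqrt(delta**2 - np.linalg.norm(x_hat - xk)**2)
    return x_hat, delta_hat

# data patterned on the last example of Sect. 4: a_3 = (0,0,1)^T is active,
# p1 = (0,4,1)^T, xk = 0, and the original radius is sqrt(26)
Qc = np.array([[1.0, 0.0],
               [0.0, 1.0],
               [0.0, 0.0]])
p1 = np.array([0.0, 4.0, 1.0])
xk = np.zeros(3)
x_hat, delta_hat = shifted_trust_region(xk, p1, Qc, np.sqrt(26.0))
assert np.allclose(x_hat, [0.0, 0.0, 1.0])
assert np.isclose(delta_hat, 5.0)            # the reduced radius of Sect. 4
```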
Two obvious features of formula (7.5) are that it provides K₁ ⊂ K₂ ⊂ ⋯ ⊂ K_ℓ, and that it puts the step p_{ℓ+1} − p_ℓ into the column space of Q̌, as required by the constraints (3.2). The termination condition (2.6) is recommended. We have addressed already the situation when p_{ℓ+1} is infeasible. We compare this choice of p_{ℓ+1} to the one that is optimal if Q_k(x), x ∈ R^n, is replaced by the linear approximation (7.6), which occurs also in Eq. (2.15). The vector −Q̌Q̌ᵀ∇Q_k(p_ℓ) is the steepest descent direction of the function (7.6) that is allowed by the active constraints, so the least value of (7.6) subject to these constraints and the trust region bound (7.3) is at the point p̄_{ℓ+1}, say, of Eq. (7.7). Assuming for the moment that the second derivatives of Q_k(·) are small enough for the approximation (7.6) to Q_k(x) to be useful for ‖x − x̂_k‖ ≤ Δ̂_k, we would like the reduction Q_k(p_ℓ) − Q_k(p_{ℓ+1}) to compare favourably with the reduction predicted by (7.6) at p̄_{ℓ+1}. This hope is achieved if p̄_{ℓ+1} − p₁ is in K_ℓ, because then the calculation of p_{ℓ+1} by the Krylov method provides Q_k(p_{ℓ+1}) ≤ Q_k(p̄_{ℓ+1}), which is approximated well by (7.6). Equation (7.7) shows that p̄_{ℓ+1} − p₁ is a linear combination of x̂_k − p₁ and Q̌Q̌ᵀ∇Q_k(p_ℓ), and the definition (7.5) gives Q̌Q̌ᵀ∇Q_k(p_ℓ) ∈ K_ℓ. It follows that the Krylov method is suitable in the case x̂_k − p₁ = 0, which means that the starting point of the Krylov method for the current A is at the centre of the trust region constraint (7.3). Otherwise, the Krylov method may be disadvantageous.
We apply the remarks above to the last example of Sect. 4, where we seek a relatively small value of the function (4.19) subject to the constraints (4.20), the trust region centre x_k being at the origin. The initial active set is empty, so the move from x_k goes along the direction −∇Q_k(x_k), and it reaches the point p_1 = (0, 4, 1)^T on the boundary of the linear constraint x_3 ≤ 1. This description is in Sect. 4, except that our present notation requires the point (0, 4, 1)^T to be called p_1 instead of p_2, because a new active set is generated there to prevent violations of the linear constraint. The new set {a_j : j ∈ A} contains only the vector (0, 0, 1)^T, which gives the matrices Q̂ = (0, 0, 1)^T and Q̌ = [e_1 e_2], whose columns are the first two coordinate vectors, together with the shifted centre x̂_k = (0, 0, 1)^T and Δ̂_k = 5. Therefore the Krylov method and Eq. (7.7) provide p_2 = (3, 4, 1)^T and p̄_2 = (5, 0, 1)^T, respectively. The Krylov method is inferior due to p_1 ≠ x̂_k, the new values of Q_k(·) being Q_k(p_2) = −19.4 and Q_k(p̄_2) = −25. Furthermore, the Krylov method is unable to generate another step that yields the reduction Q_k(p_3) < Q_k(p_2). Indeed, the projected gradient Q̌Q̌^T ∇Q_k(p_2) = (−4, 0, 0)^T is parallel to Q̌Q̌^T ∇Q_k(p_1), so K_2 = K_1 occurs in the definition (7.5), which is a condition for termination.
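The geometry of this comparison can be checked numerically without the quadratic (4.19) itself, using only the points quoted above together with the shifted centre x̂_k = (0, 0, 1)^T and Δ̂_k = 5 implied by the data of the example (variable names are ours):

```python
import numpy as np

# Geometry check of the Sect. 4 example (the quadratic (4.19) is not
# reproduced here; only the quoted points and radii are used).
p1    = np.array([0.0, 4.0, 1.0])   # start, on the boundary of x_3 <= 1
x_hat = np.array([0.0, 0.0, 1.0])   # shifted trust region centre
d_hat = 5.0                          # shifted trust region radius
u     = np.array([1.0, 0.0, 0.0])   # unit projected steepest descent direction

# Krylov step: follow u from p1 until the boundary ||x - x_hat|| = d_hat.
w = p1 - x_hat
t = -np.dot(u, w) + np.sqrt(np.dot(u, w)**2 - (np.dot(w, w) - d_hat**2))
p2 = p1 + t * u                      # the Krylov point (3, 4, 1)

# Point of Eq. (7.7): a full step of length d_hat from x_hat along u.
p2_bar = x_hat + d_hat * u           # the point (5, 0, 1)
```

The Krylov step is cut short at t = 3 because it starts at p_1 rather than at the shifted centre, whereas the point of Eq. (7.7) travels the full radius, which is the geometric source of the gap between Q_k(p_2) = −19.4 and Q_k(p̄_2) = −25.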
There are three excellent reasons for starting the Krylov method for the new active set A at p_1 instead of at x̂_k when p_1 is not a trust region centre. The point x̂_k may be infeasible; the value Q_k(p_1) is the least value of Q_k(x), x ∈ R^n, found so far subject to the linear constraints and the trust region bound; and the new A has been chosen carefully so that a move from p_1 along the direction −Q̌Q̌^T ∇Q_k(p_1) does not violate a constraint until the length of the move is greater than η_2 Δ_k. Moreover, while the sequence of Krylov steps is strictly inside the trust region, the steps are suitable, because they are the same as the conjugate gradient steps in Sect. 5. When the Krylov steps move round the boundary of the trust region, however, there is the very strong objection that the definition (7.5) of the Krylov subspaces pays no attention to the actual trust region boundary, this deficiency being shown clearly by the property K_1 = K_2 in the example of the previous paragraph. Therefore, although it is argued in the first paragraph of this section that boundary searches by the Krylov method are superior to those of Sect. 6 in the case p_1 = x_k, and although this argument is valid too in the unusual situation p_1 = x̂_k ≠ x_k, we expect the crude searches of Sect. 6 to be better in the cases p_1 ≠ x̂_k. We recall also that, when p_{ℓ+1} is generated by the Krylov method, the uphill property (p_{ℓ+1} − p_ℓ)^T ∇Q_k(p_ℓ) > 0 is possible, which causes difficulties if p_{ℓ+1} is infeasible. These disadvantages led to the rejection of the Krylov method from the LINCOA software, as mentioned earlier. Nevertheless, the description of the Krylov method with linear constraints in Sect. 4 may be useful, because, in many applications of LINCOA, most of the changes to A occur at the beginning of an iteration, and then p_1 is at the trust region centre x_k.
The choice of the quadratic model Q_k(x), x ∈ R^n, for each iteration number k is important, but it is outside the range of our work. Nevertheless, because Tables 1 and 2 in Sect. 6 compare NPT = n + 6 with NPT = 2n + 1, we comment briefly on the number of interpolation conditions. When the author began to investigate the symmetric Broyden method for minimization without derivatives, as reported in the last three paragraphs of [8], NPT was chosen to be O(n) for large n, in order to allow the routine work of each iteration to be only O(n²). Comparisons were made with NPT = ½(n + 1)(n + 2), which is the number of degrees of freedom in a quadratic function. The finding that smaller values of NPT often provide much lower values of #F was a welcome surprise. For the smaller values, the second derivative matrix ∇²Q_k is usually very different from ∇²F(x_k) at the end of the calculation, even when F(·) is quadratic. It seems, therefore, that quadratic models without good accuracy can be helpful to the choice of x_{k+1}. This view is supported by the following advantage of Q_k(·) over a linear approximation to F(·).
We suppose that there are no linear constraints on the variables, and that we wish to predict whether the reduction F(x) < F(x_k) is going to occur for some x in the trust region of the k-th iteration. If a linear approximation ℓ̂_k(·) ≈ F(·), say, is employed for the prediction, and if ∇ℓ̂_k(x_k) is nonzero, then the answer is affirmative for all vectors x in the set {x : ℓ̂_k(x) < ℓ̂_k(x_k)} ∩ {x : ‖x − x_k‖ ≤ Δ_k}, which is half of the trust region on one side of the plane {x : (x − x_k)^T ∇ℓ̂_k(x_k) = 0}. On the other hand, a typical quadratic model Q_k(·) is subject to the interpolation conditions Q_k(y_i) = F(y_i), i = 1, 2, …, NPT, (7.9) with NPT > n + 1, and we expect x_k to be a best interpolation point, which means x_k = y_t, where t is an integer in [1, NPT] that satisfies F(y_t) ≤ F(y_i), i = 1, 2, …, NPT. Thus the set {x : Q_k(x) < Q_k(x_k)} is usually very different from a half plane, especially if x_k is a strictly interior point of the convex hull of the interpolation points. Indeed, the set excludes a neighbourhood of every y_i with F(y_i) > F(x_k), and searches for relatively small values of Q_k(·) stay away automatically from the current interpolation points. Quadratic models with NPT of magnitude n are obvious candidates for providing this useful feature. Furthermore, because symmetric Broyden updating takes up the freedom in each new model by minimizing the change to the model in a particular way, some helpful properties of the old model can be inherited by the new one, although ∇²Q_k may be a very bad estimate of ∇²F(x_k). The author is enthusiastic about such models, because of their success in his software for optimization without derivatives when there are hundreds of variables. Updating the quadratic model is an example of a subject that is fundamental to the development of algorithms such as LINCOA, but the subject is separate from our present work.
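A one-dimensional toy illustration (our own data, not from the paper) makes the exclusion property concrete: because a quadratic interpolant reproduces every F(y_i), its values near an interpolation point with F(y_i) > F(x_k) stay above the model value at x_k, so the sublevel set of the model avoids a neighbourhood of that point, unlike the half-line of a linear model.

```python
import numpy as np

# Three interpolation points; the best one is x_k = y[0] = 0.
y = np.array([0.0, 1.0, 2.0])
F = np.array([0.0, 5.0, 1.0])
coeffs = np.polyfit(y, F, 2)          # exact quadratic interpolant (3 points)
q = np.poly1d(coeffs)

xk = 0.0
near_bad_point = np.linspace(0.8, 1.2, 9)   # a neighbourhood of y[1]
# The model exceeds q(x_k) throughout this neighbourhood, so a search
# for small values of q stays away from the high interpolation point.
print(bool(np.all(q(near_bad_point) > q(xk))))
```

This is only a sketch of the qualitative point in the paragraph above; the actual sets in n dimensions depend on all NPT interpolation conditions.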
Another fundamental subject that has not received our attention is the choice of vectors x for the calculation of new values of F(x) on iterations that are designed to improve the quadratic model, instead of trying to achieve the reduction F(x) < F(x k ). The number of these "model iterations" is about (#F − #TRI − NPT) in Tables 1 and 2. Therefore our paper is definitely not intended to be a description of LINCOA, although it may be welcomed by users of LINCOA, because that description has not been written yet. Instead, we have studied the investigations for LINCOA into the construction of feasible vectors x that provide a sufficiently small value of Q k (x), x − x k ≤ k , subject to linear constraints. Most of the efforts of those investigations, which have taken about 2 years, were spent on promising techniques that have not been included in the software. The prime example is the Krylov subspace method, which was expected to perform better than truncated conjugate gradients, due to its attractive way of taking steps round the trust region boundary in the unconstrained case. The reason for giving so much attention to failures of the Krylov method is that our findings may be helpful to future research.