Direct search based on probabilistic feasible descent for bound and linearly constrained problems


Direct search is a methodology for derivative-free optimization whose iterations are characterized by evaluating the objective function using a set of polling directions. In deterministic direct search applied to smooth objectives, these directions must conform to the geometry of the feasible region, and typically consist of positive generators of approximate tangent cones (which renders the corresponding methods globally convergent in the linearly constrained case). It is known from the unconstrained case, however, that randomly generating the polling directions leads to better complexity bounds as well as to gains in numerical efficiency; it is then natural to consider random generation also in the presence of constraints. In this paper, we study a class of direct-search methods based on sufficient decrease for solving smooth linearly constrained problems where the polling directions are randomly generated (in approximate tangent cones). The random polling directions must satisfy probabilistic feasible descent, a concept which reduces to probabilistic descent in the absence of constraints. Such a property is instrumental in establishing almost-sure global convergence and worst-case complexity bounds that hold with overwhelming probability. Numerical results show that randomizing the polling directions can be beneficial over standard approaches with deterministic guarantees, as suggested by the respective worst-case complexity bounds.




Notes

1. When developing the convergence theory in the deterministic case, mostly in [27] and then in [23], the authors considered a formulation involving only linear inequality constraints. In [25], they suggested including linear equality constraints by adding to the positive generators of the approximate normal cone the transposed rows of A and their negatives, which in turn implies that the tangent cone lies in the null space of A. In this paper, we take an equivalent approach by explicitly considering the iterates in \({{\,\mathrm{null}\,}}(A)\). Not only does this allow a more direct application of the theory in [23, 27], but it also simplifies the treatment of the particular cases (only bounds or only linear equalities).

2. \(V^c_k\) corresponds to the coordinate vectors in \(T_k\) whose negatives do not belong to \(V_k\); the corresponding variables must therefore be subject to bound constraints. Since there are only \(n_b\) bound constraints, one has \(0 \le |V_k^c| \le n_b\).


References

1. Abramson, M.A., Brezhneva, O.A., Dennis Jr., J.E., Pingel, R.L.: Pattern search in the presence of degenerate linear constraints. Optim. Methods Softw. 23, 297–319 (2008)
2. Audet, C., Hare, W.: Derivative-Free and Blackbox Optimization. Springer Series in Operations Research and Financial Engineering. Springer, Berlin (2017)
3. Audet, C., Le Digabel, S., Peyrega, M.: Linear equalities in blackbox optimization. Comput. Optim. Appl. 61, 1–23 (2015)
4. Audet, C., Le Digabel, S., Tribes, C.: NOMAD user guide. Technical report G-2009-37, Les cahiers du GERAD (2009)
5. Audet, C., Dennis Jr., J.E.: A progressive barrier for derivative-free nonlinear programming. SIAM J. Optim. 20, 445–472 (2009)
6. Bandeira, A.S., Scheinberg, K., Vicente, L.N.: Convergence of trust-region methods based on probabilistic models. SIAM J. Optim. 24, 1238–1264 (2014)
7. Birgin, E.G., Gardenghi, J.L., Martínez, J.M., Santos, S.A., Toint, Ph.L.: Evaluation complexity for nonlinear constrained optimization using unscaled KKT conditions and high-order models. SIAM J. Optim. 26, 951–967 (2016)
8. Cartis, C., Gould, N.I.M., Toint, Ph.L.: An adaptive cubic regularization algorithm for nonconvex optimization with convex constraints and its function-evaluation complexity. IMA J. Numer. Anal. 32, 1662–1695 (2012)
9. Cartis, C., Gould, N.I.M., Toint, Ph.L.: On the complexity of finding first-order critical points in constrained nonlinear programming. Math. Program. 144, 93–106 (2014)
10. Conn, A.R., Scheinberg, K., Vicente, L.N.: Introduction to Derivative-Free Optimization. MPS-SIAM Series on Optimization. SIAM, Philadelphia (2009)
11. Diouane, Y., Gratton, S., Vicente, L.N.: Globally convergent evolution strategies for constrained optimization. Comput. Optim. Appl. 62, 323–346 (2015)
12. Dodangeh, M., Vicente, L.N., Zhang, Z.: On the optimal order of worst case complexity of direct search. Optim. Lett. 10, 699–708 (2016)
13. Dolan, E.D., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)
14. Dreisigmeyer, D.W.: Equality constraints, Riemannian manifolds and direct-search methods. Technical report LA-UR-06-7406, Los Alamos National Laboratory (2006)
15. Elster, C., Neumaier, A.: A grid algorithm for bound constrained optimization of noisy functions. IMA J. Numer. Anal. 15, 585–608 (1995)
16. Fukuda, K., Prodon, A.: Double description method revisited. In: Deza, M., Euler, R., Manoussakis, I. (eds.) Combinatorics and Computer Science: 8th Franco-Japanese and 4th Franco-Chinese Conference, Brest, France, 3–5 July 1995, Selected Papers, pp. 91–111. Springer (1996)
17. García-Palomares, U.M., García-Urrea, I.J., Rodríguez-Hernández, P.S.: On sequential and parallel non-monotone derivative-free algorithms for box constrained optimization. Optim. Methods Softw. 28, 1233–1261 (2013)
18. Gould, N.I.M., Orban, D., Toint, Ph.L.: CUTEst: a constrained and unconstrained testing environment with safe threads. Comput. Optim. Appl. 60, 545–557 (2015)
19. Gratton, S., Royer, C.W., Vicente, L.N., Zhang, Z.: Direct search based on probabilistic descent. SIAM J. Optim. 25, 1515–1541 (2015)
20. Gratton, S., Vicente, L.N.: A merit function approach for direct search. SIAM J. Optim. 24, 1980–1998 (2014)
21. Kelley, C.T.: Implicit Filtering. Software Environment and Tools. SIAM, Philadelphia (2011)
22. Kolda, T.G., Lewis, R.M., Torczon, V.: Optimization by direct search: new perspectives on some classical and modern methods. SIAM Rev. 45, 385–482 (2003)
23. Kolda, T.G., Lewis, R.M., Torczon, V.: Stationarity results for generating set search for linearly constrained optimization. SIAM J. Optim. 17, 943–968 (2006)
24. Le Digabel, S.: Algorithm 909: NOMAD: Nonlinear optimization with the MADS algorithm. ACM Trans. Math. Software 37, 44:1–44:15 (2011)
25. Lewis, R.M., Shepherd, A., Torczon, V.: Implementing generating set search methods for linearly constrained minimization. SIAM J. Sci. Comput. 29, 2507–2530 (2007)
26. Lewis, R.M., Torczon, V.: Pattern search algorithms for bound constrained minimization. SIAM J. Optim. 9, 1082–1099 (1999)
27. Lewis, R.M., Torczon, V.: Pattern search algorithms for linearly constrained minimization. SIAM J. Optim. 10, 917–941 (2000)
28. Lewis, R.M., Torczon, V.: A direct search approach to nonlinear programming problems using an augmented Lagrangian method with explicit treatment of the linear constraints. Technical report WM-CS-2010-01, College of William & Mary, Department of Computer Science (2010)
29. Liu, L., Zhang, X.: Generalized pattern search methods for linearly equality constrained optimization problems. Appl. Math. Comput. 181, 527–535 (2006)
30. Lucidi, S., Sciandrone, M.: A derivative-free algorithm for bound constrained minimization. Comput. Optim. Appl. 21, 119–142 (2002)
31. Lucidi, S., Sciandrone, M., Tseng, P.: Objective-derivative-free methods for constrained optimization. Math. Program. 92, 31–59 (2002)
32. Moré, J.J., Wild, S.M.: Benchmarking derivative-free optimization algorithms. SIAM J. Optim. 20, 172–191 (2009)
33. Moreau, J.-J.: Décomposition orthogonale d’un espace hilbertien selon deux cônes mutuellement polaires. C. R. Acad. Sci. Paris 255, 238–240 (1962)
34. Price, C.J., Coope, I.D.: Frames and grids in unconstrained and linearly constrained optimization: a nonsmooth approach. SIAM J. Optim. 14, 415–438 (2003)
35. Schilling, R.L.: Measures, Integrals and Martingales. Cambridge University Press, Cambridge (2005)
36. The Mathworks, Inc.: Global Optimization Toolbox User’s Guide, version 3.3, Oct 2014
37. Vaz, A.I.F., Vicente, L.N.: A particle swarm pattern search method for bound constrained global optimization. J. Global Optim. 39, 197–219 (2007)
38. Vaz, A.I.F., Vicente, L.N.: PSwarm: a hybrid solver for linearly constrained global derivative-free optimization. Optim. Methods Softw. 24, 669–685 (2009)
39. Vicente, L.N.: Worst case complexity of direct search. EURO J. Comput. Optim. 1, 143–153 (2013)
40. Yuan, Y.: Conditions for convergence of trust region algorithms for nonsmooth optimization. Math. Program. 31, 220–228 (1985)


Author information



Corresponding author

Correspondence to C. W. Royer.


Support for C. W. Royer was partially provided by Université Toulouse III Paul Sabatier under a doctoral grant. Support for L. N. Vicente was partially provided by FCT under Grants UID/MAT/00324/2013 and P2020 SAICTPAC/0011/2015. Support for Z. Zhang was provided by the Hong Kong Polytechnic University under the Start-up Grant 1-ZVHT.


A Deterministic computation of positive cone generators

Provided a certain linear independence condition is satisfied, the positive generators of the approximate tangent cone can be computed as follows (see [25, Proposition 5.2]).

Proposition A.1

Let \(x \in \mathcal {F}\), \(\alpha > 0\), and assume that the set of generators of \(N(x,\alpha )\) has the form \(\left[ \, W_e\ \, -\!W_e\ \; W_i \,\right] \). Let B be a basis for the null space of \(W_e^\top \), and suppose that \(W_i^\top B = Q^\top \) has full row rank. If R is a right inverse for \(Q^\top \) and N is a matrix whose columns form a basis for the null space of \(Q^\top \), then the set

$$\begin{aligned} Y = \left[ \; -\!\,BR \quad BN \ \; -\!BN\; \right] \end{aligned}$$

positively generates \(T(x,\alpha )\).

Note that the number of vectors in Y is then \(n_R+2(n_B-n_R)=2n_B - n_R\), where \(n_B\) is the rank of B and \(n_R\) is that of Q (equal to the number of columns of R). Since \(n_B \le {\tilde{n}}\), we have that \(|Y| \le 2 {\tilde{n}}\).
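The construction of Proposition A.1 can be sketched numerically. The following is a minimal NumPy illustration, not the authors' implementation: the null-space routine, the tolerance, and the example matrices `W_e` and `W_i` are assumptions, and the pseudoinverse is used as one convenient right inverse.

```python
import numpy as np

def null_space_basis(M, tol=1e-12):
    """Columns form an orthonormal basis for null(M) (via the SVD)."""
    if M.size == 0:  # no constraints of this type: the null space is everything
        return np.eye(M.shape[1])
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

def positive_generators(W_e, W_i):
    """Proposition A.1 sketch: Y = [-BR  BN  -BN] positively generates
    T(x, alpha), assuming W_i^T B has full row rank."""
    B = null_space_basis(W_e.T)   # basis for null(W_e^T)
    Qt = W_i.T @ B                # Q^T, assumed full row rank
    R = np.linalg.pinv(Qt)        # a right inverse of Q^T
    N = null_space_basis(Qt)      # basis for null(Q^T)
    return np.hstack([-B @ R, B @ N, -B @ N])

# Hypothetical example: no equalities, one inequality normal e_1 in R^2,
# so T = {d : d_1 <= 0}, positively generated by (-1,0), (0,1), (0,-1).
Y = positive_generators(np.zeros((2, 0)), np.array([[1.0], [0.0]]))
print(Y.shape)  # (2, 3)
```

In the example, \(n_B = 2\) and \(n_R = 1\), so \(|Y| = 2n_B - n_R = 3 \le 2{\tilde{n}} = 4\), matching the count above.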

In the case where \(W_i^\top B\) does not have full row rank, one could consider all subsets of columns of \(W_i\) of largest size that yield full-row-rank matrices, obtain the corresponding positive generators by Proposition A.1, and then take the union of all these sets [34]. Due to the combinatorial nature of this technique, we adopted a different approach, following the lines of [22, 25], where an algorithm from computational geometry called the double description method [16] was applied. We implemented this algorithm to compute a set of extreme rays (or positive generators) for a polyhedral cone of the form \(\{ {\tilde{d}} \in \mathbb {R}^{{\tilde{n}}}: B {\tilde{d}} = 0, C {\tilde{d}} \ge 0 \}\), where we assume that \({{\,\mathrm{rank}\,}}(B) < {\tilde{n}}\), and applied it to the approximate tangent cone. In our experiments, the three variants detected degeneracy (and thus invoked the double description method) in less than 2% of the iterations on average.

B Proof of Proposition 7.1


To simplify the notation, we omit the index k in the proof. By our convention about \({{\,\mathrm{cm}\,}}_T(V, -{\tilde{G}})\) when \(P_T[{\tilde{G}}]=0\), we only need to consider the situation where \(P_T[{\tilde{G}}]\) is nonzero.

Define the event

$$\begin{aligned} \mathcal {E} = \left\{ \dfrac{\Vert P_{S}[{\tilde{G}}]\Vert }{\Vert P_{T}[{\tilde{G}}]\Vert } \ge \dfrac{1}{\sqrt{2}}\right\} \end{aligned}$$

and let \(\bar{\mathcal {E}}\) denote its complementary event. We observe that

$$\begin{aligned} \left\{ {{\,\mathrm{cm}\,}}_{T}([U^s\;\; U^c], -{\tilde{G}}) \ge \kappa \right\} \supset \left\{ {{\,\mathrm{cm}\,}}_{S}(U^s, -{\tilde{G}}) \ge \sqrt{2}\kappa \right\} \cap \mathcal {E}, \end{aligned}$$ (B.1)
$$\begin{aligned} \left\{ {{\,\mathrm{cm}\,}}_{T}([U^s\;\; U^c], -{\tilde{G}}) \ge \kappa \right\} \supset \left\{ {{\,\mathrm{cm}\,}}_{T^c}(U^c, -{\tilde{G}}) \ge \sqrt{2}\kappa \right\} \cap \bar{\mathcal {E}} \end{aligned}$$ (B.2)

for each \(\kappa \). Indeed, when \(\mathcal {E}\) happens, we have

$$\begin{aligned}&{{\,\mathrm{cm}\,}}_{T}([U^s\;\; U^c], -{\tilde{G}}) \\&\quad \ge {{\,\mathrm{cm}\,}}_{T}(U^s, -{\tilde{G}})= \dfrac{\Vert P_S[{\tilde{G}}]\Vert }{\Vert P_{T}[{\tilde{G}}]\Vert }{{\,\mathrm{cm}\,}}_{S}(U^s, -{\tilde{G}}) \ge \dfrac{1}{\sqrt{2}}{{\,\mathrm{cm}\,}}_{S}(U^s, -{\tilde{G}}), \end{aligned}$$

and (B.1) is consequently true; when \(\bar{\mathcal {E}}\) happens, then by the orthogonal decomposition

$$\begin{aligned} \Vert P_{T}[{\tilde{G}}]\Vert ^2 = \Vert P_{S}[{\tilde{G}}]\Vert ^2 + \Vert P_{T^c}[{\tilde{G}}]\Vert ^2, \end{aligned}$$

we know that \(\Vert P_{T^c}[{\tilde{G}}]\Vert /\Vert P_{T}[{\tilde{G}}]\Vert \ge 1/\sqrt{2}\), and hence

$$\begin{aligned} {{\,\mathrm{cm}\,}}_{T}([U^s\;\; U^c], -{\tilde{G}}) \ge {{\,\mathrm{cm}\,}}_{T}(U^c, -{\tilde{G}}) = \dfrac{\Vert P_{T^c}[{\tilde{G}}]\Vert }{\Vert P_{T}[{\tilde{G}}]\Vert }\,{{\,\mathrm{cm}\,}}_{T^{c}}(U^c, -{\tilde{G}}) \ge \dfrac{1}{\sqrt{2}}\,{{\,\mathrm{cm}\,}}_{T^{c}}(U^c, -{\tilde{G}}), \end{aligned}$$

which gives us (B.2). Since the events on the right-hand sides of (B.1) and (B.2) are mutually exclusive, the two inclusions lead us to

$$\begin{aligned} \mathbb {P}\left( {{\,\mathrm{cm}\,}}_{T}([U^s\;\; U^c], -{\tilde{G}}) \ge \kappa \mathrel {\big |} \sigma \right)\ge & {} \mathbb {P}\left( \left\{ {{\,\mathrm{cm}\,}}_{S}(U^s, -{\tilde{G}}) \ge \sqrt{2}\kappa \right\} \cap \mathcal {E} \mathrel {\big |} \sigma \right) \\&+\,\mathbb {P}\left( \left\{ {{\,\mathrm{cm}\,}}_{T^c}(U^c, -{\tilde{G}}) \ge \sqrt{2}\kappa \right\} \cap \bar{\mathcal {E}} \mathrel {\big |}\sigma \right) . \end{aligned}$$

Then it suffices to show the existence of \(\kappa \in (0,1)\) and \(p \in (p_0,1)\) that fulfill simultaneously

$$\begin{aligned} \mathbb {P}\left( \left\{ {{\,\mathrm{cm}\,}}_{S}(U^s, -{\tilde{G}}) \ge \sqrt{2}\kappa \right\} \cap \mathcal {E} \mathrel {\big |} \sigma \right) \ge p \,{\mathbf {1}}_{\mathcal {E}}, \end{aligned}$$ (B.3)
$$\begin{aligned} \mathbb {P}\left( \left\{ {{\,\mathrm{cm}\,}}_{T^c}(U^c, -{\tilde{G}}) \ge \sqrt{2}\kappa \right\} \cap \bar{\mathcal {E}} \mathrel {\big |} \sigma \right) \ge p \,{\mathbf {1}}_{\bar{\mathcal {E}}}, \end{aligned}$$ (B.4)

because \({\mathbf {1}}_{\mathcal {E}} + {\mathbf {1}}_{\bar{\mathcal {E}}}=1\). If \(\mathbb {P}(\mathcal {E}) = 0\), then (B.3) holds trivially regardless of \(\kappa \) or p; the same is true of \(\bar{\mathcal {E}}\) and (B.4). We may therefore assume that both \(\mathcal {E}\) and \(\bar{\mathcal {E}}\) have positive probability. Then, noticing that \(\mathcal {E}\in \sigma \) (\(\mathcal {E}\) depends only on the past iterations), we have

$$\begin{aligned} \mathbb {P}\left( \left\{ {{\,\mathrm{cm}\,}}_{S}(U^s, -{\tilde{G}}) \ge \sqrt{2}\kappa \right\} \cap \mathcal {E} \mathrel {\big |} \sigma \right)&= \mathbb {P}\left( {{\,\mathrm{cm}\,}}_{S}(U^s, -{\tilde{G}}) \ge \sqrt{2}\kappa \mathrel {\big |} \sigma \cap \mathcal {E} \right) \mathbb {P}(\mathcal {E}|\sigma ) \\&= \mathbb {P}\left( {{\,\mathrm{cm}\,}}_{S}(U^s, -{\tilde{G}}) \ge \sqrt{2}\kappa \mathrel {\big |} \sigma \cap \mathcal {E} \right) {\mathbf {1}}_{\mathcal {E}}, \\ \mathbb {P}\left( \left\{ {{\,\mathrm{cm}\,}}_{T^c}(U^c, -{\tilde{G}}) \ge \sqrt{2}\kappa \right\} \cap \bar{\mathcal {E}} \mathrel {\big |} \sigma \right)&= \mathbb {P}\left( {{\,\mathrm{cm}\,}}_{T^c}(U^c, -{\tilde{G}}) \ge \sqrt{2}\kappa \mathrel {\big |} \sigma \cap \bar{\mathcal {E}} \right) \mathbb {P}(\bar{\mathcal {E}}|\sigma )\\&= \mathbb {P}\left( {{\,\mathrm{cm}\,}}_{T^c}(U^c, -{\tilde{G}}) \ge \sqrt{2}\kappa \mathrel {\big |} \sigma \cap \bar{\mathcal {E}} \right) {\mathbf {1}}_{\bar{\mathcal {E}}}, \end{aligned}$$

where \(\sigma \cap \mathcal {E}\) is the trace \(\sigma \)-algebra [35] of \(\mathcal {E}\) in \(\sigma \), namely \(\sigma \cap \mathcal {E} = \left\{ \mathcal {A} \cap \mathcal {E} \mathrel : \mathcal {A} \in \sigma \right\} \), and \(\sigma \cap \bar{\mathcal {E}}\) is that of \(\bar{\mathcal {E}}\). Hence it remains to prove that

$$\begin{aligned} \mathbb {P}\left( {{\,\mathrm{cm}\,}}_{S}(U^s, -{\tilde{G}}) \ge \sqrt{2}\kappa \mathrel {\big |} \sigma \cap \mathcal {E} \right) \ge p, \end{aligned}$$ (B.5)
$$\begin{aligned} \mathbb {P}\left( {{\,\mathrm{cm}\,}}_{T^c}(U^c, -{\tilde{G}}) \ge \sqrt{2}\kappa \mathrel {\big |} \sigma \cap \bar{\mathcal {E}} \right) \ge p. \end{aligned}$$ (B.6)

Let us examine \({{\,\mathrm{cm}\,}}_S(U^s, -{\tilde{G}})\) whenever \(\mathcal {E}\) happens (which necessarily means that S is nonzero). Since \(\mathcal {E}\) depends only on the past iterations, while \(U^s\) is essentially a set of \(r_s\) (recall (7.1)) i.i.d. directions drawn from the uniform distribution on the unit sphere in \(\mathbb {R}^s\) with \(s=\dim (S) \le {\tilde{n}}\), one can employ the theory of [19, Appendix B] to justify the existence of \(p_s > p_0\) and \(\tau > 0\), independent of \({\tilde{n}}\) and s (depending solely on \(\theta \) and \(\gamma \)), that satisfy

$$\begin{aligned} \mathbb {P}\left( {{\,\mathrm{cm}\,}}_{S}(U^s,-{\tilde{G}}) \ge \frac{\tau }{\sqrt{{\tilde{n}}}} \mathrel {\Big |} \sigma \, \cap \, \mathcal {E} \right) \ge p_s. \end{aligned}$$ (B.7)
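As a plausibility check on a bound of this type, one can estimate by Monte Carlo the probability that r i.i.d. directions, uniform on the unit sphere, achieve a cosine of at least \(\tau /\sqrt{n}\) with a fixed vector. The sketch below uses assumed parameters (n, r, \(\tau \), trial count), not the constants of [19, Appendix B]:

```python
import numpy as np

def estimate_descent_probability(n, r, tau, trials=4000, seed=0):
    """Estimate P( max_i d_i^T(-g)/||d_i|| >= tau/sqrt(n) ) for r i.i.d.
    directions d_i uniform on the unit sphere (normalized Gaussians)."""
    rng = np.random.default_rng(seed)
    g = np.zeros(n)
    g[0] = 1.0  # fixed unit "gradient"; by symmetry its direction is irrelevant
    hits = 0
    for _ in range(trials):
        D = rng.standard_normal((r, n))
        D /= np.linalg.norm(D, axis=1, keepdims=True)  # rows uniform on sphere
        if np.max(D @ (-g)) >= tau / np.sqrt(n):
            hits += 1
    return hits / trials

p_hat = estimate_descent_probability(n=20, r=3, tau=0.5)
```

Already for a handful of directions the estimated probability is well above one half, consistent with the existence of a \(p_s > p_0\) independent of the dimension.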

Now consider \({{\,\mathrm{cm}\,}}_{T^c}(U^c, -{\tilde{G}})\) under the occurrence of \(\bar{\mathcal {E}}\) (in that case, \(T^c\) is nonzero and \(V^c\) is nonempty). Let \(D^*\) be a direction in \(V^c\) that achieves

$$\begin{aligned} \frac{D^{*\top } (-{\tilde{G}})}{\Vert D^*\Vert \Vert P_{T^c}[{\tilde{G}}]\Vert } = {{\,\mathrm{cm}\,}}_{T^c} (V^c, -{\tilde{G}}). \end{aligned}$$

Then by the fact that \(U^c\) is a uniform random subset of \(V^c\) and \( |U^c| = \lceil p_c |V^c| \rceil \), we have

$$\begin{aligned} \mathbb {P}\left( {{\,\mathrm{cm}\,}}_{T^c}( U^c,-{\tilde{G}}) = {{\,\mathrm{cm}\,}}_{T^c} (V^c, -{\tilde{G}}) \mathrel {\big |} \sigma \cap {\bar{\mathcal {E}}} \right) \ge \mathbb {P}\left( D^* \in U^c \mathrel {\big |} \sigma \cap {\bar{\mathcal {E}}} \right) = \frac{|U^c|}{|V^c|} \ge p_c. \end{aligned}$$ (B.8)


Now define

$$\begin{aligned} \kappa _c = \lambda \left( \big \{ C = {\bar{T}} \cap {\bar{S}}^\perp : \; {\bar{T}} \in {\mathbf {T}} \text { and } {\bar{S}}\ \text { is a subspace of } {\bar{T}} \big \} \right) , \end{aligned}$$

where \(\lambda (\cdot )\) is defined as in (3.2) and \({\mathbf {T}}\) denotes the set of all possible occurrences of the approximate tangent cone. Then \(\kappa _c > 0\) and \({{\,\mathrm{cm}\,}}_{T^c} (V^c, -{\tilde{G}}) \ge \kappa _c\). Hence (B.8) implies

$$\begin{aligned} \mathbb {P}\left( {{\,\mathrm{cm}\,}}_{T^c}( U^c,-{\tilde{G}}) \ge \kappa _c \mathrel {\big |} \sigma \cap {\bar{\mathcal {E}}}\right) \ge p_c. \end{aligned}$$ (B.9)
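The subset-sampling fact used here, that a uniformly random subset \(U^c\) of size \(\lceil p_c |V^c| \rceil \) contains the distinguished direction \(D^*\) with probability exactly \(|U^c|/|V^c| \ge p_c\), is easy to check numerically. A hypothetical sketch (the set size and \(p_c\) are illustrative choices):

```python
import math
import random

def prob_contains_best(v_size, p_c, trials=20000, seed=1):
    """Estimate the probability that a uniformly random subset of size
    ceil(p_c * v_size) of {0, ..., v_size-1} contains a fixed distinguished
    element (index 0 plays the role of D*); also return the exact value,
    which is the subset size over v_size and is at least p_c."""
    rng = random.Random(seed)
    k = math.ceil(p_c * v_size)
    hits = sum(0 in rng.sample(range(v_size), k) for _ in range(trials))
    return hits / trials, k / v_size

estimate, exact = prob_contains_best(v_size=10, p_c=0.5)
```

The empirical frequency matches the exact ratio \(k/|V^c|\) up to sampling noise, confirming the lower bound \(p_c\) in (B.8).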

Finally, set

$$\begin{aligned} \kappa = \frac{1}{\sqrt{2}}\min \left\{ \frac{\tau }{\sqrt{{\tilde{n}}}}, \; \kappa _c \right\} , \quad p = \min \left\{ p_s,\; p_c \right\} . \end{aligned}$$

Then \(\kappa \in (0,1)\), \(p \in (p_0, 1)\), and they fulfill (B.5) and (B.6) according to (B.7) and (B.9). Moreover, \(\kappa \) depends on the geometry of T, and consequently depends on m, n, and \(m_{\mathcal {I}}\), while p depends solely on \(\theta \), \(\gamma \), and \(p_c\). The proof is then completed. \(\square \)

C List of CUTEst problems

The complete list of our test problems, with the associated dimensions and number of constraints, is provided in Tables 1, 2, 3 and 4.

Table 1 List of the smallest CUTEst test problems with bound constraints
Table 2 List of the largest CUTEst test problems with bound constraints
Table 3 List of the CUTEst test problems with only linear equality constraints and bounds
Table 4 List of the CUTEst test problems with at least one linear inequality constraint


About this article


Cite this article

Gratton, S., Royer, C.W., Vicente, L.N. et al. Direct search based on probabilistic feasible descent for bound and linearly constrained problems. Comput Optim Appl 72, 525–559 (2019).

Keywords


  • Derivative-free optimization
  • Direct-search methods
  • Bound constraints
  • Linear constraints
  • Feasible descent
  • Probabilistic feasible descent
  • Worst-case complexity