A note on the convergence of deterministic gradient sampling in nonsmooth optimization

Approximation of subdifferentials is one of the main tasks when computing descent directions for nonsmooth optimization problems. In this article, we propose a bisection method for weakly lower semismooth functions which is able to compute new subgradients that improve a given approximation whenever a direction with insufficient descent was computed. Combined with a recently proposed deterministic gradient sampling approach, this yields a deterministic and provably convergent way to approximate subdifferentials for computing descent directions.


Introduction
Nonsmooth optimization is concerned with the optimization of a function f : R^n → R which is not necessarily continuously differentiable. For such functions, one cannot rely on the gradient for describing the local behavior around a given point. As a replacement, generalized concepts from nonsmooth analysis can be employed. If the objective is still locally Lipschitz continuous, as is the case in many practical applications, then the standard approach is to use the Clarke subdifferential ∂f [1]. However, since ∂f(x) reduces to the gradient if f is continuously differentiable at x, and since f is typically continuously differentiable almost everywhere, the Clarke subdifferential cannot be used to capture nonsmoothness numerically. To circumvent this issue, the (Goldstein) ε-subdifferential ∂_ε f [2] may be used instead, which is the convex hull of all Clarke subdifferentials in an ε-ball around a given point. For the ε-subdifferential, it is sufficient for x to have a distance of at most ε to a nonsmooth point of f to capture the nonsmoothness. As such, it can be interpreted as a stabilized version of the Clarke subdifferential.
It is well known that the element v ∈ −∂_ε f(x_0) with the smallest norm is a descent direction for f at x_0 [1,2]. Unfortunately, the full ε-subdifferential that is required to compute this direction is rarely available in practice and has to be approximated instead. To this end, in the gradient sampling method [3][4][5], the idea is to approximate ∂_ε f(x_0) by the convex hull of the gradients at randomly generated points in the ε-ball around x_0 where f is differentiable. While this is easy to implement and convergence can be shown with probability 1, randomly computing gradients means that one generally computes more gradients than would be necessary. Furthermore, a good approximation may require a large and a priori unknown number of sample points (which is highlighted in Appendix A). As an alternative, in [6][7][8], a deterministic sampling approach was used. There, the idea is to compute the approximation of ∂_ε f(x_0) iteratively, by starting with W = {ξ} for a subgradient ξ ∈ ∂f(x_0) at x_0, and then adding new elements of ∂_ε f(x_0) to W until conv(W) is a satisfactory approximation. The mechanism for finding new elements of ∂_ε f(x_0) is based on the observation that if v is a direction that yields less descent than expected (based on the current approximation conv(W)), then there has to be a point x_0 + tv with t > 0 in the ε-ball around x_0 at which a new subgradient ξ′ ∉ conv(W) can be sampled. To find such a t, a subroutine based on bisection of the interval (0, ε/∥v∥) is used. While in [6][7][8] it was analyzed why this subroutine likely works (i.e., terminates) in practice, and while termination was also observed in all numerical examples, a full proof (under reasonable assumptions for f) was not given.
The goal of this note is to close the above-mentioned gap in the convergence theory of the deterministic gradient sampling approach. The bisection algorithm in [6][7][8] is based on reformulating the problem of finding a new element of ∂_ε f(x_0) as finding a point t > 0 at which a certain nonsmooth function h : R → R is increasing. The convergence issues arise in cases where the bisection converges to a critical point t̄ of h (i.e., to a point t̄ with 0 ∈ ∂h(t̄)). To fix these issues, we replace h by a slightly modified function h_c̄. We then show convergence of the resulting method for the case where f is weakly lower semismooth [9,10] (meaning that −f is weakly upper semismooth). Since semismooth functions [11] are weakly lower semismooth, this case includes continuously differentiable functions, convex functions and piecewise differentiable functions [12]. As our result is essentially just concerned with the deterministic computation of new ε-subgradients (not necessarily in the context of [6][7][8]), it is also of use in other methods based on gradient sampling, like [13,14]. Our method has strong similarities to Procedure 4.1 in [15], which is used for the computation of new subgradients in a bundle framework. It is based on the same idea, but differs in the condition used for bisection and in the stopping criterion.
The remainder of this article is structured as follows: In Section 2, we introduce the basics of gradient sampling and the bisection algorithm from [7] (which is identical to the one in [8] and almost identical to the one in [6]). In [7], the bisection was only a small part of a larger algorithm, but we will introduce it here in a stand-alone way for convenience. In Section 3, we construct the improved bisection algorithm and show its convergence when f is weakly lower semismooth. In Section 4, we visualize the behavior of the improved bisection method in a simple example. Finally, in Section 5, we summarize our results and discuss possible directions for future research.

Computing descent directions for nonsmooth functions
In this section, we summarize the basic ideas of gradient sampling from [3][4][5][6][7][8]. To this end, let f : R^n → R be locally Lipschitz continuous. The Clarke subdifferential [1] of f at x ∈ R^n is given by

∂f(x) := conv({ξ ∈ R^n : ∃(x_i)_i ⊆ R^n \ Ω with x_i → x and ∇f(x_i) → ξ}),   (1)

where Ω ⊆ R^n is the set of points at which f is not differentiable. (By Rademacher's Theorem [1], Ω has measure zero.) The elements of ∂f(x) are called (Clarke) subgradients. In theory, the Clarke subdifferential can be used similarly to the standard gradient, as it can be used in generalized versions of results like the mean value theorem, the chain rule and optimality conditions [1]. In practice, however, there are severe problems: ∂f(x) only captures the nonsmoothness of f if x is a point where f is not continuously differentiable, which is typically a null set. Furthermore, when x ∈ Ω, there is no general way of computing the full subdifferential, i.e., all subgradients. Instead, a reasonable assumption is that we can only evaluate a single, arbitrary subgradient from ∂f(x) at any x ∈ R^n. Further explanations and examples for these issues can be found in [16]. To circumvent these problems, a more suitable object to use as a generalized derivative for constructing descent methods is the (Goldstein) ε-subdifferential [2]

∂_ε f(x) := conv(⋃_{y ∈ B_ε(x)} ∂f(y)),

where ε ≥ 0, B_ε(x) := {y ∈ R^n : ∥y − x∥ ≤ ε} and ∥·∥ is the Euclidean norm. The elements of ∂_ε f(x) are called ε-subgradients. The ε-subdifferential can be interpreted as a "stabilized" Clarke subdifferential. In particular, it can be used to compute descent directions, as we demonstrate in the following. Let x_0 ∈ R^n, v ∈ R^n \ {0} and ε > 0.
Then a simple application of the mean value theorem ([1], Theorem 2.3.7) shows that

f(x_0 + tv) − f(x_0) ≤ t max_{ξ ∈ ∂_ε f(x_0)} ⟨ξ, v⟩   for all t ∈ (0, ε/∥v∥].   (2)

Thus, directions v with ⟨ξ, v⟩ < 0 for all ξ ∈ ∂_ε f(x_0) are descent directions for f at x_0. Based on convex analysis [17], the direction that minimizes the maximum of the inner products on the right-hand side of (2), called the ε-steepest descent direction, can be computed as

v̄ := − arg min_{ξ ∈ ∂_ε f(x_0)} ∥ξ∥²,   (3)

for which it holds

max_{ξ ∈ ∂_ε f(x_0)} ⟨ξ, v̄⟩ = −∥v̄∥²   (4)

and, due to (2),

f(x_0 + t v̄) − f(x_0) ≤ −t ∥v̄∥²   for all t ∈ (0, ε/∥v̄∥].   (5)

Unfortunately, the full ε-subdifferential required to solve Problem (3) is rarely available in practice. Thus, the direction v̄ has to be approximated.
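As a minimal illustration of the "stabilization" effect (our own toy example, not taken from this note): for f(x) = |x| in one dimension, the ε-subdifferential at x is the interval spanned by all Clarke subgradients on [x − ε, x + ε]. It contains 0, so that the ε-steepest descent direction vanishes and x is ε-critical, exactly when the kink at 0 lies in the ε-ball:

```python
import numpy as np

def eps_subdiff_abs(x, eps):
    """Goldstein eps-subdifferential of f(x) = |x| as an interval (lo, hi):
    the convex hull of all Clarke subgradients on [x - eps, x + eps]."""
    if x - eps <= 0.0 <= x + eps:   # the kink at 0 lies in the eps-ball
        return (-1.0, 1.0)
    s = float(np.sign(x))           # f is smooth on the whole eps-ball
    return (s, s)

# At x = 0.3 with eps = 0.5 the nonsmoothness at 0 is captured (the
# interval contains 0, so x is eps-critical); with eps = 0.1 the ball
# misses the kink and we simply recover the gradient.
interval_large = eps_subdiff_abs(0.3, 0.5)   # (-1.0, 1.0)
interval_small = eps_subdiff_abs(0.3, 0.1)   # (1.0, 1.0)
```

This is precisely the behavior described above: the Clarke subdifferential at x = 0.3 is just {1}, while the ε-subdifferential already "sees" the nonsmooth point at 0 for ε ≥ 0.3.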

Random gradient sampling
In the standard gradient sampling framework [3][4][5], the ε-subdifferential is approximated by randomly (independently and uniformly) sampling m ≥ n + 1 elements x_1, ..., x_m ∈ B_ε(x_0) \ Ω and setting

∂_ε f(x_0) ≈ conv({∇f(x_1), ..., ∇f(x_m)}).

(The differentiability of f at the current iterate x_0 is enforced via a differentiability check.) As an approximation of v̄ from (3), the direction

v_GS := − arg min_{ξ ∈ conv({∇f(x_1), ..., ∇f(x_m)})} ∥ξ∥²   (6)

is computed, and an Armijo-like backtracking line search is used to assure decrease in f. It can be shown that when dynamically reducing the sampling radius ε, the resulting descent method (cf. [4], GS Algorithm) produces a sequence converging to a critical point of f with probability 1. Unfortunately, sampling randomly means that a large number of sample points m may be required to assure that v_GS is a good approximation of v̄, i.e., to assure that meaningful decrease is achieved in every descent step. This drawback is highlighted in Appendix A.
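A single random sampling step can be sketched as follows. The objective f(x) = |x_1| + x_2² and all names below are our own illustrative assumptions, not from the paper, and the min-norm subproblem over the convex hull is solved with SciPy's SLSQP as one possible solver choice:

```python
import numpy as np
from scipy.optimize import minimize

def min_norm_element(G):
    """Smallest-norm element of conv(G) for an (m, n) array G of gradients.
    Solves the small QP: min ||G^T lam||^2 s.t. lam >= 0, sum(lam) = 1."""
    m = G.shape[0]
    res = minimize(
        lambda lam: float(np.sum((G.T @ lam) ** 2)),
        np.full(m, 1.0 / m),
        jac=lambda lam: 2.0 * (G @ (G.T @ lam)),
        bounds=[(0.0, 1.0)] * m,
        constraints=[{"type": "eq", "fun": lambda lam: lam.sum() - 1.0}],
        method="SLSQP",
    )
    return G.T @ res.x

def gs_direction(grad_f, x0, eps, m, rng):
    """One gradient sampling step: sample m points uniformly from the
    eps-ball around x0, approximate the eps-subdifferential by the convex
    hull of the gradients there, and return the direction v_GS."""
    n = x0.size
    dirs = rng.normal(size=(m, n))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    radii = eps * rng.uniform(size=(m, 1)) ** (1.0 / n)
    samples = x0 + radii * dirs   # uniformly distributed in the ball
    G = np.array([grad_f(x) for x in samples])
    return -min_norm_element(G)

# Hypothetical test objective: f(x) = |x_1| + x_2^2, which is
# differentiable wherever x_1 != 0 (this holds with probability 1).
grad_f = lambda x: np.array([np.sign(x[0]), 2.0 * x[1]])
v_gs = gs_direction(grad_f, np.array([0.2, 1.0]), eps=0.5, m=6,
                    rng=np.random.default_rng(0))
```

Whether the kink along x_1 = 0 is "seen" by the approximation depends entirely on whether some sample happens to land at x_1 < 0, which is exactly the drawback discussed above.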

Deterministic gradient sampling
Instead of randomly sampling points from B_ε(x_0) for the approximation of ∂_ε f(x_0), there is also a deterministic approach [7]. (In [7], multiobjective problems are considered, but we will only consider the special case of a single objective here.) Assume that x_0 is not ε-critical. The idea is to start with a subset W ⊆ ∂_ε f(x_0) (e.g., W = {ξ} for ξ ∈ ∂f(x_0)) and to then iteratively add new subgradients to W until a direction

ṽ := − arg min_{ξ ∈ conv(W)} ∥ξ∥²   (7)

is found that yields sufficient descent. The meaning of "sufficient descent" can be derived from (5): For some fixed c ∈ (0, 1), we want to have at least

f(x_0 + (ε/∥ṽ∥)ṽ) − f(x_0) ≤ −c ε ∥ṽ∥.   (8)

To find a new subgradient (that is not already contained in conv(W)) in case ṽ does not yield sufficient descent, note that the mean value theorem implies that there are t′ ∈ (0, ε/∥ṽ∥) and ξ′ ∈ ∂f(x_0 + t′ṽ) with f(x_0 + (ε/∥ṽ∥)ṽ) − f(x_0) = (ε/∥ṽ∥)⟨ξ′, ṽ⟩, so violation of (8) yields

⟨ξ′, ṽ⟩ > −c ∥ṽ∥².   (9)

Analogously to (3) and (4), it holds

max_{ξ ∈ conv(W)} ⟨ξ, ṽ⟩ = −∥ṽ∥².

Thus, (9) implies that ξ′ ∉ conv(W) (since c < 1), so adding ξ′ to W improves the approximation of ∂_ε f(x_0). In [6,7], it was proven that iteratively computing ṽ via (7) while adding new subgradients to W as above yields a finite algorithm which deterministically computes descent directions satisfying (8) (as long as 0 ∉ ∂_ε f(x_0)). (For a formal definition of this algorithm, see Algorithm 2 in [7] for k = 1.) Unfortunately, the above application of the mean value theorem does not yield an explicit formula for the computation of ξ′ as in (9), so additional effort is required in practice. To this end, a strategy based on bisection can be used: Let

h : R → R,  h(t) := f(x_0 + tṽ) + c t ∥ṽ∥².   (10)

By the chain rule ([1], Theorem 2.3.9) it holds

∂h(t) ⊆ ⟨∂f(x_0 + tṽ), ṽ⟩ + c ∥ṽ∥²,   (11)

so ∂h(t′) ∩ R_{>0} ≠ ∅ for t′ ∈ (0, ε/∥ṽ∥) would imply that there is some ξ′ ∈ ∂f(x_0 + t′ṽ) as in (9). Thus, roughly speaking, the idea is to search for some interval in (0, ε/∥ṽ∥) on which h is monotonically increasing. In [7], this was done via Algorithm 1. It performs bisections such that h(a_j) < h(b_j) for all j ∈ N, while checking whether a ξ′ was found satisfying (9). In [6][7][8], it was argued why the algorithm
is likely to terminate in practice, and termination was also observed in all numerical examples, but a full proof (under reasonable assumptions for f) was not given.

Algorithm 1 Computation of new ε-subgradients
Require: Point x_0 ∈ R^n, radius ε > 0, descent parameter c ∈ (0, 1), direction ṽ ∈ R^n \ {0} violating (8).
1: Initialize a_1 := 0, b_1 := ε/∥ṽ∥ and j := 1.
2: Compute t_j := (a_j + b_j)/2.
3: Evaluate ξ′ ∈ ∂f(x_0 + t_j ṽ). If ⟨ξ′, ṽ⟩ > −c∥ṽ∥², then stop and return ξ′.
4: If h(t_j) ≥ h(b_j), then set a_{j+1} := a_j and b_{j+1} := t_j. Otherwise, set a_{j+1} := t_j and b_{j+1} := b_j.
5: Set j := j + 1 and go to step 2.
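The bisection can be sketched in a few lines of Python. The toy data below (f(x) = |x| with x_0 = 0.4, a subgradient ξ = 1 at x_0, hence ṽ = −1, ε = 1 and c = 1/2, which violates (8)) is our own illustrative assumption, not an example from the paper:

```python
import numpy as np

def bisection_subgradient(f, subgrad, x0, v, eps, c, max_iter=64):
    """Sketch of Algorithm 1: bisect (0, eps/||v||) while maintaining
    h(a) < h(b), and stop once a subgradient xi' with
    <xi', v> > -c * ||v||^2 is found, cf. (9).
    `subgrad(x)` returns one Clarke subgradient of f at x."""
    nv2 = float(np.dot(v, v))
    h = lambda t: f(x0 + t * v) + c * t * nv2   # h from (10)
    a, b = 0.0, eps / np.sqrt(nv2)
    for _ in range(max_iter):
        t = 0.5 * (a + b)
        xi = subgrad(x0 + t * v)
        if np.dot(xi, v) > -c * nv2:            # stopping test in step 3
            return t, xi
        if h(t) >= h(b):                        # step 4: keep h(a) < h(b)
            b = t
        else:
            a = t
    return None  # the stopping test may never trigger (cf. Example 1)

# Toy run: the kink of |x| lies between x0 and x0 + eps * v / ||v||, and
# already the first midpoint t = 1/2 yields the new subgradient xi' = -1.
t, xi = bisection_subgradient(abs, np.sign, x0=0.4, v=-1.0, eps=1.0, c=0.5)
```

Here ξ′ = −1 ∉ conv(W) = {1}, so the approximation of the ε-subdifferential is improved by a single bisection step.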
In [8], Example 4.3.4, an example was constructed for which Algorithm 1 does not terminate, with a function h that is not semismooth (cf. [11]). In the following, we will construct a more nuanced example that shows that the algorithm may also fail for semismooth functions.
Example 1 For i ∈ N ∪ {0}, let x_i^1 and x_i^2 be given points with x_i^1 < x_i^2 < x_{i+1}^1 for all i ∈ N ∪ {0}. We construct a function φ : R → R as follows: For x < 0, let φ(x) := −(1/2)x; for x ≥ 1, let φ(x) := 1; and on [0, 1), let φ be the piecewise linear function with φ(0) = 0 and prescribed values at the points x_i^1 and x_i^2. The graph of φ is shown in Figure 1(a). Figure 1(b) shows the gradient of φ at points where φ is differentiable, and a vertical line from the smallest to the largest subgradient at points where φ is not differentiable. We see that in the limit x → 1, all subgradients tend to 0. Based on this observation, it can be shown that φ is semismooth. Now let

f : R → R,  f(x) := φ(x) − (1/2)x,

as shown in Figure 1(c). Let x_0 = 0, ε = 1 and c = 1/2. Assume that we have evaluated the subgradient ξ = −1 ∈ ∂f(x_0). Then for W = {ξ}, the direction ṽ from (7) is simply ṽ = −ξ = 1. When checking whether ṽ yields sufficient decrease, we see that

f(x_0 + (ε/∥ṽ∥)ṽ) − f(x_0) = f(1) − f(0) = 1/2 > −1/2 = −cε∥ṽ∥,

i.e., ṽ does not yield sufficient (or even any) decrease.
For Algorithm 1 we have

h(t) = f(x_0 + tṽ) + c t ∥ṽ∥² = f(t) + t/2 = φ(t).

Since 1 = h(1) = h(b_1) > h(t) for all t ∈ [0, 1), we have b_j = 1 and t_j = 1 − 2^{−j} for all j ∈ N, as indicated in Figure 1(a) and (b). By construction (cf. Figure 1(b)), f is continuously differentiable at all x_0 + t_j ṽ = t_j and

⟨∇f(t_j), ṽ⟩ ≤ −1/2 = −c∥ṽ∥²,

so the stopping condition in step 3 is never satisfied. Thus, Algorithm 1 does not terminate.
Note that in the previous example, only the nonsmoothness of f at x = 1 is relevant for the failure of the algorithm. Thus, by "smoothing" the objective f at all kinks except 1, one could even construct a semismooth function which is continuously differentiable everywhere outside of a single point and for which the algorithm fails.

Improved bisection method
In this section, we propose a slightly modified version of Algorithm 1 and prove termination for the case where the objective f is weakly lower semismooth. For any x ∈ R^n, we assume that we are only able to evaluate a single, arbitrary subgradient of the locally Lipschitz continuous f at x.

Derivation of the improved method
Roughly speaking, the idea is to use a smaller parameter c̄ < c in h (cf. (10)) to try to find a new subgradient that satisfies a stricter version of inequality (9). This solves the first of the issues described in Section 2.2, as even points at which ∂h is negative but close to zero then suffice to find subgradients that satisfy the weaker requirement with respect to c.
More precisely, note that ṽ is an unacceptable descent direction if and only if

c > c_min := (f(x_0) − f(x_0 + (ε/∥ṽ∥)ṽ))/(ε∥ṽ∥).

Thus, if ṽ is an unacceptable direction, then it is also unacceptable if we replace c in (8) by any c̄ ∈ (c_min, c) ≠ ∅. In other words, we could apply Algorithm 1 for any c̄ ∈ (c_min, c) and the method would still produce sequences with h_c̄(a_j) < h_c̄(b_j) for all j ∈ N, where, analogously to (10),

h_c̄ : R → R,  h_c̄(t) := f(x_0 + tṽ) + c̄ t ∥ṽ∥².

In step 3, the method would then check whether ⟨ξ′, ṽ⟩ > −c̄∥ṽ∥². But since c̄ < c, this inequality is stricter than (9). Thus, instead, we apply Algorithm 1 with only h replaced by h_c̄ and the rest unchanged. In terms of the subdifferential of h_c̄, this means that the method may stop as soon as

∂h_c̄(t_j) ∩ ((c̄ − c)∥ṽ∥², ∞) ≠ ∅,   (12)

where (c̄ − c)∥ṽ∥² < 0. For clarity, this modified method is stated as Algorithm 2.
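The modification is small enough to be shown as a sketch; compared to Algorithm 1, only the function driving the bisection changes from h to h_c̄, while the stopping test keeps the original c. The toy setup (f(x) = |x|, x_0 = 0.4, ṽ = −1, c = 1/2, c̄ = 1/4) is again our own illustrative assumption:

```python
import numpy as np

def bisection_subgradient_v2(f, subgrad, x0, v, eps, c, c_bar, max_iter=64):
    """Sketch of Algorithm 2: bisect using h_cbar (with c_bar < c), but keep
    the original stopping test <xi', v> > -c * ||v||^2 from Algorithm 1."""
    assert c_bar < c
    nv2 = float(np.dot(v, v))
    h_bar = lambda t: f(x0 + t * v) + c_bar * t * nv2  # h_cbar instead of h
    a, b = 0.0, eps / np.sqrt(nv2)
    for _ in range(max_iter):
        t = 0.5 * (a + b)
        xi = subgrad(x0 + t * v)
        if np.dot(xi, v) > -c * nv2:       # stopping test unchanged, cf. (12)
            return t, xi
        if h_bar(t) >= h_bar(b):           # bisection now driven by h_cbar
            b = t
        else:
            a = t
    return None

# Hypothetical toy run, analogous to the Algorithm 1 sketch:
t, xi = bisection_subgradient_v2(abs, np.sign, 0.4, -1.0,
                                 eps=1.0, c=0.5, c_bar=0.25)
```

On well-behaved instances both variants behave identically; the point of the modification is the degenerate case where the bisection converges to a critical point of h, which the shifted interval in (12) rules out for weakly lower semismooth f.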

Proof of termination
We begin the analysis of Algorithm 2 with some simple, technical results.
Lemma 1 If Algorithm 2 does not terminate, then: (i) (a_j)_j, (b_j)_j and (t_j)_j have the same limit t̄ ∈ [0, ε/∥ṽ∥]; (ii) t̄ ∈ [a_j, b_j] for all j ∈ N; (iii) (h_c̄(b_j))_j is monotonically increasing and h_c̄(a_j) < h_c̄(b_j) for all j ∈ N; (iv) t_j < t̄ for infinitely many j ∈ N.
Proof (i) By construction, (a_j)_j is monotonically increasing and (b_j)_j is monotonically decreasing in [0, ε/∥ṽ∥], so both sequences converge. Since Algorithm 2 does not terminate, it holds lim_{j→∞} b_j − a_j = 0, so they must converge to the same limit t̄ ∈ [0, ε/∥ṽ∥]. Since t_j ∈ (a_j, b_j) for all j ∈ N, (t_j)_j must have the same limit.
(ii) Assume that t̄ ∉ [a_j, b_j] for some j ∈ N. Then either t̄ < a_j or t̄ > b_j. Due to the monotonicity of (a_j)_j and (b_j)_j, this contradicts t̄ being the limit of both sequences. (iii) By construction, the value of (b_j)_j only changes when h_c̄(b_j) ≤ h_c̄(t_j) in step 4. In this case, b_{j+1} is set to t_j, so we have

h_c̄(b_{j+1}) = h_c̄(t_j) ≥ h_c̄(b_j) > h_c̄(a_j) = h_c̄(a_{j+1}).

The proof follows by induction. (iv) Assume that this does not hold. Then there is some N ∈ N such that t_j ≥ t̄ for all j > N. By construction of (t_j)_j, t_j = t̄ may only hold once, so we can assume w.l.o.g. that t_j > t̄ for all j > N. Since t̄ ∈ [a_j, b_j] for all j ∈ N and t_j is the midpoint of [a_j, b_j], this implies that a_j = a_N for all j > N. In particular, t̄ = lim_{j→∞} a_j = a_N. Since h_c̄(t̄) = h_c̄(a_N) < h_c̄(b_N) and (h_c̄(b_j))_j is monotonically increasing with lim_{j→∞} h_c̄(b_j) = h_c̄(t̄) due to continuity, this is a contradiction. □

To be able to fix the second of the two issues mentioned in Section 2.2, we need a stronger assumption on f. An easy way to solve the issue would be to force equality in the chain rule (11) by assuming that f is regular (cf. [1], Definition 2.3.4). While the class of regular functions includes convex functions, even simple nonconvex functions like x ↦ −|x| are not regular. As such, this assumption would heavily restrict the applicability of the method. Fortunately, we do not actually need equality in (11), as we are only interested in the behavior of ⟨∂f(x_0 + tṽ), ṽ⟩ for t → t̄, and not necessarily in ⟨∂f(x_0 + tṽ), ṽ⟩ itself. Thus, we will see that it suffices to assume that f is weakly lower semismooth [9,10], which means that f is locally Lipschitz continuous and that for all x ∈ R^n, v ∈ R^n and sequences (s_i)_i ⊆ R_{>0}, (ξ_i)_i ⊆ R^n with s_i ↘ 0 and ξ_i ∈ ∂f(x + s_i v) for all i ∈ N, it holds

lim sup_{i→∞} ⟨ξ_i, v⟩ ≤ lim inf_{i→∞} (f(x + s_i v) − f(x))/s_i.   (13)

Roughly speaking, weak lower semismoothness means that there is a semicontinuous relationship between directional derivatives and sequences of subgradients (via the inner product). In our case, we are interested in inequality (13) for x = x_0 + t̄ṽ and v = −ṽ. This will give us a lower estimate for ⟨ξ′, ṽ⟩ in step 3 of Algorithm 2, which we can use to show termination.
To this end, we will first derive an upper bound for the right-hand side of (13) in the following lemma.
Lemma 2 Assume that Algorithm 2 does not terminate. Let t̄ be as in Lemma 1, let (j_i)_i be the (by Lemma 1 (iv) infinite) sequence of all j ∈ N with t_j < t̄ and let s_i := t̄ − t_{j_i} for all i ∈ N. Then

lim inf_{i→∞} (f(x_0 + t̄ṽ − s_i ṽ) − f(x_0 + t̄ṽ))/s_i ≤ c̄ ∥ṽ∥².   (14)
Proof By Lemma 1, we have t_j < t̄ for infinitely many j ∈ N. Let (j_i)_i be the sequence of such j. Note that t_{j_i} < t̄ and t̄ ∈ [a_j, b_j] (Lemma 1 (ii)) imply a_{j_i+1} = t_{j_i}, so the monotonicity of (h_c̄(b_j))_j, which increases to h_c̄(t̄) by continuity, yields

h_c̄(t_{j_i}) = h_c̄(a_{j_i+1}) < h_c̄(b_{j_i+1}) ≤ h_c̄(t̄)   for all i ∈ N.

In particular, writing f(x_0 + t_{j_i}ṽ) = f(x_0 + t̄ṽ − (t̄ − t_{j_i})ṽ), it holds

(f(x_0 + t̄ṽ − (t̄ − t_{j_i})ṽ) − f(x_0 + t̄ṽ)) / (t̄ − t_{j_i}) < c̄ ∥ṽ∥²

for all i ∈ N. Since t̄ − t_{j_i} ↘ 0 and the limit inferior is taken in (14), this completes the proof. □

The previous lemma enables us to prove our main result.
Theorem 1 Assume that f is weakly lower semismooth. Then Algorithm 2 terminates.
Proof Assume that Algorithm 2 does not terminate. Choose (j_i)_i as in Lemma 2 and let ξ′_i ∈ ∂f(x_0 + t_{j_i}ṽ) be the subgradient evaluated in step 3 in iteration j_i. Let s_i := t̄ − t_{j_i}. Then ξ′_i ∈ ∂f(x_0 + t_{j_i}ṽ) = ∂f(x_0 + t̄ṽ − s_iṽ) and s_i ↘ 0, so by weak lower semismoothness (applied with x = x_0 + t̄ṽ and v = −ṽ) and Lemma 2, it holds

lim sup_{i→∞} ⟨ξ′_i, −ṽ⟩ ≤ lim inf_{i→∞} (f(x_0 + t̄ṽ − s_iṽ) − f(x_0 + t̄ṽ))/s_i ≤ c̄∥ṽ∥²,

i.e., lim inf_{i→∞} ⟨ξ′_i, ṽ⟩ ≥ −c̄∥ṽ∥² > −c∥ṽ∥². In particular, we must have ⟨ξ′_i, ṽ⟩ > −c∥ṽ∥² after finitely many iterations, causing the algorithm to stop in step 3. □

Example
In this section, we visualize the difference between Algorithms 1 and 2 by revisiting Example 1.
Example 2 In the situation of Example 1, it holds c_min = (f(x_0) − f(x_0 + (ε/∥ṽ∥)ṽ))/(ε∥ṽ∥) = −1/2, so any c̄ ∈ (c_min, c) = (−1/2, 1/2) can be chosen for Algorithm 2. Figure 2 shows the graphs of h_c̄ and ∂h_c̄ when choosing c̄ = 1/4. We see that the algorithm terminates after two iterations with t_3 = 5/8 and the new ε-subgradient ξ′ = 11/8 ∉ conv(W) = {−1}.

Conclusion and future work
In this article, we showed how the gap in the convergence theory of the deterministic gradient sampling methods from [6][7][8] can be closed for weakly lower semismooth functions by a more careful handling of the sufficient decrease condition in the bisection.
For future work, it might be worthwhile to analyze the behavior of Algorithm 2 in a more general setting. In [18], the convergence theory of the original gradient sampling method from [4] was generalized to directionally Lipschitzian functions, and already in [4], gradient sampling was successfully applied to even more general, non-Lipschitzian functions. Furthermore, the general strategy of approximating the ε-subdifferential in a deterministic fashion as in [6][7][8][15] may lead to interesting new methods when combined with other methods that rely on random sampling, like [13,14]. In the long run, we believe that this strategy may lead one step closer to a unified framework for both gradient sampling and bundle methods. For example, when comparing the standard gradient sampling method in [4] and the bundle method of [15], one could "hide" all the null steps in the bundle method in a subroutine and end up with a method that is similar to the gradient sampling method, just with a different way to approximate the ε-subdifferential.

This is related to x_0 = 0 being a nonsmooth point of f, and we get a similar (or even worse) result when choosing some x_0 ∈ D_1 close to zero. (It would just become more difficult to compute the exact probabilities as in the table for x_0 ≠ 0.) In this case, the method from [4] would perform descent steps with short step lengths (and little descent), which would require many function evaluations due to the backtracking nature of the line search, without recognizing that the iterates are already ε-critical.

Fig. 1 The graphs of (a) φ = h, (b) ∂φ = ∂h and (c) f in Example 1. The red lines show the values of t_j = 1 − 2^{−j} for j ∈ N.

Fig. 2 The graphs of (a) h_c̄ and (b) ∂h_c̄ for c = 1/2, c̄ = 1/4 in Example 2. The vertical lines show the sequence (t_j)_j from Algorithm 2, with the final value colored in blue. The dashed horizontal line in (b) marks the value (c̄ − c)∥ṽ∥² = −1/4 above which ∂h_c̄(t_j) must lie for the method to stop (cf. (12)).

Fig. A1 (a) The graph of f for n = 2 in Example 3. The red lines indicate the points at which f is not differentiable. (b) The boundary of the unit ball B_1(0) (dashed) and the sets D_1 (green) and D_2 (blue).

Table A1 Probability that at least one of the 2n sample points lies in D_2^ε in Example 3.