A Geometric Integration Approach to Nonsmooth, Nonconvex Optimisation

The optimisation of nonsmooth, nonconvex functions without access to gradients is a particularly challenging problem that is frequently encountered, for example in model parameter optimisation problems. Bilevel optimisation of parameters is a standard setting in areas such as variational regularisation problems and supervised machine learning. We present efficient and robust derivative-free methods called randomised Itoh–Abe methods. These are generalisations of the Itoh–Abe discrete gradient method, a well-known scheme from geometric integration, which has previously only been considered in the smooth setting. We demonstrate that the method and its favourable energy dissipation properties are well defined in the nonsmooth setting. Furthermore, we prove that whenever the objective function is locally Lipschitz continuous, the iterates almost surely converge to a connected set of Clarke stationary points. We present an implementation of the methods and apply them to various test problems. The numerical results indicate that the randomised Itoh–Abe methods can be superior to state-of-the-art derivative-free optimisation methods in solving nonsmooth problems while still remaining competitive in terms of efficiency.


Introduction
We consider the unconstrained optimisation problem

min_{x ∈ R^n} V(x),   (1.1)

where the objective function V is locally Lipschitz continuous, bounded below and coercive, the latter meaning that {x ∈ R^n : V(x) ≤ M} is compact for all M ∈ R. The function may be nonconvex and nonsmooth, and we assume no knowledge besides point evaluations x ↦ V(x). To solve (1.1), we present randomised Itoh-Abe methods, a generalisation of the Itoh-Abe discrete gradient method. The latter is a derivative-free optimisation scheme that has previously only been considered for differentiable functions.
Discrete gradient methods, a tool from geometric numerical integration, are optimisation schemes that inherit the energy dissipation of continuous gradient flow. The iterates of the methods monotonically decrease the objective function for all time steps, and Grimm et al. [28] recently provided a convergence theory for solving (1.1) in the continuously differentiable setting. We extend the concepts and results of their work and show that the Itoh-Abe discrete gradient method can be applied in the nonsmooth case, and that the favourable dissipativity property of the methods extends to this setting. Furthermore, we prove that for locally Lipschitz continuous functions the iterates converge to a set of stationary points, defined by the Clarke subdifferential framework.

Gradient flow and the discrete gradient method
For a differentiable function V : R^n → R, gradient flow is the ODE system

ẋ(t) = −∇V(x(t)),  x(0) = x_0,   (1.2)

where the dot represents differentiation with respect to time. By applying the chain rule, we compute

(d/dt) V(x(t)) = 〈∇V(x(t)), ẋ(t)〉 = −‖∇V(x(t))‖² = −‖ẋ(t)‖² ≤ 0,   (1.3)

where ‖x‖ denotes the 2-norm √〈x, x〉. This implies that gradient flow is inherently an energy dissipative system. In the field of geometric numerical integration, one studies methods for numerically solving ODEs that also preserve structures of the continuous system; see [30,45] for an introduction. Discrete gradient methods can be applied to first-order gradient systems to preserve energy conservation laws, dissipation laws, as well as Lyapunov functions [25,34,46,57]. They are defined as follows.
Definition 1.1 Let V : R^n → R be continuously differentiable. A discrete gradient is a continuous mapping ∇̄V : R^n × R^n → R^n that satisfies the two following properties:

〈∇̄V(x, y), y − x〉 = V(y) − V(x)  (mean value property),
∇̄V(x, x) = ∇V(x)  (consistency),

for all x, y ∈ R^n.
We now introduce the discrete gradient method for optimisation. For x_0 ∈ R^n and time steps τ_k > 0, k ∈ N, we solve

x_{k+1} = x_k − τ_k ∇̄V(x_k, x_{k+1}).   (1.4)

We apply the above mean value property to derive that the iterates decrease V:

V(x_{k+1}) − V(x_k) = 〈∇̄V(x_k, x_{k+1}), x_{k+1} − x_k〉 = −(1/τ_k) ‖x_{k+1} − x_k‖² ≤ 0.   (1.5)

Note that the decrease holds for all time steps τ_k > 0, and that (1.5) can be seen as a discrete analogue of the dissipative structure of gradient flow (1.3), replacing derivatives by finite differences. Grimm et al. [28] proved that for coercive, continuously differentiable functions, the iterates of (1.4) converge to a set of stationary points, provided that there are strictly positive constants τ_min, τ_max such that τ_k ∈ [τ_min, τ_max] for all k ∈ N.
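For concreteness, one standard construction satisfying the mean value property is the Gonzalez midpoint discrete gradient. The following minimal numpy check (our own illustration, not the paper's code; the test function V(x) = ‖x‖⁴/4 is an arbitrary choice) verifies the property numerically.

```python
import numpy as np

def V(x):
    return 0.25 * np.dot(x, x) ** 2          # V(x) = ||x||^4 / 4

def gradV(x):
    return np.dot(x, x) * x                  # exact gradient of V

def midpoint_discrete_gradient(x, y):
    # Gonzalez midpoint discrete gradient: midpoint gradient plus a
    # correction term that enforces <dg, y - x> = V(y) - V(x) exactly
    m = gradV((x + y) / 2)
    corr = (V(y) - V(x) - np.dot(m, y - x)) / np.dot(y - x, y - x)
    return m + corr * (y - x)

rng = np.random.default_rng(0)
x, y = rng.standard_normal(3), rng.standard_normal(3)
dg = midpoint_discrete_gradient(x, y)
# mean value property holds up to round-off
assert abs(np.dot(dg, y - x) - (V(y) - V(x))) < 1e-10
```

The correction term is what distinguishes a discrete gradient from a simple midpoint gradient evaluation; it is this exact mean value identity that yields the dissipation law below.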

Itoh-Abe methods
The Itoh-Abe discrete gradient [34] (also known as the coordinate increment discrete gradient) is defined componentwise by

∇̄V(x, y)_i = [V(y_1, …, y_i, x_{i+1}, …, x_n) − V(y_1, …, y_{i−1}, x_i, …, x_n)] / (y_i − x_i),  if y_i ≠ x_i,

and by the corresponding partial derivative of V otherwise. Solving an iterate of the discrete gradient method (1.4) with the Itoh-Abe discrete gradient is equivalent to successively solving n scalar equations of the form

(x^{k+1}_i − x^k_i)/τ_k = −[V(x^{k+1}_1, …, x^{k+1}_i, x^k_{i+1}, …, x^k_n) − V(x^{k+1}_1, …, x^{k+1}_{i−1}, x^k_i, …, x^k_n)] / (x^{k+1}_i − x^k_i).

We generalise the Itoh-Abe discrete gradient method to randomised Itoh-Abe methods accordingly. Let (d_k)_{k∈N} ⊂ S^{n−1} be a sequence of directions, where S^{n−1} denotes the unit sphere {x ∈ R^n : ‖x‖ = 1}. The directions can be drawn from a random distribution or chosen deterministically. At the kth step, we update

x_{k+1} = x_k + β_k d_k,  where β_k ≠ 0 solves  β_k/τ_k = −[V(x_k + β_k d_k) − V(x_k)]/β_k.   (1.6)

If no such β_k exists, we set x_{k+1} = x_k. We formalise this method in Algorithm 1. We assume throughout the paper that the time steps (τ_k)_{k∈N} are bounded between two strictly positive constants τ_min, τ_max, which can take arbitrary values.
Observe that if (d_k)_{k∈N} cycle through the standard coordinates (e_i)_{i=1}^n with the rule d_k = e_{[(k−1) mod n]+1}, then computing n steps of (1.6) corresponds to one step of (1.4) with the Itoh-Abe discrete gradient. Furthermore, the dissipation property (1.5) can be rewritten as

V(x_{k+1}) − V(x_k) = −β_k²/τ_k ≤ 0.   (1.7)

Consequently, the dissipative structure of the Itoh-Abe methods is well defined in a derivative-free setting.
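A single update of the form (1.6) reduces to a scalar root-finding problem, which can be sketched as follows (our own minimal implementation, not the paper's Algorithm 3; we bracket a sign change of g(β) = V(x + βd) − V(x) + β²/τ and bisect, the nonsmooth test function being an arbitrary choice).

```python
import numpy as np

def itoh_abe_step(V, x, d, tau, b0=1e-6, tol=1e-12):
    """One randomised Itoh-Abe step: solve V(x + b d) - V(x) = -b^2 / tau."""
    g = lambda b: V(x + b * d) - V(x) + b * b / tau
    for s in (1.0, -1.0):                    # try both orientations of d
        a = s * b0
        if g(a) >= 0:
            continue                         # no descent for small steps
        b = a
        while g(b) < 0:                      # expand until g changes sign
            b *= 2.0
        for _ in range(200):                 # bisect: g(a) < 0 <= g(b)
            m = 0.5 * (a + b)
            if g(m) < 0:
                a = m
            else:
                b = m
            if abs(b - a) < tol:
                break
        return x + 0.5 * (a + b) * d
    return x                                 # no nonzero root found

V = lambda z: np.abs(z[0]) + (z[1] - 1) ** 2   # nonsmooth test function
x = np.array([1.0, 0.0])
x1 = itoh_abe_step(V, x, np.array([1.0, 0.0]), tau=1.0)
assert V(x1) < V(x)                          # dissipation (1.7) in action
```

Here the step lands at the nonzero root β = −1, decreasing V from 2 to 1; the decrease equals β²/τ, exactly as (1.7) prescribes.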
Ehrhardt et al. [20] studied Itoh-Abe methods in the smooth setting, establishing linear convergence rates for functions that satisfy the Polyak-Łojasiewicz inequality [36]. Furthermore, the application of Itoh-Abe discrete gradient methods to smooth optimisation problems is well documented. Applications include convex variational regularisation problems for image analysis [28], nonconvex image inpainting problems with Euler's elastica regularisation [59], for which it outperformed gradient-based schemes, and the solution of linear systems, for which it recovers the popular Gauss-Seidel and successive over-relaxation (SOR) methods [48].

Bilevel optimisation and blackbox problems
An important application for developing derivative-free solvers is model parameter optimisation problems. The setting for this class of problems is as follows. A model depends on some tunable parameters α ∈ R^n, so that for a given parameter choice α, the model returns an output u_α. There is a cost function Φ, which assigns to the output u_α a numerical score Φ(u_α) ∈ R, which we want to minimise. The associated model parameter optimisation problem becomes

α* ∈ arg min_{α ∈ R^n} Φ(u_α).
A well-known example of parameter optimisation problems is supervised machine learning.
In this paper, we consider one instance of such problems in image analysis, called bilevel optimisation of variational regularisation problems. Here, the model is given by a variational regularisation problem for image denoising,

u_α ∈ arg min_u { (1/2) ‖u − f^δ‖² + α R(u) },

where f^δ is a noisy image, R is a regularisation functional and α is the regularisation parameter. For training data with desired reconstruction u†, we consider a scoring function Φ that estimates the discrepancy between u† and the reconstruction u_α. In Section 6.2, we apply Itoh-Abe methods to solve these problems.
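A toy sketch of this bilevel setup (our own illustration with simplifying assumptions: Tikhonov regularisation, for which u_α has the closed form f^δ/(1 + α), rather than the total-variation-type regularisers used in the paper's experiments, and a simple grid search in place of an Itoh-Abe solver):

```python
import numpy as np

rng = np.random.default_rng(1)
u_true = np.sin(np.linspace(0, np.pi, 50))        # ground truth u†
f = u_true + 0.3 * rng.standard_normal(50)        # noisy data f^δ

def u_alpha(alpha):
    # lower-level problem: argmin_u 1/2 ||u - f||^2 + alpha/2 ||u||^2
    return f / (1.0 + alpha)

def Phi(alpha):
    # upper-level cost: discrepancy to the ground truth reconstruction
    return float(np.sum((u_alpha(alpha) - u_true) ** 2))

# derivative-free search over the scalar parameter alpha
alphas = np.linspace(0.0, 2.0, 2001)
best = alphas[np.argmin([Phi(a) for a in alphas])]
assert Phi(best) <= Phi(0.0)                      # at least as good as alpha = 0
```

In realistic instances the lower-level problem has no closed form, α ↦ u_α is an algorithmic blackbox, and Φ is typically nonsmooth and nonconvex in α, which is precisely the setting the randomised Itoh-Abe methods target.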
Bilevel optimisation problems, and model parameter optimisation problems in general, pose several challenges. They are often nonconvex and nonsmooth, due to the nonsmoothness and nonlinearity of α → u α . Furthermore, the model simulation α → u α is an algorithmic process for which gradients or subgradients cannot easily be estimated. Such problems are termed blackbox optimisation problems, as one only has access to point evaluations of the function. It is therefore of great interest to develop efficient and robust derivative-free methods for such optimisation problems.
There is a rich literature on bilevel optimisation for variational regularisation problems in image analysis, c.f. e.g. [12,58,39,51]. Furthermore, model parameter optimisation problems appear in many other applications. These include optimising for the management of water resources [23], approximation of a transmembrane protein structure in computational biology [26], image registration in medical imaging [52], the building of wind farms [19], and solar energy utilisation in architectural design [35], to name a few.

Related literature on nonsmooth, nonconvex optimisation
Although nonsmooth, nonconvex problems are known for their difficulty compared to convex problems, a rich optimisation theory has grown since the 1970s. As the focus of this paper is derivative-free optimisation, we will compare the methods' convergence properties and performance to other derivative-free solvers. Audet and Hare [3] recently provided a reference work for this field.
While there are a myriad of derivative-free solvers, few provide convergence guarantees for nonsmooth, nonconvex functions. Audet and Dennis Jr [2] introduced the mesh adaptive direct search (MADS) method for constrained optimisation, with provable convergence guarantees to stationary points for nonsmooth, nonconvex functions. Direct search methods evaluate the function at a finite polling set, compare the evaluations, and update the polling set accordingly. Such methods only consider the ordering of the evaluations, rather than the numerical differences. A significant portion of derivative-free methods are direct search methods, and the most well known of these is the Nelder-Mead method (also known as the downhill simplex method) [49].
Alternatively, model-based methods that build a local quadratic model based on evaluations are well documented [13,55,56]. While such methods tend to work well in practice, they are normally designed only for smooth functions, so their performance on nonsmooth functions is not guaranteed.
Fasano et al. [22] formulated a derivative-free line search method termed DFN and analysed its convergence properties for nonsmooth functions in the Clarke subdifferential framework, in the constrained setting. The Itoh-Abe methods share many similarities with DFN, such as performing line searches along dense directions, and they employ a similar convergence analysis. However, the line searches of these methods differ, and the Itoh-Abe methods are in particular motivated by discrete gradient methods and preserving the dissipative structure of gradient flow. Furthermore, our convergence analysis is more comprehensive, considering both stochastic and deterministic methods, as well as obtaining stronger convergence results in the deterministic case. Building on the aforementioned algorithm, Liuzzi and Truemper [43] formulated a derivative-free method that is a hybrid between DFN and MADS.
Furthermore, we note the resemblance of the randomised Itoh-Abe methods (1.6), when (d_k)_{k∈N} are randomly, independently drawn from S^{n−1}, to the random search method proposed by Polyak [54] and studied for nonsmooth, convex functions by Nesterov [50], in which each iterate is updated by a step along a randomly, independently drawn direction d_k ∈ S^{n−1}. The implicit equation (1.6) can be treated as a line search rule for such a method, with constraints imposed by τ_min, τ_max.
While our focus is on derivative-free methods, we also mention some popular methods for nonsmooth, nonconvex optimisation that use gradient or subgradient information. Central in nonsmooth optimisation are bundle methods, where a subgradient [16] is required at each iterate to construct a linear approximation to the objective function; see [37] for an introduction. A close alternative to bundle methods are gradient sampling methods (see [11] for a recent review by Burke et al.), where the descent direction is determined by sampling gradients in a neighbourhood of the current iterate. Curtis and Que [18] formulated a hybrid method between the gradient sampling scheme of [17] and the well-known quasi-Newton method BFGS adapted to nonsmooth problems [42]. These methods have convergence guarantees in the Clarke subdifferential framework, under the assumption that the objective function is differentiable on an open, dense set. Last, we mention a derivative-free scheme based on gradient sampling methods, proposed by Kiwiel [38], where gradients are replaced by Gupal's estimates of gradients of the Steklov averages of the objective function. This method has convergence guarantees in the Clarke subdifferential framework, but has a high computational cost in terms of function evaluations per iterate.

Contributions
In this paper, we formulate randomised Itoh-Abe methods for nonsmooth functions. We prove that the method always admits a solution, and that the iterates converge to a set of Clarke stationary points, for any locally Lipschitz continuous function, and both for deterministic and randomly chosen search directions. Consequently, the scope of discrete gradient methods for optimisation is significantly broadened, and we conclude that the dissipativity properties of gradient flow can be preserved even beyond differentiability. Ultimately, this provides a new, robust, and versatile optimisation scheme for nonsmooth, nonconvex functions.
The theoretical convergence analysis for the Itoh-Abe methods is thorough and foundational, and we provide examples that demonstrate that the conditions of the convergence theorem are not just sufficient, but necessary. Furthermore, the statements and proofs are sufficiently general so that they can be adapted to other schemes, such as the aforementioned DFN method, thus enhancing the theory of these methods as well.
We show that the method works well in practice, by solving bilevel optimisation problems for variational regularisation problems, as well as solving benchmark problems such as Rosenbrock functions.
The rest of the paper is structured as follows. Section 2 provides a background on the Clarke subdifferential for nonsmooth, nonconvex analysis. In Section 3, the main theoretical results of the paper are presented, namely existence and optimality results in the stochastic and deterministic setting. In Section 4, we briefly discuss the Itoh-Abe discrete gradient for general coordinate systems. In Section 5 and Section 6, the numerical implementation is described and results from example problems are presented. Finally, a conclusion is given in Section 7.

Nonconvex optimisation
In this section, we introduce the Clarke subdifferential framework [16] for nonsmooth, nonconvex optimisation. This is the most popular framework for this setting, due to its nice analytical properties. It generalises the gradient of a differentiable function, as well as the subdifferential [21] of a convex function, hence the term Clarke subdifferential. Francis H. Clarke introduced the framework in his doctoral thesis in 1973 [15], in which he termed it generalised gradient.

The Clarke subdifferential
Throughout the paper, for ε > 0 and x ∈ R n , we denote by B ε (x) the open ball {y ∈ R n : y − x < ε}.
Definition 2.1 A function V : R^n → R is Lipschitz of rank L near x ∈ R^n if there is ε > 0 such that |V(y) − V(z)| ≤ L ‖y − z‖ for all y, z ∈ B_ε(x). V is locally Lipschitz continuous if the above property holds for all x ∈ R^n.

Definition 2.2 For V Lipschitz near x and for a vector d ∈ R^n, the Clarke directional derivative is given by

V°(x; d) = limsup_{y → x, λ ↓ 0} [V(y + λd) − V(y)] / λ.

Definition 2.3
Let V be locally Lipschitz and x ∈ R^n. The Clarke subdifferential of V at x is given by

∂V(x) = {ξ ∈ R^n : 〈ξ, d〉 ≤ V°(x; d) for all d ∈ R^n}.

The subdifferential ∂V is well defined for locally Lipschitz functions, coincides with the standard subdifferential for convex functions, and can be characterised as

∂V(x) = co { lim_{j→∞} ∇V(x_j) : x_j → x, x_j ∈ D(V) },

where D(V) is the set of differentiable points and co denotes the convex hull of a set [16, Theorem 2.5.1]. We additionally state two useful results, both of which can be found in Chapter 2 of [16].
There are alternative frameworks for generalising differentiability of nonsmooth, nonconvex functions. For example, the Michel-Penot subdifferential [47] coincides with the Gâteaux derivative when this exists, unlike the Clarke subdifferential, which is larger and only coincides with strict derivatives [24]. However, the Clarke subdifferential is outer semicontinuous, making it in most cases the preferred framework for analysis. See [7] by Borwein and Zhu for a survey of various subdifferentials, published on the 25th anniversary of the Clarke subdifferential.

Discrete gradients versus subgradients
By definition, when V is continuously differentiable, any discrete gradient ∇̄V(x, y) converges to the gradient ∇V(x) as y → x. However, for nondifferentiable V, discrete gradients do not necessarily approximate a subgradient or even an ε-approximate subgradient. This is demonstrated by the following example.
Then, for all k, the Itoh-Abe discrete gradient is

Clarke stationary points
For our purposes, we also define Clarke directional stationarity. A point x* ∈ R^n is a (Clarke) stationary point of V if 0 ∈ ∂V(x*), or, equivalently, if V°(x*; d) ≥ 0 for all d ∈ R^n. For d ∈ S^{n−1}, we say that x* is Clarke directionally stationary along d if V°(x*; d) ≥ 0.
Any local maximum or minimum is stationary. If V is convex, then the stationary points coincide with the global minimisers. For certain more general classes of functions, the concept of Clarke stationarity likewise reduces to a first-order characterisation of global optimality.

Definition 2.9 ([53]) A locally Lipschitz continuous function V is pseudoconvex if, for all x, y ∈ R^n,

V(y) < V(x)  implies  V°(x; y − x) < 0.

If V is pseudoconvex, then any Clarke stationary point is a global minimum [4]. Clarke [16] also introduced the notion of regularity: V is regular at x if, for every d ∈ R^n, the one-sided directional derivative

V′(x; d) = lim_{λ ↓ 0} [V(x + λd) − V(x)] / λ

exists and coincides with V°(x; d). If this holds for all x, we say that V is regular.
For a regular function, a point is Clarke stationary if and only if the directional derivative is nonnegative in all directions. For example, convex functions are regular, and strict differentiability at a point implies regularity at a point. However, for nonregular functions, x * can simultaneously be Clarke stationary and have negative directional derivatives in a neighbourhood of directions.
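A concrete illustration of the last point (our own example, not from the paper): V(x) = −|x| is nonregular at x = 0, where the one-sided directional derivatives V′(0; ±1) equal −1, yet the Clarke directional derivatives V°(0; ±1) equal +1, so 0 is Clarke stationary despite being a strict local maximum. A small numerical check:

```python
V = lambda x: -abs(x)

def onesided(d, t=1e-8):
    # one-sided directional derivative at 0: (V(t d) - V(0)) / t for small t
    return (V(t * d) - V(0)) / t

def clarke_estimate(d, n=2000, t=1e-8):
    # V°(0; d) = limsup over base points y -> 0 and t -> 0+ of the
    # difference quotient; estimate by the max over sampled base points
    ys = [(-1) ** k * 1e-6 * (k + 1) / n for k in range(n)]
    return max((V(y + t * d) - V(y)) / t for y in ys)

assert onesided(1.0) < 0 and onesided(-1.0) < 0   # descent in both directions
assert clarke_estimate(1.0) > 0                   # yet Clarke stationary
assert clarke_estimate(-1.0) > 0
```

The positive Clarke values arise from base points y on the far side of the kink, which is exactly the limsup over nearby points in Definition 2.2.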

The discrete gradient method for nonsmooth optimisation
In this section, we present the main theoretical results for the randomised Itoh-Abe methods. In particular, we prove that the update (1.6) admits a solution for all τ_k > 0. We also prove, under minimal assumptions on V and (d_k)_{k∈N}, that the iterates converge to a connected set of Clarke stationary points, both in a stochastic and a deterministic setting.

Existence result
Lemma 3.1 Suppose V is a continuous function bounded below, and that x ∈ R^n, d ∈ S^{n−1} and τ > 0. Then one of the following statements holds.
(i) There is a β ≠ 0 that solves (1.6), i.e. that satisfies V(x + βd) − V(x) = −β²/τ.
(ii) The one-sided descent rates of V at x along d and −d vanish, i.e.

liminf_{λ ↓ 0} [V(x + λd) − V(x)]/λ ≥ 0  and  liminf_{λ ↓ 0} [V(x − λd) − V(x)]/λ ≥ 0.

Proof. Suppose the second statement does not hold. Then there is ε > 0 and, without loss of generality, a sequence λ_j ↓ 0 such that V(x + λ_j d) − V(x) ≤ −ε λ_j. Consider the mapping g(β) = [V(x + βd) − V(x)]/β + β/τ. Taking β → 0 along this sequence, we get that g(β₁) ≤ −ε + β₁/τ < 0 for some sufficiently small β₁ > 0. On the other hand, as V is bounded below, we have [V(x + βd) − V(x)]/β ≥ [inf V − V(x)]/β → 0 as β → ∞, while β/τ → ∞. Thus there is β₂ > β₁ such that g(β₂) > 0. Since the mapping β ↦ g(β) is continuous for β ∈ (0, ∞), we conclude by the intermediate value theorem [62, Theorem 4.23] that there is β ∈ (β₁, β₂) that solves the discrete gradient equation (1.6). Remark 3.2 Note that by the above proof it is straightforward to identify an interval in which a solution to (1.6) exists, allowing for the use of standard root solver algorithms.
The following lemma, which is an adaptation of [28, Theorem 1] to the nonsmooth setting, summarises some useful properties of the methods. Lemma 3.3 Suppose that V is continuous, bounded from below and coercive, and let (x_k)_{k∈N} be the iterates produced by (1.6). Then the following properties hold.
(i) The sequence (V(x_k))_{k∈N} is nonincreasing and converges to some V* ∈ R.
(ii) ∑_{k∈N} ‖x_{k+1} − x_k‖² < ∞; in particular, ‖x_{k+1} − x_k‖ → 0.
(iii) The iterates (x_k)_{k∈N} are bounded and admit an accumulation point.
Proof. Property (i) follows from the dissipation property (1.7) and the assumption that V is bounded below. Similarly, by (1.7),

∑_{k∈N} (1/τ_k) ‖x_{k+1} − x_k‖² = V(x_0) − V* < ∞,

and since τ_k ≤ τ_max for all k, we conclude (ii). The iterates remain in the level set {x ∈ R^n : V(x) ≤ V(x_0)}. Therefore, by coercivity of V, the iterates (x_k)_{k∈N} are bounded, and admit an accumulation point.
We denote by S the limit set of (x_k)_{k∈N}, which is the set of accumulation points,

S = {x* ∈ R^n : x_{k_j} → x* for some subsequence (x_{k_j})_{j∈N}}.

By the above lemma, S is nonempty. We now prove further properties of the limit set.

Lemma 3.4
The limit set S is compact, connected and has empty interior. Furthermore, V is constant on S.
Proof. Boundedness of S follows from coercivity of V combined with the fact that S belongs to the level set {x ∈ R^n : V(x) ≤ V(x_0)}. Since any accumulation point of S is also an accumulation point of (x_k)_{k∈N}, S is closed. Hence S is compact. Furthermore, since V is continuous and (V(x_k))_{k∈N} converges to some V*, every point of S satisfies V = V*, so V is constant on S. We prove connectedness by contradiction. Suppose there are two disjoint, nonempty open sets A and B such that S ⊂ A ∪ B and S intersects both A and B. The sequence (x_k)_{k∈N} then jumps between A and B infinitely many times while ‖x_{k+1} − x_k‖ → 0, which implies that there is a subsequence (x_{k_j})_{j∈N} in R^n \ (A ∪ B). However, (x_{k_j})_{j∈N} is a bounded sequence and has an accumulation point, which must belong to the closed set R^n \ (A ∪ B). This contradicts the assumption that all accumulation points of (x_k)_{k∈N} belong to S ⊂ A ∪ B. We show that S has empty interior by contradiction.

Optimality result
We now proceed to the main result of this paper, namely that all points in the limit set S are Clarke stationary. We consider the stochastic case and the deterministic case separately.
In the stochastic case, we assume that the directions (d_k)_{k∈N} are randomly, independently drawn from a distribution Ξ on S^{n−1}, and that the support of the probability density of Ξ is dense in S^{n−1}. It is straightforward to extend the proof to the case where each block (d_{nk+1}, …, d_{n(k+1)}) is drawn as an orthonormal system, under the assumptions that the directions (d_{nk+1})_{k∈N} are independently drawn from S^{n−1} and that the support of the density of the corresponding marginal distribution is dense in S^{n−1}.
We define X to be the set of nonstationary points,

X = {x ∈ R^n : V°(x; d) < 0 for some d ∈ S^{n−1}}.   (3.1)

Theorem 3.5 Suppose the directions (d_k)_{k∈N} are independently drawn from the random distribution Ξ, and suppose that the support of the density of Ξ is dense in S^{n−1}. Then P(S ∩ X ≠ ∅) = 0, i.e. the limit set S almost surely lies in the set of stationary points.
Proof. We will construct a countable collection of open sets (B_j)_{j∈N}, such that X ⊂ ∪_{j∈N} B_j and so that for all j ∈ N we have P(S ∩ B_j ≠ ∅) = 0. Then the result follows from countable additivity of probability measures.
First, we show that for every x ∈ X, there is d ∈ S^{n−1}, ε > 0, and δ > 0 such that

V(y + λe) − V(y) ≤ −ελ  for all y ∈ B_δ(x), e ∈ B_δ(d) ∩ S^{n−1}, λ ∈ (0, δ).   (3.2)

To show this, note that if x ∈ X, then by definition there is d ∈ S^{n−1} and ε > 0 such that V°(x; d) < −2ε. Therefore, there is η > 0 such that for all λ ∈ (0, η) and all y ∈ B_η(x), we have

[V(y + λd) − V(y)]/λ ≤ −2ε.

Furthermore, since V is Lipschitz continuous of some rank L near x, the mapping e ↦ V(y + λe) is also locally Lipschitz continuous (of the same rank, scaled by λ). It follows that there exists δ ∈ (0, η) such that for all y ∈ B_δ(x), all e ∈ B_δ(d) ∩ S^{n−1}, and all λ ∈ (0, δ), we have

[V(y + λe) − V(y)]/λ ≤ −ε.

This concludes the first part. Next, for m ∈ N, we define the set

X_m = {x ∈ X : (3.2) holds for some d ∈ S^{n−1}, ε > 0 and all δ < 1/m}.
Let (y_i)_{i∈N} be a dense sequence in X_m, which exists because Q^n is both countable and dense in R^n. For each y_i, let e_i ∈ S^{n−1}, ε_i > 0 and δ_i < 1/m be such that (3.2) holds, and define the balls B_i = B_{δ_i}(y_i). Then X_m ⊂ ∪_{i∈N} B_i, and since a countable union of countable sets is countable, the collection of all such balls over m, i ∈ N is countable and covers X. It therefore remains to show that, for each ball B_i, we have P(S ∩ B_i ≠ ∅) = 0.
Consider an iterate x_k ∈ B_i for which the direction satisfies d_k ∈ B_{δ_i}(e_i) ∩ S^{n−1}, an event whose probability is bounded below by some p > 0, since the support of the density of Ξ is dense in S^{n−1}. Write λ = |β_k| for the solution to (1.6). If λ < δ_i, then by (3.2) and (1.7),

V(x_k) − V(x_{k+1}) = λ²/τ_k ≥ ε_i λ,

so that λ ≥ ε_i τ_k ≥ ε_i τ_min. This in turn implies

V(x_k) − V(x_{k+1}) = λ²/τ_k ≥ ε_i² τ_k ≥ ε_i² τ_min.

On the other hand, if λ ≥ δ_i, then

V(x_k) − V(x_{k+1}) = λ²/τ_k ≥ δ_i²/τ_max.

Setting μ = min{ε_i² τ_min, δ_i²/τ_max}, each such event decreases V by at least μ. Writing M = V(x_0) and m = inf V, and choosing K ∈ N such that Kμ > M − m, we know that this event can occur at most K times. If there were a subsequence (x_{k_j})_{j∈N} ⊂ B_i, then, by independence of the draws, the event would almost surely occur more than K times, a contradiction. In other words, almost surely, there is no subsequence (x_{k_j})_{j∈N} ⊂ B_i. This concludes the proof.

Deterministic case
We now cover the deterministic case, in which (d_k)_{k∈N} is required to be cyclically dense. Definition 3.6 A sequence (d_k)_{k∈N} ⊂ S^{n−1} is cyclically dense in S^{n−1} if, for all ε > 0, there is N ∈ N such that for any k ∈ N, the set {d_k, …, d_{k+N−1}} forms an ε-cover of S^{n−1}, i.e. every d ∈ S^{n−1} lies within distance ε of some element of the set. Remark 3.7 Randomly drawn sequences are almost surely not cyclically dense, hence the separate treatment of the stochastic and deterministic methods.
Many constructions of dense sequences are also cyclically dense. We provide an example of such a sequence on the unit interval [0, 1]. Example 3.8 Let σ ∈ (0, 1) be an irrational number and define the sequence (λ_k)_{k∈N} in [0, 1] by

λ_k = σk − ⌊σk⌋,

where ⌊σk⌋ denotes the largest integer less than or equal to σk. To see that (λ_k)_{k∈N} is cyclically dense in [0, 1], set ε > 0 and note by sequential compactness of [0, 1] that there are k, r ∈ N such that |λ_k − λ_{k+r}| < ε. We can write δ = |λ_k − λ_{k+r}| > 0, where we know that δ is strictly positive, as no value can be repeated in the sequence due to σ being irrational. By modular arithmetic, we have for any l ∈ N,

|λ_{k+rl} − λ_{k+r(l+1)}| = δ (modulo wrapping around the endpoints of [0, 1]).

In other words, the subsequence (λ_{k+rl})_{l∈N} moves in increments of δ < ε on [0, 1]. Setting N = r⌈1/δ⌉, where ⌈x⌉ denotes the smallest integer greater than or equal to x, it is clear that for any j ∈ N, the set {λ_j, λ_{j+1}, …, λ_{j+N−1}} forms an ε-cover of [0, 1].
One could naturally extend this construction to higher dimensions [0, 1]^n, by choosing n irrational numbers such that the ratio of any two of these numbers is also irrational.
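The construction in Example 3.8 can be checked numerically; the following sketch (our own, with the arbitrary choices σ = √2 − 1, ε = 0.05 and window length N = 200) verifies that every window of N consecutive terms forms an ε-cover of [0, 1].

```python
import math

sigma = math.sqrt(2) - 1          # an irrational number in (0, 1)
eps = 0.05
N = 200                           # window length, ample for this eps

# lambda_k = sigma * k mod 1, the irrational rotation sequence
lam = [(sigma * k) % 1.0 for k in range(5000)]

def is_eps_cover(window, eps):
    # every probe point in [0, 1] must lie within eps of some window element
    probes = [i * eps / 2 for i in range(int(2 / eps) + 1)]
    return all(any(abs(t - l) < eps for l in window) for t in probes)

# every sampled window of N consecutive terms eps-covers [0, 1]
assert all(is_eps_cover(lam[j:j + N], eps) for j in range(0, 4800, 160))
```

By the three-distance theorem, the gaps between N consecutive rotation points are uniformly small for N large, which is why a single window length works for every starting index j, i.e. why the density is cyclic rather than merely asymptotic.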
Theorem 3.9 Suppose the directions (d_k)_{k∈N} are cyclically dense in S^{n−1}. Then S ∩ X = ∅, i.e. the limit set S consists of Clarke stationary points.
Proof. We will show that an accumulation point x* ∈ S cannot belong to any of the balls B_{δ_i}(y_i) constructed in the proof of Theorem 3.5, from which it follows that S is a subset of the set of stationary points. For contradiction, suppose that there is a subsequence (x_{k_j})_{j∈N} ⊂ B_{δ_i}(y_i). By cyclical density, we can choose N such that the corresponding directions {d_{k_j}, d_{k_j+1}, …, d_{k_j+N−1}} form an ε_i-cover of S^{n−1}. Therefore, there exists an iterate with x_k ∈ B_{δ_i}(y_i) and d_k ∈ B_{δ_i}(e_i), so we can argue as in Theorem 3.5 that V decreases by at least μ > 0 at this step. Since (x_{k_j})_{j∈N} ⊂ B_{δ_i}(y_i), this would happen arbitrarily many times, which is a contradiction, as V is bounded below. This concludes the proof.

Necessity of search density and Lipschitz continuity
For nonsmooth problems, it is necessary to employ a set of directions (d_k)_{k∈N} larger than the set of basis coordinates {e_1, …, e_n}. To see this, consider the function V(x, y) = max{x, y} and the starting point x_0 = [1, 1]^T. With the standard Itoh-Abe discrete gradient method, the iterates would remain at x_0, even though this point is nonstationary.
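This claim is easy to verify numerically (our own minimal check, not the paper's code): no step along ±e_1 or ±e_2 decreases V at x_0 = (1, 1), while the diagonal direction yields immediate descent.

```python
import math

V = lambda x, y: max(x, y)
x0 = (1.0, 1.0)

# signed step sizes over several orders of magnitude
betas = [s * 10 ** (-p) for s in (1, -1) for p in range(0, 9)]
assert all(V(x0[0] + b, x0[1]) >= V(*x0) for b in betas)   # e1: no descent
assert all(V(x0[0], x0[1] + b) >= V(*x0) for b in betas)   # e2: no descent

d = (-1 / math.sqrt(2), -1 / math.sqrt(2))                 # diagonal direction
assert V(x0[0] + 0.1 * d[0], x0[1] + 0.1 * d[1]) < V(*x0)  # descent exists
```

Along either coordinate, decreasing one argument leaves the maximum at 1, so every solution of (1.6) is β_k = 0 and the iterates stall; a dense set of directions eventually captures the diagonal descent cone.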
We show with a simple example that the assumption of density of (d_k)_{k∈N} in Theorem 3.5 is not only sufficient, but also necessary. Example 3.10 One can construct a function and a starting point x_0 for which the directions of descent at x_0 form an interval of S^1 around [1, 0]^T whose width depends on a parameter N. This interval can be made arbitrarily small by choosing N to be sufficiently large. Therefore, for an Itoh-Abe method to descend from x_0 for arbitrary functions, the directions (d_k)_{k∈N} need to include a subsequence convergent to the direction [1, 0]^T. As this direction is arbitrary, we deduce that (d_k)_{k∈N} must be dense.
Theorem 3.5 also assumes that V is locally Lipschitz continuous. We briefly discuss why this assumption is necessary, and provide an example to show that for functions that are merely continuous, the theorem no longer holds.
By Proposition 2.1.1 (b) in [16], the mapping (y, d) ↦ V°(y; d) is upper semicontinuous for y in a neighbourhood of x, due to the local Lipschitz continuity of V near x. That is,

limsup_{(y, e) → (x, d)} V°(y; e) ≤ V°(x; d).

This property is crucial for the convergence analysis of Itoh-Abe methods, as it implies that if V°(x; d) < 0, then V°(y; e) < 0 for all (y, e) sufficiently close to (x, d). Without local Lipschitz continuity, it is possible to have iterates x_k → x* along which no direction yields sufficient descent, even though V°(x*; d) < 0 for some d. In this case, there is no guarantee that the limit x* is Clarke stationary. We demonstrate this with an example.

Example 3.11
We will first state the iterates (x_k)_{k∈N} and then construct a function V : R² → R that fits these iterates. Let (d_k)_{k∈N} be a cyclically dense sequence in S^1 and assume without loss of generality that [0, 1]^T ∉ (d_k)_{k∈N}. Replacing d_k with −d_k does not change the step in (1.6), so we assume that the first component of each d_k is negative. We set x_0 = [0, 0]^T and define (x_k)_{k∈N} and (V(x_k))_{k∈N} so that the increments are summable and the function values strictly decrease. Since ∑_{k∈N} ‖x_k − x_{k+1}‖ < ∞, it follows that x_k converges to some limit x*, and V(x_k) clearly decreases to a limit V* ∈ R. Furthermore, these steps satisfy (1.6) with τ_k = 1. We then define V on the line segments [x_k, x_{k+1}] by interpolating the prescribed values. Next, we define V on R² as a function that linearly decreases everywhere in the direction [0, 1]^T, and so that its value is consistent with the values given on the predefined line segments [x_k, x_{k+1}]. Note that this is a well-defined and continuous function, since each line in the direction [0, 1]^T crosses at most one point on at most one line segment, due to our assumptions on (d_k)_{k∈N}.
We conclude the example by noting that the limit x* is not Clarke stationary (in fact, no point is Clarke stationary), since V°(x; [0, 1]^T) = −1 for all x.

Nonsmooth, nonconvex functions with further regularity
For a large class of nonsmooth optimisation problems (convex and nonconvex), the objective function is sufficiently regular that the standard Itoh-Abe discrete gradient method is also guaranteed to converge to Clarke stationary points. These are functions V for which x* ∈ R^n is Clarke stationary if and only if V°(x*; ±e_i) ≥ 0 for i = 1, …, n. One example is functions of the form

V(x) = E(x) + λ ‖x‖₁,

where E is a continuously differentiable function that may be nonconvex, ‖x‖₁ denotes |x₁| + … + |xₙ|, and λ > 0. See for example Proposition 2.3.3 and the subsequent corollary in [16], combined with the fact that the nonsmooth component of V, i.e. ‖·‖₁, separates into n coordinate-wise scalar functions. This implies that the Clarke subdifferential is given by

∂V(x) = {∇E(x)} + λ ∏_{i=1}^n ∂|·|(x_i),

where ∏ denotes the Cartesian product and

∂|·|(t) = {sign(t)} if t ≠ 0, and [−1, 1] if t = 0.

Since this paper is chiefly concerned with the blackbox setting where no particular structure of V is assumed, we do not include a rigorous analysis of the convergence properties of the standard Itoh-Abe discrete gradient method for functions of the above form. However, we point out that for nonsmooth, nonconvex optimisation problems where Clarke stationarity is equivalent to Clarke directional stationarity along the standard coordinates, one can adapt Theorem 3.5 in a straightforward manner to prove that the iterates converge to a set of Clarke stationary points when the directions (d_k)_{k∈N} are drawn from the standard coordinates (e_i)_{i=1}^n. Furthermore, one could drop the requirement that V is locally Lipschitz continuous and replace ‖x‖₁ with ‖x‖_p^p, where p ∈ (0, 1) and ‖x‖_p^p = |x₁|^p + … + |xₙ|^p. This too is beyond the scope of this paper.
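A one-dimensional sanity check of this subdifferential structure (our own toy example, not from the paper; we take E(x) = (x − 1)²/2 and λ = 2, so ∂V(0) = E′(0) + λ[−1, 1]):

```python
lam = 2.0
E_prime = lambda x: x - 1.0     # derivative of E(x) = (x - 1)^2 / 2

def clarke_interval_at_zero(lam):
    # ∂V(0) = {E'(0)} + lam * [-1, 1], an interval in one dimension
    g = E_prime(0.0)
    return (g - lam, g + lam)

lo, hi = clarke_interval_at_zero(lam)
assert lo <= 0.0 <= hi          # 0 ∈ ∂V(0): x* = 0 is Clarke stationary

# equivalently, coordinate directional stationarity: V°(0; d) = E'(0) d + lam |d|
for d in (1.0, -1.0):
    assert E_prime(0.0) * d + lam * abs(d) >= 0
```

Since |E′(0)| = 1 ≤ λ, the point x* = 0 is Clarke stationary, and the coordinate-wise check agrees with the interval membership, illustrating why coordinate directions suffice for this separable class.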

Rotated Itoh-Abe discrete gradients
We briefly discuss a randomised Itoh-Abe method that retains the Itoh-Abe discrete gradient structure, by ensuring that the directions (d_{kn+1}, d_{kn+2}, …, d_{(k+1)n}) are orthonormal. Equivalently, we consider each block of n directions to be independently drawn from a random distribution on the set of orthogonal transformations on R^n, denoted by O(n).
Given Q ∈ O(n), one obtains a rotated Itoh-Abe discrete gradient by applying the Itoh-Abe discrete gradient in the rotated coordinate system defined by the columns of Q, i.e. ∇̄_Q V(x, y) = Q ∇̄W(Qᵀx, Qᵀy), where W(z) = V(Qz). It is straightforward to check that it is a discrete gradient, as defined for continuously differentiable functions V.
Proof. For any x, y ∈ R^n, x ≠ y,

〈∇̄_Q V(x, y), y − x〉 = 〈∇̄W(Qᵀx, Qᵀy), Qᵀy − Qᵀx〉 = W(Qᵀy) − W(Qᵀx) = V(y) − V(x),

and continuity and consistency carry over from the Itoh-Abe discrete gradient of W. Thus, we can implement schemes that are formally discrete gradient methods, and also fulfil the convergence theorem in Section 3.

Numerical implementation
We consider three ways of choosing (d k ) k∈N .
1. Standard Itoh-Abe method. The directions cycle through the standard coordinates, with the rule d_k = e_{[(k−1) mod n]+1}. Performing n steps of this method is equivalent to one step with the standard Itoh-Abe discrete gradient method.
2. Random pursuit. The directions are independently drawn from a random distribution Ξ on S^{n−1}. We assume that the support of the density of Ξ is dense in S^{n−1}.
3. Rotated Itoh-Abe method. For each k ∈ N, the block of n directions (d_{kn+1}, d_{kn+2}, …, d_{(k+1)n}) is drawn from a random distribution on O(n), the orthogonal group of dimension n. In other words, the directions form an orthonormal basis. This retains the discrete gradient structure of the standard Itoh-Abe discrete gradient method. We assume that each draw from O(n) is independent, and, for notational continuity, we denote by Ξ the marginal distribution of d_{kn+1} on S^{n−1}, and again assume that the support of the density is dense in S^{n−1}.
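The three direction rules can be sketched as follows (our own illustration, not the paper's code; the QR-based construction is a standard way to sample Haar-distributed orthogonal matrices).

```python
import numpy as np

rng = np.random.default_rng(0)

def standard_direction(n, k):
    # rule d_k = e_{[(k-1) mod n] + 1}, written with 0-based indexing
    d = np.zeros(n)
    d[(k - 1) % n] = 1.0
    return d

def random_pursuit_direction(n):
    # uniform on S^{n-1}: a normalised Gaussian vector
    g = rng.standard_normal(n)
    return g / np.linalg.norm(g)

def rotated_block(n):
    # Haar-distributed orthogonal matrix via QR of a Gaussian matrix,
    # with the usual sign correction; its columns give n orthonormal directions
    Q, R = np.linalg.qr(rng.standard_normal((n, n)))
    return Q * np.sign(np.diag(R))

D = rotated_block(5)
assert np.allclose(D.T @ D, np.eye(5), atol=1e-12)          # orthonormal block
assert abs(np.linalg.norm(random_pursuit_direction(5)) - 1) < 1e-12
assert standard_direction(4, 1)[0] == 1.0                   # k = 1 gives e_1
```

The sign correction (scaling each column of Q by the sign of the corresponding diagonal entry of R) removes the sign ambiguity of the QR factorisation, so that the resulting matrix is Haar distributed on O(n).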
We formalise an implementation of randomised Itoh-Abe methods with two algorithms, an inner and an outer one. Algorithm 3 is the inner algorithm and solves (1.6) for x_{k+1}, given x_k, d_k and time step bounds τ_min, τ_max. Algorithm 2 is the outer algorithm, which calls the inner algorithm for each iterate x_k, and provides a stopping rule for the methods. The stopping rule in Algorithm 2 takes two positive integers K and M as parameters, such that the algorithm stops either after K iterations, or when the iterates have not sufficiently decreased V in the last M iterations. We typically set M ≈ n, n being the dimension of the domain, unless the function V is expected to be highly irregular or nonsmooth, in which case we choose a larger M, as individual directions are then more likely to yield insufficient decrease. This stopping rule can be replaced by any other heuristic.
Algorithm 3 is a tailor-made scalar solver for (1.6) that balances the tradeoff between optimally decreasing V given the constraints τ_min, τ_max and using minimal function evaluations. Rather than solving for a given τ_k, it ensures that there exists τ_k ∈ [τ_min, τ_max] that matches the output x_{k+1}. It requires a preliminary τ ∈ [τ_min, τ_max], which we heuristically chose as τ = √(τ_min τ_max). This method is particularly suitable when τ_min ≪ τ_max, and can be replaced by any other scalar root finder algorithm.
The randomised Itoh-Abe methods have been implemented in Python and will be made available on GitHub upon acceptance of this manuscript.

Algorithm 2 Randomised Itoh-Abe method with solver and stopping criterion
Input: starting point x_0, directions (d_k)_{k∈N}, time step bounds (τ_min, τ_max), tolerance for function reduction η, maximal number of iterations K, maximal number of consecutive directions without descent before stopping M, internal solver described by Algorithm 3. Initialise: counter m = 0.

Algorithm 3 (solver loop, sketched): solve for β assuming linear extrapolation of V with the predicted τ (assume for simplicity β > ε); while the parabolic step has not decreased V, update the parabolic interpolation points x_i, i = 0, 1, 2; the output y = x_i is the optimal point from the parabolic interpolation step.

Numerical experiments

In this section, we use the randomised Itoh-Abe methods to solve several nonsmooth, nonconvex problems. In Section 6.1, we consider some well-known optimisation challenges developed by Rosenbrock and Nesterov. In Section 6.2, we solve bilevel optimisation of parameters in variational regularisation problems. We compare our method to the state-of-the-art derivative-free optimisation methods Py-BOBYQA [13,56] and the LT-MADS solver provided by NOMAD [2,41,40]. For the purpose of comparing results across solvers for these problems, we do not measure objective function value against iterates, but against the number of function evaluations.

We first consider the two-dimensional Rosenbrock function

V(x) = 100 (x_2 - x_1^2)^2 + (1 - x_1)^2. (6.1)

Its global minimiser [1, 1]^T is located in a narrow, curved valley, which is challenging for the iterates to navigate. We compare the three variants of the Itoh-Abe method, for which we set the algorithm parameters ε = 10^-5, τ_min = 10^-4, τ_max = 10^2, η = 10^-9, and M = 30. See Figure 6.1 for the numerical results. All three methods converge to the global minimiser, which demonstrates the robustness of the Itoh-Abe methods. Unsurprisingly, the random pursuit method and the rotated Itoh-Abe method, which descend in varying directions, perform significantly better than the standard Itoh-Abe method.
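For illustration, a self-contained random-pursuit sketch on this function follows. The time step, iteration count, and the use of a generic bracketing root finder in place of Algorithm 3 are illustrative choices, not the tuned implementation of this paper.

```python
import numpy as np
from scipy.optimize import brentq

def rosenbrock(x):
    # (6.1): global minimiser [1, 1]^T in a narrow, curved valley
    return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1.0 - x[0]) ** 2

def random_pursuit(V, x0, tau=10.0, iters=500, beta_max=5.0, seed=0):
    """Random-pursuit Itoh-Abe iteration (sketch): each step draws a
    uniform direction d on the unit sphere and solves the scalar
    discrete-gradient equation V(x + b*d) - V(x) = -b**2 / tau."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    history = [V(x)]
    for _ in range(iters):
        d = rng.standard_normal(x.size)
        d /= np.linalg.norm(d)
        Vx = V(x)
        g = lambda b: V(x + b * d) - Vx + b ** 2 / tau
        for s in (1.0, -1.0):
            lo, hi = s * 1e-9, s * beta_max
            if g(lo) * g(hi) < 0:          # sign change brackets a root
                b = brentq(g, min(lo, hi), max(lo, hi))
                if V(x + b * d) < Vx:      # monotone decrease by b**2/tau
                    x = x + b * d
                    break
        history.append(V(x))
    return x, history
```

The recorded history is non-increasing by construction, reflecting the energy dissipation property of the method.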
We also compare the three Itoh-Abe methods on Nesterov's nonsmooth Chebyshev-Rosenbrock function, setting the algorithm parameters ε = 10^-10, τ_min = 10^-4, τ_max = 10^2, η = 10^-16, and M = 100. See Figure 6.2 for the results. As can be seen, the standard Itoh-Abe discrete gradient method is not suitable for the irregular paths and nonsmooth kinks of the objective function, and stagnates early on. The two randomised Itoh-Abe methods perform better, as they descend in varying directions. For the remaining 2D problems in this paper, we consider the rotated Itoh-Abe method, although we could just as well have used the random pursuit method. For higher-dimensional problems, we recommend the random pursuit method.
We also compare the performance of the randomised Itoh-Abe (RIA) method to Py-BOBYQA and LT-MADS for Nesterov's nonsmooth Chebyshev-Rosenbrock function. We set the parameters of the Itoh-Abe method to ε = 10^-10, τ_min = 10^-4, τ_max = 10^2, η = 10^-16, and M = 100, the parameters of Py-BOBYQA to rhobeg = 2, rhoend = 10^-16 and npt = (n+1)(n+2)/2, and the parameters of LT-MADS to DIRECTION_TYPE = LT 2N and MIN_MESH_SIZE = 10^-13. See Figures 6.3 and 6.4 for the numerical results for two different starting points. In the first case, the Itoh-Abe method successfully converges to the global minimiser, the LT-MADS method locates the nonminimising stationary point at [0, -1]^T, and the Py-BOBYQA iterates stagnate at a kink, reflecting the fact that the method is not designed for nonsmooth functions. In the second case, both the Itoh-Abe method and LT-MADS locate the minimiser, while the Py-BOBYQA iterates again stagnate at a kink.

Bilevel parameter learning in image analysis
In this subsection, we consider the Itoh-Abe method for solving bilevel optimisation problems that learn the parameters of variational imaging models. We restrict our focus to denoising problems, although the same method could be applied to any inverse problem. We first consider one-dimensional bilevel problems with wavelet and TV denoising, and then two-dimensional problems with TGV denoising. In the TGV case, we compare the randomised Itoh-Abe method to the Py-BOBYQA and LT-MADS methods. Throughout this section, we set M = n, where n = 1, 2.

Setup for variational regularisation problem
Consider an image u† ∈ L²(Ω), for some domain Ω ⊂ R², and a noisy image f^δ = u† + η, where η denotes noise. To recover a clean image from the noisy one, we consider a parametrised family of regularisers R_α, and solve the variational regularisation problem

u_α = arg min_{u ∈ L²(Ω)} (1/2) ‖u - f^δ‖² + R_α(u). (6.3)

The first term in (6.3), the data fidelity term, ensures that the reconstruction approximates f^δ. The regulariser term serves to denoise the reconstruction, by promoting favourable features such as smooth regions and sharp edges. The parameters α determine how heavily to regularise, and sometimes adjust other features of the regulariser. See [6,33,63] for an overview of variational regularisation methods.
We list some common regularisers in image analysis. Total variation (TV) [10,61] is given by the function R_α(u) := α TV(u), where α ∈ [0, ∞) and TV(u) := |Du|(Ω) is the total variation of u, equal to ∫_Ω |∇u| dx for smooth u. This is one of the most common regularisers for image denoising; see Figure 6.5 for an example of denoising with TV regularisation. We also consider its second-order generalisation, total generalised variation (TGV) [9,8], defined for parameters α = (α_1, α_0) by TGV_α(u) := min_w α_1 |Du - w|(Ω) + α_0 |Ew|(Ω), where E denotes the symmetrised gradient. For a linear operator W on L²(Ω), the basis pursuit regulariser R_α(u) := α ‖W u‖_1 promotes sparsity of the image u in the dictionary of W.

As illustrated in Figure 6.5, the quality of the reconstruction is sensitive to α. If α is too low, the reconstruction remains too noisy, while if α is too high, too much detail is removed. As it is generally not possible to ascertain the optimal choice of α a priori, a significant amount of time and effort is spent on parameter tuning, and it is therefore of interest to improve our understanding of optimal parameter choices. One approach is to learn suitable parameters from training data. This requires a desired reconstruction u†, noisy data f^δ, and a scoring function Φ : L²(Ω) → R that measures the error between u† and the reconstruction u_α. The bilevel optimisation problem is given by

min_{α ∈ [0,∞)^n} Φ(u_α) s.t. u_α solves (6.3). (6.4)

In our case, strong convexity of the data fidelity term implies that u_α is unique for each α ∈ [0, ∞)^n. We can therefore define the mapping

V(α) := Φ(u_α). (6.5)

The bilevel problem (6.4) is difficult to tackle, both analytically and numerically. In most cases, the lower-level problem (6.3) does not have a closed-form solution; instead, a reconstruction u_α is approximated numerically with an algorithm. Therefore, one typically does not have access to gradient or subgradient information for the mapping α → u_α. Furthermore, the bilevel mapping α → Φ(u_α) is often nonsmooth and nonconvex. Numerically solving (6.4) therefore amounts to minimising a nonsmooth, nonconvex function in a black-box setting.
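To make the black-box structure of V(α) = Φ(u_α) concrete, the following sketch substitutes a simple quadratic (Tikhonov) regulariser for TV or TGV, so that the one-dimensional lower-level problem has a closed-form solution. All function names are illustrative, and the regulariser choice is an assumption made for the sake of a self-contained example.

```python
import numpy as np

def denoise(f, alpha):
    """Lower-level problem (6.3) with a quadratic stand-in regulariser
    R_alpha(u) = (alpha/2) * ||Du||^2, whose 1D minimiser has the
    closed form u = (I + alpha * D^T D)^{-1} f."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)   # forward-difference operator
    return np.linalg.solve(np.eye(n) + alpha * D.T @ D, f)

def bilevel_objective(alpha, f_delta, u_true):
    """Upper-level map V(alpha) = Phi(u_alpha) of (6.4)-(6.5), with the
    squared L2 error as the scoring function Phi."""
    u_alpha = denoise(f_delta, alpha)
    return 0.5 * np.sum((u_alpha - u_true) ** 2)
```

The map `alpha -> bilevel_objective(alpha, ...)` is then a scalar black box that can be handed to any of the randomised Itoh-Abe variants.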
We now consider the application of the Itoh-Abe method to these problems.
For the numerical experiments in this paper, we reparametrise V(α) as V(exp(α)), where the exponential operator is applied elementwise on the parameters. There are two reasons for doing so. The first reason is that this paper is concerned with unconstrained optimisation, and this parametrisation allows us to optimise on R^n instead of [0, ∞)^n. The second reason is that exp(α) has been found to be a preferable scaling for purposes of numerical optimisation.
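The reparametrisation amounts to a one-line wrapper, sketched here for illustration:

```python
import numpy as np

def reparametrise(V):
    """Turn a map V on [0, inf)^n into an unconstrained map on R^n:
    W(a) := V(exp(a)), with exp applied elementwise."""
    return lambda a: V(np.exp(np.asarray(a)))
```

The unconstrained map `W = reparametrise(V)` is then minimised over R^n, and the learned parameters are recovered as exp(a).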

Wavelet denoising
We consider the wavelet denoising problem

u_α = arg min_{u ∈ L²(Ω)} (1/2) ‖u - f^δ‖² + α ‖W u‖_1,

where W is a wavelet transform. In particular, W is an orthonormal basis, which implies that the regularisation problem has the unique solution

u_α = W* T_α(W f^δ),

where T_α is the shrinkage operator defined elementwise by T_α(x)_i := sign(x_i) max{|x_i| - α, 0}. We first optimise α for the scoring function Φ(u) := (1/2) ‖u - u†‖², the squared L² distance. We set the parameters of the Itoh-Abe method to ε = 10^-4, τ_min = 10^-1, τ_max = 10, and η = 10^-1. See Figure 6.6 for the numerical results.

We also optimise α with respect to the scoring function Φ(u) := 1 - SSIM(u, u†), where SSIM is the structural similarity function [64]

SSIM(u, v) := (2μ_u μ_v + c)(2σ_uv + C) / ((μ_u² + μ_v² + c)(σ_u² + σ_v² + C)).

Here μ_u is the mean intensity of u, σ_u is the unbiased estimate of the standard deviation of u, and σ_uv is the correlation coefficient between u and v. We set the parameters of the Itoh-Abe method to ε = 10^-4, τ_min = 10^-3, τ_max = 10^3, and η = 10^-2. See Figure 6.7 for the numerical results.
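The closed-form shrinkage solution can be sketched as follows. Here an orthonormal DCT stands in for the wavelet transform W; this substitution, and the function names, are assumptions made so the example stays self-contained, but any orthonormal transform yields the same structure.

```python
import numpy as np
from scipy.fft import dct, idct

def soft_threshold(x, alpha):
    """Shrinkage operator T_alpha, applied elementwise."""
    return np.sign(x) * np.maximum(np.abs(x) - alpha, 0.0)

def denoise_orthonormal(f, alpha):
    """Closed-form solution u_alpha = W* T_alpha(W f) of the
    l1-regularised problem for an orthonormal transform W
    (an orthonormal DCT stands in for the wavelet transform)."""
    c = dct(f, norm="ortho")                 # analysis: W f
    return idct(soft_threshold(c, alpha), norm="ortho")  # synthesis: W* T_alpha(...)
```

With this closed form, each evaluation of the bilevel objective α → Φ(u_α) costs only two transforms and one thresholding, so derivative-free optimisation over α is cheap.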
We compare these results to those of the Py-BOBYQA and LT-MADS solvers. We set the parameters of Py-BOBYQA to rhobeg = 2, rhoend = 10^-10 and npt = 2(n+1), and the parameters of LT-MADS to DIRECTION_TYPE = LT 2N. See Figures 6.10 and 6.11 for the results for two different starting points. We note that the objective function is approximately stationary across a range of values, which leads to different points of convergence, and different limiting values of the objective function, for the different methods. The methods are of comparable efficiency overall; the Itoh-Abe method is slower initially, but appears to be the most efficient once it is within a neighbourhood of the minimiser.

Conclusion
In this paper, we have shown that the randomised Itoh-Abe methods are efficient and robust schemes for solving unconstrained nonsmooth, nonconvex problems without the use of gradients or subgradients. Furthermore, the favourable rates of dissipativity that the discrete gradient method inherits from the gradient flow system extend to the nonsmooth case. We show, under minimal assumptions on the objective function, that the methods admit a solution that is computationally tractable, and that the iterates converge to a connected set of Clarke stationary points. Through examples, the assumptions are also shown to be necessary. The methods are robust and versatile optimisation schemes: they locate the global minimisers of the Rosenbrock function and of a variant of Nesterov's nonsmooth Chebyshev-Rosenbrock functions. The efficiency of the Itoh-Abe discrete gradient method for smooth problems has already been demonstrated elsewhere [28,48,59]. We also considered its application to bilevel learning problems and compared its performance to the derivative-free Py-BOBYQA and LT-MADS methods. Future work will be dedicated to adapting the randomised Itoh-Abe methods for constrained optimisation problems, establishing convergence of the iterates for Kurdyka-Łojasiewicz functions [1], and analysing the Lipschitz continuity properties of bilevel optimisation for variational regularisation problems.