Subgradient ellipsoid method for nonsmooth convex problems

In this paper, we present a new ellipsoid-type algorithm for solving nonsmooth problems with convex structure. Examples of such problems include nonsmooth convex minimization problems, convex-concave saddle-point problems and variational inequalities with monotone operators. Our algorithm can be seen as a combination of the standard Subgradient and Ellipsoid methods. However, in contrast to the latter, the proposed method has a reasonable convergence rate even when the dimensionality of the problem is large. For generating accuracy certificates in our algorithm, we propose an efficient technique which improves upon the previously known recipes (Nemirovski in Math Oper Res 35(1):52-78, 2010).


Introduction
The Ellipsoid Method is a classical algorithm in Convex Optimization. It was proposed in 1976 by Yudin and Nemirovski [23] as the modified method of centered cross-sections and then independently rediscovered a year later by Shor [21] in the form of the subgradient method with space dilation. However, the Ellipsoid Method gained popularity only after Khachiyan used it in 1979 to prove his famous result on the polynomial solvability of Linear Programming [10]. Shortly after, several polynomial algorithms based on the Ellipsoid Method were developed for some combinatorial optimization problems [9]. For more details and historical remarks on the Ellipsoid Method, see [2,3,14].
Despite its long history, the Ellipsoid Method still has some issues which have not been fully resolved or have been resolved only recently. One of them is the computation of accuracy certificates which is important for generating approximate solutions to dual problems or for solving general problems with convex structure (saddle-point problems, variational inequalities, etc.). For a long time, the procedure for calculating an accuracy certificate in the Ellipsoid Method required solving an auxiliary piecewise linear optimization problem (see, e.g., sect. 5 and 6 in [14]). Although this auxiliary computation did not use any additional calls to the oracle, it was still computationally expensive and, in some cases, could take even more time than the Ellipsoid Method itself. Only recently an efficient alternative has been proposed [16].
Another issue with the Ellipsoid Method is its poor dependency on the dimensionality of the problem. Consider, e.g., the minimization problem

min_{x ∈ Q} f(x),  (1)

where f : R^n → R is a convex function and Q := {x ∈ R^n : ‖x‖ ≤ R} is the Euclidean ball of radius R > 0. The Ellipsoid Method for solving (1) can be written as follows (see, e.g., sect. 3.2.8 in [19]):

x_{k+1} := x_k − (1/(n+1)) W_k g_k / ⟨g_k, W_k g_k⟩^{1/2},
W_{k+1} := (n²/(n²−1)) (W_k − (2/(n+1)) W_k g_k g_k^T W_k / ⟨g_k, W_k g_k⟩),  (2)

where x_0 := 0, W_0 := R² I (I is the identity matrix), g_k := f′(x_k) is an arbitrary nonzero subgradient if x_k ∈ Q, and g_k is an arbitrary separator of x_k from Q otherwise. To solve problem (1) with accuracy ε > 0 (in terms of the function value), the Ellipsoid Method needs

O(n² ln(MR/ε))  (3)

iterations, where M > 0 is the Lipschitz constant of f on Q (see theorem 3.2.11 in [19]). Looking at this estimate, we can see an immediate drawback: it directly depends on the dimension and becomes useless when n → ∞. In particular, we cannot guarantee any reasonable rate of convergence for the Ellipsoid Method when the dimensionality of the problem is sufficiently big. Note that the aforementioned drawback is an artifact of the method itself, not its analysis. Indeed, when n → ∞, iteration (2) reads x_{k+1} := x_k, W_{k+1} := W_k. Thus, the method stays at the same point and does not make any progress.
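Assuming the standard form of the update from [19], the iteration can be sketched numerically as follows; the ℓ1-objective, its subgradient, and all constants below are illustrative choices, not taken from the paper:

```python
import numpy as np

def ellipsoid_step(x, W, g, n):
    # One step of the standard Ellipsoid Method:
    #   x+ = x - 1/(n+1) * W g / <g, W g>^{1/2}
    #   W+ = n^2/(n^2-1) * (W - 2/(n+1) * (W g)(W g)^T / <g, W g>)
    Wg = W @ g
    gWg = float(g @ Wg)
    x_new = x - Wg / ((n + 1) * np.sqrt(gWg))
    W_new = n**2 / (n**2 - 1.0) * (W - 2.0 / (n + 1) * np.outer(Wg, Wg) / gWg)
    return x_new, W_new

# Illustrative run: minimize f(x) = ||x - c||_1 over the ball of radius R.
n, R = 5, 2.0
c = 0.5 * np.ones(n)                      # minimizer of the illustrative objective
x, W = np.zeros(n), R**2 * np.eye(n)
best = np.inf
for _ in range(300):
    if np.linalg.norm(x) > R:             # x infeasible: separator for the ball
        g = x
    else:                                 # x feasible: subgradient of ||x - c||_1
        best = min(best, np.abs(x - c).sum())
        g = np.sign(x - c)
    x, W = ellipsoid_step(x, W, g, n)
```

The run above illustrates the dimension-dependent geometric shrinkage of the ellipsoids: the best recorded value approaches f* = 0 at a rate governed by exp(−k/(2n²)).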
On the other hand, the simplest Subgradient Method for solving (1) possesses the "dimension-independent" O(M²R²/ε²) iteration complexity bound (see, e.g., sect. 3.2.3 in [19]). Comparing this estimate with (3), we see that the Ellipsoid Method is significantly faster than the Subgradient Method only when n is not too big compared to MR/ε, and significantly slower otherwise. Clearly, this situation is strange because the former algorithm does much more work at every iteration by "improving" the "metric" W_k which is used for measuring the norm of the subgradients.
In this paper, we propose a new ellipsoid-type algorithm for solving nonsmooth problems with convex structure, which does not have the drawback discussed above. Our algorithm can be seen as a combination of the Subgradient and Ellipsoid methods, and its convergence rate is basically as good as the better of the corresponding rates of these two methods (up to some logarithmic factors). In particular, when n → ∞, the convergence rate of our algorithm coincides with that of the Subgradient Method.

Contents
This paper is organized as follows. In Sect. 2.1, we review the general formulation of a problem with convex structure and the associated notions of accuracy certificate and residual. Our presentation mostly follows [16], with examples taken from [18]. Then, in Sect. 2.2, we introduce the notions of accuracy semicertificate and gap and discuss their relation to those of accuracy certificate and residual.
In Sect. 3, we present the general algorithmic scheme of our methods. To measure the convergence rate of this scheme, we introduce the notion of sliding gap and establish some preliminary bounds on it.
In Sect. 4, we discuss different choices of parameters in our general scheme. First, we show that, by setting some of the parameters to zero, we obtain the standard Subgradient and Ellipsoid methods. Then we consider a couple of other, less trivial, choices which lead to two new algorithms. The principal one among these new algorithms is the latter, which we call the Subgradient Ellipsoid Method. We demonstrate that the convergence rate of this algorithm is basically as good as the better of those of the Subgradient and Ellipsoid methods.
In Sect. 5, we show that, for both our new methods, it is possible to efficiently generate accuracy semicertificates whose gap is upper bounded by the sliding gap.
We also compare our approach with the recently proposed technique from [16] for building accuracy certificates for the standard Ellipsoid Method.
In Sect. 6, we discuss how to efficiently implement our general scheme and the procedure for generating accuracy semicertificates. In particular, we show that the time and memory requirements of our scheme are the same as in the standard Ellipsoid Method.
Finally, in Sect. 7, we discuss some open questions.

Notation and generalities
In this paper, E denotes an arbitrary n-dimensional real vector space, and E* denotes its dual space of linear functionals on E, with ⟨s, x⟩ the value of s ∈ E* at x ∈ E. We fix a self-adjoint positive definite linear operator B : E → E* and define the Euclidean norms ‖x‖ := ⟨Bx, x⟩^{1/2} for x ∈ E and ‖s‖_* := ⟨s, B^{-1}s⟩^{1/2} for s ∈ E*.
Note that, for any s ∈ E* and x ∈ E, we have the Cauchy-Schwarz inequality ⟨s, x⟩ ≤ ‖s‖_* ‖x‖, which becomes an equality if and only if s and Bx are collinear. In addition to ‖·‖ and ‖·‖_*, we often work with other Euclidean norms defined in the same way but using another reference operator instead of B. In this case, we write ‖·‖_G and ‖·‖_G^*, where G : E → E* is the corresponding self-adjoint positive definite linear operator.
Sometimes, in formulas involving products of linear operators, it is convenient to treat x ∈ E as a linear operator from R to E, defined by xα := αx, and x* as a linear operator from E* to R, defined by x*s := ⟨s, x⟩. Likewise, any s ∈ E* can be treated as a linear operator from R to E*, defined by sα := αs, and s* as a linear operator from E to R, defined by s*x := ⟨s, x⟩. Then, xx* and ss* are rank-one self-adjoint linear operators from E* to E and from E to E*, respectively, acting as follows: (xx*)s = ⟨s, x⟩x and (ss*)x = ⟨s, x⟩s. For a self-adjoint linear operator G : E → E*, by tr G and det G we denote the trace and determinant of G with respect to our fixed operator B: tr G := tr(B^{-1}G) and det G := det(B^{-1}G). Note that, in these definitions, B^{-1}G is a linear operator from E to E, so tr(B^{-1}G) and det(B^{-1}G) are the standard well-defined notions of trace and determinant of a linear operator acting on the same space. For example, they can be defined as the trace and determinant of the matrix representation of B^{-1}G with respect to an arbitrarily chosen basis in E (the result is independent of the particular choice of basis). Alternatively, tr G and det G can be equivalently defined as the sum and product, respectively, of the eigenvalues of G with respect to B.
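These B-relative notions are straightforward to compute numerically. The following sketch (with randomly generated illustrative matrices, not from the paper) checks that tr G and det G, as defined above, coincide with the sum and product of the eigenvalues of G with respect to B:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Self-adjoint positive definite operators B and G, as symmetric matrices.
A = rng.standard_normal((n, n)); B = A @ A.T + n * np.eye(n)   # reference operator
C = rng.standard_normal((n, n)); G = C @ C.T + n * np.eye(n)   # operator of interest

M = np.linalg.solve(B, G)     # matrix of B^{-1} G in the standard basis
tr_G = np.trace(M)            # tr G  := tr(B^{-1} G)
det_G = np.linalg.det(M)      # det G := det(B^{-1} G)
lam = np.linalg.eigvals(M)    # eigenvalues of G with respect to B (all real, > 0)
```

Since B^{-1}G is similar to the symmetric matrix B^{-1/2} G B^{-1/2}, its eigenvalues are real and positive, so the sum/product identities hold.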
For a point x ∈ E and a real r > 0, by B(x, r) := {y ∈ E : ‖y − x‖ ≤ r} we denote the closed Euclidean ball with center x and radius r. Given two solids Q, Q_0 ⊆ E (a solid is a bounded closed convex set with nonempty interior), we can define the relative volume of Q with respect to Q_0 by vol(Q/Q_0) := vol Q^e / vol Q_0^e, where e is an arbitrary basis in E, Q^e, Q_0^e ⊆ R^n are the coordinate representations of the sets Q, Q_0 in the basis e, and vol is the Lebesgue measure in R^n. Note that the relative volume is independent of the particular choice of the basis e. Indeed, for any other basis f, the coordinate representations differ by the linear change of variables given by the transition matrix T between the bases, so vol Q^f = |det T|^{-1} vol Q^e and vol Q_0^f = |det T|^{-1} vol Q_0^e, and the common factor cancels in the ratio. For us, it will be convenient to define the volume of a solid Q ⊆ E as its relative volume with respect to the unit ball: vol Q := vol(Q/B(0, 1)).
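The basis independence of the relative volume can be checked numerically on sets whose Lebesgue volume is available in closed form. The sketch below uses parallelotopes (an illustrative choice, not from the paper) and an arbitrary transition matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
# Two "solids": parallelotopes Q = A*[0,1]^n and Q0 = A0*[0,1]^n, whose
# Lebesgue volumes in coordinates are |det A| and |det A0|.
A  = rng.standard_normal((n, n)) + 3 * np.eye(n)
A0 = rng.standard_normal((n, n)) + 3 * np.eye(n)

def rel_vol(T):
    # In a basis with transition matrix T, coordinates transform by T^{-1},
    # so each volume is scaled by |det T|^{-1} and the ratio is unchanged.
    Ti = np.linalg.inv(T)
    return abs(np.linalg.det(Ti @ A)) / abs(np.linalg.det(Ti @ A0))

T1 = np.eye(n)                                     # original basis
T2 = rng.standard_normal((n, n)) + 3 * np.eye(n)   # another basis
```

Evaluating `rel_vol` at the two bases gives the same number, as claimed.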

Description and examples
In this paper, we consider numerical algorithms for solving problems with convex structure. The main examples of such problems are convex minimization problems, convex-concave saddle-point problems, convex Nash equilibrium problems, and variational inequalities with monotone operators.
The general formulation of a problem with convex structure involves two objects: -Solid Q ⊆ E (called the feasible set), represented by the Separation Oracle: given any point x ∈ E, this oracle can check whether x ∈ int Q, and if not, it reports a vector g_Q(x) ∈ E* \ {0} which separates x from Q:

⟨g_Q(x), x − y⟩ ≥ 0 for all y ∈ Q.  (4)

-Vector field g : int Q → E*, represented by the First-Order Oracle: given any point x ∈ int Q, this oracle returns the vector g(x).
In what follows, we only consider problems satisfying the following condition: there exists a point x* ∈ Q such that

⟨g(x), x − x*⟩ ≥ 0 for all x ∈ int Q.  (5)

Remark 1 A careful reader may note that the notation x* overlaps with our general notation for the linear operator generated by a point x (see Sect. 1). However, there should be no risk of confusion since the precise meaning of x* can usually be easily inferred from the context.
A numerical algorithm for solving a problem with convex structure starts at some point x 0 ∈ E. At each step k ≥ 0, it queries the oracles at the current test point x k to obtain the new information about the problem, and then somehow uses this new information to form the next test point x k+1 . Depending on whether x k ∈ int Q, the kth step of the algorithm is called productive or nonproductive.
The total information, obtained by the algorithm from the oracles after k ≥ 1 steps, comprises its execution protocol, which consists of: -The test points x_0, ..., x_{k−1} ∈ E; -The vectors g_0, ..., g_{k−1} ∈ E* reported by the oracles. Let I_k := {0 ≤ i ≤ k−1 : x_i ∈ int Q} denote the set of productive steps. An accuracy certificate, associated with the above execution protocol, is a nonnegative vector λ := (λ_0, ..., λ_{k−1}) such that S_k(λ) := Σ_{i∈I_k} λ_i > 0 (and, in particular, I_k ≠ ∅). Given any solid Ω containing Q, we can define the following residual of λ on Ω:

ε_k(λ) := max_{x∈Ω} (1/S_k(λ)) Σ_{i=0}^{k−1} λ_i ⟨g_i, x_i − x⟩,  (7)

which is easily computable whenever Ω is a simple set (e.g., a Euclidean ball). Note that the maximum in (7) is lower bounded by the value at x := x* and, in particular, ε_k(λ) ≥ 0 in view of (4) and (5).
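When Ω is a Euclidean ball, the maximization in the residual has a closed form: the maximum of the linear function over B(x_0, R) is attained at x = x_0 − R c/‖c‖, where c is the aggregated vector. A minimal sketch (function name and test data are illustrative):

```python
import numpy as np

def residual_on_ball(lams, gs, xs, productive, x0, R):
    # eps_k(lam) = max_{x in B(x0, R)} (1/S_k) * sum_i lam_i <g_i, x_i - x>,
    # where S_k sums lam_i over productive steps; the maximum over the ball
    # is attained at x = x0 - R * c / ||c|| with c = sum_i lam_i g_i.
    S = sum(l for l, p in zip(lams, productive) if p)
    assert S > 0, "not an accuracy certificate"
    c = sum(l * g for l, g in zip(lams, gs))
    lin = sum(l * (g @ (x - x0)) for l, g, x in zip(lams, gs, xs))
    return (lin + R * np.linalg.norm(c)) / S

# Illustrative data: productive steps for f(x) = ||x||^2 on the unit ball.
x0, R = np.zeros(2), 1.0
xs = [np.array([1.0, 0.0]), np.array([0.0, 0.5])]
gs = [2.0 * x for x in xs]          # subgradients of f at the test points
eps = residual_on_ball([1.0, 1.0], gs, xs, [True, True], x0, R)
```

With this data the residual is nonnegative, in line with the remark above.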
In what follows, we will be interested in algorithms that can produce accuracy certificates λ^{(k)} with ε_k(λ^{(k)}) → 0 at a certain rate. This is a meaningful goal because, for all known instances of problems with convex structure, the residual ε_k(λ) upper bounds a certain natural inaccuracy measure for the corresponding problem. Let us briefly review some standard examples (for more examples, see [16,18] and the references therein).

Example 1 (Convex minimization problem)
Consider the problem

min_{x∈Q} f(x),  (8)

where Q ⊆ E is a solid and f : E → R ∪ {+∞} is a closed convex function which is finite on int Q.
In this example, the First-Order Oracle is defined by g(x) := f′(x), where f′(x) is an arbitrary subgradient of f at x. Clearly, (5) holds for x* being any solution of (8).
One can verify that, in this example, the residual ε_k(λ) upper bounds the functional residual: f(x̄_k) − f* ≤ ε_k(λ) for x̄_k := (1/S_k(λ)) Σ_{i∈I_k} λ_i x_i. Moreover, ε_k(λ), in fact, upper bounds the primal-dual gap for a certain dual problem for (8). Indeed, let f* : E* → R ∪ {+∞} be the conjugate function of f. Then, we can represent (8) in the following dual form:

max_{s∈dom f*} [−f*(s) − ξ_Q(−s)],  (9)

where dom f* := {s ∈ E* : f*(s) < +∞} and ξ_Q(−s) := max_{x∈Q} ⟨−s, x⟩. Denote s_k := (1/S_k(λ)) Σ_{i∈I_k} λ_i g_i. Then, using (7) and the convexity of f and f*, we obtain f(x̄_k) + f*(s_k) + ξ_Q(−s_k) ≤ ε_k(λ). Thus, x̄_k and s_k are ε_k(λ)-approximate solutions (in terms of the function value) to problems (8) and (9), respectively. Note that the same is true if we replace x̄_k with the best test point x_k* := argmin{f(x_i) : i ∈ I_k}.
Example 2 (Convex-concave saddle-point problem) Consider the following problem:

min_{u∈U} max_{v∈V} f(u, v),  (10)

where U, V are solids in some finite-dimensional vector spaces E_u, E_v, respectively, and f : U × V → R is a continuous function which is convex-concave, i.e., f(·, v) is convex for any v ∈ V and f(u, ·) is concave for any u ∈ U.
In this example, we set E := E_u × E_v, Q := U × V and use the First-Order Oracle g(x) := (f′_u(x), −f′_v(x)) for x := (u, v) ∈ int Q, where f′_u(x) is an arbitrary subgradient of f(·, v) at u and f′_v(x) is an arbitrary supergradient of f(u, ·) at v. Then, for any x := (u, v) ∈ int Q and any x′ := (u′, v′) ∈ Q, we have

⟨g(x), x − x′⟩ ≥ f(u, v′) − f(u′, v).  (11)

In particular, (5) holds for x* := (u*, v*) in view of (10), since f(u, v*) − f(u*, v) ≥ 0 for any solution (u*, v*) of (10).
Let φ : U → R and ψ : V → R be the functions φ(u) := max_{v∈V} f(u, v) and ψ(v) := min_{u∈U} f(u, v). In view of (10), we have ψ(v) ≤ f(u*, v*) ≤ φ(u) for all u ∈ U and v ∈ V. Hence, the primal-dual gap φ(u) − ψ(v) ≥ 0 can be used for measuring the quality of an approximate solution x := (u, v) ∈ Q to problem (10).
Denoting x̄_k := (1/S_k(λ)) Σ_{i∈I_k} λ_i x_i =: (û_k, v̂_k) and using (7), together with (11) and the convexity-concavity of f, we obtain φ(û_k) − ψ(v̂_k) ≤ ε_k(λ). Thus, the residual ε_k(λ) upper bounds the primal-dual gap for the approximate solution x̄_k.

Example 3 (Variational inequality with monotone operator)
Let Q ⊆ E be a solid and let V : Q → E* be a continuous operator which is monotone, i.e., ⟨V(x) − V(y), x − y⟩ ≥ 0 for all x, y ∈ Q. The goal is to solve the following (weak) variational inequality:

find x* ∈ Q such that ⟨V(x), x − x*⟩ ≥ 0 for all x ∈ Q.  (12)

Since V is continuous, this problem is equivalent to its strong variant: find x* ∈ Q such that ⟨V(x*), x − x*⟩ ≥ 0 for all x ∈ Q. A standard tool for measuring the quality of an approximate solution to (12) is the dual gap function, introduced in [1]:

f(x) := max_{y∈Q} ⟨V(y), x − y⟩,  x ∈ Q.

It is easy to see that f is a convex nonnegative function which equals 0 exactly at the solutions of (12). In this example, the First-Order Oracle is defined by g(x) := V(x) for any x ∈ int Q. Denote x̄_k := (1/S_k(λ)) Σ_{i∈I_k} λ_i x_i. Then, using (7) and the monotonicity of V, we obtain f(x̄_k) ≤ ε_k(λ). Thus, ε_k(λ) upper bounds the dual gap function for the approximate solution x̄_k.
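For a concrete illustration (not from the paper), take the skew-symmetric, hence monotone, operator V(x) := Ax on the unit ball; the dual gap function can then be estimated by sampling the inner maximization:

```python
import numpy as np

# Monotone (skew-symmetric) operator V(x) = A x on the unit ball Q.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

def dual_gap(x, samples=4000, seed=0):
    # f(x) = max_{y in Q} <V(y), x - y>, estimated over random y in the ball.
    rng = np.random.default_rng(seed)
    y = rng.standard_normal((samples, 2))
    y /= np.linalg.norm(y, axis=1, keepdims=True)        # random directions
    y *= rng.uniform(0, 1, size=(samples, 1)) ** 0.5     # uniform radii in 2D
    vals = np.einsum('ij,ij->i', y @ A.T, x - y)
    return vals.max()
```

Here x* = 0 solves the variational inequality, and indeed the estimated dual gap vanishes at 0 (since ⟨Ay, y⟩ = 0 for skew-symmetric A) while it is positive elsewhere.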

Establishing convergence of residual
For the algorithms considered in this paper, instead of accuracy certificates and residuals, it turns out to be more convenient to speak about the closely related notions of accuracy semicertificates and gaps, which we now introduce.
As before, let x_0, ..., x_{k−1} be the test points generated by the algorithm after k ≥ 1 steps, and let g_0, ..., g_{k−1} be the corresponding oracle outputs. An accuracy semicertificate, associated with this information, is a nonnegative vector λ := (λ_0, ..., λ_{k−1}) such that Γ_k(λ) := Σ_{i=0}^{k−1} λ_i ‖g_i‖_* > 0. Given any solid Ω containing Q, the gap of λ on Ω is defined in the following way:

δ_k(λ) := max_{x∈Ω} (1/Γ_k(λ)) Σ_{i=0}^{k−1} λ_i ⟨g_i, x_i − x⟩.  (13)

Comparing these definitions with those of accuracy certificate and residual, we see that the only difference between them is that now we use a different "normalizing" coefficient: Γ_k(λ) instead of S_k(λ). Also, in the definitions of semicertificate and gap, we do not make any distinction between productive and nonproductive steps. Note that δ_k(λ) ≥ 0. Let us demonstrate that, by making the gap sufficiently small, we can make the corresponding residual sufficiently small as well. For this, we need the following standard assumption about our problem with convex structure (see, e.g., [16]).
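For Ω a Euclidean ball, the gap admits the same closed-form maximization as the residual, with the normalizing coefficient Γ_k(λ) in place of S_k(λ) and no productive/nonproductive distinction. A minimal sketch (function name and data illustrative):

```python
import numpy as np

def gap_on_ball(lams, gs, xs, x0, R):
    # delta_k(lam) = max_{x in B(x0, R)} (1/Gamma_k) sum_i lam_i <g_i, x_i - x>,
    # with Gamma_k = sum_i lam_i * ||g_i||.
    Gamma = sum(l * np.linalg.norm(g) for l, g in zip(lams, gs))
    assert Gamma > 0, "not a semicertificate"
    c = sum(l * g for l, g in zip(lams, gs))
    lin = sum(l * (g @ (x - x0)) for l, g, x in zip(lams, gs, xs))
    return (lin + R * np.linalg.norm(c)) / Gamma

# Same illustrative data as before: subgradients of ||x||^2 on the unit ball.
x0, R = np.zeros(2), 1.0
xs = [np.array([1.0, 0.0]), np.array([0.0, 0.5])]
gs = [2.0 * x for x in xs]
d = gap_on_ball([1.0, 1.0], gs, xs, x0, R)
```

Note that only the normalizer changes relative to the residual computation; the maximization over the ball is identical.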

Assumption 1
The vector field g, reported by the First-Order Oracle, is semibounded:

⟨g(x), y − x⟩ ≤ V for all x ∈ int Q and all y ∈ Q,  (14)

where V > 0 is a certain constant. Another interesting example is the subgradient field g of a ν-self-concordant barrier f : E → R ∪ {+∞} for the set Q; in this case, g is semibounded with V := ν (see, e.g., [19, Theorem 5.3.7]), while f(x) → +∞ at the boundary of Q.
Lemma 1 Let λ be a semicertificate such that δ_k(λ) < r, where r is the largest of the radii of Euclidean balls contained in Q. Then, λ is a certificate and

ε_k(λ) ≤ V δ_k(λ) / (r − δ_k(λ)).

Proof Let x̄ ∈ E be such that B(x̄, r) ⊆ Q. Note that, for any 0 ≤ i ≤ k − 1 and any y ∈ Q, we have ⟨g_i, x_i − y⟩ ≥ −V, where the inequality follows from the separation property (4) and Assumption 1.
where the inequalities follow from the definition (13) of δ_k and (14), respectively. It remains to show that λ is a certificate, i.e., that S_k > 0. But this is simple. Indeed, if S_k = 0, then, taking x := x̄ in (15) and using (14), we arrive at a contradiction with our assumption that λ is a semicertificate, i.e., with Γ_k > 0.
According to Lemma 1, from the convergence rate of the gap δ_k(λ^{(k)}) to zero, we can easily obtain the corresponding convergence rate of the residual ε_k(λ^{(k)}). In particular, to ensure that ε_k(λ^{(k)}) ≤ ε for some ε > 0, it suffices to make δ_k(λ^{(k)}) ≤ δ(ε) := rε/(ε + V). For this reason, in the rest of this paper, we can focus our attention on studying the convergence rate only for the gap.

General algorithmic scheme
Consider the general scheme presented in Algorithm 1. This scheme works with an arbitrary oracle G : E → E* satisfying the following condition: there exists x* ∈ B(x_0, R) such that

⟨G(x), x − x*⟩ ≥ 0 for all x ∈ E.  (16)

The point x* from (16) is typically called a solution of our problem. For the general problem with convex structure, represented by the First-Order Oracle g and the Separation Oracle g_Q for the solid Q, the oracle G is usually defined as follows: G(x) := g(x) if x ∈ int Q, and G(x) := g_Q(x) otherwise. To ensure that (16) holds, the constant R needs to be chosen sufficiently big so that Q ⊆ B(x_0, R).

Algorithm 1: General Scheme of Subgradient Ellipsoid Method
Input: Point x_0 ∈ E and scalar R > 0.
Initialization: define the functions ℓ_0(x) := 0 and ω_0(x) := (1/2)‖x − x_0‖².
For k ≥ 0 iterate:
1. Query the oracle at the current test point to obtain g_k := G(x_k).
2. Choose some coefficients a_k, b_k ≥ 0 and update the functions

ℓ_{k+1}(x) := ℓ_k(x) + a_k ⟨g_k, x − x_k⟩,  ω_{k+1}(x) := ω_k(x) + (b_k/2) ⟨g_k, x − x_k⟩².  (17)

3. Set x_{k+1} := argmin_{x∈E} {ℓ_{k+1}(x) + ω_{k+1}(x)}.

Note that, in Algorithm 1, the ω_k are strictly convex quadratic functions and the ℓ_k are affine functions. Therefore, the sets Ω_k are certain ellipsoids and the sets L_k^- := {x ∈ E : ℓ_k(x) ≤ 0} are certain half-spaces (possibly degenerate).
Let us show that Algorithm 1 is a cutting-plane scheme in which the sets Ω_k ∩ L_k^- are the localizers of the solution x*.
Next, let us establish an important representation of the ellipsoids Ω_k via the functions ℓ_k and the test points x_k. For this, let us define G_k := ∇²ω_k(0) for each k ≥ 0. Observe that these operators satisfy the following simple relations (cf. (17)):

G_0 = B,  G_{k+1} = G_k + b_k g_k g_k^*,  k ≥ 0.  (18)

Also, let us define the sequence R_k > 0 by the recurrence (19), starting from R_0 := R.

Lemma 3 In Algorithm 1, for all k ≥ 0, we have

Ω_k = {x ∈ E : (1/2)‖x − x_k‖²_{G_k} − ℓ_k(x) ≤ (1/2)R_k²}.  (20)

In particular, for all k ≥ 0 and all x ∈ Ω_k ∩ L_k^-, we have ‖x − x_k‖_{G_k} ≤ R_k.

Proof Denote ψ_k(x) := ℓ_k(x) + ω_k(x). Note that ψ_k is a quadratic function with Hessian G_k and minimizer x_k. Hence, for any x ∈ E, we have ψ_k(x) = ψ_k* + (1/2)‖x − x_k‖²_{G_k}, where ψ_k* := min_{x∈E} ψ_k(x). Computing ψ_k* recursively via (17), (18) and (20), using that ψ_0* = 0, and invoking (19), we obtain the claimed representation of Ω_k.

Lemma 3 has several consequences. First, we see that the localizers Ω_k ∩ L_k^- are contained in the ellipsoids {x : ‖x − x_k‖_{G_k} ≤ R_k}, whose centers are the test points x_k.
Second, we get a uniform upper bound on the function −ℓ_k over the ellipsoid Ω_k: −ℓ_k(x) ≤ (1/2)R_k² for all x ∈ Ω_k. This observation leads us to the following definition of the sliding gap:

Δ_k := max_{x∈Ω_k} (1/Γ_k) Σ_{i=0}^{k−1} a_i ⟨g_i, x_i − x⟩ = (1/Γ_k) max_{x∈Ω_k} [−ℓ_k(x)],  (24)

provided that Γ_k := Σ_{i=0}^{k−1} a_i ‖g_i‖_* > 0. According to our observation, we have

Δ_k ≤ R_k² / (2Γ_k).  (25)

At the same time, Δ_k ≥ 0 in view of Lemma 2 and (16). Comparing the definition (24) of the sliding gap Δ_k with the definition (13) of the gap δ_k(a^{(k)}) for the semicertificate a^{(k)} := (a_0, ..., a_{k−1}), we see that they are almost identical. The only difference between them is that the solid Ω_k, over which the maximum is taken in the definition of the sliding gap, depends on the iteration counter k. This seems to be unfortunate because we cannot guarantee that each Ω_k contains the feasible set Q (as required in the definition of the gap) even if the initial solid Ω_0 = B(x_0, R) does. However, this problem can be dealt with. Namely, in Sect. 5, we will show that the semicertificate a^{(k)} can be efficiently converted into another semicertificate λ^{(k)} for which δ_k(λ^{(k)}) ≤ Δ_k when the gap is taken over the initial solid Ω := Ω_0. Thus, the sliding gap Δ_k is a meaningful measure of the convergence rate of Algorithm 1, and it makes sense to call the coefficients a^{(k)} a preliminary semicertificate.
Let us now demonstrate that, for a suitable choice of the coefficients a k and b k in Algorithm 1, we can ensure that the sliding gap Δ k converges to zero.

Remark 2
From now on, in order to avoid taking into account some trivial degenerate cases, it will be convenient to make the following minor technical assumption: in Algorithm 1, g_k ≠ 0 for all k ≥ 0.
Indeed, when the oracle reports g_k = 0 for some k ≥ 0, it usually means that the test point x_k, at which the oracle was queried, is, in fact, an exact solution to our problem. For example, if the standard oracle for a problem with convex structure has reported g_k = 0, we can terminate the method and return the certificate λ := (0, ..., 0, 1), for which the residual ε_k(λ) = 0.
Let us choose the coefficients a_k and b_k as in (26), where α_k, θ, γ ≥ 0 are certain coefficients to be chosen later. According to (25), to estimate the convergence rate of the sliding gap, we need to estimate the rate of growth of the coefficients R_k and Γ_k from above and below, respectively. Let us do this.

Lemma 4 In Algorithm 1 with parameters (26), estimate (27) holds for all k ≥ 0.
Proof By the definition of U_k and Lemma 3, we obtain an upper bound on U_k; at the same time, U_k ≥ 0 in view of Lemma 2 and (16). Hence, using the identity following from (26) and combining the result with (19), we obtain a recurrence for R_{k+1}². Note that, for any ξ_1, ξ_2 ≥ 0 and any τ > 0, we have

(ξ_1 + ξ_2)² ≤ (1 + τ)ξ_1² + (1 + 1/τ)ξ_2²

(look at the minimum of the right-hand side in τ). Therefore, for arbitrary τ > 0, the recurrence yields R_{k+1}² ≤ q R_k² + β_k R², where we denote q := q_c(γ) ≥ 1 and β_k := ((τ + 1)/τ)(1 + γ)α_k². Dividing both sides by q^{k+1} and summing, we obtain, in view of (19), a bound on R_k²/q^k. Multiplying both sides by q^k and using that β_i/q^{i+1} ≤ ((τ + 1)/τ)α_i², we come to (27). When α_k = 0 for all k ≥ 0, we have ℓ_k = 0 and L_k^- = E for all k ≥ 0, and the claim follows directly from Lemma 3.

Remark 3
From the proof, one can see that the quantity C_k in Lemma 4 can be improved.

Lemma 5
In Algorithm 1 with parameters (26), for all k ≥ 1, we have the lower bound (30). Proof By the definition of Γ_k and (26), Γ_k splits into two sums, which we estimate from below separately. For the first sum, we can use the trivial bound ρ_i ≥ 1, valid for any i ≥ 0 (since G_i ⪰ B in view of (18)). Let us estimate the second sum. According to (19), for any i ≥ 0, we have R_i ≥ R.
and it remains to lower bound Σ_{i=0}^{k−1} ρ_i². By (18) and (26), this can be done via the arithmetic-geometric mean inequality. Combining the obtained estimates, we get (30).
Subgradient method

In this case, b_k = 0 for all k ≥ 0, so G_k = B and ω_k(x) = ω_0(x) = (1/2)‖x − x_0‖² for all x ∈ E and all k ≥ 0 (see (17) and (18)). Consequently, the new test points in Algorithm 1 are generated according to the following rule: x_{k+1} = x_k − a_k g_k, where a_k = α_k R/‖g_k‖_*. Thus, Algorithm 1 is the Subgradient Method.
In this example, each ellipsoid Ω_k is simply a ball: Ω_k = B(x_0, R) for all k ≥ 0. Hence, the sliding gap Δ_k, defined in (24), does not "slide" and coincides with the gap of the semicertificate a := (a_0, ..., a_{k−1}) on the solid B(x_0, R): Δ_k = δ_k(a). In view of Lemmas 4 and 5, for all k ≥ 1, we have R_k² ≤ (1 + Σ_{i=0}^{k−1} α_i²) R² and Γ_k = R Σ_{i=0}^{k−1} α_i (cf. Lemma 4). Substituting these estimates into (25), we obtain the following well-known estimate for the gap in the Subgradient Method:

δ_k(a) ≤ R (1 + Σ_{i=0}^{k−1} α_i²) / (2 Σ_{i=0}^{k−1} α_i).

The standard strategies for choosing the coefficients α_i are as follows (see, e.g., sect. 3.2.3 in [19]): 1. We fix in advance the number of iterations k ≥ 1 of the method and use the constant coefficients α_i := 1/√k, 0 ≤ i ≤ k − 1. This corresponds to the so-called Short-Step Subgradient Method, for which we have δ_k(a) ≤ R/√k. 2. Alternatively, we can use the time-varying coefficients α_i := 1/√(i + 1), i ≥ 0. This approach does not require us to fix in advance the number of iterations k. However, the corresponding convergence rate estimate becomes slightly worse: δ_k(a) = O(R (1 + ln k)/√k).
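The two step-size strategies can be sketched as follows; the test problem and all constants are illustrative choices, not from the paper:

```python
import numpy as np

def subgradient_method(subgrad, x0, R, k, fixed=True):
    # x_{j+1} = x_j - a_j g_j with a_j = alpha_j * R / ||g_j||, where
    # alpha_j = 1/sqrt(k) (fixed horizon) or alpha_j = 1/sqrt(j+1) (anytime).
    x, xs = x0.copy(), []
    for j in range(k):
        g = subgrad(x)
        alpha = 1.0 / np.sqrt(k) if fixed else 1.0 / np.sqrt(j + 1)
        x = x - alpha * R / np.linalg.norm(g) * g
        xs.append(x.copy())
    return xs

# Illustrative problem: f(x) = ||x - c||_1 with minimizer c, so f* = 0.
n, R, k = 10, 2.0, 400
c = 0.3 * np.ones(n)
f = lambda x: np.abs(x - c).sum()
subgrad = lambda x: np.where(x >= c, 1.0, -1.0)   # ||g|| = sqrt(n) everywhere
xs = subgradient_method(subgrad, np.zeros(n), R, k)
best = min(f(x) for x in xs)
```

For the fixed-horizon strategy, the best recorded value obeys the short-step bound best − f* ≤ M R/√k with M = √n the Lipschitz constant of f in the Euclidean norm.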

Remark 4
If we allow projections onto the feasible set, then, for the resulting Subgradient Method with time-varying coefficients α_i, one can establish the O(1/√k) convergence rate for the "truncated" gap.

Standard ellipsoid method
Another extreme choice is the following one: α_k := 0 for all k ≥ 0 (see (31)). For this choice, we have a_k = 0 for all k ≥ 0. Hence, ℓ_k = 0 and L_k^- = E for all k ≥ 0. Therefore, the localizers in this method are the ellipsoids Ω_k themselves (see Lemma 3). Observe that, in this example, Γ_k ≡ Σ_{i=0}^{k−1} a_i ‖g_i‖_* = 0 for all k ≥ 1, so there is no preliminary semicertificate and the sliding gap is undefined. However, we can still ensure the convergence to zero of a certain meaningful measure of optimality, namely, the average radius of the localizers Ω_k:

avrad Ω_k := (vol Ω_k)^{1/n}.

Indeed, let us define the functions ζ_{p,c}, for real c, p > 0, as in (34). According to Lemma 4, for any k ≥ 0, we have an upper bound on R_k; at the same time, in view of (18) and (26), we can lower bound det G_k. Combining this with (32)-(34), we obtain, for any k ≥ 0, a bound on avrad Ω_k in terms of ζ_{n,1/2}(γ). Let us now choose γ which minimizes avrad Ω_k. For such computations, the following auxiliary result is useful (see Sect. A for the proof).

Lemma 6
For any c ≥ 1/2 and any p ≥ 2, the function ζ_{p,c}, defined in (34), attains its minimum at a unique point γ_c(p), with the corresponding value ζ_{p,c}(γ_c(p)) ≤ e^{−1/(2cp)}.
Applying Lemma 6 to (36), we see that the optimal value of γ is γ := γ_{1/2}(n) (see (38)), for which ζ_{n,1/2}(γ) ≤ e^{−1/n}. With this choice of γ, we obtain, for all k ≥ 0, that

avrad Ω_k ≤ e^{−k/(2n²)} R.  (39)

One can check that Algorithm 1 with parameters (26), (31) and (38) is, in fact, the standard Ellipsoid Method (see Remark 6).

Ellipsoid method with preliminary semicertificate
As we have seen, we cannot measure the convergence rate of the standard Ellipsoid Method using the sliding gap because there is no preliminary semicertificate in this method. Let us present a modification of the standard Ellipsoid Method which does not have this drawback but still enjoys the same convergence rate as the original method (up to some absolute constants).
Recall from (18) and (19) that G_i ⪯ G_k and R_i ≤ R_k for all 0 ≤ i ≤ k. Therefore, we can chain the corresponding estimates, where the penultimate inequality follows from Lemmas 2 and 3. According to (41), this method enjoys, up to absolute constants, the same convergence rate as the standard Ellipsoid Method.

Subgradient ellipsoid method
The previous algorithm still shares the drawback of the original Ellipsoid Method, namely, it does not work when n → ∞. To eliminate this drawback, let us choose α k similarly to how this is done in the Subgradient Method.
On the other hand, dropping the second term in (47), we obtain a simpler bound. Suppose k ≤ n². Then, from (34) and (44), it follows that C_k is bounded by an absolute constant and hence, by (46), R_k² ≤ e C_k R². Combining this with (25) and (49), and verifying numerically that, for our choice of θ, we have (1/2)e(θ + 1)/θ ≤ 2, we obtain the first estimate in (45).
Exactly as in the Subgradient Method, we can use the following two strategies for choosing the coefficients β_i: 1. We fix in advance the number of iterations k ≥ 1 of the method and use the constant coefficients β_i := 1/√k, 0 ≤ i ≤ k − 1. 2. We use the time-varying coefficients β_i := 1/√(i + 1), i ≥ 0. Let us discuss the convergence rate estimate (50). Up to absolute constants, this estimate is exactly the same as in the Subgradient Method when k ≤ n² and as in the Ellipsoid Method when k ≥ n². In particular, when n → ∞, we recover the convergence rate of the Subgradient Method.
To provide a better interpretation of the obtained results, let us compare the convergence rates of the Subgradient and Ellipsoid methods by looking at their squared ratio, and find out for which values of k the rate of the Subgradient Method is better than that of the Ellipsoid Method and vice versa. We assume that n ≥ 2.
Returning to our obtained estimate (50), we see that, ignoring absolute constants and ignoring the "small" region of the values of k between n 2 and n 2 ln n, our convergence rate is basically the best of the corresponding convergence rates of the Subgradient and Ellipsoid methods.

Constructing accuracy semicertificate
Let us show how to convert a preliminary accuracy semicertificate, produced by Algorithm 1, into a semicertificate whose gap on the initial solid is upper bounded by the sliding gap. The key ingredient here is the following auxiliary algorithm which was first proposed in [16] for building accuracy certificates in the standard Ellipsoid Method.

Augmentation algorithm
Let k ≥ 0 be an integer and let Q_0, ..., Q_k be solids in E satisfying (51), where x_i ∈ E and g_i ∈ E*. Further, suppose that, for any s ∈ E* and any 0 ≤ i ≤ k − 1, we can compute a dual multiplier μ ≥ 0 satisfying (52) (provided that certain regularity conditions hold). Let us denote any solution μ of this problem by μ(s, Q_i, x_i, g_i).
Consider now the following routine.

Methods with preliminary certificate
Let us apply the Augmentation Algorithm for building an accuracy semicertificate for Algorithm 1. We only consider those instances for which Γ_k := Σ_{i=0}^{k−1} a_i ‖g_i‖_* > 0, so that the sliding gap Δ_k is well-defined. Recall that the vector a := (a_0, ..., a_{k−1}) is called a preliminary semicertificate. For technical reasons, it will be convenient to add the termination criterion (53) into Algorithm 1, where δ > 0 is a fixed constant. Depending on whether this termination criterion is satisfied at iteration k, we call k a terminal or a nonterminal iteration, respectively.

Remark 5
In practice, one can set δ to an arbitrarily small value (within machine precision) if the desired target accuracy is unknown. As can be seen from the subsequent discussion, the main purpose of the termination criterion (53) is to ensure that U_k never becomes equal to zero during the iterations of Algorithm 1. This guarantees the existence of the dual multiplier in (52) for any s ∈ E* at every nonterminal iteration. The case U_k = 0 corresponds to the degenerate situation in which Algorithm 1 has "accidentally" found an exact solution.
Let k ≥ 1 be an iteration of Algorithm 1. According to Lemma 2, the sets Q_i := Ω_i ∩ L_i^- satisfy (51). Since the method has not been terminated during the course of the previous iterations, we have U_i > 0 for all 0 ≤ i ≤ k − 1. Therefore, for any 0 ≤ i ≤ k − 1, there exists x ∈ Q_i such that ⟨g_i, x − x_i⟩ < 0. This guarantees the existence of the dual multiplier in (52).

Standard ellipsoid method
In the standard Ellipsoid Method, there is no preliminary semicertificate. Therefore, we cannot apply the above procedure. However, in this method, it is still possible to generate an accuracy semicertificate, although the corresponding procedure is slightly more involved. Let us now briefly describe this procedure and discuss how it differs from the previous approach. For details, we refer the reader to [16].
Let k ≥ 1 be an iteration of the method. There are two main steps. The first step is to find a direction s_k in which the "width" of the ellipsoid Ω_k (see (32)) is minimal. It is not difficult to see that s_k is given by the unit eigenvector of the operator G_k corresponding to its largest eigenvalue. For the corresponding minimal "width" of the ellipsoid, we have, by (54), the bound ρ_k via the average radius, where ρ_k := 2 avrad Ω_k. Recall that avrad Ω_k ≤ e^{−k/(2n²)} R in view of (39).
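Finding the minimal-"width" direction thus reduces to a largest-eigenvalue computation. A small numerical sketch (taking B = I for simplicity, with an illustrative stand-in for G_k):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
C = rng.standard_normal((n, n))
G = C @ C.T + np.eye(n)              # stand-in for G_k (with B = I)
R_k = 1.0

w, V = np.linalg.eigh(G)             # eigenvalues in ascending order
s = V[:, -1]                         # unit eigenvector of the largest eigenvalue
half_width = R_k / np.sqrt(w[-1])    # extent of {x : <G x, x> <= R_k^2} along s

# Direct check: the maximizer of <s, x> over the ellipsoid is
# x* = R_k * G^{-1} s / sqrt(<s, G^{-1} s>).
Gs = np.linalg.solve(G, s)
x_star = R_k * Gs / np.sqrt(s @ Gs)
```

Along the top eigenvector, the support value ⟨s, x*⟩ equals R_k/√λ_max, which is the smallest half-width over all directions.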
At the second step, we apply Algorithm 2 two times with the sets Q_i := Ω_i: first, to the vector s_k, obtaining dual multipliers μ′ := (μ′_0, ..., μ′_{k−1}), and then to the vector −s_k, obtaining dual multipliers μ″ := (μ″_0, ..., μ″_{k−1}). By Lemma 7 and (54), we obtain the corresponding bounds over B(x_0, R). Consequently, for λ := μ′ + μ″, we obtain a bound on the gap. (Here, eigenvectors and eigenvalues are defined with respect to the operator B inducing the norm ‖·‖.)
Finally, one can show the corresponding estimate, where D is the diameter of Q and r is the largest of the radii of Euclidean balls contained in Q. Thus, whenever ρ_k < r, λ is a semicertificate with the corresponding gap on B(x_0, R). Compared to the standard Ellipsoid Method, we see that, in the Subgradient Ellipsoid methods, the presence of the preliminary semicertificate removes the need to find the minimal-"width" direction and requires only one run of the Augmentation Algorithm.
Implementation details

Explicit representations
In the implementation of Algorithm 1, instead of the operators G_k, it is better to work with their inverses H_k := G_k^{-1}. Applying the Sherman–Morrison formula to (18), we obtain the update rule (55) for H_k.

Let us now obtain an explicit formula for the next test point x_{k+1}. This has already been partly done in the proof of Lemma 3. Indeed, recall that x_{k+1} is the minimizer of the function ψ_{k+1}(x). From (21), we see that x_{k+1} = x_k − (a_k + (1/2) b_k U_k) H_{k+1} g_k. Combining this with (55), we obtain (56).

Finally, one can obtain the explicit representations (57) for L_k^- and Ω_k, where, for any k ≥ 0, c_0 := 0, σ_0 := 0, c_{k+1} := c_k + a_k g_k and σ_{k+1} := σ_k + a_k ⟨g_k, x_k⟩ (see (58)).

The auxiliary problem (60) is solvable provided that there exists some x ∈ E such that ‖x‖_{H^{-1}} < 1 and ⟨a, x⟩ ≤ β (Slater condition). One can show that (60) has the solution (61) (see Lemma 10), which equals u(H, s, a, β) except in a degenerate case, where u(H, s, a, β) is the unconstrained minimizer of the objective function in (60). Let us present an explicit formula for u(H, s, a, β). For future use, it will be convenient to write down this formula in a slightly more general form for the multidimensional variant (62) of problem (60), in which s ∈ E*, H : E* → E is a self-adjoint positive definite linear operator, A : R^m → E* is a linear operator with trivial kernel, and b ∈ R^m satisfies ⟨b, (A* H A)^{-1} b⟩ < 1. It is not difficult to show that problem (62) has the unique solution (63) (see Lemma 9).

Note that, in order for the above approach to work, we need to guarantee that the sets Ω_k and L_k^- satisfy a certain regularity condition, namely, int Ω_k ∩ L_k^- ≠ ∅. This condition can be easily fulfilled by adding termination criterion (53) into Algorithm 1.

Lemma 8 Consider Algorithm 1 with termination criterion (53). Then, at each iteration k ≥ 0, at the beginning of Step 2, we have int Ω_k ∩ L_k^- ≠ ∅. Moreover, if k is a nonterminal iteration, we also have ⟨g_k, …

The proof proceeds by slightly modifying the proof of Lemma 2 (using that int Ω_k ∩ L_k^- ≠ ∅), and we can continue by induction.
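The Sherman–Morrison mechanism behind the update of H_k can be sketched as follows. The exact coefficients of update (18) are not reproduced here, so the snippet assumes a generic rank-one update G_{k+1} = G_k + c g g^T with a scalar c > 0; then H_{k+1} = H_k − c (H_k g)(H_k g)^T / (1 + c ⟨g, H_k g⟩), computable in O(n²) time without any matrix inversion:

```python
import numpy as np

def sherman_morrison_update(H, g, c):
    """Given H = G^{-1}, return (G + c * g g^T)^{-1} in O(n^2) time.

    The scalar c and the rank-one structure are illustrative
    assumptions; the actual update (18) has its own coefficients.
    """
    w = H @ g                      # O(n^2) matrix-vector product
    denom = 1.0 + c * (g @ w)      # scalar 1 + c * <g, H g>
    return H - (c / denom) * np.outer(w, w)

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
G = B @ B.T + n * np.eye(n)        # a positive definite G_k
H = np.linalg.inv(G)
g = rng.standard_normal(n)
c = 0.7

H_new = sherman_morrison_update(H, g, c)
H_ref = np.linalg.inv(G + c * np.outer(g, g))
assert np.allclose(H_new, H_ref)
```

The update touches only matrix-vector products and an outer product, which is what keeps each iteration of Algorithm 1 at O(n²) cost.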

Computing dual multipliers
Recall from Sect. 5 that the procedure for generating an accuracy semicertificate for Algorithm 1 requires one to repeatedly carry out the following operation: given s ∈ E* and some iteration number i ≥ 0, compute a certain dual multiplier μ ≥ 0. This can be done as follows. First, using (57), let us rewrite the primal problem more explicitly. Our goal is to dualize the second linear constraint and find the corresponding multiplier. However, for the sake of symmetry, it is better to dualize both linear constraints, find the corresponding multipliers and then keep only the second one. To simplify notation, we introduce problem (64), where H : E* → E is a self-adjoint positive definite linear operator, s, a_1, a_2 ∈ E* and b_1, b_2 ∈ R. Clearly, our original problem can be transformed into this form by an appropriate choice of H, s, a_1, a_2, b_1 and b_2. Note that this transformation does not change the dual multipliers.
Dualizing the linear constraints in (64), we obtain the dual problem (65), which is solvable provided that Slater condition (66) holds. Note that (66) can be ensured by adding termination criterion (53) into Algorithm 1 (see Lemma 8).
A solution of (65) can be found using Algorithm 3. In this routine, τ (·), ξ(·) and u(·) are the auxiliary operations, defined in Sect. 6.2, and A := (a 1 , a 2 ) is the linear operator Au := u 1 a 1 + u 2 a 2 acting from R 2 to E * . The correctness of Algorithm 3 is proved in Theorem 3.
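Algorithm 3 computes the multipliers exactly via the closed-form operations τ(·), ξ(·) and u(·). Purely as an illustration of the structure of the dual problem (65), the sketch below treats the Euclidean case H = I and assumes (65) takes the form min_{μ ≥ 0} ‖s + Aμ‖ + ⟨b, μ⟩ (the standard Lagrangian dual of minimizing ⟨s, x⟩ over the unit ball intersected with the two half-spaces ⟨a_i, x⟩ ≤ b_i); the data s, A, b are made up. A simple projected subgradient method then approximates a solution:

```python
import numpy as np

# Made-up illustrative data (H = I, Euclidean norm).
s = np.array([1.0, 2.0, 3.0])
A = np.array([[-1.0, 0.0], [0.0, -1.0], [0.0, 0.0]])  # columns a_1, a_2
b = np.array([0.1, 0.1])

def f(mu):
    # Assumed dual objective: ||s + A mu|| + <b, mu>.
    return np.linalg.norm(s + A @ mu) + b @ mu

# Projected subgradient method on the nonnegative orthant.
mu = np.zeros(2)
mu_best, f_best = mu.copy(), f(mu)
for t in range(2000):
    r = s + A @ mu
    g = A.T @ (r / np.linalg.norm(r)) + b       # subgradient (r != 0 here)
    mu = np.maximum(mu - 0.1 / np.sqrt(t + 1) * g, 0.0)  # project onto mu >= 0
    if f(mu) < f_best:
        mu_best, f_best = mu.copy(), f(mu)

# The best found point is feasible and improves on mu = 0.
assert (mu_best >= 0).all() and f_best < f(np.zeros(2))
```

Unlike this iterative sketch, Algorithm 3 obtains the exact multipliers in closed form, which is essential for the O(n²) per-multiplier cost discussed below.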

Time and memory requirements
Let us discuss the time and memory requirements of Algorithm 1, taking into account the previously mentioned implementation details.
The main objects in Algorithm 1 that need to be stored and updated between iterations are the test points x_k, the matrices H_k, the scalars R_k, the vectors c_k and the scalars σ_k; see (19), (55), (56) and (58) for the corresponding updating formulas. To store all these objects, we need O(n²) memory.
Consider now what happens at each iteration k. First, we compute U_k. For this, we calculate z_k and D_k according to (58) and then perform the calculations described in Sect. 6.2. The most expensive operation there is a matrix-vector product, which takes O(n²) time. After that, we calculate the coefficients a_k and b_k according to (26), where α_k, θ and γ are certain scalars, easily computable for all main instances of Algorithm 1. The most expensive step here is computing the norm ‖g_k‖_{G_k}^*, which can be done in O(n²) operations by evaluating the product H_k g_k. Finally, we update our main objects, which takes O(n²) time.
Thus, each iteration of Algorithm 1 has O(n²) time and memory complexity, exactly as in the standard Ellipsoid Method.

Now let us analyze the complexity of the auxiliary procedure from Sect. 5 for converting a preliminary semicertificate into a semicertificate. The main operation in this procedure is running Algorithm 2, which iterates "backwards", computing a dual multiplier μ_i at each iteration i = k−1, …, 0. Using the approach from Sect. 6.3, we can compute μ_i in O(n²) time, provided that the objects x_i, g_i, H_i, z_i, D_i, c_i, σ_i are stored in memory. Note, however, that, in contrast to the "forward" pass, when iterating "backwards", there is no way to efficiently recompute all these objects without storing a certain "history" of the main process from iteration 0 up to k. The simplest choice is to keep in this "history" all the objects mentioned above, which requires O(kn²) memory. A slightly more efficient idea is to keep the matrix-vector products H_i g_i instead of H_i and then use (55) to recompute H_i from H_{i+1} in O(n²) operations. This reduces the size of the "history" to O(kn) while keeping the O(kn²) total time complexity of the auxiliary procedure.

Note that these estimates are exactly the same as those for the best currently known technique for generating accuracy certificates in the standard Ellipsoid Method [16]. In particular, if we generate a semicertificate only once at the very end, the time complexity of our procedure is comparable to that of running the standard Ellipsoid Method without computing any certificates. Alternatively, as suggested in [16], one can generate semicertificates, say, every 2, 4, 8, 16, … iterations. Then, the total overhead of the auxiliary procedure for generating semicertificates remains comparable to the time complexity of the method itself.
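The "history" trick for the backward pass can be sketched as follows. Assuming, for illustration, that the forward update of H_i has the generic rank-one form H_{i+1} = H_i − β_i w_i w_i^T with w_i := H_i g_i (the actual coefficients come from (55) and are not reproduced here), it suffices to store the O(n)-sized vectors w_i together with the scalars β_i: the backward pass then recovers H_i = H_{i+1} + β_i w_i w_i^T in O(n²) operations per step:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 6, 10

# Forward pass: rank-one updates H_{i+1} = H_i - beta_i w_i w_i^T with
# w_i = H_i g_i.  The coefficients beta_i below are made-up stand-ins
# for the actual coefficients of update (55).
H = np.eye(n)
history = []                      # O(kn) memory: (w_i, beta_i) pairs
for i in range(k):
    g = rng.standard_normal(n)
    w = H @ g
    beta = 0.5 / (1.0 + g @ w)    # illustrative scalar; keeps H positive definite
    history.append((w, beta))
    H = H - beta * np.outer(w, w)

# Backward pass: recover each H_i from H_{i+1} in O(n^2) time.
H_back = H.copy()
for w, beta in reversed(history):
    H_back = H_back + beta * np.outer(w, w)

assert np.allclose(H_back, np.eye(n))   # we are back at H_0
```

Storing only (w_i, β_i) instead of the full matrices is exactly what brings the "history" size from O(kn²) down to O(kn).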

Conclusion
In this paper, we have addressed one of the issues of the standard Ellipsoid Method, namely, its poor convergence for problems of large dimension n. For this, we have proposed a new algorithm which can be seen as the combination of the Subgradient and Ellipsoid methods.
Our developments can be considered as a first step towards constructing universal methods for nonsmooth problems with convex structure. Such methods could significantly improve the practical efficiency of solving various applied problems.
Note that there are still some open questions. First, the convergence estimate of our method with time-varying coefficients contains an extra factor proportional to the logarithm of the iteration counter. We have seen that this logarithmic factor already has its roots in the Subgradient Method. However, as discussed in Remark 4, for the Subgradient Method, this issue can be easily resolved by allowing projections onto the feasible set and working with "truncated" gaps. An even better alternative, which does not require any of this machinery, is to use Dual Averaging [18] instead of the Subgradient Method. It is an interesting question whether one can combine Dual Averaging with the Ellipsoid Method similarly to how we have combined the Subgradient and Ellipsoid methods.
Second, the convergence rate estimate, which we have obtained for our method, is not continuous in the dimension n. Indeed, for small values of the iteration counter k, this estimate behaves as that of the Subgradient Method and then, at some moment (around n 2 ), it switches to the estimate of the Ellipsoid Method. As discussed at the end of Sect. 4.4, there exists some "small" gap between these two estimates around the switching moment. Nevertheless, the method itself is continuous in n and does not contain any explicit switching rules. Therefore, there should be some continuous convergence rate estimate for our method, and it is an open question to find it.
Another interesting question is to understand how the proposed method behaves on other (less general) classes of convex problems than those considered in this paper. For example, it is well known that, on smooth and/or strongly convex problems, (sub)gradient methods have much better convergence rates than on general nonsmooth problems. We expect similar conclusions to be valid for the proposed Subgradient Ellipsoid Method. However, to achieve such acceleration, it may be necessary to modify the algorithm, e.g., by using different step sizes. We leave this direction for future research.
Finally, apart from the Ellipsoid Method, there exist other "dimension-dependent" methods (e.g., the Center-of-Gravity Method [13,20], the Inscribed Ellipsoid Method [22], the Circumscribed Simplex Method [6], etc.). Similarly, the Subgradient Method is not the only "dimension-independent" method, and there exist numerous alternatives which are better suited for certain problem classes (e.g., the Fast Gradient Method [17] for Smooth Convex Optimization or methods for Stochastic Programming [7,8,11,15]). Of course, it is interesting to consider different combinations of the aforementioned "dimension-dependent" and "dimension-independent" methods. In this regard, it is also worth mentioning the works [4,5], where the authors propose new variants of gradient-type methods for smooth strongly convex minimization problems inspired by the geometric construction of the Ellipsoid Method.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

B Support function and dual multipliers: proofs
For brevity, everywhere in this section, we write ‖·‖ and ‖·‖_* instead of ‖·‖_{H^{-1}} and ‖·‖_{H^{-1}}^*, respectively. We also denote B_0 := {x ∈ E : ‖x‖ ≤ 1}.

B.1 Auxiliary operations
Lemma 9 Let s ∈ E*, let A : R^m → E* be a linear operator with trivial kernel, and let b ∈ R^m be such that ⟨b, (A* H A)^{-1} b⟩ < 1. Then, problem (62) has a unique solution given by (63).
Proof Note that the sublevel sets of the objective function in (62) are bounded. Hence, problem (62) has a solution.
Let u ∈ R^m be a solution of problem (62). If s = Au, then u = (A* H A)^{-1} A* s, which coincides with the solution given by (63) (note that, in this case, r = 0). Now suppose s ≠ Au. Then, from the first-order optimality condition, we obtain b = A*(s − Au)/ρ, where ρ := ‖s − Au‖_* > 0. Hence, u = (A* H A)^{-1}(A* s − ρ b). Thus, ρ = r and u = u(H, s, A, b) given by (63).
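Formula (63) is not reproduced in this text, but the proof above pins it down. In coordinates (E = R^n, H an SPD matrix, A an n × m matrix with trivial kernel, ‖y‖_* = (y^T H y)^{1/2}, and the objective of (62) read as ‖s − Au‖_* + ⟨b, u⟩, which is an assumption here), eliminating ρ gives u = (A^T H A)^{-1}(A^T H s − r b), where r² = (⟨s, Hs⟩ − ⟨A^T H s, (A^T H A)^{-1} A^T H s⟩) / (1 − ⟨b, (A^T H A)^{-1} b⟩). The sketch below checks this reconstruction against the first-order optimality condition on random data:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 2

B = rng.standard_normal((n, n))
H = B @ B.T + n * np.eye(n)       # SPD matrix defining the norms
A = rng.standard_normal((n, m))   # trivial kernel (full column rank a.s.)
s = rng.standard_normal(n)

Q = A.T @ H @ A                   # A* H A
v = A.T @ H @ s                   # A* s (pairing via H)
b = rng.standard_normal(m)
b *= 0.9 / np.sqrt(b @ np.linalg.solve(Q, b))   # enforce <b, Q^{-1} b> < 1

# Reconstructed solution: u = Q^{-1}(v - r b), with r chosen so that
# r = ||s - A u||_*.  (A sketch of formula (63), not a verbatim copy.)
r = np.sqrt((s @ H @ s - v @ np.linalg.solve(Q, v))
            / (1.0 - b @ np.linalg.solve(Q, b)))
u = np.linalg.solve(Q, v - r * b)

# Check the first-order optimality condition b = A*(s - A u) / rho.
res = s - A @ u
rho = np.sqrt(res @ H @ res)
assert np.isclose(rho, r)
assert np.allclose(A.T @ H @ res, rho * b)
```

The cross terms in ‖s − Au‖_*² cancel exactly as in the proof, which is why ρ = r holds identically rather than only approximately.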

Lemma 10 Let s, a ∈ E* and β ∈ R be such that ⟨a, x⟩ ≤ β for some x ∈ int B_0. Then, problem (60) has a solution given by (61). Moreover, this solution is unique if β < ‖a‖_*.
We have proved that (61) is indeed a solution of (60). Moreover, when ⟨a, s⟩ ≠ β‖s‖_*, we have shown that this solution is unique. It remains to prove the uniqueness of the solution when ⟨a, s⟩ = β‖s‖_*, assuming additionally that β < ‖a‖_*. But this is simple. Indeed, by our assumptions, |β| < ‖a‖_*, so |⟨a, s⟩| = |β| ‖s‖_* < ‖a‖_* ‖s‖_*. Hence, a and s are linearly independent. But then φ is strictly convex, and thus its minimizer is unique.
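For the one-dimensional case m = 1, the case split in (61) can be illustrated numerically. The sketch below assumes (60) reads φ(u) = ‖s − u a‖_* + β u over u ≥ 0 (an assumption based on the surrounding discussion); by convexity of φ, clipping the unconstrained minimizer at zero solves the constrained problem, which matches the two cases of (61):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4

# Assumed reading of the one-dimensional problem (60):
#   phi(u) = ||s - u a||_* + beta * u  over  u >= 0,
# with ||y||_* = sqrt(y^T H y).
B = rng.standard_normal((n, n))
H = B @ B.T + n * np.eye(n)        # SPD matrix defining the norms
a = rng.standard_normal(n)
s = rng.standard_normal(n)
beta = 0.5 * np.sqrt(a @ H @ a)    # ensures |beta| < ||a||_*

def phi(u):
    y = s - u * a
    return np.sqrt(y @ H @ y) + beta * u

# Unconstrained minimizer (the m = 1 instance of the reconstruction
# from Lemma 9), clipped at zero: for a convex univariate function,
# this solves the problem over u >= 0.
Q = a @ H @ a
v = a @ H @ s
r = np.sqrt((s @ H @ s - v**2 / Q) / (1.0 - beta**2 / Q))
u_star = max(0.0, (v - r * beta) / Q)

# Sanity check against a dense grid.
grid = np.linspace(0.0, 5.0, 50001)
Y = s[None, :] - grid[:, None] * a[None, :]
vals = np.sqrt(np.einsum('ij,jk,ik->i', Y, H, Y)) + beta * grid
assert phi(u_star) <= vals.min() + 1e-6
```

The clipping step is exactly the dichotomy of (61): the constraint u ≥ 0 is either inactive (unconstrained minimizer is feasible) or binds, in which case u = 0 is optimal.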

B.2 Computation of dual multipliers
In this section, we prove the correctness of Algorithm 3.
For s ∈ E*, let X(s) be the subdifferential of ‖·‖_* at the point s. Clearly, X(s) ⊆ B_0 for any s ∈ E*. When s ≠ 0, we denote the unique element of X(s) by x(s). Let us now formulate a convenient optimality condition.
Proof Indeed, the standard optimality condition for a convex function over the nonnegative orthant is as follows: μ * ∈ R m + is a minimizer of ψ on R m + if and only if there exists g * ∈ ∂ψ(μ * ) such that g * i ≥ 0 and g * i μ * i = 0 for all 1 ≤ i ≤ m. It remains to note that ∂ψ(μ * ) = b − A * X (s − Aμ * ).
Thus, at this point, any solution of (65) must be a solution of (62). In view of Lemma 9, to finish the proof, it remains to show that the vectors a_1, a_2 are linearly independent and ⟨b, (A* H A)^{-1} b⟩ < 1. But this is simple. Indeed, from (75) and the fact that τ_1 and τ_2 cannot both be equal to 0, combined with (76) and (74), we see that int B_0 ∩ L_1 ∩ L_2 ≠ ∅ and, in particular, L_1 ∩ L_2 ≠ ∅. Hence, a_1 and a_2 are linearly independent (otherwise, L_1 = L_2, which would contradict (76)). Taking any x ∈ int B_0 ∩ L_1 ∩ L_2, we obtain ‖x‖ < 1, and the required inequality ⟨b, (A* H A)^{-1} b⟩ < 1 follows.