Frank–Wolfe and friends: a journey into projection-free first-order optimization methods

Invented some 65 years ago in a seminal paper by Marguerite Straus-Frank and Philip Wolfe, the Frank–Wolfe method has recently enjoyed a remarkable revival, fuelled by the need for fast and reliable first-order optimization methods in Data Science and other relevant application areas. This review tries to explain the success of this approach by illustrating its versatility and applicability in a wide range of contexts, combined with an account of recent progress in variants that improve on both the speed and efficiency of this surprisingly simple principle of first-order optimization.


Introduction
In their seminal work [36], Marguerite Straus-Frank and Philip Wolfe introduced a first-order algorithm for the minimization of convex quadratic objectives over polytopes, now known as the Frank-Wolfe (FW) method. The main idea of the method is simple: generate a sequence of feasible iterates by moving at every step towards a minimizer of a linearized objective, the so-called FW vertex. Subsequent works, partly motivated by applications in optimal control theory (see [33] for references), generalized the method to smooth (possibly non-convex) optimization over closed subsets of Banach spaces admitting a linear minimization oracle (see [30,34]).
Furthermore, while the O(1/k) rate in the original article was proved to be optimal when the solution lies on the boundary of the feasible set [19], improved rates were given in a variety of different settings. In [67] and [30], a linear convergence rate was proved over strongly convex domains assuming a lower bound on the gradient norm, a result then extended in [33] under more general gradient inequalities. In [46], linear convergence of the method was proved for strongly convex objectives with the minimum attained in the relative interior of the feasible set.
The slow convergence behaviour for objectives with solution on the boundary motivated the introduction of several variants, the most popular being Wolfe's away step [88]. Wolfe's idea was to move away from bad vertices whenever a step of the FW method moving towards good vertices did not lead to sufficient improvement in the objective. This idea was successfully applied in several network equilibrium problems, where linear minimization can be achieved by solving a min-cost flow problem (see [40] and references therein). In [46], some ideas already sketched by Wolfe were formalized to prove linear convergence of Wolfe's away-step method and identification of the face containing the solution in finite time, under suitable strict complementarity assumptions.
In recent years, the FW method has regained popularity thanks to its ability to efficiently handle the structured constraints appearing in machine learning and data science applications. Examples include LASSO, SVM training, matrix completion, minimum enclosing ball, density mixture estimation, and cluster detection, to name just a few (see Section 3 for further details).
One of the main features of the FW algorithm is its ability to naturally identify sparse and structured (approximate) solutions. For instance, if the optimization domain is the simplex, then after k steps the cardinality of the support of the last iterate generated by the method is at most k + 1. Most importantly, in this setting every vertex added to the support must be the best possible in some sense, a property that connects the method with many greedy optimization schemes [26]. This makes the FW method quite efficient on the abovementioned problem class. Indeed, the combination of structured solutions with often noisy data makes the sparse approximations found by the method possibly more desirable than the high-precision solutions generated by a faster-converging approach. In some cases, like cluster detection (see, e.g., [10]), finding the support of the solution is actually enough to solve the problem, independently of the precision achieved.
Another important feature is that the linear minimization used in the method is often cheaper than the projections required by projected-gradient methods.
It is important to notice that, even when these two operations have the same complexity, the constants defining the related bounds can differ significantly (see [28] for some examples and tests). When dealing with large-scale problems, the FW method hence has a much smaller per-iteration cost with respect to projected-gradient methods. For this reason, FW methods fall into the category of projection-free methods [64]. Furthermore, the method can be used to approximately solve quadratic subproblems in accelerated schemes, an approach usually referred to as conditional gradient sliding (see, e.g., [20,65]).

Organisation of the paper
The present review is not intended as an exhaustive literature survey, but rather as an advanced tutorial demonstrating the versatility and power of this approach. The article is structured as follows: in Section 2, we introduce the classic FW method, together with a general scheme covering all the methods we consider. In Section 3, we present applications ranging from classic optimization to more recent machine learning problems. In Section 4, we review some important stepsizes for first-order methods. In Section 5, we discuss the main theoretical results about the FW method and its most popular variants, including the O(1/k) convergence rate for convex objectives, affine invariance, the sparse approximation property, and support identification. In Section 6 we illustrate some recent improvements on the O(1/k) convergence rate. Finally, in Section 7 we present recent FW variants fitting different optimization frameworks, in particular block coordinate, distributed, accelerated, and trace norm optimization.

Notation
For any integers a and b, denote by [a : b] = {x integer : a ≤ x ≤ b} the integer range between them. For a set V, the power set 2^V denotes the system of all subsets of V, whereas for any positive integer s ∈ N we set V_s := {S ∈ 2^V : |S| = s}, with |S| denoting the number of elements in S. Matrices are denoted by capital sans-serif letters (e.g., the zero matrix O, or the n × n identity matrix I_n with columns e_i, the length of which should be clear from the context). The all-ones vector is e := Σ_i e_i ∈ R^n. Generally, vectors are denoted by boldface sans-serif letters x, and their transpose by x^⊺. The Euclidean norm of x is then ∥x∥ := √(x^⊺x), whereas the general p-norm is denoted by ∥x∥_p for any p ≥ 1 (so ∥x∥_2 = ∥x∥). By contrast, the so-called zero-norm simply counts the number of nonzero entries: ∥x∥_0 := |{i : x_i ≠ 0}|. For a vector d ≠ o we denote its normalization by d̂ := d/∥d∥. Here o denotes the zero vector. In the context of symmetric matrices, "psd" abbreviates "positive semidefinite".

Problem and general scheme
We consider the following problem:

min {f(x) : x ∈ C},  (1)

where C is a convex and compact (i.e., bounded and closed) subset of R^n and, unless specified otherwise, f is a differentiable function having Lipschitz continuous gradient with constant L > 0:

∥∇f(x) − ∇f(y)∥ ≤ L∥x − y∥ for all x, y ∈ C.

Throughout the article, we denote by x* a (global) solution to (1) and use the symbol f* := f(x*) as a shorthand for the corresponding optimal value. The general scheme of the first-order methods we consider for problem (1), reported in Algorithm 1, is based upon a set F(x, g) of directions feasible at x, using first-order local information on f around x; in the smooth case, g = ∇f(x). From this set, a particular d ∈ F(x, g) is selected, with the maximal stepsize α_max possibly dependent on auxiliary information available to the method (at iteration k, we thus write α_k^max), and not always equal to the maximal feasible stepsize.
Algorithm 1 First-order method
1: Initialize x_0 ∈ C, k := 0
2: For k = 0, 1, ... do
3:   If x_k satisfies some specific condition, then Stop
4:   Select a direction d_k ∈ F(x_k, ∇f(x_k))
5:   Update x_{k+1} := x_k + α_k d_k, with α_k ∈ [0, α_k^max] a suitably chosen stepsize
6: End for

The classical Frank-Wolfe method
The classical FW method for minimization of a smooth objective f generates a sequence of feasible points {x_k} following the scheme of Algorithm 2. At iteration k it moves toward a vertex, i.e., an extreme point, of the feasible set minimizing the scalar product with the current gradient ∇f(x_k). It therefore makes use of a linear minimization oracle (LMO) for the feasible set, defining the descent direction as

d_k = d_k^{FW} := s_k − x_k, with s_k ∈ argmin_{s ∈ C} ∇f(x_k)^⊺ s.  (3)

In particular, the update at step 6 can be written as

x_{k+1} = x_k + α_k (s_k − x_k) = (1 − α_k) x_k + α_k s_k.

Since α_k ∈ [0, 1], by induction x_{k+1} can be written as a convex combination of elements in the set S_{k+1} := {x_0} ∪ {s_i}_{0 ≤ i ≤ k}. When C = conv(A) for a set A of points with some common property, usually called "elementary atoms", if x_0 ∈ A then x_k can be written as a convex combination of k + 1 elements in A. Note that due to Caratheodory's theorem, we can even limit the number of occurring atoms to min{k, n} + 1. In the rest of the paper, the primal gap at iteration k is defined as h_k := f(x_k) − f*.

Algorithm 2 Frank-Wolfe method
1: Initialize x_0 ∈ C, k := 0
2: For k = 0, 1, ... do
3:   If x_k satisfies some specific condition, then Stop
4:   Compute s_k ∈ argmin_{s ∈ C} ∇f(x_k)^⊺ s
5:   Set d_k := s_k − x_k
6:   Update x_{k+1} := x_k + α_k d_k, with α_k ∈ (0, 1] a suitably chosen stepsize
7: End for

Examples

FW methods and variants are a natural choice for constrained optimization on convex sets admitting a linear minimization oracle significantly faster than computing a projection. We present here in particular the traffic assignment problem, submodular optimization, the LASSO problem, matrix completion, adversarial attacks, minimum enclosing ball, SVM training, and maximal clique search in graphs.
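As a concrete illustration of the classic method, the following minimal Python sketch (illustrative only, with a toy quadratic objective) runs FW with the diminishing stepsize α_k = 2/(k+2) over the unit simplex, whose LMO simply returns the vertex with the smallest gradient component:

```python
import numpy as np

def lmo_simplex(grad):
    """LMO for the unit simplex: the linear function grad^T s is
    minimized at the vertex e_i, with i an index of the smallest
    gradient component."""
    s = np.zeros_like(grad)
    s[np.argmin(grad)] = 1.0
    return s

def frank_wolfe(grad_f, lmo, x0, n_iters=200):
    """Classic FW with the diminishing stepsize alpha_k = 2/(k+2)."""
    x = x0.copy()
    for k in range(n_iters):
        s = lmo(grad_f(x))        # FW vertex s_k
        d = s - x                 # FW direction d_k
        x = x + 2.0 / (k + 2.0) * d
    return x

# Toy example: min ||x - c||^2 over the simplex, with c feasible.
c = np.array([0.2, 0.3, 0.5])
x_star = frank_wolfe(lambda x: 2.0 * (x - c), lmo_simplex,
                     np.array([1.0, 0.0, 0.0]))
```

Note that every iterate stays feasible because it is a convex combination of the previous iterate and a vertex; no projection is ever computed.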

Traffic assignment
Finding a traffic pattern satisfying the equilibrium conditions in a transportation network is a classic problem in optimization that dates back to Wardrop's paper [86]. Let G be a network with set of nodes [1 : n]. Let {D(i, j)}_{i≠j} be demand coefficients, modeling the amount of goods with destination j and origin i. For any i, j with i ≠ j, let furthermore f_ij : R → R be non-linear cost functions, and x_ij^s be the flow on link (i, j) with destination s. The traffic assignment problem can be modeled as the following non-linear multicommodity network problem [40]: minimize the total cost Σ_{i≠j} f_ij(Σ_s x_ij^s), subject to nonnegativity and to flow conservation of each commodity s at each node, with the demands D(·, s) as inputs. The linearized optimization subproblem needed to compute the FW vertex can then be split into n shortest-path subproblems, one per destination s, with the current marginal costs as arc lengths. A number of FW variants were proposed in the literature for efficiently handling this kind of problem (see, e.g., [9,40,66,87] and references therein for further details). In the more recent work [55], a FW variant also solving a shortest-path subproblem at each iteration was applied to image and video colocalization.

Submodular optimization
Given a finite set V, a function F : 2^V → R is said to be submodular if for every A, B ⊆ V

F(A) + F(B) ≥ F(A ∪ B) + F(A ∩ B).

As is common practice in the optimization literature (see e.g. [4, Section 2.1]), here we always assume F(∅) = 0. A number of machine learning problems, including image segmentation and sensor placement, can be cast as minimization of a submodular function (see, e.g., [4,22] and references therein for further details):

min {F(A) : A ⊆ V}.  (8)

Submodular optimization can also be seen as a more general way to relate combinatorial problems to convexity, for example for structured sparsity [4,53]. By a theorem from [39], problem (8) can in turn be reduced to a minimum norm point problem over the base polytope

B(F) := {s ∈ R^V : s(A) ≤ F(A) for all A ⊆ V, s(V) = F(V)},

with s(A) := Σ_{i∈A} s_i. For this polytope, linear optimization can be achieved with a simple greedy algorithm. More precisely, consider the LP max {w^⊺s : s ∈ B(F)}. (On the larger submodular polyhedron, obtained by dropping the equality s(V) = F(V), the problem is clearly unbounded whenever the objective vector w has a negative component.) A solution to the LP can be obtained by ordering w in decreasing manner as w_{j_1} ≥ w_{j_2} ≥ ... ≥ w_{j_n}, and setting

s_{j_k} := F({j_1, ..., j_k}) − F({j_1, ..., j_{k−1}})

for k ∈ [1 : n]. We thus have an LMO with O(n log n) cost. This is the reason why FW variants are widely used in the context of submodular optimization; further details can be found in, e.g., [4,53].
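A sketch of this greedy LMO in Python (names are illustrative; the toy submodular function F(S) = √|S| is a concave function of the cardinality, hence submodular):

```python
import numpy as np

def greedy_lmo(F, w):
    """Edmonds' greedy algorithm: maximizes w^T s over the base
    polytope of a submodular F with F(empty) = 0; the dominant cost
    is sorting w, i.e. O(n log n), plus n evaluations of F."""
    order = np.argsort(-w)            # indices of w in decreasing order
    s = np.zeros(len(w))
    S, prev = [], 0.0
    for j in order:
        S.append(j)
        val = F(S)
        s[j] = val - prev             # marginal gain of adding j
        prev = val
    return s

# Toy submodular function F(S) = sqrt(|S|).
F = lambda S: np.sqrt(len(S))
w = np.array([0.5, 2.0, -1.0, 1.0])
s = greedy_lmo(F, w)
```

The entries of s telescope, so s(V) = F(V) holds automatically and the output indeed lies on the base polytope.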

LASSO problem
The LASSO, proposed by Tibshirani in 1996 [81], is a popular tool for sparse linear regression. Given the training set T = {(r_i, b_i) : i ∈ [1 : m]}, where r_i^⊺ are the rows of an m × n matrix A, the goal is finding a sparse linear model (i.e., a model with a small number of non-zero parameters) describing the data. This problem is strictly connected with the Basis Pursuit Denoising (BPD) problem in signal analysis (see, e.g., [25]). In this case, given a discrete-time input signal b, and a dictionary {a_j ∈ R^m : j ∈ [1 : n]} of elementary discrete-time signals, usually called atoms (here a_j are the columns of a matrix A), the goal is finding a sparse linear combination of the atoms that approximates the real signal. From a purely formal point of view, LASSO and BPD problems are equivalent, and both can be formulated as follows:

min {∥Ax − b∥^2 : ∥x∥_1 ≤ τ},

where the parameter τ controls the amount of shrinkage applied to the model (related to sparsity, i.e., the number of nonzero components in x). The feasible set is the ℓ_1 ball of radius τ, i.e., the convex hull of the 2n points {±τe_i : i ∈ [1 : n]}. Thus we have the following LMO in this case:

s_k = −τ sign([∇f(x_k)]_{i_k}) e_{i_k}, with i_k ∈ argmax_{i∈[1:n]} |[∇f(x_k)]_i|.

It is easy to see that the FW per-iteration cost is then O(n). The peculiar structure of the problem makes FW variants well suited for its solution. This is the reason why LASSO/BPD problems were considered in a number of FW-related papers (see, e.g., [52,53,62,68]).
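In Python, this ℓ1-ball LMO is a few lines (a sketch with toy numbers; the argmax over |∇f| is the only O(n) scan needed):

```python
import numpy as np

def lmo_l1_ball(grad, tau):
    """LMO for the l1 ball of radius tau: <grad, s> is minimized at
    the vertex -tau * sign(grad_i) * e_i, with i maximizing |grad_i|."""
    i = np.argmax(np.abs(grad))
    s = np.zeros_like(grad)
    s[i] = -tau * np.sign(grad[i])
    return s

g = np.array([0.5, -3.0, 1.0])       # a toy gradient
s = lmo_l1_ball(g, tau=2.0)          # selects coordinate 1
```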

Matrix completion
Matrix completion is a widely studied problem that comes up in many areas of science and engineering, including collaborative filtering, machine learning, control, remote sensing, and computer vision (just to name a few; see also [18] and references therein). The goal is to retrieve a low-rank matrix X ∈ R^{n_1 × n_2} from a sparse set of observed matrix entries {U_ij}_{(i,j)∈J} with J ⊂ [1 : n_1] × [1 : n_2]. Thus the problem can be formulated as follows [38]:

min {f(X) : rank(X) ≤ δ},

where the function f is given by the squared loss over the observed entries of the matrix, f(X) = Σ_{(i,j)∈J} (X_ij − U_ij)^2, and δ > 0 is a parameter representing the assumed belief about the rank of the reconstructed matrix we want to get in the end. In practice, the low-rank constraint is relaxed with a nuclear norm ball constraint, where we recall that the nuclear norm ∥X∥_* of a matrix X is equal to the sum of its singular values. Thus we get the following convex optimization problem:

min {f(X) : ∥X∥_* ≤ δ}.

The feasible set is the convex hull of rank-one matrices:

{X : ∥X∥_* ≤ δ} = conv{δuv^⊺ : u ∈ R^{n_1}, v ∈ R^{n_2}, ∥u∥ = ∥v∥ = 1}.

If we indicate with A_J the matrix that coincides with A on the indices J and is zero otherwise, then we can write ∇f(X) = 2(X − U)_J. Thus we have the following LMO in this case: return the rank-one matrix δu_1v_1^⊺, with u_1, v_1 left and right singular vectors corresponding to the top singular value of −∇f(X_k); this boils down to computing the gradient and one top singular pair. Consequently, the FW method at a given iteration approximately reconstructs the target matrix as a sparse combination of rank-one matrices. Furthermore, as the gradient matrix is sparse (it only has |J| non-zero entries), storage and approximate singular vector computations can be performed much more efficiently than for dense matrices. A number of FW variants have hence been proposed in the literature for solving this problem (see, e.g., [38,52,53]).
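A sketch of this LMO in Python (here with a dense SVD for brevity; on large sparse gradients one would instead use a truncated or approximate top-singular-pair computation, as noted above):

```python
import numpy as np

def lmo_nuclear_ball(G, delta):
    """LMO for the nuclear-norm ball of radius delta: <G, S> is
    minimized at the rank-one matrix -delta * u1 v1^T, with (u1, v1)
    the top singular pair of the gradient G."""
    U, _, Vt = np.linalg.svd(G)
    return -delta * np.outer(U[:, 0], Vt[0, :])

G = np.array([[3.0, 0.0],            # toy gradient: top singular
              [0.0, 1.0]])           # value 3, vectors e1 and e1
S = lmo_nuclear_ball(G, delta=5.0)
```

The joint sign ambiguity of the singular vectors cancels in the outer product, so the returned matrix is well defined.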

Adversarial attacks in machine learning
Adversarial examples are maliciously perturbed inputs designed to mislead a properly trained learning machine at test time. An adversarial attack hence consists in taking a correctly classified data point x_0 and slightly modifying it to create a new data point that leads the considered model to misclassification (see, e.g., [21,24,45] for further details). A possible formulation of the problem (see, e.g., [23,45]) is given by the so-called maximum allowable ℓ_p-norm attack, that is,

min {f(x_0 + x) : ∥x∥_p ≤ ε},  (15)

where f is a suitably chosen attack loss function, x_0 is a correctly classified data point, x represents the additive noise/perturbation, ε > 0 denotes the magnitude of the attack, and p ≥ 1. It is easy to see that the LMO has a cost of O(n). If x_0 is a feature vector of a dog image correctly classified by our learning machine, our adversarial attack hence suitably perturbs the feature vector (using the noise vector x), thus getting a new feature vector x_0 + x classified, e.g., as a cat. In case a target adversarial class is specified by the attacker, we have a targeted attack. In some scenarios, the goal may not be to push x_0 to a specific target class, but rather to push it away from its original class. In this case we have a so-called untargeted attack. The attack function f will hence be chosen depending on the kind of attack we aim to perform on the considered model. Due to its specific structure, problem (15) can be nicely handled by means of tailored FW variants. Some FW frameworks for adversarial attacks were recently described in, e.g., [23,56,78].
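For instance, in the special case p = ∞ the LMO has the closed form s = −ε sign(∇f(x)), as in this sketch (toy numbers, names illustrative):

```python
import numpy as np

def lmo_linf_ball(grad, eps):
    """LMO for the l-infinity ball of radius eps (the p = inf case):
    <grad, s> is minimized coordinate-wise, giving -eps * sign(grad),
    hence an O(n) cost."""
    return -eps * np.sign(grad)

g = np.array([0.3, -1.2, 0.0, 2.0])  # a toy attack-loss gradient
s = lmo_linf_ball(g, eps=0.1)
```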

Minimum enclosing ball
Given a set of points P = {p_1, ..., p_n} ⊂ R^d, the minimum enclosing ball problem (MEB, see, e.g., [26,92]) consists in finding the smallest ball containing P. Such a problem models numerous important applications in clustering, nearest neighbor search, data classification, machine learning, facility location, collision detection, and computer graphics, to name just a few. We refer the reader to [60] and the references therein for further details. Denoting by c ∈ R^d the center and by √γ (with γ ≥ 0) the radius of the ball, a convex quadratic formulation for this problem is

min {γ : ∥p_i − c∥^2 ≤ γ for all i ∈ [1 : n]}.

This problem can be reformulated via Lagrangian duality as a convex Standard Quadratic Optimization Problem (StQP, see, e.g., [12])

min {x^⊺A^⊺Ax − Σ_i x_i ∥p_i∥^2 : x ∈ Δ_{n−1}},  (18)

with A = [p_1, ..., p_n], and the LMO simply returns a vertex e_i of the simplex minimizing the corresponding partial derivative of the objective. It is easy to see that the cost per iteration is O(n). When applied to (18), the FW method can find an ε-cluster in O(1/ε) iterations, where an ε-cluster is a subset P′ of P such that the MEB of P′ dilated by 1 + ε contains P [26]. The set P′ is given by the atoms in P selected by the LMO in the first O(1/ε) iterations. Further details on the connections between FW methods and MEB problems can be found in, e.g., [1,2,26] and references therein.
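A small self-contained sketch (toy data; the dual simplex formulation below is the standard one, reconstructed under the assumptions just stated) of FW applied to MEB:

```python
import numpy as np

def meb_frank_wolfe(P, n_iters=500):
    """FW on the dual MEB formulation min_{u in simplex}
    ||P u||^2 - sum_i u_i ||p_i||^2 (a sketch). The LMO just picks
    the simplex vertex with the smallest gradient entry. P holds one
    point per column; returns (center, radius)."""
    d, n = P.shape
    b = np.sum(P * P, axis=0)            # squared norms ||p_i||^2
    u = np.full(n, 1.0 / n)
    for k in range(n_iters):
        grad = 2.0 * (P.T @ (P @ u)) - b
        s = np.zeros(n)
        s[np.argmin(grad)] = 1.0         # LMO over the simplex
        u += (2.0 / (k + 2.0)) * (s - u)
    c = P @ u
    return c, np.sqrt(max(b @ u - c @ c, 0.0))

pts = np.array([[-1.0, 1.0, 0.0],
                [ 0.0, 0.0, 0.5]])       # three points in the plane
center, radius = meb_frank_wolfe(pts)    # MEB: center (0,0), radius 1
```

The weights u_i selected by the LMO mark the points supporting the ball, in line with the ε-cluster property mentioned above.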

Training linear Support Vector Machines
Support Vector Machines (SVMs) represent a very important class of machine learning tools (see, e.g., [82] for further details). Given a labeled set of data points, usually called the training set,

T = {(p_i, y_i) : i ∈ [1 : n]}, with labels y_i ∈ {−1, 1},

the linear SVM training problem consists in finding a linear classifier w ∈ R^d such that the label y_i can be deduced with the "highest possible confidence" from w^⊺p_i. A convex quadratic formulation for this problem is the following [26]:

min {ρ + ½∥w∥^2 : y_i w^⊺p_i ≥ −ρ for all i ∈ [1 : n]},  (19)

where the slack variable ρ stands for the negative margin, and we can have ρ < 0 if and only if there exists an exact linear classifier, i.e., w with sign(w^⊺p_i) = y_i for all i.
The dual of (19) is again an StQP:

min {∥Ax∥^2 : x ∈ Δ_{n−1}},  (20)

with A = [y_1p_1, ..., y_np_n]. Notice that problem (20) is equivalent to an MNP problem on conv{y_ip_i : i ∈ [1 : n]}; see Section 7.2 below. Further details on FW methods for SVM training problems can be found in, e.g., [26,52].

Finding maximal cliques in graphs
In the context of network analysis, the clique model, dating back at least to the work of Luce and Perry [69] on social networks, refers to subsets with every two elements in a direct relationship. The problem of finding maximal cliques has numerous applications in domains including telecommunication networks, biochemistry, financial networks, and scheduling (see, e.g., [11,90]). Let G = (V, E) be a simple undirected graph with V and E its sets of vertices and edges, respectively. A clique in G is a subset C ⊆ V such that (i, j) ∈ E for all i, j ∈ C with i ≠ j.
The goal is finding a clique C that is maximal (i.e., not contained in any strictly larger clique). This corresponds to finding a local minimum of the following equivalent (this time non-convex) StQP (see, e.g., [10,11,51] for further details):

min {−x^⊺(A_G + ½ I_n)x : x ∈ Δ_{n−1}},  (21)

where A_G is the adjacency matrix of G. Due to the peculiar structure of the problem, FW methods can be fruitfully used to find maximal cliques (see, e.g., [51]).

Stepsizes
Popular rules for determining the stepsize α_k ∈ (0, α_k^max] are:

- diminishing stepsize: α_k = 2/(k + 2), (22), mainly used for the classic FW (see, e.g., [37,53]);
- exact line search: α_k ∈ argmin {f(x_k + α d_k) : α ∈ [0, α_k^max]}, (23), where we pick the smallest minimizer for the sake of well-definedness, even in rare cases of ties (see, e.g., [14,62]);
- Armijo line search: the method iteratively shrinks the stepsize in order to guarantee a sufficient reduction of the objective function. It represents a good way to replace exact line search when the latter gets too costly. In practice, we fix parameters δ ∈ (0, 1) and γ ∈ (0, ½), then try steps α = δ^m α_k^max with m ∈ {0, 1, 2, ...} until the sufficient decrease inequality

  f(x_k + α d_k) ≤ f(x_k) + γ α ∇f(x_k)^⊺d_k  (24)

  holds, and set α_k = α (see, e.g., [13] and references therein);
- Lipschitz constant dependent stepsize:

  α_k = min {−∇f(x_k)^⊺d_k / (L∥d_k∥^2), α_k^max},  (25)

  with L the Lipschitz constant of ∇f (see, e.g., [14,74]).
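The Armijo and Lipschitz-dependent rules can be sketched as follows (illustrative Python with a toy quadratic; `alpha_max` plays the role of α_k^max):

```python
import numpy as np

def lipschitz_stepsize(grad, d, L, alpha_max=1.0):
    """Minimizer of the quadratic upper model
    f(x) + a <grad, d> + (L/2) a^2 ||d||^2 over [0, alpha_max]."""
    a = -(grad @ d) / (L * (d @ d))
    return min(max(a, 0.0), alpha_max)

def armijo_stepsize(f, x, grad, d, alpha_max=1.0, delta=0.5, gamma=1e-4):
    """Backtracking: shrink a until the sufficient decrease condition
    f(x + a d) <= f(x) + gamma * a * <grad, d> holds."""
    a = alpha_max
    while f(x + a * d) > f(x) + gamma * a * (grad @ d):
        a *= delta
    return a

# Toy check on f(x) = ||x||^2 (gradient 2x, so L = 2): moving from x
# along d = -x, both rules take the full step to the minimizer 0.
f = lambda x: x @ x
x = np.array([1.0, 1.0])
g, d = 2.0 * x, -x
a_lip = lipschitz_stepsize(g, d, L=2.0)
a_arm = armijo_stepsize(f, x, g, d)
```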
The Lipschitz constant dependent stepsize can be seen as the minimizer over [0, α_k^max] of the quadratic model m_k(α; L) overestimating f along the line x_k + α d_k:

f(x_k + α d_k) ≤ m_k(α; L) := f(x_k) + α ∇f(x_k)^⊺d_k + (L/2) α^2 ∥d_k∥^2,

where the inequality follows from the standard Descent Lemma [9, Proposition 6.1.2].
In case L is unknown, it is even possible to approximate L using a backtracking line search (see, e.g., [58,74]).
We now report a lower bound for the improvement of the objective obtained with the stepsize (25), often used in the convergence analysis.

Lemma 1 If α_k is given by (25), then

f(x_k) − f(x_{k+1}) ≥ (1/2) min {(∇f(x_k)^⊺d_k)^2 / (L∥d_k∥^2), −α_k^max ∇f(x_k)^⊺d_k}.  (27)

Proof We have

f(x_{k+1}) ≤ m_k(α_k; L) ≤ f(x_k) − (1/2) min {(∇f(x_k)^⊺d_k)^2 / (L∥d_k∥^2), −α_k^max ∇f(x_k)^⊺d_k},

where we used the standard Descent Lemma in the first inequality. ⊓⊔

Properties of the FW method and its variants

The FW gap
A key quantity often used as a measure of convergence is the FW gap

G(x) := max_{s∈C} ∇f(x)^⊺(x − s),

which is always nonnegative, and equal to 0 only at first-order stationary points. This gap is, by definition, readily available during the algorithm. If f is convex, using that ∇f(x) is a subgradient we obtain

f(x) − f(x*) ≤ ∇f(x)^⊺(x − x*) ≤ G(x),

so that G(x) is an upper bound on the optimality gap at x. Furthermore, G(x) is a special case of the Fenchel duality gap [63].
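On the simplex, for example, the gap reduces to G(x) = ∇f(x)^⊺x − min_i [∇f(x)]_i, as in this small sketch (toy objective; names illustrative):

```python
import numpy as np

def fw_gap_simplex(x, grad):
    """FW gap over the unit simplex: the LMO minimum of <grad, s>
    is attained at a vertex, so G(x) = <grad, x> - min_i grad_i."""
    return grad @ x - grad.min()

c = np.array([0.2, 0.3, 0.5])
grad_f = lambda x: 2.0 * (x - c)     # gradient of f(x) = ||x - c||^2
x = np.array([0.5, 0.25, 0.25])
gap = fw_gap_simplex(x, grad_f(x))   # upper-bounds f(x) - f* here
```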
If C = Δ_{n−1} is the simplex, then G is related to the Wolfe dual as defined in [26]. Indeed, this variant of Wolfe's dual reads

max {f(x) − u^⊺x + λ(1 − e^⊺x) : ∇f(x) = u + λe, u ≥ o},  (31)

and for a fixed x ∈ R^n, the optimal values of (u, λ) are λ = min_{i∈[1:n]} [∇f(x)]_i and u = ∇f(x) − λe. Performing the maximization in problem (31) iteratively, first for (u, λ) and then for x, this implies that (31) is equivalent to maximizing f(x) − G(x). Furthermore, since Slater's condition is satisfied, strong duality holds by Slater's theorem [15], resulting in G(x*) = 0 for every solution x* of the primal problem.
The FW gap is related to several other measures of convergence (see e.g. [64, Section 7.5.1]). First, consider the projected gradient

g^π(x) := π_C(x − ∇f(x)) − x,

with π_B the projection onto a convex and closed subset B ⊆ R^n. We have g^π(x_k) = 0 if and only if x_k is stationary, with

∥g^π(x_k)∥^2 ≤ G(x_k),

where we used [y − x + ∇f(x)]^⊺(z − y) ≤ 0 for every z ∈ C, with y = π_C(x − ∇f(x)) and z = x. Moreover, first-order stationarity conditions are equivalent to −∇f(x) ∈ N_C(x), the normal cone to C at x. The FW gap provides a lower bound on the distance from the normal cone, dist(N_C(x), −∇f(x)), inflated by the diameter D > 0 of C, as follows:

G(x) = max_{s∈C} [−∇f(x)]^⊺(s − x) ≤ D dist(N_C(x), −∇f(x)),

where in the first inequality we used n^⊺(s − x) ≤ 0 for n ∈ N_C(x) together with the Cauchy-Schwarz inequality. In the non-convex case, the minimal FW gap min_{i∈[0:k]} G(x_i) converges to 0 at a rate of O(1/√k) (see [61]). On the other hand, if f is convex, we have an O(1/k) rate on the optimality gap (see, e.g., [36,67]) for all the stepsizes discussed in Section 4. Here we include a proof for the Lipschitz constant dependent stepsize α_k given by (25).
Theorem 1 If f is convex and the stepsize is given by (25), then for every k ≥ 1

f(x_k) − f* ≤ 2LD^2 / (k + 2).

Before proving the theorem, we prove a lemma concerning the decrease of the objective in the case of a full FW step, that is, a step with d_k = d_k^{FW} and with α_k equal to 1, the maximal feasible stepsize.
Lemma 2 If step k is a full FW step, i.e., d_k = d_k^{FW} and α_k = 1 = α_k^max, then h_{k+1} ≤ h_k/2.

Proof If α_k = 1, then by Definition (25) of the stepsize we must have L∥d_k∥^2 ≤ −∇f(x_k)^⊺d_k. By the standard Descent Lemma it also follows that

f(x_{k+1}) ≤ f(x_k) + ∇f(x_k)^⊺d_k + (L/2)∥d_k∥^2 ≤ f(x_k) + (1/2)∇f(x_k)^⊺d_k.

Considering Definition (3) of d_k and convexity of f, we get

∇f(x_k)^⊺d_k = ∇f(x_k)^⊺(s_k − x_k) ≤ ∇f(x_k)^⊺(x* − x_k) ≤ f* − f(x_k) = −h_k.

To conclude, it suffices to apply this inequality to the RHS above: h_{k+1} ≤ h_k − h_k/2 = h_k/2. ⊓⊔ We can now proceed with the proof of the main result.
Proof (of Theorem 1) We reason by induction. If step k is a full FW step, i.e., α_k = α_k^max = 1, the claim follows directly from Lemma 2. On the other hand, if α_k < α_k^max = 1, then by Lemma 1 we have

h_{k+1} ≤ h_k − (∇f(x_k)^⊺d_k)^2/(2L∥d_k∥^2) ≤ h_k − h_k^2/(2LD^2),

where we used ∇f(x_k)^⊺d_k ≤ −h_k and ∥d_k∥ ≤ D in the second inequality; the claim then follows from the induction hypothesis by standard arguments. ⊓⊔ As can easily be seen from the above argument, the O(1/k) convergence rate holds also in more abstract normed spaces than R^n, e.g., when C is a convex and weakly compact subset of a Banach space (see, e.g., [30,34]). A generalization to some unbounded sets is given in [35]. The bound is tight due to a zigzagging behaviour of the method near solutions on the boundary, leading to a rate of Ω(1/k^{1+δ}) for every δ > 0 (see [19] for further details) when the objective is a strictly convex quadratic function and the domain is a polytope. Also the minimum FW gap min_{i∈[0:k]} G(x_i) converges at a rate of O(1/k) (see [37,53]). In [37], a broad class of stepsizes is examined, including α_k = 1/(k+1) and α_k = ᾱ constant. For these stepsizes, a convergence rate of O(ln(k)/k) is proved.

Variants
Active set FW variants mostly aim to improve over the O(1/k) rate and also to ensure support identification in finite time. They generate a sequence of active sets {A_k}, such that x_k ∈ conv(A_k), and define alternative directions making use of these active sets.
For the pairwise FW (PFW) and the away-step FW (AFW) (see [26,62]), A_k must always be a subset of S_k, with x_k a convex combination of the elements in A_k. The away vertex v_k is then defined by

v_k ∈ argmax {∇f(x_k)^⊺v : v ∈ A_k}.

The AFW direction, introduced in [88], is hence given by

d_k^{AS} := x_k − v_k,

while the PFW direction, as defined in [62] and inspired by the early work [70], is

d_k^{PFW} := s_k − v_k,

with s_k defined in (3).
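A sketch of the AFW direction selection (for simplicity this toy snippet searches the FW vertex over the active set only, an assumption of the sketch rather than the actual method, where the LMO scans all of C):

```python
import numpy as np

def afw_direction(grad, x, active):
    """AFW direction selection (sketch): compare the FW direction
    s_k - x_k with the away direction x_k - v_k and keep the one with
    the smaller slope <grad, d>. `active` holds the active vertices
    as columns."""
    scores = grad @ active
    s = active[:, np.argmin(scores)]      # FW vertex
    v = active[:, np.argmax(scores)]      # away vertex
    d_fw, d_away = s - x, x - v
    return d_fw if grad @ d_fw <= grad @ d_away else d_away

V = np.eye(3)                             # vertices of the simplex
x = np.full(3, 1.0 / 3.0)
grad = np.array([2.0, 0.0, -1.0])
d = afw_direction(grad, x, V)             # here the away step wins
```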
The FW method with in-face directions (FDFW) (see [38,46]), also known as Decomposition-invariant Conditional Gradient (DiCG) when applied to polytopes [5], is defined exactly as the AFW, but with the minimal face F(x_k) of C containing x_k as the active set. The extended FW (EFW) was introduced in [50] and is also known as simplicial decomposition [83]. At every iteration, the method minimizes the objective over the current active set A_{k+1}, where A_{k+1} ⊆ A_k ∪ {s_k} (see, e.g., [26], Algorithm 4.2). A more general version of the EFW, only approximately minimizing over the current active set, was introduced in [62] under the name of fully corrective FW. In Table 1, we report the main features of the classic FW and of the variants under analysis.
Table 1: FW method and variants covered in this review.

Sparse approximation properties
As discussed in the previous section, for the classic FW method and the AFW, PFW, and EFW variants, x_k can always be written as a convex combination of elements in S_k = {x_0} ∪ {s_i}_{0≤i<k}. Even for the FDFW we still have the weaker property that x_k must be an affine combination of elements in S_k. It turns out that the convergence rate of methods with this property is Ω(1/k) in high dimension. More precisely, if C = conv(A) with A compact, the O(1/k) rate of the classic FW method is worst-case optimal under the sparsity constraint

x_k ∈ conv(A_k) with A_k ⊆ A, |A_k| ≤ k + 1.  (49)

An example where the O(1/k) rate is tight was presented in [53]. Let C = Δ_{n−1} and f(x) = ∥x − e/n∥^2. Clearly, f* = 0 with x* = e/n. Then it is easy to see that min{f(x) − f* : ∥x∥_0 ≤ k + 1} ≥ 1/(k+1) − 1/n for every k ∈ N, so that in particular under (49) with A = {e_i : i ∈ [1 : n]}, the rate of any FW variant must be Ω(1/k).
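This lower bound instance is easy to check numerically: among points of the simplex with support of size m, f(x) = ∥x − e/n∥² is minimized by the uniform vector on the support, with value exactly 1/m − 1/n (a small sketch):

```python
import numpy as np

n, m = 100, 5                       # ambient dimension, sparsity level
x = np.zeros(n)
x[:m] = 1.0 / m                     # best m-sparse point on the simplex
f_val = np.sum((x - 1.0 / n) ** 2)  # equals 1/m - 1/n
```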

Affine invariance
The FW method and the AFW, PFW, and EFW are affine invariant [53]. More precisely, let P be a linear transformation, f̂ be such that f̂(Px) = f(x), and Ĉ = P(C). Then for every sequence {x_k} generated by the methods applied to (f, C), the sequence {y_k} := {Px_k} can be generated by the same method with the same stepsizes applied to (f̂, Ĉ). As a corollary, considering the special case where P is the matrix collecting the elements of A as columns, one can prove results on C = Δ_{|A|−1} and generalize them to Ĉ := conv(A) by affine invariance. An affine invariant convergence rate bound for convex objectives can be given using the curvature constant κ_{f,C} [26].
When the method uses the diminishing stepsize sequence (22), it is possible to give the following affine invariant convergence rate bound (see [37]):

f(x_k) − f* ≤ 2κ_{f,C}/(k + 2),

thus in particular slightly improving the rate we gave in Theorem 1, since κ_{f,C} ≤ LD^2.

Support identification for the AFW
It is a classic result that the AFW, under some strict complementarity conditions and for strongly convex objectives, identifies the face containing the solution in finite time [46]. Here we report some explicit bounds for this property proved in [14].
We first assume that C = Δ_{n−1}, and introduce the multiplier functions

λ_i(x) := ∇f(x)^⊺(e_i − x) for i ∈ [1 : n].

Let x* be a stationary point for f, with the objective f not necessarily convex. It is easy to check that {λ_i(x*)}_{i∈[1:n]} coincide with the Lagrangian multipliers. Furthermore, by the complementarity conditions we have x*_i λ_i(x*) = 0 for every i ∈ [1 : n]. The next lemma uses λ_i, and the Lipschitz constant L of ∇f, to give a lower bound on the so-called active set radius r*, defining a neighborhood of x*. Starting the algorithm in this neighbourhood, the active set (the minimal face of C containing x*) is identified in a limited number of iterations.
Lemma 3 Let x* be a stationary point for f on the boundary of Δ_{n−1}, let δ_min := min{λ_i(x*) : λ_i(x*) > 0}, and let I(x*) := {i ∈ [1 : n] : λ_i(x*) = 0}. Assume that for every k for which d_k = d_k^{AS} holds, the stepsize α_k is not smaller than the stepsize given by (25). Then there is an active set radius r* > 0, depending only on δ_min and L, such that whenever ∥x_k − x*∥_1 < r*, the AFW sets all variables in supp(x_k) \ I(x*) to zero in at most |supp(x_k) \ I(x*)| consecutive iterations.

Proof Follows from [14, Theorem 3.3], since under the assumptions the AFW sets one variable in supp(x_k) \ I(x*) to zero at every step, without increasing the 1-norm distance from x*. ⊓⊔

The above lemma does not require convexity, and was applied in [14] to derive active set identification bounds in several convex and non-convex settings. Here we focus on the case where the domain C = conv(A) with |A| < +∞ is a generic polytope, and where f is µ-strongly convex for some µ > 0, i.e.,

f(y) ≥ f(x) + ∇f(x)^⊺(y − x) + (µ/2)∥y − x∥^2 for all x, y ∈ C.  (54)
Let E_C(x*) be the face of C exposed by ∇f(x*):

E_C(x*) := argmin {∇f(x*)^⊺z : z ∈ C}.

Let then θ_A be the Hoffman constant (see [7]) related to [Ā^⊺, I_n, e, −e]^⊺, with Ā the matrix having as columns the elements of A. Finally, consider the function f_A(y) := f(Āy) on Δ_{|A|−1}, let L_A be the Lipschitz constant of ∇f_A, and let r*_A be the corresponding active set radius. Using linearity of AFW convergence for strongly convex objectives (see Section 6.1), we have the following result: the AFW applied to f_A identifies the face E_C(x*) in a number of iterations explicitly bounded in terms of r*_A, the initial gap h_0 and the rate q, where µ_A = µ/(nθ_A^2) and q ∈ (0, 1) is the constant related to the linear convergence rate of the AFW, i.e., h_k ≤ q^k h_0 for all k.

Proof (sketch)
We present an argument for the case C = Δ_{n−1} with A = {e_i : i ∈ [1 : n]}, which can easily be extended by affine invariance to the general case (see [14] for details). In this case θ_A ≥ 1, and we can define μ̄ := µ/n ≥ µ_A.
Now we combine n∥x∥^2 ≥ ∥x∥_1^2 with strong convexity and relation (57) to obtain

∥x_k − x*∥_1^2 ≤ n∥x_k − x*∥^2 ≤ (2n/µ) h_k ≤ (2n/µ) q^k h_0 < (r*)^2

for every k ≥ k̄. Since x_0 is a vertex of the simplex, and at every step at most one coordinate is added to the support of the current iterate, |supp(x_k̄)| ≤ k̄ + 1. The claim follows by applying Lemma 3.

⊓ ⊔
Additional bounds under a quadratic growth condition weaker than strong convexity and strict complementarity are reported in [42].
Convergence and finite time identification for the PFW and the AFW are proved in [13] for a specific class of non-convex minimization problems over the standard simplex, under the additional assumption that the sequence generated has a finite set of limit points.In another line of work, active set identification strategies combined with FW variants have been proposed in [29] and [80].

Inexact linear oracle
In many real-world applications, linear subproblems can only be solved approximately. This is the reason why the convergence of FW variants is often analyzed under some error term for the linear minimization oracle (see, e.g., [16,17,37,53,59]). A common assumption, relaxing the exact minimization property of the FW vertex, is to have access to a point (usually a vertex) s̃_k such that

∇f(x_k)^⊺ s̃_k ≤ min_{s∈C} ∇f(x_k)^⊺ s + δ_k,  (58)

for a sequence {δ_k} of nonnegative approximation errors.
If the sequence {δ_k} is constant and equal to some δ > 0, then trivially the lowest possible approximation error achieved by the FW method is δ. At the same time, [37, Theorem 5.1] implies a rate of O(1/k + δ) in this setting. A vanishing optimality gap can instead be retrieved by assuming that {δ_k} converges to 0 quickly enough, and in particular if

δ_k = δκ_{f,C}/(k + 2)  (59)

for a constant δ > 0. Under (59), in [53] a convergence rate of

f(x_k) − f* ≤ 2κ_{f,C}(1 + δ)/(k + 2)

was proved for the FW method with α_k given by exact line search or equal to 2/(k+2), as well as for the EFW.
A linearly convergent variant making use of an approximate linear oracle, recycling previous solutions to the linear minimization subproblem, is studied in [16]. In [37,49], the analysis of the classic FW method is extended to the case of inexact gradient information. In particular, in [37], assuming the availability of the (δ, L) oracle introduced in [31], a convergence rate of O(1/k + δk) is proved.
6 Improved rates for strongly convex objectives

Linear convergence under an angle condition
In the rest of this section we assume that f is µ-strongly convex (54). We also assume that the stepsize is given by exact line search or by (25).
Under this assumption, an asymptotic linear convergence rate for the FDFW on polytopes was given in the early work [46]. Furthermore, in [44] a linearly convergent variant was proposed, making use, however, of an additional local linear minimization oracle.
Recent works obtain linear convergence rates by proving the angle condition (61), involving a constant τ > 0 and a solution x* ∈ arg min_{x∈C} f(x). As we shall see in the next lemma, under (61) it is not difficult to prove linear convergence rates in the number of good steps. These are FW steps with α_k = 1 and steps in any descent direction with α_k < 1.
Table 2: Known convergence rates for the FW method and the variants covered in this review. NC, C and SC stand for non-convex, convex and strongly convex respectively.
Lemma 4 If the step k is a good step and (61) holds, then h_{k+1} ≤ q h_k for a constant q < 1 depending only on τ and the condition number L/µ.
Proof If the step k is a full FW step, then Lemma 2 entails h_{k+1} ≤ (1/2) h_k. In the remaining case, first observe that by strong convexity the scalar product appearing in (61) can be bounded from below in terms of h_k. We can then proceed using the bound (27) from Lemma 1, applying (61) in the second inequality and the strong convexity bound (64) in the third one, to obtain the claimed contraction.

⊓ ⊔
As a corollary, under (61) we obtain a rate that is linear in the number of good steps, for any method with non-increasing {f(x_k)} following Algorithm 1, with γ(k) ≤ k an integer denoting the number of good steps until step k. It turns out that for all the variants introduced in this review we have γ(k) ≥ Tk for some constant T > 0.

When x* is in the relative interior of C, the FW method satisfies (61) with τ = dist(x*, ∂C)/D, and we have the following result (see [46,62]):

Proof We can assume for simplicity int(C) ≠ ∅, since otherwise we can restrict ourselves to the affine hull of C. Let δ = dist(x*, ∂C) and g = −∇f(x_k)/‖∇f(x_k)‖. First, by assumption we have x* + δg ∈ C. Therefore ⟨−∇f(x_k), s_k − x_k⟩ ≥ ⟨−∇f(x_k), x* + δg − x_k⟩ = δ‖∇f(x_k)‖ + ⟨−∇f(x_k), x* − x_k⟩ ≥ δ‖∇f(x_k)‖, where we used x* + δg ∈ C in the first inequality and convexity in the second. The thesis follows by Lemma 4, noticing that ‖s_k − x_k‖ ≤ D, so that (61) holds with τ = dist(x*, ∂C)/D.

In [62], the authors proved that the directions generated by the AFW and the PFW on polytopes satisfy condition (61) with τ = PWidth(A)/D, where PWidth(A) denotes the pyramidal width of A. While PWidth(A) was originally defined through a rather complex minmax expression, in [73] it was proved that PWidth(A) coincides with a minimal facial distance, a quantity that can be explicitly computed in a few special cases, e.g. for A = {0, 1}^n.

Angle conditions like (61), with τ dependent on the number of vertices used to represent x_k as a convex combination, were given in [5] and [7] for the FDFW and the PFW respectively. In particular, in [7] a geometric constant Ω_C, called the vertex-facet distance, was defined as the minimal distance between a vertex in V(C) and a supporting hyperplane in H(C) not containing it, with V(C) the set of vertices of C, and H(C) the set of supporting hyperplanes of C (containing a facet of C). Then condition (61) is satisfied for τ = Ω_C/s, with d_k the PFW direction and s the number of points used in the active set A_k.
In [5], a geometric constant H_s was defined, depending on the minimum number s of vertices needed to represent the current point x_k, as well as on the proper inequalities appearing in a description of C. For each of these inequalities a second gap g_i was defined via the secondmax function, which gives the second largest value achieved by the argument, and H_s is then defined in terms of these gaps. The arguments used in the paper imply that (61) holds for a suitable τ depending on H_s, if d_k is a FDFW direction and x_k is a convex combination of at most s vertices. We refer the reader to [73] and [75] for additional results on these and related constants.
The linear convergence results for strongly convex objectives are extended to compositions of strongly convex objectives with affine transformations in [7], [62] and [73]. In [47], the linear convergence results for the AFW, and for the FW method with minimum in the interior, are extended with respect to a generalized condition number L_{f,C,D}/µ_{f,C,D}, with D a distance function on C.
For the AFW, the PFW and the FDFW, linear rates with no bad steps (γ(k) = k) are given in [76] for non-convex objectives satisfying a Kurdyka-Łojasiewicz inequality. In [77], condition (61) was proved for the FW direction and orthographic retractions on some convex sets with smooth boundary. The work [27] introduces a new FW variant using a subroutine to align the descent direction with the projection of the negative gradient on the tangent cone, thus implicitly maximizing τ in (61).

Strongly convex domains
When C is strongly convex, we have a O(1/k²) rate for the classic FW method (see, e.g., [43,57]). Furthermore, when C is β_C-strongly convex and ‖∇f(x)‖ ≥ c > 0 on C, then we have a linear convergence rate (see [30,33,58,67]). Finally, it is possible to interpolate between the O(1/k²) rate of the strongly convex setting and the O(1/k) rate of the general convex one, by relaxing strong convexity of the objective with Hölderian error bounds [91], and also by relaxing strong convexity of the domain with uniform convexity [57].
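As an illustration of this regime, the following self-contained Python sketch (all names are our own, and the toy objective is an assumption for illustration) runs FW with exact line search over the Euclidean unit ball, a strongly convex set, for a quadratic whose unconstrained minimizer lies outside the ball, so that the gradient norm is bounded away from zero.

```python
import numpy as np

def lmo_ball(g, radius=1.0):
    # linear minimization over the Euclidean ball: argmin_{||s|| <= r} <g, s>
    return -radius * g / np.linalg.norm(g)

def fw_strongly_convex_ball(b, steps=200):
    """FW with exact line search for f(x) = 0.5*||x - b||^2 over the unit
    ball. With ||b|| > 1, the solution b/||b|| lies on the boundary and
    ||grad f|| >= ||b|| - 1 > 0 on the ball: the regime in which the linear
    rate over strongly convex domains applies."""
    x = np.zeros_like(b)
    x[0] = 1.0                               # start at a boundary point
    for k in range(steps):
        g = x - b                            # gradient of f
        d = lmo_ball(g) - x
        denom = d @ d
        if denom == 0.0:
            break
        # exact line search for the quadratic, clipped to [0, 1]
        alpha = min(1.0, max(0.0, -(g @ d) / denom))
        x = x + alpha * d
    return x

b = np.array([2.0, 1.0, 2.0])                # ||b|| = 3 > 1
x = fw_strongly_convex_ball(b)               # x approaches b / ||b||
```

In this setting the iterates contract toward the boundary solution at a geometric rate, in contrast with the O(1/k) behaviour of the general convex case.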

Block coordinate Frank-Wolfe method
The block coordinate FW (BCFW) was introduced in [63] for block product domains of the form C = C^(1) × ... × C^(m) ⊆ R^{n_1+...+n_m}, and applied to structured SVM training. The algorithm operates by selecting a random block and performing a FW step in that block. Formally, for s ∈ R^{n_i} let s^(i) ∈ R^n be the vector with all blocks equal to 0, except for the i-th block, equal to s. The direction of the BCFW is then a FW direction in the block corresponding to a random index i ∈ [1 : m].
In [63], a O(1/k) convergence rate in expectation was proved, with constants involving the curvature constants κ^{⊗,i}_f of the objective obtained by fixing the blocks outside C^(i). An asynchronous and parallel generalization of this method was given in [85]. This version assumes that a cloud oracle is available, modeling a set of worker nodes, each sending information to a server at different times. This information consists of an index i and the corresponding LMO on C^(i). The algorithm is called asynchronous because the iteration counter of the information sent by a node can be smaller than the current one, modeling a delay. Once the server has collected a minibatch S of τ distinct indexes (overwriting repetitions), the descent direction is obtained by combining the block directions in the minibatch. If the indices sent by the nodes are i.i.d., then under suitable assumptions on the delay a O(1/k) convergence rate with constant K_τ = mκ^⊗_{f,τ}(1 + δ) + h_0 can be proved, for δ depending on the delay error, with κ^⊗_{f,τ} the average curvature constant in a minibatch, keeping all the components not in the minibatch fixed.
In [72], several improvements are proposed for the BCFW, including an adaptive criterion to prioritize blocks based on their FW gap, and block coordinate versions of the AFW and the PFW variants.
In [79], a multi-plane BCFW approach is proposed in the specific case of the structured SVM, based on caching supporting planes in the primal, corresponding to block linear minimizers in the dual. In [8], the duality between BCFW and stochastic subgradient descent for structured SVM is exploited to define a learning rate schedule for neural networks based on a single hyperparameter. The block coordinate approach is extended to the generalized FW in [6], with coordinates however picked in a cyclic order.
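A minimal Python sketch of the BCFW idea on a product of two probability simplices (a toy stand-in for the structured SVM dual; the separable quadratic objective and the function names are illustrative assumptions, while the stepsize 2m/(k + 2m) is the one used in the BCFW analysis of [63]):

```python
import numpy as np

def bcfw(grad, blocks, x0, steps=50000, seed=0):
    """Block-coordinate FW over a product of simplices C = C^(1) x ... x C^(m).

    At each iteration a random block i is drawn and a FW step is performed
    in that block only, with the stepsize 2m/(k + 2m) from the BCFW
    analysis of [63]."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    m = len(blocks)
    for k in range(steps):
        i = rng.integers(m)
        lo, hi = blocks[i]
        g = grad(x)[lo:hi]                   # gradient restricted to block i
        s = np.zeros(hi - lo)
        s[np.argmin(g)] = 1.0                # LMO over the i-th simplex
        alpha = 2.0 * m / (k + 2.0 * m)
        x[lo:hi] += alpha * (s - x[lo:hi])   # FW step in block i only
    return x

# toy problem: two simplex blocks, separable quadratic with optimum t
t = np.array([0.2, 0.8, 0.5, 0.3, 0.2])     # each block of t is feasible
blocks = [(0, 2), (2, 5)]
grad = lambda x: x - t
x = bcfw(grad, blocks, np.array([1.0, 0.0, 1.0, 0.0, 0.0]))
```

Note that each block update is a convex combination inside its own simplex, so feasibility of the product iterate is preserved without any projection.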
Variants for the min-norm point problem
Consider the min-norm point (MNP) problem min_{x∈C} ‖x‖_*, with C a closed convex subset of R^n and ‖·‖_* a norm on R^n. In [89], a FW variant is introduced to solve the problem when C is a polytope and ‖·‖_* is the standard Euclidean norm ‖·‖. Similarly to the variants introduced in Section 5.3, it generates a sequence of active sets {A_k} with s_k ∈ A_{k+1}. At step k the norm is minimized on the affine hull aff(A_k) of the current active set A_k. The descent direction d_k then points from x_k towards this affine minimizer, and the stepsize is given by a tailored linesearch that allows to remove some of the atoms in the set A_k (see, e.g., [62,89]). Whenever x_{k+1} is in the relative interior of conv(A_k), the FW vertex is added to the active set (that is, s_k ∈ A_{k+1}). Otherwise, at least one of the vertices not appearing in a convex representation of x_k is removed. This scheme converges linearly when applied to generic smooth strongly convex objectives (see, e.g., [62]).
In [48], a FW variant is proposed for minimum norm problems of the form min{f(x) : x ∈ K}, with K a convex cone and f convex with L-Lipschitz gradient. The technique applies the standard FW method to the problems min{f(x) : ‖x‖_* ≤ δ_k, x ∈ K}, with {δ_k} an increasing sequence convergent to the optimal value of the problem (85). Let C(δ) = {x ∈ R^n : ‖x‖_* ≤ δ} ∩ K for δ ≥ 0, and let LM(r) ∈ arg min_{x∈C(1)} r^⊤x, so that by homogeneity, for every k, the linear minimization oracle on C(δ_k) is given by LMO_{C(δ_k)}(r) = δ_k LM(r). For every k, applying the FW method with suitable stopping conditions, an approximate minimizer x_k of f(x) over C(δ_k) is generated, with an associated lower bound l_k on the objective, an affine function in δ. Then, denoting by F(δ) the optimal value of f over C(δ), the quantity δ_{k+1} can be defined as min{δ ≥ 0 : l_k(δ) ≤ 0}; since l_k is a lower bound for F, we get F(δ_{k+1}) ≥ 0.

Variants for optimization over the trace norm ball
The FW method has found many applications for optimization problems over the trace norm ball. In this case, as explained in Example 3.4, linear minimization can be performed by computing the top left and right singular vectors of the matrix −∇f(X_k), an operation referred to as 1-SVD (see [3]).
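The 1-SVD oracle is easy to state in code. The following Python sketch (illustrative; it uses a full SVD for clarity, whereas in practice only the leading singular pair is computed, e.g. by a Lanczos or power method) returns the linear minimizer over the nuclear norm ball.

```python
import numpy as np

def lmo_nuclear(G, delta=1.0):
    """1-SVD linear minimization over the nuclear norm ball of radius delta:
    argmin_{||S||_* <= delta} <G, S> = -delta * u1 v1^T, with (u1, v1) the
    top singular pair of G."""
    U, _, Vt = np.linalg.svd(G)
    return -delta * np.outer(U[:, 0], Vt[0, :])

# sanity check: the optimal value is -delta times the top singular value of G
G = np.arange(6.0).reshape(2, 3)
S = lmo_nuclear(G, delta=2.0)
val = float(np.sum(G * S))                  # <G, S> = -delta * sigma_max(G)
```

The oracle output is always a rank-one matrix on the boundary of the ball, which is what makes FW iterates over the trace norm ball low-rank.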
In the work [38], the FDFW is applied to the matrix completion problem (12), thus generating a sequence of matrices {X_k} with ‖X_k‖_* ≤ δ for every k. The method can be implemented efficiently by exploiting the fact that, for X on the boundary of the nuclear norm ball, there is a simple expression for the face F(X). For X ∈ R^{m×n} with rank(X) = k, let UDV^⊤ be the thin SVD of X, so that D ∈ R^{k×k} is the diagonal matrix of non-zero singular values of X, with corresponding left and right singular vectors in the columns of U ∈ R^{m×k} and V ∈ R^{n×k} respectively. If ‖X‖_* = δ, then the minimal face of the domain containing X admits an explicit description in terms of U and V. It is not difficult to see that rank(X_k) ≤ k + 1 for every k ∈ N as well. Furthermore, the thin SVD of the current iterate X_k can be updated efficiently both after FW steps and after in-face steps. The convergence rate of the FDFW in this setting is still O(1/k).
In the recent work [84], an unbounded variant of the FW method is applied to solve a generalized version of the trace norm ball optimization problem: min_{X∈R^{m×n}} {f(X) : ‖PXQ‖_* ≤ δ} (91), with P, Q singular matrices. The main idea of the method is to decompose the domain as the sum S + T of the kernel T of the linear map ϕ_{P,Q}(X) = PXQ and a bounded set S ⊂ T^⊥. Gradient descent steps in the unbounded component T are then alternated with FW steps in the bounded component S. The authors apply this approach to the generalized LASSO as well, using the AFW for the bounded component.
In [3], a variant of the classic FW using k-SVD (computing the top k left and right singular vectors of the SVD) is introduced, and it is proved to converge linearly for strongly convex objectives when the solution has rank at most k. In [71], the FW step is combined with a proximal gradient step for a quadratic problem on the product of the nuclear norm ball with the ℓ1 ball. Approaches using an equivalent formulation on the spectrahedron, introduced in [54], are analyzed in [32,41].

Conclusions
While the concept of the FW method is quite easy to understand, its advantages, witnessed by a multitude of related works, may not be apparent to someone not closely familiar with the subject. Therefore, in Section 3 we considered several motivating applications, ranging from classic optimization to more recent machine learning problems. As in any line search-based method, the proper choice of stepsize is an important ingredient for satisfactory performance. In Section 4, we reviewed several options for stepsizes in first-order methods, which are closely related both to the theoretical analysis and to practical implementation issues guaranteeing fast convergence. This scope was investigated in more detail in Section 5, covering the main results about the FW method and its most popular variants, including the O(1/k) convergence rate for convex objectives, affine invariance, the sparse approximation property, and support identification. The account is complemented by a report on recent progress improving on the O(1/k) convergence rate in Section 6. The versatility and efficiency of this approach were also illustrated in the final Section 7, describing recent FW variants fitting different optimization frameworks and computational environments, in particular block coordinate, distributed, accelerated, and trace norm optimization. Certainly, many other interesting and relevant aspects of FW and friends could not find their way into this review because of space and time limitations, but the authors hope to have convinced the readers that FW merits consideration even by non-experts in first-order optimization.
