The Frank-Wolfe algorithm: a short introduction

In this paper we provide an introduction to the Frank-Wolfe algorithm, a method for smooth convex optimization in the presence of (relatively) complicated constraints. We present the algorithm, introduce key concepts, and establish important baseline results, such as primal and dual convergence. We also discuss some of its properties and present a new adaptive step-size strategy as well as applications.


Introduction
Throughout this paper we will be concerned with constrained optimization problems of the form

min_{x ∈ P} f(x), (Opt)

where P ⊆ ℝ^n is some convex feasible region capturing the constraints, e.g., a polyhedron arising from a system of linear inequalities or a spectrahedron, and f is the objective function satisfying some regularity property, e.g., smoothness and convexity. We also need to specify what access methods we have, both to the function and to the feasible region. A common setup is black box first-order access for f, allowing (only) the computation of the gradient ∇f(x) and the function value f(x) for a given point x. For access to the feasible region P, which we will assume to be compact in the following, there are several common models; we simplify the exposition here for the sake of brevity:

1. Projection. Access to the projection operator Π_P of P that, for a given point y ∈ ℝ^n, returns Π_P(y) ≐ argmin_{x ∈ P} ‖x − y‖ for some norm ‖·‖ (or more generally a Bregman divergence).
2. Barrier function. Access to a barrier function of the feasible region P that increases in value to infinity when approaching the boundary of P. A typical example is the barrier function −∑_i log(b_i − (Ax)_i) for a linear inequality system P ≐ {x | Ax ≤ b}.
Specialized approaches for specific cases also exist, e.g., the simplex method (Dantzig, 1981; 1983) in the case of linear objectives, which uses an explicit description of the feasible region; here, however, we concentrate on the aforementioned black box model. There are also proximal methods, which can be considered generalizations of projection-based methods and which we will not explicitly consider for the sake of brevity; see, e.g., Nemirovski and Yudin (1983); Nesterov (2004; 2018); Nocedal and Wright (2006) for a discussion.
Traditionally, problems of the form (Opt) are solved by variants of projection-based methods. In particular first-order methods, such as variants of projected gradient descent, are often chosen in large-scale contexts as they are comparatively cheap. For some feasible region P with projector Π_P (e.g., Π_P(y) ≐ argmin_{x ∈ P} ‖x − y‖) and smooth objective function f, projected gradient descent (PGD) updates typically take the form:

x_{t+1} ← Π_P(x_t − γ_t ∇f(x_t)), (PGD)

where γ_t is some step-size, e.g., γ_t = 1/L if f is L-smooth (see Definition 2.1) and convex. In essence, a descent step is taken without considering the constraints, and then it is projected back into the feasible region (see Figure 1). Projection-based first-order methods have been extensively studied, with comprehensive overviews available in, e.g., Nesterov (2018); Nocedal and Wright (2006). Optimal methods and rates are known for most scenarios. Efficient execution of the projection operation is possible for simple constraints, such as box constraints, or for highly structured feasible regions, e.g., as discussed in Moondra et al. (2021); Gupta et al. (2016) for submodular base polytopes. However, when the feasible region grows in complexity, the projection operation can become the limiting factor. It often demands the solution of an auxiliary optimization problem, known as the projection problem, over the same feasible region for every descent step. This complexity renders the use of projection-based methods for many significant constrained problems quite challenging; in some cases relaxed projections, which essentially compute separating hyperplanes, can be used though.
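To make the (PGD) update concrete, the following is a minimal Python sketch for the simple case of box constraints, where the projection is just a coordinate-wise clip; the objective, parameters, and all names are illustrative assumptions of ours, not from the text.

```python
import numpy as np

def pgd_box(grad, x0, lo, hi, L, steps=200):
    """Projected gradient descent over the box {x : lo <= x <= hi}.
    grad: gradient oracle of an L-smooth convex f; the projection is a clip."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = np.clip(x - (1.0 / L) * grad(x), lo, hi)  # gradient step, then project
    return x

# Illustrative example: f(x) = ||x - c||^2 (so L = 2) over the box [0, 1]^3;
# the solution is simply the projection of c onto the box.
c = np.array([1.5, -0.25, 0.5])
x_star = pgd_box(lambda x: 2 * (x - c), np.zeros(3), 0.0, 1.0, L=2.0)
```

With step-size 1/L the unconstrained step lands exactly on c here, so the method reduces to projecting c onto the box.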
Interior point methods (IPM) offer an alternative approach; see, e.g., Boyd et al. (2004); Potra and Wright (2000). To illustrate this approach, consider the goal of minimizing a linear function ⟨c, x⟩ over a polytope defined as P ≐ {x | Ax ≤ b}. The typical updates in a path-following IPM resemble:

x_{t+1} ← argmin_x ⟨c, x⟩ − μ_t ∑_i log(b_i − (Ax)_i), (IPM)

where μ_t → 0 according to some strategy as t increases. Often, these steps are only approximately solved. IPMs, while potent with appealing theoretical guarantees, usually necessitate a barrier function that encapsulates the feasible region's description. In numerous critical scenarios, a concise description of the feasible region is either unknown or proven to be non-existent. For instance, the matching polytope does not admit small linear programs, neither exact ones (Rothvoss, 2017) nor approximate ones (Braun and Pokutta, 2015a;b; Sinha, 2018). Additionally, achieving sufficient accuracy in the IPM step updates often requires second-order information, which can sometimes restrict its applicability.
Upon closely examining the two methods mentioned earlier, it is clear that both essentially transform the constrained problem (Opt) into an unconstrained one. They then either correct updates that violate constraints (as in PGD) or penalize approaching constraint violations (as in IPM). Yet another category of techniques exists, termed projection-free methods, which focus directly on constrained optimization. Unlike their counterparts, these methods sidestep the need for costly projections or penalty strategies and maintain feasibility throughout the process. The most notable variants in this category are the Frank-Wolfe (FW) methods, going back to Frank and Wolfe (1956), which will be the focus of this article and which are also known as conditional gradient (CG) methods (Levitin and Polyak, 1966).
Rather than relying on potentially expensive projection operations (see Figure 2), Frank-Wolfe methods use a so-called Linear Minimization Oracle (LMO). This subroutine only involves optimizing a linear function over the feasible region, often proving more cost-effective than traditional projections; see Combettes and Pokutta (2021) for an in-depth comparison. The nuclear norm ball, along with matrix completion, is a prime example highlighting the difference in complexity. The core updates in Frank-Wolfe methods often rely on the following fundamental update:

v_t ← argmin_{v ∈ P} ⟨∇f(x_t), v⟩ and x_{t+1} ← x_t + γ_t (v_t − x_t), (FW)

where any solution to the argmin is suitable and γ_t follows some step-size strategy, e.g., γ_t = 2/(2+t). Essentially, the LMO identifies an alternative direction for descent. Subsequently, convex combinations of points are constructed within the feasible region to maintain feasibility. Viewed through the lens of complexity theory, the Frank-Wolfe methods reduce the optimization of a convex function f over P to the repeated optimization of evolving linear functions over P. A schematic of the most basic variant of the Frank-Wolfe algorithm is shown in Figure 3.
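As an illustration of how cheap an LMO can be, the following sketch (our own illustrative code, not from the text) implements LMOs for two common atomic sets; each returns a single extreme point and costs only one pass over the coefficients.

```python
import numpy as np

def lmo_l1_ball(c, radius=1.0):
    """argmin_{||v||_1 <= radius} <c, v>: a signed, scaled standard basis vector."""
    i = int(np.argmax(np.abs(c)))
    v = np.zeros_like(c, dtype=float)
    v[i] = -radius * np.copysign(1.0, c[i])
    return v

def lmo_simplex(c):
    """argmin over the probability simplex: the basis vector of the smallest entry."""
    v = np.zeros_like(c, dtype=float)
    v[int(np.argmin(c))] = 1.0
    return v
```

Both oracles run in linear time, whereas, e.g., exact projection onto the ℓ1-ball already requires a more involved algorithm.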
For a more comprehensive exposition complementing this introductory article, the interested reader is referred to Braun et al. (2022a); the notation has been deliberately chosen to match whenever possible.

Outline
We start with some basic notions and notation in Section 2 and then present the original Frank-Wolfe algorithm along with some motivation in Section 3. We then proceed in Section 4 with establishing basic properties, such as convergence, and also provide matching lower bounds. While this is primarily an overview article, we do provide a new adaptive step-size strategy in Section 4.5, which is also available in the FrankWolfe.jl Julia package. In Section 5 we then consider applications of the Frank-Wolfe algorithm and we discuss computational aspects in Section 6.

Preliminaries
In the following, ‖·‖ will denote the 2-norm if not stated otherwise. Note however that in general other norms are possible and have been used in the context of Frank-Wolfe algorithms. Moreover, for simplicity we assume that f is differentiable, which is a standard assumption in the context of Frank-Wolfe algorithms, although non-smooth variants are known (see, e.g., Braun et al. (2022a) for details).
For our analysis we will heavily rely on the following key concepts. A differentiable function f is convex if for all x, y

f(y) − f(x) ≥ ⟨∇f(x), y − x⟩, (2.1)

it is μ-strongly convex (with μ > 0) if

f(y) − f(x) ≥ ⟨∇f(x), y − x⟩ + (μ/2) ‖y − x‖², (2.2)

and it is L-smooth if

f(y) − f(x) ≤ ⟨∇f(x), y − x⟩ + (L/2) ‖y − x‖². (2.3)

The smoothness and (strong) convexity inequalities from above allow us to obtain upper and lower bounds on the function f. Convexity and strong convexity provide linear and quadratic lower bounds, respectively, on the function f at a given point x, while smoothness provides a quadratic upper bound, as shown in Figure 4.
For completeness we note that both L-smoothness and μ-strong convexity can also be expressed without relying on function values of f, only using gradients ∇f. This is in particular useful in the context of the adaptive step-size strategy that we will discuss in Section 4.5, as it significantly improves numerical stability of the estimates:

⟨∇f(y) − ∇f(x), y − x⟩ ≤ L ‖y − x‖² (2.4)

and

⟨∇f(y) − ∇f(x), y − x⟩ ≥ μ ‖y − x‖². (2.5)

There is also the closely related and seemingly stronger property of L-Lipschitz continuous gradient ∇f; however, in the case that P is full-dimensional and f is convex, it is known to be equivalent to L-smoothness (see Nesterov (2018, Theorem 2.1.5) for the unbounded case, i.e., where P = ℝ^n, and Braun et al. (2022a, Lemma 1.7) for P being an arbitrary convex domain). In particular, for twice differentiable convex functions f, we can also capture smoothness and strong convexity in terms of the Hessian, via ‖∇²f‖ ≤ L and via the largest eigenvalue of ∇²f being upper bounded by L ≥ 0 and the smallest eigenvalue being lower bounded by μ ≥ 0, respectively; the first inequality is useful for numerical estimation of L. In the following, the domain P will be a compact convex set and we assume that we have access to a so-called Linear Minimization Oracle (LMO) for P, which upon being provided with a linear objective c returns a minimizer v = argmin_{x ∈ P} ⟨c, x⟩, as formalized in Algorithm 2. Note that v is not necessarily unique, and without loss of generality we assume that v is an extreme point of P; these extreme points are also often called atoms in the context of Frank-Wolfe algorithms. For the compact convex set P, the diameter D of P is defined as D ≐ max_{x,y ∈ P} ‖x − y‖.
Regarding the function f, we assume that we have access to gradients and function evaluations, which is formalized as a First-Order Oracle (denoted as FOO) that, given a point x ∈ P, returns the function value f(x) and the gradient ∇f(x) at x; see Algorithm 3. In the following we let x* be one of the optimal solutions to min_{x ∈ P} f(x) and define further f* ≐ f(x*). Moreover, if not stated otherwise, we consider f : P → ℝ.

The Frank-Wolfe algorithm
We will now introduce the original variant of the Frank-Wolfe (FW) algorithm due to Frank and Wolfe (1956), which is often also referred to as Conditional Gradients (Levitin and Polyak, 1966). Although many advanced variants with enhanced properties and improved convergence in specific problem configurations exist today, we will focus on the original version for clarity and to underscore the fundamental concepts. Suppose we are interested in minimizing a smooth and convex function f over some compact convex feasible set P. A natural strategy would be to follow the negative of the gradient ∇f(x) at a given point x. However, how far can we go in that direction before we hit the boundary of the feasible region? Moreover, even if we knew how far we can go, i.e., if we were to truncate steps so as not to leave the feasible region, the resulting algorithm might still not converge to an optimal solution. In fact, the arguably most well-known strategy, the projected gradient descent method, does not simply stop at the boundary but follows the negative of the gradient according to some step-size, disregarding the constraints, and then projects back onto the feasible region. This last step can be very costly: if we do not have an efficient formulation or algorithm for the projection problem, solving it can be a (relatively expensive) optimization problem in itself. In contrast, the basic idea of the Frank-Wolfe algorithm is to not follow the negative of the gradient but an alternative direction of descent, which is well-enough aligned with the negative of the gradient, ensures enough primal progress, and for which we can easily ensure feasibility by means of computing convex combinations. This is done via the aforementioned Linear Minimization Oracle, with which we optimize the negative of the gradient over the feasible region P and then take the obtained vertex to form an alternative direction of descent. The overall process is outlined
in Figure 5, and in Algorithm 4 we provide the Frank-Wolfe algorithm, which only requires access to (Opt) via the LMO (see Algorithm 2) for the feasible region and via the FOO (see Algorithm 3) for the function.

Algorithm 4: Frank-Wolfe algorithm
Input: Initial atom x_0 ∈ P, smooth and convex objective function f
Output: Iterates x_1, x_2, …

As can be seen, assuming access to the two oracles, the actual implementation is very straightforward: a simple computation of a convex combination, which ensures that we do not leave the feasible region. We made the deliberate choice in Line 3 of Algorithm 4 to use the most basic step-size strategy γ_t = 2/(2+t), the so-called open loop or agnostic step-size, as this makes the algorithm parameter-independent, i.e., not requiring any function parameters or parameter estimations. In the worst case, this step-size is not dominated by more elaborate strategies (such as, e.g., line search or short steps); however, in many important special cases there are better choices. As this is crucial, we will discuss it in more depth in Section 4.5, where we will also provide a new variant of an adaptive step-size strategy.
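The following is a minimal Python sketch of a Frank-Wolfe loop in the style of Algorithm 4 with the open loop step-size 2/(t+2); the oracle interfaces and the test problem (minimizing ‖x‖² over the probability simplex) are illustrative choices of ours.

```python
import numpy as np

def frank_wolfe(grad, lmo, x0, steps=20000, tol=1e-8):
    """Vanilla Frank-Wolfe with the open loop step-size gamma_t = 2 / (t + 2)."""
    x = np.asarray(x0, dtype=float)
    for t in range(steps):
        g = grad(x)
        v = lmo(g)                       # LMO call: vertex minimizing <g, v>
        gap = float(g @ (x - v))         # Frank-Wolfe gap, an optimality certificate
        if gap <= tol:
            break
        x = x + 2.0 / (t + 2) * (v - x)  # convex combination keeps x feasible
    return x, gap

# Minimize f(x) = ||x||^2 over the probability simplex; the optimum is uniform 1/n.
n = 5
lmo = lambda c: np.eye(n)[int(np.argmin(c))]
x, gap = frank_wolfe(lambda x: 2 * x, lmo, np.eye(n)[0])
```

Note that the iterate never needs to be projected: it is a convex combination of vertices by construction.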
Another important property is that the algorithm is affine invariant, i.e., rescaling etc. of the problem does not affect the algorithm's performance, in contrast to most other methods, including PGD (notable exceptions exist, e.g., Newton's method). This also makes the algorithm very robust (especially with the open loop step-sizes), often offering superior numerical stability.
Finally, we would like to mention that at iteration t the iterate x_t is a convex combination of at most t + 1 extreme points (or atoms) of P. This will allow us to obtain sparsity vs. approximation trade-offs later in Section 4.1.

Properties
We will now establish key properties of Algorithm 4. We start with convergence properties and will then establish matching lower bounds as well as other properties.

Convergence
We will now prove the convergence of the Frank-Wolfe algorithm (Algorithm 4). Convergence proofs for these methods typically use two key ingredients, which we will introduce in the following.

Lemma 4.1 (Frank-Wolfe gap). Let f be convex and x ∈ P. Then

f(x) − f(x*) ≤ ⟨∇f(x), x − x*⟩ ≤ max_{v ∈ P} ⟨∇f(x), x − v⟩. (FW-gap)

Proof. The first inequality follows from convexity and the second inequality follows from maximality.
The Frank-Wolfe gap plays a crucial role in the theory of Frank-Wolfe methods as it provides an easily computable optimality certificate and suboptimality gap measure. An extreme point v ∈ argmax_{v ∈ P} ⟨∇f(x), x − v⟩ is typically referred to as the Frank-Wolfe vertex for ∇f(x). The Frank-Wolfe gap also naturally appears in the first-order optimality condition for (Opt), which states that x* ∈ P is optimal for (Opt) if and only if the Frank-Wolfe gap at x* is equal to 0. Note that in the constrained case it does not necessarily hold that ∇f(x*) = 0.
The second crucial property is smoothness, as it allows us to lower bound the primal progress we can derive from a step of the Frank-Wolfe algorithm. Together with Lemma 4.1 this yields:

Theorem 4.4 (Primal convergence). Let f be L-smooth and convex and let P be a compact convex set of diameter D. Then the iterates of Algorithm 4 satisfy f(x_t) − f(x*) ≤ 2LD²/(t+2) for all t ≥ 1.

Proof. The convergence proof of the Frank-Wolfe algorithm follows an approach that is quite representative for convergence results in this area; it follows the template outlined in Braun et al. (2022a). Smoothness applied to the step x_{t+1} = x_t + γ_t (v_t − x_t), together with Lemma 4.1, gives the contraction

h_{t+1} ≤ (1 − γ_t) h_t + γ_t² LD²/2, (4.2)

where h_t ≐ f(x_t) − f(x*) denotes the primal gap. This contraction relates the primal gap at t + 1 with the primal gap at t. We conclude the proof by induction: for t = 0 we have γ_0 = 1, so by (4.2) it follows that h_1 ≤ LD²/2, and the inductive step for t ≥ 1 follows by plugging γ_t = 2/(t+2) into (4.2), which completes the proof.
The theorem above provides a convergence guarantee for the primal gap. However, it relies on knowledge of the diameter D and the smoothness constant L for estimating the number of iterations required to reach a certain target accuracy ε. We can also consider the Frank-Wolfe gap max_{v ∈ P} ⟨∇f(x_t), x_t − v⟩, which upper bounds the primal gap f(x_t) − f(x*) via Lemma 4.1. While this gap is not monotonically decreasing (similar to the primal gap in the case of the open loop step-size), it is readily available in each iteration and hence can be used as a stopping criterion, i.e., we stop the algorithm when max_{v ∈ P} ⟨∇f(x_t), x_t − v⟩ ≤ ε, not requiring a priori knowledge of D and L. For the running minimum we can establish a convergence rate similar to that in Theorem 4.4; see Jaggi (2013), and also Braun et al. (2022a, Theorem 2.2 and Remark 2.3). Another important property of the Frank-Wolfe algorithm is that it maintains convex combinations of extreme points, and in each iteration at most one new extreme point is added. This leads to a natural accuracy vs. sparsity trade-off, where sparsity broadly refers to having convex combinations with a small number of vertices. This property is very useful and has been exploited repeatedly to prove mathematical results by applying the convergence guarantee of the Frank-Wolfe algorithm; we will see such an example further below in Section 5.1.

A matching lower bound
In this section we provide a matching lower bound example due to Lan (2013); Jaggi (2013), showing that Ω(LD²/ε) LMO calls are required to achieve an accuracy of ε for an L-smooth function f and a feasible region of diameter D. This lower bound holds for any algorithm that accesses the feasible region solely through an LMO and shows that in general the convergence rate of the Frank-Wolfe algorithm in Theorem 4.4 cannot be improved. We consider

min_{x ∈ Δ_n} ‖x‖²,

i.e., we minimize the standard quadratic f(x) = ‖x‖² over the probability simplex Δ_n ≐ conv{e_1, …, e_n}, where the e_i denote the standard basis vectors in ℝ^n; here we have L = 2 and D = √2, and any other combination of values for L and D can be obtained via rescaling. As f is strongly convex, it has a unique optimal solution, which is easily seen to be x* = (1/n, …, 1/n) with optimal objective function value f(x*) = 1/n. Note that the optimal solution lies in the relative interior of Δ_n, one of the earliest cases in which improved convergence rates for Frank-Wolfe methods have been obtained (Guélat and Marcotte, 1986).
If we now run the Frank-Wolfe algorithm from any extreme point x_0 of Δ_n, then after t < n iterations we have made t LMO calls and hence have picked up at most t + 1 of the n standard basis vectors. This is the only information available to us about the feasible region, and by convexity the only feasible points the algorithm can create are convex combinations x_t of these picked-up extreme points. Thus it holds that

f(x_t) = ‖x_t‖² ≥ 1/(t+1),

as any convex combination of k basis vectors has squared norm at least 1/k. Therefore the primal gap after t iterations satisfies f(x_t) − f(x*) ≥ 1/(t+1) − 1/n, and thus with the choice n ≫ 1/ε we need Ω(1/ε) LMO calls to guarantee a primal gap of at most ε. Finally, observe that this example also provides an inherent sparsity vs. optimality trade-off: if we aim for a solution with sparsity t, then the primal gap can be as large as f(x_t) − f(x*) ≥ 1/(t+1) − 1/n.
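The counting argument above can be checked numerically; the following sketch (illustrative, with parameters of our choosing) runs vanilla Frank-Wolfe on the simplex instance and verifies the bound on ‖x_t‖² along the way.

```python
import numpy as np

n = 50                      # dimension well above the number of iterations
x = np.eye(n)[0]            # start at a vertex: f(x_0) = ||x_0||^2 = 1
for t in range(20):
    v = np.eye(n)[int(np.argmin(2 * x))]  # LMO for grad f(x) = 2x
    x = x + 2.0 / (t + 2) * (v - x)       # open loop step
    # x_{t+1} is a convex combination of at most t + 2 basis vectors,
    # hence ||x_{t+1}||^2 >= 1 / (t + 2) by the counting argument
    assert float(x @ x) >= 1.0 / (t + 2) - 1e-12
```

Each LMO call reveals at most one new basis vector, so the primal gap cannot shrink faster than 1/(t+1) − 1/n regardless of the step-size strategy.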
However, several remarks are in order to put this example into perspective. First of all, the lower bound example only holds up to the dimension n of the problem, and that is for good reason. Once we pass the dimension threshold, the lower bound is no longer instructive, and other step-size strategies might achieve linear rates for t ≥ n; in particular, if the step-size is the short step rule (see also Section 4.5) with exact smoothness constant L, we are optimal after exactly t = n − 1 iterations; see Figures 7 and 8 for computational examples. Moreover, here we considered convergence rates independent of additional problem parameters. Introducing such parameters might provide more granular convergence rates under mild assumptions, as shown, e.g., in Garber (2020). There is also a different lower bound by Wolfe (1970) (see also Braun et al. (2022a, Theorem 2.8)) that is based around the so-called zigzagging phenomenon of the Frank-Wolfe algorithm and that holds beyond the dimension threshold. However, it only holds for step-size strategies, grossly simplifying, that are at least as good as the short step strategy, and interestingly the open loop step-size strategy is not subject to this lower bound. This is no coincidence, as there are cases (Wirth et al., 2023a;c) where the open loop step-size can achieve a convergence rate of O(1/t²) for instances that satisfy the condition of the lower bound of Wolfe (1970). Finally, there is a universal lower bound (Braun et al., 2022a, Proposition 2.9) that matches the improved O(1/t²) rate for the open loop step-size. In actual computations, however, these lower bounds are rarely an issue, as instances often possess additional structure, and adaptive step-size strategies (see Section 4.5) provide excellent computational performance without requiring any knowledge of problem parameters.

Nonconvex objectives
The Frank-Wolfe algorithm can also be used to obtain locally optimal solutions if f is nonconvex but smooth. In this case, x ∈ P is locally optimal (or first-order critical) if and only if the Frank-Wolfe gap at x is 0, i.e., max_{v ∈ P} ⟨∇f(x), x − v⟩ = 0. We will present a simple argument to establish convergence to such a point; the argument can be improved as done in Lacoste-Julien (2016), which was also the first work to establish convergence for smooth nonconvex objectives. In particular, our argument will use a constant step-size, which has the advantage of being parameter-free, but we need to decide on the number of iterations T ahead of time, and the convergence guarantee only holds after the last iteration T, in contrast to so-called anytime guarantees that hold in each iteration t = 0, …, T. Nonetheless, the core of the argument is identical and more clearly isolated this way. Telescoping the smoothness-based progress bound over t = 0, …, T, we divide by T + 1 on both sides to arrive at our final estimation, and for γ = 1/√(T+1) this yields

min_{0 ≤ t ≤ T} max_{v ∈ P} ⟨∇f(x_t), x_t − v⟩ ≤ (2h_0 + LD²) / (2√(T+1)), (4.6)

where h_0 ≐ f(x_0) − f(x*), which completes the proof.

Figure 8: Same parameters as in Figure 7, however with the modified objective f(x) = ‖x − (1/n, …, 1/n)‖². Note that (1/n, …, 1/n) is the optimal solution for minimizing f(x) = ‖x‖² over the probability simplex. Dual convergence is identical while primal convergence differs.
Note that the Frank-Wolfe gap can be observed throughout the algorithm's run and can be used as a stopping criterion. Moreover, the convergence rate of O(1/√T) is optimal; see Braun et al. (2022a) for a discussion. If we have knowledge of h_0, L, and D, then the above estimation can be slightly improved while maintaining a constant step-size rule. We revisit (4.5) and optimize over γ to obtain γ = √(2h_0 / (LD² (T+1))) and hence:

min_{0 ≤ t ≤ T} max_{v ∈ P} ⟨∇f(x_t), x_t − v⟩ ≤ √(2h_0 LD²) / √(T+1). (4.7)

In the rightmost estimation the two bounds from (4.6) and (4.7) are identical, which is due to the relatively weak estimation in the very last inequality. In fact, the difference between (4.6) and (4.7) is that in the former we have the arithmetic mean of 2h_0 and LD² as bound, i.e., a bound of the form (2h_0 + LD²)/2 · 1/√(T+1), whereas in (4.7) we have the geometric mean of the two terms, i.e., √(2h_0 LD²) · 1/√(T+1); by the AM-GM inequality the latter is smaller than the former. In both cases, we can also turn the guarantees into anytime guarantees (with minor changes in constants) by using the step-size rules γ_t = 1/√(t+1) and γ_t = √(2h_0 / (LD² (t+1))), respectively, and then using a lower bound on ∑_{t=0}^{T} γ_t; telescoping then works analogously to the above with minor adjustments. Finally, note that in all estimations we provide a guarantee not only for the running minimum of the Frank-Wolfe gap but in fact for their averages, and the former is a consequence of the latter.
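A minimal sketch of the constant step-size variant for smooth nonconvex objectives follows; the test function and box feasible region are our own illustrative assumptions. The returned quantity is the smallest observed Frank-Wolfe gap, which is what the guarantee above controls.

```python
import numpy as np

def fw_nonconvex(grad, lmo, x0, T):
    """Frank-Wolfe with constant step gamma = 1/sqrt(T+1); returns the last
    iterate and the running minimum of the Frank-Wolfe gap."""
    x = np.asarray(x0, dtype=float)
    gamma = 1.0 / np.sqrt(T + 1)
    best_gap = np.inf
    for _ in range(T + 1):
        g = grad(x)
        v = lmo(g)
        best_gap = min(best_gap, float(g @ (x - v)))  # FW gap: stationarity measure
        x = x + gamma * (v - x)
    return x, best_gap

# Nonconvex example: f(x) = sum(sin(x_i)) over the box [-3, 3]^2 (L = 1).
lmo_box = lambda c: -3.0 * np.sign(c)   # box vertex minimizing <c, v>
x, best_gap = fw_nonconvex(lambda x: np.cos(x), lmo_box, np.zeros(2), T=9999)
```

For this instance h_0 = 2, L = 1, and D² = 72, so the bound above guarantees a running-minimum gap of at most (2·2 + 72)/(2·100) = 0.38 after T = 9999 iterations.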

Dual prices
Another very useful property of the Frank-Wolfe algorithm (and also of its more complex extensions) is that we readily obtain dual prices for active constraints, as long as the LMO provides dual prices. Similar to linear optimization, the dual price of a constraint captures the (local) rate of change of the objective if the constraint is relaxed. This is particularly useful in, e.g., portfolio optimization applications and energy problems, where marginal prices of constraints can guide the decisions of real-world decision makers. Here we will only consider dual prices at the optimal solution x* and we will only cover the basic case without any degeneracy. However, one can also derive dual prices for approximately optimal solutions, and we refer the interested reader to Braun and Pokutta (2021) for an in-depth discussion.
Suppose that the feasible region P is actually a polytope of the form P = {x : Ax ≤ b} with A ∈ ℝ^{m×n} and b ∈ ℝ^m. Let x ∈ P be arbitrary. By strong duality we have that v ∈ P is a minimizer of the linear program min_{y : Ay ≤ b} ⟨∇f(x), y⟩ if and only if there exist dual prices 0 ≤ λ ∈ ℝ^m so that

A^T λ = −∇f(x) and ⟨∇f(x), v⟩ = −⟨λ, b⟩, (LP-duality)

i.e., the dual prices together with the constraints certify optimality. It is well known that the second equation can be equivalently replaced by a complementary slackness condition that states ⟨λ, Av − b⟩ = 0; it can be readily seen that (LP-duality) implies ⟨λ, Av − b⟩ = 0 by rearranging, and the other direction follows similarly. Now consider a primal-dual pair (v, λ) that satisfies (LP-duality). By definition v is also a Frank-Wolfe vertex for ∇f(x), so that we immediately obtain

⟨∇f(x), x − v⟩ = −⟨λ, A(x − v)⟩ = ⟨λ, b − Ax⟩,

i.e., the Frank-Wolfe gap at x is equal to the complementarity gap for x given λ; if the latter were 0, then complementary slackness would hold, or equivalently the Frank-Wolfe gap would be 0 and (x, λ) would be an optimal primal-dual pair. This can now be directly related to min_{x : Ax ≤ b} f(x) via Slater's (strong duality) condition of optimality: x is optimal for min_{x : Ax ≤ b} f(x) if and only if x is optimal for min_{y : Ay ≤ b} ⟨∇f(x), y⟩. This implies that if x is an optimal solution to min_{x : Ax ≤ b} f(x), then (x, λ) will also satisfy (LP-duality). Hence for an optimal solution x, the dual prices λ valid for v are also valid for x.
Given that the LMO for polytopes is often realized via linear programming solvers that compute dual prices as a by-product, we readily obtain dual prices λ for the optimal solution x* via the Frank-Wolfe vertex v for ∇f(x*).

Adaptive Step-sizes
The primal progress of a Frank-Wolfe step is driven by the smoothness inequality. Suppose f is L-smooth; then using the definition of the Frank-Wolfe step, i.e., x_{t+1} = (1 − γ_t) x_t + γ_t v_t, and the smoothness inequality (2.3), we obtain

f(x_t) − f(x_{t+1}) ≥ γ_t ⟨∇f(x_t), x_t − v_t⟩ − γ_t² (L/2) ‖x_t − v_t‖², (4.8)

and optimizing the right-hand side over γ_t yields γ_t = ⟨∇f(x_t), x_t − v_t⟩ / (L ‖x_t − v_t‖²), which plugged into (4.8) gives the progress bound

f(x_t) − f(x_{t+1}) ≥ ⟨∇f(x_t), x_t − v_t⟩² / (2L ‖x_t − v_t‖²). (4.9)

Technically, we can only form convex combinations if γ_t ∈ [0, 1], so that we have to truncate:

γ_t ≔ min { ⟨∇f(x_t), x_t − v_t⟩ / (L ‖x_t − v_t‖²), 1 };

observe that γ_t ≥ 0 always holds, as the Frank-Wolfe gap satisfies ⟨∇f(x_t), x_t − v_t⟩ ≥ 0. This step-size choice is often referred to as the short step step-size and is the Frank-Wolfe equivalent of steepest descent. In the case that the truncation is active, it holds that ⟨∇f(x_t), x_t − v_t⟩ ≥ L ‖x_t − v_t‖², and together with (4.8) it follows that we are in a regime where we converge linearly with f(x_t) − f(x_{t+1}) ≥ ⟨∇f(x_t), x_t − v_t⟩/2 ≥ (f(x_t) − f(x*))/2, i.e., the primal progress is at least half of the Frank-Wolfe gap and hence at least half of the primal gap.
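In code, the truncated short step is essentially a one-liner; this is an illustrative sketch with names of our choosing.

```python
import numpy as np

def short_step(g, x, v, L):
    """Truncated short step: maximizer of the smoothness progress bound over [0, 1].
    g = grad f(x), v = LMO output for g, L = smoothness constant (or an estimate)."""
    d = x - v
    gap = float(g @ d)          # Frank-Wolfe gap, nonnegative at an LMO solution
    return min(gap / (L * float(d @ d)), 1.0)

# For f(x) = ||x||^2 on the simplex at x = e_1 with v = e_2: gap = 2,
# ||d||^2 = 2, L = 2, so the short step is 0.5, which for a quadratic
# coincides with exact line search.
gamma = short_step(np.array([2.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0]), L=2.0)
```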
The short step strategy avoids the overhead of line searches; unfortunately, however, it requires knowledge of the smoothness constant L, or at least a reasonably tight upper bound on it. This is the issue that Pedregosa et al. (2020) addressed in a very nice paper by dynamically approximating L. Rather than performing a traditional line search on the function value, the approximation of L leads only to a slightly slower convergence rate by a constant factor, has only small overhead, and does not suffer the additive resolution issue of traditional line searches, where one can only get as accurate as the line search accuracy. In particular, this adaptive strategy allows adapting to the potentially better local smoothness of f, rather than relying on a worst-case estimate; see Braun et al. (2022a) for an in-depth discussion.
In contrast, our condition only involves the candidate step-size γ_t = min { ⟨∇f(x_t), x_t − v_t⟩ / (M ‖x_t − v_t‖²), 1 }, i.e., the short step for the estimate M, and we only test (inner products with) the gradient ∇f at different points. Moreover, this test might provide additional primal progress, as we discuss below. This leads to the adaptive step-size strategy given in Algorithm 5, which is numerically very stable; however, it requires gradient computations (rather than function evaluations).
We first show that our condition (altAdaptive) implies the same primal progress as (adaptive), and then we show that (altAdaptive) holds for L if f is L-smooth. As such, all results of Pedregosa et al. (2020) readily apply to the modified variant in Algorithm 5. To demonstrate the convergence behavior of the various step-size strategies, we ran a simple test problem with results presented in Figure 9. We see that the adaptive strategy approximates the short step very well and both significantly outperform the open loop strategy.
In the following we present the slightly more involved estimation based on a new progress bound from smoothness. For completeness we also include a significantly simplified estimation based on the regular smoothness bound in Appendix A; however, there we only guarantee approximation of the smoothness constant within a factor of 2. We start with introducing another variant of the smoothness inequality. Note that all these inequalities are equivalent when considering all x, y; however, we want to apply them for a specific pair of points x, y, and then their transformations into one another might not be sharp, as demonstrated in the following remark:

Remark 4.8 (Point-wise smoothness estimations). Suppose that f is L-smooth and convex and consider two points x, y. Suppose we want to derive (2.3) from the gradient-based variant in (2.4) using only the two points x, y. Then the naive way of doing so is:

f(y) − f(x) ≤ ⟨∇f(y), y − x⟩ (using convexity)
= ⟨∇f(x), y − x⟩ + ⟨∇f(y) − ∇f(x), y − x⟩
≤ ⟨∇f(x), y − x⟩ + L ‖y − x‖². (using (2.4))

Observe that this is almost the desired inequality (2.3), except with smoothness constant 2L instead of L.
The following lemma provides a different smoothness inequality that allows for tighter estimations. It requires f to be L-smooth and convex on a potentially slightly larger domain containing P.

Lemma 4.9 (Smoothness revisited). Let f be an L-smooth and convex function on the D-neighborhood of a compact convex set P, where D is the diameter of P. Then inequality (4.10) holds for all x, y ∈ P.

Proof. We lower bound the left-hand side of (4.10) and chain the resulting inequality with a second estimation; rearranging gives the desired claim.
The proof above explicitly relies on the convexity of f via Braun et al. (2022a, Lemma 1.8). With Lemma 4.9 we can provide the following guarantee on the primal progress.
Lemma 4.10 (Primal progress from (altAdaptive)). Let f be an L-smooth and convex function on the D-neighborhood of a compact convex set P, where D is the diameter of P. Further, let x_{t+1} = (1 − γ_t) x_t + γ_t v_t with γ_t = min { ⟨∇f(x_t), x_t − v_t⟩ / (M ‖x_t − v_t‖²), 1 }, where M is an estimate of the smoothness constant; note that M only occurs in the definition of γ_t and x_{t+1}. Our starting point is Equation (4.10) with y ← x_{t+1} and x ← x_t. Now if γ_t = 1 and M ≥ L, then the above simplifies accordingly, which finishes the proof.
Before we continue, a few remarks are in order. First of all, observe that the bound

f(x_t) − f(x_{t+1}) ≥ (⟨∇f(x_t), x_t − v_t⟩² + ⟨∇f(x_{t+1}), x_t − v_t⟩²) / (2 max{M, L} ‖x_t − v_t‖²)

has an additional term compared to the standard smoothness estimation (4.9), and ⟨∇f(x_{t+1}), x_t − v_t⟩ = 0 if and only if x_{t+1} is identical to the line search solution. This is in particular the case if f is a standard quadratic, as then the line search solution is identical to the short step solution. Nonetheless, in the typical case this extra term provides additional primal progress. Taking the maximum in the denominator ensures that if M < L, then we recover the primal progress that one would have obtained with the estimate M = L. This seems counter-intuitive, as usually using estimates M < L would lead to overshooting and negative primal progress; however, here we still require that (altAdaptive) holds for M, which prevents exactly this, as can be seen from the proof. In particular, disregarding adaptivity for a moment, in the case where L is known, the choice M = L in Lemma 4.11 provides a stronger primal progress bound compared to (4.9), assuming that (altAdaptive) holds for L (which always holds as f is L-smooth; see Lemma 4.11):

f(x_t) − f(x_{t+1}) ≥ (⟨∇f(x_t), x_t − v_t⟩² + ⟨∇f(x_{t+1}), x_t − v_t⟩²) / (2L ‖x_t − v_t‖²).

This improved primal progress bound might give rise to improved convergence rates in some regimes; see also Teboulle and Vaisbourd (2023) for a similar analysis in the unconstrained case, providing optimal constants for the convergence rates of gradient descent. Moreover, the discussion above also implies that if (altAdaptive) holds, it might provide more primal progress than the original test via (adaptive) used in Pedregosa et al. (2020).
To conclude, we now show that (altAdaptive) holds for L whenever the function is L-smooth and γₜ is the corresponding short step for L. This implies that both (altAdaptive) and (adaptive) hold for L whenever f is L-smooth, with the added benefit of numerical stability and additional primal progress via (altAdaptive). In both cases, rearranging gives the desired inequality ⟨∇f(xₜ₊₁), xₜ − vₜ⟩ ≥ 0.
Finally, in case γₜ = 0 we have xₜ = xₜ₊₁ and the assertion holds trivially.
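To make the test concrete, the following Python sketch (the function name, doubling factor, and stopping budget are our own, illustrative choices, not the paper's exact procedure) searches for an estimate M in the spirit of (altAdaptive): it takes the short step for the current M and accepts as soon as ⟨∇f(xₜ₊₁), xₜ − vₜ⟩ ≥ 0, i.e., the step did not overshoot the line search minimum, doubling M otherwise.

```python
import numpy as np

def adaptive_short_step(grad, x, v, M, tau=2.0, max_doublings=60):
    # Doubling search for a smoothness estimate M, in the spirit of the
    # (altAdaptive) test: accept the short step for the current M once the
    # gradient at the new point satisfies <grad f(x_new), x - v> >= 0,
    # i.e. the step did not overshoot the line-search minimum on [x, v].
    d = x - v                                   # direction from FW vertex back to x
    dot = float(grad(x) @ d)                    # <grad f(x), x - v>, assumed > 0
    nrm2 = float(d @ d)
    for _ in range(max_doublings):
        gamma = min(dot / (M * nrm2), 1.0)      # short step for the current estimate M
        x_new = x - gamma * d                   # x_new = (1 - gamma) x + gamma v
        if float(grad(x_new) @ d) >= 0.0:       # no overshoot: accept this M
            return x_new, M
        M *= tau                                # otherwise increase the estimate
    return x_new, M                             # fall back to the last trial

# toy quadratic f(z) = ||z||^2 (true smoothness L = 2), deliberately
# started with a far-too-small estimate M = 0.1
grad = lambda z: 2.0 * z
x = np.array([1.0, 0.0])
v = np.array([-1.0, 0.0])
x_new, M = adaptive_short_step(grad, x, v, M=0.1)
```

For the toy quadratic, the search raises the deliberately too small initial estimate until the accepted step makes primal progress without overshooting; a full implementation would also shrink the estimate between iterations to remain adaptive.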
Remark 4.12 (Faster open loop convergence). The adaptive step-size strategy from above uses feedback from the function and as such is not of the open loop type. In many applications such adaptive strategies are the strategies of choice, as the required function feedback is relatively minimal while the convergence speed is superior (in most but not all cases, as mentioned in Section 4.2). For many important cases we can also obtain improved convergence with rates of higher order for open loop step-sizes by using the modified step-size rule γₜ = ℓ/(t + ℓ) with ℓ ∈ ℕ, ℓ ≥ 1; see Wirth et al. (2023a;c) for details. This is quite surprising, as we only change the shift ℓ and not the order of t in the denominator of γₜ. In fact, note that the order of t in the denominator cannot be changed significantly, as we need that ∑ₜ γₜ = ∞ and that ∑ₜ γₜ² converges for the step-size strategy to work; see Braun et al. (2022a). If the corresponding ℓ cannot be set in practice, and if it has to be an open loop strategy, γₜ = (2 + log(t + 1)) / (t + 2 + log(t + 1)) works very well; we can use t or t + 1 in the log depending on whether the first iteration is t = 0 or t = 1. This corresponds essentially to a strategy where ℓ is gradually increased, and it provides accelerated convergence rates when those exist while maintaining the same worst-case convergence rates as the basic strategy γₜ = 2/(t + 2) (Wirth et al., 2023b). For a sample computation, see Figure 10.
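As a concrete illustration, the following Python sketch (names and the toy instance are our own) runs the Frank-Wolfe algorithm with the open loop rule γₜ = ℓ/(t + ℓ) on a simple quadratic over the probability simplex, whose optimum lies in the relative interior; comparing ℓ = 2 with larger ℓ on such instances is the kind of experiment behind Figure 10.

```python
import numpy as np

def frank_wolfe_open_loop(grad, lmo, x0, T, ell):
    # Frank-Wolfe with the open loop rule gamma_t = ell / (t + ell);
    # ell = 2 recovers the classic gamma_t = 2 / (t + 2).
    x = x0.copy()
    for t in range(T):
        v = lmo(grad(x))                     # linear minimization oracle call
        gamma = ell / (t + ell)
        x = (1.0 - gamma) * x + gamma * v    # convex combination keeps x feasible
    return x

# f(x) = ||x - x_star||^2 over the probability simplex, with the optimum
# x_star in the relative interior (the regime where larger ell helps)
n = 20
x_star = np.full(n, 1.0 / n)
grad = lambda x: 2.0 * (x - x_star)

def lmo(g):                                  # simplex LMO: best coordinate vertex
    v = np.zeros(n)
    v[int(np.argmin(g))] = 1.0
    return v

x0 = np.zeros(n); x0[0] = 1.0
x2 = frank_wolfe_open_loop(grad, lmo, x0, T=200, ell=2)
x4 = frank_wolfe_open_loop(grad, lmo, x0, T=200, ell=4)
gap2 = float(np.sum((x2 - x_star) ** 2))     # primal gap, since f* = 0
gap4 = float(np.sum((x4 - x_star) ** 2))
```

On instances like this one, where the optimum lies in the interior, larger ℓ tends to converge with a higher order, matching the behavior shown in Figure 10.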

Two applications
In the following we present two applications of the Frank-Wolfe algorithm. Both examples use very simple quadratic objectives of the form f(x) = ‖x − b‖² for some point b, for the sake of exposition; for more complex examples see also Braun et al. (2022a).

The Approximate Carathéodory Problem
Our first example is the approximate Carathéodory problem. For this example, the Frank-Wolfe algorithm does not only present a practical means to solve the problem; in fact, its convergence guarantees themselves provide a proof of the theorem and optimal bounds for a wide variety of regimes. Here we confine ourselves to the 2-norm case without assuming any additional properties; many more involved cases are possible, as studied in Combettes and Pokutta (2023).
Given a compact convex set X ⊆ ℝⁿ, recall that Carathéodory's theorem states that any x* ∈ X can be written as a convex combination of no more than n + 1 extreme points of X, i.e., x* = ∑₁≤ᵢ≤ₙ₊₁ λᵢvᵢ with λᵢ ≥ 0, ∑ᵢ λᵢ = 1, and vᵢ extreme points of X for 1 ≤ i ≤ n + 1. In the context of Carathéodory's theorem, the cardinality of a point x* ∈ X refers to the minimum number of extreme points required to express x* as a convex combination of those. If x* is of low cardinality, it is often also referred to as sparse. Every specific convex combination that expresses x* provides an upper bound on the cardinality of x*. The approximate variant of Carathéodory's problem asks: given x* ∈ X, what is the required cardinality of an x ∈ X that approximates x* within an error of no more than ε > 0 (in a given norm)? Put differently, we are looking for x ∈ X with ‖x − x*‖ ≤ ε of low cardinality. The approximate Carathéodory theorem states: Theorem 5.1 (Approximate Carathéodory Theorem). Let p ≥ 2 and X be a compact convex set. For every x* ∈ X, there exists x ∈ X with cardinality no more than O(pD_p²/ε²) satisfying ‖x − x*‖_p ≤ ε, where D_p = max_{u,v∈X} ‖u − v‖_p is the p-norm diameter of X.
Note that the bounds of Theorem 5.1 are essentially tight in many cases (Mirrokni et al., 2017). In the following, we briefly discuss the case p = 2 without any additional assumptions. Given a point x* ∈ X, we consider the objective f(x) ≐ ‖x − x*‖².
Further, let ε > 0 be the approximation guarantee. Apart from establishing theoretical results by means of the Frank-Wolfe algorithm's convergence guarantees, the algorithm can also easily be used in actual computations; see, e.g., Figures 11 and 12 for an example. Observe that for the particular f(x) considered here, we can directly observe the primal gap and hence use it as a stopping criterion.
Figure 12: Same setup as in Figure 11, however the step-size strategy is the adaptive strategy from Section 4.5. As we can see, the Frank-Wolfe algorithm is quite sensitive to the step-size strategy, while more advanced variants, which by design only add new vertices when not enough progress can be made otherwise, are not.
The Frank-Wolfe approach to the approximate Carathéodory problem has also recently been used in quantum mechanics to establish new Bell inequalities and local models, as well as to improve the Grothendieck constant K_G(3) of order 3 (see Designolle et al. (2023a;b) and the references contained therein). This approach is also very useful in the context of the coreset problem, which asks for a subset of data points of a large data set that maintains approximately the same statistical properties (see Braun et al. (2022a, Section 5.2.5)).
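For concreteness, the following self-contained Python sketch (a toy instance of our own choosing) carries out the procedure for p = 2 over the probability simplex: it minimizes f(x) = ‖x − x*‖² with the short step, stops once the observable primal gap is at most ε², and counts how many distinct vertices the iterate uses.

```python
import numpy as np

rng = np.random.default_rng(0)
n, eps = 50, 0.05
x_star = rng.dirichlet(np.ones(n))      # a dense point of the probability simplex

f = lambda x: float(np.sum((x - x_star) ** 2))
grad = lambda x: 2.0 * (x - x_star)

x = np.zeros(n); x[0] = 1.0             # start at a vertex: cardinality 1
active = {0}                            # indices of vertices used so far
steps = 0
while f(x) > eps ** 2:                  # primal gap is observable here since f* = 0
    g = grad(x)
    j = int(np.argmin(g))               # simplex LMO: best coordinate vertex
    v = np.zeros(n); v[j] = 1.0
    d = x - v
    gamma = min(float(g @ d) / (2.0 * float(d @ d)), 1.0)  # short step with L = 2
    x = (1.0 - gamma) * x + gamma * v   # convex combination keeps x feasible
    active.add(j)
    steps += 1

cardinality = len(active)
```

Since each iteration adds at most one vertex, the number of steps until the stopping criterion bounds the cardinality of the computed approximation.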

Separating hyperplanes
In Section 5.1 we used the Frank-Wolfe algorithm to obtain a convex decomposition of a point x* ∈ X, i.e., we certified membership in X. We can also use the Frank-Wolfe algorithm with the same type of objective f(y) = ‖y − x‖² to obtain separating hyperplanes for points x ∉ X, i.e., to certify non-membership. This has been successfully applied in Designolle et al. (2023a;b) to certify that the correlations of certain quantum states exhibit non-locality, i.e., are truly quantum, by separating them from the local polytope, the polytope of all classical correlations. Moreover, it has also been used in Thuerck et al. (2023) to derive separating hyperplanes from enumeration oracles.
In the following we present the most naive way of computing separating hyperplanes. An improved strategy has been presented in Thuerck et al. (2023), which derives a new algorithmic characterization of non-membership that requires fewer iterations of the Frank-Wolfe algorithm than our naive strategy here. It is also interesting to note that, from a complexity-theoretic perspective, what the Frank-Wolfe algorithm does is turn an LMO for X into a separation oracle for X via optimizing the objective ‖y − x‖².
Given x ∉ X, we consider the optimization problem min_{y∈X} f(y) with f(y) ≐ ‖y − x‖². Now turning this argument around, if x ∉ X is ε-far from X, we need to run the Frank-Wolfe algorithm until the Frank-Wolfe gap satisfies max_{v∈X} ⟨∇f(xₜ), xₜ − v⟩ ≤ ε². Combining this with Theorem 4.5 we can estimate 6.75·LD²/(t + 2) ≤ ε² with L = 2. Thus we have found a separating hyperplane for x whenever t ≥ 13.5 D²/ε².
In practice, however, we usually know neither ε nor how far x is from X. Nonetheless, we can simply test in each iteration t whether x violates (validIneq), i.e., whether ∇f(xₜ) separates x from X ⇔ min_{v∈X} ⟨∇f(xₜ), v⟩ > ⟨∇f(xₜ), x⟩, and simply stop then; we are guaranteed that this takes no more than O(D²/ε²) iterations. The process is illustrated in Figure 13. A similar approach, basically combining Sections 5.1 and 5.2, can also be used to compute the intersection of two compact convex sets, or to certify their disjointness by means of a separating hyperplane (assuming LMO access to each), as shown in Braun et al. (2022b).
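The following Python sketch (a toy instance of our own choosing) implements this naive strategy over the probability simplex: it runs the Frank-Wolfe algorithm on f(y) = ‖y − x‖² for a point x outside the simplex and stops as soon as the gradient certifies the separation test above.

```python
import numpy as np

n = 10
x_out = np.full(n, 0.2)                  # entries sum to 2, hence outside the simplex X
grad = lambda y: 2.0 * (y - x_out)       # gradient of f(y) = ||y - x_out||^2

def lmo(g):                              # simplex LMO: vertex with smallest entry of g
    v = np.zeros(n)
    v[int(np.argmin(g))] = 1.0
    return v

y = np.zeros(n); y[0] = 1.0
sep = None
for t in range(1000):
    g = grad(y)
    # separation test: for linear functions the minimum over X is attained
    # at a vertex, so min_i g_i = min over all z in X of <g, z>
    if float(np.min(g)) > float(g @ x_out):
        sep = g                          # grad f(y_t) separates x_out from X
        break
    gamma = 2.0 / (t + 2.0)              # basic open loop step-size
    y = (1.0 - gamma) * y + gamma * lmo(g)
```

Here `sep` is the normal vector of a hyperplane with every point of the simplex on one side and x_out strictly on the other.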

Computational codes
For actual computations we have developed the FrankWolfe.jl Julia package, which implements many Frank-Wolfe variants and is highly customizable. Moreover, we have also developed a mixed-integer extension, Boscia.jl, which allows some of the variables to take discrete values.

Figure 1: Projection-based methods: may require (potentially expensive) projection back into X to ensure feasibility.

Figure 2: Projection-free methods: ensure feasibility by forming convex combinations only.

Figure 3: New iterates are formed by convex combination with an extreme point approximating the gradient, ensuring feasibility.

Figure 6: Access to the function f and the feasible region X is via two functions that we assume to have (oracle) access to.

Figure 7: Minimizing f(x) = ‖x‖² over the probability simplex of dimension n = 10 with an iteration limit of T = 10⁴. It can be seen that once the iteration t crosses the dimension threshold n, the short step strategy immediately recovers the optimal solution.

Figure 9: Convergence speed for a simple quadratic over a K-sparse polytope with three different step-size strategies: the basic open loop step-size γₜ = 2/(2 + t), the short step rule, which requires a smoothness estimate L (here we used the exact smoothness), and the adaptive step-size rule that dynamically approximates L. Plot is log-log so that the order of the convergence corresponds to different slopes of the trajectories.

Figure 10: Convergence speed for a simple quadratic over a K-sparse polytope for open loop strategies of the form γₜ = ℓ/(ℓ + t). We can see that (depending on the specifics of the problem) larger ℓ achieve convergence rates of a higher order. For comparison, the adaptive step-size strategy has also been included. Plot is log-log so that the order of the convergence corresponds to different slopes of the trajectories.

Figure 11: Cardinality vs. approximation error in 2-norm over a polytope of dimension n = 1000 for the Frank-Wolfe algorithm and two more advanced variants, Lazy Away-step Frank-Wolfe (Braun et al., 2019b) and Blended Pairwise Conditional Gradients (Tsuji et al., 2022). All algorithms use the short step step-size.
The Frank-Wolfe step: To minimize a convex function f over a polytope P, a linear approximation of f is constructed at xₜ as f(xₜ) + ⟨∇f(xₜ), x − xₜ⟩. The Frank-Wolfe vertex vₜ minimizes this approximation over P. The step transitions from xₜ to xₜ₊₁ by moving towards vₜ as determined by a step-size rule. Contour lines of f in red and the linear approximation in blue.
f(xₜ) − f(xₜ₊₁) ≥ γₜ⟨∇f(xₜ), xₜ − vₜ⟩ − (γₜ²L/2)‖xₜ − vₜ‖² ≥ γₜ(f(xₜ) − f(x*)) − (γₜ²L/2)‖xₜ − vₜ‖². Subtracting f(x*) on both sides, bounding ‖xₜ − vₜ‖ ≤ D, and rearranging leads to f(xₜ₊₁) − f(x*) ≤ (1 − γₜ)(f(xₜ) − f(x*)) + γₜ²LD²/2.
Assuming we have access to an LMO for X, we can now minimize the function f(x) over X via the Frank-Wolfe algorithm (Algorithm 4). In order to achieve ‖x − x*‖ ≤ ε we have to run the Frank-Wolfe algorithm until h(xₜ) = f(xₜ) − f(x*) ≤ ε², which by Theorem 4.4 takes no more than O(2D²/ε²) iterations, where D is the ℓ₂-diameter of X. Moreover, in each iteration the algorithm picks up at most one extreme point, as discussed in Section 4.1. This establishes the guarantee for the case p = 2 in Theorem 5.1. Here we applied the basic convergence guarantee from Theorem 4.4. However, for the Frank-Wolfe algorithm many more convergence guarantees are known, depending on properties of the feasible domain and the position of the point x* that we want to approximate with a sparse convex combination. These improved convergence rates immediately translate into improved approximation guarantees for the approximate Carathéodory problem, and we state some of these guarantees in Table 1.