1 Introduction

Duality in convex optimization may be interpreted as a notion of sensitivity of an optimization problem to perturbations of its data. Similar notions of sensitivity appear in numerical analysis, where the effects of numerical errors on the stability of the computed solution are of central concern. Indeed, backward-error analysis ([16], §1.5) describes the related notion that computed approximate solutions may be considered as exact solutions of perturbations of the original problem. It is natural, then, to ask if duality can help us understand the behavior of a class of numerical algorithms for convex optimization. In this paper, we describe how the level-set method [2, 5, 6] produces an incorrect solution when applied to a problem for which strong duality fails to hold. In other words, the level-set method cannot succeed if there does not exist a dual pairing that is tight. This failure of strong duality indicates that the stated optimization problem is brittle, in the sense that its value as a function of small perturbations to its data is discontinuous; this violates a vital assumption needed for the level-set method to succeed.

Consider the convex optimization problem

$$\begin{aligned} \displaystyle \mathop {\hbox {minimize}}_{x \in \mathcal {X}} f(x) \mathop {\hbox {subject to}}g (x) \le 0, \end{aligned}$$
(P)

where f and g are closed proper convex functions that map \(\mathbb {R}^n\) to the extended real line \(\mathbb {R}\cup \{\infty \}\), and \(\mathcal {X}\) is a convex set in \(\mathbb {R}^n\). Let the optimal value \(\tau ^*_{\smash p}\) of (P) be finite, which indicates that (P) is feasible. In the context of level-set methods, we may think of the constraint \(g(x)\le 0\) as representing a computational challenge. For example, there may not exist any efficient algorithm to compute the projection onto the constraint set. In many important cases, the objective function has a useful structure that makes it computationally convenient to swap the roles of the objective f with the constraint g, and instead to solve the level-set problem

$$\begin{aligned} \displaystyle \mathop {\hbox {minimize}}_{x\in \mathcal {X}} g(x) \mathop {\hbox {subject to}}f(x) \le \tau , \end{aligned}$$
(\({\mathrm{Q}}_\tau \))

where \(\tau \) is an estimate of the optimal value \(\tau ^*_{\smash p}\). The term “level set” points to the feasible set of problem (\({\mathrm{Q}}_\tau \)), which is the \(\tau \) level set of the function f.

If \(\tau \approx \tau ^*_{\smash p}\), the level-set constraint \(f(x)\le \tau \) ensures that a solution \(x_\tau \in \mathcal {X}\) of this problem causes \(f(x_\tau )\) to have a value near the optimal value \(\tau ^*_{\smash p}\). If, additionally, \(g(x_\tau )\le 0\), then \(x_\tau \) is a nearly optimal and feasible solution for (P). The trade-off for this potentially more convenient problem is that we must compute a sequence of parameters \(\tau _k\) that converges to \(\tau ^*_{\smash p}\).

1.1 Objective and constraint reversals

The technique of exchanging the roles of the objective and constraint functions has a long history. For example, the isoperimetric problem, which dates back to the second century B.C.E., seeks the maximum area that can be enclosed by a curve of fixed length [24]. The converse problem seeks the minimum-length curve that encloses a certain area. Both problems yield the same circular solution. The mean-variance model of financial portfolio optimization, pioneered by Markowitz [18], is another example. It can be phrased as either the problem of allocating assets that minimize risk (i.e., variance) subject to a specified mean return, or as the problem of maximizing the mean return subject to a specified risk. The correct parameter choice, such as \(\tau \) in the case of the level-set problem (\({\mathrm{Q}}_\tau \)), causes both problems to have the same solution.

The idea of rephrasing an optimization problem as a root-finding problem appears often in the optimization literature. The celebrated Levenberg-Marquardt algorithm [19, 20], and trust-region methods [15] more generally, use a root-finding procedure to solve a parameterized version of the optimization problem. Lemaréchal et al. [17] develop a root-finding procedure for a level-bundle method for general convex optimization. The widely used SPGL1 software package for sparse optimization [9] implements the level-set method for obtaining sparse solutions of linear least-squares and underdetermined linear systems [7, 8].

1.2 Duality of the value function root

Define the optimal-value function, or simply the value function, of (\({\mathrm{Q}}_\tau \)) by

$$\begin{aligned} v(\tau ) := \inf _{x\in \mathcal {X}} \big \{\; g(x) \ \big |\ f(x)\le \tau \; \big \}. \end{aligned}$$
(1.1)

If the constraint in (P) is active at a solution, that is, \(g(x)=0\), this definition then suggests that the optimal value \(\tau ^*_{\smash p}\) of (P) is a root of the equation

$$\begin{aligned} v(\tau )=0, \end{aligned}$$

and in particular, is the leftmost root:

$$\begin{aligned} \tau ^*_{\smash p}= \inf \left\{ \, \tau \ \big |\ v(\tau )\le 0 \,\right\} . \end{aligned}$$
(1.2)

The surprise is that this is not always true.

In fact, as we demonstrate in this paper, the failure of strong duality for (P) implies that

$$\begin{aligned} \inf \left\{ \, \tau \ \big |\ v(\tau )\le 0 \,\right\} < \tau ^*_{\smash p}. \end{aligned}$$
(1.3)

Thus, a root-finding algorithm, such as bisection or Newton’s method, implemented so as to yield the leftmost root of the equation \(v(\tau )=0\) will converge to a value of \(\tau \) that prevents (\({\mathrm{Q}}_\tau \)) from attaining a meaningful solution. This phenomenon is depicted in Fig. 1, and is manifested by the semidefinite optimization problem in Example 2.2. Moreover, the infimal value in (1.3), defined here as \(\tau ^*_d\), coincides with the optimal value of any dual pairing of (P) that arises from Fenchel-Rockafellar convex duality [22, Theorem 11.39]. These results are established by Theorems 5.1 and 5.2.

We do not assume that our readers are experts in convex duality theory, and so we present an abbreviated summary of the machinery needed to develop our main results. We also describe a generalized version of the level-set pairing between the problems (P) and (\({\mathrm{Q}}_\tau \)), and thus establish Theorem 5.2. We show in Sect. 2 how these theoretical results can be used to establish sufficient conditions for strong duality.

Fig. 1

A depiction of a value function v that exhibits the strict inequality described by (1.3); see also Example 2.2. In this example, the value function \(v(\tau )\) vanishes for all \(\tau \ge \tau ^*_d\), where \(\tau ^*_d<\tau ^*_{\smash p}\). Solutions of (1.1) for values of \(\tau <\tau ^*_{\smash p}\) are necessarily super-optimal and infeasible for (P). The difference between \(\tau ^*_d\) and \(\tau ^*_{\smash p}\) corresponds to the gap between the optimal values of (P) and its dual problem

1.3 Level-set methods

In practice, only an approximate solution of the problem (P) is required, and the level-set method can be used to obtain an approximate root that satisfies \(v(\tau )\le \epsilon \). The solution \(x\in \mathcal {X}\) of the corresponding level-set problem (\({\mathrm{Q}}_\tau \)) is super-optimal and \(\epsilon \)-infeasible:

$$\begin{aligned} f(x) \le \tau ^*_{\smash p}\qquad \hbox {and}\qquad g(x) \le \epsilon . \end{aligned}$$

Aravkin et al. [2] describe the general level-set approach, and establish a complexity analysis that asserts that \(\mathcal {O}\big (\log \epsilon ^{-1}\big )\) approximate evaluations of v are required to obtain an \(\epsilon \)-infeasible solution. These root-finding procedures are based on standard approaches, including bisection, secant, and Newton methods. The efficiency of these approaches hinges on the accuracy required of each evaluation of the value function v. Aravkin et al. also demonstrate that the required complexity can be achieved by bounding the error in each evaluation of v by a quantity proportional to \(\epsilon \).
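As a concrete sketch of the bisection variant, consider a toy instance, minimize x subject to \(x^2-1\le 0\), for which v is available in closed form and strong duality holds, so the leftmost root coincides with \(\tau ^*_{\smash p}=-1\). The code and names below are our own illustration, not taken from [2]:

```python
def v(tau):
    # Value function of the flipped problem for the toy instance
    #   minimize x subject to x**2 - 1 <= 0,
    # whose optimal value is tau_p* = -1:
    #   v(tau) = inf { x**2 - 1 : x <= tau }.
    x = min(tau, 0.0)  # the unconstrained minimizer of x**2 - 1 is x = 0
    return x**2 - 1.0

def level_set_bisection(v, lo, hi, tol=1e-8):
    """Bisection for the leftmost tau with v(tau) <= 0; needs v(lo) > 0 >= v(hi)."""
    assert v(lo) > 0 and v(hi) <= 0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if v(mid) <= 0 else (mid, hi)
    return hi

print(level_set_bisection(v, -2.0, 0.0))   # ~ -1.0, the optimal value tau_p*
```

Each halving of the bracket costs one evaluation of v, matching the \(\mathcal {O}\big (\log \epsilon ^{-1}\big )\) evaluation count quoted above.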

The formulation (P) is very general, even though the constraint \(g(x)\le 0\) represents only a single function of the full constraint set represented by \(\mathcal {X}\). There are various avenues for reformulating any combination of constraints into a single functional-constraint formulation such as (P). For instance, multiple linear constraints of the form \(Ax=b\) can be represented as a constraint on the norm of the residual, i.e., \(g(x) = \Vert Ax-b\Vert \le 0\). More generally, for any set of constraints \(c(x)\le 0\), where \(c=(c_i)\) is a vector of convex functions \(c_i\), we may set \(g(x) = \rho (\max \{0,\,c(x)\})\) for any convenient nonnegative convex function \(\rho \) that vanishes only at the origin, thus ensuring that \(g(x)\le 0\) if and only if \(c(x)\le 0\).
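As a small check of this reduction, the sketch below (our own, with \(\rho =\Vert \cdot \Vert _2\) as one convenient choice) aggregates two affine constraints into a single function g that vanishes exactly on the original feasible set:

```python
import numpy as np

# Aggregate c(x) <= 0, with c(x) = (x1 - 1, -x2), into the single function
# g(x) = || max(0, c(x)) ||_2, which is convex, nonnegative, and vanishes
# if and only if every component of c(x) is nonpositive.
def c(x):
    return np.array([x[0] - 1.0, -x[1]])

def g(x):
    return np.linalg.norm(np.maximum(0.0, c(x)))

print(g(np.array([0.5, 2.0])))        # 0.0: both constraints hold
print(g(np.array([2.0, -1.0])) > 0)   # True: both constraints violated
```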

2 Examples

We provide concrete examples that exhibit the behavior shown in (1.3). These semidefinite programs (SDPs) demonstrate that the level-set method can produce diverging iterates.

Let \(x_{ij}\) denote the (ij)th entry of the n-by-n symmetric matrix \(X=(x_{ij})\). The notation \(X\succeq 0\) denotes the requirement that X is symmetric positive semidefinite.

Example 2.1

(SDP with infinite gap) Consider the \(2\times 2\) SDP

$$\begin{aligned} \displaystyle \mathop {\hbox {minimize}}_{X\succeq 0} -2 x_{21} \mathop {\hbox {subject to}}x_{11} = 0, \end{aligned}$$
(2.1)

whose solution and optimal value are given, respectively, by

$$\begin{aligned} X^* = \begin{bmatrix}0 &{} 0 \\ 0 &{} 0\end{bmatrix} \quad \text{ and }\quad \tau ^*_{\smash p}= 0. \end{aligned}$$

The Lagrange dual is a feasibility problem:

$$\begin{aligned} \displaystyle \mathop {\hbox {maximize}}_{y\in \mathbb {R}} 0 \mathop {\hbox {subject to}}\begin{bmatrix}y &{} -1 \\ -1 &{} 0\end{bmatrix} \succeq 0. \end{aligned}$$

Because the dual problem is infeasible, we assign the dual optimal value \(\tau ^*_d= -\infty \). Thus, \(\tau ^*_d= -\infty < \tau ^*_{\smash p}=0\), and this dual pairing fails to have strong duality.

The application of the level-set method to the primal problem (2.1) can be accomplished by defining the functions

$$\begin{aligned} f(X) := -2 x_{21} \quad \text{ and }\quad g(X) := |x_{11}| , \end{aligned}$$

which together define the value function of the level-set problem (\({\mathrm{Q}}_\tau \)):

$$\begin{aligned} v(\tau ) = \inf _{X\succeq 0} \big \{\;|x_{11}| \ \big |\ {-2x_{21}}\le \tau \; \big \}. \end{aligned}$$
(2.2)

Because \(X^*\) is primal optimal, \(v(\tau ) = 0\) for all \(\tau \ge \tau ^*_{\smash p}=0\). Now consider the parametric matrix

$$\begin{aligned} X(\tau , \epsilon ) := \begin{bmatrix}\epsilon &{} -\frac{\tau }{2} \\ -\frac{\tau }{2} &{} \frac{\tau ^2}{4\epsilon } \end{bmatrix} \qquad \hbox {for all}\qquad \tau <0 \text{ and } \epsilon > 0, \end{aligned}$$

which is feasible for the level-set problem (2.2). Thus, \(v(\tau )\) is finite. The level-set objective \(|x_{11}|=\epsilon \) is driven to zero by sending \(\epsilon \downarrow 0\), and so \(v(\tau ) = 0\) for all \(\tau < 0\).

In summary, \(v(\tau ) = 0\) for all \(\tau \), and so \(v(\tau )\) has roots less than the true optimal value \(\tau ^*_{\smash p}\). Furthermore, for \(\tau < 0\), there is no primal attainment for (1.1), because \(\lim _{\epsilon \downarrow 0}X(\tau , \epsilon )\) does not exist. \(\square \)
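The claims about the family \(X(\tau ,\epsilon )\) are easy to verify numerically. A minimal numpy sketch, written with off-diagonal entries \(-\tau /2\) so that the level-set constraint \(-2x_{21}\le \tau \) holds for \(\tau <0\):

```python
import numpy as np

def X(tau, eps):
    # Parametric matrix that is feasible for the level-set problem (2.2)
    return np.array([[eps, -tau / 2.0],
                     [-tau / 2.0, tau**2 / (4.0 * eps)]])

tau = -1.0
for eps in [1e-1, 1e-3, 1e-5]:
    M = X(tau, eps)
    scale = max(1.0, np.abs(M).max())
    assert np.linalg.eigvalsh(M).min() >= -1e-8 * scale  # PSD (rank one)
    assert -2.0 * M[1, 0] <= tau + 1e-12                 # constraint -2*x21 <= tau
    print(M[0, 0])   # objective |x11| = eps, driven to 0 as eps -> 0
```

Shrinking \(\epsilon \) drives the objective \(|x_{11}|\) to zero while the \((2,2)\) entry \(\tau ^2/4\epsilon \) diverges, which is exactly the failure of attainment described above.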

Example 2.2

(SDP with finite gap) Consider the \(3\times 3\) SDP

$$\begin{aligned} \displaystyle \mathop {\hbox {minimize}}_{X\succeq 0} -2 x_{31} \mathop {\hbox {subject to}}x_{11} = 0,\ x_{22} + 2 x_{31} = 1. \end{aligned}$$
(2.3)

The positive semidefinite constraint on X, together with the constraint \(x_{11}=0\), implies that \(x_{31}\) must vanish. Thus, the solution and optimal value are given, respectively, by

$$\begin{aligned} X^* = \begin{bmatrix}0 &{} 0 &{} 0 \\ 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 0\end{bmatrix} \qquad \hbox {and}\qquad \tau ^*_{\smash p}= 0. \end{aligned}$$
(2.4)

The Lagrange dual problem is

$$\begin{aligned} \displaystyle \mathop {\hbox {maximize}}_{y\in \mathbb {R}^2} -y_2 \mathop {\hbox {subject to}}\begin{bmatrix}y_1 &{} 0 &{} y_2-1 \\ 0 &{} y_2 &{} 0 \\ y_2 - 1 &{} 0 &{} 0\end{bmatrix} \succeq 0. \end{aligned}$$

The dual constraint requires \(y_2 = 1\), and thus the optimal dual value is \(\tau ^*_d= -1 < 0 = \tau ^*_{\smash p}\).

For the application of the level-set method to primal problem (2.3), we assign

$$\begin{aligned} f(X) := -2x_{31} \quad \text{ and }\quad g(X) := x_{11}^2 + (x_{22} + 2x_{31} - 1)^2, \end{aligned}$$
(2.5)

which together define the value function

$$\begin{aligned} v(\tau ) = \inf _{X\succeq 0} \big \{\; x_{11}^2 + (x_{22} + 2x_{31} - 1)^2 \ \big |\ {-2x_{31}}\le \tau \; \big \}. \end{aligned}$$
(2.6)

As in Example 2.1, any nonnegative convex function g that vanishes on the feasible set could have been used to define v. It follows from (2.4) that \(v(\tau ) = 0\) for all \(\tau \ge 0\). Also, it can be verified that \(v(\tau ) = 0\) for all \(\tau \ge \tau ^*_d=-1\). To understand this, first define the parametric matrix

$$\begin{aligned} X_\epsilon = \begin{bmatrix} \epsilon &{} 0 &{} \frac{1}{2} \\ 0 &{} 0 &{} 0 \\ \frac{1}{2} &{} 0 &{} \frac{1}{4\epsilon } \end{bmatrix} \qquad \hbox {with}\qquad \epsilon > 0, \end{aligned}$$

which is feasible for the level-set problem (2.6), and has objective value \(g(X_\epsilon ) = \epsilon ^2\). Because \(X_\epsilon \) is feasible for all positive \(\epsilon \), the optimal value vanishes because \(g(X_\epsilon )=\epsilon ^2\rightarrow 0\) as \(\epsilon \downarrow 0\). Moreover, the set of minimizers for (2.6) is empty for all \(\tau \in (-1,0)\). Figure 1 illustrates the behavior of this value function.

Thus, we can produce a sequence of matrices \(X_\epsilon \) each of which is \(\epsilon \)-infeasible with respect to the infeasibility measure given by (2.5). However, the limit as \(\epsilon \downarrow 0\) does not produce a feasible point, and the limit does not even exist because the entry \(x_{33}\) of \(X_\epsilon \) goes to infinity.
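A minimal numpy sketch of these claims, checking that each \(X_\epsilon \) is positive semidefinite, has objective value \(-1=\tau ^*_d\), and is \(\epsilon ^2\)-infeasible:

```python
import numpy as np

def X_eps(eps):
    # epsilon-infeasible family for the level-set problem (2.6)
    return np.array([[eps, 0.0, 0.5],
                     [0.0, 0.0, 0.0],
                     [0.5, 0.0, 1.0 / (4.0 * eps)]])

for eps in [1e-1, 1e-3, 1e-5]:
    M = X_eps(eps)
    scale = max(1.0, np.abs(M).max())
    assert np.linalg.eigvalsh(M).min() >= -1e-8 * scale   # PSD
    f = -2.0 * M[2, 0]                                    # objective value: -1
    g = M[0, 0]**2 + (M[1, 1] + 2.0 * M[2, 0] - 1.0)**2   # infeasibility: eps**2
    print(f, g)   # f = -1 = tau_d*, while g = eps**2 -> 0
```

Note that the entry \(x_{33}=1/4\epsilon \) diverges as \(\epsilon \downarrow 0\), mirroring the failure of attainment described in the text.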

The level-set method fails because the leftmost root of \(v(\tau )\) does not identify the optimal primal value \(\tau ^*_{\smash p}\), but instead identifies the optimal dual value \(\tau ^*_d<\tau ^*_{\smash p}\). \(\square \)

3 Value functions

The level-set method based on (1.1) is founded on the inverse-function relationship between the pair of “flipped” value functions

$$\begin{aligned} p(u) := \inf _{x\in \mathcal {X}} \big \{\; f(x) \ \big |\ g(x)\le u \; \big \}, \end{aligned}$$
(3.1a)
$$\begin{aligned} v(\tau ) := \inf _{x\in \mathcal {X}} \big \{\; g(x) \ \big |\ f(x)\le \tau \; \big \}. \end{aligned}$$
(3.1b)

Clearly, \(\tau ^*_{\smash p}=p(0)\). Here we summarize the key aspects of the relationship between the value functions v and p, and their respective solutions. Aravkin et al. [1] provide a complete description.

Let \({{\,\mathrm{argmin}\,}}\,v(\tau )\) and \({{\,\mathrm{argmin}\,}}\,p(u)\), respectively, denote the set of solutions to the optimization problem underlying the value functions v and p. Thus, for example, if the value p(u) is finite,

$$\begin{aligned} {{\,\mathrm{argmin}\,}}\,p(u) = \left\{ \, x\in \mathcal {X} \ \big |\ g(x)\le u,\ f(x)=p(u) \,\right\} ; \end{aligned}$$

otherwise, \({{\,\mathrm{argmin}\,}}p(u)\) is empty. Clearly, \({{\,\mathrm{argmin}\,}}\,p(0)={{\,\mathrm{argmin}\,}}\,(\mathrm{P})\). Because p is defined via an infimum, \({{\,\mathrm{argmin}\,}}p(u)\) can be empty even if p is finite, in which case we say that the value p(u) is not attained.

Let \(\mathcal {S}\) be the set of parameters \(\tau \) for which the level-set constraint \(f(x)\le \tau \) of (\({\mathrm{Q}}_\tau \)) holds with equality. Formally,

$$\begin{aligned} \mathcal {S} := \left\{ \, \tau \ \big |\ f(x)=\tau \ \text{ for } \text{ some } \ x\in {{\,\mathrm{argmin}\,}}\,v(\tau ) \,\right\} . \end{aligned}$$
The following theorem establishes the relationships between the value functions p and v, and their respective solution sets. This result is reproduced from Aravkin et al. [1, Theorem 2.1].

Theorem 3.1

(Value-function inverses) For every \(\tau \in \mathcal {S}\), the following statements hold:

  1. (a)

    \((p\circ v)(\tau )=\tau \),

  2. (b)

    \({{\,\mathrm{argmin}\,}}\,v(\tau )={{\,\mathrm{argmin}\,}}\,p(v(\tau ))\).

The condition \(\tau \in \mathcal {S}\) means that the constraint of the level-set problem (\({\mathrm{Q}}_\tau \)) must be active in order for the result to hold. The following example establishes that this condition is necessary.

Example 3.1

(Failure of value-function inverse) The univariate problem

$$\begin{aligned} \displaystyle \mathop {\hbox {minimize}}_{x\in \mathbb {R}}|x| \mathop {\hbox {subject to}}|x|-1\le 0 \end{aligned}$$

has the trivial solution \(x^*=0\) with optimal value \(\tau ^*_{\smash p}=0\). Note that the constraint is inactive at the solution, which violates the hypothesis of Theorem 3.1. Now consider the value functions

$$\begin{aligned} p(u)&= \inf \,\{\ |x| \,:\, |x|-1\le u\ \}, \\ v(\tau )&= \inf \,\{\ |x| - 1 \,:\, |x|\le \tau \}, \end{aligned}$$

which correspond, respectively, to a parameterization of the original problem, and to the level-set problem. The level-set value function v evaluates to

$$\begin{aligned} v(\tau ) = {\left\{ \begin{array}{ll} -1 &{} \text{ if } \tau \ge \tau ^*_{\smash p}\\ +\infty &{} \text{ if } \tau <\tau ^*_{\smash p}. \end{array}\right. } \end{aligned}$$

Because p vanishes on its domain \([-1,\infty )\), we have \((p\circ v)(\tau )=p(-1)=0\), and so the inverse-function relationship shown by Theorem 3.1(a) fails for every \(\tau >\tau ^*_{\smash p}\).
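The closed forms above are simple enough to tabulate directly; in the small sketch below (function names are ours), \((p\circ v)(\tau )\) is identically zero on \(\tau \ge 0\) and therefore misses every \(\tau >\tau ^*_{\smash p}\):

```python
import math

def p(u):
    # p(u) = inf {|x| : |x| - 1 <= u}; feasible iff u >= -1, minimized by x = 0
    return 0.0 if u >= -1.0 else math.inf

def v(tau):
    # v(tau) = inf {|x| - 1 : |x| <= tau}; feasible iff tau >= 0, minimum at x = 0
    return -1.0 if tau >= 0.0 else math.inf

for tau in [0.5, 1.0, 2.0]:
    print(p(v(tau)))   # always 0.0, never equal to tau: Theorem 3.1(a) fails
```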

Theorem 3.1 is symmetric, and holds if the roles of f and g, and p and v, are reversed. Aravkin et al. [1] show that this result holds even if the underlying functions and sets that define (P) are not convex.

Part (b) of the theorem confirms that if \(\tau ^*_{\smash p}\in \mathcal {S}\), i.e., the constraint \(g(x)\le 0\) holds with equality at a solution of (P), then solutions of the level-set problem coincide with solutions of the original problem defined by p(0). More formally,

$$\begin{aligned} {{\,\mathrm{argmin}\,}}v(\tau ^*_{\smash p})={{\,\mathrm{argmin}\,}}\,(\mathrm{P}). \end{aligned}$$

Again consider Example 2.2, where we set \(\tau = -1/2\), the midpoint of the interval \((\tau ^*_d,\tau ^*_{\smash p})=(-1,0)\). Because the solution set \({{\,\mathrm{argmin}\,}}v(\tau )\) is empty, \(\tau \notin \mathcal {S}\). Thus,

$$\begin{aligned} (p\circ v)(\tau ) = p(0) = 0 \ne \tau , \end{aligned}$$

and the level-set method fails.

In order to establish an inverse-function-like relationship between the value functions p and v that always holds for convex problems, we provide a modified definition of the epigraphs of p and v.

Definition 3.1

(Value function epigraph) The value function epigraph of the optimal value function p in (3.1a) is defined by

$$\begin{aligned} {{\,\mathrm{vfepi}\,}}p := \left\{ \, (u, \tau ) \ \big |\ f(x)\le \tau \ \text{ and } \ g(x)\le u \ \text{ for } \text{ some } \ x\in \mathcal {X} \,\right\} . \end{aligned}$$

This definition is similar to the regular definition for the epigraph of a function, given by

$$\begin{aligned} {{\,\mathrm{epi}\,}}p := \left\{ \, (u, \tau ) \ \big |\ p(u)\le \tau \,\right\} , \end{aligned}$$

except that if \(\tau = p(u)\) but \({{\,\mathrm{argmin}\,}}p(u)\) is empty, then \((u, \tau ) \notin {{\,\mathrm{vfepi}\,}}p\).

The result below follows immediately from the definition of the value function epigraph. It establishes that (1.2) holds if (\({\mathrm{Q}}_\tau \)) has a solution that attains its optimal value (as opposed to relying on the infimal operator to achieve that value).

Proposition 3.1

For the value functions p and v,

$$\begin{aligned} (u, \tau ) \in {{\,\mathrm{vfepi}\,}}p \iff (\tau , u) \in {{\,\mathrm{vfepi}\,}}v. \end{aligned}$$

4 Duality in convex optimization

Duality in convex optimization can be understood as describing the behavior of an optimization problem under perturbation to its data. From this point of view, dual variables describe the sensitivity of the problem’s optimal value to that perturbation. The description that we give here summarizes a well-developed theory fully described by Rockafellar and Wets [22]. We adopt a geometric viewpoint that we have found helpful for understanding the connection between duality and the level-set method, and lays out the objects needed for the analysis in subsequent sections.

For this section only, consider the generic convex optimization problem

$$\begin{aligned} \displaystyle \mathop {\hbox {minimize}}_{x\in \mathcal {X}}h(x), \end{aligned}$$

where \(h:\mathbb {R}^n\rightarrow \mathbb {R}\cup \{\infty \}\) is an arbitrary closed proper convex function. The perturbation approach is predicated on fixing a certain convex function \(F(x,u):\mathbb {R}^n\times \mathbb {R}^m\rightarrow \mathbb {R}\cup \{\infty \}\) with the property that

$$\begin{aligned} F(x,0) = h(x) \quad \forall x. \end{aligned}$$

Thus, the particular choice of F determines the perturbation function

$$\begin{aligned} p(u) := \inf _{x} F(x,u), \end{aligned}$$

which describes how the optimal value of h changes under a perturbation u. We seek the behavior of the perturbation function about the origin, at which the value of p coincides with the optimal value \(\tau ^*_{\smash p}\), i.e., \(p(0)=\tau ^*_{\smash p}\).

The convex conjugate of the function p is

$$\begin{aligned} p^{\star }(\mu ) := \sup _{u}\, \left\{ \, \langle \mu ,u\rangle - p(u) \,\right\} . \end{aligned}$$

Each \(\mu \in {{\,\mathrm{dom}\,}}p^{\star }\) defines the affine function \(u\mapsto \langle \mu ,u\rangle - p^{\star }(\mu )\) that minorizes p and supports the epigraph of p; see Fig. 2. The biconjugate \(p^{\star \star }\) provides a convex and closed function that is a global lower envelope for p, i.e., \(p^{\star \star }(u)\le p(u)\) for all u. This last inequality is tight at a point u, i.e., \(p^{\star \star }(u)=p(u)\), if and only if p is lower-semicontinuous at u [21, Theorem 7.1]. Because of the connection between lower semicontinuity and the closure of the epigraph, we say that p is closed at such points u.
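A grid-based computation can make these conjugacy operations concrete. The perturbation function below is our own toy example, in the spirit of a gap between p and its biconjugate: it jumps at the origin, so it is not closed there, and its discrete biconjugate at the origin recovers the closed value 0 rather than \(p(0)=1\):

```python
import numpy as np

# Toy perturbation function with a jump at the origin:
# p(0) = 1 but p(u) = 0 for u > 0 (and +inf for u < 0, omitted from the grid),
# so p is not closed at 0. This is a discrete sketch, not part of the paper.
U = np.concatenate([[0.0], np.geomspace(1e-12, 2.0, 2000)])  # sampled dom p
P = np.where(U == 0.0, 1.0, 0.0)

def conj(vals, grid, duals):
    # discrete convex conjugate: f*(mu) = sup_u { mu * u - f(u) }
    return np.array([np.max(m * grid - vals) for m in duals])

MU = np.linspace(-50.0, 0.0, 501)            # p*(mu) = +inf for mu > 0
Pstar = conj(P, U, MU)
tau_d = conj(Pstar, MU, np.array([0.0]))[0]  # p**(0) = sup_mu { -p*(mu) }
print(tau_d)   # ~0: the biconjugate closes the gap, p**(0) = 0 < 1 = p(0)
```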

As described by Rockafellar and Wets [22, Lemma 11.38], the function p and its biconjugate \(p^{\star \star }\) define dual pairs of optimization problems given by

$$\begin{aligned} p(0) = \inf _x\, F(x,0) \quad \text{ and }\quad p^{\star \star }(0) = \sup _y\, {-F^{\star }(0,y)}, \end{aligned}$$
(4.1)

which define the primal and dual optimal values

$$\begin{aligned} \tau ^*_d:=p^{\star \star }(0) \le p(0) =: \tau ^*_{\smash p}. \end{aligned}$$
(4.2)

Strong duality holds when \(\tau ^*_{\smash p}=\tau ^*_d\), which occurs exactly when p is closed at the origin. As we show in Sect. 5, the optimal dual value \(\tau ^*_d\) coincides with the infimal value defined in (1.3).

Fig. 2

The relationship between the primal perturbation value p(u) and a single instance (with slope \(\mu \) and intercept \(q_\mu \)) of the uncountably many minorizing affine functions that define the dual problem. The panel on the left depicts a non-optimal supporting hyperplane that crosses the vertical axis at \(-p^{\star }(\mu )<\tau ^*_{\smash p}\); the panel on the right depicts an optimal supporting hyperplane, with slope \(\mu \) and intercept \(-p^{\star }(\mu )=\tau ^*_{\smash p}\)

The following well-known result establishes a constraint qualification for (P) that ensures strong duality holds. See Rockafellar and Wets [22, Theorem 11.39] for a more comprehensive version of this result.

Theorem 4.1

(Weak and strong duality) Consider the primal-dual pair (4.1).

  1. a.

    [Weak duality] The inequality \(\tau ^*_{\smash p}\ge \tau ^*_d\) always holds.

  2. b.

    [Strong duality] If \(0\in {{\,\mathrm{int}\,}}{{\,\mathrm{dom}\,}}p\), then \(\tau ^*_{\smash p}=\tau ^*_d\).

To establish the connection between the pair of value functions (3.1) for (P) and this duality framework, we observe that

$$\begin{aligned} p(u) = \inf _{x}\, F(x,u), \end{aligned}$$

where

$$\begin{aligned} F(x,u) :=f(x) + \delta _\mathcal {X}(x) + \delta _{{{\,\mathrm{epi}\,}}g}(x, u), \end{aligned}$$
(4.3)

and the indicator function \(\delta _{\mathcal {C}}\) vanishes on the set \(\mathcal {C}\) and is \(+\infty \) otherwise. The dual problem \(p^{\star \star }(0)\) defined in (4.1) is derived as follows:

$$\begin{aligned} p^{\star \star }(0) = \sup _{y}\, {-F^{\star }(0,y)} = \sup _{y\ge 0}\ \inf _{x\in \mathcal {X}} \left\{ \, f(x) + y\,g(x) \,\right\} . \end{aligned}$$
(4.4)

We recognize this last expression as the familiar Lagrangian-dual for the optimization problem (P).

5 Duality of the value function root

We now provide a formal statement and proof of our main result concerning problem (P) and the inequality shown in (1.3). In the latter part of this section we also provide a straightforward extension of the main result that allows for multiple constraints, rather than the single constraint function specified by (P).

Note that the theorem below does not address conditions under which \(v(\tau ^*_{\smash p})\le 0\), which is true if and only if the solution set \({{\,\mathrm{argmin}\,}}\,(P)\) is not empty. In particular, any \(x^*\in {{\,\mathrm{argmin}\,}}\,(P)\) is a solution of (\({\mathrm{Q}}_\tau \)) for \(\tau =\tau ^*_{\smash p}\), and hence \(v(\tau ^*_{\smash p})\le 0\). However, if \({{\,\mathrm{argmin}\,}}\,(P)\) is empty, then there is no solution to (\({\mathrm{Q}}_\tau \)) and hence \(v(\tau ^*_{\smash p})=+\infty \).

Theorem 5.1

(Duality of the value function root) For problem (P) and the pair of value functions v and p defined by (3.1),

$$\begin{aligned} \tau ^*_d= \inf \left\{ \, \tau \ \big |\ v(\tau )\le 0 \,\right\} , \end{aligned}$$

where \(\tau ^*_d:=p^{\star \star }(0)\) is the optimal value of the Lagrange-dual problem (4.4).

Before giving the proof, below, we provide an intuitive argument for Theorem 5.1. Suppose that strong duality holds for (P). Hence, \(\tau ^*_{\smash p}=p(0) = p^{**}(0)=\tau ^*_d\), which means that the perturbation function p is closed at the origin. We sketch in the top row of Fig. 3 example pairs of value functions p and v that exhibit this behavior. To understand this picture, first consider the value \(\tau _1 < \tau ^*_{\smash p}\), shown in the top row. It is evident that \(v(\tau _1)\) is positive, because otherwise there must exist a vector \(x\in \mathcal {X}\) that is super-optimal and feasible, i.e.,

$$\begin{aligned} f(x)\le \tau _1<\tau ^*_{\smash p}\quad \text{ and }\quad g(x)\le 0, \end{aligned}$$

which contradicts the definition of \(\tau ^*_{\smash p}\). It then follows that the value \(u:=v(\tau _1)\) yields \(p(u) = \tau _1\). For \(\tau _2 > \tau ^*_{\smash p}\), any solution to the original problem would be feasible (therefore requiring no perturbation u) and would achieve objective value \(p(0) = \tau ^*_{\smash p}< \tau _2\). Furthermore, notice that as \(\tau _1 \rightarrow \tau ^*_{\smash p}\), the value \(p(u_1)\) varies continuously in \(\tau _1\), where \(u_1\) is the smallest root of \(p(u) = \tau _1\).

Fig. 3

The perturbation function p(u) and corresponding level-set value function \(v(\tau )\) for problems with strong duality (top row) and no strong duality (bottom row). Panel (c) illustrates the case when strong duality fails and the graph of p is open at the origin, which implies that \(\tau ^*_d<\tau ^*_{\smash p}\equiv p(0)\)

Next consider the second row of Fig. 3. In this case, strong duality fails, which means that

$$\begin{aligned} \lim _{u \downarrow 0} p(u) = \tau ^*_d\ne p(0). \end{aligned}$$

With \(\tau = \tau _1\), we have \(v(\tau _1) > 0\). With \(\tau = \tau _3 > \tau ^*_{\smash p}\), we have \(v(\tau ) = 0\) because any solution to (P) causes (\({\mathrm{Q}}_\tau \)) to have zero value. But for \(\tau ^*_d< \tau _2 < \tau ^*_{\smash p}\), we see that \(v(\tau _2) = 0\), because for any positive \(\epsilon \) there exists positive \(u < \epsilon \) such that \(p(u) \le \tau _2\). Even though there is no feasible point that achieves a superoptimal value \(f(x) \le \tau _2 < \tau ^*_{\smash p}\), for any positive \(\epsilon \) there exists an \(\epsilon \)-infeasible point that achieves that objective value.

Proof of Theorem 5.1

We first prove the second result that \(v(\tau )\le 0\) if \(\tau >\tau ^*_d\). Suppose that strong duality holds, i.e., \(\tau ^*_{\smash p}=\tau ^*_d\). Then the required result is immediate because if \(\tau ^*_{\smash p}\) is the optimal value, then for any \(\tau > \tau ^*_{\smash p}\), there exists feasible x such that \(f(x)\le \tau \).

Suppose that strong duality does not hold, i.e., \(\tau ^*_{\smash p}> \tau ^*_d\). If \(\tau >\tau ^*_{\smash p}\), it is immediate that \(v(\tau )\le 0\). Assume, then, that \(\tau \in (\tau ^*_d,\tau ^*_{\smash p}]\). Note that the two conditions \(g(x) \le u\) and \(f(x) \le \tau \) are equivalent to the single condition \(F(x,u)\le \tau \), where F is defined by (4.3). We will therefore prove that

$$\begin{aligned} \forall \epsilon > 0,\ \exists x\in \mathcal {X} \text{ such } \text{ that }\ F(x,u) \le \tau ,\ u\le \epsilon , \end{aligned}$$
(5.1)

which is equivalent to the required condition \(v(\tau ) \le 0\). It follows from the convexity of \({{\,\mathrm{epi}\,}}p\) and from (4.2) that \((0,\tau ^*_d)\in {{\,\mathrm{epi}\,}}p^{\star \star } = {{\,\mathrm{cl}\,}}{{\,\mathrm{epi}\,}}p\). Thus,

$$\begin{aligned} \forall \eta > 0,\ \exists (u,\omega ) \in {{\,\mathrm{epi}\,}}p \text{ such } \text{ that }\ \Vert (u,\omega ) - (0, \tau ^*_d)\Vert < \eta . \end{aligned}$$

Note that

$$\begin{aligned} \begin{aligned} \lim _{\epsilon \downarrow 0}\inf \left\{ p(u) \,\big |\, |u|\le \epsilon \right\}&\overset{\mathrm{(i)}}{=} \lim _{\epsilon \downarrow 0}\inf \left\{ p^{\star \star }(u)\,\big |\,|u|\le \epsilon \right\} \\&\overset{\mathrm{(ii)}}{=} p^{\star \star }(0) \overset{\mathrm{(iii)}}{=}\ \tau ^*_d, \end{aligned} \end{aligned}$$
(5.2)

where equality (i) follows from the fact that \(p(u) = p^{\star \star }(u)\) for all \(u\in {{\,\mathrm{dom}\,}}p\), equality (ii) follows from the closure of \(p^{\star \star }\), and (iii) follows from (4.2). This implies that

$$\begin{aligned} \forall \eta > 0,\ \exists \, u \in {{\,\mathrm{dom}\,}}p \text{ such } \text{ that }\ \Vert (u,p(u)) - (0, \tau ^*_d)\Vert < \eta . \end{aligned}$$

For any fixed positive \(\epsilon \) define \(\mu := \min \left\{ \epsilon ,\ \tfrac{1}{2}(\tau -\tau ^*_d)\right\} \). Choose \({\hat{u}}\in {{\,\mathrm{dom}\,}}p\) such that \(\Vert ({\hat{u}},p({\hat{u}})) - (0, \tau ^*_d)\Vert < \mu \), and so

$$\begin{aligned} \epsilon \ge \mu > \Vert ({\hat{u}},p({\hat{u}})) - (0, \tau ^*_d)\Vert \ge \max \left\{ \,\Vert {\hat{u}}\Vert ,\, |p({\hat{u}})-\tau ^*_d|\,\right\} . \end{aligned}$$

Thus,

$$\begin{aligned} p({\hat{u}}) < \tau ^*_d+ \mu . \end{aligned}$$
(5.3)

Moreover, it follows from the definition of \(p({\hat{u}})\), cf. (3.1a), that

$$\begin{aligned} \forall \nu > 0,\ \exists x\in \mathcal {X} \text{ such } \text{ that } F(x,{\hat{u}}) \le p({\hat{u}}) + \nu . \end{aligned}$$

Choose \(\nu = \mu \), and so there exists \({\hat{x}}\) such that \(F({\hat{x}}, {\hat{u}}) \le p({\hat{u}}) + \mu \). Together with (5.3), we have

$$\begin{aligned} f({\hat{x}}) \le p({\hat{u}}) + \mu < \tau ^*_d+ 2\mu \le \tau . \end{aligned}$$

Therefore, for each \(\epsilon > 0\), we can find a pair \(({\hat{x}},{\hat{u}})\) that satisfies (5.1), which completes the proof of the second result.

Next we prove the first result, which is equivalent to proving that \(v(\tau )>0\) if \(\tau <\tau ^*_d\) because \(v(\tau )\) is convex. Observe that \(\tau < \tau ^*_d\equiv p^{**}(0)\) is equivalent to \((0,\tau ) \notin {{\,\mathrm{cl}\,}}{{\,\mathrm{epi}\,}}p\), which implies that

$$\begin{aligned} v(\tau ) = \inf \left\{ \, u \ \big |\ (u,\tau )\in {{\,\mathrm{epi}\,}}p \,\right\} > 0, \end{aligned}$$
(5.4)

which completes the proof. \(\square \)

The proof of Theorem 5.1 reveals that the behavior exhibited by Examples 2.1 and 2.2 stems from the failure of strong duality with respect to perturbations in the linear constraints.

5.1 General perturbation framework

We now generalize Theorem 5.1 to include arbitrary perturbations to (P), and thus more general notions of duality. In this case we are interested in the value function pair

$$\begin{aligned} p(u) := \inf _{x}\, F(x,u), \end{aligned}$$
(5.5a)
$$\begin{aligned} v(\tau ) := \inf _{x,u} \left\{ \, \Vert u\Vert \ \big |\ F(x,u)\le \tau \,\right\} , \end{aligned}$$
(5.5b)

where \(F:\mathbb {R}^n\times \mathbb {R}^m\rightarrow \mathbb {R}\cup \{\infty \}\) is an arbitrary convex function with the property that \(F(x,0)=f(x)\) (cf. Sect. 4), and \(\Vert \cdot \Vert \) is any norm. Because p is parameterized by an m-vector u, and not just a scalar as previously considered, we measure the perturbation by its norm. Therefore, \(v(\tau )\) is necessarily non-negative, and we are interested in the leftmost root of the equation \(v(\tau ) = 0\), rather than an inequality as in Theorem 5.1.

Example 5.1

(Multiple constraints) Consider the convex optimization problem

$$\begin{aligned} \displaystyle \mathop {\hbox {minimize}}_{x} f(x) \mathop {\hbox {subject to}}c(x) \le 0,\ Ax=b, \end{aligned}$$
(5.6)

where \(c=(c_i)_{i=1}^m\) is a vector-valued convex function and A is a matrix. Introduce perturbations \(u_1\) and \(u_2\) to the right-hand sides of the constraints, which gives rise to Lagrange duality, and corresponds to the perturbation function

$$\begin{aligned} p(u_1,u_2) = \inf _{x} \left\{ \, f(x) \ \big |\ c(x)\le u_1,\ Ax-b=u_2 \,\right\} . \end{aligned}$$

One valid choice for the value function that corresponds to swapping both constraints with the objective to (5.6) can be expressed as

$$\begin{aligned} v(\tau ) = \inf _{x,u_1,u_2} \left\{ \tfrac{1}{2}\Vert [u_1]_+\Vert _2^2 + \tfrac{1}{2} \Vert u_2\Vert _2^2 \ \big |\ \begin{aligned} f(x)&\le \tau \\ c(x)&\le u_1\\ Ax -b&= u_2 \end{aligned} \right\} , \end{aligned}$$

where the operator \([u_1]_+=\max \{0, u_1\}\) is taken component-wise on the elements of \(u_1\). This particular formulation of the value function makes explicit the connection to the perturbation function. We may thus interpret the value function as giving the minimal perturbation that corresponds to an objective value less than or equal to \(\tau \). \(\square \)
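For a one-dimensional instance of (5.6) with no linear block (our own illustration: minimize x subject to \(1-x\le 0\), so \(\tau ^*_{\smash p}=1\)), this value function can be evaluated by brute force over a grid:

```python
import numpy as np

# Toy instance of (5.6): minimize x subject to c(x) = 1 - x <= 0, no Ax = b
# block, so tau_p* = 1. The value function of Example 5.1 then reduces to
# v(tau) = inf over x <= tau of (1/2) * max(0, 1 - x)**2
#        = (1/2) * max(0, 1 - tau)**2.
def v(tau, grid=np.linspace(-5.0, 5.0, 100001)):
    x = grid[grid <= tau]             # points satisfying f(x) = x <= tau
    u1 = np.maximum(0.0, 1.0 - x)     # smallest perturbation with c(x) <= u1
    return 0.5 * (u1**2).min()

print(v(0.0))   # ~0.5: reaching objective value 0 needs a unit perturbation
print(v(1.0))   # ~0: no perturbation is needed once tau reaches tau_p* = 1
```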

Theorem 5.2

For the functions p and v defined by (5.5),

$$\begin{aligned} \tau ^*_d= \inf \left\{ \, \tau \ \big |\ v(\tau )=0 \,\right\} , \qquad \hbox {where}\qquad \tau ^*_d:=p^{\star \star }(0). \end{aligned}$$

The proof is almost identical to that of Theorem 5.1, except that we treat u as a vector, and replace u by \(\Vert u\Vert \) in (5.1), (5.2), and (5.4).

Theorems 5.1 and 5.2 imply that \(v(\tau ) \le 0\) for all values of \(\tau \) larger than the optimal dual value. (The inequality \(\tau > \tau ^*_d\) is strict, as \(v(\tau ^*_d)\) may be infinite.) Thus if strong duality does not hold, then the leftmost root of \(v\) identifies the optimal dual value \(\tau ^*_d\) rather than the optimal value of the original problem being solved. This means that the level-set method may provide a point arbitrarily close to feasibility, and yet that point remains at least a fixed distance away from the true solution, no matter how close to feasibility it is.

Example 5.2

(Basis pursuit denoising [13, 14]) The level-set method implemented in the SPGL1 software package solves the 1-norm regularized least-squares problem

$$\begin{aligned} \displaystyle \mathop {\hbox {minimize}}_{x} \Vert x\Vert _1 \mathop {\hbox {subject to}}\Vert Ax-b\Vert _2 \le u \end{aligned}$$

for any value of \(u\ge 0\), assuming that the problem remains feasible. (The case \(u=0\) is important, as it accommodates the situation in which we seek a sparse solution to the under-determined linear system \(Ax=b\).) The algorithm approximately solves a sequence of flipped problems

$$\begin{aligned} \displaystyle \mathop {\hbox {minimize}}_{x} \Vert Ax-b\Vert _2 \mathop {\hbox {subject to}}\Vert x\Vert _1 \le \tau _k, \end{aligned}$$

where \(\tau _k\) is chosen so that the corresponding solution \(x_k\) satisfies \(\Vert Ax_k-b\Vert _2\approx u\). Strong duality holds because the domains of the nonlinear functions (i.e., the 1- and 2-norms) cover the whole space. Thus, the level-set method succeeds on this problem. \(\square \)
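The root-finding strategy described above can be illustrated on a scalar caricature of basis pursuit denoising. The sketch below is a deliberately simplified toy, not the SPGL1 implementation: with scalar data a and b the flipped subproblem has a closed-form optimal value, and plain bisection stands in for the inexact Newton iteration that SPGL1 employs.

```python
# Scalar caricature of the level-set iteration (illustrative only,
# not the SPGL1 algorithm).  For scalars a, b the flipped subproblem
#     minimize |a*x - b|  subject to  |x| <= tau
# has the closed-form value phi(tau) = max(0, |b| - |a|*tau); the
# level-set method seeks tau with phi(tau) = u.

def phi(tau, a, b):
    """Optimal value of the flipped (LASSO-type) subproblem."""
    return max(0.0, abs(b) - abs(a) * tau)

def level_set(a, b, u, tol=1e-10):
    """Find the smallest tau >= 0 with phi(tau) <= u by bisection."""
    lo, hi = 0.0, abs(b) / abs(a)      # phi(hi) = 0 <= u
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if phi(mid, a, b) <= u:
            hi = mid
        else:
            lo = mid
    return hi

# With a = 2, b = 6, u = 1 the root is tau = (|b| - u)/|a| = 2.5,
# the optimal 1-norm of the corresponding BPDN solution.
print(round(level_set(2.0, 6.0, 1.0), 6))
```

Because strong duality holds for this pairing, the computed root is the correct optimal value of the flipped problem's parameter, mirroring the success of SPGL1 noted above.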

6 Sufficient conditions for strong duality

The condition that \(0\in {{\,\mathrm{dom}\,}}p\) may be interpreted as Slater’s constraint qualification [10, §3.2], which in the context of (P) requires that there exist a point \({\hat{x}}\) in the domain of f for which \(g({\hat{x}})<0\). This condition is sufficient to establish strong duality. Here we show how Theorem 5.1 can be used as a device to characterize an alternative set of sufficient conditions that continue to ensure strong duality even for problems that do not satisfy Slater’s condition.

Proposition 6.1

Problem (P) satisfies strong duality if either one of the following conditions holds:

  1. (a)

    the objective f is coercive, i.e., \(f(x) \rightarrow \infty \) as \(\Vert x\Vert \rightarrow \infty \);

  2. (b)

    \(\mathcal {X}\) is compact.

Proof

Consider the level-set problem (\({\mathrm{Q}}_\tau \)) and its corresponding optimal-value function \(v(\tau )\) given by (1.1). In either case (a) or (b), the feasible set

$$\begin{aligned} \{\, x \in \mathcal {X} \mid f(x) \le \tau \,\} \end{aligned}$$

of (1.1) is compact because either \(\mathcal {X}\) is compact or the level sets of f are compact. Therefore, (\({\mathrm{Q}}_\tau \)) attains its minimum for every \(\tau \) for which it is feasible.

Suppose strong duality does not hold. Theorem 5.1 then confirms that there exists a parameter \(\tau \in (\tau ^*_d,\tau ^*_{\smash p})\) such that \(v(\tau ) = 0\). However, because (\({\mathrm{Q}}_\tau \)) always attains its minimum, there must exist a point \({\hat{x}} \in \mathcal {X}\) such that \(f({\hat{x}}) \le \tau < \tau ^*_{\smash p}\) and \(g({\hat{x}}) \le 0\), which contradicts the fact that \(\tau ^*_{\smash p}\) is the optimal value of (P). We have therefore established that \(\tau ^*_d= \tau ^*_{\smash p}\) and hence that (P) satisfies strong duality. \(\square \)
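Proposition 6.1(b) can be checked numerically on a toy instance of our own construction in which Slater's condition fails: minimize x subject to \(x^2\le 0\) over the compact set \(\mathcal {X}=[-1,1]\). The only feasible point is \(x=0\), so the primal value is 0, yet no point satisfies \(x^2<0\). The sketch below evaluates the Lagrangian dual function on a grid and shows its value creeping up to the primal value, so strong duality holds even though the dual supremum is not attained.

```python
# Toy illustration of Proposition 6.1(b) (our own instance):
#   minimize  x   subject to  g(x) = x**2 <= 0,  x in X = [-1, 1].
# Slater's condition fails, but X is compact, so strong duality holds.
# The dual function is
#     q(lam) = min_{x in [-1,1]}  x + lam * x**2 ,
# and sup_lam q(lam) = 0 = primal value (the sup is not attained).

def q(lam, n=20001):
    """Dual function evaluated by a grid search over X = [-1, 1]."""
    return min(-1.0 + 2.0 * i / (n - 1)
               + lam * (-1.0 + 2.0 * i / (n - 1)) ** 2
               for i in range(n))

# q(lam) = -1/(4*lam) for lam >= 1/2: the dual value approaches 0.
for lam in (1.0, 10.0, 100.0, 1000.0):
    print(lam, q(lam))
```

The printed values increase toward 0 as \(\lambda \) grows, which is exactly the zero duality gap that the proposition guarantees.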

We can use Proposition 6.1 to establish that certain optimization problems that do not satisfy a Slater constraint qualification still enjoy strong duality. As an example, consider the conic optimization problem

$$\begin{aligned} \displaystyle \mathop {\hbox {minimize}}_{x} \langle c,x\rangle \mathop {\hbox {subject to}}\mathcal {A}x=b,\ x\in \mathcal {K}, \end{aligned}$$
(6.1)

where \(\mathcal {A}:\mathcal {E}_1\rightarrow \mathcal {E}_2\) is a linear map between Euclidean spaces \(\mathcal {E}_1\) and \(\mathcal {E}_2\), and \(\mathcal {K}\subseteq \mathcal {E}_1\) is a closed proper convex cone. This wide class of problems includes linear programming (LP), second-order cone programming (SOCP), and semidefinite programming (SDP), and has many important scientific and engineering applications [3]. If c is in the interior of the dual cone \(\mathcal {K}^*\), then \(\langle c,x\rangle >0\) for all nonzero \(x\in \mathcal {K}\). Equivalently, the function \(f(x):=\langle c,x\rangle + \delta _\mathcal {K}(x)\) is coercive. Thus, (6.1) is equivalent to the problem

$$\begin{aligned} \displaystyle \mathop {\hbox {minimize}}_{x} f(x) \mathop {\hbox {subject to}}\mathcal {A}x=b, \end{aligned}$$

which has a coercive objective. Thus, part (a) of Proposition 6.1 applies, and strong duality holds.

A concrete application of this model problem is the SDP relaxation of the celebrated phase-retrieval problem [12, 23]

$$\begin{aligned} \displaystyle \mathop {\hbox {minimize}}_{X} {{\,\mathrm{tr}\,}}(X) \mathop {\hbox {subject to}}\mathcal {A}X=b,\ X\succeq 0, \end{aligned}$$
(6.2)

where \(\mathcal {K}\) is now the cone of Hermitian positive semidefinite matrices (i.e., matrices whose eigenvalues are all real and nonnegative) and \(C=I\) is the identity matrix, so that \(\langle C,X\rangle = {{\,\mathrm{tr}\,}}(X)\). In that setting, Candès et al. [12] prove that with high probability, the feasible set of (6.2) is a rank-1 singleton (the desired solution), and thus we cannot use Slater’s condition to establish strong duality. However, because \(\mathcal {K}\) is self-dual [11, Example 2.24], clearly \(C\in {{\,\mathrm{int}\,}}\mathcal {K}^*\), and by the discussion above, we can use Proposition 6.1 to establish that strong duality holds for (6.2).

A consequence of Proposition 6.1 is that it is possible to modify (P) in order to guarantee strong duality. In particular, we may regularize the objective, and instead consider a version of the problem with the objective \(f(x) + \mu \Vert x\Vert \), where the parameter \(\mu \) controls the degree of regularization contributed by the term \(\Vert x\Vert \). If, for example, f is bounded below on \(\mathcal {X}\), the regularized objective is coercive, and Proposition 6.1 asserts that the revised problem satisfies strong duality. Thus, the optimal value function of the level-set problem has the correct root, and the level-set method is applicable. For toy problems such as Examples 2.1 and 2.2, where all of the feasible points are optimal, regularization would not perturb the solution; in general, however, we expect that regularization will perturb the resulting solution, and in some cases this may be the desired outcome.
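The regularization remedy can be illustrated on a one-dimensional convex function, chosen purely for illustration, that is bounded below but not coercive. The sketch below contrasts \(f(x)=e^x\), which flattens toward its infimum as \(x\rightarrow -\infty \), with the regularized objective \(f(x)+\mu |x|\), which grows without bound in every direction and therefore satisfies condition (a) of Proposition 6.1.

```python
# Toy check of the regularization remedy (illustrative f of our own
# choosing): f(x) = exp(x) is convex and bounded below by 0, but not
# coercive, since f(x) -> 0 as x -> -infinity.  Adding mu*|x| with
# mu > 0 restores coercivity, so Proposition 6.1(a) applies to the
# regularized problem.

import math

def f(x):
    return math.exp(x)            # convex, bounded below, not coercive

def f_reg(x, mu=0.1):
    return f(x) + mu * abs(x)     # coercive for any mu > 0

for x in (-1.0, -10.0, -100.0):
    print(x, f(x), f_reg(x))
# f flattens toward 0 while f_reg keeps growing as |x| increases.
```

The same comparison applies verbatim in higher dimensions with any norm in place of the absolute value.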