1 Introduction

Pareto efficiency is a well-established concept in a variety of fields such as economics, engineering, and biology; see, e.g., [18] for a broad overview. In [9], Iancu and Trichakis adapted this concept to robust optimization (RO) for linear programs.

In particular, they consider the robust linear program

$$\begin{aligned} \max _{x\in \mathcal {X}}\min _{p\in \mathcal {U}}p^\top x, \end{aligned}$$
(1)

where the feasible set \(\mathcal {X}\) and the uncertainty set \(\mathcal {U}\) are assumed to be polytopes. In this setting, they characterize and compute so-called Pareto robustly optimal (PRO) solutions. These are robustly optimal solutions \(x\in \mathcal {X}\) for which there exists no \(\bar{x}\in \mathcal {X}\) such that \(p^\top \bar{x} \ge p^\top x\) for all \(p\in \mathcal {U}\) and \(\bar{p}^\top \bar{x} > \bar{p}^\top x\) for at least one \(\bar{p} \in \mathcal {U}\). The main purpose of this article is to generalize this definition and derive a characterization of PRO solutions in a setting similar to the one in [9]. Moreover, we show that in the case of robust semidefinite programs, computing PRO solutions is tractable.

Although the work of Iancu and Trichakis on the linear framework is rather new, it has already triggered further research, such as analyses of adjustable settings; see, e.g., [16] for a rolling-horizon approach and [3] for an approach based on Fourier-Motzkin elimination.


Structure

In Sect. 2, we generalize the approach of Iancu and Trichakis to \(\mathcal {X}\) being a subset of a finite dimensional Euclidean vector space and an uncertain parameter that affects the objective affinely and is contained in a compact, convex uncertainty set \(\mathcal {U}\). In particular, we provide a characterization of Pareto robustly optimal (PRO) solutions in this broader setting, which is our main result. This result enables us to prove the tractability of computing a PRO solution in the case of robust semidefinite programming. In Sects. 3 and 4, we illustrate how to compute the robust maximal eigenvalue of a class of matrices and consider a variant of the SDP that is at the core of the Goemans-Williamson Algorithm [8]. The PRO solutions of the latter are then used as an input for the algorithm and improve the computed cuts for the robust max-cut problem.


Notation

In the remainder of this article, the feasible set \(\mathcal {X}\) and the uncertainty set \(\mathcal {U}\) are contained in finite dimensional Euclidean vector spaces. In the present article, we will mostly choose for both spaces the space of real symmetric \(n \times n\)-matrices \(\mathcal {S}^n\) equipped with the Frobenius inner product \(\langle \cdot ,\cdot \rangle\), i.e., \((\mathcal {S}^n,\langle \cdot ,\cdot \rangle )\). For a positive semidefinite matrix \(X \in \mathbb {R}^{n \times n}\), we write \(X\succeq 0\), and we denote the set of symmetric positive semidefinite matrices by \(\mathcal {S}^n_{\succeq 0}\). Given a subset S of a Euclidean vector space V with inner product \(\langle \cdot , \cdot \rangle _V\), we denote its dual cone by \(S^*=\{y\in V:\langle y,x\rangle _V \ge 0~\forall x\in S\}\) and its relative interior by \(\mathrm {relint}(S)\). For a real matrix \(A \in \mathbb {R}^{n \times n}\), we denote its trace by \(\mathrm {Tr}(A)\). For a positive integer \(n\in \mathbb {N}\), we use \([n]:=\{1,...,n\}\) to denote a set of indices and \(I_n\) to denote the \(n \times n\) identity matrix. The vector \(e_i \in \mathbb {R}^n\), \(i \in [n]\), denotes the i-th unit vector, and \(\mathbbm {1}:=\sum _{i=1}^n e_i\in \mathbb {R}^n\) denotes the all-ones vector. We further denote by \(E_{ij}{:}{=}\frac{1}{2}(e_ie_j^\top +e_je_i^\top )\in \mathcal {S}^n\), \(i,j \in [n]\), the standard basis matrices of \(\mathcal {S}^n\).

2 Pareto optimal solutions for affine uncertainty

As a generalization of Program (1), we consider the following robust optimization problem

$$\begin{aligned} \sup _{x \in \mathcal {X}} \min _{p \in \mathcal {U}} f(x,p), \end{aligned}$$
(2)

where \(\mathcal {X}\) is the feasible set and \(\mathcal {U}\subseteq V\) is a convex and compact uncertainty set contained in a Euclidean vector space V. Let further \(f(\cdot ,p):\mathcal {X}\rightarrow \mathbb {R}\) be a function that is well-defined for all \(p \in \mathcal {U}\). Naturally, we assume that \(\mathcal {U}\) is not a singleton. The parameter \(p \in \mathcal {U}\) encodes an affine uncertainty, i.e., \(f(x,\cdot ):\mathcal {U}\rightarrow \mathbb {R}\) is affine in p for all \(x \in \mathcal {X}\). This affinity gives rise to an alternative formulation of (2), namely

$$\begin{aligned} \sup _{x \in \mathcal {X}} \min _{p \in \mathcal {U}} \langle \bar{f}(x), p\rangle _V + g(x), \end{aligned}$$
(3)

where \(\bar{f}(x)\in V\) and \(g(x) \in \mathbb {R}\) are the unique elements that correspond to the affine functional \(f(x,\cdot ): p\mapsto f(x,p)\) as given by the Riesz representation theorem. Hence (2) can be seen as a generalization of (1) to Euclidean vector spaces. However, over the course of the present article we mainly stick to Formulation (2). We further note that if \(\mathcal {X}\) is compact and f is continuous on \(\mathcal {X}\), we may replace '\(\sup\)' by '\(\max\)' in (2). We denote the set of robustly optimal solutions, i.e., the set of optimal solutions of (2), by \(\mathcal {X}^{\mathrm {RO}}\).

In robust optimization, one usually focuses on the worst-case scenario, i.e., it suffices to find any robustly optimal solution \(x\in \mathcal {X}^{\mathrm {RO}}\). In contrast to this approach, we aim for a specific \(x\in \mathcal {X}^{\mathrm {RO}}\) that also performs well under all other scenarios \(p\in \mathcal {U}\). To this end, we use the definition of Pareto robustness from [3], which is a generalization of the definition from [9] as mentioned in the introduction:

Definition 1

A robustly optimal solution \(x\in \mathcal {X}^{\mathrm {RO}}\) is called a Pareto robustly optimal solution (PRO) of (2) if there exists no \(\bar{x}\in \mathcal {X}\) such that

$$\begin{aligned}&\forall p \in \mathcal {U}:\quad f(\bar{x},p) \ge f(x,p), \end{aligned}$$
(4)
$$\begin{aligned}&\exists \bar{p} \in \mathcal {U}:\quad f(\bar{x},\bar{p}) > f(x,\bar{p}). \end{aligned}$$
(5)

In this case, we also write \(x\in \mathcal {X}^{\mathrm {PRO}}\). If \(x\notin \mathcal {X}^{\mathrm {PRO}}\), we say that any \(\bar{x}\) fulfilling (4) and (5) Pareto dominates x.

It is natural to ask whether such solutions exist, whether they can be characterized, and whether they can be computed efficiently.

We first give an introductory example that fits into the setting of (2) and demonstrates that the choice of a Pareto optimal solution can significantly improve the objective value. After proving our main result, Theorem 1, which characterizes PRO solutions, we apply it to the example. A broader discussion of applications follows in Sect. 4.

Example 1

Consider the robust quadratic knapsack problem:

$$\begin{aligned} qkp(R,w,d,\mathcal {U})~{:}{=}\sup _{x\in \{0,1\}^n}&\min _{p\in \mathcal {U}}\ x^\top R(p)x\\ \mathrm {s.t.} ~~ &w^\top x \le d. \end{aligned}$$

Quadratic knapsack problems arise in various applications. For illustrative purposes, we consider an example from [15], where a logistics company wants to construct hubs that, on the one hand, maximize the reward function \(x^\top Rx\) but, on the other hand, are restricted by the budget constraint \(w^\top x\le d\). Here, a reward \(R_{ij}\) is paid for shipping a good from hub i to hub j, and rewards \(R_{ii}, R_{jj}\) are paid for additional services at hubs i and j whenever such a shipment occurs. Uncertainties in the reward matrix R may, for example, originate from the type of lorry the company uses.

In the following, we demonstrate that there are PRO solutions \(x\in \mathcal {X}^{\mathrm {PRO}}\) for the quadratic knapsack problem that Pareto dominate other robust solutions \(x\in \mathcal {X}^{\mathrm {RO}}\setminus \mathcal {X}^{\mathrm {PRO}}\). Moreover, we show that the improvement in the objective can be significant if p does not attain its worst-case realization. As an example, let \(w=\mathbbm {1}, d=5\) and \(R(p)= \mathbbm {1}\mathbbm {1}^\top +E_{ii}(p_1-1)+E_{jj}(p_1-1)+E_{ij}(p_2-1)\) for a fixed pair of indices \(i,j\in [n]\). This affine relation is a common way to model matrix uncertainties (see, e.g., [6]); it can be generalized by considering arbitrary matrices instead of the standard basis matrices \(E_{ij}\in \mathcal {S}^n\). We consider the convex uncertainty set \(\mathcal {U}{:}{=}\{p\in \mathbb {R}^2:\ p_1\ge 1, p_1^2\le p_2, p_2\le 4\}\) and observe that for this particular \(\mathcal {U}\) the worst case is attained at \(p=(1,1)^\top\) since

$$\begin{aligned} \min _{p\in \mathcal {U}}x^\top R(p)x=\min _{p\in \mathcal {U}}(p_1-1)(x_i^2+x_j^2)+(p_2-1)x_ix_j+x^\top \mathbbm {1}\mathbbm {1}^\top x \end{aligned}$$

and \(x\ge 0\). Hence, in the worst case we have \(R(p)=R((1,1)^\top )=\mathbbm {1}\mathbbm {1}^\top\) and consequently every \(x\in \{0,1\}^n\) with \(\sum _{i \in [n]} x_i =5\) is a robustly optimal solution with objective value \(x^\top \mathbbm {1}\mathbbm {1}^\top x=25\). However, every solution that in addition satisfies \(x_i=x_j=1\) Pareto dominates the other robust solutions since the respective objective value is equal to

$$\begin{aligned} x^\top R(p)x=(p_1-1)(x_i^2+x_j^2)+(p_2-1)x_ix_j + 25=2(p_1-1)+(p_2-1)+25. \end{aligned}$$

In our example, the advantage of choosing such an \(x\in \mathcal {X}^{\mathrm {PRO}}\) over a solution \(x\in \mathcal {X}^{\mathrm {RO}}\setminus \mathcal {X}^{\mathrm {PRO}}\) becomes apparent for \(p_1=2\) and \(p_2=4\), where the objective value increases from 25 to 30.
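The computation above is easy to verify numerically. The following brute-force sketch (in Python, for a hypothetical instance size n = 6 and the index pair \(i=1\), \(j=2\), written 0-based in the code) enumerates all feasible binary vectors, keeps the robustly optimal ones, and evaluates them at the vertex \(p=(2,4)^\top\):

```python
import itertools
import numpy as np

# Brute-force check of Example 1 (hypothetical instance size n = 6).
n, d = 6, 5

def f(x, p):
    # x^T R(p) x with R(p) = 11^T + (p1-1)(E_11 + E_22) + (p2-1) E_12
    return (p[0] - 1) * (x[0]**2 + x[1]**2) + (p[1] - 1) * x[0] * x[1] + x.sum()**2

worst, vertex = np.array([1.0, 1.0]), np.array([2.0, 4.0])
robust = [np.array(x) for x in itertools.product([0, 1], repeat=n)
          if sum(x) <= d and f(np.array(x), worst) == 25]

# Every robust solution attains 25 in the worst case p = (1,1); only those
# with x_1 = x_2 = 1 reach the value 30 at the opposite vertex p = (2,4).
for x in robust:
    print(x, f(x, worst), f(x, vertex))
```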

The key to characterizing and determining PRO solutions is the following theorem, which generalizes Theorem 1 in [9] and constitutes our main result.

Theorem 1

A solution \(x^* \in \mathcal {X}^{\mathrm {RO}}\) of (2) is PRO if and only if it is an optimal solution to the optimization problem

$$\begin{aligned}\sup _{y} ~ &f(y,{\hat{p}})\\ \mathrm {s.t.} ~& \min _{p \in \mathcal {U}} f(y,p) - f(x^*,p) \ge 0, \\ &y \in \mathcal {X}\end{aligned}$$
(6)

for an arbitrary \({\hat{p}} \in \mathrm {relint}(\mathcal {U})\). Every feasible solution y to (6) with an objective value greater than \(f(x^*,{\hat{p}})\) Pareto dominates \(x^*\). Moreover, if Program (6) yields an optimal solution, then it is PRO.

Proof

We begin by pointing out that \(\mathrm {relint}(\mathcal {U}) \ne \emptyset\) since \(\mathcal {U}\) is convex. Furthermore, for the inner minimization program, there exists an optimal solution \(p^*\) since the objective is affine and \(\mathcal {U}\) is compact.

If y is feasible for Program (6) with an objective value greater than \(f(x^*,{\hat{p}})\), then the following holds:

$$\begin{aligned}&f(y,p) \ge f(x^*,p) \ \forall p \in \mathcal {U},\\&f(y,{\hat{p}}) > f(x^*,{\hat{p}}). \end{aligned}$$

In other words, y Pareto dominates \(x^*\).

Next, we show that \(x^* \in \mathcal {X}^{\mathrm {PRO}}\) if and only if \(x^*\) is an optimal solution of Program (6). We have already shown that if there exists a feasible solution with objective value greater than that of \(x^*\), i.e., if \(x^*\) is not optimal for Program (6), then \(x^* \notin \mathcal {X}^{\mathrm {PRO}}\). Thus, we only need to show that optimality of \(x^*\) for Program (6) implies \(x^* \in \mathcal {X}^{\mathrm {PRO}}\). To this end, assume that \(x^*\) is not Pareto robustly optimal. Then there exists a solution \(y \in \mathcal {X}\) that Pareto dominates \(x^*\), and we obtain

$$\begin{aligned} 0 < \max _{p \in \mathcal {U}} f(y,p) - f(x^*,p). \end{aligned}$$
(7)

Since, on the right-hand side of (7), we maximize an affine function over the compact and convex set \(\mathcal {U}\), an optimal solution \(\bar{p}\) can without loss of generality be chosen as an extreme point of \(\mathcal {U}\). Additionally, since \({\hat{p}} \in \mathrm {relint}(\mathcal {U})\) and \(\mathcal {U}\) is convex, there exist \(p\in \mathcal {U}\) and \(\varepsilon \in (0,1)\) such that \({\hat{p}} = \varepsilon \bar{p} + (1- \varepsilon )p\). In particular, we obtain

$$\begin{aligned} f(y,{\hat{p}}) - f(x^*,{\hat{p}})= \varepsilon (f(y,\bar{p}) - f(x^*,\bar{p}))+(1-\varepsilon )(f(y,p) - f(x^*,p)) > 0, \end{aligned}$$

where the inequality follows from the fact that \(\bar{p}\) is a maximizer in (7) and that y is a feasible solution of Program (6). Hence, \(x^*\) is not an optimal solution of Program (6) and the claim follows.

For the last claim in Theorem 1, assume that \(y^*\) is an optimal solution of Program (6). Assume for contradiction that \(y^* \notin \mathcal {X}^{\mathrm {PRO}}\). Then, there exist \(\bar{p} \in \mathcal {U}\) and \(z \in \mathcal {X}\) with \(f(z,\bar{p}) > f(y^*,\bar{p})\) and \(f(z,p) - f(y^*,p) \ge 0\) for all \(p \in \mathcal {U}\). However, since

$$\begin{aligned} f(z,p) - f(x^*,p) \ge f(z,p) - f(y^*,p) \ge 0 \ \forall p \in \mathcal {U}, \end{aligned}$$

z is feasible for Program (6). Furthermore, analogously to before, \(f(z,\bar{p}) > f(y^*,\bar{p})\) implies \(f(z,{\hat{p}}) > f(y^*, {\hat{p}})\), i.e., the objective value of z is higher than that of \(y^*\), which contradicts the optimality of \(y^*\). \(\square\)

We observe that since the function f is affine in p on the convex set \(\mathcal {U}\), one could reformulate the inner minimization problem via the dual cone, KKT conditions, or the reformulations given in [2]. This would be beneficial for solving Program (6). In the following, we apply Theorem 1 to the problem given in Example 1.

Example 1 continued. Without loss of generality we set \(i=1\) and \(j=2\). We prove that \(x^*=(1,1,1,1,1,0,\ldots ,0)^\top\) is a PRO solution to \(qkp(R,\mathbbm {1},5,\mathcal {U})\) with \(R(p)= \mathbbm {1}\mathbbm {1}^\top +E_{11}(p_1-1)+E_{22}(p_1-1)+E_{12}(p_2-1)\) and \(\mathcal {U}{:}{=}\{p\in \mathbb {R}^2:\ p_1\ge 1, p_1^2\le p_2, p_2\le 4\}\). Consider an arbitrary point \({\hat{p}}\in \text {relint}(\mathcal {U})\). Due to Theorem 1 it suffices to show that \(x^*\) is an optimal solution to

$$\begin{aligned}&\max _{y} \ y^\top R({\hat{p}})y, \end{aligned}$$
(8a)
$$\begin{aligned}&\quad \mathrm {s.t.} \ \min _{p \in \mathcal {U}} y^\top R(p)y - (x^*)^\top R(p)x^* \ge 0, \end{aligned}$$
(8b)
$$\begin{aligned}&\quad \ y \in \{0,1\}^n, \end{aligned}$$
(8c)
$$\begin{aligned}&\quad \ \mathbbm {1}^\top y \le 5. \end{aligned}$$
(8d)

Here, we can reformulate Constraint (8b) since

$$\begin{aligned}&\min _{p\in \mathcal {U}}~(\mathbbm {1}^\top y)^2 -(\mathbbm {1}^\top x^*)^2 +(p_1-1)(y_1^2-(x^*_1)^2)+(p_1-1)(y_2^2-(x^*_2)^2)\\&\quad \quad +(p_2-1)(y_1y_2-x^*_1x^*_2)\\ =&\min _{p\in \mathcal {U}}~(\mathbbm {1}^\top y)^2 - 25 +(p_1-1)(y_1^2-1)+(p_1-1)(y_2^2-1)+(p_2-1)(y_1y_2-1)\\ =&(\mathbbm {1}^\top y)^2 - 25 +(y_1^2-1)+(y_2^2-1)+3(y_1y_2-1), \end{aligned}$$

where the last equation holds since \(p=(2,4)^\top\) is a minimizer for every binary y. Moreover, since Constraint (8d) implies that \((\mathbbm {1}^\top y)^2 - 25 \le 0\), we conclude that \(y_1=y_2=1\) for every feasible \(y \in \{0,1\}^n\). Thus, we reformulate Program (8) to

$$\begin{aligned}&\max _{y} \ y^\top R({\hat{p}})y \end{aligned}$$
(9a)
$$\begin{aligned}&\quad \mathrm {s.t.} \ y_1= y_2 = 1, \end{aligned}$$
(9b)
$$\begin{aligned}&\quad \ y \in \{0,1\}^n, \end{aligned}$$
(9c)
$$\begin{aligned}&\quad \ \sum _{i=3}^n y_i\le 3. \end{aligned}$$
(9d)

Hence, every feasible y satisfies \(y^\top R({\hat{p}})y\le (x^*)^\top R({\hat{p}})x^*\), with equality if and only if \(\mathbbm {1}^\top y=5\), and we conclude that \(x^*\) is optimal for (8).
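The optimality of \(x^*\) for Program (8) can also be verified by brute force. The sketch below assumes again the hypothetical instance size n = 6, uses the minimizer \(p=(2,4)^\top\) of Constraint (8b) derived above, and an arbitrarily chosen \({\hat{p}}=(1.2,2)^\top \in \mathrm {relint}(\mathcal {U})\):

```python
import itertools
import numpy as np

# Verify via Theorem 1 that x* is PRO (hypothetical instance size n = 6).
n, d = 6, 5
x_star = np.array([1, 1, 1, 1, 1, 0])
p_hat = np.array([1.2, 2.0])   # 1 < p1, p1^2 < p2 < 4, so p_hat lies in relint(U)
p_min = np.array([2.0, 4.0])   # minimizer of Constraint (8b), see above

def f(x, p):  # x^T R(p) x as in Example 1 with i = 1, j = 2 (0-based 0, 1)
    return (p[0] - 1) * (x[0]**2 + x[1]**2) + (p[1] - 1) * x[0] * x[1] + x.sum()**2

best = -np.inf
for y in itertools.product([0, 1], repeat=n):
    y = np.array(y)
    if y.sum() <= d and f(y, p_min) - f(x_star, p_min) >= 0:  # (8d) and (8b)
        best = max(best, f(y, p_hat))

print(best, f(x_star, p_hat))  # equal, so x* is optimal for (8) and hence PRO
```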

We observe that the reformulated Program (9) is again a quadratic knapsack problem. Furthermore, the uncertainty set chosen in Example 1 is an intersection of the second-order cone with two halfspaces. We computed a Pareto optimal solution and also verified its Pareto optimality by applying Theorem 1, both by hand. However, an SOCP structure in the uncertainty set, as illustrated in the above example, may in some cases also allow us to dualize the inner minimization program. Since this dualization approach would result in a convex MINLP even for wider classes of programs under uncertainty, the example suggests that obtaining PRO solutions might be computationally tractable in practice for a variety of problems. Investigating such properties is left for future research.

Another way to determine a PRO solution is given by the following theorem, provided one can give a closed-form description of \(\mathcal {X}^{\mathrm {RO}}\):

Theorem 2

Let \({\hat{p}} \in \mathrm {relint}(\mathcal {U})\). Then every element of \(\mathrm {argsup}_{x \in \mathcal {X}^{\mathrm {RO}}} f(x,{\hat{p}})\) is a Pareto robustly optimal solution of (2).

Proof

Assume that \(x^* \in \mathrm {argsup}_{x \in \mathcal {X}^{\mathrm {RO}}} f(x,{\hat{p}})\) but \(x^* \notin \mathcal {X}^{\mathrm {PRO}}\). Then there exist \(y\in \mathcal {X}^{\mathrm {RO}}\) with \(f(x^*,p) \le f(y,p)\) for all \(p \in \mathcal {U}\) and \(\bar{p} \in \mathcal {U}\) with \(f(x^*,\bar{p}) < f(y,\bar{p})\). As in the proof of Theorem 1, we can write \({\hat{p}} = \varepsilon \bar{p} + (1-\varepsilon ) p\) for some \(p \in \mathcal {U}\) and \(\varepsilon \in (0,1)\). Hence,

$$\begin{aligned} 0 \ge f(y,{\hat{p}}) - f(x^*,{\hat{p}}) = \varepsilon (f(y,\bar{p}) - f(x^*,\bar{p})) + (1-\varepsilon ) (f(y,p) - f(x^*,p)) > 0, \end{aligned}$$

where the first inequality holds since \(x^*\) is a maximizer of \(f(\cdot ,{\hat{p}})\) over \(\mathcal {X}^{\mathrm {RO}}\). \(\square\)

In contrast to Theorems 1 and 2, which aim at determining PRO solutions, the following theorem addresses the question of whether (2) admits non-trivial PRO solutions x, i.e., \(x\in \mathcal {X}^{\mathrm {PRO}}\) with \(\mathcal {X}^{\mathrm {PRO}}\ne \mathcal {X}^{\mathrm {RO}}\).

Theorem 3

Let \({\hat{p}} \in \mathrm {relint}(\mathcal {U})\) and consider the optimization problem

$$\begin{aligned}\sup _{x,y} ~& f(y,{\hat{p}}) - f(x,{\hat{p}})\\\mathrm {s.t.} ~& \min _{p \in \mathcal {U}} f(y,p) - f(x,p) \ge 0, \\&y \in \mathcal {X}, \\ &x \in \mathcal {X}^{\mathrm {RO}}. \end{aligned}$$
(10)

Then \(\mathcal {X}^{\mathrm {PRO}}= \mathcal {X}^{\mathrm {RO}}\) if and only if the optimal value of (10) equals zero.

Proof

Suppose that there exists a feasible solution \((x^*,y^*)\) of (10) with strictly positive objective value. We observe that

$$\begin{aligned} \min _{p \in \mathcal {U}} f(y^*,p) - f(x^*,p) \ge 0 \text { and }f(y^*,{\hat{p}}) - f(x^*, {\hat{p}}) > 0 \end{aligned}$$

implies that \(y^*\) Pareto dominates \(x^*\in \mathcal {X}^{\mathrm {RO}}\) and thus \(x^*\in \mathcal {X}^{\mathrm {RO}}\setminus \mathcal {X}^{\mathrm {PRO}}\). For the opposite direction, we consider an arbitrary \(\bar{x}\in \mathcal {X}^{\mathrm {RO}}\) and suppose that the optimal value of (10) is zero. This implies that

$$\begin{aligned}f(\bar{x},{\hat{p}})\ge \sup _{y} ~& f(y,{\hat{p}})\\ \mathrm {s.t.} ~& \min _{p \in \mathcal {U}} f(y,p) - f(\bar{x},p) \ge 0, \\& y \in \mathcal {X}. \end{aligned}$$

Moreover, equality holds since \(y=\bar{x}\) is a feasible and optimal solution and thus we can apply Theorem 1 to obtain that \(\bar{x}\in \mathcal {X}^{\mathrm {PRO}}\) and conclude \(\mathcal {X}^{\mathrm {PRO}}=\mathcal {X}^{\mathrm {RO}}\). \(\square\)
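For Example 1, the optimal value of Program (10) can again be evaluated by brute force (same hypothetical instance size n = 6 as before); here the inner minimum must be computed for arbitrary pairs, which we do by checking the corner points of \(\mathcal {U}\) and, if necessary, the parabolic part of its boundary. A strictly positive value confirms \(\mathcal {X}^{\mathrm {PRO}}\ne \mathcal {X}^{\mathrm {RO}}\) for this instance:

```python
import itertools
import numpy as np

# Brute-force evaluation of Program (10) for Example 1 (hypothetical n = 6).
n, d = 6, 5
p_hat = np.array([1.2, 2.0])  # a point in relint(U)

def f(x, p):  # x^T R(p) x with i = 1, j = 2 (0-based 0, 1)
    return (p[0] - 1) * (x[0]**2 + x[1]**2) + (p[1] - 1) * x[0] * x[1] + x.sum()**2

def inner_min(a, b, c):
    # min of a(p1-1) + b(p2-1) + c over U = {p1 >= 1, p1^2 <= p2 <= 4}:
    # check the three corner points and, if b > 0, the parabola p2 = p1^2.
    cands = [(1.0, 1.0), (1.0, 4.0), (2.0, 4.0)]
    if b > 0:
        p1 = min(max(-a / (2 * b), 1.0), 2.0)  # stationary point, clipped
        cands.append((p1, p1**2))
    return min(a * (p1 - 1) + b * (p2 - 1) for p1, p2 in cands) + c

X = [np.array(v) for v in itertools.product([0, 1], repeat=n) if sum(v) <= d]
X_RO = [x for x in X if f(x, np.array([1.0, 1.0])) == 25]

best = 0.0
for x in X_RO:
    for y in X:
        a = y[0]**2 + y[1]**2 - x[0]**2 - x[1]**2
        b = y[0] * y[1] - x[0] * x[1]
        if inner_min(a, b, float(y.sum()**2 - x.sum()**2)) >= 0:
            best = max(best, f(y, p_hat) - f(x, p_hat))
print(best)  # strictly positive, hence X^PRO != X^RO
```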

2.1 A tractable reformulation for SDPs under linear perturbations

We illustrate the above results with the example of semidefinite programming under uncertainties that solely affect the cost matrix. In addition, we provide a tractability result for this class of optimization problems. We consider a feasible set given by an arbitrary spectrahedron

$$\begin{aligned} \mathcal {X}=\{X\in {\mathcal {S}^n_{\succeq 0}}: ~\langle A_j,X\rangle =b_j,~ \forall j\in [k]\}, \end{aligned}$$

and an uncertainty set

$$\begin{aligned} \mathcal {U}=\left\{ P=P_0+\sum _{i=1}^N\mu _iP_i:\ \mu \in [\mu ^-,\mu ^+]\right\} \end{aligned}$$
(11)

with fixed parameters \(P_0,\ldots , P_N\in \mathcal {S}^n, \mu ^-,\mu ^+\in \mathbb {R}^N\). This type of uncertainty set is widely used for matrix uncertainty, cf. [6]. We observe that the Frobenius inner product \(f(X,P)=\langle P,X\rangle\) is bilinear, i.e., linear in X and in the uncertain parameter P; hence, it can be used as an objective function for (2). Thus, we consider the following SDP under cost uncertainty, which fits into our setting:

$$\begin{aligned} \begin{aligned}&\sup _{X\in \mathcal {S}^n_{\succeq 0}}~\min _{P\in \mathcal {U}}~\langle P,X\rangle \\&\quad \text {s.t.} ~\langle A_j,X\rangle =b_j,~~~\forall j\in [k]. \end{aligned} \end{aligned}$$
(12)

It is worth noting that the above problem formulation differs from the more established ones in, e.g., [6] or [1] in that it considers uncertainties in the objective instead of uncertainties in the constraints. Although we do not investigate the exact relation between these two approaches here, we point out that the considered problem is a semidefinite version of the setting investigated in [9]. We recall that we aim to compute a Pareto robustly optimal solution for (12), i.e., a robustly optimal solution \(X\in \mathcal {X}^{\mathrm {RO}}\) such that there is no other \(\bar{X}\in \mathcal {X}\) that satisfies

$$\begin{aligned}&\forall P \in \mathcal {U}: \ \langle P,\bar{X}\rangle \ge \langle P,X\rangle ,\\&\exists \bar{P} \in \mathcal {U}: \ \langle \bar{P},\bar{X}\rangle > \langle \bar{P},X\rangle . \end{aligned}$$

The following proposition shows how Theorem 1 can be used to achieve this.

Proposition 1

A solution \(X \in \mathcal {X}^{\mathrm {RO}}\) is Pareto robustly optimal for (12) if and only if the optimal value of

$$\begin{aligned} \begin{aligned}&\sup _{Z} \ \langle {\hat{P}},Z\rangle \\&\quad \mathrm {s.t.} \ Z \in \mathcal {U}^*, \\&\quad \ X + Z \in \mathcal {X}\end{aligned} \end{aligned}$$
(13)

is 0. If it is positive with optimal solution Z, then \(X+Z \in \mathcal {X}^{\mathrm {PRO}}\). Moreover, if a PRO solution to (12) exists, Program (13) computes one. The corresponding runtime is polynomial in the encoding length of the input.

Proof

Applying Theorem 1, one obtains that \(X \in \mathcal {X}^{\mathrm {RO}}\) is Pareto robustly optimal if and only if

$$\begin{aligned}&\sup _{Y} \ \langle {\hat{P}}, Y \rangle , \end{aligned}$$
(14a)
$$\begin{aligned}&\quad \mathrm {s.t.} \ \min _{P \in \mathcal {U}} \langle Y - X, P \rangle \ge 0, \end{aligned}$$
(14b)
$$\begin{aligned}&\quad \ Y \in \mathcal {X}\end{aligned}$$
(14c)

has an optimal value of \(\langle {\hat{P}}, X \rangle\). Let \(Z:= Y-X\). Then, \(\langle {\hat{P}}, Y \rangle \ge \langle {\hat{P}}, X \rangle\) is equivalent to \(\langle {\hat{P}}, Z \rangle \ge 0\) and the inequality \(\min _{P \in \mathcal {U}} \langle Y - X, P \rangle \ge 0\) is equivalent to \(Z \in \mathcal {U}^*\), which proves the first part of the claim. In order to prove tractability, we observe

$$\begin{aligned} (14b) \quad\Leftrightarrow&&0&\le \min _{\mu \in [\mu ^-,\mu ^+]} \langle Y - X, P_0 \rangle + \sum _{i=1}^N \mu _i \langle Y - X, P_i \rangle \\ \Leftrightarrow&&-\langle Y - X, P_0 \rangle&\le \min _{\mu \in [\mu ^-,\mu ^+]} \sum _{i=1}^N\mu _i \langle Y - X, P_i \rangle \\ \Leftrightarrow&&-\langle Y - X, P_0 \rangle&\le \max _{y \in \mathbb {R}^{2N}_{\ge 0}} \left\{ y^\top \begin{pmatrix} -\mu ^+\\ \mu ^- \end{pmatrix}:\ \begin{pmatrix}-I_N&I_N \end{pmatrix}y = \begin{pmatrix} \langle Y-X,P_1\rangle \\ \vdots \\ \langle Y-X,P_N\rangle \end{pmatrix} \right\} \end{aligned}$$

where the last equivalence follows from strong duality for the inner linear program. Consequently, Program (13) can be written as an SDP that is polynomially solvable in the encoding length of its input:

$$\begin{aligned} \sup _{Y,y}~&\ \langle {\hat{P}}, Y \rangle \\ \mathrm {s.t.}~&\ y^\top \begin{pmatrix} -\mu ^+\\ \mu ^- \end{pmatrix}\ge -\langle Y - X, P_0 \rangle ,\\&\begin{pmatrix} -I_N&I_N \end{pmatrix}y = \begin{pmatrix} \langle Y-X,P_1\rangle \\ \vdots \\ \langle Y-X,P_N\rangle \end{pmatrix}, \\&Y \in \mathcal {X}, y\in \mathbb {R}^{2N}_{\ge 0}. \end{aligned}$$

We note that this maximization program is computationally tractable since the number of additional variables and constraints is polynomial in the encoding length of the input (namely, \(N+1\) additional constraints and 2N additional variables). \(\square\)
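A possible implementation of this SDP is sketched below in Python with cvxpy; the function name pareto_improve and the inputs A, b (the spectrahedron data), P0, Ps, mu_minus, mu_plus (the uncertainty data from (11)), X_ro and P_hat are hypothetical placeholders, not part of the original formulation:

```python
import cvxpy as cp
import numpy as np

# Sketch of the SDP from the proof of Proposition 1 (hypothetical interface).
def pareto_improve(X_ro, P_hat, P0, Ps, mu_minus, mu_plus, A, b):
    n, N = X_ro.shape[0], len(Ps)
    Y = cp.Variable((n, n), symmetric=True)
    y = cp.Variable(2 * N, nonneg=True)   # LP dual multipliers from the proof
    D = Y - X_ro
    cons = [Y >> 0]
    cons += [cp.trace(Aj @ Y) == bj for Aj, bj in zip(A, b)]
    # y^T (-mu^+; mu^-) >= -<Y - X, P_0>
    cons.append(y[:N] @ (-mu_plus) + y[N:] @ mu_minus >= -cp.trace(P0 @ D))
    # (-I_N  I_N) y = (<Y - X, P_i>)_{i=1,...,N}
    cons += [y[N + i] - y[i] == cp.trace(Ps[i] @ D) for i in range(N)]
    prob = cp.Problem(cp.Maximize(cp.trace(P_hat @ Y)), cons)
    prob.solve()
    return Y.value, prob.value
```

If the returned value exceeds \(\langle {\hat{P}}, X\rangle\), the returned Y Pareto dominates X; if the two values coincide, X is PRO.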

Thus, we have proved that computing a Pareto robustly optimal solution for robust semidefinite programs (12) with cost uncertainty (11) is tractable. In the following sections, we illustrate this for a robust eigenvalue problem and for the computation of maximum cuts in graphs with uncertain edge weights.

3 Application I: The robust maximum eigenvalue problem

In the following paragraphs, we show that computing the maximal eigenvalue of a set of affine combinations of matrices fits into the setting of (2). The largest-eigenvalue problem for a matrix C can be written as the following primal-dual pair of SDPs (see, e.g., [14]):

$$\begin{aligned} \lambda _{\max } \;=\; \max _{X\in \mathcal {S}^n_{\succeq 0}} ~&\langle C,X\rangle&\;=\; \min _y ~&y\\ \text {s.t.}~&\mathrm {Tr}(X)=1~(\Leftrightarrow \langle I_n,X\rangle =1)&\text {s.t.}~&yI_n-C\succeq 0. \end{aligned}$$
(15)
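As a quick numerical sanity check of (15), the following sketch (in Python with cvxpy; the random symmetric matrix is only a stand-in for C) compares the SDP value with the eigenvalue computed by numpy:

```python
import cvxpy as cp
import numpy as np

# Check (15) numerically: the SDP value matches numpy's largest eigenvalue.
n = 4
rng = np.random.default_rng(0)
B = rng.standard_normal((n, n))
C = (B + B.T) / 2                      # a random symmetric test matrix

X = cp.Variable((n, n), symmetric=True)
prob = cp.Problem(cp.Maximize(cp.trace(C @ X)),
                  [X >> 0, cp.trace(X) == 1])
prob.solve()
print(prob.value, np.linalg.eigvalsh(C).max())  # agree up to solver tolerance
```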

An optimal matrix \(X\in \mathcal {S}^n_{\succeq 0}\) for the first optimization problem corresponds to a unit eigenvector x for the largest eigenvalue \(\lambda _{\max }\) of C via \(X=xx^\top\). In the remainder of this section, we consider the following robust variant of (15) with respect to a compact and convex uncertainty set \(\mathcal {U}\):

$$\begin{aligned} \lambda _{\max }=\max _{X\in \mathcal {S}^n_{\succeq 0}}~&\min _{C\in \mathcal {U}} ~\langle C,X\rangle \\\text {s.t.}~~&\mathrm {Tr}(X)=1. \end{aligned}$$
(16)

Note that for compact and convex uncertainty sets \(\mathcal {U}\), Sion's minimax theorem [17] allows us to interchange the \(\max\) and \(\min\) operators. Thus, the problem boils down to minimizing the maximal eigenvalue of an affine family of symmetric matrices, a problem with a wide range of applications, e.g., in the stability analysis of dynamical systems or the computation of structured singular values; see [7]. In the following example, we provide an instance with non-trivial (\(\mathcal {X}^{\mathrm {PRO}}\ne \mathcal {X}^{\mathrm {RO}}\)) Pareto robustly optimal solutions for this eigenvalue problem.

Example 2

Let \(C\in \mathcal {U}= \left\{ \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}+\mu \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}: \mu \in [0,1]\right\}\). Then, the matrix \(X'=\frac{1}{2}\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}\) is a robustly optimal solution to (16) since for every \(\mu \in [0,1]\) and \(X \in \mathcal {S}^n_{\succeq 0}\) with \(\mathrm {Tr}(X)=1\) we have:

$$\begin{aligned} \langle C, X\rangle&= \langle I_2,X\rangle +\mu \left\langle \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix},X\right\rangle \ge \langle I_2,X\rangle = 1. \end{aligned}$$

Note that the inequality holds because the matrix \(\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}\) is positive semidefinite. Thus, for every feasible X, \(\mu =0\) is the worst-case realization of the uncertainty. Consequently, every feasible solution X, in particular \(X'\), is a robustly optimal solution. However, \(X'\) Pareto dominates every other solution \(X\in \mathcal {X}^{\mathrm {RO}}\), since for every \(\mu >0\) and \(X\ne X'\), we have

$$\begin{aligned} \langle C, X\rangle = \langle I_2,X\rangle +\mu \left\langle \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix},X\right\rangle < 1 + \mu \left\langle \begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix},X'\right\rangle = \langle C, X'\rangle . \end{aligned}$$

We note that one could check \(X' \in \mathcal {X}^{\mathrm {PRO}}\) by an application of Proposition 1.
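Numerically, such a check can be sketched as follows (in Python with cvxpy): we solve the robust SDP (16) over the two vertices of \(\mathcal {U}\), which suffices since the objective is affine in \(\mu\), and then perform a Pareto refinement in the spirit of Theorem 2 with the relative-interior scenario \(\mu =0.5\); the tolerance 1e-6 is only a numerical safeguard:

```python
import cvxpy as cp
import numpy as np

# Example 2: robust eigenvalue SDP plus a Pareto refinement step.
M = np.array([[1.0, -1.0], [-1.0, 1.0]])
C = lambda mu: np.eye(2) + mu * M

X, t = cp.Variable((2, 2), symmetric=True), cp.Variable()
base = [X >> 0, cp.trace(X) == 1]
cp.Problem(cp.Maximize(t),
           base + [cp.trace(C(mu) @ X) >= t for mu in (0.0, 1.0)]).solve()
t_star = t.value  # robust optimal value, equal to 1

# Refinement: maximize the mid-scenario objective over the robust optima.
cp.Problem(cp.Maximize(cp.trace(C(0.5) @ X)),
           base + [cp.trace(C(mu) @ X) >= t_star - 1e-6
                   for mu in (0.0, 1.0)]).solve()
print(np.round(X.value, 3))  # approximately [[0.5, -0.5], [-0.5, 0.5]] = X'
```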

Note that the existence of more than one robustly optimal solution is non-trivial: for uncorrelated uncertainties, i.e., uncorrelated uncertainty sets for the entries of C, one often obtains a unique robustly optimal solution. In the above example, the uncertainties in the entries are linked through the matrix \(\begin{pmatrix} 1 & -1 \\ -1 & 1 \end{pmatrix}\) and are thus correlated.

4 Application II: Robust max-cut

The weighted max-cut problem is one of the fundamental combinatorial problems from Karp’s list of 21 NP-complete problems [10]. Given an undirected graph \(G=(V,E)\) equipped with a weight function \(w: E\rightarrow \mathbb {R}\), the task is to find a cut \(\delta (V')=\{e\in E: |e\cap V'|=1\}\) defined by \(V'\subseteq V\) with maximal weight, i.e.,

$$\begin{aligned} mc(G,w):=\max _{V'\subseteq V} \sum _{e\in \delta (V')}w_e = \max _{x\in \{-1,1\}^V} \frac{1}{4} x^\top L_w x, \end{aligned}$$

where \(L_w\) denotes the weighted Laplacian of the graph, i.e.,

$$\begin{aligned} L_w=\sum _{\{i,j\}\in E}w_{ij}E_{ij}'\text { with }E_{ij}'=E_{ii}+E_{jj}-2E_{ij}. \end{aligned}$$
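For concreteness, a small helper (a sketch with a hypothetical edge-list interface) that assembles \(L_w\) from this definition:

```python
import numpy as np

# Build the weighted Laplacian L_w; each edge {i,j} contributes w_ij to the
# diagonal entries (i,i), (j,j) and -w_ij to the off-diagonal entries (i,j)
# and (j,i), which is exactly w_ij * E'_ij.
def laplacian(n, edges, w):
    L = np.zeros((n, n))
    for (i, j), wij in zip(edges, w):
        L[i, i] += wij
        L[j, j] += wij
        L[i, j] -= wij
        L[j, i] -= wij
    return L

# Example: the triangle graph of Example 3 below at mu = 0
print(laplacian(3, [(0, 1), (0, 2), (1, 2)], [4.0, 4.0, 3.0]))
```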

In combinatorial optimization under uncertainty, it is common to restrict oneself to uncertainties in the objective in order to keep the structure of the underlying combinatorial problem; see [11] for a survey. In the remainder of this section, we consider uncertain weights, i.e., \(w\in \mathcal {Z}\subseteq \mathbb {R}^E\) for a convex and compact uncertainty set \(\mathcal {Z}\). Similar to [13], we define the robust counterpart of the uncertain weighted max-cut problem corresponding to \(mc(G,w)\) by

$$\begin{aligned} mc(G,\mathcal {Z})= \max _{x\in \{-1,1\}^V}\min _{w\in \mathcal {Z}} \frac{1}{4} x^\top L(w) x, \end{aligned}$$
(17)

where \(L(w)=\sum _{\{i,j\}\in E}w_{ij}E_{ij}'\) denotes the uncertain Laplacian. Note that the set \(\mathcal {U}=\{L(w): w\in \mathcal {Z}\}\) represents a more general uncertainty than (11) in the previous section. Again, we address the question of whether, for a given graph G, we can improve a robustly optimal solution to (17) in terms of Pareto dominance. In some instances, such as the \(\gamma\)-stable graphs introduced by Bilu and Linial [4], there exist solutions \({\hat{x}}\) that are not only Pareto optimal but moreover ensure that there is no \(\bar{x}\in \mathcal {X}\) for which there exists \(\bar{p}\in \mathcal {U}\) with \(f(\bar{x},\bar{p}) > f({\hat{x}},\bar{p})\). Although our techniques would apply to such instances, there are more efficient ways to compute these solutions. In general, however, graphs are not \(\gamma\)-stable, and hence we first demonstrate with the following example the existence of two optimal solutions to an instance of the robust weighted max-cut problem of which one Pareto dominates the other:

Example 3

Consider the complete graph with three nodes equipped with uncertain weights \(w_{12}(\mu )=w_{13}(\mu )=4+2\mu\) and \(w_{23}(\mu )=3+\mu\) that affinely depend on \(\mu\) with \(\mu \in [-1,1]\). We observe that

$$\begin{aligned} 8+4\mu =w(\delta (v_1))\ge w(\delta (v_2))=w(\delta (v_3))=7+3\mu , \end{aligned}$$

where equality holds if and only if \(\mu =-1\). Since \(\mu =-1\) describes the worst case for all three cuts, each of them is a robustly optimal solution. However, the cut \(\delta (v_1)\) Pareto dominates the other cuts, since \(w(\delta (v_1))>w(\delta (v_2))=w(\delta (v_3))\) whenever \(\mu >-1\).
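These values are easy to reproduce with a quick brute-force sketch (node \(v_1\) corresponds to index 0):

```python
import itertools

# Cut values of K_3 under the uncertain weights w(mu) of Example 3.
def cut_value(x, mu):
    w = {(0, 1): 4 + 2 * mu, (0, 2): 4 + 2 * mu, (1, 2): 3 + mu}
    # (1/4) x^T L(w) x = sum over edges of w_ij (1 - x_i x_j) / 2
    return sum(wij * (1 - x[i] * x[j]) / 2 for (i, j), wij in w.items())

for x in itertools.product([-1, 1], repeat=3):
    print(x, cut_value(x, -1.0), cut_value(x, 1.0))
# Every nonempty cut attains the worst-case value 4 at mu = -1, while
# delta(v_1), i.e. x = (1, -1, -1), yields 12 instead of 10 at mu = 1.
```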

In addition to Example 3, we briefly discuss pure interval uncertainty sets, which are commonly used in combinatorial optimization under uncertainty, cf. [11] and [5]. The following proposition shows that, in this case, Pareto dominance between robustly optimal solutions is only possible under very specific conditions.

Proposition 2

Consider Program (2) with \(\mathcal {X}\subseteq \{0,1\}^n\), \(f(x,p)=p^\top x\), interval uncertainty \(\mathcal {U}:=[\bar{p} - \Delta p, \bar{p}] \subseteq \mathbb {R}^n\), and let \(x^* \in \mathcal {X}^{\mathrm {RO}}\). Then, \(x^* + z\) with \(z \in \{-1,0,1\}^n\) Pareto dominates \(x^*\) if and only if

  • \(x^*+z \in \mathcal {X}^{\mathrm {RO}}\),

  • \(\{i\in [n]:z_i=-1\}\subseteq \{i\in [n]:\Delta p_i = 0\}\), and,

  • there exists at least one \(i \in [n]\) with \(z_i = 1\) and \(\Delta p_i > 0\).

Proof

Theorem 1 in [9], which in this case is equivalent to our Theorem 1, states that \(x^*\in \mathcal {X}^{\mathrm {RO}}\) is Pareto dominated by \(x^* + z^*\) if and only if, for an arbitrary \({\hat{p}} \in \mathrm {relint}(\mathcal {U})\), \(z^*\) is feasible to the program

$$\begin{aligned} \max _{z} ~&{\hat{p}}^\top z\\ \mathrm {s.t.} ~&z \in \mathcal {U}^*, \\&x^*+z \in \mathcal {X}, \end{aligned}$$
(18)

and its objective value is positive. We determine the dual cone:

$$\begin{aligned} z \in \mathcal {U}^*&\Leftrightarrow z^\top u \ge 0 \ \forall u \in \mathcal {U}, \nonumber \\&\Leftrightarrow \min _{u \in [\bar{p} - \Delta p, \bar{p}]} z^\top u \ge 0, \nonumber \\&\Leftrightarrow \max _{(y,s)\in \mathbb {R}_{\ge 0}^{2n}: \ y - s = z} (\bar{p} - \Delta p)^\top y - \bar{p}^\top s \ge 0, \nonumber \\&\Leftrightarrow \exists y \ge 0: \bar{p}^\top z - \Delta p^\top y \ge 0, \ y \ge z, \end{aligned}$$
(19)

where we apply strong duality to obtain (19). Since for all \(i \in [n]\), there exists \(\lambda _i \in (0,1)\), such that \({\hat{p}}_i = \bar{p}_i - \lambda _i\Delta p_i\), Program (18) is equivalent to

$$\begin{aligned} \begin{aligned} \max _{y,z}~& \sum _{i\in [n]} (\bar{p}_i - \lambda _i\Delta p_i)z_i\\ \mathrm {s.t.}~&\ \bar{p}^\top z - \Delta p^\top y \ge 0, \\&x^*+z \in \mathcal {X}, \\&y \ge z, \\&y \ge 0. \end{aligned} \end{aligned}$$
(20)

for the corresponding \(\lambda \in (0,1)^n\). Now, \(x^* + z^*\) Pareto dominates \(x^*\) if and only if there exists \(y^* \in \mathbb {R}^n\) such that \((y^*,z^*)\) is a feasible solution to Program (20) with positive objective value. Since \({\hat{p}} \in \mathrm {relint}(\mathcal {U})\) was arbitrary, this characterization holds for every \(\lambda \in (0,1)^n\). Using this property, we prove the proposition in the following.

We assume that \(x^* + z^*\) Pareto dominates \(x^*\). Thus, \(x^* + z^*, x^* \in \mathcal {X}^{\mathrm {RO}}\) and, in particular,

$$\begin{aligned} \min _{p \in \mathcal {U}} p^\top (x^* + z^*) = \min _{p \in \mathcal {U}} p^\top x^*. \end{aligned}$$
(21)

Since \(x^*\) and \(x^*+z^*\) are nonnegative, the worst-case uncertainty is attained at \(\bar{p} - \Delta p\). We obtain \((\bar{p} - \Delta p)^\top (x^* + z^*) = (\bar{p} - \Delta p)^\top x^*\), implying \(\bar{p}^\top z^* = \Delta p^\top z^*\). Thus, we can set \(y_i = |z^*_i|\), \(i \in [n]\), and \(z= z^*\) to obtain a feasible solution to (20) with objective value

$$\begin{aligned} \sum _{i \in [n]} (1 - \lambda _i) \Delta p_i z^*_i \end{aligned}$$
(22)

which is strictly positive for every \(\lambda \in (0,1)^n\) by Theorem 1. This implies

$$\begin{aligned} \sum _{i \in [n]} (1 - \lambda _i) \Delta p_i z^*_i \ge 0 \end{aligned}$$
(23)

for all \(\lambda \in [0,1]^n\). Thus, Inequality (23) also holds for \(\lambda = \sum _{j \in [n]\setminus \{i\}} e_j\) for every \(i \in [n]\). This implies \(\Delta p_i z^*_i \ge 0\) for all \(i \in [n]\) and thus, whenever \(z^*_i = -1\), \(\Delta p_i = 0\). Furthermore, (22) can only be positive if there exists an index \(i \in [n]\) with \(z^*_i = 1\) and \(\Delta p_i > 0\).

Proving the other direction is rather direct: \(x^* + z^* \in \mathcal {X}^{\mathrm {RO}}\) implies Equation (21), and \((y,z)\) with \(y_i = |z^*_i|\), \(i \in [n]\), and \(z= z^*\) is again a feasible solution to Program (20). Since the respective objective value is strictly positive for \(\lambda \in (0,1)^n\), \(x^* + z^*\) Pareto dominates \(x^*\). \(\square\)

We observe that \(x'\in \mathcal {X}^{\mathrm {RO}}\) Pareto dominates \(x\in \mathcal {X}^{\mathrm {RO}}\) only if there exists at least one index \(i\in [n]\) with \(x'_i=1\), \(x_i=0\), and \(\Delta p_i>0\), i.e., there is a scenario \(p \in \mathcal {U}\) with \(p_i>\bar{p}_i-\Delta p_i\) and \(p_j=\bar{p}_j-\Delta p_j\) for all \(j\ne i\) that increases the objective value of \(x'\), but not of x, compared to the worst case. Additionally, all indices \(i\in [n]\) with \(x_i=1\) and \(x'_i=0\) cannot be affected by the uncertainty. This second observation leads to the following corollary.

Corollary 1

Consider the setting of Proposition 2. If \(\Delta p > 0\), a solution \(x\in \mathcal {X}^{\mathrm {RO}}\) is Pareto dominated by another solution \(x'\in {\mathcal {X}}\) if and only if

  • \(\{i\in [n]:x_i=1\}\subsetneq \{i\in [n]:x'_i=1\}\), and,

  • \(\Delta p_j=\bar{p}_j \text { for all }j\in \{i\in [n]:x'_i=1\}\setminus \{i\in [n]:x_i=1\}\).

If, in addition to \(\Delta p > 0\), we have \(\Delta p_i\ne \bar{p}_i\) for all \(i\in [n]\), then \(\mathcal {X}^{\mathrm {RO}}= \mathcal {X}^{\mathrm {PRO}}\).
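The combinatorial conditions of Corollary 1 translate into a simple test; the following hypothetical helper checks only the two bullet conditions and assumes \(\Delta p > 0\) entrywise as well as \(x\in \mathcal {X}^{\mathrm {RO}}\) and \(x'\in \mathcal {X}\):

```python
import numpy as np

# Corollary 1: x is dominated by x_new iff supp(x) is a strict subset of
# supp(x_new) and every added index j satisfies Delta p_j = pbar_j.
def pareto_dominated_by(x, x_new, p_bar, dp):
    s, s_new = set(np.flatnonzero(x)), set(np.flatnonzero(x_new))
    return s < s_new and all(np.isclose(dp[j], p_bar[j]) for j in s_new - s)
```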

Since max-cut can be phrased as a binary program via the cut polytope, the statements above hold true for the robust max-cut problem with uncorrelated uncertainties. Although the nominal max-cut problem is widely studied in the literature, its robust counterpart is, to the best of our knowledge, not well investigated. For the nominal case, the famous algorithm of Goemans and Williamson [8] enables us to compute a cut that satisfies an \(\alpha\)-approximation ratio with \(\alpha =0.878\ldots\). Moreover, if Khot's unique games conjecture [12] holds, this is the best approximation ratio achievable by a polynomial-time algorithm. In the remainder of this section, we first derive robustly optimal cuts with the same approximation ratio and then apply our results from Sect. 2 to compute new cuts with improved approximation guarantees if the worst-case uncertainty is not attained. To this end, we consider the SDP relaxation of (17):

$$\begin{aligned} sdp(G,\mathcal {Z})=\max _{Y \in \mathcal {S}^n_{\succeq 0}} ~&\min _{w\in \mathcal {Z}} ~\left\langle \frac{1}{4}L(w),Y\right\rangle \\\mathrm {s.t.} ~~& \langle E_{ii}, Y\rangle = 1 \quad \forall i\in [n].\\ \end{aligned}$$
(24)

If the inner problem in (24) is a tractable conic program, such as an LP or SDP, it can often be dualized, and we can then compute a robustly optimal solution to (24) by solving the resulting SDP. This solution can be used to compute a cut via the Goemans-Williamson algorithm that guarantees the same approximation ratio for the robust max-cut problem.
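As an illustration of this dualization, the following sketch (in Python with cvxpy; the interface is hypothetical) solves the robust relaxation (24) for a polyhedral uncertainty set \(\mathcal {Z}=\{w: Gw\le h\}\), assumed nonempty and bounded so that LP strong duality applies:

```python
import cvxpy as cp
import numpy as np

# Robust max-cut SDP (24) with Z = {w : G w <= h}: the inner minimization
# min_w c(Y)^T w is replaced by its LP dual max{-h^T lam : G^T lam = -c(Y)}.
def robust_maxcut_sdp(n, edges, G, h):
    Y = cp.Variable((n, n), symmetric=True)
    lam = cp.Variable(G.shape[0], nonneg=True)
    # c(Y)_e = <L(e), Y>/4 for the unit-weight Laplacian of edge e = {i, j}
    c = cp.hstack([(Y[i, i] + Y[j, j] - 2 * Y[i, j]) / 4 for (i, j) in edges])
    cons = [Y >> 0, cp.diag(Y) == 1, G.T @ lam == -c]
    prob = cp.Problem(cp.Maximize(-h @ lam), cons)
    prob.solve()
    return Y.value, prob.value
```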

Proposition 3

Let \(w\ge 0\) for every \(w\in \mathcal {Z}\) and let \(\bar{Y}\) be a robustly optimal solution to (24). Then,

$$\begin{aligned} \min _{w\in \mathcal {Z}} \left\langle \frac{L(w)}{4},\bar{Y}\right\rangle = sdp(G,\mathcal {Z})\ge mc(G,\mathcal {Z}) \ge 0.878\ldots sdp(G,\mathcal {Z}). \end{aligned}$$

Proof

The first inequality follows by a simple relaxation argument. For the second inequality, we closely follow the arguments of Goemans and Williamson [8]:

Let \(\bar{y_k}\) denote the columns of the Cholesky decomposition of \(\bar{Y}\). Then, we observe that \(x\in \{-1,1\}^V\) defined by \(x_k=\text {sign}(\bar{y_k}^\top r)\) forms a cut in G. The proof of Goemans and Williamson then relies on the fact that for vectors \(r\in S^{n-1}\) drawn from the rotationally invariant probability distribution on the unit sphere and their corresponding cuts, we have that

$$\begin{aligned} \mathbb {E}\left( 1-x_ix_j\right) \ge 0.878\ldots (1-\bar{y_i}^\top \bar{y_j}) \quad \forall \{i,j\}\in E. \end{aligned}$$

Finally, we conclude

$$\begin{aligned} \mathbb {E}\left( \min _{w\in \mathcal {Z}} \frac{1}{4} x^\top L(w) x\right)&= \mathbb {E}\left( \min _{w\in \mathcal {Z}}\frac{1}{4}\sum _{\{i,j\}\in E} w_{ij}(1-x_ix_j)\right) \\&= \min _{w\in \mathcal {Z}}\frac{1}{4}\sum _{\{i,j\}\in E} w_{ij}\mathbb {E}\left( 1-x_ix_j\right) \\&\ge 0.878\ldots \min _{w\in \mathcal {Z}}\frac{1}{4}\sum _{\{i,j\}\in E}w_{ij} (1-\bar{y_i}^\top \bar{y_j})\\&= 0.878\ldots sdp(G,\mathcal {Z}). \end{aligned}$$

\(\square\)
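For completeness, the hyperplane-rounding step used in this proof can be sketched as follows (standard Goemans-Williamson rounding; we use an eigendecomposition instead of a Cholesky factorization so that singular positive semidefinite matrices are handled as well):

```python
import numpy as np

# Random-hyperplane rounding: factor Y = V V^T, draw a random direction r,
# and take the signs of the projections of the rows of V onto r.
def gw_round(Y, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    vals, vecs = np.linalg.eigh(Y)
    V = vecs * np.sqrt(np.clip(vals, 0.0, None))  # row k of V is the vector y_k
    x = np.sign(V @ rng.standard_normal(Y.shape[0]))
    x[x == 0] = 1                                 # break (measure-zero) ties
    return x
```

Applying gw_round to a robustly optimal \(\bar{Y}\) from (24) yields, by Proposition 3, a cut whose expected worst-case weight is at least \(0.878\ldots sdp(G,\mathcal {Z})\).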

It is worth noting that similar approximation results are already known; see, e.g., [11]. We observe that the quality of a cut in a graph with uncertain edge weights may rely not only on its performance in a worst-case scenario but also on its performance in every other scenario \(w\in \mathcal {Z}\). Hence, we show that a Pareto optimal solution \(Y^*\) to (24) outperforms any robustly optimal solution \(\bar{Y}\) of \(sdp(G,\mathcal {Z})\) that it Pareto dominates, in terms of the approximation ratio of the corresponding cuts:

Proposition 4

Let \(Y^*\) Pareto dominate \(\bar{Y}\) for (24), and let \(x^*\) and \(\bar{x}\) denote the cuts derived from \(Y^*\) and \(\bar{Y}\), respectively, via the Goemans-Williamson algorithm. Denote

$$\begin{aligned} sdp(G,w,Y)=\frac{1}{4}\sum _{\{i,j\}\in E}w_{ij}(1-y_i^\top y_j). \end{aligned}$$

Then, for every \(w\in \mathcal {Z}\) we have

$$\begin{aligned} mc(G,w)\ge 0.878...sdp(G,w,Y^*) \ge 0.878...sdp(G,w,\bar{Y}) \end{aligned}$$

and there exists a \(w\in \mathcal {Z}\) for which the last inequality holds strictly.

Proof

$$\begin{aligned} \mathbb {E}\left( \frac{1}{4}\sum _{\{i,j\}\in E}w_{ij}(1-x_ix_j)\right)&= \frac{1}{4}\sum _{\{i,j\}\in E}w_{ij}\mathbb {E}\left( 1-x_ix_j\right) \\&\ge 0.878\ldots \frac{1}{4}\sum _{\{i,j\}\in E}w_{ij}(1-(y_i^*)^\top y_j^*)\\&\ge 0.878\ldots \frac{1}{4}\sum _{\{i,j\}\in E}w_{ij}(1-\bar{y_i}^\top \bar{y_j}), \end{aligned}$$

where the last inequality, and its strict counterpart for at least one realization of the uncertain parameter, follows from the Pareto dominance of \(Y^*\). \(\square\)

5 Conclusion

In this paper, we generalized the methods introduced in [9] for determining Pareto robustly optimal solutions of linear programs with an uncertain objective to general optimization problems whose objective is affected affinely by the uncertainty. Moreover, we proved the tractability of these methods in the case of semidefinite programming with matrix box uncertainties and illustrated their use with the examples of the maximal eigenvalue of an affine family of matrices and the classical max-cut problem.