Introduction

Many real-world decision problems arising in engineering and management depend on uncertain parameters. The parameters’ uncertainty may be due to limited observability of data, noisy measurements, or implementation and prediction errors. Stochastic optimization and robust optimization are the frameworks classically used to model this uncertainty within a decision-making setting.

Stochastic optimization assumes that the decision maker has complete knowledge of the underlying uncertainty through a known probability distribution. The probability distribution of the random parameters is inferred from prior beliefs, experts’ opinions, errors in predictions based on historical data, or a mixture of these.

In robust optimization, instead, no assumption on the distribution of the parameters is required. A robust solution is defined by introducing a different optimization problem, known as the robust counterpart, that allows one to find a “worst-case-oriented” optimal solution. Robust optimization is proving to be a fruitful tool for studying problems with uncertain data. Since the seminal paper by Ben-Tal and Nemirovski (1998), several authors have studied the problem both in scalar and multiobjective settings [see e.g. (Hayashi et al., 2013; Skanda & Lebiedz, 2013; Souyris et al., 2013; Suzuki et al., 2013; Goh & Sim, 2011)]. More recently, a detailed monograph has been devoted to the topic (Ben-Tal & El Ghaoui, 2009), and the survey papers by Gabrel et al. (2014) and Bertsimas et al. (2011) collected the major issues and applications of scalar robust optimization. The need for such a tool arises when a constrained optimization problem depends upon uncertain parameters that may affect the objective function and/or the constraints. This occurs in many real-world applications of optimization in industry, energy markets, and finance, to name a few fields [see e.g. (Hu et al., 2011; Hassanzadeh et al., 2014; Aouam et al., 2016; Zugno & Conejo, 2015; Gregory et al., 2011) and the references therein], due to unknown future developments, measurement or manufacturing errors, incomplete information in model development, and so on. In such circumstances stochastic optimization is often applied, but this approach requires the choice of a probability distribution that can hardly be motivated other than by the technical tractability of the resulting problem.

Robust optimization has also been extended to multiobjective problems, see e.g. Kuroiwa and Lee (2012); Crespi et al. (2017), and several theoretical issues have been investigated by means of the componentwise minmax approach. Financial applications of the minmax approach to robust multiobjective optimization can be found e.g. in Schöttle and Werner (2009) and Fliege and Werner (2014), where robust portfolio selection is investigated.

We remark that a different way to deal with robust multiobjective optimization problems arises from the observation that the so-called robust solutions of a multiobjective optimization problem are deeply related to solutions of a set optimization problem [see e.g. (Ehrgott et al., 2014; Ide et al., 2014; Crespi et al., 2017)].

In this paper we deal with the minmax approach to robust multiobjective optimization. We survey the main notions and results related to this approach, with particular reference to linear scalarization and optimality conditions.

Then we investigate the sensitivity of the solutions of robust multiobjective optimization problems with respect to variations of the uncertainty set. We first recall results about the sensitivity of the optimal values with respect to changes in the uncertainty set. In particular, we observe that when robust solutions of a multiobjective optimization problem are considered, a “loss of efficiency” occurs with respect to the solution obtained in the “nominal” problem, i.e. the problem in which the uncertain parameters assume a fixed value that can be an estimate of the “true” value [see e.g. Ben-Tal and Nemirovski (1998)]. In the scalar case, assuming a minimization problem, this simply means that the robust optimal value is greater than or equal to the optimal value of the nominal problem and that robust solutions are \(\epsilon \)-solutions of the nominal problem. We estimate the location of the efficient frontiers of the nominal problem and of the robust problem, and the related efficiency loss, through set distances, according to the shape of the uncertainty set. Thereafter, we prove results concerning the sensitivity of the optimal solutions with respect to changes in the uncertainty set.

Finally, we consider applications of the presented results to mean-variance portfolio optimization.

The paper is organized as follows. In Sect. 2 we recall the formulation of a Robust Multiobjective Optimization Problem (RMP). In Sect. 3 we recall results about linear scalarization of a RMP and we give optimality conditions under convexity assumptions. In Sect. 4 we first recall results about sensitivity of optimal values (efficient frontiers) of a RMP with respect to changes in the uncertainty set. Then we prove results about the sensitivity of optimal solutions with respect to changes in the uncertainty set. Section 5 gives an application of the presented results to Mean-Variance Portfolio Optimization. Finally, Sect. 6 concludes the paper with some suggestions for future research.

Robust multiobjective optimization: problem formulation

Throughout this paper \({{\mathbb {R}}}^n\) denotes the Euclidean space of dimension n. Given a lower bounded function \(g: {{\mathbb {R}}}^n\rightarrow {{\mathbb {R}}}\) and a closed convex set \(X \subseteq {{\mathbb {R}}}^n\), consider the scalar optimization problem

$$\begin{aligned} \inf _{x\in X} g(x) \end{aligned}$$
(P)

A point \(x^0 \in X\) is a solution of problem (P) when \(g(x^0)= \inf _{x\in X} g(x)\). We consider now an uncertain optimization problem

$$\begin{aligned} \inf _{x\in X} f(x,u) \end{aligned}$$
(UP)

where \(f:{{\mathbb {R}}}^n\times {{\mathbb {R}}}^p\rightarrow {{\mathbb {R}}}\), u is an uncertain parameter, with \(u\in {\mathcal {U}}\) for some convex compact set \({\mathcal {U}}\subseteq {{\mathbb {R}}}^p\). We assume f is continuous w.r.t. u and \(f(\cdot , u)\) is lower bounded on X for every \(u\in {\mathcal {U}}\).

Problem (UP) has been extensively studied in the literature (see e.g. Ben-Tal and El Ghaoui (2009) and the references therein). Following (Ben-Tal & El Ghaoui, 2009), we associate to problem (UP) the Robust Optimization Problem

$$\begin{aligned} \inf _{x\in X} \max _{u\in {\mathcal {U}}}{f(x,u)} \end{aligned}$$
(RP)

Problem (RP) describes a worst-case oriented attitude of the decision maker and is called the robust counterpart of problem (UP).
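
To fix ideas, the following minimal Python sketch computes a robust solution of (RP) by brute force for an illustrative toy instance (the objective \(f(x,u)=(x-u)^2\), the sets \(X=[-2,2]\), \({\mathcal {U}}=[-1,1]\) and the grid-based approach are assumptions made only for this illustration).

```python
import numpy as np

# Toy instance of (RP): f(x, u) = (x - u)^2, X = [-2, 2], U = [-1, 1].
# All names and data here are illustrative, not notation fixed by the paper.
def f(x, u):
    return (x - u) ** 2

X_grid = np.linspace(-2.0, 2.0, 401)   # discretization of the feasible set X
U_grid = np.linspace(-1.0, 1.0, 201)   # discretization of the uncertainty set U

# Worst-case objective max_{u in U} f(x, u) for each candidate x.
worst_case = np.array([f(x, U_grid).max() for x in X_grid])

# Robust solution: minimize the worst-case objective over X.
i_star = worst_case.argmin()
print("robust x:", X_grid[i_star], "worst-case value:", worst_case[i_star])
# For this toy instance the robust solution is x = 0 with worst-case value 1.
```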

In order to extend problem (RP) to the multiobjective case we recall some basic notions in multiobjective optimization [see e.g. Sawaragi et al. (1985)]. We consider the problem

$$\begin{aligned} \min _{x\in X} g(x) \end{aligned}$$
(MP)

where \(g(x)= (g_1(x), \ldots , g_m(x))\) with \(g_i:{{\mathbb {R}}}^n\rightarrow {{\mathbb {R}}}\), \(i=1, \ldots , m\) and \(X \subseteq {{\mathbb {R}}}^n\) is a closed convex set.

A point \(x^0\in X\) is said to be a (Pareto) efficient solution of problem (MP) when

$$\begin{aligned} (\mathrm{Im}(g)-g(x^0)) \cap (-{{\mathbb {R}}}^m_+)=\{0\} \end{aligned}$$

where \(\mathrm{Im}(g)\) is the image of g, or equivalently, there does not exist \(x\in X\) such that \(g(x)\le g(x^0)\) and \(g(x)\ne g(x^0)\), where \(a\le b\) if \(a\in b-{{\mathbb {R}}}^m_+\).

A point \(x^0\) is said to be a weakly efficient solution of problem (MP) when

$$\begin{aligned} (\mathrm{Im}(g)-g(x^0)) \cap (-\mathrm{int}{{\mathbb {R}}}^m_+)=\emptyset \end{aligned}$$

or equivalently, there does not exist \(x\in X\) such that \(g(x)< g(x^0)\), where \(a< b\) if \(a\in b-{\mathrm{int\,}}{{\mathbb {R}}}^m_+\).

A point \(x^0\) is said to be a properly efficient solution of problem (MP) if it is an efficient solution and there exists a number \(M>0\) such that for all \(i=1, \ldots , m\) and \(x \in X\) satisfying \(g_i(x)< g_i(x^0)\) there exists an index j such that \(g_j(x^0) < g_j(x)\) and

$$\begin{aligned} \frac{g_i(x^0)-g_i(x)}{g_j(x)-g_j(x^0)}\le M \end{aligned}$$
(1)

The set of efficient (resp. weakly efficient, properly efficient) solutions of problem (MP) is denoted by \(\mathrm{Eff}(\mathrm{MP})\) (resp. \(\mathrm{WEff}(\mathrm{MP})\), \(\mathrm{PEff(\mathrm{MP})}\)). We will set

$$\begin{aligned}&\mathrm{Min}(\mathrm{MP})= \{g(x): x \in \mathrm{Eff}(\mathrm{MP})\};\\&\mathrm{WMin}(\mathrm{MP})= \{g(x): x \in \mathrm{WEff}(\mathrm{MP})\};\\&\mathrm{PMin}(\mathrm{MP})= \{g(x): x \in \mathrm{PEff}(\mathrm{MP})\} \end{aligned}$$

Clearly \(\mathrm{PEff(\mathrm{MP})} \subseteq \mathrm{Eff}(\mathrm{MP})\subseteq \mathrm{WEff}(\mathrm{MP})\) and \(\mathrm{PMin(\mathrm{MP})} \subseteq \mathrm{Min}(\mathrm{MP})\subseteq \mathrm{WMin}(\mathrm{MP})\).

Now we consider an uncertain multiobjective optimization problem

$$\begin{aligned} \min _{x\in X} \left( f_1(x,u_1),\ldots ,f_m(x,u_m)\right) \end{aligned}$$
(UMP)

where \(f_i:{{\mathbb {R}}}^n\times {{\mathbb {R}}}^p\rightarrow {{\mathbb {R}}}\), \(i=1,\ldots , m\), are continuous with respect to \(u_i\), with \(u_i\in {\mathcal {U}}_i\) for some convex compact sets \({\mathcal {U}}_i\subseteq {{\mathbb {R}}}^p\), \(i=1, \ldots , m\).

The robust counterpart of (UMP) is defined as [see e.g. Kuroiwa and Lee (2012), Kuroiwa and Lee (2014)]

$$\begin{aligned} \min _{x\in X} \left( \max _{u_1\in \mathcal U_1}{f_1(x,u_1)},\ldots ,\max _{u_m\in \mathcal U_m}{f_m(x,u_m)}\right) \end{aligned}$$
(RMP)

A robust efficient (weakly efficient, properly efficient) solution of (UMP) is defined as a vector \(x^0\in X\) that is an efficient (weakly efficient, properly efficient) solution of (RMP).
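
On a finite candidate grid, robust (weakly) efficient solutions of (UMP) can be approximated by evaluating the worst-case objective vector of (RMP) and discarding dominated points. The following Python sketch assumes toy objectives and sampled uncertainty sets; it is an illustration, not a general-purpose solver.

```python
import numpy as np

# Toy bi-objective instance of (RMP) on X = [0, 2] (discretized); assumed data.
U1 = np.linspace(0.8, 1.2, 41)     # sample of U_1 (multiplicative uncertainty)
U2 = np.linspace(-0.2, 0.2, 41)    # sample of U_2 (additive uncertainty)

def f1(x, u1): return u1 * x ** 2
def f2(x, u2): return (x - 1.0 + u2) ** 2

X_grid = np.linspace(0.0, 2.0, 201)

# Robust (worst-case) objective vector for every candidate x.
F = np.array([[f1(x, U1).max(), f2(x, U2).max()] for x in X_grid])

def pareto_mask(F):
    """Boolean mask of points whose objective vector is not dominated (minimization)."""
    eff = np.ones(len(F), dtype=bool)
    for i, fi in enumerate(F):
        if eff[i]:
            dominated = np.all(F >= fi, axis=1) & np.any(F > fi, axis=1)
            eff[dominated] = False
    return eff

mask = pareto_mask(F)
print("robust efficient candidates between", X_grid[mask].min(), "and", X_grid[mask].max())
```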

Robust multiobjective optimization: scalarization and optimality conditions

In this section we recall linear scalarization methods for finding robust properly efficient solutions and robust weakly efficient solutions of problem (UMP). As a consequence of these results it is possible to prove optimality conditions for robust multiobjective optimization problems. We recall that, given the multiobjective optimization problem (MP) the following characterizations of weakly and properly efficient solutions by means of linear scalarization hold (see e.g. Sawaragi et al. (1985)).

Theorem 3.1

  1. (i)

    If there exist numbers \(\beta _i \ge 0\), \(i=1, \ldots , m\), not all zero, such that \(x^0\in X\) minimizes function

    $$\begin{aligned} \sum _{i=1}^{m}\beta _i g_i(x) \end{aligned}$$
    (2)

    then \(x^0 \in \mathrm{WEff (MP)}\). If functions \(g_i\) are convex, \(i=1, \ldots , m\) and \(x^0 \in \mathrm{WEff (MP)}\), then there exist numbers \(\beta _i \ge 0\), \(i=1, \ldots , m\), not all zero, such that \(x^0\) minimizes function (2).

  2. (ii)

    If there exist numbers \(\beta _i >0 \), \(i=1, \ldots , m\) such that \(x^0\in X\) minimizes function (2), then \(x^0 \in \mathrm{PEff (MP)}\). If functions \(g_i\) are convex, \(i=1, \ldots , m\) and \(x^0 \in \mathrm{PEff (MP)}\), then there exist numbers \(\beta _i > 0\), \(i=1, \ldots , m\) such that \(x^0\) minimizes function (2).
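
Computationally, Theorem 3.1 suggests the standard recipe of sweeping the weights \(\beta _i\) and minimizing the scalarized objective (2). The following Python sketch does this for an illustrative convex bi-objective problem; the objectives, the weight grid and the use of scipy.optimize.minimize are assumptions made only for the illustration.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative convex objectives g_1, g_2 on X = R^2 (closed and convex).
def g1(x): return (x[0] - 1.0) ** 2 + x[1] ** 2
def g2(x): return x[0] ** 2 + (x[1] - 1.0) ** 2

frontier = []
for b in np.linspace(0.05, 0.95, 19):                 # beta_1 = b, beta_2 = 1 - b, both > 0
    res = minimize(lambda x: b * g1(x) + (1.0 - b) * g2(x), x0=np.zeros(2))
    # By Theorem 3.1(ii), each minimizer is a properly efficient solution of (MP).
    frontier.append((g1(res.x), g2(res.x)))

print(np.round(frontier[0], 3), np.round(frontier[-1], 3))
```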

The next result, due to Kuroiwa and Lee (2012), extends Theorem 3.1 to robust multiobjective optimization problems. We set

$$\begin{aligned} {{\mathcal {U}}}_i(x^0)=\{u_i \in {{\mathcal {U}}}_i: f_i(x^0, u_i)= \max _{u_i \in {{\mathcal {U}}}_i} f_i(x^0, u_i)\} \end{aligned}$$
(3)

Theorem 3.2

In problem (UMP) assume \(f_i(x, u_i)\) are convex with respect to \(x\in X\) and concave with respect to \(u_i \in {{{\mathcal {U}}}}_i\).

  1. (i)

    \(x^0 \in X\) is a robust properly efficient solution for problem (UMP) if and only if there exist numbers \(\lambda _i^0>0\), and vectors \(u_i^0\in {{{\mathcal {U}}}}_i(x^0)\), \(i=1, \ldots , m\), such that

    $$\begin{aligned} \sum _{i=1}^m\lambda _i^0f_i(x^0, u_i^0) \le \sum _{i=1}^m\lambda _i^0f_i(x, u_i^0), \ \ \forall x \in X \end{aligned}$$
    (4)

    i.e. \(x^0\) minimizes the function \(\sum _{i=1}^m\lambda _i^0f_i(x, u_i^0)\) over X. This is equivalent to saying that \(x^0\) is properly efficient for the problem of minimizing

    $$\begin{aligned} (f_1(x, u_1^0), \ldots , f_m(x, u_m^0)) \end{aligned}$$
    (5)
  2. (ii)

    \(x^0\in X\) is a robust weakly efficient solution for problem (UMP) if and only if there exist numbers \(\lambda _i^0\ge 0\), not all zero, and vectors \(u_i^0 \in \mathcal{U}_i\), \(i=1, \ldots , m\), with \(u_i^0\in {{{\mathcal {U}}}}_i(x^0)\) when \( \lambda _i^0>0\), such that

    $$\begin{aligned} \sum _{i=1}^m\lambda _i^0f_i(x^0, u_i^0) \le \sum _{i=1}^m\lambda _i^0f_i(x, u_i^0), \ \ \forall x \in X \end{aligned}$$
    (6)

    i.e. \(x^0\) minimizes the function \(\sum _{i=1}^m \lambda _i^0f_i(x, u_i^0)\) over X. This is equivalent to saying that \(x^0\) is weakly efficient for the problem of minimizing the multiobjective function (5).

The next result is an immediate consequence of Theorem 3.2. We denote by \(\partial _{x}f_i(x^0, u_i)\) the subdifferential of the function \(f_i(\cdot , u_i)\) with respect to x at \(x^0 \in X\), i.e.

$$\begin{aligned} \partial _{x}f_i(x^0, u_i)=\{v \in {{\mathbb {R}}}^n: f_i(x, u_i) \ge f_i(x^0, u_i) + \langle v , x-x^0\rangle , \ \forall x \in {{\mathbb {R}}}^n\} \end{aligned}$$
(7)

and by \(N_X(x^0)\) the normal cone to the set X at the point \(x^0 \in X\), i.e.

$$\begin{aligned} N_X(x^0)=\{v \in {{\mathbb {R}}}^n: \langle v, x-x^0 \rangle \le 0, \ \forall x \in X \} \end{aligned}$$
(8)

Theorem 3.3

In problem (UMP) assume \(f_i(x, u_i)\) are convex with respect to \(x\in X\) and concave with respect to \(u_i \in {{{\mathcal {U}}}}_i\).

  1. (i)

    A point \(x^0 \in X\) is a robust properly efficient solution for problem (UMP) if and only if there exist \(\lambda _i^0>0\), \(u_i^0 \in {{{\mathcal {U}}}}_i(x^0)\), \(i= 1, \ldots , m\), such that

    $$\begin{aligned} 0 \in \sum _{i=1}^m \lambda _i^0 \partial _x f_i(x^0 , u_i^0) + N_X(x^0) \end{aligned}$$
    (9)
  2. (ii)

    A point \(x^0 \in X\) is a robust weakly efficient solution for problem (UMP) if and only if there exist \(\lambda _i^0\ge 0\), not all zero, and \( u_i^0 \in {{{\mathcal {U}}}}_i\), \(i= 1, \ldots , m\), with \(u_i^0 \in {{{\mathcal {U}}}}_i(x^0)\) when \(\lambda _i^0 >0\), such that

    $$\begin{aligned} 0 \in \sum _{i=1}^m \lambda _i^0 \partial _x f_i(x^0 , u_i^0) + N_X(x^0) \end{aligned}$$
    (10)

Proof

The proof is an immediate consequence of Theorem 3.2, the necessary and sufficient optimality conditions for scalar convex optimization and the sum rule for subdifferentials [see e.g. Rockafellar (1970)], and is therefore omitted. \(\square \)

Robust multiobjective optimization: sensitivity to uncertainty

Various degrees of uncertainty can occur for the same objective function. Here we also introduce compact convex subsets \({\mathcal U}_i^0 \subseteq {\mathcal {U}}_i\) representing the nominal instances of our robust optimization problem, i.e. the least achievable uncertainty. Moreover, we consider the sets

$$\begin{aligned} {\mathcal {W}}_i^{\lambda }:=\left( 1-\lambda \right) {\mathcal {U}}_i^0+\lambda {\mathcal {U}}_i \end{aligned}$$
(11)

where \(\lambda \in \left[ 0,1\right] \). Clearly

$$\begin{aligned} \begin{array}{ll} {\mathcal {W}}_i^{1}={\mathcal {U}}_i &{} \hbox {(high uncertainty)}\\ {\mathcal {W}}_i^{0}={\mathcal {U}}_i^0 &{} \hbox {(low uncertainty)} \end{array} \end{aligned}$$

An extreme, yet meaningful situation is represented by \(\mathcal {U}_i^0= \{u_i^0\}\) that depicts the absence of uncertainty.
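
For sampled (finite) uncertainty sets, the Minkowski combination (11) is immediate to generate, as in the following Python sketch (the sample sizes and the sets themselves are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite samples of the nominal set U_i^0 and of the full set U_i (assumed toy data).
U0_samples = np.array([[0.0, 0.0]])                 # nominal instance: a singleton
U_samples = rng.uniform(-1.0, 1.0, size=(500, 2))   # sample of the larger set U_i

def W_lambda(lam, U0, U):
    """Sample of W_i^lambda = (1 - lam) * U_i^0 + lam * U_i (Minkowski combination)."""
    return ((1.0 - lam) * U0[:, None, :] + lam * U[None, :, :]).reshape(-1, U.shape[1])

for lam in (0.0, 0.5, 1.0):
    W = W_lambda(lam, U0_samples, U_samples)
    print(lam, "largest norm in W^lambda:", round(float(np.linalg.norm(W, axis=1).max()), 3))
# lam = 0 collapses to the nominal set, lam = 1 recovers the full set U_i.
```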

Sensitivity of the optimal values

In this subsection we survey results regarding the sensitivity of the set \({\mathrm{WMin\,}}(\mathrm{RMP}^{\lambda })\) with respect to changes in the uncertainty set described by variations of the parameter \(\lambda \) [see Crespi et al. (2018)].

We assume \(\max _{u_i\in {\mathcal {U}}_i^0}{f_i(\cdot ,u_i)}\) is lower bounded on X, and we set

$$\begin{aligned} E_{f_i} \left( x\right) =\max _{u_i\in {\mathcal {U}}_i}{f_i(x,u_i)}-\max _{u_i\in {\mathcal {U}}^0_i}{f_i(x,u_i)} \end{aligned}$$
(12)

and

$$\begin{aligned}&{\overline{E}}_{f_i}=\sup _{x\in X}E_{f_i}(x) \end{aligned}$$
(13)
$$\begin{aligned}&{\underline{E}}_{f_i}=\inf _{x\in X}E_{f_i}(x) \end{aligned}$$
(14)

assuming \({\overline{E}}_{f_i}\) is finite, \(i=1, \ldots , m\). Clearly, since \({\mathcal {U}}_i^0\subseteq {\mathcal {U}}_i\), \(E_{f_i}\left( x\right) \ge 0\), \(\forall x\in X\).

Remark 4.1

Simple calculations show that \({E}_{f_i}(x)\) takes the following particular forms; a numerical check of case (i) is sketched after this remark.

  1. (i)

    Assume \({\mathcal {U}}_i^0=\{u_i^0\}\). If \(f_i(x,u_i)=\langle f_i(x), u_i\rangle + h_i(x)\), with \(f_i:{{\mathbb {R}}}^n\rightarrow {{\mathbb {R}}}^p\), \(h_i: {{\mathbb {R}}}^n \rightarrow {{\mathbb {R}}}\), it holds

    $$\begin{aligned} E_{f_i}(x)=\max _{u_i^\prime \in {\mathcal {U}}_i-u_i^0}{\langle f_i(x),u_i^\prime \rangle }, \end{aligned}$$

    which, for \({\mathcal {U}}_i={\mathcal {B}}(u_i^0)\) (the closed unit ball in \({{\mathbb {R}}}^p\) centered at \(u_i^0\)) entails

    $$\begin{aligned} E_{f_i}(x)=\max _{b\in {\mathcal {B}}(0)}{\langle f_i(x),b\rangle }=\Vert f_i(x)\Vert \end{aligned}$$

    If \(f_i(x,u_i)=\langle x,u_i\rangle +h_i(x)\) and \({\mathcal {U}}_i={\mathcal {B}}(u_i^0)\), it holds

    $$\begin{aligned} E_{f_i}(x)=\Vert x\Vert . \end{aligned}$$
  2. (ii)

    Assume \(f_i(x,u_i)=\langle x,u_i\rangle +h_i(x)\) and consider uncertainty sets of ellipsoidal type. Set

    $$\begin{aligned} {{\mathcal {U}}}= \{u=(u_1, \ldots , u_m): \sum _{i=1}^m c_i\Vert u_i-u_i^0\Vert \le \delta \} \end{aligned}$$
    (15)

    where \(\delta >0\) and \(c_i>0\), \(i=1, \ldots , m\) and let \({\mathcal U}_i\) be the projection of \({{\mathcal {U}}}\) on the i-th component. Then we have

    $$\begin{aligned} E_{f_i}(x)=\frac{\delta }{c_i}\Vert x\Vert . \end{aligned}$$
    (16)
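
As a quick numerical sanity check of case (i) of Remark 4.1, one can verify by sampling that \(\max _{b\in {\mathcal {B}}(0)}\langle x, b\rangle =\Vert x\Vert \). The sketch below does this for illustrative data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Check of Remark 4.1(i): for a linear dependence on u_i and U_i the unit ball around u_i^0,
# E_{f_i}(x) equals the maximum of <x, b> over the unit ball, i.e. ||x||.  Toy vector x below.
x = np.array([0.5, -1.0, 2.0])
B = rng.normal(size=(20000, 3))
B /= np.linalg.norm(B, axis=1, keepdims=True)   # directions on the unit sphere

sampled_max = (B @ x).max()                     # approximates the maximum of <x, b>
print(round(float(sampled_max), 4), round(float(np.linalg.norm(x)), 4))   # close to ||x|| from below
```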

The robust counterpart relative to \({\mathcal {W}}_i^{\lambda }\) is

$$\begin{aligned} \min _{x\in X} \left( \max _{w_1\in {\mathcal {W}}_1^{\lambda }}{f_1(x,w_1)},\ldots ,\max _{w_m\in {\mathcal {W}}_m^{\lambda }}{f_m(x,w_m)}\right) \end{aligned}$$
(RMP\(^{\lambda }\))

When \(\lambda = 0\) the robust counterpart exhibits the lowest level of uncertainty achievable (possibly none at all). We are going to study the behavior of the optimal values and optimal solutions of (RMP\(^\lambda \)) as \(\lambda \) changes. For the sake of simplicity we will set:

  1. (i)

    \(f^\lambda (x)=\left( \max _{w_1\in {\mathcal {W}}_1^{\lambda }}{f_1(x,w_1)},\ldots ,\max _{w_m\in {\mathcal {W}}_m^{\lambda }}{f_m(x,w_m)}\right) \);

  2. (ii)

    \(\overline{{\mathbf {E}}}_{f}=\left( {\overline{E}}_{f_1}, \ldots , {\overline{E}}_{f_m}\right) \);

  3. (iii)

    \(\underline{{\mathbf {E}}}_{f}=\left( {\underline{E}}_{f_1}, \ldots , {\underline{E}}_{f_m}\right) \).

From the definitions, we clearly have \(f^0 \le f^{\lambda }\) and \(0\le \underline{{\mathbf {E}}}_{f}\le \overline{{\mathbf {E}}}_{f}\).

We need the following relation between sets and the next definition (see e.g. Kuroiwa (2001), Luc (1989)). For \(A, B\subseteq {{\mathbb {R}}}^m\) we denote

$$\begin{aligned} A\le ^l B \ \mathrm{if}\ A+{{\mathbb {R}}}^m_+\supseteq B, \end{aligned}$$
(17)

or equivalently \(A\le ^l B\), if for every \(b\in B\) there exists \(a\in A\) such that \(a\le b\) holds. This relation is reflexive and transitive, but not antisymmetric.

Definition 4.1

A set \(A\subseteq {{\mathbb {R}}}^m\) is said to be \({{\mathbb {R}}}^m_+\)-closed when \(A+ {{\mathbb {R}}}^m_+\) is closed.

Now we discuss the location of the robust minimal and weakly minimal values.

Proposition 4.1

(Crespi et al., 2018) Assume that \(\mathrm{Im}(f^0)\) is \({{\mathbb {R}}}^m_+\)-closed. Then set relations

$$\begin{aligned} {\mathrm{Min\,}}({\mathrm{RMP}^0})\le ^l\mathrm{Min}(\mathrm{RMP}^\lambda )\hbox { and } {\mathrm{WMin\,}}({\mathrm{RMP}^0})\le ^l\mathrm{WMin}(\mathrm{RMP}^\lambda ) \end{aligned}$$

hold for every \(\lambda \in [0,1]\).

Remark 4.2

When \(m=1\), i.e. when a scalar optimization problem is considered, Proposition 4.1 simply states that the optimal value of the Robust Optimization Problem with low uncertainty, \((\mathrm{RMP}^0)\), is less than or equal to the optimal value of the Robust Optimization Problem \((\mathrm{RMP}^\lambda )\) with uncertainty measured by the parameter \(\lambda \). Hence, the proposition basically states that there is an efficiency loss due to higher uncertainty, since the weakly efficient frontier of problem \((\mathrm{RMP}^{\lambda })\) lies above the weakly efficient frontier of problem \((\mathrm{RMP}^{0})\) (“above” is intended with respect to the \(\le ^l\) order).

The next result allows us to give an upper bound for the efficiency loss due to uncertainty.

Proposition 4.2

(Crespi et al., 2018) Assume that the functions \(f_i(x,u_i)\) are convex in each of the variables \(x\in X\) and \(u_i\in {{\mathcal {U}}_i}\), and that \(\mathrm{Im}(f^{\lambda })\), \(\lambda \in [0,1]\), is \({{\mathbb {R}}}^m_+\)-closed. Then the set relation

$$\begin{aligned} {\mathrm{WMin\,}}({\mathrm{RMP}^\lambda })\le ^l \mathrm{WMin}(\mathrm{RMP}^0)+\lambda \overline{{\mathbf {E}}}_{f} \end{aligned}$$

holds.

Remark 4.3

The convexity assumption on \(f_i ( \cdot , u_i)\) in Proposition 4.2 can be weakened to \({{\mathbb {R}}}^m_+\)-convexity of \(\mathrm{Im}(f^{\lambda })\).

Combining Propositions 4.1 and 4.2 we get the following corollary.

Corollary 4.1

(Crespi et al., 2018) Assume that the functions \(f_i(x,u_i)\) are convex in each of the variables \(x\in X\) and \(u_i\in {{\mathcal {U}}_i}\), and that \(\mathrm{Im}(f^\lambda )\), \(\lambda \in [0,1]\), is \({{\mathbb {R}}}^m_+\)-closed. Then the set relations

$$\begin{aligned} {\mathrm{WMin\,}}({\mathrm{RMP}^0}) \le ^l\mathrm{WMin}(\mathrm{RMP}^\lambda ) \le ^l\mathrm{WMin}(\mathrm{RMP}^0)+\lambda \overline{{\mathbf {E}}}_{f} \end{aligned}$$
(18)

hold.

We now wish to estimate the distance between the efficient frontiers of \(\mathrm{RMP}^0\) and \(\mathrm{RMP}^{\lambda }\).

A set \(A\subseteq {{\mathbb {R}}}^m\) is said to be \({{\mathbb {R}}}^m_+\)-closed-convex-minorized if \(A+{{\mathbb {R}}}^m_+\) is closed and convex, and there exists \(x\in {{\mathbb {R}}}^m\) such that \(x+{{\mathbb {R}}}^m_+\supseteq A\). Let \({\mathcal {C}}\) be the family of all nonempty \({{\mathbb {R}}}^m_+\)-closed-convex-minorized subsets of \({{\mathbb {R}}}^m\).

We define a binary relation \(\equiv \) on \({\mathcal {C}}\) by: \(A\equiv B\) if \(A+{{\mathbb {R}}}^m_+=B+{{\mathbb {R}}}^m_+\), for \(A,B\in {\mathcal {C}}\). Then \(\equiv \) is an equivalence relation and we can define the equivalence class \([A]=\{B\in {\mathcal {C}}\mid A\equiv B\}\) and the quotient set \({\mathcal {C}}\!/\!\equiv \,=\{[A]\mid A\in {\mathcal {C}} \}\). For \(D=\{d\in {{\mathbb {R}}}^m_+\mid \Vert d\Vert =1\}\), the function \(H:({\mathcal {C}}\!/\!\equiv )^2\rightarrow {{\mathbb {R}}}\) defined as follows is a metric [see e.g. Kuroiwa (2003), Kuroiwa and Nuriya (2006)]:

$$\begin{aligned} H(A,B):=H([A],[B]):=\sup _{d\in D} \vert \inf _{a\in A} \langle d,a\rangle -\inf _{b\in B}\langle d,b\rangle |\end{aligned}$$
(19)
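
For finite sets the metric (19) can be approximated by sampling directions \(d\in D\); the following sketch (with illustrative data) also shows that translating a set by a nonnegative vector c shifts it by exactly \(\Vert c\Vert \) in this metric.

```python
import numpy as np

def H(A, B, n_dirs=20000, seed=3):
    """Approximation of the metric (19) for finite sets A, B in R^m (rows are points)."""
    m = A.shape[1]
    rng = np.random.default_rng(seed)
    D = np.abs(rng.normal(size=(n_dirs, m)))         # directions in the nonnegative orthant
    D /= np.linalg.norm(D, axis=1, keepdims=True)    # normalized: d in R^m_+, ||d|| = 1
    return np.abs((D @ A.T).min(axis=1) - (D @ B.T).min(axis=1)).max()

A = np.array([[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]])   # toy "frontier"
shift = np.array([0.3, 0.4])
print(round(float(H(A, A + shift)), 4))              # approximately ||shift|| = 0.5
```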

Corollary 4.2

(Crespi et al., 2018) Under the assumptions of Corollary 4.1, we have

$$\begin{aligned} H({\mathrm{WMin\,}}({\mathrm{RMP}^0}),\mathrm{WMin}(\mathrm{RMP}^\lambda ))\le \lambda \Vert \overline{{\mathbf {E}}}_{f}\Vert . \end{aligned}$$

and

$$\begin{aligned}{}[\mathrm{WMin}(\mathrm{RMP}^\lambda )]\rightarrow [{\mathrm{WMin\,}}({\mathrm{RMP}^0})]\ \hbox { as }\ \lambda \downarrow 0 \end{aligned}$$

in the metric H.

The next result gives a lower bound for the efficiency loss due to uncertainty.

Proposition 4.3

(Crespi et al., 2018) Assume that functions \(f_i(x,u_i)\) are convex in \(x\in X\) and concave in \(u_i\in {{\mathcal {U}}_i}\), and \(\mathrm{Im}(f^0)\) is \({{\mathbb {R}}}^m_+\)-closed. Then set relation

$$\begin{aligned} {\mathrm{WMin\,}}({\mathrm{RMP}^0})+\lambda \underline{{\mathbf {E}}}_{f}\le ^l \mathrm{WMin}(\mathrm{RMP}^\lambda ). \end{aligned}$$

holds.

Combining the previous results we can give upper and lower bounds for the efficiency loss as stated in the next corollary.

Corollary 4.3

(Crespi et al., 2018) Assume that \(f_i(x,u_i)=\langle f_i(x),u_i\rangle +h_i(x)\), where \(f_i:{{\mathbb {R}}}^n\rightarrow {{\mathbb {R}}}^p\) are convex, \(h_i: {{\mathbb {R}}}^n \rightarrow {{\mathbb {R}}}\), \(i=1,\ldots , m\), and that \(\mathrm{Im}(f^\lambda )\) is \({{\mathbb {R}}}^m_+\)-closed for every \(\lambda \in [0,1]\). Then the set relations

$$\begin{aligned} {\mathrm{WMin\,}}({\mathrm{RMP}^0})+\lambda \underline{{\mathbf {E}}}_{f}\le ^l \mathrm{WMin}(\mathrm{RMP}^\lambda )\le ^l \mathrm{WMin}(\mathrm{RMP}^0)+\lambda \overline{{\mathbf {E}}}_{f} \end{aligned}$$
(20)

hold.

Corollary 4.4

(Crespi et al., 2018) Under the assumptions of the previous corollary, we have

$$\begin{aligned} \lambda \Vert \underline{{\mathbf {E}}}_{f}\Vert \le H({\mathrm{WMin\,}}({\mathrm{RMP}^0}), \mathrm{WMin}(\mathrm{RMP}^\lambda ))\le \lambda \Vert \overline{{\mathbf {E}}}_{f}\Vert . \end{aligned}$$

Sensitivity of optimal solutions

We now establish results regarding the sensitivity of optimal solutions with respect to changes in the uncertainty set. We need the following definitions [see e.g. Li and Xu (2010)].

Definition 4.2

Let \(f: X \rightarrow {{\mathbb {R}}}\). We say that \(x^0 \in X\) is an isolated minimizer of order \(\alpha >0\) and constant \(h>0\) when for every \(x\in X\) it holds

$$\begin{aligned} f(x)-f(x^0)\ge h\Vert x - x^0\Vert ^{\alpha } \end{aligned}$$
(21)

Definition 4.3

We say that \(f_i(x, \cdot )\) is Hölder of order \(\delta >0\) on \({{{\mathcal {U}}}}_i\) with constant \(m_i>0\), uniformly with respect to \(x \in X\), when

$$\begin{aligned} |f_i(x, u_i^1)-f_i(x,u_i^2)|\le m_i \Vert u_i^1-u_i^2\Vert ^\delta \end{aligned}$$
(22)

for every \(u_i^1, u_i^2 \in {{\mathcal {U}}}_i\) and \(x \in X\).

Theorem 4.1

Let X be a compact set and assume that

  1. (i)

    \(f_i(x, \cdot )\) is Hölder of order \(\delta >0\) on \(\mathcal{U}_i\) with constant \(m_i>0\), uniformly with respect to \(x \in X\), \(i=1,\ldots , m\);

  2. (ii)

    \(f_i(x, u_i)\) are convex with respect to \(x \in X\) and concave with respect to \(u_i \in {{\mathcal {U}}_i}\), \(i=1,\ldots , m\).

Let \(u^0 =(u_1^0, \ldots , u_m^0)\) and

$$\begin{aligned} L(x, u^0)=\sum _{i=1}^m\beta _if_i(x, u_i^0) \end{aligned}$$
(23)

with \(\beta _i\in [0,1]\), \(i=1, \ldots , m\) and \(\sum _{i=1}^m\beta _i=1\). Let \(x^0 \in X\) be an isolated minimizer of order \(\alpha \) and constant h for function \(L(x, u^0)\).

Then there exists \(x(\lambda ) \in \mathrm{WEff}(\mathrm{RMP}^{\lambda })\) such that

$$\begin{aligned} d(x(\lambda ) , \mathrm{WEff}(\mathrm{RMP}^0)) \le \left( \frac{2\lambda }{h}\right) ^{1/\alpha }\left\{ \max _{i=1, \ldots , m}\left[ m_i(D(\mathcal{U}_i))^{\delta }\right] \right\} ^{1/\alpha } \end{aligned}$$
(24)

where \(d(x, A)= \inf _{a \in A}\Vert x-a\Vert \) denotes the distance between the point x and the set A and D(A) denotes the diameter of the set A, i.e.

$$\begin{aligned} D(A)= \sup _{x, y \in A}\Vert x-y\Vert \end{aligned}$$
(25)

Proof

We have \(L(x, u^0)-L(x^0, u^0) \ge h\Vert x-x^0\Vert ^{\alpha }\). Let \(x(\lambda ) \in X\) be a minimizer of function

$$\begin{aligned} L_{\lambda }(x)= \sum _{i=1}^m\beta _i \max _{u_i \in \mathcal{W}_i^{\lambda }} f_i(x, u_i) \end{aligned}$$
(26)

Hence \(x(\lambda ) \in {\mathrm{WEff\,}}(RMP^{\lambda })\) by Theorem 3.1.

We have

$$\begin{aligned} \sum _{i=1}^m\beta _i \max _{u_i \in {{{\mathcal {W}}}}_i^{\lambda }} f_i(x, u_i)= \max _{(u_1, \ldots , u_m) \in {{{\mathcal {W}}}}_1^{\lambda }\times \cdots \times {{{\mathcal {W}}}}_m^{\lambda }}\sum _{i=1}^m\beta _if_i(x, u_i) \end{aligned}$$
(27)

and by using Ky Fan’s minimax theorem (Fan (1953)) we get

$$\begin{aligned} \max _{(u_1, \ldots , u_m) \in {{{\mathcal {W}}}}_1^{\lambda }\times \cdots \times {{{\mathcal {W}}}}_m^{\lambda }}\min _{x \in X}\sum _{i=1}^m\beta _i f_i(x, u_i)= \max _{(u_1, \ldots , u_m) \in {{{\mathcal {W}}}}_1^{\lambda }\times \cdots \times {{{\mathcal {W}}}}_m^{\lambda }}\sum _{i=1}^m\beta _i f_i(x(\lambda ), u_i) \end{aligned}$$
(28)

It follows that there exist vectors \({\bar{u}}_i\in \mathcal{W}_i^{\lambda }(x(\lambda ))\), \(i=1, \ldots , m\) (see (3) for the definition of \(\mathcal{W}_i^{\lambda }(x(\lambda ))\)), such that for every \( x\in X\)

$$\begin{aligned} \sum _{i=1}^m\beta _i f_i(x, {\bar{u}}_i)\ge \max _{(u_1, \ldots , u_m) \in {{{\mathcal {W}}}}_1^{\lambda }\times \cdots \times {{{\mathcal {W}}}}_m^{\lambda }}\sum _{i=1}^m\beta _i f_i(x(\lambda ), u_i)\ge \sum _{i=1}^m\beta _i f_i(x(\lambda ), {\bar{u}}_i), \end{aligned}$$
(29)

i.e. we get the existence of \({\bar{u}}_i \in {{{\mathcal {W}}}}_i^{\lambda }\), \(i=1, \ldots , m\), such that \(x(\lambda )\) minimizes the function

$$\begin{aligned} L(x, {\bar{u}})= \sum _{i=1}^m\beta _i f_i(x, {\bar{u}}_i) \end{aligned}$$
(30)

It holds

$$\begin{aligned} L(x^0, {\bar{u}})-L(x(\lambda ), {\bar{u}})=L(x^0, u^0)-L(x(\lambda ), u^0)+w \end{aligned}$$
(31)

where

$$\begin{aligned} w=\left[ L(x^0, {\bar{u}})-L(x^0, u^0)\right] + \left[ L(x(\lambda ), u^0)-L(x(\lambda ), {\bar{u}})\right] \end{aligned}$$
(32)

We have

$$\begin{aligned}&\vert w\vert \le \vert L(x^0, {\bar{u}})-L(x^0, u^0)\vert + \vert L(x(\lambda ), u^0)-L(x(\lambda ), {\bar{u}})\vert \le \end{aligned}$$
(33)
$$\begin{aligned}&\sum _{i=1}^m \beta _i\vert f_i(x^0, {\bar{u}}_i)-f_i(x^0, u_i^0)\vert + \sum _{i=1}^m\beta _i\vert f_i(x(\lambda ), u_i^0)-f_i(x(\lambda ), {\bar{u}}_i)\vert \le \end{aligned}$$
(34)
$$\begin{aligned}&2\lambda \sum _{i=1}^m \beta _i m_i(D({{{\mathcal {U}}}}_i))^{\delta }\le 2\lambda \max _{i=1, \ldots , m}\left[ m_i(D(\mathcal{U}_i))^{\delta }\right] \end{aligned}$$
(35)

We claim that

$$\begin{aligned} L(x(\lambda ), u^0) -L(x^0, u^0) \le \vert w\vert \end{aligned}$$
(36)

Indeed, suppose to the contrary that \(L(x(\lambda ), u^0)-L(x^0, u^0) -|w|>0\). Since \(|w|\ge w\), relation (31) then gives

$$\begin{aligned} L(x^0, {\bar{u}})-L(x(\lambda ), {\bar{u}})=L(x^0, u^0)-L(x(\lambda ), u^0)+w<0 \end{aligned}$$
(37)

which contradicts the fact that \(x(\lambda )\) minimizes \(L(x, {\bar{u}})\), and claim (36) is proved. Observe now that, since \(x^0\) is an isolated minimizer of order \(\alpha \) and constant h, we have

$$\begin{aligned} h\Vert x(\lambda )-x^0\Vert ^{\alpha }\le L(x(\lambda ), u^0)-L(x^0, u^0) \end{aligned}$$
(39)

and hence

$$\begin{aligned} h\Vert x(\lambda )-x^0\Vert ^{\alpha }\le 2\lambda \max _{i=1, \ldots , m}\left[ m_i(D({{{\mathcal {U}}}}_i))^{\delta }\right] \end{aligned}$$
(40)

So it holds

$$\begin{aligned} \Vert x(\lambda )-x^0\Vert \le \left( \frac{2\lambda }{h}\right) ^{1/\alpha }\left\{ \max _{i=1, \ldots , m}\left[ m_i(D(\mathcal{U}_i))^{\delta }\right] \right\} ^{1/\alpha } \end{aligned}$$
(41)

which concludes the proof. \(\square \)

Remark 4.4

If in Theorem 4.1 we assume \(\beta _i \in (0,1]\), \(i=1, \ldots , m\) with \(\sum _{i=1}^m\beta _i=1\) then we get the existence of a point \(x(\lambda ) \in \mathrm{PEff} ({\mathrm{RMP}}^\lambda )\) such that

$$\begin{aligned} d(x(\lambda ) , \mathrm{PEff}(\mathrm{RMP}^0)) \le \left( \frac{2\lambda }{h}\right) ^{1/\alpha }\left\{ \max _{i=1, \ldots , m}\left[ m_i(D(\mathcal{U}_i))^{\delta }\right] \right\} ^{1/\alpha } \end{aligned}$$
(42)
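
A scalar (m = 1) toy illustration of the estimate (24): take \(f(x,u)=(x-u)^2\) on \(X=[0,1]\), \({\mathcal {U}}^0=\{0\}\) and \({\mathcal {U}}=[0,1]\), so that \({\mathcal {W}}^{\lambda }=[0,\lambda ]\); here \(x^0=0\) is an isolated minimizer of \(x^2\) of order \(\alpha =2\) with \(h=1\), the Hölder constant is \(m_1=2\) with \(\delta =1\), and \(D({\mathcal {U}})=1\). This is an assumed example, checked by brute force in the Python sketch below.

```python
import numpy as np

X = np.linspace(0.0, 1.0, 1001)   # discretized feasible set X = [0, 1]

def robust_solution(lam, n_u=201):
    """Minimizer over X of the worst case of (x - u)^2 over W^lambda = [0, lambda]."""
    W = np.linspace(0.0, lam, n_u)
    worst = np.array([((x - W) ** 2).max() for x in X])
    return X[worst.argmin()]

x0 = robust_solution(0.0)                        # nominal robust solution, equal to 0
for lam in (0.1, 0.4, 0.8):
    x_lam = robust_solution(lam)
    bound = np.sqrt(2.0 * lam / 1.0) * np.sqrt(2.0 * 1.0)   # (2 lam / h)^(1/2) (m_1 D(U)^delta)^(1/2)
    print(lam, round(abs(x_lam - x0), 3), "<=", round(float(bound), 3))
# The distance grows like lam / 2, well within the bound of Theorem 4.1.
```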

Application to mean-variance portfolio optimization

We apply the results of the previous sections to mean-variance portfolio optimization. We recall that the mean-variance portfolio optimization model dates back to Markowitz (1952) [see also Markowitz (1968)]. The basic idea is that a portfolio is characterized solely by two quantities: risk (mostly measured in terms of variance or volatility) and expected return. Since an investor seeks an allocation with low risk and high expected return, a trade-off between these two conflicting aims has to be made.

Consider a financial market with n risky assets defined on a suitable probability space in a single-period setting. We assume their multivariate distribution has parameters \(\mu \) and \(\Sigma \), representing the vector of expected returns and the variance-covariance matrix, respectively. We also assume that

$$\begin{aligned} X=\{x \in {{\mathbb {R}}}^n: x_i \ge 0, \ \sum _{i=1}^nx_i=1\} \end{aligned}$$
(43)

is the set of admissible portfolios (i.e. we admit no short selling). The efficient frontier in portfolio optimization is obtained as the set of solutions of the following problem:

$$\begin{aligned} \mathrm{min }_{x\in X}(f_1 (x, \mu ) , f_2 (x, \Sigma )) \end{aligned}$$
(44)

where \(f_1 (x, \mu ) = - \langle \mu , x \rangle \) and \(f_2 (x, \Sigma ) = x^T\Sigma x\). However, the nominal values of \(\mu \) and \(\Sigma \) are not known before the optimal portfolio is selected, although their realization will affect the payoff (an issue already pointed out in (Markowitz, 1952)). The decision maker, therefore, faces an uncertainty problem that we can model by assuming that the couple \((\mu , \Sigma )\) ranges in some uncertainty set \({{\mathcal {U}}}\). Assume \((\mu ^0 , \Sigma ^0) \in {{\mathbb {R}}}^n\times {{\mathbb {M}}}^n_+\) is a nominal instance (e.g. the one that will be realized, or the one that can be expected under some distributional assumption). Following Fliege and Werner (2014) we assume \({{\mathcal {U}}}\) is of ellipsoidal type, i.e.

$$\begin{aligned} {{\mathcal {U}}}= \left\{ (\mu , \Sigma )\in {{\mathbb {R}}}^n \times {{\mathbb {M}}}^n_+: \Vert \mu -\mu ^0\Vert +c\Vert \Sigma -\Sigma ^0\Vert \le r\right\} \end{aligned}$$
(45)

Here \({{\mathbb {M}}}_+^n\) denotes the set of positive semidefinite square matrices of order n. We denote by \({{{\mathcal {U}}}}_1 \subseteq {{\mathbb {R}}}^n\) the projection of \({{{\mathcal {U}}}}\) on \({{\mathbb {R}}}^n\) and by \({{{\mathcal {U}}}}_2 \subseteq {{\mathbb {M}}}^n_+\) the projection of \({{{\mathcal {U}}}}\) on \({{\mathbb {M}}}^n_+\). With \({{\mathbb {M}}}^n_{++}\) we denote the set of positive definite square matrices of order n. To comply with the notation of the previous sections we can identify the matrix \(\Sigma \) with an element of \({{\mathbb {R}}}^{n^2}\). Hence, the robust counterpart of Problem (44) is

$$\begin{aligned} \mathrm{min}_{x \in X} (\mathrm{max}_{\mu \in {{{\mathcal {U}}}}_1 } -\langle \mu , x \rangle , \mathrm{max}_{\Sigma \in {{{\mathcal {U}}}}_2} x^T \Sigma x) \end{aligned}$$
(46)
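
Assuming the matrix norm in (45) is the Frobenius norm, the inner maxima in (46) admit the closed forms \(\max _{\mu \in {{\mathcal {U}}}_1}(-\langle \mu , x\rangle ) = -\langle \mu ^0, x\rangle + r\Vert x\Vert \) and \(\max _{\Sigma \in {{\mathcal {U}}}_2} x^T\Sigma x = x^T\Sigma ^0 x + \frac{r}{c}\Vert x\Vert ^2\) (the maximizing perturbation \(\frac{r}{c}\,xx^T/\Vert x\Vert ^2\) keeps the matrix positive semidefinite), consistently with Remark 4.1. The following Python sketch, with purely illustrative data and scipy's SLSQP solver, traces robust efficient portfolios of (46) by linear scalarization (Theorem 3.1); it is an illustration under these assumptions, not the authors' algorithm.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
n = 4                                              # illustrative market with 4 assets
mu0 = np.array([0.06, 0.05, 0.08, 0.03])           # nominal expected returns (assumed)
A = rng.normal(size=(n, n))
Sigma0 = A @ A.T / 10 + 0.01 * np.eye(n)           # nominal covariance, positive definite
r, c = 0.02, 1.0                                   # uncertainty radius and weight in (45)

def robust_scalarized(x, b1, b2):
    # closed-form worst cases over U_1 and U_2 (Frobenius norm assumed in (45))
    g1 = -mu0 @ x + r * np.linalg.norm(x)
    g2 = x @ Sigma0 @ x + (r / c) * np.linalg.norm(x) ** 2
    return b1 * g1 + b2 * g2

cons = ({"type": "eq", "fun": lambda x: x.sum() - 1.0},)   # budget constraint
bounds = [(0.0, 1.0)] * n                                  # no short selling
x_start = np.full(n, 1.0 / n)

frontier = []
for b1 in np.linspace(0.05, 0.95, 10):             # sweep the scalarization weights
    res = minimize(robust_scalarized, x_start, args=(b1, 1.0 - b1),
                   method="SLSQP", bounds=bounds, constraints=cons)
    x = res.x
    frontier.append((-mu0 @ x + r * np.linalg.norm(x),
                     x @ Sigma0 @ x + (r / c) * np.linalg.norm(x) ** 2))

print(np.round(frontier, 4))
```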

Set

$$\begin{aligned}&{{{\mathcal {W}}}}_1^{\lambda }=(1-\lambda )\mu ^0 + \lambda {{\mathcal {U}}}_1\\&{{{\mathcal {W}}}}_2^{\lambda }=(1-\lambda )\Sigma ^0 + \lambda {{\mathcal {U}}}_2 \end{aligned}$$

Remark 4.1 gives

$$\begin{aligned} E_{f_1}(x)= r\Vert x\Vert \end{aligned}$$
(47)

while simple calculations show that

$$\begin{aligned} E_{f_2}(x)=\frac{r}{c}\Vert x\Vert ^2 \end{aligned}$$
(48)

It follows

$$\begin{aligned} \overline{\mathbf{E}}_{f}= r \left( 1, \frac{1}{c}\right) , \ \ {\underline{\mathbf {E}}}_{f}= r \left( \frac{ \sqrt{n}}{n}, \frac{1}{cn}\right) \end{aligned}$$
(49)

Denote by \(\mathrm{RMP}^{\lambda }\) the robust counterpart of problem (44) with uncertainty sets \({\mathcal W}_1^{\lambda }\) and \({{\mathcal {W}}}_2^{\lambda }\). Corollaries 4.3 and 4.4 give

$$\begin{aligned} {\mathrm{WMin\,}}({\mathrm{RMP}^0})+ \lambda r \left( \frac{ \sqrt{n}}{n}, \frac{1}{cn}\right) \le ^l \mathrm{WMin}(\mathrm{RMP}^\lambda ) \le ^l \mathrm{WMin}(\mathrm{RMP}^0)+\lambda r\left( 1, \frac{1}{c}\right) \end{aligned}$$
(50)

and

$$\begin{aligned} \lambda r\frac{\sqrt{c^2n+1}}{cn} \le H({\mathrm{WMin\,}}({\mathrm{RMP}^0}), \mathrm{WMin}(\mathrm{RMP}^\lambda ))\le \lambda r\sqrt{\frac{c^2+1}{c^2}} \end{aligned}$$
(51)

Inequalities (50) set upper and lower bounds for the efficiency loss due to uncertainty that one incurs by considering problem \(\mathrm{RMP}^{\lambda }\). The left-hand inequality in (50) states that the weakly efficient frontier of problem \(\mathrm{RMP}^{\lambda }\) is shifted upwards in the direction \(\left( \frac{ \sqrt{n}}{n}, \frac{1}{cn}\right) \), and the “magnitude” of this shift is at least \(\lambda r\). Hence, the left-hand side of (50) gives an estimate of the “minimum” efficiency loss that one incurs, with respect to problem \(\mathrm{RMP}^0\), at the uncertainty level given by \(\lambda \). Observe that both components of \(\underline{\mathbf{E}}_{f}= r\left( \frac{ \sqrt{n}}{n}, \frac{1}{cn}\right) \) are decreasing with respect to n and that \(\underline{\mathbf{E}}_{f}\) converges to (0, 0) as \(n \rightarrow +\infty \). This means that when the number of assets increases, the minimum efficiency loss in \(\mathrm{RMP}^\lambda \) with respect to \(\mathrm{RMP}^0\) decreases, which can be seen as an effect of portfolio diversification (i.e. increasing the number of assets in the portfolio reduces the minimum efficiency loss incurred at a given uncertainty level \(\lambda \)).

Similarly, formula (51) states upper and lower bounds for the distance between the efficient frontiers of problems \(\mathrm{RMP}^0\) and \(\mathrm{RMP}^{\lambda }\).

Observing that \(f_1\) is linear both in x and \(\mu \), and that \(f_2\) is convex in x and linear in \(\Sigma \), we can apply Theorem 3.2 to obtain the following result, which characterizes the solutions of problem \(\mathrm{RMP}^{\lambda }\) in terms of linear scalarization.

Theorem 5.1

  1. (i)

    A point \({\bar{x}}\in {\mathrm{WEff\,}}(RMP^{\lambda })\) if and only if there exist \(\beta _1, \beta _2 \ge 0\), not both zero, \({\bar{\mu }} \in {{{\mathcal {W}}}}_1^{\lambda }\), \({\bar{\Sigma }} \in {{{\mathcal {W}}}}_2^{\lambda }\) such that \( {\bar{x}}\) minimizes

    $$\begin{aligned} -\beta _1\langle {\bar{\mu }}, x \rangle + \beta _2 x^T{\bar{\Sigma }} x \end{aligned}$$
    (52)

    i.e. \({\bar{x}}\) is weakly efficient for the portfolio optimization problem with returns \({\bar{\mu }}\) and variance-covariance matrix \(\bar{\Sigma }\).

  2. (ii)

    A point \({\bar{x}}\in \mathrm{PEff}(RMP^{\lambda })\) if and only if there exist \(\beta _1, \beta _2 >0 \), \({\bar{\mu }} \in \mathcal{W}_1^{\lambda }\), \({\bar{\Sigma }} \in {{{\mathcal {W}}}}_2^{\lambda }\) such that \( {\bar{x}}\in X\) minimizes

    $$\begin{aligned} -\beta _1\langle {\bar{\mu }}, x \rangle + \beta _2 x^T{\bar{\Sigma }} x \end{aligned}$$
    (53)

    i.e. \({\bar{x}}\) is properly efficient for the portfolio optimization problem with returns \({\bar{\mu }}\) and variance-covariance matrix \(\bar{\Sigma }\).

Finally, we prove the following result, which is a counterpart of Theorem 4.1 for the mean-variance portfolio optimization problem. We denote by \(T_X(x^0)\) the tangent cone to X at \(x^0 \in X\) [see e.g. Rockafellar (1970)], i.e.

$$\begin{aligned} T_X(x^0)= \mathrm{cl}\{a(x-x^0): \ x \in X,\ a \ge 0 \} \end{aligned}$$
(54)

Theorem 5.2

Assume \(\Sigma ^0, \Sigma \in {{\mathbb {M}}}^n_{++}\). Let \(\beta _1, \beta _2 \in [0,1]\), with \(\beta _1 + \beta _2 =1\), and assume \(x^0\in X\) is a minimizer of the function

$$\begin{aligned} -\beta _1\langle \mu ^0, x \rangle + \beta _2 x^T \Sigma ^0 x \end{aligned}$$
(55)
  1. (i)

    Let \(\beta _2 >0\). Then there exists \(x(\lambda )\in {\mathrm{WEff\,}}(\mathrm{RMP}^{\lambda })\) such that

    $$\begin{aligned} d(x(\lambda ), {\mathrm{WEff\,}}(\mathrm{RMP}^0)) \le \left( \frac{2\lambda }{h}\right) ^{1/2}\left( \max _{i=1, 2}[D({{{\mathcal {U}}}}_i)]\right) ^{1/2}\le 2\left( \frac{\lambda }{h}\right) ^{1/2}\left( \max \left\{ r, \frac{r}{c}\right\} \right) ^{\frac{1}{2}} \end{aligned}$$
    (56)

    where \(h=\beta _2\min _{d \in T_X(x^0)\cap S}d^T \Sigma ^0 d\) and S denotes the unit sphere in \({{\mathbb {R}}}^n\).

  2. (ii)

    Let \(\beta _2=0\). Then there exists \(x(\lambda )\in \mathrm{PEff}(RMP^{\lambda })\) such that

    $$\begin{aligned} d(x(\lambda ), \mathrm{PEff}(\mathrm{RMP}^0)) \le \frac{2\lambda }{h}\left( \max _{i=1, 2}[D({{{\mathcal {U}}}}_i)]\right) \le 4\left( \frac{\lambda }{h}\right) \left( \max \left\{ r, \frac{r}{c}\right\} \right) \end{aligned}$$
    (57)

    where \(h=\min _{d \in T_X(x^0)\cap S} \langle -\mu ^0, d\rangle \).

Proof

  1. (i)

    We first observe that \(f_1(x, \mu )\) and \(f_2(x, \Sigma )\) are Hölder of order 1, i.e. Lipschitz, as functions of \(\mu \) and \(\Sigma \) respectively, uniformly with respect to \(x \in X\). This is due to the fact that \(f_1\) and \(f_2\) are continuously differentiable with respect to \(\mu \) and \(\Sigma \). Further, \(f_1\) and \(f_2\) are convex as functions of x and linear as functions of \(\mu \) and \(\Sigma \) respectively. Let

    $$\begin{aligned} l(x)= - \beta _1 \langle \mu ^0, x \rangle + \beta _2 x^T \Sigma ^0 x \end{aligned}$$
    (58)

    Since \(x^0\) minimizes l(x) over X it holds

    $$\begin{aligned} \langle \nabla l(x^0) , d\rangle \ge 0 \end{aligned}$$
    (59)

    for every \(d \in T_X(x^0)\) (see Rockafellar (1970)). Hence, since l(x) is a quadratic function, for every \(x \in X\) we have

    $$\begin{aligned} l(x)-l(x^0)= \langle \nabla l(x^0) , x-x^0\rangle + \frac{1}{2} (x-x^0)^T\nabla ^2 l(x^0)(x-x^0) \end{aligned}$$
    (60)
    $$\begin{aligned} \ge \frac{1}{2} (x-x^0)^T\nabla ^2l(x^0)(x-x^0) \end{aligned}$$
    (61)

    It follows

    $$\begin{aligned} l(x)-l(x^0) \ge \Vert x-x^0\Vert ^2 \frac{1}{2\Vert x-x^0\Vert ^2} (x-x^0)^T\nabla ^2l(x^0)(x-x^0) \ge h \Vert x-x^0\Vert ^2 \end{aligned}$$
    (62)

    where the last inequality holds since \(\nabla ^2l(x^0)=2\beta _2\Sigma ^0\) and \((x-x^0)/\Vert x-x^0\Vert \in T_X(x^0)\cap S\). Hence \(x^0\) is an isolated minimizer of order 2 with constant h, and the thesis follows from Theorem 4.1. The last inequality in (56) follows since, by (45), we have

    $$\begin{aligned} D({{\mathcal {U}}}_1)\le 2r, \ \ D({{\mathcal {U}}}_2) \le \frac{2r}{c} \end{aligned}$$
    (63)
  2. (ii)

    The proof is similar to that of point (i) and is omitted.

\(\square \)

Concluding remarks

We conclude this paper with a glimpse on possible further research.

We often have only partial knowledge of the statistical properties of the model parameters. Specifically, the probability distribution quantifying the model parameter uncertainty is known only ambiguously. A typical approach to handling this ambiguity is to estimate the probability distribution using statistical tools. The decision-making process can then be performed with respect to the estimated distribution. Such an estimate can be imprecise. Ambiguous stochastic optimization is a modeling approach that protects the decision maker from the ambiguity in the underlying probability distribution. Ambiguity about the probability distribution can be modelled using the concept of imprecise probability or, more generally, the notion of set-valued probability [see e.g. La Torre et al. (2021)]. A different way to model this ambiguity is to assume that the underlying probability distribution is unknown and lies in an ambiguity set of probability distributions.

This last approach, as in robust optimization, hedges against the ambiguity in probability distribution by taking a worst-case (minmax) approach (Distributionally Robust Multiobjective Optimization).

Extensions of the presented results to Distributionally Robust Multiobjective Optimization are a first direction for further research.

As pointed out, the robust optimization approach is a worst-case-oriented approach. For this reason robust solutions of an optimization problem have also been called pessimistic solutions. Indeed, optimistic solutions have been considered in the literature as solutions of the best-case-oriented multiobjective optimization problem with objective functions

$$\begin{aligned} \min _{u_i \in {{\mathcal {U}}}_i} f_i(x, u_i), \ \ i=1\ldots , m \end{aligned}$$
(64)

In order to model the level of pessimism one can consider the multiobjective optimization problem with objective functions

$$\begin{aligned} f_i^{p_i}(x)= p_i \max _{u_i \in {{{\mathcal {U}}}}_i}f_i(x, u_i)+ (1-p_i) \min _{u_i \in {{{\mathcal {U}}}}_i}f_i(x, u_i) \end{aligned}$$
(65)

where \(p_i\in [0,1]\) describes the level of pessimism for objective i. The study of problem (65) is another possible direction for further research.
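
For sampled uncertainty sets the pessimism-weighted objectives (65) are immediate to evaluate, as in the brief Python sketch below (the toy objective and data are assumptions made only for the illustration).

```python
import numpy as np

U = np.linspace(-1.0, 1.0, 201)        # sampled uncertainty set (assumed data)

def f(x, u):
    return (x - u) ** 2

def f_p(x, p):
    vals = f(x, U)
    # p = 1 recovers the robust (pessimistic) objective, p = 0 the optimistic one.
    return p * vals.max() + (1.0 - p) * vals.min()

X = np.linspace(-2.0, 2.0, 401)
for p in (0.0, 0.5, 1.0):
    values = np.array([f_p(x, p) for x in X])
    print(p, "minimizer:", X[values.argmin()], "value:", round(float(values.min()), 3))
```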