Abstract
Robust optimization is proving to be a fruitful tool for the study of problems with uncertain data. In this paper we deal with the minmax approach to robust multiobjective optimization. We survey the main features of this problem, with particular reference to results concerning linear scalarization and the sensitivity of optimal values with respect to changes in the uncertainty set. Furthermore, we prove results concerning the sensitivity of optimal solutions with respect to changes in the uncertainty set. Finally, we apply the presented results to mean-variance portfolio optimization.
Introduction
Many real-world decision problems arising in engineering and management depend on uncertain parameters. This uncertainty may be due to limited observability of data, noisy measurements, or implementation and prediction errors. Stochastic optimization and robust optimization are the frameworks that have classically been used to model this uncertainty within a decision-making process.
Stochastic optimization assumes that the decision maker has complete knowledge about the underlying uncertainty through a known probability distribution. The probability distribution of the random parameters is inferred from prior beliefs, experts' opinions, errors in predictions based on historical data, or a mixture of these.
In robust optimization, instead, no assumption on the distribution of the parameters is required. A robust solution is defined by introducing a different optimization problem, known as the robust counterpart, that allows one to find a “worst-case-oriented” optimal solution. Robust optimization is proving to be a fruitful tool for the study of problems with uncertain data. Since the seminal paper by Ben-Tal and Nemirovski (1998), several authors have studied the problem both in scalar and multiobjective settings [see e.g. (Hayashi et al., 2013; Skanda & Lebiedz, 2013; Souyris et al., 2013; Suzuki et al., 2013; Goh & Sim, 2011)]. More recently, a detailed monograph has been devoted to the topic (Ben-Tal et al., 2009), and the survey papers by Gabrel et al. (2014) and Bertsimas et al. (2011) collected the major issues and applications of scalar robust optimization. The need for such a tool arises when a constrained optimization problem depends upon uncertain parameters that may affect the objective function and/or the constraints. This occurs in many real-world applications of optimization in industry, energy markets, and finance, to name a few fields [see e.g. (Hu et al., 2011; Hassanzadeh et al., 2014; Aouam et al., 2016; Zugno & Conejo, 2015; Gregory et al., 2011) and the references therein], due to unknown future developments, measurement or manufacturing errors, incomplete information in model development, and so on. In such circumstances, stochastic optimization is often applied, but this approach requires the choice of a probability distribution that can hardly be motivated other than by the technical capability of solving the problem.
Robust optimization has also been extended to multiobjective problems, see e.g. Kuroiwa and Lee (2012) and Crespi et al. (2017), and several theoretical issues have been investigated by using the componentwise minmax approach. Financial applications of the minmax approach to robust multiobjective optimization can be found e.g. in Schöttle and Werner (2009) and Fliege and Werner (2014), where robust portfolio selection is investigated.
We remark that a different way to deal with robust multiobjective optimization problems arises from observing that the so-called robust solutions of a multiobjective optimization problem are deeply related to solutions of a set optimization problem [see e.g. (Ehrgott et al., 2014; Ide et al., 2014; Crespi et al., 2017)].
In this paper we deal with the minmax approach to robust multiobjective optimization. We survey the main notions and results related to this approach, with particular reference to linear scalarization and optimality conditions.
Then we investigate the sensitivity of the solutions of robust multiobjective optimization problems with respect to variations of the uncertainty set. We first recall results about the sensitivity of the optimal values with respect to changes in the uncertainty set. In particular we observe that when robust solutions of a multiobjective optimization problem are considered, a “loss of efficiency” occurs with respect to the solution obtained in the “nominal” problem, i.e. the problem in which the uncertain parameters assume a fixed value that can be an estimate of the “true” value [see e.g. Ben-Tal and Nemirovski (1998)]. In the scalar case, assuming a minimization problem, this simply means that the robust optimal value is greater than or equal to the optimal value of the nominal problem and robust solutions are \(\epsilon \)-solutions of the nominal problem. We estimate the location of the efficient frontiers of the nominal problem and of the robust problem, and the related efficiency loss, through set distances, according to the shape of the uncertainty set. Thereafter, we prove results concerning the sensitivity of the optimal solutions with respect to changes in the uncertainty set.
Finally, we consider applications of the presented results to mean-variance portfolio optimization.
The paper is organized as follows. In Sect. 2 we recall the formulation of a Robust Multiobjective Optimization Problem (RMP). In Sect. 3 we recall results about linear scalarization of a RMP and we give optimality conditions under convexity assumptions. In Sect. 4 we first recall results about sensitivity of optimal values (efficient frontiers) of a RMP with respect to changes in the uncertainty set. Then we prove results about the sensitivity of optimal solutions with respect to changes in the uncertainty set. Section 5 gives an application of the presented results to Mean-Variance Portfolio Optimization. Finally, Sect. 6 concludes the paper with some suggestions for future research.
Robust multiobjective optimization: problem formulation
Throughout this paper \({{\mathbb {R}}}^n\) denotes the Euclidean space of dimension n. Given a lower bounded function \(g: {{\mathbb {R}}}^n\rightarrow {{\mathbb {R}}}\) and a closed convex set \(X \subseteq {{\mathbb {R}}}^n\), consider the scalar optimization problem
$$\begin{aligned} \min _{x\in X} g(x). \qquad \qquad (\mathrm{P}) \end{aligned}$$
A point \(x^0 \in X\) is a solution of problem (P) when \(g(x^0)= \inf _{x\in X} g(x)\). We consider now an uncertain optimization problem
$$\begin{aligned} \min _{x\in X} f(x, u) \qquad \qquad (\mathrm{UP}) \end{aligned}$$
where \(f:{{\mathbb {R}}}^n\times {{\mathbb {R}}}^p\rightarrow {{\mathbb {R}}}\), u is an uncertain parameter, with \(u\in {\mathcal {U}}\) for some convex compact set \({\mathcal {U}}\subseteq {{\mathbb {R}}}^p\). We assume f is continuous w.r.t. u and \(f(\cdot , u)\) is lower bounded on X for every \(u\in {\mathcal {U}}\).
Problem (UP) has been extensively studied in the literature [see e.g. Ben-Tal et al. (2009) and the references therein]. Following Ben-Tal et al. (2009), we associate to problem (UP) the Robust Optimization Problem
$$\begin{aligned} \min _{x\in X}\max _{u\in {{\mathcal {U}}}} f(x, u). \qquad \qquad (\mathrm{RP}) \end{aligned}$$
Problem (RP) describes a worst-case oriented attitude of the decision maker and is called the robust counterpart of problem (UP).
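To fix ideas, the following minimal numeric sketch (not taken from the cited literature) approximates the robust counterpart (RP): the quadratic objective, the ball-shaped uncertainty set and the sample size are illustrative assumptions, and the inner maximum is approximated on a finite sample of \({{\mathcal {U}}}\).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
dirs = rng.normal(size=(200, 2))
U = 0.5 * dirs / np.linalg.norm(dirs, axis=1, keepdims=True)  # sphere of radius 0.5

def f(x, u):
    # toy uncertain objective: linear in u, strongly convex in x
    return u @ x + x @ x

def robust_objective(x):
    # worst case of f(x, .); for f linear in u the maximum over the ball
    # is attained on its boundary, so sampling the sphere is enough
    return max(f(x, u) for u in U)

res = minimize(robust_objective, np.ones(2), method="Nelder-Mead")
print(res.x, res.fun)  # approximate robust solution and robust optimal value
```

A derivative-free method is used here because the pointwise maximum is in general nonsmooth, even when each \(f(\cdot , u)\) is smooth.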
In order to extend problem (RP) to the multiobjective case we recall some basic notions in multiobjective optimization [see e.g. Sawaragi et al. (1985)]. We consider the problem
$$\begin{aligned} \min _{x\in X}\ g(x) \qquad \qquad (\mathrm{MP}) \end{aligned}$$
where \(g(x)= (g_1(x), \ldots , g_m(x))\) with \(g_i:{{\mathbb {R}}}^n\rightarrow {{\mathbb {R}}}\), \(i=1, \ldots , m\) and \(X \subseteq {{\mathbb {R}}}^n\) is a closed convex set.
A point \(x^0\in X\) is said to be a (Pareto) efficient solution of problem (MP) when
$$\begin{aligned} \mathrm{Im}(g)\cap \left( g(x^0)-{{\mathbb {R}}}^m_+{\setminus }\{0\}\right) =\emptyset \end{aligned}$$(1)
where \(\mathrm{Im}(g)\) is the image of g, or equivalently, there does not exist \(x\in X\) such that \(g(x)\le g(x^0)\) and \(g(x)\ne g(x^0)\), where \(a\le b\) if \(a\in b-{{\mathbb {R}}}^m_+\).
A point \(x^0\) is said to be a weakly efficient solution of problem (MP) when
$$\begin{aligned} \mathrm{Im}(g)\cap \left( g(x^0)-{\mathrm{int\,}}{{\mathbb {R}}}^m_+\right) =\emptyset \end{aligned}$$
or equivalently, there does not exist \(x\in X\) such that \(g(x)< g(x^0)\), where \(a< b\) if \(a\in b-{\mathrm{int\,}}{{\mathbb {R}}}^m_+\).
A point \(x^0\) is said to be a properly efficient solution of problem (MP) if it is an efficient solution and there exists a number \(M>0\) such that for all \(i=1, \ldots , m\) and \(x \in X\) satisfying \(g_i(x)< g_i(x^0)\) there exists an index j such that \(g_j(x^0) < g_j(x)\) and
$$\begin{aligned} \frac{g_i(x^0)-g_i(x)}{g_j(x)-g_j(x^0)}\le M. \end{aligned}$$
The set of efficient (resp. weakly efficient, properly efficient) solutions of problem (MP) is denoted by \(\mathrm{Eff}(\mathrm{MP})\) (resp. \(\mathrm{WEff}(\mathrm{MP})\), \(\mathrm{PEff}(\mathrm{MP})\)). We will set
$$\begin{aligned} \mathrm{Min}(\mathrm{MP})=g(\mathrm{Eff}(\mathrm{MP})), \quad \mathrm{WMin}(\mathrm{MP})=g(\mathrm{WEff}(\mathrm{MP})), \quad \mathrm{PMin}(\mathrm{MP})=g(\mathrm{PEff}(\mathrm{MP})). \end{aligned}$$
Clearly \(\mathrm{PEff(\mathrm{MP})} \subseteq \mathrm{Eff}(\mathrm{MP})\subseteq \mathrm{WEff}(\mathrm{MP})\) and \(\mathrm{PMin(\mathrm{MP})} \subseteq \mathrm{Min}(\mathrm{MP})\subseteq \mathrm{WMin}(\mathrm{MP})\).
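For finite approximations of the image set, efficiency can be checked by direct enumeration with the componentwise order used above; the following sketch (an illustration, not part of the cited theory) is one possible implementation.

```python
import numpy as np

def efficient_indices(Y):
    """Indices of the (Pareto) efficient rows of Y under componentwise
    minimization: y is kept if no z satisfies z <= y and z != y."""
    keep = []
    for i, y in enumerate(Y):
        if not any((z <= y).all() and (z != y).any() for z in Y):
            keep.append(i)
    return keep

Y = np.array([[1.0, 3.0], [2.0, 2.0], [3.0, 1.0], [3.0, 3.0]])
print(efficient_indices(Y))  # [0, 1, 2]: the point (3, 3) is dominated by (2, 2)
```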
Now we consider an uncertain multiobjective optimization problem
$$\begin{aligned} \min _{x\in X}\ (f_1(x, u_1), \ldots , f_m(x, u_m)) \qquad \qquad (\mathrm{UMP}) \end{aligned}$$
where \(f_i:{{\mathbb {R}}}^n\times {{\mathbb {R}}}^p\rightarrow {{\mathbb {R}}}\), \(i=1,\ldots , m\), are continuous with respect to \(u_i\), with \(u_i\in {\mathcal {U}}_i\) and \({\mathcal {U}}_i\subseteq {{\mathbb {R}}}^p\) convex and compact, \(i=1, \ldots , m\).
The robust counterpart of (UMP) is defined as [see e.g. Kuroiwa and Lee (2012), Kuroiwa and Lee (2014)]
$$\begin{aligned} \min _{x\in X}\ \left( \max _{u_1\in {{{\mathcal {U}}}}_1}f_1(x, u_1), \ldots , \max _{u_m\in {{{\mathcal {U}}}}_m}f_m(x, u_m)\right) \qquad \qquad (\mathrm{RMP}) \end{aligned}$$
A robust efficient (weakly efficient, properly efficient) solution of (UMP) is defined as a vector \(x^0\in X\) that is an efficient (weakly efficient, properly efficient) solution of (RMP).
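The componentwise worst case defining (RMP) is straightforward to approximate once the uncertainty sets are sampled; in the sketch below the objective functions and the samples are illustrative assumptions.

```python
import numpy as np

def robust_vector_objective(x, fs, U_sets):
    """Objective-wise worst case defining (RMP): the i-th component is the
    maximum of f_i(x, .) over a finite sample of the uncertainty set U_i."""
    return np.array([max(f(x, u) for u in U) for f, U in zip(fs, U_sets)])

# illustrative instance with two objectives and interval uncertainty
fs = [lambda x, u: u * x[0] + x[1] ** 2,
      lambda x, u: u * x[1] + x[0] ** 2]
U_sets = [np.linspace(-1.0, 1.0, 11), np.linspace(0.0, 2.0, 11)]
print(robust_vector_objective(np.array([0.5, 0.5]), fs, U_sets))
```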
Robust multiobjective optimization: scalarization and optimality conditions
In this section we recall linear scalarization methods for finding robust properly efficient solutions and robust weakly efficient solutions of problem (UMP). As a consequence of these results it is possible to prove optimality conditions for robust multiobjective optimization problems. We recall that, given the multiobjective optimization problem (MP) the following characterizations of weakly and properly efficient solutions by means of linear scalarization hold (see e.g. Sawaragi et al. (1985)).
Theorem 3.1
-
(i)
If there exist numbers \(\beta _i \ge 0\), \(i=1, \ldots , m\), not all zero, such that \(x^0\in X\) minimizes the function
$$\begin{aligned} \sum _{i=1}^{m}\beta _i g_i(x) \end{aligned}$$(2)then \(x^0 \in \mathrm{WEff (MP)}\). If the functions \(g_i\), \(i=1, \ldots , m\), are convex and \(x^0 \in \mathrm{WEff (MP)}\), then there exist numbers \(\beta _i \ge 0\), \(i=1, \ldots , m\), not all zero, such that \(x^0\) minimizes the function (2).
-
(ii)
If there exist numbers \(\beta _i >0 \), \(i=1, \ldots , m\), such that \(x^0\in X\) minimizes the function (2), then \(x^0 \in \mathrm{PEff (MP)}\). If the functions \(g_i\), \(i=1, \ldots , m\), are convex and \(x^0 \in \mathrm{PEff (MP)}\), then there exist numbers \(\beta _i > 0\), \(i=1, \ldots , m\), such that \(x^0\) minimizes the function (2).
The next result, due to Kuroiwa and Lee (2012), extends Theorem 3.1 to robust multiobjective optimization problems. We set
$$\begin{aligned} {{{\mathcal {U}}}}_i(x)=\left\{ u_i\in {{{\mathcal {U}}}}_i : f_i(x, u_i)=\max _{v_i\in {{{\mathcal {U}}}}_i}f_i(x, v_i)\right\} , \quad i=1, \ldots , m. \end{aligned}$$(3)
Theorem 3.2
In problem (UMP) assume \(f_i(x, u_i)\) are convex with respect to \(x\in X\) and concave with respect to \(u_i \in {{{\mathcal {U}}}}_i\).
-
(i)
\(x^0 \in X\) is a robust properly efficient solution for problem (UMP) if and only if there exist numbers \(\lambda _i^0>0\), and vectors \(u_i^0\in {{{\mathcal {U}}}}_i(x^0)\), \(i=1, \ldots , m\), such that
$$\begin{aligned} \sum _{i=1}^m\lambda _i^0f_i(x^0, u_i^0) \le \sum _{i=1}^m\lambda _i^0f_i(x, u_i^0), \ \ \forall x \in X \end{aligned}$$(4)i.e. \(x^0\) minimizes the function \(\sum _{i=1}^m\lambda _i^0f_i(x, u_i^0)\) over X. This is equivalent to saying that \(x^0\) is properly efficient for the problem of minimizing
$$\begin{aligned} (f_1(x, u_1^0), \ldots , f_m(x, u_m^0)) \end{aligned}$$(5) -
(ii)
\(x^0\in X\) is a robust weakly efficient solution for problem (UMP) if and only if there exist numbers \(\lambda _i^0\ge 0\), not all zero, and vectors \(u_i^0 \in {{{\mathcal {U}}}}_i\), \(i=1, \ldots , m\), with \(u_i^0\in {{{\mathcal {U}}}}_i(x^0)\) when \( \lambda _i^0>0\), such that
$$\begin{aligned} \sum _{i=1}^m\lambda _i^0f_i(x^0, u_i^0) \le \sum _{i=1}^m\lambda _i^0f_i(x, u_i^0), \ \ \forall x \in X \end{aligned}$$(6)i.e. \(x^0\) minimizes the function \(\sum _{i=1}^m \lambda _i^0f_i(x, u_i^0)\) over X. This is equivalent to saying that \(x^0\) is weakly efficient for the problem of minimizing the multiobjective function (5).
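Theorem 3.2 suggests a simple computational recipe: minimize a positively weighted sum of the worst-case objectives and sweep the weights. The sketch below follows this recipe under the sampling approximation, reusing robust_vector_objective, fs and U_sets from the previous sketch; it is an illustration, not the method of the cited papers.

```python
import numpy as np
from scipy.optimize import minimize

def robust_scalarization(lam, fs, U_sets, x0):
    # minimize the positively weighted sum of worst-case objectives
    obj = lambda x: lam @ robust_vector_objective(x, fs, U_sets)
    return minimize(obj, x0, method="Nelder-Mead").x

# sweeping the weights traces an approximation of the robust efficient set
x0 = np.array([0.5, 0.5])
points = [robust_scalarization(np.array([t, 1.0 - t]), fs, U_sets, x0)
          for t in np.linspace(0.1, 0.9, 9)]
```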
The next result is an immediate consequence of Theorem 3.2. We denote by \(\partial _{x}f_i(x^0, u_i)\) the subdifferential of the function \(f_i(\cdot , u_i)\) with respect to x at \(x^0 \in X\), i.e.
$$\begin{aligned} \partial _{x}f_i(x^0, u_i)=\left\{ \xi \in {{\mathbb {R}}}^n : f_i(x, u_i)-f_i(x^0, u_i)\ge \langle \xi , x-x^0\rangle ,\ \forall x\in {{\mathbb {R}}}^n\right\} \end{aligned}$$
and by \(N_X(x^0)\) the normal cone to the set X at the point \(x^0 \in X\), i.e.
$$\begin{aligned} N_X(x^0)=\left\{ \xi \in {{\mathbb {R}}}^n : \langle \xi , x-x^0\rangle \le 0,\ \forall x\in X\right\} . \end{aligned}$$
Theorem 3.3
In problem (UMP) assume \(f_i(x, u_i)\) are convex with respect to \(x\in X\) and concave with respect to \(u_i \in {{{\mathcal {U}}}}_i\).
-
(i)
A point \(x^0 \in X\) is a robust properly efficient solution for problem (UMP) if and only if there exist \(\lambda _i^0>0\), \(u_i^0 \in {{{\mathcal {U}}}}_i(x^0)\), \(i= 1, \ldots , m\), such that
$$\begin{aligned} 0 \in \sum _{i=1}^m \lambda _i^0 \partial _x f_i(x^0 , u_i^0) + N_X(x^0) \end{aligned}$$(9) -
(ii)
A point \(x^0 \in X\) is a robust weakly efficient solution for problem (UMP) if and only if there exist \(\lambda _i^0\ge 0\), not all zero, and \( u_i^0 \in {{{\mathcal {U}}}}_i\), \(i= 1, \ldots , m\), with \(u_i^0 \in {{{\mathcal {U}}}}_i(x^0)\) when \(\lambda _i^0 >0\), such that
$$\begin{aligned} 0 \in \sum _{i=1}^m \lambda _i^0 \partial _x f_i(x^0 , u_i^0) + N_X(x^0) \end{aligned}$$(10)
Proof
The proof is an immediate consequence of Theorem 3.2, the necessary and sufficient optimality conditions for scalar convex optimization and the sum rule for subdifferentials [see e.g. Rockafellar (1970)], and is omitted. \(\square \)
Robust multiobjective optimization: sensitivity to uncertainty
Various degrees of uncertainty can occur for the same objective function. Here we also introduce compact, convex subsets \({\mathcal U}_i^0 \subseteq {\mathcal {U}}_i\) representing the nominal instances of our robust optimization problem, i.e. the least achievable uncertainty. Moreover, we consider the sets
$$\begin{aligned} {{\mathcal {W}}}_i^{\lambda }=(1-\lambda ){{\mathcal {U}}}_i^0+\lambda {{\mathcal {U}}}_i, \quad i=1, \ldots , m, \end{aligned}$$
where \(\lambda \in \left[ 0,1\right] \). Clearly
$$\begin{aligned} {{\mathcal {W}}}_i^{0}={{\mathcal {U}}}_i^0, \qquad {{\mathcal {W}}}_i^{1}={{\mathcal {U}}}_i, \qquad {{\mathcal {U}}}_i^0\subseteq {{\mathcal {W}}}_i^{\lambda }\subseteq {{\mathcal {U}}}_i \ \text { for every } \lambda \in [0,1]. \end{aligned}$$
An extreme, yet meaningful situation is represented by \(\mathcal {U}_i^0= \{u_i^0\}\) that depicts the absence of uncertainty.
Sensitivity of the optimal values
In this subsection we survey results regarding the sensitivity of the set \({\mathrm{WMin\,}}(\mathrm{RMP}^{\lambda })\) with respect to changes in the uncertainty set described by variations of the parameter \(\lambda \) [see Crespi et al. (2018)].
We assume \(\max _{u_i\in {\mathcal {U}}_i^0}{f_i(\cdot ,u_i)}\) is lower bounded on X, and we set
$$\begin{aligned} E_{f_i}(x)=\max _{u_i\in {{\mathcal {U}}}_i}f_i(x, u_i)-\max _{u_i\in {{\mathcal {U}}}_i^0}f_i(x, u_i), \quad x \in X, \end{aligned}$$
and
$$\begin{aligned} {\overline{E}}_{f_i}=\sup _{x\in X}E_{f_i}(x), \qquad {\underline{E}}_{f_i}=\inf _{x\in X}E_{f_i}(x), \end{aligned}$$
assuming \({\overline{E}}_{f_i}\) is finite, \(i=1, \ldots , m\). Clearly, since \({\mathcal {U}}_i^0\subseteq {\mathcal {U}}_i\), \(E_{f_i}\left( x\right) \ge 0\), \(\forall x\in X\).
Remark 4.1
Simple calculations show that \({E}_{f_i}(x)\) assumes the following particular forms.
-
(i)
Assume \({\mathcal {U}}_i^0=\{u_i^0\}\). If \(f_i(x,u_i)=\langle f_i(x), u_i\rangle + h_i(x)\), with \(f_i:{{\mathbb {R}}}^n\rightarrow {{\mathbb {R}}}^p\), \(h_i: {{\mathbb {R}}}^n \rightarrow {{\mathbb {R}}}\), it holds
$$\begin{aligned} E_{f_i}(x)=\max _{u_i^\prime \in {\mathcal {U}}_i-u_i^0}{\langle f_i(x),u_i^\prime \rangle }, \end{aligned}$$which, for \({\mathcal {U}}_i={\mathcal {B}}(u_i^0)\) (unit ball in \({{\mathbb {R}}}^p\)) entails
$$\begin{aligned} E_{f_i}(x)=\max _{b\in {\mathcal {B}}(0)}{\langle f_i(x),b\rangle }=\Vert f_i(x)\Vert \end{aligned}$$If \(f_i(x,u_i)=\langle x,u_i\rangle +h_i(x)\) and \({\mathcal {U}}_i={\mathcal {B}}(u_i^0)\), it holds
$$\begin{aligned} E_{f_i}(x)=\Vert x\Vert . \end{aligned}$$ -
(ii)
Assume \(f_i(x,u_i)=\langle x,u_i\rangle +h_i(x)\) and consider uncertainty sets of ellipsoidal type. Set
$$\begin{aligned} {{\mathcal {U}}}= \{u=(u_1, \ldots , u_m): \sum _{i=1}^m c_i\Vert u_i-u_i^0\Vert \le \delta \} \end{aligned}$$(15)where \(\delta >0\) and \(c_i>0\), \(i=1, \ldots , m\) and let \({\mathcal U}_i\) be the projection of \({{\mathcal {U}}}\) on the i-th component. Then we have
$$\begin{aligned} E_{f_i}(x)=\frac{\delta }{c_i}\Vert x\Vert . \end{aligned}$$(16)
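The closed forms in Remark 4.1 are easy to check numerically. The following sketch samples the ball of radius \(\delta /c_i\) (an illustrative instance) and compares the sampled excess \(E_{f_i}(x)\) with formula (16).

```python
import numpy as np

rng = np.random.default_rng(2)
delta, c_i = 0.8, 2.0
x = np.array([1.0, -2.0, 0.5])
u0 = np.array([0.1, 0.2, 0.3])

# sample the sphere of radius delta / c_i around u0 (a linear function of u
# attains its maximum over the ball on the boundary)
dirs = rng.normal(size=(20000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
U = u0 + (delta / c_i) * dirs

E = max(x @ u for u in U) - x @ u0            # sampled E_{f_i}(x); h_i(x) cancels
print(E, (delta / c_i) * np.linalg.norm(x))   # the two values nearly coincide
```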
The robust counterpart relative to \({\mathcal {W}}_i^{\lambda }\) is
$$\begin{aligned} \min _{x\in X}\ \left( \max _{w_1\in {{\mathcal {W}}}_1^{\lambda }}f_1(x, w_1), \ldots , \max _{w_m\in {{\mathcal {W}}}_m^{\lambda }}f_m(x, w_m)\right) . \qquad \qquad (\mathrm{RMP}^{\lambda }) \end{aligned}$$
When \(\lambda = 0\) the robust counterpart shows the lowest level of uncertainty achievable (possibly none at all). We are going to study the behavior of the optimal values and optimal solutions of (RMP\(^\lambda \)) as \(\lambda \) changes. For the sake of simplicity we will set:
-
(i)
\(f^\lambda (x)=\left( \max _{w_1\in {\mathcal {W}}_1^{\lambda }}{f_1(x,w_1)},\ \ldots ,\ \max _{w_m\in {\mathcal {W}}_m^{\lambda }}{f_m(x,w_m)}\right) \);
-
(ii)
\(\overline{{\mathbf {E}}}_{f}=\left( {\overline{E}}_{f_1}, \ldots , {\overline{E}}_{f_m}\right) \);
-
(iii)
\(\underline{{\mathbf {E}}}_{f}=\left( {\underline{E}}_{f_1}, \ldots , {\underline{E}}_{f_m}\right) \).
From the definitions, we clearly have \(f^0 \le f^{\lambda }\) and \(0\le \underline{{\mathbf {E}}}_{f}\le \overline{{\mathbf {E}}}_{f}\).
We need the following relation between sets and the next definition (see e.g. Kuroiwa (2001), Luc (1989)). For \(A, B\subseteq {{\mathbb {R}}}^m\) we denote
$$\begin{aligned} A\le ^l B \ \text { when } \ B\subseteq A+{{\mathbb {R}}}^m_+, \end{aligned}$$
or equivalently, \(A\le ^l B\) if for every \(b\in B\) there exists \(a\in A\) such that \(a\le b\). This relation is reflexive and transitive, but not antisymmetric.
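For finite sets the relation \(\le ^l\) can be checked directly from this characterization; the sketch below is a straightforward implementation on illustrative data.

```python
import numpy as np

def le_l(A, B):
    """Check A <=^l B for finite sets of points (rows): every b in B must be
    dominated by some a in A in the componentwise order."""
    return all(any((a <= b).all() for a in A) for b in B)

A = np.array([[0.0, 0.0], [1.0, -1.0]])
B = np.array([[0.5, 0.5], [2.0, -1.0]])
print(le_l(A, B), le_l(B, A))  # True False: the relation is not symmetric
```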
Definition 4.1
A set \(A\subseteq {{\mathbb {R}}}^m\) is said to be \({{\mathbb {R}}}^m_+\)-closed when \(A+ {{\mathbb {R}}}^m_+\) is closed.
Now we discuss the location of the robust minimal and weakly minimal values.
Proposition 4.1
(Crespi et al., 2018) Assume that \(\mathrm{Im}(f^0)\) is \({{\mathbb {R}}}^m_+\)-closed. Then the set relation
$$\begin{aligned} {\mathrm{WMin\,}}(\mathrm{RMP}^{0})\le ^l {\mathrm{WMin\,}}(\mathrm{RMP}^{\lambda }) \end{aligned}$$
holds for every \(\lambda \in [0,1]\).
Remark 4.2
When \(m=1\), i.e. a scalar optimization problem is considered, Proposition 4.1 simply states that the optimal value of the Robust Optimization Problem with low uncertainty, \((\mathrm{RMP}^0)\), is less than or equal to the optimal value of the Robust Optimization Problem \((\mathrm{RMP}^\lambda )\) with uncertainty measured by the parameter \(\lambda \). Hence, the previous proposition basically states that there is an efficiency loss due to higher uncertainty, since the weakly efficient frontier of problem \((\mathrm{RMP}^{\lambda })\) lies above the weakly efficient frontier of problem \((\mathrm{RMP}^{0})\) ("above" is intended with respect to the \(\le ^l\) order).
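The monotonicity behind Proposition 4.1 can be observed numerically; the sketch below assumes a singleton nominal set \({{\mathcal {U}}}_i^0=\{u^0\}\) and the convex-combination form of \({{\mathcal {W}}}^{\lambda }\) recalled above, on an illustrative instance.

```python
import numpy as np

rng = np.random.default_rng(1)
U = rng.uniform(-1.0, 1.0, size=(500, 2))   # sample of the full uncertainty set
u0 = np.zeros(2)                            # nominal parameter, U^0 = {u0}
x = np.array([0.3, 0.7])                    # a fixed feasible point

def f(x, u):                                # toy objective, linear in u
    return u @ x + x @ x

for lam in (0.0, 0.25, 0.5, 1.0):
    W = (1.0 - lam) * u0 + lam * U          # sample of W^lambda
    print(lam, max(f(x, u) for u in W))     # worst case is nondecreasing in lambda
```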
The next result allows us to give an upper bound for the efficiency loss due to uncertainty.
Proposition 4.2
(Crespi et al., 2018) Assume that the functions \(f_i(x,u_i)\) are convex in each of the variables \(x\in X\) and \(u_i\in {{\mathcal {U}}_i}\), and that \(\mathrm{Im}(f^{\lambda })\), \(\lambda \in [0,1]\), is \({{\mathbb {R}}}^m_+\)-closed. Then the set relation
$$\begin{aligned} {\mathrm{WMin\,}}(\mathrm{RMP}^{\lambda })\le ^l {\mathrm{WMin\,}}(\mathrm{RMP}^{0})+\lambda \overline{{\mathbf {E}}}_{f} \end{aligned}$$
holds.
Remark 4.3
The convexity assumption on \(f_i ( \cdot , u_i)\) in Proposition 4.2 can be weakened to \({{\mathbb {R}}}^m_+\)-convexity of \(\mathrm{Im}(f^{\lambda })\).
Combining Propositions 4.1 and 4.2 we get the following corollary.
Corollary 4.1
(Crespi et al., 2018) Assume that the functions \(f_i(x,u_i)\) are convex in each of the variables \(x\in X\) and \(u_i\in {{\mathcal {U}}_i}\), and that \(\mathrm{Im}(f^\lambda )\), \(\lambda \in [0,1]\), is \({{\mathbb {R}}}^m_+\)-closed. Then the set relations
$$\begin{aligned} {\mathrm{WMin\,}}(\mathrm{RMP}^{0})\le ^l {\mathrm{WMin\,}}(\mathrm{RMP}^{\lambda })\le ^l {\mathrm{WMin\,}}(\mathrm{RMP}^{0})+\lambda \overline{{\mathbf {E}}}_{f} \end{aligned}$$
hold.
We now wish to estimate the distance between the efficient frontiers of \(\mathrm{RMP}^0\) and \(\mathrm{RMP}^{\lambda }\).
A set \(A\subseteq {{\mathbb {R}}}^m\) is said to be \({{\mathbb {R}}}^m_+\)-closed-convex-minorized if \(A+{{\mathbb {R}}}^m_+\) is closed and convex, and there exists \(x\in {{\mathbb {R}}}^m\) such that \(x+{{\mathbb {R}}}^m_+\supseteq A\). Let \({\mathcal {C}}\) be the family of all \({{\mathbb {R}}}^m_+\)-closed-convex-minorized nonempty subsets of \({{\mathbb {R}}}^m\).
We define a binary relation \(\equiv \) on \({\mathcal {C}}\) by: \(A\equiv B\) if \(A+{{\mathbb {R}}}^m_+=B+{{\mathbb {R}}}^m_+\) for any \(A,B\in {\mathcal {C}}\). Then \(\equiv \) is an equivalence relation and we can define the equivalence class \([A]=\{B\in {\mathcal {C}}\mid A\equiv B\}\) and the quotient set \({\mathcal {C}}\!/\!\equiv \ =\{[A]\mid A\in {\mathcal {C}} \}\). For \(D=\{d\in {{\mathbb {R}}}^m_+\mid \Vert d\Vert =1\}\), the function \(H:({\mathcal {C}}\!/\!\equiv )^2\rightarrow {{\mathbb {R}}}\) defined by
$$\begin{aligned} H([A],[B])=\sup _{d\in D}\left| \inf _{a\in A}\langle a, d\rangle -\inf _{b\in B}\langle b, d\rangle \right| \end{aligned}$$
is a metric [see e.g. Kuroiwa (2003), Kuroiwa and Nuriya (2006)].
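For finite sets, H can be approximated by sampling directions in D; the sketch below uses the inf-support-function form recalled above, with an illustrative sampling size.

```python
import numpy as np

def metric_H(A, B, n_dirs=5000, seed=4):
    """Approximate H([A], [B]) for finite sets of points (rows) by sampling
    unit directions d in the nonnegative orthant and comparing the
    inf-support functions min_a <a, d> and min_b <b, d>."""
    rng = np.random.default_rng(seed)
    D = np.abs(rng.normal(size=(n_dirs, A.shape[1])))
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    hA = (A @ D.T).min(axis=0)   # inf-support function of A at each direction
    hB = (B @ D.T).min(axis=0)
    return np.abs(hA - hB).max()

A = np.array([[0.0, 1.0], [1.0, 0.0]])
print(metric_H(A, A + 0.1))      # close to 0.1 * sqrt(2), the shift seen by H
```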
Corollary 4.2
(Crespi et al., 2018) Under the assumptions of Corollary 4.1, we have
$$\begin{aligned} H\left( [{\mathrm{WMin\,}}(\mathrm{RMP}^{0})], [{\mathrm{WMin\,}}(\mathrm{RMP}^{\lambda })]\right) \le \lambda \Vert \overline{{\mathbf {E}}}_{f}\Vert \end{aligned}$$
and
$$\begin{aligned} {\mathrm{WMin\,}}(\mathrm{RMP}^{\lambda })\rightarrow {\mathrm{WMin\,}}(\mathrm{RMP}^{0}) \ \text { as } \lambda \rightarrow 0^+ \end{aligned}$$
in the metric H.
The next result gives a lower bound for the efficiency loss due to uncertainty.
Proposition 4.3
(Crespi et al., 2018) Assume that the functions \(f_i(x,u_i)\) are convex in \(x\in X\) and concave in \(u_i\in {{\mathcal {U}}_i}\), and that \(\mathrm{Im}(f^0)\) is \({{\mathbb {R}}}^m_+\)-closed. Then the set relation
$$\begin{aligned} {\mathrm{WMin\,}}(\mathrm{RMP}^{0})+\lambda \underline{{\mathbf {E}}}_{f}\le ^l {\mathrm{WMin\,}}(\mathrm{RMP}^{\lambda }) \end{aligned}$$
holds.
Combining the previous results we can give upper and lower bounds for the efficiency loss as stated in the next corollary.
Corollary 4.3
(Crespi et al., 2018) Assume that \(f_i(x,u_i)=\langle f_i(x),u_i\rangle +h_i(x)\), where \(f_i:{{\mathbb {R}}}^n\rightarrow {{\mathbb {R}}}^p\) are convex, \(h_i: {{\mathbb {R}}}^n \rightarrow {{\mathbb {R}}}\), \(i=1,\ldots , m\), and \(\mathrm{Im}(f^\lambda )\), \(\lambda \in [0,1]\), are \({{\mathbb {R}}}^m_+\)-closed. Then the set relations
$$\begin{aligned} {\mathrm{WMin\,}}(\mathrm{RMP}^{0})+\lambda \underline{{\mathbf {E}}}_{f}\le ^l {\mathrm{WMin\,}}(\mathrm{RMP}^{\lambda })\le ^l {\mathrm{WMin\,}}(\mathrm{RMP}^{0})+\lambda \overline{{\mathbf {E}}}_{f} \end{aligned}$$
hold.
Corollary 4.4
(Crespi et al., 2018) Under the assumptions of the previous corollary, we have
$$\begin{aligned} \lambda \Vert \underline{{\mathbf {E}}}_{f}\Vert \le H\left( [{\mathrm{WMin\,}}(\mathrm{RMP}^{0})], [{\mathrm{WMin\,}}(\mathrm{RMP}^{\lambda })]\right) \le \lambda \Vert \overline{{\mathbf {E}}}_{f}\Vert . \end{aligned}$$
Sensitivity of optimal solutions
We now establish results regarding the sensitivity of optimal solutions with respect to changes in the uncertainty set. We need the following definitions [see e.g. Li and Xu (2010)].
Definition 4.2
Let \(f: X \rightarrow {{\mathbb {R}}}\). We say that \(x^0 \in X\) is an isolated minimizer of order \(\alpha >0\) and constant \(h>0\) when for every \(x\in X\) it holds
$$\begin{aligned} f(x)\ge f(x^0)+h\Vert x-x^0\Vert ^{\alpha }. \end{aligned}$$
Definition 4.3
We say that \(f_i(x, \cdot )\) is Hölder of order \(\delta >0\) on \({{{\mathcal {U}}}}_i\) with constant \(m_i>0\), uniformly with respect to \(x \in X\), when
$$\begin{aligned} |f_i(x, u_i^1)-f_i(x, u_i^2)|\le m_i\Vert u_i^1-u_i^2\Vert ^{\delta } \end{aligned}$$
for every \(u_i^1, u_i^2 \in {{\mathcal {U}}}_i\) and \(x \in X\).
Theorem 4.1
Let X be a compact set and assume that
-
(i)
\(f_i(x, \cdot )\) is Hölder of order \(\delta >0\) on \({\mathcal{U}}_i\) with constant \(m_i>0\), uniformly with respect to \(x \in X\), \(i=1, \ldots , m\).
-
(ii)
\(f_i(x, u_i)\) are convex with respect to \(x \in X\) and concave with respect to \(u_i \in {{{\mathcal {U}}}}_i\).
Let \(u^0 =(u_1^0, \ldots , u_m^0)\), with \({{\mathcal {U}}}_i^0=\{u_i^0\}\), \(i=1, \ldots , m\), and set
$$\begin{aligned} L(x, u)=\sum _{i=1}^{m}\beta _i f_i(x, u_i) \end{aligned}$$
with \(\beta _i\in [0,1]\), \(i=1, \ldots , m\) and \(\sum _{i=1}^m\beta _i=1\). Let \(x^0 \in X\) be an isolated minimizer of order \(\alpha \) and constant h for the function \(L(x, u^0)\).
Then there exists \(x(\lambda ) \in \mathrm{WEff}(\mathrm{RMP}^{\lambda })\) such that
$$\begin{aligned} d(x(\lambda ), \mathrm{WEff}(\mathrm{RMP}^{0}))\le \left( \frac{2\lambda ^{\delta }}{h}\max _{i=1, \ldots , m}m_i\, [D({{\mathcal {U}}}_i)]^{\delta }\right) ^{1/\alpha } \end{aligned}$$
where \(d(x, A)= \inf _{a \in A}\Vert x-a\Vert \) denotes the distance between the point x and the set A and D(A) denotes the diameter of the set A, i.e.
$$\begin{aligned} D(A)=\sup _{a_1, a_2\in A}\Vert a_1-a_2\Vert . \end{aligned}$$
Proof
We have \(L(x, u^0)-L(x^0, u^0) \ge h\Vert x-x^0\Vert ^{\alpha }\). Let \(x(\lambda ) \in X\) be a minimizer of the function
$$\begin{aligned} \sum _{i=1}^{m}\beta _i\max _{w_i\in {{{\mathcal {W}}}}_i^{\lambda }}f_i(x, w_i). \end{aligned}$$
Hence \(x(\lambda ) \in {\mathrm{WEff\,}}(RMP^{\lambda })\) by Theorem 3.1.
We have
$$\begin{aligned} \min _{x\in X}\sum _{i=1}^{m}\beta _i\max _{w_i\in {{{\mathcal {W}}}}_i^{\lambda }}f_i(x, w_i)=\min _{x\in X}\max _{w\in {{{\mathcal {W}}}}^{\lambda }}\sum _{i=1}^{m}\beta _i f_i(x, w_i), \end{aligned}$$
where \({{{\mathcal {W}}}}^{\lambda }={{{\mathcal {W}}}}_1^{\lambda }\times \cdots \times {{{\mathcal {W}}}}_m^{\lambda }\), and by using Ky Fan's Minimax Theorem (Fan, 1953) we get
$$\begin{aligned} \min _{x\in X}\max _{w\in {{{\mathcal {W}}}}^{\lambda }}\sum _{i=1}^{m}\beta _i f_i(x, w_i)=\max _{w\in {{{\mathcal {W}}}}^{\lambda }}\min _{x\in X}\sum _{i=1}^{m}\beta _i f_i(x, w_i). \end{aligned}$$
It follows that there exist vectors \({\bar{u}}_i\in {{{\mathcal {W}}}}_i^{\lambda }(x(\lambda ))\), \(i=1, \ldots , m\) (see (3) for the definition of \({{{\mathcal {W}}}}_i^{\lambda }(x(\lambda ))\)) such that, for all \(x\in X\),
$$\begin{aligned} \sum _{i=1}^{m}\beta _i f_i(x(\lambda ), {\bar{u}}_i)\le \sum _{i=1}^{m}\beta _i f_i(x, {\bar{u}}_i), \end{aligned}$$
i.e. we get the existence of \({\bar{u}}_i \in {{{\mathcal {W}}}}_i^{\lambda }\), \(i=1, \ldots , m\), such that \(x(\lambda )\) minimizes the function
$$\begin{aligned} L(x, {\bar{u}})=\sum _{i=1}^{m}\beta _i f_i(x, {\bar{u}}_i). \end{aligned}$$
It holds
$$\begin{aligned} L(x(\lambda ), u^0)-L(x^0, u^0)= L(x(\lambda ), {\bar{u}})-L(x^0, {\bar{u}})+w, \end{aligned}$$
where
$$\begin{aligned} w=[L(x(\lambda ), u^0)-L(x(\lambda ), {\bar{u}})]-[L(x^0, u^0)-L(x^0, {\bar{u}})]. \end{aligned}$$
We have, by the Hölder assumption and since \(\Vert {\bar{u}}_i-u_i^0\Vert \le \lambda D({{\mathcal {U}}}_i)\),
$$\begin{aligned} |w|\le 2\lambda ^{\delta }\sum _{i=1}^{m}\beta _i m_i[D({{\mathcal {U}}}_i)]^{\delta }\le 2\lambda ^{\delta }\max _{i=1, \ldots , m}m_i[D({{\mathcal {U}}}_i)]^{\delta }. \end{aligned}$$
We claim that
$$\begin{aligned} L(x(\lambda ), u^0)-L(x^0, u^0)\le |w|. \end{aligned}$$
Indeed, suppose to the contrary that \(L(x(\lambda ), u^0)-L(x^0, u^0) -|w|>0\). If \(w=0\), this contradicts the fact that \(x^0\) is a minimizer of \(L(x, u^0)\); if \(w\not =0\), it contradicts the fact that \(x(\lambda )\) is a minimizer of \(L(x, {\bar{u}})\). Observe now that, since \(x^0\) is an isolated minimizer of order \(\alpha \) and constant h, we have
$$\begin{aligned} h\Vert x(\lambda )-x^0\Vert ^{\alpha }\le L(x(\lambda ), u^0)-L(x^0, u^0)\le |w| \end{aligned}$$
and hence
$$\begin{aligned} \Vert x(\lambda )-x^0\Vert \le \left( \frac{|w|}{h}\right) ^{1/\alpha }. \end{aligned}$$
So it holds
$$\begin{aligned} d(x(\lambda ), \mathrm{WEff}(\mathrm{RMP}^{0}))\le \Vert x(\lambda )-x^0\Vert \le \left( \frac{2\lambda ^{\delta }}{h}\max _{i=1, \ldots , m}m_i[D({{\mathcal {U}}}_i)]^{\delta }\right) ^{1/\alpha }, \end{aligned}$$
which concludes the proof. \(\square \)
Remark 4.4
If in Theorem 4.1 we assume \(\beta _i \in (0,1]\), \(i=1, \ldots , m\) with \(\sum _{i=1}^m\beta _i=1\), then we get the existence of a point \(x(\lambda ) \in \mathrm{PEff} ({\mathrm{RMP}}^\lambda )\) such that
$$\begin{aligned} d(x(\lambda ), \mathrm{PEff}(\mathrm{RMP}^{0}))\le \left( \frac{2\lambda ^{\delta }}{h}\max _{i=1, \ldots , m}m_i[D({{\mathcal {U}}}_i)]^{\delta }\right) ^{1/\alpha }. \end{aligned}$$
Application to mean-variance portfolio optimization
We apply the results of the previous sections to mean-variance portfolio optimization. We recall that the mean-variance portfolio optimization model dates back to Markowitz (1952) [see also Markowitz (1968)]. The basic idea is that a portfolio is characterized solely by two quantities: risk (mostly measured in terms of variance or volatility) and expected return. Since an investor seeks an allocation with low risk and high expected return, a trade-off between these two conflicting aims has to be made.
Consider a financial market with n risky assets defined on a suitable probability space in a single-period setting. We assume their multivariate distribution has parameters \(\mu \) and \(\Sigma \), representing the vector of expected returns and the variance-covariance matrix, respectively. We also assume that
$$\begin{aligned} X=\left\{ x\in {{\mathbb {R}}}^n : x_i\ge 0,\ i=1, \ldots , n,\ \sum _{i=1}^{n}x_i=1\right\} \end{aligned}$$
is the set of admissible portfolios (i.e. we admit no short selling). The efficient frontier in portfolio optimization is obtained as the set of solutions of the following problem:
$$\begin{aligned} \min _{x\in X}\ (f_1(x, \mu ), f_2(x, \Sigma )) \end{aligned}$$(44)
where \(f_1 (x, \mu ) = - \langle \mu , x \rangle \); \(f_2 (x, \Sigma ) = x^T\Sigma x\). However, the nominal values of \(\mu \) and \(\Sigma \) are not known before the optimal portfolio is selected, although their realization will affect the payoff (an issue already pointed out in Markowitz (1952)). The Decision Maker, therefore, faces an uncertainty problem that we can model by assuming that the couple \((\mu , \Sigma )\) ranges in some uncertainty set \({{\mathcal {U}}}\). Assume \((\mu ^0 , \Sigma ^0) \in {{\mathbb {R}}}^n\times {{\mathbb {M}}}^n_+\) is a nominal instance (e.g. the one that will be realized or the one that can be expected under some arbitrary distribution assumption). Following Fliege and Werner (2014) we assume \({{\mathcal {U}}}\) is of ellipsoidal type, i.e.
$$\begin{aligned} {{\mathcal {U}}}=\left\{ (\mu , \Sigma )\in {{\mathbb {R}}}^n\times {{\mathbb {M}}}^n_+ : \Vert \mu -\mu ^0\Vert ^2+c^2\Vert \Sigma -\Sigma ^0\Vert ^2\le r^2\right\} \end{aligned}$$(45)
with \(r>0\) and \(c>0\). Here \({{\mathbb {M}}}_+^n\) denotes the set of positive semidefinite square matrices of order n. We denote by \({{{\mathcal {U}}}}_1 \subseteq {{\mathbb {R}}}^n\) the projection of \({{{\mathcal {U}}}}\) on \({{\mathbb {R}}}^n\) and by \({{{\mathcal {U}}}}_2 \subseteq {{\mathbb {M}}}^n_+\) the projection of \({{{\mathcal {U}}}}\) on \({{\mathbb {M}}}^n_+\). With \({{\mathbb {M}}}^n_{++}\) we denote the set of positive definite square matrices of order n. To comply with the notation of the previous sections we can identify the matrix \(\Sigma \) with an element of \({{\mathbb {R}}}^{n^2}\). Hence, the robust counterpart of Problem (44) is
$$\begin{aligned} \min _{x\in X}\ \left( \max _{\mu \in {{{\mathcal {U}}}}_1}f_1(x, \mu ),\ \max _{\Sigma \in {{{\mathcal {U}}}}_2}f_2(x, \Sigma )\right) . \end{aligned}$$
Set
$$\begin{aligned} {{\mathcal {W}}}_1^{\lambda }=(1-\lambda )\{\mu ^0\}+\lambda {{\mathcal {U}}}_1, \qquad {{\mathcal {W}}}_2^{\lambda }=(1-\lambda )\{\Sigma ^0\}+\lambda {{\mathcal {U}}}_2. \end{aligned}$$
Remark 4.1 gives
$$\begin{aligned} E_{f_1}(x)=r\Vert x\Vert , \end{aligned}$$
while simple calculations show that
$$\begin{aligned} E_{f_2}(x)=\frac{r}{c}\Vert x\Vert ^2. \end{aligned}$$
It follows
$$\begin{aligned} \underline{{\mathbf {E}}}_{f}=r\left( \frac{\sqrt{n}}{n}, \frac{1}{cn}\right) , \qquad \overline{{\mathbf {E}}}_{f}=r\left( 1, \frac{1}{c}\right) . \end{aligned}$$
Denote by \(\mathrm{RMP}^{\lambda }\) the robust counterpart of problem (44) with uncertainty sets \({\mathcal W}_1^{\lambda }\) and \({{\mathcal {W}}}_2^{\lambda }\). Corollaries 4.3 and 4.4 give
$$\begin{aligned} {\mathrm{WMin\,}}(\mathrm{RMP}^{0})+\lambda r\left( \frac{\sqrt{n}}{n}, \frac{1}{cn}\right) \le ^l {\mathrm{WMin\,}}(\mathrm{RMP}^{\lambda })\le ^l {\mathrm{WMin\,}}(\mathrm{RMP}^{0})+\lambda r\left( 1, \frac{1}{c}\right) \end{aligned}$$(50)
and
$$\begin{aligned} \lambda r\left\| \left( \frac{\sqrt{n}}{n}, \frac{1}{cn}\right) \right\| \le H\left( [{\mathrm{WMin\,}}(\mathrm{RMP}^{0})], [{\mathrm{WMin\,}}(\mathrm{RMP}^{\lambda })]\right) \le \lambda r\left\| \left( 1, \frac{1}{c}\right) \right\| . \end{aligned}$$(51)
Inequalities (50) set upper and lower bounds for the efficiency loss due to uncertainty that one incurs by considering problem \(\mathrm{RMP}^{\lambda }\). The left-hand inequality in (50) states that the weakly efficient frontier for problem \(\mathrm{RMP}^{\lambda }\) is shifted upwards in the direction \(\left( \frac{ \sqrt{n}}{n}, \frac{1}{cn}\right) \) and the “magnitude” of this shift is at least \(\lambda r\). Hence, the left-hand side in (50) gives an estimate of the “minimum” efficiency loss that one incurs, with respect to problem \(\mathrm{RMP}^0\), at the uncertainty level given by \(\lambda \). Observe that both components in \(\underline{\mathbf{E}}_{f}=r\left( \frac{ \sqrt{n}}{n}, \frac{1}{cn}\right) \) are decreasing with respect to n and \(\underline{\mathbf{E}}_{f}\) converges to (0, 0) as \(n \rightarrow +\infty \). This means that when the number of assets increases, the minimum efficiency loss in \(\mathrm{RMP}^\lambda \) with respect to \(\mathrm{RMP}^0\) decreases, which can be seen as an effect of portfolio diversification (i.e. increasing the number of assets in the portfolio we obtain a reduction of the minimum efficiency loss that one incurs at a given uncertainty level \(\lambda \)).
Similarly, formula (51) states upper and lower bounds for the distance between the efficient frontiers of problems \(\mathrm{RMP}^0\) and \(\mathrm{RMP}^{\lambda }\).
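The discussion above can be turned into a small computational experiment. The sketch below assumes ball-shaped projections \({{\mathcal {U}}}_1\) (radius r around \(\mu ^0\)) and \({{\mathcal {U}}}_2\) (radius r/c around \(\Sigma ^0\)), for which the worst cases admit the closed forms \(-\langle \mu ^0,x\rangle +\lambda r\Vert x\Vert \) and \(x^T\Sigma ^0x+\lambda \frac{r}{c}\Vert x\Vert ^2\); the market data are randomly generated for illustration only.

```python
import numpy as np
from scipy.optimize import minimize

n, lam, r, c = 5, 0.5, 0.1, 10.0
rng = np.random.default_rng(3)
mu0 = rng.uniform(0.02, 0.10, size=n)
A = rng.normal(size=(n, n))
Sigma0 = A @ A.T / n + 0.01 * np.eye(n)          # positive definite

def robust_obj(x, b1, b2):
    # weighted sum of the closed-form worst-case return and variance
    worst_ret = -mu0 @ x + lam * r * np.linalg.norm(x)
    worst_var = x @ Sigma0 @ x + lam * (r / c) * (x @ x)
    return b1 * worst_ret + b2 * worst_var

cons = ({"type": "eq", "fun": lambda x: x.sum() - 1.0},)
bnds = [(0.0, 1.0)] * n                           # no short selling
frontier = []
for b1 in np.linspace(0.05, 0.95, 10):            # sweep the scalarization weights
    res = minimize(robust_obj, np.full(n, 1.0 / n), args=(b1, 1.0 - b1),
                   method="SLSQP", bounds=bnds, constraints=cons)
    frontier.append((res.x @ Sigma0 @ res.x, mu0 @ res.x))
print(frontier)  # nominal (variance, expected return) of the robust portfolios
```

Repeating the sweep for several values of \(\lambda \) exhibits numerically the frontier shift bounded by (50).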
Observing that \(f_1\) is linear both in x and \(\mu \) and \(f_2\) is convex in x and linear in \(\Sigma \), we can apply Theorem 3.2 to get the following result which gives a characterization of solutions of problem \(\mathrm{RMP}^{\lambda }\) in terms of linear scalarization.
Theorem 5.1
-
(i)
A point \({\bar{x}}\in {\mathrm{WEff\,}}(RMP^{\lambda })\) if and only if there exist \(\beta _1, \beta _2 \ge 0\), not both zero, \({\bar{\mu }} \in {{{\mathcal {W}}}}_1^{\lambda }\), \({\bar{\Sigma }} \in {{{\mathcal {W}}}}_2^{\lambda }\) such that \( {\bar{x}}\) minimizes
$$\begin{aligned} -\beta _1\langle {\bar{\mu }}, x \rangle + \beta _2 x^T{\bar{\Sigma }} x \end{aligned}$$(52)i.e. \({\bar{x}}\) is weakly efficient for the portfolio optimization problem with returns \({\bar{\mu }}\) and variance-covariance matrix \(\bar{\Sigma }\).
-
(ii)
A point \({\bar{x}}\in \mathrm{PEff}(RMP^{\lambda })\) if and only if there exist \(\beta _1, \beta _2 >0 \), \({\bar{\mu }} \in \mathcal{W}_1^{\lambda }\), \({\bar{\Sigma }} \in {{{\mathcal {W}}}}_2^{\lambda }\) such that \( {\bar{x}}\in X\) minimizes
$$\begin{aligned} -\beta _1\langle {\bar{\mu }}, x \rangle + \beta _2 x^T{\bar{\Sigma }} x \end{aligned}$$(53)i.e. \({\bar{x}}\) is properly efficient for the portfolio optimization problem with returns \({\bar{\mu }}\) and variance-covariance matrix \(\bar{\Sigma }\).
Finally, we prove the following result, which is a counterpart of Theorem 4.1 for the mean-variance portfolio optimization problem. We denote by \(T_X(x^0)\) the tangent cone to X at \(x^0\in X\) [see e.g. Rockafellar (1970)], i.e.
$$\begin{aligned} T_X(x^0)={\mathrm{cl\,}}\left\{ t(x-x^0) : x\in X,\ t\ge 0\right\} . \end{aligned}$$
Theorem 5.2
Assume \(\Sigma ^0\in {{\mathbb {M}}}^n_{++}\) and \({{{\mathcal {U}}}}_2\subseteq {{\mathbb {M}}}^n_{++}\). Let \(\beta _1, \beta _2 \in [0,1]\), with \(\beta _1 + \beta _2 =1\), and assume \(x^0\in X\) is a minimizer of the function
$$\begin{aligned} l(x)=-\beta _1\langle \mu ^0, x \rangle +\beta _2\, x^T\Sigma ^0 x. \end{aligned}$$
-
(i)
Let \(\beta _2 >0\). Then there exists \(x(\lambda )\in {\mathrm{WEff\,}}(\mathrm{RMP}^{\lambda })\) such that
$$\begin{aligned} d(x(\lambda ), {\mathrm{WEff\,}}(\mathrm{RMP}^0)) \le \left( \frac{2\lambda }{h}\right) ^{1/2}\left( \max _{i=1, 2}[D(\mathcal{U}_i)]\right) ^{1/2}\le 2\left( \frac{\lambda }{h}\right) ^{1/2}\left( \max \left\{ r, \frac{r}{c}\right\} \right) ^{\frac{1}{2}} \end{aligned}$$(56)where \(h=\min _{d \in T_X(x^0)\cap S}d^T \Sigma ^0 d\) and S denotes the unit sphere in \({{\mathbb {R}}}^n\).
-
(ii)
Let \(\beta _2=0\). Then there exists \(x(\lambda )\in \mathrm{PEff}(RMP^{\lambda })\) such that
$$\begin{aligned} d(x(\lambda ), \mathrm{PEff}(\mathrm{RMP}^0)) \le \frac{2\lambda }{h}\left( \max _{i=1, 2}[D({{{\mathcal {U}}}}_i)]\right) \le 4\left( \frac{\lambda }{h}\right) \left( \max \left\{ r, \frac{r}{c}\right\} \right) \end{aligned}$$(57)where \(h=\min _{d \in T_X(x^0)\cap S} \langle -\mu ^0, d\rangle \).
Proof
-
(i)
We first observe that \(f_1(x, \mu )\) and \(f_2(x, \Sigma )\) are Hölder of order 1, i.e. Lipschitz, as functions of \(\mu \) and \(\Sigma \) respectively, uniformly with respect to \(x \in X\). This is due to the fact that \(f_1\) and \(f_2\) are continuously differentiable with respect to \(\mu \) and \(\Sigma \). Further, \(f_1\) and \(f_2\) are convex as functions of x and linear as functions of \(\mu \) and \(\Sigma \), respectively. Let
$$\begin{aligned} l(x)= - \beta _1 \langle \mu ^0, x \rangle + \beta _2\, x^T \Sigma ^0 x. \end{aligned}$$(58)Since \(x^0\) minimizes l(x) over X it holds
$$\begin{aligned} \langle \nabla l(x^0) , d\rangle \ge 0 \end{aligned}$$(59)for every \(d \in T_X(x^0)\) [see Rockafellar (1970)]. Hence, since l(x) is a quadratic function, for every \(x \in X\) we have
$$\begin{aligned} l(x)-l(x^0)= \langle \nabla l(x^0) , x-x^0\rangle + \frac{1}{2} (x-x^0)^T\nabla ^2 l(x^0)(x-x^0) \end{aligned}$$(60)$$\begin{aligned} \ge \frac{1}{2} (x-x^0)^T\nabla ^2l(x^0)(x-x^0) \end{aligned}$$(61)It follows
$$\begin{aligned} l(x)-l(x^0) \ge \Vert x-x^0\Vert ^2 \frac{1}{2\Vert x-x^0\Vert ^2} (x-x^0)^T\nabla ^2 l(x^0)(x-x^0) \ge h \Vert x-x^0\Vert ^2 \end{aligned}$$(62)since \(\nabla ^2l(x^0)=2\Sigma ^0\) (up to the positive factor \(\beta _2\)). Hence \(x^0\) is an isolated minimizer of order 2 and constant h, and the thesis follows from Theorem 4.1. The last inequality in (56) follows since by (45) we have
$$\begin{aligned} D({{\mathcal {U}}}_1)\le r, \ \ D({{\mathcal {U}}}_2) \le \frac{r}{c} \end{aligned}$$(63) -
(ii)
The proof is similar to that of point (i) and is omitted.
\(\square \)
Concluding remarks
We conclude this paper with a glimpse on possible further research.
We often have only partial knowledge of the statistical properties of the model parameters. Specifically, the probability distribution quantifying the model parameter uncertainty is known only ambiguously. A typical approach to handle this ambiguity is to estimate the probability distribution using statistical tools. The decision-making process can then be performed with respect to the estimated distribution. Such an estimation can be imprecise. Ambiguous stochastic optimization is a modeling approach that protects the decision-maker from the ambiguity in the underlying probability distribution. Ambiguity about the probability distribution can be modelled using the concept of imprecise probability or, more generally, the notion of set-valued probability [see e.g. La Torre et al. (2021)]. A different way to model this ambiguity is to assume that the underlying probability distribution is unknown and lies in an ambiguity set of probability distributions.
This last approach, as in robust optimization, hedges against the ambiguity in probability distribution by taking a worst-case (minmax) approach (Distributionally Robust Multiobjective Optimization).
Extensions of the presented results to Distributionally Robust Multiobjective Optimization are a first direction for further research.
As pointed out, the Robust Optimization approach is a worst-case-oriented approach. For this reason robust solutions of an optimization problem have also been called pessimistic solutions. Indeed, optimistic solutions have been considered in the literature as solutions of the best-case-oriented multiobjective optimization problem with objective functions
$$\begin{aligned} \min _{u_i\in {{{\mathcal {U}}}}_i}f_i(x, u_i), \quad i=1, \ldots , m. \end{aligned}$$
In order to model the level of pessimism one can consider the multiobjective optimization problem with objective functions
$$\begin{aligned} p_i\max _{u_i\in {{{\mathcal {U}}}}_i}f_i(x, u_i)+(1-p_i)\min _{u_i\in {{{\mathcal {U}}}}_i}f_i(x, u_i), \quad i=1, \ldots , m, \end{aligned}$$(65)
where \(p_i\in [0,1]\) describes the level of pessimism for objective i. The study of problem (65) is another possible direction for further research.
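One natural reading of (65) on sampled uncertainty sets is the following sketch; the objective function and the sample are illustrative assumptions.

```python
import numpy as np

def pessimism_objective(x, f, U, p):
    # convex combination of worst case (p = 1, robust) and best case
    # (p = 0, optimistic) over a sampled uncertainty set
    vals = np.array([f(x, u) for u in U])
    return p * vals.max() + (1.0 - p) * vals.min()

U = np.linspace(-1.0, 1.0, 21)
f = lambda x, u: u * x[0] + x[1] ** 2
for p in (0.0, 0.5, 1.0):
    print(p, pessimism_objective(np.array([0.5, 0.5]), f, U, p))
```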
References
Aouam, T., Muthuraman, K., & Rardin, R. L. (2016). Robust optimization policy benchmarks and modeling errors in natural gas. European Journal of Operational Research, 250(3), 807–815.
Ben-Tal, A., El Ghaoui, L., & Nemirovski, A. (2009). Robust optimization. Princeton University Press.
Ben-Tal, A., & Nemirovski, A. (1998). Robust convex optimization. Mathematics of Operations Research, 23, 769–805.
Bertsimas, D., Brown, D. B., & Caramanis, C. (2011). Theory and applications of robust optimization. SIAM Review, 53, 464–501.
Crespi, G. P., Kuroiwa, D., & Rocca, M. (2017). Quasiconvexity of set-valued maps assures well-posedness of robust vector optimization. Annals of Operations Research, 251, 89–104.
Crespi, G. P., Kuroiwa, D., & Rocca, M. (2018). Robust optimization: Sensitivity to uncertainty in scalar and vector cases, with applications. Operations Research Perspectives, 5, 113–119.
Ehrgott, M., Ide, J., & Schöbel, A. (2014). Minmax robustness for multi-objective optimization problems. European Journal of Operational Research, 239(1), 17–31.
Fan, K. (1953). Minimax theorems. Proceedings of The National Academy of Sciences of the United States of America, 39(1), 42–47.
Fliege, J., & Werner, R. (2014). Robust multiobjective optimization & applications in portfolio optimization. European Journal of Operational Research, 234(2), 422–433.
Gabrel, V., Murat, C., & Thiele, A. (2014). Recent advances in robust optimization: An overview. European Journal of Operational Research, 235, 471–483.
Goh, J., & Sim, M. (2011). Robust optimization made easy with ROME. Operations Research, 59, 973–985.
Gregory, C., Darby-Dowman, K., & Mitra, G. (2011). Robust optimization and portfolio selection: The cost of robustness. European Journal of Operational Research, 212(2), 417–428.
Hassanzadeh, F., Nemati, H., & Sun, M. (2014). Robust optimization for interactive multiobjective programming with imprecise information applied to R&D project portfolio selection. European Journal of Operational Research, 238(1), 41–53. https://doi.org/10.1016/j.ejor.2014.03.023.
Hayashi, S., Nishimura, R., & Fukushima, M. (2013). SDP reformulation for robust optimization problems based on nonconvex QP duality. Computational Optimization and Applications, 55, 21–47.
Hu, J., Homem-de-Mello, T., & Mehrotra, S. (2011). Risk-adjusted budget allocation models with application in homeland security. IIE Transactions, 43(12), 819–839.
Ide, J., Köbis, E., Kuroiwa, D., Schöbel, A., & Tammer, C. (2014). The relationship between multi-objective robustness concepts and set-valued optimization. Fixed Point Theory and Applications, 2014, 83.
Kuroiwa, D., & Nuriya, T. (2006). A generalized embedding vector space in set optimization. In Proceedings of the fourth international conference on nonlinear and convex analysis.
Kuroiwa, D. (2001). On set-valued optimization. Nonlinear Analysis: Theory, Methods & Applications, 47, 1395–1400.
Kuroiwa, D., & Lee, G. (2014). On robust multiobjective convex optimization. Journal of Nonlinear and Convex Analysis, 15, 1125–1136.
Kuroiwa, D. (2003). Existence theorems of set optimization with set-valued maps. Journal of Information and Optimization Sciences, 24(1), 73–84.
Kuroiwa, D., & Lee, G. M. (2012). On robust multiobjective optimization. Vietnam Journal of Mathematics, 40, 305–317.
La Torre, D., Mendivil, F., & Rocca, M. (2021). Modeling portfolio efficiency using stochastic optimization with incomplete information and partial uncertainty. Annals of Operations Research. https://doi.org/10.1007/s10479-021-04372-x
Li, S. J., & Xu, S. (2010). Sufficient conditions of isolated minimizers for constrained programming problems. Numerical Functional Analysis and Optimization, 31(6), 715–727.
Luc, D.T. (1989). Theory of vector optimization. Lecture Notes in Economics and Mathematical Systems, Springer-Verlag.
Markowitz, H. M. (1952). Portfolio selection. Journal of Finance, 7(1), 71–91.
Markowitz, H. M. (1968). Portfolio selection: efficient diversification of investments. Yale University Press.
Sawaragi, Y., Nakayama, H., & Tanino, T. (1985). Theory of multiobjective optimization. Mathematics in Science and Engineering (Vol. 176). Academic Press.
Schöttle, K., & Werner, R. (2009). Robustness properties of mean-variance portfolios. Optimization, 58, 641–663.
Skanda, D., & Lebiedz, D. (2013). A robust optimization approach to experimental design for model discrimination of dynamical systems. Mathematical Programming Ser. A, 141, 405–433.
Souyris, S., Cortés, C. E., Ordóñez, F., & Weintraub, A. (2013). A robust optimization approach to dispatching technicians under stochastic service times. Optimization Letters, 7, 1549–1568.
Suzuki, S., Kuroiwa, D., & Lee, G. M. (2013). Surrogate duality for robust optimization. European Journal of Operational Research, 231, 257–262.
Rockafellar, R. T. (1970). Convex analysis. Princeton Mathematical Series. Princeton University Press.
Zugno, M., & Conejo, A. J. (2015). A robust optimization approach to energy and reserve dispatch in electricity markets. European Journal of Operational Research, 247(2), 659–671.
Funding
Open access funding provided by Università degli Studi dell’Insubria within the CRUI-CARE Agreement.
Keywords
- Multiobjective optimization
- Robust optimization
- Portfolio optimization