Abstract
We introduce a new notion of a vector-based robust minimal solution for a vector-valued uncertain optimization problem, which is defined by means of some open cone. We present necessary and sufficient conditions for this kind of solution, which are stated in terms of some directional derivatives of vector-valued functions. To prove these results, we apply the methods of set-valued analysis. We also study relations between our definition and three other known optimality concepts. Finally, for the case of scalar optimization, we present two general algorithm models for computing vector-based robust minimal solutions.
1 Introduction
In many optimization problems, one has to deal with some uncertainty of the data. Mathematically, this can be described by an additional parameter, which influences either the objective function (as in [1]), or the functions defining constraints (as in [2]), or both. The exact value of this parameter is unknown at the moment of decision, but it can be assumed that the parameter values lie in a given uncertainty set.
The theory of uncertain optimization (also called robust optimization) for multiobjective problems is a relatively new direction of research: the authors of paper [3], submitted in 2014, write that it “has been started only within the last 2 years”. One possible approach to uncertain multiobjective optimization is to interpret an uncertain optimization problem as a special set-valued optimization problem and then apply the methods of set-valued analysis; see, e.g., [4, Section 3.1] and [1, Section 5]. In this paper, we follow [1] regarding the formulation of a set-valued problem associated with an uncertain vector optimization problem. We study the notion of Q-minimality (where Q is an open cone) in the context of uncertain vector optimization. We define four types of robust Q-minimal solutions, where the first one is new (a vector-based robust Q-minimal solution; see Definition 3.2(a)), while the other three are variants of some definitions known from the literature. The paper is devoted to studying relations between these four types of solutions, proving some optimality conditions for vector-based robust Q-minimal solutions and constructing algorithm models for finding them.
The organization of this paper is as follows: In Sect. 2, we briefly discuss Q-minimal solutions for set-valued optimization problems. In Sect. 3, we formulate an uncertain vector optimization problem and construct the associated set-valued optimization problem. We also define four concepts of robust Q-minimal solutions and examine relations between them. Section 4 provides one more relation for the particular case of scalar optimization. In Sect. 5, we prove a characterization of a vector-based robust Q-minimal solution of an uncertain optimization problem in terms of a radial derivative of some vector-valued function. Since this characterization may be difficult to apply in practice, in the next two sections we present other optimality conditions (necessary in Sect. 6 and sufficient in Sect. 7), which have simpler forms but are not characterizations. In Sect. 8, we discuss two general algorithm models for finding vector-based robust Q-minimal solutions for the case of scalar optimization with a finite number of scenarios. Finally, in Sect. 9, we present a computational example.
2 Q-Minimal Solutions in Set-Valued Optimization
Let X, Y be normed spaces, S be a nonempty subset of X, and \( F:X\rightrightarrows Y\) be a set-valued map. We define the graph of F as follows:
We denote by \(F_{S}\) the restriction of F to S defined by
(see [5, p. 132]). Let Q be an arbitrary open cone in Y, which is nonempty and different from Y. Recall that an open cone Q is an open set satisfying the condition \(\lambda y\in Q\) for all \(y\in Q\) and \(\lambda >0\). We consider the following set-valued optimization problem:
where the minimization is understood with respect to the cone Q, according to the following definition.
Definition 2.1
Let \(({\bar{x}},{\bar{y}})\in \mathrm {graph}F_{S}\). We say that \(({\bar{x}},{\bar{y}})\) is a Q-minimal solution of problem (2), if
where
We introduce the following relation \(\prec \) in Y:
In particular, if the cone Q is convex, then the relation \(\prec \) is transitive.
Remark 2.1
It is easy to see that \(({\bar{x}},{\bar{y}})\) is a Q-minimal solution of problem (2), if and only if \(y\nprec {\bar{y}}\) for all \(y\in F(S)\).
The notion of a Q-minimal solution has been introduced in [6]. It includes several types of solutions, known from the literature, as particular cases:
(i) a strong (or ideal) efficient point of F(S),
(ii) a weak efficient point of F(S),
(iii) a positive-properly efficient point of F(S),
(iv) a Geoffrion-properly efficient point of F(S),
(v) a Borwein-properly efficient point of F(S),
(vi) a Henig-properly efficient point of F(S),
(vii) a strong Henig-properly efficient point of F(S),
(viii) a super efficient point of F(S);
the details are described in [7, Prop. 1.2] and [6, Thm. 21.7].
For other solution concepts in set-valued optimization, see [8, Section 2.6].
A particular case of problem (2) is the vector optimization problem:
where \(f:X\rightarrow Y\) is a single-valued map.
Definition 2.2
Let \({\bar{x}}\in S\). We say that \({\bar{x}}\) is a Q-minimal solution of problem (5), if \(({\bar{x}},f({\bar{x}}))\) is a Q-minimal solution of problem (2) with F defined by \(F(x):=\{f(x)\}\).
Remark 2.2
Obviously, \({\bar{x}}\) is a Q-minimal solution of problem (5), if and only if
3 An Uncertain Vector Optimization Problem
In this section, we formulate an uncertain vector optimization problem as in [1, Section 5], define four types of its robust Q-minimal solutions, and discuss the relationships between them.
Let X, Y, Z be normed spaces, let S and \({\mathcal {U}}\) be nonempty subsets of X and Z, respectively, and let \(f:X\times {\mathcal {U}}\rightarrow Y\).
Definition 3.1
An uncertain vector optimization problem \(P({\mathcal {U}})\) is defined as the family
of vector optimization problems
For each \(x\in X\), we denote
Then, \(F:X\rightrightarrows Y\) is a set-valued map. In this way, we can construct a set-valued optimization problem of the form (2), associated with the uncertain vector optimization problem (7).
Definition 3.2
Let \({\bar{x}}\in S\), and let F be defined by (9). We say that
(a) \({\bar{x}}\) is a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\), if there exists \({\bar{y}}\in F({\bar{x}})\) such that \(({\bar{x}},{\bar{y}})\) is a Q-minimal solution of (2);
(b) \({\bar{x}}\) is a flimsily robust Q-minimal solution of \(P({\mathcal {U}})\), if it is a Q-minimal solution of \(P(\xi )\) for at least one \(\xi \in {\mathcal {U}}\);
(c) \({\bar{x}}\) is a highly robust Q-minimal solution of \(P({\mathcal {U}})\), if it is a Q-minimal solution of \(P(\xi )\) for all \(\xi \in {\mathcal {U}}\);
(d) \({\bar{x}}\) is a set-based robust Q-minimal solution of \(P({\mathcal {U}})\), if there exists no \(x\in S\) such that \(F(x)\subseteq F({\bar{x}})-Q\).
Remark 3.1
Part (a) of Definition 3.2 is new. Parts (b) and (c) are introduced here by analogy with Definitions 4 and 5, respectively, in [3], where the usual efficiency was used instead of Q-minimality. Part (d) is analogous to Definition 3.2 in [9]. For other concepts of robust solutions in uncertain optimization and relations between them, see [10]. The motivation for using the vector-based approach in this paper was to obtain an intermediate concept between definitions (b) and (c); this, however, proved successful only in the scalar-valued case; see Sect. 4. We will try to extend our results to vector-valued problems in future research.
The proposition below clarifies the relation between Definitions 2.2 and 3.2(a).
Proposition 3.1
A point \({\bar{x}}\in S\) is a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\), if and only if there exists \({\bar{\xi }}\in {\mathcal {U}}\) such that \(({\bar{x}},{\bar{\xi }})\) is a Q-minimal solution of the following vector optimization problem:
Proof
By Definition 3.2(a), formula (9) and Remark 2.2 (where S should be replaced by \(S\times {\mathcal {U}}\)), we have the following chain of equivalences:
\(\square \)
Corollary 3.1
A point \({\bar{x}}\in X\) is a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\), if and only if there exists \({\bar{\xi }}\in {\mathcal {U}}\) such that
Proof
This follows easily from (4) and the fourth statement in (11). \(\square \)
The rest of this section is devoted to studying relations between the different concepts of Q-minimality for \(P({\mathcal {U}})\) which are listed in Definition 3.2. Propositions 3.2 and 3.3 show that the implications \(\mathrm {(a)\Rightarrow (b)}\) and \(\mathrm {(c)\Rightarrow (b)}\) are always true. Examples 3.1–3.6 prove that in the general case, no other implication between definitions (a), (b), (c), (d) is valid. Later in Sect. 4, we will show that the implication \(\mathrm {(c)\Rightarrow (a)}\) holds for the particular case of a scalar uncertain optimization problem.
Proposition 3.2
If \({\bar{x}}\) is a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\), then it is a flimsily robust Q-minimal solution of \(P({\mathcal {U}})\).
Proof
By assumption, there exists \({\bar{y}}\in F({\bar{x}})\) such that \(({\bar{x}},{\bar{y}})\) is a Q-minimal solution of (2). Hence, for each \(\xi \in {\mathcal {U}}\), we have
The relation \({\bar{y}}\in F({\bar{x}})\) implies that \({\bar{y}}=f({\bar{x}},{\bar{\xi }})\) for some \({\bar{\xi }}\in {\mathcal {U}}\). Of course, this \({\bar{\xi }}\) also satisfies (12). Therefore, we have
which by Remark 2.2 is equivalent to \({\bar{x}}\) being a Q-minimal solution of \(P({\bar{\xi }})\). \(\square \)
Proposition 3.3
If \({\bar{x}}\) is a highly robust Q-minimal solution of \(P( {\mathcal {U}})\), then it is a flimsily robust Q-minimal solution of \( P({\mathcal {U}})\).
Proof
This follows immediately from the definitions (see [3, Lemma 6]). \(\square \)
Below, the symbol \([y,y^{\prime }]\), with \(y,y^{\prime }\) belonging to Y (or another normed space), denotes the line segment, i.e., \([y,y^{\prime }]=\{\lambda y+(1-\lambda )y^{\prime }:0\leqslant \lambda \leqslant 1\}\).
Example 3.1
This example shows that \(\mathrm {(b)\nRightarrow (a)}\), \(\mathrm { (b)\nRightarrow (c)}\) and \(\mathrm {(d)\nRightarrow (a)}\).
Let \(X=Y=Z={\mathbb {R}}\), \(Q=]0,\infty [\), \(S={\mathcal {U}}=[-1,1]\), \(f(x,\xi )=x^{2}\xi \). Then, \(F(x)=[-x^{2},x^{2}]\) for all \(x\in {\mathbb {R}}\). Observe that for each \(\xi \in [0,1]\), the point \({\bar{x}}=0\) is a Q-minimal solution of \(P(\xi )\), and for \(\xi \in [-1,0[\), it is not a Q-minimal solution of \(P(\xi )\). Thus, \({\bar{x}}\) is a flimsily robust (but not highly robust) Q-minimal solution of \(P({\mathcal {U}})\). However, \({\bar{x}}\) is not a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\) because the only element \({\bar{y}}\in F(0)\) is \({\bar{y}}=0\), and
We can also see that \({\bar{x}}=0\) is a set-based robust Q-minimal solution of \(P({\mathcal {U}})\). Indeed, for each \(x\in S\), we have
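The claims of this example can also be observed numerically. The following Python sketch is only an illustration on an ad hoc grid discretization of \(S={\mathcal {U}}=[-1,1]\) (the grid and the function names are ours, not part of the formal argument); it checks that \({\bar{x}}=0\) minimizes \(f(\cdot ,\xi )\) for a nonnegative scenario but not for a negative one, and that \(\max F(x)=x^{2}\ge 0\), so \(F(x)\) is never contained in \(F(0)-Q=]-\infty ,0[\).

```python
# Grid-based sanity check for Example 3.1 (a sketch, not a proof).
# Data: f(x, xi) = x**2 * xi, S = U = [-1, 1], Q = (0, inf).

def f(x, xi):
    return x**2 * xi

grid = [i / 100 - 1 for i in range(201)]  # 201-point grid over [-1, 1]
xbar = 0.0

# xbar minimizes f(., xi) on the grid for xi = 0.5 (flimsy robustness) ...
assert all(f(x, 0.5) >= f(xbar, 0.5) for x in grid)
# ... but not for xi = -0.5 (so xbar is not highly robust)
assert any(f(x, -0.5) < f(xbar, -0.5) for x in grid)

# Set-based robustness: F(x) = [-x^2, x^2] and F(0) - Q = (-inf, 0);
# since max F(x) = x^2 >= 0, F(x) is never contained in F(0) - Q.
assert all(max(f(x, xi) for xi in grid) >= 0 for x in grid)
```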
Example 3.2
This example shows that \(\mathrm {(a)\nRightarrow (d)}\) and \( \mathrm {(b)\nRightarrow (d)}\).
Let \(X=Y=Z={\mathbb {R}}\), \(Q=]0,\infty [\), \(S={\mathcal {U}}=[-1,1]\), \(f(x,\xi )=\left( x^{2}-1\right) \xi \). Then, \(F(x)=[x^{2}-1,-x^{2}+1]\) for all \(x\in {\mathbb {R}}\). We can see that \({\bar{x}}=0\) is a flimsily robust (but not highly robust) Q-minimal solution of \(P({\mathcal {U}})\) since it is a Q-minimal solution of \(P(\xi )\) for \(\xi \in [0,1]\) and it is not a Q-minimal solution of \(P(\xi )\) for \(\xi \in [-1,0[\). Moreover, \({\bar{x}}\) is a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\) because, for \({\bar{y}}=-1\), we have
However, \({\bar{x}}=0\) is not a set-based robust Q-minimal solution of \(P( {\mathcal {U}})\). Indeed, for each \(x\in S\backslash \{{\bar{x}}\}\), we have
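The failure of set-based robustness can be checked numerically. The Python sketch below (with an ad hoc grid over the scenario set; the helper name is ours) verifies that \(\sup F(x)=1-x^{2}<1\) for every \(x\ne 0\), so that \(F(x)\subseteq F({\bar{x}})-Q=]-\infty ,1[\).

```python
# Grid check for Example 3.2 (a sketch, not a proof).
# Data: f(x, xi) = (x**2 - 1) * xi, S = U = [-1, 1], Q = (0, inf).

def f(x, xi):
    return (x**2 - 1) * xi

grid = [i / 100 - 1 for i in range(201)]  # grid over U = [-1, 1]

def F_max(x):
    # sup of F(x) = [x^2 - 1, 1 - x^2] over the scenario grid
    return max(f(x, xi) for xi in grid)

# F(0) - Q = [-1, 1] - (0, inf) = (-inf, 1); for every x != 0,
# sup F(x) = 1 - x^2 < 1, hence F(x) lies in F(0) - Q:
# xbar = 0 is NOT set-based robust.
xs = [x for x in grid if x != 0]
assert all(F_max(x) < 1 for x in xs)
```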
Example 3.3
This example shows that \(\mathrm {(d)\nRightarrow (b)}\) and \( \mathrm {(d)\nRightarrow (c)}\).
Let \(X=Y=Z={\mathbb {R}}\), \(Q=]0,\infty [\), \(S=[-1,1]\), \({\mathcal {U}} =\{-0.5,0.5\}\), \(f(x,\xi )=\left( x-\xi \right) ^{2}\). Then, \(F(x)=\left\{ \left( x-\xi \right) ^{2}:\xi \in {\mathcal {U}}\right\} \) for all \(x\in {\mathbb {R}}\); in particular, \(F(0)=\{(-0.5)^{2},(0.5)^{2}\}=\{0.25\}\). We will show that \({\bar{x}}=0\) is a set-based robust Q-minimal solution of \(P( {\mathcal {U}})\). Suppose that this is not true. Then, there exists \(x\in S\) such that
However, if \(x\in [0,1]\), then \((x+0.5)^{2}\in F(x)\) and \((x+0.5)^{2}\ge 0.25\), which contradicts (13). Similarly, if \(x\in [-1,0]\), then \((x-0.5)^{2}\in F(x)\) and \((x-0.5)^{2}\ge 0.25\), which also contradicts (13).
On the other hand, \({\bar{x}}=0\) is not a flimsily robust Q-minimal solution of \(P({\mathcal {U}})\). This follows because the only solution of \(P(-0.5)\) is equal to \(-0.5\) with \(f(-0.5,-0.5)=0\), and similarly, the only solution of \(P(0.5)\) is equal to 0.5 with \(f(0.5,0.5)=0\), while at \({\bar{x}}=0\), both minimized functions \(f(\cdot ,-0.5)\) and \(f(\cdot ,0.5)\) have strictly positive values. Of course, by Proposition 3.3, \({\bar{x}}\) is also not a highly robust Q-minimal solution of \(P({\mathcal {U}})\).
Example 3.4
This example shows that \(\mathrm {(c)\nRightarrow (a)}\).
Let \(X=Z={\mathbb {R}}\), \(Y={\mathbb {R}}^{2}\), \(Q=\{(y_{1},y_{2})\in {\mathbb {R}} ^{2}:y_{i}>0\), \(i=1,2\}\), \(S=[0,3]\), \({\mathcal {U}}=\{1,2\}\),
Let us note that
Let \({\bar{x}}=1\). Observe that \({\bar{x}}\) is a highly robust Q-minimal solution of \(P({\mathcal {U}})\). Indeed, \(f({\bar{x}},1)=(0,2)\), \(f({\bar{x}} ,2)=(0.5,0.5)\),
Therefore,
which means that \({\bar{x}}\) is a Q-minimal solution of both vector optimization problems \(P(\xi )\), \(\xi =1,2\).
However, the point \({\bar{x}}=1\) is not a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\). Let us suppose the contrary. Then, there exists a point \({\bar{y}}\in F({\bar{x}})=\{(0,2),(0.5,0.5)\}\) such that
where
But \({\bar{y}}\) cannot be equal to (0, 2), because
and
Similarly, \({\bar{y}}\) cannot be equal to (0.5, 0.5), because
and
Therefore, we get a contradiction.
Example 3.5
This example shows that \(\mathrm {(a)\nRightarrow (c)}\).
Take the same data as in Example 3.4, except for the definition of f, which now has the form
Let us note that
Let \({\bar{x}}=1\). Observe that \({\bar{x}}\) is not a highly robust Q-minimal solution of \(P({\mathcal {U}})\) (it is only flimsily robust). Indeed, \(f({\bar{x}} ,1)=(0,2)\), \(f({\bar{x}},2)=(1,1)\),
Therefore,
which means that \({\bar{x}}\) is a Q-minimal solution of vector optimization problem P(1) but is not a Q-minimal solution of vector optimization problem P(2).
However, the point \({\bar{x}}=1\) is a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\). Indeed, taking \({\bar{y}}=(0,2)\in F({\bar{x}})\), we obtain
since
Example 3.6
This example shows that \(\mathrm {(c)\nRightarrow (d)}\).
Let \(X=Z={\mathbb {R}}\), \(Y={\mathbb {R}}^{2}\), \(Q=\{(y_{1},y_{2})\in {\mathbb {R}} ^{2}:y_{i}>0\), \(i=1,2\}\), \(S=[0,2]\), \({\mathcal {U}}=\{1,2\}\),
We will show that \({\bar{x}}=1\) is a highly robust Q-minimal solution of \(P( {\mathcal {U}})\). Indeed, \({\bar{x}}\) is a Q-minimal solution of P(1) because the set
has empty intersection with \(-Q\). Similarly, \({\bar{x}}\) is a Q-minimal solution of P(2) because the set
has empty intersection with \(-Q\).
However, \({\bar{x}}=1\) is not a set-based robust Q-minimal solution of \(P( {\mathcal {U}})\). To see this, take \(x=2\). We have
and
It follows that \(F(x)\subseteq F({\bar{x}})-Q\), so the condition in Definition 3.2(d) is violated.
4 The Case of Scalar Optimization
In this section, we consider the case where \(Y={\mathbb {R}}\) and \(Q=]0,\infty [\). In this case, the relation \(\prec \) may be replaced by the usual strict inequality <. We will show that one more relation between two parts of Definition 3.2 then holds, which implies that a vector-based robust Q-minimal solution is an intermediate notion between a highly robust Q-minimal solution and a flimsily robust Q-minimal solution.
Theorem 4.1
Let \(Y={\mathbb {R}}\) and \(Q=]0,\infty [\). Suppose that the values of the set-valued map \(F:X\rightrightarrows {\mathbb {R}}\) are closed and bounded from below. Then, every highly robust Q-minimal solution of \(P({\mathcal {U}})\) is a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\).
Proof
Let \({\bar{x}}\in S\) be a highly robust Q-minimal solution of \(P({\mathcal {U}}) \). Then, for each \(\xi \in {\mathcal {U}}\), \({\bar{x}}\) is a Q-minimal solution of the scalar optimization problem \(P(\xi )\), which means that \( {\bar{x}}\) is a global minimum point of \(f(\cdot ,\xi )\) in the usual sense:
Since the set \(F({\bar{x}})=\left\{ f({\bar{x}},\xi ):\xi \in {\mathcal {U}} \right\} \) is closed and bounded from below, there exists \({\bar{\xi }}\in {\mathcal {U}}\) such that
Conditions (14) and (15) imply that
Consequently, by Proposition 3.1, \({\bar{x}}\) is a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\). \(\square \)
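For a finite scenario set, the argument of the proof can be illustrated numerically. The Python sketch below uses hypothetical data (the family \(f(x,\xi )=x^{2}+\xi ^{2}\), a grid over S, and a three-element scenario set are our choices, not the paper's): it checks high robustness of \({\bar{x}}=0\), selects \({\bar{\xi }}\) attaining the minimum of \(F({\bar{x}})\) as in (15), and confirms that \(({\bar{x}},{\bar{\xi }})\) minimizes f over \(S\times {\mathcal {U}}\), as required by Proposition 3.1.

```python
# Illustration of the proof of Theorem 4.1 for finitely many scenarios
# (hypothetical data: f(x, xi) = x**2 + xi**2, S = [-1, 1], U = {0, 0.5, 1}).

def f(x, xi):
    return x**2 + xi**2

S = [i / 100 - 1 for i in range(201)]  # grid over [-1, 1]
U = [0.0, 0.5, 1.0]
xbar = 0.0

# highly robust: xbar is a global minimizer of f(., xi) for every scenario
assert all(all(f(x, xi) >= f(xbar, xi) for x in S) for xi in U)

# pick xibar attaining the minimum of F(xbar) = {f(xbar, xi) : xi in U}
xibar = min(U, key=lambda xi: f(xbar, xi))

# then (xbar, xibar) minimizes f over S x U, i.e. xbar is
# vector-based robust by Proposition 3.1
assert all(f(x, xi) >= f(xbar, xibar) for x in S for xi in U)
```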
5 A Characterization of Vector-Based Robust Q-Minimal Solutions
In this section, we present a characterization of a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\) in terms of a radial derivative of the function f appearing in (8), restricted to \(S\times {\mathcal {U}}\). To the best of our knowledge, such results are not known even in the special scalar-valued case. It seems possible to derive similar characterizations for other solutions based on the set-based approach to uncertain optimization; for example, for part (d) of Definition 3.2. We plan to describe the corresponding results in a subsequent paper.
First, we recall the definition of an outer radial derivative of an arbitrary set-valued mapping.
Definition 5.1
Let \(F:X\rightrightarrows Y\), let \(({\bar{x}},{\bar{y}})\in \mathrm { graph}F\), and let m be a positive integer. The m-th order outer radial derivative of F at \(({\bar{x}},{\bar{y}})\) is the set-valued map \({\overline{D}}_{R}^{m}F({\bar{x}},{\bar{y}}):X\rightrightarrows Y\) defined by
The derivative \({\overline{D}}_{R}^{1}F({\bar{x}},{\bar{y}})\) was first introduced in [5]; the derivative \({\overline{D}}_{R}^{m}F({\bar{x}},{\bar{y}})\) (for an arbitrary m) was defined in [7]. An interesting feature of radial derivatives is that, in contrast to classical derivatives, they lead to global sufficient conditions without any (generalized) convexity assumptions. This is because we do not require that \(t_{n}\) converge to zero in (16).
In particular, if \(f:X\rightarrow Y\) is a single-valued mapping, we will use the notation \({\overline{D}}_{R}^{m}f({\bar{x}};u)\) instead of \({\overline{D}}_{R}^{m}F({\bar{x}},f({\bar{x}}))(u)\), where \(F:X\rightrightarrows Y\) is the multifunction defined by \(F(x):=\{f(x)\}\). Hence, it follows from (16) that
Proposition 5.1
Let \(f:X\rightarrow Y\), \({\bar{x}},u\in X\), and let m be a positive integer. Then, \(f({\bar{x}}+u)-f({\bar{x}})\in {\overline{D}}_{R}^{m}f( {\bar{x}};u)\).
Proof
It is sufficient to take the constant sequences \(t_{n}\equiv 1\) and \((u_{n},v_{n})\equiv (u,f({\bar{x}}+u)-f({\bar{x}}))\) in (17). \(\square \)
We now return to the uncertain optimization problem \(P({\mathcal {U}})\). We will denote by \(f_{S\times {\mathcal {U}}}\) the restriction of \(f:X\times Z\rightarrow Y\) to \(S\times {\mathcal {U}}\). Then, by analogy with (17), we can write, for any \(({\bar{x}},{\bar{\xi }})\in S\times {\mathcal {U}}\),
We have the following counterpart of Proposition 5.1.
Proposition 5.2
Let \(f:X\times Z\rightarrow Y\), \(({\bar{x}},{\bar{\xi }})\in S\times {\mathcal {U}}\subseteq X\times Z\), and let m be a positive integer. Suppose that \(x\in X\) and \(\xi \in {\mathcal {U}}\) are such that \({\bar{x}}+x\in S\) and \( {\bar{\xi }}+\xi \in {\mathcal {U}}\). Then,
Proof
It is sufficient to take \(t_{n}\equiv 1\) and \((x_{n},\xi _{n},y_{n})\equiv (x,\xi ,y)\) in (18). \(\square \)
Theorem 5.1
A point \({\bar{x}}\in S\) is a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\) if and only if
Proof
Part “if”. Suppose that \({\bar{x}}\) is not a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\). Then, for each \({\bar{y}}\in F({\bar{x}})\) (where F is given by (9)), the pair \(( {\bar{x}},{\bar{y}})\) is not a Q-minimal solution of (2). This is equivalent to
Since \({\bar{y}}\in F({\bar{x}})\) is equivalent to \({\bar{y}}=f({\bar{x}},{\bar{\xi }})\) for some \({\bar{\xi }}\in {\mathcal {U}}\), we obtain from (20) that
Take any \({\bar{\xi }}\in {\mathcal {U}}\). By (21), there exists \(x\in S\) such that
Using the definition of F, we see that there exists \(\xi \in {\mathcal {U}}\) such that
By defining \(u:=x-{\bar{x}}\in S-{\bar{x}}\) and \(d:=\xi -{\bar{\xi }}\in {\mathcal {U}} -{\bar{\xi }}\), we can rewrite (22) as
However, by Proposition 5.2 and the relations \({\bar{x}}+u=x\in S\), \(\bar{ \xi }+d=\xi \in {\mathcal {U}}\), we have
Combining (23) and (24), we get
We have thus verified that for each \({\bar{\xi }}\in {\mathcal {U}}\), there exist \(u\in S-{\bar{x}}\) and \(d\in {\mathcal {U}}-{\bar{\xi }}\) such that (25) holds. This contradicts (19).
Part “only if”. Let \({\bar{x}}\in S\) be a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\). Then,
Hence, there exists \({\bar{\xi }}\in {\mathcal {U}}\) such that \({\bar{y}}=f({\bar{x}}, {\bar{\xi }})\), and consequently,
We will show that
Suppose to the contrary that (27) is false. Then, there exist \(x\in S\), \(\xi \in {\mathcal {U}}\), and \(y\in Y\) such that
By (28) and (18), there exist sequences \(t_{n}>0\) and \((x_{n},\xi _{n},y_{n})\rightarrow (x-{\bar{x}},\xi -{\bar{\xi }},y)\) such that for all n, we have
Since Q is open and \(y_{n}\rightarrow y\in -Q\), we have \(y_{n}\in -Q\) for sufficiently large n. As Q is an open cone, the last relation implies \( t_{n}^{m}y_{n}\in -Q\). From this and (29), we deduce
a contradiction to (26). \(\square \)
The characterization given in Theorem 5.1 is difficult to apply in practice as it involves the restriction \(f_{S\times {\mathcal {U}}}\), which is not easy to compute, especially if the constraint set S is defined by some functional conditions. Therefore, in the next two sections we present a necessary condition (Theorem 6.2) and a sufficient condition (Theorem 7.1), both for a vector-based robust Q-minimal solution of \(P( {\mathcal {U}})\), which do not use this restricted function.
6 Necessary Optimality Conditions
The following derivative for a set-valued mapping \(F:X\rightrightarrows Y\) was first defined in [11].
Definition 6.1
Let \(({\bar{x}},{\bar{y}})\in \mathrm {graph}F\), and let m be a positive integer. The m-th order outer contingent-type derivative of F at \(({\bar{x}},{\bar{y}})\) is the set-valued map \({\overline{d}}^{m}F({\bar{x}},\bar{ y}):X\rightrightarrows Y\) defined by
We will also use the following derivative for a vector-valued map \(f:X\rightarrow Y\) (if it exists):
where m is a positive integer, and \(u,w\in X\).
Definition 6.2
The contingent cone to S at \({\bar{x}}\in \mathrm {cl}S\) is defined as follows:
The following two theorems are, in view of Definition 3.2(a), necessary conditions for \({\bar{x}}\in S\) to be a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\).
Theorem 6.1
Let \(({\bar{x}},{\bar{y}})\in \mathrm {graph}F_{S}\), and let m be a positive integer. If \(({\bar{x}},{\bar{y}})\) is a Q-minimal solution of problem (2), then
Proof
Suppose that \(({\bar{x}},{\bar{y}})\) is a Q-minimal solution of (2) but condition (31) is false. Then, there exist vectors \(u\in X\) and
By the definition of \({\overline{d}}^{m}F_{S}\), there exist sequences \(t_{n}\rightarrow 0^{+}\) and \((u_{n},v_{n})\rightarrow (u,v)\) such that
which is equivalent to
Since Q is open and \(v_{n}\rightarrow v\in -Q\), we have \(v_{n}\in -Q\) for sufficiently large n. As Q is an open cone, the last relation implies \(t_{n}^{m}v_{n}\in -Q\). From this and (32), we deduce
a contradiction to (3). \(\square \)
Remark 6.1
Contrary to the other results of this paper, Theorem 6.1 remains valid even if F is an arbitrary set-valued map, not necessarily defined by formula (9).
Theorem 6.2
Let F be given by (9), let \(({\bar{x}},{\bar{y}})\in \mathrm {graph}F_{S}\), and let m be a positive integer. Suppose that for each \( {\bar{\xi }}\in {\mathcal {U}}\) satisfying the condition
and for each pair \((u,d)\in X\times Z\), there exists the derivative \(d^{m}f(( {\bar{x}},{\bar{\xi }});(u,d))\in Y\). If \(({\bar{x}},{\bar{y}})\) is a Q-minimal solution of (2) (where F is given by (9)), then
for all vectors \({\bar{\xi }}\in {\mathcal {U}}\) satisfying (33) and for all
Proof
Suppose that the desired conclusion is false. Then, there exist vectors \({\bar{\xi }}\in {\mathcal {U}}\) and \((u,d)\in X\times Z\) satisfying (33) and (35), respectively, such that
By (35), there exist sequences \(t_{n}\rightarrow 0^{+}\) and \((u_{n},d_{n})\rightarrow (u,d)\) such that for all n,
Let \(\xi _{n}:={\bar{\xi }}+t_{n}d_{n}\). By (36) and the definition of \(d^{m}f\), we have
It follows from (33) and (38) that
By (37), we obtain \({\bar{x}}+t_{n}u_{n}\in S\) and \(\xi _{n}\in {\mathcal {U}}\). These two relations, and conditions (9), (39) give
We have thus verified that there exist sequences \(t_{n}\rightarrow 0^{+}\), \(u_{n}\rightarrow u\) and \(v_{n}\rightarrow v\) such that \({\bar{y}} +t_{n}^{m}v_{n}\in F_{S}({\bar{x}}+t_{n}u_{n})\) for all n. This means that \(v\in {\overline{d}}^{m}F_{S}({\bar{x}},{\bar{y}})(u)\). But this contradicts Theorem 6.1 because \(v\in -Q\). \(\square \)
Example 6.1
Let \(X=Y=Z={\mathbb {R}}\), \(Q=]0,\infty [\), \(S=[-1,1]\), \({\mathcal {U}}=[0,1]\), \(f(x,\xi )=x^{2}+\xi \). Then, \(F(x)=[x^{2},x^{2}+1]\) for all \(x\in {\mathbb {R}}\). The point \({\bar{x}}=0\) is a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\) because there exist \({\bar{y}}=0\) and \({\bar{\xi }}=0\in {\mathcal {U}}\) such that \({\bar{y}}=f({\bar{x}},{\bar{\xi }})\in F( {\bar{x}})\) and
Observe that \({\bar{\xi }}=0\) is the only element of \({\mathcal {U}}\) satisfying condition (33). We also have
For such directions (u, d), we can compute
Since \(d\notin -Q\), the necessary condition given in Theorem 6.2 is satisfied for \(m=1\). Note that for \(m=2\), we cannot apply Theorem 6.2 because the derivative \(d^{2}f(({\bar{x}},{\bar{\xi }});(u,d))\) (for \(d>0\)) does not exist as an element of \({\mathbb {R}}\):
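The limits discussed above can be observed numerically. The Python sketch below is only an illustration: since f is smooth here, the derivative \(d^{m}f((0,0);(u,d))\) reduces to the limit of the simple difference quotients evaluated by the (ad hoc) function q; the sample direction and tolerances are our choices.

```python
# Numerical illustration of the derivatives in Example 6.1 (a sketch).
# f(x, xi) = x**2 + xi; difference quotient at (0, 0) in direction (u, d):
#   q_m(t) = (f(t*u, t*d) - f(0, 0)) / t**m

def q(t, u, d, m):
    return ((t * u)**2 + t * d) / t**m

u, d = 1.0, 0.5
# m = 1: q_1(t) = t*u^2 + d -> d as t -> 0+
assert abs(q(1e-8, u, d, 1) - d) < 1e-6
# m = 2: q_2(t) = u^2 + d/t blows up for d > 0, so the second-order
# derivative does not exist as an element of R
assert q(1e-8, u, d, 2) > 1e6
```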
Example 6.2
Take the same data as in Example 6.1, except for the definition of f, which now has the form \(f(x,\xi )=x^{2}+\xi ^{2}\). As before, we have \(F(x)=[x^{2},x^{2}+1]\) for all \(x\in {\mathbb {R}}\). Moreover, \({\bar{x}}=0\) is a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\) with the same points \({\bar{y}}=0\) and \({\bar{\xi }}=0\). In this example, we can apply Theorem 6.2 both for \(m=1\) and \(m=2\) because
Example 6.3
Let \(X=Y=Z={\mathbb {R}}\), \(Q=]0,+\infty [\), \(S={\mathcal {U}}=[-1,1]\), \(f(x,\xi )=x^{2}\xi \). Then, \(F(x)=[-x^{2},x^{2}]\) for all \(x\in {\mathbb {R}}\). Observe that for each \(\xi \in [0,1]\), the point \({\bar{x}}=0\) is a Q-minimal solution of \(P(\xi )\), and for \(\xi \in [-1,0[\), it is not a Q-minimal solution of \(P(\xi )\). Moreover, the point \({\bar{x}}=0\) is not a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\) because the only element \({\bar{y}}\in F(0)\) is \({\bar{y}}=0\), and
We will show that applying Theorem 6.2, we can exclude the point 0 as a possible vector-based robust Q-minimal solution of \(P({\mathcal {U}})\). Take any point \({\bar{\xi }}\in {\mathcal {U}}\); it obviously satisfies condition (33) of the form \(f(0,{\bar{\xi }})=0\). Since
we can take any direction as (u, d) in (34). We can verify that
The value obtained for \(m=2\) is negative if \(u\ne 0\) and \({\bar{\xi }}<0\). Hence, condition (34) does not hold for \(m=2\).
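The second-order limit can again be observed numerically. In the sketch below (sample direction, point, and tolerance are ad hoc), the quotient \(q_{2}(t)=\bigl(f(tu,{\bar{\xi }}+td)-f(0,{\bar{\xi }})\bigr)/t^{2}=u^{2}({\bar{\xi }}+td)\) tends to \(u^{2}{\bar{\xi }}<0\) for \(u\ne 0\) and \({\bar{\xi }}<0\), in agreement with the computation above.

```python
# Numerical check of the second-order derivative in Example 6.3 (a sketch).
# f(x, xi) = x**2 * xi; at (0, xibar) in direction (u, d):
#   q_2(t) = (f(t*u, xibar + t*d) - f(0, xibar)) / t**2 = u**2 * (xibar + t*d)

def q2(t, u, d, xibar):
    return ((t * u)**2 * (xibar + t * d)) / t**2

u, d, xibar = 1.0, 0.3, -0.5
# the limit u^2 * xibar = -0.5 is negative, so condition (34) fails for m = 2
assert abs(q2(1e-8, u, d, xibar) - u**2 * xibar) < 1e-6
assert q2(1e-8, u, d, xibar) < 0
```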
7 Sufficient Optimality Conditions
We will now prove a sufficient optimality condition for uncertain optimization.
Theorem 7.1
Let F be given by (9), and let \({\bar{x}}\in S\). If there exists \({\bar{\xi }}\in {\mathcal {U}}\) such that
then \({\bar{x}}\) is a vector-based robust Q-minimal solution of problem \(P({\mathcal {U}})\).
Proof
Suppose that the desired conclusion is false. Then, for each \({\bar{y}}\in F({\bar{x}})\), the pair \(({\bar{x}},{\bar{y}})\) is not a Q-minimal solution of (2). By arguing as in the proof of Theorem 5.1 part “if”, we can show that for each \({\bar{\xi }} \in {\mathcal {U}}\), there exist \(u=x-{\bar{x}}\in S-{\bar{x}}\) and \(d=\xi - {\bar{\xi }}\in {\mathcal {U}}-{\bar{\xi }}\) such that
However, by Proposition 5.1, we have
Combining (41) and (42), we get
We have thus verified that for each \({\bar{\xi }}\in {\mathcal {U}}\), there exist \(u\in S-{\bar{x}}\) and \(d\in {\mathcal {U}}-{\bar{\xi }}\) such that (43) holds. This contradicts the assumption of the theorem. \(\square \)
Example 7.1
Let \(X=Y=Z={\mathbb {R}}\), \(Q=]0,\infty [ \), \(S=[-1,1]\), \({\mathcal {U}}=[0,1]\), \(f(x,\xi )=x^{2}+\xi ^{2}\), and \({\overline{x}}=0\) (we have the same data as in Example 6.2). We will show that condition (40) holds for \({\bar{\xi }}=0\). Indeed, for each \(x\in S\) and \(d\in {\mathcal {U}}\), we have
Since each sequence \(\{v_{n}\}\) in (44) is nonnegative, we have \({\overline{D}}_{R}^{m}f((0,0);(x,d))\subseteq [0,\infty [\), and consequently, \({\overline{D}}_{R}^{m}f((0,0);(x,d))\cap (-Q)=\emptyset \). But x and d are arbitrary points of S and \({\mathcal {U}}\), respectively, which implies that \({\overline{D}}_{R}^{m}f((0,0);(S,{\mathcal {U}}))\cap (-Q)=\emptyset \). Thus, Theorem 7.1 can be applied to deduce that \({\bar{x}}\) is a vector-based robust Q-minimal solution of problem \(P( {\mathcal {U}})\).
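The nonnegativity of the radial quotients can also be checked numerically. The sketch below (function name and sample values are ad hoc) evaluates the quotients for several values of t, deliberately including values of t far from zero, since radial derivatives do not require \(t_{n}\rightarrow 0\).

```python
# Sanity check for Example 7.1 (a sketch): every radial difference quotient
# of f(x, xi) = x**2 + xi**2 at (0, 0) is nonnegative, so
# D_R^m f((0,0);(x,d)) cannot meet -Q = (-inf, 0).

def radial_quotient(t, x, d, m):
    return ((t * x)**2 + (t * d)**2) / t**m

samples = [(t, x, d, m)
           for t in (0.01, 0.5, 1.0, 10.0)   # t need NOT tend to zero here
           for x in (-1.0, 0.0, 1.0)
           for d in (0.0, 0.5, 1.0)
           for m in (1, 2)]
assert all(radial_quotient(t, x, d, m) >= 0 for (t, x, d, m) in samples)
```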
8 Construction of Algorithms for a Finite Set of Scenarios
In this section, we return to the case of scalar optimization considered in Sect. 4. We present two general algorithm models that can be useful for solving the particular case of problem \(P({\mathcal {U}})\) where the set \( {\mathcal {U}}\) is finite: \({\mathcal {U}}=\{\xi _{1},...,\xi _{m}\}\) (we say that we have m different scenarios). This case is important for some practical applications; see, e.g., [3, Example 3].
Throughout this section, we assume that for each \(i\in \{1,...,m\}\), the function \(f(\cdot ,\xi _{i}):X\rightarrow {\mathbb {R}}\) belongs to a fixed class \({\mathcal {F}}\) of functions. We also assume that there exists an algorithm \(A(g,x_{0})\) which, for a given function \(g\in {\mathcal {F}}\) and a given starting point \(x_{0}\in S\), generates an infinite sequence \(\{x_{k}\}\) converging to some point \({\bar{x}}\) which is a global minimizer for g on S:
The first algorithm model is valid under an additional assumption of regularity stated in Definition 8.1. This assumption helps to find a vector-based robust Q-minimal solution faster than in the general case that will be considered later.
Definition 8.1
We say that a finite set of scenarios \({\mathcal {U}}\) is regular, if it satisfies the following condition for each pair \(i,j\in \{1,...,m\},i\ne j\):
Condition (46) means that strict inequalities between the values of f for different scenarios are preserved throughout the whole space X, and consequently, the graphs of \(f(\cdot ,\xi _{i})\) for different values of \( \xi _{i}\) do not intersect.
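On a finite grid, regularity can at least be refuted numerically. The Python sketch below is heuristic (the function looks_regular and the grid are our ad hoc choices; a finite grid can refute regularity but never prove it): it tests whether the direction of the inequality between two scenario functions is uniform over the grid.

```python
# Heuristic grid check of the regularity of a finite scenario set (a sketch).

def looks_regular(f, scenarios, grid):
    # for each pair of scenarios, the sign of f(x, xi) - f(x, xj)
    # must be the same at every grid point x
    for i, xi in enumerate(scenarios):
        for xj in scenarios[i + 1:]:
            signs = {(f(x, xi) > f(x, xj)) - (f(x, xi) < f(x, xj))
                     for x in grid}
            if len(signs) > 1:  # direction changes: the graphs intersect
                return False
    return True

grid = [i / 100 - 1 for i in range(201)]
# x**2 + xi: the graphs are vertical shifts, so the family is regular
assert looks_regular(lambda x, xi: x**2 + xi, [0.0, 0.5], grid)
# (x - xi)**2: the graphs cross, so regularity fails
assert not looks_regular(lambda x, xi: (x - xi)**2, [-0.5, 0.5], grid)
```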
The following algorithm can be used to find a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\) in the case where the number of elements of \({\mathcal {U}}\) is relatively small.
Algorithm Model 1
Step 1. Choose a starting point \((x_{0},\xi _{0})\in S\times {\mathcal {U}}\).
Step 2. If there exists \(\xi \in {\mathcal {U}}\) such that \(f(x_{0},\xi )<f(x_{0},\xi _{0})\), then set \(\xi _{0}:=\xi \) and repeat Step 2. Otherwise, go to Step 3.
Step 3. Run the algorithm \(A(f(\cdot ,\xi _{0}),x_{0})\), generating an infinite sequence \(\{x_{k}\}\).
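The three steps above can be sketched in code. Everything below is our illustration: `grid_argmin` is a crude stand-in for the assumed algorithm \(A(g,x_{0})\) (it returns a global minimizer directly instead of generating a convergent sequence), and the toy scenario set is a regular one built from vertical shifts of a single parabola.

```python
def algorithm_model_1(f, scenarios, x0, xi0, A):
    """Sketch of Algorithm Model 1 for a finite scenario set."""
    # Step 2: while some scenario is strictly better at x0, switch to it.
    improved = True
    while improved:
        improved = False
        for xi in scenarios:
            if f(x0, xi) < f(x0, xi0):
                xi0, improved = xi, True
                break
    # Step 3: globally minimize the objective of the selected scenario.
    return A(lambda x: f(x, xi0), x0), xi0

def grid_argmin(g, x0, a=-5.0, b=5.0, n=20001):
    """Crude stand-in for A: exhaustive search on a uniform grid over S = [a, b]."""
    return min((a + (b - a) * k / (n - 1) for k in range(n)), key=g)

# A regular scenario set: vertical shifts, so scenario 0 is best everywhere.
f = lambda x, xi: (x - 1) ** 2 + xi
xbar, xi_best = algorithm_model_1(f, [0, 1, 2], x0=3.0, xi0=2, A=grid_argmin)
```

Step 2 settles on scenario 0 at \(x_{0}=3\), and the subsequent minimization of \(f(\cdot ,0)\) returns the global minimizer \({\bar{x}}=1\).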
Theorem 8.1
Suppose that the set \({\mathcal {U}}\) is regular. Then, the limit \({\bar{x}}\) of the sequence \(\{x_{k}\}\) generated by Algorithm Model 1 is a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\).
Proof
Suppose that the desired conclusion is false. Then, by Corollary 3.1, there exists a point \((x^{*},\xi ^{*})\in S\times {\mathcal {U}}\) such that
Since condition (45) holds for \(g=f(\cdot ,\xi _{0})\), we have
In particular, taking \(x=x^{*}\) in (48) and combining this inequality with (47), we obtain
Since \({\mathcal {U}}\) is regular, condition (49) implies that a similar inequality must also hold for \(x_{0}\):
This, however, contradicts the construction of Algorithm Model 1 (when we go from Step 2 to Step 3, \(\xi _{0}\) is the best scenario at the point \(x_{0}\)). \(\square \)
Remark 8.1
If, in Algorithm Model 1, \(A(f(\cdot ,\xi _{0}),x_{0})\) terminates after a finite number of steps, then we can still use Theorem 8.1 by assuming that the sequence \(\{x_{k}\}\) is constant after it reaches a global minimizer \( {\bar{x}}\) of \(f(\cdot ,\xi _{0})\) on S.
We are now going to describe another algorithm model, which does not require the regularity condition (46). Instead, we assume that \({\mathcal {F}}\) satisfies the following condition for each positive integer m:
$$\begin{aligned} g_{1},...,g_{m}\in {\mathcal {F}}\ \Longrightarrow \ \min \{g_{1},...,g_{m}\}\in {\mathcal {F}}, \end{aligned}$$(50)
where the minimum is taken pointwise.
Algorithm Model 2
Step 1. Choose a starting point \(x_{0}\in S\).
Step 2. Define the function \(g(x):=\min \{f(x,\xi ):\xi \in {\mathcal {U}}\}\) and run the algorithm \(A(g,x_{0})\), generating an infinite sequence \(\{x_{k}\}\).
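The steps above can be sketched as follows; as before, `grid_argmin` is our hypothetical stand-in for the assumed global minimization algorithm \(A\), and the test problem is a non-regular scenario set whose graphs cross.

```python
def algorithm_model_2(f, scenarios, x0, A):
    """Sketch of Algorithm Model 2: minimize the pointwise minimum over
    all scenarios with the assumed global minimization algorithm A."""
    g = lambda x: min(f(x, xi) for xi in scenarios)
    return A(g, x0)

def grid_argmin(g, x0, a=-5.0, b=5.0, n=20001):
    """Crude stand-in for A: exhaustive search on a uniform grid over S = [a, b]."""
    return min((a + (b - a) * k / (n - 1) for k in range(n)), key=g)

# A non-regular scenario set: the parabolas cross, so the minimum defining g
# is attained by different scenarios for different x (cf. Remark 8.2).
f = lambda x, xi: (x - xi) ** 2
xbar = algorithm_model_2(f, [0, 2], x0=1.0, A=grid_argmin)
g_val = min((xbar - 0.0) ** 2, (xbar - 2.0) ** 2)
```

Here \(g\) attains its global minimum 0 at both \(x=0\) and \(x=2\); the returned \({\bar{x}}\) is one of them.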
Remark 8.2
The description of Algorithm Model 2 is very simple. However, it may require computing the minimum of the m values \(f(x_{k},\xi _{1}),...,f(x_{k},\xi _{m})\) at each step of the algorithm \(A(g,x_{0})\). If the regularity condition (46) is not satisfied, this minimum may be attained at different scenarios \(\xi \in {\mathcal {U}}\) for different values of \(x_{k}\).
Theorem 8.2
Suppose that condition (50) is satisfied. Then, the limit \({\bar{x}}\) of the sequence \(\{x_{k}\}\) generated by Algorithm Model 2 is a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\).
Proof
By condition (45) and the definition of g, we have
Take any \({\bar{\xi }}\in {\mathcal {U}}\) such that \(g({\bar{x}})=f({\bar{x}},{\bar{\xi }})\). This equality and (51) give that
Therefore, by Proposition 3.1, \({\bar{x}}\) is a vector-based robust Q-minimal solution of \(P({\mathcal {U}})\).
\(\square \)
9 A Computational Example
Algorithm Models 1 and 2, presented above, require applying some global minimization method for a given real-valued function. Such methods exist but, for possibly nonconvex functions, they are rather complicated. To illustrate the theory developed in the previous section, we now present a simple example of a one-dimensional uncertain optimization problem, for which Algorithm Model 1 can be applied in combination with the Shubert optimization method described in [12]. The Shubert method is designed for seeking the global maximum of a function of one real variable. Below, we briefly present its version adapted for minimization.
Let \(f:[a,b]\rightarrow {\mathbb {R}}\) be a real-valued function satisfying the Lipschitz condition, which means that there exists a constant \(C\ge 0\) such that for each \(x,y\in [a,b]\), the following inequality holds: \( \left| f(x)-f(y)\right| \le C\left| x-y\right| \).
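The Lipschitz condition can be probed numerically. The helper below is our illustration (not part of the paper): it bounds the best Lipschitz constant from below by the steepest slope between adjacent grid points, so any constant C supplied to the Shubert method must be at least this large; a grid estimate cannot certify an upper bound.

```python
def lipschitz_lower_bound(f, a, b, n=10001):
    """Lower-bounds the smallest Lipschitz constant of f on [a, b] by the
    largest finite-difference slope over a uniform grid."""
    xs = [a + (b - a) * k / (n - 1) for k in range(n)]
    return max(abs(f(y) - f(x)) / (y - x) for x, y in zip(xs, xs[1:]))

# For f(x) = (x - 2)^2 on [0, 4], the best constant is max |f'(x)| = 4,
# and the grid estimate approaches 4 from below.
est = lipschitz_lower_bound(lambda x: (x - 2) ** 2, 0.0, 4.0)
```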
We introduce the following notation:
$$\begin{aligned} F_{n}(x):=\max \{f(x_{k})-C\left| x-x_{k}\right| :k=0,1,...,n\},\quad \phi :=\min _{x\in [a,b]}f(x),\quad \Phi :=\{x\in [a,b]:f(x)=\phi \}, \end{aligned}$$
where \(x_{0},x_{1},...,x_{n}\) are the points generated by the algorithm below.
The Shubert Algorithm
Step 1. Choose a starting point \(x_{0}\in [a,b]\). Set \(n=0\).
Step 2. Find a point \(x_{n+1}\) at which the function \(F_{n}\) attains its minimum on [a, b]. Increase n by 1 and repeat Step 2.
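Steps 1–2 can be sketched as follows, assuming the minimization form of the bound, \(F_{n}(x)=\max _{k\le n}\{f(x_{k})-C\left| x-x_{k}\right| \}\), which matches the functions \(F_{0},F_{1},F_{2}\) constructed in Example 9.1 below. Since \(F_{n}\) is piecewise linear with slopes \(\pm C\), its minimum over [a, b] is attained at an endpoint or at a crossing point of two "tents" \(f(x_{i})-C\left| x-x_{i}\right| \), so the sketch minimizes \(F_{n}\) exactly over that finite candidate set.

```python
def shubert_minimize(f, a, b, C, x0, iters=40):
    """Sketch of the Shubert method adapted for minimization: maintain the
    piecewise-linear lower bound F_n(x) = max_k [f(x_k) - C|x - x_k|] and
    sample a global minimizer of F_n at every iteration."""
    xs, fs = [x0], [f(x0)]
    def F(x):
        return max(fk - C * abs(x - xk) for xk, fk in zip(xs, fs))
    for _ in range(iters):
        # The minimum of F_n over [a, b] is attained at an endpoint or at a
        # crossing of two tents f(x_i) - C|x - x_i|; enumerate all of them.
        cands = [a, b]
        for i in range(len(xs)):
            for j in range(len(xs)):
                if i != j:
                    x = (fs[i] - fs[j]) / (2 * C) + (xs[i] + xs[j]) / 2
                    if a < x < b:
                        cands.append(x)
        x_next = min(cands, key=F)    # the exact "sampling" step
        xs.append(x_next)
        fs.append(f(x_next))
    return min(xs, key=f)             # best point sampled so far

# Toy run: f(x) = (x - 2)^2 is 4-Lipschitz on [0, 4], so C = 8 is admissible.
xbar = shubert_minimize(lambda x: (x - 2) ** 2, a=0.0, b=4.0, C=8.0, x0=1.0)
```

Returning the best sampled point reflects Theorem 9.1(a): the sampled values \(f(x_{n})\) converge to the global minimum \(\phi \).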
The following theorem is a reformulation of a result from [12, p. 381].
Theorem 9.1
The Shubert algorithm generates an infinite sequence \(\{x_{n}\}\) such that:
(a) the sequence \(\{f(x_{n})\}\) converges to \(\phi \);
(b) the sequence \(\{M_{n}\}\), where \(M_{n}:=\min \left\{ F_{n}(x):x\in [a,b]\right\} \), is nondecreasing and converges to \(\phi \);
(c) \(\inf \left\{ \left| x-x_{n}\right| :x\in \Phi \right\} \rightarrow 0\) as \(n\rightarrow \infty \).
In the following example, we have used Scientific WorkPlace 5.00 software for numerical computations.
Example 9.1
Let \(X=Y=Z={\mathbb {R}}\), \(S=[0,4]\), \({\mathcal {U}}=\{0,1,2,3\}\). We define the function f as follows:
We want to apply Algorithm Model 1 to solve the problem \(P({\mathcal {U}})\), which is defined by (7)–(8). Obviously, the regularity condition (46) is satisfied. We proceed as follows:
1. We choose a starting point \((x_{0},\xi _{0})\in S\times {\mathcal {U}}\); for this example, let it be (1, 2).
2. We check whether there exists \(\xi \in {\mathcal {U}}\) such that
$$\begin{aligned} f(x_{0},\xi )<f(x_{0},\xi _{0}). \end{aligned}$$(52)
We see that two values of \(\xi \) satisfy this inequality: \(\xi =0\) and \(\xi =1\). Let us choose the first one and set \(\xi _{0}:=0\).
3. Since there is now no \(\xi \) satisfying (52), we go to Step 3 of Algorithm Model 1; that is, we apply the Shubert algorithm to the function \(g:=f(\cdot ,0)\) with the starting point \(x_{0}=1\). It is easy to show that g satisfies the Lipschitz condition on [0, 4] with the constant \(C=8\). First, we construct the function \(F_{0}\):
$$\begin{aligned} F_{0}(x)=g(x_{0})-C\left| x-x_{0}\right| =g(1)-8\left| x-1\right| =3-8\left| x-1\right| . \end{aligned}$$
4. We look for a point \(x_{1}\) at which \(F_{0}\) attains its minimum on [0, 4]. Since the graph of \(F_{0}\) consists of two line segments and its maximum is attained at \(x_{0}\), the minimum must be attained at one of the endpoints of [0, 4]. We compute these values:
$$\begin{aligned} F_{0}(0)= & {} 3-8\left| 0-1\right| =3-8=-5, \\ F_{0}(4)= & {} 3-8\left| 4-1\right| =3-24=-21. \end{aligned}$$
Hence, we accept \(x_{1}=4\).
5. We construct the function \(F_{1}\):
$$\begin{aligned} F_{1}(x)= & {} \max \{g(x_{0})-C\left| x-x_{0}\right| ,g(x_{1})-C\left| x-x_{1}\right| \} \\= & {} \max \{3-8\left| x-1\right| ,0-8\left| x-4\right| \}. \end{aligned}$$
6. We look for a point \(x_{2}\) at which \(F_{1}\) attains its minimum on [0, 4]. Observe that \(F_{1}\) is a piecewise linear function:
$$\begin{aligned} F_{1}(x)=\left\{ \begin{array}{ccc} 3-8(-x+1)=8x-5, &{} \text {for} &{} x\in [0,1], \\ 3-8(x-1)=-8x+11, &{} \text {for} &{} x\in [1,a], \\ 0-8(-x+4)=8x-32, &{} \text {for} &{} x\in [a,4], \end{array} \right. \end{aligned}$$
where a is the solution of the equation \(-8x+11=8x-32\), that is, \(a=\frac{43}{16}=2.6875\). The minimum can be attained only at one of the points 0, a, 4. We compute:
$$\begin{aligned} F_{1}(0)= & {} F_{0}(0)=-5, \\ F_{1}(a)= & {} F_{0}(a)=-8\cdot \frac{43}{16}+11=-\frac{43}{2}+11=-10.5, \\ F_{1}(4)= & {} 8\cdot 4-32=0. \end{aligned}$$
Hence, we take the exact value \(x_{2}=2.6875\).
7. We construct the function \(F_{2}\):
$$\begin{aligned} F_{2}(x)= & {} \max \{g(x_{0})-C|x-x_{0}|,g(x_{1})-C|x-x_{1}|,g(x_{2})-C|x-x_{2}|\} \\= & {} \max \{3-8|x-1|,0-8|x-4|,-2.425-8|x-2.6875|\} \\= & {} \left\{ \begin{array}{ll} 8x-5, &{} \text {for }\ \ x\in [0,1], \\ -8x+11, &{} \text {for }\ \ x\in [1,a_{1}], \\ 8x-23.925, &{} \text {for }\ \ x\in [a_{1},2.6875], \\ -8x+19.075, &{} \text {for }\ \ x\in [2.6875,a_{2}], \\ 8x-32, &{} \text {for }\ \ x\in [a_{2},4], \end{array} \right. \end{aligned}$$
where \(a_{1}\) is the solution of the equation \(-8x+11=8x-23.925\), that is, \(a_{1}=2.1828125\), and \(a_{2}\) is the solution of the equation \(-8x+19.075=8x-32\), that is, \(a_{2}=3.1921875\).
8. We look for a point \(x_{3}\) minimizing \(F_{2}\) on [0, 4]. It must be one of the points \(0,a_{1},a_{2},4\). We compute the corresponding values:
$$\begin{aligned} F_{2}(0)= & {} -5, \\ F_{2}(a_{1})= & {} F_{2}(2.1828125)=-6.4625, \\ F_{2}(a_{2})= & {} F_{2}(3.1921875)=-6.4625, \\ F_{2}(4)= & {} 0. \end{aligned}$$
Since the values of \(F_{2}\) at the points \(a_{1}\) and \(a_{2}\) are equal, we could accept either of them as the next approximation \(x_{3}\). However, only \(a_{2}=3.1921875\) is relatively close to the true global minimizer of g on [0, 4], which can be found analytically: \(2+\frac{2}{3}\sqrt{3}\approx 3.1547\). We see that the performance of the algorithm depends on the choice of minimizers of \(F_{n}\) at each iteration, which is called “sampling” in [12].
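The first two iterations above can be replayed mechanically. The definition of f is not reproduced in this excerpt, so the function g below is a hypothetical reconstruction: \(g(x)=(x-2)^{3}-4(x-2)\) is merely one choice consistent with all displayed values (\(g(1)=3\), \(g(4)=0\), \(g(2.6875)\approx -2.425\), the Lipschitz constant \(C=8\), and the minimizer \(2+\frac{2}{3}\sqrt{3}\)).

```python
import math

# Hypothetical reconstruction of g = f(., 0); NOT taken from the paper.
def g(x):
    return (x - 2) ** 3 - 4 * (x - 2)

C = 8.0   # max |g'(x)| on [0, 4] is |3*(0 - 2)**2 - 4| = 8, as stated above
x0 = 1.0

# Iteration 1: F_0(x) = g(x0) - C|x - x0| is a single tent, so its minimum
# over [0, 4] lies at an endpoint.
F0 = lambda x: g(x0) - C * abs(x - x0)
x1 = min([0.0, 4.0], key=F0)          # F0(0) = -5, F0(4) = -21, so x1 = 4

# Iteration 2: the new local minimum of F_1 is the crossing of the two tents,
# x = (g(x0) - g(x1)) / (2C) + (x0 + x1) / 2, i.e. the point a of the text.
a = (g(x0) - g(x1)) / (2 * C) + (x0 + x1) / 2          # 43/16 = 2.6875
F1 = lambda x: max(g(xk) - C * abs(x - xk) for xk in (x0, x1))
x2 = min([0.0, a, 4.0], key=F1)       # F1(a) = -10.5 is the smallest value

xstar = 2 + 2 * math.sqrt(3) / 3      # analytic minimizer reported in the text
```

Under this assumed g, the computed points \(x_{1}=4\) and \(x_{2}=2.6875\) agree with the values obtained by hand above.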
References
Köbis, E., Tammer, C., Yao, J.C.: Optimality conditions for set-valued optimization problems based on set approach and applications in uncertain optimization. J. Nonlinear Convex Anal. 18(6), 1001–1014 (2017)
Chuong, T.D.: Optimality and duality for robust multiobjective optimization problems. Nonlinear Anal. 134, 127–143 (2016)
Ide, J., Schöbel, A.: Robustness for uncertain multi-objective optimization: a survey and analysis of different concepts. OR Spectr. 38(1), 235–271 (2016)
Klamroth, K., Köbis, E., Schöbel, A., Tammer, C.: A unified approach to uncertain optimization. Eur. J. Oper. Res. 260(2), 403–420 (2017)
Taa, A.: Set-valued derivatives of multifunctions and optimality conditions. Numer. Funct. Anal. Optim. 19(1–2), 121–140 (1998)
Ha, T.X.D.: Optimality conditions for several types of efficient solutions of set-valued optimization problems. In: Pardalos, P.M., et al. (eds.) Springer Optimization and Its Applications, vol. 35. Springer, New York (2010)
Anh, N.L.H., Khanh, P.Q., Tung, L.T.: Higher order radial derivatives and optimality conditions in nonsmooth vector optimization. Nonlinear Anal. 74, 7365–7379 (2011)
Khan, A.A., Tammer, C., Zălinescu, C.: Set-Valued Optimization: An Introduction with Applications. Springer, Heidelberg (2015)
Rahimi, M., Soleimani-damaneh, M.: Robustness in deterministic vector optimization. J. Optim. Theory Appl. 179, 137–162 (2018)
Botte, M., Schöbel, A.: Dominance for multi-objective robust optimization concepts. Eur. J. Oper. Res. 273, 430–440 (2019)
Li, S.J., Sun, X.K., Zhu, S.K.: Higher-order optimality conditions for strict minimality in set-valued optimization. J. Nonlinear Convex Anal. 13(2), 281–291 (2012)
Shubert, B.O.: A sequential method seeking the global maximum of a function. SIAM J. Numer. Anal. 9(3), 379–388 (1972)
Acknowledgements
The authors are grateful to the University of Łódź for providing necessary funds and conditions needed to complete this research.
Communicated by Anita Schöbel.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Studniarski, M., Michalak, A. & Stasiak, A. Necessary and Sufficient Conditions for Robust Minimal Solutions in Uncertain Vector Optimization. J Optim Theory Appl 186, 375–397 (2020). https://doi.org/10.1007/s10957-020-01714-w