Necessary and Sufficient Conditions for Robust Minimal Solutions in Uncertain Vector Optimization

We introduce a new notion of a vector-based robust minimal solution for a vector-valued uncertain optimization problem, which is defined by means of some open cone. We present necessary and sufficient conditions for this kind of solution, which are stated in terms of some directional derivatives of vector-valued functions. To prove these results, we apply the methods of set-valued analysis. We also study relations between our definition and three other known optimality concepts. Finally, for the case of scalar optimization, we present two general algorithm models for computing vector-based robust minimal solutions.

Introduction

In uncertain optimization, the problem data depend on an uncertainty parameter that may enter either the objective function (as in [1]), or the functions defining the constraints (as in [2]), or both. The exact value of this parameter is unknown at the moment of decision, but it can be assumed that the parameter values lie in a given uncertainty set.
The theory of uncertain optimization (also called robust optimization) for multiobjective problems is a relatively new direction of research: the authors of paper [3], submitted in 2014, write that it "has been started only within the last 2 years". One possible approach to uncertain multiobjective optimization is to interpret an uncertain optimization problem as a special set-valued optimization problem and then apply the methods of set-valued analysis; see, e.g., [4, Section 3.1] and [1, Section 5]. In this paper, we follow [1] regarding the formulation of a set-valued problem associated with an uncertain vector optimization problem. We study the notion of Q-minimality (where Q is an open cone) in the context of uncertain vector optimization. We define four types of robust Q-minimal solutions, where the first one is new (a vector-based robust Q-minimal solution; see Definition 3.2(a)), while the other three are variants of some definitions known from the literature. The paper is devoted to studying relations between these four types of solutions, proving some optimality conditions for vector-based robust Q-minimal solutions and constructing algorithm models for finding them.
The organization of this paper is as follows: In Sect. 2, we briefly discuss Q-minimal solutions for set-valued optimization problems. In Sect. 3, we formulate an uncertain vector optimization problem and construct the associated set-valued optimization problem. We also define four concepts of robust Q-minimal solutions and examine relations between them. Section 4 provides one more relation for the particular case of scalar optimization. In Sect. 5, we prove a characterization of a vector-based robust Q-minimal solution of an uncertain optimization problem in terms of a radial derivative of some vector-valued function. Since this characterization may be difficult to apply in practice, in the next two sections we present other optimality conditions (necessary in Sect. 6 and sufficient in Sect. 7), which have simpler forms but are not characterizations. In Sect. 8, we discuss two general algorithm models for finding vector-based robust Q-minimal solutions for the case of scalar optimization with a finite number of scenarios. Finally, in Sect. 9, we present a computational example.

Q-Minimal Solutions in Set-Valued Optimization
Let X, Y be normed spaces, let S be a nonempty subset of X, and let F : X ⇒ Y be a set-valued map. We define the graph of F as follows:

graph F := {(x, y) ∈ X × Y : y ∈ F(x)}.

We denote by F_S the restriction of F to S, defined by F_S(x) := F(x) for x ∈ S and F_S(x) := ∅ for x ∉ S (see [5, p. 132]). Let Q be an arbitrary open cone in Y, which is nonempty and different from Y. We recall that an open cone Q is an open set satisfying the condition λy ∈ Q for all y ∈ Q and λ > 0. We consider the following set-valued optimization problem:

Min_Q F(x), subject to x ∈ S,   (2)

where the minimization is understood with respect to the cone Q, according to the following definition.

Definition 2.1 A pair (x̄, ȳ) ∈ graph F_S is called a Q-minimal solution of problem (2), iff

(F(S) − ȳ) ∩ (−Q) = ∅,   (3)

where F(S) := ∪_{x∈S} F(x).

We introduce the following relation ≺ in Y:

y¹ ≺ y² :⟺ y² − y¹ ∈ Q.   (4)

In particular, if the cone Q is convex, then the relation ≺ is transitive.

Remark 2.1
It is easy to see that (x̄, ȳ) is a Q-minimal solution of problem (2), if and only if y ⊀ ȳ for all y ∈ F(S).
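For a finite image set F(S), the criterion of Remark 2.1 can be checked by direct enumeration. The following Python sketch does this for the illustrative choice Q = int(R²₊), so that −Q consists of vectors with all components strictly negative; the helper names are ours and not part of the paper.

```python
import numpy as np

def in_minus_Q(v):
    """Membership in -Q for the illustrative cone Q = int(R^2_+):
    -Q is the set of vectors with all components strictly negative."""
    return bool(np.all(np.asarray(v, dtype=float) < 0))

def is_Q_minimal(y_bar, image_of_S):
    """Remark 2.1 for a finite image set F(S): (x_bar, y_bar) is Q-minimal
    iff no y in F(S) satisfies y - y_bar in -Q (i.e., no y with y < y_bar
    in the ordering induced by Q)."""
    y_bar = np.asarray(y_bar, dtype=float)
    return not any(in_minus_Q(np.asarray(y, dtype=float) - y_bar)
                   for y in image_of_S)

# Usage: three image points in R^2; (1, 2) is not strictly dominated.
FS = [(1.0, 2.0), (2.0, 1.0), (1.5, 1.5)]
print(is_Q_minimal((1.0, 2.0), FS))  # True
print(is_Q_minimal((2.0, 2.0), FS))  # False: (1.5, 1.5) - (2, 2) lies in -Q
```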
The notion of a Q-minimal solution has been introduced in [6]. It includes several types of solutions known from the literature as particular cases. For other solution concepts in set-valued optimization, see [8, Section 2.6].
A particular case of problem (2) is the vector optimization problem

Min_Q f(x), subject to x ∈ S,   (5)

where f : X → Y is a single-valued map.

Definition 2.2 A point x̄ ∈ S is called a Q-minimal solution of problem (5), iff (x̄, f(x̄)) is a Q-minimal solution of the corresponding problem (2) with F(x) := { f(x)}.

Remark 2.2
Obviously, x̄ is a Q-minimal solution of problem (5), if and only if

( f(S) − f(x̄)) ∩ (−Q) = ∅,   (6)

where f(S) := { f(x) : x ∈ S}.

An Uncertain Vector Optimization Problem
In this section, we formulate an uncertain vector optimization problem as in [1, Section 5], define four types of its robust Q-minimal solutions, and discuss the relationships between them. Let X, Y, Z be normed spaces, let S and U be nonempty subsets of X and Z, respectively, and let f : X × Z → Y be a given single-valued map.

Definition 3.1 An uncertain vector optimization problem P(U) is defined as the family

P(U) := {P(ξ) : ξ ∈ U}   (7)

of vector optimization problems

P(ξ): Min_Q f(x, ξ), subject to x ∈ S.   (8)

For each x ∈ X, we denote

F(x) := f(x, U) = { f(x, ξ) : ξ ∈ U}.   (9)

Then, F : X ⇒ Y is a set-valued map. In this way, we can construct a set-valued optimization problem of the form (2), associated with the uncertain vector optimization problem (7).

Definition 3.2 Let x̄ ∈ S, and let F be defined by (9). We say that:
(a) x̄ is a vector-based robust Q-minimal solution of P(U), iff there exists ȳ ∈ F(x̄) such that (x̄, ȳ) is a Q-minimal solution of problem (2);
(b) x̄ is a flimsily robust Q-minimal solution of P(U), iff x̄ is a Q-minimal solution of P(ξ) for at least one ξ ∈ U;
(c) x̄ is a highly robust Q-minimal solution of P(U), iff x̄ is a Q-minimal solution of P(ξ) for all ξ ∈ U;
(d) x̄ is a set-based robust Q-minimal solution of P(U), iff there is no x ∈ S \ {x̄} such that F(x) ⊆ F(x̄) − Q.

Remark 3.1 Part (a) of Definition 3.2 is new. Parts (b) and (c) are introduced here by analogy with Definitions 4 and 5, respectively, in [3], where the usual efficiency instead of Q-minimality was used. Part (d) is analogous to Definition 3.2 in [9]. For other concepts of robust solutions in uncertain optimization and relations between them, see [10]. The motivation for using the vector-based approach in this paper was to obtain some intermediate concept between definitions (b) and (c), which, however, proved successful in the scalar-valued case only; see Sect. 4. We will try to extend our results to vector-valued problems in further research.
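When S and U are finite, parts (a)-(c) of Definition 3.2 can be tested by enumeration. Below is a minimal Python sketch for the scalar case Y = R, Q = ]0, ∞[, where Q-minimality for P(ξ) reduces to global minimality of f(·, ξ); the function names and the toy data are illustrative assumptions, not taken from the paper.

```python
# Scalar illustration: Y = R, Q = ]0, inf[. Names and data are ours.

def flimsily(f, S, U, x_bar):
    """(b): x_bar solves P(xi) for at least one scenario xi."""
    return any(all(f(x, xi) >= f(x_bar, xi) for x in S) for xi in U)

def highly(f, S, U, x_bar):
    """(c): x_bar solves P(xi) for every scenario xi."""
    return all(all(f(x, xi) >= f(x_bar, xi) for x in S) for xi in U)

def vector_based(f, S, U, x_bar):
    """(a), scalar form: some value f(x_bar, xi_bar) is not strictly
    dominated by any value f(x, xi) over S x U."""
    return any(all(f(x, xi) >= f(x_bar, xi_bar) for x in S for xi in U)
               for xi_bar in U)

f = lambda x, xi: (x - xi) ** 2
S, U = [0, 1, 2], [0, 1]
print([x for x in S if flimsily(f, S, U, x)])      # [0, 1]
print([x for x in S if highly(f, S, U, x)])        # []
print([x for x in S if vector_based(f, S, U, x)])  # [0, 1]
```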
The proposition below clarifies the relation between Definitions 2.2 and 3.2(a).

Proposition 3.1 A point x̄ ∈ S is a vector-based robust Q-minimal solution of P(U), if and only if there exists ξ̄ ∈ U such that (x̄, ξ̄) is a Q-minimal solution of the vector optimization problem

Min_Q f(x, ξ), subject to (x, ξ) ∈ S × U.   (10)

Proof By Definition 3.2(a), formula (9) and Remark 2.2 (where S should be replaced by S × U), we have the following chain of equivalences:

x̄ is a vector-based robust Q-minimal solution of P(U)
⟺ there exists ȳ ∈ F(x̄) such that (F(S) − ȳ) ∩ (−Q) = ∅
⟺ there exists ξ̄ ∈ U such that ( f(S × U) − f(x̄, ξ̄)) ∩ (−Q) = ∅
⟺ there exists ξ̄ ∈ U such that f(x, ξ) − f(x̄, ξ̄) ∉ −Q for all (x, ξ) ∈ S × U
⟺ there exists ξ̄ ∈ U such that (x̄, ξ̄) is a Q-minimal solution of (10).   (11)

Corollary 3.1 A point x̄ ∈ X is a vector-based robust Q-minimal solution of P(U), if and only if there exists ξ̄ ∈ U such that f(x, ξ) ⊀ f(x̄, ξ̄) for all (x, ξ) ∈ S × U.
Proof This follows easily from (4) and the fourth statement in (11).
The rest of this section is devoted to studying relations between the different concepts of Q-minimality for P(U) which are listed in Definition 3.2.

Proposition 3.2 If x̄ is a vector-based robust Q-minimal solution of P(U), then it is a flimsily robust Q-minimal solution of P(U).
Proof By assumption, there exists ȳ ∈ F(x̄) such that (x̄, ȳ) is a Q-minimal solution of (2). Hence, for each ξ ∈ U, we have

f(x, ξ) − ȳ ∉ −Q for all x ∈ S.   (12)

The relation ȳ ∈ F(x̄) implies that ȳ = f(x̄, ξ̄) for some ξ̄ ∈ U. Of course, this ξ̄ also satisfies (12). Therefore, we have

f(x, ξ̄) − f(x̄, ξ̄) ∉ −Q for all x ∈ S,   (13)

which by Remark 2.2 is equivalent to x̄ being a Q-minimal solution of P(ξ̄).

Proposition 3.3 If x̄ is a highly robust Q-minimal solution of P(U), then it is a flimsily robust Q-minimal solution of P(U).
Proof This follows immediately from the definitions (see [3, Lemma 6]).

Example 3.1 This example shows that (b) ⇏ (a) and (d) ⇏ (a).
Thus, x̄ is a flimsily robust (but not highly robust) Q-minimal solution of P(U). However, x̄ is not a vector-based robust Q-minimal solution of P(U), because the only element ȳ ∈ F(0) is ȳ = 0, and (F(S) − ȳ) ∩ (−Q) ≠ ∅. We can also see that x̄ = 0 is a set-based robust Q-minimal solution of P(U). Indeed, for each x ∈ S, we have F(x) ⊄ F(x̄) − Q.

Example 3.2 This example shows that (a) ⇏ (d) and (b) ⇏ (d).
We can see that x̄ = 0 is a flimsily robust (but not highly robust) Q-minimal solution of P(U). However, x̄ = 0 is not a set-based robust Q-minimal solution of P(U). Indeed, for each x ∈ S \ {x̄}, we have F(x) ⊆ F(x̄) − Q.

Example 3.3 This example shows that (d) ⇏ (b).
25}. We will show that x̄ = 0 is a set-based robust Q-minimal solution of P(U). Suppose that this is not true. Then, there exists x ∈ S such that F(x) ⊆ F(x̄) − Q.

Example 3.4 This example shows that (c) ⇏ (a).
Let us note that

Therefore,

which means that x̄ is a Q-minimal solution of both vector optimization problems P(ξ), ξ = 1, 2.

Example 3.5 This example shows that (a) ⇏ (c). Take the same data as in Example 3.4, except for the definition of f, which now has the form

Let us note that

Therefore,

which means that x̄ is a Q-minimal solution of the vector optimization problem P(1) but is not a Q-minimal solution of the vector optimization problem P(2).
However, the point x̄ = 1 is a vector-based robust Q-minimal solution of P(U).

Example 3.6 This example shows that (c) ⇏ (d).
We will show that x̄ = 1 is a highly robust Q-minimal solution of P(U). Indeed, x̄ is a Q-minimal solution of P(1) because the set { f(x, 1) − f(x̄, 1) : x ∈ S } has empty intersection with −Q.
However, x̄ = 1 is not a set-based robust Q-minimal solution of P(U). To see this, take x = 2. We have F(2) ⊆ F(x̄) − Q.

The Case of Scalar Optimization
In this section, we consider the case where Y = R and Q = ]0, ∞[. In this case, the relation ≺ may be replaced by the usual strict inequality <. We will show that in this case, one more relation between two parts of Definition 3.2 holds, which implies that a vector-based robust Q-minimal solution is an intermediate notion between a highly robust Q-minimal solution and a flimsily robust Q-minimal solution.

Proposition 4.1 Let Y = R and Q = ]0, ∞[, and let x̄ ∈ S be such that the set F(x̄) = { f(x̄, ξ) : ξ ∈ U} is closed and bounded from below. If x̄ is a highly robust Q-minimal solution of P(U), then x̄ is a vector-based robust Q-minimal solution of P(U).

Proof Let x̄ ∈ S be a highly robust Q-minimal solution of P(U). Then, for each ξ ∈ U, x̄ is a Q-minimal solution of the scalar optimization problem P(ξ), which means that x̄ is a global minimum point of f(·, ξ) in the usual sense:

f(x̄, ξ) ≤ f(x, ξ) for all x ∈ S and ξ ∈ U.   (14)

Since the set F(x̄) = { f(x̄, ξ) : ξ ∈ U} is closed and bounded from below, there exists ξ̄ ∈ U such that

f(x̄, ξ̄) ≤ f(x̄, ξ) for all ξ ∈ U.   (15)

Conditions (14) and (15) imply that

f(x̄, ξ̄) ≤ f(x, ξ) for all (x, ξ) ∈ S × U.

Consequently, by Proposition 3.1, x̄ is a vector-based robust Q-minimal solution of P(U).
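As a concrete instance of the above argument (an illustration with data chosen by us), one can trace conditions (14) and (15) directly:

```latex
% Illustration (data chosen by us): S = \mathbb{R}, U = \{1,2\},
% f(x,\xi) = (x-1)^2 + \xi. The point \bar{x} = 1 is highly robust, and
\[
  f(1,\xi) \le f(x,\xi) \ \ \forall x \in S,\ \xi \in U \quad \text{(cf. (14))},
  \qquad
  f(1,1) \le f(1,\xi) \ \ \forall \xi \in U \quad \text{(cf. (15))},
\]
% since F(1) = \{1,2\} attains its minimum at \bar{\xi} = 1. Together these
% give f(1,1) \le f(x,\xi) on S \times U, so \bar{x} = 1 is a vector-based
% robust Q-minimal solution by Proposition 3.1.
```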

A Characterization of Vector-Based Robust Q-Minimal Solutions
In this section, we present a characterization of a vector-based robust Q-minimal solution of P(U) in terms of some radial derivative of the function f appearing in (8), restricted to S × U. To the best of our knowledge, such results are not known even in the special scalar-valued case. It seems possible to derive similar characterizations for other types of solutions, based on the set-based approach to uncertain optimization; for example, for part (d) of Definition 3.2. We plan to describe the corresponding results in a subsequent paper.
First, we recall the definition of an outer radial derivative of an arbitrary set-valued mapping.
Definition 5.1 Let F : X ⇒ Y, let (x̄, ȳ) ∈ graph F, and let m be a positive integer. The m-th order outer radial derivative of F at (x̄, ȳ) is the set-valued map D^m_R F(x̄, ȳ) : X ⇒ Y defined by

D^m_R F(x̄, ȳ)(u) := {v ∈ Y : ∃ t_n > 0, ∃ (u_n, v_n) → (u, v), ∀n, ȳ + t_n^m v_n ∈ F(x̄ + t_n u_n)}.   (16)

The derivative D^1_R F(x̄, ȳ) was first introduced in [5]; the derivative D^m_R F(x̄, ȳ) (for an arbitrary m) was defined in [7]. An interesting feature of radial derivatives is that, contrary to classical derivatives, they lead to global sufficient conditions without any (generalized) convexity assumptions. This is due to the fact that we do not require that t_n converges to zero in (16).
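For instance (a computation added here for illustration), for f(x) = x², x̄ = 0 and m = 1, the fact that t_n need not tend to zero makes the radial derivative collect all secant information:

```latex
% Illustration (ours): f(x) = x^2, \bar{x} = 0, m = 1.
\[
  D^1_R f(0)(u)
  = \{\, v \in \mathbb{R} : \exists\, t_n > 0,\ (u_n, v_n) \to (u, v),\
        t_n v_n = (t_n u_n)^2 \ \forall n \,\}
  = [0, \infty) \quad \text{for every } u \in \mathbb{R},
\]
% since v_n = t_n u_n^2 and t_n > 0 is arbitrary (not forced to tend to 0),
% t_n u_n^2 can be made to converge to any c \ge 0: this is the global
% feature mentioned above.
```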
In particular, if f : X → Y is a single-valued mapping, we will use the notation

D^m_R f(x̄)(u) := D^m_R f(x̄, f(x̄))(u) = {v ∈ Y : ∃ t_n > 0, ∃ (u_n, v_n) → (u, v), ∀n, f(x̄) + t_n^m v_n = f(x̄ + t_n u_n)}.   (17)

Proposition 5.1 Let f : X → Y, x̄, u ∈ X, and let m be a positive integer. Then, f(x̄ + u) − f(x̄) ∈ D^m_R f(x̄)(u).

Proof It is sufficient to take the constant sequences t_n ≡ 1 and (u_n, v_n) ≡ (u, v), with v := f(x̄ + u) − f(x̄), in (17).
We now return to the uncertain optimization problem P(U). We will denote by f_{S×U} the restriction of f : X × Z → Y to S × U. Then, by analogy with (17), we can write, for any (x̄, ξ̄) ∈ S × U,

D^m_R f_{S×U}(x̄, ξ̄)(x, ξ) = {y ∈ Y : ∃ t_n > 0, ∃ (x_n, ξ_n, y_n) → (x, ξ, y), ∀n, f(x̄, ξ̄) + t_n^m y_n ∈ f_{S×U}(x̄ + t_n x_n, ξ̄ + t_n ξ_n)}
= {y ∈ Y : ∃ t_n > 0, ∃ (x_n, ξ_n, y_n) → (x, ξ, y), ∀n, x̄ + t_n x_n ∈ S, ξ̄ + t_n ξ_n ∈ U, f(x̄, ξ̄) + t_n^m y_n = f(x̄ + t_n x_n, ξ̄ + t_n ξ_n)}.   (18)
We have the following counterpart of Proposition 5.1.
Proposition 5.2 Let (x̄, ξ̄) ∈ S × U, let (x, ξ) ∈ X × Z be such that x̄ + x ∈ S and ξ̄ + ξ ∈ U, and let m be a positive integer. Then, f(x̄ + x, ξ̄ + ξ) − f(x̄, ξ̄) ∈ D^m_R f_{S×U}(x̄, ξ̄)(x, ξ).

Proof It is sufficient to take t_n ≡ 1 and (x_n, ξ_n, y_n) ≡ (x, ξ, y), with y := f(x̄ + x, ξ̄ + ξ) − f(x̄, ξ̄), in (18).

Theorem 5.1 A point x̄ ∈ S is a vector-based robust Q-minimal solution of P(U) if and only if there exists ξ̄ ∈ U such that

D^m_R f_{S×U}(x̄, ξ̄)(u, d) ∩ (−Q) = ∅ for all (u, d) ∈ (S − x̄) × (U − ξ̄).   (19)
Proof Part "if". Suppose that x̄ is not a vector-based robust Q-minimal solution of P(U). Then, for each ȳ ∈ F(x̄) (where F is given by (9)), the pair (x̄, ȳ) is not a Q-minimal solution of (2). This is equivalent to

(F(S) − ȳ) ∩ (−Q) ≠ ∅ for all ȳ ∈ F(x̄).   (20)

Since ȳ ∈ F(x̄) is equivalent to ȳ = f(x̄, ξ̄) for some ξ̄ ∈ U, we obtain from (20) that

(F(S) − f(x̄, ξ̄)) ∩ (−Q) ≠ ∅ for all ξ̄ ∈ U.   (21)

Take any ξ̄ ∈ U. By (21), there exists x ∈ S such that (F(x) − f(x̄, ξ̄)) ∩ (−Q) ≠ ∅. Using the definition of F, we see that there exists ξ ∈ U such that

f(x, ξ) − f(x̄, ξ̄) ∈ −Q.   (22)

By defining u := x − x̄ ∈ S − x̄ and d := ξ − ξ̄ ∈ U − ξ̄, we can rewrite (22) as

f(x̄ + u, ξ̄ + d) − f(x̄, ξ̄) ∈ −Q.   (23)

However, by Proposition 5.2 and the relations x̄ + u = x ∈ S, ξ̄ + d = ξ ∈ U, we have

f(x̄ + u, ξ̄ + d) − f(x̄, ξ̄) ∈ D^m_R f_{S×U}(x̄, ξ̄)(u, d).   (24)

Combining (23) and (24), we get

D^m_R f_{S×U}(x̄, ξ̄)(u, d) ∩ (−Q) ≠ ∅.   (25)

We have thus verified that for each ξ̄ ∈ U, there exist u ∈ S − x̄ and d ∈ U − ξ̄ such that (25) holds. This contradicts (19).

Part "only if". Let x̄ ∈ S be a vector-based robust Q-minimal solution of P(U); then there exists ȳ ∈ F(x̄) such that (F(S) − ȳ) ∩ (−Q) = ∅. Hence, there exists ξ̄ ∈ U such that ȳ = f(x̄, ξ̄), and consequently,

( f(S × U) − f(x̄, ξ̄)) ∩ (−Q) = ∅.   (26)

We will show that

D^m_R f_{S×U}(x̄, ξ̄)(u, d) ∩ (−Q) = ∅ for all (u, d) ∈ (S − x̄) × (U − ξ̄).   (27)

Suppose to the contrary that (27) is false; then there exist x ∈ S, ξ ∈ U and y ∈ Y such that

y ∈ D^m_R f_{S×U}(x̄, ξ̄)(x − x̄, ξ − ξ̄) ∩ (−Q).   (28)

By (28) and (18), there exist sequences t_n > 0 and (x_n, ξ_n, y_n) → (x − x̄, ξ − ξ̄, y) such that, for all n, we have

x̄ + t_n x_n ∈ S, ξ̄ + t_n ξ_n ∈ U and f(x̄, ξ̄) + t_n^m y_n = f(x̄ + t_n x_n, ξ̄ + t_n ξ_n).   (29)

Since Q is open and y_n → y ∈ −Q, we have y_n ∈ −Q for sufficiently large n. As Q is an open cone, the last relation implies t_n^m y_n ∈ −Q. From this and (29), we deduce a contradiction to (26).
The characterization given in Theorem 5.1 is difficult to apply in practice, as it involves the restriction f_{S×U}, which is not easy to compute, especially if the constraint set S is defined by some functional conditions. Therefore, in the next two sections, we present a necessary condition (Theorem 6.2) and a sufficient condition (Theorem 7.1), both for a vector-based robust Q-minimal solution of P(U), which do not use this restricted function.

Necessary Optimality Conditions
The following derivative for a set-valued mapping F : X ⇒ Y was first defined in [11].

Definition 6.1 Let F : X ⇒ Y, let (x̄, ȳ) ∈ graph F, and let m be a positive integer. The m-th order derivative of F at (x̄, ȳ) is the set-valued map d^m F(x̄, ȳ) : X ⇒ Y defined by

d^m F(x̄, ȳ)(u) := {v ∈ Y : ∃ t_n → 0⁺, ∃ (u_n, v_n) → (u, v), ∀n, ȳ + t_n^m v_n ∈ F(x̄ + t_n u_n)}.
We will also use the following derivative for a vector-valued map f : X → Y (if it exists):

d^m f(x̄; u) := lim_{t→0⁺, u′→u} ( f(x̄ + t u′) − f(x̄)) / t^m,

where m is a positive integer and u ∈ X.

Definition 6.2
The contingent cone to S at x̄ ∈ cl S is defined as follows:

T(S, x̄) := {u ∈ X : ∃ t_n → 0⁺, ∃ u_n → u, ∀n, x̄ + t_n u_n ∈ S}.

The following two theorems give, in view of Definition 3.2(a), necessary conditions for x̄ ∈ S to be a vector-based robust Q-minimal solution of P(U).

Theorem 6.1 Let (x̄, ȳ) ∈ graph F_S, and let m be a positive integer. If (x̄, ȳ) is a Q-minimal solution of problem (2), then

d^m F_S(x̄, ȳ)(u) ∩ (−Q) = ∅ for all u ∈ X.

Proof Suppose to the contrary that there exist u ∈ X and v ∈ d^m F_S(x̄, ȳ)(u) ∩ (−Q). By the definition of d^m F_S, there exist sequences t_n → 0⁺ and (u_n, v_n) → (u, v) such that ȳ + t_n^m v_n ∈ F_S(x̄ + t_n u_n), which is equivalent to

x̄ + t_n u_n ∈ S and ȳ + t_n^m v_n ∈ F(x̄ + t_n u_n).   (32)
Since Q is open and v_n → v ∈ −Q, we have v_n ∈ −Q for sufficiently large n. As Q is an open cone, the last relation implies t_n^m v_n ∈ −Q. From this and (32), we deduce a contradiction to (3).

Remark 6.1
In contrast to the other results of this paper, Theorem 6.1 remains valid even if F is an arbitrary set-valued map, not necessarily defined by formula (9).
Theorem 6.2 Let F be given by (9), let (x̄, ȳ) ∈ graph F_S, and let m be a positive integer. Suppose that, for each ξ̄ ∈ U satisfying the condition

f(x̄, ξ̄) = ȳ,   (33)

and for each pair (u, d) ∈ X × Z, there exists the derivative d^m f((x̄, ξ̄); (u, d)) ∈ Y. If (x̄, ȳ) is a Q-minimal solution of problem (2), then, for each ξ̄ ∈ U satisfying (33),

d^m f((x̄, ξ̄); (u, d)) ∉ −Q   (34)

holds for every pair (u, d) satisfying

(u, d) ∈ T(S × U, (x̄, ξ̄)).   (35)
Proof Suppose that the desired conclusion is false; then there exist vectors ξ̄ ∈ U and (u, d) ∈ X × Z satisfying (33) and (35), respectively, such that

v := d^m f((x̄, ξ̄); (u, d)) ∈ −Q.   (36)

By (35), there exist sequences t_n → 0⁺ and (u_n, d_n) → (u, d) such that, for all n,

(x̄ + t_n u_n, ξ̄ + t_n d_n) ∈ S × U.   (37)

Let ξ_n := ξ̄ + t_n d_n. By (36) and the definition of d^m f, we have

v_n := ( f(x̄ + t_n u_n, ξ_n) − f(x̄, ξ̄)) / t_n^m → v.   (38)

It follows from (33) and (38) that

ȳ + t_n^m v_n = f(x̄ + t_n u_n, ξ_n).   (39)

By (37), we obtain x̄ + t_n u_n ∈ S and ξ_n ∈ U. These two relations and conditions (9), (39) give ȳ + t_n^m v_n ∈ F(x̄ + t_n u_n) = F_S(x̄ + t_n u_n).
We have thus verified that there exist sequences t_n → 0⁺, u_n → u and v_n → v such that ȳ + t_n^m v_n ∈ F_S(x̄ + t_n u_n) for all n. This means that v ∈ d^m F_S(x̄, ȳ)(u). But this contradicts Theorem 6.1, because v ∈ −Q.
Example 6.1 Let X = Z = Y = R, Q = ]0, ∞[, S = R, U = [0, 1], and f(x, ξ) = x² + ξ. Then F(x) = [x², x² + 1] for all x ∈ R, and x̄ = 0 is a vector-based robust Q-minimal solution of P(U) with ȳ = 0 and ξ̄ = 0. Observe that ξ̄ = 0 is the only element of U satisfying condition (33). We also have T(S × U, (x̄, ξ̄)) = R × [0, ∞[. For such directions (u, d), we can compute

d¹ f((x̄, ξ̄); (u, d)) = lim_{t→0⁺, (u′,d′)→(u,d)} (t²(u′)² + t d′)/t = d.

Since d ∉ −Q, the necessary condition given in Theorem 6.2 is satisfied for m = 1. Note that for m = 2, we cannot apply Theorem 6.2, because the derivative d² f((x̄, ξ̄); (u, d)) (for d > 0) does not exist as an element of R:

(t²(u′)² + t d′)/t² = (u′)² + d′/t → +∞ as t → 0⁺.

Example 6.2 Take the same data as in Example 6.1, except for the definition of f, which now has the form f(x, ξ) = x² + ξ². As before, we have F(x) = [x², x² + 1] for all x ∈ R. Moreover, x̄ = 0 is a vector-based robust Q-minimal solution of P(U) with the same points ȳ = 0 and ξ̄ = 0. In this example, we can apply Theorem 6.2 both for m = 1 and m = 2, because, for all (u, d) ∈ R × R, d¹ f((x̄, ξ̄); (u, d)) = 0 and d² f((x̄, ξ̄); (u, d)) = u² + d², and neither of these values belongs to −Q.

Example 6.3 Observe that for each ξ ∈ [0, 1], the point x = 0 is a Q-minimal solution of P(ξ), and for ξ ∈ [−1, 0[, it is not a Q-minimal solution of P(ξ). Moreover, the point x̄ = 0 is not a vector-based robust Q-minimal solution of P(U), because the only element ȳ ∈ F(0) is ȳ = 0, and (F(S) − ȳ) ∩ (−Q) ≠ ∅. We will show that, applying Theorem 6.2, we can exclude the point 0 as a possible vector-based robust Q-minimal solution of P(U). Take any point ξ̄ ∈ U; it obviously satisfies condition (33) of the form f(0, ξ̄) = 0. Since condition (35) imposes no restriction here, we can take any direction as (u, d) in (34). We can verify that, for suitably chosen (u, d) and m, the derivative d^m f((0, ξ̄); (u, d)) belongs to −Q, so that the necessary condition (34) fails at x̄ = 0.
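The first-order derivative appearing in Example 6.1 can also be approximated numerically. The following Python sketch (ours) uses forward differences with a small step; strictly speaking, d¹f is a limit that also allows perturbed directions u′ → u, but for this smooth f constant directions suffice.

```python
# Data of Example 6.1: f(x, xi) = x**2 + xi, (x_bar, xi_bar) = (0, 0).
def d1_approx(f, point, direction, t=1e-8):
    """Forward-difference approximation of d^1 f(point; direction)."""
    (x, xi), (u, d) = point, direction
    return (f(x + t * u, xi + t * d) - f(x, xi)) / t

f = lambda x, xi: x ** 2 + xi
for u, d in [(1.0, 0.0), (1.0, 0.5), (-2.0, 1.0)]:
    v = d1_approx(f, (0.0, 0.0), (u, d))
    # The limit equals d, which is never in -Q for admissible d >= 0.
    print((u, d), round(v, 6))
```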

Sufficient Optimality Conditions
We will now prove a sufficient optimality condition for uncertain optimization.
Theorem 7.1 Let F be given by (9), and let x̄ ∈ S. If there exists ξ̄ ∈ U such that

D^m_R f((x̄, ξ̄))(u, d) ∩ (−Q) = ∅ for all (u, d) ∈ (S − x̄) × (U − ξ̄),   (40)

then x̄ is a vector-based robust Q-minimal solution of problem P(U).
Proof Suppose that the desired conclusion is false. Then, for each ȳ ∈ F(x̄), the pair (x̄, ȳ) is not a Q-minimal solution of (2). By arguing as in the proof of Theorem 5.1, part "if", we can show that for each ξ̄ ∈ U, there exist u ∈ S − x̄ and d ∈ U − ξ̄ such that

f(x̄ + u, ξ̄ + d) − f(x̄, ξ̄) ∈ −Q.   (41)

However, by Proposition 5.1, we have

f(x̄ + u, ξ̄ + d) − f(x̄, ξ̄) ∈ D^m_R f((x̄, ξ̄))(u, d).   (42)

Combining (41) and (42), we get

D^m_R f((x̄, ξ̄))(u, d) ∩ (−Q) ≠ ∅.   (43)

We have thus verified that for each ξ̄ ∈ U, there exist u ∈ S − x̄ and d ∈ U − ξ̄ such that (43) holds. This contradicts the assumption of the theorem.
Example 7.1 Take X = Z = Y = R, Q = ]0, ∞[, S = R, U = [0, 1], f(x, ξ) = x² + ξ², and x̄ = 0 (we have the same data as in Example 6.2). We will show that condition (40) holds for ξ̄ = 0. Indeed, for each x ∈ S and d ∈ U, we have f(x, d) = x² + d² ≥ 0 = f(x̄, ξ̄). Hence, by (17), every element of D^m_R f((x̄, ξ̄))(u, d) is a limit of nonnegative numbers and so cannot belong to −Q = ]−∞, 0[.
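Condition (40) in this example can also be probed numerically. The Python sketch below (ours) samples arbitrary positive step sizes, reflecting that radial derivatives do not require t → 0; it can only support, not prove, the condition.

```python
import numpy as np

# Data of Example 7.1: f(x, xi) = x**2 + xi**2, (x_bar, xi_bar) = (0, 0), m = 1.
f = lambda x, xi: x ** 2 + xi ** 2

rng = np.random.default_rng(0)
ok = True
for _ in range(1000):
    u, d = rng.normal(), rng.normal()        # a candidate direction (u, d)
    t = 10.0 ** rng.uniform(-6, 3)           # arbitrary positive step, not t -> 0
    v = (f(t * u, t * d) - f(0.0, 0.0)) / t  # candidate radial-derivative value
    ok = ok and (v >= 0)                     # never in -Q = ]-inf, 0[
print(ok)  # True: every sampled value is nonnegative
```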

Construction of Algorithms for a Finite Set of Scenarios
In this section, we return to the case of scalar optimization considered in Sect. 4. We present two general algorithm models that can be useful for solving the particular case of problem P(U) where the set U is finite: U = {ξ_1, ..., ξ_m} (we say that we have m different scenarios). This case is important for some practical applications; see, e.g., [3, Example 3].
Throughout this section, we assume that for each i ∈ {1, ..., m}, the function f(·, ξ_i) : X → R belongs to a fixed class F of functions. We also assume that there exists an algorithm A(g, x_0) which, for a given function g ∈ F and a given starting point x_0 ∈ S, generates an infinite sequence {x_k} converging to some point x̄ which is a global minimizer for g on S:

g(x̄) = min { g(x) : x ∈ S }.   (45)

The first algorithm model is valid under an additional assumption of regularity, stated in Definition 8.1. This assumption helps to find a vector-based robust Q-minimal solution faster than in the general case that will be considered later.
Definition 8.1 We say that a finite set of scenarios U is regular, iff it satisfies the following condition for each pair i, j ∈ {1, ..., m}, i ≠ j:

if f(x, ξ_i) < f(x, ξ_j) for some x ∈ X, then f(x, ξ_i) < f(x, ξ_j) for all x ∈ X.   (46)

Condition (46) means that strict inequalities between the values of f for different scenarios are preserved throughout the whole space X, and consequently, the graphs of f(·, ξ_i) for different values of ξ_i do not intersect.
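On a finite grid, regularity can be falsified (though never fully verified) numerically. A small Python sketch, with helper names of our choosing:

```python
import itertools
import numpy as np

def regular_on_grid(f, scenarios, grid):
    """Return False if condition (46) fails on the grid, i.e., if the
    strict order between two scenarios flips at different points."""
    for xi, xj in itertools.combinations(scenarios, 2):
        signs = {int(np.sign(f(x, xi) - f(x, xj))) for x in grid}
        if 1 in signs and -1 in signs:
            return False
    return True

# f(x, xi) = x**2 + xi: scenario graphs are vertical shifts, never crossing.
f = lambda x, xi: x ** 2 + xi
print(regular_on_grid(f, [0.0, 0.5, 1.0], np.linspace(-2.0, 2.0, 41)))  # True
```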
The following algorithm can be used to find a vector-based robust Q-minimal solution of P(U) in the case where the number of elements of U is relatively small.
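To make this concrete, here is a minimal Python sketch of such a procedure, assembled from the steps referenced in Sect. 9 (repeated scenario comparison at the starting point, followed by global minimization of the selected scenario); the function names, the tie-handling, and the abstract global minimizer are our assumptions, not the paper's listing.

```python
def algorithm_model_1(f, scenarios, global_minimize, x0):
    """Sketch of Algorithm Model 1 for a regular, finite scenario set.

    f(x, xi)        -- scalar objective,
    scenarios       -- finite list of scenario values xi_1, ..., xi_m,
    global_minimize -- stands for the abstract algorithm A(g, x0): it must
                       return a global minimizer of g on S (condition (45)),
    x0              -- starting point in S.
    """
    # Step 1: take the first scenario as the initial one.
    xi0 = scenarios[0]
    # Step 2: while some scenario has a strictly smaller value at x0,
    # switch to it; under regularity (46), the ordering found at x0
    # is the same at every point of X.
    changed = True
    while changed:
        changed = False
        for xi in scenarios:
            if f(x0, xi) < f(x0, xi0):
                xi0, changed = xi, True
    # Step 3: globally minimize the selected scenario function.
    x_bar = global_minimize(lambda x: f(x, xi0), x0)
    return x_bar, xi0
```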

Theorem 8.1 Suppose that the set U is regular. Then, the limit x̄ of the sequence {x_k} generated by Algorithm Model 1 is a vector-based robust Q-minimal solution of P(U).
Proof Suppose that the desired conclusion is false. Then, by Corollary 3.1 (applied with ξ̄ = ξ_0, the scenario selected in Step 2), there exists a point (x*, ξ*) ∈ S × U such that

f(x*, ξ*) < f(x̄, ξ_0).

Since condition (45) holds for g = f(·, ξ_0), we have f(x̄, ξ_0) ≤ f(x*, ξ_0). Moreover, Step 2 guarantees that there is no ξ ∈ U with f(x_0, ξ) < f(x_0, ξ_0); hence, by the regularity condition (46), f(x*, ξ_0) ≤ f(x*, ξ*). Combining these inequalities, we obtain f(x̄, ξ_0) ≤ f(x*, ξ*), which contradicts f(x*, ξ*) < f(x̄, ξ_0).

A Computational Example
Algorithm Models 1 and 2, presented above, require applying some global minimization method for a given real-valued function. Such methods exist but, for possibly nonconvex functions, they are rather complicated. To illustrate the theory developed in the previous section, we present here a simple example of a one-dimensional uncertain optimization problem, for which Algorithm Model 1 can be applied in combination with the Shubert optimization method described in [12]. The Shubert method is designed for seeking the global maximum of a function of one real variable. Below, we briefly present its version adapted for minimization. Let f : [a, b] → R be a real-valued function satisfying the Lipschitz condition, which means that there exists a constant C ≥ 0 such that, for each x, y ∈ [a, b], the following inequality holds:

| f(x) − f(y)| ≤ C |x − y|.

We introduce the following notation:

F_n(x) := max_{0≤i≤n} ( f(x_i) − C |x − x_i| ), x ∈ [a, b].

The Shubert Algorithm
Step 1. Choose a starting point x_0 ∈ [a, b]. Set n := 0.
Step 2. Find a point x_{n+1} at which the function F_n attains its minimum on [a, b]. Increase n by 1 and repeat Step 2.
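A compact Python sketch of this minimization variant (ours, under the stated Lipschitz assumption); for simplicity, it minimizes the piecewise linear lower bound F_n on a dense grid instead of computing its breakpoints exactly.

```python
import numpy as np

def shubert_minimize(f, a, b, C, x0, n_iter=20, grid=10001):
    """Shubert's method adapted for minimizing a C-Lipschitz f on [a, b].

    At step n, F_n(x) = max_i (f(x_i) - C|x - x_i|) is a piecewise linear
    lower bound for f built from the sampled points x_0, ..., x_n; the
    next sample x_{n+1} is a minimizer of F_n (here: over a dense grid).
    """
    xs = np.linspace(a, b, grid)
    pts = [x0]
    for _ in range(n_iter):
        Fn = np.max([f(p) - C * np.abs(xs - p) for p in pts], axis=0)
        pts.append(float(xs[np.argmin(Fn)]))
    return min(pts, key=f)  # best sampled point found so far
```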
The following theorem is a reformulation of a result from [12, p. 381]. In the following example, we have used Scientific WorkPlace 5.00 software for numerical computations.
We define the function f as follows:

We want to apply Algorithm Model 1 to solve the problem P(U), which is defined by (7)-(8). Obviously, the regularity condition (46) is satisfied. We proceed as follows:

1. We choose a starting point (x_0, ξ_0) ∈ S × U. For this example, let it be equal to (1, 2).
2. We see that there are two values of ξ that satisfy this inequality: ξ = 0 and ξ = 1. Let us choose the first one, and set ξ_0 := 0.

3. Since there is no ξ satisfying (52), we go to Step 3 of Algorithm Model 1; that is, we apply the Shubert algorithm to the function g := f(·, 0) with the starting point x_0 = 1. It is easy to show that g satisfies the Lipschitz condition on [0, 4] with the constant C = 8. First, we construct the function F_0.

4. We look for a point x_1, at which F_0 attains its minimum on [0, 4]. Since the graph of F_0 consists of two line segments, and its maximum is attained at x_0, the minimum must be attained at one of the endpoints of [0, 4]. Comparing the values F_0(0) and F_0(4), we accept x_1 = 4.

5. We construct the function F_1.

6. We look for a point x_2, at which F_1 attains its minimum on [0, 4]. Observe that F_1 is a piecewise linear function; analyzing its linear pieces, we can take 2.6875 as the exact value for x_2.
7. We construct the function F_2. Since the values of F_2 at the points a_1 and a_2 are equal, we could accept either of them as the next approximation x_3. However, only a_2 ≈ 3.1922 is relatively close to the true global minimizer of g on [0, 4], which can be found analytically: 2 + (2/3)√3 ≈ 3.1547. We can see that the performance of the algorithm depends on the choice of minimizers for F_n at each iteration, which is called "sampling" in [12].