Nonlinear expectations of random sets

Sublinear functionals of random variables are known as sublinear expectations; they are convex homogeneous functionals on infinite-dimensional linear spaces. We extend this concept to set-valued functionals defined on measurable set-valued functions (which form a nonlinear space), equivalently, on random closed sets. This calls for a separate study of sublinear and superlinear expectations, since a change of sign does not convert one into the other in the set-valued setting. We identify the extremal expectations as those arising from the primal and dual representations of nonlinear expectations. Several general construction methods for nonlinear expectations are presented, and the corresponding duality representation results are obtained. On the application side, sublinear expectations are naturally related to depth trimming of multivariate samples, while superlinear ones can be used to assess utilities of multiasset portfolios.

u(ξ + η) ≥ u(ξ) + u(η). (1.3)

In many studies, the homogeneity property together with the sub-(super-)additivity is replaced by the convexity of e and the concavity of u. These nonlinear expectations may be defined on a larger family than L^p or on its subfamily; it is necessary to assume that the domain of definition contains all constants and is closed under addition and multiplication by positive constants. The range of values may be extended to (−∞, ∞] for the sublinear expectation and to [−∞, ∞) for the superlinear one. The choice of notation e and u is explained by the fact that the superlinear expectation can be viewed as a utility function that allocates a higher utility value to the sum of two random variables in comparison with the sum of their individual utilities, see [5]. If a random variable ξ models a financial gain, then r(ξ) = −u(ξ) is called a coherent risk measure. Property (1.1) is then termed cash invariance, and the superadditivity property turns into subadditivity due to the change of sign. The subadditivity of risk means that the sum of two random variables bears at most the risk given by the sum of their individual risks; this is justified by the economic principle of diversification.
It is easy to see that e is a sublinear expectation if and only if u(ξ) = −e(−ξ) is a superlinear one; in this case, e and u are said to form an exact dual pair. The sublinearity property yields that e(ξ) + e(−ξ) ≥ e(0) = 0, so that −e(−ξ) ≤ e(ξ). The interval [u(ξ), e(ξ)] generated by an exact dual pair of nonlinear expectations characterises the uncertainty in the determination of the expectation of ξ. In finance, such intervals determine price ranges in illiquid markets, see [19]. We equip the space L^p with the σ(L^p, L^q)-topology based on the standard pairing of L^p and L^q with 1/p + 1/q = 1. It is usually assumed that e is lower semicontinuous and u is upper semicontinuous in the σ(L^p, L^q)-topology. If e and u take finite values, general results of functional analysis concerning convex functions on linear spaces imply the semicontinuity property for p ∈ [1, ∞), see [15]; it is additionally imposed if p = ∞. A nonlinear expectation is said to be law invariant (more exactly, law-determined) if it takes the same value on identically distributed random variables, see [8, Sec. 4.5].
A rich source of sublinear expectations is provided by suprema of conventional (linear) expectations taken with respect to several probability measures. Assuming the σ(L^p, L^q)-lower semicontinuity, the bipolar theorem yields that this is the only possible case, see [5] and [15]. Then

e(ξ) = sup{E(γξ) : γ ∈ M, Eγ = 1} (1.5)

is the supremum of expectations E(γξ) over a convex σ(L^q, L^p)-closed cone M in L^q(R_+); the superlinear expectation is obtained by replacing the supremum with the infimum. In the following, we assume that (1.5) holds and the representing set M is chosen so that the corresponding sublinear and superlinear expectations are law invariant, that is, with each γ, M contains all random variables identically distributed as γ.
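On a finite probability space, the representation (1.5) can be evaluated directly. The following sketch (function and variable names are illustrative, not from the text) uses the cone M generated by densities bounded by α^{−1}, the average Value-at-Risk family of Example 5.14; the exact dual superlinear expectation is obtained as u(ξ) = −e(−ξ):

```python
import numpy as np

def sublinear_expectation(xi, probs, alpha):
    """e(xi) = sup E[gamma*xi] over densities 0 <= gamma <= 1/alpha with
    E[gamma] = 1; the supremum loads the cap 1/alpha on the largest outcomes."""
    order = np.argsort(xi)[::-1]             # largest outcomes first
    xi, probs = xi[order], probs[order]
    gamma = np.zeros_like(probs)
    budget = 1.0                             # remaining mass of E[gamma]
    for i, p in enumerate(probs):
        gamma[i] = min(1.0/alpha, budget/p)  # fill greedily up to the cap
        budget -= gamma[i]*p
        if budget <= 1e-12:
            break
    return float(np.sum(gamma*probs*xi))

xi = np.array([4.0, 1.0, 3.0, 2.0])          # four equally likely outcomes
probs = np.full(4, 0.25)
e = sublinear_expectation(xi, probs, alpha=0.5)    # averages the two largest
u = -sublinear_expectation(-xi, probs, alpha=0.5)  # exact dual: u = -e(-xi)
print(e, u)   # 3.5 1.5
```

The interval [u(ξ), e(ξ)] = [1.5, 3.5] illustrates the uncertainty interval generated by an exact dual pair.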
A random closed set X in a Euclidean space is a random element with values in the family F of closed sets in R^d such that {X ∩ K ≠ ∅} is a measurable event for all compact sets K in R^d, see [20]. In other words, a random closed set is a measurable set-valued function. A random closed set X is said to be convex if X almost surely belongs to the family co F of closed convex sets in R^d. For convex random sets in Euclidean space, the measurability condition is equivalent to the fact that the support function of X (see (2.2)) is a random function on R^d with values in (−∞, ∞]. In the set-valued setting, it is natural to replace the inequalities (1.2) and (1.3) with inclusions. For sets, the minus sign corresponds to the reflection with respect to the origin; it does not alter the direction of an inclusion, and so there is no direct link between set-valued sublinear and superlinear expectations. Set inclusions are always understood as nonstrict, e.g., A ⊂ B allows for A = B.
This paper aims to systematically explore nonlinear set-valued expectations. Section 2 recalls the classical concept of the (linear) selection expectation for random closed sets, see [2] and [20,Sec. 2.1]. A random vector ξ is said to be a selection of X if ξ ∈ X almost surely. The selection expectation EX is defined as the closure of the set of expectations of all integrable selections of X (the primal representation) or by considering the expected support function (being the dual representation). In this section, we introduce a suitable convergence concept for (possibly, unbounded) random convex sets based on linear functionals applied to the support function.
Nonlinear expectations of random convex sets are introduced in Section 3. The definitions refine the properties of nonlinear expectations stated in [20, Sec. 2.2.7]. Basic examples of such expectations and more involved constructions are considered, with particular attention to the expectations of random singletons and half-spaces. It is also explained how the set-valued expectation applies to random convex functions and how it is possible to get rid of the homogeneity property and extend the setting to convex/concave functionals.
Among the rather vast variety of nonlinear expectations, it is possible to identify extremal ones: the minimal sublinear expectation of X is the convex hull of nonlinear expectations of all sets from some family that yields X as their union. In the case of selections, this becomes a direct generalisation of the primal representation of the selection expectation. The maximal superlinear extension is the intersection of nonlinear expectations of all half-spaces containing the random set. While in the linear case both constructions coincide and provide two equivalent definitions of the selection expectation, in general the two constructions differ.
Nonlinear maps restricted to the family L^p(R^d) of p-integrable random vectors have been studied in [4,9]; comprehensive duality results can be found in [7]. In our framework, these studies concern the cases when the argument of a superlinear expectation is the sum of a random vector and a convex cone. However, for general set-valued arguments, it is not possible to rely on the approach of [9,7], since the known techniques of set-valued optimisation theory (see, e.g., [16]) are not applicable.
The key technique suitable to handle nonlinear expectations relies on the bipolar theorem. A direct generalisation of this theorem to functionals of random convex sets is not feasible, since random convex sets do not form a linear space. Section 5 provides duality results for sublinear expectations and Section 6 for superlinear ones. Specifically, the constant preserving minimal sublinear expectations are identified. For the superlinear expectation, the family of random closed convex sets such that the sublinear expectation contains the origin is a convex cone. However, it is rather tricky to use separation results, since linear functionals (such as the selection expectation) may have trivial values on unbounded integrable random sets. For instance, the selection expectation of a random half-space with a nondeterministic normal is the whole space; in this case the superlinear expectation is not dominated by any nontrivial linear one. In order to handle such situations, the duality results for superlinear expectations are proved for the maximal superlinear expectation. It is shown that the superlinear expectation of a singleton is usually empty; in order to come up with a nontrivial minimal extension, singletons in the definition of the minimal extension are replaced by translated cones.
Some applications are outlined in Section 7. Sublinear expectations are useful as depth functions in order to identify outliers in samples of random sets. Such samples often appear in partially identified models in econometrics, see [22]. The superlinear expectation is closely related to measuring multivariate risk in finance and to multivariate utilities. Superlinear expectations are useful to describe utility, since the utility of the sum of two portfolios described by random sets "dominates" the sum of their individual utilities. We show that the minimal extension of a superlinear expectation is closely related to the selection risk measure of lower random sets considered in [21].
The Appendix presents a self-contained proof of the fact that vector-valued sublinear expectations of random vectors necessarily split into sublinear expectations applied to each component of the vector. This fact reiterates the point that the set-valued setting is essential for defining nonlinear expectations of random vectors.
Note the following notational conventions: X, Y denote random closed convex sets, F is a deterministic closed convex set, ξ and β are p-integrable random vectors and random variables, ζ and γ are q-integrable vectors and variables with 1/p + 1/q = 1, η is usually a random vector from the unit sphere S d−1 , u and v are deterministic points from S d−1 .
2 Selection expectation

2.1 Integrable random sets and selection expectation

Let X be a random closed set in R^d, which is always assumed to be almost surely non-empty. A random vector ξ is called a selection of X if ξ ∈ X almost surely. Let L^p(X) denote the family of p-integrable selections of X for p ∈ [1, ∞), essentially bounded ones if p = ∞, and all selections if p = 0. If L^p(X) is not empty, then X is called p-integrable, shortly integrable if p = 1. This is the case if X is p-integrably bounded, that is, if ‖X‖ = sup{‖x‖ : x ∈ X} is p-integrable. If X is integrable, then its selection expectation is defined by

EX = cl{Eξ : ξ ∈ L^1(X)}, (2.1)

that is, the closure of the set of expectations of all integrable selections of X, see [20, Sec. 2.1.2]. If X is integrably bounded, then the closure on the right-hand side is not needed, EX is compact, and EX is also convex if X is almost surely convex or the underlying probability space is non-atomic. From now on, we assume that all random closed sets are almost surely convex.
The support function of a non-empty set F in R^d is defined by

h(F, u) = sup{⟨u, x⟩ : x ∈ F}, u ∈ R^d, (2.2)

allowing for possibly infinite values if F is not bounded, where ⟨u, x⟩ denotes the scalar product. Due to homogeneity, the support function is determined by its values on the unit sphere S^{d−1}. If X is an integrable random closed set, then its expected support function is the support function of EX, that is,

Eh(X, u) = h(EX, u), u ∈ R^d, (2.3)

so that EX = {x : ⟨x, u⟩ ≤ Eh(X, u) for all u}, which may be seen as the dual representation of the selection expectation, with (2.1) being its primal representation. An axiomatic Daniell–Stone type characterisation of the selection expectation is provided in [1]. Property (2.3) can also be expressed by saying that it is possible to interchange the expectation and the supremum in the expected support function. If X is an integrable random closed set and H is a sub-σ-algebra of F, the conditional expectation E(X|H) is identified by its support function, being the conditional expectation of the support function of X, see [12] and [20, Sec. 2.1.6]. The dilation (scaling) of a closed set F is defined as cF = {cx : x ∈ F} for c ∈ R. For two closed sets F_1 and F_2, their closed Minkowski sum is defined by

F_1 + F_2 = cl{x + y : x ∈ F_1, y ∈ F_2},

and the sum is empty if at least one summand is empty. If at least one of F_1 and F_2 is compact, then the closure on the right-hand side is not needed. We shortly write F + a instead of F + {a} for a ∈ R^d.
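The interchange of expectation and support function can be checked numerically for a finitely-valued random compact set on a non-atomic probability space, where the selection expectation is the weighted Minkowski sum of the values; the helper names below are illustrative:

```python
import numpy as np
from itertools import product

def support(vertices, u):
    """h(F, u) = sup of <u, x> over F; attained at a vertex for a polytope."""
    return max(float(np.dot(u, v)) for v in vertices)

def minkowski(A, B):
    """Vertices spanning the Minkowski sum of two polytopes (compact sets, so
    no closure is needed); redundant points are harmless for support functions."""
    return [a + b for a, b in product(A, B)]

tri = [np.array(v, dtype=float) for v in [(0, 0), (2, 0), (0, 2)]]
sq = [np.array(v, dtype=float) for v in [(-1, -1), (1, -1), (1, 1), (-1, 1)]]

# X equals tri or sq with probability 1/2 each; on a non-atomic space
# EX = 0.5*tri + 0.5*sq, and Eh(X, u) = h(EX, u) for every direction u.
EX = minkowski([0.5*v for v in tri], [0.5*v for v in sq])
for u in [np.array(d, dtype=float) for d in [(1, 0), (0, 1), (1, 1), (-2, 3)]]:
    Eh = 0.5*support(tri, u) + 0.5*support(sq, u)
    assert abs(Eh - support(EX, u)) < 1e-12
print("Eh(X, .) = h(EX, .) on all tested directions")
```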
If X and Y are random closed convex sets, then X + Y is a random closed set, see [20, Th. 1.3.25]. The selection expectation is linear on integrable random closed sets, that is, E(X + Y) = EX + EY and E(cX) = cEX for c ∈ R, see, e.g., [20, Prop. 2.1.32].
Let C be a deterministic closed convex cone in R^d which is distinct from the whole space. If F = F + C, then F is said to be C-closed. Due to the closed Minkowski sum on the right-hand side, F is also topologically closed. Let co F(C) denote the family of all C-closed convex sets in R^d (including the empty set), and let L^p(co F(C)) be the family of all p-integrable random sets with values in co F(C). Any random set from L^p(co F(C)) is necessarily a.s. non-empty. By

G = C° = {u ∈ R^d : ⟨u, x⟩ ≤ 0 for all x ∈ C}

we denote the polar cone to C.
Example 2.1. If C = (−∞, 0]^d, then co F(C) is the family of lower convex closed sets, and a random closed convex set with realisations in this family is called a random lower set.
Example 2.2. Let C be a convex closed cone in R^d which does not coincide with the whole space. If X = ξ + C for ξ ∈ L^p(R^d), then X belongs to the space L^p(co F(C)). For each ζ ∈ L^q(G), we have h(X, ζ) = ⟨ξ, ζ⟩.

2.2 Support function at random directions
For u ∈ S^{d−1} and t ∈ R, let

H_u(t) = {x ∈ R^d : ⟨x, u⟩ ≤ t}

denote a half-space in R^d, and let H_u(∞) = R^d. Particular difficulties when dealing with unbounded random closed sets are caused by the fact that the support function at any deterministic direction may be infinite with probability one.
Example 2.3. Let X = H η (0) be the random half-space with the normal vector η having a non-atomic distribution. Then EX is the whole space. The support function of X is finite only on the random ray {cη : c ≥ 0}.
It is shown in [17, Cor. 3.5] that each random closed convex set satisfies

X = ⋂ {H_η(X) : η ∈ L^0(S^{d−1})}, (2.6)

where H_η(X) = H_η(h(X, η)) is the smallest half-space with outer normal η that contains X. If X is a.s. C-closed, (2.6) holds with η running through the family of selections of S^{d−1} ∩ G.
Lemma 2.4. Let X ∈ L^p(co F(C)). Then ξ ∈ L^p(R^d) is a selection of X if and only if E⟨ξ, ζ⟩ ≤ Eh(X, ζ) for all ζ ∈ L^q(G). Here Eh(X, ζ) is well defined with values in (−∞, ∞], since h(X, ζ) = (h(X, ζ) − ⟨ξ, ζ⟩) + ⟨ξ, ζ⟩ for any ξ ∈ L^p(X); the second summand on the right-hand side is integrable, while the first one is nonnegative.
Corollary 2.5. The distribution of X ∈ L p (co F(C)) is uniquely determined by Eh(X, ζ) for ζ ∈ L q (G).
Proof. Apply Lemma 2.4 to Y = {ξ}, so that the values of Eh(X, ζ) identify all p-integrable selections of X, and note that X equals the closure of the family of its p-integrable selections, see [20,Prop. 2.1.4].
A random closed set X is called Hausdorff approximable if it appears as the almost sure limit in the Hausdorff metric of random closed sets with at most a finite number of values. It is known [20,Th. 1.3.18] that all random compact sets are Hausdorff approximable, as well as those that appear as the sum of a random compact set and a random closed set with at most a finite number of possible values. The random closed set X from Example 2.3 is not Hausdorff approximable.
The distribution of a Hausdorff approximable p-integrable random closed convex set X is uniquely determined by the selection expectations E(γX) for all γ ∈ L^q(R_+); in fact, it suffices to let γ run through all measurable indicators, see [11] and [20, Prop. 2.1.33]. If X is Hausdorff approximable, then its selections ξ are identified by the condition E(ξ1_A) ∈ E(X1_A) for all events A. By passing to support functions, we arrive at a variant of Lemma 2.4 with ζ = u1_A for all u ∈ S^{d−1} and A ∈ F.

2.3 Convergence of random closed convex sets
Convergence of random closed sets is typically considered in probability, almost surely, or in distribution. In the following we need to define L p -type convergence concepts suitable to deal with unbounded random convex sets.
The space L^p(R^d) is equipped with the σ(L^p, L^q)-topology, that is, ξ_n → ξ means that E⟨ξ_n, ζ⟩ → E⟨ξ, ζ⟩ for all ζ ∈ L^q(R^d).
Lemma 2.6. If X is a p-integrable random C-closed convex set, then L p (X) is a non-empty convex σ(L p , L q )-closed and L p (C)-closed subset of L p (R d ).
Proof. If ξ_n ∈ L^p(X) and ξ_n → ξ ∈ L^p(R^d) in σ(L^p, L^q), then E⟨ξ, ζ⟩ = lim E⟨ξ_n, ζ⟩ ≤ Eh(X, ζ) for all ζ ∈ L^q(G). Thus, ξ is a selection of X by Lemma 2.4. The statement concerning C-closedness is obvious.
A sequence X n ∈ L p (co F(C)), n ≥ 1, is said to converge to X ∈ L p (co F(C)) scalarly in σ(L p , L q ) (shortly, scalarly) if Eh(X n , ζ) → Eh(X, ζ) for all ζ ∈ L q (G), where the convergence is understood in the extended line (−∞, ∞]. Since Eh(X n , ζ) equals the support function of L p (X n ) in direction ζ, this convergence is the scalar convergence L p (X n ) → L p (X) as convex sets in L p (R d ), see [26].

3.1 Definitions
Fix p ∈ [1, ∞] and a convex closed cone C distinct from the whole space.
Definition 3.1. A sublinear set-valued expectation is a function E : L^p(co F(C)) → co F(C) such that
i) E(X) ⊂ E(Y) if X ⊂ Y a.s. (monotonicity);
ii) E(F) ⊃ F for each deterministic F ∈ co F(C);
iii) E(X + a) = E(X) + a for all deterministic a ∈ R^d and E(cX) = cE(X) for all c > 0 (homogeneity); (3.1)
iv) E(X + Y) ⊂ E(X) + E(Y) (subadditivity) (3.2)
for all p-integrable random closed convex sets X and Y.
A superlinear set-valued expectation U satisfies the same properties with the exception of ii) replaced by U(F) ⊂ F and (3.2) replaced by the superadditivity property

U(X + Y) ⊃ U(X) + U(Y). (3.3)

The nonlinear expectations E and U are said to be law invariant if they retain their values on identically distributed random closed convex sets.
Proposition 3.2. Nonlinear expectations on L p (co F(C)) take values from co F(C).
While the argument X of nonlinear expectations is a.s. non-empty, U(X) may be empty, and then the right-hand side of (3.3) is also empty. However, if E(X) is empty for some X, then E(X + Y) ⊂ E(X) + E(Y) is empty for all p-integrable random sets Y. In view of this, it is assumed that sublinear expectations take non-empty values. We always exclude the trivial cases when E(X) = R^d for all X or U(X) = ∅ for all X. The homogeneity property immediately implies that E(X) and U(X) are cones if X is almost surely a cone, that is, cX = X a.s. for all c > 0. Therefore, it is only possible to conclude that E(C) is a closed convex cone, which may be strictly larger than C. By Proposition 3.2, U(C) is either C or is empty.
The sublinear (respectively, superlinear) expectation is said to be normalised if E(C) = C (respectively, U(C) = C). We always have E(R^d) = R^d by property ii), and also U(R^d) = R^d if it is non-empty, since U(R^d) + a = U(R^d) for all a ∈ R^d. The properties of the nonlinear expectations do not imply that they preserve deterministic convex closed sets. The family {F ∈ co F(C) : E(F) = F} of invariant sets is closed under translations, dilations by positive reals, and Minkowski sums, since if E(F) = F and E(F′) = F′, then F + F′ ⊂ E(F + F′) ⊂ E(F) + E(F′) = F + F′. A nonlinear expectation is said to be constant preserving if all non-empty deterministic sets from co F(C) are invariant. The superlinear and sublinear expectations form a dual pair if U(X) ⊂ E(X) for each p-integrable random closed convex set X. In contrast to the univariate setting, the exact duality relation (1.4) is useless; if C = {0}, then −E(−X) is also a sublinear expectation, where −X = {−x : x ∈ X} is the reflection of X with respect to the origin.
For a sequence {F n , n ≥ 1} of closed sets, its lower limit, lim inf F n , is the set of limits for all convergent sequences x n ∈ F n , n ≥ 1, and its upper limit, lim sup F n , is the set of limits for all convergent subsequences x n k ∈ F n k , k ≥ 1.
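These limits can be illustrated on a simple alternating sequence of intervals; the discretised check below (an illustrative script, not the paper's machinery) recovers lim inf F_n = {1} and lim sup F_n = [0, 2] for F_n alternating between [0, 1] and [1, 2]:

```python
import numpy as np

def interval(n):
    # F_n = [0, 1] for even n, F_n = [1, 2] for odd n
    return (0.0, 1.0) if n % 2 == 0 else (1.0, 2.0)

def dist(x, iv):
    """Distance from a point to a closed interval."""
    a, b = iv
    return max(a - x, x - b, 0.0)

grid = np.linspace(-0.5, 2.5, 301)
N = 1000
# x is in lim inf F_n iff dist(x, F_n) -> 0 along the whole sequence, and in
# lim sup F_n iff along a subsequence; since the sequence is 2-periodic, its
# tail behaviour is captured by one even and one odd index.
liminf = [x for x in grid if max(dist(x, interval(n)) for n in range(N, N + 2)) < 1e-9]
limsup = [x for x in grid if min(dist(x, interval(n)) for n in range(N, N + 2)) < 1e-9]
print(min(liminf), max(liminf))   # lim inf F_n = {1}
print(min(limsup), max(limsup))   # lim sup F_n = [0, 2]
```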
The sublinear expectation E is called lower semicontinuous if E(X) ⊂ lim sup E(X_n), and U is upper semicontinuous if U(X) ⊃ lim sup U(X_n), for a sequence of random closed convex sets {X_n, n ≥ 1} converging to X in the chosen topology; e.g., E is scalarly lower semicontinuous if this holds whenever X_n scalarly converges to X. Note that the lower semicontinuity definition is weaker than its standard variant for set-valued functions, which would require that E(X) is a subset of lim inf E(X_n), see [14, Prop. 2.35].
Remark 3.3. It is possible to consider nonlinear expectations defined only on some special random sets, e.g., singletons or half-spaces. It is only required that the family of such sets is closed under translations, dilations by positive reals, and Minkowski sums. The family co F is often ordered by the reverse inclusion; then the terminology is correspondingly adjusted, e.g., the superlinear expectation becomes sublinear. However, we systematically consider the conventional inclusion order.
Remark 3.4. Motivated by financial applications, it is possible to replace the homogeneity and sub-(super-)additivity properties with convexity or concavity, e.g., to require that U(tX + (1 − t)Y) ⊃ tU(X) + (1 − t)U(Y) for all t ∈ [0, 1]. However, then U can be turned into a superlinear expectation Ũ for random sets in the space R^{d+1} by letting Ũ({t} × X) = {t} × tU(t^{−1}X) for t > 0. The arguments of Ũ are random closed convex sets Y = {t} × X; they form a family closed under dilations, Minkowski sums and translations by singletons from R_+ × R^d. Note that selections of {t} × X are given by (t, ξ) with ξ being a selection of X. In view of this, all results in the homogeneous case apply to the convex case if the dimension is increased by one.

3.2 Examples
The simplest example is provided by the selection expectation, which is linear and law invariant on all integrable random convex sets.

Example 3.5 (Fixed points and support). Let

F_X = {x ∈ R^d : x ∈ X a.s.}

denote the set of fixed points of a random closed set X. If X is almost surely convex, then F_X is also convex, and if X is compact with a positive probability, then F_X is compact. It is easy to see that F_{X+Y} ⊃ F_X + F_Y, whence U(X) = F_X is a law invariant superlinear expectation. With a similar idea, it is possible to define the sublinear expectation E(X) = supp X as the support of X, which is the set of points x such that X hits any open neighbourhood of x with a positive probability. By the monotonicity property, F_X ⊂ supp X.
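For a random interval X = [β, β + 1] on the line (a toy case, with illustrative function names), the set of fixed points is the interval between the essential supremum of β and the essential infimum of β + 1, and it may be empty:

```python
# Fixed points of the random interval X = [beta, beta + 1] on a finite sample
# space: F_X = {x : x in X a.s.} runs from ess sup(beta) to ess inf(beta) + 1,
# and it is empty when beta spreads by more than the interval length.
def fixed_points(betas):
    lo, hi = max(betas), min(betas) + 1.0
    return (lo, hi) if lo <= hi else None

print(fixed_points([0.0, 0.3, 0.6]))   # (0.6, 1.0)
print(fixed_points([0.0, 1.5]))        # None: no point lies in X almost surely
```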

3.3 Expectations of singletons and half-spaces
The additivity property on deterministic singletons immediately yields the following useful fact.
0}, and the same holds for the superlinear expectation.
Assuming its lower semicontinuity, it becomes the usual (linear) expectation. The following result concerns the superlinear expectation of singletons. For a general cone C, a similar result holds with singletons replaced by sets ξ + C.
whence the inclusion turns into the equality.
In view of Proposition 3.9, and imposing the upper semicontinuity property on the superlinear expectation, U({ξ}) equals {Eξ} or is empty for each p-integrable ξ.

Proposition 3.10. If X + X′ = R^d a.s. for X′ being an independent copy of X, then E(X) = R^d for each law invariant sublinear expectation E.
Proof. By subadditivity and law invariance, R^d = E(X + X′) ⊂ E(X) + E(X′) = 2E(X), whence E(X) = R^d.

Proposition 3.10 applies if X = H_η(0) is a half-space with a non-atomic η, so that each law invariant sublinear expectation on such random sets takes trivial values.

3.4 Nonlinear expectations of random convex functions
For a lower semicontinuous convex function f on R^d, consider the closed convex set T_f whose support function at (t, x) ∈ (0, ∞) × R^d equals tf(t^{−1}x). The obtained support function is called the perspective transform of f, see [13]. Note that f can be recovered by letting t = 1 in the support function of T_f. If ξ(x), x ∈ R^d, is a random nonnegative lower semicontinuous convex function, then its sublinear expectation can be defined as E(ξ)(x) = h(E(T_ξ), (1, x)), and the superlinear one is defined similarly. With this definition, all constructions from this paper apply to random functions.
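One concrete realisation of T_f (an assumption made here for illustration; [13] may proceed differently) uses the Fenchel conjugate f*: taking T_f = {(s, y) : s + f*(y) ≤ 0}, a short computation confirms that its support function is the perspective transform:

```latex
% Assume T_f = \{(s, y) : s + f^*(y) \le 0\}, with f^* the Fenchel conjugate
% of the lower semicontinuous convex function f. For t > 0,
\begin{align*}
h(T_f, (t, x)) &= \sup\bigl\{ ts + \langle y, x \rangle : s + f^*(y) \le 0 \bigr\} \\
  &= \sup_y \bigl\{ \langle y, x \rangle - t f^*(y) \bigr\}
   = t \sup_y \bigl\{ \langle y, t^{-1} x \rangle - f^*(y) \bigr\}
   = t f(t^{-1} x),
\end{align*}
% using f^{**} = f; letting t = 1 recovers f, as stated above.
```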

4 Extensions of nonlinear expectations

4.1 Minimal extension
The minimal extension Ě of a sublinear set-valued expectation E on random sets from L^p(co F(C)) is defined by

Ě(X) = co ⋃ {E(ξ + C) : ξ ∈ L^p(X)}, (4.1)

where co denotes the closed convex hull operation. It extends a sublinear expectation defined on sets ξ + C to all p-integrable random closed sets X such that X = X + C a.s. In terms of support functions, the minimal extension is given by

h(Ě(X), u) = sup{h(E(ξ + C), u) : ξ ∈ L^p(X)}. (4.2)

Proposition 4.1. If E is a sublinear expectation defined on random sets ξ + C for ξ ∈ L^p(R^d), then its minimal extension (4.1) is a sublinear expectation.
Proof. The additivity of the minimal extension on deterministic singletons follows from this property of E. For a deterministic F ∈ co F(C), the right-hand side of (4.1) contains E(x + C) ⊃ x + C for each x ∈ F, and hence contains F. The homogeneity and monotonicity properties are obvious. The subadditivity follows from the fact that E((ξ_1 + ξ_2) + C) ⊂ E(ξ_1 + C) + E(ξ_2 + C) for all ξ_1 ∈ L^p(X) and ξ_2 ∈ L^p(Y).

4.2 Maximal extension
Extending a superlinear expectation U from its values on half-spaces yields its maximal extension

Û(X) = ⋂ {U(H_η(X)) : η ∈ L^0(S^{d−1} ∩ G)}. (4.3)

If U is superlinear on half-spaces with the same normal, that is,

U(H_η(β_1 + β_2)) ⊃ U(H_η(β_1)) + U(H_η(β_2)), (4.4)

and is scalarly upper semicontinuous on half-spaces with the same normal, that is, U(H_η(β)) ⊃ lim sup U(H_η(β_n)) if β_n → β in σ(L^p, L^q), then its maximal extension Û(X) given by (4.3) is superlinear and upper semicontinuous with respect to the scalar convergence of random closed convex sets. If U is law invariant on half-spaces, then Û is law invariant.
Proof. The additivity on deterministic singletons follows from the fact that H_η(X + a) = H_η(X) + a for each deterministic a ∈ R^d. The homogeneity and monotonicity properties of the extension are obvious. For two p-integrable random closed convex sets X and Y, (4.4) yields that

U(H_η(X)) + U(H_η(Y)) ⊂ U(H_η(X) + H_η(Y)) = U(H_η(X + Y))

for each η, whence the superadditivity of the maximal extension follows by taking intersections. Assume that X_n scalarly converges to X. Let x_{n_k} ∈ Û(X_{n_k}) with x_{n_k} → x for some x, where Û denotes the extension (4.3). Then x_{n_k} ∈ U(H_η(X_{n_k})) for all η ∈ L^0(S^{d−1} ∩ G). Since h(X_{n_k}, η) → h(X, η) in σ(L^p, L^q), the upper semicontinuity on half-spaces yields that U(H_η(X)) ⊃ lim sup U(H_η(X_{n_k})), whence x ∈ U(H_η(X)) for all η. Therefore, x ∈ Û(X), confirming the upper semicontinuity of the maximal extension. The law invariance property is straightforward.
It is possible to let η in (4.3) be deterministic and define

Ũ(X) = ⋂ {U(H_u(X)) : u ∈ S^{d−1} ∩ G}.

With this reduced maximal extension, the superlinear expectation is extended from its values on half-spaces with deterministic normal vectors. Note that the reduced maximal extension may be equal to the whole space, e.g., for X = H_η(0) being a half-space with a nondeterministic normal. It is obvious that U(X) ⊂ Û(X) ⊂ Ũ(X), where Û is the maximal extension (4.3), and Ũ is constant preserving if U is constant preserving on deterministic half-spaces. The reduced maximal extension is particularly useful for Hausdorff approximable random closed sets.

4.3 Exact nonlinear expectations
It is possible to apply the maximal extension to the sublinear expectation and the minimal extension to the superlinear one, resulting in Ê and Ǔ. The monotonicity property yields that, for each p-integrable random closed set X,

Ě(X) ⊂ E(X) ⊂ Ê(X), Ǔ(X) ⊂ U(X) ⊂ Û(X), (4.6)

where Ě (respectively, Û) denotes the minimal (respectively, maximal) extension. It is easy to see that each extension is an idempotent operation, e.g., the minimal extension of Ě coincides with Ě. A sublinear expectation is said to be minimal (respectively, maximal) if it coincides with its minimal (respectively, maximal) extension. The superlinear expectation is said to be reduced maximal if it coincides with its reduced maximal extension. Since random convex closed sets can be represented either as families of their selections or as intersections of half-spaces, the minimal representation may be considered a primal representation of an exact nonlinear expectation, while the maximal representation becomes the dual one.
If (4.6) holds with equalities, then E is said to be exact. The same applies to superlinear expectations. Note that the selection expectation is exact on all integrable random closed convex sets; its minimality corresponds to (2.1) and maximality becomes (2.3).

5.1 Duality for minimal sublinear expectations
The minimal sublinear expectation is determined by its restriction on random sets ξ + C; the following result characterises such a restriction.
Proof. Sufficiency. For linearly independent u and v, each ζ ∈ Z_{u+v} satisfies ζ = ζ_1 + ζ_2 with Eζ_1 = t_1 u and Eζ_2 = t_2 v, which yields the subadditivity of h(E(ξ + C), ·). For u ∈ G, the set {ζ ∈ Z_u : Eζ = u} is closed in σ(L^q, L^p). Indeed, if ζ_n → ζ, then E⟨ζ_n, ξ⟩ → E⟨ζ, ξ⟩; letting ξ be one of the basis vectors confirms that Eζ = u. Since h(E(ξ + C), u) is the support function of the closed set {ζ ∈ Z_u : Eζ = u} in direction ξ, it is lower semicontinuous as a function of ξ ∈ L^p(R^d), so that (3.4) holds.
Necessity. By Proposition 3.2, the support function is infinite for u ∉ G. For u ∈ G, let A_u be the set of ξ ∈ L^p(R^d) such that h(E(ξ + C), u) ≤ 0. The map ξ ↦ h(E(ξ + C), u) is a sublinear map from L^p(R^d) to (−∞, ∞]. By sublinearity, A_u is a convex cone in L^p(R^d), and A_{cu} = A_u for all c > 0. Furthermore, A_u is closed with respect to the scalar convergence ξ_n + C → ξ + C by the assumed lower semicontinuity of E. Hence, it is closed with respect to the convergence ξ_n → ξ in σ(L^p, L^q).
Note that 0 ∈ A_u, and let

Z_u = {ζ ∈ L^q(R^d) : E⟨ζ, ξ⟩ ≤ 0 for all ξ ∈ A_u}

be the polar cone to A_u. For u = 0, we have A_0 = L^p(R^d) and Z_0 = {0}. Consider u ≠ 0. Letting ξ = a1_H for an event H and deterministic a with ⟨a, u⟩ ≤ 0, we obtain a member of A_u, whence each ζ ∈ Z_u satisfies E⟨ζ, a1_H⟩ ≤ 0 whenever ⟨a, u⟩ ≤ 0. Thus, ζ ∈ G a.s., and letting H = Ω yields that Eζ = tu for some t ≥ 0 and all ζ ∈ Z_u. Since A_u is convex and σ(L^p, L^q)-closed, the bipolar theorem yields that h(E(ξ + C), u) equals the support function of {ζ ∈ Z_u : Eζ = u} at ξ, so that the minimal extension is given by

h(E(X), u) = sup{Eh(X, ζ) : ζ ∈ Z_u, Eζ = u}. (5.3)
Since the support function of E(X) given by (5.3) is the supremum of scalarly continuous functions of X, the minimal sublinear expectation is scalarly lower semicontinuous.
Corollary 5.3. If u ∈ Z u for all u ∈ R d , then EX ⊂ E(X) for all p-integrable X and any scalarly lower semicontinuous normalised minimal sublinear expectation E.
Remark 5.4. The sublinear expectation given by (5.3) is law invariant if and only if the sets Z u are law-complete, that is, with each ζ ∈ Z u , the set Z u contains all random vectors that share distribution with ζ.
Example 5.5. Let Z be a random matrix with EZ being the identity matrix, and let E(X) = E(ZX) be the selection expectation of ZX = {Zx : x ∈ X}. It is possible to let Z belong to a family of such matrices; then E(X) is the closed convex hull of the union of E(ZX) for all such Z. In this example, h(E(X), u) is not solely determined by h(X, u). This sublinear expectation is not necessarily constant preserving.
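A two-point toy version of this construction (with illustrative names; Z1, Z2 are chosen ad hoc so that EZ = I) shows that h(E(X), u) can exceed h(X, u) for a deterministic X, so E is not constant preserving:

```python
import numpy as np

# Z takes the values Z1 (a rotation by 90 degrees) and Z2 = 2I - Z1 with equal
# probabilities, so that EZ = I. Since h(ZX, u) = h(X, Z^T u), the support
# function of E(X) = E(ZX) is 0.5*h(X, Z1^T u) + 0.5*h(X, Z2^T u).
Z1 = np.array([[0.0, -1.0], [1.0, 0.0]])
Z2 = 2.0*np.eye(2) - Z1
assert np.allclose(0.5*Z1 + 0.5*Z2, np.eye(2))

def support(vertices, u):
    """Support function of a polytope given by its vertices."""
    return max(float(np.dot(u, v)) for v in vertices)

square = [np.array(v, dtype=float) for v in [(-1, -1), (1, -1), (1, 1), (-1, 1)]]
u = np.array([1.0, 0.0])
hE = 0.5*support(square, Z1.T @ u) + 0.5*support(square, Z2.T @ u)
print(hE, support(square, u))   # 2.0 vs h(X, u) = 1.0: the square is not invariant
```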
Example 5.6. If the normal η = u is deterministic and

Z_u = {γu : γ ∈ M_u} for a cone M_u ⊂ L^q(R_+), (5.4)

then E(H_u(β)) = H_u(e_u(β)) for the numerical sublinear expectation e_u generated by M_u as in (1.5). Otherwise, E(H_u(β)) = R^d.

5.2 Exact sublinear expectation
Consider now the situation when, for each u, the value of h(E(X), u) is solely determined by the distribution of h(X, u). This is the case if the supremum in (5.3) involves only ζ such that ζ = γu for some γ ∈ L q (R + ). The following result shows that this condition characterises constant preserving minimal sublinear expectations, which then necessarily become exact ones.
Theorem 5.7. A function E on p-integrable random closed convex sets from L^p(co F(C)) is a scalarly lower semicontinuous constant preserving minimal sublinear expectation if and only if h(E(X), u) = ∞ for u ∉ G, and

h(E(X), u) = sup{E(γh(X, u)) : γ ∈ M_u, Eγ = 1}, u ∈ G, (5.5)

for a family of convex σ(L^q, L^p)-closed cones M_u ⊂ L^q(R_+), u ∈ G, such that M_{cu} = M_u for all c > 0 and M_{u+v} ⊂ M_u ∩ M_v for linearly independent u and v.

Proof. Sufficiency. If M_u, u ∈ G, satisfy the imposed conditions, then Z_u = {γu : γ ∈ M_u}, u ∈ G, satisfy the conditions of Lemma 5.1. Indeed, Z_{cu} = Z_u for all c > 0, and E(γh(F, u)) = h(F, u) for each deterministic F and each γ with Eγ = 1, whence E is constant preserving.
Necessity. Since E is minimal, the support function of E(X) is given by (5.3). The constant preserving property yields that E(H_u(t)) = H_u(t) for all half-spaces H_u(t) with u ∈ G. By the argument from Example 5.6, the minimal sublinear expectation of a half-space H_u(t) is distinct from the whole space only if (5.4) holds. The properties of Z_u imply the imposed properties of M_u = {γ : γu ∈ Z_u}. Indeed, assume that γ ∈ M_{u+v}, so that γ(u + v) ∈ Z_{u+v}. Hence, γ(u + v) ∈ cl(Z_u + Z_v), meaning that γ(u + v) is the norm limit of γ_{1n}u + γ_{2n}v with γ_{1n}u ∈ Z_u and γ_{2n}v ∈ Z_v, n ≥ 1. The linear independence of u and v yields that γ_{1n} → γ and γ_{2n} → γ, whence γ ∈ M_u ∩ M_v.
It is possible to rephrase (5.5) as

h(E(X), u) = e_u(h(X, u)), u ∈ G, (5.6)

where

e_u(β) = sup{E(γβ) : γ ∈ M_u, Eγ = 1} (5.7)

is the numerical sublinear expectation defined by an analogue of (1.5). Since the negative part of h(X, u) is p-integrable, it is possible to consistently let e_u(h(X, u)) = ∞ in (5.7) if h(X, u) is not p-integrable.
Corollary 5.8. Each scalarly lower semicontinuous constant preserving minimal sublinear expectation is exact.
Proof. Since (5.5) yields that E(H_η(X)) = R^d if η is random, the maximal extension of E by an analogue of (4.3) reduces to deterministic η, and so the maximal extension of E coincides with the reduced maximal one. For u ∈ S^{d−1} ∩ G and β ∈ L^p(R), we have E(H_u(β)) = H_u(e_u(β)), cf. Example 5.6. Thus, the reduced maximal extension of E is given by

⋂ {H_u(e_u(h(X, u))) : u ∈ S^{d−1} ∩ G},

which coincides with E(X) by (5.6).
Corollary 5.9. If E is a scalarly lower semicontinuous constant preserving minimal normalised sublinear expectation, then E(X + F ) = E(X) + F for each deterministic F ∈ co F(C).
Corollary 5.10. Assume that E is a scalarly lower semicontinuous constant preserving minimal law invariant sublinear expectation. Then E(E(X|H)) ⊂ E(X) for all X ∈ L^p(co F(C)) and any sub-σ-algebra H of F. In particular, EX ⊂ E(X).
Proof. The law invariance of E implies that e u is law invariant. The sublinear expectation e u is dilatation monotonic, meaning that e u (E(β|H)) ≤ e u (β) for all β ∈ L p (R), see [8,Cor. 4.59] for this fact derived for risk measures. The statement follows from (5.6).
For a p-integrable random closed convex set X, its Firey p-expectation E p X is defined by h(E p X, u) = (E(h(X, u) p )) 1/p . The next result follows from Hölder's inequality applied to E(γh(X, u)) in (5.5).
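As a sanity check, for a random ball X = B R centred at the origin one has h(X, u) = R‖u‖, so the Firey p-expectation is the centred ball of radius (ER p ) 1/p . A minimal numerical sketch (the helper below is ours, not from the paper):

```python
import numpy as np

def firey_support(radii, probs, u, p):
    """Support function h(E_p X, u) of the Firey p-expectation of a
    random ball X = B_R centred at the origin: h(X, u) = R * |u|,
    so h(E_p X, u) = (E[R^p])**(1/p) * |u|."""
    radii = np.asarray(radii, dtype=float)
    probs = np.asarray(probs, dtype=float)
    return (probs @ radii**p) ** (1.0 / p) * np.linalg.norm(u)

# A ball of radius 1 or 3 with equal probabilities, direction u = (1, 0):
h2 = firey_support([1.0, 3.0], [0.5, 0.5], [1.0, 0.0], p=2)  # sqrt(5)
h1 = firey_support([1.0, 3.0], [0.5, 0.5], [1.0, 0.0], p=1)  # plain mean, 2
```

By the power-mean inequality, this support function grows with p, so the Firey expectations of a random ball are nested balls.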
Corollary 5.11. If E admits representation (5.5), then h(E(X), u) ≤ sup{(Eγ q ) 1/q : γ ∈ M u } h(E p X, u) for all u ∈ G.

The following result identifies a particularly important case, when the families M u = M do not depend on u. This property essentially means that the sublinear expectation preserves centred balls. Denote by B r the ball of radius r centred at the origin.

Theorem 5.12. Assume that E is a scalarly lower semicontinuous constant preserving minimal sublinear expectation such that M u = M for all u ∈ G. Then

E(X) = cl conv ( ∪ {E(γX) : γ ∈ M, Eγ = 1} ) (5.8)

and h(E(X), u) = e(h(X, u)) for all u ∈ G, where e is the numerical sublinear expectation with the representing set M.
By (5.7), the support functions of both sides of (5.8) are identical.
If X = {ξ} is a singleton, there is no need to take the convex hull on the right-hand side of (5.8).
Example 5.13. For an integrable X and n ≥ 1, consider the sublinear expectation E ∪ n (X) = E conv(X 1 ∪ · · · ∪ X n ), where X 1 , . . . , X n are i.i.d. copies of X. It is easy to see that E ∪ n (X) is a minimal constant preserving sublinear expectation; it is given by (5.7) with the corresponding numerical sublinear expectation e(β) being the expected maximum of n i.i.d. copies of β ∈ L 1 (R). By Corollary 5.8, this sublinear expectation is exact.
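The numerical expectation e(β) in this example is easy to evaluate for discrete β: the maximum of n i.i.d. copies has CDF F(x) n , so e(β) = Σ x i (F(x i ) n − F(x i−1 ) n ). A short sketch (our helper, for illustration):

```python
import numpy as np

def expected_max(values, probs, n):
    """E[max(beta_1, ..., beta_n)] for n i.i.d. copies of a discrete
    random variable: the maximum has CDF F(x)^n."""
    order = np.argsort(values)
    x = np.asarray(values, dtype=float)[order]
    F = np.cumsum(np.asarray(probs, dtype=float)[order])
    Fn = F ** n
    jumps = np.diff(np.concatenate(([0.0], Fn)))  # P(max = x_i)
    return float(x @ jumps)

# beta equal to 0 or 1 with probability 1/2 each, n = 2:
# P(max = 1) = 3/4, so e(beta) = 0.75, exceeding E beta = 0.5.
e2 = expected_max([0, 1], [0.5, 0.5], n=2)
```

For n = 1 the routine reduces to the ordinary expectation, matching the fact that E ∪ 1 is the selection expectation itself.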
Example 5.14. For α ∈ (0, 1), let P α be the family of random variables γ with values in [0, α −1 ] and such that Eγ = 1. Furthermore, let M be the cone generated by P α , that is, M = {tγ : γ ∈ P α , t ≥ 0}. In finance, the set P α generates the average Value-at-Risk, which is the risk measure obtained as an average of quantiles, see [8]. Similarly, the numerical sublinear expectation e and superlinear expectation u generated by the set M are represented as average quantiles. Namely, e(β) is the average of the quantiles of β at levels t ∈ (1 − α, 1), and u(β) is the average of the quantiles at levels t ∈ (0, α). The corresponding set-valued sublinear expectation E satisfies EX ⊂ E(X) ⊂ α −1 EX.
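Both average-quantile expectations admit a direct computation for an equally weighted sample, since the empirical quantile function is a step function; the sketch below (our naming) also uses the duality e(β) = −u(−β) of the exact dual pair:

```python
import numpy as np

def u_alpha(sample, alpha):
    """Superlinear expectation: average of the quantiles of the sample
    at levels t in (0, alpha), i.e. (1/alpha) * int_0^alpha q(t) dt."""
    x = np.sort(np.asarray(sample, dtype=float))
    n = len(x)
    total, prev = 0.0, 0.0
    for i, xi in enumerate(x):
        hi = (i + 1) / n              # right end of this quantile step
        if prev >= alpha:
            break
        total += xi * (min(hi, alpha) - prev)
        prev = hi
    return total / alpha

def e_alpha(sample, alpha):
    """Sublinear expectation: average of the quantiles at levels
    t in (1 - alpha, 1), via the duality e(beta) = -u(-beta)."""
    return -u_alpha(-np.asarray(sample, dtype=float), alpha)

# beta taking 0 and 1 with equal probabilities, alpha = 0.7:
u07 = u_alpha([0.0, 1.0], 0.7)   # 2/7
e07 = e_alpha([0.0, 1.0], 0.7)   # 5/7
```

The sandwich u(β) ≤ Eβ ≤ e(β) mirrors the set-valued inclusion EX ⊂ E(X) ⊂ α −1 EX above.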

Duality for maximal superlinear expectations
Consider a superlinear expectation defined on L p (co F(C)). If C = {0}, we deal with all p-integrable random closed convex sets. Recall that G = C o is the polar cone to C.

Theorem 6.1. A function U on L p (co F(C)) is a scalarly upper semicontinuous normalised maximal superlinear expectation if and only if

U (X) = ∩ {x : ⟨x, E(γη)⟩ ≤ Eh(X, γη)}, (6.1)

with the intersection taken over all η ∈ L 0 (S d−1 ∩ G) and γ ∈ M η , for a collection of convex σ(L q , L p )-closed cones M η ⊂ L q (R + ) parametrised by η ∈ L 0 (S d−1 ∩ G) and such that M u is strictly larger than {0} for each deterministic η = u ∈ S d−1 ∩ G.
Proof. Necessity. Fix η ∈ L 0 (S d−1 ∩ G), and let A η be the set of β ∈ L p (R) such that U (H η (β)) contains the origin. Since U (H η (0)) ⊃ U (C) = C, we have 0 ∈ A η . Since U (H u (t)) ⊂ H u (t), the family A u does not contain β = t for t < 0 and u ∈ S d−1 ∩ G.
by the assumed upper semicontinuity of U . Thus, A η is a convex σ(L p , L q )-closed cone in L p (R). Consider its positive dual cone M η = {γ ∈ L q (R) : E(γβ) ≥ 0 for all β ∈ A η }. Since U (C) = C, we have 0 ∈ U (X) whenever C ⊂ X a.s. In view of this, if β is a.s. nonnegative, then H η (β) a.s. contains zero and so β ∈ A η . Thus, each γ from M η is a.s. nonnegative. The bipolar theorem yields that A η = {β : E(γβ) ≥ 0 for all γ ∈ M η }. (6.2) Since (−t) ∉ A u for t > 0, (6.2) yields that the cone M u is strictly larger than {0}. Since U is assumed to be maximal, (4.3) implies representation (6.1).

Sufficiency. It is easy to check that U given by (6.1) is additive on deterministic singletons, homogeneous and monotonic. If F ∈ co F(C) is deterministic, then letting η = u in (6.1) be deterministic and using the nontriviality of M u yield that U (F ) ⊂ F . Furthermore, U (C) = C, since U (C) contains the origin and so is not empty.
The superadditivity of U follows from the fact that h(X + Y, γη) = h(X, γη) + h(Y, γη). It is easy to see that U coincides with its maximal extension. Note that (6.1) is equivalently written as U (X) = ∩ {x : Eh(X − x, γη) ≥ 0}, the intersection being over the same η and γ. If X n scalarly converges to X and x n k → x with x n k ∈ U (X n k ), k ≥ 1, then Eh(X n k − x n k , γη) converges to Eh(X − x, γη) for all γ ∈ L q (R + ) and η ∈ L 0 (S d−1 ∩ G). Thus, Eh(X − x, γη) ≥ 0, whence x ∈ U (X), and the upper semicontinuity of U follows.
In contrast to the sublinear case (see Theorem 5.2), the cones M η from Theorem 6.1 need not satisfy additional conditions like those imposed in Lemma 5.1.
Corollary 6.2. If 1 ∈ M η for all η, then U (X) ⊂ EX for all p-integrable X and any scalarly upper semicontinuous maximal normalised superlinear expectation U .
Proof. Restrict the intersection in (6.1) to deterministic η = u and γ = 1, so that the right-hand side of (6.1) becomes EX.

Example 6.3. Let X = H η (β) be the half-space with normal η ∈ L 0 (S d−1 ) and β ∈ L p (R). If C = {0}, the maximal superlinear expectation of X is given by U (H η (β)) = ∩ γ∈M η {x : ⟨x, E(γη)⟩ ≤ E(γβ)}. Assume that d = 2, and let η = (1, π) with π an almost surely positive random variable. We have U (H η (β)) = {x : u(β − x 1 − πx 2 ) ≥ 0}, where u is the numerical superlinear expectation with the generating set M η . In particular, if β = 0 a.s., then U (H η (0)) = {x : u(−x 1 − πx 2 ) ≥ 0}. Therefore, U (H η (0)) = H w (0) ∩ H w′ (0), where w = (1, e(π)) and w′ = (1, u(π)) for the exact dual pair e and u of nonlinear expectations with the representing set M η .

Reduced maximal extension
The following result can be proved similarly to Theorem 6.1 for the reduced maximal extension from (4.5).
Theorem 6.4. A map U : L p (co F(C)) → co F is a scalarly upper semicontinuous normalised reduced maximal superlinear expectation if and only if

U (X) = ∩ {x : ⟨x, v⟩Eγ ≤ E(γh(X, v))}, (6.3)

with the intersection taken over all v ∈ S d−1 ∩ G and γ ∈ M v , for a collection of nontrivial convex σ(L q , L p )-closed cones M v ⊂ L q (R + ), v ∈ S d−1 ∩ G. It is possible to take the intersection in (6.3) over all v ∈ S d−1 , since h(X, v) = ∞ for v ∉ G. Representation (6.3) can be equivalently written as the intersection of the half-spaces H v (u v (h(X, v))), where

u v (β) = inf{E(γβ) : γ ∈ M v , Eγ = 1} (6.4)

is a superlinear univariate expectation of β ∈ L p (R) for each v ∈ S d−1 ∩ G. The superlinear expectation (6.3) is law invariant if the families M v are law-complete for all v.
Corollary 6.5. Let U : L p (co F(C)) → co F be a scalarly upper semicontinuous law invariant normalised reduced maximal superlinear expectation, and let the probability space be non-atomic. Then U is dilatation monotonic, meaning that U (X) ⊂ U (E(X|H)) for each sub-σ-algebra H ⊂ F and all X ∈ L p (co F(C)). In particular, U (X) ⊂ EX.
Proof. Since M v is law-complete, u v (β) given by (6.4) is a law invariant concave function of β ∈ L p (R). Thus, u v is dilatation monotonic by [8, Cor. 4.59], meaning that u v (E(β|H)) ≥ u v (β). Hence, u v (h(E(X|H), v)) ≥ u v (h(X, v)) for all v, since h(E(X|H), v) = E(h(X, v)|H), and U (X) ⊂ U (E(X|H)) follows from (6.3).

Example 6.6. Assume that M v = M does not depend on v ∈ S d−1 ∩ G. Then (6.3) becomes U (X) = ∩ v∈S d−1 ∩G {x : ⟨x, v⟩ ≤ u(h(X, v))}, where u given by (6.4) is the numerical superlinear expectation with the representing set M. In this case, U (X) is the largest convex set whose support function is dominated by u(h(X, ·)). Note that u(h(X, ·)) may fail to be a support function. Since h(E(γX), v) = E(γh(X, v)) for X ∈ L p (co F(C)), this reduced maximal superlinear expectation admits the equivalent representation U (X) = ∩ {E(γX) : γ ∈ M, Eγ = 1}. (6.6)

Example 6.7. Let X = ξ + C for a ξ ∈ L p (R d ) and a deterministic convex closed cone C that is different from the whole space. Then (6.6) yields that U (ξ + C) = ∩ {E(γξ) + C : γ ∈ M, Eγ = 1}. If C is a Riesz cone, then U (ξ + C) = x + C for some x, since an intersection of translations of C is again a translation of C, see [18, Th. 26.11].
Example 6.8. Let U (X) = E(X 1 ∩ · · · ∩ X n ) for n independent copies of X, noticing that the expectation is empty if the intersection X 1 ∩ · · · ∩ X n is empty with positive probability. This superlinear expectation is neither maximal, nor even a reduced maximal one. For instance, the reduced maximal extension U (X) is the largest convex set whose support function is dominated by that of U (H v (X)), v ∈ S d−1 . However, the support function of E(X 1 ∩ · · · ∩ X n ) is the expectation of the largest sublinear function dominated by min(h(X i , v), i = 1, . . . , n), and so U (X) may be a strict subset of its reduced maximal extension. For X = ξ + R d − , we have U (X) = E min(ξ 1 , . . . , ξ n ) + R d − , where the minimum is applied coordinatewise to independent copies of ξ, while the reduced maximal extension is the largest convex set whose support function is dominated by E min(⟨ξ i , v⟩, i = 1, . . . , n), v ∈ R d + . Obviously, min(⟨ξ i , v⟩, i = 1, . . . , n) ≥ ⟨min(ξ 1 , . . . , ξ n ), v⟩ with a possibly strict inequality.

Minimal extension of a superlinear expectation
In any nontrivial case, the superlinear expectation of a nondeterministic singleton is empty. Indeed, if ξ ∈ L p (R d ), then (6.3) yields that U ({ξ}) ⊂ H v (u v (⟨ξ, v⟩)) ∩ H −v (u −v (−⟨ξ, v⟩)) for all v, which is not empty only if u v (⟨ξ, v⟩) + u −v (−⟨ξ, v⟩) ≥ 0. In the setting of Example 6.6, U ({ξ}) is empty unless u(⟨ξ, v⟩) + u(−⟨ξ, v⟩) ≥ 0 for all v. The latter means that u(⟨ξ, v⟩) = e(⟨ξ, v⟩) for the exact dual pair of real-valued nonlinear expectations. Equivalently, U ({ξ}) = ∅ if E(γξ) ≠ E(γ′ξ) for some γ, γ′ ∈ M with Eγ = Eγ′ = 1. If this is the case for all ξ ∈ L p (X), then the minimal extension of U (X) is the set F X of fixed points of X, see Example 3.5. Thus, it is not feasible to come up with a nontrivial minimal extension of the superlinear expectation if C = {0}.
A possible way to ensure non-emptiness of the minimal extension of U (X) is to apply it to random sets from L p (co F(C)) with a cone C having interior points, since then at least one of h(X, v) and h(X, −v) is almost surely infinite for all v ∈ S d−1 . The minimal extension U̲ of U is given by

U̲(X) = cl ∪ {U (ξ + C) : ξ ∈ L p (X)}. (6.8)

The following result, in particular, implies that the union on the right-hand side of (6.8) is a convex set, cf. (4.1).
Theorem 6.9. The function U̲ given by (6.8) is a superlinear expectation. If U in (6.8) is reduced maximal and satisfies the conditions of Corollary 6.5, then its minimal extension U̲ is law invariant and dilatation monotonic.
Proof. Let x and x′ belong to the union on the right-hand side of (6.8) (without closure). Then x ∈ U (ξ + C) and x′ ∈ U (ξ′ + C) for some ξ, ξ′ ∈ L p (X), and the superlinearity of U yields that tx + (1 − t)x′ ∈ U (tξ + (1 − t)ξ′ + C) for each t ∈ [0, 1]. Since tξ + (1 − t)ξ′ is a selection of X, the convexity of U̲(X) easily follows.
The additivity on deterministic singletons, monotonicity and homogeneity properties are evident from (6.8). If F ∈ co F(C) is deterministic, then ξ + C ⊂ F a.s. for each ξ ∈ L p (F ), whence the monotonicity of U yields that U̲(F ) ⊂ U (F ) ⊂ F . For the superadditivity property, consider x and y from the nonclosed right-hand side of (6.8) for X and Y , respectively. Then x ∈ U (ξ + C) and y ∈ U (η + C) for some ξ ∈ L p (X) and η ∈ L p (Y ). Hence, x + y ∈ U (ξ + η + C), and ξ + η is a selection of X + Y .

Now assume that U is reduced maximal. Let F X be the σ-algebra generated by X. The convexity of X implies that E(ξ|F X ) is a selection of X for any ξ ∈ L p (X). By the dilatation monotonicity property from Corollary 6.5, it is possible to replace ξ ∈ L p (X) in (6.8) with the family of F X -measurable p-integrable selections of X. These families coincide for two identically distributed sets, see [20, Prop. 1.4.5], whence the law invariance. The dilatation monotonicity U̲(X) ⊂ U̲(E(X|H)) follows from Corollary 6.5.
Below we establish the upper semicontinuity of the minimal extension.

Proposition 6.10. The minimal extension U̲ given by (6.8) is scalarly upper semicontinuous.
Proof. It suffices to omit the closure in (6.8) and consider x n ∈ U̲(X n ) such that x n → x and X n → X scalarly in σ(L p , L q ). For each n ≥ 1, there exists a ξ n ∈ L p (X n ) such that x n ∈ U (ξ n + C).
Assume now that ‖ξ n ‖ p p = E‖ξ n ‖ p → ∞, and let ξ̃ n = ξ n /‖ξ n ‖ p . This sequence is bounded in the L p -norm, and so it can be assumed without loss of generality that ξ̃ n → ξ̃ in σ(L p , L q ). Since x n /‖ξ n ‖ p ∈ U ((ξ n + C)/‖ξ n ‖ p ) = U (ξ̃ n + C), the upper semicontinuity of U yields that 0 ∈ U (ξ̃ + C). For each ζ ∈ L q (G), we have ⟨ξ n , ζ⟩ ≤ h(X n , ζ). Dividing by ‖ξ n ‖ p , taking expectations, and letting n → ∞ yield that E⟨ξ̃, ζ⟩ ≤ 0. Thus, ξ̃ ∈ C almost surely. Given that E‖ξ̃‖ = 1, this contradicts the fact that U (ξ̃ + C) contains the origin.
Similar reasoning applies if p = ∞, splitting the cases when sup n ‖ξ n ‖ is essentially bounded and when the essential suprema of ‖ξ n ‖ converge to infinity.
The exact calculation of the minimal extension U̲(X) involves working with all p-integrable selections of X, which is a very rich family even in simple cases, like X = ξ + C. Since

U̲(X) ⊂ U (X), (6.9)

the superlinear expectation U (X) yields a computationally tractable upper bound on its minimal extension U̲(X).
Example 6.11. Assume that X = ξ + F for ξ ∈ L p (R d ) and a deterministic convex closed lower set F , and that U in (6.8) is reduced maximal and satisfies the conditions of Corollary 6.5. Then U̲(X) = cl ∪ {U (ξ + ξ′ + C) : ξ′ ∈ L p (F, F ξ )}, where L p (F, F ξ ) is the family of selections of F which are measurable with respect to the σ-algebra F ξ generated by ξ. Indeed, U (ξ + ξ′ + C) is a subset of U (ξ + E(ξ′|F ξ ) + C) by Corollary 6.5.
Note that the minimal extension U̲ is not necessarily a maximal superlinear expectation. The following result describes its maximal extension.

Theorem 6.12. Assume that U̲ is defined by (6.8), where U is a scalarly upper semicontinuous reduced maximal superlinear expectation with representation (6.6). Then U̲(H v (β)) = U (H v (β)) for all v ∈ S d−1 ∩ G and β ∈ L p (R), and the reduced maximal extension of U̲ coincides with U .
Proof. By (6.3), U (H v (β)) = H v (u(β)). In view of (6.9), it suffices to show that each x ∈ H v (u(β)) also belongs to U̲(H v (β)). Let y be the projection of x onto the subspace orthogonal to v. It suffices to show that x − y ∈ U̲(H v (β) − y). Noticing that H v (β) − y = H v (β), it is possible to assume that x = tv for some t ≤ u(β).
Since U̲ and U coincide on half-spaces, the reduced maximal extension of U̲ coincides with that of U , which is U itself. In view of (6.9), the equality U̲(X) = U (X) holds whenever U (X) ⊂ U̲(X). This surely holds for X = ξ + C and also for X being a half-space with a deterministic normal. In general, U̲(X) may be a strict subset of U (X), as the following example shows, so superlinear expectations are not exact even on rather simple random sets of the type ξ + C.
Example 6.13. Let M v = M be the family from Example 5.14, and let u be the corresponding superlinear expectation with the representing set M. For each β ∈ L 1 (R), u(β) equals the average of the t-quantiles of β over t ∈ (0, α). Consider X = ξ + K, where K is a deterministic Riesz cone in R 2 and ξ equally likely takes the values 0 and a ∈ R 2 . If α ∈ (0, 1/2] and β takes two values with equal probabilities, then u(β) is the smaller value of β. Then U (X) = K ∩ (a + K), so that U̲(X) coincides with U (X) in this case, see Example 6.8. Now assume that α ∈ (1/2, 1). If β equally likely takes two values t and s, then u(β) = max(t, s) − |t − s|/(2α). Since K is a Riesz cone, U (ξ + K) = x + K for some x, see Example 6.7.
For v ∈ G, the linear function ⟨x, v⟩ is dominated by (2α) −1 ⟨a, v⟩ if ⟨a, v⟩ < 0 and by (1 − (2α) −1 )⟨a, v⟩ otherwise. In view of Example 6.11, it suffices to consider selections of K measurable with respect to the σ-algebra F ξ generated by ξ; these selections take two values from the boundary of K with equal probabilities. The minimal extension U̲(X) can be found by (6.10), letting ξ′ equally likely take two values y = (y 1 , y 2 ) and z = (z 1 , z 2 ) on the boundary ∂K of K. Figure 1 shows U̲(X) and U (X) for π = π′ = 2, a = (1, −1), and α = 0.7; the minimal extension may indeed be a strict subset of the reduced maximal superlinear expectation.

Depth-trimmed regions and outliers
Consider a sublinear expectation E restricted to the family of p-integrable singletons, and let C = {0}. The map ξ → E({ξ}) satisfies the properties of depth-trimmed regions imposed in [3], which are those from [27] augmented by the monotonicity and subadditivity. Therefore, the sublinear expectation provides a rather generic construction of a depth-trimmed region associated with a random vector ξ ∈ L p (R d ). In statistical applications, points outside E({ξ}) or its empirical variant are regarded as outliers. The subadditivity property (3.2) means that, if a point is not an outlier for the convolution of two samples, then there is a way to obtain this point as the sum of two non-outliers for the original samples.

Example 7.1 (Zonoid trimming). For α ∈ (0, 1], let e α (β) be the average of the s-quantiles q β (s) of β over s ∈ (1 − α, 1); in case of non-uniqueness, the choice of a particular quantile does not matter because of integration. The risk measure r(β) = e α (−β) is called the average value-at-risk. Denote by E α the corresponding minimal sublinear expectation constructed by (5.7), so that h(E α ({ξ}), u) = e α (⟨ξ, u⟩) for all u. The set E α ({ξ}) is the zonoid-trimmed region of ξ at level α, see [3] and [23]. This set can be obtained as E α ({ξ}) = {E(γξ) : γ ∈ P α }, where P α ⊂ L 1 (R + ) consists of all random variables with values in [0, α −1 ] and expectation 1, see Example 5.14. This setting is a special case of Theorem 5.12 with M = {tγ : γ ∈ P α , t ≥ 0}. The value of α controls the size of the depth-trimmed region; α = 1 yields a single point, being the expectation of ξ. The subadditivity property of zonoid-trimmed regions was first noticed by [4].
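For an equally weighted sample, the support function e α (⟨ξ, u⟩) of the empirical zonoid-trimmed region is the mean of the upper α-fraction of the projections onto u; the routine below is our illustration, not from the paper:

```python
import numpy as np

def zonoid_support(points, u, alpha):
    """Support function of the empirical zonoid-trimmed region at level
    alpha: the average of the quantiles of <xi, u> over (1 - alpha, 1),
    i.e. the mean of the upper alpha-fraction of the projections."""
    proj = np.sort(np.asarray(points, dtype=float) @ np.asarray(u, dtype=float))
    n = len(proj)
    k = alpha * n                      # probability mass taken from the top
    full = int(np.floor(k))
    total = proj[n - full:].sum() if full > 0 else 0.0
    if full < n and k > full:          # fractional part of the top mass
        total += (k - full) * proj[n - full - 1]
    return total / k

pts = [(0.0, 0.0), (2.0, 2.0)]
# alpha = 1 recovers the support function of the mean {(1, 1)}:
h_mean = zonoid_support(pts, (1.0, 0.0), 1.0)   # 1.0
# alpha = 0.5 keeps the best half of the sample in direction (1, 0):
h_half = zonoid_support(pts, (1.0, 0.0), 0.5)   # 2.0
```

Sweeping u over the unit circle traces out the boundary of the empirical region, which shrinks towards the sample mean as α grows to 1.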
Example 7.2 (Lift expectation). Let X be an integrable random closed convex set. Consider the random set Y in R d+1 given by the convex hull of the origin and {1} × X. The selection expectation Z X = EY is called the lift expectation of X, see [6]. If X = {ξ} is a singleton, then Z X is the lift zonoid of ξ, see [23]. By definition of the selection expectation, Z X is the closure of the set of pairs (E(β), E(βξ)), where β runs through the family of random variables with values in [0, 1] and ξ through the integrable selections of X. Equivalently, (α, x) belongs to Z X if and only if x = αE(γξ) for some γ from the family P α and some selection ξ, see Example 7.1. Thus, the minimal extension E̲ α of E α from Example 7.1 is given by E̲ α (X) = α −1 {x : (α, x) ∈ Z X }.

Parametric families of nonlinear expectations
Consider a dual pair U and E of nonlinear expectations such that U (X) ⊂ EX ⊂ E(X) for all random closed sets X ∈ L p (co F(C)). Then it is natural to regard observations of X that do not lie between the superlinear and sublinear expectation as outliers. For each F ∈ co F, it is possible to quantify its depth with respect to the distribution of X using parametric families of nonlinear expectations constructed as follows.
Let X 1 , . . . , X n be independent copies of a p-integrable random closed convex set X. For a sublinear expectation E,

E ∪ n (X) = E conv(X 1 ∪ · · · ∪ X n ) (7.1)

is also a sublinear expectation. The only slightly nontrivial property is the subadditivity, which follows from the fact that (X 1 + Y 1 ) ∪ · · · ∪ (X n + Y n ) ⊂ (X 1 ∪ · · · ∪ X n ) + (Y 1 ∪ · · · ∪ Y n ).
If X 1 ∩ · · · ∩ X n is a.s. non-empty, then

U ∩ n (X) = U (X 1 ∩ · · · ∩ X n ) (7.2)

yields a superlinear expectation, noticing that (X 1 ∩ · · · ∩ X n ) + (Y 1 ∩ · · · ∩ Y n ) ⊂ (X 1 + Y 1 ) ∩ · · · ∩ (X n + Y n ). It is possible to consistently let U ∩ λ (X) = ∅ if X 1 ∩ · · · ∩ X N is empty with positive probability.

Proposition 7.3. Let N be a geometric random variable such that P(N = n) = λ(1 − λ) n−1 , n ≥ 1, for some λ ∈ (0, 1]. Then E ∪ λ (X) = E conv(X 1 ∪ · · · ∪ X N ) is a sublinear expectation and, if X 1 ∩ · · · ∩ X n ≠ ∅ a.s. for all n, then U ∩ λ (X) = U (X 1 ∩ · · · ∩ X N ) is a superlinear expectation depending on λ ∈ (0, 1].

Example 7.4. Choosing E(X) = U (X) = EX in (7.1) and (7.2) yields a family of nonlinear expectations depending on a parameter, which are also easy to compute.
It is easily seen that E ∪ λ (X) increases and U ∩ λ (X) decreases as λ declines. Define the depth of F ∈ co F(C) as depth(F ) = sup{λ ∈ (0, 1] : U ∩ λ (X) ⊂ F ⊂ E ∪ λ (X)}. It is easy to see that E ∪ 1 (X) = E(X) and U ∩ 1 (X) = U (X). Furthermore, U ∩ λ (X) declines to the set of fixed points of X and E ∪ λ (X) increases to the support of X as λ ↓ 0, see Example 3.5. Thus, all closed convex sets F satisfying F X ⊂ F ⊂ supp X have a positive depth.
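Taking E(X) = U (X) = EX as in Example 7.4, both families can be computed exactly in dimension one, where the random convex sets are intervals: the selection expectation of conv(X 1 ∪ · · · ∪ X n ) is the interval of the expected endpoint extremes, while that of X 1 ∩ · · · ∩ X n (when a.s. non-empty) is the expected intersection interval. A sketch for a finitely supported X (helper names are ours):

```python
from itertools import product

def union_intersection_expectations(intervals, probs, n):
    """E conv(X_1 u ... u X_n) and E(X_1 n ... n X_n) for an
    interval-valued X with finite support, enumerating all n-tuples
    of i.i.d. copies and averaging the endpoint extremes."""
    lo_u = hi_u = lo_i = hi_i = 0.0
    for idx in product(range(len(intervals)), repeat=n):
        p = 1.0
        for i in idx:
            p *= probs[i]
        lows = [intervals[i][0] for i in idx]
        highs = [intervals[i][1] for i in idx]
        lo_u += p * min(lows);  hi_u += p * max(highs)  # convex hull of union
        lo_i += p * max(lows);  hi_i += p * min(highs)  # intersection
    return (lo_u, hi_u), (lo_i, hi_i)

# X equal to [0, 2] or [1, 3] with probability 1/2 each, n = 2:
union2, inter2 = union_intersection_expectations([(0, 2), (1, 3)], [0.5, 0.5], 2)
```

Here EX = [0.5, 2.5], and the computed intervals nest around it, illustrating how the two parametric families bracket the selection expectation more and more loosely as more copies are combined.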
In order to handle the empirical variant of this concept based on a sample X 1 , . . . , X n of independent observations of X, consider a random closed set X̂ that with equal probabilities takes one of the values X 1 , . . . , X n . Its distribution can be simulated by sampling one of these sets with possible repetitions. Then it is possible to use the nonlinear expectations of X̂ in order to assess the depth of any given convex set, including those from the sample.

Risk of a set-valued portfolio
For a random variable ξ ∈ L p (R) interpreted as a financial outcome or gain, the value e(−ξ) (equivalently, −u(ξ)) is used in finance to assess the risk of ξ. It may be tempting to extend this to the multivariate setting by assuming that the risk is a d-dimensional function of a random vector ξ ∈ L p (R d ), with the conventional properties extended coordinatewise. However, in this case the nonlinear expectations (and so the risk) are marginalised, that is, the risk of ξ splits into a vector of nonlinear expectations applied to the individual components of ξ, see Theorem A.1.
Moreover, assessing the financial risk of a vector ξ is impossible without taking into account exchange rules that can be applied to its components. If no exchanges are allowed and only consumption is possible, then one arrives at positions being selections of X = ξ+R d − . On the contrary, if the components of ξ are expressed in the same currency with unrestricted exchanges and disposal (consumption) of the assets, then each position from the half-space X = {x : x i ≤ ξ i } is reachable from ξ. Working with the random set X also eliminates possible non-uniqueness in the choice of ξ with identical sums.
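The two exchange regimes give concretely different reachable sets: with pure consumption the positions reachable from ξ form ξ + R d − , while with free exchange of identically denominated assets they form the half-space {x : Σ x i ≤ Σ ξ i }. A membership sketch (the helper names are ours):

```python
def reachable_consumption(x, xi):
    """x is reachable from xi by disposal only: x <= xi coordinatewise."""
    return all(a <= b for a, b in zip(x, xi))

def reachable_exchange(x, xi):
    """x is reachable with unrestricted exchange of assets expressed in
    the same currency: only the total sum matters."""
    return sum(x) <= sum(xi)

xi = (1.0, 0.0)
ok_both = reachable_consumption((0.5, -1.0), xi)         # True either way
ok_exchange_only = reachable_exchange((2.0, -1.5), xi)   # True: 0.5 <= 1
ok_consumption = reachable_consumption((2.0, -1.5), xi)  # False: 2 > 1
```

The orthant is a subset of the half-space, so every position reachable by consumption is also reachable with exchanges, but not conversely.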
In view of this, it is natural to consider multivariate financial positions as lower random closed convex sets, equivalently, those from L p (co F(C)) with C = R d − . A random closed set X is said to be acceptable if 0 ∈ U (X), and the risk of X is defined as −U (X). The superadditivity property guarantees that if both X and Y are acceptable, then X + Y is acceptable. This is the classical financial diversification advantage formulated in set-valued terms.
If X ∈ L p (co F(C)) and C = R d − , the minimal extension (6.8) is called the lower set extension of U . If U is reduced maximal, (6.6) yields that U̲(X) = cl ∪ {∩ {E(γξ) + R d − : γ ∈ M, Eγ = 1} : ξ ∈ L p (X)}. In other words, U̲(X) is the closure of the set of all points dominated coordinatewise by the superlinear expectation of at least one selection of X. In [21], the origin-reflected set −U̲(X) was called the selection risk measure of X. For set-valued portfolios X = ξ + C, arising as the sum of a singleton ξ and a (possibly random) convex cone C, the maximal superlinear expectation (in our terminology), considered as a function of ξ only and not of ξ + C, was studied by [9] and [10]. The case of general set-valued arguments was pursued by [21]. For the purpose of risk assessment, one can use any superlinear expectation. However, the sensible choices are the maximal superlinear expectation, in view of its closed form dual representation, and the lower set extension, in view of its direct financial interpretation (through its primal representation), meaning the existence of a selection with all acceptable components. Given that the minimal superlinear expectation may be a strict subset of the maximal one (see Example 6.13), the acceptability of X under a maximal superlinear expectation may be a weaker requirement than the acceptability under the lower set extension.
Such a function may be viewed as a restriction of a sublinear set-valued expectation onto the family of sets ξ + R d − , letting e(ξ) be the coordinatewise supremum of E(ξ + R d − ). The following result (Theorem A.1) shows that vector-valued sublinear expectations marginalise, that is, they split into sublinear expectations applied to each component of the random vector; in the corresponding representation (A.1), the infimum is taken coordinatewise.
Consider the set C µ = {y ∈ R d : ∫ ξ dµ ≤ ∫ y dµ} for some µ = (µ 1 , . . . , µ d ) ∈ A o . Let A o i denote the family of all nontrivial µ ∈ A o such that µ j vanishes for all j ≠ i. Note that if µ ∈ A o , then (µ 1 , 0, . . . , 0) ∈ A o 1 , that is, the projections of A o and A o i on each component coincide. If µ ∈ A o 1 , then C µ = [ ∫ ξ 1 dµ 1 , ∞ ) × R × · · · × R.
Thus, this latter set C µ does not influence the coordinatewise infimum in (A.1) compared with the sets obtained by letting µ ∈ A o 1 ∪ A o 2 . The same argument applies to µ ∈ A o with more than two nonvanishing components. Thus, the intersection in (A.1) can be taken over µ ∈ A o 1 ∪ · · · ∪ A o d , whence the result.

A similar result holds for superlinear vector-valued expectations.