## 1 Introduction

Fix a probability space $$(\Omega ,\mathfrak{F},\mathbb{P})$$. A sublinear expectation is a real-valued function $$\mathtt{e}$$ defined on the space $$L^{p}(\mathbb{R})$$ of $$p$$-integrable random variables (with $$p\in [1,\infty ]$$) such that

$$\mathtt{e}(\xi +a)=\mathtt{e}(\xi )+a$$
(1.1)

for each deterministic $$a$$; moreover, $$\mathtt{e}$$ is monotone, i.e.,

$$\mathtt{e}(\xi ) \leq \mathtt{e}(\eta )\qquad \text{if } \xi \leq \eta \text{ a.s.},$$

homogeneous, i.e.,

$$\mathtt{e}(c\xi )=c\mathtt{e}(\xi ),\qquad c\geq 0,$$

and subadditive, i.e.,

$$\mathtt{e}(\xi +\eta )\leq \mathtt{e}(\xi )+\mathtt{e}(\eta );$$
(1.2)

see Peng [29, 30], who brought sublinear expectations to the realm of probability theory and established their close relationship to solutions of backward stochastic differential equations. A superlinear expectation $$\mathtt{u}$$ satisfies the same properties with (1.2) replaced by

$$\mathtt{u}(\xi +\eta )\geq \mathtt{u}(\xi )+\mathtt{u}(\eta ).$$
(1.3)

In many studies, the homogeneity property together with sub-(super-)additivity is replaced by convexity of $$\mathtt{e}$$ and concavity of $$\mathtt{u}$$. The range of values may be extended to $$(-\infty ,\infty ]$$ for the sublinear expectation and to $$[-\infty ,\infty )$$ for the superlinear one.

Abstract sublinear functionals have been studied by Fuglede [11], Schmeidler [32] and many further papers in relation to capacities and the Choquet integral, in view of applications to game theory and optimisation. While the notation $$\mathtt{e}$$ reflects the expectation meaning, the choice of notation $$\mathtt{u}$$ is explained by the fact that the superlinear expectation can be viewed as a utility function that assigns to the sum of two random variables a utility at least as high as the sum of their individual utilities; see Delbaen [7, Chap. 4]. If the random variable $$\xi$$ models a financial gain, then $$r(\xi )=-\mathtt{u}(\xi )$$ is called a coherent risk measure. The property (1.1) is then termed cash-invariance, and the superadditivity property turns into subadditivity due to the change of sign. The subadditivity of a risk measure means that the risk of the sum of two random variables is at most the sum of their individual risks; this is justified by the economic principle of diversification.

It is easy to see that $$\mathtt{e}$$ is a sublinear expectation if and only if

$$\mathtt{u}(\xi )=-\mathtt{e}(-\xi )$$
(1.4)

is a superlinear one, and in this case $$\mathtt{e}$$ and $$\mathtt{u}$$ are said to form an exact dual pair. The sublinearity property yields $$\mathtt{e}(\xi )+\mathtt{e}(-\xi )\geq \mathtt{e}(0)=0$$, so that $$-\mathtt{e}(-\xi )\leq \mathtt{e}(\xi )$$. The interval $$[\mathtt{u}(\xi ),\mathtt{e}(\xi )]$$ generated by an exact dual pair of nonlinear expectations characterises the uncertainty in the determination of the expectation of $$\xi$$. In finance, such intervals determine price ranges in illiquid markets; see Madan [24].

We equip the space $$L^{p}$$ with the $$\sigma (L^{p},L^{q})$$-topology based on the standard pairing of $$L^{p}$$ and $$L^{q}$$ with $$1/p+1/q=1$$. It is usually assumed that $$\mathtt{e}$$ is lower semicontinuous and $$\mathtt{u}$$ is upper semicontinuous in the $$\sigma (L^{p},L^{q})$$-topology. Given that $$\mathtt{e}$$ and $$\mathtt{u}$$ take finite values, general results of functional analysis concerning convex functions on linear spaces imply the semicontinuity property if $$p\in [1,\infty )$$ (see Kaina and Rüschendorf [20]); it is additionally imposed if $$p=\infty$$. A nonlinear expectation is said to be law-invariant (more exactly, law-determined) if it takes the same value on identically distributed random variables; see Föllmer and Schied [10, Sect. 4.5].

A rich source of sublinear expectations is provided by suprema of conventional (linear) expectations taken with respect to several probability measures. Assuming the $$\sigma (L^{p},L^{q})$$-lower semicontinuity, the bipolar theorem yields that this is the only possible case; see Delbaen [7, Sect. 4.5] and Kaina and Rüschendorf [20]. Then

$$\mathtt{e}(\xi )=\sup _{\gamma \in \mathcal{M},\mathbb{E}[\gamma ]=1} \mathbb{E}[\gamma \xi ]$$
(1.5)

is the supremum of expectations $$\mathbb{E}[\gamma \xi ]$$ over a convex $$\sigma (L^{q},L^{p})$$-closed cone ℳ in $$L^{q}(\mathbb{R}_{+})$$; the superlinear expectation is obtained by replacing the supremum with the infimum. In the following, we assume that (1.5) holds and that the representing set ℳ is chosen in such a way that the corresponding sublinear and superlinear expectations are law-invariant, that is, with each $$\gamma$$, ℳ contains all random variables identically distributed as $$\gamma$$.
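On a finite probability space, the representation (1.5) can be made concrete. The following sketch (function and variable names are ours, not from the paper) takes for ℳ the cone generated by the densities representing average value-at-risk at level $$\alpha$$, a law-invariant choice since this family is closed under permutations of the scenarios, and checks the subadditivity (1.2), the exact duality (1.4) and the ordering $$\mathtt{u}\leq \mathtt{e}$$ numerically:

```python
# Sublinear/superlinear expectations on n equally likely scenarios via (1.5):
# e(xi) = max over densities gamma in M of E[gamma*xi], u(xi) = min over M.
# M below consists of the extreme densities bounded by 1/alpha (the family
# representing average value-at-risk), assuming alpha = k/n for an integer k.

from itertools import combinations

def avar_densities(n, alpha):
    """Extreme densities on n equally likely scenarios bounded by 1/alpha:
    uniform distributions on k-element subsets of scenarios, k = alpha*n."""
    k = round(alpha * n)
    return [[(n / k if i in s else 0.0) for i in range(n)]
            for s in combinations(range(n), k)]

def e_sub(xi, densities):   # sublinear expectation: supremum in (1.5)
    return max(sum(g * x for g, x in zip(gamma, xi)) / len(xi) for gamma in densities)

def u_super(xi, densities): # superlinear expectation: infimum over the same family
    return min(sum(g * x for g, x in zip(gamma, xi)) / len(xi) for gamma in densities)

xi = [1.0, -2.0, 3.0, 0.5]
eta = [0.0, 1.0, -1.0, 2.0]
M = avar_densities(len(xi), alpha=0.5)

# exact dual pair (1.4): u(xi) = -e(-xi)
assert abs(u_super(xi, M) + e_sub([-x for x in xi], M)) < 1e-12
# subadditivity (1.2) and the ordering u <= e
xi_eta = [a + b for a, b in zip(xi, eta)]
assert e_sub(xi_eta, M) <= e_sub(xi, M) + e_sub(eta, M) + 1e-12
assert u_super(xi, M) <= e_sub(xi, M)
```

The interval $$[\mathtt{u}(\xi ),\mathtt{e}(\xi )]$$ produced by this dual pair is exactly the uncertainty interval discussed below (1.4).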

A random closed set $$X$$ in Euclidean space is a random element with values in the family ℱ of closed sets in $$\mathbb{R}^{d}$$ such that $$\{X\cap K\neq \varnothing \}$$ is in $$\mathfrak{F}$$ for all compact sets $$K$$ in $$\mathbb{R}^{d}$$; see Molchanov [25, Sect. 1.1.1]. In other words, a random closed set is a measurable set-valued function. A random closed set $$X$$ is said to be convex if $$X$$ almost surely belongs to the family $$\operatorname{co}\mathcal{F}$$ of closed convex sets in $$\mathbb{R}^{d}$$. For convex random sets in Euclidean space, the measurability condition is equivalent to the condition that the support function of $$X$$ (see (2.2) below) is a random function on $$\mathbb{R}^{d}$$ with values in $$(-\infty ,\infty ]$$.

In the set-valued setting, it is natural to replace the inequalities (1.2) and (1.3) with inclusions. For sets, the minus sign corresponds to the reflection with respect to the origin; it does not alter the direction of the inclusion, and so there is no direct link between set-valued sublinear and superlinear expectations.

This paper aims to systematically explore nonlinear set-valued expectations. Section 2 recalls the classical concept of the (linear) selection expectation for random closed sets, introduced by Aumann [4] and Artstein and Vitale [3]; see also Molchanov [25, Sect. 2.1]. The selection expectation $$\mathbb{E}[X]$$ is defined as the closure of the set of expectations $$\mathbb{E}[\xi ]$$ of all integrable random vectors $$\xi$$ such that $$\xi \in X$$ almost surely (selections of $$X$$). In Sect. 2.3, we introduce a suitable convergence concept for (possibly unbounded) random convex sets based on linear functionals applied to the support function.

Nonlinear expectations of random convex sets are introduced in Sect. 3. We refine the properties of nonlinear expectations stated in Molchanov [25, Sect. 2.2.7]. Basic examples of such expectations and more involved constructions are considered, with particular attention to the expectations of random singletons. It is also explained how the set-valued expectation applies to random convex functions and how it is possible to drop the homogeneity property and extend the setting to convex/concave functionals.

Among the rather vast variety of nonlinear expectations, it is possible to identify extremal ones: the minimal sublinear expectation of $$X$$ is the convex hull of nonlinear expectations of all sets from some family that yields $$X$$ as their union. In the case of selections, this becomes a direct generalisation of the representation of the selection expectation as the set of expectations for all random points almost surely belonging to a random set. The maximal superlinear extension is the intersection of nonlinear expectations of all half-spaces containing the random set. While the two coincide in the linear case and provide two equivalent definitions of the selection expectation, the two constructions differ in general. Similar set-valued functions on linear spaces have been studied by Hamel [12] and Hamel and Heyde [13], and the dual representation in [12, 13] appears to be the representation of maximal superlinear expectations in our setting restricted to special random closed sets.

Nonlinear maps restricted to the family $$L^{p}(\mathbb{R}^{d})$$ of $$p$$-integrable random vectors and sets having the form of a random vector plus a cone have been studied by Cascos and Molchanov [6] and Hamel and Heyde [12, 13]; comprehensive duality results have been proved by Drapeau et al. [9]. In our terminology, these studies concern the case when the argument of a superlinear expectation is the sum of a random vector and a convex cone, which in Hamel et al. [14] is allowed to be random, but is the same for all random vectors involved. However, for general set-valued arguments, it does not seem possible to rely on the approach of [9, 12, 13], since the known techniques of set-valued optimisation theory (see e.g. Khan and Tammer [21]) do not suffice to handle functions whose arguments belong to a nonlinear space.

The key technique suitable to handle nonlinear expectations relies on the bipolar theorem. A direct generalisation of this theorem for functionals of random convex sets is not feasible, since random convex sets do not form a linear space. Section 5 provides duality results for sublinear expectations and Sect. 6 for superlinear ones. Specifically, the constant-preserving minimal sublinear expectations are identified. For the superlinear case, the family of random closed convex sets such that a superlinear expectation contains the origin is a convex cone. However, it is rather tricky to use separation results since linear functions (such as the selection expectation) may have trivial values on unbounded integrable random sets. For instance, the selection expectation of a random half-space with a nondeterministic normal is the whole space; in this case, superlinear expectations are not dominated by any nontrivial linear expectation. In order to handle such situations, the duality results for superlinear expectations are proved for the maximal superlinear expectation. It is shown that the superlinear expectation of a singleton is usually empty; in order to come up with a nontrivial minimal extension, singletons in the definition of the minimal extension are replaced by translated cones. For arguments being the sum of a point and a cone in $$\mathbb{R}^{d}$$, we recover the results of Hamel and Heyde [12, 13].

Some applications are presented in Sect. 7. Sublinear expectations are useful in order to identify outliers in samples of random sets. Such samples often appear in partially identified models in econometrics, e.g. as intervals giving the salary range (see Molchanov and Molinari [27]), or as interval-valued price ranges in finance. The superlinear expectation can be used to assess multivariate risk in finance and to measure multivariate utilities. The superlinearity property is essential, since the utility of the sum of two portfolios described by random sets “dominates” the sum of their individual utilities. We show that the minimal extension of a superlinear expectation is closely related to the selection risk measure of lower random sets considered by Molchanov and Cascos [26]. Allowing the arguments of multiasset utilities to be general convex random sets makes it possible to use iteration-based constructions in the dynamic framework (see Lépinette and Molchanov [23]) and so consider nonlinear extensions of multivariate martingales. The case of random sets having the form of a vector plus a cone is the standard setting in the theory of markets with proportional transaction costs; see Kabanov and Safarian [19]. Superlinear expectations make it possible to assess utilities (and risks) of such portfolios and so develop dynamic hedging strategies; see [23]. Allowing general arguments of superlinear expectations makes it possible to include models of general convex transaction costs (see Pennanen and Penner [31]), most importantly, the setting of limit order books.

The appendix presents a self-contained proof of the fact that vector-valued sublinear expectations of random vectors necessarily split into sublinear expectations applied to each component of the vector. This fact reiterates the point that the set-valued setting is essential for defining multivariate nonlinear expectations.

We use the following notational conventions: $$X,Y$$ denote random closed convex sets, $$F$$ is a deterministic closed convex set, $$\xi$$ and $$\beta$$ are $$p$$-integrable random vectors and random variables, $$\zeta$$ and $$\gamma$$ are $$q$$-integrable vectors and variables with $$1/p+1/q=1$$, $$\eta$$ is usually a random vector with values in the unit sphere $$\mathbb{S}^{d-1}$$, $$u$$ and $$v$$ are deterministic points from $$\mathbb{S}^{d-1}$$.

## 2 Selection expectation

### 2.1 Integrable random sets and selection expectation

Let $$X$$ be a random closed set in $$\mathbb{R}^{d}$$, always assumed to be almost surely nonempty. A random vector $$\xi$$ is called a selection of $$X$$ if $$\xi \in X$$ almost surely. Let $$L^{p}(X)$$ denote the family of (equivalence classes of) $$p$$-integrable selections of $$X$$ for $$p\in [1,\infty )$$, essentially bounded ones if $$p=\infty$$, and all selections if $$p=0$$. If $$L^{p}(X)$$ is not empty, then $$X$$ is called $$p$$-integrable, shortly integrable if $$p=1$$. This is the case if $$X$$ is $$p$$-integrably bounded, that is, if $$| X |=\sup \{ | x | : x\in X\}$$ is $$p$$-integrable (essentially bounded if $$p=\infty$$).

If $$X$$ is integrable, then its selection expectation is defined by

$$\mathbb{E}[X]:=\operatorname{cl}\{\mathbb{E}[\xi ]: \xi \in L^{1}(X) \},$$
(2.1)

which is the closure of the set of expectations of all integrable selections of $$X$$; see Molchanov [25, Sect. 2.1.2]. In (2.1), the same expectation is applied to all selections of $$X$$. If $$X$$ is integrably bounded, then the closure on the right-hand side is not needed and $$\mathbb{E}[X]$$ is compact. The set $$\mathbb{E}[X]$$ is convex if $$X$$ is convex or if the underlying probability space is non-atomic. From now on, we assume that all random closed sets we consider are almost surely convex.

The support function of any nonempty set $$F$$ in $$\mathbb{R}^{d}$$ is defined by

$$h(F,u)=\sup \{\langle x,u\rangle : x\in F \},\qquad u\in \mathbb{R}^{d},$$
(2.2)

allowing possibly infinite values if $$F$$ is unbounded, where $$\langle x,u\rangle$$ denotes the scalar product. Due to homogeneity, the support function is determined by its values on the unit sphere $$\mathbb{S}^{d-1}$$.
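For a polytope described by finitely many points, the supremum in (2.2) is attained at one of the generating points, which makes the support function straightforward to evaluate. A minimal sketch (our own illustration, not part of the paper) also checking positive homogeneity:

```python
# Support function h(F, u) of the convex hull of finitely many points:
# the supremum in (2.2) is attained at one of the generating points.

def support(points, u):
    return max(sum(x * v for x, v in zip(p, u)) for p in points)

square = [(1.0, 1.0), (1.0, -1.0), (-1.0, 1.0), (-1.0, -1.0)]
assert support(square, (1.0, 0.0)) == 1.0

# homogeneity: h(F, c*u) = c*h(F, u) for c >= 0, so the values on the
# unit sphere determine the whole support function
c, u = 2.5, (0.6, 0.8)
assert abs(support(square, tuple(c * v for v in u)) - c * support(square, u)) < 1e-12
```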

If $$X$$ is an integrable random closed set, then its expected support function is the support function of $$\mathbb{E}[X]$$, that is,

$$\mathbb{E}[h(X,u) ]=h (\mathbb{E}[X],u ),\qquad u\in \mathbb{R}^{d};$$
(2.3)

see [25, Theorem 2.1.38]. Thus

$$\mathbb{E}[X]=\bigcap _{u\in \mathbb{S}^{d-1}} \{x : \langle x,u \rangle \leq \mathbb{E}[h(X,u) ] \},$$

which may be seen as the dual representation of the selection expectation with (2.1) being its primal representation. Ararat and Rudloff [2] provide an axiomatic Daniell–Stone type characterisation of the selection expectation. The property (2.3) can also be expressed as

$$\mathbb{E}\Big[\sup _{\xi \in X} \langle \xi ,u\rangle \Big] =\sup _{ \xi \in L^{1}(X)} \mathbb{E}[\langle \xi ,u\rangle ],$$
(2.4)

meaning that in this case, it is possible to interchange expectation and supremum. If $$X$$ is an integrable random closed set and $$\mathfrak{H}$$ is a sub-$$\sigma$$-algebra of $$\mathfrak{F}$$, the conditional expectation $$\mathbb{E}[X|\mathfrak{H}]$$ is identified via its support function, which is the conditional expectation of the support function of $$X$$; see Hiai and Umegaki [16] and [25, Sect. 2.1.6].

The dilation (scaling) of a closed set $$F$$ is defined as $$cF=\{cx : x\in F\}$$ for $$c\in \mathbb{R}$$. For two closed sets $$F_{1}$$ and $$F_{2}$$, their closed Minkowski sum is defined by

$$F_{1}+F_{2}=\operatorname{cl}\{x+y : x\in F_{1},\,y\in F_{2}\},$$

and the sum is empty if at least one summand is empty. If at least one of $$F_{1}$$ and $$F_{2}$$ is compact, the closure on the right-hand side is not needed. We write shortly $$F+a$$ instead of $$F+\{a\}$$ for $$a\in \mathbb{R}^{d}$$.

If $$X$$ and $$Y$$ are random closed convex sets, then $$X+Y$$ is a random closed convex set; see [25, Theorem 1.3.25]. The selection expectation is linear on integrable random closed sets, that is, $$\mathbb{E}[X+Y]=\mathbb{E}[X]+ \mathbb{E}[Y]$$; see e.g. [25, Proposition 2.1.32].
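Combining (2.3) with the linearity of the selection expectation, a simple random set taking two convex values $$F_{1},F_{2}$$ with probabilities $$p_{1},p_{2}$$ on a non-atomic space has selection expectation equal to the Minkowski average $$p_{1}F_{1}+p_{2}F_{2}$$. The following sketch (names are ours) verifies the identity (2.3) for polytopes given by their generating points:

```python
# Expected support function identity (2.3) for a simple random convex set:
# E[h(X, u)] = p1*h(F1, u) + p2*h(F2, u) equals h(p1*F1 + p2*F2, u).

def support(points, u):
    return max(sum(x * v for x, v in zip(p, u)) for p in points)

def scale(points, c):
    return [tuple(c * x for x in p) for p in points]

def minkowski(A, B):  # generators of the Minkowski sum of two polytopes
    return [tuple(a + b for a, b in zip(pa, pb)) for pa in A for pb in B]

F1 = [(0.0, 0.0), (2.0, 0.0)]      # a horizontal segment
F2 = [(0.0, -1.0), (0.0, 1.0)]     # a vertical segment
p1, p2 = 0.25, 0.75
EX = minkowski(scale(F1, p1), scale(F2, p2))   # a rectangle: the Minkowski average

for u in [(1.0, 0.0), (0.0, 1.0), (0.6, 0.8), (-0.6, 0.8)]:
    lhs = p1 * support(F1, u) + p2 * support(F2, u)   # E[h(X, u)]
    assert abs(support(EX, u) - lhs) < 1e-12          # h(E[X], u)
```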

In the following, the letter $$C$$ always refers to a deterministic closed convex cone in $$\mathbb{R}^{d}$$ which is distinct from the whole space. If $$F=F+C$$, then $$F$$ is said to be $$C$$-closed. Due to the closed Minkowski sum on the right-hand side, $$F$$ is also topologically closed. Let $$\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$ denote the family of all $$C$$-closed convex sets in $$\mathbb{R}^{d}$$ (including the empty set), and let $$L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$ be the family of all $$p$$-integrable random sets with values in $$\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$. Any such random set is necessarily a.s. nonempty. By

$$C^{o}= \{u\in \mathbb{R}^{d} : h(C,u)\leq 0 \},$$

we denote the polar cone of $$C$$.

### Example 2.1

If $$C=\{0\}$$, then $$\operatorname{co}\mathcal{F}(\mathbb{R}^{d},\{0\})$$ is the family $$\operatorname{co}\mathcal{F}$$ of all convex closed sets in $$\mathbb{R}^{d}$$. If $$C=\mathbb{R}_{-}^{d}$$, then $$\operatorname{co}\mathcal{F}(\mathbb{R}^{d},\mathbb{R}_{-}^{d})$$ is the family of lower convex closed sets, and a random closed convex set with realisations in this family is called a random lower set.

### Example 2.2

Let $$C$$ be a convex closed cone in $$\mathbb{R}^{d}$$ which does not coincide with the whole space. If $$X=\xi +C$$ for $$\xi \in L^{p}(\mathbb{R}^{d})$$, then $$X$$ belongs to the space $$L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$. For each $$\zeta \in L^{q}( C^{o} )$$, we have $$h(X,\zeta )=\langle \xi ,\zeta \rangle$$.

### 2.2 Support function at random directions

For $$t\in \mathbb{R}$$, let

$$H_{u}(t)= \{x\in \mathbb{R}^{d} : \langle x,u\rangle \leq t \}, \qquad u\neq 0,$$

denote a half-space in $$\mathbb{R}^{d}$$, and set $$H_{u}(\infty )=\mathbb{R}^{d}$$. Particular difficulties in dealing with unbounded random closed sets are caused by the fact that the support function at a deterministic direction may be infinite with probability one.

### Example 2.3

Let $$X=H_{\eta }(0)$$ be the random half-space with the normal vector $$\eta$$ having a non-atomic distribution. Then $$\mathbb{E}[X]$$ is the whole space. The support function of $$X$$ is finite only on the random ray $$\{c\eta : c\geq 0\}$$.

It is shown by Lépinette and Molchanov [22, Corollary 3.5] that each random closed convex set satisfies

$$X=\bigcap _{\eta \in L^{0}(\mathbb{S}^{d-1})} H_{\eta }(X),$$
(2.5)

where

$$H_{\eta }(X)=H_{\eta }\big(h(X,\eta )\big)$$

is the smallest half-space with outer normal $$\eta$$ that contains $$X$$. If $$X$$ is a.s. $$C$$-closed, then (2.5) holds with $$\eta$$ running through the family of selections of $$\mathbb{S}^{d-1}\cap C^{o}$$.

For each $$\zeta \in L^{q}(\mathbb{R}^{d})$$, the support function $$h(X,\zeta )$$ is a random variable with values in $$(-\infty ,\infty ]$$; see [22, Lemma 3.1]. While $$h(X,\zeta )$$ is not necessarily integrable, its negative part is always integrable if $$X$$ is $$p$$-integrable. Indeed, choose any $$\xi \in L^{p}(X)$$ and write $$h(X,\zeta )=h(X-\xi ,\zeta )+\langle \xi ,\zeta \rangle$$. The second summand on the right-hand side is integrable, while the first one is nonnegative.

### Lemma 2.4

Let $$X,Y\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$. If we have $$\mathbb{E}[h(Y,\zeta )]\leq \mathbb{E}[h(X,\zeta )]$$ for all $$\zeta \in L^{q}( C^{o} )$$, then $$Y\subseteq X$$ a.s.

### Proof

For each $$A\in \mathfrak{F}$$, replacing $$\zeta$$ with $$\zeta \mathbf{1}_{A}$$ yields

$$\mathbb{E}[h(Y,\zeta )\mathbf{1}_{A} ]\leq \mathbb{E}[h(X,\zeta ) \mathbf{1}_{A} ],$$

whence $$h(Y,\zeta )\leq h(X,\zeta )$$ a.s. The same holds for all $$\zeta \in L^{q}(\mathbb{R}^{d})$$ by considering separately the events $$\{\zeta \in C^{o} \}$$ and $$\{\zeta \notin C^{o} \}$$; on the latter event, $$h(X,\zeta )=h(Y,\zeta )=\infty$$ because $$X$$ and $$Y$$ are $$C$$-closed. For a general $$\zeta \in L^{0}(\mathbb{R}^{d})$$, we have $$h(Y,\zeta _{n})\leq h(X,\zeta _{n})$$ a.s. with $$\zeta _{n}=\zeta \mathbf{1}_{\{|\zeta |\leq n\}}$$ for $$n\in \mathbb{N}$$. Thus $$h(Y,\zeta ) \leq h(X,\zeta )$$ almost surely for all $$\zeta \in L^{0}(\mathbb{R}^{d})$$, and the statement follows from [22, Corollary 3.6]. □

### Corollary 2.5

The distribution of $$X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$ is uniquely determined by $$\mathbb{E}[h(X,\zeta )]$$ for $$\zeta \in L^{q}( C^{o} )$$.

### Proof

Apply Lemma 2.4 with $$Y=\{\xi \}$$, so that the values of $$\mathbb{E}[h(X,\zeta )]$$ identify all $$p$$-integrable selections of $$X$$, and note that $$X$$ equals the closure of the family of its $$p$$-integrable selections; see [25, Proposition 2.1.4]. □

A random closed set $$X$$ is called Hausdorff-approximable if it appears as the almost sure limit in the Hausdorff metric of random closed sets with at most a finite number of values. It is known [25, Theorem 1.3.18] that all random compact sets are Hausdorff-approximable, as well as those that appear as the sum of a random compact set and a random closed set with at most a finite number of possible values. The random closed set $$X$$ from Example 2.3 is not Hausdorff-approximable.

The distribution of a Hausdorff-approximable $$p$$-integrable random closed convex set $$X$$ is uniquely determined by the selection expectations $$\mathbb{E}[\gamma X]$$ for all $$\gamma \in L^{q}(\mathbb{R}_{+})$$, and it actually suffices to let $$\gamma$$ run through all measurable indicators; see Hess [15] and [25, Proposition 2.1.33]. If $$X$$ is Hausdorff-approximable, then its selections $$\xi$$ are identified by the condition $$\mathbb{E}[\xi \mathbf{1}_{A}]\in \mathbb{E}[X\mathbf{1}_{A}]$$ for all $$A\in \mathfrak{F}$$. By passing to the support functions, we arrive at a variant of Lemma 2.4 with $$\zeta =u\mathbf{1}_{A}$$ for all $$u\in \mathbb{S}^{d-1}$$ and $$A\in \mathfrak{F}$$.

### 2.3 Convergence of random closed convex sets

Convergence of random closed sets is typically considered in probability, almost surely or in distribution; see Molchanov [25, Sect. 1.7]. In the following, we define $$L^{p}$$-type convergence concepts. The space $$L^{p}(\mathbb{R}^{d})$$ is equipped with the $$\sigma (L^{p},L^{q})$$-topology, that is, $$\xi _{n}\to \xi$$ means that $$\mathbb{E}[\langle \xi _{n} ,\zeta \rangle ]\to \mathbb{E}[\langle \xi , \zeta \rangle ]$$ for all $$\zeta \in L^{q}(\mathbb{R}^{d})$$.

### Lemma 2.6

Recall that $$C$$ denotes a generic convex cone in $$\mathbb{R}^{d}$$ which differs from the whole space. If $$X$$ is a $$p$$-integrable random $$C$$-closed convex set, then $$L^{p}(X)$$ is a nonempty convex $$\sigma (L^{p},L^{q})$$-closed and $$L^{p}(C)$$-closed subset of $$L^{p}(\mathbb{R}^{d})$$.

### Proof

If $$\xi _{n}\in L^{p}(X)$$ and $$\xi _{n}\to \xi \in L^{p}(\mathbb{R}^{d})$$ in $$\sigma (L^{p},L^{q})$$, then

$$\mathbb{E}[\langle \xi ,\zeta \rangle ] =\lim _{n\to \infty } \mathbb{E}[\langle \xi _{n},\zeta \rangle ] \leq \mathbb{E}[h(X, \zeta ) ]$$

for all $$\zeta \in L^{q}(\mathbb{R}^{d})$$. Thus $$\xi$$ is a selection of $$X$$ by Lemma 2.4. The statement concerning $$C$$-closedness is obvious. □

A sequence $$(X_{n})_{n\in \mathbb{N}}$$ in $$L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$ is said to converge to a random set $$X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$ scalarly in $$\sigma (L^{p},L^{q})$$ (shortly, scalarly) if

$$\mathbb{E}[h(X_{n},\zeta )]\longrightarrow \mathbb{E}[h(X,\zeta )] \qquad \text{as } n\to \infty , \text{ for all } \zeta \in L^{q}( C^{o} ),$$

where the convergence is understood in the extended line $$(-\infty ,\infty ]$$. Since $$\mathbb{E}[h(X_{n},\zeta )]$$ equals the support function of $$L^{p}(X_{n})$$ in the direction $$\zeta$$, this convergence is the scalar convergence $$L^{p}(X_{n})\to L^{p}(X)$$ as convex sets in $$L^{p}(\mathbb{R}^{d})$$; see Sonntag and Zǎlinescu [34].

## 3 General nonlinear set-valued expectations

### 3.1 Definitions

Fix $$p\in [1,\infty ]$$ and a convex closed cone $$C$$ distinct from the whole space $$\mathbb{R}^{d}$$.

### Definition 3.1

A sublinear set-valued expectation is a function

$$\mathbb{E}_{\mathtt{e}}:L^{p} (\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C) )\to \operatorname{co}\mathcal{F}$$

such that

i) $$\mathbb{E}_{\mathtt{e}}[X+a]=\mathbb{E}_{\mathtt{e}}[X]+a$$ for each deterministic $$a\in \mathbb{R}^{d}$$ (additivity on deterministic singletons);

ii) $$F\subseteq \mathbb{E}_{\mathtt{e}}[F]$$ for all deterministic $$F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$;

iii) $$\mathbb{E}_{\mathtt{e}}[X]\subseteq \mathbb{E}_{\mathtt{e}}[Y]$$ if $$X\subseteq Y$$ almost surely (monotonicity);

iv) $$\mathbb{E}_{\mathtt{e}}[cX]=c\mathbb{E}_{\mathtt{e}}[X]$$ for all $$c>0$$ (homogeneity);

and the subadditivity property

$$\mathbb{E}_{\mathtt{e}}[X+Y]\subseteq \mathbb{E}_{\mathtt{e}}[X]+\mathbb{E}_{\mathtt{e}}[Y]$$
(3.1)

holds for all $$p$$-integrable random closed convex sets $$X$$ and $$Y$$. A superlinear set-valued expectation $$\mathbb{E}_{\mathtt{u}}$$ satisfies the same properties with the exception of ii) replaced by $$\mathbb{E}_{\mathtt{u}}[F]\subseteq F$$ and (3.1) replaced by the superadditivity property

$$\mathbb{E}_{\mathtt{u}}[X+Y]\supseteq \mathbb{E}_{\mathtt{u}}[X]+\mathbb{E}_{\mathtt{u}}[Y].$$
(3.2)

The nonlinear expectations $$\mathbb{E}_{\mathtt{e}}$$ and $$\mathbb{E}_{\mathtt{u}}$$ are said to be law-invariant if they take the same value on identically distributed random closed convex sets.

### Proposition 3.2

All nonlinear expectations on $$L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$ take their values in $$\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$.

### Proof

Let $$\mathbb{E}_{\mathtt{e}}$$ be a sublinear expectation. If $$a\in C$$, then $$X+a\subseteq X$$ a.s., whence $$\mathbb{E}_{\mathtt{e}}[X]+a=\mathbb{E}_{\mathtt{e}}[X+a]\subseteq \mathbb{E}_{\mathtt{e}}[X]$$ by properties i) and iii). Therefore, $$\mathbb{E}_{\mathtt{e}}[X]+C\subseteq \mathbb{E}_{\mathtt{e}}[X]$$, that is, $$\mathbb{E}_{\mathtt{e}}[X]$$ is $$C$$-closed. The same argument applies to superlinear expectations. □

While the argument $$X$$ of nonlinear expectations is a.s. nonempty, $$\mathbb{E}_{\mathtt{u}}[X]$$ may be empty, and then the right-hand side of (3.2) is also empty. However, if $$\mathbb{E}_{\mathtt{e}}[X]$$ is empty for some $$X$$, then $$\mathbb{E}_{\mathtt{e}}[\{\xi \}]=\varnothing$$ for $$\xi \in L^{p}(X)$$ by monotonicity; hence

$$\mathbb{E}_{\mathtt{e}}[Y]\subseteq \mathbb{E}_{\mathtt{e}}[Y-\xi ]+\mathbb{E}_{\mathtt{e}}[\{\xi \}]$$

is empty for all $$p$$-integrable random sets $$Y$$. Thus each sublinear expectation is either always empty or always nonempty. In view of this, we assume that sublinear expectations take nonempty values. We always exclude the trivial cases when $$\mathbb{E}_{\mathtt{e}}[X]=\mathbb{R}^{d}$$ for all $$X$$ or $$\mathbb{E}_{\mathtt{u}}[X]=\varnothing$$ for all $$X$$.

Note that $$\mathbb{E}_{\mathtt{e}}[C]$$ is a closed convex cone, which may be strictly larger than $$C$$. By Proposition 3.2, $$\mathbb{E}_{\mathtt{u}}[C]$$ is either $$C$$ or is empty. The sublinear (respectively, superlinear) expectation is said to be normalised if $$\mathbb{E}_{\mathtt{e}}[C]=C$$ (respectively, $$\mathbb{E}_{\mathtt{u}}[C]=C$$). We always have $$C\subseteq \mathbb{E}_{\mathtt{e}}[C]$$ by property ii), and we also have $$\mathbb{E}_{\mathtt{u}}[C]\neq \varnothing$$, since $$\mathbb{E}_{\mathtt{u}}[a+C]=\mathbb{E}_{\mathtt{u}}[C]+a$$ for all $$a\in \mathbb{R}^{d}$$ and $$\mathbb{E}_{\mathtt{u}}$$ is not identically empty.

The properties of nonlinear expectations do not imply that they preserve deterministic convex closed sets. A deterministic set $$F$$ from $$\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$ is called invariant for a nonlinear expectation if the expectation maps $$F$$ to itself. The family of invariant sets is closed under translations, under dilations by positive reals and under Minkowski sums; e.g. for a sublinear expectation $$\mathbb{E}_{\mathtt{e}}$$, if $$\mathbb{E}_{\mathtt{e}}[F_{1}]=F_{1}$$ and $$\mathbb{E}_{\mathtt{e}}[F_{2}]=F_{2}$$, then

$$\mathbb{E}_{\mathtt{e}}[F_{1}+F_{2}]=F_{1}+F_{2}.$$

A nonlinear expectation is said to be constant-preserving if all nonempty deterministic sets from $$\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$ are invariant.

The superlinear and sublinear expectations $$\mathbb{E}_{\mathtt{u}}$$ and $$\mathbb{E}_{\mathtt{e}}$$ form a dual pair if $$\mathbb{E}_{\mathtt{u}}[X]$$ is a subset of $$\mathbb{E}_{\mathtt{e}}[X]$$ for each $$p$$-integrable random closed convex set $$X$$. In contrast to the univariate setting, the reflection $$-X=\{-x : x\in X\}$$ of $$X$$ with respect to the origin does not alter the direction of set inclusions, so that the exact duality relation (1.4) is useless; if $$C=\{0\}$$, then $$-\mathbb{E}_{\mathtt{e}}[-X]$$ is again a sublinear expectation.

For a sequence $$(F_{n})_{n\in \mathbb{N}}$$ of closed sets, its lower limit $$\liminf _{n\to \infty } F_{n}$$ is the set of limits for all convergent sequences $$x_{n}\in F_{n}$$, $$n\in \mathbb{N}$$, and its upper limit $$\limsup _{n\to \infty } F_{n}$$ is the set of limits for all convergent subsequences $$x_{n_{k}}\in F_{n_{k}}$$, $$k\in \mathbb{N}$$.

The sublinear expectation $$\mathbb{E}_{\mathtt{e}}$$ is called lower semicontinuous if

$$\mathbb{E}_{\mathtt{e}}[X]\subseteq \limsup _{n\to \infty }\mathbb{E}_{\mathtt{e}}[X_{n}],$$
(3.3)

and the superlinear expectation $$\mathbb{E}_{\mathtt{u}}$$ is upper semicontinuous if

$$\liminf _{n\to \infty }\mathbb{E}_{\mathtt{u}}[X_{n}]\subseteq \mathbb{E}_{\mathtt{u}}[X]$$

for any sequence $$(X_{n})_{n\in \mathbb{N}}$$ of random closed convex sets converging to $$X$$ in the chosen topology; e.g. $$\mathbb{E}_{\mathtt{e}}$$ is scalarly lower semicontinuous if (3.3) holds whenever $$(X_{n})$$ scalarly converges to $$X$$. Note that our lower semicontinuity definition is weaker than its standard variant for set-valued functions, which would require that $$\mathbb{E}_{\mathtt{e}}[X]$$ is a subset of $$\liminf _{n\to \infty }\mathbb{E}_{\mathtt{e}}[X_{n}]$$; see Hu and Papageorgiou [18, Proposition 2.35].

### Proposition 3.3

If $$X+X'=\mathbb{R}^{d}$$ a.s. with $$X'$$ being an independent copy of $$X$$, then $$\mathbb{E}_{\mathtt{e}}[X]=\mathbb{R}^{d}$$ for each law-invariant sublinear expectation $$\mathbb{E}_{\mathtt{e}}$$.

### Proof

By property ii) and subadditivity (3.1),

$$\mathbb{R}^{d}\subseteq \mathbb{E}_{\mathtt{e}}[\mathbb{R}^{d}] =\mathbb{E}_{\mathtt{e}}[X+X'] \subseteq \mathbb{E}_{\mathtt{e}}[X]+\mathbb{E}_{\mathtt{e}}[X'].$$

By law invariance, $$\mathbb{E}_{\mathtt{e}}[X']=\mathbb{E}_{\mathtt{e}}[X]$$, and the convexity of $$\mathbb{E}_{\mathtt{e}}[X]$$ yields $$\mathbb{E}_{\mathtt{e}}[X]+\mathbb{E}_{\mathtt{e}}[X]=2\mathbb{E}_{\mathtt{e}}[X]$$. Thus $$2\mathbb{E}_{\mathtt{e}}[X]=\mathbb{R}^{d}$$, whence $$\mathbb{E}_{\mathtt{e}}[X]=\mathbb{R}^{d}$$. □

Proposition 3.3 applies if $$X=H_{\eta }(0)$$ is a half-space with a non-atomic $$\eta$$, so that each law-invariant sublinear expectation on such random sets takes trivial values.

### Example 3.4

Let $$C=\mathbb{R}_{-}^{d}$$. If $$\mathbb{E}_{\mathtt{u}}[\xi +C]=\overline{\mathtt{e}}(\xi )+C$$ for a vector-valued function $$\overline{\mathtt{e}}:L^{p}(\mathbb{R}^{d})\to \mathbb{R}^{d}$$, then $$\overline{\mathtt{e}}(\xi )$$ splits into the vector of superlinear expectations applied to the components of $$\xi =(\xi _{1},\dots ,\xi _{d})$$; see Theorem A.1.

### Remark 3.5

It is possible to consider nonlinear expectations defined only on some special random sets, e.g. singletons or half-spaces. It is then only required that the family of such sets is closed under translations, under dilations by positive reals and for Minkowski sums.

### Remark 3.6

Utility functions of random variables are usually assumed to be superadditive. Risk measures of random variables are defined by inverting the sign and so become subadditive. In order to resemble the terminology common for risk measures, the family $$\operatorname{co}\mathcal{F}$$ could be ordered by the reverse inclusion ordering; then the terminology is correspondingly adjusted, e.g. a superlinear expectation becomes sublinear and monotonically decreasing. The use of the reverse inclusion order promoted by Hamel et al. [13, 14] is largely motivated by financial terminology, where risk measures are traditionally assumed to be antimonotonic and subadditive; see e.g. Föllmer and Schied [10, Chap. 4]. In the reverse inclusion order, set-valued risk measures become subadditive, exactly as conventional risk measures of random variables are. We, however, systematically consider the conventional inclusion order, and so our set-valued setting extends the setup advocated by Delbaen [7] in the numerical case. He considers utility functions instead of risk measures: utility functions are superlinear and increasing, corresponding to the properties of the superlinear set-valued expectation. Thus up to a change of terminology, our superlinear expectation corresponds to the sublinear set-valued risk measure of Hamel et al. [13, 14]. On the other hand, our sublinear expectation is a different object, which requires a separate treatment. Indeed, in the set-valued framework, a change of sign (that is, the central symmetry) does not alter the direction of the inclusion, and so it is not possible to convert a superlinear function to a sublinear one.

### Remark 3.7

Motivated by financial applications, it is possible to replace the homogeneity and sub-(super-)additivity properties with convexity or concavity, e.g.

$$\mathbb{E}_{\mathtt{u}}[\lambda X+(1-\lambda )Y]\supseteq \lambda \mathbb{E}_{\mathtt{u}}[X]+(1-\lambda )\mathbb{E}_{\mathtt{u}}[Y],\qquad \lambda \in [0,1].$$

But then $$\mathbb{E}_{\mathtt{u}}$$ can be turned into a superlinear expectation $$\widetilde{\mathbb{E}}_{\mathtt{u}}$$ for random sets in the space $$\mathbb{R}^{d+1}$$ by letting

$$\widetilde{\mathbb{E}}_{\mathtt{u}}[\{t\}\times X]=\{t\}\times t\mathbb{E}_{\mathtt{u}}[t^{-1}X],\qquad t>0.$$

The arguments of $$\widetilde{\mathbb{E}}_{\mathtt{u}}$$ are random closed convex sets $$Y=\{t\}\times X$$; they form a family closed under dilations, Minkowski sums and translations by singletons from $$\mathbb{R}_{+}\times \mathbb{R}^{d}$$. Note that selections of $$\{t\}\times X$$ are given by $$(t,\xi )$$ with $$\xi$$ being a selection of $$X$$. In view of this, all results in the homogeneous case apply to the convex case if the dimension is increased by one.

### 3.2 Examples

The simplest example is provided by the selection expectation, which is linear and law-invariant on all integrable random convex sets.

### Example 3.8

Let

$$F_{X}= \{x : \mathbb{P}[x\in X]=1 \}$$

denote the set of fixed points of a random closed set $$X$$. If $$X$$ is almost surely convex, then $$F_{X}$$ is also convex, and if $$X$$ is compact with positive probability, then $$F_{X}$$ is compact. It is easy to see that $$F_{X+Y}$$ contains $$F_{X}+F_{Y}$$, whence $$\mathbb{E}_{\mathtt{u}}[X]=F_{X}$$ is a law-invariant superlinear expectation. With a similar idea, it is possible to define the sublinear expectation $$\mathbb{E}_{\mathtt{e}}[X]=\operatorname{supp}X$$ as the support of $$X$$, which is the set of points $$x \in \mathbb{R}^{d}$$ such that $$X$$ hits any open neighbourhood of $$x$$ with positive probability. By the monotonicity property, $$x+C=\mathbb{E}_{\mathtt{u}}[x+C]\subseteq \mathbb{E}_{\mathtt{u}}[X]$$ for any $$x\in F_{X}$$, whence $$F_{X}$$ is a subset of any other normalised superlinear expectation of $$X$$. By a similar argument, $$\operatorname{supp}X$$ dominates any other constant-preserving sublinear expectation.

### Example 3.9

Fix $$C=\{0\}$$ and let $$X=[\beta ,\infty )\subseteq \mathbb{R}$$ be a half-line. The functional is superlinear if and only if $$\beta \mapsto \mathtt{e}(\beta )$$ is sublinear in the usual sense of (1.2). For random sets of the type $$Y=(-\infty ,\beta ]$$, the superlinearity of corresponds to the univariate superlinearity of $$\beta \mapsto \mathtt{u}(\beta )$$. This example shows that numerical sublinear expectations may be converted to both sublinear and superlinear set-valued ones depending on the choice of relevant random sets.

### Example 3.10

Let $$X=[\beta ',\beta ]$$ be a random interval on the line with $$\beta ,\beta '\in L^{p}(\mathbb{R})$$ and let $$C=\{0\}$$. Then is the interval formed by a numerical superlinear expectation of $$\beta '$$ and a numerical sublinear expectation of $$\beta$$ such that $$\mathtt{u}$$ is dominated by $$\mathtt{e}$$, e.g. if $$\mathtt{u}$$ and $$\mathtt{e}$$ form an exact dual pair. The superlinear expectation may be empty.
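This interval construction can be sketched on a finite sample space. As a stand-in for a generic exact dual pair we take the worst/best case pair $$\mathtt{e}=\max$$ and $$\mathtt{u}=\min$$ (our simplifying choice, corresponding to the representing cone of all nonnegative densities):

```python
# Exact dual pair on a finite sample space: e is worst case, u is best case.
def e(beta): return max(beta)   # numerical sublinear expectation
def u(beta): return min(beta)   # numerical superlinear expectation

beta_lo = [0.0, 1.0, -0.5]      # left endpoints  beta'
beta_hi = [2.0, 1.5, 3.0]       # right endpoints beta  (beta' <= beta pointwise)

# set-valued expectation of the random interval [beta', beta]:
interval = (u(beta_lo), e(beta_hi))
assert interval[0] <= interval[1]   # nonempty, since u is dominated by e
```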

### 3.3 Expectations of singletons

The additivity property on deterministic singletons immediately yields the following useful fact.

### Lemma 3.11

We have , and the same holds for the superlinear expectation.

Fix $$C=\{0\}$$. Restricted to singletons, the sublinear expectation is a homogeneous map that satisfies

Note that is not necessarily a singleton. If is a singleton for each $$\xi \in L^{p}( \mathbb{R}^{d})$$, then is linear on $$L^{p}(\mathbb{R}^{d})$$. Assuming in addition lower semicontinuity, the sublinear expectation then becomes the usual (linear) expectation.

The following result concerns the superlinear expectation of singletons. For a general cone $$C$$, a similar result holds with singletons replaced by sets $$\xi +C$$.

### Proposition 3.12

Let $$C=\{0\}$$. For each $$\xi \in L^{p}(\mathbb{R}^{d})$$ and any normalised superlinear expectation , the set is either empty or a singleton, and is additive on .

### Proof

By (3.2) applied to $$X=\{\xi \}$$ and $$Y=\{-\xi \}$$, we have

whence is either empty or a singleton, and then . If and are singletons (and so are nonempty) for $$\xi ,\xi '\in L^{p}(\mathbb{R}^{d})$$, then

whence the inclusion turns into equality. □

In view of Proposition 3.12, if we additionally impose upper semicontinuity on the superlinear expectation, then equals $$\{\mathbb{E}[\xi ]\}$$ or is empty for each $$p$$-integrable $$\xi$$. The family of $$\xi \in L^{p}(\mathbb{R}^{d})$$ such that is then a convex cone in $$L^{p}(\mathbb{R}^{d})$$.

### 3.4 Nonlinear expectations of random convex functions

A lower semicontinuous convex function $$f:\mathbb{R}^{d}\to [0,\infty ]$$ yields a convex set $$T_{f}$$ in $$\mathbb{R}^{d+1}$$ uniquely identified by its support function

$$h\big(T_{f},(t,x)\big)= \textstyle\begin{cases} t f(x/t), &\quad t>0, \\ 0, & \quad \text{otherwise}. \end{cases}$$

This support function is called the perspective transform of $$f$$; see Hiriart-Urruty and Lemaréchal [17, Sect. IV.2.2]. Note that $$f$$ can be recovered by letting $$t=1$$ in the support function of $$T_{f}$$.
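A numerical sketch of the perspective transform for the scalar convex function $$f(x)=x^{2}$$ (our example choice; the paper's setting is $$\mathbb{R}^{d}$$), checking the recovery of $$f$$ at $$t=1$$ and positive homogeneity:

```python
def f(x):
    return x * x   # a nonnegative convex function on R

def perspective(t, x):
    # h(T_f, (t, x)) = t * f(x / t) for t > 0, and 0 otherwise
    return t * f(x / t) if t > 0 else 0.0

# f is recovered by letting t = 1:
assert perspective(1.0, 3.0) == f(3.0)
# positive homogeneity, as required of a support function:
assert abs(perspective(1.0, 6.0) - 2 * perspective(0.5, 3.0)) < 1e-12
```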

If $$x \mapsto \xi (x)$$ is a random nonnegative lower semicontinuous convex function on $$\mathbb{R}^{d}$$, then its sublinear expectation can be defined as , and the superlinear one is defined similarly. With this definition, all constructions from this paper apply to random functions.

## 4 Extensions of nonlinear expectations

### 4.1 Minimal extension

The minimal extension of a sublinear set-valued expectation on random sets from $$L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$ is defined by

(4.1)

where $$\overline{\mathrm{co}}$$ denotes the closed convex hull operation. The extension is called minimal since it is the smallest sublinear expectation compatible with the values of the original expectation on sets $$\xi +C$$. It extends a sublinear expectation defined on sets $$\xi +C$$ to all $$p$$-integrable random closed sets $$X$$ such that $$X=X+C$$ a.s. In terms of support functions, the minimal extension is given by

(4.2)

### Proposition 4.1

If is a sublinear expectation defined on random sets $$\xi +C$$ for $$\xi \in L^{p}(\mathbb{R}^{d})$$, then its minimal extension (4.1) is a sublinear expectation.

### Proof

The additivity of on deterministic singletons follows from this property of . For a deterministic $$F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$,

The homogeneity and monotonicity properties of are obvious. The subadditivity follows from the fact that $$L^{p}(X+Y)$$ is the $$L^{p}$$-closure of the sum $$L^{p}(X)+L^{p}(Y)$$; see [25, Proposition 2.1.6]. □

### 4.2 Maximal extension

Extending a superlinear expectation from its values on half-spaces yields its maximal extension

(4.3)

being the intersection of superlinear expectations of random half-spaces

$$H_{\eta }(X)=H_{\eta }\big(h(X,\eta )\big)$$

almost surely containing $$X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$. The maximal extension is the largest superlinear expectation consistent with the values of the original one on half-spaces. Since $$H_{\eta }(h(X,\eta ))=H_{t\eta }(h(X,t\eta ))$$ for all $$t>0$$, it is possible to take the intersection in (4.3) over $$\eta \in L^{q}( C^{o} )$$.

### Proposition 4.2

If is superlinear on half-spaces with the same normal, that is,

(4.4)

for all $$\beta ,\beta '\in L^{p}(\mathbb{R})$$ and $$\eta \in L^{0}(\mathbb{S}^{d-1}\cap C^{o} )$$, and is scalarly upper semicontinuous on half-spaces with the same normal, that is,

if $$\beta _{n}\to \beta$$ in $$\sigma (L^{p},L^{q})$$, then its maximal extension given by (4.3) is superlinear and upper semicontinuous with respect to the scalar convergence of random closed convex sets. If is law-invariant on half-spaces, then is law-invariant.

### Proof

The additivity on deterministic singletons follows from the fact that we have $$H_{\eta }(X+a)=H_{\eta }(X)+a$$ for all $$a\in \mathbb{R}^{d}$$. If $$F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$ is deterministic, then

The homogeneity and monotonicity properties of the extension are obvious. For two $$p$$-integrable random closed convex sets $$X$$ and $$Y$$, (4.4) yields that

Assume that $$(X_{n})$$ scalarly converges to $$X$$. Let and let $$(x_{n_{k}})$$ converge to $$x$$. Then for all $$\eta \in L^{0}(\mathbb{S}^{d-1}\cap C^{o} )$$. Since $$h(X_{n_{k}}, \eta )\to h(X,\eta )$$ in $$\sigma (L^{p},L^{q})$$, scalar upper semicontinuity of on half-spaces yields that we have , whence for all $$\eta$$. Therefore , confirming the upper semicontinuity of the maximal extension. The law-invariance property is straightforward. □

It is possible to let $$\eta$$ in (4.3) be deterministic and define

(4.5)

With this reduced maximal extension, the superlinear expectation is extended from its values on half-spaces with deterministic normal vectors. Note that the reduced maximal extension may be equal to the whole space, e.g. for $$X$$ being a half-space $$H_{\eta }(0)$$ with a nondeterministic normal. It is obvious that and is constant-preserving. The reduced maximal extension is particularly useful for Hausdorff-approximable random closed sets.

### 4.3 Exact nonlinear expectations

It is possible to apply the maximal extension to the sublinear expectation and the minimal extension to the superlinear one, resulting in and . The monotonicity property yields that for each $$p$$-integrable random closed set $$X$$,

(4.6)

It is easy to see that each extension is an idempotent operation, e.g. the minimal extension of coincides with .

A sublinear expectation is said to be minimal (respectively, maximal) if it coincides with its minimal (respectively, maximal) extension. A superlinear expectation is said to be reduced maximal if .

If (4.6) holds with the first two inclusions being equalities (that is, coincides with its minimal and maximal extensions), then is called exact. The same applies to superlinear expectations. Note that the selection expectation is exact on all integrable random closed convex sets, its minimality corresponds to (2.1) and maximality is (2.3).

Since random convex closed sets can be represented either as families of their selections or as intersections of half-spaces, the minimal representation of an exact nonlinear expectation may be considered its primal representation, while the maximal representation becomes the dual one.

## 5 Sublinear set-valued expectations

### 5.1 Duality for minimal sublinear expectations

The minimal sublinear expectation is determined by its restriction on random sets $$\xi +C$$; the following result characterises such a restriction.

### Lemma 5.1

A map defined for $$\xi \in L^{p}(\mathbb{R}^{d})$$ is a $$\sigma (L^{p},L^{q})$$-lower semicontinuous normalised sublinear expectation if and only if for $$u\notin C^{o}$$ and

where $$\mathcal{Z}_{u}$$, $$u\in C^{o}$$, are convex $$\sigma (L^{q},L^{p})$$-closed cones in $$L^{q}( C^{o} )$$ such that

$$\{\mathbb{E}[\zeta ]: \zeta \in \mathcal{Z}_{u}\}=\{tu : t\geq 0\}$$

for all $$u\neq 0$$, $$\mathcal{Z}_{cu}=\mathcal{Z}_{u}$$ for all $$c>0$$, $$\mathcal{Z}_{0}=\{0\}$$ and

$$\mathcal{Z}_{u+v}\subseteq \mathcal{Z}_{u}+\mathcal{Z}_{v},\qquad u,v \in C^{o} .$$
(5.1)

### Proof

(Sufficiency) For linearly independent $$u$$ and $$v$$ in $$\mathbb{R}^{d}$$, each $$\zeta \in \mathcal{Z}_{u+v}$$ satisfies $$\zeta =\zeta _{1}+\zeta _{2}$$ with $$\mathbb{E}[\zeta _{1}]=t_{1}u$$ and $$\mathbb{E}[\zeta _{2}]=t_{2}v$$. Thus $$\mathbb{E}[\zeta ]=t(u+v)$$ only if $$t_{1}=t_{2}=t$$. Therefore,

Since $$\mathcal{Z}_{cu}=\mathcal{Z}_{u}=c\mathcal{Z}_{u}$$ for any $$c>0$$,

so the function is sublinear in $$u$$ and hence a support function. The additivity property on singletons follows from the construction since

$$\sup _{\zeta \in \mathcal{Z}_{u}, \mathbb{E}[\zeta ]=u}\mathbb{E}[ \langle \zeta ,\xi +a\rangle ] =\sup _{\zeta \in \mathcal{Z}_{u}, \mathbb{E}[\zeta ]=u}\mathbb{E}[\langle \zeta ,\xi \rangle ]+\langle a,u \rangle$$

for each deterministic $$a\in \mathbb{R}^{d}$$. Furthermore, which implies that . The homogeneity property is obvious. The function is subadditive since

Finally, for $$u\in C^{o}$$, the set $$\{\zeta \in \mathcal{Z}_{u} : \mathbb{E}[\zeta ]=u\}$$ is closed in $$\sigma (L^{q},L^{p})$$. Since is the support function of the closed set $$\{\zeta \in \mathcal{Z}_{u} : \mathbb{E}[\zeta ]=u\}$$ in the direction $$\xi$$, it is lower semicontinuous as a function of $$\xi$$ so that (3.3) holds.

(Necessity) By Proposition 3.2, the support function is infinite for $$u\notin C^{o}$$. For $$u\in C^{o}$$, let

The map is sublinear from $$L^{p}(\mathbb{R}^{d})$$ to $$(-\infty ,\infty ]$$. By sublinearity, $$\mathcal{A}_{u}$$ is a convex cone in $$L^{p}(\mathbb{R}^{d})$$, and $$\mathcal{A}_{cu}=\mathcal{A}_{u}$$ for all $$c>0$$. Furthermore, $$\mathcal{A}_{u}$$ is closed with respect to the scalar convergence $$\xi _{n}+C\to \xi +C$$ by the assumed lower semicontinuity of . Hence it is closed with respect to the convergence $$\xi _{n}\to \xi$$ in $$\sigma (L^{p},L^{q})$$.

Note that $$0\in \mathcal{A}_{u}$$ and let

$$\mathcal{Z}_{u}= \{\zeta \in L^{q}(\mathbb{R}^{d}) : \mathbb{E}[ \langle \zeta ,\xi \rangle ]\leq 0\; \text{for all} \; \xi \in \mathcal{A}_{u} \}$$

be the polar cone of $$\mathcal{A}_{u}$$. For $$u=0$$, we have $$\mathcal{A}_{0}=L^{p}(\mathbb{R}^{d})$$ and $$\mathcal{Z}_{0}=\{0\}$$. Consider $$u\neq 0$$. Letting $$\xi =a\mathbf{1}_{A}$$ for an event $$A$$ and a deterministic $$a$$ such that $$\langle a,u\rangle \leq 0$$, we obtain a member of $$\mathcal{A}_{u}$$, whence each $$\zeta \in \mathcal{Z}_{u}$$ satisfies $$\langle \mathbb{E}[\zeta ],a\mathbf{1}_{A}\rangle \leq 0$$ whenever $$\langle a,u\rangle \leq 0$$. Thus $$\zeta \in C^{o}$$ a.s., and letting $$A=\Omega$$ yields that $$\mathbb{E}[\zeta ]=tu$$ for some $$t\geq 0$$ and all $$\zeta \in \mathcal{Z}_{u}$$. The subadditivity property of the support function of yields that $$\mathcal{A}_{u+v}\supseteq (\mathcal{A}_{u}\cap \mathcal{A}_{v})$$ for $$u,v\in C^{o}$$. By a Banach space analogue of Schneider [33, Theorem 1.6.9], the polar of $$\mathcal{A}_{u}\cap \mathcal{A}_{v}$$ is the closed sum $$\mathcal{Z}_{u}+\mathcal{Z}_{v}$$ of the polars, whence (5.1) holds.

By the definition of $$\mathcal{A}_{u}$$,

Since $$\mathcal{A}_{u}$$ is convex and $$\sigma (L^{p},L^{q})$$-closed, the bipolar theorem yields that

□
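The polar-cone mechanics used in the proof above can be sketched in $$\mathbb{R}^{2}$$ for a finitely generated cone; this is a finite-dimensional toy, not the $$L^{p}$$ setting of the lemma, and the generators are arbitrary:

```python
# Polar of a finitely generated cone A = cone{a1, a2} in R^2:
# Z = {z : <z, a> <= 0 for all a in A}; testing the generators suffices.
def in_polar(z, generators, tol=1e-12):
    return all(z[0] * a[0] + z[1] * a[1] <= tol for a in generators)

A = [(1.0, 0.0), (1.0, 1.0)]          # generators of the cone A
assert in_polar((-1.0, 0.0), A)       # the negative x-axis lies in the polar
assert in_polar((-1.0, 0.5), A)       # -1 <= 0 and -0.5 <= 0 on the generators
assert not in_polar((1.0, 0.0), A)    # a generator of A is not in the polar
```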

### Theorem 5.2

A function is a scalarly lower semicontinuous minimal normalised sublinear expectation if and only if admits the representation

(5.2)

and for $$u\notin C^{o}$$, where $$C^{o}$$ and the sets $$\mathcal{Z}_{u}$$, $$u\in \mathbb{R}^{d}$$, satisfy the conditions of Lemma 5.1.

### Proof

(Necessity) Lemma 5.1 applies to the restriction of onto random sets $$\xi +C$$. By the minimality assumption, coincides with its minimal extension (4.2). By Lemma 5.1, (4.2) and (2.4), for $$u\in C^{o}$$,

(Sufficiency) The right-hand side of (5.2) is sublinear in $$u$$ and so is a support function. The additivity on singletons, monotonicity, subadditivity and homogeneity properties of are obvious. For a deterministic $$F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$, the sublinearity of the support function yields that

whence . The minimality of follows from

Since the support function of given by (5.2) is the supremum of scalarly continuous functions of $$X$$, the minimal sublinear expectation is scalarly lower semicontinuous. □

### Remark 5.3

The sets $$\mathcal{Z}_{u}$$, $$u\in \mathbb{R}^{d}$$, constructed in the proof of necessity in Lemma 5.1 are maximal sets representing the sublinear expectation.

### Corollary 5.4

If $$u\in \mathcal{Z}_{u}$$ for all $$u\in \mathbb{R}^{d}$$, then for all $$p$$-integrable $$X$$ and any scalarly lower semicontinuous normalised minimal sublinear expectation .

### Proof

By (5.2), for all $$u\in C^{o}$$. □

### Remark 5.5

The sublinear expectation given by (5.2) is law-invariant if and only if the sets $$\mathcal{Z}_{u}$$ are law-complete, that is, with each $$\zeta \in \mathcal{Z}_{u}$$, the set $$\mathcal{Z}_{u}$$ contains all random vectors that have the same distribution as $$\zeta$$. If $$p=\infty$$, then the elements of $$\mathcal{Z}_{u}$$ can be represented as vectors composed of probability measures absolutely continuous with respect to ℙ. This is also possible for $$p\in [1,\infty )$$ using measures with $$q$$-integrable densities.

### Example 5.6

Let $$Z$$ be a random matrix with $$\mathbb{E}[Z]$$ being the identity matrix, and let $$\mathcal{Z}_{u}=\{tZu^{\top }: t\geq 0\}$$ for $$u\in C^{o} =\mathbb{R}^{d}$$, where $$C=\{0\}$$. Then (5.2) turns into the condition , whence . In this example, is not solely determined by $$h(X,u)$$. This sublinear expectation is not necessarily constant-preserving.

### Example 5.7

Let $$X=H_{\eta }(\beta )$$ with $$\beta \in L^{p}(\mathbb{R})$$ and $$\eta \in L^{0}(\mathbb{S}^{d-1}\cap C^{o} )$$ be a random half-space. By (5.2), is finite for $$u\in \mathbb{S}^{d-1}\cap C^{o}$$ only if each $$\zeta \in \mathcal{Z}_{u}$$ with $$\mathbb{E}[\zeta ]=u$$ satisfies $$\zeta =\gamma \eta$$ a.s. with $$\gamma \in L^{q}(\mathbb{R}_{+})$$. For such $$u$$,

If the normal $$\eta =u$$ is deterministic and

$$\mathcal{Z}_{u}\subseteq \{\gamma u : \gamma \in L^{q}(\mathbb{R}_{+}) \},$$
(5.3)

then with

$$t=\sup _{\gamma u\in \mathcal{Z}_{u},\mathbb{E}[\gamma ]=1} \mathbb{E}[ \gamma \beta ].$$

Otherwise, . Thus the sublinear expectation of a random half-space with a deterministic normal is either a half-space with the same normal or the whole space.

### 5.2 Exact sublinear expectation

Consider now the situation when for each $$u$$, the value of is solely determined by the distribution of $$h(X,u)$$. This is the case if the supremum in (5.2) involves only $$\zeta$$ such that $$\zeta =\gamma u$$ for some $$\gamma \in L^{q}(\mathbb{R}_{+})$$. The following result shows that this condition characterises constant-preserving minimal sublinear expectations, which then necessarily become exact ones.

### Theorem 5.8

A mapping is a scalarly lower semicontinuous constant-preserving minimal sublinear expectation if and only if for $$u\notin C^{o}$$ and

(5.4)

where $$\mathcal{M}_{u}$$, $$u\in C^{o}$$, are convex $$\sigma (L^{q},L^{p})$$-closed cones in $$L^{q}(\mathbb{R}_{+})$$ with $$\mathcal{M}_{cu}=\mathcal{M}_{u}$$ for all $$c>0$$ and $$\mathcal{M}_{u+v}\subseteq \mathcal{M}_{u}\cap \mathcal{M}_{v}$$ for all $$u,v\in C^{o}$$.

### Proof

(Sufficiency) If $$\mathcal{M}_{u}$$, $$u\in C^{o}$$, satisfy the imposed conditions, then $$\mathcal{Z}_{u}$$ given by $$\{\gamma u : \gamma \in \mathcal{M}_{u}\}$$ for $$u\in C^{o}$$ satisfy the conditions of Lemma 5.1. Indeed, $$\mathcal{Z}_{cu}=\mathcal{Z}_{u}$$ for all $$c>0$$ and

$$\mathcal{Z}_{u+v}= \{\gamma (u+v) : \gamma \in \mathcal{M}_{u+v} \} \subseteq \{\gamma (u+v) : \gamma \in \mathcal{M}_{u}\cap \mathcal{M}_{v} \} \subseteq \mathcal{Z}_{u}+\mathcal{Z}_{v}$$

for all $$u,v\in C^{o}$$. If $$F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$ is deterministic, then

whence is constant-preserving.

(Necessity) Since is minimal, the support function of is given by (5.2). The constant-preserving property yields that for all half-spaces $$H_{u}(t)$$ with $$u\in C^{o}$$. By the argument from Example 5.7, the minimal sublinear expectation of a half-space $$H_{u}(t)$$ is distinct from the whole space only if (5.3) holds.

The properties of $$\mathcal{Z}_{u}$$ imply those of $$\mathcal{M}_{u}=\{\gamma : \gamma u\in \mathcal{Z}_{u}\}$$. Indeed, assume that $$\gamma \in \mathcal{M}_{u+v}$$ so that $$\gamma (u+v)\in \mathcal{Z}_{u+v}$$. Hence $$\gamma (u+v)\in (\mathcal{Z}_{u}+\mathcal{Z}_{v})$$, meaning that $$\gamma (u+v)$$ is the limit of $$\gamma _{1n}u+\gamma _{2n}v$$ for $$\gamma _{1n}u\in \mathcal{Z}_{u}$$ and $$\gamma _{2n}v\in \mathcal{Z}_{v}$$, $$n \in \mathbb{N}$$. The linear independence of $$u$$ and $$v$$ yields $$\gamma _{1n}\to \gamma$$ and $$\gamma _{2n}\to \gamma$$, whence $$\gamma \in (\mathcal{M}_{u}\cap \mathcal{M}_{v})$$. □

It is possible to rephrase (5.4) as

(5.5)

for numerical sublinear expectations

$$\mathtt{e}_{u}(\beta ) =\sup _{\gamma \in \mathcal{M}_{u},\mathbb{E}[ \gamma ]=1}\mathbb{E}[\gamma \beta ], \qquad \beta \in L^{p}(\mathbb{R}),$$

defined by an analogue of (1.5). Since the negative part of $$h(X,u)$$ is $$p$$-integrable, it is possible to consistently let $$\mathtt{e}_{u}(h(X,u))=\infty$$ in (5.5) if $$h(X,u)$$ is not $$p$$-integrable.

### Corollary 5.9

Each scalarly lower semicontinuous constant-preserving minimal sublinear expectation is exact.

### Proof

Since (5.4) yields that if $$\eta$$ is random, the maximal extension of by an analogue of (4.3) reduces to deterministic $$\eta$$, and so coincides with the reduced maximal extension. For $$u\in \mathbb{S}^{d-1}\cap C^{o}$$ and $$\beta \in L^{p}(\mathbb{R})$$, we have ; cf. Example 5.7. Thus the reduced maximal extension of is given by

Comparing with (5.5), we see that . The opposite inclusion is obvious, whence . □

### Corollary 5.10

If is a scalarly lower semicontinuous constant-preserving minimal normalised sublinear expectation, then for each deterministic $$F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$.

### Corollary 5.11

Let be a scalarly lower semicontinuous constant-preserving minimal law-invariant sublinear expectation. Then we have for all $$X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$ and any $$\sigma$$-algebra $$\mathfrak{H}\subseteq \mathfrak{F}$$. In particular, .

### Proof

The law-invariance of implies that $$\mathtt{e}_{u}$$ is law-invariant. The sublinear expectation $$\mathtt{e}_{u}$$ is dilatation-monotonic, meaning that $$\mathtt{e}_{u}(\mathbb{E}[\beta |\mathfrak{H}])\leq \mathtt{e}_{u}( \beta )$$ for all $$\beta \in L^{p}(\mathbb{R})$$; see Föllmer and Schied [10, Corollary 4.59] for this fact derived for risk measures. The statement follows from (5.5). □
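Dilatation monotonicity can be checked by hand on four equally likely outcomes, taking $$\mathtt{e}_{u}$$ to be the mean of the two largest outcomes (an average-quantile example of our choosing; conditioning on a coarser $$\sigma$$-algebra averages within its blocks):

```python
# e: mean of the two largest of four equally likely outcomes; this is a
# law-invariant sublinear expectation of average-quantile type.
def e(beta):
    return sum(sorted(beta)[-2:]) / 2

beta = [1.0, 3.0, 0.0, 2.0]
# sigma-algebra H generated by the partition {w1, w2}, {w3, w4}:
cond = [2.0, 2.0, 1.0, 1.0]     # E[beta | H], blockwise averages
assert e(cond) <= e(beta)       # dilatation monotonicity: 2.0 <= 2.5
```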

The following result identifies the particularly important case when the families $$\mathcal{M}_{u}=\mathcal{M}$$ do not depend on $$u$$. This essentially means that the sublinear expectation preserves centred balls. Let $$B_{r}$$ denote the ball of radius $$r$$ centred at the origin.

### Theorem 5.12

A scalarly lower semicontinuous constant-preserving minimal sublinear expectation satisfies for all $$\beta \in L^{p}(\mathbb{R}_{+})$$ and some $$r\geq 0$$ if and only if (5.4) holds with $$\mathcal{M}_{u}=\mathcal{M}$$ for all $$u\neq 0$$. Then

(5.6)

where $$\mathtt{e}$$ admits the representation (1.5). Furthermore,

(5.7)

### Proof

Assume that the $$\mathcal{M}_{u}$$ are constructed as in the proof of Theorem 5.8 so that $$\mathcal{M}_{u}$$ is maximal for each $$u\in C^{o}$$. The right-hand side of

does not depend on $$u\in \mathbb{S}^{d-1}\cap C^{o}$$ if and only if $$\mathcal{M}_{u}=\mathcal{M}$$ for all $$u\in C^{o}$$. The representation (5.6) follows from (5.5) with $$\mathcal{M}_{u}=\mathcal{M}$$. In view of (1.5),

\begin{aligned} \sup _{\gamma \in \mathcal{M},\mathbb{E}[\gamma ]=1} \mathbb{E}[h( \gamma X,u) ] &=\sup _{\gamma \in \mathcal{M},\mathbb{E}[ \gamma ]=1} \mathbb{E}[\gamma h(X,u) ] \\ &=\mathtt{e}\big(h(X,u)\big)\,. \end{aligned}

By (5.6), the support functions of both sides of (5.7) are identical. □

If $$X=\{\xi \}$$ is a singleton, there is no need to take the convex hull on the right-hand side of (5.7).

### Remark 5.13

Equality (5.6) can be viewed as a scalarisation of the sublinear expectation. Indeed, it represents the convex set as an intersection of half-spaces $$H_{u}(\mathtt{e}(h(X,u)))$$ and so provides a dual representation of . Such scalarisations have been considered by Hamel and Heyde [13] and Hamel et al. [14] for set-valued risk measures, which are sublinear for the reverse inclusion. In that case, the exact equality may be violated, see (6.5) below, and the scalarisation is defined as the support function of the superlinear expectation.
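The scalarisation (5.6) can be sketched numerically in $$\mathbb{R}^{2}$$ by testing membership against the half-spaces $$H_{u}(\mathtt{e}(h(X,u)))$$ over a grid of directions. In the sketch below, $$X$$ takes two equally likely polytope values and $$\mathtt{e}$$ is the worst-case expectation; both are our simplifying choices:

```python
import math

def h(points, u):
    # support function of a finite point set in R^2
    return max(x * u[0] + y * u[1] for x, y in points)

scenarios = [[(0, 0), (1, 0), (0, 1)], [(2, 0), (0, 2)]]  # realisations of X
def e(beta): return max(beta)                              # worst-case e

def h_EX(u):
    # direction-wise scalarisation: e applied to the support function values
    return e([h(S, u) for S in scenarios])

def contains(x, n_dirs=360):
    # x belongs to the intersection of half-spaces H_u(e(h(X,u)))
    dirs = [(math.cos(2 * math.pi * k / n_dirs), math.sin(2 * math.pi * k / n_dirs))
            for k in range(n_dirs)]
    return all(x[0] * u[0] + x[1] * u[1] <= h_EX(u) + 1e-9 for u in dirs)

assert contains((0.0, 0.0))       # the origin lies in every realisation
assert not contains((3.0, 3.0))   # outside the hull of all realisations
```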

### Example 5.14

For an integrable $$X$$ and $$n\in \mathbb{N}$$, consider the sublinear expectation

where $$X_{1},\dots ,X_{n}$$ are independent copies of $$X$$. It is easy to see that is a minimal constant-preserving sublinear expectation; it is given by (5.6) with the corresponding numerical sublinear expectation $$\mathtt{e}(\beta )$$ being the expected maximum of $$n$$ i.i.d. copies of $$\beta \in L^{1}(\mathbb{R})$$. By Corollary 5.9, this sublinear expectation is exact.
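For a finite distribution, the corresponding numerical sublinear expectation can be computed by exact enumeration over the independent copies; this is a small sketch with names of our choosing:

```python
from itertools import product

def e_max(values, probs, n):
    # E[max(beta_1, ..., beta_n)] for n i.i.d. copies of a finite-valued beta
    total = 0.0
    for combo in product(range(len(values)), repeat=n):
        p = 1.0
        for i in combo:
            p *= probs[i]
        total += p * max(values[i] for i in combo)
    return total

vals, probs = [0.0, 1.0], [0.5, 0.5]
assert e_max(vals, probs, 1) == 0.5                 # n = 1 is the plain mean
assert abs(e_max(vals, probs, 2) - 0.75) < 1e-12    # P(max = 1) = 3/4
assert e_max(vals, probs, 3) >= e_max(vals, probs, 2)  # grows with n
```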

### Example 5.15

For $$\alpha \in (0,1)$$, let $$\mathcal{P}_{\alpha }$$ be the family of random variables $$\gamma$$ with values in $$[0,\alpha ^{-1}]$$ and such that $$\mathbb{E}[\gamma ]=1$$. Furthermore, let ℳ be the cone generated by $$\mathcal{P}_{\alpha }$$, that is, $$\mathcal{M}=\{t\gamma :\gamma \in \mathcal{P}_{\alpha }, t\geq 0\}$$. In finance, the set $$\mathcal{P}_{\alpha }$$ generates the average value-at-risk, which is the risk measure obtained as the average quantile; see Föllmer and Schied [10, Definition 4.43]. Similarly, the numerical sublinear expectation $$\mathtt{e}$$ and superlinear expectation $$\mathtt{u}$$ generated by this set ℳ are represented as average quantiles. Namely, $$\mathtt{e}(\beta )$$ is the average of the quantiles of $$\beta$$ at levels $$t\in (1-\alpha ,1)$$, and $$\mathtt{u}(\beta )$$ is the average of the quantiles at levels $$t\in (0,\alpha )$$.
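On a sample space with $$N$$ equally likely outcomes and $$\alpha =k/N$$, the supremum (infimum) over densities bounded by $$\alpha ^{-1}$$ is attained by spreading mass uniformly over the $$k$$ most extreme outcomes, so both expectations reduce to averages of order statistics. A discrete sketch under this assumption:

```python
def e_avar(beta, k):
    # sublinear e for alpha = k/len(beta): mean of the k largest outcomes
    return sum(sorted(beta)[-k:]) / k

def u_avar(beta, k):
    # superlinear u for the same alpha: mean of the k smallest outcomes
    return sum(sorted(beta)[:k]) / k

beta = [1.0, -2.0, 3.0, 0.5]
assert e_avar(beta, 2) == 2.0      # mean of {3.0, 1.0}
assert u_avar(beta, 2) == -0.75    # mean of {-2.0, 0.5}
mean = sum(beta) / len(beta)
assert u_avar(beta, 2) <= mean <= e_avar(beta, 2)
```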

## 6 Superlinear set-valued expectations

### 6.1 Duality for maximal superlinear expectations

Consider a superlinear expectation defined on $$L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$. If $$C=\{0\}$$, we deal with all $$p$$-integrable random closed convex sets. Recall that $$C^{o}$$ is the polar cone of $$C$$.

### Theorem 6.1

A map is a scalarly upper semicontinuous normalised maximal superlinear expectation if and only if

(6.1)

for a collection of convex $$\sigma (L^{q},L^{p})$$-closed cones $$\mathcal{M}_{\eta }\subseteq L^{q}(\mathbb{R}_{+})$$ parametrised by $$\eta \in L^{0}(\mathbb{S}^{d-1}\cap C^{o} )$$ and such that $$\mathcal{M}_{u}$$ is strictly larger than $$\{0\}$$ for each deterministic $$\eta =u\in \mathbb{S}^{d-1}\cap C^{o}$$.

### Proof

(Necessity) Fix $$\eta \in L^{0}(\mathbb{S}^{d-1}\cap C^{o} )$$ and let $$\mathcal{A}_{\eta }$$ be the set of all $$\beta \in L^{p}(\mathbb{R})$$ such that contains the origin. Since , we have $$0\in \mathcal{A}_{\eta }$$. Since , the family $$\mathcal{A}_{u}$$ does not contain $$\beta =t$$ for $$t<0$$ and $$u\in \mathbb{S}^{d-1}\cap C^{o}$$.

If $$\beta _{n}\to \beta$$ in $$\sigma (L^{p},L^{q})$$, then $$\mathbb{E}[h(H_{\eta }(\beta _{n}),\gamma \eta )]\to \mathbb{E}[h(H_{\eta }(\beta ),\gamma \eta )]$$ for all $$\gamma \in L^{q}(\mathbb{R})$$, whence $$H_{\eta }(\beta _{n})\to H_{\eta }(\beta )$$ scalarly in $$\sigma (L^{p},L^{q})$$. Therefore,

by the assumed upper semicontinuity of . Thus $$\mathcal{A}_{\eta }$$ is a convex $$\sigma (L^{p},L^{q})$$-closed cone in $$L^{p}(\mathbb{R})$$. Consider its positive dual cone

$$\mathcal{M}_{\eta }= \{\gamma \in L^{q}(\mathbb{R}) : \mathbb{E}[\gamma \beta ]\geq 0\; \text{for all}\; \beta \in \mathcal{A}_{\eta }\}.$$

Since , we have whenever $$C\subseteq X$$ a.s. In view of this, if $$\beta$$ is a.s. nonnegative, then $$H_{\eta }(\beta )$$ a.s. contains zero and so $$\beta \in \mathcal{A}_{\eta }$$. Thus each $$\gamma$$ from $$\mathcal{M}_{\eta }$$ is a.s. nonnegative. The bipolar theorem yields that

$$\mathcal{A}_{\eta }= \{\beta \in L^{p}(\mathbb{R}) : \mathbb{E}[\gamma \beta ]\geq 0\; \text{for all}\; \gamma \in \mathcal{M}_{\eta }\}.$$
(6.2)

Since $$(-t)\notin \mathcal{A}_{u}$$, (6.2) implies that the cone $$\mathcal{M}_{u}$$ is strictly larger than $$\{0\}$$. Since is assumed to be maximal, (4.3) implies that

(Sufficiency) It is easy to check that given by (6.1) is additive on deterministic singletons, homogeneous and monotonic. If $$F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$ is deterministic, then letting $$\eta =u$$ in (6.1) be deterministic and using the nontriviality of $$\mathcal{M}_{u}$$ yields that . Furthermore, , since contains the origin and so is not empty. The superadditivity of follows from the fact that

\begin{aligned} & \{x: \langle x,\mathbb{E}[\gamma \eta ] \rangle \leq \mathbb{E}[h(X, \gamma \eta ) ] +\mathbb{E}[h(Y,\gamma \eta ) ] \} \\ & \supseteq \{x: \langle x,\mathbb{E}[\gamma \eta ] \rangle \leq \mathbb{E}[h(X,\gamma \eta ) ] \} + \{x: \langle x,\mathbb{E}[\gamma \eta ] \rangle \leq \mathbb{E}[h(Y,\gamma \eta ) ] \}. \end{aligned}

It is easy to see that coincides with its maximal extension.

Note that (6.1) is equivalently written as

If $$(X_{n})$$ scalarly converges to $$X$$ and $$x_{n_{k}}\to x$$ for , $$k\in \mathbb{N}$$, then $$\mathbb{E}[h(X_{n}-x_{n},\gamma \eta )]$$ converges to $$\mathbb{E}[h(X-x,\gamma \eta )]$$ for all $$\gamma \in L^{q}(\mathbb{R}_{+})$$ and $$\eta$$ from $$L^{0}(\mathbb{S}^{d-1}\cap C^{o} )$$. Thus $$\mathbb{E}[h(X-x,\gamma \eta )]\geq 0$$, whence , and the upper semicontinuity of follows. □

In contrast to the sublinear case (see Theorem 5.2), the cones $$\mathcal{M}_{\eta }$$ from Theorem 6.1 need not satisfy additional conditions like those imposed in Lemma 5.1. However, if the intersection in (6.1) is taken over all $$\eta \in L^{q}( C^{o} )$$, then one must require that $$\mathcal{M}_{\beta \eta }=\{\gamma /\beta : \gamma \in \mathcal{M}_{\eta }\}$$ for all $$\beta \in L^{p}((0,\infty ))$$.

### Corollary 6.2

If $$1\in \mathcal{M}_{\eta }$$ for all $$\eta$$, then for all $$p$$-integrable $$X$$ and any scalarly upper semicontinuous maximal normalised superlinear expectation .

### Proof

Restrict the intersection in (6.1) to deterministic $$\eta =u$$ and $$\gamma =1$$, so that the right-hand side of (6.1) becomes $$\mathbb{E}[X]$$. □

### Example 6.3

Let $$X=H_{\eta }(\beta )$$ be the half-space with normal $$\eta \in L^{0}(\mathbb{S}^{d-1})$$ and $$\beta \in L^{p}(\mathbb{R})$$. If $$C=\{0\}$$, the maximal superlinear expectation of $$X$$ is given by

Assume that $$d=2$$ and let $$\eta =(1,\pi )/\sqrt{1+\pi ^{2}}$$ with $$\pi$$ being an almost surely positive random variable. This example represents the case of two currencies exchangeable at rate $$\pi$$ without transaction costs. We then have

where $$\mathtt{u}$$ is the numerical superlinear expectation with the representing set

$$\mathcal{M}= \{\gamma /\sqrt{1+\pi ^{2}} : \gamma \in \mathcal{M}_{\eta }\}.$$

In particular, if $$\beta =0$$ a.s., then the random set $$H_{\eta }(0)$$ describes all portfolios available at price zero for two currencies with the exchange rate $$\pi$$, and

Hence with $$w'=(1,\mathtt{e}(\pi ))$$ and $$w''=(1,\mathtt{u}(\pi ))$$ for the exact dual pair $$\mathtt{e}$$ and $$\mathtt{u}$$ of nonlinear expectations with the representing set ℳ.

### 6.2 Reduced maximal extension

The following result can be proved similarly to Theorem 6.1 for the reduced maximal extension from (4.5).

### Theorem 6.4

A map is a scalarly upper semicontinuous normalised reduced maximal superlinear expectation if and only if

(6.3)

for a collection of nontrivial convex $$\sigma (L^{q},L^{p})$$-closed cones $$\mathcal{M}_{v}\subseteq L^{q}(\mathbb{R}_{+})$$ parametrised by $$v\in \mathbb{S}^{d-1}\cap C^{o}$$.

It is possible to take the intersection in (6.3) over all $$v\in \mathbb{S}^{d-1}$$ since $$h(X,v)=\infty$$ for $$v\notin C^{o}$$. The representation (6.3) can be equivalently written as the intersection of the half-spaces $$\{x : \langle x,v\rangle \leq \mathtt{u}_{v}(h(X,v))\}$$, where

$$\mathtt{u}_{v}(\beta )=\inf _{\gamma \in \mathcal{M}_{v},\mathbb{E}[ \gamma ]=1}\mathbb{E}[\gamma \beta ]$$
(6.4)

is a superlinear univariate expectation of $$\beta \in L^{p}(\mathbb{R})$$ for each $$v\in \mathbb{S}^{d-1}\cap C^{o}$$. The superlinear expectation (6.3) is law-invariant if and only if the families $$\mathcal{M}_{v}$$ are law-complete for all $$v$$ or, equivalently, if $$\mathtt{u}_{v}$$ is law-invariant for all $$v$$.

### Corollary 6.5

Let be a scalarly upper semicontinuous law-invariant normalised reduced maximal superlinear expectation, and let the probability space be non-atomic. Then is dilatation-monotonic, meaning that for each sub-$$\sigma$$-algebra $$\mathfrak{H}\subseteq \mathfrak{F}$$ and all $$X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$,

In particular, .

### Proof

Since $$\mathtt{u}_{v}(\beta )$$ given by (6.4) is a law-invariant concave function of $$\beta \in L^{p}(\mathbb{R})$$ and the probability space is non-atomic, it is dilatation-monotonic, meaning that $$\mathtt{u}_{v}(\mathbb{E}[\xi |\mathfrak{H}])\geq \mathtt{u}_{v}(\xi )$$; see Föllmer and Schied [10, Corollary 4.59]. Hence,

$$\mathtt{u}_{v}\big(h(X,v)\big)\leq \mathtt{u}_{v} \big(\mathbb{E}[h(X,v) |\mathfrak{H}]\big) =\mathtt{u}_{v}\big(h (\mathbb{E}[X| \mathfrak{H}],v )\big).$$

Thus the infimum on the right-hand side of (6.3) written for is dominated by the infimum corresponding to . This implies the inclusion of the two sets. □

### Example 6.6

If $$\mathcal{M}_{v}=\mathcal{M}$$ in (6.3) is nontrivial and does not depend on $$v$$, then (6.3) turns into

where $$\mathtt{u}$$ given by (6.4) is the numerical superlinear expectation with the representing set ℳ. In this case, is the largest convex set whose support function is dominated by $$\mathtt{u}(h(X,v))$$, that is,

(6.5)

Note that $$\mathtt{u}(h(X,\cdot ))$$ may fail to be a support function. The left-hand side of (6.5) is the scalarisation of the superlinear expectation ; cf. Hamel et al. [13, 14]. Since

$$\bigcap _{v\in \mathbb{S}^{d-1}\cap C^{o} } \{x : \langle x,v \rangle \leq \mathbb{E}[\gamma h(X,v) ] \}=\mathbb{E}[\gamma X]$$

for $$X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$, this reduced maximal superlinear expectation admits an equivalent representation as

(6.6)

### Example 6.7

Let $$X=\xi +C$$ for a $$\xi \in L^{p}(\mathbb{R}^{d})$$ and a deterministic convex closed cone $$C$$ that is different from the whole space. Then

If $$\mathcal{M}_{v}=\mathcal{M}$$ for all $$v\in \mathbb{S}^{d-1}\cap C^{o}$$, then $$\mathtt{u}_{v}=\mathtt{u}$$ and

A cone $$C$$ is said to be a Riesz cone (or lattice cone) if $$\mathbb{R}^{d}$$ with the partial order generated by $$C$$ is a Riesz space (or a vector lattice), that is, the maximum of any two points from $$\mathbb{R}^{d}$$ is well defined. If this is the case, then for some $$x$$, since an intersection of translations of $$C$$ is again a translation of $$C$$; see Aliprantis and Tourky [1, Theorem 1.16].
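For the Riesz cone $$C=\mathbb{R}_{-}^{d}$$, the intersection of two translates is itself a translate by the coordinatewise minimum, $$(x+C)\cap (y+C)=\min (x,y)+C$$, since $$z\in x+C$$ if and only if $$z\leq x$$ coordinatewise. A quick numerical sanity check of this membership equivalence (an illustrative sketch, not part of the paper's argument):

```python
import numpy as np

# For C = R_-^d (a Riesz cone), a point z lies in x + C iff z <= x coordinatewise,
# so (x + C) ∩ (y + C) = min(x, y) + C with the coordinatewise minimum.
def in_translate(z, x):
    return bool(np.all(z <= x))

x = np.array([1.0, -0.5, 2.0])
y = np.array([0.5, 0.0, 3.0])
m = np.minimum(x, y)                  # the translation vector min(x, y)

for z in [np.array([0.0, -1.0, 1.0]), np.array([0.9, -0.2, 2.5])]:
    assert (in_translate(z, x) and in_translate(z, y)) == in_translate(z, m)
```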

### Example 6.8

Let for $$n$$ independent copies of $$X$$, noticing that the expectation is empty if the intersection $$X_{1}\cap \cdots \cap X_{n}$$ is empty with positive probability. This superlinear expectation is not a reduced maximal one. For instance,

so that the reduced maximal extension is the largest convex set whose support function is dominated by , $$v\in \mathbb{S}^{d-1}$$. However, the support function of $$\mathbb{E}[X_{1}\cap \cdots \cap X_{n}]$$ is the expectation of the largest sublinear function dominated by $$\min (h(X_{i},v), i=1,\dots ,n)$$, and so may be a strict subset of .

For instance, let $$X=\xi +\mathbb{R}_{-}^{d}$$ for $$\xi \in L^{p}(\mathbb{R}^{d})$$. Then

where the minimum is applied coordinatewise to independent copies of $$\xi$$, while is the largest convex set whose support function is dominated by the function $$v \mapsto \mathbb{E}[\min (\langle \xi _{i},v\rangle ,i=1,\dots ,n)]$$ for $$v\in \mathbb{R}_{+}^{d}$$. Obviously,

$$\min (\langle \xi _{i},v\rangle ,i=1,\dots ,n ) \geq \langle \min ( \xi _{1},\dots ,\xi _{n}),v \rangle$$

with a possibly strict inequality.
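The inequality above is easy to check numerically; the following sketch (with arbitrary illustrative data) compares the minimum of the scalar products with the scalar product of the coordinatewise minimum for a direction $$v$$ with nonnegative coordinates:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 5, 3
xi = rng.normal(size=(n, d))          # n independent copies of xi (one per row)
v = rng.uniform(size=d)               # a direction v in R_+^d

lhs = np.min(xi @ v)                  # min_i <xi_i, v>
rhs = np.min(xi, axis=0) @ v          # <min(xi_1, ..., xi_n), v>, coordinatewise min
assert lhs >= rhs - 1e-12             # the displayed inequality, up to rounding
```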

### 6.3 Minimal extension of a superlinear expectation

In any nontrivial case, the superlinear expectation of a nondeterministic singleton is empty.

### Proposition 6.9

Let be a normalised superlinear expectation satisfying the conditions of Proposition 4.2. Then for $$\xi \in L^{p}(\mathbb{R}^{d})$$ only if

$$\sup _{\gamma \in \mathcal{M}_{-v},\mathbb{E}[\gamma ]=1} \mathbb{E}[ \langle \xi ,\gamma v\rangle ] \leq \inf _{\gamma \in \mathcal{M}_{v}, \mathbb{E}[\gamma ]=1} \mathbb{E}[\langle \xi ,\gamma v\rangle ]$$
(6.7)

for all $$v\in \mathbb{S}^{d-1}$$.

### Proof

By a variant of Proposition 4.2 for the reduced maximal extension, this extension satisfies the conditions of Theorem 6.4 and hence admits the representation (6.3). If $$\xi \in L^{p}(\mathbb{R}^{d})$$, then (6.3) yields that

which is not empty only if (6.7) holds. □

In the setting of Example 6.6, is empty unless $$\mathtt{u}(\langle \xi ,v\rangle )+\mathtt{u}(-\langle \xi ,v \rangle )$$ is nonnegative for all $$v$$. The latter means that $$\mathtt{u}(\langle \xi ,v\rangle )=\mathtt{e}(\langle \xi ,v \rangle )$$ for the exact dual pair of real-valued nonlinear expectations. Equivalently, it is empty if $$\mathbb{E}[\gamma \xi ]\neq \mathbb{E}[\gamma '\xi ]$$ for some $$\gamma ,\gamma '\in \mathcal{M}$$. If this is the case for all $$\xi \in L^{p}(X)$$, then the minimal extension of is the set $$F_{X}$$ of fixed points of $$X$$; see Example 3.8. Thus it is not feasible to come up with a nontrivial minimal extension of the superlinear expectation if $$C=\{0\}$$.

A possible way to ensure nonemptiness of the minimal extension is to apply it to random sets $$X$$ from $$L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$ with a cone $$C$$ having interior points, since then at least one of $$h(X,v)$$ and $$h(X,-v)$$ is almost surely infinite for all $$v\in \mathbb{S}^{d-1}$$. The minimal extension of is given by

(6.8)

The following result implies in particular that the union on the right-hand side of (6.8) is a convex set; cf. (4.1).

### Theorem 6.10

Let be a scalarly upper semicontinuous law-invariant normalised reduced maximal superlinear expectation, and let the probability space be non-atomic. Then the minimal extension given by (6.8) is a law-invariant superlinear expectation.

### Proof

Let $$x$$ and $$x'$$ belong to the union on the right-hand side of (6.8) (without closure). Then and for $$\xi ,\xi '\in L^{p}(X)$$, and the superlinearity of yields that

for each $$t\in [0,1]$$. Since $$t\xi +(1-t)\xi '$$ is a selection of $$X$$, the convexity of easily follows. Additivity on deterministic singletons, monotonicity and homogeneity are evident from (6.8). If $$F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$ is deterministic, the dilatation-monotonicity of (see Corollary 6.5) yields that

For the superadditivity property, consider $$x$$ and $$y$$ from the nonclosed right-hand side of (6.8) for $$X$$ and $$Y$$, respectively. Then for some $$\xi \in L^{p}(X)$$ and for some $$\xi '\in L^{p}(Y)$$. Hence,

Finally, let $$\mathfrak{F}_{X}$$ be the $$\sigma$$-algebra generated by $$X$$, that is, $$\mathfrak{F}_{X}$$ is generated by the events $$\{X\cap K\neq \varnothing \}$$ for all compact sets $$K$$ in $$\mathbb{R}^{d}$$. The convexity of $$X$$ implies that $$\mathbb{E}[\xi |\mathfrak{F}_{X}]$$ is a selection of $$X$$ for any $$\xi \in L^{p}(X)$$. By the dilatation-monotonicity from Corollary 6.5, it is possible to replace $$\xi \in L^{p}(X)$$ in (6.8) with an $$\mathfrak{F}_{X}$$-measurable $$p$$-integrable selection of $$X$$. The families of $$\mathfrak{F}_{X}$$-measurable selections of $$X$$ and $$\mathfrak{F}_{Y}$$-measurable selections of $$Y$$ coincide for two identically distributed random sets $$X$$ and $$Y$$; see Molchanov [25, Proposition 1.4.5]. □

Below we establish the upper semicontinuity of the minimal extension.

### Theorem 6.11

Assume that $$p\in (1,\infty ]$$, satisfies the conditions imposed in Theorem 6.10, and that for all nontrivial $$\xi \in L^{p}(C)$$. Then the minimal extension is scalarly upper semicontinuous.

### Proof

It suffices to omit the closure in (6.8) and consider with $$x_{n}\to x$$ and $$X_{n}\to X$$ scalarly in $$\sigma (L^{p},L^{q})$$. For each $$n\in \mathbb{N}$$, there exists a $$\xi _{n}\in L^{p}(X_{n})$$ such that .

Assume first that $$p\in (1,\infty )$$ and $$\sup _{n\in \mathbb{N}}\mathbb{E}[|\xi _{n}|^{p}]<\infty$$. Then $$(\xi _{n})_{n\in \mathbb{N}}$$ is relatively compact in $$\sigma (L^{p},L^{q})$$. Without loss of generality (passing to subsequences if necessary), assume that $$(\xi _{n})$$ converges to $$\xi$$ in $$\sigma (L^{p},L^{q})$$. Since $$\langle \xi _{n},\zeta \rangle \leq h(X_{n},\zeta )$$ for all $$\zeta \in L^{q}( C^{o} )$$, taking expectations, letting $$n\to \infty$$ and using the convergence $$\xi _{n}\to \xi$$ and $$X_{n}\to X$$ yields that $$\mathbb{E}[\langle \xi ,\zeta \rangle ]\leq \mathbb{E}[h(X,\zeta )]$$. By Lemma 2.4, $$\xi$$ is a selection of $$X$$. By the upper semicontinuity of , the upper limit of is a subset of . Hence for some $$\xi \in L^{p}(X)$$ so that .

Assume now that $$\|\xi _{n}\|_{p}^{p}=\mathbb{E}[| \xi _{n} |^{p}]\to \infty$$. Let $$\xi '_{n}=\xi _{n}/\|\xi _{n}\|_{p}$$. This sequence is bounded in the $$L^{p}$$-norm, and so we can assume without loss of generality that $$\xi '_{n}\to \xi '$$ in $$\sigma (L^{p},L^{q})$$. Since

the upper semicontinuity of yields that . For each $$\zeta \in L^{q}( C^{o} )$$, we have $$\langle \xi _{n},\zeta \rangle \leq h(X_{n},\zeta )$$. Dividing by $$\|\xi _{n}\|_{p}$$, taking expectations and letting $$n\to \infty$$ yields that $$\mathbb{E}[\langle \xi ',\zeta \rangle ]\leq 0$$. Thus $$\xi '\in C$$ almost surely. Given that $$\mathbb{E}[\|\xi '\|]=1$$, this contradicts the fact that contains the origin.

The proof for $$p=\infty$$ follows the same steps, distinguishing the case where $$\sup _{n\in \mathbb{N}} | \xi _{n} |$$ is essentially bounded (so that the sequence is relatively compact in $$\sigma (L^{\infty },L^{1})$$) from the case where the essential supremum of $$|\xi _{n}|$$ tends to infinity. □

The case $$p=1$$ is excluded in Theorem 6.11 because relative weak compactness in $$L^{1}$$ requires uniform integrability, which is strictly stronger than boundedness in $$L^{1}$$.

The exact calculation of involves working with all $$p$$-integrable selections of $$X$$, which is a very rich family even in simple cases like $$X=\xi +C$$. Since

(6.9)

the superlinear expectation yields a computationally tractable upper bound on .

### Example 6.12

Consider $$\xi \in L^{p}(\mathbb{R}^{d})$$ and a deterministic $$F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$. Assume that in (6.8) satisfies the conditions of Corollary 6.5. Then

(6.10)

where $$L^{p}(F,\sigma (\xi ))$$ is the family of selections of $$F$$ which are measurable with respect to the $$\sigma$$-algebra $$\sigma (\xi )$$ generated by $$\xi$$. Indeed, for each $$\xi '\in L^{p}(F)$$, the dilatation-monotonicity of yields that . It remains to note that $$\mathbb{E}[\xi '|\sigma (\xi )]\in L^{p}(F,\sigma (\xi ))$$.

Note that the minimal extension of a reduced maximal superlinear expectation is not necessarily a maximal superlinear expectation itself. The following result describes its reduced maximal extension.

### Theorem 6.13

Assume that is defined by (6.8), where is a scalarly upper semicontinuous reduced maximal superlinear expectation with representation (6.6). Then for all $$v\in \mathbb{S}^{d-1}\cap C^{o}$$ and $$\beta \in L^{p}(\mathbb{R})$$, and the reduced maximal extension of coincides with .

### Proof

By (6.3), . In view of (6.9), it suffices to show that each $$x\in H_{v}(\mathtt{u}(\beta ))$$ also belongs to . Let $$y$$ be the projection of $$x$$ onto the subspace orthogonal to $$v$$. It suffices to show that . Noticing that $$H_{v}(\beta )-y=H_{v}(\beta )$$, it is possible to assume that $$x=tv$$ for $$t\leq \mathtt{u}(\beta )$$. Consider $$\xi =\beta v$$. Then

Since $$\langle tv,w\rangle \leq \langle v,w\rangle \mathtt{u}(\beta )$$, we deduce that . Since and coincide on half-spaces, the reduced maximal extension of is

□

In general, may be a strict subset of as the following example shows; so superlinear expectations are not necessarily exact even on rather simple random sets of the type $$\xi +C$$.

### Example 6.14

Consider a random vector $$\xi$$ in $$\mathbb{R}^{2}$$ which takes, with equal probabilities, two possible values: the origin and $$a=(a_{1},a_{2})$$. Let $$X=\xi +C$$, where $$C$$ is the cone containing $$\mathbb{R}_{-}^{2}$$ with the points $$(1,-\pi )$$ and $$(-\pi ',1)$$ on its boundary, where $$\pi ,\pi '>1$$.

Let $$\mathcal{M}_{v}=\mathcal{M}$$ be the family from Example 5.15 and let $$\mathtt{u}$$ be the superlinear expectation with the representing set ℳ. For each $$\beta \in L^{1}(\mathbb{R})$$, $$\mathtt{u}(\beta )$$ equals the average of the $$t$$-quantiles of $$\beta$$ over $$t\in (0,\alpha )$$. If $$\alpha \in (0,1/2]$$ and $$\beta$$ takes two values with equal probabilities, then $$\mathtt{u}(\beta )$$ is the smaller value of $$\beta$$. Then so that coincides with in this case.

Now assume that $$\alpha \in (1/2,1)$$. If $$\beta$$ takes two values $$t$$ and $$s$$ with equal probabilities, then $$\mathtt{u}(\beta )=\max (t,s)-|t-s|/(2\alpha )$$ and

$$\mathtt{u}(\langle \xi ,v\rangle )=\max (\langle a,v\rangle ,0 ) - \frac{1}{2\alpha } |\langle a,v\rangle |$$

for all $$v$$ from $$C^{o}$$. Since $$C$$ is a Riesz cone, for some $$x$$; see Example 6.7. For $$v\in C^{o}$$, the linear function $$x \mapsto \langle x,v\rangle$$ is dominated by $$\frac{1}{2\alpha }\langle a,v\rangle$$ if $$\langle a,v\rangle <0$$ and by $$(1-\frac{1}{2\alpha })\langle a,v\rangle$$ otherwise. By an elementary calculation,

$$x=\frac{1}{2\alpha }a+\bigg(\frac{1}{\alpha }-1\bigg) \frac{a_{1}\pi '+a_{2}}{\pi \pi '-1} (-\pi ',1).$$

In view of Example 6.12, for the minimal extension, it suffices to consider selections of $$C$$ which are measurable with respect to the $$\sigma$$-algebra $$\sigma (\xi )$$ generated by $$\xi$$; these selections take two values from the boundary of $$C$$ with equal probabilities. The minimal extension can be found via (6.10), letting $$\xi '$$ with equal probabilities take two values $$y=(y_{1},y_{2})$$ and $$z=(z_{1},z_{2})$$ on the boundary $$\partial C$$ of $$C$$. Then

Figure 1 shows and for $$\pi =\pi '=2$$, $$a=(1,-1)$$ and $$\alpha =0.7$$. It shows that the minimal extension may indeed be a strict subset of the underlying reduced maximal superlinear expectation.
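The closed-form expression $$\mathtt{u}(\beta )=\max (t,s)-|t-s|/(2\alpha )$$ used in this example can be verified directly: for a two-valued $$\beta$$, the quantile function equals $$\min (t,s)$$ on $$(0,1/2)$$ and $$\max (t,s)$$ on $$(1/2,1)$$, and averaging it over $$(0,\alpha )$$ with $$\alpha \in (1/2,1)$$ reproduces the formula. A small sketch in exact rational arithmetic:

```python
from fractions import Fraction

def u_two_point(t, s, alpha):
    # beta takes the values t and s with equal probabilities, so its quantile
    # function is min(t, s) on (0, 1/2) and max(t, s) on (1/2, 1);
    # average it over (0, alpha), assuming alpha > 1/2
    lo, hi = min(t, s), max(t, s)
    return (Fraction(1, 2) * lo + (alpha - Fraction(1, 2)) * hi) / alpha

alpha = Fraction(7, 10)
for t, s in [(Fraction(3), Fraction(-1)), (Fraction(0), Fraction(2))]:
    assert u_two_point(t, s, alpha) == max(t, s) - abs(t - s) / (2 * alpha)
```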

## 7 Applications

### 7.1 Depth-trimmed regions and outliers

Consider a sublinear expectation restricted to the family of $$p$$-integrable singletons and let $$C=\{0\}$$. The map satisfies the properties of depth-trimmed regions imposed by Cascos [5], which are those from Zuo and Serfling [35] augmented by monotonicity and subadditivity.

Therefore, the sublinear expectation provides a rather generic construction of a depth-trimmed region associated with a random vector $$\xi \in L^{p}(\mathbb{R}^{d})$$. In statistical applications, points outside or its empirical variant are regarded as outliers. The subadditivity property (3.1) means that if a point is not an outlier for the convolution of two samples, then there is a way to obtain this point as the sum of two non-outliers for the original samples.

### Example 7.1

Fix $$\alpha \in (0,1)$$. For $$\beta \in L^{1}(\mathbb{R})$$, define

$$\mathtt{e}_{\alpha }(\beta )=\alpha ^{-1} \int _{1-\alpha }^{1} q_{\beta }(s) \, ds,$$

where $$q_{\beta }(s)$$ is an $$s$$-quantile of $$\beta$$ (in case of nonuniqueness, the choice of a particular quantile does not matter because of integration). The risk measure $$r(\beta )=\mathtt{e}_{\alpha }(-\beta )$$ is called the average value-at-risk. Denote by the corresponding minimal sublinear expectation constructed by (5.6), so that for all $$u$$. The set is the zonoid-trimmed region of $$\xi$$ at level $$\alpha$$; see Cascos [5] and Mosler [28, Sect. 3.1]. This set can be obtained as

where $$\mathcal{P}_{\alpha }\subseteq L^{1}(\mathbb{R}_{+})$$ consists of all random variables with values in $$[0,\alpha ^{-1}]$$ and expectation 1; see Example 5.15. This setting is a special case of Theorem 5.12 with $$\mathcal{M}=\{t\gamma : \gamma \in \mathcal{P}_{\alpha },t\geq 0\}$$. The value of $$\alpha$$ controls the size of the zonoid-trimmed region; $$\alpha =1$$ yields a single point, being the expectation of $$\xi$$. The subadditivity property of zonoid-trimmed regions was first noticed by Cascos and Molchanov [6].
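An empirical version of $$\mathtt{e}_{\alpha }$$ is straightforward to compute: for a sample of size $$m$$, the integral of the quantile function over $$(1-\alpha ,1)$$ becomes, up to discretisation, the average of the largest order statistics. A minimal sketch (the rounding convention for the number of retained order statistics is an illustrative choice):

```python
import numpy as np

def e_alpha(sample, alpha):
    # average of the top alpha-fraction of the sample, an empirical
    # counterpart of the average quantile over (1 - alpha, 1)
    x = np.sort(np.asarray(sample, dtype=float))
    k = max(1, int(round(alpha * len(x))))
    return x[-k:].mean()

sample = [1.0, 2.0, 3.0, 4.0]
assert e_alpha(sample, 0.5) == 3.5    # mean of the two largest values
assert e_alpha(sample, 1.0) == 2.5    # alpha = 1 recovers the plain mean
```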

### Example 7.2

Let $$X$$ be an integrable random closed convex set. Consider the random set $$Y$$ in $$\mathbb{R}^{d+1}$$ given by the convex hull of the origin and $$\{1\}\times X$$. The selection expectation $$Z_{X}=\mathbb{E}[Y]$$ is called the lift expectation of $$X$$; see Diaye et al. [8]. If $$X=\{\xi \}$$ is a singleton, then $$Z_{X}$$ is the lift zonoid of $$\xi$$; see Mosler [28, Sect. 2.2]. By the definition of the selection expectation, $$Z_{X}$$ is the closure of the set of $$(\mathbb{E}[\beta ],\mathbb{E}[\beta \xi ])$$, where $$\beta$$ runs through the family of random variables with values in $$[0,1]$$. Equivalently, $$(\alpha ,x)$$ belongs to $$Z_{X}$$ if and only if $$x=\alpha \mathbb{E}[\gamma \xi ]$$ for $$\gamma$$ from the family $$\mathcal{P}_{\alpha }$$; see Example 7.1. Thus the minimal extension of from Example 7.1 is .
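For a discrete $$\xi$$, the support function of the lift zonoid can be computed explicitly: the supremum of $$\mathbb{E}[\beta (u+\langle v,\xi \rangle )]$$ over random variables $$\beta$$ with values in $$[0,1]$$ is attained at $$\beta =\mathbf{1}_{\{u+\langle v,\xi \rangle >0\}}$$, giving $$\mathbb{E}[(u+\langle v,\xi \rangle )_{+}]$$. A sketch for a two-point distribution on the line (the values of $$\xi$$ are arbitrary illustrative choices):

```python
import numpy as np

# xi takes the values -1 and 3 with equal probabilities
xi = np.array([-1.0, 3.0])
p = np.array([0.5, 0.5])

def lift_zonoid_support(u, v):
    # support function of {(E[beta], E[beta*xi]) : 0 <= beta <= 1}
    # in the direction (u, v): E[(u + v*xi)_+]
    return float(p @ np.maximum(0.0, u + v * xi))

assert lift_zonoid_support(1.0, 0.0) == 1.0   # first coordinate is at most 1
assert lift_zonoid_support(0.0, 1.0) == 1.5   # E[beta*xi] is at most E[max(xi, 0)]
```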

### 7.2 Parametric families of nonlinear expectations

Consider nonlinear expectations and such that for all random closed sets $$X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$. Then it is natural to regard observations of $$X$$ that do not lie between the superlinear and sublinear expectation as outliers.

Let $$X_{1},\dots ,X_{n}$$ be independent copies of a $$p$$-integrable random closed convex set $$X$$. For a sublinear expectation ,

(7.1)

is also a sublinear expectation. The only slightly nontrivial property is the subadditivity, which follows from the fact that

$$(X_{1}+Y_{1})\cup \cdots \cup (X_{n}+Y_{n}) \subseteq (X_{1}\cup \cdots \cup X_{n})+(Y_{1}\cup \cdots \cup Y_{n}).$$

If $$X_{1}\cap \cdots \cap X_{n}$$ is a.s. nonempty, then

(7.2)

yields a superlinear expectation, noticing that

$$(X_{1}+Y_{1})\cap \cdots \cap (X_{n}+Y_{n}) \supseteq (X_{1}\cap \cdots \cap X_{n})+(Y_{1}\cap \cdots \cap Y_{n}).$$

We let if $$X_{1}\cap \cdots \cap X_{n}$$ is empty with positive probability.
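The two inclusions above can be illustrated with intervals on the line, where Minkowski addition acts on the endpoints; the data below is arbitrary, chosen so that all intersections are nonempty:

```python
def add(a, b):
    # Minkowski sum of two intervals (lo, hi)
    return (a[0] + b[0], a[1] + b[1])

def hull(ivs):
    # closed convex hull of a union of intervals
    return (min(lo for lo, _ in ivs), max(hi for _, hi in ivs))

def inter(ivs):
    # intersection of intervals (assumed nonempty)
    return (max(lo for lo, _ in ivs), min(hi for _, hi in ivs))

X = [(-1.0, 2.0), (0.0, 3.0)]
Y = [(1.0, 1.5), (0.5, 2.0)]
sums = [add(x, y) for x, y in zip(X, Y)]

# the union of the sums is contained in the sum of the unions
lo, hi = hull(sums); L, H = add(hull(X), hull(Y))
assert L <= lo and hi <= H

# the intersection of the sums contains the sum of the intersections
lo, hi = inter(sums); L, H = add(inter(X), inter(Y))
assert lo <= L and H <= hi
```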

### Example 7.3

Choosing in (7.1) and (7.2) yields a family of nonlinear expectations depending on the parameter $$n$$, which are also easy to compute.

It is easily seen that increases and decreases as $$n$$ increases. Define the depth of $$F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$ as

It is easy to see that and . Hence $$F\in \operatorname{co}\mathcal{F}(\mathbb{R}^{d},C)$$ has depth one if . Note that decreases to the set of fixed points of $$X$$ and increases to the support of $$X$$ as $$n\to \infty$$; see Example 3.8. Thus only closed convex sets $$F$$ satisfying $$F_{X}\subseteq F\subseteq \operatorname{supp}X$$ may have a positive depth.

In order to handle the empirical variant of the preceding concept based on a sample $$X_{1},\dots ,X_{n}$$ of independent observations of $$X$$, consider a random closed set $$\tilde{X}$$ that with equal probabilities takes one of the values $$X_{1},\dots ,X_{n}$$. Its distribution can be simulated by sampling one of these sets with possible repetitions. Then it is possible to use the nonlinear expectations of $$\tilde{X}$$ in order to assess the depth of any given convex set, including those from the sample.

### 7.3 Risk and utility of a set-valued portfolio

For a random variable $$\xi \in L^{p}(\mathbb{R})$$ interpreted as a financial outcome or gain, the value $$\mathtt{e}(-\xi )$$ (equivalently, $$-\mathtt{u}(\xi )$$) is used in finance to assess the risk of $$\xi$$. It may be tempting to extend this to the multivariate setting by assuming that the risk is a $$d$$-dimensional function of a random vector $$\xi \in L^{p}(\mathbb{R}^{d})$$, with the conventional properties extended coordinatewise. However, in this case the nonlinear expectations (and so the risk) are marginalised, that is, the risk of $$\xi$$ splits into a vector of nonlinear expectations applied to the individual components of $$\xi$$; see Theorem A.1.

Moreover, an adequate assessment of the financial risk of a vector $$\xi$$ is impossible without taking into account exchange rules that can be applied to its components in order to convert $$\xi$$ to another financial position. If no exchanges are allowed and only consumption is possible, one arrives at positions being selections of $$X=\xi +\mathbb{R}_{-}^{d}$$. On the other hand, if the components of $$\xi$$ are expressed in the same currency with unrestricted exchanges and disposal (consumption) of the assets, each position from the half-space $$X=\{x : \sum x_{i}\leq \sum \xi _{i}\}$$ is reachable from $$\xi$$. Working with the random set $$X$$ also eliminates possible nonuniqueness in the choice of $$\xi$$ with identical sums.

In view of this, it is natural to consider multivariate financial positions as lower random closed convex sets or, equivalently, those from $$L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$ with $$C= \mathbb{R}_{-}^{d}$$. The random closed set is said to be acceptable if , and the risk of $$X$$ is defined as . The superadditivity property guarantees that if both $$X$$ and $$Y$$ are acceptable, then $$X+Y$$ is acceptable. This is the classical financial diversification advantage formulated in set-valued terms. The value determines the utility of $$X$$, exactly corresponding to the classical properties of utility functions being monotone superlinear functions of random variables. In particular, the superadditivity amounts to the fact that the utility of the sum is larger than or equal to the sum of the utilities.

If $$X\in L^{p}(\operatorname{co}\mathcal{F}(\mathbb{R}^{d},C))$$ and $$C=\mathbb{R}_{-}^{d}$$, the minimal extension (6.8) is called the lower set extension of . If is reduced maximal, (6.6) yields that

where $$\vec{\mathtt{u}}(\xi )=(\mathtt{u}(\xi _{1}),\dots ,\mathtt{u}( \xi _{d}))$$ is defined by applying the same superlinear expectation $$\mathtt{u}$$ with representing set ℳ to each component of $$\xi$$. Then

In other words, is the closure of the set of all points dominated coordinatewise by the superlinear expectation of at least one selection of $$X$$. In Molchanov and Cascos [26], the origin-reflected set was called the selection risk measure of $$X$$.

For set-valued portfolios $$X=\xi +C$$, arising as the sum of a singleton $$\xi$$ and a (possibly random) convex cone $$C$$, the maximal superlinear expectation (in our terminology), considered as a function of $$\xi$$ only and not of $$\xi +C$$, was studied by Hamel and Heyde [13] and Hamel et al. [14]. However, if $$C$$ becomes random, the resulting function of $$\xi$$ alone is not necessarily law-invariant. The case of general random set-valued arguments was pursued by Molchanov and Cascos [26].

For the purpose of risk (or utility) assessment, one can use any superlinear expectation. However, the sensible choices are the maximal superlinear expectation in view of its closed-form dual representation, and the lower set extension in view of its direct financial interpretation (through its primal representation), meaning the existence of a selection (that is, a financial position) with all components acceptable. Example 6.14 provides the numerical calculation of the reduced maximal and the minimal extension (see Figure 1) using the average quantile utility function. Given that the minimal superlinear expectation may be a strict subset of the maximal one (see Example 6.14), the acceptability of $$X$$ under a maximal superlinear expectation may be a weaker requirement than the acceptability under the lower set extension.

From the financial viewpoint, the acceptability of $$X=\xi +C$$ (for the payoff $$\xi \in L^{p}(\mathbb{R}^{d})$$ and a deterministic cone $$C$$ describing the family of portfolios available at price zero) under the lower set extension (that is, the minimal extension) means the existence of an exchange scenario $$\xi '\in L^{p}(C)$$ such that $$\xi +\xi '$$ has all components acceptable. In other words, by exchanging the components of $$\xi$$ and taking into account the transaction costs imposed by the cone $$C$$, it is possible to make all components of $$\xi$$ individually acceptable. On the other hand, the acceptability of $$X$$ under the reduced maximal extension means that $$\langle \xi ,u\rangle$$ is acceptable for all $$u$$ from the dual cone of $$C$$, that is, $$\xi$$ is acceptable under all price systems determined by $$C$$. For instance, this is the case if $$\xi +\xi '$$ has all components acceptable since

$$\langle \xi ,u\rangle =\langle \xi +\xi ',u\rangle -\langle \xi ',u \rangle .$$

The first term on the right-hand side is acceptable since the dual cone of $$C$$ is a subset of $$\mathbb{R}_{+}^{d}$$, while the second term is nonnegative since $$u$$ belongs to the dual cone of $$C$$.