1 Introduction

The present work investigates parameter-dependent stochastic optimization in finite discrete time with the tools of conditional analysis. We consider a forward process \((x_t)_{t=0}^\mathrm{T}\), where \(x_{t+1}=v_{t}(x_t,z_t)\) depends on the current state \(x_t\), itself a function of earlier decisions, and on an immediate decision \(z_t\) chosen recursively from a state-dependent control set \(\varTheta _t(x_t)\).

Given a filtered probability space \((\varOmega ,{\mathcal {F}},({\mathcal {F}}_t)_{t=0}^\mathrm{T},{\mathbb {P}})\), we assume that the forward process \(x_t\) and the control process \(z_t\) take values in \({\mathcal {F}}_t\)-conditional metric spaces \(X_t\) and \(Z_t\), respectively. An \({\mathcal {F}}_t\)-conditional metric space is a non-empty set X endowed with a vector-valued metric \(d\,{:}\,X\times X\rightarrow L^0_+(\varOmega ,{\mathcal {F}}_t,{\mathbb {P}})\) satisfying a concatenation property, which encodes the information available at time t. An example is the space of strongly \({\mathcal {F}}_t\)-measurable functions with values in a metric space, with almost everywhere evaluation of the metric. Intuitively, one may imagine a conditional metric space as a collection of classical metric spaces \(X(\omega )\), parameterized by the points \(\omega \in \varOmega \) of a probability space and glued together in a measurable fashion. When the underlying probability space is not a standard Borel space (or does not satisfy a similar regularity assumption) and the “fibers” \(X(\omega )\) are not separable metric spaces, this pointwise perspective runs into unsolvable measurability problems, usually related to the lack of Borel (or analytic) measurable selectors or to the handling of an uncountable collection of null sets. Therefore, instead of modeling the problem fiberwise, we work directly in the conditional metric space X and build on arguments from conditional analysis. The key point is that conditional analysis consistently quotients out all null sets. We can then rely on the Dedekind completeness of the space \(L^0({\mathcal {F}})\) and on the existence of concatenations of sequences along countable measurable partitions of \(\varOmega \), which enable the conditional arguments. For a methodological discussion, we refer the interested reader to Sect. 5.

We focus on stochastic control problems which, by the Bellman principle, can be reduced to a finite number of one-period conditional optimization problems. Our main result shows that the global supremum is attained. By backward induction, we show that the optimal value function is upper semi-continuous on the conditional metric space \(X_t\). To this end, we assume that the control sets \(\varTheta _t(x_t)\) are conditionally sequentially compact; for a discussion of the notion of conditional compactness, we refer to  [1, Sections 3 and 4] and  [2, Sections 3.4 and 4]. The existence of the one-period maximizers then follows from a conditional version of the fact that a semi-continuous function on a compact space attains its extrema. Upper semi-continuity of the value function additionally requires a regularity condition on the control set, namely a conditional version of outer semi-continuity in set convergence (see, e.g.,  [3, Chapter 5, Section B] for the classical definition). Under stronger assumptions on the generators, the conditional compactness assumption on the control set is relaxed in Proposition 4.1 by modifying arguments in  [4].

In Sect. 3, we provide sufficient conditions for conditional compactness and conditional outer semi-continuity of the control set, focusing on conditionally finite dimensional control sets. The results are illustrated with applications in mathematical finance. In Example 3.2, we study an optimal consumption problem with local risk constraints on the wealth process. Example 3.3 illustrates the importance of the conditional Euclidean space with conditional dimension for modeling control processes with state-dependent dimension. As an application of Proposition 4.1, we derive optimal portfolios w.r.t. dynamic risk measures whose risk aversion coefficient depends on the current wealth.

Normal integrands, random sets and measurable selection techniques are common tools in the study of parameterized stochastic optimization; see, e.g.,  [5,6,7,8]. In Sect. 5, we discuss the connection of conditional analysis to random sets and normal integrands. In particular, we discuss a one-to-one correspondence between the sets of measurable selections of Effros measurable, closed-valued mappings and the stable, sequentially closed sets. This indicates that control problems formulated in the language of normal integrands and random sets can equally be formulated in the language of conditional analysis. In a formalization with normal integrands and random sets, measurable selection lemmas provide the main tool to secure measurability. The need for measurable selection arguments arises from the pointwise application of standard results in classical analysis, and such arguments rely on topological assumptions such as separability and standard Borel spaces. In this regard, conditional analysis provides a measure-theoretic alternative which does not rely on any topological assumptions and works as soon as a formalization within its language is reached; this is demonstrated in this article for discrete time stochastic control theory. Conditional analysis treats measurable functions directly by working with conditional versions of results in classical analysis. The application of conditional versions of classical theorems preserves measurability; see, for example, the proofs below, in which conditional versions of the Bolzano–Weierstraß theorem, the maximum theorem and the Heine–Borel theorem are employed.

The remainder of this article is organized as follows. In Sect. 2, we introduce the notion of conditional metric spaces and prove the main existence result. In Sects. 3 and 4, we discuss extensions of the main result and provide several examples. The link between conditional analysis and random set theory is established in Sect. 5.

2 Main Result

Let \((\varOmega ,{\mathcal {F}},{\mathbb {P}})\) be a probability space. Throughout, we identify two sets in \({\mathcal {F}}\) whenever their symmetric difference is a null set, and identify two functions on \(\varOmega \) if they coincide a.s. (almost surely). Let \({\mathcal {G}}\) be a sub-\(\sigma \)-algebra of \({\mathcal {F}}\). Denote by \(\varPi _{\mathcal {G}}\) the set of countable partitions \((A_k)\) of \(\varOmega \) such that \(A_k\in {\mathcal {G}}\) for all k. Let \(L^0_{\mathcal {G}}, L^0_{\mathcal {G}}({\mathbb {N}}), L^0_{{\mathcal {G}},+}, L^0_{{\mathcal {G}},++}, {{\underline{\hbox {L}}}}^0_{\mathcal {G}}\), and \({\bar{L}}^0_{\mathcal {G}}\) denote the spaces of \({\mathcal {G}}\)-measurable random variables with values in \({\mathbb {R}}, {\mathbb {N}}, [0,+\infty [, ]0,+\infty [, {\mathbb {R}}\cup \{-\infty \}\), and \({\mathbb {R}}\cup \{\pm \infty \}\), respectively. Recall that \(L^0_{\mathcal {G}}\) with the pointwise a.s. order is a Dedekind complete lattice-ordered ring. The essential supremum and the essential infimum are denoted by \(\sup \) and \(\inf \), respectively. Inequalities between random variables with values in an ordered set are always understood in the pointwise a.s. sense.

Definition 2.1

A \({\mathcal {G}}\)-conditional metric on a non-empty set X is a function \(d\,{:}\,X\times X\rightarrow L^0_{{\mathcal {G}},+}\), such that the following conditions hold:

  1. (i)

    \(d(x,y)=0\) if and only if \(x=y\),

  2. (ii)

    \(d(x,y)=d(y,x),\)

  3. (iii)

    \(d(x,z)\le d(x,y)+d(y,z)\),

  4. (iv)

    for every sequence \((x_k)\) in X and \((A_k)\in \varPi _{\mathcal {G}}\), there exists exactly one element \(x\in X\) such that \(1_{A_k}d(x,x_k)=0\) for all \(k\in {\mathbb {N}}\).

The pair \((X,d)\) is called a \({\mathcal {G}}\)-conditional metric space.

In the following, we call the unique element in (iv) the concatenation of the sequence \((x_k)\) along the partition \((A_k)\), and denote it by \(\sum \nolimits _k 1_{A_k}x_k\). For a sequence \((x_n)\) in a conditional metric space \((X,d)\), we write \(x_n\rightarrow x\) whenever \(d(x,x_n)\rightarrow 0\) a.s. Further, a conditional subsequence \((x_{n_k})\) of \((x_n)\) is of the form \(x_{n_k}:=\sum \nolimits _{j\in {\mathbb {N}}} 1_{\{n_k=j\}}x_j\), where \((n_k)\) is a sequence in \(L^0_{{\mathcal {G}}}({\mathbb {N}})\) such that \(n_k<n_{k+1}\) for all \(k\in {\mathbb {N}}\).
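On a finite sample space, the concatenation of Definition 2.1 (iv) can be computed explicitly, which may help intuition: a random variable is a tuple indexed by \(\omega\), and \(\sum_k 1_{A_k}x_k\) is piecewise selection along the partition. The following minimal Python sketch (all names and data are our own toy illustration, not part of the paper's formalism) verifies property (iv) for the pointwise metric.

```python
# Hedged sketch: on a finite sample space, random variables are plain
# lists indexed by omega, and the concatenation sum_k 1_{A_k} x_k of
# Definition 2.1 (iv) is just piecewise selection. All names are ours.

OMEGA = range(4)                        # finite sample space {0,1,2,3}

def concatenate(partition, xs):
    """Return the unique x with 1_{A_k} d(x, x_k) = 0 for all k."""
    x = [None] * len(OMEGA)
    for A_k, x_k in zip(partition, xs):
        for w in A_k:
            x[w] = x_k[w]
    return x

def cond_metric(x, y):
    """Pointwise (omega-wise) metric d(x, y), an element of L^0_+."""
    return [abs(a - b) for a, b in zip(x, y)]

partition = [{0, 2}, {1}, {3}]          # a measurable partition of Omega
xs = [[1.0, 1.0, 1.0, 1.0],
      [2.0, 2.0, 2.0, 2.0],
      [3.0, 3.0, 3.0, 3.0]]

x = concatenate(partition, xs)          # piecewise selection

# Property (iv): the metric to x_k vanishes on A_k.
for A_k, x_k in zip(partition, xs):
    assert all(cond_metric(x, x_k)[w] == 0.0 for w in A_k)
```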

Definition 2.2

Let \((X,d_X)\) and \((Z,d_Z)\) be \({\mathcal {G}}\)-conditional metric spaces, and H and G subsets of X and Z, respectively.

The set H is called \({\mathcal {G}}\)-stable if \(H \ne \emptyset \) and \(\sum \nolimits _k 1_{A_k} x_k\in H\) for all \((A_k)\in \varPi _{\mathcal {G}}\) and every sequence \((x_k)\) in H, and sequentially closed if H contains every \(x\in X\), such that there is a sequence \((x_k)\) in H with \(x_k\rightarrow x\).

Assume additionally that H and G are \({\mathcal {G}}\)-stable. A function \(f:H\rightarrow G\) is said to be \({\mathcal {G}}\)-stable if \(f\left( \sum _k 1_{A_k} x_k\right) =\sum \nolimits _k 1_{A_k} f(x_k)\) for all \((A_k)\in \varPi _{\mathcal {G}}\) and every sequence \((x_k)\) in H.

Remark 2.1

  1. 1.

If \((X,d)\) is a \({\mathcal {G}}\)-conditional metric space, then the metric d is \({\mathcal {G}}\)-stable, i.e., \(d\left( \sum \nolimits _k 1_{A_k}x_k, \sum \nolimits _k 1_{A_k}y_k\right) =\sum \nolimits _k 1_{A_k}d(x_k,y_k)\) for all sequences \((x_k)\) and \((y_k)\) in X and \((A_k)\in \varPi _{\mathcal {G}}\). Indeed, denoting by \(x=\sum \nolimits _k 1_{A_k}x_k\) and \(y=\sum \nolimits _k 1_{A_k}y_k\) the respective concatenations, it follows from the triangle inequality that

    $$\begin{aligned} 1_{A_k}d(x,y)&\le 1_{A_k}d(x,x_k)+1_{A_k}d(x_k,y_k)+1_{A_k}d(y_k,y)=1_{A_k}d(x_k,y_k)\\&\le 1_{A_k}d(x_k,x)+1_{A_k}d(x,y)+1_{A_k}d(y_k,y)=1_{A_k}d(x,y), \end{aligned}$$

which shows that \(1_{A_k}d(x,y)=1_{A_k}d(x_k,y_k)\) for all \(k\in {\mathbb {N}}\). Summing over all k yields the desired \({\mathcal {G}}\)-stability.

  2. 2.

Let \((X,d_X)\) and \((Y,d_Y)\) be two \({\mathcal {G}}\)-conditional metric spaces. Then, the product \(X\times Y\) endowed with the \({\mathcal {G}}\)-conditional metric

    $$\begin{aligned}d_{X\times Y}\left( (x,y),(x^\prime ,y^\prime )\right) :=\max \{d_{X}(x,x^\prime ),d_{Y}(y,y^\prime )\}\end{aligned}$$

    is a \({\mathcal {G}}\)-conditional metric space.

  3. 3.

Let \((X,d_X)\) be a \({\mathcal {G}}\)-conditional metric space. Then, the set \({\mathbf {X}}\) of all pairs \((x,A)\in X\times {\mathcal {G}}\), where \((x,A)\) and \((y,B)\) are identified if \(A=B\) and \(1_A d_X(x,y)=0\), is a conditional set as introduced in  [2]; conditional metric spaces are defined in  [2, Section 4].

We next introduce the parameter-dependent stochastic optimal control problem for conditional metric spaces. For a fixed finite time horizon \(T\in {\mathbb {N}}\), we consider a filtration \({\mathcal {F}}_0\subset {\mathcal {F}}_1\subset \cdots \subset {\mathcal {F}}_T={\mathcal {F}}\). For simplicity, we often abbreviate the index \({\mathcal {F}}_t\) by t, and write for instance \(L^0_{t}\) for \(L^0_{{\mathcal {F}}_t}\). For each \(t=0,\ldots ,T\), let \((X_t,d_{X_t})\) and \((Z_t,d_{Z_t})\) be \({\mathcal {F}}_t\)-conditional metric spaces. Our aim is to study control problems for which the control set \(\varTheta _t\) depends on \({\mathcal {F}}_t\), but also on a state parameter \(x\in X_t\). For every \(t=0,\ldots ,T-1\), we assume that the state-dependent control set \(\varTheta _t\) satisfies

  1. (c1)

    \(\emptyset \ne \varTheta _t(x)\subset Z_t\) for all \(x\in X_t\),

  2. (c2)

    \(\varTheta _t\) is \({\mathcal {F}}_t\)-stable, i.e.,

    $$\begin{aligned} \varTheta _t\left( \sum _k 1_{A_k}x_k\right) =\sum _k 1_{A_k} \varTheta _t(x_k):=\left\{ \sum _k 1_{A_k} z_k:z_k\in \varTheta _t(x_k) \text { for all }k\right\} \end{aligned}$$

    for all \((A_k)\in \varPi _t\) and every sequence \((x_k)\) in \(X_t\),

  3. (c3)

    for every \(x\in X_t\), the set \(\varTheta _t(x)\) is conditionally sequentially compact, i.e., for every sequence \((z_n)\) in \(\varTheta _t(x)\), there exists a conditional subsequence \(n_1<n_2<\cdots \) with \(n_k\in L^0_t({\mathbb {N}})\) such that \(z_{n_k}\rightarrow z\in \varTheta _t(x)\),

  4. (c4)

    for every sequence \((x_n)\) in \(X_t\) such that \(x_n\rightarrow x\in X_t\) and every sequence \((z_n)\) in \(\varTheta _t(x_n)\), there exists a conditional subsequence \(n_1<n_2<\cdots \) with \(n_k\in L^0_t({\mathbb {N}})\) and a sequence \((z_k^\prime )\) in \(\varTheta _t(x)\) such that \(d_{Z_t}(z_{n_k},z_{k}^\prime )\rightarrow 0\) a.s..

Note that \({\mathcal {F}}_t\)-stability of \(\varTheta _t\) implies that \(\varTheta _t(x)\) is \({\mathcal {F}}_t\)-stable for all \(x\in X_t\).
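To make (c1) and (c2) concrete, the following Python sketch (our own toy data, on a finite sample space) checks stability of a simple state-dependent control set \(\varTheta(x)=\{z: 0\le z(\omega)\le x(\omega)\}\): concatenating admissible controls along a partition yields a control admissible for the concatenated state.

```python
# Hedged illustration of (c1)-(c2) on a finite sample space: the
# state-dependent control set Theta(x) = {z : 0 <= z(omega) <= x(omega)}
# is stable under concatenation along partitions. Names are ours.

OMEGA = range(3)

def in_theta(z, x):                         # membership test: z in Theta(x)
    return all(0.0 <= z[w] <= x[w] for w in OMEGA)

def concatenate(partition, xs):
    x = [None] * len(OMEGA)
    for A_k, x_k in zip(partition, xs):
        for w in A_k:
            x[w] = x_k[w]
    return x

partition = [{0}, {1, 2}]
x1, x2 = [1.0, 1.0, 1.0], [2.0, 2.0, 2.0]
z1, z2 = [0.5, 0.0, 1.0], [1.5, 1.5, 0.5]   # z1 in Theta(x1), z2 in Theta(x2)
assert in_theta(z1, x1) and in_theta(z2, x2)

# (c2): the concatenated control lies in Theta of the concatenated state.
x = concatenate(partition, [x1, x2])
z = concatenate(partition, [z1, z2])
assert in_theta(z, x)
```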

We consider forward generators

$$\begin{aligned} v_t:X_t\times Z_t\rightarrow X_{t+1},\quad t=0,\ldots ,T-1, \end{aligned}$$

which are

  1. (v1)

    \({\mathcal {F}}_t\)-stable, i.e., \(v_t\left( \sum _k 1_{A_k}x_k, \sum \nolimits _k 1_{A_k}z_k\right) =\sum \nolimits _k 1_{A_k}v_t(x_k,z_k)\) for every partition \((A_k)\in \varPi _t\), and all sequences \((x_k)\) in \(X_t\) and \((z_k)\) in \(Z_t\),

  2. (v2)

    sequentially continuous, i.e., \(v_t(x_n,z_n)\rightarrow v_t(x,z)\) whenever \(x_n\rightarrow x\) in \(X_t\) and \(z_n\rightarrow z\) in \(Z_t\).

For every \(x_t\in X_t\), we consider the set

$$\begin{aligned} C_t(x_t)&:=\left\{ ((x_s)_{s=t+1}^\mathrm{T},(z_s)_{s=t}^{T-1}) :x_{s+1}=v_s(x_s,z_s),z_s\in \varTheta _s(x_s)\right. \\&\left. \qquad \text { for all }s=t,\ldots ,T-1\right\} \end{aligned}$$

of all parameter processes \((x_s)_{s=t}^\mathrm{T}\) that can be realized by state-dependent controls \(z_s\in \varTheta _s(x_s)\) for \(s=t,\ldots ,T-1\).

As for the objective function, we consider backward generators

$$\begin{aligned} u_t:X_t\times {{\underline{\hbox {L}}}}^0_{t+1}\times Z_t \rightarrow {{\underline{\hbox {L}}}}^0_t,\quad t=0,1,\ldots ,T-1, \end{aligned}$$

which are

  1. (u1)

\({\mathcal {F}}_t\)-stable, i.e., \(u_t\left( \sum _k 1_{A_k}x_k,\sum \nolimits _k 1_{A_k} y_k,\sum \nolimits _k 1_{A_k} z_k\right) = \sum \nolimits _k 1_{A_k} u_t(x_k,y_k,z_k)\) for all \((A_k)\in \varPi _t\), and sequences \((x_k)\) in \(X_t, (y_k)\) in \({{\underline{\hbox {L}}}}^0_{t+1}\), and \((z_k)\) in \(Z_t\),

  2. (u2)

    increasing in the second component, i.e., \(u_t(x,y,z)\le u_t(x,y^\prime ,z)\) whenever \(y\le y^\prime \),

  3. (u3)

    sequentially upper semi-continuous, i.e.,

    $$\begin{aligned}\limsup _{n\rightarrow \infty } u_t(x_n,y_n,z_n)\le u_t(x,y,z),\end{aligned}$$

    whenever \(x_n\rightarrow x\) in \(X_t, y_n\rightarrow y\) in \({\underline{\hbox {L}}}^0_{t+1}\), and \(z_n\rightarrow z\) in \(Z_t\).

We assume that \(u_T\,{:}\,X_T\rightarrow {\underline{\hbox {L}}}^0_T\) is \({\mathcal {F}}_{T}\)-stable and sequentially upper semi-continuous.

Given such a family \((u_t)_{t=0}^\mathrm{T}\) of backward generators, our goal is to maximize

$$\begin{aligned} y_t(x_t):=\sup _{((x_s)_{s=t+1}^\mathrm{T},(z_s)_{s=t}^{T-1})\in C_t(x_t) }u_t(x_t,\cdot ,z_t)\circ \cdots \circ u_{T-1}(x_{T-1},\cdot ,z_{T-1})\circ u_T (x_T),\nonumber \\ \end{aligned}$$
(1)

over all realizable state processes initialized at \(x_t\in X_t\). In (1) we consider the composition of the functions \(u_T, u_{T-1}(x_{T-1},\cdot ,z_{T-1}),\ldots ,u_{t}(x_{t},\cdot ,z_{t})\), where \(u_s(x_s,\cdot ,z_s)\) denotes the function \({\underline{\hbox {L}}}^0_{s+1}\rightarrow {\underline{\hbox {L}}}^0_{s}, y\mapsto u_s(x_s,y,z_s)\).
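For intuition about the nested objective in (1), consider additive generators of conditional-expectation type: the composition then collapses, by the tower property, to a conditional expectation of a running sum. The following Python sketch checks this on a finite sample space; the filtration and reward data are our own toy illustration.

```python
# Hedged sketch of the nested objective (1) on a finite sample space:
# with additive generators u_s(x, y, z) = E[g_s(x) + y | F_s], the
# composition u_t o ... o u_T collapses to E[sum of rewards | F_t].
# The filtration atoms and rewards below are our own toy data.

OMEGA = range(4)                         # four equally likely scenarios
F = {0: [(0, 1, 2, 3)],                  # F_0 trivial
     1: [(0, 1), (2, 3)],                # F_1: first coin flip revealed
     2: [(0,), (1,), (2,), (3,)]}        # F_2 = full information

def cond_exp(y, t):
    """E[y | F_t] as a random variable (uniform P)."""
    out = [0.0] * 4
    for A in F[t]:
        m = sum(y[w] for w in A) / len(A)
        for w in A:
            out[w] = m
    return out

g = {1: [1.0, 2.0, 3.0, 4.0], 2: [0.0, 4.0, 0.0, 4.0]}   # rewards g_1, g_2

# Backward composition: y_2 = g_2, y_1 = E[g_1 + y_2 | F_1], y_0 = E[y_1 | F_0].
y2 = g[2]
y1 = cond_exp([g[1][w] + y2[w] for w in OMEGA], 1)
y0 = cond_exp(y1, 0)

# Tower property: the nested value equals E[g_1 + g_2 | F_0].
direct = cond_exp([g[1][w] + g[2][w] for w in OMEGA], 0)
assert y0 == direct
```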

Remark 2.2

The objective function in the stochastic control problem (1) is recursively defined. Its generators are functions between conditional metric spaces which are not necessarily (conditional) expected utilities. In the case of (conditional) expected utility, the generators are closely related to dynamic and conditional risk measures; see  [9,10,11,12,13,14]. The preferences that underlie conditional expected utility functionals were studied in  [15].

In decision theory, there is an extensive literature on recursive utilities starting with the seminal works  [16, 17]. The preferences therein are defined on sets of temporal lotteries (probability trees), and follow a kind of Bellman recursive structure which is similar (on a formal level) to the construction above; see  [16, Theorem 1]. This was later extended in  [18], where non-expected utilities were incorporated as well, and the resulting preferences became established under the name of Epstein–Zin utilities. See also  [19] for a survey on non-expected utility theory. With the techniques of conditional analysis and based on results in BSDE theory,  [20] solves a utility maximization problem in continuous time for Epstein–Zin utilities.

The following result shows that the global supremum in (1) is attained and that the problem can be reduced to local optimization problems by the following Bellman principle.

Theorem 2.1

Suppose that \((\hbox {c}1)\)\((\hbox {c}4)\), \((\hbox {v}1)\)\((\hbox {v}2)\), and \((\hbox {u}1)\)\((\hbox {u}3)\) are fulfilled. Then, the functions \(y_t:X_{t}\rightarrow {\underline{\hbox {L}}}^0_t\) are \({\mathcal {F}}_t\)-stable and sequentially upper semi-continuous for all \(t=0,\ldots , T\), and can be computed by backward recursion

$$\begin{aligned} y_T(x_T)&=u_T(x_T)\\ y_t(x_t)&=\max _{z_t\in \varTheta _t(x_t)} u_t(x_t,y_{t+1}(v_t(x_t,z_t)),z_t), \quad t=0,\ldots ,T-1. \end{aligned}$$

Moreover, for every \(x_t\in X_t\), the process \(((x^*_s)_{s=t}^\mathrm{T},(z^*_s)_{s=t}^{T-1})\) given by \(x_t^*=x_t\) and the forward recursion \(x^*_{s+1}=v_s(x^*_s,z^*_s)\), where

$$\begin{aligned} z_s^*\in \mathop {\hbox {argmax}}\limits _{z_s\in \varTheta _s(x_s^*)} u_s\left( x_s^*,y_{s+1}(v_s(x^*_s,z_s)),z_s\right) ,\quad s=t,\ldots ,T-1, \end{aligned}$$
(2)

satisfies \(((x^*_s)_{s=t+1}^\mathrm{T},(z^*_s)_{s=t}^{T-1})\in C_t(x_t)\) and

$$\begin{aligned} y_t(x_t)= u_t(x_t,\cdot ,z^*_t)\circ \cdots \circ u_{T-1}(x^*_{T-1},\cdot ,z^*_{T-1})\circ u_T (x^*_T). \end{aligned}$$

Proof

The proof is by backward induction. For \(t=T\), it follows from (1) that \(y_T=u_T\), which by assumption is an \({\mathcal {F}}_T\)-stable and sequentially upper semi-continuous function from \(X_T\) to \({\underline{\hbox {L}}}^0_T\).

As for the induction step, assume that \(y_{t+1}:X_{t+1}\rightarrow {\underline{\hbox {L}}}^0_{t+1}\) is \({\mathcal {F}}_{t+1}\)-stable and sequentially upper semi-continuous, and that for each \(x_{t+1}\in X_{t+1}\) there exists \(((x^*_s)_{s=t+2}^\mathrm{T},(z^*_s)_{s=t+1}^{T-1})\in C_{t+1}(x_{t+1})\) such that

$$\begin{aligned} y_{t+1}(x_{t+1})=u_{t+1}(x_{t+1},\cdot ,z^*_{t+1})\circ \cdots \circ u_T (x^*_T). \end{aligned}$$

By (u1) and (v1), the function

$$\begin{aligned} X_t\times Z_t\ni (x,z)\mapsto u_t\left( x,y_{t+1}(v_t(x,z)),z\right) \end{aligned}$$

is \({\mathcal {F}}_t\)-stable. Moreover, it is sequentially upper semi-continuous. Indeed, let \((x_k,z_k)\) be a sequence in \(X_t\times Z_t\) such that \(x_k\rightarrow x\in X_t\) and \(z_k\rightarrow z\in Z_t\). Since \(v_t(x_k,z_k)\rightarrow v_t(x,z)\) by (v2), it follows from the induction hypothesis that

$$\begin{aligned} \limsup _{k\rightarrow \infty } y_{t+1}(v_t(x_k,z_k))\le y_{t+1}(v_t(x,z))<+\infty . \end{aligned}$$

Since

$$\begin{aligned} \left\{ \sup _{k\ge 1} y_{t+1}(v_t(x_{k},z_{k}))=+\infty \right\}&=\bigcap _{k\ge 1}\left\{ \sup _{k^\prime \ge k} y_{t+1}(v_t(x_{k^\prime },z_{k^\prime }))=+\infty \right\} \\&=\left\{ \limsup _{k\rightarrow \infty } y_{t+1}(v_t(x_k,z_k))=+\infty \right\} , \end{aligned}$$

we have \(\sup _{k\ge 1} y_{t+1}(v_t(x_{k},z_{k}))\in {\underline{\hbox {L}}}^0_{t+1}\). Hence, by (u2), (u3) and (v2), we get

$$\begin{aligned} \limsup _{k\rightarrow \infty } u_t\left( x_k,y_{t+1}(v_t(x_k,z_k)),z_k\right)&\le \limsup _{k\rightarrow \infty } u_t\left( x_k,\sup _{k^\prime \ge k} y_{t+1}(v_t(x_{k^\prime },z_{k^\prime })),z_k\right) \nonumber \\&\le u_t\left( x,\limsup _{k\rightarrow \infty } y_{t+1}(v_t(x_{k},z_{k})),z\right) \nonumber \\&\le u_t\left( x, y_{t+1}(v_t(x,z)),z\right) , \end{aligned}$$
(3)

which shows the desired sequential upper semi-continuity. As a consequence, the supremum in

$$\begin{aligned} f_t(x_t):=\sup _{z\in \varTheta _t(x_t)} u_t\left( x_t,y_{t+1}(v_t(x_t,z)),z\right) \end{aligned}$$
(4)

is attained for each \(x_t\in X_t\). Indeed, since \(z\mapsto u_t\left( x,y_{t+1}(v_t(x,z)),z\right) \) and \(\varTheta _t(x_t)\) are \({\mathcal {F}}_t\)-stable, it follows from standard properties of the essential supremum that there exists a sequence \(z_n\in \varTheta _t(x_t)\) such that

$$\begin{aligned} u_t\left( x_t,y_{t+1}(v_t(x_t,z_n)),z_n\right) \rightarrow f_t(x_t). \end{aligned}$$

By (c3), there is a conditional subsequence \(n_1<n_2<\cdots \) with \(n_k\in L^0_t({\mathbb {N}})\) such that \(z_{n_k}\rightarrow z\in \varTheta _t(x_t)\). Since \(z\mapsto u_t\left( x,y_{t+1}(v_t(x,z)),z\right) \) is sequentially upper semi-continuous and \({\mathcal {F}}_t\)-stable, it follows that

$$\begin{aligned} u_t\left( x_t,y_{t+1}(v_t(x_t,z)),z\right) \ge \limsup _{k\rightarrow \infty } u_t\left( x_t,y_{t+1}(v_t(x_t,z_{n_k})),z_{n_k}\right) = f_t(x_t), \end{aligned}$$

which shows that the supremum in (4) is attained.

We next show that \(f_t\,{:}\,X_t\rightarrow {\underline{\hbox {L}}}^0_{t}\) is sequentially upper semi-continuous. By contradiction, suppose that \((x_k)\) is a sequence in \(X_t\) such that \(x_k\rightarrow x\in X_t\) and \(f_t(x)<\limsup _{k\rightarrow \infty } f_t(x_k)\) on some \(A\in {\mathcal {F}}_t\) with \({\mathbb {P}}(A)>0\). Note that \(f_t\) is \({\mathcal {F}}_t\)-stable. Thus, by possibly passing to a conditional subsequence, we can suppose that there exists \(r\in L^0_{t,++}\) such that

$$\begin{aligned} f_t(x)+r< f_t(x_k)\text { on }A,\quad \text { for all }k\in {\mathbb {N}}. \end{aligned}$$
(5)

Denote by \(z_k\in \varTheta _t(x_k)\) a maximizer in the definition of \(f_t(x_k)\). By (c4), after possibly passing to a conditional subsequence, there exists a sequence \((z_k^\prime )\) in \(\varTheta _t(x)\) such that \(d_{Z_t}(z_k,z^\prime _k)\rightarrow 0\) a.s. By (c3), there exists a conditional subsequence \(k_1<k_2<\cdots \) with \(k_l\in L^0_t({\mathbb {N}})\) such that \(z^\prime _{k_l}\rightarrow z^\prime \in \varTheta _t(x)\). Since \(d_{Z_t}(z_{k_l},z^\prime _{k_l})\rightarrow 0\) a.s., by \({\mathcal {F}}_t\)-stability of the conditional metric \(d_{Z_t}\), it follows from the triangle inequality that \(z_{k_l}\rightarrow z^\prime \in \varTheta _t(x)\). By the \({\mathcal {F}}_t\)-stability of \(f_t\) and (c2), it follows that \(z_{k_l}\) lies in \(\varTheta _t(x_{k_l})\) and maximizes \(f_t(x_{k_l})\). Hence, it follows from (3) that

$$\begin{aligned} \limsup _{l\rightarrow \infty } f_t(x_{k_l})&=\limsup _{l\rightarrow \infty } u_t\left( x_{k_l},y_{t+1}(v_t(x_{k_l},z_{k_l})),z_{k_l}\right) \\&\le u_t\left( x,y_{t+1}(v_t(x,z^\prime )),z^\prime \right) \\&\le \sup _{z\in \varTheta _t(x)} u_t\left( x,y_{t+1}(v_t(x,z)),z\right) =f_t(x). \end{aligned}$$

Notice that, due to the \({\mathcal {F}}_t\)-stability of \(f_t\), (5) is satisfied for any conditional subsequence of \((x_k)\). Thus, we have that \(f_t(x)+r\le \limsup _{l\rightarrow \infty } f_t(x_{k_l})\le f_t(x)\) on A, which is a contradiction. We conclude that \(f_t\) is sequentially upper semi-continuous.

Finally, we show that \(y_t=f_t\). By induction hypothesis, for every \(x_t\in X_t\) and \(z_t\in Z_t\), there exists \(((x^*_s)_{s=t+2}^\mathrm{T},(z^*_s)_{s=t+1}^{T-1} )\in C_{t+1}(v_t(x_t,z_t))\) such that

$$\begin{aligned} y_{t+1}(v_t(x_t,z_t))=u_{t+1}(v_t(x_t,z_t),\cdot ,z^*_{t+1})\circ \cdots \circ u_{T-1}(x^*_{T-1},\cdot ,z^*_{T-1})\circ u_T(x^*_T). \end{aligned}$$

In particular, for \(x_t\in X_t\) and a maximizer \(z_t^*\in \varTheta _t(x_t)\) in (4), it holds

$$\begin{aligned} f_t(x_t)&=\sup _{z\in \varTheta _t(x_t)} u_t\left( x_t,y_{t+1}(v_t(x_t,z)),z\right) \\&= u_t\left( x_t,y_{t+1}(v_t(x_t,z_t^*)),z_t^*\right) \\&= u_t\left( x_t,\cdot ,z_t^*\right) \circ u_{t+1}(v_t(x_t,z^*_t),\cdot ,z^*_{t+1})\circ \cdots \circ u_{T-1}(x^*_{T-1},\cdot ,z^*_{T-1})\circ u_T(x^*_T) \\&= \sup _{(x,z)\in C_{t+1}(v_t(x_t,z^*_t)) } u_t\left( x_t,\cdot ,z_t^*\right) \circ u_{t+1}(v_t(x_t,z^*_t),\cdot ,z_{t+1})\circ \cdots \circ u_T(x_T) \\&= \sup _{z_t\in \varTheta _t(x_t)} \sup _{(x,z)\in C_{t+1}(v_t(x_t,z_t)) } u_t\left( x_t,\cdot ,z_t\right) \circ u_{t+1}(v_t(x_t,z_t),\cdot ,z_{t+1})\circ \cdots \circ u_T(x_T) \\&= \sup _{(x,z)\in C_{t}(x_t) } u_t\left( x_t,\cdot ,z_t\right) \circ u_{t+1}(v_t(x_t,z_t),\cdot ,z_{t+1})\circ \cdots \circ u_T(x_T) \\&=y_t(x_t). \end{aligned}$$

This shows that \(((x^*_s)_{s=t+1}^\mathrm{T},(z^*_s)_{s=t}^{T-1})\in C_{t}(x_t)\) is an optimizer of (1) whenever it satisfies the local optimality criterion

$$\begin{aligned} z^*_s\in {\mathop {\hbox {argmax}}\nolimits }_{z_s\in \varTheta _s(x^*_s)} u_s\left( x_s^*,y_{s+1}(v_s(x^*_s,z_s)),z_s\right) \quad \text{ and }\quad x^*_{s+1}=v_s(x^*_s,z^*_s) \end{aligned}$$

for all \(s=t,\ldots ,T-1\), where \(x_t^*=x_t\). In particular, every process which satisfies the forward recursion (2) is an optimizer for (1). \(\square \)
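The backward recursion of Theorem 2.1 can be tested numerically on a finite sample space, where the conditional objects reduce to atom-wise computations. The following Python sketch (market data, reward, and filtration are our own toy illustration) checks that backward induction over \({\mathcal F}_t\)-measurable controls reproduces the brute-force supremum in (1) for a two-period model.

```python
# Hedged numerical check of the Bellman recursion on a toy two-period
# model: two coin flips, returns +-1, controls z in {0, 1}, terminal
# reward u_T(x) = -|x|. All data are our own illustration.

from itertools import product

OMEGA = ["HH", "HT", "TH", "TT"]             # two coin flips, uniform P
r1 = {w: (1.0 if w[0] == "H" else -1.0) for w in OMEGA}
r2 = {w: (1.0 if w[1] == "H" else -1.0) for w in OMEGA}
CONTROLS = [0.0, 1.0]

def u_T(x):                                   # terminal reward
    return -abs(x)

def expected(vals):                           # E[.] under uniform P
    return sum(vals) / len(vals)

# Brute force over all adapted strategies: z_0 is deterministic (F_0
# trivial), z_1 may depend on the first flip (F_1-measurable).
best = -float("inf")
for z0 in CONTROLS:
    for zH, zT in product(CONTROLS, repeat=2):
        z1 = {"H": zH, "T": zT}
        vals = [u_T(z0 * r1[w] + z1[w[0]] * r2[w]) for w in OMEGA]
        best = max(best, expected(vals))

# Backward recursion of Theorem 2.1: y_2 = u_T, then maximize stepwise.
def y1(x, first):
    """Value at time 1 given wealth x on the F_1-atom {first flip}."""
    ws = [w for w in OMEGA if w[0] == first]
    return max(expected([u_T(x + z * r2[w]) for w in ws]) for z in CONTROLS)

def y0(x):
    # r_1 is +1 on the atom 'H' and -1 on the atom 'T'.
    return max(0.5 * y1(x + z, "H") + 0.5 * y1(x - z, "T") for z in CONTROLS)

assert y0(0.0) == best      # Bellman value equals brute-force supremum
```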

Example 2.1

As an illustration, we provide examples of \({\mathcal {F}}_t\)-conditional metric spaces which are of interest as control and parameter spaces.

  1. 1.

    Given a nonempty metric space (Xd), denote by \(L^0_t(X)\) the set of all strongly \({\mathcal {F}}_t\)-measurable functions \(x:\varOmega \rightarrow X\). The metric d extends from X to \(L^0_t(X)\) by defining

    $$\begin{aligned} d_{L^0_t(X)}(x,{{\bar{x}}})(\omega ):=d(x(\omega ),{{\bar{x}}} (\omega ))\quad \text{ for } \text{ a.a. } \omega \in \varOmega \text{ and } \text{ all } x,{{\bar{x}}}\in L^0_t(X). \end{aligned}$$

Then, \((L^0_t(X),d_{L^0_t(X)})\) is an \({\mathcal {F}}_t\)-conditional metric space.

  2. 2.

    The conditional Euclidean space with dimension \(n=\sum _k 1_{A_k} n_k\in L^0_t({\mathbb {N}})\) is defined as

    $$\begin{aligned} L^0_t({\mathbb {R}})^n=\sum _k 1_{A_k} L^0_t({\mathbb {R}}^{n_k}):=\left\{ \sum _k 1_{A_k} x_k :x_k\in L^0_t({\mathbb {R}}^{n_k}) \text { for all }k\right\} . \end{aligned}$$

    The \({\mathcal {F}}_t\)-conditional metric on \(L^0_t({\mathbb {R}})^n\) is defined by

    $$\begin{aligned} d_{L^0_t({\mathbb {R}})^n}(x,{{\bar{x}}}):=\sum _k 1_{A_k} d_{ L^0_t({\mathbb {R}}^{n_k}) }(x_k,{{\bar{x}}}_k), \end{aligned}$$

where \(x=\sum _k 1_{A_k} x_k\) and \({{\bar{x}}}=\sum _k 1_{A_k} {{\bar{x}}}_k\). Here, \(d_{L^0_t({\mathbb {R}}^{n_k})}\) denotes the \({\mathcal {F}}_t\)-conditional metric on \(L^0_t({\mathbb {R}}^{n_k})\). Straightforward verification shows that \((L^0_t({\mathbb {R}})^n, d_{L^0_t({\mathbb {R}})^n})\) is an \({\mathcal {F}}_t\)-conditional metric space.

  3. 3.

    For \(1\le p<\infty \), we define the conditional \(L^p\)-space

    $$\begin{aligned} L^p_t:=\{x\in L^0_T :{\mathbb {E}}[|x|^p|{\mathcal {F}}_t]<+\infty \} \end{aligned}$$

with \({\mathcal {F}}_t\)-conditional metric \(d_{L^p_t}(x,\bar{x}):={\mathbb {E}}[|x-{{\bar{x}}}|^p|{\mathcal {F}}_t]^{1/p}\). Then, \((L^p_t,d_{L^p_t})\) is an \({\mathcal {F}}_t\)-conditional metric space.
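On a finite sample space, two of the spaces in Example 2.1 can be computed directly, which may make the definitions concrete: an element of the conditional Euclidean space with random dimension is simply a vector per \(\omega\) whose length \(n(\omega)\) varies, and the conditional \(L^p\) metric is computed atom by atom. The following Python sketch uses our own toy data throughout.

```python
# Hedged finite-space sketches of two spaces from Example 2.1: the
# conditional Euclidean space L^0_t(R)^n with random dimension n, and
# the conditional L^p metric E[|x - xb|^p | F_t]^(1/p). All data below
# (sample space, atoms, dimensions) are our own illustration.

import math

OMEGA = range(4)                          # four equally likely outcomes

# --- conditional Euclidean space: one vector per omega, length n(omega)
n = {0: 1, 1: 3, 2: 3, 3: 2}              # conditional dimension n(omega)

def d_euclid(x, xb):
    """omega-wise Euclidean distance, an element of L^0_{t,+}."""
    assert all(len(x[w]) == len(xb[w]) == n[w] for w in OMEGA)
    return {w: math.dist(x[w], xb[w]) for w in OMEGA}

x  = {0: (1.0,), 1: (0.0, 0.0, 0.0), 2: (1.0, 2.0, 2.0), 3: (0.0, 0.0)}
xb = {0: (4.0,), 1: (0.0, 3.0, 4.0), 2: (1.0, 2.0, 2.0), 3: (1.0, 0.0)}
assert d_euclid(x, xb) == {0: 3.0, 1: 5.0, 2: 0.0, 3: 1.0}

# --- conditional L^p metric, computed atom by atom of F_t
ATOMS = [(0, 1), (2, 3)]                  # atoms of F_t

def d_Lp(y, yb, p):
    out = [0.0] * 4
    for A in ATOMS:
        m = sum(abs(y[w] - yb[w]) ** p for w in A) / len(A)
        for w in A:
            out[w] = m ** (1.0 / p)
    return out

y, yb = [1.0, 3.0, 0.0, 0.0], [0.0, 0.0, 0.0, 4.0]
# On {0,1}: E[|y-yb|^2 | F_t] = (1 + 9)/2 = 5; on {2,3}: (0 + 16)/2 = 8.
assert d_Lp(y, yb, 2) == [5 ** 0.5, 5 ** 0.5, 8 ** 0.5, 8 ** 0.5]
```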

3 Compactness Condition for the Control Set

3.1 The Finite Dimensional Case

Suppose that \(Z_t=L^0_t({\mathbb {R}}^d)\). As shown in Example 2.1, the Euclidean metric of \({\mathbb {R}}^d\) extends to the \({\mathcal {F}}_t\)-conditional metric \(d_{L^0_t({\mathbb {R}}^d)}:L^0_t({\mathbb {R}}^d)\times L^0_t({\mathbb {R}}^d)\rightarrow L^0_{t,+}\).

Proposition 3.1

Suppose that for each \(t=0,\ldots ,T-1\), the control set \(\varTheta _t\) satisfies (c1), (c2) and the following conditions:

  1. (i)

    \(\left\{ (x,z)\in X_t\times L^0_t({\mathbb {R}}^d):z\in \varTheta _t(x)\right\} \) is sequentially closed,

  2. (ii)

    for every sequence \((x_n)\) in \(X_t\) with \(x_n\rightarrow x\in X_t\) a.s. there exists \(M\in L^0_{t,+}\) such that \(d_{L^0_t({\mathbb {R}}^d)}(z,0)\le M\) for all \(z\in \bigcup _n \varTheta _t(x_n)\).

Then, the control set \(\varTheta _t\) satisfies (c1)–(c4).

Proof

Let \((x_n)\) be a sequence in \(X_t\) such that \(x_n\rightarrow x\in X_t\) a.s., and let \((z_n)\) be a sequence with \(z_n\in \varTheta _t(x_n)\). Since by assumption \(d_{L^0_t({\mathbb {R}}^d)}(z_n,0)\le M\) for some \(M\in L^0_{t,+}\), the conditional Bolzano–Weierstrass theorem  [1, Theorem 3.8] yields a conditional subsequence \(n_1<n_2<\cdots \) with \(n_k\in L^0_t({\mathbb {N}})\) such that \(d_{L^0_t({\mathbb {R}}^d)}(z_{n_k},z)\rightarrow 0\) a.s. for some \(z\in L^0_t({\mathbb {R}}^d)\). Since \(\varTheta _t\) satisfies (c2), it holds \(z_{n_k}\in \varTheta _t(x_{n_k})\), and (i) implies \(z\in \varTheta _t(x)\). This shows (c4). Further, (c3) follows by considering the constant sequence \(x_n=x\) for all \(n\in {\mathbb {N}}\). \(\square \)

Example 3.1

Let \((S_t)_{t=0}^\mathrm{T}\) be an \(({\mathcal {F}}_t)\)-adapted price process with values in \(]0,+\infty [^d\). Given an initial investment \(x_0>0\), we consider the wealth process

$$\begin{aligned} x_{t+1}=v_t(x_t,z_t):=x_t+ \vartheta _t\cdot \Delta S_{t+1}, \end{aligned}$$

where the control is an investment strategy \(\vartheta _t\in L^0_t({\mathbb {R}}^d)\). We consider the (wealth-dependent) control set with short-selling constraints

$$\begin{aligned} \varTheta _t(x_t):=\left\{ \vartheta _t\in L^0_t({\mathbb {R}}^d):x_t=\vartheta _{t}\cdot S_t,\, 0\le \vartheta ^i_t,\, i=1,\ldots ,d\right\} . \end{aligned}$$

For each t, we consider a bounded measurable function \(g_t:\varOmega \times {\mathbb {R}}^d\rightarrow {\mathbb {R}}\) such that \(g_t(\omega ,\cdot )\) is upper semi-continuous for every \(\omega \in \varOmega \). Next, we show that there exists an optimizer \(((x_s^*)_{s={t+1}}^\mathrm{T},(\vartheta _s^*)_{s=t}^{T-1})\in C_t(x_t)\) of the utility maximization problem

$$\begin{aligned} \underset{((x_s)_{s=t+1}^\mathrm{T},(\vartheta _s)_{s=t}^{T-1})\in C_t(x_t)}{\sup }{\mathbb {E}}\left[ \sum _{s=t}^\mathrm{T} g_s(\cdot ,x_s)\mid {\mathcal {F}}_t\right] . \end{aligned}$$
(6)

Inspection shows that \(\varTheta _t\) satisfies (c1), (c2) and (i) of Proposition 3.1. As for (ii), for every \(\vartheta \in \varTheta _t(x)\), each component satisfies \(0\le \vartheta ^i\le x/S^i_t\), so that \(d_{L^0_t({\mathbb {R}}^d)}(\vartheta ,0)\le M(x)\) with \(M(x):=\sqrt{d}\,\max _{i=1,\ldots ,d}\frac{x}{S^i_t}\). Let \((x_n)\) be a sequence in \(L^0_t\) such that \(x_n\rightarrow x\in L^0_t\). Take \({\bar{x}}:=\sup _n x_n\in L^0_{t,+}\). Then, \(d_{L^0_t({\mathbb {R}}^d)}(\vartheta ,0)\le M({\bar{x}})\) for all \(\vartheta \in \bigcup _n\varTheta _t(x_n)\), which shows (ii). Finally, for fixed \(\alpha >0\), define \(u_T(x):=g_T(\cdot ,x)\) and for \(t=0,1,\ldots ,T-1\),

$$\begin{aligned} u_t(x,y,z):={\mathbb {E}}\left[ g_t(\cdot ,x)+\max \left\{ -\alpha ,\min \{\alpha ,y\}\right\} |{\mathcal {F}}_t\right] . \end{aligned}$$

Then, inspection shows that the family \((u_t)_{t=0}^\mathrm{T}\) of backward generators satisfies conditions (u1) and (u2), and due to Fatou’s lemma, it also satisfies (u3). Since \(g_t\) is bounded, for \(\alpha >0\) large enough, it can be checked by backward induction that

$$\begin{aligned} u_t(x_t,\cdot ,z_t)\circ \cdots \circ u_{T-1}(x_{T-1},\cdot ,z_{T-1})\circ u_T (x_T)= {\mathbb {E}}\left[ \sum _{s=t}^\mathrm{T} g_s(\cdot ,x_s)\mid {\mathcal {F}}_t\right] \end{aligned}$$

for every \(((x_s)_{s=t+1}^\mathrm{T},(\vartheta _s)_{s=t}^{T-1})\in C_t(x_t)\). By Proposition 3.1 and Theorem 2.1, it follows that (6) admits a global optimizer \(((x^*_s)_{s=t+1}^\mathrm{T},(\vartheta ^*_s)_{s=t}^{T-1})\in C_t(x_t)\).
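A one-period numerical instance may illustrate the attainment argument of Example 3.1: with the budget and no-short-selling constraints, the control set is a compact segment, and the upper semi-continuous objective attains its maximum on it. All market data below (prices, scenarios, reward) are our own toy illustration, not taken from the paper.

```python
# Hedged one-period instance of problem (6) with our own toy data
# (d = 2 assets, two equally likely scenarios, F_t trivial): on the
# compact control set Theta(x_0) = {theta >= 0 : theta . S_0 = x_0},
# the concave reward attains its maximum, here at theta = (2, 0).

S0 = [1.0, 2.0]                          # prices at time t
S1 = [[1.0, 3.0], [1.0, 1.0]]            # prices at t+1 in two scenarios
x0 = 2.0

def g(x):                                # concave reward g(., x)
    return -(x - 2.5) ** 2

def objective(theta):                    # E[g(x_1)] (F_t trivial here)
    return 0.5 * sum(g(sum(a * s for a, s in zip(theta, S))) for S in S1)

# The budget set is the segment theta(a) = (2 - 2a, a), a in [0, 1];
# scan a grid of admissible controls and locate the maximum.
grid = [(2.0 - 2.0 * a, a) for a in (i / 100.0 for i in range(101))]
best = max(objective(th) for th in grid)
assert best == objective((2.0, 0.0))     # maximum attained at a = 0
```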

Example 3.2

Let \((S_t)_{t=0}^\mathrm{T}\) be a d-dimensional \(({\mathcal {F}}_t)\)-adapted price process. Given an initial investment \(x_0>0\), we consider the wealth process

$$\begin{aligned} x_{t+1}=v_t(x_t,z_t):=x_t+ \vartheta _t\cdot \Delta S_{t+1}-c_t, \end{aligned}$$

where the control \(z_t=(\vartheta _t,c_t)\in L^0_t({\mathbb {R}}^d)\times L^0_{t,+}\) consists of an investment strategy \(\vartheta _t\in L^0_t({\mathbb {R}}^d)\) and a consumption \(c_t\in L^0_{t,+}\). Further, the forward generator \(v_t:L^0_t \times L^0_t({\mathbb {R}}^d\times {\mathbb {R}}_+)\rightarrow L^0_{t+1}\) satisfies (v1) and (v2). We assume that the wealth process satisfies the regulatory restriction

$$\begin{aligned} \rho _t(x_{t+1})\le 0,\quad t=0,\ldots ,T-1. \end{aligned}$$
(7)

In other words, \(x_{t+1}\) is acceptable w.r.t. an \({\mathcal {F}}_t\)-conditional convex risk measure \(\rho _t:L^0_{t+1}\rightarrow {\bar{L}}^0_t\) for all \(t=0,\ldots ,T-1\). Recall that an \({\mathcal {F}}_t\)-conditional convex risk measure is

  • normalized, i.e., \(\rho _t(0)=0\),

  • monotone, i.e., \(\rho _t(x)\le \rho _t(y)\) for all \(x,y\in L^0_{t+1}\) with \(x\ge y\),

  • \({\mathcal {F}}_t\)-translation invariant, i.e., \(\rho _t(x+m)=\rho _t(x)-m\) for all \(x\in L^0_{t+1}\) and \(m\in L^0_t\),

  • \({\mathcal {F}}_t\)-convex, i.e., \(\rho _t(\lambda x+(1-\lambda )y)\le \lambda \rho _t(x)+(1-\lambda )\rho _t(y)\) for all \(x,y\in L^0_{t+1}\) and \(\lambda \in L^0_t\) with \(0\le \lambda \le 1\).
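The four axioms can be checked numerically for a concrete risk measure. The following toy verification uses our own finite example, not anything from the paper: the entropic risk measure \(\rho _t(x)=\frac{1}{\gamma }\log {\mathbb {E}}[\exp (-\gamma x)\mid {\mathcal {F}}_t]\) on a four-state space, where \({\mathcal {F}}_t\) is generated by the partition \(\{\{0,1\},\{2,3\}\}\), random variables are 4-vectors, and conditional expectations are blockwise averages.

```python
import math

# Toy verification -- our own finite example, not from the paper -- that
# the entropic risk measure rho_t(x) = (1/gamma) log E[exp(-gamma x) | F_t]
# satisfies the four axioms above. Omega has 4 equally likely states and
# F_t is generated by the partition {{0,1},{2,3}}; random variables are
# 4-vectors and conditional expectations are blockwise averages.
BLOCKS = [(0, 1), (2, 3)]
GAMMA = 2.0

def cond_exp(x):
    """E[x | F_t]: average over each partition block, returned statewise."""
    out = [0.0] * 4
    for block in BLOCKS:
        avg = sum(x[i] for i in block) / len(block)
        for i in block:
            out[i] = avg
    return out

def rho(x):
    """Entropic conditional risk measure, evaluated statewise."""
    ce = cond_exp([math.exp(-GAMMA * xi) for xi in x])
    return [math.log(c) / GAMMA for c in ce]
```

Note that \({\mathcal {F}}_t\)-translation invariance and \({\mathcal {F}}_t\)-convexity involve \(m\) and \(\lambda \) that are constant on each partition block, which is exactly the finite-state meaning of \({\mathcal {F}}_t\)-measurability.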

By \({\mathcal {F}}_t\)-translation invariance, it follows that (7) is equivalent to

$$\begin{aligned}\rho _t(\vartheta _t\cdot \Delta S_{t+1})\le x_t-c_t.\end{aligned}$$

Moreover, \(\rho _t\) is \({\mathcal {F}}_t\)-stable as it is \({\mathcal {F}}_t\)-convex; see [1, Lemma 4.3]. Hence, we consider the (wealth-dependent) \({\mathcal {F}}_t\)-stable control set

$$\begin{aligned} \varTheta _t(x_t)&:=\Big \{z_t=(\vartheta _t,c_t)\in L^0_t({\mathbb {R}}^d\times {\mathbb {R}}_+):\rho _t(\vartheta _t\cdot \Delta S_{t+1})\le x_t-c_t\\&\qquad \text{ and } 0\le c_t\le x_t \Big \}. \end{aligned}$$

Suppose that \({\mathbb {P}}(\vartheta \cdot \Delta S_{t+1}<0\mid {\mathcal {F}}_t)>0\) on \(\{\vartheta \ne 0\}\) for every \(\vartheta \in L^0_t({\mathbb {R}}^d)\); applying this to \(-\vartheta \) also yields \({\mathbb {P}}(\vartheta \cdot \Delta S_{t+1}>0\mid {\mathcal {F}}_t)>0\) on \(\{\vartheta \ne 0\}\). Moreover, we assume that \(\rho _t(\vartheta \cdot \Delta S_{t+1})\in L^0_t\) for all \(\vartheta \in L^0_t({\mathbb {R}}^d)\), and that \(\rho _t\) is \({\mathcal {F}}_t\)-sensitive to large losses, i.e., \(\lim _{m\rightarrow \infty }\rho _t(m y)=+\infty \) on \(\{{\mathbb {P}}(y<0\mid {\mathcal {F}}_t)>0\}\). Then, the control set \(\varTheta _t\) satisfies (i) and (ii) of Proposition 3.1. Indeed, consider the function \(f_t:L^0_t({\mathbb {R}}^d\times {\mathbb {R}}_+)\times L^0_t\rightarrow L^0_t\) defined as \(f_t(\vartheta ,c,x):=\rho _t(\vartheta \cdot \Delta S_{t+1})+c - x\), which is \({\mathcal {F}}_t\)-convex and therefore sequentially continuous by  [1, Theorem 7.2]. Hence, it follows that

$$\begin{aligned}&\left\{ (x,\vartheta ,c)\in L^0_t\times L^0_t({\mathbb {R}}^d\times {\mathbb {R}}_+):(\vartheta ,c)\in \varTheta _t(x)\right\} \\&\quad =\left\{ (x,\vartheta ,c)\in L^0_t\times L^0_t({\mathbb {R}}^d\times {\mathbb {R}}_+):f_t(\vartheta ,c,x)\le 0\right\} \end{aligned}$$

is \({\mathcal {F}}_t\)-convex and sequentially closed, which shows (i). As for (ii), let \((x_n)\) be a sequence in \(L^0_t\) such that \(x_n\rightarrow x\in L^0_t\). For \({{\bar{x}}}:=\sup _n x_n\in L^0_t\) one has

$$\begin{aligned} \varTheta _t(x_n)\subset \varTheta _t({{\bar{x}}}) \end{aligned}$$

for all \(n\in {\mathbb {N}}\). Hence, it remains to show that \(\varTheta _t({{\bar{x}}})\) is \({\mathcal {F}}_t\)-bounded, i.e., there is \(M\in L^0_{t,+}\) such that \(d_{L^0_t({\mathbb {R}}^d)}(\vartheta ,0)+c\le M\) for all \((\vartheta ,c)\in \varTheta _t({{\bar{x}}})\). Since \(\varTheta _t({{\bar{x}}})\) contains \((0,0)\in L^0_t({\mathbb {R}}^d\times {\mathbb {R}}_+)\), by   [1, Theorem 3.13] it is enough to show that for each \((\vartheta ,c)\in \varTheta _t({{\bar{x}}})\) with \((\vartheta ,c)\ne (0,0)\), there exists \(k\in {\mathbb {N}}\) such that \(k(\vartheta ,c)\notin \varTheta _t({{\bar{x}}})\). If \(c\ne 0\), this is obvious. Otherwise, it holds \({\mathbb {P}}(\vartheta \ne 0)>0\), in which case \(\lim _{m\rightarrow \infty }\rho _t(m\vartheta \cdot \Delta S_{t+1})=+\infty \) on \(\{\vartheta \ne 0\}\).

By Proposition 3.1 and Theorem 2.1, it follows that for each \(x_0>0\) and every recursive utility function with backward generators \((u_t)_{t=0}^\mathrm{T}\), which satisfy (u1)–(u3), there exists a global optimizer \(((x^*_s)_{s=1}^\mathrm{T},(\vartheta ^*_s,c^*_s)_{s=0}^{T-1})\in C_0(x_0)\) of the utility maximization problem (1) such that the local criterion (2) holds.

3.2 Conditional Dimension

Suppose that \(Z_t\) is the conditional Euclidean space \(L^0_t({\mathbb {R}})^{d_t}\) with dimension \(d_t=d_t(x)\in L^0_t({\mathbb {N}})\), which depends on the parameter \(x\in X_t\); see Example 2.1. Let \(d_t:X_t\rightarrow L^0_t({\mathbb {N}})\) be \({\mathcal {F}}_t\)-stable and sequentially continuous, where \(L^0_t({\mathbb {N}})\) is endowed with the \({\mathcal {F}}_t\)-conditional metric which extends the discrete metric on \({\mathbb {N}}\). The control set \(\varTheta _t\) is chosen such that the following conditions hold:

  1. (c1)

    \(\emptyset \ne \varTheta _t(x)\subset L^0_t({\mathbb {R}})^{d_t(x)}\) for all \(x\in X_t\),

  2. (c2)

    \(\varTheta _t\) is \({\mathcal {F}}_t\)-stable, i.e.,

    $$\begin{aligned} \varTheta _t\left( \sum _k 1_{A_k}x_k\right) =\sum _k 1_{A_k} \varTheta _t(x_k)\subset L^0_t({\mathbb {R}})^{d_t(\sum _k 1_{A_k} x_k)} \end{aligned}$$

    for all \((A_k)\in \varPi _t\) and every sequence \((x_k)\) in \(X_t\).

Remark 3.1

Since \(Z_t=L^0_t({\mathbb {R}})^{d_t(x)}\) depends on the state \(x\in X_t\), we are in a more general setting than in Theorem 2.1. However, since \(L^0_t({\mathbb {N}})\) is endowed with the conditional discrete metric, for every sequence \((x_n)\) in \(X_t\) such that \(x_n\rightarrow x\in X_t\), there exists \(n_0\in L^0_t({\mathbb {N}})\) such that \(d_t(x_n)=d_t(x)\) for all \(n\ge n_0\). In particular, \(L^0_t({\mathbb {R}})^{d_t(x_n)}=L^0_t({\mathbb {R}})^{d_t(x)}\) for all \(n\ge n_0\), and Theorem 2.1 still holds true by applying the arguments to \(z_n\in \varTheta _t(x_n)\) for sequences \(x_n\rightarrow x\) in the conditional space \(L^0_t({\mathbb {R}})^{d_t(x)}\).

A variant of Proposition 3.1 for control sets with conditional dimension can be formulated as follows.

Proposition 3.2

Suppose that for each \(t=0,\ldots ,T-1\), the control set \(\varTheta _t\) satisfies \(\text {(c1)}, \text {(c2)}\) and the following conditions:

  1. (i)

    \(\left\{ (x,z)\in X_t\times L^0_t({\mathbb {R}})^{d_t(x)}:z\in \varTheta _t(x)\right\} \) is sequentially closed.

  2. (ii)

For every sequence \((x_n)\) in \(X_t\) with \(x_n\rightarrow x\in X_t\), there exist \(M\in L^0_{t,+}\), a conditional subsequence \(n_1<n_2<\cdots \) in \(L^0_t({\mathbb {N}})\) and \(k_0\in L^0_t({\mathbb {N}})\) such that \(d_{L^0_t({\mathbb {R}}^{d_t(x)})}(z,0)\le M\) for all \(z\in \bigcup _{k\ge k_0} \varTheta _t(x_{n_k})\) and \(\varTheta _t(x_{n_k})\subset Z_t(d_t(x))\) for all \(k\ge k_0\).

Then, the control set \(\varTheta _t\) satisfies \(\text {(c1)}\)–\(\text {(c4)}\).

Proof

Let \((x_n)\) in \(X_t\) be a sequence such that \(x_n\rightarrow x\in X_t\), and \(z_n\in \varTheta _t(x_n)\). By Remark 3.1, there exists a conditional subsequence \(n_1<n_2<\cdots \) with \(n_k\in L^0_t({\mathbb {N}})\) such that \(z_{n_k}\in \varTheta _t(x_{n_k})\subset L^0_t({\mathbb {R}})^{d_t(x)}\) for all k. Hence, we can argue similarly to the proof of Proposition 3.1. \(\square \)

Example 3.3

Consider a portfolio maximization problem, where the number of traded assets depends on past decisions. More precisely, given a portfolio \(x_t=z_{t-1}=(\vartheta _{t-1},d_{t-1})\in L^0_{t-1}({\mathbb {R}})^{d_{t-1}}\times L^0_{t-1}({\mathbb {N}})\) chosen at time \(t-1\) (with initial value \(x_{-1}=(\vartheta _{-1},d_{-1})\in {\mathbb {R}}^{d_{-1}}\times {\mathbb {N}}\)), the investor can rebalance the portfolio at time t to

$$\begin{aligned} x_{t+1}=z_t=(\vartheta _t,d_t)\in \varTheta _t(x_t)\subset L^0_t({\mathbb {R}})^{d_{t-1}}\times L^0_t({\mathbb {N}}). \end{aligned}$$

Here, the state spaces and the control spaces \(X_{t+1}=Z_t=L^0_t({\mathbb {R}})^{d_{t-1}}\times L^0_t({\mathbb {N}})\) both depend on the past decision \(d_{t-1}\). In line with Remark 3.1, the convergence \(x^n_t=(\vartheta ^n_{t-1},d^n_{t-1}) \rightarrow x_t=(\vartheta _{t-1},d_{t-1})\) is understood as \(\vartheta ^n_{t-1}\rightarrow \vartheta _{t-1}\) in the conditional metric space \(L^0_t({\mathbb {R}})^{d_{t-1}}\), since \(d^n_{t-1}=d_{t-1}\) for all \(n\ge n_0\) for some \(n_0\in L^0_{t-1}({\mathbb {N}})\). Suppose that the control set \(\varTheta _t\) satisfies (c1), (c2) as well as (i) and (ii) of Proposition 3.2. Then, along the same argumentation as in Proposition 3.2, it follows that \(\varTheta _t\) satisfies (c1)–(c4). Since \(v_t(x_t,z_t):=z_t\) satisfies (v1) and (v2), Theorem 2.1 is applicable whenever the backward generators \((u_t)_{t=0}^\mathrm{T}\) satisfy (u1)–(u3).

The conditional dimension depending on past decisions makes it possible, for instance, to add new assets at time t (\(d_t>d_{t-1}\)), which are then traded at \(t+1\). Notice that \(\varTheta _t(\vartheta _{t-1},d_{t-1})\) denotes the set of all attainable portfolios at time t. For instance, let \(S_t\in L^0_{t,++}({\mathbb {R}}^d)\) be a price process with fixed \(d\in {\mathbb {N}}\). In a frictionless market with a no-short-selling constraint one has

$$\begin{aligned}\varTheta _t(\vartheta _{t-1}):=\{\vartheta _t\in L^0_{t,+}({\mathbb {R}}^d):\vartheta _t\cdot S_t=\vartheta _{t-1}\cdot S_t\},\end{aligned}$$

which satisfies (c1)–(c4). Transaction costs can be included into the model by considering \(\varTheta _t(\vartheta _{t-1}):=\{\vartheta _t\in L^0_{t,+}({\mathbb {R}}^d):\vartheta _t-\vartheta _{t-1}\in C_t\}\) for a solvency region \(C_t\subset L^0_t({\mathbb {R}}^d)\); see, e.g.,  [21] for a discussion of different market models. Also, the solvency regions can be modeled state-dependently with conditional dimension \(d_t\in L^0_t({\mathbb {N}})\).
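The frictionless, no-short-selling rebalancing set above admits a simple pointwise membership test. The sketch below is a toy illustration with our own two-asset data (the function names are ours): a new portfolio is attainable iff it is nonnegative and preserves the value \(\vartheta _t\cdot S_t=\vartheta _{t-1}\cdot S_t\).

```python
# Toy membership test -- names and the two-asset data below are our own --
# for the frictionless, no-short-selling rebalancing set
# Theta_t(theta_old) = { theta >= 0 : theta . S_t = theta_old . S_t }.
def dot(a, b):
    return sum(u * v for u, v in zip(a, b))

def admissible(theta_new, theta_old, prices, tol=1e-9):
    """theta_new is attainable iff nonnegative and value-preserving."""
    return (all(w >= -tol for w in theta_new)
            and abs(dot(theta_new, prices) - dot(theta_old, prices)) < tol)
```

A transaction-cost variant would replace the value-preservation equality by the solvency condition \(\vartheta _t-\vartheta _{t-1}\in C_t\).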

4 Unbounded Control Sets

In this section, we consider unbounded control sets \(\varTheta _t\equiv L^0_t({\mathbb {R}}^d)\) and do not assume constraints on the controls, but derive (c3) and (c4) for upper-level sets of \(y_t\) as a result of stronger assumptions on the forward and backward generators. In particular, we additionally need that the backward generators are \({\mathcal {F}}_t\)-sensitive to large losses and increasing in the first argument. Suppose that the forward generators

$$\begin{aligned} v_t:L^0_t \times L^0_t({\mathbb {R}}^d) \rightarrow L^0_{t+1}, \quad t=0,1,\ldots ,T-1, \end{aligned}$$

satisfy (v1), (v2) and

  1. (v3)

    \(v_t\) is increasing in the first component,

  2. (v4)

    \(v_t(x,\lambda z + (1-\lambda )z^\prime )\ge \lambda v_t(x,z) + (1-\lambda ) v_t(x,z^\prime )\) for all \(x\in L^0_t, z,z^\prime \in L^0_t({\mathbb {R}}^d)\), and \(\lambda \in L^0_t\) with \(0\le \lambda \le 1\),

  3. (v5)

    \({\mathbb {P}}(v_t(x,z)<x\mid {\mathcal {F}}_t)>0\) on \(\{z\ne 0\}\) for all \(x\in L^0_t\) and \(z\in L^0_t({\mathbb {R}}^d)\),

  4. (v6)

    \(v_t(x,0)=x\) for all \(x\in L^0_t\).

As for the backward generators, let \(u_T:L^0_T\rightarrow L^0_T\) be the identity mapping, and

$$\begin{aligned} u_t:L^0_t\times {\underline{\hbox {L}}}^0_{t+1}\times L^0_t({\mathbb {R}}^d)\rightarrow {\underline{\hbox {L}}}^0_t, \quad t=0,\ldots , T-1, \end{aligned}$$

satisfy (u1) and (u3) as well as

  1. (u2’)

    \(u_t\) is increasing in the first and second component,

  2. (u4)

    \(u_t(x,\lambda y + (1-\lambda )y^\prime ,\lambda z + (1-\lambda )z^\prime )\ge \min \left\{ u_t(x,y,z),u_t(x,y^\prime ,z^\prime )\right\} \) for all \(x\in L^0_t, y,y^\prime \in {\underline{\hbox {L}}}^0_{t+1}, z,z^\prime \in L^0_t({\mathbb {R}}^d)\), and \(\lambda \in L^0_t\) with \(0\le \lambda \le 1\),

  3. (u5)

    \(u_t(x,y+c,z)=u_t(x,y,z)+c\) for all \(x\in L^0_t, y\in {\underline{\hbox {L}}}^0_{t+1}, z\in L^0_t({\mathbb {R}}^d)\) and \(c\in L^0_t\),

  4. (u6)

    \(\lim _{m\rightarrow \infty }u_t(x,m y,m z)=-\infty \) on \(\{{\mathbb {P}}(y<0\mid {\mathcal {F}}_t)>0\}\) for all \(z\in L^0_t({\mathbb {R}}^d)\) and \(y\in {\underline{\hbox {L}}}^0_{t+1}\),

  5. (u7)

    \(u_t(x,0,0)=0\) for all \(x\in L^0_t\).

Let \(y_t:L^0_t\rightarrow {\underline{\hbox {L}}}^0_t\) be given as in (1), where

$$\begin{aligned} C_t(x_t)&:=\left\{ \left( (x_s)_{s=t+1}^\mathrm{T},(z_s)_{s=t}^{T-1}\right) :x_{s+1}=v_s(x_s,z_s),z_s\in L^0_s({\mathbb {R}}^d)\right. \\&\left. \quad \text { for all }s=t,\ldots ,T-1\right\} . \end{aligned}$$

Then, the following variant of Theorem 2.1 holds.

Proposition 4.1

Suppose that \(\text {(v1)}\)–\(\text {(v6)}\) and \(\text {(u1)}\), \((\hbox {u}2')\), \(\text {(u3)}\)–\((\hbox {u}7)\) are fulfilled, and there exists a constant \(K>0\) such that

$$\begin{aligned} \sup _{z\in L^0_t({\mathbb {R}}^d)} u_t(x,v_t(x,z),z) - x \le K \end{aligned}$$

for all \(t=0,\ldots ,T-1\), and \(x\in L^0_t\). Then, the functions \(y_t:L^0_{t}\rightarrow {\underline{\hbox {L}}}^0_t\) are \({\mathcal {F}}_t\)-stable, increasing and sequentially upper semi-continuous for all \(t=0,\ldots , T\), and can be computed by backward recursion

$$\begin{aligned} y_T(x_T)&=u_T(x_T)=x_T\\ y_t(x_t)&=\max _{z_t\in L^0_t({\mathbb {R}}^d)} u_t(x_t,y_{t+1}(v_t(x_t,z_t)),z_t), \quad t=0,\ldots ,T-1. \end{aligned}$$

Moreover, for every \(x_t\in L^0_t\) the process \(((x^*_s)_{s=t}^\mathrm{T},(z^*_s)_{s=t}^{T-1})\) given by \(x_t^*=x_t\), and forward recursion \(x^*_{s+1}=v_s(x^*_s,z^*_s)\), where

$$\begin{aligned} z_s^*\in {\mathop {\hbox {argmax}}\limits }_{z_s\in L^0_s({\mathbb {R}}^d)} u_s\left( x_s^*,y_{s+1}(v_s(x^*_s,z_s)),z_s\right) ,\quad s=t,\ldots ,T-1, \end{aligned}$$
(8)

satisfies \(((x^*_s)_{s=t+1}^\mathrm{T},(z^*_s)_{s=t}^{T-1})\in C_t(x_t)\) and

$$\begin{aligned} y_t(x_t)= u_t(x_t,\cdot ,z^*_t)\circ \cdots \circ u_{T-1}(x^*_{T-1},\cdot ,z^*_{T-1})\circ u_T (x^*_T). \end{aligned}$$

Proof

The proof is similar to Theorem 2.1. However, since the control set is not compact we have to argue differently to show the existence of (4), i.e., that the supremum in

$$\begin{aligned} y_t(x_t):=\sup _{z\in L^0_t({\mathbb {R}}^d)} u_t\left( x_t,y_{t+1}(v_t(x_t,z)),z\right) ,\quad x_t\in L^0_t, \end{aligned}$$

is attained. To do so, we first show that

$$\begin{aligned} 0\le y_t(x) - x \le K_t\quad \text{ for } \text{ all } x\in L^0_t, \end{aligned}$$
(9)

where \(K_t:=(T-t)K\) for all \(t=0,1,\ldots ,T\). For \(t=T\), one has \(y_T(x)-x=0\). By induction, suppose that \(y_{t+1}(x)-x\le K_{t+1}\). Then, by (u2’) and (u5), for every \(z\in L^0_t({\mathbb {R}}^d)\) it holds

$$\begin{aligned}&u_t\left( x,y_{t+1}(v_t(x,z)),z\right) -x=u_t\left( x,y_{t+1}(v_t(x,z))-v_t(x,z)+v_t(x,z),z\right) -x \\&\le u_t(x,v_t(x,z),z) - x + K_{t+1} \le K+K_{t+1}=K_t, \end{aligned}$$

so that \(y_t(x)-x\le K_t\). As for the lower bound, suppose by induction that \(x\le y_{t+1}(x)\). By (v6), (u2\(^{\prime }\)), (u5) and (u7) it follows that

$$\begin{aligned} y_t(x)&\ge u_t\left( x,y_{t+1}(v_t(x,0)),0\right) \ge u_t\left( x,y_{t+1}(x),0\right) \\ {}&\ge u_t\left( x,x,0\right) = u_t\left( x,0,0\right) +x= x. \end{aligned}$$

Fix \(x\in L^0_t\). For \(z\in L^0_t({\mathbb {R}}^d)\) with \(u_t(x,y_{t+1}(v_t(x,0)),0)\le u_t(x,y_{t+1}(v_t(x,z)),z)\), it follows from (9), (u5), (u7) and (v6) that

$$\begin{aligned} x&=u_t(x,x,0)\le u_t(x,y_{t+1}(x),0) \le u_t\left( x,y_{t+1}(v_t(x,0)),0\right) \\&\le u_t\left( x,y_{t+1}(v_t(x,z)),z\right) \le u_t\left( x,v_t(x,z),z\right) +K_{t+1}. \end{aligned}$$

This shows that

$$\begin{aligned} y_t(x)=\sup _{z\in \varTheta _t(x)} u_t\left( x,y_{t+1}(v_t(x,z)),z\right) , \end{aligned}$$

for the \({\mathcal {F}}_t\)-stable set

$$\begin{aligned} \varTheta _t(x):=\left\{ z\in L^0_t({\mathbb {R}}^d):u_t(x,v_t(x,z),z) \ge x - K_{t+1}\right\} . \end{aligned}$$

It remains to show that \(\varTheta _t\) satisfies (c1)–(c4). To that end, we verify (i) and (ii) of Proposition 3.1. By (u3) and (v2), it follows that the set

$$\begin{aligned} \left\{ (x,z)\in L^0_t\times L^0_t({\mathbb {R}}^d):z\in \varTheta _t(x)\right\} \end{aligned}$$

is sequentially closed, which shows (i) of Proposition 3.1. As for (ii) of Proposition 3.1 let \((x_n)\) be a sequence in \(L^0_t\) such that \(x_n\rightarrow x\in L^0_t\). Defining \({\underline{x}}:=\inf _n x_n\in L^0_t\) as well as \({{\bar{x}}}:=\sup _n x_n\in L^0_t\), it follows from (u2’) and (v3) that

$$\begin{aligned} \varTheta _t(x_n)\subset \left\{ z\in L^0_t({\mathbb {R}}^d):u_t(\bar{x},v_t({{\bar{x}}},z),z) \ge {\underline{x}} - K_{t+1}\right\} =:\varTheta _t({\underline{x}},{{\bar{x}}}) \end{aligned}$$

for all \(n\in {\mathbb {N}}\). Moreover, by (u4) and (v4), the set \(\varTheta _t({\underline{x}},{{\bar{x}}})\) is \({\mathcal {F}}_t\)-convex. It remains to show that there exists \(M\in L^0_t\) such that \(d_{L^0_t({\mathbb {R}}^d)}(z,0)\le M\) for all \(z\in \varTheta _t({\underline{x}},{{\bar{x}}})\). This \(L^0_t\)-boundedness of \(\varTheta _t({\underline{x}},{{\bar{x}}})\) would follow from [1, Theorem 3.13], if for all \(z\in \varTheta _t({\underline{x}},{{\bar{x}}})\) with \(z\ne 0\), there exists \(A\in {\mathcal {F}}_t\) with \({\mathbb {P}}(A)>0\) such that

$$\begin{aligned} \lim _{m\rightarrow \infty } u_t({{\bar{x}}}, v_t({{\bar{x}}},m z), m z)=-\infty \quad \text { on }A. \end{aligned}$$
(10)

Indeed, since by (v5) one has \({\mathbb {P}}(v_t({{\bar{x}}},z)<\bar{x}\mid {\mathcal {F}}_t)>0\) on \(\{z\ne 0\}\), there exists \(l\in {\mathbb {N}}\) such that \(A:=\left\{ {\mathbb {P}}\left( |{{\bar{x}}}| + l (v_t({{\bar{x}}},z)- {{\bar{x}}})<0 \mid {\mathcal {F}}_t\right) >0\right\} \in {\mathcal {F}}_t\) satisfies \({\mathbb {P}}(A)>0\). By (v4), it follows that

$$\begin{aligned} v_t({{\bar{x}}},z)\ge \frac{1}{m}v_t({{\bar{x}}},mz)+\frac{m-1}{m}v_t(\bar{x},0), \end{aligned}$$

which by (v6) implies \(m\left( v_t({{\bar{x}}},z)-{{\bar{x}}}\right) \ge v_t(\bar{x},mz)-{{\bar{x}}}\) for all \(m\in {\mathbb {N}}\). This shows that

$$\begin{aligned} u_t({{\bar{x}}},v_t({{\bar{x}}},m z),m z)&\le u_t({{\bar{x}}},|{{\bar{x}}}| + v_t(\bar{x},m z)- {{\bar{x}}},m z)\\&\le u_t\left( {{\bar{x}}},\frac{m}{l}\left( |\bar{x}| + l(v_t({{\bar{x}}},z)- {{\bar{x}}})\right) ,m z\right) \end{aligned}$$

for all \(m\in {\mathbb {N}}\) large enough. Hence, the condition (u6) implies (10). \(\square \)

Example 4.1

Let \((S_t)_{t=0}^\mathrm{T}\) be an \({\mathbb {R}}^d\)-valued adapted stochastic process modeling the discounted stock prices of a financial market model. Given a trading strategy \(\vartheta _t\in L^0_t({\mathbb {R}}^d), t=0,\ldots ,T-1\), and an initial investment \(x_0\in L^0_0\), we define recursively the wealth process

$$\begin{aligned} x_{t+1}=v_t(x_t,\vartheta _t):=x_t + \vartheta _t\cdot \Delta S_{t+1}, \quad t=0,\ldots ,T-1, \end{aligned}$$

where \(\Delta S_{t+1}:=S_{t+1} - S_{t}\) denotes the stock price increment. We assume the no-arbitrage condition:

$$\begin{aligned}\vartheta \cdot \Delta S_{t+1}\ge 0 \text{ for } \vartheta \in L^0_t({\mathbb {R}}^d)\quad \text{ implies }\quad \vartheta = 0\end{aligned}$$

for all \(t=0,\ldots ,T-1\). Then, the forward generator \(v_t:L^0_t\times L^0_t({\mathbb {R}}^d)\rightarrow L^0_{t+1}\) satisfies (v1)–(v6). As for the backward generators, let \(u_T:L^0_T\rightarrow L^0_T\) be the identity and

$$\begin{aligned} u_t:L_t^0\times {\underline{\hbox {L}}}^0_{t+1}\rightarrow {\underline{\hbox {L}}}^0_t, \quad u_t(x,y):=\frac{1}{\gamma _t(x)}g_t(\gamma _t(x)y), \quad t=0,\ldots ,T-1, \end{aligned}$$

where \(g_t :{\underline{\hbox {L}}}^0_{t+1}\rightarrow {\underline{\hbox {L}}}^0_t\) is increasing, \({\mathcal {F}}_t\)-concave, \({\mathcal {F}}_t\)-translation invariant and sequentially upper semi-continuous with \(g_t(0)=0\) and \(\lim _{r\rightarrow \infty } g_t(ry)=-\infty \) on \(\{{\mathbb {P}}(y<0\mid {\mathcal {F}}_t)>0\}\). The function \(\gamma _t:L^0_t\rightarrow L^0_{t,++}\) is \({\mathcal {F}}_t\)-stable, decreasing and sequentially continuous, and models the risk aversion depending on the wealth \(x_t\) at time t. Then, \(u_t\) satisfies the conditions (u1), (u2\(^{\prime }\)), (u3)–(u7). We only verify (u2\(^{\prime }\)) and (u3). To prove (u2\(^{\prime }\)), take \(x_1\le x_2\) and \(y_1\le y_2\), and let \(\beta _i:=1/\gamma _t(x_i)\) for \(i=1,2\); note that \(\beta _1\le \beta _2\) since \(\gamma _t\) is decreasing. By using the monotonicity and \({\mathcal {F}}_t\)-concavity of \(g_t\), we have

$$\begin{aligned} g_t\left( \frac{y_2}{\beta _2}\right) \ge g_t\left( \frac{y_1}{\beta _2}\right) \ge \frac{\beta _1}{\beta _2}g_t\left( \frac{y_1}{\beta _1}\right) + \frac{\beta _2-\beta _1}{\beta _2}g_t(0) = \frac{\beta _1}{\beta _2} g_t\left( \frac{y_1}{\beta _1}\right) . \end{aligned}$$

Multiplying by \(\beta _2\), we obtain \(u_t(x_2,y_2)\ge u_t(x_1,y_1)\). Due to the monotonicity of \(u_t\), it suffices to verify (u3) for decreasing sequences. Indeed, suppose that \({x}_k\searrow x\) and \({y}_k\searrow y\). Then, by the monotonicity and \({\mathcal {F}}_t\)-concavity of \(g_t\) together with \(g_t(0)=0\), we have

$$\begin{aligned} g_t(\gamma _t({x}_k){y}_k)\ge \frac{\gamma _t({x}_k)}{\gamma _t(x)}g_t(\gamma _t(x)y)\quad \text { for all }k. \end{aligned}$$

Thus, by using that \(g_t\) is sequentially upper semi-continuous and \(\gamma \) is sequentially continuous we obtain

$$\begin{aligned} g_t(\gamma _t(x)y)&\ge \underset{k\rightarrow \infty }{\limsup }g_t(\gamma _t({x}_k){y}_k)\ge \underset{k\rightarrow \infty }{\liminf }g_t(\gamma _t({x}_k){y}_k) \\ {}&\ge \underset{k\rightarrow \infty }{\lim }\frac{\gamma _t({x}_k)}{\gamma _t(x)}g_t(\gamma _t(x)y)=g_t(\gamma _t(x)y). \end{aligned}$$

This shows that \(g_t(\gamma _t({x}_k){y}_k)\rightarrow g_t(\gamma _t(x)y)\), and therefore \(u_t(x_k,y_k)\rightarrow u_t(x,y)\). Given the wealth process \((x_t)_{t=0}^\mathrm{T}\), define the backward process

$$\begin{aligned} y_t(x_t)=\underset{((x_s)_{s=t}^\mathrm{T},(\vartheta _s)_{s=t}^{T-1})\in C_t(x_t)}{\sup }u_t(x_t,\cdot )\circ \cdots \circ u_{T-1}(x_{T-1},\cdot )\circ u_T (x_T),\quad \end{aligned}$$

for \(t=0,\ldots ,T-1\), where \(C_t(x_t)\) consists of all \(((x_s)_{s=t}^\mathrm{T},(\vartheta _s)_{s=t}^{T-1})\) such that \(x_{s+1}=x_s+\vartheta _s\cdot \Delta S_{s+1}\) for all \(s=t,\ldots ,T-1\). By induction, one can verify that \(y_t(x+c)=y_t(x)+c\) for every \(c\in L^0_{t-1}\) with \(t=1,\ldots ,T\). Suppose there exists \(K>0\) such that

$$\begin{aligned} u_t\left( x,v_t(x,\vartheta ),\vartheta \right) -x\le \frac{1}{\gamma _t(x)}g_t\left( \gamma _t(x)\vartheta \cdot \Delta S_{t+1}\right) \le K \end{aligned}$$
(11)

for all \(t=0,\ldots ,T-1, \vartheta \in L^0_t({\mathbb {R}}^d)\), and \(x\in L^0_t\). Then, it follows from Proposition 4.1 that

$$\begin{aligned} y_0(x_0)=\underset{((x_t)_{t=0}^\mathrm{T},(\vartheta _t)_{t=0}^{T-1})\in C_0(x_0)}{\sup }u_0(x_0,\cdot )\circ \cdots \circ u_{T-1}(x_{T-1},\cdot )\circ u_T (x_T) \end{aligned}$$

is attained for all \(x_0\in L^0_0\). For instance, one could think of the dynamic entropic preference functional with generators \(-\frac{1}{\gamma _t}\log \left( {\mathbb {E}}[\exp (-\gamma _t y)\mid {\mathcal {F}}_t]\right) \), where the local risk aversion coefficient \(\gamma _t=\gamma _t(x_t)\) depends on the current wealth \(x_t\). Notice that \(\lim _{m\rightarrow \infty }-\log ({\mathbb {E}}[\exp (-my)\mid {\mathcal {F}}_t])=-\infty \) on \(\{{\mathbb {P}}(y<0\mid {\mathcal {F}}_t)>0\}\).
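The entropic recursion can be computed numerically on a small tree. The following sketch rests on our own simplifying assumptions, not the paper's general setting: one asset on a binary tree with \(\Delta S=\pm 1\), an up-probability P_UP, a constant risk aversion GAMMA (so \(\gamma \) does not depend on wealth here), and a finite strategy grid standing in for the unbounded control set \(L^0_t({\mathbb {R}}^d)\).

```python
import math

# Toy backward recursion for the entropic example -- our own simplified
# setup: one asset on a binary tree with Delta S = +1 (prob P_UP) or -1,
# constant risk aversion GAMMA (so gamma does not depend on wealth here),
# and a finite strategy grid standing in for the unbounded control set.
T = 2
P_UP = 0.6
GAMMA = 1.0
GRID = [k / 100 - 1.0 for k in range(201)]      # theta in [-1.0, 1.0]

def entropic(y_up, y_down):
    """One-step entropic generator -(1/gamma) log E[exp(-gamma y)]."""
    e = P_UP * math.exp(-GAMMA * y_up) + (1 - P_UP) * math.exp(-GAMMA * y_down)
    return -math.log(e) / GAMMA

def y(t, x):
    """Value function y_t(x) = max_theta u(y_{t+1}(x + theta * dS))."""
    if t == T:
        return x
    return max(entropic(y(t + 1, x + th), y(t + 1, x - th)) for th in GRID)
```

The bounded grid guarantees that the maximum is attained; in the proposition, the same role is played by the bound (11) together with sensitivity to large losses, which cuts the effective control set down to a conditionally bounded one.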

5 Connection to Random Set Theory

The aim of this section is to discuss methodological similarities and differences between conditional analysis and random set theory. Random sets are established on the basis of results in classical analysis. They seek a formalization of a randomized problem such that classical theorems can be applied in each state or point of the underlying probability space. Measurable selection lemmas then help to extract a measurable object. To guarantee that such selections exist, topological countability assumptions on the involved spaces, such as separability, are usually necessary. On the other hand, conditional analysis relies on conditional versions of classical results, which are applied directly to sets of measurable functions in which functions that are equal almost surely are systematically identified. The formalization therefore mainly consists in describing those sets for which a conditional version of classical results can be proved  [2]. Measurability is then preserved by construction through the application of these conditional results. Conditional versions of classical theorems exist under measure-theoretic countability assumptions such as \(\sigma \)-finiteness, by working on the measure algebra obtained from quotienting out the ideal of null sets of the underlying probability space; in return, the topological restrictions of random set techniques can be relaxed.

In the following, we show that conditional analysis extends measurable selections and random set theory. More precisely, we establish a correspondence between basic objects in random set theory and their analogues in conditional analysis under the hypothesis of separability. We fix a complete probability space \((\varOmega ,{\mathcal {F}},{\mathbb {P}})\) and a Polish space E. Recall that a closed-valued map \(S:\varOmega \rightrightarrows E\) (i.e., \(S(\omega )\subset E\) is a closed set for all \(\omega \in \varOmega \)) is Effros measurableFootnote 3 whenever \(S^{-1}(O):=\{\omega \in \varOmega :S(\omega )\cap O\ne \emptyset \}\in {\mathcal {F}}\) for all open sets O in E. Throughout, we assume that \(S(\omega )\ne \emptyset \) for a.a. \(\omega \in \varOmega \), and we identify \(S_1\) and \(S_2\) whenever \(S_1(\omega )=S_2(\omega )\) for a.a. \(\omega \in \varOmega \). We say that \(x\in L^0(E)\) is an a.s. measurable selection of S if \(x(\omega )\in S(\omega )\) for a.a. \(\omega \in \varOmega \). We denote by \(X_S\) the set of all a.s. measurable selections of S.

The following theorem is due to Kabanov and Safarian [22, Proposition 5.4.3] in the particular case \(E={\mathbb {R}}^d\); see also  [23, Theorem 2.1.6] and  [24, Theorem 2.3]Footnote 4 as well as Remark 5.1 for further details on the relations between these results. Due to space limitations, we omit our alternative proof of Theorem 5.1 based on conditional analysis techniques; it can be found in the arXiv version.

Theorem 5.1

Let \(S:\varOmega \rightrightarrows E\) be a closed-valued and Effros measurable mapping, and let \(X\subset L^0(E)\) be decomposable and sequentially closed.Footnote 5 Then, there exist closed-valued and Effros measurable mappings \(S_X:\varOmega \rightrightarrows E\) and \(S_{X_S}:\varOmega \rightrightarrows E\) satisfying the reciprocality relations \(S=S_{X_S}\) and \(X=X_{S_X}\).
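The correspondence in Theorem 5.1 between set-valued maps and their selection sets can be illustrated on a finite probability space. The sketch below uses our own toy data, with E a finite discrete space (which is trivially Polish): it recovers S pointwise from its selection set \(X_S\) and checks decomposability, i.e., that gluing two selections along any event yields another selection.

```python
from itertools import product

# Finite toy illustration of Theorem 5.1 -- our own data, with E a finite
# discrete space: on Omega = {0,1,2} (all subsets measurable, uniform
# measure) the selection set X_S determines S pointwise, and X_S is
# decomposable: gluing two selections along any event gives a selection.
OMEGA = range(3)
S = {0: {0, 1}, 1: {2}, 2: {0, 2}}              # closed-valued map

# all measurable selections x with x(w) in S(w)
X_S = [dict(zip(OMEGA, vals)) for vals in product(*(sorted(S[w]) for w in OMEGA))]

def glue(x1, x2, event):
    """1_A x1 + 1_{A^c} x2: concatenation along the event A."""
    return {w: (x1[w] if w in event else x2[w]) for w in OMEGA}

# recover S from its selections: S(w) = {x(w) : x in X_S}
recovered = {w: {x[w] for x in X_S} for w in OMEGA}
```

In the finite setting every null set is empty and every selection is measurable, so the subtleties that Theorem 5.1 resolves in general do not arise; the sketch only shows the shape of the reciprocality relations \(S=S_{X_S}\) and the role of decomposability.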

A frequently employed concept in stochastic optimal control is that of a normal integrand; see, e.g., [25, 26]. This is a function \(f:\varOmega \times E\rightarrow {\mathbb {R}}\) whose epigraphical mapping \(S_f:\varOmega \rightrightarrows E\times {\mathbb {R}}\), \(S_f(\omega ):=\{(x,r)\in E\times {\mathbb {R}}:f(\omega ,x)\le r\}\), is closed-valued and Effros measurable. A consequence of normality is that \(f(\omega ,x(\omega ))\) is measurable in \(\omega \) whenever \(x:\varOmega \rightarrow E\) is a measurable function. Moreover, a normal integrand \(f(\omega ,x)\) is measurable in \(\omega \) for fixed x and lower semi-continuous in x for fixed \(\omega \); cf.  [3, Proposition 14.28]. We obtain the following functional version of Theorem 5.1, where two normal integrands \(f:\varOmega \times E\rightarrow {\mathbb {R}}\) and \(g:\varOmega \times E\rightarrow {\mathbb {R}}\) are identified if their epigraphical mappings coincide a.s.

Corollary 5.1

Let \(u:L^0(E)\rightarrow L^0\) be stable and sequentially lower semi-continuous and let \(f:\varOmega \times E\rightarrow {\mathbb {R}}\) be a normal integrand. Then, there exist a stable and sequentially lower semi-continuous function \(u_f:L^0(E)\rightarrow L^0\) and a normal integrand \(f_u:\varOmega \times E\rightarrow {\mathbb {R}}\) such that \(u_{f_u}=u\) and \(f_{u_f}=f\).

Proof

Due to normality, \(u_f:L^0(E)\rightarrow L^0\) given by \(x\mapsto (\omega \mapsto f(\omega ,x(\omega )))\) is well defined. Direct inspection shows that \(u_f\) is stable and sequentially lower semi-continuous. Conversely, put \(X:=\{(x,r)\in L^0(E\times {\mathbb {R}}):u(x)\le r\}\). By assumption, X is a stable and sequentially closed subset of \(L^0(E\times {\mathbb {R}})\). By Theorem 5.1, there exists an Effros measurable and closed-valued map \(S_X:\varOmega \rightrightarrows E\times {\mathbb {R}}\) corresponding to X. Thus, \(f_u:\varOmega \times E\rightarrow {\mathbb {R}}\) defined by \(f_u(\omega ,x):=\inf S_X(\omega )_x\) a.s. is a normal integrand, where \(S_X(\omega )_x\) denotes the x-section of \(S_X(\omega )\). It follows from the reciprocality relations in Theorem 5.1 that \(u_{f_u}=u\) and \(f_{u_f}=f\). \(\square \)

We compare the assumptions which underlie conditional analysis and random set theory. Conditional analysis is applicable under the following two purely measure-theoretic hypotheses:

  • A probability measure \({\mathbb {P}}\) on \((\varOmega ,{\mathcal {F}})\) needs to be fixed a priori in order to identifyFootnote 6 sets, functions, relations, etc. almost surely. By the almost sure identification, we work with equivalence classes of functions and sets, and thus basically change to a pointfree perspective.

  • One consequently works in the context of conditional sets  [2]. In particular, all involved sets must satisfy stability under countable concatenations; cf., Definition 2.2.

Conditional analysis does not rely on the following topological assumptions which are prevalent in random set theory:

  • standard Borel space,Footnote 7 measure completeness, closed-valued mappings and Polish spaces.

The connections provided by Theorem 5.1 and Corollary 5.1 suggest that a stochastic control problem can equally be formalized in the language of conditional set theory. In  [6, 25,26,27,28], for example, some form of integrability is always assumed, which leads to further technicalities in the proofs; see also  [29, 30] and the references therein for basic studies on the relations between (conditional) expectations and integrands. The main results in Sect. 2 are established for general utilities which are not necessarily of expected-utility form, and no integrability assumptions are required.

Remark 5.1

Let E be a separable Banach space, and let \(L^p(E)\) be the Bochner space of all p-integrable functions \(x:\varOmega \rightarrow E\) for \(p\in [1,\infty ]\). For a set-valued mapping \(S:\varOmega \rightrightarrows E\), denote by \(X_S^p:=X_S\cap L^p(E)\) the set of p-integrable selections of S. Let \(X\subset L^p(E)\) be norm-closed. The result of Kabanov and Safarian  [22, Theorem 5.4.3] states that \(X=X_S^p\) for an Effros measurable closed-valued mapping \(S:\varOmega \rightrightarrows E\) if and only if X is finitely decomposable and \(L^p\)-closed; see also  [23, Theorem 2.1.6]. In  [24, Theorem 2.3], Molchanov and Lépinette replace \(L^p\)-closedness by closedness with respect to convergence in probability.

6 Conclusions

Conditional analysis allows for a treatment of stochastic control problems which offers a viable alternative to classical measurable selection arguments. In particular, as controls and states are modeled in general conditional metric spaces, the mathematical restrictions of random set techniques, such as finite dimension, separability and completeness, can be relaxed. The existence result Theorem 2.1 covers a wide variety of situations, such as wealth-dependent utility maximization under risk constraints, or utility maximization where the number of traded assets depends on past decisions. We applied the novel notion of conditional compactness  [2], which works in finite- and infinite-dimensional settings thanks to a conditional version of the Heine–Borel theorem  [2, Theorem 4.6]. Conditional compactness extends the notion of compact-valued and Effros measurable mappings; see  [32]. The control sets may live in any conditional metric space. This covers many examples which are out of reach of the existing technology, for example conditional \(L^p\)-spaces on general probability spaces, \(L^0({\mathbb {R}})^n\) with a conditional dimension, and \(L^0(X)\), where X is a non-separable metric space. Another example is given by conditional weak topologies, which are not treated in this article and for which conditional analysis offers extensive tools as well. We plan to explore this direction in future work.