1 Introduction

Initiated by Hobson [19], the theory of model-independent pricing has received substantial attention from the mathematical finance community; we refer to the survey [20]. Starting with [5, 17], the Skorokhod embedding approach has been complemented through optimal transport techniques. In particular, first versions of a robust superreplication theorem have been established: in discrete time, we mention [1] and the important contribution of Bouchard and Nutz [6]; for related work in a quasi-sure framework in continuous time, we refer to the work of Neufeld and Nutz [27] and Possamaï et al. [30]. Our results are more closely related to the continuous-time superreplication theorem of Dolinsky and Soner [14], which we recall here: given a centered probability measure \(\mu\) on ℝ, they study the primal maximization problem

$$ P:= \sup\{ \mathbb{E}_{\mathbb{P}}[G(S)]\}, $$

where \(S\) denotes the canonical process on \(C[0,1]\), the supremum is taken over all martingale measures ℙ on \(C[0,1]\) with \(S_{1}(\mathbb{P})=\mu\), and \(G\) denotes a functional on the path space satisfying appropriate continuity assumptions. The main result of [14] is a superreplication theorem that applies to this setup: they show that for each \(p>P\), there exist a hedging strategy \(H\) and a “European payoff function” \(\psi\) with \(\int\psi\,\mathrm{d} \mu=0\) such that

$$ p + (H\cdot S)_{1} + \psi(S_{1}) \geq G(S). $$

This is in principle quite satisfying; however, a drawback is that the option \(G\) needs to satisfy rather strong continuity assumptions, which in particular excludes all exotic option payoffs involving volatility. Given the practical importance of volatility derivatives, it is desirable to give a version of the Dolinsky–Soner theorem that applies also to this case. More recently, Dolinsky and Soner [15] have extended the original results of [14] to include càdlàg price processes, multiple maturities and price processes in higher dimensions; Hou and Obłój [24] have also recently extended these results to incorporate investor beliefs via a “prediction set” of possible outcomes.

Subsequently, we establish a superreplication theorem that applies to payoffs \(G\) which are invariant under time-changes in an appropriate sense. In contrast to the result of [14], this excludes the case of continuously monitored Asian options, but covers other practically relevant derivatives such as options on volatility or realized variance, lookback options and discretely monitored Asian options. Notably, it constitutes a general duality result that complements the rich literature on the connection between model-independent finance and the Skorokhod embedding problem. In a series of impressive achievements, Brown, Cox, Davis, Hobson, Klimmek, Neuberger, Obłój, Pedersen, Raval, Rogers, Wang, and others [31, 19, 7, 23, 8, 12, 10, 9, 11, 22, 21] were able to determine the values of related primal and dual problems for a number of exotic derivatives and sets of market data, proving that they are equal. Here we establish the duality relation for generic derivatives, in particular recovering duality for the specific cases mentioned above.

To achieve this, we apply a pathwise approach to model-independent finance which was introduced by Vovk [35, 36, 37]. In particular, we rely on Vovk’s pathwise Dambis–Dubins–Schwarz theorem, which we combine with the duality theory for the Skorokhod embedding problem recently developed in [4].

After the completion of this work, we learned that Guo et al. [18] derived a duality result similar in spirit to Proposition 4.3 and Theorem 4.6. Their approach relies on different methods, and includes an interesting application to the optimal Skorokhod embedding problem.

Organization of the paper

In Sect. 1.1, we outline our main results. In Sect. 2, Vovk’s approach to mathematical finance is introduced and preliminary results are given. Section 3 is devoted to the statement and proof of our main result in its simplest form—a superreplication theorem for time-invariant payoffs for one period. In Sect. 4, we present an extension to finitely many marginals with “zero up to full information”; in particular, we then obtain our most general superreplication result, Theorem 4.6.

1.1 Formulation of the superreplication theorem

The purpose of this section is to illustrate heuristically the scope of the superreplication theorem presented below in Sects. 3 and 4.

For \(n\in\mathbb{N}\), let \(C[0,n]\) be the space of continuous functions \(\omega\colon[0,n] \to\mathbb{R}\) with \(\omega(0)=0\). The aim is to consider financial options \(G\colon C[0,n]\to\mathbb{R}\) of the form

$$ G(\omega)=\gamma\big( \operatorname{\mathfrak{t}}(\omega)|_{[0, \langle\omega\rangle_{n}] }, \langle\omega\rangle_{1}, \ldots, \langle\omega\rangle_{n}\big), $$
(1.1)

where \(\langle\omega\rangle_{\cdot}\) stands for the quadratic variation process of the path \(\omega\) and \(\operatorname{\mathfrak{t}}(\omega)\) stands for a version of the path \(\omega\) which is rescaled in time so that for each \(t\), its quadratic variation up to time \(t\) equals precisely \(t\). Intuitively, this means that \(\gamma\) sees only the path \(\omega|_{[0,n]}\) but not its time-parametrization. Let \(S\) be the canonical process on \(C[0,n]\). Under appropriate regularity conditions on \(\gamma\) (cf. Theorems 3.1 and 4.6 below), we obtain the following robust superhedging result:

Theorem

(See Theorems 3.1 and 4.6 for the precise statements)

Suppose that \(n\in\mathbb{N}\), \(I\subseteq\{1,\ldots, n\}\), \(n\in I\) and that \(\mu_{i}\) is a centered probability measure on ℝ for each \(i\in I\). Setting

$$\begin{aligned} P_{n}:=\sup\left\{ \mathbb{E}_{\mathbb{P}}[G]\,:\, \textstyle\begin{array}{l} \mathbb{P}\textit{ is a martingale measure on } C[0,n], \\ S_{0}=0, S_{i}\sim\mu_{i} \textit{ for all } i\in I \end{array}\displaystyle \right\} \end{aligned}$$

and

$$\begin{aligned} D_{n}:=\inf\left\{ a\,:\, \textstyle\begin{array}{l} \textit{ there exist } H \textit{ and } (\psi_{j})_{j\in I}\textit{ such that } \int\psi_{j} \,\mathrm{d}\mu_{j}=0, \\ a +\sum_{j\in I} \psi_{j}(S_{j}) + (H\cdot S)_{n} \geq G((S_{t})_{t \leq n}) \end{array}\displaystyle \right\} , \end{aligned}$$

one has \(P_{n}=D_{n}\). Here \((H\cdot S)_{n}\) denotes the “pathwise stochastic integral” of \(H\) with respect to \(S\).

Of course, the present statement of our superreplication result is imprecise in that neither the pathwise stochastic integral appearing in the formulation of \(D_{n}\), nor the pathwise quadratic variation in the definition of \(G\), nor the operator \(\operatorname{\mathfrak{t}}\) are properly introduced. We address this in the following sections.

Example 1.1

Many financial derivatives such as options on volatility or realized variance, lookback options and discretely monitored Asian options are covered by the above superreplication theorem. Mathematically speaking, examples of derivatives in the time-invariant form (1.1) include the following:

  • \(G_{1}(\omega) := F_{1}(\omega(1), \dots, \omega(n), \langle\omega\rangle_{1}, \ldots, \langle\omega\rangle_{n})\)

  • \(G_{2}(\omega) := F_{2}(\max_{t \in[0,n]} \omega(t))\)

  • \(G_{3}(\omega) := F_{3}(\int_{0}^{n} \varphi(\omega(s), \langle\omega\rangle_{s}) \,\mathrm{d}\langle\omega\rangle_{s})\)

  • \(G_{4}(\omega) := F_{4}(G_{1}(\omega), G_{2}(\omega), G _{3}(\omega))\)

for some \(m_{i}\in\mathbb{N}\) and functions \(F_{i}\colon\mathbb{R}^{m_{i}}\to\mathbb{R}\), \(i=1,\dots,4\), satisfying suitable regularity and growth conditions.

An example that is not covered by our results is the continuously monitored Asian option \(G_{5}(\omega) := F(\int_{0}^{n} \omega(s) \, \mathrm{d}s)\) for a function \(F\colon\mathbb{R}\to\mathbb{R}\). However, in the case of Asian options, we can discretize time and consider the discretely monitored Asian option \(G_{6}(\omega) := F( \sum_{k=0}^{n-1} \omega(k))\) instead.
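To make the time-invariant form (1.1) concrete, the following small Python sketch evaluates discretized versions of \(G_{1}\), \(G_{2}\) and \(G_{3}\) on a sampled path together with a proxy for its quadratic variation. It is purely illustrative; the particular choices of \(F_{1}\), \(F_{2}\), \(F_{3}\), the function names and the discretization are ours and not part of the formal setup.

```python
import numpy as np

def payoffs(path, qv, n, strike=1.0):
    """Evaluate discretized versions of the payoffs G_1, G_2, G_3 of Example 1.1.

    path : samples of omega on a uniform grid of [0, n], with path[0] = 0
    qv   : samples of an approximation of <omega> on the same grid
    """
    grid = np.linspace(0.0, n, len(path))
    idx = [np.searchsorted(grid, k) for k in range(1, n + 1)]   # grid indices of t = 1, ..., n

    # G_1: a function of omega(1), ..., omega(n) and <omega>_1, ..., <omega>_n,
    #      here a call on the terminal value plus the realized variance
    g1 = max(path[idx[-1]] - strike, 0.0) + qv[idx[-1]]
    # G_2: a lookback-type payoff of the running maximum
    g2 = np.max(path)
    # G_3: an integral against d<omega>, approximated by a Riemann-Stieltjes sum
    g3 = np.sum(path[:-1] ** 2 * np.diff(qv))
    return g1, g2, g3

# usage on a simulated Brownian path, with <omega>_t replaced by t
rng = np.random.default_rng(0)
n, steps = 2, 4000
t = np.linspace(0.0, n, steps + 1)
w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(n / steps), steps))])
print(payoffs(w, t, n))
```

Note that the three payoffs depend on the sampled path only through its values and its quadratic variation, which is exactly the invariance exploited below.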

Remark 1.2

Guo, Tan and Touzi recently derived a similar result to the above superreplication theorem; cf. [18, Theorem 3.1]. Both duality results provide comparable primal and dual problems and cover a similar class of financial derivatives.

However, the main difference is that our dual problem \(D_{n}\) is formulated in terms of pathwise superhedging (i.e., the superhedge has to hold for every path), while the superhedging in the dual problem of [18] is “only” required to hold quasi-surely under all martingale measures. One advantage of the stronger requirement of pathwise superhedging is that it allows us to formulate the dual problem independently of the measures considered in the primal problem. On the other hand, the stochastic integrals that can be constructed using the additional martingale measure structure allow Guo, Tan and Touzi to consider superhedging along a fixed (non-simple) strategy, while here we have to consider sequences of simple strategies; see the precise formulation in Theorem 3.1 or Theorem 4.6 below.

2 Superhedging and outer measure

In recent years, Vovk [35, 36, 37], see also [33], developed a new model-free approach to mathematical finance based on hedging. Without presuming any probabilistic structure, Vovk considers the space of real-valued continuous functions as possible price paths and introduces an outer measure on this space, which is based on a minimal superhedging price.

Vovk defines his outer measure on all continuous paths, and then shows that “typical price paths” admit a quadratic variation. To simplify many of the statements and proofs below, we restrict ourselves from the beginning to paths admitting quadratic variation. We discuss in Remark 2.8 below why this is no problem.

To be precise, define for a continuous path \(\omega\colon\mathbb{R} _{+} \to\mathbb{R}\) and \(n\in\mathbb{N}\) the stopping times

$$\begin{aligned} \sigma_{0}^{n}:=0 \quad\mbox{and}\quad\sigma_{k}^{n}:= \inf\{t \geq\sigma_{k-1}^{n}\,:\,\omega(t) \in2^{-n}\mathbb{Z}\mbox{ and } \omega(t)\neq\omega(\sigma_{k-1}^{n})\} \end{aligned}$$

for \(k\in\mathbb{N}\). For \(n\in\mathbb{N}\), the discrete quadratic variation of \(\omega\) is given by

$$ V^{n}_{t}(\omega) :=\sum_{k=0}^{\infty}\big(\omega({\sigma^{n}_{k+1} \wedge t})-\omega({\sigma_{k}^{n}\wedge t})\big)^{2},\quad t\in \mathbb{R}_{+}. $$
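For a discretely sampled path, the crossing times \(\sigma^{n}_{k}\) and the sums \(V^{n}_{t}\) can be computed directly from the definition. The following Python sketch is only an illustration under our own discretization (in particular, the crossing detection is approximate); none of the names below appear in the paper.

```python
import numpy as np

def lebesgue_partition(path, n):
    """Indices approximating the stopping times sigma^n_k: the successive sample
    times at which the path has moved to a new point of the grid 2^{-n} Z."""
    grid = 2.0 ** (-n)
    idx, last = [0], path[0]
    for j in range(1, len(path)):
        if abs(path[j] - last) >= grid:
            last += grid * np.sign(path[j] - last)   # snap to the grid level just reached
            idx.append(j)
    return np.array(idx)

def discrete_qv(path, n):
    """Discrete quadratic variation V^n at the end of the sample, i.e. the sum of
    squared increments of the path along the partition (sigma^n_k)."""
    increments = np.diff(path[lebesgue_partition(path, n)])
    return np.sum(increments ** 2)

# usage: for a simulated Brownian path on [0, 1], V^n approaches 1 as n grows
rng = np.random.default_rng(1)
steps = 100_000
w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / steps), steps))])
for n in (2, 4, 6):
    print(n, discrete_qv(w, n))
```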

We write \(\varOmega^{\mathrm{qv}}\) for the space of continuous functions \(\omega\colon\mathbb{R}_{+} \to\mathbb{R}\) with \(\omega(0)=0\) such that

  • \(V^{n}(\omega)\) converges locally uniformly in time to a continuous limit \(\langle\omega\rangle\) which has the same intervals of constancy as \(\omega\), and

  • either \(\lim_{t\rightarrow\infty} \omega(t)\) exists or \(\langle \omega\rangle\) is unbounded on \(\mathbb{R}_{+}\).

The coordinate process on \(\varOmega^{\mathrm{qv}}\) is denoted by \(B_{t}(\omega):=\omega(t)\), and we introduce the natural filtration \((\mathcal{F}_{t}^{\mathrm{qv}})_{t\geq0} := (\sigma(B_{s}: s \le t))_{t\geq0}\) and set \(\mathcal{F}^{\mathrm{qv}}:= \bigvee_{t \geq0} \mathcal{F}^{\mathrm{qv}}_{t}\). Stopping times \(\tau\) and the associated \(\sigma\)-algebras \(\mathcal{F}^{\mathrm{qv}}_{\tau}\) are defined as usual. Occasionally, we also write \(\langle B \rangle( \omega) = \langle\omega\rangle\).

A process \(H \colon\varOmega^{\mathrm{qv}} \times\mathbb{R}_{+} \rightarrow \mathbb{R}\) is called a simple strategy if it is of the form

$$ H_{t}(\omega) = \sum_{n= 0}^{\infty} F_{n}(\omega) \mathbf{1} _{( \tau_{n}(\omega),\tau_{n+1}(\omega)]}(t), \quad(\omega,t)\in \varOmega^{\mathrm{qv}}\times\mathbb{R}_{+}, $$

where \(0 = \tau_{0}(\omega) < \tau_{1}(\omega) < \cdots\) are stopping times such that for every \(\omega\in\varOmega^{\mathrm{qv}}\), one has \(\lim_{n\to\infty} \tau_{n}(\omega) = \infty\), and \(F_{n}\colon \varOmega^{\mathrm{qv}} \rightarrow\mathbb{R}\) are \(\mathcal{F}^{ \mathrm{qv}}_{\tau_{n}}\)-measurable bounded functions for \(n\in \mathbb{N}\). For such a simple strategy \(H\), the corresponding capital process

$$ (H \cdot B)_{t}(\omega) = \sum_{n=0}^{\infty}F_{n}(\omega) \big(B _{\tau_{n+1}(\omega) \wedge t}(\omega) - B_{\tau_{n}(\omega) \wedge t}(\omega)\big) $$

is well defined for every \(\omega\in\varOmega^{\mathrm{qv}}\) and every \(t \in\mathbb{R}_{+}\). A simple strategy \(H\) is called \(\lambda\)-admissible for \(\lambda> 0\) if \((H\cdot B)_{t}( \omega) \ge- \lambda\) for all \(t \in\mathbb{R}_{+}\) and all \(\omega\in\varOmega^{\mathrm{qv}}\). We write \(\mathcal{H}_{\lambda}\) for the set of \(\lambda\)-admissible simple strategies.
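The capital process of a simple strategy is just a finite sum of increments and can be written down directly. The Python sketch below is an illustration with hypothetical inputs (a strategy is encoded by the indices of its rebalancing times and the positions held in between); it is not taken from the paper.

```python
import numpy as np

def capital_process(path, stop_idx, positions):
    """Running capital (H . B)_t of a simple strategy that holds positions[k]
    units of the asset on the sampled interval (stop_idx[k], stop_idx[k+1]]."""
    path = np.asarray(path, dtype=float)
    t = np.arange(len(path))
    gains = np.zeros_like(path)
    for k, f in enumerate(positions):
        a, b = stop_idx[k], stop_idx[k + 1]
        # contribution F_k * (B_{tau_{k+1} ^ t} - B_{tau_k ^ t}); it vanishes for t <= tau_k
        gains += f * (path[np.minimum(t, b)] - path[np.minimum(t, a)])
    return gains

# usage: hold one unit on (0, 500], then minus one half unit on (500, 1000]
rng = np.random.default_rng(2)
w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, 0.01, 1000))])
g = capital_process(w, stop_idx=[0, 500, 1000], positions=[1.0, -0.5])
print(g[-1], bool(g.min() >= -1.0))   # terminal gains and a check of 1-admissibility along this path
```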

To recall Vovk’s outer measure as introduced in [36], let us define the set of processes

$$\begin{aligned} \mathcal{V}_{\lambda}:= \bigg\{ h:= (H^{k})_{k\in\mathbb{N}}\,:\, H ^{k} \in\mathcal{H}_{\lambda_{k}}, \lambda_{k} > 0, \sum_{k=0}^{ \infty}\lambda_{k} = \lambda\bigg\} \end{aligned}$$

for an initial capital \(\lambda\in(0, \infty)\). Note that for every \(h = (H^{k})_{k\in\mathbb{N}} \in\mathcal{V}_{\lambda}\), all \(\omega\in\varOmega^{\mathrm{qv}}\) and all \(t \in\mathbb{R}_{+}\), the corresponding capital process

$$\begin{aligned} (h\cdot B)_{t}(\omega) := \sum_{k = 0}^{\infty} (H^{k}\cdot B)_{t}( \omega) = \sum_{k = 0}^{\infty} \big(\lambda_{k} + (H^{k}\cdot B)_{t}( \omega)\big) - \lambda \end{aligned}$$

is well defined and takes values in \([-\lambda, \infty]\). Then Vovk’s outer measure on \(\varOmega^{\mathrm{qv}}\) is given by

$$\begin{aligned} \overline{Q}[A] := \inf\Big\{ \lambda> 0\,:\, \exists\, h \in \mathcal{V}_{\lambda}\mbox{ with } \lambda+ \liminf_{t \to\infty}(h \cdot B)_{t}(\omega) \ge\mathbf{1} _{A}(\omega), \forall\omega \in\varOmega^{\mathrm{qv}}\Big\} \end{aligned}$$

for \(A \subseteq\varOmega^{\mathrm{qv}}\).

A slight modification of the outer measure \(\overline{Q}\), which seems closer in spirit to the classical definition of superhedging prices in semimartingale models, was introduced in [29, 28]. In the classical setting, one works with general admissible strategies, and the Itô integral of a general strategy is obtained as a limit of integrals of simple strategies. In that sense, the next definition seems more analogous to the classical one.

Definition 2.1

The outer measure \(\overline{P}\) of \(A \subseteq \varOmega^{\mathrm{qv}}\) is defined as the minimal superhedging price of \(\mathbf{1} _{A}\), that is,

$$\begin{aligned} \overline{P}[A]:= \inf\left\{ \lambda> 0 \,:\, \textstyle\begin{array}{l} \exists\, (H^{n}) \subseteq\mathcal{H}_{\lambda}\mbox{ such that } \forall\omega\in\varOmega^{\mathrm{qv}}, \\ \liminf_{t\rightarrow\infty} \liminf_{n\rightarrow\infty}(\lambda + (H^{n}\cdot B)_{t} (\omega)) \ge\mathbf{1} _{A}(\omega) \end{array}\displaystyle \right\} . \end{aligned}$$

A set \(A \subseteq\varOmega^{\mathrm{qv}}\) is said to be a nullset if it has \(\overline{P}\)-outer measure zero. A property \(( \mathcal{A})\) holds for typical price paths if the set \(A\) where \((\mathcal{A})\) is violated is a nullset.

Of course, for both definitions of outer measures, it would be convenient to just minimize over simple strategies rather than over the limit (inferior) along sequences of simple strategies. However, this would destroy the very much appreciated countable subadditivity of both outer measures.

Remark 2.2

It is conjectured that the outer measure \(\overline{P}\) coincides with \(\overline{Q}\). However, up to now it is only known that \(\overline{P}[A] \leq\overline{Q}[A]\) for a general set \(A \subseteq \varOmega^{\mathrm{qv}}\) (see [29, Sect. 2.4]) and that they coincide for time-superinvariant sets; see Definition 2.5 and Theorem 2.6 below. Therefore, the outer measures \(\overline{P}\) and \(\overline{Q}\) are basically the same in the present paper since we focus on time-invariant financial derivatives.

Perhaps the most interesting feature of \(\overline{P}\) is that it comes with the following arbitrage interpretation for nullsets.

Lemma 2.3

([29, Lemma 2.4])

A set \(A \subseteq\varOmega^{\mathrm{qv}}\) is a nullset if and only if there exists a sequence of 1-admissible simple strategies \((H^{n})_{{n\in\mathbb{N}}} \subseteq\mathcal{H}_{1}\) such that

$$\begin{aligned} 1 +\liminf_{t \rightarrow\infty}\liminf_{n \rightarrow\infty} (H ^{n} \cdot B)_{t} (\omega) \ge\infty\cdot\mathbf{1} _{A}(\omega), \end{aligned}$$

where we use the convention \(\infty\cdot0 := 0\) and \(\infty\cdot1 := \infty\).

A nullset is essentially a model-free arbitrage opportunity of the first kind. Recall that \(B\) satisfies (NA1) (no arbitrage opportunities of the first kind) under a probability measure ℙ on \((\varOmega^{\mathrm{qv}}, \mathcal{F}^{\mathrm{qv}})\) if the set \(\mathcal{W}^{\infty}_{1} := \{ 1 + \int_{0}^{\infty}H_{s} \, \mathrm{d}B_{s}\,:\, H \in\mathcal{H}_{1} \}\) is bounded in probability, that is, if \(\lim_{n \to\infty} \sup_{X\in\mathcal{W}^{\infty}_{1}} \mathbb{P}[ X \geq n]=0\). The notion (NA1) has gained a lot of interest in recent years since it is the minimal condition which has to be satisfied by any reasonable asset price model; see for example [3, 26, 32, 25].
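For instance (a standard verification which we record for orientation, not a statement from [29]), every probability measure ℙ under which \(B\) is a local martingale satisfies (NA1): for \(H\in\mathcal{H}_{1}\), the process \(1+(H\cdot B)\) is a nonnegative ℙ-local martingale and hence a supermartingale, so it converges ℙ-a.s., and Fatou's lemma combined with Markov's inequality yields

$$ \sup_{X \in\mathcal{W}^{\infty}_{1}} \mathbb{P}[X \geq n] \leq\frac{1}{n} \quad\mbox{for all } n\in\mathbb{N}, $$

which tends to zero, so that \(\mathcal{W}^{\infty}_{1}\) is bounded in probability.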

The next proposition collects further properties of \(\overline{P}\).

Proposition 2.4

([28, Proposition 3.3])

  1. \(\overline{P}\) is an outer measure with \(\overline{P}[\varOmega^{\mathrm{qv}}]=1\), i.e., \(\overline{P}\) is nondecreasing, countably subadditive, and \(\overline{P}[\emptyset] = 0\).

  2. Let ℙ be a probability measure on \((\varOmega^{\mathrm{qv}}, \mathcal{F}^{\mathrm{qv}})\) such that the coordinate process \(B\) is a ℙ-local martingale, and let \(A \in\mathcal{F}^{\mathrm{qv}}\). Then \(\mathbb{P}[A] \le\overline{P}[A]\).

  3. Let \(A \in\mathcal{F}^{\mathrm{qv}}\) be a nullset, and let ℙ be a probability measure on \((\varOmega^{\mathrm{qv}}, \mathcal{F}^{\mathrm{qv}})\) such that the coordinate process \(B\) satisfies (NA1) under ℙ. Then \(\mathbb{P}[A] = 0\).

In particular, the last statement is of interest in robust mathematical finance because it says that every property which is satisfied by typical price paths holds quasi-surely for all probability measures fulfilling (NA1).

An essential ingredient to obtain our superreplication theorem for time-invariant derivatives is a very remarkable pathwise Dambis–Dubins–Schwarz theorem as presented in [36]. In order to give its precise statement here, we recall the definition of time-superinvariant sets; cf. [36, Sect. 3].

Definition 2.5

A continuous nondecreasing function \(f \colon\mathbb{R}_{+} \to \mathbb{R}_{+}\) with \(f(0)=0\) is said to be a time-change. The set of all time-changes is denoted by \(\mathcal{G}_{0}\), and the group of all time-changes that are strictly increasing and unbounded by \(\mathcal{G}\). Given \(f\in\mathcal{G}_{0}\), we define \(T_{f}(\omega ):=\omega\circ f\). A subset \(A\subseteq\varOmega^{\mathrm{qv}}\) is called time-superinvariant if for all \(f\in\mathcal{G}_{0}\), it holds that

$$ T_{f}^{-1}(A)\subseteq A. $$
(2.1)

A subset \(A\subseteq\varOmega^{\mathrm{qv}}\) is called time-invariant if (2.1) holds true for all \(f\in \mathcal{G}\).

For an intuitive explanation of time-superinvariance, we refer to [36, Remark 3.3]. We write \({\mathbb{W}}\) for the Wiener measure on \((\varOmega^{\mathrm{qv}}, \mathcal{F}^{\mathrm{qv}})\) and recall Vovk’s pathwise Dambis–Dubins–Schwarz theorem.

Theorem 2.6

([36, Theorem 3.1])

Each time-superinvariant set \(A\subseteq\varOmega^{\mathrm{qv}}\) satisfies \(\overline{P}[A]=\overline{Q}[A]={\mathbb{W}}[A]\).

Proof

For \(A\subseteq\varOmega^{\mathrm{qv}}\), Proposition 2.4 and Remark 2.2 imply \({\mathbb{W}}[A]\leq\overline{P}[A] \leq\overline{Q}[A]\). If \(A\) is additionally time-superinvariant, [36, Theorem 3.1] says \(\overline{Q}[A]={\mathbb{W}}[A]\), which immediately gives the desired result. □

Let us now introduce the normalizing time transformation operator \(\operatorname{\mathfrak{t}}\) in the sense of [36]. We follow [36] in defining the family of stopping times

$$ \tau_{t}(\omega):=\inf\left\{ s\geq0: \langle\omega\rangle_{s} > t\right\} $$
(2.2)

for \(t\in\mathbb{R}_{+}\) and \(\tau_{\infty}:= \sup_{n} \tau_{n}\). The normalizing time transformation \(\operatorname{\mathfrak{t}} \colon\varOmega^{\mathrm{qv}} \to\varOmega^{\mathrm{qv}}\) is given by

$$ \operatorname{\mathfrak{t}}(\omega)_{t}:= \omega(\tau_{t}), \quad t \in\mathbb{R}_{+}, $$

where we set \(\omega(\infty) := \lim_{t\to\infty} \omega(t)\) for all \(\omega\in\varOmega^{\mathrm{qv}}\) with \(\sup_{t\ge0} \langle \omega\rangle_{t} < \infty\). Note that \(\operatorname{\mathfrak{t}}( \omega)_{\cdot}\) stays constant from time \(\langle\omega \rangle_{\infty}\) on (which is, of course, only relevant if that time is finite). Below we also use \(\operatorname{\mathfrak{t}}\colon C _{\mathrm{qv}}[0,n]\to\varOmega^{\mathrm{qv}}\) which is defined analogously and where \(C_{\mathrm{qv}}[0,n]\) denotes the space of paths that are obtained by restricting functions in \(\varOmega^{\mathrm{qv}}\) to \([0,n]\). On the product space \(\varOmega^{\mathrm{qv}} \times\mathbb{R} _{+}\), we further introduce

$$ \bar{\operatorname{\mathfrak{t}}}(\omega, t):= \big( \operatorname{\mathfrak{t}}(\omega), \langle\omega\rangle_{t} \big). $$
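On a discretely sampled path, the normalizing time transformation can be approximated directly from its definition: compute a proxy of \(\langle\omega\rangle\), invert it via (2.2), and read off the path at the inverse times. The Python sketch below is our own illustration (discretization and names are hypothetical); it also shows on a simple deterministic time-change that \(\operatorname{\mathfrak{t}}\) undoes the reparametrization.

```python
import numpy as np

def normalizing_time_transform(path, qv, new_grid):
    """Approximate t(omega) at the times in new_grid, using
    t(omega)_t = omega(tau_t) with tau_t = inf{s : <omega>_s > t}.

    path, qv : samples of omega and of (a proxy of) its quadratic variation on a common grid
    """
    path, qv = np.asarray(path), np.asarray(qv)
    out = np.empty(len(new_grid))
    for i, t in enumerate(new_grid):
        j = np.searchsorted(qv, t, side="right")    # first sample index with qv > t, i.e. tau_t
        out[i] = path[min(j, len(path) - 1)]        # omega(infinity) convention if qv never exceeds t
    return out

# usage: for omega(s) = w(s/2) one has <omega>_s = s/2, and t(omega) recovers w
rng = np.random.default_rng(3)
steps = 20_000
w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / steps), steps))])
s = np.linspace(0.0, 2.0, 2 * steps + 1)
slow = w[np.round(s / 2 * steps).astype(int)]       # the time-changed path omega = w o f, f(s) = s/2
qv_slow = s / 2                                     # proxy for <omega>_s (idealizing <w>_t = t)
t_slow = normalizing_time_transform(slow, qv_slow, np.linspace(0.0, 1.0, steps + 1))
print(np.max(np.abs(t_slow - w)))                   # close to zero up to discretization error
```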

We are now ready to state the main result of [36].

Theorem 2.7

([36, Theorem 6.4])

For any nonnegative Borel-measurable function \(F\colon\varOmega^{\mathrm{qv}}\to\mathbb{R}\), one has

$$ \overline{E}[(F\circ\operatorname{\mathfrak{t}}) \mathbf{1} _{\{ \langle B \rangle_{\infty}=\infty\}}]=\int_{\varOmega^{\mathrm{qv}}} F \,\mathrm{d}{\mathbb{W}}, $$

where \(\overline{E}\) denotes the obvious extension of \(\overline{P}\) from sets to nonnegative functions and \(\langle B \rangle_{\infty}:= \sup_{t\ge0} \langle B \rangle_{t}\).

Remark 2.8

It might seem like a strong restriction that we only deal with paths in \(\varOmega^{\mathrm{qv}}\) rather than considering all continuous functions. However, Vovk’s result holds on all of \(C(\mathbb{R}_{+})\), the continuous paths on \(\mathbb{R}_{+}\) started in 0, and is only slightly more complicated to state in that case. In particular, Vovk shows that \(C(\mathbb{R}_{+}) \setminus\varOmega^{\mathrm{qv}}\) is atypical in the sense that for every \(\varepsilon>0\), there exists a sequence of \(\varepsilon\)-admissible simple strategies \((H^{n})\) on \(C( \mathbb{R}_{+})\) (which are defined in the same way as above, replacing every occurrence of \(\varOmega^{\mathrm{qv}}\) by \(C(\mathbb{R}_{+})\)) such that for every \(\omega\in C(\mathbb{R}_{+}) \setminus \varOmega^{\mathrm{qv}}\), we have \(\liminf_{t\rightarrow\infty} \liminf_{n \to\infty} (H^{n} \cdot B)_{t} (\omega) = \infty\). In particular, all our results continue to hold on \(C(\mathbb{R}_{+})\) because on the set \(C(\mathbb{R}_{+}) \setminus\varOmega^{\mathrm{qv}}\), we can superhedge any functional starting from an arbitrarily small \(\varepsilon>0\). To simplify the presentation, we restricted our attention to \(\varOmega^{\mathrm{qv}}\) from the beginning.

Remark 2.9

Vovk defines the normalizing time transformation slightly differently, replacing \(\tau_{t}(\omega)\) by \(\inf\{ s \ge0: \langle\omega \rangle_{s} \ge t\}\), so considering the hitting time of \([t, \infty )\) rather than \((t,\infty)\). This corresponds to taking the càglàd version \((\tau_{t-})_{t \ge0}\) of the càdlàg process \((\tau_{t})_{t \ge0}\). But since on \(\varOmega^{\mathrm{qv}}\), the paths \(\omega\) and \(\langle\omega\rangle\) have the same intervals of constancy, we get \(\omega(\tau_{t-}) = \omega(\tau_{t})\) for all \(\omega\in\varOmega^{\mathrm{qv}}\), and by Remark 2.8 more generally for all typical price paths in \(C(\mathbb{R}_{+})\).

3 Duality for one period

Here we are interested in a one-period duality result for derivatives \(G\) on \(C_{\mathrm{qv}}[0,1]\) of the form \(\omega\mapsto G(\omega, \langle\omega\rangle_{1})\) which are invariant under suitable time-changes of \(\omega\). Typical examples for such derivatives are the running maximum up to time 1 or functions of the quadratic variation. Formally, this amounts to

$$ G =\tilde{G}\circ\bar{\operatorname{\mathfrak{t}}}(\cdot,1) $$

for some optional process \((\tilde{G}_{t})_{t\geq0}\) on \(( \varOmega^{\mathrm{qv}},(\mathcal{F}^{\mathrm{qv}}_{t})_{t\ge0})\), and more specifically we focus on processes \(\tilde{G}\) which are of the form \(\tilde{G}_{t}(\omega) = \gamma(\omega|_{[0,t]},t)\), where \(\omega|_{[0,t]}\) denotes the restriction of \(\omega\) to the interval \([0,t]\) and \(\gamma\colon\varUpsilon\to\mathbb{R}\) is an upper semi-continuous functional which is bounded from above. Here we wrote \(\varUpsilon\) for the space of stopped paths

$$ \varUpsilon:= \{(f,s)\,:\, f \in C[0,s], s \in\mathbb{R}_{+} \}, $$

equipped with the distance \(d_{\varUpsilon}\) (sometimes called the Dupire distance in the context of functional Itô calculus) which is defined for \(s\leq t\) by

$$ d_{\varUpsilon}\big((f,s),(g,t)\big) := \max\bigg( |t-s|, \sup_{0\le u \le s} |f(u) - g(u)|, \sup_{s \le u \le t} |g(u) - f(s)| \bigg), $$
(3.1)

and which turns \(\varUpsilon\) into a Polish space. The space \(\varUpsilon \) is a convenient way to express optionality of a process on \(C(\mathbb{R}_{+})\). Indeed, put

$$ r\colon C(\mathbb{R}_{+})\times\mathbb{R}_{+} \to\varUpsilon,\quad( \omega,t)\mapsto(\omega|_{[0,t]},t). $$

By [13, Theorem IV.97], a process \(Y\) is predictable if and only if there is a function \(H\colon\varUpsilon\to\mathbb{R}\) such that \(Y=H\circ r\). Moreover, since \(\varOmega^{\mathrm{qv}}\) is a subset of the set of continuous paths, the optional and predictable processes coincide. Hence, \(Y\) is optional if and only if such a function \(H\) exists. We say that an optional process \(Y\) is \(\varUpsilon\)-(upper/lower semi-)continuous if the corresponding function \(H\) on \(\varUpsilon\) is (upper/lower semi-)continuous.
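For concreteness, the distance (3.1) compares the two paths on the common interval and compares the longer path with the frozen terminal value of the shorter one; it can be evaluated numerically as in the short Python sketch below. The sketch is illustrative only; the representation of stopped paths as callables and the sampling grid are our own choices.

```python
import numpy as np

def d_upsilon(f, s, g, t, grid_points=1_000):
    """Distance (3.1) between the stopped paths (f, s) and (g, t),
    where f, g are vectorized callables defined on [0, s] and [0, t]."""
    if s > t:                                  # (3.1) is stated for s <= t; swap arguments otherwise
        f, s, g, t = g, t, f, s
    u1 = np.linspace(0.0, s, grid_points)      # both paths are alive on [0, s]
    u2 = np.linspace(s, t, grid_points)        # here g is compared with the frozen value f(s)
    return max(abs(t - s),
               float(np.max(np.abs(f(u1) - g(u1)))),
               float(np.max(np.abs(g(u2) - f(s)))))

# usage: two explicitly given stopped paths
print(d_upsilon(lambda u: np.sin(u), 1.0, lambda u: np.sin(u) + 0.1 * u, 1.5))
```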

Given a centered probability measure \(\mu\) on ℝ with finite first moment, we want to solve the primal maximization problem

$$ P := \sup\{ \mathbb{E}_{\mathbb{P}}[G] : \mathbb{P}\mbox{ is a martingale measure on } C_{\mathrm{qv}}[0,1] \mbox{ with } S_{1}( \mathbb{P}) = \mu\}, $$
(3.2)

where \(S\) denotes the canonical process on \(C_{\mathrm{qv}}[0,1]\).

Since \(\mu\) satisfies \(\int|x|\,\mathrm{d}\mu(x) < \infty\), there is a smooth convex function \(\varphi\colon\mathbb{R}\to\mathbb{R} _{+}\) with \(\varphi(0)=0\), \(\lim_{x\to\pm\infty} \varphi(x)/|x| = \infty\), and such that \(\int\varphi(x) \,\mathrm{d}\mu(x) < \infty\) (apply for example the de la Vallée-Poussin theorem). From now on, we fix such a function \(\varphi\) and define

$$ \zeta_{t}(\omega) := \frac{1}{2} \int_{0}^{t} \varphi''\big(S_{s}( \omega)\big) \,\mathrm{d}\langle S \rangle_{s} (\omega), \quad (\omega,t) \in C_{\mathrm{qv}}[0,1] \times[0,1], $$

where we write \(\langle S \rangle(\omega) := \langle\omega\rangle \) for \(\omega\in\varOmega^{\mathrm{qv}}\). We then consider for \(\alpha, c > 0\) the set of (generalized admissible) simple strategies

$$ \mathcal{Q}_{\alpha,c} :=\left\{ H \,:\, \textstyle\begin{array}{l} H \mbox{ is a simple strategy and} \\ (H \cdot S)_{t}(\omega) \ge-c - \alpha\zeta_{t}(\omega),\forall( \omega,t) \in C_{\mathrm{qv}}[0,1] \times[0,1] \end{array}\displaystyle \right\} . $$

We also define the set of “European options available at price 0” as

$$ \mathcal{E}^{0} := \left\{ \psi\in C(\mathbb{R})\,:\, \frac{|\psi|}{1 + \varphi} \mbox{ is bounded}, \int\psi(x) \,\mathrm{d}\mu(x) =0\right\} . $$

In this setting, we deduce the following duality result for one period.

Theorem 3.1

Let \(\gamma\colon\varUpsilon\to\mathbb{R}\) be upper semi-continuous and bounded from above and let

$$ G\colon C_{\mathrm{qv}}[0,1] \to\mathbb{R}, \qquad G(\omega) := \gamma\big(\operatorname{\mathfrak{t}}(\omega)|_{[0, \langle\omega\rangle_{1}]}, \langle\omega\rangle_{1}\big). $$

Setting

$$ D := \inf\left\{ p\,:\, \textstyle\begin{array}{l} \exists c,\alpha>0, (H^{n}) \subseteq\mathcal{Q}_{\alpha,c}, \psi\in\mathcal{E}^{0} \textit{ such that } \forall\omega\in C_{ \mathrm{qv}}[0,1], \\ p +\liminf_{n\to\infty} (H^{n} \cdot S)_{1}(\omega) + \psi(S_{1}( \omega)) \ge G(\omega) \end{array}\displaystyle \right\} , $$

we have the duality relation

$$ P = D. $$

Note that \(P\) does not depend on \(\varphi\) and therefore also the value of \(D\) does not depend on it; the function \(\varphi\) is just needed to provide some compactness.

The inequality \(P \leq D\) is fairly easy: If \(p > D\), then there exist a sequence \((H^{n}) \subseteq \mathcal{Q}_{\alpha,c}\) and a \(\psi\in C(\mathbb{R})\) with \(\int\psi(x) \,\mathrm{d}\mu(x) = 0\) such that

$$ p +\liminf_{n\to\infty} (H^{n} \cdot S)_{1}(\omega) + \psi\big(S _{1}(\omega)\big) \ge G(\omega). $$

In particular, for all martingale measures ℙ on \(C_{ \mathrm{qv}}[0,1]\) with \(S_{1}(\mathbb{P}) = \mu\), we have

$$\begin{aligned} \mathbb{E}_{\mathbb{P}}[G] &\le\mathbb{E}_{\mathbb{P}}\Big[p + \liminf_{n\to\infty} (H^{n} \cdot S)_{1} + \psi(S_{1})\Big] \\ & \le p + \liminf_{n\to\infty} \mathbb{E}_{\mathbb{P}}[(H^{n} \cdot S)_{1}] + \mathbb{E}_{\mathbb{P}}[\psi(S_{1})] \le p, \end{aligned}$$

where in the second step we used Fatou’s lemma, which is justified because \((H^{n} \cdot S)_{1}\) is uniformly bounded from below by \(-c -\alpha\zeta_{1}\) and Itô’s formula gives ℙ-almost surely

$$ \varphi(S_{t}) = \int_{0}^{t} \varphi'(S_{s}) \,\mathrm{d}S_{s} + \zeta_{t}, $$

which shows that \(\zeta\) is the compensator of the ℙ-submartingale \(\varphi(S)\) and therefore \(\mathbb{E} _{\mathbb{P}}[\zeta_{1}] < \infty\).
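To fix ideas (this special case is ours and is not needed for the argument), suppose that \(\mu\) happens to have a finite second moment, so that one may simply take \(\varphi(x)=x^{2}\). Then

$$ \zeta_{t} = \frac{1}{2}\int_{0}^{t} 2 \,\mathrm{d}\langle S \rangle_{s} = \langle S \rangle_{t} \quad\mbox{and}\quad \varphi(S_{t}) - \zeta_{t} = S_{t}^{2} - \langle S \rangle_{t}, $$

the familiar martingale associated with \(S\); in this case, \(\mathbb{E}_{\mathbb{P}}[\zeta_{1}] = \mathbb{E}_{\mathbb{P}}[S_{1}^{2}] = \int x^{2}\,\mathrm{d}\mu(x) < \infty\), which makes the integrability used above completely explicit.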

In the following, we concentrate on the inequality \(P \geq D\) and proceed in three steps:

  1. Reduction of primal problem \(P\) to optimal Skorokhod embedding \(P^{*}\): \(P= P^{*}\).

  2. Duality of optimal Skorokhod embedding \(P^{*}\) and a dual problem \(D^{*}\): \(P^{*}=D^{*}\).

  3. The new dual problem \(D^{*}\) dominates the dual problem \(D\): \(D\leq D^{*}\).

Step 1: The idea, going back to Hobson [19], is to translate the primal problem into an optimal Skorokhod embedding problem. Let us start by observing that if ℙ is a martingale measure for \(S\), then by the Dambis–Dubins–Schwarz theorem, the process \((\operatorname{\mathfrak{t}}(S)_{t\wedge\langle S \rangle _{1}})_{t \ge0}\) is a stopped Brownian motion under ℙ in the filtration \((\mathcal{F}^{S}_{\tau_{t}})_{t\ge0}\), where \(( \mathcal{F}^{S}_{t})_{t \in[0,1]}\) is the usual augmentation of the filtration generated by \(S\) and \((\tau_{t})_{t\ge0}\) are the stopping times defined in (2.2). It is also straightforward to verify that \(\langle S \rangle_{1}\) is a stopping time with respect to \((\mathcal{F}^{S}_{\tau_{t}})\). Since moreover \(\operatorname{\mathfrak{t}}(S)_{\langle S \rangle_{1}} = S_{1} \sim \mu\), we deduce that there exists a new filtered probability space \((\tilde{\varOmega}, (\mathcal{G}_{t})_{t \ge0}, \mathbb{Q})\) with a Brownian motion \(W\) and a stopping time \(\tau\) such that \(W_{\tau} \sim\mu\), the process \(W_{\cdot\wedge\tau}\) is a uniformly integrable martingale, and

$$ \mathbb{E}_{\mathbb{P}}[G] = \mathbb{E}_{\mathbb{Q}}\big[\gamma \big((W_{s})_{s\leq\tau}, \tau\big)\big]. $$

Conversely, let \((\tilde{\varOmega}, (\mathcal{G}_{t})_{t \ge0}, \mathbb{Q})\) be a filtered probability space with a Brownian motion \(W\) and a finite stopping time \(\tau\) such that \(W_{\tau}\sim \mu\) and \(W_{\cdot\wedge\tau}\) is a uniformly integrable martingale, and define \(S_{t} := W_{(t/(1-t)) \wedge\tau}\) for \(t\in[0,1]\). Then \(S\) is a martingale on \([0,1]\) with \(\langle S \rangle_{1} = \tau\), and writing ℙ for the law of \(S\), we get

$$ \mathbb{E}_{\mathbb{Q}}\big[\gamma\big((W_{s})_{s\leq\tau}, \tau \big)\big] = \mathbb{E}_{\mathbb{P}}[ \tilde{G}\circ\bar{ \operatorname{\mathfrak{t}}}(S, 1)] = \mathbb{E}_{\mathbb{P}}[G]. $$

To conclude, we arrive at the following observation.

Lemma 3.2

The value \(P\) defined in (3.2) is given by

$$\begin{aligned} P & \phantom{:}= P^{\ast} \\ &:= \sup\left\{ \mathbb{E}_{\mathbb{Q}}[\gamma((W_{s})_{s\leq \tau}, \tau)]\,:\, \textstyle\begin{array}{l} (\tilde{\varOmega}, (\mathcal{G}_{t})_{t \ge0}, \mathbb{Q}) \in \mathfrak{F}, \tau\in\mathfrak{T}((\mathcal{G}_{t})_{t \ge0}), \\ W_{\tau}\sim\mu, W_{\cdot\wedge\tau} \textit{ is a u.i. martingale} \end{array}\displaystyle \right\} , \end{aligned}$$
(3.3)

where \(\mathfrak{F}\) denotes all filtered probability spaces supporting a Brownian motion \(W\) and \(\mathfrak{T}((\mathcal{G}_{t})_{t\ge0})\) is the set of \(( \mathcal{G}_{t})_{t \ge0}\)-stopping times.

By [4, Lemma 3.11], the value \(P^{\ast}\) is independent of the particular probability space as long as it supports a Brownian motion and a \(\mathcal{G}_{0}\)-measurable uniformly distributed random variable. Therefore, it is sufficient to consider the probability space \((\bar{\varOmega},\bar{\mathcal{F}},(\bar{\mathcal{F}}_{t})_{t \ge0},\bar{ {\mathbb{W}}})\), where we take \(\bar{\varOmega} := C(\mathbb{R}_{+}) \times[0,1]\), \(\mathcal{F}=(\mathcal{F}_{t})_{t\geq0}\) to be the natural filtration on \(C(\mathbb{R}_{+})\), \(\bar{\mathcal{F}}\) to be the completion of \(\mathcal{F}\otimes\mathcal{B}([0,1])\), \(\bar{{\mathbb{W}}}[A_{1}\times A_{2}]: = {\mathbb{W}}[A_{1}] \lambda (A_{2})\), and \(\bar{\mathcal{F}}_{t}\) the usual augmentation of \(\mathcal{F}_{t} \otimes\sigma([0,1])\). Here, \(\lambda\) denotes the Lebesgue measure and \({\mathbb{W}}\) the Wiener measure. We write \(\bar{B}=(\bar{B}_{t})_{t\geq0}\) for the canonical process on \(\bar{\varOmega}\), that is, \(\bar{B}_{t}(\omega,u):=\omega(t)\).

For two given random times \(\tau\) and \(\tau'\) on \(\bar{\varOmega}\) and a bounded continuous function \(f\colon C(\mathbb{R}_{+})\times \mathbb{R}_{+}\to\mathbb{R}\), we define

$$\begin{aligned} d_{f}(\tau,\tau') & := | \mathbb{E}_{\bar{{\mathbb{W}}}} [ f(\tau)-f( \tau')]| \\ & \phantom{:}= \bigg| \int\Big( f\big(\omega, \tau(\omega, x)\big)-f \big(\omega, \tau'(\omega, x)\big)\Big)\, \bar{{\mathbb{W}}}( \mathrm{d} \omega, \mathrm{d} x) \bigg|. \end{aligned}$$

We then identify \(\tau\) and \(\tau'\) if \(d_{f}(\tau,\tau')=0\) for all continuous and bounded \(f\). On the resulting space of equivalence classes denoted by \(\mathrm{RT}\), the family of semi-norms \((d_{f})_{f}\) gives rise to a Polish topology. An equivalent interpretation of this space is to consider the measures on \(C(\mathbb{R}_{+})\times \mathbb{R}_{+}\) induced by

$$ \nu_{\tau}(A\times B) = \int\mathbf{1} _{\{\omega\in A, \tau( \omega,x) \in B\}}\, \bar{{\mathbb{W}}}(\mathrm{d} \omega, \mathrm{d} x). $$
(3.4)

The topology above corresponds to the topology of weak convergence of the corresponding measures. A random time \(\tau\) is an \(\bar{ \mathcal{F}}\)-stopping time if and only if for any \(f\in C(\mathbb{R} _{+})\) supported on \([0,t]\), the random variable \(f(\tau)\) is \(\bar{\mathcal{F}}_{t}\)-measurable, which in turn holds if and only if for all \(g\in C_{b}(C(\mathbb{R}_{+}))\), we have (see also [4, Theorem 3.8])

$$ \mathbb{E}_{\bar{{\mathbb{W}}}}\big[f(\tau)(g-\mathbb{E}_{{\mathbb{W}}}[g| \mathcal{F}_{t}])\big]=\int f(s)(g-\mathbb{E}_{{\mathbb{W}}}[g| \mathcal{F}_{t}])(\omega)~\nu_{\tau}(\mathrm{d}\omega,\mathrm{d}s)=0, $$
(3.5)

where on the left-hand side, we interpret \(g-\mathbb{E}_{{\mathbb{W}}}[g| \mathcal{F}_{t}]\) as a random variable on the extension \(\bar{\varOmega }\) via \((g-\mathbb{E}_{{\mathbb{W}}}[g|\mathcal{F}_{t}])(\omega,x)=(g- \mathbb{E}_{{\mathbb{W}}}[g|\mathcal{F}_{t}])(\omega)\). As a consequence, for a stopping time \(\tau\) on \(\bar{\varOmega}\), all elements of the respective equivalence class are stopping times. We call this equivalence class, as well as (by abuse of notation) its representatives, randomized stopping times (in short, \(\mathrm{RST}\)).

By the same argument as above, there is a continuous compensating process \(\zeta^{1} \colon\varUpsilon\to\mathbb{R}\) such that \((\varphi(B_{t}) - \zeta^{1}_{t})_{t\ge0}\) is a martingale under \({\mathbb{W}}\). We write \(\mathrm{RST}(\mu)\) for the set of randomized stopping times which embed a given measure \(\mu\) (that is, \(\bar{B}_{\tau}\sim \mu\) and \(\bar{B}_{\cdot\wedge\tau}\) is a uniformly integrable martingale) and such that \(\mathbb{E}_{\bar{{\mathbb{W}}}}[\zeta^{1}_{\tau}]< \infty\), this last condition also being equivalent to \(\mathbb{E} _{\bar{{\mathbb{W}}}}[\zeta^{1}_{\tau}] = V\) for \(V = \int\varphi(x) \,\mu(\mathrm{d}x)\). It is then not hard to show that \(\mathrm{RST}( \mu)\) is compact; see [4, Theorem 3.14 and Sect. 7.2.1]. Thereby, we have turned the optimization problem (3.2) into the primal problem of the optimal Skorokhod embedding problem

$$ P^{\ast}=\sup_{\tau\in\mathrm{RST}(\mu)}\mathbb{E}_{ \bar{{\mathbb{W}}}}\big[\gamma\big((\bar{B}_{s})_{s\leq\tau}, \tau\big)\big]. $$
(3.6)

Step 2: In [4], a duality result for (3.6) is shown. To state it (and in what follows), it is convenient to fix a particularly nice version of the conditional expectation on the Wiener space \((C(\mathbb{R}_{+}),\mathcal{F},{\mathbb{W}})\).

Definition 3.3

Let \(X\colon C(\mathbb{R}_{+})\to\mathbb{R}\) be a measurable function which is bounded or positive. Then we define \(\mathbb{E}_{{\mathbb{W}}}[X| \mathcal{F}_{t}]\) to be the unique \(\mathcal{F}_{t}\)-measurable function satisfying

$$ \mathbb{E}_{{\mathbb{W}}}[X|\mathcal{F}_{t}](\omega) := \int X\big(( \omega|_{[0,t]})\oplus\tilde{\omega}\big)\, {\mathbb{W}}( \mathrm{d}\tilde{\omega}), $$

where \((\omega|_{[0,t]}) \oplus\tilde{\omega}\) is the concatenation of \(\omega|_{[0,t]}\) and \(\tilde{\omega}\), that is,

$$ (\omega|_{[0,t]}) \oplus\tilde{\omega}(r) := \mathbf{1} _{\{r \le t\}} \omega(r) + \mathbf{1} _{\{r> t\}} \big(\omega(t)+ \tilde{\omega}(r-t)\big). $$

Similarly, for bounded or positive \(X\colon\varOmega^{\mathrm{qv}} \to\mathbb{R}\), we define \(\mathbb{E}_{{\mathbb{W}}}[X|\mathcal{F} ^{\mathrm{qv}}_{t}]\) to be the unique \(\mathcal{F}^{\mathrm{qv}}_{t}\)-measurable function satisfying

$$ \mathbb{E}_{{\mathbb{W}}}[X|\mathcal{F}^{\mathrm{qv}}_{t}](\omega)= \int X\big((\omega|_{[0,t]})\oplus\tilde{\omega}\big)\, {\mathbb{W}}( \mathrm{d}\tilde{\omega}). $$

Then \(\mathbb{E}_{{\mathbb{W}}}[X|\mathcal{F}_{t}](\omega)\) as well as \(\mathbb{E}_{{\mathbb{W}}}[X|\mathcal{F}^{\mathrm{qv}}_{t}](\omega)\) depend only on \(\omega|_{[0,t]}\), and in particular, we can (and do) interpret the conditional expectation also as a function on \(C _{\mathrm{qv}}[0,t] := \{ \omega|_{[0,t]}\,:\, \omega\in \varOmega^{\mathrm{qv}}\}\).
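Definition 3.3 is constructive and can be mimicked numerically: freeze \(\omega|_{[0,t]}\), concatenate independent Wiener paths \(\tilde{\omega}\), and average. The Python sketch below is a Monte Carlo illustration under our own discretization (all names are hypothetical and the functional \(X\) is an arbitrary example); it is not part of the paper's construction.

```python
import numpy as np

def conditional_expectation(X, omega_restricted, horizon=1.0, n_steps=500, n_samples=20_000, seed=0):
    """Monte Carlo version of E_W[X | F_t](omega) in the spirit of Definition 3.3:
    average X over concatenations (omega|_[0,t]) (+) omega_tilde with fresh Wiener paths.

    X                : functional of a path sampled on the uniform grid of [0, horizon]
    omega_restricted : samples of omega on the part of that grid covering [0, t]
    """
    rng = np.random.default_rng(seed)
    dt = horizon / n_steps
    tail_len = n_steps + 1 - len(omega_restricted)           # remaining grid points on (t, horizon]
    vals = np.empty(n_samples)
    for i in range(n_samples):
        increments = rng.normal(0.0, np.sqrt(dt), tail_len)
        tail = omega_restricted[-1] + np.cumsum(increments)  # omega(t) + omega_tilde(. - t)
        vals[i] = X(np.concatenate([omega_restricted, tail]))
    return vals.mean()

# usage: a bounded functional of the terminal value, with omega frozen on [0, 0.5]
X = lambda path: min(max(path[-1] - 0.1, 0.0), 5.0)          # capped call on omega(1)
frozen = np.linspace(0.0, 0.2, 251)                          # some continuous segment ending at omega(0.5) = 0.2
print(conditional_expectation(X, frozen))                    # Monte Carlo estimate of E_W[X | F_{0.5}](omega)
```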

We equip \(\varOmega^{\mathrm{qv}}\) with the topology of uniform convergence on compacts. Note that then \(\varOmega^{\mathrm{qv}}\) is a metric space, but it is not complete due to the fact that it is possible to approximate paths without quadratic variation uniformly by typical Brownian sample paths.

Proposition 3.4

([4, Proposition 3.5])

If \(X\in C_{b}(C(\mathbb{R}_{+}))\), then the process given by \(X_{t}(\omega):=\mathbb{E}_{{\mathbb{W}}}[X|\mathcal{F}_{t}](\omega )\) defines a \(\varUpsilon\)-continuous martingale on \((C(\mathbb{R}_{+}),( \mathcal{F}_{t}), {\mathbb{W}})\). By restriction, it is also a \(\varUpsilon\)-continuous martingale on \((\varOmega^{\mathrm{qv}},( \mathcal{F}^{\mathrm{qv}}_{t}), {\mathbb{W}})\).

Then the duality for the optimal Skorokhod embedding reads as follows:

Proposition 3.5

Let \(\gamma\colon\varUpsilon\to\mathbb{R}\) be upper semi-continuous and bounded from above. We put

$$ D^{\ast}:=\inf\left\{ p: \textstyle\begin{array}{l} \exists\alpha\ge0, \psi\in\mathcal{E}^{0}, m \in C_{b}(C( \mathbb{R}_{+})) \textit{ such that }\mathbb{E}_{{\mathbb{W}}}[m] = 0\textit{ and} \\ p + \mathbb{E}_{{\mathbb{W}}}[m|\mathcal{F}_{t}](\omega) + \alpha Q( \omega,t) + \psi(B_{t}(\omega)) \geq\gamma(\omega,t), \\ \forall(\omega, t)\in C(\mathbb{R}_{+})\times\mathbb{R}_{+} \end{array}\displaystyle \right\} , $$

where we write \(Q(\omega,t):=\varphi(B_{t}(\omega)) - 1/2 \int_{0} ^{t} \varphi''(B_{s}(\omega)) \,\mathrm{d}s\). Let \(P^{\ast}\) be defined as in (3.3). Then one has

$$ P^{\ast}= D^{\ast}. $$

Proof

This is essentially a restatement of [4, Theorem 4.2 and Proposition 4.3 (cf. the proof of Theorem 4.2)], combined with the discussion before [4, Theorem 7.3], which enables us to modify the statement to include the term \(\alpha Q( \omega,t)\) instead of \(\alpha(\omega(t)^{2}-t/2)\). □

By Proposition 3.4 and the fact that \(\varOmega^{\mathrm{qv}}\) is dense in \(C(\mathbb{R}_{+})\), we see that the value \(D^{\ast}\) equals

$$ D^{\ast,\mathrm{qv}} :=\inf\left\{ p \,:\, \textstyle\begin{array}{l} \exists\alpha\ge0, \psi\in\mathcal{E}^{0}, m \in C_{b}( \varOmega^{\mathrm{qv}}) \mbox{ such that } \mathbb{E}_{{\mathbb{W}}}[m] = 0 \mbox{ and } \\ p + \mathbb{E}_{{\mathbb{W}}}[m|\mathcal{F}_{t}^{\mathrm{qv}}](\omega ) + \alpha Q(\omega,t) + \psi(B_{t}(\omega)) \geq\gamma(\omega,t), \\ \forall(\omega, t)\in\varOmega^{\mathrm{qv}}\times\mathbb{R}_{+} \end{array}\displaystyle \right\} . $$

Step 3: Let now \(p>D^{*}=P^{\ast}=P\). Then Proposition 3.5 gives us a function \(\psi\in \mathcal{E}^{0}\), a constant \(\alpha\geq0\) and a continuous bounded function \(m\colon\varOmega^{\mathrm{qv}} \to\mathbb{R}\) with \(\mathbb{E}_{{\mathbb{W}}}[m]=0\) such that for all \((\omega,t) \in\varOmega^{\mathrm{qv}} \times\mathbb{R}_{+}\),

$$ M_{t}(\omega) := \mathbb{E}_{{\mathbb{W}}}[m|\mathcal{F}^{ \mathrm{qv}}_{t}](\omega) \geq-p-\psi\big(B_{t}(\omega)\big) - \alpha Q(\omega,t) + \gamma(\omega,t). $$
(3.7)

Consider the functional \(\tilde{m} \colon\varOmega^{\mathrm{qv}} \to \mathbb{R}\) given by

$$ \tilde{m}:= m \circ\operatorname{\mathfrak{t}} $$

which is \(\mathcal{G}\)-invariant, that is, invariant under all strictly increasing unbounded time-changes, and satisfies \(\mathbb{E}_{{\mathbb{W}}}[ \tilde{m}]=\mathbb{E}_{{\mathbb{W}}}[m]=0\). Denote by \(m_{0}\) the supremum of \(|m(\omega)|\) over all \(\omega\in\varOmega^{\mathrm{qv}}\). Then \(m_{0}+m \ge0\), and if we fix \(\varepsilon>0\) and apply Theorem 2.7 in conjunction with Remark 2.2, we get a sequence of simple strategies \((\tilde{H}^{n})\subseteq \mathcal{H}_{m_{0}+\varepsilon}\) such that

$$ \liminf_{t \to\infty} \liminf_{n\to\infty} \big(\varepsilon+( \tilde{H}^{n}\cdot B)_{t}(\omega)\big) \geq\tilde{m}(\omega) \mathbf{1} _{\{\langle B \rangle_{\infty}= \infty\}}(\omega), \quad \omega\in\varOmega^{\mathrm{qv}}. $$

By stopping, we may suppose that \((\tilde{H}^{n}\cdot B)_{t}(\omega) \le m_{0}\) for all \((\omega,t) \in\varOmega^{\mathrm{qv}} \times \mathbb{R}_{+}\). Set

$$ \tilde{M}_{t}(\omega):= (M\circ\bar{\operatorname{\mathfrak{t}}})( \omega,t), \quad (\omega,t) \in\varOmega^{\mathrm{qv}}\times\mathbb{R}_{+} . $$

Lemma 3.6

For all \((\omega,t) \in\varOmega^{\mathrm{qv}}\times\mathbb{R}_{+}\), we have

$$ \varepsilon+ \liminf_{n\to\infty}(\tilde{H}^{n} \cdot B)_{t}( \omega) \geq\tilde{M}_{t}(\omega). $$

Proof

We claim that \(\tilde{M}_{t} = \mathbb{E}_{{\mathbb{W}}}[ \mathbf{1} _{\{\langle B \rangle_{\infty}= \infty\}} \tilde{m} |\mathcal{F} ^{\mathrm{qv}}_{t}]\). Indeed, we have

$$ \tilde{M}_{t}( \omega|_{[0,t]}\oplus\tilde{\omega}, t) = (M\circ\bar{ \operatorname{\mathfrak{t}}}) ( \omega|_{[0,t]}\oplus \tilde{\omega}, t)= M_{\langle B\rangle_{t}}\big( \operatorname{\mathfrak{t}}( \omega|_{[0,t]}\oplus\tilde{\omega}) \big), $$

where the latter quantity actually does not depend on \(\tilde{\omega}\), i.e., with a slight abuse of notation we may write it as \(M_{\langle B \rangle_{t}} (\operatorname{\mathfrak{t}}(\omega|_{[0,t]}))\). Also, we have

$$\begin{aligned} &\mathbb{E}_{{\mathbb{W}}}[ \mathbf{1} _{\{\langle B \rangle_{\infty }= \infty\}} \tilde{m} |\mathcal{F}^{\mathrm{qv}}_{t}](\omega|_{[0,t]}) \\ &\quad = \mathbb{E}_{{\mathbb{W}}}[ \mathbf{1} _{\{\langle B \rangle_{ \infty}= \infty\}} m\circ\operatorname{\mathfrak{t}}|\mathcal{F} ^{\mathrm{qv}}_{t}](\omega|_{[0,t]}) \\ &\quad = \int\mathbf{1} _{\{ \langle B \rangle_{\infty}= \infty\}}( \omega|_{[0,t]}\oplus\tilde{\omega}) (m\circ \operatorname{\mathfrak{t}}) (\omega|_{[0,t]}\oplus\tilde{\omega}) \, {\mathbb{W}}(\mathrm{d}\tilde{\omega}) \\ &\quad = \int\mathbf{1} _{\{\langle B \rangle_{\infty}= \infty\}}( \tilde{\omega}) m\big(\operatorname{\mathfrak{t}}(\omega|_{[0,t]}) \oplus\operatorname{\mathfrak{t}}(\tilde{\omega}) \big)\, {\mathbb{W}}( \mathrm{d}\tilde{\omega}) \\ &\quad = \int m\big(\operatorname{\mathfrak{t}}(\omega|_{[0,t]})\oplus \tilde{\omega}\big)\, {\mathbb{W}}(\mathrm{d}\tilde{\omega}) \\ &\quad = M_{\langle B\rangle_{t}}\big(\operatorname{\mathfrak{t}}(\omega |_{[0,t]})\big), \end{aligned}$$

where we used that \({\mathbb{W}}\)-almost surely, \(\tilde{\omega} = \operatorname{\mathfrak{t}}(\tilde{\omega})\) and \(\langle B \rangle_{\infty}= \infty\). Writing

$$ (\tilde{H}^{n} \cdot B)_{t}^{s} := (\tilde{H}^{n} \cdot B)_{s} - ( \tilde{H}^{n} \cdot B)_{t}, $$

we thus find

$$\begin{aligned} \tilde{M}_{t} & = \mathbb{E}_{{\mathbb{W}}}[\mathbf{1} _{\{\langle B \rangle_{\infty}= \infty\}} \tilde{m}|\mathcal{F}^{\mathrm{qv}}_{t}] \\ &\leq\varepsilon+ \mathbb{E}_{{\mathbb{W}}}\big[ \liminf_{s\to \infty} \liminf_{n\to\infty} (\tilde{H}^{n} \cdot B)_{s}\big| \mathcal{F}^{\mathrm{qv}}_{t}\big] \\ & = \varepsilon+ \mathbb{E}_{{\mathbb{W}}}\big[\liminf_{s\to\infty } \liminf_{n\to\infty} \big((\tilde{H}^{n} \cdot B)_{t} + ( \tilde{H}^{n} \cdot B)_{t}^{s}\big)\big|\mathcal{F}^{\mathrm{qv}}_{t} \big] \\ &= \varepsilon+ \liminf_{n\to\infty} (\tilde{H}^{n} \cdot B)_{t} + \mathbb{E}_{{\mathbb{W}}}\big[ \liminf_{s\to\infty} \liminf_{n\to\infty} (\tilde{H}^{n} \cdot B)_{t}^{s}\big| \mathcal{F}^{\mathrm{qv}}_{t}\big]. \end{aligned}$$

Now it is easily verified that \((\liminf_{n} (\tilde{H}^{n} \cdot B)_{t} ^{s})_{s \ge t}\) is a bounded \({\mathbb{W}}\)-supermartingale started in 0 (recall that \(-m_{0} - \varepsilon\le(\tilde{H}^{n}\cdot B)_{s}( \omega) \le m_{0}\) for all \((\omega,s) \in\varOmega^{\mathrm{qv}} \times\mathbb{R}_{+}\), which yields \(|(\tilde{H}^{n} \cdot B)_{t} ^{s}(\omega)| \le2 m_{0} + \varepsilon\) for all \((\omega,s) \in\varOmega^{\mathrm{qv}} \times\mathbb{R}_{+}\)), and therefore the conditional expectation on the right-hand side is nonpositive, which concludes the proof. □

We are now ready to show that \(D\leq D^{*}\) and thus to prove the main result (Theorem 3.1) of this section.

Proof of Theorem 3.1

Lemma 3.6 and (3.7) show that

$$ \varepsilon+ \liminf_{n \to\infty} (\tilde{H}^{n} \cdot B)_{t}( \omega) \geq-p -\psi\big((B\circ\bar{\operatorname{\mathfrak{t}}})( \omega,t)\big)- \alpha(Q\circ\bar{\operatorname{\mathfrak{t}}}) ( \omega,t) + \gamma\circ\bar{\operatorname{\mathfrak{t}}}(\omega,t) $$

for all \((\omega,t) \in\varOmega^{\mathrm{qv}} \times\mathbb{R}_{+}\). Observing that

$$ \psi\big((B \circ\bar{\operatorname{\mathfrak{t}}}) (\omega,t) \big)=\psi\big(B_{t}(\omega)\big) \quad\mbox{and}\quad Q\circ\bar{ \operatorname{\mathfrak{t}}}(\omega,t)= \varphi\big(B_{t}(\omega) \big) - \zeta_{t}(\omega), $$

we get

$$\begin{aligned} & p+ \varepsilon+ \liminf_{n \to\infty} (\tilde{H}^{n} \cdot B)_{t}( \omega) + \psi\big(B_{t}(\omega)\big) + \alpha\Big(\varphi\big(B _{t}(\omega)\big) - \zeta_{t}(\omega)\Big) \ge\gamma\circ\bar{ \operatorname{\mathfrak{t}}}(\omega,t) \end{aligned}$$

for all \((\omega, t) \in\varOmega^{\mathrm{qv}} \times\mathbb{R}_{+}\). It now suffices to apply Föllmer’s pathwise Itô formula [16] along the dyadic Lebesgue partition defined in Sect. 2 to obtain a sequence of simple strategies \((G^{n})\subseteq\mathcal{Q}_{\alpha,1}\) such that \(\lim_{n \to \infty} (G^{n}\cdot B)_{t}(\omega) = \alpha(\varphi(B_{t}(\omega)) - \zeta_{t}(\omega))\) for all \((\omega,t) \in\varOmega^{\mathrm{qv}} \times\mathbb{R}_{+}\); to make the strategies \((G^{n})\) admissible, it suffices to stop once the wealth at time \(t\) drops below \(-1-\alpha\zeta_{t}(\omega) < \alpha(\varphi(B_{t}(\omega)) - \zeta_{t}(\omega))\). Hence, setting \(H^{n} := \tilde{H}^{n} + G^{n}\), we have established that there exist \((H^{n})\subseteq\mathcal{Q}_{\alpha,m_{0}+\varepsilon+1}\) and \(\psi\in\mathcal{E}^{0}\) such that

$$ p+\varepsilon+ \liminf_{n \to\infty} (H^{n}\cdot B)_{t}(\omega)+ \psi\big(B_{t}(\omega)\big) \geq\gamma\circ\bar{ \operatorname{\mathfrak{t}}}(\omega,t) $$

for all \((\omega,t) \in\varOmega^{\mathrm{qv}} \times\mathbb{R}_{+}\). Now for fixed \(t \in\mathbb{R}_{+}\) the functionals on both sides only depend on \(\omega|_{[0,t]}\), so we can consider them as functionals on \(C_{\mathrm{qv}}[0,t]\), and thus the inequality holds in particular for all \((\omega,t) \in C_{\mathrm{qv}}[0,1]\times[0,1]\). Since \(p>P\) and \(\varepsilon>0\) were arbitrary, we deduce that \(D \le P\) and thus that \(D = P\). □
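The only analytic input in the last step is that the gains of the simple strategies obtained from Föllmer's pathwise Itô formula recover \(\varphi(B_{t}) - \zeta_{t}\) along the grid-crossing times of Sect. 2. The Python sketch below illustrates this identity on a sampled path; the discretization, the choice of \(\varphi\) and all names are ours, and the sketch is not the construction used in the proof.

```python
import numpy as np

def follmer_gains(path, phi_prime, phi_second, n):
    """Pathwise Ito sketch along the 2^{-n}-grid crossing times: returns the gains
    of the simple strategy holding phi'(B) and the pathwise correction term,
    whose sum approximates phi(B_T) - phi(B_0)."""
    grid = 2.0 ** (-n)
    idx, last = [0], path[0]
    for j in range(1, len(path)):              # approximate crossing times of the grid 2^{-n} Z
        if abs(path[j] - last) >= grid:
            last += grid * np.sign(path[j] - last)
            idx.append(j)
    idx.append(len(path) - 1)
    x, dx = path[idx[:-1]], np.diff(path[idx])
    gains = np.sum(phi_prime(x) * dx)          # (G^n . B)_T with G^n = phi'(B_{sigma_k}) on (sigma_k, sigma_{k+1}]
    correction = 0.5 * np.sum(phi_second(x) * dx ** 2)   # approximates 1/2 int phi''(B_s) d<B>_s
    return gains, correction

# usage: phi = cosh (smooth and convex; the value phi(0) = 1 is irrelevant for the identity)
rng = np.random.default_rng(4)
w = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, 0.002, 250_000))])
g, z = follmer_gains(w, np.sinh, np.cosh, n=7)
print(g + z, np.cosh(w[-1]) - np.cosh(w[0]))   # the two numbers agree up to a small discretization error
```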

4 Duality in the multi-marginal case

In this section, we show a general duality result for the multi-marginal Skorokhod embedding problem and, moreover, for a slightly more general problem. Our main result then follows by exactly the same steps and arguments as for the one-marginal duality, that is, reduction of the primal problem to optimal multi-marginal Skorokhod embedding (Step 1 in the last section) and domination of the dual problem via the dual in the optimal multi-marginal Skorokhod embedding (Step 3 in the last section).

To this end, we introduce the set of all randomized multi-stopping times or \(n\)-tuples of randomized stopping times. As before, we consider the space \((\bar{\varOmega},\bar{\mathcal{F}}, \bar{{\mathbb{W}}})\) and denote its elements by \((\omega,x)\). We consider all \(n\)-tuples \(\tau=(\tau_{1},\ldots,\tau_{n})\) with \(\tau_{1}\leq\cdots\leq\tau_{n}\) and \(\tau_{i}\in\mathrm{RT}\) for all \(i\). We identify two such tuples if

$$\begin{aligned} d_{f}( \tau, \tau') & := |\mathbb{E}_{\bar{{\mathbb{W}}}} [f(\tau _{1},\ldots, \tau_{n}) - f(\tau'_{1},\ldots, \tau'_{n})]| \\ & \phantom{:}= \bigg| \int\Big(f\big(\omega, \tau_{1}(\omega,x), \ldots, \tau_{n}(\omega,x)\big) \\ & \phantom{=:: \bigg|\int\Big(}- f\big(\omega, \tau'_{1}(\omega,x), \ldots, \tau'_{n}(\omega,x)\big)\Big)\, \bar{{\mathbb{W}}}( \mathrm{d}\omega, \mathrm{d}x)\bigg| \end{aligned}$$
(4.1)

vanishes for all continuous, bounded \(f\colon C(\mathbb{R}_{+})\times \mathbb{R}_{+}^{n}\to\mathbb{R}\) and denote the resulting space by \(\mathrm{RT}_{n}\). Moreover, we consider \(\mathrm{RT}_{n}\) as a topological space by testing against all continuous bounded functions as in (4.1). As for the one-marginal case, for an ordered tuple \(\tau_{1}\leq\cdots\leq\tau_{n}\) of stopping times, it follows from (3.5) that all elements of the respective equivalence class are ordered tuples of stopping times as well. We denote this class by \(\mathrm{RST}_{n}\).

Fix \(I\subseteq\{1,\ldots,n\}\) with \(n\in I\) and \(|I|\leq n\) measures \((\mu_{i})_{i\in I}=\mu\) in convex order with finite first moments. If \(i \in\{1,\dots,n\}\setminus I\), write \(i+\) for the smallest element of \(\{j \in I: j \ge i\}\). For \(i\in I\), we set \(i+:=i\). By an iterative application of the de la Vallée-Poussin theorem, there is an increasing family of smooth, nonnegative, strictly convex functions \((\varphi_{i})_{i=1,\dots,n}\) (increasing in the sense that \(\varphi_{i}\leq\varphi_{j}\) for \(i\leq j\)) such that \(\varphi_{i}(0)=0\) and \(\varphi_{i+1}/\varphi_{i} \to\infty\) as \(x \to\pm\infty\), and \(\int\varphi_{i} \,\mathrm{d}\mu _{i+}<\infty\) for all \(i=1,\dots,n\). Denote the corresponding compensating processes by \(\zeta^{i}\) such that \(Q^{i}:= \varphi_{i}( B)-\zeta^{i}\) is a martingale, and write \(\mathcal{E} _{i} := \{\psi\in C(\mathbb{R}): \frac{|\psi|}{1 + \varphi_{i}} \mbox{ is bounded}\}\). Then we define \(\mathrm{RST}_{n}(\mu)\) to be the subset of \(\mathrm{RST}_{n}\) consisting of all tuples \(\tau_{1} \leq \cdots\leq \tau_{n}\) such that \(\bar{B}_{\tau_{i}}\sim\mu_{i}\) for all \(i\in I\) and \(\mathbb{E}_{\bar{{\mathbb{W}}}}[\zeta^{n}_{\tau _{n}}] < \infty\).

Similarly to the one-marginal case, we get

Lemma 4.1

For any set \(I\subseteq\{1,\ldots,n\}\) with \(n\in I\) and for any family of measures \((\mu_{i})_{i\in I}=\mu\) in convex order, the set \(\mathrm{RST}_{n}(\mu)\) is compact.

We introduce the space of paths where we have stopped \(n\) times as

$$ \varUpsilon_{n}:=\{(f,s_{1},\ldots,s_{n})\,:\,(f,s_{n})\in\varUpsilon, 0 \leq s_{1}\leq\cdots\leq s_{n}\}, $$

equipped with the topology generated by the obvious analogue of (3.1), namely

$$\begin{aligned} \begin{aligned} &d_{\varUpsilon_{n}}\big( (f,s_{1}, \ldots, s_{n}),(g,t_{1},\dots,t _{n})\big) \\ &\quad := \max\Big( |s_{1}-t_{1}|, \ldots, |s_{n}-t_{n}|, \sup_{u\ge0} |f(u \wedge s_{n})-g(u\wedge t_{n})|\Big). \end{aligned} \end{aligned}$$

We put \(\Delta_{n}:=\{(s_{1},\ldots,s_{n})\in\mathbb{R}_{+}^{n} \colon s_{1}\leq\cdots\leq s_{n}\}\). As a natural extension of an optional process, we say that a process \(Y\colon C(\mathbb{R}_{+}) \times\Delta_{n}\to\mathbb{R}\) is optional if for any family of stopping times \(\tau_{1} \le\cdots\le\tau_{n}\), the map \(Y(\bar{B},\tau_{1},\dots ,\tau_{n})\) is \(\bar{\mathcal{F}}_{\tau_{n}}\)-measurable. Put

$$ r_{n}\colon C(\mathbb{R}_{+})\times\Delta_{n}\to\varUpsilon_{n}, \quad(\omega,s_{1},\ldots,s_{n})\mapsto(\omega|_{[0,s_{n}]},s _{1},\ldots,s_{n}). $$

Just as in the one-marginal case, a function \(Y\colon C(\mathbb{R} _{+})\times\Delta_{n}\to\mathbb{R}\) is optional if and only if there exists a Borel function \(H\colon\varUpsilon_{n}\to\mathbb{R}\) such that \(Y=H\circ r_{n}\).

Given \(\gamma\colon\varUpsilon_{n}\to\mathbb{R}\), we are interested in the \(n\)-step primal problem

$$\begin{aligned} P^{*}_{n}:=\sup\big\{ \mathbb{E}_{\bar{{\mathbb{W}}}}[\gamma\circ r_{n}(\omega,\tau_{1},\ldots,\tau_{n})]: (\tau_{i})_{i=1}^{n} \in\mathrm{RST}_{n}(\mu)\big\} \end{aligned}$$

and its relation to the dual problem

$$\begin{aligned} D^{*}_{n}:=\inf\left\{ a: \textstyle\begin{array}{l} \exists(\psi_{j})_{j\in I}\mbox{, martingales }(M^{i})_{i=1}^{n}\mbox{ with }\mathbb{E}_{{\mathbb{W}}}[M^{i}_{\infty}]=0, \\ \int\psi_{j} \,\mathrm{d}\mu_{j}=0\mbox{ and} \\ a+\sum_{j\in I} \psi_{j}(B_{t_{j}}(\omega)) + \sum_{i=1}^{n} M^{i} _{t_{i}}(\omega) \geq\gamma(\omega,t_{1},\ldots,t_{n}), \\ \forall\omega\in C(\mathbb{R}_{+}),(t_{1},\ldots,t_{n})\in\Delta _{n} \end{array}\displaystyle \right\}. \end{aligned}$$
(4.2)

Remark 4.2

Note that in the primal as well as in the dual problem, only the stopping times truly live on \(\bar{\varOmega}\). The martingales \(M^{i}\) as well as the compensators \(\zeta^{i}\) live on \(C(\mathbb{R} _{+})\times\mathbb{R}_{+}\) in that they satisfy e.g. \(M^{i}_{t}( \omega,x)=M^{i}_{t}(\omega)\). We stress this by suppressing the \(x\) variable and writing e.g. \(\mathbb{E}_{{\mathbb{W}}}[M^{i}_{ \infty}]=0\) rather than \(\mathbb{E}_{\bar{{\mathbb{W}}}}[M^{i}_{ \infty}]=0\).

Convention

In the formulation of \(D^{*}_{n}\) in (4.2) and in the rest of the present section, \(M^{1},\ldots, M^{n}\) range over \(\varUpsilon\)-continuous martingales such that

$$ M^{i}_{t}(\omega)=\mathbb{E}_{\bar{{\mathbb{W}}}}[m^{i}|\mathcal{F} _{t}^{0}](\omega)+Q_{t}(\omega) $$

for some \(m^{i} \in C_{b}(\varOmega)\) and \(Q_{t}(\omega)=f(B_{t}( \omega))-\zeta^{f}_{t}(\omega)\), where \(f\) is a smooth function such that \(|f|/(1+\varphi_{i})\) is bounded, and \(\zeta^{f}\) is the corresponding compensating process \(\zeta^{f} = \frac{1}{2} \int_{0} ^{\cdot}f''(B_{s}) \,\mathrm{d}s\). In addition, we assume that \(\psi_{i}\in\mathcal{E}_{i}\) for all \(i \le n\).

Proposition 4.3

Let \(\gamma\colon\varUpsilon_{n}\to\mathbb{R}\) be upper semi-continuous and bounded from above. Under the above assumptions, we have \(P_{n}^{*}=D_{n}^{*}\).

As usual, the inequality \(P_{n}^{*}\leq D_{n}^{*}\) is not hard to see. The proof of the opposite inequality is based on the following minimax theorem.

Theorem 4.4

([2, Theorem 2.4.1])

Let \(K\), \(L\) be convex subsets of vector spaces \(H_{1}\) and \(H_{2}\), respectively, where \(H_{1}\) is locally convex, and let \(F\colon K \times L\to\mathbb{R}\) be given. If

  1. \(K\) is compact,

  2. \(F(\cdot, y)\) is continuous and convex on \(K\) for every \(y\in L\),

  3. \(F(x,\cdot)\) is concave on \(L\) for every \(x\in K\),

then

$$ \sup_{y\in L}\inf_{x\in K} F(x,y)=\inf_{x\in K}\sup_{y\in L} F(x,y). $$

The inequality \(P_{n}^{*} \ge D_{n}^{*}\) will be proved inductively on \(n\). To this end, we need the following preliminary result.

Proposition 4.5

Let \(c\colon C(\mathbb{R}_{+})\times\Delta_{2}\to\mathbb{R}\) be upper semi-continuous and bounded from above and \(V_{i}:=\int\varphi_{i} \,\mathrm{d}\mu_{i}<\infty\) for \(i=1,2\). Put

$$ P^{V_{2}}:=\sup\big\{ \mathbb{E}_{\bar{{\mathbb{W}}}}[c(\omega,\tau _{1},\tau_{2})]\,:\, \tau_{1}\in\mathrm{RST}_{1}(\mu_{1}), \mathbb{E}_{\bar{{\mathbb{W}}}}[\zeta^{2}_{\tau_{2}}]\leq V_{2},(\tau _{1},\tau_{2})\in\mathrm{RT}_{2}\big\} $$

and

$$ D^{V_{2}}:=\inf\left\{ \textstyle \int\psi_{1} \,\mathrm{d}\mu_{1} : \textstyle\begin{array}{l} m\in C_{b}(C(\mathbb{R}_{+})), \psi_{1} \in C_{b}(\mathbb{R}) , \mathbb{E}_{{\mathbb{W}}}[m]=0, \\ \exists\alpha_{1},\alpha_{2}\geq0\textit{ with} \\ m(\omega)+\psi_{1}(\omega(t_{1})) - \sum_{i=1}^{2}\alpha_{i}(V_{i}- \zeta^{i}_{t_{i}}(\omega)) \\ \quad\geq c(\omega,t_{1},t_{2}), \\ \forall\omega\in C(\mathbb{R}_{+}), (t_{1},t_{2})\in\Delta_{2} \end{array}\displaystyle \right\} . $$

Then we have

$$ P^{V_{2}}=D^{V_{2}}. $$

Proof

The inequality \(P^{V_{2}}\leq D^{V_{2}}\) follows easily. We are left to show the other inequality. The idea of the proof is to use a variational approach together with Theorem 4.4 to reduce the claim to the classical duality result in optimal transport.
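For completeness, here is a sketch of the easy inequality (with integrability again provided by the Convention). Let \(m\), \(\psi_{1}\), \(\alpha_{1}\), \(\alpha_{2}\) be admissible in \(D^{V_{2}}\) and let \((\tau_{1},\tau_{2})\) be admissible in \(P^{V_{2}}\). Plugging \((t_{1},t_{2})=(\tau_{1},\tau_{2})\) into the pointwise inequality and taking expectations yields

$$ \mathbb{E}_{\bar{{\mathbb{W}}}}[c(\omega,\tau_{1},\tau_{2})] \leq \mathbb{E}_{{\mathbb{W}}}[m] + \int\psi_{1}\,\mathrm{d}\mu_{1} - \alpha_{1}\big(V_{1}-\mathbb{E}_{\bar{{\mathbb{W}}}}[\zeta^{1}_{\tau_{1}}]\big) - \alpha_{2}\big(V_{2}-\mathbb{E}_{\bar{{\mathbb{W}}}}[\zeta^{2}_{\tau_{2}}]\big) \leq \int\psi_{1}\,\mathrm{d}\mu_{1}, $$

because \(\mathbb{E}_{{\mathbb{W}}}[m]=0\), \(B_{\tau_{1}}\sim\mu_{1}\), \(\mathbb{E}_{\bar{{\mathbb{W}}}}[\zeta^{1}_{\tau_{1}}]=\int\varphi_{1}\,\mathrm{d}\mu_{1}=V_{1}\), and \(\mathbb{E}_{\bar{{\mathbb{W}}}}[\zeta^{2}_{\tau_{2}}]\leq V_{2}\) with \(\alpha_{2}\geq0\).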

Using standard approximation procedures (see [34, Proof of Theorem 5.10 (i), Step 5]), we can assume that \(c\) is continuous, bounded and non-positive, and satisfies for some \(L\)

$$ \operatorname{supp} c\subseteq C(\mathbb{R}_{+})\times[0,L]^{2}. $$

In the following, we want to apply Theorem 4.4, taking for \(K\) certain subsets of \(\mathrm{RT}_{2}\). The convexity of these subsets is easily seen by interpreting their elements as measures via the obvious extension of (3.4). Compactness follows by Prokhorov's theorem; this is shown by a trivial modification of the argument in [4, Theorem 3.14]. Hence, using Theorem 4.4, we obtain

$$\begin{aligned} \begin{aligned} &\sup_{ {\scriptstyle \tau_{1}\in\mathrm{RST}_{1}(\mu_{1})\atop\scriptstyle {\scriptstyle \mathbb{E} _{\bar{{\mathbb{W}}}}[\zeta_{\tau_{2}}^{2}]\leq V_{2}\atop\scriptstyle (\tau_{1},\tau _{2})\in\mathrm{RT}_{2}}}} \mathbb{E}_{\bar{{\mathbb{W}}}}[c(\omega, \tau_{1},\tau_{2})] \\ &\quad = \sup_{ {\scriptstyle \tau_{1}\in\mathrm{RST}_{1}(\mu_{1})\atop\scriptstyle {\scriptstyle \tau_{2} \leq\max\{L,\tau_{1}\}\atop\scriptstyle (\tau_{1},\tau_{2})\in\mathrm{RT}_{2}}}} \ \inf_{\alpha\geq0}\ \mathbb{E}_{\bar{{\mathbb{W}}}}\big[c(\omega ,\tau_{1},\tau_{2}) + \alpha\big(V_{2}-\zeta_{\tau_{2}}^{2}(\omega) \big)\big] \\ &\quad = \inf_{\alpha\geq0}\ \sup_{ {\scriptstyle \tau_{1}\in\mathrm{RST}_{1}(\mu_{1})\atop\scriptstyle {\scriptstyle \tau_{2} \leq\max\{L,\tau_{1}\}\atop\scriptstyle (\tau_{1},\tau_{2})\in\mathrm{RT}_{2}}}} \ \mathbb{E}_{\bar{{\mathbb{W}}}}\big[c(\omega,\tau_{1},\tau_{2}) + \alpha\big(V_{2}-\zeta_{\tau_{2}}^{2}(\omega)\big)\big] \\ &\quad = \inf_{\alpha\geq0} \ \sup_{\tau_{1}\in\mathrm{RST}_{1}(\mu _{1})} \ \mathbb{E}_{\bar{{\mathbb{W}}}}[ c_{\alpha}(\omega,\tau_{1})], \end{aligned} \end{aligned}$$

where

$$ c_{\alpha}(\omega,t_{1}):=\sup_{t_{1}\leq t_{2}\leq\max\{L,t_{1} \}} \Big( c(\omega,t_{1},t_{2}) + \alpha\big(V_{2}-\zeta_{t_{2}} ^{2}(\omega)\big)\Big). $$

Hence, \(c_{\alpha}\) is a continuous and bounded function on \(C(\mathbb{R}_{+})\times\mathbb{R}_{+}\) since \(c\) is bounded, \(\zeta^{2}\) is continuous and increasing, and \(\{t_{2}:t_{1}\leq t _{2}\leq\max\{L,t_{1}\}\}\) is closed. To move closer to a classical transport setup, we define \(F\colon C(\mathbb{R}_{+})\times \mathbb{R}_{+}\times\mathbb{R}\to[-\infty,0]\) by

$$ F(\omega,t,y):= \textstyle\begin{cases} c_{\alpha}(\omega,t) & \mbox{ if } \omega(t)=y, \\ -\infty & \mbox{ else}, \end{cases} $$

which is an upper semi-continuous bounded function supported on

$$C(\mathbb{R}_{+})\times[0,L]\times\mathbb{R}. $$

Moreover, we define \(\mathrm{JOIN}(\mu_{1})\) to consist of all pairs of random variables \((\tau, Y)\) on \((\bar{\varOmega},\bar{{\mathbb{W}}})\) such that \(Y\sim\mu_{1}\) and \(\tau\in\mathrm{RST}\) satisfies \(\mathbb{E}_{\bar{ {\mathbb{W}}}}[\zeta^{1}_{\tau}]<\infty\). If \(\tau_{1}\in \mathrm{RST}(\mu_{1})\), then \((\tau_{1}, \bar{B}_{\tau_{1}})\in \mathrm{JOIN}(\mu_{1})\) and

$$ \mathbb{E}_{\bar{{\mathbb{W}}}}[c_{\alpha}(\omega,\tau_{1})] = \mathbb{E}_{\bar{{\mathbb{W}}}}[F(\omega,\tau_{1},\bar{B}_{\tau_{1}})]>- \infty. $$

Conversely, if \((\tau,Y)\in\mathrm{JOIN}(\mu_{1})\) with \(\mathbb{E} _{\bar{{\mathbb{W}}}}[F(\omega,\tau,Y)]>-\infty\), then one has almost surely \(Y=B_{\tau}\sim\mu_{1}\) so that \(\tau\in\mathrm{RST}(\mu _{1})\). Therefore, by the same argument as above,

$$\begin{aligned} \sup_{\tau_{1}\in\mathrm{RST}(\mu_{1})} \mathbb{E}_{ \bar{{\mathbb{W}}}}[c_{\alpha}(\omega,\tau_{1})] & = \sup_{(\tau,Y)\in\mathrm{JOIN}(\mu_{1})} \mathbb{E}_{ \bar{{\mathbb{W}}}}[F(\omega,\tau,Y)] \\ &= \inf_{\beta\geq0}\sup_{Y\sim\mu_{1}} \mathbb{E}_{ \bar{{\mathbb{W}}}}[F_{\beta}(\omega,Y)], \end{aligned}$$

where

$$ F_{\beta}(\omega,y):=\sup_{0\leq t\leq L} \Big( F(\omega,t,y) + \beta\big(V_{1}-\zeta^{1}_{t}(\omega)\big)\Big) $$

is upper semi-continuous and bounded from above. The last supremum is the primal problem of a classical optimal transport problem written in a probabilistic fashion. Hence, employing the classical duality result (e.g. [34, Sect. 5]), we obtain

$$\begin{aligned} & \sup_{\tau_{1}\in\mathrm{RST}(\mu_{1})} \mathbb{E}_{\bar{{\mathbb{W}}}}[c_{\alpha}(\omega,\tau_{1})] \\ &\quad=\inf_{\beta\geq0}\inf\bigg\{ \textstyle \!\int\!m\,\mathrm{d}{\mathbb{W}}+\!\int\! \psi\,\mathrm{d}\mu_{1} : \textstyle\begin{array}{l} m\in C_{b}(C(\mathbb{R}_{+})), \psi\in C_{b}(\mathbb{R}), \\ m(\omega)+\psi(y) \geq F_{\beta}(\omega,y) \end{array}\displaystyle \!\bigg\} \\ &\quad\geq\inf\bigg\{ \textstyle \!\int\! m\,\mathrm{d}{\mathbb{W}}+\!\int\! \psi\,\mathrm{d}\mu_{1} : \textstyle\begin{array}{l} \exists\beta\geq0, m\in C_{b}(C(\mathbb{R}_{+})), \psi\in C_{b}(\mathbb{R}) \mbox{ with} \\ m(\omega)+\psi(y)-\beta(V_{1}-\zeta^{1}_{t}(\omega)) \geq F(\omega,t,y) \end{array}\displaystyle \! \bigg\} \\ &\quad= \inf\bigg\{ \textstyle \!\int\! m\,\mathrm{d}{\mathbb{W}}+\!\int\! \psi\,\mathrm{d}\mu_{1} : \textstyle\begin{array}{l} \exists\beta\geq0, m\in C_{b}(C(\mathbb{R}_{+})), \psi\in C_{b}(\mathbb{R}) \mbox{ with} \\ m(\omega)+\psi(\omega(t))-\beta(V_{1}-\zeta^{1}_{t}(\omega)) \geq c_{\alpha}(\omega,t) \end{array}\displaystyle \! \bigg\} . \end{aligned}$$

Putting everything together yields the result. □

Proof of Proposition 4.3

By [34, Proof of Theorem 5.10 (i), Step 5], we can assume that \(\gamma\) is continuous and bounded. We show the result inductively by including more and more constraints (respectively, Lagrange multipliers) in the duality result of Proposition 3.5. In fact, we only prove the result for the two cases \(n=2\), \(I=\{2\}\) and \(n=|I|=2\). The general claim then follows by iterating the arguments that lead to Proposition 4.5 and the arguments below.

We first consider the case \(n=|I|=2\). Recall from (3.5) that a random time \(\tau\) is a stopping time if and only if \(\mathbb{E}_{\bar{{\mathbb{W}}}}[f(\tau)(g-\mathbb{E}_{{\mathbb{W}}}[g|\mathcal{F}_{t}])]=0\) for all \(t\geq0\), \(g\in C_{b}(C(\mathbb{R}_{+}))\) and \(f\in C(\mathbb{R}_{+})\) supported on \([0,t]\). Let \(H\) be the set of all functions \(h \colon C(\mathbb{R}_{+})\times\mathbb{R}_{+}\to\mathbb{R}\) of the form \(h(\omega,s)=\sum_{i=1}^{k} f_{i}(s)(g_{i}-\mathbb{E}_{{\mathbb{W}}}[g_{i}|\mathcal{F}_{u_{i}}])(\omega)\) for some \(k\in\mathbb{N}\), \(g_{i}\in C_{b}(C(\mathbb{R}_{+}))\), and \(f_{i}\in C_{b}(\mathbb{R}_{+})\) supported on \([0,u_{i}]\). Then applying Theorem 4.4 again, we have

$$\begin{aligned} &\sup_{(\tau_{1},\tau_{2})\in\mathrm{RST}_{2}(\mu_{1},\mu_{2})} \mathbb{E}_{\bar{{\mathbb{W}}}}[\gamma\circ r_{2}(\omega,\tau_{1}, \tau_{2})] \\ &\quad = \sup_{{\scriptstyle \tau_{1}\in\mathrm{RST}(\mu_{1})\atop\scriptstyle {\scriptstyle (\tau _{1},\tau _{2})\in\mathrm{RT}_{2}\atop\scriptstyle \mathbb{E}_{\bar{{\mathbb{W}}}}[ \zeta_{\tau_{2}}^{2}]\leq V_{2}}}} \inf_{{\scriptstyle \psi_{2}\in C_{b}(\mathbb{R})\atop\scriptstyle h\in H}} \mathbb{E} _{\bar{{\mathbb{W}}}}\bigg[\gamma\circ r_{2}(\omega,\tau_{1},\tau _{2}) \!+ h(\omega,\tau_{2}) \!-\! \psi_{2}\big(\omega(\tau_{2})\big) \!+ \! \int\! \psi_{2} \,\mathrm{d}\mu_{2} \!\bigg] \\ &\quad =\inf_{{\scriptstyle \psi_{2}\in C_{b}(\mathbb{R})\atop\scriptstyle h\in H}} \sup_{{\scriptstyle \tau_{1}\in\mathrm{RST}(\mu_{1})\atop\scriptstyle {\scriptstyle (\tau _{1},\tau_{2}) \in\mathrm{RT}_{2} \atop\scriptstyle \mathbb{E}_{\bar{{\mathbb{W}}}}[\zeta_{\tau _{2}}^{2}]\leq V_{2}}}} \ \mathbb{E}_{\bar{{\mathbb{W}}}}[ \gamma_{\psi_{2},h}(\omega,\tau_{1},\tau_{2}) ], \end{aligned}$$

where we set

$$\begin{aligned} \gamma_{\psi_{2},h}(\omega,t_{1},t_{2}) & :=\gamma\circ r_{2}( \omega,t_{1},t_{2}) + h(\omega,t_{2}) - \psi_{2}\big(\omega(t_{2}) \big) + \int\psi_{2} \,\mathrm{d}\mu_{2} \end{aligned}$$

which is in \(C_{b}(C(\mathbb{R}_{+}) \times\Delta_{2})\). Applying Proposition 4.5, we get

$$\begin{aligned} \begin{aligned} &\sup_{(\tau_{1},\tau_{2})\in\mathrm{RST}_{2}(\mu_{1},\mu_{2})} \mathbb{E}_{\bar{{\mathbb{W}}}}[\gamma\circ r_{2}(\omega,\tau_{1}, \tau_{2})] \\ &\quad =\inf_{{\scriptstyle \psi_{2}\in C_{b}(\mathbb{R}) \atop\scriptstyle h\in H}} \ \inf \left\{ \textstyle \int\psi_{1} \,\mathrm{d}\mu_{1}: \textstyle\begin{array}{l} \psi_{1}\in C_{b}(\mathbb{R}) \mbox{ such that }\exists m\in C_{b}(C( \mathbb{R}_{+}))\mbox{ with} \\ \mathbb{E}_{{\mathbb{W}}}[m]=0, \exists\alpha_{1},\alpha_{2} \geq0\mbox{ with} \\ m(\omega) + \psi_{1}(\omega(t_{1})) - \sum_{i=1}^{2}\alpha_{i}(V _{i}-\zeta^{i}_{t_{i}}(\omega)) \\ \quad\geq\gamma_{\psi_{2},h}(\omega,t_{1},t_{2}) \end{array}\displaystyle \right\} . \end{aligned} \end{aligned}$$

Take \(m\), \(\psi_{1}\), \(\alpha_{1}\), \(\alpha_{2}\) satisfying

$$ m(\omega) + \psi_{1}\big(\omega(t_{1})\big) - \sum_{i=1}^{2}\alpha _{i}\big(V_{i}-\zeta^{i}_{t_{i}}(\omega)\big) \geq\gamma_{\psi_{2},h}( \omega,t_{1},t_{2}). $$
(4.3)

Observe that \(\mathbb{E}_{{\mathbb{W}}}[f(t)(g-\mathbb{E}_{{\mathbb{W}}}[g|\mathcal{F}_{u}])|\mathcal{F}_{t}]=0\) whenever \(\operatorname{supp}f \subseteq[0,u]\); indeed, either \(f(t)=0\), or \(t\leq u\) and the claim follows from the tower property. Fixing \(t_{1}\) and \(t_{2}\), inequality (4.3) can be read as an inequality between functions of \(\omega\). Hence, taking conditional expectations with respect to \(\mathcal{F}_{t_{2}}\) in the sense of Definition 3.3 and using the optionality of \(\gamma\) yields

$$ \mathbb{E}_{{\mathbb{W}}}[m|\mathcal{F}_{t_{2}}](\omega) + \sum_{i=1} ^{2}\psi_{i}\big(\omega(t_{i})\big) - \int\psi_{2}\,\mathrm{d}\mu _{2} - \sum_{i=1}^{2}\alpha_{i}\big(V_{i}-\zeta^{i}_{t_{i}}(\omega) \big) \geq\gamma\circ r_{2}(\omega,t_{1},t_{2}). $$

Hence,

$$\begin{aligned} & \sup_{(\tau_{1},\tau_{2})\in\mathrm{RST}_{2}(\mu_{1},\mu_{2})} \mathbb{E}_{\bar{{\mathbb{W}}}}[\gamma\circ r_{2}(\omega,\tau_{1}, \tau_{2})] \\ &\quad \geq\inf_{\psi_{2}\in C_{b}(\mathbb{R})} \ \inf\left\{ \textstyle \int\psi_{1} \,\mathrm{d}\mu_{1}+\int\psi_{2} \,\mathrm{d}\mu_{2}: \textstyle\begin{array}{l} \exists\varUpsilon\mbox{-continuous martingale }M\mbox{, } M_{0}=0, \\ \exists\psi_{1}\in C_{b}(\mathbb{R}), \exists\alpha_{1},\alpha_{2}\geq0 \mbox{ with} \\ \sum_{i=1}^{2}\psi_{i}(\omega(t_{i}))+M_{t_{2}}(\omega) \\ \quad- \sum_{i=1}^{2}\alpha_{i}(V_{i}-\varphi_{i}(\omega(t_{i})) \\ \quad \phantom{- \sum_{i=1}^{2}} + \varphi_{i}(\omega(t_{i}))-\zeta_{t_{i}}^{i}(\omega)) \\ \quad\geq\gamma\circ r_{2}(\omega,t_{1},t_{2}) \end{array}\displaystyle \right\} \\ &\quad = \inf_{(\psi_{1},\psi_{2})\in\mathcal{E}_{1}\times\mathcal{E}_{2}} \left\{ \textstyle \int\psi_{1} \,\mathrm{d}\mu_{1}+\int\psi_{2} \,\mathrm{d}\mu_{2}: \textstyle\begin{array}{l} \exists\varUpsilon\mbox{-continuous martingales }M^{i} \\ \mbox{with }M^{i}_{0} =0\mbox{ such that} \\ \sum_{i=1}^{2}(\psi_{i}(\omega(t_{i}))+M^{i}_{t_{i}}(\omega)) \\ \quad\geq\gamma\circ r_{2}(\omega,t_{1},t_{2}) \end{array}\displaystyle \right\} \\ &\quad = \ D^{*}_{2}, \end{aligned}$$

where in the final step, we used the fact that \(\mathbb{E}_{\bar{ {\mathbb{W}}}}[\varphi_{i}(B_{\tau_{i}})]=\mathbb{E}_{ \bar{{\mathbb{W}}}}[\zeta_{\tau_{i}}^{i}]\), \(\int\varphi_{i} \, \, \mathrm{d}\mu_{i} =V_{i}\), \(\varphi_{i}(B_{0})=0\), and that \(\varphi_{i}(B)-\zeta^{i}\) is a martingale.
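To spell out the absorption used in the final step, note that for each \(i\) and all \((\omega,t_{i})\),

$$ -\alpha_{i}\big(V_{i}-\zeta^{i}_{t_{i}}(\omega)\big) = \alpha_{i}\Big(\varphi_{i}\big(\omega(t_{i})\big)-V_{i}\Big) - \alpha_{i}\Big(\varphi_{i}\big(\omega(t_{i})\big)-\zeta^{i}_{t_{i}}(\omega)\Big). $$

The first summand can be absorbed into \(\psi_{i}\) without changing \(\int\psi_{i}\,\mathrm{d}\mu_{i}\), since \(\int\varphi_{i}\,\mathrm{d}\mu_{i}=V_{i}\); the second summand can be absorbed into the martingale part, since \(\varphi_{i}(B)-\zeta^{i}\) is a martingale starting at \(0\). Concretely, one may take \(M^{2}:=M-\alpha_{2}(\varphi_{2}(B)-\zeta^{2})\) and \(M^{1}:=-\alpha_{1}(\varphi_{1}(B)-\zeta^{1})\); we leave aside here the verification that these belong to the precise martingale class fixed in the Convention.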

For later use, we write

$$ D(\gamma):= \left\{ (\psi_{1},\psi_{2}) \in\mathcal{E}_{1}\times \mathcal{E}_{2}: \textstyle\begin{array}{l} \exists\varUpsilon\mbox{-continuous martingales }M^{i} \\ \mbox{with }M^{i}_{0} =0\mbox{ such that} \\ \sum_{i=1}^{2}(\psi_{i}(\omega(t_{i}))+M^{i}_{t_{i}}(\omega)) \geq\gamma\circ r_{2}(\omega,t_{1},t_{2}) \end{array}\displaystyle \right\} . $$

We now consider the case where \(n=2\), \(|I|=1\) and \(I = \{2\}\); so we are prescribing \(\mu_{2}\) but not \(\mu_{1}\). Writing \(\rho\preceq\nu\) to denote that \(\rho\) precedes \(\nu\) in convex order, we use the result of the case where \(|I|=2\) to see that

$$\begin{aligned} P^{*}_{2} &= \sup_{(\tau_{1},\tau_{2})\in\mathrm{RST}_{2}(\mu_{2})} \mathbb{E}_{\bar{{\mathbb{W}}}}[\gamma\circ r_{2}(\omega,\tau_{1}, \tau_{2})] \\ & = \sup_{\mu_{1} \preceq\mu_{2}} \sup_{(\tau_{1},\tau_{2})\in\mathrm{RST}_{2}(\mu_{1},\mu_{2})} \mathbb{E}_{\bar{{\mathbb{W}}}}[\gamma\circ r_{2}(\omega,\tau_{1}, \tau_{2})] \\ & = \sup_{\mu_{1} \preceq\mu_{2}} \inf_{(\psi_{1},\psi_{2}) \in D(\gamma)}\left\{ \int\psi_{1} \, \mathrm{d}\mu_{1}+\int\psi_{2} \,\mathrm{d}\mu_{2}\right\} . \end{aligned}$$

We now need to introduce some additional compactness. Recall from the definitions of \(\varphi_{i}\) that \(\varphi_{2}/\varphi_{1} \to\infty \) as \(x \to\pm\infty\). Now let \(\varepsilon>0\) and write

$$ D^{\varepsilon}(\gamma^{\varepsilon}):= \left\{ (\psi_{1}^{\varepsilon },\psi_{2}) : \textstyle\begin{array}{l} \psi_{1}^{\varepsilon}+ \varepsilon\varphi_{2} \in\mathcal{E}_{1}, \psi_{2} \in\mathcal{E}_{2}, \mbox{ and} \\ \exists\varUpsilon\mbox{-continuous martingales }M^{i}\mbox{ with }M^{i}_{0}=0\mbox{ such that} \\ \psi_{1}^{\varepsilon}(\omega(t_{1})) + \psi_{2}(\omega(t_{2})) + \sum_{i=1}^{2} M^{i}_{t_{i}}(\omega) \\ \quad\geq\gamma^{\varepsilon}\circ r_{2}(\omega,t_{1},t_{2}) \end{array}\displaystyle \right\} . $$

In particular, we have \((\psi_{1},\psi_{2}) \in D(\gamma)\) iff \((\psi_{1}-\varepsilon\varphi_{2},\psi_{2}) \in D^{\varepsilon}( \gamma-\varepsilon\varphi_{2}(\omega(t_{1})))\), and so (with \(\psi_{1}^{\varepsilon}= \psi_{1}-\varepsilon\varphi_{2}\), \(\gamma^{\varepsilon}= \gamma-\varepsilon\varphi_{2}(\omega(t_{1}))\))

$$\begin{aligned} & \inf_{(\psi_{1},\psi_{2}) \in D(\gamma)}\bigg\{ \int\psi_{1} \, \mathrm{d}\mu_{1}+\int\psi_{2} \,\mathrm{d}\mu_{2}\bigg\} \\ &\quad = \inf_{(\psi_{1}^{\varepsilon},\psi_{2}) \in D^{\varepsilon}( \gamma^{\varepsilon})}\bigg\{ \int(\psi_{1}^{\varepsilon}+\varepsilon \varphi_{2}) \,\mathrm{d}\mu_{1}+\int\psi_{2} \,\mathrm{d}\mu_{2} \bigg\} \\ &\quad = \inf_{(\psi_{1}^{\varepsilon},\psi_{2}) \in D^{\varepsilon}( \gamma^{\varepsilon})}\bigg\{ \int\psi_{1}^{\varepsilon}\, \mathrm{d}\mu_{1}+\int\psi_{2} \,\mathrm{d}\mu_{2}\bigg\} + \varepsilon \int\varphi_{2}\, \mu_{1}(\mathrm{d}x). \end{aligned}$$
(4.4)

In particular, the final integral can be bounded uniformly over the set of \(\mu_{1} \preceq\mu_{2}\), so by taking \(\varepsilon>0\) small, this term can be made arbitrarily small. Moreover, dropping it turns the equality in (4.4) into the inequality ≥.

If we introduce the set

$$ \mathrm{CV}: = \{c:\mathbb{R}\to\mathbb{R}: c \mbox{ convex, }c \ge0\mbox{, }c \in C^{2}\mbox{, }c(x) \le L(1+|x|)\mbox{ for some }L \ge0\}, $$

then we may test the convex ordering property by penalising against \(\mathrm{CV}\). In particular, we can write, after another application of Theorem 4.4,

$$\begin{aligned} P^{*}_{2}&\ge \inf_{(\psi_{1}^{\varepsilon},\psi_{2}) \in D^{\varepsilon}( \gamma^{\varepsilon})} \ \sup_{\mu_{1} \preceq\mu_{2}} \left\{ \int\psi_{1}^{\varepsilon}\,\mathrm{d}\mu_{1}+\int\psi_{2} \, \mathrm{d}\mu_{2}\right\} \\ & = \inf_{(\psi_{1}^{\varepsilon},\psi_{2}) \in D^{\varepsilon}( \gamma^{\varepsilon})} \sup_{\mu_{1}} \inf_{\substack{c \in \mathrm{CV}}} \left\{ \int(\psi_{1}^{\varepsilon}-c) \,\mathrm{d}\mu _{1}+\int(\psi_{2}+c) \,\mathrm{d}\mu_{2}\right\} . \end{aligned}$$
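The penalisation by \(\mathrm{CV}\) works in the standard way (a sketch). Since \(\mathrm{CV}\) is a convex cone containing \(0\), for each fixed \(\mu_{1}\) we may split off the penalty term:

$$ \inf_{c\in\mathrm{CV}} \bigg\{ \int(\psi_{1}^{\varepsilon}-c) \,\mathrm{d}\mu_{1}+\int(\psi_{2}+c) \,\mathrm{d}\mu_{2}\bigg\} = \int\psi_{1}^{\varepsilon}\,\mathrm{d}\mu_{1}+\int\psi_{2}\,\mathrm{d}\mu_{2} + \inf_{c\in\mathrm{CV}}\bigg( \int c\,\mathrm{d}\mu_{2}-\int c\,\mathrm{d}\mu_{1}\bigg) . $$

If \(\mu_{1}\preceq\mu_{2}\), the last infimum equals \(0\): every \(c\in\mathrm{CV}\) is convex, so \(\int c\,\mathrm{d}\mu_{1}\le\int c\,\mathrm{d}\mu_{2}\), with equality for \(c\equiv0\). If on the other hand some \(c\in\mathrm{CV}\) satisfies \(\int c\,\mathrm{d}\mu_{1}>\int c\,\mathrm{d}\mu_{2}\), then replacing \(c\) by \(\lambda c\) and letting \(\lambda\to\infty\) shows that the infimum is \(-\infty\). In this way the convex ordering constraint is encoded in the inner infimum.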

In addition, for fixed \((\psi^{\varepsilon}_{1},\psi_{2}) \in D^{\varepsilon}(\gamma^{\varepsilon})\), we observe that, since \(\psi^{\varepsilon}_{1}+\varepsilon\varphi_{2} \in\mathcal{E}_{1}\), we must have \(\psi^{\varepsilon}_{1}(x) \to-\infty\) as \(x \to\pm\infty\). Hence, we can find a constant \(K\), which may depend on \(\psi_{1}^{\varepsilon}\), such that \(\psi^{\varepsilon}_{1}(x) <\psi_{1}^{\varepsilon}(0)\) for all \(x \notin[-K,K]\). In particular, we may restrict the supremum over measures \(\mu_{1}\) above to the set of probability measures \(\mathcal{P}_{K}:= \{\mu: \mu([-K,K]^{c}) = 0 \}\), where \(A^{c}\) denotes the complement of the set \(A\). Note that \(\mathcal{P}_{K}\) is compact, so we can apply Theorem 4.4 once more to get

$$\begin{aligned} &\inf_{(\psi_{1}^{\varepsilon},\psi_{2}) \in D^{\varepsilon}( \gamma^{\varepsilon})} \ \sup_{\mu_{1} \preceq\mu_{2}} \bigg\{ \int\psi_{1}^{\varepsilon}\,\mathrm{d}\mu_{1}+\int\psi_{2} \, \mathrm{d}\mu_{2}\bigg\} \\ &\quad = \inf_{(\psi_{1}^{\varepsilon},\psi_{2}) \in D^{\varepsilon}( \gamma^{\varepsilon})} \inf_{\substack{c \in\mathrm{CV}}} \ \sup_{\mu_{1}\in\mathcal{P}_{K}}\ \bigg\{ \int(\psi_{1}^{\varepsilon }-c) \,\mathrm{d}\mu_{1}+\int(\psi_{2}+c) \,\mathrm{d}\mu_{2}\bigg\} \\ &\quad = \inf_{(\psi_{1}^{\varepsilon},\psi_{2}) \in D^{\varepsilon}( \gamma^{\varepsilon})} \ \inf_{\substack{c \in\mathrm{CV}}} \ \bigg\{ \sup_{x \in[-K,K]} \big(\psi_{1}^{\varepsilon}(x)-c(x) \big) +\int(\psi_{2}+c) \,\mathrm{d}\mu_{2}\bigg\} . \end{aligned}$$

In particular, for any \(\delta>0\), we can find \((\psi_{1}^{\varepsilon },\psi_{2}) \in D^{\varepsilon}(\gamma^{\varepsilon})\) and \(c \in\mathrm{CV}\) such that

$$ P^{*}_{2} \ge\sup_{x \in\mathbb{R}} \big(\psi_{1}^{\varepsilon}(x)-c(x) \big) +\int(\psi_{2}+c) \,\mathrm{d}\mu_{2} - \delta. $$

Take \(\psi_{2}^{\varepsilon}(\omega(t_{2})) := \sup_{x \in \mathbb{R}} (\psi_{1}^{\varepsilon}(x)-c(x)) + \psi_{2}(\omega(t _{2}))+c(\omega(t_{2})) +\varepsilon\varphi_{2}(\omega(t_{2}))\). Then there exist \(M^{1}\), \(M^{2}\) such that

$$\begin{aligned} \gamma^{\varepsilon}\circ r_{2}(\omega,t_{1},t_{2}) & \le\psi_{1} ^{\varepsilon}\big(\omega(t_{1})\big) + \psi_{2}\big(\omega(t_{2}) \big) + \sum_{i=1}^{2} M^{i}_{t_{i}}(\omega) \\ & = \psi_{2}^{\varepsilon}\big(\omega(t_{2})\big) + \sum_{i=1}^{2} M^{i}_{t_{i}}(\omega)-\varepsilon\varphi_{2}\big(\omega(t_{2}) \big)-c\big(\omega(t_{2})\big) + c\big(\omega(t_{1})\big) \\ & \phantom{=:} + \psi_{1}^{\varepsilon}\big(\omega(t_{1})\big)- c\big(\omega(t _{1})\big) - \sup_{x \in\mathbb{R}} \big(\psi_{1}^{\varepsilon}(x)-c(x) \big). \end{aligned}$$

Hence,

$$\begin{aligned} \gamma\circ r_{2}(\omega,t_{1},t_{2}) & \le\psi_{2}^{\varepsilon} \big(\omega(t_{2})\big) + \sum_{i=1}^{2} M^{i}_{t_{i}}(\omega)+ \varepsilon\Big(\varphi_{2}\big(\omega(t_{1})\big) - \varphi_{2} \big(\omega(t_{2})\big)\Big) \\ & \phantom{=:} -c\big(\omega(t_{2})\big) + c\big(\omega(t_{1})\big) \\ & = \psi_{2}^{\varepsilon}\big(\omega(t_{2})\big) + \sum_{i=1}^{2} M^{i}_{t_{i}}(\omega) \\ & \phantom{=:} + \varepsilon\bigg( \Big(\varphi_{2}\big(\omega(t_{1})\big)- \zeta_{t_{1}}^{2}\Big) - \Big(\varphi_{2}\big(\omega(t_{2})\big)- \zeta_{t_{2}}^{2}\Big)\bigg) + \varepsilon(\zeta_{t_{1}}^{2} - \zeta_{t_{2}}^{2}) \\ & \phantom{=:} + \Big(c\big(\omega(t_{1})\big)-\zeta_{t_{1}}^{c}\Big) - \Big(c \big(\omega(t_{2})\big)-\zeta_{t_{2}}^{c}\Big) + (\zeta_{t_{1}}^{c} - \zeta_{t_{2}}^{c}). \end{aligned}$$

Since \(\zeta^{2}\) is an increasing process compensating \(\varphi_{2}\), we get \(\zeta^{2}_{t_{2}}-\zeta^{2}_{t_{1}} \ge0\) whenever \(t_{1} \le t_{2}\). Similarly, \(\zeta^{c}\) is the increasing process compensating \(c\), and the same argument applies. Note that \(\zeta^{c}\) is \(\varUpsilon\)-continuous since \(c\) is assumed to be in \(C^{2}\). It follows that \((\psi_{1}^{\varepsilon},\psi_{2}) \in D^{\varepsilon}(\gamma^{\varepsilon})\) implies \(\psi_{2}^{\varepsilon}\in D'(\gamma)\), where

$$ D'(\gamma):= \bigg\{ \psi_{2} \in\mathcal{E}_{2}: \textstyle\begin{array}{l} \exists\varUpsilon\mbox{-continuous martingales }M^{i}\mbox{ with }M^{i}_{0} =0\mbox{ such that} \\ \psi_{2}(\omega(t_{2})) +\sum_{i=1}^{2} M^{i}_{t_{i}}(\omega) \geq\gamma\circ r_{2}(\omega,t_{1},t_{2}) \end{array}\displaystyle \bigg\} . $$
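For concreteness, one admissible choice realising this implication is the following (a sketch; we do not verify here that the modified processes belong to the precise martingale class fixed in the Convention):

$$ \tilde{M}^{1}_{t}:=M^{1}_{t}+\varepsilon\big(\varphi_{2}(B_{t})-\zeta^{2}_{t}\big)+\big(c(B_{t})-\zeta^{c}_{t}-c(0)\big), \qquad \tilde{M}^{2}_{t}:=M^{2}_{t}-\varepsilon\big(\varphi_{2}(B_{t})-\zeta^{2}_{t}\big)-\big(c(B_{t})-\zeta^{c}_{t}-c(0)\big). $$

Both processes start at \(0\) (recall that \(B_{0}=0\), \(\varphi_{2}(B_{0})=0\) and \(\zeta^{2}_{0}=\zeta^{c}_{0}=0\)), and dropping the nonpositive terms \(\varepsilon(\zeta^{2}_{t_{1}}-\zeta^{2}_{t_{2}})\) and \(\zeta^{c}_{t_{1}}-\zeta^{c}_{t_{2}}\) in the decomposition of \(\gamma\circ r_{2}\) above gives

$$ \gamma\circ r_{2}(\omega,t_{1},t_{2})\le\psi_{2}^{\varepsilon}\big(\omega(t_{2})\big)+\tilde{M}^{1}_{t_{1}}(\omega)+\tilde{M}^{2}_{t_{2}}(\omega) \quad\mbox{on } C(\mathbb{R}_{+})\times\Delta_{2}, $$

that is, \(\psi_{2}^{\varepsilon}\in D'(\gamma)\).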

It follows by making \(\varepsilon\), \(\delta\) small that

$$\begin{aligned} P^{*}_{2} \ge\inf_{\psi_{2} \in D'(\gamma)} \int\psi_{2} \,\mathrm{d}\mu_{2}, \end{aligned}$$

and as usual, the inequality in the other direction is easy.

To establish the claim in the general case, we now successively introduce further constraints, together with the corresponding Lagrange multipliers, and apply either only the first argument or both the first and the second argument above to prove the full claim. □

To conclude, we can follow the reasoning of Sect. 3, more precisely Step 1 and Step 3, and obtain the following robust superhedging result.

Theorem 4.6

Suppose that \(n\in\mathbb{N}\), \(I\subseteq\{1,\ldots, n\}\), \(n\in I\), that \(\mu_{i}\) is a centered probability measure on \(\mathbb{R}\) for each \(i\in I\), and let \(G\colon C_{\mathrm{qv}}[0,n] \to\mathbb{R}\) be of the form

$$\begin{aligned} G(\omega)=\gamma\big( \operatorname{\mathfrak{t}}(\omega)|_{[0, \langle\omega\rangle_{n}] }, \langle\omega\rangle_{1}, \ldots, \langle\omega\rangle_{n}\big), \end{aligned}$$
(4.5)

where \(\gamma\) is \(\varUpsilon_{n}\)-upper semi-continuous and bounded from above. Let us define

$$\begin{aligned} P_{n}:=\sup\left\{ {\mathbb{E}}_{\mathbb{P}}[G]: \textstyle\begin{array}{l} \mathbb{P}\textit{ is a martingale measure on } C[0,n], \\ S_{0}=0,\, S_{i}\sim\mu_{i}\textit{ for all }i\in I \end{array}\displaystyle \right\} \end{aligned}$$

and

$$\begin{aligned} D_{n}:=\inf\left\{ a: \textstyle\begin{array}{l} \exists c>0, f \in C^{\infty}(\mathbb{R},\mathbb{R}) \textit{ such that } |f| /(1+\varphi_{n}) \textit{ is bounded}, \\ (H^{m})_{m \in\mathbb{N}} \subseteq\mathcal{Q}_{f,c} \textit{ and }(\psi_{j})_{j\in I}\textit{ with }\int\psi_{j} \,\mathrm{d}\mu _{j}=0\textit{ such that} \\ a+\sum_{j\in I} \psi_{j}(S_{j}(\omega)) + \liminf_{m\to\infty}(H ^{m}\cdot S)_{n}(\omega) \geq G(\omega), \\ \forall\omega\in C_{\mathrm{qv}}[0,n] \end{array}\displaystyle \right\} , \end{aligned}$$

where for \(f \in C^{2}(\mathbb{R},\mathbb{R})\), we set

$$ \mathcal{Q}_{f,c} :=\left\{ H: \textstyle\begin{array}{l} H \textit{ is a simple strategy and} \\ (H \cdot S)_{t}(\omega) \ge-c - \zeta^{f}_{t}(\omega), \forall( \omega,t) \in C_{\mathrm{qv}}[0,n] \times[0,n] \end{array}\displaystyle \right\} . $$

Under the above assumptions, we have \(P_{n}=D_{n}\).

Finally, we note that Theorem 4.6 could be further extended based on the above arguments. For example, we could include additional market information on prices of further options of the invariant form (4.5).