In the following we suppose that \((\Omega ,\mathcal {G},(\mathcal {G}_t)_{t\ge 0},\mathbb {P})\) is a stochastic basis which is sufficiently rich to support a Brownian motion B and a uniformly distributed \(\mathcal {G}_0\)-random variable. We suppose that \(\gamma : S\rightarrow \mathbb {R}\) is a Borel measurable function. In a slight abuse of notation we will also write \((\gamma _t)_{t\in \mathbb {R}_+}\) for the process given by
$$\begin{aligned} t\mapsto \gamma ((B_s)_{s\le t}, t). \end{aligned}$$
In the previous section we considered a secondary optimization problem and a version of the monotonicity principle (Theorem 5.16) accounting for this extension. We now give a brief summary in probabilistic terms.
Write \(\mathsf {Opt}_\gamma \) for the set of \(\mathcal {G}\)-stopping times on \(\Omega \) which are optimizers of (OptSEP) and consider another Borel function \({\tilde{\gamma }}:S\rightarrow \mathbb {R}\). We call \({{\hat{\tau }}}\in \mathsf {Opt}_\gamma \) a secondary minimizer if it solves
$$\begin{aligned} P_{{\tilde{\gamma }}|\gamma }=\inf \{\mathbb {E}\left[ {\tilde{\gamma }}_\tau \right] : \tau \in \mathsf {Opt}_\gamma \}. \qquad (\hbox {OptSEP}_2) \end{aligned}$$
As in (5.20) we say that \((\hbox {OptSEP}_2)\) is well posed if the primary optimization problem (OptSEP) is well posed and \(\mathbb {E}\left[ {\tilde{\gamma }}_\tau \right] \) exists with values in \((-\infty ,\infty ]\) for all \(\tau \in \mathsf {Opt}_\gamma \) and is finite for at least one such \(\tau \). Then we have the following version of Theorems 1.1 and 4.1:
Theorem 6.1
Let \(\gamma , \tilde{\gamma }:S\rightarrow \mathbb {R}\) be lsc and bounded from below in the sense of (4.2). Then \(\mathrm{({OptSEP}_2)}\) admits a minimizer \({\hat{\tau }}\).
We now provide the appropriate generalizations of Definitions 1.4 and 1.5 and Theorem 1.3 for this case.
Definition 6.2
The pair \(\big ((f,s), (g,t)\big )\in S\times S\) constitutes a secondary stop-go pair, written \(\big ((f,s), (g,t)\big )\in \mathsf {SG}_2\), iff \(f(s)=g(t)\), and for every
\((\mathcal {F}^B_t)_{t \ge 0}\)-stopping time \(\sigma \) which satisfies \(0< \mathbb {E}[\sigma ] < \infty \),
$$\begin{aligned} \mathbb {E}\big [\big (\gamma ^{(f,s)\oplus }\big )_{\sigma }\big ] + \gamma (g,t) \ge \gamma (f,s) \ +\ \mathbb {E}\big [\big (\gamma ^{(g,t)\oplus }\big )_{\sigma }\big ], \end{aligned}$$
(6.1)
whenever both sides are well defined, and the left-hand side is finite; and if
$$\begin{aligned} \mathbb {E}\big [\big (\gamma ^{(f,s)\oplus }\big )_{\sigma }\big ]\ +\ \gamma (g,t) = \gamma (f,s) \ +\ \mathbb {E}\big [\big (\gamma ^{(g,t)\oplus }\big )_{\sigma }\big ] \end{aligned}$$
(6.2)
then
$$\begin{aligned} \mathbb {E}\big [\big ({\tilde{\gamma }}^{(f,s)\oplus }\big )_{\sigma }\big ]\ +\ {\tilde{\gamma }}(g,t)> {\tilde{\gamma }}(f,s) \ +\ \mathbb {E}\big [\big ({\tilde{\gamma }}^{(g,t)\oplus }\big )_{\sigma }\big ], \end{aligned}$$
(6.3)
whenever both sides are well defined and the left-hand side of (6.3) is finite.
Definition 6.3
We say that \(\Gamma \) is \( {\tilde{\gamma }}| \gamma \)-monotone if
$$\begin{aligned} {\mathsf {SG}_2}\cap \big ( \Gamma ^< \times \Gamma \big )=\emptyset .\end{aligned}$$
(6.4)
From Theorem 5.16 together with a trivial modification of Lemma 5.4 we then obtain:
Theorem 6.4
(Monotonicity Principle II) Let \(\gamma , {\tilde{\gamma }}:S\rightarrow \mathbb {R}\) be Borel measurable, suppose that \(\mathrm{({OptSEP}_2)}\) is well posed and that \({\hat{\tau }}\) is an optimizer. Then there exists a \({\tilde{\gamma }}| \gamma \)-monotone Borel set \(\Gamma \subseteq S\) such that \(\mathbb {P}\)-a.s.
$$\begin{aligned} ((B_t)_{t\le {\hat{\tau }}},{\hat{\tau }})\in \Gamma . \end{aligned}$$
(6.5)
Recovering classical embeddings
In this section we derive a number of classical embeddings and establish new ones. Figure 4 shows graphical representations of some of these constructions. We highlight the common feature of all these pictures: when plotted in an appropriate phase space, the stopping time is the hitting time of a barrier type set. Identifying the appropriate phase space, and determining the exact structure of the barrier, will be the key step in deriving the solutions to (SEP) in this section.
For subsequent use, it will be helpful to write, for \((f,s) \in S\), \(\bar{f} = \sup _{r \le s} f(r)\), \(\underline{f} = \inf _{r \le s} f(r)\) and \(|f|^* = \sup _{r \le s} |f(r)|\).
Theorem 6.5
(The Azéma–Yor embedding, cf. [4]) There exists a stopping time \(\tau _{AY}\) which maximizes
$$\begin{aligned} \mathbb {E}\left[ \sup _{t \le \tau } B_t\right] \end{aligned}$$
over all solutions to (SEP) and which is of the form \(\tau _{AY} = \inf \{ t > 0: B_t \le \psi (\sup _{s \le t} B_s)\}\) a.s., for some increasing function \(\psi \).
Proof
Fix a bounded and strictly increasing continuous function \(\varphi :\mathbb {R}_+\rightarrow \mathbb {R}\) and consider the continuous functions \(\gamma ((f,s)) = -\bar{f}\) and \({\tilde{\gamma }}((f,s)) = \varphi (\bar{f})(f(s))^2\). Then \((\hbox {OptSEP}_2)\) is well posed and by Theorem 6.1 there exists a minimizer \(\tau _{AY}\). By Theorem 6.4, pick a \({\tilde{\gamma }}|\gamma \)-monotone set \(\Gamma \subseteq S\) supporting \(\tau _{AY}\). We claim that
$$\begin{aligned} {\mathsf {SG}_2}\supseteq \{((f,s),(g,t))\in S\times S: g(t)=f(s), \bar{g} <\bar{f} \}. \end{aligned}$$
(6.6)
This is represented graphically in Fig. 5.
Indeed, pick \(((f,s),(g,t))\in S\times S\) with \(f(s)=g(t)\) and \( \bar{g}< \bar{f}\) and a stopping time \(\sigma \) with positive and finite expectation. Then (6.1) amounts to
$$\begin{aligned} \mathbb {E}\big [{\bar{f}} \vee (f(s)+{\bar{B}}_\sigma )\big ] + \bar{g} \le \bar{f} + \mathbb {E}\big [{\bar{g}} \vee (g(t) + {\bar{B}}_\sigma )\big ] \end{aligned}$$
with a strict inequality unless \({\bar{g}} \ge g(t) + {\bar{B}}_\sigma \) a.s. However in that case (6.2) is trivially satisfied and (6.3) amounts to
$$\begin{aligned} \mathbb {E}\big [\varphi ({\bar{f}}) (f(s)+B_\sigma )^2 \big ] + \varphi ({\bar{g}} ) g(t)^2 >\varphi ({\bar{f}} )f(s)^2 + \mathbb {E}\big [\varphi ({\bar{g}}) (g(t)+B_\sigma )^2 \big ] \end{aligned}$$
which holds since \(g(t)=f(s)\). Summing up, \(((f,s),(g,t))\in \mathsf {SG}\subseteq {\mathsf {SG}_2}\) in the former case and \(((f,s),(g,t))\in {\mathsf {SG}_2}\) in the latter case, proving (6.6).
In complete analogy with the derivation of the Root embedding (Theorem 2.1) we define
$$\begin{aligned} {\mathcal {R}}_\textsc {cl}&:= \left\{ (m,x): \exists (g,t) \in \Gamma , \bar{g} \le m, g(t) = x\right\} ,\\ {\mathcal {R}}_\textsc {op}&:= \left\{ (m,x): \exists (g,t) \in \Gamma , \bar{g} < m, g(t) = x \right\} , \end{aligned}$$
and write \(\tau _\textsc {cl}, \tau _\textsc {op}\) for the first times the process \((\bar{B}_t(\omega ),{B}_t(\omega ))\) hits the sets \({\mathcal {R}}_\textsc {cl}\) and \({\mathcal {R}}_\textsc {op}\) respectively. Then we claim \(\tau _\textsc {cl}\le \tau _{AY} \le \tau _\textsc {op}\) a.s. Note that \(\tau _\textsc {cl}\le \tau _{AY}\) holds by definition of \(\tau _\textsc {cl}.\) To show \(\tau _{AY} \le \tau _\textsc {op}\), consider \(\omega \) satisfying \(((B_s(\omega ))_{s\le \tau _{AY}(\omega )},\tau _{AY}(\omega ))\in \Gamma \) and assume for contradiction that \(\tau _\textsc {op}(\omega )<\tau _{AY}(\omega ).\) Then there exists \(s\in [\tau _\textsc {op}(\omega ),\tau _{AY}(\omega ))\) such that \(f:=(B_r(\omega ))_{r\le s}\) satisfies \(({\bar{f}}, f(s))\in {\mathcal {R}}_\textsc {op}\). Since \(s< \tau _{AY}(\omega )\) we have \((f,s)\in \Gamma ^<\). By definition of \({\mathcal {R}}_\textsc {op}\), there exists \((g,t)\in \Gamma \) such that \(f(s)= g(t)\) and \({\bar{g}} < {\bar{f}}\), yielding a contradiction.
Finally, we define
$$\begin{aligned} \psi _0(m) = \sup \{x: (m,x) \in {\mathcal {R}}_\textsc {cl}\}. \end{aligned}$$
It follows from the definition of \({\mathcal {R}}_\textsc {cl}\) that \(\psi _0(m)\) is increasing, and we define the right-continuous function \(\psi _+(m) = \psi _0(m+)\), and the left-continuous function \(\psi _-(m) = \psi _0(m-)\). It follows from the definitions of \(\tau _{\textsc {op}}\) and \(\tau _{\textsc {cl}}\) that:
$$\begin{aligned} \tau _+ := & {} \inf \{t \ge 0: B_t \le \psi _+(\bar{B}_t)\} \le \tau _{\textsc {cl}} \le \tau _{\textsc {op}} \\\le & {} \inf \{t \ge 0: B_t < \psi _-(\bar{B}_t)\} =: \tau _-. \end{aligned}$$
It is then easily checked that \(\tau _- = \tau _+\) a.s., and the result follows on taking \(\psi = \psi _+\). \(\square \)
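To illustrate the form of the Azéma–Yor optimizer numerically (this simulation is our own sketch and plays no role in the proof; the drawdown rule \(\psi (m)=m-1\) is an arbitrary illustrative choice of increasing \(\psi \), not the optimal boundary for any particular \(\mu \)):

```python
import random

def azema_yor_time(increments, psi):
    """First index n with B_n <= psi(max_{k <= n} B_k) along a
    discretized Brownian path built from the given increments;
    None if the barrier is not hit within the sampled horizon."""
    b = running_max = 0.0
    for n, db in enumerate(increments, start=1):
        b += db
        running_max = max(running_max, b)
        if b <= psi(running_max):
            return n
    return None

# Monte Carlo illustration on a random-walk approximation of B.
random.seed(0)
dt = 1e-3
incs = [random.gauss(0.0, dt ** 0.5) for _ in range(200_000)]
n = azema_yor_time(incs, lambda m: m - 1.0)  # stop one unit below the running max
print(n is not None)
```

Since a drawdown of size 1 occurs almost surely in finite time, the rule stops on essentially every simulated path of this length.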
Theorem 6.6
(The Jacka Embedding, cf. [33]) Let \(\varphi :\mathbb {R}_+\rightarrow \mathbb {R}\) be a bounded, strictly increasing right-continuous function. There exists a stopping time \(\tau _{J}\) which maximizes
$$\begin{aligned} \mathbb {E}\left[ \varphi \left( {{\sup }_{t \le \tau } }|B_t|\right) \right] \end{aligned}$$
over all solutions to (SEP), and which is of the form
$$\begin{aligned} \tau _{J} = \inf \left\{ t > 0: B_t \ge \alpha _-\left( {{\sup }_{s \le t}} |B_s|\right) \text { and } B_t \le \alpha _+\left( {{\sup }_{s \le t}} |B_s|\right) \right\} \end{aligned}$$
a.s., for some functions \(\alpha _+, \alpha _-\), where \(\alpha _+\) is increasing, \(\alpha _-\) is decreasing, \(\alpha _+(y) \ge \alpha _-(y)\) for all \(y > y_0\), and \(\alpha _-(y) = -\alpha _+(y) = \infty \) for \(y < y_0\), for some \(y_0 \ge 0\).
Proof
The proof runs along similar lines to that of Theorem 6.5, taking \(\gamma ((f,s)) = -\varphi (|f|^*)\) and \(\tilde{\gamma }((f,s)) = \tilde{\varphi }(|f|^*)(f(s))^2\) for some bounded, strictly increasing, continuous function \(\tilde{\varphi }\). Then the statement follows once we see
$$\begin{aligned} {\mathsf {SG}_2}\supseteq \left\{ ((f,s),(g,t))\in S\times S:f(s)=g(t), |f|^*>|g|^*\right\} , \end{aligned}$$
define
$$\begin{aligned} {\mathcal {R}}_\textsc {cl}&:= \left\{ (m,x): \exists (g,t) \in \Gamma , |g|^* \le m, g(t) = x\right\} \\ {\mathcal {R}}_\textsc {op}&:= \left\{ (m,x): \exists (g,t) \in \Gamma , |g|^* < m, g(t) = x\right\} , \end{aligned}$$
and then take \( \alpha _-(m) = \inf \{ x : (m,x) \in {\mathcal {R}}_\textsc {cl}\} \text { and } \alpha _+(m) = \sup \{x: (m,x) \in {\mathcal {R}}_\textsc {cl}\}. \)
\(\square \)
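The Jacka stopping rule in the phase space \((|B|^*_t, B_t)\) can be simulated along the same lines (again a hedged illustration of our own; the barrier \(y_0=1\), \(\alpha _\pm (y)=\pm y/2\) is an arbitrary choice respecting the monotonicity and the convention \(\alpha _-=-\alpha _+=\infty \) below \(y_0\) from the theorem):

```python
def jacka_time(path, alpha_minus, alpha_plus):
    """First index n >= 1 with alpha_-(sup|B|) <= B_n <= alpha_+(sup|B|),
    for a sampled path with path[0] = 0; None if the region is never hit."""
    abs_max = 0.0
    for n in range(1, len(path)):
        abs_max = max(abs_max, abs(path[n]))
        if alpha_minus(abs_max) <= path[n] <= alpha_plus(abs_max):
            return n
    return None

# Illustrative barrier: no stopping until |B|* reaches y_0 = 1, then stop
# once B returns to within half of the running absolute maximum.
def alpha_plus(y):
    return y / 2 if y >= 1.0 else float("-inf")

def alpha_minus(y):
    return -y / 2 if y >= 1.0 else float("inf")

print(jacka_time([0.0, 0.5, 1.0, 0.4], alpha_minus, alpha_plus))  # 3
```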
Remark 6.7
We observe that both of the above results hold for one-dimensional Brownian motion with an arbitrary starting distribution \(\lambda \) satisfying the usual convex ordering condition.
Theorem 6.8
(The Perkins Embedding, cf. [45]) Suppose \(\mu (\{0\}) = 0\). Let \(\varphi :\mathbb {R}_+^2\rightarrow \mathbb {R}\) be a bounded function which is continuous and strictly increasing in both arguments. There exists a stopping time \(\tau _{P}\) which minimizes
$$\begin{aligned} \mathbb {E}\left[ \varphi \left( \sup _{t \le \tau } B_t,-\inf _{t \le \tau } B_t\right) \right] \end{aligned}$$
over all solutions to (SEP) and which is of the form \(\tau _{P} = \inf \{ t > 0: B_t \not \in (\alpha _+(\bar{B}_t), \alpha _-(\underline{B}_t))\}\), for some decreasing functions \(\alpha _+\) and \(\alpha _-\) which are left- and right-continuous respectively.
Proof
Fix a bounded and strictly increasing continuous function \({\tilde{\varphi }}:\mathbb {R}^2_+\rightarrow \mathbb {R}\) and consider the continuous functions \(\gamma ((f,s)) = \varphi (\bar{f},-\underline{f})\) and \({\tilde{\gamma }}((f,s)) = -(f(s))^2 {\tilde{\varphi }}(\bar{f},-\underline{f})\). Then \((\hbox {OptSEP}_2)\) is well posed and by Theorem 6.1 there exists a minimizer \(\tau _{P}\). By Theorem 6.4, pick a \({\tilde{\gamma }}|\gamma \)-monotone set \(\Gamma \subseteq S\) supporting \(\tau _{P}\). Note that we may assume that \(\Gamma \) only contains points such that \(\underline{g}< 0 < \bar{g}\), since \(\mu (\{0\}) = 0\).
By a similar argument to that given in the proof of Theorem 6.5 we can show
$$\begin{aligned} {\mathsf {SG}_2}\supseteq \{((f,s),(g,t))\in S\times S: f(s)=g(t), (\bar{f}, - \underline{f})< (\bar{g}, -\underline{g})\}, \end{aligned}$$
where \((\bar{f}, - \underline{f})< (\bar{g}, -\underline{g})\) iff \((\bar{f}, - \underline{f})\le (\bar{g}, -\underline{g})\) but not \((\bar{f}, - \underline{f})= (\bar{g}, -\underline{g})\), and \((\bar{f}, - \underline{f})\le (\bar{g}, -\underline{g})\) refers to the partial order on \(\mathbb {R}^2\).
In addition, consider a path \((g,t) \in S\) such that \(\underline{g}< g(t) < \bar{g}\). Then there exists \((f,s) \in S\) such that \(f(r) = g(r)\) for \(r \le s\), such that \(f(s) = g(t)\), and such that exactly one of \(\bar{f} = \bar{g}\) or \(\underline{f} = \underline{g}\) holds. This is true since there must exist a last time at which \(g\) takes the value \(g(t)\) before setting its most recent extremum. In particular, \(((f,s),(g,t)) \in {\mathsf {SG}_2}\). It follows that \(\Gamma \cap \{(g,t): \underline{g}< g(t) < \bar{g}\} = \emptyset \); that is, any stopped path must stop at a minimum or a maximum.
Now consider the sets:
$$\begin{aligned} {\mathcal {R}}_\textsc {cl}&= \left\{ (m,x): \exists (g,t) \in \Gamma , g(t) = x = \underline{g}, \bar{g} \ge m\right\} \\&\quad \cup \left\{ (x,i): \exists (g,t) \in \Gamma , g(t) = x = \bar{g}, \underline{g} \le i\right\} \\&= \underline{\mathcal {R}}_\textsc {cl}\cup \bar{\mathcal {R}}_\textsc {cl}\\ {\mathcal {R}}_\textsc {op}&= \left\{ (m,x): \exists (g,t) \in \Gamma , g(t) = x = \underline{g}, \bar{g} > m\right\} \\&\quad \cup \left\{ (x,i): \exists (g,t) \in \Gamma , g(t) = x = \bar{g}, \underline{g} < i\right\} \\&= \underline{\mathcal {R}}_\textsc {op}\cup \bar{\mathcal {R}}_\textsc {op}, \end{aligned}$$
and their respective hitting times by \((\bar{B}_t,\underline{B}_t)_{t\ge 0}\), denoted \(\tau _\textsc {cl}, \tau _\textsc {op}\). Since \(\Gamma \cap \{(g,t): \underline{g}< g(t) < \bar{g}\} = \emptyset \), it follows that \(\tau _\textsc {cl}\le \tau _P\) a.s. In addition, an essentially identical argument to that used in the proof of Theorem 6.5 gives \(\tau _{P} \le \tau _\textsc {op}\) a.s.
We now set \(\alpha _+(m) = \sup \{x < 0 : (m,x) \in \underline{\mathcal {R}}_\textsc {cl}\}, \alpha _-(i) = \inf \{x > 0 : (x,i) \in \bar{\mathcal {R}}_\textsc {cl}\}.\) Then these functions are both clearly decreasing and left- and right-continuous respectively, by definition of the respective sets \(\underline{\mathcal {R}}_\textsc {cl}, \bar{\mathcal {R}}_\textsc {cl}\). Moreover, it is immediate that
$$\begin{aligned} \tau _\textsc {cl}= \inf \left\{ t > 0: B_t \not \in \left( \alpha _+(\bar{B}_t), \alpha _-(\underline{B}_t)\right) \right\} , \end{aligned}$$
and we deduce that \(\tau _\textsc {cl}= \tau _\textsc {op}\) a.s. by standard properties of Brownian motion. The conclusion follows. \(\square \)
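A discrete sketch of the Perkins rule in the phase space of running extrema may clarify the two-sided barrier (our own illustration; the shifted boundaries \(\alpha _+(m)=-m-\tfrac12\), \(\alpha _-(i)=-i+\tfrac12\) are hypothetical decreasing functions chosen only so the example is non-trivial):

```python
def perkins_time(path, alpha_plus, alpha_minus):
    """First index n >= 1 with B_n outside the open interval
    (alpha_+(running max), alpha_-(running min)); None if never."""
    run_max = run_min = path[0]
    for n in range(1, len(path)):
        run_max = max(run_max, path[n])
        run_min = min(run_min, path[n])
        if not (alpha_plus(run_max) < path[n] < alpha_minus(run_min)):
            return n
    return None

# Illustrative decreasing boundaries: the continuation interval narrows
# from below as the maximum grows and from above as the minimum falls.
a_plus = lambda m: -m - 0.5   # lower boundary, decreasing in the running max
a_minus = lambda i: -i + 0.5  # upper boundary, decreasing in the running min

print(perkins_time([0.0, 0.4, -0.2, 0.7], a_plus, a_minus))  # 3
```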
Theorem 6.9
(Maximizing the range) Let \(\varphi :\mathbb {R}_+^2\rightarrow \mathbb {R}\) be a bounded function which is continuous and strictly increasing in both arguments. There exists a stopping time \(\tau _{xr}\) which maximizes
$$\begin{aligned} \mathbb {E}\left[ \varphi \left( \sup _{t \le \tau } B_t, -\inf _{t \le \tau } B_t\right) \right] \end{aligned}$$
over all solutions to (SEP), and which is of the form \(\tau _{xr} = \inf \{ t > 0: B_t \ge \alpha _-(\bar{B}_t,-\underline{B}_t)\ \text {or}\)
\(\ B_t \le \alpha _+(\bar{B}_t,-\underline{B}_t)\}\) for some right-continuous functions \(\alpha _-(m,i)\) decreasing in both coordinates and \(\alpha _+(m,i)\) increasing in both coordinates.
Proof
Our primary objective will be to minimize \(\gamma ((f,s)) = -\varphi (\bar{f},-\underline{f})\), which is a lsc function on S. We again introduce a secondary minimization problem: specifically, we consider the function \({\tilde{\gamma }}((f,s)) = (f(s))^2 {\tilde{\varphi }}(\bar{f},-\underline{f})\) for some bounded, continuous and strictly increasing function \({\tilde{\varphi }}:\mathbb {R}_+^2\rightarrow \mathbb {R}\). Then \((\hbox {OptSEP}_2)\) is well posed and by Theorem 6.1 there exists a minimizer \(\tau _{xr}\). By Theorem 6.4, pick a \({\tilde{\gamma }}|\gamma \)-monotone set \(\Gamma \subseteq S\) supporting \(\tau _{xr}.\)
By a similar argument to that given in the proof of Theorem 6.5 we can show \({\mathsf {SG}_2}\supseteq \{((f,s),(g,t))\in S\times S: f(s)=g(t), (\bar{f}, - \underline{f})> (\bar{g}, -\underline{g})\}\).
Let \({{\mathrm{conv}}}\) denote the convex hull, and write
$$\begin{aligned} I_{\textsc {cl}}(\bar{b},-\underline{b})&:= {{\mathrm{conv}}}\left\{ x: \exists (g,t) \in \Gamma , g(t) = x, (\bar{g},-\underline{g}) \le (\bar{b},-\underline{b})\right\} , \\ I_{\textsc {op}}(\bar{b},-\underline{b})&:= {{\mathrm{conv}}}\left\{ x: \exists (g,t) \in \Gamma , g(t) = x, (\bar{g},-\underline{g}) <(\bar{b},-\underline{b})\right\} . \end{aligned}$$
Then \(I_{\textsc {cl}}, I_{\textsc {op}}\) are both increasing in both coordinates, and \(I_{\textsc {cl}} \supseteq I_{\textsc {op}}\). Write \(\tau _{\textsc {op}} := \inf \{t \ge 0: B_t \in I_{\textsc {op}}(\bar{B}_t,-\underline{B}_t)\}\), and \(\tau _{\textsc {cl}} := \inf \{t \ge 0: B_t \in I_{\textsc {cl}}(\bar{B}_t,-\underline{B}_t)\}\). As previously, we deduce that \(\tau _{\textsc {cl}} \le \tau _{xr} \le \tau _{\textsc {op}}\). If, in addition, we define
$$\begin{aligned}&\alpha _+(m,i) := \sup I_{\textsc {op}}(m,i)\quad \quad \quad \quad \alpha _-(m,i) := \inf I_{\textsc {op}}(m,i)\\&\alpha _{+,\textsc {cl}}(m,i) := \sup I_{\textsc {cl}}(m,i) \quad \quad \quad \alpha _{-,\textsc {cl}}(m,i) := \inf I_{\textsc {cl}}(m,i) \end{aligned}$$
then \(\alpha _+, \alpha _-\) satisfy the conditions of the theorem, and
$$\begin{aligned} \tau _{\textsc {op}}&= \inf \left\{ t \ge 0 : B_t \ge \alpha _-(\bar{B}_t,-\underline{B}_t) \text{ or } B_t \le \alpha _+(\bar{B}_t,-\underline{B}_t)\right\} \\ \tau _{\textsc {cl}}&= \inf \left\{ t \ge 0 : B_t \ge \alpha _{-,\textsc {cl}}(\bar{B}_t,-\underline{B}_t) \text{ or } B_t \le \alpha _{+,\textsc {cl}}(\bar{B}_t,-\underline{B}_t)\right\} . \end{aligned}$$
To conclude, we need to show that \(\tau _{\textsc {op}} = \tau _{\textsc {cl}}\). To this end, we first observe that \(\tau _{\textsc {op}} \ge \sigma \), and \(\tau _{\textsc {cl}} \ge \sigma _{\textsc {cl}}\), where
$$\begin{aligned} \sigma&:= \inf \left\{ t \ge 0 : \alpha _-(\bar{B}_t,-\underline{B}_t)<\infty \; \text{ or } \; \alpha _+(\bar{B}_t,-\underline{B}_t)>-\infty \right\} \\ \sigma _{\textsc {cl}}&:= \inf \left\{ t \ge 0 : \alpha _{-,\textsc {cl}}(\bar{B}_t,-\underline{B}_t)<\infty \; \text{ or } \; \alpha _{+,\textsc {cl}}(\bar{B}_t,-\underline{B}_t)>-\infty \right\} , \end{aligned}$$
and in fact, \(\sigma = \sigma _{\textsc {cl}}\) a.s. In addition, on \(\{\sigma >0\}\) we have \(B_{\sigma } \in \{\bar{B}_\sigma , \underline{B}_\sigma \}\). On the set \(\{B_\sigma = \bar{B}_\sigma \}\), say, we then have
$$\begin{aligned} \tau _{\textsc {op}}&= \inf \{ t \ge \sigma : B_t \le \alpha _{+}(\bar{B}_t, -\underline{B}_\sigma )\}\\&= \inf \{ t \ge \sigma : B_t \le \alpha _{+,\textsc {cl}}(\bar{B}_t, -\underline{B}_\sigma )\} \quad a.s. \end{aligned}$$
by the same argument as used at the end of the proof of Theorem 6.5, and the fact that \(\alpha _{+}(m+,i) = \alpha _{+,\textsc {cl}}(m,i)\), by the definition of the sets \(I_{\textsc {cl}}, I_{\textsc {op}}\). \(\square \)
Remark 6.10
We observe that, in the case of Theorem 6.9, the characterization provided would not appear to be sufficient to identify the functions \(\alpha _+, \alpha _-\) given the measure \(\mu \). This is in contrast to the constructions of Azéma–Yor, Perkins and Jacka, where knowledge of the form of the embedding is sufficient to identify the corresponding stopping rule.
On a more abstract level, uniqueness of barrier type embeddings in a two dimensional phase space can be seen as a consequence of Loynes’ argument [39]. More precisely, let \(A_t\) be some continuous process and suppose that \(\tau _1\) and \( \tau _2\) denote the times when \((A_t, B_t)\) hits a closed barrier type set \(R_1\) resp. \(R_2\). If \(\mathbb {E}[ \tau _1], \mathbb {E}[\tau _2] < \infty \) and both stopping times embed the same measure, the argument presented in Remark 2.3 shows that \(\tau _1=\tau _2\).
Remark 6.11
In Cox and Obłój [12], embeddings are constructed which maximize certain double-exit probabilities: for example, to maximize the probability that both \(\bar{B}_\tau \ge \bar{b}\) and \(\underline{B}_\tau \le \underline{b}\), for given levels \(\bar{b}\) and \(\underline{b}\). In this case, the embedding is no longer naturally viewed as a barrier type construction; instead, it is natural to characterize the embedding in terms of where the paths with different crossing behaviour for the barriers finish (for example, the paths which only hit the upper level may end up above a certain value, or between two other values). However, it is possible, again using a suitable secondary maximization problem, to show that there exists an optimizer demonstrating the behaviour characterizing the Cox–Obłój embeddings. (Specifically, if we write \(H_{b}((f,s)) = \inf \{t\le s: f(t) = b\}\), \(\underline{H} = H_{\underline{b}} \wedge H_{\bar{b}}\) and \(\bar{H} = H_{\underline{b}}\vee H_{\bar{b}}\), then the secondary maximization problem
$$\begin{aligned} {\tilde{\gamma }}((f,s)) = \tfrac{1}{2} \big ( (f(s)-\underline{H}((f,s)))^2 \mathbbm {1}_ {\underline{H} \le s} - (f(s)-\bar{H}((f,s)))^2 \mathbbm {1}_{\bar{H} \le s} \big ) \end{aligned}$$
is sufficient to rederive the form of these embeddings).
The Vallois-embedding and optimizing functions of local time
In this section we shall determine the stopping rule which solves
$$\begin{aligned} \inf \{\mathbb {E}[h(\mathfrak {L}_\tau )]: \tau \text{ solves } \hbox {(SEP)}\}, \end{aligned}$$
(6.7)
where \(\mathfrak {L}\) denotes the local time of Brownian motion at 0 and h is a convex or concave function. In many ways, the proof of this result will follow the arguments used in the previous section; however, in contrast to the functions considered there, \(h(\mathfrak {L})\) is not defined on S in a straightforward way, and hence we need to apply some care in fixing our notions. Moreover, local time does not have an S-continuous modification, and hence some additional argument is needed to establish that (6.7) admits a minimizer.
We say that a \(\mathcal {G}\)-adapted process \(\mathfrak {L}^x\) is a local time in x if it is a (right-continuous, increasing) compensator of \(|B-x|\) and we suppress x in the case of local time at 0. This determines \(\mathfrak {L}^x\) up to indistinguishability (and clearly the choice of \(\mathfrak {L}^x\) is irrelevant for (6.7)).
For us it is convenient to allow local time to assume the value \(+\infty \) on an evanescent set. Using this convention, Theorem 4.1 implies that there exists a Borel function \(L^x:S\rightarrow [0,\infty ]\) such that \(L^x\circ r \) is a (right-continuous, increasing) \(\mathcal {F}^0\)-predictable local time on Wiener space. We will call such a process \(L^x\) a raw local time in x. We note that the value \(+\infty \) cannot be avoided here, see [42].
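For intuition only: besides the compensator construction used here, local time at 0 admits the classical normalized occupation-time approximation \(\mathfrak {L}_t \approx \frac{1}{2\varepsilon }\,\mathrm {Leb}\{s\le t: |B_s|\le \varepsilon \}\). A minimal discrete sketch (our own discretization, not the raw local time of the text):

```python
def occupation_local_time(path, dt, eps):
    """Normalized occupation time (1/(2*eps)) * Leb{s <= t : |B_s| <= eps}
    for a path sampled on a grid of mesh dt -- a standard approximation
    of the local time of the path at 0."""
    return sum(dt for x in path if abs(x) <= eps) / (2 * eps)

# Three of the four sampled points lie within eps = 0.1 of the origin.
print(occupation_local_time([0.0, 0.05, 0.2, -0.05], dt=0.1, eps=0.1))
```

As \(dt\) and \(\varepsilon \) are sent to 0 at appropriate rates, this quantity converges to the Brownian local time at 0.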
Lemma 6.12
Let L be a raw local time in 0. Then there exists a Borel set \(A\subseteq {C_0(\mathbb {R}_+)}\) with \({\mathbb {W}}(A)=1\) such that for all
$$\begin{aligned} (f,s)\in U = \{(f,s) \in S : \exists \omega \in A, f = (\omega _r)_{r \le s}\} \end{aligned}$$
we have \(L(f,s)<\infty \) and
$$\begin{aligned} (g,t)\mapsto L^{(f,s)}(g,t) := L(f\oplus g, s+t )-L(f,s) \end{aligned}$$
(6.8)
is a raw local time in \(-f(s)\).
Proof
Write V for the set of all (f, s) such that \(L^{(f,s)}\) is not a raw local time. To understand whether \((f,s)\in V\) we need to check whether or not \( (\omega , t)\mapsto |B_{s+t}(f\oplus \omega )|-L^{(f,s)}(r(\omega ,t)) \) defines a martingale. Since this is a Borel property, \(V\subseteq S\) is Borel. Hence
$$\begin{aligned} {{\mathrm{deb}}}(V):=\{\omega : \exists t, r(\omega , t)\in V\} \end{aligned}$$
is analytic and thus universally measurable. To prove that \({\mathbb {W}}({{\mathrm{deb}}}(V))=0\) it is sufficient to show this for any given Borel subset of \({{\mathrm{deb}}}(V)\). Suppose for contradiction that \({\mathbb {W}}(E)>0\) for some Borel set \(E\subseteq {{\mathrm{deb}}}(V)\). By the optional section theorem this implies that there exists an \(\mathcal {F}^a\)-stopping time \(\tau \) such that \({\mathbb {W}}(\tau <\infty )>0\) and \((\omega , \tau (\omega ))\in r^{-1}(V)\) whenever \(\tau (\omega )<\infty \). Upon requiring this only a.s. we may of course assume that \(\tau \) is an \(\mathcal {F}^0\)-stopping time.
Given \(H=G\mathbbm {1}_{[[\tau ,\infty [[}\) for bounded \(\mathcal {F}^0_\tau \)-measurable G, it follows from the usual properties of local time that
$$\begin{aligned} t\mapsto (H\cdot (|B|-L\circ r))_t =G\big [(|B|-L\circ r)_{t}-(|B|-L\circ r)_{\tau \wedge t} \big ] \end{aligned}$$
is a martingale. As G was arbitrary,
$$\begin{aligned} (\omega , t)\mapsto |B_{\tau (\omega ')+t}(\omega '{}_{\upharpoonright [0,\tau (\omega ')]}\oplus \omega )|-L^{(\omega '{}_{\upharpoonright [0,\tau (\omega ')]},\tau (\omega '))}(r(\omega ,t)) \end{aligned}$$
defines a martingale for almost all \(\omega '\) with \(\tau (\omega ')<\infty \), contradicting \({\mathbb {W}}({{\mathrm{deb}}}(V))>0\).
It follows that \({\mathbb {W}}({{\mathrm{deb}}}(V))=0\), hence we may pick a Borel set \(A\subseteq {{\mathrm{deb}}}(V)^c\) with \({\mathbb {W}}(A)=1\) such that (6.8) holds. \(\square \)
Our next goal is to verify that (6.7) admits an optimizer.
Proposition 6.13
Let \(h:[0,\infty ) \rightarrow \mathbb {R}\) be continuous and bounded. Then there exists an optimizer for (6.7). Moreover, if \({\tilde{\gamma }}(f,s) = e^{-L(f,s)}f^2(s)\) or \({\tilde{\gamma }}(f,s) =- e^{-L(f,s)}f^2(s)\), the secondary minimization problem \(\mathrm{({OptSEP}_2)}\) also admits a solution.
Proof
Let L be a raw local time. We first observe that \((\mathfrak {L}_t)_{t \ge 0}:= (L\circ r((B_t)_{t \ge 0},t))_{t \ge 0}\) is (indistinguishable from) the local time of \((B_t)_{t \ge 0}\) on \((\Omega , \mathcal {G}, (\mathcal {G}_t)_{t\ge 0}, \mathbb {P})\). By Lemma 3.11 there exists a sequence \(\xi _1, \xi _2, \xi _3, \ldots \in \mathsf {RST}(\mu )\) such that
$$\begin{aligned} V^* = \lim _n \int h(\mathfrak {L}_t(\omega )) \,\xi _n(d\omega ,dt) = \inf \{\mathbb {E}[h(\mathfrak {L}_\tau )]: \tau \text{ solves } \hbox {(SEP)}\}. \end{aligned}$$
Possibly passing to a subsequence there is \(\xi \in \mathsf {RST}(\mu )\) such that \(\xi = \lim _n \xi _n\).
A result of Jacod and Memin ([34, Corollary 2.9]) asserts
$$\begin{aligned} \int \varphi \, d\xi _n\rightarrow \int \varphi \, d\xi \end{aligned}$$
for any bounded measurable function \(\varphi :{C_0(\mathbb {R}_+)}\times \mathbb {R}_+\rightarrow \mathbb {R}\) such that \(t\mapsto \varphi (\omega ,t)\) is continuous for every \(\omega \in {C_0(\mathbb {R}_+)}\). It follows that \(\int h(\mathfrak {L}_t(\omega )) \,\xi (d\omega ,dt) = V^*\). Moreover (again by Lemma 3.11) there exists a \(\mathcal {G}\)-stopping time \(\tau ^*\) such that \(\mathbb {E}[h(\mathfrak {L}_{\tau ^*})] = \int h(\mathfrak {L}_t(\omega )) \,\xi (d\omega ,dt)\).
The second assertion follows from a similar reasoning, using an approximation argument to handle the unboundedness of \({\tilde{\gamma }}\). \(\square \)
Note added in revision. Guo et al. [27] were able to relax the continuity assumption in our existence and duality results Theorems 1.1 and 1.2. Based on the work of Jacod and Memin [34] they establish these results under the assumption that \(t\mapsto \gamma \circ r(\omega , t)\) is lsc for every \(\omega \in {C_0(\mathbb {R}_+)}\). In particular their results would imply a more general version of Proposition 6.13.
We are now able to show:
Theorem 6.14
Let \(h:[0,\infty ] \rightarrow \mathbb {R}\) be a bounded, strictly concave function and \(\mathfrak {L}\) the local time of B at 0.
(1) There exists a stopping time \(\tau _{V-}\) which maximizes
$$\begin{aligned} \mathbb {E}\left[ h\left( \mathfrak {L}_\tau \right) \right] \end{aligned}$$
over the set of all solutions to (SEP), and which is of the form
$$\begin{aligned}\tau _{V-} = \inf \left\{ t > 0: B_t \notin \left( \alpha _-\left( \mathfrak {L}_t\right) ,\alpha _+\left( \mathfrak {L}_t\right) \right) \right\} \text { a.s.,} \end{aligned}$$
for some decreasing function \(\alpha _+\ge 0\) and increasing function \(\alpha _-\le 0\).
(2) There exists a stopping time \(\tau _{V+}\) which minimizes
$$\begin{aligned} \mathbb {E}\left[ h\left( \mathfrak {L}_\tau \right) \right] \end{aligned}$$
over the set of all solutions to (SEP), and which is of the form
$$\begin{aligned} \tau _{V+} = Z \wedge \inf \left\{ t > 0: B_t \notin \left( \alpha _-\left( \mathfrak {L}_t\right) ,\alpha _+\left( \mathfrak {L}_t\right) \right) \right\} , \text { a.s.} \end{aligned}$$
for some increasing function \(\alpha _+\ge 0\), and some decreasing function \(\alpha _-\le 0\), and a \(\{0,\infty \}\)-valued \(\mathcal {G}_0\)-measurable random variable Z.
Proof
We consider the second case, under the additional assumption that \(0<\mu (\{0\}) <1\), the other cases being slightly simpler. As above, we let L be a raw local time and observe that \((\mathfrak {L}_t)_{t \ge 0}:= (L\circ r((B_t)_{t \ge 0},t))_{t \ge 0}\) is (indistinguishable from) the local time of \((B_t)_{t \ge 0}\) on \((\Omega , \mathcal {G}, (\mathcal {G}_t)_{t\ge 0}, \mathbb {P})\).
Applying Proposition 6.13 and Theorem 6.4 to the optimizations corresponding to \(\gamma (\omega ,t) = h(L\circ r(\omega ,t))\) and \({\tilde{\gamma }}(\omega ,t) = e^{-L\circ r(\omega ,t)}\omega ^2_t\) we obtain a minimizer \(\tau _{V+}\) and a \({\tilde{\gamma }}|{\gamma }\)-monotone set \(\Gamma \subseteq S\) supporting \(\tau _{V+}\).
Recall the set \(A \subseteq {C_0(\mathbb {R}_+)}\) given by Lemma 6.12. By projection the set
$$\begin{aligned} U = \{(f,s) \in S : \exists \omega \in A, f = (\omega _r)_{r \le s}\} \end{aligned}$$
is universally measurable and since \(\tau _{V+}\) is a finite stopping time, \(\mathbb {P}( ((B_t)_{t\le \tau _{V+}},\tau _{V+}) \in U)=1\). Passing to an appropriate subset if necessary, we may also assume that U is Borel. We may therefore assume \(\Gamma \subseteq U\), and it then also follows that \(\Gamma ^{<} \subseteq U\).
By a similar argument to the previous cases we can show that
$$\begin{aligned} {\mathsf {SG}_2}\supseteq \{((f,s),(g,t))\in U\times U:f(s)=g(t), L(f,s)< L(g,t)\}, \end{aligned}$$
(6.9)
where Lemma 6.12 guarantees that local time of paths is well-behaved following a path-swapping operation. In particular, since both f and g belong to U, it follows that (6.8) holds, and (6.9) is a direct consequence of this.
Define the sets
$$\begin{aligned} {\mathcal {R}}_\textsc {op}&:= \left\{ (l,x): \exists (g,t) \in \Gamma , g(t) = x, L(g,t) >l \right\} ,\\ {\mathcal {R}}_\textsc {cl}&:= \left\{ (l,x): \exists (g,t) \in \Gamma , g(t) = x, L(g,t) \ge l \right\} , \end{aligned}$$
and the corresponding stopping times
$$\begin{aligned} \tau _\textsc {op}^*&:= \inf \left\{ t \ge 0: \mathfrak {L}_t> 0, (\mathfrak {L}_t,B_t) \in {\mathcal {R}}_{\textsc {op}}\right\} ,\\ \tau _\textsc {cl}^*&:= \inf \left\{ t \ge 0: \mathfrak {L}_t >0, (\mathfrak {L}_t,B_t) \in {\mathcal {R}}_{\textsc {cl}}\right\} . \end{aligned}$$
Strictly speaking, the random times on the right-hand side only define stopping times in the augmented filtration (by the Début Theorem), however by Theorem 3.1, this is sufficient to find almost surely equal \(\mathcal {G}\)-stopping times.
Since \((\Gamma ^{<} \times \Gamma ) \cap \mathsf {SG}_2 = \emptyset \) and \((0,0) \in \Gamma ^{<}\) (\(\Gamma \) contains a non-trivial element since \(\mu (\{0\}) < 1\)), we have \((l,0) \not \in \Gamma \) for any \(l \ge 0\). It follows that \(\mathbb {P}(\tau _{V+} = 0) = \mu (\{0\})\).
We now consider \(\tau _{V+}\) on \(\{\tau _{V+} > 0 \}\). Note that \(\{\tau>0\}= \{ \mathfrak {L}_\tau >0\}\) a.s., for any stopping time \(\tau \), and hence in particular \(\{\tau _{V+}> 0 \}= \{ \mathfrak {L}_{\tau _{V+}} >0\}\) a.s. Then, on \(\{\tau _{V+} >0\}\), we have \(\tau _\textsc {cl}^* \le \tau _{V+} \le \tau _\textsc {op}^*\) a.s., and hence \(\mathbb {P}(\tau _{V+} \le \tau _{\textsc {op}}^*) = 1\). Define \(\alpha _+(l) = \inf \{ x>0: (l,x) \in {\mathcal {R}}_\textsc {op}\}\) and \(\alpha _-(l) = \sup \{ x<0: (l,x) \in {\mathcal {R}}_\textsc {op}\}\).
If \(\alpha _-(\eta ) = 0\) or \(\alpha _+(\eta ) = 0\) for some \(\eta >0\), then \(\tau _{\textsc {op}}^* = 0\) a.s. Since \(\tau _{V+} \le \tau _\textsc {op}^*\) and \(\mathbb {P}(\tau _{V+}>0) >0\), we must therefore have \(\alpha _+(\eta )>0\) and \(\alpha _-(\eta )<0\) for all \(\eta >0\). In addition, \(\alpha _+(l)\) is clearly increasing and right-continuous, so it has at most countably many discontinuities, and similarly for \(\alpha _-(l)\). We can write
$$\begin{aligned}&\inf \left\{ t: \mathfrak {L}_t>0, B_t \not \in \left( \alpha _-\left( \mathfrak {L}_t-\right) ,\alpha _+\left( \mathfrak {L}_t-\right) \right) \right\} \le \tau _{\textsc {cl}}^*\\&\quad \le \tau _{\textsc {op}}^* \le \inf \left\{ t: \mathfrak {L}_t>0, B_t \not \in \left[ \alpha _-\left( \mathfrak {L}_t\right) ,\alpha _+\left( \mathfrak {L}_t\right) \right] \right\} \end{aligned}$$
and observe that (by standard properties of Brownian motion) the stopping times on the left and right are almost surely equal (since there are at most countably many discontinuities, and \(\alpha _+(l)\) and \(\alpha _-(l)\) are bounded away from zero on \([\eta ,\infty )\) for \(\eta >0\)). It follows that \(\tau _{V+} = \inf \{t: \mathfrak {L}_t >0, B_t \not \in (\alpha _-(\mathfrak {L}_t),\alpha _+(\mathfrak {L}_t))\} \) on \(\{\tau _{V+} > 0\}\), and we deduce that \(\tau _{V+}\) is zero with probability \(\mu (\{0\})\), and, conditional on being greater than zero, \( \tau _{V+} =\inf \{ t > 0: B_t \notin (\alpha _-(\mathfrak {L}_t),\alpha _+(\mathfrak {L}_t))\}\) a.s. \(\square \)
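Purely as an illustration of the form of \(\tau _{V+}\), the following sketch simulates a discretized Brownian path, approximates its local time at zero by the scaled occupation time of a small interval, and stops once the path exits \((\alpha _-(\mathfrak {L}_t),\alpha _+(\mathfrak {L}_t))\). The boundary functions used below are hypothetical placeholders (not derived from any particular \(\mu \)), and the local time approximation is deliberately crude.

```python
import math
import random

def simulate_tau(alpha_plus, alpha_minus, dt=1e-4, eps=0.05, seed=1, t_max=50.0):
    """Simulate tau = inf{t : L_t > 0, B_t not in (alpha_-(L_t), alpha_+(L_t))}.

    The local time L at 0 is approximated by 1/(2*eps) times the occupation
    time of (-eps, eps), and B by an Euler scheme with step dt.
    """
    rng = random.Random(seed)
    B, L, t = 0.0, 0.0, 0.0
    while t < t_max:
        B += rng.gauss(0.0, math.sqrt(dt))
        if abs(B) < eps:
            L += dt / (2 * eps)
        t += dt
        if L > 0 and not (alpha_minus(L) < B < alpha_plus(L)):
            return t, B, L
    return None

# Hypothetical barrier functions: the continuation region
# (alpha_-(l), alpha_+(l)) widens as local time accumulates.
res = simulate_tau(lambda l: 0.5 + l, lambda l: -(0.5 + l))
```

The returned triple \((t, B_t, \mathfrak {L}_t)\) satisfies the exit condition by construction; the sketch is only meant to make the shape of the stopping rule tangible.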
Remark 6.15
The arguments above extend from local time at 0 to a general continuous additive functional A. Recalling that \(\mathfrak {L}^x\) denotes the local time at x, A can be represented in the form \(A_t:=\int \mathfrak {L}_t^x\, m_A(dx)\) for some measure \(m_A\). Let f be a convex function such that \(f'' = m_A\) in the sense of distributions. If \(\int f\, d\mu < \infty \) then the above proof is easily adapted to this more general situation.
In this manner, we deduce the existence of optimal solutions to (SEP) for functionals depending on A. By analogy with Theorem 6.14, this can be used to generate (inverse-/cave-) barrier-type embeddings of various kinds. Other generalizations and variants may be treated similarly; we leave specific examples as an exercise for the reader.
Root and Rost embeddings in higher dimensions
In this section we consider the Root and Rost constructions of Sects. 2.1 and 2.2 in the case of d-dimensional Brownian motion with general initial distribution, for \(d\ge 2\). In \(\mathbb {R}^d\), since Brownian motion is transient, it is no longer straightforward to assert the existence of an embedding. In general, [49] gives necessary and sufficient conditions for the existence of an embedding, albeit without the additional condition that \(\mathbb {E}[\tau ] < \infty \). In the Brownian case, Rost’s conditions for \(d \ge 3\) can be written as follows: there exists a stopping time \(\tau \) such that \(B_0 \sim \lambda \) and \(B_\tau \sim \mu \) if and only if for all \(y \in \mathbb {R}^d\)
$$\begin{aligned} \int u(x,y)\, \mu (dx) \le \int u(x,y)\, \lambda (dx),\; \text { where }\; u(x,y) = |x-y|^{2-d}. \end{aligned}$$
(6.10)
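Since \(u(\cdot ,y)\) is superharmonic for \(d\ge 3\), \(u(B_t,y)\) is a supermartingale, so stopping can only decrease the Newtonian potential; the existence condition compares the potentials of \(\lambda \) and \(\mu \) pointwise. As a purely numerical illustration (with \(d=3\), so \(u(x,y)=|x-y|^{-1}\)), the following sketch evaluates both potentials for \(\lambda =\delta _0\) and a hypothetical six-point approximation of the uniform measure on the unit sphere, and exhibits a point y at which the ordering fails — consistent with the fact that Brownian motion in \(\mathbb {R}^3\) never hits points, so an atomic \(\mu \) cannot be embedded from \(\delta _0\) (for the genuinely uniform sphere measure, Newton's theorem gives equality outside the sphere).

```python
import math

def potential(measure, y):
    """Newtonian potential sum_x w(x) |x - y|^(2-d) of an atomic measure, d = 3."""
    return sum(w / math.dist(x, y) for x, w in measure)

lam = [((0.0, 0.0, 0.0), 1.0)]  # point mass at the origin
# Hypothetical discrete stand-in for the uniform measure on the unit sphere:
mu = [((1.0, 0.0, 0.0), 1 / 6), ((-1.0, 0.0, 0.0), 1 / 6),
      ((0.0, 1.0, 0.0), 1 / 6), ((0.0, -1.0, 0.0), 1 / 6),
      ((0.0, 0.0, 1.0), 1 / 6), ((0.0, 0.0, -1.0), 1 / 6)]

y = (2.0, 0.0, 0.0)
U_lam, U_mu = potential(lam, y), potential(mu, y)
# U_mu > U_lam here, so the potential ordering fails and no embedding exists.
```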
However, it is not clear that such a stopping time will satisfy the condition
$$\begin{aligned} \mathbb {E}[\tau ] = \frac{1}{d} \int |x|^2\, (\mu -\lambda )(dx). \end{aligned}$$
(6.11)
As a result, it is not straightforward to give simple criteria for the existence of a solution in \(\mathsf {RST}(\mu )\).
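A minimal sanity check of (6.11): for \(\lambda =\delta _0\) and \(\mu \) the uniform measure on the sphere of radius r (embedded by the first exit time of the ball), the formula reproduces the classical value \(\mathbb {E}[\tau ]=r^2/d\).

```python
def expected_tau(second_moment_mu, second_moment_lam, d):
    """E[tau] = (1/d) * integral of |x|^2 d(mu - lambda), as in (6.11)."""
    return (second_moment_mu - second_moment_lam) / d

# lambda = delta_0 has second moment 0; every point of the sphere of radius r
# has |x|^2 = r^2, so mu has second moment r^2.
r, d = 2.0, 3
e_tau = expected_tau(r * r, 0.0, d)  # equals r^2 / d, the exit time of the ball
```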
In the case \(d=2\) it follows from Falkner’s results [22] that the Skorokhod problem admits a solution (i.e. \(\mathsf {RST}(\mu )\ne \emptyset \)) if (6.10) is satisfied for \(u(x,y)=-\ln |x-y|\), in which case (6.11) applies.
In either case, assuming that we do have a solution satisfying (6.11), then the existence result as well as the monotonicity principle carry over to the present setup (with identical proofs) and we are able to state the following:
Theorem 6.16
Suppose \(\mathsf {RST}(\mu )\) is non-empty. If h is a strictly convex function and \(\hat{\tau } \in \mathsf {RST}(\mu )\) minimizes \(\mathbb {E}[h(\tau )]\) over \(\tau \in \mathsf {RST}(\mu )\) then there exists a barrier \({\mathcal {R}}\) such that \(\hat{\tau } = \inf \{ t > 0 : (B_t,t) \in {\mathcal {R}}\}\) on \(\{{\hat{\tau }} >0\}\) a.s.
The proof of this result is much the same as that of Theorem 2.1, except we no longer show that \(\tau _\textsc {cl}= \tau _\textsc {op}\). In higher dimensions with general initial laws, it is easy to construct examples where \(\lambda \) and \(\mu \) have a common atom, but where the atom of \(\lambda \) is strictly larger than that of \(\mu \). By the transience of the process, it is clear that the optimal (indeed, only) behaviour is to stop the mass starting at such a point immediately with a probability strictly between 0 and 1; however, the stopping times \(\tau _\textsc {cl}\) and \(\tau _\textsc {op}\) will always stop all of this mass, or none of it, respectively. For this reason, we say nothing about the behaviour of \({\hat{\tau }}\) on \(\{{\hat{\tau }} = 0\}\). Trivially, the above result tells us that the solution of the optimal embedding problem is given by a barrier if there exists a set D such that \(\lambda (D) = 1 = \mu (D^c)\).
Proof of Theorem 6.16
The first part of the proof proceeds similarly to the proof of Theorem 2.1. In particular, the set of stop-go pairs is given by
$$\begin{aligned} \mathsf {SG}\supseteq \{((f,s),(g,t))\in S\times S:f(s)=g(t), s>t\} \end{aligned}$$
and we define the sets \({\mathcal {R}}_\textsc {cl}, {\mathcal {R}}_\textsc {op}\) and the stopping times \(\tau _\textsc {cl}, \tau _\textsc {op}\) as above. We then fix \(\delta >0\), and consider the set \(\{{\hat{\tau }}\ge \delta \}\). Given \(\eta \ge 0\), we define \(B^{-\eta }_t = B_{t+\eta }\), for \(t \ge -\eta \) and set
$$\begin{aligned} \tau ^{\eta ,\delta }_\textsc {cl}:= \inf \{t \ge \delta : (t,B_t^{-\eta }) \in {\mathcal {R}}_\textsc {cl}\}. \end{aligned}$$
Then \(\tau _\textsc {cl}^{\eta ,\delta } \ge \delta \), and for any \(\varepsilon >0\), there exists \(\eta >0\) sufficiently small that \(d_{TV}(B^{-\eta }_{\delta },B_{\delta }) < \varepsilon ,\) where \(d_{TV}\) denotes the total variation distance. By the strong Markov property of Brownian motion, it follows that \( d_{TV}(B^{-\eta }_{\tau _\textsc {cl}^{\eta ,\delta }}, B_{\tau _\textsc {cl}^{0,\delta }}) < \varepsilon \). In particular, the law of \(B^{-\eta }_{\tau _\textsc {cl}^{\eta ,\delta }}\) converges weakly to the law of \(B_{\tau _\textsc {cl}^{0,\delta }}\) as \(\eta \rightarrow 0\). Moreover, we can write
$$\begin{aligned} \tau _\textsc {cl}^{\eta ,\delta } = \inf \{ t \ge \eta +\delta : (t-\eta ,B_t) \in {\mathcal {R}}_\textsc {cl}\}, \end{aligned}$$
so \(\tau _\textsc {cl}^{\eta ,\delta } \ge \tau _{\textsc {cl}}^{0,\delta }\), and moreover, \(\tau _\textsc {cl}^{\eta ,\delta } \rightarrow \tau _\textsc {op}^{0,\delta }\) a.s. as \(\eta \rightarrow 0\). Hence, \(B^{-\eta }_{\tau _\textsc {cl}^{\eta ,\delta }} \rightarrow B_{\tau _\textsc {op}^{0,\delta }}\) in probability, as \(\eta \rightarrow 0\), so we also have weak convergence of the law of \(B^{-\eta }_{\tau _\textsc {cl}^{\eta ,\delta }}\) to the law of \(B_{\tau _\textsc {op}^{0,\delta }}\), and hence \({B_{\tau _\textsc {op}^{0,\delta }} \sim B_{\tau _\textsc {cl}^{0,\delta }}}\). By an essentially identical argument to that in the proof of Theorem 2.1, we must have \(\tau _\textsc {cl}^{0,\delta } \le \hat{\tau } \le \tau _\textsc {op}^{0,\delta }\) on \(\{\hat{\tau } \ge \delta \}\). Moreover, the argument above shows that \(\tau _\textsc {cl}^{\eta ,\delta } \rightarrow _{\mathcal {D}} \tau _\textsc {cl}^{0,\delta }\) and \(\tau _\textsc {cl}^{\eta ,\delta } \rightarrow _{\mathcal {D}} \tau _\textsc {op}^{0,\delta }\) as \(\eta \rightarrow 0\) (where \(\rightarrow _{\mathcal {D}}\) denotes convergence in distribution). It follows that \(\tau _\textsc {cl}^{0,\delta } =_{\mathcal {D}} \tau _\textsc {op}^{0,\delta }\), and since \(\tau _\textsc {cl}^{0,\delta } \le \tau _\textsc {op}^{0,\delta }\), we conclude \(\tau _\textsc {cl}^{0,\delta } = \tau _\textsc {op}^{0,\delta }\) a.s. In particular, \(B_{\tau _\textsc {cl}^{0,\delta }} = B_{\tau _\textsc {op}^{0,\delta }} = B_{{\hat{\tau }}}\) on \(\{{\hat{\tau }} \ge \delta \}\). Letting \(\delta \rightarrow 0\) we observe that \(\tau _\textsc {op}^{0,\delta } \rightarrow \tau _\textsc {op}\), and hence the required result holds on taking \({\mathcal {R}}={\mathcal {R}}_\textsc {op}\). \(\square \)
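The total variation step in the proof above can be illustrated numerically: \(B_\delta \sim N(0,\delta )\) while \(B^{-\eta }_\delta =B_{\delta +\eta }\sim N(0,\delta +\eta )\), and the distance \(\tfrac{1}{2}\int |f-g|\) between the two Gaussian densities vanishes as \(\eta \rightarrow 0\). A rough sketch by midpoint-rule integration:

```python
import math

def gauss_pdf(x, var):
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def tv_shifted(delta, eta, lo=-10.0, hi=10.0, n=50_000):
    """d_TV(N(0, delta), N(0, delta + eta)) via midpoint-rule integration."""
    h = (hi - lo) / n
    return 0.5 * sum(
        abs(gauss_pdf(lo + (i + 0.5) * h, delta)
            - gauss_pdf(lo + (i + 0.5) * h, delta + eta)) * h
        for i in range(n))

# The distance shrinks with the time shift eta, so any epsilon > 0 is achievable.
tvs = [tv_shifted(1.0, eta) for eta in (0.5, 0.1, 0.01)]
```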
We now consider the generalization of the Rost embedding. Recall that \((\min (\lambda , \mu ))(A) := \inf _{B \subseteq A} \left( \lambda (B)+ \mu (A{\setminus } B)\right) \) defines a measure.
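For purely atomic measures the infimum in this definition is attained by splitting each atom, so \(\min (\lambda ,\mu )\) reduces to the pointwise minimum of the weights. A small sketch with hypothetical atomic laws:

```python
def min_measure(lam, mu):
    """min(lam, mu) for purely atomic measures given as {atom: weight} dicts.

    For atomic measures the infimum over B in the definition is attained
    atom by atom, giving the pointwise minimum of the weights.
    """
    return {x: min(lam.get(x, 0.0), mu.get(x, 0.0)) for x in set(lam) | set(mu)}

# Hypothetical initial and target laws sharing an atom at 0 of different sizes:
lam = {0.0: 0.5, 1.0: 0.5}
mu = {0.0: 0.2, 2.0: 0.8}
common = min_measure(lam, mu)  # {0.0: 0.2, 1.0: 0.0, 2.0: 0.0}
```

In the setting of the next theorem, the total mass of `common` (here 0.2) is the probability that the optimizer stops at time 0.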
Theorem 6.17
Suppose \(\lambda , \mu \) are probability measures on \(\mathbb {R}^d\) and \(\hat{\tau } \in \mathsf {RST}(\mu )\) maximizes \(\mathbb {E}[h(\tau )]\) over all stopping times in \(\mathsf {RST}(\mu )\), for a convex function \(h: \mathbb {R}_+ \rightarrow \mathbb {R}\), with \(\mathbb {E}[h(\hat{\tau })]<\infty \). Then \(\mathbb {P}(\hat{\tau }=0, B_0 \in A) = (\min (\lambda , \mu ))(A)\) for \(A \in {\mathcal {B}}(\mathbb {R}^d)\), and on \(\{\hat{\tau }>0\}\), \(\hat{\tau }\) is the first hitting time of an inverse barrier.
Proof
We follow the proof of Theorem 2.4 to recover the set of stop-go pairs given by
$$\begin{aligned} \mathsf {SG}\supseteq \{((f,s),(g,t))\in S\times S:f(s)=g(t), s<t\} \end{aligned}$$
and the sets \({\mathcal {R}}_\textsc {op}\) and \({\mathcal {R}}_\textsc {cl}\), and their corresponding hitting times \(\tau _\textsc {op}, \tau _\textsc {cl}\). For \(0 \le \eta \le \delta \), we define in addition the stopping times
$$\begin{aligned} \tau _\textsc {cl}^{\eta ,\delta } := \inf \{ t \ge \delta : (t,B_t^\eta ) \in {\mathcal {R}}_\textsc {cl}\},\ \tau _\textsc {op}^{\eta ,\delta } := \inf \{ t \ge \delta : (t,B_t^\eta ) \in {\mathcal {R}}_\textsc {op}\}, \end{aligned}$$
where \(B_t^\eta = B_{t-\eta }\), for \(t \ge \eta \).
It follows from an identical argument to that in the proof of Theorem 2.4 that \(\tau _\textsc {cl}^{0,\delta } \le \hat{\tau } \le \tau _\textsc {op}^{0,\delta }\) on \(\{\hat{\tau } \ge \delta \}\). However, by similar arguments to those used above, we deduce that \(\tau _\textsc {op}^{0,\delta }\) and \(\tau _\textsc {cl}^{0,\delta }\) have the same law on \(\{\hat{\tau } \ge \delta \}\), and hence that \(\hat{\tau } = \tau _\textsc {op}^{0,\delta }\) on this set, and then by taking \(\delta \rightarrow 0\), we get \(\hat{\tau } = \tau _\textsc {op}\) on \(\{\hat{\tau }>0\}\).
To see the final claim, we note that trivially \(\mathbb {P}(\hat{\tau }=0, B_0 \in A) \le (\min (\lambda , \mu ))(A)\). If there is strict inequality, then there exist some paths in \(\Gamma \) which start at \(x \in A\), and paths in \(\Gamma \) which stop at x at strictly positive time, constituting a stop-go pair and therefore violating the monotonicity principle. \(\square \)
Remark 6.18
We observe that the arguments of Remark 2.3 can be applied again in this context. However, one needs to be a little more careful, since it is necessary to take the fine closure of the barriers with respect to the fine topology for the process \((t,B_t)_{t\ge 0}\). With this modification in place, the argument of Loynes can easily be adapted to show that the (finely closed versions of the) barriers in Theorems 6.16 and 6.17 are unique in the sense of Remark 2.3.
An optimal Skorokhod embedding problem which admits only randomized solutions
By analogy with optimal transport, we might interpret a ‘natural stopping time’ (i.e. a stopping time with respect to the Brownian filtration) which solves (OptSEP) as a Monge-type solution, whereas stopping times which depend on additional randomization are of Kantorovich-type. With the exception of the Rost solution, all optimal stopping times encountered in the previous section are natural stopping times, and in the Rost case external randomization is only needed at time 0. One might ask whether the optimal Skorokhod embedding problem always admits a solution \(\tau \) which is natural on \(\{\tau >0\}\). We sketch an example showing that this is not the case:
Example 6.19
There exists an absolutely continuous probability measure \(\mu \) and a continuous adapted process \(\gamma _t=\gamma ((B_s)_{s\le t})\) with values in [0, 1] such that (OptSEP) admits only randomized solutions.
Proof
Define the stopping time \(\sigma := \inf \{t \ge 0: B_t^2 + t^2 \ge 1\}\), the first time the Brownian path leaves the right half of the unit disc. Write \((C(0,\sigma ), {\mathbb {W}}_\sigma )\) for the space of continuous functions up to time \(\sigma \), equipped with the corresponding projection of Wiener measure. Pick an isomorphism
$$\begin{aligned} l:(C(0,\sigma ), {\mathbb {W}}_\sigma )\rightarrow ([2,3], \mathcal {L}) \end{aligned}$$
of standard Borel probability spaces. Using some extra randomization (independent of \(\mathcal {F}^B\)) we define a stopping time \(\tau \) such that
(1) \(\tau =\sigma \) with probability 1/2,
(2) otherwise \(\tau \) stops the first time the Brownian path reaches the level \(\pm l((B_s)_{s\le \sigma })\).
We then define \(\mu := \mathrm {Law}(B_\tau )\) and pick \(\gamma \) to be a function which equals 0 on paths which are stopped by \(\tau \) and is strictly positive otherwise; clearly we can do this in such a way that \( \gamma \) has continuous paths.
Write \({\hat{\tau }}\) for the randomized stopping time in \(\mathsf {RST}(\mu )\) corresponding to \(\tau \). It is then straightforward to see that \({\hat{\tau }}\) is the unique solution of (OptSEP). Thus, the optimal Skorokhod embedding problem admits no (non-randomized) solution in the natural filtration of B. \(\square \)
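The structure of the construction — an independent coin choosing between two stopping rules — can be sketched as follows. Since the isomorphism l is abstract, a hypothetical fixed level in [2, 3] stands in for \(l((B_s)_{s\le \sigma })\); the point is only that \(\tau \) uses randomness outside the Brownian filtration.

```python
import math
import random

def sample_tau(seed, level=2.5, dt=1e-4):
    """Draw one sample (tau, B_tau) of the randomized stopping time, discretized.

    sigma is the first exit of (t, B_t) from the unit disc (with t >= 0); a fair
    coin independent of B then either stops at sigma or continues until |B|
    reaches `level`, a hypothetical stand-in for l((B_s)_{s <= sigma}) in [2, 3].
    """
    rng = random.Random(seed)
    B, t = 0.0, 0.0
    while B * B + t * t < 1.0:      # run until (t, B_t) leaves the unit disc
        B += rng.gauss(0.0, math.sqrt(dt))
        t += dt
    if rng.random() < 0.5:          # external randomization, independent of B
        return t, B                 # case (1): stop at sigma
    while abs(B) < level:           # case (2): run to the level +-level
        B += rng.gauss(0.0, math.sqrt(dt))
        t += dt
    return t, B

t1, B1 = sample_tau(seed=0)
```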
In optimal transport it is a difficult and interesting problem to understand under which conditions transport problems admit solutions of Monge-type. An interesting subject for future research would be to understand when Monge-type solutions exist for the optimal Skorokhod embedding problem.