This section is devoted to the proof of Theorem 3.6. The proof closely mimics that of Theorem 1.3/Theorem 5.7 in [6]. For the benefit of readers already familiar with that paper we first describe the changes required to make the proofs there work in our situation, and then, for the sake of a more self-contained presentation, reiterate the main arguments, citing from [6] only those results that we can use verbatim.
Sketch of differences in the proof of Theorem 3.6 relative to [6, Theorem 5.7]
Again the strategy is to show that for a larger set
we can find a set
such that
. The definition of
must of course be adapted analogously to the changes required to the definition of \(\mathsf {SG}\).
Apart from that, the only real changes are to [6, Theorem 5.8]. Previously it was essential that the randomized stopping time \(\xi ^{r(\omega ,s)}\) is also a valid randomized stopping time of the Markov process in question when started at a different time but at the same location \(\omega (s)\). We now need that \(\xi ^{r(\omega ,s)}\) is also a randomized stopping time of our Markov process when started at the same time s but in a different place. For Brownian motion both are true, but this difference is the reason why, in the case of the Skorokhod embedding, the right class of processes to generalize the argument to is that of Feller processes, while in our setup the processes need not be time-homogeneous but must be space-homogeneous. That we are able to plant this “bush” \(\xi ^{r(\omega ,s)}\) in another location is what guarantees that the measure \(\xi _1^\pi \) defined in the proof of Theorem 5.8 of [6] is again a randomized stopping time.
Whereas in the Skorokhod case the task is to show that the new, better randomized stopping time \(\xi ^\pi \) embeds the same distribution as \(\xi \), we now have to show that the randomized stopping time we construct has the same distribution as \(\xi \). The argument works along the same lines, though: instead of using that \(\left( (\omega ,s),(\eta ,t)\right) \in \widehat{\mathsf {SG}}^\xi \) implies \(\omega (s)=\eta (t)\), we now use that \(\left( (\omega ,s),(\eta ,t)\right) \in \widehat{\mathsf {SG}}^\xi \) implies \(s=t\). \(\square \)
We now present the argument in more detail.
As may be clear by now, what we will show is that if \(\xi \in \mathsf {RST}_{\lambda }(\mu )\) is a solution of \((\textsc {OptStop'})\), then there is a measurable, \((\mathcal {F}^0_{t})_{t \ge 0}\)-adapted set \(\Gamma \subseteq C(\mathbb {R}_{+})\times \mathbb {R}_{+}\) such that \(\mathsf {SG}\cap \left( \Gamma ^< \times \Gamma \right) = \emptyset \). Using Lemma 5.3 this implies Theorem 3.6.
We need to make some preparations. To align the notation with [6] and to make some technical steps easier it is useful to have another characterization of measurable, \((\mathcal {F}^0_{t})_{t \ge 0}\)-adapted processes and sets. To this end define
Definition 6.1
r has many right inverses. A simple one is
$$\begin{aligned} r'&: S \rightarrow C(\mathbb {R}_{+})\times \mathbb {R}_{+}\\ r'(f,s)&:= \left( t \mapsto {\left\{ \begin{array}{ll} f(t) &{} \quad \text {for } t \le s \\ f(s) &{} \quad \text {for } t > s \end{array}\right. },s \right) . \end{aligned}$$
We endow S with the sigma algebra generated by \(r'\).
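Although the defining formula for r is elided in our source, its role (following [6], it sends \((\omega ,t)\) to the path stopped at time t, viewed as an element of S) and that of the right inverse \(r'\) above can be illustrated on discretized paths. The following Python sketch is ours; the grid constants and function names are assumptions for illustration, not part of the paper.

```python
import numpy as np

DT, HORIZON = 0.01, 2.0  # grid spacing and time horizon (our choices)

def r(omega, t):
    """r(omega, t): forget what the path does after time t, returning
    the stopped initial segment (f, t), an element of S."""
    n = int(round(t / DT))
    return omega[: n + 1].copy(), t

def r_prime(f, s):
    """The right inverse r' of Definition 6.1: extend the stopped path
    (f, s) to a full path by freezing it at its value f(s) after s."""
    n_total = int(round(HORIZON / DT))
    omega = np.empty(n_total + 1)
    omega[: len(f)] = f
    omega[len(f):] = f[-1]  # constant continuation after time s
    return omega, s
```

Then \(r \circ r'\) is the identity on S, which is the sense in which \(r'\) is a right inverse of r.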
[6, Theorem 3.2], which is a direct consequence of [15, Theorem IV. 97], asserts that a process X is measurable, \((\mathcal {F}^0_{t})_{t \ge 0}\)-adapted iff X factors as \(X=X'\circ r\) for a measurable function \(X' : S \rightarrow \mathbb {R}\). So a set \(D \subseteq C(\mathbb {R}_{+})\times \mathbb {R}_{+}\) is measurable, \((\mathcal {F}^0_{t})_{t \ge 0}\)-adapted iff \(D = r^{-1}\left[ D'\right] \) for some measurable \(D' \subseteq S\).
Note that \(r(\omega ,t) = r(\omega ',t')\) implies \((\omega ,t) \odot \theta = (\omega ',t') \odot \theta \) and therefore
$$\begin{aligned} \mathsf {SG}&=(r \times r)^{-1}\left[ \mathsf {SG}'\right] \end{aligned}$$
for a set \(\mathsf {SG}' \subseteq S \times S\) which is described by an expression almost identical to that in Definition 3.4. Namely, we can overload \(\odot \) so that it also names the operation whose first operand is an element of S, satisfying \((\omega ,t) \odot \theta = r(\omega ,t) \odot \theta \), and note that, as \(c\) is measurable, \((\mathcal {F}^0_{t})_{t \ge 0}\)-adapted, we can write \(c= c'\circ r\) and thus obtain a cost function \( c'\) defined on S.
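For intuition, the concatenation \(\odot \) can be sketched on discretized paths: follow the stopped path up to its stopping time, then continue with the attached path translated so that the glued path is continuous. This space-homogeneous convention (the continuation starts at 0 and is shifted to the endpoint of the stopped path) is our reading of the setup and should be treated as an assumption of the sketch.

```python
import numpy as np

def concat(f, theta):
    """(f, t) ⊙ theta for a stopped path f (its values on a grid up to
    time t) and a continuation theta on a grid starting at time t with
    theta[0] == 0: follow f, then follow theta translated to start at
    the endpoint f(t), so the glued path is continuous at the seam."""
    return np.concatenate([f, f[-1] + theta[1:]])
```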
Given an optimal \(\xi \in \mathsf {RST}_{\lambda }(\mu )\) we may therefore rephrase our task as having to find a measurable set \(\Gamma \subseteq S\) such that \(r_*(\xi )\) is concentrated on \(\Gamma \) and that \(\mathsf {SG}' \cap \left( \Gamma ^< \times \Gamma \right) = \emptyset \), where
.
Note that for \(\Gamma \subseteq S\) although \(\left( r^{-1}\left[ \Gamma \right] \right) ^<\) is not equal to \( r^{-1}\left[ \Gamma ^<\right] \) we still have \(\mathsf {SG}\cap \left( r^{-1}\left[ \Gamma ^<\right] \times r^{-1}\left[ \Gamma \right] \right) = \emptyset \) iff \(\mathsf {SG}\cap \left( (r^{-1}\left[ \Gamma \right] )^< \times r^{-1}\left[ \Gamma \right] \right) = \emptyset \).
One of the main ingredients of the proof of [6, Theorem 1.3] and of our Theorem 3.6 is a procedure whereby we accumulate many infinitesimal changes to a given randomized stopping time \(\xi \) to build a new stopping time \(\xi ^\pi \). The guiding intuition for the authors is to picture these changes as replacing certain “branches” of the stopping time \(\xi \) by different branches. Some of these branches will actually enter the statement of a somewhat stronger theorem (Theorem 6.8 below), so we begin by describing these. Our way to get a handle on “branches”—i.e. infinitesimal parts of a randomized stopping time—is to describe them through a disintegration (wrt \(\mathbb {W}^{0}_{\lambda }\)) of the randomized stopping time. We need the following statement from [6] which should also serve to provide more intuition on the nature of randomized stopping times.
Lemma 6.2
[6, Theorem 3.8] Let \(\xi \) be a measure on \(C(\mathbb {R}_{+})\times \mathbb {R}_{+}\). Then \( \xi \in \mathsf {RST}_{\lambda } \) iff there is a disintegration \((\xi _{\omega })_{\omega \in C(\mathbb {R}_{+})}\) of \(\xi \) wrt \(\mathbb {W}^{0}_{\lambda }\) such that \((\omega ,t) \mapsto \xi _\omega ([0,t])\) is measurable, \((\mathcal {F}^0_{t})_{t \ge 0}\)-adapted and maps into [0, 1].
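Lemma 6.2 says a randomized stopping time is encoded by the adapted, [0, 1]-valued increasing process \((\omega ,t) \mapsto \xi _\omega ([0,t])\); sampling from it then amounts to drawing an independent uniform level and stopping when this process first reaches it. A hedged Python sketch on discretized Brownian paths (the toy adapted process and all names are our own invention):

```python
import numpy as np

DT, HORIZON = 0.01, 1.0
GRID = np.arange(0.0, HORIZON + DT, DT)

def adapted_cdf(omega):
    """A toy stand-in for t -> xi_omega([0, t]): the fraction of the
    first half time-unit the path has spent at or above 0.  It uses
    only the past of the path and takes values in [0, 1]."""
    time_above = np.cumsum((omega >= 0.0).astype(float)) * DT
    return np.clip(time_above / 0.5, 0.0, 1.0)

def sample_stopping_time(omega, y):
    """Stop at the first time the adapted CDF reaches the independent
    uniform level y (the representation via an extra uniform coordinate
    used in Lemma 5.3); returns inf if the level is never reached."""
    A = adapted_cdf(omega)
    idx = np.searchsorted(A, y)
    return GRID[idx] if idx < len(GRID) else np.inf
```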
Using Lemma 6.2 above let us fix for the rest of this section both \(\xi \in \mathsf {RST}_{\lambda }(\mu )\) and a disintegration \(\left( \xi _{\omega }\right) _{\omega \in C(\mathbb {R}_{+})}\) with the properties above. Both Definition 6.3 below and Theorem 6.8 implicitly depend on this particular disintegration and we emphasize that whenever we write \(\xi _{\omega }\) in the following we are always referring to the same fixed disintegration with the properties given in Lemma 6.2. Note that the measurability properties of \(\left( \xi _{\omega }\right) _{\omega \in C(\mathbb {R}_{+})}\) imply that for any \( I \subseteq [0,s] \) we can determine \( \xi _\omega (I) \) from
alone. For \((f,s) \in S\) we will again overload notation and use \(\xi _{(f,s)} \) to refer to the measure on [0, s] which is equal to
for any \( \omega \in C(\mathbb {R}_{+})\) such that \(r(\omega ,s) = (f,s)\).
Definition 6.3
(conditional randomized stopping time) Let \((f,s) \in S\). We define a new randomized stopping time
by setting
for all bounded measurable \(F : C([s, \infty ))\times [s, \infty )\rightarrow \mathbb {R}\), i.e. \((\xi ^{(f,s)}_{\omega })_{\omega \in C([s, \infty ))}\) is the disintegration of \( \xi ^{(f,s)} \) wrt \(\mathbb {W}^{s}_{0}\).
Here \(\delta _s\) is the Dirac measure concentrated at s. The definition in the case where \( \xi _{(f,s)}([0,s]) = 1 \) is somewhat arbitrary; it is merely a convenience to avoid partially defined functions. What we will use is that
.
Definition 6.4
(relative Stop-Go pairs) The set \(\mathsf {SG}^\xi \) consists of all \(\left( (f,t), (g,t)\right) \in S \times S\) (again the times have to match) such that either
$$\begin{aligned} c'(f,t) + {\int }c((g,t) \odot \theta , u) \,d\xi ^{(f,t)}(\theta ,u) < c'(g,t) + {\int }c((f,t) \odot \theta , u) \,d\xi ^{(f,t)}(\theta ,u) \end{aligned}$$
(6.2)
or any one of
-
1.
\(\xi ^{(f,t)}\left( C(\mathbb {R}_{+})\times \mathbb {R}_{+}\right) < 1\) or \({\int }s^{p_0} \,d\xi ^{(f,t)}(\theta ,s) = \infty \)
-
2.
the integral on the right-hand side equals \(\infty \)
-
3.
either of the integrals is not defined
holds. We also define
$$\begin{aligned} \widehat{\mathsf {SG}}^\xi := \mathsf {SG}^\xi \cup \left\{ (f,s) \in S : \xi _{(f,s)}([0,s]) = 1 \right\} \times S \end{aligned}$$
(6.3)
Lemma 6.6 below says that the numbered cases above are exceptional in an appropriate sense and one may consider them a technical detail. Note that when we say \(\left( (f,t),(g,t)\right) \in \mathsf {SG}^\xi \) we are implicitly saying that \( \xi _{(f,t)}([0,t]) < 1 \).
Note that the sets \(\mathsf {SG}^\xi \) and \(\widehat{\mathsf {SG}}^\xi \) are measurable (in contrast to \(\mathsf {SG}\), which may be more complicated).
Definition 6.5
We call a measurable set \(F \subseteq S\) evanescent if \(r^{-1}\left[ F\right] \) is evanescent, that is, if \(\mathbb {W}^{0}_{\lambda }\left( \mathsf {proj}_{C(\mathbb {R}_{+})}\left[ r^{-1}\left[ F\right] \right] \right) = 0\).
Lemma 6.6
[6, Lemma 5.2] Let \(F: C(\mathbb {R}_{+})\times \mathbb {R}_{+}\rightarrow \mathbb {R}\) be some measurable function for which \({\int }F \,d\xi \in \mathbb {R}\). Then the following sets are evanescent.
-
\(\left\{ (f,s) \in S : \xi ^{(f,s)}\left( C(\mathbb {R}_{+})\times \mathbb {R}_{+}\right) < 1 \right\} \)
-
\(\left\{ (f,s) \in S : {\int }F((f,s) \odot \theta ,u) \,d\xi ^{(f,s)}(\theta ,u) \not \in \mathbb {R}\right\} \)
Proof
See [6].\(\square \)
Lemma 6.7
[6, Lemma 5.4]
$$\begin{aligned} \mathsf {SG}' \subseteq \widehat{\mathsf {SG}}^\xi \end{aligned}$$
Proof
Can be found in [6]. Note that they fix \(p_0 = 1\). \(\square \)
Theorem 6.8
Assume that \( \xi \) is a solution of \((\textsc {OptStop'})\). Then there is a measurable set \( \Gamma \subseteq S \) such that \( r_*(\xi )(\Gamma ) = 1 \) and
$$\begin{aligned} \widehat{\mathsf {SG}}^\xi \cap \left( \Gamma ^< \times \Gamma \right) = \emptyset \text {.}\end{aligned}$$
(6.4)
Our argument follows [6, Theorem 5.7]. We also need the following two auxiliary propositions, which in turn require some definitions.
Definition 6.9
Let \(\upsilon \) be a probability measure on some measure space Y. The set \(\mathsf {JOIN}_{\lambda }(\upsilon )\) is the set of all subprobability measures \(\pi \) on \((C(\mathbb {R}_{+})\times \mathbb {R}_{+}) \times Y\) such that
Proposition 6.10
Let \( \xi \) be a solution of \((\textsc {OptStop'})\). Then \( \left( r \times \mathsf {Id}\right) _*(\pi )(\mathsf {SG}^\xi ) = 0 \) for all \( \pi \in \mathsf {JOIN}_{\lambda }(r_*(\xi )) \).
Here we use \(\times \) to denote the Cartesian product map, i.e. for sets \(X_i,Y_i\) and functions \(F_i : X_i \rightarrow Y_i\) where \(i \in \{1,2\}\) the map \(F_1 \times F_2 : X_1 \times X_2 \rightarrow Y_1 \times Y_2\) is given by \((F_1 \times F_2)(x_1,x_2) = (F_1(x_1),F_2(x_2))\). Proposition 6.10 is an analogue of [6, Proposition 5.8] and it is where the material changes compared to [6] take place. We will give the proof at the end of this section.
Proposition 6.11
[6, Proposition 5.9] Let \((Y, \upsilon )\) be a Polish probability space and let \( E \subseteq S \times Y \) be a measurable set. Then the following are equivalent
-
1.
\( \left( r \times \mathsf {Id}\right) _*(\pi )(E) = 0 \) for all \(\pi \in \mathsf {JOIN}_{\lambda }(\upsilon )\)
-
2.
\( E \subseteq (F \times Y) \cup (S \times N) \) for some evanescent set \(F \subseteq S\) and a measurable set \(N \subseteq Y\) which satisfies \(\upsilon (N) = 0\).
Proposition 6.11 is proved in [6] and we will not repeat the proof here.
Proof of Theorem 6.8
Using Proposition 6.10 we see that \( \left( r \times \mathsf {Id}\right) _*(\pi )(\mathsf {SG}^\xi ) = 0 \) for all \( \pi \in \mathsf {JOIN}_{\lambda }(r_*(\xi )) \). Plugging this into Proposition 6.11 we find an evanescent set \(F_1 \subseteq S\) and a set \( N \subseteq S\) such that \(r_*(\xi )(N) = 0\) and \(\mathsf {SG}^\xi \subseteq (F_1 \times S) \cup (S \times N)\). Defining for any Borel set \(E \subseteq S\) the analytic set
we observe that \( \left( (E^>)^c\right) ^< \subseteq E^c \) and find \(r_*(\xi )(F_1^>) = 0\).
Setting \(F_2 := \left\{ (f,s) \in S : \xi _{(f,s)}([0,s]) = 1 \right\} \) and arguing on the disintegration \(\left( \xi _\omega \right) _{\omega \in C(\mathbb {R}_{+})}\) we see that \( r_*(\xi )(F_2^>) = 0 \), so \(r_*(\xi )(F^>) = 0\) for \(F := F_1 \cup F_2\).
This shows that \(S {\setminus } (N \cup F^>)\) has full \(r_*(\xi )\)-measure. Let \(\Gamma \) be a Borel subset of that set which also has full \(r_*(\xi )\)-measure.
Then
$$\begin{aligned} \Gamma ^< \times \Gamma&\subseteq \left( (F^>)^c\right) ^< \times N^c \subseteq F^c \times N^c \text { and}\\ \widehat{\mathsf {SG}}^\xi&\subseteq (F \times S) \cup (S \times N) \end{aligned}$$
which shows \( \widehat{\mathsf {SG}}^\xi \cap \left( \Gamma ^< \times \Gamma \right) = \emptyset \). \(\square \)
Lemma 6.12
If \( \alpha \in \mathsf {RST}_{\lambda } \) and \( G : C(\mathbb {R}_{+})\times \mathbb {R}_{+}\rightarrow [0,1] \) is measurable, \((\mathcal {F}^0_{t})_{t \ge 0}\)-adapted, then the measure defined by
$$\begin{aligned} F \mapsto {\int }F(\omega , t) G(\omega , t) \,d\alpha (\omega ,t) \end{aligned}$$
(6.5)
is still in \( \mathsf {RST}_{\lambda } \).
Proof
We use the criterion in Lemma 6.2. Let \( (\alpha _\omega )_{\omega \in C(\mathbb {R}_{+})} \) be a disintegration of \( \alpha \) wrt \( \mathbb {W}^{0}_{\lambda } \) for which \( (\omega ,t) \mapsto \alpha _\omega ([0,t]) \) is measurable, \((\mathcal {F}^0_{t})_{t \ge 0}\)-adapted and maps into [0, 1]. Then \((\hat{\alpha }_\omega )_\omega \) defined by \( \hat{\alpha }_{\omega } := F \mapsto {\int }F(t) G(\omega ,t) \,d\alpha _{\omega }(t) \) is a disintegration of the measure in (6.5) for which \((\omega ,t) \mapsto \hat{\alpha }_\omega ([0,t]) \) is measurable, \((\mathcal {F}^0_{t})_{t \ge 0}\)-adapted and maps into [0, 1]. \(\square \)
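On a discrete grid, Lemma 6.12 reduces to the observation that multiplying the disintegration weights along each path by adapted values in [0, 1] cannot push the cumulative stopping mass above its previous level. A minimal numeric sketch of (6.5), with all weights invented for illustration:

```python
import numpy as np

# xi_omega({t_i}): stopping weights of one fixed path, total mass <= 1
xi_weights = np.array([0.2, 0.5, 0.3])
# G(omega, t_i) in [0, 1]; adapted means its value at t_i may depend
# only on the path up to time t_i
G_values = np.array([1.0, 0.4, 0.8])

# disintegration of the thinned measure in (6.5) along the same path
thinned = xi_weights * G_values
```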
Lemma 6.13
(Strong Markov property for RSTs) Let \( \alpha \in \mathsf {RST}_{\lambda } \). Then
$$\begin{aligned} {\int }F(\omega ,t) \,d\alpha (\omega ,t) = {\iint }F((\omega ,t) \odot \tilde{\omega }, t) \,d\mathbb {W}^{t}_{0}(\tilde{\omega }) \,d\alpha (\omega ,t) \end{aligned}$$
for all bounded measurable \( F : C(\mathbb {R}_{+})\times \mathbb {R}_{+}\rightarrow \mathbb {R}\).
Proof
Using integral notation instead of the more conventional \(\mathbb E\), we may write the classical form of the strong Markov property as
$$\begin{aligned}&{\int }G\left( \Theta _{\tau (\omega )}(\omega )\right) H(\omega ) \cdot 1_{\mathbb {R}_{+}}(\tau (\omega )) \,d\mathbb {W}^{0}_{\lambda }(\omega ) \\&\quad ={\iint }G(\tilde{\omega }) H(\omega ) \cdot 1_{\mathbb {R}_{+}}(\tau (\omega )) \,d\mathbb {W}^{\tau (\omega )}_{\omega (\tau (\omega ))}(\tilde{\omega }) \,d\mathbb {W}^{0}_{\lambda }(\omega ) \end{aligned}$$
for all bounded measurable \(G : C(\mathbb {R}_{+})\rightarrow \mathbb {R}\) and all bounded \(\mathcal {F}^0_\tau \)-measurable \(H : C(\mathbb {R}_{+})\rightarrow \mathbb {R}\). Here \(\Theta _t\) is the function which cuts off the initial segment of a path up to time t. From this a simple monotone class argument shows that
$$\begin{aligned}&{\int }K\left( \Theta _{\tau (\omega )}(\omega ),\omega \right) \cdot 1_{\mathbb {R}_{+}}(\tau (\omega )) \,d\mathbb {W}^{0}_{\lambda }(\omega ) \\&\quad = {\iint }K(\tilde{\omega },\omega ) \cdot 1_{\mathbb {R}_{+}}(\tau (\omega )) \,d\mathbb {W}^{\tau (\omega )}_{\omega (\tau (\omega ))}(\tilde{\omega }) \,d\mathbb {W}^{0}_{\lambda }(\omega ) \end{aligned}$$
for all bounded \(\mathcal {F}^0_\infty \otimes \mathcal {F}^0_\tau \)-measurable \(K : C(\mathbb {R}_{+})\times C(\mathbb {R}_{+})\rightarrow \mathbb {R}\).
We may then choose for \(K(\tilde{\omega }, \omega )\) the function \(F(\eta , \tau (\omega ))\) where the path \(\eta \) is created by cutting off the tail of \(\omega \) after time \(\tau (\omega )\) and attaching \(\tilde{\omega }\) in its place. Noting the relationship between \(\mathbb {W}^{\tau (\omega )}_{x}\) and \(\mathbb {W}^{\tau (\omega )}_{0}\) we then get
$$\begin{aligned}&{\int }F(\omega ,\tau (\omega )) \cdot 1_{\mathbb {R}_{+}}(\tau (\omega )) \,d\mathbb {W}^{0}_{\lambda }(\omega )\\&\quad = {\iint }F((\omega ,\tau (\omega )) \odot \tilde{\omega },\tau (\omega )) \cdot 1_{\mathbb {R}_{+}}(\tau (\omega )) \,d\mathbb {W}^{\tau (\omega )}_{0}(\tilde{\omega }) \,d\mathbb {W}^{0}_{\lambda }(\omega ) \text {.}\end{aligned}$$
Using Lemma 5.3 with \(\Omega = [0,1] \times C(\mathbb {R}_{+})\) and
we find a
-stopping time \(\tau \) such that we may write \(\alpha \) as
(where \(\mathcal {L}\) is Lebesgue measure on [0, 1]). For a fixed \(y \in [0,1]\), \(\omega \mapsto \tau (y,\omega )\) is an \((\mathcal {F}^0_{t})_{t \ge 0}\)-stopping time, so we may apply the previous equation to these stopping times and integrate over \(y \in [0,1]\) to get
$$\begin{aligned}&{\int }F(\omega ,\tau (y,\omega )) \cdot 1_{\mathbb {R}_{+}}(\tau (y,\omega )) \,d(\mathcal {L}\otimes \mathbb {W}^{0}_{\lambda })(y,\omega )\\&\quad = {\iint }F((\omega ,\tau (y,\omega )) \odot \tilde{\omega },\tau (y,\omega )) \cdot 1_{\mathbb {R}_{+}}(\tau (y,\omega )) \,d\mathbb {W}^{\tau (y,\omega )}_{0}(\tilde{\omega }) \,d(\mathcal {L}\otimes \mathbb {W}^{0}_{\lambda }) (y,\omega ) \text {.}\end{aligned}$$
Using the equation for \(\alpha \) we see that this is what we wanted to prove. \(\square \)
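A Monte Carlo sanity check of the identity in Lemma 6.13 in the special case of Brownian motion started at 0, with the (non-randomized) stopping time "first exit from \((-1/2, 1/2)\), capped at 1" and \(F(\omega ,t) = \omega (1)^2\): regrowing the path after the stopping time from an independent Brownian continuation leaves the expectation (here equal to 1) unchanged. The discretization, tolerance, and all names are our own.

```python
import numpy as np

rng = np.random.default_rng(1)
DT, T = 0.01, 1.0
N_STEPS = int(T / DT)

def bm_path():
    """Discretized Brownian path on [0, T] started at 0."""
    return np.concatenate([[0.0],
                           np.cumsum(rng.normal(0.0, np.sqrt(DT), N_STEPS))])

def tau_index(omega, level=0.5):
    """Grid index of the first exit from (-level, level), capped at T."""
    hits = np.flatnonzero(np.abs(omega) >= level)
    return int(hits[0]) if hits.size else N_STEPS

lhs_samples, rhs_samples = [], []
for _ in range(5000):
    omega = bm_path()
    lhs_samples.append(omega[-1] ** 2)           # F(omega, tau(omega))
    k = tau_index(omega)
    fresh = bm_path()                            # independent continuation
    glued = np.concatenate([omega[: k + 1],
                            omega[k] + fresh[1: N_STEPS - k + 1]])
    rhs_samples.append(glued[-1] ** 2)           # F((omega,tau) ⊙ fresh, tau)

lhs_mean, rhs_mean = np.mean(lhs_samples), np.mean(rhs_samples)
```

Both sample means should be close to \(\mathbb {E}[\omega (1)^2] = 1\), illustrating that the glued process is again a Brownian motion.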
Lemma 6.14
(Gardener’s Lemma) Assume that we have \(\xi \in \mathsf {RST}_{\lambda }(\mathcal {P})\), a measure \(\alpha \) on \(C(\mathbb {R}_{+})\times \mathbb {R}_{+}\) and two families \( \beta ^{(\omega ,t)} \), \( \gamma ^{(\omega ,t)} \), where \( (\omega ,t) \in C(\mathbb {R}_{+})\times \mathbb {R}_{+}\), with
such that both maps
$$\begin{aligned} (\omega ,t)&\mapsto {\int }1_{D}\left( (\omega ,t) \odot \tilde{\omega },s\right) \,d\beta ^{(\omega ,t)}(\tilde{\omega },s) \text { and } \\ (\omega ,t)&\mapsto {\int }1_{D}\left( (\omega ,t) \odot \tilde{\omega },s\right) \,d\gamma ^{(\omega ,t)}(\tilde{\omega },s) \end{aligned}$$
are measurable for all Borel \(D \subseteq C(\mathbb {R}_{+})\times \mathbb {R}_{+}\) and that
$$\begin{aligned} \xi (D) - {\iint }1_{D}\left( (\omega ,t) \odot \tilde{\omega },s\right) \,d\beta ^{(\omega ,t)}(\tilde{\omega },s) \,d\alpha (\omega ,t) \ge 0 \end{aligned}$$
(6.6)
for all Borel \(D \subseteq C(\mathbb {R}_{+})\times \mathbb {R}_{+}\). Then for \(\hat{\xi }\) defined by
$$\begin{aligned} {\int }F \,d\hat{\xi } := {\int }F \,d\xi&- {\iint }F((\omega ,t) \odot \tilde{\omega },s) \,d\beta ^{(\omega ,t)}(\tilde{\omega },s) \,d\alpha (\omega ,t) \\&+ {\iint }F((\omega ,t) \odot \tilde{\omega },s) \,d\gamma ^{(\omega ,t)}(\tilde{\omega },s) \,d\alpha (\omega ,t) \end{aligned}$$
for all bounded measurable F we have \(\hat{\xi } \in \mathsf {RST}_{\lambda }(\mathcal {P})\).
Remark 6.15
The intuition behind the Gardener’s Lemma is that we are replacing certain branches \( \beta ^{(\omega ,t)} \) of the randomized stopping time \( \xi \) by other branches \( \gamma ^{(\omega ,t)} \) to obtain a new stopping time \( \hat{\xi } \). This process happens along the measure \(\alpha \). Note that (6.6) implies that \({\int }1_{D}\left( (\omega ,t) \odot \tilde{\omega }\right) \,d\mathbb {W}^{t}_{0}(\tilde{\omega }) \,d\alpha (\omega ,t) \le \mathbb {W}^{0}_{\lambda }(D) \) for all Borel \(D \subseteq C(\mathbb {R}_{+})\). The authors like to think of \(\alpha \) as a stopping time and of the maps \((\omega ,t) \mapsto \beta ^{(\omega ,t)}\) and \((\omega ,t) \mapsto \gamma ^{(\omega ,t)}\) as adapted (in some sense that would need to be made precise). As these assumptions aren’t necessary for the proof of the Gardener’s Lemma, they were left out, but it might help the reader’s intuition to keep them in mind.
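The branch-replacement intuition of Remark 6.15 can be made concrete in a finite toy example: remove the mass that a branch \(\beta \) places on certain stopping times and plant the same amount of mass along \(\gamma \). Condition (6.6) is exactly what keeps the new weights nonnegative, and matching branch masses keeps the total mass unchanged. All numbers below are invented for illustration.

```python
# Current stopping distribution over three possible stopping times
xi    = {"u1": 0.3, "u2": 0.3, "v1": 0.4}
# Branch to prune and branch to plant (both probability measures on
# continuations), applied with alpha-mass a at one fixed node (omega, t)
beta  = {"u1": 0.5, "u2": 0.5}
gamma = {"v1": 1.0}
a = 0.4

# hat{xi} := xi - a * beta + a * gamma, as in the Gardener's Lemma
xi_hat = dict(xi)
for u, w in beta.items():
    xi_hat[u] -= a * w   # (6.6) guarantees this never goes negative
for v, w in gamma.items():
    xi_hat[v] += a * w
```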
Proof of Lemma 6.14
We need to check that the \(\hat{\xi }\) we define is indeed a measure, that \((\mathsf {proj}_{C(\mathbb {R}_{+})})_*(\hat{\xi }) = \mathbb {W}^{0}_{\lambda }\) and that (5.1) holds for \(\hat{\xi }\).
Checking that \(\hat{\xi }\) is a measure is routine; we just note that (6.6) guarantees that \(\hat{\xi }(D) \ge 0 \) for all Borel D.
Let \(G: C(\mathbb {R}_{+})\rightarrow \mathbb {R}\) be a bounded measurable function.
$$\begin{aligned} {\int }G(\omega ) \,d\hat{\xi }(\omega ,t)&= {\int }G(\omega ) \,d\xi (\omega ,t) - {\iint }G((\omega ,t) \odot \tilde{\omega }) \,d\beta ^{(\omega ,t)}(\tilde{\omega },s) \,d\alpha (\omega ,t) \\&\quad + {\iint }G((\omega ,t) \odot \tilde{\omega }) \,d\gamma ^{(\omega ,t)}(\tilde{\omega },s) \,d\alpha (\omega ,t) \\&= {\int }G \,d\mathbb {W}^{0}_{\lambda } - {\iint }G((\omega ,t) \odot \tilde{\omega }) \,d\mathbb {W}^{t}_{0} \,d\alpha (\omega ,t) \\&\quad + {\iint }G((\omega ,t) \odot \tilde{\omega }) \,d\mathbb {W}^{t}_{0} \,d\alpha (\omega ,t) \\&= {\int }G \,d\mathbb {W}^{0}_{\lambda } \end{aligned}$$
Now let \(F : \mathbb {R}_{+}\rightarrow \mathbb {R}\) and \(G: C(\mathbb {R}_{+})\rightarrow \mathbb {R}\) be bounded continuous functions, with F supported on [0, r].
The first summand is 0 because \(\xi \in \mathsf {RST}_{\lambda }(\mathcal {P})\). Looking at the second summand we expand the definition of
.
whenever \(t \le r\), which is the case for those t which are relevant in the integrand above, because \( F(s) \ne 0 \) implies \( s \le r \) and moreover \(\beta ^{(\omega ,t)}\) is concentrated on \((\tilde{\omega },s)\) for which \( t \le s \).
Setting \(\hat{G}^{(\omega ,t)}(\tilde{\omega }) := G((\omega ,t) \odot \tilde{\omega })\) and
we can write
which is 0 because
and therefore
for all \((\omega ,t)\) and \(r \ge t\). The same argument works for the third summand in (6.7). \(\square \)
Proof of Proposition 6.10
We prove the contrapositive. Assuming that there exists a \( \pi ' \in \mathsf {JOIN}_{\lambda }(r_*(\xi )) \) with \( \left( r \times \mathsf {Id}\right) _*(\pi ')(\mathsf {SG}^\xi ) > 0 \), we construct a \( \xi ^\pi \in \mathsf {RST}_{\lambda }(\mu ) \) such that \( {\int }c\,d\xi ^\pi < {\int }c\,d\xi \).
If \(\pi ' \in \mathsf {JOIN}_{\lambda }(r_*(\xi ))\), then for any two measurable sets \(D_1,D_2 \subseteq S\), because
and by making use of Lemma 6.12 we can deduce that
. Using the monotone class theorem this extends to any measurable subset of \(S \times S\) in place of \(D_1 \times D_2\). So we can set
and know that \((\mathsf {proj}_{C(\mathbb {R}_{+})\times \mathbb {R}_{+}})_*(\pi ) \in \mathsf {RST}_{\lambda }\) and that \(\pi \) is concentrated on \(\mathsf {SG}^\xi \).
We will use a disintegration of \( \pi \) wrt \(r_*(\xi )\), which we call \( \left( \pi _{(g,t)}\right) _{(g,t) \in S} \) and for which we assume that \(\pi _{(g,t)}\) is a subprobability measure for all \((g,t) \in S\). It will also be useful to assume that \( \pi _{(g,t)} \) is concentrated on the set \( \{ (\omega ,s) \in C(\mathbb {R}_{+})\times \mathbb {R}_{+}: s = t \} \) not just for \( r_*(\xi ) \)-almost all (g, t) but for all (g, t); again this is no loss of generality. We will also push \( \pi \) onto \( \left( C(\mathbb {R}_{+})\times \mathbb {R}_{+}\right) \times \left( C(\mathbb {R}_{+})\times \mathbb {R}_{+}\right) \), defining a measure \(\bar{\pi }\) via
$$\begin{aligned} {\int }F \,d\bar{\pi } := {\iint }F\left( (\omega ,s),((g,t) \odot \tilde{\eta }, t)\right) \,d\mathbb {W}^{t}_{0}(\tilde{\eta }) \,d\pi \left( (\omega ,s),(g,t)\right) \end{aligned}$$
for all bounded measurable F. Observe that by Lemma 6.13 the pushforward of \(\pi \) under projection onto the second coordinate (pair) is \(\xi \) and that a disintegration of \(\bar{\pi }\) wrt \(\xi \) (again in the second coordinate) is given by \(\left( \pi _{r(\eta ,t)}\right) _{(\eta ,t) \in C(\mathbb {R}_{+})\times \mathbb {R}_{+}}\). Let us name \((\mathsf {proj}_{C(\mathbb {R}_{+})\times \mathbb {R}_{+}})_*(\pi ) =: \zeta \in \mathsf {RST}_{\lambda } \). We will now use the Gardener's Lemma to define two modifications \(\xi _{0}^\pi \), \(\xi _{1}^\pi \) of \(\xi \) such that \(\xi ^\pi := \frac{1}{2}(\xi _{0}^\pi + \xi _{1}^\pi )\) is our improved randomized stopping time.
For all bounded measurable \(F : C(\mathbb {R}_{+})\times \mathbb {R}_{+}\rightarrow \mathbb {R}\) define
The concatenation on the last line is well-defined \(\bar{\pi }\)-almost everywhere because \(\bar{\pi }\) is concentrated on \( (r \times r)^{-1}\left[ \mathsf {SG}^\xi \right] \) and so in the integrand above \(s = t\) on a set of full measure.
We need to check that the Gardener’s Lemma applies in both cases. First of all observe that the product measure \( \mathbb {W}^{t}_{0} \otimes \delta _t \) is in
and that Lemma 6.13 implies
$$\begin{aligned} {\int }F(\omega ,t) \,d\alpha (\omega ,t) = {\iint }F((\omega ,t) \odot \tilde{\omega }, s) \,d\left( \mathbb {W}^{t}_{0} \otimes \delta _t\right) (\tilde{\omega },s) \,d\alpha (\omega ,t) \text {.}\end{aligned}$$
for any randomized stopping time \(\alpha \). So for \(\xi _{0}^\pi \) the measures \(\gamma ^{(\omega ,t)}\) are given by \( \mathbb {W}^{t}_{0} \otimes \delta _t \) and for \(\xi _{1}^\pi \) the measures \(\beta ^{(\omega ,t)}\) are given by \( \mathbb {W}^{t}_{0} \otimes \delta _t \).
For \( \xi _{0}^\pi \) the measure along which we are replacing branches is given by
$$\begin{aligned} F \mapsto {\int }F(\omega ,s) (1-\xi _{\omega }([0,s])) \,d\zeta (\omega ,s) \text {.}\end{aligned}$$
The branches \( \beta ^{(\omega ,s)} \) we remove are \(\xi ^{r(\omega ,s)} \). We need to check that
$$\begin{aligned} {\int }F \,d\xi - {\int }(1-\xi _{\omega }([0,s])) {\int }F((\omega ,s) \odot \tilde{\omega }, u) \,d\xi ^{r(\omega ,s)}(\tilde{\omega },u) \,d\zeta (\omega ,s) \ge 0 \end{aligned}$$
for all positive, bounded, measurable \( F : C(\mathbb {R}_{+})\times \mathbb {R}_{+}\rightarrow \mathbb {R}\). Let us calculate.
Here we first used the definition of \( \xi ^{r(\omega ,s)} \) and then Lemma 6.13 and finally that \((\mathsf {proj}_{C(\mathbb {R}_{+})})_*(\zeta ) \le \mathbb {W}^{0}_{\lambda }\).
For \(\xi _{1}^\pi \) we replace branches along
$$\begin{aligned} F&\mapsto {\int }F(\eta ,t) (1-\xi _{\omega }([0,s])) \,d\bar{\pi }\left( (\omega ,s),(\eta ,t)\right) \\&= {\int }F(\eta ,t) {\int }(1-\xi _{\omega }([0,s])) \,d\pi _{r(\eta ,t)}(\omega ,s) \,d\xi (\eta ,t) \text {.}\end{aligned}$$
The calculation above shows that
$$\begin{aligned} {\int }F \,d\xi - {\int }(1-\xi _{\omega }([0,s])) F(\eta ,t) \,d\bar{\pi }\left( (\omega ,s),(\eta ,t)\right) \ge 0 \end{aligned}$$
for all positive, bounded, measurable \( F : C(\mathbb {R}_{+})\times \mathbb {R}_{+}\rightarrow \mathbb {R}\). For \(\xi _{1}^\pi \) the branches \(\gamma ^{(\eta ,t)}\) that we add are given by
$$\begin{aligned} F \mapsto \frac{ {\int }(1-\xi _{\omega }([0,s])) {\int }F(\tilde{\omega },u) \,d\xi ^{r(\omega ,s)}(\tilde{\omega },u) \,d\pi _{r(\eta ,t)}(\omega ,s) }{ {\int }(1-\xi _{\omega }([0,s])) \,d\pi _{r(\eta ,t)}(\omega ,s) } \end{aligned}$$
when \({\int }(1-\xi _{\omega }([0,s])) \,d\pi _{r(\eta ,t)}(\omega ,s) > 0\) and \( \delta _t \) otherwise (again, the latter is arbitrary). In the more interesting case \( \gamma ^{(\eta ,t)} \) is an average over elements of
and therefore itself in
. Here it is again crucial that for \( \pi _{r(\eta ,t)} \)-almost all \((\omega ,s)\) we have \(s = t\), otherwise we would be averaging randomized stopping times of our process started at unrelated times.
Putting this together we see that \(\xi ^\pi := \frac{1}{2}(\xi _{0}^\pi + \xi _{1}^\pi )\) is a randomized stopping time and that
$$\begin{aligned}&2 {\int }F \,d(\xi ^\pi - \xi ) = {\int }(1-\xi _{\omega }([0,s])) \Big ( F(\omega ,s) - {\int }F((\omega ,s) \odot \tilde{\omega }, u) \,d\xi ^{r(\omega ,s)}(\tilde{\omega },u) \nonumber \\&\quad - F(\eta ,t) + {\int }F((\eta ,t) \odot \tilde{\omega },u) \,d\xi ^{r(\omega ,s)}(\tilde{\omega }, u) \Big ) \,d\bar{\pi }((\omega ,s),(\eta ,t)) \end{aligned}$$
(6.8)
for all bounded measurable \( F : C(\mathbb {R}_{+})\times \mathbb {R}_{+}\rightarrow \mathbb {R}\). Specializing to \(F(\omega ,s) = G(s)\) for \(G : \mathbb {R}_{+}\rightarrow \mathbb {R}\) bounded measurable we find that
$$\begin{aligned} {\int }G(s) \,d(\xi -\xi ^\pi )(\omega ,s) = 0 \text { ,} \end{aligned}$$
again because for \(\bar{\pi }\)-almost all \(\left( (\omega ,s),(\eta ,t)\right) \) we have \(s=t\). This shows that \(\xi ^\pi \in \mathsf {RST}_{\lambda }(\mu )\).
We now want to extend (6.8) to \(c\). We first show that (6.8) also holds for \(F: C(\mathbb {R}_{+})\times \mathbb {R}_{+}\rightarrow \mathbb {R}\) which are measurable and positive and for which \({\int }F \,d\xi < \infty \). To see this, approximate such an F from below by bounded measurable functions (for which (6.8) holds) and note that by previous calculations both
Looking at positive and negative parts of \(c\) and using Assumption 2.4 to see that \( {\int }c_{-} \,d(\xi ^\pi -\xi ) \in \mathbb {R}\) we get that indeed (6.8) holds for \(F = c\).
Now we will argue that the integrand on the right-hand side of (6.8) is negative \(\bar{\pi }\)-almost everywhere. This will conclude the proof.
By inserting an r in appropriate places we can read off from Definition 6.4 what it means that \(\bar{\pi }\) is concentrated on \((r \times r)^{-1}\left[ \mathsf {SG}^\xi \right] \). In the course of verifying that (6.8) applies to \(c\) we already saw that cases 2 and 3 in Definition 6.4 can only occur on a set of \(\bar{\pi }\)-measure 0. Lemma 6.6 excludes case 1 \(\bar{\pi }\)-almost everywhere. This means that (6.2) holds \(\bar{\pi }\)-almost everywhere—or more correctly, that for \(\bar{\pi }\)-a.a. \(((\omega ,s),(\eta ,t))\) we have \(s=t\) and
$$\begin{aligned}&c(\omega ,s) - {\int }c((\omega ,s) \odot \tilde{\omega }, u) \,d\xi ^{r(\omega ,s)}(\tilde{\omega },u) \nonumber \\&\quad - c(\eta ,t) + {\int }c((\eta ,t) \odot \tilde{\omega }, u) \,d\xi ^{r(\omega ,s)}(\tilde{\omega },u) < 0 \text {,}\end{aligned}$$
(6.9)
completing the proof. \(\square \)