1 Main results

Let \((\Omega , \mathcal {F},P)\) be a complete probability space and let \(w_{\cdot }\) and \(b_{\cdot }\) be independent one-dimensional standard Wiener processes on this space. Fix some constants \(\sigma ,\sigma _{1}>0\) and consider the equation

$$\begin{aligned} x_{t}=x_{0}+\sigma _{1}w_{t}+\sigma b_{t}, \end{aligned}$$
(1.1)

where \( x_{0}\) is independent of the couple \((w_{\cdot },b_{\cdot })\) and has density \( \pi _{0}\in C_{0}^{\infty }=C_{0}^{\infty }(G)\) concentrated on G, where \(G=(0,1)\). Define

$$\begin{aligned} \mathcal {F}^{b_{\cdot }}_{t}= & {} \sigma (b_{s}:s\le t), \quad \tau =\inf \{t\ge 0:x_{t}\not \in G\},\\ A_{t}= & {} P(\tau \le t\mid \mathcal {F}^{b_{\cdot }}_{t}). \end{aligned}$$
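For orientation, the objects just introduced are easy to simulate. The following Monte Carlo sketch (ours, not from the paper; the parameter values and the uniform stand-in for \(\pi _{0}\) are arbitrary choices) fixes one observation path \(b_{\cdot }\) and estimates \(A_{t}\) by averaging over independent copies of \((x_{0},w_{\cdot })\).

```python
import numpy as np

# Illustrative Monte Carlo sketch: fix one observation path b and estimate
# A_t = P(tau <= t | F^b_t) by averaging over independent samples of (x_0, w).
rng = np.random.default_rng(0)
sigma1, sigma = 0.2, 1.0
T, n_steps, n_paths = 1.0, 400, 2000
dt = T / n_steps

b = np.cumsum(sigma * np.sqrt(dt) * rng.standard_normal(n_steps))  # sigma * b_t
x0 = rng.uniform(0.25, 0.75, size=n_paths)                         # stand-in for a density pi_0 on G = (0,1)
w = np.cumsum(np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)), axis=1)
x = x0[:, None] + sigma1 * w + b[None, :]                          # x_t = x_0 + sigma1*w_t + sigma*b_t

inside = np.minimum.accumulate((x > 0.0) & (x < 1.0), axis=1)      # path still in G up to step k
A = 1.0 - inside.mean(axis=0)                                      # estimate of A_{t_k} for this b
```

By construction the estimate is nondecreasing in t, in line with the modification produced in Theorem 1.1.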

Here is our main result.

Theorem 1.1

There is a continuous and nondecreasing modification of \( A_{t}\); moreover, this modification is singular with respect to Lebesgue measure provided that \( \sigma _{1}/\sigma \) is sufficiently small.

It is not hard to see that for any \(s\ge t\) we have \(A_{t}=P(\tau \le t\mid \mathcal {F}^{b_{\cdot }}_{s})\) (a.s.), which makes the fact that \(A_{t}\) admits a nondecreasing modification quite natural. The author is sure that the smallness assumption on \(\sigma _{1}/\sigma \) can be dropped, but a proof of this fact is unknown to him.

The process \(A_{t}\) in a more general multi-dimensional framework arose in [6] as the main process governing the conditional distribution of a signal process \(x_{t}\) at the first time when it exits from a given domain. In [6] the observations \(y_{t}\) (\(=b_{t}\) in our case) were only available until the first exit time of \(x_{t}\) from the domain. It turns out that in the setting of (1.1) the conditional and the so-called unnormalized conditional distributions of \(x_{t}\) before it exits from G given \(y_{s},0\le s \le t\), coincide. These unnormalized conditional distributions are known to satisfy some linear stochastic partial differential equations and then the properties of \(A_{t}\) can be recovered from some properties of solutions of these equations.

To be more precise, for \((t,x)\in (0,\infty )\times (0,1)\) consider the following (filtering) equation

$$\begin{aligned} d\pi _{t}(x)=(1/2) a D^{2}\pi _{t}(x)\,dt -\sigma D\pi _{t}(x)\,db_{t}, \end{aligned}$$
(1.2)

where \(a= \sigma _{1}^{2}+\sigma ^{2} \), with initial condition \(\pi _{0}(x)\) and zero lateral condition.
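For intuition, (1.2) can be discretized directly. The following sketch (ours, not part of the paper's argument; all parameters are illustrative) uses an explicit Euler–Maruyama step in t and central differences in x, with the zero lateral condition imposed through ghost values.

```python
import numpy as np

# Informal finite-difference sketch of (1.2):
#   d pi = (a/2) D^2 pi dt - sigma D pi db,  pi = 0 at x = 0 and x = 1.
rng = np.random.default_rng(1)
sigma1, sigma = 0.3, 1.0
a = sigma1 ** 2 + sigma ** 2
m, T, dt = 99, 0.1, 2e-5                         # grid and time step chosen for stability
h = 1.0 / (m + 1)

x = np.linspace(h, 1.0 - h, m)                   # interior points of G = (0,1)
u = (x - 0.5) / 0.2
pi = np.where(np.abs(u) < 1.0, np.exp(-1.0 / np.maximum(1.0 - u ** 2, 1e-12)), 0.0)
pi /= h * pi.sum()                               # smooth compactly supported bump, (pi_0, 1) = 1
mass0 = h * pi.sum()

for _ in range(int(T / dt)):
    db = np.sqrt(dt) * rng.standard_normal()     # increment of b
    p = np.concatenate(([0.0], pi, [0.0]))       # zero lateral condition via ghost values
    pi = (pi + 0.5 * a * dt * (p[2:] - 2.0 * p[1:-1] + p[:-2]) / h ** 2
          - sigma * db * (p[2:] - p[:-2]) / (2.0 * h))

mass = h * pi.sum()                              # (pi_T, 1); mass leaks through the boundary
```

The total mass \((\pi _{t},1)\) shrinks as probability flows out of G, which foreshadows the role of \(1-(\pi _{t},1)\) below.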

To explain in which sense we understand this equation, the initial condition, and the boundary condition, we need some notation. Introduce the space \(W^{1}_{2}=W^{1}_{2}(G)\) as the closure of the set of continuously differentiable functions in \(\bar{G}\) in the norm

$$\begin{aligned} \Vert u\Vert _{W^{1}_{2}}=\Vert u\Vert _{L_{2}}+\Vert Du\Vert _{L_{2}}, \end{aligned}$$

where Du is the derivative of u and \(L_{2}=L_{2}(G)\), and we introduce \(\overset{\scriptscriptstyle 0}{ W}\,\!^{1}_{2}=\overset{\scriptscriptstyle 0}{ W}\,\!^{1}_{2}(G)\) as the closure of \(C^{\infty }_{0}=C^{\infty }_{0}(G)\) in the above norm.

Denote by \( \mathcal {P}^{b_{\cdot }}\) the predictable \( \sigma \)-field in \( \Omega \times (0,\infty )\) associated with the filtration \(\{ \mathcal {F}^{b_{\cdot }}_{t}\}\). For \( T\in (0,\infty )\) introduce

$$\begin{aligned} G_{T}=(0,T)\times G,\quad \overset{\scriptscriptstyle 0}{ W}\,\!^{1}_{2}(G_{T})=L_{2}\left( \Omega \times (0,T),\mathcal {P}^{b_{\cdot }},\overset{\scriptscriptstyle 0}{ W}\,\!^{1}_{2}\right) . \end{aligned}$$

We are looking for a function \( \pi _{t}(x)\) which is a generalized function on G for each \( (\omega ,t)\in \Omega \times [0,\infty )\) such that \( \pi \in \cap _{T} \overset{\scriptscriptstyle 0}{ W}\,\!^{1}_{2}(G_{T})\) and for each \( \zeta \in C^{\infty }_{0} \) with probability one for all \(t\in [0,\infty )\) we have

$$\begin{aligned} (\pi _{t},\zeta )=(\pi _{0},\zeta ) -\int _{0}^{t}(1/2)\left( a D \pi _{s} ,D \zeta \right) \,ds-\int _{0}^{t} \left( \sigma D \pi _{s} ,\zeta \right) \,db_{s}, \end{aligned}$$
(1.3)

where we use the notation

$$\begin{aligned} (f,g)=\int _{G}f(x)g(x)\,dx. \end{aligned}$$

Observe that all expressions in (1.3) are well defined due to the fact that the coefficients of \( \pi \) and of \( D \pi \) are constant and

$$\begin{aligned} \pi , D \pi \in L_{2}(T):= L_{2}(\Omega \times (0,T), \mathcal {P}^{b_{\cdot }},L_{2}) \end{aligned}$$

for any \( T\in (0,\infty )\).

Recall that by assumption \(\pi _{0}\in C_{0}^{\infty }\).

Theorem 1.2

In the class \(\bigcap _{T}\overset{\scriptscriptstyle 0}{ W}\,\!^{1}_{2}(G_{T})\) there exists a unique solution \(\pi _{t}\) of equation (1.2) with initial condition \(\pi _{0}\). In addition, \( \pi _{t}\ge 0 \) for all \(t\in [0,\infty )\) (a.s.). With probability one \(\pi _{t}\) is continuous in \(L_{1}=L_{1}(G)\) and in \(L_{2}\).

The existence, uniqueness, and the (a.s.) continuity in \(L_{2}\) of \(\pi \) is a classical result proved in many places in a variety of settings (see, for instance, [5, 7, 8], and the references therein). That \(\pi _{t}\) is (a.s.) continuous as an \(L_{1}\)-function follows from its \(L_{2}\)-continuity and the boundedness of G. The fact that \(\pi \ge 0\) follows from the maximum principle (see, for instance, Theorem 1.1 of [3]) and the fact that, if \(u\in \overset{\scriptscriptstyle 0}{ W}\,\!^{1}_{2}\), then \(u^{+}\in \overset{\scriptscriptstyle 0}{ W}\,\!^{1}_{2}\).

The connection of \(A_{t}\) to \(\pi _{t}\) is established on the basis of Lemma 1.8 of [6], which in our situation reads as follows.

Lemma 1.3

For any Borel bounded or nonnegative function \(\phi \) on G and \(t\in [0,\infty )\) we have (a.s.)

$$\begin{aligned} E\big \{I_{\tau >t}\phi (x_{t}) \mid \mathcal {F}^{b_{\cdot }}_{t}\}=(\pi _{t},\phi ). \end{aligned}$$
(1.4)

In particular, for each \(t\in [0,\infty )\) (a.s.)

$$\begin{aligned} P\{\tau >t\mid \mathcal {F}^{b_{\cdot }}_{t}\} =(\pi _{t},1) . \end{aligned}$$
(1.5)

Finally, (a.s.) we have \( (\pi _{t},1)>0\) for all \(t\in [0,\infty )\).

By Lemma 1.3 for any \(t\in [0,\infty )\)

$$\begin{aligned} P(\tau \le t\mid \mathcal {F}^{b_{\cdot }}_{t})=1-(\pi _{t},1) \end{aligned}$$

(a.s.) and by Theorem 1.2 the right-hand side is continuous in t (a.s.). Also it turns out (see [6]) that the process \( (\pi _{t},1)\) is decreasing (a.s.). Therefore, in Theorem 1.1 by the modification of \(A_{t}\), which we identify with the original \(A_{t}\), we mean \(1-(\pi _{t},1)\).

Observe that if in (1.3) we were allowed to first integrate by parts to replace

$$\begin{aligned} (D\pi _{s},D\zeta )\quad \text {with}\quad -(D^{2}\pi _{s},\zeta ) \end{aligned}$$

and then in the so-modified version of (1.3) take \(\zeta \equiv 1\) (\(\not \in C^{\infty }_{0}\)), then we would formally obtain that

$$\begin{aligned} A_{t}=1-(\pi _{t},1)=(1/2)a\int _{0}^{t}[D\pi _{s}(0)- D\pi _{s}(1)]\,ds. \end{aligned}$$
(1.6)

This shows that \(A_{t}\) is related to the normal derivative of \(\pi _{s}\) on the boundary of G, the investigation of which is carried out on the basis of a different description of \(\pi _{s}\).

We are going to state our second main result, which is about solutions of the heat equation in curvilinear cylinders whose lateral boundary consists of a trajectory of a Wiener process and of its parallel shift.

Theorem 1.4

For almost any \(\omega \) there exists a unique function \(u_{t}(x)\) defined, bounded, and continuous in the closure of

$$\begin{aligned} \Gamma (b_{\cdot })= \{(t,x):t\ge 0,\quad x\in (\sigma b_{t},1+\sigma b_{t})\} \end{aligned}$$

such that it is infinitely differentiable with respect to \((t,x)\) in \(\Gamma (b_{\cdot })\), satisfies there the equation

$$\begin{aligned} \partial _{t}u_{t}=(1/2)\sigma _{1}^{2}D^{2}u_{t} \end{aligned}$$
(1.7)

and satisfies the conditions \(u_{0}(x)=\pi _{0} (x)\), \(x\in [0,1]\), and \(u_{t}(\sigma b_{t}) =u_{t}(1+\sigma b_{t})=0\), \(t\ge 0\). Furthermore, if \(\sigma _{1}/\sigma \) is sufficiently small, then for any \(t\in [0,\infty )\)

$$\begin{aligned} \lim _{x\downarrow \sigma b_{t}}\frac{u_{t}(x)}{x-\sigma b_{t}} =\lim _{x\uparrow 1+\sigma b_{t}}\frac{u_{t}(x)}{ 1+\sigma b_{t}-x }=0 \end{aligned}$$
(1.8)

almost surely, so that the derivative of \(u_{t}(x)\) on the boundary of \(\Gamma (b_{\cdot })\) is zero (a.s.) for any fixed t.

Finally, with probability one

$$\begin{aligned} \int _{G}|u_{t}(x-\sigma b_{t})-\pi _{t}(x)|^{2}\,dx=0 \end{aligned}$$
(1.9)

for all \(t\ge 0\), so that \(u_{t}(x-\sigma b_{t})\) is a modification of \(\pi _{t}(x)\), and for this modification, for any \(t\in [0,\infty )\),

$$\begin{aligned} \lim _{x\downarrow 0}\frac{\pi _{t}(x)}{x} =\lim _{x\uparrow 1}\frac{\pi _{t}(x)}{1-x}=0 \end{aligned}$$
(1.10)

almost surely if \(\sigma _{1}/\sigma \) is sufficiently small.

Remark 1.1

Note that by the maximum principle

$$\begin{aligned} 0\le u_{t}\le \max _{G}\pi _{0}. \end{aligned}$$

The last statement of Theorem 1.4 makes the representation (1.6) dubious, and, even though there is a limit procedure showing that (1.6) holds in a generalized sense similar to that of the local time for Brownian motion (see [6]), one would rightfully suspect that \(A_{t}\) is not absolutely continuous with respect to t.

The last statement of Theorem 1.4 should not make the reader over-optimistic about the continuity properties of \(\pi _{t}(x)\) in x near the boundary of G (see Remark 1.2).

Still, the following theorem is easily derived from known results. Take \(\alpha \in (0,1)\) and \(c\in (0,\infty )\) and introduce

$$\begin{aligned} p= & {} p(c):=P\left( \sup _{t\le 1}w_{t}- \inf _{t\le 1}w_{t}\ge c(\sqrt{2}-1)/2\right) , \quad r=r(\alpha ,c):=\frac{\alpha (1-p )}{p (1-\alpha )},\\ \beta= & {} \beta (\alpha ,c):=2\frac{(r-1)p +1}{r^{\alpha }} =2\frac{1-p}{1-\alpha }r^{-\alpha }. \end{aligned}$$

As is easy to see, for any \(\alpha \in (0,1)\), we have \(p(c)\rightarrow 0\), \(r(\alpha ,c)\rightarrow \infty \), and \(\beta (\alpha ,c)\rightarrow 0\) as \(c\rightarrow \infty \). It follows from [2] (see the proof of Theorem 2.1 there) that there exists a function \(\alpha (c)\), \(c\in (0,\infty )\), with values in (0, 1) such that \(\alpha (c)\rightarrow 0\) as \(c\rightarrow \infty \) and \(\alpha (c)\le \alpha \) for any \(\alpha \) satisfying \(\beta (\alpha ,c)<1\).
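The equality of the two expressions for \(\beta \) is a short algebraic consequence of the definition of r, and the stated limiting behavior is visible numerically. In the check below (ours) the p-values are arbitrary stand-ins for p(c); we do not compute the Wiener-range probability itself.

```python
# Check of the identity 2*((r - 1)*p + 1)/r**alpha == 2*(1 - p)/((1 - alpha)*r**alpha),
# with r = alpha*(1 - p)/(p*(1 - alpha)), together with r -> infinity and
# beta -> 0 as p -> 0 (i.e., as c -> infinity).
alpha = 0.5
rows = []
for p in [0.3, 0.1, 0.01, 0.001]:                # decreasing stand-ins for p(c)
    r = alpha * (1.0 - p) / (p * (1.0 - alpha))
    beta1 = 2.0 * ((r - 1.0) * p + 1.0) / r ** alpha
    beta2 = 2.0 * (1.0 - p) / ((1.0 - alpha) * r ** alpha)
    rows.append((r, beta1, beta2))
```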

Next, take some constants \(c\ge 0\), \(d>0\) and for \(x\in \mathbb {R}\) define

$$\begin{aligned} \tau _{d,x}= & {} \inf \{t\ge 0:d/\sqrt{2}+w_{t}=x \},\\ \gamma (c,d )= & {} P(\tau _{d,d} \wedge (1/2)<\tau _{d,-c} ). \end{aligned}$$

Theorem 1.5

Take the modification of \(\pi _{t}\) from Theorem 1.4, \(T\in (0,\infty )\), and define \(\varepsilon =\sigma _{1}/\sigma \). Then for any \(t\in [0,\infty )\), \(c,d>0\), such that \(\alpha (c \varepsilon )<1\) and \(\nu \) satisfying

$$\begin{aligned} 0\le \nu < \nu _{0}:= (1-\alpha (c \varepsilon ))\log _{2}\gamma ^{-2}(c,d ), \end{aligned}$$

we have that with probability one

$$\begin{aligned} \sup _{x\in (0,1)}\sup _{t\in [0,T]}\frac{\pi _{t}( x)}{x^{\nu }}<\infty . \end{aligned}$$
(1.11)

Furthermore, there exists a constant \( \nu \in (0,\infty )\) such that

$$\begin{aligned} E\sup _{x\in (0,1)}\sup _{t\in [0,T]}\frac{\pi _{t}(x)}{x^{\nu }(1-x)^{\nu }}<\infty . \end{aligned}$$
(1.12)

Remark 1.2

The largest possible value of \(\nu \) in (1.11) is unknown. However, Theorem 5.1 and Lemma 4.1 of [1] show that if we take a \(\delta >0\) and

$$\begin{aligned} \nu =(1+\delta )(2\pi \varepsilon ^{2})^{-1/2} e^{-1/(2\varepsilon ^{2})}, \end{aligned}$$

then for \(\varepsilon =\sigma _{1}/\sigma \) small enough the left-hand side of (1.11) equals infinity with probability one. Therefore, the largest value of \(\nu \) is extremely small if \(\varepsilon \) is small.

Remark 1.3

The fact that equation (1.8) holds (a.s.) does not contradict Remark 1.2, because (1.11) gives an estimate which is uniform with respect to t, and on almost each trajectory of \(b_{\cdot }\) there are points t such that \(v_{t}(x)/x\rightarrow \infty \) as \(x\downarrow 0\).

We prove Theorem 1.1 in Sect. 2 assuming that Theorems 1.4 and 1.5 are true. In Sect. 3 we prove Theorems 1.4 and 1.5. The first assertion of Theorem 1.4 and Itô’s formula easily lead to the conclusion that \(u_{t}(x-\sigma b_{t})\) is a classical solution of (1.2) and the assertion concerning (1.9) is proved by showing that classical solutions coincide with generalized ones in a much more general situation in Sect. 4.

2 Proof of Theorem 1.1

We start by proving that for each \(t_{0}\in (0,\infty )\) and \(t_{n}=t_{0}+1/n\) with probability one

$$\begin{aligned} E\left( \mathop {\underline{\lim }}\limits _{n\rightarrow \infty }\frac{1}{t_{n}-t_{0}}(A_{t_{n}}-A_{t_{0}}) \mid \mathcal {F}^{b_{\cdot }}_{t_{0}}\right) =0. \end{aligned}$$
(2.1)

Observe that for any \(\zeta \in C^{\infty }_{0}\) and \(t\ge t_{0}\)

$$\begin{aligned} (\pi _{t},\zeta )=(\pi _{t_{0}},\zeta ) +\int _{t_{0}}^{t}\left( \pi _{s}, (1/2)aD^{2}\zeta \right) \,ds+\int _{t_{0}}^{t} ( \pi _{s},\sigma D \zeta ) \,db_{s},\end{aligned}$$
(2.2)

where, here and below, we are dealing with the modification of \(\pi _{t}\) from Theorem 1.4. We multiply both parts of this equation by the indicator function of a set \(F\in \mathcal {F}^{b_{\cdot }}_{t_{0}}\) and then take the expectations of both parts. Then by denoting

$$\begin{aligned} \phi ^{F}_{t}(x)=E\pi _{t}(x)I_{F} \end{aligned}$$

we find

$$\begin{aligned} (\phi ^{F}_{t},\zeta )=(\phi ^{F}_{t_{0}},\zeta ) +\int _{t_{0}}^{t}\left( \phi ^{F}_{s}, (1/2)aD^{2}\zeta \right) \,ds.\end{aligned}$$
(2.3)

Observe that \(\phi ^{F}_{t}(x)\) is continuous in \(\bar{G}_{\infty }\) because \(\pi \) is continuous and, in addition, bounded. Estimate (1.12) shows that \(\phi ^{F}_{t}(x)\rightarrow 0\) as \(x\rightarrow \{0,1\}\), \(x\in (0,1)\), \(t\ge 0\). Thus \(\phi ^{F}_{t}\) is a weak solution, continuous in \([t_{0},\infty )\times \bar{G}\), of the equation

$$\begin{aligned} \partial _{t}\eta _{t}=(1/2)aD^{2}\eta _{t}. \end{aligned}$$
(2.4)

By uniqueness of such solutions, \(\phi ^{F}_{t}\) is a classical solution of this equation with zero boundary data. Hence

$$\begin{aligned} \phi ^{F}_{t}(x)- \phi ^{F}_{t_{0}}(x)= & {} (1/2)a\int _{t_{0}}^{t}D^{2}\phi ^{F}_{s}(x) \,ds,\\ E(A_{t}-A_{t_{0}})I_{F}= & {} - \int _{0}^{1}\left[ \phi ^{F}_{t}(x)- \phi ^{F}_{t_{0}}(x)\right] \,dx\\= & {} -(1/2)a\int _{t_{0}}^{t}\int _{0}^{1}D^{2}\phi ^{F}_{s}(x)\,dxds\\= & {} (1/2)a\int _{t_{0}}^{t}\left[ D\phi ^{F}_{s}(0) -D\phi ^{F}_{s}(1)\right] \,ds. \end{aligned}$$

Observe that \(D\phi ^{F}_{s}(0)\ge 0\) and \(D\phi ^{F}_{s}(1)\le 0\) since \(\phi ^{F}_{s}\) is nonnegative in G and vanishes on the boundary of G. Furthermore, by the maximum principle we have \(\phi ^{F}_{t}\le \psi ^{F}_{t}\), \(t\ge t_{0}\), where \(\psi ^{F}_{t}\) is defined as a unique bounded classical solution of (2.4) for \(t\ge t_{0}\), \(x>0\), with initial data \(\psi ^{F}_{t_{0}}(x)=\phi ^{F}_{t_{0}}(x) I_{(0,1)}(x)\) and zero boundary condition. In particular, \(D\phi ^{F}_{s}(0)\le D\psi ^{F}_{s}(0)\), and

$$\begin{aligned} E(A_{t}-A_{t_{0}})I_{F}\le (1/2)a \int _{t_{0}}^{t}D\psi ^{F}_{s}(0) \,ds. \end{aligned}$$

The following explicit representation for such solutions is well known:

$$\begin{aligned} \psi ^{F}_{t}(x)= \int _{0}^{1} p(t,x,y)\phi ^{F}_{t_{0}}(y) \,dy, \end{aligned}$$

where

$$\begin{aligned} p(t,x,y)=\frac{1}{\sqrt{2\pi a(t-t_{0})}} \Bigg (\exp \Bigg [-\frac{(x-y)^{2}}{2 a(t-t_{0})}\Bigg ] -\exp \Bigg [-\frac{(x+y)^{2}}{2 a (t-t_{0})}\Bigg ]\Bigg ). \end{aligned}$$
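This is the standard reflection-principle kernel for absorption at 0, with the Gaussian prefactor multiplying both exponentials. A quick numerical sanity check (ours) that p vanishes at \(x=0\), is symmetric in (x, y), and satisfies \(\partial _{t}p=(a/2)D^{2}p\):

```python
import math

# Finite-difference check of the absorbed heat kernel, written in s = t - t_0.
a = 1.0

def p(s, x, y):
    c = 1.0 / math.sqrt(2.0 * math.pi * a * s)
    return c * (math.exp(-(x - y) ** 2 / (2.0 * a * s))
                - math.exp(-(x + y) ** 2 / (2.0 * a * s)))

s, x, y = 0.5, 0.3, 0.6
hs, hx = 1e-5, 1e-3
dps = (p(s + hs, x, y) - p(s - hs, x, y)) / (2.0 * hs)                   # d_s p
d2px = (p(s, x + hx, y) - 2.0 * p(s, x, y) + p(s, x - hx, y)) / hx ** 2  # D^2_x p
```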

Hence,

$$\begin{aligned}&E(A_{t}-A_{t_{0}})I_{F}\le \int _{t_{0}}^{t} \frac{1}{\sqrt{2\pi a(r-t_{0})^{3}}} \int _{0}^{1}y\phi ^{F}_{t_{0}}(y)e^{-y^{2}/(2ar-2at_{0})}\,dy dr\\&\quad =\int _{t_{0}}^{t} \frac{2a}{\sqrt{2\pi a(r-t_{0})}} \int _{0}^{1/\sqrt{2ar-2at_{0}}} \phi ^{F}_{t_{0}}(x\sqrt{2ar-2at_{0}})xe^{-x^{2}}\,dxdr, \end{aligned}$$

which after taking into account the arbitrariness of \(F\in \mathcal {F}^{b_{\cdot }}_{t_{0}}\) leads to

$$\begin{aligned}&E(A_{t}-A_{t_{0}}\mid \mathcal {F}^{b_{\cdot }}_{t_{0}})\\&\quad \le \int _{t_{0}}^{t} \frac{2a}{\sqrt{2\pi a(r-t_{0})}} \int _{0}^{\infty } \pi _{t_{0}}(x\sqrt{2ar-2at_{0}})I_{x\sqrt{2ar-2at_{0}}\le 1}xe^{-x^{2}}\,dxdr \end{aligned}$$

almost surely for any \(t>t_{0}\). By Theorem 1.4 with probability one \(\pi _{t_{0}}(x)=x\theta (x)\), \(x\in [0,1]\), where \(\theta \) is a bounded function of x tending to zero as \(x\downarrow 0\). It follows that

$$\begin{aligned}&E(A_{t}-A_{t_{0}}\mid \mathcal {F}^{b_{\cdot }}_{t_{0}})\\&\quad \le \frac{2a}{\sqrt{ \pi }} \int _{t_{0}}^{t} \int _{0}^{\infty }\theta \left( \left( x\sqrt{2ar-2at_{0}}\right) \wedge 1\right) x^{2}e^{-x^{2}} \,dxdr . \end{aligned}$$

By the dominated convergence theorem (a.s.)

$$\begin{aligned} \lim _{r\downarrow t_{0}} \int _{0}^{\infty }\theta \left( \left( x\sqrt{2ar-2at_{0}}\right) \wedge 1\right) x^{2}e^{-x^{2}} \,dx=0 \end{aligned}$$

implying that (a.s.)

$$\begin{aligned}&\displaystyle \lim _{n\rightarrow \infty }\frac{1}{t_{n}-t_{0}}\int _{t_{0}}^{t_{n}} \int _{0}^{\infty }\theta \left( \left( x\sqrt{2ar-2at_{0}}\right) \wedge 1\right) x^{2}e^{-x^{2}} \,dxdr=0,\\&\displaystyle \quad \mathop {\overline{\lim }}\limits _{n\rightarrow \infty }\frac{1}{t_{n}-t_{0}} E(A_{t_{n}}-A_{t_{0}}\mid \mathcal {F}^{b_{\cdot }}_{t_{0}})=0, \end{aligned}$$

which yields (2.1) by Fatou’s lemma.

Thus, for any \(t\ge 0\), for almost all \(\omega \)

$$\begin{aligned} \mathop {\underline{\lim }}\limits _{n\rightarrow \infty }n(A_{t+1/n}-A_{t})=0. \end{aligned}$$
(2.5)

By Fubini’s theorem, for almost any \(\omega \), equation (2.5) holds for almost all t. It follows that, with probability one, the derivative of \(A_{t}\) is zero for almost all t and the theorem is proved.

3 Proof of Theorems 1.4 and 1.5

Proof of Theorem 1.4

On the space C of continuous functions on \([0,\infty )\) with Wiener measure W introduce the coordinate process \(x_{t}(x_{\cdot }):=x_{t}\), which is a Wiener process. For \(t\ge 0\), \(x\in \mathbb {R}\), and \(x_{\cdot },y_{\cdot }\in C\) such that \(y_{0}=0\) define

$$\begin{aligned} \tau (t,x,x_{\cdot },y_{\cdot })=\inf \{s\ge 0:x+ \sigma _{1}x_{s}\not \in ( \sigma y_{t-s}, \sigma y_{t-s}+1)\}, \end{aligned}$$

where \(y_{r}:=y_{0}\) for \(r\le 0\). Then the function

$$\begin{aligned} u_{t}(y_{\cdot } , x):= \int _{C}\pi _{0}(x_{t})I_{ \tau (t,x,x_{\cdot },y_{\cdot } )\ge t} \,W(dx_{\cdot }) \end{aligned}$$

is the probabilistic solution of the heat equation

$$\begin{aligned} \partial _{t}u_{t}=(1/2)\sigma _{1}^{2}D^{2}u_{t} \end{aligned}$$

in \(\Gamma (y_{\cdot })\) with initial condition \(u_{0}(y_{\cdot },x)=\pi _{0}(x)\), \(x\in [0,1]\), and zero lateral boundary condition \(u_{t}(y_{\cdot },\sigma y_{t})=u_{t}(y_{\cdot },1+\sigma y_{t})=0\), \(t\ge 0\).

Due to interior estimates of derivatives of solutions to the heat equation, \(u_{t}(y_{\cdot } , x)\) is infinitely differentiable in \(\Gamma (y_{\cdot })\). Its continuity up to \(\{0\}\times [0,1]\) easily follows from the fact that \(\pi _{0}\) is a continuous function on [0, 1] vanishing at 0 and 1. The continuity of its derivatives up to \(\{0\}\times (0,1)\) follows from the fact that \(\pi _{0}\) is infinitely differentiable. Next, as in the proof of Theorem 4.1 of [2], one shows that, for constants \(\nu \) as in Theorem 1.5 [generally different in (3.1) and (3.2)], for any \(T\in (0,\infty )\)

$$\begin{aligned} \sup _{x\in (0,1)}\sup _{t\in [0,T]}\frac{u_{t}(b_{\cdot } , x)}{(b_{t}+x)^{\nu }(b_{t}+1-x)^{\nu }}<\infty \end{aligned}$$
(3.1)

for almost any trajectory of \(b_{\cdot }\), and

$$\begin{aligned} E\sup _{x\in (0,1)}\sup _{t\in [0,T]}\frac{u_{t}(b_{\cdot } , x)}{(b_{t}+x)^{\nu }(b_{t}+1-x)^{\nu }}<\infty . \end{aligned}$$
(3.2)

In particular, with probability one \(u_{t}(b_{\cdot } , x)\) is continuous at the lateral boundary of \(\Gamma (b_{\cdot })\).

Next we deal with (1.8). We organize the proof in the following way. For \(t\ge 0\), \(x\in \mathbb {R}\), and \(x_{\cdot },y_{\cdot }\in C\) such that \(y_{0}=0\) define

$$\begin{aligned} \mu (t,x,x_{\cdot },y_{\cdot })=\inf \{s\ge 0:x+ \sigma _{1}x_{s}\le \sigma y_{t-s}\}, \end{aligned}$$

where \(y_{r}:=y_{0}\) for \(r\le 0\). Also let

$$\begin{aligned} v_{t}(y_{\cdot } , x):= \int _{C}\pi _{0}I_{G}(x_{t})I_{\mu (t,x,x_{\cdot },y_{\cdot } )\ge t} \,W(dx_{\cdot }). \end{aligned}$$

Lemma 3.1

Let \(B_{t}\) be a one-dimensional Wiener process and \(\gamma \in (0,1)\). Then with probability one there exists a sequence of integers \(0\le m_{1}<m_{2}<...\) such that \(B_{t_{k}}\ge \sqrt{t_{k}}\) for all k, where \(t_{k}=\gamma ^{m_{k}}\). Moreover, \(m_{k}\le \beta k\) for all sufficiently large k, where \(\beta \) is any fixed number such that \(\alpha \beta >1\), \(\alpha =P(B_{1}\ge 1)\).

Proof

The sequence \(I_{B_{\gamma ^{m}}\ge \gamma ^{m/2}},m=0,1,...\), is stationary, so that the limit

$$\begin{aligned} \lim _{m\rightarrow \infty }\frac{1}{m}\sum _{k=1}^{m}I_{B_{\gamma ^{k}} \ge \gamma ^{k/2}} \end{aligned}$$

exists (a.s.). By the 0-1 law this limit is a constant and equals \(\alpha \) (a.s.). Set

$$\begin{aligned} m_{1}=\inf \{k\ge 1:B_{\gamma ^{k}}\ge \gamma ^{k/2}\},\quad m_{n+1}=\inf \{k>m_{n}:B_{\gamma ^{k}}\ge \gamma ^{k/2}\}. \end{aligned}$$

Then the number of \(k\in \{1,2,...\}\) such that \(m_{k}\le m\), divided by m, tends to \(\alpha \) as \(m\rightarrow \infty \). Fix any \(\beta '<\beta \) such that \(\beta '\alpha >1\), which is possible since \(\alpha \beta >1\). It follows that for all sufficiently large k the number of integers \(i\in \{1,2,...,\lfloor \beta k\rfloor \}\) such that \(B_{\gamma ^{i}}\ge \gamma ^{i/2}\) is greater than \(\beta 'k\alpha \). On the other hand, by the definition of \(m_{k}\), there are exactly k integers \(i\in \{1,2,..., m_{k}\}\) such that \(B_{\gamma ^{i}}\ge \gamma ^{i/2}\). Hence, for all sufficiently large k the inequality \(m_{k}\ge \beta k\) is impossible, since it would produce more than \(\beta 'k\alpha >k\) such integers \(i\) in \(\{1,2,...,m_{k}\}\). The lemma is proved.
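The stationarity used above rests on Brownian scaling: \(P(B_{\gamma ^{m}}\ge \gamma ^{m/2})=P(B_{1}\ge 1)=\alpha \) for every m. A Monte Carlo sketch (ours) confirming this for the marginals:

```python
import numpy as np

# Sample Brownian paths at the geometric times gamma^m and check that the
# frequency of {B_t >= sqrt(t)} is the same at every level.
rng = np.random.default_rng(2)
gamma, n_paths, n_levels = 0.5, 200_000, 8
t_inc = (gamma ** np.arange(n_levels))[::-1]     # times gamma^7 < ... < gamma^0 = 1
steps = np.sqrt(np.diff(np.concatenate(([0.0], t_inc))))
B = np.cumsum(rng.standard_normal((n_paths, n_levels)) * steps, axis=1)
freq = (B >= np.sqrt(t_inc)).mean(axis=0)        # empirical P(B_t >= sqrt(t)) per level
alpha = 0.15865525393145707                      # P(B_1 >= 1) = 1 - Phi(1)
```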

Lemma 3.2

If \(\sigma _{1}/\sigma \) is sufficiently small, then for each \(T\in (0,\infty )\)

$$\begin{aligned} \lim _{x\downarrow \sigma b_{T}}\frac{v_{T}(b_{\cdot },x)}{x-\sigma b_{T}}= 0 \end{aligned}$$
(3.3)

almost surely.

Proof

Fix a \(T\in (0,\infty )\), set \(\gamma =1/2\), and take integers \(0\le m_{1}<m_{2}<...\) such that \(b_{T-t_{k}}-b_{T}\ge \sqrt{t_{k}}\) for all k, where \(t_{k}=\gamma ^{m_{k}}\) and \(m_{k}\le \beta k\) for all large k. By Lemma 3.1 such a sequence exists with probability one. Then notice that the inequality \(\mu (T,x,x_{\cdot },b_{\cdot })\ge T\) implies that

$$\begin{aligned} x+\sigma _{1} x_{t_{k}}\ge \sigma b_{T-t_{k}} \ge \sigma b_{T}+\sigma \sqrt{ t_{k}} \end{aligned}$$

for all k such that \(t_{k}\le T\). Denote by \(k_{0}\) the smallest k such that \(t_{k}\le T\). We also take into account that \(\pi _{0}\) is a bounded function and conclude that for any integer \(n\ge k_{0}\)

$$\begin{aligned} v_{T}(b_{\cdot },x)\le NW(z+\sigma _{1} x_{t_{k}}-\sigma \sqrt{t_{k}}\ge 0,k=k_{0},...,n), \end{aligned}$$

where \(N=\sup \pi _{0}\) and \(z=x-\sigma b_{T}\). It follows that to prove (3.3) it suffices to show that there exists an integer-valued function \(n=n(x)\ge k_{0}\) such that

$$\begin{aligned} \lim _{x\downarrow 0}\frac{1}{x} P(x+ w_{t_{k}}- K \sqrt{t_{k}}\ge 0,k=k_{0},...,n(x))=0, \end{aligned}$$
(3.4)

where \(K=\sigma /\sigma _{1}=\varepsilon ^{-1}\).

Observe that by Girsanov’s theorem for any \(n\ge k_{0}+1\)

$$\begin{aligned}&P(x+ w_{t_{k}}- K \sqrt{t_{k}}\ge 0,k=k_{0},k_{0}+1,...,n )\\&\quad = EI_{\Gamma _{n}(x)} \exp \Bigg (-\int _{t_{n}}^{t_{k_{0}}}f'(t)\,dw_{t} -(1/2)\int _{t_{n}}^{t_{k_{0}}}[f'(t)]^{2}\,dt\Bigg )=:I_{n}(x), \end{aligned}$$

where \(f(t)= K\sqrt{t}\) for \(t\ge t_{n}\), \(f(t)= K\sqrt{t_{n}}\) for \(t\in [0,t_{n}]\), and

$$\begin{aligned} \Gamma _{n}(x)=\{\omega :x+ w_{t_{k}} - K\sqrt{t_{n}} \ge 0,k=k_{0},...,n\}. \end{aligned}$$

Next, note that for bounded nonrandom functions g

$$\begin{aligned}&E\left\{ \exp \int _{s}^{t}g(r)\,dw_{r} \mid w_{s},w_{t}\right\} \\&\quad =\exp \bigg (\frac{w_{t}-w_{s}}{t-s}\int _{s}^{t}g(u)\,du +(1/2) \int _{s}^{t}\big [g(r)-\frac{1}{t-s}\int _{s}^{t}g(u)\,du\big ]^{2}\,dr \bigg ) \end{aligned}$$

because

$$\begin{aligned} \int _{s}^{t}\big [g(r)-\frac{1}{t-s}\int _{s}^{t}g(u)\,du\big ]\,dw_{r} \end{aligned}$$

is independent of \(w_{s},w_{t}\). Then we use the fact that

$$\begin{aligned} \int _{s}^{t}\Bigg [g(r)-\frac{1}{t-s}\int _{s}^{t}g(u)\,du\Bigg ]^{2}\,dr =\int _{s}^{t}g^{2}(r)\,dr-\frac{1}{t-s} \Bigg (\int _{s}^{t}g(u)\,du\Bigg )^{2}. \end{aligned}$$
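This \(L_{2}\) identity is just the variance decomposition of g around its average over [s, t]. A quadrature check (ours, with an arbitrary test function g):

```python
import numpy as np

# Midpoint-rule check of: int (g - gbar)^2 = int g^2 - (t-s)^{-1} (int g)^2.
s, t, n = 0.0, 1.0, 100_000
dr = (t - s) / n
r = s + (np.arange(n) + 0.5) * dr                # midpoint grid on [s, t]
g = np.cos(3.0 * r) + 0.5 * r                    # arbitrary bounded test function
I = g.sum() * dr                                 # int_s^t g dr
gbar = I / (t - s)                               # average of g over [s, t]
lhs = ((g - gbar) ** 2).sum() * dr
rhs = (g ** 2).sum() * dr - I ** 2 / (t - s)
```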

By applying this to \(g=-f'\) we find that

$$\begin{aligned} I_{n}(x)=EI_{\Gamma _{n}(x)} \exp \big (D_{n}-(1/2)C_{n}\big ) \end{aligned}$$

with

$$\begin{aligned}&D_{n}:=- K\sum _{k=k_{0}+1}^{n}s_{k}(w_{t_{k-1}}-w_{t_{k}}),\quad s_{k}= (\sqrt{t_{k-1}}+\sqrt{t_{k}})^{-1},\\&C_{n}:=\sum _{k=k_{0}+1}^{n} \frac{1}{t_{k-1}-t_{k}} \Bigg (\int _{t_{k}}^{t_{k-1}}f'(u)\,du\Bigg )^{2}\\&\quad \;\quad = K^{2}\sum _{k=k_{0}+1}^{n} \frac{\sqrt{t_{k-1}}-\sqrt{t_{k}}}{\sqrt{t_{k-1}}+\sqrt{t_{k}}} \ge K^{2}\frac{1-\sqrt{\gamma }}{1+\sqrt{\gamma }}(n-k_{0})= :2K^{2}\kappa (n-k_{0}), \end{aligned}$$

where the inequality follows from the fact that \(\gamma t_{k-1} \ge t_{k}\) and \(\kappa \) is defined by the last equality.

Now we use summation by parts to see that

$$\begin{aligned} D_{n}=- Kw_{t_{k_{0}}}s_{k_{0}+1} + Kw_{t_{n}}s_{n} - K\sum _{k=k_{0}+1}^{n-1}w_{t_{k}} (s_{k+1}-s_{k}). \end{aligned}$$

On the event \(\Gamma _{n}(x)\) this quantity is smaller than

$$\begin{aligned}&Kw_{t_{n}}s_{n}+ K(x-K\sqrt{t_{n}}) \Bigg (s_{k_{0}+1} + \sum _{k=k_{0}+1}^{n-1} (s_{k+1}-s_{k}) \Bigg )\\&\quad = Kw_{t_{n}}s_{n}+ K(x-K\sqrt{t_{n}}) s_{n}. \end{aligned}$$

It follows that

$$\begin{aligned} I_{n}(x)\le & {} EI_{\Gamma _{n}(x)} \exp \big ( Kw_{t_{n}}s_{n} + Kx s_{n} - K^{2}\kappa (n-k_{0})\big ) ,\\\le & {} E \exp \big ( Kw_{t_{n}}s_{n} + Kx s_{n} - K^{2}\kappa (n-k_{0})\big )\\= & {} \exp \big ( ( K^{2}/2)t_{n}s^{2}_{n} + Kx s_{n}- K^{2}\kappa (n-k_{0})\big ). \end{aligned}$$

Now it is time to choose \(n=n(x)\). We take \(n=n(x)\) so that \(x^{2}\in [t_{n},t_{n-1})\). Then \(t_{n}s_{n}\le \sqrt{t_{n}} \le x\) and \(xs_{n}\le x/\sqrt{t_{n-1}}\le 1\). Also \(t_{n}s^{2}_{n}\le 1\) and since \(t_{n}\ge \gamma ^{\beta n}\), we have \(x\ge \gamma ^{\beta n/2}\), which implies that

$$\begin{aligned} \lim _{x\downarrow 0}x^{-1}I_{n(x)}(x) \le \lim _{n\rightarrow \infty }\exp \big ( K^{2}/2+ K- K^{2}\kappa (n-k_{0})- (1/2)\beta n\ln \gamma \big )=0 \end{aligned}$$

if \(\beta |\ln \gamma |< 2K^{2}\kappa \), which is true if \(\sigma _{1}/\sigma \) is small enough. This proves the lemma.
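The two algebraic facts used in the proof above, the summation by parts for \(D_{n}\) and the telescoping of the \(s_{k}\), can be checked directly on arbitrary data (a standalone sketch, ours; the values w[k] are arbitrary numbers standing in for \(w_{t_{k}}\), not a Brownian path):

```python
import numpy as np

# Check: -K * sum_k s_k (w_{t_{k-1}} - w_{t_k})
#   = -K w_{t_{k0}} s_{k0+1} + K w_{t_n} s_n - K sum_k w_{t_k} (s_{k+1} - s_k),
# and s_{k0+1} + sum_{k=k0+1}^{n-1} (s_{k+1} - s_k) = s_n.
rng = np.random.default_rng(3)
K, gamma, k0, n = 2.0, 0.5, 1, 12
m = rng.integers(0, 3, size=n + 1).cumsum() + np.arange(n + 1)  # strictly increasing m_k
t = gamma ** m                                   # t_k = gamma^{m_k}, k = 0, ..., n
s = 1.0 / (np.sqrt(t[:-1]) + np.sqrt(t[1:]))     # s[k-1] stores s_k, k = 1, ..., n
w = rng.standard_normal(n + 1)                   # stand-ins for w_{t_k}

lhs = -K * sum(s[k - 1] * (w[k - 1] - w[k]) for k in range(k0 + 1, n + 1))
rhs = (-K * w[k0] * s[k0] + K * w[n] * s[n - 1]
       - K * sum(w[k] * (s[k] - s[k - 1]) for k in range(k0 + 1, n)))
tel = s[k0] + sum(s[k] - s[k - 1] for k in range(k0 + 1, n))    # telescopes to s_n
```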

Corollary 3.3

If \(\sigma _{1}/\sigma \) is sufficiently small, then (1.8) holds (a.s.) for any fixed \(t\ge 0\).

Indeed, the equality of the extreme terms in (1.8) follows from Lemma 3.2 since \(u_{t}\le v_{t}\). The remaining equality is proved similarly by replacing x with \(1-x\).

It only remains to prove the last assertion of the theorem.

Observe that \(\tau (t,x,x_{\cdot },y_{\cdot })\) is a lower semicontinuous function of its arguments. Therefore by Fubini’s theorem \(u_{t}(y_{\cdot },x)\) is a Borel function of \((y_{\cdot },t,x)\). Furthermore, \(u_{t}(y_{\cdot }, x)\) will not change if we change \(y_{r}\) for \(r>t\). Hence, \(u_{t}(y_{\cdot }, x)\) is \(\mathcal {N}_{t}\)-measurable, where \(\mathcal {N}_{t}=\sigma (y_{r}:r\le t,y_{\cdot }\in C)\). Therefore,

$$\begin{aligned} v_{t}(x):=u_{t}(b_{\cdot },x-\sigma b_{t}) \end{aligned}$$

is \(\mathcal {F}^{b_{\cdot }}_{t}\)-measurable for each \((t,x)\in \bar{G}_{\infty }\). After that, arguing as in the proof of the usual Itô formula, on the basis of Taylor's formula and the fact that \(u_{t}(y_{\cdot },x)\) is infinitely differentiable, we obtain that for any \(x\in G\) almost surely for all \(t\ge 0\)

$$\begin{aligned} v_{t}(x)=\pi _{0}(x)+(a/2)\int _{0}^{t}D^{2}v_{s}(x)\,ds -\sigma \int _{0}^{t}D v_{s}(x)\,db_{s}. \end{aligned}$$

The above properties of \(u_{t}\) and Theorem 4.1 now imply that (perhaps after modifying \(v_{t}\) on a set of probability zero) \(v_{t}\) satisfies (1.2) with zero boundary condition and initial condition \(\pi _{0}\) in the sense explained after that formula and is such that

$$\begin{aligned} \int _{0}^{T}\Vert v_{t}\Vert ^{2}_{W^{1}_{2}(G)}\,dt<\infty \end{aligned}$$

for each \(T\ge 0\). Uniqueness of solutions of (1.2) in this class of functions is a classical result, and this proves the remaining assertions of the theorem.

Proof of Theorem 1.5

After (1.9) has been proved and the modification \(u_{t}(x-\sigma b_{t})\) of \(\pi _{t}(x)\) has been chosen the assertions related to (1.11) and (1.12) follow directly from (3.1) and (3.2). This proves the theorem.