Abstract
We establish the singularity with respect to Lebesgue measure, as a function of time, of the conditional probability that the sum of two one-dimensional Brownian motions exits from the unit interval before time t, given the trajectory of the second Brownian motion up to the same time. Along the way we show that if one solves the one-dimensional heat equation with zero boundary condition on the trajectory of a one-dimensional Brownian motion, which serves as the lateral boundary, then for each moment of time, with probability one, the normal derivative of the solution is zero, provided that the diffusion coefficient of the Brownian motion is sufficiently large.
1 Main results
Let \((\Omega , \mathcal {F},P)\) be a complete probability space and let \(w_{\cdot }\) and \(b_{\cdot }\) be independent one-dimensional standard Wiener processes on this space. Fix some constants \(\sigma ,\sigma _{1}>0\) and consider the equation
where \( x_{0}\) is independent of the couple \((w_{\cdot },b_{\cdot })\) and has density \( \pi _{0}\in C_{0}^{\infty }=C_{0}^{\infty }(G)\) concentrated on G, where \(G=(0,1)\). Define
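The displays defining (1.1) and \(A_{t}\) do not appear above; judging from the coefficient \(a=\sigma _{1}^{2}+\sigma ^{2}\) in (1.2), the lateral boundary \(\sigma b_{t}\) in Theorem 1.4, and the identity \(A_{t}=P(\tau \le t\mid \mathcal {F}^{b_{\cdot }}_{s})\) below, they presumably read (up to the sign convention for \(b_{\cdot }\))

$$\begin{aligned} x_{t}=x_{0}+\sigma _{1}w_{t}+\sigma b_{t},\qquad \tau =\inf \{t\ge 0:x_{t}\notin G\},\qquad A_{t}=P(\tau \le t\mid \mathcal {F}^{b_{\cdot }}_{t}), \end{aligned}$$

where \(\mathcal {F}^{b_{\cdot }}_{t}\) is the completion of \(\sigma (b_{s}:s\le t)\).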
Here is our main result.
Theorem 1.1
There is a continuous nondecreasing modification of \( A_{t}\); moreover, this modification is singular with respect to Lebesgue measure provided that \( \sigma _{1}/\sigma \) is sufficiently small.
It is not hard to see that for any \(s\ge t\) we have \(A_{t}=P(\tau \le t\mid \mathcal {F}^{b_{\cdot }}_{s})\) (a.s.), which makes it quite natural that \(A_{t}\) admits a nondecreasing modification. The author is confident that the smallness assumption on \(\sigma _{1}/\sigma \) can be dropped, but a proof of this is unknown to him.
The process \(A_{t}\) in a more general multi-dimensional framework arose in [6] as the main process governing the conditional distribution of a signal process \(x_{t}\) at the first time when it exits from a given domain. In [6] the observations \(y_{t}\) (\(=b_{t}\) in our case) were only available until the first exit time of \(x_{t}\) from the domain. It turns out that in the setting of (1.1) the conditional and the so-called unnormalized conditional distributions of \(x_{t}\) before it exits from G given \(y_{s},0\le s \le t\), coincide. These unnormalized conditional distributions are known to satisfy certain linear stochastic partial differential equations, and the properties of \(A_{t}\) can then be recovered from properties of solutions of these equations.
To be more precise, for \((t,x)\in (0,\infty )\times (0,1)\) consider the following (filtering) equation
where \(a= \sigma _{1}^{2}+\sigma ^{2} \), with initial condition \(\pi _{0}(x)\) and zero lateral condition.
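The display of (1.2) is likewise not reproduced. Given \(a=\sigma _{1}^{2}+\sigma ^{2}\) and the representation \(\pi _{t}(x)=u_{t}(x-\sigma b_{t})\) of Theorem 1.4, it is presumably (up to the sign convention for \(b_{\cdot }\)) the Zakai-type equation

$$\begin{aligned} d\pi _{t}(x)=\frac{a}{2}\,D^{2}\pi _{t}(x)\,dt-\sigma D\pi _{t}(x)\,db_{t},\qquad (t,x)\in (0,\infty )\times G; \end{aligned}$$

indeed, if \(u\) solves the heat equation \(\partial _{t}u=(\sigma _{1}^{2}/2)D^{2}u\), then Itô's formula applied to \(u_{t}(x-\sigma b_{t})\) produces exactly these two terms.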
To explain in which sense we understand this equation, the initial condition, and the boundary condition, we need some notation. Introduce the space \(W^{1}_{2}=W^{1}_{2}(G)\) as the closure of the set of continuously differentiable functions in \(\bar{G}\) in the norm
where Du is the derivative of u and \(L_{2}=L_{2}(G)\), and we introduce \(\overset{\scriptscriptstyle 0}{ W}\,\!^{1}_{2}=\overset{\scriptscriptstyle 0}{ W}\,\!^{1}_{2}(G)\) as the closure of \(C^{\infty }_{0}=C^{\infty }_{0}(G)\) in the above norm.
Denote by \( \mathcal {P}^{b_{\cdot }}\) the predictable \( \sigma \)-field in \( \Omega \times (0,\infty )\) associated with the filtration \(\{ \mathcal {F}^{b_{\cdot }}_{t}\}\). For \( T\in (0,\infty )\) introduce
We are looking for a function \( \pi _{t}(x)\) which is a generalized function on G for each \( (\omega ,t)\in \Omega \times [0,\infty )\) such that \( \pi \in \cap _{T} \overset{\scriptscriptstyle 0}{ W}\,\!^{1}_{2}(G_{T})\) and for each \( \zeta \in C^{\infty }_{0} \) with probability one for all \(t\in [0,\infty )\) we have
where we use the notation
Observe that all expressions in (1.3) are well defined due to the fact that the coefficients of \( \pi \) and of \( D \pi \) are constant and
for any \( T\in (0,\infty )\).
Recall that by assumption \(\pi _{0}\in C_{0}^{\infty }\).
Theorem 1.2
In the class \(\bigcap _{T}\overset{\scriptscriptstyle 0}{ W}\,\!^{1}_{2}(G_{T})\) there exists a unique solution \(\pi _{t}\) of equation (1.2) with initial condition \(\pi _{0}\). In addition, \( \pi _{t}\ge 0 \) for all \(t\in [0,\infty )\) (a.s.). With probability one \(\pi _{t}\) is continuous in \(L_{1}=L_{1}(G)\) and in \(L_{2}\).
The existence, uniqueness, and the (a.s.) continuity in \(L_{2}\) of \(\pi \) is a classical result proved in many places in a variety of settings (see, for instance, [5, 7, 8], and the references therein). That \(\pi _{t}\) is (a.s.) continuous as an \(L_{1}\)-function follows from its \(L_{2}\)-continuity and the boundedness of G. The fact that \(\pi \ge 0\) follows from the maximum principle (see, for instance, Theorem 1.1 of [3]) and the fact that, if \(u\in \overset{\scriptscriptstyle 0}{ W}\,\!^{1}_{2}\), then \(u^{+}\in \overset{\scriptscriptstyle 0}{ W}\,\!^{1}_{2}\).
The connection of \(A_{t}\) to \(\pi _{t}\) is established on the basis of Lemma 1.8 of [6], which in our situation reads as follows.
Lemma 1.3
For any Borel bounded or nonnegative function \(\phi \) on G and \(t\in [0,\infty )\) we have (a.s.)
In particular, for each \(t\in [0,\infty )\) (a.s.)
Finally, (a.s.) we have \( (\pi _{t},1)>0\) for all \(t\in [0,\infty )\).
By Lemma 1.3 for any \(t\in [0,\infty )\)
(a.s.) and by Theorem 1.2 the right-hand side is continuous in t (a.s.). Also it turns out (see [6]) that the process \( (\pi _{t},1)\) is decreasing (a.s.). Therefore, the modification of \(A_{t}\) in Theorem 1.1, which we identify with the original \(A_{t}\), is \(1-(\pi _{t},1)\).
Observe that if in (1.3) we were allowed to first integrate by parts to replace
and then in the so-modified version of (1.3) take \(\zeta \equiv 1\) (\(\not \in C^{\infty }_{0}\)), then we would formally obtain that
This shows that \(A_{t}\) is related to the normal derivative of \(\pi _{s}\) on the boundary of G, and this normal derivative is investigated on the basis of a different description of \(\pi _{s}\).
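The formal computation just described can be sketched as follows (a reconstruction of the elided displays). Integrating by parts in (1.3) and taking \(\zeta \equiv 1\), the stochastic term drops out because \(\pi _{s}\) vanishes at 0 and 1, and one formally obtains

$$\begin{aligned} dA_{t}=-d(\pi _{t},1)=\frac{a}{2}\big (D\pi _{t}(0)-D\pi _{t}(1)\big )\,dt, \end{aligned}$$

which is nonnegative since \(\pi _{t}\ge 0\) vanishes at the endpoints, so that \(D\pi _{t}(0)\ge 0\ge D\pi _{t}(1)\).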
We are going to state our second main result, which is about solutions of the heat equation in curvilinear cylinders whose lateral boundary consists of a trajectory of a Wiener process and of its parallel shift.
Theorem 1.4
For almost any \(\omega \) there exists a unique function \(u_{t}(x)\) defined, bounded, and continuous in the closure of
such that it is infinitely differentiable with respect to (t, x) in \(\Gamma (b_{\cdot })\), satisfies there the equation
and satisfies the conditions \(u_{0}(x)=\pi _{0} (x)\), \(x\in [0,1]\), and \(u_{t}(\sigma b_{t}) =u_{t}(1+\sigma b_{t})=0\), \(t\ge 0\). Furthermore, if \(\sigma _{1}/\sigma \) is sufficiently small, then for any \(t\in [0,\infty )\)
almost surely, so that the derivative of \(u_{t}(x)\) on the boundary of \(\Gamma (b_{\cdot })\) is zero (a.s.) for any fixed t.
Finally, with probability one
for all \(t\ge 0\), so that \(u_{t}(x-\sigma b_{t})\) is a modification of \(\pi _{t}(x)\), and for this modification, for any \(t\in [0,\infty )\),
almost surely if \(\sigma _{1}/\sigma \) is sufficiently small.
Remark 1.1
Note that by the maximum principle
The last statement of Theorem 1.4 makes the representation (1.6) dubious, and, even though there is a limit procedure showing that (1.6) holds in a generalized sense similar to that of the local time for Brownian motion (see [6]), one would rightfully suspect that \(A_{t}\) is not absolutely continuous with respect to t.
The last statement of Theorem 1.4 should not make the reader over-optimistic about the continuity properties of \(\pi _{t}(x)\) in x near the boundary of G (see Remark 1.2).
Still, the following theorem is easily derived from known results. Take \(\alpha \in (0,1)\) and \(c\in (0,\infty )\) and introduce
As is easy to see, for any \(\alpha \in (0,1)\), we have \(p(c)\rightarrow 0\), \(r(\alpha ,c)\rightarrow \infty \), and \(\beta (\alpha ,c)\rightarrow 0\) as \(c\rightarrow \infty \). It follows from [2] (see the proof of Theorem 2.1 there) that there exists a function \(\alpha (c)\), \(c\in (0,\infty )\), with values in (0, 1) such that \(\alpha (c)\rightarrow 0\) as \(c\rightarrow \infty \) and \(\alpha (c)\le \alpha \) for any \(\alpha \) satisfying \(\beta (\alpha ,c)<1\).
Next, take some constants \(c\ge 0\), \(d>0\) and for \(x\in \mathbb {R}\) define
Theorem 1.5
Take the modification of \(\pi _{t}\) from Theorem 1.4, take \(T\in (0,\infty )\), and define \(\varepsilon =\sigma _{1}/\sigma \). Then for any \(t\in [0,\infty )\) and \(c,d>0\) such that \(\alpha (c \varepsilon )<1\), and any \(\nu \) satisfying
we have that with probability one
Furthermore, there exists a constant \( \nu \in (0,\infty )\) such that
Remark 1.2
The largest possible value of \(\nu \) in (1.11) is unknown. However, Theorem 5.1 and Lemma 4.1 of [1] show that if we take a \(\delta >0\) and
then for \(\varepsilon =\sigma _{1}/\sigma \) small enough the left-hand side of (1.11) equals infinity with probability one. Therefore, the largest value of \(\nu \) is extremely small if \(\varepsilon \) is small.
Remark 1.3
The fact that equation (1.8) holds (a.s.) does not contradict Remark 1.2, because (1.11) gives an estimate which is uniform with respect to t, and on almost every trajectory of \(b_{\cdot }\) there are points t such that \(v_{t}(x)/x\rightarrow \infty \) as \(x\downarrow 0\).
We prove Theorem 1.1 in Sect. 2 assuming that Theorems 1.4 and 1.5 are true. In Sect. 3 we prove Theorems 1.4 and 1.5. The first assertion of Theorem 1.4 and Itô's formula easily lead to the conclusion that \(u_{t}(x-\sigma b_{t})\) is a classical solution of (1.2), and the assertion concerning (1.9) is proved by showing, in Sect. 4, that classical solutions coincide with generalized ones in a much more general situation.
2 Proof of Theorem 1.1
We start by proving that for each \(t_{0}\in (0,\infty )\) and \(t_{n}=t_{0}+1/n\) with probability one
Observe that for any \(\zeta \in C^{\infty }_{0}\) and \(t\ge t_{0}\)
where, here and below, we are dealing with the modification of \(\pi _{t}\) from Theorem 1.4. We multiply both sides of this equation by the indicator function of a set \(F\in \mathcal {F}^{b_{\cdot }}_{t_{0}}\) and then take the expectations of both sides. Then by denoting
we find
Observe that \(\phi ^{F}_{t}(x)\) is continuous in \(\bar{G}_{\infty }\) because of the continuity and boundedness of \(\pi \). Estimate (1.12) shows that \(\phi ^{F}_{t}(x)\rightarrow 0\) as \(x\rightarrow \{0,1\}\), \(x\in (0,1)\), \(t\ge 0\). Thus \(\phi ^{F}_{t}\) is a weak solution, continuous in \([t_{0},\infty )\times \bar{G}\), of the equation
By uniqueness of such solutions, \(\phi ^{F}_{t}\) is a classical solution of this equation with zero boundary data. Hence
Observe that \(D\phi ^{F}_{s}(0)\ge 0\) and \(D\phi ^{F}_{s}(1)\le 0\) since \(\phi ^{F}_{s}\) is nonnegative in G and vanishes on the boundary of G. Furthermore, by the maximum principle we have \(\phi ^{F}_{t}\le \psi ^{F}_{t}\), \(t\ge t_{0}\), where \(\psi ^{F}_{t}\) is defined as a unique bounded classical solution of (2.4) for \(t\ge t_{0}\), \(x>0\), with initial data \(\psi ^{F}_{t_{0}}(x)=\phi ^{F}_{t_{0}}(x) I_{(0,1)}(x)\) and zero boundary condition. In particular, \(D\phi ^{F}_{s}(0)\le D\psi ^{F}_{s}(0)\), and
The following explicit representation for such solutions is well known:
where
Hence,
which after taking into account the arbitrariness of \(F\in \mathcal {F}^{b_{\cdot }}_{t_{0}}\) leads to
almost surely for any \(t>t_{0}\). By Theorem 1.4 with probability one \(\pi _{t_{0}}(x)=x\theta (x)\), \(x\in [0,1]\), where \(\theta \) is a bounded function of x tending to zero as \(x\downarrow 0\). It follows that
By the dominated convergence theorem (a.s.)
implying that (a.s.)
which yields (2.1) by Fatou’s lemma.
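Equation (2.4) is the heat equation \(\partial _{t}\psi =(a/2)D^{2}\psi \) (its display is elided above), and the explicit representation used for \(\psi ^{F}\) on the half line \(x>0\) with zero boundary value at \(x=0\) is presumably the reflection formula: with \(p(t,x)=(2\pi at)^{-1/2}e^{-x^{2}/(2at)}\), for \(t>t_{0}\),

$$\begin{aligned} \psi ^{F}_{t}(x)=\int _{0}^{\infty }\big [p(t-t_{0},x-y)-p(t-t_{0},x+y)\big ]\psi ^{F}_{t_{0}}(y)\,dy,\qquad D\psi ^{F}_{t}(0)=\int _{0}^{\infty }\frac{2y}{a(t-t_{0})}\,p(t-t_{0},y)\,\psi ^{F}_{t_{0}}(y)\,dy, \end{aligned}$$

and the factor \(y\) in the last kernel is what makes the bound \(\pi _{t_{0}}(x)=x\theta (x)\) effective.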
Thus, for any \(t\ge 0\), for almost all \(\omega \)
By Fubini’s theorem, for almost any \(\omega \), equation (2.5) holds for almost all t. It follows that, with probability one, the derivative of \(A_{t}\) is zero for almost all t and the theorem is proved.
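The conclusion rests on the classical fact that a continuous nondecreasing function whose derivative vanishes almost everywhere is singular with respect to Lebesgue measure. The Cantor function is the standard deterministic example of this combination of properties; the following small sketch (ours, purely illustrative, not from the paper) computes it via ternary digits and checks the behavior in question.

```python
def cantor(x, depth=30):
    """Cantor (devil's staircase) function on [0, 1] via ternary digits."""
    if x >= 1.0:
        return 1.0
    total, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3.0
        digit = int(x)
        x -= digit
        if digit == 1:  # x lies in a removed middle third: value is frozen
            return total + scale
        total += scale * (digit // 2)
        scale *= 0.5
    return total

# Continuous and nondecreasing, climbing from 0 to 1, yet constant on
# every removed middle-third interval, so its derivative vanishes
# almost everywhere -- the same combination that makes A_t singular.
grid = [i / 1000 for i in range(1001)]
vals = [cantor(x) for x in grid]
```

For instance, the function is constant on the removed interval \((1/3,2/3)\), where it equals 1/2.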
3 Proof of Theorems 1.4 and 1.5
Proof of Theorem 1.4
On the space C of continuous functions on \([0,\infty )\) with Wiener measure W introduce the coordinate process \(x_{t}(x_{\cdot }):=x_{t}\), which is a Wiener process. For \(t\ge 0\), \(x\in \mathbb {R}\), and \(x_{\cdot },y_{\cdot }\in C\) such that \(y_{0}=0\) define
where \(y_{r}:=y_{0}\) for \(r\le 0\). Then the function
is the probabilistic solution of the heat equation
in \(\Gamma (y_{\cdot })\) with boundary conditions

Due to interior estimates of derivatives of solutions to the heat equation, \(u_{t}(y_{\cdot } , x)\) is infinitely differentiable in \(\Gamma (y_{\cdot })\). Its continuity up to \(\{0\}\times [0,1]\) easily follows from the fact that \(\pi _{0}\) is a continuous function on [0, 1] vanishing at 0 and 1. The continuity of its derivatives up to \(\{0\}\times (0,1)\) follows from the fact that \(\pi _{0}\) is infinitely differentiable. Next, as in the proof of Theorem 4.1 of [2] one shows that for constants \(\nu \) [different in (3.1) and (3.2)] as in Theorem 1.5 we have that for any \(T\in (0,\infty )\)
for almost any trajectory of \(b_{\cdot }\), and
In particular, with probability one \(u_{t}(b_{\cdot } , x)\) is continuous at the lateral boundary of \(\Gamma (b_{\cdot })\).
Next we deal with (1.8). We organize the proof in the following way. For \(t\ge 0\), \(x\in \mathbb {R}\), and \(x_{\cdot },y_{\cdot }\in C\) such that \(y_{0}=0\) define
where \(y_{r}:=y_{0}\) for \(r\le 0\). Also let
Lemma 3.1
Let \(B_{t}\) be a one-dimensional Wiener process and \(\gamma \in (0,1)\). Then with probability one there exists a sequence of integers \(0\le m_{1}<m_{2}<...\) such that \(B_{t_{k}}\ge \sqrt{t_{k}}\) for all k, where \(t_{k}=\gamma ^{m_{k}}\). Moreover, \(m_{k}\le \beta k\) for all sufficiently large k, where \(\beta \) is any fixed number such that \(\alpha \beta >1\), \(\alpha =P(B_{1}\ge 1)\).
Proof
The sequence \(I_{B_{\gamma ^{m}}\ge \gamma ^{m/2}},m=0,1,...\), is stationary, so that the limit
exists (a.s.). By the 0-1 law this limit is a constant and equals \(\alpha \) (a.s.). Set
Then the number of \(k\in \{1,2,...\}\) such that \(m_{k}\le m\), divided by m, tends to \(\alpha \). It follows that, for all large k, the number of integers \(i\in \{1,2,...,\beta k\}\) such that \(B_{\gamma ^{i}}\ge \gamma ^{i/2}\) is greater than \(\beta 'k\alpha \), where \(\beta '\) is any fixed number such that \(\beta '<\beta \); one can certainly take \(\beta '\) with \(\beta '\alpha >1\). On the other hand, there are exactly k values of \(i\in \{1,2,..., m_{k}\}\) such that \(B_{\gamma ^{i}}\ge \gamma ^{i/2}\). Since \(\beta '\alpha >1\), it follows that for any sufficiently large k the inequality \(m_{k}\ge \beta k\) is impossible. The lemma is proved.
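The scaling fact underlying Lemma 3.1 can be checked numerically. The sketch below (our illustration; all names are ours) simulates Brownian paths on the geometric time grid \(t=\gamma ^{m}\) and estimates the frequency of scales m with \(B_{\gamma ^{m}}\ge \gamma ^{m/2}\); by Brownian scaling this frequency should be close to \(\alpha =P(B_{1}\ge 1)\approx 0.159\).

```python
import math
import random

def scale_hit_fraction(gamma=0.5, n_scales=40, n_paths=4000, seed=0):
    """Estimate P(B_{gamma^m} >= gamma^{m/2}) averaged over scales m.

    By Brownian scaling this probability equals alpha = P(B_1 >= 1)
    at every scale m, which is the stationarity used in Lemma 3.1.
    """
    rng = random.Random(seed)
    times = [gamma ** m for m in range(n_scales)]  # t_0 = 1 > t_1 > ...
    hits = 0
    for _ in range(n_paths):
        # Build B at the times gamma^m from the smallest time upward,
        # using independent Gaussian increments.
        b = rng.gauss(0.0, math.sqrt(times[-1]))
        values = [b]
        for m in range(n_scales - 2, -1, -1):
            b += rng.gauss(0.0, math.sqrt(times[m] - times[m + 1]))
            values.append(b)
        values.reverse()  # values[m] = B_{gamma^m}
        hits += sum(v >= math.sqrt(t) for v, t in zip(values, times))
    return hits / (n_paths * n_scales)

alpha = 0.5 * math.erfc(1.0 / math.sqrt(2.0))  # P(B_1 >= 1), about 0.1587
est = scale_hit_fraction()
print(est, alpha)
```

The estimate agrees with \(\alpha \) up to Monte Carlo error; the indicators are correlated across scales within a path, but the 0-1 law argument in the proof only needs their long-run frequency.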
Lemma 3.2
If \(\sigma _{1}/\sigma \) is sufficiently small, then for each \(T\in (0,\infty )\)
almost surely.
Proof
Fix a \(T\in (0,\infty )\), set \(\gamma =1/2\), and take integers \(0\le m_{1}<m_{2}<...\) such that \(b_{T-t_{k}}-b_{T}\ge \sqrt{t_{k}}\) for all k, where \(t_{k}=\gamma ^{m_{k}}\) and \(m_{k}\le \beta k\) for all large k. By Lemma 3.1 such a sequence exists with probability one. Then notice that the inequality \(\mu (T,x,x_{\cdot },b_{\cdot })\ge T\) implies that
for all k such that \(t_{k}\le T\). Denote by \(k_{0}\) the smallest k such that \(t_{k}\le T\). We also take into account that \(\pi _{0}\) is a bounded function and conclude that for any integer \(n\ge k_{0}\)
where \(N=\sup \pi _{0}\) and \(z=x-\sigma b_{T}\). It follows that to prove (3.3) it suffices to show that there exists an integer-valued function \(n=n(x)\ge k_{0}\) such that
where \(K=\sigma /\sigma _{1}=\varepsilon ^{-1}\).
Observe that by Girsanov’s theorem for any \(n\ge k_{0}+1\)
where \(f(t)= K\sqrt{t}\) for \(t\ge t_{n}\), \(f(t)= K\sqrt{t_{n}}\) for \(t\in [0,t_{n}]\), and
Next, note that for bounded nonrandom functions g
because
is independent of \(w_{s},w_{t}\). Then we use the fact that
By applying this to \(g=-f'\) we find that
with
where the inequality follows from the fact that \(\gamma t_{k-1} \ge t_{k}\) and \(\kappa \) is defined by the last equality.
Now we use summation by parts to see that
On the event \(\Gamma _{n}(x)\) this quantity is smaller than
It follows that
Now it is time to choose \(n=n(x)\). We take \(n=n(x)\) so that \(x^{2}\in [t_{n},t_{n-1})\). Then \(t_{n}s_{n}\le \sqrt{t_{n}} \le x\) and \(xs_{n}\le x/\sqrt{t_{n-1}}\le 1\). Also \(t_{n}s^{2}_{n}\le 1\) and since \(t_{n}\ge \gamma ^{\beta n}\), we have \(x\ge \gamma ^{\beta n/2}\), which implies that
if \(\beta |\ln \gamma |< 2K^{2}\kappa \), which is true if \(\sigma _{1}/\sigma \) is small enough. This proves the lemma.
Corollary 3.3
If \(\sigma _{1}/\sigma \) is sufficiently small, then (1.8) holds (a.s.) for any fixed \(t\ge 0\).
Indeed, the equality of the extreme terms in (1.8) follows from Lemma 3.2 since \(u_{t}\le v_{t}\). The remaining equality is proved similarly by replacing x with \(1-x\).
It only remains to prove the last assertion of the theorem.
Observe that \(\tau (t,x,x_{\cdot },y_{\cdot })\) is a lower semicontinuous function of its arguments. Therefore by Fubini’s theorem \(u_{t}(y_{\cdot },x)\) is a Borel function of \((y_{\cdot },t,x)\). Furthermore, \(u_{t}(y_{\cdot }, x)\) will not change if we change \(y_{r}\) for \(r>t\). Hence, \(u_{t}(y_{\cdot }, x)\) is \(\mathcal {N}_{t}\)-measurable, where \(\mathcal {N}_{t}=\sigma (y_{r}:r\le t,y_{\cdot }\in C)\). Therefore,
is \(\mathcal {F}_{t}\)-measurable for each \((t,x)\in \bar{G}_{\infty }\). After that, in the same way as the usual Itô formula is proved on the basis of Taylor's formula, using the fact that \(u_{t}(y_{\cdot },x)\) is infinitely differentiable, we obtain that for any \(x\in G\) almost surely for all \(t\ge 0\)
The above properties of \(u_{t}\) and Theorem 4.1 now imply that (perhaps after modifying \(v_{t}\) on a set of probability zero) \(v_{t}\) satisfies (1.2) with zero boundary condition and initial condition \(\pi _{0}\) in the sense explained below that formula and is such that
for each \(T\ge 0\). Uniqueness of solutions of (1.2) in this class of functions is a classical result, and this proves the remaining assertions of the theorem.
Proof of Theorem 1.5
After (1.9) has been proved and the modification \(u_{t}(x-\sigma b_{t})\) of \(\pi _{t}(x)\) has been chosen, the assertions related to (1.11) and (1.12) follow directly from (3.1) and (3.2). This proves the theorem.
References
Krylov, N.V.: Brownian trajectory is a regular lateral boundary for the heat equation. SIAM J. Math. Anal. 34(5), 1167–1182 (2003)
Krylov, N.V.: One more square root law for Brownian motion and its application to SPDEs. Probab. Theory Relat. Fields 127, 496–512 (2003)
Krylov, N.V.: Maximum principle for SPDEs and its applications, A Volume in Honor of Professor Boris L. Rozovskii. In: Baxendale, P.H., Lototsky, S.V. (eds.) Stochastic Differential Equations: Theory and Applications. Interdisciplinary Mathematical Sciences, vol. 2, pp. 311–338. World Scientific, Singapore (2007)
Krylov, N.V.: On the Itô-Wentzell formula for distribution-valued processes and related topics. Prob. Theory Relat. Fields 150(1–2), 295–319 (2011)
Krylov, N.V., Rozovskii, B.L.: Stochastic evolution equations, pp. 71–146 in Itogy nauki i tekhniki, vol. 14, VINITI, Moscow, 1979 in Russian; English translation in J. Soviet Math., 16(4), 1233–1277 (1981)
Krylov, N.V., Wang, Teng: Filtering partially observable diffusions up to the exit time from a domain. Stoch. Proc. Appl. 121(8), 1785–1815 (2011)
Pardoux, E.: Equations aux dérivées partielles stochastiques non linéaires monotones. Etude de solutions fortes de type Ito (1975) Thèse Doct. Sci. Math. Univ, Paris Sud. http://www.cmi.univ-mrs.fr/~pardoux/Pardoux_these
Rozovskii, B.L.: Stochastic evolution systems. Kluwer, Dordrecht (1990)
The author was partially supported by NSF Grant DMS-1160569.
Appendix
Let \(w^{1}_{t},w^{2}_{t},...\) be independent one-dimensional Wiener processes with respect to a filtration of complete \(\sigma \)-fields \(\mathcal {F}_{t}\subset \mathcal {F}\), and let \(\mathcal {P}\) denote the predictable \(\sigma \)-field related to \(\{\mathcal {F}_{t}\}\). Assume that on \(\Omega \times (0,\infty )\times \mathbb {R}^{d}\) we are given \(a_{t}(x)=(a^{ij}_{t}(x))\) which is a \(d\times d\)-symmetric matrix-valued function, \(b_{t}(x)\) which is an \(\mathbb {R}^{d}\)-valued function, real-valued \(c_{t}(x)\), and \(\sigma ^{i\cdot }_{t}(x) =(\sigma ^{i,1}_{t}(x),\sigma ^{i,2}_{t}(x),...)\), \(i=1,...,d\), and \(\nu _{t}(x)=(\nu ^{1}_{t}(x),\nu ^{2}_{t}(x),...)\), which are \(\ell _{2}\)-valued functions. Fix some constants \(K,\delta _{0}\in (0,\infty )\).
Assumption 4.1
(i) The functions \(a,b,c,\sigma ,\nu \) are bounded and measurable as functions of \((\omega ,t,x)\) and are predictable as functions of \( (\omega ,t) \) for each x.
(ii) The functions \(a_{t}(x)\) are Lipschitz continuous in x with constant K (independent of \(\omega ,t\)).
(iii) For any \(\lambda ,x\in \mathbb {R}^{d}\), \(t\ge 0\), and \(\omega \)
$$\begin{aligned} (2a^{ij}_{t}-\alpha ^{ij}_{t})(x)\lambda ^{i}\lambda ^{j} \ge \delta _{0}|\lambda |^{2}, \end{aligned}$$(4.1)where \(\alpha ^{ij}:=\sigma ^{ik}\sigma ^{jk}\).
Let \(G\subset \mathbb {R}^{d}\) be an open set, \(T\in [0,\infty )\), and let \(u_{t}(x)\) be a real-valued function given on \(\Omega \times [0,T]\times \bar{G}\). Also suppose that on \(\Omega \times (0,T)\times G\) we are given functions \(f_{t}(x)\) and \(g_{t}(x)=(g^{1}_{t}(x),g^{2}_{t}(x),...)\) with values in \(\mathbb {R}\) and \( \ell _{2}\), respectively.
Define \(D_{i}=\partial /\partial x^{i}\), \(D_{ij}=D_{i}D_{j}\), and let Du denote the gradient of u and \(D^{2}u\) its Hessian.
Assumption 4.2
(i) For any \(\omega \) the function \(u_{t}(x)\) is continuous in \([0,T]\times \bar{G}\) and vanishes on \([0,T]\times \partial G\) and, moreover (if G is unbounded), for any \(\delta >0\) and \(\omega \) there exists a compact set \(\Gamma \subset G\) such that \(|u_{t}(x)|\le \delta \) for all \(t\in [0,T]\) and \(x\in G{\setminus }\Gamma \). For any x the function \(u_{t}(x)\) is \(\mathcal {F}_{t}\)-adapted. For any \(\omega \) (if G is unbounded)
$$\begin{aligned} \int _{G}|u_{0}|^{2}\,dx+\int _{0}^{T}\int _{G}|u_{t}|^{2}\,dxdt<\infty . \end{aligned}$$
(ii) For any \(\omega \), for almost any \(t\in [0,T]\), the second-order derivatives \(D^{2}u_{t}(x)\) are continuous with respect to x and for any compact set \(\Gamma \subset G\)
$$\begin{aligned} \int _{0}^{T}\int _{\Gamma }|D^{2}u_{t}|\,dxdt<\infty . \end{aligned}$$
(iii) For each \(\omega \) and compact set \(\Gamma \subset G\)
$$\begin{aligned} \int _{0}^{T}\int _{\Gamma }|Du_{t}|^{2}\,dx dt<\infty . \end{aligned}$$
(iv) The functions \(f_{t}(x)\) and \(g_{t}(x)\) are \(\mathcal {P}\otimes \mathcal {B}(G)\)-measurable as functions of \((\omega ,t,x)\), where \(\mathcal {B}(G)\) is the Borel \(\sigma \)-field on G. For each \(x\in G\) and \(\omega \) we have
$$\begin{aligned} \int _{0}^{T}(|D^{2}u_{s}(x)|+|Du_{s}(x)|^{2}+|f_{s}(x)|^{2} + |g_{s}(x)|^{2}_{ \ell _{2}})\,ds<\infty . \end{aligned}$$
(v) For each \(\omega \)
$$\begin{aligned} \int _{G}\int _{0}^{T}(|f_{s}(x)|^{2} + |g_{s}(x)|^{2}_{ \ell _{2}})\,dsdx<\infty , \end{aligned}$$and for each \(x\in G\) with probability one we have for all \(t\in [0,T]\) that
$$\begin{aligned} u_{t}(x)&= u_{0}(x) +\int _{0}^{t}\big [\sigma ^{ik}_{s}(x)D_{i}u_{s}(x) +\nu ^{k}_{s}(x)u_{s}(x)+g^{k}_{s}(x)\big ]\,dw^{k}_{s}\\&\quad +\int _{0}^{t}\big [a^{ij}_{s}(x) D_{ij}u_{s}(x)+b^{i}_{s}(x)D_{i}u_{s}(x) +c_{s}(x)u_{s}(x)+f_{s}(x)\big ]\,ds. \end{aligned}$$
Theorem 4.1
Under the above assumptions
(a.s.) and for any \(\phi \in C^{\infty }_{0}(G)\) with probability one
for all \(t\in [0,T]\).
Proof
By the stochastic Fubini theorem (see, for instance, Lemma 2.7 of [4]), for any \(\phi \in C^{\infty }_{0}(G)\) with probability one for all \(t\ge 0\)
where \(u^{\delta }=u-\delta \).
For \(\varepsilon ,\delta >0\) define
Observe that as \(\varepsilon \downarrow 0\) we have
uniformly with respect to \(t\in [0,T]\) by assumption. Therefore \(\tau _{\varepsilon ,\delta }\rightarrow T\) as \(\varepsilon \downarrow 0\) for any \(\delta >0\). Also notice that
in \(G{\setminus } K_{\varepsilon }\) if \(0\le t<\tau _{\varepsilon ,\delta } \).
By Lemma 2.5 of [3] for any \(\phi \in C^{\infty }_{0}(G)\) (a.s.) for all \(t\in [0,T]\)
where
We take \(\phi \ge 0\) such that \(\phi =1\) on \(K_{\varepsilon }\). Then for \(0\le s<\tau _{\varepsilon ,\delta }\)
Indeed, if \(u^{\delta }_{s}(x)\le 0\), then both sides vanish; and if \(u^{\delta }_{s}(x)>0\) and \(0\le s<\tau _{\varepsilon ,\delta }\), then \(x\in K_{\varepsilon }\) and \(\phi (x)=1\). By also taking into account that
we transform (4.2) for such a \(\phi \) into
where this time
To proceed further we recall again that the two conditions \(s<\tau _{\varepsilon ,\delta }\) and \(u_{s}(x) >\delta \) imply that \(x\in K_{\varepsilon }\), so that
After that we use (4.1) and the inequalities like \(ab\le \varepsilon a^{2}+\varepsilon ^{-1}b^{2}\). We also recall that \(b^{i}_{s}-D_{j}a^{ij}_{s}, c_{s}\), and \(|\nu _{s}|_{\ell _{2}}\) are bounded and then in an absolutely standard way derive from (4.3) that there exists a constant \(N\in (0,\infty )\), independent of \(\omega ,t, \varepsilon ,\delta \), such that for any \(t\in [0,T]\) and \(\omega \)
In particular, for any stopping time \(\tau \le T\)
If \(\tau \) is a localizing time for the local martingale \(m^{\varepsilon ,\delta }_{t}\) starting at zero, then \(Em^{\varepsilon ,\delta }_{\tau }=0\) and
The last inequality actually holds for any stopping time \(\tau \le T\), which is easily proved by approximation.
Now we first let \(\varepsilon \downarrow 0\) and then \(\delta \downarrow 0\). Then by the monotone convergence theorem we obtain
Similarly,
and, since one knows that \(I_{u=0}Du=0\) (a.e.) for any function u of class \(W^{1}_{2}\), we finally conclude that
For \(n>0\) and \(\tau =\tau ^{n}\), where
the right-hand side of (4.4) is finite. Hence,
(a.s.). To prove the first assertion of the theorem, it only remains to observe that, by assumption, for any \(\omega \), we have \(\tau ^{n}=T\) for all sufficiently large n. The second assertion follows from the stochastic Fubini theorem. The theorem is proved.
Krylov, N.V. On singularity as a function of time of a conditional distribution of an exit time. Probab. Theory Relat. Fields 165, 541–557 (2016). https://doi.org/10.1007/s00440-015-0639-3
Keywords
- Stochastic partial differential equations
- Heat equation in domains with irregular lateral boundaries
- Filtering of partially observable diffusion processes
Mathematics Subject Classification
- 60H15
- 93E11