Abstract
In this paper, we first present a sufficient condition (a variant) for the large deviation criteria of Budhiraja, Dupuis and Maroulas for functionals of Brownian motions. The sufficient condition is particularly suitable for stochastic differential/partial differential equations with reflection. We then apply it to establish a large deviation principle for obstacle problems of quasi-linear stochastic partial differential equations. It turns out that backward stochastic differential equations also play an important role.
1 Introduction
Consider the following obstacle problems for quasilinear stochastic partial differential equations (SPDEs) in \(\mathbb {R}^d\):
where \(B^j_t, j=1,2,\ldots \) are independent real-valued standard Brownian motions, the stochastic integral against the Brownian motions is interpreted as the backward Itô integral, \(\Delta \) is the Laplacian operator, \(f, g_i, h_j\) are appropriate measurable functions specified later, L(t, x) is the given barrier/obstacle function, and R(dt, dx) is a random measure which is part of the solution pair (U, R). The random measure R plays a role similar to a local time: it prevents the solution U(t, x) from falling below the barrier L.
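In explicit form (a reconstruction from the weak formulation (2.1) below, so the display should be taken as indicative rather than verbatim), the obstacle problem reads

```latex
$$\begin{aligned}
&dU(t,x)+\Big[\tfrac{1}{2}\Delta U(t,x)+f\big(t,x,U(t,x),\nabla U(t,x)\big)
+\mathrm{div}\, g\big(t,x,U(t,x),\nabla U(t,x)\big)\Big]\,dt \\
&\quad +\sum_{j=1}^{\infty}h_j\big(t,x,U(t,x),\nabla U(t,x)\big)\,dB^j_t
+R(dt,dx)=0, \\
&U(T,x)=\Phi(x),\qquad U(t,x)\ge L(t,x),
\end{aligned}$$
```

together with the minimality condition on R stated in item (4) of Definition 2.1 below.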
Such SPDEs appear in various applications like pathwise stochastic control problems, the Zakai equations in filtering and stochastic control with partial observations. Existence and uniqueness of the above stochastic obstacle problems were established in [13] based on an analytical approach. Existence and uniqueness of the obstacle problems for quasi-linear SPDEs on the whole space \(\mathbb {R}^d\) and driven by finite dimensional Brownian motions were studied in [20] using the approach of backward stochastic differential equations (BSDEs). Obstacle problems for nonlinear stochastic heat equations driven by space-time white noise were studied by several authors, see [23, 28] and references therein.
In this paper, we are concerned with the small noise large deviation principle (LDP) of the following obstacle problems for quasilinear SPDEs:
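With a small parameter \(\varepsilon >0\) multiplying the noise, the small-noise family presumably reads (reconstructed so as to be consistent with the obstacle problem (5.1) in Sect. 5):

```latex
$$\begin{aligned}
&dU^{\varepsilon}(t,x)+\Big[\tfrac{1}{2}\Delta U^{\varepsilon}
+f\big(t,x,U^{\varepsilon},\nabla U^{\varepsilon}\big)
+\mathrm{div}\, g\big(t,x,U^{\varepsilon},\nabla U^{\varepsilon}\big)\Big]\,dt \\
&\quad +\sqrt{\varepsilon}\sum_{j=1}^{\infty}
h_j\big(t,x,U^{\varepsilon},\nabla U^{\varepsilon}\big)\,dB^j_t
+R^{\varepsilon}(dt,dx)=0, \\
&U^{\varepsilon}(T,\cdot)=\Phi,\qquad U^{\varepsilon}\ge L.
\end{aligned}$$
```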
Large deviations for stochastic evolution equations and stochastic partial differential equations driven by Brownian motions have been investigated in many papers, see e.g. [3, 5, 6, 8, 11, 18, 19, 25, 27] and references therein.
To obtain the large deviation principle, we will adopt the weak convergence approach introduced by Budhiraja, Dupuis and Maroulas in [2, 3, 4]. We refer the reader to [2, 3, 11, 18, 19, 25] for large deviation principles of various dynamical systems driven by Gaussian noises.
In order to apply the weak convergence method to the obstacle problems, because of the singularity introduced by the reflection/local time, it seems difficult to use the criteria in [2] directly. We therefore first need to provide a sufficient condition for verifying the criteria of Budhiraja–Dupuis–Maroulas. This sufficient condition turns out to be particularly suitable for stochastic dynamics generated by stochastic differential equations and stochastic partial differential equations with reflection. The advantage of the new sufficient condition is that it shifts the difficulty of proving the tightness of the perturbations of stochastic differential (partial differential) equations to a study of the continuity (with respect to the driving signals) of the deterministic skeleton equations associated with the stochastic equations. This new sufficient condition was recently applied successfully to obtain a large deviation principle for stochastic conservation laws (Ref. [9]), which otherwise could not be established (or at least only with great difficulty) using the original form of the criteria in [2].
The important part of the current work is to study the continuity of the deterministic obstacle problems driven by elements of the Cameron–Martin space of the driving Brownian motions. We need to show that if the driving signals converge weakly in the Cameron–Martin space, then the corresponding solutions of the skeleton equations converge in the appropriate state space. This turns out to be hard because of the singularity caused by the obstacle. To overcome the difficulties, we have to appeal to the penalized approximation of the skeleton equation and to establish some uniform estimates for the solutions of the approximating equations with the help of the backward stochastic differential equation representation of the solutions. This is purely for technical reasons, since the LDP problem itself has little to do with backward stochastic differential equations.
The rest of the paper is organized as follows. In Sect. 2, we introduce the stochastic obstacle problem and the precise framework. In Sect. 3, we recall the weak convergence approach of large deviations and present a sufficient condition. Section 4 is devoted to the study of skeleton obstacle problems. We will show that the solution of the skeleton problem is continuous with respect to the driving signal. The proof of the large deviation principle is in Sect. 5.
2 The Framework
2.1 Obstacle Problems
Let \(H:=\mathbf {L}^2(\mathbb {R}^d)\) be the Hilbert space of square integrable functions with respect to the Lebesgue measure on \(\mathbb {R}^d\). The associated scalar product and the norm are denoted by
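For concreteness, the scalar product and norm on \(H\) are presumably the standard \(L^2\) ones (the display is omitted in this version):

```latex
$$\begin{aligned}
(u,v):=\int_{\mathbb{R}^d}u(x)\,v(x)\,dx,\qquad
\Vert u\Vert :=(u,u)^{1/2}.
\end{aligned}$$
```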
Let \(V:=H^1(\mathbb {R}^d)\) denote the first-order Sobolev space, endowed with the norm and the inner product:
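A natural reading of the omitted display is the usual first-order Sobolev inner product and norm:

```latex
$$\begin{aligned}
\langle u,v\rangle_V:=(u,v)+(\nabla u,\nabla v),\qquad
\Vert u\Vert_V:=\big(\Vert u\Vert^2+\Vert \nabla u\Vert^2\big)^{1/2}.
\end{aligned}$$
```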
\(V^*\) will denote the dual space of V. When causing no confusion, we also use \(\langle u,v\rangle \) to denote the dual pair between V and \(V^*\).
Our evolution problem will be considered over a fixed time interval [0, T]. Now we introduce the following assumptions.
Assumption 2.1
-
(i)
\(f:[0,T]\times \mathbb {R}^{d}\times \mathbb {R}\times \mathbb {R}^{d}\rightarrow \mathbb {R}\), \(h=(h_1,\ldots ,h_i,\ldots ):[0,T]\times \mathbb {R}^{d}\times \mathbb {R}\times \mathbb {R}^{d}\rightarrow \mathbb {R}^{\infty }\) and \(g=(g_1,\ldots ,g_d):[0,T]\times \mathbb {R}^{d}\times \mathbb {R}\times \mathbb {R}^{d}\rightarrow \mathbb {R}^{d}\) are measurable in (t, x, y, z) and satisfy \( f^0, h^0, g^0 \in \mathbf {L}^2\left( [0,T] \times \mathbb {R}^{d}\right) \cap \mathbf {L}^{\infty }\left( [0,T] \times \mathbb {R}^{d}\right) \) where \(f^0(t,x) := f(t,x,0, 0)\), \(h^0(t,x) := (\sum _{j=1}^{\infty }h_j(t,x,0, 0)^2)^{\frac{1}{2}}\) and \(g^0(t,x) := (\sum _{j=1}^{d}g_j(t,x,0, 0)^2)^{\frac{1}{2}}\).
-
(ii)
There exist constants \(c>0\), \(0<\alpha <1\) and \(0<\beta <1\) such that for any \((t,x)\in [0,T]\times \mathbb {R}^d~;~(y_1,z_1),(y_2,z_2)\in \mathbb {R}\times \mathbb {R}^{d}\)
$$\begin{aligned} |f(t,x,y_1,z_1)-f(t,x,y_2,z_2)|\le & {} c\big (|y_1-y_2|+|z_1-z_2|\big ) \\ \left( \sum _{i=1}^{\infty } |h_i(t,x,y_1,z_1)-h_i(t,x,y_2,z_2)|^2\right) ^{1/2}\le & {} c|y_1-y_2|+\beta |z_1-z_2|\\ \left( \sum _{i=1}^{d} |g_i(t,x,y_1,z_1)-g_i(t,x,y_2,z_2)|^2\right) ^{1/2}\le & {} c|y_1-y_2|+\alpha |z_1-z_2|. \end{aligned}$$ -
(iii)
There exists a function \(\bar{h}\in L^2(\mathbb {R}^{d})\cap L^{\infty }(\mathbb {R}^{d})\) such that for \((t,x,y,z) \in [0,T]\times \mathbb {R}^{d}\times \mathbb {R}\times \mathbb {R}^{d}\),
$$\begin{aligned} \left( \sum _{i=1}^{\infty } |h_i(t,x,y,z)|^2\right) ^{1/2}\le \bar{h}(x). \end{aligned}$$ -
(iv)
The contraction property: \(\alpha +\displaystyle \frac{\beta ^2}{2}<\displaystyle \frac{1}{2}\).
-
(v)
The barrier function \(L:[0,T]\times \mathbb {R}^d\rightarrow \mathbb {R}\) satisfies
$$\begin{aligned} \frac{\partial L(t,x)}{\partial t}, \quad \nabla L(t,x), \quad \Delta L(t,x)\in L^2([0,T]\times \mathbb {R}^d)\cap L^{\infty }([0,T]\times \mathbb {R}^d), \end{aligned}$$where the gradient \(\nabla \) and the Laplacian \(\Delta \) act on the space variable x.
Let \(H_T:=C([0, T], H)\cap L^2([0, T], V)\) be the Banach space endowed with the norm
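The norm on \(H_T\), omitted in this version, is presumably the natural one combining the two memberships:

```latex
$$\begin{aligned}
\Vert u\Vert_{H_T}:=\sup_{t\in[0,T]}\Vert u(t)\Vert
+\Big(\int_0^T\Vert u(t)\Vert_V^2\,dt\Big)^{1/2}.
\end{aligned}$$
```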
We denote by \( {\mathcal H}_T\) the space of predictable processes \((u_t, t\ge 0 ) \) such that \(u\in H_T\) and that
The space of test functions is \(\mathcal {D}= \mathcal {C}_c^{\infty } (\mathbb {R}^+) \otimes \mathcal {C}_c^{\infty } (\mathbb {R}^d)\), where \(\mathcal {C}_c^{\infty } (\mathbb {R}^+)\) denotes the space of real-valued infinitely differentiable functions with compact supports in \(\mathbb {R}^+\) and \(\mathcal {C}_c^{\infty }(\mathbb {R}^d)\) is the space of infinitely differentiable functions with compact supports in \(\mathbb {R}^d\).
Let \(B^j_t, j=1,2,\ldots \) be a sequence of independent real-valued standard Brownian motions on a complete filtered probability space \((\Omega , \mathcal {F}, \mathcal {F}_t, P)\). We now make precise the definition of solutions for the reflected quasilinear SPDE (1.1):
Definition 2.1
We say that a pair (U, R) is a solution of the obstacle problem (1.1) if
-
(1)
\(U\in {\mathcal H}_T\), \(U(t,x)\ge L(t,x)\), \(dP\otimes dt\otimes dx\)-a.e. and \(U(T,x)=\Phi (x)\), \(dx\)-a.e.
-
(2)
R is a random regular measure on \([0,T)\times \mathbb {R}^d\),
-
(3)
for every \(\varphi \in \mathcal D\)
$$\begin{aligned}&(U_t,\varphi _t)-(\Phi ,\varphi _T)+\int _t^T(U_s, \partial _s\varphi _s)ds+\frac{1}{2}\int _t^T\langle \nabla U_s, \nabla \varphi _s \rangle ds \nonumber \\&\quad =\int _t^T(f_s(U_s,\nabla U_s),\varphi _s)ds+\sum _{j=1}^{\infty }\int _t^T (h_s^j(U_s, \nabla U_s), \varphi _s)dB_s^j\ \nonumber \\&\quad \quad -\sum _{i=1}^d \int _t^T(g_s^i(U_s, \nabla U_s), \partial _i\varphi _s)ds+\int _t^T\int _{\mathbb {R}^d}\varphi _s(x)R(dx,ds), \end{aligned}$$(2.1) -
(4)
U admits a quasi-continuous version \(\tilde{U}\), and
$$\begin{aligned} \int _0^T\int _{\mathbb {R}^d}(\tilde{U}(s,x)-L(s,x))R(dx,ds)=0\quad \quad a.s. \end{aligned}$$
Remark 2.1
We refer the reader to [13] for the precise definition of regular measures and quasi-continuity of functions on the space \([0, T]\times \mathbb {R}^d\).
Let us recall the following result from [13, 20].
Theorem 2.1
Let Assumption 2.1 hold and assume \(\Phi (x)\ge L(T,x)\) \(dx\)-a.e. Then there exists a unique solution (U, R) to the obstacle problem (1.1).
2.2 The Measures \(\mathbb {P}^m\)
The operator \( \partial _t + \frac{1}{2} \Delta \), which represents the main linear part of equations (1.1) and (1.2), is associated with the Brownian motion in \(\mathbb {R}^d\). The sample space of the Brownian motion is \( \Omega ' = \mathcal {C }([0, \infty ); \mathbb {R}^d)\), the canonical process \((W_t)_{t \ge 0}\) is defined by \( W_t (\omega ) = \omega (t)\), for any \( \omega \in \Omega '\), \(t \ge 0\), and the shift operator, \( \theta _t \, : \, \Omega ' \longrightarrow \Omega '\), is defined by \( \theta _t (\omega ) (s) = \omega (t+s)\), for any \(s \ge 0\) and \( t \ge 0\). The canonical filtration \( \mathcal {F}_t^W = \sigma \left( W_s; s \le t \right) \) is completed by the standard procedure with respect to the probability measures produced by the transition function
where \( q_t (x) = (2\pi t)^{- \frac{d}{2}} \exp (-|x|^2/2t)\) is the Gaussian density. Thus we get a continuous Hunt process \((\Omega ', W_t, \theta _t, \mathcal {F}, \mathcal {F}^W_t, \mathbb {P}^x)\). We shall also use the backward filtration of the future events \( \mathcal {F}'_t = \sigma \left( W_s; \; \, s \ge t \right) \) for \(t\ge 0\). \(\mathbb {P}^0\) is the Wiener measure, which is supported by the set \( \Omega '_0 = \{ \omega \in \Omega ', \; \, \omega (0) =0 \}\). We also set \( \Pi _0 (\omega ) (t) = \omega (t) - \omega (0),\, t \ge 0\), which defines a map \( \Pi _0 \, : \, \Omega ' \rightarrow \Omega '_0\). Then \(\Pi = (W_0, \Pi _0 ) \, : \, \Omega ' \rightarrow \mathbb {R}^d \times \Omega '_0\) is a bijection. For each probability measure \(\mu \) on \(\mathbb {R}^d\), the probability \(\mathbb {P}^{\mu }\) of the Brownian motion started with the initial distribution \(\mu \) is given by
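Given the density \(q_t\), the transition function referred to above is presumably the Gaussian one:

```latex
$$\begin{aligned}
P_t(x,dy)=q_t(y-x)\,dy,\qquad
q_t(x)=(2\pi t)^{-\frac{d}{2}}\exp\!\big(-|x|^2/2t\big).
\end{aligned}$$
```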
In particular, for the Lebesgue measure in \(\mathbb {R}^d\), which we denote by \( m = dx\), we have
Notice that \(\{W_{t-r}, \mathcal {F}'_{t-r}, r\in [0, t]\}\) is a backward local martingale under \(\mathbb {P}^m\). Let \(J(\cdot ,\cdot ): [0, \infty )\times \mathbb {R}^d \rightarrow \mathbb {R}^d\) be a measurable function such that \(J\in \mathbf {L}^2([0,T] \times \mathbb {R}^d; \mathbb {R}^d) \) for every \(T>0\). We recall the forward and backward stochastic integrals defined in [20, 27] under the measure \(\mathbb {P}^m\).
When J is smooth, one has
3 A Sufficient Condition for LDP
In this section we recall the criterion obtained in [2] for proving a large deviation principle, and we provide a sufficient condition for verifying it.
Let \(\mathcal {E}\) be a Polish space with the Borel \(\sigma \)-field \(\mathcal {B}(\mathcal {E})\). Recall
Definition 3.1
(Rate function) A function \(I: \mathcal {E}\rightarrow [0,\infty ]\) is called a rate function on \(\mathcal {E}\), if for each \(M<\infty \), the level set \(\{x\in \mathcal {E}:I(x)\le M\}\) is a compact subset of \(\mathcal {E}\).
Definition 3.2
(Large deviation principle) Let I be a rate function on \(\mathcal {E}\). A family \(\{X^\varepsilon \}\) of \(\mathcal {E}\)-valued random elements is said to satisfy a large deviation principle on \(\mathcal {E}\) with rate function I if the following two claims hold.
-
(a)
(Upper bound) For each closed subset F of \(\mathcal {E}\),
$$\begin{aligned} \limsup _{\varepsilon \rightarrow 0}\varepsilon \log \mathbb {P}(X^\varepsilon \in F)\le - \inf _{x\in F}I(x). \end{aligned}$$ -
(b)
(Lower bound) For each open subset G of \(\mathcal {E}\),
$$\begin{aligned} \liminf _{\varepsilon \rightarrow 0}\varepsilon \log \mathbb {P}(X^\varepsilon \in G)\ge - \inf _{x\in G}I(x). \end{aligned}$$
3.1 A Criterion of Budhiraja–Dupuis
The Cameron–Martin space associated with the Brownian motion \(\{B_t=(B_t^1,\ldots ,B^j_t,\ldots ), t\in [0,T]\}\) is isomorphic to the Hilbert space \(K:=L^2([0,T]; l^2)\) with the inner product:
where
\(l^2\) is a Hilbert space with inner product \(\langle a, b\rangle _{l^2}=\sum _{i=1}^{\infty }a_ib_i\) for \(a,b\in l^2\).
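Written out, the inner product on \(K=L^2([0,T];l^2)\) is presumably

```latex
$$\begin{aligned}
\langle k,l\rangle_K:=\int_0^T\langle k(s),l(s)\rangle_{l^2}\,ds
=\int_0^T\sum_{j=1}^{\infty}k^j(s)\,l^j(s)\,ds,\qquad k,l\in K.
\end{aligned}$$
```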
Let \(\tilde{K}\) denote the class of \(l^2\)-valued \(\{\mathcal {F}_t\}\)-predictable processes \(\phi \) that belong to the space K a.s. Let \(S_N=\{k\in K; \int _0^T\Vert k(s)\Vert _{l^2}^2ds\le N\}\). The set \(S_N\), endowed with the weak topology, is a compact Polish space. Set \(\tilde{S}_N=\{\phi \in \tilde{K};\phi (\omega )\in S_N, \mathbb {P}\text {-a.s.}\}\).
The following result was proved in [2].
Theorem 3.1
For \(\varepsilon >0\), let \(\Gamma ^\varepsilon \) be a measurable mapping from \(C([0,T];\mathbb {R}^\infty )\) into \(\mathcal {E}\). Set \(X^\varepsilon :=\Gamma ^\varepsilon (B(\cdot ))\). Suppose that there exists a measurable map \(\Gamma ^0:C([0,T];\mathbb {R}^\infty )\rightarrow \mathcal {E}\) such that
-
(a)
for every \(N<+\infty \) and any family \(\{k^\varepsilon ;\varepsilon >0\}\subset \tilde{S}_N\) satisfying that \(k^\varepsilon \) converges in law as \(S_N\)-valued random elements to some element k as \(\varepsilon \rightarrow 0\), \(\Gamma ^\varepsilon \left( B(\cdot )+\frac{1}{\sqrt{\varepsilon }}\int _0^{\cdot }k^\varepsilon (s)ds\right) \) converges in law to \(\Gamma ^0(\int _0^{\cdot }k(s)ds)\) as \(\varepsilon \rightarrow 0\);
-
(b)
for every \(N<+\infty \), the set
$$\begin{aligned} \left\{ \Gamma ^0\left( \int _0^{\cdot }k(s)ds\right) ; k\in S_N\right\} \end{aligned}$$is a compact subset of \(\mathcal {E}\).
Then the family \(\{X^\varepsilon \}_{\varepsilon >0}\) satisfies a large deviation principle in \(\mathcal {E}\) with the rate function I given by
with the convention \(\inf \{\emptyset \}=\infty \).
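In the Budhiraja–Dupuis–Maroulas framework, the omitted display for the rate function presumably has the standard form

```latex
$$\begin{aligned}
I(u):=\inf\Big\{\tfrac{1}{2}\int_0^T\Vert k(s)\Vert_{l^2}^2\,ds;\;
k\in K,\ u=\Gamma^0\Big(\int_0^{\cdot}k(s)\,ds\Big)\Big\},
\end{aligned}$$
```

with the infimum over the empty set taken to be \(\infty\), as stated.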
3.2 A Sufficient Condition
Here is a sufficient condition for verifying the assumptions in Theorem 3.1.
Theorem 3.2
For \(\varepsilon >0\), let \(\Gamma ^\varepsilon \) be a measurable mapping from \(C([0,T];\mathbb {R}^\infty )\) into \(\mathcal {E}\). Set \(X^\varepsilon :=\Gamma ^\varepsilon (B(\cdot ))\). Suppose that there exists a measurable map \(\Gamma ^0:C([0,T];\mathbb {R}^\infty )\rightarrow \mathcal {E}\) such that
-
(i)
for every \(N<+\infty \), any family \(\{k^\varepsilon ;\varepsilon >0\}\subset \tilde{S}_N\) and any \(\delta >0\),
$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}P(\rho \left( Y^\varepsilon , Z^\varepsilon \right) >\delta )=0, \end{aligned}$$where \(Y^\varepsilon =\Gamma ^\varepsilon (B(\cdot )+\frac{1}{\sqrt{\varepsilon }}\int _0^{\cdot }k^\varepsilon (s)ds)\), \(Z^\varepsilon =\Gamma ^0\left( \int _0^{\cdot }k^\varepsilon (s)ds\right) \) and \(\rho (\cdot , \cdot )\) stands for the metric in the space \(\mathcal {E}\);
-
(ii)
for every \(N<+\infty \) and any family \(\{k^\varepsilon ;\varepsilon >0\}\subset {S}_N\) satisfying that \(k^\varepsilon \) converges weakly to some element k as \(\varepsilon \rightarrow 0\), \(\Gamma ^0\left( \int _0^{\cdot }k^\varepsilon (s)ds\right) \) converges to \(\Gamma ^0(\int _0^{\cdot }k(s)ds)\) in the space \(\mathcal {E}\).
Then the family \(\{X^\varepsilon \}_{\varepsilon >0}\) satisfies a large deviation principle in \(\mathcal {E}\) with the rate function I given by
with the convention \(\inf \{\emptyset \}=\infty \).
Remark 3.1
When proving a small noise large deviation principle for stochastic differential equations/stochastic partial differential equations, condition (i) is usually not difficult to check because the small noise disappears when \(\varepsilon \rightarrow 0\).
Proof
We will show that the conditions in Theorem 3.1 are fulfilled. Condition (b) in Theorem 3.1 follows from condition (ii) because \(S_N\) is compact with respect to the weak topology. Condition (i) implies that for any bounded, uniformly continuous function \(G(\cdot )\) on \(\mathcal {E}\),
Thus, condition (a) will be satisfied if \(Z^\varepsilon \) converges in law to \(\Gamma ^0(\int _0^{\cdot }k(s)ds)\) in the space \(\mathcal {E}\). This is indeed true since the mapping \(\Gamma ^0\) is continuous by condition (ii) and \(k^\varepsilon \) converges in law as an \(S_N\)-valued random element to k. The proof is complete. \(\square \)
4 Skeleton Equations
Recall \(K:=L^2([0, T], l^2)\). Let \(k=(k^1,\ldots ,k^j,\ldots )\in K\) and consider the deterministic obstacle problem:
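For orientation, the skeleton problem (4.1) should be the controlled, deterministic analogue of (1.1), with the noise replaced by the control k (a reconstruction; compare (5.3) in Sect. 5):

```latex
$$\begin{aligned}
&du^{k}(t,x)+\Big[\tfrac{1}{2}\Delta u^{k}
+f\big(t,x,u^{k},\nabla u^{k}\big)
+\mathrm{div}\, g\big(t,x,u^{k},\nabla u^{k}\big)
+\sum_{j=1}^{\infty}h_j\big(t,x,u^{k},\nabla u^{k}\big)\,k^j(t)\Big]\,dt \\
&\quad +R(dt,dx)=0,\qquad
u^{k}(T,\cdot)=\Phi,\quad u^{k}\ge L.
\end{aligned}$$
```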
The existence and uniqueness of the solution of the deterministic obstacle problem (4.1) can be obtained similarly as for the random obstacle problem (1.1) (but simpler). We refer the reader to [13] for more details. Denote by \(u^{k^{\varepsilon }}\) the solution of equation (4.1) with \(k^{\varepsilon }\) replacing k. The main purpose of this section is to show that \(u^{k^{\varepsilon }}\) converges to \(u^k\) in the space \(H_T\) if \(k^{\varepsilon }\rightarrow k\) weakly in the Hilbert space K. To this end, we first need to establish a number of preliminary results.
Consider the penalized equation:
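Following the penalization scheme of [13], the penalized equation should replace the reflection measure by the term \(n(u^{k,n}-L)^-\); a plausible form (our reconstruction) is

```latex
$$\begin{aligned}
&\partial_t u^{k,n}(t,x)+\tfrac{1}{2}\Delta u^{k,n}
+f\big(t,x,u^{k,n},\nabla u^{k,n}\big)
+\mathrm{div}\, g\big(t,x,u^{k,n},\nabla u^{k,n}\big) \\
&\quad +\sum_{j=1}^{\infty}h_j\big(t,x,u^{k,n},\nabla u^{k,n}\big)\,k^j(t)
+n\big(u^{k,n}(t,x)-L(t,x)\big)^-=0,\qquad
u^{k,n}(T,\cdot)=\Phi.
\end{aligned}$$
```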
It is known that \(u^{k,n}\rightarrow u^k\) as \(n\rightarrow \infty \) for a fixed \(k\in K\) (see [13]). For later use, we need to show that for any \(M>0\), \(u^{k,n}\rightarrow u^k\) uniformly over the bounded subset \(\{k; \Vert k\Vert _K\le M\}\) as \(n\rightarrow \infty \). For this purpose, it turns out that we have to appeal to the BSDE representation of the solutions. Let \(Y^{k,n}_t:=u^{k,n}(t, W_t)\), \(Z^{k,n}_t=\nabla u^{k,n}(t, W_t)\). Then it was shown in [20] that \((Y^{k,n}, Z^{k,n})\) is the solution of the backward stochastic differential equation under \(\mathbb {P}^m\):
where \(S_r=L(r, W_r)\) satisfies
The following result is a uniform estimate for \((Y^{k,n}, Z^{k,n})\).
Lemma 4.1
For \(M>0\), we have the following estimate:
The proof of this lemma repeats that of Lemma 6 in [20]. One just needs to notice that, when applying Gronwall's inequality, the constant \(c_M\) on the right of (4.7) depends only on the norm of k, which is bounded by M.
We also need the following estimate.
Lemma 4.2
Proof
Let \(F(z)=z^2\). Applying Itô's formula (see [20]) we have
Rearranging the terms we get
Using the conditions on h in Assumption 2.1, for any given positive constant \(\varepsilon _1\) we have
By the assumptions on g, for any given positive constant \(\varepsilon _2\) we have
By a similar calculation, we have for any given \(\varepsilon _3>0\),
Substitute (4.11), (4.12) and (4.13) back into (4.10) and choose \(\varepsilon _1, \varepsilon _2, \varepsilon _3\) sufficiently small to obtain
where the condition on \(\alpha \) in Assumption 2.1 was used. Now the desired conclusion (4.8) follows from Gronwall's inequality. \(\square \)
Lemma 4.3
For \(M>0\), we have
Proof
Let \(G(z)=(z^-)^4\). By Itô's formula we have
Rearrange the terms in the above equation to get
By Assumption 2.1, for any given positive constant \(\varepsilon _1\) we have
Using Assumption 2.1 again and a computation similar to the above, we can show that for any constants \(\varepsilon _2>0, \varepsilon _3>0\),
and
Put (4.20), (4.19), (4.18) and (4.17) together, select the constants \(\varepsilon _1\), \(\varepsilon _2\) and \(\varepsilon _3\) sufficiently small, and take expectation to get
Applying Gronwall's inequality and Lemma 4.2 we obtain
and
Observe that by the assumptions on the function g,
and
Using (4.23)–(4.25) and taking supremum over the interval [0, T] in (4.17) we further deduce that
completing the proof. \(\square \)
Proposition 4.1
For any \(M>0\), we have
Proof
We note that for any \(n, q\ge 1\),
(4.27) follows from the fact that the law of \(W_t\) under \(\mathbb {P}^m\) is the Lebesgue measure m for any \(t\ge 0\); see also [20] (Theorem 3, Corollary 1) for details. Recall that for each \(k\in K\), \(u^{k,n}\rightarrow u^k\) as \(n\rightarrow \infty \). Thus, to prove (4.26), it is sufficient to show
and
We will achieve this with the help of the backward stochastic differential equations satisfied by \(Y^{k,n}_t=u^{k,n}(t, W_t)\). Applying Itô's formula we have
Note that
By Young’s inequality, we have for any \(\delta _1>0\),
Moreover for any \(\delta _2>0\), we have
Using Young’s inequality again, we have for any \(\delta _3>0\),
Substitute (4.31)–(4.34) back into (4.30), choose the constants \(\delta _i, i=1,2,3\) sufficiently small and take expectation to obtain
Using Lemmas 4.1 and 4.3 and applying Gronwall's inequality we deduce that
and
Next we will strengthen the convergence in (4.36) to
We notice that by Burkholder's inequality, for any \(\delta _4>0\) we have
Similarly, we have for \(\delta _5>0\)
Now use the above two estimates (4.39) and (4.40) and the already proved (4.36) to obtain (4.38). This completes the proof.\(\square \)
Theorem 4.1
Let Assumption 2.1 hold. Assume that \(k^{\varepsilon }\rightarrow k\) weakly in the Hilbert space K as \(\varepsilon \rightarrow 0\). Then \(u^{k^{\varepsilon }}\) converges to \(u^k\) in the space \(H_T\), where \(u^{k^{\varepsilon }}\) denotes the solution of equation (4.1) with \(k^{\varepsilon }\) replacing k.
Proof
We first prove a similar convergence result for the corresponding penalized PDEs; combined with the uniform convergence proved in Proposition 4.1, this completes the proof of Theorem 4.1. Let \(u^{k^{\varepsilon }, n}\) be the solution to the following penalized PDE:
We first fix the integer n and show \(\lim _{\varepsilon \rightarrow 0}\Vert u^{k^{\varepsilon },n}-u^{k,n}\Vert _{H_T}=0\), where \(u^{k,n}\) is the solution of equation (4.41) with \(k^{\varepsilon }\) replaced by k. To this end, we first prove that the family \(\{ u^{k^{\varepsilon }, n}, \varepsilon >0\}\) is tight in the space \(L^2([0, T], L_{loc}^2(\mathbb {R}^d))\). Using the chain rule and Gronwall's inequality, as in Lemma 4.1, we can show that
For \(\beta \in (0,1)\), recall that \(W^{\beta ,2}([0,T], V^*)\) is the space of mappings \(v(\cdot ): [0, T]\rightarrow V^*\) that satisfy
It is well known (see e.g. [15]) that the imbedding
is compact. As an equation in \(V^*\), we have
In view of (4.43), we have
Using the condition (iii) in Assumption 2.1, we have
By (4.43) and the similar calculations as above we also have
Thus, for \(\beta \in (0,\frac{1}{2})\), it follows from (4.45)–(4.48) that
Combining (4.49) with (4.43), we conclude that \(\{ u^{k^{\varepsilon }, n}, \varepsilon >0\}\) is tight in the space \(L^2([0, T], L_{loc}^2(\mathbb {R}^d))\). Now, applying the chain rule, we obtain
By the assumptions on \(h_j\) and Young’s inequality, we see that for any given \(\delta _1>0\),
Using the assumptions on f, g and (4.51) it follows from (4.50) that there exist positive constants \(\delta \), C such that
By Gronwall’s inequality, (4.52) yields that
To show \(\lim _{\varepsilon \rightarrow 0}\Vert u^{k^{\varepsilon },n}-u^{k,n}\Vert _{H_T}=0\), in view of (4.52) and (4.53), it suffices to prove
This will be achieved if we show that for any sequence \(\varepsilon _m\rightarrow 0\), one can find a subsequence \(\varepsilon _{m_i}\rightarrow 0\) such that
Now fix a sequence \(\varepsilon _m\rightarrow 0\). Since \(\{ u^{k^{\varepsilon _{m}}, n}, m\ge 1\}\) is tight in \(L^2([0,T], L_{loc}^2(\mathbb {R}^d))\), there exist a subsequence \(m_i, i\ge 1\) and a mapping \(\tilde{u}\) such that \(u^{k^{\varepsilon _{m_i}},n}\rightarrow \tilde{u}\) in \(L^2([0,T], L_{loc}^2(\mathbb {R}^d))\). Moreover, because of the uniform bound of \(u^{k^{\varepsilon _{m_i}},n}\) in (4.43), \(\tilde{u}\) belongs to \(L^2([0,T], H)\). Now,
Since \(k^{\varepsilon _{m_i}}\rightarrow k\) weakly in \(L^2([0,T], l^2)\), for every \(t>0\), it holds that
On the other hand, using the assumption on h, for \(0<t_1<t_2\le T\), we have
Combining (4.57) and (4.58) we deduce that
By Hölder’s inequality and the assumption on h, we have
For any \(M>0\), denote by \(B_M\) the ball in \(\mathbb {R}^d\) centered at zero with radius M. We can bound the right side of (4.60) as follows:
where the uniform \(L^2([0, T]\times \mathbb {R}^d)\)-bound of \(u^{k^{\varepsilon _{m_i}} , n}\) has been used. Now given any constant \(\delta >0\), we can pick a constant M such that \(C\int _{B_M^c}\bar{h}^2(x)dx\le \delta \). For the chosen constant M, we have
Thus, it follows from (4.60), (4.61) that
Since \(\delta \) is arbitrary, (4.55) follows from (4.56), (4.59) and (4.62). Hence we have proved \(\lim _{\varepsilon \rightarrow 0}\Vert u^{k^{\varepsilon },n}-u^{k,n}\Vert _{H_T}=0\).
Now we are ready to complete the last step of the proof. For any \(n\ge 1\), we have
For any given \(\delta >0\), by Proposition 4.1 there exists an integer \(n_0\) such that \(\sup _{\varepsilon }\Vert u^{k^{\varepsilon }}-u^{k^{\varepsilon }, n_0}\Vert _{H_T}\le \frac{\delta }{2}\) and \(\Vert u^k-u^{k,n_0}\Vert _{H_T}\le \frac{\delta }{2}\). Replacing n in (4.63) by \(n_0\) we get
As we just proved
we obtain that
Since the constant \(\delta \) is arbitrary, the proof is complete. \(\square \)
5 Large Deviations
After the preparations in Sect. 4, we are ready to state and to prove the large deviation result. Recall that \(U^{\varepsilon }\) is the solution of the obstacle problem:
For \(k\in K=L^2([0,T], l^2)\), denote by \(u^k\) the solution of the following deterministic obstacle problem:
Define a measurable mapping \(\Gamma ^0:C([0,T];\mathbb {R}^\infty )\rightarrow H_T\) by
where \(u^k\) is the solution of (5.3). Here is the main result:
Theorem 5.1
Let Assumption 2.1 hold. Then the family \(\{U^\varepsilon \}_{\varepsilon >0}\) satisfies a large deviation principle on the space \(H_T\) with the rate function I given by
with the convention \(\inf \{\emptyset \}=\infty \).
Proof
The existence of a unique strong solution of the obstacle problem (5.1) implies that for every \(\varepsilon >0\), there exists a measurable mapping \(\Gamma ^\varepsilon \left( \cdot \right) :C([0,T];\mathbb {R}^\infty )\rightarrow H_T\) such that
To prove the theorem, we are going to show that the conditions (i) and (ii) in Theorem 3.2 are satisfied. Condition (ii) is exactly the statement of Theorem 4.1. It remains to establish condition (i) in Theorem 3.2. Recall the definitions of the spaces \(S_N\) and \(\tilde{S}_N\) given in Sect. 3. Let \(\{k^{\varepsilon }, \varepsilon >0\}\subset \tilde{S}_N\) be a given family of stochastic processes. Applying Girsanov's theorem, it is easy to see that \(U^{\varepsilon ,k^\varepsilon }=\Gamma ^\varepsilon \left( B(\cdot )+\frac{1}{\sqrt{\varepsilon }}\int _0^{\cdot }k^\varepsilon (s)ds\right) \) is the solution of the stochastic obstacle problem:
Moreover, \(V^{k^\varepsilon }=\Gamma ^0\left( \int _0^{\cdot }k^\varepsilon (s)ds\right) \) is the solution of the random obstacle problem:
Condition (i) in Theorem 3.2 will be satisfied if we prove
where \(U^{\varepsilon ,k^\varepsilon }_t=U^{\varepsilon ,k^\varepsilon }(t, \cdot )\) and \(V^{k^\varepsilon }_t=V^{k^\varepsilon }(t, \cdot )\). The rest of the proof is devoted to establishing (5.10). By Itô's formula, we have
Here
With the assumptions on g in mind, applying Young’s inequality we have for any \(\delta _1>0\)
By the assumption on f, for any \(\delta _2>0\), we have
Using the assumption on h, given any \(\delta _3>0\), we also have
For the term \(I_5\) in (5.11), because \(U^{\varepsilon ,k^\varepsilon }_s-L(s,\cdot )\ge 0\), \(V^{k^\varepsilon }_s-L(s,\cdot )\ge 0\) and because the random measures \(\nu _s^{\varepsilon }\), \(\mu _s^{\varepsilon }\) are positive, we have
Substituting (5.12)–(5.15) back into (5.11), choosing \(\delta _1, \delta _2, \delta _3\) sufficiently small and rearranging terms we can find a positive constant \(\delta >0\) such that
By Gronwall's inequality it follows that
where
Using Burkholder’s inequality and the boundedness of h, we see that
where we have used the fact that \(\sup _{\varepsilon }\{E[|U^{\varepsilon ,k^\varepsilon }_t|^2]+E[|V^{k^\varepsilon }_t|^2]\}<\infty \). By the condition on h in Assumption 2.1, it is also clear that
Assertion (5.10) follows from (5.17)–(5.19). \(\square \)
References
Boue, M., Dupuis, P.: A variational representation for certain functionals of Brownian motion. Ann. Probab. 26(4), 1641–1659 (1998)
Budhiraja, A., Dupuis, P.: A variational representation for positive functionals of an infinite dimensional Brownian motion. Probab. Math. Stat. 20(1), 39–61 (2000)
Budhiraja, A., Dupuis, P., Maroulas, V.: Large deviations for infinite dimensional stochastic dynamical systems continuous time processes. Ann. Probab. 36(4), 1390–1420 (2008)
Budhiraja, A., Dupuis, P., Maroulas, V.: Variational representations for continuous time processes. Annales de l’Institut Henri Poincaré(B) Probabilités Statistiques 47(3), 725–747 (2011)
Cardon-Weber, C.: Large deviations for a Burgers’-type SPDE. Stoch. Process. Their Appl. 84, 53–70 (1999)
Cerrai, S., Röckner, M.: Large deviations for stochastic reaction-diffusion systems with multiplicative noise and non-Lipschitz reaction term. Ann. Probab. 32, 1100–1139 (2004)
Chenal, F., Millet, A.: Uniform large deviations for parabolic SPDEs and applications. Stoch. Process. Appl. 72, 161–186 (1997)
Chow, P.: Large deviation problem for some parabolic Itô equations. Commun. Pure Appl. Math. 45, 97–120 (1992)
Dong, Z., Wu, J.L., Zhang, R., Zhang, T.: Large deviation principles for first-order scalar conservation laws with stochastic forcing. Ann. Appl. Probab. (to appear)
Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge (1992)
Duan, J., Millet, A.: Large deviations for the Boussinesq equations under random influences. Stoch. Process. Their Appl. 119(6), 2052–2081 (2009)
Denis, L., Stoica, L.: A general analytical result for non-linear SPDE’s and applications. Electron. J. Probab. 9(23), 674–709 (2004)
Denis, L., Matoussi, A., Zhang, J.: The obstacle problem for quasilinear stochastic PDEs: analytical approach. Ann. Probab. 42(3), 865–905 (2014)
Donati-Martin, C., Pardoux, E.: White noise driven SPDEs with reflection. Probab. Theory Relat. Fields 95, 1–24 (1993)
Flandoli, F., Gatarek, D.: Martingale and stationary solution for stochastic Navier-Stokes equations. Probab. Theory Relat. Fields 102, 367–391 (1995)
Haussmann, U.G., Pardoux, E.: Stochastic variational inequalities of parabolic type. Appl. Math. Optim. 20, 163–192 (1989)
Lions, J.L.: Quelques Methodes de Resolution des Problemes aux Limites non Lineaires. Dunod, Paris (1969)
Liu, W.: Large deviations for stochastic evolution equations with small multiplicative noise. Appl. Math. Opt. 61(1), 27–56 (2010)
Manna, U., Sritharan, S.S., Sundar, P.: Large deviations for the stochastic shell model of turbulence. Nonlinear Differ. Equ. Appl. 16(4), 493–521 (2009)
Matoussi, A., Stoica, L.: The obstacle problem for quasilinear stochastic PDE’s. Ann. Probab. 38(3), 1143–1179 (2010)
Matoussi, A., Xu, M.: Sobolev solution for semilinear PDE with obstacle under monotonicity condition. Electron. J. Probab. 35(13), 1053–1067 (2008)
Matoussi, A., Sabbagh, W., Zhang, T.: Backward doubly SDEs and semilinear stochastic PDEs in a convex domain. Stoch. Process. Their Appl. (2016) (to appear)
Nualart, D., Pardoux, E.: White noise driven quasilinear SPDEs with reflection. Probab. Theory Relat. Fields 93, 77–89 (1992)
Prévot, C., Röckner, M.: A Concise Course on Stochastic Partial Differential Equations. Lecture Notes in Mathematics, vol. 1905. Springer, Berlin (2007)
Röckner, M., Zhang, T., Zhang, X.: Large deviations for stochastic tamed 3D Navier-Stokes equations. Appl. Math. Opt. 61(2), 267–285 (2010)
Sowers, R.B.: Large deviations for a reaction-diffusion equation with non-Gaussian perturbations. Ann. Probab. 20, 504–537 (1992)
Stoica, I.L.: A probabilistic interpretation of the divergence and BSDEs. Stoch. Process. Their Appl. 103(1), 31–55 (2003)
Xu, T., Zhang, T.: White noise driven SPDEs with reflection: existence, uniqueness and large deviation principles. Stoch. Process. Their Appl. 119(10), 3453–3470 (2009)
Zambotti, L.: A reflected stochastic heat equation as symmetric dynamics with respect to the 3-d Bessel bridge. J. Funct. Anal. 180, 195–209 (2001)
Zhang, T.: White noise driven SPDEs with reflection: strong Feller properties and Harnack inequalities. Potential Anal. 33(2), 137–151 (2010)
Acknowledgements
We thank the anonymous referee for the very careful reading and useful suggestions. The first named author was partially supported by chaire Risques Financiers de la fondation du risque, CMAP-École Polytechnique, Palaiseau-France and the research of Wissal Sabbagh benefited from the support of the “Chair Markets in Transition”, Fédération Bancaire Française, and of the ANR 11-LABX-0019.
Matoussi, A., Sabbagh, W. & Zhang, T. Large Deviation Principles of Obstacle Problems for Quasilinear Stochastic PDEs. Appl Math Optim 83, 849–879 (2021). https://doi.org/10.1007/s00245-019-09570-5
Keywords
- Stochastic partial differential equation
- Obstacle problems
- Large deviations
- Weak convergence
- Backward stochastic differential equations