Large Deviation Principles of Obstacle Problems for Quasilinear Stochastic PDEs

In this paper, we first present a sufficient condition (a variant) for the large deviation criteria of Budhiraja, Dupuis and Maroulas for functionals of Brownian motions. The sufficient condition is particularly suitable for stochastic differential/partial differential equations with reflection. We then apply it to establish a large deviation principle for obstacle problems of quasilinear stochastic partial differential equations. It turns out that backward stochastic differential equations also play an important role.

We consider the obstacle problem for the quasilinear SPDE (1.1), written schematically (the precise formulation is given in Sect. 2) as

$$ dU(t,x) = \Big[\tfrac12 \Delta U(t,x) + f(t,x,U(t,x),\nabla U(t,x)) + \sum_i \partial_i g_i(t,x,U(t,x),\nabla U(t,x))\Big]\,dt + \sum_j h_j(t,x,U(t,x),\nabla U(t,x))\,dB^j_t + R(dt,dx), \qquad U(t,x) \ge L(t,x), $$

where B^j_t, j = 1, 2, ..., are independent real-valued standard Brownian motions, the stochastic integral against the Brownian motions is interpreted as the backward Itô integral, Δ is the Laplacian operator, f, g_i, h_j are appropriate measurable functions specified later, L(t,x) is the given barrier/obstacle function, and R(dt,dx) is a random measure which is part of the solution pair (U, R). The random measure R plays a role similar to a local time: it prevents the solution U(t,x) from falling below the barrier L.
Such SPDEs arise in various applications, such as pathwise stochastic control problems, the Zakai equations in filtering, and stochastic control with partial observations. Existence and uniqueness for the above stochastic obstacle problems were established in [13] by an analytical approach. Existence and uniqueness of obstacle problems for quasilinear SPDEs on the whole space R^d driven by finite-dimensional Brownian motions were studied in [20] using the approach of backward stochastic differential equations (BSDEs). Obstacle problems for nonlinear stochastic heat equations driven by space-time white noise have been studied by several authors; see [23,28] and the references therein.
In this paper, we are concerned with the small noise large deviation principle (LDP) for the obstacle problem (1.1) for quasilinear SPDEs. Large deviations for stochastic evolution equations and stochastic partial differential equations driven by Brownian motions have been investigated in many papers; see e.g. [3,5,6,8,11,18,19,25,27] and the references therein.
Because of the singularity introduced by the reflection/local time, it seems difficult to apply the weak convergence method to obstacle problems by directly using the criteria in [2]. We therefore first provide a sufficient condition for verifying the criteria of Budhiraja-Dupuis-Maroulas. This sufficient condition turns out to be particularly suitable for stochastic dynamics generated by stochastic differential equations and stochastic partial differential equations with reflection. Its advantage is to shift the difficulty of proving tightness of the perturbations of the stochastic (partial) differential equations to a study of the continuity (with respect to the driving signals) of the deterministic skeleton equations associated with the stochastic equations. This sufficient condition was recently applied successfully to obtain a large deviation principle for stochastic conservation laws (see [9]), which otherwise could not (or at least only with considerable difficulty) be established using the original form of the criteria in [2].
An important part of the current work is to study the continuity of the deterministic obstacle problems driven by elements of the Cameron-Martin space of the driving Brownian motions. We need to show that if the driving signals converge weakly in the Cameron-Martin space, then the corresponding solutions of the skeleton equations converge in the appropriate state space. This turns out to be hard because of the singularity caused by the obstacle. To overcome the difficulties, we appeal to a penalized approximation of the skeleton equation and establish uniform estimates for the solutions of the approximating equations with the help of the backward stochastic differential equation representation of the solutions. This is purely for technical reasons: a priori, the LDP problem has little to do with backward stochastic differential equations.
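Schematically, the BSDE representation alluded to here (made precise in Sect. 4, following [20]) runs as follows; W is an auxiliary Brownian motion in R^d, the contribution of the divergence term g is suppressed, and the notation below is ours, intended only as a sketch:

$$ Y^{k,n}_t = u^{k,n}(t, W_t), \qquad Z^{k,n}_t = \nabla u^{k,n}(t, W_t), $$

$$ Y^{k,n}_t = Y^{k,n}_T + \int_t^T \Big[ f(s, W_s, Y^{k,n}_s, Z^{k,n}_s) + \sum_j h_j(s, W_s, Y^{k,n}_s, Z^{k,n}_s)\, k^j_s + n\big(Y^{k,n}_s - L(s, W_s)\big)^- \Big]\, ds - \int_t^T Z^{k,n}_s \cdot dW_s \quad \text{under } P^m. $$

Estimates on (Y^{k,n}, Z^{k,n}), uniform in k over bounded sets, then translate into the uniform estimates on the penalized skeleton solutions u^{k,n} needed for the continuity argument.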
The rest of the paper is organized as follows. In Sect. 2, we introduce the stochastic obstacle problem and the precise framework. In Sect. 3, we recall the weak convergence approach of large deviations and present a sufficient condition. Section 4 is devoted to the study of skeleton obstacle problems. We will show that the solution of the skeleton problem is continuous with respect to the driving signal. The proof of the large deviation principle is in Sect. 5.

Obstacle Problems
Let H := L^2(R^d) be the Hilbert space of square integrable functions with respect to the Lebesgue measure on R^d; the associated scalar product and norm are denoted by (·,·) and ‖·‖, respectively. Let V := H^1(R^d) denote the first order Sobolev space, endowed with the norm ‖u‖_V^2 := ‖u‖^2 + ‖∇u‖^2 and the corresponding inner product. V* will denote the dual space of V. When no confusion can arise, we also use ⟨u,v⟩ to denote the dual pairing between V and V*.
Our evolution problem will be considered over a fixed time interval [0, T ]. Now we introduce the following assumptions.
(v) The barrier function L(t, x): ..., where the gradient ∇ and the Laplacian Δ act on the space variable x. We denote by H_T the space of predictable processes (u_t, t ≥ 0) such that u ∈ H_T and that ... Let B^j_t, j = 1, 2, ..., be a sequence of independent real-valued standard Brownian motions on a complete filtered probability space (Ω, F, F_t, P). We now make precise the definition of solutions of the reflected quasilinear SPDE (1.1): ... (4) U admits a quasi-continuous version Ũ, and ...

Remark 2.1
We refer the reader to [13] for the precise definitions of regular measures and of quasi-continuity of functions. Let us now recall the following result from [13,20].

The Measures P^m
The operator ∂_t + ½Δ, which represents the main linear part of equation (1.1), is associated with Brownian motion in R^d. The sample space of the Brownian motion is Ω = C([0, ∞); R^d); the canonical process (W_t)_{t≥0} is defined by W_t(ω) = ω(t) for any ω ∈ Ω, t ≥ 0, and the shift operator θ_t : Ω → Ω is defined by θ_t(ω)(s) = ω(t + s) for any s ≥ 0 and t ≥ 0. The canonical filtration F^W_t = σ(W_s; s ≤ t) is completed by the standard procedure with respect to the probability measures produced by the transition function p_t(x, dy) = p_t(x, y) dy, where p_t(x, y) is the Gaussian density. Thus we get a continuous Hunt process (Ω, W_t, θ_t, F, F^W_t, P^x). We shall also use the backward filtration of future events, F_t = σ(W_s; s ≥ t) for t ≥ 0. P^0 is the Wiener measure, which is supported by the set Ω_0 = {ω ∈ Ω : ω(0) = 0}. We also set P^m := ∫_{R^d} P^x m(dx), where m denotes the Lebesgue measure. For each probability measure μ on R^d, the probability P^μ of the Brownian motion started with the initial distribution μ is given by P^μ = ∫_{R^d} P^x μ(dx). Under P^m one gives a meaning to integrals of the form ∫ div(J(r, ·))(W_r) dr (2.2). We refer the reader to [20,27] for more details.
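For completeness, the Gaussian density entering the transition function p_t(x, dy) = p_t(x, y) dy above is the standard heat kernel:

$$ p_t(x, y) = (2\pi t)^{-d/2} \exp\Big( -\frac{|x-y|^2}{2t} \Big), \qquad t > 0, \; x, y \in \mathbb{R}^d. $$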

A Sufficient Condition for LDP
In this section we recall the criteria obtained in [2] for proving a large deviation principle, and we provide a sufficient condition for verifying them. Let E be a Polish space with the Borel σ-field B(E).

A Criterion of Budhiraja-Dupuis
The Cameron-Martin space associated with the Brownian motions (B^j_t)_{j≥1} is K := L^2([0, T]; l^2). For N > 0, let S^N := {k ∈ K : ‖k‖_K^2 ≤ N}. The set S^N endowed with the weak topology is a compact Polish space. Set S̃^N := {φ ∈ K̃ : φ(ω) ∈ S^N, P-a.s.}, where K̃ denotes the class of l^2-valued predictable processes.
The following result was proved in [2].
(a) for every N < +∞ and any family {k^ε; ε > 0} ⊂ S̃^N such that k^ε converges in law, as S^N-valued random elements, to some element k as ε → 0, ... Then the family {X^ε}_{ε>0} satisfies a large deviation principle in E with the rate function I given by ..., with the convention inf{∅} = ∞.

A Sufficient Condition
Here is a sufficient condition for verifying the assumptions in Theorem 3.1.
Then the family {X^ε}_{ε>0} satisfies a large deviation principle in E with the rate function I given by ..., with the convention inf{∅} = ∞.
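For the reader's convenience, the rate function referred to in the two statements above takes the standard form of the weak convergence framework (writing G^0 for the limiting skeleton map, so that G^0(∫_0^· k(s) ds) is the solution of the skeleton equation driven by k):

$$ I(f) = \inf\Big\{ \tfrac12 \int_0^T \|k(s)\|_{l^2}^2 \, ds \;:\; k \in K, \; f = G^0\Big(\int_0^{\cdot} k(s)\, ds\Big) \Big\}, \qquad \inf\{\emptyset\} = \infty. $$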

Remark 3.1
When proving a small noise large deviation principle for stochastic differential equations or stochastic partial differential equations, condition (i) is usually not difficult to verify, because the small noise vanishes as ε → 0.
Proof We will show that the conditions of Theorem 3.1 are fulfilled. Condition (b) in Theorem 3.1 follows from condition (ii), because S^N is compact with respect to the weak topology. Condition (i) implies that for any bounded, uniformly continuous function G(·) on E, ... Thus, condition (a) will be satisfied if Z^ε converges in law to G^0(∫_0^· k(s) ds) in the space E. This is indeed true since the mapping G^0 is continuous by condition (ii) and k^ε converges in law, as S^N-valued random elements, to k. The proof is complete.

Skeleton Equations
Recall K := L^2([0, T]; l^2). Let k = (k_1, ..., k_j, ...) ∈ K and consider the deterministic obstacle problem (4.1). The existence and uniqueness of the solution of the deterministic obstacle problem (4.1) can be obtained in the same way as for the random obstacle problem (1.1) (but more simply); we refer the reader to [13] for details. Denote by u^{k_ε} the solution of equation (4.1) with k_ε in place of k. The main purpose of this section is to show that u^{k_ε} converges to u^k in the space H_T whenever k_ε → k weakly in the Hilbert space K. To this end, we first need to establish a number of preliminary results. Consider the penalized equation (4.2). It is known that u^{k,n} → u^k as n → ∞ for fixed k ∈ K (see [13]). For later use, we need to show that, for any M > 0, u^{k,n} → u^k uniformly over the bounded set {k : ‖k‖_K ≤ M} as n → ∞. For this purpose, it turns out that we have to appeal to the BSDE representation of the solutions. Let Y^{k,n}_t := u^{k,n}(t, W_t) and Z^{k,n}_t := ∇u^{k,n}(t, W_t). Then it was shown in [20] that (Y^{k,n}, Z^{k,n}) is the solution of a backward stochastic differential equation under P^m. The following result is a uniform estimate for (Y^{k,n}, Z^{k,n}).
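Schematically, the penalized equation (4.2) replaces the reflecting measure R in the obstacle problem by the Lipschitz penalization drift n(u − L)^-, which pushes the solution up whenever it dips below the barrier. In the notation of (1.1), and up to the precise weak formulation, it reads (with x^- := max(−x, 0)):

$$ \partial_t u^{k,n} + \tfrac12 \Delta u^{k,n} + f(t,x,u^{k,n},\nabla u^{k,n}) + \operatorname{div} g(t,x,u^{k,n},\nabla u^{k,n}) + \sum_j h_j(t,x,u^{k,n},\nabla u^{k,n})\, k^j_t + n\,\big(u^{k,n} - L\big)^- = 0. $$

As n → ∞, the penalization term converges (weakly, as a measure) to the reflecting measure of (4.1).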

Lemma 4.1
For M > 0, we have the estimate (4.7). The proof of this lemma repeats that of Lemma 6 in [20]; one only needs to notice that, when applying Gronwall's inequality, the constant c_M on the right-hand side of (4.7) depends only on the norm of k, which is bounded by M.
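For the reader's convenience, the (backward-in-time) form of Gronwall's inequality used here and below: if φ ≥ 0 is bounded and satisfies, for some constant a ≥ 0 and some nonnegative integrable function b,

$$ \varphi(t) \le a + \int_t^T b(s)\, \varphi(s)\, ds, \qquad t \in [0, T], $$

then

$$ \varphi(t) \le a \exp\Big( \int_t^T b(s)\, ds \Big), \qquad t \in [0, T]. $$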
We also need the following estimate.
Applying Itô's formula (see [20]), we have ⟨∇(F(u^{k,n}(r,·) − L(r,·))), g(r,·,u^{k,n}(r,·), ∇u^{k,n}(r,·))⟩(W_r) dr (4.9). Rearranging the terms, we get (4.10). Using the conditions on h in Assumption 2.1, for any given positive constant ε_1 we have (4.11). By the assumptions on g, for any given positive constant ε_2 we have (4.12). By a similar calculation, for any given ε_3 > 0 we have (4.13). Substituting (4.11), (4.12) and (4.13) back into (4.10) and choosing ε_1, ε_2, ε_3 sufficiently small, we obtain (4.14), where the condition on α in Assumption 2.1 was used. The desired conclusion (4.8) now follows from Gronwall's inequality.

By Itô's formula, we have ⟨∇(G(u^{k,n}(r,·) − L(r,·))), g(r,·,u^{k,n}(r,·), ∇u^{k,n}(r,·))⟩(W_r) dr (4.16). Rearranging the terms in the above equation, we get (4.17). By Assumption 2.1, for any given positive constant ε_1 we have (4.18). Using Assumption 2.1 again and a similar computation as above, we can show (4.19) and (4.20) for any given constants ε_2, ε_3 > 0. Putting (4.20), (4.19), (4.18) and (4.17) together, selecting the constants ε_1, ε_2 and ε_3 sufficiently small, and taking expectation, we get the desired estimate. Observe that, by the assumptions on the function g, ...

Proof We note that for any n, q ≥ 1, ..., and lim_{n,q→∞} ... We will achieve this with the help of the backward stochastic differential equations satisfied by Y^{k,n}_t = u^{k,n}(t, W_t). Applying Itô's formula, we have (4.30). By Young's inequality, for any δ_1 > 0 we have (4.31). Moreover, for any δ_2 > 0, we have (4.32). Using Young's inequality again, for any δ_3 > 0 we have (4.33) and (4.34). Substituting (4.31)-(4.34) back into (4.30), choosing the constants δ_i, i = 1, 2, 3, sufficiently small and taking expectation, we obtain (4.36). We notice that, by the Burkholder-Davis-Gundy inequality, for any δ_4 > 0 we have (4.39). Similarly, for any δ_5 > 0 we have (4.40). Now, using the two estimates (4.39) and (4.40) together with the already proved (4.36), we obtain (4.38). This completes the proof.
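The elementary form of Young's inequality invoked repeatedly above, which is how the small constants ε_i and δ_i enter the estimates: for all real a, b and any δ > 0,

$$ |ab| \le \delta\, a^2 + \frac{1}{4\delta}\, b^2. $$

Choosing δ small absorbs the quadratic term on the left-hand side of the energy estimate at the cost of a large (but harmless) constant in front of the remaining term.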
Proof We first prove a similar convergence result for the corresponding penalized PDEs; combined with the uniform convergence proved in Proposition 4.1, this will complete the proof of Theorem 4.1. Let u^{k_ε,n} be the solution of the following penalized PDE: (4.41). We first fix the integer n and show that lim_{ε→0} ‖u^{k_ε,n} − u^{k,n}‖_{H_T} = 0, where u^{k,n} is the solution of equation (4.41) with k_ε replaced by k. To this end, we first prove that the family {u^{k_ε,n}; ε > 0} is tight. Using the chain rule and Gronwall's inequality, as in Lemma 4.1, we can show (4.43). It is well known (see e.g. [15]) that the embedding ... is compact. As an equation in V*, we have u^{k_ε,n}(t) = ... + ½ ... In view of (4.43), we have ... Using condition (iii) in Assumption 2.1, we have ... By (4.43) and similar calculations as above, we also have (4.49). Combining (4.49) with (4.43), we conclude that {u^{k_ε,n}; ε > 0} is tight in the space ... Now, applying the chain rule, we obtain

2∫_t^T ⟨g(s, ·, u^{k_ε,n}(s, ·), ∇u^{k_ε,n}(s, ·)) − g(s, ·, u^{k,n}(s, ·), ∇u^{k,n}(s, ·)), ∇(u^{k_ε,n}(s) − u^{k,n}(s))⟩ ds + 2∫_t^T ⟨f(s, ·, u^{k_ε,n}(s, ·), ∇u^{k_ε,n}(s, ·)) − f(s, ·, u^{k,n}(s, ·), ∇u^{k,n}(s, ·)), u^{k_ε,n}(s) − u^{k,n}(s)⟩ ds + ... Σ_j (h_j(s, ·, u^{k_ε,n}(s, ·), ∇u^{k_ε,n}(s, ·)) − h_j(s, ·, u^{k,n}(s, ·), ∇u^{k,n}(s, ·))) k^{ε,j}_s ds (4.50)

By the assumptions on h_j and Young's inequality, we see that for any given δ_1 > 0, the term involving (h_j(s, ·, u^{k_ε,n}(s, ·), ∇u^{k_ε,n}(s, ·)) − h_j(s, ·, u^{k,n}(s, ·), ∇u^{k,n}(s, ·))) k^{ε,j}_s can be bounded as in (4.51). Using the assumptions on f, g and (4.51), it follows from (4.50) that there exist positive constants δ, C such that (4.52) holds. To show lim_{ε→0} ‖u^{k_ε,n} − u^{k,n}‖_{H_T} = 0, in view of (4.52) and (4.53), it suffices to prove ... On the other hand, using the assumption on h, for 0 < t_1 < t_2 ≤ T, we have
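The compact embedding invoked from [15] above is, in our reading, of Aubin–Lions type (this identification is ours; the exact statement is the one cited). Since V = H^1(R^d) embeds compactly into L^2(O) for every bounded open set O ⊂ R^d (Rellich's theorem), one has

$$ \big\{ u \in L^2(0,T; V) \;:\; \partial_t u \in L^2(0,T; V^*) \big\} \hookrightarrow L^2\big(0,T; L^2(O)\big) \quad \text{compactly}. $$

This is what yields tightness of {u^{k_ε,n}; ε > 0} from the uniform bounds on u^{k_ε,n} in L^2(0,T; V) and on ∂_t u^{k_ε,n} in L^2(0,T; V*).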