Systems of semilinear parabolic variational inequalities with time-dependent convex obstacles

We consider a system of semilinear parabolic variational inequalities with time-dependent convex obstacles. We prove the existence and uniqueness of its solution. We also provide a stochastic representation of the solution and show that it can be approximated by the penalization method. Our proofs are based upon probabilistic methods from the theory of Markov processes and the theory of backward stochastic differential equations.

The main feature of the paper is that we deal with time-dependent obstacles. In the case of a single equation, i.e. when m = 1, problem (1.1), (1.2) is quite well investigated. For various results on existence, uniqueness and approximation of solutions in the case of L^2 data and one or two regular obstacles, i.e. when D has the form D(t, x) = {y ∈ R : h(t, x) ≤ y ≤ h̄(t, x)} for some regular h, h̄ : E_T → R (possibly h ≡ −∞ or h̄ ≡ +∞), see the monograph [2, Sections 2.2, 2.18] and the more recent papers [4,13,15]. The linear problem of the form (1.1), (1.2) with L^2 data and one irregular barrier is investigated in [20,26]. For recent results on the semilinear problem see [8] (one merely measurable obstacle) and [9] (two measurable obstacles satisfying some separation condition). The problem with two irregular obstacles and L^1 data is investigated in [11].
In the case of systems of equations the situation is quite different. To our knowledge, only a few partial results exist in this case (see [20, Section 1.2] for the existence of solutions of weakly coupled systems, and Example 9.3 and Theorem 9.2 in [18, Chapter 2] for the special case 0 ∈ D(t_2, ·) ⊂ D(t_1, ·) whenever 0 ≤ t_1 ≤ t_2; see also [14] for existence results concerning a different but related problem). The aim of the present paper is to prove quite general results on existence, uniqueness and approximation of solutions of (1.1), (1.2) in the case where the data are square integrable and D satisfies some mild regularity assumptions. The case of L^1 data and irregular obstacles is more difficult but certainly deserves further investigation.
In our opinion, one of the main problems one encounters when dealing with systems and time-dependent obstacles lies in the proper choice of the definition of a solution. In fact, the main difficulty is to adopt a definition which ensures uniqueness of solutions. The definition used in the present paper is a natural extension to systems of the definition used in the one-dimensional case in [26] and then in [8,9,11]. By a solution of (1.1), (1.2) we mean a pair (u, µ) consisting of a function u = (u_1, . . . , u_m) : E_T → R^m and a vector µ = (µ_1, . . . , µ_m) of signed Borel measures on E_T satisfying the following three conditions:
(a) the u_i are quasi-continuous (with respect to the parabolic capacity determined by L_t) functions of class C([0, T]; H) ∩ L^2(0, T; H^1_0(E)) and the µ_i are smooth (with respect to the same capacity) measures of finite variation,
(b) u is a weak solution of the problem ∂u/∂t such that u(t, x) ∈ D(t, x) for quasi-every (q.e. for short) (t, x) ∈ (0, T] × E.
In the above definition µ may be called the "obstacle reaction measure". It may be interpreted as the energy we have to add to the system to keep the solution inside D. Condition (c) is a kind of minimality condition imposed on µ. In the case m = 1 it reduces to the usual minimality condition saying that µ = µ^+ − µ^−, where µ^+ (resp. µ^−) is a positive measure acting only when u equals the lower obstacle h (resp. the upper obstacle h̄). Also remark that an important requirement in our definition is that u be quasi-continuous and µ be smooth. This not only ensures that the integral in condition (c) is meaningful, but also allows us to give a probabilistic representation of solutions. In fact, this probabilistic representation may serve as an equivalent definition of a solution of (1.1), (1.2). As in the classical monographs [2,5,17,18] and the papers [14,15,20], in the present paper we work in the L^2 setting.
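The role of the obstacle reaction measure can be illustrated by a toy discrete-time simulation (our own sketch, not from the paper; `proj_disk` and the choice of D as the closed unit disk in R^2 are ad hoc): a free Euler step of a diffusion is projected back onto D, and the accumulated corrections, which act only when the path touches ∂D, play the role of the total variation of µ.

```python
import numpy as np

rng = np.random.default_rng(0)

def proj_disk(y, r=1.0):
    """Projection onto the closed disk of radius r in R^2."""
    norm = np.linalg.norm(y)
    return y if norm <= r else (r / norm) * y

# Euler steps of a free diffusion with outward drift, projected back
# into D after each step.  The accumulated corrections mimic the
# obstacle reaction measure mu: they vanish while the path stays in
# the interior of D and act only on the boundary.
y = np.zeros(2)
dt, n_steps = 1e-3, 5000
total_reaction = 0.0
for _ in range(n_steps):
    y_free = y + dt * np.array([2.0, 0.0]) + np.sqrt(dt) * rng.standard_normal(2)
    y = proj_disk(y_free)
    total_reaction += np.linalg.norm(y_free - y)  # increment of |mu|

assert np.linalg.norm(y) <= 1.0 + 1e-12
print(f"cumulative reaction (discrete |mu|): {total_reaction:.3f}")
```

The reaction here is strictly positive because the constant drift keeps pushing the path against ∂D; with zero drift and a short horizon it would typically be much smaller.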
We assume that ϕ ∈ [L^2(E)]^m, f(·, ·, 0, 0) ∈ L^2(0, T; [L^2(E)]^m) and f(t, x, ·, ·) is Lipschitz continuous for (t, x) ∈ E_T. An important model example of an operator L_t satisfying (1.4) is the Laplace operator. But as in [17,18,20], to cover classical examples such as temperature control in domains with a discontinuous coefficient of thermal conductivity (see [17, Chapter 1, §3.4], [5, Chapter I, §3.3, 4.4]), in the paper we consider a divergence form operator with possibly discontinuous a. As for D, we assume that (t, x) → D(t, x) ∈ Conv is continuous if we equip Conv with the Hausdorff metric. We also assume that D satisfies the following separation condition: one can find a solution u* ∈ W of the Cauchy problem such that u*(t, x) ∈ D*(t, x) = {y ∈ D(t, x) : dist(y, ∂D(t, x)) ≥ ε} for some ε > 0. We show that under the above assumptions there exists a unique solution (u, µ) of (1.1), (1.2) and that u and µ may be approximated by the penalization method. Note that our separation condition is not optimal, because we assume that ε > 0 and that u* is more regular than the solution u itself (see condition (a)). The condition is also stronger than known sufficient separation conditions in the one-dimensional case (see [9]). Nevertheless, it is satisfied in many interesting situations. As in [8,9,11], to prove our results we use probabilistic methods. In particular, we rely heavily on the results of our earlier paper [12], devoted to reflected backward stochastic differential equations with time-dependent obstacles, and in the proofs we use methods of the theory of Markov processes and probabilistic potential theory. Also note that the first results on multidimensional reflected backward stochastic differential equations were proved in [7] in the case where D is a fixed convex domain. The results of [7] were generalized in [25] to equations with Wiener–Poisson filtration. For related results with some time-dependent domains see the recent paper [22].

Preliminaries
For x ∈ R^m and z ∈ R^{m×d} we set |x|^2 = Σ_{i=1}^m |x_i|^2 and ‖z‖^2 = trace(z*z). By ⟨·, ·⟩ we denote the usual scalar product in R^m. Given a Hilbert space H we denote by [H]^m its m-fold product equipped with the usual inner product (u, v) = Σ_{i=1}^m (u_i, v_i). The Lebesgue measure on R^d will be denoted by m. By m_1 we denote the Lebesgue measure on E_1.
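Concretely, the two quantities above are the squared Euclidean norm on R^m and the squared Frobenius norm on R^{m×d} (z* denoting the transpose of z). A quick numerical sanity check (our own, using numpy, not part of the paper):

```python
import numpy as np

# |x|^2 = sum_{i=1}^m |x_i|^2 (squared Euclidean norm on R^m)
x = np.array([1.0, -2.0, 3.0])
assert np.isclose(x @ x, np.sum(np.abs(x) ** 2))

# ||z||^2 = trace(z* z) (squared Frobenius norm on R^{m x d},
# with z* the transpose of z)
z = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
assert np.isclose(np.trace(z.T @ z), np.sum(z ** 2))
print("norm identities verified")
```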

Convex sets and functions
By Conv we denote the space of all bounded closed convex subsets of R^m with nonempty interiors, endowed with the Hausdorff metric ρ, that is, for any D, G ∈ Conv we set
ρ(D, G) = max{ sup_{y∈D} dist(y, G), sup_{y∈G} dist(y, D) }.
Let D ∈ Conv and let N_y denote the set of inward normal unit vectors at y ∈ ∂D. It is well known (see, e.g., [19]) that n ∈ N_y if and only if ⟨y − x, n⟩ ≤ 0 for every x ∈ D. If moreover a ∈ Int D, then for every n ∈ N_y,
⟨y − a, n⟩ ≤ −dist(a, ∂D). (2.1)
If dist(x, D) > 0, then there exists a unique y = Π_D(x) ∈ ∂D such that |y − x| = dist(x, D). One can observe that (y − x)/|y − x| ∈ N_y. Moreover (see [19]), for every a ∈ Int D,
⟨x − a, y − x⟩ ≤ −dist(a, ∂D)|y − x|. (2.2)
Also note that an estimate holds for any nonempty bounded closed convex sets D, G ⊂ R^m and any x, y ∈ R^m (see [21, Chapter 0, Proposition 4.7]).
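The geometric facts above are easy to test numerically. In the sketch below (our own illustration; `proj_disk` is an ad hoc helper and D is the closed unit disk in R^2) we verify (2.1) and (2.2) on random points, and we also compare the Hausdorff distance of two disks, which for balls B(c_1, r_1), B(c_2, r_2) equals |c_1 − c_2| + |r_1 − r_2| by an elementary computation, with a brute-force estimate over boundary points:

```python
import numpy as np

rng = np.random.default_rng(1)

def proj_disk(x):
    """Pi_D for D the closed unit disk in R^2."""
    r = np.linalg.norm(x)
    return x if r <= 1.0 else x / r

# Check (2.1) and (2.2) on random points a in Int D, x outside D.
for _ in range(1000):
    a = rng.standard_normal(2)
    a *= 0.9 * rng.random() / np.linalg.norm(a)          # |a| < 1
    x = rng.standard_normal(2)
    x *= (1.0 + 2.0 * rng.random()) / np.linalg.norm(x)  # |x| > 1
    y = proj_disk(x)                      # y = Pi_D(x), lies on bd(D)
    n = (y - x) / np.linalg.norm(y - x)   # inward unit normal at y
    d = 1.0 - np.linalg.norm(a)           # dist(a, bd(D))
    assert np.dot(y - a, n) <= -d + 1e-9                              # (2.1)
    assert np.dot(x - a, y - x) <= -d * np.linalg.norm(y - x) + 1e-9  # (2.2)

# Hausdorff metric: for the disks below rho = |c1 - c2| + |r1 - r2|;
# compare with a sampled estimate over boundary points (the suprema
# are attained on the boundaries for this nested pair of disks).
c1, r1, c2, r2 = np.array([0.0, 0.0]), 1.0, np.array([0.5, 0.0]), 1.5
th = np.linspace(0.0, 2.0 * np.pi, 2000, endpoint=False)
b1 = c1 + r1 * np.stack([np.cos(th), np.sin(th)], axis=1)  # bd(B1)
b2 = c2 + r2 * np.stack([np.cos(th), np.sin(th)], axis=1)  # bd(B2)
rho_sampled = max(
    np.max(np.abs(np.linalg.norm(b1 - c2, axis=1) - r2)),
    np.max(np.abs(np.linalg.norm(b2 - c1, axis=1) - r1)),
)
rho_exact = np.linalg.norm(c1 - c2) + abs(r1 - r2)
assert abs(rho_sampled - rho_exact) < 1e-3
print("checks for (2.1), (2.2) and the Hausdorff metric passed")
```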
where ⟨·, ·⟩ is the duality pairing between V′ and V. By E_{0,T} we denote the time-dependent form, where now ⟨·, ·⟩ denotes the duality pairing between V′_{0,T} and V_{0,T}. Note that the forms E, E_{0,T} can be identified with some generalized Dirichlet forms. By (·, ·)_{H_{0,T}} we denote the usual inner product in H_{0,T}.
In the paper, by cap we denote the parabolic capacity determined by the form E (for the construction and properties of cap see [23, Section 4] or [24, Section 6.2]). We will say that some property is satisfied quasi-everywhere (q.e. for short) if it is satisfied except for some Borel subset of E_1 of capacity zero. Using cap we define quasi-continuity as in [23]. By [23, Theorem 4.1] each function u ∈ W has a quasi-continuous m_1-version, which we will denote by ũ.
Let µ be a Borel signed measure on E_1. In what follows |µ| stands for the total variation of µ. By M_{0,b}(E_1) we denote the set of all Borel measures µ on E_1 such that |µ| does not charge sets of capacity zero and |µ|(E_1) < ∞.

Markov processes
By general results from the theory of Markov processes (see, e.g., [24, Theorems 6.3.1, 6.3.10]) there exists a continuous Hunt process M = (Ω, (F_t)_{t≥0}, (X_t)_{t≥0}, ζ, (P_z)_{z∈E_1∪∆}) with state space E_1, life time ζ and cemetery state ∆, properly associated with E in the resolvent sense. By [23, Theorem 5.1], the first coordinate of X is the uniform motion to the right, i.e. τ(t) = τ(0) + t, τ(0) = s, P_z-a.s. for z = (s, x). For an alternative construction of M, in which the starting point is the fundamental solution for the operator ∂/∂t + L_t, see [11, Section 2]. It is also known (this follows, for instance, from the construction of M given in [11]) that X, the second component of X, is a continuous time-inhomogeneous Markov process whose transition density p_E is the Green function for ∂/∂t + L_t on [0, T) × E (for the construction and properties of the Green function see [1]).
Note that if u is quasi-continuous then it is M-quasi-continuous, i.e. for q.e. (s, x) the map t → u(X_t) is continuous P_{s,x}-a.s. Let A be a positive continuous additive functional of M and let µ ∈ M_{0,b}(E_{0,T}) be a positive measure. We will say that A corresponds to µ (or µ corresponds to A) if (2.7) holds for q.e. (s, x) ∈ E_{0,T}. Since every positive measure in M_{0,b}(E_{0,T}) corresponds to some positive continuous additive functional, for a signed µ ∈ M_{0,b}(E_{0,T}) there exist positive continuous additive functionals A^+, A^− of M such that A^+ corresponds to µ^+ and A^− corresponds to µ^− (here µ^+ (resp. µ^−) is the positive (resp. negative) part of the Jordan decomposition of µ). Also note that (2.7) is a version of the Revuz correspondence.
The following proposition is probably well known, but we do not have a reference.
Proof. We provide a sketch of the proof. Since E_{0,T} is a generalized Dirichlet form, it follows from [29, Theorem 4.5] that (2.12) holds. Since u satisfies (2.8), we see that the left-hand side of (2.12) equals E_{0,T}(u, v). From this and an analogue of [23, Theorem 7.4] for the form E_{0,T} the assertion follows. Modifying slightly the proof of [23, (7.13)] we show (2.9). Finally, to show (2.10), let us denote by M^{u,k} the process on the right-hand side of (2.10) and consider a sequence {u_n} of smooth functions such that u_n → u in W. Then we apply the chain rule (see [29, Theorem 5.5]). On the other hand, we argue as in the proof of [23, Theorem 7.2].

Recall that a special semimartingale under P_z is a process admitting a decomposition Y = Y_0 + M + B, where M is an ((F_t), P_z)-local martingale and B is an (F_t)-predictable finite variation process (see, e.g., [27, Section III.7]). Recall also that ⟨M⟩ is the quadratic variation of M and |B|_T is the variation of B on the interval [0, T]. By H^2(P_z) we denote the space of all special ((F_t), P_z)-semimartingales on [0, T] with finite H^2(P_z) norm.

Remark 2.3. (i) Let z ∈ E_{0,T} and let ϕ, f satisfy the assumptions of Proposition 2.2. Then M^{[u]} of Proposition 2.2 is a martingale under P_z (see [29, p. 327]), where σ^{−1} is the inverse matrix of σ. By Lévy's theorem and (2.11), for q.e. z ∈ E_{0,T} the process B = (B^1, . . . , B^d) is, under P_z, a d-dimensional standard Brownian motion with respect to (F_t)_{t≥0}. Finally, note that by (2.10) and (2.13),

Probabilistic solutions of the obstacle problem

Let D = {D(t, x) : (t, x) ∈ E_T} be a family of closed convex sets in R^m with nonempty interiors. Given ε > 0 we set D*(t, x) = {y ∈ D(t, x) : dist(y, ∂D(t, x)) ≥ ε}. We will assume that:
(A2) f(·, ·, 0, 0) ∈ L^2(0, T; H),
(A3) f : E_T × R^m × R^{m×d} → R^m is a measurable function and there exist α, β ≥ 0 such that the Lipschitz-monotonicity estimates hold for all y_1, y_2 ∈ R^m and z_1, z_2 ∈ R^{m×d}.
As for the family D, we will need the following assumptions: (D1) The sets D(t, x) are bounded uniformly in (t, x) ∈ E T and the mapping E T ∋ (t, x) → D(t, x) ∈ Conv is continuous.
Condition (D2) is satisfied in the following natural situations. It is convenient to start with probabilistic solutions; solutions of (1.1), (1.2) in the sense of the definition given in Section 1 will be studied in the next section. Note that the definition formulated below is an extension, to the case of systems, of the probabilistic definition adopted in [9,11] in the case of a single equation. (iii) Taking t = 0 in (3.1) and then integrating with respect to P_{s,x} we see that the estimate holds for q.e. (s, x) ∈ E_{0,T}. We begin with the uniqueness of probabilistic solutions. Proof. Let (u_1, µ_1), (u_2, µ_2) be two solutions of OP(ϕ, f, D).
Applying Itô's formula and using (A3) shows that there is C > 0, depending only on α and β, such that the corresponding estimate holds for q.e. (s, x); here we use that the solutions are quasi-continuous. Using this and condition (3.2) we get the assertion. Proof. Since u is a strong solution of (3.4) if and only if û = e^{λt}u is a strong solution of (3.4) with L_t replaced by L_t − λ, ϕ replaced by e^{λT}ϕ ∈ H and f replaced by some f̂ still satisfying (A2) and (A3), without loss of generality we may make this replacement. By (D1) and (2.3) the mapping (t, x, y) → −n(y − Π_{D(t,x)}(y)) is continuous and Lipschitz continuous in y for each fixed (t, x) ∈ E_T. Therefore, for sufficiently large λ > 0 (depending on α and β), the operator A is bounded, hemicontinuous, monotone and coercive, i.e. satisfies condition (7.84) from [18, Chapter 2]. Hence the existence of a unique strong solution u_n ∈ [W]^m of (3.4) follows from Theorem 7.1 and Remark 7.12 in [18, Chapter 2]. To prove (ii), we multiply the above equation by u_n − u* and integrate by parts. By (2.2), (u_n(t) − Π_{D(·,·)}(u_n)(t), u_n(t) − u*(t))_H ≥ 0. Therefore, from the resulting equality and (A3), using standard arguments (we apply Poincaré's inequality and Gronwall's lemma), we obtain (ii).
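The effect of the penalization term n(u_n − Π_D(u_n)) can be seen in a drastically simplified setting (our own toy ODE, not the parabolic system of the paper): in R^2 with D the closed unit disk and a constant drift b pushing the state outward, the stationary point of y' = b − n(y − Π_D(y)) lies at distance |b|/n from D, so the constraint violation vanishes as n → ∞ while the penalty term balances the drift.

```python
import numpy as np

def proj_disk(y):
    """Pi_D for D the closed unit disk in R^2."""
    r = np.linalg.norm(y)
    return y if r <= 1.0 else y / r

def penalized_flow(n, b=np.array([2.0, 0.0]), dt=1e-3, steps=10000):
    """Explicit Euler for y' = b - n (y - Pi_D(y)), started at the origin.
    The penalty term pushes the state back toward D; its strength grows
    with the penalization parameter n."""
    y = np.zeros(2)
    for _ in range(steps):
        y = y + dt * (b - n * (y - proj_disk(y)))
    return y

# Excess distance dist(y_n, D) at stationarity equals |b| / n -> 0.
for n in [10, 100, 1000]:
    y = penalized_flow(n)
    excess = np.linalg.norm(y) - 1.0
    assert abs(excess - 2.0 / n) < 1e-3
    print(f"n = {n:5d}: dist to D = {excess:.4f}")
```

The printed excess distance is approximately |b|/n, i.e. it shrinks by a factor of 10 each time n grows by a factor of 10, mirroring the convergence of the penalization scheme.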
In the proof of our main theorem on existence and approximation we will use some additional notation. Let M̂ = (X, (P̂_z)_{z∈E_1∪∆}) denote a dual process associated with the form defined by (2.4) (see [23, Theorem 5.1]). For µ ∈ M_{0,b}(E_1) let A^µ denote the additive functional of M associated with µ in the sense of (2.7), and let Â^µ denote the additive functional of M̂ associated with µ. Given α ≥ 0 and µ ∈ M_{0,b}(E_1) we set (whenever the integral exists) the potential for (s, x) ∈ E_1, where Ê_{s,x} denotes the expectation with respect to P̂_{s,x} and ζ̂ is the life time of M̂. By Ŝ_{00}(E_{0,T}) we denote the set of all µ ∈ M_{0,b}(E_{0,T}) such that |µ| is a finite order integral measure on E_1 (see [23] for the definition) and ‖R^{0,T}_0|µ|‖_∞ < ∞.
with constant C of Proposition 3.7.
Since u* ∈ L^2(0, T; V), the corresponding estimate follows from (3.11) and [10, Proposition 3.13]. We now show that from (A1)-(A3), (D1), (D2) it follows that, for q.e. (s, x) ∈ E_{0,T}, under the measure P_{s,x} the data ξ, f, D = {D_t, t ∈ [0, T]} satisfy hypotheses (H1)-(H4) from [12]. In the notation of the present paper, for a fixed probability measure P_{s,x}, these hypotheses read as follows:
(H3) f : [0, T] × Ω × R^m × R^{m×d} → R^m is measurable with respect to Prog ⊗ B(R^m) ⊗ B(R^{m×d}) (Prog denotes the σ-field of all progressive subsets of [0, T] × Ω) and there exist α, β ≥ 0 such that P_{s,x}-a.s. the Lipschitz-monotonicity estimates hold for all y_1, y_2 ∈ R^m and z_1, z_2 ∈ R^{m×d},
(H4) for each N the mapping t → D_t ∩ {y ∈ R^m : |y| ≤ N} ∈ Conv is càdlàg P_{s,x}-a.s. (with the convention that D_T = D_{T−}), and there is a semimartingale A ∈ H^2(P_{s,x}) such that A_t ∈ Int D_t for t ∈ [0, T] and inf_{t≤T} dist(A_t, ∂D_t) > 0.
It is perhaps worth remarking that in the case where the operator L is in nondivergence form (for instance, L is the Laplace operator ∆ or, more generally, ∂a_{ij}/∂x_k ∈ L^∞((0, T) × E) for i, j, k = 1, . . . , d), the process X corresponding to L can be constructed by solving an Itô equation. This allows one to simplify some arguments in the proofs of the results presented above, but actually not much. One reason is that we are working in the L^2 setting, so even in the case L = ∆ the solution of (1.1), (1.2) need not be continuous. On the other hand, since we use a stochastic approach via BSDEs (the basic relation is Y = u(X), where Y is the first component of the solution of the corresponding BSDE; see Remark 3.5(ii)), we must know that Y is continuous, and hence that u is quasi-continuous. This in turn requires the introduction of quasi-notions (capacity, quasi-continuity, etc.), and we still have to use some results from probabilistic potential theory.
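For instance, when L = ∆ with constant coefficients, the process associated with L solves the Itô equation dX_t = √2 dW_t, since the generator of √2 W is the Laplacian. A minimal Euler–Maruyama sketch of this construction (our own illustration; all names are ad hoc):

```python
import numpy as np

rng = np.random.default_rng(2)

def euler_maruyama(x0, T, n_steps, sigma=np.sqrt(2.0)):
    """One path of dX = sigma dW; for sigma = sqrt(2) the generator of X
    is the Laplacian, i.e. X is the diffusion associated with L = Delta."""
    dt = T / n_steps
    dW = np.sqrt(dt) * rng.standard_normal((n_steps, len(x0)))
    return np.vstack([x0, x0 + np.cumsum(sigma * dW, axis=0)])

# Monte Carlo sanity check: E|X_T - x0|^2 = 2 d T when the generator
# is the Laplacian (each coordinate has variance 2T).
d, T, n_paths = 2, 1.0, 20000
ends = np.array([euler_maruyama(np.zeros(d), T, 50)[-1] for _ in range(n_paths)])
est = np.mean(np.sum(ends ** 2, axis=1))
assert abs(est - 2 * d * T) < 0.3
print(f"Monte Carlo estimate of E|X_T|^2: {est:.3f} (theory: {2 * d * T})")
```

For constant σ the Euler scheme is exact in distribution, so the only error in the printed estimate is the Monte Carlo fluctuation.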

Variational solutions
In this section we show that the results of Section 3 can be translated into results on solutions of (1.1), (1.2) in the sense of the analytical definition formulated below. Solutions in the sense of this definition will be called variational solutions. To see this it suffices to take t = 0 in (4.1) and consider η such that η^i = v and η^j = 0 for j ≠ i. Also note that using a standard argument (see, e.g., the reasoning following (1.16) in [16, Chapter III]) and the fact that µ({t} × E) = 0 for t ∈ [0, T] (see Remark 3.5(iv)), one can show that (4.3) implies (4.1).