Abstract
This paper is concerned with an initial value problem for a stochastic variational inequality associated with elasto-plastic torsion. Our goal is to establish the existence and uniqueness of a solution. The stochastic problem is reduced to an essentially deterministic problem, which is not covered by existing results on evolution variational inequalities. We propose a definition of a solution in the same spirit as for weak solutions of partial differential equations, and derive some basic consequences of this definition. Based on these results, we prove the existence and uniqueness of a solution to the stochastic problem.
1 Introduction
The goal of this paper is to study a stochastic evolution variational inequality of the form
Here \(\partial I_{\mathfrak {K}}( \cdot )\) denotes the subdifferential of the indicator function \(I_{\mathfrak {K}}( \cdot ),\) where \(\mathfrak {K}\) is a closed convex subset of a certain function space. In this paper, we consider only the \(\mathfrak {K}\) associated with elasto-plastic torsion. The right-hand side of (1.1) represents a random noise, where \(M = M(t)\) is a certain Hilbert-space-valued continuous martingale.
The genesis of stochastic variational inequalities is the celebrated Skorohod problem [15]; see also [10]. For generalizations to higher space dimensions, see [5, 16]. This subject has evolved in various directions, and (1.1) above is one such version. When \(\mathfrak {K}\) is the set of nonnegative functions, initial boundary value problems were discussed in [7–9, 12]. In particular, [12] has inspired extensive research on (1.1) and related problems in a general abstract setting; see [1–3, 14, 17], and the references therein. In such a setting, a typical assumption on the convex set \(\mathfrak {K}\) is that its interior is not empty. This assumption seems necessary to obtain suitable regularity so that a solution of (1.1) can be defined in an appropriate sense. However, it excludes some important applications. For example, if \(\mathfrak {K}\) is the set of nonnegative functions in the basic function class \(H_0^1(G),\) then \(\mathfrak {K}\) has empty interior with respect to the \(H_0^1(G)\)-norm for every space dimension \(d = 1, 2, \ldots .\) When \(\mathfrak {K}\) is associated with elasto-plastic torsion, it is given by
Obviously, this set also has empty interior with respect to the \(H_0^1(G)\)-norm. At present, there seems to be no result on the Cauchy problem for (1.1) when \(\mathfrak {K}\) is defined by (1.2). In this paper, we address this problem exclusively for \(\mathfrak {K}\) defined by (1.2).
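Explicitly, consistent with the gradient constraint \(|\nabla u| \le 1\) invoked in the proofs of Lemmas 2.1 and 3.2 below, the set (1.2) is

$$\begin{aligned} \mathfrak {K} = \bigl \{ v \in H_0^1(G) \, : \, |\nabla v(x)| \le 1, \hbox { for almost all } x \in G \bigr \}. \end{aligned}$$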
Let the initial condition be given by
We seek a stochastic process \(u = u(t)\) defined on the time interval \([0, T]\) such that \(u(t) \in \mathfrak {K},\) for almost all \(t,\) and (1.1) and (1.3) are satisfied in an appropriate sense with probability one. Following the general strategy in [12], we reduce (1.1) to an essentially deterministic problem, and devote a good portion of this work to that deterministic problem. More precisely, we will work with the following form of a deterministic problem:
If the right-hand side of (1.1) is replaced by \(f \in L^2(0, T; H^{-1}(G)),\) the existence and uniqueness of a solution was established in a general setting which covers the case of (1.2); see [4, 11], where a solution is defined in terms of a variational inequality. In principle, we will adopt such a formulation for the definition of a solution. However, substantial modification is needed when the right-hand side of (1.1) is not an ordinary function of the time variable. We note that \(M(t)\) is only Hölder continuous in the time variable, and is not of bounded variation. Also, some key technical devices used in [4, 11] for the regularity and uniqueness of a solution do not seem to be adaptable to our case.
On the other hand, our definition of a solution requires substantial regularity of \(M\) with respect to the space variables. More precisely, we will assume \(M \in C([0, T]; C_0^1(\overline{G})).\) In Sect. 3, we present the definition of a solution of the deterministic problem, and establish the uniqueness of a solution. At present, the existence of a solution according to our definition is not known. This is due to the lack of basic estimates under our assumptions on \(M(t).\) But for the stochastic problem discussed in Sect. 4, this difficulty can be overcome by means of stochastic integrals, and we are able to establish the existence of a solution with probability one, which yields the solution as a stochastic process. We will also show that this stochastic process with state-space \(\mathfrak {K}\) is a Markov process, and that it has an invariant measure on \(\mathfrak {K}.\)
2 Notation and technical preliminaries
Throughout this paper, \(G\) is a bounded open subset of \(\mathbb R^d\) with smooth boundary \(\partial G,\) and
which is a Banach space with the norm
The imbedding \(C_0^1( \overline{G} ) \rightarrow H_0^1(G)\) is dense.
As above, the set \(\mathfrak {K}\) is defined by
When \(\mathfrak {X}\) is a Banach space and \(J\) is an interval in \(\mathbb R,\)
When \(\mathfrak {X}\) is a topological space, \(\mathcal {B}(\mathfrak {X})\) denotes the set of all Borel subsets of \(\mathfrak {X}.\) When \(\mathfrak {X}\) is a compact metric space, \(\mathcal {M}(\mathfrak {X})\) is the space of Radon measures, and \(\mathcal {M}(\mathfrak {X})^d\) is the set of all \(\xi = (\xi _1, \ldots , \xi _d),\quad \xi _j \in \mathcal {M}(\mathfrak {X}),\quad j=1, \ldots , d.\)
Lemma 2.1
Let \(g \in \mathfrak {K}.\) Then, for any \(\epsilon > 0,\) there is \(g_{\epsilon } = g_{\epsilon }(x)\) such that
and
Here \(C_c^{\infty }(G)\) denotes the set of all functions in \(C^{\infty }(\mathbb R^d)\) with compact support in \(G.\)
Proof
We need a special partition of unity for \(G.\) Let
where \(B_{r_j}(z_j)\) is an open ball in \(\mathbb R^d\) with center \(z_j \in G\) and radius \(r_j >0\) with the following properties.
-
(i)
for \(j = 1, \ldots , n, B_{r_j}(z_j) \cap \partial G\) is not empty, and if \(z \in B_{r_j}(z_j) \cap \partial G,\) then \((1 - \lambda ) z_j + \lambda z \in G,\) for all \(0 \le \lambda < 1.\)
-
(ii)
for \( j = n+1, \ldots , n+m,\overline{ B_{r_j}(z_j)} \subset G.\)
Let \(\{ \alpha _j\}_{j=1}^{n + m}\) be a partition of unity subordinate to \(\{ B_{r_j}(z_j)\}_{j=1}^{n+m}\) such that
and
We then define a mapping \(\psi _{\lambda } : \mathbb R^d \rightarrow \mathbb R^d\) by
Then, for \(x \in \overline{G},\)
and thus,
for some constant \(C > 0,\) where \(I\) is the \(d \times d\) identity matrix and \(D\psi _{\lambda }\) is the derivative matrix of \(\psi _{\lambda }.\) Let \(g \in H_0^1(G)\) be given such that \(|\nabla g(x) | \le 1,\) for almost all \( x \in G.\) We can extend \(g\) by
We find that
for all \(\lambda > 1\) with sufficiently small \(| \lambda - 1|.\)
For \(\delta > 0,\) let \(\zeta _{\delta } \in C_c^{\infty }(\mathbb R^d)\) be a mollifier such that
Now let \(\epsilon > 0\) be given. Then, we can choose \(\lambda > 1\) and \( \delta >0\) such that
satisfies the properties (2.1) and (2.2). \(\square \)
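In summary, the approximant combines the inward dilation with mollification. Writing \(\tilde{g}\) for the zero extension of \(g\) to \(\mathbb R^d,\) one reading consistent with the construction above is

$$\begin{aligned} g_{\epsilon } = \kappa _{\lambda } \, \zeta _{\delta } * \bigl ( \tilde{g} \circ \psi _{\lambda } \bigr ), \end{aligned}$$

where \(\kappa _{\lambda } \in (0, 1]\) is a normalization with \(\kappa _{\lambda } \rightarrow 1\) as \(\lambda \downarrow 1,\) chosen so that the gradient bound \(|\nabla g_{\epsilon }| \le 1\) survives the distortion estimate for \(D\psi _{\lambda }.\) Since \(\tilde{g} \circ \psi _{\lambda }\) vanishes near \(\partial G\) for \(\lambda > 1,\) mollification with small \(\delta \) produces \(g_{\epsilon } \in C_c^{\infty }(G).\)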
Lemma 2.2
Let \(f \in C([0, L]; H_0^1(G))\) such that \(f(t) \in \mathfrak {K},\) for all \(t \in [0, L].\) Then, for each \(\gamma >0,\) there is \(f_{\gamma } \in C([0, L]; C_0^1( \overline{G} ))\) such that
and
Proof
Fix any \(\gamma > 0.\) There is \(\delta > 0\) such that
Let \(\bigl \{J_k\bigr \}_{k=1}^N\) be a family of open intervals such that the length of \(J_k\) is \(\delta ,\) its midpoint \(z_k\) belongs to \([0, L],\) and
We then choose \(\beta _k \in C_c^{\infty }(J_k)\) such that
and define
By virtue of (2.5),
We apply Lemma 2.1 to each \(f(z_k)\) and obtain \(f_{k, \gamma } \in C_c^{\infty }(G)\) such that
and
Let
This \(f_{\gamma }\) satisfies (2.3) and (2.4). \(\square \)
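With \(\beta _k\) and \(f_{k, \gamma }\) as above, the natural definition of the approximant (our reading of the construction) is the convex combination in time

$$\begin{aligned} f_{\gamma }(t) = \sum _{k=1}^{N} \beta _k(t) \, f_{k, \gamma }, \qquad t \in [0, L]; \end{aligned}$$

since the gradient constraint is convex and the \(\beta _k\) are nonnegative and sum to one on \([0, L],\) the bounds obtained for each \(f_{k, \gamma }\) from Lemma 2.1 carry over to \(f_{\gamma }(t)\) for every \(t.\)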
For convenience of notation, we write
Suppose that \(\xi \in \mathcal {M}\bigl ([0, T] \times \overline{G} \bigr )^d.\) For any \(t \in (0, T],\) \(\nabla \cdot \xi \in C_t^{*}\) is defined by
for all \( h \in C\bigl ([0, t]; C_0^1(\overline{G})\bigr ),\) where \(< \cdot , \cdot >_{C_t^{*}, C_t}\) denotes the duality pairing between \(C\bigl ([0, t]; C_0^1(\overline{G})\bigr )\) and its dual \(C_t^{*}.\)
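Concretely, the pairing is the one given by formal integration by parts:

$$\begin{aligned} < \nabla \cdot \xi , \, h>_{C_t^{*}, C_t} = - \mathop \int \limits _{[0, t] \times \overline{G}} \nabla h \cdot d\xi , \qquad h \in C\bigl ([0, t]; C_0^1(\overline{G})\bigr ), \end{aligned}$$

which is the convention that makes the term \(\mathop \int \nabla M \cdot d\xi \) in (3.9) below consistent with (3.8).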
Next let \(\epsilon >0\) and \(\rho _{\epsilon } \in C_c^{\infty }(\mathbb R)\) be a mollifier such that
Fix any \(t \in (0, T),\) and let \(\epsilon \in (0, T - t).\) The convolution \(\nabla \cdot \xi *\rho _{\epsilon } \in C_t^{*}\) is defined by
for all \( h \in C\bigl ([0, t]; C_0^1(\overline{G})\bigr ).\)
Lemma 2.3
Let \(\xi \in \mathcal {M}\bigl ([0, T] \times \overline{G}\bigr )^d.\) Suppose that its variation satisfies
Fix any \(t \in (0, T)\). For \(\epsilon \in (0, T- t)\), it holds that
where \(\theta (\epsilon ) \rightarrow 0\) as \(\epsilon \rightarrow 0\).
Proof
Choose any \(\psi = \psi (s, x) \in C\left( [0, t]; C_0^1(\overline{G})\right) \) such that
It is convenient to introduce
Then,
and
Let
Then, \( h_j \in C([0, T]; C_0^1(\overline{G}))\) with
Thus,
where \(\bigl \Vert \xi \bigr \Vert \) denotes the variation of \(\xi \). It follows from (2.8) that
Since \(\bigl \Vert \xi \bigr \Vert \bigl ( \{0\} \times \overline{G}\bigr ) = 0,\) we have
which yields (2.7). \(\square \)
Lemma 2.4
If \(v \in C_r([0, T); L^2(G))\) and \(v(t) \in \mathfrak {K},\) for almost all \(t \in [0, T),\) then \(v(t) \in \mathfrak {K},\) for each \(t \in [0, T)\).
Proof
This follows from the fact that \(\mathfrak {K}\) is a closed subset of \(L^2(G)\). \(\square \)
Lemma 2.5
\(\mathfrak {K}\) is a convex compact metric space with the metric induced by the \(L^2(G)\)-norm.
3 Deterministic problem
Throughout this section, we assume
We rewrite (1.1) as
and the initial condition is given by
For our definition of a solution, we need some preliminary observation. Suppose that we have the following ideal situation.
and
We then set
so that \(F \in L^2(0, T; H^{-1}(G))\) and (3.3) is satisfied if
holds for all \(v \in L^2(0, T; H_0^1(G))\) such that \(v(t) \in \mathfrak {K},\) for almost all \(t.\) We can simply take the condition (3.7) as part of the definition of a solution. Without (3.6), however, the condition (3.7) does not make sense. So we adopt the basic spirit of the definition of weak solutions of partial differential equations. Assuming conditions (3.5) and (3.6), we derive a necessary consequence of (3.7). If this necessary consequence can be expressed under conditions weaker than (3.6), we replace (3.7) by this consequence as part of the definition of a solution. This is the motivation behind the following definition.
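To make this concrete, suppose (3.5) and (3.6) hold, write \(w = v - u + M,\) and recall that \(\nabla \cdot \xi \) plays the role of \(- \partial I_{\mathfrak {K}}(u),\) so that formally \(\mathop \int \limits _0^s < \nabla \cdot \xi , \, v - u> dt \ge 0,\) for every admissible \(v.\) Testing \(\partial _t (u - M) - \Delta u - \nabla \cdot \xi = 0\) against \(w\) gives

$$\begin{aligned} \mathop \int \limits _0^s < \frac{\partial v}{\partial t}, \, w> dt - \mathop \int \limits _0^s < \Delta u, \, w> dt - \mathop \int \limits _0^s < \nabla \cdot \xi , \, w> dt = \frac{1}{2}\Vert w(s)\Vert _{L^2(G)}^2 - \frac{1}{2}\Vert w(0)\Vert _{L^2(G)}^2. \end{aligned}$$

Since \(< \nabla \cdot \xi , \, w> = < \nabla \cdot \xi , \, v - u> + < \nabla \cdot \xi , \, M>\) and \(\mathop \int \limits _0^s < \nabla \cdot \xi , \, M> dt = - \mathop \int \limits _{[0, s] \times \overline{G}} \nabla M \cdot d\xi ,\) discarding the nonnegative term \(\mathop \int \limits _0^s < \nabla \cdot \xi , \, v - u> dt\) yields exactly the inequality (3.9), with \(w(0) = v(0) - u_0\) because \(M(0) = 0.\)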
Definition 3.1
\(u\) is said to be a solution of (3.3) and (3.4) on the interval \([0, T]\) if
-
(i)
\(u \in L^{\infty }(0, T; H_0^1(G))\) and \(u(t) \in \mathfrak {K}\) for almost all \(t \in [0, T]\).
-
(ii)
There is \(\xi \in \mathcal {M}\bigl ([0, T] \times \overline{G} \bigr )^d\) such that \(\bigl \Vert \xi \bigr \Vert \bigl (\{0\} \times \overline{G}\bigr ) = 0,\) and
$$\begin{aligned} \frac{\partial }{\partial t} \bigl ( u - M \bigr ) - \Delta u - \nabla \cdot \xi = 0 \end{aligned}$$(3.8)in the sense of distributions over \((0, T) \times G,\) and
$$\begin{aligned}&\mathop \int \limits _0^s < \frac{\partial v}{\partial t}, v - u + M> dt - \mathop \int \limits _0^s < \Delta u, v - u + M > dt \nonumber \\&\quad + \mathop \int \limits _{[0, s] \times \overline{G}} \nabla M \cdot d\xi \ge \frac{1}{2}\Vert v(s) - u(s) + M(s)\Vert _{L^2(G)}^2 - \frac{1}{2}\Vert v(0) - u_0 \Vert _{L^2(G)}^2\nonumber \\ \end{aligned}$$(3.9)for almost all \(s \in [0, T),\) for each \(v\) such that
$$\begin{aligned} \frac{\partial v}{\partial t} \in L^2(0, T; H^{-1}(G)),\qquad v(t) \in \mathfrak {K}, \qquad \hbox {for almost all }t \in [0, T] \end{aligned}$$Here \(< \cdot , \cdot >\) denotes the duality pairing between \(H_0^1(G)\) and \(H^{-1}(G).\)
We note that \(\nabla \cdot \xi \) plays the role of \( - \partial I_{\mathfrak {K}}(u)\).
Lemma 3.2
The conditions (i), (3.8) and (3.9) imply the initial condition (3.4) and the regularity:
and \(u(t) \in \mathfrak {K},\) for each \(t \in [0, T).\)
Proof
For each \(t \in (0, T],\) define \(\xi _t \in \mathcal {M}\bigl (\overline{G}\bigr )^d\) by
where \(\xi \) is the measure in (3.8) and (3.9). Then, it follows from (3.8) that
holds in the sense of distributions over \((0, T) \times G.\) Thus, for each \(\phi \in C_c^{\infty }(G)\),
in the sense of distributions over \((0, T),\) where \(<, \, \,>\) stands for the inner product in \(L^2(G).\) It follows that \(<u, \phi > \in C_r([0, T)).\) Choose any \(t^{*} \in [0, T).\) There is a sequence \(\{t_n\} \downarrow t^{*}\) such that \(u(t_n) \in H_0^1(G)\) and \(\bigl \Vert \bigl |\nabla u(t_n)\bigr | \bigr \Vert _{L^{\infty }(G)} \le 1,\) for all \(n.\) Since \(u(t_n) \rightarrow u(t^{*})\) in the sense of distributions over \(G,\) we find that \( u(t^{*}) \in H_0^1(G)\) and \(\bigl \Vert \bigl |\nabla u(t^{*})\bigr | \bigr \Vert _{L^{\infty }(G)} \le 1.\) Thus, \(u(t) \in \mathfrak {K},\) for all \(t \in [0, T).\) Next choose an arbitrary sequence \(\{t_n\} \downarrow t^{*} \in [0, T).\) Since \(W^{1, \infty }(G)\) is compactly embedded into \(W^{s, p}(G)\) for each \(0 \le s <1\) and \(1 \le p < \infty ,\) there is a subsequence still denoted by \(\{t_n\}\) such that
Hence, \(u \in C_r([0, T); W^{s, p}(G)), \) for all \(0 \le s < 1\) and \(1 \le p < \infty \).
Next choose \(v\) in (3.9) such that \(v(t) = u_0,\) for all \(t \in [0, T].\) It follows that
which yields (3.4). \(\square \)
Lemma 3.3
Let \(\rho _{\epsilon }\) be a mollifier satisfying (2.6). Suppose that \(u\) is a solution with corresponding \(\xi \in \mathcal {M} \bigl ([0, T] \times \overline{G}\bigr )^d.\) Then, for each \(t \in (0, T),\) and \(0 < \epsilon < T - t,\) there is \(h = h(t, \epsilon , \xi ) \in L^2(0, t; L^2(G)^d)\) such that
Proof
It follows from (3.8) that
in the sense of distributions over \((0, T- \epsilon ) \times G,\) and hence,
Let
Then, \(\mathfrak {X}\) is a subspace of \(L^2(0, t; L^2(G)^d).\) For each \(\mathfrak {x} = \nabla \phi \in \mathfrak {X},\) define
Then,
and hence, \(\Lambda \) is a continuous linear functional on \(\mathfrak {X}.\) It can be extended to a continuous linear functional on \(L^2(0, t; L^2(\mathbb R)^d)\) by the Hahn-Banach theorem. Thus, there is \(h = h(t, \epsilon , \xi ) \in L^2(0, t; L^2(G)^d)\) such that
for all \(\phi \in C([0, t]; C_0^1(\overline{G}))\). \(\square \)
Lemma 3.4
Suppose that \(u\) is a solution with corresponding \(\xi \in \mathcal {M} \bigl ([0, T] \times \overline{G}\bigr )^d.\) Then, for each \(t \in (0, T),\)
where \(\rho _{\epsilon }\) is the same as above.
Proof
It follows from (3.10) that for each \( 0 < \epsilon < T - t,\)
for all \(s \in [0, t],\) and all \(v \in C^1([0, t]; C_0^1(\overline{G})).\) Here \(< \cdot , \cdot >\) is the duality pairing between \(H_0^1(G)\) and \(H^{-1}(G). \)
By passing \(\epsilon \rightarrow 0\) in (2.8) with \(\psi \) replaced by \(v\) and \(M,\) respectively,
and
By means of (2.7), we see that
It follows that
Now it is easy to see that
because the limits of all other terms of (3.12) exist as \(\epsilon \rightarrow 0.\) Next by choosing \(v \in C^1([0, t]; C_0^1(\overline{G}))\) such that \(\bigl | \nabla v(s, x)\bigr | \le 1, \) for all \((s, x) \in [0, t] \times \overline{G},\) and by comparing with the inequality (3.9), we conclude that
For each \( v \in C([0, t]; C_0^1(\overline{G}))\) such that \(\bigl | \nabla v(s, x)\bigr | \le 1, \) for all \((s, x) \in [0, t] \times \overline{G},\) there is a sequence \(\bigl \{v_n\bigr \}\) in \(C^1([0, T]; C_0^1(\overline{G}))\) such that \(\bigl | \nabla v_n(s, x)\bigr | \le 1, \) for all \((s, x) \in [0, t] \times \overline{G},\) and \(v_n \rightarrow v\) in \(C([0, t]; C_0^1(\overline{G})).\) Thus, (3.11) follows. \(\square \)
Lemma 3.5
Let \(u\) be a solution with corresponding \(\xi \in \mathcal {M}\bigl ([0, T] \times \overline{G}\bigr )^d.\) Let \(v\) be another function such that
It holds that
for each \(t \in (0, T).\)
Proof
Fix any \(\gamma > 0,\) and \(0 < t < T.\) By Lemma 2.3, there is \(0 < \epsilon _0 < \gamma \) such that \(\epsilon _0 < T -t,\) and
By virtue of Lemmas 2.2 and 3.3, there is \(w_{\epsilon , \gamma }\) such that
Hence, for each \(0 < \epsilon \le \epsilon _0,\)
Thus, we have
This yields (3.13). \(\square \)
Theorem 3.6
According to Definition 3.1, there is at most one solution of (3.3) and (3.4).
Proof
Let \(u_1\) and \(u_2\) be solutions with corresponding measures \(\xi _1\) and \(\xi _2\), respectively. Let us set
Then, it holds that
in the sense of distributions over \((0, T) \times G\). Let \(\rho _{\epsilon }\) be a mollifier satisfying (2.6). For \( 0 < \epsilon < T,\) it holds that
on the interval \([0, T - \epsilon )\). Choose any \(t_{*} \in (0, T).\) By virtue of Lemma 3.5,
and
Since \(w \in C_r([0, T); L^2(G)) \cap L^{\infty }(0, T; H_0^1(G)),\) it follows from (3.14) and (3.15) that
which yields \(w \equiv 0.\) \(\square \)
Theorem 3.7
Let \(u\) be a solution with corresponding \(\xi \in \mathcal {M}\bigl ([0, T] \times \overline{G}\bigr )^d\). Suppose that \(\mathcal O\) is an open subset of \((0, T) \times G\) such that
for some \( 0 < \nu < 1\). Then, \(\nabla \cdot \xi = 0\) in \(\mathcal O\) in the sense of distributions.
Proof
Let \(\mathfrak {V}\) be a nonempty open set such that
Choose any \(\phi \in C_c^{\infty }(\mathfrak {V})\). Then, there is \(\lambda \ne 0\) such that
Let us define
Then, \(v(t) \in \mathfrak {K},\) for almost all \(t \in (0, T),\) and we can apply Lemma 3.5 so that
and thus,
But
By considering \(\pm \lambda , \) we conclude that
This implies that \(\nabla \cdot \xi = 0\) in \(\mathfrak {V},\) and hence, in \(\mathcal O\) in the sense of distributions. \(\square \)
Remark 3.8
For a solution \(u\) of (3.3) and (3.4), the corresponding measure \(\xi \) is not determined uniquely, because we can add to \(\xi \) any divergence-free smooth vector field with compact support in \((0, T) \times G.\) We also note that Theorem 3.7 is related to the general fact that \(\partial I_{\mathfrak {K}}(u) = 0\) for \(u\) in the interior of \(\mathfrak {K},\) when that interior is nonempty.
4 Stochastic problem
Throughout this section, \(\bigl (\Omega , \mathcal {F}, P\bigr )\) is a given complete probability space and \(\{\mathcal {F}_{t}\}_{t \ge 0}\) is a filtration on \(\bigl (\Omega , \mathcal {F}\bigr )\) such that \(\mathcal {F}_t\) is right-continuous for all \(t,\) and \(\mathcal {F}_0\) contains all \(P\)-negligible sets in \(\mathcal {F}.\) For general information on stochastic calculus, see [6, 10, 13].
We set
where \(\{B_j\}_{j=1}^{\infty }\) is a sequence of mutually independent standard Brownian motions on \(\bigl (\Omega , \mathcal {F}, \{\mathcal {F}_{t}\}, P\bigr ),\) and each \(g_j = g_j(\omega ,t, x)\) is \(H_0^1(G) \cap H^k(G)\)-valued progressively measurable with \(k > \frac{d}{2} + 1\) such that
Under these assumptions, \(M \in C([0, T]; C_0^1(\overline{G})),\) for \(P\)-almost all \(\omega \in \Omega .\)
We first address the issue of existence of a solution, and then we will discuss the Markov property of the solution and prove the existence of an invariant measure.
4.1 Existence
Definition 4.1
A stochastic process \(u\) is a solution of (1.1) and (1.3) if it is adapted to \(\{\mathcal {F}_t\}\) and for \(P\)-almost all \(\omega \in \Omega ,\) \(u(\omega )\) is a solution of (3.3) and (3.4) according to Definition 3.1.
Theorem 4.2
Let \(T > 0\) be given. Suppose that \(u_0\) is \(\mathcal {F}_0\)-measurable, and \( u_0(\omega ) \in \mathfrak {K}, P\)-almost all \(\omega \in \Omega .\) Under the conditions (4.1) and (4.2), there is a pathwise unique solution to (1.1) and (1.3).
Our strategy of the proof is as follows.
By means of the penalty method discussed in [11], we consider the following initial boundary value problem for each \(\epsilon > 0\).
In conjunction with the existence of a unique solution, we will obtain basic stochastic estimates of the solution that are independent of \(\epsilon > 0.\) By means of these estimates, we can construct a pathwise solution for \(P\)-almost all \(\omega \in \Omega ,\) and use Theorem 3.6 to show that this is the desired stochastic process.
We now present the technical details.
The above problem (4.3) and (4.5) can be resolved by direct application of Theorems 4.2.4 and 4.2.5 of [13]. For this, we consider the following operator for \(\epsilon > 0.\)
Our Gelfand triple is
It is easy to see the following properties of the operator \(A_{\epsilon } = A_{\epsilon }(w)\).
-
[I]
For all \(w_1, w_2, w_3 \in V,\) the map
$$\begin{aligned} \lambda \mapsto < A_{\epsilon }(w_1 + \lambda w_2), w_3>_{V^{*},~ V} \end{aligned}$$is continuous \(\mathbb R \rightarrow \mathbb R.\)
-
[II]
For all \(w_1, w_2 \in V,\)
$$\begin{aligned} < A_{\epsilon }(w_1) - A_{\epsilon }(w_2), w_1 - w_2 >_{V^{*},\, V}~ \le ~ 0 \end{aligned}$$ -
[III]
For some constant \(C_{\epsilon } > 0,\)
$$\begin{aligned} < A_{\epsilon }(w), w>_{V^{*}, V} \le - \frac{1}{2\epsilon } \bigl \Vert w\bigr \Vert _{V}^4 + C_{\epsilon } \end{aligned}$$for all \(w \in V\).
-
[IV]
For all \(w \in V,\)
$$\begin{aligned} \bigl \Vert A_{\epsilon }(w)\bigr \Vert _{V^{*}} \le C_{\epsilon }\bigl \Vert w \bigr \Vert _{V}^3 + C \end{aligned}$$
Here \(C\) and \(C_{\epsilon }\) are some positive constants.
For the property [II], we consider a convex functional \(J_{\epsilon }(\cdot )\) on \(W_0^{1, 4}(G)\) defined by
Then, \(- A_{\epsilon }(w)\) is the Gâteaux differential at \(w\) of \(J_{\epsilon }\).
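The paper refers to [11] for the penalization; one standard choice consistent with the convexity of \(J_{\epsilon }\) and with the growth properties [III] and [IV] (an assumption on our part, not a display reproduced from the text) is

$$\begin{aligned} J_{\epsilon }(w) = \frac{1}{2} \mathop \int \limits _G |\nabla w|^2 \, dx + \frac{1}{4\epsilon } \mathop \int \limits _G \Bigl ( \bigl ( |\nabla w|^2 - 1 \bigr )^{+} \Bigr )^2 dx, \end{aligned}$$

whose Gâteaux differential gives

$$\begin{aligned} A_{\epsilon }(w) = \Delta w + \frac{1}{\epsilon } \nabla \cdot \Bigl ( \bigl ( |\nabla w|^2 - 1 \bigr )^{+} \nabla w \Bigr ). \end{aligned}$$

With this choice, \(- < A_{\epsilon }(w), w>_{V^{*}, V} = \Vert \nabla w \Vert _{L^2(G)}^2 + \epsilon ^{-1} \mathop \int \limits _G (|\nabla w|^2 - 1)^{+} |\nabla w|^2 \, dx,\) and the elementary bound \((t-1)^{+} t \ge \frac{1}{2} t^2 - \frac{1}{2},\) with \(t = |\nabla w|^2,\) is consistent with [III]; the cubic growth of the penalty flux is consistent with [IV].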
For [III] and [IV], we note that
and
for all \(v, w \in W_0^{1, 4}(G)\).
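Before turning to the stochastic estimates, it may help to see the penalization at work. The following sketch is purely illustrative and rests on assumptions: we use the standard gradient-constraint penalization \(A_{\epsilon }(w) = \Delta w + \epsilon ^{-1} \nabla \cdot \bigl ( (|\nabla w|^2 - 1)^{+} \nabla w \bigr )\) (the exact operator of [11] may differ), work in one space dimension, and replace the martingale forcing by a constant \(f;\) the grid and parameter values are ours.

```python
import numpy as np

# Illustrative explicit scheme for a penalized problem of the type (4.3):
#   u_t = u_xx + (1/eps) * ( ((u_x^2 - 1)^+) * u_x )_x + f   on (0, 1),
#   u(0, t) = u(1, t) = 0,  u(x, 0) = 0.
# The penalty flux pushes |u_x| back toward the constraint |u_x| <= 1.

N = 50                  # number of grid cells on (0, 1)
dx = 1.0 / N
eps = 0.05              # penalty parameter (eps = 1/k in the text)
dt = 1.0e-6             # small explicit step: the penalty term is stiff
T = 0.2                 # final time
f = 5.0                 # deterministic forcing standing in for the noise

u = np.zeros(N + 1)     # homogeneous Dirichlet data, zero initial datum
for _ in range(int(round(T / dt))):
    g = np.diff(u) / dx                                  # interface gradients
    flux = g + np.maximum(g * g - 1.0, 0.0) * g / eps    # linear + penalty flux
    u[1:-1] += dt * (np.diff(flux) / dx + f)             # u_t = flux_x + f
grad_max = float(np.abs(np.diff(u) / dx).max())
```

With these values the computed profile stays close to the constraint set: \(\max |u_x|\) exceeds \(1\) only slightly, which is the discrete analogue of the fact that the limit in (4.13)–(4.15) lands in \(\mathfrak {K}.\)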
For later use, we also point out the following fact which can be easily proved.
Lemma 4.3
Let \(f \in L^2(0, T;H^{-1}(G)), h \in L^2(0, T; V)\), and \(\bigl \{ g_k\bigr \}\) be a sequence in \(L^2(0, T; V)\) such that \(g_k \rightarrow g,\) weakly in \(L^2(0, T;H_0^1(G)).\) Then,
According to Theorems 4.2.4 and 4.2.5 of [13], with the help of [I]–[IV], there is a pathwise unique solution \(u\) of (4.3) and (4.5) which satisfies the following properties.
and it holds for \(P\)-almost all \(\omega \in \Omega \) that
By the Burkholder-Davis-Gundy inequality, we can derive from (4.8) that
for some positive constant \(C\) independent of \(\epsilon .\) Let us denote by \(u_k\) the solution for \(\epsilon = \frac{1}{k}, \quad k = 1, 2, \ldots \), and define
Since we have
for all \( k \ge 1,\) it follows that
Hence, there is \(\hat{\Omega }\subset \Omega \) such that \(P\bigl ( \Omega \setminus \hat{\Omega }) = 0,\) and for each \(\omega \in \hat{\Omega },\) there is a subsequence \(\{k_m\}\) depending on \(\omega \) and a positive constant \(L(\omega )\) such that
for all \(k_m\). Hence, there is a subsequence still denoted by \(\bigl \{ \bigl ( u_{k_m}, \xi _{k_m}\bigr )\bigr \}\) such that
For each \(k \ge 1,\) it holds for \(P\)-almost all \(\omega \in \Omega \) that
in the sense of distributions over \((0, T) \times G,\) and
for all \(s \in [0, T],\) and all \(v \in L^4(0, T; V)\) such that \(\dfrac{\partial v}{\partial t} \in L^{2}(0, T; H^{-1}(G)).\) Here, we note that
and
for \(P\)-almost all \(\omega \in \Omega .\)
By adding (4.17) and (4.18), we have
But
for almost all \(t \in [0, T],\) and
Thus, if \(v(t) \in \mathfrak {K},\) for almost all \(t \in [0, T],\)
and hence, (4.19) yields
for all \(s \in [0, T]\).
It follows that
for all \(\psi \in C([0, T])\) such that \(\psi (t) \ge 0, \forall t\).
There is \(\Omega ^{\dagger } \subset \hat{\Omega }\) such that \(P\bigl ( \Omega \setminus \Omega ^{\dagger }\bigr ) = 0 \) and for each \(\omega \in \Omega ^{\dagger },\) (4.16) and (4.21) hold for all \(k,\) all \(\psi \in C([0, T])\) such that \(\psi (t) \ge 0, \forall t,\) and all \(v\) such that \(\dfrac{\partial v}{\partial t} \in L^2(0, T; H^{-1}(G))\) and \(v(t) \in \mathfrak {K},\) for almost all \(t \in [0, T].\) For \(\omega \in \Omega ^{\dagger },\) let \(\bigl ( u, \xi \bigr )\) be determined by (4.13)–(4.15). Then, by Lemma 4.3, (4.21) implies that
Hence, it holds that
for almost all \(s \in [0, T],\) for each \(v\) such that \(\dfrac{\partial v}{\partial t} \in L^2(0, T; H^{-1}(G))\) with \(v(t) \in \mathfrak {K},\) for almost all \(t \in [0, T].\) It also follows from (4.16) that
in the sense of distributions over \((0, T) \times G.\) We now modify the measure \(\xi .\)
Then,
and (4.23) and (4.24) remain valid with \(\xi \) replaced by \(\hat{\xi },\) because \(M(0, x) = 0,\) for all \(x \in \overline{G}.\) Since the functional
is convex on \(L^2(0, T; H_0^1(G)),\) it follows from (4.12) and (4.14) that
Therefore,
and \(u\) and \(\hat{\xi }\) satisfy all the conditions of Definition 3.1. Also, by Lemma 3.2, we see that
and, by Lemma 2.4,
For each \(\omega \in \Omega ^{\dagger },\) a solution \(u(\omega )\) of (3.3) and (3.4) has been obtained as the limit of a convergent subsequence satisfying (4.13)–(4.15). By Theorem 3.6, the limit is independent of the choice of such a subsequence. Based on this, we will establish measurability of \(u.\)
Fix any \(t_{*} \in (0, T).\) As above, let \(u_k\) be the solution of (4.3)–(4.5) for \(\epsilon = \frac{1}{k}, k \ge 1,\) with corresponding \(\xi _k.\) We set
and choose any \(\psi \in L^2(0, t_{*}; L^2(G))\) and \(\lambda \in \mathbb R.\) It holds that
Since the set on the right-hand side belongs to \(\mathcal {F}_{t_{*}}\) and \(L^2(0, t_{*}; L^2(G))\) is separable, \(u\) is \(L^2(0, t_{*}; L^2(G))\)-valued \(\mathcal {F}_{t_{*}}\)-measurable, which is valid for each \(t_{*} \in (0, T).\) Let \(\rho _{\epsilon }\) be a mollifier satisfying (2.6). Then, \(\bigl (u*\rho _{\epsilon }\bigr )(t_{*})\) is \(L^2(G)\)-valued \(\mathcal {F}_{t_{*} + \epsilon }\)-measurable. Since \(u \in C_r ([0, T); L^2(G)), \) for each \(\omega \in \Omega ^{\dagger },\)
as \(\epsilon \rightarrow 0.\) Since \(\mathcal {F}_{t_{*} +} = \mathcal {F}_{t_{*}},\) \(u(t_{*})\) is \(L^2(G)\)-valued \(\mathcal {F}_{t_{*}}\)-measurable. Also, \(u(t_{*})\) is \(H_0^1(G)\)-valued \(\mathcal {F}_{t_{*}}\)-measurable, because \(\mathcal {B}\bigl ( H_0^1(G)\bigr ) \subset \mathcal {B}\bigl ( L^2(G)\bigr ).\) Hence, \(u\) is adapted to \(\{\mathcal {F}_t\}.\) Pathwise uniqueness is a direct consequence of Theorem 3.6. Now the proof of Theorem 4.2 is complete.
4.2 Markov property and invariant measure
We further assume that \(g_j = g_j(x),\) for all \(j,\) in (4.1), and that (4.2) holds for each \(0 < T < \infty .\)
Let \(X(t; s, x), 0 \le s \le t < \infty ,\) be the solution of (1.1) with the initial condition (1.3) replaced by
Then, for each \(x \in \mathfrak {K},\) we have \(X(t; s, x) \in \mathfrak {K},\) for all \( t \in [s, \infty ),\) for \(P\)-almost all \(\omega \in \Omega .\)
For \(0 \le s < t < \infty , \) we construct a \(\sigma \)-algebra \(\mathcal G_{t,s}\) as follows:
\(\mathcal H_{t,s}\) is the \(\sigma \)-algebra generated by \(\bigl \{ B_j(z) - B_j(s)\bigr \}, s \le z \le t, j =1, 2, \ldots ,\) together with the \(P\)-negligible sets,
and
Then, \(\mathcal G_{t, s} \subset \mathcal {F}_t,\) and \(\mathcal G_{t, s}\) is independent of \(\mathcal {F}_s.\) Let \(X_k(t; s, x)\) be the solution on the interval \([s, \infty )\) of (4.3), (4.4) and
Then, \(X_k(t; s, x)\) is adapted to \(\bigl \{\mathcal G_{t, s}\bigr \}_{t \ge s}.\) By the same argument as for the fact that \(u(t_{*})\) is \(\mathcal {F}_{t_{*}}\)-measurable, \(X(t; s, x)\) is \(\mathcal G_{t, s}\)-measurable, and hence independent of \(\mathcal {F}_s.\)
Lemma 4.4
For each \( 0 \le s \le z \le t < \infty ,\) and \(x \in \mathfrak {K},\) it holds that
Proof
This follows from pathwise uniqueness of a solution. \(\square \)
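The identity asserted in Lemma 4.4 is the flow property of the solution map; in the notation of (4.28) it reads

$$\begin{aligned} X(t; s, x) = X\bigl ( t; z, X(z; s, x)\bigr ), \qquad P\hbox {-almost surely.} \end{aligned}$$

Indeed, both sides solve (1.1) on \([z, t]\) with the same initial value at time \(z,\) so pathwise uniqueness identifies them.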
Lemma 4.5
Let \(f\) be \(H_0^1(G)\)-valued \(\mathcal {F}_s\)-measurable such that \(f(\omega ) \in \mathfrak {K},\) for \(P\)-almost all \(\omega \in \Omega .\) Then, for each \(\epsilon > 0,\) there is a function \(f_{\epsilon }\) such that
where \(a_k \in \mathfrak {K}, \forall k,\) and \(F_k\)’s are disjoint \(\mathcal {F}_s\)-measurable subsets such that \(\Omega = \bigcup _{k=1}^{N(\epsilon )} F_k,\) and
Here, \(\chi _{F_k}( \cdot )\) denotes the characteristic function of the set \(F_k\).
Proof
Since \(\mathfrak {K}\) is a compact metric space with the metric of \(L^2(G),\) it is easy to see (4.30) and (4.31). Since \(\Omega = \bigcup _{k=1}^{N(\epsilon )} F_k,\) (4.30) implies (4.29). \(\square \)
Lemma 4.6
Let \( 0 \le s \le t < \infty .\) For \(i =1, 2,\) let \(h_i\) be \(L^2(G)\)-valued \(\mathcal {F}_s\)-measurable with \(h_i(\omega ) \in \mathfrak {K},\) for \(P\)-almost all \(\omega \in \Omega .\) Then, it holds that
Proof
This follows by the same argument as for (3.16). \(\square \)
By means of the above Lemmas 4.4–4.6, we can repeat the standard argument to show the following Markov property of the process \( X = X(t; s, x).\) For details, see [6, 13].
Theorem 4.7
Let \(F\) be a bounded continuous function on \(L^2(G)\). For any \( 0 \le s \le z \le t < \infty ,\) and \(x \in \mathfrak {K},\) it holds that
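Namely, in its standard formulation, the asserted identity reads

$$\begin{aligned} E\Bigl [ F\bigl ( X(t; s, x)\bigr ) \, \Big | \, \mathcal {F}_z \Bigr ] = E\bigl [ F\bigl ( X(t; z, y)\bigr ) \bigr ] \Big |_{y = X(z; s, x)}, \qquad P\hbox {-almost surely.} \end{aligned}$$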
Theorem 4.8
There is an invariant measure \(\mu \) on \(\mathfrak {K}\) such that
for all \(t \ge 0,\) and all bounded continuous functions \(F\) on \(L^2(G).\)
Proof
By virtue of Lemma 2.5 and Theorem 4.7, we can use the method of Krylov and Bogoliubov to prove the existence of an invariant measure. We omit the details, which are well-known. \(\square \)
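For the reader's convenience, the Krylov–Bogoliubov construction runs as follows. Fix \(x \in \mathfrak {K}\) and consider the time-averaged laws

$$\begin{aligned} \mu _T(A) = \frac{1}{T} \mathop \int \limits _0^T P\bigl ( X(t; 0, x) \in A \bigr )\, dt, \qquad A \in \mathcal {B}(\mathfrak {K}). \end{aligned}$$

Since \(\mathfrak {K}\) is compact (Lemma 2.5), the family \(\{ \mu _T \}_{T \ge 1}\) is tight; by the Markov property (Theorem 4.7) and the continuity estimate of Lemma 4.6, any weak limit point as \(T \rightarrow \infty \) is an invariant measure for the process.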
References
Barbu, V., Da Prato, G., Tubaro, L.: Kolmogorov equation associated to the stochastic reflection problem on a smooth convex set of a Hilbert space. Ann. Probab. 37, 1427–1458 (2009)
Barbu, V., Da Prato, G., Tubaro, L.: The stochastic reflection problem in Hilbert spaces. Commun. Partial Differ. Equ. 37, 352–367 (2012)
Bensoussan, A., Rascanu, A.: Stochastic variational inequalities in infinite dimensional spaces. Numer. Funct. Anal. Optim. 18, 19–54 (1997)
Brezis, H.: Problèmes unilatéraux. J. Math. Pures Appl. 51, 1–164 (1972)
Cépa, E.: Problème de Skorohod multivoque. Ann. Probab. 26, 500–532 (1998)
Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Cambridge University Press, Cambridge (1992)
Donati-Martin, C., Pardoux, É.: White noise driven SPDEs with reflection. Probab. Theory Relat. Fields 95, 1–24 (1993)
Haussmann, U.G.: Stochastic PDEs with unilateral constraints in higher dimensions. In: Stochastic Partial Differential Equations and Applications (Trento, 1990). Pitman Res. Notes Math. Ser. 268, 204–215 (1992)
Haussmann, U.G., Pardoux, E.: Stochastic variational inequalities of parabolic type. Appl. Math. Optim. 20, 163–192 (1989)
Karatzas, I., Shreve, S.: Brownian Motion and Stochastic Calculus, 2nd edn. Springer, Berlin (1997)
Lions, J.L.: Quelques méthodes de résolution des problèmes aux limites non linéaires. Dunod, Paris (1969)
Nualart, D., Pardoux, E.: White noise driven quasilinear SPDEs with reflection. Probab. Theory Relat. Fields 93, 77–89 (1992)
Prévôt, C., Röckner, M.: A Concise Course on Stochastic Partial Differential Equations, Lecture Notes in Mathematics, vol. 1905. Springer, Berlin (2007)
Röckner, M., Zhu, R., Zhu, X.: The stochastic reflection problem on an infinite dimensional convex set and BV functions in a Gelfand triple. Ann. Probab. 40, 1759–1794 (2012)
Skorohod, A.V.: Stochastic equations for diffusion processes in a bounded region. Theory Probab. Appl. 6, 264–274 (1961)
Tanaka, H.: Stochastic differential equations with reflecting boundary condition in convex regions. Hiroshima Math. J. 9, 163–177 (1979)
Zhang, X.: Skorohod problem and multivalued stochastic evolution equations in Banach spaces. Bull. Sci. Math. 131, 175–217 (2007)
Cite this article
Kim, J.U. Stochastic variational inequalities associated with elasto-plastic torsion. Stoch PDE: Anal Comp 2, 27–53 (2014). https://doi.org/10.1007/s40072-013-0024-0