Abstract
In the present paper we are interested in the qualitative properties of the Markovian semigroups \({{\mathcal P }}=({{\mathcal P }}_t)_{t\ge 0}\) associated with the solutions of certain stochastic partial differential equations (SPDEs) with boundary noise. We assume that these problems can be written as an abstract stochastic PDE on a Hilbert space \(H\) of the following form:
Here \(L\) is a real-valued Lévy process, \(A:D(A)\subset H\rightarrow H\) is the infinitesimal generator of a strongly continuous semigroup, \(\sigma :H\rightarrow {\mathbb {R}}\) is a Lipschitz continuous map bounded from below and above, and \(B:{\mathbb {R}}\rightarrow H\) is a possibly unbounded operator. As typical examples of such stochastic evolution equations we consider the damped wave equation and the heat equation, both driven by boundary Lévy noise. In this article, we first show that, if the system
is approximately controllable at time \(T>0\) with control \(v\), then, under some additional conditions on \(B\), \(A\) and \(L\), the probability measure on \(H\) induced by \(u(t,x)\) at a given time \(t>0\), \(x\in H\), is positive on open subsets of \(H\). Secondly, we investigate under which conditions on the Lévy process \(L\) and on the operators \(A\) and \(B\) the solution to Eq. (1) is asymptotically strong Feller. It follows from our results that the wave equation with boundary Lévy noise has at most one invariant measure which is non-degenerate.
Notes
If \(\nu (B)\lambda (I) = \infty \), then obviously \(\eta (B\times I)=\infty \) a.s.
A Lévy measure on \({{ {\mathbb {R}}}}\) is a \(\sigma \)-finite measure such that \(\nu (\{0\})=0\) and \(\int _{{ {\mathbb {R}}}}(|z|^2 \wedge 1) \nu (dz)<\infty \).
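As a small numerical illustration of the definition above (our own sketch, not part of the paper): for the symmetric \(\alpha\)-stable measure \(\nu(dz)=|z|^{-1-\alpha}\,dz\), an assumed concrete example, the defining integral \(\int_{\mathbb{R}}(|z|^2\wedge 1)\,\nu(dz)\) can be evaluated in closed form and is finite exactly for \(\alpha\in(0,2)\).

```python
# Hypothetical illustration: the symmetric alpha-stable Levy measure
# nu(dz) = |z|**(-1-alpha) dz satisfies int (|z|^2 min 1) nu(dz) < infinity
# for alpha in (0, 2).  The integral splits into a closed form.

def levy_integral(alpha):
    """Compute int_R (|z|^2 min 1) |z|**(-1-alpha) dz in closed form.

    Small jumps: 2 * int_0^1 z**(1-alpha) dz = 2 / (2 - alpha).
    Large jumps: 2 * int_1^inf z**(-1-alpha) dz = 2 / alpha.
    """
    assert 0 < alpha < 2, "finiteness requires alpha in (0, 2)"
    return 2.0 / (2.0 - alpha) + 2.0 / alpha

for alpha in (0.5, 1.0, 1.5):
    print(alpha, levy_integral(alpha))  # finite for every alpha in (0, 2)
```

The blow-up of the two summands as \(\alpha\to 2\) or \(\alpha\to 0\) mirrors why the cutoff \(|z|^2\wedge 1\) is needed on both small and large jumps.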
A process \(u\) is \(H\)-valued iff, for all \(t\ge 0\), \(u(t)\) is an \(H\)-valued random variable.
Note that \(x\in supp(\varphi )\) iff for all \(\delta >0\), \(\varphi ( {{\mathcal D }}_H(x,\delta ))>0\).
An increasing sequence \(\{ d_n:n\in {\mathbb {N}}\}\) of pseudo-metrics is called a totally separating system of pseudo-metrics for \({{ \mathcal X }}\) if \(\lim _{n\rightarrow \infty }d_n(z,y) =1\) for all \(z,y\in {{ \mathcal X }}\), \(z\not = y\).
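A standard concrete example (our own illustration; the specific formula for \(d_n\) is an assumption, not taken from the paper) is \(d_n(x,y)=\min(1,\,n|x-y|)\) on \(\mathbb{R}\): the sequence is increasing in \(n\) and converges to \(1\) for every pair \(x\ne y\).

```python
# Hypothetical sketch: d_n(x, y) = min(1, n * |x - y|) is an increasing
# sequence of pseudo-metrics with d_n(x, y) -> 1 whenever x != y,
# i.e. a totally separating system in the sense defined above.

def d(n, x, y):
    return min(1.0, n * abs(x - y))

x, y = 0.0, 1e-6                     # two distinct but very close points
values = [d(n, x, y) for n in (1, 10, 10**6, 10**9)]
print(values)                        # increases and saturates at 1
```

Even for points at distance \(10^{-6}\), the sequence eventually reaches \(1\), which is exactly the separation property used in the asymptotic strong Feller analysis.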
Let \(d\) be a pseudo-metric on \({{ \mathcal X }}\). We denote by \(L({{ \mathcal X }},d)\) the space of \(d\)-Lipschitz functions from \({{ \mathcal X }}\) into \({\mathbb {R}}\). That is, the function \(\phi :{{ \mathcal X }}\rightarrow {\mathbb {R}}\) is an element of \(L({{ \mathcal X }},d)\) if
$$\begin{aligned} \Vert \phi \Vert _d := {\mathop {\mathop {\sup }\limits _{z,y\in {{ \mathcal X }}}}\limits _{z\not = y }} { |\phi (z)-\phi (y)|\over d(z,y)}<\infty . \end{aligned}$$For a pseudo-metric \(d\) on \({{ \mathcal X }}\) we define the distance between two probability measures \({{\mathcal P }}_1\) and \({{\mathcal P }}_2\) with respect to \(d\) by
$$\begin{aligned} \Vert {{\mathcal P }}_1-{{\mathcal P }}_2\Vert _{d} := {\mathop {\mathop {\sup }\limits _{\phi \in L({{ \mathcal X }},d)}}\limits _{\Vert \phi \Vert _d=1}} \int _{{ \mathcal X }}\phi (x)\, ({{\mathcal P }}_1-{{\mathcal P }}_2)(dx). \end{aligned}$$\(\Delta L_t=L_t-L_{t^-}\), where \(L_{t^-}=\lim _{s<t,s\rightarrow t} L_s\).
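A toy finite-dimensional illustration of this dual norm (our own sketch; the two measures \(p_1,p_2\) on a three-point space are assumed for the example): when \(d\) is the discrete metric \(d(x,y)=\mathbf{1}_{x\ne y}\), the supremum over \(d\)-Lipschitz test functions reduces to the total variation distance, which a brute-force search recovers numerically.

```python
from itertools import product

# Hypothetical finite example: on a 3-point space with the discrete metric,
# ||P1 - P2||_d = sup over phi with oscillation <= 1 of int phi d(P1 - P2),
# which equals the total variation distance sup_A |P1(A) - P2(A)|.

p1 = [0.5, 0.3, 0.2]
p2 = [0.2, 0.2, 0.6]

# Brute-force the supremum over test functions phi on a grid in [0, 1]^3.
grid = [k / 20.0 for k in range(21)]
best = max(sum(phi_x * (a - b) for phi_x, a, b in zip(phi, p1, p2))
           for phi in product(grid, repeat=3)
           if max(phi) - min(phi) <= 1.0)      # d-Lipschitz constraint

# Total variation distance: maximize |P1(A) - P2(A)| over all subsets A.
tv = max(abs(sum(a - b for a, b, keep in zip(p1, p2, mask) if keep))
         for mask in product([False, True], repeat=3))

print(best, tv)  # the two values agree
```

The optimal test function is the indicator of the set \(\{p_1>p_2\}\), which is exactly the optimizer in the total variation characterization.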
Here \(f\sim g\) means that there exist two positive constants \(c_1, c_2\) such that \(c_1 f(x) \le g(x) \le c_2 f(x)\) for all \(x\) in the domain of definition \(D(f)=D(g)\) of \(f\) and \(g\).
References
Applebaum, D.: Lévy Processes and Stochastic Calculus, Volume 93 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge (2004)
Applebaum, D.: Martingale-valued measures, Ornstein–Uhlenbeck processes with jumps and operator self-decomposability in Hilbert space. In: Emery, M., Yor, M. (eds.) Memoriam Paul-André Meyer, Séminaire de Probabilités 39, Lecture Notes in Mathematics, pp. 173–198. Springer, Berlin (2006)
Applebaum, D.: On the infinitesimal generators of Ornstein–Uhlenbeck processes with jumps in Hilbert space. Potential Anal. 26, 79–100 (2007)
Bensoussan, A., Da Prato, G., Delfour, M., Mitter, S.: Representation and Control of Infinite Dimensional Systems. Systems & Control: Foundations & Applications, 2nd edn. Birkhäuser Boston Inc., Boston (2007)
Bichteler, K., Gravereaux, J.-B., Jacod, J.: Malliavin Calculus for Processes with Jumps, Volume 2 of Stochastics Monographs. Gordon and Breach Science Publishers, New York (1987)
Brzeźniak, Z., Hausenblas, E.: Maximal regularity for stochastic convolutions driven by Lévy processes. Probab. Theory Relat. Fields 145(3–4), 615–637 (2009)
Brzeźniak, Z., Peszat, S.: Hyperbolic equations with random boundary conditions. In: Recent Development in Stochastic Dynamics and Stochastic Analysis, Interdisciplinary Mathematical Sciences, vol. 8, p. 121. World Scientific Publishing, Hackensack (2010)
Chojnowska-Michalik, A.: Stationary distributions for \(\infty \)-dimensional linear equations with general noise. In: Stochastic differential systems (Marseille–Luminy, 1984), Volume 69 of Lecture Notes in Control and Information Sciences, pp. 14–24. Springer, Berlin (1985)
Chojnowska-Michalik, A.: On processes of Ornstein–Uhlenbeck type in Hilbert space. Stochastics 21(3), 251–286 (1987)
Coron, J.-M.: Control and Nonlinearity. Volume 136 of Mathematical Surveys and Monographs. American Mathematical Society, Providence (2007)
Da Prato, G.: An Introduction to Infinite-Dimensional Analysis. Universitext, Springer-Verlag, Berlin (2006)
Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Volume 44 of Encyclopedia of Mathematics and Its Applications, vol. 44. Cambridge University Press, Cambridge (1992)
Da Prato, G., Zabczyk, J.: Evolution equations with white-noise boundary conditions. Stoch. Stoch. Rep. 42(3–4), 167–182 (1993)
Da Prato, G., Zabczyk, J.: Ergodicity of Infinite-Dimensional Systems. Cambridge University Press, Cambridge (1997)
Ethier, S., Kurtz, T.: Markov Processes. Characterization and Convergence. Wiley Series in Probability and Mathematical Statistics: Probability and Mathematical Statistics. Wiley, New York (1986)
Fournier, N.: Malliavin calculus for parabolic SPDEs with jumps. Stoch. Process. Appl. 87, 115–147 (2000)
Hairer, M., Mattingly, J.: Ergodicity of the 2D Navier–Stokes equation with degenerate stochastic forcing. Ann. Math. 164, 993–1032 (2006)
Hausenblas, E.: Absolute continuity of a law of an Itô process driven by a Lévy process to another Itô process. Int. J. Pure Appl. Math. 68, 387–401 (2011)
Hausenblas, E.: Existence, uniqueness and regularity of parabolic SPDEs driven by Poisson random measure. Electron. J. Probab. 10, 1496–1546 (2005)
Kapica, R., Szarek, T., Śleczka, M.: On a unique ergodicity of some Markov processes. Potential Anal. 36, 589–606 (2012)
Laroche, B., Philippe, M., Rouchon, P.: Motion planning for the heat equation. Int. J. Robust Nonlinear Control 10(8), 629–643 (2000)
Maslowski, B.: Stability of semilinear equations with boundary and pointwise noise. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 22(01), 55–93 (1995)
Maslowski, B., Seidler, J.: Probabilistic approach to the strong Feller property. Probab. Theory Relat. Fields 118(2), 187–210 (2000)
Pandolfi, L., Priola, E., Zabczyk, J.: Linear operator inequality and null controllability with vanishing energy for unbounded control systems. SIAM J. Control Optim. 51(1), 629–659 (2013)
Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equations, Volume 44 of Applied Mathematical Sciences. Springer, New York (1983)
Peszat, S., Zabczyk, J.: Stochastic Partial Differential Equations with Lévy Noise, Volume 113 of Encyclopedia of Mathematics and Its Applications. Cambridge University Press, Cambridge (2007)
Priola, E., Zabczyk, J.: Null controllability with vanishing energy. SIAM J. Control Optim. 42, 1013–1032 (2003)
Priola, E., Zabczyk, J.: Ornstein–Uhlenbeck processes with jumps. Bull. Lond. Math. Soc. 41, 41–50 (2009)
Priola, E., Zabczyk, J.: Structural properties of semilinear SPDEs driven by cylindrical stable processes. Probab. Theory Relat. Fields 149, 97–137 (2011)
Priola, E., Shirikyan, A., Xu, L., Zabczyk, J.: Exponential ergodicity and regularity for equations with Lévy noise. Stoch. Process. Appl. 122, 106–133 (2012)
Priola, E., Xu, L., Zabczyk, J.: Exponential mixing for some SPDEs with Lévy noise. Stoch. Dyn. 11, 521–534 (2011)
Sato, K.I.: Lévy Processes and Infinitely Divisible Distributions. Cambridge Studies in Advanced Mathematics, vol. 68. Cambridge University Press, Cambridge (1999)
Tucsnak, M., Weiss, G.: Observation and Control for Operator Semigroups. Birkhäuser Advanced Texts. Birkhäuser Verlag, Basel (2009)
Weinan, E., Mattingly, J., Sinai, Y.: Gibbsian dynamics and ergodicity for the stochastically forced Navier–Stokes equation. Commun. Math. Phys. 224, 83–106 (2001)
Zuazua, E.: Exact boundary controllability for the semilinear wave equation. In: Nonlinear Partial Differential Equations and Their Applications, Collège de France Seminar, vol. X, Pitman Research Notes in Mathematics Series, vol. 220, pp. 357–391. Longman Scientific & Technical, Harlow (1991)
Acknowledgments
The authors gratefully acknowledge the careful reading of the manuscript by the reviewers; their comments and suggestions have greatly improved the paper. The second author gratefully acknowledges the financial support of the Austrian Science Fund (FWF): project M1487 (Lise Meitner Programme).
Appendices
Appendix A: Technical Preliminaries
Let \(\lambda \) be the Lebesgue measure on \({\mathbb {R}}\) and let \(c:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be given by (17). Let \(r_0\) be as in Hypothesis 1. As on page 8, let \(R\ge r_0\) and let \(g_R\) be as in (22).
In this section we will show that one can find a transformation \(\theta ^{(R)}:[0,T]\times {\mathbb {R}}\backslash \{0\}\rightarrow {\mathbb {R}}\) such that for a given mapping \(V: [0,T]\rightarrow {\mathbb {R}}\) we have
For simplicity, we will first reformulate this problem in the following form: for any \(R\ge 10\) with \(R\ge r_0\), find a function \(\vartheta ^R:{\mathbb {R}}\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) such that
Setting
It is straightforward to calculate that \(\theta \) satisfies (72).
To find such a transformation, we slightly perturb the jumps. By the symmetry of the Lévy measure, it suffices in a first step to consider only positive jumps and only positive values of \(v\). In a second step we construct a transformation which works for perturbations \(v\in {{ {\mathbb {R}}}}\) and for jumps of both signs.
Now, let \(\tilde{\delta }\in (0,\frac{2}{\alpha }(\alpha -1))\) and put
and
Let also \(\beta _2>-1\) be an arbitrary number and \(\gamma _2=2\). Note that \(-\beta _1-1=-\tilde{\delta }\) and because \(\alpha >1\) we also have \(1-\gamma _1(\beta _1+1)\le 0\).
Now define a function \(\vartheta ^{+,R} \) by
where \(\varsigma ^{(R)} :{\mathbb {R}}^ +\times {\mathbb {R}}^ + \rightarrow {\mathbb {R}}^ +\) is defined by
Here, the constant \(C>0\) has to be chosen in such a way that \({\mathbb {R}}_0^ +\ni K\mapsto \vartheta ^{+,R} (K)\) is a continuous function. From the definition of \(\varsigma ^{(R)}\) we see in particular that for \(K\ge 1\) and \(z \in \left( K^{\gamma _1}R, \frac{4}{3} K^{\gamma _1}R)\cup (\frac{8}{3} K^{\gamma _1}R, \frac{10}{3} K^{\gamma _1}R\right) \), \(\varsigma ^{(R)}_z(K,z)\le K z^{-\beta _1-1}\) and for \(K< 1\) and \(z \in \left( R, \frac{4}{3} R)\cup (\frac{4}{3} (1+K^{\gamma _2}) R,\frac{5}{3} (1+K^{\gamma _2}) R\right) \), \(\varsigma ^{(R)}_z(K,z)\le C z^{-\beta _2-1}\), where \(\varsigma ^{(R)}_z\) is the partial derivative of \(\varsigma ^{(R)}\) with respect to \(z\).
Lemma 6.1
Under the Hypothesis 1 the function \(\vartheta ^{+,R} :{{ {\mathbb {R}}}}_0^+\rightarrow {{ {\mathbb {R}}}}_0^+\) is invertible.
Proof
We start by verifying the following properties:
-
(1)
\(\vartheta ^{+,R} (K)\in {\mathbb {R}}^+_0\);
-
(2)
the function \({\mathbb {R}}^+_0\ni K\mapsto \vartheta ^{+,R}(K) \in {\mathbb {R}}^+ _0\) is continuous.
-
(3)
the function \({\mathbb {R}}^+\ni K\mapsto \vartheta ^{+,R}(K) \in {\mathbb {R}}^+_0 \) is injective.
-
(4)
the function \({\mathbb {R}}^+\ni K\mapsto \vartheta ^{+,R}(K) \in {\mathbb {R}}^+ _0\) is surjective.
It will follow from Items (2)–(4) that the function \(\vartheta ^{+,R}\) is invertible.
Item (1) is clear by the definition of \(c\). In order to show Items (2) and (3) we take into account that the function \({\mathbb {R}}^+_0 \ni K\mapsto \vartheta ^{+,R} (K) \in {\mathbb {R}}^ +_0\) is strictly increasing and continuous.
Since \(\vartheta ^{+,R} (0)=0\) and \(\vartheta ^{+,R} \) is continuous on \({\mathbb {R}}^+_0\), item (4) will follow if we can show that \(\lim _{K\rightarrow \infty } \vartheta ^{+,R}(K) =\infty \). For this purpose let us first recall that for \(z>0\) we have \(c(z)=U^{-1}(z)\), where \(U:{\mathbb {R}}^+\rightarrow {\mathbb {R}}_0^+\) denotes the tail integral given by
From Hypothesis 1 it follows that there exist constants \(C_1>0\) and \(C_2>0\) such that
In fact, since by definition \(U(\cdot )=\nu (\cdot ,\infty )\), and by Hypothesis 1, there exists \(K_2>0\) such that for all \(z\in (0,r_0)\) we have
Thus,
which implies that
Hence,
If \(\alpha K_2r_0^ {-\alpha } \le \nu (r_0,\infty ) \), then (75) follows. If \(\alpha K_2r_0^ {-\alpha } > \nu (r_0,\infty ) \), then one can show by using elementary calculations that for any \(c>0\) and \(K\in (0,1)\)
Thus (75) is also valid.
By similar arguments one can show that there exist constants \(\tilde{C}_1>0\) and \(\tilde{C}_2>0\) such that
In fact, again by Hypothesis 1, there exists a constant \(\tilde{K}_2>0\) such that for all \(\tilde{z}\in (0,r_0)\) we have
Thus, by the same calculations as before we can infer that
If \(\alpha K_2r_0^ {-\alpha } \ge \nu (r_0,\infty ) \), then the assertion follows. If \(\alpha K_2r_0^ {-\alpha } < \nu (r_0,\infty ) \), then direct calculations again give that for any \(c>0\) and \(K>1 \) we have
Hence, (76) follows.
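The mechanism behind the two-sided estimates (75) and (76) can be sketched numerically (our own illustration; the exactly stable tail and the values of \(K\), \(\alpha\) below are assumptions): when the tail integral is exactly \(U(z)=Kz^{-\alpha}\), its inverse \(c=U^{-1}\) is explicit, of order \(v^{-1/\alpha}\), and \(U(c(v))=v\).

```python
# Hypothetical sketch for the exactly alpha-stable case: with tail
# integral U(z) = K * z**(-alpha) for z > 0, the inverse c = U^{-1}
# is c(v) = (K / v)**(1/alpha), which is sandwiched between
# C1 * v**(-1/alpha) and C2 * v**(-1/alpha) as in (75)-(76).

K, alpha = 2.0, 1.5          # illustrative choices, not from the paper

def U(z):
    """Tail integral U(z) = nu(z, infinity) for the stable-like model."""
    return K * z ** (-alpha)

def c(v):
    """Explicit inverse of U on (0, infinity)."""
    return (K / v) ** (1.0 / alpha)

for v in (0.1, 1.0, 10.0):
    print(v, c(v), U(c(v)))  # U(c(v)) recovers v
```

Under Hypothesis 1 the tail is only comparable to a stable one, so \(c\) is not explicit, but the same \(v^{-1/\alpha}\) growth and decay rates survive, which is exactly what (75) and (76) record.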
Next, for \(z>0\) the mapping \(U(\cdot )=\nu (\cdot , \infty )\) is invertible and its inverse \(U^{-1}(\cdot )\) coincides with \(c(\cdot )\) on \((0,\infty )\). Therefore, there exists a constant \(C>0\) such that \(c\) is differentiable for all \(z\ge C\) and
for any \(z\ge C\). We now derive a lower estimate for \(c^ \prime \). To this end, observe that Hypothesis 1 implies that there exist two constants \(C_3=\frac{1}{K_1}>0\) and \(C_4=K_2>0\) such that
In fact, since \(U(y)=\nu (y,\infty )\) we know that \(U'(y)=k(y)\). By Hypothesis 1 we have
Hence
which implies (77). Now, let \(z\ge U(r_0)\). Then \(U^ {-1}(z)\le r_0\) and
Estimate (75) gives that there exist two constants \(C_5>0\) and \(C_6>0\) such that
Now we can complete the proof of Item (4). For simplicity, we put \(r_1=\frac{4}{3} R\) for the rest of the proof of (4). For any \(K>1\) we have
For \(\tilde{\gamma _1}= \gamma _1(1+\beta _1)\) we can derive from the estimate (78) that
Since \({ {\beta _1} (1-\alpha ) \over ({\beta _1}+1)\alpha }<{\frac{1}{\alpha }+1}\), there exists a constant \(C=C(\beta _1,\gamma _1,\alpha )>0\), such that
Integrating gives
where, for simplicity, we have put
It follows from (79) that
which concludes the proof of Item (4). From Items (1)–(4) it follows that the function \(\vartheta ^{+,R} :{\mathbb {R}}^+\rightarrow {\mathbb {R}}^+\) defined by (73) is invertible. \(\square \)
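The proof above is constructive only up to monotonicity, so in practice \(\kappa^{+,R}=(\vartheta^{+,R})^{-1}\) would be evaluated numerically. A minimal sketch (our own; the concrete increasing function below is a stand-in, not the paper's \(\vartheta^{+,R}\)): any strictly increasing continuous function with \(\theta(0)=0\) and \(\theta(K)\to\infty\), i.e. exactly the properties (1)–(4), can be inverted by bisection.

```python
# Hypothetical numerical counterpart of Lemma 6.1: invert a strictly
# increasing continuous function theta with theta(0) = 0 and
# lim_{K -> inf} theta(K) = inf by bisection.

def invert(theta, v, hi=1.0, tol=1e-12):
    """Solve theta(K) = v for K >= 0."""
    while theta(hi) < v:          # bracket the root; surjectivity
        hi *= 2.0                 # guarantees this loop terminates
    lo = 0.0
    while hi - lo > tol:          # bisect; monotonicity gives uniqueness
        mid = 0.5 * (lo + hi)
        if theta(mid) < v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

theta = lambda K: K ** 3 + K      # stand-in: strictly increasing, theta(0) = 0
K = invert(theta, v=10.0)
print(K, theta(K))                # theta(K) is (numerically) 10
```

Monotonicity is what makes the bracket-and-bisect scheme valid, which is precisely the role of Items (2)–(4) in the lemma.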
We also state the following remark, which is crucial for our analysis.
Remark 6.2
For any \(R\ge r_0\) and \(0<K\le 1\) we have
Proof of Estimate (81)
As above we set \(r_1=\frac{4}{3}R\). For any \(0<K<1\) we have
which implies the existence of a positive constant \(C(r_1, \alpha , \gamma _2,\beta _2)\) such that
By change of variables we get that
Since \({r_1}^{1+\beta _2}\le u\le r_1^{1+\beta _2}(1+K^{\gamma _2})^{(1+\beta _2)}\), we get
The Taylor expansion gives
By the choice of \(\beta _2\) and \(\gamma _2\), we have
This proves Estimate (81). \(\square \)
Let \(\kappa ^{+,R}\) be the inverse of \(\vartheta ^{+,R}\), i.e. \(\kappa ^{+,R}(z)=(\vartheta ^{+,R})^ {-1}(z)\), \(z\in {\mathbb {R}}_0^+\). Taking into account the negative jumps and negative values of \(v\), we will define the following transformation.
Corollary 6.3
Assume that Hypothesis 1 holds and let \(r_0\) be as in Hypothesis 1. Then, for any \(R\ge r_0\) the transformation
defined by
satisfies
Proof
Taking into account the symmetry of the Lévy measure and the definition of \(\kappa ^{+,R}\), the proof follows from Lemma 6.1 and direct calculations. \(\square \)
Corollary 6.4
Let \(R\) be as in Corollary 6.3 and define the function \(\rho ^{(R)}:{\mathbb {R}}\times {\mathbb {R}}\setminus \{0\}\rightarrow {\mathbb {R}}\) by
Let us denote the derivative in the direction of the second variable by \(\rho ^{(R)}_z\). Then
-
(1)
there exists a constant \( C(R)>0\) such that
$$\begin{aligned} \int _{{\mathbb {R}}\setminus \{0\}}|\rho ^{(R)}_z(x,z)|\, dz \le C(R)\,| x |^{2}, \quad \forall x\in {\mathbb {R}}; \end{aligned}$$ -
(2)
there exists \(\tilde{R}>0\) such that for all \(R>\tilde{R}\),
$$\begin{aligned} |\rho ^{(R)}_z(x,z)|\le \frac{1}{2}, \forall \, x\in {\mathbb {R}},\, \forall z\in {\mathbb {R}}\setminus \{0\}. \end{aligned}$$
Proof
For the sake of simplicity, we set \(r_1=\frac{4}{3} R\) throughout this proof.
By the symmetry assumption on the Lévy measure, it is enough to prove Item (1) for \(v>0\). Let \(v_0:= \inf \{v\ge 0: \kappa ^{+,R}(v)\ge 1\}\). First, let us assume that \(v\ge v_0\). Then \(\kappa ^{+,R}(v) \ge 1\) and, by (79), \(\kappa ^{+,R}(v)\sim v ^\frac{1}{\Gamma }\), where \(\Gamma \) is defined in (80). By the definition of \(\varsigma ^{(R)}\) we have that
for any \(K>1\). Setting \(K=v ^\frac{1}{\Gamma }\), we get by the choice of \(\beta _1\) and \(\gamma _1\) that
In the case \(0\le v\le v_0\) we have \(\kappa ^{+,R}(v) < 1\). Thus, putting \(K=\kappa ^{+,R}(v)\), we have
Since, by (81), \(K\sim v \), and thanks to the choice of \(\gamma _2\), we obtain
From this we conclude the proof of Item (1).
As in the proof of (1), it is enough to prove that Item (2) is valid for any \(v>0\). The general case follows from the symmetry in Hypothesis 1. Let us assume first \(v\ge v_0\), where again \(v_0:= \inf \{v\ge 0: \kappa ^{+,R}(v)\ge 1\}\). Then \(\varsigma ^{(R)}(z)=0\) for \(z\le \frac{4}{3} K R\), where \(K=\kappa ^{+,R}(v)\). In particular, by the definition of \(v_0\), \(\varsigma ^{(R)}(z)=0\) for \(z\le \frac{4}{3} R\). We know that \(\varsigma ^{(R)}_z(K,z)\le K z^{-\beta _1-1}\) for \(K\ge 1\) and \(z \in \left( K^{\gamma _1}R, \frac{4}{3} K^{\gamma _1}R)\cup (\frac{8}{3} K^{\gamma _1}R, \frac{10}{3} K^{\gamma _1}R\right) \), thus \(\varsigma ^{(R)}_z(v,z)\le \frac{1}{2}\) for any \( R> 2^\frac{1}{\tilde{\delta }}\). Next, let us assume that \(0\le v\le v_0\), i.e. \(K=\kappa ^{+,R}(v) \le 1\). Then \(\varsigma ^{(R)}_z(K, z)\le C z^ {-\beta _2-1}\) for \(z \in \left( R, \frac{4}{3} R)\cup (\frac{4}{3} (1+K^{\gamma _2}) R,\frac{5}{3} (1+K^{\gamma _2}) R\right) \). Therefore, \(\varsigma ^{(R)}_z(K,z)\le \frac{1}{2}\) for any \(R> [2C]^{\frac{1}{1+\beta _2}}.\) Choosing \( \tilde{R} =\max \left( 2^\frac{1}{\tilde{\delta }}, [2C]^\frac{1}{\beta _2+1}\right) , \) we easily see that for \(R>\tilde{R}\vee r_0\) we have \(\varsigma ^{(R)}_z(v,z)\le \frac{1}{2}\) for all \(K\) and \(z\). \(\square \)
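The significance of the bound \(|\rho^{(R)}_z|\le\frac{1}{2}\) in Item (2) can be seen in a one-line calculus fact (our own illustration; the concrete \(\rho\) below is a stand-in, not the paper's \(\rho^{(R)}\)): if \(|\rho'(z)|\le\frac12\), then \(z\mapsto z+\rho(z)\) has derivative at least \(\frac12>0\), so the perturbed jump map is strictly increasing and hence injective.

```python
import math

# Hypothetical sketch: a perturbation rho with |rho'| <= 1/2 keeps the
# jump transformation z -> z + rho(z) strictly increasing (derivative
# 1 + rho'(z) >= 1/2), hence invertible.

rho = lambda z: 0.4 * math.sin(z)        # |rho'(z)| = 0.4 * |cos z| <= 1/2
theta = lambda z: z + rho(z)             # perturbed jump map

zs = [0.01 * k for k in range(-500, 501)]
vals = [theta(z) for z in zs]
increasing = all(a < b for a, b in zip(vals, vals[1:]))
print(increasing)                        # strictly increasing on the grid
```

This is the reason the corollary insists on \(R>\tilde R\): it forces the perturbation of each jump to be a small, invertible deformation of the identity.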
Appendix B: Change of measure formula
Let \(\mu \) be a Poisson random measure over \(\bar{{\mathfrak {A}}}=(\bar{\Omega },\bar{{\mathbb {P}}},(\bar{{{\mathcal F }}}_t)_{t\ge 0},\bar{{{\mathcal F }}})\) with compensator \(\gamma \) defined by \(\gamma (U\times I)=\lambda (U) \lambda (I)\) for any \(U\times I\in \mathcal {B}(\mathbb {R})\times \mathcal {B}([0,\infty ))\). Let \(c:{{ {\mathbb {R}}}}\rightarrow {{ {\mathbb {R}}}}\) be the transformation defined by (49).
Let \(g:\bar{\Omega }\times [0,\infty )\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) be a predictable process with \(g\in L^ 2([0,\infty )\times {\mathbb {R}};{\mathbb {R}})\) and \(\psi \) be a mapping defined by
Combining Corollary 6.3 with Example 1.9 of [18], one can verify the following lemma.
Lemma 6.5
There exists a probability measure \({\mathbb {Q}}^ \psi \) on \({\mathfrak {A}}\) such that the Poisson random measure \(\mu _\psi \) defined by
has compensator \(\gamma \). For \(t\ge 0\) let \({\mathbb {Q}}^\psi _t\), respectively \(\bar{{\mathbb {P}}}_t\), be the projection of \({\mathbb {Q}}^\psi \) (resp. \(\bar{{\mathbb {P}}}\)) onto \(\bar{{{\mathcal F }}}_t\). Then the density process given by
satisfies
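A one-dimensional analogue of the density process in Lemma 6.5 (our own hedged sketch, not the paper's formula): changing the intensity of a scalar Poisson process from \(\lambda\) to \(\tilde\lambda\) uses the Girsanov density \(G_t=e^{(\lambda-\tilde\lambda)t}(\tilde\lambda/\lambda)^{N_t}\), and a density process must have mean one under the original measure. The check below sums \(G_t\) against the Poisson law exactly.

```python
from math import exp

# Hypothetical scalar analogue of Lemma 6.5: verify E^P[G_t] = 1 for the
# intensity-change density G_t = exp((lam - lam2) t) * (lam2 / lam)**N_t,
# where N_t ~ Poisson(lam * t) under P.  lam, lam2, t are assumed values.

lam, lam2, t = 3.0, 1.5, 2.0

# Accumulate sum_n G_t(n) * P(N_t = n) with an iterative term update
# (avoids factorial overflow): term_n = exp(-lam2 t) * (lam2 t)**n / n!.
total, term = 0.0, exp((lam - lam2) * t) * exp(-lam * t)  # n = 0 term
for n in range(200):
    total += term
    term *= lam2 * t / (n + 1)    # ratio of consecutive terms
print(total)
```

The sum telescopes to \(e^{-\tilde\lambda t}e^{\tilde\lambda t}=1\), which is the scalar shadow of the martingale property of the density process \({\mathcal G}\) in the lemma.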
Remark 6.6
Let \(v:\bar{\Omega }\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) be a predictable process and let \(\theta ^{(R)}(t,z):= \psi (t,z)+ \rho ^{(R)}(v(t),z)=z+\rho ^{(R)}(v(t),z)\), where \(\rho ^{(R)}\) is defined in Corollary 6.4. Then, under \({\mathbb {Q}}^ \psi \) the Poisson random measure \(\mu _\psi =\mu _\theta \) defined by (85) has compensator \(\lambda \times \lambda \).
Remark 6.7
If there exists \(\delta >0\) such that \(g_z(t,z )\ge \delta -1\) for all \(z\in {\mathbb {R}}\), then \({{\mathcal G }}\) is invertible and its inverse \({\mathcal {H}}={{\mathcal G }}^ {-1}\) solves the following stochastic differential equation
Proof
The proof is done via the Laplace transform. Let \(\xi =\{\xi (t):0\le t<\infty \}\) be given by
Then under \({{\mathbb {Q}}^ \psi }\) the Laplace transform is given by
Rewriting \(\xi \) gives
Let \(M _\lambda =\{ M_\lambda (t):0\le t<\infty \}\) be given by \(M_\lambda (t) = e ^{-\lambda \xi (t)}\), \(0\le t<\infty \). Now we will show that \(\mathbb {E}^{\bar{{\mathbb {P}}}} M _\lambda (t){{\mathcal G }}(t) = \mathbb {E}^{{\mathbb {Q}}^ \psi } e ^{-\lambda \xi (t)}\). First, \(M _\lambda (t)\) solves
Therefore, \({\mathcal {Z}}_\lambda (t) = M _\lambda (t) \, {{\mathcal G }}(t)\) is given by
Substitution gives
Since
from which the assertion follows. \(\square \)
Cite this article
Hausenblas, E., Razafimandimby, P.A. Controllability and qualitative properties of the solutions to SPDEs driven by boundary Lévy noise. Stoch PDE: Anal Comp 3, 221–271 (2015). https://doi.org/10.1007/s40072-015-0047-9