Controllability and qualitative properties of the solutions to SPDEs driven by boundary Lévy noise

Published in: Stochastic Partial Differential Equations: Analysis and Computations

Abstract

In the present paper we are interested in the qualitative properties of the Markovian semigroups \({{\mathcal P }}=({{\mathcal P }}_t)_{t\ge 0}\) associated to the solutions of certain stochastic partial differential equations (SPDEs) with boundary noise. We assume that these problems can be written as an abstract stochastic PDE on a Hilbert space \(H\) taking the following form:

$$\begin{aligned} \left\{ \begin{array}{rcl} du(t,x)&{} = &{} A u(t,x)\, dt + B\;\sigma (u(t,x)) \, dL(t),\quad t>0;\\ u(0,x)&{} =&{}x\in H.\end{array}\right. \end{aligned}$$
(1)

Here \(L\) is a real-valued Lévy process, \(A:D(A)\subset H\rightarrow H\) is the infinitesimal generator of a strongly continuous semigroup, \(\sigma :H\rightarrow {\mathbb {R}}\) is a Lipschitz continuous map bounded from below and above, and \(B:{\mathbb {R}}\rightarrow H\) is a possibly unbounded operator. As typical examples of such stochastic evolution equations we mainly treat the damped wave equation and the heat equation, both driven by boundary Lévy noise. In this article, we first show that, if the system

$$\begin{aligned} \left\{ \begin{array}{rcl} du(t,x)&{} = &{}A u(t,x)\, dt + B\;v(t)\, dt,\quad t>0;\\ u(0,x)&{} =&{}x\in H\end{array}\right. \end{aligned}$$
(2)

is approximately controllable at time \(T>0\) with control \(v\), then, under some additional conditions on \(B\), \(A\) and \(L\), the probability measure on \(H\) induced by \(u(t,x)\) at a given time \(t>0\), \(x\in H\), is positive on open subsets of \(H\). Secondly, we investigate under which conditions on the Lévy process \(L\) and on the operators \(A\) and \(B\) the solution to Eq. (1) is asymptotically strong Feller. It follows from our results that the wave equation with boundary Lévy noise has at most one non-degenerate invariant measure.


Notes

  1. If \(\nu (B)\lambda (I) = \infty \), then obviously \(\eta (B\times I)=\infty \) a.s.

  2. A Lévy measure on \({{ {\mathbb {R}}}}\) is a \(\sigma \)-finite measure such that \(\nu (\{0\})=0\) and \(\int _{{ {\mathbb {R}}}}(|z|^2 \wedge 1) \nu (dz)<\infty \).
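The integrability condition in this definition can be checked numerically in the model case of a symmetric \(\alpha \)-stable Lévy measure \(\nu (dz)=|z|^{-1-\alpha }\,dz\). The following Python sketch (the choice \(\alpha =3/2\) and the truncation levels are illustrative, not from the paper) compares a log-grid quadrature of \(\int _{{\mathbb {R}}}(|z|^2\wedge 1)\,\nu (dz)\) with the closed-form value \(2\left( \frac{1}{2-\alpha }+\frac{1}{\alpha }\right) \):

```python
import numpy as np

alpha = 1.5  # illustrative stability index in (0, 2)

def levy_density(z):
    # density of the symmetric alpha-stable Levy measure nu(dz) = |z|^{-1-alpha} dz
    return np.abs(z) ** (-1.0 - alpha)

def int_z2_wedge_1(n=200_001):
    # quadrature of int_R (|z|^2 /\ 1) nu(dz) on a logarithmic grid, using
    # symmetry to reduce to (0, infinity); truncation at 1e-10 and 1e8 only
    # introduces a tiny error since the integrand is integrable at both ends
    grid = np.logspace(-10.0, 8.0, n)
    mids = np.sqrt(grid[:-1] * grid[1:])          # geometric midpoints
    vals = np.minimum(mids ** 2, 1.0) * levy_density(mids)
    return 2.0 * np.sum(vals * np.diff(grid))

exact = 2.0 * (1.0 / (2.0 - alpha) + 1.0 / alpha)  # = 16/3 for alpha = 3/2
```

Note that \(\nu \) itself has infinite total mass near the origin (infinite activity); only the cut-off \(|z|^2\wedge 1\) makes the integral finite.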

  3. A process \(u\) is \(H\)-valued, iff for all \(t\ge 0\), \(u(t)\) is an \(H\)-valued random variable.

  4. Note that \(x\in supp(\varphi )\) iff for all \(\delta >0\), \(\varphi ( {{\mathcal D }}_H(x,\delta ))>0\).

  5. An increasing sequence \(\{ d_n:n\in {\mathbb {N}}\}\) of pseudo-metrics is called a totally separating system of pseudo-metrics for \({{ \mathcal X }}\) if \(\lim _{n\rightarrow \infty }d_n(z,y) =1\) for all \(z,y\in {{ \mathcal X }}\), \(z\not = y\).

  6. Let \(d\) be a pseudo-metric on \({{ \mathcal X }}\); we denote by \(L({{ \mathcal X }},d)\) the space of \(d\)-Lipschitz functions from \({{ \mathcal X }}\) into \({\mathbb {R}}\). That is, the function \(\phi :{{ \mathcal X }}\rightarrow {\mathbb {R}}\) is an element of \(L({{ \mathcal X }},d)\) if

    $$\begin{aligned} \Vert \phi \Vert _d := {\mathop {\mathop {\sup }\limits _{z,y\in {{ \mathcal X }}}}\limits _{z\not = y }} { |\phi (z)-\phi (y)|\over d(z,y)}<\infty . \end{aligned}$$

    For a pseudo-metric \(d\) on \({{ \mathcal X }}\) we define the distance between two probability measures \({{\mathcal P }}_1\) and \({{\mathcal P }}_2\) with respect to \(d\) by

    $$\begin{aligned} \Vert {{\mathcal P }}_1-{{\mathcal P }}_2\Vert _{d} := {\mathop {\mathop {\sup }\limits _{\phi \in L({{ \mathcal X }},d)}}\limits _{\Vert \phi \Vert _d=1}} \int _{{ \mathcal X }}\phi (x)\, ({{\mathcal P }}_1-{{\mathcal P }}_2)(dx). \end{aligned}$$
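In the simplest special case, the discrete metric \(d(z,y)={\mathbf 1}_{z\ne y}\) on a finite set, this distance reduces to the total variation distance, and the supremum over \(\phi \) with \(\Vert \phi \Vert _d\le 1\) is attained at a \(\{0,1\}\)-valued \(\phi \) (a vertex of the underlying linear program). A small Python sketch with illustrative probability vectors:

```python
import itertools
import numpy as np

# two illustrative probability vectors on a four-point space
p = np.array([0.1, 0.4, 0.2, 0.3])
q = np.array([0.3, 0.1, 0.5, 0.1])

def dist_discrete_metric(p, q):
    # For d(x, y) = 1_{x != y}, ||phi||_d <= 1 means the oscillation of phi is
    # at most 1; the supremum of int phi d(P1 - P2) is attained at a {0,1}-valued
    # phi, so brute force over binary phi is exact on a small finite space.
    best = -np.inf
    for phi in itertools.product((0.0, 1.0), repeat=len(p)):
        best = max(best, float(np.dot(phi, p - q)))
    return best
```

which recovers the classical identity \(\Vert {{\mathcal P }}_1-{{\mathcal P }}_2\Vert _{d}=\frac{1}{2}\sum _i|p_i-q_i|\) for the discrete metric.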
  7. \(\Delta L_t=L_t-L_{t^-}\), where \(L_{t^-}=\lim _{s<t,\,s\rightarrow t} L_s\).

  8. Here \(f\sim g\) means that there exist two constants \(c_1,c_2>0\) such that \(c_1 f(x) \le g(x) \le c_2 f(x)\) for all \(x\) in the common domain of definition \(D(f)=D(g)\) of \(f\) and \(g\).

References

  1. Applebaum, D.: Lévy Processes and Stochastic Calculus. Cambridge Studies in Advanced Mathematics, vol. 93. Cambridge University Press, Cambridge (2004)

  2. Applebaum, D.: Martingale-valued measures, Ornstein–Uhlenbeck processes with jumps and operator self-decomposability in Hilbert space. In: Émery, M., Yor, M. (eds.) In Memoriam Paul-André Meyer, Séminaire de Probabilités XXXIX. Lecture Notes in Mathematics, pp. 173–198. Springer, Berlin (2006)

  3. Applebaum, D.: On the infinitesimal generators of Ornstein–Uhlenbeck processes with jumps in Hilbert space. Potential Anal. 26, 79–100 (2007)

  4. Bensoussan, A., Da Prato, G., Delfour, M., Mitter, S.: Representation and Control of Infinite Dimensional Systems. Systems & Control: Foundations & Applications, 2nd edn. Birkhäuser, Boston (2007)

  5. Bichteler, K., Gravereaux, J.-B., Jacod, J.: Malliavin Calculus for Processes with Jumps. Stochastics Monographs, vol. 2. Gordon and Breach, New York (1987)

  6. Brzeźniak, Z., Hausenblas, E.: Maximal regularity for stochastic convolutions driven by Lévy processes. Probab. Theory Relat. Fields 145(3–4), 615–637 (2009)

  7. Brzeźniak, Z., Peszat, S.: Hyperbolic equations with random boundary conditions. In: Recent Development in Stochastic Dynamics and Stochastic Analysis. Interdisciplinary Mathematical Sciences, vol. 8, p. 121. World Scientific, Hackensack (2010)

  8. Chojnowska-Michalik, A.: Stationary distributions for \(\infty \)-dimensional linear equations with general noise. In: Stochastic Differential Systems (Marseille–Luminy, 1984). Lecture Notes in Control and Information Sciences, vol. 69, pp. 14–24. Springer, Berlin (1985)

  9. Chojnowska-Michalik, A.: On processes of Ornstein–Uhlenbeck type in Hilbert space. Stochastics 21(3), 251–286 (1987)

  10. Coron, J.-M.: Control and Nonlinearity. Mathematical Surveys and Monographs, vol. 136. American Mathematical Society, Providence (2007)

  11. Da Prato, G.: An Introduction to Infinite-Dimensional Analysis. Universitext. Springer, Berlin (2006)

  12. Da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Encyclopedia of Mathematics and Its Applications, vol. 44. Cambridge University Press, Cambridge (1992)

  13. Da Prato, G., Zabczyk, J.: Evolution equations with white-noise boundary conditions. Stoch. Stoch. Rep. 42(3–4), 167–182 (1993)

  14. Da Prato, G., Zabczyk, J.: Ergodicity for Infinite Dimensional Systems. Cambridge University Press, Cambridge (1997)

  15. Ethier, S., Kurtz, T.: Markov Processes: Characterization and Convergence. Wiley Series in Probability and Mathematical Statistics. Wiley, New York (1986)

  16. Fournier, N.: Malliavin calculus for parabolic SPDEs with jumps. Stoch. Process. Appl. 87, 115–147 (2000)

  17. Hairer, M., Mattingly, J.: Ergodicity of the 2D Navier–Stokes equations with degenerate stochastic forcing. Ann. Math. 164, 993–1032 (2006)

  18. Hausenblas, E.: Absolute continuity of a law of an Itô process driven by a Lévy process to another Itô process. Int. J. Pure Appl. Math. 68, 387–401 (2011)

  19. Hausenblas, E.: Existence, uniqueness and regularity of parabolic SPDEs driven by Poisson random measure. Electron. J. Probab. 10, 1496–1546 (2005)

  20. Kapica, R., Szarek, T., Śleczka, M.: On a unique ergodicity of some Markov processes. Potential Anal. 36, 589–606 (2012)

  21. Laroche, B., Martin, P., Rouchon, P.: Motion planning for the heat equation. Int. J. Robust Nonlinear Control 10(8), 629–643 (2000)

  22. Maslowski, B.: Stability of semilinear equations with boundary and pointwise noise. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 22(1), 55–93 (1995)

  23. Maslowski, B., Seidler, J.: Probabilistic approach to the strong Feller property. Probab. Theory Relat. Fields 118(2), 187–210 (2000)

  24. Pandolfi, L., Priola, E., Zabczyk, J.: Linear operator inequality and null controllability with vanishing energy for unbounded control systems. SIAM J. Control Optim. 51(1), 629–659 (2013)

  25. Pazy, A.: Semigroups of Linear Operators and Applications to Partial Differential Equations. Applied Mathematical Sciences, vol. 44. Springer, New York (1983)

  26. Peszat, S., Zabczyk, J.: Stochastic Partial Differential Equations with Lévy Noise. Encyclopedia of Mathematics and Its Applications, vol. 113. Cambridge University Press, Cambridge (2007)

  27. Priola, E., Zabczyk, J.: Null controllability with vanishing energy. SIAM J. Control Optim. 42, 1013–1032 (2003)

  28. Priola, E., Zabczyk, J.: Ornstein–Uhlenbeck processes with jumps. Bull. Lond. Math. Soc. 41, 41–50 (2009)

  29. Priola, E., Zabczyk, J.: Structural properties of semilinear SPDEs driven by cylindrical stable processes. Probab. Theory Relat. Fields 149, 97–137 (2011)

  30. Priola, E., Shirikyan, A., Xu, L., Zabczyk, J.: Exponential ergodicity and regularity for equations with Lévy noise. Stoch. Process. Appl. 122, 106–133 (2012)

  31. Priola, E., Xu, L., Zabczyk, J.: Exponential mixing for some SPDEs with Lévy noise. Stoch. Dyn. 11, 521–534 (2011)

  32. Sato, K.: Lévy Processes and Infinitely Divisible Distributions. Cambridge Studies in Advanced Mathematics, vol. 68. Cambridge University Press, Cambridge (1999)

  33. Tucsnak, M., Weiss, G.: Observation and Control for Operator Semigroups. Birkhäuser Advanced Texts. Birkhäuser, Basel (2009)

  34. Weinan, E., Mattingly, J., Sinai, Y.: Gibbsian dynamics and ergodicity for the stochastically forced Navier–Stokes equation. Commun. Math. Phys. 224, 83–106 (2001)

  35. Zuazua, E.: Exact boundary controllability for the semilinear wave equation. In: Nonlinear Partial Differential Equations and Their Applications, Collège de France Seminar, vol. X. Pitman Research Notes in Mathematics Series, vol. 220, pp. 357–391. Longman, Harlow (1991)

Acknowledgments

The authors gratefully acknowledge the careful reading of the manuscript by the reviewers; their comments and suggestions have greatly improved the paper. The second author is very grateful for the financial support from the Austrian Science Fund (FWF), which funded his research through grant M1487 (Lise Meitner Program).

Corresponding author

Correspondence to Erika Hausenblas.

Appendices

Appendix A: Technical Preliminaries

Let \(\lambda \) be the Lebesgue measure on \({\mathbb {R}}\) and \(c:{\mathbb {R}}\rightarrow {\mathbb {R}}\) be given by (17). Let \(r_0\) be as in Hypothesis 1. As on page 8, let \(R\ge r_0\) and let \(g_R\) be as in (22).

In this section we will show that one can find a transformation \(\theta ^{(R)}:[0,T]\times {\mathbb {R}}\backslash \{0\}\rightarrow {\mathbb {R}}\) such that for a given mapping \(V: [0,T]\rightarrow {\mathbb {R}}\) we have

$$\begin{aligned} -\left( {V(s)}+g_R\right) =\int _{{\mathbb {R}}\setminus B_{\mathbb {R}}(R)} \,[c(z)-c(\theta ^{(R)}(s,z))]\lambda (dz), \quad s\in [0,T]. \end{aligned}$$
(72)

For simplicity, we first reformulate this problem in the following form. For any \(R\ge \max (10, r_0)\), find a function \(\vartheta ^R:{\mathbb {R}}\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) such that

$$\begin{aligned} v=\int _{{\mathbb {R}}\setminus B_{\mathbb {R}}(R)} [c(z)-c(\vartheta ^R(v,z))] \lambda (dz),\quad \forall v\in {\mathbb {R}}. \end{aligned}$$

We then set

$$\begin{aligned} \theta ^{(R)}(s,z) =\vartheta ^R \left( -\left( {V(s)}+g_R\right) ,z\right) , \quad s\in [0,T]. \end{aligned}$$

A direct calculation shows that \(\theta ^{(R)}\) satisfies (72).

To find such a transformation, we perturb the jumps slightly. By the symmetry of the Lévy measure, it suffices in the first step to consider only positive jumps and only positive values of \(v\). In the second step we construct a transformation which works for perturbations \(v\in {{ {\mathbb {R}}}}\) and for both positive and negative jumps.

Now, let \(\tilde{\delta }\in (0,\frac{2}{\alpha }(\alpha -1))\) and put

$$\begin{aligned} \gamma _1 = { 2\alpha -\alpha \tilde{\delta }-2\over \alpha (2-\alpha +\alpha \tilde{\delta })} \end{aligned}$$

and

$$\begin{aligned} \beta _1=-\frac{1}{2\alpha \gamma _1} \, \left( \alpha -2 \gamma _1 -\alpha \gamma _1 - \sqrt{ (\alpha +2\gamma _1+\alpha \gamma _1)^ 2 - 4\alpha \gamma _1(2-\alpha +2\gamma _1)}\right) . \end{aligned}$$

Let also \(\beta _2>-1\) be an arbitrary number and \(\gamma _2=2\). Note that \(-\beta _1-1=-\tilde{\delta }\) and because \(\alpha >1\) we also have \(1-\gamma _1(\beta _1+1)\le 0\).

Now define a function \(\vartheta ^{+,R} \) by

$$\begin{aligned}{}[0,\infty ) \ni K \mapsto \vartheta ^{+,R} (K) := \int _{{\mathbb {R}}^+} \left( c(z) - c(z+ \varsigma ^{(R)} (K,z))\right) \; dz\in {{ {\mathbb {R}}}}, \end{aligned}$$
(73)

where \(\varsigma ^{(R)} :{\mathbb {R}}^ +\times {\mathbb {R}}^ + \rightarrow {\mathbb {R}}^ +\) is defined by

$$\begin{aligned} \varsigma ^{(R)} (K,z) := {\left\{ \begin{array}{ll} K z ^ {-\beta _1} &{} z \in \left( \frac{4}{3} R K^{\gamma _1},\frac{8}{3} R K^ {\gamma _1}\right) \text{ and } K\ge 1,\\ 0 &{} z \not \in \left( R K^{\gamma _1}, \frac{10}{3} R K^{\gamma _1}\right) \text{ and } K\ge 1, \\ C z ^ {-\beta _2} &{} z \in \left( \frac{4}{3} R , \frac{4}{3} R (1+ K^{\gamma _2} )\right) \text{ and } K< 1, \\ 0&{} z \not \in \left( R, \frac{5}{3} R(1+ K^{\gamma _2} )\right) \text{ and } K< 1, \\ \text{ differentiably } \text{ interpolated } &{} \text{ elsewhere. } \end{array}\right. }\end{aligned}$$
(74)

Here, the constant \(C>0\) has to be chosen in such a way that \({\mathbb {R}}_0^ +\ni K\mapsto \vartheta ^{+,R} (K)\) is a continuous function. From the definition of \(\varsigma ^{(R)}\) we see in particular that for \(K\ge 1\) and \(z \in \left( K^{\gamma _1}R, \frac{4}{3} K^{\gamma _1}R)\cup (\frac{8}{3} K^{\gamma _1}R, \frac{10}{3} K^{\gamma _1}R\right) \), \(\varsigma ^{(R)}_z(K,z)\le K z^{-\beta _1-1}\) and for \(K< 1\) and \(z \in \left( R, \frac{4}{3} R)\cup (\frac{4}{3} (1+K^{\gamma _2}) R,\frac{5}{3} (1+K^{\gamma _2}) R\right) \), \(\varsigma ^{(R)}_z(K,z)\le C z^{-\beta _2-1}\), where \(\varsigma ^{(R)}_z\) is the partial derivative of \(\varsigma ^{(R)}\) with respect to \(z\).

Lemma 6.1

Under the Hypothesis 1 the function \(\vartheta ^{+,R} :{{ {\mathbb {R}}}}_0^+\rightarrow {{ {\mathbb {R}}}}_0^+\) is invertible.

Proof

We start by verifying the following properties

  1. (1)

    \(\vartheta ^{+,R} (K)\in {\mathbb {R}}^+_0\);

  2. (2)

    the function \({\mathbb {R}}^+_0\ni K\mapsto \vartheta ^{+,R}(K) \in {\mathbb {R}}^+ _0\) is continuous.

  3. (3)

    the function \({\mathbb {R}}^+\ni K\mapsto \vartheta ^{+,R}(K) \in {\mathbb {R}}^+_0 \) is injective.

  4. (4)

    the function \({\mathbb {R}}^+\ni K\mapsto \vartheta ^{+,R}(K) \in {\mathbb {R}}^+ _0\) is surjective.

It will follow from Items (2)–(4) that the function \(\vartheta ^{+,R}\) is invertible.

Item (1) is clear by the definition of \(c\). In order to show Items (2) and (3) we take into account that the function \({\mathbb {R}}^+_0 \ni K\mapsto \vartheta ^{+,R} (K) \in {\mathbb {R}}^ +_0\) is strictly increasing and continuous.

Since \(\vartheta ^{+,R} (0)=0\) and \(\vartheta ^{+,R} \) is continuous on \({\mathbb {R}}^+_0\), item (4) will follow if we can show that \(\lim _{K\rightarrow \infty } \vartheta ^{+,R}(K) =\infty \). For this purpose let us first recall that for \(z>0\) we have \(c(z)=U^{-1}(z)\), where \(U:{\mathbb {R}}^+\rightarrow {\mathbb {R}}_0^+\) denotes the tail integral given by

$$\begin{aligned} U(z)=\nu (z,\infty ),\quad z>0. \end{aligned}$$

From Hypothesis 1 it follows that there exist constants \(C_1>0\) and \(C_2>0\) such that

$$\begin{aligned} c(z) \ge C_1 z^ {-\frac{1}{\alpha }},\quad \forall \, z\ge C_2. \end{aligned}$$
(75)

In fact, since by definition \(U(\cdot )=\nu (\cdot ,\infty )\), by Hypothesis 1 there exists \(K_2>0\) such that for all \(z\in (0,r_0)\) we have

$$\begin{aligned} U(z) = \nu (z,r_0) + \nu (r_0,\infty ) \ge K_2\int _{z}^ {r_0} x^ {-1-\alpha }\, dx + \nu (r_0,\infty ) \\ \ge \frac{K_2}{\alpha } \left( z^{-\alpha } - {r_0^{-\alpha }} \right) + \nu (r_0,\infty ). \end{aligned}$$

Thus,

$$\begin{aligned} {\alpha \left( U(z) - \nu (r_0,\infty )\right) \over K_2} +r_0^ {-\alpha } \ge z ^ {-\alpha }, \end{aligned}$$

which implies that

$$\begin{aligned} z^ \alpha \ge {K_2/\alpha \over {U(z) - \nu (r_0,\infty ) } +(K_2/\alpha )\, r_0^ {-\alpha }} . \end{aligned}$$

Hence,

$$\begin{aligned} z \ge \left( {K_2/\alpha \over {U(z) - \nu (r_0,\infty ) } +(K_2/\alpha )\, r_0^ {-\alpha }}\right) ^ \frac{1}{\alpha }. \end{aligned}$$

If \((K_2/\alpha )\, r_0^ {-\alpha } \le \nu (r_0,\infty ) \), then (75) follows. If \((K_2/\alpha )\, r_0^ {-\alpha } > \nu (r_0,\infty ) \), then one can show by elementary calculations that for any \(c>0\) and \(K\in (0,1)\)

$$\begin{aligned} {1\over (y+c)^ \frac{1}{\alpha }} \ge {K\over y^ \frac{1}{\alpha }} ,\quad \forall y\ge {c\over 1-K^ \alpha }. \end{aligned}$$

Thus (75) is also valid.
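The elementary inequality used in the last step can be checked directly: \((y+c)^{-1/\alpha }\ge K y^{-1/\alpha }\) is equivalent to \(y\ge cK^{\alpha }/(1-K^{\alpha })\), which is implied by the slightly larger threshold \(y\ge c/(1-K^{\alpha })\) stated above. A quick numerical spot check, with illustrative parameters \(\alpha \), \(c\), \(K\) (not tied to the paper's hypotheses):

```python
def check(alpha, c, K, y):
    # the elementary inequality (y + c)^(-1/alpha) >= K * y^(-1/alpha)
    return (y + c) ** (-1.0 / alpha) >= K * y ** (-1.0 / alpha)

# illustrative parameters: alpha in (1, 2), K in (0, 1)
alpha, c, K = 1.5, 2.0, 0.8
y0 = c / (1.0 - K ** alpha)   # threshold stated in the text
```

For \(y\) well below the threshold the inequality fails, so the restriction to large \(y\) is genuinely needed.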

By similar arguments one can show that there exist constants \(\tilde{C}_1>0\) and \(\tilde{C}_2>0\) such that

$$\begin{aligned} c(z) \le \tilde{C}_1 z^ {-\frac{1}{\alpha }},\quad \forall \, z\ge \tilde{C}_2. \end{aligned}$$
(76)

In fact, again by Hypothesis 1, there exists a constant \(\tilde{K}_2>0\) such that for all \(z\in (0,r_0)\) we have

$$\begin{aligned} U(z)= & {} \nu (z,r_0) + \nu (r_0,\infty ) \le \tilde{K}_2\int _{z}^ {r_0} x^ {-1-\alpha }\, dx + \nu (r_0,\infty ) \\\le & {} \frac{\tilde{K}_2}{\alpha } \left( z^{-\alpha } - {r_0^{-\alpha }} \right) + \nu (r_0,\infty ). \end{aligned}$$

Thus, by the same calculations as before we infer that

$$\begin{aligned} z \le \left( {\tilde{K}_2/\alpha \over {U(z) - \nu (r_0,\infty ) } +(\tilde{K}_2/\alpha )\, r_0^ {-\alpha }}\right) ^ \frac{1}{\alpha }. \end{aligned}$$

If \((\tilde{K}_2/\alpha )\, r_0^ {-\alpha } \ge \nu (r_0,\infty ) \), then the assertion follows. If \((\tilde{K}_2/\alpha )\, r_0^ {-\alpha } < \nu (r_0,\infty ) \), then direct calculations give that for any \(c>0\) and \(K>1 \) we have

$$\begin{aligned} {1\over (y-c)^ \frac{1}{\alpha }} \le {K\over y^ \frac{1}{\alpha }} ,\quad \forall y\ge {cK^ \alpha \over K^ \alpha -1}. \end{aligned}$$

Hence, (76) follows.
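Both bounds (75) and (76) can be illustrated in the exactly \(\alpha \)-stable model case \(\nu (dz)=z^{-1-\alpha }\,dz\) on \((0,\infty )\), where the tail integral is \(U(z)=z^{-\alpha }/\alpha \) and hence \(c(z)=U^{-1}(z)=(\alpha z)^{-1/\alpha }\) in closed form. The Python sketch below (with an illustrative \(\alpha =3/2\) and illustrative bisection brackets) inverts \(U\) numerically in log scale and checks that \(c(v)\,v^{1/\alpha }\) equals the constant \(\alpha ^{-1/\alpha }\), so (75) and (76) hold here with \(C_1=\tilde{C}_1=\alpha ^{-1/\alpha }\):

```python
import math

alpha = 1.5  # illustrative stability index

def U(z):
    # tail integral of nu(dz) = z^{-1-alpha} dz on (0, infinity)
    return z ** (-alpha) / alpha

def c(v, lo=1e-12, hi=1e12, iters=200):
    # numerical inverse of the strictly decreasing tail integral U,
    # bisecting in log scale on the bracket [lo, hi]
    for _ in range(iters):
        mid = math.sqrt(lo * hi)
        if U(mid) > v:
            lo = mid          # mid too small: U(mid) too large
        else:
            hi = mid
    return math.sqrt(lo * hi)
```

For \(v\in \{1,10^2,10^4\}\) the product \(c(v)\,v^{1/\alpha }\) agrees with \(\alpha ^{-1/\alpha }\approx 0.763\) to high precision.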

Next, for \(z>0\) the mapping \(U(\cdot )=\nu (\cdot , \infty )\) is invertible and its inverse \(U^{-1}(\cdot )\) coincides with \(c(\cdot )\) on \((0,\infty )\). Therefore, there exists a constant \(C>0\) such that \(c\) is differentiable for all \(z\ge C\) and

$$\begin{aligned} c^ \prime (z) = {1\over U'(y)\Big |_{y=U^ {-1}(z)}}, \end{aligned}$$

for any \(z\ge C\). We now derive a lower estimate for \(c^ \prime \). To this end, observe that Hypothesis 1 implies that there exist two constants \(C_3=\frac{1}{K_2}>0\) and \(C_4=r_0>0\) such that

$$\begin{aligned} {1\over U'(y)} \ge {C_3\over y^{-\alpha -1}}, \quad \text{ for } \text{ all } 0<y\le C_4. \end{aligned}$$
(77)

In fact, since \(U(y)=\nu (y,\infty )\) we know that \(U'(y)=k(y)\). By Hypothesis 1 we have

$$\begin{aligned} k(y)\le K_2 y^ {-\alpha -1} ,\quad \forall y\le r_0. \end{aligned}$$

Hence

$$\begin{aligned} {K_2 \over k(y)} \ge {1\over y^ {-\alpha -1}} ,\quad \forall y\le r_0, \end{aligned}$$

which implies (77). Now, let \(z\ge U(r_0)\). Then \(U^ {-1}(z)\le r_0\) and

$$\begin{aligned} c^ \prime (z) \ge {1\over K_2 y^ {-\alpha -1}\Big |_{y=U^ {-1}(z)}}. \end{aligned}$$

Estimate (75) gives that there exists two constants \(C_5>0\) and \(C_6>0\) such that

$$\begin{aligned} c^ \prime (z) \ge {C_5\over z^{1+\frac{1}{\alpha }}} ,\quad \text{ for } \text{ all } z\ge C_6. \end{aligned}$$
(78)

Now we can complete the proof of Item (4). For simplicity, we put \(r_1=\frac{4}{3} R\) for the rest of the proof. For any \(K>1\) we have

$$\begin{aligned} \vartheta ^{+,R} (K)= & {} \int _0 ^ \infty \left[ c(z) - c(z+\varsigma ^{(R)} (K,z))\right] \; dz\\= & {} \int _{ r_1K^{\gamma _1}} ^ {K^{\gamma _1}2 r_1} \int _z ^ { z+\varsigma ^{(R)} (K,z)} \; \,c'(y) \; dy\; dz\\\ge & {} \int _{K^ {\gamma _1}r_1} ^ {K^ {\gamma _1}2 r_1} \varsigma ^{(R)} (K,z)\; \, c'(z+ \varsigma ^{(R)} (K,z)) \; dz . \end{aligned}$$

For \(\tilde{\gamma _1}= \gamma _1(1+\beta _1)\) we can derive from the estimate (78) that

$$\begin{aligned} \ldots\ge & {} \delta _0 K\; \int _{ \frac{4}{3} K^ {\gamma _1}R} ^ {\frac{8}{3} K^ {\gamma _1}R} { z ^{-{\beta _1}}\over \left( z+{K\over z ^ {\beta _1}}\right) ^ {\frac{1}{\alpha }+1}}\; dz \ge \delta _0 K\; \int _{ r_1K^ {\gamma _1}} ^ {K^ {\gamma _1}2 r_1} { z ^{-{\beta _1}+{\beta _1} ( {\frac{1}{\alpha }+1}) }\over \left( z ^ {1+{\beta _1}}+K\right) ^ {\frac{1}{\alpha }+1}}\; dz \\= & {} \delta _0\, K \, K ^ {- \frac{1+\alpha }{\alpha }} \int _{ r_1K^ {\gamma _1}} ^ {K^{\gamma _1}2 r_1} { z ^{{\beta _1} \over \alpha } \over \left( {z ^ {1+{\beta _1}} \over K} +1 \right) ^ {\frac{1}{\alpha }+1}}\; dz\\= & {} K ^{-\frac{1}{\alpha }} \int _{r_1^ {1+{\beta _1}} K^{\tilde{\gamma _1}}} ^ { (2r_1)^{1+{\beta _1}} K^{\tilde{\gamma _1}}} { \left( K u \right) ^{{1\over 1+{\beta _1}}\, {\beta _1} \over \alpha } \over \left( {u } +1 \right) ^ {\frac{1}{\alpha }+1}}\; K ^{\frac{1}{{\beta _1}+1}} u ^{\frac{1}{{\beta _1}+1} -1} du \\= & {} K ^{ - (1+{\beta _1}) +{\beta _1} +\alpha \over \alpha (1+{\beta _1})} \int _{{r_1^ {1+{\beta _1}}K^{\tilde{\gamma _1}}}} ^{(2r_1)^{1+{\beta _1}}K^{\tilde{\gamma _1}}} { u ^{ {\beta _1} +\alpha - ({\beta _1}+1)\alpha \over ({\beta _1}+1)\alpha } \over \left( {u } +1 \right) ^ {\frac{1}{\alpha }+1}}\; du \\= & {} K ^{ \alpha - 1 \over \alpha (1+{\beta _1})} \int _{{r_1^ {1+{\beta _1}}K^{\tilde{\gamma _1}}}} ^ {(2r_1)^{1+{\beta _1}}K^{\tilde{\gamma _1}}} { u ^{ {\beta _1} (1-\alpha ) \over ({\beta _1}+1)\alpha } \over \left( {u } +1 \right) ^ {\frac{1}{\alpha }+1}}\; du. \end{aligned}$$

Since \({ {\beta _1} (1-\alpha ) \over ({\beta _1}+1)\alpha }<{\frac{1}{\alpha }+1}\), there exists a constant \(C=C(\beta _1,\gamma _1,\alpha )>0\), such that

$$\begin{aligned} { u ^{ {\beta _1} (1-\alpha ) \over ({\beta _1}+1)\alpha } \over \left( {u } +1 \right) ^ {\frac{1}{\alpha }+1}}\;\ge C\, u^{-{1 + \alpha + 2 \alpha \beta _1\over \alpha (1+\beta _1)}}= C\, u^{-{1 + \alpha \over \alpha (1+\beta _1)}-1}, \quad u\ge 1. \end{aligned}$$

Integrating gives

$$\begin{aligned} \ldots\ge & {} C\, r_1^{2\beta _1-\alpha \beta _1 +1} K^{\Gamma }, \end{aligned}$$
(79)

where, for simplicity, we have put

$$\begin{aligned} \Gamma = {\alpha -1 -\gamma _1(1+\beta _1)(\alpha \beta _1+1) \over \alpha (1+\beta _1)} = {\alpha -\alpha \beta _1^2 \gamma _1-\alpha \beta _1\gamma _1-\beta _1\gamma _1 -\gamma _1-1 \over \alpha (1+\beta _1)}.\nonumber \\ \end{aligned}$$
(80)

It follows from (79) that

$$\begin{aligned} \lim _{K\rightarrow \infty } \int _{r_1} ^ \infty \left[ c(z) - c(z+\varsigma ^{(R)} (K,z))\right] \; dz =\infty , \end{aligned}$$

which concludes the proof of Item (4). From (1) to (4) it follows that the function \(\vartheta ^{+,R} :{\mathbb {R}}^+\rightarrow {\mathbb {R}}^+\) defined by (73) is invertible. \(\square \)
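The invertibility argument is constructive and can be mimicked numerically: one evaluates \(K\mapsto \vartheta ^{+,R}(K)\) by quadrature, checks monotonicity on a bracket, and inverts by bisection. The Python sketch below uses a toy model with tail inverse \(c(z)=z^{-1/\alpha }\) and a simplified perturbation \(\varsigma (K,z)=Kz^{-\beta }\) supported on \((RK^{\gamma },2RK^{\gamma })\), dropping the interpolation layer of (74); the parameters \(\alpha ,\beta ,\gamma ,R\) are illustrative, not the ones dictated by the paper:

```python
import numpy as np

ALPHA, BETA, GAMMA, R = 1.5, 0.5, 0.2, 10.0   # illustrative toy parameters

def c(z):
    return z ** (-1.0 / ALPHA)                 # model tail inverse

def sigma(K, z):
    # simplified perturbation: K * z^{-beta} on (R K^gamma, 2 R K^gamma), else 0
    lo, hi = R * K ** GAMMA, 2.0 * R * K ** GAMMA
    return np.where((z > lo) & (z < hi), K * z ** (-BETA), 0.0)

def theta_plus(K, n=200_000):
    # midpoint quadrature of int [c(z) - c(z + sigma(K, z))] dz; the integrand
    # vanishes off the support of sigma, so we integrate only over the support
    z = np.linspace(R * K ** GAMMA, 2.0 * R * K ** GAMMA, n + 1)
    mids = 0.5 * (z[:-1] + z[1:])
    return float(np.sum((c(mids) - c(mids + sigma(K, mids))) * np.diff(z)))

def kappa_plus(v, lo=1.0, hi=50.0, iters=60):
    # bisection inverse of the increasing map theta_plus on [lo, hi]
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if theta_plus(mid) < v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In this toy model \(\vartheta ^{+}\) is strictly increasing on the bracket, so the bisection recovers \(K\) from \(v=\vartheta ^{+}(K)\), which is exactly the role of \(\kappa ^{+,R}\) below.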

We also state the following remark, which is crucial for our analysis.

Remark 6.2

For any \(R\ge r_0\) and \(0<K\le 1\) we have

$$\begin{aligned} \int _0 ^ \infty \left[ c(z) - c(z+ \varsigma ^{(R)} (K,z))\right] \; dz\ge C(R) \, K. \end{aligned}$$
(81)

Proof of Estimate (81)

As above we set \(r_1=\frac{4}{3}R\). For any \(0<K<1\) we have

$$\begin{aligned}&\int _0 ^ \infty \left[ c(z) - c(z+ \varsigma ^{(R)} (K,z))\right] \; dz\\&\quad = \int _{r_1}^{r_1(1+K^{\gamma _2})} \int _z ^{z+\varsigma ^{(R)}(K,z)} c'(y)\, dy\, dz, \end{aligned}$$

which implies the existence of a positive constant \(C(r_1, \alpha , \gamma _2,\beta _2)\) such that

$$\begin{aligned} \int _0 ^ \infty \left[ c(z) - c(z+ \varsigma ^{(R)} (K,z))\right] \; dz&\ge C(r_1, \alpha , \gamma _2,\beta _2) \int _{r_1}^{r_1(1+K^{\gamma _2})}\frac{z^{-\beta _2}}{(z+z^{-\beta _2})^{1+\frac{1}{\alpha }}}dz \\&= C(r_1, \alpha , \gamma _2,\beta _2) \int _{r_1}^{r_1(1+K^{\gamma _2})}\frac{z^{-\beta _2{\alpha -1\over \alpha }}}{(z^{1+\beta _2}+1)^{1+\frac{1}{\alpha }}}dz. \end{aligned}$$

By change of variables we get that

$$\begin{aligned}&\int _0 ^ \infty \left[ c(z) - c(z+ \varsigma ^{(R)} (K,z))\right] \; dz \\&\quad \ge C(r_1, \alpha , \gamma _2,\beta _2) \int _{{r_1}^{1+\beta _2}}^{r_1^{1+\beta _2} (1+K^{\gamma _2})^{(1+\beta _2)}}\frac{u^{-\frac{\beta _2(\alpha -1)}{\alpha (\beta _2+1)}}}{(u+1)^{1+\frac{1}{\alpha }}} u^{\beta _2\over 1+\beta _2} \, du \\&\quad \ge C(r_1, \alpha , \gamma _2,\beta _2) \int _{{r_1}^{1+\beta _2}}^{r_1^{1+\beta _2} (1+K^{\gamma _2})^{(1+\beta _2)}}\frac{u^{\frac{\beta _2}{\alpha (\beta _2+1)}}}{(u+1)^{1+\frac{1}{\alpha }}} \,du. \end{aligned}$$

Since \({r_1}^{1+\beta _2}\le u\le r_1^{1+\beta _2}(1+K^{\gamma _2})^{(1+\beta _2)}\), we get

$$\begin{aligned}&\int _0 ^ \infty \left[ c(z) - c(z+ \varsigma ^{(R)} (K,z))\right] \; dz \\&\quad \ge C(r_1, \alpha , \gamma _2,\beta _2) \frac{1}{(1+(2r_1)^{1+\beta _2})^{1+\frac{1}{\alpha }} } \int _{{r_1}^{1+\beta _2}}^{[r_1(1+K^{\gamma _2})]^{(1+\beta _2)}} u^{\frac{\beta _2}{\alpha (\beta _2+1)}} du. \end{aligned}$$

A Taylor expansion gives

$$\begin{aligned} \int _0 ^ \infty \left[ c(z) - c(z+ \varsigma ^{(R)} (K,z))\right] \; dz&\ge C(r_1, \alpha , \gamma _2,\beta _2) \frac{ r_1^ {{\beta _2\over \alpha }}}{(1+(2r_1)^{\beta _2+1} )^{1+\frac{1}{\alpha }}} K^{{\gamma _2}\,}\\&\ge C(r_1, \alpha , \gamma _2,\beta _2) K^{{\gamma _2}\, } . \end{aligned}$$

By the choice of \(\beta _2\) and \(\gamma _2\), we have

$$\begin{aligned} \int _0 ^ \infty \left[ c(z) - c(z+ \varsigma ^{(R)} (K,z))\right] \; dz\ge C(r_1) \, K,\quad 0<K\le 1. \end{aligned}$$
(82)

This proves Estimate (81). \(\square \)

Let \(\kappa ^{+,R}\) denote the inverse of \(\vartheta ^{+,R}\), i.e. \(\kappa ^{+,R}(z)=(\vartheta ^{+,R})^ {-1}(z)\) for \(z\in {\mathbb {R}}_0^+\). Taking into account the negative jumps and negative values of \(v\), we now define the following transformation.

Corollary 6.3

Assume that Hypothesis 1 holds and let \(r_0\) be as in Hypothesis 1. Then, for any \(R\ge r_0\) the transformation

$$\begin{aligned} \vartheta ^R:{\mathbb {R}}\times {{ {\mathbb {R}}}}\setminus \{0\}\rightarrow {{ {\mathbb {R}}}}\end{aligned}$$

defined by

$$\begin{aligned} \vartheta ^R(v,z):= {\left\{ \begin{array}{ll}z+\varsigma ^{(R)}(\kappa ^{+,R}(|v|),z)&{} \text{ if } v\ge 0\, \text{ and } z> 0, \\ -z-\varsigma ^{(R)}(\kappa ^{+,R}(|v|),z)&{} \text{ if } v< 0 \, \text{ and } z< 0, \\ 0 &{} \text{ elsewhere } \end{array}\right. }\end{aligned}$$

satisfies

$$\begin{aligned} \int _{R} ^\infty \left[ c(z)-c(\vartheta ^R(v,z)) \right] \, dz + \int _{-\infty }^ {-R} \left[ c(z)-c(\vartheta ^R(v,z)) \right] \, dz = v \quad \text{ for } \text{ all } v \in {\mathbb {R}}. \end{aligned}$$

Proof

Taking into account the symmetry of the Lévy measure and the definition of \(\kappa ^{+,R}\), the proof follows from Lemma 6.1 by direct calculations. \(\square \)

Corollary 6.4

Let \(R\) be as in Corollary 6.3 and define the function \(\rho ^{(R)}:{\mathbb {R}}\times {\mathbb {R}}\setminus \{0\}\rightarrow {\mathbb {R}}\) by

$$\begin{aligned} \rho ^{(R)}(v,z):= {\left\{ \begin{array}{ll}\varsigma ^{(R)}(\kappa ^{+,R}(|v|),z)&{} \text{ if } v\ge 0\, \text{ and } z> 0,\\ -\varsigma ^{(R)}(\kappa ^{+,R}(|v|),z)&{} \text{ if } v< 0 \, \text{ and } z< 0, \\ 0 &{} \text{ elsewhere. } \end{array}\right. }\end{aligned}$$
(83)

Let us denote the derivative in the direction of the second variable by \(\rho ^{(R)}_z\). Then

  1. (1)

    there exists a constant \( C(R)>0\) such that

    $$\begin{aligned} \int _{{\mathbb {R}}\setminus \{0\}}|\rho ^{(R)}_z(x,z)|\, dz \le C(R)\,| x |^{2}, \quad \forall x\in {\mathbb {R}}; \end{aligned}$$
  2. (2)

    there exists \(\tilde{R}>0\) such that for all \(R>\tilde{R}\),

    $$\begin{aligned} |\rho ^{(R)}_z(x,z)|\le \frac{1}{2}, \forall \, x\in {\mathbb {R}},\, \forall z\in {\mathbb {R}}\setminus \{0\}. \end{aligned}$$

Proof

For the sake of simplicity, we set \(r_1=\frac{4}{3} R\) throughout this proof.

By the symmetry assumption on the Lévy measure, it is enough to prove Item (1) for \(v>0\). Let \(v_0:= \inf \{v\ge 0: \kappa ^{+,R}(v)\ge 1\}\). First, assume that \(v\ge v_0\). Then \(\kappa ^{+,R}(v) \ge 1\) and, by (79), \(\kappa ^{+,R}(v)\sim v ^\frac{1}{\Gamma }\), where \(\Gamma \) is defined in (80).Footnote 8 By the definition of \(\varsigma ^{(R)}\) we have

$$\begin{aligned} \int _0 ^\infty \varsigma ^{(R)}_z(K,z)\,dz \sim K \int _{ r_1K ^{\gamma _1}} ^{ 2r_1 K ^{\gamma _1}} z ^{-\beta _1-1}\, dz \sim C(r_1)\, K ^{1-\beta _1\gamma _1}, \end{aligned}$$

for any \(K>1\). Setting \(K=v ^\frac{1}{\Gamma }\), we get by the choice of \(\beta _1\) and \(\gamma _1\) that

$$\begin{aligned} \int _0 ^\infty \varsigma ^{(R)}_z(K,z)\,dz\sim C(r_1)\, v ^{1-\beta _1\gamma _1\over \Gamma } \sim C(r_1)v ^2,\quad v\ge v_0. \end{aligned}$$

In the case \(0\le v\le v_0\) we have \(\kappa ^{+,R}(v) < 1\). Thus, putting \(K=\kappa ^ {+,R}(v)\), we have

$$\begin{aligned}&\int _0 ^\infty \varsigma ^{(R)}_z(K,z)\,dz \sim \int _{ r_1} ^{ r_1(1+ K ^{\gamma _2})} z ^{-\beta _2-1}\, dz \sim C(r_1,\beta _2)\\&\quad \times \left( (1+ K ^{\gamma _2} )^ {-\beta _2}-1\right) \sim C(r_1)\, K ^{\gamma _2}. \end{aligned}$$

Since by (81) we have \(K\sim v \) and thanks to the choice of \(\gamma _2\) we obtain

$$\begin{aligned} \int _0 ^\infty \varsigma ^{(R)}_z(K,z)\,dz\sim C(r_1)\, v ^{\gamma _2}\sim C(r_1,\beta _2)v ^2,\quad 0< v\le v_0. \end{aligned}$$

From this we conclude the proof of Item (1).

As in the proof of Item (1), it is enough to prove Item (2) for \(v>0\); the general case follows from the symmetry in Hypothesis 1. Assume first that \(v\ge v_0\), where again \(v_0:= \inf \{v\ge 0: \kappa ^{+,R}(v)\ge 1\}\), and set \(K=\kappa ^{+,R}(v)\). Then \(\varsigma ^{(R)}(K,z)=0\) for \(z\le K^{\gamma _1} R\); in particular, by the definition of \(v_0\), \(\varsigma ^{(R)}(K,z)=0\) for \(z\le R\). We know that \(\varsigma ^{(R)}_z(K,z)\le K z^{-\beta _1-1}\) for \(K\ge 1\) and \(z \in \left( K^{\gamma _1}R, \frac{4}{3} K^{\gamma _1}R)\cup (\frac{8}{3} K^{\gamma _1}R, \frac{10}{3} K^{\gamma _1}R\right) \); thus \(\varsigma ^{(R)}_z(K,z)\le \frac{1}{2}\) for any \( R> 2^\frac{1}{\tilde{\delta }}\). Next, assume that \(0\le v\le v_0\), i.e. \(K=\kappa ^{+,R}(v) \le 1\). Then \(\varsigma ^{(R)}_z(K, z)\le C z^ {-\beta _2-1}\) for \(z \in \left( R, \frac{4}{3} R)\cup (\frac{4}{3} (1+K^{\gamma _2}) R,\frac{5}{3} (1+K^{\gamma _2}) R\right) \), and therefore \(\varsigma ^{(R)}_z(K,z)\le \frac{1}{2}\) for any \(R> [2C]^{\frac{1}{1+\beta _2}}.\) Choosing \( \tilde{R} =\max \left( 2^\frac{1}{\tilde{\delta }}, [2C]^\frac{1}{\beta _2+1}\right) , \) we see that for \(R>\tilde{R}\vee r_0\) we have \(\varsigma ^{(R)}_z(K,z)\le \frac{1}{2}\) for all \(K\) and \(z\). \(\square \)

Appendix B: Change of measure formula

Let \(\mu \) be a Poisson random measure over \(\bar{{\mathfrak {A}}}=(\bar{\Omega },\bar{{\mathbb {P}}},(\bar{{{\mathcal F }}}_t)_{t\ge 0},\bar{{{\mathcal F }}})\) with compensator \(\gamma \) defined by \(\gamma (U\times I)=\lambda (U) \lambda (I)\) for any \(U\times I\in \mathcal {B}(\mathbb {R})\times \mathcal {B}([0,\infty ))\). Let \(c:{{ {\mathbb {R}}}}\rightarrow {{ {\mathbb {R}}}}\) be the transformation defined by (49).

Let \(g:\bar{\Omega }\times [0,\infty )\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) be a predictable process with \(g\in L^ 2([0,\infty )\times {\mathbb {R}};{\mathbb {R}})\), and let \(\psi \) be the mapping defined by

$$\begin{aligned} \psi :[0,\infty ) \times {{ {\mathbb {R}}}}\ni (t,z) \mapsto z+g(t,z) \, \in {{ {\mathbb {R}}}}. \end{aligned}$$
(84)

Combining Corollary 6.3 with Example 1.9 of [18], one can verify the following lemma.

Lemma 6.5

There exists a probability measure \({\mathbb {Q}}^ \psi \) on \({\mathfrak {A}}\) such that the Poisson random measure \(\mu _\psi \) defined by

$$\begin{aligned} {{\mathcal B }}({{ {\mathbb {R}}}})\times {{\mathcal B }}([0,\infty ))\ni A\times I \mapsto \int _I\int _{{ {\mathbb {R}}}}1_A(\psi (s,z ))\mu (dz,ds) \end{aligned}$$
(85)

has compensator \(\gamma \). For \(t\ge 0\) let \({\mathbb {Q}}^\psi _t\), respectively \(\bar{{\mathbb {P}}}_t\), be the projection of \({\mathbb {Q}}^\psi \) (resp. \(\bar{{\mathbb {P}}}\)) onto \(\bar{{{\mathcal F }}}_t\). Then the density process given by

$$\begin{aligned}{}[0,\infty ) \ni t\mapsto {{\mathcal G }}(t) := {d{\mathbb {Q}}^\psi _t\over d\bar{{\mathbb {P}}}_t},\quad t\ge 0, \end{aligned}$$

satisfies

$$\begin{aligned} \left\{ \begin{array}{rcl} d{{\mathcal G }}(t) &{}=&{} {{\mathcal G }}(t-) \int _{{ {\mathbb {R}}}}(\psi _z(t,z )-1) \,(\mu -\gamma )(dz,dt) \\ &{}=&{} {{\mathcal G }}(t-) \int _{{ {\mathbb {R}}}}g_z(t,z) \,(\mu -\gamma )(dz,dt), \\ {{\mathcal G }}(0) &{}=&{} 1. \end{array} \right. \end{aligned}$$
(86)
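In the simplest scalar analogue of Lemma 6.5 — a standard Poisson process \(N\) of intensity \(\lambda _0\) whose intensity is rescaled by a factor \(\theta \), so that \(g_z\equiv \theta -1\) in (86) — the density process is explicitly \({{\mathcal G }}(t)=\theta ^{N_t}e^{-\lambda _0(\theta -1)t}\), and reweighting by \({{\mathcal G }}(t)\) turns \(N_t\) into a Poisson random variable of mean \(\theta \lambda _0 t\). The following self-contained check of this textbook special case uses arbitrary illustrative parameters:

```python
import math

def poisson_pmf(rate, k):
    # P(N = k) for N ~ Poisson(rate)
    return math.exp(-rate) * rate**k / math.factorial(k)

lam0, theta, t = 2.0, 1.5, 1.0   # arbitrary illustrative parameters

def reweighted_mass(k):
    # E^P[ G(t) 1_{N_t = k} ] with the explicit density
    # G(t) = theta**N_t * exp(-lam0*(theta-1)*t)
    G = theta**k * math.exp(-lam0 * (theta - 1) * t)
    return G * poisson_pmf(lam0 * t, k)

# under the new measure, N_t is Poisson with mean theta*lam0*t
for k in range(10):
    assert abs(reweighted_mass(k) - poisson_pmf(theta * lam0 * t, k)) < 1e-12

# the reweighted measure has total mass 1, i.e. Q is a probability measure
total = sum(reweighted_mass(k) for k in range(60))
assert abs(total - 1.0) < 1e-9
```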

Remark 6.6

Let \(v:\bar{\Omega }\times {\mathbb {R}}\rightarrow {\mathbb {R}}\) be a predictable process and let \(\theta ^{(R)}(t,z):= \psi (t,z)+ \rho ^{(R)}(v(t),z)=z+\rho ^{(R)}(v(t),z)\), where \(\rho ^{(R)}\) is defined in Corollary 6.4. Then, under \({\mathbb {Q}}^ \psi \) the Poisson random measure \(\mu _\psi =\mu _\theta \) defined by (85) has compensator \(\lambda \times \lambda \).

Remark 6.7

In case there exists a \(\delta >0\) such that \(g_z(t,z )\ge \delta -1\) for all \(z\in {\mathbb {R}}\), \({{\mathcal G }}\) is invertible and the inverse \({\mathcal {H}}={{\mathcal G }}^ {-1}\) solves the following stochastic differential equation.
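A sketch of the equation for \({\mathcal {H}}\), derived here from (86) by the Itô formula for \(x\mapsto x^{-1}\): under the stated bound \(\psi _z=1+g_z\ge \delta >0\), at each atom of \(\mu \) the process \({{\mathcal G }}\) is multiplied by \(\psi _z\), hence \({\mathcal {H}}\) by \(\psi _z^{-1}\); compensating and using \(g_z+\psi _z^{-1}-1=(\psi _z-1)^2/\psi _z\) gives

$$\begin{aligned} \left\{ \begin{array}{rcl} d{\mathcal {H}}(t) &{}=&{} {\mathcal {H}}(t-) \int _{{ {\mathbb {R}}}}\left( {1\over \psi _z(t,z)}-1\right) (\mu -\gamma )(dz,dt)\\ &{}&{}+\, {\mathcal {H}}(t-) \int _{{ {\mathbb {R}}}}{(\psi _z(t,z)-1)^ 2\over \psi _z(t,z)}\, \gamma (dz,dt),\\ {\mathcal {H}}(0) &{}=&{} 1. \end{array} \right. \end{aligned}$$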

Proof

The proof proceeds via the Laplace transform. Let \(\xi =\{\xi (t):0\le t<\infty \}\) be given by

$$\begin{aligned} \left\{ \begin{array}{rcl} d\xi (t)&{}=&{} \int _{{ {\mathbb {R}}}}c(z) (\mu _\psi -\gamma ) (dz,dt), \\ \xi (0)&{}=&{} 0. \end{array} \right. \end{aligned}$$

Then under \({{\mathbb {Q}}^ \psi }\) the Laplace transform is given by

$$\begin{aligned} \mathbb {E}^{{\mathbb {Q}}^ \psi } e ^{-\lambda \xi (t)} = \exp \left( \int _0^t \int _{{ {\mathbb {R}}}}\left[ e ^{-\lambda c(z)}-1+\lambda c(z)\, \right] \gamma (dz,ds)\right) . \end{aligned}$$

Rewriting \(\xi \) gives

$$\begin{aligned} \left\{ \begin{array}{rcl} d\xi (t) &{}=&{} \int _{{ {\mathbb {R}}}}c(\psi (t,z)) (\mu -\gamma ) (dz,dt) +\int _{{ {\mathbb {R}}}}\left[ c(\psi (t,z)) -c(z)\right] \gamma (dz, dt), \\ \xi (0)&{}=&{} 0. \end{array}\right. \end{aligned}$$

Let \(M _\lambda =\{ M_\lambda (t):0\le t<\infty \}\) be given by \(M_\lambda (t) = e ^{-\lambda \xi (t)}\), \(0\le t<\infty \). Now, we will show that \(\mathbb {E}^{\bar{{\mathbb {P}}}} M _\lambda (t){{\mathcal G }}(t) = \mathbb {E}^{{\mathbb {Q}}^ \psi } e ^{-\lambda \xi (t)}\). First \(M _\lambda (t)\) solves

$$\begin{aligned} dM _\lambda (t)= & {} -\lambda \, \int _{{ {\mathbb {R}}}}M _\lambda (t-) \left[ c(\psi (t,z)) -c(z)\right] \gamma (dz, dt)\\&+ \int _{{ {\mathbb {R}}}}M _\lambda (t-)\left[ e^{-\lambda c(\psi (t,z))}-1\right] (\mu -\gamma ) (dz,dt) \\&+ \int _{{ {\mathbb {R}}}}M _\lambda (t-) \left[ e^{-\lambda c(\psi (t,z))}- 1+\lambda c(\psi (t,z))\right] \gamma (dz, dt), \\ M_\lambda (0)= & {} 1. \end{aligned}$$

Therefore, applying the product formula to \({\mathcal {Z}}_\lambda (t) = M _\lambda (t) \, {{\mathcal G }}(t)\) and taking expectations, so that the martingale terms vanish, we obtain

$$\begin{aligned} \mathbb {E}^{\bar{\mathbb {P}}} {\mathcal {Z}} _\lambda (t)= & {} 1 -\lambda \mathbb {E}^{\bar{\mathbb {P}}} \int _0^t \int _{{ {\mathbb {R}}}}{\mathcal {Z}} _\lambda (s-) \left[ c(\psi (s,z)) - c(z)\right] \lambda (dz)\, ds \\&+\, \mathbb {E}^{\bar{\mathbb {P}}} \int _0^t \int _{{ {\mathbb {R}}}}{\mathcal {Z}} _\lambda (s-) \left[ e^{-\lambda c(\psi (s,z))}- 1+\lambda c(\psi (s,z))\right] \lambda (dz)\, ds \\&+\, \mathbb {E}^{\bar{\mathbb {P}}} \int _0^t {\mathcal {Z}} _\lambda (s-) \int _{{ {\mathbb {R}}}}\left[ e^{-\lambda c(\psi (s,z))}- 1\right] \left[ \psi _z(s,z) -1\right] \lambda (dz)\, ds \\= & {} 1 + \mathbb {E}^{\bar{\mathbb {P}}} \int _0^t \int _{{ {\mathbb {R}}}}{\mathcal {Z}} _\lambda (s-)\Big [ \lambda c(z) - \lambda c(\psi (s,z)) + e^{-\lambda c(\psi (s,z))}- 1 \\&+\,\lambda c(\psi (s,z))+ e^{-\lambda c(\psi (s,z))}\psi _z(s,z) - \psi _z(s,z) -e^{-\lambda c(\psi (s,z))}\\&+\,1\Big ] \gamma (dz,ds)\\= & {} 1 + \mathbb {E}^{\bar{\mathbb {P}}} \int _0^t \int _{{ {\mathbb {R}}}}{\mathcal {Z}} _\lambda (s-)\Big [ e^{-\lambda c(\psi (s,z))}\psi _z(s,z) - \psi _z(s,z) +\lambda c(z) \Big ] \gamma (dz,ds)\\= & {} 1 + \mathbb {E}^{\bar{\mathbb {P}}} \int _0^t \int _{{ {\mathbb {R}}}}{\mathcal {Z}} _\lambda (s-)\Big [ e^{-\lambda c(\psi (s,z))}-1\Big ]\psi _z(s,z) \lambda (dz)\, ds\\&+\, \lambda \mathbb {E}^{\bar{\mathbb {P}}} \int _0^t \int _{{ {\mathbb {R}}}}{\mathcal {Z}} _\lambda (s-) c(z) \gamma (dz,ds). \end{aligned}$$

The substitution \(y=\psi (s,z)\) in the first integral gives

$$\begin{aligned} \mathbb {E}^{\bar{\mathbb {P}}} {\mathcal {Z}} _\lambda (t)= & {} 1 + \mathbb {E}^{\bar{\mathbb {P}}} \int _0^t \int _{{ {\mathbb {R}}}}{\mathcal {Z}} _\lambda (s-)\Big [ e^{-\lambda c(z)} - 1 +\lambda c(z) \Big ] \gamma (dz,ds). \end{aligned}$$

Solving this linear integral equation (note that \(\mathbb {E}^{\bar{\mathbb {P}}} {\mathcal {Z}} _\lambda (0)=1\)) and using the definition of \({\mathbb {Q}}^\psi \), we obtain

$$\begin{aligned} \mathbb {E}^{{\mathbb {Q}}^\psi } \left[ e^{-\lambda {\xi (t) }}\right]&= \mathbb {E}^{\bar{\mathbb {P}}} \left[ {{\mathcal G }}(t)\, e^{-\lambda {\xi (t)}}\right] = \mathbb {E}^{\bar{\mathbb {P}}} \left[ {\mathcal {Z}} _\lambda (t) \right] \\&= \exp \left( \int _0^t \int _{{ {\mathbb {R}}}}\left[ e^{-\lambda c(z)} - 1 +\lambda c(z) \right] \gamma (dz,ds)\right) , \end{aligned}$$

from which the Proposition follows. \(\square \)


Cite this article

Hausenblas, E., Razafimandimby, P.A. Controllability and qualitative properties of the solutions to SPDEs driven by boundary Lévy noise. Stoch PDE: Anal Comp 3, 221–271 (2015). https://doi.org/10.1007/s40072-015-0047-9
