1 Introduction

During the last few years, continuous-time spectrally negative Markov additive processes (SN-MAPs) have attracted a lot of attention. SN-MAPs generalize Lévy processes with no positive jumps and can be seen as spectrally negative Lévy processes evolving in a Markov environment. Kyprianou and Palmowski [1] proved the existence of the scale matrix for SN-MAPs, which is the counterpart of the scale function of Lévy processes. Ivanovs and Palmowski [2] discussed exit problems for SN-MAPs and gave some identities for two-sided reflected SN-MAPs. These results form the foundation of this paper.

Now, let us introduce some notation which can be found in Kyprianou and Palmowski [1] and Albrecher and Ivanovs [3]. Let \((X(t), J(t))\) be a spectrally negative Markov additive process on a filtered probability space \((\Omega, \mathcal{F}, \mathbf{F}, \mathbb{P})\). The filtration \(\mathbf{F} = \{\mathcal{F}_{t} : t \geq0\}\) is right-continuous and augmented, and for all \(t\geq0\) the process \((X(t), J(t))\) is adapted to \(\mathcal{F}_{t}\). In this paper, \(X(t)\) denotes the surplus of an insurance company and is a real-valued right-continuous process with left limits. The process \(J(t)\) represents the stochastic environment; it is an irreducible Markov chain with finite state space \(E=\{1,\ldots, n\}\), intensity matrix (transition rate matrix) \(Q=(q_{ij})_{i,j\in E}\) and stationary distribution π.

For the SN-MAPs, we have the following property: Given \(\{J (t) = i \}\), the process

$$\bigl(X(t+s)-X(t), J(t+s)\bigr) $$

is independent of \(\mathcal{F}_{t}\) and has the same law as \((X(s)-X(0), J(s))\) conditionally on \(\{J (0)=i\}\) for all \(s, t \geq0\) and \(i\in E\). When \(J(t)=i\), \(X(t)\) evolves as some Lévy process \(X_{i}(t)\), which has the Laplace exponent \(\psi_{i}(\alpha)\), that is,

$$\mathbb{E} {e^{\alpha X_{i}(t)}}=e^{\psi_{i}(\alpha)t},\quad \alpha\geq0. $$

In addition, \(X(t)\) has a jump distributed as \(U_{ij}\) (\(U_{ii}\equiv0\)) when \(J(t)\) switches from i to j, for all \(i, j \in E\). Let \(\tilde{G}_{ij}\) be the moment generating function of \(U_{ij}\).

Letting \(\tilde{G}=(\tilde{G}_{ij})_{i,j=1,2,\ldots,n}\) and \(Q \circ\tilde{G}=(q_{ij}\tilde{G}_{ij})_{i,j=1,2,\ldots,n}\), we can define the matrix exponent of the SN-MAP \(X(t)\):

$$F(\alpha)=Q \circ\tilde{G}+\operatorname{diag}\bigl(\psi_{1}(\alpha), \ldots,\psi _{n}(\alpha)\bigr),\quad\text{for all } \alpha\geq0. $$

Furthermore, it is well known that

$$\mathbb{E}\bigl[\mathrm{e}^{\alpha X(t)}; J(t)\bigr] =\mathrm{e}^{F(\alpha)t}. $$
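
To make this construction concrete, the following minimal numerical sketch (not part of the paper, with made-up parameters) builds \(F(\alpha)\) for a hypothetical two-state SN-MAP in which X behaves as a Brownian motion with drift in each state and an exponential downward jump occurs when J switches states; the identity above is then evaluated via the matrix exponential.

```python
# A minimal numerical sketch (made-up parameters): the matrix exponent F(alpha)
# of a hypothetical two-state SN-MAP with Brownian dynamics in each state and
# exponential downward jumps at environment switches.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])            # intensity matrix of J (rows sum to 0)
drift = np.array([1.5, 0.5])            # drift of X_i
sigma = np.array([1.0, 2.0])            # volatility of X_i
jump_rate = np.array([[np.inf, 3.0],
                      [4.0, np.inf]])   # -U_ij ~ Exp(jump_rate[i, j]); inf means no jump

def psi(i, a):
    """Laplace exponent of the Brownian component in state i."""
    return drift[i] * a + 0.5 * sigma[i] ** 2 * a ** 2

def G_tilde(i, j, a):
    """Moment generating function E[exp(a U_ij)] with U_ij <= 0 exponential."""
    if i == j or np.isinf(jump_rate[i, j]):
        return 1.0
    return jump_rate[i, j] / (jump_rate[i, j] + a)

def F(a):
    n = Q.shape[0]
    QG = np.array([[Q[i, j] * G_tilde(i, j, a) for j in range(n)] for i in range(n)])
    return QG + np.diag([psi(i, a) for i in range(n)])

alpha, t = 0.7, 2.0
print(expm(F(alpha) * t))   # the matrix E[e^{alpha X(t)}; J(t)]
```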

Let \(\tau^{\pm}_{x} \) be the first passage time of \(\pm X(t)\) over the level x, that is,

$$\tau^{\pm}_{x}=\inf\bigl\{ t\geq0:\pm X(t)>x\bigr\} . $$

Since \(X(t)\) has no positive jumps, \(X(\tau^{+}_{x})=x\). In this article, we will use the transition rate matrix Λ of the Markov chain \(\{J(\tau^{+}_{x}), x\geq0\}\), which is defined as

$$\mathbb{P}\bigl[J\bigl(\tau^{+}_{x}\bigr)=j| J(0)=i\bigr]=\bigl( \mathrm{e}^{\Lambda x}\bigr)_{ij},\quad i,j\in E. $$

Let us briefly review Remark 2.1 of Ivanovs and Palmowski [2].

Suppose that \(J(t)\) is absorbed at time \(e_{q}\), where \(e_{q}\) is an exponential random variable with parameter \(q > 0\). This yields a Markov chain with transition rate matrix \(Q-q\mathbb{I}\), which in turn leads to an SN-MAP with matrix exponent \(F(\alpha)-q\mathbb{I}\). Then we have

$$ \mathbb{E}\bigl[\mathrm{e}^{-q\tau^{+}_{x}}; J\bigl( \tau^{+}_{x}\bigr)\bigr] = \mathbb{P}\bigl[\tau^{+}_{x}< e_{q}; J\bigl(\tau^{+}_{x}\bigr)\bigr]=\mathrm{e}^{\Lambda^{q} x}. $$
(1)

The matrix \(\Lambda^{q}\) is closely related to the matrix exponent \(F^{q}(\alpha)=F(\alpha)-q\mathbb{I}\), where \(\mathbb{I}\) is the identity matrix. When \(n=1\), it is easy to see that the matrix \(-\Lambda^{q}\) reduces to the right inverse \(\Phi(q)\) of the Laplace exponent, where \(\Phi(q)=\sup\{\lambda: F(\lambda)=q\}\). In particular, the non-zero eigenvalues of \(-\Lambda^{q}\) coincide with the zeros of \(\operatorname{det}(F^{q}(\alpha))\) in \(\mathbb{C}^{\operatorname{Re}>0}=\{z\in\mathbb{C}: \operatorname{Re}(z)> 0\}\); see Theorem 1 of D’Auria et al. [4].
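
As a hedged illustration of this connection, the sketch below (reusing F and Q from the previous snippet) locates the real zeros of \(\operatorname{det}(F^{q}(\alpha))\) on \((0,\infty)\) by bracketing sign changes; for \(n=1\) this amounts to computing \(\Phi(q)\). Complex zeros, if any, would require a more elaborate root finder.

```python
# A hedged sketch (reusing F and Q from the previous snippet): locate the real
# zeros of det(F(alpha) - q I) on (0, infinity) by bracketing sign changes.
import numpy as np
from scipy.optimize import brentq

def det_Fq(a, q):
    return np.linalg.det(F(a) - q * np.eye(Q.shape[0]))

def real_zeros(q, a_max=10.0, n_grid=2000):
    grid = np.linspace(1e-6, a_max, n_grid)
    vals = [det_Fq(a, q) for a in grid]
    roots = []
    for a0, a1, v0, v1 in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]):
        if v0 * v1 < 0:                  # a sign change brackets a real root
            roots.append(brentq(det_Fq, a0, a1, args=(q,)))
    return roots

print(real_zeros(q=0.5))   # candidate eigenvalues of -Lambda^q with Re > 0
```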

More recently, the concept of Parisian ruin has been considered by many authors. The Parisian implementation delay was first introduced into insurance risk models by Dassios and Wu [5, 6], who discussed the Laplace transform of the Parisian ruin time for the classical compound Poisson risk process. Czarna and Palmowski [7] studied the Laplace transform of the Parisian ruin time for spectrally negative Lévy risk processes. Landriault et al. [8] discussed the Parisian ruin problem in which the delay follows a mixed Erlang distribution. Czarna [9] dealt with a ruin problem for spectrally negative Lévy risk processes with both a Parisian delay and a lower ultimate bankruptcy barrier. Gerber–Shiu functionals at Parisian ruin were discussed by Baurdoux et al. [10]. Using the scale function, Loeffen et al. [11] gave a compact formula for the Parisian ruin probability of a spectrally negative Lévy risk process. In this paper, we generalize the result of Loeffen et al. [11] by extending the surplus process from a spectrally negative Lévy risk process to a spectrally negative Markov additive risk process.

Now we introduce some notation for the Parisian delay. The quantity \(\kappa_{r}\) (\(r>0\)), called the Parisian ruin time, is defined as

$$\kappa_{r}=\inf\{t>r: t-g_{t}>r\},\quad \text{where } g_{t}=\sup\{0\leq s\leq t:X_{s}\geq0\}. $$

In other words, Parisian ruin occurs at the first time an excursion below zero lasts longer than the fixed implementation delay r. The probability \(\mathbb{P}_{x}(\kappa_{r} < \infty)\) is called the Parisian ruin probability and is our main object of interest. For \(x\in\mathbb{R}\), define the \(n\times n\) matrix-valued function \(R(x)\) as:

$$R(x):=\mathbb{P}_{x}\bigl[\kappa_{r} = \infty; J\bigl( \tau^{-}_{0}\bigr)\bigr]. $$

Since the Parisian ruin probability \(\mathbb{P}_{x}[\kappa_{r} <\infty]\) can be expressed in terms of \(R(x)\), it suffices to study the matrix-valued function \(R(x)\).
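
For intuition only, the following crude Monte Carlo sketch treats the scalar case \(n=1\) with a Brownian motion with drift (a particular spectrally negative Lévy process) and counts paths on which an excursion below zero lasts longer than r before a finite horizon T, as a proxy for the Parisian ruin probability; the parameters, horizon, and discretization are hypothetical choices and introduce bias.

```python
# A crude Monte Carlo sketch for the scalar case n = 1: X is a Brownian motion
# with drift simulated on a time grid; we estimate P_x(kappa_r <= T) as a proxy
# for the Parisian ruin probability.  All parameters are hypothetical.
import numpy as np

def parisian_ruin_mc(x=1.0, r=0.5, mu=0.3, sigma=1.0, T=50.0, dt=0.01,
                     n_paths=500, seed=0):
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    ruined = 0
    for _ in range(n_paths):
        X, clock = x, 0.0            # clock approximates t - g_t while X < 0
        for _ in range(n_steps):
            X += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            clock = clock + dt if X < 0 else 0.0
            if clock > r:            # excursion below zero exceeded the delay r
                ruined += 1
                break
    return ruined / n_paths          # estimate of P_x(kappa_r <= T)

print(parisian_ruin_mc())
```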

On the other hand, the scale matrices play an important role in this paper, so we introduce them as follows. The scale matrix \(W(x)\) is characterized by

$$ \int^{\infty}_{0}\mathrm{e}^{-\alpha x}W(x) \,dx=F^{-1}(\alpha), $$

where α is sufficiently large. Following Ivanovs and Palmowski [2], we know that, for \(x>0\), \(W(x)\) is non-singular and

$$ W(x)=\mathrm{e}^{-\Lambda x} L(x). $$

The matrix \(L(x)\) is a positive matrix increasing (as \(x\to\infty\)) to L, the matrix of expected occupation times at zero. Let \(W(0)=\lim_{x\downarrow0}W(x)\); then \(W(0)=L(0)= (L_{ij}(0) )\) with

$$L_{ij}(0)= \textstyle\begin{cases} \frac{1}{d_{i}},& \mbox{if } i=j \mbox{ and } X_{i} \mbox{ is of bounded variation},\\ 0,& \mbox{otherwise}, \end{cases} $$

for all \(i, j\in E\), where \(d_{i}>0\) is the drift coefficient of \(X_{i}\), \(i\in E\).

The second scale matrix \(Z(\alpha, x)\) is defined as:

$$ Z(\alpha, x)=\mathrm{e}^{\alpha x} \biggl(\mathbb{I}- \int^{x}_{0}\mathrm{e}^{-\alpha y}W(y)\,dy F( \alpha) \biggr)\quad \mbox{for } \alpha, x\geq0. $$
(2)

Note that \(Z(\alpha, x)\) is continuous in x and \(Z(\alpha, 0)=\mathbb{I}\). It is well known that \(Z(\alpha, x)\) is analytic in \(\alpha\in\mathbb{C}^{\operatorname{Re}>0}\). From Theorem 1 and Corollary 3 of Ivanovs and Palmowski [2], we obtain

$$\begin{aligned} &\mathbb{P}_{x}\bigl[\tau^{+}_{y} < \tau^{-}_{0}; J\bigl(\tau^{+}_{y}\bigr)\bigr]= W(x)W(y)^{-1}, \end{aligned}$$
(3)
$$\begin{aligned} &\mathbb{E}_{x}\bigl[\mathrm{e}^{\alpha X(\tau^{-}_{0})}; \tau^{-}_{0} < \tau^{+}_{y}, J\bigl(\tau^{-}_{0}\bigr) \bigr] = Z(\alpha, x)-W(x)W(y)^{-1}Z(\alpha, y), \end{aligned}$$
(4)

for all \(y\geq x\geq0\), and \(\alpha\geq0\).
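
Given a numerical routine for the scale matrix W, the second scale matrix \(Z(\alpha,x)\) in (2) can be approximated by quadrature. The sketch below assumes such a routine, here a placeholder called W_num, together with a matrix exponent F as in the earlier snippet; it is only an illustration of definition (2), not a method from the paper.

```python
# A quadrature sketch for the second scale matrix Z(alpha, x) in (2), assuming
# a numerical routine W_num(y) for W(y) (a placeholder) and a matrix exponent F.
import numpy as np
from scipy.integrate import trapezoid

def Z(alpha, x, W_num, F, n_grid=400):
    n = F(alpha).shape[0]
    ys = np.linspace(0.0, x, n_grid)
    integrand = np.array([np.exp(-alpha * y) * W_num(y) for y in ys])  # (n_grid, n, n)
    integral = trapezoid(integrand, ys, axis=0)
    return np.exp(alpha * x) * (np.eye(n) - integral @ F(alpha))
```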

All the above identities play an important role in the study of Parisian ruin. Furthermore, we assume that \(\lim_{t\to\infty}\frac{X(t)}{t}=\mu>0\) (equivalently, \(\mathbb{E}_{\pi}[X(1)] > 0\), the expectation being taken with the environment started from its stationary distribution π); otherwise Parisian ruin happens with probability one.

In Sect. 2, we state our main result, a compact formula for the Parisian ruin probability. In Sect. 3, we introduce the Jordan chains of \(\Lambda^{q}\) and use them to prove the result.

Finally, we introduce some notation. Denote by \(\mathbb{P}_{x, i}\) the law of \((X, J)\) given \(\{X(0)=x, J(0)=i\}\), where \(x\in\mathbb{R}\), \(i\in E\). Write \(\mathbb{E}_{x}[Y; J(\tau)]\) for the matrix with \((i, j)\)th element \(\mathbb{E}[Y{{1}}_{\{J(\tau)=j\}} |J(0)=i, X(0)=x]\), where Y is an arbitrary random variable, τ is a (random) time, and \({{1}}_{A}\) denotes the indicator function of an event A. Similarly, write \(\mathbb{P}_{x}[A; J(\tau)]\) for the matrix with \((i, j)\)th element \(\mathbb{P}_{x, i}[A, J(\tau)=j]\), where A is an event. If \(x=0\), we simply drop the subscript. Let 1 denote the column vector of ones.

2 The main results

In this section, we give an explicit expression for \(R(x)\) and a compact formula for the Parisian ruin probability.

Theorem 1

For \(x\in\mathbb{R}\) and \(r>0\), we have

$$ R(x)=\mathbb{P}_{x}\bigl[\kappa_{r}=\infty; J \bigl(\tau^{-}_{0}\bigr)\bigr]=D(x)+H(x,r)\bigl[\mathbb {I}-H(0,r) \bigr]^{-1}W(0) D, $$
(5)

where

$$\begin{aligned} &D(x)=\mathbb{P}_{x}\bigl[\tau^{-}_{0}=\infty; J\bigl( \tau^{-}_{0}\bigr)\bigr]= \int^{x}_{0}W(z)\,dz Q+W(x)D, \\ &D=\lim_{a\uparrow\infty}W(a)^{-1}\biggl[{\mathbb{I}}- \int^{a}_{0}W(z)\,dz Q\biggr], \end{aligned}$$

and the Laplace transform (with respect to r) of \(H(x,r)\) is given by

$$\begin{aligned} & \int^{\infty}_{0}\mathrm{e}^{-q r}H(x,r)\,dr= \frac{1}{q}\bigl[\eta(q,x)-W(x)\eta(q)\bigr], \\ & \eta(q,x)=\mathrm{e}^{-\Lambda^{q} x}-q \int^{x}_{0}W(z)\mathrm{e}^{-\Lambda^{q} (x-z)}\,dz, \\ &\eta(q)=\lim_{y\to\infty}W^{-1}(y)\eta(q,y). \end{aligned}$$
(6)

From Theorem 1 we obtain the following corollary.

Corollary 1

For \(x\in\mathbb{R}\) and \(r>0\), if \(Q {\mathbf{1}}=0\), then

$$ \mathbb{P}_{x}[\kappa_{r}=\infty]= \bigl[D(x)+H(x,r)\bigl[\mathbb{I}-H(0,r)\bigr]^{-1}W(0) D\bigr]{ \mathbf{1}} $$
(7)

and

$$ \mathbb{P}_{x}[\kappa_{r}< \infty]={\pi }- \bigl[D(x)+H(x,r)\bigl[\mathbb {I}-H(0,r)\bigr]^{-1}W(0) D\bigr]{ \mathbf{1}}, $$
(8)

where π is the stationary distribution of the Markov chain \(J(t)\).

Remark

If \(n=1\), that is, the state space of the Markov chain has only one state, the MAP \((X(t), J(t))\) reduces to a spectrally negative Lévy process. In this special case, \(D=0\), \(D(x)=W(x)\mathbb{E}X_{1}\), \(-\Lambda^{q}=\Phi(q)\), and \(\eta(q)=0\), so Eq. (8) is just Eq. (2) of Loeffen et al. [11].
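
For readers who wish to evaluate Theorem 1 numerically, the following hedged sketch computes the building block \(\eta(q,x)\) from (6), assuming that the matrix \(\Lambda^{q}\) and a scale-matrix routine W_num are already available (both are placeholders, not quantities derived here); \(\eta(q)\) could then be approximated by evaluating \(W(y)^{-1}\eta(q,y)\) at a large y.

```python
# A hedged sketch of the building block eta(q, x) from (6), assuming the matrix
# Lambda_q (standing for Lambda^q) and a scale-matrix routine W_num(z) are
# available numerically; both are placeholders.
import numpy as np
from scipy.integrate import trapezoid
from scipy.linalg import expm

def eta(q, x, Lambda_q, W_num, n_grid=400):
    zs = np.linspace(0.0, x, n_grid)
    integrand = np.array([W_num(z) @ expm(-Lambda_q * (x - z)) for z in zs])
    return expm(-Lambda_q * x) - q * trapezoid(integrand, zs, axis=0)

# eta(q) = lim_{y -> infinity} W(y)^{-1} eta(q, y) could then be approximated by
# np.linalg.solve(W_num(y), eta(q, y, Lambda_q, W_num)) at a large y.
```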

3 Proof of Theorem 1

The proof of Theorem 1 relies on a spectral representation of the matrix \(\Lambda^{q}\) (see (1)). From Albrecher and Ivanovs [3], we collect some results for the matrix \(-\Lambda^{q}\). Let γ be an eigenvalue of the matrix \(-\Lambda^{q}\) with \(\operatorname{Re}(\gamma)>0\), and let \(\nu_{1},\ldots,\nu_{j}\) be a Jordan chain of \(-\Lambda^{q}\), i.e., \(-\Lambda^{q}{\nu }_{1}=\gamma\nu_{1}\) and \(-\Lambda^{q}\nu _{i}=\gamma \nu _{i}+\nu _{i-1}\) for \(i=2,3,\ldots,j\). By the theory of Jordan chains, we have

$$\begin{aligned} &e^{-\Lambda^{q} x}\nu_{1} = e^{\gamma x} \nu_{1}, \end{aligned}$$
(9)
$$\begin{aligned} &e^{-\Lambda^{q} x}\nu_{j} = \sum _{i=0}^{j-1}\frac{x^{i}}{i!}{\mathrm{e}}^{\gamma x} \nu_{j-i},\quad j\geq2, \end{aligned}$$
(10)

for all \(x\in\mathbb{R}\). Moreover, this Jordan chain turns out to be a generalized Jordan chain of an analytic matrix function \(F^{q}(\alpha)\), \(\operatorname{Re}(\alpha)>0\), corresponding to the eigenvalue γ, i.e.,

$$\begin{aligned} &F(\gamma)\nu_{1}= q\mathbb{I} \nu_{1}, \end{aligned}$$
(11)
$$\begin{aligned} &\sum_{i=0}^{j-1} \frac{1}{i!}\frac{d^{i}}{d\gamma^{i}}F^{q}(\gamma)\nu_{j-i}= \sum _{i=0}^{j-1}\frac{1}{i!} \frac{d^{i}}{d\gamma^{i}}F(\gamma)\nu _{j-i}-q\mathbb{I} \nu_{j}=0, \quad j\geq2. \end{aligned}$$
(12)
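
The identities (9) and (10) can be checked numerically on a toy example. The sketch below builds a single \(3\times3\) Jordan block with eigenvalue γ (a made-up matrix, not one arising from a MAP) and verifies that the matrix exponential acts on the Jordan chain as stated.

```python
# A sanity check for (9) and (10) on a made-up example: A plays the role of
# -Lambda^q and is a single 3x3 Jordan block with eigenvalue gamma, so that
# A nu_1 = gamma nu_1 and A nu_i = gamma nu_i + nu_{i-1}.
import numpy as np
from math import factorial
from scipy.linalg import expm

gamma, m = 0.8, 3
A = gamma * np.eye(m) + np.diag(np.ones(m - 1), 1)   # ones on the superdiagonal
nus = [np.eye(m)[:, i] for i in range(m)]            # Jordan chain nu_1, ..., nu_m

x = 1.7
for j in range(1, m + 1):
    lhs = expm(A * x) @ nus[j - 1]                    # e^{-Lambda^q x} nu_j
    rhs = sum(x**i / factorial(i) * np.exp(gamma * x) * nus[j - 1 - i]
              for i in range(j))
    assert np.allclose(lhs, rhs)
print("identities (9)-(10) verified on the toy Jordan block")
```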

Also, when X has paths of unbounded variation, the proof of exit problems often relies on an approximation idea, which has already appeared in various papers; see Albrecher and Ivanovs [3], Czarna [9] and Baurdoux et al. [10]. For this reason we consider a stopping time \(\kappa^{\epsilon}_{r}\), the first time that an excursion below zero, starting when X goes below zero and ending when X returns to the level ϵ (\(\epsilon>0\)) rather than zero, lasts longer than the fixed implementation delay r. More precisely, we define \(\kappa^{\epsilon}_{r}\) as

$$\kappa^{\epsilon}_{r}=\inf\bigl\{ t>r: t-g^{\epsilon}_{t}>r, X_{t-r}< 0\bigr\} ,\quad \text{where } g^{\epsilon}_{t}= \sup\{0\leq s\leq t: X_{s}\geq\epsilon\}. $$

It is easy to see that, for \(0<\epsilon'<\epsilon\),

$$\bigl\{ \kappa^{\epsilon}_{r}=\infty\bigr\} \subseteq\bigl\{ \kappa^{\epsilon'}_{r}=\infty\bigr\} \quad\mbox{and}\quad \bigcup_{\epsilon>0}\bigl\{ \kappa^{\epsilon}_{r}=\infty\bigr\} =\{ \kappa_{r}=\infty\}, $$

therefore

$$\mathbb{P}(\kappa_{r}=\infty)=\lim_{\epsilon\downarrow0}\mathbb{P} \bigl(\kappa^{\epsilon}_{r}=\infty\bigr). $$

Let

$$R^{\epsilon}(x):=\mathbb{P}_{x}\bigl[\kappa^{\epsilon}_{r} = \infty; J\bigl(\tau^{-}_{0}\bigr)\bigr], $$

then

$$R(x)=\lim_{\epsilon\downarrow0}R^{\epsilon}(x). $$

Since the process X has right-continuous paths, \(\tau^{+}_{0}=\lim_{\epsilon\downarrow0}\tau^{+}_{\epsilon}\).

For \(x<0\), due to the strong Markov property and spectral negativity, we have

$$\begin{aligned} R^{\epsilon}(x)=\mathbb{E}_{x}\bigl[I_{\{\tau^{+}_{\epsilon}\leq r\}}; J\bigl( \tau^{+}_{\epsilon}\bigr)\bigr]\mathbb{E}_{\epsilon}\bigl[I_{\{\kappa^{\epsilon }_{r}=\infty\}}; J\bigl(\tau^{-}_{0}\bigr)\bigr]=\mathbb{E}_{x} \bigl[I_{\{\tau^{+}_{\epsilon}\leq r\}}; J\bigl(\tau^{+}_{\epsilon}\bigr)\bigr]R^{\epsilon}( \epsilon). \end{aligned}$$

Consequently, for \(x\geq0\),

$$\begin{aligned} R^{\epsilon}(x) =&\mathbb{P}_{x}\bigl[ \tau^{-}_{0}=\infty; \kappa^{\epsilon}_{r} = \infty; J \bigl(\tau^{-}_{0}\bigr)\bigr]+\mathbb{E}_{x} \bigl[I_{\{\kappa^{\epsilon}_{r}=\infty\}}I_{\{\tau^{-}_{0}< \infty\}}; J\bigl(\tau^{-}_{0}\bigr)\bigr] \\ =&\mathbb{P}_{x}\bigl[\tau^{-}_{0}=\infty; J\bigl( \tau^{-}_{0}\bigr)\bigr] \\ &{}+ \int^{0}_{-\infty}\mathbb{P}_{x}\bigl[X\bigl( \tau^{-}_{0}\bigr)\in\,dy; \tau^{-}_{0}< \infty; J\bigl( \tau^{-}_{0}\bigr)\bigr]\mathbb{E}_{y}\bigl[I_{\{\tau^{+}_{\epsilon}\leq r\}}, J \bigl(\tau^{+}_{\epsilon}\bigr)\bigr]R^{\epsilon}(\epsilon). \end{aligned}$$
(13)

In fact, it is easy to verify that Eq. (13) is valid for any \(x\in\mathbb{R}\).

From Theorem 8 in Kyprianou and Palmowski [1], we know

$$\mathbb{P}_{x}\bigl[\tau^{-}_{0}< \infty; J\bigl( \tau^{-}_{0}\bigr)\bigr]={\mathbb{I}}- \int^{x}_{0}W(z)\,dz\, Q-W(x) D, $$

where

$$D=\lim_{a\uparrow\infty}W(a)^{-1}\biggl[{\mathbb{I}}- \int^{a}_{0}W(z)\,dz\, Q\biggr]. $$

So

$$D(x)=\mathbb{P}_{x}\bigl[\tau^{-}_{0}=\infty; J\bigl( \tau^{-}_{0}\bigr)\bigr]= \int^{x}_{0}W(z)\,dz\, Q+W(x) D. $$
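
Numerically, D and D(x) can be approximated by truncating the limit at a large level a. The sketch below again assumes a placeholder scale-matrix routine W_num and the intensity matrix Q; the truncation level is a hypothetical choice and is not part of the proof.

```python
# A truncation sketch for D and D(x) (Theorem 8 of Kyprianou and Palmowski [1]),
# assuming a placeholder scale-matrix routine W_num(z) and the intensity matrix Q.
import numpy as np
from scipy.integrate import trapezoid

def int_W(x, W_num, n_grid=400):
    """Approximate the matrix integral of W over [0, x]."""
    zs = np.linspace(0.0, x, n_grid)
    return trapezoid(np.array([W_num(z) for z in zs]), zs, axis=0)

def D_matrix(Q, W_num, a=50.0):
    """D = lim_{a -> infinity} W(a)^{-1} [I - int_0^a W(z) dz Q], truncated at a."""
    n = Q.shape[0]
    return np.linalg.solve(W_num(a), np.eye(n) - int_W(a, W_num) @ Q)

def D_of_x(x, Q, W_num, a=50.0):
    """D(x) = int_0^x W(z) dz Q + W(x) D."""
    return int_W(x, W_num) @ Q + W_num(x) @ D_matrix(Q, W_num, a)
```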

Let

$$\begin{aligned} H(x,r,\epsilon) =& \int^{0}_{-\infty}\mathbb{P}_{x}\bigl[X\bigl( \tau^{-}_{0}\bigr)\in\,dy; \tau^{-}_{0}< \infty; J\bigl( \tau^{-}_{0}\bigr)\bigr]\mathbb{E}_{y}\bigl[I_{\{\tau^{+}_{\epsilon}\leq r\}}; J \bigl(\tau^{+}_{\epsilon}\bigr)\bigr], \\ =&\mathbb{E}_{x}\bigl[I_{\{\tau^{-}_{0}< \infty\}}\mathbb{P}_{X(\tau ^{-}_{0})} \bigl(\tau^{+}_{\epsilon}\leq r\bigr); J\bigl(\tau^{-}_{0}\bigr)\bigr], \end{aligned}$$

and

$$\begin{aligned} H(x,r) :=&\lim_{\epsilon\downarrow0}H(x,r,\epsilon)= \int^{0}_{-\infty}\mathbb{P}_{x}\bigl[X\bigl( \tau^{-}_{0}\bigr)\in\,dy; \tau^{-}_{0}< \infty; J\bigl( \tau^{-}_{0}\bigr)\bigr]\mathbb{E}_{y}\bigl[I_{\{\tau^{+}_{0}\leq r\}}; J \bigl(\tau^{+}_{0}\bigr)\bigr] \\ =& \int^{0}_{-\infty}\mathbb{P}_{x}\bigl[X\bigl( \tau^{-}_{0}\bigr)\in\,dy; \tau^{-}_{0}< \infty; J\bigl( \tau^{-}_{0}\bigr)\bigr]\mathbb{E}\bigl[I_{\{\tau^{+}_{-y}\leq r\}}; J\bigl( \tau^{+}_{-y}\bigr)\bigr], \end{aligned}$$
(14)

where the last equality uses spatial homogeneity.

Now, we can re-write Eq. (13) as

$$ R^{\epsilon}(x)= D(x)+ H(x,r,\epsilon)R^{\epsilon}( \epsilon). $$
(15)

Letting ϵ tend to zero on both sides of (15), we obtain

$$ R(x)= D(x)+ H(x,r)R(0). $$
(16)

Setting \(x=0\) in (16) and noting that \(D(0)=W(0)D\), we get

$$R(0)=\bigl[\mathbb{I}-H(0,r)\bigr]^{-1}W(0)D. $$

Finally, we analyze the matrix function \(H(x,r)\) in (14) via its Laplace transform in r. For \(y<0\) and \(q>0\), integrating by parts and using (1), one gets

$$\begin{aligned} \int^{\infty}_{0}{\mathrm{e}}^{-q r}\mathbb{E} \bigl[I_{\{\tau^{+}_{-y}\leq r\}}; J\bigl(\tau^{+}_{-y}\bigr)\bigr]\,dr =& -\frac{1}{q} \int^{\infty}_{0}\mathbb{P}\bigl[\tau ^{+}_{-y} \leq r; J\bigl(\tau^{+}_{-y}\bigr)\bigr]\,d{\mathrm{e}}^{-q r} \\ =&\frac{1}{q}\mathbb {E}\bigl[{\mathrm{e}}^{-q \tau^{+}_{-y}}; J\bigl( \tau^{+}_{-y}\bigr)\bigr] =\frac{1}{q}{\mathrm{e}}^{-\Lambda^{q} y}. \end{aligned}$$

Using Fubini’s theorem and the above equation, we obtain

$$\begin{aligned} \int^{\infty}_{0}{\mathrm{e}}^{-q r}H(x,r)\,dr =& \int^{0}_{-\infty}\mathbb{P}_{x}\bigl[X\bigl( \tau^{-}_{0}\bigr)\in \,dy; \tau^{-}_{0}< \infty, J\bigl( \tau^{-}_{0}\bigr)\bigr] \\ &{}\times \int^{\infty}_{0}{\mathrm{e}}^{-q r}\mathbb{E} \bigl[I_{\{\tau^{+}_{-y}\leq r\}}; J\bigl(\tau^{+}_{-y}\bigr)\bigr]\,dr \\ =&\frac{1}{q} \int^{0}_{-\infty}\mathbb{P}_{x}\bigl[X\bigl( \tau^{-}_{0}\bigr)\in\,dy; \tau^{-}_{0}< \infty, J\bigl( \tau^{-}_{0}\bigr)\bigr] {\mathrm{e}}^{-\Lambda^{q} y}. \end{aligned}$$
(17)

We will split the proof of Eq. (6) into two parts:

Part I: Assume that the matrix \(-\Lambda^{q}\) has n linearly independent eigenvectors. Let ν be such an eigenvector corresponding to the eigenvalue γ, that is, \(-\Lambda^{q}{\nu }=\gamma\nu\). Multiplying both sides of Eq. (17) by ν and using (4) and (9), one gets

$$\begin{aligned} \int^{\infty}_{0}{\mathrm{e}}^{-q r}H(x,r)\,dr \nu =&\frac {1}{q} \int^{0}_{-\infty}\mathbb{P}_{x}\bigl[X\bigl( \tau^{-}_{0}\bigr)\in \,dy; \tau^{-}_{0}< \infty, J\bigl( \tau^{-}_{0}\bigr)\bigr] {\mathrm{e}}^{-\Lambda^{q} y} \nu \\ =& \frac{1}{q} \int^{0}_{-\infty}\mathbb{P}_{x}\bigl[X\bigl( \tau^{-}_{0}\bigr)\in \,dy; \tau^{-}_{0}< \infty, J\bigl( \tau^{-}_{0}\bigr)\bigr] {\mathrm{e}}^{\gamma y} \nu \\ =& \frac{1}{q} \mathbb{E}_{x}\bigl[{\mathrm{e}}^{\gamma X(\tau^{-}_{0})}; \tau^{-}_{0}< \infty, J\bigl(\tau^{-}_{0}\bigr)\bigr] \nu \\ =& \frac{1}{q} \lim_{a\to\infty}\mathbb{E}_{x}\bigl[{ \mathrm{e}}^{\gamma X(\tau^{-}_{0})}; \tau^{-}_{0}< \tau^{+}_{a}, J\bigl( \tau^{-}_{0}\bigr)\bigr] \nu \\ =& \frac{1}{q} \Bigl[Z(\gamma, x) \nu-W(x)\lim_{a\to\infty}W(a)^{-1}Z( \gamma,a) \nu\Bigr]. \end{aligned}$$
(18)

From the definition of \(Z(\gamma,x)\) and (11), we know

$$\begin{aligned} Z(\gamma, x) \nu =&{\mathrm{e}}^{\gamma x} \nu- \int^{x}_{0}W(z){\mathrm{e}}^{\gamma (x-z)}\,dz\, F( \gamma) \nu \\ =&{\mathrm{e}}^{\gamma x} \nu- \int^{x}_{0}W(z){\mathrm{e}}^{\gamma (x-z)}\,dz\, q \mathbb{I} \nu \\ =&{\mathrm{e}}^{\gamma x} \nu-q \int^{x}_{0}W(z){\mathrm{e}}^{\gamma (x-z)}\,dz\, \nu \\ =&\biggl[\mathrm{e}^{-\Lambda^{q} x}-q \int^{x}_{0}W(z)\mathrm{e}^{-\Lambda^{q} (x-z)}\,dz\biggr] \nu=\eta(q,x) \nu. \end{aligned}$$

Then Eq. (18) can be re-written as

$$\begin{aligned} \int^{\infty}_{0}{\mathrm{e}}^{-q r}H(x,r)\,dr\, \nu =& \frac{1}{q} \Bigl[\eta(q,x)-W(x)\lim_{a\to\infty}W(a)^{-1} \eta(q,a)\Bigr] \nu \\ =&\frac{1}{q} \bigl[\eta(q,x)-W(x)\eta(q)\bigr] \nu, \end{aligned}$$
(19)

where the matrix \(\eta(q)=\lim_{a\to\infty}W(a)^{-1}\eta(q,a)\) is well-defined, because \(\lim_{a\to\infty}\mathbb{E}_{x}[{\mathrm{e}}^{\gamma X(\tau^{-}_{0})}; \tau^{-}_{0}<\tau^{+}_{a} ,J(\tau^{-}_{0})]\) exists.

Finally, since (19) holds for each of the n linearly independent eigenvectors of \(-\Lambda^{q}\), we have

$$ \int^{\infty}_{0}{\mathrm{e}}^{-q r}H(x,r)\,dr= \frac{1}{q} \bigl[\eta(q,x)-W(x)\eta(q)\bigr]. $$
(20)

Part II: In general, consider a Jordan chain \(\nu_{1},\ldots ,\nu_{j}\) of \(-\Lambda^{q}\) corresponding to an eigenvalue γ. Multiplying both sides of Eq. (17) by \(\nu_{j}\) and using (10), one gets

$$\begin{aligned} \int^{\infty}_{0}{\mathrm{e}}^{-q r}H(x,r)\,dr\, \nu_{j} =&\frac {1}{q} \int^{0}_{-\infty}\mathbb{P}_{x}\bigl[X\bigl( \tau^{-}_{0}\bigr)\in \,dy; \tau^{-}_{0}< \infty, J\bigl( \tau^{-}_{0}\bigr)\bigr] {\mathrm{e}}^{-\Lambda^{q} y} \nu _{j} \\ =& \frac{1}{q} \int^{0}_{-\infty}\mathbb{P}_{x}\bigl[X\bigl( \tau^{-}_{0}\bigr)\in \,dy; \tau^{-}_{0}< \infty, J\bigl( \tau^{-}_{0}\bigr)\bigr] \sum_{i=0}^{j-1} \frac{1}{i!}\frac {\partial^{i}}{\partial\gamma^{i}} {\mathrm{e}}^{\gamma y} \nu_{j-i} \\ =& \frac{1}{q} \lim_{a\to\infty}\sum _{i=0}^{j-1}\frac{1}{i!}\frac{\partial ^{i}}{\partial\gamma^{i}} \mathbb{E}_{x}\bigl[{\mathrm{e}}^{\gamma X(\tau^{-}_{0})}; \tau^{-}_{0}< \tau^{+}_{a}, J\bigl(\tau^{-}_{0}\bigr)\bigr] \nu_{j-i} \\ =& \frac{1}{q} \lim_{a\to\infty}\sum _{i=0}^{j-1}\frac{1}{i!}\frac{\partial ^{i}}{\partial\gamma^{i}} \bigl[Z( \gamma, x)-W(x)W(a)^{-1}Z(\gamma, a)\bigr] \nu_{j-i}. \end{aligned}$$
(21)

Also

$$\begin{aligned} \frac{\partial^{i}}{\partial\gamma^{i}}Z(\gamma, x) =&\frac{\partial ^{i}}{\partial\gamma^{i}}\biggl[{\mathrm{e}}^{\gamma x} \mathbb{I}- \int^{x}_{0}W(z){\mathrm{e}}^{\gamma (x-z)}\,dz\, F( \gamma)\biggr] \\ =&x^{i}{\mathrm{e}}^{\gamma x}\mathbb{I}-\sum _{k=0}^{i}\frac{i!}{k! (i-k)!} \int^{x}_{0}(x-z)^{k}W(z){ \mathrm{e}}^{\gamma (x-z)}\,dz\, F^{(i-k)}(\gamma). \end{aligned}$$

Interchanging the order of summation and using Eqs. (10) and (12), one obtains

$$\begin{aligned} &\sum_{i=0}^{j-1} \frac{1}{i!}\frac{\partial^{i}}{\partial \gamma^{i}} Z(\gamma, x) \nu_{j-i} \\ &\quad =\sum _{i=0}^{j-1}\frac{1}{i!}x^{i}{ \mathrm{e}}^{\gamma x}\mathbb{I} \nu_{j-i}-\sum _{k=0}^{j-1}\frac{1}{k!} \int^{x}_{0}(x-z)^{k}{\mathrm{e}}^{\gamma (x-z)}W(z) \,dz\, q\mathbb{I} \nu_{j-k} \\ &\quad =\biggl[\mathrm{e}^{-\Lambda^{q} x}-q \int^{x}_{0}W(z)\mathrm{e}^{-\Lambda^{q} (x-z)}\,dz\biggr] \nu_{j}=\eta(q,x) \nu_{j}. \end{aligned}$$
(22)

Using (22), Eq. (21) can be written as

$$\begin{aligned} \int^{\infty}_{0}{\mathrm{e}}^{-q r}H(x,r)\,dr\, \nu_{j} =& \frac{1}{q} \Bigl[\eta(q,x)-W(x)\lim _{a\to\infty}W(a)^{-1}\eta(q,a)\Bigr] \nu_{j} \\ =&\frac{1}{q} \bigl[\eta(q,x)-W(x)\eta(q)\bigr] \nu_{j}. \end{aligned}$$

Since the Jordan chains of \(-\Lambda^{q}\) together span \(\mathbb{C}^{n}\), the matrix identity (20), and hence (6), holds in general. The proof is complete.