1 Introduction

In the classical non-life insurance risk model, the Lundberg-Cramér surplus process has the form

$$ U^{0}(t)=u+ct-\sum_{i=1}^{N(t)}Y_{i}, $$
(1.1)

where \(u \geq0\) is the initial capital of an insurance company, \(c >0\) is the rate of premium income, \(\{N(t),t \geq0 \}\), which represents the total number of claims up to time t, is a homogeneous Poisson process with intensity λ, \(Y_{i}\) is the amount of the ith claim, and \(\{Y_{i},i \geq1 \}\) is a sequence of nonnegative independent and identically distributed random variables that is also independent of \(N(t)\). See Asmussen and Albrecher [1] and the references therein for this well-known model.

As an alternative, many papers assume that the premium income is no longer a linear function of time t. For example, Boikov [2] generalized the classical risk model to the case where the premium income is modeled as another compound Poisson process, and derived integral equations and exponential bounds for the non-ruin probability. Melnikov [3] and Wang et al. [4] also focused on this kind of risk model. In addition, perturbed risk models have been discussed by many authors since the pioneering work of Dufresne and Gerber [5]. See, for example, Furer and Schmidli [6], Schmidli [7], and the references therein.

Recently, actuaries have been paying increasing attention to models with interest rates or investment returns because of their practical importance. For example, Sundt and Teugels [8] and Cai and Dickson [9] studied the compound Poisson risk model with a constant interest force. Paulsen and Gjessing [10] and Kalashnikov and Norberg [11] considered the classical risk model with stochastic investment. For a perturbed risk process with investment, see Cai and Yang [12] and Zhu et al. [13]. Melnikov [3], Wang et al. [4], and Wei et al. [14] focused on risk models with stochastic premiums in which all of the insurer's capital is invested in stock.

In this paper, we consider a perturbed risk model with stochastic premiums and constant interest force

$$\begin{aligned} U(t) =&ue^{rt}+c \int_{0}^{t}e^{r(t-s)}\,\mathrm{d}s+ \int_{0}^{t}e^{r(t-s)}\, \mathrm{d}\sum _{i=1}^{N_{1}(s)}X_{i} \\ &{}- \int_{0}^{t}e^{r(t-s)}\,\mathrm{d}\sum _{i=1}^{N_{2}(s)}Y_{i} +\sigma \int_{0}^{t}e^{r(t-s)}\,\mathrm{d}B(s), \end{aligned}$$
(1.2)

where \(\{U(t), t \geq0\}\) denotes the surplus process, \(c>0\) is a fixed constant representing the premium income rate, and \(\{X_{i},i \geq1\}\) account for the extra stochastic premiums, whose arrival times constitute the counting process \(\{N_{1}(t),t \geq0\}\). \(\{Y_{i},i \geq1\}\) denote the claim sizes, with \(\{N_{2}(t),t \geq0\}\) being the total number of claims up to time t. \(\{B(t),t \geq0\}\) is a standard Brownian motion, which adds additional uncertainty to the aggregate claims or the premiums because of market fluctuations, and \(\sigma\geq0\) is the diffusion coefficient. \(r >0\) is the constant interest force, implying, for example, that the insurance company invests any surplus into a bank account.

Throughout this paper, we assume that:

  • \(\{N_{1}(t),t \geq0\}\) and \(\{N_{2}(t),t \geq0\}\) are Poisson processes with intensities \(\lambda_{1}\) and \(\lambda_{2}\), respectively;

  • \(\{X_{i},i \geq1\}\) and \(\{Y_{i},i \geq1\}\) are two sequences of i.i.d. random variables with common distributions \(F(x)\) and \(G(y)\), respectively;

  • \(\{X_{i},i \geq1\}\), \(\{Y_{i},i \geq1\}\), \(\{N_{1}(t),t \geq0\}\), \(\{N_{2}(t),t \geq0\}\) and \(\{B(t),t \geq0\}\) are mutually independent;

  • the positive safety loading condition holds true, i.e.,

    $$ c+\lambda_{1}EX>\lambda_{2}EY. $$
    (1.3)

We define the ruin time, infinite-time ruin probability, and finite-time ruin probability as follows:

$$\begin{aligned}& T=\inf \bigl\{ t \geq0, U(t)< 0 \bigr\} \quad \bigl(\inf\{\emptyset\}=\infty \bigr); \\& \psi(u)=P \bigl(T< \infty|U(0)=u \bigr); \\& \psi(u,t_{0})=P \bigl(T \leq t_{0}|U(0)=u \bigr). \end{aligned}$$

It is well known that \(\psi(u,t_{1}) \leq\psi(u,t_{2}) \leq\cdots\leq \psi(u)\) for \(t_{1}< t_{2}< \cdots\) and \(\lim_{t_{0} \rightarrow\infty}\psi(u,t_{0})=\psi(u)\).

In the rest of this paper, we consider the upper bounds and the Lundberg-Cramér approximation for the infinite-time ruin probability, and we obtain the asymptotic formula for the finite-time ruin probability when the claim size is heavy-tailed. The results are shown in Section 2 and the proofs are given in Section 3.

2 Main results

In risk theory, upper bounds and the asymptotic behavior of the ruin probability are basic results, so we also discuss these problems for model (1.2). For notational convenience, we introduce

  • \(m_{1}(\eta)=E (e^{\eta X} )= \int_{0}^{\infty}e^{\eta x}\,\mathrm{d}F(x)\), \(m_{2}(\eta)=E (e^{\eta Y} )= \int_{0}^{\infty}e^{\eta y}\, \mathrm{d}G(y)\);

  • \(\theta(z)= \frac{1}{2}\sigma^{2}z^{2}-cz+\lambda _{1}(m_{1}(-z)-1)+\lambda_{2}(m_{2}(z)-1)\);

  • \(\eta_{0}=\sup\{\eta>0,m_{2}(\eta) < \infty\}\), \(\gamma=\sup \{\eta>0,\sup_{t>0} \int_{0}^{t} \theta(\eta e^{-rs})\,\mathrm{d}s<\infty \}\).

Theorem 2.1

Assume that \(\eta_{0}>0\). Then for any \(0<\eta<\gamma\), we have

$$ \psi(u) \leq\sup_{t>0} \exp \biggl\{ \int_{0}^{t} \theta \bigl(\eta e^{-rs} \bigr) \, \mathrm{d}s \biggr\} e^{-\eta u}. $$
(2.1)

Remark 2.1

By (1.3), the convexity of \(\theta(z)\), and the facts that \(\theta'(0)=\lambda_{2}EY-c-\lambda_{1}EX<0\), \(\theta(0)=0\), and \(\theta(z)\rightarrow\infty\) as \(z\rightarrow \eta_{0}\), there must exist a unique positive number \(z_{0}\) such that \(\theta(z_{0})=0\). Since \(\theta(z)<0\) for \(0<z< z_{0}\), we have \(\int_{0}^{t} \theta(\eta e^{-r s})\,\mathrm{d}s<0\) for any \(0<\eta\leq z_{0}\) and \(t>0\), so \(\sup_{t>0} \int_{0}^{t} \theta(\eta e^{-r s})\,\mathrm{d}s \leq0\) for any \(0<\eta\leq z_{0}\). Therefore, taking \(\eta=z_{0}\) in (2.1), we get \(\psi(u) \leq e^{-z_{0}u}\). The right-hand side is exactly the upper bound for the risk model without investment (see Melnikov [3]), and \(z_{0}\) is the corresponding adjustment coefficient. The inequality suggests that the ruin probability with interest is smaller than the one without interest.

On the other hand, since \(\int_{0}^{t} \theta(\eta e^{-rs})\,\mathrm{d}s= \int_{\eta e^{-rt}}^{\eta} \frac{\theta(s)}{rs}\,\mathrm{d}s\), when \(\eta>z_{0}\) the supremum over \(t>0\) in (2.1) is attained at the point where \(\eta e^{-rt}=z_{0}\), and (2.1) can be written in the much simpler form

$$\psi(u) \leq\exp \biggl\{ \int_{z_{0}}^{\eta} \frac{\theta(s)}{rs}\, \mathrm{d}s \biggr\} e^{-\eta u}, $$

getting rid of the supremum.
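For illustration only, the following small sketch (Python with NumPy/SciPy) computes \(\theta(z)\), locates the adjustment coefficient \(z_{0}\), and evaluates the simplified bound \(\exp \{ \int_{z_{0}}^{\eta} \theta(s)/(rs)\,\mathrm{d}s \} e^{-\eta u}\). The exponential premium and claim size distributions, and all parameter values, are an assumption borrowed from Example 2.1 below, not part of the general model.

```python
# Illustrative sketch only; exponential X and Y are an assumption (cf. Example 2.1).
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

c, sigma, r = 1.0, 0.1, 0.08           # premium rate, diffusion coefficient, interest force
lam1, lam2 = 100.0, 50.0               # premium and claim arrival intensities
rate_X, rate_Y = 5.0 / 3.0, 1.0        # X ~ exp(5/3), Y ~ exp(1), so eta_0 = rate_Y = 1

def theta(z):
    """theta(z) = sigma^2 z^2/2 - c z + lam1 (m1(-z) - 1) + lam2 (m2(z) - 1)."""
    m1_neg = rate_X / (rate_X + z)     # E exp(-z X) for exponential X
    m2_pos = rate_Y / (rate_Y - z)     # E exp(z Y), finite only for z < rate_Y
    return 0.5 * sigma**2 * z**2 - c * z + lam1 * (m1_neg - 1.0) + lam2 * (m2_pos - 1.0)

# Adjustment coefficient z0: the unique positive root of theta (Remark 2.1), ~0.1216 here.
z0 = brentq(theta, 1e-8, rate_Y - 1e-8)

def upper_bound(eta, u):
    """Bound of Theorem 2.1 in the simplified form of Remark 2.1."""
    if eta <= z0:                      # in this range sup_t of the integral is <= 0
        return np.exp(-eta * u)
    integral, _ = quad(lambda s: theta(s) / (r * s), z0, eta)
    return np.exp(integral - eta * u)

print("z0 =", z0)
print("classical bound exp(-z0*u) at u = 10:", upper_bound(z0, 10.0))
```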

Remark 2.2

In the theorem, γ has a rather involved expression. In fact, if \(\eta_{0}>0\), then \(\gamma=\eta_{0}\), as we now show.

First, for any \(\varepsilon>0\), the definition of \(\eta_{0}\) gives \(m_{2}(\eta_{0}+\varepsilon)=\infty\) and hence \(\theta(\eta_{0}+\varepsilon)=\infty\), so

$$\sup_{t>0} \int_{0}^{t} \theta \bigl((\eta_{0}+ \varepsilon) e^{-rs} \bigr)\,\mathrm{d}s =\sup_{t>0} \int_{(\eta_{0}+\varepsilon) e^{-rt}}^{\eta_{0}+\varepsilon} \frac{\theta(s)}{rs}\,\mathrm{d}s = \int_{z_{0}}^{\eta_{0}} \frac{\theta(s)}{rs}\,\mathrm{d}s+ \int_{\eta _{0}}^{\eta _{0}+\varepsilon} \frac{\theta(s)}{rs}\,\mathrm{d}s=\infty, $$

which implies \(\gamma\leq\eta_{0}+\varepsilon\); letting \(\varepsilon \downarrow0\), we obtain \(\gamma\leq\eta_{0}\).

Second, for all \(0<\eta<\eta_{0}\), if \(\eta< z_{0}\),

$$\sup_{t>0} \int_{0}^{t} \theta \bigl(\eta e^{-rs} \bigr) \,\mathrm{d}s =\sup_{t>0} \int_{\eta e^{-rt}}^{\eta} \frac{\theta(s)}{rs}\, \mathrm{d}s \leq0, $$

and if \(\eta\geq z_{0}\),

$$\sup_{t>0} \int_{0}^{t} \theta \bigl(\eta e^{-rs} \bigr) \,\mathrm{d}s =\sup_{t>0} \int_{\eta e^{-rt}}^{\eta} \frac{\theta(s)}{rs}\, \mathrm{d}s \leq \int_{z_{0}}^{\eta} \frac{\theta(s)}{rs}\,\mathrm{d}s < \infty, $$

then we get \(\gamma\geq\eta_{0}\). As a result, \(\gamma=\eta_{0}\).

Inspired by Remark 2.1, we can obtain a tighter bound for the ruin probability as follows.

Theorem 2.2

Assume that \(\eta_{0}>0\). Then we have

$$ \psi(u) \leq\exp \biggl\{ \int_{z_{0}}^{\tilde{\eta}_{0}(u)} \frac {\theta (s)}{rs}\,\mathrm{d}s \biggr\} e^{-\tilde{\eta}_{0}(u) u}, $$
(2.2)

where \(\tilde{\eta}_{0}(u)\) is the solution greater than \(z_{0}\) of the equation \(\tilde{\theta}(z)=\theta(z)-ruz=0\).

Remark 2.3

Note that \(\tilde{\theta}(z_{0})=\theta(z_{0})-ruz_{0}=-ruz_{0}<0\), and that the function \(\tilde{\theta}(z)\) is convex and tends to ∞ as \(z\rightarrow\eta_{0}\); hence \(\tilde{\theta}(z)=0\) has a unique solution greater than \(z_{0}\). In addition, since \(\tilde{\theta}(\tilde{\eta}_{0}(u))=0\), and \(\tilde{\theta}(z)=\infty\) for all \(z >\eta_{0}\), it follows that \(\tilde{\eta}_{0}(u) \leq\eta_{0}=\gamma\). Thus \(\tilde{\eta}_{0}(u)\) is an admissible choice of η, and it yields the best estimate of the upper bound for the ruin probability.

Example 2.1

Let \(c=1\), \(\sigma=0.1\), \(\lambda_{1}=100\), \(\lambda_{2}=50\), \(X \sim\exp(5/3)\), and \(Y \sim\exp(1)\). Table 1 displays the numerical results for different η, u, and r.

Table 1 Upper bounds of ruin probability for different η, u, and r

In this example, \(\eta=0.1216\) solves the equation \(\theta(z)=0\), i.e., \(z_{0}=0.1216\), which yields the upper bound for the model without investment (see Remark 2.1). However, tighter bounds are available since \(r>0\). For example, when \(r=0.08\) and \(u=10\), the bound for \(\eta=0.1299\) is better than that for \(\eta=0.1216\) and, moreover, better than those for the other values of η in Table 1. In fact, it is the best estimate of the ruin probability in this case, i.e., \(\tilde{\eta}_{0}(u)=0.1299\). Similarly, 0.1325, 0.1381, and 0.1431 are the best choices of η for the cases \(r=0.1\) and \(u=10\), \(r=0.08\) and \(u=20\), and \(r=0.1\) and \(u=20\), respectively.
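The following self-contained sketch (Python with NumPy/SciPy; the function names are ours, and the exponential distributions and parameters are those assumed in this example) recomputes \(\tilde{\eta}_{0}(u)\) and the bound (2.2) for the four pairs of r and u considered here; because of rounding and numerical tolerances, the printed exponents need not match the table entries to the last digit.

```python
# Illustrative reproduction of the computation behind Table 1 (exponential X, Y as
# assumed in Example 2.1); values are not guaranteed to match the table exactly.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

c, sigma = 1.0, 0.1
lam1, lam2 = 100.0, 50.0
rate_X, rate_Y = 5.0 / 3.0, 1.0        # X ~ exp(5/3), Y ~ exp(1), so eta_0 = 1

def theta(z):
    return (0.5 * sigma**2 * z**2 - c * z
            + lam1 * (rate_X / (rate_X + z) - 1.0)
            + lam2 * (rate_Y / (rate_Y - z) - 1.0))

z0 = brentq(theta, 1e-8, rate_Y - 1e-8)          # adjustment coefficient, ~0.1216

def eta_tilde(u, r):
    # Root greater than z0 of theta(z) - r*u*z = 0 (Theorem 2.2).
    return brentq(lambda z: theta(z) - r * u * z, z0 * (1 + 1e-9), rate_Y - 1e-8)

def bound_22(u, r):
    # Upper bound (2.2): exp(int_{z0}^{eta} theta(s)/(r s) ds) * exp(-eta * u).
    eta = eta_tilde(u, r)
    integral, _ = quad(lambda s: theta(s) / (r * s), z0, eta)
    return np.exp(integral - eta * u)

for r, u in [(0.08, 10.0), (0.10, 10.0), (0.08, 20.0), (0.10, 20.0)]:
    print(f"r={r:.2f}, u={u:4.0f}: eta_tilde={eta_tilde(u, r):.4f}, "
          f"bound={bound_22(u, r):.4e}")
```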

For the Lundberg-Cramér approximation, we have the following theorem.

Theorem 2.3

If \(\gamma< \infty\), then for any \(\varepsilon>0\),

$$\begin{aligned}& \lim_{u \rightarrow\infty} \psi(u)e^{(\gamma-\varepsilon)u}=0, \end{aligned}$$
(2.3)
$$\begin{aligned}& \lim_{u \rightarrow\infty} \psi(u)e^{(\gamma+\varepsilon )u}=+\infty. \end{aligned}$$
(2.4)

Remark 2.4

The number γ here is the so-called adjustment coefficient or Lundberg exponent. From (2.3) and (2.4) it follows that \(\lim_{u\rightarrow\infty}{\frac{-\ln\psi (u)}{u}}=\gamma\), which describes the asymptotic behavior of \(\psi(u)\).

When the claim sizes are heavy-tailed, i.e., \(\eta_{0}=0\), one usually considers asymptotic formulas for the ruin probability instead. We first recall several important classes of heavy-tailed distributions.

We say a distribution G on \([0,+\infty)\) is subexponential, denoted by \(G \in\mathcal{S}\), if \(\bar{G}(x)=1-G(x) >0\) holds for all \(x \geq0\) and the relation

$$ \lim_{x \rightarrow\infty} {\frac{\overline{G^{*n}}(x)}{\bar{G}(x)}}=n $$
(2.5)

holds for some (hence for all) \(n=2,3,\ldots\) , where \(G^{*n}\) denotes the n-fold convolution of G.

We say a distribution G is long tailed, denoted by \(G \in\mathcal {L}\), if the relation

$$ \lim_{x \rightarrow\infty} {\frac{\bar{G}(x+y)}{\bar{G}(x)}}=1 $$
(2.6)

holds for all \(y >0\).

We say a distribution G on \([0,+\infty)\) has a regularly varying tail, denoted by \(G \in\mathcal{R}_{-\alpha}\), if \(\bar{G}(x)>0\) for all \(x \geq0\) and there exists some \(\alpha>0\) such that the relation

$$ \lim_{x \rightarrow\infty} {\frac{\bar{G}(xy)}{\bar {G}(x)}}=y^{-\alpha} $$

holds for each \(y>0\).

It is well known that \(\mathcal{R}_{-\alpha} \subset\mathcal{S} \subset\mathcal{L}\). For more details of heavy-tailed distributions and their applications, see Embrechts et al. [15].
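To make definition (2.5) concrete, the following small sketch (Python with SciPy; the Pareto tail and its parameters are an illustrative assumption of ours) evaluates the two-fold convolution tail \(\overline{G^{*2}}(x)\) by numerical integration and checks that the ratio in (2.5) approaches 2.

```python
# Numerical illustration of the subexponential property (2.5) for n = 2, with an
# assumed Pareto tail G-bar(x) = (x / y_m)^(-alpha) for x >= y_m.
import numpy as np
from scipy.integrate import quad

alpha, y_m = 1.5, 1.0

def G_bar(x):
    return 1.0 if x < y_m else (x / y_m) ** (-alpha)

def g(y):                                   # Pareto density
    return alpha * y_m**alpha * y ** (-alpha - 1.0) if y >= y_m else 0.0

def tail_conv2(x):                          # P(Y1 + Y2 > x) for i.i.d. Y1, Y2 ~ G
    integral, _ = quad(lambda y: G_bar(x - y) * g(y), y_m, x / 2.0)
    return 2.0 * integral + G_bar(x / 2.0) ** 2

for x in [1e2, 1e3, 1e4]:
    print(x, tail_conv2(x) / G_bar(x))      # the ratio tends to 2 as x grows
```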

Now we consider asymptotic formulas for the finite-time ruin probability in model (1.2). Throughout this paper, we write \(f(x) \sim g(x)\) if \(\lim_{x\rightarrow\infty}{\frac{f(x)}{g(x)}}=1\).

Theorem 2.4

If \(G \in\mathcal{S}\), then for each fixed \(t_{0}>0\),

$$ \psi(u,t_{0}) \sim{\frac{\lambda_{2}}{r}} \int_{u}^{ue^{rt_{0}}}{\frac {\bar {G}(y)}{y}}\,\mathrm{d}y,\quad u \rightarrow\infty. $$
(2.7)

Furthermore, we have the following result.

Theorem 2.5

If \(G \in\mathcal{R}_{-\alpha}\) for some \(\alpha>0\), then for each fixed \(t_{0}>0\),

$$ \psi(u,t_{0}) \sim{\frac{\lambda_{2}}{\alpha r}}\bar {G}(u) \bigl(1-e^{-\alpha r t_{0}} \bigr),\quad u \rightarrow\infty. $$
(2.8)

Remark 2.5

These theorems generalize the results for the models in Tang [16], where \(c=0\) and \(\sigma=0\), and Jiang and Yan [17], where \(\lambda_{1}=0\), and extend the investigation of the model in Wei et al. [14], where \(\sigma=0\). Meanwhile, the conclusions are consistent with those of Veraverbeke [18], who pointed out that the diffusion term is asymptotically negligible when the claims are subexponentially distributed.
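As a quick consistency check (not needed for the proofs), for an exact Pareto tail \(\bar{G}(y)=(y/y_{m})^{-\alpha}\), \(y \geq y_{m}\), the integral in (2.7) can be computed in closed form and coincides with the right-hand side of (2.8). A minimal numerical sketch, with illustrative parameter values of our own choosing, is given below.

```python
# Consistency check of (2.7) against (2.8) for an assumed Pareto tail
# G-bar(y) = (y / y_m)^(-alpha), y >= y_m; all parameter values are illustrative.
import numpy as np
from scipy.integrate import quad

lam2, r, t0 = 50.0, 0.08, 1.0
alpha, y_m = 1.5, 1.0

def G_bar(y):
    return min(1.0, (y / y_m) ** (-alpha))

u = 200.0                                   # a "large" initial capital
lhs, _ = quad(lambda y: G_bar(y) / y, u, u * np.exp(r * t0))
lhs *= lam2 / r                             # right-hand side of (2.7)
rhs = lam2 / (alpha * r) * G_bar(u) * (1.0 - np.exp(-alpha * r * t0))   # (2.8)
print(lhs, rhs)                             # the two values agree (up to quadrature error)
```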

3 Proofs of the main results

For convenience, we first introduce the discounted surplus process \(V(t)\) given by

$$\begin{aligned} V(t) =&U(t)e^{-rt} \\ =&u+c \int_{0}^{t}e^{-rs}\,\mathrm{d}s+ \int_{0}^{t}e^{-rs}\, \mathrm{d}\sum _{i=1}^{N_{1}(s)}X_{i}- \int_{0}^{t}e^{-rs}\,\mathrm{d}\sum _{i=1}^{N_{2}(s)}Y_{i} +\sigma \int_{0}^{t}e^{-rs}\,\mathrm{d}B(s), \end{aligned}$$
(3.1)

obviously,

$$ T=\inf \bigl\{ t \geq0, V(t)< 0 \bigr\} . $$
(3.2)

It is easy to see that \(\{(V(t),t),t \geq0\}\) is a Markov process. Let \(\{\mathcal{F}_{t},t \geq0\}\) be the natural filtration of \(\{V(t),t \geq0\}\), i.e., \(\mathcal{F}_{t}=\sigma (V(s),0 \leq s \leq t)\); then T is an \(\mathcal{F}_{t}\)-stopping time. We construct a martingale by means of Dynkin's formula, which relates a martingale to the infinitesimal generator of the Markov process, and then derive the upper bound for the ruin probability via a martingale approach.
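Before turning to the analytic arguments, we note that representation (3.1) also lends itself to a direct Monte Carlo check of the finite-time ruin probability: simulate the jump times and sizes on \([0,t_{0}]\), discount them, approximate the stochastic integral on a fine time grid, and count the sample paths on which the discounted surplus becomes negative. The sketch below (Python/NumPy, with the illustrative exponential parameters of Example 2.1; ruin between grid points is ignored, so the estimate is slightly biased downward) is included only as a sanity check and is not used in the proofs.

```python
# Monte Carlo sketch for the discounted surplus (3.1) under the illustrative
# exponential assumptions of Example 2.1; estimates psi(u, t0) on a time grid.
import numpy as np

c, sigma, r = 1.0, 0.1, 0.08
lam1, lam2 = 100.0, 50.0
rate_X, rate_Y = 5.0 / 3.0, 1.0
rng = np.random.default_rng(2024)

def ruin_on_grid(u, t0, dt=1e-3):
    """Return True if the discretised process (3.1) drops below 0 on [0, t0]."""
    n = int(round(t0 / dt))
    t = np.arange(1, n + 1) * dt
    # sigma * int_0^t e^{-rs} dB(s), left-point Euler approximation
    dB = rng.normal(0.0, np.sqrt(dt), size=n)
    brownian = sigma * np.cumsum(np.exp(-r * (t - dt)) * dB)

    def discounted_jump_sum(lam, mean_size):
        # given N(t0)=k jumps, the jump times are ordered uniforms on [0, t0]
        k = rng.poisson(lam * t0)
        times = np.sort(rng.uniform(0.0, t0, size=k))
        sizes = rng.exponential(mean_size, size=k)
        cum = np.concatenate(([0.0], np.cumsum(np.exp(-r * times) * sizes)))
        return cum[np.searchsorted(times, t, side="right")]   # value at each grid time

    premiums = discounted_jump_sum(lam1, 1.0 / rate_X)
    claims = discounted_jump_sum(lam2, 1.0 / rate_Y)
    V = u + c * (1.0 - np.exp(-r * t)) / r + premiums - claims + brownian
    return bool(np.any(V < 0.0))

u, t0, n_paths = 2.0, 1.0, 10_000
hits = sum(ruin_on_grid(u, t0) for _ in range(n_paths))
print(f"estimated psi({u}, {t0}) ~= {hits / n_paths:.4f}")
```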

Lemma 3.1

Assume that \(\eta_{0}>0\). Then for any \(0<\eta<\gamma\), \(M_{t}=\exp \{- \int_{0}^{t} \theta(\eta e^{-rs})\,\mathrm{d}s-\eta V(t) \}\) is an \(\mathcal{F}_{t}\)-martingale.

Proof

Let A be the infinitesimal generator of \(\{(V(t),t),t \geq0\}\) and \(\mathcal{D}(A)\) the domain of A. For any \(g \in\mathcal{D}(A)\), the Itô formula yields

$$\begin{aligned} Ag(z,t) =& \frac{1}{2}\sigma^{2}e^{-2rt} \frac{\partial^{2}}{\partial z^{2}}g(z,t)+\frac{\partial}{\partial t}g(z,t) +ce^{-rt} \frac{\partial}{\partial z}g(z,t) \\ &{}+\lambda_{1} \int_{0}^{\infty} \bigl[g \bigl(z+xe^{-rt},t \bigr)-g(z,t) \bigr]\,\mathrm {d}F(x) \\ &{}+ \lambda_{2} \int_{0}^{\infty} \bigl[g \bigl(z-ye^{-rt},t \bigr)-g(z,t) \bigr]\,\mathrm{d}G(y). \end{aligned}$$
(3.3)

Trying a function of the form \(g(z,t)=a(t)e^{-\eta z}\) in (3.3), where \(a(t)\) is positive and differentiable, we get

$$\begin{aligned} Ag(z,t) =& \frac{1}{2}\sigma^{2}\eta^{2}e^{-2rt}g(z,t)+ \frac{a'(t)}{a(t)}g(z,t) -c \eta e^{-rt}g(z,t) \\ &{}+\lambda_{1}g(z,t) \int_{0}^{\infty}\exp \bigl\{ -\eta x e^{-rt} \bigr\} \,\mathrm{d}F(x) \\ &{}+\lambda_{2}g(z,t) \int_{0}^{\infty}\exp \bigl\{ \eta y e^{-rt} \bigr\} \,\mathrm {d}G(y)-\lambda_{1}g(z,t)-\lambda_{2}g(z,t). \end{aligned}$$

Now let \(Ag(z,t)=0\), which is equivalent to

$$\begin{aligned}& \frac{1}{2}\sigma^{2}\eta^{2}e^{-2rt}+ \frac{a'(t)}{a(t)}-c \eta e^{-rt} +\lambda_{1} \int_{0}^{\infty}\exp \bigl\{ -\eta x e^{-rt} \bigr\} \,\mathrm{d}F(x) \\& \quad {}+\lambda_{2} \int_{0}^{\infty}\exp \bigl\{ \eta y e^{-rt} \bigr\} \,\mathrm{d}G(y) -\lambda_{1}-\lambda_{2}=0, \end{aligned}$$

hence

$$\begin{aligned} a(t) =& \exp \biggl\{ - \int_{0}^{t} \biggl(\frac{1}{2} \sigma^{2}\eta^{2}e^{-2rs}-c \eta e^{-rs} + \lambda_{1} \int_{0}^{\infty}\exp \bigl\{ -\eta x e^{-rs} \bigr\} \,\mathrm{d}F(x) \\ &{} +\lambda_{2} \int_{0}^{\infty}\exp \bigl\{ \eta y e^{-rs} \bigr\} \, \mathrm{d}G(y)-\lambda_{1}-\lambda_{2} \biggr)\, \mathrm{d}s \biggr\} \\ =& \exp \biggl\{ - \int_{0}^{t} \theta \bigl(\eta e^{-rs} \bigr) \,\mathrm{d}s \biggr\} , \end{aligned}$$

where we assume that \(a(0)=1\); then for any \(0<\eta<\gamma\), it follows from Dynkin’s formula (Rolski et al. [19]) that \(M_{t}=\exp \{- \int_{0}^{t} \theta(\eta e^{-rs})\,\mathrm{d}s-\eta V(t) \}\) is an \(\mathcal{F}_{t}\) martingale. □

Proof of Theorem 2.1

Choose \(t_{0} < \infty\). Then \(t_{0} \wedge T\) is a bounded stopping time, and by Lemma 3.1 and the optional stopping theorem for martingales, we have

$$EM_{0}=EM_{t_{0} \wedge T}, $$

which implies

$$\begin{aligned} e^{-\eta u} =& E\exp \biggl\{ - \int_{0}^{t_{0} \wedge T} \theta \bigl(\eta e^{-rs} \bigr) \,\mathrm{d}s-\eta V(t_{0} \wedge T) \biggr\} \\ \geq& E \biggl[\exp \biggl\{ - \int_{0}^{T} \theta \bigl(\eta e^{-rs} \bigr) \,\mathrm{d}s-\eta V(T) \biggr\} \Big|T \leq t_{0} \biggr]P(T \leq t_{0}). \end{aligned}$$

Since \(V(T)<0\), we know that \(\exp\{-\eta V(T)\} \geq1\) for all \(\eta >0\), therefore

$$\begin{aligned} P(T \leq t_{0}) \leq& \frac{e^{-\eta u}}{E [\exp \{- \int_{0}^{T} \theta(\eta e^{-rs})\,\mathrm{d}s-\eta V(T) \}|T \leq t_{0} ]} \\ \leq& \frac{e^{-\eta u}}{E [\exp \{- \int_{0}^{T} \theta (\eta e^{-rs})\,\mathrm{d}s \}|T \leq t_{0} ]} \\ \leq& \biggl(\inf_{0 < t \leq t_{0}}\exp \biggl\{ - \int_{0}^{t} \theta \bigl(\eta e^{-rs} \bigr) \,\mathrm{d}s \biggr\} \biggr)^{-1}e^{-\eta u} \\ \leq& \sup_{0 < t \leq t_{0}} \exp \biggl\{ \int_{0}^{t} \theta \bigl(\eta e^{-rs} \bigr) \,\mathrm{d}s \biggr\} e^{-\eta u}, \end{aligned}$$

then (2.1) holds true by letting \(t_{0} \rightarrow\infty\) since \(\lim_{t_{0} \rightarrow\infty}\psi(u,t_{0})=\psi(u)\). □

Proof of Theorem 2.2

By (2.1), we can obtain a finer upper bound for the ruin probability as follows:

$$\begin{aligned} \psi(u) \leq& \inf_{0< \eta< \gamma}\sup _{t>0} \exp \biggl\{ \int_{0}^{t} \theta \bigl(\eta e^{-rs} \bigr) \,\mathrm{d}s \biggr\} e^{-\eta u} \\ = & \exp \biggl\{ -\sup_{0< \eta< \gamma} \biggl(\eta u- \biggl(\sup _{t>0} \int_{0}^{t} \theta \bigl(\eta e^{-rs} \bigr) \,\mathrm{d}s \biggr) \biggr) \biggr\} \\ = & \exp \biggl\{ -\sup_{0< \eta< \gamma} \biggl(\eta u- \biggl(\sup _{t>0} \int_{\eta e^{-rt}}^{\eta} \frac{\theta(s)}{rs}\,\mathrm{d}s \biggr) \biggr) \biggr\} \\ = & \exp \biggl\{ -\max \biggl\{ \sup_{0< \eta\leq z_{0}} \biggl(\eta u- \biggl(\sup_{t>0} \int_{\eta e^{-rt}}^{\eta} \frac{\theta(s)}{rs}\, \mathrm{d}s \biggr) \biggr), \\ &\sup_{z_{0}< \eta< \gamma} \biggl(\eta u- \biggl(\sup _{t>0} \int_{\eta e^{-rt}}^{\eta} \frac{\theta(s)}{rs}\,\mathrm{d}s \biggr) \biggr) \biggr\} \biggr\} \\ = & \exp \biggl\{ -\max \biggl\{ \sup_{0< \eta\leq z_{0}} (\eta u ), \sup _{z_{0}< \eta< \gamma} \biggl(\eta u- \int_{z_{0}}^{\eta} \frac {\theta (s)}{rs}\,\mathrm{d}s \biggr) \biggr\} \biggr\} \\ = & \exp \biggl\{ -\max \biggl\{ z_{0}u,\sup_{z_{0}< \eta< \gamma} \biggl(\eta u- \int_{z_{0}}^{\eta} \frac{\theta(s)}{rs}\,\mathrm{d}s \biggr) \biggr\} \biggr\} . \end{aligned}$$
(3.4)

In the following, we calculate \(\sup_{z_{0}<\eta<\gamma} (\eta u- \int_{z_{0}}^{\eta} \frac{\theta(s)}{rs}\,\mathrm{d}s )\), and compare it with \(z_{0}u\).

Using the same method as in Zhu et al. [13], we define the function

$$f_{u}(\eta)=\eta u- \int_{z_{0}}^{\eta} \frac{\theta(s)}{rs}\,\mathrm{d}s $$

for \(\eta\geq z_{0}\). Then for \(\eta>z_{0}\),

$$\begin{aligned}& f'_{u}(\eta)=u-\frac{1}{r\eta} \theta(\eta), \\& f''_{u}(\eta)=\frac{1}{r\eta^{2}} \bigl( \theta(\eta)-\eta\theta'(\eta) \bigr). \end{aligned}$$

Noting that \(\theta(z_{0})=0\), \(z_{0}>0\), \(\theta'(z_{0})>0\), and \(\theta''(\eta)>0\), it is easy to see that

$$\begin{aligned}& \theta(z_{0})-z_{0}\theta'(z_{0})=-z_{0} \theta'(z_{0})< 0, \\& \frac{\mathrm{d}}{\mathrm{d}\eta} \bigl(\theta(\eta)-\eta\theta'(\eta ) \bigr)=- \eta \theta''(\eta)< 0 \quad \text{for all } \eta>0, \end{aligned}$$

we have

$$\theta(\eta)-\eta\theta'(\eta) \leq\theta(z_{0})-z_{0} \theta '(z_{0})< 0\quad \text{for all } \eta>z_{0}, $$

then it follows that

$$f''_{u}(\eta)=\frac{1}{r\eta^{2}} \bigl( \theta(\eta)-\eta\theta'(\eta ) \bigr)< 0\quad \text{for all } \eta>z_{0}. $$

So, for any fixed \(u>0\), the function \(f'_{u}(\eta)\) is decreasing in η for \(\eta>z_{0}\). Together with the facts that \(f'_{u}(z_{0})=u>0\) and \(f'_{u}(\eta)\rightarrow-\infty\) as \(\eta\rightarrow\eta_{0}=\gamma\), this shows that \(\tilde{\eta}_{0}(u)\) is the unique solution greater than \(z_{0}\) of the equation \(f'_{u}(\eta)=0\). Therefore, we have

$$\begin{aligned} \sup_{z_{0}< \eta< \gamma} \biggl(\eta u- \int_{z_{0}}^{\eta} \frac {\theta (s)}{rs}\,\mathrm{d}s \biggr) =&\sup_{z_{0}< \eta< \gamma} f_{u}(\eta)=f_{u} \bigl( \tilde{\eta}_{0}(u) \bigr) \\ =&\tilde {\eta }_{0}(u) u- \int_{z_{0}}^{\tilde{\eta}_{0}(u)} \frac{\theta(s)}{rs}\, \mathrm{d}s. \end{aligned}$$

Noting that \(f_{u}(\tilde{\eta}_{0}(u))>f_{u}(z_{0})=z_{0}u\), by (3.4), we get

$$\begin{aligned} \begin{aligned} \psi(u) &\leq\exp \biggl\{ -\max \biggl\{ z_{0}u,\sup _{z_{0}< \eta< \gamma } \biggl(\eta u- \int_{z_{0}}^{\eta} \frac{\theta(s)}{rs}\,\mathrm{d}s \biggr) \biggr\} \biggr\} \\ &= \exp \biggl\{ \int_{z_{0}}^{\tilde{\eta}_{0}(u)} \frac{\theta (s)}{rs}\,\mathrm{d}s \biggr\} e^{-\tilde{\eta}_{0}(u) u}. \end{aligned} \end{aligned}$$

 □

Proof of Theorem 2.3

To prove (2.3) we may assume \(0< \varepsilon<2\gamma\), since the left-hand side of (2.3) decreases in ε. Take \(\eta =\gamma - \frac{\varepsilon}{2}\); then by (2.1), we have

$$\psi(u) \leq\sup_{t>0} \exp \biggl\{ \int_{0}^{t} \theta \biggl( \biggl(\gamma- \frac {\varepsilon}{2} \biggr) e^{-rs} \biggr)\,\mathrm{d}s \biggr\} e^{-(\gamma-\frac {\varepsilon}{2}) u}, $$

hence, noting the fact that \(\sup_{t>0} \exp \{ \int_{0}^{t} \theta ((\gamma-\frac{\varepsilon}{2}) e^{-rs})\,\mathrm{d}s \} <\infty\), we can get

$$\psi(u)e^{(\gamma-\varepsilon)u} \leq \sup_{t>0}\exp \biggl\{ \int_{0}^{t} \theta \biggl( \biggl(\gamma- \frac{\varepsilon}{2} \biggr) e^{-rs} \biggr)\,\mathrm{d}s \biggr\} e^{-\frac{\varepsilon}{2}u} \rightarrow0,\quad u \rightarrow\infty, $$

this proves (2.3).

In the following, we derive (2.4). Let \(\{L_{n},n \geq1\}\) and \(\{S_{n},n \geq1\}\) be the jump times of the Poisson processes \(\{N_{1}(t),t\geq0\}\) and \(\{N_{2}(t),t \geq0\}\), respectively; then we have

$$\begin{aligned}& \int_{0}^{t}e^{-rs}\,\mathrm{d}\sum _{i=1}^{N_{1}(s)}X_{i}=\sum _{i=1}^{N_{1}(t)}e^{-rL_{i}}X_{i}, \\& \int_{0}^{t}e^{-rs}\,\mathrm{d}\sum _{i=1}^{N_{2}(s)}Y_{i}=\sum _{i=1}^{N_{2}(t)}e^{-rS_{i}}Y_{i}. \end{aligned}$$

Denote \(B=\sup_{0 \leq t \leq t_{0}} \sigma\int_{0}^{t}e^{-rs}\,\mathrm{d}B(s)\) and \(K= \sup_{0 \leq t \leq t_{0}} \sum_{i=1}^{N_{1}(t)}e^{-rL_{i}}X_{i}\). By the definition,

$$\begin{aligned} \psi(u,t_{0}) =& P \Bigl(\inf_{0 \leq t \leq t_{0}}V(t)< 0 \Bigr) \\ =& P \Biggl(\inf_{0 \leq t \leq t_{0}} \Biggl(u+c \int_{0}^{t}e^{-rs}\, \mathrm{d}s+ \int_{0}^{t}e^{-rs}\,\mathrm{d}\sum _{i=1}^{N_{1}(s)}X_{i} \\ &{}- \int_{0}^{t}e^{-rs}\,\mathrm{d}\sum _{i=1}^{N_{2}(s)}Y_{i}+\sigma \int _{0}^{t}e^{-rs}\,\mathrm{d}B(s) \Biggr)< 0 \Biggr) \\ =& P \Biggl(\inf_{0 \leq t \leq t_{0}} \Biggl(u+{\frac {c}{r}} \bigl(1-e^{-rt} \bigr)+\sum_{i=1}^{N_{1}(t)}e^{-rL_{i}}X_{i} \\ &{}-\sum_{i=1}^{N_{2}(t)}e^{-rS_{i}}Y_{i}+ \sigma \int_{0}^{t}e^{-rs}\,\mathrm{d}B(s) \Biggr)< 0 \Biggr) \\ \geq& P \Biggl(u+{\frac{c}{r}} \bigl(1-e^{-rt_{0}} \bigr)-\sup _{0 \leq t \leq t_{0}} \sum_{i=1}^{N_{2}(t)}e^{-rS_{i}}Y_{i} \\ &{}+\sup_{0 \leq t \leq t_{0}}\sum_{i=1}^{N_{1}(t)}e^{-rL_{i}}X_{i}+ \sup_{0 \leq t \leq t_{0}}\sigma \int_{0}^{t}e^{-rs}\,\mathrm{d}B(s)< 0 \Biggr) \\ =& \int_{0}^{\infty} \int_{0}^{\infty} P \Biggl(u+{\frac{c}{r}} \bigl(1-e^{-rt_{0}} \bigr)- \Biggl(\sup_{0 \leq t \leq t_{0}} \sum _{i=1}^{N_{2}(t)}e^{-rS_{i}}Y_{i} \Biggr) \\ &{}+v+s< 0 \Biggr)P(K \in \mathrm{d}v)\,P(B \in \mathrm{d}s) \\ \geq& \int_{0}^{a} \int_{0}^{a} P \Biggl(u+b- \Biggl(\sup _{0 \leq t \leq t_{0}}\sum_{i=1}^{N_{2}(t)}e^{-rS_{i}}Y_{i} \Biggr)< 0 \Biggr)P(K \in \mathrm{d}v)\,P(B \in \mathrm{d}s) \\ =& P \Biggl( \Biggl(\sup_{0 \leq t \leq t_{0}}\sum _{i=1}^{N_{2}(t)}e^{-rS_{i}}Y_{i} \Biggr)>u+b \Biggr)P(K \leq a)P(B \leq a), \end{aligned}$$
(3.5)

where \(b={\frac{c}{r}}(1-e^{-rt_{0}})+2a\) and a is a positive constant. On the other hand,

$$\begin{aligned} P \Biggl( \Biggl(\sup_{0 \leq t \leq t_{0}}\sum _{i=1}^{N_{2}(t)}e^{-rS_{i}}Y_{i} \Biggr)>u+b \Biggr) =& P \Biggl(\sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i} >u+b \Biggr) \\ \geq& P \Biggl(\sum_{i=1}^{N_{2}(t_{0})}e^{-rt_{0}}Y_{i} >u+b \Biggr) \\ =& P \Biggl(\sum_{i=1}^{N_{2}(t_{0})}Y_{i} >(u+b)e^{rt_{0}} \Biggr) \\ \geq& P \bigl(Y_{1} >(u+b)e^{rt_{0}}, N_{2}(t_{0}) \geq1 \bigr) \\ =& \bigl(1-e^{-\lambda_{2} t_{0}} \bigr)\overline{G} \bigl((u+b)e^{rt_{0}} \bigr), \end{aligned}$$
(3.6)

then for each fixed ε, choose \(t_{0}\) such that \({\frac {\gamma +\varepsilon}{e^{rt_{0}}}} >\gamma\), i.e.,

$$ t_{0}< \frac{\ln(1+{\frac{\varepsilon}{\gamma}})}{r}. $$
(3.7)

Write \(\gamma+\omega={\frac{\gamma+\varepsilon}{e^{rt_{0}}}}\) with \(\omega>0\). Noting that \(\psi(u,t) \leq\psi(u)\) for each \(t>0\), we have

$$\begin{aligned} \psi(u)e^{(\gamma+\varepsilon)u} \geq& \psi(u,t_{0})e^{(\gamma+\varepsilon)u} \\ \geq& \bigl(1-e^{-\lambda_{2} t_{0}} \bigr)P(K \leq a)P(B \leq a)\overline{G} \bigl((u+b)e^{rt_{0}} \bigr) e^{(\gamma +\varepsilon)u} \\ =& \bigl(1-e^{-\lambda_{2} t_{0}} \bigr)P(K \leq a)P(B \leq a)e^{-b(\gamma+\varepsilon)}\overline {G} \bigl((u+b)e^{rt_{0}} \bigr) e^{(\gamma+\varepsilon)(u+b)} \\ =& C \cdot\overline{G} \bigl((u+b)e^{rt_{0}} \bigr) \exp \biggl\{ { \frac{\gamma +\varepsilon}{e^{rt_{0}}}}(u+b)e^{rt_{0}} \biggr\} \\ =& C \cdot\overline{G} \bigl((u+b)e^{rt_{0}} \bigr) \exp \bigl\{ (\gamma+ \omega ) (u+b)e^{rt_{0}} \bigr\} , \end{aligned}$$
(3.8)

where \(C= (1-e^{-\lambda_{2} t_{0}} )P(K \leq a)P(B \leq a)e^{-b(\gamma+\varepsilon)}\). Since \(\gamma=\eta_{0}\) and \(\int_{0}^{\infty}e^{(\eta_{0}+\omega)y}\,\mathrm{d}G(y)=\infty\), we have

$$\overline{G} \bigl((u+b)e^{rt_{0}} \bigr) \exp \bigl\{ (\gamma+\omega) (u+b)e^{rt_{0}} \bigr\} \rightarrow\infty,\quad u \rightarrow\infty, $$

consequently, by (3.8), we have

$$ \psi(u)e^{(\gamma+\varepsilon)u} \rightarrow\infty,\quad u \rightarrow \infty. $$

 □

Proof of Theorem 2.4

By the definition

$$\psi(u,t_{0})=P \bigl(V(t)< 0 \text{ for some }0< t \leq t_{0}|U(0)=u \bigr). $$

Denote

$$\begin{aligned}& p=\sup_{0 \leq t \leq t_{0}} c \int_{0}^{t}e^{-rs}\,\mathrm{d}s, \\& B=\sup_{0 \leq t \leq t_{0}} \sigma\int_{0}^{t}e^{-rs}\,\mathrm{d}B(s)\quad \mbox{and} \\& K=\sup_{0 \leq t \leq t_{0}} \int_{0}^{t}e^{-rs}\,\mathrm{d}\sum_{i=1}^{N_{1}(s)}X_{i}. \end{aligned}$$

From (3.1), for each \(t \in(0,t_{0}]\), we have

$$u- \int_{0}^{t}e^{-rs}\,\mathrm{d}\sum _{i=1}^{N_{2}(s)}Y_{i}+\sigma \int _{0}^{t}e^{-rs}\,\mathrm{d}B(s) \leq V(t) \leq u+p- \int_{0}^{t}e^{-rs}\,\mathrm{d}\sum _{i=1}^{N_{2}(s)}Y_{i}+K+B, $$

thus, the ruin probability \(\psi(u,t_{0})\) satisfies

$$\begin{aligned} \psi(u,t_{0}) \geq& P \Biggl(\sum _{i=1}^{N_{2}(t)}e^{-rS_{i}}Y_{i}>u+p+K+B \quad \text{ for some }0< t \leq t_{0} \Biggr) \\ =& P \Biggl(\sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}>u+p+K+B \Biggr) \end{aligned}$$
(3.9)

and

$$\begin{aligned} \psi(u,t_{0}) \leq& P \Biggl(\sum _{i=1}^{N_{2}(t)}e^{-rS_{i}}Y_{i}>u+\sigma \int _{0}^{t}e^{-rs}\,\mathrm{d}B(s) \text{ for some }0< t \leq t_{0} \Biggr) \\ \leq& P \Biggl(\sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}+ \sup_{0< t \leq t_{0}} \biggl[-\sigma \int_{0}^{t}e^{-rs}\,\mathrm{d}B(s) \biggr]>u \Biggr). \end{aligned}$$
(3.10)

Hence, if we prove that as \(u \rightarrow\infty\),

$$ P \Biggl(\sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}>u+p+K+B \Biggr) \sim{\frac{\lambda_{2}}{r}} \int_{u}^{ue^{rt_{0}}}{\frac{\bar{G}(y)}{y}}\, \mathrm{d}y $$
(3.11)

and

$$ P \Biggl(\sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}+ \sup_{0< t \leq t_{0}} \biggl[-\sigma \int_{0}^{t}e^{-rs}\,\mathrm{d}B(s) \biggr]>u \Biggr) \sim{\frac{\lambda_{2}}{r}} \int_{u}^{ue^{rt_{0}}}{\frac{\bar{G}(y)}{y}}\, \mathrm{d}y, $$
(3.12)

then by (3.9) and (3.10) it follows that

$$\psi(u,t_{0}) \sim{\frac{\lambda_{2}}{r}} \int_{u}^{ue^{rt_{0}}}{\frac {\bar {G}(y)}{y}}\,\mathrm{d}y,\quad u \rightarrow\infty. $$

Let us deal with \(P ( \sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}>u )\) first. By the same method as in Tang [16] or Jiang and Yan [17], we have, as \(u \rightarrow \infty\),

$$ P \Biggl( \sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}>u \Biggr) \sim{\frac{\lambda_{2}}{r}} \int_{u}^{ue^{rt_{0}}}{\frac{\bar {G}(y)}{y}}\,\mathrm{d}y, $$
(3.13)

which also implies that \(\sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}\) is still long tailed. Therefore

$$\begin{aligned}& \lim_{u \rightarrow\infty} {\frac{P (\sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}>u+p+K+B )}{P ( \sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}>u )}} \\& \quad = \int_{0}^{\infty} \int_{0}^{\infty}\lim_{u \rightarrow\infty} { \frac{P (\sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}>u+p+v+s )}{P ( \sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}>u )}}P (K\in \,\mathrm{d}v )P (B\in\,\mathrm{d}s )=1, \end{aligned}$$
(3.14)

where we use the fact that \(P (\sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}>u+p+v+s ) \leq P (\sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}>u )\) and the dominated convergence theorem.

On the other hand, the results in Jiang and Yan [17] show that

$$ P \Biggl(\sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}+ \sup_{0< t \leq t_{0}} \biggl[-\sigma \int_{0}^{t}e^{-rs}\,\mathrm{d}B(s) \biggr]>u \Biggr) \sim P \Biggl(\sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}>u \Biggr). $$
(3.15)

Thus, (3.11) and (3.12) follow from (3.13), (3.14), and (3.15). This ends the proof of Theorem 2.4. □

Proof of Theorem 2.5

Similarly, we also deal with \(P ( \sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}>u )\). Rewrite it as

$$P \Biggl(\sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}>u \Biggr) = P \Biggl(\sum_{i=1}^{\infty}e^{-rS_{i}}Y_{i}I(S_{i} \leq t_{0})>u \Biggr), $$

by Lemma 1 in Tang [20] with \(\theta_{n}=e^{-rS_{n}}I(S_{n} \leq t_{0})\), we have

$$P \Biggl(\sum_{i=1}^{N_{2}(t_{0})}e^{-rS_{i}}Y_{i}>u \Biggr) \sim\bar{G}(u) \sum_{n=1}^{\infty} E \bigl[e^{-\alpha rS_{n}}I(S_{n} \leq t_{0}) \bigr] ={\frac{\lambda_{2}}{\alpha r}} \bar{G}(u) \bigl(1-e^{-\alpha r t_{0}} \bigr),\quad u \rightarrow \infty, $$

then (2.8) follows from (3.9), (3.10), (3.14), and (3.15). □