1 Review of the work by Zhang et al. [7]

Large deviations of the aggregate amount of claims are an important topic, which was initiated by Klüppelberg and Mikosch [4], reactivated by Tang et al. [6], and revisited by many researchers afterward. Due to its potential applications to insurance and finance, this topic is receiving increasing attention from academia.

In a recent work by Zhang et al. [7] the authors studied large deviations of the aggregate amount of claims in a size-dependent renewal risk model. In their notation, let \(\{X_{k},k\in \mathbb{N}\}\) and \(\{\theta _{k},k\in \mathbb{N}\}\) be claim sizes and interarrival times, respectively. Assume that the pairs \((X_{k},\theta _{k})\), \(k\in \mathbb{N}\), form a sequence of independent and identically distributed (i.i.d.) copies of a generic random pair \((X,\theta )\) with marginal distribution functions \(F=1-\overline{F}\) on \([0,\infty )\) and G on \([0,\infty )\) and with arbitrary dependence between X and θ. Define an integer-valued stochastic process

$$ N_{t}^{\ast }=\inf \{ k\in \mathbb{N}:\theta _{1}+ \cdots +\theta _{k} \geq t \} ,\quad t\geq 0. $$

Note that \(N_{t}^{\ast }\) is slightly different from the commonly used renewal counting process

$$ N_{t}=\sup \{ k\in \mathbb{N}:\theta _{1}+\cdots +\theta _{k} \leq t \}. $$
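In particular, if G is continuous (as it is in the counterexample constructed below) and the usual convention \(\sup \emptyset =0\) is adopted, then with probability one no partial sum of the \(\theta _{k}\) hits t exactly, so that

$$ N_{t}^{\ast }=N_{t}+1\quad \text{almost surely for every } t>0, $$

and hence \(E [ N_{t}^{\ast } ] =E[N_{t}]+1\sim t/E[\theta ]\) as \(t\rightarrow \infty \) whenever \(E[\theta ]<\infty \), by the elementary renewal theorem.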

Then the aggregate amount of claims is defined by

$$ S_{t}^{\ast }=\sum_{k=1}^{N_{t}^{\ast }}X_{k},\quad t\geq 0, $$

where the sum is understood as 0 when \(N_{t}^{\ast }=0\).

The authors consider the case of subexponential claims. By definition a distribution function F on \([0,\infty )\) is subexponential, denoted by \(F\in \mathcal{S}\), if

$$ \lim_{x\rightarrow \infty }\frac{\overline{F^{\ast n}}(x)}{ \overline{F}(x)}=n $$
(1.1)

for all \(n\geq 2\), where \(F^{\ast n}\) denotes the n-fold convolution of F. As the authors pointed out, (1.1) implies

$$ \lim_{x\rightarrow \infty }\frac{P ( X_{1}+\cdots +X_{n}>x ) }{P ( \max \{X_{1},\ldots ,X_{n}\}>x ) }=1, $$
(1.2)

where \(X_{1}, X_{2}, \ldots\) are i.i.d. random variables with common distribution function F.
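Indeed, this implication can be checked directly: for every fixed \(n\geq 2\), \(P ( \max \{X_{1},\ldots ,X_{n}\}>x ) =1-F^{n}(x)\sim n\overline{F}(x)\) as \(x\rightarrow \infty \), so that, by (1.1),

$$ \frac{P ( X_{1}+\cdots +X_{n}>x ) }{P ( \max \{X_{1},\ldots ,X_{n}\}>x ) }=\frac{\overline{F^{\ast n}}(x)}{\overline{F}(x)}\cdot \frac{\overline{F}(x)}{1-F^{n}(x)}\rightarrow n\cdot \frac{1}{n}=1. $$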

Then the authors stated the following precise large-deviation result.

Theorem ZWY

Assume that \(F\in \mathcal{S}\), \(E[X]=\mu \in (0,\infty )\), and \(E[\theta ]=1/\lambda \in (0,\infty )\). Then for arbitrarily given \(\gamma >0\), it holds uniformly for all \(x\geq \gamma t\) that

$$ P \bigl( S_{t}^{\ast }-\mu \lambda t>x \bigr) \sim \lambda t \overline{F}(x),\quad t\rightarrow \infty. $$
(1.3)

Here the uniformity is understood as

$$ \lim_{t\rightarrow \infty }\sup_{x\geq \gamma t} \biggl\vert \frac{P ( S_{t}^{\ast }-\mu \lambda t>x ) }{\lambda t\overline{F}(x)}-1 \biggr\vert =0. $$

This result is claimed to hold for the whole subexponential class \(\mathcal{S}\) and, in particular, to remove completely a condition on the dependence structure of \((X,\theta )\) originally proposed by Chen and Yuen [2] and recently used by many researchers. Thus this result, if correct, would be an important contribution to the theory of large deviations. Unfortunately, the counterexample given in the following section disproves it.

2 A counterexample

Assume that:

(i) the generic random pair \((X,\theta )\) contains comonotonic and identical components, that is, \(X=\theta \);

(ii) the common distribution of X and θ is the Weibull distribution

$$ F(x)=1-e^{-\sqrt{x}},\quad x\geq 0; $$

(iii) \(x=t\rightarrow \infty \).

Then

$$ P \bigl( S_{t}^{\ast }-\mu \lambda t>x \bigr) \sim \lambda \int _{x} ^{2x}\overline{F}(y)\,dy=o \bigl( \lambda t \overline{F}(x) \bigr). $$
(2.1)
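Before proving (2.1), we point out that the gap it describes is easy to see numerically. Since \(X=\theta \), \(S_{t}^{\ast }\) is simply the first partial sum of the \(\theta _{k}\) to reach level t, and conditions (i)–(iii) give \(\mu \lambda =1\) and \(x=t\), so the event in (2.1) reads \(\{S_{t}^{\ast }>2t\}\), an overshoot event that can be simulated directly. The following Python sketch is ours and is not part of the argument; the level \(t=50\), the sample size, and the cap of 150 jumps per path are illustrative choices only.

```python
import numpy as np

rng = np.random.default_rng(42)

def first_passage_sums(t, n_rep, n_jumps):
    # Since X_k = theta_k and P(X > x) = exp(-sqrt(x)), we may take X_k = E_k^2
    # with E_k standard exponential; S_t^* is then the first partial sum >= t.
    X = rng.exponential(size=(n_rep, n_jumps)) ** 2
    T = np.cumsum(X, axis=1)
    assert np.all(T[:, -1] >= t), "increase n_jumps"
    first = np.argmax(T >= t, axis=1)          # index of the first partial sum >= t
    return T[np.arange(n_rep), first]

t = 50.0                                       # condition (iii): x = t (illustrative value)
S = first_passage_sums(t, n_rep=100_000, n_jumps=150)
p_hat = (S > 2 * t).mean()                     # P(S_t^* - mu*lambda*t > x), since mu*lambda = 1

lam = 0.5                                      # lambda = 1/E[theta] = 1/2
tail = lambda y: np.exp(-np.sqrt(y))           # F_bar(y)
# closed form of int_a^b exp(-sqrt(y)) dy
integral = lambda a, b: 2 * (np.sqrt(a) + 1) * tail(a) - 2 * (np.sqrt(b) + 1) * tail(b)

print("Monte Carlo estimate of P(S_t^* - mu*lambda*t > x):", p_hat)
print("Theorem ZWY prediction  lambda * t * F_bar(t)     :", lam * t * tail(t))
print("right-hand side of (2.1), lambda * int_t^2t F_bar :", lam * integral(t, 2 * t))
```

Since \(\lambda \int _{t}^{2t}\overline{F}(y)\,dy\approx 0.006\) while \(\lambda t\overline{F}(t)\approx 0.021\) at \(t=50\), a gap of roughly a factor of three should already be visible at this moderate level, and by (2.1) the ratio of the two sides tends to 0 as t grows.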

Our proof, given in the next section, shows that there is plenty of room to generalize this counterexample, but we refrain from doing so to save space. Thus Theorem ZWY is false. The erroneous step appears in the first two lines of their proof, where the authors claimed that, by the assumption \(F\in \mathcal{S}\) and relation (1.2), to prove (1.3) it suffices to prove

$$ P \Bigl( \max_{k\leq N_{t}^{\ast }}(X_{k}-\mu )>x \Bigr) \sim \lambda t\overline{F}(x), \quad t,x\rightarrow \infty. $$
(2.2)

They seem to have overlooked an essential difference between (1.2) and (2.2): the index n in (1.2) is arbitrarily fixed, whereas the index \(N_{t}^{\ast }\) in (2.2) varies with t and almost surely diverges to ∞ as \(t\rightarrow \infty \).
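To see why this difference matters, suppose that the fixed index n in (1.2) is replaced by one growing proportionally to x, which is precisely the relevant regime here because \(N_{t}^{\ast }\) is of order λt while \(x\geq \gamma t\). For instance, taking \(n=n(x)= \lceil 2x/\mu \rceil \), the sum \(X_{1}+\cdots +X_{n}\) has mean about 2x, so \(P ( X_{1}+\cdots +X_{n}>x ) \rightarrow 1\) by the law of large numbers, whereas \(P ( \max \{X_{1},\ldots ,X_{n}\}>x ) \leq n\overline{F}(x)=O ( x\overline{F}(x) ) \rightarrow 0\) because \(E[X]<\infty \) forces \(x\overline{F}(x)\rightarrow 0\). Hence the ratio in (1.2) tends to ∞ rather than to 1, and the max–sum equivalence cannot simply be transferred to the randomly indexed sum in (2.2).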

Nevertheless, we note that in their paper the authors developed a martingale approach to the study of precise large deviations, which is new to us.

3 Proof of (2.1)

By conditions (i) and (ii), \(\mu =1/\lambda =E[X]=\int _{0}^{\infty }e^{-\sqrt{x}}\,dx=2\). By condition (ii), F has a long and rapidly varying tail; see Embrechts et al. [3] for these and related notions. Furthermore, for a distribution F with a long and rapidly varying tail, it is easy to verify the following:

$$ \overline{F} ( x ) =o \biggl( \int _{x}^{\infty } \overline{F}(y)\,dy \biggr) ,\qquad \int _{x}^{2x}\overline{F}(y)\,dy\sim \int _{x}^{\infty }\overline{F}(y)\,dy=o \bigl( x \overline{F}(x) \bigr). $$
(3.1)
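For the specific tail \(\overline{F}(x)=e^{-\sqrt{x}}\) of condition (ii), relations (3.1) can also be checked by a direct calculation (included here only for the reader's convenience): the substitution \(u=\sqrt{y}\) gives

$$ \int _{x}^{\infty }\overline{F}(y)\,dy= \int _{\sqrt{x}}^{\infty }2ue^{-u}\,du=2 ( \sqrt{x}+1 ) e^{-\sqrt{x}}, $$

so that \(\overline{F}(x)/\int _{x}^{\infty }\overline{F}(y)\,dy=(2(\sqrt{x}+1))^{-1}\rightarrow 0\), \(\int _{2x}^{\infty }\overline{F}(y)\,dy/\int _{x}^{\infty }\overline{F}(y)\,dy\rightarrow 0\), and \(\int _{x}^{\infty }\overline{F}(y)\,dy/ ( x\overline{F}(x) ) =2(\sqrt{x}+1)/x\rightarrow 0\).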

See Su and Tang [5] for closely related discussions; for example, the first relation in (3.1) can be found in their Theorem 3.1(i). Thus, it remains to prove the first step in (2.1). Keeping in mind condition (iii), we derive

$$\begin{aligned} P \bigl( S_{t}^{\ast }-\mu \lambda t>x \bigr) &=\sum _{n=1}^{\infty }P \Biggl( \sum_{k=1}^{n}X_{k}>2x,N_{t}^{\ast }=n \Biggr) \\ &=\sum_{n=1}^{\infty }P \Biggl( \sum _{k=1}^{n}X_{k}>2x,\sum _{k=1}^{n-1}X _{k}< x \Biggr) \\ &= \int _{0}^{{x-}}\overline{F} ( 2x-y ) \sum _{n=1}^{\infty }P \Biggl( \sum_{k=1}^{n-1}X_{k} \in \,dy \Biggr) \\ &= \int _{0}^{x}\overline{F} ( 2x-y ) \,d\lambda (y), \end{aligned}$$

where \(\int _{0}^{x-}\) is understood as \(\int _{(0,x)}\), and \(\lambda (y)=E[N_{y}]\) for \(y\geq 0\) is the renewal function. The last step can be verified as follows: for \(y>0\),

$$ \sum_{n=1}^{\infty }P \Biggl( \sum _{k=1}^{n-1}X_{k}\leq y \Biggr) = \sum _{n=0}^{\infty }P \Biggl( \sum _{k=1}^{n}X_{k}\leq y \Biggr) =1+ \sum _{n=1}^{\infty }P ( N_{y}\geq n ) =1+ \lambda (y). $$

Recall Blackwell’s renewal theorem:

$$ \lim_{x\rightarrow \infty } \bigl( \lambda (x+1)-\lambda (x) \bigr) = \lambda; $$

see, e.g., page 155 of Asmussen [1]. Thus, for arbitrarily fixed small \(\varepsilon >0\), there is some large \(x_{0}\in \mathbb{N}\) such that, for all \(x\geq x_{0}\),

$$ (1-\varepsilon )\lambda \leq \lambda (x+1)-\lambda (x)\leq (1+\varepsilon ) \lambda. $$
(3.2)

We continue the derivation:

$$\begin{aligned} P \bigl( S_{x}^{\ast }-\mu \lambda x>x \bigr) &= \Biggl( \int _{0} ^{x_{0}}+\sum_{i=x_{0}}^{\lfloor x\rfloor -1} \int _{i}^{i+1}+ \int _{\lfloor x\rfloor }^{{x-}} \Biggr) \overline{F} ( 2x-y ) \,d \lambda (y) \\ &= I_{1}+I_{2}+I_{3}, \end{aligned}$$
(3.3)

where \(\lfloor x\rfloor \) is the commonly used floor function. Since F has a long tail,

$$ I_{1}\sim \overline{F} ( 2x ) \lambda (x_{0}). $$

For the other two terms in (3.3), we derive

$$\begin{aligned} I_{2}+I_{3} &\leq \sum_{i=x_{0}}^{\lfloor x\rfloor } \int _{i}^{i+1} \overline{F} ( 2x-y ) \,d\lambda (y) \\ &\leq \sum_{i=x_{0}}^{\lfloor x\rfloor }\overline{F} ( 2x-i-1 ) \bigl( \lambda (i+1)-\lambda (i) \bigr) \\ &\leq (1+\varepsilon )\lambda \sum_{i=x_{0}}^{\lfloor x\rfloor } \overline{F} ( 2x-i-1 ) \end{aligned}$$
(3.4)
$$\begin{aligned} &\leq (1+\varepsilon )\lambda \sum_{i=x_{0}}^{\lfloor x\rfloor } \int _{i}^{i+1}\overline{F} ( 2x-y-1 ) \,dy \\ &\leq (1+\varepsilon )\lambda \int _{0}^{x+1}\overline{F} ( 2x-y-1 ) \,dy \\ &=(1+\varepsilon )\lambda \int _{x-2}^{2x-1}\overline{F}(y)\,dy \\ &\sim (1+\varepsilon )\lambda \int _{x}^{2x}\overline{F}(y)\,dy, \end{aligned}$$
(3.5)

where in step (3.4) we applied the upper bound in (3.2), and in step (3.5) we used the long tail property of F again and the two asymptotic relations in (3.1). Plugging these estimates into (3.3) yields

$$ P \bigl( S_{x}^{\ast }-\mu \lambda x>x \bigr) \lesssim \overline{F} ( 2x ) \lambda (x_{0})+(1+\varepsilon )\lambda \int _{x} ^{2x}\overline{F}(y)\,dy\sim (1+\varepsilon )\lambda \int _{x}^{2x} \overline{F}(y)\,dy, $$

where the last step is due to \(\overline{F} ( 2x ) \leq \overline{F} ( x ) =o ( \int _{x}^{2x}\overline{F}(y)\,dy ) \). A similar asymptotic lower bound can also be established. Finally, by the arbitrariness of ε we prove the first step in (2.1).
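For completeness, we sketch one way to obtain the matching lower bound, arguing as in (3.4)–(3.5) but with the inequalities reversed. By the monotonicity of \(\overline{F}\) and the lower bound in (3.2),

$$\begin{aligned} I_{2} &\geq \sum_{i=x_{0}}^{\lfloor x\rfloor -1}\overline{F} ( 2x-i ) \bigl( \lambda (i+1)-\lambda (i) \bigr) \\ &\geq (1-\varepsilon )\lambda \sum_{i=x_{0}}^{\lfloor x\rfloor -1} \int _{i-1}^{i}\overline{F} ( 2x-y ) \,dy \\ &=(1-\varepsilon )\lambda \int _{2x-\lfloor x\rfloor +1}^{2x-x_{0}+1} \overline{F}(y)\,dy \\ &\sim (1-\varepsilon )\lambda \int _{x}^{2x}\overline{F}(y)\,dy, \end{aligned}$$

where the last step uses \(x+1\leq 2x-\lfloor x\rfloor +1\leq x+2\), \(2x-x_{0}+1\leq 2x\), and \(\overline{F}(x)=o ( \int _{x}^{2x}\overline{F}(y)\,dy ) \), which follows from (3.1). Since \(I_{1}\geq 0\) and \(I_{3}\geq 0\), we conclude that \(P ( S_{x}^{\ast }-\mu \lambda x>x ) \geq I_{2}\gtrsim (1-\varepsilon )\lambda \int _{x}^{2x}\overline{F}(y)\,dy\), as needed.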