1 Introduction

Let \(\xi ,\xi _1\), \(\xi _2\), ... be independent random variables with a common distribution F. Consider a random walk \(S_0=0\), \(S_n=\xi _1+\cdots +\xi _n\) and a stopping time

$$\begin{aligned} \tau :=\inf \{n\ge 1: S_n<0\}. \end{aligned}$$

Let \(M_\tau :=\max _{0\le i\le \tau } S_i\) and \(M:=\sup _{n\ge 0} S_n\). We will consider random walks with infinite or undefined mean (\({\mathbf {E}}[|\xi _1|]=\infty \)) under the assumption that \(S_n\rightarrow -\infty \) a.s. It is well known that the latter assumption is equivalent to \(M<\infty \) a.s. and to \({\mathbf {E}}[\tau ]<\infty \) (see Theorem 1 in [17, Chapter XII, Section 2]).

In the infinite-mean case, an important role is played by the negative truncated mean function

$$\begin{aligned} m(x)\equiv \mathbf{E}\min \{\xi ^-,x\} =\int _0^x \mathbf{P}\{\xi ^->y\}\,{d}y,\quad x\ge 0, \end{aligned}$$

where \(\xi ^-=\max \{-\xi ,0\}\); the function m(x) is continuous, increasing, \(m(0)=0\) and \(m(x)>0\) for any \(x>0\). It is known that if \({\mathbf {E}}|\xi |=\infty \), then \(S_n\rightarrow -\infty \) a.s. as \(n\rightarrow \infty \) if and only if

$$\begin{aligned} K:=\int _0^\infty \frac{x}{m(x)}\,F(dx)\quad \text{ is finite}, \end{aligned}$$
(1)

see Corollary 1 in [16].
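For illustration, suppose for instance that \({\mathbf {P}}(\xi ^->y)=(1+y)^{-\beta }\) for some \(\beta \in (0,1)\), so that \({\mathbf {E}}\xi ^-=\infty \). Then a direct computation gives

$$\begin{aligned} m(x)=\int _0^x (1+y)^{-\beta }\,{d}y=\frac{(1+x)^{1-\beta }-1}{1-\beta }\sim \frac{x^{1-\beta }}{1-\beta }\quad \text{ and }\quad \frac{x}{m(x)}\sim (1-\beta )\,x^{\beta }, \end{aligned}$$

so that K in (1) is finite if and only if \({\mathbf {E}}(\xi ^+)^{\beta }<\infty \), where \(\xi ^+=\max \{\xi ,0\}\).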

The aim of this paper is to study the asymptotics for \({\mathbf {P}}(M_\tau >x)\) in the infinite-mean case. In the finite-mean case, it was shown by Asmussen [1] (see also [22] for the regularly varying case) that, under the assumption \(F\in {\mathcal {S}}^*\),

$$\begin{aligned} {\mathbf {P}}(M_\tau >x)\sim {\mathbf {E}}\tau {{\overline{F}}}(x), \end{aligned}$$
(2)

where \({{\overline{F}}}(x) = 1-F(x) = {\mathbf {P}}(\xi _1>x).\) The class \({\mathcal {S}}^*\) of strong subexponential distributions was introduced by Klüppelberg [23] and is defined as follows:

Definition 1

A distribution function F with finite \(\mu _+=\int _0^\infty {{\overline{F}}}(y)\,dy\) belongs to the class \({\mathcal {S}}^*\) of strong subexponential distributions if \({{\overline{F}}}(x)>0\) for all x and

$$\begin{aligned} \frac{\int _0^x {{\overline{F}}}(x-y){{\overline{F}}}(y)dy}{{{\overline{F}}}(x)} \rightarrow 2 \mu _+,\quad \text{ as } x\rightarrow \infty . \end{aligned}$$

This class is a proper subclass of the class \({\mathcal {S}}\) of subexponential distributions. It is shown in [23] that, for example, the Pareto (with finite mean), lognormal and Weibull (with shape parameter less than 1) distributions belong to the class \({\mathcal {S}}^*\).
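For instance, if \({{\overline{F}}}\) is regularly varying with index \(-\alpha \), \(\alpha >1\) (see Definition 5 below), so that \(\mu _+<\infty \), the defining relation may be verified directly: by the symmetry of the integrand,

$$\begin{aligned} \frac{\int _0^x {{\overline{F}}}(x-y){{\overline{F}}}(y)\,dy}{{{\overline{F}}}(x)}=2\int _0^{x/2}\frac{{{\overline{F}}}(x-y)}{{{\overline{F}}}(x)}\,{{\overline{F}}}(y)\,dy \rightarrow 2\int _0^{\infty }{{\overline{F}}}(y)\,dy=2\mu _+, \end{aligned}$$

where the convergence follows by dominated convergence, since for \(y\le x/2\) the ratio \({{\overline{F}}}(x-y)/{{\overline{F}}}(x)\) tends to 1 and is bounded by \({{\overline{F}}}(x/2)/{{\overline{F}}}(x)\), which stays bounded for regularly varying tails.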

The proof in [1] relied on the local asymptotics for \({\mathbf {P}}(M \in (x,x+T]) \) found in [5] and, independently, in [3]. Foss and Zachary [19] pointed out the necessity of the condition \(F\in {\mathcal {S}}^*\) and extended (2) to the case of an arbitrary stopping time \(\sigma \) with finite mean \({\mathbf {E}}\sigma <\infty \). Then, Foss, Palmowski and Zachary [20] found the asymptotics of \({\mathbf {P}}(M_\sigma >x)\) for a more general class of stopping times \(\sigma \), including those that may take infinite values or have infinite mean. They also proved that these asymptotics hold uniformly in all stopping times. Short proofs of (2) may be found in [9, 10] and [14]: the proofs in [9, 10] rely on the local asymptotics for \({\mathbf {P}}(M_\tau \in (x,x+T])\), while the proof in [14] uses the martingale properties of \({\mathbf {P}}(M>x)\). The local asymptotics for \({\mathbf {P}}(M_\tau \in (x, x+T])\) were found in [13].

We will now introduce several subclasses of heavy-tailed distributions that will be used in the text.

Definition 2

A distribution function F is (right) long-tailed (\(F\in {\mathcal {L}}\)) if, for any fixed \(y>0\),

$$\begin{aligned} {\mathbf {P}}(\xi>x+y\mid \xi >x)=\frac{{{\overline{F}}}(x+y)}{{{\overline{F}}}(x)}\rightarrow 1 , \quad x\rightarrow \infty . \end{aligned}$$
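For example, the Weibull tail \({{\overline{F}}}(x)=e^{-x^\alpha }\) with \(\alpha \in (0,1)\) is long-tailed: for any fixed \(y>0\),

$$\begin{aligned} \frac{{{\overline{F}}}(x+y)}{{{\overline{F}}}(x)}=e^{x^\alpha -(x+y)^\alpha }\rightarrow 1,\quad x\rightarrow \infty , \end{aligned}$$

since, by concavity, \((x+y)^\alpha -x^\alpha \le \alpha y x^{\alpha -1}\rightarrow 0\). By contrast, for the exponential tail \(e^{-x}\) the same ratio equals \(e^{-y}<1\) for all x.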

An important subclass of heavy-tailed distributions is the class of subexponential distributions introduced independently by Chistyakov [7] and Chover et al. [8].

Definition 3

A distribution function F on \({\mathbf {R}}^+\) is subexponential (\(F\in {\mathcal {S}}\)) if \({{\overline{F}}}(x)>0\) for all x and

$$\begin{aligned} \frac{{\mathbf {P}}(\xi _1+\xi _2>x)}{{\mathbf {P}}(\xi _1>x)} =\frac{{{\overline{F}}}^{2*}(x)}{{{\overline{F}}}(x)}\rightarrow 2, \quad \text{ as } x\rightarrow \infty , \end{aligned}$$
(3)

where \(\xi _1,\xi _2\) are independent random variables with a common distribution function F.

Sufficient conditions for a distribution to belong to the class \({\mathcal {S}}\,\) may be found, for example, in [7, 23, 24] and [25]. The class \({\mathcal {S}}\,\) includes, in particular, the following distributions on \([0,\infty )\):

(i) the Pareto distribution with the tail \({{\overline{G}}}(x)=(\frac{\kappa }{\kappa +x})^\alpha \), where \(\alpha ,\kappa >0\);

(ii) the lognormal distribution with the density \(\frac{1}{x\sqrt{2\pi \sigma ^2}}\,e^{-(\ln x-\ln \alpha )^2/(2\sigma ^2)}\), where \(\alpha ,\sigma >0\);

(iii) the Weibull distribution with the tail \({{\overline{G}}}(x)=e^{-x^\alpha }\), where \(\alpha \in (0,1)\).

Another subclass of heavy-tailed distributions is the class of distributions with dominated varying tail.

Definition 4

A distribution function F has a dominated varying tail (\({F\in {\mathcal {D}}}\)) if

$$\begin{aligned} \sup _{x>0}\frac{{{\overline{F}}}(x/2)}{{{\overline{F}}}(x)}<\infty . \end{aligned}$$

A distribution function from \({\mathcal {D}}\) is not necessarily subexponential. Indeed, all subexponential distributions are long-tailed, but there are dominated varying distributions which are not long-tailed; see [15] and [21] for counterexamples. However, Klüppelberg [23] proved that if the mean \(\int _0^\infty {{\overline{F}}}(y)\, dy\) is finite, then \({\mathcal {L}}\cap {\mathcal {D}}\subset {\mathcal {S}}^*\subset {\mathcal {S}}\). All regularly varying distribution functions belong to \({\mathcal {D}}\).

Definition 5

A distribution function F is regularly varying with index \(-\alpha \) if, for all \(\lambda >0\),

$$\begin{aligned} \frac{{{\overline{F}}}(\lambda x)}{{{\overline{F}}}(x)}\rightarrow \lambda ^{-\alpha },\quad x\rightarrow \infty . \end{aligned}$$

Examples of regularly varying distribution functions are the Pareto distribution function and any G with the tail \({{\overline{G}}}(x)\sim 1/(x^\alpha \ln ^\beta x)\). An extensive survey of regularly varying distributions may be found in [6]. It is shown in [7] that any subexponential distribution is necessarily long-tailed. The converse is not true; see [15] for a counterexample.
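For instance, Definition 5 immediately yields membership in both of the classes introduced above: if \({{\overline{F}}}\) is regularly varying with index \(-\alpha \), then, for any fixed \(y>0\), any \(\lambda >1\) and all sufficiently large x,

$$\begin{aligned} \frac{{{\overline{F}}}(x/2)}{{{\overline{F}}}(x)}\rightarrow 2^{\alpha }<\infty \quad \text{ and }\quad 1\ge \frac{{{\overline{F}}}(x+y)}{{{\overline{F}}}(x)}\ge \frac{{{\overline{F}}}(\lambda x)}{{{\overline{F}}}(x)}\rightarrow \lambda ^{-\alpha }; \end{aligned}$$

hence \(F\in {\mathcal {D}}\), and letting \(\lambda \downarrow 1\) gives \(F\in {\mathcal {L}}\).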

When the mean is finite, the derivation of (2) in [1, 19] and [10] heavily relied on the local asymptotics of \({\mathbf {P}}(M\in (x,x+c])\) for a fixed \(c>0\) as \(x\rightarrow \infty \). In the infinite-mean case, these local asymptotics are not known, and it seems that they can be found only in some particular cases. The reason is the difficulty of the local renewal theorem in the infinite-mean case; see [4] for its complete solution and history. Therefore, we propose a slightly different approach: it turns out to be sufficient to prove directly that the distribution \(\pi \) of the maximum M satisfies

$$\begin{aligned} \int _0^x\pi (du){{\overline{F}}}(x-u)\sim {{\overline{F}}}(x). \end{aligned}$$
(4)

For that, we introduce a new class \({\mathcal {S}}_F\) of heavy-tailed distributions:

Definition 6

Let F be a distribution function. A distribution function G on \(\mathbf{R}^+\) belongs to \({\mathcal S}_F\) (\(G\in {{\mathcal {S}}}_F\)) if

$$\begin{aligned} \int _0^x G(du){{\overline{F}}}(x-u)\sim {{\overline{F}}}(x). \end{aligned}$$
(5)

This class is a natural extension of the class of subexponential distributions. Indeed, it follows from the definitions that F is subexponential if and only if \(F\in {\mathcal {S}}_F\). We then study the properties of this class; these properties (as well as their proofs) are rather close to those of subexponential distributions. Let \(G_1\) be a distribution function on \({\mathbf {R}}^+\) with distribution tail

$$\begin{aligned} {{\overline{G}}}_1(x) = \frac{1}{K}\int _x^{\infty } \frac{t-x}{m(t-x)}\, F({d}t),\quad x\ge 0. \end{aligned}$$
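To give a concrete, purely illustrative example: suppose, as in the example following (1), that \({\mathbf {P}}(\xi ^->y)=(1+y)^{-\beta }\) with \(\beta \in (0,1)\), so that \(m(t)\sim t^{1-\beta }/(1-\beta )\), and that \({{\overline{F}}}(t)=c\,t^{-\alpha }\) for all large t, where \(c>0\) and \(\alpha >\beta \) (the latter guarantees that K is finite). Then a direct computation (substituting \(t=xu\)) gives

$$\begin{aligned} {{\overline{G}}}_1(x)\sim \frac{1-\beta }{K}\int _x^{\infty }(t-x)^{\beta }\,F({d}t)\sim c_1\,x^{\beta }\,{{\overline{F}}}(x) \end{aligned}$$

for some constant \(c_1>0\), so that \({{\overline{G}}}_1\) is regularly varying with index \(\beta -\alpha \). Since such an F belongs to \({\mathcal {L}}\cap {\mathcal {D}}\) and condition (1) holds, this is exactly the situation covered by part (b) of Theorem 8 below.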

The following theorems are the main results of this paper.

Theorem 7

Suppose \(\mathbf{E}\xi ^-=\infty \) and condition (1) holds. If the distribution function \(G_1\) belongs to \({\mathcal {S}}_F\), then the asymptotics (2) hold, i.e.,

$$\begin{aligned} {\mathbf {P}} (M_\tau >x)\sim {\mathbf {E}}\tau {{\overline{F}}}(x). \end{aligned}$$

Theorem 8

Let \({\mathbf {E}}\xi _1^-=\infty \) and either of the following conditions hold:

(a) \(F\in {{\mathcal {S}}}^*\);

(b) \(F\in {{\mathcal {L}}}\cap {{\mathcal {D}}}\) and condition (1) holds.

Then \(G_1\in {\mathcal {S}}_F\).
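In particular, combining Theorems 7 and 8: if \({\mathbf {E}}\xi _1^-=\infty \), condition (1) holds and (a) or (b) is satisfied, then

$$\begin{aligned} {\mathbf {P}}(M_\tau >x)\sim {\mathbf {E}}\tau \,{{\overline{F}}}(x),\quad x\rightarrow \infty . \end{aligned}$$

For example, this covers every F with a regularly varying right tail for which condition (1) holds, since such an F belongs to \({\mathcal {L}}\cap {\mathcal {D}}\).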

2 Class \({{\mathcal {S}}}_F\) and its basic properties

Definition 6 may be rephrased as follows: Consider independent random variables \(\psi \ge 0\) and \(\xi \) with distributions G and F, respectively. Then \(G\in {{\mathcal {S}}}_F\) if and only if

$$\begin{aligned} {\mathbf {P}}(\xi +\psi>x, \psi \le x)\sim {\mathbf {P}}(\xi >x). \end{aligned}$$
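For instance, any distribution function G concentrated on a bounded interval [0, a] belongs to \({{\mathcal {S}}}_F\) whenever \(F\in {\mathcal {L}}\): for \(x>a\),

$$\begin{aligned} {{\overline{F}}}(x)\le \int _0^x G({d}u){{\overline{F}}}(x-u)\le {{\overline{F}}}(x-a)\sim {{\overline{F}}}(x). \end{aligned}$$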

Basic properties of the class \({{\mathcal {S}}}_F\) are very close to those of the class of subexponential distributions (see Lemmas 9–13). For a fine account of the theory of subexponential and local subexponential distributions we refer to [2] and [18]. Throughout, for any non-decreasing function G, we let \(G(x,y]:=G(y)-G(x)\).

Lemma 9

Let G be a distribution function on \({{\mathbf {R}}}^+\). Then \(G\in {{\mathcal {S}}}_F\) if and only if there exists a function \(h(x)\uparrow \infty , h(x)<x/2\) such that

(i) \({{\overline{F}}}(x-h(x))\sim {{\overline{F}}}(x)\);

(ii) \(G(x-h(x),x]=o({{\overline{F}}}(x))\);

(iii) \(\int _{h(x)}^{x-h(x)} G({d}u){{\overline{F}}}(x-u)= o({{\overline{F}}}(x)).\)

Proof of Lemma 9

First, assume \(G\in {{\mathcal {S}}}_F\). Fix any \(t>0\). Then,

$$\begin{aligned} \int _0^x G(du){{\overline{F}}}(x-u)&= \left( \int _0^t+\int _t^{x-t}+\int _{x-t}^{x}\right) G({d}u){{\overline{F}}}(x-u)\nonumber \\&\ge G[0,t]{{\overline{F}}}(x)+G(t,x-t]{{\overline{F}}}(x-t)+G(x-t,x]{{\overline{F}}}(t). \end{aligned}$$
(6)

By dividing both sides by \({{\overline{F}}}(x)\), letting x tend to infinity and rearranging the terms, we obtain

$$\begin{aligned} 1\ge \limsup _{x\rightarrow \infty }\frac{{{\overline{F}}}(x-t)}{{{\overline{F}}}(x)}\ge \liminf _{x\rightarrow \infty }\frac{{{\overline{F}}}(x-t)}{{{\overline{F}}}(x)}\ge 1 \end{aligned}$$
(7)

and

$$\begin{aligned} \limsup _{x\rightarrow \infty }\frac{G(x-t,x]}{{{\overline{F}}}(x)}=0. \end{aligned}$$
(8)

It follows from (7) and (8) that, for any fixed t,

$$\begin{aligned} {{\overline{F}}}(x-t)\sim {{\overline{F}}}(x), \quad G(x-t,x]=o({{\overline{F}}}(x)), \quad x\rightarrow \infty . \end{aligned}$$
(9)

Then, we can define h(x) as follows: For any \(n\in \{1,2,\ldots \}\), let \(x_n\) be the minimal number such that \(x_n\ge 2n\) and

$$\begin{aligned} \left| \frac{{{\overline{F}}}(x-t)}{{{\overline{F}}}(x)}-1\right| \le \frac{1}{n} \text{ and } \frac{G(x-t,x]}{{{\overline{F}}}(x)} \le \frac{1}{n}, \text{ for } x>x_n, t\in [0,n]. \end{aligned}$$
(10)

The existence of \(x_n\uparrow \infty \) follows from (9) and monotonicity of \({{\overline{F}}}\) and G. Then we put

$$\begin{aligned} h(x) = n, \quad x\in (x_n,x_{n+1}]. \end{aligned}$$

By construction, \(h(x)=n\le x_n/2<x/2\) for \(x\in (x_n,x_{n+1}]\), and h satisfies (i) and (ii). To prove (iii), we make use of the representation (6) with \(t=h(x)\). Then, by (i) and (ii) respectively,

$$\begin{aligned} \left( \int _0^{h(x)}+\int _{x-h(x)}^{x}\right) G(du){{\overline{F}}}(x-u) \sim {{\overline{F}}}(x), \quad x\rightarrow \infty . \end{aligned}$$

It follows from \(G\in {\mathcal {S}}_F\) that (iii) holds as well for this choice of h(x).

Conversely, suppose that there is a function h(x) satisfying (i)–(iii). Condition (i) implies

$$\begin{aligned} \int _0^{h(x)}G({d}y){{\overline{F}}}(x-y)\sim {{\overline{F}}}(x) \int _0^{h(x)}G({d}y)\sim {{\overline{F}}}(x), \end{aligned}$$

and condition (ii) implies

$$\begin{aligned} 0\le \int _{x-h(x)}^{x}G({d}y)\overline{F}(x-y)\le G(x-h(x),x]=o({{\overline{F}}}(x)). \end{aligned}$$

Using condition (iii) we obtain the required result \(G\in {{\mathcal {S}}}_F\). \(\square \)
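For instance, if \({{\overline{F}}}\) is regularly varying, then condition (i) holds for any \(h(x)\rightarrow \infty \) with \(h(x)=o(x)\): indeed,

$$\begin{aligned} \frac{{{\overline{F}}}(x-h(x))}{{{\overline{F}}}(x)}=\frac{{{\overline{F}}}\bigl (x(1-h(x)/x)\bigr )}{{{\overline{F}}}(x)}\rightarrow 1 \end{aligned}$$

by the uniform convergence theorem for regularly varying functions, since \(1-h(x)/x\rightarrow 1\). Conditions (ii) and (iii) then restrict only the mass that G places near x and in the middle range.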

Lemma 10

(convolution closure) Let the distribution functions \(G_1, G_2\) belong to \({{\mathcal {S}}}_F\). Then \(G_1*G_2\in {{\mathcal {S}}}_F.\)

Proof

Take a function h(x) satisfying conditions (i)–(iii) of Lemma 9 for both distributions \(G_1\) and \(G_2\) simultaneously (for instance, the minimum of the two functions provided by Lemma 9 will do). Then,

$$\begin{aligned}&\int _0^x G_1*G_2({d}u){{\overline{F}}}(x-u)\\&=\int _0^{x} G_1({d}u)\int _0^{x-u}G_2({d}v){{\overline{F}}}(x-u-v)\\&=\left( \int _0^{x-h(x)}+\int _{x-h(x)}^{x}\right) G_1({d}u)\int _0^{x-u}G_2({d}v){{\overline{F}}}(x-u-v) \equiv I_1(x)+I_2(x). \end{aligned}$$

By conditions (i) and (iii) of Lemma 9,

$$\begin{aligned} I_1(x)&=\int _0^{x-h(x)} G_1({d}u)\int _0^{x-u} G_2({d}v){{\overline{F}}}(x-u-v)\\&\sim \int _0^{x-h(x)} G_1({d}u){{\overline{F}}}(x-u)\sim {{\overline{F}}}(x) \end{aligned}$$

and, by condition (ii),

$$\begin{aligned} I_2(x)\le G_1(x-h(x),x]= o({{\overline{F}}}(x)). \end{aligned}$$

\(\square \)

By induction, Lemma 10 yields

Corollary 11

Let \(G\in {{\mathcal {S}}}_F\). Then \(G^{*n}\in {{\mathcal {S}}}_{F}\) for any \(n\ge 1\).

Throughout, \(G^{*0}\) denotes the distribution degenerate at 0.

Lemma 12

Let \(G\in {{\mathcal {S}}}_F\). Then, for any \(\varepsilon >0\), there exists \(A\equiv A(\varepsilon )>0\) such that, for any integer \(n\ge 0\) and for any \(x\ge 0\),

$$\begin{aligned} \int _0^x G^{*n}({d}u){{\overline{F}}}(x-u)\le A(\varepsilon )(1+\varepsilon )^n {{\overline{F}}}(x). \end{aligned}$$

Proof

Take any \(\varepsilon >0\). Since \(G\in {{\mathcal {S}}}_F\), there exists \(x_0>0\) such that

$$\begin{aligned} \int _0^x G({d}u)\overline{F}(x-u) \le (1+\varepsilon ){{\overline{F}}}(x), \quad x\ge x_0. \end{aligned}$$
(11)

Put \(A\equiv \frac{1}{{{\overline{F}}}(x_0)}\). We argue by induction on n. For \(n=0\) the assertion clearly holds. Suppose that the assertion is true for \(n-1\); we prove it for n. For \(x<x_0\),

$$\begin{aligned} \frac{\int _0^x G^{*n}({d}u)\overline{F}(x-u)}{{{\overline{F}}}(x)}\le \frac{1}{{{\overline{F}}}(x)}\le \frac{1}{{{\overline{F}}}(x_0)}=A \le A(1+\varepsilon )^n. \end{aligned}$$

Further, for \(x\ge x_0\),

$$\begin{aligned} \int _0^x G^{*n}({d}u){{\overline{F}}}(x-u)&= \int _0^x G({d}y) \int _0^{x-y}G^{*(n-1)}({d}v){{\overline{F}}}(x-y-v)\\&\le A(1+\varepsilon )^{n-1}\int _0^{x}G({d}y){{\overline{F}}}(x-y)\le A(1+\varepsilon )^n{{\overline{F}}}(x). \end{aligned}$$

The last inequality follows from (11). \(\square \)

Let \(\{\zeta _n\}\) be a sequence of i.i.d. nonnegative random variables with a common distribution G, and let \(\nu \) be a nonnegative integer-valued random variable independent of \(\{\zeta _n\}\). Put \(X_n=\zeta _1+\cdots +\zeta _n\). Then the distribution of the randomly stopped sum \(X_\nu \) is

$$\begin{aligned} G_\nu (x)\equiv {\mathbf {P}}(X_\nu \le x)=\sum _{n\ge 0} {\mathbf {P}}(\nu =n)G^{*n}(x). \end{aligned}$$

Lemma 13

Let G belong to \({{\mathcal {S}}}_F\). Assume that \({\mathbf {E}}(1+\delta )^{\nu }<\infty \) for some \(\delta >0\). Then \(G_\nu \) belongs to \({{\mathcal {S}}}_F\).

Proof

The result follows from Corollary 11, Lemma 12 and from the dominated convergence theorem. \(\square \)
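In more detail: fix \(\varepsilon \in (0,\delta ]\) and take \(A(\varepsilon )\) from Lemma 12. For every fixed n we have \(\int _0^x G^{*n}({d}u){{\overline{F}}}(x-u)\sim {{\overline{F}}}(x)\) (this is trivial for \(n=0\) and follows from Corollary 11 for \(n\ge 1\)), while Lemma 12 bounds the corresponding ratio by \(A(\varepsilon )(1+\varepsilon )^n\) uniformly in x. Since \({\mathbf {E}}(1+\varepsilon )^{\nu }\le {\mathbf {E}}(1+\delta )^{\nu }<\infty \), dominated convergence applied to

$$\begin{aligned} \frac{\int _0^x G_\nu ({d}u){{\overline{F}}}(x-u)}{{{\overline{F}}}(x)}=\sum _{n\ge 0}{\mathbf {P}}(\nu =n)\,\frac{\int _0^x G^{*n}({d}u){{\overline{F}}}(x-u)}{{{\overline{F}}}(x)}\rightarrow \sum _{n\ge 0}{\mathbf {P}}(\nu =n)=1 \end{aligned}$$

yields \(G_\nu \in {{\mathcal {S}}}_F\).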

It is known [23, Theorem 3.2] that if \(F\in {{\mathcal {S}}}^*\) then F is subexponential, i.e., \(F\in {\mathcal {S}}_F\). In the following lemma, we generalize this assertion to obtain sufficient conditions for \(G\in {\mathcal S }_F\). Another extension may be found in [12, Lemma 9].

Lemma 14

Let \(F\in {{\mathcal {S}}}^*\) and let G be a distribution function on \({\mathbf {R}}^+\) such that \(G(x-1,x]=o({{\overline{F}}}(x))\). Then \(G\in {{\mathcal {S}}}_F\).

Proof

It follows from \(F\in {{\mathcal {S}}}^*\) that \(F\in {{\mathcal {L}}}\); see [23, Theorem 3.2]. Then there exists a function h(x) satisfying conditions (i) and (ii) of Lemma 9. Further,

$$\begin{aligned} \int _{h(x)}^{x-h(x)} G({d}y)\overline{F}(x-y)\le \sum _{k=[h(x)]-1}^{[x-h(x)]} G(k-1,k]{{\overline{F}}}(x-k). \end{aligned}$$

Since \(G(x-1,x]=o({{\overline{F}}}(x))\), for x large enough \(G(x-1,x]\le {{\overline{F}}}(x)\). Then,

$$\begin{aligned} \sum _{k=[h(x)]-1}^{[x-h(x)]} G(k-1,k]{{\overline{F}}}(x-k)\le \sum _{k=[h(x)]-1}^{[x-h(x)]} {{\overline{F}}}(k){{\overline{F}}}(x-k-1)=o({{\overline{F}}}(x)), \end{aligned}$$

where the last estimate is due to \(F\in {\mathcal {S}}^*\). Thus, condition (iii) of Lemma 9 is satisfied and \(G\in {\mathcal {S}}_F\).

\(\square \)

Let H be a non-decreasing function on \({\mathbf {R}}^+\) such that the integral

$$\begin{aligned} \int _0^\infty H(dt){{\overline{F}}}(t)\quad \text{ is finite.} \end{aligned}$$

We assume that H is subadditive, i.e., \(H(x+y)\le H(x)+H(y)\) for all \(x,y\ge 0\). Consider a distribution function \(G_H\) on \({\mathbf {R}}^+\) with tail distribution

$$\begin{aligned} {{\overline{G}}}_H(x)\equiv \min \Biggl (1,\int _0^\infty {{\overline{F}}}(t+x)\,H({d}t)\Biggr ),\quad x\ge 0. \end{aligned}$$
(12)

Integrating (12) by parts, we obtain an equivalent representation for \(G_H\):

$$\begin{aligned} {{\overline{G}}}_H(x)= \min \Biggl (1,\int _x^\infty H(0,t-x]\,F({d}t)\Biggr ),\quad x\ge 0. \end{aligned}$$
(13)

We now establish some properties of \(G_H\), which will be used in the next section.

Lemma 15

Let \(F\in {\mathcal {L}}\) and \(H(x-1,x]\rightarrow 0\) as \(x\rightarrow \infty \). Then,

$$\begin{aligned} G_H(x-1,x]=o({{\overline{F}}}(x)). \end{aligned}$$

Proof

It follows from the definition that, for all sufficiently large x,

$$\begin{aligned} G_H(x-1,x]= \int _{x-1}^\infty H((t-x)^+,t-x+1] F({d}t). \end{aligned}$$

Since \(F\in {\mathcal {L}}\), there exists a function \(h(x)\uparrow \infty \) such that \({{\overline{F}}}(x)\sim {{\overline{F}}}(x+h(x))\). Then,

$$\begin{aligned} \int _{x-1}^{x+h(x)} H((t-x)^+,t-x+1] F({d}t)\le H(1) F(x-1,x+h(x)] =o({{\overline{F}}}(x)) \end{aligned}$$

and

$$\begin{aligned} \int _{x+h(x)}^\infty H(t-x,t-x+1] F({d}t)\le \sup _{y\ge h(x)}H(y,y+1] {{\overline{F}}}(x+h(x))=o({{\overline{F}}}(x)). \end{aligned}$$

\(\square \)

Lemma 16

Let \(F\in {\mathcal {L}}\), and let \(H_1,H_2\) be subadditive functions such that \(H_1(x-1,x]\rightarrow 0, H_2(x-1,x]\rightarrow 0\) and

$$\begin{aligned} 0<\liminf \frac{H_1(x)}{H_2(x)}\le \limsup \frac{H_1(x)}{H_2(x)}<\infty . \end{aligned}$$
(14)

Then

$$\begin{aligned} G_{H_1}\in {\mathcal {S}}_F\iff G_{H_2}\in {\mathcal {S}}_F. \end{aligned}$$

Proof

By Lemma 15, \(G_{H_1}(x-1,x]=o({{\overline{F}}}(x))\) and \(G_{H_2}(x-1,x]=o({{\overline{F}}}(x))\). Therefore, there exists a function h(x) such that conditions (i) and (ii) of Lemma 9 hold for both distribution functions \(G_{H_1}\) and \(G_{H_2}\). Integrating (iii) by parts and using (i) and (ii), we obtain

$$\begin{aligned} \int _{h(x)}^{x-h(x)}G_{H_i}({d}y){{\overline{F}}}(x-y)= \int _{h(x)}^{x-h(x)}F({d}y)\,G_{H_i}(x-y,x]+o({{\overline{F}}}(x)),\quad i=1,2. \end{aligned}$$

For any \(y<x\), we have

$$\begin{aligned} G_{H_i}(x-y,x]= \int _{x-y}^x H_i(0,t-x+y]\, F({d}t) +\int _{x}^\infty H_i(t-x,t-x+y]\, F({d}t). \end{aligned}$$

Due to the subadditive property,

$$\begin{aligned} \int _{h(x)}^{x-h(x)}F({d}y)\int _{x}^\infty H_i(t-x,t-x+y] F({d}t)\le \int _{h(x)}^{x-h(x)}F({d}y) H_i(y){{\overline{F}}}(x)=o({{\overline{F}}}(x)). \end{aligned}$$

Hence condition (iii) of Lemma 9 holds for the distribution functions \(G_{H_1}\) and \(G_{H_2}\) if and only if

$$\begin{aligned} \int _{h(x)}^{x-h(x)}F({d}y)\int _{x-y}^x H_i(0,t-x+y] F({d}t)=o({{\overline{F}}}(x)). \end{aligned}$$
(15)

Then the assertion of the lemma follows from (14). \(\square \)

Lemma 17

Assume that \(F\in {\mathcal {L}}\cap {\mathcal {D}}\) and \(H(x-1,x]\rightarrow 0\) as \(x\rightarrow \infty \). Then \(G_H\in {\mathcal {S}}_F.\)

Proof

It follows from Lemma 15 that \(G_H(x-1,x]=o({{\overline{F}}}(x))\). Therefore, there exists a function h(x) satisfying conditions (i) and (ii) of Lemma 9. From the proof of Lemma 16, it is clear that if (15) holds then \(G_H\in {\mathcal {S}}_F\). Using the subadditivity of H, we obtain

$$\begin{aligned} \int _{h(x)}^{x/2}F({d}y)\int _{x-y}^x H(0,t-x+y] F({d}t)\le \int _{h(x)}^{x/2}F({d}y)H(0,y] F(x/2,x]=o(1) {{\overline{F}}}(x/2). \end{aligned}$$

Further,

$$\begin{aligned} \int _{x/2}^{x-h(x)}F({d}y)\int _{x-y}^x H(0,t-x+y] F({d}t)\le {{\overline{F}}}(x/2) \int _{h(x)}^\infty F(dt)H(0,t]=o(1) {{\overline{F}}}(x/2). \end{aligned}$$

It follows from \(F\in {\mathcal {D}}\) that \(o(1) \overline{F}(x/2)=o({{\overline{F}}}(x))\). \(\square \)

3 Proofs of the main results

First recall a well-known construction of ladder epochs and ladder heights [17, Chapter XII]. Let

$$\begin{aligned} \eta = \min \{n\ge 1: S_n>0\}\le \infty \end{aligned}$$

be the first (strict) ascending ladder epoch and put

$$\begin{aligned} p={\mathbf {P}}\{\eta =\infty \}={\mathbf {P}} (M=0). \end{aligned}$$

Let \(\{\psi _n\}_{n\ge 1}\) be a sequence of i.i.d. random variables distributed as

$$\begin{aligned} {\mathbf {P}}(\psi _1\in B)\equiv G_+(B)={\mathbf {P}}(S_\eta \in B | \eta <\infty ). \end{aligned}$$

Let \(\nu \) be a random variable, independent of the above sequence, such that \({\mathbf {P}} (\nu =n)=p(1-p)^n, n=0,1,2,\ldots \). Then

$$\begin{aligned} M{\mathop {=}\limits ^{d}}\psi _1+\cdots +\psi _\nu . \end{aligned}$$
(16)
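Note that, since \(S_n\rightarrow -\infty \) a.s., we have \(p={\mathbf {P}}(M=0)>0\), and hence, for the geometric random variable \(\nu \),

$$\begin{aligned} {\mathbf {E}}(1+\delta )^{\nu }=\sum _{n\ge 0}p(1-p)^n(1+\delta )^n=\frac{p}{1-(1-p)(1+\delta )}<\infty \quad \text{ for } \text{ any } \delta <\frac{p}{1-p}, \end{aligned}$$

so the moment condition of Lemma 13 is automatically satisfied in the representation (16).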

We start by proving an auxiliary assertion.

Lemma 18

Let \(G_+\in {{\mathcal {S}}}_F\). Then the asymptotics (2) hold, i.e.,

$$\begin{aligned} {\mathbf {P}} (M_\tau >x)\sim {\mathbf {E}}\tau {{\overline{F}}}(x). \end{aligned}$$

Proof

The proof is carried out via standard arguments: we obtain lower and upper bounds which are asymptotically equivalent. Let us start with the lower bound, which is valid without any assumptions on F and \(G_+\). Fix a positive integer N and let \(h(x)\le x\) be any function tending to infinity. Then, for any \(x>0\),

$$\begin{aligned} {\mathbf {P}}(M_\tau>x)&\ge \sum _{n=0}^N {\mathbf {P}}\left( \tau>n, \max _{0\le i\le n} S_i\le x, S_{n+1}>x\right) \\&\ge \sum _{n=0}^N {\mathbf {P}}\left( \tau>n, \max _{0\le i\le n} S_i\le h(x), \xi _{n+1}>x\right) \\&= (1+o(1)){{\overline{F}}}(x) \sum _{n=0}^N {\mathbf {P}}(\tau >n). \end{aligned}$$

Letting \(x\rightarrow \infty \) and then \(N\rightarrow \infty \), we obtain

$$\begin{aligned} {\mathbf {P}}(M_\tau >x)\ge (1+o(1)){\mathbf {E}}\tau {{\overline{F}}}(x). \end{aligned}$$

Now turn to the upper bound. For any \(x\ge 0\),

$$\begin{aligned} {\mathbf {P}}(M_\tau>x)&\le \sum _{n=0}^{\infty } {\mathbf {P}}(\tau>n, S_n\le x, S_{n+1}>x)\\&=\int _0^x\sum _{n=0}^{\infty } {\mathbf {P}}(S_1>0,\ldots , S_n>0, S_n \in du){{\overline{F}}}(x-u)\\&={\mathbf {E}}\tau \int _0^x {\mathbf {P}}(M\in du){{\overline{F}}}(x-u), \end{aligned}$$

for the last equality, see [17, Chapter XII, (2.7)]. Finally, it follows from \(G_+\in {{\mathcal {S}}}_F\), relation (16) and Lemma 13 that the distribution function of M belongs to \({\mathcal {S}}_F\), that is,

$$\begin{aligned} \int _0^x {\mathbf {P}}(M\in {d}u){{\overline{F}}}(x-u)\sim {{\overline{F}}}(x). \end{aligned}$$

\(\square \)

Now let us introduce a few more definitions. Let \(\chi =-S_{\tau }\) be the absolute value of the first negative partial sum and let \( G_-(x)\equiv {\mathbf {P}}(\chi \le x) \) be its distribution function. Define a renewal function

$$\begin{aligned} H_-(x) = \sum _{n=0}^\infty G_-^{*n}(x), \quad x\ge 0. \end{aligned}$$

Then \(\psi _1\) is distributed as follows [17, Chapter XII]:

$$\begin{aligned} {\mathbf {P}}(\psi _1>x) \equiv {{\overline{G}}}_+(x)= \frac{1}{1-p} \int _0^\infty {{\overline{F}}}(u+x)\, H_-({d}u). \end{aligned}$$
(17)

We will need the following asymptotic estimates for the renewal function \(H_-\).

Proposition 19

(see [12, Corollary 2]) Suppose \({\mathbf {E}}[\xi _1^-]=\infty \) and condition (1) holds. Then,

$$\begin{aligned} p\le \liminf _{x\rightarrow \infty } \frac{H_-(x)m(x)}{x}\le \limsup _{x\rightarrow \infty } \frac{H_-(x)m(x)}{x}\le 2p. \end{aligned}$$
(18)

Proof of Theorem 7

We will prove that \(G_1\in {\mathcal {S}}_F\) implies that \(G_+\in {\mathcal {S}}_F\). For that, we verify the conditions of Lemma 16.

First, by the properties of renewal functions, \(H_-\) is subadditive. It follows from \({\mathbf {E}}\xi _1^-=\infty \) that \({\mathbf {E}}\chi =\infty .\) This and the Key Renewal Theorem imply that \(H_-(x-1,x]\equiv H_-(x)-H_-(x-1)\rightarrow 0\) as \(x\rightarrow \infty \).

Second, the function x/m(x) is subadditive as well:

$$\begin{aligned} \frac{x+y}{m(x+y)}=\frac{x}{m(x+y)}+\frac{y}{m(x+y)}\le \frac{x}{m(x)}+\frac{y}{m(y)}. \end{aligned}$$

The latter inequality holds since m(x) is non-decreasing. Further, the function x/m(x) is non-decreasing, since

$$\begin{aligned} \frac{d}{dx}\frac{x}{m(x)}=\frac{m(x)-xm'(x)}{m^2(x)}=\frac{m(x)-x{\mathbf {P}}(\xi _1^->x)}{m^2(x)}\ge 0. \end{aligned}$$

Hence,

$$\begin{aligned} 0\le \frac{x}{m(x)}-\frac{x-1}{m(x-1)}\le \frac{1}{m(x-1)}\rightarrow 0. \end{aligned}$$

Then, Lemma 16 and (18) imply that \(G_+\in {\mathcal {S}}_F\) if and only if \(G_1\in {\mathcal {S}}_F.\) Consequently, \(G_+\in {\mathcal {S}}_F\) and, by Lemma 18, the asymptotics (2) hold. \(\square \)

Proof of Theorem 8

Sufficiency of (a) follows from Lemmas 14 and 15. Sufficiency of (b) follows from Lemma 17. \(\square \)