1 Introduction, Main Results and Discussion

1.1 Notation and Assumptions

Consider a random walk \(\{S(n),n\ge 0 \}\) on \({\mathbb {R}^d}\), \(d\ge 1\), where

$$\begin{aligned} S(n) =X(1)+\cdots +X(n), \quad n\ge 1 \end{aligned}$$

and \(\{X(n), n\ge 1\}\) are independent copies of a random vector \(X=(X_1,\ldots ,X_d)\). For \(x=(x_1, \ldots , x_d)\) in the closed (non-negative) half space, that is, for \(x_1\ge 0\), let

$$\begin{aligned} \tau _x:=\min \{\, n\ge 1: x+S(n)\notin {\mathbb {H}}^+ \,\} =\min \{\, n\ge 1: x_1+S_1(n)\le 0 \,\} \end{aligned}$$

be the first time the random walk exits the (positive) half space

$$\begin{aligned} {\mathbb {H}}^+=\{(x_1,\ldots , x_d): x_1> 0\}. \end{aligned}$$

When \(x=0\), we will omit the subscript and write

$$\begin{aligned} \tau :=\tau _0=\min \{\, n\ge 1: S(n)\notin {\mathbb {H}}^+ \,\} =\min \{\, n\ge 1: S_1(n)\le 0 \,\}. \end{aligned}$$

In this paper, we study the asymptotic behaviour, as \(n\rightarrow \infty \), of the probability

$$\begin{aligned} p_n(x,y):={\textbf{P}}(x+S(n)\in y+\Delta ,\tau _x>n) \end{aligned}$$
(1)

and the Green function

$$\begin{aligned} G(x, y):= \sum _{n=0}^\infty p_n(x,y). \end{aligned}$$

Here and throughout, we denote \(\Delta =[0,1)^d\) and for \(y=(y_1,\ldots ,y_d)\),

$$\begin{aligned} y+\Delta = [y_1,y_1+1)\times [y_2,y_2+1)\times \cdots \times [y_d,y_d+1). \end{aligned}$$
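The quantities defined so far are easy to probe numerically. The following sketch estimates \(p_n(x,y)\) from (1) by direct simulation; the choice of standard normal increments and of all numerical values is ours, purely for illustration (the paper is mainly concerned with heavy-tailed increments).

```python
import numpy as np

def p_n_monte_carlo(x, y, n, num_paths=20_000, rng=None):
    """Monte Carlo estimate of p_n(x, y) = P(x + S(n) in y + Delta, tau_x > n),
    where Delta = [0,1)^d and tau_x is the first exit time from the half
    space {z : z_1 > 0}.  Increments are standard normal, for illustration only."""
    rng = np.random.default_rng(0) if rng is None else rng
    x, y = np.asarray(x, float), np.asarray(y, float)
    d = x.size
    steps = rng.standard_normal((num_paths, n, d))
    paths = x + np.cumsum(steps, axis=1)          # x + S(1), ..., x + S(n)
    survived = (paths[:, :, 0] > 0).all(axis=1)   # event {tau_x > n}
    in_cube = ((paths[:, -1] >= y) & (paths[:, -1] < y + 1)).all(axis=1)
    return np.mean(survived & in_cube)

est = p_n_monte_carlo(x=(2.0, 0.0), y=(1.0, 0.0), n=5)
```

The same routine works for any dimension \(d\); only the first coordinate enters the survival condition, exactly as in the definition of \(\tau_x\).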

In this paper, we will mostly concentrate on the case when the random walk S(n) has an infinite second moment. More precisely, we shall assume that S(n) is asymptotically stable of index \(\alpha <2\) when we study large deviations for local probabilities and asymptotics for the Green function. The asymptotics for the Green function of walks with finite variances have already been studied in the literature: (a) Uchiyama [20] has considered lattice walks in a half space; (b) Duraj et al. [13] have derived asymptotics of Green functions for a wider class of convex cones. It is worth mentioning that the authors of [13] first analyse the case of a half space and then use the resulting estimates for the Green function in the subsequent analysis of convex cones. This fact underlines the importance of the case of half spaces.

We will say that S(n) belongs to the domain of attraction of a multivariate stable law if

$$\begin{aligned} \frac{S(n)}{c_n}{\mathop {\rightarrow }\limits ^{d}}\zeta _\alpha , \end{aligned}$$
(2)

where \(\alpha \in (0,2]\) and \(\zeta _\alpha \) has a multivariate stable law of index \(\alpha \). Note that we assume that S(n) is already centred. This does not restrict generality, when \(\alpha \ne 1\), as one can subtract the mean for \(\alpha >1\) and the centring is not needed for \(\alpha <1\). This assumption excludes, however, some walks with \(\alpha =1\) from consideration.

Necessary and sufficient conditions for the convergence in (2) are given in [16]. When \(\alpha \in (0,2)\), the convergence will take place if \({\textbf{P}}(|X |>t)\) is regularly varying of index \(-\alpha \) and there exists a measure \(\sigma \) on the unit sphere \({\mathbb {S}}^{d-1} \) such that

$$\begin{aligned} \frac{{\textbf{P}}\left( |X |>x, \frac{X}{|X |}\in A\right) }{{\textbf{P}}(|X |>x)}\rightarrow \sigma (A), \quad x\rightarrow \infty , \end{aligned}$$

for any measurable subset A of \({\mathbb {S}}^{d-1}\). We will write \(X\in {\mathcal {D}}(d, \alpha ,\sigma )\) when (2) holds and additionally \(\sigma ({\mathbb {S}}^{d-1}\cap {\mathbb {H}}^+)>0\) and \(\sigma ({\mathbb {S}}^{d-1}\cap {\mathbb {H}}^-)>0\) for \(\alpha \in (0,2)\), where \({\mathbb {H}}^- = {\mathbb {R}}^d\setminus {\mathbb {H}}^+\). Here, \(\sigma \) stands for the above measure on the unit sphere. Also, let \(g_{\alpha ,\sigma }\) be the density of \(\zeta _\alpha \). This assumption implies that the first coordinate \(X_1\) belongs to the one-dimensional domain of attraction.

Note that \(X\in {\mathcal {D}}(d, \alpha ,\sigma )\) implies that \(S_1(n)\) is oscillating, that is, \({\textbf{P}}(\tau <\infty )=1\) and \({\textbf{E}}[\tau ]=\infty \). To see this, recall that the random walk \(S_1(n)\) oscillates if and only if

$$\begin{aligned} \sum _{n=1}^{\infty }\frac{1}{n} \textbf{P}(S_1(n)>0) =\sum _{n=1}^{\infty }\frac{1}{n} \textbf{P}(S_{1}(n)\le 0)=\infty . \end{aligned}$$

Rogozin [17] investigated properties of \(\tau \) and demonstrated that the Spitzer condition

$$\begin{aligned} n^{-1}\sum _{k=1}^{n} \textbf{P}\left( S_1(k)>0\right) \rightarrow \rho \in \left( 0,1\right) \quad \text {as }n\rightarrow \infty \end{aligned}$$
(3)

holds if and only if \(\tau \) belongs to the domain of attraction of a positive stable law with parameter \(\rho \). In particular, if \(X\in {\mathcal {D}}(d, \alpha ,\sigma )\), then (see, for instance, [23]) condition (3) holds with

$$\begin{aligned} \displaystyle \rho :=\sigma (\mathbb {H^+}\cap \mathbb {S}^{d-1}). \end{aligned}$$

It is well known that (3) is equivalent to the following Spitzer–Doney condition

$$\begin{aligned} {\textbf{P}}(S_1(n)>0)\rightarrow \rho ,\quad n\rightarrow \infty . \end{aligned}$$
(4)
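The Spitzer–Doney condition (4) is easy to check empirically for a concrete walk. In the sketch below we take symmetric Cauchy increments (\(\alpha =1\), our choice for illustration); by symmetry, \({\textbf{P}}(S_1(n)>0)=1/2\) for every n, so \(\rho =1/2\).

```python
import numpy as np

# Empirical check of the Spitzer--Doney condition P(S_1(n) > 0) -> rho
# for symmetric Cauchy steps, where rho = 1/2 by symmetry.
rng = np.random.default_rng(1)
n, num_paths = 200, 100_000
s1_final = rng.standard_cauchy((num_paths, n)).sum(axis=1)  # S_1(n), one per path
rho_hat = np.mean(s1_final > 0)                             # approximates rho
```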

The scaling sequence \(\{c_n\}\) can be defined as follows, see [16]. Denote \(\mathbb {Z}:=\left\{ 0,\pm 1,\pm 2,\ldots \right\} ,\) \(\mathbb {Z} _{+}:=\left\{ 1,2,\ldots \right\} \) and let \(\left\{ c_{n},n\ge 1\right\} \) be a sequence of positive numbers specified by the relation

$$\begin{aligned} c_{n}:=\inf \left\{ u\ge 0:\mu (u)\le n^{-1}\right\} , \end{aligned}$$
(5)

where

$$\begin{aligned} \mu (u):=\frac{1}{u^{2}}\int _{-u}^{u}x^{2}\textbf{P}(|X |\in \textrm{d}x). \end{aligned}$$
(6)

It is known (see, for instance, [14, Ch. XVII, § 5]) that for every \( X\in \mathcal {D}(d,\alpha ,\sigma )\) the function \(\mu (u)\) is regularly varying with index \((-\alpha )\). This implies that \(\left\{ c_{n},n\ge 1\right\} \) is a regularly varying sequence with index \(\alpha ^{-1}\), i.e. there exists a function \(l_{1}(n),\) slowly varying at infinity, such that

$$\begin{aligned} c_{n}=n^{1/\alpha }l_{1}(n). \end{aligned}$$
(7)

Then, convergence (2) holds with this sequence \(\{c_n\}\).
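For a concrete heavy-tailed law, (5)–(6) can be evaluated numerically. In the sketch below we take \(|X|\) to be Pareto of index \(\alpha\) on \([1,\infty)\) (our choice, not the paper's), for which \(\mu (u)=\frac{\alpha }{2-\alpha }(u^{-\alpha }-u^{-2})\) for \(u\ge 1\); solving \(\mu (u)=1/n\) on the decreasing branch of \(\mu \) then recovers the \(n^{1/\alpha }\) growth of \(c_n\) stated in (7).

```python
import numpy as np

# Scaling sequence c_n from (5)-(6), assuming |X| ~ Pareto(alpha) on [1, inf):
#   mu(u) = u^{-2} E[X^2; |X| <= u] = alpha/(2-alpha) * (u^{-alpha} - u^{-2}).
alpha = 1.5

def mu(u):
    return alpha / (2.0 - alpha) * (u ** (-alpha) - u ** (-2.0))

def c(n, hi=1e12):
    # Geometric bisection for mu(u) = 1/n, started to the right of mu's maximum.
    lo = (2.0 / alpha) ** (1.0 / (2.0 - alpha))
    for _ in range(200):
        mid = np.sqrt(lo * hi)
        lo, hi = (mid, hi) if mu(mid) > 1.0 / n else (lo, mid)
    return lo

ratio = c(200_000) / c(100_000)   # should be close to 2**(1/alpha)
```

Doubling n multiplies \(c_n\) by roughly \(2^{1/\alpha }\), in line with the regular variation of index \(\alpha ^{-1}\).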

In the one-dimensional case, the study of the asymptotics (1) was initiated in [21], where normal and small deviations of \(p_n(0,y)\) were considered. Asymptotics for \(p_n(x,y)\) with a general starting point x were then studied in [3] and [10]. Our assumption on \(X_{1}\) is the same as in these papers, and we use a similar approach for small and normal deviations. We use a different approach to study large deviations in the multidimensional case. Large deviations seem to be the most complicated part of the present paper.

As the first coordinate plays a distinctive role, we will adopt the following notation. For X(n), we will write \(X(n)=(X_{1}(n),X_{(2,d)}(n))\), where \(X_{1}(n)\) corresponds to the first coordinate and \(X_{(2,d)}(n)\) corresponds to the remaining coordinates. Similarly, we write \(S(n)=(S_{1}(n),S_{(2,d)}(n)), n=0,1,2,\ldots \).

The following conditional limit theorem will be crucial for the rest of this article. The weak convergence in this theorem can be proven similarly to [9]. Existence of the density is shown in the proof of Theorem 2; it can also be established along the lines of Remark 2 in [21].

Theorem 1

If \(X\in \mathcal {D}(d,\alpha ,\sigma )\), then there exists a random vector \(M_{\alpha ,\sigma }\) on \({\mathbb {H}}^+\) with density \(p_{M_{\alpha ,\sigma }}(v)\) such that, for all \(u\in {\mathbb {R}}^d\),

$$\begin{aligned} \lim _{n\rightarrow \infty }{\textbf{P}}\left( \frac{S(n)}{c_{n}} \in u+\Delta \mid \tau >n\right) =\textbf{P}(M_{\alpha ,\sigma }\in u+\Delta ) =\int _{u+\Delta }p_{M_{\alpha ,\sigma }}(v)dv. \end{aligned}$$
(8)

Moreover, for every bounded and continuous function f,

$$\begin{aligned} {\textbf{E}}\left[ f\left( \frac{S(n)}{c_n}\right) \mid \tau _x>n\right] \rightarrow {\textbf{E}}[f(M_{\alpha ,\sigma })] \end{aligned}$$

uniformly in x with \(0\le x_1\le \delta _nc_n\), \(\delta _n\rightarrow 0\).

Our first result is an analogue of the classical local limit theorem. It extends Theorems 3 and 5 in [21], which treat \(x=0\), and extends the results of [10] and [3] for an arbitrary starting point x to the case of half spaces.

Theorem 2

Suppose \(X\in \mathcal {D}(d,\alpha ,\sigma )\). If the distribution of X is non-lattice, then, for every \(r >0\), uniformly in x with \(0\le x_1\le \delta _nc_n\), \(\delta _n\rightarrow 0\),

$$\begin{aligned} \sup _{y\in \mathbb {H}^+}\left|c_{n}^d{\textbf{P}}\left( x+S(n)\in y+r\Delta \mid \tau _x>n\right) -r^d {p_{M_{\alpha ,\sigma }}\left( \frac{y-x}{c_{n}}\right) }\right|\rightarrow 0. \end{aligned}$$
(9)

If the distribution of X is lattice and if \({\mathbb {Z}}^d\) is the minimal lattice for X, then uniformly in \(x\in \mathbb {H}^+\cap {\mathbb {Z}}^d\) with \(0\le x_1\le \delta _nc_n\), \(\delta _n\rightarrow 0\),

$$\begin{aligned} \sup _{y\in \mathbb {H}^+\cap {\mathbb {Z}}^d}\left|c_{n}^d{\textbf{P}}\left( x+S(n)= y \mid \tau _x>n\right) -{p_{M_{\alpha ,\sigma }}\left( \frac{y-x}{c_{n}}\right) }\right|\rightarrow 0. \end{aligned}$$
(10)

If the ratio \(y/c_{n}\) varies with n in such a way that \(y_1/c_{n}\in (b_{1},b_{2})\) for some \(0<b_{1}<b_{2}<\infty \) and \(|y_{(2,d)} |=O(c_n)\), we can rewrite (9) as

$$\begin{aligned} c_{n}^d\textbf{P}(S(n)\in y+r\Delta \mid \tau >n)\sim r^d { p_{M_{\alpha ,\sigma }}(y/c_{n})}\quad \text {as }n\rightarrow \infty . \end{aligned}$$

However, if \(y_1/c_{n}\rightarrow 0\), then, in view of

$$\begin{aligned} \lim _{z_1\downarrow 0}{p_{M_{\alpha ,\sigma }}(z)=0}, \end{aligned}$$

relation (9) gives only

$$\begin{aligned} c_{n}^d{\textbf{P}}(S(n)\in y+\Delta \mid \tau >n)=o\left( 1\right) \quad \text {as }n\rightarrow \infty . \end{aligned}$$
(11)

Our next theorem refines (11) in the mentioned domain of small deviations, i.e. when \(y_1/c_{n}\rightarrow 0.\) Let

$$\begin{aligned} \tau ^+:=\min \{k\ge 1:S(k)\in \mathbb {H}^+\}=\min \{k\ge 1:S_{1}(k)>0\}. \end{aligned}$$

Let \(\chi ^{+}:=S_{1}(\tau ^{+})\) (resp. \(\chi ^{-}:=-S_{1}(\tau )\)) be the first ascending (resp. descending) ladder height, and let \((\chi ^+_n)_{n=1}^\infty \) (resp. \((\chi _n^-)_{n=1}^\infty \)) be a sequence of i.i.d. copies of \(\chi ^+\) (resp. \(\chi ^-\)). Let

$$\begin{aligned} H(u)&:= \textrm{I}\{u>0\}+\sum _{k=1}^{\infty }\textbf{P}(\chi _{1}^{+}+\ldots +\chi _{k}^{+}<u), \end{aligned}$$
(12)
$$\begin{aligned} V(u)&:= \textrm{I}\{u\ge 0\}+\sum _{k=1}^{\infty }\textbf{P}(\chi _{1}^{-}+\ldots +\chi _{k}^{-}\le u) \end{aligned}$$
(13)

be the renewal functions of the ascending and descending ladder height processes, respectively. Clearly, H is a left-continuous function.
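To make (12) concrete, the renewal function can be evaluated by Monte Carlo once \(\chi ^+\) can be sampled. In the sketch below we choose, purely for illustration, the one-dimensional walk with steps \(X=A-B\), where \(A\sim \mathrm{Exp}(1)\) and \(B\sim \mathrm{Exp}(2)\): the positive drift makes \(\tau ^+\) a.s. finite (unlike the oscillating walks studied in the paper), the memoryless upward jumps give \(\chi ^+\sim \mathrm{Exp}(1)\), and the Poisson renewal stream then gives \(H(u)=1+u\) for \(u>0\).

```python
import numpy as np

# Monte Carlo sketch of the renewal function H in (12) for steps
# X = A - B, A ~ Exp(1), B ~ Exp(2) (our illustrative choice), where
# chi^+ ~ Exp(1) and hence H(u) = 1 + u for u > 0.
rng = np.random.default_rng(3)

def sample_chi_plus():
    s = 0.0
    while True:                          # positive drift => a.s. finite loop
        s += rng.exponential(1.0) - rng.exponential(0.5)
        if s > 0.0:
            return s                     # first ascending ladder height chi^+

def H_estimate(u, num_terms=30, num_samples=2000):
    if u <= 0.0:
        return 0.0
    chi = np.array([[sample_chi_plus() for _ in range(num_terms)]
                    for _ in range(num_samples)])
    ladder = chi.cumsum(axis=1)          # chi_1^+ + ... + chi_k^+, k = 1..num_terms
    return 1.0 + (ladder < u).mean(axis=0).sum()

h3 = H_estimate(3.0)                     # theory: 1 + u = 4
```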

Theorem 3

Suppose \(X\in \mathcal {D}(d,\alpha ,\sigma )\). If the distribution of X is lattice and if \(\mathbb {Z}^d\) is the minimal lattice, then

$$\begin{aligned} {\textbf{P}}(x+S(n)= y; \tau _x>n)\sim V(x_1)H(y_1)\frac{g_{\alpha ,\sigma } \left( 0,\frac{y_{2,d}-x_{2,d}}{c_n}\right) }{nc_n^d} \end{aligned}$$

uniformly in \(x,y \in {\mathbb {H}}^+\cap {\mathbb {Z}}^d\) with \(x_1,y_{1}\in (0,\delta _{n}c_{n}]\) such that \(|x-y|\le A c_n \), where \(\delta _{n}\rightarrow 0\) as \(n\rightarrow \infty \) and A is a fixed constant.

If the distribution of X is non-lattice, then

$$\begin{aligned} {\textbf{P}}(x+S(n)\in y+\Delta ; \tau _x>n)\sim V(x_1)\int _{y_1}^{y_1+1}H(u)\textrm{d}u\frac{g_{\alpha ,\sigma } \left( 0,\frac{y_{2,d}-x_{2,d}}{c_n}\right) }{nc_n^d} \end{aligned}$$

uniformly in \(x_1,y_{1}\in (0,\delta _{n}c_{n}]\) such that \(|x-y|\le A c_n \), where \(\delta _{n}\rightarrow 0\) as \(n\rightarrow \infty \) and A is a fixed constant.

To obtain the asymptotics for the Green function of S(n) killed at leaving \(\mathbb {H}^+\), one has to estimate probabilities of local large deviations for S(n). For that, we shall assume that

$$\begin{aligned} {\textbf{P}}(X\in x+\Delta )\le \frac{\phi (|x |)}{|x |^{d}}=: g(|x |), \end{aligned}$$
(14)

where \(\phi (t):={\textbf{P}}(|X|>t)\) is a regularly varying function of index \(-\alpha \).

It has been known in the multidimensional setting since Williamson [22] that global tail assumptions alone may lead to different asymptotics if the tails are too heavy: Williamson constructed a walk on the whole space whose Green function, in the absence of any local assumptions, has a different asymptotic behaviour.

We are now ready to formulate our bound for local large deviations.

Theorem 4

Let \(X\in {\mathcal {D}}(d, \alpha ,\sigma )\) with \(\alpha <2\) and (14) hold.

Then,

$$\begin{aligned} p_n(x,y) \le C_0 g(|x-y|) H(y_1+1)V(x_1+1). \end{aligned}$$
(15)

Having asymptotics for \(p_n(x,y)\) for all possible ranges (for small, normal and large deviations), one can easily derive asymptotics for the Green function. We start with the lattice case, where we combine Theorems 3 and 4 to obtain the asymptotics of the Green function near the boundary.

Theorem 5

Assume \(X\in {\mathcal {D}}(d,\alpha ,\sigma )\) with some \(\alpha <2\). Suppose that (14) holds.

  1.

    If the distribution of X is lattice and if \({\mathbb {Z}}^d\) is minimal for X, then we have

    $$\begin{aligned} G(x,y) \sim C \frac{H(y_1)V(x_1)}{|x-y|^d} \int _0^{\infty } g_{\alpha ,\sigma }\left( 0, \frac{y_{2,d}-x_{2,d}}{|x-y|} t\right) t^{d-1} \textrm{d}t \end{aligned}$$

    for \(x_1,y_1=o(|x-y|)\). In particular, in the isotropic case, that is, when the limiting measure \(\sigma \) is uniform on the unit sphere,

    $$\begin{aligned} G(x,y) \sim C_{\alpha } \frac{H(y_1)V(x_1)}{|x-y|^d}, \quad |x-y|\rightarrow \infty , \end{aligned}$$

    for \(x_1,y_1=o(|x-y|)\).

  2.

    If X is non-lattice, then

    $$\begin{aligned} G(x,y) \sim C \frac{\int _{y_1}^{y_1+1}H(u)\textrm{d}uV(x_1)}{|x-y|^d} \int _0^{\infty } g_{\alpha ,\sigma }\left( 0, \frac{y_{2,d}-x_{2,d}}{|x-y|} t \right) t^{d-1} \textrm{d}t \end{aligned}$$

    for \(x_1,y_1=o(|x-y|)\). In particular, in the isotropic case, that is, when the limiting measure \(\sigma \) is uniform on the unit sphere,

    $$\begin{aligned} G(x,y) \sim C \frac{\int _{y_1}^{y_1+1}H(u)\textrm{d}uV(x_1)}{|x-y|^d}, \quad |x-y|\rightarrow \infty , \end{aligned}$$

    for \(x_1,y_1=o(|x-y|)\).

  3.

    In addition, there exists a constant C such that for all \(x,y\in \mathbb {H}^+\),

    $$\begin{aligned} G(x,y) \le C \frac{H(y_1)V(x_1)}{|x-y|^d}. \end{aligned}$$
    (16)

Remark 1

Recall that \(H(x)\sim C x^{\alpha \rho } l(x), x\rightarrow \infty \) for some slowly varying function l, see [21, Lemma 13]. Similarly, \(V(x)\sim C x^{\alpha (1-\rho )}/l(x)\) as \(x\rightarrow \infty \).

Remark 2

For stable Lévy processes, an exact formula (which can be analysed asymptotically) for the Green function \(g(x,y)\) was obtained in [19]. We are not aware of any result of this kind for asymptotically stable random walks when \(\alpha <2\).

Remark 3

As we have already mentioned, walks with finite variance, which form a particular case of asymptotically stable walks with \(\alpha =2\), were considered in [13, Theorem 2] and, in the lattice case, in [20]. In [13], estimates of the behaviour of the Green function relied on [5, 12] and were obtained in the more general situation of convex cones. Using our methods, we have obtained asymptotics for the Green function in a half space for all asymptotically stable walks with \(\alpha =2\). Specialising this result to the case of finite variance, we can obtain asymptotics under moment conditions weaker than in [13] and stronger than in [20]. Our method differs from that of [20], where the problem was approached via the potential kernel.

The only difference between \(\alpha <2\) and \(\alpha =2\) is the form of estimates for local large deviations. In the case \(\alpha =2\), one has to take care of the Gaussian component, which leads to very long calculations. For that reason, we have decided not to include walks with \(\alpha =2\) in the present paper and to consider this case in a separate paper.

Remark 4

It seems possible to extend the estimates for the Green function to asymptotically isotropic random walks in cones. This will be considered elsewhere and should also allow one to extend the results of [4, 5, 7, 8, 12, 13] to the stable case.

Since exit times from a half space can be viewed as exit times of one-dimensional random walks, it is natural to use methods that are typical for walks on the real line. In the proofs of our results on the asymptotic behaviour of the probabilities, we follow this strategy and rely on many methods from [10] and [21]. However, it is worth mentioning that the additional dimensions cause many additional technical problems, because the random walk can make a large jump that lands close to the boundary hyperplane \(\{x:x_1=0\}\). While in the finite-variance case one can try to control such jumps by assuming the existence of additional moments, this route is not available in the infinite-variance case. For these reasons, our estimates for \(p_n(x,y)\) cannot be considered a straightforward generalisation of the one-dimensional results.

2 Preliminary Upper Bounds

In this section, we find bounds for \({\textbf{P}}(\tau _x>n)\). Since \(\tau _x\) is actually a stopping time for the one-dimensional walk \(S_1(k)\), we may apply Lemma 3 from [6], which gives us the following estimate.

Lemma 6

Assume that (4) holds. Then, there exists \(C_0\) such that for every \(x\in {\mathbb {H}}^+\) one has

$$\begin{aligned} \frac{{\textbf{P}}(\tau _x>n)}{{\textbf{P}}(\tau _0>n)}\le C_0V(x_1),\quad n\ge 1. \end{aligned}$$
(17)

The next lemma is an extension of [21, Lemma 20] to the case of half spaces. We will give a proof following a different approach, which relies on Lemma 6. This proof works in the one-dimensional case as well, thus simplifying the corresponding arguments of [21].

For \(y=(y_1,\ldots , y_d),z=(z_1,\ldots , z_d)\in {\mathbb {R}}^d\), we will write \(y\le z\) if \(y_k\le z_k\) for all \(1\le k\le d\).

Lemma 7

Assume that \(X\in \mathcal {D}(d,\alpha ,\sigma )\). Then, there exists \(C>0\) such that for all \(x,y\in {\mathbb {H}}^+\) and all \(n\ge 1\) we have

$$\begin{aligned} p_n(x,y) \le \frac{CV(x_1)H(y_1)}{n c_n^d}. \end{aligned}$$
(18)

A similar result holds for the stopping time \(\tau ^+\):

$$\begin{aligned} {\textbf{P}}(S(n)\in x+\Delta ;\tau ^+>n)\le C\frac{V(\vert x_1\vert )}{nc_n^d}, \quad x\in {\mathbb {H}}^-. \end{aligned}$$

Proof

We prove the first statement only. The proof of the second estimate requires only notational changes.

Put \(n_1=[n/4], n_2=[3n/4]-n_1, n_3=n-[3n/4]\). We split the probability of interest into three parts,

$$\begin{aligned} p_n(x,y)&= \int _{{\mathbb {H}}^+} {\textbf{P}}(\tau _x>n_1, x+S(n_1)\in \textrm{d}u) \int _{{\mathbb {H}}^+}{\textbf{P}}(\tau _u>n_2, u+S(n_2)\in \textrm{d}z)\\&\quad \times {\textbf{P}}(\tau _z>n_3, z+S(n_3)\in y+\Delta ). \end{aligned}$$

Now, we will make use of the time inversion. Let

$$\begin{aligned} {\widetilde{X}}(n_3)=-X(1), {\widetilde{X}}(n_3-1)=-X(2),\ldots , {\widetilde{X}}(1)=-X(n_3) \end{aligned}$$

and

$$\begin{aligned} {\widetilde{S}}(k)&= {\widetilde{X}}(1)+\cdots +{\widetilde{X}}(k)\\&=-X(n_3)-\cdots -X(n_3-k+1) = S(n_3-k)-S(n_3), \quad k=1,\ldots ,n_3. \end{aligned}$$

Let \(1_d=(1,\ldots ,1)\). Then,

$$\begin{aligned}&{\textbf{P}}(z+S(n_3)\in y+\Delta ; \tau _z>n_3)\\&\quad = {\textbf{P}}(z+S(n_3)\in y+\Delta ; z_1+ \min (S_1(1),\ldots , S_1(n_3))>0)\\&\quad = {\textbf{P}}(y+1_d+{\widetilde{S}}(n_3)\in z+(0,1]^d; z_1+\min ({\widetilde{S}}_1(n_3-1),\ldots , {\widetilde{S}}_1(1))>{\widetilde{S}}_1(n_3))\\&\quad \le {\textbf{P}}(y+1_d+{\widetilde{S}}(n_3)\in z+(0,1]^d; {\widetilde{\tau }}_{y+1_d}>n_3). \end{aligned}$$

Then, using the concentration function inequalities, we can continue as follows:

$$\begin{aligned}&\int _{{\mathbb {H}}^+}{\textbf{P}}(\tau _u>n_2, u+S(n_2)\in \textrm{d}z) {\textbf{P}}(\tau _z>n_3, z+S(n_3)\in y+\Delta )\\&\quad \le \int _{{\mathbb {H}}^+}{\textbf{P}}(\tau _u>n_2, u+S(n_2)\in \textrm{d}z) {\textbf{P}}(y+1_d+{\widetilde{S}}(n_3)\in z+(0,1]^d, {\widetilde{\tau }}_{y+1_d}>n_3)\\&\quad \le \sum _{j_1=0}^\infty \sum _{j_2=-\infty }^\infty \cdots \sum _{j_d=-\infty }^\infty {\textbf{P}}(\tau _u>n_2, u+S(n_2)\in [j_1,j_1+1)\times \cdots \times [j_d,j_d+1) )\\&\qquad \times {\textbf{P}}(y+1_d+{\widetilde{S}}(n_3)\in (j_1,\ldots ,j_d) +2\Delta , {\widetilde{\tau }}_{y+1_d}>n_3)\\&\quad \le \frac{C}{c_n^d} \sum _{j_1=0}^\infty \sum _{j_2=-\infty }^\infty \cdots \sum _{j_d=-\infty }^\infty {\textbf{P}}(y+1_d+{\widetilde{S}}(n_3)\in (j_1,\ldots ,j_d) +2\Delta , {\widetilde{\tau }}_{y+1_d}>n_3)\\&\quad \le \frac{C2^d}{c_n^d} {\textbf{P}}({\widetilde{\tau }}_{y+1_d}>n_3). \end{aligned}$$

As a result, we obtain the bound

$$\begin{aligned} p_n(x,y) \le \frac{C}{c_n^d}{\textbf{P}}(\tau _x>n_1) {\textbf{P}}(\widetilde{\tau }_{y+1_d}>n_3). \end{aligned}$$

Applying Lemma 6 to \(S\) and to the reversed walk \({\widetilde{S}}\), we obtain

$$\begin{aligned} p_n(x,y)&\le \frac{CH(y_1)}{c_n^d} {\textbf{P}}(\tau _x>n_1) {\textbf{P}}({\widetilde{\tau }}_0>n_3)\\&\le \frac{CH(y_1)V(x_1)}{c_n^d} {\textbf{P}}(\tau _0>n_1) {\textbf{P}}(\tau ^+_0>n_3). \end{aligned}$$

Here, recall that, by Rogozin's result, (3) holds if and only if there exists a function l(n), slowly varying at infinity, such that, as \( n\rightarrow \infty \),

$$\begin{aligned} \textbf{P}\left( \tau>n\right) \sim \frac{l(n)}{n^{1-\rho }},\ \ \textbf{P}\left( \tau ^{+}>n\right) \sim \frac{1}{\Gamma (\rho )\Gamma (1-\rho )n^{\rho }l(n)}. \end{aligned}$$
(19)

Then, multiplying the two asymptotics in (19) (the slowly varying factors cancel), we obtain \({\textbf{P}}(\tau _0>n) {\textbf{P}}(\tau ^+_0>n)\sim \frac{C}{n}\) and arrive at the conclusion. \(\square \)

Lemma 8

Assume that the random walk S(n) is asymptotically stable. Then, there exists \(C>0\) such that for \(x,y\in {\mathbb {H}}^+\) and all \(n\ge 1\)

$$\begin{aligned} p_n(x,y) \le \frac{C V(x_1) }{c_n^d} \frac{l(n)}{n^{1-\rho }}. \end{aligned}$$
(20)

Proof

For \(n\ge 2\),

$$\begin{aligned} p_n(x,y)&\le {\textbf{P}}(\tau _x>n/2)\sup _{z\in {\mathbb {R}}^d}{\textbf{P}}(S([n/2]) \in z+\Delta ). \end{aligned}$$

Applying now a concentration function inequality, we obtain

$$\begin{aligned} p_n(x,y) \le {\textbf{P}}(\tau _x>n/2) \frac{C_1}{c_n^d} \le \frac{CV(x_1)}{c_n^d}\frac{l(n)}{n^{1-\rho }}, \end{aligned}$$

since

$$\begin{aligned} {\textbf{P}}(\tau _x>n) \le C V(x_1) {\textbf{P}}(\tau >n)\le C V(x_1)\frac{l(n)}{n^{1-\rho }}. \end{aligned}$$

\(\square \)

Before proving the next lemma, we state the following result, which is a minor modification of [21, Lemma 13].

Lemma 9

Suppose that \(X\in {\mathcal {D}}(d, \alpha ,\sigma )\). Then, as \(u\rightarrow \infty \),

$$\begin{aligned} H(u)\sim \frac{u^{\alpha \rho }l_{2}(u)}{\Gamma (1-\alpha \rho )\Gamma (1+\alpha \rho )} \end{aligned}$$
(21)

if \(\alpha \rho <1\), and

$$\begin{aligned} H(u)\sim ul_{3}(u) \end{aligned}$$
(22)

if \(\alpha \rho =1,\) where

$$\begin{aligned} l_{3}(u):=\left( \int _{0}^{u}{\textbf{P}}\left( \chi ^{+}>y\right) \textrm{d}y\right) ^{-1},\text { \ }u>0. \end{aligned}$$

In addition, there exists a constant \(C>0\) such that, in both cases,

$$\begin{aligned} H(c_{n})\sim Cn\textbf{P}(\tau >n)\quad \text {as }n\rightarrow \infty . \end{aligned}$$
(23)

Lemma 10

There exists a constant \(C\in \left( 0,\infty \right) \) such that, for all \(z\in [0,\infty )\times {\mathbb {R}}^{d-1},\)

$$\begin{aligned} \limsup _{\varepsilon \downarrow 0}\varepsilon ^{-d}\textbf{P}\left( M_{\alpha ,\sigma }\in z+\varepsilon \Delta \right) \le C\min \{1,(z_1)^{\alpha \rho }\}. \end{aligned}$$

In particular,

$$\begin{aligned} \lim _{z_1\downarrow 0}\limsup _{\varepsilon \downarrow 0}\varepsilon ^{-d} \textbf{P}\left( M_{\alpha ,\sigma }\in z+\varepsilon \Delta \right) =0. \end{aligned}$$

Proof

For all \(z\in [0,\infty )\times {\mathbb {R}}^{d-1},\) and all \(\varepsilon >0\), we have

$$\begin{aligned} {\textbf{P}}\left( M_{\alpha ,\sigma }\in z+\varepsilon \Delta \right) \le \limsup _{n\rightarrow \infty }\textbf{P}\left( S(n)\in c_{n} z+\varepsilon c_n\Delta \mid \tau >n\right) . \end{aligned}$$

Applying (25) gives

$$\begin{aligned} {\textbf{P}}\left( S(n)\in c_{n} z+\varepsilon c_n\Delta \mid \tau>n\right) \le C\frac{H(\min (c_{n},(z_1+\varepsilon )c_{n})) }{nc_{n}^d\textbf{P}(\tau >n)}(\varepsilon c_{n})^d. \end{aligned}$$

Recalling that H(x) is regularly varying with index \(\alpha \rho \) by Lemma 9 and taking into account (23), we get

$$\begin{aligned} {\textbf{P}}\left( S(n)\in c_{n} z+\varepsilon c_n\Delta \mid \tau>n\right)&\le C\varepsilon ^d \min \{1,(z_1+\varepsilon )^{\alpha \rho }\} \frac{H(c_{n})}{n\textbf{P}(\tau >n)} \\&\le C\varepsilon ^d \min \{1,(z_1+\varepsilon )^{\alpha \rho }\}. \end{aligned}$$

Consequently,

$$\begin{aligned} {\textbf{P}}\left( M_{\alpha ,\sigma }\in z+\varepsilon \Delta \right) \le C\varepsilon ^d \min \{1,(z_1+\varepsilon )^{\alpha \rho }\}. \end{aligned}$$
(24)

This inequality shows that there exists a constant \(C\in (0,\infty )\) such that

$$\begin{aligned} \limsup _{\varepsilon \downarrow 0}\varepsilon ^{-d} {\textbf{P}}\left( M_{\alpha ,\sigma }\in z+\varepsilon \Delta \right) \le C\min \{1,(z_1)^{\alpha \rho }\}, \text { for all \ }z\in [0,\infty )\times {\mathbb {R}}^{d-1}, \end{aligned}$$

as desired. \(\square \)

Corollary 11

Let \(X\in \mathcal {D}(d,\alpha ,\sigma ).\) There exists a constant \(C\in \left( 0,\infty \right) \) such that, for all \(n\ge 1\) and \(x,y\in {\mathbb {H}}^+\),

$$\begin{aligned} p_{n}(x,y)\le CV(x_1)\frac{H(\min (c_{n},y_1))}{nc_{n}^d}. \end{aligned}$$
(25)

Proof

The desired estimates follow from (23) and Lemmas 7 and 8. \(\square \)

3 Baxter–Spitzer Identity

We will need the following multidimensional extension of the one-dimensional Baxter–Spitzer identity, see [18, Lemma 3.2].

Lemma 12

For \(t\in {\mathbb {R}}^d\) and \(|s |<1\), the following identity holds:

$$\begin{aligned} 1+\sum _{n=1}^\infty s^n {\textbf{E}}[e^{it\cdot S(n)};\tau _0>n] =\exp \left\{ \sum _{n=1}^\infty \frac{s^n}{n}{\textbf{E}}\left[ e^{it\cdot S(n)}; S_1(n)>0\right] \right\} . \end{aligned}$$
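Lemma 12 can be verified numerically as a formal power series identity in s. The sketch below does this in the simplest setting, \(d=1\) with \(\pm 1\) steps (our choice): both sides are expanded in powers of s up to order N, the left-hand coefficients by exhaustive path enumeration and the right-hand ones via the standard recursion for the exponential of a power series.

```python
import cmath
from itertools import product

# Coefficient-by-coefficient check of the Baxter--Spitzer identity (d = 1,
# simple +-1 walk): a_n = E[e^{itS(n)}; tau_0 > n],  b_n = E[e^{itS(n)}; S(n) > 0].
t, N = 0.7, 8
a = [1.0 + 0j]                       # a_0 = 1, the constant term on the left
b = [0j]                             # b_0 unused
for n in range(1, N + 1):
    an = bn = 0j
    for steps in product((-1, 1), repeat=n):
        s, positive = 0, True
        for x in steps:
            s += x
            positive = positive and s > 0   # tracks {tau_0 > n}
        w = cmath.exp(1j * t * s) / 2 ** n
        an += w if positive else 0
        bn += w if s > 0 else 0
    a.append(an)
    b.append(bn)

# Exponential of the series sum_{n>=1} b_n s^n / n:
# c_0 = 1,  c_n = (1/n) sum_{k=1}^n b_k c_{n-k}.
c = [1.0 + 0j]
for n in range(1, N + 1):
    c.append(sum(b[k] * c[n - k] for k in range(1, n + 1)) / n)

err = max(abs(a[n] - c[n]) for n in range(N + 1))
```

Any value of t and any step law with finitely many values would do; the agreement is exact up to floating-point error.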

We will now follow closely [21]. Put

$$\begin{aligned} B_n(y):={\textbf{P}}(S(n)< y; \tau >n) \end{aligned}$$

and \(b_n(y):= p_n(0,y)\). Lemma 15 of [21] extends as follows.

Lemma 13

The sequence of functions \(\{\, B_n(y), n\ge 1\,\}\) satisfies the recurrence equations

$$\begin{aligned} nB_n(y)&= {\textbf{P}}(S(n)<y, S_1(n)>0)\nonumber \\&\quad +\sum _{k=1}^{n-1} \int _{{\mathbb {R}}^{d}} {\textbf{P}}(S(k)<y-z, S_1(k)>0) \textrm{d}B_{n-k}(z) \end{aligned}$$
(26)

and

$$\begin{aligned} nB_n(y)&= {\textbf{P}}(S(n)<y, S_1(n)>0)\nonumber \\&\quad +\sum _{k=1}^{n-1} \int _{{\mathbb {R}}^{d}} B_{n-k}(y-z) {\textbf{P}}(S(k)\in \textrm{d}z, S_1(k)>0). \end{aligned}$$
(27)

The proof is analogous to the proof of Lemma 15 of [21].

To deal with random walks started at an arbitrary point, we will prove Lemma 14, which extends (17) in [10]. Put

$$\begin{aligned} p_n^+(0,\textrm{d}y):={\textbf{P}}(S(n)\in \textrm{d}y, \tau ^+>n). \end{aligned}$$

We will slightly abuse the notation and write

$$\begin{aligned} p_n(x,\textrm{d}y)={\textbf{P}}(x+S(n)\in \textrm{d}y, \tau _x>n). \end{aligned}$$

Lemma 14

For \(x\in \overline{{\mathbb {H}}^+},y \in {\mathbb {H}}^+\), we have

$$\begin{aligned} p_n(x,\textrm{d}y)&= \sum _{k=0}^n \int _{(0,x_1\wedge y_1 ] \times {\mathbb {R}}^{d-1}} p_k^+(0, \textrm{d}z-x) p_{n-k}(0, \textrm{d}y-z) \end{aligned}$$
(28)

and for \(x\in \overline{{\mathbb {H}}^+} \cap {\mathbb {Z}}^d,y \in {\mathbb {H}}^+\cap {\mathbb {Z}}^d\)

$$\begin{aligned} p_n(x,y)&=\sum _{k=0}^n \int _{(0,x_1\wedge (y_1+1) ] \times {\mathbb {R}}^{d-1}} p_k^+(0,\textrm{d}z-x)b_{n-k}(y-z). \end{aligned}$$
(29)

Proof

Decomposing the trajectory of the walk at the minimum of the first coordinate and using the duality lemma for random walks, we get

$$\begin{aligned} p_n(x,\textrm{d}y)&=\sum _{k=0}^n \int _{(0,x_1\wedge y_1 ] \times {\mathbb {R}}^{d-1}} {\textbf{P}}(x+S(k)\in \textrm{d}z, S_1(k)\le \min _{j\le k} S_{1}(j))\\&\quad \times {\textbf{P}}(z+S(n-k)\in \textrm{d}y, \tau _0>n-k)\\&=\sum _{k=0}^n \int _{(0,x_1\wedge y_1 ] \times {\mathbb {R}}^{d-1}} {\textbf{P}}(x+S(k)\in \textrm{d}z, \tau ^+>k)\\&\quad \times {\textbf{P}}(z+S(n-k)\in \textrm{d}y, \tau _0>n-k)\\&=\sum _{k=0}^n \int _{(0,x_1\wedge y_1 ] \times {\mathbb {R}}^{d-1}} p_k^+(0, \textrm{d}z-x) p_{n-k}(0, \textrm{d}y-z). \end{aligned}$$

Integrating (28), we obtain the second equality (29). \(\square \)
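As a sanity check, identity (29) can be verified exactly for a simple lattice example. The sketch below takes the one-dimensional \(\pm 1\) walk (our choice, so \(d=1\)) and compares both sides of (29) by exhaustive path enumeration with exact rational arithmetic, for starting points \(x\ge 1\).

```python
from fractions import Fraction
from itertools import product

# Exact check of decomposition (29) for the one-dimensional +-1 walk:
#   p_n(x, y)   = P(x + S(n) = y, tau_x > n)   (all positions x + S(j) > 0),
#   p_k^+(0, w) = P(S(k) = w,  tau^+ > k)      (all S(j) <= 0 for j <= k),
#   b_m(v)      = p_m(0, v).
def paths(n):
    weight = Fraction(1, 2) ** n
    for steps in product((-1, 1), repeat=n):
        partial, s = [], 0
        for step in steps:
            s += step
            partial.append(s)
        yield partial, weight

def p(n, x, y):
    if n == 0:
        return Fraction(int(x == y))
    return sum((w for path, w in paths(n)
                if x + path[-1] == y and all(x + s > 0 for s in path)),
               Fraction(0))

def p_plus(k, w):
    if k == 0:
        return Fraction(int(w == 0))
    return sum((wt for path, wt in paths(k)
                if path[-1] == w and all(s <= 0 for s in path)),
               Fraction(0))

# Right-hand side of (29): sum over k and integer z in (0, min(x, y+1)].
ok = all(p(n, x, y) ==
         sum(p_plus(k, z - x) * p(n - k, 0, y - z)
             for k in range(n + 1)
             for z in range(1, min(x, y + 1) + 1))
         for n in range(1, 6) for x in (1, 2, 3) for y in (1, 2, 3))
```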

4 Probabilities of Normal Deviations: Proof of Theorem 2 for \(x=0\).

Let \(H_{y_1}^+ = \{(z_1,z_{(2,d)}): 0<z_1<y_1 \}\). It follows from (26) that

$$\begin{aligned} nb_n(y)&={\textbf{P}}(S(n)\in y+\Delta )\nonumber \\&\quad +\sum _{k=1}^{n-1} \int _{{\mathbb {R}}^d}{\textbf{P}}(S(k)\in y-z+\Delta , S_1(k)>0) \textrm{d}B_{n-k}(z)\nonumber \\&:=R_{\varepsilon }^{(0)}(y)+ R_{\varepsilon }^{(1)} (y)+R_{\varepsilon }^{(2)}(y)+R_{\varepsilon }^{(3)}(y), \end{aligned}$$
(30)

where, for any fixed \(\varepsilon \in (0,1/2)\) and with a slight abuse of notation,

$$\begin{aligned} R_{\varepsilon }^{(1)}(y)&:=\sum _{k=1}^{\varepsilon n} \int _{H_{y_1}^+}{\textbf{P}}(S(k)\in y-z+\Delta , S_1(k)>0)\textrm{d}B_{n-k}(z),\\ R_{\varepsilon }^{(2)}(y)&:=\sum _{k=\varepsilon n}^{(1-\varepsilon )n}\int _{H_{y_1}^+}{\textbf{P}}(S(k)\in y-z+\Delta , S_1(k)>0)\textrm{d}B_{n-k}(z),\\ {R}_{\varepsilon }^{(3)}(y)&:={\textbf{P}}({S}_{n}\in y+\Delta )\\&\quad +\sum _{k=[(1-\varepsilon )n]+1}^{n-1} \int _{{\mathbb {R}}^d}{\textbf{P}}(S(k)\in y-z+\Delta , S_1(k)>0)\textrm{d}B_{n-k}(z) \end{aligned}$$

and

$$\begin{aligned} R_\varepsilon ^{(0)}(y):= \sum _{k=1}^{(1-\varepsilon )n} \int _{[y_1,y_1+1)\times {\mathbb {R}}^{d-1}}{\textbf{P}}(S(k)\in y-z+\Delta , S_1(k)>0)\textrm{d}B_{n-k}(z). \end{aligned}$$

In what follows, we will show that the main contribution is due to \(R_{\varepsilon }^{(2)}(y)\) and other terms are negligible.

Fix some \(t\in {\mathbb {Z}}^d\). If \(z\in (t+\Delta )\), then

$$\begin{aligned} \left\{ S(k)\in y-z+\Delta \right\}&=\left\{ S_j(k)\in [y_j-z_j,y_j-z_j+1)\text { for all }j\right\} \\&\subseteq \left\{ S_j(k)\in [y_j-t_j-1,y_j-t_j+1)\text { for all }j\right\} \\&=\left\{ S(k)\in y-t-1+2\Delta \right\} . \end{aligned}$$

Consequently,

$$\begin{aligned} {\textbf{P}}(S(k)\in y-z+\Delta , S_1(k)>0) \le {\textbf{P}}(S(k)\in y-t-1+2\Delta , S_1(k)>0) \end{aligned}$$

for all \(z\in t+\Delta \). Applying this estimate, we conclude that, for every \(A\subset {\mathbb {R}}^d\),

$$\begin{aligned}&\int _A{\textbf{P}}(S(k)\in y-z+\Delta , S_1(k)>0)\textrm{d}B_{n-k}(z)\\&\quad \le \sum _{t\in {\mathbb {Z}}^d:(t+\Delta )\cap A\ne \emptyset } {\textbf{P}}(S(k)\in y-t-1+2\Delta , S_1(k)>0)b_{n-k}(t). \end{aligned}$$

Combining this estimate with Corollary 11 with \(x=0\), we obtain

$$\begin{aligned}&\int _A{\textbf{P}}(S(k)\in y-z+\Delta , S_1(k)>0)\textrm{d}B_{n-k}(z)\nonumber \\&\quad \le \frac{C}{(n-k)c_{n-k}^d} \sum _{t\in {\mathbb {Z}}^d:(t+\Delta )\cap A\ne \emptyset } H(t_1\wedge c_{n-k}){\textbf{P}}(S(k)\in y-t-1+2\Delta , S_1(k)>0). \end{aligned}$$
(31)

In order to bound \({R}^{(0)}_\varepsilon (y)\), we apply (31) with \(A=[y_1,y_1+1)\times {\mathbb {R}}^{d-1}\):

$$\begin{aligned}&{R}^{(0)}_\varepsilon (y)\\&\quad \le \sum _{k=1}^{(1-\varepsilon )n}\frac{C}{(n-k)c_{n-k}^d} \sum _{t\in {\mathbb {Z}}^d:t_1\in [y_1-1,y_1+1)} H(t_1\wedge c_{n-k}){\textbf{P}}(S(k)\in y-t-1+2\Delta , S_1(k)>0)\\&\quad \le C_\varepsilon \frac{H(y_1\wedge c_{n})}{nc_n^d} \sum _{k=1}^{(1-\varepsilon )n}{\textbf{P}}(S_1(k)\in (0,2)). \end{aligned}$$

Noting that \({\textbf{P}}(S_1(k)\in (0,2))\rightarrow 0\) as \(k\rightarrow \infty \), we get

$$\begin{aligned} \frac{c_n^d}{H(c_n)}R^{(0)}_\varepsilon (y)\rightarrow 0. \end{aligned}$$

Taking into account (23), we conclude that

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{c_{n}^d}{n{\textbf{P}}(\tau >n)} R^{(0)}_\varepsilon (y)=0. \end{aligned}$$
(32)

Using the Stone theorem, we obtain

$$\begin{aligned} {R}_{\varepsilon }^{(3)}(y)\le \frac{C}{c_n^d} \left( 1+ \sum _{k=1}^{\varepsilon n} {\textbf{P}}(\tau >k) \right) . \end{aligned}$$

Further, by (19),

$$\begin{aligned} \sum _{k=0}^{\varepsilon n}{\textbf{P}}(\tau>k)\sim \rho ^{-1}\varepsilon ^{\rho }n{\textbf{P}}(\tau >n)\quad \text {as } n\rightarrow \infty . \end{aligned}$$

As a result, we obtain

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{c_{n}^d}{n{\textbf{P}}(\tau >n)} \sup _{y\in {\mathbb {R}}^d}R_{\varepsilon }^{(3)}(y)\le C\varepsilon ^{\rho }. \end{aligned}$$
(33)
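The Karamata-type step leading from (19) to the asymptotics of \(\sum _{k\le \varepsilon n}{\textbf{P}}(\tau >k)\) used above can be spelled out; a sketch, under the assumption (consistent with (19)) that \({\textbf{P}}(\tau >n)=n^{\rho -1}\ell (n)\) with \(\ell \) slowly varying:

```latex
\sum_{k=0}^{[\varepsilon n]}\mathbf{P}(\tau>k)
  \;\sim\; \int_{0}^{\varepsilon n} t^{\rho-1}\ell(t)\,\mathrm{d}t
  \;\sim\; \frac{(\varepsilon n)^{\rho}\,\ell(\varepsilon n)}{\rho}
  \;\sim\; \rho^{-1}\varepsilon^{\rho}\, n\,\mathbf{P}(\tau>n),
```

where the middle equivalence is Karamata's theorem and the last one uses \(\ell (\varepsilon n)\sim \ell (n)\) for fixed \(\varepsilon >0\); keeping only an upper bound in the same computation produces the factor \(C\varepsilon ^{\rho }\) in (33).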

Applying (31) with \(A=[0,y_1)\times {\mathbb {R}}^{d-1}\), we get

$$\begin{aligned} {R}_{\varepsilon }^{(1)}(y) \le \frac{CH(c_n)}{nc_n^d} \sum _{k=1}^{\varepsilon n} {\textbf{P}}(S_1(k)\in (0,y_1+2)) \le \varepsilon \frac{CH(c_n)}{c_n^d}. \end{aligned}$$

From this estimate and (23), we deduce

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{c_{n}^d}{n\textbf{P}(\tau >n)} \sup _{y\in {\mathbb {H}}^+}R_{\varepsilon }^{(1)}(y)\le C\varepsilon . \end{aligned}$$
(34)

Thus, in the non-lattice case we combine the Stone local limit theorem with the first equality in (8) and obtain, uniformly in \(y\in {\mathbb {H}}^+\),

$$\begin{aligned} R_{\varepsilon }^{(2)}(y)&=\sum _{k=[\varepsilon n]+1}^{[(1-\varepsilon )n]} \frac{1}{c_{n-k}^d}\int _{0}^{y_1}\int _{{\mathbb {R}}^{d-1}}g_{\alpha ,\sigma }\left( \frac{y-z}{c_{n-k}} \right) \textrm{d}B_{k}(z)\\&\quad + o\left( \frac{1}{c_{n\varepsilon }^{d}}\sum _{k=1}^{n}{\textbf{P}}(S_{1}(k)<y_1,\tau>k) \right) \\&=\sum _{k=[\varepsilon n]+1}^{[(1-\varepsilon )n]}\frac{\textbf{P}(\tau ^{-}>k)}{c_{n-k}^d}\int _{0}^{y_1/c_k}\int _{{\mathbb {R}}^{d-1}}g_{\alpha ,\sigma }\left( \frac{y-c_{k}u}{ c_{n-k}}\right) \textbf{P}(M_{\alpha ,\sigma }\in \textrm{d}u) \\&\quad + o\left( \frac{1}{c_{n\varepsilon }^d}\sum _{k=1}^{n}{\textbf{P}}(S_{1}(k)<y_1,\tau>k)+ \sum _{k=1}^{n-1}\frac{\textbf{P}(\tau >k)}{c_{n\varepsilon }^d}\right) . \end{aligned}$$

According to (19),

$$\begin{aligned} \sum _{k=1}^{n}{\textbf{P}}(S_{1}(k)<y_1,\tau>k)\le \sum _{k=1}^{n}{\textbf{P}}(\tau>k)\le Cn \textbf{P}(\tau >n). \end{aligned}$$

Hence,

$$\begin{aligned} R_{\varepsilon }^{(2)}(y)&=\sum _{k=[\varepsilon n]+1}^{[(1-\varepsilon )n]}\frac{\textbf{P}(\tau ^{-}>k)}{c_{n-k}^d}\int _{0}^{y_1/c_k}\int _{{\mathbb {R}}^{d-1}}g_{\alpha ,\sigma }\left( \frac{y-c_{k}u}{c_{n-k}}\right) \textbf{P}(M_{\alpha ,\sigma }\in \textrm{d}u) \\&\quad +o\left( \frac{n\textbf{P}(\tau >n)}{c_{n\varepsilon }^d}\right) . \end{aligned}$$

Since \(c_{k}\) and \(\textbf{P}(\tau >k)\) are regularly varying and \( g_{\alpha ,\sigma }(x)\) is uniformly continuous on \({\mathbb {R}}^d\), we set, for brevity, \(v=y/c_{n}\) and continue the previous estimates for \( R_{\varepsilon }^{(2)}(y)\) with

$$\begin{aligned}&=\frac{\textbf{P}(\tau>n)}{c_{n}^d}\sum _{k=[\varepsilon n]+1}^{[(1-\varepsilon )n]}\frac{(k/n)^{\rho -1}}{(1-k/n)^{1/\alpha }}\\&\quad \times \int _{0}^{v^{(1)}/(k/n)^{1/\alpha }}\int _{{\mathbb {R}}^{d-1}} g_{\alpha ,\sigma }\left( \frac{ v-(k/n)^{1/\alpha }u}{(1-k/n)^{1/\alpha }}\right) \textbf{P}(M_{\alpha ,\sigma }\in \textrm{d}u) \\&\quad + o\left( \frac{n\textbf{P}(\tau>n)}{c^d_{n\varepsilon }}\right) \\&=\frac{n\textbf{P}(\tau>n)}{c_{n}^d}f(\varepsilon ,1-\varepsilon ;v)+o\left( \frac{n\textbf{P}(\tau >n)}{c^d_{n\varepsilon }}\right) , \end{aligned}$$

where, for \(0\le w_{1}\le w_{2}\le 1\),

$$\begin{aligned} f(w_{1},w_{2};v):=\int _{w_{1}}^{w_{2}}\frac{t^{\rho -1}\textrm{d}t}{(1-t)^{1/\alpha }} \int _{0}^{v^{(1)}/t^{1/\alpha }}\int _{{\mathbb {R}}^{d-1}}g_{\alpha ,\sigma }\left( \frac{ v-t^{1/\alpha }u}{(1-t)^{1/\alpha }}\right) \textbf{P}(M_{\alpha ,\sigma }\in \textrm{d}u). \nonumber \\ \end{aligned}$$
(35)

Observe that, by the boundedness of \(g_{\alpha ,\sigma }\),

$$\begin{aligned} f(0,\varepsilon ;v)\le C\int _{0}^{\varepsilon }t^{\rho -1}\textrm{d}t\le C\varepsilon ^{\rho }. \end{aligned}$$

Further, it follows from (24) that \(\int \phi (u){\textbf{P}}(M_{\alpha ,\sigma }\in \textrm{d}u)\le C\int \phi (u)\textrm{d}u\) for every non-negative integrable function \(\phi \). Therefore,

$$\begin{aligned} f(1-\varepsilon ,1;v)&\le C\int _{1-\varepsilon }^{1}\frac{t^{\rho -1}\textrm{d}t}{(1-t)^{1/\alpha }} \int _{0}^{v^{(1)}/t^{1/\alpha }}\int _{{\mathbb {R}}^{d-1}}g_{\alpha ,\sigma }\left( \frac{ v-t^{1/\alpha }u}{(1-t)^{1/\alpha }}\right) \textrm{d}u\\&=C\int _{1-\varepsilon }^{1}t^{\rho -1-1/\alpha }\textrm{d}t\int _{0}^{v^{(1)}/(1-t)^{1/\alpha }} \int _{{\mathbb {R}}^{d-1}}g_{\alpha ,\sigma }\left( z\right) \textrm{d}z\le C\varepsilon . \end{aligned}$$

As a result, we have

$$\begin{aligned} \limsup _{n\rightarrow \infty } \sup _{y\in {\mathbb {H}}^+}\left| \frac{c_{n}^d}{n\textbf{P} (\tau >n)}R_{\varepsilon }^{(2)}(y)-f(0,1;y/c_{n})\right| \le C\varepsilon ^{\rho }. \end{aligned}$$
(36)

Combining (32)–(36) with representation (30) leads to

$$\begin{aligned} \limsup _{n\rightarrow \infty }\sup _{y\in {\mathbb {H}}^+}\left| \frac{c^d_{n}}{\textbf{P} (\tau >n)}b_{n}(y)-f(0,1;y/c_{n})\right| \le C\varepsilon ^{\rho }. \end{aligned}$$
(37)

Since \(\varepsilon >0\) is arbitrary, it follows that, as \(n\rightarrow \infty \),

$$\begin{aligned} \frac{c_{n}^d}{\textbf{P}(\tau >n)}b_{n}(y)-f(0,1;y/c_{n})\rightarrow 0 \end{aligned}$$
(38)

uniformly in \(y\in {\mathbb {H}}^+\). Recalling (8), we deduce, by integrating (38) and performing straightforward transformations, that

$$\begin{aligned} \int _{u+r\Delta }f(0,1;z)\textrm{d}z=\textbf{P}(M_{\alpha ,\sigma }\in u+r\Delta ) \end{aligned}$$
(39)

for all \(u\in {\mathbb {H}}^+, r>0\). This means, in particular, that the distribution of \(M_{\alpha ,\sigma }\) is absolutely continuous. Furthermore, it is not difficult to see that \(z\mapsto f(0,1;z)\) is a continuous mapping. Hence, in view of (39), we may consider \(f(0,1;z)\) as a continuous version of the density of the distribution of \(M_{\alpha ,\sigma }\) and set \(p_{M_{\alpha ,\sigma }}(z):=f(0,1;z)\). This and (38) imply the statement of Theorem 2 for \(\Delta =[0,1)^d\). To establish the desired result for arbitrary \(r\Delta \), \(r>0\), it suffices to consider the random walk S(n)/r and to observe that

$$\begin{aligned} c_{n}^{r }:=\inf \left\{ u\ge 0:\frac{1}{u^{2}}\int _{-u}^{u}x^{2}{\textbf{P}}\left( \frac{|X |}{r }\in \textrm{d}x\right) \le \frac{1}{n}\right\} =c_{n}/r. \end{aligned}$$

5 Probabilities of Normal Deviations When the Random Walk Starts at an Arbitrary Point

Proof of Theorem 1

Due to the shift invariance in any direction orthogonal to \((1,0,\ldots ,0)\), we may consider, without loss of generality, only the case when the random walk starts at \(x=(x_1,0,\ldots ,0)\) with some \(x_1>0\).

As we have already mentioned, repeating the arguments from [9], one can easily show that \({\textbf{P}}(\frac{S(n)}{c_n}\in \cdot \mid \tau >n)\) and \({\textbf{P}}(\frac{S(n)}{c_n}\in \cdot \mid \tau ^+>n)\) converge weakly. Recall also that the limit of \({\textbf{P}}(\frac{S(n)}{c_n}\in \cdot \mid \tau >n)\) is denoted by \(M_{\alpha ,\sigma }\).
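For one concrete check of the tail behaviour of \(\tau \): when the distribution of \(X_1\) is continuous and symmetric, the Sparre Andersen identity gives the distribution-free value \({\textbf{P}}(\tau >n)=\binom{2n}{n}4^{-n}\), consistent with regular variation with \(\rho =1/2\). The following Monte Carlo sketch is an illustration only (standard Cauchy increments are a hypothetical choice, not part of the paper's argument):

```python
import math
import random

def survival_prob(n, trials, seed=1):
    """Monte Carlo estimate of P(S_1(k) > 0 for all k <= n)
    for a one-dimensional walk with standard Cauchy increments."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        s = 0.0
        for _ in range(n):
            # standard Cauchy increment via inverse-CDF sampling
            s += math.tan(math.pi * (rng.random() - 0.5))
            if s <= 0:
                break
        else:  # no break: the walk stayed positive for all n steps
            hits += 1
    return hits / trials

n = 20
exact = math.comb(2 * n, n) / 4 ** n  # Sparre Andersen, distribution-free
est = survival_prob(n, trials=100_000)
print(round(exact, 4), round(est, 4))
```

The estimate agrees with the combinatorial value to roughly two decimals; the same estimate would be obtained for any other continuous symmetric increment distribution.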

Fix an arbitrary Borel set \(A\subset \mathbb {H}^+\). According to Lemma 14,

$$\begin{aligned}&{\textbf{P}}\left( \frac{x+S(n)}{c_n}\in A;\tau _x>n\right) \nonumber \\&\quad =\sum _{k=0}^n\int _{(0,x_1]\times {\mathbb {R}}^{d-1}}p^+_k(\textrm{d}z-x) {\textbf{P}}\left( \frac{z+S(n-k)}{c_{n-k}}\in A;\tau >n-k\right) . \end{aligned}$$
(40)

Choose now a sequence \(\{N_n\}\) of integers satisfying

$$\begin{aligned} N_n=o(n) \quad \text {and}\quad \frac{\delta _nc_n}{c_{N_n}}\rightarrow 0. \end{aligned}$$
(41)

We start our analysis of the sum in (40) by noting that

$$\begin{aligned}&\sum _{k=N_n+1}^{n/2}\int _{(0,x_1]\times {\mathbb {R}}^{d-1}}p^+_k(\textrm{d}z-x) {\textbf{P}}\left( \frac{z+S(n-k)}{c_{n-k}}\in A;\tau>n-k\right) \\&\quad \le \sum _{k=N_n+1}^{n/2}{\textbf{P}}(S_1(k)\ge -x_1;\tau ^+>k){\textbf{P}}(\tau>n-k)\\&\quad \le {\textbf{P}}(\tau>n/2) \sum _{k=N_n+1}^{n/2}{\textbf{P}}(S_1(k)\ge -x_1;\tau ^+>k). \end{aligned}$$

Applying the second statement of Lemma 7 to the walk \(S_1(k)\) and recalling that the sequence \(\{c_k\}\) is regularly varying, we obtain

$$\begin{aligned} \sum _{k=N_n+1}^{n/2}{\textbf{P}}(S_1(k)\ge -x_1;\tau ^+>k)&\le \sum _{k=N_n+1}^{\infty }{\textbf{P}}(S_1(k)\ge -x_1;\tau ^+>k)\nonumber \\&\le \sum _{k=N_n+1}^{\infty } \frac{C_1x_1V(x_1)}{kc_k} \le \frac{C_2x_1V(x_1)}{c_{N_n}}\nonumber \\&\le \frac{C_2\delta _n c_nV(x_1)}{c_{N_n}}. \end{aligned}$$
(42)

Taking into account (41), we conclude that

$$\begin{aligned} \sum _{k=N_n+1}^{\infty }{\textbf{P}}(S_1(k)\ge -x_1;\tau ^+>k)=o(V(x_1)) \end{aligned}$$
(43)

uniformly in \(x_1\le \delta _nc_n\). Consequently,

$$\begin{aligned}&\sum _{k=N_n+1}^{n/2}\int _{(0,x_1]\times {\mathbb {R}}^{d-1}}p^+_k(\textrm{d}z-x) {\textbf{P}}\left( \frac{z+S(n-k)}{c_{n-k}}\in A;\tau>n-k\right) \nonumber \\&\quad =o(V(x_1){\textbf{P}}(\tau >n)) \end{aligned}$$
(44)

uniformly in \(x_1\le \delta _nc_n\).

Using once again Lemma 7, we obtain

$$\begin{aligned}&\sum _{k=n/2}^{n}\int _{(0,x_1]\times {\mathbb {R}}^{d-1}}p^+_k(\textrm{d}z-x) {\textbf{P}}\left( \frac{z+S(n-k)}{c_{n-k}}\in A;\tau>n-k\right) \\&\quad \le \sum _{k=n/2}^{n}{\textbf{P}}(S_1(k)\ge -x_1;\tau ^+>k){\textbf{P}}(\tau>n-k)\\&\quad \le \sum _{k=n/2}^{n}\frac{C_1x_1V(x_1)}{kc_k}{\textbf{P}}(\tau>n-k) \le \frac{C_2x_1V(x_1)}{nc_n}\sum _{j=0}^n{\textbf{P}}(\tau >j). \end{aligned}$$

Since \({\textbf{P}}(\tau >j)\) is also regularly varying, we conclude that

$$\begin{aligned}&\sum _{k=n/2}^{n}\int _{(0,x_1]\times {\mathbb {R}}^{d-1}}p^+_k(\textrm{d}z-x) {\textbf{P}}\left( \frac{z+S(n-k)}{c_{n-k}}\in A;\tau>n-k\right) \nonumber \\&\quad \le C\frac{x_1V(x_1)}{c_n}{\textbf{P}}(\tau>n) =o(V(x_1){\textbf{P}}(\tau >n)) \end{aligned}$$
(45)

uniformly in \(x_1\le \delta _n c_n\).

Choose now a sequence \(\varepsilon _n\rightarrow 0\) so that \(c_{N_n}=o(\varepsilon _n c_n)\). By the convergence for the walk started at zero,

$$\begin{aligned} {\textbf{P}}\left( \frac{z+S(n-k)}{c_{n-k}}\in A;\tau>n-k\right) \sim {\textbf{P}}(M_{\alpha ,\sigma }\in A){\textbf{P}}(\tau >n) \end{aligned}$$

uniformly in \(k\le N_n\) and \(z\in B_{\varepsilon _n c_n}(0)\), where \(B_r(y)\) denotes the ball of radius r with centre at y. Therefore,

$$\begin{aligned}&\sum _{k=0}^{N_n} \int _{(0,x_1]\times {\mathbb {R}}^{d-1}\cap B_{\varepsilon _n c_n}(0)}p^+_k(\textrm{d}z-x) {\textbf{P}}\left( \frac{z+S(n-k)}{c_{n-k}}\in A;\tau>n-k\right) \\&\quad =\left[ {\textbf{P}}(M_{\alpha ,\sigma }\in A)+o(1)\right] {\textbf{P}}(\tau>n) \sum _{k=0}^{N_n} {\textbf{P}}(S_1(k)\ge -x_1,|S(k) |<\varepsilon _nc_n;\tau ^+>k)\\&\quad =\left[ {\textbf{P}}(M_{\alpha ,\sigma }\in A)+o(1)\right] {\textbf{P}}(\tau>n) \sum _{k=0}^{N_n} {\textbf{P}}(S_1(k)\ge -x_1;\tau ^+>k)\\&\qquad +O\left( {\textbf{P}}(\tau>n)\sum _{k=0}^{N_n} {\textbf{P}}(S_1(k)\ge -x_1,|S(k) |\ge \varepsilon _nc_n;\tau ^+>k)\right) . \end{aligned}$$

By the definition of V,

$$\begin{aligned} \sum _{k=0}^{N_n}{\textbf{P}}(S_1(k)\ge -x_1;\tau ^+>k)= V(x_1)-\sum _{k=N_n+1}^\infty {\textbf{P}}(S_1(k)\ge -x_1;\tau ^+>k). \end{aligned}$$

Using here (43), we obtain

$$\begin{aligned} \nonumber&\sum _{k=0}^{N_n} \int _{(0,x_1]\times {\mathbb {R}}^{d-1}\cap B_{\varepsilon _n c_n}(0)}p^+_k(\textrm{d}z-x) {\textbf{P}}\left( \frac{z+S(n-k)}{c_{n-k}}\in A;\tau>n-k\right) \\ \nonumber&\quad =\left[ {\textbf{P}}(M_{\alpha ,\sigma }\in A)+o(1)\right] V(x_1){\textbf{P}}(\tau>n)\\&\qquad +O\left( {\textbf{P}}(\tau>n)\sum _{k=0}^{N_n} {\textbf{P}}(S_1(k)\ge -x_1,|S(k) |\ge \varepsilon _nc_n;\tau ^+>k)\right) \end{aligned}$$
(46)

uniformly in \(x_1\le \delta _nc_n\).

Furthermore,

$$\begin{aligned} \nonumber&\sum _{k=0}^{N_n} \int _{(0,x_1]\times {\mathbb {R}}^{d-1}\cap B^c_{\varepsilon _n c_n}(0)}p^+_k(\textrm{d}z-x) {\textbf{P}}\left( \frac{z+S(n-k)}{c_{n-k}}\in A;\tau>n-k\right) \\ \nonumber&\quad \le \sum _{k=0}^{N_n} {\textbf{P}}(S_1(k)\ge -x_1,|S(k) |\ge \varepsilon _nc_n;\tau ^+>k){\textbf{P}}(\tau>n-k)\\&\quad \le C{\textbf{P}}(\tau>n)\sum _{k=0}^{N_n} {\textbf{P}}(S_1(k)\ge -x_1,|S(k) |\ge \varepsilon _nc_n;\tau ^+>k). \end{aligned}$$
(47)

With all these estimates at hand, it is easy to see that it suffices to show that, uniformly in \(x_1\le \delta _nc_n\),

$$\begin{aligned} \sum _{k=0}^{N_n} {\textbf{P}}(S_1(k)\ge -x_1,|S(k) |\ge \varepsilon _nc_n;\tau ^+>k)=o(V(x_1)). \end{aligned}$$
(48)

Indeed, applying (48) to (46) and (47) leads us to the equality

$$\begin{aligned}&\sum _{k=0}^{N_n}\int _{(0,x_1]\times {\mathbb {R}}^{d-1}}p^+_k(\textrm{d}z-x) {\textbf{P}}\left( \frac{z+S(n-k)}{c_{n-k}}\in A;\tau>n-k\right) \\&\quad =\left[ {\textbf{P}}(M_{\alpha ,\sigma }\in A)+o(1)\right] V(x_1){\textbf{P}}(\tau >n). \end{aligned}$$

Plugging this and estimates (44), (45) into (40), we get

$$\begin{aligned} {\textbf{P}}\left( \frac{x+S(n)}{c_{n}}\in A;\tau _x>n\right) =\left[ {\textbf{P}}(M_{\alpha ,\sigma }\in A)+o(1)\right] V(x_1){\textbf{P}}(\tau >n) \end{aligned}$$

uniformly in \(x_1\le \delta _nc_n\). Recalling that \({\textbf{P}}(\tau _x>n)\sim V(x_1){\textbf{P}}(\tau >n)\), we have, uniformly in \(x_1\le \delta _nc_n\), the convergence

$$\begin{aligned} {\textbf{P}}\left( \frac{x+S(n)}{c_{n}}\in A;\tau _x>n\right) \rightarrow {\textbf{P}}(M_{\alpha ,\sigma }\in A). \end{aligned}$$

To prove (48), we fix some \(R\ge 1\) and notice that

$$\begin{aligned}&\sum _{k=0}^{N_n} {\textbf{P}}(S_1(k)\ge -x_1,|S(k) |\ge \varepsilon _nc_n;\tau ^+>k)\\&\quad \le \sum _{k=0}^{R} {\textbf{P}}(|S(k) |\ge \varepsilon _nc_n;\tau ^+>k) +\sum _{k=R+1}^{N_n} {\textbf{P}}(S_1(k)\ge -x_1;\tau ^+>k). \end{aligned}$$

Similarly to (42),

$$\begin{aligned} \sum _{k=R+1}^{N_n}{\textbf{P}}(S_1(k)\ge -x_1;\tau ^+>k) \le C\frac{x_1V(x_1)}{c_R}. \end{aligned}$$

Furthermore, using the convergence of measures \({\textbf{P}}(\frac{S(n)}{c_n}\in \cdot \mid \tau ^+>n)\), we have

$$\begin{aligned} \sum _{k=0}^{R}{\textbf{P}}(|S(k) |\ge \varepsilon _nc_n;\tau ^+>k) =o\left( \sum _{k=0}^{R}{\textbf{P}}(\tau ^+>k)\right) =o(R{\textbf{P}}(\tau ^+>R)) \end{aligned}$$

uniformly in \(R\le N_n\). Fix some \(\gamma >0\) and take R such that \(c_{R-1}<x_1/\gamma \) and \(c_R\ge x_1/\gamma \). Then,

$$\begin{aligned} \sum _{k=R+1}^{N_n}{\textbf{P}}(S_1(k)\ge -x_1;\tau ^+>k) \le C\gamma V(x_1) \end{aligned}$$

and, by Lemma 9,

$$\begin{aligned} R{\textbf{P}}(\tau ^+>R)\le CV(c_R)\le C\gamma ^{-\alpha (1-\rho )}V(x_1). \end{aligned}$$

Combining these estimates, we conclude that

$$\begin{aligned} \limsup _{n\rightarrow \infty }\sup _{x_1\le \delta _nc_n}\frac{1}{V(x_1)} \sum _{k=0}^{N_n} {\textbf{P}}(S_1(k)\ge -x_1,|S(k) |\ge \varepsilon _nc_n;\tau ^+>k) \le C\gamma . \end{aligned}$$

Since \(\gamma \) can be chosen arbitrarily small, we get (48). Thus, the proof of the theorem is complete. \(\square \)

Proof

(First proof of Theorem 2) We give a proof in the non-lattice case only. The lattice case is even simpler.

As in the proof of Theorem 1, it suffices to consider the case \(x=(x_1,0,\ldots ,0)\) with \(x_1\in (0,\delta _nc_n]\). The case \(x_1=0\) was considered in Sect. 4, where we proved that, uniformly in \(y\in \mathbb {H}^+\),

$$\begin{aligned} b_n(y)\sim \frac{{\textbf{P}}(\tau >n)}{c_n^d}p_{M_{\alpha ,\sigma }}(y/c_n). \end{aligned}$$
(49)

To generalise this relation to the case of positive \(x_1\), we first notice that, by Lemma 14,

$$\begin{aligned} p_n(x,y)=\sum _{k=0}^n\int _{(0,x_1]\times {\mathbb {R}}^{d-1}}p^+_k(\textrm{d}z-x) b_{n-k}(y-z). \end{aligned}$$
(50)

Fix some \(\gamma \in (0,1/2)\). The analysis of

$$\begin{aligned} \sum _{k=0}^{(1-\gamma )n}\int _{(0,x_1]\times {\mathbb {R}}^{d-1}}p^+_k(\textrm{d}z-x) b_{n-k}(y-z) \end{aligned}$$

is very similar to our proof of Theorem 1. If \(k\le (1-\gamma )n\), then we have the bound

$$\begin{aligned} b_{n-k}(y-z)\le C(\gamma )\frac{{\textbf{P}}(\tau >n)}{c_n^d}, \end{aligned}$$

which is an immediate consequence of (49). Using this uniform bound and the local limit theorem (49) directly, and repeating our arguments from the proof of Theorem 1, one can easily obtain

$$\begin{aligned} \sup _y\left|\frac{c_n^d}{{\textbf{P}}(\tau >n)} \sum _{k=0}^{(1-\gamma )n}\int _{(0,x_1]\times {\mathbb {R}}^{d-1}}p^+_k(\textrm{d}z-x) b_{n-k}(y-z)-p_{M_{\alpha ,\sigma }}(y/c_n)\right|\rightarrow 0\nonumber \\ \end{aligned}$$
(51)

uniformly in \(x_1\le \delta _n c_n\).

For \(k>(1-\gamma )n\), the bound for \(b_{n-k}\) mentioned above is useless, and an additional argument is needed. We first notice that

$$\begin{aligned}&\int _{(0,x_1]\times {\mathbb {R}}^{d-1}}p^+_k(\textrm{d}z-x)b_{n-k}(y-z)\\&\quad \le \sum _{u\in [0,x_1]\times {\mathbb {R}}^{d-1}\cap {\mathbb {Z}}^d}p_k^+(u-x+\Delta ) \max _{z\in u+\Delta }b_{n-k}(y-z)\\&\quad \le \sum _{u\in [0,x_1]\times {\mathbb {R}}^{d-1}\cap {\mathbb {Z}}^d}p_k^+(u-x+\Delta ) {\textbf{P}}(S(n-k)\in y-u-\textbf{1}+2\Delta ;\tau >n-k). \end{aligned}$$

Applying now the second statement of Lemma 7, we get

$$\begin{aligned}&\int _{(0,x_1]\times {\mathbb {R}}^{d-1}}p^+_k(\textrm{d}z-x)b_{n-k}(y-z)\\&\quad \le C\frac{V(x_1)}{kc_k^d} \sum _{u\in [0,x_1]\times {\mathbb {R}}^{d-1}\cap {\mathbb {Z}}^d} {\textbf{P}}(S(n-k)\in y-u-\textbf{1}+2\Delta ;\tau>n-k)\\&\quad \le C\frac{V(x_1)}{kc_k^d}{\textbf{P}}(\tau >n-k). \end{aligned}$$

Consequently,

$$\begin{aligned} \sum _{k=(1-\gamma )n}^n \int _{(0,x_1]\times {\mathbb {R}}^{d-1}}p^+_k(\textrm{d}z-x)b_{n-k}(y-z) \le C\frac{V(x_1)}{nc_n^d} \sum _{j=0}^{\gamma n}{\textbf{P}}(\tau >j). \end{aligned}$$

Noting now that the regular variation of \({\textbf{P}}(\tau >j)\) implies

$$\begin{aligned} \sum _{j=0}^{\gamma n}{\textbf{P}}(\tau>j) \le C\gamma ^{1-\rho }n{\textbf{P}}(\tau >n), \end{aligned}$$

we conclude that

$$\begin{aligned} \limsup _{n\rightarrow \infty }\frac{c_n^d}{{\textbf{P}}(\tau >n)} \sum _{k=(1-\gamma )n}^n \int _{(0,x_1]\times {\mathbb {R}}^{d-1}}p^+_k(\textrm{d}z-x)b_{n-k}(y-z) \le C\gamma ^{1-\rho }V(x_1) \end{aligned}$$

for all \(x_1\le \delta _n c_n.\) Plugging this and (51) into (50) and letting \(\gamma \rightarrow 0\), we get the desired result. \(\square \)

Proof

(Second proof of Theorem 2) If the local assumption (14) holds, then we can use an approach similar to that in [5]; see Theorems 5 and 6 there. This approach allows one to avoid considering first the special case \(x=0\), as is done in Sect. 4. Without loss of generality, we may assume that \(x=(x_1,0,\ldots ,0)\).

We first notice that if y is such that \(y_1\le 2\varepsilon c_n\), then, combining Lemmas 7 and 9,

$$\begin{aligned} p_n(x,y)\le C\frac{V(x_1)U(2\varepsilon c_n)}{nc_n^d} \le C{\textbf{P}}(\tau _x>n)\varepsilon ^{\alpha \rho }\frac{1}{c_n^d}. \end{aligned}$$

Thus, uniformly in \(y\in \mathbb {H}^+\) with \(y_1\le 2\varepsilon c_n\),

$$\begin{aligned} c_n^d{\textbf{P}}(x+S(n)\in y+\Delta \mid \tau _x>n)\le C\varepsilon ^{\alpha \rho }. \end{aligned}$$

Combining this bound with the fact that \(p_{M_{\alpha ,\sigma }}(z)\) goes to 0 as \(z\rightarrow \partial \mathbb {H}^+\), we conclude that

$$\begin{aligned} \lim _{\varepsilon \rightarrow 0}\limsup _{n\rightarrow \infty } \sup _{y\in \mathbb {H}^+:\,y_1\le 2\varepsilon c_n}\left|c_{n}^d{\textbf{P}}\left( x+S(n)\in y+\Delta \mid \tau _x>n\right) - {p_{M_{\alpha ,\sigma }}\left( \frac{y-x}{c_{n}}\right) }\right|=0 \end{aligned}$$
(52)

uniformly in x with \(x_1\le \delta _nc_n.\)

We next consider large values of y. More precisely, we assume that \(|y |>3A c_n\) with some \(A>1\). In this case, we have, by the Markov property at time \(m=[n/2]\),

$$\begin{aligned} p_n(x,y)&=\int _{I(x)}{\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m)p_{n-m}(z,y)\\&\quad +\int _{\mathbb {H}^+\setminus I(x)}{\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m)p_{n-m}(z,y), \end{aligned}$$

where \(I(x)=\{z: |z-x |\le Ac_n\}\). If \(z\in I(x)\), then \(|y-z |>Ac_n\) for all sufficiently large values of n. Using (73), we have

$$\begin{aligned} p_{n-m}(z,y)\le C\frac{n\phi (Ac_n)}{c_n^d}\le CA^{-\alpha }\frac{1}{c_n^d}. \end{aligned}$$

Therefore,

$$\begin{aligned} c_n^d\int _{I(x)}{\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m)p_{n-m}(z,y) \le CA^{-\alpha }{\textbf{P}}(\tau _x>m). \end{aligned}$$

Furthermore, using the standard concentration function estimate, we have

$$\begin{aligned}&c_n^d \int _{\mathbb {H}^+\setminus I(x)}{\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m)p_{n-m}(z,y)\\&\quad \le C{\textbf{P}}(x+S(m)\notin I(x);\tau _x>m) \le C{\textbf{P}}(|S(m) |>Ac_n;\tau _x>m). \end{aligned}$$

Combining these bounds, one gets easily

$$\begin{aligned} c_n^d{\textbf{P}}(x+S(n)\in y+\Delta \mid \tau _x>n) \le C\left( A^{-\alpha }+{\textbf{P}}(|S(m)|>Ac_n\mid \tau _x>m)\right) . \end{aligned}$$

Applying now the integral limit theorem, we conclude that

$$\begin{aligned} \lim _{A\rightarrow \infty }\limsup _{n\rightarrow \infty } \sup _{y\in \mathbb {H}^+:\,|y |> 3Ac_n} \left|c_{n}^d{\textbf{P}}\left( x+S(n)\in y+\Delta \mid \tau _x>n\right) - {p_{M_{\alpha ,\sigma }}\left( \frac{y-x}{c_{n}}\right) }\right|=0 \end{aligned}$$
(53)

uniformly in x with \(x_1\le \delta _nc_n.\)

Thus, it remains to consider y such that \(y_1>2\varepsilon c_n\) and \(|y |\le 3Ac_n\). To analyse this range of values of y, we set \(m=[(1-\gamma )n]\) with some \(\gamma <1/2\). Let \(B_{\varepsilon c_n}(y)\) denote the ball of radius \(\varepsilon c_n\) around y. Then, by the Markov property at time m, we have

$$\begin{aligned} p_n(x,y)&=\int _{B_{\varepsilon c_n}(y)}{\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m)p_{n-m}(z,y)\nonumber \\&\quad + \int _{\mathbb {H}^+\setminus B_{\varepsilon c_n}(y)}{\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m)p_{n-m}(z,y). \end{aligned}$$
(54)

Using the large deviations bound (74), one gets easily

$$\begin{aligned} \sup _{z\in \mathbb {H}^+\setminus B_{\varepsilon c_n}(y)} p_{n-m}(z,y) \le C\frac{(n-m)\phi (\varepsilon c_n)}{(\varepsilon c_n)^d} \le C \gamma \varepsilon ^{-d-\alpha }c_n^{-d}. \end{aligned}$$

Consequently,

$$\begin{aligned} c_n^d \int _{\mathbb {H}^+\setminus B_{\varepsilon c_n}(y)} {\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m)p_{n-m}(z,y) \le C{\textbf{P}}(\tau _x>n)\gamma \varepsilon ^{-d-\alpha }.\qquad \end{aligned}$$
(55)

For the integral over \(B_{\varepsilon c_n}(y)\), we have

$$\begin{aligned}&\int _{B_{\varepsilon c_n}(y)} {\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m)p_{n-m}(z,y) \\&\quad =\int _{B_{\varepsilon c_n}(y)} {\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m){\textbf{P}}(z+S(n-m)\in y+\Delta )\\&\qquad -\int _{B_{\varepsilon c_n}(y)} {\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m){\textbf{P}}(z+S(n-m)\in y+\Delta ;\tau _z\le n-m). \end{aligned}$$

By the strong Markov property,

$$\begin{aligned}&{\textbf{P}}(z+S(n-m)\in y+\Delta ;\tau _z\le n-m)\\&\quad =\sum _{k=1}^{n-m-1}\int _{\mathbb {H}^-}{\textbf{P}}(z+S(k)\in \textrm{d}u,\tau _z=k) {\textbf{P}}(u+S(n-m-k)\in y+\Delta ). \end{aligned}$$

Noting that \(|y-u |>\varepsilon c_n\) for all \(u\in \mathbb {H}^-\) and using once again (74), we obtain

$$\begin{aligned} c_n^d{\textbf{P}}(z+S(n-m)\in y+\Delta ;\tau _z\le n-m) \le C\gamma \varepsilon ^{-d-\alpha }. \end{aligned}$$

Consequently,

$$\begin{aligned} \nonumber&c_n^d\int _{B_{\varepsilon c_n}(y)}{\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m) {\textbf{P}}(z+S(n-m)\in y+\Delta ;\tau _z\le n-m)\\&\quad \le C\gamma \varepsilon ^{-d-\alpha }{\textbf{P}}(\tau _x>m). \end{aligned}$$
(56)

By the Stone local limit theorem,

$$\begin{aligned}&\int _{B_{\varepsilon c_n}(y)} {\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m){\textbf{P}}(z+S(n-m)\in y+\Delta )\\&\quad =\frac{1+o(1)}{c_{n-m}^d}\int _{B_{\varepsilon c_n}(y)} {\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m) g_{\alpha ,\sigma }\left( \frac{y-z}{c_{n-m}}\right) . \end{aligned}$$

Recalling that \(g_{\alpha ,\sigma }\) is bounded, we then get

$$\begin{aligned}&\frac{c_n^d}{{\textbf{P}}(\tau _x>m)}\int _{B_{\varepsilon c_n}(y)} {\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m){\textbf{P}}(z+S(n-m)\in y+\Delta )\nonumber \\&\quad =\gamma ^{-d/\alpha }{\textbf{E}}\left[ g_{\alpha ,\sigma }\left( \frac{y-x-S(m)}{c_{n-m}}\right) \textrm{1}\left\{ x+S(m)\in B_{\varepsilon c_n}(y)\right\} \mid \tau _x>m\right] +o(1). \end{aligned}$$
(57)

Applying now the integral limit theorem, we infer that

$$\begin{aligned}&{\textbf{E}}\left[ g_{\alpha ,\sigma }\left( \frac{y-x-S(m)}{c_{n-m}}\right) \textrm{1}\left\{ x+S(m)\in B_{\varepsilon c_n}(y)\right\} \mid \tau _x>m\right] \\&\quad ={\textbf{E}}\left[ g_{\alpha ,\sigma }\left( \frac{y-x}{\gamma ^{1/\alpha }c_{n}} -\frac{(1-\gamma )^{1/\alpha }}{\gamma ^{1/\alpha }} M_{\alpha ,\sigma }\right) \textrm{1}\left\{ M_{\alpha ,\sigma }\in B_{\varepsilon (1-\gamma )^{-1/\alpha }}\left( \frac{y-x}{c_n}\right) \right\} \right] +o(1)\\&\quad ={\textbf{E}}\left[ g_{\alpha ,\sigma }\left( \frac{y-x}{\gamma ^{1/\alpha }c_{n}} -\frac{(1-\gamma )^{1/\alpha }}{\gamma ^{1/\alpha }} M_{\alpha ,\sigma }\right) \right] \\&\qquad -{\textbf{E}}\left[ g_{\alpha ,\sigma }\left( \frac{y-x}{\gamma ^{1/\alpha }c_{n}} -\frac{(1-\gamma )^{1/\alpha }}{\gamma ^{1/\alpha }} M_{\alpha ,\sigma }\right) \textrm{1}\left\{ M_{\alpha ,\sigma }\notin B_{\varepsilon (1-\gamma )^{-1/\alpha }}\left( \frac{y-x}{c_n}\right) \right\} \right] +o(1). \end{aligned}$$

Since \(g_{\alpha ,\sigma }(y)\le C|y |^{-d-\alpha }\),

$$\begin{aligned} \nonumber&{\textbf{E}}\left[ g_{\alpha ,\sigma }\left( \frac{y-x}{\gamma ^{1/\alpha }c_{n}} -\frac{(1-\gamma )^{1/\alpha }}{\gamma ^{1/\alpha }} M_{\alpha ,\sigma }\right) \textrm{1}\left\{ M_{\alpha ,\sigma }\notin B_{\varepsilon (1-\gamma )^{-1/\alpha }}\left( \frac{y-x}{c_n}\right) \right\} \right] \\&\quad \le C\gamma ^{(d+\alpha )/\alpha }\varepsilon ^{-d-\alpha }. \end{aligned}$$
(58)

Finally, we notice that, uniformly in \(w\in \mathbb {H}^+\),

$$\begin{aligned} \nonumber&{\textbf{E}}\left[ g_{\alpha ,\sigma }\left( \frac{w}{\gamma ^{1/\alpha }} -\frac{(1-\gamma )^{1/\alpha }}{\gamma ^{1/\alpha }} M_{\alpha ,\sigma }\right) \right] \\ \nonumber&\quad =\int _{{\mathbb {R}}^d}g_{\alpha ,\sigma }\left( \frac{w}{\gamma ^{1/\alpha }} -\frac{(1-\gamma )^{1/\alpha }}{\gamma ^{1/\alpha }}u\right) p_{M_{\alpha ,\sigma }}(u)\textrm{d}u\\ \nonumber&\quad =\left( \frac{\gamma }{1-\gamma }\right) ^{d/\alpha }\int _{{\mathbb {R}}^d}g_{\alpha ,\sigma }\left( v\right) p_{M_{\alpha ,\sigma }}\left( \frac{w-\gamma ^{1/\alpha }v}{(1-\gamma )^{1/\alpha }}\right) \textrm{d}v\\&\quad =\gamma ^{d/\alpha }(p_{M_{\alpha ,\sigma }}(w)+o(1)) \quad \text {as }\gamma \rightarrow 0. \end{aligned}$$
(59)
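The scaling relation (59) can be sanity-checked numerically in the one explicitly computable case \(\alpha =2\), \(d=1\), with \(g\) and the density of \(M\) both taken standard Gaussian; then \(\gamma ^{1/2}Z+(1-\gamma )^{1/2}M\) is again standard normal, so the expectation equals \(\gamma ^{1/2}p(w)\) exactly for every \(w\) and \(\gamma \in (0,1)\). This Gaussian specialisation is an illustration only (the paper's focus is \(\alpha <2\), where no closed form is available):

```python
import math

def phi(x):
    """Standard normal density."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def lhs(w, gamma, steps=20000, cut=12.0):
    """Midpoint-rule evaluation of
    E[ g( w/gamma^{1/2} - ((1-gamma)/gamma)^{1/2} * M ) ]
    with g and the density of M both standard normal (alpha = 2, d = 1)."""
    a, b = math.sqrt(gamma), math.sqrt(1.0 - gamma)
    h = 2.0 * cut / steps
    total = 0.0
    for i in range(steps):
        u = -cut + (i + 0.5) * h  # midpoint of the i-th subinterval
        total += phi(w / a - (b / a) * u) * phi(u) * h
    return total

# in this Gaussian case, the right-hand side gamma^{1/2} * p(w) is exact
w, gamma = 0.7, 0.3
print(lhs(w, gamma), math.sqrt(gamma) * phi(w))
```

The two printed values coincide up to quadrature error, confirming the \(\gamma ^{d/\alpha }p_{M}(w)\) scaling in (59) in this special case.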

Combining (56)–(59), we conclude that

$$\begin{aligned}&\lim _{\gamma \rightarrow 0}\limsup _{n\rightarrow \infty }\Bigg |\frac{c_n^d}{{\textbf{P}}(\tau _x>m)} \int _{B_{\varepsilon c_n}(y)} {\textbf{P}}(x+S(m)\in \textrm{d}z,\tau _x>m)p_{n-m}(z,y)\\&\quad -p_{M_{\alpha ,\sigma }}\left( \frac{y-x}{c_n}\right) \Bigg |=0. \end{aligned}$$

From this relation and from (55), we get

$$\begin{aligned} \sup _{y\in \mathbb {H}^+:\,y_1>2\varepsilon c_n,\,|y |\le 3Ac_n}\left|c_{n}^d{\textbf{P}}\left( x+S(n)\in y+\Delta \mid \tau _x>n\right) - {p_{M_{\alpha ,\sigma }}\left( \frac{y-x}{c_{n}}\right) }\right|\rightarrow 0 \end{aligned}$$

uniformly in x with \(x_1\le \delta _n c_n\). This completes the proof of the theorem. \(\square \)

6 Probabilities of Small Deviations When the Random Walk Starts at the Origin

Proposition 15

Suppose \(X\in \mathcal {D}(d,\alpha ,\sigma )\) and the distribution of X is non-lattice. Then,

$$\begin{aligned} c_{n}^d{\textbf{P}}(S(n)\in y+\Delta \mid \tau>n)\sim g_{\alpha ,\sigma }\left( 0, \frac{y_{(2,d)}}{c_n} \right) \frac{\int _{y_{1}}^{y_{1}+1}H(u)\textrm{d}u}{n{\textbf{P}}\left( \tau ^{-}>n\right) },\quad \text {as } n\rightarrow \infty \end{aligned}$$
(60)

uniformly in \(y_{1}\in (0,\delta _{n}c_{n}]\), where \(\delta _{n}\rightarrow 0\) as \(n\rightarrow \infty \).

Proof

First, using once again (31), we get

$$\begin{aligned} {R}_{\varepsilon }^{(0)}(y)+{R}_{\varepsilon }^{(1)}(y)+{R}_{\varepsilon }^{(2)}(y) \le \frac{CH(y_1)}{nc_n^d} \sum _{k=1}^{(1-\varepsilon ) n} {\textbf{P}}(S_{1}(k)\in (0,y_1+2)). \end{aligned}$$

When \(\alpha \in (1,2]\), using the Stone theorem, we proceed as follows:

$$\begin{aligned} \sum _{k=1}^{(1-\varepsilon ) n} {\textbf{P}}(S_{1}(k)\in (0,y_1+2))&\le \sum _{k=1}^{(1-\varepsilon ) n} \frac{C(y_1+3)}{c_k}\\&\le C (y_1+3) \frac{n}{c_n}\le C\delta _n n. \end{aligned}$$

Now, we will consider the case \(\alpha \le 1\). Fix \(\beta >0\) and notice that \(1/\alpha -1+\beta >0\) for any \(\alpha \le 1\). Since \(c_n\) is regularly varying of index \(1/\alpha \), by Potter’s bounds, there exists \(C>0\), such that for \(k\le n\),

$$\begin{aligned} \frac{c_{n}}{c_k} \le C \left( \frac{n}{k}\right) ^{1/\alpha +\beta }. \end{aligned}$$

Then, for the sequence \(\gamma _n=\delta _n^{\frac{1}{2(1/\alpha -1+\beta )}} \rightarrow 0\),

$$\begin{aligned}&\sum _{k=1}^{(1-\varepsilon ) n} {\textbf{P}}(S_{1}(k)\in (0,y_1+2)) \\&\quad \le \gamma _n n + \sum _{k=\gamma _n n}^{(1-\varepsilon ) n} {\textbf{P}}(S_{1}(k)\in (0,y_1+2))\\&\quad \le \gamma _n n + C\sum _{k=\gamma _n n}^{(1-\varepsilon ) n} \frac{y_1+2}{c_k} \le \gamma _n n + C \frac{y_1+2}{c_n} \sum _{k=\gamma _n n}^{(1-\varepsilon ) n} \left( \frac{n}{k}\right) ^{1/\alpha +\beta }\\&\quad \le \gamma _n n + C \frac{y_1+2}{c_n} n \left( \frac{n}{\gamma _n n}\right) ^{1/\alpha +\beta -1} \le \gamma _n n +Cn\delta _n\left( \frac{1}{\gamma _n }\right) ^{1/\alpha +\beta -1}\\&\quad \le Cn(\gamma _n+\delta _n^{1/2}). \end{aligned}$$

Therefore,

$$\begin{aligned} {R}_{\varepsilon }^{(0)}(y)+{R}_{\varepsilon }^{(1)}(y)+{R}_{\varepsilon }^{(2)}(y) \le C (\gamma _n+\delta _n^{1/2}) \frac{H(y_1)}{c_n^d}, \end{aligned}$$

and, as a result, for any fixed \(\varepsilon >0\),

$$\begin{aligned} \lim _{n\rightarrow \infty }\sup _{y\in {\mathbb {R}}^d: 0<y_1\le \delta _{n}c_{n}}\frac{c_{n}^d}{H(y_1)} \left( {R}_{\varepsilon }^{(0)}(y)+ R_{\varepsilon }^{(1)}(y)+R_{\varepsilon }^{(2)}(y)\right) =0. \end{aligned}$$
(61)

Since (61) holds for any fixed \(\varepsilon >0\), there exists a sequence \(\varepsilon _n\downarrow 0\) along which (61) remains true; moreover, it remains true for any \(\varepsilon '_n\downarrow 0\) with \(\varepsilon '_n\ge \varepsilon _n\). From now on, we take \(\varepsilon =\varepsilon _n\). It is clear, and will be used later in the proof, that \(\varepsilon _n\) may be increased and (61) will continue to hold, as long as \(\varepsilon _n<1/2\).
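The Potter-bound step used in the estimate above is easy to visualise on a concrete sequence. The following sketch checks, for the hypothetical choice \(c_n=n^{1/\alpha }\log (n+1)\) with \(\alpha =1\) (so index \(1/\alpha =1\)), that \(c_n/c_k\le C(n/k)^{1/\alpha +\beta }\) with \(\beta =1/2\) and \(C=2\) over a range of indices; the sequence, \(\beta \), and \(C\) are all illustrative choices, not the paper's:

```python
import math

ALPHA, BETA = 1.0, 0.5   # hypothetical index and Potter exponent
C = 2.0                  # a concrete admissible constant for this example

def c(n):
    # a sample sequence, regularly varying of index 1/ALPHA
    return n ** (1.0 / ALPHA) * math.log(n + 1.0)

# Potter-type bound: c_n / c_k <= C * (n/k)^{1/ALPHA + BETA} for k <= n
ok = all(
    c(n) / c(k) <= C * (n / k) ** (1.0 / ALPHA + BETA)
    for k in range(1, 301)
    for n in range(k, 1501)
)
print(ok)
```

The point is only that the slowly varying factor \(\log (n+1)\) is absorbed by the extra power \(\beta \), which is exactly how the bound is used in the proof.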

Now, we represent

$$\begin{aligned} R_\varepsilon ^{(3)}(y)= R_\varepsilon ^{(4)}(y)+R_\varepsilon ^{(5)}(y)+R_\varepsilon ^{(6)}(y), \end{aligned}$$

where

$$\begin{aligned} R_\varepsilon ^{(4)}(y)&={\textbf{P}}({S}(n)\in y+\Delta )+\sum _{k=1}^{\varepsilon n} \int _{z_1\in (0,y_1), |z^{(2,d)}|\le \varepsilon ^{1/3} c_n} {\textbf{P}}(S(n-k)\in y-z+\Delta )\textrm{d}B_{k}(z)\\ R_\varepsilon ^{(5)}(y)&=\sum _{k=1}^{\varepsilon n} \int _{z_1\in (y_1,y_1+1), |z^{(2,d)}|\le \varepsilon ^{1/3} c_n} {\textbf{P}}(S(n-k)\in y-z+\Delta )\textrm{d}B_{k}(z)\\ R_\varepsilon ^{(6)}(y)&=\sum _{k=1}^{\varepsilon n} \int _{z\in {\mathbb {R}}^d: |z^{(2,d)}|> \varepsilon ^{1/3} c_n} {\textbf{P}}(S(n-k)\in y-z+\Delta , S_{1}(n-k)>0)\textrm{d}B_{k}(z). \end{aligned}$$

Let \(\varepsilon _n\) be an arbitrary sequence of positive numbers converging to zero. First, by the Stone local limit theorem,

$$\begin{aligned}&\int _{0<z_1<y_1, |z^{(2,d)}|\le \varepsilon _n^{1/3} c_n}{\textbf{P}}(S(n-k)\in y-z+\Delta )\textrm{d}B_{k}(z)\\&\quad =\frac{g_{\alpha ,\sigma } \left( 0,\frac{y_{(2,d)}}{c_{n-k}}\right) +\Delta _{1}(n-k,y)}{c_{n-k}^d} \int _{0<z_1<y_1, |z^{(2,d)}|\le \varepsilon _n^{1/3} c_n}\textrm{d}B_{k}(z), \end{aligned}$$

where \(\Delta _{1}(n-k,y)\rightarrow 0\) uniformly in y with \(y_1\in (0,\delta _{n}c_{n})\) and \(k\in [1, \varepsilon _n n]\). Therefore,

$$\begin{aligned} R_{\varepsilon _n}^{(4)}(y)&= \left( g_{\alpha ,\sigma } \left( 0,\frac{y_{(2,d)}}{c_{n}}\right) +\Delta _{1}(n,y) \right) \left( \frac{1}{c_n^d} + \sum _{k=1}^{[\varepsilon _n n]} \frac{1}{c_{n-k}^d} \int _{0<z_1<y_1, |z^{(2,d)}|\le \varepsilon _n^{1/3} c_n}\textrm{d}B_{k}(z) \right) \nonumber \\&= \frac{g_{\alpha ,\sigma } \left( 0,\frac{y_{(2,d)}}{c_{n}}\right) +\Delta _{1}(n,y) }{c_n^d} \left( 1 + \sum _{k=1}^{[\varepsilon _n n]} \int _{0<z_1<y_1, |z^{(2,d)}|\le \varepsilon _n^{1/3} c_n}\textrm{d}B_{k}(z) \right) \end{aligned}$$
(62)

where \(\Delta _{1}(n,y)\rightarrow 0\) uniformly in y with \(y_1\in (0,\delta _{n}c_{n}) \) and \(k\in [1, \varepsilon _n n]\). Now, we represent

$$\begin{aligned}&1+\sum _{k=1}^{\varepsilon _n n} \int _{0<z_1<y_1, |z^{(2,d)}|\le \varepsilon _n^{1/3} c_n}\textrm{d}B_{k}(z) =H(y_1)-\sum _{k=\varepsilon _n n}^\infty {\textbf{P}}(S_{1}(k)<y_1,\tau>k) \nonumber \\&\quad - \sum _{k=1}^{\varepsilon _n n} \int _{0<z_1<y_1, |z^{(2,d)}|>\varepsilon _n^{1/3} c_n}\textrm{d}B_{k}(z). \end{aligned}$$
(63)

For any fixed \(\varepsilon >0\), by Corollary 22 of [21],

$$\begin{aligned} \sum _{k=\varepsilon n}^\infty {\textbf{P}}(S_{1}(k)<y_1,\tau>k)&\le Cy_1H(y_1) \sum _{k>\varepsilon n}\frac{1}{kc_k}\\&\le C \frac{y_1H(y_1)}{c_{\varepsilon n}} =o(H(y_1)) \end{aligned}$$

uniformly in y with \(y_1\le \delta _nc_n\). Hence, this bound also holds for some sequence \(\varepsilon _n\downarrow 0\). Increasing the original sequence \(\varepsilon _n\) if needed, we obtain

$$\begin{aligned} \sum _{k=\varepsilon _n n}^\infty {\textbf{P}}(S_{1}(k)<y_1,\tau >k) =o(H(y_1)) \end{aligned}$$
(64)

uniformly in y with \(y_1\le \delta _nc_n\).
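The tail bound used in the last display, \(\sum _{k>m} \frac{1}{kc_k}\le \frac{C}{c_m}\), is a standard Karamata-type estimate; here is a sketch, assuming only that \(c_k=k^{1/\alpha }\ell (k)\) with \(\ell \) slowly varying (which is the standing regular-variation assumption on \(c_n\)):

$$\begin{aligned} \sum _{k>m}\frac{1}{kc_k} =\sum _{k>m} k^{-1-1/\alpha }\ell (k)^{-1} \sim \alpha \, m^{-1/\alpha }\ell (m)^{-1} =\frac{\alpha }{c_m}, \qquad m\rightarrow \infty , \end{aligned}$$

by Karamata's theorem applied to the tail of the regularly varying sequence \(k^{-1-1/\alpha }\ell (k)^{-1}\). The same bound is applied once more below with \(m=c^\leftarrow (Ay_1)\).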

Fix a large positive number A and let

$$\begin{aligned} c^\leftarrow (y):=\inf \{\, k\ge 1: c_k>y \,\}. \end{aligned}$$

Note that \(c_{c^\leftarrow (y)}>y\) and \(c_{c^\leftarrow (y)-1}\le y\). Recalling the definitions of the sequence \(c_n\) and of the function \(\mu \) in (5) and (6) and using the fact that \(\mu \) is regularly varying, we infer that \(c^\leftarrow (y)\sim 1/\mu (y)\) as \(y\rightarrow \infty \). Then,

$$\begin{aligned} \sum _{k=1}^{\varepsilon _n n} \int _{0<z_1<y_1, |z^{(2,d)} |>\varepsilon _n^{1/3} c_n}\textrm{d}B_{k}(z) \le R_{\varepsilon _n}^{(4,1)}(y)+ R_{\varepsilon _n}^{(4,2)}(y), \end{aligned}$$

where

$$\begin{aligned} R_{\varepsilon _n}^{(4,1)}(y) =\sum _{k=1}^{c^\leftarrow (Ay_1)-1} {\textbf{P}}(S_{1}(k)<y_1, |S_{2,d}(k) |>\varepsilon _n^{1/3} c_n, \tau >k) \end{aligned}$$

and

$$\begin{aligned} R_{\varepsilon _n}^{(4,2)}(y) =\sum _{k=c^\leftarrow (Ay_1)}^{\varepsilon _n n} {\textbf{P}}(S_{1}(k)<y_1, |S_{2,d}(k) |>\varepsilon _n^{1/3} c_n, \tau >k). \end{aligned}$$

We can estimate \(R_{\varepsilon _n}^{(4,2)}(y)\) similarly to the above:

$$\begin{aligned} R_{\varepsilon _n}^{(4,2)}(y)&\le \sum _{k=c^\leftarrow (Ay_1)}^\infty {\textbf{P}}(S_1(k)<y_1,\tau >k) \\&\le C(y_1+1)H(y_1) \sum _{k=c^\leftarrow (Ay_1)}^\infty \frac{1}{kc_k} \le C \frac{(y_1+1)H(y_1)}{A(y_1+1)}=\frac{C}{A}H(y_1). \end{aligned}$$

Clearly, taking A sufficiently large we can make this term much smaller than \(H(y_1)\). By Theorem 1,

$$\begin{aligned} {\textbf{P}}(S_1(k)<y_1, |S_{2,d}(k) |>\varepsilon _n^{1/3} c_n\mid \tau>k) \le {\textbf{P}}(|S_{2,d}(k) |>\varepsilon _n^{1/3} c_n\mid \tau >k)\rightarrow 0, \end{aligned}$$

uniformly in \(k\le \varepsilon _n n\). Combining this with (23), we conclude that

$$\begin{aligned} R_{\varepsilon _n}^{(4,1)}(y)&=o\left( \sum _{k=1}^{c^\leftarrow (Ay_1)-1} {\textbf{P}}(\tau >k)\right) \\&= o \left( \sum _{k=1}^{c^\leftarrow (Ay_1)-1} \frac{H(c_k)}{k}\right) =o\left( H(y_1)\right) \end{aligned}$$

uniformly in y with \(y_1\le \delta _n c_n\). Hence,

$$\begin{aligned} \sum _{k=1}^{\varepsilon _n n} \int _{0<z_1<y_1, |z^{(2,d)} |>\varepsilon _n^{1/3} c_n}\textrm{d}B_{k}(z) =o(H(y_1)) \end{aligned}$$
(65)

uniformly in y with \(y_1\le \delta _nc_n\). Combining (62), (63), (64) and (65), we obtain that

$$\begin{aligned} R_{\varepsilon _n}^{(4)}(y) = \frac{g_{\alpha ,\sigma } \left( 0,\frac{y_{(2,d)}}{c_{n}}\right) +\Delta _{2}(n,y)}{c_n^d} H(y_1), \end{aligned}$$
(66)

where \(\Delta _{2}(n,y)\rightarrow 0\) uniformly in y such that \(y_1\in (0,\delta _{n}c_{n})\). Using (65) and the Stone local limit theorem, one can easily conclude that

$$\begin{aligned} R_{\varepsilon _n }^{(6)}(y) \le \frac{C}{c_n^d} \sum _{k=1}^{\varepsilon _n n} \int _{0<z_1<y_1, |z^{(2,d)} |>\varepsilon _n^{1/3} c_n}\textrm{d}B_{k}(z) =o\left( \frac{H(y_1)}{c_n^d}\right) \end{aligned}$$
(67)

uniformly in y such that \(y_1\in (0,\delta _{n}c_{n})\).

The analysis of \(R^{(5)}_{\varepsilon _n}(y)\) is very similar to that of \(R^{(4)}_{\varepsilon _n}(y)\). First, we make use of Stone's local limit theorem:

$$\begin{aligned} R_{\varepsilon _n}^{(5)}(y) = \frac{g_{\alpha ,\sigma } \left( 0,\frac{y_{(2,d)}}{c_{n}}\right) +\Delta _{3}(n,y)}{c_n^d} \sum _{k=1}^{\varepsilon _n n} \int _{z_1\in (y_1,y_1+1), |z^{(2,d)} |\le \varepsilon _n^{1/3} c_n} (z_1-y_1) \textrm{d}B_{k}(z), \end{aligned}$$

where \(\Delta _{3}(n,y)\rightarrow 0\) uniformly in y with \(y_1\in (0,\delta _{n}c_{n})\). Then, using the same arguments as above, we obtain

$$\begin{aligned} R_{\varepsilon _n}^{(5)}(y) = \frac{g_{\alpha ,\sigma } \left( 0,\frac{y_{(2,d)}}{c_{n}}\right) +\Delta _{4}(n,y)}{c_n^d} \sum _{k=1}^{\varepsilon _n n} \int _{z_1\in (y_1,y_1+1)} (1+z_1-y_1) \textrm{d}B_{k}(z), \end{aligned}$$

where \(\Delta _{4}(n,y)\rightarrow 0\) uniformly in y with \(y_1\in (0,\delta _{n}c_{n})\). Integrating by parts, we can complete the proof now. \(\square \)

7 Probabilities of Small Deviations When Random Walks Start at an Arbitrary Starting Point

Proof of Theorem 3 in the lattice case

For the lattice distribution, we have from (28)

$$\begin{aligned} p_n(x,y) = {\textbf{P}}(x+S(n)=y,\tau _x>n) =\sum _{k=0}^n \sum _{z_1=1}^{x_1\wedge y_1} \sum _{z_2,\ldots ,z_d} p_k^+(0,z-x)b_{n-k}(y-z). \end{aligned}$$

Let \(N_n\) be the sequence of integers constructed in the proof of Theorem 1. We shall use the representation

$$\begin{aligned} p_n(x,y) = P_1(x,y)+P_2(x,y)+P_3(x,y), \end{aligned}$$

where

$$\begin{aligned} P_1(x,y)&:= \sum _{k=0}^{N_n} \sum _{z_1=1}^{x_1\wedge y_1} \sum _{z_2,\ldots ,z_d} p_k^+(0,z-x)b_{n-k}(y-z),\\ P_2(x,y)&:= \sum _{k=N_n+1}^{n-N_n-1} \sum _{z_1=1}^{x_1\wedge y_1} \sum _{z_2,\ldots ,z_d} p_k^+(0,z-x)b_{n-k}(y-z),\\ P_3(x,y)&:= \sum _{k=n-N_n}^n \sum _{z_1=1}^{x_1\wedge y_1} \sum _{z_2,\ldots ,z_d} p_k^+(0,z-x)b_{n-k}(y-z). \end{aligned}$$

To estimate \(P_2(x,y)\), we shall proceed as in the analysis of normal deviations. Note that by Lemma 7,

$$\begin{aligned} b_{n-k}(y-z) \le C \frac{H(y_1-z_1)}{(n-k)c_{n-k}^d}. \end{aligned}$$

Then,

$$\begin{aligned} P_2(x,y)&\le C\frac{H(y_1)}{N_nc_{N_n}^d} \sum _{k=N_n+1}^\infty \sum _{z_1=1}^{x_1\wedge y_1} \sum _{z_2,\ldots ,z_d}p_k^+(0,z-x)\\&\le C\frac{H(y_1)}{N_nc_{N_n}^d} \sum _{k=N_n+1}^\infty {\textbf{P}}(S_1(k)\ge -x_1;\tau ^+>k). \end{aligned}$$

Using now (43) and increasing \(N_n\) if needed, we conclude that

$$\begin{aligned} P_2(x,y) = o\left( \frac{H(y_1) V(x_1)}{N_nc_{N_n}^d}\right) = o\left( \frac{H(y_1) V(x_1)}{nc_n^d}\right) . \end{aligned}$$
(68)

Now, we will consider the first term \(P_1(x,y)\). Let \(\varepsilon _n\downarrow 0 \) be the sequence defined in the proof of Theorem 1. We will need the following sets:

$$\begin{aligned} A_1(x,y)&=\left\{ z: |x-z |\le \varepsilon _n c_n, z_1 \in (0,x_1\wedge y_1 ] \right\} \\ C_1(x,y)&=\left\{ z: |x-z |> \varepsilon _n c_n, z_1 \in (0,x_1\wedge y_1] \right\} \end{aligned}$$

Applying now the asymptotics for small deviations of walks starting at zero, we get

$$\begin{aligned}&\sum _{k=0}^{N_n} \sum _{z\in A_1(x,y)} p_k^+(0,z-x)b_{n-k}(y-z)\\&\quad \sim \frac{g_{\alpha ,\sigma } \left( 0,\frac{y_{2,d}-x_{2,d}}{c_n}\right) }{nc_n^d} \sum _{k=0}^{N_n} \sum _{z\in A_1(x,y)} p_k^+(0,z-x)H(y_1-z_1). \end{aligned}$$

Next, we note that

$$\begin{aligned}&\sum _{k=0}^{N_n} \sum _{z\in C_1(x,y)} p_k^+(0,z-x)b_{n-k}(y-z)\\&\quad \le C\frac{H(y_1)}{nc_n^d} \sum _{k=0}^{N_n} {\textbf{P}}(|S(k) |\ge \varepsilon _n c_n,S_1(k)>-x_1,\tau ^+>k) \end{aligned}$$

and

$$\begin{aligned}&\sum _{k=0}^{N_n} \sum _{z\in C_1(x,y)} p_k^+(0,z-x)H(y_1-z_1)\\&\quad \le H(y_1) \sum _{k=0}^{N_n} {\textbf{P}}(|S(k) |\ge \varepsilon _n c_n,S_1(k)>-x_1,\tau ^+>k). \end{aligned}$$

Taking into account (48), we obtain

$$\begin{aligned} P_1(x,y) \sim \frac{g_{\alpha ,\sigma } \left( 0,\frac{y_{2,d}-x_{2,d}}{c_n}\right) }{nc_n^d} \sum _{z_1=0}^{x_1\wedge y_1} (V(x_1-z_1)-V(x_1-z_1-1))H(y_1-z_1),\nonumber \\ \end{aligned}$$
(69)

where we also replaced \(\sum _{0}^{N_n}\) by \(\sum _{0}^{\infty }\) using the arguments in the proof of Proposition 11 in [10]. Analogous arguments give us

$$\begin{aligned} P_3(x,y) \sim \frac{g_{\alpha ,\sigma } \left( 0,\frac{y_{2,d}-x_{2,d}}{c_n}\right) }{nc_n^d} \sum _{z_1=0}^{x_1\wedge y_1} V(x_1-z_1)(H(y_1-z_1)-H(y_1-z_1-1)). \end{aligned}$$

Then, the arguments at the end of the proof of Proposition 11 in [10] give

$$\begin{aligned} P_1(x,y) +P_3(x,y) \sim V(x_1)H(y_1)\frac{g_{\alpha ,\sigma }\left( 0,\frac{y_{2,d}-x_{2,d}}{c_n}\right) }{nc_n^d}. \end{aligned}$$

\(\square \)

Proof of Theorem 3 in the non-lattice case

The proof is very similar to the proof in the lattice case.

For the non-lattice distribution, we will make use of (29). We split the sum as follows:

$$\begin{aligned} p_n(x,y) = P_1(x,y)+P_2(x,y)+P_3(x,y), \end{aligned}$$

where

$$\begin{aligned} P_1(x,y)&:= \sum _{k=0}^{N_n} \int _{(0,x_1\wedge (y_1+1) ] \times {\mathbb {R}}^{d-1}} p_k^+(0,\textrm{d}z-x)b_{n-k}(y-z),\\ P_2(x,y)&:= \sum _{k=N_n+1}^{n-N_n-1} \int _{(0,x_1\wedge (y_1+1) ] \times {\mathbb {R}}^{d-1}} p_k^+(0,\textrm{d}z-x)b_{n-k}(y-z),\\ P_3(x,y)&:= \sum _{k=n-N_n}^n \int _{(0,x_1\wedge (y_1+1) ] \times {\mathbb {R}}^{d-1}} p_k^+(0,\textrm{d}z-x)b_{n-k}(y-z). \end{aligned}$$

There is virtually no difference in the estimates for \(P_2(x,y)\). So, repeating the same arguments, we obtain

$$\begin{aligned} P_2(x,y) = o\left( \frac{H(y_1) V(x_1)}{nc_n^d}\right) . \end{aligned}$$
(70)

Now, we will consider the first term \(P_1(x,y)\). We will need the following sets:

$$\begin{aligned} A_1(x,y)&=\left\{ z: |x-z |\le \varepsilon _n c_n, z_1 \in (0,x_1\wedge y_1 ] \right\} \\ C_1(x,y)&=\left\{ z: |x-z |> \varepsilon _n c_n, z_1 \in (0,x_1\wedge y_1] \right\} \end{aligned}$$

Now, we have,

$$\begin{aligned}&\sum _{k=0}^{N_n} \int _{A_1(x,y)} p_k^+(0,\textrm{d}z-x)b_{n-k}(y-z)\\&\quad \sim \frac{g_{\alpha ,\sigma } \left( 0,\frac{y_{2,d}-x_{2,d}}{c_n}\right) }{nc_n^d} \sum _{k=0}^{N_n} \int _{A_1(x,y)} p_k^+(0,\textrm{d}z-x) \int _{y_1-z_1}^{y_1-z_1+1} H(u)\textrm{d}u. \end{aligned}$$

Now, note that by (48)

$$\begin{aligned} \sum _{k=0}^{\varepsilon _n n} \int _{C_1(x,y)} p_k^+(0,\textrm{d}z-x)H(y_1-z_1)&\le H(y_1) \sum _{k=0}^{\varepsilon _n n} {\textbf{P}}(|S(k) |\ge \varepsilon _n c_n,S_1(k)>-x_1,\tau ^+>k)\\&=o(H(y_1)V(x_1)), \end{aligned}$$

provided that \(\gamma _n\) and \(\varepsilon _n\) converge to 0 sufficiently slowly. As a result,

$$\begin{aligned} P_1(x,y) \sim \frac{g_{\alpha ,\sigma } \left( 0,\frac{y_{2,d}-x_{2,d}}{c_n}\right) }{nc_n^d} \int _{(0,x_1\wedge (y_1+1)]} V(x_1-\textrm{d}z_1)\int _{y_1-z_1}^{y_1-z_1+1} H(u)\textrm{d}u,\qquad \end{aligned}$$
(71)

where we also replaced \(\sum _{0}^{\varepsilon _n n}\) by \(\sum _{0}^{\infty }\) using similar arguments. Analogous arguments give us

$$\begin{aligned} P_3(x,y) \sim \frac{g_{\alpha ,\sigma } \left( 0,\frac{y_{2,d}-x_{2,d}}{c_n}\right) }{nc_n^d} \int _{(0,x_1\wedge (y_1+1)]} V(x_1-z_1)H(y_1-z_1+\Delta )\textrm{d}z_1.\qquad \end{aligned}$$
(72)

Now, note that integration by parts of the first integral gives

$$\begin{aligned}&\int _{(0,x_1\wedge (y_1+1)]} V(x_1-\textrm{d}z_1)\int _{y_1-z_1}^{y_1-z_1+1} H(u)\textrm{d}u\\&\quad + \int _{(0,x_1\wedge (y_1+1)]} V(x_1-z_1)H(y_1-z_1+\Delta )\textrm{d}z_1\\&\quad =V(x_1)\int _{y_1}^{y_1+1}H(u) \textrm{d}u. \end{aligned}$$
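To spell out the integration by parts, write \(b:=x_1\wedge (y_1+1)\) and \(F(z_1):=\int _{y_1-z_1}^{y_1-z_1+1}H(u)\textrm{d}u\), and read \(H(y_1-z_1+\Delta )\) as the increment \(H(y_1-z_1+1)-H(y_1-z_1)=-F'(z_1)\). Then, under the conventions of this sketch that \(V(0)=0\) and \(H(u)=0\) for \(u\le 0\),

$$\begin{aligned}&\int _{(0,b]} F(z_1)V(x_1-\textrm{d}z_1) +\int _{(0,b]} V(x_1-z_1)\left( -F'(z_1)\right) \textrm{d}z_1\\&\quad =-\int _{(0,b]} \textrm{d}\left[ V(x_1-z_1)F(z_1)\right] =V(x_1)F(0)-V(x_1-b)F(b) =V(x_1)\int _{y_1}^{y_1+1}H(u) \textrm{d}u, \end{aligned}$$

since the boundary term at \(z_1=b\) vanishes: either \(b=x_1\) and \(V(0)=0\), or \(b=y_1+1\) and \(F(y_1+1)=\int _{-1}^{0}H(u)\textrm{d}u=0\).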

As a result,

$$\begin{aligned} P_1(x,y) +P_3(x,y) \sim V(x_1)\int _{y_1}^{y_1+1}H(u) \textrm{d}u \frac{g_{\alpha ,\sigma }\left( 0,\frac{y_{2,d}-x_{2,d}}{c_n}\right) }{nc_n^d}. \end{aligned}$$

\(\square \)

8 Probabilities of Large Deviations When Random Walk Starts at the Origin

We will need the following large deviations estimates.

Proposition 16

Let \(X\sim {\mathcal {D}}(d,\alpha ,\sigma )\) with some \(\alpha <2\).

Then, there exists a constant \(C_H\) such that for \(|x |\ge c_n\) we have

$$\begin{aligned} {\textbf{P}}(S(n)\in x+\Delta ) \le C_H \frac{1}{c_n^d} n\phi (|x |). \end{aligned}$$
(73)

If, in addition, (14) holds, then

$$\begin{aligned} {\textbf{P}}(S(n)\in x+\Delta ) \le C_H n\frac{\phi (|x |)}{|x |^{d}}. \end{aligned}$$
(74)

This result is proven in [1, Theorem 2.6] in the lattice case. We omit the proof in the non-lattice case, as it is very similar to that of [1, Theorem 2.6].

Using Definition (5) of \(c_n\), we obtain from Corollary 11 the following upper bound. (Recall that \(g(r)=\frac{\phi (|r |)}{r^d}\).)

Lemma 17

For any \(A>1\), there exists \(c_A\) such that

$$\begin{aligned} b_n(x) \le c_A H(x_1) g(|x |), \end{aligned}$$
(75)

for x with \(c_n\le |x |\le Ac_n\).

The main goal of this section is to obtain an upper bound for \(b_n(x)\) that remains valid also in the case \(|x |>Ac_n\).

Lemma 18

Suppose that X is asymptotically stable with \(\alpha \in (0,2)\). If (14) holds, then there exists \(\gamma >0\) such that for all y with \(|y |>c_n\) we have

$$\begin{aligned} b_n(y) \le \gamma H(y_1+1) g(|y |). \end{aligned}$$
(76)

Proof

We will first introduce some constants and sequences that will be used throughout the proof. Set

$$\begin{aligned} \rho _n = \frac{1}{n}\sum _{k=1}^n {\textbf{P}}(S_1(k)>0). \end{aligned}$$

Fix \(\delta \in (0,1)\) such that

$$\begin{aligned} \textrm{e}^{\delta }\sup _{n\ge 1}\rho _n+\delta \textrm{e}^{\delta }<1 \end{aligned}$$
(77)

and let \({\widetilde{A}}\) be such that

$$\begin{aligned}&\int _{|z |> {\widetilde{A}} c_k, |y-z |>|y |/2} g(|y-z |) g(|z |) \textrm{d}z \nonumber \\&\quad +\int _{|y-z |> {\widetilde{A}} c_k, |z |>|y |/2} g(|y-z |) g(|z |) \textrm{d}z \le \frac{\delta }{C_H k}g(|y |) \end{aligned}$$
(78)

for y with \(|y |>1\) and \(k\ge 1\). Let A be such that

$$\begin{aligned} \sup _{n\ge 1}\sup _{y,z: |y |>Ac_n, |z |\le {\widetilde{A}} c_n+1}\frac{g(|y-z |)}{g(|y |)} \le \textrm{e}^\delta . \end{aligned}$$
(79)

By Lemma 17, there exists \(c_A>1\) such that (76) holds for y with \(c_n\le |y |\le Ac_n\).

The proof will be done by induction. We will inductively construct an increasing sequence \(\gamma _n\) such that

$$\begin{aligned} b_n(y) \le \gamma _n H(y_1+1) g(|y |) \end{aligned}$$
(80)

for y with \(|y |>c_n\) and \(n\ge 1\). Then, we will show that \(\sup _n \gamma _n <\infty \). We put \(\gamma _1=c_A\), and then the base of induction \(n=1\) is immediate. Since \(\gamma _n\) will be increasing, it follows from Lemma 17 that (80) holds for y such that \(c_n<|y |\le A c_n\). Hence, we will consider only y with \(|y |>Ac_n\).

Assume now that we have already constructed the elements \(\gamma _k\) for \(k\le n-1\). We shall construct the next value \(\gamma _n\). It follows from (26) that

$$\begin{aligned} nb_n(y)&={\textbf{P}}(S(n)\in y+\Delta )\nonumber \\&\quad +\sum _{k=1}^{n-1} \int _{{\mathbb {R}}^d}{\textbf{P}}(S(k)\in y-z+\Delta , S_1(k)>0) \textrm{d}B_{n-k}(z) \nonumber \\&=: R^{(1)}(y)+R^{(2)}(y)+R^{(3)}(y), \end{aligned}$$
(81)

where

$$\begin{aligned} R^{(1)}(y)&:={\textbf{P}}(S(n)\in y+\Delta ) +\sum _{k=1}^{n-1} \int _{|z |\le |y |/2}{\textbf{P}}(S(k)\in y-z+\Delta , S_1(k)>0) \textrm{d}B_{n-k}(z),\\ R^{(2)}(y)&:= \sum _{k=1}^{n-1} \int _{|z |>|y |/2,|y-z |\le {\widetilde{A}} c_k}{\textbf{P}}(S(k)\in y-z+\Delta , S_1(k)>0) \textrm{d}B_{n-k}(z),\\ R^{(3)}(y)&:=\sum _{k=1}^{n-1} \int _{|z |> |y |/2, |y-z |>{\widetilde{A}} c_k}{\textbf{P}}(S(k)\in y-z+\Delta , S_1(k)>0) \textrm{d}B_{n-k}(z). \end{aligned}$$

Using first the local large deviations bound (73) and then the regular variation of g, we get

$$\begin{aligned} R^{(1)}(y)&\le C_H ng(|y |) + C_H\sum _{k=1}^{n-1} \int _{|z |\le |y |/2, 0\le z_1\le y_1+1} k g(|y-z |) \textrm{d}B_{n-k}(z)\nonumber \\&\le C_H ng(|y |) + C_g C_H g(|y |)\sum _{k=1}^{n-1} k \int _{|z |\le |y |/2, 0\le z_1\le y_1+1} \textrm{d}B_{n-k}(z)\nonumber \\&\le (C_g+1)C_H ng(|y |)H(y_1+1). \end{aligned}$$
(82)

Second, integrating by parts and using then Definition (79) of A and the induction assumption, we obtain

$$\begin{aligned} \nonumber R^{(2)}(y)&\le \sum _{k=1}^{n-1} \int _{|z |>|y |/2,|y-z |\le {\widetilde{A}} c_k, 0\le z_1\le y_1+1} {\textbf{P}}(S(k)\in y-z+\Delta , S_1(k)>0) \textrm{d}B_{n-k}(z) \nonumber \\&\le \sum _{k=1}^{n-1} \int _{|y-z |>|y |/2-1,|z |\le {\widetilde{A}} c_k+1, 0\le z_1\le y_1+1} b_{n-k}(y-z){\textbf{P}}(S(k)\in \textrm{d}z, S_1(k)>0) \nonumber \\&\le \sum _{k=1}^{n-1} \int _{|y-z |>|y |/2-1,|z |\le {\widetilde{A}} c_k+1, 0\le z_1\le y_1+1}\nonumber \\&\qquad \gamma _{n-1} g(|y-z |) H(y_1-z_1+1){\textbf{P}}(S(k)\in \textrm{d}z, S_1(k)>0) \nonumber \\&\le \textrm{e}^\delta \gamma _{n-1} g(|y |)H(y_1+1) \sum _{k=1}^{n-1} {\textbf{P}}(S_1(k)>0)\nonumber \\&= \textrm{e}^{\delta } (n-1) \gamma _{n-1} \rho _{n-1} g(|y |)H(y_1+1). \end{aligned}$$
(83)

Third, using the induction assumption and (74),

$$\begin{aligned} R^{(3)}(y)&\le C_H\gamma _{n-1}\sum _{k=1}^{n-1} k \int _{|z |> |y |/2, |y-z |>{\widetilde{A}} c_k,0\le z_1\le y_1+1} g(|y-z |) g(|z |) H(z_1)\textrm{d}z \\&\le C_H\gamma _{n-1}H(y_1+1) \sum _{k=1}^{n-1} k \int _{|z |> |y |/2, |y-z |>{\widetilde{A}} c_k, 0\le z_1\le y_1+1} g(|y-z |) g(|z |) \textrm{d}z. \end{aligned}$$

We can estimate the integral as follows:

$$\begin{aligned} \int _{|y-z |> {\widetilde{A}} c_k, |z |>|y |/2} g(|y-z |) g(|z |) \textrm{d}z \le \frac{\delta }{C_H k}g(|y |), \end{aligned}$$

using Definition (78) of \({\widetilde{A}}\). Hence,

$$\begin{aligned} R^{(3)}(y)&\le C_H\gamma _{n-1}H(y_1+1) \frac{\delta }{C_H }g(|y |)(n-1)\nonumber \\&\le \delta \gamma _{n-1}\textrm{e}^{\delta }H(y_1+1) g(|y |)(n-1). \end{aligned}$$
(84)

Combining (82), (83) and (84), we obtain that

$$\begin{aligned} b_n(y)\le ((C_g+1)C_H+ \gamma _{n-1}(\textrm{e}^{\delta }\rho _{n-1} +\delta \textrm{e}^{\delta }) )g(|y |) H(y_1+1). \end{aligned}$$

Then, for

$$\begin{aligned} \gamma _n:=\max (\gamma _{n-1},(C_g+1)C_H+ \gamma _{n-1}(\textrm{e}^{\delta }\rho _{n-1} +\delta \textrm{e}^{\delta }) ) \end{aligned}$$

inequality (80) holds. Then, using Definition (77) of \(\delta \) it is not difficult to show that

$$\begin{aligned} \limsup _{n\rightarrow \infty } \gamma _n<\infty . \end{aligned}$$
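Indeed, set \(a:=(C_g+1)C_H\) and \(q:=\textrm{e}^{\delta }\sup _{n\ge 1}\rho _n+\delta \textrm{e}^{\delta }\), so that \(q<1\) by (77). The recursion for \(\gamma _n\) then yields

$$\begin{aligned} \gamma _n\le \max \left( \gamma _{n-1},\, a+q\gamma _{n-1}\right) \le M:=\max \left( \gamma _1,\frac{a}{1-q}\right) \end{aligned}$$

by induction: if \(\gamma _{n-1}\le M\), then \(a+q\gamma _{n-1}\le a+qM\le (1-q)M+qM=M\), since \(a\le (1-q)M\).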

Hence, the statement of the lemma holds with

$$\begin{aligned} \gamma :=\sup _n \gamma _n<\infty . \end{aligned}$$

\(\square \)

9 Probabilities of Large Deviations When Random Walks Start at an Arbitrary Starting Point

Proof of Theorem 4

Let

$$\begin{aligned} A(x,y)&=\left\{ z: |y-z |\ge \frac{1}{2}|y-x |, z_1 \in (0,x_1\wedge (y_1+1) ] \right\} \\ C(x,y)&=\left\{ z: |x-z |\ge \frac{1}{2} |y-x |, z_1 \in (0,x_1\wedge (y_1+1) ] \right\} . \end{aligned}$$

If \(|y-z |\ge \frac{1}{2}|y-x |\), then, by Lemma 18,

$$\begin{aligned} b_{n-k}(y-z)\le \gamma _0 H(y_1-z_1+1) g(|y-z |) \le \gamma _0 H(y_1-z_1+1) g\left( \frac{|x-y|}{2}\right) . \end{aligned}$$

This implies that

$$\begin{aligned}&\sum _{k=0}^n \int _{A(x,y)} p_k^+(0,\textrm{d}z-x)b_{n-k}(y-z)\\&\quad \le \gamma _0 H(y_1+1) g\left( \frac{|x-y|}{2}\right) \sum _{k=0}^n {\textbf{P}}(x+S(k) \in A(x,y),\tau ^+>k)\\&\quad \le \gamma _0 H(y_1+1) g\left( \frac{|x-y|}{2}\right) \sum _{k=0}^n {\textbf{P}}(x_1+S_1(k) \in (0,x_1\wedge (y_1+1) ],\tau ^+>k)\\&\quad \le \gamma _0 H(y_1+1) g\left( \frac{|x-y|}{2}\right) V(x_1). \end{aligned}$$

By the same argument,

$$\begin{aligned}&\sum _{k=0}^n \int _{C(x,y)} p_k^+(0,\textrm{d}z-x)b_{n-k}(y-z)\\&\quad \le \sum _{k=0}^n \sum _{v\in {\mathbb {Z}}^d: (v+\Delta )\cap C(x,y)\ne \emptyset } {\textbf{P}}(S(k)\in v-x+\Delta ,\tau ^+>k) \max _{z\in v+\Delta }b_{n-k}(y-z)\\&\quad \le \sum _{k=0}^n \sum _{v\in {\mathbb {Z}}^d: (v+\Delta )\cap C(x,y)\ne \emptyset } {\textbf{P}}(S(k)\in v-x+\Delta ,\tau ^+>k) \sum _{e_j\in \{0,1\}^d} b_{n-k}(y-v-e_j)\\&\quad \le c 2^d V(x_1+1)g\left( \frac{|x-y|}{2}-1\right) H(y_1+1). \end{aligned}$$

These estimates give the desired bound. \(\square \)

10 Asymptotics for the Green Function Near the Boundary

Proof of Theorem 5

We consider the lattice case only. Fix \(A>0\). Then,

$$\begin{aligned} G(x,y) = G_1(x,y)+G_2(x,y):= \sum _{n:Ac_n< |x-y|} p_n(x,y) +\sum _{n:Ac_n\ge |x-y|} p_n(x,y). \end{aligned}$$

Using Theorem 4, we obtain

$$\begin{aligned} G_1(x,y)&\le C H(y_1)V(x_1) \sum _{n:Ac_n< |x-y|} g(|x-y|) \\&\le C H(y_1)V(x_1) \frac{g(|x-y|)}{\mu (|x-y|/A)} \\&\le \frac{C}{A^{\alpha }} \frac{H(y_1)V(x_1)}{|x-y|^d}. \end{aligned}$$

For the second term, we make use of Theorem 3:

$$\begin{aligned} G_2(x,y) \sim H(y_1)V(x_1) \sum _{n:Ac_n\ge |x-y|} \frac{g_{\alpha ,\sigma }(0, \frac{y_{2,d}-x_{2,d}}{c_n}) }{nc_n^d}. \end{aligned}$$

We will now analyse the series. Using the regular variation of \(c_n\), we can write it as

$$\begin{aligned} \sum _{n:Ac_n\ge |x-y|} \frac{g_{\alpha ,\sigma }(0, \frac{y_{2,d}-x_{2,d}}{c_n}) }{nc_n^d}&\sim \sum _{n:n \ge A^{-\alpha }/\mu (|x-y|)} \frac{g_{\alpha ,\sigma }(0, \frac{y_{2,d}-x_{2,d}}{c_n}) }{nc_n^d}\\&\sim \int _{A^{-\alpha }/\mu (|x-y|)}^\infty \frac{g_{\alpha ,\sigma }(0, \frac{y_{2,d}-x_{2,d}}{c_{[t]}}) }{tc_{[t]}^d}\textrm{d}t \\&\sim \int _0^{\mu (|x-y|)A^{\alpha }} \frac{g_{\alpha ,\sigma }(0, \frac{y_{2,d}-x_{2,d}}{c_{[1/z]}}) }{zc_{[1/z]}^d}\textrm{d}z\\&\sim \frac{1}{|x-y|^d} \int _0^{A^{\alpha }} g_{\alpha ,\sigma }\left( 0, \frac{y_{2,d} -x_{2,d}}{|x-y|}z^{1/\alpha }\right) z^{d/\alpha -1}\textrm{d}z, \end{aligned}$$

as \(|x-y|\rightarrow \infty \).

Here, we used the fact that since \(c_n\) is regularly varying of index \(1/\alpha \), for a fixed \(z>0\),

$$\begin{aligned} c([1/(\mu (|x-y|)z)]) \sim z^{-1/\alpha } c([1/(\mu (|x-y|))]) \sim z^{-1/\alpha } |x-y|, \end{aligned}$$

as \(|x-y|\rightarrow \infty \). Thus, in the general case,

$$\begin{aligned} G_2(x,y) \sim C \frac{H(y_1)V(x_1)}{|x-y|^d} \int _0^{A^{\alpha }} g_{\alpha ,\sigma }\left( 0, \frac{y_{2,d}-x_{2,d}}{|x-y|}z^{1/\alpha }\right) z^{d/\alpha -1}\textrm{d}z. \end{aligned}$$

Letting \(A\rightarrow \infty \) and substituting \(z^{1/\alpha }=t\), we arrive at the conclusion. Noting that in the isotropic case the norm of the ratio \(\frac{y_{2,d}-x_{2,d}}{\left|x-y \right|}\) tends to one, we obtain the result in this case as well.
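For the reader's convenience, the substitution \(z^{1/\alpha }=t\) works out as follows, writing \(w:=\frac{y_{2,d}-x_{2,d}}{|x-y |}\): since \(z=t^{\alpha }\), \(\textrm{d}z=\alpha t^{\alpha -1}\textrm{d}t\) and \(z^{d/\alpha -1}=t^{d-\alpha }\),

$$\begin{aligned} \int _0^{A^{\alpha }} g_{\alpha ,\sigma }\left( 0, w z^{1/\alpha }\right) z^{d/\alpha -1}\textrm{d}z =\alpha \int _0^{A} g_{\alpha ,\sigma }\left( 0, w t\right) t^{d-1}\textrm{d}t \rightarrow \alpha \int _0^{\infty } g_{\alpha ,\sigma }\left( 0, w t\right) t^{d-1}\textrm{d}t \end{aligned}$$

as \(A\rightarrow \infty \).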

To obtain the upper bound (16) in the analysis of the second term, we make use of Lemma 7 instead of Theorem 3. \(\square \)