1 Introduction

In 1921, with the article [11], Pólya pioneered research on the simple random walk on the integer lattice. Using Fourier analysis he proved that \(p(n; x)\), the n-th step transition function, satisfies

$$\begin{aligned} \lim _{n \rightarrow +\infty } (2 n)^{\frac{d}{2}} p(2 n; x)&= 2 d^{\frac{d}{2}} (2 \pi )^{-\frac{d}{2}}, \quad \text {if } {\left|x \right|}_1 \equiv 0 \pmod 2,\\ \lim _{n \rightarrow +\infty } (2n-1)^{\frac{d}{2}} p(2n-1; x)&= 2 d^{\frac{d}{2}} (2 \pi )^{-\frac{d}{2}}, \quad \text {if } {\left|x \right|}_1 \equiv 1 \pmod 2, \end{aligned}$$

for any \(x \in \mathbb {Z}^d\). Essentially, Pólya’s proof shows that (see Spitzer [13, Remark after P7.9])

$$\begin{aligned} p(n; x) = {\left\{ \begin{array}{ll} 2 d^{\frac{d}{2}} (2 \pi n)^{-\frac{d}{2}} e^{-\frac{d}{2 n} {\left|x \right|}_2^2} + o\big (n^{-\frac{d}{2}} \big ), &{} \text {if } {\left|x \right|}_1 \equiv n \pmod 2,\\ 0, &{} \text {otherwise,} \end{array}\right. } \end{aligned}$$

uniformly with respect to \(n \in \mathbb {N}\) and \(x \in \mathbb {Z}^d\). As can easily be seen, the local limit theorem is very inaccurate when \({\left|x \right|}_2\) is larger than \(\sqrt{n}\). Further development of the Fourier method made it possible to gain better control over the error term for large \({\left|x \right|}_2\) (see Smith [12], Spitzer [13, P7.10], Ney and Spitzer [10, Theorem 2.1]). Namely,

$$\begin{aligned} p(n; x) = {\left\{ \begin{array}{ll} 2 d^{\frac{d}{2}} (2 \pi n)^{-\frac{d}{2}} e^{-\frac{d}{2 n} {\left|x \right|}_2^2} + o\big (n^{-\frac{d}{2}+1} {\left|x \right|}_2^{-2} \big ), &{} \text {if } {\left|x \right|}_1 \equiv n \pmod 2,\\ 0, &{} \text {otherwise,} \end{array}\right. } \end{aligned}$$

uniformly with respect to \(x \in \mathbb {Z}^d {\setminus } \{0\}\). Let us observe that the error in the approximation of \(p(n; x)\) is additive and may become large compared to the first term. In many applications, it is desirable to have an asymptotic formula for \(p(n; x)\) valid on the largest possible region with respect to n and x. There are some results available in this direction. In particular, Lawler [8, Proposition 1.2.5] and Lawler and Limic [9, Theorem 2.3.11] showed that there is \(\rho > 0\) such that for all \(n \in \mathbb {N}\) and \(x \in \mathbb {Z}^d\), if \({\left|x \right|}_2 \le \rho n\) then

$$\begin{aligned} p(n; x) = {\left\{ \begin{array}{ll} d^{\frac{d}{2}} (2 \pi n)^{-\frac{d}{2}} e^{-\frac{d}{2n} {\left|x \right|}_2^2} \big (2 + \mathcal {O}(n^{-1}) + \mathcal {O}(n^{-3} {\left|x \right|}_2^4)\big ), &{} \text {if } {\left|x \right|}_1 \equiv n \pmod 2,\\ 0, &{} \text {otherwise.} \end{array}\right. } \end{aligned}$$

Let us emphasize that the above asymptotic formula is valid for random walks having an exponential moment. However, when applied to the simple random walk, it gives an asymptotic formula useful only in the region where \({\left|x \right|}_2 = o(n^{3/4})\). Therefore, a natural question arises:
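As a numerical illustration (not part of the original argument), the sketch below computes \(p(n; x)\) for the two-dimensional simple random walk by direct convolution and compares it with the leading Gaussian term of the local limit theorem; the agreement is good for \({\left|x \right|}_2\) of order \(\sqrt{n}\) and deteriorates as \({\left|x \right|}_2\) grows. All function names are ours, chosen for the example.

```python
import math
from collections import defaultdict

STEPS = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # support V of the 2D simple random walk
d = 2

def srw_distribution(n):
    """n-step transition function p(n; .), computed by repeated convolution."""
    dist = {(0, 0): 1.0}
    for _ in range(n):
        new = defaultdict(float)
        for (x, y), q in dist.items():
            for dx, dy in STEPS:
                new[(x + dx, y + dy)] += q / len(STEPS)
        dist = dict(new)
    return dist

def gaussian_term(n, x):
    # leading term 2 d^{d/2} (2 pi n)^{-d/2} exp(-d |x|^2 / (2n)), matching parity
    r2 = x[0] ** 2 + x[1] ** 2
    return 2 * d ** (d / 2) * (2 * math.pi * n) ** (-d / 2) * math.exp(-d * r2 / (2 * n))

n = 40
dist = srw_distribution(n)
for x in [(0, 0), (4, 2), (12, 8)]:  # all with |x|_1 even, matching n
    print(x, dist[x], gaussian_term(n, x), gaussian_term(n, x) / dist[x])
```

At the origin the ratio of the two values is within a few percent of 1 already for \(n = 40\), while at \((12, 8)\), where \({\left|x \right|}_2 \approx 2.3 \sqrt{n}\), the relative error is visibly larger.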

Is there an asymptotic formula for \(p(n; x)\) which is valid on a region larger than \({\left|x \right|}_2 = o(n^{3/4})\)?

The purpose of this article is to give a positive answer to the posed question. To be more precise, let us first introduce some notation. Let p be the transition function of an irreducible finite range random walk. By \(\mathcal {V}\) we denote its support, namely

$$\begin{aligned} \mathcal {V}= \{v \in \mathbb {Z}^d : p(v) > 0\}. \end{aligned}$$

Let \(\mathcal {M}\) be the interior of the convex hull of \(\mathcal {V}\). For \(\delta \in \mathcal {M}\) we set

$$\begin{aligned} \phi (\delta ) = \max \big \{{\langle x, \delta \rangle } - \log \kappa (x) : x \in \mathbb {R}^d \big \}, \end{aligned}$$
(1)

where \(\kappa \) is a function on \(\mathbb {R}^d\) defined by the formula

$$\begin{aligned} \kappa (x) = \sum _{v \in \mathcal {V}} p(v) e^{{\langle x, v\rangle }}. \end{aligned}$$

We also need a quadratic form on \(\mathbb {R}^d\) given by \(B_x(u, u) = D_u^2 \log \kappa (x)\). In Sect. 3 we prove the following theorem.

Theorem A

Let p be a transition function of an irreducible finite range random walk on \(\mathbb {Z}^d\). Let r be its period and \(X_0, \ldots , X_{r-1}\) the partition of \(\mathbb {Z}^d\) into aperiodic classes. There is \(\eta \ge 1\) such that for each \(j \in \{0, \ldots , r-1\}\), \(n \in \mathbb {N}\) and \(x \in X_j\), if \(n \equiv j \pmod r\) then

$$\begin{aligned} p(n; x) = (2 \pi n)^{-\frac{d}{2}} \big (\det B_s\big )^{-\frac{1}{2}} e^{-n \phi (\delta )} \Big (r + \mathcal {O}\big (n^{-1} {\text {dist}}(\delta , \partial \mathcal {M})^{-2\eta } \big )\Big ) \end{aligned}$$
(2)

otherwise \(p(n; x) = 0\), where \(\delta = \frac{x}{n}\) and \(s = \nabla \phi (\delta )\).
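As a sanity check (ours, not from the paper), for the simple random walk on \(\mathbb {Z}\) every quantity in (2) is explicit: \(r = 2\), \(s = {\text {artanh}}(\delta )\), \(\kappa (s) = \cosh (s)\), \(\det B_s = 1 - \delta ^2\), and \(\phi \) as in Example 2 below. The sketch compares (2) with the exact binomial probabilities.

```python
import math

def phi(delta):
    # rate function of the 1D simple random walk (cf. Example 2)
    return 0.5 * (1 - delta) * math.log(1 - delta) + 0.5 * (1 + delta) * math.log(1 + delta)

def theorem_A_approx(n, x):
    # right-hand side of (2) for the simple random walk on Z: r = 2, det B_s = 1 - delta^2
    delta = x / n
    return (2 * math.pi * n) ** -0.5 * (1 - delta ** 2) ** -0.5 * math.exp(-n * phi(delta)) * 2

def exact(n, x):
    # p(n; x) = 2^{-n} binom(n, (n + x) / 2) when x = n (mod 2)
    return math.comb(n, (n + x) // 2) / 2 ** n

for n, x in [(100, 0), (100, 20), (100, 60)]:
    print(n, x, exact(n, x), theorem_A_approx(n, x))
```

Even at \(\delta = 0.6\), far outside the \({\left|x \right|}_2 = o(n^{3/4})\) regime, the relative error remains of order \(n^{-1}\).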

Some comments are in order. First, observe that the asymptotic formula (2) is valid in a region excluding only the case when \(n {\text {dist}}(\delta , \partial \mathcal {M})^{2\eta }\) stays bounded. Although the function \(\phi \) is positive, convex and comparable to \({\left|\, \cdot \, \right|}_2^2\), see Claim 1, it cannot be replaced in the asymptotic formula by \({\left|\, \cdot \, \right|}_2^2\) without introducing an additional error term, see Remark 1. For processes with continuous time it was observed by Davis in [3] that in order to get an upper bound for the heat kernel on a larger region one has to introduce a non-Gaussian factor. Therefore, Theorem A may be considered as a discrete-time counterpart of [3]. Finally, although the quadratic form \(B_x\) is explicitly given, the mapping \(\mathcal {M}\ni \delta \mapsto s(\delta )\) is an implicit function. We want to stress that as \(\delta \) approaches the boundary of \(\mathcal {M}\), the value of \({\left|s \right|}_2\) tends to infinity. In particular, the quadratic form \(B_s\) degenerates. For this reason a more convenient form of Theorem A is given in Corollary 3.2.

A positive answer to the posed question is partially given in [2, Theorem 4.1]; however, thanks to Theorem A we gain control over the error term.

Let us comment on the method of the proof of Theorem A. First, with the help of the Fourier inversion formula, we write \(p(n; x)\) as an oscillatory integral, which we split into two parts. We analyze the first part by the Laplace method. This is not a straightforward application, since the phase function degenerates as \(\delta \) approaches the boundary of \(\mathcal {M}\). To estimate the second part we develop a geometric argument (see Theorem 2.2), which allows us to control the way the quadratic form \(B_s\) degenerates. In fact, Theorem 2.2 is the main observation of the present paper. It can be successfully applied in a much wider context, for example, to study finitely supported isotropic random walks on affine buildings (see [14]). The result obtained in this article has already found an application in the study of subordinated random walks (see [1]), which are spread all over \(\mathbb {Z}^d\) and do not have a second moment. There is also an ongoing project to obtain a precise asymptotic formula for random walks with internal degrees of freedom, extending the one obtained by Krámli and Szász [7] (see also Guivarc’h [4]). Finally, Appendix A contains applications of Theorem 3.1 to triangular and hexagonal lattices. This should be compared with results recently obtained in [5, 6].

1.1 Notation

We use the convention that C stands for a generic positive constant whose value can change from line to line. The set of positive integers is denoted by \(\mathbb {N}\). Let \(\mathbb {N}_0 = \mathbb {N}\cup \{0\}\).

2 Preliminaries

2.1 Random walks

Let \(p(\cdot , \cdot )\) denote the transition density of a random walk on the d-dimensional integer lattice. Let \(p(x) = p(0, x)\). For \(n \in \mathbb {N}\) and \(x \in \mathbb {Z}^d\) we set

$$\begin{aligned} p(n; x) = {\left\{ \begin{array}{ll} \sum _{y \in \mathbb {Z}^d} p(n-1; y) p(x - y) &{} \text {if } n \ge 2, \\ p(x) &{} \text {if } n = 1. \end{array}\right. } \end{aligned}$$

The support of p is denoted by \(\mathcal {V}\), i.e.

$$\begin{aligned} \mathcal {V}= \big \{v \in \mathbb {Z}^d : p(v) > 0 \big \}. \end{aligned}$$

We assume that the set \(\mathcal {V}\) is finite. Let \(\kappa : \mathbb {C}^d \rightarrow \mathbb {C}\) be an exponential polynomial defined by

$$\begin{aligned} \kappa (z) = \sum _{v \in \mathcal {V}} p(v) e^{{\langle z, v\rangle }}, \end{aligned}$$

where \({\langle \,\cdot \,, \,\cdot \,\rangle }\) is the standard scalar product on \(\mathbb {C}^d\), i.e.,

$$\begin{aligned} {\langle z, w\rangle } = \sum _{j = 1}^d z_j \overline{w_j}. \end{aligned}$$

In particular, \(\mathbb {R}^d \ni \theta \mapsto \kappa (i\theta )\) is the characteristic function of p. We set

$$\begin{aligned} \mathcal {U}= \big \{\theta \in [-\pi , \pi )^d : {|{\kappa (i \theta )} |} = 1 \big \}. \end{aligned}$$
(3)

Finally, the interior of the convex hull of \(\mathcal {V}\) in \(\mathbb {R}^d\) is denoted by \(\mathcal {M}\).

In this article, we study the asymptotic behavior of transition functions of irreducible finite range random walks. Let us recall that the random walk is irreducible if for each \(x \in \mathbb {Z}^d\) there is \(n \in \mathbb {N}\) such that \(p(n; x) > 0\). By \(r \in \mathbb {N}\) we denote the period of p, that is

$$\begin{aligned} r = \gcd \big \{n \in \mathbb {N}: p(n; 0) > 0\big \}. \end{aligned}$$

Then the space \(\mathbb {Z}^d\) decomposes into r disjoint classes

$$\begin{aligned} X_j = \big \{x \in \mathbb {Z}^d : p(j + k r; x) > 0 \text { for some } k \ge 0 \big \} \end{aligned}$$

for \(j = 0, \ldots , r-1\). We observe that for \(j \in \{0, \ldots , r-1\}\) and \(x \in X_j\), if \(n \not \equiv j \pmod r\) then

$$\begin{aligned} p(n; x) = 0. \end{aligned}$$

For each \(x \in \mathbb {Z}^d\), by \(m_x\) we denote the smallest \(m \in \mathbb {N}\) such that \(p(m; x) > 0\), thus \(x / m_x \in \overline{\mathcal {M}}\). Notice that there is \(C \ge 1\) such that for all \(x \in \mathbb {Z}^d\),

$$\begin{aligned} C^{-1} {\left|x \right|}_1 \le m_x \le C {\left|x \right|}_1. \end{aligned}$$
(4)

Indeed, let \(\{e_1, \ldots , e_d\}\) be the standard basis of \(\mathbb {R}^d\). Since

$$\begin{aligned} e_j = \sum _{v \in \mathcal {V}} m_{j, v} v, \quad \text { and } \quad -e_j = \sum _{v \in \mathcal {V}} m_{-j, v} v, \end{aligned}$$

for some \(m_{j, v}, m_{-j, v} \in \mathbb {N}_0\) satisfying

$$\begin{aligned} m_{e_j} = \sum _{v \in \mathcal {V}} m_{j, v}, \quad \text { and } \quad m_{-e_j} = \sum _{v \in \mathcal {V}} m_{-j, v}, \end{aligned}$$

by setting \(\varepsilon _j = {\text {sign}}{\langle x, e_j\rangle }\) we get

$$\begin{aligned} x = \sum _{v \in \mathcal {V}} \Big (\sum _{j = 1}^d m_{\varepsilon _j j, v} {|{{\langle x, e_j\rangle }} |} \Big ) v. \end{aligned}$$

Hence,

$$\begin{aligned} m_x \le {\left|x \right|}_1 \sum _{j = 1}^d \big (m_{e_j} + m_{-e_j}\big ), \end{aligned}$$

which, together with the boundedness of \(\overline{\mathcal {M}}\), implies (4).

Next, we observe that there is \(K > 0\) such that for all \(k \ge K\),

$$\begin{aligned} p(k r; 0) > 0, \end{aligned}$$

thus for all \(x \in \mathbb {Z}^d\) and \(n \ge Kr + m_x\),

$$\begin{aligned} p(n; x) {\left\{ \begin{array}{ll} > 0, &{} \text { if } n \equiv m_x \pmod r, \\ = 0, &{} \text { otherwise.} \end{array}\right. } \end{aligned}$$
(5)

Since

$$\begin{aligned} \bigg \{\frac{e_1}{m_{e_1}}, -\frac{e_1}{m_{-e_1}}, \ldots , \frac{e_d}{m_{e_d}}, -\frac{e_d}{m_{-e_d}} \bigg \} \end{aligned}$$
(6)

do not lie on the same affine hyperplane, the interior of the convex hull of (6) is a non-empty subset of \(\mathcal {M}\).

For each \(x \in \mathbb {R}^d\), by \(B_x\) we denote a quadratic form on \(\mathbb {R}^d\) defined by

$$\begin{aligned} B_x(u, u) = D_u^2 \log \kappa (x), \end{aligned}$$
(7)

where \(D_u\) denotes the derivative along a vector u, i.e.

$$\begin{aligned} D_u f(x) = \left. \frac{\mathrm{d}}{\mathrm{d}t} f(x + t u) \right| _{t = 0}. \end{aligned}$$

Since

$$\begin{aligned} D_u \log \kappa (x) = \sum _{v \in \mathcal {V}} \frac{p(v) e^{{\langle x, v\rangle }}}{\kappa (x)} {\langle u, v\rangle } \end{aligned}$$

and

$$\begin{aligned} D_u \left( \frac{p(v) e^{{\langle x, v\rangle }}}{\kappa (x)} \right) = \frac{p(v) e^{{\langle x, v\rangle }}}{\kappa (x)} {\langle u, v\rangle } - \sum _{v' \in \mathcal {V}} \frac{p(v) e^{{\langle x, v\rangle }}}{\kappa (x)} \cdot \frac{p(v') e^{{\langle x, v'\rangle }}}{\kappa (x)} {\langle u, v'\rangle }, \end{aligned}$$

we may write

$$\begin{aligned} B_x(u, u) = \frac{1}{2} \sum _{v, v'} \frac{p(v) e^{{\langle x, v\rangle }}}{\kappa (x)} \cdot \frac{p(v') e^{{\langle x, v'\rangle }}}{\kappa (x)} {\langle u, v - v'\rangle }^2. \end{aligned}$$
(8)

In particular, if the random walk is irreducible then the quadratic form \(B_x\) is positive definite.
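The following sketch (an illustration of ours, not from the paper) evaluates \(B_x(u, u)\) for the two-dimensional simple random walk both via formula (8) and as a numerical second derivative \(D_u^2 \log \kappa (x)\); the two values agree up to discretization error.

```python
import math

# two-dimensional simple random walk
V = [(1, 0), (-1, 0), (0, 1), (0, -1)]
p = {v: 0.25 for v in V}

def kappa(x):
    return sum(p[v] * math.exp(x[0] * v[0] + x[1] * v[1]) for v in V)

def B(x, u):
    # formula (8): B_x(u, u) as a double sum over V x V
    k = kappa(x)
    w = {v: p[v] * math.exp(x[0] * v[0] + x[1] * v[1]) / k for v in V}
    return 0.5 * sum(
        w[v] * w[vp] * (u[0] * (v[0] - vp[0]) + u[1] * (v[1] - vp[1])) ** 2
        for v in V for vp in V
    )

def B_numeric(x, u, h=1e-4):
    # central second difference of log kappa along u, i.e. D_u^2 log kappa(x)
    f = lambda t: math.log(kappa((x[0] + t * u[0], x[1] + t * u[1])))
    return (f(h) - 2 * f(0.0) + f(-h)) / h ** 2

x, u = (0.3, -0.2), (1.0, 2.0)
print(B(x, u), B_numeric(x, u))
```

At \(x = 0\) the same code reproduces \(B_0(u, u) = \frac{1}{d} {\langle u, u\rangle }\) from Example 1 below.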

Example 1

Let p be the transition function of the simple random walk on \(\mathbb {Z}^d\), i.e.

$$\begin{aligned} p(e_j) = p(-e_j) = \frac{1}{2d}, \quad \text {for} \quad j = 1, \ldots , d. \end{aligned}$$

Thus

$$\begin{aligned} \mathcal {V}= \{ \pm e_j : j =1, \ldots , d\}, \quad \text {and}\quad \mathcal {M}= \big \{x \in \mathbb {R}^d : {\left|x \right|}_1 < 1\big \}. \end{aligned}$$

Since

$$\begin{aligned} \kappa (z) = \frac{1}{2d} \sum _{j=1}^d \big (e^{z_j} + e^{-z_j}\big ), \end{aligned}$$

we get \(\mathcal {U}= \{0, (-\pi , -\pi , \ldots , -\pi )\}\). By a straightforward computation we may find the quadratic form \(B_0\),

$$\begin{aligned} B_0(u, u) = \frac{1}{(2 d)^2} \sum _{j=1}^d \sum _{j' = 1}^d \big ((u_j + u_{j'})^2 + (u_j - u_{j'})^2\big ) = \frac{1}{d} {\langle u, u\rangle }. \end{aligned}$$

2.2 Function s

For the sake of completeness we provide the proof of the following well-known theorem.

Theorem 2.1

For every \(\delta \in \mathcal {M}\) a function \(f(\delta ,\,\cdot \,): \mathbb {R}^d \rightarrow \mathbb {R}\) defined by

$$\begin{aligned} f(\delta , x) = {\langle x, \delta \rangle } - \log \kappa (x) \end{aligned}$$

attains its maximum at the unique point \(s \in \mathbb {R}^d\) satisfying \(\nabla \log \kappa (s) = \delta \).

Proof

Without loss of generality, we may assume \(\nabla \kappa (0)=0\). Indeed, otherwise we will consider

$$\begin{aligned} \tilde{\kappa }(z) = e^{-{\langle z, v_0\rangle }} \kappa (z) = \sum _{v \in \tilde{\mathcal {V}}} p(v + v_0) e^{{\langle z, v\rangle }} \end{aligned}$$

where \(v_0 = \nabla \kappa (0)\) and \(\tilde{\mathcal {V}} = \mathcal {V}- v_0\). Then \(\tilde{\mathcal {M}}\), the interior of the convex hull of \(\tilde{\mathcal {V}}\), is equal to \(\mathcal {M}- v_0\). For \(\tilde{\delta } = \delta - v_0\) we have

$$\begin{aligned} \tilde{f}(\tilde{\delta }, x) = {\langle x, \delta - v_0\rangle } - \log \tilde{\kappa }(x) = {\langle x, \delta \rangle } - \log \kappa (x) = f(\delta , x). \end{aligned}$$

We conclude that if s is the unique maximum of \(\mathbb {R}^d \ni x \mapsto \tilde{f}(\tilde{\delta }, x)\), then it is also the unique maximum of \(\mathbb {R}^d \ni x \mapsto f(\delta , x)\). Because

$$\begin{aligned} \nabla \log \tilde{\kappa }(x) = \nabla \log \kappa (x) - v_0, \end{aligned}$$

we get \(\nabla \log \kappa (s) = \tilde{\delta } + v_0 = \delta \), proving the claim.

Fix \(\delta \in \mathcal {M}\). Since \(\nabla \kappa (0) = 0\), by Taylor’s theorem we have

$$\begin{aligned} f(\delta , x) = {\langle x, \delta \rangle } + \mathcal {O}({\left|x \right|}_2^2) \end{aligned}$$

as \({\left|x \right|}_2\) approaches zero. Moreover, for any \(x, u \in \mathbb {R}^d\),

$$\begin{aligned} D_u^2 f(\delta , x) = -B_x(u, u), \end{aligned}$$

thus the function \(\mathbb {R}^d \ni x \mapsto f(\delta , x)\) is strictly concave.

Let us observe that

$$\begin{aligned} 0 = \nabla \kappa (0) = \sum _{v \in \mathcal {V}} p(v) \cdot v \in \overline{\mathcal {M}}. \end{aligned}$$

Since \(\mathcal {M}\) is not empty, the set \(\mathcal {V}\) cannot be contained in an affine hyperplane, thus \(0 \in \mathcal {M}\).

Now, \(\delta \in \mathcal {M}\) implies that there are \(v_1, \ldots , v_d \in \partial \mathcal {M}\cap \mathcal {V}\) such that \(\delta \) belongs to the convex hull of \(\{0, v_1, \ldots , v_d\}\), i.e. there are \(t_0, t_1, \ldots , t_d \in [0, 1]\) with \(t_0 + t_1 + \cdots + t_d = 1\) satisfying

$$\begin{aligned} \delta = t_0 \cdot 0 + \sum _{j=1}^d t_j \cdot v_j = \sum _{j=1}^d t_j \cdot v_j. \end{aligned}$$

Because \(\delta \not \in \partial \mathcal {M}\) we must have \(t_0 > 0\), thus \(\sum _{j = 1}^d t_j < 1\). Hence,

$$\begin{aligned} \sum _{j=1}^d t_j \log \kappa (x) \ge \sum _{j=1}^d t_j \big (\log p(v_j) + {\langle x, v_j\rangle }\big ) =\sum _{j=1}^d t_j \log p(v_j) + {\langle x, \delta \rangle }, \end{aligned}$$

and we get

$$\begin{aligned} f(\delta , x) = {\langle x, \delta \rangle } - \log \kappa (x) \le \left( \sum _{j=1}^d t_j - 1\right) \log \kappa (x) - \sum _{j=1}^d t_j \log p(v_j), \end{aligned}$$

which implies that

$$\begin{aligned} \lim _{{\left|x \right|}_2 \rightarrow \infty } f(\delta , x) = -\infty , \end{aligned}$$

because

$$\begin{aligned} \lim _{{\left|x \right|}_2 \rightarrow \infty } \log \kappa (x) = +\infty , \end{aligned}$$

and the proof is finished. \(\square \)

In the rest of the article, given \(\delta \in \mathcal {M}\), by s we denote the unique solution to

$$\begin{aligned} \delta = \nabla \log \kappa (s) = \sum _{v \in \mathcal {V}} \frac{p(v) e^{{\langle s, v\rangle }}}{\kappa (s)} \cdot v. \end{aligned}$$
(9)
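Since s is defined only implicitly by (9), in practice it may be found numerically. The sketch below (ours, for illustration) runs Newton’s method for the one-dimensional simple random walk, where (9) reads \(\tanh (s) = \delta \) and the exact solution is \(s = {\text {artanh}}(\delta )\).

```python
import math

def solve_s(delta, tol=1e-12):
    # Newton iteration for tanh(s) = delta, i.e. equation (9) for the 1D simple
    # random walk; the derivative of tanh is 1/cosh^2, which is exactly B_s
    s = 0.0
    for _ in range(100):
        g = math.tanh(s) - delta
        if abs(g) < tol:
            break
        s -= g * math.cosh(s) ** 2
    return s

delta = 0.7
s = solve_s(delta)
print(s, math.atanh(delta))
```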

Let \(\phi : \mathcal {M} \rightarrow \mathbb {R}\) be defined by

$$\begin{aligned} \phi (\delta ) = \max \{{\langle x, \delta \rangle } - \log \kappa (x) : x \in \mathbb {R}^d\}, \end{aligned}$$
(10)

thus, by Theorem 2.1,

$$\begin{aligned} \phi (\delta ) = {\langle \delta , s\rangle } - \log \kappa (s). \end{aligned}$$
(11)

By (9), for any \(u \in \mathbb {R}^d\),

$$\begin{aligned} {\langle \delta , u\rangle } = D_u \log \kappa (s). \end{aligned}$$

Hence, for \(u, u' \in \mathbb {R}^d\),

$$\begin{aligned} {\langle u, u'\rangle } =D_u \big (D_{u'} \log \kappa (s) \big ) =\sum _{j = 1}^d D_j D_{u'} \log \kappa (s) D_u s_j = B_s(D_u s, u'), \end{aligned}$$

i.e. \(D_u s = B_s^{-1} u\). Therefore, we can compute

$$\begin{aligned} \nabla \phi (\delta ) = s + \sum _{j = 1}^d \delta _j \nabla s_j - \sum _{j =1}^d D_j \log \kappa (s) \nabla s_j = s, \end{aligned}$$

thus

$$\begin{aligned} D^2_u \phi (\delta ) = D_u \big ( {\langle u, s\rangle } \big ) = B_s^{-1}(u,u). \end{aligned}$$

In particular, \(\phi \) is a convex function on \(\mathcal {M}\). Let \(\delta _0 = \nabla \log \kappa (0)\). By Taylor’s theorem, we have

$$\begin{aligned} \phi (\delta ) = \frac{1}{2} B_0^{-1}(\delta - \delta _0, \delta -\delta _0) + \mathcal {O}({\left|\delta -\delta _0 \right|}_2^3) \end{aligned}$$
(12)

as \(\delta \) approaches \(\delta _0\). We claim the following.

Claim 1

For all \(\delta \in \mathcal {M}\),

$$\begin{aligned} \phi (\delta ) \asymp B_0^{-1}(\delta - \delta _0, \delta - \delta _0). \end{aligned}$$

Since \(\phi \) is convex and satisfies (12), it is enough to show that \(\phi \) is bounded from above. Given \(\delta \in \mathcal {M}\), let \(v_0 \in \mathcal {V}\) be any vector satisfying

$$\begin{aligned} {\langle s, v_0\rangle } = \max \big \{{\langle s, v\rangle } : v \in \mathcal {V}\big \}. \end{aligned}$$

Because

$$\begin{aligned} {\langle s, \delta \rangle } - {\langle s, v_0\rangle } = \sum _{v \in \mathcal {V}} \frac{p(v) e^{{\langle s, v\rangle }} }{\kappa (s)}{\langle s, v-v_0\rangle } \le 0, \end{aligned}$$

we get

$$\begin{aligned} \phi (\delta ) = {\langle s, \delta \rangle } - \log \kappa (s)&\le {\langle s, \delta \rangle } - \log \big (p(v_0) e^{{\langle s, v_0\rangle }}\big ) \\&\le -\log p(v_0), \end{aligned}$$

proving the claim.

Example 2

Let p be the transition density of the simple random walk on \(\mathbb {Z}\). Then \(\mathcal {V}= \{-1, 1\}\), \(\mathcal {U}= \{0, -\pi \}\) and \(\mathcal {M}= (-1, 1)\). For \(\delta \in \mathcal {M}\), we have

$$\begin{aligned} e^s = \sqrt{\frac{1 + \delta }{1 - \delta }}, \end{aligned}$$

and

$$\begin{aligned} \kappa (s) = \frac{e^s + e^{-s}}{2} = \frac{1}{\sqrt{1-\delta ^2}}. \end{aligned}$$

Hence, using (11) we obtain

$$\begin{aligned} \phi (\delta ) = \frac{1}{2} (1-\delta ) \log (1-\delta ) + \frac{1}{2}(1+\delta ) \log (1+\delta ). \end{aligned}$$
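This closed form can be sanity-checked against definition (10) by brute force; the sketch below (illustrative only) maximizes \(x \delta - \log \kappa (x)\) over a fine grid for the simple random walk on \(\mathbb {Z}\).

```python
import math

def kappa(x):
    return 0.5 * (math.exp(x) + math.exp(-x))  # cosh(x)

def phi_grid(delta, lo=-5.0, hi=5.0, n=20001):
    # definition (10): the Legendre transform of log kappa, by grid search
    step = (hi - lo) / (n - 1)
    return max((lo + i * step) * delta - math.log(kappa(lo + i * step)) for i in range(n))

def phi_closed(delta):
    return 0.5 * (1 - delta) * math.log(1 - delta) + 0.5 * (1 + delta) * math.log(1 + delta)

for delta in (0.1, 0.5, 0.9):
    print(delta, phi_grid(delta), phi_closed(delta))
```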

In general, there is no explicit formula for the function \(\phi \). By the implicit function theorem, the function s is real analytic on \(\mathcal {M}\). In particular, s is bounded on any compact subset of \(\mathcal {M}\). On the other hand, \({\left|s \right|}_2\) tends to infinity as \(\delta \) approaches \(\partial \mathcal {M}\). To see this, let \(\mathcal {F}\) be a facet of \(\mathcal {M}\) such that \(\delta \) approaches \(\partial \mathcal {M}\cap \mathcal {F}\), and let u be the outward unit normal vector to \(\mathcal {M}\) at \(\mathcal {F}\). Then for each \(v_1 \in \mathcal {F}\cap \mathcal {V}\) and \(v_2 \in \mathcal {V}{\setminus } \mathcal {F}\) we have

$$\begin{aligned} {\langle v_1 - \delta , u\rangle }&= \sum _{v \in \mathcal {V}} \frac{p(v) e^{{\langle s, v\rangle }}}{\kappa (s)} {\langle v_1 - v, u\rangle } \\&= \sum _{v \in \mathcal {V}{\setminus } \mathcal {F}} \frac{p(v) e^{{\langle s, v\rangle }}}{\kappa (s)} {\langle v_1 - v, u\rangle } \ge \frac{p(v_2) e^{{\langle s, v_2\rangle }}}{\kappa (s)} {\langle v_1 - v_2, u\rangle }. \end{aligned}$$

Therefore, for any \(v \in \mathcal {V}{\setminus } \mathcal {F}\),

$$\begin{aligned} \lim _{\delta \rightarrow \partial \mathcal {M}\cap \mathcal {F}} \frac{e^{{\langle s, v\rangle }}}{\kappa (s)} = 0. \end{aligned}$$
(13)

The next theorem provides control over the speed of convergence in (13).

Theorem 2.2

There are constants \(\eta \ge 1\) and \(C > 0\) such that for all \(\delta \in \mathcal {M}\), and \(v \in \mathcal {V}\) we have

$$\begin{aligned} \frac{e^{{\langle s, v\rangle }}}{\kappa (s)} \ge C {\text {dist}}(\delta , \partial \mathcal {M})^\eta \end{aligned}$$

where \(s=s(\delta )\) satisfies \(\delta = \nabla \log \kappa (s)\).

Proof

Fix an arbitrary enumeration \(\mathcal {V}= \{v_1, \ldots , v_N\}\) of the elements of \(\mathcal {V}\). Define

$$\begin{aligned} \Omega = \big \{\omega \in \mathcal {S}: {\langle \omega , v_{i}\rangle } \ge {\langle \omega , v_{i+1}\rangle } \text { for } i=1,\ldots ,N-1 \big \}, \end{aligned}$$

where \(\mathcal {S}\) is the unit sphere in \(\mathbb {R}^d\) centered at the origin. Since \(\mathcal {V}\) is finite, it is enough to prove that there are \(C > 0\) and \(\eta \ge 1\) such that for all \(x \in \mathbb {R}^d\), if \(\frac{x}{{\left|x \right|}_2} \in \Omega \) then for all \(v \in \mathcal {V}\),

$$\begin{aligned} \frac{e^{{\langle x, v\rangle }}}{\kappa (x)} \ge C {\text {dist}}(\delta , \partial \mathcal {M})^{\eta }, \end{aligned}$$

where

$$\begin{aligned} \delta = \sum _{v \in \mathcal {V}} \frac{p(v) e^{{\langle x, v\rangle }}}{\kappa (x)} \cdot v. \end{aligned}$$

Without loss of generality, we may assume that \(\Omega \ne \emptyset \). Let k be the smallest index such that the points \(\{v_1, \ldots , v_k\}\) do not lie on the same facet of \(\mathcal {M}\). Let us recall that a set \(\mathcal {F}\) is a facet of \(\mathcal {M}\) if there are \(\lambda \in \mathcal {S}\) and \(c \in \mathbb {R}\) such that for all \(v \in \mathcal {V}\), \({\langle \lambda , v\rangle } \le c\), and

$$\begin{aligned} \mathcal {F}= {\text {conv}}\{v \in \mathcal {V}: {\langle \lambda , v\rangle } = c\}. \end{aligned}$$

Since \(\{v_1, \ldots , v_k\}\) do not lie on the same facet of \(\mathcal {M}\) and \(\Omega \) is a compact set, there is \(\epsilon > 0\) such that for all \(\omega \in \Omega \) we have

$$\begin{aligned} {\langle \omega , v_1\rangle } \ge {\langle \omega , v_k\rangle } + \epsilon . \end{aligned}$$
(14)

Indeed, otherwise, there are \(\omega _n \in \Omega \) such that

$$\begin{aligned} {\langle \omega _n, v_k\rangle } \le {\langle \omega _n, v_1\rangle } \le {\langle \omega _n, v_k\rangle } + \frac{1}{n}. \end{aligned}$$

Since \(\Omega \) is compact, by passing to a subsequence we find \(\omega _0 \in \Omega \) such that

$$\begin{aligned} {\langle \omega _0, v_1\rangle } = {\langle \omega _0, v_k\rangle }, \end{aligned}$$

and for each \(i \in \{2, \ldots , N\}\),

$$\begin{aligned} {\langle \omega _0, v_1\rangle } \ge {\langle \omega _0, v_i\rangle }. \end{aligned}$$

This contradicts the fact that \(\{v_1, \ldots , v_k\}\) do not lie on the same facet of \(\mathcal {M}\).

Let \(\mathcal {F}\) be a facet containing \(\{v_1,\ldots ,v_{k-1}\}\) determined by \(\lambda \in \mathcal {S}\) and \(c \in \mathbb {R}\). Let us consider \(x \in \mathbb {R}^d\) such that \(\frac{x}{{\left|x \right|}_2} \in \Omega \) and

$$\begin{aligned} \delta = \sum _{v\in \mathcal {V}} \frac{p(v) e^{{\langle x, v\rangle }}}{\kappa (x)} \cdot v. \end{aligned}$$

The distance of \(\delta \) to the hyperplane containing the facet \(\mathcal {F}\) is not bigger than \(c - {\langle \lambda , \delta \rangle }\), thus

$$\begin{aligned} {\text {dist}}(\delta , \partial \mathcal {M}) \le c - {\langle \lambda , \delta \rangle }&= \sum _{v\in \mathcal {V}{\setminus }\mathcal {F}} \frac{p(v) e^{{\langle x, v\rangle }}}{\kappa (x)} {\langle \lambda , v_1-v\rangle } \\&\le 2 \max \{{\left|v \right|}_2 : v \in \mathcal {V}\} \cdot \frac{e^{{\langle x, v_k\rangle }}}{\kappa (x)}. \end{aligned}$$

Since

$$\begin{aligned} p(v_1) e^{{\langle x, v_1\rangle }} \le \kappa (x) \le e^{{\langle x, v_1\rangle }}, \end{aligned}$$

we obtain

$$\begin{aligned} e^{{\langle x, v_k - v_1\rangle }} \ge C {\text {dist}}(\delta , \partial \mathcal {M}). \end{aligned}$$

In particular, for \(1 \le j \le k\), we have

$$\begin{aligned} \frac{e^{{\langle x, v_j\rangle }}}{\kappa (x)} \ge C {\text {dist}}(\delta , \partial \mathcal {M}). \end{aligned}$$

If \(j > k\), we can estimate

$$\begin{aligned} \frac{e^{{\langle x, v_j\rangle }}}{\kappa (x)} \ge e^{{\langle x, v_j - v_1\rangle }}&= \Big (e^{{\langle x, v_k - v_1\rangle }}\Big )^{{\langle x, v_1 - v_j\rangle }/{\langle x, v_1 - v_k\rangle }} \\&\ge C {\text {dist}}(\delta , \partial \mathcal {M})^{{\langle x, v_1-v_j\rangle }/{\langle x, v_1 - v_k\rangle }}, \end{aligned}$$

which finishes the proof since, by (14),

$$\begin{aligned} 1 \le \frac{{\langle x, v_1 - v_j\rangle }}{{\langle x, v_1 - v_k\rangle }} \le \epsilon ^{-1} {\left|v_1 - v_j \right|}_2, \end{aligned}$$

thus it is enough to take

$$\begin{aligned} \eta = \epsilon ^{-1} \cdot \max \{{\left|v_1 - v \right|}_2 : v \in \mathcal {V}\}. \end{aligned}$$

\(\square \)
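For the simple random walk on \(\mathbb {Z}\) the bound of Theorem 2.2 can be made fully explicit: with \(s = {\text {artanh}}(\delta )\) one computes \(e^{-s} / \kappa (s) = 1 - \delta \) and \(e^{s} / \kappa (s) = 1 + \delta \), so \(C = 1\) and \(\eta = 1\) suffice. A short numerical confirmation (ours, for illustration):

```python
import math

def kappa(s):
    return math.cosh(s)

for delta in (-0.99, -0.5, 0.0, 0.5, 0.99):
    s = math.atanh(delta)
    dist = min(1 - delta, 1 + delta)  # distance of delta to the boundary {-1, 1}
    for v in (-1, 1):
        ratio = math.exp(s * v) / kappa(s)
        # Theorem 2.2 with C = 1 and eta = 1
        assert ratio >= dist - 1e-12
    print(delta, math.exp(-s) / kappa(s), 1 - delta)
```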

Example 3

For \(k \in \mathbb {N}\), let us consider the transition function given by

$$\begin{aligned} p(0, -k) = p(0, 0) = p(0, 2) = \frac{1}{3}. \end{aligned}$$

Then for \(\delta \in (-k, 2)\), as \(s \rightarrow +\infty \), we have

$$\begin{aligned} \delta&= \frac{2 e^{2 s} - k e^{-k s}}{e^{2 s} + 1 + e^{-ks}} \\&= \big (2 + o(e^{-2s})\big ) \big (1 - e^{-2s} + o(e^{-2 s})\big ) \\&= 2 - 2 e^{-2s} + o(e^{-2s}). \end{aligned}$$

Hence,

$$\begin{aligned} \lim _{\delta \rightarrow 2^-} \frac{e^{-2s}}{2 - \delta } = \frac{1}{2}, \end{aligned}$$

and

$$\begin{aligned} \lim _{\delta \rightarrow 2^-} \frac{e^{-ks}}{(2 - \delta )^{\frac{k}{2}}} = 2^{-\frac{k}{2}}. \end{aligned}$$
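These limits are easy to confirm numerically; the following sketch (illustrative only) evaluates \(\delta (s)\) directly from (9) for \(k = 3\) and checks both ratios at a moderately large s.

```python
import math

k = 3

def delta_of(s):
    # delta = (log kappa)'(s) for the walk with p(-k) = p(0) = p(2) = 1/3
    num = 2 * math.exp(2 * s) - k * math.exp(-k * s)
    den = math.exp(2 * s) + 1 + math.exp(-k * s)
    return num / den

s = 8.0
dl = delta_of(s)
print(math.exp(-2 * s) / (2 - dl))            # close to 1/2
print(math.exp(-k * s) / (2 - dl) ** (k / 2))  # close to 2^{-k/2}
```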

2.3 Analytic lemmas

For a multi-index \(\sigma \in \mathbb {N}^d\) we denote by \(X_\sigma \) the multi-set containing \(\sigma (i)\) copies of i. Let \(\Pi _\sigma \) be the set of all partitions of \(X_\sigma \). For the convenience of the reader we recall the following lemma.

Lemma 2.3

(Faà di Bruno’s formula) There are positive constants \(c_\pi \), \(\pi \in \Pi _\sigma \), such that for sufficiently smooth functions \(f: S \rightarrow T\), \(F: T \rightarrow \mathbb {R}\), \(T \subset \mathbb {R}\), \(S \subset \mathbb {R}^d\), we have

$$\begin{aligned} \partial ^{\sigma } F(f(s)) = \sum _{\pi \in \Pi _\sigma } c_\pi \left. \frac{\mathrm{d}^m }{\mathrm{d} t^m} \right| _{t = f(s)} F(t) \prod _{j=1}^{m} \partial ^{B_j} f(s) \end{aligned}$$

where \(\pi = \{B_1, \ldots , B_m\}\).

Let us observe that for

$$\begin{aligned} F(t) = \frac{1}{2-t}, \quad \text {and}\quad f(s) = \prod _{j=1}^d \frac{1}{1 - s_j}, \end{aligned}$$

the function F(f(s)) is real-analytic in some neighborhood of \(s = 0\), thus there is \(C > 0\) such that for every \(\sigma \in \mathbb {N}^d\),

$$\begin{aligned} \partial ^{\sigma } F(f(0)) \le C^{{|{\sigma } |} + 1} \sigma !. \end{aligned}$$

Therefore,

$$\begin{aligned} \sum _{\pi \in \Pi _\sigma } c_\pi m! \prod _{j=1}^m B_j! = \partial ^{\sigma } F(f(0)) \le C^{{|{\sigma } |}+1} \sigma ! \end{aligned}$$
(15)

where for a multi-set B containing \(b(i)\) copies of i we set

$$\begin{aligned} B! = \prod _{i = 1}^d b(i) !. \end{aligned}$$

Using Lemma 2.3 one can show the following.

Lemma 2.4

Let \(\mathcal {V}\subset \mathbb {R}^d\) be a set of finite cardinality. Assume that for each \(v \in \mathcal {V}\), we are given \(a_v \in \mathbb {C}\), and \(b_v > 0\). Then for \(z = x+i\theta \in \mathbb {C}^d\) such that

$$\begin{aligned} {\left|\theta \right|}_2 \le (2 \cdot \max \{{\left|v \right|}_2: v \in \mathcal {V}\})^{-1}, \end{aligned}$$

we have

$$\begin{aligned} \Big | \sum _{v \in \mathcal {V}} b_v e^{\langle z, v\rangle } \Big | \ge \frac{1}{\sqrt{2}} \sum _{v \in \mathcal {V}} b_v e^{\langle x, v\rangle }. \end{aligned}$$
(16)

Moreover, there is \(C > 0\) such that for all \(\sigma \in \mathbb {N}^d\),

$$\begin{aligned} \bigg | \partial ^{\sigma } \bigg \{ \frac{ \sum _{v \in \mathcal {V}} a_v e^{\langle z, v\rangle } }{ \sum _{v \in \mathcal {V}} b_v e^{\langle z, v\rangle } } \bigg \} \bigg | \le C^{{|{\sigma } |}} \sigma ! \frac{ \sum _{v \in \mathcal {V}} {|{a_v} |} e^{\langle x, v\rangle } }{ \sum _{v \in \mathcal {V}} b_v e^{\langle x, v\rangle } }. \end{aligned}$$
(17)

Proof

We start by proving (16). We have

$$\begin{aligned} \Big | \sum _{v \in \mathcal {V}} b_v e^{{\langle z, v\rangle }} \Big |^2&= \sum _{v, v' \in \mathcal {V}} b_v b_{v'} e^{{\langle x, v + v'\rangle }} \cos {\langle \theta , v-v'\rangle } \\&\ge \sum _{v, v' \in \mathcal {V}} b_v b_{v'} e^{{\langle x, v + v'\rangle }} \left( 1 - \frac{{\langle \theta , v-v'\rangle }^2}{2}\right) \\&\ge \frac{1}{2} \Big ( \sum _{v \in \mathcal {V}} b_v e^{{\langle x, v\rangle }} \Big )^2 \end{aligned}$$

because \({|{{\langle \theta , v-v'\rangle }} |} \le 1\).

For the proof of (17), it is enough to show

$$\begin{aligned} \bigg |\partial ^{\sigma } \bigg \{ \frac{1}{\sum _{v \in \mathcal {V}} b_v e^{{\langle z, v\rangle }}}\bigg \} \bigg | \le C^{{|{\sigma } |} + 1} \sigma ! \frac{1}{\sum _{v \in \mathcal {V}} b_v e^{{\langle x, v\rangle }}}. \end{aligned}$$
(18)

Indeed, since

$$\begin{aligned} \Big | \partial ^\alpha \Big \{\sum _{v \in \mathcal {V}} a_v e^{{\langle z, v\rangle }} \Big \} \Big | \le \sum _{v \in \mathcal {V}} {|{a_v} |} \cdot {|{v^\alpha } |} e^{{\langle x, v\rangle }} \le C^{{|{\alpha } |}} \sum _{v \in \mathcal {V}} {|{a_v} |} e^{{\langle x, v\rangle }}, \end{aligned}$$
(19)

by (18) and Leibniz’s rule we obtain (17). To show (18), we use Faà di Bruno’s formula with \(F(t) = 1/t\). By Lemma 2.3 together with estimates (16) and (19) we get

$$\begin{aligned} \bigg |\partial ^{\sigma } \bigg \{ \frac{1}{\sum _{v \in \mathcal {V}} b_v e^{{\langle z, v\rangle }}} \bigg \} \bigg |&\le \sum _{\pi \in \Pi _\sigma } c_\pi m! \Big (\sum _{v \in \mathcal {V}} b_v e^{{\langle x, v\rangle }} \Big )^{-m-1} \prod _{j=1}^m \Big | \partial ^{B_j}\Big \{\sum _{v \in \mathcal {V}} b_v e^{{\langle z, v\rangle }} \Big \}\Big | \\&\le C^{{|{\sigma } |}} \frac{1}{\sum _{v \in \mathcal {V}} b_v e^{{\langle x, v\rangle }}} \sum _{\pi \in \Pi _\sigma } c_\pi m! \\&\le C^{{|{\sigma } |} + 1} \frac{1}{\sum _{v \in \mathcal {V}} b_v e^{{\langle x, v\rangle }}}, \end{aligned}$$

where in the last inequality we have used (15). \(\square \)
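Inequality (16) of Lemma 2.4 is easy to probe numerically. The sketch below (ours, for illustration) draws a random configuration with \({\left|\theta \right|}_2\) exactly at the admissible threshold \((2 \max _v {\left|v \right|}_2)^{-1}\) and verifies the bound.

```python
import cmath
import math
import random

random.seed(0)
d = 3
# a fixed nonzero vector plus random ones, so max |v|_2 > 0
V = [(1, 0, 0)] + [tuple(random.randint(-2, 2) for _ in range(d)) for _ in range(5)]
b = [random.uniform(0.1, 1.0) for _ in V]
x = [random.uniform(-1.0, 1.0) for _ in range(d)]

# hypothesis of Lemma 2.4: |theta|_2 <= (2 max_v |v|_2)^{-1}
vmax = max(math.sqrt(sum(c * c for c in v)) for v in V)
theta = [random.uniform(-1.0, 1.0) for _ in range(d)]
norm = math.sqrt(sum(t * t for t in theta))
theta = [t / norm / (2 * vmax) for t in theta]  # rescale to the threshold

lhs = abs(sum(bv * cmath.exp(complex(sum(xi * vi for xi, vi in zip(x, v)),
                                     sum(ti * vi for ti, vi in zip(theta, v))))
              for bv, v in zip(b, V)))
rhs = sum(bv * math.exp(sum(xi * vi for xi, vi in zip(x, v))) for bv, v in zip(b, V))
print(lhs, rhs / math.sqrt(2))
```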

3 Heat kernels

In this section we establish the asymptotic behavior of the n-th step transition density of an irreducible finite range random walk on the integer lattice \(\mathbb {Z}^d\). Before we state and prove the main theorem, let us present the following example.

Example 4

Let p be the transition function of the simple random walk on \(\mathbb {Z}\). If \(x \equiv n \pmod 2\) then

$$\begin{aligned} p(n; x) = \frac{1}{2^n} \frac{n!}{(\frac{n-x}{2})! (\frac{n+x}{2})!}. \end{aligned}$$

Let us recall Stirling’s formula

$$\begin{aligned} n! = \sqrt{2 \pi } n^{n+\frac{1}{2}} e^{-n} \big (1 + \mathcal {O}(n^{-1})\big ). \end{aligned}$$

Hence, we have

$$\begin{aligned} p(n; x)&= \frac{2}{\sqrt{2\pi }} \frac{n^{n+\frac{1}{2}}}{(n-x)^{\frac{n-x+1}{2}} (n+x)^{\frac{n+x+1}{2}}} \Big (1 + \mathcal {O}\big ((n-x)^{-1}\big ) + \mathcal {O}\big ((n+x)^{-1}\big )\Big ) \\&= \frac{2}{\sqrt{2\pi n}} (1-\delta ^2)^{-\frac{1}{2}} e^{-n\phi (\delta )} \Big (1+\mathcal {O}\big (n^{-1} {\text {dist}}(\delta , \{-1, 1\})^{-1}\big )\Big ) \end{aligned}$$

where \(\delta = \frac{x}{n}\) and \(\phi (\delta ) = \frac{1}{2} (1-\delta ) \log (1-\delta ) + \frac{1}{2} (1+\delta ) \log (1+\delta )\).
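The expansion can be checked numerically against the exact binomial formula; the leading constant \(2 (2 \pi n)^{-\frac{1}{2}}\) matches Pólya’s limit \(2 d^{\frac{d}{2}} (2\pi )^{-\frac{d}{2}}\) for \(d = 1\) quoted in the introduction. A short sketch:

```python
import math

def p_exact(n, x):
    # Exact transition function of the simple random walk on Z.
    if (n + x) % 2 != 0 or abs(x) > n:
        return 0.0
    return math.comb(n, (n + x) // 2) / 2 ** n

def p_asympt(n, x):
    # Leading term 2 (2*pi*n)^{-1/2} (1 - delta^2)^{-1/2} e^{-n*phi(delta)}.
    d = x / n
    phi = 0.5 * (1 - d) * math.log1p(-d) + 0.5 * (1 + d) * math.log1p(d)
    return 2.0 * (2 * math.pi * n) ** -0.5 * (1 - d * d) ** -0.5 * math.exp(-n * phi)
```

The relative error is of order \(n^{-1}\) as long as \(\delta \) stays away from \(\pm 1\).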

Theorem 3.1

Let p be an irreducible finite range random walk on \(\mathbb {Z}^d\). Let r be its period and \(X_0, \ldots , X_{r-1}\) the partition of \(\mathbb {Z}^d\) into aperiodic classes. There is \(\eta \ge 1\) such that for each \(j \in \{0, 1, \ldots , r-1\}\), \(n \in \mathbb {N}\) and \(x \in X_j\), if \(n \equiv j \pmod r\) then

$$\begin{aligned} p(n; x) = (2 \pi n)^{-\frac{d}{2}} (\det B_s)^{-\frac{1}{2}} e^{-n\phi (\delta )} \Big ( r + \mathcal {O}\big (n^{-1} {\text {dist}}(\delta , \partial \mathcal {M})^{-2\eta }\big ) \Big ), \end{aligned}$$

otherwise \(p(n; x) = 0\), where \(\delta = \frac{x}{n}\), \(s = \nabla \phi (\delta )\) and

$$\begin{aligned} \phi (\delta ) = \max \big \{{\langle u, \delta \rangle } - \log \kappa (u) : u \in \mathbb {R}^d \big \}. \end{aligned}$$
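The Legendre-transform definition of \(\phi \) can be evaluated numerically. For the simple random walk on \(\mathbb {Z}\) one has \(\kappa (u) = \cosh u\), the maximizer is \(s = {{\,\textrm{artanh}\,}}\delta \), and the maximum agrees with the closed form from Example 4; a sketch:

```python
import math

def phi(delta):
    # max_u { u*delta - log cosh(u) }; the critical point solves
    # (log cosh)'(s) = tanh(s) = delta, i.e. s = atanh(delta).
    s = math.atanh(delta)
    return s * delta - math.log(math.cosh(s))

def phi_closed(delta):
    # Closed form derived in Example 4.
    return 0.5 * (1 - delta) * math.log1p(-delta) + 0.5 * (1 + delta) * math.log1p(delta)

def phi_grid(delta, lo=-5.0, hi=5.0, N=200001):
    # Brute-force maximization over a grid, as an independent cross-check.
    h = (hi - lo) / (N - 1)
    return max((lo + k * h) * delta - math.log(math.cosh(lo + k * h)) for k in range(N))
```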

Proof

Using the Fourier inversion formula we can write

$$\begin{aligned} p(n; x)=\bigg (\frac{1}{2\pi }\bigg )^d \int _{\mathscr {D}_d} \kappa (i\theta )^n e^{-i{\langle \theta , x\rangle }} {\, \mathrm d}\theta , \end{aligned}$$
(20)

where \(\mathscr {D}_d = [-\pi , \pi )^d\). If \(\theta _0 \in \mathcal {U}\) then \(\kappa (i \theta _0) = e^{it}\) for some \(t \in [-\pi , \pi )\) where \(\mathcal {U}\) is defined in (3). Since \(\kappa (i\theta _0)\) is a convex combination of complex numbers from the unit circle, \(\kappa (i \theta _0) = e^{i t}\) if and only if \(e^{i {\langle \theta _0, v\rangle }} = e^{it}\) for each \(v \in \mathcal {V}\). In particular,

$$\begin{aligned} e^{i n t} p(n; x)&= \bigg (\frac{1}{2\pi }\bigg )^d \int _{\mathscr {D}_d} \kappa (i\theta + i \theta _0)^n e^{-i{\langle \theta , x\rangle }} {\, \mathrm d}\theta \\&= e^{i {\langle \theta _0, x\rangle }} p(n; x), \end{aligned}$$

thus, whenever \(p(n; x) > 0\), we must have

$$\begin{aligned} e^{i n t} = e^{i {\langle \theta _0, x\rangle }}. \end{aligned}$$
(21)

Hence, by (5),

$$\begin{aligned} e^{i n t } = e^{i {\langle \theta _0, x\rangle }} = e^{i (n + r) t}, \end{aligned}$$

which implies that \(e^{i t}\) is an r’th root of unity. In particular, the set \(\mathcal {U}\) has cardinality r. Next, we claim that

Claim 2

For any \(u \in \mathbb {R}^d\),

$$\begin{aligned} \int _{\mathscr {D}_d} \kappa (i \theta )^n e^{-i {\langle \theta , x\rangle }} {\, \mathrm d}\theta = \int _{\mathscr {D}_d} \kappa (u + i\theta )^n e^{-{\langle u + i \theta , x\rangle }} {\, \mathrm d}\theta . \end{aligned}$$

To see this, we observe that if \(y \in \mathbb {Z}^d\) and \(y \ne x\), then we have

$$\begin{aligned} \int _{\mathscr {D}_d} e^{i {\langle \theta , y\rangle }} e^{-i{\langle \theta , x\rangle }} {\, \mathrm d}\theta = 0 = \int _{\mathscr {D}_d} e^{{\langle u+i\theta , y\rangle }} e^{-{\langle u+i\theta , x\rangle }} {\, \mathrm d}\theta , \end{aligned}$$

otherwise

$$\begin{aligned} \int _{\mathscr {D}_d} e^{i {\langle \theta , x\rangle }} e^{-i{\langle \theta , x\rangle }} {\, \mathrm d}\theta&= \prod _{j = 1}^d \int _{-\pi }^\pi e^{i \theta _j x_j} e^{-i \theta _j x_j} {\, \mathrm d}\theta _j \\&= \prod _{j = 1}^d \int _{-\pi }^\pi e^{(u_j+i \theta _j) x_j} e^{-(u_j + i \theta _j) x_j} {\, \mathrm d}\theta _j \\&= \int _{\mathscr {D}_d} e^{{\langle u + i\theta , x\rangle }} e^{-{\langle u + i\theta , x\rangle }} {\, \mathrm d}\theta . \end{aligned}$$

Therefore,

$$\begin{aligned} \int _{\mathscr {D}_d} \kappa (i\theta )^n e^{-i{\langle \theta , x\rangle }} {\, \mathrm d}\theta&= \sum _{v_1, \ldots , v_n \in \mathcal {V}} \prod _{j=1}^n p(v_j) \int _{\mathscr {D}_d} e^{i{\langle \theta , \sum _{j=1}^n v_j\rangle }} e^{-i{\langle \theta , x\rangle }} {\, \mathrm d}\theta \\&= \sum _{v_1, \ldots , v_n \in \mathcal {V}} \prod _{j=1}^n p(v_j) \int _{\mathscr {D}_d} e^{{\langle u + i\theta , \sum _{j=1}^n v_j\rangle }} e^{-{\langle u+i\theta , x\rangle }} {\, \mathrm d}\theta \\&= \int _{\mathscr {D}_d} \kappa (u + i \theta )^n e^{-{\langle u+i\theta , x\rangle }} {\, \mathrm d}\theta . \end{aligned}$$
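Claim 2 says that the inversion integral is invariant under shifting the contour into the complex domain. For the simple random walk on \(\mathbb {Z}\), where \(\kappa (z) = \cosh z\), this is easy to confirm numerically with a trapezoidal rule, which is highly accurate for periodic integrands; a sketch:

```python
import cmath
import math

def p_via_contour(n, x, u, N=4096):
    """(1/2pi) * integral over [-pi, pi) of kappa(u+i*theta)^n e^{-(u+i*theta)x} d(theta)
    for the simple walk on Z; by Claim 2 the value should not depend on u."""
    h = 2 * math.pi / N
    total = 0.0
    for k in range(N):
        theta = -math.pi + k * h
        z = complex(u, theta)
        # kappa(z) = cosh(z) for the simple random walk on Z.
        total += (cmath.cosh(z) ** n * cmath.exp(-z * x)).real
    return total * h / (2 * math.pi)
```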

We notice that if \(p(n; x) > 0\) then \(\delta = \frac{x}{n} \in \overline{\mathcal {M}}\). We may assume that \({\text {dist}}(\delta , \partial \mathcal {M}) > 0\), for otherwise the error term in the theorem is unbounded and there is nothing to prove. Then, by Theorem 2.1, there is a unique \(s=s(\delta )\) such that \(\nabla \log \kappa (s) = \delta \). Hence, by Claim 2, we can write

$$\begin{aligned} p(n; x) = \bigg (\frac{1}{2 \pi }\bigg )^d e^{-n \phi (\delta )} \int _{\mathscr {D}_d} \bigg (\frac{\kappa (s + i \theta )}{\kappa (s)}\bigg )^n e^{-i {\langle \theta , x\rangle }} {\, \mathrm d}\theta , \end{aligned}$$

because

$$\begin{aligned} \phi (\delta ) = {\langle s, \delta \rangle } - \log \kappa (s). \end{aligned}$$

Let \(\epsilon > 0\) be small enough to satisfy (25), (27) and (30). We set

$$\begin{aligned} \mathscr {D}_d^\epsilon = \bigcap _{\theta _0 \in \mathcal {U}} \big \{\theta \in [-\pi , \pi )^d : {\left|\theta - \theta _0 \right|}_2 \ge \epsilon \big \}. \end{aligned}$$

Then the integral over \(\mathscr {D}_d^\epsilon \) is negligible. To see this, we write

$$\begin{aligned} 1 - \bigg |\frac{\kappa (s+i\theta )}{\kappa (s)} \bigg |^2&= 1 - \sum _{v, v' \in \mathcal {V}} \frac{p(v) e^{{\langle s+i\theta , v\rangle }}}{\kappa (s)} \cdot \frac{p(v')e^{{\langle s-i\theta , v'\rangle }}}{\kappa (s)} \nonumber \\&= 2 \sum _{v, v' \in \mathcal {V}} \frac{p(v) e^{{\langle s, v\rangle }}}{\kappa (s)} \cdot \frac{p(v') e^{{\langle s, v'\rangle }}}{\kappa (s)} \Big (\sin \Big \langle \frac{\theta }{2}, v-v'\Big \rangle \Big )^2. \end{aligned}$$
(22)
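Identity (22) can be confirmed numerically; the sketch below does so for the simple random walk on \(\mathbb {Z}\):

```python
import cmath
import math

# Simple random walk on Z: steps +-1 with probability 1/2 each.
steps = {1: 0.5, -1: 0.5}

def identity_gap(s, theta):
    """|LHS - RHS| of identity (22) for the simple walk on Z."""
    kap_s = sum(p * math.exp(s * v) for v, p in steps.items())
    kap_z = sum(p * cmath.exp(complex(s, theta) * v) for v, p in steps.items())
    lhs = 1 - abs(kap_z / kap_s) ** 2
    # Tilted weights w_v = p(v) e^{s v} / kappa(s).
    w = {v: p * math.exp(s * v) / kap_s for v, p in steps.items()}
    rhs = 2 * sum(w[v] * w[vp] * math.sin(theta * (v - vp) / 2) ** 2
                  for v in steps for vp in steps)
    return abs(lhs - rhs)
```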

Now, we need the following estimate.

Claim 3

For every \(v_0 \in \mathcal {V}\), there is \(\xi > 0\) such that for all \(\theta \in \mathscr {D}_d^\epsilon \) there is \(v \in \mathcal {V}\) satisfying

$$\begin{aligned} \Big | \sin \Big \langle \frac{\theta }{2}, v - v_0 \Big \rangle \Big | \ge \xi . \end{aligned}$$
(23)

For the proof, we assume, to the contrary, that for some \(v_0 \in \mathcal {V}\) and all \(m \in \mathbb {N}\) there is \(\theta _m \in \mathscr {D}_d^\epsilon \) such that for all \(v \in \mathcal {V}\),

$$\begin{aligned} \Big | \sin \Big \langle \frac{\theta _m}{2}, v - v_0 \Big \rangle \Big | \le \frac{1}{m}. \end{aligned}$$

By compactness of \(\mathscr {D}_d^\epsilon \), there is a subsequence \((\theta _{m_k} : k \in \mathbb {N})\) converging to some \(\theta ' \in \mathscr {D}_d^\epsilon \). Then for all \(v \in \mathcal {V}\),

$$\begin{aligned} \sin \Big \langle \frac{\theta '}{2}, v-v_0 \Big \rangle = 0, \end{aligned}$$

and hence

$$\begin{aligned} \kappa (i\theta ') = e^{i {\langle \theta ', v_0\rangle }} \end{aligned}$$

which is impossible since \(\theta ' \in \mathscr {D}_d^\epsilon \).

In order to apply Claim 3, we select any \(v_0\) satisfying

$$\begin{aligned} {\langle v_0, s\rangle } = \max \big \{{\langle v, s\rangle } : v \in \mathcal {V}\big \}, \end{aligned}$$

thus \(e^{{\langle s, v_0\rangle }} \ge \kappa (s)\). By Claim 3 and (22), for each \(\theta \in \mathscr {D}_d^\epsilon \) there is \(v \in \mathcal {V}\) such that

$$\begin{aligned} 1 - \bigg | \frac{\kappa (s+i\theta )}{\kappa (s)} \bigg |^2&\ge 2 p(v_0) \frac{p(v) e^{{\langle s, v\rangle }}}{\kappa (s)} \xi ^2 \\&\ge 2 \xi ^2 \min \{p(v')^2 : v' \in \mathcal {V}\} \cdot \frac{e^{{\langle s, v\rangle }}}{\kappa (s)}. \end{aligned}$$

Although v may depend on \(\theta \), by Theorem 2.2, there are \(C > 0\) and \(\eta \ge 1\) such that for all \(\theta \in \mathscr {D}_d^\epsilon \),

$$\begin{aligned} 1 - \bigg | \frac{\kappa (s + i\theta )}{\kappa (s)} \bigg |^2 \ge C {\text {dist}}(\delta , \partial \mathcal {M})^\eta . \end{aligned}$$

Hence,

$$\begin{aligned} \bigg | \frac{\kappa (s + i \theta )}{\kappa (s)} \bigg |^2 \le 1 - C {\text {dist}}(\delta , \partial \mathcal {M})^\eta \le e^{-C {\text {dist}}(\delta , \partial \mathcal {M})^\eta }. \end{aligned}$$

Since

$$\begin{aligned} n {\text {dist}}(\delta , \partial \mathcal {M})^\eta = n^{\frac{1}{2}} \big (n {\text {dist}}(\delta , \partial \mathcal {M})^{2 \eta }\big )^{\frac{1}{2}} \end{aligned}$$

we obtain that

$$\begin{aligned} e^{-C n {\text {dist}}(\delta , \partial \mathcal {M})^\eta } \le C' n^{-\frac{d}{2}-1} {\text {dist}}(\delta , \partial \mathcal {M})^{-2\eta }, \end{aligned}$$

provided n is large enough. We observe that for any \(u \in \mathbb {R}^d\),

$$\begin{aligned} B_s(u, u)&= \frac{1}{2} \sum _{v, v' \in \mathcal {V}} \frac{p(v) e^{{\langle s, v\rangle }}}{\kappa (s)} \cdot \frac{p(v') e^{{\langle s, v'\rangle }}}{\kappa (s)} {\langle u, v-v'\rangle }^2 \\&\le \frac{1}{\min \{p(v)^2 : v \in \mathcal {V}\}} B_0(u, u), \end{aligned}$$

thus for any \(u, u' \in \mathbb {R}^d\),

$$\begin{aligned} B_s(u, u') \le \frac{1}{\min \{p(v)^2 : v \in \mathcal {V}\}} \sqrt{B_0(u, u) B_0(u', u')}, \end{aligned}$$

and hence

$$\begin{aligned} \det B_s \le C \det B_0. \end{aligned}$$
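The passage from the pointwise domination of the quadratic forms to the determinant inequality (with a dimension-dependent constant) can be checked numerically; the sketch below uses a hypothetical non-uniform nearest-neighbour walk on \(\mathbb {Z}^2\):

```python
import math

# A hypothetical finite-range walk on Z^2 (non-uniform nearest neighbour).
steps = {(1, 0): 0.3, (-1, 0): 0.2, (0, 1): 0.3, (0, -1): 0.2}

def B(s):
    """2x2 matrix of the form B_s(u, u) = (1/2) sum w_v w_v' <u, v-v'>^2,
    where w_v = p(v) e^{<s, v>} / kappa(s)."""
    kap = sum(p * math.exp(s[0] * v[0] + s[1] * v[1]) for v, p in steps.items())
    w = {v: p * math.exp(s[0] * v[0] + s[1] * v[1]) / kap for v, p in steps.items()}
    m = [[0.0, 0.0], [0.0, 0.0]]
    for v, wv in w.items():
        for vp, wvp in w.items():
            d = (v[0] - vp[0], v[1] - vp[1])
            for i in range(2):
                for j in range(2):
                    m[i][j] += 0.5 * wv * wvp * d[i] * d[j]
    return m

def qf(m, u):
    # Quadratic form u^T m u.
    return m[0][0] * u[0] * u[0] + 2 * m[0][1] * u[0] * u[1] + m[1][1] * u[1] * u[1]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]
```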

Therefore, we conclude that

$$\begin{aligned} \int _{\mathscr {D}_d^\epsilon } \bigg |\frac{\kappa (s+i\theta )}{\kappa (s)}\bigg |^n {\, \mathrm d}\theta \le C n^{-\frac{d}{2}-1} (\det B_s)^{-\frac{1}{2}} {\text {dist}}(\delta , \partial \mathcal {M})^{-2\eta }. \end{aligned}$$

Next, let us consider the integral over

$$\begin{aligned} \bigcup _{\theta _0 \in \mathcal {U}} \big \{\theta \in [-\pi , \pi )^d : {\left|\theta - \theta _0 \right|}_2 < \epsilon \big \}. \end{aligned}$$
(24)

By taking \(\epsilon \) satisfying

$$\begin{aligned} \epsilon < \min \bigg \{\frac{{\left|\theta _0 - \theta _0' \right|}_2}{2} : \theta _0, \theta _0' \in \mathcal {U},\ \theta _0 \ne \theta _0' \bigg \}, \end{aligned}$$
(25)

we guarantee that the sets in (24) are disjoint. Moreover, for any \(\theta _0 \in \mathcal {U}\), by the change of variables and (21) we get

$$\begin{aligned} \int _{{\left|\theta - \theta _0 \right|}_2< \epsilon } \bigg (\frac{\kappa (s + i\theta )}{\kappa (s)}\bigg ) ^n e^{-i{\langle \theta , x\rangle }} {\, \mathrm d}\theta&= \int _{{\left|\theta \right|}_2< \epsilon } \bigg ( \frac{\kappa (s + i\theta + i\theta _0)}{\kappa (s)} \bigg )^n e^{-i{\langle \theta +\theta _0, x\rangle }} {\, \mathrm d}\theta \\&= \int _{{\left|\theta \right|}_2 < \epsilon } \bigg (\frac{\kappa (s + i \theta )}{\kappa (s)}\bigg )^n e^{-i{\langle \theta , x\rangle }} {\, \mathrm d}\theta . \end{aligned}$$

Therefore,

$$\begin{aligned} \sum _{\theta _0 \in \mathcal {U}} \int _{{\left|\theta -\theta _0 \right|}_2< \epsilon } \bigg (\frac{\kappa (s + i \theta )}{\kappa (s)}\bigg )^n e^{-i{\langle \theta , x\rangle }} {\, \mathrm d}\theta = r \int _{{\left|\theta \right|}_2 < \epsilon } \bigg ( \frac{\kappa (s + i \theta )}{\kappa (s)} \bigg )^n e^{-i{\langle \theta , x\rangle }} {\, \mathrm d}\theta . \end{aligned}$$

Further, by (16), the function \(z \mapsto {\text {Log}}\kappa (z)\), where \({\text {Log}}\) denotes the principal value of the complex logarithm, is analytic in the strip \(\mathbb {R}^d + i B\) where

$$\begin{aligned} B = \Big \{b \in \mathbb {R}^d : {\left|b \right|}_2 < \big (2 \cdot \max \{{\left|v \right|}_2 : v \in \mathcal {V}\}\big )^{-1} \Big \}. \end{aligned}$$

Since for any \(u \in \mathbb {R}^d\) we have

$$\begin{aligned} D_u^2 {\text {Log}}\kappa (z) = \frac{1}{2} \sum _{v, v' \in \mathcal {V}} \frac{p(v) e^{{\langle z, v\rangle }}}{\kappa (z)} \cdot \frac{p(v') e^{{\langle z, v'\rangle }}}{\kappa (z)} {\langle u, v-v'\rangle }^2, \end{aligned}$$

by Lemma 2.4, there is \(C > 0\) such that for all \(\sigma \in \mathbb {N}^d\) and \(a + i b \in \mathbb {R}^d + i B\),

$$\begin{aligned} \big |\partial ^{\sigma } \big ( D_u^2 {\text {Log}}\kappa \big )(a+ib) \big | \le C^{{|{\sigma } |} + 1} \sigma ! B_a(u, u). \end{aligned}$$
(26)

If

$$\begin{aligned} \epsilon < \big (2 \cdot \max \{{\left|v \right|}_2 : v \in \mathcal {V}\}\big )^{-1}, \end{aligned}$$
(27)

then for \({\left|\theta \right|}_2 < \epsilon \) we can define

$$\begin{aligned} \psi (s, \theta ) = {\text {Log}}\kappa (s+i\theta ) - \log \kappa (s) - i{\langle \theta , \delta \rangle } + \frac{1}{2}B_s(\theta , \theta ). \end{aligned}$$

Hence,

$$\begin{aligned} \int _{{\left|\theta \right|}_2< \epsilon } \bigg (\frac{\kappa (s+i\theta )}{\kappa (s)}\bigg )^n e^{-i {\langle \theta , x\rangle }} {\, \mathrm d}\theta = \int _{{\left|\theta \right|}_2 < \epsilon } e^{-\frac{n}{2} B_s(\theta , \theta )} e^{n\psi (s, \theta )} {\, \mathrm d}\theta , \end{aligned}$$

and to finish the proof of the theorem it is enough to show the following.

Claim 4

$$\begin{aligned} \int _{{\left|\theta \right|}_2 < \epsilon } e^{-\frac{n}{2}B_s(\theta , \theta )} e^{n \psi (s, \theta )} {\, \mathrm d}\theta = \Big (\frac{2\pi }{n}\Big )^{\frac{d}{2}} (\det B_s)^{-\frac{1}{2}} \Big (1+ \mathcal {O}\big (n^{-1} {\text {dist}}(\delta , \partial \mathcal {M})^{-2 \eta }\big )\Big ). \end{aligned}$$
(28)

Using the integral form of the remainder, we get

$$\begin{aligned} \psi (s, \theta ) = -\frac{i}{2} \int _0^1 (1 - t)^2 D_\theta ^3 {\text {Log}}\kappa (s + i t \theta ) {\, \mathrm d}t. \end{aligned}$$

In view of (26), there is \(c > 0\) such that for all \(s \in \mathbb {R}^d\) and \(\theta \in B\),

$$\begin{aligned} {|{\psi (s, \theta )} |} \le c {\left|\theta \right|}_2 B_s(\theta , \theta ). \end{aligned}$$
(29)

Therefore, taking \(c\) to be the constant in (29) and choosing

$$\begin{aligned} \epsilon < (4 c)^{-1}, \end{aligned}$$
(30)

if \({\left|\theta \right|}_2 < \epsilon \) then we may estimate

$$\begin{aligned} {|{ \psi (s, \theta ) } |} \le \frac{1}{4} B_s(\theta , \theta ). \end{aligned}$$
(31)

Next, we write

$$\begin{aligned} e^{n\psi (s, \theta )}&= \left( e^{n\psi (s, \theta )} - 1 - n\psi (s, \theta )\right) + \left( n\psi (s, \theta ) - n \frac{D_\theta ^3 \psi (s, 0)}{3!}\right) \\&\quad + n \frac{D_\theta ^3 \psi (s, 0)}{3!} + 1, \end{aligned}$$

and we split (28) into four corresponding integrals.

Since for \(a \in \mathbb {C}\),

$$\begin{aligned} {|{e^a - 1 - a} |} \le \frac{{|{a} |}^2}{2} e^{{|{a} |}}, \end{aligned}$$

by (29) and (31), the first integrand can be estimated as follows

$$\begin{aligned} \left| e^{n\psi (s, \theta )} - 1 - n\psi (s, \theta ) \right|&\le \frac{1}{2} e^{\frac{n}{4} B_s(\theta , \theta )} \big (n \psi (s, \theta )\big )^2 \\&\le C e^{\frac{n}{4} B_s(\theta , \theta )} n^2 {\left|\theta \right|}_2^2 B_s(\theta , \theta )^2. \end{aligned}$$

Because

$$\begin{aligned} {\left|\theta \right|}_2^2 \le {\left||{B_s^{-1}} \right||} B_s(\theta , \theta ), \end{aligned}$$
(32)

we obtain

$$\begin{aligned}&\bigg |\int _{{\left|\theta \right|}_2< \epsilon } e^{-\frac{n}{2} B_{s}(\theta , \theta )} \big ( e^{n\psi (s, \theta )} - 1 - n\psi (s, \theta ) \big ) {\, \mathrm d}\theta \bigg |\nonumber \\&\quad \le C n^2 {\left||{B_s^{-1}} \right||} \int _{{\left|\theta \right|}_2 < \epsilon } e^{-\frac{n}{4} B_s(\theta , \theta )} B_s(\theta , \theta )^3 {\, \mathrm d}\theta \nonumber \\&\quad \le C n^{-\frac{d}{2}-1} (\det B_{s})^{-\frac{1}{2}} \big ||B_{s}^{-1} \big ||. \end{aligned}$$
(33)

Furthermore, by (26),

$$\begin{aligned} \bigg | \psi (s, \theta ) - \frac{D_\theta ^3 \psi (s, 0)}{3!} \bigg |&= \left| \frac{1}{3!} \int _0^1 (1-t)^3 D_\theta ^4 {\text {Log}}\kappa (s + i t\theta ) {\, \mathrm d}t \right| \\&\le C {\left|\theta \right|}_2^2 B_s(\theta , \theta ), \end{aligned}$$

which together with (32) implies

$$\begin{aligned}&\bigg | \int _{{\left|\theta \right|}_2< \epsilon } e^{-\frac{n}{2} B_{s}(\theta , \theta )} n \bigg ( \psi (s, \theta ) - \frac{D_\theta ^3 \psi (s, 0)}{3!} \bigg ) {\, \mathrm d}\theta \bigg | \nonumber \\&\quad \le C n {\left||{B_s^{-1}} \right||} \int _{{\left|\theta \right|}_2 < \epsilon } e^{-\frac{n}{2} B_s(\theta , \theta )} B_s(\theta , \theta )^2 {\, \mathrm d}\theta \nonumber \\&\quad \le C n^{-\frac{d}{2} - 1} (\det B_{s})^{-\frac{1}{2}} \big ||B_{s}^{-1} \big ||. \end{aligned}$$
(34)

The third integral equals zero because the integrand is an odd function of \(\theta \). The last one we estimate using (32):

$$\begin{aligned} \bigg |\int _{{\left|\theta \right|}_2 < \epsilon } e^{-\frac{n}{2} B_s(\theta , \theta )} {\, \mathrm d}\theta - \int _{\mathbb {R}^d} e^{-\frac{n}{2} B_s(\theta , \theta )} {\, \mathrm d}\theta \bigg |&\le e^{-\frac{n}{4} \epsilon ^2 {\left||{B_s^{-1}} \right||}^{-1}} \int _{\mathbb {R}^d} e^{-\frac{n}{4} B_s(\theta , \theta )} {\, \mathrm d}\theta \nonumber \\&\le C n^{-\frac{d}{2} -1} (\det B_s)^{-\frac{1}{2}} {\left||{B_s^{-1}} \right||}. \end{aligned}$$
(35)

By putting estimates (33), (34) and (35) together, we obtain

$$\begin{aligned} \int _{{\left|\theta \right|}_2 < \epsilon } e^{-\frac{n}{2}B_s(\theta , \theta )} e^{n\psi (s, \theta )} {\, \mathrm d}\theta = n^{-\frac{d}{2}} (\det B_s)^{-\frac{1}{2}} \big ( (2\pi )^{\frac{d}{2}} + \mathcal {O}\big (n^{-1} {\left||{B_s^{-1}} \right||}\big )\big ). \end{aligned}$$

Finally, by (8) and Theorem 2.2, there is \(C > 0\) such that for all \(\delta \in \mathcal {M}\) and any \(u \in \mathbb {R}^d\),

$$\begin{aligned} B_s(u, u) \ge C {\text {dist}}(\delta , \partial \mathcal {M})^{2 \eta } B_0(u, u). \end{aligned}$$
(36)

Hence,

$$\begin{aligned} {\left||{B_s^{-1}} \right||} = \Big (\min \{B_s(u, u) : {\left|u \right|}_2 = 1\}\Big )^{-1} \le C {\text {dist}}(\delta , \partial \mathcal {M})^{-2\eta }, \end{aligned}$$

which concludes the proof of Claim 4. \(\square \)
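Claim 4 can be probed numerically. For the simple random walk on \(\mathbb {Z}\) one has \(\kappa (u) = \cosh u\), \(s = {{\,\textrm{artanh}\,}}\delta \) and \(B_s = 1 - \delta ^2\); the sketch below evaluates the oscillatory integral by a midpoint rule and compares it with \((2\pi /n)^{\frac{1}{2}} B_s^{-\frac{1}{2}}\), the value predicted by the proof:

```python
import cmath
import math

def claim4_ratio(n, delta, eps=0.5, N=20000):
    """Midpoint-rule value of the integral in Claim 4 for the simple walk on Z,
    divided by the predicted leading term (2*pi/n)^{1/2} * B_s^{-1/2}."""
    s = math.atanh(delta)
    ks = math.cosh(s)
    Bs = 1.0 - delta * delta          # (log kappa)''(s)
    x = n * delta                     # so that the phase equals e^{-i n <theta, delta>}
    h = 2 * eps / N
    total = 0.0
    for k in range(N):
        th = -eps + (k + 0.5) * h
        total += ((cmath.cosh(complex(s, th)) / ks) ** n * cmath.exp(-1j * th * x)).real
    total *= h
    return total / math.sqrt(2 * math.pi / (n * Bs))
```

The ratio tends to 1 with an error of order \(n^{-1}\), in line with the claim.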

Although the asymptotic in Theorem 3.1 is uniform on a large region with respect to n and x, it depends on the implicit function \(s(\delta )\). By (36), we may estimate

$$\begin{aligned} \det B_s \ge C {\text {dist}}(\delta , \partial \mathcal {M})^{2 d \eta }. \end{aligned}$$

Since \(\mathcal {M}\ni \delta \mapsto s(\delta )\) is real analytic, for each \(\epsilon > 0\) there is \(C_\epsilon > 0\) such that if \({\text {dist}}(\delta , \partial \mathcal {M}) \ge \epsilon \) then

$$\begin{aligned} {\left|s \right|}_1 \le C_\epsilon {\left|\delta - \delta _0 \right|}_1, \end{aligned}$$
(37)

and

$$\begin{aligned} \Big | \big ( \det B_s \big )^{-\frac{1}{2}} - \big ( \det B_0 \big )^{-\frac{1}{2}} \Big | \le C_\epsilon {\left|\delta - \delta _0 \right|}_1, \end{aligned}$$

where \(\delta _0 = \sum _{v \in \mathcal {V}} p(v) v\). In most applications the following form of the asymptotic of p(nx) is sufficient.

Corollary 3.2

For every \(\epsilon > 0\), \(j \in \{0, \ldots , r-1\}\), \(n \in \mathbb {N}\) and \(x \in X_j\), if \(n \equiv j \pmod r\) then

$$\begin{aligned} p(n; x) = (2\pi n)^{-\frac{d}{2}} (\det B_0)^{-\frac{1}{2}} e^{-n \phi (\delta )} \left( r + \mathcal {O}({\left|\delta - \delta _0 \right|}_1) + \mathcal {O}(n^{-1})\right) , \end{aligned}$$

otherwise \(p(n; x) = 0\), provided that \({\text {dist}}(\delta , \partial \mathcal {M}) \ge \epsilon \).

Remark 1

It is not possible to replace \(\phi (\delta )\) by \(\frac{1}{2} B_0^{-1}(\delta - \delta _0, \delta - \delta _0)\) without introducing an error term of a very different nature. Indeed, by (12),

$$\begin{aligned} e^{-n \phi (\delta )} = e^{-\frac{n}{2}B_0^{-1}(\delta -\delta _0, \delta - \delta _0)} e^{\mathcal {O}(n {\left|\delta - \delta _0 \right|}^3)}. \end{aligned}$$

Since \(\delta _0 \in \mathcal {M}\), if \(\delta \) approaches \(\partial \mathcal {M}\) then \(n {\left|\delta - \delta _0 \right|}^3\) cannot be small. Notice that the third power may be replaced by a higher degree if the random walk has vanishing moments. In particular, for the simple random walk on \(\mathbb {Z}^d\) (see Example 1), for all \(\epsilon > 0\), \(x \in \mathbb {Z}^d\) and \(n \in \mathbb {N}\), if \({\left|x \right|}_1 + n \in 2 \mathbb {N}\) then

$$\begin{aligned} p(n; x) = (2\pi )^{-\frac{d}{2}} \left( \frac{d}{n}\right) ^{\frac{d}{2}} e^{-\frac{d}{2n} {\left|x \right|}^2_2} \big (2 + \mathcal {O}(n {|{\delta } |}^4) + \mathcal {O}(n^{-1})\big ) \end{aligned}$$

otherwise \(p(n; x) = 0\), uniformly with respect to n and x provided that \({\left|x \right|}_1 \le (1-\epsilon ) n\).
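The size of the replacement error is easy to see for the simple walk on \(\mathbb {Z}\): the Taylor expansion \(\phi (\delta ) = \frac{\delta ^2}{2} + \frac{\delta ^4}{12} + \ldots \) has no cubic term, since the third moment vanishes, so the discrepancy between \(e^{-n\phi (\delta )}\) and the Gaussian \(e^{-\frac{n}{2}\delta ^2}\) is a factor \(e^{\mathcal {O}(n \delta ^4)}\); a numerical sketch:

```python
import math

def phi(d):
    # Rate function of the simple random walk on Z (cf. Example 4).
    return 0.5 * (1 - d) * math.log1p(-d) + 0.5 * (1 + d) * math.log1p(d)

# phi(d) - d^2/2 is of order d^4/12, so e^{-n phi(d)} differs from the
# Gaussian e^{-n d^2/2} by a factor e^{O(n d^4)}.
```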

Remark 2

It is relatively easy to obtain a global upper bound: for all \(n \in \mathbb {N}\) and \(x \in \mathbb {Z}^d\),

$$\begin{aligned} p(n; x) \le e^{-n\phi (\delta )}. \end{aligned}$$

Indeed, by Claim 2, for \(u \in \mathbb {R}^d\), we have

$$\begin{aligned} p(n; x) = \bigg (\frac{1}{2\pi }\bigg )^d \kappa (u)^n e^{-{\langle u, x\rangle }} \int _{\mathscr {D}_d} \bigg ( \frac{\kappa (u+i\theta )}{\kappa (u)} \bigg )^n e^{-i {\langle \theta , x\rangle }} {\, \mathrm d}\theta . \end{aligned}$$

Hence, since \({|{\kappa (u+i\theta )} |} \le \kappa (u)\), by Theorem 2.1,

$$\begin{aligned} p(n; x) \le \min \big \{ \kappa (u)^n e^{-{\langle u, x\rangle }} : u \in \mathbb {R}^d \big \} \le e^{-n\phi (\delta )}. \end{aligned}$$
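This Chernoff-type bound is sharp at the boundary: for the simple walk on \(\mathbb {Z}\) one has \(\phi (\pm 1) = \log 2\), so \(p(n; \pm n) = 2^{-n} = e^{-n \phi (\pm 1)}\). A quick exhaustive check for small n:

```python
import math

def p_exact(n, x):
    # Exact transition function of the simple random walk on Z.
    if (n + x) % 2 or abs(x) > n:
        return 0.0
    return math.comb(n, (n + x) // 2) / 2 ** n

def phi(d):
    # Rate function of the simple walk on Z; phi(+-1) = log 2 by continuity.
    if abs(d) == 1.0:
        return math.log(2.0)
    return 0.5 * (1 - d) * math.log1p(-d) + 0.5 * (1 + d) * math.log1p(d)
```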