1 Introduction

Estimates at infinity for the heat kernel on the Heisenberg group or, more generally, on H-type groups have attracted considerable interest in recent decades (see, e.g. [2, 7, 10, 12, 16, 17]). For H-type groups in particular, some results were recently obtained independently by Eldredge [7] and Li [17]. In [7], Eldredge provides precise upper and lower bounds for the heat kernel \(p_s\) and its horizontal gradient \(\nabla _\mathcal {H}p_s\). In [17], Li provides asymptotic estimates for the heat kernel \(p_s\), as well as upper bounds for all its derivatives. Nevertheless, to the best of our knowledge, sharp asymptotic estimates at infinity for the derivatives of \(p_s\) are still missing. In this paper, we address this problem by providing asymptotic expansions at infinity of the heat kernel and of all its derivatives.

Let G be an H-type group identified with \(\mathbb {R}^{2 n}\times \mathbb {R}^m\) via the exponential map and denote by \((x,t)\) its generic element, where \(x\in \mathbb {R}^{2n}\) and \(t\in \mathbb {R}^m\). It is well known that the heat kernel \(p_s\) is a function of \(R:=|{x}|^2/4\) and |t|. Outside the region \(\{(x,t)\in G:t=0\}\), any derivative of \(p_s(x,t)\) can thus be written as a finite linear combination with smooth coefficients of the functions

$$\begin{aligned} p_{s,k_1,k_2}(x,t)=\frac{\partial ^{k_1}}{\partial R^{k_1}}\frac{\partial ^{k_2}}{\partial |t|^{k_2}} p_{s}(x,t), \end{aligned}$$

for suitable \(k_1,k_2 \in \mathbb {N}\). We call these functions radial partial derivatives of \(p_s\). Thus, everything can be reduced to finding asymptotic estimates at infinity of \(p_{s,k_1,k_2}\) for every \(k_1,k_2 \in \mathbb {N}\); these will yield asymptotic estimates of every desired derivative of \(p_s\).

We divide the paper into five sections. In the next section, we fix the notation and recall some preliminary facts on H-type groups and the method of stationary phase. In the central Sects. 3 and 4, the functions \(p_{s,k_1,k_2}\) are studied. In Sect. 3, we provide asymptotic estimates for \(p_{s,k_1,k_2}\) in the case \(m=1\), namely when G is a Heisenberg group; in Sect. 4, we extend the results of Sect. 3 to the more general class of H-type groups. This is done via a reduction to the case \(m=1\) when m is odd; a descent method is then applied in order to cover the case of even m. The preliminary study of the case \(m=1\) is necessary in all but one instance, where the general case could be treated directly; nevertheless, we include both proofs for the sake of clarity. As the reader may see, our Theorem 4.2 and Corollary 4.15 cover the cases of [17, Theorems 1.4 and 1.5] and [7, Theorem 4.2] as particular instances and imply [17, Theorems 1.1 and 1.2] and [7, Theorem 4.4] as easy corollaries, by means of formula (5.1). In Sect. 5, we show an interesting application of our estimates, providing a different proof of a theorem due to Inglis [14] which concerns the discreteness of the spectrum of certain Ornstein–Uhlenbeck operators on G.

We emphasize that our methods are strongly related to those employed by Gaveau [10] and then Hueber and Müller [12] in the case of the Heisenberg group \(\mathbb {H}^1\); some ideas are also taken from the work of Eldredge [7]. In particular, we borrow from [10] and [12] the use of the method of stationary phase, though in a stronger form provided by Hörmander [11].

2 Preliminaries

2.1 H-type groups

An H-type group G is a 2-step stratified group whose Lie algebra \(\mathfrak {g}\) is endowed with an inner product \((\, \cdot \,,\, \cdot \,)\) such that

  1.

    if \(\mathfrak {z}\) is the centre of \(\mathfrak {g}\) and \(\mathfrak {h}=\mathfrak {z}^\perp \), then \([\mathfrak {h},\mathfrak {h}]=\mathfrak {z}\);

  2.

    for every \(Z \in \mathfrak {z}\), the map \(J_Z:\mathfrak {h} \rightarrow \mathfrak {h}\),

    $$\begin{aligned}(J_Z X,Y)=( Z, [X,Y]) \qquad \forall X,Y\in \mathfrak {h},\end{aligned}$$

    is an isometry whenever \((Z, Z)=1\).

In particular, \(\mathfrak {g}\) stratifies as \(\mathfrak {h} \oplus \mathfrak {z}\). It is very convenient, however, to realize an H-type group G as \(\mathbb {R}^{2 n}\times \mathbb {R}^m \), for some \(n,m\in \mathbb {N}\), via the exponential map. More precisely, we shall denote by \((x,t)\) the elements of G, where \(x\in \mathbb {R}^{2n}\) and \(t \in \mathbb {R}^m\). We denote by \((e_1,\dots ,e_{2n})\) and \((u_1,\dots ,u_m)\) the standard bases of \(\mathbb {R}^{2 n}\) and \(\mathbb {R}^{m}\), respectively. Under this identification, the Haar measure is the Lebesgue measure. The maps \(\{J_Z :Z\in \mathfrak {z}\}\) are identified with \(2n \times 2n\) skew-symmetric matrices \(\{J_t :t\in \mathbb {R}^m\}\), which are orthogonal whenever \(|t|=1\). This identification endows \(\mathbb {R}^{2n}\times \mathbb {R}^{m}\) with the group law

$$\begin{aligned} (x,t)\cdot (x',t') = \left( x+x',t+t' + \frac{1}{2} \sum _{k=1}^m (J_{u_k}x,x') u_k\right) . \end{aligned}$$
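As an illustrative aside (not part of the argument), the group law can be tested numerically in the simplest case \(G=\mathbb {H}^1\), i.e. \(2n=2\) and \(m=1\), taking \(J_{u_1}\) to be the standard symplectic matrix, so that \((J_{u_1}x,x')=x_2x'_1-x_1x'_2\). The following sketch checks associativity and the inverse \((x,t)^{-1}=(-x,-t)\).

```python
# Numerical sketch of the group law for H^1 (2n = 2, m = 1), with
# J_{u_1} the standard symplectic matrix: (J x, x') = x2*y1 - x1*y2.
def mult(g, h):
    (x1, x2, t), (y1, y2, s) = g, h
    return (x1 + y1, x2 + y2, t + s + 0.5 * (x2 * y1 - x1 * y2))

def inv(g):
    # In a 2-step group realized via the exponential map, (x,t)^{-1} = (-x,-t)
    x1, x2, t = g
    return (-x1, -x2, -t)

g, h, k = (1.0, 2.0, 0.5), (-0.3, 0.7, 1.1), (0.4, -1.2, -0.2)
assoc_lhs = mult(mult(g, h), k)
assoc_rhs = mult(g, mult(h, k))
neutral = mult(g, inv(g))
```

Associativity holds because the central correction \(\frac{1}{2}(Jx,x')\) is bilinear, which the test confirms on a sample triple.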

A basis of left-invariant vector fields for \(\mathfrak {g}\) is

$$\begin{aligned} X_j = \partial _{x_j} + \frac{1}{2}\sum _{k=1}^{m}( J_{u_k} x,e_j) \partial _{t_k},\quad j=1,\dots ,2n; \qquad T_k = \partial _{t_k}, \quad k=1,\dots ,m. \end{aligned}$$

In particular, \((X_j)_{1\le j \le 2n}\) is a basis for the first layer \(\mathfrak {h}\cong \mathbb {R}^{2 n}\). If f is a sufficiently smooth function on G, its horizontal gradient will be the vector field \(\nabla _\mathcal {H} f :=\sum _{j=1}^{2n} (X_j f)X_j\), and its sub-Laplacian \(\mathcal {L}f:=-\sum _{j=1}^{2n} X_j^2f\). We refer the reader to [3] for further details.

2.2 The heat kernel

On an H-type group \(G \simeq \mathbb {R}^{2 n}\times \mathbb {R}^m \), the heat kernel \((p_s)_{s>0}\) has the form

$$\begin{aligned} p_s(x,t)= \frac{1}{(4\pi )^n (2\pi )^m s^{n+m}}\int _{\mathbb {R}^m}e^{\frac{i}{s}(\lambda , t)-\frac{|{x}|^2}{4s}|\lambda |\coth (|\lambda |)} \left( \frac{ |\lambda |}{\sinh |\lambda |}\right) ^n\,\mathrm {d}\lambda , \end{aligned}$$
(2.1)

for every \(s>0\) and every \((x,t)\in G\) (see [10] or [13] for the Heisenberg groups, [20] or [24] for H-type groups). For the sake of clarity, we shall sometimes stress the dependence of \(p_s\) on the dimension m of the centre of G by writing \(p^{(m)}_s\) instead of \(p_s\).

We begin by writing the heat kernel (2.1) in a more convenient form. Let \(\mathcal {R}\) be an isometry such that \(\mathcal {R} t = |t|u_1\), where \(u_1\) is the first element of the canonical basis of the centre of G, namely \(\mathbb {R}^m\). Then make the change of variables \(\lambda \mapsto \mathcal {R}^{-1} \lambda \) in (2.1), which gives

$$\begin{aligned} p_s(x,t)= \frac{1}{(4\pi )^n (2\pi )^m s^{n+m}}\int _{\mathbb {R}^m}e^{\frac{i}{s}( \lambda , u_1) |{t}|-\frac{|{x}|^2}{4s}|\lambda |\coth (|\lambda |)} \left( \frac{ |\lambda |}{\sinh |\lambda |}\right) ^n\,\mathrm {d}\lambda . \end{aligned}$$
(2.2)

It is now more evident that \(p_s\) depends only on \(|{x}|\) and \(|{t}|\). This leads us to the following definition.

Definition 2.1

Let \(R=\frac{|{x}|^2}{4}\). For all \(s>0\) and for all \(k_1,k_2\in \mathbb {N}\), define

$$\begin{aligned} \begin{aligned} p_{s,k_1,k_2}(x,t)&:=\frac{\partial ^{k_1}}{\partial R^{k_1}} \frac{\partial ^{k_2}}{\partial |t|^{k_2}}p_{s}(x,t)= \frac{ (-1)^{k_1}i^{k_2}}{(4\pi )^{n}(2\pi )^{m} s^{n+m+k_1+k_2}} \\ {}&\qquad \times \int _{\mathbb {R}^m} e^{\frac{i}{s}|t|( \lambda , u_1) -\frac{|{x}|^2}{4 s}|\lambda | \coth |\lambda |} \frac{|{\lambda }|^{n+k_1}\cosh (|\lambda |)^{k_1}}{\sinh (|\lambda |)^{n+k_1}}(\lambda ,u_1)^{k_2}\,\mathrm {d}\lambda . \end{aligned} \end{aligned}$$
(2.3)

Notice that \(p_s\) is a smooth function of R and \(|{t}|\) by formula (2.2), so that the definition of \(p_{s,k_1,k_2}\) is meaningful on the whole of G. Moreover, consider a differential operator on G of the form

$$\begin{aligned} X^\gamma = \frac{\partial ^{|{\gamma }|}}{\partial x^{\gamma _1} \partial t^{\gamma _2}} \end{aligned}$$

for some \(\gamma = (\gamma _1,\gamma _2)\in \mathbb {N}^{2n}\times \mathbb {N}^m\). By means of Faà di Bruno’s formula, the function \(X^\gamma p_s\) can be written on \(\{t\ne 0\}\) as a finite linear combination with smooth coefficients of the functions \(p_{s,k_1,k_2}\), for suitable \(k_1\) and \(k_2\). Since \(X^\gamma p_s\) is uniformly continuous, the value of \(X^\gamma p_s(x,0)\) can then be recovered by continuity uniformly in \(x\in \mathbb {R}^{2n}\). Therefore, one can obtain asymptotic estimates for \(X^\gamma p_s\) by combining appropriately some given estimates of \(p_{s,k_1,k_2}\) (see also Remark 4.16). We shall see an application of this in Sect. 5.

Observe that it will be sufficient to study \(p_{1,k_1,k_2}\), since

$$\begin{aligned} p_{s,k_1,k_2}(x,t)= \frac{1}{s^{n+m+k_1+k_2}} p_{1,k_1,k_2}\left( \frac{x}{\sqrt{s}},\frac{t}{s}\right) \end{aligned}$$

for every \(s>0\), \(k_1,\,k_2 \in \mathbb {N}\) and \((x,t)\in G\). Hence, we shall focus only on \(p_{1,k_1,k_2}\). Moreover, from now on we shall fix the integers \(k_1,k_2\ge 0\). Of course, the choice \(k_1=k_2=0\) gives the heat kernel \(p_s\).
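The scaling identity above can be verified numerically in the Heisenberg case \(n=m=1\) with \(k_1=k_2=0\). The following is an illustrative quadrature sketch only (the truncation length `L` and step count `N` are ad hoc numerical choices, not quantities from the text); it approximates (2.1) by the trapezoidal rule, using that only the real part of the integrand contributes.

```python
import cmath
import math

def heat_kernel_m1(x_norm, t, s, n=1, L=40.0, N=20000):
    # Rough trapezoidal quadrature of (2.1) for m = 1 (Heisenberg case).
    # L (truncation of the integral) and N (number of steps) are ad hoc.
    R = x_norm ** 2 / 4.0
    h = 2.0 * L / N
    total = 0.0
    for k in range(N + 1):
        lam = -L + k * h
        if abs(lam) < 1e-8:
            lam_coth, kernel = 1.0, 1.0  # limits at lambda = 0
        else:
            lam_coth = lam / math.tanh(lam)
            kernel = (lam / math.sinh(lam)) ** n
        val = cmath.exp(1j * lam * t / s - (R / s) * lam_coth) * kernel
        w = 0.5 if k in (0, N) else 1.0
        total += w * val.real
    return total * h / ((4 * math.pi) ** n * (2 * math.pi) * s ** (n + 1))

# p_s(x,t) = s^{-(n+m)} p_1(x/sqrt(s), t/s) with n = m = 1, so s^{-2}:
lhs = heat_kernel_m1(2.0, 1.0, 2.0)
rhs = 2.0 ** (-2) * heat_kernel_m1(2.0 / math.sqrt(2.0), 1.0 / 2.0, 1.0)
```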

Remark 2.2

It is well known (see [6] or [3, Remark 3.6.7]) that there exist n and m for which \(\mathbb {R}^{2n} \times \mathbb {R}^m\) cannot represent any H-type group. Nevertheless, (2.1) and hence (2.3) make sense for every positive \(n,m\in \mathbb {N}\), and for such n and m we shall then study \(p_{s,k_1,k_2}\).

Definition 2.3

(cf. [12]) For every \((x,t) \in G\), define

$$\begin{aligned}\omega :=\frac{|{t}|}{R}, \qquad \delta :=\sqrt{\frac{R}{\pi |{t}|}},\qquad \kappa :=2\sqrt{\pi |{t}|R}.\end{aligned}$$

We shall split the asymptotic condition \((x,t)\rightarrow \infty \) into four cases, some of which depend on an arbitrary constant \(C>1\). In particular, the first one covers the case \(|{t}|/|{x}|^2\) bounded, while the other three are a suitable splitting of the case \(|{t}|/|{x}|^2 \rightarrow \infty \).

[Figure a: the list of the four asymptotic cases (I)–(IV).]

We shall describe the asymptotic behaviour of \(p_{1,k_1,k_2}\) in each of these four cases. The first two will both need the method of stationary phase (Theorem 2.7), while the other two can be treated through Taylor expansions.

In order to simplify the notation, we give some definitions.

Definition 2.4

Define the function \(\theta :(-\pi , \pi ) \rightarrow \mathbb {R}\) by

$$\begin{aligned} \theta (\lambda ):={\left\{ \begin{array}{l l } \frac{2\lambda -\sin (2\lambda )}{2\sin ^2(\lambda )} &{}{} \text{ if } \, \lambda \ne 0,\\ 0 &{}{} \text{ if }\, {\lambda = 0.}\end{array}\right. } \end{aligned}$$

Lemma 2.5

[10, §3, Lemma 3] \(\theta \) is an odd, strictly increasing analytic diffeomorphism between \((-\pi ,\pi )\) and \(\mathbb {R}\).

Definition 2.6

For every \(\omega \in \mathbb {R}\), set \(y_\omega :=\theta ^{-1}(\omega )\). For every \((x,t)\in G\) define

$$\begin{aligned} d(x,t):={\left\{ \begin{array}{ll} |{x}|\frac{y_\omega }{ \sin (y_\omega )} &{}\text {if}\, {x\ne 0\,\mathrm{and}\,t\ne 0,}\\ |{x}| &{}\text {if}\, {t=0,}\\ \sqrt{4\pi |{t}|} &{}\text {if}\, x=0. \end{array}\right. } \end{aligned}$$

It is worth observing that \(d(x,t)\) is the Carnot–Carathéodory distance between \((x,t)\) and the origin with respect to the horizontal distribution generated by the vector fields \(X_1,\dots , X_{2 n}\). See [15] but also [2, 7, 21] for a proof and further details.
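The functions \(\theta \), \(\theta ^{-1}\) and d of Definitions 2.4 and 2.6 can be implemented directly; the following sketch (illustrative only, with \(\theta ^{-1}\) computed by bisection) checks the monotonicity of \(\theta \), the continuity of d at \(t=0\), and the limit \(d\rightarrow \sqrt{4\pi |{t}|}\) as \(x\rightarrow 0\).

```python
import math

def theta(y):
    # theta(y) = (2y - sin(2y)) / (2 sin(y)^2) on (-pi, pi), theta(0) = 0
    if y == 0.0:
        return 0.0
    return (2.0 * y - math.sin(2.0 * y)) / (2.0 * math.sin(y) ** 2)

def theta_inv(omega):
    # theta is increasing, so invert by bisection on [0, pi) (omega >= 0)
    lo, hi = 0.0, math.pi - 1e-12
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if theta(mid) < omega:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def cc_distance(x_norm, t_norm):
    # d(x,t) from Definition 2.6, with omega = |t|/R = 4|t|/|x|^2
    if t_norm == 0.0:
        return x_norm
    if x_norm == 0.0:
        return math.sqrt(4.0 * math.pi * t_norm)
    y = theta_inv(4.0 * t_norm / x_norm ** 2)
    return x_norm * y / math.sin(y)
```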

2.3 The method of stationary phase

The main tool that we shall use is an easy corollary of Hörmander’s theorem of stationary phase [11, Theorem 7.7.5], stated in a form convenient for our needs. We include a proof for the sake of clarity. Given an open set \(V\subseteq \mathbb {R}^m\), we write \(\mathcal {E}(V)\) for the space of \(C^\infty \) complex-valued functions on V, endowed with the topology of locally uniform convergence of all the derivatives. If f is a twice differentiable function on an open neighbourhood of 0, we write \(P_{2,0}f\) for the Taylor polynomial of order 2 about 0 of f.

Theorem 2.7

Let V be an open neighbourhood of 0 in \(\mathbb {R}^m\), and let \(\mathscr {F},\,\mathscr {G}\) be bounded subsets of \(\mathcal {E}(V)\) such that

  1.

    \(\mathrm {Im}f(\lambda )\ge 0\) for every \(\lambda \in V\) and every \(f\in \mathscr {F}\). Moreover, there exist \(\eta >0\) and \(c_1>0\) such that \(B(0,2\eta )\subseteq V\) and \(\mathrm {Im}f(\lambda )\ge c_1|\lambda |\) whenever \(|{\lambda }|\ge \eta \) and \(f\in \mathscr {F}\);

  2.

    \(\mathrm {Im}f(0)=f'(0)=0\) and \( \det f''(0)\ne 0\) for all \(f\in \mathscr {F}\);

  3.

    there exists \(c_2>0\) such that \(|{f'(\lambda )}| \ge c_2|{\lambda }|\) for all \(|{\lambda }|\le 2\eta \) and for all \(f\in \mathscr {F}\);

  4.

    there exists \(c_3>0\) such that \(|{g(\lambda )}|\le c_3 e^{c_3 |{\lambda }|}\) whenever \(\lambda \in V\), for every \(g\in \mathscr {G}\).

Then, for every \(k\in \mathbb {N}\),

$$\begin{aligned} \int _{V} e^{iR f(\lambda )} g(\lambda )\,\mathrm {d}\lambda = e^{i R f (0)} \sqrt{\frac{(2\pi i)^m}{R^m \det f''(0)}}\sum _{j=0}^k \frac{L_{j,f} g}{R^j}+ O\left( \frac{1}{R^{\frac{m}{2}+k+1}}\right) \end{aligned}$$
(2.4)

as \(R\rightarrow +\infty \), uniformly as \(f\in \mathscr {F}\) and \(g\in \mathscr {G}\), where

$$\begin{aligned} \begin{aligned} L_{j,f}g= i^{-j}\sum _{\mu =0}^{2j}\frac{ (f''(0)^{-1} \partial , \partial )^{\mu +j} [ (f-P_{2,0}f )^\mu g ](0)}{ 2^{\mu +j} \mu !(\mu +j) !}. \end{aligned} \end{aligned}$$

In particular, \(L_{0,f}g=g(0)\).

Proof

Take some \(\tau \in C_c^\infty (\mathbb {R}^m)\) such that \(\chi _{B(0,\eta )}\le \tau \le \chi _{B(0,2\eta )}\). Then split the integral as

$$\begin{aligned} \int _{V} e^{i R f(\lambda )} g(\lambda )\,\mathrm {d}\lambda = \int _{V} e^{i R f(\lambda )} g(\lambda )\tau (\lambda )\,\mathrm {d}\lambda + \int _{V} e^{i R f(\lambda )} g(\lambda )(1-\tau (\lambda ))\,\mathrm {d}\lambda \end{aligned}$$

and apply [11, Theorem 7.7.5] to the first term, thanks to the first assumption in 1 and assumptions 2 and 3: this represents the main contribution to the integral and gives the right-hand side of (2.4). The second term is instead negligible, since by the second assumption in 1 and by 4 we get, if R is large enough,

$$\begin{aligned} \begin{aligned} \left|\int _{V} e^{iR f(\lambda )} g(\lambda )(1-\tau (\lambda ))\,\mathrm {d}\lambda \right|&\lesssim \int _{|{\lambda }|\ge \eta } e^{-R\,\mathrm {Im}f(\lambda )+c_3 |{\lambda }|}\,\mathrm {d}\lambda \lesssim \int _\eta ^\infty e^{-R c_1\rho +c_3\rho } \rho ^{m-1}\,\mathrm {d}\rho \\&=\int _\eta ^\infty e^{-(c_1 R \rho - (1+c_3)\rho )-\rho } \rho ^{m-1}\,\mathrm {d}\rho \\&\lesssim e^{-(c_1 R-(1+c_3)) \eta }\int _0^\infty e^{-\rho } \rho ^{m-1}\,\mathrm {d}\rho , \end{aligned} \end{aligned}$$

which is \(O\left( e^{-Rc_1\eta }\right) \). The proof is complete. \(\square \)

Remark 2.8

Theorem 2.7 covers more cases than only oscillatory integrals. Indeed, assume we have an integral of the form

$$\begin{aligned} \int _{V} e^{-R f(\lambda )} g(\lambda )\,\mathrm {d}\lambda \end{aligned}$$

where f is real. Under suitable assumptions, such integrals are usually treated via Laplace's method (see, e.g. [8, 25]). In this case, one can apply Theorem 2.7 directly, replacing \(\mathrm {Im}f\) with f in assumptions 1–4, thus getting

$$\begin{aligned} \int _{V} e^{-R f(\lambda )} g(\lambda )\,\mathrm {d}\lambda = \sqrt{\frac{(2\pi )^m}{R^m \det f''(0)}}\sum _{j=0}^k \frac{L_{j,f} g}{R^j}+ O\left( \frac{1}{R^{\frac{m}{2}+k+1}}\right) , \end{aligned}$$
(2.5)

with the obvious modifications to \(L_{j,f}g\). Accordingly, in such cases Theorem 2.7 will be referred to as Laplace's method.
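As a sanity check on (2.5) at leading order (\(k=0\)), one may test Laplace's method on the model integral \(\int _{\mathbb {R}} e^{-R\lambda ^2/2}\cos (\lambda )\,\mathrm {d}\lambda = \sqrt{2\pi /R}\, e^{-1/(2R)}\), whose exact value is known. This is an illustrative sketch only, not part of the theory; it confirms that the ratio to the leading term \(\sqrt{2\pi /R}\,g(0)\) is \(1+O(1/R)\).

```python
import math

def laplace_ratio(R, L=10.0, N=20000):
    # Trapezoidal value of the integral of exp(-R*lam^2/2)*cos(lam),
    # divided by the leading Laplace term sqrt(2*pi/R) * g(0) with g(0)=1.
    h = 2.0 * L / N
    total = 0.0
    for k in range(N + 1):
        lam = -L + k * h
        w = 0.5 if k in (0, N) else 1.0
        total += w * math.exp(-R * lam * lam / 2.0) * math.cos(lam)
    return (total * h) / math.sqrt(2.0 * math.pi / R)

# Exact ratio is exp(-1/(2R)), i.e. 1 - 1/(2R) + O(1/R^2):
r50 = laplace_ratio(50.0)
r200 = laplace_ratio(200.0)
```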

3 The Heisenberg group

In this section, we deal with the case \(m=1\), namely when \(G= \mathbb {H}^n\) is the Heisenberg group. The function \(p_{1,k_1,k_2}\) of Definition 2.1 here reads

$$\begin{aligned} \begin{aligned} p_{1,k_1,k_2}(x,t)&=\frac{2 (-1)^{k_1}i^{k_2}}{(4\pi )^{n+1} }\int _\mathbb {R} e^{i\lambda |{t}|-\frac{|{x}|^2}{4 }\lambda \coth (\lambda )} \frac{\lambda ^{n+k_1+k_2}\cosh (\lambda )^{k_1}}{\sinh (\lambda )^{n+k_1}}\,\mathrm {d}\lambda . \end{aligned} \end{aligned}$$

Indeed, the absolute values of \(\lambda \) in the integral (2.3) can be removed by parity. We begin by introducing some functions which greatly simplify the notation.

Definition 3.1

Define

$$\begin{aligned} h_{k_1,k_2}(R,t):= & {} (-1)^{k_1}i^{k_2}\int _\mathbb {R} e^{i\lambda |{t}| - R\lambda \coth (\lambda )} \frac{\lambda ^{n+k_1+k_2 }\cosh (\lambda )^{k_1}}{\sinh (\lambda )^{n+k_1}}\,\mathrm {d}\lambda \\= & {} \int _\mathbb {R} e^{iR\varphi _\omega (\lambda )}a_{k_1,k_2}(\lambda )\,\mathrm {d}\lambda , \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} a_{k_1,k_2}(\lambda )&={\left\{ \begin{array}{ll} (-1)^{k_1}i^{k_2}\frac{\lambda ^{n+k_1 +k_2}\cosh (\lambda )^{k_1}}{\sinh (\lambda )^{n+k_1}} &{} \text {if}\, {\lambda \not \in \pi i\mathbb {Z},}\\ (-1)^{k_1}i^{k_2}\delta _{k_2,0} &{}\text {if}\, {\lambda =0,} \end{array}\right. } \\ \varphi _\omega (\lambda )&= {\left\{ \begin{array}{ll} \omega \lambda +i\lambda \coth (\lambda ) &{} \text {if}\, {\lambda \not \in \pi i\mathbb {Z},}\\ i &{}\text {if}\, {\lambda =0}. \end{array}\right. } \end{aligned} \end{aligned}$$
(3.1)

Notice that

$$\begin{aligned} p_{1,k_1,k_2}(x,t)= \frac{2}{(4\pi )^{n+1}} h_{k_1,k_2}\left( R,t\right) \end{aligned}$$

for all \((x,t)\in \mathbb {H}^n\); hence, we can reduce matters to studying \(h_{k_1,k_2}(R,t)\). Observe moreover that \(y_\omega = \theta ^{-1}(\omega ) \in [0,\pi )\), since \(\omega \ge 0\).

It will be convenient to reverse the dependence relation between \((R,\omega )\) and \((x,t)\); hence, we shall no longer consider R and \(\omega \) as functions of \((x,t)\), but rather as “independent variables”. With this in mind, the formula \(|{t}|=R\,\omega \) should be read as a definition.

Our intent will be to apply Theorem 2.7 to a function closely related to \(h_{k_1,k_2}\); hence, we shall find some stationary points of the phase of \(h_{k_1,k_2}\), namely \(\varphi _\omega \). The lemma below is of fundamental importance.

Lemma 3.2

[10, §3, Lemma 6] \(\varphi _\omega '(\lambda )= \omega + \tilde{\theta }( i\lambda )\) for all \(\lambda \not \in \pi i\mathbb {Z}^*\), where \(\tilde{\theta }\) is the analytic continuation of \(\theta \) to \(\mathrm {Dom}(\varphi _{\omega })\). In particular, \(i y_\omega \) is a stationary point of \(\varphi _\omega \).
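The stationary point \(i y_\omega \) can be confirmed numerically; the sketch below (illustrative only, with \(y_\omega \) computed by bisection) evaluates a central difference quotient of \(\varphi _\omega \) at \(i y_\omega \) and checks that it vanishes.

```python
import cmath
import math

def phi(omega, z):
    # phi_omega(z) = omega*z + i*z*coth(z), away from the poles pi*i*Z
    return omega * z + 1j * z * cmath.cosh(z) / cmath.sinh(z)

def y_of(omega):
    # Solve theta(y) = omega on (0, pi) by bisection (theta is increasing)
    lo, hi = 1e-9, math.pi - 1e-9
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        th = (2.0 * mid - math.sin(2.0 * mid)) / (2.0 * math.sin(mid) ** 2)
        if th < omega:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

omega = 2.0
z0 = 1j * y_of(omega)
h = 1e-6
central_diff = (phi(omega, z0 + h) - phi(omega, z0 - h)) / (2.0 * h)
```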

3.1 I. Estimates for \((x,t)\rightarrow \infty \) while \( 4|{t}|/|{x}|^2 \le C\).

Theorem 3.3

Fix \(C>0\). If \((x,t)\rightarrow \infty \) while \(0\le \omega \le C\), then

$$\begin{aligned} p_{1,k_1,k_2}(x,t)= \frac{1}{|x|}e^{-\frac{1}{4}d(x,t)^2}\varPsi (\omega ) \left[ (-1)^{k_1+k_2} \frac{ y_\omega ^{n+k_1+k_2}\cos (y_\omega )^{k_1}}{\sin (y_\omega )^{n+k_1}} +O\left( \frac{1}{|x|^2}\right) \right] \qquad \quad \end{aligned}$$
(3.2)

where

$$\begin{aligned} \varPsi (\omega )={\left\{ \begin{array}{ll} \frac{1}{4^n \pi ^{n+1}}\sqrt{\frac{\pi \sin (y_\omega )^3}{\sin (y_\omega )- y_\omega \cos (y_\omega )}} &{}{} \text {if}\, {\omega \ne 0,} \\ \frac{(3\pi )^{1/2}}{4^n \pi ^{n+1}} &{}{} \text{ if } \,{\omega = 0.}\end{array}\right. } \end{aligned}$$

It is worthwhile to stress that the above estimates may not be sharp when \(\omega \rightarrow 0\) and \(k_2>0\), as well as when \(\omega \rightarrow \frac{\pi }{2}\) and \(k_1>0\). Indeed, in these cases \(y_\omega \rightarrow 0\) and \(y_\omega \rightarrow \frac{\pi }{2}\), respectively, and the first term of the asymptotic expansion (3.2) may be smaller than the remainder. Since the sharp asymptotic behaviour of \(p_{1,k_1,k_2}\) when \(\omega \) remains bounded is rather involved, we refrain from outlining the complete picture for the moment. The statement above is just a simplified version of Theorem 4.2 of Sect. 4.1, where the general case of H-type groups is completely described.

In this section, we therefore limit ourselves to Theorem 3.3 in the stated form. Its proof is mostly a straightforward generalization of [10, §3, Theorem 2], but it can also be seen as Proposition 4.4 of Sect. 4.1 in the current setting of Heisenberg groups. Nevertheless, for the sake of completeness we give a brief sketch of the proof.

The main idea is to change the contour of integration in the integral defining \(h_{k_1,k_2}\) in order to meet a stationary point of \(\varphi _\omega \). Since \(\mathrm {Im}\,\varphi _\omega (\lambda )= \omega \, \mathrm {Im}\, \lambda +\mathrm {Re}\left[ \lambda \coth (\lambda )\right] \) for every \(\lambda \not \in {\pi i\mathbb {Z}}\), to make this change we need to deepen our knowledge of \(\mathrm {Re}\left[ \lambda \coth (\lambda )\right] \) and \(|{a_{k_1,k_2}}|\); this is done in the following lemma, which we state without proof.

Lemma 3.4

For all \(\lambda ,y\in \mathbb {R}\) such that \(|{\lambda }|>|{y}|\),

$$\begin{aligned} \mathrm {Re}[ (\lambda +i y)\coth (\lambda +i y)]=\frac{\lambda \sinh (2\lambda )+y\sin (2y)}{2(\sinh (\lambda )^2+\sin (y)^2)}>0. \end{aligned}$$

Moreover, for all \(\lambda ,y\in \mathbb {R}\) such that either \(y\not \in \pi \mathbb {Z}\) or \(\lambda \ne 0\),

$$\begin{aligned} |{a_{k_1,k_2}(\lambda +i y)}|= \frac{|{\lambda +i y}|^{n+k_1+k_2}(\sinh (\lambda )^2+\cos (y)^2)^{\frac{k_1}{2}}}{(\sinh (\lambda )^2+\sin (y)^2)^{\frac{n+k_1}{2}}}. \end{aligned}$$
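Both identities are elementary computations with complex hyperbolic functions; as a quick numerical cross-check of the first one (an illustrative sketch only, with hypothetical helper names):

```python
import cmath
import math

def re_z_coth_z(lam, y):
    # Real part of (lam + i y) coth(lam + i y), away from the poles
    z = complex(lam, y)
    return (z * cmath.cosh(z) / cmath.sinh(z)).real

def closed_form(lam, y):
    # Right-hand side of the identity in Lemma 3.4
    num = lam * math.sinh(2.0 * lam) + y * math.sin(2.0 * y)
    den = 2.0 * (math.sinh(lam) ** 2 + math.sin(y) ** 2)
    return num / den
```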

In the following lemma, we perform the change of the contour of integration in the definition of \(h_{k_1,k_2}\). Its proof is a simple adaptation of that of [12, Lemma 1.4].

Lemma 3.5

For all \(y\in [0,+\infty )\setminus {\pi \mathbb {N}^*}\)

$$\begin{aligned} \begin{aligned} h_{k_1,k_2}(R,t)= \int _{\mathbb {R}} e^{i R\varphi _\omega (\lambda + i y)} a_{k_1,k_2}(\lambda +i y)\,\mathrm {d}\lambda +2\pi i \sum _{\begin{array}{c} k\in \mathbb {N}^* \\ k\pi \in [0,y] \end{array}} \mathrm {Res}\left( e^{i R\varphi _\omega } a_{k_1,k_2},k\pi i \right) . \end{aligned} \end{aligned}$$

Proof of Theorem 3.3

Define

$$\begin{aligned} \psi _\omega :=\varphi _\omega (\,\cdot \,+i y_\omega )-\varphi _\omega (i y_\omega ) \end{aligned}$$

and observe that

$$\begin{aligned} \begin{aligned} \varphi _\omega (i y_\omega )&= i\omega \, y_\omega +i y_\omega \cot (y_\omega ) =i\frac{y_\omega ^2}{\sin (y_\omega )^2}, \end{aligned} \end{aligned}$$

since \(\omega =\theta (y_\omega )\). Therefore, by Lemma 3.5 (recall that \(0\le y_\omega <\pi \), so that there are no residues)

$$\begin{aligned} h_{k_1,k_2}(R,{t})=e^{-\frac{1}{4}d(x,t)^2} \int _\mathbb {R}e^{i R\psi _\omega (\lambda )}a_{k_1,k_2}(\lambda +i y_\omega )\,\mathrm {d}\lambda . \end{aligned}$$
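The identity \(\omega \, y_\omega + y_\omega \cot (y_\omega )=y_\omega ^2/\sin (y_\omega )^2\) used above can be cross-checked numerically; this is an illustrative aside, not part of the proof.

```python
import math

def both_sides(y):
    # LHS: omega*y + y*cot(y) with omega = theta(y); RHS: y^2 / sin(y)^2
    omega = (2.0 * y - math.sin(2.0 * y)) / (2.0 * math.sin(y) ** 2)
    return omega * y + y / math.tan(y), (y / math.sin(y)) ** 2
```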

Our intent is to apply Theorem 2.7 to the bounded subsets \(\mathscr {F}=\{\psi _\omega :\omega \in \left[ 0, C\right] \}\) and \(\mathscr {G}= \{a_{k_1,k_2}(\cdot \,+i y_\omega ):\omega \in \left[ 0, C\right] \}\) of \(\mathcal {E}(\mathbb {R})\). Therefore, we first verify that the four conditions of its statement hold.

  2.

    Lemmata 3.2 and 2.5 imply that \( i\varphi ''_\omega \left( i y_\omega \right) =-\theta '(-y_\omega )< 0\) for all \(\omega \in \mathbb {R}_+\). From the definition of \(\psi _\omega \), we then get

    $$\begin{aligned} \psi _\omega (0)=\psi '_\omega (0)=0, \qquad i\psi _\omega ''(0)< 0. \end{aligned}$$
    (3.3)
  3.

    Consider the mapping \(\psi :\mathbb {R}\times (-\pi ,\pi )\ni (\lambda ,y)\mapsto \psi _{\theta (y)}(\lambda )\). By (3.3), \(\partial _1\psi (0,y)=0\) and \(i\partial _1^2\psi (0,y)< 0\) for all \(y\in [0,\pi )\); moreover, \(\psi \) is analytic thanks to Lemma 2.5. Therefore, by Taylor’s formula we may find two constants \(\eta >0\) and \(C'>0\) such that \( |{\partial _1\psi (\lambda ,y)}|\ge C' |{\lambda }|\) for all \(\lambda \in [-2\eta ,2\eta ]\) and for all \(y\in [0,\theta ^{-1}(C)]\).

  1.

    Lemma 3.4 implies that

    $$\begin{aligned} \mathrm {Im}\,\psi (\lambda ,y) =\frac{\lambda \cosh (\lambda )\sinh (\lambda )-y\cot (y)\sinh (\lambda )^2}{\sinh (\lambda )^2+\sin (y)^2} \end{aligned}$$

    for all \(\lambda \in \mathbb {R}\) and for all \(y\in (-\pi ,\pi )\), \(y\ne 0\); moreover, the mapping \((0,\pi )\ni y\mapsto y\cot (y)\) is strictly decreasing and tends to 1 as \(y\rightarrow 0^+\). Therefore, if \(\lambda \ne 0\) and \(y\in [0,\pi )\), then

    $$\begin{aligned} \begin{aligned} \mathrm {Im}\,\psi (\lambda ,y) \ge \frac{\lambda \coth (\lambda )-1}{1+\frac{1}{\sinh (\lambda )^2}}>0 \end{aligned} \end{aligned}$$

    since \(\lambda \coth (\lambda )-1>0\). Observe finally that, since \(\frac{\lambda \coth (\lambda )-1}{1+\frac{1}{\sinh (\lambda )^2}}\sim |{\lambda }|\) for \(\lambda \rightarrow \infty \), the second condition is also satisfied.

  4.

    Just observe that \(\mathscr {G}\) is bounded in \(L^\infty (\mathbb {R})\).

By Theorem 2.7, we then get

$$\begin{aligned} \begin{aligned} \int _\mathbb {R}e^{i R\psi _\omega (\lambda )} a_{k_1,k_2}(\lambda +i y_\omega )\,\mathrm {d}\lambda&= \frac{(2\pi ) (4\pi )^n}{|x|}\varPsi (\omega ) a_{k_1,k_2}(iy_\omega )+ O\left( \frac{1}{|{x}|^{3}}\right) \end{aligned} \end{aligned}$$

for \(R\rightarrow +\infty \), uniformly as \(\omega \) runs through [0, C]. \(\square \)

From now on, we shall consider the case \(\omega \rightarrow +\infty \). The method of stationary phase cannot be applied directly in this case, since \(y_\omega \rightarrow \pi \), and \(i\pi \) is a pole of the phase (as well as of the amplitude). Although it seems possible to adapt the techniques developed by Li [17] to this situation, our proof follows the idea presented by Hueber and Müller [12, Theorem 1.3 (i)] for the Heisenberg group \(\mathbb {H}^1\). We shall take advantage of this singularity to obtain the correct behaviour of \(h_{k_1,k_2}\), by means of the residues provided by Lemma 3.5.

3.2 II. Estimates for \(\delta \rightarrow 0^+\) and \(\kappa \rightarrow +\infty \).

We state below the main result of this section.

Theorem 3.6

For \(\delta \rightarrow 0^+\) and \(\kappa \rightarrow +\infty \)

$$\begin{aligned} p_{1,k_1,k_2}(x,t)= \frac{(-1)^{k_2} \pi ^{k_1+k_2}}{4^{n}(\pi \delta )^{n+k_1-1}\sqrt{2\pi \kappa }} e^{-\frac{1}{4}d(x,t)^2} \left[ 1+O\left( \frac{1}{\kappa }+\delta \right) \right] . \end{aligned}$$

The proof of Theorem 3.6 requires several preparatory lemmata. The first step is to invoke Lemma 3.5, whose notation we retain, to move the contour of integration beyond the singularity at \(\pi i\); since there is another one at \(2\pi i\), it is convenient to stop at \(\frac{3\pi i}{2}\). We first notice that the integral on \(\mathbb {R}+\frac{3\pi i}{2}\) may be neglected in some circumstances, as the following lemma shows. It is essentially [12, Lemma 1.4], so we omit the proof.

Lemma 3.7

There exists a constant \(C'>0\) such that

$$\begin{aligned} \left|{\int _{\mathbb {R}}e^{i R\varphi _\omega \left( \lambda +\frac{3\pi i}{2}\right) } a_{k_1,k_2}\left( \lambda +\dfrac{3\pi i}{2}\right) \,\mathrm {d}\lambda }\right|\le C'e^{-\frac{3\pi |{t}|}{2}}. \end{aligned}$$

Hence, matters are reduced to the computation of the residue. First of all, define

$$\begin{aligned} r(\lambda ) = {\left\{ \begin{array}{ll} 1+\frac{1}{\lambda }-\pi (1+\lambda )\cot (\pi \lambda ) &{}{} \text{ if }\, {\lambda \not \in \mathbb {Z},}\\ 0 &{}{} \text{ if }\,{\lambda = 0,} \end{array}\right. } \end{aligned}$$

and observe that r is holomorphic on its domain. It will be useful to define also

$$\begin{aligned} \tilde{\varphi }_{k_1,k_2} (R,\xi ):={\left\{ \begin{array}{ll} e^{R \, r(- \xi )}\frac{(\pi \xi )^ {n+k_1}\cos (\pi \xi )^ {k_1}(1-\xi )^ {n+k_1+k_2}}{\sin (\pi \xi )^{n+k_1}} &{}{}\text{ if }\, {\xi \not \in \mathbb {Z},}\\ 1 &{}{} \text{ if }\, {\xi =0,} \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} \varphi _{\delta , k_1,k_2}(s):=e^{-i(n+k_1-1)s} \tilde{\varphi }_{k_1,k_2}(0, \delta e^{i s}) \end{aligned}$$
(3.4)

whenever \(\delta e^{i s}\not \in \mathbb {Z}^*\). The following lemma may again be proved along the lines of [12, Lemma 1.4].
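The removable singularity of r at 0 can be checked numerically. Near 0 one has \(r(\lambda )=\frac{\pi ^2}{3}\lambda +O(\lambda ^2)\); this expansion is our own side computation, used below only as a sanity value in an illustrative sketch.

```python
import math

def r(lam):
    # r(lam) = 1 + 1/lam - pi*(1 + lam)*cot(pi*lam), for lam not in Z
    return 1.0 + 1.0 / lam - math.pi * (1.0 + lam) / math.tan(math.pi * lam)
```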

Lemma 3.8

For every \(\delta <1\)

$$\begin{aligned} 2\pi i\, \mathrm {Res}\left( e^{i R \varphi _\omega } a_{k_1,k_2},\pi i\right) =\frac{(-1)^{k_2}\pi ^{k_2+1} }{\delta ^{n+k_1-1}} e^{-R-\pi |{t}| } \int _{-\pi }^{\pi } e^{\kappa \cos (s)+R r(-\delta e^{i s}) }\varphi _{\delta ,k_1,k_2}(s)\,\mathrm {d}s. \end{aligned}$$
(3.5)

Therefore, it remains only to estimate the integral in (3.5), namely

$$\begin{aligned} \begin{aligned} H_{k_1,k_2}(R,t):=&\int _{-\pi }^{\pi } e^{\kappa \cos (s)+R r(-\delta e^{is}) }\varphi _{\delta ,k_1,k_2}(s)\,\mathrm {d}s =\int _{-\pi }^\pi e^{\kappa q_\delta ( -i s)} \varphi _{\delta ,k_1,k_2}(s)\,\mathrm {d}s, \end{aligned} \end{aligned}$$
(3.6)

where

$$\begin{aligned} q(\delta ,\zeta ) = q_\delta (\zeta ):=\cosh (\zeta )+\frac{\delta }{2}r(-\delta e^{-\zeta }). \end{aligned}$$
(3.7)

Notice that we may apply Theorem 2.7 only when \(\kappa \rightarrow +\infty \), and this is why we confined ourselves to the case where \(\delta \rightarrow 0^+\) (and we shall assume \(0<\delta <1\)) and \(\kappa \rightarrow +\infty \).

Again for technical convenience, we shall reverse the dependence relation between \((\delta ,\kappa )\) and \((R,|{t}|)\), thus assuming that \(\delta \) and \(\kappa \) are “independent variables”. Indeed, \(\delta \) and \(\kappa \) completely describe our problem, since

$$\begin{aligned} |{t}|=\frac{\kappa }{2\pi \delta },\qquad R=\frac{\kappa \delta }{2}, \end{aligned}$$

and \(|{t}|+R\rightarrow +\infty \) if \(\delta \rightarrow 0^+\) and \(\kappa \rightarrow +\infty \). We shall sometimes let \(\delta \) assume complex values. The following lemma is essentially [12, Lemma 1.2]. We present a slightly shorter proof.
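The inversion formulas above follow at once from Definition 2.3; a one-line numerical confirmation (illustrative only, with a hypothetical helper name):

```python
import math

def delta_kappa(R, t_norm):
    # Definition 2.3: delta = sqrt(R/(pi*|t|)), kappa = 2*sqrt(pi*|t|*R)
    delta = math.sqrt(R / (math.pi * t_norm))
    kappa = 2.0 * math.sqrt(math.pi * t_norm * R)
    return delta, kappa

delta, kappa = delta_kappa(3.0, 7.0)
t_back = kappa / (2.0 * math.pi * delta)  # should recover |t|
R_back = kappa * delta / 2.0              # should recover R
```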

Lemma 3.9

q is holomorphic on the set \(\{(\delta ,\zeta )\in \mathbb {C}\times \mathbb {C}|\, \delta e^{-\zeta }\not \in \mathbb {Z}^*\}\). Moreover, there exist two constants \(\delta _1\in (0,1)\) and \(\eta _1>0\) such that for all \(\delta \in \mathrm {B}_\mathbb {C}(0,\delta _1)\) there is a unique \( \sigma _\delta \in \mathrm {B}_\mathbb {C}(0,\eta _1)\) such that \(q'_\delta ( \sigma _\delta )=0\). Then, the mapping \(\mathrm {B}_\mathbb {C}(0,\delta _1)\ni \delta \mapsto \sigma _\delta \) is holomorphic and real on \((-\delta _1,\delta _1)\). Finally, \(\sigma _\delta = O(\delta ^2)\) and \(q_\delta (\sigma _\delta )=1+O(\delta ^2)\) for \(\delta \rightarrow 0\).

Proof

q is holomorphic since r is. Furthermore, \(\partial _2 q(0,0)=0\) and \(\partial _2^2 q(0,0)=1\). Therefore, the implicit function theorem (cf. [5, Proposition 6.1 of IV.5.6]) implies the existence of some \(\delta _1\) and \(\eta _1\) as in the statement, the holomorphy of the mapping \(\delta \mapsto \sigma _\delta \), and that \(\frac{d}{d \delta } \sigma _\delta \vert _{\delta =0}=0\). Notice also that \(\sigma _0=0\), so that \(\sigma _\delta = O(\delta ^2)\) for \(\delta \rightarrow 0\) by Taylor’s formula.

Since \(q_\delta \) is real on real numbers, \(q'_\delta (\overline{\sigma _\delta })=\overline{q'_\delta (\sigma _\delta )}=0\); thus, \(\sigma _\delta =\overline{\sigma _\delta }\) for the uniqueness of \(\sigma _\delta \), and hence \(\sigma _\delta \in \mathbb {R}\) for all \(\delta \in (-\delta _1,\delta _1)\).

The last assertion follows from Taylor’s formula, since \(q_0(\sigma _0)=q_0(0)=1\) and \(\frac{d}{d \delta } q_\delta (\sigma _\delta )\vert _{\delta =0}= \partial _1 q(0,0)+\partial _2 q(0,0)\frac{d}{d \delta } \sigma _\delta \vert _{\delta =0}=0\). \(\square \)

The contour of integration can now be changed in order to apply the method of stationary phase. For the remainder of this section, we keep \(\delta _1\) and \(\eta _1\) of Lemma 3.9 fixed.

Lemma 3.10

Let \(\tau \in C_c^\infty (\mathbb {R})\) be such that \(\chi _{[-\frac{\pi }{2}, \frac{\pi }{2}]}\le \tau \le \chi _{[-\pi ,\pi ]}\). Define, for all \(\delta \in (-\delta _1,\delta _1)\), the path \(\gamma _\delta (s):=s+i\sigma _\delta \, \tau (s)\), and

$$\begin{aligned} F_\delta (s):=-i q_\delta (-i\gamma _\delta (s))+i q_\delta (\sigma _\delta ) \qquad \text {and}\qquad \psi _{\delta ,k_1,k_2}:=(\varphi _{\delta ,k_1,k_2}\circ \gamma _\delta )\, \gamma _\delta '. \end{aligned}$$

Then

$$\begin{aligned} H_{k_1,k_2}(R,t)= e^{\kappa \,q_\delta (\sigma _\delta )} \int _{-\pi }^{\pi } e^{i\kappa F_\delta \left( s\right) }\psi _{\delta ,k_1,k_2}(s)\,\mathrm {d}s. \end{aligned}$$

Proof of Theorem 3.6

We shall apply Theorem 2.7 to the bounded subsets \(\mathscr {F}= \{ F_\delta :\delta \in (0,\delta _2)\}\) and \(\mathscr {G}= \{\psi _{\delta ,k_1,k_2}:\delta \in (0,\delta _2)\}\) of \(\mathcal {E}((-\pi ,\pi ))\), depending on some \(\delta _2\) to be fixed later. We then check that the four conditions of the statement are satisfied.

  1.

    The mapping \(F:(-\delta _1,\delta _1)\times \mathbb {R}\ni (\delta ,s )\mapsto F_\delta (s )\) is of class \(C^\infty \), and \(\partial _2^2 F(0,0)=i\); thus, we may find \(\delta _2\in (0,\delta _1)\), \(\eta _2\in \left( 0,\frac{\pi }{2}\right) \) and \(C''>0\) such that \(\mathrm {Im}\,\partial _2^2 F(\delta ,s )\ge 2 C''\) for all \(\delta \in [-\delta _2,\delta _2]\) and for all \(s \in [-2\eta _2,2\eta _2]\). From Taylor’s formula then

    $$\begin{aligned}\mathrm {Im}\, F(\delta ,s )=\int _0^s \partial _2^2\, \mathrm {Im}\, F(\delta ,\tau )(s -\tau )\,\mathrm {d}\tau \ge C'' s ^2\end{aligned}$$

    for all \(s \in [-2\eta _2,2\eta _2]\) and for all \(\delta \in [-\delta _2,\delta _2]\). Since \(\mathrm {Im}\, F(0,s )=1-\cos (s )\) for all \(s \in [-\pi ,\pi ]\), by reducing \(\delta _2\) and \(C''\) if necessary one may assume that \(\mathrm {Im}\,F(\delta ,s )\ge C''\pi ^2\ge C'' s ^2\) for all \(s \in \mathbb {R}\) such that \(2\eta _2\le |{s}|\le \pi \) and for all \(\delta \in [-\delta _2,\delta _2]\).

  2.

    It is immediately seen that \( F_\delta (0)= F_\delta '(0)=0\) by definition.

  3.

    For every \(\delta \in [-\delta _2,\delta _2]\) and \(s\in [-2 \eta _2,2 \eta _2]\)

    $$\begin{aligned} \left|\partial _2 F(\delta ,s)\right|\ge |{\partial _2\,\mathrm {Im}\, F(\delta , s)}|=\left|\int _{0}^s \partial _2^2\, \mathrm {Im}\,F(\delta ,\tau )\,\mathrm {d}\tau \right|\ge 2 C''|{s}|.\end{aligned}$$
  4.

    Just observe that \(\mathscr {G}\) is bounded in \(L^\infty ((-\pi ,\pi ))\).

By Theorem 2.7, then,

$$\begin{aligned} \displaystyle \int _{-\pi }^\pi e^{i\kappa F_\delta (s)}\tau _2(s)\psi _{\delta ,k_1,k_2}(s)\,\mathrm {d}s=\sqrt{\frac{2\pi i}{\kappa F_\delta ''(0) }} \psi _{\delta ,k_1,k_2}(0)+O\left( \frac{1}{\kappa ^{3/2}} \right) . \end{aligned}$$

It is then easily seen that \( F_\delta ''(0)= i q_\delta ''(\sigma _\delta )=i(1 + O(\delta ))\) and \(\psi _{\delta ,k_1,k_2}(0)=\varphi _{\delta , k_1,k_2}(i\sigma _\delta )=1+O(\delta )\) for \(\delta \rightarrow 0^+\).
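As a quick numerical illustration of the leading term above (not part of the proof), one may take the model phase \(F(s)=i(1-\cos s)\), for which \(F(0)=F'(0)=0\) and \(F''(0)=i\): the oscillatory integral reduces to \(2\pi e^{-\kappa }I_0(\kappa )\), and the predicted leading behaviour is \(\sqrt{2\pi i/(\kappa F''(0))}=\sqrt{2\pi /\kappa }\). A minimal sketch in Python (all names are ours, for illustration only):

```python
import math

# Model check of the stationary-phase leading term: for the phase
# F(s) = i(1 - cos s) one has F(0) = F'(0) = 0 and F''(0) = i, so
#   int_{-pi}^{pi} e^{i*kappa*F(s)} ds = int_{-pi}^{pi} e^{-kappa(1 - cos s)} ds,
# whose predicted leading term is sqrt(2*pi*i/(kappa*F''(0))) = sqrt(2*pi/kappa).

def model_integral(kappa, n=4000):
    """Periodic trapezoidal rule for the integral over [-pi, pi]."""
    h = 2 * math.pi / n
    return h * sum(math.exp(-kappa * (1 - math.cos(-math.pi + j * h)))
                   for j in range(n))

kappa = 50.0
leading = math.sqrt(2 * math.pi / kappa)
ratio = model_integral(kappa) / leading   # 1 + O(1/kappa)
```

For growing \(\kappa \) the ratio approaches 1 at the expected rate \(O(1/\kappa )\).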

Now, by construction,

$$\begin{aligned} -R-\pi |{t}|+\kappa q_\delta (s)=i R\,\varphi _\omega (\pi i (1-\delta e^{-s})) \end{aligned}$$

for s in a neighbourhood of \(\sigma _\delta \). Take \(\delta _3\in (0,\delta _2]\) so that \( (1-\delta e^{-\sigma _\delta })\in (-1,1)\) for all \(\delta \in [0,\delta _3]\), and fix \(\delta \in (0,\delta _3)\) and \(t\ne 0\). We shall prove that

$$\begin{aligned}y_\omega = \pi (1- \delta e^{-\sigma _\delta }).\end{aligned}$$

Indeed, \(y_\omega \) is the unique element of \((-\pi ,\pi )\) such that \(\varphi '_\omega (i y_\omega )=0\); furthermore, \(\pi (1- \delta e^{-\sigma _\delta })\in (-\pi ,\pi )\) for the choice of \(\delta _3\), and \(-R\,\pi \,\delta \, e^{-\sigma _\delta }\varphi '_\omega (\pi i(1- \delta e^{-\sigma _\delta }))=\kappa \, q_\delta '(\sigma _\delta )=0\). Therefore, \(y_\omega = \pi (1-\delta e^{-\sigma _\delta })\), and this equality extends by analyticity whenever both sides are defined. It then follows that

$$\begin{aligned} -R-\pi |{t}| +\kappa q_\delta (\sigma _\delta )=i R\varphi _\omega (i y_\omega )=-\frac{1}{4}d(x,t)^2. \end{aligned}$$
(3.8)

Finally observe that, by definition of \(\kappa \) and \(\delta \), and by Lemma 3.9,

$$\begin{aligned} -\frac{3\pi |{t}|}{2}+R+\pi |{t}|-\kappa q_\delta (\sigma _\delta ) {+ \log \kappa } \le -{\frac{\kappa }{2\pi \delta }}\left[ \frac{\pi }{2}- \pi \delta ^2 + 2\pi \delta (1+O(\delta ^2)) {- 2\pi \delta \frac{\log \kappa }{\kappa }} \right] , \end{aligned}$$

which tends to \(-\infty \) as \(\delta \rightarrow 0^+\) and \(\kappa \rightarrow +\infty \). This means that

$$\begin{aligned} e^{-\frac{3\pi |{t}|}{2}}=o\left( \frac{e^{-R-\pi |{t}|+\kappa q_\delta (\sigma _\delta )}}{\kappa }\right) \end{aligned}$$

for \(\kappa \rightarrow +\infty \), uniformly as \(\delta \) runs through \((0,\delta _2]\). Our assertion is then a consequence of Lemmata  3.5 and 3.7. \(\square \)

3.3 III and IV. Estimates for \(\delta \rightarrow 0^+\) and \(\kappa \) bounded.

Strictly speaking, cases III and IV have already been considered together by Hueber and Müller [12, Theorem 1.3 (ii)] on the Heisenberg group \(\mathbb {H}^1\), i.e. when \(n=1\). Since their method does not apply when \(n>1\), we shall follow a different approach similar to that of Li [16].

We first recall that, for all \(\nu \in \mathbb {Z}\) and \(\zeta \in \mathbb {C}\), the modified Bessel function \(I_\nu \) of order \(\nu \) is defined as

$$\begin{aligned} I_\nu (\zeta )=\sum _{k\in \mathbb {N}} \frac{\zeta ^{2k+\nu }}{2^{2 k+\nu }k!\,\Gamma (k+\nu +1)}. \end{aligned}$$

If \(s>0\), then also

$$\begin{aligned} I_\nu (s)=\frac{1}{2\pi }\int _{-\pi }^\pi e^{s\cos (\xi )-i \nu \xi }\,\mathrm {d}\xi , \end{aligned}$$

as one can verify from [9, 7.3.1 (2)] by applying the change of variables \(\psi =\frac{\pi }{2}-\varphi \) and by taking into account the relationship [9, 7.2.2 (12)] between \(I_\nu =I_{-\nu }\) and \(J_\nu \), as well as the periodicity of the integrand. Notice that for \(s\ge 0\) and \(\nu \in \mathbb {Z}\), \(I_\nu (s)\) is strictly positive unless \(s=0\) and \(\nu \ne 0\). The main result of this section is the following.

Theorem 3.11

Fix \(C>1\). If \(\delta \rightarrow 0^+\) while \({1}/{C}\le \kappa \le C\), then

$$\begin{aligned} {p_{1,k_1,k_2}(x,t) =\frac{(-1)^{k_2} \pi ^{k_1+k_2}}{4^n (\pi \delta )^{n+k_1-1}} e^{-\frac{1}{4}d(x,t)^2 } e^{-\kappa } I_{{n+k_1-1}}(\kappa ) \left[ 1 +O(\delta )\right] .} \end{aligned}$$
(3.9)

When \(\kappa \rightarrow 0^+\) and \(|{t}|\rightarrow +\infty \)

$$\begin{aligned} \begin{aligned} p_{1,k_1,k_2}(x,t)&=\frac{(-1)^{k_2}\pi ^{k_1+k_2}}{4^n(n+k_1-1)!} |{t}|^{n+k_1-1} e^{-\frac{1}{4}d(x,t)^2 }\left[ 1+O\left( \frac{1}{|{t}|}+\kappa \right) \right] . \end{aligned} \end{aligned}$$
(3.10)

Lemma 3.12

For every \(N\in \mathbb {N}\)

$$\begin{aligned} H_{k_1,k_2}(R,t)=2\pi \sum _{|{\alpha }|\le N} I_{{n+k_1-1-\alpha _2}}(\kappa ) \frac{\partial ^\alpha \tilde{\varphi }_{k_1,k_2}(0,0)\kappa ^{\alpha _1}}{2^{\alpha _1}\alpha !}\delta ^{|{\alpha }|}+O\left( \delta ^{N+1}\right) \end{aligned}$$

for \(\delta \rightarrow 0^+\), uniformly as \(\kappa \) runs through [0, C].

Proof

By substituting (3.4) in (3.5) and by Taylor’s formula applied to \(\tilde{\varphi }_{k_1,k_2}\),

$$\begin{aligned} \begin{aligned} H_{k_1,k_2}(R,t)&=\int _{-\pi }^\pi e^{\kappa \cos (s)}e^{-i(n+k_1-1)s}\tilde{\varphi }_{k_1,k_2}(R,\delta e^{is})\,\mathrm {d}s\\&=\sum _{|{\alpha }|\le N} \frac{\partial ^{\alpha }\tilde{\varphi }_{k_1,k_2}(0,0)}{\alpha !}R^{\alpha _1}\delta ^{\alpha _2} \int _{-\pi }^\pi e^{\kappa \cos (s)} e^{-i(n+k_1-1-\alpha _2) s} \,\mathrm {d}s + \mathcal {R}_{N+1}(\delta ,\kappa ) \\ {}&= 2\pi \sum _{|{\alpha }|\le N} I_{{n+k_1-1-\alpha _2}}(\kappa ) \frac{\partial ^\alpha \tilde{\varphi }_{k_1,k_2}(0,0)\kappa ^{\alpha _1}}{2^{\alpha _1}\alpha !}\delta ^{|{\alpha }|}+ \mathcal {R}_{N+1}(\delta ,\kappa ), \end{aligned} \end{aligned}$$

where the last equality holds since \(R=\frac{\delta \kappa }{2}\). Moreover, \(\mathcal {R}_{N+1}(\delta ,\kappa )\) is easily seen to be \(O\left( \delta ^{N+1}\right) \) for \(\delta \rightarrow 0^+\) uniformly as \(\kappa \) runs through [0, C]. This completes the proof. \(\square \)
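The Bessel identity \(\int _{-\pi }^\pi e^{\kappa \cos (s)}e^{-i\nu s}\,\mathrm {d}s=2\pi I_\nu (\kappa )\) used in the proof above admits a quick numerical cross-check against the series definition of \(I_\nu \); the following Python sketch is illustrative only and plays no role in the arguments.

```python
import math, cmath

# Cross-check of int_{-pi}^{pi} e^{kappa*cos(s)} e^{-i*nu*s} ds = 2*pi*I_nu(kappa)
# against the power series I_nu(z) = sum_k (z/2)^{2k+nu} / (k! * (k+nu)!), nu >= 0.

def bessel_i_series(nu, z, terms=80):
    return sum((z / 2.0) ** (2 * k + nu) / (math.factorial(k) * math.factorial(k + nu))
               for k in range(terms))

def bessel_i_integral(nu, z, n=4000):
    # periodic trapezoidal rule; the imaginary part vanishes by symmetry
    h = 2 * math.pi / n
    acc = sum(cmath.exp(z * math.cos(-math.pi + j * h) - 1j * nu * (-math.pi + j * h))
              for j in range(n))
    return (h * acc / (2 * math.pi)).real
```

Both evaluations agree to machine precision for moderate arguments.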

Proof of Theorem 3.11

Lemmata 3.7 and 3.8 imply that

$$\begin{aligned} p_{1,k_1,k_2}(x,t)\!=\!\frac{(-1)^{k_2}2 \pi ^{k_2-n}}{4^{n+1}\delta ^{n+k_1-1}} e^{-R-\pi |{t}|} H_{k_1,k_2}(R,t)+ O\left( e^{-\frac{3\pi |{t}|}{2}}\right) . \end{aligned}$$

Moreover, recall that \(\delta |{t}|= \frac{\kappa }{2 \pi }\) and \(R=\frac{\kappa \delta }{2}\); therefore, for every \(N\in \mathbb {N}\),

$$\begin{aligned} e^{-\frac{3\pi |{t}|}{2}}= o\left( \delta ^{N+2-n-k_1}e^{-R-\pi |{t}|}\right) \end{aligned}$$
(3.11)

as \(\delta \rightarrow 0^+\), uniformly as \(\kappa \) runs through \([1/C,C]\). By (3.8) and Lemma 3.9, the first assertion follows from Lemma 3.12 for \(N=0\).

As for (3.10), observe first that \(\kappa \rightarrow 0^+\) and \(|{t}| \rightarrow +\infty \) is equivalent to saying \(\delta ,\kappa \rightarrow 0^+\) and \(\delta =o(\kappa )\). Then Lemma 3.12 with \(N=n+k_1-1\) and an easy development of the Bessel function in a neighbourhood of 0 imply that

$$\begin{aligned} p_{1,k_1,k_2}(x,t)= \frac{\pi ^{k_1+k_2}(-1)^{k_2}}{4^n (\pi \delta )^{n+k_1-1}} e^{-\pi |{t}|-R} \Bigg [ \kappa ^{n+k_1-1} \frac{I_{n+k_1-1}^{(n+k_1-1)}(0)}{(n+k_1-1)!}+O(\kappa ^{n+k_1})\\ +\sum _{1\le |{\alpha }|\le n+k_1-1} O\left( I_{n+k_1-1-\alpha _2}(\kappa ) \kappa ^{\alpha _1}\delta ^{|{\alpha }|}\right) +O(\delta ^{n+k_1})\Bigg ] + O\left( e^{-\frac{3\pi |{t}|}{2}}\right) . \end{aligned}$$

Since \(\delta =o(\kappa )\), one has \(\delta ^{\alpha _2+\alpha _1-1}=O(\kappa ^{\alpha _2+\alpha _1-1})\) for every \(\alpha \ne 0\). Therefore,

$$\begin{aligned} \begin{aligned} \sum _{1\le |{\alpha }|\le n+k_1-1} O\left( I_{n+k_1-1-\alpha _2}(\kappa ) \kappa ^{\alpha _1}\delta ^{|{\alpha }|}\right)&=\sum _{1\le |{\alpha }|\le n+k_1-1} O\left( \kappa ^{n+k_1-2+2\alpha _1}\delta \right) \\&=O\left( \kappa ^{n+k_1-2}\delta \right) . \end{aligned} \end{aligned}$$

Since \(\frac{\kappa }{2\pi \delta }= |{t}|\) and \(I_{n+k_1-1}^{(n+k_1-1)}(0)= \frac{1}{2^{n+k_1-1}}\), we get

$$\begin{aligned} p_{1,k_1,k_2}(x,t)= \frac{\pi ^{k_1+k_2} (-1)^{k_2}}{4^n(n+k_1-1)!} e^{-\pi |{t}|-R} |{t}|^{n+k_1-1}\left[ 1+ O\left( \frac{1}{|{t}|}+\kappa +\delta \right) \right] + O\left( e^{-\frac{3\pi |{t}|}{2}}\right) . \end{aligned}$$

Finally, \(\delta =o\left( \frac{1}{|{t}|}\right) \) since \(\delta |{t}|= \frac{\kappa }{2\pi }\); moreover

$$\begin{aligned} e^{-\frac{3\pi |{t}|}{2}}=o\left( e^{-\pi |{t}|-R} |{t}|^{n+k_1-2}\right) \end{aligned}$$

since \(R\rightarrow 0^+\) and \(|{t}|\rightarrow +\infty \). The assertion follows. \(\square \)

The estimates in cases II, III, and IV can be put together. This is done in the following corollary, which will turn out to be fundamental later on. Define first, for \(\zeta \in \mathbb {C}\) and \(\nu \in \mathbb {Z}\),

$$\begin{aligned} \tilde{I}_\nu (\zeta ):=\sum _{k\ge 0} \frac{\zeta ^{2 k}}{2^{2 k+\nu }k! \Gamma (k+\nu +1)}. \end{aligned}$$
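Since \(\Gamma (k+\nu +1)=(k+\nu )!\) for integer \(\nu \ge 0\), one has \(\tilde{I}_\nu (\zeta )=\zeta ^{-\nu }I_\nu (\zeta )\) for \(\zeta \ne 0\), and \(\tilde{I}_\nu (0)=\frac{1}{2^\nu \nu !}\). A brief numerical sanity check (illustrative only; all names are ours):

```python
import math

# The entire function Itilde_nu(z) = sum_k z^{2k} / (2^{2k+nu} k! (k+nu)!)
# satisfies Itilde_nu(z) = z^{-nu} * I_nu(z) for z != 0 and
# Itilde_nu(0) = 1 / (2^nu * nu!); both facts are checked numerically below.

def bessel_i(nu, z, terms=60):
    return sum((z / 2.0) ** (2 * k + nu) / (math.factorial(k) * math.factorial(k + nu))
               for k in range(terms))

def bessel_i_tilde(nu, z, terms=60):
    return sum(z ** (2 * k) / (2 ** (2 * k + nu) * math.factorial(k) * math.factorial(k + nu))
               for k in range(terms))
```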

From now on we shall use the following abbreviation. We keep the notation of Lemma 3.9.

Definition 3.13

For \(\delta \in B_\mathbb {C}(0,\delta _1)\), define \(\rho (\delta ):=q_\delta (\sigma _\delta )\).

By Lemma 3.9, \(\rho \) is a holomorphic function such that \(\rho (0)=1\) and \(\rho '(0)=0\), so that \(\rho (\delta )=1+O(\delta ^2)\) as \(\delta \rightarrow 0\).

Corollary 3.14

When \((x,t)\rightarrow \infty \) and \(\delta \rightarrow 0^+\)

$$\begin{aligned} p_{1,k_1,k_2}(x,t)= \frac{(-1)^{k_2}\pi ^{k_1+k_2} }{2^{n -k_1+1} }|{t}|^{n+k_1-1} e^{-\frac{1}{4}d(x,t)^2}e^{-\kappa \rho (\delta )}\tilde{I}_{n+k_1-1}\left( \kappa \rho (\delta )\right) \left[ 1+ g(|{x}|,|{t}|) \right] , \end{aligned}$$

where

$$\begin{aligned} g(|{x}|,|{t}|)= {\left\{ \begin{array}{ll} O\left( \delta +\frac{1}{\kappa }\right) &{}\text {if }\delta \rightarrow 0^+ \text { and }\kappa \rightarrow +\infty ,\\ O(\delta ) &{}\text {if }\delta \rightarrow 0^+ \text { and }\kappa \in [1/C,C],\\ O\left( \frac{1}{|{t}|}+\kappa \right) &{}\text {if }\delta \rightarrow 0^+ \text { and }\kappa \rightarrow 0^+ \end{array}\right. } \end{aligned}$$
(3.12)

for every \(C>1\).

Proof

1. Assume first that \(\kappa \rightarrow +\infty \). Since \( I_\nu (s)= \frac{e^s}{\sqrt{2\pi s}}\left[ 1+O\left( \frac{1}{s}\right) \right] \) for \(s\rightarrow +\infty \), \(\nu \in \mathbb {Z}\) (cf. [9, 7.13.1 (5)]),

$$\begin{aligned} {\tilde{I}_\nu (s)=\frac{e^s}{s^\nu \sqrt{2\pi s}}\left[ 1+O\left( \frac{1}{s}\right) \right] \quad \text{ for } s\rightarrow +\infty \text{. }} \end{aligned}$$
(3.13)
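As a purely numerical illustration of (3.13), not needed for the proof, one can compare \(\sqrt{2\pi s}\,e^{-s}I_\nu (s)\) with 1 for moderately large s, computing \(I_\nu \) from its integral representation (all names below are ours):

```python
import math

# Numeric illustration of (3.13): s^nu * sqrt(2*pi*s) * e^{-s} * Itilde_nu(s) -> 1,
# equivalently sqrt(2*pi*s) * e^{-s} * I_nu(s) -> 1 as s -> +infinity.
# e^{-s} is folded into the integrand to avoid overflow.

def scaled_bessel_i(nu, s, n=20000):
    """e^{-s} * I_nu(s) via (1/2pi) * int_{-pi}^{pi} e^{s(cos x - 1)} cos(nu*x) dx."""
    h = 2 * math.pi / n
    return sum(math.exp(s * (math.cos(-math.pi + j * h) - 1.0))
               * math.cos(nu * (-math.pi + j * h)) for j in range(n)) * h / (2 * math.pi)

ratio = math.sqrt(2 * math.pi * 200.0) * scaled_bessel_i(2, 200.0)  # 1 + O(1/s)
```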

Therefore, Theorem 3.6 implies that

$$\begin{aligned} \begin{aligned} p_{1,k_1,k_2}(x,t)&= \frac{(-1)^{k_2}\pi ^{k_1+k_2} }{4^n (\pi \delta )^{n+k_1-1}\sqrt{2\pi \kappa }}e^{-\frac{1}{4}d(x,t)^2 } \left[ 1+O\left( \frac{1}{\kappa } +\delta \right) \right] \\&=\frac{(-1)^{k_2}\pi ^{k_1+k_2} \tilde{I}_{n+k_1-1}\left( \kappa \rho (\delta )\right) }{2^{n -k_1+1} }|{t}|^{n+k_1-1} e^{-\frac{1}{4}d(x,t)^2 } e^{-\kappa \rho (\delta )} \\&\qquad \qquad \qquad \times \,\left[ 1+O\left( \frac{1}{\kappa \rho (\delta )} \right) \right] \left[ 1+O\left( \frac{1}{\kappa } +\delta \right) \right] \\&=\frac{(-1)^{k_2}\pi ^{k_1+k_2} \tilde{I}_{n+k_1-1}\left( \kappa \rho (\delta )\right) }{2^{n -k_1+1} }|{t}|^{n+k_1-1} e^{-\frac{1}{4}d(x,t)^2 } e^{-\kappa \rho (\delta )} \left[ 1+O\left( \frac{1}{\kappa } +\delta \right) \right] , \end{aligned} \end{aligned}$$

since \(\rho (\delta )=1+O(\delta ^2)\) and \(\frac{2 |{t}|}{\kappa }=\frac{1}{\pi \delta }\).

2. Assume now that \({\kappa \in [1/C,C]}\) for some \(C>1\). Then, by Theorem 3.11,

$$\begin{aligned} \begin{aligned} p_{1,k_1,k_2}(x,t)&= \frac{(-1)^{k_2}\pi ^{k_1+k_2}}{4^n (\pi \delta )^{n+k_1-1}} e^{-\frac{1}{4}d(x,t)^2 } e^{-\kappa } I_{n+k_1-1}(\kappa ) \left[ 1+O\left( \delta \right) \right] \\&=\frac{(-1)^{k_2}\pi ^{k_1+k_2}}{2^{n -k_1+1}}|{t}|^{n+k_1-1} e^{-\frac{1}{4}d(x,t)^2 } e^{-\kappa \rho (\delta ) } \tilde{I}_{n+k_1-1}(\kappa \rho (\delta )) [ 1+O( \delta ^2 ) ] \left[ 1+O\left( \delta \right) \right] \\&=\frac{(-1)^{k_2}\pi ^{k_1+k_2} }{2^{n -k_1+1}}|{t}|^{n+k_1-1} e^{-\frac{1}{4}d(x,t)^2 } e^{-\kappa \rho (\delta )} \tilde{I}_{n+k_1-1}(\kappa \rho (\delta )) \left[ 1+O\left( \delta \right) \right] , \end{aligned} \end{aligned}$$

where the second equality holds since \(I_{n+k_1-1}(\kappa \rho (\delta ))-I_{n+k_1-1}(\kappa )=O(\kappa (\rho (\delta )-1))=O(\delta ^2)\) uniformly as \(\kappa \) runs through \([1/C,C]\), by Taylor’s formula.

3. Finally, if \(\kappa \rightarrow 0^+\) then

$$\begin{aligned}\tilde{I}_{n+k_1-1}(\kappa )=\tilde{I}_{n+k_1-1}(0)+O(\kappa )= \frac{1}{2^{n+k_1-1}(n+k_1-1)!} + O(\kappa ) \end{aligned}$$

by the definition of \(\tilde{I}_{n+k_1-1}\). Combining this estimate with Theorem 3.11 yields the assertion. \(\square \)

4 H-type groups

In this section, we deal with the general case \(m\ge 1\). In particular, we prove a refined version of Theorem 3.3 and extend Theorems 3.6 and 3.11: this is done through Theorems 4.2, 4.13 and 4.14, respectively. Theorem 4.2 treats case I and is still inspired by [10, §3, Theorem 2]. The asymptotic estimates in the other three cases are first obtained for odd m, by “reducing” to the case \(m=1\); the case of even m is then handled through a descent method.

The first step in order to apply the method of stationary phase is to extend the integrand to a meromorphic function on \(\mathbb {C}^m\). If \(m>1\), such an extension is no longer automatic, as it is when \(m=1\). A natural way consists in taking advantage of the parity of the functions that appear, as in [7]. Indeed, any continuous branch of \(\lambda \mapsto \sqrt{\lambda ^2}\) is a holomorphic function which coincides with \(\lambda \mapsto \pm |{\lambda }|\) on \(\mathbb {R}^m\); therefore, whenever g is an even holomorphic function defined on a symmetric open subset of \(\mathbb {C}\), the function \(\lambda \mapsto g(\sqrt{\lambda ^2})\) is well defined, holomorphic, and coincides with \(\lambda \mapsto g(|{\lambda }|)\) on \(\mathbb {R}^m\). Hence, we are led to the following definition, which is the analogue of its counterpart in Sect. 3. We shall use the same notation as before, without stressing the (new) dependence on m.

Definition 4.1

Define

$$\begin{aligned} \begin{aligned} h_{k_1,k_2}(R,t)= \int _{\mathbb {R}^m} e^{i R\varphi _\omega (\lambda )}a_{k_1,k_2}(\lambda )\,\mathrm {d}\lambda \end{aligned} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} a_{k_1,k_2}(\lambda )&={\left\{ \begin{array}{ll} (-1)^{k_1}i^{k_2}\frac{\sqrt{\lambda ^2}^{n+k_1 }\cosh (\sqrt{\lambda ^2})^{k_1}}{\sinh (\sqrt{\lambda ^2})^{n+k_1}}(\lambda , u_1)^{k_2} &{} \text {if}\,{\sqrt{\lambda ^2}\not \in i \pi \mathbb {Z}^*,}\\ (-1)^{k_1}i^{k_2}\delta _{k_2,0} &{}\text {if}\, {\lambda =0,} \end{array}\right. } \\ \varphi _\omega (\lambda )&= {\left\{ \begin{array}{ll} \omega \, (\lambda , u_1) +i\sqrt{\lambda ^2} \coth (\sqrt{\lambda ^2} ) &{} \text {if}\, {\sqrt{\lambda ^2}\not \in i \pi \mathbb {Z}^*,}\\ i &{}\text {if}\, {\lambda =0}. \end{array}\right. } \end{aligned} \end{aligned}$$
(4.1)

Define also

$$\begin{aligned} a_{k_1,k_2,\omega }(\lambda ) :=a_{k_1,k_2}(\lambda +iy_\omega u_1). \end{aligned}$$
(4.2)

Observe again that

$$\begin{aligned} p_{1,k_1,k_2}(x,t)= \frac{1}{(4\pi )^n(2\pi )^{m}} h_{k_1,k_2}\left( R,t\right) \end{aligned}$$

for all \((x,t)\in \mathbb {R}^{2n}\times \mathbb {R}^m\), and that \(y_\omega =\theta ^{-1}(\omega ) \in [0,\pi )\), since \(\omega \ge 0\).

4.1 I. Estimates for \((x,t)\rightarrow \infty \) while \( 4|{t}|/|{x}|^2 \le C\).

The main result of this section is Theorem 4.2. As already said, the main ingredient of its proof is the method of stationary phase (cf. Proposition 4.4), which was already employed in [10, §3, Theorem 2] to treat the case \(n=m=1\) and \(k_1=k_2=0\).

The novelty of considering all the derivatives of the heat kernel \(p_1\) (in other words, all the cases \(k_1\ge 0\) and \(k_2\ge 0\)) introduces additional complexity into the developments, since the choice \(k=0\) in (2.4) may not give the sharp asymptotic behaviour of \(p_{1,k_1,k_2}\) at infinity while \(\omega \) remains bounded. This happens, in particular, when \(\omega \rightarrow 0\) and \(k_2>0\), or \(\omega \rightarrow \frac{\pi }{2}\) and \(k_1>0\). If \(\omega \) remains bounded and away from 0 and \(\frac{\pi }{2}\), the first term is instead enough.

Theorem 4.2

Fix \(\varepsilon , C>0\). If \((x,t)\rightarrow \infty \) while \(0\le \omega \le C\), then

$$\begin{aligned} p_{1,k_1,k_2}(x,t)= \frac{1}{|x|^m}e^{-\frac{1}{4}d(x,t)^2}\varPsi (\omega ) \varUpsilon (x,t) \end{aligned}$$

where

$$\begin{aligned} \varPsi (\omega )={\left\{ \begin{array}{ll} \frac{1}{4^n \pi ^{n+m}}\sqrt{\frac{(2 \pi )^m y_\omega ^{m-1}\sin (y_\omega )^3}{2\omega ^{m-1} (\sin (y_\omega )- y_\omega \cos (y_\omega ))}} &{}\text {if }\omega \ne 0,\\ \frac{(3\pi )^{m/2}}{4^n \pi ^{n+m}} &{}\text {if }\omega = 0,\end{array}\right. } \end{aligned}$$
(4.3)

and

  1.

    if \(\varepsilon \le \omega \le \frac{\pi }{2}-\varepsilon \) or \(\frac{\pi }{2}+\varepsilon \le \omega \le C\),

    $$\begin{aligned} \varUpsilon (x,t)= (-1)^{k_1+k_2} \frac{ y_\omega ^{n+k_1+k_2}\cos (y_\omega )^{k_1}}{\sin (y_\omega )^{n+k_1}} +O\left( \frac{1}{|x|^2}\right) ; \end{aligned}$$
    (4.4)
  2.

    if \(\omega \rightarrow 0\) and \(k_2\) is even,

    $$\begin{aligned} \varUpsilon (x,t)= \sum _{j=0}^{k_2/2}c_{k_1,k_2,j} \frac{\omega ^{k_2-2j}}{|x|^{2j}} + O\left( \sum _{j=0}^{k_2/2} \frac{\omega ^{k_2-2j+1}}{|x|^{2j}}+\frac{1}{|x|^{k_2+2}}\right) ; \end{aligned}$$
    (4.5)
  3.

    if \(\omega \rightarrow 0\), \(k_2\) is odd and \(|t|\rightarrow \infty \),

    $$\begin{aligned} \varUpsilon (x,t)= \sum _{j=0}^{(k_2-1)/2} c_{k_1,k_2,j}\frac{ \omega ^{k_2-2j}}{|x|^{2j}} + O\left( \sum _{j=0}^{(k_2+1)/2} \frac{ \omega ^{k_2-2j+1}}{|x|^{2j}}\right) ; \end{aligned}$$
    (4.6)
  4.

    if \(\omega \rightarrow 0\), \(k_2\) is odd and \(0\le |t|\le C\)

    $$\begin{aligned} \varUpsilon (x,t)=c_{k_1,k_2+1, (k_2+1)/2}\frac{|t|}{|x|^{k_2+1}} +O\left( \frac{|t|}{|x|^{k_{2}+3}}\right) ; \end{aligned}$$
    (4.7)
  5.

    if \(\omega \rightarrow \frac{\pi }{2}\) and \(k_1\) is even,

    $$\begin{aligned} \varUpsilon (x,t) = \sum _{j=0}^{k_1/2} b_{k_1,k_2,j} \frac{\left( \omega -\frac{\pi }{2}\right) ^{k_1-2j}}{|x|^{2j}} + O\left( \sum _{j=0}^{k_1/2}\frac{\left( \omega -\frac{\pi }{2}\right) ^{k_1-2j+1}}{|x|^{2j}} + \frac{1}{|x|^{k_1+2}}\right) ;\qquad \quad \end{aligned}$$
    (4.8)
  6.

    if \(\omega \rightarrow \frac{\pi }{2}\) and \(k_1\) is odd,

    $$\begin{aligned} \varUpsilon (x,t)= & {} \sum _{j=0}^{(k_1-1)/2} b_{k_1,k_2,j} \frac{\left( \omega -\frac{\pi }{2}\right) ^{k_1-2j}}{|x|^{2j}} + \frac{b_{k_1,k_2,(k_1+1)/2}}{|x|^{k_1+1}}\nonumber \\&\quad + \,O\left( \sum _{j=0}^{(k_1-1)/2}\frac{\left( \omega -\frac{\pi }{2}\right) ^{k_1-2j+1}}{|x|^{2j}}+\frac{\omega -\frac{\pi }{2}}{|{x}|^{k_1+1}}+\frac{1}{|{x}|^{k_1+3}} \right) . \end{aligned}$$
    (4.9)

The coefficients \(c_{k_1,k_2,j}\) and \(b_{k_1,k_2,j}\) are explicitly given by (4.15), (4.17), and (4.18).

The remainder of this section is devoted to the proof of Theorem 4.2. Since it is quite involved, we split this section into two parts: in the first one we apply the method of stationary phase, while in the second one we find the asymptotics of the development given by Theorem 2.7, which are required to get the sharp developments (4.5)–(4.9). These proofs go through several lemmata.

Remark 4.3

Notice that, under the stated asymptotic conditions, no two terms in the sums appearing in developments (4.5), (4.6), (4.8), and (4.9) are comparable with each other. Therefore, these developments cannot be simplified. Observe moreover that for \(k_1\) and \(k_2\) fixed the coefficients \(b_{k_1,k_2,j}\) (resp. \(c_{k_1,k_2,j}\)) have the same sign; thus, no cancellation can occur, and our developments are indeed sharp. A more detailed description will be given in Sect. 4.1.2.

Finally, notice that it is possible to obtain even more precise expansions if one does not develop the terms \(L_{j,\psi _\omega }a_{k_1,k_2,\omega }\) which appear in Proposition 4.4. In particular, in the cases when \(\omega \rightarrow 0^+\) and \(k_2=0\), or \(\omega \rightarrow \frac{\pi }{2}\) and \(k_1=0\), the explicit computation of \(L_{0,\psi _\omega }a_{k_1,k_2,\omega } = a_{k_1,k_2}(iy_{\omega }u_1)\) leads to better remainders than those in (4.5) and (4.8), respectively.

4.1.1 Application of the Method of Stationary Phase

As already said, Proposition 4.4 is essentially an easy generalization of Theorem 3.3.

Proposition 4.4

Fix \(C>0\) and let \(k \in \mathbb {N}\). Then, if \((x,t)\rightarrow \infty \) while \(0\le \omega \le C\),

$$\begin{aligned} p_{1,k_1,k_2}(x,t) = \frac{1}{|{x}|^m}e^{-\frac{1}{4}d(x,t)^2}\varPsi (\omega )\left[ \sum _{j=0}^{k} \frac{4^j L_{j,\psi _\omega } a_{k_1,k_2,\omega }}{|x|^{2j}} + O\left( \frac{1}{|{x}|^{2k +2}}\right) \right] \end{aligned}$$
(4.10)

where \(\varPsi \) is defined by (4.3).

As in Sect. 3.1, we begin by finding stationary points of the phase \(\varphi _\omega \) of \(h_{k_1,k_2}\).

Lemma 4.5

[7, Formula (5.7)] For all \(\lambda \) such that \(\sqrt{\lambda ^2}\not \in i \pi \mathbb {Z}^*\),

$$\begin{aligned}\varphi _\omega '(\lambda )= \omega u_1 + \lambda \frac{\tilde{\theta }(i \sqrt{\lambda ^2})}{\sqrt{\lambda ^2}}\end{aligned}$$

where \(\tilde{\theta }\) is the analytic continuation of \(\theta \) to \(\mathrm {Dom}(\varphi _{\omega })\). In particular, \(i y_\omega u_1\) is a stationary point of \(\varphi _\omega \).

We then change the contour of integration in the integral defining \(h_{k_1,k_2}\) in order to meet a stationary point of \(\varphi _\omega \). This is done in the following lemma, which is the analogue of Lemma 3.5.

Lemma 4.6

For every \(y\in [0,\pi )\)

$$\begin{aligned} \begin{aligned} h_{k_1,k_2}(R,t)= \int _{\mathbb {R}^m} e^{i R\varphi _\omega (\lambda + i y u_1)} a_{k_1,k_2}(\lambda +i y u_1)\,\mathrm {d}\lambda . \end{aligned} \end{aligned}$$

Proof

The lemma is proved in a similar fashion to [7, Lemma 5.4]. It may be useful to observe that for every \(\lambda \in \mathbb {C}^m\) such that either \(\mathrm {Im}\sqrt{\lambda ^2}\notin \pi \mathbb {Z}\) or \(\mathrm {Re}\sqrt{\lambda ^2}\ne 0\), we have

$$\begin{aligned} |{a_{k_1,k_2}(\lambda )}|= \frac{|{\lambda }|^{n+k_1} \left( \sinh \left( \mathrm {Re}\sqrt{\lambda ^2}\right) ^2+\cos \left( \mathrm {Im}\sqrt{\lambda ^2}\right) ^2\right) ^{k_1/2} }{\left( \sinh \left( \mathrm {Re}\sqrt{\lambda ^2}\right) ^2+\sin \left( \mathrm {Im}\sqrt{\lambda ^2}\right) ^2\right) ^{(n+k_1)/2}}|{(\lambda ,u_1)}|^{k_2}, \end{aligned}$$

by Lemma 3.4, since \(|{\sqrt{\lambda ^2}}|=|{\lambda }|\). Moreover, \(a_{k_1,k_2}\) is bounded on the set \(\{\lambda +i y u_1 :\lambda \in \mathbb {R}^m, y\in [0,C']\}\) for every \(C'\in (0,\pi )\). \(\square \)

Proof of Proposition 4.4

Define

$$\begin{aligned} \psi _\omega := \varphi _\omega (\,\cdot \,+i y_\omega u_1)-\varphi _\omega (i y_\omega u_1) \end{aligned}$$

and observe that, since \(\sqrt{(i y_\omega u_1)^2}=\pm i y_\omega \) and \(\omega =\theta (y_\omega )\), \(\varphi _\omega (i y_\omega u_1) =i\frac{y_\omega ^2}{\sin (y_\omega )^2}\). Therefore, by Lemma 4.6

$$\begin{aligned} h_{k_1,k_2}(R,t)=e^{-\frac{1}{4}d(x,t)^2} \int _{\mathbb {R}^m}e^{i R\psi _\omega (\lambda )}a_{k_1,k_2}(\lambda +i y_\omega u_1)\,\mathrm {d}\lambda . \end{aligned}$$

We shall apply Theorem 2.7 to the bounded subsets \(\mathscr {F}=\{\psi _\omega :\omega \in [0, C]\}\) and \(\mathscr {G}= \{a_{k_1,k_2,\omega }:\omega \in [0,C]\}\) of \(\mathcal {E}(\mathbb {R}^m)\).

  2.

    Elementary computations show that

    $$\begin{aligned} -i\psi ''_\omega (0)= \theta '(y_\omega )u_1 \otimes u_1+ \frac{\omega }{y_\omega } \sum _{j=2}^m u_j\otimes u_j, \end{aligned}$$
    (4.11)

so that \( \det (-i\psi ''_\omega (0)) = \theta '(y_\omega )\left( \frac{\theta (y_\omega )}{y_\omega }\right) ^{m-1}>0\). The conditions \( \psi _\omega (0)=\psi '_\omega (0)=0\) hold by construction.

  3.

    Consider the mapping \(\psi :\mathbb {R}^m\times (-\pi ,\pi )\ni (\lambda ,y)\mapsto \psi _{\theta (y)}(\lambda )\). Then, by the preceding arguments, there is \(c>0\) such that \(\partial _1\psi (0,y)=0\) and \(-i \partial _1^2 \psi (0,y)\ge c (\,\cdot \,,\,\cdot \,)\) for all \(y\in [0,\pi )\); moreover, \(\psi \) is analytic by Lemma 2.5. Therefore, by Taylor’s formula we may find two constants \(\eta >0\) and \(C'>0\) such that \( |{\partial _1\psi (\lambda ,y)}|\ge C' |{\lambda }|\) for all \(\lambda \in B_{\mathbb {R}^m}(0,2\eta )\) and for all \(y\in [0,\theta ^{-1}(C)]\).

  1.

    Combining [7, Lemmata 5.3 and 5.7], we infer that there is a constant \(C''>0\) such that

    $$\begin{aligned}\begin{aligned} \mathrm {Im}\,\psi (\lambda ,y)&= y\theta (y) + \mathrm {Re}\left[ \sqrt{(\lambda +i y u_1)^2}\coth \left( \sqrt{(\lambda +i y u_1)^2} \right) \right] - \frac{y^2}{\sin ^2 y} \ge C''|{\lambda }| \end{aligned}\end{aligned}$$

    whenever \(|{\lambda }|\ge \eta \) and \(0\le y\le \theta ^{-1}(C)\).

  4.

    Just observe that \(\mathscr {G}\) is bounded in \(L^\infty (\mathbb {R}^m)\).

By Theorem 2.7, then,

$$\begin{aligned} \begin{aligned}&\int _{\mathbb {R}^m} e^{i R\psi _\omega (\lambda )}a_{k_1,k_2}(\lambda +i y_\omega u_1)\,\mathrm {d}\lambda \\&\quad = \frac{(2\pi )^m (4\pi )^n}{|x|^m}\varPsi (\omega ) \sum _{j=0}^k \frac{4^j L_{j,\psi _\omega }a_{k_1,k_2,\omega }}{|{x}|^{2j}}+ O\left( \frac{1}{|{x}|^{m+2k+2}}\right) \end{aligned} \end{aligned}$$

for \(R\rightarrow +\infty \), uniformly as \(\omega \) runs through [0, C]. \(\square \)

4.1.2 Further developments and completion of the proof of Theorem 4.2

We begin by recalling that, for every \(j\in \mathbb {N}\),

$$\begin{aligned} L_{j,\psi _\omega }a_{k_1,k_2,\omega }= i^{-j}\sum _{\mu =0}^{2j}\frac{ (\psi _\omega ''(0)^{-1} \partial , \partial )^{\mu +j} [ (\psi _\omega -P_{2,0}\psi _\omega )^\mu a_{k_1,k_2,\omega } ](0)}{ 2^{\mu +j} \mu !(\mu +j) !}. \end{aligned}$$
(4.12)

Thus, point 1 of Theorem 4.2 follows immediately by taking \(k=0\) in Proposition 4.4, since

$$\begin{aligned} L_{0,\psi _\omega }a_{k_1,k_2,\omega } = a_{k_1,k_2,\omega }(0) = a_{k_1,k_2}(iy_{\omega }u_1). \end{aligned}$$

As for the other developments, observe that by (4.11)

$$\begin{aligned}&( \psi _\omega ''(0)^{-1}\partial , \partial )^{\mu +j} [ (\psi _\omega -P_{2,0}\psi _\omega )^\mu a_{k_1,k_2,\omega } ](0) \nonumber \\&\quad =\sum _{|{\alpha }|=\mu +j } \frac{ (\mu +j)! }{\alpha !} \frac{1}{ (i \theta '(y_\omega ))^{\alpha _1} } \left( \frac{y_\omega }{i\omega } \right) ^{|{\alpha }|-\alpha _1} \partial ^{2\alpha } [ (\psi _\omega -P_{2,0}\psi _\omega )^\mu a_{k_1,k_2,\omega } ] (0), \end{aligned}$$
(4.13)

where

$$\begin{aligned}&\partial ^{2 \alpha } [ (\psi _\omega -P_{2,0}\psi _\omega )^\mu a_{k_1,k_2,\omega } ] (0)\nonumber \\&\quad = \sum _{\begin{array}{c} \beta \le 2 \alpha ,\\ |{\beta }|\ge 3 \mu \end{array}} \frac{(2 \alpha )!}{\beta !\, (2\alpha -\beta ) !}\partial ^{\beta }[(\psi _\omega -P_{2,0}\psi _\omega )^\mu ](0)\, \partial ^{2\alpha -\beta } a_{k_1,k_2}(i y_\omega u_1). \end{aligned}$$
(4.14)

The sum above is restricted to \(|\beta |\ge 3 \mu \) since \(\psi _\omega (\lambda )- P_{2,0}\psi _\omega (\lambda )\) vanishes at order at least 3 as \(\lambda \rightarrow 0\). Observe moreover that, since \(|{2\alpha -\beta }|= 2|{\alpha }|-|{\beta }|\le 2 j-\mu \), we have \(|{2\alpha -\beta }|\le 2j\), with \(|{2\alpha -\beta }|=2j\) if and only if \(\mu =0\) and \(\beta =0\). We first consider the case \(\omega \rightarrow 0\).

Lemma 4.7

For every \(j\in \mathbb {N}\) such that \(2 j \le k_2\), define

$$\begin{aligned} c_{k_1,k_2,j}:=(-1)^{k_1+k_2} \frac{3^{k_2-j}k_2 !}{2^{k_2-2j}(k_2-2 j)! j!} . \end{aligned}$$
(4.15)

Then

$$\begin{aligned} 4^j L_{j,\psi _\omega } a_{k_1,k_2,\omega }= c_{k_1,k_2,j}\omega ^{k_2-2j} + O\left( \omega ^{k_2-2 j+1} \right) \end{aligned}$$

for \(\omega \rightarrow 0\).

Proof

Recall that \(a_{k_1,k_2}\) is an analytic function on its domain, and observe that

$$\begin{aligned} a_{k_1,k_2}(\lambda )=(-1)^{k_1}i^{k_2}\lambda _1^{k_2}+O\left( |{\lambda }|^{k_2+2}\right) \end{aligned}$$

for \(\lambda \rightarrow 0\). Therefore, for every \(h=0,\dots , k_2\) we have

$$\begin{aligned} a_{k_1,k_2}^{(h)}(\lambda )= (-1)^{k_1}i^{k_2}\frac{k_2 !}{(k_2-h)!} \lambda _1^{k_2-h} u_1^{\otimes h} + O\left( |{\lambda }|^{k_2-h+2} \right) \end{aligned}$$
(4.16)

as \(\lambda \rightarrow 0\).

We now consider (4.14). If \(|{2\alpha -\beta }| < 2 j\), then by (4.16)

$$\begin{aligned} \partial ^\beta [(\psi _\omega -P_{2,0} \psi _\omega )^\mu ](0) \partial ^{2\alpha -\beta } a_{k_1,k_2}(i y_\omega u_1)= O\left( y_\omega ^{k_2-|{2\alpha -\beta }|} \right) =O\left( y_\omega ^{k_2- 2 j+1} \right) \end{aligned}$$

for \(\omega \rightarrow 0\). Otherwise, let \(|{2\alpha -\beta }|=2 j\), so that \(\mu =0\) and \(\beta =0\). If \(\alpha \ne j u_1\), then (4.16) implies that

$$\begin{aligned} \partial ^{2 \alpha } a_{k_1,k_2}(i y_\omega u_1)= O\left( y_\omega ^{k_2- 2 j+2} \right) =O\left( y_\omega ^{k_2-2 j+1} \right) , \end{aligned}$$

while, if \(\alpha =j u_1\),

$$\begin{aligned} \partial _1^{2 j} a_{k_1,k_2}(i y_\omega u_1)=(-1)^{k_1+k_2} i^{-2 j} \frac{k_2 !}{(k_2- 2 j)!} y_\omega ^{k_2-2 j}. \end{aligned}$$

From this and the fact that

$$\begin{aligned}\theta '(0)=\lim \limits _{\omega \rightarrow 0} \frac{\omega }{y_\omega }=\frac{2}{3}\end{aligned}$$

we get the asserted estimate. \(\square \)
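The value \(\theta '(0)=2/3\) can also be checked numerically. From \(\varphi _\omega (i y u_1)=i(\omega y + y\cot y)\) and \(\varphi _\omega (i y_\omega u_1)=i\frac{y_\omega ^2}{\sin (y_\omega )^2}\) (cf. the proof of Proposition 4.4) one reads off \(\theta (y)= \frac{y}{\sin (y)^2}-\cot (y)\); the following sketch (illustrative only, not part of the argument) verifies the limit:

```python
import math

# theta(y) = y / sin(y)^2 - cot(y), read off from the identities
# phi_omega(i*y*u1) = i*(omega*y + y*cot(y)) and
# phi_omega(i*y_omega*u1) = i*y^2/sin(y)^2 with omega = theta(y_omega);
# we check that theta(y)/y -> 2/3 as y -> 0, i.e. theta'(0) = 2/3.

def theta(y):
    return y / math.sin(y) ** 2 - math.cos(y) / math.sin(y)

slope_at_zero = theta(1e-3) / 1e-3   # close to 2/3, error O(y^2)
```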

Lemma 4.7 gives expansions 2 and 3 of Theorem 4.2. Indeed, it allows us to choose k in Proposition 4.4 as

  2.

    \(k=k_2/2\) if \(k_2\) is even, since in this case the last term of the sum in (4.10) is

    $$\begin{aligned}\frac{c_{k_1,k_2,k_2/2}}{|x|^{k_2}} + O\left( \frac{\omega }{|x|^{k_2}}\right) \end{aligned}$$

    which is bigger than the remainder.

  3.

    \(k=(k_2-1)/2\) if \(k_2\) is odd and \(|t|\rightarrow \infty \), since in this case the last term of the sum in (4.10) is

    $$\begin{aligned}c_{k_1,k_2,(k_2-1)/2}\frac{\omega }{|x|^{k_2-1}} + O\left( \frac{\omega ^2}{|x|^{k_2-1}}\right) = c_{k_1,k_2,(k_2-1)/2}\frac{|t|}{|x|^{k_2+1}} + O\left( \frac{|t|^2}{|x|^{k_2+3}}\right) \end{aligned}$$

    which is bigger than the remainder, since \(|t|\rightarrow \infty \).

Case 4 of Theorem 4.2, namely the case when \(k_2\) is odd, \(\omega \rightarrow 0\) and |t| is bounded, has to be treated differently, since \(\omega /|x|^{k_2-1}\) may be comparable with the remainder \(1/|x|^{k_2+1}\) or even smaller. Thus, the development given above may not be sharp in this case. To overcome this difficulty, we make use of the following lemma. For the reader’s convenience, we also consider \(k_2\) even and prove a stronger statement than the one we need (see Remark 4.16).

Lemma 4.8

Let \(N\in \mathbb {N}\). Then, when \(\omega \rightarrow 0\),

$$\begin{aligned} p_{1,k_1,k_2}(x,t)= \sum _{h=0}^N\frac{1}{(2h+1)!}|t|^{2h+1}p_{1,k_1,k_2+2h+1}(x,0) + O\left( |{t}|^{2N+3} p_{1,k_1,k_2+2N+3}(x,0)\right) \end{aligned}$$

if \(k_2\) is odd; if \(k_2\) is even,

$$\begin{aligned} p_{1,k_1,k_2}(x,t)= \sum _{h=0}^N\frac{1}{(2h)!}|t|^{2h}p_{1,k_1,k_2+2h}(x,0) + O\left( |{t}|^{2N+2} p_{1,k_1,k_2+2N+2}(x,0)\right) . \end{aligned}$$

Proof

Assume that \(k_2\) is odd. Then

$$\begin{aligned}&(4\pi )^n(2\pi )^m\left|p_{1,k_1,k_2}(x,t) - \sum _{h=0}^N\frac{1}{(2h+1)!}|t|^{2h+1}p_{1,k_1,k_2+2h+1}(x,0) \right|\\ {}&= \left|{\int _{\mathbb {R}^m} e^{-\frac{|x|^2}{4}|\lambda |\coth |\lambda |}\frac{|{\lambda }|^{n+k_1}\cosh (|\lambda |)^{k_1}}{\sinh (|\lambda |)^{n+k_1}}(\lambda ,u_1)^{k_2}\left\{ e^{i|t|(\lambda ,u_1)} -\sum _{h=0}^N\frac{\left[ i|t|(\lambda ,u_1)\right] ^{2h+1}}{(2h+1)!}\right\} \,\mathrm {d}\lambda }\right|\\ {}&\le \frac{|t|^{2N+3}}{(2 N+3)!}\int _{\mathbb {R}^m} e^{-\frac{|x|^2}{4}|\lambda |\coth |\lambda |}\frac{|{\lambda }|^{n+k_1}\cosh (|\lambda |)^{k_1}}{\sinh (|\lambda |)^{n+k_1}}(\lambda ,u_1)^{k_2+2N+3}\,\mathrm {d}\lambda \\ {}&= \frac{(4\pi )^n(2\pi )^m }{(2 N+3)!} |{t}|^{2N+3} |{p_{1,k_1,k_2+2N+3}(x,0)}|. \end{aligned}$$

The first assertion is then proved. The proof in the case \(k_2\) even is analogous. \(\square \)
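The Taylor bound exploited in the display above is the classical Lagrange estimate for the odd partial sums of the sine (after integration against the odd weight, only the odd part of \(e^{iz}\) survives). A minimal numerical sanity check, not part of the proof:

```python
import math

def sin_partial(z, N):
    # Odd Taylor partial sum of sin z up to degree 2N+1
    return sum((-1) ** h * z ** (2 * h + 1) / math.factorial(2 * h + 1)
               for h in range(N + 1))

# Lagrange remainder: |sin z - partial sum| <= |z|^(2N+3) / (2N+3)!
for N in (0, 1, 2):
    for z in (0.1, 0.5, 1.0, 2.0):
        rem = abs(math.sin(z) - sin_partial(z, N))
        bound = abs(z) ** (2 * N + 3) / math.factorial(2 * N + 3)
        assert rem <= bound
```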

Thus, when \(k_2\) is odd, the case \(\omega \rightarrow 0\) with |t| bounded can be related to the same case with \(k_2\) even, which is completely described by Lemma 4.7. Observe that the expansion appearing in case 4 of Theorem 4.2 is obtained with the choice \(N=0\) in Lemma 4.8.

We finally consider the case \(\omega \rightarrow \frac{\pi }{2}\), which as above provides expansions 5 and 6 of Theorem 4.2.

Lemma 4.9

Define, for \(j\in \mathbb {N}\) such that \(2j\le k_1\),

$$\begin{aligned} b_{k_1,k_2,j}:=(-1)^{k_2} \frac{k_1 !}{2^{k_1-2j}(k_1-2 j)! j!}\ \left( \frac{\pi }{2}\right) ^{n+k_1+k_2}, \end{aligned}$$
(4.17)

and, when \(k_1\) is odd,

$$\begin{aligned}&b_{k_1,k_2,(k_1+1)/2}\nonumber \\&\quad :=(-1)^{k_2}\frac{(k_1+1)!}{[(k_1+1)/2]!}\left( \frac{\pi }{2}\right) ^{n+k_1+k_2-1}\left( n+k_1+k_2+\frac{\pi ^2}{24}(k_1+2) +\frac{3}{2}(m-1) \right) .\nonumber \\ \end{aligned}$$
(4.18)

Then, for \(\omega \rightarrow \frac{\pi }{2}\), if \(2j \le k_1\)

$$\begin{aligned} 4^j L_{j,\psi _\omega } a_{k_1,k_2,\omega }= b_{k_1,k_2,j} \left( \omega -\frac{\pi }{2}\right) ^{k_1-2 j} + O\left( \left( \omega -\frac{\pi }{2}\right) ^{k_1-2 j+1} \right) \end{aligned}$$

while if \(k_1\) is odd, then

$$\begin{aligned} 2^{k_1+1} L_{(k_1+1)/2, \psi _\omega }a_{k_1,k_2,\omega }= b_{k_1,k_2,(k_1+1)/2} + O\left( \omega -\frac{\pi }{2}\right) . \end{aligned}$$

Proof

By elementary computations,

$$\begin{aligned}&a_{k_1,k_2,\pi /2}\left( \lambda \right) =(-1)^{k_1} i^{k_2-n}\left( i\frac{\pi }{2}\right) ^{n+k_1+k_2} \lambda _1^{k_1}\nonumber \\&\qquad + (-1)^{k_1} i^{k_2-n}\left( i\frac{\pi }{2}\right) ^{n+k_1+k_2-1}\left( (n+k_1+k_2) \lambda _1^{k_1+1} +\frac{k_1}{2} \lambda _1^{k_1-1} (\lambda ^2-\lambda _1^2)\right) \nonumber \\ {}&\qquad +O\left( |{\lambda }|^{k_1+2} \right) . \end{aligned}$$
(4.19)

Therefore, since \(a_{k_1,k_2,\pi /2}\) is analytic on its domain, we infer that for every \(h=0,\dots , k_1\)

$$\begin{aligned} a_{k_1,k_2,\pi /2}^{(h)}(\lambda )=(-1)^{k_1} i^{k_2-n} \left( i\frac{\pi }{2} \right) ^{n+k_1+k_2} \frac{k_1!}{(k_1-h)!} \lambda _1^{k_1-h} u_1^{\otimes h} +O\left( |{\lambda }|^{k_1-h+1} \right) \qquad \quad \end{aligned}$$
(4.20)

as \(\lambda \rightarrow 0\).

Consider first j such that \(2j\le k_1\). Then, arguing as in the proof of Lemma 4.7 and taking into account (4.20) and the fact that

$$\begin{aligned} y_\omega -\frac{\pi }{2}= \frac{1}{2}\left( \omega -\frac{\pi }{2}\right) + O\left[ \left( \omega -\frac{\pi }{2}\right) ^2\right] \end{aligned}$$

when \(\omega \rightarrow \pi /2\), the first assertion follows.

Let now \(k_1\) be odd, so that \((k_1+1)/2\) is an integer. We shall prove that

$$\begin{aligned}2^{k_1+1} L_{(k_1+1)/2, \psi _{\pi /2}}a_{k_1,k_2,\pi /2}=b_{k_1,k_2,(k_1+1)/2}. \end{aligned}$$

The estimate in the statement is then a consequence of this by Taylor expansion.

Since \((\psi _{\pi /2}''(0)^{-1}\partial , \partial )^{\mu +(k_1+1)/2}\) is a differential operator of degree \(2\mu +k_1+1\) while \( [ (\psi _\omega -P_{2,0}\psi _\omega )^\mu a_{k_1,k_2,\omega } ]\) is infinitesimal of degree \(3\mu +k_1\) at 0, the only terms in sum (4.12) (with \(j=(k_1+1)/2\)) which are not zero are clearly those for which

$$\begin{aligned}2\mu +k_1+1 \ge 3\mu +k_1,\end{aligned}$$

namely \(\mu \le 1\). Consider first \(\mu =0\). Then, since \(\theta '(y_{\pi /2})=2\), by (4.13)

$$\begin{aligned} \begin{aligned} (\psi _{\pi /2}''(0)^{-1}\partial , \partial )^{(k_1+1)/2}a_{k_1,k_2,\pi /2}(0)&=i^{-(k_1+1)/2}\sum _{|{\alpha }|=(k_1+1)/2} \frac{[(k_1+1)/2]! }{2^{\alpha _1}\alpha !} \partial ^{2 \alpha } a_{k_1,k_2,\pi /2}(0). \end{aligned} \end{aligned}$$

Observe that, by (4.19), \(\partial ^{2\alpha } a_{k_1,k_2,\pi /2}(0)\ne 0\) only if \(\alpha =((k_1-1)/2)u_1+u_h\) for some \(h=1,\dots , m\). For the choice \(h=1\),

$$\begin{aligned} \partial ^{k_1+1}_1 a_{k_1,k_2,\pi /2}(0)=(-1)^{k_1} i^{k_2-n} \left( i\frac{\pi }{2} \right) ^{n+k_1+k_2-1} (k_1+1)! (n+k_1+k_2) \end{aligned}$$

while, for \(h=2,\dots , m\),

$$\begin{aligned} \partial ^{k_1-1}_1 \partial _h^2 a_{k_1,k_2,\pi /2}(0)=(-1)^{k_1}{i^{k_2-n} \left( i\frac{\pi }{2} \right) ^{n+k_1+k_2-1} k_1!} \end{aligned}$$

so that

$$\begin{aligned} (\psi _{\pi /2}''(0)^{-1}\partial , \partial )^{(k_1+1)/2}&a_{k_1,k_2,\pi /2}(0)\\ {}&= (-1)^{k_1}\frac{i^{k_2-n-\frac{k_1+1}{2}}}{2^{\frac{k_1+1}{2}}}\left( i\frac{\pi }{2}\right) ^{n+k_1+k_2-1}(k_1+1)! (n+k_1+k_2+m-1). \end{aligned}$$

Consider now \(\mu =1\). Then by (4.13)

$$\begin{aligned}&(\psi _{\pi /2}''(0)^{-1}\partial , \partial )^{(k_1+3)/2}\left[ (\psi _{\pi /2}-P_{2,0} \psi _{\pi /2}) a_{k_1,k_2,\pi /2} \right] (0)\\&\quad =i^{-(k_1+3)/2}\sum _{|{\alpha }|=(k_1+3)/2} \frac{[(k_1+3)/2]! }{2^{\alpha _1}\alpha !} \partial ^{2 \alpha } \left[ (\psi _{\pi /2}-P_{2,0}\psi _{\pi /2})a_{k_1,k_2,\pi /2} \right] (0). \end{aligned}$$

Since

$$\begin{aligned} \psi _{\pi /2}'''(0)= \pi u_1 \otimes u_1 \otimes u_1 + \frac{2}{\pi } \sum _{h=2}^m (u_1 \otimes u_h\otimes u_h+ u_h\otimes u_1 \otimes u_h + u_h\otimes u_h \otimes u_1 ), \end{aligned}$$

we deduce that the only \(\alpha \) for which we get a nonzero term in the above sum are \(u_1(k_1+1)/2+ u_h\) for \(h=1,\dots , m\). Now,

$$\begin{aligned} \partial _1^{k_1+3} \left[ (\psi _{\pi /2}-P_{2,0} \psi _{\pi /2}) a_{k_1,k_2,\pi /2}\right] (0)= \frac{(k_1+3)!}{ 3 !} (-1)^{k_1} i^{k_2-n} \pi \left( i \frac{\pi }{2}\right) ^{n+k_1+k_2} , \end{aligned}$$

while, for \(h=2,\dots , m\),

$$\begin{aligned} \partial _1^{k_1+1} \partial _h^2 \left[ (\psi _{\pi /2}-P_{2,0} \psi _{\pi /2})a_{k_1,k_2,\pi /2} \right] (0)= \frac{2}{\pi }(-1)^{k_1}{i^{k_2-n} \left( i\frac{\pi }{2}\right) ^{n+k_1+k_2}(k_1+1)! }. \end{aligned}$$

Therefore,

$$\begin{aligned} (\psi _{\pi /2}''(0)^{-1}&\partial , \partial )^{(k_1+3)/2}\left[ (\psi _{\pi /2}-P_{2,0} \psi _{\pi /2})a_{k_1,k_2,\pi /2} \right] (0)\\ {}&= (-1)^{k_1}i^{k_2-\frac{k_1+1}{2}}\frac{(k_1+1)!}{2^{(k_1+3)/2}}i^{-n} \left( i \frac{\pi }{2}\right) ^{n+k_1+k_2-1}(k_1+3) \left[ \frac{\pi ^2}{12}(k_1+2) + m-1 \right] \end{aligned}$$

from which one gets the asserted estimate. \(\square \)

Theorem 4.2 is now completely proved. In the following Table 1, we summarize the asymptotic behaviour, without remainders, of \(\varUpsilon (x,t)\).

Table 1 Asymptotic behaviour of \(\varUpsilon (x,t)\) in case I: principal part

4.2 The Other Cases

We now consider the case \(\omega \rightarrow +\infty \). We begin by showing that, when m is odd, matters can be reduced to the case \(m=1\).

Lemma 4.10

When m is odd, \(m\ge 3\),

$$\begin{aligned} p_{1,k_1,k_2}^{(m)}(x,t)= \sum _{k=1}^{\frac{m-1}{2}}\frac{c_{m,k}(-1)^k}{(2\pi )^{\frac{m-1}{2}}} \sum _{r=0}^{k_2} \left( {\begin{array}{c}k_2\\ r\end{array}}\right) \frac{(-1)^r (m-1-k)_r}{|t|^{m-1-k+r}}p_{1,k_1,k_2+k-r}^{(1)}(x,|t|),\qquad \quad \end{aligned}$$
(4.21)

where

$$\begin{aligned} c_{m,k}= \frac{(m-k-2)!}{2^{\frac{m-1}{2}-k}\left( {\frac{m-1}{2}} -k\right) ! (k-1)! } \end{aligned}$$

and \((m-1-k)_r = (m-1-k) \cdots (m-1-k+r-1)\) is the Pochhammer symbol.
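The Pochhammer symbol used here is the rising factorial; a one-line sketch (the helper name is ours):

```python
def pochhammer(x, r):
    # Rising factorial (x)_r = x (x+1) ... (x+r-1), with (x)_0 = 1
    out = 1
    for j in range(r):
        out *= x + j
    return out

# e.g. (m-1-k)_r for m = 5, k = 1, r = 3: (3)_3 = 3*4*5 = 60
assert pochhammer(5 - 1 - 1, 3) == 60
assert pochhammer(7, 0) == 1
```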

Proof

Let m be odd, \(m\ge 3\). We first pass to polar coordinates in (2.3) for \(k_2=0\) and get

$$\begin{aligned} p_{1,k_1,0}^{(m)}(x,t)= \frac{(-1)^{\frac{m-1}{2}}}{(2\pi )^m (4\pi )^n}\int _{0}^\infty \int _{S^{m-1}} e^{i\rho |{t}|( \sigma , u_1)}\,\mathrm {d}\sigma \, e^{-R\rho \coth (\rho )}a_{k_1,m-1}(\rho )\,\mathrm {d}\rho \end{aligned}$$

where \(\mathrm {d}\sigma \) is the \((m-1)\)-dimensional (Hausdorff) measure on \(S^{m-1}\) and \(a_{k_1,m-1}\) is the function defined in (3.1). Since the Bessel function is an elementary function when m is odd, one can prove that (see, e.g. [7, equation (6.5)] and references therein)

$$\begin{aligned} \int _{S^{m-1}} e^{i\rho |{t}|( \sigma , u_1)}\,\mathrm {d}\sigma = 2(2\pi )^{\frac{m-1}{2}}\mathrm {Re}\frac{e^{i\rho |{t}| }}{(\rho |{t}|)^{m-1}}\sum _{k=1}^{\frac{m-1}{2}} c_{m,k}(-i|{t}| \rho )^k. \end{aligned}$$

This yields

$$\begin{aligned} \begin{aligned} p^{(m)}_{1,k_1,0}(x,t)&= \sum _{k=1}^{\frac{m-1}{2}}\frac{c_{m,k}(-1)^k}{(2\pi )^{\frac{m-1}{2}}} \frac{1}{|t|^{m-1-k}} p^{(1)}_{1,k_1,k}(x,|t|) \end{aligned} \end{aligned}$$

which gives (4.21), since \(p_{1,k_1,k_2}^{(m)}(x,t)= \frac{\partial ^{k_2}}{\partial |t|^{k_2}}p_{1,k_1,0}^{(m)}(x,t)\) by definition. \(\square \)
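The surface-integral identity above can be tested numerically for small odd m, reducing \(\int _{S^{m-1}} e^{ia(\sigma ,u_1)}\,\mathrm {d}\sigma \) to the one-dimensional integral \(|S^{m-2}|\int _{-1}^1 e^{iau}(1-u^2)^{(m-3)/2}\,\mathrm {d}u\) by the standard slicing formula. A sketch with Simpson quadrature, for illustration only:

```python
import cmath
import math

def sphere_integral(m, a, n=20001):
    # ∫_{S^{m-1}} e^{i a (σ, u₁)} dσ via the standard slicing reduction:
    # |S^{m-2}| ∫_{-1}^{1} e^{iau} (1 - u²)^{(m-3)/2} du, by Simpson's rule
    area = 2 * math.pi ** ((m - 1) / 2) / math.gamma((m - 1) / 2)  # |S^{m-2}|
    h = 2.0 / (n - 1)
    total = 0j
    for j in range(n):
        u = -1 + j * h
        w = 1 if j in (0, n - 1) else (4 if j % 2 else 2)  # Simpson weights
        total += w * cmath.exp(1j * a * u) * max(1 - u * u, 0.0) ** ((m - 3) / 2)
    return area * (h / 3) * total

def c(m, k):
    # The coefficients c_{m,k} of the lemma
    return math.factorial(m - k - 2) / (
        2 ** ((m - 1) // 2 - k)
        * math.factorial((m - 1) // 2 - k)
        * math.factorial(k - 1))

def closed_form(m, a):
    # Right-hand side: 2 (2π)^{(m-1)/2} Re[ e^{ia}/a^{m-1} Σ_k c_{m,k} (-ia)^k ]
    s = sum(c(m, k) * (-1j * a) ** k for k in range(1, (m - 1) // 2 + 1))
    return 2 * (2 * math.pi) ** ((m - 1) // 2) * (cmath.exp(1j * a) / a ** (m - 1) * s).real

for m in (3, 5):
    assert abs(sphere_integral(m, 1.3).real - closed_form(m, 1.3)) < 1e-6
```

For \(m=3\) the closed form collapses to the familiar \(4\pi \sin (a)/a\).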

Corollary 4.11

Let m be odd. Then, when \((x,t)\rightarrow \infty \) and \(\delta \rightarrow 0^+\)

$$\begin{aligned} \begin{aligned} p_{1,k_1,k_2}^{(m)}(x,t)= \frac{(-1)^{k_2} \pi ^{k_1+k_2}}{2^{n-k_1+1 + \frac{m-1}{2}}}|t|^{n+k_1-1-\frac{m-1}{2}}e^{-\frac{1}{4}d(x,t)^2 } e^{-\kappa \rho (\delta )} \tilde{I}_{n+k_1-1}(\kappa \rho (\delta ))\left[ 1+ g(|{x}|,|t|) \right] ,\end{aligned} \end{aligned}$$
(4.22)

where g satisfies estimates (3.12).

Proof

If \(m=1\), the statement reduces to Corollary 3.14. Suppose then \(m\ge 3\). Since \(p^{(1)}_{1,k_1,r}\asymp p^{(1)}_{1,k_1,k_2}\) for every \(0\le r\le k_2\) by Corollary 3.14, the principal term in (4.21) corresponds to \(r=0\), \(k=\frac{m-1}{2}\). Hence,

$$\begin{aligned} p_{1,k_1,k_2}^{(m)}(x,t)=\frac{(-1)^{\frac{m-1}{2}}}{(2\pi )^{\frac{m-1}{2}}} |{t}|^{-\frac{m-1}{2}}p_{1,k_1,k_2+\frac{m-1}{2}}^{(1)}(x,|t|)\left[ 1+O\left( \frac{1}{|{t}|}\right) \right] . \end{aligned}$$
(4.23)

Now substitute the estimate given by Corollary 3.14 into (4.23). The remainder g in (4.22) still satisfies (3.12), since (3.12) is satisfied by \(1/|{t}|\). \(\square \)

Let now m be even, \(m\ge 2\). We start with a descent method, in the same spirit as [7]: indeed, observe that the Fourier inversion formula yields

$$\begin{aligned} p^{(m)}_{1,k_1,0}(x,t)=\int _\mathbb {R}p^{(m+1)}_{1,k_1,0}(x,(t,t_{m+1}))\,\mathrm {d}t_{m+1}, \end{aligned}$$

so that, by differentiating under the integral sign,

$$\begin{aligned} p^{(m)}_{1,k_1,k_2}(x,t)=\int _\mathbb {R}\frac{\partial ^{k_2}}{\partial |{t}|^{k_2}}p^{(m+1)}_{1,k_1,0}(x,(t,t_{m+1}))\,\mathrm {d}t_{m+1}. \end{aligned}$$

Observe now that \(|{(t,t_{m+1})}| = |{t}|\sqrt{1+ \frac{t_{m+1}^2}{|{t}|^2}}\). Therefore, if we define \(\mathfrak {I}^{k_2}:=\{h\in \mathbb {N}^{k_2}:\sum _{j=1}^{k_2} j h_j=k_2\}\), then Faà di Bruno’s formula applied twice leads to

$$\begin{aligned} p_{1,k_1,k_2}^{(m)}(x,t) = \sum _{h\in \mathfrak {I}^{k_2}} \frac{k_2!}{h!} \int _\mathbb {R}p_{1,k_1,|h|}^{(m+1)}(x,(t,t_{m+1}))\,F_{h}(t,t_{m+1})\,\mathrm {d}t_{m+1} \end{aligned}$$

where

$$\begin{aligned} \begin{aligned} F_{h}(t,t_{m+1})&= \prod _{j=1}^{k_2} \left( \sum _{\ell _1 +2\ell _2 = j} \frac{2^{\ell _1}}{\ell !} {(-1)^{|{\ell }|}} \left( {-\frac{1}{2}}\right) _{|\ell |}|t|^{1-j}\left( 1+\frac{t_{m+1}^2}{|t|^2}\right) ^{\frac{1}{2} -|\ell |}\right) ^{h_j}. \end{aligned} \end{aligned}$$

Since \(F_{(k_2,0,\dots ,0)}= \left( 1+\frac{t_{m+1}^2}{|t|^2}\right) ^{-k_2/2} \) while \(F_h=O\left( \frac{1}{|t|}\left( 1+\frac{t_{m+1}^2}{|t|^2}\right) ^{-1/2}\right) \) otherwise, we have proved the following lemma.

Lemma 4.12

When m is even, \(m\ge 2\),

$$\begin{aligned} p_{1,k_1,k_2}^{(m)}(x,t)= & {} \int _\mathbb {R}\left( 1+\frac{t_{m+1}^2}{|t|^2}\right) ^{-\frac{k_2}{2}} p_{1,k_1,k_2}^{(m+1)}(x,(t,t_{m+1})) \,\mathrm {d}t_{m+1} \\&\quad +\, O \left[ \frac{1}{|t|}\max _{0\le r< k_2}\int _\mathbb {R}\left( 1+\frac{t_{m+1}^2}{|{t}|^2}\right) ^{-\frac{1}{2}} p_{1,k_1,r}^{(m+1)}(x,(t,t_{m+1})) \,\mathrm {d}t_{m+1}\right] . \end{aligned}$$

As a consequence of Lemma 4.12, matters can be reduced to finding the asymptotic expansions of the integrals

$$\begin{aligned} \int _\mathbb {R}\left( 1+\frac{t_{m+1}^2}{|{t}|^2}\right) ^{\alpha } p_{1,k_1,r}^{(m+1)}(x,(t,t_{m+1}))\,\mathrm{d} t_{m+1} \end{aligned}$$
(4.24)

when \(\alpha \in \mathbb {R}\) and \(0\le r\le k_2\). From these, it will also be proved that the remainder in Lemma 4.12 is indeed smaller than the principal part, which a priori is not obvious.

To this end, we define the function \(\sigma :\mathbb {R}\ni s \mapsto \sqrt{1+s^2}\) and write \(t'=(t,t_{m+1}) \in \mathbb {R}^{m+1}\). It is straightforward to check that \(|{t'}|=|{t}|\sigma \left( \frac{t_{m+1}}{|{t}|}\right) \). Thus, define

$$\begin{aligned}\delta (s) :=\frac{\delta }{\sqrt{\sigma (s)}},\qquad \kappa (s):=\kappa \sqrt{\sigma (s)} = 2\pi |t|\delta \sqrt{\sigma (s)}.\end{aligned}$$

Obviously, \(\delta (0)=\delta \) and \(\kappa (0)=\kappa \). Moreover, if we denote with a prime the quantities introduced in Definition 2.3 relative to \(t'\), then

$$\begin{aligned} \delta '=\delta \left( \frac{t_{m+1}}{|t|}\right) , \qquad \kappa '=\kappa \left( \frac{t_{m+1}}{|{t}|}\right) . \end{aligned}$$
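The compatibility of the primed quantities with the rescalings \(\delta (s)\), \(\kappa (s)\) is purely algebraic; a quick numeric consistency check, with hypothetical values:

```python
import math

def sigma(s):
    return math.sqrt(1 + s * s)

# Hypothetical numeric instance: t ∈ ℝ³, extra coordinate t₄
t = [0.3, -1.2, 2.0]
t4 = 0.7
norm_t = math.sqrt(sum(v * v for v in t))
norm_tp = math.sqrt(norm_t ** 2 + t4 ** 2)
s = t4 / norm_t

# |t'| = |t| σ(t₄/|t|)
assert abs(norm_tp - norm_t * sigma(s)) < 1e-12

# With κ = 2π|t|δ, δ' = δ/√σ(s), and κ' = 2π|t'|δ', one gets κ' = κ √σ(s)
delta = 0.05
kappa = 2 * math.pi * norm_t * delta
delta_p = delta / math.sqrt(sigma(s))
kappa_p = 2 * math.pi * norm_tp * delta_p
assert abs(kappa_p - kappa * math.sqrt(sigma(s))) < 1e-12
```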

In cases II, III and IV, \(|t| \rightarrow \infty \) and \(\delta \rightarrow 0^+\). By substituting (4.22) into (4.24) and by the change of variable \(\frac{t_{m+1}}{|t|} \mapsto s\) in the integral, (4.24) equals

$$\begin{aligned} \frac{(-1)^{r} \pi ^{r+k_1}}{2^{n-k_1+1 + \frac{m}{2}}}|t|^{n+k_1-1 -\frac{m}{2}+1} e^{-\frac{1}{4}d(x,t)^2 } e^{-\kappa \rho (\delta )} \mathscr {I}_{2\alpha +n+k_1-1 -\frac{m}{2}}, \end{aligned}$$

where

$$\begin{aligned} \mathscr {I}_{\beta }= \int _{\mathbb {R}}\sigma (s)^{\beta } e^{-|t|\pi (\sigma (s) -1)}\tilde{I}_{n+k_1-1}\left( \kappa (s)\rho \left( \delta (s)\right) \right) \left[ 1+g(|{x}|,|t|\sigma (s))\right] \,\mathrm {d}s, \end{aligned}$$
(4.25)

and g satisfies estimates (3.12). Therefore, matters can be reduced to finding some asymptotic estimates of the integrals \(\mathscr {I}_{\beta }\).

4.3 II. Estimates for \(\delta \rightarrow 0^+\) and \(\kappa \rightarrow +\infty \).

Theorem 4.13

For \(\delta \rightarrow 0^+\) and \(\kappa \rightarrow +\infty \)

$$\begin{aligned} \begin{aligned} p_{1,k_1,k_2}(x,t)= \frac{(-1)^{k_2} \pi ^{k_1+k_2}}{ 4^n (\pi \delta )^{n+k_1-\frac{m+1}{2}} \sqrt{2 \pi \kappa ^m} }e^{-\frac{1}{4}d(x,t)^2}\left[ 1+O\left( \delta +\frac{1}{\kappa } \right) \right] . \end{aligned} \end{aligned}$$

Proof

When m is odd, the theorem is obtained by combining Theorem 3.6 with (4.23). Therefore, we only consider m even. By the preceding arguments, it will be sufficient to study \(\mathscr {I}_\beta \) in (4.25).

Since the argument of the modified Bessel function tends to \(+\infty \), we use development (3.13), which gives

$$\begin{aligned} \mathscr {I}_{\beta }= & {} \frac{(2\pi )^{-n-k_1} e^{\kappa \rho (\delta )}}{\delta ^{n+k_1-\frac{1}{2}}|t|^{n+k_1-\frac{1}{2}}} \int _\mathbb {R}e^{-|t|\varphi _\delta (s)}\frac{\sigma (s)^{\beta - \frac{1}{4} - \frac{n+k_1-1}{2}}}{\rho \left( \delta (s)\right) ^{n+k_1-\frac{1}{2}}}\\&\times \, \left[ 1+ O\left( \frac{1}{\delta |t|\sqrt{\sigma (s)}}\right) \right] \left[ 1+g(|{x}|,|t|\sigma (s))\right] \,\mathrm {d}s \end{aligned}$$

where

$$\begin{aligned} \varphi _\delta (s)= \pi [\sigma (s)-1]+ 2\pi \delta \left[ \rho (\delta ) - \sqrt{\sigma (s)}\rho \left( \delta (s)\right) \right] . \end{aligned}$$

We first study the principal part of the integral, to which we apply Laplace’s method (see Remark 2.8) with

$$\begin{aligned}\mathscr {F}=\{\varphi _\delta :\delta \in [0,\delta _2]\},\qquad \mathscr {G}= \left\{ \frac{\sigma (\cdot )^{\beta - \frac{1}{4} - \frac{n+k_1-1}{2}}}{\rho \left( \delta (\cdot )\right) ^{n+k_1-\frac{1}{2}}}:\delta \in [0,\delta _2]\right\} \end{aligned}$$

for some \(\delta _2\), smaller than the \(\delta _1\) of Lemma 3.9, to be determined.

  2.

    It is easily seen that \(\varphi _\delta (0)=0\). Moreover,

    $$\begin{aligned} \varphi _\delta '(s)= \pi \frac{s}{\sigma (s)}\left[ 1- \delta (s) \rho \left( \delta (s)\right) + \frac{\delta ^2}{\sigma (s)^{\frac{3}{2}}} \rho '\left( \delta (s)\right) \right] , \end{aligned}$$
    (4.26)

    so that \(\varphi _\delta '(0)=0\) and \(\varphi _\delta ''(0)= \pi ( 1-\delta \rho (\delta )+\delta ^2 \rho '(\delta ) )\). Observe that there is \(\delta _2>0\), which we may choose smaller than \(\delta _1\), such that

    $$\begin{aligned} 1- \delta (s) \rho \left( \delta (s)\right) + \frac{\delta ^2}{\sigma (s)^{\frac{3}{2}}} \rho '\left( \delta (s)\right) \ge \frac{1}{2} \end{aligned}$$
    (4.27)

    for every s and every \(\delta \in [0,\delta _2]\). Therefore, \(\varphi ''_\delta (0)\ge \frac{\pi }{2}\) for every \(\delta \in [0,\delta _2]\).

  3.

    By (4.26) and (4.27), for \(s\in \mathbb {R}\) and \(\delta \in (0,\delta _2)\),

    $$\begin{aligned} |{\varphi _\delta '(s)}| \ge \frac{\pi }{2\sigma (s)} |{s}|. \end{aligned}$$
    (4.28)

    In particular, \(|{\varphi _\delta '(s)}|\ge \frac{\pi }{2\sigma (2)} |{s}|\) for every \(s\in [-2,2]\).

  1.

    Observe that \(\varphi _\delta '(s)= \mathrm {sign}(s)|{\varphi '_\delta (s)}|\) by (4.26); then, by (4.28),

    $$\begin{aligned} \varphi _\delta (s)=\int _0^s \mathrm {sign}(s)\left|\varphi '_\delta (u)\right|\,\mathrm {d}u=\left|{\int _0^s \left|{\varphi '_\delta (u)}\right|\,\mathrm {d}u}\right|\ge \frac{\pi }{2\sigma (s)} \left|{\int _0^s |{u}|\,\mathrm {d}u}\right|\ge \frac{\pi s^2}{4\sigma (s)} \end{aligned}$$

    for every \(s\in \mathbb {R}\), since \(\sigma \) is even and increasing on \([0,\infty )\).

  4.

    By definition of \(\sigma \) and since \(\rho \) is continuous at zero, we get \(g(s) \lesssim |s|^{\beta - \frac{1}{4} - \frac{n+k_1-1}{2}}\) for \(s\rightarrow \infty \), uniformly in \(g \in \mathscr {G}\).

By Theorem 2.7, then,

$$\begin{aligned} \begin{aligned} \int _\mathbb {R}e^{-|{t}| \varphi _\delta (s)}\frac{\sigma (s)^{\beta - \frac{1}{4} - \frac{n+k_1-1}{2}}}{\rho \left( \delta (s)\right) ^{n+k_1-\frac{1}{2}}} \,\mathrm {d}s&= \sqrt{\frac{2}{|{t}|\left( 1-\delta \rho (\delta )+\delta ^2 \rho '(\delta )\right) }} \left[ 1+O\left( \frac{1}{|{t}|}\right) \right] \\&=\sqrt{\frac{2}{|{t}|}} \left[ 1+O\left( \delta +\frac{1}{|{t}|}\right) \right] . \end{aligned} \end{aligned}$$

The remainder can be treated similarly, and with the same arguments as above one gets

$$\begin{aligned}\begin{aligned} \int _\mathbb {R}e^{-|t|\varphi _\delta (s)}&\frac{\sigma (s)^{\beta - \frac{1}{4} - \frac{n+k_1-1}{2}}}{\rho \left( \delta (s)\right) ^{n+k_1-\frac{1}{2}}} {\left[ O\left( \frac{1}{\delta |t|\sqrt{\sigma (s)}}\right) + O\left( \frac{1}{\kappa }+\delta \right) \right] }\,\mathrm {d}s\\&\qquad \qquad = \sqrt{\frac{2}{|{t}|}} {\left[ 1+ O\left( \delta + \frac{1}{|{t}|}\right) \right] O\left( \frac{1}{\delta |{t}|}+\frac{1}{\kappa }+\delta \right) }=\sqrt{\frac{2}{|{t}| }}O\left( \frac{1}{\kappa } +\delta \right) \end{aligned}\end{aligned}$$

since \(\frac{1}{\delta |t|}= \frac{2\pi }{\kappa }=O\left( \frac{1}{\kappa }\right) \) and \(1/\sqrt{\sigma (s)}\le 1\) for every \(s\in \mathbb {R}\). The proof is complete. \(\square \)

4.4 III and IV. Estimates for \(\delta \rightarrow 0^+\) and \(\kappa \) bounded

These two cases can be treated together, and the principal part of \(p_{1,k_1,k_2}^{(m)}\) is easy to get. The remainders are more delicate: when passing from the m-dimensional variable t to the \((m+1)\)-dimensional variable \(t'\), the asymptotic conditions in II, III and IV do not correspond to those in II’, III’ and IV’ (these symbols standing for the cases relative to \(m+1\)); on the contrary, they mix together according to the values of the additional variable \(t_{m+1}\).

Theorem 4.14

Fix \(C>1\). If \(\delta \rightarrow 0^+\) while \(1/C\le \kappa \le C\), then

$$\begin{aligned} p_{1,k_1,k_2}(x,t)= \frac{(-1)^{k_2} \pi ^{k_1+k_2}}{ 4^n (\pi \delta )^{n+k_1-\frac{m+1}{2}} \kappa ^{\frac{m-1}{2}} }e^{-\frac{1}{4}d(x,t)^2 } e^{-\kappa } I_{n+k_1-1}(\kappa ) \left[ 1+O\left( \delta \right) \right] . \end{aligned}$$

When \(\kappa \rightarrow 0^+\) and \(|{t}|\rightarrow +\infty \)

$$\begin{aligned} \begin{aligned} p_{1,k_1,k_2}(x,t)= \frac{(-1)^{k_2} \pi ^{k_1+k_2}}{{2^{2n + \frac{m-1}{2}}}(n+k_1-1)!}|{t}|^{n+k_1-1-\frac{m-1}{2}}e^{-\frac{1}{4}d(x,t)^2} \left[ 1+O\left( \kappa +\frac{1}{|{t}|} \right) \right] . \end{aligned} \end{aligned}$$

Proof

The theorem holds when m is odd by Theorem 3.11 combined with (4.23). When m is even, we shall apply Laplace’s method to \(\mathscr {I}_\beta \). We first deal with the principal part. Define first

$$\begin{aligned} \varphi (s)= \pi \sigma (s) -\pi , \end{aligned}$$

so that Theorem 2.7 will be applied to

$$\begin{aligned} \mathscr {F}=\{\varphi \}, \qquad \mathscr {G}= \{\sigma (\cdot )^{\beta } \tilde{I}_{n+k_1-1}\left( \kappa (\cdot )\rho \left( \delta (\cdot )\right) \right) :\delta \in [0,\delta _1), \,\kappa \in [0,C]\} \end{aligned}$$

where \(\delta _1\) is that of Lemma 3.9.

  2.

    Notice that \(\varphi (0)=0\), that \(\varphi '(s)= \pi \frac{s}{\sigma (s)}\), and that \(\varphi ''(0)=\pi \).

  1.

    Observe that \(\varphi (s)=\pi \frac{s^2}{1+\sqrt{1+s^2}}\ge \pi \frac{s^2}{2+|{s}|}\), for every \(s\in \mathbb {R}\).

  3.

    It is easily seen that \(|{\varphi '(s)}|\ge \frac{\pi }{\sigma (1)} |{s}|\) for every \(s\in [-1,1]\).

  4.

    Recall that by (3.13)

    $$\begin{aligned} \tilde{I}_{n+k_1-1}(\kappa (s)\rho (\delta (s))) \lesssim e^{\kappa (s)\rho (\delta (s))}\lesssim e^{\kappa \sqrt{\sigma (s)}} \end{aligned}$$

    as \(s\rightarrow \infty \), uniformly as \(\kappa \in [0, C]\) and \(\delta \in [0,\delta _1)\). Hence, there is a constant \(c_1>0\) such that \(|{\sigma (s)^{\beta }\tilde{I}_{n+k_1-1}\left( \kappa (s)\rho \left( \delta (s)\right) \right) }|\le c_1 e^{c_1|{s}|} \).
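The elementary bounds in items 1 and 3 above can be confirmed numerically; a small sketch, for illustration only:

```python
import math

def sigma(s):
    return math.sqrt(1 + s * s)

def phi(s):
    # φ(s) = π σ(s) - π
    return math.pi * (sigma(s) - 1)

for i in range(-400, 401):
    s = i / 40.0  # sample s in [-10, 10]
    # item 1: φ(s) = π s²/(1 + σ(s)) ≥ π s²/(2 + |s|)
    assert phi(s) >= math.pi * s * s / (2 + abs(s)) - 1e-12
    # item 3: |φ'(s)| = π|s|/σ(s) ≥ (π/σ(1)) |s| on [-1, 1]
    if abs(s) <= 1:
        assert math.pi * abs(s) / sigma(s) >= math.pi * abs(s) / sigma(1) - 1e-12
```

The first bound follows from \(1+\sigma (s)\le 2+|s|\), since \(\sigma (s)\le 1+|s|\).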

Therefore, by Theorem 2.7

$$\begin{aligned} \int _\mathbb {R}e^{-|{t}|\varphi (s) }\sigma (s)^\beta \tilde{I}_{n+k_1-1}\left( \kappa (s)\rho \left( \delta (s) \right) \right) \,\mathrm {d}s=\sqrt{\frac{2}{|{t}|}} \tilde{I}_{n+k_1-1}(\kappa \rho (\delta ))\left[ 1+O\left( \frac{1}{|{t}|} \right) \right] \end{aligned}$$

uniformly in \(\kappa \) and \(\delta \). Since \(\tilde{I}_{n+k_1-1}(\kappa \rho (\delta ))-\tilde{I}_{n+k_1-1}(\kappa )=O(\kappa \rho (\delta )-\kappa )=O(\delta ^2)\) uniformly as \( \kappa \in [0, C]\) by Taylor’s formula, we are done with the principal part. We now deal with the remainders, namely

$$\begin{aligned} \mathscr {I}'_\beta = \int _\mathbb {R}e^{-|{t}|\varphi (s) } \sigma (s)^\beta \tilde{I}_{n+k_1-1}\left( \kappa (s) \rho \left( \delta (s) \right) \right) g( |{x}|, |{t}|\sigma (s))\,\mathrm {d}s \end{aligned}$$

where

$$\begin{aligned} g( |{x}|, |{t}|\sigma (s))= {\left\{ \begin{array}{ll} O\left( \delta (s)+\frac{1}{\kappa (s)}\right) &{}\text {if} \,{\delta (s)\rightarrow 0^+ \,\mathrm{and}\, \kappa (s)\rightarrow +\infty ,}\\ O(\delta (s)) &{}\text {if}\, {\delta (s)\rightarrow 0^+ \,\mathrm{and}\, \kappa (s){\in [{1}/{C'}, C' ]},}\\ O\left( \frac{1}{|{t}|\sigma (s)}+\kappa (s)\right) &{}\text {if}\,{ \delta (s)\rightarrow 0^+ \,\mathrm{and}\, \kappa (s)\rightarrow 0^+} \end{array}\right. } \end{aligned}$$

for every \(C'>1\). Since \(\delta (s)\le \delta \) for every \(s\in \mathbb {R}\), we may find some positive constants \(C''\), \(\delta _2\le \delta _1\), where \(\delta _1\) is that of Lemma 3.9, and \(\kappa _2\le \kappa _1\) such that

$$\begin{aligned} |{g(|{x}|, |{t}|\sigma (s))}|\le {\left\{ \begin{array}{ll} C''\left( \delta (s)+\frac{1}{\kappa (s)}\right) &{}\text {when}\, {\delta \le \delta _2,\; \kappa (s)\ge \kappa _1,}\\ C''\delta (s) &{}\text {when}\, {\delta \le \delta _2,\; \kappa _2\le \kappa (s)\le \kappa _1,}\\ C''\left( \frac{1}{|{t}|\sigma (s)}+\kappa (s)\right) &{}\text {when}\, {\delta \le \delta _2,\; \kappa (s)\le \kappa _2.} \end{array}\right. } \end{aligned}$$

We shall split the integrals accordingly. Notice first that we may also assume \(\kappa _2\le {1}/{(2 C)}\le 2C\le \kappa _1\) and, up to taking a smaller \(\delta _2\), that

$$\begin{aligned} \varphi (s)-2\pi \delta \sqrt{\sigma (s)}\rho \left( \delta (s) \right) \ge \frac{1}{2}|{s}| \end{aligned}$$

whenever \(|{s}|\ge 2\) and \(\delta \in [0,\delta _2)\).

Consider first case III, where \(\kappa \in [1/C,C]\). We split

$$\begin{aligned} \mathscr {I}'_\beta = \int _{\kappa (s) \le \kappa _1} + \int _{\kappa (s) \ge \kappa _1} = \mathscr {I}'_{\beta ,1} + \mathscr {I}'_{\beta ,2}. \end{aligned}$$

Observe that \(\kappa (s) \ge \kappa _1\) if and only if \(|s|\ge \sqrt{\frac{\kappa _1^4}{\kappa ^4}-1 } =:s_{1,\kappa }\ge 2\). Since

$$\begin{aligned}\tilde{I}_{n+k_1-1}(\kappa (s)\rho (\delta (s)) e^{-|{t}|\varphi (s)}=O\left( e^{|{t}|[2\pi \delta \sqrt{\sigma (s)}\rho (\delta (s))-\varphi (s) ] }\right) =O\left( e^{-\frac{1}{2}|{t}||{s}|}\right) \end{aligned}$$

as \(s\rightarrow \infty \), and since \(\delta =O\left( \frac{1}{\kappa }\right) =O(1)\) in case III, we get

$$\begin{aligned} \begin{aligned} |{\mathscr {I}'_{\beta ,2}}|&\le C' \left( \delta +\frac{1}{\kappa }\right) \int _{\begin{array}{c} |{s}| \ge s_{1,\kappa } \end{array} } \sigma (s)^{\beta -\frac{1}{2}}\tilde{I}_{n+k_1-1}\left( \kappa (s) \rho \left( \delta (s)\right) \right) e^{-|{t}|\varphi (s) } \,\mathrm {d}s =O\left( e^{-\frac{s_{1,\kappa }}{4} |{t}| } \right) , \end{aligned} \end{aligned}$$

which is negligible relative to the principal part. By Laplace’s method, moreover,

$$\begin{aligned} |{\mathscr {I}'_{\beta ,1}}|\le C'\delta \int _{|s|\le s_{1,\kappa } } \sigma (s)^{\beta -\frac{1}{2}} \tilde{I}_{n+k_1-1}\left( \kappa (s)\rho \left( \delta (s) \right) \right) e^{-|{t}|\varphi (s) } \,\mathrm {d}s =O\left( \delta \frac{1}{\sqrt{|{t}|}} \right) \end{aligned}$$

with the same arguments as above. This concludes the study of case III.

Consider now case IV, where \(\kappa \rightarrow 0^+\). We split

$$\begin{aligned}\mathscr {I}'_\beta = \int _{\kappa (s)\le \kappa _2} + \int _{\kappa _2 \le \kappa (s) \le \kappa _1} +\int _{\kappa (s) \ge \kappa _1} = \mathscr {I}'_{\beta ,1} +\mathscr {I}'_{\beta ,2} + \mathscr {I}'_{\beta ,3}. \end{aligned}$$

Observe that \(\kappa (s)\ge \kappa _2\) if and only if \(|{s}|\ge \sqrt{\frac{\kappa _2^4}{\kappa ^4}-1 }=:s_{2,\kappa }\), and \(s_{1,\kappa }\ge s_{2,\kappa }\ge 2\) if \(\kappa \) is sufficiently small. Exactly as above, we get

$$\begin{aligned} |{\mathscr {I}'_{\beta ,3}}| \le C' \left( \delta +\frac{1}{\kappa }\right) \int _{\begin{array}{c} |{s}| \ge s_{1,\kappa } \end{array} } \sigma (s)^{\beta -\frac{1}{2}} \tilde{I}_{n+k_1-1}\left( \kappa (s)\rho \left( \delta (s) \right) \right) e^{-|{t}|\varphi (s) } \,\mathrm {d}s =O\left( \frac{1}{\kappa } e^{-\frac{s_{1,\kappa }}{4} |{t}| } \right) \end{aligned}$$

which is negligible relative to the principal part. Then

$$\begin{aligned} |{\mathscr {I}'_{\beta ,2}}| \le C'\delta \int _{s_{2,\kappa }\le |{s}| \le s_{1,\kappa }} \sigma (s)^{\beta -\frac{1}{2}}\tilde{I}_{n+k_1-1}\left( \kappa (s) \rho \left( \delta (s) \right) \right) e^{-|{t}|\varphi (s) } \,\mathrm {d}s =O\left( \delta \, e^{-\frac{s_{2,\kappa }}{4} |{t}| } \right) , \end{aligned}$$

which is negligible relative to the principal part in case IV. Finally,

$$\begin{aligned} \begin{aligned} |{\mathscr {I}'_{\beta ,1}}|&\le C' \int _{\begin{array}{c} |{s}| \end{array}\le s_{2,\kappa }} \sigma (s)^{\beta } \tilde{I}_{n+k_1-1}\left( \kappa (s)\rho \left( \delta (s) \right) \right) e^{-|{t}|\varphi (s) }\left( \sqrt{\sigma (s)}\kappa +\frac{1}{\sigma (s)|{t}|} \right) \,\mathrm {d}s \\ {}&=O\left[ \frac{1}{\sqrt{|{t}|}}\left( \frac{1}{|{t}|}+\kappa \right) \right] , \end{aligned} \end{aligned}$$

by Laplace’s method as above. The proof is complete. \(\square \)

We can finally state the following corollary, which is the natural extension of Corollary 3.14.

Corollary 4.15

For \((x,t) \rightarrow \infty \) and \(\delta \rightarrow 0^+\)

$$\begin{aligned} \begin{aligned} p_{1,k_1,k_2}(x,t)= \frac{(-1)^{k_2} \pi ^{k_1+k_2}}{2^{n-k_1+1 + \frac{m-1}{2}}}|{t}|^{n+k_1-\frac{m+1}{2}}e^{-\frac{1}{4}d(x,t)^2 } e^{-\kappa \rho (\delta )}\tilde{I}_{n+k_1-1}(\kappa \rho (\delta )) \left[ 1+g(|{x}|,|{t}|)\right] , \end{aligned} \end{aligned}$$

where

$$\begin{aligned} g(|{x}|,|{t}|)={\left\{ \begin{array}{ll} O\left( \delta +\frac{1}{\kappa }\right) &{}\text {if }\, \delta \rightarrow 0^+ \text { and } \kappa \rightarrow +\infty ,\\ O(\delta ) &{}\text {if }\, \delta \rightarrow 0^+ \text { and } \kappa \in [1/C,C],\\ O\left( \frac{1}{|{t}|}+\kappa \right) &{}\text {if }\, \delta \rightarrow 0^+ \text { and } \kappa \rightarrow 0^+ \end{array}\right. } \end{aligned}$$

for every \(C>0\).

We have not been able to find a single function which displays the asymptotic behaviour of \(p_{1,k_1,k_2}(x,t)\) as \((x,t)\rightarrow \infty \), though we showed that the exponential decrease is the same in the four cases. This is also the decrease found by Eldredge [7, Theorems 4.2 and 4.4], for \(k_1=k_2=0\) and for the horizontal gradient, and by Li [17, Theorems 1.4 and 1.5], for \(k_1=k_2=0\). Notice that in [17, Theorem 1.5 and the following Remark (1)] the remainders for \(k_1=k_2=0\) seem to be better than those in our Corollary 4.15, but they reduce to ours once the estimates are developed in a more convenient form in cases II and IV, as we did in Theorems 4.13 and 4.14.

Remark 4.16

Our sharp estimates for \(p_{1,k_1,k_2}\) can be used to obtain asymptotic estimates of all the derivatives of the heat kernel \(p_1\). Indeed, Faà di Bruno’s formula leads to

$$\begin{aligned}&\frac{\partial ^{|{\gamma }|}}{\partial x^{\gamma _1} \partial t^{\gamma _2}} p_{1}(x,t)\nonumber \\&\quad =\gamma _1 ! \gamma _2 ! \sum _{\eta ,\mu ,\beta } \frac{|{\mu }|!2^{|{\mu _1}|-|{\gamma _1}|} }{\eta !\mu ! \beta !} \left[ \prod _{h=1}^{|{\mu }|} \left( \frac{\left( \frac{1}{2}\right) _h }{h !} \right) ^{\beta _h}\right] x^{\eta _1} \mathrm {sign}(t)^{\mu _1} |{t}|^{ |{\beta }|-|{\gamma _2}| } p_{1,|{\eta }|,|{\beta }|}(x,t),\nonumber \\ \end{aligned}$$
(4.29)

where the sum is extended to all \(\eta =(\eta _1,\eta _2)\in \mathbb {N}^{2n}\times \mathbb {N}^{2n}\), \(\mu =(\mu _1,\mu _2)\in \mathbb {N}^m\times \mathbb {N}^m\) and \(\beta \in \mathbb {N}^{|{\mu }|}\) such that

$$\begin{aligned} \gamma _1=\eta _1+2 \eta _2, \qquad \gamma _2=\mu _1+2\mu _2, \qquad \sum _{h=1}^{|{\mu }|} h \beta _h= |{\mu }|. \end{aligned}$$

However, the sharp asymptotic expansions we explicitly provided in Theorems 4.2, 4.13, and 4.14 may not be enough to obtain directly sharp asymptotic estimates of any desired derivative of \(p_1\): some cancellations among the principal terms may indeed occur in (4.29). Nevertheless, by inspecting case by case, the interested reader may take as many terms of the expansions given by Theorem 2.7 or Lemma 3.12 as necessary. In the case when \(t\rightarrow 0\), one may also make use of Lemma 4.8 before expanding each term: a suitable choice of N gets rid of the negative powers of \(|{t}|\) appearing in (4.29). In any case, our estimates for \(p_{1,k_1,k_2}\) lead to the sharp behaviour at infinity of \(\nabla _\mathcal {H}p_s\) and \(\mathcal {L}p_s\), as we shall see in the next section.

5 Sub-Riemannian Ornstein–Uhlenbeck operators

For every \(s>0\) consider the operator on \(L^2(p_s)\) given by

$$\begin{aligned} \mathcal {L}^{p_s}= \mathcal {L} - \frac{\nabla _\mathcal {H} {p_s}}{{p_s}} \cdot \nabla _\mathcal {H}{:C_c^\infty \rightarrow L^2(p_s)} \end{aligned}$$

which arises from the Dirichlet form \(\varphi \mapsto \int _{G} |{\nabla _\mathcal {H}\varphi (y)}| ^2 p_s(y)\, \mathrm {d}y\). For a fixed time \(s>0\), \(\mathcal {L}^{p_s}\) can be considered as a sub-Riemannian version of the classical Ornstein–Uhlenbeck operator (see [1, 18]). Arguing as in Strichartz [23, Theorem 2.4], it is not hard to see that \(\mathcal {L}^{p_s}\) with domain \(C_c^\infty (G)\) is essentially self-adjoint on \(L^2(p_s)\) for every \(s>0\). Let us then consider its closure, which we still denote by \(\mathcal {L}^{p_s}\).

Theorem 5.1

\(\mathcal {L}^{p_s}\) has purely discrete spectrum for all \(s>0\).

Theorem 5.1 is indeed due to Inglis [14], whose proof relies on super Poincaré inequalities. Instead, we reduce matters to studying a Schrödinger-type operator by conjugating \(\mathcal {L}^{p_s}\) with the isometry \(U_s:L^2(p_s) \rightarrow L^2\) defined by \(U_s f=f\sqrt{p_s}\) (see, e.g. [4, 19]). More precisely, we consider the operator \(U_s\, \mathcal {L}^{p_s} \, U_s^{-1} :L^2 \rightarrow L^2\). Simple computations then lead to \(U_s \mathcal {L}^{p_s}\, U_s^{-1}=\mathcal {L}+V_s\), where \(V_s\) is the multiplication operator given by the function

$$\begin{aligned} V_s= -\frac{1}{4}\frac{|{\nabla _\mathcal {H}p_s}|^2}{p_s^2} - \frac{1}{2}\frac{\mathcal {L}p_s}{p_s}=-\frac{1}{4}\frac{\sum _{j=1}^{2n} (X_j p_s)^2 }{p_s^2} { + \frac{1}{2}\frac{\sum _{j=1}^{2n} X_j^2 p_s}{p_s}.} \end{aligned}$$
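For readers who wish to see the conjugation identity in action, here is a minimal numerical sketch, in one Euclidean dimension and entirely outside the sub-Riemannian setting of this paper: with \(\mathcal {L}=-\mathrm {d}^2/\mathrm {d}x^2\), a Gaussian weight in place of \(p_s\), and central finite differences, the identity \(U\,\mathcal {L}^{p}\,U^{-1}=\mathcal {L}+V\) can be checked pointwise. All function names and parameters below are ours.

```python
import math

# Toy 1D check (illustrative only) of the ground-state transform
#   U L^p U^{-1} = L + V,   V = -(1/4)(p'/p)^2 + (1/2) p''/p,
# with L = -d^2/dx^2 and U g = g * sqrt(p), for a Gaussian weight p.

def p(x):
    return math.exp(-x * x)        # unnormalized Gaussian weight

def g(x):
    return math.sin(x)             # arbitrary smooth test function

def d1(F, x, h=1e-3):              # central first difference
    return (F(x + h) - F(x - h)) / (2 * h)

def d2(F, x, h=1e-3):              # central second difference
    return (F(x + h) - 2 * F(x) + F(x - h)) / (h * h)

def conjugated(x):
    # sqrt(p) * (L - (p'/p) d/dx) applied to g / sqrt(p), at the point x
    f = lambda y: g(y) / math.sqrt(p(y))
    Lp_f = -d2(f, x) - (d1(p, x) / p(x)) * d1(f, x)
    return math.sqrt(p(x)) * Lp_f

def schroedinger(x):
    # (L + V) g at the point x, with the potential from the transform
    V = -0.25 * (d1(p, x) / p(x)) ** 2 + 0.5 * d2(p, x) / p(x)
    return -d2(g, x) + V * g(x)

x0 = 0.3
print(abs(conjugated(x0) - schroedinger(x0)) < 1e-4)  # True
```

The two sides agree up to the finite-difference error, mirroring the computation that produces \(V_s\) above.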

The main ingredient of the proof is due to Simon [22, Theorem 2]. Given a potential V and \(M>0\), we define \(\varOmega _M :=\{ g\in G :\, V(g) \le M \}\). For a subset E of G, we write \(|{E}|\) to denote its measure with respect to \(\mathrm {d}y\).

Theorem 5.2

Let V be a potential bounded from below such that \(|{\varOmega _{M}}|<\infty \) for every \(M>0\). Then there exists a self-adjoint extension of \(\mathcal {L} +V\) with purely discrete spectrum.
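As an illustration of Theorem 5.2 (ours, with the classical Euclidean Laplacian in place of \(\mathcal {L}\)): the harmonic oscillator \(-\mathrm {d}^2/\mathrm {d}x^2+x^2\) has a potential bounded from below whose sublevel sets have finite measure, and its spectrum \(\{1,3,5,\dots \}\) is indeed purely discrete. The sketch below discretizes the operator on \([-8,8]\) with Dirichlet conditions (an assumption of the sketch) and counts eigenvalues below a threshold via Sturm sequences.

```python
# Finite-difference illustration (not from the paper) of Theorem 5.2 in the
# classical 1D case: H = -d^2/dx^2 + x^2 has exact eigenvalues 1, 3, 5, ...
# We truncate to [-L, L] with Dirichlet conditions and use the Sturm-count
# property of the LDL^T recurrence for the symmetric tridiagonal matrix.

def eigs_below(lam, n=800, L=8.0):
    """Number of eigenvalues of the discretized H strictly below lam."""
    h = 2 * L / (n + 1)
    diag = [2.0 / h**2 + (-L + (i + 1) * h) ** 2 for i in range(n)]
    off2 = (1.0 / h**2) ** 2          # squared off-diagonal entry
    count, d = 0, 1.0
    for i in range(n):
        d = diag[i] - lam - (off2 / d if i > 0 else 0.0)
        if d < 0:                     # one more eigenvalue below lam
            count += 1
    return count

# Thresholds 0, 2, 4, 6 separate the exact eigenvalues 1, 3, 5, ...
print([eigs_below(t) for t in (0.0, 2.0, 4.0, 6.0)])  # [0, 1, 2, 3]
```

The counts grow one by one, as expected of a purely discrete spectrum accumulating only at \(+\infty \).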

In order to apply Theorem 5.2, some estimates of the potential are needed; these are provided in the following proposition.

Proposition 5.3

When \((x,t)\rightarrow \infty \), \(V_s(x,t)\asymp s^{-2}d(x,t)^2\) for every \(s>0\).

Proof

Since \(V_s(x,t)=\frac{1}{s}V_1\left( \frac{x}{\sqrt{s}},\frac{t}{s}\right) \), it will be sufficient to consider \(V_1\) only. For every \((x,t)\in G\)

$$\begin{aligned} \begin{aligned} |{\nabla _\mathcal {H}p_1}|^2(x,t)= R\, p_{1,1,0}(x,t)^2+R\, p_{1,0,1}(x,t)^2, \end{aligned} \end{aligned}$$
(5.1)

while

$$\begin{aligned} \begin{aligned} \mathcal {L} p_1(x,t)&=-R\, p_{1,2,0}(x,t)-n \,p_{1,1,0}(x,t)- R\, p_{1,0,2}(x,t) { + \frac{R}{|{t}|}(m-1)p_{1,0,1}(x,t)}. \end{aligned} \end{aligned}$$

Hence,

$$\begin{aligned} V_1=-\frac{R}{4} \frac{ p_{1,1,0}^2+p_{1,0,1}^2 }{p_{1,0,0}^2}+\frac{R}{2}\frac{ p_{1,2,0}+p_{1,0,2}+\frac{n}{R} {p_{1,1,0}} - \frac{m-1}{|{t}|} p_{1,0,1} }{p_{1,0,0}}. \end{aligned}$$

In order to find the asymptotics for the potential, it turns out that only the principal term of \(p_{1,k_1,k_2}\) is necessary, and therefore, for the sake of simplicity, we shall avoid an explicit treatment of the remainders. If one is interested in a more detailed description of the behaviour of the potential, however, it is enough to use the remainders that we found in the previous sections.

I. If \(\omega \) runs through [0, C] for some \(C>0\), then \( \frac{y_\omega }{\sin (y_\omega )} \) and \(\frac{y_\omega }{\omega }\) are both bounded above and below by positive constants. Hence,

$$\begin{aligned} \begin{aligned} V_1 (x,t)&\sim -\frac{R}{4}\frac{y_\omega ^{2} }{\sin (y_\omega )^{2}}+\frac{R}{2}\frac{y_\omega ^{2} }{\sin (y_\omega )^{2}} =\frac{R}{4}\frac{y_\omega ^{2} }{\sin (y_\omega )^{2}}\asymp d(x,t)^2 \end{aligned} \end{aligned}$$

thanks to Theorem 4.2.

II. Let \(\delta \rightarrow 0^+\) and \(\kappa \rightarrow +\infty \). Then \(\frac{1}{R\delta }=o\left( \frac{1}{\delta ^2\sqrt{\kappa }} \right) \), and \(\kappa +\frac{|{t}|}{\sqrt{\kappa }}=o(|{t}|)\). Therefore, by Theorem 4.13,

$$\begin{aligned} \begin{aligned} V_1 (x,t)&\sim -\frac{R}{4}\frac{1}{\delta ^2}+\frac{R}{2}\frac{1}{\delta ^2} =\frac{\pi }{4}|{t}|\asymp d(x,t)^2. \end{aligned} \end{aligned}$$

III. Let \(\delta \rightarrow 0^+\) and \(\kappa \in [1/C,C]\) for some \(C>1\). Then \(\delta \asymp R\). Elementary computations yield

$$\begin{aligned} I'_{\nu }(\zeta )=\frac{I_{\nu -1}(\zeta )+I_{\nu +1}(\zeta )}{2},\end{aligned}$$

so that

$$\begin{aligned} (2 I_{\nu -1}I_{\nu +1}-I_\nu ^2)'(\zeta )=I_{\nu -2}(\zeta )I_{\nu +1}(\zeta )+I_{\nu -1}(\zeta )I_{\nu +2}(\zeta ) \end{aligned}$$

for all \(\nu \in \mathbb {Z}\) and for all \(\zeta \in \mathbb {C}\). Thus, \(2 I_{n-1}I_{n+1}-I_n^2\) is strictly increasing on \([0,\infty )\); since it vanishes at \(0\), it is strictly positive on \((0,\infty )\). Therefore, by Theorem 4.14

$$\begin{aligned} \begin{aligned} V_1 (x,t)&\sim -\frac{R}{4}\frac{\frac{1}{\delta ^2}I_{n}(\kappa )^2 }{I_{n-1}(\kappa )^2}+\frac{R}{2}\frac{\frac{1}{\delta ^2}I_{n+1}(\kappa ) +\frac{n}{R\delta } I_{n}(\kappa ) }{I_{n-1}(\kappa ) }\\&=\frac{\pi |{t}|}{4}\frac{\frac{2 n}{\kappa }I_n(\kappa )I_{n-1}(\kappa )+ 2 I_{n-1}(\kappa )I_{n+1}(\kappa )-I_n(\kappa )^2}{I_{n-1}(\kappa )^2}\asymp d(x,t)^2. \end{aligned} \end{aligned}$$
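The two Bessel identities used in case III admit a quick numerical sanity check. The following sketch is ours, with \(I_\nu \) implemented by its power series for integer \(\nu \), using \(I_{-n}=I_n\); it is of course no substitute for the computation.

```python
import math

# Numerical check of the identities
#   I_nu'(z) = (I_{nu-1}(z) + I_{nu+1}(z)) / 2,
#   (2 I_{nu-1} I_{nu+1} - I_nu^2)'(z) = I_{nu-2} I_{nu+1} + I_{nu-1} I_{nu+2},
# for integer order, via the power series of the modified Bessel function.

def besseli(nu, z, terms=40):
    nu = abs(nu)                      # I_{-n} = I_n for integer n
    return sum((z / 2) ** (nu + 2 * k)
               / (math.factorial(k) * math.factorial(k + nu))
               for k in range(terms))

def deriv(F, z, h=1e-5):              # central difference derivative
    return (F(z + h) - F(z - h)) / (2 * h)

nu, z = 2, 1.5

lhs1 = deriv(lambda w: besseli(nu, w), z)
rhs1 = (besseli(nu - 1, z) + besseli(nu + 1, z)) / 2

lhs2 = deriv(lambda w: 2 * besseli(nu - 1, w) * besseli(nu + 1, w)
             - besseli(nu, w) ** 2, z)
rhs2 = (besseli(nu - 2, z) * besseli(nu + 1, z)
        + besseli(nu - 1, z) * besseli(nu + 2, z))

print(abs(lhs1 - rhs1) < 1e-8, abs(lhs2 - rhs2) < 1e-8)  # True True
# Positivity of 2 I_{n-1} I_{n+1} - I_n^2 on (0, infinity), as claimed:
print(2 * besseli(nu - 1, z) * besseli(nu + 1, z) - besseli(nu, z) ** 2 > 0)
```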

IV. Finally, let \(\kappa \rightarrow 0^+\) and \( |{t}|\rightarrow +\infty \). Then \(|{t}|=o\left( \frac{1}{R}\right) \), so that

$$\begin{aligned} \begin{aligned} V_1(x,t)&\sim -\frac{R}{4} \frac{\frac{\pi ^2}{(n!) ^2}|{t}|^{2}+\frac{\pi ^2}{[(n-1)!] ^2} }{\frac{1}{[(n-1)!]^2}}+ \frac{R}{2}\frac{\frac{\pi ^2}{(n+1)!}|{t}|^{2} +\frac{\pi ^2}{(n-1)!}+ \frac{n\pi }{R\, n!} |{t}|+ \frac{(m-1)\pi }{(n-1)!|{t}|}}{\frac{1}{(n-1)!}}\\&\sim \frac{\pi }{2}|{t}| \asymp d(x,t)^2, \end{aligned} \end{aligned}$$

thanks to Theorem 4.14 again. \(\square \)

Remark 5.4

The estimates provided by Eldredge [7] are not sufficient to prove Proposition 5.3, not even in combination with precise estimates of \(\mathcal {L}p_1/p_1\). Indeed, as the proof above shows, in cases I, II, and III one has \(\mathcal {L} p_1/p_1\asymp |{\nabla _\mathcal {H}p_1}|^2/p^2_1\), so that no lower bound for \(V_1\) can be inferred. On the other hand, the upper bounds for the derivatives of \(p_s\) explicitly provided by Li [17] are not enough to describe the behaviour at infinity of \(V_s\).

Proof of Theorem 5.1

Since \(V_s\) is continuous and diverges at infinity by Proposition 5.3, the assumptions of Theorem 5.2 are fulfilled; this ensures the existence of a self-adjoint extension \((T_s,\mathscr {D}_s)\) of \((\mathcal {L}+V_s,C_c^\infty )\) with purely discrete spectrum. Since multiplication by \(\sqrt{p_s}\), namely \(U_s\), preserves \(C_c^\infty \), we have \(U_s^{-1}\mathscr {D}_s \supseteq C_c^\infty \); therefore, \((U_s^{-1} T_s U_s, U_s^{-1}\mathscr {D}_s)\) is a self-adjoint extension, with purely discrete spectrum, of \((\mathcal {L}^{p_s}, C_c^\infty )\), which is essentially self-adjoint. The result follows. \(\square \)