Abstract
On \(\mathbb R^N\) equipped with a normalized root system R, a multiplicity function \(k(\alpha ) > 0\), and the associated measure \(dw\),
let \(h_t(\mathbf{x},\mathbf{y})\) denote the heat kernel of the semigroup generated by the Dunkl Laplace operator \(\Delta _k\). Let \(d(\mathbf{x},\mathbf{y})=\min _{{g}\in G} \Vert \mathbf{x}-{g}(\mathbf{y})\Vert \), where G is the reflection group associated with R. We derive the following upper and lower bounds for \(h_t(\mathbf{x},\mathbf{y})\): for all \(c_l>1/4\) and \(0<c_u<1/4\) there are constants \(C_l,C_u>0\) such that
where \(\Lambda (\mathbf{x},\mathbf{y},t)\) can be expressed by means of some rational functions of \(\Vert \mathbf{x}-{g}(\mathbf{y})\Vert /\sqrt{t}\). An exact formula for \(\Lambda (\mathbf{x},\mathbf{y},t)\) is provided.
1 Introduction and statement of the results
On the Euclidean space \(\mathbb R^N\) equipped with a normalized root system R and a multiplicity function \(k(\alpha ) > 0\), let \(\Delta _k\) denote the Dunkl Laplace operator (see Sect. 2). Let \(dw(\mathbf{x})=w(\mathbf{x})\, d\mathbf{x}\) be the associated measure, where
\(w(\mathbf {x})=\prod _{\alpha \in R_{+}} |\langle \mathbf {x},\alpha \rangle |^{2k(\alpha )}\)   (1.1)
and \(R_{+}\) is a fixed positive subsystem of R. It is well known that \(\Delta _k\) generates a semigroup \(\{e^{t\Delta _k}\}_{t \ge 0}\) of linear operators on \(L^2(dw)\) which has the form
where \(h_t(\mathbf{x},\mathbf{y})>0\) is a smooth function called the Dunkl heat kernel.
The main goal of this paper is to prove upper and lower bounds for \(h_t(\mathbf{x},\mathbf{y})\). In order to state the result we need to introduce some notation.
For \(\alpha \in {R}\), let
\(\sigma _\alpha (\mathbf {x})=\mathbf {x}-\frac{2\langle \mathbf {x},\alpha \rangle }{\Vert \alpha \Vert ^2}\,\alpha \)   (1.2)
stand for the reflection with respect to the subspace perpendicular to \(\alpha \). Let G denote the Coxeter (reflection) group generated by the reflections \(\sigma _\alpha \), \(\alpha \in {R_{+}}\). We define the distance of the orbit of \(\mathbf{x}\) to the orbit of \(\mathbf{y}\) by
\(d(\mathbf {x},\mathbf {y})=\min _{{g} \in G}\Vert \mathbf {x}-{g}(\mathbf {y})\Vert .\)
Obviously,
It is well known that \(d(\mathbf{x},\mathbf{y})=\Vert \mathbf{x}-{g} (\mathbf{y})\Vert \) if and only if \(\mathbf{x}\) and \({g}(\mathbf{y})\) belong to the same (closed) Weyl chamber (see [6, Chapter VII, proof of Theorem 2.12]). Let
\(B(\mathbf {x},r)=\{\mathbf {y} \in \mathbb {R}^N: \Vert \mathbf {y}-\mathbf {x}\Vert \le r\}\)
stand for the (closed) Euclidean ball centered at \(\mathbf{x}\) of radius r. We denote by \(w(B(\mathbf {x},r))\) the dw-volume of the ball \(B(\mathbf {x},r)\).
For a finite sequence \(\varvec{\alpha }=(\alpha _1,\alpha _2,\ldots ,\alpha _m)\) of elements of \({R_{+}}\), \(\mathbf{x},\mathbf{y}\in \mathbb R^N\), and \(t>0\), let \(\ell (\varvec{\alpha })=m\) be the length of \(\varvec{\alpha }\), let
\(\sigma _{\varvec{\alpha }}=\sigma _{\alpha _m}\circ \sigma _{\alpha _{m-1}}\circ \cdots \circ \sigma _{\alpha _1},\)
and let \(\rho _{\varvec{\alpha }}(\mathbf {x},\mathbf {y},t)\) be the associated weight defined in (1.5).
For \(\mathbf{x},\mathbf{y}\in \mathbb R^N\), let \( n (\mathbf{x},\mathbf{y})=0\) if \( d(\mathbf{x},\mathbf{y})=\Vert \mathbf{x}-\mathbf{y}\Vert \) and
otherwise. In other words, \(n(\mathbf{x},\mathbf{y})\) is the smallest number of reflections \(\sigma _\alpha \) which are needed to move \(\mathbf{y}\) to a (closed) Weyl chamber which contains \(\mathbf{x}\) (see Sect. 2.3). We also allow \(\varvec{\alpha }\) to be the empty sequence, denoted by \(\varvec{\alpha } =\emptyset \). Then for \(\varvec{\alpha }=\emptyset \), we set: \(\sigma _{\varvec{\alpha }}=I\) (the identity operator), \(\ell (\varvec{\alpha })=0\), and \(\rho _{\varvec{\alpha }}(\mathbf {x},\mathbf {y},t)=1\) for all \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\) and \(t>0\).
We say that a finite sequence \(\varvec{\alpha }=(\alpha _1,\alpha _2,\ldots ,\alpha _m)\) of positive roots is admissible for the pair \((\mathbf{x},\mathbf{y})\in \mathbb R^N\times \mathbb R^N\) if \(n(\mathbf {x},\sigma _{\varvec{\alpha }}(\mathbf {y}))=0\). In other words, the composition \(\sigma _{\varvec{\alpha }}\) of the reflections \(\sigma _{\alpha _j}\) maps \(\mathbf{y}\) to a Weyl chamber which also contains \(\mathbf{x}\).
The set of all admissible sequences \(\varvec{\alpha }\) for the pair \((\mathbf{x},\mathbf{y})\) will be denoted by \(\mathcal A(\mathbf{x},\mathbf{y})\). Note that if \(n(\mathbf{x},\mathbf{y})=0\), then \(\varvec{\alpha }=\emptyset \in \mathcal A(\mathbf {x},\mathbf {y})\).
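The notions just introduced can be made concrete in a small numerical sketch. The following Python code is our own illustration (not part of the paper); NumPy is assumed, and all function names are ours. It builds the reflection group of the square, and computes \(d(\mathbf{x},\mathbf{y})\) and \(n(\mathbf{x},\mathbf{y})\) by brute force.

```python
import numpy as np

def positive_roots(m):
    """m positive roots of the dihedral system, each of length sqrt(2)."""
    return [np.sqrt(2.0) * np.array([np.cos(j * np.pi / m), np.sin(j * np.pi / m)])
            for j in range(m)]

def reflect(alpha, y):
    """sigma_alpha(y) = y - (2 <y, alpha> / ||alpha||^2) alpha; here ||alpha||^2 = 2."""
    return y - np.dot(y, alpha) * alpha

def group(roots):
    """The reflection group G as a list of 2x2 matrices (closure of the generators)."""
    gens = [np.eye(2) - np.outer(a, a) for a in roots]  # matrix of sigma_alpha
    G = [np.eye(2)]
    changed = True
    while changed:
        changed = False
        for g in list(G):
            for s in gens:
                h = s @ g
                if not any(np.allclose(h, k) for k in G):
                    G.append(h)
                    changed = True
    return G

def dist(G, x, y):
    """d(x, y) = min over g in G of ||x - g(y)||."""
    return min(np.linalg.norm(x - g @ y) for g in G)

def n_refl(roots, G, x, y, tol=1e-9):
    """Least number of reflections moving y into a closed Weyl chamber containing x."""
    target = dist(G, x, y)
    level = [np.asarray(y, dtype=float)]
    for steps in range(len(G) + 1):
        if any(np.linalg.norm(x - z) <= target + tol for z in level):
            return steps
        level = [reflect(a, z) for z in level for a in roots]

roots = positive_roots(4)     # symmetries of the square, |G| = 8
G = group(roots)
x = np.array([1.0, 0.2])
y = np.array([-1.0, 0.2])     # y = sigma_alpha(x) for the root alpha = sqrt(2)(1, 0)
```

For this pair, \(d(\mathbf{x},\mathbf{y})=0\) although \(\Vert \mathbf{x}-\mathbf{y}\Vert =2\), and the one-element sequence \((\alpha _0)\) with \(\alpha _0=\) `roots[0]` is admissible, since \(n(\mathbf{x},\sigma _{\alpha _0}(\mathbf{y}))=0\).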
Let us define
Note that for any \(c>1\) and for all \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\) and \(t>0\) we have
We are now in a position to state our main result about upper and lower bounds for the Dunkl heat kernel which are given by means of w-volumes of Euclidean balls, the function \(\Lambda (\mathbf {x},\mathbf {y},t)\), and \(d(\mathbf{x},\mathbf{y})\). Recall that \(k(\alpha )>0\) in the whole paper.
Theorem 1.1
Assume that \(0<c_{u}<1/4\) and \(c_l>1/4\). Then there are constants \(C_{u},C_{l}>0\) such that for all \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\) and \(t>0\) we have
Let us remark that this way of expressing estimates of the heat kernel is convenient in handling real harmonic analysis problems, because it allows us to apply methods from analysis on spaces of homogeneous type in the sense of Coifman and Weiss.
The proof of the theorem is based on an iteration procedure. In order to illustrate the method we start by proving upper and lower bounds for \(h_t(\mathbf{x},\mathbf{y})\) in the case where the root system is associated with the symmetries of a regular m-sided polygon in \(\mathbb R^2\), that is, when G is a dihedral group. In this case the formulation of the estimates and their proofs are much simpler.
Theorem 1.2
Assume that G is the group of symmetries of a regular m-sided polygon in \(\mathbb R^2\) centered at the origin and let R be the associated root system. Fix a positive subsystem \(R_+\) of R and set
Let \(0<c_{u}<1/4\) and \(c_l>1/4\). There are constants \(C_{u},C_{l}>0\) such that for all \(\mathbf {x},\mathbf {y} \in \mathbb {R}^2\) and \(t>0\) we have
Theorems 1.1 and 1.2 can be considered as improvements of the following estimates
obtained in [2, Theorems 3.1 and 4.4] (see Sect. 3 for more details).
A natural problem one can pose is to define a positive function \(H(\mathbf{x},\mathbf{y},t)\) by means of volumes of balls, reflections, and distances, such that \(C^{-1} \le h_t(\mathbf{x},\mathbf{y})/ H(\mathbf{x},\mathbf{y}, t)\le C\).
2 Preliminaries and notation
2.1 Basic definitions of Dunkl theory
In this section we present basic facts concerning the theory of the Dunkl operators. For more details we refer the reader to [3, 8, 10, 11].
We consider the Euclidean space \(\mathbb R^N\) with the scalar product \(\langle \mathbf {x},\mathbf{y}\rangle =\sum _{j=1}^N x_jy_j \), where \(\mathbf{x}=(x_1,\ldots ,x_N)\), \(\mathbf{y}=(y_1,\ldots ,y_N)\), and the norm \(\Vert \mathbf{x}\Vert ^2=\langle \mathbf{x},\mathbf{x}\rangle \).
A normalized root system in \(\mathbb R^N\) is a finite set \(R\subset \mathbb R^N\setminus \{0\}\) such that \(R \cap \alpha \mathbb {R} = \{\pm \alpha \}\), \(\sigma _\alpha (R)=R\), and \(\Vert \alpha \Vert =\sqrt{2}\) for all \(\alpha \in R\), where \(\sigma _\alpha \) is defined by (1.2). Each root system can be written as a disjoint union \(R = R_{+} \cup -R_{+}\), where \(R_{+}\), \(-R_{+}\) are separated by a hyperplane through the origin. Such a set \(R_{+}\) is called a positive subsystem. Its choice is not unique. In this paper, we will work with a fixed positive subsystem \(R_{+}\).
The finite group G generated by the reflections \(\sigma _{\alpha }\), \(\alpha \in {R_{+}}\), is called the Coxeter group (reflection group) of the root system. Clearly, \(|G|> |R_{+}|\).
A multiplicity function is a G-invariant function \(k:R\rightarrow \mathbb C\); throughout this paper the multiplicity function is fixed and satisfies \(k(\alpha ) > 0\) for all \(\alpha \in R\).
Let \(\mathbf {N}=N+\sum _{\alpha \in {R_{+}}}{2k(\alpha )}\). Then,
where w is the associated measure defined in (1.1). Observe that there is a constant \(C>0\) such that for all \(\mathbf {x} \in \mathbb {R}^N\) and \(r>0\) we have
\(C^{-1}\le \frac{w(B(\mathbf {x},r))}{r^{N}\prod _{\alpha \in R_{+}}(|\langle \mathbf {x},\alpha \rangle |+r)^{2k(\alpha )}}\le C,\)
so \(dw(\mathbf{x})\) is doubling, that is, there is a constant \(C>0\) such that
\(w(B(\mathbf {x},2r))\le C\, w(B(\mathbf {x},r)).\)   (2.2)
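In the rank-one case \(N=1\), \(R_{+}=\{\sqrt{2}\}\), the weight is \(w(x)=|x|^{2k}\) and the w-volume of an interval has a closed form, so the volume statements above can be verified directly. The following sketch is our own code (NumPy assumed); it checks the doubling property (2.2) and a two-sided comparison of \(w(B(x,r))\) with \(r\,(|x|+r)^{2k}\), the standard rank-one volume estimate.

```python
import numpy as np

K = 0.7                # a fixed multiplicity k(alpha) > 0 (any value works)
BOLD_N = 1 + 2 * K     # homogeneous dimension N + sum_{alpha} 2 k(alpha), here N = 1

def F(u):
    """Antiderivative of the weight w(u) = |u|^(2K) on the real line."""
    return np.sign(u) * np.abs(u) ** (2 * K + 1) / (2 * K + 1)

def w_ball(x, r):
    """w-volume of the ball B(x, r) = [x - r, x + r]."""
    return F(x + r) - F(x - r)

xs = np.linspace(-3.0, 3.0, 61)
rs = (0.1, 0.5, 1.0, 2.0)

# doubling ratios w(B(x, 2r)) / w(B(x, r)); the worst case 2**BOLD_N occurs at x = 0
ratios = [w_ball(x, 2 * r) / w_ball(x, r) for x in xs for r in rs]

# two-sided comparison of w(B(x, r)) with r * (|x| + r)^(2K)
comp = [w_ball(x, r) / (r * (abs(x) + r) ** (2 * K)) for x in xs for r in rs]
```

The doubling constant \(2^{\mathbf N}\) is attained at the origin, where the weight is most singular; away from the zero set of w the ratio approaches the Lebesgue value 2.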
For \(\xi \in \mathbb {R}^N\), the Dunkl operators \(T_\xi \) are the following k-deformations of the directional derivatives \(\partial _\xi \) by difference operators:
\(T_\xi f(\mathbf {x})=\partial _\xi f(\mathbf {x})+\sum _{\alpha \in R_{+}}k(\alpha )\,\langle \alpha ,\xi \rangle \,\frac{f(\mathbf {x})-f(\sigma _\alpha (\mathbf {x}))}{\langle \alpha ,\mathbf {x}\rangle }.\)
The Dunkl operators \(T_{\xi }\), which were introduced in [3], commute and are skew-symmetric with respect to the G-invariant measure dw.
Let us denote \(T_j=T_{e_j}\), where \(\{e_j\}_{1 \le j \le N}\) is a canonical orthonormal basis of \(\mathbb {R}^N\).
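In the rank-one case \(N=1\), \(R=\{\pm \sqrt{2}\}\), \(\sigma (x)=-x\), the Dunkl operator reduces to \(Tf(x)=f'(x)+k\,\frac{f(x)-f(-x)}{x}\), and its action on monomials can be checked by hand. The following sketch is our own sanity check (plain Python; names are ours), not code from the paper.

```python
K = 0.7  # a fixed multiplicity k > 0

def dunkl_T(f, df, x, k=K):
    """Rank-one Dunkl operator: (T f)(x) = f'(x) + k (f(x) - f(-x)) / x, for x != 0.
    f is the function and df its classical derivative."""
    return df(x) + k * (f(x) - f(-x)) / x

# On even functions the difference part vanishes: T x^2 = 2x.
even = dunkl_T(lambda u: u * u, lambda u: 2 * u, 1.5)
# On odd functions it contributes a zero-order term: T x = 1 + 2k, a constant.
odd = dunkl_T(lambda u: u, lambda u: 1.0, 1.5)
# Iterating, T(T(x^2)) = T(2x) = 2(1 + 2k).
lap = dunkl_T(lambda u: 2 * u, lambda u: 2.0, 1.5)
```

The last line is a rank-one instance of \(\Delta _k=\sum _j T_j^2\) acting on \(x^2\): the classical value 2 is deformed to \(2(1+2k)\), consistent with the homogeneous dimension \(\mathbf {N}=1+2k\).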
For fixed \(\mathbf{y}\in \mathbb R^N\) the Dunkl kernel \(E(\mathbf{x},\mathbf{y})\) is the unique analytic solution to the system
\(T_\xi E(\cdot ,\mathbf {y})=\langle \xi ,\mathbf {y}\rangle \,E(\cdot ,\mathbf {y}), \qquad E(0,\mathbf {y})=1.\)
The function \(E(\mathbf{x},\mathbf{y})\), which generalizes the exponential function \(e^{\langle \mathbf{x},\mathbf{y}\rangle }\), has a unique extension to a holomorphic function on \(\mathbb C^N\times \mathbb C^N\). Moreover, \(E(\mathbf {z},\mathbf {w})=E(\mathbf {w},\mathbf {z})\) for all \(\mathbf {z},\mathbf {w} \in \mathbb {C}^N\).
2.2 Dunkl Laplacian and Dunkl heat semigroup
The Dunkl Laplacian associated with R and k is the differential-difference operator \(\Delta _k=\sum _{j=1}^N T_{j}^2\), which acts on \(C^2(\mathbb {R}^N)\)-functions by
\(\Delta _k f(\mathbf {x})=\Delta f(\mathbf {x})+\sum _{\alpha \in R_{+}} 2k(\alpha )\Big (\frac{\langle \nabla f(\mathbf {x}),\alpha \rangle }{\langle \alpha ,\mathbf {x}\rangle }-\frac{f(\mathbf {x})-f(\sigma _\alpha (\mathbf {x}))}{\langle \alpha ,\mathbf {x}\rangle ^{2}}\Big ).\)
The operator \(\Delta _k\) is essentially self-adjoint on \(L^2(dw)\) (see for instance [1, Theorem 3.1]) and generates a semigroup \(H_t\) of linear self-adjoint contractions on \(L^2(dw)\). The semigroup has the form
\(H_t f(\mathbf {x})=\int _{\mathbb {R}^N} h_t(\mathbf {x},\mathbf {y})f(\mathbf {y})\, dw(\mathbf {y}),\)
where the heat kernel \(h_t(\mathbf {x},\mathbf {y})\) is a \(C^\infty \)-function of all the variables \(\mathbf{x},\mathbf{y}\in \mathbb {R}^N\), \(t>0\), and satisfies
Here and subsequently,
The following specific formula for the Dunkl heat kernel was obtained by Rösler [9]:
Here
and \(\mu _{\mathbf {x}}\) is a probability measure, which is supported in the convex hull \({\text {conv}}\mathcal {O}(\mathbf {x})\) of the orbit \(\mathcal O(\mathbf{x}) =\{{g}(\mathbf{x}): {g} \in G\}\).
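For the reader's orientation, the shape of Rösler's formula can be sketched in the standard notation of [2, 9]; the display below is our reconstruction, and the precise normalization of the constant \(c_k\) should be taken from [9].

```latex
% Rösler's representation of the Dunkl heat kernel (cf. (2.4)):
h_t(\mathbf{x},\mathbf{y})
  = \frac{1}{c_k\,(2t)^{\mathbf{N}/2}}
    \int_{\mathbb{R}^N} e^{-A(\mathbf{x},\mathbf{y},\eta)^2/(4t)}\, d\mu_{\mathbf{x}}(\eta),
\qquad
c_k=\int_{\mathbb{R}^N} e^{-\Vert\mathbf{x}\Vert^2/2}\,dw(\mathbf{x}),
% where (cf. (2.5))
A(\mathbf{x},\mathbf{y},\eta)
  = \sqrt{\Vert\mathbf{x}\Vert^2+\Vert\mathbf{y}\Vert^2-2\langle \mathbf{y},\eta\rangle}.
```

One checks that \(A(\mathbf {x},\mathbf {y},\eta )\ge d(\mathbf {x},\mathbf {y})\) for \(\eta \in {\text {conv}}\,\mathcal {O}(\mathbf {x})\), since \(\langle \mathbf {y},\eta \rangle \le \max _{g\in G}\langle \mathbf {y},g(\mathbf {x})\rangle \); this is the source of the Gaussian factors in the bounds discussed in this paper.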
One can easily check that
2.3 Weyl chambers and their properties
The closures of connected components of
\(\mathbb {R}^N\setminus \bigcup _{\alpha \in R}\{\mathbf {x} \in \mathbb {R}^N: \langle \mathbf {x},\alpha \rangle =0\}\)
are called (closed) Weyl chambers. Below we present some properties of the reflections and of the Weyl chambers which will be used in the next sections.
Lemma 2.1
Fix \(\mathbf{x},\mathbf{y}\in \mathbb R^N\) and \({g} \in G\). Then \(d(\mathbf{x},\mathbf{y})=\Vert \mathbf {x}-{g}(\mathbf{y})\Vert \) if and only if \({g}(\mathbf{y})\) and \(\mathbf{x}\) belong to the same Weyl chamber.
Proof
See [6, Chapter VII, proof of Theorem 2.12]. \(\square \)
Lemma 2.2
Let \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\) and assume that \(n(\mathbf{x},\mathbf{y})\ge 1\). Then there is \(\alpha \in {R_{+}}\) such that
Proof
If for a fixed \(\alpha \in R_+\), \( \Vert \mathbf{x}-\mathbf{y}\Vert \le \Vert \mathbf {x}-\sigma _\alpha (\mathbf{y})\Vert \), then \(\mathbf{x}\) and \(\mathbf{y}\) are situated in the same half-space bounded by \(\alpha ^{\perp }\) [6, Chapter VII, proof of Theorem 2.12]. Now, suppose towards a contradiction that \( \Vert \mathbf{x}-\mathbf{y}\Vert \le \Vert \mathbf {x}-\sigma _\alpha (\mathbf{y})\Vert \) for all \(\alpha \in R_+\). Then \(\mathbf{x}\) and \(\mathbf{y}\) belong to the same Weyl chamber, hence \(n(\mathbf{x},\mathbf{y})=0\), which contradicts our assumption. \(\square \)
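The half-space criterion quoted from [6] can also be seen from an elementary identity: since \(\Vert \alpha \Vert ^2=2\), we have \(\sigma _\alpha (\mathbf {y})=\mathbf {y}-\langle \alpha ,\mathbf {y}\rangle \alpha \), and expanding the square gives

```latex
\Vert\mathbf{x}-\sigma_\alpha(\mathbf{y})\Vert^2-\Vert\mathbf{x}-\mathbf{y}\Vert^2
  = 2\,\langle\alpha,\mathbf{x}\rangle\,\langle\alpha,\mathbf{y}\rangle .
```

Hence \(\Vert \mathbf{x}-\mathbf{y}\Vert \le \Vert \mathbf {x}-\sigma _\alpha (\mathbf{y})\Vert \) holds precisely when \(\langle \alpha ,\mathbf{x}\rangle \langle \alpha ,\mathbf{y}\rangle \ge 0\), that is, when \(\mathbf{x}\) and \(\mathbf{y}\) lie in the same closed half-space bounded by \(\alpha ^{\perp }\).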
Corollary 2.3
For any \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\) such that \(n(\mathbf {x},\mathbf {y})>0\) there are: \(1 \le m \le |G|\) and \(\varvec{\alpha }=(\alpha _1,\alpha _2,\ldots ,\alpha _m)\) such that
and
3 Auxiliary estimates for the heat kernel
In the present section we establish auxiliary estimates for the heat kernel which will be used for proving Theorems 1.1 and 1.2. Our starting point is the following proposition which is an improvement of the estimates (1.13).
Proposition 3.1
For any constants \({\widetilde{c}_{\ell }}>1/4\) and \( 0<{\widetilde{c}_{u}}<1/4\) there are positive constants \({\widetilde{C}_{\ell }},{\widetilde{C}_{u}}\), such that for all \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\) and \(t>0\) we have
Proof
To obtain the upper bound with any constant \(0<{\widetilde{c}_{u}}<1/4\) arbitrarily close to 1/4, we apply (2.4) together with (2.6) and get
where in the last inequality we have used the second inequality in (1.13) and the doubling property (2.2).
The lower bound in (3.1) with any constant \({\widetilde{c}_{\ell }}>1/4\) is Corollary 2.3 of Jiu and Li [7]. \(\square \)
We now turn to deriving estimates for the heat kernel which will be used for an iteration procedure.
Proposition 3.2
Let \({{\widetilde{c}_{\ell }}},{\widetilde{c}_{u}}\) be the constants from Proposition 3.1 and let \(c_1<{\widetilde{c}_{u}}\). There is \(C_1 \ge 1\) such that for all \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\) and \(t>0\) we have
Proof
The following formula was proved in [4, formula (3.5)]: for all \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\) and \(t>0\), we have
On the other hand, by (2.4),
Combining (3.4) with (3.5) we get
Note that \(I_2(t,\mathbf {x},\mathbf {y}){\ge }0\), and, thanks to our assumption \(k(\alpha )>0\), we have \({\mathbf {N}>N}\). So, by (3.6),
Now, taking the arithmetic mean of the lower bound in (3.1) with (3.7) we obtain (3.2), since there is a constant \(C>0\) such that for all \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\) and \(t>0\) we have
In order to prove (3.3), set \(\varepsilon =({\widetilde{c}_{u}}-c_1)/(2{\widetilde{c}_{u}})\). Clearly, by the assumption \(c_1<{\widetilde{c}_{u}}\), we have \({0<\varepsilon <\frac{1}{2}}\). To obtain (3.3), we split the integral for \(tI_2(t,\mathbf {x},\mathbf {y})\) as follows:
Clearly,
where for the equality we have applied (2.4). In order to estimate \(I_{2,2}(t,\mathbf {x},\mathbf {y})\), note that there are \(C_\varepsilon ,C_{\varepsilon }',C_{\varepsilon }''>0\) such that
In the last inequality we have used Proposition 3.1. Combining (3.6), (3.9), and (3.10) we obtain
which finally leads to (3.3), because, by our assumption, \({\mathbf{N}>N}\). \(\square \)
Observe that our basic upper and lower bounds [see (3.2) and (3.3)] are of the same type and they differ by the constants in the exponent of the first component.
From now on the constants \(C_1,c_1\) from Proposition 3.2 are fixed.
Remark 3.3
The estimate (3.3) together with (1.13) imply the known bounds
see [4, Theorem 3.1]. An alternative proof of (3.12) which uses a Poincaré inequality was announced by W. Hebisch.
4 The case of the dihedral group: proof of Theorem 1.2
Let \(D_m\) be a regular m-sided polygon in \(\mathbb R^2\), \(m\ge 3\), such that the related root system R consists of the 2m vectors
\(\alpha _j=\sqrt{2}\,\big (\cos (j\pi /m),\ \sin (j\pi /m)\big ), \qquad j=0,1,\ldots ,2m-1,\)
and the reflection group G consists of the reflections \(\sigma _{\alpha _j}\) and the rotations \(\sigma _{\alpha _j}\circ \sigma _{\alpha _i}\), \(0 \le i,j \le 2m-1\). Consequently, \(\max _{\mathbf{x},\mathbf{y}\in \mathbb R^2} n (\mathbf{x},\mathbf{y})=2\).
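The fact that \(n(\mathbf{x},\mathbf{y})\le 2\) in the dihedral case can be checked numerically. The following sketch is our own code (NumPy assumed; all names are ours): it generates the dihedral data for \(m=5\) and evaluates \(n(\mathbf{x},\mathbf{y})\) on random pairs by breadth-first search over reflections.

```python
import numpy as np

def dihedral_data(m):
    """Positive roots (length sqrt(2)) and the dihedral group of order 2m as matrices."""
    roots = [np.sqrt(2.0) * np.array([np.cos(j * np.pi / m), np.sin(j * np.pi / m)])
             for j in range(m)]
    refl = [np.eye(2) - np.outer(a, a) for a in roots]    # the m reflections sigma_alpha
    G = []
    for g in refl + [r @ s for r in refl for s in refl]:  # products of <= 2 reflections
        if not any(np.allclose(g, h) for h in G):
            G.append(g)
    return roots, G

def n_refl(roots, G, x, y, tol=1e-9):
    """Least number of reflections moving y into a closed Weyl chamber containing x."""
    target = min(np.linalg.norm(x - g @ y) for g in G)    # d(x, y)
    level = [y]
    for steps in range(3):                                # n <= 2 is expected here
        if any(np.linalg.norm(x - z) <= target + tol for z in level):
            return steps
        level = [z - np.dot(z, a) * a for z in level for a in roots]
    return None

roots, G = dihedral_data(5)
rng = np.random.default_rng(0)
values = [n_refl(roots, G, rng.standard_normal(2), rng.standard_normal(2))
          for _ in range(200)]
```

Every element of a dihedral group is either a reflection or a product of two reflections, so the search always terminates within two steps; this is exactly the case analysis \(n\in \{0,1,2\}\) used in the proof below.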
Proof of Theorem 1.2
Fix \(0<c_u<c_1\), where \(c_1\) is a constant from Proposition 3.2. Let us consider three cases depending on the value of \(n(\mathbf {x},\mathbf {y})\).
Case \(n(\mathbf{x},\mathbf{y})=0\). By the definition of \(n(\mathbf {x},\mathbf {y})\) [see (1.6)], in this case \(\Vert \mathbf{x}-\mathbf{y}\Vert =d(\mathbf{x},\mathbf{y})\). Hence Proposition 3.1 reads
which are the desired estimates, since \(\Lambda _D(\mathbf {x},\mathbf {y},t)=1\) in this case.
Case \(n(\mathbf{x},\mathbf{y})=1\). Then, by the definition of \(n(\mathbf {x},\mathbf {y})\) [see (1.6)], there is \(\alpha _0\in {R_{+}}\) such that \(n (\mathbf {x},\sigma _{\alpha _0}(\mathbf{y}))=0\), that is, \(\Vert \mathbf {x}-\sigma _{\alpha _0}(\mathbf{y})\Vert =d(\mathbf{x},\mathbf{y})\). Using (3.2), we get
where in the last inequality we have used (4.1).
In order to prove the upper bound, we use (3.3), Proposition 3.1 together with the inequality \(d(\mathbf {x},\mathbf {y}) \le \Vert \mathbf {x}-\mathbf {y}\Vert \) and obtain
Case \( n(\mathbf{x},\mathbf{y})=2\). In the proof of the upper and lower bounds we use the fact that, in this case, \(n(\mathbf {x},\sigma _\alpha (\mathbf{y}))=1\) for all \(\alpha \in {R_{+}}\).
We start by proving the lower bound. Using (3.2) we have
where in the last inequality we have used (4.2), since \(n(\mathbf {x}, \sigma _{\alpha }(\mathbf{y}))=1\) for all \(\alpha \in {R_{+}}\).
In order to obtain the upper bound, we apply (3.3) and then (4.3), and get
Let now \(\alpha _0\in {R_{+}}\) be such that \(\Vert \mathbf {x}-\sigma _{\alpha _0}(\mathbf{y})\Vert =\min _{\alpha \in {R_{+}}} \Vert \mathbf {x}-\sigma _{\alpha }(\mathbf{y})\Vert \). Then
(see Lemma 2.2). Thus, from (4.5) we conclude that
which implies the desired estimate (1.12).
\(\square \)
5 Proof of Theorem 1.1
5.1 Proof of the lower bound (1.9)
The proposition below combined with Corollary 2.3 implies (1.9).
Proposition 5.1
Assume that \({\widetilde{C}_{\ell }},{\widetilde{c}_{\ell }}\) are the constants from Proposition 3.1 and \(C_1\) is the constant from Proposition 3.2. For all \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\), \(t>0\), and \(\varvec{\alpha } \in \mathcal {A}(\mathbf {x},\mathbf {y})\) we have
Proof
The proof is by induction with respect to \(m=\ell (\varvec{\alpha })\). For \(m=0\) and \(m=1\) the claim is a consequence of Proposition 3.1 and (3.2) [see also (4.2)]. Assume that (5.1) holds for all \(\mathbf {x}_1,\mathbf {y}_1 \in \mathbb {R}^N\), \(t_1>0\), and \(\widetilde{\varvec{\alpha }} \in \mathcal {A}(\mathbf {x}_1,\mathbf {y}_1)\) such that \(\ell (\widetilde{\varvec{\alpha }})=m\). Let \(\varvec{\alpha }=(\alpha _1,\alpha _2,\ldots ,\alpha _{m+1}) \in \mathcal {A}(\mathbf {x},\mathbf {y})\) be such that \(\ell (\varvec{\alpha })=m+1\). By (3.2) we have
Note that \(\varvec{\alpha } \in \mathcal {A}(\mathbf {x},\mathbf {y})\) implies that the sequence \(\widetilde{\varvec{\alpha }}=(\alpha _2,\ldots ,\alpha _{m+1})\) belongs to \(\mathcal {A}(\mathbf {x},\sigma _{\alpha _1}(\mathbf {y}))\) and, obviously, \(\ell (\widetilde{\varvec{\alpha }})=m\). Therefore, the claim is a consequence of the induction hypothesis applied to \(\mathbf {x}\), \(\sigma _{\alpha _1}(\mathbf {y})\), and \(\widetilde{\varvec{\alpha }}\), and the fact that, by the definition of \(\rho _{\varvec{\alpha }}(\mathbf {x},\mathbf {y},t)\) and \(\rho _{\widetilde{\varvec{\alpha }}}(\mathbf {x},\sigma _{\alpha _1}(\mathbf {y}),t)\) (see (1.5)), we have
\(\square \)
5.2 Proof of the upper bound (1.10)
Let us begin with a corollary which follows from Proposition 5.1.
Corollary 5.2
Assume that \({\widetilde{c}_{\ell }}\) is the constant from Proposition 3.1. Then there is a constant \(C_2>0\) such that for all \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\) and \(t>0\) we have
Proof
If \(n(\mathbf{x},\mathbf{y})=0\), then (5.2) holds by Proposition 3.1, because \(d(\mathbf{x},\mathbf{y})=\Vert \mathbf{x}-\mathbf{y}\Vert \) in this case. For fixed \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\) such that \(n(\mathbf{x},\mathbf{y})\ge 1\), let \(\varvec{\alpha }=(\alpha _1,\alpha _2,\ldots ,\alpha _m)\), \(m\le |G|\), be as in Corollary 2.3. Then, thanks to (2.8), we have
so the claim follows by Proposition 5.1. \(\square \)
From now on the constant \(C_2\) from Corollary 5.2 is fixed.
Proposition 5.3
Let \(C_1{ \ge 1}\) and \(0<c_1<\widetilde{c}_u\) be the constants from Proposition 3.2. There is a constant \(c_2>{4C_1|G|}\) such that for all \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\) and \(t>0\) satisfying
we have
Remark 5.4
The condition \(c_2>{4C_1|G|}\) appears in the formulation of the proposition for technical reasons; it will be used later on in the proof of (1.10).
Proof
Thanks to (3.3) and the fact that \(0<h_t(\mathbf {x},\mathbf {y})<\infty \) it is enough to show that
To this end, by Corollary 5.2, we get
so (5.5) is a consequence of the fact that taking \(c_2>0\) large enough in (5.3), we have
\(\square \)
From now on the constant \(c_2\) from Proposition 5.3 is fixed.
Proposition 5.5
Assume that \({\widetilde{c}_{u}}\) is the constant from Proposition 3.1 and \(c_2\) is the same as in Proposition 5.3. Let \({0<}c_3<{\widetilde{c}_{u}}\). Then there is a constant \(C_3>0\) such that for all \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\) and \(t>0\) such that
there is \(\varvec{\alpha }\in \mathcal {A}(\mathbf {x},\mathbf {y})\), \(\ell (\varvec{\alpha }) \le |G|\), such that
Proof
If \(n(\mathbf {x},\mathbf {y})=0\), then one can take \(\varvec{\alpha }=\emptyset \) and the claim is a consequence of Proposition 3.1. Assume that \(n(\mathbf {x},\mathbf {y})>0\). For fixed \(\mathbf {x},\mathbf {y}\), let \(\varvec{\alpha }=(\alpha _1,\alpha _2,\ldots ,\alpha _m)\), \(m \le |G|\), be as in Corollary 2.3. If \(\Vert \mathbf {x}-\mathbf {y}\Vert \le c_2\sqrt{t}\), then the claim is satisfied by Proposition 3.1 and (2.8), and we may even take \(c_3={\widetilde{c}_{u}}\) in the inequality (5.6). If \(\Vert \mathbf {x}-\mathbf {y}\Vert \le c_2d(\mathbf {x},\mathbf {y})\), then by Proposition 3.1 we get
Moreover, the assumption \(\Vert \mathbf {x}-\mathbf {y}\Vert \le c_2d(\mathbf {x},\mathbf {y})\) implies
so the claim follows by the fact that there is \(C>0\) such that
where the second inequality is a consequence of (2.8). \(\square \)
From now on the constants \(C_3, c_3\) from Proposition 5.5 are fixed.
Proof of (1.10)
Let \(c_2\) be the constant from Proposition 5.3. Fix \(\mathbf {x},\mathbf {y} \in \mathbb {R}^N\) and \(t>0\) and consider
Note that \(G_0 \ne \emptyset \), because there is \({g}_0 \in G\) such that \(\Vert \mathbf {x}-{g}_0(\mathbf {y})\Vert =d(\mathbf {x},\mathbf {y})=d(\mathbf {x},{g}_0(\mathbf {y}))\), so the assumption (5.3) is not satisfied for \(\mathbf {x}\), \({g}_0(\mathbf {y})\), t. We will prove (1.10) for \(h_t(\mathbf {x},{g}(\mathbf {y}))\) for all \({g} \in G\). Note that, by the definition of \(G_0\), if \({g} \in G_0\), then by Proposition 5.5 we have
If \(G=G_0\), the proof is complete. Assume that \(G_0\ne G\). Consider all the values \(h_t(\mathbf {x},{g}(\mathbf {y}))\) for all \({g} \not \in G_0\) and list them in a decreasing sequence, that is, \(G\setminus G_0=\{{g}_1,{g}_2,\ldots ,{g}_m\}\) and
For \(1 \le j \le m\) let us denote
We will prove by induction on j that for all \(1 \le j \le m\) we have
where \(C_3,c_3\) are the constants from Proposition 5.5 and \(C_1\) is the constant from Proposition 3.2. We have already remarked that (5.10) is satisfied for \({g}\in G_0\) with \(j=0\) [see (5.7)].
Fix \(0\le j\le m-1\) and suppose that the estimate (5.10) holds for all \({g} \in G_j\). We will prove (5.10) for \({ g}_{j+1}\). Since \({g}_{j+1} \not \in G_0\),
[cf. (5.3)]. Hence, by Proposition 5.3, we get
Further, from (5.11) and the fact that \(c_2>{4C_1|G|}\), we conclude that
Let \(\alpha _0\in {R_+}\) be such that
It follows from (5.12) and (5.14) that
We claim that \(\sigma _{\alpha _0}\circ {g}_{j+1}\in G_j\). To prove the claim, aiming for a contradiction, suppose that \(\sigma _{\alpha _0}\circ g_{j+1}\in G\setminus G_j\). Then \(\sigma _{\alpha _0}\circ {g}_{j+1}\in \{{g}_{j+2},\ldots ,{g}_m\}\), because \(\sigma _{\alpha _0}\circ g_{j+1}\ne g_{j+1}\). Consequently, by (5.8), \(h_t(\mathbf{x},g_{j+1}(\mathbf{y}))\ge h_t(\mathbf{x},\sigma _{\alpha _0}\circ g_{j+1}(\mathbf{y}))\). Hence, using (5.15), we obtain
Since \(h_t>0\), applying (5.16) together with (5.13), we get
and we arrive at a contradiction. Thus the claim is established.
Thanks to the claim and the induction hypothesis, the estimate (5.10) already holds for \( \sigma _{\alpha _0}\circ {g}_{j+1}\); in particular, since \(2C_1|G|>1\), we have
Hence, utilizing (5.15) combined with (5.17), we obtain
For any sequence
from \(\mathcal {A}(\mathbf {x},\sigma _{\alpha _0}\circ {g}_{j+1}(\mathbf {y}))\) with \(\ell (\varvec{\alpha })\le |G|+j\), we define the new sequence
from \(\mathcal {A}(\mathbf {x},{g}_{j+1}(\mathbf {y}))\) satisfying \( \ell (\widetilde{\varvec{\alpha }})\le |G|+j+1\). Moreover,
So (5.10) for \( {g}_{j+1} \) follows from (5.18) and (5.19). The proof is complete.
\(\square \)
6 Applications of Theorem 1.1
6.1 Regularity of the heat kernel
The following theorem can be considered as an improvement of the estimates of [2, Theorem 4.1 (b)].
Theorem 6.1
Let m be a non-negative integer. There are constants \(C_4,c_4>0\) such that for all \(\mathbf {x},\mathbf {y},\mathbf {y}' \in \mathbb {R}^N\) and \(t>0\) satisfying \(\Vert \mathbf {y}-\mathbf {y}'\Vert <\frac{\sqrt{t}}{2}\) we have
The constant \(c_4\) does not depend on m.
In the proof of Theorem 6.1 we will need the following lemma (which is a Harnack type inequality).
Lemma 6.2
There are constants \(C_5,c_5>0\) such that for all \(\mathbf {x},\mathbf {y},\mathbf {y}' \in \mathbb {R}^N\) and \(t>0\) satisfying \(\Vert \mathbf {y}-\mathbf {y}'\Vert <\frac{\sqrt{t}}{2}\) we have
Proof
Fix \(\mathbf {x},\mathbf {y},\mathbf {y}' \in \mathbb {R}^{N}\) and \(t>0\) such that \(\Vert \mathbf {y}-\mathbf {y}'\Vert < \frac{\sqrt{t}}{2}\). By Theorem 1.1 we have
where
Let us consider a sequence \(\varvec{\alpha } \in \mathcal {A}(\mathbf {x},\mathbf {y})\) such that \(\ell (\varvec{\alpha }) \le 2|G|\). We shall prove that
If \(\ell (\varvec{\alpha })=0\), then (6.5) is trivial. If \(\varvec{\alpha }=(\alpha _1,\alpha _2,\ldots ,\alpha _m) \in \mathcal {A}(\mathbf {x},\mathbf {y})\) and \(\Vert \mathbf {y}-\mathbf {y}'\Vert <\frac{\sqrt{t}}{2}\), then for any \(1 \le j \le \ell (\varvec{\alpha })\), we have
Hence, by the definition of \(\rho _{\varvec{\alpha }}(\mathbf {x},\mathbf {y},t)\) [see (1.5)], we obtain (6.5).
Now, for \(\varvec{\alpha }\in \mathcal A(\mathbf{x},\mathbf{y})\), \( \ell (\varvec{\alpha })\le 2|G|\), we are going to define a new sequence \(\widetilde{\varvec{\alpha }}\) of elements of \({R_{+}}\) such that \(\widetilde{\varvec{\alpha }} \in \mathcal {A}(\mathbf {x},\mathbf {y}')\), \(\ell (\widetilde{\varvec{\alpha }}) \le 4|G|\), and
To this end, let us consider two cases.
Case 1. \(\varvec{\alpha } \in \mathcal {A}(\mathbf {x},\mathbf {y}')\). Then we set \(\widetilde{\varvec{\alpha }}:=\varvec{\alpha }\). Clearly, in this case (6.6) is satisfied.
Case 2. \(\varvec{\alpha } \not \in \mathcal {A}(\mathbf {x},\mathbf {y}')\). Let \(\varvec{\beta }=(\beta _1,\beta _2,\ldots ,\beta _{m_1})\) be a sequence from Corollary 2.3 chosen for the points \(\mathbf {x}\) and \(\sigma _{\varvec{\alpha }}(\mathbf {y}')\). In particular, by (2.8) together with the fact that \(\varvec{\alpha } \in \mathcal {A}(\mathbf {x},\mathbf {y})\) and \(\Vert \mathbf {y}-\mathbf {y}'\Vert < \frac{\sqrt{t}}{2}\), for any \(1 \le j \le \ell (\varvec{\beta })\) we have
Consequently,
We set
Then, by the choice of \(\varvec{\beta }\), we have \(\widetilde{\varvec{\alpha }} \in \mathcal {A}(\mathbf {x},\mathbf {y}')\), \(\ell (\widetilde{\varvec{\alpha }}) \le 4|G|\). Moreover, by the definition of \(\rho _{\varvec{\alpha }}(\mathbf {x},\mathbf {y}',t)\) and \(\rho _{\widetilde{\varvec{\alpha }}}(\mathbf {x},\mathbf {y}',t)\), and (6.7) we have
which implies (6.6).
Applying (6.5) and (6.6), we have
which, together with (6.3), gives
Note that \(\Vert \mathbf {y}-\mathbf {y}'\Vert <\frac{\sqrt{t}}{2}\) implies
Therefore, by (6.8) we get
Finally, (6.2) is a consequence of Proposition 5.1, (6.9), and (1.8). \(\square \)
Proof of Theorem 6.1
For \(x \in \mathbb {R}\) and \(t>0\) let us denote
Then the formula (2.4) reads
Observe that \(\partial _x\partial _t^m\widetilde{h}_t(x)\) is equal to \(\frac{x}{t^{m+1}}\, \widetilde{h}_t(x)\) times a polynomial in \(\frac{x^2}{t}\). Hence, for any non-negative integer m there is a constant \(C_m>0\) such that for all \(x \in \mathbb {R}\) and \(t>0\) we have
Further, by (6.10) and (6.11) we get
Finally, note that for any \(s \in [0,1]\), we have
so, by Lemma 6.2, we get
Now (6.12) together with (6.13) imply the desired estimate (6.1). \(\square \)
Remark 6.3
In the proof of Theorem 6.1, we partially repeat the argument from [2, Theorem 4.1 (b)]. The novelty of the approach is using Lemma 6.2 instead of Proposition 3.1 in estimating the last integral of (6.12).
6.2 Remark on a theorem of Gallardo and Rejeb
In [5], the authors proved that the points \({g}(\mathbf{x})\), \({g} \in G\), belong to the support of the measure \(\mu _{\mathbf{x}}\) (see [5, Theorem A 3)]). Below, as an application of the estimates (1.9) and (1.10), we provide another proof of this theorem. At the same time, the proof gives more precise information on the behavior of the measure \(\mu _{\mathbf{x}}\) around these points.
For \(\mathbf {y} \in \mathbb {R}^N\) and \(t>0\) we set
Theorem 6.4
There is a constant \(C_6>0\) such that for all \(\mathbf {x} \in \mathbb {R}^N\), \(t>0\), and \({g} \in G\) we have
Proof
Let \(\mathbf{y}={g}(\mathbf{x})\). Then \(d(\mathbf{x},\mathbf{y})=0\). Moreover, by the definition of \(A(\mathbf {x},\mathbf {y},\eta )\) [see (2.5)] and the fact that \(\Vert \mathbf {y}\Vert =\Vert \mathbf {x}\Vert \) we have
We first prove the upper bound in (6.16). Observe that \((\Vert \mathbf{x}\Vert ^2+\Vert \mathbf{y}\Vert ^2-2\langle \mathbf{y},\eta \rangle )/4 t \le 1/2\) for all \(\eta \in U(\mathbf{y}, t)\). Thus, applying (2.4), we get
where in the last inequality we have used (1.10).
We now turn to the proof of the lower bound in (6.16). From Theorem 1.1, the fact that \(d(\mathbf {x},\mathbf {y})=0\), (1.8), and the doubling property (2.2), we deduce that there is a constant \(C>0\), independent of \(\mathbf{x}\), \({g}\in G\), and \(t>0\), such that
Hence, using (2.4) applied to \(h_{2t}(\mathbf{x},{g}(\mathbf{x}))\) and \(h_t(\mathbf{x}, {g} (\mathbf{x}))\) together with (1.8) and the doubling property (2.2), we conclude that there is a constant \(\widetilde{c}\ge 0\) independent of \(\mathbf {x}\), \({g} \in G\), and \(t>0\) such that
Let \(M=4(\widetilde{c}+1)\). We rewrite (6.18) by splitting the domain of integration:
Observe that, by the definition of \(V(\mathbf {y},Mt)\) [see (6.15)], and the fact that \(M=4(\widetilde{c}+1)\), for all \(\eta \in V(\mathbf {y},Mt)\) we have
Consequently, (6.20) implies \(J_V\le \frac{1}{2}I_V\). Therefore, by (6.19), we get
Applying Theorem 1.1, we deduce that
Now the claim follows from (1.8) and the doubling property (2.2). \(\square \)
We want to remark that Theorem 6.4 extends the result of Jiu and Li [7, Theorem 2.1], where the behavior of \(\mu _{\mathbf{x}}\) around \(\mathbf{x}\) is studied.
References
Amri, B., Hammi, A.: Dunkl–Schrödinger operators. Complex Anal. Oper. Theory 13(3), 1033–1058 (2019)
Anker, J.-Ph., Dziubański, J., Hejna, A.: Harmonic functions, conjugate harmonic functions and the Hardy space \(H^1\) in the rational Dunkl setting. J. Fourier Anal. Appl. 25, 2356–2418 (2019)
Dunkl, C.F.: Differential-difference operators associated to reflection groups. Trans. Am. Math. Soc. 311(1), 167–183 (1989)
Dziubański, J., Hejna, A.: Remark on atomic decompositions for the Hardy space \(H^1\) in the rational Dunkl setting. Stud. Math. 251(1), 89–110 (2020)
Gallardo, L., Rejeb, Ch.: Support properties of the intertwining and the mean value operators in Dunkl theory. Proc. Am. Math. Soc. 146(1), 145–152 (2018)
Helgason, S.: Differential Geometry, Lie Groups, and Symmetric Spaces. Pure and Applied Mathematics 80, xv+628 pp. Academic Press, New York–London (1978)
Jiu, J., Li, Z.: On the representing measures of Dunkl’s intertwining operator. J. Approx. Theory 269, 10 (2021)
Rösler, M.: Positivity of Dunkl’s intertwining operator. Duke Math. J. 98(3), 445–463 (1999)
Rösler, M.: A positive radial product formula for the Dunkl kernel. Trans. Am. Math. Soc. 355(6), 2413–2438 (2003)
Rösler, M.: Dunkl operators (theory and applications). In: Koelink, E., Van Assche, W. (eds.) Orthogonal Polynomials and Special Functions, (Leuven, 2002). Lect. Notes Math. 1817, pp. 93–135. Springer, Berlin (2003)
Rösler, M., Voit, M.: Dunkl theory, convolution algebras, and related Markov processes. In: Graczyk, P., Rösler, M., Yor, M. (eds.) Harmonic and Stochastic Analysis of Dunkl Processes. Travaux en cours 71, pp. 1–112. Hermann, Paris (2008)
Acknowledgements
The authors want to thank Jean-Philippe Anker for drawing their attention to some references. They are also greatly indebted to the reviewer for a careful reading of the manuscript and for helpful comments and suggestions which improved the presentation of the paper.
Communicated by L. Szekelyhidi.