
Constructive Approximation, Volume 41, Issue 2, pp. 185–218

Analysis Related to All Admissible Type Parameters in the Jacobi Setting

  • Adam Nowak
  • Peter Sjögren
  • Tomasz Z. Szarek
Open Access Article

Abstract

We derive an integral representation for the Jacobi–Poisson kernel valid for all admissible type parameters \(\alpha ,\beta \) in the context of Jacobi expansions. This enables us to develop a technique for proving standard estimates in the Jacobi setting that works for all possible \(\alpha \) and \(\beta \). As a consequence, we can prove that several fundamental operators in the harmonic analysis of Jacobi expansions are (vector-valued) Calderón–Zygmund operators in the sense of the associated space of homogeneous type, and hence their mapping properties follow from the general theory. The new Jacobi–Poisson kernel representation also leads to sharp estimates of this kernel. The paper generalizes methods and results existing in the literature but valid or justified only for a restricted range of \(\alpha \) and \(\beta \).

Keywords

Jacobi expansion · Jacobi–Poisson kernel · Maximal operator · Riesz transform · Square function · Spectral multiplier · Calderón–Zygmund operator

Mathematics Subject Classification

Primary 42C05; Secondary 42C10

1 Introduction

This paper is a continuation and completion of the research performed recently in [28] by the first and second authors. Given parameters \(\alpha ,\beta > -1\), consider the Jacobi differential operator
$$\begin{aligned} {\mathcal {J}}^{\alpha ,\beta } = - \frac{\mathrm{d}^2}{\mathrm{d}\theta ^2} - \frac{\alpha -\beta +(\alpha +\beta +1)\cos \theta }{\sin \theta } \frac{\mathrm{d}}{\mathrm{d}\theta } + \left( \frac{\alpha +\beta +1}{2}\right) ^2 \end{aligned}$$
on the interval \([0,\pi ]\) equipped with the (doubling) measure
$$\begin{aligned} \mathrm{d}\mu _{\alpha ,\beta }(\theta ) = \left( \sin \frac{\theta }{2} \right) ^{2\alpha +1} \left( \cos \frac{\theta }{2}\right) ^{2\beta +1} \,\mathrm{d}\theta . \end{aligned}$$
This operator, acting initially on \(C_c^2(0,\pi )\), has a natural self-adjoint extension in \(L^2(\mathrm{d}\mu _{\alpha ,\beta })\), whose spectral decomposition is discrete and given by the classical Jacobi polynomials. Various aspects of harmonic analysis related to the Jacobi setting have been studied in the literature. This line of research goes back to the seminal work of Muckenhoupt and Stein [26], in which the ultraspherical case (\(\alpha =\beta \)) was investigated. Later, several other authors contributed to the subject, see [28, Section 1] and also the end of [28, Section 2] for a detailed account and references. Actually, for the sake of completeness, that account should be augmented by further references, like [3, 4, 6, 13, 14, 15, 16, 19, 20, 23]. Certain extensions of the ultraspherical and Jacobi settings related to Dunkl’s theory were investigated from the harmonic analysis perspective in [24, 25].
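Before proceeding, a quick numerical sanity check on the measure \(\mathrm{d}\mu _{\alpha ,\beta }\) above (ours, not part of the paper): the substitution \(s=\sin ^2(\theta /2)\) shows that its total mass is the Beta integral, \(\mu _{\alpha ,\beta }([0,\pi ]) = B(\alpha +1,\beta +1) = \Gamma (\alpha +1)\Gamma (\beta +1)/\Gamma (\alpha +\beta +2)\). A minimal stdlib sketch (the helper names `mu_mass`, `beta_fn` are ours):

```python
import math

def mu_mass(alpha: float, beta: float, n: int = 100_000) -> float:
    """Midpoint-rule approximation of mu_{alpha,beta}([0, pi])."""
    h = math.pi / n
    s = 0.0
    for k in range(n):
        th = (k + 0.5) * h
        s += math.sin(th / 2) ** (2 * alpha + 1) * math.cos(th / 2) ** (2 * beta + 1)
    return s * h

def beta_fn(a: float, b: float) -> float:
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

# we test alpha, beta >= -1/2 only, so the integrand stays bounded and the
# crude midpoint rule is accurate; the identity itself holds for alpha, beta > -1
for a, b in [(0.5, 1.5), (-0.4, 2.0), (-0.25, -0.4)]:
    assert abs(mu_mass(a, b) - beta_fn(a + 1, b + 1)) < 1e-4
```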

The main result of [28] is restricted to \(\alpha ,\beta \ge -1/2\). It states that several fundamental operators in the harmonic analysis of Jacobi expansions, including Riesz transforms, imaginary powers of the Jacobi operator, the Jacobi–Poisson semigroup maximal operator, and Littlewood–Paley–Stein type square functions, are (vector-valued) Calderón–Zygmund operators. Consequently, their \(L^p\) mapping properties follow from the general theory. The proofs in [28] rely on an integral formula for the Jacobi–Poisson kernel derived in [28] from a product formula for Jacobi polynomials due to Dijksma and Koornwinder [17]. Unfortunately, the latter result is not valid if either \(\alpha < -1/2\) or \(\beta < -1/2\), and this limitation is inherited by the above-mentioned Jacobi–Poisson kernel representation. Thus the technique of proving estimates for kernels defined via the Jacobi–Poisson kernel developed in [28] is designed for the case \(\alpha ,\beta \ge -1/2\). The object of the present paper is to eliminate this restriction in the parameter values, which will require some new techniques.

Our method starts with the deduction of an integral representation of the Jacobi–Poisson kernel, valid for all \(\alpha ,\beta > -1\), see Proposition 2.3. This formula contains as a special case the one obtained in [28, Proposition 4.1] for \(\alpha ,\beta \ge -1/2\) and is more involved if either \(\alpha \) or \(\beta \) is less than \(-1/2\). Then we establish a suitable generalization to all \(\alpha ,\beta >-1\) of the strategy employed in [28] to prove standard estimates [see (15)–(17) below] for kernels expressible via the Jacobi–Poisson kernel. To achieve this, some essentially new arguments are required, and the method allows a unified treatment of all parameter values \(\alpha ,\beta > -1\).

As an application of these techniques, we prove that the maximal operator of the Jacobi–Poisson semigroup, the Riesz–Jacobi transforms, Littlewood–Paley–Stein type square functions and multipliers of Laplace and Laplace–Stieltjes transform type are scalar-valued or vector-valued Calderón–Zygmund operators, in the sense of the space of homogeneous type \(([0,\pi ],\mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\); see Theorem 5.1. This extends to all \(\alpha ,\beta > -1\) several results for \(\alpha ,\beta \ge -1/2\) obtained in [28] and earlier papers, as well as results on the two kinds of Laplace transform type multipliers that follow from the recent work of Langowski [22]. Our technique is well suited to a wider variety of operators, including more general forms of \(g\)-functions and Lusin area type integrals. In a similar spirit, analogous problems concerning analysis for “low” values of type parameters were recently investigated in the Laguerre [30], Bessel [8], and certain Dunkl [9] settings.

The Jacobi–Poisson kernel representation derived in Proposition 2.3 makes it possible to describe the exact behavior of the kernel; see Theorem 6.1. The sharp estimates we prove extend to all \(\alpha ,\beta >-1\) the bounds found not long ago by Nowak and Sjögren [29, Theorem A.1 in the Appendix] under the restriction \(\alpha ,\beta \ge -1/2\). An important application of Theorem 6.1 is given by the sharp estimates for potential kernels in the Jacobi and Fourier–Bessel settings proved recently by Nowak and Roncal [27]. Moreover, Theorem 6.1 readily implies explicit sharp bounds for the nonspectral variant of the Jacobi–Poisson kernel sometimes called the Watson kernel and given by (see [2, Lecture 2] or [1, p. 385])
$$\begin{aligned} \sum _{n=0}^{\infty } r^n \frac{P_n^{\alpha ,\beta }(x)P_n^{\alpha ,\beta }(y)}{h_n^{\alpha ,\beta }}. \end{aligned}$$
Here \(0<r<1, x,y \in [-1,1], P_n^{\alpha ,\beta }\) are the classical Jacobi polynomials, and \(h_n^{\alpha ,\beta }\) are suitable normalizing constants. Recently an upper bound for the Watson kernel was obtained by Calderón and Urbina [5], and some earlier results in this spirit can be found in [4, 6, 14, 23] (see also [15]). We remark that our results concerning mapping properties of the Jacobi–Poisson semigroup maximal operator, see Corollary 5.2, lead in a straightforward manner to analogous results for the maximal operator related to the Watson kernel and investigated in [3, 4, 5, 6, 16].

It is worth noting that there are further interesting applications of our Jacobi–Poisson kernel representation. For instance, in [7] it is used to obtain a principal value integral representation for the Riesz–Jacobi transforms. On the other hand, in [10, 11, 12, 22, 34] (see also [35]), the authors make use of the integral representation for the Jacobi–Poisson kernel derived in [28, Proposition 4.1], which is restricted to \(\alpha ,\beta \ge -1/2\). The Jacobi–Poisson kernel formula obtained in Proposition 2.3 should thus make it possible to extend the relevant results in these papers to a wider range of \(\alpha ,\beta \). This, however, remains to be investigated.

The paper is organized as follows. In Sect. 2, we derive an integral representation of the Jacobi–Poisson kernel valid for all \(\alpha ,\beta > -1\). Section 3 contains various facts and preparatory results needed for kernel estimates. In Sect. 4, we prove standard estimates for kernels associated with the operators mentioned above. This leads to our main results in Sect. 5, saying that the operators in question can be interpreted as Calderón–Zygmund operators and giving, as a consequence, their \(L^p\) mapping properties. Finally, Sect. 6 is devoted to sharp estimates of the Jacobi–Poisson kernel.

Throughout the paper, we use a fairly standard notation with essentially all symbols referring to the space of homogeneous type \(([0,\pi ],\mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\). Since the distance in this space is the Euclidean one, the ball denoted \(B(\theta ,r)\) is simply the interval \((\theta -r,\theta +r)\cap [0,\pi ]\). When writing estimates, we will frequently use the notation \(X \lesssim Y\) to indicate that \(X \le CY\) with a positive constant \(C\) independent of significant quantities. We shall write \(X \simeq Y\) when simultaneously \(X \lesssim Y\) and \(Y \lesssim X\).

2 The Jacobi–Poisson Kernel

Let \(\alpha ,\beta > -1\). The Jacobi–Poisson kernel is given by (see [28, Section 2])
$$\begin{aligned} H_t^{\alpha ,\beta }(\theta ,\varphi ) = \sum _{n=0}^{\infty } \mathrm{e}^{-t\left| n+\frac{\alpha +\beta +1}{2}\right| } {\mathcal {P}}_n^{\alpha ,\beta }(\theta ){\mathcal {P}}_n^{\alpha ,\beta }(\varphi ); \end{aligned}$$
here \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\), and \({\mathcal {P}}_n^{\alpha ,\beta }\) are the classical Jacobi trigonometric polynomials, normalized in \(L^2(\mathrm{d}\mu _{\alpha ,\beta })\). This is the kernel of the Jacobi–Poisson semigroup \(\big \{\exp \big (-t\sqrt{{\mathcal {J}}^{\alpha ,\beta }}\big )\big \}_{t>0}\), since each \({\mathcal {P}}_n^{\alpha ,\beta }\) is an eigenfunction of \({\mathcal {J}}^{\alpha ,\beta }\), with eigenvalue \(\big (n+\frac{\alpha +\beta +1}{2}\big )^2\). Notice that the fraction \(\frac{\alpha +\beta +1}{2}\) may be negative. Defining the auxiliary kernel
$$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi ) := \sum _{n=0}^{\infty } \mathrm{e}^{-t\left( n+\frac{\alpha +\beta +1}{2}\right) } {\mathcal {P}}_n^{\alpha ,\beta }(\theta ){\mathcal {P}}_n^{\alpha ,\beta }(\varphi ), \end{aligned}$$
the Jacobi–Poisson kernel can be written as
$$\begin{aligned} H_t^{\alpha ,\beta }(\theta ,\varphi ) = {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi ) + \chi _{\{\alpha +\beta < -1\}} \, 2^{\alpha +\beta +2} c_{\alpha ,\beta }\, \sinh \left( \frac{\alpha +\beta +1}{2} \, t \right) , \end{aligned}$$
(1)
where
$$\begin{aligned} c_{\alpha ,\beta } := \frac{\Gamma (\alpha +\beta +2)}{2^{\alpha +\beta +1}\Gamma (\alpha +1)\Gamma (\beta +1)}. \end{aligned}$$
As we shall see later, there are important cancellations between the two terms in (1) for large \(t\).
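To see where the correction term in (1) comes from (a short verification of ours, not spelled out in the paper): the exponents defining \(H\) and \({\mathbb {H}}\) differ only in the \(n=0\) term, and only when \(\alpha +\beta +1<0\), since \(n+\frac{\alpha +\beta +1}{2}>\frac{1}{2}>0\) for \(n\ge 1\). Writing \(\delta = \frac{\alpha +\beta +1}{2}<0\),

```latex
H_t^{\alpha,\beta}(\theta,\varphi) - \mathbb{H}_t^{\alpha,\beta}(\theta,\varphi)
  = \bigl( \mathrm{e}^{-t|\delta|} - \mathrm{e}^{-t\delta} \bigr)
    \bigl( \mathcal{P}_0^{\alpha,\beta} \bigr)^2
  = 2 \sinh(\delta t)\, \bigl( \mathcal{P}_0^{\alpha,\beta} \bigr)^2
  = 2^{\alpha+\beta+2} c_{\alpha,\beta}\,
    \sinh\Bigl( \frac{\alpha+\beta+1}{2}\, t \Bigr),
```

where the last step uses that \({\mathcal {P}}_0^{\alpha ,\beta }\) is the normalizing constant with \(({\mathcal {P}}_0^{\alpha ,\beta })^2 = \mu _{\alpha ,\beta }([0,\pi ])^{-1} = \Gamma (\alpha +\beta +2)/(\Gamma (\alpha +1)\Gamma (\beta +1)) = 2^{\alpha +\beta +1} c_{\alpha ,\beta }\).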
The kernel \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) can be computed explicitly by means of Bailey’s formula, see [1, pp. 385–387]. More precisely, we have
$$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi ) = c_{\alpha ,\beta }\, \frac{\sinh \frac{t}{2}}{\left( \cosh \frac{t}{2}\right) ^{\alpha +\beta +2}} \, F_4\left( \frac{\alpha +\beta +2}{2}, \frac{\alpha +\beta +3}{2};\, \alpha +1, \beta +1;\, \frac{\sin ^2\frac{\theta }{2}\sin ^2\frac{\varphi }{2}}{\cosh ^2\frac{t}{2}}, \frac{\cos ^2\frac{\theta }{2}\cos ^2\frac{\varphi }{2}}{\cosh ^2\frac{t}{2}} \right) \end{aligned}$$
(2)
for \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\). Here \(F_4\) is Appell’s hypergeometric function of two variables defined by the series
$$\begin{aligned} F_4(a_1,a_2; b_1,b_2; x,y) = \sum _{m,n=0}^{\infty } \frac{(a_1)_{m+n} (a_2)_{m+n}}{(b_1)_m (b_2)_n m! n!} \, x^m y^n, \end{aligned}$$
where \((a)_n\) means the Pochhammer symbol, \((a)_n = a(a+1)\cdot \cdots \cdot (a+n-1)\) for \(n \ge 1\) and \((a)_0 = 1\). This double power series is known to converge absolutely when \(\sqrt{|x|}+ \sqrt{|y|} < 1\), cf. [18, Chapter V, Section 5.7.2]. From this expression, the positivity of \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) can easily be seen. Moreover, (2) provides a holomorphic extension of \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) as a function of the parameters \(\alpha ,\beta > -1\) to the region \(\{(\alpha ,\beta ) \in {\mathbb {C}}^2 : \mathfrak {R}\alpha , \mathfrak {R}\beta > -1\}\). Indeed, with \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\) fixed, the hypergeometric series in (2) is a sum of holomorphic functions of \((\alpha ,\beta )\) converging locally uniformly in the region in question (the latter fact can be justified by means of elementary estimates for the Pochhammer symbol). However, the formula (2) does not seem to be convenient from the point of view of kernel estimates. Thus we need a more suitable representation.
In [28, Section 4], the first and second authors derived the following integral representation, valid for \(\alpha ,\beta \ge -1/2\) (notice that under this restriction \(H_t^{\alpha ,\beta }(\theta ,\varphi )\) coincides with \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\)):
$$\begin{aligned} H_t^{\alpha ,\beta }(\theta ,\varphi ) = c_{\alpha ,\beta } \mathop {\iint }\limits _{[-1,1]^2} \frac{\sinh \frac{t}{2}}{\left( \cosh \frac{t}{2}-1+q(\theta ,\varphi ,u,v)\right) ^{\alpha +\beta +2}} \, \mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta }(v)\end{aligned}$$
(3)
for \(t>0\) and \(\theta , \varphi \in [0,\pi ]\). Here
$$\begin{aligned} q(\theta ,\varphi ,u,v) = 1 - u \sin \frac{\theta }{2}\sin \frac{\varphi }{2}- v \cos \frac{\theta }{2}\cos \frac{\varphi }{2}, \end{aligned}$$
and the measure \(\mathrm{d}\Pi _{\alpha }\) is defined in the following way. For \(\alpha > -1/2\), we let
$$\begin{aligned} \Pi _{\alpha }(u) := \frac{\Gamma (\alpha +1)}{\sqrt{\pi }\Gamma (\alpha +1/2)} \int _0^u (1-w^2)^{\alpha -1/2}\, \mathrm{d}w, \end{aligned}$$
(4)
which is an odd function in \(-1<u<1\). Then \(\mathrm{d}\Pi _{\alpha }\) is a probability measure in \([-1,1]\). As \(\alpha \rightarrow -1/2\), one finds that \(\mathrm{d}\Pi _{\alpha }\) converges weakly to the measure \(\mathrm{d}\Pi _{-1/2} := \frac{1}{2}(\delta _{-1}+\delta _{1})\), where \(\delta _{\pm 1}\) denotes a point mass at \(\pm 1\).
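That \(\mathrm{d}\Pi _{\alpha }\) has total mass 1 amounts to \(\int _{-1}^{1}(1-w^2)^{\alpha -1/2}\,\mathrm{d}w = \sqrt{\pi }\,\Gamma (\alpha +1/2)/\Gamma (\alpha +1)\), which can be confirmed numerically; a stdlib sketch of ours (the name `pi_alpha_mass` is not from the paper):

```python
import math

def pi_alpha_mass(alpha: float, n: int = 100_000) -> float:
    """Midpoint rule for the total mass of dPi_alpha on [-1, 1], alpha > -1/2."""
    const = math.gamma(alpha + 1) / (math.sqrt(math.pi) * math.gamma(alpha + 0.5))
    h = 2.0 / n
    return const * h * sum((1.0 - (-1.0 + (k + 0.5) * h) ** 2) ** (alpha - 0.5)
                           for k in range(n))

# alpha >= 1/2 keeps the density bounded, so crude quadrature suffices here
for a in [0.5, 1.0, 2.5]:
    assert abs(pi_alpha_mass(a) - 1.0) < 1e-5
```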
Now we observe that (4) can be extended to all complex \(\alpha \ne -1/2\) with \(\mathfrak {R}\alpha > -1\). Then the (distribution) derivative
$$\begin{aligned} \mathrm{d}\Pi _{\alpha }(u)= \frac{\Gamma (\alpha +1)}{\sqrt{\pi }\Gamma (\alpha +1/2)} \, \left( 1-u^2\right) ^{\alpha -1/2}\, \mathrm{d}u \end{aligned}$$
is a local complex measure in \((-1,1)\). For \(\alpha \in (-1,-1/2)\) real, its density is negative, even, and not integrable in \((-1,1)\). If \(\phi \) is a continuous function in \((-1,1)\) and \(\phi (u) = \mathcal {O}(1-u)\) as \(u \rightarrow 1\), then the integral \(I(\alpha ) = \int _0^1\phi (u)\, \mathrm{d}\Pi _{\alpha }(u)\) is well defined. As a function of \(\alpha \), this integral is analytic in \(\{\alpha : \mathfrak {R}\alpha > -1, \alpha \ne -1/2\}\). Since \(|I(\alpha )| \lesssim |\alpha +1/2|\int _0^1 (1-u^2)^{\mathfrak {R}\alpha +1/2}\, \mathrm{d}u \rightarrow 0\) as \(\alpha \rightarrow -1/2\), we see that \(I(\alpha )\) is actually analytic in \(\{\alpha : \mathfrak {R}\alpha > -1\}\) and \(I(-1/2)=0\). More generally, if \(\phi _{\alpha ,\beta }(u)\) is continuous in \((u,\alpha ,\beta )\) and analytic in \((\alpha ,\beta )\) for \(-1<u<1\) and \(\mathfrak {R}\alpha , \mathfrak {R}\beta >-1\), and \(\phi _{\alpha ,\beta }(u) = \mathcal {O}(1-u)\) locally uniformly in \((\alpha ,\beta )\), then \(I(\alpha ,\beta ) = \int _0^1 \phi _{\alpha ,\beta }(u)\, \mathrm{d}\Pi _{\alpha }(u)\) will be analytic in \((\alpha ,\beta )\) in \(\mathfrak {R}\alpha , \mathfrak {R}\beta >-1\). Under analogous assumptions, this also extends to functions \(\phi _{\alpha ,\beta }(u,v)\) and the double integral \(I(\alpha ,\beta ) = \iint _{(0,1)^2} \phi _{\alpha ,\beta }(u,v) \, \mathrm{d}\Pi _{\alpha }(u)\,\mathrm{d}\Pi _{\beta }(v)\), if one assumes \(\phi _{\alpha ,\beta }(u,v) = \mathcal {O}((1-u)(1-v))\) locally uniformly in \(\alpha \) and \(\beta \).
The measures \(\mathrm{d}\Pi _{\alpha }\) will now be used to extend the representation (3) to the range \(\alpha ,\beta > -1\). Define
$$\begin{aligned} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v):= \frac{c_{\alpha ,\beta } \,\sinh \frac{t}{2}}{(\cosh \frac{t}{2}-1 + q(\theta ,\varphi ,u,v))^{\alpha +\beta +2}}. \end{aligned}$$
(5)
Taking the even parts of \(\Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\) in \(u\) and \(v\), we also define
$$\begin{aligned} \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,u,v):= \frac{1}{4} \sum _{\xi ,\eta = \pm 1} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,\xi u, \eta v). \end{aligned}$$
Notice that by (3) and for symmetry reasons, we have for \(\alpha ,\beta \ge -1/2\),
$$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi ) = 4 \mathop {\iint }\limits _{(0,1]^2} \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta }(v). \end{aligned}$$
(6)
We can now state a general integral representation of \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\).

Theorem 2.1

For all \(\alpha ,\beta > -1, t>0\) and \(\theta ,\varphi \in [0,\pi ]\),
$$\begin{aligned} {\mathbb {H}}_{t}^{\alpha ,\beta }(\theta ,\varphi )&= 4 \mathop {\iint }\limits _{ (0,1]^2 } \left( \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,u,v)- \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,u,1)\right. \\&\quad \left. - \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,1,v)+ \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,1,1)\right) \, \mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta }(v)\nonumber \\&\quad + 2 \int _{ (0,1] } \left( \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,u,1)-\Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,1,1)\right) \, \mathrm{d}\Pi _{\alpha }(u)\nonumber \\&\quad + 2 \int _{ (0,1] } \left( \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,1,v)-\Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,1,1)\right) \, \mathrm{d}\Pi _{\beta }(v)\nonumber \\&\quad + \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,1,1). \nonumber \end{aligned}$$
(7)

Proof

For \(\alpha ,\beta \ge -1/2\), (7) is an easy consequence of (6). With \(\phi _{\alpha ,\beta }(u) = \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,u,1) -\Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,1,1)\), the second integral in (7) is of the form \(I(\alpha ,\beta )\) just described; observe that \(\phi _{\alpha ,\beta }(u) = \mathcal {O}(1-u)\) as \(u \rightarrow 1\), since the derivative \(\partial \Psi _{E}^{\alpha ,\beta }/\partial u\) is bounded locally uniformly in \(\alpha \) and \(\beta \). The third integral in (7) is similar. For the double integral, we let
$$\begin{aligned} \phi _{\alpha ,\beta }(u,v)&= \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,u,v)- \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,u,1)\\&- \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,1,v)+ \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,1,1)\end{aligned}$$
and get a double integral of type \(I(\alpha ,\beta )\).

The conclusion is that the right-hand side of (7) is analytic in \((\alpha ,\beta ) \in \{z : \mathfrak {R}z > -1\}^2\). Theorem 2.1 follows, since the left-hand side is also analytic. \(\square \)

We remark that in Theorem 2.1, it does not matter whether one integrates over the open interval \((0,1)\) or over \((0,1]\), even when the measure is \(\mathrm{d}\Pi _{-1/2}\). But subsequently, it will be more convenient to use \((0,1]\).

Next we restate the formula of Theorem 2.1 in order to obtain a more suitable representation of \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) for the kernel estimates in Sect. 4. Recall that for \(-1<\alpha <-1/2, \Pi _{\alpha }(u)\) is an odd function, which is negative for \(u>0\). It can easily be verified that the density \(|\Pi _{\alpha }(u)|\) defines a finite measure on \([-1,1]\). In fact, we have the following.

Lemma 2.2

Let \(-1 < \alpha < -1/2\) be fixed. Then
$$\begin{aligned} |\Pi _{\alpha }(u)| \simeq |u| (1 - |u|)^{\alpha +1/2} \simeq |u| \frac{\mathrm{d}\Pi _{\alpha +1}(u)}{\mathrm{d}u}, \quad u \in (-1,1). \end{aligned}$$

Proof

These three quantities are even in \(u\), and we need consider only \(u\in (0,1)\). It is enough to observe that then \(|\Pi _{\alpha }(u)| \simeq \int _0^u (1-w)^{\alpha -1/2} \, \mathrm{d}w\). \(\square \)
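A numerical illustration of Lemma 2.2 (ours; the bracketing constants in the final assertion are arbitrary, chosen generously): for a representative \(\alpha =-3/4\), the ratio of \(|\Pi _{\alpha }(u)|\) to \(u(1-u)^{\alpha +1/2}\) stays within a fixed band on \((0,1)\).

```python
import math

alpha = -0.75  # a representative value in (-1, -1/2)
c = math.gamma(alpha + 1) / (math.sqrt(math.pi) * math.gamma(alpha + 0.5))

n = 400_000
h = 0.999 / n
cum, ratios = 0.0, []
for k in range(n):
    w = (k + 0.5) * h
    cum += (1.0 - w * w) ** (alpha - 0.5) * h   # running integral = Pi_alpha(u) / c
    if k % 4000 == 3999:                        # sample u on a coarse grid in (0, 0.999]
        u = (k + 1) * h
        ratios.append(abs(c) * cum / (u * (1.0 - u) ** (alpha + 0.5)))

# the comparability constants of Lemma 2.2 stay within one fixed band
assert 0.05 < min(ratios) <= max(ratios) < 20.0
```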

Proposition 2.3

Let \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\).
  (i)
    If \(\alpha ,\beta \ge -1/2\), then
    $$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi ) = \iint \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta }(v). \end{aligned}$$
     
  (ii)
    If \(-1<\alpha <-1/2 \le \beta \), then
    $$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )&= \iint \left\{ -\partial _u \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \Pi _{\alpha }(u)\,\mathrm{d}u\, \mathrm{d}\Pi _{\beta }(v)\right. \\&\left. \quad + \,\Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{-1\slash 2}(u)\, \mathrm{d}\Pi _{\beta }(v)\right\} . \end{aligned}$$
     
  (iii)
    If \(-1 < \beta < -1/2 \le \alpha \), then
    $$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )&= \iint \left\{ -\partial _v \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{\alpha }(u)\, \Pi _{\beta }(v)\, \mathrm{d}v \right. \\&\left. \quad + \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{-1\slash 2}(v)\right\} . \end{aligned}$$
     
  (iv)
    If \(-1 < \alpha , \beta < -1/2\), then
    $$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi ) =&\iint \left\{ \partial _{u} \partial _{v} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \Pi _{\alpha }(u)\,\mathrm{d}u\, \Pi _{\beta }(v)\, \mathrm{d}v \right. \\&\quad -\, \partial _{u} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \Pi _{\alpha }(u)\,\mathrm{d}u\, \mathrm{d}\Pi _{-1\slash 2}(v)\\&\quad -\,\partial _{v} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{-1\slash 2}(u)\, \Pi _{\beta }(v)\, \mathrm{d}v\\&\quad \left. +\,\Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{-1\slash 2}(u)\, \mathrm{d}\Pi _{-1\slash 2}(v)\right\} . \end{aligned}$$
     

Here and in similar integrals in Sect. 6, it is understood that the integration in \(\mathrm{d}u\) and \(\mathrm{d}v\) is only over \((-1,1)\).

Proof of Proposition 2.3

Item (i) is just (3). To prove the remaining items, we combine Theorem 2.1, Lemma 2.2, and symmetries of the quantity \(\Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,u,v)\), its derivatives in \(u\) and \(v\), and the measures involved. We give further details in the case of (ii), leaving similar proofs of (iii) and (iv) to the reader.

Assume that \(-1 < \alpha < -1/2 \le \beta \). Since \(\mathrm{d}\Pi _{\beta }\) is a symmetric probability measure on \([-1,1]\) and has no atom at 0, formula (7) reduces to
$$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )&= 4 \mathop {\iint }\limits _{(0,1]^2} \left( \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,u,v)- \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,1,v)\right) \, \mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta }(v)\\&\quad + 2 \int _{(0,1]} \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,1,v)\, \mathrm{d}\Pi _{\beta }(v)\\&\equiv I_1 + I_2. \end{aligned}$$
Then, expressing \(\Psi ^{\alpha ,\beta }_{E}\) via \(\Psi ^{\alpha ,\beta }\) and making use of the symmetry of \(\mathrm{d}\Pi _{\beta }\), we see that
$$\begin{aligned} I_2&= 4 \mathop {\iint }\limits _{(0,1]^2} \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{-1\slash 2}(u)\, \mathrm{d}\Pi _{\beta }(v)\\&= \iint \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{-1\slash 2}(u)\, \mathrm{d}\Pi _{\beta }(v). \end{aligned}$$
In \(I_1\) we integrate by parts in the \(u\) variable, which is legitimate in view of Lemma 2.2. Observe that the integrand in \(I_1\) vanishes for \(u=1\) and that \(\Pi _{\alpha }(0)=0\). We get
$$\begin{aligned} I_1 = -4 \mathop {\iint }\limits _{(0,1]^2} \partial _u \Psi _E^{\alpha ,\beta }(t,\theta ,\varphi ,u,v) \,\Pi _{\alpha }(u)\,\mathrm{d}u\,\mathrm{d}\Pi _{\beta }(v). \end{aligned}$$
Inserting the definition of the symmetrization \(\Psi _E^{\alpha ,\beta }\), one easily finds that
$$\begin{aligned} I_1 = - \iint \partial _u \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v) \, \Pi _{\alpha }(u)\,\mathrm{d}u\, \mathrm{d}\Pi _{\beta }(v). \end{aligned}$$
The conclusion follows. \(\square \)

Remark 2.4

All the representations of \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) contained in Proposition 2.3 are positive in the sense that each of the double integrals [there is one of these in (i), two in (ii) and in (iii), and four in (iv)] is nonnegative.

3 Preparatory Results

In this section, we gather various technical results, altogether forming a transparent and convenient method of proving standard estimates for kernels defined via the Jacobi–Poisson kernel. The essence of this technique is a uniform way of handling double integrals against products of measures of type \(\mathrm{d}\Pi _{\gamma }\) and \(\Pi _{\gamma }(u)\, \mathrm{d}u\). The resulting expressions contain only elementary functions and are relatively simple.

The result below, which is a generalization of [28, Lemma 4.3], plays a crucial role in our method to prove kernel estimates. It provides a link from estimates emerging from the integral representation of \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\), see Proposition 2.3, to the standard estimates related to the space of homogeneous type \(([0,\pi ], \mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\).

Lemma 3.1

Let \(\alpha ,\beta > -1\). Assume that \(\xi _1,\xi _2,\kappa _1,\kappa _2 \ge 0\) are fixed and such that \(\alpha +\xi _1+\kappa _1, \, \beta +\xi _2+\kappa _2 \ge -1/2\). Then, uniformly in \(\theta ,\varphi \in [0,\pi ], \theta \ne \varphi \),
$$\begin{aligned}&\left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{2\xi _1} \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{2\xi _2} \iint \frac{\mathrm{d}\Pi _{\alpha +\xi _1+\kappa _1}(u) \, \mathrm{d}\Pi _{\beta +\xi _2+\kappa _2}(v) }{ q(\theta ,\varphi ,u,v)^{\alpha +\beta +\xi _1+\xi _2+3\slash 2}}\\&\quad \lesssim \frac{1}{\mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}. \end{aligned}$$
Note that for any fixed \(\alpha ,\beta > -1\), the \(\mu _{\alpha ,\beta }\) measure of the interval \(B(\theta ,|\theta -\varphi |)\) can be described as follows, see [28, Lemma 4.2]:
$$\begin{aligned} \mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |)) \simeq |\theta -\varphi |(\theta +\varphi )^{2\alpha + 1} (\pi -\theta +\pi -\varphi )^{2\beta + 1}, \quad \theta ,\varphi \in [0,\pi ]. \end{aligned}$$
(8)
Notice also that the right-hand side of the estimate in Lemma 3.1 is always larger than the positive constant \(1/\mu _{\alpha ,\beta }([0,\pi ])\). This fact will be used subsequently without further mention.

To prove Lemma 3.1, we need item (b) in the simple lemma below.

Lemma 3.2

Let \(\kappa \ge 0\) and \(\gamma \) and \(\nu \) be such that \(\gamma > \nu +1/2 \ge 0\). Then
  (a)
    $$\begin{aligned}&\int \frac{\mathrm{d}\Pi _{\nu }(s)}{(D-Bs)^{\kappa }(A-Bs)^{\gamma }} \simeq \frac{1}{(D-B)^{\kappa } A^{\nu +1/2} (A-B)^{\gamma -\nu -1/2}} \end{aligned}$$
     
uniformly for \(0 \le B < A \le D\);
  (b)
    $$\begin{aligned} \int \frac{\mathrm{d}\Pi _{\nu +\kappa }(s)}{(A-Bs)^{\gamma }} \lesssim \frac{1}{A^{\nu +1\slash 2}(A-B)^{\gamma -\nu -1/2}}, \quad 0 \le B < A. \end{aligned}$$
     

Proof

Part (a) is proved in [29, Appendix]. Part (b) can easily be deduced from (a) since the integral to be estimated is controlled by the same integral with \(\kappa =0\). \(\square \)

Proof of Lemma 3.1

The reasoning is a combination of the arguments given in the proofs of [30, Lemma 2.1] and [28, Lemma 4.3]. Observe that we may reduce the task to showing that
$$\begin{aligned} \iint \frac{\mathrm{d}\Pi _{\alpha +\kappa _1}(u) \, \mathrm{d}\Pi _{\beta +\kappa _2}(v) }{ q(\theta ,\varphi ,u,v)^{\alpha +\beta +3\slash 2}} \lesssim \frac{1}{\mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}, \quad \theta ,\varphi \in [0,\pi ], \quad \theta \ne \varphi , \end{aligned}$$
(9)
under the assumption \(\alpha +\kappa _1,\beta +\kappa _2 \ge -1/2\). Indeed, applying (9) with \(\alpha +\xi _1,\beta +\xi _2\) instead of \(\alpha ,\beta \), and then using (8), we obtain
$$\begin{aligned}&\left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{2\xi _1} \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{2\xi _2} \iint \frac{\mathrm{d}\Pi _{\alpha +\xi _1+\kappa _1}(u) \, \mathrm{d}\Pi _{\beta +\xi _2+\kappa _2}(v) }{ q(\theta ,\varphi ,u,v)^{\alpha +\beta +\xi _1+\xi _2+3\slash 2}}\\&\quad \lesssim \left( \theta +\varphi \right) ^{2\xi _1} \left( \pi - \theta + \pi - \varphi \right) ^{2\xi _2} \frac{1}{\mu _{\alpha +\xi _1,\beta +\xi _2}(B(\theta ,|\theta -\varphi |))}\\&\quad \simeq \frac{1}{\mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}. \end{aligned}$$
To prove (9), it is convenient to distinguish two cases.
Case 1 \(\alpha ,\beta \in (-1,-1\slash 2)\). Taking into account the estimates, see [28, (21)],
$$\begin{aligned} |\theta -\varphi |^2 \simeq 2 \sin ^2\frac{\theta -\varphi }{4} \le q(\theta ,\varphi ,u,v) \le 2 \cos ^2\frac{\theta -\varphi }{4} \le 2, \end{aligned}$$
where \(\theta ,\varphi \in [0,\pi ]\), \(u,v \in [-1,1]\), and using the fact that \(\mathrm{d}\Pi _{\alpha +\kappa _1}\) and \(\mathrm{d}\Pi _{\beta +\kappa _2}\) are finite, we get
$$\begin{aligned} \iint \frac{\mathrm{d}\Pi _{\alpha +\kappa _1}(u) \, \mathrm{d}\Pi _{\beta +\kappa _2}(v) }{ q(\theta ,\varphi ,u,v)^{\alpha +\beta +3\slash 2}} \lesssim \frac{1}{|\theta -\varphi |^{2\alpha +1} |\theta -\varphi |^{2\beta +1} |\theta -\varphi |} + \chi _{\{\alpha +\beta +3/2 < 0\}}. \end{aligned}$$
Then using the inequalities \(|\theta -\varphi | \le \theta + \varphi \) and \(|\theta -\varphi | \le \pi - \theta + \pi - \varphi \) together with (8), we obtain (9).
Case 2 At least one of the parameters \(\alpha ,\beta \) is in \([-1\slash 2,\infty )\), say \(\beta \ge -1/2\). Proceeding as in the proof of [28, Lemma 4.3] but applying Lemma 3.2 (b) instead of [28, Lemma 4.4] to the integral against \(\mathrm{d}\Pi _{\beta +\kappa _2}\), we see that
$$\begin{aligned} \iint \frac{\mathrm{d}\Pi _{\alpha +\kappa _1}(u) \, \mathrm{d}\Pi _{\beta +\kappa _2}(v) }{ q(\theta ,\varphi ,u,v)^{\alpha +\beta +3\slash 2}} \lesssim \frac{1}{(\pi -\theta +\pi -\varphi )^{2\beta +1}} \, \int \frac{\mathrm{d}\Pi _{\alpha +\kappa _1}(u)}{ q(\theta ,\varphi ,u,1)^{\alpha +1}}. \end{aligned}$$
When \(\alpha \ge -1\slash 2\), another application of Lemma 3.2 (b) leads to (9), see the proof of [28, Lemma 4.3]. If \(\alpha \in (-1,-1/2)\), we can apply the arguments from Case 1, getting
$$\begin{aligned} \int \frac{\mathrm{d}\Pi _{\alpha +\kappa _1}(u)}{q(\theta ,\varphi ,u,1)^{\alpha +1}} \lesssim \frac{1}{|\theta -\varphi |^{2\alpha +2}} \le \frac{1}{(\theta +\varphi )^{2\alpha +1} |\theta -\varphi |}. \end{aligned}$$
Now using (8), we arrive at the desired conclusion.

The proof of Lemma 3.1 is complete. \(\square \)

The remaining part of this section contains various technical results, which will allow us to control the relevant kernels by means of Lemma 3.1. To state the next lemma and also for further use, we introduce the following notation. We will omit the arguments and write briefly \(\mathfrak {q}\) instead of \(q(\theta ,\varphi ,u,v)\), when it does not lead to confusion. For a given parameter \(\lambda \in \mathbb {R}\), we define the auxiliary function
$$\begin{aligned} \Psi ^{\lambda }(t,\mathfrak {q}):= \frac{\sinh \frac{t}{2}}{(\cosh \frac{t}{2}-1 + \mathfrak {q})^{\lambda }}, \end{aligned}$$
so that \(\Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)= c_{\alpha ,\beta } \Psi ^{\alpha +\beta +2}(t,\mathfrak {q})\); see (5).

Lemma 3.3

Let \(\lambda \in \mathbb {R}, M,N \in \mathbb {N}=\{0,1,2,\ldots \}\) and \(K,R,L \in \{0, 1\}\) be fixed. Then
$$\begin{aligned}&\big | \partial _u^K \partial _v^R \partial _\varphi ^L \partial _\theta ^N \partial _t^M \Psi ^{\lambda }(t,\mathfrak {q})\big | \\&\quad \lesssim \sum _{k,r=0,1,2} \left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{Kk} \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{Rr}\\&\qquad \times \frac{1}{(t^2 + \mathfrak {q})^{ \lambda + (L+N+M-1 + Kk +Rr)\slash 2 }}, \end{aligned}$$
uniformly in \(t\in (0,1], \theta ,\varphi \in [0,\pi ]\) and \(u,v \in [-1,1]\).
To prove this lemma, we need two preparatory results. One of them is Faà di Bruno’s formula for the \(N\)th derivative, \(N \ge 1\), of the composition of two functions (see [21] for related references and interesting historical remarks). With \(D\) denoting the ordinary derivative, it reads
$$\begin{aligned} D^N(g\circ f)(\theta )&= \sum \frac{N!}{j_1! \cdot \cdots \cdot j_N!} \; \left( D^{j_1+\cdots +j_N} g\right) \circ f(\theta ) \nonumber \\&\quad \times \left( \frac{D^1 f(\theta )}{1!}\right) ^{j_1}\cdot \cdots \cdot \left( \frac{D^N f(\theta )}{N!}\right) ^{j_N}, \end{aligned}$$
(10)
where the summation runs over all \(j_1,\ldots ,j_N \ge 0\) such that \(j_1+2j_2+\cdots +N j_N = N\). Further, in the proof of Lemma 3.3, we will make use of the following bounds given in [28].
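As a concrete illustration of (10), the formula can be verified symbolically for a small \(N\); the sketch below (a sanity check only, with arbitrary smooth choices of \(f\) and \(g\)) does this for \(N = 3\):

```python
# Symbolic check of Faa di Bruno's formula (10) for N = 3, with the
# illustrative choices f(t) = cosh(t/2) and g(x) = 1/x.
import sympy as sp

t, x = sp.symbols('t x')
f = sp.cosh(t / 2)
g = 1 / x
N = 3

# Left-hand side: the N-th derivative of the composition g(f(t)).
lhs = sp.diff(g.subs(x, f), t, N)

# Right-hand side: sum over j_1, j_2, j_3 >= 0 with j_1 + 2 j_2 + 3 j_3 = 3.
rhs = 0
for j1 in range(N + 1):
    for j2 in range(N + 1):
        for j3 in range(N + 1):
            j = (j1, j2, j3)
            if j1 + 2 * j2 + 3 * j3 != N:
                continue
            coeff = sp.factorial(N) / (sp.factorial(j1) * sp.factorial(j2) * sp.factorial(j3))
            term = coeff * sp.diff(g, x, sum(j)).subs(x, f)
            for i, ji in enumerate(j, start=1):
                term *= (sp.diff(f, t, i) / sp.factorial(i)) ** ji
            rhs += term

assert sp.simplify(lhs - rhs) == 0
```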

Lemma 3.4

[28, Lemma 4.5] For all \(\theta ,\varphi \in [0,\pi ]\) and \(u,v \in [-1,1]\), one has
$$\begin{aligned} \big | \partial _\theta \mathfrak {q}\big | \lesssim \sqrt{\mathfrak {q}} \quad \text {and} \quad \big | \partial _\varphi \mathfrak {q}\big | \lesssim \sqrt{\mathfrak {q}}. \end{aligned}$$

Proof of Lemma 3.3

Given \(\lambda \in \mathbb {R}\), we introduce the auxiliary function
$$\begin{aligned} \widetilde{\Psi }^{\lambda }(t,\mathfrak {q}):= \frac{1}{\sinh \frac{t}{2}} \, \Psi ^{\lambda }(t,\mathfrak {q})= \frac{1}{(\cosh \frac{t}{2} - 1 + \mathfrak {q})^{\lambda }}. \end{aligned}$$
We first reduce our task to showing the estimate
$$\begin{aligned}&\big | \partial _u^K \partial _v^R \partial _\varphi ^L \partial _\theta ^N \widetilde{\Psi }^{\lambda }(t,\mathfrak {q})\big | \nonumber \\&\quad \lesssim \sum _{k,r=0,1,2} \left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{Kk} \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{Rr} \frac{1}{(t^2 + \mathfrak {q})^{ \lambda + (L+N + Kk +Rr)\slash 2 }} \end{aligned}$$
(11)
for \(t \in (0,1], \theta ,\varphi \in [0,\pi ]\) and \(u,v\in [-1,1]\); here \(\lambda \in \mathbb {R}, N \in \mathbb {N}\) and \(K,R,L \in \{0, 1\}\) are fixed.
Observe that
$$\begin{aligned} \Psi ^{\lambda }(t,\mathfrak {q})=c_\lambda \left\{ \begin{array}{ll} \partial _t \frac{1}{\left( \cosh \frac{t}{2} - 1 + \mathfrak {q}\right) ^{\lambda - 1}} , &{} \lambda \ne 1,\\ \partial _t \log \left( \cosh \frac{t}{2} - 1 + \mathfrak {q}\right) , &{} \lambda = 1, \end{array} \right. \end{aligned}$$
where \(c_\lambda \) is a constant, possibly negative. Using Faà di Bruno’s formula (10) with \(f(t)=\cosh \frac{t}{2} - 1 + \mathfrak {q}\) and either \(g(x)=x^{-\lambda +1}\) or \(g(x)=\log x\), we obtain
$$\begin{aligned} \partial _t^M \Psi ^{\lambda }(t,\mathfrak {q})&= c_{\lambda } \, \partial _t^{M+1} (g\circ f) (t) \\&= \sum _{\begin{array}{c} j_i \ge 0 \\ j_1+\cdots + (M+1)j_{M+1}=M+1 \end{array}} C_{\lambda ,j} \left( \sinh \frac{t}{2}\right) ^{\sum _{\text {odd} \, i} j_i} \\&\quad \times \left( \cosh \frac{t}{2}\right) ^{\sum _{\text {even} \, i} j_i} \widetilde{\Psi }^{\lambda - 1 + \sum _i j_i}(t,\mathfrak {q}), \end{aligned}$$
where the \(C_{\lambda ,j}\) are constants, possibly zero. Differentiating these identities with respect to \(\theta ,\varphi ,u,v\) and then applying (11) and the relations
$$\begin{aligned} \cosh \frac{t}{2} \simeq 1, \quad \sinh \frac{t}{2} \simeq t \le \sqrt{t^2 +\mathfrak {q}}, \quad \quad t \in (0,1], \end{aligned}$$
we see that
$$\begin{aligned} \big | \partial _u^K \partial _v^R \partial _\varphi ^L \partial _\theta ^N \partial _t^M \Psi ^{\lambda }(t,\mathfrak {q})\big |&\lesssim \sum _{\begin{array}{c} j_i \ge 0 \\ j_1+\cdots + (M+1)j_{M+1}=M+1 \end{array}} \sum _{k,r=0,1,2} \left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{Kk} \\&\quad \times \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{Rr} \\&\quad \times \frac{1}{(t^2 + \mathfrak {q})^{ \lambda -1 + \sum _i j_i - (\sum _{\text {odd} \, i} j_i) \slash 2+ (L+N + Kk +Rr)\slash 2 }}. \end{aligned}$$
Now by the boundedness of \(\mathfrak {q}\) and the inequality
$$\begin{aligned} \sum _i j_i - \frac{1}{2} \sum _{\text {odd} \, i} j_i\le \frac{M+1}{2}, \end{aligned}$$
(12)
forced by the constraint \(j_1+\cdots + (M+1)j_{M+1}=M+1\), we get the asserted estimate. Thus it remains to prove (11).
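Inequality (12) can also be confirmed by brute force for small \(M\) (a numerical illustration only): over all admissible tuples, the left-hand side is maximized by putting all weight on \(j_1\) or on \(j_2\), where it equals \((M+1)/2\).

```python
# Brute-force check of inequality (12): for every (j_1, ..., j_{M+1}) of
# nonnegative integers with j_1 + 2 j_2 + ... + (M+1) j_{M+1} = M+1, one has
#   sum_i j_i - (1/2) * sum_{odd i} j_i <= (M+1)/2.
from itertools import product

def verify(M):
    n = M + 1
    for j in product(range(n + 1), repeat=n):  # j[i-1] plays the role of j_i
        if sum(i * ji for i, ji in enumerate(j, start=1)) != n:
            continue
        value = sum(j) - 0.5 * sum(ji for i, ji in enumerate(j, start=1) if i % 2 == 1)
        if value > n / 2:
            return False
    return True

assert all(verify(M) for M in range(5))
```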
We assume that \(N\ge 1\). The simpler case \(N = 0\) is left to the reader. Taking into account the relations
$$\begin{aligned} \partial _\theta ^{2m} \mathfrak {q}= (-4)^{-m}(\mathfrak {q}-1), \quad \partial _\theta ^{2m-1} \mathfrak {q}= (-4)^{1-m} \partial _\theta \mathfrak {q}, \quad m\ge 1, \end{aligned}$$
see [28, Section 4], and using Faà di Bruno’s formula with \(f(\theta ) = \cosh \frac{t}{2} - 1 + \mathfrak {q}\) and \(g(x) = x^{-\lambda }\), we get
$$\begin{aligned} \partial _\theta ^N \widetilde{\Psi }^{\lambda }(t,\mathfrak {q})= \sum _{\begin{array}{c} j_i \ge 0 \\ \,j_1+\cdots + Nj_{N}=N \end{array}} c_{\lambda ,j} \frac{1}{\left( \cosh \frac{t}{2} {-} 1 {+} \mathfrak {q}\right) ^{\lambda {+} \sum _i j_i}} \, (\mathfrak {q}{-}1)^{\sum _{\text {even} \, i} j_i} (\partial _\theta \mathfrak {q})^{\sum _{\text {odd} \, i} j_i}, \end{aligned}$$
where the \(c_{\lambda ,j}\) are constants. Further, keeping in mind that \(L,R,K \in \{ 0,1 \}\) and applying repeatedly Leibniz’ rule, we see that \(\partial _\varphi ^L \partial _\theta ^N \widetilde{\Psi }^{\lambda }(t,\mathfrak {q})\) is a sum of terms of the form constant times
$$\begin{aligned} \frac{1}{\left( \cosh \frac{t}{2} - 1 + \mathfrak {q}\right) ^{\lambda + \sum _i j_i + l_1}} \, (\partial _\varphi \mathfrak {q})^{l_1+l_2} (\mathfrak {q}-1)^{\sum _{\text {even} \, i} j_i- l_2} (\partial _\theta \mathfrak {q})^{\sum _{\text {odd} \, i} j_i- l_3} (\partial _\varphi \partial _\theta \mathfrak {q})^{l_3}, \end{aligned}$$
where the indices run over the set described by the conditions \(j_i \ge 0, j_1+\cdots + Nj_{N}=N, l_1, l_2, l_3 \ge 0, l_1+l_2+l_3=L\), and the exponents of \(\mathfrak {q}- 1\) and \(\partial _\theta \mathfrak {q}\) are nonnegative. Similarly, \(\partial _v^R \partial _\varphi ^L \partial _\theta ^N \widetilde{\Psi }^{\lambda }(t,\mathfrak {q})\) is a sum of terms of the form constant times
$$\begin{aligned}&\frac{1}{\left( \cosh \frac{t}{2} - 1 + \mathfrak {q}\right) ^{\lambda + \sum _i j_i + l_1+r_1}} (\partial _v \mathfrak {q})^{r_1+r_3} (\partial _\varphi \mathfrak {q})^{l_1+l_2-r_2} (\partial _v \partial _\varphi \mathfrak {q})^{r_2} (\mathfrak {q}-1)^{\sum _{\text {even} \, i} j_i- l_2 - r_3} \\&\quad \times (\partial _\theta \mathfrak {q})^{\sum _{\text {odd} \, i} j_i- l_3 - r_4} (\partial _v \partial _\theta \mathfrak {q})^{r_4} (\partial _\varphi \partial _\theta \mathfrak {q})^{l_3 - r_5} (\partial _v \partial _\varphi \partial _\theta \mathfrak {q})^{r_5}, \end{aligned}$$
where also \(r_1, \ldots , r_5 \ge 0, r_1+\cdots +r_5=R, l_1 + l_2 \ge r_2, l_3 \ge r_5\). Finally, since the derivative \(\partial _u \partial _v \mathfrak {q}\) vanishes, \(\partial _u^K \partial _v^R \partial _\varphi ^L \partial _\theta ^N \widetilde{\Psi }^{\lambda }(t,\mathfrak {q})\) is a sum of terms of the form constant times
$$\begin{aligned}&\frac{1}{(\cosh \frac{t}{2} - 1 + \mathfrak {q})^{\lambda + \sum _i j_i + l_1+r_1+k_1}} (\partial _u \mathfrak {q})^{k_1+k_3} (\partial _v \mathfrak {q})^{r_1+r_3} (\partial _\varphi \mathfrak {q})^{l_1+l_2-r_2-k_2} (\partial _u \partial _\varphi \mathfrak {q})^{k_2} \\&\quad \times (\partial _v \partial _\varphi \mathfrak {q})^{r_2} (\mathfrak {q}-1)^{\sum _{\text {even} \, i} j_i- l_2 - r_3 - k_3} (\partial _\theta \mathfrak {q})^{\sum _{\text {odd} \, i} j_i- l_3 - r_4 - k_4} (\partial _u \partial _\theta \mathfrak {q})^{k_4} \\&\quad \times (\partial _v \partial _\theta \mathfrak {q})^{r_4} (\partial _\varphi \partial _\theta \mathfrak {q})^{l_3 - r_5 - k_5} (\partial _u \partial _\varphi \partial _\theta \mathfrak {q})^{k_5} (\partial _v \partial _\varphi \partial _\theta \mathfrak {q})^{r_5}. \end{aligned}$$
Here we must add the conditions \(k_1, \ldots , k_5 \ge 0, k_1+\cdots +k_5=K\), and replace \(l_1 + l_2 \ge r_2, l_3 \ge r_5\) by \(l_1 + l_2 \ge r_2 + k_2, l_3 \ge r_5 + k_5\). We shall estimate all the factors in this product from above. Since \(t \le 1\), we can replace \(\cosh \frac{t}{2}-1+\mathfrak {q}\) by \(t^2+\mathfrak {q}\). The quantities \(\mathfrak {q}\) and \(\partial _\varphi \partial _\theta \mathfrak {q}\) are bounded. Further, we apply Lemma 3.4 to get
$$\begin{aligned} |\partial _\varphi \mathfrak {q}| + |\partial _\theta \mathfrak {q}| \lesssim \sqrt{\mathfrak {q}} \le \sqrt{t^2+\mathfrak {q}}. \end{aligned}$$
To deal with the resulting exponent of \(1/(t^2 + \mathfrak {q})\), we observe that
$$\begin{aligned} l_1-l_2+l_3 \le L, \quad \sum _i j_i - \frac{1}{2}\sum _{\text {odd} \, i} j_i\le \frac{N}{2}, \end{aligned}$$
cf. (12). Using also the estimates
$$\begin{aligned} \begin{array}{lll} &{}|\partial _u \mathfrak {q}| \le \left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{2}, \quad &{}\quad |\partial _v \mathfrak {q}| \le \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{2}, \\ &{}|\partial _\theta \partial _u \mathfrak {q}| + |\partial _\varphi \partial _u \mathfrak {q}| \le \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}, \quad &{}\quad |\partial _\theta \partial _v \mathfrak {q}| + |\partial _\varphi \partial _v \mathfrak {q}| \le \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}, \\ &{}|\partial _\varphi \partial _\theta \partial _u \mathfrak {q}| \le 1, \quad &{}\quad |\partial _\varphi \partial _\theta \partial _v \mathfrak {q}| \le 1, \end{array} \end{aligned}$$
we infer that
$$\begin{aligned} \big | \partial _u^K \partial _v^R \partial _\varphi ^L \partial _\theta ^N \widetilde{\Psi }^{\lambda }(t,\mathfrak {q})\big | \lesssim&\sum _{\begin{array}{c} r_1+\cdots +r_5=R \\ k_1+\cdots +k_5=K \end{array}} \left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{2k_1+2k_3+k_2+k_4} \\&\quad \times \left( \cos \frac{\theta }{2}+\cos \frac{\varphi }{2}\right) ^{2r_1+2r_3+r_2+r_4} \\&\quad \times \frac{1}{(t^2 + \mathfrak {q})^{\lambda + (N+L+ 2k_1+k_2+k_4 + 2r_1+r_2+r_4)\slash 2}}. \end{aligned}$$
Notice that \(2k_1+k_2+k_4 \in \{0,K,2K \}\), and similarly \(2r_1+r_2+r_4 \in \{0,R,2R \}\). This observation leads directly to (11).

The proof of Lemma 3.3 is complete. \(\square \)
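The observation at the end of the proof, that \(2k_1+k_2+k_4 \in \{0,K,2K\}\) whenever \(k_1+\cdots +k_5=K \in \{0,1\}\), is a finite check; for instance (illustration only):

```python
# Enumeration check: for K in {0, 1} and k_1, ..., k_5 >= 0 with
# k_1 + ... + k_5 = K, the exponent 2 k_1 + k_2 + k_4 lies in {0, K, 2K}.
from itertools import product

for K in (0, 1):
    exponents = {
        2 * k[0] + k[1] + k[3]
        for k in product(range(K + 1), repeat=5)
        if sum(k) == K
    }
    assert exponents <= {0, K, 2 * K}
```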

Define
$$\begin{aligned} \mathrm{d}\Pi _{\alpha , K}= {\left\{ \begin{array}{ll} \mathrm{d}\Pi _{-1\slash 2}, &{} K=0, \\ \mathrm{d}\Pi _{\alpha + 1}, &{} K=1, \end{array}\right. } \end{aligned}$$
and similarly for \(\mathrm{d}\Pi _{\beta , R}\).

Corollary 3.5

Let \(M,N \in \mathbb {N}\) and \(L \in \{0, 1\}\) be fixed. The following estimates hold uniformly in \(t\in (0,1]\) and \(\theta ,\varphi \in [0,\pi ]\):
  1. (i)
    If \(\alpha ,\beta \ge -1\slash 2\), then
    $$\begin{aligned} \big | \partial _\varphi ^L \partial _\theta ^N \partial _t^M H_{t}^{\alpha ,\beta }(\theta ,\varphi ) \big | \lesssim \iint \frac{\mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta }(v)}{(t^2 + \mathfrak {q})^{ \alpha + \beta + 3\slash 2 + (L+N+M)\slash 2 }}. \end{aligned}$$
     
  2. (ii)
    If \(-1 < \alpha < -1\slash 2 \le \beta \), then
    $$\begin{aligned} \big | \partial _\varphi ^L \partial _\theta ^N \partial _t^M H_{t}^{\alpha ,\beta }(\theta ,\varphi ) \big |&\lesssim 1 + \sum _{K=0,1} \sum _{k=0,1,2} \left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{Kk} \\&\quad \times \iint \frac{\mathrm{d}\Pi _{\alpha , K}(u)\, \mathrm{d}\Pi _{\beta }(v)}{(t^2 + \mathfrak {q})^{ \alpha + \beta + 3\slash 2 + (L+N+M + Kk)\slash 2 }}. \end{aligned}$$
     
  3. (iii)
    If \(-1 < \beta < -1\slash 2 \le \alpha \), then
    $$\begin{aligned} \big | \partial _\varphi ^L \partial _\theta ^N \partial _t^M H_{t}^{\alpha ,\beta }(\theta ,\varphi ) \big |&\lesssim 1 + \sum _{R=0,1} \sum _{r=0,1,2} \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{Rr} \\&\quad \times \iint \frac{\mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta , R}(v)}{(t^2 + \mathfrak {q})^{ \alpha + \beta + 3\slash 2 + (L+N+M +Rr)\slash 2 }}. \end{aligned}$$
     
  4. (iv)
    If \(-1 < \alpha ,\beta < -1\slash 2\), then
    $$\begin{aligned} \big | \partial _\varphi ^L \partial _\theta ^N \partial _t^M H_{t}^{\alpha ,\beta }(\theta ,\varphi ) \big |&\lesssim 1 + \sum _{K,R=0,1} \sum _{k,r=0,1,2} \left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{Kk} \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{Rr} \\&\quad \times \iint \frac{\mathrm{d}\Pi _{\alpha , K}(u)\, \mathrm{d}\Pi _{\beta , R}(v)}{(t^2 + \mathfrak {q})^{ \alpha + \beta + 3\slash 2 + (L+N+M + Kk +Rr)\slash 2 }}. \end{aligned}$$
     

Proof

All the bounds are direct consequences of the equality (1), Proposition 2.3, Lemma 2.2, and the estimate from Lemma 3.3 (specified to \(\lambda = \alpha + \beta + 2\)). Here, moving the differentiation in \(t, \theta \) or \(\varphi \) under the integrals against \(\mathrm{d}\Pi _{\gamma }, \gamma \ge -1/2\), or \(\Pi _{\gamma }(u)\, \mathrm{d}u, -1 < \gamma < -1/2\), is easily justified with the aid of Lemma 3.3 and the dominated convergence theorem. \(\square \)

Lemma 3.6

Let \(\gamma \in \mathbb {R}\) and \(\eta \ge 0\) be fixed. Then
$$\begin{aligned} \int _0^1 \frac{t^\eta \,\mathrm{d}t}{(t^2+ \rho )^{\gamma + \eta \slash 2 + 1\slash 2}} \lesssim \left\{ \begin{array}{lll} \rho ^{-\gamma } , &{} \gamma > 0,\\ \log \left( 1+\rho ^{-1/2} \right) , &{} \gamma = 0, \\ 1, &{} \gamma < 0, \end{array} \right. \end{aligned}$$
uniformly in \(0 < \rho \le 2\).

Proof

This is elementary. For \(\gamma =0\), one has
$$\begin{aligned} \int _0^1 \frac{t^{\eta } \, \mathrm{d}t}{(t^2+ \rho )^{\eta /2 + 1\slash 2}} \le \int _0^1 \frac{\mathrm{d}t}{(t^2+ \rho )^{1\slash 2}} \simeq \int _0^1 \frac{\mathrm{d}t}{t+ \rho ^{1/2}} = \log \left( 1 + \rho ^{-1/2} \right) . \end{aligned}$$
\(\square \)
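The borderline case \(\gamma = 0\) can be illustrated numerically (a crude midpoint-rule sanity check, not part of the proof): the ratio of the integral to \(\log (1+\rho ^{-1/2})\) stays bounded as \(\rho \rightarrow 0^+\).

```python
# Midpoint-rule check of Lemma 3.6 with gamma = 0: the integral
#   int_0^1 t^eta / (t^2 + rho)^(eta/2 + 1/2) dt
# is O(log(1 + rho^(-1/2))) uniformly in 0 < rho <= 2.
import math

def integral(eta, rho, n=100000):
    h = 1.0 / n
    # Midpoint rule; the integrand is bounded (by rho^(-1/2) when eta = 0).
    return sum(((i + 0.5) * h) ** eta / (((i + 0.5) * h) ** 2 + rho) ** (eta / 2 + 0.5)
               for i in range(n)) * h

for eta in (0.0, 1.0, 2.5):
    for rho in (1e-6, 1e-3, 1e-1, 1.0, 2.0):
        ratio = integral(eta, rho) / math.log(1.0 + rho ** -0.5)
        assert ratio < 10.0  # the implied constant is modest here
```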

The next lemma will be frequently used in Sect. 4 to prove the relevant kernel estimates. Only the cases \(p\in \{ 1,2,\infty \}\) will be needed for our purposes. Other values of \(p\) are also of interest, but in connection with operators not considered in this paper.

Lemma 3.7

Let \(K,R \in \{ 0,1 \}, k,r \in \{ 0,1,2 \}, W \ge 1, s\ge 0\), and \(1 \le p \le \infty \) be fixed. Consider a function \(\Upsilon ^{\alpha ,\beta }_s(t,\theta ,\varphi )\) defined on \((0,1) \times [0,\pi ] \times [0,\pi ]\) in the following way:
  1. (i)
    For \(\alpha ,\beta \ge -1\slash 2\),
    $$\begin{aligned} \Upsilon ^{\alpha ,\beta }_s(t,\theta ,\varphi ) := \iint \frac{\mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta }(v)}{(t^2 + \mathfrak {q})^{ \alpha +\beta +3\slash 2 + W\slash (2p) + s\slash 2 }}. \end{aligned}$$
     
  2. (ii)
    For \(-1 < \alpha < -1\slash 2 \le \beta \),
    $$\begin{aligned} \Upsilon ^{\alpha ,\beta }_s(t,\theta ,\varphi ) := \left( \sin \frac{\theta }{2}+ \sin \frac{\varphi }{2}\right) ^{Kk} \iint \frac{\mathrm{d}\Pi _{\alpha , K}(u)\, \mathrm{d}\Pi _{\beta }(v)}{(t^2 + \mathfrak {q})^{ \alpha +\beta +3\slash 2 + W\slash (2p) +Kk\slash 2 + s\slash 2 }}. \end{aligned}$$
     
  3. (iii)
    For \(-1 < \beta < -1\slash 2 \le \alpha \),
    $$\begin{aligned} \Upsilon ^{\alpha ,\beta }_s(t,\theta ,\varphi ) := \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{Rr} \iint \frac{\mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta , R}(v)}{(t^2 + \mathfrak {q})^{ \alpha +\beta +3\slash 2 + W\slash (2p) +Rr\slash 2 + s\slash 2 }}. \end{aligned}$$
     
  4. (iv)
    For \(-1 < \alpha ,\beta < -1\slash 2\),
    $$\begin{aligned} \Upsilon ^{\alpha ,\beta }_s(t,\theta ,\varphi ) :=&\left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{Kk} \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{Rr} \\&\quad \times \iint \frac{\mathrm{d}\Pi _{\alpha , K}(u)\, \mathrm{d}\Pi _{\beta , R}(v)}{(t^2 + \mathfrak {q})^{ \alpha +\beta +3\slash 2 + W\slash (2p) +Kk\slash 2 + Rr\slash 2 + s\slash 2 }}. \end{aligned}$$
     
Then the estimate
$$\begin{aligned} \left\| 1 + \Upsilon ^{\alpha ,\beta }_s(t,\theta ,\varphi )\right\| _{ L^p((0,1),t^{W-1}\mathrm{d}t) } \lesssim \frac{1}{|\theta -\varphi |^s} \; \frac{1}{\mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))} \end{aligned}$$
holds uniformly in \(\theta ,\varphi \in [0,\pi ], \theta \ne \varphi \).

Proof

It is enough to prove the desired estimate without the term 1 on the left-hand side. Further, since \(|\theta -\varphi |^2 \lesssim \mathfrak {q}\), it suffices to consider the case \(s=0\). We prove the estimate when \(-1 <\alpha ,\beta <-1 \slash 2\). The remaining cases are left to the reader; they are simpler, since then \(\alpha +\beta +3\slash 2 > 0\) and one needs Lemma 3.6 only with \(\gamma > 0\).

We first assume that \(p<\infty \). Using Minkowski’s integral inequality and then Lemma 3.6 with \(\gamma = p(\alpha +\beta +3\slash 2 + Kk\slash 2 + Rr\slash 2), \eta = W-1\) and \(\rho =\mathfrak {q}\), we obtain
$$\begin{aligned}&\left\| \Upsilon ^{\alpha ,\beta }_0(t,\theta ,\varphi ) \right\| _{ L^p((0,1),t^{W-1}\mathrm{d}t) } \\&\quad \le \left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{Kk} \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{Rr} \\&\quad \quad \times \iint \left( \int _0^1 \frac{t^{W-1} \, \mathrm{d}t }{(t^2 + \mathfrak {q})^{ p(\alpha +\beta +3\slash 2 + W\slash (2p) +Kk\slash 2 + Rr\slash 2) }} \right) ^{1\slash p} \\&\quad \quad \times \mathrm{d}\Pi _{\alpha , K}(u)\, \mathrm{d}\Pi _{\beta , R}(v)\\&\quad \lesssim \left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{Kk} \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{Rr} \\&\quad \quad \times \iint \left[ \left( \frac{1}{\mathfrak {q}} \right) ^{\alpha +\beta +3\slash 2 + Kk\slash 2 + Rr\slash 2}+1 + \left( \log \left( 1 + {\mathfrak {q}}^{-1/2} \right) \right) ^{1\slash p} \right] \, \\&\quad \quad \times \mathrm{d}\Pi _{\alpha , K}(u)\, \mathrm{d}\Pi _{\beta , R}(v). \end{aligned}$$
Now an application of Lemma 3.1 (specified to \(\xi _1=Kk\slash 2, \kappa _1=-\alpha -1\slash 2\) if \(K=0\) and \(\kappa _1=1 - k\slash 2\) if \(K=1, \xi _2=Rr\slash 2, \kappa _2=-\beta -1\slash 2\) if \(R=0\) and \(\kappa _2=1 - r\slash 2\) if \(R=1\)) gives the desired estimate for the expression emerging from the first term in the last integral. As for the remaining two expressions, we observe that \(1 \lesssim \log \left( 1 + \mathfrak {q}^{-1/2}\right) \lesssim \log \left( 1 + |\theta -\varphi |^{-1} \right) \). Moreover, as can be seen from (8), there exists an \(\varepsilon = \varepsilon (\alpha ,\beta ) > 0\) such that
$$\begin{aligned} \mu _{\alpha ,\beta }\left( B(\theta ,|\theta - \varphi |) \right) \lesssim |\theta - \varphi |^\varepsilon , \quad \theta ,\varphi \in [0,\pi ]. \end{aligned}$$
Since the measures \(\mathrm{d}\Pi _{\alpha , K}\) and \(\mathrm{d}\Pi _{\beta , R}\) are finite, the conclusion follows.
The case \(p=\infty \) can be justified in a similar way by using, in the reasoning above, the estimate
$$\begin{aligned} \frac{1}{ (t^2 + \mathfrak {q})^{ \alpha +\beta +3\slash 2 +Kk\slash 2 + Rr\slash 2} } \lesssim \left( \frac{1}{\mathfrak {q}}\right) ^{ \alpha +\beta +3\slash 2 +Kk\slash 2 + Rr\slash 2} +1, \quad t \in (0,1), \end{aligned}$$
instead of Lemma 3.6. \(\square \)

The next lemma and corollaries are long-time counterparts of Corollary 3.5 and Lemma 3.7.

Lemma 3.8

Assume that \(M,N \in {\mathbb {N}}\) and \(L \in \{0,1\}\) are fixed. Given \(\alpha ,\beta > -1\), there exists an \(\epsilon = \epsilon (\alpha ,\beta )>0\) such that
$$\begin{aligned} \big | \partial _{\varphi }^{L} \partial _{\theta }^N \partial _t^M H_t^{\alpha ,\beta }(\theta ,\varphi )\big |&\lesssim \mathrm{e}^{- t \left( \left| \frac{\alpha + \beta + 1}{2} \right| + \epsilon \right) } + \chi _{\{N=L=0, \, \alpha +\beta +1 \ne 0\}} \mathrm{e}^{- t \left| \frac{\alpha + \beta + 1}{2} \right| } \\&\quad + \chi _{\{M=N=L=0, \, \alpha +\beta +1=0\}}, \end{aligned}$$
uniformly in \(t \ge 1\) and \(\theta ,\varphi \in [0,\pi ]\). Moreover, one can take \(\epsilon = (\alpha +\beta +2) \wedge 1\).

To prove this, it is more convenient to employ the series representation of \(H_t^{\alpha ,\beta }(\theta ,\varphi )\) rather than the formulas from Proposition 2.3.

Proof of Lemma 3.8

For \(\alpha ,\beta >-1, t>0\) and \(\theta ,\varphi \in [0,\pi ]\), we have
$$\begin{aligned} H_t^{\alpha ,\beta }(\theta ,\varphi ) = \frac{1}{\mu _{\alpha ,\beta }([0,\pi ])} \mathrm{e}^{-t\left| \frac{\alpha +\beta +1}{2}\right| } + \sum _{n=1}^{\infty } \mathrm{e}^{-t\left( n+\frac{\alpha +\beta +1}{2}\right) } {\mathcal {P}}_n^{\alpha ,\beta }(\theta ) {\mathcal {P}}_n^{\alpha ,\beta }(\varphi ). \end{aligned}$$
(13)
Denote the sum in (13) by \(S\). To estimate \(S\) and its derivatives, we will need suitable bounds for \(\partial _{\theta }^{N}{\mathcal {P}}_n^{\alpha ,\beta }(\theta ), N \ge 0\). It is known (see [33, (7.32.2)]) that
$$\begin{aligned} \big | {\mathcal {P}}_n^{\alpha ,\beta }(\theta ) \big | \lesssim n^{\alpha +\beta +2}, \quad \theta \in [0,\pi ], \; n \ge 1. \end{aligned}$$
Combining this with the identity (cf. [33, (4.21.7)])
$$\begin{aligned} \partial _{\theta } {\mathcal {P}}_n^{\alpha ,\beta }(\theta ) = - \sin \frac{\theta }{2}\, \cos \frac{\theta }{2}\, \sqrt{n(n+\alpha +\beta +1)}\; {\mathcal {P}}_{n-1}^{\alpha +1,\beta +1}(\theta ), \end{aligned}$$
we see that for each \(N \ge 0\),
$$\begin{aligned} \big | \partial _{\theta }^{N} {\mathcal {P}}_n^{\alpha ,\beta }(\theta ) \big | \lesssim n^{3N+\alpha +\beta +2}, \quad \theta \in [0,\pi ], \; n \ge 1. \end{aligned}$$
In view of these facts, the series in (13) can be repeatedly differentiated term by term in \(t,\theta \) and \(\varphi \), and we get the bounds
$$\begin{aligned} \big |\partial _{\varphi }^{L} \partial _{\theta }^N \partial _t^M S\big |&\lesssim \sum _{n=1}^{\infty } \mathrm{e}^{-t\left( n+\frac{\alpha +\beta +1}{2}\right) } n^{M+3N+3L+2\alpha +2\beta +4} \\&= \mathrm{e}^{-t\left( \left| \frac{\alpha +\beta +1}{2}\right| + (\alpha +\beta +2)\wedge 1 \right) } \sum _{n=1}^{\infty } \mathrm{e}^{-t\left( n - 1\right) } n^{M+3N+3L+2\alpha +2\beta +4} \\&\lesssim \mathrm{e}^{-t\left( \left| \frac{\alpha +\beta +1}{2}\right| + (\alpha +\beta +2)\wedge 1 \right) }, \end{aligned}$$
uniformly in \(t \ge 1\) and \(\theta ,\varphi \in [0,\pi ]\).

Since the other term in (13) is trivial to handle, the conclusion follows. \(\square \)
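The final bound above rests on the elementary fact that \(\sum _{n\ge 1} \mathrm{e}^{-t(n-1)} n^{p}\) converges for every \(t>0\) and is decreasing in \(t\), hence uniformly bounded for \(t \ge 1\). A quick numerical illustration (with an arbitrary sample exponent \(p\), not part of the proof):

```python
# The series sum_{n>=1} exp(-t(n-1)) n^p is decreasing in t, so for t >= 1
# it is dominated by its value at t = 1.
import math

def tail_sum(t, p, nmax=300):
    return sum(math.exp(-t * (n - 1)) * n ** p for n in range(1, nmax + 1))

p = 12  # sample value of M + 3N + 3L + 2*alpha + 2*beta + 4
bound = tail_sum(1.0, p)
for t in (1.0, 2.0, 5.0, 10.0):
    assert tail_sum(t, p) <= bound * (1 + 1e-12)
```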

Corollary 3.9

Let \(\alpha ,\beta > -1, M,N \in {\mathbb {N}}, L \in \{0,1\}, W \ge 1\), and \(1 \le p \le \infty \) be fixed. Then
$$\begin{aligned} \bigg \Vert \sup _{\theta ,\varphi \in [0,\pi ]} \big |\partial _{\varphi }^{L} \partial _{\theta }^N \partial _t^M H_t^{\alpha ,\beta }(\theta ,\varphi ) \big | \bigg \Vert _{L^p((1,\infty ),t^{W-1}\mathrm{d}t)} < \infty , \end{aligned}$$
excluding the cases when simultaneously \(\alpha +\beta +1=0\) and \(M=N=L=0\) and \(p<\infty \).

A strengthened special case of Corollary 3.9 will be needed when we estimate kernels associated with multipliers of Laplace–Stieltjes type.

Corollary 3.10

Let \(\alpha ,\beta > -1\) and \(L,N \in \{0,1\}\) be fixed. Then
$$\begin{aligned} \bigg \Vert \mathrm{e}^{t \left| \frac{\alpha + \beta + 1}{2} \right| } \sup _{\theta ,\varphi \in [0,\pi ]} \big |\partial _{\varphi }^{L} \partial _{\theta }^N H_t^{\alpha ,\beta }(\theta ,\varphi ) \big | \bigg \Vert _{L^\infty ((1,\infty ),\mathrm{d}t)} < \infty . \end{aligned}$$

4 Kernel Estimates

Let \(\mathbb {B}\) be a Banach space, and let \(K(\theta ,\varphi )\) be a kernel defined on \([0,\pi ]\times [0,\pi ]\backslash \{ (\theta ,\varphi ):\theta =\varphi \}\) and taking values in \(\mathbb {B}\). We say that \(K(\theta ,\varphi )\) is a standard kernel in the sense of the space of homogeneous type \(([0,\pi ], \mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\) if it satisfies the so-called standard estimates, i.e., the growth estimate
$$\begin{aligned} \Vert K(\theta ,\varphi )\Vert _{\mathbb {B}} \lesssim \frac{1}{\mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))} \end{aligned}$$
(15)
and the smoothness estimates
$$\begin{aligned} \Vert K(\theta ,\varphi )-K(\theta ',\varphi )\Vert _{\mathbb {B}}&\lesssim \frac{|\theta -\theta '|}{|\theta -\varphi |}\, \frac{1}{\mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}, \quad |\theta -\varphi |>2|\theta -\theta '|, \end{aligned}$$
(16)
$$\begin{aligned} \Vert K(\theta ,\varphi )-K(\theta ,\varphi ')\Vert _{\mathbb {B}}&\lesssim \frac{|\varphi -\varphi '|}{|\theta -\varphi |}\, \frac{1}{\mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}, \quad |\theta -\varphi |>2|\varphi -\varphi '| . \end{aligned}$$
(17)
Notice that in these formulas, the ball (interval) \(B(\theta ,|\theta -\varphi |)\) can be replaced by \(B(\varphi ,|\varphi -\theta |)\), in view of the doubling property of \(\mu _{\alpha ,\beta }\).
We will show that the following kernels, with values in properly chosen Banach spaces \(\mathbb {B}\), satisfy the standard estimates:
  1. (I)
    The kernel associated with the Jacobi–Poisson semigroup maximal operator,
    $$\begin{aligned} \mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi ) = \big \{H_t^{\alpha ,\beta }(\theta ,\varphi )\big \}_{t>0}, \quad \mathbb {B}=\mathbb {X} \subset L^{\infty }(\mathbb {R}_+,\mathrm{d}t), \end{aligned}$$
    where \(\mathbb {X}\) is the closed separable subspace of \(L^{\infty }(\mathbb {R}_+,\mathrm{d}t)\) consisting of all continuous functions \(f\) on \((0,\infty )\) which have finite limits as \(t \rightarrow 0^+\) and as \(t \rightarrow \infty \). Observe that \(\big \{H_t^{\alpha ,\beta }(\theta ,\varphi )\big \}_{t>0} \in \mathbb {X}\), for \(\theta \ne \varphi \), as can be seen from Proposition 2.3 and the bound \(\mathfrak {q}\gtrsim (\theta -\varphi )^2\), and the series representation (see the proof of Lemma 3.8).
     
  2. (II)
    The kernels associated with Riesz–Jacobi transforms,
    $$\begin{aligned} R_N^{\alpha ,\beta }(\theta ,\varphi ) = \frac{1}{\Gamma (N)} \int _0^{\infty } \partial _\theta ^N H_t^{\alpha ,\beta }(\theta ,\varphi ) t^{N -1}\, \mathrm{d}t, \quad \mathbb {B}=\mathbb {C}, \end{aligned}$$
    where \(N = 1,2,\ldots \).
     
  3. (III)
    The kernels associated with mixed square functions,
    $$\begin{aligned} \mathfrak {G}^{\alpha ,\beta }_{M,N}(\theta ,\varphi ) = \big \{\partial _\theta ^N \partial _t^M H_t^{\alpha ,\beta }(\theta ,\varphi ) \big \}_{t>0}, \quad \mathbb {B} = L^2\big (\mathbb {R}_+,t^{2M+2N-1}\mathrm{d}t\big ), \end{aligned}$$
    where \(M,N = 0,1,2,\ldots \) are such that \(M+N>0\).
     
  4. (IVa)
    The kernels associated with Laplace transform type multipliers,
    $$\begin{aligned} K^{\alpha ,\beta }_{\phi }(\theta ,\varphi ) = - \int _0^{\infty } \phi (t) \, \partial _t H_t^{\alpha ,\beta }(\theta ,\varphi ) \, \mathrm{d}t, \quad \mathbb {B}=\mathbb {C}, \end{aligned}$$
    where \(\phi \in L^{\infty }(\mathbb {R}_+,\mathrm{d}t)\).
     
  5. (IVb)
    The kernels associated with Laplace–Stieltjes transform type multipliers,
    $$\begin{aligned} K^{\alpha ,\beta }_{\nu }(\theta ,\varphi ) = \int _{(0,\infty )} H_t^{\alpha ,\beta }(\theta ,\varphi )\, \mathrm{d}\nu (t), \quad \mathbb {B}=\mathbb {C}, \end{aligned}$$
    where \(\nu \) is a signed or complex Borel measure on \((0,\infty )\) with total variation \(|\nu |\) satisfying
    $$\begin{aligned} \int _{(0,\infty )} \mathrm{e}^{-t \left| \frac{\alpha + \beta + 1}{2} \right| } \, \hbox {d}|\nu |(t) < \infty . \end{aligned}$$
    (18)
     
When \(K(\theta ,\varphi )\) is scalar-valued, i.e., \(\mathbb {B}=\mathbb {C}\), it is well known that the bounds (16) and (17) follow from the more convenient gradient estimate
$$\begin{aligned} {\Vert \partial _{\theta } K(\theta ,\varphi )\Vert }_{\mathbb {B}} + {\Vert \partial _{\varphi } K(\theta ,\varphi )\Vert }_{\mathbb {B}} \lesssim \frac{1}{|\theta -\varphi | \, \mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}. \end{aligned}$$
(19)
We shall see that the same holds also in the vector-valued cases we consider. Then the derivatives in (19) are taken in the weak sense, which means that for any \(\mathtt v \in \mathbb {B}^*\),
$$\begin{aligned} \big \langle \mathtt v , \partial _{\theta } K(\theta ,\varphi ) \big \rangle = \partial _{\theta } \big \langle \mathtt v , K(\theta ,\varphi ) \big \rangle \end{aligned}$$
(20)
and similarly for \(\partial _{\varphi }\). If these weak derivatives \(\partial _{\theta } K(\theta ,\varphi )\) and \(\partial _{\varphi } K(\theta ,\varphi )\) exist as elements of \(\mathbb {B}\) and their norms satisfy (19), the scalar-valued case applies and (16) and (17) follow.

The result below extends to all \(\alpha ,\beta > -1\) the estimates obtained in [28, Section 4] for the restricted range \(\alpha ,\beta \ge -1\slash 2\). Moreover, here we also consider multipliers of Laplace and Laplace–Stieltjes transform type, which were merely mentioned in [28] and which cover as a special case the imaginary powers of \(\mathcal {J}^{\alpha ,\beta }\) (or \({\mathcal {J}}^{\alpha ,\beta }\Pi _0\) when \(\alpha +\beta +1=0\)) investigated there.

Theorem 4.1

Let \(\alpha ,\beta > -1\). Then the kernels (I)–(III), (IVa), and (IVb) satisfy the standard estimates (15), (16), and (17) with \(\mathbb {B}\) as indicated above.

In the proof, we tacitly assume that moving the differentiation in \(\theta \) or \(\varphi \) under the integrals against \(\mathrm{d}t\) or \(\mathrm{d}\nu (t)\) is legitimate. In fact, such manipulations can easily be verified by means of the dominated convergence theorem and the estimates obtained in Corollary 3.5 and Lemma 3.8.

Proof of Theorem 4.1

We treat each of the kernels separately.

The case of \(\varvec{\mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi )}\) We first deal with the growth condition. Clearly, it suffices to prove independently the two bounds emerging from (15) by choosing \(\mathbb {B} = L^{\infty } ( (1,\infty ), \mathrm{d}t )\) and \(\mathbb {B} = L^{\infty } ( (0,1), \mathrm{d}t )\). These, however, are immediate consequences of Corollary 3.9 (with \(M=N=L=0, p=\infty \)) and Corollary 3.5 (taken with \(M=N=L=0\)) combined with Lemma 3.7 (specified to \(p=\infty , s=0\)), respectively.

To obtain the smoothness estimates, we must verify that the weak derivatives \(\partial _{\theta } \mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi )\) and \(\partial _{\varphi } \mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi )\) exist in the sense of (20) and satisfy (19). In this case, \(\mathtt v \) is a complex measure on \([0,\infty ]\), and
$$\begin{aligned} \left\langle \mathtt v , \mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi ) \right\rangle = \int _{[0,\infty ]} H_t^{\alpha ,\beta }(\theta ,\varphi ) \, \mathrm{d}\mathtt v (t). \end{aligned}$$
It is enough to consider the derivative with respect to \(\theta \). By the dominated convergence theorem, which is applicable because of Lemma 3.8 and Corollary 3.5 together with the bound \(\mathfrak {q}\gtrsim (\theta -\varphi )^2\), we obtain
$$\begin{aligned} \partial _{\theta } \left\langle \mathtt v , \mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi ) \right\rangle = \int _{[0,\infty ]} \partial _{\theta } H_t^{\alpha ,\beta }(\theta ,\varphi ) \, \mathrm{d}\mathtt v (t), \quad \theta \ne \varphi ; \end{aligned}$$
observe that \(\big \{ \partial _{\theta } H_t^{\alpha ,\beta }(\theta ,\varphi ) \big \}_{t>0} \in \mathbb {X}\) for \(\theta \ne \varphi \), as can be seen from Proposition 2.3 and Lemma 3.8. This identity implies that for \(\theta \ne \varphi \), the weak derivative \(\partial _{\theta } \mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi )\) exists and equals \(\big \{ \partial _{\theta } H_t^{\alpha ,\beta }(\theta ,\varphi ) \big \}_{t>0}\). To see that it also satisfies (19), we first consider large \(t\) and observe that the estimate
$$\begin{aligned} \big \Vert \partial _\theta H_t^{\alpha ,\beta }(\theta ,\varphi ) \big \Vert _{L^\infty ( (1,\infty ), \mathrm{d}t)} \lesssim \frac{1}{|\theta -\varphi | \, \mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}, \quad \theta \ne \varphi , \end{aligned}$$
follows from Corollary 3.9 (specified to \(M=L=0, N=W=1, p = \infty \)). For small \(t\), we have
$$\begin{aligned} \big \Vert \partial _\theta H_t^{\alpha ,\beta }(\theta ,\varphi ) \big \Vert _{L^\infty ( (0,1), \mathrm{d}t)} \lesssim \frac{1}{|\theta -\varphi | \, \mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}, \quad \theta \ne \varphi , \end{aligned}$$
in view of Corollary 3.5 (with \(M=L=0, N=1\)) and Lemma 3.7 (taken with \(W=1, p=\infty , s=1\)).
The case of \(\varvec{R_N^{\alpha ,\beta }(\theta ,\varphi )}\) To prove the growth condition, it is enough to verify that
$$\begin{aligned} \big \Vert \partial _\theta ^N H_t^{\alpha ,\beta }(\theta ,\varphi ) \big \Vert _{L^1(\mathbb {R}_+, t^{ N - 1} \mathrm{d}t)} \lesssim \frac{1}{\mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}, \quad \theta \ne \varphi . \end{aligned}$$
This, however, is a consequence of Corollary 3.9 (taken with \(M=L=0, W=N, p=1\)) and Corollary 3.5 (with \(M=L=0\)) combined with Lemma 3.7 (specified to \(W=N, p=1, s=0\)).
In order to show the gradient bound (19), it suffices to check that
$$\begin{aligned} \Big \Vert \big | \nabla _{\! \theta ,\varphi } \partial _\theta ^N H_t^{\alpha ,\beta }(\theta ,\varphi ) \big | \Big \Vert _{L^1(\mathbb {R}_+, t^{ N - 1} \mathrm{d}t)} \lesssim \frac{1}{|\theta -\varphi | \, \mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}, \quad \theta \ne \varphi . \end{aligned}$$
This estimate follows by means of Corollary 3.9 (applied with \(M=0, p=1\)) and Corollary 3.5 (with \(M=0\)) together with Lemma 3.7 (specified to \(W=N, p=1, s=1\)).

The case of \(\varvec{\mathfrak {G}^{\alpha ,\beta }_{M,N}(\theta ,\varphi )}\) The growth condition is a straightforward consequence of Corollary 3.9 (with \(L=0, W=2M + 2N, p=2\)), Corollary 3.5 (with \(L=0\)) and Lemma 3.7 (taken with \(W=2M + 2N, p=2, s=0\)).

Next, we prove the gradient estimate (19), which amounts to
$$\begin{aligned} \Big \Vert \big |\nabla _{\! \theta ,\varphi } \partial _\theta ^{N}\partial _t^M H_t^{\alpha ,\beta }(\theta ,\varphi ) \big | \Big \Vert _{L^2(\mathbb {R}_+, t^{2M + 2N - 1} \mathrm{d}t)} \lesssim \frac{1}{|\theta -\varphi | \, \mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}, \quad \theta \ne \varphi , \end{aligned}$$
where \(\nabla _{\! \theta ,\varphi }\) is taken in the weak sense. This follows with the aid of Corollary 3.9 (with \(W=2M+2N, p=2\)), Corollary 3.5, and Lemma 3.7 (applied with \(W=2M + 2N, p=2, s=1\)); cf. the arguments given for the case \(\mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi )\) above.

The case of \(\varvec{K^{\alpha ,\beta }_{\phi }(\theta ,\varphi )}\) The growth bound is a direct consequence of the assumption \(\phi \in L^\infty (\mathbb {R}_+,\mathrm{d}t)\), Corollary 3.9 (specified to \(M=1, N=L=0, W=1, p=1\)), Corollary 3.5 (with \(M=1, N=L=0\)), and Lemma 3.7 (taken with \(W=1, p=1, s=0\)).

Since \(\phi \) is bounded, to prove the gradient estimate it is enough to verify that
$$\begin{aligned} \Big \Vert \big | \nabla _{\! \theta ,\varphi } \partial _t H_t^{\alpha ,\beta }(\theta ,\varphi ) \big | \Big \Vert _{L^1(\mathbb {R}_+,\mathrm{d}t)} \lesssim \frac{1}{|\theta -\varphi | \, \mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}, \quad \theta \ne \varphi . \end{aligned}$$
Now applying Corollary 3.9 (with \(M=1, W=1, p=1\) and either \(N=1, L=0\) or \(N=0, L=1\)), Corollary 3.5 (specified to \(M=1\) and either \(N=1, L=0\) or \(N=0, L=1\)), and Lemma 3.7 (taken with \(W=1, p=1, s=1\)), we arrive at the desired bound.
The case of \(\varvec{K^{\alpha ,\beta }_{\nu }(\theta ,\varphi )}\) To show the growth condition, it is enough, by the assumption (18) concerning the measure \(\nu \), to check that
$$\begin{aligned} \Big \Vert \mathrm{e}^{t\left| \frac{\alpha +\beta +1}{2} \right| } H_t^{\alpha ,\beta }(\theta ,\varphi ) \Big \Vert _{L^\infty ((1,\infty ), \mathrm{d}t)}&\lesssim \frac{1}{\mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}, \quad \theta \ne \varphi ,\\ \left\| H_t^{\alpha ,\beta }(\theta ,\varphi ) \right\| _{L^\infty ((0,1), \mathrm{d}t)}&\lesssim \frac{1}{\mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}, \quad \theta \ne \varphi . \end{aligned}$$
The first estimate above is an immediate consequence of Corollary 3.10 (applied with \(N=L=0\)). The remaining bound is part of the growth condition for \(\mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi )\), which is already justified.
Taking (18) into account, to verify the gradient estimate (19), it suffices to show that
$$\begin{aligned} \left\| \mathrm{e}^{t\left| \frac{\alpha +\beta +1}{2} \right| } \big | \nabla _{\! \theta ,\varphi } H_t^{\alpha ,\beta }(\theta ,\varphi ) \big | \right\| _{L^\infty ((1,\infty ), \mathrm{d}t)}&\lesssim \frac{1}{|\theta -\varphi | \, \mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}, \quad \theta \ne \varphi , \\ \left\| \big | \nabla _{\! \theta ,\varphi } H_t^{\alpha ,\beta }(\theta ,\varphi ) \big | \right\| _{L^\infty ((0,1), \mathrm{d}t)}&\lesssim \frac{1}{|\theta -\varphi | \, \mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |))}, \quad \theta \ne \varphi . \end{aligned}$$
Again, an application of Corollary 3.10 (with either \(N=1, L=0\) or \(N=0, L=1\)) produces the first bound. The second one is contained in the proof of the gradient estimate for \(\mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi )\).

The proof of Theorem 4.1 is complete. \(\square \)

5 Calderón–Zygmund Operators

Let \(\mathbb {B}\) be a Banach space, and suppose that \(T\) is a linear operator assigning to each \(f\in L^2(\mathrm{d}\mu _{\alpha ,\beta })\) a strongly measurable \(\mathbb {B}\)-valued function \(Tf\) on \([0,\pi ]\). Then \(T\) is said to be a (vector-valued) Calderón–Zygmund operator in the sense of the space \(([0,\pi ],\mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\) associated with \(\mathbb {B}\) if
  (A)
    \(T\) is bounded from \(L^2(\mathrm{d}\mu _{\alpha ,\beta })\) to \(L^2_{\mathbb {B}}(\mathrm{d}\mu _{\alpha ,\beta })\),
  (B)
    there exists a standard \(\mathbb {B}\)-valued kernel \(K(\theta ,\varphi )\) such that
    $$\begin{aligned} Tf(\theta ) = \int _0^{\pi } K(\theta ,\varphi ) f(\varphi )\, \mathrm{d}\mu _{\alpha ,\beta }(\varphi ), \quad \text {a.a.}\; \theta \notin \mathrm{supp}\, f, \end{aligned}$$
    for \(f \in L^{\infty }([0,\pi ])\).
Here integration of \(\mathbb {B}\)-valued functions is understood in Bochner’s sense, and \(L^2_{\mathbb {B}}(\mathrm{d}\mu _{\alpha ,\beta })\) is the Bochner–Lebesgue space of all \(\mathbb {B}\)-valued \(\mathrm{d}\mu _{\alpha ,\beta }\)-square integrable functions on \([0,\pi ]\).

It is well known that a large part of the classical theory of Calderón–Zygmund operators remains valid, with appropriate adjustments, when the underlying space is of homogeneous type and the associated kernels are vector-valued; see, for instance, [31, 32]. In particular, if \(T\) is a Calderón–Zygmund operator in the sense of \(([0,\pi ],\mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\) associated with a Banach space \(\mathbb {B}\), then its mapping properties in weighted \(L^p\) spaces follow from the general theory.
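As an aside, the doubling property of \(\mu _{\alpha ,\beta }\), on which the homogeneous-type framework relies, is easy to probe numerically. The following sketch (the sample parameters, quadrature resolution, and grid are our ad hoc choices for the experiment, not part of the argument) estimates the ratio \(\mu _{\alpha ,\beta }(B(\theta ,2r))/\mu _{\alpha ,\beta }(B(\theta ,r))\):

```python
import numpy as np

def mu_ball(alpha, beta, theta, r, n=4000):
    """Midpoint-rule approximation of mu_{alpha,beta}(B(theta, r)) within [0, pi]."""
    lo, hi = max(0.0, theta - r), min(np.pi, theta + r)
    phi = lo + (np.arange(n) + 0.5) * (hi - lo) / n
    w = np.sin(phi / 2) ** (2 * alpha + 1) * np.cos(phi / 2) ** (2 * beta + 1)
    return w.sum() * (hi - lo) / n

alpha, beta = 0.5, 0.3   # sample admissible type parameters (our choice)
ratios = [mu_ball(alpha, beta, theta, 2 * r) / mu_ball(alpha, beta, theta, r)
          for theta in np.linspace(0.0, np.pi, 9)
          for r in (0.01, 0.1, 0.5, 1.0)]
print(max(ratios))  # the doubling constant observed on this grid
```

The observed ratio stays bounded (it is largest for small balls centered at the endpoint \(\theta =0\), where the weight vanishes to the highest order).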

Let
$$\begin{aligned} \mathcal {H}_t^{\alpha ,\beta }f(\theta ) = \int _0^{\pi } H_t^{\alpha ,\beta }(\theta ,\varphi ) f(\varphi )\, \mathrm{d}\mu _{\alpha ,\beta }(\varphi ), \quad t>0, \quad \theta \in [0,\pi ], \end{aligned}$$
be the Jacobi–Poisson semigroup. For \(\alpha ,\beta > -1\) consider the following operators defined initially in \(L^2(\mathrm{d}\mu _{\alpha ,\beta })\):
  (I)
    The Jacobi–Poisson semigroup maximal operator
    $$\begin{aligned} \mathcal {H}_*^{\alpha ,\beta }f = \big \Vert \mathcal {H}_t^{\alpha ,\beta }f \big \Vert _{L^{\infty }(\mathbb {R}_+,\mathrm{d}t)}. \end{aligned}$$
  (II)
    Riesz–Jacobi transforms of orders \(N=1,2,\ldots \),
    $$\begin{aligned} R_N^{\alpha ,\beta }f = \sum _{n=1}^{\infty } \Big | n + \frac{\alpha +\beta +1}{2}\Big |^{-N} \big \langle f,{\mathcal {P}}_n^{\alpha ,\beta }\big \rangle _{\mathrm{d}\mu _{\alpha ,\beta }} \, \partial _{\theta }^N{\mathcal {P}}_n^{\alpha ,\beta }, \end{aligned}$$
    where \(\big \langle f,{\mathcal {P}}_n^{\alpha ,\beta }\big \rangle _{\mathrm{d}\mu _{\alpha ,\beta }}\) are the Fourier–Jacobi coefficients of \(f\).
  (III)
    Littlewood–Paley–Stein type mixed square functions
    $$\begin{aligned} g_{M,N}^{\alpha ,\beta }f = \big \Vert \partial _{\theta }^{N}\partial _t^M \mathcal {H}_t^{\alpha ,\beta }f\big \Vert _{L^2(\mathbb {R}_+,t^{2M+2N-1}\mathrm{d}t)}, \end{aligned}$$
    where \(M,N = 0,1,2,\ldots \) and \(M+N>0\).
  (IV)
    Multipliers of Laplace and Laplace–Stieltjes transform type
    $$\begin{aligned} M^{\alpha ,\beta }_{\mathfrak {m}}f = \sum _{n=0}^{\infty } \mathfrak {m}\left( \Big |n+\frac{\alpha +\beta +1}{2}\Big |\right) \big \langle f,{\mathcal {P}}_n^{\alpha ,\beta }\big \rangle _{\mathrm{d}\mu _{\alpha ,\beta }} {\mathcal {P}}_n^{\alpha ,\beta }, \end{aligned}$$
    where either \(\mathfrak {m}(z) = \int _0^{\infty } z \mathrm{e}^{-tz} \phi (t)\, \mathrm{d}t\) with \(\phi \in L^{\infty }(\mathbb {R}_+,\mathrm{d}t)\) or \(\mathfrak {m}(z) = \int _{(0,\infty )} \mathrm{e}^{-tz} \, \mathrm{d}\nu (t)\) for a signed or complex Borel measure \(\nu \) on \((0,\infty )\) whose total variation satisfies (18).
The formulas defining \(\mathcal {H}_*^{\alpha ,\beta }\) and \(g_{M,N}^{\alpha ,\beta }\) are understood pointwise and are actually valid for general functions \(f\) from weighted \(L^p\) spaces with Muckenhoupt weights. This is because for such \(f\), the integral defining \(\mathcal {H}_t^{\alpha ,\beta }f(\theta )\) is well defined and produces a smooth function of \((t,\theta )\in (0,\infty )\times [0,\pi ]\), see [28, Section 2]. The series defining \(R_N^{\alpha ,\beta }\) and \(M_{\mathfrak {m}}^{\alpha ,\beta }\) indeed converge in \(L^2(\mathrm{d}\mu _{\alpha ,\beta })\). This is clear in the case of \(M_{\mathfrak {m}}^{\alpha ,\beta }\), since the values of \(\mathfrak {m}\) that occur here stay bounded. For \(R_N^{\alpha ,\beta }\), the convergence follows from [28, Lemma 3.1]; see the proof of [28, Proposition 2.2].
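For numerical experiments, the spectral series above can be implemented directly. The sketch below is ours, not part of the paper: the sample parameters, the truncation level, and the normalization bookkeeping are our choices. It uses SciPy's Gauss–Jacobi routines after the substitution \(x=\cos \theta \), under which \(\mathrm{d}\mu _{\alpha ,\beta }\) becomes \(2^{-\alpha -\beta -1}(1-x)^{\alpha }(1+x)^{\beta }\,\mathrm{d}x\), and checks a truncated version of \(\mathcal {H}_t^{\alpha ,\beta }\) against an exactly expandable polynomial:

```python
import numpy as np
from math import gamma
from scipy.special import eval_jacobi, roots_jacobi

alpha, beta = 0.5, 0.3                     # sample type parameters (our choice)
lam = alpha + beta + 1.0
x, w = roots_jacobi(80, alpha, beta)       # Gauss-Jacobi nodes and weights
w = w * 2.0 ** (-alpha - beta - 1)         # rescale to quadrature for d mu

def p(n):
    """L^2(d mu)-normalized Jacobi polynomial of degree n at the nodes x."""
    h = (gamma(n + alpha + 1) * gamma(n + beta + 1)
         / ((2 * n + alpha + beta + 1) * gamma(n + alpha + beta + 1) * gamma(n + 1)))
    return eval_jacobi(n, alpha, beta, x) / np.sqrt(h)

def apply_semigroup(f_vals, t, N=40):
    """Project f onto p_0, ..., p_N and damp the coefficients spectrally."""
    return sum(np.sum(w * f_vals * p(n)) * np.exp(-t * abs(n + lam / 2)) * p(n)
               for n in range(N + 1))

# At t = 0 the truncated expansion reproduces any low-degree polynomial exactly.
recon = apply_semigroup(x ** 2, 0.0)
print(np.max(np.abs(recon - x ** 2)))
```

The printed discrepancy is at rounding-error level, which also confirms the normalization constants used above.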

As a consequence of Theorem 4.1, we get the following result.

Theorem 5.1

Assume that \(\alpha ,\beta > -1\). The Riesz–Jacobi transforms and the multipliers of Laplace and Laplace–Stieltjes transform type are scalar-valued Calderón–Zygmund operators in the sense of the space \(([0,\pi ],\mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\). Furthermore, the Jacobi–Poisson semigroup maximal operator and the mixed square functions can be viewed as vector-valued Calderón–Zygmund operators in the sense of \(([0,\pi ],\mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\), associated with the Banach spaces \(\mathbb {B}=\mathbb {X}\) and \(\mathbb {B} = L^2(\mathbb {R}_+,t^{2M+2N-1}\mathrm{d}t)\), respectively.

Proof

The standard estimates are provided in all the cases by Theorem 4.1. Thus it suffices to verify \(L^2\) boundedness and kernel associations [conditions (A) and (B) above]. This, however, was essentially done in [28, Section 3], since the arguments given there are actually valid for all \(\alpha ,\beta > -1\) when combined with the estimates proved (in some cases implicitly) in Sect. 4. The exceptions here are the Laplace and Laplace–Stieltjes type multipliers. But in these cases, the boundedness in \(L^2\) is straightforward, and the kernel associations are justified according to the outline opening the proof of [28, Proposition 2.3], see [28, Section 3, pp. 732–733]. Since all the necessary ingredients are contained in [28] and in the present paper, we leave further details to interested readers. \(\square \)

Denote by \(A_p^{\alpha ,\beta }, 1 \le p < \infty \), the Muckenhoupt classes of weights related to the space \(([0,\pi ],\mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\) (see [28, Section 1] for the definition).

Corollary 5.2

Let \(\alpha ,\beta > -1\). The Riesz–Jacobi transforms and the multipliers of Laplace and Laplace–Stieltjes transform type extend to bounded linear operators on \(L^p(w\mathrm{d}\mu _{\alpha ,\beta }), w \in A_p^{\alpha ,\beta }, 1<p<\infty \), and from \(L^1(w\mathrm{d}\mu _{\alpha ,\beta })\) to weak \(L^1(w\mathrm{d}\mu _{\alpha ,\beta }), w \in A_1^{\alpha ,\beta }\). The same boundedness properties hold for the Jacobi–Poisson semigroup maximal operator and the mixed square functions, viewed as scalar-valued sublinear operators.

Proof

The part concerning \(R_N^{\alpha ,\beta }\) and \(M_{\mathfrak {m}}^{\alpha ,\beta }\) is a direct consequence of Theorem 5.1 and the general theory. The remaining part follows by Theorem 5.1 and the arguments given in the proof of [28, Corollary 2.5]. \(\square \)

Remark 5.3

Elementary arguments, similar to those presented at the end of [8, Section 2], allow us to obtain unweighted \(L^p(\mathrm{d}\mu _{\alpha ,\beta })\)-boundedness, \(1\le p \le \infty \), for the Laplace–Stieltjes transform type multipliers. The crucial fact needed in the reasoning is the estimate
$$\begin{aligned} \int _0^\pi |K_\nu ^{\alpha ,\beta }(\theta ,\varphi )| \, \mathrm{d}\mu _{\alpha ,\beta } (\varphi ) + \int _0^\pi |K_\nu ^{\alpha ,\beta }(\varphi ,\theta )| \, \mathrm{d}\mu _{\alpha ,\beta } (\varphi ) \lesssim 1, \quad \theta \in [0,\pi ], \end{aligned}$$
which is a direct consequence of the identity \(\mathcal {H}_t^{\alpha ,\beta } \varvec{1} = \mathrm{e}^{-t\left| \frac{\alpha + \beta + 1}{2}\right| }\) and condition (18) concerning the measure \(\nu \); here \(\varvec{1}\) is the constant function equal to 1 on \([0,\pi ]\).
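In the special case \(\alpha =\beta =-1/2\), where \(\mathrm{d}\mu _{\alpha ,\beta }=\mathrm{d}\theta \), \(\frac{\alpha +\beta +1}{2}=0\), and the normalized eigenfunctions reduce to \(1/\sqrt{\pi }\) and \(\sqrt{2/\pi }\cos n\theta \), the identity \(\mathcal {H}_t^{\alpha ,\beta }\varvec{1}=\mathrm{e}^{-t\left| \frac{\alpha +\beta +1}{2}\right| }=1\) can be checked numerically from the series; the truncation level and quadrature grid below are our choices for the experiment:

```python
import numpy as np

# Truncated Jacobi-Poisson kernel for alpha = beta = -1/2 (cosine expansions),
# H_t(theta, phi) = 1/pi + (2/pi) * sum_n e^{-tn} cos(n theta) cos(n phi).
def H(t, theta, phi, N=200):
    n = np.arange(1, N + 1)
    return (1.0 / np.pi
            + (2.0 / np.pi) * np.cos(np.outer(phi, n)) @ (np.exp(-t * n) * np.cos(n * theta)))

M = 20000
phi = (np.arange(M) + 0.5) * np.pi / M       # midpoint grid on [0, pi]
mass = H(0.5, 1.2, phi).sum() * np.pi / M    # approximates (H_t 1)(theta) at theta = 1.2
print(mass)
```

The computed mass agrees with \(1\) up to rounding error, since the cosine terms integrate to zero over \([0,\pi ]\).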

6 Exact Behavior of the Jacobi–Poisson Kernel

We give another application of the representations in Proposition 2.3, which is interesting and important in its own right. We will describe in a sharp way the behavior of the kernels \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) and \(H_t^{\alpha ,\beta }(\theta ,\varphi )\). The result below extends sharp estimates for the Jacobi–Poisson kernel obtained in [29, Theorem A.1] under the restriction \(\alpha ,\beta \ge -1/2\).

Theorem 6.1

Let \(\alpha ,\beta > -1\). Then
$$\begin{aligned}&H^{\alpha ,\beta }_t(\theta ,\varphi ) \simeq {\mathbb {H}}^{\alpha ,\beta }_t(\theta ,\varphi ) \\&\quad \simeq \left( t^2+ \theta ^2+\varphi ^2 \right) ^{-\alpha -1/2} \left( t^2 + (\pi -\theta )^2 + (\pi -\varphi )^2 \right) ^{-\beta -1/2} \frac{t}{t^2+(\theta -\varphi )^2}, \end{aligned}$$
uniformly in \(0 < t \le 1\) and \(\theta ,\varphi \in [0,\pi ]\), and
$$\begin{aligned} H_t^{\alpha ,\beta }(\theta ,\varphi ) \simeq \exp \left( -t \frac{|\alpha +\beta +1|}{2} \right) , \quad {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi ) \simeq \exp \left( -t \frac{\alpha +\beta +1}{2} \right) , \end{aligned}$$
uniformly in \(t \ge 1\) and \(\theta ,\varphi \in [0,\pi ]\).

To prove this, we will need some technical results, one of which is Lemma 3.2 (a). Note that this lemma remains true if the integration is restricted to the subinterval \((1/2,1]\). This follows from the structure of \(\mathrm{d}\Pi _{\nu }\) and the fact that the integrand is positive and increasing.
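It may also be instructive to test the short-time bound of Theorem 6.1 numerically in the Chebyshev-type case \(\alpha =\beta =-1/2\), where both endpoint factors have exponent zero and the cosine series for the kernel sums to a closed form built from two classical Poisson kernels. The grid and the implied constants observed below are our experimental choices, not sharp values:

```python
import numpy as np

# Closed form for alpha = beta = -1/2: summing the cosine series with r = e^{-t}.
def H(t, theta, phi):
    r = np.exp(-t)
    def poisson(s):
        return (1 - r ** 2) / (1 - 2 * r * np.cos(s) + r ** 2)
    return (poisson(theta - phi) + poisson(theta + phi)) / (2 * np.pi)

# With both endpoint exponents equal to 0, the short-time estimate reduces to
# H_t(theta, phi) ~ t / (t^2 + (theta - phi)^2), uniformly in 0 < t <= 1.
ratios = [H(t, th, ph) * (t ** 2 + (th - ph) ** 2) / t
          for t in np.linspace(0.05, 1.0, 5)
          for th in np.linspace(0.0, np.pi, 25)
          for ph in np.linspace(0.0, np.pi, 25)]
print(min(ratios), max(ratios))  # bounded away from 0 and from infinity
```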

Lemma 6.2

Let \(\tau >0\) be fixed. Then
$$\begin{aligned} \frac{1}{a^{\tau }} - \frac{1}{b^{\tau }} - \frac{1}{c^{\tau }} + \frac{1}{d^{\tau }} \gtrsim \frac{(b \wedge c - a)^2 \wedge a^2}{a^{\tau +2}}, \end{aligned}$$
uniformly in \(0<a \le b,c \le d\) satisfying \(a+d=b+c\).

Proof

We can assume that \(b\le c\). Then the right-hand side is independent of \(c\) and \(d\). In the left-hand side, we therefore replace \(c\) and \(d\) by \(c+s\) and \(d+s\), respectively, where \(s\ge b-c\). By differentiating, we see that the function \(s \mapsto -(c+s)^{-\tau } + (d+s)^{-\tau }\) is increasing. As a result, we need only consider the extreme case \(s=b-c\), which means proving the lemma for \(b=c\).

Writing \(h=b-a\), and letting \(f(x)=x^{-\tau }\), the left-hand side is now the second difference \(f(a)-2f(a+h)+f(a+2h)\), which equals \(f''(\xi )h^2\) for some \(\xi \in (a,a+2h)\). Now if \(h>Ca\) for some large \(C=C(\tau )\), the inequality of the lemma is trivial, since the term \(a^{-\tau }\) will dominate in the left-hand side. But if \(h\le Ca\), we have \(f''(\xi ) \simeq a^{-\tau -2}\), and the conclusion follows again. \(\square \)
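A quick randomized numerical check of Lemma 6.2 can be run as follows; the choice \(\tau =1\), the sampling ranges, and the lower threshold reported in the experiment are ours (the proof above gives an implied constant, not a sharp one):

```python
import random

# Randomized sanity check of Lemma 6.2 for tau = 1.
tau = 1.0
random.seed(0)
worst = float("inf")
for _ in range(10000):
    a = random.uniform(0.1, 10.0)
    b = a + random.uniform(0.001, 10.0)
    c = a + random.uniform(0.001, 10.0)
    d = b + c - a                              # enforces a + d = b + c and b, c <= d
    lhs = a ** -tau - b ** -tau - c ** -tau + d ** -tau
    rhs = min(min(b, c) - a, a) ** 2 / a ** (tau + 2)
    worst = min(worst, lhs / rhs)
print(worst)  # stays bounded away from zero
```

The reduction in the proof shows why the worst observed ratio occurs near the symmetric case \(b=c\).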

Let \(\sigma > 1\) be fixed. Then one easily verifies that
$$\begin{aligned} \big | x^{-\sigma }-y^{-\sigma }\big | \simeq \frac{|x-y|}{(x\vee y)(x \wedge y)^{\sigma }}, \quad x,y>0. \end{aligned}$$
(21)
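For \(\sigma =2\), (21) can be made fully explicit: \(|x^{-2}-y^{-2}| = |x-y|(x+y)/(x^2y^2)\), so the ratio of the left-hand side to the comparison expression equals \((x+y)/(x\vee y)\in (1,2]\). A numerical confirmation (the sampling range is our choice):

```python
import random

# Check of (21) for sigma = 2, where the comparability constants are explicit.
sigma = 2.0
random.seed(1)
ratios = []
for _ in range(10000):
    x, y = random.uniform(0.01, 100.0), random.uniform(0.01, 100.0)
    lhs = abs(x ** -sigma - y ** -sigma)
    rhs = abs(x - y) / (max(x, y) * min(x, y) ** sigma)
    if rhs > 0:
        ratios.append(lhs / rhs)
print(min(ratios), max(ratios))  # within [1, 2]
```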

Proof of Theorem 6.1

We first prove the estimates for \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\). Among the four ranges of the type parameters distinguished in Proposition 2.3, it is enough to consider only two. Indeed, when \(\alpha ,\beta \ge -1/2\), the desired bounds are contained in [29, Theorem A.1], and the cases \(\beta < -1/2 \le \alpha \) and \(\alpha < -1/2 \le \beta \) are essentially the same. In what follows, we denote for \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\),
$$\begin{aligned} X := \frac{\sin \frac{\theta }{2}\sin \frac{\varphi }{2}}{\cosh \frac{t}{2}-\cos \frac{\theta }{2}\cos \frac{\varphi }{2}}, \quad Y := \frac{\cos \frac{\theta }{2}\cos \frac{\varphi }{2}}{\cosh \frac{t}{2}-\sin \frac{\theta }{2}\sin \frac{\varphi }{2}}, \end{aligned}$$
and
$$\begin{aligned} Z:= \frac{\sinh \frac{t}{2}}{\left( \cosh \frac{t}{2}-\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\right) ^{\alpha +1/2} \left( \cosh \frac{t}{2}-\sin \frac{\theta }{2}\sin \frac{\varphi }{2}\right) ^{\beta +1/2} \left( \cosh \frac{t}{2}- \sin \frac{\theta }{2}\sin \frac{\varphi }{2}- \cos \frac{\theta }{2}\cos \frac{\varphi }{2}\right) }. \end{aligned}$$
Notice that \(0 \le X,Y < 1\), and that \(Z\) is comparable, uniformly in \(0 < t \le 1\) and \(\theta ,\varphi \in [0,\pi ]\), with the expression describing the short-time behavior in Theorem 6.1; see the proof of [29, Theorem A.1]. Moreover, \(Z\) has the same long-time behavior as that asserted for \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\). Thus that part of the statement of Theorem 6.1 which deals with \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) can be written simply as
$$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi ) \simeq Z, \quad t>0, \quad \theta ,\varphi \in [0,\pi ]. \end{aligned}$$
(22)
Case 1 \({-1<\alpha <-1/2\le \beta }.\) By Proposition 2.3,
$$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )&= \iint -\partial _u \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \Pi _{\alpha }(u)\,\mathrm{d}u\, \mathrm{d}\Pi _{\beta }(v)\\&\quad + \iint \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{-1\slash 2}(u)\, \mathrm{d}\Pi _{\beta }(v)\\&\equiv I_1 + I_2. \end{aligned}$$
One finds that the integral \(I_1\) is dominated (up to a multiplicative constant) by its restriction to the subsquare \((1/2,1]^2\) and that the essential contribution to \(I_2\) comes from integrating over \((1/2,1]^2\). In view of Lemma 2.2, the measures \(|\Pi _{\alpha }(u)|\,\mathrm{d}u\) and \(\mathrm{d}\Pi _{\alpha +1}\) are comparable on \((1/2,1]\), and we infer that
$$\begin{aligned} I_1&\lesssim \sinh \frac{t}{2} \sin \frac{\theta }{2}\sin \frac{\varphi }{2}\iint \frac{\mathrm{d}\Pi _{\alpha +1}(u)\, \mathrm{d}\Pi _{\beta }(v)}{\left( \cosh \frac{t}{2} - u \sin \frac{\theta }{2}\sin \frac{\varphi }{2}- v \cos \frac{\theta }{2}\cos \frac{\varphi }{2}\right) ^{\alpha +\beta +3}}, \\ I_2&\simeq \sinh \frac{t}{2} \int \frac{\mathrm{d}\Pi _{\beta }(v)}{\left( \cosh \frac{t}{2} - \sin \frac{\theta }{2}\sin \frac{\varphi }{2}- v \cos \frac{\theta }{2}\cos \frac{\varphi }{2}\right) ^{\alpha +\beta +2}}, \end{aligned}$$
uniformly in \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\). Applying now Lemma 3.2 (a) to \(I_1\) twice, first to the integral against \(\mathrm{d}\Pi _{\beta }\), with the parameters \(\nu =\beta , \kappa =0, \gamma = \alpha +\beta +3, A=\cosh \frac{t}{2}-u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}, B= \cos \frac{\theta }{2}\cos \frac{\varphi }{2}\), and then to the resulting integral against \(\mathrm{d}\Pi _{\alpha +1}\), with the parameters \(\nu =\alpha +1, \kappa =\beta +1/2, \gamma = \alpha + 5/2, D=\cosh \frac{t}{2}, A = \cosh \frac{t}{2}-\cos \frac{\theta }{2}\cos \frac{\varphi }{2}, B=\sin \frac{\theta }{2}\sin \frac{\varphi }{2}\), we arrive at the bound
$$\begin{aligned} I_1 \lesssim X Z. \end{aligned}$$
Applying once again Lemma 3.2 (a), this time to \(I_2\) and with the parameters \(\nu = \beta , \kappa = 0, \gamma = \alpha +\beta +2, A = \cosh \frac{t}{2}-\sin \frac{\theta }{2}\sin \frac{\varphi }{2}, B=\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\), we get
$$\begin{aligned} I_2 \simeq (1-X)^{-\alpha -1/2} Z. \end{aligned}$$
Estimating \(I_1\) from below is slightly more subtle. Notice that
$$\begin{aligned} I_1&= \sum _{\eta =\pm 1} \, \mathop {\iint }\limits _{(0,1]^2} \left( \partial _u \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,\eta v) \right. \\&\quad \left. -\,\partial _u \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,-u,\eta v) \right) \, |\Pi _{\alpha }(u)|\, \mathrm{d}u\, \mathrm{d}\Pi _{\beta }(v); \end{aligned}$$
here the integrand in each double integral is nonnegative, and the one corresponding to \(\eta =1\) is dominating. Thus, restricting the set of integration to \((1/2,1]^2\) and making use of Lemma 2.2, we write
$$\begin{aligned} I_1&\gtrsim \mathop {\iint }\limits _{(1/2,1]^2} \left( \partial _u \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v) - \partial _u \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,-u,v) \right) \, \mathrm{d}\Pi _{\alpha +1}(u)\, \mathrm{d}\Pi _{\beta }(v)\\&\simeq \sinh \frac{t}{2} \sin \frac{\theta }{2}\sin \frac{\varphi }{2}\mathop {\iint }\limits _{(1/2,1]^2} \left[ \frac{1}{\left( \cosh \frac{t}{2}-u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}- v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\right) ^{\alpha +\beta +3}}\right. \\&\quad \left. -\,\frac{1}{\left( \cosh \frac{t}{2}+u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}- v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\right) ^{\alpha +\beta +3}}\right] \, \mathrm{d}\Pi _{\alpha +1}(u)\, \mathrm{d}\Pi _{\beta }(v). \end{aligned}$$
Applying (21) to the expression in square brackets above, we get
$$\begin{aligned} I_1&\gtrsim \mathop {\iint }\limits _{(1/2,1]^2} \frac{\sinh \frac{t}{2} \left( \sin \frac{\theta }{2}\sin \frac{\varphi }{2}\right) ^2 \, u \, \mathrm{d}\Pi _{\alpha +1}(u)\, \mathrm{d}\Pi _{\beta }(v)}{\left( \cosh \frac{t}{2}+u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}- v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\right) \left( \cosh \frac{t}{2}-u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}- v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\right) ^{\alpha +\beta +3}}\\&\gtrsim \mathop {\iint }\limits _{(1/2,1]^2} \frac{\sinh \frac{t}{2} \left( \sin \frac{\theta }{2}\sin \frac{\varphi }{2}\right) ^2 \, \mathrm{d}\Pi _{\alpha +1}(u)\, \mathrm{d}\Pi _{\beta }(v)}{\left( \cosh \frac{t}{2}+\sin \frac{\theta }{2}\sin \frac{\varphi }{2}- v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\right) \left( \cosh \frac{t}{2}-u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}- v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\right) ^{\alpha +\beta +3}}. \end{aligned}$$
The last integral is comparable with an analogous integral over the larger square \([-1,1]^2\), see the comment following Theorem 6.1. Now using Lemma 3.2 (a) twice, first for the integral against \(\mathrm{d}\Pi _{\beta }\) (with the parameters \(\nu =\beta , \kappa = 1, \gamma = \alpha +\beta +3, D = \cosh \frac{t}{2}+\sin \frac{\theta }{2}\sin \frac{\varphi }{2}, A=\cosh \frac{t}{2}-u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}, B = \cos \frac{\theta }{2}\cos \frac{\varphi }{2}\)) and then for the resulting integral against \(\mathrm{d}\Pi _{\alpha +1}\) (with \(\nu =\alpha +1, \kappa = \beta +1/2, \gamma = \alpha + 5/2, D=\cosh \frac{t}{2}, A=\cosh \frac{t}{2}-\cos \frac{\theta }{2}\cos \frac{\varphi }{2}, B=\sin \frac{\theta }{2}\sin \frac{\varphi }{2}\)), we arrive at the bound
$$\begin{aligned} I_1 \gtrsim \frac{X^2}{X+1}\, Z \simeq X^2 Z. \end{aligned}$$
Summing up, we have proved that
$$\begin{aligned} X^2 Z + (1-X)^{-\alpha -1/2} Z \lesssim {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi ) \lesssim XZ + (1-X)^{-\alpha -1/2} Z, \end{aligned}$$
uniformly in \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\), and (22) follows.
Case 2 \({-1<\alpha ,\beta <-1/2}.\) In view of Proposition 2.3,
$$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )&= \iint \partial _{u} \partial _{v} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \Pi _{\alpha }(u)\,\mathrm{d}u\, \Pi _{\beta }(v)\, \mathrm{d}v \\&\quad + \iint - \partial _{u} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \Pi _{\alpha }(u)\,\mathrm{d}u\, \mathrm{d}\Pi _{-1\slash 2}(v)\\&\quad + \iint - \partial _{v} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{-1\slash 2}(u)\, \Pi _{\beta }(v)\, \mathrm{d}v\\&\quad + \iint \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{-1\slash 2}(u)\, \mathrm{d}\Pi _{-1\slash 2}(v)\\&\equiv J_1 + J_2 + J_3 + J_4. \end{aligned}$$
Clearly, the main contribution to \(J_4\) comes from the point \((u,v)=(1,1)\), and so
$$\begin{aligned} J_4 \simeq \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,1,1) \simeq (1-X)^{-\alpha -1/2} (1-Y)^{-\beta -1/2} Z \le Z. \end{aligned}$$
To bound the remaining integrals from above, we proceed as in Case 1, obtaining
$$\begin{aligned} J_1&\lesssim \iint \partial _{u} \partial _{v} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{\alpha +1}(u)\, \mathrm{d}\Pi _{\beta +1}(v), \\ J_2&\lesssim \int \partial _{u} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,1) \, \mathrm{d}\Pi _{\alpha +1}(u), \\ J_3&\lesssim \int \partial _{v} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,1,v) \, \mathrm{d}\Pi _{\beta +1}(v). \end{aligned}$$
Then applying repeatedly Lemma 3.2 (a) with suitably chosen parameters, we get
$$\begin{aligned} J_1 \lesssim XYZ \le Z, \quad J_2 \lesssim X(1-Y)^{-\beta -1/2}Z \le Z, \quad J_3 \lesssim (1-X)^{-\alpha -1/2}YZ \le Z. \end{aligned}$$
To estimate \(J_2\) and \(J_3\) from below, we use the same trick as for \(I_1\) in Case 1. By means of Lemma 2.2 and (21), we can write
$$\begin{aligned} J_2&\gtrsim \frac{\sinh \frac{t}{2} \left( \sin \frac{\theta }{2}\sin \frac{\varphi }{2}\right) ^2}{\cosh \frac{t}{2} + \sin \frac{\theta }{2}\sin \frac{\varphi }{2}- \cos \frac{\theta }{2}\cos \frac{\varphi }{2}} \\&\times \int _{(1/2,1]} \frac{\mathrm{d}\Pi _{\alpha +1}(u)}{\left( \cosh \frac{t}{2}-u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}- \cos \frac{\theta }{2}\cos \frac{\varphi }{2}\right) ^{\alpha +\beta +3}}. \end{aligned}$$
Then Lemma 3.2 (a) shows that
$$\begin{aligned} J_2 \gtrsim \frac{X^2}{X+1} (1-Y)^{-\beta -1/2} Z \simeq X^2 (1-Y)^{-\beta -1/2} Z. \end{aligned}$$
The case of \(J_3\) is parallel; we have
$$\begin{aligned} J_3 \gtrsim (1-X)^{-\alpha -1/2} \frac{Y^2}{Y+1} Z \simeq (1-X)^{-\alpha -1/2} Y^2 Z. \end{aligned}$$
Finally, we focus on the more delicate integral \(J_1\). Observe that
$$\begin{aligned} J_1 = \mathop {\iint }\limits _{(0,1]^2} \sum _{\xi ,\eta = \pm 1} \xi \eta \, \partial _u \partial _v \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,\xi u, \eta v) \, |\Pi _{\alpha }(u)|\, \mathrm{d}u\, |\Pi _{\beta }(v)|\, \mathrm{d}v. \end{aligned}$$
Restricting here the region of integration (the integrand is nonnegative, as we shall see in a moment) and using Lemma 2.2, we conclude
$$\begin{aligned} J_1 \gtrsim \sinh \frac{t}{2} \sin \frac{\theta }{2}\sin \frac{\varphi }{2}\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\mathop {\iint }\limits _{(1/2,1]^2} \left( \frac{1}{a^{\tau }} - \frac{1}{b^{\tau }} - \frac{1}{c^{\tau }} + \frac{1}{d^{\tau }} \right) \, \mathrm{d}\Pi _{\alpha +1}(u)\, \mathrm{d}\Pi _{\beta +1}(v), \end{aligned}$$
where \(\tau =\alpha +\beta +4, a = \cosh \frac{t}{2}-u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}- v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}, b = \cosh \frac{t}{2}-u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}+ v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}, c = \cosh \frac{t}{2}+u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}- v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}, d = \cosh \frac{t}{2}+u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}+ v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\). Now applying Lemma 6.2, we get
$$\begin{aligned} J_1 \gtrsim \sinh \frac{t}{2} \sin \frac{\theta }{2}\sin \frac{\varphi }{2}\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\mathop {\iint }\limits _{(1/2,1]^2} \frac{(b \wedge c - a)^2 \wedge a^2}{a^{\alpha +\beta +6}} \, \mathrm{d}\Pi _{\alpha +1}(u)\, \mathrm{d}\Pi _{\beta +1}(v). \end{aligned}$$
Since
$$\begin{aligned} b \wedge c -a = 2 u \sin \frac{\theta }{2}\sin \frac{\varphi }{2}\wedge 2 v \cos \frac{\theta }{2}\cos \frac{\varphi }{2}\ge \sin \frac{\theta }{2}\sin \frac{\varphi }{2}\wedge \cos \frac{\theta }{2}\cos \frac{\varphi }{2}, \quad u,v \ge 1/2, \end{aligned}$$
we can write
$$\begin{aligned} J_1&\gtrsim \sinh \frac{t}{2} \sin \frac{\theta }{2}\sin \frac{\varphi }{2}\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\\&\quad \times \mathop {\iint }\limits _{(1/2,1]^2}\frac{\mathrm{d}\Pi _{\alpha +1}(u)\, \mathrm{d}\Pi _{\beta +1}(v)}{\left( \cosh \frac{t}{2}-u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}- v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\right) ^{\alpha +\beta +6}} \\&\quad \times \left[ \left( \cosh \frac{t}{2}-\sin \frac{\theta }{2}\sin \frac{\varphi }{2}- \cos \frac{\theta }{2}\cos \frac{\varphi }{2}\right) \wedge \sin \frac{\theta }{2}\sin \frac{\varphi }{2}\wedge \cos \frac{\theta }{2}\cos \frac{\varphi }{2}\right] ^2. \end{aligned}$$
Combining this with Lemma 3.2 (a), we see that
$$\begin{aligned} J_1 \gtrsim XY \left[ 1 \wedge \left( \frac{X}{1-X}\right) ^2 \wedge \left( \frac{Y}{1-Y}\right) ^2 \right] Z \ge (X \wedge Y)^4 Z. \end{aligned}$$
Altogether, the above considerations justify the estimates
$$\begin{aligned}&\left( (X\wedge Y)^4 + X^2(1-Y)^{-\beta -1/2} + (1-X)^{-\alpha -1/2}Y^2\right. \\&\qquad \left. + (1-X)^{-\alpha -1/2}(1-Y)^{-\beta -1/2}\right) Z \\&\quad \lesssim {\mathbb {H}}^{\alpha ,\beta }_t(\theta ,\varphi ) \lesssim Z, \end{aligned}$$
which hold uniformly in \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\). From this, (22) follows.
We pass to the Jacobi–Poisson kernel \(H_t^{\alpha ,\beta }(\theta ,\varphi )\). Here we can assume that \(\lambda :=\alpha +\beta +1< 0\), since otherwise the kernels \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) and \(H_t^{\alpha ,\beta }(\theta ,\varphi )\) coincide. Then
$$\begin{aligned} H_t^{\alpha ,\beta }(\theta ,\varphi ) = {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi ) + 2^{\lambda + 1} c_{\alpha ,\beta }\, \sinh \frac{\lambda t}{2}. \end{aligned}$$
The second term here is negative for \(t>0\), so \(H_t^{\alpha ,\beta }(\theta ,\varphi ) < {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\). Taking (22) into account, we obtain the short-time upper bound for \(H_t^{\alpha ,\beta }(\theta ,\varphi )\). Thus what remains to show is the lower bound and the long-time upper bound for \(H_t^{\alpha ,\beta }(\theta ,\varphi )\).
We first claim that the lower short-time bound holds provided that \(t>0\) is small enough. In view of the already justified estimates for \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\), this will follow once we check that
$$\begin{aligned} -2^{\lambda + 1} c_{\alpha ,\beta }\, \sinh \frac{\lambda t}{2} \le c \, {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi ), \quad 0< t \le T_0, \end{aligned}$$
for some \(T_0>0\) and some \(c<1\). Notice that the hypergeometric series defining \(F_4\) in (2) has nonnegative terms and that the zero-order term is 1. Thus for \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\),
$$\begin{aligned}&\left( \frac{2}{e}\right) ^{\lambda +1}{\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi ) + 2^{\lambda + 1} c_{\alpha ,\beta }\, \sinh \frac{\lambda t}{2}\\&\quad \ge \left( \frac{2}{e}\right) ^{\lambda +1} c_{\alpha ,\beta } \left( \frac{\sinh \frac{t}{2}}{\left( \cosh \frac{t}{2}\right) ^{\lambda +1}} + \mathrm{e}^{\lambda +1}\sinh \frac{\lambda t}{2}\right) . \end{aligned}$$
Now it suffices to ensure that, given \(\lambda \in (-1,0)\), the function
$$\begin{aligned} h(s) = \frac{\sinh s}{(\cosh s)^{\lambda +1}} + \mathrm{e}^{\lambda +1}\sinh (\lambda s) \end{aligned}$$
satisfies \(h(0)=0\) and \(h'(0)>0\). This is straightforward: clearly \(h(0)=0\), and \(h'(0) = 1 + \lambda \mathrm{e}^{\lambda +1}\), where the function \(\lambda \mapsto 1 + \lambda \mathrm{e}^{\lambda +1}\) vanishes at \(\lambda =-1\) and is strictly increasing on \((-1,0)\), hence positive there. The claim follows.
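The derivative value \(h'(0)=1+\lambda \mathrm{e}^{\lambda +1}\) and its positivity on \((-1,0)\) can be confirmed by a finite-difference experiment (the step size and the grid of \(\lambda \) values are our choices):

```python
import numpy as np

# Finite-difference check of h'(0) = 1 + lam * e^{lam+1} > 0 for lam in (-1, 0).
def h(s, lam):
    return np.sinh(s) / np.cosh(s) ** (lam + 1) + np.exp(lam + 1) * np.sinh(lam * s)

eps = 1e-6
errs, slopes = [], []
for lam in np.linspace(-0.99, -0.01, 50):
    slope = (h(eps, lam) - h(-eps, lam)) / (2 * eps)   # central difference at s = 0
    slopes.append(slope)
    errs.append(abs(slope - (1 + lam * np.exp(lam + 1))))
print(max(errs), min(slopes))
```

The slope is smallest, yet still positive, as \(\lambda \) approaches \(-1\), matching the vanishing of \(1+\lambda \mathrm{e}^{\lambda +1}\) at \(\lambda =-1\).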
Next we show that the upper long-time bound for \(H_t^{\alpha ,\beta }(\theta ,\varphi )\) holds for \(t \ge 1\) and that the lower counterpart is also true provided that \(t \ge T_1\) with \(T_1\) chosen large enough. From the series representation,
$$\begin{aligned} H_t^{\alpha ,\beta }(\theta ,\varphi ) = 2^{\lambda } c_{\alpha ,\beta }\, \mathrm{e}^{-t|\lambda | /2} + \sum _{n=1}^{\infty } \mathrm{e}^{-t(n+\lambda /2)} {\mathcal {P}}_n^{\alpha ,\beta }(\theta ){\mathcal {P}}_n^{\alpha ,\beta }(\varphi ). \end{aligned}$$
The last series can be controlled by means of the bound \(|{\mathcal {P}}_n^{\alpha ,\beta }(\theta )| \lesssim n\) for \(n \ge 1\); see (14). More precisely, we have
$$\begin{aligned} \bigg | \sum _{n=1}^{\infty } \mathrm{e}^{-t(n+\lambda /2)} {\mathcal {P}}_n^{\alpha ,\beta }(\theta ){\mathcal {P}}_n^{\alpha ,\beta }(\varphi )\bigg | \lesssim \mathrm{e}^{-t/2} \sum _{n=1}^{\infty } n^2 \mathrm{e}^{-t\left( n+\frac{\alpha +\beta }{2}\right) } \lesssim \mathrm{e}^{-t/2}, \quad t \ge 1. \end{aligned}$$
Since \(\alpha +\beta > -2\), each exponent \(n+\frac{\alpha +\beta }{2}\) is positive, so the last sum is finite and the estimate holds. Moreover, since \(|\lambda |<1\), the remainder \(O(\mathrm{e}^{-t/2})\) is negligible compared with the first term \(2^{\lambda } c_{\alpha ,\beta }\, \mathrm{e}^{-t|\lambda |/2}\) for large \(t\), and the conclusion follows.
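As a quick numerical sanity check (not part of the proof), one can verify that for \(\alpha +\beta > -2\) the sum \(\sum _{n\ge 1} n^2 \mathrm{e}^{-t(n+(\alpha +\beta )/2)}\) is finite and decreasing in \(t\), hence uniformly bounded for \(t \ge 1\); the parameter values below are illustrative only.

```python
import math

def tail_sum(t, alpha, beta, n_max=200):
    # Partial sum of sum_{n>=1} n^2 * exp(-t*(n + (alpha+beta)/2));
    # the terms decay geometrically, so n_max = 200 is ample for t >= 1.
    s = alpha + beta
    return sum(n**2 * math.exp(-t * (n + s / 2)) for n in range(1, n_max + 1))

# Illustrative parameters with alpha + beta close to the endpoint -2:
alpha, beta = -0.9, -0.95
vals = [tail_sum(t, alpha, beta) for t in (1.0, 2.0, 5.0, 10.0)]
# Every exponent n + (alpha+beta)/2 is positive for n >= 1, so each term
# decreases in t, and the sum on t >= 1 is bounded by its value at t = 1.
```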
Finally, to deal with the lower bound in the range \(T_0 \le t \le T_1\), we use the semigroup property of \(H_t^{\alpha ,\beta }\). For \(T_0 \le t \le 2T_0\), we have
$$\begin{aligned} H_t^{\alpha ,\beta }(\theta ,\varphi ) = \int _0^\pi H_{t/2}^{\alpha ,\beta }(\theta ,\psi ) H_{t/2}^{\alpha ,\beta }(\psi ,\varphi ) \, \mathrm{d}\mu _{\alpha ,\beta }(\psi ). \end{aligned}$$
Since \(H_{t/2}^{\alpha ,\beta }(\theta ,\varphi ) \gtrsim 1\) for \((t,\theta ,\varphi ) \in [T_0,2T_0]\times [0,\pi ]^2\) by the above, we conclude that \(H_t^{\alpha ,\beta }(\theta ,\varphi )\) also has a positive lower bound on this set. Repeating this doubling step a finite number of times, we reach \(t=T_1\).

The proof of Theorem 6.1 is complete. \(\square \)
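The doubling mechanism in the last step can be illustrated on a toy finite-state heat semigroup (a hypothetical discrete analogue, not the Jacobi kernel itself): for a symmetric generator \(L\), the kernel \(K_t = \mathrm{e}^{-tL}\) satisfies \(K_{2t} = K_t K_t\), so a pointwise lower bound at time \(t\) propagates to time \(2t\).

```python
import numpy as np

# Path-graph Laplacian on N states (a stand-in generator; any symmetric
# generator of an irreducible Markov semigroup would do).
N = 6
L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
L[0, 0] = L[-1, -1] = 1
w, V = np.linalg.eigh(L)

def heat(t):
    # K_t = exp(-t L) via the spectral decomposition of L.
    return V @ np.diag(np.exp(-t * w)) @ V.T

T0 = 1.0
c = heat(T0).min()          # pointwise lower bound at time T0 (positive)
K2 = heat(T0) @ heat(T0)    # equals K_{2*T0} by the semigroup law
# Each entry of K2 is a sum of N products of entries >= c, so
# K2 >= N * c**2 entrywise: the lower bound propagates, as in the proof.
```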

References

  1. Andrews, G.E., Askey, R., Roy, R.: Special Functions. Encyclopedia of Mathematics and its Applications, vol. 71. Cambridge University Press, Cambridge (1999)
  2. Askey, R.: Orthogonal Polynomials and Special Functions. Society for Industrial and Applied Mathematics, Philadelphia (1975)
  3. Caffarelli, L.A.: Sobre la conjugación y sumabilidad de series de Jacobi. Ph.D. thesis, Facultad de Ciencias Exactas, Universidad de Buenos Aires, Argentina (1971)
  4. Caffarelli, L.A., Calderón, C.P.: On Abel summability of multiple Jacobi series. Colloq. Math. 30, 277–288 (1974)
  5. Calderón, C.P., Urbina, W.O.: On Abel summability of Jacobi polynomials series, the Watson kernel and applications. Ill. J. Math. 57, 343–371 (2013)
  6. Calderón, C.P., Vera de Serio, V.N.: Abel summability of Jacobi type series. Ill. J. Math. 41, 237–265 (1997)
  7. Castro, A.J., Nowak, A., Szarek, T.Z.: Riesz–Jacobi transforms as principal value integrals. Preprint (2014). arXiv:1405.7069
  8. Castro, A.J., Szarek, T.Z.: Calderón–Zygmund operators in the Bessel setting for all possible type indices. Acta Math. Sin. (Engl. Ser.) 30, 637–648 (2014)
  9. Castro, A.J., Szarek, T.Z.: On fundamental harmonic analysis operators in certain Dunkl and Bessel settings. J. Math. Anal. Appl. 412, 943–963 (2014)
  10. Ciaurri, Ó.: The Poisson operator for orthogonal polynomials in the multidimensional ball. J. Fourier Anal. Appl. 19, 1020–1028 (2013)
  11. Ciaurri, Ó., Roncal, L., Stinga, P.R.: Fractional integrals on compact Riemannian symmetric spaces of rank one. Adv. Math. 235, 627–647 (2013)
  12. Ciaurri, Ó., Roncal, L., Stinga, P.R.: Riesz transforms on compact Riemannian symmetric spaces of rank one. Preprint (2013). arXiv:1308.6507
  13. Connett, W.C., Schwartz, A.L.: A multiplier theorem for ultraspherical series. Stud. Math. 51, 51–70 (1974)
  14. Connett, W.C., Schwartz, A.L.: A multiplier theorem for Jacobi expansions. Stud. Math. 52, 243–261 (1975)
  15. Connett, W.C., Schwartz, A.L.: A correction to the paper "A multiplier theorem for Jacobi expansions" (Studia Math. 52 (1975), pp. 243–261). Stud. Math. 54, 107 (1975)
  16. Connett, W.C., Schwartz, A.L.: The Littlewood–Paley theory for Jacobi expansions. Trans. Am. Math. Soc. 251, 219–234 (1979)
  17. Dijksma, A., Koornwinder, T.K.: Spherical harmonics and the product of two Jacobi polynomials. Indag. Math. 33, 171–196 (1971)
  18. Erdélyi, A., Magnus, W., Oberhettinger, F., Tricomi, F.G.: Higher Transcendental Functions, Vol. I. Based on notes left by Harry Bateman. Reprint of the 1953 original. Robert E. Krieger Publishing Co., Melbourne (1981)
  19. Gasper, G., Trebels, W.: Multiplier criteria of Marcinkiewicz type for Jacobi expansions. Trans. Am. Math. Soc. 231, 117–132 (1977)
  20. Gasper, G., Trebels, W.: Multiplier criteria of Hörmander type for Jacobi expansions. Stud. Math. 68, 187–197 (1980)
  21. Johnson, W.P.: The curious history of Faà di Bruno's formula. Am. Math. Mon. 109, 217–234 (2002)
  22. Langowski, B.: Harmonic analysis operators related to symmetrized Jacobi expansions. Acta Math. Hung. 140, 248–292 (2013)
  23. Li, Z.: Hardy spaces for Jacobi expansions I. The basic theory. Analysis 16, 27–49 (1996)
  24. Li, Z., Liao, J.: Hardy spaces for Dunkl–Gegenbauer expansions. J. Funct. Anal. 265, 687–742 (2013)
  25. Li, Z., Liu, L.: Harmonic analysis on extended Jacobi expansions: an application of Dunkl's theory. In: Analysis, Combinatorics and Computing, pp. 319–340. Nova Science Publishers, Hauppauge (2002)
  26. Muckenhoupt, B., Stein, E.M.: Classical expansions and their relation to conjugate harmonic functions. Trans. Am. Math. Soc. 118, 17–92 (1965)
  27. Nowak, A., Roncal, L.: Potential operators associated with Jacobi and Fourier–Bessel expansions. J. Math. Anal. Appl. 422, 148–184 (2015)
  28. Nowak, A., Sjögren, P.: Calderón–Zygmund operators related to Jacobi expansions. J. Fourier Anal. Appl. 18, 717–749 (2012)
  29. Nowak, A., Sjögren, P.: Sharp estimates of the Jacobi heat kernel. Stud. Math. 218, 219–244 (2013)
  30. Nowak, A., Szarek, T.Z.: Calderón–Zygmund operators related to Laguerre function expansions of convolution type. J. Math. Anal. Appl. 388, 801–816 (2012)
  31. Rubio de Francia, J.L., Ruiz, F.J., Torrea, J.L.: Calderón–Zygmund theory for operator-valued kernels. Adv. Math. 62, 7–48 (1986)
  32. Ruiz, F.J., Torrea, J.L.: Vector-valued Calderón–Zygmund theory and Carleson measures on spaces of homogeneous nature. Stud. Math. 88, 221–243 (1988)
  33. Szegő, G.: Orthogonal Polynomials, 4th edn. American Mathematical Society Colloquium Publications, vol. 23. American Mathematical Society, Providence (1975)
  34. Wróbel, B.: Multivariate spectral multipliers for tensor product orthogonal expansions. Monatsh. Math. 168, 125–149 (2012)
  35. Wróbel, B.: Erratum to: Multivariate spectral multipliers for tensor product orthogonal expansions. Monatsh. Math. 169, 113–115 (2013)

Copyright information

© The Author(s) 2015

Open Access. This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.

Authors and Affiliations

  1. Institute of Mathematics, Polish Academy of Sciences, Warsaw, Poland
  2. Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, Göteborg, Sweden
