Abstract
We derive an integral representation for the Jacobi–Poisson kernel valid for all admissible type parameters \(\alpha ,\beta \) in the context of Jacobi expansions. This enables us to develop a technique for proving standard estimates in the Jacobi setting that works for all possible \(\alpha \) and \(\beta \). As a consequence, we can prove that several fundamental operators in the harmonic analysis of Jacobi expansions are (vector-valued) Calderón–Zygmund operators in the sense of the associated space of homogeneous type, and hence their mapping properties follow from the general theory. The new Jacobi–Poisson kernel representation also leads to sharp estimates of this kernel. The paper generalizes methods and results existing in the literature but valid or justified only for a restricted range of \(\alpha \) and \(\beta \).
1 Introduction
This paper is a continuation and completion of the research performed recently in [28] by the first and second authors. Given parameters \(\alpha ,\beta > -1\), consider the Jacobi differential operator
$$\begin{aligned} {\mathcal {J}}^{\alpha ,\beta } = - \frac{\mathrm{d}^2}{\mathrm{d}\theta ^2} - \left( \left( \alpha +\frac{1}{2}\right) \cot \frac{\theta }{2} - \left( \beta +\frac{1}{2}\right) \tan \frac{\theta }{2}\right) \frac{\mathrm{d}}{\mathrm{d}\theta } + \left( \frac{\alpha +\beta +1}{2}\right) ^2 \end{aligned}$$
on the interval \([0,\pi ]\) equipped with the (doubling) measure
$$\begin{aligned} \mathrm{d}\mu _{\alpha ,\beta }(\theta ) = \left( \sin \frac{\theta }{2}\right) ^{2\alpha +1} \left( \cos \frac{\theta }{2}\right) ^{2\beta +1} \mathrm{d}\theta . \end{aligned}$$
This operator, acting initially on \(C_c^2(0,\pi )\), has a natural self-adjoint extension in \(L^2(\mathrm{d}\mu _{\alpha ,\beta })\), whose spectral decomposition is discrete and given by the classical Jacobi polynomials. Various aspects of harmonic analysis related to the Jacobi setting have been studied in the literature. This line of research goes back to the seminal work of Muckenhoupt and Stein [26], in which the ultraspherical case (\(\alpha =\beta \)) was investigated. Later, several other authors contributed to the subject, see [28, Section 1] and also the end of [28, Section 2] for a detailed account and references. Actually, for the sake of completeness, that account should be augmented by further references, like [3, 4, 6, 13–16, 19, 20, 23]. Certain extensions of the ultraspherical and Jacobi settings related to Dunkl’s theory were investigated from the harmonic analysis perspective in [24, 25].
The main result of [28] is restricted to \(\alpha ,\beta \ge -1/2\). It states that several fundamental operators in the harmonic analysis of Jacobi expansions, including Riesz transforms, imaginary powers of the Jacobi operator, the Jacobi–Poisson semigroup maximal operator, and Littlewood–Paley–Stein type square functions, are (vector-valued) Calderón–Zygmund operators. Consequently, their \(L^p\) mapping properties follow from the general theory. The proofs in [28] rely on an integral formula for the Jacobi–Poisson kernel derived in [28] from a product formula for Jacobi polynomials due to Dijksma and Koornwinder [17]. Unfortunately, the latter result is not valid if either \(\alpha < -1/2\) or \(\beta < -1/2\), and this limitation is inherited by the above-mentioned Jacobi–Poisson kernel representation. Thus the technique of proving estimates for kernels defined via the Jacobi–Poisson kernel developed in [28] is designed for the case \(\alpha ,\beta \ge -1/2\). The object of the present paper is to eliminate this restriction in the parameter values, which will require some new techniques.
Our method starts with the deduction of an integral representation of the Jacobi–Poisson kernel, valid for all \(\alpha ,\beta > -1\), see Proposition 2.3. This formula contains as a special case the one obtained in [28, Proposition 4.1] for \(\alpha ,\beta \ge -1/2\) and is more involved if either \(\alpha \) or \(\beta \) is less than \(-1/2\). Then we establish a suitable generalization to all \(\alpha ,\beta >-1\) of the strategy employed in [28] to prove standard estimates [see (15)–(17) below] for kernels expressible via the Jacobi–Poisson kernel. To achieve this, some essentially new arguments are required, and the method allows a unified treatment of all parameter values \(\alpha ,\beta > -1\).
As an application of these techniques, we prove that the maximal operator of the Jacobi–Poisson semigroup, the Riesz–Jacobi transforms, Littlewood–Paley–Stein type square functions and multipliers of Laplace and Laplace–Stieltjes transform type are scalar-valued or vector-valued Calderón–Zygmund operators, in the sense of the space of homogeneous type \(([0,\pi ],\mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\); see Theorem 5.1. This extends to all \(\alpha ,\beta > -1\) several results for \(\alpha ,\beta \ge -1/2\) obtained in [28] and earlier papers, as well as results on the two kinds of Laplace transform type multipliers that follow from the recent work of Langowski [22]. Our technique is well suited to a wider variety of operators, including more general forms of \(g\)-functions and Lusin area type integrals. In a similar spirit, analogous problems concerning analysis for “low” values of type parameters were recently investigated in the Laguerre [30], Bessel [8], and certain Dunkl [9] settings.
The Jacobi–Poisson kernel representation derived in Proposition 2.3 makes it possible to describe the exact behavior of the kernel; see Theorem 6.1. The sharp estimates we prove extend to all \(\alpha ,\beta >-1\) the bounds found not long ago by Nowak and Sjögren [29, Theorem A.1 in the Appendix] under the restriction \(\alpha ,\beta \ge -1/2\). An important application of Theorem 6.1 is given by the sharp estimates for potential kernels in the Jacobi and Fourier–Bessel settings proved recently by Nowak and Roncal [27]. Moreover, Theorem 6.1 readily implies explicit sharp bounds for the nonspectral variant of the Jacobi–Poisson kernel sometimes called the Watson kernel and given by (see [2, Lecture 2] or [1, p. 385])
Here \(0<r<1, x,y \in [-1,1], P_n^{\alpha ,\beta }\) are the classical Jacobi polynomials, and \(h_n^{\alpha ,\beta }\) are suitable normalizing constants. Recently an upper bound for the Watson kernel was obtained by Calderón and Urbina [5], and some earlier results in this spirit can be found in [4, 6, 14, 23] (see also [15]). We remark that our results concerning mapping properties of the Jacobi–Poisson semigroup maximal operator, see Corollary 5.2, lead in a straightforward manner to analogous results for the maximal operator related to the Watson kernel and investigated in [3–6, 16].
It is worth noting that there are further interesting applications of our Jacobi–Poisson kernel representation. For instance, in [7] it is used to obtain a principal value integral representation for the Riesz–Jacobi transforms. On the other hand, in [10–12, 22, 34] (see also [35]), the authors make use of the integral representation for the Jacobi–Poisson kernel derived in [28, Proposition 4.1], which is restricted to \(\alpha ,\beta \ge -1/2\). The Jacobi–Poisson kernel formula obtained in Proposition 2.3 should thus make it possible to extend the relevant results in these papers to a wider range of \(\alpha ,\beta \). This, however, remains to be investigated.
The paper is organized as follows. In Sect. 2, we derive an integral representation of the Jacobi–Poisson kernel valid for all \(\alpha ,\beta > -1\). Section 3 contains various facts and preparatory results needed for kernel estimates. In Sect. 4, we prove standard estimates for kernels associated with the operators mentioned above. This leads to our main results in Sect. 5, saying that the operators in question can be interpreted as Calderón–Zygmund operators and giving, as a consequence, their \(L^p\) mapping properties. Finally, Sect. 6 is devoted to sharp estimates of the Jacobi–Poisson kernel.
Throughout the paper, we use a fairly standard notation with essentially all symbols referring to the space of homogeneous type \(([0,\pi ],\mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\). Since the distance in this space is the Euclidean one, the ball denoted \(B(\theta ,r)\) is simply the interval \((\theta -r,\theta +r)\cap [0,\pi ]\). When writing estimates, we will frequently use the notation \(X \lesssim Y\) to indicate that \(X \le CY\) with a positive constant \(C\) independent of significant quantities. We shall write \(X \simeq Y\) when simultaneously \(X \lesssim Y\) and \(Y \lesssim X\).
2 The Jacobi–Poisson Kernel
Let \(\alpha ,\beta > -1\). The Jacobi–Poisson kernel is given by (see [28, Section 2])
here \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\), and \({\mathcal {P}}_n^{\alpha ,\beta }\) are the classical Jacobi trigonometric polynomials, normalized in \(L^2(\mathrm{d}\mu _{\alpha ,\beta })\). This is the kernel of the Jacobi–Poisson semigroup \(\big \{\exp \big (-t\sqrt{{\mathcal {J}}^{\alpha ,\beta }}\big )\big \}_{t>0}\), since each \({\mathcal {P}}_n^{\alpha ,\beta }\) is an eigenfunction of \({\mathcal {J}}^{\alpha ,\beta }\), with eigenvalue \(\big (n+\frac{\alpha +\beta +1}{2}\big )^2\). Notice that the fraction \(\frac{\alpha +\beta +1}{2}\) may be negative. Defining the auxiliary kernel
the Jacobi–Poisson kernel can be written as
where
As we shall see later, there are important cancellations between the two terms in (1) for large \(t\).
The kernel \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) can be computed explicitly by means of Bailey’s formula, see [1, pp. 385–387]. More precisely, we have
for \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\). Here \(F_4\) is Appell's hypergeometric function of two variables defined by the series
$$\begin{aligned} F_4(a,b;c,d;x,y) = \sum _{m,n=0}^{\infty } \frac{(a)_{m+n}\, (b)_{m+n}}{(c)_m\, (d)_n\, m!\, n!}\, x^m y^n, \end{aligned}$$
where \((a)_n\) means the Pochhammer symbol, \((a)_n = a(a+1)\cdots (a+n-1)\) for \(n \ge 1\) and \((a)_0 = 1\). This double power series is known to converge absolutely when \(\sqrt{|x|}+ \sqrt{|y|} < 1\), cf. [18, Chapter V, Section 5.7.2]. From this expression, the positivity of \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) can easily be seen. Moreover, (2) provides a holomorphic extension of \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) as a function of the parameters \(\alpha ,\beta > -1\) to the region \(\{(\alpha ,\beta ) \in {\mathbb {C}}^2 : \mathfrak {R}\alpha , \mathfrak {R}\beta > -1\}\). Indeed, with \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\) fixed, the hypergeometric series in (2) is a sum of holomorphic functions of \((\alpha ,\beta )\) converging locally uniformly in the region in question (the latter fact can be justified by means of elementary estimates for the Pochhammer symbol). However, the formula (2) does not seem to be convenient from the point of view of kernel estimates. Thus we need a more suitable representation.
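For orientation, the double series defining \(F_4\) can be evaluated by brute-force truncation. The following sketch is ours, not from the paper; it implements the standard series \(F_4(a,b;c,d;x,y)=\sum_{m,n\ge 0}\frac{(a)_{m+n}(b)_{m+n}}{(c)_m(d)_n\,m!\,n!}x^m y^n\), reliable well inside the convergence region \(\sqrt{|x|}+\sqrt{|y|}<1\).

```python
import math

def pochhammer(a, n):
    """(a)_n = a(a+1)...(a+n-1), with (a)_0 = 1."""
    r = 1.0
    for k in range(n):
        r *= a + k
    return r

def F4(a, b, c, d, x, y, terms=40):
    """Truncated Appell F_4 double series."""
    s = 0.0
    for m in range(terms):
        for n in range(terms):
            s += (pochhammer(a, m + n) * pochhammer(b, m + n)
                  / (pochhammer(c, m) * pochhammer(d, n)
                     * math.factorial(m) * math.factorial(n))
                  * x**m * y**n)
    return s
```

Two features immediate from the series are reproduced by the truncation: the symmetry \(F_4(a,b;c,d;x,y)=F_4(a,b;d,c;y,x)\), and the fact that for \(y=0\) the value does not depend on \(d\).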
In [28, Section 4], the first and second authors derived the following integral representation, valid for \(\alpha ,\beta \ge -1/2\) (notice that under this restriction \(H_t^{\alpha ,\beta }(\theta ,\varphi )\) coincides with \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\)):
$$\begin{aligned} H_t^{\alpha ,\beta }(\theta ,\varphi ) = c_{\alpha ,\beta } \iint \frac{\sinh \frac{t}{2}}{\big ( \cosh \frac{t}{2} - 1 + q(\theta ,\varphi ,u,v)\big )^{\alpha +\beta +2}}\, \mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta }(v) \end{aligned}$$
for \(t>0\) and \(\theta , \varphi \in [0,\pi ]\). Here
$$\begin{aligned} q(\theta ,\varphi ,u,v) = 1 - u \sin \frac{\theta }{2} \sin \frac{\varphi }{2} - v \cos \frac{\theta }{2} \cos \frac{\varphi }{2}, \end{aligned}$$
\(c_{\alpha ,\beta }\) is a positive constant depending only on the parameters,
and the measure \(\mathrm{d}\Pi _{\alpha }\) is defined in the following way. For \(\alpha > -1/2\), we let
$$\begin{aligned} \Pi _{\alpha }(u) = \frac{\Gamma (\alpha +1)}{\sqrt{\pi }\, \Gamma (\alpha + \frac{1}{2})} \int _0^u \big ( 1-w^2\big )^{\alpha -1\slash 2}\, \mathrm{d}w, \end{aligned}$$
which is an odd function in \(-1<u<1\). Then \(\mathrm{d}\Pi _{\alpha }\) is a probability measure in \([-1,1]\). As \(\alpha \rightarrow -1/2\), one finds that \(\mathrm{d}\Pi _{\alpha }\) converges weakly to the measure \(\mathrm{d}\Pi _{-1/2} := \frac{1}{2}(\delta _{-1}+\delta _{1})\), where \(\delta _{\pm 1}\) denotes a point mass at \(\pm 1\).
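For orientation, the following numerical sketch is ours, not from the paper; it assumes the classical normalized density \(\mathrm{d}\Pi _{\alpha }(u) = \frac{\Gamma (\alpha +1)}{\sqrt{\pi }\,\Gamma (\alpha +1/2)}(1-u^2)^{\alpha -1/2}\,\mathrm{d}u\) and checks two features used above: the total mass equals \(1\), and the second moment \(\int u^2\,\mathrm{d}\Pi _{\alpha }\) tends to \(1\) as \(\alpha \rightarrow -1/2^{+}\), reflecting the concentration of mass at \(u=\pm 1\).

```python
import math

def dpi_moment(alpha, power, n=100000):
    """∫ u^power dΠ_α(u) by the midpoint rule after the substitution u = sin s,
    which turns the density into (cos s)^{2α} ds over (-π/2, π/2)."""
    c = math.gamma(alpha + 1) / (math.sqrt(math.pi) * math.gamma(alpha + 0.5))
    h = math.pi / n
    total = 0.0
    for i in range(n):
        s = -math.pi / 2 + (i + 0.5) * h
        total += math.sin(s) ** power * math.cos(s) ** (2 * alpha) * h
    return c * total

def second_moment(alpha):
    """Closed form ∫ u² dΠ_α(u) = 1 - (α+1/2)/(α+1), via Beta-function identities."""
    return 1.0 - (alpha + 0.5) / (alpha + 1.0)
```

As \(\alpha \downarrow -1/2\) the closed form tends to \(1\), which is the second moment of \(\frac{1}{2}(\delta _{-1}+\delta _{1})\).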
Now we observe that (4) can be extended to all complex \(\alpha \ne -1/2\) with \(\mathfrak {R}\alpha > -1\). Then the (distribution) derivative
$$\begin{aligned} \mathrm{d}\Pi _{\alpha }(u) = \frac{\mathrm{d}}{\mathrm{d}u} \Pi _{\alpha }(u)\, \mathrm{d}u = \frac{\Gamma (\alpha +1)}{\sqrt{\pi }\, \Gamma (\alpha + \frac{1}{2})}\, \big ( 1-u^2\big )^{\alpha -1\slash 2}\, \mathrm{d}u \end{aligned}$$
is a local complex measure in \((-1,1)\). For \(\alpha \in (-1,-1/2)\) real, its density is negative, even, and not integrable in \((-1,1)\). If \(\phi \) is a continuous function in \((-1,1)\) and \(\phi (u) = \mathcal {O}(1-u)\) as \(u \rightarrow 1\), then the integral \(I(\alpha ) = \int _0^1\phi (u)\, \mathrm{d}\Pi _{\alpha }(u)\) is well defined. As a function of \(\alpha \), this integral is analytic in \(\{\alpha : \mathfrak {R}\alpha > -1, \alpha \ne -1/2\}\). Since \(|I(\alpha )| \lesssim |\alpha +1/2|\int _0^1 (1-u^2)^{\mathfrak {R}\alpha +1/2}\, \mathrm{d}u \rightarrow 0\) as \(\alpha \rightarrow -1/2\), we see that \(I(\alpha )\) is actually analytic in \(\{\alpha : \mathfrak {R}\alpha > -1\}\) and \(I(-1/2)=0\). More generally, if \(\phi _{\alpha ,\beta }(u)\) is continuous in \((u,\alpha ,\beta )\) and analytic in \((\alpha ,\beta )\) for \(-1<u<1\) and \(\mathfrak {R}\alpha , \mathfrak {R}\beta >-1\), and \(\phi _{\alpha ,\beta }(u) = \mathcal {O}(1-u)\) locally uniformly in \((\alpha ,\beta )\), then \(I(\alpha ,\beta ) = \int _0^1 \phi _{\alpha ,\beta }(u)\, \mathrm{d}\Pi _{\alpha }(u)\) will be analytic in \((\alpha ,\beta )\) in \(\mathfrak {R}\alpha , \mathfrak {R}\beta >-1\). Under analogous assumptions, this also extends to functions \(\phi _{\alpha ,\beta }(u,v)\) and the double integral \(I(\alpha ,\beta ) = \iint _{(0,1)^2} \phi _{\alpha ,\beta }(u,v) \, \mathrm{d}\Pi _{\alpha }(u)\,\mathrm{d}\Pi _{\beta }(v)\), if one assumes \(\phi _{\alpha ,\beta }(u,v) = \mathcal {O}((1-u)(1-v))\) locally uniformly in \(\alpha \) and \(\beta \).
The measures \(\mathrm{d}\Pi _{\alpha }\) will now be used to extend the representation (3) to the range \(\alpha ,\beta > -1\). Define
$$\begin{aligned} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v) = c_{\alpha ,\beta }\, \frac{\sinh \frac{t}{2}}{\big ( \cosh \frac{t}{2} - 1 + q(\theta ,\varphi ,u,v)\big )^{\alpha + \beta + 2}}. \end{aligned}$$
Taking the even parts of \(\Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\) in \(u\) and \(v\), we also define
Notice that by (3) and for symmetry reasons, we have for \(\alpha ,\beta \ge -1/2\),
We can now state a general integral representation of \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\).
Theorem 2.1
For all \(\alpha ,\beta > -1, t>0\) and \(\theta ,\varphi \in [0,\pi ]\),
Proof
For \(\alpha ,\beta \ge -1/2\), (7) is an easy consequence of (6). With \(\phi _{\alpha ,\beta }(u) = \Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,u,1) -\Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,1,1)\), the second integral in (7) is of the form \(I(\alpha ,\beta )\) just described; observe that \(\phi _{\alpha ,\beta }(u) = \mathcal {O}(1-u)\) as \(u \rightarrow 1\), since the derivative \(\partial \Psi _{E}^{\alpha ,\beta }/\partial u\) is bounded locally uniformly in \(\alpha \) and \(\beta \). The third integral in (7) is similar. For the double integral, we let
and get a double integral of type \(I(\alpha ,\beta )\).
The conclusion is that the right-hand side of (7) is analytic in \((\alpha ,\beta ) \in \{z : \mathfrak {R}z > -1\}^2\). Theorem 2.1 follows, since the left-hand side is also analytic. \(\square \)
We remark that in Theorem 2.1, it does not matter whether one integrates over the open interval \((0,1)\) or over \((0,1]\), even when the measure is \(\mathrm{d}\Pi _{-1/2}\). But subsequently, it will be more convenient to use \((0,1]\).
Next we restate the formula of Theorem 2.1 in order to obtain a more suitable representation of \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) for the kernel estimates in Sect. 4. Recall that for \(-1<\alpha <-1/2, \Pi _{\alpha }(u)\) is an odd function, which is negative for \(u>0\). It can easily be verified that the density \(|\Pi _{\alpha }(u)|\) defines a finite measure on \([-1,1]\). In fact, we have the following.
Lemma 2.2
Let \(-1 < \alpha < -1/2\) be fixed. Then
Proof
These three quantities are even in \(u\), and we need only consider \(u\in (0,1)\). It is enough to observe that then \(|\Pi _{\alpha }(u)| \simeq \int _0^u (1-w)^{\alpha -1/2} \, \mathrm{d}w\). \(\square \)
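The comparison in the proof can be seen concretely in a small numerical sketch (ours, not from the paper). It assumes the classical form \(\Pi _{\alpha }(u) = \frac{\Gamma (\alpha +1)}{\sqrt{\pi }\,\Gamma (\alpha +1/2)}\int _0^u (1-w^2)^{\alpha -1/2}\,\mathrm{d}w\); since \(1+w \simeq 1\) on \((0,1)\), the ratio of \(|\Pi _{\alpha }(u)|\) to \(\int _0^u (1-w)^{\alpha -1/2}\,\mathrm{d}w\) stays between two positive constants.

```python
import math

ALPHA = -0.7  # any fixed value in (-1, -1/2)

# normalizing constant; negative here since Γ(α+1/2) < 0 for α in (-1, -1/2)
C_ALPHA = math.gamma(ALPHA + 1) / (math.sqrt(math.pi) * math.gamma(ALPHA + 0.5))

def pi_alpha(u, n=20000):
    """Π_α(u) = c_α ∫_0^u (1-w²)^{α-1/2} dw by the midpoint rule (0 < u < 1)."""
    h = u / n
    total = 0.0
    for i in range(n):
        w = (i + 0.5) * h
        total += (1.0 - w * w) ** (ALPHA - 0.5) * h
    return C_ALPHA * total

def comparator(u):
    """Closed form of ∫_0^u (1-w)^{α-1/2} dw when α + 1/2 < 0."""
    e = ALPHA + 0.5  # negative
    return ((1.0 - u) ** e - 1.0) / abs(e)
```

The ratio tends to \(|c_{\alpha }|\) as \(u\rightarrow 0^{+}\) and to \(2^{\alpha -1/2}|c_{\alpha }|\) as \(u\rightarrow 1^{-}\), so it is bounded above and below, as the lemma asserts.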
Proposition 2.3
Let \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\).
(i)
If \(\alpha ,\beta \ge -1/2\), then
$$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi ) = \iint \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta }(v). \end{aligned}$$
(ii)
If \(-1<\alpha <-1/2 \le \beta \), then
$$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )&= \iint \left\{ -\partial _u \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \Pi _{\alpha }(u)\,\mathrm{d}u\, \mathrm{d}\Pi _{\beta }(v)\right. \\&\left. \quad + \,\Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{-1\slash 2}(u)\, \mathrm{d}\Pi _{\beta }(v)\right\} . \end{aligned}$$
(iii)
If \(-1 < \beta < -1/2 \le \alpha \), then
$$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )&= \iint \left\{ -\partial _v \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{\alpha }(u)\, \Pi _{\beta }(v)\, \mathrm{d}v \right. \\&\left. \quad + \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{-1\slash 2}(v)\right\} . \end{aligned}$$
(iv)
If \(-1 < \alpha , \beta < -1/2\), then
$$\begin{aligned} {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi ) =&\iint \left\{ \partial _{u} \partial _{v} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \Pi _{\alpha }(u)\,\mathrm{d}u\, \Pi _{\beta }(v)\, \mathrm{d}v \right. \\&\quad -\, \partial _{u} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \Pi _{\alpha }(u)\,\mathrm{d}u\, \mathrm{d}\Pi _{-1\slash 2}(v)\\&\quad -\,\partial _{v} \Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{-1\slash 2}(u)\, \Pi _{\beta }(v)\, \mathrm{d}v\\&\quad \left. +\,\Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)\, \mathrm{d}\Pi _{-1\slash 2}(u)\, \mathrm{d}\Pi _{-1\slash 2}(v)\right\} . \end{aligned}$$
Here and in similar integrals in Sect. 6, it is understood that the integration in \(\mathrm{d}u\) and \(\mathrm{d}v\) is only over \((-1,1)\).
Proof of Proposition 2.3
Item (i) is just (3). To prove the remaining items, we combine Theorem 2.1, Lemma 2.2, and symmetries of the quantity \(\Psi ^{\alpha ,\beta }_{E}(t,\theta ,\varphi ,u,v)\), its derivatives in \(u\) and \(v\), and the measures involved. We give further details in the case of (ii), leaving similar proofs of (iii) and (iv) to the reader.
Assume that \(-1 < \alpha < -1/2 \le \beta \). Since \(\mathrm{d}\Pi _{\beta }\) is a symmetric probability measure on \([-1,1]\) and has no atom at 0, formula (7) reduces to
Then, expressing \(\Psi ^{\alpha ,\beta }_{E}\) via \(\Psi ^{\alpha ,\beta }\) and making use of the symmetry of \(\mathrm{d}\Pi _{\beta }\), we see that
In \(I_1\) we integrate by parts in the \(u\) variable, which is legitimate in view of Lemma 2.2. Observe that the integrand in \(I_1\) vanishes for \(u=1\) and that \(\Pi _{\alpha }(0)=0\). We get
Inserting the definition of the symmetrization \(\Psi _E^{\alpha ,\beta }\), one easily finds that
The conclusion follows. \(\square \)
Remark 2.4
All the representations of \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) contained in Proposition 2.3 are positive in the sense that each of the double integrals [there is one of these in (i), two in (ii) and in (iii), and four in (iv)] is nonnegative.
3 Preparatory Results
In this section, we gather various technical results, altogether forming a transparent and convenient method of proving standard estimates for kernels defined via the Jacobi–Poisson kernel. The essence of this technique is a uniform way of handling double integrals against products of measures of type \(\mathrm{d}\Pi _{\gamma }\) and \(\Pi _{\gamma }(u)\, \mathrm{d}u\). The resulting expressions contain only elementary functions and are relatively simple.
The result below, which is a generalization of [28, Lemma 4.3], plays a crucial role in our method to prove kernel estimates. It provides a link from estimates emerging from the integral representation of \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\), see Proposition 2.3, to the standard estimates related to the space of homogeneous type \(([0,\pi ], \mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\).
Lemma 3.1
Let \(\alpha ,\beta > -1\). Assume that \(\xi _1,\xi _2,\kappa _1,\kappa _2 \ge 0\) are fixed and such that \(\alpha +\xi _1+\kappa _1, \, \beta +\xi _2+\kappa _2 \ge -1/2\). Then, uniformly in \(\theta ,\varphi \in [0,\pi ], \theta \ne \varphi \),
Note that for any fixed \(\alpha ,\beta > -1\), the \(\mu _{\alpha ,\beta }\) measure of the interval \(B(\theta ,|\theta -\varphi |)\) can be described as follows, see [28, Lemma 4.2]:
$$\begin{aligned} \mu _{\alpha ,\beta }\big ( B(\theta ,|\theta -\varphi |)\big ) \simeq |\theta -\varphi |\, (\theta +\varphi )^{2\alpha +1}\, (2\pi - \theta - \varphi )^{2\beta +1}, \qquad \theta ,\varphi \in [0,\pi ]. \end{aligned}$$
Notice also that the right-hand side of the estimate in Lemma 3.1 is always larger than the positive constant \(1/\mu _{\alpha ,\beta }([0,\pi ])\). This fact will be used subsequently without further mention.
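The comparability in [28, Lemma 4.2], which we read as \(\mu _{\alpha ,\beta }(B(\theta ,|\theta -\varphi |)) \simeq |\theta -\varphi |(\theta +\varphi )^{2\alpha +1}(2\pi -\theta -\varphi )^{2\beta +1}\), can be probed numerically. The sketch below is ours; it uses the density \((\sin \frac{\theta }{2})^{2\alpha +1}(\cos \frac{\theta }{2})^{2\beta +1}\) and sample parameters \(\alpha ,\beta \ge -1/2\), for which the density is bounded and the midpoint rule is unproblematic.

```python
import math

ALPHA, BETA = 0.3, 1.0  # sample parameters with bounded density

def mu_ball(theta, r, n=20000):
    """μ_{α,β} of (θ-r, θ+r) ∩ [0,π] by the midpoint rule."""
    lo, hi = max(0.0, theta - r), min(math.pi, theta + r)
    h = (hi - lo) / n
    total = 0.0
    for i in range(n):
        x = lo + (i + 0.5) * h
        total += (math.sin(x / 2) ** (2 * ALPHA + 1)
                  * math.cos(x / 2) ** (2 * BETA + 1) * h)
    return total

def ball_comparator(theta, phi):
    """Right-hand side of the two-sided bound, without the implicit constants."""
    r = abs(theta - phi)
    return (r * (theta + phi) ** (2 * ALPHA + 1)
            * (2 * math.pi - theta - phi) ** (2 * BETA + 1))
```

Over a grid of pairs \((\theta ,\varphi )\), including points near \(0\), near \(\pi \), and nearly coinciding points, the ratio of the two sides varies only within a bounded range, which is what \(\simeq \) asserts.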
To prove Lemma 3.1, we need item (b) in the simple lemma below.
Lemma 3.2
Let \(\kappa \ge 0\) and \(\gamma \) and \(\nu \) be such that \(\gamma > \nu +1/2 \ge 0\). Then
(a)
$$\begin{aligned}&\int \frac{\mathrm{d}\Pi _{\nu }(s)}{(D-Bs)^{\kappa }(A-Bs)^{\gamma }} \simeq \frac{1}{(D-B)^{\kappa } A^{\nu +1/2} (A-B)^{\gamma -\nu -1/2}} \end{aligned}$$
uniformly for \(0 \le B < A \le D\);
(b)
$$\begin{aligned} \int \frac{\mathrm{d}\Pi _{\nu +\kappa }(s)}{(A-Bs)^{\gamma }} \lesssim \frac{1}{A^{\nu +1\slash 2}(A-B)^{\gamma -\nu -1/2}}, \quad 0 \le B < A. \end{aligned}$$
Proof
Part (a) is proved in [29, Appendix]. Part (b) can easily be deduced from (a) since the integral to be estimated is controlled by the same integral with \(\kappa =0\). \(\square \)
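Part (b) lends itself to a quick numerical illustration. The sketch below is ours; it fixes sample values \(\nu =1/2\), \(\kappa =1\), \(\gamma =3/2\) (so that \(\gamma > \nu +1/2\)), takes the classical density \(\mathrm{d}\Pi _{\nu +\kappa }(s) = \frac{\Gamma (\nu +\kappa +1)}{\sqrt{\pi }\,\Gamma (\nu +\kappa +1/2)}(1-s^2)^{\nu +\kappa -1/2}\,\mathrm{d}s\), and checks that the left-hand side stays below a fixed multiple of the right-hand side as \(B\) approaches \(A\).

```python
import math

NU, KAPPA, GAMMA = 0.5, 1.0, 1.5  # sample values with GAMMA > NU + 1/2

def lhs(A, B, n=40000):
    """∫_{-1}^{1} dΠ_{ν+κ}(s) / (A - Bs)^γ by the midpoint rule."""
    a = NU + KAPPA
    c = math.gamma(a + 1) / (math.sqrt(math.pi) * math.gamma(a + 0.5))
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        s = -1.0 + (i + 0.5) * h
        total += (1 - s * s) ** (a - 0.5) / (A - B * s) ** GAMMA * h
    return c * total

def rhs(A, B):
    """The majorant A^{-(ν+1/2)} (A-B)^{-(γ-ν-1/2)} from part (b)."""
    return 1.0 / (A ** (NU + 0.5) * (A - B) ** (GAMMA - NU - 0.5))
```

Note how the bound captures the blow-up as \(B \uparrow A\): for these sample values the ratio of the two sides stays of order one on the whole range \(0 \le B < A\).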
Proof of Lemma 3.1
The reasoning is a combination of the arguments given in the proofs of [30, Lemma 2.1] and [28, Lemma 4.3]. Observe that we may reduce the task to showing that
under the assumption \(\alpha +\kappa _1,\beta +\kappa _2 \ge -1/2\). Indeed, applying (9) with \(\alpha +\xi _1,\beta +\xi _2\) instead of \(\alpha ,\beta \), and then using (8), we obtain
To prove (9), it is convenient to distinguish two cases.
Case 1 \(\alpha ,\beta \in (-1,-1\slash 2)\). Taking into account the estimates, see [28, (21)],
where \(\theta ,\varphi \in [0,\pi ]\), \(u,v \in [-1,1]\), and using the fact that \(\mathrm{d}\Pi _{\alpha +\kappa _1}\) and \(\mathrm{d}\Pi _{\beta +\kappa _2}\) are finite, we get
Then using the inequalities \(|\theta -\varphi | \le \theta + \varphi \) and \(|\theta -\varphi | \le \pi - \theta + \pi - \varphi \) together with (8), we obtain (9).
Case 2 At least one of the parameters \(\alpha ,\beta \) is in \([-1\slash 2,\infty )\), say \(\beta \ge -1/2\). Proceeding as in the proof of [28, Lemma 4.3] but applying Lemma 3.2 (b) instead of [28, Lemma 4.4] to the integral against \(\mathrm{d}\Pi _{\beta +\kappa _2}\), we see that
When \(\alpha \ge -1\slash 2\), another application of Lemma 3.2 (b) leads to (9), see the proof of [28, Lemma 4.3]. If \(\alpha \in (-1,-1/2)\), we can apply the arguments from Case 1, getting
Now using (8), we arrive at the desired conclusion.
The proof of Lemma 3.1 is complete. \(\square \)
The remaining part of this section contains various technical results, which will allow us to control the relevant kernels by means of Lemma 3.1. To state the next lemma and also for further use, we introduce the following notation. We will omit the arguments and write briefly \(\mathfrak {q}\) instead of \(q(\theta ,\varphi ,u,v)\), when it does not lead to confusion. For a given parameter \(\lambda \in \mathbb {R}\), we define the auxiliary function
$$\begin{aligned} \Psi ^{\lambda }(t,\mathfrak {q}) = \frac{\sinh \frac{t}{2}}{\big ( \cosh \frac{t}{2} - 1 + \mathfrak {q}\big )^{\lambda }}, \end{aligned}$$
so that \(\Psi ^{\alpha ,\beta }(t,\theta ,\varphi ,u,v)= c_{\alpha ,\beta } \Psi ^{\alpha +\beta +2}(t,\mathfrak {q})\); see (5).
Lemma 3.3
Let \(\lambda \in \mathbb {R}, M,N \in \mathbb {N}=\{0,1,2,\ldots \}\) and \(K,R,L \in \{0, 1\}\) be fixed. Then
uniformly in \(t\in (0,1], \theta ,\varphi \in [0,\pi ]\) and \(u,v \in [-1,1]\).
To prove this lemma, we need two preparatory results. One of them is Faà di Bruno's formula for the \(N\)th derivative, \(N \ge 1\), of the composition of two functions (see [21] for related references and interesting historical remarks). With \(D\) denoting the ordinary derivative, it reads
$$\begin{aligned} D^N (g \circ f) = \sum \frac{N!}{j_1! \cdots j_N!}\, \big ( D^{j_1+\cdots +j_N} g \big ) \circ f \cdot \prod _{i=1}^{N} \left( \frac{D^i f}{i!}\right) ^{j_i}, \end{aligned}$$
where the summation runs over all \(j_1,\ldots ,j_N \ge 0\) such that \(j_1+2j_2+\cdots +N j_N = N\). Further, in the proof of Lemma 3.3, we will make use of the following bounds given in [28].
Lemma 3.4
[28, Lemma 4.5] For all \(\theta ,\varphi \in [0,\pi ]\) and \(u,v \in [-1,1]\), one has
Proof of Lemma 3.3
Given \(\lambda \in \mathbb {R}\), we introduce the auxiliary function
$$\begin{aligned} \widetilde{\Psi }^{\lambda }(t,\mathfrak {q}) = \frac{1}{\big ( \cosh \frac{t}{2} - 1 + \mathfrak {q}\big )^{\lambda }}. \end{aligned}$$
We first reduce our task to showing the estimate
for \(t \in (0,1], \theta ,\varphi \in [0,\pi ]\) and \(u,v\in [-1,1]\); here \(\lambda \in \mathbb {R}, N \in \mathbb {N}\) and \(K,R,L \in \{0, 1\}\) are fixed.
Observe that
$$\begin{aligned} \Psi ^{\lambda }(t,\mathfrak {q}) = c_{\lambda }\, \partial _t\, \widetilde{\Psi }^{\lambda -1}(t,\mathfrak {q}) \quad \text {for } \lambda \ne 1, \qquad \Psi ^{1}(t,\mathfrak {q}) = 2\, \partial _t \log \Big ( \cosh \frac{t}{2} - 1 + \mathfrak {q}\Big ), \end{aligned}$$
where \(c_\lambda \) is a constant, possibly negative. Using Faà di Bruno’s formula (10) with \(f(t)=\cosh \frac{t}{2} - 1 + \mathfrak {q}\) and either \(g(x)=x^{-\lambda +1}\) or \(g(x)=\log x\), we obtain
where the \(C_{\lambda ,j}\) are constants, possibly zero. Differentiating these identities with respect to \(\theta ,\varphi ,u,v\) and then applying (11) and the relations
we see that
Now by the boundedness of \(\mathfrak {q}\) and the inequality
forced by the constraint \(j_1+\cdots + (M+1)j_{M+1}=M+1\), we get the asserted estimate. Thus it remains to prove (11).
We assume that \(N\ge 1\). The simpler case \(N = 0\) is left to the reader. Taking into account the relations
see [28, Section 4], and using Faà di Bruno’s formula with \(f(\theta ) = \cosh \frac{t}{2} - 1 + \mathfrak {q}\) and \(g(x) = x^{-\lambda }\), we get
where the \(c_{\lambda ,j}\) are constants. Further, keeping in mind that \(L,R,K \in \{ 0,1 \}\) and applying repeatedly Leibniz’ rule, we see that \(\partial _\varphi ^L \partial _\theta ^N \widetilde{\Psi }^{\lambda }(t,\mathfrak {q})\) is a sum of terms of the form constant times
where the indices run over the set described by the conditions \(j_i \ge 0, j_1+\cdots + Nj_{N}=N, l_1, l_2, l_3 \ge 0, l_1+l_2+l_3=L\), and the exponents of \(\mathfrak {q}- 1\) and \(\partial _\theta \mathfrak {q}\) are nonnegative. Similarly, \(\partial _v^R \partial _\varphi ^L \partial _\theta ^N \widetilde{\Psi }^{\lambda }(t,\mathfrak {q})\) is a sum of terms of the form constant times
where also \(r_1, \ldots , r_5 \ge 0, r_1+\cdots +r_5=R, l_1 + l_2 \ge r_2, l_3 \ge r_5\). Finally, since the derivative \(\partial _u \partial _v \mathfrak {q}\) vanishes, \(\partial _u^K \partial _v^R \partial _\varphi ^L \partial _\theta ^N \widetilde{\Psi }^{\lambda }(t,\mathfrak {q})\) is a sum of terms of the form constant times
Here we must add the conditions \(k_1, \ldots , k_5 \ge 0, k_1+\cdots +k_5=K\), and replace \(l_1 + l_2 \ge r_2, l_3 \ge r_5\) by \(l_1 + l_2 \ge r_2 + k_2, l_3 \ge r_5 + k_5\). We shall estimate all the factors in this product from above. Since \(t \le 1\), we can replace \(\cosh \frac{t}{2}-1+\mathfrak {q}\) by \(t^2+\mathfrak {q}\). The quantities \(\mathfrak {q}\) and \(\partial _\varphi \partial _\theta \mathfrak {q}\) are bounded. Further, we apply Lemma 3.4 to get
To deal with the resulting exponent of \(1/(t^2 + \mathfrak {q})\), we observe that
cf. (12). Using also the estimates
we infer that
Notice that \(2k_1+k_2+k_4 \in \{0,K,2K \}\), and similarly \(2r_1+r_2+r_4 \in \{0,R,2R \}\). This observation leads directly to (11).
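The set membership just claimed is a finite combinatorial fact and can be confirmed by brute-force enumeration; the following sketch (ours) lists all values of \(2k_1+k_2+k_4\) over nonnegative \(k_1,\ldots ,k_5\) with \(k_1+\cdots +k_5=K\in \{0,1\}\).

```python
from itertools import product

def attainable(K):
    """All values of 2k1 + k2 + k4 over k_i >= 0 with k1 + ... + k5 = K."""
    return {2 * k[0] + k[1] + k[3]
            for k in product(range(K + 1), repeat=5)
            if sum(k) == K}
```

For \(K=0\) every \(k_i\) vanishes, and for \(K=1\) exactly one \(k_i\) equals \(1\); in both cases the attainable values are contained in \(\{0,K,2K\}\), and the same argument applies verbatim to \(2r_1+r_2+r_4\).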
The proof of Lemma 3.3 is complete. \(\square \)
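Faà di Bruno's formula (10), used repeatedly in the proof above, is easy to sanity-check numerically. The sketch below is ours: it enumerates the index tuples with \(j_1+2j_2+\cdots +Nj_N=N\) and compares the resulting \(N\)th derivative of \(g\circ f\) with a finite-difference approximation, for the sample choice \(f=\sin \), \(g=\exp \).

```python
import math
from itertools import product

def faa_di_bruno(N, g_derivs, f_derivs, x):
    """D^N (g∘f)(x) = Σ N!/(j_1!…j_N!) g^{(j_1+…+j_N)}(f(x)) Π (f^{(i)}(x)/i!)^{j_i},
    summed over j_1 + 2 j_2 + … + N j_N = N."""
    total = 0.0
    for js in product(range(N + 1), repeat=N):
        if sum((i + 1) * j for i, j in enumerate(js)) != N:
            continue
        coeff = float(math.factorial(N))
        for i, j in enumerate(js):
            coeff /= math.factorial(j) * math.factorial(i + 1) ** j
        term = g_derivs(sum(js), f_derivs(0, x))
        for i, j in enumerate(js):
            term *= f_derivs(i + 1, x) ** j
        total += coeff * term
    return total

def f_derivs(k, x):
    """k-th derivative of sin: sin, cos, -sin, -cos, ..."""
    return math.sin(x + k * math.pi / 2)

def g_derivs(k, y):
    """Every derivative of exp is exp."""
    return math.exp(y)
```

For \(N=3\) this reproduces the familiar expansion \((g\circ f)''' = g'''(f)\,f'^3 + 3 g''(f)\,f' f'' + g'(f)\,f'''\), with coefficients \(1,3,1\) produced by the tuples \((3,0,0)\), \((1,1,0)\), \((0,0,1)\).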
Define
$$\begin{aligned} \mathrm{d}\Pi _{\alpha , 0} := \mathrm{d}\Pi _{-1\slash 2} \quad \text {and} \quad \mathrm{d}\Pi _{\alpha , 1}(u) := |\Pi _{\alpha }(u)|\, \mathrm{d}u \quad \text {for } -1< \alpha < -1\slash 2, \end{aligned}$$
and similarly for \(\mathrm{d}\Pi _{\beta , R}\).
Corollary 3.5
Let \(M,N \in \mathbb {N}\) and \(L \in \{0, 1\}\) be fixed. The following estimates hold uniformly in \(t\in (0,1]\) and \(\theta ,\varphi \in [0,\pi ]\):
(i)
If \(\alpha ,\beta \ge -1\slash 2\), then
$$\begin{aligned} \big | \partial _\varphi ^L \partial _\theta ^N \partial _t^M H_{t}^{\alpha ,\beta }(\theta ,\varphi ) \big | \lesssim \iint \frac{\mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta }(v)}{(t^2 + \mathfrak {q})^{ \alpha + \beta + 3\slash 2 + (L+N+M)\slash 2 }}. \end{aligned}$$
(ii)
If \(-1 < \alpha < -1\slash 2 \le \beta \), then
$$\begin{aligned} \big | \partial _\varphi ^L \partial _\theta ^N \partial _t^M H_{t}^{\alpha ,\beta }(\theta ,\varphi ) \big |&\lesssim 1 + \sum _{K=0,1} \sum _{k=0,1,2} \left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{Kk} \\&\quad \times \iint \frac{\mathrm{d}\Pi _{\alpha , K}(u)\, \mathrm{d}\Pi _{\beta }(v)}{(t^2 + \mathfrak {q})^{ \alpha + \beta + 3\slash 2 + (L+N+M + Kk)\slash 2 }}. \end{aligned}$$
(iii)
If \(-1 < \beta < -1\slash 2 \le \alpha \), then
$$\begin{aligned} \big | \partial _\varphi ^L \partial _\theta ^N \partial _t^M H_{t}^{\alpha ,\beta }(\theta ,\varphi ) \big |&\lesssim 1 + \sum _{R=0,1} \sum _{r=0,1,2} \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{Rr} \\&\quad \times \iint \frac{\mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta , R}(v)}{(t^2 + \mathfrak {q})^{ \alpha + \beta + 3\slash 2 + (L+N+M +Rr)\slash 2 }}. \end{aligned}$$
(iv)
If \(-1 < \alpha ,\beta < -1\slash 2\), then
$$\begin{aligned} \big | \partial _\varphi ^L \partial _\theta ^N \partial _t^M H_{t}^{\alpha ,\beta }(\theta ,\varphi ) \big |&\lesssim 1 + \sum _{K,R=0,1} \sum _{k,r=0,1,2} \left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{Kk} \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{Rr} \\&\quad \times \iint \frac{\mathrm{d}\Pi _{\alpha , K}(u)\, \mathrm{d}\Pi _{\beta , R}(v)}{(t^2 + \mathfrak {q})^{ \alpha + \beta + 3\slash 2 + (L+N+M + Kk +Rr)\slash 2 }}. \end{aligned}$$
Proof
All the bounds are direct consequences of the equality (1), Proposition 2.3, Lemma 2.2, and the estimate from Lemma 3.3 (specified to \(\lambda = \alpha + \beta + 2\)). Here passing with the differentiation in \(t, \theta \) or \(\varphi \) under integrals against \(\mathrm{d}\Pi _{\gamma }, \gamma \ge -1/2\), or \(\Pi _{\gamma }(u)\, \mathrm{d}u, -1 < \gamma < -1/2\), can easily be justified with the aid of Lemma 3.3 and the dominated convergence theorem. \(\square \)
Lemma 3.6
Let \(\gamma \in \mathbb {R}\) and \(\eta \ge 0\) be fixed. Then
uniformly in \(0 < \rho \le 2\).
Proof
This is elementary. For \(\gamma =0\), one has
\(\square \)
The next lemma will be frequently used in Sect. 4 to prove the relevant kernel estimates. Only the cases \(p\in \{ 1,2,\infty \}\) will be needed for our purposes. Other values of \(p\) are also of interest, but in connection with operators not considered in this paper.
Lemma 3.7
Let \(K,R \in \{ 0,1 \}, k,r \in \{ 0,1,2 \}, W \ge 1, s\ge 0\), and \(1 \le p \le \infty \) be fixed. Consider a function \(\Upsilon ^{\alpha ,\beta }_s(t,\theta ,\varphi )\) defined on \((0,1) \times [0,\pi ] \times [0,\pi ]\) in the following way:
(i)
For \(\alpha ,\beta \ge -1\slash 2\),
$$\begin{aligned} \Upsilon ^{\alpha ,\beta }_s(t,\theta ,\varphi ) := \iint \frac{\mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta }(v)}{(t^2 + \mathfrak {q})^{ \alpha +\beta +3\slash 2 + W\slash (2p) + s\slash 2 }}. \end{aligned}$$
(ii)
For \(-1 < \alpha < -1\slash 2 \le \beta \),
$$\begin{aligned} \Upsilon ^{\alpha ,\beta }_s(t,\theta ,\varphi ) := \left( \sin \frac{\theta }{2}+ \sin \frac{\varphi }{2}\right) ^{Kk} \iint \frac{\mathrm{d}\Pi _{\alpha , K}(u)\, \mathrm{d}\Pi _{\beta }(v)}{(t^2 + \mathfrak {q})^{ \alpha +\beta +3\slash 2 + W\slash (2p) +Kk\slash 2 + s\slash 2 }}. \end{aligned}$$
(iii)
For \(-1 < \beta < -1\slash 2 \le \alpha \),
$$\begin{aligned} \Upsilon ^{\alpha ,\beta }_s(t,\theta ,\varphi ) := \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{Rr} \iint \frac{\mathrm{d}\Pi _{\alpha }(u)\, \mathrm{d}\Pi _{\beta , R}(v)}{(t^2 + \mathfrak {q})^{ \alpha +\beta +3\slash 2 + W\slash (2p) +Rr\slash 2 + s\slash 2 }}. \end{aligned}$$
(iv)
For \(-1 < \alpha ,\beta < -1\slash 2\),
$$\begin{aligned} \Upsilon ^{\alpha ,\beta }_s(t,\theta ,\varphi ) :=&\left( \sin \frac{\theta }{2}+\sin \frac{\varphi }{2}\right) ^{Kk} \left( \cos \frac{\theta }{2}+ \cos \frac{\varphi }{2}\right) ^{Rr} \\&\quad \times \iint \frac{\mathrm{d}\Pi _{\alpha , K}(u)\, \mathrm{d}\Pi _{\beta , R}(v)}{(t^2 + \mathfrak {q})^{ \alpha +\beta +3\slash 2 + W\slash (2p) +Kk\slash 2 + Rr\slash 2 + s\slash 2 }}. \end{aligned}$$
Then the estimate
holds uniformly in \(\theta ,\varphi \in [0,\pi ], \theta \ne \varphi \).
Proof
It is enough to prove the desired estimate without the term 1 in the left-hand side. Further, since \(|\theta -\varphi |^2 \lesssim \mathfrak {q}\), it suffices to consider the case \(s=0\). We prove the estimate when \(-1 <\alpha ,\beta <-1 \slash 2\). The remaining cases are left to the reader; they are simpler, since then \(\alpha +\beta +3\slash 2 > 0\) and one needs Lemma 3.6 only with \(\gamma > 0\).
We first assume that \(p<\infty \). Using Minkowski’s integral inequality and then Lemma 3.6 with \(\gamma = p(\alpha +\beta +3\slash 2 + Kk\slash 2 + Rr\slash 2), \eta = W-1\) and \(\rho =\mathfrak {q}\), we obtain
Now an application of Lemma 3.1 (specified to \(\xi _1=Kk\slash 2, \kappa _1=-\alpha -1\slash 2\) if \(K=0\) and \(\kappa _1=1 - k\slash 2\) if \(K=1, \xi _2=Rr\slash 2, \kappa _2=-\beta -1\slash 2\) if \(R=0\) and \(\kappa _2=1 - r\slash 2\) if \(R=1\)) gives the desired estimate for the expression emerging from the first term in the last integral. As for the remaining two expressions, we observe that \(1 \lesssim \log \left( 1 + \mathfrak {q}^{-1/2}\right) \lesssim \log \left( 1 + |\theta -\varphi |^{-1} \right) \). Moreover, as can be seen from (8), there exists an \(\varepsilon = \varepsilon (\alpha ,\beta ) > 0\) such that
Since the measures \(\mathrm{d}\Pi _{\alpha , K}\) and \(\mathrm{d}\Pi _{\beta , R}\) are finite, the conclusion follows.
The case \(p=\infty \) can be justified in a similar way by using in the reasoning above the estimate
instead of Lemma 3.6. \(\square \)
The next lemma and corollaries are long-time counterparts of Corollary 3.5 and Lemma 3.7.
Lemma 3.8
Assume that \(M,N \in {\mathbb {N}}\) and \(L \in \{0,1\}\) are fixed. Given \(\alpha ,\beta > -1\), there exists an \(\epsilon = \epsilon (\alpha ,\beta )>0\) such that
uniformly in \(t \ge 1\) and \(\theta ,\varphi \in [0,\pi ]\). Moreover, one can take \(\epsilon = (\alpha +\beta +2) \wedge 1\).
To prove this, it is more convenient to employ the series representation of \(H_t^{\alpha ,\beta }(\theta ,\varphi )\) rather than the formulas from Proposition 2.3.
Proof of Lemma 3.8
For \(\alpha ,\beta >-1, t>0\) and \(\theta ,\varphi \in [0,\pi ]\), we have
Denote the sum in (13) by \(S\). To estimate \(S\) and its derivatives, we will need suitable bounds for \(\partial _{\theta }^{N}{\mathcal {P}}_n^{\alpha ,\beta }(\theta ), N \ge 0\). It is known (see [33, (7.32.2)]) that
Combining this with the identity (cf. [33, (4.21.7)])
we see that for each \(N \ge 0\),
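For the reader's convenience, we recall the classical identity [33, (4.21.7)] invoked above, stated in the standard normalization of the Jacobi polynomials on \([-1,1]\) (the trigonometric polynomials \({\mathcal {P}}_n^{\alpha ,\beta }\) used in this paper differ from \(P_n^{(\alpha ,\beta )}\) by the substitution \(x = \cos \theta \) and a normalizing factor):
$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}x}\, P_n^{(\alpha ,\beta )}(x) = \frac{n+\alpha +\beta +1}{2}\, P_{n-1}^{(\alpha +1,\beta +1)}(x). \end{aligned}$$
Iterating this identity reduces bounds for higher-order derivatives to bounds for Jacobi polynomials with shifted parameters.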
In view of these facts, the series in (13) can be repeatedly differentiated term by term in \(t,\theta \) and \(\varphi \), and we get the bounds
uniformly in \(t \ge 1\) and \(\theta ,\varphi \in [0,\pi ]\).
Since the other term in (13) is trivial to handle, the conclusion follows. \(\square \)
Corollary 3.9
Let \(\alpha ,\beta > -1, M,N \in {\mathbb {N}}, L \in \{0,1\}, W \ge 1\), and \(1 \le p \le \infty \) be fixed. Then
excluding the case when simultaneously \(\alpha +\beta +1=0\), \(M=N=L=0\), and \(p<\infty \).
A strengthened special case of Corollary 3.9 will be needed when we estimate kernels associated with multipliers of Laplace–Stieltjes type.
Corollary 3.10
Let \(\alpha ,\beta > -1\) and \(L,N \in \{0,1\}\) be fixed. Then
4 Kernel Estimates
Let \(\mathbb {B}\) be a Banach space, and let \(K(\theta ,\varphi )\) be a kernel defined on \([0,\pi ]\times [0,\pi ]\backslash \{ (\theta ,\varphi ):\theta =\varphi \}\) and taking values in \(\mathbb {B}\). We say that \(K(\theta ,\varphi )\) is a standard kernel in the sense of the space of homogeneous type \(([0,\pi ], \mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\) if it satisfies the so-called standard estimates, i.e., the growth estimate
and the smoothness estimates
Notice that in these formulas, the ball (interval) \(B(\theta ,|\theta -\varphi |)\) can be replaced by \(B(\varphi ,|\varphi -\theta |)\), in view of the doubling property of \(\mu _{\alpha ,\beta }\).
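For orientation, the \(\mu _{\alpha ,\beta }\) measure of such an interval admits a well-known explicit description (as we recall it from [28]): uniformly in \(\theta \in [0,\pi ]\) and \(0 < r \le \pi \),
$$\begin{aligned} \mu _{\alpha ,\beta }\big ( B(\theta ,r) \big ) \simeq r\, (\theta +r)^{2\alpha +1} (\pi -\theta +r)^{2\beta +1}, \end{aligned}$$
which in particular makes the doubling property evident.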
We will show that the following kernels, with values in properly chosen Banach spaces \(\mathbb {B}\), satisfy the standard estimates:
(I)
The kernel associated with the Jacobi–Poisson semigroup maximal operator,
$$\begin{aligned} \mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi ) = \big \{H_t^{\alpha ,\beta }(\theta ,\varphi )\big \}_{t>0}, \quad \mathbb {B}=\mathbb {X} \subset L^{\infty }(\mathbb {R}_+,\mathrm{d}t), \end{aligned}$$where \(\mathbb {X}\) is the closed separable subspace of \(L^{\infty }(\mathbb {R}_+,\mathrm{d}t)\) consisting of all continuous functions \(f\) on \((0,\infty )\) which have finite limits as \(t \rightarrow 0^+\) and as \(t \rightarrow \infty \). Observe that \(\big \{H_t^{\alpha ,\beta }(\theta ,\varphi )\big \}_{t>0} \in \mathbb {X}\), for \(\theta \ne \varphi \), as can be seen from Proposition 2.3 and the bound \(\mathfrak {q}\gtrsim (\theta -\varphi )^2\), and the series representation (see the proof of Lemma 3.8).
(II)
The kernels associated with Riesz–Jacobi transforms,
$$\begin{aligned} R_N^{\alpha ,\beta }(\theta ,\varphi ) = \frac{1}{\Gamma (N)} \int _0^{\infty } \partial _\theta ^N H_t^{\alpha ,\beta }(\theta ,\varphi ) t^{N -1}\, \mathrm{d}t, \quad \mathbb {B}=\mathbb {C}, \end{aligned}$$where \(N = 1,2,\ldots \).
(III)
The kernels associated with mixed square functions,
$$\begin{aligned} \mathfrak {G}^{\alpha ,\beta }_{M,N}(\theta ,\varphi ) = \big \{\partial _\theta ^N \partial _t^M H_t^{\alpha ,\beta }(\theta ,\varphi ) \big \}_{t>0}, \quad \mathbb {B} = L^2\big (\mathbb {R}_+,t^{2M+2N-1}\mathrm{d}t\big ), \end{aligned}$$where \(M,N = 0,1,2,\ldots \) are such that \(M+N>0\).
(IVa)
The kernels associated with Laplace transform type multipliers,
$$\begin{aligned} K^{\alpha ,\beta }_{\phi }(\theta ,\varphi ) = - \int _0^{\infty } \phi (t) \, \partial _t H_t^{\alpha ,\beta }(\theta ,\varphi ) \, \mathrm{d}t, \quad \mathbb {B}=\mathbb {C}, \end{aligned}$$where \(\phi \in L^{\infty }(\mathbb {R}_+,\mathrm{d}t)\).
(IVb)
The kernels associated with Laplace–Stieltjes transform type multipliers,
$$\begin{aligned} K^{\alpha ,\beta }_{\nu }(\theta ,\varphi ) = \int _{(0,\infty )} H_t^{\alpha ,\beta }(\theta ,\varphi )\, \mathrm{d}\nu (t), \quad \mathbb {B}=\mathbb {C}, \end{aligned}$$where \(\nu \) is a signed or complex Borel measure on \((0,\infty )\) with total variation \(|\nu |\) satisfying
$$\begin{aligned} \int _{(0,\infty )} \mathrm{e}^{-t \left| \frac{\alpha + \beta + 1}{2} \right| } \, \hbox {d}|\nu |(t) < \infty . \end{aligned}$$(18)
When \(K(\theta ,\varphi )\) is scalar-valued, i.e., \(\mathbb {B}=\mathbb {C}\), it is well known that the bounds (16) and (17) follow from the more convenient gradient estimate
We shall see that the same holds also in the vector-valued cases we consider. Then the derivatives in (19) are taken in the weak sense, which means that for any \(\mathtt v \in \mathbb {B}^*\),
and similarly for \(\partial _{\varphi }\). If these weak derivatives \(\partial _{\theta } K(\theta ,\varphi )\) and \(\partial _{\varphi } K(\theta ,\varphi )\) exist as elements of \(\mathbb {B}\) and their norms satisfy (19), the scalar-valued case applies and (16) and (17) follow.
The result below extends to all \(\alpha ,\beta > -1\) the estimates obtained in [28, Section 4] for the restricted range \(\alpha ,\beta \ge -1\slash 2\). Moreover, here we also consider multipliers of Laplace and Laplace–Stieltjes transform type, which were merely mentioned in [28] and which cover as a special case the imaginary powers of \(\mathcal {J}^{\alpha ,\beta }\) (or \({\mathcal {J}}^{\alpha ,\beta }\Pi _0\) when \(\alpha +\beta +1=0\)) investigated there.
Theorem 4.1
Let \(\alpha ,\beta > -1\). Then the kernels (I)–(III), (IVa), and (IVb) satisfy the standard estimates (15), (16), and (17) with \(\mathbb {B}\) as indicated above.
In the proof, we tacitly assume that passing the differentiation in \(\theta \) or \(\varphi \) under integrals against \(\mathrm{d}t\) or \(\mathrm{d}\nu (t)\) is legitimate. Indeed, such manipulations are easily verified by means of the dominated convergence theorem and the estimates obtained in Corollary 3.5 and Lemma 3.8.
Proof of Theorem 4.1
We treat each of the kernels separately.
The case of \(\varvec{\mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi )}\) We first deal with the growth condition. Clearly, it suffices to prove independently the two bounds emerging from (15) by choosing \(\mathbb {B} = L^{\infty } ( (1,\infty ), \mathrm{d}t )\) and \(\mathbb {B} = L^{\infty } ( (0,1), \mathrm{d}t )\). These, however, are immediate consequences of Corollary 3.9 (with \(M=N=L=0, p=\infty \)) and Corollary 3.5 (taken with \(M=N=L=0\)) combined with Lemma 3.7 (specified to \(p=\infty , s=0\)), respectively.
To obtain the smoothness estimates, we must verify that the weak derivatives \(\partial _{\theta } \mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi )\) and \(\partial _{\varphi } \mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi )\) exist in the sense of (20) and satisfy (19). In this case, \(\mathtt v \) is a complex measure on \([0,\infty ]\), and
It is enough to consider the derivative with respect to \(\theta \). By the dominated convergence theorem, which is applicable because of Lemma 3.8 and Corollary 3.5 together with the bound \(\mathfrak {q}\gtrsim (\theta -\varphi )^2\), we obtain
observe that \(\big \{ \partial _{\theta } H_t^{\alpha ,\beta }(\theta ,\varphi ) \big \}_{t>0} \in \mathbb {X}\) for \(\theta \ne \varphi \), as can be seen from Proposition 2.3 and Lemma 3.8. This identity implies that for \(\theta \ne \varphi \), the weak derivative \(\partial _{\theta } \mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi )\) exists and equals \(\big \{ \partial _{\theta } H_t^{\alpha ,\beta }(\theta ,\varphi ) \big \}_{t>0}\). To see that it also satisfies (19), we first consider large \(t\) and observe that the estimate
follows from Corollary 3.9 (specified to \(M=L=0, N=W=1, p = \infty \)). For small \(t\), we have
in view of Corollary 3.5 (with \(M=L=0, N=1\)) and Lemma 3.7 (taken with \(W=1, p=\infty , s=1\)).
The case of \(\varvec{R_N^{\alpha ,\beta }(\theta ,\varphi )}\) To prove the growth condition, it is enough to verify that
This, however, is a consequence of Corollary 3.9 (taken with \(M=L=0, W=N, p=1\)) and Corollary 3.5 (with \(M=L=0\)) combined with Lemma 3.7 (specified to \(W=N, p=1, s=0\)).
In order to show the gradient bound (19), it suffices to check that
This estimate follows by means of Corollary 3.9 (applied with \(M=0, p=1\)) and Corollary 3.5 (with \(M=0\)) together with Lemma 3.7 (specified to \(W=N, p=1, s=1\)).
The case of \(\varvec{\mathfrak {G}^{\alpha ,\beta }_{M,N}(\theta ,\varphi )}\) The growth condition is a straightforward consequence of Corollary 3.9 (with \(L=0, W=2M + 2N, p=2\)), Corollary 3.5 (with \(L=0\)) and Lemma 3.7 (taken with \(W=2M + 2N, p=2, s=0\)).
Next, we prove the gradient estimate (19), which amounts to
where \(\nabla _{\! \theta ,\varphi }\) is taken in the weak sense. This follows with the aid of Corollary 3.9 (with \(W=2M+2N, p=2\)), Corollary 3.5, and Lemma 3.7 (applied with \(W=2M + 2N, p=2, s=1\)); cf. the arguments given for the case \(\mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi )\) above.
The case of \(\varvec{K^{\alpha ,\beta }_{\phi }(\theta ,\varphi )}\) The growth bound is a direct consequence of the assumption \(\phi \in L^\infty (\mathbb {R}_+,\mathrm{d}t)\), Corollary 3.9 (specified to \(M=1, N=L=0, W=1, p=1\)), Corollary 3.5 (with \(M=1, N=L=0\)), and Lemma 3.7 (taken with \(W=1, p=1, s=0\)).
Since \(\phi \) is bounded, to prove the gradient estimate it is enough to verify that
Now applying Corollary 3.9 (with \(M=1, W=1, p=1\) and either \(N=1, L=0\) or \(N=0, L=1\)), Corollary 3.5 (specified to \(M=1\) and either \(N=1, L=0\) or \(N=0, L=1\)), and Lemma 3.7 (taken with \(W=1, p=1, s=1\)), we arrive at the desired bound.
The case of \(\varvec{K^{\alpha ,\beta }_{\nu }(\theta ,\varphi )}\) To show the growth condition, it is enough, by the assumption (18) concerning the measure \(\nu \), to check that
The first estimate above is an immediate consequence of Corollary 3.10 (applied with \(N=L=0\)). The remaining bound is part of the growth condition for \(\mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi )\), which is already justified.
Taking (18) into account, to verify the gradient estimate (19), it suffices to show that
Again, an application of Corollary 3.10 (with either \(N=1, L=0\) or \(N=0, L=1\)) produces the first bound. The second one is contained in the proof of the gradient estimate for \(\mathfrak {H}^{\alpha ,\beta }(\theta ,\varphi )\).
The proof of Theorem 4.1 is complete. \(\square \)
5 Calderón–Zygmund Operators
Let \(\mathbb {B}\) be a Banach space, and suppose that \(T\) is a linear operator assigning to each \(f\in L^2(\mathrm{d}\mu _{\alpha ,\beta })\) a strongly measurable \(\mathbb {B}\)-valued function \(Tf\) on \([0,\pi ]\). Then \(T\) is said to be a (vector-valued) Calderón–Zygmund operator in the sense of the space \(([0,\pi ],\mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\) associated with \(\mathbb {B}\) if
(A)
\(T\) is bounded from \(L^2(\mathrm{d}\mu _{\alpha ,\beta })\) to \(L^2_{\mathbb {B}}(\mathrm{d}\mu _{\alpha ,\beta })\),
(B)
there exists a standard \(\mathbb {B}\)-valued kernel \(K(\theta ,\varphi )\) such that
$$\begin{aligned} Tf(\theta ) = \int _0^{\pi } K(\theta ,\varphi ) f(\varphi )\, \mathrm{d}\mu _{\alpha ,\beta }(\varphi ), \quad \text {a.a.}\; \theta \notin \hbox {supp}f, \end{aligned}$$for \(f \in L^{\infty }([0,\pi ])\).
Here integration of \(\mathbb {B}\)-valued functions is understood in Bochner’s sense, and \(L^2_{\mathbb {B}}(\mathrm{d}\mu _{\alpha ,\beta })\) is the Bochner–Lebesgue space of all \(\mathbb {B}\)-valued \(\mathrm{d}\mu _{\alpha ,\beta }\)-square integrable functions on \([0,\pi ]\).
It is well known that a large part of the classical theory of Calderón–Zygmund operators remains valid, with appropriate adjustments, when the underlying space is of homogeneous type and the associated kernels are vector-valued, see for instance [31, 32]. In particular, if \(T\) is a Calderón–Zygmund operator in the sense of \(([0,\pi ],\mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\) associated with a Banach space \(\mathbb {B}\), then its mapping properties in weighted \(L^p\) spaces follow from the general theory.
Let
be the Jacobi–Poisson semigroup. For \(\alpha ,\beta > -1\) consider the following operators defined initially in \(L^2(\mathrm{d}\mu _{\alpha ,\beta })\):
(I)
The Jacobi–Poisson semigroup maximal operator
$$\begin{aligned} \mathcal {H}_*^{\alpha ,\beta }f = \big \Vert \mathcal {H}_t^{\alpha ,\beta }f \big \Vert _{L^{\infty }(\mathbb {R}_+,\mathrm{d}t)}. \end{aligned}$$
(II)
Riesz–Jacobi transforms of orders \(N=1,2,\ldots \),
$$\begin{aligned} R_N^{\alpha ,\beta }f = \sum _{n=1}^{\infty } \Big | n + \frac{\alpha +\beta +1}{2}\Big |^{-N} \big \langle f,{\mathcal {P}}_n^{\alpha ,\beta }\big \rangle _{\mathrm{d}\mu _{\alpha ,\beta }} \, \partial _{\theta }^N{\mathcal {P}}_n^{\alpha ,\beta }, \end{aligned}$$where \(\big \langle f,{\mathcal {P}}_n^{\alpha ,\beta }\big \rangle _{\mathrm{d}\mu _{\alpha ,\beta }}\) are the Fourier–Jacobi coefficients of \(f\).
(III)
Littlewood–Paley–Stein type mixed square functions
$$\begin{aligned} g_{M,N}^{\alpha ,\beta }f = \big \Vert \partial _{\theta }^{N}\partial _t^M \mathcal {H}_t^{\alpha ,\beta }f\big \Vert _{L^2(\mathbb {R}_+,t^{2M+2N-1}\mathrm{d}t)}, \end{aligned}$$where \(M,N = 0,1,2,\ldots \) and \(M+N>0\).
(IV)
Multipliers of Laplace and Laplace–Stieltjes transform type
$$\begin{aligned} M^{\alpha ,\beta }_{\mathfrak {m}}f = \sum _{n=0}^{\infty } \mathfrak {m}\left( \Big |n+\frac{\alpha +\beta +1}{2}\Big |\right) \big \langle f,{\mathcal {P}}_n^{\alpha ,\beta }\big \rangle _{\mathrm{d}\mu _{\alpha ,\beta }} {\mathcal {P}}_n^{\alpha ,\beta }, \end{aligned}$$where either \(\mathfrak {m}(z) = \int _0^{\infty } z \mathrm{e}^{-tz} \phi (t)\, \mathrm{d}t\) with \(\phi \in L^{\infty }(\mathbb {R}_+,\mathrm{d}t)\) or \(\mathfrak {m}(z) = \int _{(0,\infty )} \mathrm{e}^{-tz} \, \mathrm{d}\nu (t)\) for a signed or complex Borel measure \(\nu \) on \((0,\infty )\) whose total variation satisfies (18).
The formulas defining \(\mathcal {H}_*^{\alpha ,\beta }\) and \(g_{M,N}^{\alpha ,\beta }\) are understood pointwise and are actually valid for general functions \(f\) from weighted \(L^p\) spaces with Muckenhoupt weights. This is because for such \(f\), the integral defining \(\mathcal {H}_t^{\alpha ,\beta }f(\theta )\) is well defined and produces a smooth function of \((t,\theta )\in (0,\infty )\times [0,\pi ]\), see [28, Section 2]. The series defining \(R_N^{\alpha ,\beta }\) and \(M_{\mathfrak {m}}^{\alpha ,\beta }\) indeed converge in \(L^2(\mathrm{d}\mu _{\alpha ,\beta })\). This is clear for \(M_{\mathfrak {m}}^{\alpha ,\beta }\), since the values of \(\mathfrak {m}\) that occur here stay bounded; for \(R_N^{\alpha ,\beta }\), the convergence follows from [28, Lemma 3.1], see the relevant part of the proof of [28, Proposition 2.2].
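To illustrate how imaginary powers arise within the Laplace transform type class (a sketch; the precise relation to powers of \(\mathcal {J}^{\alpha ,\beta }\) depends on the normalization of the spectrum), fix \(\gamma \in \mathbb {R}\setminus \{0\}\) and take \(\phi (t) = t^{-i\gamma }\slash \Gamma (1-i\gamma )\), which is bounded since \(|t^{-i\gamma }| = 1\). Then, for \(z>0\),
$$\begin{aligned} \mathfrak {m}(z) = z \int _0^{\infty } \mathrm{e}^{-tz}\, \frac{t^{-i\gamma }}{\Gamma (1-i\gamma )}\, \mathrm{d}t = z \cdot \frac{\Gamma (1-i\gamma )\, z^{i\gamma -1}}{\Gamma (1-i\gamma )} = z^{i\gamma }, \end{aligned}$$
by the formula \(\int _0^{\infty } t^{s-1}\mathrm{e}^{-tz}\, \mathrm{d}t = \Gamma (s)\, z^{-s}\) applied with \(s = 1-i\gamma \).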
As a consequence of Theorem 4.1, we get the following result.
Theorem 5.1
Assume that \(\alpha ,\beta > -1\). The Riesz–Jacobi transforms and the multipliers of Laplace and Laplace–Stieltjes transform type are scalar-valued Calderón–Zygmund operators in the sense of the space \(([0,\pi ],\mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\). Furthermore, the Jacobi–Poisson semigroup maximal operator and the mixed square functions can be viewed as vector-valued Calderón–Zygmund operators in the sense of \(([0,\pi ],\mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\), associated with the Banach spaces \(\mathbb {B}=\mathbb {X}\) and \(\mathbb {B} = L^2(\mathbb {R}_+,t^{2M+2N-1}\mathrm{d}t)\), respectively.
Proof
The standard estimates are provided in all the cases by Theorem 4.1. Thus it suffices to verify \(L^2\) boundedness and kernel associations [conditions (A) and (B) above]. This, however, was essentially done in [28, Section 3], since the arguments given there are actually valid for all \(\alpha ,\beta > -1\) if combined with the estimates proved (in some cases implicitly) in Sect. 4. An exception here is the case of the Laplace and Laplace–Stieltjes type multipliers. In these cases, however, the boundedness in \(L^2\) is straightforward, and the kernel associations are justified according to the outline opening the proof of [28, Proposition 2.3], see [28, Section 3, pp. 732–733]. Since all the necessary ingredients are contained in [28] and in the present paper, we leave further details to interested readers. \(\square \)
Denote by \(A_p^{\alpha ,\beta }, 1 \le p < \infty \), the Muckenhoupt classes of weights related to the space \(([0,\pi ],\mathrm{d}\mu _{\alpha ,\beta },|\cdot |)\) (see [28, Section 1] for the definition).
Corollary 5.2
Let \(\alpha ,\beta > -1\). The Riesz–Jacobi transforms and the multipliers of Laplace and Laplace–Stieltjes type extend to bounded linear operators on \(L^p(w\mathrm{d}\mu _{\alpha ,\beta }), w \in A_p^{\alpha ,\beta }, 1<p<\infty \), and from \(L^1(w\mathrm{d}\mu _{\alpha ,\beta })\) to weak \(L^1(w\mathrm{d}\mu _{\alpha ,\beta }), w \in A_1^{\alpha ,\beta }\). The same boundedness properties hold for the Jacobi–Poisson semigroup maximal operator and the mixed square functions, viewed as scalar-valued sublinear operators.
Proof
The part concerning \(R_N^{\alpha ,\beta }\) and \(M_{\mathfrak {m}}^{\alpha ,\beta }\) is a direct consequence of Theorem 5.1 and the general theory. The remaining part follows by Theorem 5.1 and the arguments given in the proof of [28, Corollary 2.5]. \(\square \)
Remark 5.3
Elementary arguments, similar to those presented at the end of [8, Section 2], allow us to obtain unweighted \(L^p(\hbox {d}\mu _{\alpha ,\beta })\)-boundedness, \(1\le p \le \infty \), for the Laplace–Stieltjes transform type multipliers. The crucial fact needed in the reasoning is the estimate
which is a direct consequence of the identity \(\mathcal {H}_t^{\alpha ,\beta } \varvec{1} = \mathrm{e}^{-t\left| \frac{\alpha + \beta + 1}{2}\right| }\) and condition (18) concerning the measure \(\nu \); here \(\varvec{1}\) is the constant function equal to 1 on \([0,\pi ]\).
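The identity \(\mathcal {H}_t^{\alpha ,\beta } \varvec{1} = \mathrm{e}^{-t\left| \frac{\alpha + \beta + 1}{2}\right| }\) is immediate from the spectral definition of the semigroup: the system \(\{{\mathcal {P}}_n^{\alpha ,\beta }\}\) is orthonormal in \(L^2(\mathrm{d}\mu _{\alpha ,\beta })\) with \({\mathcal {P}}_0^{\alpha ,\beta }\) constant, so \(\big \langle \varvec{1},{\mathcal {P}}_n^{\alpha ,\beta }\big \rangle _{\mathrm{d}\mu _{\alpha ,\beta }} = 0\) for \(n \ge 1\) by orthogonality, and
$$\begin{aligned} \mathcal {H}_t^{\alpha ,\beta } \varvec{1} = \sum _{n=0}^{\infty } \mathrm{e}^{-t\left| n+\frac{\alpha +\beta +1}{2}\right| } \big \langle \varvec{1},{\mathcal {P}}_n^{\alpha ,\beta }\big \rangle _{\mathrm{d}\mu _{\alpha ,\beta }}\, {\mathcal {P}}_n^{\alpha ,\beta } = \mathrm{e}^{-t\left| \frac{\alpha +\beta +1}{2}\right| }. \end{aligned}$$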
6 Exact Behavior of the Jacobi–Poisson Kernel
We give another application of the representations in Proposition 2.3, which is interesting and important in its own right. We will describe in a sharp way the behavior of the kernels \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) and \(H_t^{\alpha ,\beta }(\theta ,\varphi )\). The result below extends sharp estimates for the Jacobi–Poisson kernel obtained in [29, Theorem A.1] under the restriction \(\alpha ,\beta \ge -1/2\).
Theorem 6.1
Let \(\alpha ,\beta > -1\). Then
uniformly in \(0 < t \le 1\) and \(\theta ,\varphi \in [0,\pi ]\), and
uniformly in \(t \ge 1\) and \(\theta ,\varphi \in [0,\pi ]\).
To prove this, we will need some technical results, one of which is Lemma 3.2 (a). Note that this lemma remains true if the integration is restricted to the subinterval \((1/2,1]\). This follows from the structure of \(\mathrm{d}\Pi _{\nu }\) and the fact that the integrand is positive and increasing.
Lemma 6.2
Let \(\tau >0\) be fixed. Then
uniformly in \(0<a \le b,c \le d\) satisfying \(a+d=b+c\).
Proof
We can assume that \(b\le c\). Then the right-hand side is independent of \(c\) and \(d\). In the left-hand side, we therefore replace \(c\) and \(d\) by \(c+s\) and \(d+s\), respectively, where \(s\ge b-c\). By differentiating, we see that the function \(s \mapsto -(c+s)^{-\tau } + (d+s)^{-\tau }\) is increasing. As a result, we need only consider the extreme case \(s=b-c\), which means proving the lemma for \(b=c\).
Writing \(h=b-a\) and letting \(f(x)=x^{-\tau }\), the left-hand side becomes the second difference \(f(a)-2f(a+h)+f(a+2h)\), which equals \(f''(\xi )h^2\) for some \(\xi \in (a,a+2h)\). Now if \(h>Ca\) for some large \(C=C(\tau )\), the inequality of the lemma is trivial, since the term \(a^{-\tau }\) dominates on the left-hand side. But if \(h\le Ca\), we have \(f''(\xi ) \simeq a^{-\tau -2}\), and the conclusion follows again. \(\square \)
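Alternatively, the second-difference step can be made explicit by integrating twice: with \(f(x)=x^{-\tau }\) and \(0 < h \le Ca\),
$$\begin{aligned} f(a)-2f(a+h)+f(a+2h) = \int _0^{h}\!\!\int _0^{h} f''(a+s+u)\, \mathrm{d}s\, \mathrm{d}u = \tau (\tau +1) \int _0^{h}\!\!\int _0^{h} (a+s+u)^{-\tau -2}\, \mathrm{d}s\, \mathrm{d}u \simeq h^2 a^{-\tau -2}, \end{aligned}$$
since \(a+s+u \simeq a\) in the region of integration.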
Let \(\sigma > 1\) be fixed. Then one easily verifies that
Proof of Theorem 6.1
We first prove the estimates for \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\). Among the four ranges of the type parameters distinguished in Proposition 2.3, it is enough to consider only two. Indeed, when \(\alpha ,\beta \ge -1/2\), the desired bounds are contained in [29, Theorem A.1], and the cases \(\beta < -1/2 \le \alpha \) and \(\alpha < -1/2 \le \beta \) are essentially the same. In what follows, we denote for \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\),
and
Notice that \(0 \le X,Y < 1\), and that \(Z\) is comparable, uniformly in \(0 < t \le 1\) and \(\theta ,\varphi \in [0,\pi ]\), with the expression describing the short-time behavior in Theorem 6.1; see the proof of [29, Theorem A.1]. Moreover, \(Z\) has the same long-time behavior as that asserted for \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\). Thus that part of the statement of Theorem 6.1 which deals with \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) can be written simply as
Case 1 \({-1<\alpha <-1/2\le \beta }.\) By Proposition 2.3,
One finds that the integral \(I_1\) is dominated (up to a multiplicative constant) by its restriction to the subsquare \((1/2,1]^2\) and that the essential contribution to \(I_2\) comes from integrating over \((1/2,1]^2\). In view of Lemma 2.2, the measures \(|\Pi _{\alpha }(u)|\,\mathrm{d}u\) and \(\mathrm{d}\Pi _{\alpha +1}\) are comparable on \((1/2,1]\), and we infer that
uniformly in \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\). Applying now Lemma 3.2 (a) to \(I_1\) twice, first to the integral against \(\mathrm{d}\Pi _{\beta }\), with the parameters \(\nu =\beta , \kappa =0, \gamma = \alpha +\beta +3, A=\cosh \frac{t}{2}-u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}, B= \cos \frac{\theta }{2}\cos \frac{\varphi }{2}\), and then to the resulting integral against \(\mathrm{d}\Pi _{\alpha +1}\), with the parameters \(\nu =\alpha +1, \kappa =\beta +1/2, \gamma = \alpha + 5/2, D=\cosh \frac{t}{2}, A = \cosh \frac{t}{2}-\cos \frac{\theta }{2}\cos \frac{\varphi }{2}, B=\sin \frac{\theta }{2}\sin \frac{\varphi }{2}\), we arrive at the bound
Applying once again Lemma 3.2 (a), this time to \(I_2\) and with the parameters \(\nu = \beta , \kappa = 0, \gamma = \alpha +\beta +2, A = \cosh \frac{t}{2}-\sin \frac{\theta }{2}\sin \frac{\varphi }{2}, B=\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\), we get
Estimating \(I_1\) from below is slightly more subtle. Notice that
here the integrand in each double integral is nonnegative, and the one corresponding to \(\eta =1\) is dominating. Thus, restricting the set of integration to \((1/2,1]^2\) and making use of Lemma 2.2, we write
Applying (21) to the expression in square brackets above, we get
The last integral is comparable with an analogous integral over the larger square \([-1,1]^2\), see the comment following Theorem 6.1. Now using Lemma 3.2 (a) twice, first for the integral against \(\mathrm{d}\Pi _{\beta }\) (with the parameters \(\nu =\beta , \kappa = 1, \gamma = \alpha +\beta +3, D = \cosh \frac{t}{2}+\sin \frac{\theta }{2}\sin \frac{\varphi }{2}, A=\cosh \frac{t}{2}-u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}, B = \cos \frac{\theta }{2}\cos \frac{\varphi }{2}\)) and then for the resulting integral against \(\mathrm{d}\Pi _{\alpha +1}\) (with \(\nu =\alpha +1, \kappa = \beta +1/2, \gamma = \alpha + 5/2, D=\cosh \frac{t}{2}, A=\cosh \frac{t}{2}-\cos \frac{\theta }{2}\cos \frac{\varphi }{2}, B=\sin \frac{\theta }{2}\sin \frac{\varphi }{2}\)), we arrive at the bound
Summing up, we have proved that
uniformly in \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\), and (22) follows.
Case 2 \({-1<\alpha ,\beta <-1/2}.\) In view of Proposition 2.3,
Clearly, the main contribution to \(J_4\) comes from the point \((u,v)=(1,1)\), and so
To bound the remaining integrals from above, we proceed as in Case 1, obtaining
Then applying repeatedly Lemma 3.2 (a) with suitably chosen parameters, we get
To estimate \(J_2\) and \(J_3\) from below, we use the same trick as for \(I_1\) in Case 1. By means of Lemma 2.2 and (21), we can write
Then Lemma 3.2 (a) shows that
The case of \(J_3\) is parallel; we have
Finally, we focus on the more delicate integral \(J_1\). Observe that
Restricting here the region of integration (the integrand is nonnegative, as we shall see in a moment) and using Lemma 2.2, we conclude
where \(\tau =\alpha +\beta +4, a = \cosh \frac{t}{2}-u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}- v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}, b = \cosh \frac{t}{2}-u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}+ v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}, c = \cosh \frac{t}{2}+u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}- v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}, d = \cosh \frac{t}{2}+u\sin \frac{\theta }{2}\sin \frac{\varphi }{2}+ v\cos \frac{\theta }{2}\cos \frac{\varphi }{2}\). Now applying Lemma 6.2, we get
Since
we can write
Combining this with Lemma 3.2 (a), we see that
Altogether, the above considerations justify the estimates
which hold uniformly in \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\). From this, (22) follows.
We pass to the Jacobi–Poisson kernel \(H_t^{\alpha ,\beta }(\theta ,\varphi )\). Here we can assume that \(\lambda :=\alpha +\beta +1< 0\), since otherwise the kernels \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\) and \(H_t^{\alpha ,\beta }(\theta ,\varphi )\) coincide. Then
The second term here is negative for \(t>0\), so \(H_t^{\alpha ,\beta }(\theta ,\varphi ) < {\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\). Taking (22) into account, we obtain the short-time upper bound for \(H_t^{\alpha ,\beta }(\theta ,\varphi )\). Thus what remains to show is the lower bound and the long-time upper bound for \(H_t^{\alpha ,\beta }(\theta ,\varphi )\).
We first claim that the lower short-time bound holds provided that \(t>0\) is small enough. In view of the already justified estimates for \({\mathbb {H}}_t^{\alpha ,\beta }(\theta ,\varphi )\), this will follow once we check that
for some \(T_0>0\) and some \(c<1\). Notice that the hypergeometric series defining \(F_4\) in (2) has nonnegative terms and that the zero-order term is 1. Thus for \(t>0\) and \(\theta ,\varphi \in [0,\pi ]\),
Now it suffices to ensure that, given \(\lambda \in (-1,0)\), the function
satisfies \(h(0)=0\) and \(h'(0)>0\). This, however, is straightforward. The claim follows.
Next we show that the upper long-time bound for \(H_t^{\alpha ,\beta }(\theta ,\varphi )\) holds for \(t \ge 1\) and that the lower counterpart is also true provided that \(t \ge T_1\) with \(T_1\) chosen large enough. From the series representation,
The last series can be controlled by means of the bound \(|{\mathcal {P}}_n^{\alpha ,\beta }(\theta )| \lesssim n, n \ge 1\), see (14). More precisely, we have
Since \(\alpha +\beta > -2\) and \(|\lambda |<1\), the conclusion follows.
To deal finally with the lower bound in the range \(T_0 \le t \le T_1\), we use the semigroup property of \(H_t^{\alpha ,\beta }\). For \(T_0 \le t \le 2T_0\), we have
Since \(H_{t/2}^{\alpha ,\beta }(\theta ,\varphi ) \gtrsim 1\) in \([T_0,2T_0]\times [0,\pi ]^2\) by the above, we conclude that also \(H_t^{\alpha ,\beta }(\theta ,\varphi )\) has a positive lower bound in the same set. In a finite number of similar steps, we will reach \(t=T_1\).
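The semigroup property used here amounts to the reproducing identity for the kernel,
$$\begin{aligned} H_t^{\alpha ,\beta }(\theta ,\varphi ) = \int _0^{\pi } H_{t\slash 2}^{\alpha ,\beta }(\theta ,\psi )\, H_{t\slash 2}^{\alpha ,\beta }(\psi ,\varphi )\, \mathrm{d}\mu _{\alpha ,\beta }(\psi ), \end{aligned}$$
so a uniform positive lower bound for \(H_{t\slash 2}^{\alpha ,\beta }\) transfers to \(H_t^{\alpha ,\beta }\), the measure \(\mu _{\alpha ,\beta }\) being finite and nontrivial.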
The proof of Theorem 6.1 is complete. \(\square \)
References
Andrews, G.E., Askey, R., Roy, R.: Special Functions, Encyclopedia of Mathematics and its Applications, vol. 71. Cambridge University Press, Cambridge (1999)
Askey, R.: Orthogonal Polynomials and Special Functions. Society for Industrial and Applied Mathematics, Philadelphia, PA (1975)
Caffarelli, L.A.: Sobre la conjugación y sumabilidad de series de Jacobi. Ph.D. thesis, Facultad de Ciencias Exactas, Universidad de Buenos Aires, Argentina (1971)
Caffarelli, L.A., Calderón, C.P.: On Abel summability of multiple Jacobi series. Colloq. Math. 30, 277–288 (1974)
Calderón, C.P., Urbina, W.O.: On Abel summability of Jacobi polynomials series, the Watson kernel and applications. Ill. J. Math. 57, 343–371 (2013)
Calderón, C.P., Vera de Serio, V.N.: Abel summability of Jacobi type series. Ill. J. Math. 41, 237–265 (1997)
Castro, A.J., Nowak, A., Szarek, T.Z.: Riesz–Jacobi transforms as principal value integrals. Preprint (2014). arXiv:1405.7069
Castro, A.J., Szarek, T.Z.: Calderón–Zygmund operators in the Bessel setting for all possible type indices. Acta Math. Sin. (Engl. Ser.) 30, 637–648 (2014)
Castro, A.J., Szarek, T.Z.: On fundamental harmonic analysis operators in certain Dunkl and Bessel settings. J. Math. Anal. Appl. 412, 943–963 (2014)
Ciaurri, Ó.: The Poisson operator for orthogonal polynomials in the multidimensional ball. J. Fourier Anal. Appl. 19, 1020–1028 (2013)
Ciaurri, Ó., Roncal, L., Stinga, P.R.: Fractional integrals on compact Riemannian symmetric spaces of rank one. Adv. Math. 235, 627–647 (2013)
Ciaurri, Ó., Roncal, L., Stinga, P.R.: Riesz transforms on compact Riemannian symmetric spaces of rank one. Preprint (2013). arXiv:1308.6507
Connett, W.C., Schwartz, A.L.: A multiplier theorem for ultraspherical series. Stud. Math. 51, 51–70 (1974)
Connett, W.C., Schwartz, A.L.: A multiplier theorem for Jacobi expansions. Stud. Math. 52, 243–261 (1975)
Connett, W.C., Schwartz, A.L.: A correction to the paper: “A multiplier theorem for Jacobi expansions” (Studia Math. 52 (1975), pp. 243–261). Stud. Math. 54, 107 (1975)
Connett, W.C., Schwartz, A.L.: The Littlewood–Paley theory for Jacobi expansions. Trans. Am. Math. Soc. 251, 219–234 (1979)
Dijksma, A., Koornwinder, T.K.: Spherical harmonics and the product of two Jacobi polynomials. Indag. Math. 33, 171–196 (1971)
Erdélyi, A., Magnus, W., Oberhettinger, F., Tricomi, F.G.: Higher Transcendental Functions. Vol. I. Based on Notes Left by Harry Bateman. Reprint of the 1953 original. Robert E. Krieger Publishing Co., Inc, Melbourne (1981)
Gasper, G., Trebels, W.: Multiplier criteria of Marcinkiewicz type for Jacobi expansions. Trans. Am. Math. Soc. 231, 117–132 (1977)
Gasper, G., Trebels, W.: Multiplier criteria of Hörmander type for Jacobi expansions. Stud. Math. 68, 187–197 (1980)
Johnson, W.P.: The curious history of Faà di Bruno’s formula. Am. Math. Mon. 109, 217–234 (2002)
Langowski, B.: Harmonic analysis operators related to symmetrized Jacobi expansions. Acta Math. Hung. 140, 248–292 (2013)
Li, Z.: Hardy spaces for Jacobi expansions I. The basic theory. Analysis 16, 27–49 (1996)
Li, Z., Liao, J.: Hardy spaces for Dunkl–Gegenbauer expansions. J. Funct. Anal. 265, 687–742 (2013)
Li, Z., Liu, L.: Harmonic analysis on extended Jacobi expansions: an application of Dunkl's theory. In: Analysis, Combinatorics and Computing, pp. 319–340. Nova Science Publishers, Hauppauge (2002)
Muckenhoupt, B., Stein, E.M.: Classical expansions and their relation to conjugate harmonic functions. Trans. Am. Math. Soc. 118, 17–92 (1965)
Nowak, A., Roncal, L.: Potential operators associated with Jacobi and Fourier–Bessel expansions. J. Math. Anal. Appl. 422, 148–184 (2015)
Nowak, A., Sjögren, P.: Calderón–Zygmund operators related to Jacobi expansions. J. Fourier Anal. Appl. 18, 717–749 (2012)
Nowak, A., Sjögren, P.: Sharp estimates of the Jacobi heat kernel. Stud. Math. 218, 219–244 (2013)
Nowak, A., Szarek, T.Z.: Calderón–Zygmund operators related to Laguerre function expansions of convolution type. J. Math. Anal. Appl. 388, 801–816 (2012)
Rubio de Francia, J.L., Ruiz, F.J., Torrea, J.L.: Calderón–Zygmund theory for operator-valued kernels. Adv. Math. 62, 7–48 (1986)
Ruiz, F.J., Torrea, J.L.: Vector-valued Calderón–Zygmund theory and Carleson measures on spaces of homogeneous nature. Stud. Math. 88, 221–243 (1988)
Szegő, G.: Orthogonal Polynomials, 4th edn. American Mathematical Society Colloquium Publications, vol. 23. American Mathematical Society, Providence (1975)
Wróbel, B.: Multivariate spectral multipliers for tensor product orthogonal expansions. Monatsh. Math. 168, 125–149 (2012)
Wróbel, B.: Erratum to: Multivariate spectral multipliers for tensor product orthogonal expansions. Monatsh. Math. 169, 113–115 (2013)
Communicated by Edward B. Saff.
The first author was supported in part by MNiSW Grant No. N N201 417839. The third author was partially supported by NCN research Project 2012/05/N/ST1/02746.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Nowak, A., Sjögren, P. & Szarek, T.Z. Analysis Related to All Admissible Type Parameters in the Jacobi Setting. Constr Approx 41, 185–218 (2015). https://doi.org/10.1007/s00365-015-9275-5
Keywords
- Jacobi expansion
- Jacobi–Poisson kernel
- Maximal operator
- Riesz transform
- Square function
- Spectral multiplier
- Calderón–Zygmund operator