1 Introduction

When one studies Fourier multiplier operators \(T_m\), with a compactly supported multiplier m, \(\widehat{T_mf}(\xi )=m(\xi )\hat{f}(\xi )\), a necessary condition for \(T_m\) to be bounded on \(L^p(\mathbb {R}^N)\) is that the kernel \(K=\check{m}\in L^p\). To see this, one can take a Schwartz function \(\phi \) with \(\hat{\phi }(\xi )=1\) for \(\xi \in {{\,\mathrm{supp}\,}}m\) so that \(T_m \phi = \check{m}\).
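To make this standard observation explicit: with such a \(\phi \), boundedness of \(T_m\) on \(L^p\) would give

$$\begin{aligned} \Vert \check{m}\Vert _{L^p}=\Vert T_m\phi \Vert _{L^p}\le \Vert T_m\Vert _{L^p\rightarrow L^p}\Vert \phi \Vert _{L^p}<\infty , \end{aligned}$$

since the Schwartz function \(\phi \) lies in every \(L^p\) space.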

In a remarkable paper, Heo, Nazarov and Seeger [8] show that this natural necessary condition is also sufficient for \(T_m\) to be bounded on \(L^p\) in the range \(1<p<\frac{2(N-1)}{N+1}\) whenever m is a compactly supported radial multiplier. The radial multiplier conjecture asserts that, for radial multipliers m, \(\check{m}\in L^p\) implies that \(T_m\) is bounded on \(L^p\) throughout the range \(1<p<\frac{2N}{N+1}\).

The canonical example of a compactly supported radial multiplier is the classical Bochner–Riesz multiplier

$$\begin{aligned} m_\alpha (\xi )=\left( 1-|\xi |^2\right) _+^{\alpha }. \end{aligned}$$

The corresponding convolution kernel \(K_\alpha =\check{m}_\alpha \) is well known to lie in \(L^p\) for

$$\begin{aligned} p>p_{\alpha ,N}=\frac{2N}{N+1+2\alpha }; \end{aligned}$$

see [9]. We have, from the famous work of C. Fefferman on the ball multiplier [7], the following necessary condition for boundedness of the corresponding operator. For \(N\ge 2\) and \(\alpha \ge 0\), if the multiplier operator \(T_{m_{\alpha }}\) is bounded on \(L^p\) for some \(p\ne 2\), then \(\alpha >0\). The Bochner–Riesz conjecture states that, for \(N\ge 2\), \(T_{m_\alpha }\) is bounded on \(L^p(\mathbb {R}^N)\) if and only if

$$\begin{aligned} p_{\alpha ,N}<p<p_{\alpha ,N}'\text { and }\alpha >0. \end{aligned}$$

The restrictions on the parameters may be equivalently expressed as

$$\begin{aligned} \alpha >\max \left\{ N\left| \frac{1}{2}-\frac{1}{p}\right| -\frac{1}{2} ,0\right\} . \end{aligned}$$
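For \(p\le 2\), say, this is the elementary rearrangement

$$\begin{aligned} \alpha >N\left( \frac{1}{p}-\frac{1}{2}\right) -\frac{1}{2} \iff \frac{N}{p}<\alpha +\frac{N+1}{2} \iff p>\frac{2N}{N+1+2\alpha }=p_{\alpha ,N}, \end{aligned}$$

with the case \(p\ge 2\) following by the symmetry \(p\leftrightarrow p'\), and the maximum with 0 encoding the condition \(\alpha >0\).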

The radial multiplier conjecture is a significant generalisation of the Bochner–Riesz conjecture. However, the result in [8] makes no advances on known exponents for the Bochner–Riesz conjecture.

In this paper, we examine a class of compactly supported multipliers arising as Bochner–Riesz multipliers associated to a family of embedded varieties, \(\Gamma \), in \(\mathbb {R}^N\) of arbitrary codimension. Although compactly supported, these multipliers are not radial. For some of the corresponding multiplier operators, the range of p for which the convolution kernel lies in \(L^p\) differs from the range of p for which the operator is bounded on \(L^p\).

For a compact variety \(\Gamma \) and some \(\alpha >0\),

$$\begin{aligned} m(\zeta )=d(\zeta ,\Gamma )^\alpha \phi (\zeta ) \end{aligned}$$
(1)

is called the Bochner–Riesz multiplier with exponent \(\alpha \). Here \(\phi \) is some appropriate bump function whose support intersects \(\Gamma \).

We may assume that \(\Gamma \) is parametrised as the graph of a smooth function \(\Psi \), with \(\phi \) supported in a small neighbourhood of 0 and \(\Psi \) chosen such that \(\Psi (0)=0\) and \(\nabla \Psi (0)=0\). This is achieved by using a partition of unity on the surface and then translating and rotating, with corresponding modifications to the multiplier operator. It is readily verified that the \(L^p\) boundedness of the initial multiplier operator is equivalent to that of the operators thus obtained. As such, we consider the n-dimensional variety

$$\begin{aligned} \Gamma =\left\{ (\xi ,\Psi (\xi ));\xi \in \mathbb {R}^n\right\} \subset \mathbb {R}^N, \end{aligned}$$
(2)

with smooth \(\Psi :\mathbb {R}^n\rightarrow \mathbb {R}^L\) obtained by the above procedure. One can further reduce the study of the \(L^p\) mapping properties of the operators \(T_{m}\) to the study of the \(L^p\) mapping properties of related multiplier operators \(T_{m_{\Gamma ,\alpha }}\), whose multipliers are given by

$$\begin{aligned} m_{\Gamma ,\alpha }(\zeta )=m_{\Gamma ,\alpha }(\xi ,\eta )=\phi (\xi )|\eta -\Psi (\xi )|^\alpha \chi (\eta -\Psi (\xi )). \end{aligned}$$
(3)

Here \(\zeta =(\xi ,\eta )\in \mathbb {R}^{n+L}=\mathbb {R}^N\), \(\phi :\mathbb {R}^n\rightarrow \mathbb {R}\) and \(\chi :\mathbb {R}^L\rightarrow \mathbb {R}\) are appropriate radial bump functions with small supports contained in \(B^n(0,\delta )\) and \(B^L(0,\delta )\), respectively, where \(\delta <1\) is some suitable small parameter. This technique is well known. At its core, it requires an application of the implicit function theorem which shows that the multiplier (1) can be expressed as the product of (3) with a smooth compactly supported function and vice-versa; see [16] for details. Note that the functions \(\phi \) and \(\chi \) in (3) localise the multiplier about the origin, which is on the surface.
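To indicate, at a heuristic level, why the multipliers (1) and (3) are comparable (the implicit function theorem argument in [16] upgrades this to the required smooth factorisation): since \(\nabla \Psi (0)=0\) and the supports are small, for \(\zeta =(\xi ,\eta )\) in the support of the multiplier we trivially have \(d(\zeta ,\Gamma )\le |\eta -\Psi (\xi )|\), while if \((\xi ',\Psi (\xi '))\) is a nearest point of \(\Gamma \) to \(\zeta \) (so that \(|\xi -\xi '|\le d(\zeta ,\Gamma )\lesssim \delta \)), then

$$\begin{aligned} |\eta -\Psi (\xi )|\le |\eta -\Psi (\xi ')|+|\Psi (\xi ')-\Psi (\xi )|\le d(\zeta ,\Gamma )+C\delta |\xi -\xi '|\le (1+C\delta )d(\zeta ,\Gamma ), \end{aligned}$$

so that \(d(\zeta ,\Gamma )\sim |\eta -\Psi (\xi )|\) there.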

For a particular class of surfaces, we determine the precise \(L^p(\mathbb {R}^N)\) integrability range of the corresponding convolution kernel \(K_{\Gamma ,\alpha }=\check{m}_{\Gamma ,\alpha }\).

Let \(\Gamma =\{(\xi , \Psi (\xi ))\in \mathbb {R}^N; \xi \in {{\mathbb {R}}}^n\}\), where

$$\begin{aligned} \Psi (\xi ) =(\widetilde{\Psi }(\xi ),0) = (|\xi |^{d_1}, |\xi |^{d_2}, \ldots , |\xi |^{d_{L_1}},0)\in \mathbb {R}^{L_1}\times {\mathbb {R}^{L_2}} \end{aligned}$$
(4)

and \(2\le d_1<d_2<\ldots <d_{L_1}\) are even. Here \(\widetilde{\Psi }(\xi )=(|\xi |^{d_1}, |\xi |^{d_2}, \ldots , |\xi |^{d_{L_1}})\), which is a smooth radial function. The surface \(\Gamma \) has dimension n and codimension \(L=L_1+L_2\).

Theorem 1.1

Consider the surface \(\Gamma =\{(\xi , \Psi (\xi ))\in \mathbb {R}^N: \xi \in {{\mathbb {R}}}^n\}\), where \(\Psi \) is given by (4) and \(L_1>0\). The convolution kernel \(K_{\Gamma ,\alpha }=\check{m}_{\Gamma ,\alpha }\in L^p(\mathbb {R}^N)\) if and only if \(p>\frac{L+n}{L+\alpha +\frac{n}{2}}\).
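As a consistency check, taking \(L_1=1\), \(L_2=0\) and \(d_1=2\), so that \(\Gamma \) is the paraboloid \(\lbrace (\xi ,|\xi |^2);\xi \in \mathbb {R}^n\rbrace \subset \mathbb {R}^{n+1}\), the critical exponent in Theorem 1.1 becomes

$$\begin{aligned} \frac{L+n}{L+\alpha +\frac{n}{2}}=\frac{n+1}{1+\alpha +\frac{n}{2}}=\frac{2N}{N+1+2\alpha }=p_{\alpha ,N},\qquad N=n+1, \end{aligned}$$

which coincides with the classical integrability exponent recalled at the start of the introduction (there stated for the sphere rather than the paraboloid).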

The proofs of the necessary and sufficient conditions for \(K_{\Gamma ,\alpha }\in L^p\) in Theorem 1.1 are different. The kernel \(K_{\Gamma ,\alpha }\) is given as an oscillatory integral with an explicit phase. To prove that

$$\begin{aligned} K_{\Gamma ,\alpha }\in L^p\implies p>\frac{L+n}{L+\alpha +\frac{n}{2}}, \end{aligned}$$
(5)

we restrict our attention to a region where the critical point of the phase is non-degenerate and use stationary phase techniques to obtain a pointwise estimate on the size of the kernel. To obtain the sufficient condition

$$\begin{aligned} p>\frac{L+n}{L+\alpha +\frac{n}{2}}\implies K_{\Gamma ,\alpha }\in L^p, \end{aligned}$$
(6)

we adapt a method developed in work by Arkhipov et al. [1], which was used to settle a question arising from Tarry’s problem in Number Theory; rather than pointwise control on the size of the kernel, we partition \(\mathbb {R}^N\) according to estimates on the kernel’s size and then estimate the size of each element of the partition.

Remark 1.2

Theorem 1.1 holds for more general varieties including, for example, the curves of standard type found in [15]; see [16] which obtains the result by a more delicate application of the methods contained herein.

We now focus on the \(L^p(\mathbb {R}^N)\) boundedness of the multiplier operator \(T_{m_{\Gamma ,\alpha }}\). After recalling some known results on the relation between the Bochner–Riesz conjecture and the restriction conjecture, we are able to state a sharp result, Proposition 1.6.

For general varieties \(\Gamma \), it is well known that the Bochner–Riesz conjecture is connected to another fundamental part of Euclidean harmonic analysis, the Fourier restriction conjecture. In this context, the conjecture concerns the precise \(L^p(\mathbb {R}^N)\rightarrow L^q(\Gamma ,\mu )\) mapping properties of the restriction operator \(Rf=\hat{f}|_{\Gamma }\), where \(\mu \) denotes the surface measure on \(\Gamma \). Even in the original setting of \(\Gamma =S^{N-1}\), as proposed by Stein in the mid-1960s, the Fourier restriction conjecture is unsolved for \(N\ge 3\).

Progress with the Bochner–Riesz conjecture has historically paralleled progress with the restriction conjecture. T. Tao established that the Bochner–Riesz conjecture implies the restriction conjecture on the sphere, [14]. The table contained therein also outlines some of the parallel progress in these two areas.

It is well known that sharp \(L^p\) estimates for Bochner–Riesz multiplier operators, \(T_{m_{\Gamma ,\alpha }}\), follow from \(L^2\) Fourier restriction estimates for \(\Gamma \); the relevant restriction estimates are written as

$$\begin{aligned} \left( \int \left| \hat{f}(\xi ,\Psi (\xi ))\right| ^2\phi (\xi )d\xi \right) ^\frac{1}{2}\le C\Vert f\Vert _{L^q(\mathbb {R}^N)}. \end{aligned}$$
(7)

The implication to obtain sharp Bochner–Riesz estimates and surrounding ideas date back to C. Fefferman’s thesis, [6], where the analysis is for the sphere \(\Gamma =S^{N-1}\).

For varieties \(\Gamma \) of arbitrary codimension we have the following result of G. Mockenhaupt, [11].

Theorem 1.3

Let

$$\begin{aligned} \Gamma =\lbrace \left( \xi , \Psi (\xi )\right) ; \xi \in \mathbb {R}^n\rbrace \subset \mathbb {R}^n\times \mathbb {R}^{L}. \end{aligned}$$

Suppose that the \(L^q\rightarrow L^2\) restriction inequality (7) holds. Then, for \(1\le p\le q\) or \(q'\le p <\infty \), the Bochner–Riesz multiplier \(m_{\Gamma ,\alpha }\), (3) with \(\alpha >0\), defines a multiplier operator, \(T_{m_{\Gamma ,\alpha }}\), which is bounded on \(L^p\) for

$$\begin{aligned} \alpha >(n+L)\left| \frac{1}{p}-\frac{1}{2}\right| -\frac{L}{2}. \end{aligned}$$

In particular, when \(1\le p\le q<2\), we see that \(T_{m_{\Gamma ,\alpha }}\) is bounded on \(L^p(\mathbb {R}^N)\) if \(p>\frac{L+n}{L+\alpha +\frac{n}{2}}\), which is precisely the \(L^p\) integrability range for the Bochner–Riesz kernels \(K_{\Gamma ,\alpha }\) considered in Theorem 1.1. Hence, for the varieties considered in Theorem 1.1, we obtain sharp \(L^p\) estimates for \(T_{m_{\Gamma ,\alpha }}\), provided the Fourier restriction inequality (7) holds.
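Indeed, for \(p<2\) the condition in Theorem 1.3 rearranges as

$$\begin{aligned} \alpha >(n+L)\left( \frac{1}{p}-\frac{1}{2}\right) -\frac{L}{2} \iff \frac{n+L}{p}<\alpha +L+\frac{n}{2} \iff p>\frac{L+n}{L+\alpha +\frac{n}{2}}. \end{aligned}$$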

In this paper, we observe that there are surfaces for which the range of p such that the Bochner–Riesz kernel \(K_{m_{\Gamma ,\alpha }}=\check{m}_{\Gamma ,\alpha }\in L^p\) differs from the \(L^p\) boundedness range for \(T_{m_{\Gamma ,\alpha }}\). These examples are instances of varieties

$$\begin{aligned} \Gamma =\left\{ \left( \xi ,\widetilde{\Psi }(\xi ),0\right) ;\xi \in \mathbb {R}^n\right\} \subset \mathbb {R}^n\times \mathbb {R}^{L_1}\times \mathbb {R}^{L_2}, \end{aligned}$$

with \(\widetilde{\Psi }:\mathbb {R}^n\rightarrow \mathbb {R}^{L_1}\) smooth. When \(L_1<L\), these varieties lie in a proper subspace of \(\mathbb {R}^N\), and in this case it is well known that the Fourier restriction inequality (7) cannot hold for any \(q>1\). Nevertheless, a restriction inequality may hold within the subspace \(\mathbb {R}^{n+L_1}\). Indeed, let us suppose that

$$\begin{aligned} \left( \int _{\mathbb {R}^n}|\hat{g}(\xi ,\widetilde{\Psi }(\xi ))|^2\phi (\xi )d\xi \right) ^\frac{1}{2}\le C\Vert g\Vert _{L^q(\mathbb {R}^{n+L_1})} \end{aligned}$$
(8)

holds for some \(1<q<2\) and some constant C. Then we have the following generalisation of Theorem 1.3.

Theorem 1.4

Let

$$\begin{aligned} \Gamma =\left\{ \left( \xi , \widetilde{\Psi }(\xi ),0\right) ; \xi \in \mathbb {R}^n\right\} \subset \mathbb {R}^n\times \mathbb {R}^{L_1}\times \mathbb {R}^{L_2}\end{aligned}$$

and \(\widetilde{\Gamma }\) be the corresponding projection to \(\mathbb {R}^n\times \mathbb {R}^{L_1}\),

$$\begin{aligned}\widetilde{\Gamma }=\left\{ \left( \xi , \widetilde{\Psi }(\xi )\right) ; \xi \in \mathbb {R}^n\right\} \subset \mathbb {R}^n\times \mathbb {R}^{L_1}. \end{aligned}$$

Suppose that the \(L^q\rightarrow L^2\) restriction inequality (8) holds. Then, for \(1\le p\le q\) or \(q'\le p <\infty \), the Bochner–Riesz multiplier \(m_{\Gamma ,\alpha }\) given by (3) with \(\alpha > 0\) defines a multiplier operator, \(T_{m_{\Gamma ,\alpha }}\), which is bounded on \(L^p\) if

$$\begin{aligned} \alpha >(n+L_1)\left| \frac{1}{p}-\frac{1}{2}\right| -\frac{L_1}{2}. \end{aligned}$$
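In particular, for \(1\le p\le q<2\) this condition rearranges, exactly as after Theorem 1.3, to \(p>\frac{L_1+n}{L_1+\alpha +\frac{n}{2}}\), while for \(q'\le p<\infty \) it reads

$$\begin{aligned} \alpha >(n+L_1)\left( \frac{1}{2}-\frac{1}{p}\right) -\frac{L_1}{2} \iff \frac{1}{p}>\frac{\frac{n}{2}-\alpha }{n+L_1} \iff p<\frac{L_1+n}{\frac{n}{2}-\alpha }, \end{aligned}$$

the last condition being vacuous when \(\alpha \ge \frac{n}{2}\).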

The proof of Theorem 1.4 follows known lines of argument, namely those which establish Theorem 1.3; see [16]. It proceeds via an \(L^2(\widetilde{\Gamma })\rightarrow L^{q'}(\mathbb {R}^N)\) extension estimate. When \(L_2>0\), no non-trivial extension estimate from the surface \(\Gamma \) itself holds, so Theorem 1.4 is not a direct corollary of Theorem 1.3.

Let \(\Gamma =\left\{ (\xi ,\Psi (\xi ));\xi \in \mathbb {R}^n\right\} \), with \(\Psi \) given by (4) such that \(L_1<L\). Then a necessary condition for the restriction inequality (8) to hold is

$$\begin{aligned} \frac{q'}{2}\ge 1+\frac{D}{n}, \end{aligned}$$
(9)

where \(D=\sum _{j=1}^{L_1} d_j\). This follows from a standard Knapp example, which we briefly indicate.
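For small \(\rho >0\), let \(g_\rho \) be a Schwartz function on \(\mathbb {R}^{n+L_1}\) whose Fourier transform is a bump adapted to the box \(\lbrace |\xi |\le \rho ,\ |\eta _j|\le C\rho ^{d_j}\text { for }1\le j\le L_1\rbrace \) and equal to 1 on the portion of \(\widetilde{\Gamma }\) lying over \(\lbrace |\xi |\le \rho \rbrace \); we assume, as is typical, that \(\phi \) is non-negative with \(\phi \sim 1\) near the origin. Then

$$\begin{aligned} \rho ^{\frac{n}{2}}\lesssim \left( \int _{|\xi |\le \rho }|\hat{g}_\rho (\xi ,\widetilde{\Psi }(\xi ))|^2\phi (\xi )d\xi \right) ^\frac{1}{2}\le C\Vert g_\rho \Vert _{L^q(\mathbb {R}^{n+L_1})}\lesssim \rho ^{(n+D)\left( 1-\frac{1}{q}\right) }=\rho ^{\frac{n+D}{q'}}, \end{aligned}$$

and letting \(\rho \rightarrow 0\) forces \(\frac{n}{2}\ge \frac{n+D}{q'}\), which is (9). The following result has been established by Wright [17].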

Proposition 1.5

Let \(\Gamma =\left\{ (\xi ,\Psi (\xi ));\xi \in \mathbb {R}^n\right\} \), with \(\Psi \) given by (4). If \(d_1\ge n(L_1+1)\), then (8) holds if \(\frac{q'}{2}\ge 1+\frac{D}{n}\).

We sketch a proof of Proposition 1.5 in Appendix A.

Proposition 1.5 gives us a class of varieties of arbitrary codimension where the optimal \(L^2\) Fourier restriction estimate holds. Combining this with Theorem 1.4, de Leeuw’s multiplier theorem, duality, and Theorem 1.1 we establish the following sharp result; see Sect. 4.

Proposition 1.6

Let

$$\begin{aligned} \Gamma =\left\{ (\xi ,\Psi (\xi ));\xi \in \mathbb {R}^n\right\} , \end{aligned}$$

with \(\Psi \) defined at (4) chosen such that \(L_1<L\) and \(d_1\ge n(L_1+1)\). Consider the Bochner–Riesz multiplier \(m_{\Gamma ,\alpha }\), (3) with \(\alpha > 0\). Suppose that \(1\le p \le q\) or \(q'\le p <\infty \) and \(\frac{q'}{2}= 1+\frac{D}{n}\). The operator \(T_{m_{\Gamma ,\alpha }}\) is bounded on \(L^p\) if and only if \(\frac{L_1+n}{L_1+\alpha +\frac{n}{2}}<p<\frac{L_1+n}{\frac{n}{2}-\alpha }\), the upper bound being interpreted as \(\infty \) when \(\alpha \ge \frac{n}{2}\).

One expects these propositions to hold without the extra condition \(d_1\ge n(L_1+1)\). For the curves described by (4) when \(n=1\) and \(L_1=L\), this is indeed the case; see [5].

In contrast to Proposition 1.6, Theorem 1.1 shows that the convolution kernel \(K_{\Gamma ,\alpha }=\check{m}_{\Gamma ,\alpha }\) lies in \(L^p(\mathbb {R}^N)\) if and only if \(p>\frac{L+n}{L+\alpha +\frac{n}{2}}\). It is easily verified, for \(\alpha <\frac{n}{2}\), that \(\frac{L_1+n}{L_1+\alpha +\frac{n}{2}}\ge \frac{L+n}{L+\alpha +\frac{n}{2}}\), with strict inequality if \(L_1\ne L\). Hence, for the varieties in Theorem 1.1 with \(L_1<L\), the \(L^p\) boundedness range for \(T_{m_{\Gamma ,\alpha }}\) differs from the \(L^p\) integrability range of \(K_{\Gamma ,\alpha }\), as claimed above.
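The verification of the inequality between these exponents is elementary:

$$\begin{aligned} (L_1+n)\left( L+\alpha +\frac{n}{2}\right) -(L+n)\left( L_1+\alpha +\frac{n}{2}\right) =(L-L_1)\left( \frac{n}{2}-\alpha \right) , \end{aligned}$$

which is non-negative for \(\alpha \le \frac{n}{2}\) and strictly positive when \(L_1<L\) and \(\alpha <\frac{n}{2}\); dividing by the positive denominators gives the stated comparison.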

Regarding the Fourier restriction problem for the moment curve,

$$\begin{aligned} (t,t^2,t^3,\ldots ,t^d), \end{aligned}$$

there is the result of Drury, [4], which tells us that the restriction inequality (7) holds for \(q'\ge d(d+1)\). This is sharp, by the example in [1]. As mentioned previously, it is a method from [1] that we adapt to show the sufficient condition for \(K_{\Gamma ,\alpha }\in L^p\), the implication (6), of Theorem 1.1.

2 Outline of Paper

Section 3 introduces notation and derives the Bochner–Riesz kernels. We also observe some basic properties of the kernels.

In Sect. 4 we show how Proposition 1.6 follows from Theorem 1.4 and Theorem 1.1 applied with reference to de Leeuw’s theorem.

In Sects. 5 and 6 we prove Theorem 1.1. In Sect. 5 we prove the sufficient condition for \(K_{\Gamma ,\alpha }\in L^p\), (6). In Sect. 6 the reverse implication, (5), is proved.

In Appendix A we sketch the proof of Proposition 1.5.

3 Notation

Throughout this paper C will be used to denote a constant whose value may change from line to line. We use the notation \(X\lesssim Y\) or \(Y\gtrsim X\) if there exists some constant C such that \(X\le CY\). Where \(X\lesssim Y\) and \(X\gtrsim Y\), we write \(X\sim Y\). When we wish to highlight the dependence of the implied constant C on some other parameter, say \(C=C(M)\), we will use the notation \(X\lesssim _M Y\). We use the notation \(X\ll Y\) or \(Y\gg X\) if there exists some suitable large constant D such that \(DX\le Y\).

We denote by \(B^m(x,r)\) the Euclidean ball of radius r centred at x in \(\mathbb {R}^m\), or simply \(B(x,r)\) if the dimension is clear.

We use the notation

$$\begin{aligned} \mathcal {F}f(\xi )=\hat{f}(\xi )=\int e^{-2\pi i \xi \cdot x}f(x)dx, \end{aligned}$$

for the Fourier transform and likewise for the inverse Fourier transform we write

$$\begin{aligned} \mathcal {F}^{-1}g(x)=\check{g}(x)=\int e^{2\pi i x \cdot \xi }g(\xi )d\xi . \end{aligned}$$

We also denote by \(\hat{\sigma }\) the Fourier transform of the surface measure of the sphere \(S^{m-1}\),

$$\begin{aligned} \hat{\sigma }(x)=\int _{S^{m-1}}e^{2\pi i x\cdot \omega }d\sigma (\omega ). \end{aligned}$$

For \(p\in (1,\infty )\) we denote by \(\mathcal {M}_p\) the space of Borel measurable functions m for which the multiplier operator defined a priori by

$$\begin{aligned} f\mapsto \left( m\cdot \hat{f}\right) ^{\check{}} \end{aligned}$$

is bounded from \(L^p\) to \(L^p\).

Recall that we write \(L=L_1+L_2\). Typically, in what follows x, y, and z will denote points in \(\mathbb {R}^n\), \(\mathbb {R}^{L_1}\), and \(\mathbb {R}^{L_2}\), respectively. Since we may have that \(L_2=0\) but integrate with respect to dz, we follow the convention that, when \(L_2=0\), \(\mathbb {R}^{L_2}=\mathbb {R}^{0}=\lbrace 0\rbrace \) and dz is the counting measure. Similarly for \(S^{0}\), which we regard as the set \(\lbrace -1,1\rbrace \) equipped with the counting measure.

All multiplier operators are expressible as convolution operators. In this paper, we consider the Bochner–Riesz multipliers corresponding to the surfaces

$$\begin{aligned} \Gamma =\lbrace (\xi ,\Psi (\xi ));\xi \in \mathbb {R}^n\rbrace =\lbrace (\xi ,\widetilde{\Psi }(\xi ),0);\xi \in \mathbb {R}^n\rbrace \subset \mathbb {R}^n\times \mathbb {R}^{L_1}\times \mathbb {R}^{L_2} \end{aligned}$$

with \(\Psi \) as given in (4). We turn our sights to the convolution kernel of \(T_m\), where \(m=m_{\Gamma ,\alpha }\) is given at (3). The kernel, \(K_{\Gamma ,\alpha }=K=\check{m}\), is given by

$$\begin{aligned} K(x,y,z)= & {} \int _{\mathbb {R}^L}\int _{\mathbb {R}^n}e^{2\pi i \left( x\cdot \xi +(y,z)\cdot \eta \right) }m(\xi ,\eta )d\xi d\eta \nonumber \\= & {} \int _{\mathbb {R}^L}\int _{\mathbb {R}^n} e^{2\pi i \left( x\cdot \xi +(y,z)\cdot \eta \right) }\phi (\xi )|\eta -\Psi (\xi )|^\alpha \chi (\eta -\Psi (\xi ))d\xi d\eta \nonumber \\= & {} \int _{\mathbb {R}^L}\int _{\mathbb {R}^n} e^{2\pi i \left( x\cdot \xi +(y,z)\cdot (\eta +\Psi (\xi ))\right) }\phi (\xi )|\eta |^\alpha \chi (\eta )d\xi d\eta \nonumber \\= & {} A_{\alpha }(y,z)\left( \int _{\mathbb {R}^n} e^{2\pi i \left( x\cdot \xi +(y,z)\cdot \Psi (\xi )\right) }\phi (\xi )d\xi \right) \nonumber \\= & {} A_\alpha (y,z)k(x,y), \end{aligned}$$
(10)

where

$$\begin{aligned} k(x,y)=\int _{\mathbb {R}^n} e^{2\pi i \left( x\cdot \xi +y\cdot \widetilde{\Psi }(\xi )\right) }\phi (\xi )d\xi \end{aligned}$$
(11)

and

$$\begin{aligned} A_\alpha (y,z)=\int _{\mathbb {R}^L} e^{2\pi i (y,z)\cdot \eta }|\eta |^\alpha \chi (\eta )d\eta . \end{aligned}$$
(12)

Unlike for \(A_\alpha \), obtaining pointwise control of k is a delicate matter. Such control will only be required when we prove the necessary condition (5) in Sect. 6.

We have the following bounds on the size of the function \(A_\alpha \). Their proof, which is routine, is omitted; see [16].

Lemma 3.1

For large \(|(y,z)|\), there exist constants \(0<c_\alpha \le C_\alpha \) such that

$$\begin{aligned} c_\alpha |(y,z)|^{-L-\alpha }\le |A_\alpha ((y,z))|\le C_\alpha |(y,z)|^{-L-\alpha }. \end{aligned}$$
(13)

4 Proof of Proposition 1.6

Proof

By Proposition 1.5, we see that the \(L^q\rightarrow L^2\) restriction inequality (8) also holds as an \(L^p\rightarrow L^2\) inequality for \(1\le p\le q\). Therefore, we can apply Theorem 1.4 to see that, where \(1\le p \le q\) or \(q'\le p <\infty \), \(T_{m}\) is bounded on \(L^p\) for \(\frac{L_1+n}{L_1+\alpha +\frac{n}{2}}<p<\frac{L_1+n}{\frac{n}{2}-\alpha }\). To obtain the necessity of these conditions, instead of considering the multiplier

$$\begin{aligned} m(\xi ,\eta ,\lambda )=|(\eta ,\lambda )-\Psi (\xi )|^\alpha \chi \left( (\eta ,\lambda )-\Psi (\xi )\right) \phi (\xi ), \end{aligned}$$

we restrict this continuous multiplier to \(V=\mathbb {R}^{n+L_1}\), corresponding to \(\lambda =0\). de Leeuw's theorem, [3], tells us that if \(m\in \mathcal {M}_p(\mathbb {R}^N)\), then \(m|_V\in \mathcal {M}_p(\mathbb {R}^{n+L_1})\). This restriction is the Bochner–Riesz multiplier on \(\mathbb {R}^{n+L_1}\) corresponding to the surface \(\widetilde{\Gamma }=\left\{ (\xi ,\widetilde{\Psi }(\xi ));\xi \in \mathbb {R}^n\right\} \). We apply Theorem 1.1, with \(L_2=0\), to see that the corresponding kernel lies in \(L^p\) only for \(p>\frac{L_1+n}{L_1+\alpha +\frac{n}{2}}\); since a multiplier operator with compactly supported multiplier can only be bounded on \(L^p\) if its kernel lies in \(L^p\), boundedness of \(T_{m}\) on \(L^p(\mathbb {R}^N)\) forces \(p>\frac{L_1+n}{L_1+\alpha +\frac{n}{2}}\). Since \(\mathcal {M}_p=\mathcal {M}_{p'}\), the same argument applied with \(p'\) in place of p shows that we must also have \(p'>\frac{L_1+n}{L_1+\alpha +\frac{n}{2}}\), that is, \(p<\frac{L_1+n}{\frac{n}{2}-\alpha }\) when \(\alpha <\frac{n}{2}\), with no further condition when \(\alpha \ge \frac{n}{2}\). \(\square \)

5 A Sufficient Condition for \(K\in L^p\); Proof of (6)

In this section we prove one part of Theorem 1.1, the implication (6). That is the sufficient condition for the kernel K, (10), to be in \(L^p\).

We use the following van der Corput estimate, which is essential to the proof of our main result. It is proven analogously to the corresponding classical estimate, as a simple corollary of the van der Corput estimate proved in [2].

Lemma 5.1

Let \(\Phi :\mathbb {R}\rightarrow \mathbb {R}\) be a polynomial of degree d and \(\phi \) be a smooth function with \({{\,\mathrm{supp}\,}}\phi \subset (a,b)\). Given the bound \(\inf _{t\in [a,b]}\sum _{j=1}^{d}|\Phi ^{(j)}(t)|^\frac{1}{j}\ge \kappa \), we have that

$$\begin{aligned} \left| \int _a^b e^{2\pi i\Phi (t)}\phi (t)dt\right| \lesssim _{d}\min \left\{ (b-a),\kappa ^{-1}\right\} \Vert \phi '\Vert _{L^1}. \end{aligned}$$

Henceforth, until stated otherwise, we denote by \(\Phi (t)\) the phase appearing in certain one dimensional oscillatory integrals \(I_{\pm }(x,y)=\int e^{2\pi i\Phi (t)}\phi _0(t)dt\). These phases are

$$\begin{aligned} \Phi (t)=\Phi _{x,y,\pm }(t)=\pm |x|t+\sum _{j=1}^{L_1} y_j\psi _j(t), \end{aligned}$$

with \(\psi _j(t)=|t|^{d_j}\). Recall \(\delta \), given at (3), is a small parameter bounding the radius of the support of \(\phi \). For \(t\in [-\delta ,\delta ]\), we denote by \(H_t(x,y)\) and \(H(x,y)\) the quantities

$$\begin{aligned} H_t(x,y)=\sum _{j=1}^{d_{L_1}}\left| \Phi _{x,y}^{(j)}(t)\right| ^\frac{1}{j}\text { and }H(x,y)=\inf _{t\in [-\delta ,\delta ]}H_t(x,y). \end{aligned}$$

We will make use of the following lemma.

Lemma 5.2

For \(|y|\gtrsim 1\), \(H(x,y)\gtrsim |y|^\frac{1}{d_{L_1}}\).

Proof

We relate H to a homogeneous \(\widetilde{H^1}\), where

$$\begin{aligned} \widetilde{H^{1}}(x,y)=\max _{j=1,2,\ldots ,d_{L_1}}\inf _{t\in [-\delta ,\delta ]}\left| \Phi _{x,y}^{(j)}(t)\right| . \end{aligned}$$

We show that \(\widetilde{H^1}\), which is clearly homogeneous of degree 1, is bounded below by a positive constant on the sphere. We first show that it is non-vanishing there.

Suppose that \(\widetilde{H^{1}}(x,y)=0\). Then, in particular, \(|y_{L_1}|\sim \inf _{t\in [-\delta ,\delta ]}\left| \Phi ^{(d_{L_1})}_{x,y}(t)\right| =0\) so that \(y_{L_1}=0\). Considering the \(d_{L_1-1}\)th derivative in turn, we find that \(|y_{L_1-1}|\sim \inf _{t\in [-\delta ,\delta ]}\left| \Phi ^{(d_{L_1-1})}_{x,y}(t)\right| =0\) so that \(y_{L_1-1}=0\). Continuing in this fashion shows that \(y_j=0\) for every j; the phase then reduces to \(\pm |x|t\), so that \(|x|=\inf _{t\in [-\delta ,\delta ]}\left| \Phi '_{x,y}(t)\right| =0\) as well. Hence, if \(\widetilde{H^{1}}(x,y)=0\), then \((x,y)=0\).

To show that \(\inf _{(x,y)\in S^{N-1}}\widetilde{H^1}(x,y)>0\), we suppose by way of contradiction that there exists a sequence \(z_n\in S^{N-1}\) with \(\widetilde{H^1}(z_n)\rightarrow 0\). Passing to a subsequence if necessary, we can assume that \(z_n\rightarrow z\in S^{N-1}\). Writing \(I=[-\delta ,\delta ]\) and letting \(t_n=t_n(j)\in I\) attain the infimum below, we see that, for each j,

$$\begin{aligned} \widetilde{H^1}(z_n)\ge \inf _{t\in I}\left| \Phi _{z_n}^{(j)}(t)\right|= & {} \left| \Phi _{z_n}^{(j)}(t_n)\right| \ge \left| \Phi _{z}^{(j)}(t_n)\right| -\left| \Phi _{z}^{(j)}(t_n)-\Phi _{z_n}^{(j)}(t_n)\right| \\\ge & {} \inf _{t\in I}\left| \Phi _{z}^{(j)}(t)\right| -C|z_n-z|. \end{aligned}$$

Taking the maximum over j shows that

$$\begin{aligned} \widetilde{H^1}(z_n)\ge \widetilde{H^1}(z)-C|z_n-z|. \end{aligned}$$

We know that \(\widetilde{H^1}(z)\ne 0\) and \(|z_n-z|\rightarrow 0\), so, letting \(n\rightarrow \infty \), we obtain the contradiction \(0\ge \widetilde{H^1}(z)>0\). Therefore we can conclude \(\inf _{(x,y)\in S^{N-1}}\widetilde{H^1}(x,y)>0\). As a consequence of this and homogeneity, we have that \(\widetilde{H^{1}}(x,y)\gtrsim |(x,y)|\ge |y|\).

Now, by relating H to the homogeneous \(\widetilde{H^1}\), which is non-vanishing on the sphere, we show that \(H(x,y)\gtrsim |y|^\frac{1}{d_{L_1}}\). There exists \(t^*=t^*(x,y)\in I\) with \(H(x,y)=H_{t^*}(x,y)\). Choosing \(1\le j_0\le d_{L_1}\) to attain the maximum in the definition of \(\widetilde{H^1}(x,y)\), we see that

$$\begin{aligned} 1\lesssim \left| y\right| ^\frac{1}{d_{L_1}}\le & {} \left| (x,y)\right| ^\frac{1}{d_{L_1}}\lesssim \left( \widetilde{H^1}(x,y)\right) ^\frac{1}{d_{L_1}} \lesssim \left( \widetilde{H^1}(x,y)\right) ^\frac{1}{j_0}=\left( \inf _{t\in I}\left| \Phi ^{(j_0)}(t)\right| \right) ^\frac{1}{j_0}\\\le & {} \left| \Phi ^{(j_0)}(t^*)\right| ^\frac{1}{j_0}\le \max _{j=1,\ldots ,d_{L_1}}\left| \Phi ^{(j)}(t^*)\right| ^\frac{1}{j}\le H_{t^*}(x,y)=H(x,y). \end{aligned}$$

\(\square \)

We require the use of an asymptotic expansion before applying the van der Corput estimate. As such, we introduce a cutoff function \(\eta \), chosen such that \(\eta (r)=0\) for \(r\le \frac{1}{2}\) and \(\eta (r)=1\) for \(r\ge 1\). We will later perform polar integration; let us denote by \(\phi _0\) the function on \([0,\infty )\) for which \(\phi (\xi )=\phi _0(|\xi |)\).

Lemma 5.3

Let \(\Phi (r)=\Phi _{x,y,\pm }(r)=\pm |x|r+\sum _{j=1}^{L_1} y_j\psi _j(r)\) and, as above, \(H(x,y)=\inf _{r\in [-\delta ,\delta ]}\sum _{j=1}^{d_{L_1}}\left| \Phi ^{(j)}(r)\right| ^\frac{1}{j}\). If \(H(x,y)\ge \kappa \), then we have that

$$\begin{aligned} \left| \int e^{2\pi i\Phi (r)}\frac{1}{|rx|^{\frac{n-1}{2}}}\eta (|x|r)\phi _0(r)r^{n-1}dr\right| \lesssim \kappa ^{-1}\frac{1}{|x|^\frac{n-1}{2}}. \end{aligned}$$

Proof

When we apply Lemma 5.1, we see that, if \(H(x,y)\ge \kappa \), then

$$\begin{aligned} \left| \int _0^\infty e^{2\pi i\left( \pm r|x|+ \sum _{j=1}^{L_1}y_j\psi _j(r)\right) } \frac{1}{|rx|^{\frac{n-1}{2}}}\eta (|x|r)\phi _0(r)r^{n-1}dr\right| \lesssim _{d} \Vert \tilde{\phi }'\Vert _{L^1}\min \left\{ 1,\kappa ^{-1}\right\} , \end{aligned}$$

where \(\tilde{\phi }(r)=\frac{1}{|rx|^{\frac{n-1}{2}}}\eta (|x|r)\phi _0(r)r^{n-1}=\frac{1}{|x|^{\frac{n-1}{2}}}\eta (|x|r)\phi _0(r)r^{\frac{n-1}{2}}\). We see that

$$\begin{aligned} \tilde{\phi }'(r)= & {} \frac{1}{|x|^\frac{n-1}{2}}\left( |x|\eta '(|x|r)\phi _0(r)r^{\frac{n-1}{2}}+\eta (|x|r)\phi _0'(r)r^{\frac{n-1}{2}}\right. \\&\left. +\frac{n-1}{2}\eta (|x|r)\phi _0(r)r^{\frac{n-3}{2}}\right) \end{aligned}$$

so that

$$\begin{aligned} \Vert \tilde{\phi }'\Vert _{L^1}\le & {} \frac{1}{|x|^\frac{n-1}{2}}\left( \int \left| |x|\eta '(|x|r)\phi _0(r)r^{\frac{n-1}{2}}\right| dr+ \int \left| \eta (|x|r)\phi _0'(r)r^{\frac{n-1}{2}}\right| dr\right) \\&\quad +\frac{1}{|x|^\frac{n-1}{2}}\int \left| \frac{n-1}{2}\eta (|x|r)\phi _0(r)r^{\frac{n-3}{2}}\right| dr \\\lesssim & {} \frac{1}{|x|^\frac{n-1}{2}} \left( |x| |x|^{-\frac{n-1}{2}}|x|^{-1}+1+\left( 1+|x|^{-\frac{n-1}{2}}\right) \right) \sim \frac{1}{|x|^\frac{n-1}{2}}. \end{aligned}$$

\(\square \)

Proof of (6)

Recall the product expression \(K(x,y,z)=k(x,y)A_{\alpha }(y,z)\) from (10) and that the graphing function \(\Psi \) is given at (4). Combining the bound (13) on \(A_\alpha \) with the trivial bound \(|A_\alpha |\le C\), we see

$$\begin{aligned} |k(x,y)A_{\alpha }(y,z)|\lesssim |k(x,y)|(1+|(y,z)|)^{-L-\alpha }. \end{aligned}$$

It thus suffices to show that \(k(x,y)(1+|(y,z)|)^{-L-\alpha }\in L^p(\mathbb {R}^N,dxdydz)\) for the desired range of p. We form a finite decomposition of \(k(x,y)(1+|(y,z)|)^{-L-\alpha }\) and provide an \(L^p\) bound on each of the resulting terms. Given such bounds, an application of the triangle inequality suffices to complete the proof. We show the result for \(p\le 2\); the full range then follows because \(L^p\cap L^\infty \subset L^q\) for all \(q\in [2,\infty ]\) when \(p\le 2\).
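The inclusion \(L^p\cap L^\infty \subset L^q\) is simply the observation that, for \(q\ge p\),

$$\begin{aligned} \int |f|^q=\int |f|^p|f|^{q-p}\le \Vert f\Vert _{L^\infty }^{q-p}\Vert f\Vert _{L^p}^p, \end{aligned}$$

and K is indeed bounded, being the inverse Fourier transform of the bounded, compactly supported function \(m_{\Gamma ,\alpha }\).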

We first consider regions \(R_j\) for which it is easily shown

$$\begin{aligned} \Vert k(x,y)(1+|(y,z)|)^{-(L+\alpha )}\Vert _{L^p(R_j)}<\infty . \end{aligned}$$

Where we have a comparison of the form \(W\ll Z\), the relevant large constant is to be fixed later; the complementary comparisons \(W\gtrsim Z\) are chosen so that the whole of \(\mathbb {R}^N\) is covered. The first of the regions is \(R_{0}\subset \mathbb {R}^N\), where \(|x|\gg |y|\). The next region is \(R_{1}\subset \mathbb {R}^N\), where \(|x|\lesssim |y|\) and \(|x|\lesssim 1\). The remaining region, where \(|x|\lesssim |y|\) and \(|x|\gtrsim 1\) (so that, with a suitable choice of constants, \(|y|\gg 1\)), is the main region \(R\subset \mathbb {R}^N\). Trivially, we always have the estimate \(|k(x,y)|\le C\).

In the region \(R_{0}\), we consider the phase function

$$\begin{aligned} \widetilde{\Phi _{x,y}}(t)=\frac{1}{|x|}\left( x\cdot t + y\cdot \Psi (t)\right) , \end{aligned}$$

for which \(\Vert \widetilde{\Phi _{x,y}}\Vert _{C^{M+1}(\overline{B(0,\delta )})}\lesssim _{M} 1\) and \(\left| \nabla \widetilde{\Phi _{x,y}}(t)\right| \gtrsim 1\), for \(|t|\le \delta \). Therefore, by the non-stationary phase lemma,

$$\begin{aligned} \left| k(x,y)\right| =\left| \int e^{2\pi i |x|\widetilde{\Phi _{x,y}}(t)}\phi (t)dt\right| \lesssim C_M\frac{1}{|x|^M}. \end{aligned}$$

Integrating the resulting estimate, we see

$$\begin{aligned}&\int \int \int _{R_{0}}|k(x,y)|^p(1+|(y,z)|)^{-p(L+\alpha )}dxdydz\\&\quad \lesssim \int \int \int _{R_{0}}\frac{1}{(1+|x|)^{Mp}}\frac{1}{(1+|(y,z)|)^{p(L+\alpha )}}dxdydz, \end{aligned}$$

which is finite provided M is taken sufficiently large; \(M>\frac{n}{p}\) suffices.
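Note also that the \((y,z)\)-integral converges because \(p(L+\alpha )>L\); indeed, for \(p>\frac{L+n}{L+\alpha +\frac{n}{2}}\) we have

$$\begin{aligned} p(L+\alpha )>\frac{(L+n)(L+\alpha )}{L+\alpha +\frac{n}{2}}=L+\frac{\frac{n}{2}L+n\alpha }{L+\alpha +\frac{n}{2}}>L. \end{aligned}$$

The same observation is used in the estimates below.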

Next, for \(R_{1}\), we see that

$$\begin{aligned}&\int \int \int _{R_{1}}|k(x,y)|^p(1+|(y,z)|)^{-p(L+\alpha )}dxdydz\\&\quad \lesssim \int _{|x|\lesssim 1}1dx\int \int (1+|(y,z)|)^{-p(L+\alpha )}dydz<\infty . \end{aligned}$$

We have thus established that \(\Vert k(x,y)(1+|(y,z)|)^{-(L+\alpha )}\Vert _{L^p(R_j,dxdydz)}<\infty \) for \(j\in \lbrace 0,1\rbrace \). It remains to show that \(\Vert k(x,y)(1+|(y,z)|)^{-(L+\alpha )}\Vert _{L^p(R,dxdydz)}<\infty \).

We can perform polar integration in the expression for k. Using the fact that \(\widetilde{\Psi }\) is radial, with \(\widetilde{\Psi }(t)=\widetilde{\Psi }_0(|t|)\) and \(\phi (t)=\phi _0(|t|)\), we expand

$$\begin{aligned} k(x,y)= & {} \int e^{2\pi i(x\cdot t + y\cdot \widetilde{\Psi }(t))}\phi (t)dt\\= & {} \int \int e^{2\pi i(x\cdot r\omega +y\cdot \widetilde{\Psi }_0(r))}\phi _0(r) d\sigma (\omega )r^{n-1}dr\\= & {} \int e^{2\pi iy\cdot \widetilde{\Psi }_0(r)}\int e^{2\pi irx\cdot \omega } d\sigma (\omega )\phi _0(r)r^{n-1}dr\\= & {} \int e^{2\pi iy\cdot \widetilde{\Psi }_0(r)}\hat{\sigma }(rx)\phi _0(r)r^{n-1}dr. \end{aligned}$$

For \(n>1\) we use the asymptotic expansion for \(\hat{\sigma }\)

$$\begin{aligned} \hat{\sigma }(rx)=\frac{1}{|rx|^{\frac{n-1}{2}}}\left( be^{2\pi ir|x|}+ae^{-2\pi ir|x|} +\varepsilon _{\hat{\sigma }}(rx)\right) , \end{aligned}$$

for \(|rx|\gtrsim 1\), where a and b are constants and \(|\varepsilon _{\hat{\sigma }}(rx)|\le C|rx|^{-1}\); see, for example, Chapter 8 of [12]. For \(n=1\) the expansion holds trivially with \(\varepsilon _{\hat{\sigma }}(rx)=0\), since we have that \(\hat{\sigma }(rx)= 2\cos (2\pi r|x|)=e^{2\pi ir|x|}+e^{-2\pi ir|x|}\) is the Fourier transform of the counting measure on \(\lbrace -1,1\rbrace \). To make use of the expansion, we use the cutoff function \(\eta \) introduced above, with \(\eta (r)=0\) for \(r\le \frac{1}{2}\) and \(\eta (r)=1\) for \(r\ge 1\). We see that

$$\begin{aligned} k(x,y)= \varepsilon _1(x,y)+I(x,y), \end{aligned}$$

where

$$\begin{aligned} \varepsilon _1(x,y)=\int _0^\infty e^{2\pi i\left( \sum _{j=1}^{L_1}y_j\psi _j(r)\right) } \widehat{\sigma }(rx)(1-\eta (|x|r))\phi _0(r)r^{n-1}dr \end{aligned}$$

and

$$\begin{aligned} I(x,y)=\int _0^\infty e^{2\pi i\left( \sum _{j=1}^{L_1}y_j\psi _j(r)\right) } \widehat{\sigma }(rx)\eta (|x|r)\phi _0(r)r^{n-1}dr. \end{aligned}$$

Let us look first to I(xy). Using the asymptotic expansion, we see that

$$\begin{aligned} I(x,y)= I_{+}(x,y)+I_{-}(x,y)+\varepsilon _2(x,y), \end{aligned}$$

where

$$\begin{aligned} I_{+}(x,y)= & {} b\int _0^\infty e^{2\pi i\left( r|x|+ \sum _{j=1}^{L_1}y_j\psi _j(r)\right) } \frac{1}{|rx|^{\frac{n-1}{2}}}\eta (|x|r)\phi _0(r)r^{n-1}dr,\\ I_{-}(x,y)= & {} a\int _0^\infty e^{2\pi i\left( -r|x|+ \sum _{j=1}^{L_1}y_j\psi _j(r)\right) } \frac{1}{|rx|^{\frac{n-1}{2}}}\eta (|x|r)\phi _0(r)r^{n-1}dr, \end{aligned}$$

and

$$\begin{aligned} \varepsilon _2(x,y)=\int _0^\infty e^{2\pi i\left( \sum _{j=1}^{L_1}y_j\psi _j(r)\right) } \frac{1}{|rx|^{\frac{n-1}{2}}}\varepsilon _{\hat{\sigma }}(rx)\eta (|x|r)\phi _0(r)r^{n-1}dr. \end{aligned}$$

The main terms are \(I_+\) and \(I_-\). The phases we consider are thus

$$\begin{aligned} \Phi _{x,y,+}(r)=r|x|+ \sum _{j=1}^{L_1}y_j\psi _j(r)\text { and }\Phi _{x,y,-}(r)=-r|x|+ \sum _{j=1}^{L_1}y_j\psi _j(r). \end{aligned}$$

We estimate the contributions of the terms \(I_+(x,y)\) and \(I_-(x,y)\) to our estimate on \(\Vert K\Vert _{L^p(R,dxdydz)}\) separately. Since the analysis is the same in either case, we only present the argument for \(I_+(x,y)\), with the relevant phase denoted by \(\Phi =\Phi _{x,y}=\Phi _{x,y,+}\). We apply Lemma 5.3 to analyse the contribution of \(I_+(x,y)\) to the \(L^p\) norm and work with reference to the quantity

$$\begin{aligned} H(x,y)=\inf _{|r|\le \delta }\sum _{j=1}^{d_{L_1}}\left| \Phi _{x,y}^{(j)}(r)\right| ^\frac{1}{j}. \end{aligned}$$

To carry out the \(L^p\) integration of \(I_+(x,y)(1+|(y,z)|)^{-(L+\alpha )}\) over R, we perform a dyadic partition on scales \(H\sim 2^l\) and \(|y|\sim 2^m\). By Lemma 5.2, we have \(H(x,y)\gtrsim |y|^\frac{1}{d_{L_1}}\) on R. It follows that \(l,m\ge 0\), provided the constants defining R are chosen so that \(|y|\gg 1\) on R. When \(H\!=\!H(x,y)\!\sim \! 2^l\), we have as a consequence of Lemma 5.3 that \(|I_+(x,y)|^p\lesssim \frac{1}{|x|^{p\frac{n-1}{2}}}2^{-lp}\). Thus we see

$$\begin{aligned}&\int \int \int _R\left| I_{+}(x,y)\right| ^p\frac{1}{(1+|(y,z)|)^{p(L+\alpha )}}dxdydz\,\\&\quad \lesssim \sum _{m\ge 0}\sum _{l\ge 0}2^{-lp}\int \int \int _{\left\{ (x,y,z); H\sim 2^l,|x|\lesssim |y|,|z|\lesssim |y|, |y|\sim 2^m\right\} } \frac{1}{|x|^{p\frac{n-1}{2}}}\frac{1}{(1+|y|)^{p(L+\alpha )}}dxdydz\\&\qquad +\sum _{m\ge 0}\sum _{l\ge 0}2^{-lp}\int \int \int _{\left\{ (x,y,z); H\sim 2^l; |x|\lesssim |y|\lesssim |z|,|y|\sim 2^m\right\} }\frac{1}{|x|^{p\frac{n-1}{2}}}\frac{1}{(1+|z|)^{p(L+\alpha )}}dxdydz\\&\quad \lesssim \sum _{m\ge 0}\sum _{l\ge 0}2^{-lp}2^{-mp(L+\alpha )}2^{L_2m}\int \int _{\left\{ (x,y); H(x,y)\sim 2^l, |x|\lesssim |y|,|y|\sim 2^m \right\} }\frac{1}{|x|^{p\frac{n-1}{2}}}dxdy\\&\quad =\sum _{m\ge 0}\sum _{l\ge 0}2^{-lp}2^{-mp(L+\alpha )}2^{L_2m}J_{l,m}, \end{aligned}$$

where

$$\begin{aligned} J_{l,m}=\int \int _{\left\{ (x,y); H(x,y)\sim 2^l,|x|\lesssim |y|,|y|\sim 2^m\right\} }\frac{1}{|x|^{p\frac{n-1}{2}}}dxdy. \end{aligned}$$

We split the sum into two and write

$$\begin{aligned} \sum _{m\ge 0}\sum _{l\ge 0}2^{-lp}2^{-mp(L+\alpha )}2^{L_2m}J_{l,m}=S_1+S_2, \end{aligned}$$

where

$$\begin{aligned} S_1=\sum _{l\ge 0}2^{-pl}\sum _{m\ge 2l}2^{-pm(L+\alpha )}2^{L_2m}J_{l,m} \end{aligned}$$

and

$$\begin{aligned} S_2=\sum _{l\ge 0}2^{-pl}\sum _{2l> m\ge 0}2^{-pm(L+\alpha )}2^{L_2m}J_{l,m}. \end{aligned}$$

We provide two different estimates on the size of \(J_{l,m}\). The first estimate is the trivial estimate

$$\begin{aligned} J_{l,m}\le \int \int _{\left\{ (x,y);|y|\lesssim 2^m,|x|\lesssim 2^m\right\} }\frac{1}{|x|^{p\frac{n-1}{2}}}dxdy \le 2^{mL_1} 2^{mn-p\left( \frac{n-1}{2}\right) m}. \end{aligned}$$
(14)

The second estimate requires a little more care and it is here we work by the method from [1]. Since \(H(x,y)\sim 2^l\) we will show that there exists some

$$\begin{aligned} r_u\in \left\{ \frac{1}{2^l}[-2^l\delta ],\frac{1}{2^l}[-2^l\delta ]+\frac{1}{2^l},\ldots ,\frac{1}{2^l}[2^l\delta ]-\frac{1}{2^l},\frac{1}{2^l}[2^l\delta ]\right\} \text { such that }\left| \Phi '(r_u)\right| \lesssim 2^l, \end{aligned}$$

which is pivotal to our estimate. We can choose \(r_u=\frac{1}{2^l}\left[ 2^lr^*\right] \), where

$$\begin{aligned} \sum _{j=1}^{d_{L_1}}\left| \Phi ^{(j)}(r^*)\right| ^\frac{1}{j}=\inf _{r\in [-\delta ,\delta ]}\sum _{j=1}^{d_{L_1}}\left| \Phi ^{(j)}(r)\right| ^\frac{1}{j}=H(x,y). \end{aligned}$$

We have that

$$\begin{aligned}\Phi '(r_u)=\sum _{j=0}^{d_{L_1}-1} \frac{1}{j!}\Phi ^{(1+j)}(r^*)(r_u-r^*)^j. \end{aligned}$$

Using the inequalities \(|r_u-r^*|\le 2^{-l}\) and \(H(x,y)\sim 2^l\), we then see that

$$\begin{aligned} \left| \Phi '(r_u)\right| \lesssim \sum _{j=0}^{d_{L_1}-1} \left| \Phi ^{(1+j)}(r^*)\right| 2^{-lj} \lesssim \sum _{j=0}^{d_{L_1}-1} 2^{l(j+1)}2^{-lj}\sim 2^l. \end{aligned}$$

As a consequence, we have the inclusion

$$\begin{aligned} \left\{ (x,y);H(x,y)\sim 2^l\right\} \subset \bigcup _{u=[-2^l\delta ]}^{[2^l\delta ]} \left\{ (x,y);\left| \Phi '(r_u)\right| \lesssim 2^l\right\} . \end{aligned}$$

For \(r_u\in \left\{ \frac{1}{2^l}[-2^l\delta ],\frac{1}{2^l}[-2^l\delta ]+\frac{1}{2^l},\ldots ,\frac{1}{2^l}[2^l\delta ]-\frac{1}{2^l},\frac{1}{2^l}[2^l\delta ]\right\} \), we will make the change of variables \(s\mapsto \tilde{s}=s+\sum _{j=1}^{L_1}y_j\psi _j'(r_u)=\Phi '(r_u)\), where \(s=|x|\). The change of variables has a unit Jacobian. The variables \(\omega \in S^{n-1},s\in \mathbb {R},\text { and }y\in \mathbb {R}^{L_1}\) are to be taken with \(s\omega = x\). We denote by \(e_1\) the vector \((1,0,\ldots ,0)\in \mathbb {R}^n\) and note \(H(s\omega ,y)=H(se_1,y)\) to see that

$$\begin{aligned} J_{l,m}= & {} \int \int \int _{\left\{ (\omega ,s,y);H(s\omega ,y)\sim 2^l, |y|\sim 2^m, s\lesssim |y|\right\} }\frac{1}{s^{p\frac{n-1}{2}}}d\sigma (\omega )\chi _{[0,\infty )}(s)s^{n-1}dsdy\\\lesssim & {} \sum _{u=[-2^l\delta ]}^{[2^l\delta ]}\int \int _{\left\{ (s,y);\left| \Phi _{se_1,y}'(r_u)\right| \lesssim 2^l, |y|\lesssim 2^m\right\} }s^{n-1-p\frac{n-1}{2}}\chi _{[0,\infty )}(s)dsdy\\\lesssim & {} \sum _{u=[-2^l\delta ]}^{[2^l\delta ]}\int \int _{\left\{ (\tilde{s},y);|\tilde{s}|\lesssim 2^l,|y|\lesssim 2^m\right\} } \left| \tilde{s}-\sum _{j=1}^{L_1}y_j\psi _j'(r_u)\right| ^{n-1-p\frac{n-1}{2}}d\tilde{s}dy\\\lesssim & {} \sum _{u=[-2^l\delta ]}^{[2^l\delta ]}\int \int _{\left\{ (\tilde{s},y);|\tilde{s}|\lesssim 2^l,|y|\lesssim 2^m\right\} }\left( |\tilde{s}|^{n-1-p\frac{n-1}{2}}+\left| \sum _{j=1}^{L_1}y_j\psi _j'(r_u)\right| ^{n-1-p\frac{n-1}{2}}\right) d\tilde{s}dy\\\lesssim & {} 2^{2l} \int _{\left\{ y;|y|\lesssim 2^m\right\} }\left( 2^{l\left( n-1-p\frac{n-1}{2}\right) }+2^{m\left( n-1-p\frac{n-1}{2}\right) }\right) dy \\\lesssim & {} 2^{2l} 2^{L_1m}\left( 2^{l\left( n-1-p\frac{n-1}{2}\right) }+2^{m\left( n-1-p\frac{n-1}{2}\right) }\right) . \end{aligned}$$

We use this last estimate in the case that \(2l\le m\), in which case we see that

$$\begin{aligned} J_{l,m}\lesssim 2^{2l} 2^{L_1m}2^{m\left( n-1-p\frac{n-1}{2}\right) }. \end{aligned}$$
(15)

We use this bound to see that

$$\begin{aligned} S_1= & {} \sum _{l\ge 0}2^{-pl}\sum _{m\ge 2l}2^{-pm(L+\alpha )}2^{L_2m}J_{l,m}\\\lesssim & {} \sum _{m\ge 0}2^{-pm(L+\alpha )}2^{Lm}2^{m\left( n-1-p\frac{n-1}{2}\right) }\sum _{0\le l \le \frac{m}{2}}2^{2l}2^{-pl}\\\lesssim & {} \sum _{m\ge 0}2^{-pm(L+\alpha )}2^{Lm}2^{m\left( n-1-p\frac{n-1}{2}\right) }2^{m}2^{-p\frac{m}{2}}, \end{aligned}$$

which is finite if \(p>\frac{L+n}{L+\alpha +\frac{n}{2}}\).
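The finiteness claim amounts to checking that the exponent of \(2^m\) is negative,

$$\begin{aligned} -p(L+\alpha )+L+\left( n-1-p\frac{n-1}{2}\right) +1-\frac{p}{2}=(L+n)-p\left( L+\alpha +\frac{n}{2}\right) <0, \end{aligned}$$

which is exactly the condition \(p>\frac{L+n}{L+\alpha +\frac{n}{2}}\); the same exponent governs the sum \(S_2\) below.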

Using the trivial bound (14) we see that

$$\begin{aligned} S_2= & {} \sum _{l\ge 0}2^{-pl}\sum _{2l> m\ge 0}2^{-pm(L+\alpha )}2^{L_2m}J_{l,m}\\\lesssim & {} \sum _{m\ge 0}2^{-pm(L+\alpha )}2^{mL} 2^{m\left( n-p\frac{n-1}{2}\right) }\sum _{2l> m}2^{-pl}\\\lesssim & {} \sum _{m\ge 0}2^{-pm(L+\alpha )}2^{mL} 2^{m\left( n-p\frac{n-1}{2}\right) }2^{-p\frac{m}{2}}, \end{aligned}$$

which is finite if \(p>\frac{L+n}{L+\alpha +\frac{n}{2}}\).

The analysis may be repeated to show that \(\Vert I_-(x,y)(1+|(y,z)|)^{-(L+\alpha )}\Vert _{L^p(R,dxdydz)}<\infty \).

To complete the proof, we must show that \(\Vert \varepsilon _i(x,y)(1+|(y,z)|)^{-L-\alpha }\Vert _{L^p(R,dxdydz)}<\infty \) for each \(i\in \lbrace 1,2\rbrace \) when \(p>\frac{n+L}{L+\alpha +\frac{n}{2}}\). For \(\varepsilon _2\), using \(|\varepsilon _{\hat{\sigma }}(rx)|\lesssim |rx|^{-1}\), we have

$$\begin{aligned} |\varepsilon _2(x,y)|= & {} \left| \int _0^\infty e^{2\pi i\left( \sum _{j=1}^{L_1}y_j\psi _j(r)\right) } \frac{1}{|rx|^{\frac{n-1}{2}}}\varepsilon _{\hat{\sigma }}(rx)\eta (|x|r)\phi _0(r)r^{n-1}dr\right| \\\le & {} C\frac{1}{|x|^{\frac{n-1}{2}}}\frac{1}{|x|}\int _{\frac{1}{2}|x|^{-1}}^1 r^{-1}r^{\frac{n-1}{2}}dr\sim \frac{1}{|x|^{\frac{n+1}{2}}},\text { for }(x,y,z)\in R. \end{aligned}$$

For \(\varepsilon _1\), we find

$$\begin{aligned} |\varepsilon _1(x,y)|= & {} \left| \int _0^\infty e^{2\pi i\left( \sum _{j=1}^{L_1}y_j\psi _j(r)\right) } \widehat{\sigma }(rx)(1-\eta (|x|r))\phi _0(r)r^{n-1}dr\right| \\\lesssim & {} \int _0^{|x|^{-1}} r^{n-1}dr\sim \frac{1}{|x|^n},\text { for }(x,y,z)\in R. \end{aligned}$$

In either case, we know that \(|x|\gtrsim 1\) so that \(\left| \varepsilon _i(x,y)\right| \lesssim \frac{1}{|x|^\frac{n+1}{2}},\text { for }(x,y,z)\in R\).

Of course, we also have the bound \(|\varepsilon _i(x,y)|\le C\) for each i. By carrying out polar integration, we find that

$$\begin{aligned}&\Vert \varepsilon _i(x,y)(1+|(y,z)|)^{-L-\alpha }\Vert ^p_{L^p(R,dxdydz)}\\&\quad \lesssim \int \int \int (1+|(y,z)|)^{-p(L+\alpha )}(1+|x|)^{-p\frac{n+1}{2}}dxdydz\\&\quad \sim \int _1^\infty r^{(n+L-1)-p(L+\alpha +\frac{n+1}{2})}dr, \end{aligned}$$

which is finite if \(p>\frac{n+L}{L+\alpha +\frac{n+1}{2}}\). In particular, this is true if \(p>\frac{n+L}{L+\alpha +\frac{n}{2}}\). This completes the proof that \(\Vert K\Vert _{L^p(\mathbb {R}^N)}<\infty \). \(\square \)

6 A Necessary Condition for \(K\in L^p\); Proof of (5)

We turn to the proof of the remaining implication of Theorem 1.1. That is the necessary condition for the Bochner–Riesz kernel to be in \(L^p\), (5). We will use the product expression, (10), for the kernel K. Since we have the lower bound \(|A_{\alpha }(y,z)| \ge c_\alpha |(y,z)|^{-\alpha - L}\) for large \(|(y,z)|\) from (13), it will suffice to establish a lower bound on the size of the oscillatory integral \(k(x,y)\) in a sufficiently large region. We later prove the following.

Lemma 6.1

Let

$$\begin{aligned} \begin{aligned} R&=\left\{ (x,y)\in \mathbb {R}^{n}\times \mathbb {R}^{L_1};\frac{\delta _1}{2}\le \frac{|x|}{|y_1|}\le \delta _1, \frac{|y_i|}{|y_1|}\le \delta _1\text { for }i\ge 2,\right. \\&\qquad \left. y_1\ge E, y_i\ge 1\text { for }i\ge 2,\text { and } |x|\ge 1\right\} , \end{aligned} \end{aligned}$$
(16)

where the parameter \(\delta _1\) is some sufficiently small constant and the parameter E is some sufficiently large constant. For k, (11), we have the bound

$$\begin{aligned} |k(x,y)|\gtrsim |x|^{-\frac{n}{2}},\text { for }(x,y)\in R. \end{aligned}$$

By an application of this lemma, we can now prove part of Theorem 1.1, the necessary condition for \(K\in L^p\).

Proof of (5)

Since \(\alpha >0\), the condition \(p>\frac{L+n}{L+\alpha +\frac{n}{2}}\) holds automatically when \(p\ge 2\), so we may assume \(p<2\). In this case we see, using the lower bound on \(|A_\alpha |\) from Lemma 3.1 together with Lemma 6.1,

$$\begin{aligned}&\int \int \int _{(x,y,z)\in R\times \mathbb {R}^{L_2}}|K (x,y,z)|^pdxdydz\\&\quad \gtrsim \int \int _E^{\max \lbrace E,|z|\rbrace } (1+|z|)^{-p(L+\alpha )}y_1^{L_1-1}\int _{\frac{\delta _1y_1}{2}\le r\le \delta _1 y_1}r^{-\frac{pn}{2}+{n-1}}drdy_1dz\\&\qquad +\int _E^{\infty } (1+y_1)^{-p(L+\alpha )}y_1^{L_1-1}y_1^{L_2}\int _{\frac{\delta _1y_1}{2}\le r\le \delta _1 y_1}r^{-\frac{pn}{2}+{n-1}}drdy_1\\&\qquad \sim \int _E^\infty y_1^{-p(L+\alpha )-\frac{pn}{2}+n+L-1}dy_1. \end{aligned}$$

This last expression is finite if and only if

$$\begin{aligned} L-\frac{pn}{2}-p(L+\alpha )+n<0 \iff \frac{L+n}{p}-\frac{L+n}{2}-\frac{L}{2}<\alpha . \end{aligned}$$

\(\square \)

Proof of Lemma 6.1

From the definition at (4), we know that \(d_1\) is even. This will allow us to find a unique critical point of the relevant phase function.

Using polar coordinates, we have

$$\begin{aligned} k(x,y) = \int _0^{\infty } e^{2\pi i \Psi _y(r)} r^{n-1} \phi _0(r) \left[ \int _{{{\mathbb {S}}}^{n-1}} e^{2\pi i r x\cdot \omega } d\sigma (\omega ) \right] dr \end{aligned}$$

where \(\Psi _y(r) = y_1 r^{d_1} + y_2 r^{d_2} + \cdots +y_{L_1}r^{d_{L_1}}\) and the inner integral is the Fourier transform of the surface measure \(\sigma \) on \({{\mathbb {S}}}^{n-1}\), \({\hat{\sigma }(r x)}\). Our aim is to decompose \(k(x,y) = m(x,y) + \varepsilon (x,y)\) where the main term, m, satisfies

$$\begin{aligned} |m(x,y)|\gtrsim |x|^{-\frac{n}{2}},\text { for }(x,y)\in R \end{aligned}$$
(17)

and the error term \(\varepsilon \) satisfies \(|\varepsilon (x,y)| \lesssim \epsilon |x|^{-\frac{n}{2}}\), for \((x,y)\in R\), where \(\epsilon \) is some suitable small parameter.

We first decompose \(k= k_1 + \varepsilon _1\), where

$$\begin{aligned} k_1(x,y)=\int _{|x|^{-1}}^{\infty } e^{2\pi i \Psi _y(r)} {\hat{\sigma }(r x)} r^{n-1} \phi _0(r)dr\end{aligned}$$

and

$$\begin{aligned}\varepsilon _1(x,y)=\int _{0}^{|x|^{-1}} e^{2\pi i \Psi _y(r)} {\hat{\sigma }(r x)} r^{n-1} \phi _0(r) dr. \end{aligned}$$

We see that

$$\begin{aligned} |\varepsilon _1(x,y)| \lesssim |x|^{-n} \le \frac{\epsilon }{3} |x|^{-\frac{n}{2}},\text { for } (x,y)\in R, \end{aligned}$$
(18)

provided E is chosen large enough after fixing \(\delta _1\), since \(|x|\ge \delta _1|y_1|/2\ge \delta _1 E/2\).

In the case \(n\ge 2\), for \(k_1\) where \(r \ge |x|^{-1}\), we use the following well-known asymptotic formula; see, for example, Chapter 8 of [12]. For \(r|x| \ge 1\),

$$\begin{aligned} {\hat{\sigma }}(rx) = \frac{1}{(r|x|)^{\frac{n-1}{2}}}\left( a e^{- 2\pi i r |x|} + b e^{2\pi i r|x|} + \varepsilon _{\hat{\sigma }}(r|x|)\right) , \end{aligned}$$

where \(a, b \not = 0\) and \(|\varepsilon _{\hat{\sigma }}(r|x|)| \lesssim (r|x|)^{-1}\). In the case \(n=1\) the same formula holds trivially with \(a=b=1\) and \(\varepsilon _{\hat{\sigma }}=0\), since \(\sigma \) is the counting measure on \(\lbrace -1,1\rbrace \). This gives a corresponding decomposition for \(k_1 = k_{1,1} + \varepsilon _2 + \varepsilon _3\).

We have that

$$\begin{aligned} |\varepsilon _3(x,y)|= & {} \left| \int _{|x|^{-1}}^\infty e^{2\pi i \Psi _y(r)}\varepsilon _{\hat{\sigma }}(r|x|)r^{n-1}\phi _0(r)dr\right| \nonumber \\\lesssim & {} |x|^{-\frac{n+1}{2}} \int _0^1 r^{\frac{n-3}{2}} dr \le \frac{\epsilon }{3} |x|^{-\frac{n}{2}},\text { for }(x,y)\in R, \end{aligned}$$
(19)

provided E is chosen large enough after fixing \(\delta _1\), as previously.

Now we consider

$$\begin{aligned} \varepsilon _2(x,y) = b \frac{1}{|x|^{\frac{n-1}{2}}} \int _{|x|^{-1}}^{\infty } e^{2\pi i\left( |x| r + \Psi _y(r)\right) } r^{\frac{n-1}{2}} \phi _0(r) dr. \end{aligned}$$

Because \(y_1 \ge 0\), for \(r\in {{\,\mathrm{supp}\,}}\phi _0\),

$$\begin{aligned} | |x| + \Psi _y ' (r) |= & {} | |x| + d_1 y_1 r^{d_1 -1} + O(\delta _1 y_1 r^{d_1 - 1}) | \\\ge & {} |x| + d_1 y_1 r^{d_1 -1} (1 - C \delta _1) \ge |x|, \end{aligned}$$

when \(\delta _1 > 0\) is sufficiently small. Hence integrating by parts shows that

$$\begin{aligned} |\varepsilon _2(x,y)|\lesssim |x|^{-\frac{n+1}{2}}\le \frac{\epsilon }{3} |x|^{-\frac{n}{2}}\text { for }(x,y)\in R , \end{aligned}$$
(20)

as before, provided E is chosen large enough after \(\delta _1\) has been fixed.

Finally, we have

$$\begin{aligned} k_{1,1}(x,y)=a\frac{1}{|x|^{\frac{n-1}{2}}} \int _{|x|^{-1}}^{\infty } e^{2\pi i \left( - |x| r + \Psi _y(r)\right) } r^{\frac{n-1}{2}} \phi _0(r) dr. \end{aligned}$$

We make the change of variables \(r = \sigma s\) where \(\sigma ^{d_1 - 1} = \frac{|x|}{y_1} \sim \delta _1\) so that

$$\begin{aligned} k_{1,1}(x,y)=g(x,y) \sqrt{\lambda } \int _{\lambda ^{-1}}^{\infty } e^{2\pi i \lambda \Phi _{x,y}(s)} s^{\frac{n-1}{2}} \phi _0(\sigma s)ds, \end{aligned}$$

where \(\lambda = \sigma |x| \) and \(g(x,y) = a |x|^{-\frac{n}{2}} \sigma ^{\frac{n}{2}} \sim _{\delta _1} |x|^{-\frac{n}{2}}\) on R. Here the phase is given by

$$\begin{aligned} \Phi _{x,y}(s)=-s + s^{d_1} + \epsilon _2 s^{d_2} + \cdots + \epsilon _{L_1} s^{d_{L_1}},\text { where }\epsilon _j = \epsilon _j(x,y) = \frac{y_j}{y_1}\sigma ^{d_j-d_1}. \end{aligned}$$

We see that \(|\epsilon _js^{d_j-1}|\ll |s^{d_1-1}|\) for \(|s|\le \sigma ^{-1}\) and \((x,y)\in R\). Hence, for \(s\in {{\,\mathrm{supp}\,}}\phi _0(\sigma \,\cdot \,)\), the phase \(\Phi _{x,y}(s)\) has a unique nondegenerate critical point, \(s_{*}\). Note that \(\lambda \gtrsim \delta _1^{\frac{1}{d_1-1}}\delta _1E\gg 1\), provided E is chosen large enough after \(\delta _1\) has been fixed. The critical point \(s_{*} \sim 1\) and the classical stationary phase argument (see [16]) shows that

$$\begin{aligned} \left| \int _{\lambda ^{-1}}^{\infty } e^{2\pi i \lambda \Phi _{x,y}(s)} s^{\frac{n-1}{2}} \phi _0(\sigma s) ds \right| \sim \lambda ^{-1/2}, \end{aligned}$$

implying that

$$\begin{aligned} |k_{1,1}(x,y)|\gtrsim _{\delta _1}|x|^{-\frac{n}{2}},\text { for }(x,y)\in R. \end{aligned}$$
(21)

Hence (18), (19), (20) and (21) show that, with \(m(x,y):=k_{1,1}(x,y)\), we can write \(k(x,y) = m(x,y) + \varepsilon (x,y)\) where \(|m(x,y)|\gtrsim _{\delta _1}|x|^{-\frac{n}{2}}\) and \(|\varepsilon (x,y)| \lesssim \epsilon |x|^{-\frac{n}{2}}\). A suitably small choice of \(\epsilon \) will ensure that \(|\varepsilon (x,y)|\le \frac{1}{2}|m(x,y)|\) so that

$$\begin{aligned} |k(x,y)|\gtrsim _{\delta _1}|x|^{-\frac{n}{2}},\text { for }(x,y)\in R, \end{aligned}$$

as required. Recall that the bounds featuring small \(\epsilon \) were obtained by making a large choice of E after a small \(\delta _1\) had been chosen. \(\square \)