1 Introduction

The purpose of this article is to prove the existence of solutions to the time-harmonic Maxwell’s equations and to estimate the solutions (electromagnetic fields) in terms of the input data (currents) in \(L^p\)-spaces. Let \(({\mathcal {E}},{\mathcal {H}}): {\mathbb {R}}\times {\mathbb {R}}^3 \rightarrow {\mathbb {C}}^3 \times {\mathbb {C}}^3\) denote the electric and magnetic field, \(({\mathcal {D}},{\mathcal {B}}): {\mathbb {R}}\times {\mathbb {R}}^3 \rightarrow {\mathbb {C}}^3 \times {\mathbb {C}}^3\) the displacement field and magnetic induction, and \(({\mathcal {J}}_e,{\mathcal {J}}_m): {\mathbb {R}}\times {\mathbb {R}}^3 \rightarrow {\mathbb {C}}^3 \times {\mathbb {C}}^3\) the electric and magnetic current. Maxwell’s equations in the absence of charges are given by

$$\begin{aligned} \left\{ \begin{array}{cl} \partial _t {\mathcal {D}} &{}= \nabla \times {\mathcal {H}} - {\mathcal {J}}_e, \quad \nabla \cdot {\mathcal {D}} = \nabla \cdot {\mathcal {B}} = \nabla \cdot {\mathcal {J}}_e = \nabla \cdot {\mathcal {J}}_m = 0, \\ \partial _t {\mathcal {B}} &{}= - \nabla \times {\mathcal {E}} + {\mathcal {J}}_m, \quad (t,x) \in {\mathbb {R}}\times {\mathbb {R}}^3. \end{array} \right. \end{aligned}$$
(1)

In physical applications, the vector fields are real-valued. We suppose that displacement and magnetic field are related to electric field and magnetic induction through time-independent and spatially homogeneous material laws. This leads to supplementing (1) with

$$\begin{aligned} {\mathcal {D}}(t,x) = \varepsilon {\mathcal {E}}(t,x), \quad {\mathcal {B}}(t,x) = \mu {\mathcal {H}}(t,x) , \quad \varepsilon \in {\mathbb {R}}^{3 \times 3}, \; \mu \in {\mathbb {R}}^{3 \times 3}. \end{aligned}$$
(2)

\(\varepsilon \) is referred to as permittivity, and \(\mu \) is referred to as permeability. Permittivity and permeability are positive-definite in classical physical applications. We suppose in the following that \(\varepsilon \) and \(\mu \) are diagonal matrices and write

$$\begin{aligned} \varepsilon = \text {diag}(\varepsilon _1,\varepsilon _2,\varepsilon _3), \quad \mu = \text {diag}(\mu _1,\mu _2,\mu _3), \quad \varepsilon _i,\mu _j > 0. \end{aligned}$$
(3)

Maxwell’s equations are invariant under changes of basis, i.e., under the transformations \(X'(t,x) = MX(t,M^t x)\) of the involved vector fields with \(M \in SO(3)\), and under the time-parity symmetry \((t,x) \rightarrow (-t,-x)\). Hence, the more general case in which \(\varepsilon \) and \(\mu \) are commuting positive-definite matrices, or, equivalently, simultaneously orthogonally diagonalizable ones, reduces to (3). For physical explanations, we refer to [15, 34]. The assumption \(\nabla \cdot {\mathcal {D}} = 0\) corresponds to the absence of electrical charges, and \(\nabla \cdot {\mathcal {B}} = 0\) translates to the absence of magnetic monopoles. Due to conservation of charge, the currents are likewise divergence-free. Since magnetic monopoles are hypothetical, \({\mathcal {J}}_m\) vanishes in most applications. Here, we consider the more general case, which highlights the symmetry between \({\mathcal {E}}\) and \({\mathcal {H}}\). In this paper we focus on the fully anisotropic case

$$\begin{aligned} \frac{\varepsilon _1}{\mu _1}\ne \frac{\varepsilon _2}{\mu _2}\ne \frac{\varepsilon _3}{\mu _3} \ne \frac{\varepsilon _1}{\mu _1}. \end{aligned}$$
(4)

Upon considering the time-harmonic, monochromatic ansatz

$$\begin{aligned} \left\{ \begin{array}{cl} {\mathcal {D}}(t,x) &{}= e^{i \omega t} D(x), \quad {\mathcal {B}}(t,x) = e^{i \omega t} B(x), \\ {\mathcal {J}}_e(t,x) &{}= e^{i \omega t} J_{e}(x), \quad {\mathcal {J}}_m(t,x) = e^{i \omega t} J_{m}(x) \end{array} \right. \end{aligned}$$
(5)

with \((D,B): {\mathbb {R}}^{3} \rightarrow {\mathbb {C}}^3 \times {\mathbb {C}}^3\), \((J_{e},J_{m}) : {\mathbb {R}}^3 \rightarrow {\mathbb {C}}^3 \times {\mathbb {C}}^3\) divergence-free and \(E(x) = \varepsilon ^{-1} D(x)\), \(H(x) = \mu ^{-1} B(x)\) according to (2), we find

$$\begin{aligned} \left\{ \begin{array}{cl} i \omega D &{}= \nabla \times H - J_{e}, \quad \nabla \cdot J_{e} = \nabla \cdot J_{m} = 0, \\ i \omega B &{}= - \nabla \times E + J_{m}. \end{array} \right. \end{aligned}$$

With (2) we arrive at the equations

$$\begin{aligned} \left\{ \begin{array}{cl} \nabla \times E + i \omega \mu H &{}= J_m, \quad \nabla \cdot J_m = \nabla \cdot J_e = 0, \\ \nabla \times H - i \omega \varepsilon E &{}= J_e. \end{array} \right. \end{aligned}$$
(6)

Applying the divergence operator, we find that \(D= \varepsilon E\) and \(B =\mu H\) are automatically divergence-free. However, in view of the anisotropic material laws (2), this is in general not true for E and H. In what follows, \(W^{m,p}({\mathbb {R}}^d)\) denotes the \(L^p\)-based Sobolev space defined by

$$\begin{aligned} W^{m,p}({\mathbb {R}}^d) = \{ f \in L^p({\mathbb {R}}^d) \, : \; \partial ^\alpha f \in L^p({\mathbb {R}}^d) \text { for all } \alpha \in {\mathbb {N}}_0^d, \, |\alpha | \le m \}. \end{aligned}$$

We prove the following:

Theorem 1.1

Let \(1 \le p_1, p_2, q \le \infty \), let \(\varepsilon ,\mu \in {\mathbb {R}}^{3\times 3}\) be as in (3) and (4), and let \((J_e,J_m) \in L^{p_1}({\mathbb {R}}^3) \cap L^{p_2}({\mathbb {R}}^3)\) be divergence-free. If

$$\begin{aligned} \begin{aligned}&\qquad \frac{1}{p_1} > \frac{3}{4}, \quad \frac{1}{q} < \frac{1}{4}, \quad \frac{1}{p_1} - \frac{1}{q} \ge \frac{2}{3}, \\&\text { and }\; 0 \le \frac{1}{p_2} - \frac{1}{q} \le \frac{1}{3}, \quad (p_2,q) \notin \{(1,1), (1,\frac{3}{2}), (3,\infty ), (\infty ,\infty ) \}, \end{aligned} \end{aligned}$$
(7)

then, for any given \(\omega \in {\mathbb {R}}{\setminus }\{0\}\), there exists a distributional time-harmonic solution to the fully anisotropic Maxwell’s equations (6) that satisfies

$$\begin{aligned} \Vert (E,H) \Vert _{L^q({\mathbb {R}}^3)} \lesssim _{p_1,p_2,q,\omega } \Vert (J_e,J_m) \Vert _{L^{p_1}({\mathbb {R}}^3) \cap L^{p_2}({\mathbb {R}}^3)}. \end{aligned}$$
(8)

If additionally \(J_e,J_m\in L^q({\mathbb {R}}^3)\), \(q<\infty \), then \(E,H\in W^{1,q}({\mathbb {R}}^3)\) is a weak solution satisfying

$$\begin{aligned} \Vert (E,H)\Vert _{W^{1,q}({\mathbb {R}}^3)} \lesssim _{p_1,q,\omega } \Vert (J_e,J_m)\Vert _{L^{p_1}({\mathbb {R}}^3) \cap L^q({\mathbb {R}}^3)}. \end{aligned}$$
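For concreteness, condition (7) can be encoded as a predicate on the reciprocal exponents. The sketch below is a hypothetical helper of ours, not part of the paper; it checks, e.g., that \((p_1,p_2,q)=(1,2,5)\) is admissible, while the excluded endpoint \((p_2,q)=(3,\infty )\) is not.

```python
from fractions import Fraction as F

def admissible(u1, u2, v):
    """Condition (7) in terms of u1 = 1/p1, u2 = 1/p2, v = 1/q (p = infinity <-> 0)."""
    excluded = {(F(1), F(1)), (F(1), F(2, 3)), (F(1, 3), F(0)), (F(0), F(0))}
    return (u1 > F(3, 4) and v < F(1, 4) and u1 - v >= F(2, 3)
            and F(0) <= u2 - v <= F(1, 3) and (u2, v) not in excluded)

assert admissible(F(1), F(1, 2), F(1, 5))         # (p1, p2, q) = (1, 2, 5)
assert not admissible(F(1, 2), F(1, 2), F(1, 5))  # violates 1/p1 > 3/4
assert not admissible(F(1), F(1, 3), F(0))        # excluded pair (p2, q) = (3, infinity)
```

Exact rational arithmetic avoids any floating-point ambiguity at the boundary cases \(\frac{1}{p_1} - \frac{1}{q} = \frac{2}{3}\) and \(\frac{1}{p_2} - \frac{1}{q} = \frac{1}{3}\).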

Remark 1.2

  (a)

    The time-harmonic solutions constructed in Theorem 1.1 give rise to solutions of the original time-dependent Maxwell equations via the superposition principle. More precisely, given the solutions \((E_\omega ,H_\omega )\) provided by Theorem 1.1 for each frequency parameter \(\omega \in {\mathbb {R}}{\setminus }\{0\}\), a formal solution of (1) is given by

    $$\begin{aligned} {\mathcal {E}}(x,t):= \int _{\mathbb {R}}E_\omega (x)e^{i\omega t} \,d\omega ,\qquad {\mathcal {H}}(x,t):= \int _{\mathbb {R}}H_\omega (x)e^{i\omega t} \,d\omega \end{aligned}$$

    and accordingly for \({\mathcal {D}},{\mathcal {B}}\) via (2).

  (b)

    Distributional solutions to (6) are not unique: Already in the isotropic case \(\varepsilon = \mu = 1_{3 \times 3}\) the plane waves

    $$\begin{aligned} D(x) = E(x) = e^{i \omega x_3} e_1, \quad H(x) = B(x) = e^{i \omega x_3} e_2 \end{aligned}$$

    satisfy (6) with \(J_e = J_m = 0\). However, these solutions are not decaying. In the easier case of, e.g., the Helmholtz equation

    $$\begin{aligned} (\omega ^2 + \Delta ) u = 0 \qquad \text {in }{\mathbb {R}}^3, \end{aligned}$$
    (9)

    one can enforce uniqueness by suitable radiation conditions like the Sommerfeld outgoing radiation condition. This theory has a counterpart in the case of regular characteristic surfaces, which is \(\{\xi \in {\mathbb {R}}^3: |\xi | = \omega \}\) for (9), see [46, Theorem A]. However, we shall see that the Fresnel characteristic surface for (6) is not smooth. We do not know of a physically natural radiation condition for time-harmonic Maxwell’s equations in the fully anisotropic case, which seems to be the key point for uniqueness. Nevertheless, we expect that a clever notion of outgoing solution for (6) should lead to uniqueness as in the case of the Helmholtz equation. Here, solution formula (26) might be helpful.

  (c)

    The dependence of our estimates on \(\omega \) can be made more explicit. In fact, consider a solution \((E,H)\) to (6) for a given \(\omega \in {\mathbb {R}}{\setminus }\{0\}\). Then \(E_\omega (x):=E(\omega ^{-1}x)\), \(H_\omega (x):=H(\omega ^{-1}x)\) solve the corresponding problem with frequency parameter 1 and currents \(J_{e,\omega }(x):=\omega ^{-1}J_e(\omega ^{-1}x)\), \(J_{m,\omega }(x):=\omega ^{-1}J_m(\omega ^{-1}x)\). We then obtain

    $$\begin{aligned} \begin{aligned} \Vert (E,H) \Vert _{L^q}&= \omega ^{-\frac{3}{q}} \Vert (E_\omega ,H_\omega ) \Vert _{L^q} \\&\lesssim _{p_1,p_2,q} \omega ^{-\frac{3}{q}} \Vert (J_{e, \omega }, J_{m, \omega }) \Vert _{L^{p_1} \cap L^{p_2}} \\&\lesssim _{p_1,p_2,q} \omega ^{-\frac{3}{q}-1} (\omega ^{\frac{3}{p_1}} \vee \omega ^{\frac{3}{p_2}} ) \Vert (J_e,J_m) \Vert _{L^{p_1} \cap L^{p_2}} \\&\lesssim _{p_1,p_2,q} \omega ^{\frac{3}{p_2}-\frac{3}{q}-1} (1+\omega )^{\frac{3}{p_1}-\frac{3}{p_2}} \Vert (J_e,J_m) \Vert _{L^{p_1} \cap L^{p_2}} \end{aligned} \end{aligned}$$

    and similarly

    $$\begin{aligned} \begin{aligned} \Vert (E,H) \Vert _{W^{1,q}}&\lesssim \omega ^{-\frac{3}{q}} (1+\omega ) \Vert (E_\omega ,H_\omega ) \Vert _{W^{1,q}} \\&\lesssim _{p_1,q} \omega ^{-1} (1+\omega )^{\frac{3}{p_1}-\frac{3}{q}+1} \Vert (J_e,J_m) \Vert _{L^{p_1} \cap L^q}. \end{aligned} \end{aligned}$$

    As a consequence, considering weak limits one obtains solutions of the static Maxwell’s equations as \(\omega \rightarrow 0\) in the special case \(\frac{1}{p_2} - \frac{1}{q} =\frac{1}{3}\).
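The scaling used in the remark above rests on the elementary identity \(\Vert f(\omega ^{-1}\cdot ) \Vert _{L^q({\mathbb {R}}^3)} = \omega ^{3/q} \Vert f \Vert _{L^q({\mathbb {R}}^3)}\) (plus the extra factor \(\omega ^{-1}\) in the currents). A quick numeric check of the \(L^q\)-identity on a Gaussian, using that the integrand is separable, so a one-dimensional quadrature cubed suffices (helper name ours):

```python
import math

def gauss_lq_q(omega, q, L=30.0, N=6000):
    """Computes the integral of |f(x/omega)|^q over R^3 for f(x) = exp(-|x|^2/2),
    via separability: the 3d integral is the cube of a 1d Gaussian integral."""
    h = 2*L/N
    one_d = h*sum(math.exp(-q*((-L + i*h)/omega)**2/2) for i in range(N + 1))
    return one_d**3

q = 4
for omega in (0.5, 2.0, 3.0):
    # ||f(./omega)||_q^q = omega^3 ||f||_q^q, i.e. the L^q norm scales as omega^{3/q}
    assert abs(gauss_lq_q(omega, q) - omega**3*gauss_lq_q(1.0, q)) < 1e-9*gauss_lq_q(omega, q)
```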

We shall see that the Fourier multiplier derived by inverting (6) for \(\omega \in {\mathbb {R}}\) is not well-defined in the sense of distributions. A common regularization is to consider \(\omega \in {\mathbb {C}}\backslash {\mathbb {R}}\) and derive estimates independent of \(\text {dist}(\omega , {\mathbb {R}})\). This program was carried out in our previous works [11, 40], which were concerned with isotropic, possibly inhomogeneous, respectively, partially anisotropic, but homogeneous media. The necessity of considering \((J_e,J_m)\) within intersections of \(L^p\)-spaces and the connection with resolvent estimates for the Half-Laplacian was discussed in [40]. In the present work we need to regularize differently due to a more complicated behavior of the involved Fourier symbols with respect to the change \(\omega \mapsto \omega +i\varepsilon \). In other words, we do not prove a Limiting Absorption Principle in the classical sense.

In the proof we will reduce the analysis to the case \(\mu _1 = \mu _2 = \mu _3=1\) as in [36] in order to simplify the notation. We will justify this step in Sect. 3. In the partially anisotropic case \(\# \{\varepsilon _1,\varepsilon _2,\varepsilon _3\} \le 2\) the matrix-valued Fourier multiplier associated with Maxwell’s equations can be diagonalized, and a combination of Riesz transform estimates and resolvent estimates for the Half-Laplacian is used to prove uniform bounds. In our fully anisotropic case (4) this approach fails. Instead of diagonalizing the symbol, we take the more direct approach of inverting the matrix Fourier multiplier associated with (6). Taking the Fourier transform in \({\mathbb {R}}^3\), denoting by \(\xi \in {\mathbb {R}}^3\) the dual variable of \(x \in {\mathbb {R}}^3\) and by \({\hat{E}}\) the vector-valued Fourier transform of E (likewise for the other vector-valued quantities), we find that (6) is equivalent to

$$\begin{aligned} \left\{ \begin{array}{cl} i b(\xi ) {\hat{E}}(\xi ) + i \omega \mu {\hat{H}}(\xi ) &{}= {\hat{J}}_m, \quad \xi \cdot {\hat{J}}_m = \xi \cdot {\hat{J}}_e = 0, \\ i b(\xi ) {\hat{H}}(\xi ) - i \omega \varepsilon {\hat{E}}(\xi ) &{}= {\hat{J}}_e. \end{array} \right. \end{aligned}$$
(10)

In the above display, we denote

$$\begin{aligned} \widehat{(\nabla \times f)}(\xi ) = i b(\xi ) {\hat{f}}(\xi ), \quad b(\xi ) = \begin{pmatrix} 0 &{} -\xi _3 &{} \xi _2 \\ \xi _3 &{} 0 &{} -\xi _1 \\ -\xi _2 &{} \xi _1 &{} 0 \end{pmatrix} . \end{aligned}$$
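Concretely, \(b(\xi )\) is the cross product with \(\xi \) written as a matrix, so \(b(\xi )v = \xi \times v\) and \(\ker b(\xi ) = \text {span}\{\xi \}\). A quick check in exact rational arithmetic (helper names ours):

```python
from fractions import Fraction as F

def b(xi):
    """Matrix of v -> xi x v, so that (curl f)^ = i b(xi) f^."""
    x1, x2, x3 = xi
    return [[F(0), -x3, x2], [x3, F(0), -x1], [-x2, x1, F(0)]]

def matvec(M, v):
    return [sum(M[i][j]*v[j] for j in range(3)) for i in range(3)]

def cross(a, v):
    return [a[1]*v[2] - a[2]*v[1], a[2]*v[0] - a[0]*v[2], a[0]*v[1] - a[1]*v[0]]

xi = [F(1, 2), F(-3), F(5, 7)]
v = [F(2), F(1, 3), F(-4)]
assert matvec(b(xi), v) == cross(xi, v)          # b(xi) v = xi x v
assert matvec(b(xi), xi) == [F(0), F(0), F(0)]   # ker b(xi) = span{xi}
```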

In the first step, we use the block structure to show that solutions to (10) solve the following two \(3 \times 3\)-systems of second order:

Proposition 1.3

If \((E,H) \in {\mathcal {S}}'({\mathbb {R}}^3)^2\) solve (10), then the following holds true:

$$\begin{aligned} \left\{ \begin{array}{cl} (M_E(\xi ) - \omega ^2) {\hat{E}} &{}= - i \omega \varepsilon ^{-1} {\hat{J}}_e + i \varepsilon ^{-1} b(\xi ) \mu ^{-1} {\hat{J}}_m, \\ (M_H(\xi ) - \omega ^2) {\hat{H}} &{}= i \mu ^{-1} b(\xi ) \varepsilon ^{-1} {\hat{J}}_e + i \omega \mu ^{-1} {\hat{J}}_m. \end{array} \right. \end{aligned}$$
(11)

Here,

$$\begin{aligned} M_E(\xi ) = - \varepsilon ^{-1} b(\xi ) \mu ^{-1} b(\xi ), \qquad M_H(\xi ) = - \mu ^{-1} b(\xi ) \varepsilon ^{-1} b(\xi ). \end{aligned}$$

The proof of the proposition follows from rewriting (10) as

$$\begin{aligned} \begin{pmatrix} -i \omega \varepsilon &{}\quad i b(\xi ) \\ i b(\xi ) &{}\quad i \omega \mu \end{pmatrix} \begin{pmatrix} {\hat{E}} \\ {\hat{H}} \end{pmatrix} = \begin{pmatrix} {\hat{J}}_{e} \\ {\hat{J}}_{m} \end{pmatrix} \end{aligned}$$

and multiplying this equation with

$$\begin{aligned} \begin{pmatrix} -i \omega \varepsilon ^{-1} &{}\quad i \varepsilon ^{-1} b(\xi ) \mu ^{-1} \\ i \mu ^{-1} b(\xi ) \varepsilon ^{-1} &{}\quad i \omega \mu ^{-1} \end{pmatrix} . \end{aligned}$$
(12)
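The computation behind Proposition 1.3 can be confirmed at a sample point. Writing the system matrix as \(i\,Y\) and the symmetrizer (12) as \(i\,X\) with real blocks, the claim amounts to \(-XY = \text {diag}(M_E(\xi )-\omega ^2, M_H(\xi )-\omega ^2)\). A verification sketch of ours in exact rational arithmetic (helper names hypothetical):

```python
from fractions import Fraction as F

def b(xi):
    x1, x2, x3 = xi
    return [[F(0), -x3, x2], [x3, F(0), -x1], [-x2, x1, F(0)]]

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(len(B))) for j in range(len(B[0]))]
            for i in range(len(A))]

def diag3(d):
    return [[d[i] if i == j else F(0) for j in range(3)] for i in range(3)]

def scal(c, A):
    return [[c*a for a in row] for row in A]

def sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def blocks(TL, TR, BL, BR):  # assemble a 6x6 matrix from four 3x3 blocks
    return [TL[i] + TR[i] for i in range(3)] + [BL[i] + BR[i] for i in range(3)]

eps, mu = [F(2), F(3), F(5)], [F(1), F(7), F(11)]
ie, im = diag3([1/e for e in eps]), diag3([1/m for m in mu])
w, B = F(4), b([F(1), F(-2), F(3)])

# system matrix of (10) is i*Y, symmetrizer (12) is i*X, hence their product is -X*Y
Y = blocks(scal(-w, diag3(eps)), B, B, scal(w, diag3(mu)))
X = blocks(scal(-w, ie), mul(mul(ie, B), im), mul(mul(im, B), ie), scal(w, im))
P = scal(F(-1), mul(X, Y))

ME = scal(F(-1), mul(mul(ie, B), mul(im, B)))  # M_E = -eps^{-1} b mu^{-1} b
MH = scal(F(-1), mul(mul(im, B), mul(ie, B)))  # M_H = -mu^{-1} b eps^{-1} b
w2 = diag3([w*w]*3)
Z3 = [[F(0)]*3 for _ in range(3)]
# block diagonal with blocks M_E - w^2 and M_H - w^2, off-diagonal blocks vanish
assert P == blocks(sub(ME, w2), Z3, Z3, sub(MH, w2))
```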

Notice, however, that (10) and (11) are not equivalent because the symmetrizer (12) has a non-trivial kernel. A lengthy but straightforward computation reveals

$$\begin{aligned} p(\omega ,\xi ) := \det (M_E(\xi ) - \omega ^2) = \det (M_H(\xi ) - \omega ^2) = - \omega ^2 (\omega ^4 - \omega ^2 q_0(\xi ) + q_1(\xi )), \end{aligned}$$
(13)

where

$$\begin{aligned} q_0(\xi )&= \xi _1^2 \left( \frac{1}{\varepsilon _2 \mu _3} + \frac{1}{\mu _2 \varepsilon _3} \right) + \xi _2^2 \left( \frac{1}{\varepsilon _1 \mu _3} + \frac{1}{\mu _1 \varepsilon _3} \right) + \xi _3^2 \left( \frac{1}{\varepsilon _1 \mu _2} + \frac{1}{\varepsilon _2 \mu _1} \right) , \\ q_1(\xi )&= \frac{1}{\varepsilon _1 \varepsilon _2 \varepsilon _3 \mu _1 \mu _2 \mu _3} (\varepsilon _1 \xi _1^2 + \varepsilon _2 \xi _2^2 + \varepsilon _3 \xi _3^2) (\mu _1 \xi _1^2 + \mu _2 \xi _2^2 + \mu _3 \xi _3^2). \end{aligned}$$

In the case \(\mu _1=\mu _2=\mu _3>0\) this corresponds to [36, Eq. (1.4)] by Liess.
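The factorization (13) can be sanity-checked at a sample point by exact rational evaluation; the sketch below is our own verification (complementing the Maple sheet [38]):

```python
from fractions import Fraction as F

def b(xi):
    x1, x2, x3 = xi
    return [[F(0), -x3, x2], [x3, F(0), -x1], [-x2, x1, F(0)]]

def mul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def det3(M):
    return (M[0][0]*(M[1][1]*M[2][2] - M[1][2]*M[2][1])
            - M[0][1]*(M[1][0]*M[2][2] - M[1][2]*M[2][0])
            + M[0][2]*(M[1][0]*M[2][1] - M[1][1]*M[2][0]))

e1, e2, e3, m1, m2, m3, w = F(2), F(3), F(5), F(1), F(7), F(11), F(4)
x1, x2, x3 = F(1), F(-2), F(3)
ie = [[1/e1, F(0), F(0)], [F(0), 1/e2, F(0)], [F(0), F(0), 1/e3]]
im = [[1/m1, F(0), F(0)], [F(0), 1/m2, F(0)], [F(0), F(0), 1/m3]]
B = b([x1, x2, x3])

ME = [[-v for v in row] for row in mul(mul(ie, B), mul(im, B))]
MH = [[-v for v in row] for row in mul(mul(im, B), mul(ie, B))]
MEw = [[ME[i][j] - (w*w if i == j else F(0)) for j in range(3)] for i in range(3)]
MHw = [[MH[i][j] - (w*w if i == j else F(0)) for j in range(3)] for i in range(3)]

q0 = (x1**2*(1/(e2*m3) + 1/(m2*e3)) + x2**2*(1/(e1*m3) + 1/(m1*e3))
      + x3**2*(1/(e1*m2) + 1/(e2*m1)))
q1 = (e1*x1**2 + e2*x2**2 + e3*x3**2)*(m1*x1**2 + m2*x2**2 + m3*x3**2)/(e1*e2*e3*m1*m2*m3)
p = -w**2*(w**4 - w**2*q0 + q1)
assert det3(MEw) == p and det3(MHw) == p   # (13) at a sample point
```

Note that \(q_0\) and \(q_1\) are precisely the trace and the second elementary symmetric polynomial of the eigenvalues of \(M_E(\xi )\); the third one, \(\det M_E(\xi )\), vanishes since \(b(\xi )\) is singular.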

From Proposition 1.3 we infer that solutions to anisotropic Maxwell’s equations can be found provided that the mapping properties of the Fourier multiplier with symbol \(p^{-1}(\omega ,\xi )\), or rather an adequate regularization of it, can be controlled. The first step of this analysis is to develop a sound understanding of the geometry of \(S:=\{\xi \in {\mathbb {R}}^3: p(\omega ,\xi )=0\}\), with an emphasis on its principal curvatures. This has essentially been carried out by Darboux [13] and Liess [36, Appendix]. We devote Sect. 3 to recapitulating these facts along with some computational details that were omitted in [36]. S is known as Fresnel’s wave surface and was previously described, e.g., in [13, 16, 32, 36]. We refer to Fig. 2 for visualizations. Despite its seemingly complicated structure, this surface can be perceived as a non-smooth deformation of the doubly covered sphere in \({\mathbb {R}}^3\). For the involved algebraic computations we provide a Maple™ sheet for verification [38].

We turn to a discussion of the regularization of \(p(\omega ,\xi )^{-1}\). Motivated by Cramer’s rule, we multiply (11) by the adjugate matrices and divide by \(p(\omega ,\xi ) + i \delta \). This leads to the approximate solutions \((E_\delta ,H_\delta )\). We postpone the precise definition to Sect. 2. The main part of the proof of Theorem 1.1 is then to show uniform bounds in \(\delta \ne 0\):

$$\begin{aligned} \Vert (E_\delta ,H_\delta ) \Vert _{L^q({\mathbb {R}}^3)} \lesssim \Vert (J_e,J_m) \Vert _{L^{p_1}({\mathbb {R}}^3) \cap L^{p_2}({\mathbb {R}}^3)} \end{aligned}$$

for q, \(p_1\), \(p_2\) as in Theorem 1.1. In Sect. 2 we shall see how this allows us to infer the existence of distributional solutions to (6) and how the limits can be understood as a principal value distribution and a delta distribution associated with Fresnel’s wave surface in Fourier space. Moreover, the distributional solutions are weak solutions provided that the currents have sufficiently high integrability.

We point out the connection to Bochner–Riesz operators of negative index and seemingly digress for a moment to explain key points for these operators. For \(0< \alpha <1\), consider the Bochner–Riesz operator with negative index given by

$$\begin{aligned} S^\alpha f (x) = \frac{C_d}{\Gamma (1-\alpha )} \int _{{\mathbb {R}}^d} e^{ix.\xi } (1-|\xi |^2)^{-\alpha }_+ {\hat{f}}(\xi ) \hbox {d}\xi . \end{aligned}$$
(14)

\(C_d\) denotes a dimensional constant, \(\Gamma \) denotes the Gamma function, and \(x_+ = \max (x,0)\). For \(1 \le \alpha \le (d+1)/2\), \(S^\alpha \) is defined by analytic continuation. The body of literature concerned with Bochner–Riesz estimates with negative index is huge, see, e.g., [5, 10, 24, 33, 41]. In Sect. 4 we give a more exhaustive overview. For \(\alpha = 1\), we find

$$\begin{aligned} S^\alpha f (x ) = C'_d \int _{{\mathbb {S}}^{d-1}} e^{i x. \xi } {\hat{f}}(\xi ) \hbox {d}\sigma (\xi ) = C'_d \int _{{\mathbb {R}}^d} e^{ix.\xi } \delta (|\xi |^2 - 1) {\hat{f}}(\xi ) \hbox {d}\xi \end{aligned}$$

because the distribution in (14) for \(\alpha = 1\) coincides with the delta distribution up to a factor. Estimates for such Fourier restriction–extension operators are the backbone of the Limiting Absorption Principle for the Helmholtz equation (cf. [25]). It turns out that we need more general Fourier restriction–extension estimates than the ones associated with elliptic surfaces because the Gaussian curvature of the Fresnel surface S changes sign, as we shall see in Sect. 3. We take the opportunity to prove estimates for generalized Bochner–Riesz operators of negative index for non-elliptic surfaces as the associated Fourier restriction–extension operators will be important in the proof of Theorem 1.1.
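The role of the \(\Gamma (1-\alpha )\)-normalization in (14) can already be seen in one dimension: paired against a test function, \(x_+^{-\alpha }/\Gamma (1-\alpha )\) tends to the delta distribution as \(\alpha \rightarrow 1^-\). A small numeric illustration of ours (for \(\varphi (x)=(1+x)e^{-x}\) the pairing equals \(2-\alpha \) exactly, which tends to \(\varphi (0)=1\)):

```python
import math

def pairing(alpha, N=200000, cutoff=50.0):
    """<x_+^{-alpha}/Gamma(1-alpha), phi> for phi(x) = (1+x)*exp(-x).
    The substitution x = u**(1/(1-alpha)) removes the singularity at x = 0;
    midpoint rule in u over [0, cutoff**(1-alpha)] (the tail x > cutoff is negligible)."""
    beta = 1.0 - alpha
    h = cutoff**beta/N
    total = 0.0
    for i in range(N):
        x = ((i + 0.5)*h)**(1.0/beta)
        total += (1.0 + x)*math.exp(-x)
    return total*h/beta/math.gamma(beta)

# exact value is 2 - alpha, tending to phi(0) = 1 as alpha -> 1^-
for alpha in (0.5, 0.9, 0.99):
    assert abs(pairing(alpha) - (2.0 - alpha)) < 1e-3
```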

To describe our results in this direction, let \(d \ge 3\) and \(S = \{ (\xi ',\psi (\xi ')) : \, \xi ' \in [-1,1]^{d-1} \}\) be a smooth surface with \(k \in \{1,\ldots ,d-1 \}\) principal curvatures bounded from below. The case \(d=2\) was treated by Bak [2] and Gutiérrez [24]. Let \(\xi = (\xi ',\xi _d) \in {\mathbb {R}}^{d-1} \times {\mathbb {R}}\), \(x^\alpha _+ := (\max (x,0))^\alpha \), and

$$\begin{aligned} \widehat{(T^\alpha f)}(\xi ) = \frac{1}{\Gamma (1-\alpha )} \frac{\chi (\xi ')}{(\xi _d - \psi (\xi '))_+^\alpha } {\hat{f}}(\xi ), \; \chi \in C^\infty _c([-1,1]^{d-1}), \; 0< \alpha < \frac{k+2}{2}. \end{aligned}$$

In the following theorem, we show \(L^p\)-\(L^q\)-bounds

$$\begin{aligned} \Vert T^\alpha f \Vert _{L^q({\mathbb {R}}^d)} \lesssim \Vert f \Vert _{L^p({\mathbb {R}}^d)} \end{aligned}$$
(15)

within a pentagonal region (see Fig. 1)

$$\begin{aligned} \left( \frac{1}{p}, \frac{1}{q} \right) \in \text {conv}^0 (C_{\alpha ,k}, B_{\alpha ,k},B'_{\alpha ,k},C_{\alpha ,k}',A ), \qquad A:=(1,0). \end{aligned}$$

Here \(\text {conv}^0(X_1,\ldots ,X_n)\) denotes the interior of the convex hull of \(X_1,\ldots ,X_n\). For \(0<\alpha < \frac{k+2}{2}\), let

$$\begin{aligned} {\mathcal {P}}_{\alpha }(k) = \left\{ (x,y) \in [0,1]^2 : \, x > \frac{k+2\alpha }{2(k+1)}, \; y < \frac{k+2-2\alpha }{2(k+1)}, \; x - y \ge \frac{2\alpha }{k+2} \right\} . \end{aligned}$$
(16)

For two points X, \(Y \in [0,1]^2\), let

$$\begin{aligned}{}[X,Y]&= \{ Z \in [0,1]^2 \, : \, Z = \lambda X + (1-\lambda )Y \text { for some } \lambda \in [0,1] \}, \\ \text { and } (X,Y]&= [X,Y] \backslash \{X\}, \; [X,Y) = [X,Y] \backslash \{Y \}, \; (X,Y) = [X,Y] \backslash \{X,Y\}. \end{aligned}$$

At its inner endpoints \(B_{\alpha ,k}\), \(B'_{\alpha ,k}\), we show restricted weak bounds

$$\begin{aligned} \Vert T^\alpha f \Vert _{L^{q,\infty }({\mathbb {R}}^d)} \lesssim \Vert f \Vert _{L^{p,1}({\mathbb {R}}^d)}, \end{aligned}$$
(17)

and on part of its boundary, we show weak bounds

$$\begin{aligned} \Vert T^\alpha f \Vert _{L^q({\mathbb {R}}^d)}&\lesssim \Vert f \Vert _{L^{p,1}({\mathbb {R}}^d)}, \end{aligned}$$
(18)
$$\begin{aligned} \Vert T^\alpha f \Vert _{L^{q,\infty }({\mathbb {R}}^d)}&\lesssim \Vert f \Vert _{L^p({\mathbb {R}}^d)}. \end{aligned}$$
(19)
Fig. 1 Riesz diagram for Theorem 1.4 with \(\alpha _1< \frac{1}{2} < \alpha _2\)

Theorem 1.4

Let \(1 \le p, q \le \infty \) and \(d\in {\mathbb {N}},d\ge 3\).

  (i)

    For \(\frac{1}{2} \le \alpha < \frac{k+2}{2}\) let

    $$\begin{aligned} B_{\alpha ,k}&= \left( \frac{k+2\alpha }{2(k+1)}, \frac{k(k+2-2\alpha )}{2(k+1)(k+2)} \right) ,&C_{\alpha ,k} = \left( \frac{k+2\alpha }{2(k+1)}, 0 \right) , \\ B_{\alpha ,k}'&= \left( \frac{ k^2 + 2(2+\alpha )k +4}{2(k+1)(k+2)}, \frac{k+2-2\alpha }{2(k+1)} \right) ,&C_{\alpha ,k}' = \left( 1, \frac{k+2-2\alpha }{2(k+1)}\right) . \end{aligned}$$

    Then (15) holds true for \((\frac{1}{p},\frac{1}{q}) \in {\mathcal {P}}_{\alpha }(k)\) defined in (16). For \(\alpha > \frac{1}{2}\), we find estimates (18) to hold for \((\frac{1}{p},\frac{1}{q}) \in (B_{\alpha ,k}, C_{\alpha ,k}]\); (19) for \((\frac{1}{p}, \frac{1}{q}) \in (B'_{\alpha ,k},C_{\alpha ,k}']\), and (17) for \((\frac{1}{p},\frac{1}{q}) \in \{ B_{\alpha ,k}, B'_{\alpha ,k} \}\).

  (ii)

    For \(0<\alpha < \frac{1}{2}\) let

    $$\begin{aligned} B_{\alpha ,k}&= \left( \frac{d-1+2\alpha }{2d}, \frac{k}{2(2+k)} \right) ,&C_{\alpha ,k} = \left( \frac{d-1+2\alpha }{2d}, 0 \right) , \\ B_{\alpha ,k}'&= \left( \frac{4+k}{2(2+k)}, \frac{d+1-2\alpha }{2d} \right) ,&C_{\alpha ,k}' = \left( 1, \frac{d+1-2\alpha }{2d} \right) . \end{aligned}$$

    Then (15) holds true for

    $$\begin{aligned} \frac{1}{p} > \frac{d-1+2\alpha }{2d}, \quad \frac{1}{q} < \frac{d+1-2\alpha }{2d}, \quad \frac{1}{p} - \frac{1}{q} \ge \frac{2(d-1+2\alpha ) + k (2\alpha -1)}{2d(2+k)}. \end{aligned}$$

    Furthermore, we find estimates (18) to hold for \((\frac{1}{p},\frac{1}{q}) \in (B_{\alpha ,k}, C_{\alpha ,k}]\); (19) for \((\frac{1}{p}, \frac{1}{q}) \in (B'_{\alpha ,k},C_{\alpha ,k}']\), and (17) for \((\frac{1}{p},\frac{1}{q}) \in \{ B_{\alpha ,k}, B'_{\alpha ,k} \}\).

For any \(\alpha \), the constants in (15)–(19) depend on the lower bounds for the principal curvatures and on \(\Vert \chi \Vert _{C^N}\) and \(\Vert \psi \Vert _{C^N}\) for \(N=N(p,q,d)\). In particular, they are stable under smooth perturbations of \(\chi \) and \(\psi \).
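Two quick consistency checks on these exponents in exact arithmetic (the observation that the case \(d=3\), \(k=1\), \(\alpha =1\) reproduces the first line of (7) is our own remark): the inner endpoints \(B_{\alpha ,k}\), \(B'_{\alpha ,k}\) of part (i) lie on the critical line \(x-y = \frac{2\alpha }{k+2}\) bounding \({\mathcal {P}}_\alpha (k)\), and for \(\alpha =1\), \(k=1\) the thresholds of (16) become \(\frac{3}{4}\), \(\frac{1}{4}\), \(\frac{2}{3}\).

```python
from fractions import Fraction as F

def endpoints(alpha, k):
    """Inner endpoints B, B' of Theorem 1.4(i) for 1/2 <= alpha < (k+2)/2 (alpha integer here)."""
    B = (F(k + 2*alpha, 2*(k + 1)), F(k*(k + 2 - 2*alpha), 2*(k + 1)*(k + 2)))
    Bp = (F(k*k + 2*(2 + alpha)*k + 4, 2*(k + 1)*(k + 2)), F(k + 2 - 2*alpha, 2*(k + 1)))
    return B, Bp

# both inner endpoints sit on the critical line x - y = 2*alpha/(k+2)
for k in (1, 2, 3):
    for alpha in (1, 2):
        if not F(1, 2) <= F(alpha) < F(k + 2, 2):
            continue
        B, Bp = endpoints(alpha, k)
        assert B[0] - B[1] == F(2*alpha, k + 2)
        assert Bp[0] - Bp[1] == F(2*alpha, k + 2)

# alpha = 1, k = 1: the thresholds in (16) match the first line of condition (7)
assert F(1 + 2, 2*2) == F(3, 4)       # 1/p  > (k+2*alpha)/(2(k+1))   = 3/4
assert F(1 + 2 - 2, 2*2) == F(1, 4)   # 1/q  < (k+2-2*alpha)/(2(k+1)) = 1/4
assert F(2, 1 + 2) == F(2, 3)         # 1/p - 1/q >= 2*alpha/(k+2)    = 2/3
```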

The proof is based on the decay of the Fourier transform of the surface measure on S (cf. [37, 42, Section VIII.5.8]) and convenient decompositions of the distribution \(\frac{1}{\Gamma (1-\alpha )} x^{-\alpha }_+\) (cf. [26, Section 3.2], [10, Lemma 2.1]). We also show that the strong bounds are sharp for \(\alpha \ge \frac{1}{2}\). In the elliptic case the best results currently known were proved by Kwon–Lee [33, Section 2.6]. This also shows that our strong bounds are not sharp for \(\alpha < \frac{1}{2}\). We refer to Sect. 4 for further discussion.

To describe the remainder of our analysis, we recall important properties of the Fresnel surface. Up to small neighborhoods of four singular points, the surface is a smooth compact manifold with two connected components. The Gaussian curvature vanishes precisely along the so-called Hamiltonian circles on the outer sheet. However, the surface is never flat, i.e., at every point at least one principal curvature is bounded away from zero. Around the singular points, the surface looks conical and ceases to be a smooth manifold.

We briefly explain how this leads to an analysis of the Fourier multiplier \((p(\omega ,\xi ) + i\delta )^{-1}\), \(\omega \in {\mathbb {R}}\backslash \{ 0 \}\), \(0 < |\delta | \ll 1\). We recall that solutions to time-harmonic Maxwell’s equations are constructed by considering \(\delta \rightarrow 0\) with bounds independent of \(\delta \). The non-resonant contribution of \(\{\xi \in {\mathbb {R}}^3 \, : \, |p(\omega ,\xi )| \ge t_0 \}\), \(t_0>0\), away from Fresnel’s wave surface is estimated by Mikhlin’s theorem and standard estimates for Bessel potentials. This high-frequency part of the solutions is responsible for the condition \( 0 \le \frac{1}{p_2} - \frac{1}{q} \le \frac{1}{3}\) in (7). We refer to [40, Section 3] for further explanations regarding the impossibility of an estimate \(\Vert (E,H) \Vert _{L^q({\mathbb {R}}^3)} \lesssim \Vert (J_e,J_m) \Vert _{L^p({\mathbb {R}}^3)}\).

After smoothly cutting away the contribution of the high frequencies, we focus on estimates for the multiplier \((p(\omega ,\xi ) + i \delta )^{-1}\) in a neighborhood \(\{ |p(\omega ,\xi )| \le t_0 \}\) near the surface. It turns out that around the smooth elliptic part with Gaussian curvature bounded away from zero, we can use the estimates for the Bochner–Riesz operator from Theorem 1.4 for \(d=3,k=2,\alpha =1\). However, there is also a smooth non-elliptic part where the modulus of the Gaussian curvature is small and vanishes precisely along the Hamiltonian circles. Here, Theorem 1.4 applies for \(d=3,k=1,\alpha =1\). In the corresponding analysis of the multiplier \((p(\omega ,\xi )+i\delta )^{-1}\) we foliate the neighborhoods of the Fresnel surface by level sets of \(p(\omega ,\xi )\). The contributions of the single layers are estimated with the Fourier restriction–extension theorem. In the analysis we use decompositions in Fourier space generalizing arguments of Kwon–Lee [33, Section 4], where the decompositions were adapted to the sphere.

For the contribution coming from neighborhoods of the four isolated conical singularities, we will apply Theorem 1.4 once more for \(d=3,k=1,\alpha =1\). On a technical level, a major difference compared to the other regions comes from the fact that the cone is not a smooth manifold: we use an additional Littlewood–Paley decomposition and scaling to uncover its mapping properties. Jeong–Kwon–Lee [30] previously applied related arguments to analyze Sobolev inequalities for second-order non-elliptic operators.

We further mention the very recent preprint by Castéras–Földes [9] (see also [4]). In [9] \(L^p\)-mapping properties of Fourier multipliers \((Q(\xi ) + i \varepsilon )^{-1}\) for fourth-order polynomials Q were analyzed in the context of traveling waves for nonlinear equations. The analysis in [9] does not cover surfaces \(\{Q(\xi ) = 0 \}\) containing singular points, and the \(L^p\)-\(L^q\)-boundedness range stated in [9, Theorem 3.3] is strictly smaller than in the corresponding results given in Theorem 1.4.

Outline of the paper In Sect. 2 we carry out reductions for the proof of Theorem 1.1. We anticipate the uniform estimates of the regularized solutions that we will prove in Sects. 5 and 6, by which we finish the proof of Theorem 1.1. In Sect. 3 we recall the relevant geometric properties of the Fresnel surface and reduce our analysis to the case \(\omega =\mu _1 = \mu _2 = \mu _3 = 1\). In Sect. 4 we recall results on Bochner–Riesz estimates with negative index for elliptic surfaces and extend those to estimates for a class of more general non-degenerate surfaces by proving Theorem 1.4. In Sect. 5 we use these estimates to uniformly bound the approximate solutions to (6) corresponding to the smooth part of the Fresnel surface. In Sect. 6 we finally estimate the contribution with Fourier support close to the four singular points.

2 Reduction to Multiplier Estimates Related to the Fresnel Surface

The purpose of this section is to carry out the reductions indicated in the Introduction. We first define suitable approximate solutions \((E_\delta ,H_\delta )\) and present the estimates for them related to the different parts of the Fresnel surface and away from the Fresnel surface. With these estimates at hand, to be shown in the upcoming sections, we finish the proof of Theorem 1.1. At the end of the section we give explicit formulae for the solution.

We work with the following convention for the Fourier transform: For \(f \in {\mathcal {S}}({\mathbb {R}}^d)\) the Fourier transform is defined by

$$\begin{aligned} {\hat{f}}(\xi ) = \int _{{\mathbb {R}}^d} e^{-ix.\xi } f(x) \hbox {d}x \end{aligned}$$

and, as usual, extended by duality to \({\mathcal {S}}'({\mathbb {R}}^d)\). For \(f \in {\mathcal {S}}({\mathbb {R}}^d)\), the Fourier inversion formula reads

$$\begin{aligned} f(x) = (2 \pi )^{-d} \int _{{\mathbb {R}}^d} e^{ix.\xi } {\hat{f}}(\xi ) \hbox {d}\xi . \end{aligned}$$
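With this convention the Gaussian \(e^{-|x|^2/2}\) is mapped to \((2\pi )^{d/2} e^{-|\xi |^2/2}\), and inversion carries the factor \((2\pi )^{-d}\). A quick one-dimensional numeric check (quadrature helper ours):

```python
import math, cmath

def ft(f, xi, L=20.0, N=4000):
    """hat f(xi) = integral of e^{-i x xi} f(x) dx, trapezoid rule on [-L, L]."""
    h = 2*L/N
    s = sum(cmath.exp(-1j*(-L + i*h)*xi)*f(-L + i*h) for i in range(N + 1))
    s -= 0.5*(cmath.exp(1j*L*xi)*f(-L) + cmath.exp(-1j*L*xi)*f(L))
    return s*h

f = lambda x: math.exp(-x*x/2)
xi0, x0 = 0.7, 0.3
# hat f(xi) = sqrt(2*pi)*exp(-xi^2/2) for the Gaussian
assert abs(ft(f, xi0) - math.sqrt(2*math.pi)*math.exp(-xi0*xi0/2)) < 1e-8
# inversion: f(x) = (2*pi)^{-1} integral of e^{i x xi} hat f(xi) d xi
fhat = lambda t: math.sqrt(2*math.pi)*math.exp(-t*t/2)
assert abs(ft(fhat, -x0)/(2*math.pi) - f(x0)) < 1e-8
```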

2.1 Approximate Solutions

By Proposition 1.3 the original anisotropic Maxwell system leads to the following second order \(3 \times 3\)-system for E and H

$$\begin{aligned} \left\{ \begin{array}{cl} (M_E(\xi ) - \omega ^2) {\hat{E}} &{}= - i \omega \varepsilon ^{-1} {\hat{J}}_e + i \varepsilon ^{-1} b(\xi ) \mu ^{-1} {\hat{J}}_m, \\ (M_H(\xi ) - \omega ^2) {\hat{H}} &{}= i \mu ^{-1} b(\xi ) \varepsilon ^{-1} {\hat{J}}_e + i \omega \mu ^{-1} {\hat{J}}_m \end{array} \right. \end{aligned}$$
(20)

where \(M_E(\xi ) = - \varepsilon ^{-1} b(\xi ) \mu ^{-1} b(\xi )\) and \(M_H(\xi ) = - \mu ^{-1} b(\xi ) \varepsilon ^{-1} b(\xi )\). From (13) we recall

$$\begin{aligned} p(\omega ,\xi ) = \det (M_E(\xi ) - \omega ^2) = \det (M_H(\xi ) - \omega ^2) = - \omega ^2 (\omega ^4 - \omega ^2 q_0(\xi ) + q_1(\xi )), \end{aligned}$$

for the polynomials \(q_0,q_1\) as defined there. Inverting \(M_E(\xi )-\omega ^2\) using Cramer’s rule, we find for all \(\xi \in {\mathbb {R}}^3\) such that \(p(\omega ,\xi )\ne 0\):

$$\begin{aligned} \begin{aligned} (M_E(\xi )-\omega ^2)^{-1}&= \frac{1}{p(\omega ,\xi )} \text {adj}(M_E(\xi )-\omega ^2) = \frac{1}{\varepsilon _1\varepsilon _2\varepsilon _3 p(\omega ,\xi )} Z_{\varepsilon ,\mu }(\xi ) \varepsilon , \\ (M_H(\xi )-\omega ^2)^{-1}&= \frac{1}{p(\omega ,\xi )} \text {adj}(M_H(\xi )-\omega ^2) = \frac{1}{\mu _1\mu _2\mu _3 p(\omega ,\xi )} Z_{\mu ,\varepsilon }(\xi ) \mu . \end{aligned} \end{aligned}$$
(21)

Here, \(\text {adj}(M)\) denotes the adjugate matrix of M. Sarrus’s rule and lengthy computations yield that the components of \(Z=Z_{\varepsilon ,\mu }\) are given as follows:

$$\begin{aligned} Z_{11}(\xi )&= \xi _1^2\left( \frac{\xi _1^2}{\mu _2\mu _3}+\frac{\xi _2^2}{\mu _1\mu _3}+\frac{\xi _3^2}{\mu _1\mu _2}\right) - \omega ^2\left( \frac{\varepsilon _2}{\mu _2}\xi _1^2+\frac{\varepsilon _3}{\mu _3}\xi _1^2+\frac{\varepsilon _2}{\mu _1}\xi _2^2+\frac{\varepsilon _3}{\mu _1}\xi _3^2\right) +\omega ^4\varepsilon _2\varepsilon _3, \\ Z_{12}(\xi )&= Z_{21}(\xi ) = \xi _1\xi _2\left( \frac{\xi _1^2}{\mu _2\mu _3}+\frac{\xi _2^2}{\mu _1\mu _3}+\frac{\xi _3^2}{\mu _1\mu _2}-\omega ^2\frac{\varepsilon _3}{\mu _3}\right) , \\ Z_{13}(\xi )&= Z_{31}(\xi ) = \xi _1\xi _3\left( \frac{\xi _1^2}{\mu _2\mu _3}+\frac{\xi _2^2}{\mu _1\mu _3}+\frac{\xi _3^2}{\mu _1\mu _2}-\omega ^2\frac{\varepsilon _2}{\mu _2}\right) , \\ Z_{22}(\xi )&= \xi _2^2\left( \frac{\xi _1^2}{\mu _2\mu _3}+\frac{\xi _2^2}{\mu _1\mu _3}+\frac{\xi _3^2}{\mu _1\mu _2}\right) - \omega ^2\left( \frac{\varepsilon _1}{\mu _2}\xi _1^2+\frac{\varepsilon _3}{\mu _3}\xi _2^2+\frac{\varepsilon _1}{\mu _1}\xi _2^2+\frac{\varepsilon _3}{\mu _2}\xi _3^2\right) +\omega ^4\varepsilon _1\varepsilon _3, \\ Z_{23}(\xi )&= Z_{32}(\xi ) =\xi _2\xi _3\left( \frac{\xi _1^2}{\mu _2\mu _3}+\frac{\xi _2^2}{\mu _1\mu _3}+\frac{\xi _3^2}{\mu _1\mu _2}-\omega ^2\frac{\varepsilon _1}{\mu _1}\right) , \\ Z_{33}(\xi )&= \xi _3^2\left( \frac{\xi _1^2}{\mu _2\mu _3}+\frac{\xi _2^2}{\mu _1\mu _3}+\frac{\xi _3^2}{\mu _1\mu _2}\right) - \omega ^2\left( \frac{\varepsilon _1}{\mu _3}\xi _1^2+\frac{\varepsilon _2}{\mu _3}\xi _2^2+\frac{\varepsilon _1}{\mu _1}\xi _3^2+\frac{\varepsilon _2}{\mu _2}\xi _3^2\right) +\omega ^4\varepsilon _1\varepsilon _2. \end{aligned}$$
(22)

A crucial observation is that the associated matrix-valued Fourier multiplier will only be applied to divergence-free functions; this is a consequence of (20) and (21). For that reason the fourth-order terms in the entries can be ignored whenever convenient, which becomes important when estimating the large-frequency parts of our approximate solutions. Let \(Z^{\text {eff}}(\xi )=Z_{\varepsilon ,\mu }^{\text {eff}}(\xi )\) denote the unique matrix-valued polynomial of degree 2 such that

$$\begin{aligned} Z_{\varepsilon ,\mu }(\xi )&= O(|\xi |^4) + Z_{\varepsilon ,\mu }^{\text {eff}}(\xi ), \\ \qquad \forall \xi \in {\mathbb {R}}^3: \, Z_{\varepsilon ,\mu }(\xi )v&= Z_{\varepsilon ,\mu }^{\text {eff}}(\xi )v \text { for all } v \in {\mathbb {R}}^3 \text { with } v \cdot \xi = 0. \end{aligned}$$
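A quick numerical sketch of this observation: by (22) the fourth-order terms all combine to \(q(\xi )\,\xi \xi ^T\) with \(q(\xi )=\xi _1^2/(\mu _2\mu _3)+\xi _2^2/(\mu _1\mu _3)+\xi _3^2/(\mu _1\mu _2)\), so \(Z_{\varepsilon ,\mu }\) and \(Z^{\text {eff}}_{\varepsilon ,\mu }\) agree on vectors orthogonal to \(\xi \). The material parameters and the frequency \(\xi \) below are arbitrary sample choices.

```python
def Z(xi, eps, mu, w):
    """Z_{eps,mu}(xi) assembled entry by entry from (22)."""
    x1, x2, x3 = xi
    e1, e2, e3 = eps
    m1, m2, m3 = mu
    q = x1**2/(m2*m3) + x2**2/(m1*m3) + x3**2/(m1*m2)
    Z11 = x1**2*q - w**2*((e2/m2 + e3/m3)*x1**2 + (e2/m1)*x2**2 + (e3/m1)*x3**2) + w**4*e2*e3
    Z22 = x2**2*q - w**2*((e1/m2)*x1**2 + (e3/m3 + e1/m1)*x2**2 + (e3/m2)*x3**2) + w**4*e1*e3
    Z33 = x3**2*q - w**2*((e1/m3)*x1**2 + (e2/m3)*x2**2 + (e1/m1 + e2/m2)*x3**2) + w**4*e1*e2
    Z12 = x1*x2*(q - w**2*e3/m3)
    Z13 = x1*x3*(q - w**2*e2/m2)
    Z23 = x2*x3*(q - w**2*e1/m1)
    return [[Z11, Z12, Z13], [Z12, Z22, Z23], [Z13, Z23, Z33]]

def Zeff(xi, eps, mu, w):
    """Drop the fourth-order part q(xi)*xi*xi^T, leaving a degree-2 polynomial."""
    m1, m2, m3 = mu
    q = xi[0]**2/(m2*m3) + xi[1]**2/(m1*m3) + xi[2]**2/(m1*m2)
    Zfull = Z(xi, eps, mu, w)
    return [[Zfull[i][j] - q*xi[i]*xi[j] for j in range(3)] for i in range(3)]

def matvec(A, v):
    return [sum(A[i][j]*v[j] for j in range(3)) for i in range(3)]

eps, mu, w = (1.0, 3.0, 9.0), (1.5, 0.5, 2.0), 1.3
xi = (0.7, -1.1, 0.4)
v = (xi[1], -xi[0], 0.0)                      # v . xi = 0 (divergence-free direction)
Zv = matvec(Z(xi, eps, mu, w), v)
Zev = matvec(Zeff(xi, eps, mu, w), v)
print(all(abs(a - b) < 1e-9 for a, b in zip(Zv, Zev)))  # True
```

On a generic vector that is not orthogonal to \(\xi \) (for instance \(\xi \) itself) the two matrices differ, as expected.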

In view of (20) and (21) it is natural to define the approximate solutions \((E_\delta ,H_\delta )\) for \(|\delta | \ne 0\) as follows:

$$\begin{aligned} \left\{ \begin{array}{cl} {\hat{E}}_\delta (\xi ) &{}= \frac{i}{\varepsilon _1\varepsilon _2\varepsilon _3(p(\omega ,\xi )+i\delta )} Z_{\varepsilon ,\mu }(\xi )( - \omega {\hat{J}}_e(\xi ) + b(\xi ) \mu ^{-1} {\hat{J}}_m(\xi )), \\ {\hat{H}}_\delta (\xi ) &{}= \frac{i}{\mu _1\mu _2\mu _3(p(\omega ,\xi )+i\delta )} Z_{\mu ,\varepsilon }(\xi )( b(\xi ) \varepsilon ^{-1} {\hat{J}}_e(\xi )+ \omega {\hat{J}}_m(\xi )). \end{array} \right. \end{aligned}$$
(23)

To prove Theorem 1.1, we show estimates for these functions that are uniform with respect to \(\delta \). The high-frequency part away from the Fresnel surface is treated in the next subsection; the remaining estimates are carried out in later sections. Taking these estimates for granted, we then show how to conclude the argument.

2.2 Contributions of Low and High Frequencies

We turn to the description of the different contributions of \((E_\delta ,H_\delta )\). We split the contributions of the low and high frequencies. Let \(\beta _1,\beta _2 \in C^\infty ({\mathbb {R}}^3)\) satisfy \(\beta _1(\xi ) + \beta _2(\xi )=1\) with

$$\begin{aligned} \beta _1(\xi ) = 1 \text { if } |p(\omega ,\xi )| \le t_0 \quad \text {and}\quad \text {supp}(\beta _1) \subseteq \{ \xi \in {\mathbb {R}}^3 \,: \, |p(\omega ,\xi )| \le 2 t_0 \} \end{aligned}$$

where \(t_0>0\) denotes a small constant, which will be chosen later when carrying out the estimates close to the surface. Also, for \(m \in C^\infty ({\mathbb {R}}^d)\) we write

$$\begin{aligned} \widehat{(m(D) f)}(\xi ) = m(\xi ) {\hat{f}}(\xi ). \end{aligned}$$

Proposition 2.1

Let \(E_\delta ,H_\delta \) be given by (23). Then we find the following estimate to hold uniformly in \(|\delta | >0\):

$$\begin{aligned} \Vert \beta _2(D) (E_\delta ,H_\delta ) \Vert _{L^q({\mathbb {R}}^3)} \lesssim _{p,q,\omega } \Vert \beta _2(D)(J_e,J_m) \Vert _{L^p({\mathbb {R}}^3)}, \end{aligned}$$
(24)

provided that \(1 \le p,q \le \infty \) with \(0 \le \frac{1}{p} - \frac{1}{q} \le \frac{1}{3}\) and \((p,q) \notin \{(1,1), (1,\frac{3}{2}), (3,\infty ), (\infty , \infty ) \}\). If additionally \(J_e,J_m\in L^q({\mathbb {R}}^3)\), \(q<\infty \), then \(E_\delta ,H_\delta \in W^{1,q}({\mathbb {R}}^3)\) with

$$\begin{aligned} \Vert \beta _2(D)(E_\delta ,H_\delta )\Vert _{W^{1,q}({\mathbb {R}}^3)} \lesssim _{p,q,\omega } \Vert \beta _2(D)(J_e,J_m) \Vert _{L^{p}({\mathbb {R}}^3) \cap L^q({\mathbb {R}}^3)}. \end{aligned}$$

Proof

Choose \(\chi \in C^\infty ({\mathbb {R}}^3)\) with \(|p(\omega ,\xi ) | \ge c>0\) on \(\text {supp}(\chi )\) and \(\chi (\xi )=1\) on \(\text {supp}(\beta _2)\). We first consider the case \(q\ne \infty \). Then

$$\begin{aligned} \beta _2(\xi ){{\hat{E}}}_\delta (\xi )&= \frac{i\beta _2(\xi )}{\varepsilon _1\varepsilon _2\varepsilon _3(p(\omega ,\xi )+i\delta )} Z_{\varepsilon ,\mu }(\xi )( - \omega {\hat{J}}_e(\xi ) + b(\xi ) \mu ^{-1} {\hat{J}}_m(\xi )), \\&= -\frac{i\omega \chi (\xi )\langle \xi \rangle ^2 Z^{\text {eff}}_{\varepsilon ,\mu }(\xi )}{\varepsilon _1\varepsilon _2\varepsilon _3(p(\omega ,\xi )+i\delta )} \langle \xi \rangle ^{-2}\beta _2(\xi ) {\hat{J}}_e(\xi ) \\&\quad + \frac{i \chi (\xi )\langle \xi \rangle Z^{\text {eff}}_{\varepsilon ,\mu }(\xi )b(\xi )\mu ^{-1}}{\varepsilon _1\varepsilon _2\varepsilon _3(p(\omega ,\xi )+i\delta )} \langle \xi \rangle ^{-1}\beta _2(\xi ) {\hat{J}}_m(\xi ). \end{aligned}$$

By the choice of \(\chi \) we have the following uniform estimates with respect to \(\delta \):

$$\begin{aligned}&\left| \partial ^\alpha \left( \frac{\omega \chi (\xi )\langle \xi \rangle ^2 Z^{\text {eff}}_{\varepsilon ,\mu }(\xi )}{\varepsilon _1\varepsilon _2\varepsilon _3(p(\omega ,\xi )+i\delta )}\right) \right| + \left| \partial ^\alpha \left( \frac{\chi (\xi )\langle \xi \rangle Z^{\text {eff}}_{\varepsilon ,\mu }(\xi )b(\xi )\mu ^{-1}}{\varepsilon _1\varepsilon _2\varepsilon _3(p(\omega ,\xi )+i\delta )}\right) \right| \\&\quad \lesssim _\alpha |\xi |^{-|\alpha |} \text { for } \alpha \in {\mathbb {N}}_0^3. \end{aligned}$$

Since \(1< q < \infty \), Mikhlin’s theorem (cf. [19, Chapter 6]) applies and Bessel potential estimates (see for instance [11, Theorem 30]) yield

$$\begin{aligned} \Vert \beta _2(D) (E_\delta ,H_\delta )\Vert _{L^q({\mathbb {R}}^3)}&\lesssim \Vert \langle D \rangle ^{-2}\beta _2(D) J_e\Vert _{L^q({\mathbb {R}}^3)} + \Vert \langle D \rangle ^{-1}\beta _2(D) J_m\Vert _{L^q({\mathbb {R}}^3)} \\&\quad + \Vert \langle D \rangle ^{-2}\beta _2(D) J_m\Vert _{L^q({\mathbb {R}}^3)} + \Vert \langle D \rangle ^{-1}\beta _2(D) J_e\Vert _{L^q({\mathbb {R}}^3)} \\&\lesssim \Vert \beta _2(D)(J_e,J_m)\Vert _{L^p({\mathbb {R}}^3)} \end{aligned}$$

for the claimed range of exponents. If \(q=\infty \), we first use the Sobolev embedding to reduce to a finite exponent \(q<\infty \), and applying the previous argument gives (24) for \(0< \frac{1}{p} < \frac{1}{3}\), which is all we had to show in this case. This gives the claim concerning \(L^q({\mathbb {R}}^3)\)-integrability. For the Sobolev regularity we obtain in a similar fashion

$$\begin{aligned} \Vert \beta _2(D) (E_\delta ,H_\delta )\Vert _{W^{1,q}({\mathbb {R}}^3)}&\lesssim \Vert \beta _2(D) (E_\delta ,H_\delta )\Vert _{L^q({\mathbb {R}}^3)} + \Vert \langle D\rangle \beta _2(D) (E_\delta ,H_\delta )\Vert _{L^q({\mathbb {R}}^3)} \\&\lesssim \Vert \langle D \rangle ^{-1}\beta _2(D) J_e\Vert _{L^q({\mathbb {R}}^3)} + \Vert \beta _2(D) J_m\Vert _{L^q({\mathbb {R}}^3)} \\&\quad + \Vert \langle D \rangle ^{-1}\beta _2(D) J_m\Vert _{L^q({\mathbb {R}}^3)} + \Vert \beta _2(D) J_e\Vert _{L^q({\mathbb {R}}^3)} \\&\lesssim \Vert \beta _2(D)(J_e,J_m)\Vert _{L^q({\mathbb {R}}^3)}. \end{aligned}$$

The proof is complete. \(\square \)

The paper is mainly devoted to estimating the local contribution close to the Fresnel surface \(S= \{p(\omega ,\xi ) = 0\}\). In Sect. 3 we shall see that the Fresnel surface has components of the following type:

  • smooth components with non-vanishing Gaussian curvature,

  • smooth components with curvature vanishing along a one-dimensional submanifold (Hamiltonian circles), but without flat points,

  • neighborhoods of conical singularities.

This fact is established in Corollary 3.8. Precisely, it suffices to consider six components of the first kind and four components of the second and third type. Corresponding to the three types listed above, we split

$$\begin{aligned} \beta _1(\xi ) = \beta _{11}(\xi ) + \beta _{12}(\xi ) + \beta _{13}(\xi ) \end{aligned}$$

with smooth compactly supported functions localizing to neighborhoods of the components of the above types. The estimate for the smooth components with non-vanishing Gaussian curvature is a consequence of estimates for Bochner–Riesz operators with negative index that we will prove in Sect. 4:

Proposition 2.2

Let \(1 \le p,q \le \infty \) and let \((E_\delta ,H_\delta )\) be as in (23). We find the following estimate to hold uniformly in \(|\delta | \ne 0\):

$$\begin{aligned} \Vert \beta _{11}(D) (E_\delta ,H_\delta ) \Vert _{L^q({\mathbb {R}}^3)} \lesssim \Vert \beta _{11}(D) (J_e,J_m) \Vert _{L^{p}({\mathbb {R}}^3)} \end{aligned}$$

provided that \(\frac{1}{p} > \frac{2}{3}\), \(\frac{1}{q} < \frac{1}{3}\) and \(\frac{1}{p} - \frac{1}{q} \ge \frac{1}{2}\).

By similar means, we show the weaker estimate for components with vanishing Gaussian curvature along the Hamiltonian circles:

Proposition 2.3

Let \(1 \le p, q \le \infty \) and let \((E_\delta ,H_\delta )\) be as in (23). We find the following estimate to hold uniformly in \(|\delta | \ne 0\):

$$\begin{aligned} \Vert \beta _{12}(D) (E_\delta ,H_\delta ) \Vert _{L^q({\mathbb {R}}^3)} \lesssim \Vert \beta _{12}(D) (J_e,J_m) \Vert _{L^p({\mathbb {R}}^3)} \end{aligned}$$

provided that \(\frac{1}{p} > \frac{3}{4}\), \(\frac{1}{q} < \frac{1}{4}\) and \(\frac{1}{p} - \frac{1}{q} \ge \frac{2}{3}\).

Finally, the estimate around the singular points is shown in Sect. 6:

Proposition 2.4

Let \(1 \le p, q \le \infty \) and let \((E_\delta ,H_\delta )\) be as in (23). We find the following estimate to hold uniformly in \(|\delta | \ne 0\):

$$\begin{aligned} \Vert \beta _{13}(D) (E_\delta ,H_\delta ) \Vert _{L^q({\mathbb {R}}^3)} \lesssim \Vert \beta _{13}(D) (J_e,J_m) \Vert _{L^p({\mathbb {R}}^3)} \end{aligned}$$

provided that \(\frac{1}{p} > \frac{3}{4}\), \(\frac{1}{q} < \frac{1}{4}\) and \(\frac{1}{p} - \frac{1}{q} \ge \frac{2}{3}\).

Remark 2.5

Since the frequencies are bounded for these estimates, the precise form of \(Z_{\varepsilon ,\mu }\) (or \(Z^{\text {eff}}_{\varepsilon ,\mu }\)) is not important. It suffices to show the above estimates for the multiplier

$$\begin{aligned} A_\delta f(x) = \int _{{\mathbb {R}}^3} e^{ix.\xi } \frac{\beta _{1i}(\xi )}{p(\omega ,\xi ) + i\delta } {\hat{f}}(\xi ) \hbox {d}\xi . \end{aligned}$$

Again due to bounded frequencies and \(1<q<\infty \), the \(W^{1,q}({\mathbb {R}}^3)\)-estimates result from

$$\begin{aligned} \Vert \beta _{1i}(D) (E_\delta ,H_\delta ) \Vert _{W^{m,q}({\mathbb {R}}^3)} \lesssim _{m,q} \Vert \beta _{1i}(D) (E_\delta ,H_\delta ) \Vert _{L^q({\mathbb {R}}^3)} \qquad (i=1,2,3) \end{aligned}$$

as a consequence of Young’s inequality.

2.3 Proof of Theorem 1.1

By Propositions 2.1–2.4 we have uniform bounds in \(\delta \ne 0\):

$$\begin{aligned} \Vert (E_\delta ,H_\delta ) \Vert _{L^q({\mathbb {R}}^3)} \lesssim \Vert (J_e,J_m) \Vert _{L^{p_1}({\mathbb {R}}^3) \cap L^{p_2}({\mathbb {R}}^3)} \end{aligned}$$

for q, \(p_1\), \(p_2\) as in the assumptions of Theorem 1.1. Hence, by the Banach–Alaoglu–Bourbaki theorem there is a weak limit \((E,H) \in L^q({\mathbb {R}}^3;{\mathbb {C}}^6)\) along a sequence \(\delta \rightarrow 0\), and it satisfies the same bound. We have to show that the approximate solutions converge weakly to distributional solutions of

$$\begin{aligned} \left\{ \begin{array}{cl} ib(\xi ) {\hat{E}}(\xi ) + i \omega \mu {\hat{H}}(\xi ) &{}= {\hat{J}}_m(\xi ), \\ i b(\xi ) {\hat{H}}(\xi ) - i \omega \varepsilon {\hat{E}}(\xi ) &{}= {\hat{J}}_e(\xi ). \end{array} \right. \end{aligned}$$
(25)

Indeed, (23) gives

$$\begin{aligned} \begin{aligned}&ib(\xi ) {\hat{E}}_{\delta }(\xi ) + i \omega \mu {\hat{H}}_{\delta }(\xi ) \\&\quad = \frac{b(\xi )Z_{\varepsilon ,\mu }(\xi )}{\varepsilon _1\varepsilon _2\varepsilon _3(p(\omega ,\xi )+i\delta )} \big ( \omega {\hat{J}}_e(\xi ) - b(\xi ) \mu ^{-1} {\hat{J}}_m(\xi ) \big ) \\&\qquad - \frac{\omega \mu Z_{\mu ,\varepsilon }(\xi )}{\mu _1\mu _2\mu _3(p(\omega ,\xi )+i\delta )} \big ( b(\xi ) \varepsilon ^{-1} {\hat{J}}_e(\xi ) + \omega {\hat{J}}_m(\xi ) \big ) \\&\quad = \frac{\omega }{p(\omega ,\xi ) + i \delta } \left( \frac{b(\xi ) Z_{\varepsilon ,\mu }(\xi )}{ \varepsilon _1 \varepsilon _2 \varepsilon _3} - \frac{\mu Z_{\mu ,\varepsilon }(\xi ) b(\xi ) \varepsilon ^{-1}}{\mu _1 \mu _2 \mu _3} \right) {\hat{J}}_e(\xi ) \\&\qquad - \frac{1}{p(\omega ,\xi ) + i\delta } \left( \frac{ b(\xi ) Z_{\varepsilon ,\mu }(\xi ) b(\xi ) \mu ^{-1}}{\varepsilon _1 \varepsilon _2 \varepsilon _3} + \frac{\omega ^2 \mu Z_{\mu ,\varepsilon }(\xi )}{\mu _1 \mu _2 \mu _3} \right) {\hat{J}}_m(\xi ). \end{aligned} \end{aligned}$$

From (22) one infers after lengthy computations

$$\begin{aligned} \begin{aligned} \frac{b(\xi ) Z_{\varepsilon ,\mu }(\xi )}{\varepsilon _1 \varepsilon _2 \varepsilon _3} - \frac{\mu Z_{\mu ,\varepsilon }(\xi ) b(\xi ) \varepsilon ^{-1}}{\mu _1 \mu _2 \mu _3}&= 0, \\ \frac{b(\xi ) Z_{\varepsilon ,\mu }(\xi ) b(\xi ) \mu ^{-1}}{\varepsilon _1 \varepsilon _2 \varepsilon _3} + \frac{\omega ^2 \mu Z_{\mu ,\varepsilon }(\xi )}{\mu _1 \mu _2 \mu _3}&= -p(\omega ,\xi ) I_3. \end{aligned} \end{aligned}$$
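These two identities can be tested numerically. In the sketch below the sample values are arbitrary; we assume, as in the earlier sections of the paper, that \(b(\xi )\) denotes the matrix of the cross product \(\xi \times \cdot \) (the symbol of curl up to a factor \(i\)), and \(p(\omega ,\xi )\) is the Fresnel polynomial recalled in Sect. 3.

```python
def Z(xi, eps, mu, w):
    # entries of Z_{eps,mu}(xi) from (22); Z_{mu,eps} is obtained by swapping eps and mu
    x1, x2, x3 = xi; e1, e2, e3 = eps; m1, m2, m3 = mu
    q = x1**2/(m2*m3) + x2**2/(m1*m3) + x3**2/(m1*m2)
    Z11 = x1**2*q - w**2*((e2/m2 + e3/m3)*x1**2 + (e2/m1)*x2**2 + (e3/m1)*x3**2) + w**4*e2*e3
    Z22 = x2**2*q - w**2*((e1/m2)*x1**2 + (e3/m3 + e1/m1)*x2**2 + (e3/m2)*x3**2) + w**4*e1*e3
    Z33 = x3**2*q - w**2*((e1/m3)*x1**2 + (e2/m3)*x2**2 + (e1/m1 + e2/m2)*x3**2) + w**4*e1*e2
    Z12 = x1*x2*(q - w**2*e3/m3); Z13 = x1*x3*(q - w**2*e2/m2); Z23 = x2*x3*(q - w**2*e1/m1)
    return [[Z11, Z12, Z13], [Z12, Z22, Z23], [Z13, Z23, Z33]]

def p(w, xi, eps, mu):
    # Fresnel polynomial p(omega, xi) with q_0, q_1 as in Sect. 3
    e1, e2, e3 = eps; m1, m2, m3 = mu; x1, x2, x3 = xi
    q0 = (x1**2*(1/(e2*m3) + 1/(m2*e3)) + x2**2*(1/(e1*m3) + 1/(m1*e3))
          + x3**2*(1/(e1*m2) + 1/(e2*m1)))
    q1 = ((e1*x1**2 + e2*x2**2 + e3*x3**2)*(m1*x1**2 + m2*x2**2 + m3*x3**2)
          / (e1*e2*e3*m1*m2*m3))
    return -w**2*(w**4 - w**2*q0 + q1)

def matmul(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def diag(d):
    return [[d[i] if i == j else 0.0 for j in range(3)] for i in range(3)]

eps, mu, w = (1.0, 3.0, 9.0), (1.5, 0.5, 2.0), 1.3
xi = (0.7, -1.1, 0.4)
b = [[0.0, -xi[2], xi[1]], [xi[2], 0.0, -xi[0]], [-xi[1], xi[0], 0.0]]  # xi x .
Ze, Zm = Z(xi, eps, mu, w), Z(xi, mu, eps, w)
ce, cm = eps[0]*eps[1]*eps[2], mu[0]*mu[1]*mu[2]

A1 = matmul(b, Ze)
B1 = matmul(diag(mu), matmul(Zm, matmul(b, diag([1/e for e in eps]))))
first = [[A1[i][j]/ce - B1[i][j]/cm for j in range(3)] for i in range(3)]

A2 = matmul(matmul(b, Ze), matmul(b, diag([1/m for m in mu])))
B2 = matmul(diag(mu), Zm)
second = [[A2[i][j]/ce + w**2*B2[i][j]/cm for j in range(3)] for i in range(3)]

pv = p(w, xi, eps, mu)
print(max(abs(first[i][j]) for i in range(3) for j in range(3)) < 1e-9,
      max(abs(second[i][j] + (pv if i == j else 0.0))
          for i in range(3) for j in range(3)) < 1e-9)  # True True
```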

As a consequence we obtain

$$\begin{aligned} ib(\xi ) {\hat{E}}_{\delta }(\xi ) + i \omega \mu {\hat{H}}_{\delta }(\xi ) = \frac{p(\omega ,\xi )}{p(\omega ,\xi ) + i \delta } {\hat{J}}_m(\xi ) = {\hat{J}}_m(\xi ) - \frac{i \delta }{p(\omega ,\xi ) + i \delta } {\hat{J}}_m(\xi ). \end{aligned}$$

By Propositions 2.1–2.4 and Remark 2.5 we have

$$\begin{aligned} \Vert (p(\omega ,D) + i \delta )^{-1} J_m \Vert _{L^q({\mathbb {R}}^3)} \lesssim \Vert J_m \Vert _{L^{p_1}({\mathbb {R}}^3) \cap L^{p_2}({\mathbb {R}}^3)} \end{aligned}$$

and, when assuming \(J_m\in L^q({\mathbb {R}}^3)\),

$$\begin{aligned} \Vert (p(\omega ,D) + i \delta )^{-1} J_m \Vert _{W^{1,q}({\mathbb {R}}^3)} \lesssim \Vert J_m \Vert _{L^{p_1}({\mathbb {R}}^3) \cap L^q({\mathbb {R}}^3)} \end{aligned}$$

so that the only \(\delta \)-dependent term vanishes as \(\delta \rightarrow 0\). This implies

$$\begin{aligned} \nabla \times E + i \omega \mu H = J_m \quad \text {in }{\mathbb {R}}^3 \end{aligned}$$

in the distributional sense and even in the weak sense for \(J_m\in L^q({\mathbb {R}}^3)\). Similarly, one proves the validity of the second equation in (25), and the proof is complete. \(\square \)

2.4 Explicit Representations of Solutions

Finally, we give explicit representations of the constructed solutions. We recall Sokhotsky’s formula (cf. Sections 3.2 and 6.1 in [26]):

Proposition 2.6

Let \(H: {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) be such that \(| \nabla H(\xi ) | \ne 0\) at every point where \(H(\xi ) = 0\). Then we can define the distributional limit

$$\begin{aligned} (H(\xi ) \pm i0)^{-1} = \lim _{\varepsilon \rightarrow 0} (H(\xi ) \pm i \varepsilon )^{-1}. \end{aligned}$$

Furthermore,

$$\begin{aligned} (H(\xi ) \pm i0)^{-1} = p.v. \frac{1}{H(\xi )} \mp i \pi \delta (H) \end{aligned}$$

in the sense of distributions.

In the context of the simpler Helmholtz equation

$$\begin{aligned} (\Delta +1) u = -f, \end{aligned}$$

this allows us to write, for so-called outgoing solutions,

$$\begin{aligned} \begin{aligned} u(x)&= \frac{1}{(2 \pi )^d} \int _{{\mathbb {R}}^d} \frac{e^{ix.\xi }}{|\xi |^2-1 - i 0} {\hat{f}}(\xi ) \hbox {d}\xi \\&= \frac{1}{(2 \pi )^d} p.v. \int _{{\mathbb {R}}^d} \frac{e^{ix.\xi }}{|\xi |^2-1} {\hat{f}}(\xi ) \hbox {d}\xi + \frac{i \pi }{(2 \pi )^d} \int _{{\mathbb {S}}^{d-1}} {\hat{f}}(\xi ) e^{i x. \xi } \hbox {d}\sigma (\xi ). \end{aligned} \end{aligned}$$
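In one dimension, the limiting behavior behind Proposition 2.6 can be illustrated numerically: for the Gaussian \(f(x)=e^{-x^2}\), the principal value part integrates to zero by symmetry, so \(\int _{{\mathbb {R}}} f(x)(x+i\varepsilon )^{-1}\,\hbox {d}x \rightarrow -i\pi f(0) = -i\pi \) as \(\varepsilon \rightarrow 0\). The following sketch uses ad hoc grid parameters:

```python
import math

def sokhotsky_integral(eps, a=10.0, n=200_000):
    """Trapezoid rule for the integral of exp(-x^2)/(x + i*eps) over [-a, a]."""
    h = 2*a/n
    total = 0j
    for k in range(n + 1):
        x = -a + k*h
        weight = h if 0 < k < n else h/2
        total += weight*math.exp(-x*x)/(x + 1j*eps)
    return total

val = sokhotsky_integral(0.01)
# p.v. part vanishes by oddness; the delta part contributes -i*pi*f(0)
print(abs(val.real) < 1e-9, abs(val.imag + math.pi) < 0.05)  # True True
```

Decreasing \(\varepsilon \) (while refining the grid accordingly) drives the imaginary part closer to \(-\pi \).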

Proposition 2.6 suggests that the solutions to anisotropic Maxwell’s equations can again be written as a principal value and a delta distribution in Fourier space. However, Proposition 2.6 only allows us to make sense of the principal value and the delta distribution if \(S = \{\xi \in {\mathbb {R}}^3 \, : \, p(\omega ,\xi ) = 0 \}\) is a smooth manifold. But there are four isolated singular points \(\zeta _1,\ldots ,\zeta _4\in S\), as we will prove in Proposition 3.2. Still, we shall see how \(p.v. \frac{1}{p(\omega ,\xi )}\) and \(\delta _S(\xi )\) can be understood as Fourier multipliers with certain \(L^p\)-mapping properties. For a dense set, e.g., \(J \in {\mathcal {S}}({\mathbb {R}}^3)\) with \(\zeta _i \notin \text {supp} ({\hat{J}})\), we can define \(\delta _S\) as a Fourier multiplier

$$\begin{aligned} \int _{{\mathbb {R}}^3} \delta _S(\xi ) e^{ix.\xi } {\hat{J}}(\xi ) \hbox {d}\xi = \int _S e^{ix.\xi } {\hat{J}}(\xi ) \hbox {d}\sigma (\xi ). \end{aligned}$$

The density follows by Littlewood–Paley theory. As a consequence of Sects. 5 and 6, we have

$$\begin{aligned} \left\| \int _S e^{ix.\xi } {\hat{J}}(\xi ) \hbox {d}\sigma (\xi ) \right\| _{L^q({\mathbb {R}}^3)} \lesssim \Vert J \Vert _{L^p({\mathbb {R}}^3)} \end{aligned}$$

for p and q as in Proposition 2.4 with a bound independent of the support of \({\hat{J}}\). This allows us to extend \({\mathcal {F}}^{-1} \delta _S {\mathcal {F}}: L^p({\mathbb {R}}^3) \rightarrow L^q({\mathbb {R}}^3)\) by density. Likewise, we can make sense of

$$\begin{aligned} p.v. \int _{{\mathbb {R}}^3} \frac{e^{ix.\xi } \beta (\xi )}{p(\omega ,\xi )} {\hat{J}}(\xi ) \hbox {d}\xi \end{aligned}$$

with \(\beta \in C^\infty _c({\mathbb {R}}^3)\) for \(J \in {\mathcal {S}}({\mathbb {R}}^3)\) and \(\zeta _1,\ldots ,\zeta _4 \notin \text {supp } ({\hat{J}})\). This and (23) explain the formula

$$\begin{aligned} \begin{aligned} E&= \frac{-i \pi }{(2 \pi )^3\varepsilon _1\varepsilon _2\varepsilon _3} \int _{{\mathbb {R}}^3} \delta _{S}(\xi ) e^{ix.\xi } Z_{\varepsilon ,\mu }(\xi )\big (-i \omega {\hat{J}}_e(\xi ) + i b(\xi ) \mu ^{-1} {\hat{J}}_m(\xi ) \big ) \hbox {d}\xi \\&\quad + \frac{1}{(2 \pi )^3\varepsilon _1\varepsilon _2\varepsilon _3} p.v. \int _{{\mathbb {R}}^3} \frac{e^{ix.\xi }}{p(\omega ,\xi )} Z_{\varepsilon ,\mu }(\xi ) \big ( - i \omega {\hat{J}}_e(\xi ) + ib(\xi ) \mu ^{-1} {\hat{J}}_m(\xi ) \big ) \hbox {d}\xi , \\ H&= \frac{-i \pi }{(2 \pi )^3\mu _1\mu _2\mu _3} \int _{{\mathbb {R}}^3} \delta _{S}(\xi ) e^{ix.\xi }Z_{\mu ,\varepsilon }(\xi ) \big ( i b(\xi ) \varepsilon ^{-1} {\hat{J}}_e(\xi ) + i \omega {\hat{J}}_m(\xi ) \big ) \hbox {d}\xi \\&\quad + \frac{1}{(2 \pi )^3\mu _1\mu _2\mu _3 } p.v. \int _{{\mathbb {R}}^3} \frac{e^{ix.\xi }}{p(\omega ,\xi )} Z_{\mu ,\varepsilon }(\xi ) \big ( i b(\xi ) \varepsilon ^{-1} {\hat{J}}_e(\xi ) + i \omega {\hat{J}}_m(\xi ) \big ) \hbox {d}\xi . \end{aligned} \end{aligned}$$
(26)

for solutions to anisotropic Maxwell’s equations. Notice that in these formulae we may replace the matrices \(Z_{\varepsilon ,\mu }(\xi ),Z_{\mu ,\varepsilon }(\xi )\) by the corresponding effective matrices.

3 Properties of the Fresnel Surface

As explained above, the set \(\{\xi \in {\mathbb {R}}^3: p(\omega ,\xi )=0\}\) plays a decisive role in our analysis. This classical quartic surface is known as Fresnel’s surface; it was discovered by Augustin-Jean Fresnel in 1822 to describe the phenomenon of double refraction. In an optically anisotropic medium, e.g., a biaxial crystal, Fresnel’s surface corresponds to Huygens’s elementary spherical wave surfaces in isotropic media. This surface was already studied in the nineteenth century by Darboux [13]. For an account of the classical references we refer to the survey by Knörrer [32]. In the present context the curvature properties, collected by Liess [36, Appendix], will be most important. We think it is worthwhile to elaborate on Liess’s presentation, as we shall also discuss the first and second fundamental forms in suitable coordinates.

We recall the key properties of Fresnel’s wave surface

$$\begin{aligned} S=\{\xi \in {\mathbb {R}}^3: p(\omega ,\xi )=0\},\quad p(\omega ,\xi )= - \omega ^2 (\omega ^4 - \omega ^2 q_0(\xi ) + q_1(\xi )) \end{aligned}$$

and

$$\begin{aligned} q_0(\xi )&= \xi _1^2 \left( \frac{1}{\varepsilon _2 \mu _3} + \frac{1}{\mu _2 \varepsilon _3} \right) + \xi _2^2 \left( \frac{1}{\varepsilon _1 \mu _3} + \frac{1}{\mu _1 \varepsilon _3} \right) + \xi _3^2 \left( \frac{1}{\varepsilon _1 \mu _2} + \frac{1}{\varepsilon _2 \mu _1} \right) , \\ q_1(\xi )&= \frac{1}{\varepsilon _1 \varepsilon _2 \varepsilon _3 \mu _1 \mu _2 \mu _3} (\varepsilon _1 \xi _1^2 + \varepsilon _2 \xi _2^2 + \varepsilon _3 \xi _3^2) (\mu _1 \xi _1^2 + \mu _2 \xi _2^2 + \mu _3 \xi _3^2). \end{aligned}$$

Recall that we assume full anisotropy (4). We first notice that we can reduce our analysis to the case \(\mu _1=\mu _2=\mu _3=\omega =1\). This results from the substitution \(\varepsilon _i \rightarrow \varepsilon _i' = \frac{\varepsilon _i}{\mu _i}\) and change of coordinates \(\xi \rightarrow \eta \) given by

$$\begin{aligned} \eta _i = \frac{\xi _i}{\omega \sqrt{\mu _{i+1}\mu _{i+2}}} \qquad (i=1,2,3) . \end{aligned}$$

Above and henceforth, we use cyclic notation \(\mu _4 := \mu _1\), \(\mu _5 := \mu _2\), likewise for \(\varepsilon _i\). Notice that this change of coordinates results from a suitable dilation of the coordinates, which corresponds to an appropriate dilation in physical space. To see the equivalence, let us introduce the corresponding quantities for \(\mu _1=\mu _2=\mu _3=\omega =1\), namely \({\mathcal {N}}(\eta ):= 1-q_0^*(\eta )+q_1^*(\eta )\) where

$$\begin{aligned} q_0^*(\eta )&= \eta _1^2\left( \frac{1}{\varepsilon _2}+\frac{1}{\varepsilon _3}\right) + \eta _2^2\left( \frac{1}{\varepsilon _1}+\frac{1}{\varepsilon _3}\right) + \eta _3^2\left( \frac{1}{\varepsilon _1}+\frac{1}{\varepsilon _2}\right) , \\ q_1^*(\eta )&= \frac{1}{\varepsilon _1\varepsilon _2\varepsilon _3}(\varepsilon _1\eta _1^2+\varepsilon _2\eta _2^2+\varepsilon _3\eta _3^2)(\eta _1^2+\eta _2^2+\eta _3^2). \end{aligned}$$

Then one observes \(- \omega ^6 {\mathcal {N}}(\varepsilon ', \eta )=p(\omega ,\xi )\); hence, the qualitative properties of Fresnel’s surface in the special case \(\mu _1=\mu _2=\mu _3=\omega =1\) carry over to the general case. For this reason we focus on the analysis of

$$\begin{aligned} S^* = \{ \eta \in {\mathbb {R}}^3: {\mathcal {N}}(\eta ) = 1 -q_0^*(\eta )+q_1^*(\eta )=0\}. \end{aligned}$$
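The reduction can be confirmed numerically: implementing \(p(\omega ,\xi )\) from the formulas above and \({\mathcal {N}}\) with the substituted parameters \(\varepsilon _i' = \varepsilon _i/\mu _i\) and coordinates \(\eta _i = \xi _i/(\omega \sqrt{\mu _{i+1}\mu _{i+2}})\), one checks \(-\omega ^6{\mathcal {N}}(\eta ) = p(\omega ,\xi )\). The numbers below are arbitrary sample values.

```python
import math

def p(w, xi, eps, mu):
    """Fresnel polynomial p(omega, xi) for general eps, mu."""
    e1, e2, e3 = eps; m1, m2, m3 = mu; x1, x2, x3 = xi
    q0 = (x1**2*(1/(e2*m3) + 1/(m2*e3)) + x2**2*(1/(e1*m3) + 1/(m1*e3))
          + x3**2*(1/(e1*m2) + 1/(e2*m1)))
    q1 = ((e1*x1**2 + e2*x2**2 + e3*x3**2)*(m1*x1**2 + m2*x2**2 + m3*x3**2)
          / (e1*e2*e3*m1*m2*m3))
    return -w**2*(w**4 - w**2*q0 + q1)

def N(eta, eps):
    """The normalized case mu_1 = mu_2 = mu_3 = omega = 1."""
    e1, e2, e3 = eps; y1, y2, y3 = eta
    q0 = y1**2*(1/e2 + 1/e3) + y2**2*(1/e1 + 1/e3) + y3**2*(1/e1 + 1/e2)
    q1 = (e1*y1**2 + e2*y2**2 + e3*y3**2)*(y1**2 + y2**2 + y3**2)/(e1*e2*e3)
    return 1 - q0 + q1

eps, mu, w = (1.0, 3.0, 9.0), (1.5, 0.5, 2.0), 1.3
xi = (0.7, -1.1, 0.4)
eps_prime = tuple(e/m for e, m in zip(eps, mu))
# cyclic indices: eta_i = xi_i / (w * sqrt(mu_{i+1} mu_{i+2}))
eta = tuple(xi[i]/(w*math.sqrt(mu[(i+1) % 3]*mu[(i+2) % 3])) for i in range(3))
print(abs(-w**6*N(eta, eps_prime) - p(w, xi, eps, mu)) < 1e-10)  # True
```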

For the sake of brevity, we let again \(\varepsilon _i := \varepsilon _i'\) and notice that (4) then reads

$$\begin{aligned} \varepsilon _1\ne \varepsilon _2\ne \varepsilon _3\ne \varepsilon _1. \end{aligned}$$

We use the notation

$$\begin{aligned} \varepsilon _{i+1} \in \langle \varepsilon _i,\varepsilon _{i+2}\rangle \quad \text {if}\quad \varepsilon _i<\varepsilon _{i+1}<\varepsilon _{i+2} \text { or } \varepsilon _{i+2}<\varepsilon _{i+1}<\varepsilon _i. \end{aligned}$$

We first show that \(S^*\) is a smooth manifold away from four singular points. To see this, we compute

$$\begin{aligned}&\nabla {\mathcal {N}}(\eta ) = \begin{pmatrix} t_1(\eta )\eta _1 \\ t_2(\eta )\eta _2 \\ t_3(\eta )\eta _3 \\ \end{pmatrix},\quad \\&t_i(\eta ) = - \frac{2}{\varepsilon _{i+1}}-\frac{2}{\varepsilon _{i+2}} + \frac{2\varepsilon _i|\eta |^2+2(\varepsilon _1\eta _1^2+\varepsilon _2\eta _2^2+\varepsilon _3\eta _3^2)}{\varepsilon _1\varepsilon _2\varepsilon _3}. \end{aligned}$$

Definition 3.1

A point \(\eta \in S^*\) is called singular if \(\nabla {\mathcal {N}}(\eta )=0\). The set of singular points is denoted by \(\Sigma \).

The reason for this definition is that \(S^* {\setminus }\Sigma \) is a smooth manifold, whereas the neighborhoods of the singular points require a separate analysis. It turns out that there are precisely four singular points. This is a consequence of the following result.

Proposition 3.2

The set of singular points consists of all \(\eta \in S^*\) such that

$$\begin{aligned} \eta _i^2 = \frac{\varepsilon _{i+2} (\varepsilon _i-\varepsilon _{i+1})}{\varepsilon _i - \varepsilon _{i+2}}, \qquad \eta _{i+1}=0, \qquad \eta _{i+2}^2 = \frac{\varepsilon _i(\varepsilon _{i+2} -\varepsilon _{i+1})}{\varepsilon _{i+2} -\varepsilon _i}, \end{aligned}$$

where \(i\in \{1,2,3\}\) is uniquely determined by \(\varepsilon _{i+1}\in \langle \varepsilon _i,\varepsilon _{i+2}\rangle \).

Proof

We have to prove that each solution of \(\nabla {\mathcal {N}}(\eta )=(t_1(\eta )\eta _1,t_2(\eta )\eta _2,t_3(\eta )\eta _3) = (0,0,0)\) satisfies the above conditions. We first show \(\eta _j=0\) for some \(j\in \{1,2,3\}\). Otherwise, we would have \(t_1(\eta )=t_2(\eta )=t_3(\eta )=0\), and thus for \(j\in \{1,2,3\}\),

$$\begin{aligned} 2\varepsilon _j\eta _j^2+(\varepsilon _j+\varepsilon _{j+1})\eta _{j+1}^2+(\varepsilon _j+\varepsilon _{j+2})\eta _{j+2}^2 = \varepsilon _j(\varepsilon _{j+1}+\varepsilon _{j+2}). \end{aligned}$$

Hence,

$$\begin{aligned} \underbrace{ \begin{pmatrix} 2\varepsilon _1 &{}\quad \varepsilon _1+\varepsilon _2 &{}\quad \varepsilon _1+\varepsilon _3\\ \varepsilon _1+\varepsilon _2 &{}\quad 2\varepsilon _2 &{}\quad \varepsilon _2+\varepsilon _3\\ \varepsilon _1+\varepsilon _3 &{}\quad \varepsilon _2+\varepsilon _3 &{}\quad 2\varepsilon _3 \end{pmatrix}}_{=:M} \begin{pmatrix} \eta _1^2 \\ \eta _2^2 \\ \eta _3^2 \\ \end{pmatrix} = \begin{pmatrix} \varepsilon _1(\varepsilon _2+\varepsilon _3) \\ \varepsilon _2 (\varepsilon _1+\varepsilon _3) \\ \varepsilon _3(\varepsilon _1+\varepsilon _2) \\ \end{pmatrix}. \end{aligned}$$

The adjugate matrix of M is given by

$$\begin{aligned} adj(M) = \begin{pmatrix} -(\varepsilon _2-\varepsilon _3)^2 &{}\quad (\varepsilon _3-\varepsilon _1)(\varepsilon _3-\varepsilon _2) &{}\quad (\varepsilon _2-\varepsilon _1)(\varepsilon _2-\varepsilon _3)\\ (\varepsilon _3-\varepsilon _1)(\varepsilon _3-\varepsilon _2) &{}\quad -(\varepsilon _1-\varepsilon _3)^2 &{}\quad (\varepsilon _1-\varepsilon _2) (\varepsilon _1-\varepsilon _3)\\ (\varepsilon _2-\varepsilon _1)(\varepsilon _2-\varepsilon _3) &{}\quad (\varepsilon _1-\varepsilon _2)(\varepsilon _1-\varepsilon _3) &{}\quad -(\varepsilon _1-\varepsilon _2)^2 \end{pmatrix}. \end{aligned}$$

Multiplying this equation with adj(M) and using \(adj(M)M=\det (M)I_3=0\), which holds because \(M_{ij}=\varepsilon _i+\varepsilon _j\) defines a matrix of rank at most two, we get

$$\begin{aligned} 0&= adj(M)M \begin{pmatrix} \eta _1^2 \\ \eta _2^2 \\ \eta _3^2 \\ \end{pmatrix} = adj(M) \begin{pmatrix} \varepsilon _1(\varepsilon _2+\varepsilon _3) \\ \varepsilon _2(\varepsilon _1+\varepsilon _3) \\ \varepsilon _3(\varepsilon _1+\varepsilon _2) \\ \end{pmatrix} \\&= \begin{pmatrix} (\varepsilon _2-\varepsilon _3)^2(\varepsilon _1-\varepsilon _2)(\varepsilon _1-\varepsilon _3) \\ (\varepsilon _1-\varepsilon _3)^2(\varepsilon _2-\varepsilon _1)(\varepsilon _2-\varepsilon _3) \\ (\varepsilon _1-\varepsilon _2)^2(\varepsilon _3-\varepsilon _1)(\varepsilon _3-\varepsilon _2) \\ \end{pmatrix}. \end{aligned}$$

Since this is impossible due to the full anisotropy, we conclude \(\eta _j=0\) for some \(j\in \{1,2,3\}\).
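This linear-algebra step can be checked numerically: for pairwise distinct \(\varepsilon _i\) (here the sample values of Fig. 2), \(\det (M)=0\), the adjugate agrees with the displayed matrix, and applying it to the right-hand side yields the displayed nonzero vector. A sketch:

```python
def adjugate3(M):
    """Adjugate of a 3x3 matrix; cyclic index form of the cofactors."""
    C = [[M[(i+1) % 3][(j+1) % 3]*M[(i+2) % 3][(j+2) % 3]
          - M[(i+1) % 3][(j+2) % 3]*M[(i+2) % 3][(j+1) % 3]
          for j in range(3)] for i in range(3)]
    return [[C[j][i] for j in range(3)] for i in range(3)]   # transpose of cofactors

e1, e2, e3 = 1.0, 3.0, 9.0                                   # pairwise distinct
M = [[2*e1, e1 + e2, e1 + e3],
     [e1 + e2, 2*e2, e2 + e3],
     [e1 + e3, e2 + e3, 2*e3]]
adjM = adjugate3(M)
det = sum(M[0][j]*adjM[j][0] for j in range(3))              # expansion along row 0
displayed = [[-(e2 - e3)**2, (e3 - e1)*(e3 - e2), (e2 - e1)*(e2 - e3)],
             [(e3 - e1)*(e3 - e2), -(e1 - e3)**2, (e1 - e2)*(e1 - e3)],
             [(e2 - e1)*(e2 - e3), (e1 - e2)*(e1 - e3), -(e1 - e2)**2]]
rhs = [e1*(e2 + e3), e2*(e1 + e3), e3*(e1 + e2)]
image = [sum(adjM[i][j]*rhs[j] for j in range(3)) for i in range(3)]
claimed = [(e2 - e3)**2*(e1 - e2)*(e1 - e3),
           (e1 - e3)**2*(e2 - e1)*(e2 - e3),
           (e1 - e2)**2*(e3 - e1)*(e3 - e2)]
print(det == 0.0, adjM == displayed, image == claimed)  # True True True
```

All quantities here are exact small integers in floating point, so exact comparison is safe.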

Next we show that only one coordinate of \(\eta \) vanishes. First, \(\eta _1=\eta _2=\eta _3=0\) is impossible in view of \(\eta \in S^*=\{{\mathcal {N}}(\eta )=0\}\) and \({\mathcal {N}}(0,0,0)=1\ne 0\). So we argue by contradiction and suppose that \(\eta _{j+1}=\eta _{j+2}=0\) and \(\eta _j\ne 0,t_j(\eta )=0\) for some \(j\in \{1,2,3\}\). In view of the formula for \(t_j\) this implies \(2\eta _j^2=\varepsilon _{j+1}+\varepsilon _{j+2}\). Inserting this into \({\mathcal {N}}(\eta )=0\), we obtain \(\varepsilon _{j+1}=\varepsilon _{j+2}\) as a necessary condition, which contradicts our assumption of full anisotropy. Hence, precisely one coordinate vanishes, say \(\eta _{j+1}=0,t_j(\eta )=t_{j+2}(\eta )=0, \eta _{j},\eta _{j+2}\ne 0\) for \(j\in \{1,2,3\}\). Elementary linear algebra shows that these conditions are equivalent to

$$\begin{aligned} \eta _j^2 = \frac{\varepsilon _{j+2} (\varepsilon _j-\varepsilon _{j+1})}{\varepsilon _j - \varepsilon _{j+2}}, \qquad \eta _{j+1}=0, \qquad \eta _{j+2}^2 = \frac{\varepsilon _j(\varepsilon _{j+2} -\varepsilon _{j+1})}{\varepsilon _{j+2} -\varepsilon _j}. \end{aligned}$$

Since the expressions on the right-hand side are positive if and only if \(\varepsilon _{j+1}\in \langle \varepsilon _j,\varepsilon _{j+2}\rangle \), we get the claim. \(\square \)
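For the sample values \(\varepsilon =(1,3,9)\) of Fig. 2 we have \(\varepsilon _2\in \langle \varepsilon _1,\varepsilon _3\rangle \), so \(i=1\) in Proposition 3.2, and the four resulting points can be checked numerically to lie on \(S^*\) with vanishing gradient (tolerances are ad hoc):

```python
import math

def N_and_grad(eta, eps):
    """N(eta) and its gradient, with t_i as in the displayed formula."""
    e1, e2, e3 = eps
    r2 = sum(y*y for y in eta)
    s2 = sum(e*y*y for e, y in zip(eps, eta))
    q0 = (eta[0]**2*(1/e2 + 1/e3) + eta[1]**2*(1/e1 + 1/e3)
          + eta[2]**2*(1/e1 + 1/e2))
    Nval = 1 - q0 + s2*r2/(e1*e2*e3)
    t = [-2/eps[(i+1) % 3] - 2/eps[(i+2) % 3]
         + (2*eps[i]*r2 + 2*s2)/(e1*e2*e3) for i in range(3)]
    return Nval, [t[i]*eta[i] for i in range(3)]

e1, e2, e3 = 1.0, 3.0, 9.0               # eps2 lies between eps1 and eps3, so i = 1
y1 = math.sqrt(e3*(e1 - e2)/(e1 - e3))   # eta_1^2 = eps3(eps1 - eps2)/(eps1 - eps3)
y3 = math.sqrt(e1*(e3 - e2)/(e3 - e1))   # eta_3^2 = eps1(eps3 - eps2)/(eps3 - eps1)
for s1 in (+1, -1):
    for s3 in (+1, -1):
        Nval, grad = N_and_grad((s1*y1, 0.0, s3*y3), (e1, e2, e3))
        print(abs(Nval) < 1e-12 and max(map(abs, grad)) < 1e-12)  # True (4 times)
```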

In particular, the Gaussian curvature is well-defined and smooth on \(S^*{\setminus }\Sigma \), i.e., away from the four singular points. We now introduce the explicit parametrization of \(S^*\) by Darboux and Liess ([36, A3]). Our parameters \((s,t)\) correspond to \((\beta ,\alpha ')\) in Liess’s work. As in [36], this parametrization is given away from the four singular points and the principal sections \(S^*\cap \{\eta _i=0\}\).

Proposition 3.3

Let \(\sigma _1,\sigma _2,\sigma _3\in \{-1,+1\}\). Then a smooth parametrization of \((S^* {\setminus }\Sigma )\cap \bigcap _{i=1}^3 \{\sigma _i \eta _i>0\}\) is given by

$$\begin{aligned} \Phi _i(s,t) := \sigma _i \sqrt{\frac{\varepsilon _1\varepsilon _2\varepsilon _3(\varepsilon _i-s)(t^{-1}-\varepsilon _i^{-1})}{(\varepsilon _i-\varepsilon _{i+1})(\varepsilon _i-\varepsilon _{i+2})}} \qquad (i=1,2,3). \end{aligned}$$

For \(j\in \{1,2,3\}\) such that \(\varepsilon _j<\varepsilon _{j+1}<\varepsilon _{j+2}\) we either have \(\varepsilon _j<s<\varepsilon _{j+1}<t<\varepsilon _{j+2}\) or \(\varepsilon _j<t<\varepsilon _{j+1}<s<\varepsilon _{j+2}\).

Proof

If we define \(\eta :=\Phi (s,t)\in {\mathbb {R}}^3\), then one can successively verify

$$\begin{aligned} \eta _1^2+\eta _2^2+\eta _3^2&= s, \quad \varepsilon _1\eta _1^2+\varepsilon _2\eta _2^2+\varepsilon _3\eta _3^2 = \varepsilon _1\varepsilon _2\varepsilon _3 t^{-1}, \\ \quad q_1^*(\eta )&= st^{-1},\quad q_0^*(\eta ) = 1+st^{-1}. \end{aligned}$$

This implies \({\mathcal {N}}(\eta )= 1- (1+st^{-1})+ st^{-1}=0\), which proves \(\Phi (s,t)\in S^*\) for all st such that the argument of the square root is positive. On the other hand, every point of \((S^*{\setminus }\Sigma )\cap \bigcap _{i=1}^3 \{\sigma _i \eta _i>0\}\) can be written in this way. To see this, one solves the linear system

$$\begin{aligned} s&=\eta _1^2+\eta _2^2+\eta _3^2,\qquad \varepsilon _1\eta _1^2+\varepsilon _2\eta _2^2+\varepsilon _3\eta _3^2 = \varepsilon _1\varepsilon _2\varepsilon _3 t^{-1}, \\ 0&= {\mathcal {N}}(\eta ) = 1- \eta _1^2\left( \frac{1}{\varepsilon _2}+\frac{1}{\varepsilon _3}\right) - \eta _2^2\left( \frac{1}{\varepsilon _1}+\frac{1}{\varepsilon _3}\right) - \eta _3^2\left( \frac{1}{\varepsilon _1}+\frac{1}{\varepsilon _2}\right) + \frac{s}{t} \end{aligned}$$

for \(\eta _1^2,\eta _2^2,\eta _3^2\). In this way one finds \(\eta _i^2=\Phi _i(s,t)^2\), so \(\Phi \) is a smooth parametrization of the set \((S^* {\setminus }\Sigma )\cap \bigcap _{i=1}^3 \{\sigma _i \eta _i>0\}\). A computation shows that \(\Phi =(\Phi _1,\Phi _2,\Phi _3)\) is well-defined (the arguments of all square roots are positive) if and only if either \(\varepsilon _j<s<\varepsilon _{j+1}<t<\varepsilon _{j+2}\) or \(\varepsilon _j<t<\varepsilon _{j+1}<s<\varepsilon _{j+2}\) holds provided that \(\varepsilon _j<\varepsilon _{j+1}<\varepsilon _{j+2}\). \(\square \)
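The identities in the proof are easy to confirm numerically for sample parameters (here \(\varepsilon =(1,3,9)\) and a point on the inner sheet; all values are illustrative choices):

```python
import math

def Phi(s, t, eps, sigma=(1, 1, 1)):
    """Darboux--Liess parametrization for the sign choice sigma."""
    e = eps
    return [sigma[i]*math.sqrt(e[0]*e[1]*e[2]*(e[i] - s)*(1/t - 1/e[i])
            / ((e[i] - e[(i+1) % 3])*(e[i] - e[(i+2) % 3]))) for i in range(3)]

eps = (1.0, 3.0, 9.0)
s, t = 2.0, 5.0                     # inner sheet: eps1 < s < eps2 < t < eps3
eta = Phi(s, t, eps)
r2 = sum(y*y for y in eta)                       # should equal s
w2 = sum(e*y*y for e, y in zip(eps, eta))        # should equal eps1*eps2*eps3/t
q0 = (eta[0]**2*(1/eps[1] + 1/eps[2]) + eta[1]**2*(1/eps[0] + 1/eps[2])
      + eta[2]**2*(1/eps[0] + 1/eps[1]))
q1 = w2*r2/(eps[0]*eps[1]*eps[2])
print(abs(r2 - s) < 1e-12,
      abs(w2 - eps[0]*eps[1]*eps[2]/t) < 1e-12,
      abs(q1 - s/t) < 1e-12,
      abs(q0 - (1 + s/t)) < 1e-12)  # True True True True
```

In particular \({\mathcal {N}}(\eta ) = 1 - q_0^*(\eta ) + q_1^*(\eta ) = 0\), so the point lies on \(S^*\).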

We note that the two parameter regions \(\varepsilon _j<s<\varepsilon _{j+1}<t<\varepsilon _{j+2}\) and \(\varepsilon _j<t<\varepsilon _{j+1}<s<\varepsilon _{j+2}\) give rise to the inner, respectively, outer sheet of the wave surface, cf. Fig. 2. Both sheets meet at the singular points, which formally correspond to \(s=t=\varepsilon _{j+1}\), where, in accordance with Proposition 3.2, one has \(\eta _{j+1}=\Phi _{j+1}(s,t)=0\). We now turn toward the computation of the Gaussian curvature on \(S^* {\setminus }\Sigma \). This will first be done away from the principal sections, but the formula persists in the principal sections since \(S^*\) is a smooth manifold there, as we showed above. We start with computing the relevant derivatives for the first and second fundamental forms of \(S^*\):

$$\begin{aligned} \partial _s\Phi _i(s,t)&= \frac{1}{2(s-\varepsilon _i)}\Phi _i,\quad \partial _t\Phi _i(s,t) = \frac{\varepsilon _i}{2t(t-\varepsilon _i)}\Phi _i,\quad \\ \partial _{ss}\Phi _i(s,t)&= -\frac{1}{4(s-\varepsilon _i)^2}\Phi _i,\\ \partial _{st}\Phi _i(s,t)&= \frac{\varepsilon _i}{4t(t-\varepsilon _i)(s-\varepsilon _i)}\Phi _i, \qquad \partial _{tt}\Phi _i(s,t) = \frac{\varepsilon _i(3\varepsilon _i-4t)}{4t^2(t-\varepsilon _i)^2}\Phi _i. \end{aligned}$$

From these formulae one gets the following.

Proposition 3.4

The first fundamental form of \(S^* {\setminus }\Sigma \) is given by

$$\begin{aligned} E(s,t)\hbox {d}s^2+2F(s,t)\,\hbox {d}s\,\hbox {d}t+G(s,t)\,\hbox {d}t^2, \end{aligned}$$

where

$$\begin{aligned} E(s,t)&= \frac{s^2t-(\varepsilon _1+\varepsilon _2+\varepsilon _3)st + (\varepsilon _1\varepsilon _2+\varepsilon _1\varepsilon _3+\varepsilon _2\varepsilon _3)t-\varepsilon _1\varepsilon _2\varepsilon _3}{4t(s-\varepsilon _1)(s-\varepsilon _2)(s-\varepsilon _3)}, \\ F(s,t)&= 0, \\ G(s,t)&= \frac{\varepsilon _1\varepsilon _2\varepsilon _3(s-t)}{4t^2(t-\varepsilon _1)(t-\varepsilon _2)(t-\varepsilon _3)}. \end{aligned}$$
Fig. 2
figure 2

Fresnel’s wave surface: inner sheet (left) and outer sheet (middle) for \(\varepsilon _1 = 1\), \(\varepsilon _2 = 3\), \(\varepsilon _3 = 9\). The contrast on the outer sheet highlights regions of identical Gaussian curvature. The Hamiltonian circles (blue in color) encase the singular points inside regions of brighter contrast. The contact of inner (light shade, yellow in color) and half of the outer sheet (dark shade, red in color) at two singular points is depicted in the right figure (color figure online)

Proof

This follows from lengthy, but straightforward computations based on

$$\begin{aligned} E(s,t)&= \langle \partial _s\Phi (s,t),\partial _s\Phi (s,t)\rangle = \sum _{i=1}^3 \frac{\Phi _i(s,t)^2}{4(s-\varepsilon _i)^2} ,\\ F(s,t)&= \langle \partial _s\Phi (s,t),\partial _t\Phi (s,t)\rangle = \sum _{i=1}^3 \frac{\varepsilon _i\Phi _i(s,t)^2}{4t(t-\varepsilon _i)(s-\varepsilon _i)}, \\ G(s,t)&= \langle \partial _t\Phi (s,t),\partial _t\Phi (s,t)\rangle = \sum _{i=1}^3 \frac{\varepsilon _i^2\Phi _i(s,t)^2}{4t^2(t-\varepsilon _i)^2}. \end{aligned}$$

\(\square \)
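A numerical spot check of these closed forms (again a sketch, assuming the explicit parametrization \(\Phi _i(s,t)^2 = \varepsilon _{i+1}\varepsilon _{i+2}(s-\varepsilon _i)(t-\varepsilon _i)/(t(\varepsilon _i-\varepsilon _{i+1})(\varepsilon _i-\varepsilon _{i+2}))\)) compares the sums from the proof with the stated formulae for E, F, G:

```python
# Sample values on the inner sheet: eps1 < s < eps2 < t < eps3.
eps = (1.0, 3.0, 9.0)
e1, e2, e3 = eps
s, t = 2.0, 5.0

def Phi2(i):
    # Assumed explicit parametrization of Phi_i^2 (cyclic indices mod 3).
    j, k = (i + 1) % 3, (i + 2) % 3
    return eps[j] * eps[k] * (s - eps[i]) * (t - eps[i]) \
        / (t * (eps[i] - eps[j]) * (eps[i] - eps[k]))

# The sums appearing in the proof of Proposition 3.4 ...
E_sum = sum(Phi2(i) / (4 * (s - eps[i]) ** 2) for i in range(3))
F_sum = sum(eps[i] * Phi2(i) / (4 * t * (t - eps[i]) * (s - eps[i]))
            for i in range(3))
G_sum = sum(eps[i] ** 2 * Phi2(i) / (4 * t ** 2 * (t - eps[i]) ** 2)
            for i in range(3))

# ... versus the closed forms in the statement.
E_closed = (s**2*t - (e1+e2+e3)*s*t + (e1*e2+e1*e3+e2*e3)*t - e1*e2*e3) \
    / (4*t*(s-e1)*(s-e2)*(s-e3))
G_closed = e1*e2*e3*(s-t) / (4*t**2*(t-e1)*(t-e2)*(t-e3))
```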

To write down the second fundamental form, we introduce the following functions:

$$\begin{aligned} m(s,t)&:=\left( \frac{\varepsilon _1\varepsilon _2\varepsilon _3}{(t-s)(s^2t-(\varepsilon _1+\varepsilon _2+\varepsilon _3)ts+(\varepsilon _1\varepsilon _2+\varepsilon _1\varepsilon _3+\varepsilon _2\varepsilon _3)t-\varepsilon _1\varepsilon _2\varepsilon _3)}\right) ^{1/2}, \\ P_L(s,t)&:= s^2t-2st^2+(\varepsilon _1+\varepsilon _2+\varepsilon _3)t^2-(\varepsilon _1\varepsilon _2+\varepsilon _1\varepsilon _3+\varepsilon _2\varepsilon _3)t+\varepsilon _1\varepsilon _2\varepsilon _3, \\ P_N(s,t)&:= -s^2t^2+(\varepsilon _1+\varepsilon _2+\varepsilon _3)st^2-(\varepsilon _1\varepsilon _2+\varepsilon _1\varepsilon _3+\varepsilon _2\varepsilon _3)t^2+\varepsilon _1\varepsilon _2\varepsilon _3(2t-s). \end{aligned}$$

Proposition 3.5

The second fundamental form of \(S^* {\setminus }\Sigma \) is given by

$$\begin{aligned} L(s,t)\hbox {d}s^2+2M(s,t)\,\hbox {d}s\,\hbox {d}t+N(s,t)\,\hbox {d}t^2, \end{aligned}$$

where

$$\begin{aligned} L(s,t)&= \frac{m(s,t)P_L(s,t)}{4t(s-\varepsilon _1)(s-\varepsilon _2)(s-\varepsilon _3)}, \quad M(s,t) = \frac{m(s,t)}{4t},\\ N(s,t)&= \frac{m(s,t)P_N(s,t)}{ 4t^2(t-\varepsilon _1)(t-\varepsilon _2)(t-\varepsilon _3)}. \end{aligned}$$

Proof

By definition, the functions \(L, M, N\) are given by

$$\begin{aligned}&L(s,t) = \langle \nu (s,t),\partial _{ss}\Phi (s,t)\rangle , \quad M(s,t) = \langle \nu (s,t),\partial _{st}\Phi (s,t)\rangle , \\&N(s,t) = \langle \nu (s,t),\partial _{tt}\Phi (s,t)\rangle \end{aligned}$$

where \(\nu (s,t)\) denotes the outer unit normal on \(S {\setminus }\Sigma \) at the point \(\Phi (s,t)\). In Euclidean coordinates, a normal at \(\eta =\Phi (s,t)\) is given by \(\nabla {\mathcal {N}}(\eta )=(t_1(\eta )\eta _1,t_2(\eta )\eta _2,t_3(\eta )\eta _3)\). So we define

$$\begin{aligned} {\tilde{\nu }}_i(s,t)&:= 2t_i(\Phi (s,t)) \Phi _i(s,t) = \left( -\frac{1}{\varepsilon _{i+1}}-\frac{1}{\varepsilon _{i+2}}+\frac{s}{\varepsilon _{i+1}\varepsilon _{i+2}}+\frac{1}{t}\right) \Phi _i(s,t) \end{aligned}$$

and obtain after normalization

$$\begin{aligned} \nu _i(s,t)&= \frac{m(s,t)t}{2}{\tilde{\nu }}_i(s,t). \end{aligned}$$

Using this formula for the unit normal field \(\nu \), and plugging in the formulae for \(\partial _{ss}\Phi ,\partial _{st}\Phi ,\partial _{tt}\Phi \), one obtains the above expressions for \(L(s,t)\), \(M(s,t)\), \(N(s,t)\).

\(\square \)

We continue with the formulae for the Gaussian and mean curvature, which were given in (A.1), (A.2) in Liess’ work [36].

Proposition 3.6

The Gaussian curvature at \(\Phi (s,t)\in S^* {\setminus }\Sigma \) is given by

$$\begin{aligned} K(s,t) = \frac{ (st-(\varepsilon _1+\varepsilon _2)t+\varepsilon _1\varepsilon _2) (st-(\varepsilon _1+\varepsilon _3)t+\varepsilon _1\varepsilon _3) (st-(\varepsilon _2+\varepsilon _3)t+\varepsilon _2\varepsilon _3)}{ (s-t)(s^2t-(\varepsilon _1+\varepsilon _2+\varepsilon _3)st + (\varepsilon _1\varepsilon _2+\varepsilon _1\varepsilon _3+\varepsilon _2\varepsilon _3)t-\varepsilon _1\varepsilon _2\varepsilon _3)^2}. \end{aligned}$$

Proof

The determinant of the first fundamental form is given by

$$\begin{aligned}&(EG-F^2)(s,t) \\&\quad = \frac{s^2t-(\varepsilon _1+\varepsilon _2+\varepsilon _3)st + (\varepsilon _1\varepsilon _2+\varepsilon _1\varepsilon _3+\varepsilon _2\varepsilon _3)t-\varepsilon _1\varepsilon _2\varepsilon _3 }{4t(s-\varepsilon _1)(s-\varepsilon _2)(s-\varepsilon _3)} \\&\qquad \times \frac{\varepsilon _1\varepsilon _2\varepsilon _3(s-t)}{4t^2(t-\varepsilon _1)(t-\varepsilon _2)(t-\varepsilon _3)} \\&\quad = \frac{\varepsilon _1\varepsilon _2\varepsilon _3(s-t)(s^2t-(\varepsilon _1+\varepsilon _2+\varepsilon _3)st + (\varepsilon _1\varepsilon _2+\varepsilon _1\varepsilon _3+\varepsilon _2\varepsilon _3)t-\varepsilon _1\varepsilon _2\varepsilon _3) }{16t^3(s-\varepsilon _1)(s-\varepsilon _2)(s-\varepsilon _3)(t-\varepsilon _1)(t-\varepsilon _2)(t-\varepsilon _3)} \end{aligned}$$

The determinant of the second fundamental form is

$$\begin{aligned}&(LN-M^2)(s,t) \\&\quad = \frac{m(s,t)P_L(s,t)}{4t(s-\varepsilon _1)(s-\varepsilon _2)(s-\varepsilon _3)} \cdot \frac{m(s,t)P_N(s,t)}{ 4t^2(t-\varepsilon _1)(t-\varepsilon _2)(t-\varepsilon _3)} - \frac{m(s,t)^2}{16t^2} \\&\quad = \frac{m(s,t)^2P_L(s,t)P_N(s,t)}{16t^3(s-\varepsilon _1)(s-\varepsilon _2)(s-\varepsilon _3)(t-\varepsilon _1)(t-\varepsilon _2)(t-\varepsilon _3)} - \frac{m(s,t)^2}{16t^2} \\&\quad = \frac{m(s,t)^2\left[ P_L(s,t)P_N(s,t) - t(s-\varepsilon _1)(s-\varepsilon _2)(s-\varepsilon _3)(t-\varepsilon _1)(t-\varepsilon _2)(t-\varepsilon _3) \right] }{16t^3(s-\varepsilon _1)(s-\varepsilon _2)(s-\varepsilon _3)(t-\varepsilon _1)(t-\varepsilon _2)(t-\varepsilon _3)} \\&\quad = \frac{\varepsilon _1\varepsilon _2\varepsilon _3 (st-(\varepsilon _1+\varepsilon _2)t+\varepsilon _1\varepsilon _2) }{16t^3(s-\varepsilon _1)(s-\varepsilon _2)(s-\varepsilon _3)(t-\varepsilon _1)(t-\varepsilon _2)(t-\varepsilon _3) } \\&\qquad \times \frac{(st-(\varepsilon _1+\varepsilon _3)t+\varepsilon _1\varepsilon _3) (st-(\varepsilon _2+\varepsilon _3)t+\varepsilon _2\varepsilon _3)}{(s^2t-(\varepsilon _1+\varepsilon _2+\varepsilon _3)ts+(\varepsilon _1\varepsilon _2+\varepsilon _1\varepsilon _3+\varepsilon _2\varepsilon _3)t-\varepsilon _1\varepsilon _2\varepsilon _3)}. \end{aligned}$$

So the Gaussian curvature at the point \(\Phi (s,t)\) is

$$\begin{aligned}&K(s,t) \\&\quad = \frac{(LN-M^2)(s,t)}{(EG-F^2)(s,t)} \\&\quad = \frac{ (st-(\varepsilon _1+\varepsilon _2)t+\varepsilon _1\varepsilon _2) (st-(\varepsilon _1+\varepsilon _3)t+\varepsilon _1\varepsilon _3) (st-(\varepsilon _2+\varepsilon _3)t+\varepsilon _2\varepsilon _3)}{ (s-t)(s^2t-(\varepsilon _1+\varepsilon _2+\varepsilon _3)st + (\varepsilon _1\varepsilon _2+\varepsilon _1\varepsilon _3+\varepsilon _2\varepsilon _3)t-\varepsilon _1\varepsilon _2\varepsilon _3)^2} . \end{aligned}$$

\(\square \)
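The factorization in the last step of the proof rests on the polynomial identity \(P_L P_N - t\prod _i(s-\varepsilon _i)\prod _i(t-\varepsilon _i) = (t-s)\prod _{i<j}\big (st-(\varepsilon _i+\varepsilon _j)t+\varepsilon _i\varepsilon _j\big )\). The following sketch (not part of the proof) tests it at random points:

```python
import random

random.seed(0)

def check(e1, e2, e3, s, t):
    # Elementary symmetric functions of the permittivities.
    se = e1 + e2 + e3
    pe = e1*e2 + e1*e3 + e2*e3
    qe = e1*e2*e3
    PL = s**2*t - 2*s*t**2 + se*t**2 - pe*t + qe
    PN = -s**2*t**2 + se*s*t**2 - pe*t**2 + qe*(2*t - s)
    lhs = PL*PN - t*(s-e1)*(s-e2)*(s-e3)*(t-e1)*(t-e2)*(t-e3)
    rhs = (t - s) * (s*t-(e1+e2)*t+e1*e2) * (s*t-(e1+e3)*t+e1*e3) \
        * (s*t-(e2+e3)*t+e2*e3)
    # Relative tolerance to absorb floating-point cancellation.
    return abs(lhs - rhs) <= 1e-9 * (1 + abs(lhs) + abs(rhs))

ok = all(check(*[random.uniform(1, 10) for _ in range(5)]) for _ in range(100))
```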

Following Liess, we define \(\alpha (s,t)\) to be the squared distance from the origin to the tangent plane through \(\Phi (s,t)\in S^* {\setminus }\Sigma \). Then

$$\begin{aligned} \alpha (s,t)&:= \left( \frac{\nabla {\mathcal {N}}(\eta )\cdot \eta }{|\nabla {\mathcal {N}}(\eta )|}\right) ^2\Big |_{\eta =\Phi (s,t)} \\&= \frac{(\nabla {\mathcal {N}}(\Phi (s,t))\cdot \Phi (s,t))^2}{|\nabla {\mathcal {N}}(\Phi (s,t))|^2} \\&= \frac{(\sum _{i=1}^3 t_i(\Phi (s,t))\Phi _i(s,t))^2}{\sum _{i=1}^3 t_i(\Phi (s,t))^2\Phi _i(s,t)^2} \\&= \frac{(t-s)\varepsilon _1\varepsilon _2\varepsilon _3}{s^2t-(\varepsilon _1+\varepsilon _2+\varepsilon _3)st + (\varepsilon _1\varepsilon _2+\varepsilon _1\varepsilon _3+\varepsilon _2\varepsilon _3)t-\varepsilon _1\varepsilon _2\varepsilon _3}. \end{aligned}$$

From this we deduce

$$\begin{aligned} K(s,t) = \frac{(\alpha (s,t)-\varepsilon _1)(\alpha (s,t)-\varepsilon _2)(\alpha (s,t)-\varepsilon _3)}{ \alpha (s,t)(s-\varepsilon _1)(s-\varepsilon _2)(s-\varepsilon _3)}. \end{aligned}$$
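This identity can be checked numerically; the sketch below (sample values on the inner sheet, not part of the argument) evaluates the Gaussian curvature both via Proposition 3.6 and via \(\alpha \):

```python
e1, e2, e3 = 1.0, 3.0, 9.0
s, t = 2.0, 5.0  # inner-sheet sample point: eps1 < s < eps2 < t < eps3

# Common denominator appearing in alpha and in Proposition 3.6.
D = s**2*t - (e1+e2+e3)*s*t + (e1*e2+e1*e3+e2*e3)*t - e1*e2*e3
alpha = (t - s)*e1*e2*e3 / D

# Gaussian curvature from Proposition 3.6 ...
K_explicit = ((s*t-(e1+e2)*t+e1*e2) * (s*t-(e1+e3)*t+e1*e3)
              * (s*t-(e2+e3)*t+e2*e3)) / ((s - t) * D**2)
# ... and from the alpha-expression above.
K_alpha = (alpha-e1)*(alpha-e2)*(alpha-e3) / (alpha*(s-e1)*(s-e2)*(s-e3))
```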

Proposition 3.7

The mean curvature at \(\Phi (s,t)\in S^* {\setminus }\Sigma \) is given by \((\alpha =\alpha (s,t))\)

$$\begin{aligned} K_m (s,t)&= - \frac{1}{2} \left( \frac{s}{\sqrt{\alpha }} K(s,t)\right. \\&\quad - \frac{1}{\sqrt{\alpha }} \left( \frac{(\alpha - \varepsilon _1)(\alpha -\varepsilon _2)}{(s-\varepsilon _1)(s-\varepsilon _2)} + \frac{(\alpha -\varepsilon _2)(\alpha -\varepsilon _3)}{(s-\varepsilon _2)(s-\varepsilon _3)}\right. \\&\quad \left. \left. + \frac{(\alpha - \varepsilon _1)(\alpha - \varepsilon _3)}{(s-\varepsilon _1)(s-\varepsilon _3)} \right) \right) . \end{aligned}$$

Proof

This is a consequence of the formula

$$\begin{aligned} K_m(s,t) = \frac{G(s, t)L(s, t)-2F(s, t)M(s, t)+E(s, t)N(s, t)}{2(E(s, t)G(s, t)-F(s, t)^2)}, \end{aligned}$$

and the coefficients of first and second fundamental form computed in Propositions 3.4 and 3.5. \(\square \)
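The consistency of the stated expression with the fundamental forms of Propositions 3.4 and 3.5 can be verified numerically (a sketch with sample values on the inner sheet, not part of the proof):

```python
import math

# Sample values on the inner sheet: eps1 < s < eps2 < t < eps3.
e1, e2, e3 = 1.0, 3.0, 9.0
s, t = 2.0, 5.0
se, pe, qe = e1 + e2 + e3, e1*e2 + e1*e3 + e2*e3, e1*e2*e3

# Coefficients of the first and second fundamental forms (Props. 3.4, 3.5).
D = s**2*t - se*s*t + pe*t - qe
E = D / (4*t*(s - e1)*(s - e2)*(s - e3))
F = 0.0
G = qe*(s - t) / (4*t**2*(t - e1)*(t - e2)*(t - e3))
m = math.sqrt(qe / ((t - s)*D))
PL = s**2*t - 2*s*t**2 + se*t**2 - pe*t + qe
PN = -s**2*t**2 + se*s*t**2 - pe*t**2 + qe*(2*t - s)
L = m*PL / (4*t*(s - e1)*(s - e2)*(s - e3))
M = m / (4*t)
N = m*PN / (4*t**2*(t - e1)*(t - e2)*(t - e3))

# Mean curvature from the formula in the proof ...
Km_forms = (G*L - 2*F*M + E*N) / (2*(E*G - F**2))

# ... and from the alpha-expression of Proposition 3.7.
alpha = (t - s)*qe / D
K = (alpha - e1)*(alpha - e2)*(alpha - e3) / (alpha*(s - e1)*(s - e2)*(s - e3))
pair = ((alpha - e1)*(alpha - e2)/((s - e1)*(s - e2))
        + (alpha - e2)*(alpha - e3)/((s - e2)*(s - e3))
        + (alpha - e1)*(alpha - e3)/((s - e1)*(s - e3)))
Km_alpha = -0.5*(s/math.sqrt(alpha)*K - pair/math.sqrt(alpha))
```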

We remark that our result deviates by the factor \(- \frac{1}{2}\) from Liess’ formula [36, (A.2), p. 91]. This however does not change the curvature properties, which we describe in the following: The Gaussian curvature K vanishes precisely in those points where \(\alpha (s,t)\) attains one of the values \(\varepsilon _1,\varepsilon _2,\varepsilon _3\). We assume \(\varepsilon _1<\varepsilon _2<\varepsilon _3\) for simplicity. Then one has \(\varepsilon _1<\alpha (s,t)<\varepsilon _3\) so that the Gaussian curvature vanishes precisely at those points where \(\alpha (s,t)=\varepsilon _2\). Those are given by \(t=T(s)\) where

$$\begin{aligned} T(s) = \frac{\varepsilon _1 \varepsilon _3 (\varepsilon _2 -s) }{s^2 - (\varepsilon _1 + \varepsilon _2 + \varepsilon _3) s + (\varepsilon _1 \varepsilon _2 + \varepsilon _1 \varepsilon _3 + \varepsilon _2 \varepsilon _3) - \varepsilon _1 \varepsilon _3} = \frac{\varepsilon _1 \varepsilon _3}{\varepsilon _1 + \varepsilon _3 -s}.\nonumber \\ \end{aligned}$$
(27)

This is the parametrization of a one-dimensional submanifold called a Hamiltonian circle. Notice that each of the four singular points has its own Hamiltonian circle. (They are distinguished by \(\sigma _i,\sigma _{i+2}\in \{-1,+1\}\) in Proposition 3.2.) By Proposition 3.7 the mean curvature is nonzero along the Hamiltonian circles. We thus conclude that in the smooth regular part of Fresnel’s wave surface, at least one principal curvature is bounded away from zero. The Gaussian curvature is positive on the inner sheet and on the parts of the outer sheet that lie outside the Hamiltonian circles, while it is negative inside the Hamiltonian circles, i.e., close to the singular points on the outer sheet. In Proposition 6.2 we show that the Hessian matrix \(D^2 p(\omega ,\zeta )\) at a singular point is indefinite.
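These computations can be spot-checked numerically. The following sketch (sample values, not part of the argument) verifies that the two expressions in (27) agree, that \(\alpha (s,T(s))=\varepsilon _2\), and that the Gaussian curvature of Proposition 3.6 vanishes there:

```python
e1, e2, e3 = 1.0, 3.0, 9.0
s = 5.0  # outer-sheet parameter: eps2 < s < eps3

# The two expressions for T(s) in (27).
T1 = e1*e3*(e2 - s) / (s**2 - (e1+e2+e3)*s + (e1*e2+e1*e3+e2*e3) - e1*e3)
T2 = e1*e3 / (e1 + e3 - s)
t = T2

# alpha(s, T(s)) should equal eps2, and K should vanish there.
D = s**2*t - (e1+e2+e3)*s*t + (e1*e2+e1*e3+e2*e3)*t - e1*e2*e3
alpha = (t - s)*e1*e2*e3 / D
K = ((s*t-(e1+e2)*t+e1*e2) * (s*t-(e1+e3)*t+e1*e3)
     * (s*t-(e2+e3)*t+e2*e3)) / ((s - t) * D**2)
```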

To summarize the geometric properties, we can view S as the union of two sheets A and B, joined at the singular points, with A completely encased by B. A is convex, but B is not: close to the singular points, B fails to be convex and the Gaussian curvature is negative. With increasing geodesic distance from the singular points on B, we reach the Hamiltonian circles, where the curvature vanishes. Beyond the Hamiltonian circles, B is locally convex, too, and again has positive Gaussian curvature.

Corollary 3.8

The wave surface \(S=\{\xi \in {\mathbb {R}}^3: p(\omega ,\xi )=0\}\) admits a decomposition \(S=S_1\cup S_2\cup S_3\), where

  1. (i)

    \(S_1\) is a compact smooth regular manifold with two non-vanishing principal curvatures in the interior,

  2. (ii)

    \(S_2\) is a compact smooth regular manifold with one non-vanishing principal curvature in the interior,

  3. (iii)

    \(S_3\) is the union of (small) neighborhoods of the singular points described in Proposition 3.2.

For later sections, it will be important to have these curvature properties likewise for level sets \(\{p(\omega ,\xi ) = t\}_{t \in [-t_0,t_0]}\) for some \(0< t_0 \ll 1\) with uniform bounds in t.

For this purpose, recall that for an implicitly defined surface \(\{ \xi \in {\mathbb {R}}^3: F(\xi ) = 0 \}\) the Gaussian curvature is given by (cf. [18, Corollary 4.2, p. 643])

$$\begin{aligned} K = - \begin{vmatrix} D^2 F&\nabla F \\ (\nabla F)^t&0 \end{vmatrix} \, | \nabla F |^{-4} \end{aligned}$$

and hence is continuous on the level sets as long as F is smooth and \(|\nabla F| \ge d > 0\). Consequently, if \(|K| \ge c > 0\) on \(\{\xi \in {\mathbb {R}}^3: F(\xi ) = 0 \}\), then \(|K| \ge c/2 > 0\) on all sufficiently close level sets. Furthermore, we have the following for the mean curvature of an implicitly defined surface (cf. [18, Corollary 4.5, p. 645]):

$$\begin{aligned} K_m = - \nabla \cdot \left( \frac{\nabla F}{|\nabla F|} \right) . \end{aligned}$$

Hence, again by smoothness of p and \(|\nabla F| \ge d > 0\), along the curves on the level sets where the Gaussian curvature vanishes, one principal curvature remains bounded from below. Choosing the level sets close enough to the original surface, we likewise find one principal curvature bounded from below on all the layers.
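As a quick sanity check of these implicit-surface formulae, the sketch below (not from the paper) evaluates both on the sphere \(F(\xi )=|\xi |^2-r^2\): the bordered determinant gives \(K=1/r^2\), and the divergence formula gives \(-2/r\) (the sum of the principal curvatures in this sign convention):

```python
import math

def det(M):
    # Determinant via Gaussian elimination with partial pivoting.
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        if p != i:
            M[i], M[p] = M[p], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

# Sphere F(xi) = |xi|^2 - r^2 at a sample point with |xi| = r.
r = 2.0
xi = [1.2, 0.8, math.sqrt(r * r - 1.2 ** 2 - 0.8 ** 2)]
grad = [2.0 * x for x in xi]                                        # nabla F
H = [[2.0 if i == j else 0.0 for j in range(3)] for i in range(3)]  # D^2 F
B = [H[i] + [grad[i]] for i in range(3)] + [grad + [0.0]]
K = -det(B) * sum(g * g for g in grad) ** -2  # |nabla F|^{-4}

# Mean-curvature formula -div(nabla F / |nabla F|) via central differences.
def unit_normal(x):
    g = [2.0 * t for t in x]
    n = math.sqrt(sum(t * t for t in g))
    return [t / n for t in g]

h = 1e-6
Km = -sum((unit_normal([x + h * (k == i) for k, x in enumerate(xi)])[i]
           - unit_normal([x - h * (k == i) for k, x in enumerate(xi)])[i])
          / (2.0 * h) for i in range(3))
```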

4 Generalized Bochner–Riesz Estimates with Negative Index

The purpose of this section is to show Theorem 1.4. In the following let \(d \ge 2\) and \(S = \{ (\xi ',\psi (\xi ')) : \, \xi ' \in [-1,1]^{d-1} \}\) be a smooth surface with \(k \in \{1,\ldots ,d-1 \}\) principal curvatures bounded from below. Let

$$\begin{aligned} \widehat{(T^\alpha f)}(\xi ) = \frac{(\xi _d - \psi (\xi '))_+^{-\alpha }}{\Gamma (1-\alpha )} \chi (\xi '){\hat{f}}(\xi ), \; \chi \in C^\infty _c([-1,1]^{d-1}), \; 0< \alpha < \frac{k+2}{2}. \end{aligned}$$

We show strong estimates for a range of p and q

$$\begin{aligned} \Vert T^\alpha f \Vert _{L^q({\mathbb {R}}^d)} \lesssim \Vert f \Vert _{L^p({\mathbb {R}}^d)} \end{aligned}$$

with weak endpoint estimates as stated in Theorem 1.4. We start with recapitulating Bochner–Riesz estimates in the elliptic case, which is best understood.

4.1 Bochner–Riesz Estimates with Negative Index for Elliptic Surfaces

If \(\psi \) is elliptic, i.e., the Hessian \(\partial ^2 \psi \) has eigenvalues of a fixed sign on \([-1,1]^{d-1}\), then \(T^\alpha \) is a Bochner–Riesz operator of negative index. As explained above, we shall show bounds also for possibly degenerate \(\psi \), which will be useful in the next sections. For solutions to time-harmonic Maxwell’s equations we are interested in the case \(d=3\), \(\alpha = 1\), corresponding to the restriction–extension operator:

$$\begin{aligned} T^1 f(x) = \int _{S} e^{ix.\xi } {\hat{f}}(\xi ) \hbox {d} \sigma (\xi ). \end{aligned}$$

We take a more general point of view to show that the considerations in the next section also apply in higher dimensions and general \(\alpha \). To put our results into context, we digress for a moment and recapitulate results on the classical Bochner–Riesz problem.

For \(\alpha > 0\) recall

$$\begin{aligned} {\mathcal {P}}_\alpha (d-1) = \left\{ (x,y) \in [0,1]^2 \, : \, x-y \ge \frac{2\alpha }{d+1}, \; x > \frac{d-1}{2d} + \frac{\alpha }{d}, \; y < \frac{d+1}{2d} - \frac{\alpha }{d} \right\} . \end{aligned}$$

The Bochner–Riesz conjecture (for elliptic surfaces) with negative index states:

Conjecture 1

Let \(d \ge 2\) and \(0< \alpha < \frac{d+1}{2}\). Then \(T^\alpha \) is bounded from \(L^p({\mathbb {R}}^d)\) to \(L^q({\mathbb {R}}^d)\) if and only if \((1/p,1/q) \in {\mathcal {P}}_\alpha (d-1)\).

The necessity of these conditions was proved by Börjeson [5]. We refer to [33, Section 2.6] for a survey, where the currently widest range is covered. In the special case \(\alpha >\frac{1}{2}\) contributions are due to Bak–McMichael–Oberlin [1, Theorem 3] and Gutiérrez [24, Theorem 1], see also [2, 41]. In [10, Remark 2.3] it was also pointed out that \(T^\alpha : L^1({\mathbb {R}}^d) \rightarrow L^\infty ({\mathbb {R}}^d)\) is bounded for \(\alpha = \frac{d+1}{2}\).

In the following we recall arguments from [10, 33], which were needed for the proofs and will be used in the next section for more general surfaces. In the first step we decompose the multiplier distribution.

For \(0<\alpha <1\), let \(D^\alpha \in {\mathcal {S}}'({\mathbb {R}}^d)\) be defined by

$$\begin{aligned} \langle D^{\alpha }, g \rangle _{({\mathcal {S}}', {\mathcal {S}})} = \int _{{\mathbb {R}}^d} \frac{(\xi _d - \psi (\xi ^\prime ))^{-\alpha }_+}{\Gamma (1-\alpha )} \chi (\xi ^\prime ) g(\xi ) \hbox {d}\xi , \end{aligned}$$

which is again extended by analytic continuation to the range \(1 \le \alpha <\frac{d+1}{2}\). We recall the following lemma to decompose the Fourier multiplier:

Lemma 4.1

([10, Lemma 2.1]). For \(\alpha > 0\), there is a smooth function \(\phi _\alpha \) satisfying \(\text {supp}(\hat{\phi }_\alpha ) \subseteq \{ t \, : \, |t| \sim 1 \}\) such that for all \(g\in {\mathcal {S}}({\mathbb {R}}^d)\),

$$\begin{aligned} \langle D^{\alpha } , g \rangle _{({\mathcal {S}}', {\mathcal {S}})} = \sum _{j \in {\mathbb {Z}}} 2^{\alpha j} \int _{{\mathbb {R}}^d} \phi _\alpha (2^j(\xi _d - \psi (\xi ^\prime ))) \chi (\xi ^\prime ) g(\xi ) \hbox {d}\xi . \end{aligned}$$

The importance for our analysis comes from \(T^\alpha f(x)=\langle D^{\alpha } , g_x \rangle _{({\mathcal {S}}', {\mathcal {S}})}\) where \(g_x(\xi )=e^{ix.\xi }{{\hat{f}}}(\xi )\). We are thus reduced to studying the operators

$$\begin{aligned} \widehat{T_\delta f}(\xi ) = \phi _\alpha \left( \frac{\xi _d - \psi (\xi ^\prime )}{\delta } \right) \chi (\xi ^\prime ) {\hat{f}}(\xi ) \end{aligned}$$

where \(\phi _\alpha \in {\mathcal {S}}\) satisfies \(\text {supp}(\hat{\phi _\alpha }) \subseteq \{ t : \, |t| \sim 1 \}\) and \(\delta =2^{-j} > 0\). Fourier restriction estimates can be applied to \(T_\delta \), and interpolation with a kernel estimate takes advantage of the decomposition given by Lemma 4.1. The Tomas–Stein restriction theorem (cf. [42, 44]) already suffices for the sharp estimates for the restriction–extension operator (\(\alpha =1\)) due to Gutiérrez [25]. Cho et al. [10] made further progress building on Tao’s bilinear restriction theorem [43]. The most recent result is due to Kwon–Lee [33] additionally using sharp oscillatory integral estimates by Guth–Hickman–Iliopoulou [23].

4.2 Bochner–Riesz Estimates with Negative Index for General Non-flat Surfaces

In this section we extend the analysis to compact pieces of smooth regular hypersurfaces \(S{\subset } {\mathbb {R}}^d\) with k non-vanishing principal curvatures, where \(k\in \{1,\ldots ,d-1\}\), \(d\in {\mathbb {N}}\), \(d\ge 3\). Notice that the case \(d=2\), \(\alpha > 0\) was entirely settled by Bak [2] and Gutiérrez [24, Theorem 1]. Our argument is based on decompositions in Fourier space as in [10, 33]. By further localization in Fourier space we may suppose \(S= \{(\xi ^\prime , \psi (\xi ^\prime )) \, : \, \xi ^\prime \in [-1,1]^{d-1} \}\). Notice that the case \(k=0\) corresponds to possibly flat surfaces, where no decay of the Fourier transform can be expected. The case \(k=d-1\) means that the Gaussian curvature is non-vanishing. In the special case that all principal curvatures have the same sign, the surface is elliptic and so is \(\psi \).

In the following we show \(L^p\)-\(L^q\) boundedness from Theorem 1.4 of the operator

$$\begin{aligned} \widehat{(T^\alpha f)}(\xi ) = \frac{(\xi _d - \psi (\xi ^\prime ))_+^{-\alpha }}{\Gamma (1-\alpha )} \chi (\xi ^\prime ){\hat{f}}(\xi ) \end{aligned}$$

for \(0<\alpha <\frac{k+2}{2}\) and \(p,q\in [1,\infty ]\) depending on the decay of the Fourier transform of the surface measure and thus on the number of non-vanishing principal curvatures. As above the operator \(T^\alpha \) for \(1\le \alpha <\frac{k+2}{2}\) is defined by analytic continuation (cf. [10, 33]). We comment on \(\alpha = \frac{k+2}{2}\) after the proof of Lemma 4.4.

By Lemma 4.1, we decompose the operator \(T^\alpha \) as a distribution:

$$\begin{aligned} T^\alpha f = \sum _{j \in {\mathbb {Z}}} 2^{\alpha j} \int e^{ix.\xi } \phi _\alpha (2^j(\xi _d - \psi (\xi '))) \chi (\xi ') {\hat{f}}(\xi ) \hbox {d}\xi . \end{aligned}$$
(28)

where \(\phi :=\phi _\alpha \in {\mathcal {S}}\) satisfies \(\text {supp}(\hat{\phi }) \subseteq \{ t : \, |t| \sim 1 \}\). In view of (28) we have

$$\begin{aligned} T^\alpha f = \sum _{j \in {\mathbb {Z}}} 2^{j \alpha } T_{2^{-j}} f \end{aligned}$$

so that it suffices to consider the operators

$$\begin{aligned} \widehat{T_\delta f}(\xi ) = \phi \left( \frac{\xi _d - \psi (\xi ^\prime )}{\delta } \right) \chi (\xi ^\prime ){\hat{f}}(\xi ) \end{aligned}$$

for \(\delta > 0\). The contribution away from the surface corresponding to \(\delta \gtrsim 1\) or \(j \le 0\) in the above display can be estimated by Young’s inequality, see below for a precise kernel estimate. This gives summability for \(j \le 0\). We focus on the main contribution from \(j \ge 0\).

We start by applying an \(L^2\)-restriction theorem for the surface S. First, we recall the classical result due to Littman [37] (see also [42, Section VIII.5.8]), which gives the following decay of the Fourier transform of the surface measure \(\mu \):

$$\begin{aligned} |\hat{\mu }(\xi )| \lesssim \langle \xi \rangle ^{- \frac{k}{2}}. \end{aligned}$$
(29)
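The decay (29) in the model case \(k=1\) can be illustrated numerically: for \(\psi (x)=x^2\) and a smooth bump \(\chi \), stationary phase predicts \(|\int e^{i\lambda x^2}\chi (x)\,\hbox {d}x| \sim \lambda ^{-1/2}\), so quadrupling \(\lambda \) should halve the modulus. This sketch (not from the paper; the flat directions only contribute a constant factor) tests the ratio:

```python
import cmath, math

def bump(x):
    # Smooth compactly supported cutoff on (-1, 1).
    return math.exp(-1.0 / (1.0 - x * x)) if abs(x) < 1.0 else 0.0

def osc_integral(lam, n=20001):
    # Trapezoidal rule; all derivatives of the integrand vanish at the
    # endpoints, so the quadrature error is negligible at this resolution.
    h = 2.0 / (n - 1)
    total = 0.0 + 0.0j
    for k in range(n):
        x = -1.0 + k * h
        total += cmath.exp(1j * lam * x * x) * bump(x)
    return total * h

# lam^{-1/2} scaling predicts a ratio close to sqrt(400/100) = 2.
ratio = abs(osc_integral(100.0)) / abs(osc_integral(400.0))
```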

By the \(TT^*\)-argument (cf. [17, 31, 44]) this can be recast into an \(L^2\)-\(L^q\) estimate as already recorded by Greenleaf [22]. The decay of the Fourier transform is crucial for the verification of assumption (ii) in the following special case of the abstract result from [31, Theorem 1.2]:

Theorem 4.2

(Keel–Tao). Let \((X,\hbox {d}x)\) be a measure space and H a Hilbert space. Suppose that for each \(t \in {\mathbb {R}}\) we have an operator \(U(t):H \rightarrow L^2(X)\) which satisfies the following assumptions for \(\sigma >0\):

  1. (i)

    For all t and \(f \in H\) we have the energy estimate:

    $$\begin{aligned} \Vert U(t) f \Vert _{L^2(X)} \lesssim \Vert f \Vert _H. \end{aligned}$$
  2. (ii)

    For all \(t \ne s\) and \(g \in L^1(X)\) we have the decay estimate

    $$\begin{aligned} \Vert U(s) (U(t))^* g \Vert _{L^\infty (X)} \lesssim (1+|t-s|)^{-\sigma } \Vert g \Vert _{L^1(X)}. \end{aligned}$$

Then, for \(q \ge \frac{2(1+ \sigma )}{\sigma }\), the estimate

$$\begin{aligned} \Vert U(t) f \Vert _{L^q_{t,x}({\mathbb {R}}\times X)} \lesssim \Vert f \Vert _H \end{aligned}$$

holds.

The following two lemmas are the key ingredients in the proof of Theorem 1.4. Both rely on (29), which in turn depends on the lower bounds for the k non-vanishing curvatures and on \(\Vert \psi \Vert _{C^N}\), \(\Vert \chi \Vert _{C^N}\) for some large enough \(N\in {\mathbb {N}}\). This yields the stability of the estimates in \(\psi \) and \(\chi \) claimed in Theorem 1.4.

In the following lemma we apply Theorem 4.2 to \(T_\delta \) with \(\sigma =\frac{k}{2}\):

Lemma 4.3

Let \(q \ge \frac{2(2+k)}{k}\). Then we have

$$\begin{aligned} \Vert T_\delta f \Vert _{L^q({\mathbb {R}}^d)} \lesssim \delta ^{\frac{1}{2}} \Vert f \Vert _{L^2({\mathbb {R}}^d)}. \end{aligned}$$
(30)

Proof

We perform a linear change of variables to rewrite

$$\begin{aligned} \begin{aligned}&(2 \pi )^d T_\delta f(x)\\&\quad = \int _{{\mathbb {R}}^d} e^{ix.\xi } \chi (\xi ^\prime ) \phi \left( \frac{\xi _d - \psi (\xi ^\prime )}{\delta } \right) {\hat{f}}(\xi ) \hbox {d}\xi \\&\quad = \int _{{\mathbb {R}}^d} e^{i(x^\prime .\xi ^\prime + x_d (\xi _d + \psi (\xi ^\prime )))} \chi (\xi ^\prime ) \phi \left( \frac{\xi _d}{\delta }\right) {\hat{f}}(\xi ^\prime ,\xi _d+ \psi (\xi ^\prime )) \hbox {d}\xi ^\prime \hbox {d}\xi _d \\&\quad = \int _{{\mathbb {R}}} e^{i x_d \xi _d} \phi \left( \frac{\xi _d}{\delta } \right) \int _{{\mathbb {R}}^{d-1}} e^{i(x^\prime . \xi ^\prime + x_d \psi (\xi ^\prime ))} \chi (\xi ^\prime ) {\hat{f}}(\xi ^\prime ,\xi _d + \psi (\xi ^\prime )) \hbox {d}\xi ^\prime \hbox {d}\xi _d. \end{aligned}\nonumber \\ \end{aligned}$$
(31)

For the kernel in the inner integral we find by the assumptions on \(\psi \)

$$\begin{aligned} \left| \int _{{\mathbb {R}}^{d-1}} e^{i(x^\prime .\xi ^\prime + x_d \psi (\xi ^\prime ))} \chi (\xi ^\prime ) \hbox {d}\xi ^\prime \right| \lesssim (1+|x_d|)^{- \frac{k}{2}}. \end{aligned}$$
(32)

From this and Theorem 4.2, applied to \(U(t)g(x')=\int _{{\mathbb {R}}^{d-1}}e^{i(x'.\xi '+t\psi (\xi '))}\chi (\xi ')g(\xi ')\,\hbox {d}\xi '\), we infer

$$\begin{aligned} \left\| \int _{{\mathbb {R}}^{d-1}} e^{i(x^\prime .\xi ^\prime + x_d \psi (\xi ^\prime ))} \chi (\xi ^\prime ) {\hat{f}}(\xi ^\prime ,\xi _d + \psi (\xi ^\prime )) \hbox {d}\xi ^\prime \right\| _{L^q({\mathbb {R}}^d)} \lesssim \Vert {\hat{f}}(\cdot ,\xi _d + \psi (\cdot )) \Vert _{L^2({\mathbb {R}}^{d-1})}. \end{aligned}$$

By (31) and Minkowski’s inequality, we find

$$\begin{aligned}&\Vert T_\delta f \Vert _{L^q({\mathbb {R}}^d)} \\&\quad \lesssim \int _{\mathbb {R}}|\phi \left( \frac{\xi _d}{\delta }\right) | \left\| \int _{{\mathbb {R}}^{d-1}} e^{i(x^\prime .\xi ^\prime + x_d \psi (\xi ^\prime ))} \chi (\xi ^\prime ) {\hat{f}}(\xi ^\prime ,\xi _d + \psi (\xi ^\prime )) \hbox {d}\xi ^\prime \right\| _{L^q({\mathbb {R}}^d)} \hbox {d}\xi _d \\&\quad \lesssim \int _{\mathbb {R}}|\phi \left( \frac{\xi _d}{\delta }\right) | \Vert {\hat{f}}(\cdot ,\xi _d+\psi (\cdot )) \Vert _{L^2({\mathbb {R}}^{d-1})} \hbox {d}\xi _d \\&\quad \lesssim \delta ^{\frac{1}{2}} \Vert f \Vert _{L^2({\mathbb {R}}^d)}. \end{aligned}$$

The ultimate estimate follows from the Cauchy–Schwarz inequality, Plancherel’s theorem, and inverting the change of variables. \(\square \)

Further estimates for \(T_\delta \) are derived from \(T_\delta f = K_\delta *f\) where

$$\begin{aligned} K_\delta (x) = \frac{1}{(2\pi )^{d}} \int _{{\mathbb {R}}^d} e^{ix.\xi } \chi (\xi ^\prime ) \phi \left( \frac{\xi _d - \psi (\xi ^\prime )}{\delta }\right) \hbox {d}\xi . \end{aligned}$$

Integration by parts leads to the following kernel estimate:

Lemma 4.4

The function \(K_\delta \) is supported in \(\{(x^\prime ,x_d)\in {\mathbb {R}}^d : |x_d| \sim \delta ^{-1} \}\) and the following estimates hold:

$$\begin{aligned} \begin{aligned} |K_\delta (x)|&\lesssim _N \delta ^{N+1} (1+\delta |x|)^{-N}&, \text { if } |x^\prime | \ge c |x_d|, \\ |K_\delta (x)|&\lesssim \delta ^{\frac{k}{2}+1}&, \text { if } |x^\prime | \le c|x_d|. \end{aligned} \end{aligned}$$
(33)

Proof

Changing variables \(\xi _d \rightarrow \xi _d + \psi (\xi ^\prime )\) and integrating in \(\xi _d\), we have

$$\begin{aligned} (2 \pi )^{d-1} K_\delta (x) = \delta {\check{\phi }}(\delta x_d) \int _{{\mathbb {R}}^{d-1}} e^{i(x^\prime .\xi ^\prime + x_d \psi (\xi ^\prime ))} \chi (\xi ^\prime ) \hbox {d}\xi ^\prime . \end{aligned}$$

Since \({\check{\phi }}\) is supported in \(\{t : \, |t| \sim 1\}\), \(K_\delta \) is supported in \(\{(x^\prime ,x_d): |x_d| \sim \delta ^{-1} \}\). For the phase function \(\Phi (\xi ') = x'.\xi ' + x_d \psi (\xi ')\), we find

$$\begin{aligned} |\nabla _{\xi ^\prime } \Phi | \ge c_1|x|, \text { if } |x^\prime | \ge c|x_d|. \end{aligned}$$

So the method of non-stationary phase gives for \(|x^\prime | \gtrsim |x_d|\)

$$\begin{aligned} |K_\delta (x)| \lesssim _N \delta \Vert {{\check{\phi }}}\Vert _\infty (1+|x|)^{-N} \lesssim _N \delta ^{N+1} (1+\delta |x|)^{-N}. \end{aligned}$$

Here we used that \(\delta |x|\ge \delta |x_d| \gtrsim 1\) in this case. On the other hand, (32) implies for \(|x_d| \gtrsim |x^\prime |\)

$$\begin{aligned} |K_\delta (x)| \lesssim \delta \left| \int _{{\mathbb {R}}^{d-1}} e^{i (x^\prime .\xi ^\prime + x_d \psi (\xi ^\prime ))} \chi (\xi ^\prime ) \hbox {d}\xi ^\prime \right| \lesssim \delta (1+ |x_d |)^{-\frac{k}{2}} \lesssim \delta ^{\frac{k+2}{2}}. \end{aligned}$$

\(\square \)

We remark that the kernel estimate shows that \(T^\alpha : L^1({\mathbb {R}}^d) \rightarrow L^\infty ({\mathbb {R}}^d)\) is bounded also for \(\alpha = \frac{k+2}{2}\) by the same argument as in [10, Remark 2.3].

With Lemma 4.4 at hand, we may now localize f to cubes of size \(\delta ^{-1}\) by the following argument, originally due to Fefferman [14]; see also [42, p. 422–423], and [10, 35]: Let \((Q_j)_{j \in {\mathbb {Z}}^d}\) denote a finitely overlapping covering of \({\mathbb {R}}^d\) with cubes of sidelength \(2\delta ^{-1}\) centered at \(j \delta ^{-1}\) and aligned parallel to the coordinate axes. Let \(C_d>0\) be such that \(|j-k|>C_d\) implies \(\text {dist}(Q_j,Q_k) \gtrsim \delta ^{-1} |j-k|\) uniformly with respect to \(j,k,\delta \), and let \(f_k = 1_{Q_k} f\). Then, we obtain

$$\begin{aligned} \begin{aligned} \Vert T_\delta f \Vert _{L^q({\mathbb {R}}^d)} \lesssim \left( \sum _{j \in {\mathbb {Z}}^d} \Vert T_\delta f \Vert _{L^q(Q_j)}^q \right) ^{\frac{1}{q}}&\lesssim \left( \sum _{j\in {\mathbb {Z}}^d} \left( \sum _{|k-j| \le C_d} \Vert T_\delta f_k \Vert _{L^q(Q_j)} \right) ^q \right) ^{\frac{1}{q}} \\&\qquad + \left( \sum _{j\in {\mathbb {Z}}^d} \left( \sum _{|k-j|> C_d} \Vert T_\delta f_k \Vert _{L^q(Q_j)} \right) ^q \right) ^{\frac{1}{q}}. \end{aligned} \end{aligned}$$

If \(|k-j|> C_d\), we use the first kernel estimate in  (33) and obtain for all \(N\in {\mathbb {N}}\)

$$\begin{aligned} \Vert T_\delta f_k \Vert _{L^q(Q_j)}&\lesssim \left( \int _{Q_j} \left| \int _{{\mathbb {R}}^d} K_\delta (x-y)f_k(y)\,\hbox {d}y\right| ^q\,\hbox {d}x \right) ^{\frac{1}{q}} \\&\lesssim _N \delta ^{N+1}(1+\delta {{\,\mathrm{dist}\,}}(Q_j,Q_k))^{-N} \left( \int _{Q_j} \left( \int _{Q_k} |f_k(y)| \,\hbox {d}y\right) ^q\,\hbox {d}x \right) ^{\frac{1}{q}} \\&\lesssim _N \delta ^{N+1}(1+|j-k|)^{-N}\left( \int _{Q_j} \Vert f_k\Vert _p^q |Q_k|^{\frac{q}{p'}}\,\hbox {d}x \right) ^{\frac{1}{q}} \\&\lesssim _N \delta ^{N+1} (1+|j-k|)^{-N} \delta ^{-\frac{d}{q}-\frac{d}{p'}} \Vert f_k\Vert _{L^p({\mathbb {R}}^d)} \\&\lesssim _N \delta ^{N+1-\frac{d}{q}-\frac{d}{p'}} (1+|j-k|)^{-N} \Vert f_k\Vert _{L^p({\mathbb {R}}^d)}. \end{aligned}$$

Hence, choosing \(N\in {\mathbb {N}}\) large enough, these terms allow for summation by Young’s inequality for series:

$$\begin{aligned} \begin{aligned}&\left( \sum _{j\in {\mathbb {Z}}^d} \left( \sum _{|k-j| > C_d} \Vert T_\delta f_k \Vert _{L^q(Q_j)} \right) ^q \right) ^{\frac{1}{q}} \\&\quad \lesssim _N \delta ^{N+1-\frac{d}{q}-\frac{d}{p'}} \big ( \sum _{j\in {\mathbb {Z}}^d} \left( \sum _{k\in {\mathbb {Z}}^d} (1+|j-k|)^{-N} \Vert f_k \Vert _{L^p({\mathbb {R}}^d)} \right) ^q \big )^{\frac{1}{q}} \\&\quad \lesssim _N \delta ^{N+1-\frac{d}{q}-\frac{d}{p'}} \left( \sum _{k\in {\mathbb {Z}}^d} \Vert f_k \Vert _{L^p({\mathbb {R}}^d)}^q \right) ^{\frac{1}{q}} \\&\quad \lesssim \delta ^{N+1-\frac{d}{q}-\frac{d}{p'}} \left( \sum _{k\in {\mathbb {Z}}^d} \Vert f_k \Vert _{L^p({\mathbb {R}}^d)}^p \right) ^{\frac{1}{p}} \\&\quad \lesssim \delta ^{N+1-\frac{d}{q}-\frac{d}{p'}} \Vert f \Vert _{L^p({\mathbb {R}}^d)}. \end{aligned} \end{aligned}$$

The penultimate estimate follows from the embedding \(\ell ^p \hookrightarrow \ell ^q\), \(p \le q\), and the last line from the finite overlapping property. For the “diagonal” set, \(|k-j| \le C_d\), we use (30) as well as Hölder’s inequality:

$$\begin{aligned} \Vert T_\delta f_k \Vert _{L^q(Q_j)} \lesssim \delta ^{\frac{1}{2}} \Vert f_k \Vert _{L^2({\mathbb {R}}^d)} \lesssim \delta ^{\frac{d}{p} - \frac{d-1}{2}} \Vert f_k \Vert _{L^p({\mathbb {R}}^d)}. \end{aligned}$$

Here we have used that the support of \(f_k\) has measure \(\sim \delta ^{-d}\) and \(p\ge 2\). We conclude

$$\begin{aligned} \begin{aligned} \left( \sum _{j\in {\mathbb {Z}}^d} \left( \sum _{|k-j| \le C_d} \Vert T_\delta f_k \Vert _{L^q(Q_j)} \right) ^q \right) ^{\frac{1}{q}}&\lesssim \delta ^{\frac{d}{p} - \frac{d-1}{2}} \left( \sum _{j\in {\mathbb {Z}}^d} \left( \sum _{|k-j| \le C_d} \Vert f_k \Vert _{L^p({\mathbb {R}}^d)} \right) ^q \right) ^{\frac{1}{q}} \\&\lesssim \delta ^{\frac{d}{p} - \frac{d-1}{2}} \left( \sum _{j\in {\mathbb {Z}}^d} \Vert f_j\Vert _{L^p({\mathbb {R}}^d)}^q \right) ^{\frac{1}{q}} \\&\lesssim \delta ^{\frac{d}{p} - \frac{d-1}{2}} \Vert f \Vert _{L^p({\mathbb {R}}^d)}, \end{aligned} \end{aligned}$$

as above, due to the embedding \(\ell ^p \hookrightarrow \ell ^q\) for \(p \le q\) and the finite overlapping property. Combining the off-diagonal and the diagonal estimates for large enough N, we get

$$\begin{aligned} 2^{j \alpha } \Vert T_{2^{-j}} f \Vert _{L^q({\mathbb {R}}^d)} \lesssim 2^{j( \frac{d-1}{2} - \frac{d}{p} +\alpha )} \Vert f\Vert _{L^p({\mathbb {R}}^d)} \end{aligned}$$
(34)

for \(q \ge \frac{2(2+k)}{k} \) and \(2 \le p \le q\).
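For instance, \(N \ge 3d\) suffices here: then the off-diagonal power of \(\delta \) dominates the diagonal one, since

$$\begin{aligned} N+1-\frac{d}{q}-\frac{d}{p'} \ge N+1-2d \ge d \ge \frac{d}{p} - \frac{d-1}{2}, \end{aligned}$$

so the diagonal term determines the exponent in (34).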

By the kernel estimate (33), we find \(|K_\delta (x)|\lesssim \delta ^{\frac{k+2}{2}}\) for all \(x\in {\mathbb {R}}^d\) and thus

$$\begin{aligned} 2^{j \alpha } \Vert T_{2^{-j}} f \Vert _{L^\infty ({\mathbb {R}}^d)} \lesssim 2^{j(\alpha - \frac{k+2}{2})} \Vert f \Vert _{L^1({\mathbb {R}}^d)}. \end{aligned}$$
(35)

Next we interpolate (34) and (35) to prove our bounds. To this end we distinguish the cases \(\frac{1}{2}<\alpha < \frac{k+2}{2}\) and \(0<\alpha \le \frac{1}{2}\). We obtain weak endpoint estimates using a special case of Bourgain’s summation argument (cf. [6, 8]). The present version is taken from [10, Lemma 2.5], see also [35, Lemma 2.3] for an elementary proof:

Lemma 4.5

Let \(\varepsilon _1,\varepsilon _2 > 0\), \(1 \le p_1, \, p_2 <\infty \), \(1 \le q_1,q_2 < \infty \). For every \(j \in {\mathbb {Z}}\) let \({\mathcal {T}}_j\) be a linear operator, which satisfies

$$\begin{aligned} \Vert {\mathcal {T}}_jf \Vert _{q_1}&\le M_1 2^{\varepsilon _1 j} \Vert f \Vert _{p_1}, \\ \Vert {\mathcal {T}}_jf \Vert _{q_2}&\le M_2 2^{-\varepsilon _2 j} \Vert f \Vert _{p_2}. \end{aligned}$$

Then, for \(\theta ,p,q\) defined by \(\theta = \frac{\varepsilon _2}{\varepsilon _1 +\varepsilon _2}\), \(\frac{1}{q} = \frac{\theta }{q_1} + \frac{1-\theta }{q_2}\) and \(\frac{1}{p} = \frac{\theta }{p_1} + \frac{1-\theta }{p_2}\), the following holds:

$$\begin{aligned} \Vert \sum _{j\in {\mathbb {Z}}} {\mathcal {T}}_jf \Vert _{q,\infty }&\le C M_1^\theta M_2^{1-\theta } \Vert f \Vert _{p,1}, \end{aligned}$$
(36)
$$\begin{aligned} \Vert \sum _{j\in {\mathbb {Z}}} {\mathcal {T}}_jf \Vert _q&\le C M_1^\theta M_2^{1-\theta } \Vert f \Vert _{p,1} \qquad \text {if } q_1 = q_2 = q, \end{aligned}$$
(37)
$$\begin{aligned} \Vert \sum _{j\in {\mathbb {Z}}} {\mathcal {T}}_jf \Vert _{q,\infty }&\le C M_1^\theta M_2^{1-\theta } \Vert f \Vert _{p} \qquad \text { if } p_1 = p_2. \end{aligned}$$
(38)
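To indicate where the exponents in Lemma 4.5 come from (a heuristic only; the full proof is in the cited references): splitting the sum over \(j\) at the crossover index \(j_0\) with \(2^{j_0(\varepsilon _1+\varepsilon _2)} \approx M_2/M_1\) and summing the two geometric series gives

$$\begin{aligned} \sum _{j \le j_0} M_1 2^{\varepsilon _1 j} + \sum _{j > j_0} M_2 2^{-\varepsilon _2 j} \lesssim M_1 2^{\varepsilon _1 j_0} + M_2 2^{-\varepsilon _2 j_0} \approx M_1^{\frac{\varepsilon _2}{\varepsilon _1+\varepsilon _2}} M_2^{\frac{\varepsilon _1}{\varepsilon _1+\varepsilon _2}} = M_1^{\theta } M_2^{1-\theta }. \end{aligned}$$

The passage to Lorentz spaces in (36)–(38) is what allows one to run this argument for a single \(f\) on both sides of the crossover.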

Proof of Theorem 1.4 (i): \(\frac{1}{2}< \alpha < \frac{k+2}{2}\). Interpolating the estimates at the points \((\frac{1}{2},\frac{1}{q_1})\), \(\frac{1}{q_1} \in \big [ 0,\frac{k}{2(k+2)} \big ]\) from (34) and \(A:=(1,0)\) from (35) gives

$$\begin{aligned} 2^{j\alpha }\Vert T_{2^{-j}} f \Vert _{L^q({\mathbb {R}}^d)} \lesssim 2^{j(\alpha +\frac{k}{2} - \frac{k+1 }{p})} \Vert f\Vert _{L^p({\mathbb {R}}^d)} \end{aligned}$$

for \(\frac{1}{p} \in [\frac{1}{2},1]\) and \(\frac{1}{q} \le \frac{k}{k+2} \big ( 1 - \frac{1}{p} \big )\). We use this bound for \(p_1,p_2,q_1,q_2\) given by

$$\begin{aligned} \alpha +\frac{k}{2} - \frac{k+1 }{p_1}=\varepsilon ,\quad \alpha +\frac{k}{2} - \frac{k+1 }{p_2}=-\varepsilon ,\quad \frac{1}{q_i} = \frac{k}{k+2} \left( 1 - \frac{1}{p_i} \right) \quad (i=1,2). \end{aligned}$$

Here, \(\varepsilon >0\) is chosen so small that \(\frac{1}{p_1},\frac{1}{p_2}\in [\frac{1}{2},1]\) holds, which is possible thanks to our assumption \(\frac{1}{2}<\alpha <\frac{k+2}{2}\). So (36) from Lemma 4.5 gives

$$\begin{aligned} \Vert T^\alpha f \Vert _{L^{q,\infty }({\mathbb {R}}^d)}\lesssim & {} \Vert f\Vert _{L^{p,1}({\mathbb {R}}^d)} \;\text {where } \left( \frac{1}{p},\frac{1}{q}\right) \nonumber \\= & {} \left( \frac{k+2\alpha }{2(k+1)}, \frac{k(k+2-2\alpha )}{2(k+1)(k+2)} \right) =:B_{\alpha ,k}. \end{aligned}$$
(39)
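For the reader's convenience, we verify the coordinates of \(B_{\alpha ,k}\): since \(\varepsilon _1 = \varepsilon _2 = \varepsilon \), we have \(\theta = \frac{1}{2}\), and averaging the defining relations for \(p_1,p_2\) gives

$$\begin{aligned} \frac{k+1}{p} = \frac{1}{2}\left( \frac{k+1}{p_1} + \frac{k+1}{p_2} \right) = \alpha + \frac{k}{2}, \quad \text {i.e. } \frac{1}{p} = \frac{k+2\alpha }{2(k+1)}, \end{aligned}$$

and consequently \(\frac{1}{q} = \frac{k}{k+2}\big (1-\frac{1}{p}\big ) = \frac{k(k+2-2\alpha )}{2(k+1)(k+2)}\).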

Furthermore, since \(T_{2^{-j}}\) coincides with its dual, we have under the same conditions on \(p,q\) as above:

$$\begin{aligned} 2^{j\alpha }\Vert (T_{2^{-j}})^* g \Vert _{L^{p'}({\mathbb {R}}^d)} \lesssim 2^{j(\alpha +\frac{k}{2} - \frac{k+1 }{p})} \Vert g\Vert _{L^{q'}({\mathbb {R}}^d)}. \end{aligned}$$

So (38) gives for \(p_1=p_2=1,q_1=q_2= \frac{2(k+1)}{k+2\alpha }\) the estimate

$$\begin{aligned} \Vert (T^\alpha )^* g \Vert _{L^{ \left( \frac{2(k+1)}{k+2\alpha }\right) ',\infty }({\mathbb {R}}^d)} \lesssim \Vert g\Vert _{L^1({\mathbb {R}}^d)} \end{aligned}$$

and hence, by duality,

$$\begin{aligned} \Vert T^\alpha f \Vert _{L^q({\mathbb {R}}^d)} \lesssim \Vert f\Vert _{L^{p,1}({\mathbb {R}}^d)} \;\text {where } \left( \frac{1}{p},\frac{1}{q}\right) = \left( \frac{k+2\alpha }{2(k+1)}, 0 \right) =:C_{\alpha ,k}.\nonumber \\ \end{aligned}$$
(40)

Since \(T^\alpha \) coincides with its dual, estimates (39), (40) imply

$$\begin{aligned} \begin{aligned} \Vert T^\alpha f \Vert _{L^{q,\infty }({\mathbb {R}}^d)}&\lesssim \Vert f\Vert _{L^{p,1}({\mathbb {R}}^d)} \quad \text {where } \left( \frac{1}{p},\frac{1}{q}\right) \\&= \left( \frac{ k^2 + 2(2+\alpha )k +4}{2(k+1)(k+2)}, \frac{k+2-2\alpha }{2(k+1)} \right) =:B_{\alpha ,k}', \\ \Vert T^\alpha f \Vert _{L^{q,\infty }({\mathbb {R}}^d)}&\lesssim \Vert f\Vert _{L^{p}({\mathbb {R}}^d)} \quad \text {where } \left( \frac{1}{p},\frac{1}{q}\right) = \left( 1, \frac{k+2-2\alpha }{2(k+1)}\right) =:C_{\alpha ,k}'. \end{aligned} \end{aligned}$$
(41)
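Indeed, \(B_{\alpha ,k}'\) is the image of \(B_{\alpha ,k}\) under the duality reflection \((\frac{1}{p},\frac{1}{q}) \mapsto (1-\frac{1}{q},1-\frac{1}{p})\):

$$\begin{aligned} 1 - \frac{k(k+2-2\alpha )}{2(k+1)(k+2)} = \frac{k^2+2(2+\alpha )k+4}{2(k+1)(k+2)}, \qquad 1 - \frac{k+2\alpha }{2(k+1)} = \frac{k+2-2\alpha }{2(k+1)}, \end{aligned}$$

and similarly \(C_{\alpha ,k}'\) is the reflection of \(C_{\alpha ,k}\).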

Finally, we have the trivial strong estimate

$$\begin{aligned} \Vert T^\alpha f\Vert _{L^q({\mathbb {R}}^d)}\le \Vert f\Vert _{L^p({\mathbb {R}}^d)} \quad \text {for }\left( \frac{1}{p},\frac{1}{q}\right) =(1,0)=:A. \end{aligned}$$
(42)
Fig. 3
Riesz diagram for \(T^\alpha \) with \(\frac{1}{2}<\alpha <\frac{k+2}{2}\)

We refer to Fig. 3 for a visualization of the situation. From estimates (39)–(42) we now derive our claim using the real interpolation identity (cf. [3, Theorem 5.3.1])

$$\begin{aligned} (L^{p_1,q_1}({\mathbb {R}}^d),L^{p_2,q_2}({\mathbb {R}}^d))_{\theta ,q} = L^{p,q}({\mathbb {R}}^d) \quad \text { for } \frac{1}{p} = \frac{\theta }{p_1} + \frac{1-\theta }{p_2}, \; \theta \in (0,1) \end{aligned}$$

as well as the Lorentz space embeddings \(L^{\tilde{p}}({\mathbb {R}}^d) = L^{\tilde{p},{\tilde{p}}}({\mathbb {R}}^d) \hookrightarrow L^{\tilde{p},\tilde{q}}({\mathbb {R}}^d)\) for \(\tilde{q} \ge \tilde{p}\). In this way, we obtain strong estimates for the operator \(T^\alpha \) in the interior of the pentagon \(\text {conv}(A,C_{\alpha ,k},B_{\alpha ,k},{B'}_{\alpha ,k},{C'}_{\alpha ,k})\) as well as on the open segment \((B_{\alpha ,k},{B'}_{\alpha ,k})\): Real interpolation with parameters \((\theta ,\tilde{q})\) gives the estimate

$$\begin{aligned} \Vert T^\alpha f \Vert _{L^{\tilde{q},\tilde{q}}({\mathbb {R}}^d)} \lesssim \Vert f \Vert _{L^{\tilde{p},\tilde{q}}({\mathbb {R}}^d)} \lesssim \Vert f \Vert _{L^{\tilde{p}}({\mathbb {R}}^d)} \end{aligned}$$

for \((1/\tilde{p},1/\tilde{q}) \in (B_{\alpha ,k},B'_{\alpha ,k})\). We have shown strong bounds for p, q such that

$$\begin{aligned} \frac{1}{p} > \frac{k+2\alpha }{2(k+1)}, \qquad \frac{1}{q} < \frac{k+2-2\alpha }{2(k+1)}, \qquad \frac{1}{p} - \frac{1}{q} \ge \frac{2\alpha }{k+2}. \end{aligned}$$
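The third condition expresses that both \(B_{\alpha ,k}\) and \(B_{\alpha ,k}'\) lie on the line \(\frac{1}{p}-\frac{1}{q} = \frac{2\alpha }{k+2}\); at \(B_{\alpha ,k}\), for instance,

$$\begin{aligned} \frac{1}{p}-\frac{1}{q} = \frac{(k+2\alpha )(k+2) - k(k+2-2\alpha )}{2(k+1)(k+2)} = \frac{4\alpha (k+1)}{2(k+1)(k+2)} = \frac{2\alpha }{k+2}. \end{aligned}$$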

All these estimates are valid for \(\alpha >\frac{1}{2}\). The strong bounds for \(\alpha =\frac{1}{2}\) can be obtained using Stein's interpolation theorem for analytic families of operators together with the estimates just proved for \(\alpha >\frac{1}{2}\) and those for \(\alpha <\frac{1}{2}\) proved below.

Proof of Theorem 1.4 (ii): \(0< \alpha <\frac{1}{2}\).

We use the estimates from (34) and the same interpolation procedure as above to find (Fig. 4)

$$\begin{aligned} \begin{aligned} \Vert T^\alpha f \Vert _{L^{q,\infty }({\mathbb {R}}^d)}&\lesssim \Vert f\Vert _{L^{p,1}({\mathbb {R}}^d)}, \;\text {where } \left( \frac{1}{p},\frac{1}{q}\right) \\&= \left( \frac{d-1+2\alpha }{2d}, \frac{k}{2(2+k)} \right) =:B_{\alpha ,k}, \\ \Vert T^\alpha f \Vert _{L^q({\mathbb {R}}^d)}&\lesssim \Vert f\Vert _{L^{p,1}({\mathbb {R}}^d)}, \;\text {where } \left( \frac{1}{p},\frac{1}{q}\right) \\&= \left( \frac{d-1+2\alpha }{2d}, 0 \right) =:C_{\alpha ,k}. \end{aligned} \end{aligned}$$

By duality,

$$\begin{aligned} \begin{aligned} \Vert T^\alpha f \Vert _{L^{q,\infty }({\mathbb {R}}^d)}&\lesssim \Vert f\Vert _{L^{p,1}({\mathbb {R}}^d)}, \;\text {where } \left( \frac{1}{p},\frac{1}{q}\right) = \left( \frac{4+k}{2(2+k)}, \frac{d+1-2\alpha }{2d} \right) =:B_{\alpha ,k}', \\ \Vert T^\alpha f \Vert _{L^{q,\infty }({\mathbb {R}}^d)}&\lesssim \Vert f\Vert _{L^p({\mathbb {R}}^d)}, \;\text {where } \left( \frac{1}{p},\frac{1}{q}\right) = \left( 1, \frac{d+1-2\alpha }{2d} \right) =:C_{\alpha ,k}'. \end{aligned} \end{aligned}$$

Again we have the trivial strong estimate (42). Interpolating these estimates as above, we get strong bounds precisely for \(p,q\) such that

$$\begin{aligned} \frac{1}{p} > \frac{d-1+2\alpha }{2d}, \qquad \frac{1}{q} < \frac{d+1-2 \alpha }{2d}, \qquad \frac{1}{p} - \frac{1}{q} \ge \frac{2(d-1+2\alpha ) + k(2\alpha -1)}{2d(2+k)}. \end{aligned}$$
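Note that the regions from parts (i) and (ii) match up at the borderline: formally inserting \(\alpha = \frac{1}{2}\) into either set of conditions yields

$$\begin{aligned} \frac{1}{p}> \frac{1}{2}, \qquad \frac{1}{q} < \frac{1}{2}, \qquad \frac{1}{p} - \frac{1}{q} \ge \frac{1}{k+2}, \end{aligned}$$

consistent with the Stein interpolation argument used for \(\alpha = \frac{1}{2}\) in part (i).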
Fig. 4
Riesz diagram for \(T^\alpha \) with \(0<\alpha <\frac{1}{2}\)

This finishes the proof of Theorem 1.4. \(\square \)

4.3 Necessary Conditions for Generalized Bochner–Riesz Estimates with Negative Index

In this subsection we discuss necessary conditions for estimates

$$\begin{aligned} \Vert T^\alpha f \Vert _{L^q({\mathbb {R}}^d)} \lesssim \Vert f \Vert _{L^p({\mathbb {R}}^d)}. \end{aligned}$$
(43)

We shall see that for \(\alpha \ge 1/2\) the established strong estimates are sharp, whereas for \(0<\alpha <1/2\) they are in general not. For this purpose, we compare with the estimates for elliptic surfaces in lower dimensions, where the bounds are known to be sharp; see [33, p. 1419].

Suppose that for \(d \ge 3\), there is \(1 \le k \le d-1\) and \((\tilde{p},\tilde{q})\) such that (43) holds true for all regular hypersurfaces with \(k\) non-vanishing principal curvatures. Then, let \(d_1:=k+1,d_2:=d-d_1\) and let \(S= \{(\xi ', \psi (\xi ')) \in {\mathbb {R}}^{d_1} \, : \xi ' \in B(0,c) \}\) be an elliptic surface with \(k= d_1-1\) positive principal curvatures. This can be trivially embedded into \({\mathbb {R}}^d\) by considering \(S' = \{ (\xi ',\xi '', \psi (\xi ')) \in {\mathbb {R}}^{d_1+d_2} \, : \xi ' \in B(0,c) \}\). We consider the operator

$$\begin{aligned} \widehat{(T^\alpha f)}(\xi ) = \frac{1}{\Gamma (1-\alpha )} \frac{\chi (\xi ')}{(\xi _d - \psi (\xi '))_+^\alpha } {\hat{f}}(\xi ). \end{aligned}$$

Apparently,

$$\begin{aligned} K^\alpha (x) = \frac{1}{(2 \pi )^d} \int _{{\mathbb {R}}^d} e^{ix.\xi } \frac{1}{\Gamma (1-\alpha )} \frac{\chi (\xi ')}{(\xi _d - \psi (\xi '))_+^\alpha } \hbox {d}\xi = L^\alpha (x') \delta (x''), \end{aligned}$$

where \(x'=(x_1,\ldots ,x_{d_1-1},x_{d_1+d_2})\), \(x'' = (x_{d_1}, \ldots , x_{d_1+d_2-1})\), and

$$\begin{aligned} L^\alpha (x') = \frac{1}{(2 \pi )^{d_1}} \int _{{\mathbb {R}}^{d_1}} e^{ix'.(\xi ',\xi _{d})} \frac{1}{\Gamma (1-\alpha )} \frac{\chi (\xi ')}{(\xi _d - \psi (\xi '))_+^\alpha } \hbox {d}\xi ' \hbox {d}\xi _d. \end{aligned}$$

As \(L^\alpha \) is the kernel of a Bochner–Riesz operator with negative index for an elliptic surface in \({\mathbb {R}}^{d_1}\), we know that for \(\frac{1}{2} \le \alpha < \frac{k+2}{2}\) the corresponding operator \(R^\alpha f = L^\alpha * f: L^p({\mathbb {R}}^{d_1}) \rightarrow L^q({\mathbb {R}}^{d_1})\) is bounded if and only if \((1/p,1/q) \in {\mathcal {P}}_\alpha (k)\). For \(f \in L^p({\mathbb {R}}^{d_1})\) consider \(\tilde{f}(x) = f(x') \phi (x'')\) with \(\phi \in C^\infty _c({\mathbb {R}}^{d_2})\). Using that \(T^\alpha :L^{\tilde{p}}({\mathbb {R}}^d)\rightarrow L^{\tilde{q}}({\mathbb {R}}^d)\) is bounded, we find

$$\begin{aligned} \Vert R^\alpha f \Vert _{L^{\tilde{q}}({\mathbb {R}}^{d_1})} \Vert \phi \Vert _{L^{\tilde{q}}({\mathbb {R}}^{d_2})} = \Vert T^\alpha \tilde{f} \Vert _{L^{\tilde{q}}({\mathbb {R}}^d)} \lesssim \Vert \tilde{f} \Vert _{L^{\tilde{p}}({\mathbb {R}}^d)} \lesssim \Vert f \Vert _{L^{\tilde{p}}({\mathbb {R}}^{d_1})} \Vert \phi \Vert _{L^{\tilde{p}}({\mathbb {R}}^{d_2})}. \end{aligned}$$

Hence, \(R^\alpha :L^{{\tilde{p}}}({\mathbb {R}}^{d_1})\rightarrow L^{{\tilde{q}}}({\mathbb {R}}^{d_1})\) is bounded. By the sharpness of our conditions for elliptic hypersurfaces we infer \((1/\tilde{p},1/\tilde{q}) \in \mathcal P_{\alpha }(d_1-1)={\mathcal {P}}_\alpha (k)\), which is all we had to show.

On the other hand, we see that the estimates proved in Theorem 1.4 are not sharp for \(0<\alpha < 1/2\), since in the elliptic case better estimates are known to hold [10, Theorem 1.1]. Apparently, for \(0<\alpha <1/2\) the geometry of the surface becomes more important. We believe that the optimal estimates will also depend on the difference between positive and negative curvatures as for oscillatory integral operators (cf. [7, 23, 47]).

5 Estimates for the Regular Part

In this section we estimate the contribution of (EH) with Fourier support close to the smooth and regular part of the Fresnel surface by proving Propositions 2.2 and 2.3. We recall that the first proposition deals with those parts where two principal curvatures are nonzero, whereas the latter proposition deals with frequencies close to the Hamiltonian circles where only one principal curvature is bounded away from zero. As explained in the Introduction, our estimates result from uniform estimates for the Fourier multipliers \((P(\xi )+i\delta )^{-1}\) as \(\delta \rightarrow 0\) with \(P(\xi )=p(\omega ,\xi )\). We stress that \(\omega \in {\mathbb {R}}{\setminus }\{0\}\) is fixed from now on.

We first use our estimates for the Bochner–Riesz operator \(T^\alpha \) from the previous section to prove a Fourier restriction–extension estimate related to the two parts of the Fresnel surface mentioned above. To carry out the estimates for both parts, we change from implicit to graph representation and apply Theorem 1.4 for \((\alpha ,k)=(1,2)\), respectively, \((\alpha ,k)=(1,1)\). The \(L^p\)-\(L^q\)-estimates are not affected by this change of representation, see Corollary 5.1. Then we use this result to prove uniform estimates for \((P(D)+i\delta )^{-1}\) by a foliation with level sets of P and the Fourier restriction–extension theorem for the single layer.

5.1 Parametric Representation

Already in [10, p. 152] it was observed that a compact convex surface with curvature bounded from below can locally be written as the graph of an elliptic function, and that these parametrizations do not affect Bochner–Riesz estimates. To see that this remains true in the non-elliptic case, we briefly explain the argument.

So let \(M\subset {\mathbb {R}}^d\) be a compact part of a smooth regular hypersurface with \(k\) non-vanishing curvatures, where \(k\in \{1,\ldots ,d-1\}\). After finitely many decompositions and rigid motions, which leave the \(L^p-L^q\)-estimates invariant, we find finitely many local graph representations of M of the form

$$\begin{aligned} M_{loc} = \{\xi =(\xi ',\xi _d):\, p^\mathrm{{loc}}(\xi ) =0, \xi '\in B(0,c)\} = \{(\xi ', \psi (\xi ')) : \, \xi ' \in B(0,c) \}, \end{aligned}$$

where at least k eigenvalues of the Hessian matrices \(\partial ^2 \psi (x), x\in B(0,c)\) are bounded away from zero. Taylor’s formula gives for \(\Delta :=\xi _d - \psi (\xi ')\)

$$\begin{aligned} \begin{aligned} p^\mathrm{{loc}}(\xi )&= p^\mathrm{{loc}}(\xi ',\psi (\xi ') + \Delta ) \\&= \int _0^1 \partial _d p^\mathrm{{loc}}(\xi ',\psi (\xi ') + t \Delta ) \hbox {d}t \cdot (\xi _d - \psi (\xi ')) \\&= m(\xi ) (\xi _d - \psi (\xi ')) \quad \text { for } \xi \in B(0,c) \times (-c',c') =: B'. \end{aligned} \end{aligned}$$

By the properties of \(p^\mathrm{{loc}}\), we find \(m \in C^\infty (B')\) with the properties

$$\begin{aligned} 0< c_1 \le m \le c_2 \;\text { and }\; | \partial ^\gamma m| \lesssim _\gamma 1 \text { for } \gamma \in {\mathbb {N}}_0^d. \end{aligned}$$

The Fourier multiplier \({\mathfrak {m}}_\alpha \) defined by

$$\begin{aligned} \widehat{({\mathfrak {m}}_\alpha f)}(\xi ) = \beta (\xi ) m^{-\alpha }(\xi ) {\hat{f}}(\xi ), \; \alpha \in {\mathbb {R}}, \end{aligned}$$

for a suitable cutoff \(\beta \in C^\infty _c(B')\), defines a bounded mapping \(L^p({\mathbb {R}}^d)\rightarrow L^p({\mathbb {R}}^d)\), \(1\le p\le \infty \) via Young’s convolution inequality. Real interpolation of these estimates also yields the boundedness \(L^{p,r}({\mathbb {R}}^d) \rightarrow L^{p,r}({\mathbb {R}}^d)\) for \(1<p<\infty ,1 \le r \le \infty \). Accordingly, choosing a suitable finite partition of unity we find that the operators

$$\begin{aligned} \widehat{({\mathcal {T}}^\alpha f)}(\xi ) := \frac{P(\xi )^{-\alpha }}{\Gamma (1-\alpha )} {{\hat{f}}}(\xi ) \end{aligned}$$

are well-defined for \(0<\alpha <\frac{k+2}{2}\) through analytic continuation and satisfy the same (weak) \(L^p-L^q\)-estimates as the Bochner–Riesz operators that we analyzed in Theorem 1.4. For \(\alpha = 1\) this gives the following:

Corollary 5.1

Let \(K\subset {\mathbb {R}}^d\) be compact, \(P\in C^\infty (K)\) such that \(\nabla P\ne 0\) on the hypersurface \(M:=\{\xi \in K: P(\xi )=0\}\). Assume that in each point of M at least k principal curvatures are nonzero where \(k\in \{1,\ldots ,d-1\}\). Then, there is \(t_0>0\) such that

$$\begin{aligned} \sup _{|t|<t_0} \left\| \int _{M_t} e^{ix.\xi } {{\hat{g}}}(\xi )\,\hbox {d}\sigma _t(\xi ) \right\| _{L^{q,\infty }({\mathbb {R}}^d)} \lesssim \Vert g\Vert _{L^{p,1}({\mathbb {R}}^d)} \end{aligned}$$

for \(M_t:=\{\xi \in K:P(\xi )=t\}\) and \((\frac{1}{p},\frac{1}{q})\in \{B_{1,k},B_{1,k}'\}\). We have \((L^{p,1}({\mathbb {R}}^d), L^q({\mathbb {R}}^d))\)-bounds for \((\frac{1}{p},\frac{1}{q})\in (B_{1,k},C_{1,k}]\), \((L^p({\mathbb {R}}^d),L^{q,\infty }({\mathbb {R}}^d))\)-bounds for \((\frac{1}{p},\frac{1}{q}) \in (B_{1,k}',C_{1,k}']\) and strong \((L^p({\mathbb {R}}^d),L^{q}({\mathbb {R}}^d))\)-bounds for \((\frac{1}{p},\frac{1}{q})\in {\mathcal {P}}_1(k)\).

As described at the end of Sect. 3, the principal curvatures of \(M_t\) vary continuously with respect to t so that the curvature properties of \(M_t\) for small |t| are inherited from those for \(t=0\). The estimates leading to the proof of Proposition 2.2 will result from an application of Corollary 5.1 for \(d=3,K=\text {supp}(\beta _{11}),k=2\), whereas Proposition 2.3 corresponds to the choice \(d=3,K=\text {supp}(\beta _{12}),k=1\). To prove both results simultaneously, we therefore assume that \(K\subset {\mathbb {R}}^3\) and \(k\in \{1,2\}\) satisfy the conditions of the corollary.

5.2 Uniform Estimates for the Singular Multiplier

To prove the desired uniform resolvent estimates for \((P(\xi )+i\delta )^{-1}\), we consider

$$\begin{aligned} A_\delta f(x) = \int _{{\mathbb {R}}^d} \frac{{\hat{f}}(\xi ) \beta (\xi )}{P(\xi ) + i \delta } e^{ix.\xi } \hbox {d}\xi . \end{aligned}$$

It is actually enough to show the restricted weak-type bound

$$\begin{aligned} \Vert A_\delta f \Vert _{L^{q_0,\infty }({\mathbb {R}}^d)} \lesssim \Vert f \Vert _{L^{p_0,1}({\mathbb {R}}^d)} \end{aligned}$$
(44)

for \((1/p_0,1/q_0) = (\frac{k^2+6k+4}{2(k+1)(k+2)},\frac{k}{2(k+1)})=B'\) and

$$\begin{aligned} \Vert A_\delta f \Vert _{L^{q}({\mathbb {R}}^d)} \lesssim \Vert f \Vert _{L^{p,1}({\mathbb {R}}^d)} \end{aligned}$$

for the remaining tuples \((1/p,1/q)\in (B',C']\) where \(C'=(1,\frac{k}{2(k+1)})\) (Fig. 5).

Fig. 5
All other claimed estimates result from real interpolation with the corresponding dual estimates or with the trivial bound for \((\frac{1}{p},\frac{1}{q})=(1,0)\)

We focus on (44) in the following. To reduce our analysis to the region \(\{\xi \in K:|P(\xi )|<t_0\}\) for \(t_0\) as in Corollary 5.1, we introduce a cutoff function \(\chi \in C_c^\infty ({\mathbb {R}}^d)\) such that \(|P(\xi )|<t_0\) for \(\chi (\xi )\ne 0\) and \(|P(\xi )|>t_0/2\) for \(\chi (\xi )\ne 1\). We then have

$$\begin{aligned} A_\delta f(x) = \int _{{\mathbb {R}}^d} e^{ix.\xi } \frac{\chi (\xi ) \beta (\xi )}{P(\xi ) + i \delta } {\hat{f}}(\xi ) \,\hbox {d}\xi + \int _{{\mathbb {R}}^d} e^{ix.\xi } \frac{ (1-\chi (\xi )) \beta (\xi ) }{P(\xi ) + i \delta }{\hat{f}}(\xi )\, \hbox {d}\xi . \end{aligned}$$

Since P is smooth and bounded away from zero on \(\text {supp}(1-\chi )\), the Fourier multiplier in the latter expression is Schwartz and the claimed estimates (in fact even much stronger ones) hold for this second part. For this reason we may from now on concentrate on the first part. We change to generalized polar coordinates via the coarea formula:

$$\begin{aligned}&\int _{{\mathbb {R}}^d} \frac{e^{ix.\xi } \chi (\xi )\beta (\xi ) {\hat{f}}(\xi )}{P(\xi ) + i \delta } \hbox {d}\xi \\&\quad = ({\mathfrak {R}}(D)f)(x)+ i ({\mathfrak {I}}(D)f)(x) \\&\quad = \int _{-t_0}^{t_0} \frac{t}{t^2+ \delta ^2} \left( \int _{M_t} e^{ix.\xi }\chi (\xi )\beta (\xi )|\nabla P(\xi )|^{-1} {\hat{f}}(\xi ) \,\hbox {d}\sigma _t(\xi )\right) \hbox {d}t\\&\quad \quad + i \int _{-t_0}^{t_0} \frac{\delta }{t^2+ \delta ^2} \left( \int _{M_t} e^{ix.\xi }\chi (\xi )\beta (\xi )|\nabla P(\xi )|^{-1} {\hat{f}}(\xi ) \,\hbox {d}\sigma _t(\xi )\right) \hbox {d}t, \end{aligned}$$

where

$$\begin{aligned} \frac{\chi (\xi )\beta (\xi )}{P(\xi ) + i \delta } = \frac{\chi (\xi )\beta (\xi ) P(\xi )}{P(\xi )^2 + \delta ^2} + i \frac{\chi (\xi )\beta (\xi ) \delta }{P(\xi )^2+\delta ^2} =: {\mathfrak {R}}(\xi ) + i {\mathfrak {I}}(\xi ). \end{aligned}$$

In the following we estimate this expression with the aid of Corollary 5.1 by decomposition in Fourier space as in [30, p. 346].

The estimate for \({\mathfrak {I}}(D)\) is based on the coarea formula, Corollary 5.1, and Young's inequality in Lorentz spaces:

$$\begin{aligned} \begin{aligned}&\Vert {\mathfrak {I}}(D)f\Vert _{L^{q_0,\infty }({\mathbb {R}}^d)} \\&\quad \le \int _{{\mathbb {R}}} \frac{\delta }{t^2 + \delta ^2} \left\| \int _{M_t} e^{ix.\xi } \chi (\xi )\beta (\xi )|\nabla P(\xi )|^{-1} {\hat{f}}(\xi ) \hbox {d}\sigma _t(\xi ) \right\| _{L^{q_0,\infty }({\mathbb {R}}^d)} \,\hbox {d}t \\&\quad \lesssim \int _{{\mathbb {R}}} \frac{\delta }{t^2 + \delta ^2}\Vert \mathcal F^{-1}(\chi (\xi )\beta (\xi )|\nabla P(\xi )|^{-1}{{\hat{f}}}(\xi )) \Vert _{L^{p_0,1}({\mathbb {R}}^d)}\,\hbox {d}t \\&\quad \lesssim \int _{{\mathbb {R}}} \frac{\delta }{t^2 + \delta ^2} \Vert f \Vert _{L^{p_0,1}({\mathbb {R}}^d)}\,\hbox {d}t \\&\quad \lesssim \Vert f \Vert _{L^{p_0,1}({\mathbb {R}}^d)}. \end{aligned} \end{aligned}$$
(45)

We turn to the estimate of \({\mathfrak {R}}(D)\), which requires an additional decomposition: Let \(\phi \in {\mathcal {S}}({\mathbb {R}})\) be such that \( \text {supp} ({\hat{\phi }}) \subseteq [-2,-1/2] \cup [1/2,2]\) with \({\tilde{\phi }}(t):=t\phi (t)\) and

$$\begin{aligned} \sum _{j=-\infty }^\infty {\tilde{\phi }}(2^{-j} t) = 1 \qquad (t\in {\mathbb {R}}{\setminus }\{0\}). \end{aligned}$$

For the existence of \(\phi \) we refer to the proof of [30, Lemma 2.2], where it is denoted by \(\psi \). We split

$$\begin{aligned} A_j(\xi )&= {\mathfrak {R}}(\xi ) \tilde{\phi }(2^{-j} P(\xi ))&(2^j < |\delta |), \\ B_j(\xi )&= \left( {\mathfrak {R}}(\xi ) - \frac{\chi (\xi )\beta (\xi )}{P(\xi )} \right) \tilde{\phi }(2^{-j} P(\xi )) \quad (2^j \ge |\delta |), \\ C_j(\xi )&= \frac{\chi (\xi )\beta (\xi )}{P(\xi )} \tilde{\phi }(2^{-j} P(\xi )) \quad (2^j \ge |\delta |). \end{aligned}$$

The coarea formula, Minkowski’s inequality, and Corollary 5.1 yield as above

$$\begin{aligned}&\left\| {\mathcal {F}}^{-1} \left( \sum _{2^j< |\delta |} A_j(\xi ) {\hat{f}}(\xi ) \right) \right\| _{L^{q_0,\infty }({\mathbb {R}}^d)}\nonumber \\&\quad \le \sum _{2^j< |\delta |} \left\| \int _{-t_0}^{t_0} \frac{t\tilde{\phi }(2^{-j} t)}{t^2 + \delta ^2} \left( \int _{M_t} e^{ix.\xi } \chi (\xi )\beta (\xi ) |\nabla P(\xi )|^{-1} {\hat{f}}(\xi ) \,\hbox {d}\sigma _t(\xi )\right) \,\hbox {d}t \right\| _{L^{q_0,\infty }({\mathbb {R}}^d)} \nonumber \\&\quad \lesssim \sum _{2^j< |\delta |} \int _{{\mathbb {R}}} \frac{|t{\tilde{\phi }}(2^{-j}t)|}{t^2 + \delta ^2} \left\| \int _{M_t} e^{ix.\xi } \chi (\xi )\beta (\xi ) |\nabla P(\xi )|^{-1} {\hat{f}}(\xi ) \,\hbox {d}\sigma _t(\xi ) \right\| _{L^{q_0,\infty }({\mathbb {R}}^d)} \,\hbox {d}t\nonumber \\&\quad \lesssim \sum _{2^j < |\delta |} \int _{{\mathbb {R}}} \frac{2^j}{t^2 + \delta ^2} \Vert f\Vert _{L^{p_0,1}({\mathbb {R}}^d)} \,\hbox {d}t \nonumber \\&\quad \lesssim \int _{{\mathbb {R}}} \frac{\delta }{t^2 + \delta ^2} \,\hbox {d}t \, \Vert f\Vert _{L^{p_0,1}({\mathbb {R}}^d)} \nonumber \\&\quad \lesssim \Vert f\Vert _{L^{p_0,1}({\mathbb {R}}^d)}. \end{aligned}$$
(46)

Here we used the estimate \(|{\tilde{\phi }}(s)|\lesssim |s|^{-1}\), which holds because \({\tilde{\phi }}\) is a Schwartz function. By similar means, we find

$$\begin{aligned}&\left\| {\mathcal {F}}^{-1} \left( \sum _{2^j \ge |\delta |} B_j(\xi ) {\hat{f}}(\xi ) \right) \right\| _{L^{q_0,\infty }({\mathbb {R}}^d)}\nonumber \\&\quad = \left\| {\mathcal {F}}^{-1} \left( \sum _{2^j \ge |\delta |} \frac{\delta ^2\tilde{\phi }(2^{-j} P(\xi ))}{P(\xi )(P(\xi )^2+\delta ^2)} \chi (\xi )\beta (\xi ) {\hat{f}}(\xi ) \right) \right\| _{L^{q_0,\infty }({\mathbb {R}}^d)}\nonumber \\&\quad \lesssim \sum _{2^j \ge |\delta |} \int _{{\mathbb {R}}} \frac{\delta ^2 |\tilde{\phi }(2^{-j}t)|}{|t|(t^2 + \delta ^2)} \left\| \int _{M_t} e^{ix.\xi } \chi (\xi )\beta (\xi ) |\nabla P(\xi )|^{-1} {\hat{f}}(\xi ) \,\hbox {d}\sigma _t(\xi ) \right\| _{L^{q_0,\infty }({\mathbb {R}}^d)} \,\hbox {d}t \nonumber \\&\quad \lesssim \sum _{2^j \ge |\delta |} \int _{{\mathbb {R}}} \frac{\delta ^2 2^{-j}}{t^2 + \delta ^2} \Vert f \Vert _{L^{p_0,1}({\mathbb {R}}^d)} \,\hbox {d}t \nonumber \\&\quad \lesssim \int _{{\mathbb {R}}} \frac{\delta }{t^2 + \delta ^2} \Vert f \Vert _{L^{p_0,1}({\mathbb {R}}^d)} \,\hbox {d}t \nonumber \\&\quad \lesssim \Vert f \Vert _{L^{p_0,1}({\mathbb {R}}^d)}. \end{aligned}$$
(47)

Here, the estimate from the third to the fourth line uses \(|{\tilde{\phi }}(2^{-j}t)| = |\phi (2^{-j}t)|\, 2^{-j}|t| \lesssim 2^{-j}|t|\). For the most involved estimate of \(C_j\), we need the following lemma:

Lemma 5.2

Let \(\chi \in C^\infty _c({\mathbb {R}}^d)\). Suppose \(\phi \in {\mathcal {S}}({\mathbb {R}})\) with \(\text {supp} (\hat{\phi }) \subseteq [-2,-\frac{1}{2}] \cup [ \frac{1}{2}, 2]\) and that the level sets \(\{\xi \in \text {supp}(\chi ):P(\xi )=t\}\) have k principal curvatures uniformly bounded from below in modulus for all \(|t|\le t_0\). Then, for \(1 \le p,q \le \infty \) with \(q \ge 2\) and \(\frac{1}{q} \ge \frac{k+2}{k} \big (1-\frac{1}{p} \big )\), we find the following estimate to hold for all \(\lambda >0\):

$$\begin{aligned} \Vert {\mathcal {F}}^{-1} \big ( \phi (\lambda ^{-1} P(\xi )) \chi (\xi ) {\hat{f}}(\xi ) \big ) \Vert _{L^{q}({\mathbb {R}}^d)} \lesssim \lambda ^{\frac{k+2}{2} - \frac{k+1}{q} } \Vert f \Vert _{L^{p}({\mathbb {R}}^d)}. \end{aligned}$$
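As a consistency check, the exponent \(\frac{k+2}{2} - \frac{k+1}{q}\) interpolates between the endpoints used in the proof below:

$$\begin{aligned} (p,q)=\left( \tfrac{2(k+2)}{k+4},2\right) :\; \lambda ^{\frac{k+2}{2}-\frac{k+1}{2}} = \lambda ^{\frac{1}{2}}, \qquad (p,q)=(1,\infty ):\; \lambda ^{\frac{k+2}{2}}, \end{aligned}$$

matching the gain \(\lambda ^{\frac{1}{2}}\) from the \(L^2\)-computation and the kernel bound \(|K(x)| \lesssim \lambda ^{\frac{k+2}{2}}\), respectively.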

Proof

By interpolation, it suffices to prove the endpoint estimates for \((p,q) = (\frac{2(k+2)}{k+4},2)\), \((p,q)=(1,2)\), and \((p,q)=(1,\infty )\). Since the multiplier is regular for \(\lambda \ge 1\), we may henceforth suppose \(\lambda \le 1\). For \(q=2\) we use Plancherel's theorem, the coarea formula, and the \(L^{\frac{2(k+2)}{k+4}}\)-\(L^2\) restriction–extension estimate from Corollary 5.1:

$$\begin{aligned}&\Vert {\mathcal {F}}^{-1} \big ( \phi (\lambda ^{-1} P(\xi )) \chi (\xi ) {\hat{f}}(\xi ) \big )\Vert _{L^2({\mathbb {R}}^d)}^2 \\&\quad = \Vert \phi (\lambda ^{-1} P(\xi )) \chi (\xi ) {\hat{f}}(\xi ) \Vert _{L^2({\mathbb {R}}^d)}^2 \\&\quad = \int _{-t_0}^{t_0} | \phi (\lambda ^{-1} t)|^2 \left( \int _{M_t} |{\hat{f}}(\xi )|^2 |\chi (\xi )|^2 |\nabla P(\xi )|^{-1} \,\hbox {d}\sigma _t(\xi ) \right) \,\hbox {d}t \\&\quad \lesssim \int _{-t_0}^{t_0} | \phi (\lambda ^{-1} t)|^2 \Vert f\Vert _{L^{\frac{2(k+2)}{k+4}}({\mathbb {R}}^d)}^2 \,\hbox {d}t \\&\quad \lesssim \lambda \Vert f \Vert ^2_{L^{\frac{2(k+2)}{k+4}}({\mathbb {R}}^d)}. \end{aligned}$$

Using the trivial estimate \(|{{\hat{f}}}(\xi )|\le \Vert f\Vert _{L^1({\mathbb {R}}^d)}\) instead (from the third to the fourth line), we find the endpoint estimate for \((p,q)=(1,2)\).

For the endpoint \((p,q) = (1,\infty )\) it suffices to show the kernel estimate \(|K(x)| \lesssim \lambda ^{\frac{k+2}{2}}\). Let \(J = [-2,-\frac{1}{2}] \cup [\frac{1}{2},2]\). Then

$$\begin{aligned} K(x)&:= {\mathcal {F}}^{-1} \big ( \phi (\lambda ^{-1} P(\xi )) \chi (\xi )\big )(x) \\&= \frac{1}{(2\pi )^{d}} \int _{{\mathbb {R}}^d} e^{ix.\xi } \phi (\lambda ^{-1} P(\xi )) \chi (\xi ) \,\hbox {d}\xi \\&= \frac{1}{2\pi } \int _{{\mathbb {R}}^d} e^{ix.\xi } \chi (\xi ) \int _{J} e^{ir \lambda ^{-1} P(\xi )} \hat{\phi }(r)\,\hbox {d}r\,\hbox {d}\xi \\&= \frac{1}{2\pi } \int _{J} \hat{\phi }(r) \left( \int _{-t_0}^{t_0} e^{ir\lambda ^{-1}t} \underbrace{\left( \int _{M_t} e^{ix.\xi } \chi (\xi )|\nabla P(\xi )|^{-1} \,\hbox {d}\sigma _t(\xi )\right) }_{=:a(t,x)} \,\hbox {d}t \right) \,\hbox {d}r. \end{aligned}$$

The function a is smooth, all its derivatives are bounded functions, and its support is bounded with respect to t. So the principle of non-stationary phase yields for \(|x| \ll \lambda ^{-1}\) and all \(M\in {\mathbb {N}}\)

$$\begin{aligned} |K(x)| \lesssim _M \int _{J} |\hat{\phi }(r)| |r\lambda ^{-1}|^{-M} \,\hbox {d}r \lesssim _M \lambda ^M. \end{aligned}$$

Choosing \(M \ge \frac{k+2}{2}\) yields the bound \(|K(x)| \lesssim \lambda ^{\frac{k+2}{2}}\) in this region. For \(|x| \gtrsim \lambda ^{-1}\) we can use the dispersive estimate \(|a(t,x)|\lesssim (1+|x|)^{-k/2}\), which holds due to the method of stationary phase and the presence of k non-vanishing principal curvatures. We thus get for \(|x| \gtrsim \lambda ^{-1}\)

$$\begin{aligned} |K(x)|&= \frac{1}{2\pi } \left| \int _{-t_0}^{t_0} \phi (\lambda ^{-1}t) a(t,x) \,\hbox {d}t \right| \\&\lesssim \int _{-t_0}^{t_0} |\phi (\lambda ^{-1}t)| (1+|x|)^{-\frac{k}{2}} \,\hbox {d}t \\&\lesssim \lambda (1+|x|)^{-\frac{k}{2}} \\&\lesssim \lambda ^{\frac{k+2}{2}}. \end{aligned}$$

The proof is complete. \(\square \)

The lemma allows us to bound the \(C_j\)-terms as follows:

$$\begin{aligned} \Vert C_j(D) f \Vert _{L^\sigma ({\mathbb {R}}^d)}&= \Vert {\mathcal {F}}^{-1}\left( \frac{\chi (\xi )\beta (\xi )}{P(\xi )} \tilde{\phi }(2^{-j} P(\xi )) \hat{f}(\xi )\right) \Vert _{L^\sigma ({\mathbb {R}}^d)} \\&= 2^{-j} \Vert {\mathcal {F}}^{-1}\left( \chi (\xi )\beta (\xi ) \phi (2^{-j} P(\xi )) {{\hat{f}}}(\xi ) \right) \Vert _{L^\sigma ({\mathbb {R}}^d)} \\&\lesssim 2^{j \left( \frac{k}{2} - \frac{k+1}{\sigma } \right) } \Vert f \Vert _{L^r({\mathbb {R}}^d)} \end{aligned}$$

for \(2 \le \sigma \le \infty \), \(\frac{1}{\sigma } \ge \frac{k+2}{k}\big ( 1- \frac{1}{r} \big )\). Using (36) from Lemma 4.5 for \(q_1,q_2,p_1,p_2\) defined as

$$\begin{aligned} \frac{k}{2}-\frac{k+1}{q_1} = \varepsilon ,\quad \frac{k}{2}-\frac{k+1}{q_2} = -\varepsilon ,\quad \frac{1}{q_i} = \frac{k+2}{k}\left( 1- \frac{1}{p_i} \right) \end{aligned}$$

for small \(\varepsilon >0\), we finally get due to \(\frac{1}{q_0}=\frac{k}{2(k+1)}=\frac{1}{2q_1}+\frac{1}{2q_2} = \frac{k+2}{k}\big ( 1- \frac{1}{p_0}\big )\)

$$\begin{aligned} \left\| {\mathcal {F}}^{-1} \left( \sum _{2^j \ge |\delta |} C_j(\xi ) {\hat{f}}(\xi ) \right) \right\| _{L^{q_0,\infty }({\mathbb {R}}^d)} \lesssim \Vert f\Vert _{L^{p_0,1}({\mathbb {R}}^d)}. \end{aligned}$$
(48)
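One can check that these choices reproduce the point \(B'\): averaging the defining relations for \(q_1,q_2\) (again \(\theta = \frac{1}{2}\)) and using \(\frac{1}{q_i} = \frac{k+2}{k}\big (1-\frac{1}{p_i}\big )\) gives

$$\begin{aligned} \frac{1}{q_0} = \frac{1}{2}\left( \frac{1}{q_1}+\frac{1}{q_2}\right) = \frac{k}{2(k+1)}, \qquad \frac{1}{p_0} = 1 - \frac{k}{k+2} \cdot \frac{1}{q_0} = \frac{k^2+6k+4}{2(k+1)(k+2)}. \end{aligned}$$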

Combining estimates (45)–(48), we get the claimed estimate

$$\begin{aligned} \Vert A_\delta f \Vert _{L^{q_0,\infty }({\mathbb {R}}^d)} \lesssim \Vert f \Vert _{L^{p_0,1}({\mathbb {R}}^d)}. \end{aligned}$$

This proves Proposition 2.2 (\(k=2\)) and Proposition 2.3 (\(k=1\)). \(\Box \)

5.3 An Improved Fourier Restriction–Extension Estimate for the Fresnel Surface Close to Hamiltonian Circles

The purpose of this section is to point out how the special degeneracy along the Hamiltonian circles might allow for improved estimates in Proposition 2.3. In our proof in the previous section we exploited that one principal curvature is bounded away from zero close to these circles. But actually we have more: The other principal curvature does not vanish identically in that region, but only vanishes at the Hamiltonian circle, which is a curve on the Fresnel surface. We refer to Fig. 2 for an illustration of the situation.

For surfaces with vanishing Gaussian curvature, but no flat points, improved results were established in special cases. For stationary phase estimates for functions with degenerate Hessian we refer to the works of Ikromov–Müller [27,28,29]; see also [20, 21, 39, 45] and references therein. For generic (in a suitable sense) surfaces in \({\mathbb {R}}^3\) with Gaussian curvature vanishing along a one-dimensional sub-manifold, the decay

$$\begin{aligned} |\hat{\mu }(\xi )| \lesssim \langle \xi \rangle ^{-\frac{3}{4}} \end{aligned}$$

was obtained by J.-C. Cuenin and the second author [12]. Note that the present proof only uses \(|\hat{\mu }(\xi )| \lesssim \langle \xi \rangle ^{-\frac{1}{2}}\), which is (29) for \(k=1\). We also refer to [12] for the corresponding \(L^p\)\(L^q\) estimates.

Since the singular points of our Fresnel surface (to be discussed in the following section) give rise to the worse total decay \(|\hat{\mu }(\xi )| \lesssim \langle \xi \rangle ^{-\frac{1}{2}}\) of the Fourier transform, the analysis leading to this better decay is not detailed here.

6 Estimates for Neighborhoods of the Singular Points

The purpose of this section is to prove the estimate

$$\begin{aligned} \Vert \beta _{13}(D) (E,H) \Vert _{L^q({\mathbb {R}}^3)} \lesssim \Vert \beta _{13}(D) (J_e,J_m) \Vert _{L^p({\mathbb {R}}^3)} \end{aligned}$$

with \(\beta _{13}\) defined in Sect. 2 as a smooth cutoff localizing to a neighborhood of the singular points. We shall also take the opportunity to derive estimates for perturbed cone multipliers in \({\mathbb {R}}^d\). These naturally arise for surfaces \(S= \{ \xi \in {\mathbb {R}}^d : p(\xi ) = 0 \}\) at singular points \(\xi \in S\) with \(\nabla p(\xi ) = 0\) and \(\partial ^2 p(\xi )\) of signature \((1,d-1)\).

In the first step, to clarify the nature of S, we shall change to a parametric representation in Sect. 6.1. We will see that it suffices to analyze the two perturbed half-cones

$$\begin{aligned} \{ \xi _d = \pm |\xi '| + O(|\xi '|^2) \}. \end{aligned}$$

This yields that, at a small but fixed distance from the origin, we have the curvature properties of the cone and can apply Theorem 1.4 with \(\alpha =1\), \(k=d-2\), to derive Fourier restriction–extension estimates for the layers. Then the arguments of Sect. 5.2 apply again. We derive the estimates for the generalized cone multiplier and for (49) by an additional Littlewood–Paley decomposition and a scaling argument in Sect. 6.2.

Coming back to Fresnel’s surface, we first prove that S looks like a cone around the singular points. We recall that we assumed without loss of generality \(\mu _1=\mu _2=\mu _3=\omega =1\) so that the results from Sect. 3 apply for \(S=S^*\).

Proposition 6.1

Let \(\zeta \in S\) be one of the four singular points given by Proposition 3.2. Then

$$\begin{aligned} p(\omega ,\xi ) = \frac{1}{2}(\xi -\zeta )^T D^2p(\omega ,\zeta )(\xi -\zeta ) + O(|\xi -\zeta |^3) \quad \text {as }\xi \rightarrow \zeta \end{aligned}$$

and \(D^2p(\omega ,\zeta )\) has two positive and one negative eigenvalue.

Proof

By Taylor’s theorem and since \(p(\omega ,\zeta )=0\) (because \(\zeta \in S\)) and \(\nabla p(\omega ,\zeta )=0\) (because \(\zeta \) is singular), it suffices to prove that \(D^2p(\omega ,\zeta )\) has two positive and one negative eigenvalue. For notational convenience we assume \(\varepsilon _1<\varepsilon _2<\varepsilon _3\) and concentrate on the singular point \(\zeta =(\zeta _1,\zeta _2,\zeta _3)\in S\) given by

$$\begin{aligned} \zeta _1 = \sqrt{\frac{\varepsilon _3 (\varepsilon _1-\varepsilon _2)}{\varepsilon _1 - \varepsilon _3}}, \qquad \zeta _2=0, \qquad \zeta _3 = \sqrt{\frac{\varepsilon _1(\varepsilon _3 -\varepsilon _2)}{\varepsilon _3 -\varepsilon _1}}. \end{aligned}$$

Then we find

$$\begin{aligned} D^2p(\omega ,\zeta ) = \begin{pmatrix} D_{11} &{}\quad 0 &{}\quad D_{13}\\ 0 &{}\quad D_{22} &{}\quad 0\\ D_{13} &{}\quad 0 &{}\quad D_{33} \end{pmatrix}, \end{aligned}$$

where (cf. [36, pp. 74-75])

$$\begin{aligned} D_{22} = \frac{2}{\varepsilon _1}+\frac{2}{\varepsilon _3}-\frac{2(\varepsilon _1+\varepsilon _2)}{\varepsilon _1\varepsilon _2\varepsilon _3}\zeta _1^2 -\frac{2(\varepsilon _2+\varepsilon _3)}{\varepsilon _1\varepsilon _2\varepsilon _3}\zeta _3^2 = \frac{2}{\varepsilon _1\varepsilon _2\varepsilon _3} (\varepsilon _1 - \varepsilon _2)( \varepsilon _2- \varepsilon _3 )>0 \end{aligned}$$

and

$$\begin{aligned} D_{11}&= \frac{2}{\varepsilon _2}+\frac{2}{\varepsilon _3}-\frac{12}{\varepsilon _2\varepsilon _3}\zeta _1^2 - \frac{2(\varepsilon _1+\varepsilon _3)}{\varepsilon _1\varepsilon _2\varepsilon _3}\zeta _3^2 = -\frac{8(\varepsilon _2 - \varepsilon _1) }{\varepsilon _2(\varepsilon _3 - \varepsilon _1)}<0, \\ D_{33}&= -\frac{8(\varepsilon _2 - \varepsilon _3)}{\varepsilon _2(\varepsilon _1- \varepsilon _3)}, \\ D_{13}&= -\frac{4(\varepsilon _1+\varepsilon _3)}{\varepsilon _1\varepsilon _2\varepsilon _3} \zeta _1\zeta _3 = -\frac{4(\varepsilon _1+\varepsilon _3)\sqrt{(\varepsilon _2-\varepsilon _1)(\varepsilon _3-\varepsilon _2)}}{\varepsilon _2\sqrt{\varepsilon _1\varepsilon _3}(\varepsilon _3-\varepsilon _1)}, \\ D_{11}D_{33} - D_{13}^2&= - \frac{16}{\varepsilon _1\varepsilon _2^2\varepsilon _3} (\varepsilon _2 - \varepsilon _1)( \varepsilon _3 - \varepsilon _2) < 0. \end{aligned}$$

So the symmetric \(2\times 2\)-submatrix with entries \(D_{11},D_{13},D_{13},D_{33}\) is indefinite and hence possesses one positive and one negative eigenvalue. This yields the claim. \(\square \)
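The sign computations in the proof can be double-checked numerically; the following sketch uses illustrative values \(\varepsilon _1<\varepsilon _2<\varepsilon _3\) (not tied to a physical medium) and confirms both the closed forms and the signature:

```python
import math

# Sample anisotropic permittivities with eps1 < eps2 < eps3 (illustrative values only)
e1, e2, e3 = 1.0, 2.0, 3.0

# Singular point from Proposition 6.1 (zeta_2 = 0)
z1 = math.sqrt(e3 * (e1 - e2) / (e1 - e3))
z3 = math.sqrt(e1 * (e3 - e2) / (e3 - e1))

# Entries of D^2 p(omega, zeta) as given in the proof
D22 = 2/e1 + 2/e3 - 2*(e1 + e2)/(e1*e2*e3) * z1**2 - 2*(e2 + e3)/(e1*e2*e3) * z3**2
D11 = 2/e2 + 2/e3 - 12/(e2*e3) * z1**2 - 2*(e1 + e3)/(e1*e2*e3) * z3**2
D33 = -8*(e2 - e3) / (e2*(e1 - e3))
D13 = -4*(e1 + e3)/(e1*e2*e3) * z1 * z3

# The closed forms from the proof agree with the defining expressions ...
assert math.isclose(D22, 2/(e1*e2*e3) * (e1 - e2) * (e2 - e3))
assert math.isclose(D11, -8*(e2 - e1) / (e2*(e3 - e1)))
assert math.isclose(D11*D33 - D13**2, -16/(e1*e2**2*e3) * (e2 - e1) * (e3 - e2))

# ... and the signature is (2,1): D22 > 0 while the 2x2 block is indefinite
assert D22 > 0 and D11*D33 - D13**2 < 0
```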

Accordingly, after suitable rotations, translations and multiplication by \(-1\), we may suppose in the following that the analyzed singular point lies at the origin and that the Taylor expansion of the Fourier symbol around the singular point is given by

$$\begin{aligned} \tilde{p}(\xi ) = \xi _3^2 - |\xi '|^2 + g(\xi ), \quad |\partial ^\alpha g(\xi )| \lesssim _\alpha |\xi |^{3-|\alpha |} \quad (\alpha \in {\mathbb {N}}_0^3). \end{aligned}$$

We will discuss the corresponding Fourier multiplier given by

$$\begin{aligned} A_\delta f(x) = \int _{{\mathbb {R}}^3} \frac{e^{ix.\xi } \beta (\xi ) {\hat{f}}(\xi ) }{\tilde{p}(\xi ) + i \delta } \hbox {d}\xi , \end{aligned}$$

where \(\beta \in C^\infty _c({\mathbb {R}}^3)\). The support of \(\beta \) will later be assumed to be close to zero so that the mapping properties of \(A_\delta \) are determined by the Taylor expansion of \({\tilde{p}}\) around zero. The aim is to show estimates

$$\begin{aligned} \Vert A_\delta f \Vert _{L^q({\mathbb {R}}^3)} \lesssim \Vert f \Vert _{L^p({\mathbb {R}}^3)} \end{aligned}$$
(49)

for \(p,q\) as in Proposition 2.4, as before with implicit constant independent of \(\delta \). This will be proved in Sect. 6.3.

6.1 Parametric Representation Around the Singular Points

In this subsection we change to a parametric representation. This requires additional arguments as \({\tilde{p}}\) vanishes of second order at the origin. We find the following:

Proposition 6.2

Let \(\tilde{p}: {\mathbb {R}}^d \rightarrow {\mathbb {R}}\) be a smooth function with \(\tilde{p}(0) = 0\), \(\nabla \tilde{p}(0)=0\), and \(\partial ^2 \tilde{p}(0) = \text {diag}(-1,\ldots ,-1,1)\). Then, there is \(c>0\) such that

$$\begin{aligned} \tilde{p}(\xi ) = (\xi _d - |\xi '|+r_1(\xi '))(\xi _d + |\xi '|+r_2(\xi ')) m(\xi ) \text { for } \xi = (\xi ',\xi _d)\in B(0,c) \end{aligned}$$
(50)

with \(m \in C^\infty ({\mathbb {R}}^d)\), \(|m| \gtrsim 1\) on B(0, c), and \(r_i \in C^\infty ({\mathbb {R}}^{d-1} \backslash \{0\})\), \(|\partial ^\alpha r_i(\xi ')| \lesssim |\xi '|^{2-|\alpha |}\).

Proof

We use the Weierstraß–Malgrange preparation theorem (cf. [26, Theorem 7.5.5.]) to obtain a factorization

$$\begin{aligned} \tilde{p}(\xi ',\xi _d) = m(\xi ) (\xi _d^2 + \xi _d a_1(\xi ') + a_2(\xi ')), \quad \xi \in B(0,c) \end{aligned}$$

with \(a_1,a_2,m \in C^\infty (B(0,c))\), \(|m| \gtrsim 1\), \(a_i(0) = 0\), \(\nabla a_i(0) = 0\), \(i=1,2\), and \(\partial ^2 a_2(0) = - 2 \cdot 1_{(d-1) \times (d-1)}\). Solving \(\xi _d^2 + \xi _d a_1(\xi ') + a_2(\xi ') = 0\) with respect to \(\xi _d\), we obtain

$$\begin{aligned} \xi _d = -\frac{a_1(\xi ')}{2} \pm \sqrt{\left( \frac{a_1(\xi ')}{2} \right) ^2 - a_2(\xi ') }. \end{aligned}$$
(51)

Since \(a_2\) is smooth with \(a_2(0)=0\), \(\nabla a_2(0)=0\), and \(\partial ^2 a_2(0) = - 2 \cdot 1_{(d-1) \times (d-1)}\), Taylor expansion gives \(a_2(\xi ') = -|\xi '|^2 + r(\xi ')\) with \(|\partial ^\alpha r(\xi ')| \lesssim _\alpha |\xi '|^{(3-|\alpha |)_+}\). By this, we rewrite

$$\begin{aligned} \sqrt{ \left( \frac{a_1(\xi ')}{2} \right) ^2 - a_2(\xi ') } = |\xi '| \sqrt{1 + \left( \frac{a_1(\xi ')}{2 |\xi '|} \right) ^2 - \frac{r(\xi ')}{|\xi '|^2}}. \end{aligned}$$

Let \(s(\xi ') \!:=\! \big ( \frac{a_1(\xi ')}{2|\xi '|} \big )^2 - \frac{r(\xi ')}{|\xi '|^2}\), which satisfies \(s \!\in \! C^\infty ({\mathbb {R}}^{d-1} \backslash \{ 0 \})\) with \(|\partial ^\alpha s(\xi ')| \lesssim _\alpha |\xi '|^{1-|\alpha |}\). Here we use \(|\partial ^\alpha a_1(\xi ')| \lesssim _\alpha |\xi '|^{(2-|\alpha |)_+}\) and the above estimate for r. By Taylor expansion, we obtain \(\sqrt{1+s(\xi ')}= 1 + t(\xi ')\) for \(t \in C^\infty ({\mathbb {R}}^{d-1} \backslash \{ 0 \})\) with

$$\begin{aligned} |\partial ^\alpha t(\xi ')| \lesssim _\alpha |\xi '|^{1-|\alpha |}. \end{aligned}$$
(52)

Returning to (51), we obtain

$$\begin{aligned} \xi _d = - \frac{a_1(\xi ')}{2} \pm |\xi '| (1+ t(\xi ')). \end{aligned}$$

Let \(r_1(\xi ') = \frac{a_1(\xi ')}{2}- |\xi '|t(\xi ')\), \(r_2(\xi ') = \frac{a_1(\xi ')}{2} + |\xi '| t(\xi ')\) such that

$$\begin{aligned} \tilde{p}(\xi ) = m(\xi ) (\xi _d - |\xi '| + r_1(\xi '))( \xi _d + |\xi '| + r_2(\xi ')) \end{aligned}$$

for \(\xi = (\xi ',\xi _d) \in B(0,c)\). The estimates \(|\partial ^\alpha r_i(\xi ')| \lesssim _\alpha |\xi '|^{2-|\alpha |}\) follow from \(|\partial ^\alpha a_1(\xi ')| \lesssim _\alpha |\xi '|^{(2-|\alpha |)_+}\) and (52). \(\square \)
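The factorization (50) can be illustrated on a toy symbol; the sketch below (a hypothetical two-dimensional example with \(m \equiv 1\) and \(g(\xi ) = \xi _1^3\), chosen only for illustration) exhibits the quadratically small corrections \(r_1, r_2\):

```python
import math

# Model symbol in d = 2: p~(x1, x2) = x2^2 - x1^2 + x1^3,
# i.e. m = 1, a1 = 0, a2(x1) = -x1^2 + x1^3 in the notation of the proof.
def p_tilde(x1, x2):
    return x2**2 - x1**2 + x1**3

# The roots in x2 are the perturbed half-cones x2 = ±|x1| sqrt(1 - x1):
def r1(x1):  # x2 - |x1| + r1(x1) vanishes on the upper branch
    return abs(x1) * (1 - math.sqrt(1 - x1))

def r2(x1):  # x2 + |x1| + r2(x1) vanishes on the lower branch
    return -abs(x1) * (1 - math.sqrt(1 - x1))

for x1 in [0.4, 0.1, 0.01, -0.2]:
    # the factorization (50) holds exactly with m = 1 ...
    for x2 in [0.0, 0.3, -0.5]:
        lhs = p_tilde(x1, x2)
        rhs = (x2 - abs(x1) + r1(x1)) * (x2 + abs(x1) + r2(x1))
        assert math.isclose(lhs, rhs, abs_tol=1e-12)
    # ... and the corrections are quadratically small: |r_i(x1)| <= x1^2
    assert abs(r1(x1)) <= x1**2 and abs(r2(x1)) <= x1**2
```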

6.2 Estimates for Perturbed Cone Multipliers

With \(({\mathfrak {m}}_\alpha f) \widehat{(}\xi ) = m^{-\alpha }(\xi ) {\hat{f}}(\xi )\) a Fourier multiplier in \(L^p({\mathbb {R}}^d)\) for \(1< p < \infty \) by Mikhlin’s theorem, the above parametric representation suggests analyzing the generalized cone multiplier

$$\begin{aligned} ({\mathcal {C}}^\alpha f) \widehat{(}\xi ) = \frac{1}{\Gamma (1-\alpha )} \frac{\beta (\xi ) {\hat{f}}(\xi )}{((\xi _d - |\xi '| + r_1(\xi '))(\xi _d + |\xi '| + r_2(\xi ')))^{\alpha }_+}, \end{aligned}$$

which is again defined by analytic continuation for \(\alpha \ge 1\). As provided in Sect. 6.1 for singular non-degenerate points, we suppose that

$$\begin{aligned} r_i \in C^\infty ({\mathbb {R}}^{d-1} \backslash \{0\}), \quad |\partial ^\alpha r_i(\xi ')| \lesssim _\alpha |\xi '|^{2-|\alpha |} \qquad (i=1,2, \; \alpha \in {\mathbb {N}}_0^{d-1}) \end{aligned}$$

and \(\beta \in C^\infty _c(B(0,c))\) satisfies \(\beta (\xi ) = 1\) for \(|\xi | \le \frac{c}{2}\), with \(c\) as in Proposition 6.2. We suppose that \(c=1\) to lighten the notation. The aim of this section is to show that \({\mathcal {C}}^\alpha : L^p({\mathbb {R}}^d) \rightarrow L^q({\mathbb {R}}^d)\) is bounded for exponents \((p,q)\) as described below. To make sense of \({\mathcal {C}}^{\alpha }\) a priori in the distributional sense, we suppose that \(f \in {\mathcal {S}}\) with \(0 \notin \text {supp} ({\hat{f}})\). As we prove estimates independent of the Fourier support, \({\mathcal {C}}^\alpha \) extends by density.

Proposition 6.3

Let \(1/2< \alpha < d/2\). Then \({\mathcal {C}}^\alpha \) has the same mapping properties as the Bochner–Riesz operator \(T^\alpha \) from Theorem 1.4 (i) for \(k=d-2\).

The proposition generalizes Lee’s result [35, Theorem 1.1] for \(\alpha > 1/2\): Fourier supports and perturbations of the cone including the singular point are covered, and the space dimension is not restricted to \(d=3\). As we obtain the same conditions on \((p,q)\) as Lee, which he showed to be sharp in the case \(d=3\), the conditions in Proposition 6.3 are sharp in that case. It seems likely that, by bilinear restriction, the result can be improved to cover \(\alpha < \frac{1}{2}\) as in [35].

To reduce the estimates to Theorem 1.4, we apply a Littlewood–Paley decomposition. Let \(\beta _l(\xi ) = \beta _0(2^l \xi )\) with \(\text {supp} (\beta _0) \subseteq B(0,2) \backslash B(0,1/2)\) and

$$\begin{aligned} \sum _{l \ge 0} \beta _l \cdot \beta = \beta \quad (\xi \ne 0). \end{aligned}$$

We define

$$\begin{aligned} ({\mathcal {C}}_l^{\alpha } f) \widehat{(}\xi )= \frac{1}{\Gamma (1-\alpha )}\frac{ \beta _l(\xi ) \beta (\xi ) {\hat{f}}(\xi )}{((\xi _d - |\xi '| + r_1(\xi '))(\xi _d + |\xi '| + r_2(\xi ')))^{\alpha }_+}. \end{aligned}$$
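A dyadic partition with these properties can be obtained by telescoping a single cutoff; a minimal numerical sketch (with an illustrative piecewise-linear stand-in for the smooth bump, for illustration only):

```python
# Dyadic partition of unity on 0 < |xi| <= 1 via telescoping a single cutoff.
# phi is a piecewise-linear stand-in for a smooth bump:
# phi = 1 on B(0,1), phi = 0 outside B(0,2).
def phi(r):
    return min(1.0, max(0.0, 2.0 - r))

def beta0(r):
    # supp(beta0) in {1/2 <= r <= 2}: for r < 1/2 both terms equal 1, for r > 2 both vanish
    return phi(r) - phi(2*r)

for xi in [0.003, 0.1, 0.7, 1.0]:
    # beta_l(xi) = beta0(2^l xi); the sum telescopes to phi(xi) - phi(2^L xi),
    # and the second term vanishes once 2^L |xi| >= 2
    total = sum(beta0(2**l * xi) for l in range(40))
    assert abs(total - phi(xi)) < 1e-12  # equals 1 for 0 < |xi| <= 1
```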

We have the following consequence of Littlewood–Paley theory:

Lemma 6.4

Assume that there are \(1< p< 2< q < \infty \), \(r_1 \in \{1,p \}\), and \(r_2 \in \{q,\infty \}\) such that

$$\begin{aligned} \Vert {\mathcal {C}}_l^\alpha f \Vert _{L^{q,r_2}({\mathbb {R}}^d)} \le C \Vert f \Vert _{L^{p,r_1}({\mathbb {R}}^d)} \end{aligned}$$

holds for all \(l\in {\mathbb {N}}_0\). Then

$$\begin{aligned} \Vert {\mathcal {C}}^\alpha f \Vert _{L^{q,r_2}({\mathbb {R}}^d)} \lesssim C \Vert f \Vert _{L^{p,r_1}({\mathbb {R}}^d)}. \end{aligned}$$

Proof

By the square function estimate, which also holds in Lorentz spaces (see, e.g., [30, Lemma 3.2]), and Minkowski’s inequality (note that \(L^{\frac{q}{2},\infty }\) is normable because \(q>2\)), we write

$$\begin{aligned} \Vert {\mathcal {C}}^\alpha f \Vert _{L^{q,r_2}({\mathbb {R}}^d)} \lesssim \big \Vert \left( \sum _{l \ge 0} |{\mathcal {C}}^\alpha _l f |^2 \right) ^{\frac{1}{2}} \big \Vert _{L^{q,r_2}({\mathbb {R}}^d)} \lesssim \left( \sum _{l \ge 0} \Vert {\mathcal {C}}_l^\alpha f \Vert _{L^{q,r_2}({\mathbb {R}}^d)}^2 \right) ^{\frac{1}{2}}. \end{aligned}$$

By hypothesis and noting that \({\mathcal {C}}_l^\alpha f = {\mathcal {C}}_l^\alpha \big ( \sum _{|l'-l| \le 2} \beta _{l'}(D) f \big )\), we find

$$\begin{aligned} \left( \sum _{l \ge 0} \Vert {\mathcal {C}}_l^\alpha f \Vert ^2_{L^{q,r_2}({\mathbb {R}}^d)} \right) ^{\frac{1}{2}} \lesssim \left( \sum _{l' \ge 0} \Vert \beta _{l'}(D) f \Vert ^2_{L^{p,r_1}({\mathbb {R}}^d)} \right) ^{\frac{1}{2}} \lesssim \Vert f \Vert _{L^{p,r_1}({\mathbb {R}}^d)}. \end{aligned}$$

Notice that the ultimate estimate is dual to the square function estimate in the previous display. \(\square \)

We are ready for the proof of Proposition 6.3.

Proof of Proposition 6.3

We use scaling to reduce to unit frequencies:

$$\begin{aligned}&{\mathcal {C}}_l^\alpha f(x) = \frac{1}{\Gamma (1-\alpha )} \int _{{\mathbb {R}}^d}\frac{e^{ix.\xi } \beta _l(\xi ) {\hat{f}}(\xi )}{((\xi _d -|\xi '| + r_1(\xi '))(\xi _d+|\xi '| +r_2(\xi ')))^\alpha _+} \,\hbox {d}\xi \\&\quad = \frac{2^{-dl}}{\Gamma (1-\alpha )} \int _{{\mathbb {R}}^d}\frac{e^{ix.2^{-l} \zeta } \beta _0(\zeta ) {\hat{f}}(2^{-l} \zeta )}{((2^{-l} \zeta _d - 2^{-l} |\zeta '| + r_1(2^{-l}\zeta '))(2^{-l} \zeta _d + 2^{-l} |\zeta '| + r_2(2^{-l} \zeta ')))^\alpha _+} \hbox {d}\zeta \\&\quad = \frac{2^{2\alpha l - dl}}{\Gamma (1-\alpha )} \int _{{\mathbb {R}}^d}\frac{e^{i 2^{-l} x.\zeta } \beta _0(\zeta ) {\hat{f}}_l(\zeta )}{((\zeta _d - |\zeta '| + r_{1,l}(\zeta '))(\zeta _d + |\zeta '| + r_{2,l}(\zeta ')))^\alpha _+} \hbox {d}\zeta , \end{aligned}$$

where \({\hat{f}}_l(\zeta ) = {\hat{f}}(2^{-l} \zeta )\), \(r_{i,l}(\zeta ') = 2^{l} r_{i}(2^{-l} \zeta ')\), \(\xi = 2^{-l} \zeta \). We therefore consider the operator

$$\begin{aligned} S_l^\alpha g(y) = \frac{1}{\Gamma (1-\alpha )} \int _{{\mathbb {R}}^d}\frac{e^{iy.\xi } \beta _0(\xi ) {\hat{g}}(\xi )}{((\xi _d - |\xi '| + r_{1,l}(\xi '))(\xi _d + |\xi '| + r_{2,l}(\xi ')))^\alpha _+} \hbox {d}\xi . \end{aligned}$$

With \(\text {supp} (\beta _0) \subseteq B(0,2) \backslash B(0,1/2)\), the subsets of \(\text {supp}(\beta _0)\) where the factors \(\xi _d -|\xi '| + r_{1,l}(\xi ')\) and \(\xi _d + |\xi '| + r_{2,l}(\xi ')\) vanish are separated. We write

$$\begin{aligned} \beta _0(\xi ) = \beta _0(\xi ) ( \gamma _0(\xi ) + \gamma _1(\xi ) + \gamma _2(\xi )) \end{aligned}$$

with \(\gamma _i \in C^\infty _c({\mathbb {R}}^d)\) and \(\text {supp}(\gamma _0) \subseteq \{ \xi \in {\mathbb {R}}^d : |\xi _d| \not \sim |\xi '| \}\), \(\text {supp} (\gamma _i) \subseteq \{ \xi \in {\mathbb {R}}^d : (-1)^{i+1} \xi _d \sim |\xi '| \}\) for \(i=1,2\). Correspondingly, we consider the operators \(S_{l,i}^\alpha \) with

$$\begin{aligned} \big ( S^\alpha _{l,i} h\big ) \widehat{(}\xi ) = \frac{1}{\Gamma (1-\alpha )} \frac{\gamma _i(\xi ) \beta _0(\xi ) {\hat{h}}(\xi )}{((\xi _d -|\xi '| + r_{1,l}(\xi '))(\xi _d+|\xi '| + r_{2,l}(\xi ')))^\alpha _+}. \end{aligned}$$

Clearly, \(S_{l,0}^\alpha \) is bounded from \(L^p({\mathbb {R}}^d)\) to \(L^q({\mathbb {R}}^d)\) for \(1 \le p \le q \le \infty \) as the kernel is a Schwartz function. We shall only estimate \(S_{l,1}^\alpha \), as \(S_{l,2}^\alpha \) is treated mutatis mutandis:

$$\begin{aligned} (S_{l,1}^\alpha h) \widehat{(}\xi ) = \frac{1}{\Gamma (1-\alpha )} \frac{\gamma _1(\xi ) \beta _0(\xi ) {\hat{h}}(\xi )}{((\xi _d - |\xi '| + r_{1,l}(\xi '))(\xi _d + |\xi '| + r_{2,l}(\xi ')))^\alpha _+}. \end{aligned}$$

With \(m(\xi ) = \xi _d + |\xi '| + r_{2,l}(\xi ') \gtrsim \xi _d\) for \(\xi \in \text {supp} (\gamma _1) \cap \text {supp} (\beta _0)\) and \(|\partial ^\alpha m(\xi )| \lesssim 1\), by Young’s inequality it is enough to consider \(\tilde{S}^\alpha _{l,1}\) given by

$$\begin{aligned} ( \tilde{S}^\alpha _{l,1} g) \widehat{(}\xi ) = \frac{1}{\Gamma (1-\alpha )} \frac{\gamma _1(\xi ) \beta _0(\xi ) {\hat{g}}(\xi )}{(\xi _d - |\xi '| + r_{1,l}(\xi '))^\alpha _+}. \end{aligned}$$

To this operator, we can apply the estimates of Theorem 1.4 for \(k=d-2\) since at each point of the perturbed cone \(d-2\) principal curvatures are bounded from below in modulus, uniformly with respect to \(l\). Moreover, the rescaled surfaces \(\{ \zeta _d = \mp |\zeta '| + r_{i,l}(\zeta ') \}\) approximate the cone in any \(C^N\)-norm. As a consequence, \(S_l^\alpha \) has the mapping properties described in Theorem 1.4 (i) for \(\frac{1}{2}<\alpha <\frac{d}{2}\) with a uniform mapping constant. From

$$\begin{aligned} {\mathcal {C}}_l^\alpha f(x) = 2^{2 \alpha l - dl} S_l^\alpha f_l(2^{-l} x) \end{aligned}$$

we conclude

$$\begin{aligned} \Vert {\mathcal {C}}_l^\alpha f \Vert _{L^q({\mathbb {R}}^d)}&= 2^{2 \alpha l - dl} \Vert (S_l^\alpha f_l)(2^{-l} \cdot ) \Vert _{L^q({\mathbb {R}}^d)} \\&\lesssim 2^{2 \alpha l - dl} 2^{\frac{dl}{q}} \Vert S_l^\alpha f_l \Vert _{L^q({\mathbb {R}}^d)} \\&\lesssim 2^{2 \alpha l -dl} 2^{\frac{dl}{q}} \Vert f_l \Vert _{L^p({\mathbb {R}}^d)} \\&= 2^{2\alpha l + \frac{dl}{q} -\frac{dl}{p}} \Vert f \Vert _{L^p({\mathbb {R}}^d)}. \end{aligned}$$

Given that the conditions on \((p,q)\) imply \(2 \alpha + \frac{d}{q} - \frac{d}{p}\le 0\), we obtain the desired estimates uniformly in \(l\in {\mathbb {N}}_0\). Hence, an application of Lemma 6.4 finishes the proof for \(p \ne 1\), \(q \ne \infty \) because of \(p<2<q\). If \(p>1\), \(q= \infty \), we can find \(q^*<\infty \) such that the conditions hold for \((p,q^*)\). This is true because \(\frac{1}{p}-\frac{1}{q^*} >\frac{2\alpha }{d}\) for large enough \(q^*\). Take \(\chi \) a cutoff function with \(\chi =1\) on \(\text {supp}(\beta )\). Then, by bounded frequencies and Young’s inequality,

$$\begin{aligned} \Vert {\mathcal {C}}^\alpha f \Vert _{L^{\infty }({\mathbb {R}}^d)}&\lesssim \Vert {\mathcal {C}}^\alpha f \Vert _{L^{q^*}({\mathbb {R}}^d)} \\&=\Vert {\mathcal {C}}^\alpha ( \chi (D)f) \Vert _{L^{q^*}({\mathbb {R}}^d)} \\&\lesssim \Vert \chi (D) f \Vert _{L^p({\mathbb {R}}^d)} \\&\lesssim \Vert f \Vert _{L^p({\mathbb {R}}^d)}. \end{aligned}$$

The case \(p=1,q<\infty \) is dual and thus proved as well. The proof is complete.

\(\square \)
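The power-of-two bookkeeping in the last chain of estimates can be checked symbolically; a small sketch in sympy, recording the scaling identities \(\Vert f_l\Vert _{L^p} = 2^{dl(1-1/p)}\Vert f\Vert _{L^p}\) and \(\Vert g(2^{-l}\cdot )\Vert _{L^q} = 2^{dl/q}\Vert g\Vert _{L^q}\):

```python
import sympy as sp

# Symbols for the rescaling bookkeeping (alpha, dimension d, dyadic level l, exponents p, q)
a, d, l, p, q = sp.symbols('alpha d l p q', positive=True)

# Collected powers of 2: factor from the multiplier, dilation in L^q, and ||f_l||_p
exponent = (2*a*l - d*l) + d*l/q + d*l*(1 - 1/p)
assert sp.simplify(exponent - (2*a*l + d*l/q - d*l/p)) == 0

# The resulting power of 2^l is nonpositive iff 1/p - 1/q >= 2*alpha/d
cond = sp.simplify((2*a + d/q - d/p) - d*(2*a/d - (1/p - 1/q)))
assert cond == 0
```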

6.3 Estimates for Approximate Solutions Close to the Singular Points

In this section we prove Proposition 2.4 by showing the corresponding \(L^p\)-\(L^q\)-bounds for

$$\begin{aligned} A_\delta f(x) = \int _{{\mathbb {R}}^3} \frac{e^{ix.\xi } \beta (\xi )}{{\tilde{p}}(\xi ) + i\delta } {\hat{f}}(\xi )\,\hbox {d}\xi , \end{aligned}$$

where \({\tilde{p}}\), after some translation and dilation, has the form

$$\begin{aligned} {\tilde{p}}(\xi ) = \xi _3^2 - \xi _1^2 - \xi _2^2 + g(\xi ) \text { with } |\partial ^\alpha g(\xi )| \le C_\alpha |\xi |^{3-|\alpha |} \end{aligned}$$
(53)

and \(\text {supp}(\beta ) \subset B(0,c)\) with \(c\) as in Proposition 6.2. Roughly speaking, this guarantees that the surface \(\{ {\tilde{p}}(\xi ) = 0 \}\) looks like a cone in B(0, c). Due to the singularity at the origin, this seems problematic, but it can be remedied by a Littlewood–Paley decomposition.

We proceed similarly to the above. Let \(\beta _0 \in C^\infty _c({\mathbb {R}}^3)\) with \(\text {supp }(\beta _0) \subseteq \{ c/2 \le |\xi | \le 2c \}\) and \(\beta _{\ell }(\xi ) = \beta _0(2^{\ell } \xi )\), \(\ell \ge 1\), such that

$$\begin{aligned} \sum _{\ell \ge 0} \beta _\ell \cdot \beta _{13}(\xi ) = \beta _{13}(\xi ) \quad (\xi \ne 0). \end{aligned}$$

We further set \(\tilde{\beta }_\ell (\xi ) = \beta _{\ell -1}(\xi ) + \beta _{\ell }(\xi ) + \beta _{\ell +1}(\xi )\). As in the previous section, we have the following lemma by Littlewood–Paley theory and Minkowski’s inequality:

Lemma 6.5

Let \(1<p \le 2 \le q < \infty \), \(r_1 \in \{1,p\}\), and \(r_2 \in \{q,\infty \}\). Suppose that

$$\begin{aligned} \left\| \int _{{\mathbb {R}}^3}\frac{{\hat{f}}(\xi ) \beta _\ell (\xi ) e^{ix.\xi }}{p(\xi ) + i \delta } d \xi \right\| _{L^{q,r_2}({\mathbb {R}}^3)} \le C \Vert \tilde{\beta }_\ell (D) f \Vert _{L^{p,r_1}({\mathbb {R}}^3)} \end{aligned}$$
(54)

holds for C independent of \(\ell \) and \(\delta \ne 0\). Then we have

$$\begin{aligned} \Vert A_\delta f \Vert _{L^{q,r_2}({\mathbb {R}}^3)} \lesssim \Vert f \Vert _{L^{p,r_1}({\mathbb {R}}^3)}. \end{aligned}$$

We prove (54) for \(\ell =0\) and see how the remaining estimates follow by rescaling as in the previous subsection. In the first step we localize to the singular set: Let

$$\begin{aligned} \beta _0 = \beta _0 ( \beta _{01} + \beta _{02}), \quad \beta _{0i} \in C^\infty _c({\mathbb {R}}^3) \end{aligned}$$

with

$$\begin{aligned} \text {supp }(\beta _{01})&\subseteq \{ \xi \in {\mathbb {R}}^3 : \, |\xi _3| \sim |\xi '| \}, \\ \text {supp }(\beta _{02})&\subseteq \{ \xi \in {\mathbb {R}}^3 : \, |\xi _3| \ll |\xi '| \text { or } |\xi '| \ll |\xi _3| \}. \end{aligned}$$

We start by noting that on the support of \(\beta _0 \beta _{02}\) we have \(|p(\xi )| \gtrsim c^2\), so that the uniform bound

$$\begin{aligned} \left\| \int _{{\mathbb {R}}^3}\frac{{\hat{f}}(\xi ) e^{ix.\xi } \beta _0 \beta _{02} }{p(\xi ) + i \delta } \hbox {d}\xi \right\| _{L^q({\mathbb {R}}^3)} \lesssim \Vert f \Vert _{L^p({\mathbb {R}}^3)} \end{aligned}$$

is immediate from Young’s inequality as the kernel is a Schwartz function.

We turn to the estimate of the contribution close to the vanishing set of p: Let \(\chi (\xi ) = \beta _0(\xi ) \beta _{01}(\xi )\). We follow the arguments of Sect. 5.2: We decompose

$$\begin{aligned} \frac{\chi (\xi )}{p(\xi ) + i \delta } = {\mathfrak {R}}(\xi ) + i {\mathfrak {I}}(\xi ). \end{aligned}$$

These multipliers will be estimated by Fourier restriction–extension estimates for the level sets of p given by Theorem 1.4 for \(k=d-2=1\), \(\alpha =1\). To carry out the program of Sect. 5.2, we need to change to generalized polar coordinates \(\xi = \xi (p,q)\) in \(\text { supp }(\chi )\). We can suppose that this is possible as \(|\nabla p(\xi )| \gtrsim c > 0\) for \(p(\xi ) = 0\), \(|\xi | \sim c\), after making the support of \(\beta _{01}\) closer to the characteristic set, if necessary. Furthermore, with graph parametrizations \((\xi ',\psi (\xi '))\) of \(\{ \xi \in \text {supp}(\chi ) \, : \, p(\xi ) = t\} \) uniform in \(t \in (-t_0,t_0)\), \(t_0\) chosen small enough, Theorem 1.4 yields uniform bounds. Also note that Lemma 5.2 applies with \(k=1\). This finishes the proof of (49) for \(\ell =0\). We show the bounds for \(\ell \ge 1\) by rescaling. A change of variables gives

$$\begin{aligned} \begin{aligned}&\int _{{\mathbb {R}}^3}\frac{e^{ix.\xi } \beta _\ell (\xi ) {\hat{f}}(\xi )}{p(\xi ) + i \delta } \hbox {d}\xi \\&\quad = 2^{-3 \ell } \int _{{\mathbb {R}}^3}\frac{e^{i2^{-\ell } x.\zeta } \beta _0(\zeta ) {\hat{f}}(2^{-\ell } \zeta )}{2^{-2\ell } \zeta _3^2 - 2^{-2\ell } \zeta _1^2 - 2^{-2\ell } \zeta _2^2 + g(2^{-\ell } \zeta ) + i\delta } \hbox {d}\zeta \qquad (\zeta = 2^\ell \xi ) \\&\quad = 2^{-\ell } \int _{{\mathbb {R}}^3}\frac{e^{i 2^{-\ell } x.\zeta } \beta _0(\zeta ) {\hat{f}}_\ell (\zeta )}{\zeta _3^2 - \zeta _1^2 -\zeta _2^2 +2^{2\ell } g(2^{-\ell } \zeta ) + i 2^{2\ell } \delta } \hbox {d}\zeta \qquad ({\hat{f}}_\ell (\zeta ) = {\hat{f}}(2^{-\ell } \zeta )). \end{aligned} \end{aligned}$$
(55)

Let \(p_\ell (\zeta ) = \zeta _3^2 - \zeta _1^2 -\zeta _2^2 +2^{2\ell } g(2^{-\ell } \zeta )\), \(\delta _{\ell } = 2^{2 \ell } \delta \). Recall that \(| \partial ^\alpha g(\xi ) | \lesssim |\xi |^{3-|\alpha |}\), which previously allowed to carry out the proof for \(\ell =0\) for c chosen small enough depending on finitely many \(C_\alpha \) in (53). Furthermore, we find

$$\begin{aligned} \left\| \int _{{\mathbb {R}}^3}\frac{e^{ix.\xi } \beta _0(\zeta ) {\hat{h}}(\zeta )}{p_\ell (\zeta ) + i \delta _\ell } \hbox {d}\zeta \Vert _{L^q({\mathbb {R}}^3)} \lesssim \right\| h \Vert _{L^p({\mathbb {R}}^3)} \end{aligned}$$
(56)

with implicit constant independent of \(\ell \ge 1\) choosing c small enough depending only on finitely many \(C_\alpha \). Hence, taking (55) and (56) together, gives

$$\begin{aligned} \begin{aligned} \left\| \int _{{\mathbb {R}}^3}\frac{e^{ix.\xi } \beta _\ell (\xi ) {\hat{f}}(\xi )}{p(\xi ) + i \delta } \hbox {d}\xi \right\| _{L^q({\mathbb {R}}^3)}&\lesssim 2^{\frac{3\ell }{q} -\ell } \left\| \int _{{\mathbb {R}}^3}\frac{e^{i x.\zeta } \beta _0(\zeta ) {\hat{f}}_\ell (\zeta )}{p_\ell (\zeta ) + i \delta _\ell } \hbox {d}\zeta \right\| _{L^q({\mathbb {R}}^3)} \\&\lesssim 2^{\frac{3\ell }{q} -\ell } \Vert f_\ell \Vert _{L^p({\mathbb {R}}^3)} \\&\lesssim 2^{2\ell - \frac{3\ell }{p} + \frac{3\ell }{q}} \Vert f \Vert _{L^p({\mathbb {R}}^3)}. \end{aligned} \end{aligned}$$

Hence, Lemma 6.5 applies for \(p \ne 1\) and \(q \ne \infty \) because for our choice of p and q we find \(\frac{2}{3} \le \frac{1}{p} - \frac{1}{q}\). For \(q = \infty \) or \(p=1\), we use that frequencies are compactly supported to reduce to \(p \ne 1\) and \(q \ne \infty \) like at the end of the proof of Proposition 6.3. The proof of Proposition 2.4 is complete. \(\square \)