
Resolution Analysis of Inverting the Generalized N-Dimensional Radon Transform in \({\mathbb {R}}^n\) from Discrete Data

Published in: Journal of Fourier Analysis and Applications

Abstract

Let \({\mathcal {R}}\) denote the generalized Radon transform, which integrates over a family of N-dimensional smooth submanifolds \({\mathcal {S}}_{{{\tilde{y}}}}\subset {\mathcal {U}}\), \(1\le N\le n-1\), where an open set \({\mathcal {U}}\subset {\mathbb {R}}^n\) is the image domain. The submanifolds are parametrized by points \({{\tilde{y}}}\in {{\tilde{{\mathcal {V}}}}}\), where an open set \({{\tilde{{\mathcal {V}}}}}\subset {\mathbb {R}}^n\) is the data domain. We assume that the canonical relation \({{\tilde{C}}}\) from \(T^*{\mathcal {U}}\) to \(T^*{{\tilde{{\mathcal {V}}}}}\) of \({\mathcal {R}}\) is a local canonical graph (when \({\mathcal {R}}\) is viewed as a Fourier integral operator). The continuous data are denoted by g, and the reconstruction is \({\check{f}}={\mathcal {R}}^*{\mathcal {B}}g\). Here \({\mathcal {R}}^*\) is a weighted adjoint of \({\mathcal {R}}\), \({\mathcal {B}}\) is a pseudo-differential operator, and g is a conormal distribution. The discrete data consist of the values of g on a regular lattice with step size \(O(\epsilon )\). Let \({\mathcal {S}}\) denote the singular support of \({\check{f}}\), and \({\check{f}}_\epsilon ={\mathcal {R}}^*{\mathcal {B}}g_\epsilon \) be the reconstruction from interpolated discrete data \(g_\epsilon ({{\tilde{y}}})\). Pick a point \(x_0\in {\mathcal {S}}\), i.e., a point at which the singularity of \({\check{f}}\) is visible from the data. The main result of the paper is the computation of the limit

$$\begin{aligned} \text {DTB}({\check{x}}):=\lim _{\epsilon \rightarrow 0}\epsilon ^\kappa {\check{f}}_\epsilon (x_0+\epsilon {\check{x}}). \end{aligned}$$

Here \(\kappa \ge 0\) is selected based on the strength of the reconstructed singularity, and \({\check{x}}\) is confined to a bounded set. The limiting function \(\text {DTB}({\check{x}})\), which we call the discrete transition behavior, contains full information about the resolution of reconstruction.
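As a purely illustrative toy model (one-dimensional sampling with linear interpolation, not the paper's generalized Radon setup), one can see how such a scaled limit arises: near a jump, the \(\epsilon \)-scaled profile of the interpolated data is independent of \(\epsilon \) and is determined entirely by the interpolation kernel.

```python
import numpy as np

phi = lambda v: np.clip(1 - np.abs(v), 0, None)  # linear interpolation kernel

def f_eps(x, eps):
    # interpolated discrete data of a Heaviside jump at x0 = 0, lattice step eps
    j = np.arange(-100, 101)
    samples = (j * eps >= 0).astype(float)
    return sum(s * phi((x - jj * eps) / eps) for jj, s in zip(j, samples))

# profiles in the scaled variable x_check = (x - x0)/eps, for two values of eps
x_check = np.linspace(-3, 3, 61)
p1 = np.array([f_eps(xc * 0.1, 0.1) for xc in x_check])    # eps = 0.1
p2 = np.array([f_eps(xc * 0.01, 0.01) for xc in x_check])  # eps = 0.01

# the scaled profiles coincide: the limit (here with kappa = 0) exists
assert np.allclose(p1, p2)
```

In this toy case the limiting profile is the piecewise-linear ramp \(\min (\max ({\check{x}}+1,0),1)\), i.e., the transition behavior is exactly the ramp produced by the kernel at the lattice scale.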


References

  1. Abels, H.: Pseudodifferential and Singular Integral Operators: An Introduction with Applications. De Gruyter, Berlin/Boston (2012)

  2. Airapetyan, R.G., Ramm, A.G.: Singularities of the Radon transform. Appl. Anal. 79, 351–371 (2001)

  3. Andersson, F., De Hoop, M.V., Wendt, H.: Multiscale discrete approximation of Fourier integral operators. Multiscale Model. Simul. 10(1), 111–145 (2012)

  4. Candès, E., Demanet, L., Ying, L.: Fast computation of Fourier integral operators. SIAM J. Sci. Comput. 29, 2464–2493 (2007)

  5. Candès, E., Demanet, L., Ying, L.: A fast butterfly algorithm for the computation of Fourier integral operators. Multiscale Model. Simul. 7, 1727–1750 (2009)

  6. Duistermaat, J.J., Hörmander, L.: Fourier integral operators. II. Acta Math. 128, 183–269 (1972)

  7. Faridani, A.: Sampling theory and parallel-beam tomography. In: Benedetto, J.J. (ed.) Sampling, Wavelets, and Tomography. Applied and Numerical Harmonic Analysis, vol. 63, pp. 225–254. Birkhäuser, Boston (2004)

  8. Faridani, A., Buglione, K., Huabsomboon, P., Iancu, O., McGrath, J.: Introduction to local tomography. In: Radon Transforms and Tomography. Contemp. Math. 278, 29–47 (2001)

  9. Gelfand, I.M., Shilov, G.E.: Generalized Functions, Volume 1: Properties and Operations. Academic Press, New York (1964)

  10. Greenleaf, A., Seeger, A.: Oscillatory and Fourier integral operators with degenerate canonical relations. Publ. Mat. 48, 93–141 (2002)

  11. Guillemin, V., Sternberg, S.: Geometric Asymptotics. Mathematical Surveys, vol. 14. American Mathematical Society, Providence (1977)

  12. Hörmander, L.: Fourier integral operators. I. Acta Math. 127, 79–183 (1971)

  13. Hörmander, L.: The Analysis of Linear Partial Differential Operators III: Pseudo-Differential Operators. Springer-Verlag, Berlin (2007)

  14. Hörmander, L.: The Analysis of Linear Partial Differential Operators IV: Fourier Integral Operators. Springer-Verlag, Berlin (2009)

  15. Kalender, W.A.: Computed Tomography: Fundamentals, System Technology, Image Quality, Applications, 3rd edn. Publicis, Erlangen (2011)

  16. Katsevich, A.: Asymptotics of pseudodifferential operators acting on functions with corner singularities. Appl. Anal. 72, 229–252 (1999)

  17. Katsevich, A.: A local approach to resolution analysis of image reconstruction in tomography. SIAM J. Appl. Math. 77(5), 1706–1732 (2017)

  18. Katsevich, A.: Analysis of reconstruction from discrete Radon transform data in \({\mathbb{R}}^3\) when the function has jump discontinuities. SIAM J. Appl. Math. 79, 1607–1626 (2019)

  19. Katsevich, A.: Analysis of resolution of tomographic-type reconstruction from discrete data for a class of distributions. Inverse Probl. 36(12), 124008 (2020)

  20. Katsevich, A.: Resolution analysis of inverting the generalized Radon transform from discrete data in \({\mathbb{R}}^3\). SIAM J. Math. Anal. 52, 3990–4021 (2020)

  21. Kuipers, L., Niederreiter, H.: Uniform Distribution of Sequences. Dover Publications, Mineola (2006)

  22. Monard, F., Stefanov, P.: Sampling the X-ray transform on simple surfaces. arXiv:2110.05761 (2021)

  23. Natterer, F.: Sampling in fan beam tomography. SIAM J. Appl. Math. 53, 358–380 (1993)

  24. Orhan, K. (ed.): Micro-computed Tomography (micro-CT) in Medicine and Engineering. Springer Nature, Switzerland (2020)

  25. Palamodov, V.P.: Localization of harmonic decomposition of the Radon transform. Inverse Probl. 11, 1025–1030 (1995)

  26. Quinto, E.T.: The dependence of the generalized Radon transforms on defining measures. Trans. Am. Math. Soc. 257, 331–346 (1980)

  27. Ramm, A., Katsevich, A.: The Radon Transform and Local Tomography. CRC Press, Boca Raton (1996)

  28. Ramm, A.G., Zaslavsky, A.I.: Singularities of the Radon transform. Bull. Am. Math. Soc. 25, 109–115 (1993)

  29. Salo, M.: Applications of microlocal analysis in inverse problems. Mathematics 8, 1184 (2020)

  30. Sawano, Y.: Theory of Besov Spaces. Springer, Singapore (2018)

  31. Stefanov, P.: Semiclassical sampling and discretization of certain linear inverse problems. SIAM J. Math. Anal. 52, 5554–5597 (2020)

  32. Stefanov, P.: The Radon transform with finitely many angles. arXiv:2208.05936 (2022)

  33. Trèves, F.: Introduction to Pseudodifferential and Fourier Integral Operators, Volume 2: Fourier Integral Operators. Plenum, New York (1980)

  34. Tuy, H.K.: An inversion formula for cone-beam reconstruction. SIAM J. Appl. Math. 43, 546–552 (1983)

  35. Yang, H.: Oscillatory Data Analysis and Fast Algorithms for Integral Operators. PhD thesis, Stanford University (2015)


Author information

Correspondence to Alexander Katsevich.

Additional information

Communicated by Todd Quinto


This work was supported in part by NSF Grant DMS-1906361.

Appendices

Appendix A. Proof of Lemma 3.5

We begin by constructing an orthogonal matrix \({\check{U}}\) such that the intermediate coordinates \({\check{y}}={\check{U}}^T({{\tilde{y}}}-{{\tilde{y}}}_0)\) and the intermediate function \({{\check{\Phi }}}(t,{\check{y}})={{\tilde{\Phi }}}(t,\check{U}{\check{y}}+{{\tilde{y}}}_0)\) satisfy (3.10). Here \(\check{y}=(z^{(1)},y^{(2)})^T\). The final coordinates y and the intermediate coordinates \({\check{y}}\) will have the same \(y^{(2)}\) component; this is why we wrote \(y^{(2)}\) in the definition of \({\check{y}}\).

Let \(V_1\Sigma V_2^T\) be the SVD of the Jacobian matrix \({{\tilde{\Phi }}}_{{{\tilde{y}}}}^{(1)}\). To remind the reader, \({{\tilde{\Phi }}}_{{{\tilde{y}}}}^{(1)}\) stands for the matrix of partial derivatives \(\partial _{{{\tilde{y}}}}{{\tilde{\Phi }}}^{(1)}(x^{(2)},{{\tilde{y}}})\) evaluated at \((x^{(2)}_0,{{\tilde{y}}}_0)\). Here \(V_1\in O(n-N)\) and \(V_2\in O(n)\) are orthogonal matrices, and \(\Sigma \) is a rectangular \((n-N)\times n\) matrix with \(\Sigma _{ij}=0\), \(i\not =j\), and \(\Sigma _{ii}>0\), \(1\le i \le n-N\). The latter property follows from Assumption 3.1(G2) and \({{\tilde{\Phi }}}^{(1)}_{x^{(2)}}=0\), which yield that \(\text {rank}{{\tilde{\Phi }}}^{(1)}_{{{\tilde{y}}}}=n-N\). Then we can take \({\check{U}}=V_2\), so \({\check{y}}=V_2^T({{\tilde{y}}}-{{\tilde{y}}}_0)\). Indeed,

$$\begin{aligned} \frac{\partial \check{\Phi }^{(1)}}{\partial y^{(2)}}=\frac{\partial {{\tilde{\Phi }}}^{(1)}}{\partial {{\tilde{y}}}}\frac{\partial {{\tilde{y}}}}{\partial y^{(2)}} =V_1\Sigma V_2^T V_2^{{(2)}}=V_1\Sigma \begin{pmatrix} 0\\ I_N\end{pmatrix}=0. \end{aligned}$$
(A.1)

Here \(V_2^{(2)}\) is the \(n\times N\) matrix consisting of the last N columns of \(V_2\). Likewise,

$$\begin{aligned} \frac{\partial \check{\Phi }^{(1)}}{\partial z^{\text {(1)}}}=\frac{\partial {{\tilde{\Phi }}}^{(1)}}{\partial {{\tilde{y}}}}\frac{\partial {{\tilde{y}}}}{\partial z^{\text {(1)}}} =V_1\Sigma V_2^T V_2^{{(1)}}=V_1\Sigma \begin{pmatrix} I_{n-N}\\ 0\end{pmatrix},\ \det \frac{\partial {{\check{\Phi }}}^{(1)}}{\partial z^{\text {(1)}}}\not = 0. \end{aligned}$$
(A.2)

The final coordinates y and the final orthogonal matrix U can be found as follows. As was already mentioned, we keep the coordinates \(y^{(2)}\) the same and rotate the \(z^{\text {(1)}}\) coordinates: \(z^{\text {(1)}}\rightarrow y^{(1)}\). Hence (A.1), (A.2) still hold with \({{\check{\Phi }}}\) and \(z^{\text {(1)}}\) replaced by \(\Phi \) and \(y^{(1)}\), respectively, and the new y coordinates satisfy (3.10).

By (3.5), the rotation \(z^{\text {(1)}}\rightarrow y^{(1)}\) should be selected so that

$$\begin{aligned} {\partial \Phi _1}/{\partial y_j}=({\partial \check{\Phi }_1}/{\partial z^{\text {(1)}}})({\partial z^{\text {(1)}}}/{\partial y_j})=0,\ j=2,\dots ,n-N. \end{aligned}$$
(A.3)

If \(V\in O(n-N)\) is such that \(z^{\text {(1)}}=Vy^{(1)}\), then (A.3) implies that the second through the last columns of V form an orthonormal basis of the subspace of \({\mathbb {R}}^{n-N}\), which consists of vectors orthogonal to \(\partial {{\check{\Phi }}}_1/\partial z^{\text {(1)}}\). It is clear that such a basis can be found. Then the matrix U becomes

$$\begin{aligned} U={\check{U}} \begin{pmatrix} V &{} 0\\ 0 &{} I_{N} \end{pmatrix}. \end{aligned}$$
(A.4)

Our construction ensures that all components of the vector \(\partial _y\Phi _1\), except, possibly, the first one, are zero. By (A.2), the first component is not zero. Multiplying \(\Psi (y)\) by a constant, we can make sure that \(\partial _{y_1}(\Psi \circ \Phi )=1\).
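The construction above is easy to sanity-check numerically. In the sketch below (illustrative only: a random full-rank matrix stands in for the Jacobian \({{\tilde{\Phi }}}^{(1)}_{{{\tilde{y}}}}\), and a random vector stands in for \(\partial {{\check{\Phi }}}_1/\partial z^{(1)}\)), \({\check{U}}=V_2\) is taken from the SVD, and the analogues of (A.1), (A.2), (A.4) are verified.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 5, 2  # illustrative dimensions, 1 <= N <= n - 1

# Random full-rank stand-in for the (n-N) x n Jacobian of Phi^(1)
J = rng.standard_normal((n - N, n))
V1, sigma, V2T = np.linalg.svd(J)  # J = V1 @ Sigma @ V2.T
V2 = V2T.T

# (A.1): the last N columns of V2 are annihilated by J
assert np.allclose(J @ V2[:, n - N:], 0)
# (A.2): the first n - N columns give an invertible (n-N) x (n-N) block
assert abs(np.linalg.det(J @ V2[:, :n - N])) > 1e-10

# (A.3): build V in O(n-N) whose columns 2..n-N are orthogonal to a
# stand-in gradient; QR orthonormalizes [grad, e_2, ..., e_{n-N}]
grad = rng.standard_normal(n - N)
V, _ = np.linalg.qr(np.column_stack([grad, np.eye(n - N)[:, 1:]]))
rotated = grad @ V
assert np.allclose(rotated[1:], 0) and abs(rotated[0]) > 0

# (A.4): the final matrix U is orthogonal
U = V2 @ np.block([[V, np.zeros((n - N, N))],
                   [np.zeros((N, n - N)), np.eye(N)]])
assert np.allclose(U.T @ U, np.eye(n))
```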

Appendix B. Behavior of \({\mathcal {R}}f\) Near \(\Gamma \)

Suppose \(f\in {\mathcal {E}}'({\mathcal {U}})\) is given by

$$\begin{aligned} f(x)=\frac{1}{2\pi }\int {{\tilde{f}}}(x,\lambda )e^{-i\lambda \Psi (x)}\text {d}\lambda , \end{aligned}$$
(B.1)

where \(\Psi \) is the same as in Sects. 3, 6, and \({{\tilde{f}}}\) satisfies

$$\begin{aligned} \begin{aligned}&{{\tilde{f}}}(x,\lambda )= \tilde{f}^+(x)\lambda _+^{-(s_0+1)+\frac{N}{2}} +{{\tilde{f}}}^-(x)\lambda _-^{-(s_0+1)+\frac{N}{2}}+{{\tilde{R}}}(x,\lambda ), \forall x\in {\mathcal {U}},|\lambda |\ge 1;\\&{{\tilde{f}}}(x,\lambda )\equiv 0\ \forall x\in {\mathcal {U}}\setminus K\text { for some compact }K\subset {\mathcal {U}};\\&{|\partial _x^{m} {{\tilde{f}}}(x,\lambda )| \le c_m|\lambda |^a,\,a>-1,\ \forall x\in {\mathcal {U}},0<|\lambda |\le 1,m\in {\mathbb {N}}_0^n;}\\&{{\tilde{R}}}\in S^{-(s_1+1)+\frac{N}{2}}({\mathcal {U}}\times {\mathbb {R}}),\, \tilde{f}^\pm \in C_0^{\infty }({\mathcal {U}}),\, {(N/2)-1}< s_0<s_1, \end{aligned} \end{aligned}$$
(B.2)

for some \(a,c_m,s_0,s_1\), \({{\tilde{R}}}\), and \({{\tilde{f}}}^\pm \). If \(g={\mathcal {R}}f\) for a sufficiently regular f, then g should have more regularity (\(s_0>(N/2)-1\)) than in the general case (4.15) (\(s_0>0\)).

From (2.3), after changing variables and the defining function (\(t\rightarrow x^{(2)}\), \({{\tilde{y}}}\rightarrow y\), \({{\tilde{\Phi }}}\rightarrow \Phi \)), the GRT of f is given by

$$\begin{aligned} \begin{aligned} {\mathcal {R}}f(y)&=\int _{{\mathbb {R}}^N} f(x) b(x,y)(\det G^{{\mathcal {S}}}(x^{(2)},y))^{1/2}\text {d}x^{(2)}\\&=\frac{1}{2\pi }\int _{\mathbb {R}}\int _{{\mathbb {R}}^N} \tilde{f}(x,\lambda )e^{-i\lambda \Psi (x)}b(x,y)(\det G^{{\mathcal {S}}}(x^{(2)},y))^{1/2}\text {d}x^{(2)}\text {d}\lambda , \\ \end{aligned}\end{aligned}$$
(B.3)

where \(x=\Phi (x^{(2)},y)\).

Consider the second equation for \({\mathcal {T}}_{\mathcal {S}}\) in (6.4) and solve it for \(x^{(2)}\). Since \(\det (\Psi \circ \Phi )_{x^{(2)}x^{(2)}} \not =0\), the solution \(x^{(2)}_*=x^{(2)}_*(y)\) is a smooth function. The function \(x^{(2)}_*(y)\) here is different from \(x^{(2)}(y^\perp )\) in the paragraph following (3.16), because now we solve only the second of the two equations that define \({\mathcal {T}}_{\mathcal {S}}\). The asymptotics as \(\lambda \rightarrow \infty \) of the integral with respect to \(x^{(2)}\) in (B.3) is computed with the help of the stationary phase method [33, Chapter VIII, Eqs. (2.14)–(2.20)]

$$\begin{aligned} \begin{aligned}&\int {{\tilde{f}}}(x,\lambda )e^{-i\lambda \Psi (x)}b(x,y)(\det G^{{\mathcal {S}}}(x^{(2)},y))^{1/2}\text {d}x^{(2)}\\&\quad =\left( {{\tilde{f}}}(x_*,\lambda )b(x_*,y)\left| \frac{\det G^{{\mathcal {S}}}(x^{(2)}_*,y)}{\det (\Psi \circ \Phi )_{x^{(2)}x^{(2)}}(x^{(2)}_*,y)}\right| ^{1/2} \left( \frac{2\pi }{|\lambda |}\right) ^{N/2}+{{\tilde{R}}}(y,\lambda )\right) \\&\qquad \qquad \times e^{-i\frac{\pi }{4}\text {sgn}(\lambda (\Psi \circ \Phi )_{x^{(2)}x^{(2)}}(x^{(2)}_*,y))} e^{-i\lambda \Psi (x_*)},\ |\lambda |\ge 1,\ {{\tilde{R}}}\in S^{-(s_0+2)}({\mathcal {V}}\times {\mathbb {R}}), \end{aligned} \nonumber \\ \end{aligned}$$
(B.4)

for some \({{\tilde{R}}}\). Here \(x_*=\Phi (x^{(2)}_*(y),y)\), and \(\text {sgn}\,M\) for a symmetric matrix M denotes the signature of M, i.e. the number of positive eigenvalues of M minus the number of negative eigenvalues.
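The leading-order stationary phase formula used in (B.4) can be checked numerically in the simplest case \(N=1\). The amplitude and phase below are arbitrary smooth stand-ins, not the functions from (B.3).

```python
import numpy as np

f = lambda x: np.exp(-x**2)  # smooth amplitude (stand-in)
psi = np.cos                 # phase; stationary point x* = 0, psi''(0) = -1
lam = 200.0

# brute-force evaluation of int f(x) e^{-i lam psi(x)} dx
x = np.linspace(-8.0, 8.0, 400_001)
dx = x[1] - x[0]
exact = np.sum(f(x) * np.exp(-1j * lam * psi(x))) * dx

# leading term of (B.4) with N = 1:
# f(x*) (2 pi/|lam|)^{1/2} |psi''(x*)|^{-1/2}
#   * e^{-i (pi/4) sgn(lam psi''(x*))} * e^{-i lam psi(x*)}
x_star, psi2 = 0.0, -1.0
approx = (f(x_star) * np.sqrt(2 * np.pi / (lam * abs(psi2)))
          * np.exp(-1j * (np.pi / 4) * np.sign(lam * psi2))
          * np.exp(-1j * lam * psi(x_star)))

rel_err = abs(exact - approx) / abs(approx)
assert rel_err < 0.05  # agreement up to the O(1/lam) correction
```

The residual discrepancy is of relative size \(O(1/\lambda )\), consistent with the remainder \({{\tilde{R}}}\in S^{-(s_0+2)}\) being one order lower than the leading term.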

Introduce the function

$$\begin{aligned} P_1(y):=(\Psi \circ \Phi )(x^{(2)}_*(y),y). \end{aligned}$$
(B.5)

Then \({\mathcal {R}}f\) can be written as

$$\begin{aligned} {\mathcal {R}}f(y) =\frac{1}{2\pi }\int {{\tilde{\upsilon }}}(y,\lambda )e^{-i\lambda P_1(y)}\text {d}\lambda , \end{aligned}$$
(B.6)

and, with the same \(a,s_0,s_1\) as in (B.2) and some \(c_m\), \({{\tilde{R}}}\),

$$\begin{aligned} \begin{aligned}&{{\tilde{\upsilon }}}(y,\lambda )={{\tilde{\upsilon }}}^+(y)\lambda _+^{-(s_0+1)} +{{\tilde{\upsilon }}}^-(y)\lambda _-^{-(s_0+1)}+{{\tilde{R}}}(y,\lambda ),\ |\lambda |\ge 1;\\&{|\partial _y^{m} {{\tilde{\upsilon }}}(y,\lambda )| \le c_m|\lambda |^a,\ \forall y\in {\mathcal {V}},0<|\lambda |\le 1,m\in {\mathbb {N}}_0^n;}\\&{{\tilde{R}}}\in S^{-\min (s_1+1,s_0+2)}({\mathcal {V}}\times {\mathbb {R}});\\&{{\tilde{\upsilon }}}^\pm (y)=(2\pi )^{N/2}{{\tilde{f}}}^\pm (x_*)b(x_*,y)\left| \frac{\det G^{{\mathcal {S}}}(x^{(2)}_*,y)}{\det (\Psi \circ \Phi )_{x^{(2)}x^{(2)}}(x^{(2)}_*,y)}\right| ^{1/2}\times e^{\pm i\frac{\pi }{4} N}\in C^\infty ({\mathcal {V}}); \end{aligned} \end{aligned}$$
(B.7)

where \(x_*=\Phi (x^{(2)}_*(y),y)\), and we have used that \((\Psi \circ \Phi )_{x^{(2)}x^{(2)}}\) is negative definite.

By construction, \(P_1(y)=0\) is another equation for \({\mathcal {T}}_{\mathcal {S}}\). Since \((\Psi \circ \Phi )_{x^{(2)}}=0\), equation (6.6) does not determine \(x^{(2)}_*\). Therefore, to first order, \(x^{(2)}_*\) is determined by solving (6.7):

$$\begin{aligned} x^{(2)}_*(y)=-(\Psi \circ \Phi )_{x^{(2)}x^{(2)}}^{-1}(\Psi \circ \Phi )_{x^{(2)}y}y+O(|y|^2), \end{aligned}$$
(B.8)

and

$$\begin{aligned} P_1(y)=\text {d}_y(\Psi \circ \Phi )y +O(|y|^2)=\Theta _0\cdot y+O(|y|^2). \end{aligned}$$
(B.9)

Remark B.1

We are now in a position to discuss the implications of Assumption 4.5(g4). Suppose \(g={\mathcal {R}}f\) and \(s_0\in {\mathbb {N}}\). From (4.17) and (B.7),

$$\begin{aligned} \frac{{{\tilde{\upsilon }}}^-(y)}{{{\tilde{\upsilon }}}^+(y)}=(-1)^{s_0+1},\ \frac{{{\tilde{f}}}^-(x)}{{{\tilde{f}}}^+(x)}=e(-2a),\ a:=s_0-(N/2)+1,\forall y\in \Gamma ,x\in {\mathcal {S}}. \nonumber \\ \end{aligned}$$
(B.10)

Here we have used that \(x_*(y)\in {\mathcal {S}}\) if \(y\in \Gamma \). Recall that the function e(a) is defined in (4.18).

Suppose first that N is odd, i.e., \(a\not \in {\mathbb {N}}\). Substituting (B.10) into (B.2) gives to leading order:

$$\begin{aligned} \begin{aligned} {{\tilde{f}}}(x,\lambda )\sim&\tilde{f}^+(x)\lambda _+^{-a} +{{\tilde{f}}}^-(x)\lambda _-^{-a} ={{\tilde{f}}}^+(x)(\lambda _+^{-a}+e(-2a)\lambda _-^{-a})\\ =&{{\tilde{f}}}^+(x)(\lambda +i0)^{-a},\ \lambda \rightarrow \infty ,\ \forall x\in {\mathcal {S}}. \end{aligned} \end{aligned}$$
(B.11)

Using (B.1) and computing the inverse Fourier transform, we approximate f to leading order:

$$\begin{aligned} f(x+\Delta x)\sim c{{\tilde{f}}}^+(x)(\text {d}\Psi (x) \Delta x)_+^{a-1},\ |\Delta x|\rightarrow 0,\ \forall x\in {\mathcal {S}}, \end{aligned}$$
(B.12)

for some \(c\not =0\). Thus, if N is odd, Assumption 4.5(g4) means that, to leading order, the nonsmooth part of f is supported on the positive side of \({\mathcal {S}}\).

Suppose next that N is even, i.e. \(a\in {\mathbb {N}}\). Substituting (B.10) into (B.2) gives:

$$\begin{aligned} \begin{aligned}&{{\tilde{f}}}(x,\lambda )\sim {{\tilde{f}}}^+(x)\lambda ^{-a},\ \lambda \rightarrow \infty ,\ \forall x\in {\mathcal {S}},\\&f(x+\Delta x)\sim c{{\tilde{f}}}^+(x)h^{a-1}\text {sgn}\,h,\ h:=\text {d}\Psi (x) \Delta x,|\Delta x|\rightarrow 0,\ \forall x\in {\mathcal {S}}, \end{aligned} \end{aligned}$$
(B.13)

for some \(c\not =0\). Thus, if N is even, Assumption 4.5(g4) means that, to leading order, the nonsmooth part of f is symmetric about \({\mathcal {S}}\): \(f(x+\Delta x)\sim f(x-\Delta x)\) if a is even, and \(f(x+\Delta x)\sim -f(x-\Delta x)\) if a is odd.

Remark B.2

The behavior of f near \({\mathcal {S}}\) can be obtained in the same way even if \(s_0\not \in {\mathbb {N}}\) and Assumption 4.5(g4) does not apply. Taking the inverse Fourier transform of the first (asymptotic) equality in (B.11) using [9, p. 360, Eqs. 25, 26] shows that

$$\begin{aligned} \begin{aligned}&f(x+\Delta x)\sim c_+{{\tilde{f}}}^+(x)(h-i0)^{a-1}+c_-{{\tilde{f}}}^-(x)(h+i0)^{a-1},\\&h:=\text {d}\Psi (x) \Delta x,\ |\Delta x|\rightarrow 0,\ \forall x\in {\mathcal {S}},\ a\not \in {\mathbb {N}}, \end{aligned} \end{aligned}$$
(B.14)

for some constants \(c_\pm \not =0\). If \(a\in {\mathbb {N}}\), then the leading singularity of f may contain logarithmic terms [9, Chapter II, Sect. 2.4, Eqs. (14) and (20)]. Computing the corresponding explicit expressions is fairly straightforward and is outside the scope of the paper.

Appendix C. Proofs of Lemmas 7.1–7.4

C.1. Proof of Lemma 7.1

The following expression for g (modulo a \(C^\infty ({\mathcal {V}})\) function) is obtained directly from (4.12), (4.15):

$$\begin{aligned} \begin{aligned} g(y)-&G(y,P(y))\in C^\infty ({\mathcal {V}}),\\ G(y,p):=&{\mathcal {F}}_{1d}^{-1}\left( {{\tilde{\upsilon }}}^+(y)\lambda _+^{-(s_0+1)}+{{\tilde{\upsilon }}}^-(y)\lambda _-^{-(s_0+1)}+\tilde{R}(y,\lambda )\right) (p), \end{aligned} \end{aligned}$$
(C.1)

where \({{\tilde{R}}}\in S^{-(s_1+1)}({\mathcal {V}}\times {\mathbb {R}})\), and \({\mathcal {F}}_{1d}^{-1}\) is the one-dimensional inverse Fourier transform acting with respect to \(\lambda \). The inverse transforms \({\mathcal {F}}_{1d}^{-1}(\lambda _\pm ^{-(s_0+1)})\) are understood in the sense of distributions [9, Chapter II, Sect. 2.3]. Computing the inverse Fourier transform and using the properties of \({{\tilde{R}}}\), we get, if \(s_0\not \in {\mathbb {N}}\):

$$\begin{aligned} G(y,p) = {{\tilde{\upsilon }}}^+(y)\Psi _{-s_0}^+(p)+{{\tilde{\upsilon }}}^-(y) \Psi _{-s_0}^-(p)+R(y,p),\ \forall y\in {\mathcal {V}}, p\in {\mathbb {R}}. \end{aligned}$$
(C.2)

Here [9, p. 360]

$$\begin{aligned} \Psi _a^{\pm }(p)={\mathcal {F}}_{1d}^{-1}(\lambda _{\pm }^{a-1})(p)=\frac{\Gamma (a)}{2\pi }e(\mp a)(p\mp i0)^{-a},\ a \not =0,-1,-2,\dots , \end{aligned}$$
(C.3)

and \(R(y,p)={\mathcal {F}}_{1d}^{-1}\left( {{\tilde{R}}}(y,\lambda )\right) (p)\). By [1, Theorem 5.12], R satisfies

$$\begin{aligned} |\partial _y^m\partial _p^l R(y,p)|\le c_{m,l}{\left\{ \begin{array}{ll} |p|^{s_1-l},&{} s_1<l,\\ 1+|\log |p||,&{} s_1=l,\\ 1,&{} s_1>l,\end{array}\right. }\ \forall m\in {\mathbb {N}}_0^n,l\in {\mathbb {N}}_0,y\in {\mathcal {V}},p\not =0,\nonumber \\ \end{aligned}$$
(C.4)

for some \(c_{m,l}>0\). Recall that \(P(y+p\Theta _0)\equiv p\) for any \(y\in \Gamma \) and p such that \(y+p\Theta _0\in {\mathcal {V}}\). Combining (C.1)–(C.4) gives the leading singular behavior of g:

$$\begin{aligned} \begin{aligned} g(y+p\Theta _0)\sim&a^+(y)p_+^{s_0}+a^-(y)p_-^{s_0},\ p\rightarrow 0,\ \forall y\in \Gamma ,\\ a^\pm (y)=&\frac{\Gamma (-s_0)}{2\pi }\left( {{\tilde{\upsilon }}}^+(y)e(\pm s_0)+{{\tilde{\upsilon }}}^-(y)e(\mp s_0)\right) ,\ s_0\not \in {\mathbb {N}}. \end{aligned} \end{aligned}$$
(C.5)

See [2, 28] for a characterization of the singularities of the classical Radon transform \(g={\mathcal {R}}f\) for more general surfaces \({\mathcal {S}}\).
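A concrete classical instance of (C.5) (a standard example, not specific to this paper): for the 2D Radon transform (\(N=1\), \(n=2\)) of the characteristic function of the unit disk, the data near a tangent line behave like \(p_+^{1/2}\), i.e., \(s_0=1/2\).

```python
import numpy as np

# Radon data of the unit disk: integrating over lines at signed distance s
# from the center gives the chord length g(s) = 2*sqrt(1 - s^2), |s| < 1.
g = lambda s: 2 * np.sqrt(np.clip(1 - s**2, 0, None))

# Near the tangent line s = 1 (a point of Gamma), g(1 - p) ~ 2*sqrt(2) * p^{1/2}
p = np.array([1e-2, 1e-4, 1e-6])  # distance to the tangent line
ratio = g(1 - p) / (2 * np.sqrt(2) * np.sqrt(p))
assert np.allclose(ratio, 1, atol=1e-2)  # leading-order p_+^{1/2} behavior
```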

If \(s_0\in {\mathbb {N}}\), condition (4.17) implies \({{\tilde{\upsilon }}}^+(y)\lambda _+^{-(s_0+1)}+{{\tilde{\upsilon }}}^-(y)\lambda _-^{-(s_0+1)}\equiv {{\tilde{\upsilon }}}^+(y)\lambda ^{-(s_0+1)}\), \(y\in \Gamma \), so [9, p. 360]

$$\begin{aligned} \begin{aligned} G(y,p)=&{{\tilde{\upsilon }}}^+(y)\Psi _{-s_0}(p)+R(y,p),\ \forall y\in {\mathcal {V}},\\ \Psi _{-s_0}(p)=&{\mathcal {F}}_{1d}^{-1}(\lambda ^{-(s_0+1)})(p)=\frac{1}{2}\frac{(-i)^{s_0+1}}{s_0!}p^{s_0}\text {sgn}(p),\ s_0\in {\mathbb {N}}. \end{aligned} \end{aligned}$$
(C.6)

An equation of the kind (C.5) still holds:

$$\begin{aligned} \begin{aligned} g(y+p\Theta _0)\sim&a^+(y)p_+^{s_0}+a^-(y)p_-^{s_0},\ p\rightarrow 0,\ \forall y\in \Gamma ,\\ a^\pm (y)=&\frac{{{\tilde{\upsilon }}}^+(y)}{2s_0!}e(\mp (s_0+1)),\ s_0\in {\mathbb {N}}. \end{aligned} \end{aligned}$$
(C.7)

Combining (C.1)–(C.6) and using that (C.2) and (C.6) can be differentiated proves (7.5).

From the second equation in (C.1), (C.6), and (7.3) we get also

$$\begin{aligned} \partial _y^mG(y,\cdot )\in {\left\{ \begin{array}{ll} C_*^{s_0}({\mathbb {R}})\ \forall m\in {\mathbb {N}}_0^n,\\ C_0^{s_0}({\mathbb {R}})\ \forall m\in {\mathbb {N}}_0^n\text { if } s_0\in {\mathbb {N}}. \end{array}\right. } \end{aligned}$$
(C.8)

Together with the first equation in (C.1) this proves (7.6).

If \({{\tilde{\upsilon }}}^\pm \equiv 0\), the result follows from the properties of \({{\tilde{R}}}(y,\lambda )\) and (7.3), because \(s_1\not \in {\mathbb {N}}\).

C.2. Proof of Lemma 7.2

From (4.3) and (4.12),

$$\begin{aligned} \begin{aligned} ({\mathcal {B}}g)(y)&=\frac{1}{(2\pi )^{n+1}}\int _{{\mathbb {R}}^n} \tilde{B}(y,\eta )\int _{{\mathcal {V}}}\int _{{\mathbb {R}}}{{\tilde{\upsilon }}}(z,\lambda )e^{-i\lambda P(z)+i \eta (z-y)}\text {d}\lambda \text {d}z \text {d}\eta . \end{aligned} \end{aligned}$$
(C.9)

As is standard (see e.g., [33]), set \(u=\eta /\lambda \) and consider the phase function

$$\begin{aligned} W(z,u,y):=P(z)-u(z-y). \end{aligned}$$
(C.10)

The only critical point \((z_0,u_0)\) and the corresponding Hessian H are given by

$$\begin{aligned} z_0=y,\ u_0=\text {d}P(y),\ H(y)=\begin{pmatrix} P_{yy}(y) &{} -I_n \\ -I_n &{} 0\end{pmatrix}. \end{aligned}$$
(C.11)

Clearly, \(|\det H(y)|=1\) and \(\text {sgn}\, H(y)=0\) for any \(y\in {\mathcal {V}}\). By the stationary phase method (see [33, Chapter VIII, Eqs. (2.14)–(2.20)]), using (4.8) and (4.15) we get

$$\begin{aligned} \begin{aligned} J(y,\lambda )&:=\frac{|\lambda |^n}{(2\pi )^n}\int _{{\mathbb {R}}^n}\int _{{\mathcal {V}}} {{\tilde{B}}}(y,\lambda u){{\tilde{\upsilon }}}(z,\lambda )e^{-i\lambda (P(z)-P(y) - u(z-y))}\text {d}z \text {d}u\\&={{\tilde{B}}}(y,\lambda \text {d}P(y)){{\tilde{\upsilon }}}(y,\lambda )+{{\tilde{R}}}(y,\lambda ),\\ J&\in S^{\beta _0-s_0-1}({\mathcal {V}}\times {\mathbb {R}}),\ {{\tilde{R}}}\in S^{\beta _0-s_0-2}({\mathcal {V}}\times {\mathbb {R}}). \end{aligned} \end{aligned}$$
(C.12)

The fact that the u-integration is over an unbounded domain does not affect the result: integrating by parts with respect to z, we obtain a function that decays rapidly as \(|u|\rightarrow \infty \) (i.e., when \(|u|>\sup _{y\in {\mathcal {V}}}|\text {d}P(y)|\)) and as \(\lambda \rightarrow \infty \).
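The claim \(|\det H(y)|=1\) and \(\text {sgn}\,H(y)=0\) for the block Hessian in (C.11) is easy to confirm numerically, with a random symmetric matrix as a stand-in for \(P_{yy}(y)\):

```python
import numpy as np

# H = [[P_yy, -I], [-I, 0]] as in (C.11); check |det H| = 1 and sgn H = 0
rng = np.random.default_rng(1)
for n in (2, 3, 5):
    A = rng.standard_normal((n, n))
    A = (A + A.T) / 2  # symmetric stand-in for P_yy(y)
    H = np.block([[A, -np.eye(n)], [-np.eye(n), np.zeros((n, n))]])
    assert np.isclose(abs(np.linalg.det(H)), 1)   # in fact det H = (-1)^n
    ev = np.linalg.eigvalsh(H)
    assert np.sum(ev > 0) == np.sum(ev < 0)       # signature zero
```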

Substituting (C.12) into (C.9) and using (4.8), (4.15) leads to

$$\begin{aligned} \begin{aligned} ({\mathcal {B}}g)(y)&=\frac{1}{2\pi }\int J(y,\lambda ) e^{-i\lambda P(y)} \text {d}\lambda \\&={\mathcal {F}}_{1d}^{-1}\biggl ({{\tilde{B}}}_0(y,\text {d}P(y)){{\tilde{\upsilon }}}^+(y)\lambda _+^{\beta _0-s_0-1}\\&\quad +{{\tilde{B}}}_0(y,-\text {d}P(y)){{\tilde{\upsilon }}}^-(y)\lambda _-^{\beta _0-s_0-1}+{{\tilde{R}}}(y,\lambda )\biggr )(P(y)),\\ {{\tilde{R}}}&\in S^{c-1}({\mathcal {V}}\times {\mathbb {R}}),\ c:=\max (\beta _0-s_0-1,\beta _1-s_0,\beta _0-s_1)<\beta _0-s_0. \end{aligned} \end{aligned}$$
(C.13)

The extra factor \(|\lambda |^n\) in (C.12) cancels because \(|\lambda |^n \text {d}u=\text {d}\eta \). Computing the asymptotics of the inverse Fourier transform as \(p=P(y)\rightarrow 0\) and using that \({{\tilde{B}}}_0(y,\pm \text {d}P(y))\in C_0^{\infty }({\mathcal {V}})\) and \(P(y+p\Theta _0)\equiv p\) if \(y\in \Gamma \) gives

$$\begin{aligned} \begin{aligned} ({\mathcal {B}}g)(y+p\Theta _0) =&c_1^+\Psi _{\beta _0-s_0}^+(p)+c_1^-\Psi _{\beta _0-s_0}^-(p)+R(y+p\Theta _0,p),\\ c_1^\pm =&{{\tilde{B}}}_0(y,\pm \text {d}P(y)){{\tilde{\upsilon }}}^\pm (y),\ \forall y\in \Gamma . \end{aligned} \end{aligned}$$
(C.14)

Recall that \(\Psi _a^{\pm }\) are defined in (C.3), and \(R(y,p)={\mathcal {F}}_{1d}^{-1}\bigl ({{\tilde{R}}}(y,\lambda )\bigr )(p)\). By [1, Theorem 5.12] and (C.13), the remainder satisfies

$$\begin{aligned} |\partial _y^m\partial _p^l R(y,p)|\le c_{m,l}{\left\{ \begin{array}{ll} |p|^{c-l},&{} c<l,\\ 1+|\log |p||,&{} c=l,\\ 1,&{} c>l,\end{array}\right. }\ \forall m\in {\mathbb {N}}_0^n,l\in {\mathbb {N}}_0,y\in {\mathcal {V}},p\not =0, \nonumber \\ \end{aligned}$$
(C.15)

for some \(c_{m,l}>0\). The constant c here is the same as in (C.13). The estimate (7.7) follows from (C.14), (C.15).

If \(\kappa =0\), condition (4.20) implies

$$\begin{aligned} \begin{aligned}&{{\tilde{B}}}_0(y,\text {d}P(y)){{\tilde{\upsilon }}}^+(y)\lambda _+^{\beta _0-s_0-1} +{{\tilde{B}}}_0(y,-\text {d}P(y)){{\tilde{\upsilon }}}^-(y)\lambda _-^{\beta _0-s_0-1}\\&={{\tilde{B}}}_0(y,\text {d}P(y)){{\tilde{\upsilon }}}^+(y)(\lambda -i0)^{\beta _0-s_0-1}\ \forall y\in \Gamma , \end{aligned} \end{aligned}$$
(C.16)

and

$$\begin{aligned} \begin{aligned} ({\mathcal {B}}g)(y+p\Theta _0) =&{{\tilde{B}}}_0(y,\text {d}P(y)){{\tilde{\upsilon }}}^+(y){\left\{ \begin{array}{ll}\frac{e(-\beta _0+s_0+1)}{\Gamma (-\beta _0+s_0+1)}p_-^{-(\beta _0-s_0)},&{} \beta _0-s_0\not \in {\mathbb {N}}\\ 0,&{} \beta _0-s_0\in {\mathbb {N}}\end{array}\right. }\\&+R(y+p\Theta _0,p)\ \forall y\in \Gamma ,p\not =0. \end{aligned} \nonumber \\ \end{aligned}$$
(C.17)

This proves (7.8).

C.3. Proof of Lemma 7.3

Using that \(l>s_0\), that \(\varphi \) has \(\lceil \beta _0^+ \rceil \) bounded derivatives, and that \(\varphi \) is exact to the degree \(\lceil \beta _0 \rceil \), we get, for any \(0\le M\le l\):

$$\begin{aligned} \begin{aligned} g_\epsilon ^{(l)}(y)&=\sum _{|j|\le \vartheta /\epsilon }\partial _{y_1}^l\varphi \left( \frac{y-{{\hat{y}}}^j}{\epsilon }\right) \left( g({{\hat{y}}}^j)-\sum _{|m|\le M-1}\frac{({{\hat{y}}}^j-y)^m}{m!}g^{(m)}(y)\right) \\&=\epsilon ^{-l}\sum _{|j|\le \vartheta /\epsilon }(\partial _{v_1}^l\varphi ) \left( \frac{y-{{\hat{y}}}^j}{\epsilon }\right) \sum _{|m|=M}R_m({{\hat{y}}}^j,y)({{\hat{y}}}^j-y)^{m}. \end{aligned} \end{aligned}$$
(C.18)

Here \((\partial _{v_1}\varphi )(\cdot )\) is the derivative of \(\varphi (v)\) with respect to \(v_1\) evaluated at the indicated point, and the remainder satisfies

$$\begin{aligned} \begin{aligned} R_m({{\hat{y}}}^j,y)&= \frac{|m|}{m!}\int _0^1 (1-t)^{|m|-1}g^{(m)}(y+t({{\hat{y}}}^j-y))dt,\\ |R_m({{\hat{y}}}^j,y)|&\le \frac{1}{m!}\max _{|m'|=|m|,v\in \text {supp} \varphi }|g^{(m')}(y+\epsilon v)|. \end{aligned} \end{aligned}$$
(C.19)
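The integral form of the remainder in (C.19) can be checked in one dimension (with a smooth stand-in for g; in the lemma, g is the GRT data):

```python
import math
import numpy as np

# Taylor's theorem with integral remainder, the 1D case of (C.19):
# g(w) = sum_{k<M} g^{(k)}(y)(w-y)^k/k! + R_M(w,y)(w-y)^M,
# R_M = (M/M!) * int_0^1 (1-t)^{M-1} g^{(M)}(y + t(w-y)) dt.
g_derivs = [np.sin, np.cos, lambda x: -np.sin(x), lambda x: -np.cos(x)]

y, w, M = 0.3, 0.8, 3
taylor = sum(g_derivs[k](y) * (w - y)**k / math.factorial(k) for k in range(M))

t = np.linspace(0.0, 1.0, 100_001)
vals = (1 - t)**(M - 1) * g_derivs[M](y + t * (w - y))
dt = t[1] - t[0]
integral = dt * (vals.sum() - 0.5 * (vals[0] + vals[-1]))  # trapezoid rule
R_M = (M / math.factorial(M)) * integral

assert np.isclose(taylor + R_M * (w - y)**M, np.sin(w))
```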

To prove the top case in (7.10) select \(\varkappa _1>0\) so that \(|P(y)|\ge \varkappa _1\epsilon \) and \(\varphi ((y-w)/\epsilon )\not =0\) implies \(|P(w)|\ge \epsilon \). By Lemma 7.1, this ensures that for each \(m\in {\mathbb {N}}_0^{n}\) there exists \(c(m)>0\) such that

$$\begin{aligned} \begin{aligned} \max _{v\in \text {supp} \varphi }|g^{(m)}(y+\epsilon v)|\le c(m){\left\{ \begin{array}{ll} |P(y)|^{s_0-|m|},&{}|m|>s_0,\\ 1,&{}|m|\le s_0,\end{array}\right. },\ |P(y)|\ge \varkappa _1\epsilon . \end{aligned} \nonumber \\ \end{aligned}$$
(C.20)

Set \(M=l\) in (C.18). Then (C.20) together with the bottom line in (C.19) prove the result.

To prove the middle case in (7.10), set \(M=\lfloor s_0 \rfloor \) in (C.18). If \(s_0\in {\mathbb {N}}\), (7.6) and (C.19) imply that \(R_m=O(1)\), thereby proving the assertion. If \(s_0\not \in {\mathbb {N}}\), the remainder can be modified as follows

$$\begin{aligned} \begin{aligned} \tilde{R}_m({{\hat{y}}}^j,y)&= \frac{|m|}{m!}\int _0^1 (1-t)^{|m|-1}[g^{(m)}(y+t({{\hat{y}}}^j-y))-g^{(m)}(y)]dt=O(\epsilon ^{\{s_0\}}). \end{aligned} \nonumber \\ \end{aligned}$$
(C.21)

Here we have used (7.4) with \(r=s_0\). Since \(l>M\), we can replace \(R_m\) with \({{\tilde{R}}}_m\) in (C.18) without changing the equality, and the desired inequality follows.

The bottom case in (7.10) follows by setting \(M=l\) in (C.18) and noticing that (7.6) and (C.19) imply \(R_m=O(1)\).

If \({{\tilde{\upsilon }}}^\pm \equiv 0\), the same argument as above applies with \(s_0\) replaced by \(s_1\). The only change is that there is no need to consider the case \(s_1\in {\mathbb {N}}\).

C.4. Proof of Lemma 7.4

Since \(\varkappa _1>0\) is the same as in the proof of Lemma 7.3, \(|P(y)|\ge \varkappa _1\epsilon \) and \(\varphi ((y-w)/\epsilon )\not =0\) imply \(|P(w)|\ge \epsilon \). Similarly to (C.18), using the properties of \(\varphi \) we obtain

$$\begin{aligned} \begin{aligned} g_\epsilon ^{(l)}(y)&=\sum _{|j|\le \vartheta /\epsilon }\partial _{y_1}^l\varphi \left( \frac{y-{{\hat{y}}}^j}{\epsilon }\right) \biggl (\sum _{|m|\le M-1}\frac{({{\hat{y}}}^j-y)^m}{m!}g^{(m)}(y)\\&\quad +\sum _{|m|=M}R_m({{\hat{y}}}^j,y)({{\hat{y}}}^j-y)^{m}\biggr )\\&=g^{(l)}(y)+\sum _{|j|\le \vartheta /\epsilon }\partial _{y_1}^l\varphi \left( \frac{y-{{\hat{y}}}^j}{\epsilon }\right) \sum _{|m|=M}R_m({{\hat{y}}}^j,y)({{\hat{y}}}^j-y)^{m},l<M\le \lceil \beta _0\rceil . \end{aligned} \end{aligned}$$
(C.22)

The term \(g^{(l)}(y)\) on the right in (C.22) is the only term from the Taylor polynomial that survives the summation with respect to j. In particular, all the terms corresponding to \(l< |m| \le M-1\) sum to zero, because \(\varphi \) is exact to degree \(\lceil \beta _0\rceil \), and

$$\begin{aligned} \begin{aligned} \sum _{|j|\le \vartheta /\epsilon }&\partial _{y_1}^l\varphi \left( \frac{y-{{\hat{y}}}^j}{\epsilon }\right) ({{\hat{y}}}^j-y)^m=\partial _{w_1}^l (w-y)^m|_{w=y}=0,\\&\forall m\in {\mathbb {N}}_0^n:l< |m| \le M-1,\ y\in {\mathcal {V}}. \end{aligned} \end{aligned}$$
(C.23)
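The kernel \(\varphi \) in the paper is abstract, but the exactness property behind (C.23) is easy to see on a concrete example. Below is a minimal numerical sketch (the linear B-spline is our stand-in for \(\varphi \), not the paper's kernel); it is exact to degree 1, so the analogue of (C.23) holds with \(l=0\), \(m=1\):

```python
def hat(t):
    """Linear B-spline (hat) interpolation kernel, supported on [-1, 1]."""
    return max(0.0, 1.0 - abs(t))

def lattice_sum(x, m, radius=3):
    """Compute sum_j hat(x - j) * (j - x)**m over nearby lattice points j."""
    j0 = int(round(x))
    return sum(hat(x - j) * (j - x) ** m for j in range(j0 - radius, j0 + radius + 1))

# The hat kernel is exact to degree 1: it reproduces constants (m = 0)
# and annihilates the first moment (m = 1) at every point x.
for x in [0.0, 0.3, 1.7, -2.45]:
    assert abs(lattice_sum(x, 0) - 1.0) < 1e-12   # partition of unity
    assert abs(lattice_sum(x, 1)) < 1e-12         # first moment vanishes
```

Interpolating kernels exact to higher degree (e.g., the Keys cubic kernel, exact to degree 2) play the same role when \(\lceil \beta _0\rceil \) is larger; the argument only uses the existence of such kernels.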

Using (C.22) with \(M=l+1\) and appealing to (C.19) and (C.20) proves (7.11). Indeed, recall that \(l\ge \lfloor s_0^-\rfloor \), so \(M=l+1\ge s_0\). If \(s_0\not \in {\mathbb {N}}\), then \(M>s_0\), and the top case in (C.20) applies when estimating \(R_m\), \(|m|=M\). If \(s_0\in {\mathbb {N}}\), then \(M=s_0\), and the bottom case in (C.20) applies when estimating \(R_m\), \(|m|=M\).

To prove (7.12), we use (C.22) with \(M=\lfloor s_0\rfloor \). If \(s_0\in {\mathbb {N}}\), then \(l<\lfloor s_0\rfloor =s_0\) (by assumption, \(l\le \lfloor s_0^-\rfloor \)), and (C.22), (7.6) prove (7.12).

If \(s_0\not \in {\mathbb {N}}\), we replace \(R_m\) with \({{\tilde{R}}}_m\) in (C.22), as was done in the proof of Lemma 7.3. As before, this does not invalidate the equality and extends its applicability to the case \(l=M\). Note, however, that if \(l=M\), then the term \(g^{(l)}(y)\) on the right in (C.22) comes not from the Taylor polynomial but from the modification of the remainder. The desired assertion follows from (C.21) and the modified (C.22).

If \({{\tilde{\upsilon }}}^\pm \equiv 0\), the same argument as above applies with \(s_0\) replaced by \(s_1\). The only change is that there is no need to consider the case \(s_1\in {\mathbb {N}}\).

Appendix D. Proof of Lemma 8.2

Throughout the proof, c denotes various positive constants that can vary from one place to the next. To simplify notation, in this proof we drop the subscripts from \(\beta _0\) and \(s_0\): \(\beta =\beta _0\), \(s=s_0\). By the choice of y coordinates (see (3.9)) and by (3.18), \(y_1=\Theta _0\cdot y\) (recall that \(|\Theta _0|=1\)).

Using (4.6), (8.3), and the fact that the symbol of \({\mathcal {B}}\) is homogeneous of degree \(\beta \), we have

$$\begin{aligned} \begin{aligned} \epsilon ^{\beta }({\mathcal {B}}g_\epsilon )(y) = \sum _{|j|\le \vartheta /\epsilon } \left( {\mathcal {B}}\varphi \left( \cdot -({{{\hat{y}}}^j}/\epsilon )\right) \right) (y/\epsilon ) \left[ a^+({{\hat{y}}}^j)P_+^{s}({{\hat{y}}}^j)+a^-({{\hat{y}}}^j)P_-^{s}({{\hat{y}}}^j)\right] , \\ \end{aligned}\nonumber \\ \end{aligned}$$
(D.1)

where

$$\begin{aligned} \left( {\mathcal {B}}\varphi (\cdot -a)\right) (u):=({\mathcal {B}}\varphi _1)(u),\ \varphi _1(u):=\varphi (u-a). \end{aligned}$$
(D.2)

Also (cf. (8.11)):

$$\begin{aligned} \epsilon ^s{\mathcal {A}}\left( \Theta _0\cdot \frac{{{\hat{y}}}^j-z}{\epsilon }\right) =[a^+(y_0)({{\hat{y}}}^j_1-z_1)_+^s+a^-(y_0)({{\hat{y}}}^j_1-z_1)_-^s]. \end{aligned}$$
(D.3)

We start by estimating the difference between the terms with the subscript ‘+’ inside the brackets in (D.1) and (D.3)

$$\begin{aligned} \begin{aligned} \bigl |a^+({{\hat{y}}}^j)&P_+^{s}({{\hat{y}}}^j)-a^+(y_0)({{\hat{y}}}^j_1-z_1)_+^s\bigr |\\ \le \,&\left| P_+^{s}({{\hat{y}}}^j)-({{\hat{y}}}^j_1-z_1)_+^s\right| |a^+({{\hat{y}}}^j)| +|{{\hat{y}}}^j_1-z_1|^s\left| a^+({{\hat{y}}}^j)-a^+(y_0)\right| ,\\ {{\hat{y}}}^j=&U^T(\epsilon D j-{{\tilde{y}}}_0). \\ \end{aligned}\qquad \end{aligned}$$
(D.4)

The following inequalities can be shown to hold. For all \(q,r\in {\mathbb {R}}\) one has

$$\begin{aligned} \begin{aligned} \left| (q+r)_\pm ^s-q_\pm ^s\right| \le \,&2^{s-1}(|r|^s+s|q|^{s-1}|r|),\ s> 1,\\ \left| (q+r)_\pm ^s-q_\pm ^s\right| \le \,&|r|^s,\ 0<s\le 1. \end{aligned} \end{aligned}$$
(D.5)

Consider the top inequality. The case \(q,q+r\le 0\) is trivial. The cases \(q+r\le 0\le q\) and \(q\le 0\le q+r\) can be verified directly. By a change of variables and convexity, it is easily seen that the case \(r<0< q\) follows from the case \(q,r>0\). To prove the latter, divide by \(q^s\) and set \(x=r/q\). Both sides equal zero when \(x=0\). Differentiating with respect to x, we see that the inequality is proven because \((1+x)^{s-1}\le 2^{s-1}(x^{s-1}+1)\) (consider \(0<x\le 1\) and \(x\ge 1\)). The second inequality in (D.5) is obvious.
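Both lines of (D.5) lend themselves to a quick numerical spot check. The sketch below samples random \(q,r,s\) and verifies the inequalities for the ‘+’ branch (the ‘−’ branch is symmetric); this is a sanity check, not a proof:

```python
import random

def plus_pow(t, s):
    """t_+^s = max(t, 0)^s."""
    return max(t, 0.0) ** s

random.seed(0)
for _ in range(10000):
    q = random.uniform(-2.0, 2.0)
    r = random.uniform(-2.0, 2.0)
    # Top inequality of (D.5): s > 1.
    s = random.uniform(1.0 + 1e-9, 3.0)
    lhs = abs(plus_pow(q + r, s) - plus_pow(q, s))
    rhs = 2 ** (s - 1) * (abs(r) ** s + s * abs(q) ** (s - 1) * abs(r))
    assert lhs <= rhs + 1e-9
    # Bottom inequality of (D.5): 0 < s <= 1.
    s = random.uniform(1e-6, 1.0)
    lhs = abs(plus_pow(q + r, s) - plus_pow(q, s))
    assert lhs <= abs(r) ** s + 1e-9
```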

The assumption \(z\in \Gamma \) implies \(z_1=\psi (z^\perp )\), so

$$\begin{aligned} P({{\hat{y}}}^j)={{\hat{y}}}^j_1-\psi (({{\hat{y}}}^j)^\perp )={{\hat{y}}}^j_1-z_1+[\psi (z^\perp )-\psi (({{\hat{y}}}^j)^\perp )]. \end{aligned}$$
(D.6)

Setting \(q={{\hat{y}}}^j_1-z_1\) and \(r=\psi (z^\perp )-\psi (({{\hat{y}}}^j)^\perp )\) in (D.5) and using (D.6) and that \(a^+(y)\) is bounded, we estimate the first term on the right in (D.4) as follows

$$\begin{aligned} \begin{aligned}&\left| P_+^{s}({{\hat{y}}}^j)-({{\hat{y}}}^j_1-z_1)_+^s\right| |a^+({{\hat{y}}}^j)|\\&\quad \le c \left( |\psi (z^\perp )-\psi (({{\hat{y}}}^j)^\perp )|^s+ {\left\{ \begin{array}{ll} |{{\hat{y}}}^j_1-z_1|^{s-1}|\psi (z^\perp )-\psi (({{\hat{y}}}^j)^\perp )|,&{} s> 1\\ 0,&{}0<s\le 1 \end{array}\right. } \right) . \end{aligned}\nonumber \\ \end{aligned}$$
(D.7)

Recall that in this lemma we assume that the amplitude of \({\mathcal {B}}\) satisfies \({{\tilde{B}}}(y,\eta )\equiv {{\tilde{B}}}_0(y,\eta )\). By (4.8), the fact that the amplitude of \({\mathcal {B}}\) is homogeneous in the frequency variable (and, therefore, the Schwartz kernel K(y, w) of \({\mathcal {B}}\) is homogeneous in w), and Assumption 4.3(IK1),

$$\begin{aligned} |{\mathcal {B}}\varphi (u)|\le c\left( 1+|u|\right) ^{-(\beta +n)},\ u\in {\mathbb {R}}^n. \end{aligned}$$
(D.8)

Therefore, by (8.11) and (D.1)–(D.3), we have to estimate the following two sums

$$\begin{aligned} \begin{aligned} J_1:=&\sum _{|j|\le \vartheta /\epsilon }\frac{|(\psi (z^\perp )-\psi (({{\hat{y}}}^j)^\perp ))/\epsilon |^s}{(1+|(y-{{\hat{y}}}^j)/\epsilon |)^{\beta +n}},\\ J_2:=&\sum _{|j|\le \vartheta /\epsilon }\frac{|({{\hat{y}}}_1^j-z_1)/\epsilon |^{s-1}|\psi (z^\perp )-\psi (({{\hat{y}}}^j)^\perp )|/\epsilon }{(1+|(y-{{\hat{y}}}^j)/\epsilon |)^{\beta +n}}. \end{aligned} \end{aligned}$$
(D.9)

The second sum is required if \(s>1\).

Note that the quantities \(J_{1,2}\) include the factor \(\epsilon ^{-s}\), which appears on the left in (8.11) and has been unaccounted for until now. The remaining factor \(\epsilon ^{\beta }\) has been accounted for in (D.1). In (8.11), \({\mathcal {B}}_0\) already acts with respect to the rescaled variable \(y/\epsilon \), so the factor \(\epsilon ^\beta \) is not needed on the right in (8.11). Since \({\mathcal {B}}_0\) is shift-invariant, it is not necessary to represent its action in the form (D.2).

Assumptions of the lemma imply

$$\begin{aligned} |\psi (z^\perp )-\psi (({{\hat{y}}}^j)^\perp )|\le |\psi '(y^\perp _*)||z^\perp -({{\hat{y}}}^j)^\perp | \le c(\epsilon ^{1/2}+|z^\perp -({{\hat{y}}}^j)^\perp |)|z^\perp -({{\hat{y}}}^j)^\perp | \nonumber \\ \end{aligned}$$
(D.10)

for some \(c>0\). Here \(y_*^\perp \in {\mathbb {R}}^{n-1}\) is some point on the line segment with the endpoints \(z^\perp \), \(({{\hat{y}}}^j)^\perp \), and we have used that \(|\psi '(y_*^\perp )|\le c(|z^\perp |+|z^\perp -({{\hat{y}}}^j)^\perp |)\), which follows from \(\psi '(y_0^\perp )=0\).

Let \(m=m(z,\epsilon )\in {\mathbb {Z}}^n\) be such that \(|(z+U^T\tilde{y}_0)/\epsilon -U^TDm|<c\). The dependence of m on z and \(\epsilon \) is omitted from the notation. This implies

$$\begin{aligned} \max (|z_1-{{\hat{y}}}^j_1|,|z^\perp -({{\hat{y}}}^j)^\perp |)\le & {} |z-{{\hat{y}}}^j|\le c \epsilon \left| \frac{z+U^T{{\tilde{y}}}_0}{\epsilon }-U^TDm-U^T D (j-m)\right| \nonumber \\\le & {} c\epsilon (c+|j-m|). \end{aligned}$$
(D.11)

Also, using that \(|y-z|=O(\epsilon )\) gives

$$\begin{aligned} \left| \frac{y-{{\hat{y}}}^j}{\epsilon }\right| =\left| \frac{(y-z)+(z-{{\hat{y}}}^j)}{\epsilon }\right| \ge c|j-m|\text { if } |j-m|\gg 1. \end{aligned}$$
(D.12)

Substitute (D.10) into the expression for \(J_1\) in (D.9), shift the index \(j\rightarrow j-m\), and use (D.11), (D.12):

$$\begin{aligned} \begin{aligned} J_1\le&c\sum _{|j|\le \vartheta /\epsilon }\frac{(\epsilon ^{1/2}+\epsilon (c+|j|))^s(c+|j|)^s}{(1+c|j|)^{\beta +n}}+O(\epsilon ^{s/2}). \end{aligned} \end{aligned}$$
(D.13)

Here we have used that we can ignore any finite number of terms (their contribution is \(O(\epsilon ^{s/2})\)), and (D.12) applies to the remaining terms. This gives

$$\begin{aligned} \begin{aligned} J_1\le \,&c\sum _{0<|j|\le \vartheta /\epsilon }\frac{(\epsilon ^{1/2}+\epsilon |j|)^s}{|j|^{\beta +n-s}}+O(\epsilon ^{s/2})\\ \le \,&c\int _1^{\vartheta /\epsilon }\frac{(\epsilon ^{1/2}+\epsilon r)^s}{r^{\beta +1-s}}dr+O(\epsilon ^{s/2})=O(\epsilon ^{\min (\beta -s,s/2)}). \end{aligned} \end{aligned}$$
(D.14)
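The rate in (D.14) can be verified numerically for concrete exponents. The sketch below evaluates \(\int _1^{\vartheta /\epsilon }(\epsilon ^{1/2}+\epsilon r)^s r^{-(\beta +1-s)}\text {d}r\) by midpoint quadrature on a logarithmic grid and checks the predicted scaling \(\epsilon ^{\min (\beta -s,s/2)}\); the values \(\vartheta =1\), \(s=1.5\), \(\beta =3\) are our choices, not taken from the paper:

```python
import math

def I_D14(eps, s=1.5, beta=3.0, theta=1.0, n=200000):
    """Quadrature for the integral in (D.14) on a logarithmic grid r = e^u."""
    u_max = math.log(theta / eps)
    h = u_max / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * h          # midpoint rule in u; dr = r du
        r = math.exp(u)
        total += (eps ** 0.5 + eps * r) ** s * r ** (s - beta - 1) * r * h
    return total

# For s = 1.5, beta = 3: min(beta - s, s/2) = 0.75, so
# I_D14(eps2)/I_D14(eps1) should be close to (eps2/eps1)**0.75.
ratio = I_D14(1e-8) / I_D14(1e-5)
assert abs(ratio / (1e-3) ** 0.75 - 1.0) < 0.1
```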

To estimate \(J_2\), we use the same approach as in (D.10)–(D.14):

$$\begin{aligned} \begin{aligned} J_2\le \,&c\sum _{|j|\le \vartheta /\epsilon }\frac{(\epsilon ^{1/2}+\epsilon |j|)(c+|j|)^s}{(1+c|j|)^{\beta +n}}+O(\epsilon ^{1/2})\\ \le \,&c\sum _{0<|j|\le \vartheta /\epsilon }\frac{\epsilon ^{1/2}+\epsilon |j|}{|j|^{\beta +n-s}}+O(\epsilon ^{1/2})\\ \le \,&c\int _1^{\vartheta /\epsilon }\frac{\epsilon ^{1/2}+\epsilon r}{r^{\beta +1-s}}dr+O(\epsilon ^{1/2})=O(\epsilon ^{\min (\beta -s,1/2)})=O(\epsilon ^{1/2}). \end{aligned} \end{aligned}$$
(D.15)

Here we have used that \(\beta -s\ge N/2\ge 1/2\).

The second term on the right in (D.4) is estimated as follows:

$$\begin{aligned} \begin{aligned}&|{{\hat{y}}}^j_1-z_1|^s\left| a^+({{\hat{y}}}^j)-a^+(y_0)\right| \le |{{\hat{y}}}^j_1-z_1|^s\left| (a^+({{\hat{y}}}^j)-a^+(z))+(a^+(z)-a^+(y_0))\right| \\&\le c[\epsilon (c+|j-m|)]^s(\epsilon (c+|j-m|)+\epsilon ^{1/2}). \end{aligned} \nonumber \\ \end{aligned}$$
(D.16)

Shifting the j index as before and estimating a finite number of terms by \(O(\epsilon ^{1/2})\) gives an upper bound

$$\begin{aligned} \sum _{0<|j|\le \vartheta /\epsilon }\frac{\epsilon ^{1/2}+\epsilon |j|}{|j|^{\beta +n-s}}+O(\epsilon ^{1/2}) =O(\epsilon ^{1/2}). \end{aligned}$$
(D.17)

The terms with the subscript \('-'\) in (8.11) are estimated analogously. Our argument proves (8.11) with \({\mathcal {B}}\) instead of \({\mathcal {B}}_0\) on the right. This implies, in particular, that the sum on the right in (8.11) is restricted to \(|j|\le \vartheta /\epsilon \).

The left-hand side of (8.11) is bounded, because

$$\begin{aligned} \begin{aligned} |P({{\hat{y}}}^j)|\le \,&|{{\hat{y}}}^j_1-z_1|+|\psi (z^\perp )-\psi (({{\hat{y}}}^j)^\perp )|\\ \le \,&c\epsilon (c+|j-m|)(1+(\epsilon ^{1/2}+\epsilon (c+|j-m|)))\\ \le \,&c\epsilon (1+|j-m|) \end{aligned} \end{aligned}$$
(D.18)

by (D.6), (D.10), (D.11), and \(|j|\le \vartheta /\epsilon \), and

$$\begin{aligned} \begin{aligned} \sum _{|j|\le \vartheta /\epsilon }\frac{|P({{\hat{y}}}^j)/\epsilon |^s}{(1+|(y-{{\hat{y}}}^j)/\epsilon |)^{\beta +n}} \le \,&c\sum _{0<|j|\le \vartheta /\epsilon }\frac{1}{|j|^{\beta +n-s}}+O(1)<\infty . \end{aligned} \end{aligned}$$
(D.19)

It is easy to see that

$$\begin{aligned} \begin{aligned} \epsilon ^{\beta }\left| \left( {\mathcal {B}}\varphi \left( \cdot -\frac{{{\hat{y}}}^j}{\epsilon }\right) \right) (y/\epsilon ) - {\mathcal {B}}_0 \varphi \left( \frac{y-{{\hat{y}}}^j}{\epsilon }\right) \right| \le c\frac{\epsilon ^{1/2}}{(1+|(y-{{\hat{y}}}^j)/\epsilon |)^{\beta +n}}. \end{aligned}\nonumber \\ \end{aligned}$$
(D.20)

This follows from \(\varphi \in C_0^{\lceil \beta _0^+\rceil }\), \(|y-y_0|=O(\epsilon ^{1/2})\), and

$$\begin{aligned} |\partial _\eta ^m(\tilde{B}_0(y,\eta )-{{\tilde{B}}}_0(y_0,\eta ))|\le c_m|y-y_0||\eta |^{\beta -|m|},\ |\eta |\ge 1,\ m\in {\mathbb {N}}_0^n. \end{aligned}$$
(D.21)

Together with (D.19) this implies that replacing y with \(y_0\) in the amplitude of the \(\Psi \)DO \({\mathcal {B}}\) (i.e., replacing \({{\tilde{B}}}_0(y,\eta )\) with \(\tilde{B}_0(y_0,\eta )\)) introduces an error of magnitude \(O(\epsilon ^{1/2})\), while keeping the sum restricted to \(|j|\le \vartheta /\epsilon \).

Using that \(|y-z|=O(\epsilon )\), (D.3) and (D.8) imply that the terms of the series on the right in (8.11) are bounded by \(O((1+|j|)^{s-(\beta +n)})\). Hence the series is absolutely convergent. The contribution of the terms corresponding to \(|j|>\vartheta /\epsilon \) is bounded by \(c\sum _{|j|>\vartheta /\epsilon }|j|^{s-(\beta +n)}=O(\epsilon ^{\beta -s})\rightarrow 0\) for some \(c>0\), and the lemma is proven.
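The tail bound \(\sum _{|j|>\vartheta /\epsilon }|j|^{s-(\beta +n)}=O(\epsilon ^{\beta -s})\) is an instance of the standard fact that a lattice sum of \(|j|^{-p}\), \(p>n\), over \(|j|>R\) decays like \(R^{n-p}\). A small numerical sketch in dimension \(n=2\) with \(p=5\) (both values are our choices for illustration):

```python
def tail_sum(R, p=5.0, cutoff=400):
    """Sum of |j|^(-p) over lattice points j in Z^2 with R < |j| <= cutoff."""
    total = 0.0
    for j1 in range(-cutoff, cutoff + 1):
        for j2 in range(-cutoff, cutoff + 1):
            norm2 = j1 * j1 + j2 * j2
            if R * R < norm2 <= cutoff * cutoff:
                total += norm2 ** (-p / 2)
    return total

# For p = 5 in dimension n = 2, the tail scales like R^(n-p) = R^(-3),
# so doubling R should shrink it by roughly 2^(-3) = 0.125.
ratio = tail_sum(40) / tail_sum(20)
assert abs(ratio - 0.125) < 0.015
```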

Appendix E. Proof of Lemma 8.3

Pick some sufficiently large \(J\gg 1\). Then, with \(D_1:=U^TD\),

$$\begin{aligned} \begin{aligned} \phi (v+\Delta v,p)-\phi (v,p)&= \sum _{j\in {\mathbb {Z}}^n} \bigl [{\mathcal {B}}_0 \varphi (v+\Delta v-D_1 j)-{\mathcal {B}}_0 \varphi (v-D_1 j)\bigr ]{\mathcal {A}}(\Theta _0\cdot D_1 j-p)\\&=\sum _{|j|\le J}(\cdot )+\sum _{|j|>J}(\cdot )=:J_1+J_2. \end{aligned} \end{aligned}$$
(E.1)

Because \({\mathcal {B}}_0\in S^{\beta _0}({\mathbb {R}}^n\times {\mathbb {R}}^n)\), [1, Theorem 6.19] implies that \({\mathcal {B}}_0:C_*^{\lceil \beta _0^+\rceil }\rightarrow C_*^a\), \(a=\lceil \beta _0^+\rceil -\beta _0=1-\{\beta _0\}>0\), is continuous. The nonsmoothness of the symbol at the origin, which is not allowed by the assumptions of the cited theorem, is irrelevant here. By assumption, \(\varphi \in C_0^{\lceil \beta _0^+\rceil }({\mathbb {R}}^n)\), so \(J_1=O(|\Delta v|^a)\). In the second term \(J_2\), the arguments of \({\mathcal {B}}_0\varphi \) are bounded away from zero, and the factor in brackets is smooth. Moreover, using again that the Schwartz kernel K(y, w) of \({\mathcal {B}}_0\) is homogeneous in w, we have

$$\begin{aligned} |\partial _{u_l}{\mathcal {B}}_0 \varphi (u)|=O(|u|^{-(n+\beta _0+1)}),\ |u|\rightarrow \infty ,\ 1\le l\le n. \end{aligned}$$
(E.2)

Using an argument analogous to the one in (D.19), we easily see that \(J_2=O(|\Delta v|)\). This proves the first line in (8.22).

The second line in (8.22) is proven analogously:

$$\begin{aligned} \begin{aligned}&\phi (v,p+\Delta p)-\phi (v,p)\\&\quad = \sum _{j\in {\mathbb {Z}}^n} {\mathcal {B}}_0 \varphi (v-D_1 j)\bigl [{\mathcal {A}}(\Theta _0\cdot D_1 j-(p+\Delta p))-{\mathcal {A}}(\Theta _0\cdot D_1 j-p)\bigr ]\\&\quad =\sum _{|\Theta _0\cdot D_1 j|\le J}(\cdot )+\sum _{|\Theta _0\cdot D_1 j|>J}(\cdot )=:J_1+J_2. \end{aligned} \end{aligned}$$
(E.3)

Clearly, \({\mathcal {A}}(q+\Delta p)-{\mathcal {A}}(q)=O(|\Delta p|^{\min (s_0,1)})\) uniformly in q confined to any bounded set. Using in addition that \({\mathcal {B}}_0 \varphi (u)\) is bounded and \({\mathcal {B}}_0 \varphi (u)=O \left( |u|^{-(n+\beta _0)}\right) \) as \(|u|\rightarrow \infty \), we get that \(J_1=O(|\Delta p|^{\min (s_0,1)})\).
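The Hölder estimate for \({\mathcal {A}}\) is easy to probe numerically. The sketch below uses the model profile \({\mathcal {A}}(q)=q_+^{s_0}\) with \(s_0=1/2\) (a stand-in of our choosing; the paper's \({\mathcal {A}}\) is more general) and checks that \(|{\mathcal {A}}(q+\Delta p)-{\mathcal {A}}(q)|/|\Delta p|^{\min (s_0,1)}\) stays bounded uniformly over a bounded set of q:

```python
def A(q, s0=0.5):
    """Model conormal profile A(q) = q_+^{s0} (a stand-in for the paper's A)."""
    return max(q, 0.0) ** s0

def worst_ratio(dp, s0=0.5, n=4001):
    """max over q in [-2, 2] of |A(q+dp) - A(q)| / dp^{min(s0, 1)}."""
    qs = [-2.0 + 4.0 * i / (n - 1) for i in range(n)]
    return max(abs(A(q + dp, s0) - A(q, s0)) for q in qs) / dp ** min(s0, 1.0)

# The ratio is uniformly bounded (by 1 for this profile, attained at q = 0)
# as dp decreases, consistent with A(q+dp) - A(q) = O(|dp|^{min(s0,1)}).
for dp in [1e-2, 1e-4, 1e-6]:
    assert worst_ratio(dp) <= 1.0 + 1e-9
```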

In \(J_2\), the argument of \({\mathcal {A}}\) is bounded away from zero. In view of \({\mathcal {A}}'(q)=O(|q|^{s_0-1})\), \(|q|\rightarrow \infty \), we finish the proof by noticing that

$$\begin{aligned} |J_2| \le O(|\Delta p|) \sum _{|j| > 0}\frac{|j|^{s_0-1}}{|j|^{n+\beta _0+1}}=O(|\Delta p|) . \end{aligned}$$
(E.4)

The fact that both estimates are uniform with respect to v and p confined to bounded sets is obvious.

Appendix F. Proof of Lemma 10.3

As usual, c denotes various positive constants that may have different values in different places. Recall that \(\beta _0-s_0>0\). Set \(k:=\lceil \beta _0\rceil \), \(\nu :=k-\beta _0\). Thus, \(0\le \nu < 1\), and \(\nu =0\) if \(\beta _0\in {\mathbb {N}}\). Similarly to (9.2),

$$\begin{aligned} {\mathcal {B}}={\mathcal {W}}_1 \partial _{y_1}^k+{\mathcal {W}}_2, \end{aligned}$$
(F.1)

for some \({\mathcal {W}}_1\in S^{-\nu }({\mathcal {V}}\times {\mathbb {R}}^n)\), \({\mathcal {W}}_2\in S^{-\infty }({\mathcal {V}}\times {\mathbb {R}}^n)\).

F.1. Proof in the Case \(\beta _0\not \in \pmb {{\mathbb {N}}}\)

Let K(y, w) be the Schwartz kernel of \({\mathcal {W}}_1\). Suppose, for example, that \(P:=P(y)>0\). The case \(P<0\) is completely analogous. Initially, we can pick as \(\varkappa _2\) in Lemma 10.3 any constant that satisfies \(\varkappa _2\ge 2\varkappa _1\), where \(\varkappa _1\) is the same as in (7.11). This implies that \(P/2\ge \varkappa _1\epsilon \). Later (see the beginning of the proof of Lemma F.1), we update the choice of \(\varkappa _2\). Denote (cf. (10.5))

$$\begin{aligned} J_\epsilon (y):=({\mathcal {B}}g_\epsilon )(y)-({\mathcal {B}}g)(y)=({\mathcal {B}}\Delta g_\epsilon )(y). \end{aligned}$$
(F.2)

Then

$$\begin{aligned} \begin{aligned} J_\epsilon (y)=&J_\epsilon ^{(1)}(y)+J_\epsilon ^{(2)}(y)+O(\epsilon ^{s_0}),\\ J_\epsilon ^{(1)}(y):=&\int _{|P(w)|\ge P/2} K(y,y-w)\Delta g_\epsilon ^{(k)}(w)\text {d}w,\\ J_\epsilon ^{(2)}(y):=&\int _{|P(w)|\le P/2} K(y,y-w)\Delta g_\epsilon ^{(k)}(w)\text {d}w. \end{aligned} \end{aligned}$$
(F.3)

The big-O term in (F.3) appears because of the \(\Psi \)DO \({\mathcal {W}}_2\) in (F.1), and the magnitude of the term follows from (7.12) with \(l=0\). From (7.9) and (7.11) with \(l=k\), (F.3), and (9.4) with \(l=0\), it follows that

$$\begin{aligned} \begin{aligned} |J_\epsilon ^{(1)}(y)|&\le c\epsilon \int _{|P(w)|\ge P/2} \frac{|w_1-\psi (w^\perp )|^{s_0-1-k}}{|y-w|^{n-\nu }} \text {d}w\\&=c\epsilon \int _{|p|\ge P/2}\int \frac{|p|^{s_0-1-k}}{|([P-p]+\psi (y^\perp )-\psi (w^\perp ),y^\perp -w^\perp )|^{n-\nu }}\text {d}w^\perp \text {d}p. \end{aligned}\nonumber \\ \end{aligned}$$
(F.4)

Hence, we obtain similarly to (9.8)

$$\begin{aligned} \begin{aligned} |J_\epsilon ^{(1)}(y)|\le&c\epsilon \int \int _{|p|\ge P/2} \frac{|p|^{s_0-1-k}}{(|P-p|+|w^\perp |)^{n-\nu }} \text {d}p\text {d}w^\perp \\ \le&c\epsilon \int _{|p|\ge P/2} \frac{|p|^{s_0-1-k}}{|P-p|^{1-\nu }} \text {d}p=c\epsilon P^{s_0-1-\beta _0}. \end{aligned} \end{aligned}$$
(F.5)

To estimate \(J_\epsilon ^{(2)}(y)\), integrate by parts with respect to \(w_1\) in (F.3):

$$\begin{aligned} \begin{aligned} |J_\epsilon ^{(2)}(y)|\le&c\left( J_k+\sum _{l=l_0}^{k-1} (J_l^-+J_l^+)\right) ,\ l_0:=\lfloor s_0^-\rfloor ,\\ J_k:=&\int _{|P(w)|\le P/2} \left| \partial _{w_1}^{k-l_0}K(y,y-w)\Delta g_\epsilon ^{(l_0)}(w)\right| \text {d}w,\\ J_l^\pm :=&\int _{{\mathbb {R}}^{n-1}} \left| \partial _{w_1}^{k-1-l}K(y,y-w)\Delta g_\epsilon ^{(l)}(w)\right| _{w=(\psi (w^\perp )\pm P/2,w^\perp )}\text {d}w^\perp . \end{aligned} \end{aligned}$$
(F.6)

By construction, \(P/2 \ge \varkappa _1\epsilon \). Using (7.11) and (7.12) with \(l=l_0\) (both inequalities apply when \(l=l_0=\lfloor s_0^-\rfloor \)), and arguing similarly to (F.4) and (F.5), we obtain

$$\begin{aligned} \begin{aligned} J_k\le&c \int _{|p|\le \varkappa _1\epsilon } \frac{\epsilon ^{s_0-l_0}}{|P-p|^{\beta _0+1-l_0}} \text {d}p+ c\epsilon \int _{\varkappa _1\epsilon \le |p|\le P/2} \frac{|p|^{s_0-l_0-1}}{|P-p|^{\beta _0+1-l_0}} \text {d}p\\ \le \,&c\epsilon ^{s_0-l_0+1} P^{-(\beta _0+1-l_0)}+ c\epsilon P^{s_0-1-\beta _0}\int _{\varkappa _1(\epsilon /P) \le |{\check{p}}|\le 1/2} \frac{|{\check{p}}|^{s_0-l_0-1}}{|1-{\check{p}}|^{\beta _0+1-l_0}} \text {d}{\check{p}}\\ \le \,&c\epsilon ^{s_0-l_0+1} P^{-(\beta _0+1-l_0)}+ c\epsilon P^{s_0-1-\beta _0}\left( 1+(\epsilon /P)^{s_0-l_0}\right) , \end{aligned} \nonumber \\ \end{aligned}$$
(F.7)

where we have used that \(l_0<s_0\). Using again that \(\epsilon /P \le 1/(2\varkappa _1)\) gives \(J_k\le c\epsilon P^{s_0-1-\beta _0}\).

Next we estimate the boundary terms in (F.6). By (7.11) (using that \(\lfloor s_0^-\rfloor =l_0\le l\le k-1\)) and (9.4),

$$\begin{aligned} \begin{aligned} J_l^\pm \le c\epsilon \int _{{\mathbb {R}}^{n-1}} \frac{|w_1-\psi (w^\perp )|^{s_0-1-l}}{|y-w|^{n+\beta _0-1-l}} \text {d}w^\perp ,\ w_1=\psi (w^\perp )\pm P/2. \end{aligned} \end{aligned}$$
(F.8)

Appealing to (9.7) gives

$$\begin{aligned} \begin{aligned} J_l^\pm \le c\epsilon P^{s_0-1-l}\int _{{\mathbb {R}}^{n-1}} \frac{\text {d}w^\perp }{(P\pm (P/2)+|w^\perp |)^{n+\beta _0-l-1}}=c\epsilon P^{s_0-1-\beta _0}, \end{aligned} \end{aligned}$$
(F.9)

which finishes the proof. As easily checked, the integral in (F.9) converges because \(l\le k-1 <\beta _0\).

F.2. Proof in the Case \(\beta _0\in \pmb {{\mathbb {N}}}\)

Suppose now \(\beta _0\in {\mathbb {N}}\), i.e. \(k=\beta _0\) and \(\nu =0\). All the terms that do not involve integration over a neighborhood of the set \(\{w\in {\mathcal {V}}:\,P(w)=P\}\) are estimated the same way as before. For example, the estimation of \(J_\epsilon ^{(2)}(y)\) is completely analogous to (F.6)–(F.9), and we obtain the same bound \(|J_\epsilon ^{(2)}(y)|\le c\epsilon P^{s_0-1-\beta _0}\). Estimating \(J_\epsilon ^{(1)}\) is much more involved now, because the singularity at \(P(w)=P\) is no longer integrable. With some \(c_1>0\), to be selected later, we have:

$$\begin{aligned} \begin{aligned} J_\epsilon ^{(1)}(y)=&J_\epsilon ^{(1a)}(y)+J_\epsilon ^{(1b)}(y)+J_\epsilon ^{(1c)}(y),\\ J_\epsilon ^{(1a)}(y):=&\int _{P/2\le P(w) \le P-c_1\epsilon } K(y,y-w)\Delta g_\epsilon ^{(\beta _0)}(w)\text {d}w,\\ J_\epsilon ^{(1b)}(y):=&\int _{P-c_1\epsilon \le P(w) \le P+c_1\epsilon } K(y,y-w)\Delta g_\epsilon ^{(\beta _0)}(w)\text {d}w,\\ J_\epsilon ^{(1c)}(y):=&\int _{P+c_1\epsilon \le P(w)} K(y,y-w)\Delta g_\epsilon ^{(\beta _0)}(w)\text {d}w. \end{aligned} \end{aligned}$$
(F.10)

We need not estimate the integral \(\int _{P(w)\le P/2} (\cdot )\text {d}w\) separately, because the domain of integration is bounded away from the set \(\{w\in {\mathcal {V}}:\,P(w)=P\}\), and this integral admits the same bound as in the previous subsection (cf. (F.5)). Similarly to (F.5), by (7.11) with \(l=\beta _0\),

$$\begin{aligned} \begin{aligned} |J_\epsilon ^{(1a)}(y)|\le&c\epsilon \int _{P/2\le p\le P-c_1\epsilon } \frac{p^{s_0-1-\beta _0}}{P-p} \text {d}p=c\epsilon P^{s_0-1-\beta _0}\ln (P/\epsilon ),\\ |J_\epsilon ^{(1c)}(y)|\le&c\epsilon \int _{P+c_1\epsilon \le p} \frac{p^{s_0-1-\beta _0}}{p-P} \text {d}p=c\epsilon P^{s_0-1-\beta _0}\ln (P/\epsilon ). \end{aligned} \end{aligned}$$
(F.11)
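The logarithmic factor in (F.11) arises from integrating a first-order pole truncated at distance \(c_1\epsilon \) from the singularity. For a concrete instance with \(P=1\), \(c_1=1\), and exponent \(s_0-1-\beta _0=-2\) (all our choices), partial fractions give the integral in closed form, and the \(\ln (P/\epsilon )\) growth can be checked numerically:

```python
import math

def J_F11(eps, n=400000):
    """Midpoint quadrature of the model integral ∫_{1/2}^{1-eps} p^{-2}/(1-p) dp."""
    a, b = 0.5, 1.0 - eps
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        p = a + (i + 0.5) * h
        total += h / (p * p * (1.0 - p))
    return total

# Partial fractions give 1/(p^2 (1-p)) = 1/p^2 + 1/p + 1/(1-p), hence
# J_F11(eps) = ln(1/eps) + 1 + O(eps): the ln(P/eps) growth seen in (F.11).
for eps in [1e-3, 1e-5]:
    assert abs(J_F11(eps) - (math.log(1.0 / eps) + 1.0)) < 1e-2
```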

The term \(J_\epsilon ^{(1b)}\) is split further as follows:

$$\begin{aligned} \begin{aligned} J_\epsilon ^{(1b)}(y)=&J_\epsilon ^{(1b1)}(y)+J_\epsilon ^{(1b2)}(y)+J_\epsilon ^{(1b3)}(y),\\ J_\epsilon ^{(1b1)}(y):=&\int _{\begin{array}{c} |P-P(w)| \le c_1\epsilon \\ |y^\perp -w^\perp |\ge c_1P \end{array}} K(y,y-w)\Delta g_\epsilon ^{(\beta _0)}(w)\text {d}w,\\ J_\epsilon ^{(1b2)}(y):=&\int _{\begin{array}{c} |P-P(w)| \le c_1\epsilon \\ |y^\perp -w^\perp |\le c_1P \end{array}} K(y,y-w)(\Delta g_\epsilon ^{(\beta _0)}(w)-\Delta g_\epsilon ^{(\beta _0)}(y))\text {d}w,\\ J_\epsilon ^{(1b3)}(y):=&\Delta g_\epsilon ^{(\beta _0)}(y)I,\ I:=\int _{\begin{array}{c} |P-P(w)| \le c_1\epsilon \\ |y^\perp -w^\perp |\le c_1 P \end{array}} K(y,y-w)\text {d}w. \end{aligned} \end{aligned}$$
(F.12)

Similarly to (F.5), by (7.11) with \(l=\beta _0\),

$$\begin{aligned} \begin{aligned} |J_\epsilon ^{(1b1)}(y)|\le&c\epsilon \int _{|w^\perp |\ge c_1P}\int _{|P-p| \le c_1\epsilon } \frac{p^{s_0-1-\beta _0}}{(|P-p|+|w^\perp |)^n} \text {d}p \text {d}w^\perp \le c\epsilon ^2 P^{s_0-2-\beta _0}. \end{aligned} \nonumber \\ \end{aligned}$$
(F.13)

The second part is estimated by rearranging the \(\Delta g\) terms:

$$\begin{aligned} \begin{aligned} J_\epsilon ^{(1b2)}(y):=\int _{\begin{array}{c} |P-P(w)| \le c_1\epsilon \\ |y^\perp -w^\perp |\le c_1P \end{array}} K(y,y-w)\bigl [&(g_\epsilon ^{(\beta _0)}(w)-g_\epsilon ^{(\beta _0)}(y))\\&-(g^{(\beta _0)}(w)-g^{(\beta _0)}(y))\bigr ]\text {d}w. \end{aligned} \end{aligned}$$
(F.14)

Lemma F.1

There exist \(c,c_1,\varkappa _2>0\) so that

$$\begin{aligned} \begin{aligned}&|g^{(\beta _0)}(w)-g^{(\beta _0)}(y)|\le c|w-y|P(y)^{s_0-1-\beta _0},\\&|g_\epsilon ^{(\beta _0)}(w)-g_\epsilon ^{(\beta _0)}(y)|\le c|w-y|P(y)^{s_0-1-\beta _0},\\&\text {if}\quad y,w\in {\mathcal {V}},\ |P(y)-P(w)| \le c_1\epsilon ,\ |y^\perp -w^\perp |\le c_1P(y),\ P(y)>\varkappa _2\epsilon . \end{aligned} \end{aligned}$$
(F.15)

Proof

We begin by updating the choice of \(\varkappa _2\). Select \(\varkappa _2\ge 2\varkappa _1\) so that \(P\ge \varkappa _2\epsilon \) implies

$$\begin{aligned} P(v)\ge c P \text { for any }v\in {\mathcal {V}}, |v-y|\le \epsilon d_\varphi ,\ d_\varphi :=\text {diam}(\text {supp}\,\varphi ), \end{aligned}$$
(F.16)

for some \(c>0\).

Next we select \(c_1\). First, pick any \(c_1\) so that \(0<c_1\le \varkappa _1\). This ensures that \(P(w)\ge P-|P-P(w)|\ge \varkappa _1\epsilon \), and (7.11) can be used to estimate the derivatives of \(\Delta g_\epsilon (w)\). Let \(c_\psi :=\max _{v\in {\mathcal {V}}}|\psi '(v)|\). Our assumptions imply

$$\begin{aligned} |y_1-w_1|\le |P-P(w)|+|\psi (y^\perp )-\psi (w^\perp )|\le c_1(\epsilon +c_\psi P). \end{aligned}$$
(F.17)

Let v be any point on the line segment with the endpoints w and y, i.e. \(v=y+\lambda (w-y)\), \(0\le \lambda \le 1\). Then

$$\begin{aligned} P(v)\ge P-(|y_1-w_1|+|\psi (v^\perp )-\psi (y^\perp )|)\ge P-c_1(\epsilon +c_\psi P)-c_\psi c_1 P. \nonumber \\ \end{aligned}$$
(F.18)

Reducing \(c_1>0\) even further, we can ensure that \(P(v)\ge cP\) for some \(c>0\). This is the value of \(c_1\) that is assumed starting from (F.10). In the rest of the proof we assume that \(w,y\in {\mathcal {V}}\) satisfy the inequalities on the last line in (F.15) with the constants \(c_1\) and \(\varkappa _2\) that we have just selected.

From (7.5) with \(|m|=\beta _0+1\),

$$\begin{aligned} |g^{(\beta _0)}(w)-g^{(\beta _0)}(y)|\le & {} |w-y|\max _{0\le \lambda \le 1} |(\partial _y g^{(\beta _0)})(y+\lambda (w-y))| \nonumber \\\le & {} c|w-y|P^{s_0-1-\beta _0} \end{aligned}$$
(F.19)

for some \(c>0\).

To prove the second line in (F.15), find \(c_{2,3}>0\) such that

$$\begin{aligned} v\in {\mathcal {V}},|v-y|\le \epsilon (c_2+d_\varphi ) \text { implies } P(v)\ge c_3 P. \end{aligned}$$
(F.20)

By (F.16), \(c_{2,3}\) with the required properties do exist.

Now, assume first that \(|w-y|\ge c_2\epsilon \), where \(c_2\) is the same as in (F.20). Clearly,

$$\begin{aligned} \begin{aligned} |g_\epsilon ^{(\beta _0)}(w)-g_\epsilon ^{(\beta _0)}(y)|\le |\Delta g_\epsilon ^{(\beta _0)}(w)|&+|g^{(\beta _0)}(w)-g^{(\beta _0)}(y)|+|\Delta g_\epsilon ^{(\beta _0)}(y)|. \end{aligned}\nonumber \\ \end{aligned}$$
(F.21)

By construction, (7.11) applies to \(\Delta g_\epsilon ^{(\beta _0)}(w)\). Applying (7.11) to the first and third terms on the right in (F.21), and (F.19) to the second term, gives

$$\begin{aligned} \begin{aligned} |g_\epsilon ^{(\beta _0)}(w)-g_\epsilon ^{(\beta _0)}(y)|&\le c\epsilon P(w)^{s_0-1-\beta _0}+c|w-y|P^{s_0-1-\beta _0}+c\epsilon P^{s_0-1-\beta _0}\\&\le c|w-y|P^{s_0-1-\beta _0}, \end{aligned} \end{aligned}$$
(F.22)

because \(\epsilon \le (1/c_2)|w-y|\) and

$$\begin{aligned} P(w)\ge P-|P-P(w)|\ge P(1-c_1(\epsilon /P))\ge P(1-c_1/(2\varkappa _1))\ge P/2. \qquad \end{aligned}$$
(F.23)

If \(|w-y|\le c_2\epsilon \), we argue similarly to (C.22):

$$\begin{aligned} \begin{aligned} g_\epsilon ^{(\beta _0)}(w)-g_\epsilon ^{(\beta _0)}(y)&=\epsilon ^{-\beta _0}\sum _j\left( (\partial _{v_1}^{\beta _0}\varphi )\left( \frac{w-{{\hat{y}}}^j}{\epsilon }\right) -(\partial _{v_1}^{\beta _0}\varphi )\left( \frac{y-{{\hat{y}}}^j}{\epsilon }\right) \right) \\&\qquad \times \sum _{|m|=\beta _0+1}R_m({{\hat{y}}}^j,y)({{\hat{y}}}^j-y)^{m}\\ |R_m({{\hat{y}}}^j,y)|&\le \frac{1}{(\beta _0+1)!}\max _{|m'|=\beta _0+1,|y-v|\le (c_2+d_\varphi )\epsilon }|\partial _v^{m'} g(v)|. \end{aligned} \end{aligned}$$
(F.24)

Here \((\partial _{v_1}\varphi )(\cdot )\) is the derivative of \(\varphi (v)\) with respect to \(v_1\) evaluated at the indicated point. By (F.20), (7.5) implies \(|R_m({{\hat{y}}}^j,y)|\le cP^{s_0-1-\beta _0}\). The assertion follows because \(\varphi \in C_0^{\lceil \beta _0^+\rceil }({\mathbb {R}}^n)\). \(\square \)

Applying (9.4) with \(\nu =l=0\) and (F.15) in (F.14) yields (cf. (9.6)–(9.8))

$$\begin{aligned} \begin{aligned} |J_\epsilon ^{(1b2)}(y)|\le \,&cP^{s_0-1-\beta _0}\int _{\begin{array}{c} |P-P(w)| \le c_1\epsilon \\ |y^\perp -w^\perp |\le c_1 P \end{array}} \frac{|w-y|}{|y-w|^n} \text {d}w\\ \le&cP^{s_0-1-\beta _0}\int _{|w^\perp |\le c_1P}\int _{|P-p| \le c_1\epsilon } \frac{1}{(|P-p|+|w^\perp |)^{n-1}} \text {d}p\text {d}w^\perp \\ \le&cP^{s_0-1-\beta _0}\int _{|w^\perp |\le c_1 P} \int _{|p| \le c_1\epsilon } \frac{\text {d}p\text {d}w^\perp }{(|p|+|w^\perp |)^{n-1}} =c\epsilon P^{s_0-1-\beta _0}\ln \biggl (\frac{P}{\epsilon }\biggr ). \end{aligned} \end{aligned}$$
(F.25)

The final major step is to estimate the integral in the definition of \(J_\epsilon ^{(1b3)}\).

$$\begin{aligned} \begin{aligned} I=&\int _{|y^\perp -w^\perp |\le c_1 P}\int _{|(y_1-w_1)-(\psi (y^\perp )-\psi (w^\perp ))|\le c_1\epsilon }K(y,(y_1-w_1,y^\perp -w^\perp )) \text {d}w_1\text {d}w^\perp \\ =&\int _{|v^\perp |\le c_1 P}\int _{-c_1 \epsilon }^{c_1 \epsilon }K(y,(v_1+h(v^\perp ),v^\perp )) \text {d}v_1\text {d}v^\perp ,\ h(v^\perp ):=\psi (y^\perp )-\psi (y^\perp -v^\perp ). \end{aligned} \end{aligned}$$
(F.26)

Let \({{\tilde{W}}}(y,\eta )\) be the amplitude of \({\mathcal {W}}_1\in S^0({\mathcal {V}}\times {\mathbb {R}}^n)\) in (F.1). Then

$$\begin{aligned} \begin{aligned} I&=c \int _{|v^\perp |\le c_1P}\int _{-c_1\epsilon }^{c_1\epsilon }\int _\Omega {{\tilde{W}}}(y,\eta ) e^{-i(\eta _1 (v_1+h(v^\perp ))+\eta ^\perp v^\perp )}\text {d}\eta \text {d}v_1\text {d}v^\perp \\&=c \int _\Omega \frac{\sin (c_1\epsilon \eta _1)}{\eta _1}\int _{|v^\perp |\le c_1 P}{{\tilde{W}}}(y,\eta ) e^{-i(\eta _1 h(v^\perp )+\eta ^\perp v^\perp )}\text {d}v^\perp \text {d}\eta . \end{aligned} \end{aligned}$$
(F.27)

Our goal is to show that I is uniformly bounded for all \(\epsilon >0\) sufficiently small and P that satisfy \(P/\epsilon \ge \varkappa _2>0\). We can select \({\mathcal {W}}_{1,2}\) in (F.1) so that the conic supports of their amplitudes are contained in that of \({\mathcal {B}}\). First, consider only the principal symbol of \({\mathcal {W}}_1\), which we denote \(\tilde{W}_0(y,\eta )\). We can assume that \({{\tilde{W}}}_0(y,\eta )\equiv 0\) if \(\eta \not \in \Omega \), \(\eta \not =0\), where \(\Omega \subset {\mathbb {R}}^n\setminus \{0\}\) is a small conic neighborhood of \(\Theta _0\cup (-\Theta _0)\). This set is used in (F.27). The corresponding value of I, which is obtained by replacing \(\tilde{W}(y,\eta )\) with \({{\tilde{W}}}_0(y,\eta )\) in (F.27), is denoted \(I_0\).

As \({{\tilde{W}}}_0(y,\eta )\) is positively homogeneous of degree zero in \(\eta \), set

$$\begin{aligned} {{\tilde{W}}}^\pm (y,u):={{\tilde{W}}}_0(y,\eta _1(1,u))={{\tilde{W}}}_0(y,\pm (1,u)),\ u=\eta ^\perp /\eta _1\in \Omega ^\perp , \end{aligned}$$
(F.28)

where \(\Omega ^\perp \) is a small neighborhood of the origin in \({\mathbb {R}}^{n-1}\): \(\Omega ^\perp :=\{u\in {\mathbb {R}}^{n-1}: u=\eta ^\perp /\eta _1,\eta \in \Omega \}\). The sign \('+'\) is selected if \(\eta _1>0\), and the sign \('-'\) otherwise. By the properties of \({\mathcal {W}}_1\), \({{\tilde{W}}}^{\pm }(y,\cdot )\in C_0^{\infty }(\Omega ^\perp )\). Thus, (F.27) implies

$$\begin{aligned} \begin{aligned} I_0&=c \int _{{\mathbb {R}}}\frac{\sin (c_1\epsilon \eta _1)}{\eta _1} \int _{|v^\perp |\le c_1P}\int _{\Omega ^\perp }{{\tilde{W}}}^\pm (y,u) e^{-i\eta _1 u v^\perp }\text {d}u e^{-i\eta _1 h(v^\perp )} \text {d}v^\perp |\eta _1|^{n-1}\text {d}\eta _1\\&=c \int _{{\mathbb {R}}}\frac{\sin (c_1\epsilon \eta _1)}{\eta _1} \int _{|v^\perp |\le c_1P}W^\pm (y,\eta _1v^\perp ) e^{-i\eta _1 h(v^\perp )} \text {d}v^\perp |\eta _1|^{n-1}\text {d}\eta _1\\&=c \int _{{\mathbb {R}}}\frac{\sin (c_1\epsilon \eta _1)}{\eta _1} \int _{|w^\perp |\le c_1P|\eta _1|}W^\pm (y,w^\perp ) e^{-i\eta _1 h(w^\perp /\eta _1)} \text {d}w^\perp \text {d}\eta _1\\&=c \int _{{\mathbb {R}}}\frac{\sin (\lambda )}{\lambda } \int _{|w^\perp |\le \frac{P}{\epsilon }|\lambda |}W^\pm (y,w^\perp ) \exp \left( -i\lambda \frac{h(c_1\epsilon w^\perp /\lambda )}{c_1\epsilon }\right) \text {d}w^\perp \text {d}\lambda , \end{aligned} \end{aligned}$$
(F.29)

where \(W^\pm (y,w^\perp )\) is the inverse Fourier transform of \({{\tilde{W}}}^\pm (y,u)\) with respect to u. Since \(P/\epsilon \) is bounded away from zero, \(h(0)=0\), and \(W^\pm (y,w^\perp )\) is smooth and rapidly decreasing as a function of \(w^\perp \), we have by the dominated convergence theorem

$$\begin{aligned} \begin{aligned}&\int _{|w^\perp |\le \frac{P}{\epsilon }|\lambda |}W^\pm (y,w^\perp ) \exp \left( -i\lambda \frac{h(c_1\epsilon w^\perp /\lambda )}{c_1\epsilon }\right) \text {d}w^\perp \\&\rightarrow \int _{{\mathbb {R}}^{n-1}}W^\pm (y,w^\perp )e^{-i h'(0)\cdot w^\perp }\text {d}w^\perp ={{\tilde{W}}}^\pm (y,-\psi '(y^\perp ))=\tilde{W}_0(y,\pm (1,-\psi '(y^\perp ))) \end{aligned} \end{aligned}$$
(F.30)

as \(\lambda \rightarrow \pm \infty \), and the convergence is uniform with respect to \(\epsilon \) and P satisfying \(P/\epsilon \ge \varkappa _2\). Note that \(\begin{pmatrix} 1\\ -\psi '(y^\perp ) \end{pmatrix}\) is a vector normal to \(\Gamma \) at the point \(\begin{pmatrix} \psi (y^\perp )\\ y^\perp \end{pmatrix}\).
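The convergence in (F.30) can be checked numerically in a simple one-dimensional model (a sketch for illustration only, not part of the proof; the Gaussian profile and the phase below are chosen for convenience and are not from the paper): take \(n-1=1\), \(c_1=\epsilon =1\), \(P/\epsilon =1\), the profile \(W(w)=e^{-w^2}\) in place of \(W^\pm \), and the toy phase \(h(v)=\sin v\), so that \(h(0)=0\) and \(h'(0)=1\). The truncated oscillatory integral should then approach \(\int e^{-w^2}e^{-iw}\,\text {d}w=\sqrt{\pi }\,e^{-1/4}\) as \(\lambda \rightarrow \infty \).

```python
import cmath
import math

def truncated_integral(lam, h, step=1e-3):
    """Trapezoid approximation of the truncated oscillatory integral
    int_{|w| <= lam} W(w) exp(-i*lam*h(w/lam)) dw with W(w) = exp(-w^2),
    a model case of (F.30) with n-1 = 1, c1 = eps = 1, P/eps = 1."""
    R = min(lam, 8.0)  # the Gaussian weight is negligible beyond |w| ~ 8
    n = int(2 * R / step)
    total = 0j
    for k in range(n + 1):
        w = -R + k * step
        weight = 0.5 if k in (0, n) else 1.0  # trapezoid end-point weights
        total += weight * math.exp(-w * w) * cmath.exp(-1j * lam * h(w / lam))
    return total * step

# Toy phase with h(0) = 0, h'(0) = 1; the limiting value is the Fourier
# transform of the Gaussian evaluated at h'(0): sqrt(pi) * exp(-1/4).
target = math.sqrt(math.pi) * math.exp(-0.25)
errors = [abs(truncated_integral(lam, math.sin) - target)
          for lam in (10.0, 100.0, 1000.0)]
```

For this particular phase the error decays faster than the generic \(O(|\lambda |^{-1})\) rate of (F.31), since \(h''(0)=0\); the point of the experiment is only that the error shrinks as \(\lambda \) grows.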

The remainder term in (F.30) is bounded by the expression

$$\begin{aligned} \begin{aligned}&\int _{|w^\perp |\le \frac{P}{\epsilon }|\lambda |}|W^\pm (y,w^\perp )| \left| \exp \left( -i \frac{\lambda h(c_1\epsilon w^\perp /\lambda )}{c_1\epsilon }+ih'(0)\cdot w^\perp \right) -1\right| \text {d}w^\perp \\&\qquad +\int _{|w^\perp |\ge \frac{P}{\epsilon }|\lambda |}|W^\pm (y,w^\perp )| \text {d}w^\perp \\&\quad \le c \frac{\epsilon }{|\lambda |}\int _{{\mathbb {R}}^{n-1}}|W^\pm (y,w^\perp )| |w^\perp |^2 \text {d}w^\perp \\&\qquad +\int _{|w^\perp |\ge \varkappa _2 |\lambda |}|W^\pm (y,w^\perp )| \text {d}w^\perp =O(|\lambda |^{-1}). \end{aligned} \end{aligned}$$
(F.31)

Due to \({{\tilde{W}}}^{\pm }(y,\cdot )\in C_0^{\infty }(\Omega ^\perp )\), the big-O term on the right-hand side of (F.31) is uniform with respect to \(y\in {\mathcal {V}}\) and \(0<\epsilon \le 1\). Hence
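For completeness, the first inequality in (F.31) follows from the elementary estimate \(|e^{i\theta }-1|\le |\theta |\) combined with a second-order Taylor expansion of h at the origin (recall \(h(0)=0\)): with \(v=c_1\epsilon w^\perp /\lambda \),

$$\begin{aligned} \left| \frac{\lambda \,h(c_1\epsilon w^\perp /\lambda )}{c_1\epsilon }-h'(0)\cdot w^\perp \right| \le \frac{|\lambda |}{c_1\epsilon }\cdot \frac{\sup |h''|}{2}\left| \frac{c_1\epsilon w^\perp }{\lambda }\right| ^2 =c\,\frac{\epsilon }{|\lambda |}|w^\perp |^2,\quad c=\frac{c_1\sup |h''|}{2}. \end{aligned}$$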

$$\begin{aligned} \begin{aligned} I_0&= c \int _{{\mathbb {R}}}\frac{\sin (\lambda )}{\lambda } {{\tilde{W}}}_0(y,\lambda (1,-\psi '(y^\perp ))) \text {d}\lambda +O(1)\\&=c \frac{\pi }{2} \left[ {{\tilde{W}}}_0(y,(1,-\psi '(y^\perp )))+\tilde{W}_0(y,-(1,-\psi '(y^\perp )))\right] +O(1), \end{aligned} \end{aligned}$$
(F.32)

where O(1) is uniform with respect to \(y\in {\mathcal {V}}\) as well, which proves that \(I_0\) is uniformly bounded.
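The second equality in (F.32) uses that \({{\tilde{W}}}_0\) is positively homogeneous of degree zero in \(\eta \) together with the classical value \(\int _0^\infty \lambda ^{-1}\sin \lambda \,\text {d}\lambda =\pi /2\). Writing \(\theta _0:=(1,-\psi '(y^\perp ))\) and using that \(\sin \lambda /\lambda \) is even,

$$\begin{aligned} \int _{{\mathbb {R}}}\frac{\sin \lambda }{\lambda }\,{{\tilde{W}}}_0(y,\lambda \theta _0)\,\text {d}\lambda =\left[ {{\tilde{W}}}_0(y,\theta _0)+{{\tilde{W}}}_0(y,-\theta _0)\right] \int _0^\infty \frac{\sin \lambda }{\lambda }\,\text {d}\lambda =\frac{\pi }{2}\left[ {{\tilde{W}}}_0(y,\theta _0)+{{\tilde{W}}}_0(y,-\theta _0)\right] . \end{aligned}$$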

The remaining term \(\Delta I=I-I_0\) comes from the subprincipal terms of the amplitude, \(\Delta {{\tilde{W}}}={{\tilde{W}}}-{{\tilde{W}}}_0\). The corresponding \(\Psi \)DO has amplitude in \(S^{-\nu }({\mathcal {V}}\times {\mathbb {R}}^n)\) for some \(\nu >0\), so its Schwartz kernel \(\Delta K(y,w)\) is smooth away from \(w=0\) and absolutely integrable at \(w=0\). Hence \(\Delta I\) is bounded as well.

By Lemma 7.4 (use (7.11) with \(l=k=\beta _0\)), \(|\Delta g_\epsilon ^{(\beta _0)}(y)|\le c\epsilon P^{s_0-1-\beta _0}\) if \(P\ge \varkappa _1\epsilon \); combined with (F.12), this proves that \(|J_\epsilon ^{(1b2)}(y)|\le c\epsilon P^{s_0-1-\beta _0}\). By (F.12), (F.13), and (F.25), we conclude that \(|J_\epsilon ^{(1b)}(y)|\le c\epsilon P^{s_0-1-\beta _0}\ln (P/\epsilon )\). Combining this with (F.10) and (F.11) finishes the proof.

Appendix G. Proof of Lemma 11.1

We begin by proving (11.8). From (3.5) and (3.9), \(|\text {d}\Psi |\partial \Phi _1/\partial y_1=1\), i.e. \(\partial \Phi _1/\partial y_1>0\).

Recall that \(y=Y(y^{(2)},x)\) is found by solving \(x^{(1)}=\Phi ^{(1)}(x^{(2)},y)\) for \(y^{(1)}\). Differentiating the identity \(x_1\equiv \Phi _1(x^{(2)},(Y^{(1)}(y^{(2)},x),y^{(2)}))\) with respect to \(x_1\) gives \(1=(\partial \Phi _1/\partial y_1)(\partial Y_1/\partial x_1)\). Since \(\partial \Phi _1/\partial x^{(2)}=0\) and \(\partial \Phi _1/\partial y^\perp =0\), differentiating the same identity with respect to \(x^\perp \) gives \(0=({\partial \Phi _1}/{\partial y_1})({\partial Y_1}/{\partial x^\perp })\), which proves all the statements in (11.8).
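In summary, since \(\partial \Phi _1/\partial y_1>0\), the two chain-rule computations give

$$\begin{aligned} \frac{\partial Y_1}{\partial x_1}=\left( \frac{\partial \Phi _1}{\partial y_1}\right) ^{-1}>0,\qquad \frac{\partial Y_1}{\partial x^\perp }=0. \end{aligned}$$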

By (3.13), (6.3), and (6.14),

$$\begin{aligned} |\det Q|^{1/2}=\frac{|\det M_{22}|}{|\det (\Psi \circ \Phi )_{x^{(2)}x^{(2)}}|^{1/2}} =\frac{|\det (\partial ^2\Phi _1/\partial x^{(2)}\partial y^{(2)})|}{|\text {d}\Psi |^{N/2}|\det \Delta \text {II}_{{\mathcal {S}}}|^{1/2}}. \end{aligned}$$
(G.1)

Using that \(|\text {d}\Psi |\partial \Phi _1/\partial y_1=1\) completes the proof.


Cite this article

Katsevich, A. Resolution Analysis of Inverting the Generalized N-Dimensional Radon Transform in \(\pmb {\mathbf {\mathbb {R}}^n}\) from Discrete Data. J Fourier Anal Appl 29, 6 (2023). https://doi.org/10.1007/s00041-022-09975-x
