Abstract
Let \({\mathcal {R}}\) denote the generalized Radon transform, which integrates over a family of N-dimensional smooth submanifolds \({\mathcal {S}}_{{{\tilde{y}}}}\subset {\mathcal {U}}\), \(1\le N\le n-1\), where an open set \({\mathcal {U}}\subset {\mathbb {R}}^n\) is the image domain. The submanifolds are parametrized by points \({{\tilde{y}}}\in {{\tilde{{\mathcal {V}}}}}\), where an open set \({{\tilde{{\mathcal {V}}}}}\subset {\mathbb {R}}^n\) is the data domain. We assume that the canonical relation \({{\tilde{C}}}\) from \(T^*{\mathcal {U}}\) to \(T^*{{\tilde{{\mathcal {V}}}}}\) of \({\mathcal {R}}\) is a local canonical graph (when \({\mathcal {R}}\) is viewed as a Fourier integral operator). The continuous data are denoted by g, and the reconstruction is \({\check{f}}={\mathcal {R}}^*{\mathcal {B}}g\). Here \({\mathcal {R}}^*\) is a weighted adjoint of \({\mathcal {R}}\), \({\mathcal {B}}\) is a pseudo-differential operator, and g is a conormal distribution. The discrete data consist of the values of g on a regular lattice with step size \(O(\epsilon )\). Let \({\mathcal {S}}\) denote the singular support of \({\check{f}}\), and \({\check{f}}_\epsilon ={\mathcal {R}}^*{\mathcal {B}}g_\epsilon \) be the reconstruction from interpolated discrete data \(g_\epsilon ({{\tilde{y}}})\). Pick a point \(x_0\in {\mathcal {S}}\) such that the singularity of \({\check{f}}\) at \(x_0\) is visible from the data. The main result of the paper is the computation of the limit
Here \(\kappa \ge 0\) is selected based on the strength of the reconstructed singularity, and \({\check{x}}\) is confined to a bounded set. The limiting function \(\text {DTB}({\check{x}})\), which we call the discrete transition behavior, contains full information about the resolution of reconstruction.
References
Abels, H.: Pseudodifferential and Singular Integral Operators: An Introduction with Applications. De Gruyter, Berlin/Boston (2012)
Airapetyan, R.G., Ramm, A.G.: Singularities of the Radon transform. Appl. Anal. 79, 351–371 (2001)
Andersson, F., De Hoop, M.V., Wendt, H.: Multiscale discrete approximation of Fourier integral operators. Multiscale Model. Simul. 10(1), 111–145 (2012)
Candes, E., Demanet, L., Ying, L.: Fast computation of Fourier integral operators. SIAM J. Sci. Comput. 29, 2464–2493 (2007)
Candes, E., Demanet, L., Ying, L.: A fast butterfly algorithm for the computation of Fourier integral operators. SIAM Multiscale Model. Simul. 7, 1727–1750 (2009)
Duistermaat, J.J., Hörmander, L.: Fourier integral operators. II. Acta Math. 128, 183–269 (1972)
Faridani, A.: Sampling theory and parallel-beam tomography. In: Benedetto, J.J. (ed.) Sampling, Wavelets, and Tomography. Applied and Numerical Harmonic Analysis, vol. 63, pp. 225–254. Birkhäuser, Boston (2004)
Faridani, A., Buglione, K., Huabsomboon, P., Iancu, O., McGrath, J.: Introduction to local tomography. In: Radon Transforms and Tomography. Contemp. Math. 278, 29–47 (2001)
Gelfand, I.M., Shilov, G.E.: Generalized Functions, Volume 1: Properties and Operations. Academic Press, New York (1964)
Greenleaf, A., Seeger, A.: Oscillatory and Fourier integral operators with degenerate canonical relations. Publicacions Matemàtiques 48, 93–141 (2002)
Guillemin, V., Sternberg, S.: Geometric Asymptotics. Mathematical Surveys, vol. 14. American Mathematical Society, Providence (1977)
Hörmander, L.: Fourier integral operators. I. Acta Math. 127, 79–183 (1971)
Hörmander, L.: The Analysis of Linear Partial Differential Operators III. Pseudo-Differential Operators. Springer-Verlag, Berlin (2007)
Hörmander, L.: The Analysis of Linear Partial Differential Operators IV. Fourier Integral Operators. Springer-Verlag, Berlin (2009)
Kalender, W.A.: Computed Tomography. Fundamentals, System Technology, Image Quality, Applications, 3rd edn. Publicis, Erlangen (2011)
Katsevich, A.: Asymptotics of pseudodifferential operators acting on functions with corner singularities. Appl. Anal. 72, 229–252 (1999)
Katsevich, A.: A local approach to resolution analysis of image reconstruction in tomography. SIAM J. Appl. Math. 77(5), 1706–1732 (2017)
Katsevich, A.: Analysis of reconstruction from discrete Radon transform data in \({\mathbb{R} }^3\) when the function has jump discontinuities. SIAM J. Appl. Math. 79, 1607–1626 (2019)
Katsevich, A.: Analysis of resolution of tomographic-type reconstruction from discrete data for a class of distributions. Inverse Probl. 36(12), 124008 (2020)
Katsevich, A.: Resolution analysis of inverting the generalized Radon transform from discrete data in \({\mathbb{R} }^3\). SIAM J. Math. Anal. 52, 3990–4021 (2020)
Kuipers, L., Niederreiter, H.: Uniform Distribution of Sequences. Dover Publications Inc, Mineola (2006)
Monard, F., Stefanov, P.: Sampling the X-ray transform on simple surfaces. http://arxiv.org/abs/2110.05761 (2021)
Natterer, F.: Sampling in fan beam tomography. SIAM J. Appl. Math. 53, 358–380 (1993)
Orhan, K. (ed.): Micro-computed Tomography (micro-CT) in Medicine and Engineering. Springer Nature, Switzerland (2020)
Palamodov, V.P.: Localization of harmonic decomposition of the Radon transform. Inverse Probl. 11, 1025–1030 (1995)
Quinto, E.T.: The dependence of the generalized Radon transforms on defining measures. Trans. Am. Math. Soc. 257, 331–346 (1980)
Ramm, A., Katsevich, A.: The Radon Transform and Local Tomography. CRC Press, Boca Raton (1996)
Ramm, A.G., Zaslavsky, A.I.: Singularities of the Radon transform. Bull. Am. Math. Soc. 25, 109–115 (1993)
Salo, M.: Applications of microlocal analysis in inverse problems. Mathematics 8, 1184 (2020)
Sawano, Y.: Theory of Besov Spaces. Springer, Singapore (2018)
Stefanov, P.: Semiclassical sampling and discretization of certain linear inverse problems. SIAM J. Math. Anal. 52, 5554–5597 (2020)
Stefanov, P.: The Radon transform with finitely many angles. http://arxiv.org/abs/2208.05936v1, pp. 1–30 (2022)
Treves, F.: Introduction to Pseudodifferential and Fourier Integral Operators. Volume 2: Fourier Integral Operators. The University Series in Mathematics, Plenum, New York (1980)
Tuy, H.K.: An inversion formula for cone-beam reconstruction. SIAM J. Appl. Math. 43, 546–552 (1983)
Yang, H.: Oscillatory Data Analysis and Fast Algorithms for Integral Operators. PhD thesis, Stanford University (2015)
Communicated by Todd Quinto
This work was supported in part by NSF Grant DMS-1906361.
Appendices
Appendix A. Proof of Lemma 3.5
We begin by constructing an orthogonal matrix \({\check{U}}\) such that the intermediate coordinates \({\check{y}}={\check{U}}^T({{\tilde{y}}}-{{\tilde{y}}}_0)\) and the intermediate function \({{\check{\Phi }}}(t,{\check{y}})={{\tilde{\Phi }}}(t,{\check{U}}{\check{y}}+{{\tilde{y}}}_0)\) satisfy (3.10). Here \({\check{y}}=(z^{(1)},y^{(2)})^T\). The final coordinates y and the intermediate coordinates \({\check{y}}\) will have the same \(y^{(2)}\) component; this is why we write \(y^{(2)}\) in the definition of \({\check{y}}\).
Let \(V_1\Sigma V_2^T\) be the SVD of the Jacobian matrix \({{\tilde{\Phi }}}_{{{\tilde{y}}}}^{(1)}\). To remind the reader, \({{\tilde{\Phi }}}_{{{\tilde{y}}}}^{(1)}\) stands for the matrix of partial derivatives \(\partial _{{{\tilde{y}}}}{{\tilde{\Phi }}}^{(1)}(x^{(2)},{{\tilde{y}}})\) evaluated at \((x^{(2)}_0,{{\tilde{y}}}_0)\). Here \(V_1\in O(n-N)\) and \(V_2\in O(n)\) are orthogonal matrices, and \(\Sigma \) is a rectangular \((n-N)\times n\) matrix with \(\Sigma _{ij}=0\), \(i\not =j\), and \(\Sigma _{ii}>0\), \(1\le i \le n-N\). The latter property follows from Assumption 3.1(G2) and \({{\tilde{\Phi }}}^{(1)}_{x^{(2)}}=0\), which yield that \(\text {rank}{{\tilde{\Phi }}}^{(1)}_{{{\tilde{y}}}}=n-N\). Then we can take \({\check{U}}=V_2\), so \({\check{y}}=V_2^T({{\tilde{y}}}-{{\tilde{y}}}_0)\). Indeed,
Here \(V_2^{(2)}\) is the \(n\times N\) matrix consisting of the last N columns of \(V_2\). Likewise,
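The role of the SVD here can be illustrated numerically. The sketch below is a minimal check, with a randomly generated full-rank matrix standing in for the Jacobian \({{\tilde{\Phi }}}_{{{\tilde{y}}}}^{(1)}\) (an assumption made purely for illustration): after the rotation by \({\check{U}}=V_2\), the last N columns of the rotated Jacobian vanish, which is the block structure required by (3.10).

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 5, 2                              # ambient dimension and submanifold dimension
A = rng.standard_normal((n - N, n))      # stand-in for the Jacobian tilde-Phi^(1)_y

# SVD: A = V1 @ Sigma @ V2.T with V1 in O(n-N), V2 in O(n)
V1, sigma, V2t = np.linalg.svd(A, full_matrices=True)
V2 = V2t.T

# A random (n-N) x n Gaussian matrix has rank n-N almost surely,
# mirroring rank tilde-Phi^(1)_y = n - N in the lemma
assert np.all(sigma > 1e-12)

# In the rotated coordinates check-y = V2.T (tilde-y - tilde-y0), the Jacobian
# becomes A @ V2 = V1 @ Sigma, whose last N columns are zero
rotated = A @ V2
assert np.allclose(rotated[:, n - N:], 0.0)
```

Thus differentiation with respect to the last N rotated coordinates annihilates \({{\tilde{\Phi }}}^{(1)}\) to first order, as claimed.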
The final coordinates y and the final orthogonal matrix U can be found as follows. As was already mentioned, we keep the coordinates \(y^{(2)}\) the same and rotate the \(z^{(1)}\) coordinates: \(z^{(1)}\rightarrow y^{(1)}\). Hence (A.1), (A.2) still hold with \({{\check{\Phi }}}\) and \(z^{(1)}\) replaced by \(\Phi \) and \(y^{(1)}\), respectively, and the new y coordinates satisfy (3.10).
By (3.5), the rotation \(z^{(1)}\rightarrow y^{(1)}\) should be selected so that
If \(V\in O(n-N)\) is such that \(z^{(1)}=Vy^{(1)}\), then (A.3) implies that the second through the last columns of V form an orthonormal basis of the subspace of \({\mathbb {R}}^{n-N}\) consisting of vectors orthogonal to \(\partial {{\check{\Phi }}}_1/\partial z^{(1)}\). It is clear that such a basis can be found. Then the matrix U becomes
Our construction ensures that all components of the vector \(\partial _y\Phi _1\), except, possibly, the first one, are zero. By (A.2), the first component is not zero. Multiplying \(\Psi (y)\) by a constant, we can make sure that \(\partial _{y_1}(\Psi \circ \Phi )=1\).
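The existence of the rotation V can also be checked constructively. The sketch below is a toy instance, with a random vector standing in for \(\partial {{\check{\Phi }}}_1/\partial z^{(1)}\): QR factorization yields an orthogonal matrix whose first column is parallel to the given vector, so that its remaining columns form the required orthonormal basis of the orthogonal complement.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 3                                    # stands for n - N
v = rng.standard_normal(m)               # stand-in for d(check-Phi_1)/dz^(1), assumed nonzero

# Complete v to a basis by random columns (full rank almost surely), then
# orthonormalize with QR: the first column of V is +/- v/|v|, and columns
# 2..m form an orthonormal basis of {v}^perp, as required for (A.3)
M = np.column_stack([v, rng.standard_normal((m, m - 1))])
V, _ = np.linalg.qr(M)

assert np.allclose(V.T @ V, np.eye(m))   # V is orthogonal
assert np.allclose(V[:, 1:].T @ v, 0.0)  # last m-1 columns are orthogonal to v
```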
Appendix B. Behavior of \({\mathcal {R}}f\) Near \(\Gamma \)
Suppose \(f\in {\mathcal {E}}'({\mathcal {U}})\) is given by
where \(\Psi \) is the same as in Sects. 3, 6, and \({{\tilde{f}}}\) satisfies
for some \(a,c_m,s_0,s_1\), \({{\tilde{R}}}\), and \({{\tilde{f}}}^\pm \). If \(g={\mathcal {R}}f\) for a sufficiently regular f, then g has more regularity (\(s_0>(N/2)-1\)) than in the general case (4.15), where only \(s_0>0\) is assumed.
From (2.3), after changing variables and the defining function (\(t\rightarrow x^{(2)}\), \({{\tilde{y}}}\rightarrow y\), \({{\tilde{\Phi }}}\rightarrow \Phi \)), the GRT of f is given by
where \(x=\Phi (x^{(2)},y)\).
Consider the second equation for \({\mathcal {T}}_{\mathcal {S}}\) in (6.4) and solve it for \(x^{(2)}\). Since \(\det (\Psi \circ \Phi )_{x^{(2)}x^{(2)}} \not =0\), the solution \(x^{(2)}_*=x^{(2)}_*(y)\) is a smooth function. The function \(x^{(2)}_*(y)\) here is different from \(x^{(2)}(y^\perp )\) in the paragraph following (3.16), because now we solve only the second of the two equations that define \({\mathcal {T}}_{\mathcal {S}}\). The asymptotics as \(\lambda \rightarrow \infty \) of the integral with respect to \(x^{(2)}\) in (B.3) is computed with the help of the stationary phase method [33, Chapter VIII, Eqs. (2.14)–(2.20)]
for some \({{\tilde{R}}}\). Here \(x_*=\Phi (x^{(2)}_*(y),y)\), and \(\text {sgn}\,M\) for a symmetric matrix M denotes the signature of M, i.e. the number of positive eigenvalues of M minus the number of negative eigenvalues.
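The stationary phase formula invoked above can be tested numerically. The sketch below is an illustration only, with an assumed model phase \(\phi (t)=t^2/2\) and Gaussian amplitude (neither taken from the paper); it compares the oscillatory integral with the leading term \((2\pi /\lambda )^{1/2}|\phi ''(t_0)|^{-1/2}e^{i\pi \,\text {sgn}\,\phi ''(t_0)/4}\,e^{i\lambda \phi (t_0)}a(t_0)\).

```python
import numpy as np

lam = 200.0
t = np.linspace(-10.0, 10.0, 400_001)
dt = t[1] - t[0]
a = np.exp(-t**2)                # smooth, rapidly decaying amplitude (an assumption)
phi = t**2 / 2.0                 # model phase with a single critical point t0 = 0

# Riemann-sum quadrature is adequate: a(t) is negligible at the endpoints
integral = np.sum(a * np.exp(1j * lam * phi)) * dt

# Leading stationary-phase term at t0 = 0: phi''(t0) = 1, sgn = +1, a(t0) = 1
leading = np.sqrt(2.0 * np.pi / lam) * np.exp(1j * np.pi / 4.0)

# agreement up to a relative error of order 1/lambda
assert abs(integral - leading) / abs(leading) < 0.02
```

Increasing \(\lambda \) shrinks the discrepancy, consistent with the \(O(\lambda ^{-1})\) remainder in the stationary phase expansion.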
Introduce the function
Then \({\mathcal {R}}f\) can be written as
and, with the same \(a,s_0,s_1\) as in (B.2) and some \(c_m\), \({{\tilde{R}}}\),
where \(x_*=\Phi (x^{(2)}_*(y),y)\), and we have used that \((\Psi \circ \Phi )_{x^{(2)}x^{(2)}}\) is negative definite.
By construction, \(P_1(y)=0\) is another equation for \({\mathcal {T}}_{\mathcal {S}}\). Since \((\Psi \circ \Phi )_{x^{(2)}}=0\), equation (6.6) does not determine \(x^{(2)}_*\). Therefore, to first order, \(x^{(2)}_*\) is determined by solving (6.7):
and
Remark B.1
We are now in a position to discuss the implications of Assumption 4.5(g4). Suppose \(g={\mathcal {R}}f\) and \(s_0\in {\mathbb {N}}\). From (4.17) and (B.7),
Here we have used that \(x_*(y)\in {\mathcal {S}}\) if \(y\in \Gamma \). Recall that the function e(a) is defined in (4.18).
Suppose first that N is odd, i.e., \(a\not \in {\mathbb {N}}\). Substituting (B.10) into (B.2) gives to leading order:
Using (B.1) and computing the inverse Fourier transform, we approximate f to leading order:
for some \(c\not =0\). Thus, if N is odd, Assumption 4.5(g4) means that, to leading order, the nonsmooth part of f is supported on the positive side of \({\mathcal {S}}\).
Suppose next that N is even, i.e. \(a\in {\mathbb {N}}\). Substituting (B.10) into (B.2) gives:
for some \(c\not =0\). Thus, if N is even, Assumption 4.5(g4) means that, to leading order, the nonsmooth part of f is symmetric about \({\mathcal {S}}\): \(f(x+\Delta x)\sim f(x-\Delta x)\) if a is even, and \(f(x+\Delta x)\sim -f(x-\Delta x)\) if a is odd.
Remark B.2
The behavior of f near \({\mathcal {S}}\) can be obtained in the same way even if \(s_0\not \in {\mathbb {N}}\), and Assumption 4.5(g4) does not apply. Taking the inverse Fourier transform of the first (asymptotic) equality in (B.11) using [9, p. 360, Eqs. 25, 26] shows that
for some constants \(c_\pm \not =0\). If \(a\in {\mathbb {N}}\), then the leading singularity of f may contain logarithmic terms [9, Chapter II, Sect. 2.4, Eqs. (14) and (20)]. Computing the corresponding explicit expressions is fairly straightforward and is outside the scope of the paper.
Appendix C. Proofs of Lemmas 7.1–7.4
1.1 C.1. Proof of Lemma 7.1
The following expression for g (modulo a \(C^\infty ({\mathcal {V}})\) function) is obtained directly from (4.12), (4.15):
where \({{\tilde{R}}}\in S^{-(s_1+1)}({\mathcal {V}}\times {\mathbb {R}})\), and \({\mathcal {F}}_{1d}^{-1}\) is the one-dimensional inverse Fourier transform acting with respect to \(\lambda \). The inverse transforms \({\mathcal {F}}_{1d}^{-1}(\lambda _\pm ^{-(s_0+1)})\) are understood in the sense of distributions [9, Chapter II, Sect. 2.3]. By the properties of \({{\tilde{R}}}\), computing the inverse Fourier transform for \(s_0\not \in {\mathbb {N}}\) gives:
Here [9, p. 360]
and \(R(y,p)={\mathcal {F}}_{1d}^{-1}\left( {{\tilde{R}}}(y,\lambda )\right) (p)\). By [1, Theorem 5.12], R satisfies
for some \(c_{m,l}>0\). Recall that \(P(y+p\Theta _0)\equiv p\) for any \(y\in \Gamma \) and p such that \(y+p\Theta _0\in {\mathcal {V}}\). Combining (C.1)–(C.4) gives the leading singular behavior of g:
See [2, 28] for a characterization of the singularities of the classical Radon transform \(g={\mathcal {R}}f\) for more general surfaces \({\mathcal {S}}\).
If \(s_0\in {\mathbb {N}}\), condition (4.17) implies \({{\tilde{\upsilon }}}^+(y)\lambda _+^{-(s_0+1)}+{{\tilde{\upsilon }}}^-(y)\lambda _-^{-(s_0+1)}\equiv {{\tilde{\upsilon }}}^+(y)\lambda ^{-(s_0+1)}\), \(y\in \Gamma \), so [9, p. 360]
An equation of the kind (C.5) still holds:
Combining (C.1)–(C.6) and using that (C.2) and (C.6) can be differentiated proves (7.5).
From the second equation in (C.1), (C.6), and (7.3) we get also
Together with the first equation in (C.1) this proves (7.6).
If \({{\tilde{\upsilon }}}^\pm \equiv 0\), the result follows from the properties of \({{\tilde{R}}}(y,\lambda )\) and (7.3), because \(s_1\not \in {\mathbb {N}}\).
1.2 C.2. Proof of Lemma 7.2
As is standard (see e.g., [33]), set \(u=\eta /\lambda \) and consider the phase function
The only critical point \((z_0,u_0)\) and the corresponding Hessian H are given by
Clearly, \(|\det H(y)|=1\) and \(\text {sgn}\, H(y)=0\) for any \(y\in {\mathcal {V}}\). By the stationary phase method (see [33, Chapter VIII, Eqs. (2.14)–(2.20)]) we get using (4.8) and (4.15)
The fact that the u-integration is over an unbounded domain does not affect the result: integrating by parts with respect to z, we obtain a function that decreases rapidly as \(\lambda \rightarrow \infty \) whenever \(|u|>\sup _{y\in {\mathcal {V}}}|\text {d}P(y)|\), i.e., away from the critical points.
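The values \(|\det H|=1\) and \(\text {sgn}\,H=0\) are an instance of a standard linear-algebra fact. The following sketch assumes, as is typical for phases linear in the variable u, that the Hessian has the block form shown below, with A symmetric:

```latex
% Hessian of a phase that is linear in u, A = A^T:
\[
H=\begin{pmatrix} A & I\\ I & 0\end{pmatrix},\qquad
M=\begin{pmatrix} I & 0\\ -\tfrac12 A & I\end{pmatrix},\qquad
M^{T}HM=\begin{pmatrix} 0 & I\\ I & 0\end{pmatrix}.
\]
% The matrix on the right has eigenvalues +1 and -1, each of multiplicity n.
% By Sylvester's law of inertia and det M = 1,
\[
\text{sgn}\,H=\text{sgn}\begin{pmatrix} 0 & I\\ I & 0\end{pmatrix}=0,\qquad
\det H=\det \bigl(M^{T}HM\bigr)=(-1)^{n},\qquad |\det H|=1.
\]
```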
Substituting (C.12) into (C.9) and using (4.8), (4.15) leads to
The extra factor \(|\lambda |^n\) in (C.12) cancels because \(|\lambda |^n \text {d}u=\text {d}\eta \). Computing the asymptotics of the inverse Fourier transform as \(p=P(y)\rightarrow 0\) and using that \({{\tilde{B}}}_0(y,\pm \text {d}P(y))\in C_0^{\infty }({\mathcal {V}})\) and \(P(y+p\Theta _0)\equiv p\) if \(y\in \Gamma \) gives
Recall that \(\Psi _a^{\pm }\) are defined in (C.3), and \(R(y,p)={\mathcal {F}}_{1d}^{-1}\bigl ({{\tilde{R}}}(y,\lambda )\bigr )(p)\). By [1, Theorem 5.12] and (C.13), the remainder satisfies
for some \(c_{m,l}>0\). The constant c here is the same as in (C.13). The estimate (7.7) follows from (C.14), (C.15).
If \(\kappa =0\), condition (4.20) implies
and
This proves (7.8).
1.3 C.3. Proof of Lemma 7.3
Using that \(l>s_0\), that \(\varphi \) has \(\lceil \beta _0^+ \rceil \) bounded derivatives, and that \(\varphi \) is exact to the degree \(\lceil \beta _0 \rceil \), we get for any \(0\le M\le l\):
Here \((\partial _{v_1}\varphi )(\cdot )\) is the derivative of \(\varphi (v)\) with respect to \(v_1\) evaluated at the indicated point, and the remainder satisfies
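The interplay between Taylor expansion and exactness of \(\varphi \) can be illustrated on a toy example. The sketch below assumes the hat (linear B-spline) kernel, which is exact to degree 1, in place of the paper's general \(\varphi \): degree-1 polynomials sampled on the \(\epsilon \)-lattice are reproduced exactly, while a degree-2 polynomial is not.

```python
import numpy as np

def hat(u):
    """Linear interpolation kernel, exact to degree 1."""
    return np.maximum(0.0, 1.0 - np.abs(u))

def interpolate(p, u, eps, j):
    """Sum_j phi(u/eps - j) * p(eps*j): interpolation of lattice samples of p."""
    return np.sum(hat(u / eps - j) * p(eps * j))

eps = 0.1
j = np.arange(-100, 101)                 # lattice indices
p = lambda x: 2.0 * x + 3.0              # polynomial of degree 1

# Exactness to degree 1: the interpolant reproduces p with no error,
# which is what annihilates the low-order Taylor terms in the proof
for u in [0.037, -0.42, 0.5]:
    assert abs(interpolate(p, u, eps, j) - p(u)) < 1e-12

# A degree-2 polynomial is generally NOT reproduced exactly
q = lambda x: x**2
assert abs(interpolate(q, 0.05, eps, j) - q(0.05)) > 1e-6
```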
To prove the top case in (7.10) select \(\varkappa _1>0\) so that \(|P(y)|\ge \varkappa _1\epsilon \) and \(\varphi ((y-w)/\epsilon )\not =0\) implies \(|P(w)|\ge \epsilon \). By Lemma 7.1, this ensures that for each \(m\in {\mathbb {N}}_0^{n}\) there exists \(c(m)>0\) such that
Set \(M=l\) in (C.18). Then (C.20) together with the bottom line in (C.19) prove the result.
To prove the middle case in (7.10), set \(M=\lfloor s_0 \rfloor \) in (C.18). If \(s_0\in {\mathbb {N}}\), (7.6) and (C.19) imply that \(R_m=O(1)\), thereby proving the assertion. If \(s_0\not \in {\mathbb {N}}\), the remainder can be modified as follows
Here we have used (7.4) with \(r=s_0\). Since \(l>M\), we can replace \(R_m\) with \({{\tilde{R}}}_m\) in (C.18) without changing the equality, and the desired inequality follows.
The bottom case in (7.10) follows by setting \(M=l\) in (C.18) and noticing that (7.6) and (C.19) imply \(R_m=O(1)\).
If \({{\tilde{\upsilon }}}^\pm \equiv 0\), the same argument as above applies with \(s_0\) replaced by \(s_1\). The only change is that there is no need to consider the case \(s_1\in {\mathbb {N}}\).
1.4 C.4. Proof of Lemma 7.4
Since \(\varkappa _1>0\) is the same as in the proof of Lemma 7.3, \(|P(y)|\ge \varkappa _1\epsilon \) and \(\varphi ((y-w)/\epsilon )\not =0\) imply \(|P(w)|\ge \epsilon \). Similarly to (C.18), using the properties of \(\varphi \) we obtain
The term \(g^{(l)}(y)\) on the right in (C.22) is the only term from the Taylor polynomial that remains after the summation with respect to j. In particular, all the terms corresponding to \(l< |m| \le M-1\) sum to zero, because \(\varphi \) is exact to the degree \(\lceil \beta _0\rceil \), and
Using (C.22) with \(M=l+1\) and appealing to (C.19), (C.20) proves (7.11). Indeed, recall that \(l\ge \lfloor s_0^-\rfloor \), so \(M=l+1\ge s_0\). If \(s_0\not \in {\mathbb {N}}\), then \(M>s_0\), and the top case in (C.20) applies when estimating \(R_m\), \(|m|=M\). If \(s_0\in {\mathbb {N}}\), then \(M=s_0\), and the bottom case in (C.20) applies when estimating \(R_m\), \(|m|=M\).
To prove (7.12), we use (C.22) with \(M=\lfloor s_0\rfloor \). If \(s_0\in {\mathbb {N}}\), then \(l<\lfloor s_0\rfloor =s_0\) (by assumption, \(l\le \lfloor s_0^-\rfloor \)), and (C.22), (7.6) prove (7.12).
If \(s_0\not \in {\mathbb {N}}\), we replace \(R_m\) with \({{\tilde{R}}}_m\) in (C.22) as this was done in the proof of Lemma 7.3. As before, this does not invalidate the equality and extends its applicability to the case \(l=M\). Note, however, that if \(l=M\), then the term \(g^{(l)}(y)\) on the right in (C.22) comes not from the Taylor polynomial, but from the modification of the remainder. The desired assertion follows from (C.21) and the modified (C.22).
If \({{\tilde{\upsilon }}}^\pm \equiv 0\), the same argument as above applies with \(s_0\) replaced by \(s_1\). The only change is that there is no need to consider the case \(s_1\in {\mathbb {N}}\).
Appendix D. Proof of Lemma 8.2
Throughout the proof, c denotes various positive constants that can vary from one place to the next. To simplify notations, in this proof we drop the subscripts from \(\beta _0\) and \(s_0\): \(\beta =\beta _0\), \(s=s_0\). By the choice of y coordinates (see (3.9)) and by (3.18), \(y_1=\Theta _0\cdot y\) (recall that \(|\Theta _0|=1\)).
Using (4.6), (8.3), and that the symbol of \({\mathcal {B}}\) is homogeneous of degree \(\beta \) we have
where
Also (cf. (8.11)):
We start by estimating the difference between the terms with the subscript ‘+’ inside the brackets in (D.1) and (D.3)
The following inequalities can be shown to hold. For all \(q,r\in {\mathbb {R}}\) one has
Consider the top inequality. The case \(q,q+r\le 0\) is trivial. The cases \(q+r\le 0\le q\) and \(q\le 0\le q+r\) can be verified directly. By a change of variables and convexity, it is easily seen that the case \(r<0< q\) follows from the case \(q,r>0\). To prove the latter, divide by \(q^s\) and set \(x=r/q\). Both sides equal zero when \(x=0\). Differentiating with respect to x, we see that the inequality is proven because \((1+x)^{s-1}\le 2^{s-1}(x^{s-1}+1)\) (consider \(0<x\le 1\) and \(x\ge 1\)). The second inequality in (D.5) is obvious.
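The elementary inequality used in the last step of this argument can be sanity-checked numerically over both regimes \(0<x\le 1\) and \(x\ge 1\); the values of s below are illustrative:

```python
import numpy as np

x = np.logspace(-6, 6, 10_000)           # covers both 0 < x <= 1 and x >= 1
for s in [0.3, 0.9, 1.0, 1.5, 2.7, 5.0]:
    lhs = (1.0 + x) ** (s - 1.0)
    rhs = 2.0 ** (s - 1.0) * (x ** (s - 1.0) + 1.0)
    # (1+x)^(s-1) <= 2^(s-1) (x^(s-1) + 1), the bound that finishes
    # the proof of the top case in (D.5)
    assert np.all(lhs <= rhs * (1.0 + 1e-12))
```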
The assumption \(z\in \Gamma \) implies \(z_1=\psi (z^\perp )\), so
Setting \(q={{\hat{y}}}^j_1-z_1\) and \(r=\psi (z^\perp )-\psi (({{\hat{y}}}^j)^\perp )\) in (D.5) and using (D.6) and that \(a^+(y)\) is bounded, we estimate the first term on the right in (D.4) as follows
Recall that in this lemma we assume that the amplitude of \({\mathcal {B}}\) satisfies \({{\tilde{B}}}(y,\eta )\equiv {{\tilde{B}}}_0(y,\eta )\). By (4.8), the fact that the amplitude of \({\mathcal {B}}\) is homogeneous in the frequency variable (and, therefore, the Schwartz kernel K(y, w) of \({\mathcal {B}}\) is homogeneous in w), and Assumption 4.3(IK1),
Therefore, by (8.11) and (D.1)–(D.3), we have to estimate the following two sums
The second sum is required if \(s>1\).
Note that the quantities \(J_{1,2}\) include the factor \(\epsilon ^{-s}\), which appears on the left in (8.11) and has been unaccounted for until now. The remaining factor \(\epsilon ^{\beta }\) has been accounted for in (D.1). In (8.11), \({\mathcal {B}}_0\) already acts with respect to the rescaled variable \(y/\epsilon \), so the factor \(\epsilon ^\beta \) is not needed on the right in (8.11). Since \({\mathcal {B}}_0\) is shift-invariant, it is not necessary to represent its action in the form (D.2).
Assumptions of the lemma imply
for some \(c>0\). Here \(y_*^\perp \in {\mathbb {R}}^{n-1}\) is some point on the line segment with the endpoints \(z^\perp \), \(({{\hat{y}}}^j)^\perp \), and we have used that \(|\psi '(y_*^\perp )|\le c(|z^\perp |+|z^\perp -({{\hat{y}}}^j)^\perp |)\), which follows from \(\psi '(y_0^\perp )=0\).
Let \(m=m(z,\epsilon )\in {\mathbb {Z}}^n\) be such that \(|(z+U^T\tilde{y}_0)/\epsilon -U^TDm|<c\). The dependence of m on z and \(\epsilon \) is omitted from notations. This implies
Also, using that \(|y-z|=O(\epsilon )\) gives
Substitute (D.10) into the expression for \(J_1\) in (D.9), shift the index \(j\rightarrow j-m\), and use (D.11), (D.12):
Here we have used that we can ignore any finite number of terms (their contribution is \(O(\epsilon ^{s/2})\)), and (D.12) applies to the remaining terms. This gives
To estimate \(J_2\), we use the same approach as in (D.10)–(D.14):
Here we have used that \(\beta -s\ge N/2\ge 1/2\).
The second term on the right in (D.4) is estimated as follows:
Shifting the j index as before and estimating a finite number of terms by \(O(\epsilon ^{1/2})\) gives an upper bound
The terms with the subscript ‘−’ in (8.11) are estimated analogously. Our argument proves (8.11) with \({\mathcal {B}}\) instead of \({\mathcal {B}}_0\) on the right. This implies, in particular, that the sum on the right in (8.11) is restricted to \(|j|\le \vartheta /\epsilon \).
The left-hand side of (8.11) is bounded, because
by (D.6), (D.10), (D.11), and \(|j|\le \vartheta /\epsilon \), and
It is easy to see that
This follows from \(\varphi \in C_0^{\lceil \beta _0^+\rceil }\), \(|y-y_0|=O(\epsilon ^{1/2})\), and
Together with (D.19) this implies that replacing y with \(y_0\) in the amplitude of the \(\Psi \)DO \({\mathcal {B}}\) (i.e., replacing \({{\tilde{B}}}_0(y,\eta )\) with \({{\tilde{B}}}_0(y_0,\eta )\)) introduces an error of magnitude \(O(\epsilon ^{1/2})\), while keeping the sum restricted to \(|j|\le \vartheta /\epsilon \).
Using that \(|y-z|=O(\epsilon )\), (D.3) and (D.8) imply that the terms of the series on the right in (8.11) are bounded by \(O((1+|j|)^{s-(\beta +n)})\). Hence the series is absolutely convergent. Contribution of the terms corresponding to \(|j|>\vartheta /\epsilon \) is bounded by \(c\sum _{|j|>\vartheta /\epsilon }|j|^{s-(\beta +n)}=O(\epsilon ^{\beta -s})\rightarrow 0\) for some \(c>0\), and the lemma is proven.
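The tail bound \(\sum _{|j|>\vartheta /\epsilon }|j|^{s-(\beta +n)}=O(\epsilon ^{\beta -s})\) follows by comparison with an integral. The sketch below checks the one-dimensional case with an illustrative exponent:

```python
import numpy as np

alpha = 1.5                  # stands for beta - s > 0, an illustrative value
n = 1                        # one-dimensional lattice for simplicity
R = 1000                     # stands for the cutoff vartheta / epsilon

j = np.arange(R + 1, 10**6)
tail = 2.0 * np.sum(j ** (-(n + alpha)))       # sum over j in Z with |j| > R

# comparison with 2 * int_R^inf x^{-(1+alpha)} dx = 2 R^{-alpha} / alpha,
# so the tail is O(R^{-alpha}) = O(epsilon^{beta - s})
predicted = 2.0 * R ** (-alpha) / alpha
assert abs(tail / predicted - 1.0) < 0.01
```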
Appendix E. Proof of Lemma 8.3
Pick some sufficiently large \(J\gg 1\). Then, with \(D_1:=U^TD\),
Because \({\mathcal {B}}_0\in S^{\beta _0}({\mathbb {R}}^n\times {\mathbb {R}}^n)\), [1, Theorem 6.19] implies that \({\mathcal {B}}_0:C_*^{\lceil \beta _0^+\rceil }\rightarrow C_*^a\), \(a=\lceil \beta _0^+\rceil -\beta _0=1-\{\beta _0\}>0\), is continuous. Possible nonsmoothness of the symbol at the origin, which is not allowed by the assumptions of the cited theorem, is irrelevant here, since a symbol supported near the origin yields a smoothing operator. By assumption, \(\varphi \in C_0^{\lceil \beta _0^+\rceil }({\mathbb {R}}^n)\), so \(J_1=O(|\Delta v|^a)\). In the second term \(J_2\), the arguments of \({\mathcal {B}}_0\varphi \) are bounded away from zero, and the factor in brackets is smooth. Moreover, using again that the Schwartz kernel K(y, w) of \({\mathcal {B}}_0\) is homogeneous in w, we have,
Using the argument analogous to the one in (D.19), we easily see that \(J_2=O(|\Delta v|)\). This proves the first line in (8.22).
The second line in (8.22) is proven analogously:
Clearly, \({\mathcal {A}}(q+\Delta p)-{\mathcal {A}}(q)=O(|\Delta p|^{\min (s_0,1)})\) uniformly in q confined to any bounded set. Using in addition that \({\mathcal {B}}_0 \varphi (u)\) is bounded and \({\mathcal {B}}_0 \varphi (u)=O \left( |u|^{-(n+\beta _0)}\right) \) as \(|u|\rightarrow \infty \), we get that \(J_1=O(|\Delta p|^{\min (s_0,1)})\).
In \(J_2\), the argument of \({\mathcal {A}}\) is bounded away from zero. In view of \({\mathcal {A}}'(q)=O(|q|^{s_0-1})\), \(|q|\rightarrow \infty \), we finish the proof by noticing that
The fact that both estimates are uniform with respect to v and p confined to bounded sets is obvious.
Appendix F. Proof of Lemma 10.3
As usual, c denotes various positive constants that may have different values in different places. Recall that \(\beta _0-s_0>0\). Set \(k:=\lceil \beta _0\rceil \), \(\nu :=k-\beta _0\). Thus, \(0\le \nu < 1\), and \(\nu =0\) if \(\beta _0\in {\mathbb {N}}\). Similarly to (9.2),
for some \({\mathcal {W}}_1\in S^{-\nu }({\mathcal {V}}\times {\mathbb {R}}^n)\), \({\mathcal {W}}_2\in S^{-\infty }({\mathcal {V}}\times {\mathbb {R}}^n)\).
1.1 F.1. Proof in the Case \(\beta _0\not \in \pmb {{\mathbb {N}}}\)
Let K(y, w) be the Schwartz kernel of \({\mathcal {W}}_1\). Suppose, for example, that \(P:=P(y)>0\); the case \(P<0\) is completely analogous. Initially, we can take for \(\varkappa _2\) in Lemma 10.3 any constant that satisfies \(\varkappa _2\ge 2\varkappa _1\), where \(\varkappa _1\) is the same as in (7.11). This implies that \(P/2\ge \varkappa _1\epsilon \). Later (see the beginning of the proof of Lemma F.1), we update the choice of \(\varkappa _2\). Denote (cf. (10.5))
Then
The big-O term in (F.3) appears because of the \(\Psi \)DO \({\mathcal {W}}_2\) in (F.1), and the magnitude of the term follows from (7.12) with \(l=0\). From (7.9) and (7.11) with \(l=k\), (F.3), and (9.4) with \(l=0\), it follows that
Hence, we obtain similarly to (9.8)
To estimate \(J_\epsilon ^{(2)}(y)\), integrate by parts with respect to \(w_1\) in (F.3):
By construction, \(P/2 \ge \varkappa _1\epsilon \). Using (7.11), (7.12) with \(l=l_0\) (both inequalities apply when \(l=l_0=\lfloor s_0^-\rfloor \)), and arguing similarly to (F.4), (F.5), gives
where we have used that \(l_0<s_0\). Using again that \(\epsilon /P \le 1/(2\varkappa _1)\) gives \(J_k\le c\epsilon P^{s_0-1-\beta _0}\).
Next we estimate the boundary terms in (F.6). By (7.11) (using that \(\lfloor s_0^-\rfloor =l_0\le l\le k-1\)) and (9.4),
Appealing to (9.7) gives
which finishes the proof. As easily checked, the integral in (F.9) converges because \(l\le k-1 <\beta _0\).
1.2 F.2. Proof in the Case \(\beta _0\in \pmb {{\mathbb {N}}}\)
Suppose now \(\beta _0\in {\mathbb {N}}\), i.e. \(k=\beta _0\) and \(\nu =0\). All the terms that do not involve integration over a neighborhood of the set \(\{w\in {\mathcal {V}}:\,P(w)=P\}\) are estimated the same way as before. For example, the estimation of \(J_\epsilon ^{(2)}(y)\) is completely analogous to (F.6)–(F.9), and we obtain the same bound \(|J_\epsilon ^{(2)}(y)|\le c\epsilon P^{s_0-1-\beta _0}\). The estimation of \(J_\epsilon ^{(1)}\) is much more involved now, because the singularity at \(P(w)=P\) is no longer integrable. We have, with some \(c_1>0\) to be selected later:
We do not estimate the integral \(\int _{P(w)\le P/2} (\cdot )\text {d}w\), because the domain of integration is bounded away from the set \(\{w\in {\mathcal {V}}:\,P(w)=P\}\), and this integral admits the same bound as in the previous subsection (cf. (F.5)). Similarly to (F.5), by (7.11) with \(l=\beta _0\),
The term \(J_\epsilon ^{(1b)}\) is split further as follows:
Similarly to (F.5), by (7.11) with \(l=\beta _0\),
The second part is estimated by rearranging the \(\Delta g\) terms:
Lemma F.1
There exist \(c,c_1,\varkappa _2>0\) so that
Proof
We begin by updating the choice of \(\varkappa _2\). Select \(\varkappa _2\ge 2\varkappa _1\) so that \(P\ge \varkappa _2\epsilon \) implies
for some \(c>0\).
Next we select \(c_1\). First, pick any \(c_1\) so that \(0<c_1\le \varkappa _1\). This ensures that \(P(w)\ge P-|P-P(w)|\ge \varkappa _1\epsilon \), and (7.11) can be used to estimate the derivatives of \(\Delta g_\epsilon (w)\). Let \(c_\psi :=\max _{v\in {\mathcal {V}}}|\psi '(v)|\). Our assumptions imply
Let v be any point on the line segment with the endpoints w and y, i.e. \(v=y+\lambda (w-y)\), \(0\le \lambda \le 1\). Then
Reducing \(c_1>0\) even further, we can ensure that \(P(v)\ge cP\) for some \(c>0\). This is the value of \(c_1\) that is assumed starting from (F.10). In the rest of the proof we assume that \(w,y\in {\mathcal {V}}\) satisfy the inequalities on the last line in (F.15) with the constants \(c_1\) and \(\varkappa _2\) that we have just selected.
From (7.5) with \(|m|=\beta _0+1\),
for some \(c>0\).
To prove the second line in (F.15), find \(c_{2,3}>0\) such that
By (F.16), \(c_{2,3}\) with the required properties do exist.
Now, assume first that \(|w-y|\ge c_2\epsilon \), where \(c_2\) is the same as in (F.20). Clearly,
By construction, (7.11) applies to \(\Delta g_\epsilon ^{(\beta _0)}(w)\). Applying (7.11) to the first and third terms on the right in (F.21), and (F.19) to the second term on the right, gives
because \(\epsilon \le (1/c_2)|w-y|\) and
If \(|w-y|\le c_2\epsilon \), we argue similarly to (C.22):
Here \((\partial _{v_1}\varphi )(\cdot )\) is the derivative of \(\varphi (v)\) with respect to \(v_1\) evaluated at the indicated point. By (F.20), (7.5) implies \(|R_m({{\hat{y}}}^j,y)|\le cP^{s_0-1-\beta _0}\). The assertion follows because \(\varphi \in C_0^{\lceil \beta _0^+\rceil }({\mathbb {R}}^n)\). \(\square \)
Applying (9.4) with \(\nu =l=0\) and (F.15) in (F.14) yields (cf. (9.6)–(9.8))
The final major step is to estimate the integral in the definition of \(J_\epsilon ^{(1b3)}\).
Let \({{\tilde{W}}}(y,\eta )\) be the amplitude of \({\mathcal {W}}_1\in S^0({\mathcal {V}}\times {\mathbb {R}}^n)\) in (F.1). Then
Our goal is to show that I is uniformly bounded for all \(\epsilon >0\) sufficiently small and P that satisfy \(P/\epsilon \ge \varkappa _2>0\). We can select \({\mathcal {W}}_{1,2}\) in (F.1) so that the conic supports of their amplitudes are contained in that of \({\mathcal {B}}\). First, consider only the principal symbol of \({\mathcal {W}}_1\), which we denote \(\tilde{W}_0(y,\eta )\). We can assume that \({{\tilde{W}}}_0(y,\eta )\equiv 0\) if \(\eta \not \in \Omega \), \(\eta \not =0\), where \(\Omega \subset {\mathbb {R}}^n\setminus \{0\}\) is a small conic neighborhood of \(\Theta _0\cup (-\Theta _0)\). This set is used in (F.27). The corresponding value of I, which is obtained by replacing \(\tilde{W}(y,\eta )\) with \({{\tilde{W}}}_0(y,\eta )\) in (F.27), is denoted \(I_0\).
As \({{\tilde{W}}}_0(y,\eta )\) is positively homogeneous of degree zero in \(\eta \), set
where \(\Omega ^\perp \) is a small neighborhood of the origin in \({\mathbb {R}}^{n-1}\): \(\Omega ^\perp :=\{u\in {\mathbb {R}}^{n-1}: u=\eta ^\perp /\eta _1,\ \eta \in \Omega \}\). The sign \('+'\) is selected if \(\eta _1>0\), and \('-'\) otherwise. By the properties of \({\mathcal {W}}\), \({{\tilde{W}}}^{\pm }(y,\cdot )\in C_0^{\infty }(\Omega ^\perp )\). Thus, (F.27) implies
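The reduction from \({{\tilde{W}}}_0(y,\eta )\) to \({{\tilde{W}}}^{\pm }(y,u)\) uses only degree-zero homogeneity in \(\eta \): the symbol depends on the direction of \(\eta \) alone, so on each half-space \(\pm \eta _1>0\) it is a function of \(u=\eta ^\perp /\eta _1\). A minimal numerical sketch with a toy degree-zero homogeneous symbol (the function `W0` below is an illustration, not the symbol from the paper):

```python
import numpy as np

# Toy degree-zero homogeneous symbol: W0(eta) = eta_1^2 / |eta|^2.
# Degree-zero homogeneity means W0(t*eta) = W0(eta) for all t > 0,
# so W0 depends only on the direction of eta.  Writing
# eta = (eta_1, eta_perp), for eta_1 != 0 define the slice restriction
# W_pm(u) := W0(+-1, u) with u = eta_perp/eta_1 (sign matching eta_1);
# then W0(eta) = W_pm(eta_perp/eta_1).
def W0(eta):
    eta = np.asarray(eta, dtype=float)
    return eta[0] ** 2 / np.dot(eta, eta)

def W_plus(u):
    # Restriction of W0 to the affine slice eta_1 = +1.
    return W0(np.concatenate(([1.0], np.atleast_1d(u))))

eta = np.array([2.0, 3.0, -1.0])          # a direction with eta_1 > 0
u = eta[1:] / eta[0]                       # u = eta_perp / eta_1
assert np.isclose(W0(eta), W_plus(u))      # W0 recovered from its slice
assert np.isclose(W0(5.0 * eta), W0(eta))  # degree-zero homogeneity
print(W0(eta))
```

The same computation with the slice \(\eta _1=-1\) handles the half-space \(\eta _1<0\).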
where \(W^\pm (y,w^\perp )\) is the inverse Fourier transform of \({{\tilde{W}}}^\pm (y,u)\) with respect to u. Since \(P/\epsilon \) is bounded away from zero, \(h(0)=0\), and \(W^\pm (y,w^\perp )\) is smooth and rapidly decreasing as a function of \(w^\perp \), we have by the dominated convergence theorem
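The dominated convergence step has the usual abstract form: pointwise convergence of the integrands plus a single integrable majorant yields convergence of the integrals. Here the rapid decay of \(W^\pm (y,\cdot )\) supplies the majorant uniformly in \(\epsilon \) and P; schematically (with \(F_\lambda \) a placeholder for the integrand in question):

```latex
% Abstract form of the dominated convergence argument used above:
\[
|F_\lambda(w^\perp)|\le G(w^\perp)\in L^1({\mathbb{R}}^{n-1})\ \ \forall\lambda,
\quad F_\lambda\to F \text{ pointwise}
\ \Longrightarrow\
\int F_\lambda(w^\perp)\,\mathrm{d}w^\perp \to \int F(w^\perp)\,\mathrm{d}w^\perp .
\]
```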
as \(\lambda \rightarrow \pm \infty \), and the convergence is uniform with respect to \(\epsilon \) and P satisfying \(P/\epsilon \ge \varkappa _2\). As is seen, \(\begin{pmatrix} 1\\ -\psi '(y^\perp ) \end{pmatrix}\) is a vector normal to \(\Gamma \) at the point \(\begin{pmatrix} \psi (y^\perp )\\ y^\perp \end{pmatrix}\).
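The normality claim is immediate: locally \(\Gamma \) is the zero set of the defining function \(G(x)=x_1-\psi (x^\perp )\), and the gradient of a defining function is normal to its level set:

```latex
% Gradient of the defining function G(x) = x_1 - \psi(x^\perp):
\[
\nabla G(x)=\begin{pmatrix}1\\ -\psi'(x^\perp)\end{pmatrix},
\qquad
\nabla G\!\begin{pmatrix}\psi(y^\perp)\\ y^\perp\end{pmatrix}
\perp T_{(\psi(y^\perp),\,y^\perp)}\Gamma .
\]
```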
The remainder term in (F.30) is bounded by the expression
Due to \({{\tilde{W}}}^{\pm }(y,\cdot )\in C_0^{\infty }(\Omega ^\perp )\), the big-O term on the right-hand side of (F.31) is uniform with respect to \(y\in {\mathcal {V}}\) and \(0<\epsilon \le 1\). Hence
where O(1) is uniform with respect to \(y\in {\mathcal {V}}\) as well, which proves that \(I_0\) is uniformly bounded.
The remaining term \(\Delta I=I-I_0\) comes from the subprincipal terms of the amplitude \(\Delta {{\tilde{W}}}={{\tilde{W}}}-{{\tilde{W}}}_0\). The corresponding \(\Psi \)DO is in \(S^{-\nu }({\mathcal {V}}\times {\mathbb {R}}^n)\) for some \(\nu >0\), so its Schwartz kernel \(\Delta K(y,w)\) is smooth as long as \(w\not =0\) and absolutely integrable at \(w=0\). It is now obvious that \(\Delta I\) is bounded as well.
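The integrability of \(\Delta K\) near \(w=0\) is the standard kernel estimate for a \(\Psi \)DO of negative order: for a symbol in \(S^{-\nu }\) with \(0<\nu <n\) one has (sketch of a standard fact; C denotes a generic constant):

```latex
% Kernel estimate for an operator of order -\nu, 0 < \nu < n:
\[
|\Delta K(y,w)|\le C\,|w|^{\nu-n},\quad 0<|w|\le 1,
\qquad\text{so}\qquad
\int_{|w|\le 1}|\Delta K(y,w)|\,\mathrm{d}w
\le C\int_0^1 r^{\nu-n}\,r^{n-1}\,\mathrm{d}r=\frac{C}{\nu}<\infty .
\]
```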
By Lemma 7.4 (use (7.11) with \(l=k=\beta _0\)), \(|\Delta g_\epsilon ^{(\beta _0)}(y)|\le c\epsilon P^{s_0-1-\beta _0}\) if \(P\ge \varkappa _1\epsilon \). Combining this with (F.12) proves that \(|J_\epsilon ^{(1b2)}(y)|\le c\epsilon P^{s_0-1-\beta _0}\). By (F.12), (F.13), and (F.25), we conclude that \(|J_\epsilon ^{(1b)}(y)|\le c\epsilon P^{s_0-1-\beta _0}\ln (P/\epsilon )\). Combining this with (F.10) and (F.11) finishes the proof.
Appendix G. Proof of Lemma 11.1
We begin by proving (11.8). From (3.5) and (3.9), \(|\text {d}\Psi |\partial \Phi _1/\partial y_1=1\), i.e. \(\partial \Phi _1/\partial y_1>0\).
Recall that \(y=Y(y^{(2)},x)\) is found by solving \(x^{(1)}=\Phi ^{(1)}(x^{(2)},y)\) for \(y^{(1)}\). Differentiating the identity \(x_1\equiv \Phi _1(x^{(2)},(Y^{(1)}(y^{(2)},x),y^{(2)}))\) with respect to \(x_1\) gives \(1=(\partial \Phi _1/\partial y_1)(\partial Y_1/\partial x_1)\). Since \(\partial \Phi _1/\partial x^{(2)}=0\) and \(\partial \Phi _1/\partial y^\perp =0\), differentiating the same identity with respect to \(x^\perp \) gives \(0=({\partial \Phi _1}/{\partial y_1})({\partial Y_1}/{\partial x^\perp })\), and all the statements in (11.8) are proven.
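In summary, the two differentiations of the identity \(x_1\equiv \Phi _1\) read (using \(|\text {d}\Psi |\,\partial \Phi _1/\partial y_1=1\) from the first step):

```latex
% Implicit differentiation of x_1 = \Phi_1(x^{(2)}, (Y_1, y^{(2)})):
\[
\frac{\partial}{\partial x_1}:\quad
1=\frac{\partial \Phi_1}{\partial y_1}\,\frac{\partial Y_1}{\partial x_1}
\ \Longrightarrow\
\frac{\partial Y_1}{\partial x_1}
=\Bigl(\frac{\partial \Phi_1}{\partial y_1}\Bigr)^{-1}=|\mathrm{d}\Psi|;
\qquad
\frac{\partial}{\partial x^\perp}:\quad
0=\frac{\partial \Phi_1}{\partial y_1}\,\frac{\partial Y_1}{\partial x^\perp}
\ \Longrightarrow\
\frac{\partial Y_1}{\partial x^\perp}=0 .
\]
```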
Using that \(|\text {d}\Psi |\partial \Phi _1/\partial y_1=1\) completes the proof.
Katsevich, A. Resolution Analysis of Inverting the Generalized N-Dimensional Radon Transform in \(\pmb {\mathbf {\mathbb {R}}^n}\) from Discrete Data. J Fourier Anal Appl 29, 6 (2023). https://doi.org/10.1007/s00041-022-09975-x