Adaptive Estimation of a Function from its Exponential Radon Transform in Presence of Noise

Abstract

In this article, we propose a locally adaptive strategy for estimating a function from its Exponential Radon Transform (ERT) data, without prior knowledge of the smoothness of the function being estimated. We build a non-parametric kernel-type estimator and show that, for a class of functions comprising a wide Sobolev regularity scale, the proposed strategy attains the minimax optimal rate up to a \(\log n\) factor. We also show that no optimal adaptive estimator exists on the Sobolev scale under pointwise risk; in fact, the rate achieved by the proposed estimator is the adaptive rate of convergence.

References

  • Abhishek, A. (2022). Minimax optimal estimator in a stochastic inverse problem for exponential Radon transform. Sankhya A. https://doi.org/10.1007/s13171-022-00285-4.

  • Butucea, C. (2000). The adaptive rate of convergence in a problem of pointwise density estimation. Statist. Probab. Lett. 47, 85–90.

  • Butucea, C. (2001). Exact adaptive pointwise estimation on Sobolev classes of densities. ESAIM Probab. Statist. 5, 1–31.

  • Butucea, C. and Tsybakov, A.B. (2007a). Sharp optimality in density deconvolution with dominating bias. I. Teor. Veroyatn. Primen. 52, 111–128.

  • Butucea, C. and Tsybakov, A.B. (2007b). Sharp optimality in density deconvolution with dominating bias. II. Teor. Veroyatn. Primen. 52, 336–349.

  • Cavalier, L. (1998). Asymptotically efficient estimation in a problem related to tomography. Math. Methods Statist. 7, 445–456.

  • Cavalier, L. (2001). On the problem of local adaptive estimation in tomography. Bernoulli 7, 63–78.

  • Cavalier, L. and Tsybakov, A. (2002). Sharp adaptation for inverse problems with random noise. Probab. Theory Related Fields 123, 323–354.

  • Cavalier, L., Golubev, Y., Lepski, O. and Tsybakov, A. (2003). Block thresholding and sharp adaptive estimation in severely ill-posed inverse problems. Teor. Veroyatnost. i Primenen. 48, 534–556.

  • Donoho, D.L. and Johnstone, I.M. (1994). Ideal spatial adaptation by wavelet shrinkage. Biometrika 81, 425–455.

  • Goldenshluger, A. (1999). On pointwise adaptive nonparametric deconvolution. Bernoulli 5, 907–925.

  • Goldenshluger, A., Juditsky, A., Tsybakov, A. and Zeevi, A. (2008a). Change-point estimation from indirect observations. II. Adaptation. Ann. Inst. Henri Poincaré Probab. Stat. 44, 819–836.

  • Goldenshluger, A., Juditsky, A., Tsybakov, A.B. and Zeevi, A. (2008b). Change-point estimation from indirect observations. I. Minimax complexity. Ann. Inst. Henri Poincaré Probab. Stat. 44, 787–818.

  • Hazou, I.A. and Solmon, D.C. (1989). Filtered-backprojection and the exponential Radon transform. J. Math. Anal. Appl. 141, 109–119.

  • Johnstone, I.M. and Silverman, B.W. (1990). Speed of estimation in positron emission tomography and related inverse problems. Ann. Statist. 18, 251–280.

  • Korostelëv, A.P. and Tsybakov, A.B. (1991). Optimal rates of convergence of estimators in a probabilistic setup of tomography problem. Probl. Inf. Transm. 27, 73–81.

  • Korostelëv, A.P. and Tsybakov, A.B. (1992). Asymptotically minimax image reconstruction problems. In Topics in Nonparametric Estimation, volume 12 of Adv. Soviet Math. Amer. Math. Soc., Providence, pp. 5–86.

  • Korostelëv, A.P. and Tsybakov, A.B. (1993). Minimax theory of image reconstruction, volume 82 of Lecture Notes in Statistics. Springer, New York.

  • Kuchment, P. (2014). The Radon transform and medical imaging, volume 85 of CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia.

  • Lepski, O.V. and Spokoiny, V.G. (1997). Optimal pointwise adaptive methods in nonparametric estimation. Ann. Statist. 25, 2512–2546.

  • Lepski, O.V. and Willer, T. (2017). Lower bounds in the convolution structure density model. Bernoulli 23, 884–926.

  • Lepski, O.V. and Willer, T. (2019). Oracle inequalities and adaptive estimation in the convolution structure density model. Ann. Statist. 47, 233–287.

  • Lepski, O.V., Mammen, E. and Spokoiny, V.G. (1997). Optimal spatial adaptation to inhomogeneous smoothness: an approach based on kernel estimates with variable bandwidth selectors. Ann. Statist. 25, 929–947.

  • Lepskiĭ, O.V. (1990). A problem of adaptive estimation in Gaussian white noise. Teor. Veroyatnost. i Primenen. 35, 459–470.

  • Lepskiĭ, O.V. (1991). Asymptotically minimax adaptive estimation. I. Upper bounds. Optimally adaptive estimates. Teor. Veroyatnost. i Primenen. 36, 645–659.

  • Lepskiĭ, O.V. (1992). On problems of adaptive estimation in white Gaussian noise. In Topics in Nonparametric Estimation, volume 12 of Adv. Soviet Math. Amer. Math. Soc., Providence, pp. 87–106.

  • Lepskiĭ, O.V. and Spokoiny, V.G. (1995). Local adaptation to inhomogeneous smoothness: resolution level. Math. Methods Statist. 4, 239–258.

  • Monard, F., Nickl, R. and Paternain, G.P. (2019). Efficient nonparametric Bayesian inference for X-ray transforms. Ann. Statist. 47, 1113–1147.

  • Natterer, F. (1979). On the inversion of the attenuated Radon transform. Numer. Math. 32, 431–438.

  • Natterer, F. (2001). The mathematics of computerized tomography. Society for Industrial and Applied Mathematics.

  • Natterer, F. and Wübbeling, F. (2001). Mathematical methods in image reconstruction. SIAM Monographs on Mathematical Modeling and Computation. Society for Industrial and Applied Mathematics (SIAM), Philadelphia.

  • Quinto, E.T. (1980). The dependence of the generalized Radon transform on defining measures. Trans. Amer. Math. Soc. 257, 331–346.

  • Quinto, E.T. (1983). The invertibility of rotation invariant Radon transforms. J. Math. Anal. Appl. 91, 510–522.

  • Siltanen, S., Kolehmainen, V., Järvenpää, S., Kaipio, J.P., Koistinen, P., Lassas, M., Pirttilä, J. and Somersalo, E. (2003). Statistical inversion for medical x-ray tomography with few radiographs: I. General theory. Phys. Med. Biol. 48, 1437–1463.

  • Tretiak, O. and Metz, C. (1980). The exponential Radon transform. SIAM J. Appl. Math. 39, 341–354.

  • Tsybakov, A.B. (1998). Pointwise and sup-norm sharp adaptive estimation of functions on the Sobolev classes. Ann. Statist. 26, 2420–2469.

  • Tsybakov, A.B. (2009). Introduction to nonparametric estimation. Springer Series in Statistics. Springer, New York. Revised and extended from the 2004 French original, Translated by Vladimir Zaiats.

  • Vänskä, S., Lassas, M. and Siltanen, S. (2009). Statistical X-ray tomography using empirical Besov priors. Int. J. Tomogr. Stat. 11, 3–32.

Author information

Correspondence to Sakshi Arya.


Appendix

Proof of Lemma 2.

(a) Let the noise density be given by \(p_{\epsilon }(u) = \frac {1}{\sqrt {2\pi \sigma ^{2}}} e^{\frac {-u^{2}}{2\sigma ^{2}}}\). For the proof of this part, first consider,

$$ \begin{array}{@{}rcl@{}} E_{1}[Z_{n,i}] &=& \frac{1}{\sqrt{\log n}} E_{(\theta,s)} \left[E_{1|(\theta,s)} \left[\log \frac{p_{\epsilon} (Y_{i})}{p_{\epsilon} (Y_{i} - T_{\mu} f_{n,1} (\theta_{i},s_{i}))}\right]\right]\\ &=& \frac{-1}{\sqrt{\log n}} E_{(\theta,s)} \left[\int \log \frac{p_{\epsilon} (Y_{i} - T_{\mu} f_{n,1} (\theta_{i}, s_{i}))}{p_{\epsilon}(Y_{i})}\, p_{\epsilon} (Y_{i} - T_{\mu} f_{n,1} (\theta_{i}, s_{i}))\, dY_{i}\right]\\ &\geq& \frac{-1}{\sqrt{\log n}} E_{(\theta,s)} \left[(T_{\mu} f_{n,1} (\theta_{i}, s_{i}))^{2}\right]. \end{array} $$
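
Since both densities above are Gaussian with common variance, the log-likelihood ratio has a closed form; writing \(a_{i} := T_{\mu}f_{n,1}(\theta_{i},s_{i})\) as a shorthand (used only in this remark),

$$ \log \frac{p_{\epsilon}(Y_{i})}{p_{\epsilon}(Y_{i}-a_{i})} = \frac{(Y_{i}-a_{i})^{2}-Y_{i}^{2}}{2\sigma^{2}} = \frac{a_{i}^{2}-2a_{i}Y_{i}}{2\sigma^{2}}, $$

so the inner integral in the second line is minus the Kullback–Leibler divergence between \(N(a_{i},\sigma^{2})\) and \(N(0,\sigma^{2})\), namely \(a_{i}^{2}/(2\sigma^{2})\); the factor \(1/(2\sigma^{2})\) is absorbed into the constants below.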

Recall that \(f_{n,1} = Ah^{\beta _{1}-1} \eta ((x-x_{0})/h)\), where \(h = \left (\frac {\log n}{n}\right )^{\frac {1}{2\beta +1}}\). Arguing as for equation (18) in Abhishek (2022), we have \(\int_{Z} (T_{\mu }f_{n,1}(\theta,s))^{2}\, ds\, d\theta \leq c_{8} h^{2\beta +1}\), where \(c_{8}\) is a constant that can be made as small as desired by choosing \(A\) small enough. In particular, we choose \(A\) such that \(\frac {6(\beta _{N}-\beta _{1})}{(2\beta _{1}+1 )(2\beta _{N}+1)}>c_{8}>0\). We remark that, in deriving this estimate for \(\int_{Z}(T_{\mu }f_{n,1}(\theta,s))^{2}\, ds\, d\theta\), we assume that the design points satisfy a feasibility condition (Korostelëv and Tsybakov 1991, Assumption C): \(E_{(\theta ,s)}\left [\sum \limits _{i=1}^{n}g(\theta _{i},s_{i})\right ] \leq C_{3}\, n\int_{Z}g(\theta ,s)\, ds\, d\theta \). Thus

$$ \sum\limits_{i=1}^{n} E_{1}[Z_{n,i}] \geq \frac{-1}{\sqrt{\log n}}\, n\, E_{(\theta,s)} \left[(T_{\mu}f_{n,1}(\theta_{i},s_{i}))^{2}\right] \geq - c_{8} \sqrt{\log n}. $$
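
As a purely illustrative numerical check (not part of the original argument; the smoothness values below are hypothetical), the bandwidth choice makes \(n h^{2\beta+1} = \log n\) exactly, which is what turns the bound \(c_{8}\, n\, h^{2\beta+1}/\sqrt{\log n}\) into \(c_{8}\sqrt{\log n}\):

```python
import math

# With h = (log n / n)^(1/(2*beta+1)) as in the text,
# n * h^(2*beta+1) equals log n exactly, so
# (1/sqrt(log n)) * c8 * n * h^(2*beta+1) = c8 * sqrt(log n).
for beta in (0.5, 1.0, 2.0):              # hypothetical smoothness levels
    for n in (10**3, 10**6):
        h = (math.log(n) / n) ** (1.0 / (2.0 * beta + 1.0))
        print(f"beta={beta}, n={n}: "
              f"n*h^(2b+1)={n * h ** (2.0 * beta + 1.0):.6f}, "
              f"log n={math.log(n):.6f}")
```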

Proof of part (b). We want to show that \({\sigma _{n}^{2}} = {\sum }_{i=1}^{n}V_{1}[Z_{n,i}]\) is bounded below. First note that, by the law of total variance, \(V_{1}[Z_{n,i}] \geq E_{(\theta,s)}\left[V_{1|(\theta,s)}[Z_{n,i}]\right]\). Consider

$$ \begin{array}{@{}rcl@{}} Var_{1|(\theta,s)}[Z_{n,i}] &=& \frac{1}{\log n} Var_{1|(\theta,s)} \left[\log \frac{p_{\epsilon} (Y_{i})}{p_{\epsilon} (Y_{i} - T_{\mu} f_{n,1} (\theta_{i}, s_{i}))}\right]\\ &=& \frac{1}{\log n} \left[E_{1|(\theta,s)} \left[\log^{2} \frac{p_{\epsilon} (Y_{i})}{p_{\epsilon} (Y_{i} - T_{\mu} f_{n,1} (\theta_{i},s_{i}))}\right]\right.\\ && \left. - \left( E_{1|(\theta,s)} \left[\log \frac{p_{\epsilon}(Y_{i})}{p_{\epsilon}(Y_{i} - T_{\mu} f_{n,1} (\theta_{i},s_{i}))}\right]\right)^{2}\right]. \end{array} $$

Recall that the noise is assumed to be Gaussian, \(\sim N(0,\sigma ^{2})\). Thus,

$$ \begin{array}{@{}rcl@{}} && E_{1|(\theta,s)} \left[\log^{2} \frac{p_{\epsilon} (Y_{i})}{p_{\epsilon} (Y_{i} - T_{\mu} f_{n,1} (\theta_{i}, s_{i}))} \right] \\ && \quad = \frac{1}{\sqrt{2\pi \sigma^{2}}} \int \log^{2} \exp\left( \frac{(Y_{i} - T_{\mu}f_{n,1} (\theta_{i}, s_{i}))^{2} - Y_{i}^{2}}{2\sigma^{2}}\right) \exp\left( -\frac{(Y_{i} - T_{\mu} f_{n,1} (\theta_{i}, s_{i}))^{2}}{2 \sigma^{2}}\right) dY_{i} \\ && \quad = \frac{1}{4 \sigma^{4} \sqrt{2\pi\sigma^{2}}} \int \left( T_{\mu}^{4} f_{n,1} (\theta_{i}, s_{i}) + 4 Y_{i}^{2}\, T_{\mu}^{2} f_{n,1} (\theta_{i}, s_{i}) - 4Y_{i}\, T_{\mu}^{3} f_{n,1} (\theta_{i}, s_{i})\right)\\ && \quad \quad \quad \quad \exp\left( -\frac{(Y_{i} - T_{\mu} f_{n,1} (\theta_{i}, s_{i}))^{2}}{2\sigma^{2}}\right) dY_{i} \\ && \quad = \frac{1}{4 \sigma^{4}} \left( T_{\mu}^{4} f_{n,1} (\theta_{i}, s_{i}) + 4 \sigma^{2}\, T_{\mu}^{2} f_{n,1} (\theta_{i}, s_{i})\right). \end{array} $$
(A.1)
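
To verify the last equality in (A.1): with the shorthand \(a_{i} := T_{\mu}f_{n,1}(\theta_{i},s_{i})\), the integral is the second moment of \(a_{i}^{2}-2a_{i}Y_{i}\) under \(Y_{i} \sim N(a_{i},\sigma^{2})\), where \(E[Y_{i}] = a_{i}\) and \(E[Y_{i}^{2}] = a_{i}^{2}+\sigma^{2}\):

$$ E\left[(a_{i}^{2}-2a_{i}Y_{i})^{2}\right] = a_{i}^{4} - 4a_{i}^{3}E[Y_{i}] + 4a_{i}^{2}E[Y_{i}^{2}] = a_{i}^{4} + 4\sigma^{2}a_{i}^{2}, $$

and dividing by \(4\sigma^{4}\) gives (A.1).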

On the other hand,

$$ \begin{array}{@{}rcl@{}} \left( E_{1|(\theta,s)} \left[\log \frac{p_{\epsilon}(Y_{i})}{p_{\epsilon}(Y_{i}-T_{\mu}f_{n,1}(\theta_{i},s_{i}))}\right]\right)^{2} = \frac{{T}_{\mu}^{4} f_{n,1} (\theta_{i}, s_{i})}{4\sigma^{4}}. \end{array} $$

Thus, subtracting the square of the conditional mean from (A.1), \(Var_{1|(\theta ,s)}[Z_{n,i}]=\frac {(T_{\mu }f_{n,1}(\theta _{i},s_{i}))^{2}}{\sigma ^{2}\log n}\) and hence,

$$ \begin{array}{@{}rcl@{}} \sum\limits_{i=1}^{n} E_{(\theta,s)} [Var_{1|(\theta,s)}[Z_{n,i}]] &=& \frac{n}{\sigma^{2}\log n} \int_{Z} (T_{\mu}f_{n,1}(\theta,s))^{2}\, ds\, d\theta\\ &=& c_{10}\frac{n}{\log n}h^{2\beta+1} = c_{10}>0, \end{array} $$

since \(h^{2\beta+1} = \frac{\log n}{n}\). Hence \({\sigma _{n}^{2}}\) is bounded below by a positive constant.

Proof of part (c)

$$ \begin{array}{@{}rcl@{}} E_{1} \lvert U_{n,i}\rvert^{3} &=& \frac{1}{{\sigma_{n}^{3}}} E_{1}\left\lvert Z_{n,i}^{3} - 3 Z_{n,i}^{2} E_{1}[Z_{n,i}] + 3 Z_{n,i}(E_{1}[Z_{n,i}])^{2} - (E_{1}[Z_{n,i}])^{3} \right\rvert\\ &\leq& \frac{1}{{\sigma_{n}^{3}}}\left[E_{1}\lvert Z_{n,i}\rvert^{3} + 3 E_{1}\lvert Z_{n,i} \rvert^{2}\, E_{1}\lvert Z_{n,i} \rvert + 3 E_{1}\lvert Z_{n,i} \rvert\, (E_{1}\lvert Z_{n,i} \rvert)^{2} + (E_{1}\lvert Z_{n,i} \rvert)^{3}\right]\\ &=& \frac{1}{{\sigma_{n}^{3}}}\left[E_{1}\lvert Z_{n,i}\rvert^{3} + 3 E_{1}\lvert Z_{n,i} \rvert^{2}\, E_{1}\lvert Z_{n,i} \rvert + 4 (E_{1}\lvert Z_{n,i} \rvert)^{3}\right]. \end{array} $$
(A.2)

We now bound each of the terms above. First, \(E_{1}\lvert Z_{n,i}\rvert =E_{(\theta ,s)}[E_{1|(\theta ,s)}\lvert Z_{n,i}\rvert ]\). Using Pinsker's second inequality, we calculate:

$$ \begin{array}{@{}rcl@{}} E_{1|(\theta,s)}\lvert Z_{n,i}\rvert &=& \frac{1}{\sqrt{\log n}}\int\left\lvert \log \frac{p_{\epsilon}(Y_{i})}{p_{\epsilon}(Y_{i}-T_{\mu}f_{n,1}(\theta_{i},s_{i}))}\right\rvert p_{\epsilon}(Y_{i}-T_{\mu}f_{n,1}(\theta_{i},s_{i}))\, dY_{i} \\ &\leq& \frac{1}{\sqrt{\log n}} \left[\frac{\lvert T_{\mu}f_{n,1}(\theta_{i},s_{i})\rvert}{\sigma}+\frac{T_{\mu}^{2}f_{n,1}(\theta_{i},s_{i})}{2\sigma^{2}}\right] \quad \text{(Tsybakov, 2009, Lemma 2.5)}. \end{array} $$
(A.3)
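
For reference, the inequality used here (Tsybakov, 2009, Lemma 2.5) states that for probability measures \(P \ll Q\),

$$ \int \left\lvert \log \frac{dP}{dQ} \right\rvert dP \leq K(P,Q) + \sqrt{2K(P,Q)}, $$

and for \(P = N(T_{\mu}f_{n,1}(\theta_{i},s_{i}),\sigma^{2})\) and \(Q = N(0,\sigma^{2})\) the Kullback–Leibler divergence is \(K(P,Q) = T_{\mu}^{2}f_{n,1}(\theta_{i},s_{i})/(2\sigma^{2})\), which produces exactly the two terms in (A.3).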

Also note that, since the cylinder \(Z = S^{1} \times [-1,1]\) has finite measure, we have:

$$ \begin{array}{@{}rcl@{}} \left\lvert \int_{Z} T_{\mu} f_{n,1} (\theta,s)\, ds\, d\theta \right\rvert &\leq& \int_{Z} \lvert T_{\mu} f_{n,1} (\theta,s) \rvert\, ds\, d\theta\\ &\leq& c_{10} \left( \int_{Z} \lvert T_{\mu} f_{n,1} (\theta,s) \rvert^{2}\, ds\, d\theta\right)^{1/2}. \end{array} $$
(A.4)
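
The constant in (A.4) comes from the Cauchy–Schwarz inequality applied against the indicator of \(Z\); since \(Z = S^{1}\times[-1,1]\) has measure \(4\pi\), one admissible value of the constant is \((4\pi)^{1/2}\):

$$ \int_{Z} \lvert g \rvert\, ds\, d\theta \leq \left(\int_{Z} 1\, ds\, d\theta\right)^{1/2} \left(\int_{Z} g^{2}\, ds\, d\theta\right)^{1/2} = (4\pi)^{1/2}\left(\int_{Z} g^{2}\, ds\, d\theta\right)^{1/2}. $$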

From inequalities (A.3) and (A.4), we get:

$$ \begin{array}{@{}rcl@{}} E_{1}\lvert Z_{n,i}\rvert &\leq& \frac{c_{11}}{\sqrt{\log n}} \left[\left( \frac{\log n}{n}\right)^{1/2} + \left( \frac{\log n}{n}\right)\right] \\ &\leq& \frac{c_{12}}{\sqrt{\log n}} \left( \frac{\log n}{n}\right)^{1/2} \qquad \left( 0<\frac{\log n}{n}\leq \left( \frac{\log n}{n}\right)^{1/2}<1 \text{ for } n\geq 3\right). \end{array} $$

Finally,

$$ \begin{array}{@{}rcl@{}} \sum\limits_{i=1}^{n} \left( E_{1}\lvert Z_{n,i}\rvert\right)^{3} \leq c_{12}^{3}\, n \left( \frac{1}{n}\right)^{3/2} \to 0 \quad \text{as } n\to\infty. \end{array} $$
(A.5)

Using Eq. A.1, we have:

$$ E_{1|(\theta,s)} [\lvert Z_{n,i}\rvert^{2}] \leq \frac{1}{4\sigma^{4} \log n} \left( T_{\mu}^{4}f_{n,1}(\theta_{i},s_{i}) + 4 \sigma^{2}\, T_{\mu}^{2} f_{n,1} (\theta_{i},s_{i})\right). $$

Then, using the fact that \(\left \lvert T_{\mu } f_{n,1}(\theta _{i},s_{i})\right \rvert \leq c_{13}\, h^{\beta} = c_{13} \left (\frac {\log n}{n}\right )^{\frac {\beta }{2\beta +1}}\),

$$ \begin{array}{@{}rcl@{}} E_{1}[\lvert Z_{n,i}\rvert^{2}] &\leq& \frac{c_{14}}{\log n} \left[\left( \frac{\log n}{n}\right)^{\frac{4\beta}{2\beta+1}} + \left( \frac{\log n}{n}\right)^{\frac{2\beta}{2\beta+1}} \right] \\ &\leq& \frac{c_{15}}{\log n} \left( \frac{\log n}{n}\right)^{\frac{2\beta}{2\beta+1}} \qquad \left( 0<\left( \frac{\log n}{n}\right)^{\frac{4\beta}{2\beta+1}} \leq \left( \frac{\log n}{n}\right)^{\frac{2\beta}{2\beta+1}} < 1 \text{ for } n\geq 3\right). \end{array} $$

Finally,

$$ \begin{array}{@{}rcl@{}} \sum\limits_{i=1}^{n} E_{1} \lvert Z_{n,i}\rvert^{2} E_{1} \lvert Z_{n,i} \rvert &\leq& c_{16} \frac{n}{\log n} \frac{1}{\sqrt{\log n}} \left( \frac{\log n}{n}\right)^{\frac{6\beta+1}{4\beta+2}} \\ &\leq& c_{16} \frac{1}{\sqrt{\log n}} \left( \frac{\log n}{n}\right)^{\frac{2\beta-1}{4\beta+2}} \quad \to 0 \text{ as } n \to \infty.\\ \end{array} $$
(A.6)

Now we consider \({\sum }_{i=1}^{n} E_{1}\lvert Z_{n,i}\rvert ^{3}\). For this we first evaluate:

$$ \begin{array}{@{}rcl@{}} E_{1|(\theta,s)}\lvert Z_{n,i}\rvert^{3} &=& \frac{1}{(\log n)^{3/2}}\int \left\lvert \log \frac{p_{\epsilon}(Y_{i})}{p_{\epsilon}(Y_{i} - T_{\mu} f_{n,1} (\theta_{i}, s_{i}))} \right\rvert^{3} p_{\epsilon}(Y_{i}-T_{\mu}f_{n,1}(\theta_{i},s_{i}))\, dY_{i}\\ &=& \frac{1}{8\sigma^{6}(\log n)^{3/2}}\int \left\lvert T_{\mu}^{2} f_{n,1} (\theta_{i}, s_{i}) - 2Y_{i}\, T_{\mu} f_{n,1} (\theta_{i}, s_{i}) \right\rvert^{3} p_{\epsilon} (Y_{i} - T_{\mu}f_{n,1} (\theta_{i}, s_{i}))\, dY_{i}\\ &\leq& \frac{1}{8\sigma^{6}(\log n)^{3/2}} \int\left[ T_{\mu}^{6}f_{n,1}(\theta_{i},s_{i}) + 8 \lvert Y_{i} \rvert^{3}\lvert T_{\mu}^{3}f_{n,1}(\theta_{i},s_{i})\rvert + 12 Y_{i}^{2}\, T_{\mu}^{4}f_{n,1}(\theta_{i},s_{i}) \right.\\ && \left. +\, 6\lvert Y_{i}\rvert \lvert T_{\mu}^{5}f_{n,1}(\theta_{i},s_{i}) \rvert \right] \frac{\exp\left(-(Y_{i}-T_{\mu}f_{n,1}(\theta_{i},s_{i}))^{2}/2\sigma^{2}\right)}{\sqrt{2\pi\sigma^{2}}}\, dY_{i}\\ &\leq& \frac{c_{16}}{(\log n)^{3/2}}\left( \frac{\log n}{n}\right)^{\frac{6\beta}{2\beta+1}}, \end{array} $$

where the last inequality follows from the previous one by integrating each term and using the fact that \(\lvert T_{\mu } f_{n,1} (\theta _{i}, s_{i}) \rvert \leq c_{13} \left (\frac {\log n}{n}\right )^{\frac {\beta }{2\beta +1}}\). Thus:

$$ \begin{array}{@{}rcl@{}} \sum\limits_{i=1}^{n} E_{1} \lvert Z_{n,i}\rvert^{3} &\leq& \frac{c_{17}}{(\log n)^{1/2}} \frac{n}{\log n} \left( \frac{\log n}{n}\right)^{\frac{6\beta}{2\beta+1}}\\ &\leq& \frac{c_{17}}{(\log n)^{1/2}} \left( \frac{\log n}{n}\right)^{\frac{4\beta-1}{2\beta+1}} \quad \to 0 \text{ as } n \to \infty. \end{array} $$
(A.7)
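
As an illustration that the three remainder bounds vanish, the rates in (A.5), (A.6) and (A.7) can be evaluated numerically; the constants \(c_{12}, c_{16}, c_{17}\) are unknown, so they are set to 1 here (an assumption made only for this sketch), and \(\beta = 1\) is a sample value (note that (A.6) requires \(\beta > 1/2\)):

```python
import math

# Rates from (A.5)-(A.7) with all constants set to 1 (illustrative only).
def a5(n):
    return n * n ** (-1.5)                  # (A.5): n * (1/n)^{3/2}

def a6(n, beta):                            # (A.6): exponent > 0 iff beta > 1/2
    return ((math.log(n) / n) ** ((2 * beta - 1) / (4 * beta + 2))
            / math.sqrt(math.log(n)))

def a7(n, beta):                            # (A.7): exponent > 0 iff beta > 1/4
    return ((math.log(n) / n) ** ((4 * beta - 1) / (2 * beta + 1))
            / math.sqrt(math.log(n)))

for n in (10**3, 10**6, 10**9):
    print(n, a5(n), a6(n, 1.0), a7(n, 1.0))  # all three columns decrease to 0
```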

Equations A.2, A.5, A.6 and A.7 together prove part (c). □


Cite this article

Arya, S., Abhishek, A. Adaptive Estimation of a Function from its Exponential Radon Transform in Presence of Noise. Sankhya A 85, 1127–1155 (2023). https://doi.org/10.1007/s13171-022-00300-8

