Adaptivity and Oracle Inequalities in Linear Statistical Inverse Problems: A (Numerical) Survey

Frank Werner
Chapter
Part of the Trends in Mathematics book series (TM)

Abstract

We investigate a posteriori parameter choice methods for filter-based regularizations \(\hat f_\alpha = q_\alpha\left(T^*T\right)T^*Y\) in statistical inverse problems Y = Tf + σξ. Here T is assumed to be a bounded linear operator between Hilbert spaces, and ξ is Gaussian white noise.
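
To make the filter-based estimator concrete, the following is a minimal sequence-space sketch in Python: the SVD of T reduces Y = Tf + σξ to a diagonal model, and the Tikhonov filter q_α(λ) = 1/(λ + α) is applied componentwise. The singular values, the truth f, and the noise level below are illustrative assumptions, not the chapter's actual test problems.

```python
import numpy as np

rng = np.random.default_rng(0)

# Diagonalized model: the SVD of T reduces Y = Tf + sigma*xi to
# y_k = s_k f_k + sigma xi_k (all concrete choices below are assumptions).
n = 200
k = np.arange(1, n + 1)
s = k**-1.0                      # assumed polynomially decaying singular values
f = k**-1.5 * np.sin(k)          # assumed truth in the singular basis
sigma = 1e-4
y = s * f + sigma * rng.standard_normal(n)

def tikhonov_estimate(y, s, alpha):
    """Filter-based estimator f_alpha = q_alpha(T*T) T* Y with the
    Tikhonov filter q_alpha(lambda) = 1 / (lambda + alpha)."""
    return s * y / (s**2 + alpha)

f_hat = tikhonov_estimate(y, s, alpha=1e-4)
err = np.linalg.norm(f_hat - f)
```

Other filters (spectral cut-off, Landweber iteration) fit the same template with a different choice of q_α; the parameter choice problem discussed below is how to pick α from the data alone.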

We discuss order optimality of a posteriori parameter choice rules by means of oracle inequalities and review known results for the discrepancy principle, the unbiased risk estimation procedure, the Lepskiı̆-type balancing principle, and the quasi-optimality principle.
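
As a concrete instance of one of these rules, the sketch below implements the discrepancy principle on a toy diagonal model: among a geometric grid of parameters it picks the largest α whose residual does not exceed τσ√n, the (approximate) expected noise norm. All model choices (singular values, truth, noise level, τ) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy diagonal model y_k = s_k f_k + sigma xi_k (illustrative assumptions,
# not the chapter's actual test problems).
n = 200
k = np.arange(1, n + 1)
s = k**-1.0                    # assumed mildly ill-posed singular values
f = k**-1.5                    # assumed truth in the singular basis
sigma = 1e-3
y = s * f + sigma * rng.standard_normal(n)

def residual(alpha):
    """Residual ||T f_alpha - Y|| of the Tikhonov-filtered estimate."""
    f_hat = s * y / (s**2 + alpha)
    return np.linalg.norm(s * f_hat - y)

# Discrepancy principle: largest alpha on the grid whose residual stays
# below tau * sigma * sqrt(n), the expected norm of the noise.
tau = 1.1
alphas = np.logspace(0, -10, 60)   # scanned from strong to weak regularization
alpha_dp = next(a for a in alphas if residual(a) <= tau * sigma * np.sqrt(n))
```

Since the residual is monotonically increasing in α, scanning the grid from large to small and stopping at the first admissible value indeed returns the largest admissible α.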

The main emphasis of this chapter is on numerical comparisons of the mentioned parameter choice rules. We investigate estimation of the second derivative as a mildly ill-posed example, and furthermore satellite gradiometry and the backward heat equation as severely ill-posed examples. The performance is illustrated by means of empirical convergence rates and inefficiency compared to the best possible (oracle) choice.
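
The inefficiency measure used in such comparisons can be sketched as follows: compute the error of every candidate α on a toy problem, then compare a data-driven choice, here the quasi-optimality rule (which needs neither σ nor f), against the oracle choice (which uses the unknown truth). The model below is again an assumed illustration, not one of the chapter's test problems.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy diagonal model (illustrative assumptions).
n = 200
k = np.arange(1, n + 1)
s = k**-1.0
f = k**-1.5
sigma = 1e-3
y = s * f + sigma * rng.standard_normal(n)

# Tikhonov estimates on a geometric parameter grid.
alphas = np.logspace(0, -4, 30)
estimates = [s * y / (s**2 + a) for a in alphas]
errors = [np.linalg.norm(fh - f) for fh in estimates]

# Oracle choice: best alpha in hindsight (uses the unknown truth f).
oracle_err = min(errors)

# Quasi-optimality: minimize the change between consecutive estimates;
# needs neither the noise level sigma nor the truth f.
diffs = [np.linalg.norm(estimates[i + 1] - estimates[i])
         for i in range(len(alphas) - 1)]
i_qo = int(np.argmin(diffs))

# Inefficiency: error of the data-driven choice relative to the oracle.
inefficiency = errors[i_qo] / oracle_err
```

By construction the inefficiency is at least 1; a rule performs well on a given problem when it stays close to 1 as the noise level varies.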

Acknowledgements

Financial support by the German Research Foundation DFG through CRC 755, project A07 is gratefully acknowledged.


Copyright information

© Springer International Publishing AG 2018

Authors and Affiliations

Statistical Inverse Problems in Biophysics Group, Max Planck Institute for Biophysical Chemistry, Göttingen, Germany