Abstract
This article deals with the solution of linear ill-posed equations in Hilbert spaces. Often, one only has a corrupted measurement of the right-hand side at hand, and the Bakushinskii veto tells us that we cannot solve the equation if we do not know the noise level. But in applications it is often unrealistic to know the error of a measurement a priori. In practice, the error of a measurement may often be estimated through averaging of multiple measurements. We integrate this into our analysis and obtain convergence to the true solution, under the sole assumption that the measurements are unbiased, independent and identically distributed according to an unknown distribution.
1 Introduction
The goal is to solve the ill-posed equation \(K{\hat{x}}={\hat{y}}\), where \({\hat{x}}\in {\mathscr {X}}\) and \({\hat{y}}\in {\mathscr {Y}}\) are elements of infinite dimensional Hilbert spaces and K is either linear and bounded with non-closed range, or more specifically compact. We do not know the right-hand side \({\hat{y}}\) exactly, but we are given several measurements \(Y_1,Y_2,\ldots \) of it, which are independent, identically distributed and unbiased (\({\mathbb {E}}Y_i = {\hat{y}}\)) random variables. Thus we assume that we are able to measure the right-hand side multiple times; a crucial requirement is that the solution does not change, at least on small time scales. Let us stress that using multiple measurements to decrease the data error is a standard engineering practice under the name ‘signal averaging’, see, e.g., [27] for an introductory monograph or [20] for a survey article. Examples with low or moderate numbers of measurements (up to a hundred) can be found in [9] or [28] on image averaging or [13] on satellite radar measurements. For the recent first image of a black hole, even up to \(10^9\) samples were averaged, cf. [1].
The given multiple measurements naturally lead to an estimator of \({\hat{y}}\), namely the sample mean
But in general \(K^+{\bar{Y}}_n \not \rightarrow K^+{\hat{y}}\) for \(n\rightarrow \infty \), because the generalised inverse (Definition 2.2 of [12]) of K is not continuous. So the inverse is replaced with a family of continuous approximations \((R_{\alpha })_{\alpha >0}\), called a regularisation, e.g. the Tikhonov regularisation \(R_{\alpha }:=\left( K^*K+\alpha Id\right) ^{-1}K^*\), where \(Id:{\mathscr {X}}\rightarrow {\mathscr {X}}\) is the identity. The regularisation parameter \(\alpha \) has to be chosen according to the data \({\bar{Y}}_n\) and the true data error
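To make the Tikhonov regularisation concrete, here is a minimal finite-dimensional sketch (with a hypothetical diagonal matrix K standing in for the operator; the paper's setting is infinite dimensional):

```python
import numpy as np

def tikhonov(K, y, alpha):
    """Tikhonov regularisation R_alpha y = (K^T K + alpha I)^{-1} K^T y."""
    m = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(m), K.T @ y)

# Hypothetical ill-conditioned test operator with decaying singular values
rng = np.random.default_rng(0)
s = 0.5 ** np.arange(1, 11)
K = np.diag(s)
x_hat = rng.standard_normal(10)
y_hat = K @ x_hat

# With exact data, R_alpha y_hat -> K^+ y_hat = x_hat as alpha -> 0
for alpha in (1e-2, 1e-4, 1e-8):
    print(alpha, np.linalg.norm(tikhonov(K, y_hat, alpha) - x_hat))
```

The printed errors decrease with \(\alpha \); with noisy data, however, \(\alpha \) must not be taken too small, which is exactly the parameter choice problem discussed in the text.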
which is also a random variable. Since \({\hat{y}}\) is unknown, \(\delta _n^{true}\) is also unknown and has to be guessed. Natural guesses are
A first natural approach is to use a (deterministic) regularisation method together with \({\bar{Y}}_n\) and \(\delta _n^{est}\). We are in particular interested in the discrepancy principle [30], which is known to provide optimal convergence rates (for some \({\hat{y}}\)) in the classical deterministic setting. The following main result states that, in a certain sense, this natural approach converges and yields the optimal deterministic rates asymptotically.
Corollary 1
(to Theorems 3 and 4) Assume that \(K:{\mathscr {X}}\rightarrow {\mathscr {Y}}\) is a compact operator with dense range between Hilbert spaces and that \(Y_1,Y_2,\ldots \) are i.i.d. \({\mathscr {Y}}-\)valued random variables which fulfil \({\mathbb {E}}[ Y_1] = {\hat{y}}\in {\mathscr {R}}(K)\) and \(0<{\mathbb {E}}\Vert Y_1-{\hat{y}}\Vert ^2<\infty \). Define the Tikhonov regularisation \(R_{\alpha }:=\left( K^*K+\alpha Id\right) ^{-1}K^*\) (or the truncated singular value regularisation, or Landweber iteration). Determine \((\alpha _n)_n\) through the discrepancy principle using \(\delta _n^{est}\) (see Algorithm 1). Then \(R_{\alpha _n}{\bar{Y}}_n\) converges to \(K^+{\hat{y}}\) in probability, that is
Moreover, if \(K^+{\hat{y}}=\left( K^*K\right) ^{\nu /2}w\) with \(w\in {\mathscr {X}}\) and \(\Vert w\Vert \le \rho \) for \(\rho >0\) and \(0<\nu <\nu _0-1\) (where \(\nu _0\) is the qualification of the chosen method, see Assumption 1), then for all \(\varepsilon >0\),
Moreover, it is shown that the approach in general does not yield \(L^2\) convergence for a naive use of the discrepancy principle, but it does for a priori regularisation. We also briefly discuss how one has to estimate the error to obtain almost sure convergence.
To solve an inverse problem, as already mentioned, typically some a priori information about the noise is required. This may be, in the classical deterministic case, the knowledge of an upper bound of the noise level, or, in the stochastic case, some knowledge of the error distribution or the restriction to certain classes of distributions, for example Gaussian distributions. Here we present the first rigorous convergence theory for noisy measurements without any knowledge of the error distribution. The approach can easily be used by anyone who can measure multiple times.
Stochastic or statistical inverse problems are an active field of research with close ties to high dimensional statistics [16, 17, 31]. In general, there are two approaches to tackle an ill-posed problem with stochastic noise. The Bayesian setting considers the solution of the problem itself as a random quantity, on which one has some a priori knowledge (see [23]). This opposes the frequentist setting, where the inverse problem is assumed to have a deterministic, exact solution [6, 10]. We are working in the frequentist setting, but we stay close to the classic deterministic theory of linear inverse problems [12, 32, 33]. For statistical inverse problems, typical methods to determine the regularisation parameter are cross validation [34], Lepski’s balancing principle [29] or penalised empirical risk minimisation [11]. Modifications of the discrepancy principle were studied recently [7, 8, 25, 26]. In [8], it was first shown how to obtain optimal convergence in \(L^2\) under Gaussian white noise with a modified version of the discrepancy principle.
Another approach is to transfer results from the classical deterministic theory using the Ky-Fan metric, which metrises convergence in probability. In [15, 21] it is shown how to obtain convergence if one knows the Ky-Fan distance between the measurements and the true data. Aspects of the Bakushinskii veto [3] for stochastic inverse problems are discussed in [4, 5, 35] under assumptions on the noise distribution. In particular, [5] gives an explicit non-trivial example of a convergent regularisation without knowledge of the exact error level, under Gaussian white noise. We extend this to arbitrary distributions here, provided one has multiple measurements.
In the articles mentioned above, the error is usually modelled as a Hilbert space process (such as white noise), so it is impossible to determine the regularisation parameter directly through the discrepancy principle. This is in contrast to our more classical error model, where the measurement is an element of the Hilbert space itself. Under the popular assumption that the operator K is Hilbert-Schmidt, one could in principle extend our results to a general Hilbert space process error model (considering the symmetrised equation \(K^*K{\hat{x}}=K^*{\hat{y}}\) instead of \(K{\hat{x}}={\hat{y}}\), as is done for example in [8]). But we postpone the discussion of the white noise case to a follow-up paper.
To summarise the connection to the Bakushinskii veto, let us state the following. The Bakushinskii veto states that the inverse problem can only be solved with a deterministic regularisation if the noise level of the data is known. In this article we show that if one has access to multiple i.i.d. measurements from an unknown distribution, one may use the average as data, together with the estimated noise level, and one obtains the optimal deterministic rate with high probability as the number of measurements tends to infinity. That is, the error can be estimated from the data. Finally, the measurements potentially contain more information, which is not used here. For example, one could estimate the whole covariance structure of one measurement and use this to rescale the measurements and the operator, eventually increasing the relative smoothness of the data. Also, one could directly regularise the non-averaged measurements.
In the following section we apply our approach to a priori regularisations, and in the main part we consider the widely used discrepancy principle, which is known to work optimally in the classical deterministic theory. After that we briefly show how to choose \(\delta _n^{est}\) to obtain almost sure convergence, and we compare the methods numerically.
2 A priori regularisation
We use the usual definition that \(R_\alpha :{\mathscr {Y}}\rightarrow {\mathscr {X}}\) is called a linear regularisation, if \(R_\alpha \) is a bounded linear operator for all \(\alpha >0\) and if \(R_{\alpha }y\rightarrow K^+y\) for \(\alpha \rightarrow 0\) for all \(y\in {\mathscr {D}}(K^+)\). A regularisation method is a combination of a regularisation and a parameter choice strategy \(\alpha : {\mathbb {R}}^+ \times {\mathscr {Y}} \rightarrow {\mathbb {R}}^+\), such that \(R_{\alpha (\delta ,y^{\delta })}y^{\delta } \rightarrow K^+y\) for \(\delta \rightarrow 0\), for all \(y \in {\mathscr {D}}(K^+)\) and for all \((y^{\delta })_{\delta > 0}\subset {\mathscr {Y}}\) with \(\Vert y^{\delta } - y \Vert \le \delta \). The method is called a priori, if the parameter choice does not depend on the data, that is if \(\alpha (\delta ,y)=\alpha (\delta )\).
The measurements can be formally modelled as realisations of an independent and identically distributed sequence \(Y_1,Y_2,\ldots : \varOmega \rightarrow {\mathscr {Y}}\) of random variables with values in \({\mathscr {Y}}\), such that \({\mathbb {E}}Y_1 ={\hat{y}}\in {\mathscr {D}}(K^+)\). Moreover, we require that \(0<{\mathbb {E}}\Vert Y_1 \Vert ^2 < \infty \), that is, the measurements are (almost surely) elements of the Hilbert space.
In the following we apply the above approach to a priori parameter choice strategies \(\alpha (y^{\delta },\delta )=\alpha (\delta )\). We restrict to \(\delta _n^{est}=1/\sqrt{n}\), that is, we do not estimate the variance (otherwise the parameter choice would depend on the data). Since \(\delta _n^{est}\) and hence \(\alpha (\delta _n^{est})\) are then deterministic, the situation is very easy and the results are not surprising (see Remark 2).
Theorem 1
(Convergence of a priori regularisation) Assume that \(K:{\mathscr {X}}\rightarrow {\mathscr {Y}}\) is a bounded linear operator with non-closed range between Hilbert spaces and that \(Y_1,Y_2,\ldots \) are i.i.d. \({\mathscr {Y}}-\)valued random variables which fulfil \({\mathbb {E}}[ Y_1] = {\hat{y}}\in {\mathscr {D}}(K^+)\) and \(0<{\mathbb {E}}\Vert Y_1\Vert ^2<\infty \). Take an a priori regularisation scheme, with \(\alpha (\delta ) {\mathop {\longrightarrow }\limits ^{\delta \rightarrow 0}} 0\) and \(\Vert R_{\alpha (\delta )} \Vert \delta {\mathop {\longrightarrow }\limits ^{\delta \rightarrow 0}} 0\). Set \({\bar{Y}}_n:= \sum _{i\le n} Y_i/n\) and \(\delta _n^{est}:=n^{-1/2}\). Then \(\lim _{n\rightarrow \infty }{\mathbb {E}}\Vert R_{\alpha (\delta _n^{est})} {\bar{Y}}_n -K^+{\hat{y}}\Vert ^2 =0\).
Proof
Because of linearity, \({\mathbb {E}}\left[ R_{\alpha } Y_1 \right] = R_{\alpha }{\mathbb {E}}\left[ Y_1\right] = R_{\alpha }{\hat{y}}\) and thus by (3)
since \(R_{\alpha }Y_i \in {\mathscr {R}}(K^*)\) where the latter is separable. Therefore, by the bias-variance-decomposition,
\(\square \)
As in the deterministic case, under additional source conditions we can prove convergence rates. We restrict to regularisations \(R_\alpha :=F_{\alpha }\left( K^*K\right) K^*\) defined via the spectral decomposition (see [12]) with the following assumptions for the generating filter.
Assumption 1
\((F_{\alpha })_{\alpha >0}\) is a regularising filter, i.e. a family of piecewise continuous real valued functions on \([0,\Vert K\Vert ^2]\), continuous from the right, with \(\lim _{\alpha \rightarrow 0}F_{\alpha }(\lambda )=\frac{1}{\lambda }\) for all \(\lambda \in (0,\Vert K \Vert ^2]\) and \(\lambda F_{\alpha }(\lambda )\le C_R\) for all \(\alpha >0\) and all \(\lambda \in \left( 0,\Vert K\Vert ^2\right] \), where \(C_R>0\) is some constant. Moreover, it has qualification \(\nu _0>0\), i.e. \(\nu _0\) is maximal such that for all \(\nu \in [0,\nu _0]\) there exists a constant \(C_{\nu }>0\) with
Finally, there is a constant \(C_F>0\) such that \(|F_{\alpha }(\lambda )|\le C_F/\alpha \) for all \(0<\lambda \le \Vert K\Vert ^2\).
Remark 1
The generating filters of the following regularisation methods fulfil Assumption 1:
1. Tikhonov regularisation (qualification 2),
2. n-times iterated Tikhonov regularisation (qualification 2n),
3. truncated singular value regularisation (infinite qualification),
4. Landweber iteration (infinite qualification).
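As a numerical sanity check, the Tikhonov filter \(F_{\alpha }(\lambda )=1/(\lambda +\alpha )\) can be verified against Assumption 1 (a sketch under the hypothetical normalisation \(\Vert K\Vert =1\); the constants \(C_R=C_F=1\), \(C_1=1/2\), \(C_2=1\) are our own computed values, not taken from the paper):

```python
import numpy as np

# Spectrum (0, ||K||^2] discretised, assuming ||K|| = 1 for illustration
lam = np.linspace(1e-12, 1.0, 200001)

for alpha in (1e-1, 1e-3, 1e-5):
    F = 1.0 / (lam + alpha)                      # Tikhonov filter F_alpha
    assert np.all(lam * F <= 1.0 + 1e-12)        # lambda F_alpha <= C_R = 1
    assert np.all(np.abs(F) <= 1.0 / alpha)      # |F_alpha| <= C_F / alpha
    # Qualification nu_0 = 2: sup_lambda lambda^{nu/2} |1 - lambda F_alpha|
    # is bounded by C_nu alpha^{nu/2}, with C_1 = 1/2 and C_2 = 1
    for nu, C_nu in ((1.0, 0.5), (2.0, 1.0)):
        lhs = np.max(lam ** (nu / 2) * np.abs(1.0 - lam * F))
        assert lhs <= C_nu * alpha ** (nu / 2) + 1e-12
print("Tikhonov filter passes the checks of Assumption 1")
```

For \(\nu =1\) the supremum \(\sqrt{\lambda }\,\alpha /(\lambda +\alpha )\) is attained at \(\lambda =\alpha \) with value \(\sqrt{\alpha }/2\), which is why \(C_1=1/2\) is sharp here.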
Theorem 2
(Rate of convergence of a priori regularisation) Assume that \(K:{\mathscr {X}}\rightarrow {\mathscr {Y}}\) is a bounded linear operator with non-closed range between Hilbert spaces and that \(Y_1,Y_2,\ldots \) are i.i.d. \({\mathscr {Y}}-\)valued random variables which fulfil \({\mathbb {E}}[ Y_1] = {\hat{y}}\in {\mathscr {D}}(K^+)\) and \(0<{\mathbb {E}}\Vert Y_1\Vert ^2<\infty \). Let \(R_{\alpha }\) be induced by a filter fulfilling Assumption 1. Set \({\bar{Y}}_n:= \sum _{i\le n} Y_i/n\) and \(\delta _n^{est}=n^{-1/2}\). Assume that for \(0<\nu \le \nu _0\) and \(\rho >0\) we have that \(K^+{\hat{y}}=(K^*K)^{\nu /2}w\) for some \(w\in {\mathscr {X}}\) with \(\Vert w \Vert \le \rho \). Then if for constants \(0<c<C\),
we have that \(\sqrt{{\mathbb {E}}\Vert R_{\alpha (\delta _n^{est})}{\bar{Y}}_n - K^+{\hat{y}} \Vert ^2} \le {C^{\prime }} {\delta _n^{est}}^\frac{\nu }{\nu +1} \rho ^\frac{1}{\nu +1} = {\mathscr {O}}\left( n^{-\frac{\nu }{2(\nu +1)}}\right) \) for some constant \({C^{\prime }}>0\).
Proof
We proceed similarly to the proof of Theorem 1, using additionally Proposition 1 of Sect. 4.
\(\square \)
Remark 2
For separable Hilbert spaces one could alternatively argue as follows: the spaces \({\mathscr {X}}^{\prime }:=L^2(\varOmega ,{\mathscr {X}})=\{X:\varOmega \rightarrow {\mathscr {X}}:{\mathbb {E}}\Vert X\Vert ^2<\infty \}\) and \({\mathscr {Y}}^{\prime }:=L^2(\varOmega ,{\mathscr {Y}})\) are also Hilbert spaces, with scalar products \((X,{\tilde{X}})_{{\mathscr {X}}^{\prime }}:={\mathbb {E}}(X,{\tilde{X}})_{{\mathscr {X}}}\) and \((\cdot ,\cdot )_{{\mathscr {Y}}^{\prime }}\) defined similarly. Then \(K:{\mathscr {X}}\rightarrow {\mathscr {Y}}\) naturally induces a bounded linear operator \(K^{\prime }:{\mathscr {X}}^{\prime }\rightarrow {\mathscr {Y}}^{\prime },X\mapsto KX\). Clearly we have that \({\hat{y}}\in {\mathscr {Y}}^{\prime }\), and \(({\bar{Y}}_n)_n\) is a sequence in \({\mathscr {Y}}^{\prime }\) which fulfils
and we can use the classic deterministic results for \(K^{\prime }:{\mathscr {X}}^{\prime }\rightarrow {\mathscr {Y}}^{\prime }\) and \({\bar{Y}}_n\) and \(\delta _n^{est}\).
3 The discrepancy principle
In this section we restrict to compact operators with dense range. Note that then \({\mathscr {Y}}=\overline{{\mathscr {R}}(K)}\) will be automatically separable. In practice the above parameter choice strategies are of limited interest, since they require the knowledge of the abstract smoothness parameters \(\nu \) and \(\rho \). The classical discrepancy principle would be to choose \(\alpha _n\) such that
which is not possible because \(\delta _n^{true}\) is unknown. So we replace it with our estimator \(\delta _n^{est}\) and implement the discrepancy principle via Algorithm 1, with or without the optional emergency stop.
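Since Algorithm 1 itself is not reproduced in this excerpt, the following is a sketch of our reading of it: shrink \(\alpha =q^k\) along a geometric grid until the residual drops below the estimated noise level \(\delta _n^{est}=s_n/\sqrt{n}\), with the optional emergency stop keeping \(\alpha _n\ge 1/n\). The Tikhonov regularisation and the diagonal test problem are hypothetical illustrations:

```python
import numpy as np

def tikhonov(K, y, alpha):
    m = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(m), K.T @ y)

def discrepancy_principle(K, Y, q=0.5, emergency_stop=True):
    """Sketch of Algorithm 1: average the measurements, estimate the noise
    level of the mean, and shrink alpha = q^k until the residual is small."""
    n = len(Y)
    Y_bar = np.mean(Y, axis=0)
    s_n = np.sqrt(np.sum((Y - Y_bar) ** 2) / (n - 1))   # sample std
    delta_est = s_n / np.sqrt(n)                        # noise level of Y_bar
    alpha = 1.0
    while np.linalg.norm(K @ tikhonov(K, Y_bar, alpha) - Y_bar) >= delta_est:
        if emergency_stop and alpha * q < 1.0 / n:      # keep alpha >= 1/n
            break
        alpha *= q
    return alpha, tikhonov(K, Y_bar, alpha)

# Hypothetical test problem: diagonal K and i.i.d. Gaussian measurements
rng = np.random.default_rng(1)
K = np.diag(0.8 ** np.arange(1, 9))
x_hat = np.ones(8)
Y = K @ x_hat + 0.5 * rng.standard_normal((500, 8))

alpha_n, x_n = discrepancy_principle(K, Y)
print(alpha_n, np.linalg.norm(x_n - x_hat))
```

Without the emergency stop the loop may run to very small \(\alpha \) on unlucky data, which is precisely the failure mode exhibited by the counterexample in Sect. 3.1.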
Remark 3
To our knowledge, the idea of an emergency stop first appeared in [8]. It provides a deterministic lower bound for the regularisation parameter, which may avoid overfitting. We use an elementary form of an emergency stop here, which does not require knowledge of the singular value decomposition of K. It would be interesting to see how more sophisticated versions of the emergency stop would perform here, which is not clear to us, since in our general setting we cannot rely on the concentration properties of Gaussian noise.
Algorithm 1 will terminate if we use the emergency stop. Otherwise, we can guarantee that Algorithm 1 terminates if K has dense image (or, equivalently, if \(K^*\) is injective) and if \(\delta _n^{est}>0\). This is because then \(\lim _{\alpha \rightarrow 0} KR_{\alpha }=P_{\overline{{\mathscr {R}}(K)}}=Id\) pointwise, so \(\Vert (KR_{q^k}-Id){\bar{Y}}_n\Vert < \delta _n^{est}\) for k large enough. If we decided to use the sample variance, it may happen that \(\delta _n^{est}=0\). But assuming \({\mathbb {E}}\Vert Y_1-{\hat{y}}\Vert ^2>0\), it follows that \({\mathbb {P}}\left( \delta _n^{est}=0\right) ={\mathbb {P}}\left( Y_1=\cdots =Y_n\right) \rightarrow 0\) for \(n\rightarrow \infty \) (with exponential rate). If the distribution of \(Y_1\) possesses a density (with respect to the Gaussian measure, for example), then actually \({\mathbb {P}}(Y_1=\cdots =Y_n)=0\) for all \(n\in {\mathbb {N}}\).
Unlike in the previous section, here the \(L^2\) error will not converge in general, even if \(Y_1\) has a density. The regularisation parameter \(\alpha _n\) is now random, since it depends on the potentially bad random data. With a diminishing probability p we underestimate the data error significantly; the discrepancy principle then yields a too small \(\alpha \), and we may still have \(p\Vert R_{\alpha }\Vert \gg 1\) in such a case.
In the following we will need the singular value decomposition of the compact operator K with dense range (see [10]): there exists a monotone sequence \(\Vert K \Vert =\sigma _1\ge \sigma _2 \ge \cdots >0\) with \(\sigma _l{\rightarrow }0\) for \(l\rightarrow \infty \). Moreover, there are families of orthonormal vectors \((u_l)_{l\in {\mathbb {N}}}\) and \((v_l)_{l\in {\mathbb {N}}}\) with \(\overline{span}( u_l:l\in {\mathbb {N}})={\mathscr {Y}}\), \(\overline{span}(v_l:l\in {\mathbb {N}})= {\mathscr {N}}(K)^\bot \) such that \(Kv_l=\sigma _lu_l\) and \(K^*u_l=\sigma _lv_l\).
3.1 A counterexample for convergence
We now show that a naive use of the discrepancy principle, as implemented in Algorithm 1 without emergency stop, may fail to converge in \(L^2\). To simplify calculations we pick Gaussian noise and the truncated singular value regularisation and we set \(\delta _n^{est}=1/\sqrt{n}\). We choose \({\mathscr {X}}:=l^2({\mathbb {N}})\) with the standard basis \(\{u_k:=(0,\ldots ,0,1,0,\ldots )\}\) and consider the diagonal operator
with \({\hat{x}}=0={\hat{y}}=K{\hat{x}}\). Hence the \(\sigma _l=(1/100)^\frac{l}{2}\) are the eigenvalues of K and
We assume that the noise is distributed along \(y:= \sum _{l\ge 2} 1/\sqrt{l(l-1)}\, u_l\), so we have that \(\sum _{l> n} (y,u_l)^2=1/n\) and thus \(y\in l^2({\mathbb {N}})\). That is, we set \({\bar{Y}}_n:=\sum _{i\le n} Y_i/n = \sum _{i\le n} Z_iy/n\), where the \(Z_i\) are i.i.d. standard Gaussians. We define \(\varOmega _n:=\{Z_i\ge 1, i=1\ldots n\}\), a (very unlikely) event on which we significantly underestimate the true data error. We get that \({\mathbb {P}}(\varOmega _n)={\mathbb {P}}(Z_1\ge 1)^n\ge 1/10^n\). Moreover, by the definition of the discrepancy principle
It follows that
That is, the probability of the events \(\varOmega _n\) is not small enough to compensate the huge error we have on these events, so in the end \({\mathbb {E}}\Vert R_{\alpha _n}{\bar{Y}}_n-K^+{\hat{y}}\Vert ^2\rightarrow \infty \) for \(n\rightarrow \infty \).
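Two elementary facts used in this construction can be checked numerically: the telescoping tail sum \(\sum _{l>n}1/(l(l-1))=1/n\) and the bound \({\mathbb {P}}(Z_1\ge 1)\ge 1/10\). A small sketch:

```python
import math

# Tail of the noise coefficients: sum_{l>n} 1/(l(l-1)) telescopes to 1/n,
# since 1/(l(l-1)) = 1/(l-1) - 1/l
for n in (1, 10, 1000):
    tail = sum(1.0 / (l * (l - 1)) for l in range(n + 1, 200000))
    assert abs(tail - 1.0 / n) < 1e-4    # truncation error < 1/199999

# P(Z_1 >= 1) for a standard Gaussian, used in P(Omega_n) >= 1/10^n
p = 0.5 * math.erfc(1.0 / math.sqrt(2.0))
print(p)                                 # roughly 0.159 >= 1/10
assert p >= 0.1
```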
3.2 Convergence in probability of the discrepancy principle
In this section we show that the discrepancy principle yields convergence in probability, asymptotically matching the optimal deterministic rate. The proofs of Theorems 3 and 4 and of Corollary 3 are given in the following section.
Theorem 3
(Convergence of the discrepancy principle) Assume that K is a compact operator with dense range between Hilbert spaces \({\mathscr {X}}\) and \({\mathscr {Y}}\) and that \(Y_1,Y_2,\ldots \) are i.i.d. \({\mathscr {Y}}-\)valued random variables with \({\mathbb {E}}Y_1={\hat{y}}\in {\mathscr {R}}(K)\) and \(0<{\mathbb {E}}\Vert Y_1-{\hat{y}}\Vert ^2 < \infty \). Let \(R_{\alpha }\) be induced by a filter fulfilling Assumption 1 with \(\nu _0>1\). Applying Algorithm 1 with or without the emergency stop yields a sequence \((\alpha _n)_n\). Then we have that for all \(\varepsilon > 0\)
i.e. \(R_{\alpha _n}{\bar{Y}}_n {\mathop {\longrightarrow }\limits ^{{\mathbb {P}}}}K^+{\hat{y}}\).
Remark 4
If one tried to argue as in Remark 2 to show \(L^2\) convergence, one would have to determine the regularisation parameter not as given by Eq. (1), but such that \({\mathbb {E}}\Vert (KR_{\alpha }-Id){\bar{Y}}_n\Vert ^2 \approx \delta _n^{est}\), which is not practicable, since we cannot calculate the expectation on the left-hand side.
The popularity of the discrepancy principle is a result of the fact that it guarantees optimal convergence rates under an additional source condition: assuming that there is a \(0<\nu \le \nu _0-1\) (where \(\nu _0\) is the qualification of the chosen regularisation method) such that \(K^+{\hat{y}}=\left( K^*K\right) ^\frac{\nu }{2}w\) for a \(w\in {\mathscr {X}}\) with \(\Vert w \Vert \le \rho \), then
for some constant \(C>0\). The next theorem shows a concentration result for the discrepancy principle as implemented in Algorithm 1, with a bound similar to (2).
Theorem 4
(Rate of convergence of the discrepancy principle) Assume that K is a compact operator with dense range between Hilbert spaces \({\mathscr {X}}\) and \({\mathscr {Y}}\). Moreover, \(Y_1,Y_2,\ldots \) are i.i.d. \({\mathscr {Y}}-\)valued random variables with \({\mathbb {E}}Y_1={\hat{y}}\in {\mathscr {R}}(K)\) and \(0<{\mathbb {E}}\Vert Y_1-{\hat{y}}\Vert ^2 < \infty \). Let \(R_{\alpha }\) be induced by a filter fulfilling Assumption 1 with \(\nu _0>1\). Moreover, assume that there is a \(0<\nu \le \nu _0-1\) and a \(\rho >0\) such that \(K^+{\hat{y}}=(K^*K)^{\nu /2}w\) for some \(w\in {\mathscr {X}}\) with \(\Vert w \Vert \le \rho \). Applying Algorithm 1 with or without the emergency stop yields a sequence \((\alpha _n)_{n\in {\mathbb {N}}}\). Then there is a constant L, such that
We deduce a deterministic bound for \(\Vert R_{\alpha _n}{\bar{Y}}_n-K^+{\hat{y}}\Vert \) (for n large).
Corollary 2
Under the assumptions of Theorem 4, for all \(\varepsilon >0\) it holds that
Proof (Corollary 2)
By the second assertion in Lemma 1 and Markov’s inequality, for any \(c,\varepsilon >0\),
\(\square \)
The ad hoc emergency stop \(\alpha _n>1/n\) additionally assures that the \(L^2\) error will not explode (unlike in the counterexample of the previous subsection). Under the assumption that \({\mathbb {E}}\Vert Y_1-{\hat{y}}\Vert ^4<\infty \), one can guarantee that the \(L^2\) error will converge.
Corollary 3
Under the assumptions of Theorem 3, consider the sequence \(\alpha _n\) determined by Algorithm 1 with emergency stop. Then there is a constant C such that \({\mathbb {E}}\Vert R_{\alpha _n}{\bar{Y}}_n-K^+{\hat{y}}\Vert ^2\le C\) for all \(n\in {\mathbb {N}}\). If additionally \({\mathbb {E}}\Vert Y_1-{\hat{y}}\Vert ^4<\infty \), then it holds that \({\mathbb {E}}\Vert R_{\alpha _n} {\bar{Y}}_n -K^+{\hat{y}}\Vert ^2 \rightarrow 0\) for \(n\rightarrow \infty \).
3.3 Almost sure convergence
The results so far delivered either convergence in probability or convergence in \(L^2\). We give a short remark on how one can obtain almost sure convergence. Roughly speaking, one has to multiply \(\delta _n^{est}\) by a \(\sqrt{\log \log n}\) factor. This is a simple consequence of the following theorem.
Theorem 5
(Law of the iterated logarithm) Assume that \(Y_1,Y_2,\ldots \) is an i.i.d. sequence with values in some separable Hilbert space \({\mathscr {Y}}\). Moreover, assume that \({\mathbb {E}}Y_1 = 0\) and \({\mathbb {E}}\Vert Y_1\Vert ^2<\infty \). Then we have that
Proof
This is a simple consequence of Corollary 8.8 in [24]. \(\square \)
So if \({\mathbb {E}}Y_1 = {\hat{y}} \in {\mathscr {Y}}\) we have for \(\delta _n^{true}=\Vert {\bar{Y}}_n-{\hat{y}}\Vert \)
that is, with probability 1 it holds that \(\delta _n^{true}\le \sqrt{\frac{2{\mathbb {E}}\Vert Y_1-{\hat{y}}\Vert ^2\log \log n}{n}}\) for n large enough. Consequently, for some \(\tau >1\) the estimator should be
where \( s_n\) is the square root of the sample variance. Since \({\mathbb {P}}(\lim _{n\rightarrow \infty } s_n^2={\mathbb {E}}\Vert Y_1-{\hat{y}}\Vert ^2)=1\) and \(\tau >1\), it holds that \(\sqrt{{\mathbb {E}}\Vert Y_1-{\hat{y}}\Vert ^2}\le \tau s_n\) for n large enough with probability 1, and thus \(\delta _n^{true}\le \delta _n^{est}\) for n large enough with probability 1. In other words, there is an event \(\varOmega _0 \subset \varOmega \) with \({\mathbb {P}}(\varOmega _0)=1\) such that for any \(\omega \in \varOmega _0\) there is an \(N(\omega )\in {\mathbb {N}}\) with \(\delta _n^{true}(\omega )\le \delta _n^{est}(\omega )\) for all \(n\ge N(\omega )\). So we can use \({\bar{Y}}_n\) and \(\delta _n^{est}\) together with any deterministic regularisation method to get almost sure convergence.
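A sketch of this estimator (the choice \(\tau =1.1\) and the Gaussian test distribution are hypothetical illustrations):

```python
import math
import numpy as np

def delta_est_as(Y, tau=1.1):
    """delta_n^est = tau * s_n * sqrt(2 log log n / n), sized so that it
    eventually dominates delta_n^true = ||Y_bar_n - y_hat|| almost surely."""
    n = len(Y)
    Y_bar = np.mean(Y, axis=0)
    s_n = np.sqrt(np.sum((Y - Y_bar) ** 2) / (n - 1))   # sample std
    return tau * s_n * math.sqrt(2.0 * math.log(math.log(n)) / n)

# Hypothetical check with y_hat = 0: for large n the estimate should
# dominate the true error of the sample mean
rng = np.random.default_rng(2)
Y = rng.standard_normal((100000, 5))
print(delta_est_as(Y), np.linalg.norm(np.mean(Y, axis=0)))
```

With this \(\delta _n^{est}\), the discrepancy principle (or any other deterministic regularisation method) can then be applied as before.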
4 Proofs of Theorems 3 and 4
4.1 Proofs without the emergency stop
We will repeatedly use the Pythagorean theorem for independent separable Hilbert space valued random variables \(Z_i\) with \({\mathbb {E}}\Vert Z_i\Vert ^2<\infty \) and \({\mathbb {E}}Z_i=0\),
where \((e_l)_{l\in {\mathbb {N}}}\) is an orthonormal basis. Based on this, the central ingredient will be the following lemma, which in some sense strengthens the pointwise worst case error bound \(\Vert (KR_{\alpha }-Id)({\bar{Y}}_n-{\hat{y}})\Vert \le C_0 \delta _n^{true}\).
Lemma 1
For all \(\varepsilon >0\) and (deterministic) sequences \((q_n)_{n\in {\mathbb {N}}}\) with \(q_n>0\) and \(\lim _{n\rightarrow \infty }q_n=0\), it holds that
and
for \(n\rightarrow \infty \), where \(\gamma =1\) or \(\gamma =\sqrt{{\mathbb {E}}\Vert Y_1-{\hat{y}}\Vert ^2}\), depending on whether we used the sample variance or not.
Proof
By Chebyshev's inequality and (3)
Since K has dense range, \(KR_{q_n}-Id\) converges to 0 pointwise for \(n\rightarrow \infty \) and it follows that \((KR_{q_n}-Id)(Y_1-{\hat{y}})\) also converges pointwise to 0. By inequality (6) of Proposition 1 below, \(\Vert (KR_{q_n}-Id)(Y_1-{\hat{y}}) \Vert ^2 \le C_0 \Vert Y_1-{\hat{y}}\Vert ^2\), so \({\mathbb {E}}\Vert (KR_{q_n}-Id)(Y_1-{\hat{y}})\Vert ^2\rightarrow 0\) for \(n\rightarrow \infty \) by the dominated convergence theorem. The second assertion only needs a proof for \(\gamma =\sqrt{{\mathbb {E}}\Vert Y_1-{\hat{y}}\Vert ^2}\) and then
almost surely (thus in particular in probability) for \(n\rightarrow \infty \) by the strong law of large numbers (Corollary 7.10 in [24]) and the bias-variance-decomposition. Therefore \(\sqrt{n}\delta _n^{est}\rightarrow \gamma \) in probability for \(n\rightarrow \infty \). \(\square \)
For convergence in probability it does not matter how large the error is on sets with diminishing probability, and with Lemma 1 we will show that the probability of certain ‘good events’ tends to 1 in the limit of infinitely many measurements.
Define for \(q\in (0,1)\) (as chosen in Algorithm 1)
So \(\min (q\alpha ,1)\le \psi _q(\alpha )\le \alpha \) and by definition, if \(\Vert \left( KR_{\psi _q(\alpha )}-Id\right) {\bar{Y}}_n\Vert <\delta _n^{est}\), it holds that \(\alpha _n\ge \min (q\alpha ,1)\), where \(\alpha _n\) is the output of Algorithm 1.
We will also need some well known properties of regularisations defined by filters which fulfil Assumption 1. These are mostly easy modifications from [12].
Proposition 1
The constants in the following are defined as in Assumption 1. We assume that K is bounded and linear with non-closed range. Assume that \((R_{\alpha })_{\alpha >0}\) is induced by a regularising filter fulfilling \(|F_{\alpha }(\lambda )|\le C_F/\alpha \) for all \(0<\lambda \le \Vert K\Vert ^2\). Then
for all \(\alpha >0\), with \(C_0\ge 1\). If moreover, the filter has qualification \(\nu _0>0\) and there is a \(w \in {\mathscr {X}}\) with \(\Vert w \Vert \le \rho \) such that \(K^+{\hat{y}}=\left( K^*K\right) ^\frac{\nu }{2}w\) for some \(0<\nu \le \nu _0\), then
for all \(\alpha >0\). If additionally, \(\nu _0\ge \nu +1>1\), then
Moreover, if K is compact, than for all \(x\in {\mathscr {X}}\) there is a function \(g:{\mathbb {R}}^+\rightarrow {\mathbb {R}}^+\) with \(g(\alpha )\rightarrow \infty \) for \(\alpha \rightarrow 0\), such that
where \(\psi _q\) is given in (4).
Proof (Proposition 1)
(5) and (8) are shown in the proofs of Theorem 4.2 and Theorem 4.17 in [12]. (7) and (8) are Theorem 4.3 in [12]. (6) follows directly from Assumption 1.
For (10), let \(x\in {\mathscr {X}}\) be fixed and set
W.l.o.g. \({\tilde{g}}\) is finite for any \(\alpha >0\). Now we first show that
We mimic the proof of Theorem 3.1.17 of [31] and let \(\varepsilon >0\). We fix L such that \(C_1^2 \sum _{l=L+1}^\infty ({\hat{x}},v_l)^2<\varepsilon \). Then
for all \(\alpha <\left( \varepsilon ^{-1} C_{\nu _0}^2L\sigma _L^{2(1-\nu _0)}\Vert {\hat{x}}\Vert ^2\right) ^{-\frac{1}{\nu _0-1}}\), therefore \(\Vert (KR_{\alpha }-Id)Kx\Vert /\sqrt{\alpha }\rightarrow 0\) for \(\alpha \rightarrow 0\). So for any \(t>0\)
for \(\alpha \) small enough, because of (11) and since \(\psi _q(\alpha t)\le \alpha t\). So \({\tilde{g}}(\alpha )\rightarrow \infty \) for \(\alpha \rightarrow 0\) and by definition of \({\tilde{g}}\) the claim holds for \(g(\alpha ):={\tilde{g}}(\alpha )-1\) (g is well defined for \(\alpha \) small enough). \(\square \)
Proof (Theorem 4)
Set \(q_n:=\psi _q(b_n)\) where \(b_n:=\left( \frac{1}{\rho }\frac{\gamma }{4C_{\nu +1}\sqrt{n}}\right) ^\frac{2}{\nu +1}\) with \(\gamma =1\) or \(\gamma =\sqrt{{\mathbb {E}}\Vert Y_1-{\hat{y}}\Vert ^2}\), depending on whether we used the sample variance or not, and \(\psi _q\) given in (4). Define
Then by (9) and since \(q_n\le b_n\),
so \(\alpha _n \chi _{\varOmega _n} \ge q b_n \chi _{\varOmega _n} \ge q \left( \frac{\delta _n^{est}}{6C_{\nu +1}}\right) ^\frac{2}{\nu +1}\chi _{\varOmega _n}\) for n large enough. By (6), (8) and since K has dense image,
Finally,
with \(L:=2^\frac{\nu }{\nu +1}C_0\rho ^\frac{1}{\nu +1}+\sqrt{C_RC_F/q}\left( 6C_{\nu +1}\right) ^\frac{1}{\nu +1}\) and the proof is finished, because \({\mathbb {P}}\left( \varOmega _n\right) \rightarrow 1\) for \(n\rightarrow \infty \) by Lemma 1. \(\square \)
Proof (Theorem 3)
W.l.o.g. we may assume that there are arbitrarily large \(l\in {\mathbb {N}}\) with \(({\hat{y}},u_l)\ne 0\), since otherwise we could apply Theorem 4 with any \(\nu >0\). Let \(\varepsilon ^{\prime }>0\). Then there is a \(L\in {\mathbb {N}}\) such that \(({\hat{y}},u_L)\ne 0\) and \(\left( F_{q^k}(\sigma _L^2)\sigma _L^2-1\right) ^2>1/2\) for all \(k\in {\mathbb {N}}_0\) with \(q^k\ge \varepsilon ^{\prime }\) (because the \(F_{q^k}\) are bounded and \(\sigma _l\rightarrow 0\) for \(l\rightarrow \infty \)). Set
Then for \(n\ge 16\gamma ^2/({\hat{y}},u_L)^2\),
for all \(k\in {\mathbb {N}}_0\) with \(q^k\ge \varepsilon ^{\prime }\). Thus for \(\varOmega _n\) given in (14)
by Lemma 1 and since \(({\bar{Y}}_n,u_L)=\sum _{i=1}^n(Y_i,u_L)/n\rightarrow {\mathbb {E}}(Y_1,u_L)=({\hat{y}},u_L)\ne 0\) almost surely for \(n\rightarrow \infty \). Set \(q_n:=\psi _q\left( b_n\right) \) with \(b_n:=n^{-1}g(n^{-1})\) and g and \(\psi _q\) given in (4) and (10). Define
Then for n large enough (such that \(\Vert (KR_{q_n}-Id){\hat{y}}\Vert \sqrt{n}\le \gamma /4\), see (10) with \(\alpha =n^{-1}\)),
That is \(\alpha _n \chi _{\varOmega _n}\ge q b_n \chi _{\varOmega _n}\ge q n^{-1}g(n^{-1}) \chi _{\varOmega _n}\) for n large enough. Finally set
with \(\varOmega _n\) given in (16). So \({\mathbb {P}}\left( {\tilde{\varOmega }}_n\right) \rightarrow 1\) for \(n\rightarrow \infty \), since \({\mathbb {P}}\left( \delta _n^{true}\le \sqrt{\sqrt{g(n^{-1})}/n}\right) \rightarrow 1\), because of \(g(n^{-1})\rightarrow \infty \), \({\mathbb {P}}\left( \varOmega _n\right) \rightarrow 1\) by Lemma 1 and \({\mathbb {P}}\left( \Vert R_{\alpha _n}{\hat{y}}-K^+{\hat{y}}\Vert \le \frac{\varepsilon }{2}\right) \rightarrow 1\) by (15) (\(\varepsilon ^{\prime }>0\) is arbitrary). Thus for n large enough (so that \(C_RC_F/q\sqrt{g(n^{-1})} \le \frac{\varepsilon ^2}{4}\))
and \({\mathbb {P}}\left( \Vert R_{\alpha _n}{\bar{Y}}_n-K^+{\hat{y}}\Vert \le \varepsilon \right) \ge {\mathbb {P}}\left( {\tilde{\varOmega }}_n\right) \rightarrow 1\) for \(n\rightarrow \infty \). \(\square \)
4.2 Proofs for the emergency stop case
Again, denote by \(\alpha _n\) the output of Algorithm 1 without the emergency stop. For the emergency stop, we have to consider \(\Vert R_{\max \{\alpha _n,1/n\}}{\bar{Y}}_n-K^+{\hat{y}}\Vert \). It suffices to show that \({\mathbb {P}}\left( \alpha _n\ge 1/n\right) \rightarrow 1\) for \(n\rightarrow \infty \).
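To fix ideas, the combination of the discrepancy principle with this safeguard can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation: the helper names (`tikhonov`, `discrepancy_alpha`, `emergency_stop`), the geometric grid \(\alpha _0 q^k\), and the small diagonal matrix in the test are our own assumptions.

```python
import numpy as np

def tikhonov(K, y, alpha):
    """Tikhonov-regularised solution (K^T K + alpha I)^{-1} K^T y."""
    m = K.shape[1]
    return np.linalg.solve(K.T @ K + alpha * np.eye(m), K.T @ y)

def discrepancy_alpha(K, y_bar, delta_est, q=0.7, alpha0=1.0, kmax=200):
    """Shrink alpha geometrically (alpha0 * q^k) until the residual
    ||K x_alpha - y_bar|| drops below the estimated noise level."""
    alpha = alpha0
    for _ in range(kmax):
        if np.linalg.norm(K @ tikhonov(K, y_bar, alpha) - y_bar) <= delta_est:
            break
        alpha *= q
    return alpha

def emergency_stop(alpha_n, n):
    """Safeguard: never let the parameter fall below 1/n."""
    return max(alpha_n, 1.0 / n)
```

The final reconstruction is then `tikhonov(K, y_bar, emergency_stop(alpha_n, n))`, i.e. \(R_{\max \{\alpha _n,1/n\}}{\bar{Y}}_n\): when the discrepancy principle stops too late because the noise level was underestimated, the safeguard raises the parameter back to \(1/n\).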
First assume that \(K^+{\hat{y}}=(K^*K)^\frac{\nu }{2}w\) for some \(w\in {\mathscr {X}}\) with \(\Vert w \Vert \le \rho \) and \(0<\nu \le \nu _0-1\). With (13) it follows that
for \(n\rightarrow \infty \), with \(\varOmega _n\) given in (12). Otherwise, if there are no such \(\nu , \rho \) and w, then (17) implies that for all \(\varepsilon >0\)
for \(n\rightarrow \infty \), with \(g(n^{-1})\rightarrow \infty \) and \(\varOmega _n\) given in (16). Then (18) and (19) together yield \({\mathbb {P}}\left( \alpha _n\ge 1/n\right) \rightarrow 1\) for \(n\rightarrow \infty \) and therefore the result. \(\square \)
4.3 Proof of Corollary 3
Proof (Corollary 3)
Fix \(\varepsilon >0\). Denote by \(\alpha _n\) the output of the discrepancy principle with emergency stop and set
We have
for all \(\alpha >0\). By the triangle inequality,
where \(C^{\prime }\) does not depend on n and where we used \(\alpha _n\le 1\) and (21) in the second step and \(\alpha _n\ge 1/n\) in the fourth. By (20) there holds \(\Vert R_{\alpha _n}{\bar{Y}}_n-K^+{\hat{y}}\Vert \chi _{\varOmega _n} \le \varepsilon \), so
We apply the Cauchy–Schwarz inequality to the second term
and we claim that there is a constant A with \({\mathbb {E}}\Vert R_{\alpha _n}{\bar{Y}}_n-K^+{\hat{y}}\Vert ^4\le A\) for all \(n\in {\mathbb {N}}\).
for some constant B, where we used (21) in the second step. First,
for some constant \(B_1\), where in the fourth step we used that the \(Y_i\) are i.i.d., that \({\mathbb {E}}\left( Y_1-{\hat{y}},u_j\right) =\left( {\mathbb {E}}[Y_1]-{\hat{y}},u_j\right) =0\) and that \({\mathbb {E}}[XY]={\mathbb {E}}[X]{\mathbb {E}}[Y]\) for independent (and integrable) random variables (so the relevant cases are those where either all indices \(i,i^{\prime },l,l^{\prime }\) coincide or they coincide in exactly two pairs). Then we used Jensen’s inequality in the fifth step. Moreover, \({\mathbb {E}}\left[ {\delta _n^{true}}^2/\alpha _n\right] \le n {\mathbb {E}}\left[ {\delta _n^{true}}^2\right] = {\mathbb {E}}\Vert Y_1-{\hat{y}}\Vert ^2=B_2\), so the claim holds for \(A=B(B_1+B_2+1)\). By Theorem 3 it holds that \({\mathbb {P}}\left( \varOmega _n\right) \rightarrow 1\) for \(n\rightarrow \infty \), thus \({\mathbb {P}}\left( \varOmega _n^C\right) \le \varepsilon ^4/A\) for n large enough and
\(\square \)
5 Numerical demonstration
We conclude with some numerical results.
5.1 Differentiation of binary option prices
A natural example arises when the data are acquired by a Monte Carlo simulation; here we consider an example from mathematical finance. The buyer of a binary call option receives a payoff Q after T days if a certain stock price \(S_T\) is then higher than the strike value K; otherwise, the buyer receives nothing. Thus the value V of the binary option depends on the expected evolution of the stock price. We denote by r the risk-free rate, at which we could have invested the buying price of the option until the expiry date T. If we already knew today for sure that the stock price will hit the strike (insider information), we would pay \(V=e^{-rT}Q\) for the binary option (\(e^{-rT}\) is called the discount factor). Otherwise, if we believed that the stock price will hit the strike with probability p, we would pay \(V=e^{-rT}Qp\). In the Black–Scholes model one assumes that the relative change of the stock price in a short time interval is normally distributed, that is
Under this assumption one can show that (see [22])
where \(S_0\) is the initial stock price and \(s \sim {\mathscr {N}}\left( \mu -\sigma ^2/2,\sigma ^2/T\right) \). Under these assumptions one has \(V=e^{-rT}Q\varPhi (d)\), with
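For concreteness, the closed-form value can be sketched as follows. Since the explicit expression for d is not reproduced here, the formula in the sketch is reconstructed from the stated distribution of s, namely \(p={\mathbb {P}}(S_0e^{Ts}\ge K)\), and should be read as an assumption.

```python
import math
from statistics import NormalDist

def binary_call_value(S0, K, T, r, mu, sigma, Q=1.0):
    # p = P(S0 * exp(T*s) >= K) with s ~ N(mu - sigma^2/2, sigma^2/T),
    # which gives d = (log(S0/K) + (mu - sigma^2/2)*T) / (sigma*sqrt(T))
    d = (math.log(S0 / K) + (mu - sigma**2 / 2) * T) / (sigma * math.sqrt(T))
    return math.exp(-r * T) * Q * NormalDist().cdf(d)
```

As expected, the value increases monotonically in \(S_0\) from 0 towards the discounted payoff \(e^{-rT}Q\).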
Ultimately we are interested in the sensitivity of V with respect to the starting stock price \(S_0\), that is \(\partial V(S_0)/\partial S_0\). We formulate this as the inverse problem of differentiation. Set \({\mathscr {X}}={\mathscr {Y}}=L^2([0,1])\) and define
Then our true data is \({\hat{y}}=V=e^{-rT}Q\varPhi (d)\). To demonstrate our results we now approximate \(V: S_0\mapsto e^{-rT}Qp(S_0)\) through a Monte Carlo approach. That is, we generate independent Gaussian random variables \(Z_1,Z_2,\ldots \) identically distributed to s and set \(Y_i:=e^{-rT}Q \chi _{\{S_0e^{TZ_i}\ge K \}}\). Then we have \({\mathbb {E}}Y_i = e^{-rT}Q{\mathbb {P}}(S_0e^{TZ_i}\ge K)=e^{-rT}Qp(S_0)=V(S_0)\) and \({\mathbb {E}}\Vert Y_i \Vert ^2\le e^{-rT}Q<\infty \). We replace \(L^2([0,1])\) with piecewise continuous linear splines on a homogeneous grid with \(m=50,000\) elements (we can calculate Kg exactly for such a spline g). We use in total \(n=10,000\) random variables for each simulation. As parameters we chose \(r=0.0001, T=30, K=0.5, Q=1, \mu = 0.01, \sigma =0.1\). It is easy to see that \({\hat{x}} =K^+{\hat{y}}\in {\mathscr {X}}_{\nu }\) for all \(\nu >0\) using the transformation \(z(\xi )=0.5e^{\sqrt{0.3}\xi -0.15}\). Since the qualification of the Tikhonov regularisation is 2, Theorem 4 gives an error bound which is asymptotically proportional to \(\left( 1/\sqrt{n}\right) ^\frac{1}{2}\). In Fig. 1 we plot the \(L^2\) average of 100 simulations of the discrepancy principle together with the (translated) optimal error bound. In this case the emergency stop did not trigger once; this is plausible, since the true solution is very smooth, which yields comparably higher values of the regularisation parameter, and moreover the error distribution is Gaussian and the problem is only mildly ill-posed.
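The Monte Carlo data generation and averaging can be sketched as follows, evaluated pointwise at a single starting price rather than on the spline grid of the experiment; the seed, the choice \(S_0=0.6\), and the closed-form comparison value are our own illustrative additions.

```python
import math
import numpy as np
from statistics import NormalDist

r, T, K, Q, mu, sigma = 1e-4, 30.0, 0.5, 1.0, 0.01, 0.1
S0, n = 0.6, 10_000
rng = np.random.default_rng(0)

# Z_i ~ N(mu - sigma^2/2, sigma^2/T), identically distributed to s
Z = rng.normal(mu - sigma**2 / 2, sigma / math.sqrt(T), size=n)
# unbiased measurements Y_i = e^{-rT} Q * indicator{S0 exp(T Z_i) >= K}
Y = math.exp(-r * T) * Q * (S0 * np.exp(T * Z) >= K)
Y_bar = Y.mean()  # sample mean, estimates V(S0)

# closed-form value for comparison (d reconstructed from the distribution of s)
d = (math.log(S0 / K) + (mu - sigma**2 / 2) * T) / (sigma * math.sqrt(T))
V = math.exp(-r * T) * Q * NormalDist().cdf(d)
```

Each \(Y_i\) is a Bernoulli-type variable scaled by \(e^{-rT}Q\), so the sample mean \({\bar{Y}}_n\) approaches \(V(S_0)\) at the Monte Carlo rate \(1/\sqrt{n}\).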
Let us stress that this is only an academic example to demonstrate the possibility of using our new generic approach in the context of Monte Carlo simulations. Explicit solution formulas for standard binary options are well known, and for more complex financial derivatives with discontinuous payoff profiles (such as autocallables or CoCo bonds) one would rather resort to stably differentiable Monte Carlo methods [2, 14] or use specific regularization methods for numerical differentiation [18].
5.2 Inverse heat equation
We consider the toy problem ‘heat’ from [19]. We chose the discretisation level \(m=100\) and set \(\sigma =0.7\). Under this choice, the last seven singular values (calculated with the function ‘csvd’) fall below the machine precision of \(10^{-16}\). The discretised large systems of linear equations are solved iteratively using the conjugate gradient method (‘pcg’ from MATLAB) with a tolerance of \(10^{-8}\). As a regularisation method we chose Tikhonov regularisation and we compared the a priori choice \(\alpha _n=1/\sqrt{n}\), the discrepancy principle (dp) and the discrepancy principle with emergency stop (dp+es), as implemented in Algorithm 1 with \(q=0.7\) and estimated sample variance. The unbiased i.i.d. measurements fulfill \(\sqrt{{\mathbb {E}}\Vert Y_i-{\hat{y}}\Vert ^2}\approx 1.16\) and \({\mathbb {E}}\Vert Y_i - {\mathbb {E}}Y_i \Vert ^k=\infty \) for \(k\ge 3\). Concretely, we chose \(Y_i:={\hat{y}}+E_i\) with \(E_i:=U_i*Z_i*v\), where the \(U_i\) are independent and uniformly distributed on \([-1/2,1/2]\), the \(Z_i\) are independent Pareto distributed (MATLAB function ‘gprnd’ with parameters 1/3, 1/2 and 3/2), and v is a uniform permutation of \(1,1/2^\frac{3}{4},\ldots ,1/m^\frac{3}{4}\). Thus we chose a rather ill-posed problem together with a heavy-tailed error distribution. We considered three different sample sizes \(n=10^3,10^4,10^5\) with 200 simulations for each one. The results are presented as boxplots in Fig. 2. It is visible that the results are much more concentrated for the a priori regularisation and the discrepancy principle with emergency stop, indicating \(L^2\) convergence (strictly speaking, we do not know whether the discrepancy principle with emergency stop converges in \(L^2\), since the additional assumption of Corollary 3 is violated here).
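This heavy-tailed error model can be sketched as follows. Reading `gprnd(1/3, 1/2, 3/2)` as shape 1/3, scale 1/2, location 3/2 is our interpretation of MATLAB's parameter order, and the inverse-CDF sampling is a stand-in for that library call; the seed is arbitrary.

```python
import numpy as np

m = 100
rng = np.random.default_rng(1)

# v: a uniform random permutation of 1, 1/2^(3/4), ..., 1/m^(3/4)
v = rng.permutation(np.arange(1, m + 1, dtype=float) ** -0.75)

def error_sample(n):
    # E_i = U_i * Z_i * v with U_i ~ Unif[-1/2, 1/2] and Z_i generalized
    # Pareto with shape 1/3, scale 1/2, location 3/2; shape 1/3 makes all
    # moments of order >= 3 infinite, matching the stated heavy tails
    U = rng.uniform(-0.5, 0.5, size=(n, 1))
    W = rng.uniform(size=(n, 1))
    Z = 3 / 2 + (1 / 2) * (W ** (-1 / 3) - 1) / (1 / 3)  # GPD via inverse CDF
    return U * Z * v  # one row per measurement error E_i

E = error_sample(1000)
```

Since \({\mathbb {E}}U_i=0\) and \(U_i\), \(Z_i\) are independent, the errors are centred, so the measurements \(Y_i={\hat{y}}+E_i\) remain unbiased despite the infinite higher moments.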
Moreover, the statistics of the discrepancy principle with and without emergency stop become more similar with increasing sample size, with the crucial difference that the outliers (by which we denote the red crosses above the blue box, i.e., the cases where the method performed badly) are only present in the case of the discrepancy principle without emergency stop, causing non-convergence in \(L^2\), see Fig. 3. Thus here the discrepancy principle with emergency stop is superior to the discrepancy principle without emergency stop, in particular for large sample sizes. Besides that, the error falls more slowly in the case of the a priori parameter choice. The number of outliers falls with increasing sample size, from 37 for \(n=10^3\) to 18 for \(n=10^5\), indicating the (slow) convergence in probability of the discrepancy principle. Note that \(\delta _n^{true}/\delta _n^{est}\approx 1.9\) (on average) if we only consider the runs yielding outliers. This illustrates that the lack of convergence in \(L^2\) is caused by the occasional underestimation of the data error.
Notes
Also called convergence of the integrated mean squared error or of the root mean squared error.
References
Akiyama, K., Alberdi, A., Alef, W., Asada, K., Azulay, R., Baczko, A.K., Ball, D., Baloković, M., Barrett, J., Bintley, D., et al.: First M87 event horizon telescope results. III. Data processing and calibration. Astrophys. J. Lett. 875(1), L3 (2019)
Alm, T., Harrach, B., Harrach, D., Keller, M.: A Monte Carlo pricing algorithm for autocallables that allows for stable differentiation. J. Comput. Finance 17(1), 43–70 (2013)
Bakushinskiĭ, A.: Remarks on the choice of regularization parameter from quasioptimality and relation tests. Zh. Vychisl. Mat. i Mat. Fiz. 24(8), 1258–1259 (1984)
Bauer, F., Reiß, M.: Regularization independent of the noise level: an analysis of quasi-optimality. Inverse Probl. 24(5), 055009 (2008)
Becker, S.: Regularization of statistical inverse problems and the Bakushinskiĭ veto. Inverse Probl. 27(11), 115010 (2011)
Bissantz, N., Hohage, T., Munk, A., Ruymgaart, F.: Convergence rates of general regularization methods for statistical inverse problems and applications. SIAM J. Numer. Anal. 45(6), 2610–2636 (2007)
Blanchard, G., Hoffmann, M., Reiß, M.: Optimal adaptation for early stopping in statistical inverse problems. SIAM/ASA J. Uncertain. Quantif. 6(3), 1043–1075 (2018)
Blanchard, G., Mathé, P.: Discrepancy principle for statistical inverse problems with application to conjugate gradient iteration. Inverse Probl. 28(11), 115011 (2012)
Buades, T., Lou, Y., Morel, J.M., Tang, Z.: A note on multi-image denoising. In: 2009 International Workshop on Local and Non-local Approximation in Image Processing, pp. 1–15. IEEE (2009)
Cavalier, L.: Inverse problems in statistics. In: Alquier, P., Gautier, E., Stoltz, G. (eds.) Inverse Problems and High-Dimensional Estimation, pp. 3–96. Springer, Berlin (2011)
Cavalier, L., Golubev, Y., et al.: Risk hull method and regularization by projections of ill-posed inverse problems. Ann. Stat. 34(4), 1653–1677 (2006)
Engl, H.W., Hanke, M., Neubauer, A.: Regularization of Inverse Problems, vol. 375. Springer, Berlin (1996)
Garcia, E.S., Sandwell, D.T., Smith, W.H.: Retracking CryoSat-2, Envisat and Jason-1 radar altimetry waveforms for improved gravity field recovery. Geophys. J. Int. 196(3), 1402–1422 (2014)
Gerstner, T., Harrach, B., Roth, D.: Monte Carlo pathwise sensitivities for barrier options. J. Comput. Finance, accepted for publication (2018)
Gerth, D., Hofinger, A., Ramlau, R.: On the lifting of deterministic convergence rates for inverse problems with stochastic noise. Inverse Probl. Imaging 11(4), 663–687 (2017)
Ghosal, S., Van der Vaart, A.: Fundamentals of Nonparametric Bayesian Inference, vol. 44. Cambridge University Press, Cambridge (2017)
Giné, E., Nickl, R.: Mathematical Foundations of Infinite-Dimensional Statistical Models, vol. 40. Cambridge University Press, Cambridge (2016)
Hanke, M., Scherzer, O.: Inverse problems light: numerical differentiation. Am. Math. Mon. 108(6), 512–521 (2001)
Hansen, P.C.: Discrete Inverse Problems: Insight and Algorithms, vol. 7. SIAM, Philadelphia (2010)
Hassan, U., Anwar, M.S.: Reducing noise by repetition: introduction to signal averaging. Eur. J. Phys. 31(3), 453 (2010)
Hofinger, A.: Ill-Posed Problems: Extending the Deterministic Theory to a Stochastic Setup. Trauner, New York (2006)
Hull, J.C., Basu, S.: Options, Futures, and Other Derivatives. Pearson Education India, Chennai (2016)
Kaipio, J., Somersalo, E.: Statistical and Computational Inverse Problems, vol. 160. Springer, Berlin (2006)
Ledoux, M., Talagrand, M.: Probability in Banach Spaces: Isoperimetry and Processes, vol. 23. Springer, Berlin (1991)
Lu, S., Mathé, P.: Discrepancy based model selection in statistical inverse problems. J. Complex. 30(3), 290–308 (2014)
Lucka, F., Proksch, K., Brune, C., Bissantz, N., Burger, M., Dette, H., Wübbeling, F.: Risk estimators for choosing regularization parameters in ill-posed problems-properties and limitations. Inverse Probl. Imaging 12(5), 1121–1155 (2018)
Lyons, R.G.: Understanding Digital Signal Processing, 3/E. Pearson Education India, Chennai (2004)
Mackay, C.D., Baldwin, J., Law, N., Warner, P.: High-resolution imaging in the visible from the ground without adaptive optics: new techniques and results. In: Ground-Based Instrumentation for Astronomy, vol. 5492, pp. 128–136. International Society for Optics and Photonics (2004)
Mathé, P., Pereverzev, S.V.: Geometry of linear ill-posed problems in variable Hilbert scales. Inverse Probl. 19(3), 789 (2003)
Morozov, V.A.: The error principle in the solution of operational equations by the regularization method. Zhurnal Vychislitel’noi Matematiki i Matematicheskoi Fiziki 8(2), 295–309 (1968)
Nakamura, G., Potthast, R.: Inverse Modeling, pp. 2053–2563. IOP Publishing, Bristol (2015). https://doi.org/10.1088/978-0-7503-1218-9
Rieder, A.: Keine Probleme mit inversen Problemen: eine Einführung in ihre stabile Lösung. Springer, Berlin (2013)
Tikhonov, A., Arsenin, V.Y.: Methods for Solving Ill-Posed Problems. Wiley, New York (1977)
Wahba, G.: Practical approximate solutions to linear operator equations when the data are noisy. SIAM J. Numer. Anal. 14(4), 651–667 (1977)
Werner, F.: Adaptivity and Oracle inequalities in linear statistical inverse problems: a (numerical) survey. In: Hofmann, B., Leitão, A., Zubelli, J.P. (eds.) New Trends in Parameter Identification for Mathematical Models, pp. 291–316. Springer, Berlin (2018)
Acknowledgements
Open Access funding provided by Projekt DEAL.
Cite this article
Harrach, B., Jahn, T. & Potthast, R. Beyond the Bakushinskii veto: regularising linear inverse problems without knowing the noise distribution. Numer. Math. 145, 581–603 (2020). https://doi.org/10.1007/s00211-020-01122-2