Bayes procedures for adaptive inference in inverse problems for the white noise model

We study empirical and hierarchical Bayes approaches to the problem of estimating an infinite-dimensional parameter in mildly ill-posed inverse problems. We consider a class of prior distributions indexed by a hyperparameter that quantifies regularity. We prove that both methods we consider succeed in automatically selecting this parameter optimally, resulting in optimal convergence rates for truths with Sobolev or analytic "smoothness", without using knowledge about this regularity. Both methods are illustrated by simulation examples.


Introduction
In recent years, Bayesian approaches have become more and more common in dealing with nonparametric statistical inverse problems. Such problems arise in many fields of applied science, including geophysics, genomics, medical image analysis and astronomy, to mention but a few. In nonparametric inverse problems some form of regularization is usually needed in order to estimate the (typically functional) parameter of interest. One possible explanation of the increasing popularity of Bayesian methods is the fact that assigning a prior distribution to an unknown functional parameter is a natural way of specifying a degree of regularization. Probably at least as important is the fact that various computational methods exist to carry out the inference in practice, including MCMC methods and approximate methods like expectation propagation, Laplace approximations and approximate Bayesian computation. A third important aspect that appeals to users of Bayes methods is that an implementation of a Bayesian procedure typically produces not only an estimate of the unknown quantity of interest (usually a posterior mean or mode), but also a large number of samples from the whole posterior distribution. These can then be used to report a credible set, i.e. a set of parameter values that receives a large fixed fraction of the posterior mass, that serves as a quantification of the uncertainty in the estimate. Some examples of papers using Bayesian methods in nonparametric inverse problems in various applied settings include [3,16,24,27,28]. The paper [34] provides a nice overview and many additional references.
Work on the fundamental properties of Bayes procedures for nonparametric inverse problems, like consistency, (optimal) convergence rates, etcetera, has only started to appear recently. The few papers in this area include [1,14,22,23,30]. Other papers addressing frequentist properties of Bayes procedures for different, but related inverse problems include [21] and [15]. This is in sharp contrast with the work on frequentist methodology, which is quite well developed. See for instance the overviews given by Cavalier [8,9].
Our focus in this paper is on the ability of Bayesian methods to achieve adaptive, rate-optimal inference in so-called mildly ill-posed nonparametric inverse problems (in the terminology of, e.g., [8]). Nonparametric priors typically involve one or more tuning parameters, or hyper-parameters, that determine the degree of regularization. In practice there is widespread use of empirical Bayes and full, hierarchical Bayes methods to automatically select the appropriate values of such parameters. These methods are generally considered to be preferable to methods that use only a single, fixed value of the hyper-parameters. In the inverse problem setting it is known from the recent paper [22] that using a fixed prior can indeed be undesirable, since it can lead to convergence rates that are sub-optimal, unless by chance the statistician has selected a prior that captures the fine properties of the unknown parameter (like its degree of smoothness, if it is a function). Theoretical work that supports the preference for empirical or hierarchical Bayes methods does not exist at the present time however. It has until now been unknown whether these approaches can indeed robustify a procedure against prior mismatch. In this paper we answer this question in the affirmative. We show that empirical and hierarchical Bayes methods can lead to adaptive, rate-optimal procedures in the context of nonparametric inverse problems, provided they are properly constructed.
We study this problem in the context of the canonical signal-in-white-noise model, or, equivalently, the infinite-dimensional normal mean model. Using singular value decompositions many nonparametric, linear inverse problems can be cast in this form (e.g. [9,22]). Specifically, we assume that we observe a sequence of noisy coefficients Y = (Y_1, Y_2, ...) satisfying

Y_i = κ_i μ_i + (1/√n) Z_i,  i = 1, 2, ...,   (1.1)

where Z_1, Z_2, ... are independent, standard normal random variables, μ = (μ_1, μ_2, ...) ∈ ℓ² is the infinite-dimensional parameter of interest, and (κ_i) is a known sequence that may converge to 0 as i → ∞, which complicates the inference. We suppose the problem is mildly ill-posed of order p ≥ 0, in the sense that

C^{-1} i^{-p} ≤ κ_i ≤ C i^{-p},  i = 1, 2, ...,   (1.2)

for some C ≥ 1. Minimax lower bounds for the rate of convergence of estimators for μ are well known in this setting. For instance, the lower bound over Sobolev balls of regularity β > 0 is given by n^{-β/(1+2β+2p)} and over certain "analytic balls" the lower bound is of the order n^{-1/2} (log n)^{1/2+p} (see [8]). There are several regularization methods which attain these rates, including classical Tikhonov regularization and Bayes procedures with Gaussian priors. Many of the older existing methods for nonparametric inverse problems are not adaptive, in the sense that they rely on knowledge of the regularity (e.g. in Sobolev sense) of the unknown parameter of interest to select the appropriate regularization. This also holds for the Bayesian approach with fixed Gaussian priors. Early papers on the direct problem, i.e. the case p = 0 in (1.2), include [33,41]. The more recent papers [22] and [1] study the inverse problem case, but also obtain non-adaptive results only. In the last decade however, several methods have been developed in the frequentist literature that achieve the minimax convergence rate without knowledge of the regularity of the truth.
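To make the sequence model concrete, the observation scheme (1.1) can be simulated as follows. This is a minimal sketch of our own (the helper name, the truncation of the sequence at a finite length, and the exact choice κ_i = i^{-p} within the band (1.2) are convenience assumptions, not prescribed by the text):

```python
import numpy as np

def simulate_sequence_model(mu, n, p, rng=None):
    """Draw Y_i = kappa_i * mu_i + Z_i / sqrt(n) for i = 1..len(mu),
    with kappa_i = i^{-p}, the exactly mildly ill-posed case of (1.2)."""
    rng = np.random.default_rng(rng)
    i = np.arange(1, len(mu) + 1.0)
    kappa = i ** (-float(p))
    return kappa * mu + rng.standard_normal(len(mu)) / np.sqrt(n)

# a truth with Sobolev regularity essentially 1 (used again in Sect. 3)
i = np.arange(1, 1001.0)
mu0 = i ** (-1.5) * np.sin(i)
Y = simulate_sequence_model(mu0, n=1e7, p=1, rng=0)
```

Note that the noise level 1/√n decreases with n, so n plays the role of a signal-to-noise ratio rather than a sample size.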
This development parallels the earlier work on adaptive methods for the direct nonparametric problem to some extent, although the inverse case is technically usually more demanding. The adaptive methods typically involve a data-driven choice of a tuning parameter in order to automatically achieve an optimal bias-variance trade-off, as in Lepski's method for instance.
For nonparametric inverse problems, the construction of an adaptive estimator based on a properly penalized blockwise Stein's rule has been studied in [12], cf. also [6]. This estimator is adaptive both over Sobolev and analytic scales. In [10] the data-driven choice of the regularizing parameters is based on unbiased risk estimation. The authors consider projection estimators and derive the corresponding oracle inequalities. For μ in the Sobolev scale they obtain asymptotically sharp adaptation in a minimax sense, whereas for μ in analytic scale, their rate is optimal up to a logarithmic term. Yet another approach to adaptation in inverse problems is the risk hull method studied in [11]. In this paper the authors consider spectral cut-off estimators and provide oracle inequalities. An extension of their approach is presented in [25]. The link between the penalized blockwise Stein's rule and the risk hull method is presented in [26].
Adaptation properties of Bayes procedures for mildly ill-posed nonparametric inverse problems have until now not been studied in the literature, with the exception of [15], which considers a different setting. Results in our setting are only available for the direct problem, i.e. the case that κ_i = 1 for every i, or, equivalently, p = 0 in (1.2). In the paper [5] it is shown that in this case adaptive Bayesian inference is possible using a hierarchical, conditionally Gaussian prior, while in [35] partial adaptation is shown using Gaussian priors with a scale parameter determined by an empirical Bayes method. Other recent papers also exhibit priors that yield rate-adaptive procedures in the direct signal-in-white-noise problem (see for instance [2,13,32,38]), but it is important to note that these papers use general theorems on contraction rates for posterior distributions (as given in [18] for instance) that are not suitable to deal with the truly ill-posed case in which κ_i → 0 as i → ∞. The reason is that if these general theorems are applied in the inverse case, we only obtain convergence rates relative to the (squared) norm μ ↦ Σ_i κ_i² μ_i², which is not very interesting. Obtaining rates relative to the ℓ²-norm is much more involved and requires a different approach. Extending the testing approach of [17,18] would be one possibility, cf. the recent work of [30], although it seems difficult to obtain sharp results in this manner. In this paper we follow a more pragmatic approach, relying on partly explicit computations in a relatively tractable setting.
To obtain rate-adaptive Bayes procedures for the model (1.1) we consider a family (Π α : α > 0) of Gaussian priors for the parameter μ. These priors are indexed by a parameter α > 0 which quantifies the "regularity" of the prior Π α (details in Sect. 2). Instead of choosing a fixed value for α (which is the approach studied in [22]) we view it as a tuning-, or hyper-parameter and consider two different methods for selecting it in a data-driven manner. The approach typically preferred by Bayesian statisticians is to endow the hyper-parameter with a prior distribution itself. This results in a full, hierarchical Bayes procedure. The paper [5] follows the same approach in the direct problem. We prove that under a mild assumption on the hyper-prior on α, we obtain an adaptive procedure for the inverse problem using the hierarchical prior. Optimal convergence rates are obtained (up to lower order factors), uniformly over Sobolev and analytic scales. For tractability, the priors Π α that we use put independent, Gaussian prior weights on the coefficients μ i in (1.1). Extensions to more general priors, including non-Gaussian densities or priors that are not exactly diagonal (as in [30] for instance) should be possible, but would require considerable additional technical work.
A second approach we study consists in first "estimating" α from the data and then substituting the estimator α̂_n for α in the posterior distribution for μ corresponding to the prior Π_α. This empirical Bayes procedure is not really Bayesian in the strict sense of the word. However, for computational reasons empirical Bayes methods of this type are widely used in practice, making it relevant to study their theoretical performance. Rigorous results about the asymptotic behavior of empirical Bayes selectors of hyper-parameters in infinite-dimensional problems only exist for a limited number of special problems, see e.g. [4,19,20,35,40]. In this paper we prove that the likelihood-based empirical Bayes method that we propose has the same desirable adaptation and rate-optimality properties in nonparametric inverse problems as the hierarchical Bayes approach.
The estimator α̂_n for α that we propose is the commonly used likelihood-based empirical Bayes estimator for the hyper-parameter. Concretely, it is the maximum likelihood estimator for α in the model in which the data Y is generated by first drawing μ from Π_α and then generating Y according to (1.1). A crucial element in the proof of the adaptation properties of both procedures we consider is understanding the asymptotic behavior of α̂_n. In contrast to the typical situation in parametric models (see [29]) this turns out to be rather delicate, since the likelihood for α can have complicated behavior. We are able however to derive deterministic asymptotic lower and upper bounds for α̂_n. In general these depend on the true parameter μ_0 in a complicated way. It appears that in general the difference between these bounds does not become asymptotically negligible, but it can be shown that any value between the bounds gives the correct bias-variance trade-off for the class containing the particular μ_0, whence adaptive minimaxity arises.
In the special case that the true parameter has regular behavior of the form μ_{0,i} ≍ i^{-1/2-β} for some β > 0, both bounds tend to β and hence α̂_n is essentially a consistent estimator for β (see Lemma 1). This means that in this case the estimator α̂_n correctly "estimates the regularity" of the true parameter (see [4] for work in a similar direction). Since the typical models used to define "minimax adaptation" only impose upper bounds on the parameters (e.g. μ_{0,i} ≲ i^{-1/2-β} or an integrated version of this), in general the "regularity" of a parameter is an ill-defined concept. The value α̂_n may then have complicated behavior, but it still gives minimaxity over the class.
Our priors Π_α model the coordinates μ_i as independent N(0, i^{-1-2α}) variables. This is flexible enough to adapt to the full scale of Sobolev spaces, and also to models of supersmooth parameters (up to logarithmic factors). In [35] it was shown (only for the direct problem) that priors of the form N(0, τ² i^{-1-2α}) with a fixed exponent α and a data-driven scale τ achieve adaptive minimaxity over Sobolev classes only in a limited range, dependent on α.
The remainder of the paper is organized as follows. In Sect. 2 we first describe the empirical and hierarchical Bayes procedures in detail. Then we present a theorem on the asymptotic behavior of the estimator α̂_n for the hyper-parameter, followed by two results on the adaptation and rate of contraction of the empirical and hierarchical Bayes posteriors over Sobolev and analytic scales. These results all concern global ℓ²-loss. In Sect. 2.3 we briefly comment on rates relative to other losses. Specifically we discuss contraction rates of marginal posteriors for linear functionals of the parameter μ. We conjecture that the procedures that we prove to be adaptive and rate-optimal for global ℓ²-loss can be sub-optimal for estimating linear functionals. The empirical and hierarchical Bayes approaches are illustrated numerically in Sect. 3. We apply them to simulated data from an inverse signal-in-white-noise problem, where the problem is to recover a signal from a noisy observation of its primitive, and also to another example with a smaller degree of ill-posedness. Proofs of the main results are presented in Sects. 4-7. Some auxiliary lemmas are collected in Sect. 8.
For two sequences (a_n) and (b_n) of numbers, a_n ≍ b_n means that |a_n/b_n| is bounded away from zero and infinity as n → ∞, a_n ≲ b_n means that a_n/b_n is bounded, a_n ∼ b_n means that a_n/b_n → 1 as n → ∞, and a_n ≪ b_n means that a_n/b_n → 0 as n → ∞. For two real numbers a and b, we denote by a ∨ b their maximum, and by a ∧ b their minimum.
For α > 0, consider the product prior Π_α on ℓ² given by

Π_α = ⨂_{i=1}^∞ N(0, i^{-1-2α}).   (2.1)

It is easy to see that this prior is "α-regular", in the sense that for every α′ < α, it assigns mass 1 to the Sobolev space S^{α′}. In [22] it was proved that if for the true parameter μ_0 we have μ_0 ∈ S^β for β > 0, then the posterior distribution corresponding to the Gaussian prior Π_α contracts around μ_0 at the optimal rate n^{-β/(1+2β+2p)} if α = β. If α ≠ β, only sub-optimal rates are attained in general (cf. [7]). In other words, when using a Gaussian prior with a fixed regularity, optimal convergence rates are obtained if and only if the regularity of the prior and the truth are matched. Since the latter is unknown however, choosing the prior that is optimal from the point of view of convergence rates is typically not possible in practice.
However, the results in [22] indicate that a regular enough prior (β ≤ 1 + 2α + 2p) can be appropriately scaled to attain the optimal rate. This observation, in the direct case p = 0, led to the study of a data-driven selection of the scaling parameter τ_n in [35], with priors of the form N(0, τ² i^{-1-2α}). Already in the direct case (p = 0), the performance of the empirical Bayes procedure cuts the range β ≤ 1 + 2α, where the optimal deterministic scaling is possible, into two subregimes. If β < 1/2 + α, empirical Bayes leads to the optimal rate. Otherwise, that is, when 1/2 + α ≤ β ≤ 1 + 2α, the performance of the empirical Bayes procedure is strictly worse than that of the optimal procedure. Therefore, the procedure is suboptimal not only over a wide range of Sobolev classes, but also over certain "analytic balls", e.g. A^γ for all γ > 0. The same conclusions hold for the hierarchical Bayes procedure.
Therefore, in this paper we fix τ ≡ 1 and consider two data-driven methods for selecting the regularity α of the prior.
The first is a likelihood-based empirical Bayes method, which attempts to estimate the appropriate value of the hyper-parameter α from the data. In the Bayesian setting described by the conditional distributions (1.3), it holds that, given α, the coefficients Y_i are independent, with Y_i ∼ N(0, κ_i² i^{-1-2α} + 1/n). The corresponding log-likelihood for α (relative to an infinite product of N(0, 1/n)-distributions) is easily seen to be given by

ℓ_n(α) = -(1/2) ∑_{i=1}^∞ ( log(1 + n κ_i² i^{-1-2α}) - (n² κ_i² i^{-1-2α})/(1 + n κ_i² i^{-1-2α}) Y_i² ).   (2.2)

The idea is to "estimate" α by the maximizer of ℓ_n. The results ahead (Lemma 1 and Theorem 1) imply that with P_0-probability tending to one, ℓ_n has a global maximum on [0, log n) if μ_{0,i} ≠ 0 for some i ≥ 2. (In fact, the cited results imply the maximum is attained on the slightly smaller interval [0, (log n)/(2 log 2) - 1/2 - p].) If the latter condition is not satisfied (if μ_0 = 0 for instance), ℓ_n may attain its maximum only at ∞. Therefore, we truncate the maximizer at log n and define

α̂_n = argmax_{α ∈ [0, log n]} ℓ_n(α).

The continuity of ℓ_n ensures the argmax exists. If it is not unique, any value may be chosen. We will always assume at least that μ_0 has Sobolev regularity of some order β > 0. Lemma 1 and Theorem 1 imply that in this case α̂_n > 0 with probability tending to 1. An alternative to the truncation of the argmax of ℓ_n at log n could be to extend the definition of the priors Π_α to include the case α = ∞. The prior Π_∞ should then be defined as the product N(0, 1) ⊗ δ_0 ⊗ δ_0 ⊗ ..., with δ_0 the Dirac measure concentrated at 0. However, from a practical perspective it is more convenient to define α̂_n as above.
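As a sketch (our own code, not the paper's implementation), the marginal log-likelihood and the truncated maximizer can be computed as follows; we take κ_i = i^{-p}, truncate the series at the length of the data vector, and use a plain grid search over [0, log n] as a convenience assumption:

```python
import numpy as np

def log_lik(alpha, Y, n, p):
    """ell_n(alpha) up to truncation: under Pi_alpha the coefficients are
    independent with Y_i ~ N(0, kappa_i^2 i^{-1-2 alpha} + 1/n); the
    log-likelihood is taken relative to a product of N(0, 1/n) laws."""
    i = np.arange(1, len(Y) + 1.0)
    t = n * i ** (-2.0 * p) * i ** (-1.0 - 2.0 * alpha)  # n * kappa_i^2 * i^{-1-2a}
    return -0.5 * np.sum(np.log1p(t) - n * t / (1.0 + t) * Y ** 2)

def alpha_hat(Y, n, p, grid_size=2000):
    """Likelihood-based empirical Bayes estimate: maximizer of ell_n over
    [0, log n], here located by a simple grid search."""
    grid = np.linspace(0.0, np.log(n), grid_size)
    vals = [log_lik(a, Y, n, p) for a in grid]
    return grid[int(np.argmax(vals))]
```

For μ_0 = 0 the criterion is increasing in α, so the grid search returns the truncation point log n, in line with the discussion above.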
The empirical Bayes procedure consists in computing the posterior distribution of μ corresponding to a fixed prior Π_α and then substituting α̂_n for α. Under the model described above and the prior (2.1) the coordinates (μ_i, Y_i) of the vector (μ, Y) are independent, and hence the conditional distribution of μ given Y factorizes over the coordinates as well. The computation of the posterior distribution reduces to countably many posterior computations in conjugate normal models. Therefore (see also [22]) the posterior distribution corresponding to the prior Π_α is given by

Π_α(· | Y) = ⨂_{i=1}^∞ N( (n κ_i i^{-1-2α})/(1 + n κ_i² i^{-1-2α}) Y_i , (i^{-1-2α})/(1 + n κ_i² i^{-1-2α}) ).   (2.3)

Then the empirical Bayes posterior is the random measure Π_{α̂_n}(· | Y) defined by

Π_{α̂_n}(B | Y) = Π_α(B | Y) |_{α = α̂_n}   (2.4)

for measurable subsets B ⊂ ℓ². Note that the construction of the empirical Bayes posterior does not use information about the regularity of the true parameter. In Theorem 2 below we prove that it contracts around the truth at an optimal rate (up to lower order factors), uniformly over Sobolev and analytic scales.

The second method we consider is a full, hierarchical Bayes approach where we put a prior distribution on the hyper-parameter α. We use a prior on α with a positive Lebesgue density λ on (0, ∞). The full, hierarchical prior for μ is then given by

Π = ∫_0^∞ λ(α) Π_α dα.   (2.5)

In Theorem 3 below we prove that under mild assumptions on the prior density λ, the corresponding posterior distribution Π(· | Y) has the same desirable asymptotic properties as the empirical Bayes posterior (2.4).
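Because the posterior factorizes over coordinates, its moments follow from the standard one-dimensional conjugate normal computation. A sketch (function name ours, with κ_i = i^{-p}):

```python
import numpy as np

def posterior_moments(alpha, Y, n, p):
    """Prior mu_i ~ N(0, i^{-1-2 alpha}); likelihood Y_i | mu_i ~ N(kappa_i mu_i, 1/n)
    with kappa_i = i^{-p}.  Returns coordinatewise posterior means and variances:
      mean_i = n kappa_i lam_i / (1 + n kappa_i^2 lam_i) * Y_i,
      var_i  = lam_i / (1 + n kappa_i^2 lam_i),  lam_i = i^{-1-2 alpha}."""
    i = np.arange(1, len(Y) + 1.0)
    kappa = i ** (-float(p))
    lam = i ** (-1.0 - 2.0 * alpha)          # prior variances
    denom = 1.0 + n * kappa ** 2 * lam
    return n * kappa * lam / denom * Y, lam / denom
```

Substituting α = α̂_n in these moments gives the (Gaussian) empirical Bayes posterior of each coordinate.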

Adaptation and contraction rates for the full parameter
Understanding of the asymptotic behavior of the maximum likelihood estimator α̂_n is a crucial element in our proofs of the contraction rate results for the empirical and hierarchical Bayes procedures. The estimator somehow "estimates" the regularity of the true parameter μ_0, but in a rather indirect and involved manner in general. Our first theorem gives deterministic upper and lower bounds for α̂_n, whose construction involves the function h_n defined in (2.6). For positive constants 0 < l < L we define the lower and upper bounds α̲_n and ᾱ_n by (2.7) and (2.8), where the infimum of the empty set is taken to be ∞.
One can see that the function h_n, and hence the lower and upper bounds α̲_n and ᾱ_n, depend on the true μ_0. We show in Theorem 1 that the maximum likelihood estimator α̂_n lies between these bounds with probability tending to one. In general the true μ_0 can have very complicated tail behavior, which makes it difficult to understand the behavior of the upper and lower bounds. If μ_0 has regular tails however, we can get some insight into the nature of the bounds. We have the following lemma, proved in Sect. 4.
We note that items (i) and (iii) of the lemma imply that if μ_{0,i} ≍ i^{-1/2-β}, then the interval [α̲_n, ᾱ_n] concentrates around the value β asymptotically. In combination with Theorem 1 this shows that, at least in this regular case, α̂_n correctly estimates the regularity of the truth. A parameter μ_0 in an analytic class A^γ could be viewed as being infinitely regular. By item (ii) of the lemma, which shows that α̲_n → ∞ in this case, the procedure correctly detects this infinite regularity (although of course it does not reveal the value of γ).
Item (iv) implies that if μ_{0,i} ≠ 0 for some i ≥ 2, then ᾱ_n < log n < ∞ for large n. Conversely, the definitions of h_n and ᾱ_n show that if μ_{0,i} = 0 for all i ≥ 2, then h_n ≡ 0 and hence ᾱ_n = ∞. This justifies the choice of the truncated α̂_n in the definition of the empirical Bayes posterior.
The following theorem asserts that the point(s) where ℓ_n is maximal is (are) asymptotically between the bounds just defined, uniformly over Sobolev and analytic scales. The proof is given in Sect. 5.

Theorem 1
For every R > 0 the constants l and L in (2.7) and (2.8) can be chosen such that α̂_n ∈ [α̲_n, ᾱ_n] with P_0-probability tending to one, uniformly over Sobolev and analytic balls of radius R.

With the help of Theorem 1 we can prove the following theorem, which states that the empirical Bayes posterior distribution (2.4) achieves optimal minimax contraction rates up to a slowly varying factor, uniformly over Sobolev and analytic scales. Careful inspection of the proof of Theorem 1 indicates that α̂_n is contained with probability tending to 1 in a slightly smaller interval, obtained by raising or lowering the bounds by a suitable multiple of 1/log n, but this does not help to improve the main results of the paper presented below. We also note that posterior contraction at a rate ε_n implies the existence of estimators, based on the posterior, that converge at the same rate. See for instance the construction in Sect. 4 of [5].
Theorem 2 states that the empirical Bayes posterior contracts around the truth, uniformly over Sobolev balls of regularity β and over analytic balls, at the rates L_n n^{-β/(1+2β+2p)} and L_n n^{-1/2} (log n)^{1/2+p} respectively, where (L_n) is a slowly varying sequence.
So indeed we see that both in the Sobolev and analytic cases, we obtain the optimal minimax rates up to a slowly varying factor. The proofs of the statements (given in Sect. 6) show that in the first case we can take L_n = (log n)² (log log n)^{1/2} and in the second case L_n = (log n)^{(1/2+p)√(log n)/2 + 1 - p} (log log n)^{1/2}. These sequences converge to infinity but they are slowly varying, hence they converge slower than any power of n.
The full Bayes procedure using the hierarchical prior (2.5) achieves the same results as the empirical Bayes method, under mild assumptions on the prior density λ for α.

Assumption 1 Assume that for every c
One can see that many distributions satisfy this assumption, for instance the exponential, gamma and inverse gamma distributions. Careful inspection of the proof of the following theorem, given in Sect. 7, can lead to weaker assumptions, although these would be less attractive to formulate. Recall the notation Π( · |Y) for the posterior corresponding to the hierarchical prior (2.5).
Theorem 3 Suppose the prior density λ satisfies Assumption 1. Then for every β, γ, R > 0 and M_n → ∞ the assertions of Theorem 2 hold with the empirical Bayes posterior replaced by the hierarchical Bayes posterior Π( · |Y).

The hierarchical Bayes method thus yields exactly the same rates as the empirical method, and therefore the interpretation of this theorem is the same as before. We note that already in the direct case p = 0 this theorem is an interesting extension of the existing results of [5]. In particular we find that using hierarchical Bayes we can adapt to a continuous range of Sobolev regularities while incurring only a logarithmic correction of the optimal rate.

Discussion on linear functionals
It is known already in the non-adaptive situation that for attaining optimal rates relative to losses other than the ℓ²-norm, it may be necessary to set the hyper-parameter to a value different from the optimal choice for ℓ²-recovery of the full parameter μ. If we are for instance interested in optimal estimation of the (possibly unbounded) linear functional

Lμ = ∑_{i=1}^∞ l_i μ_i,   (2.9)

where l_i ≍ i^{-q-1/2} for some q < p, then if μ_0 ∈ S^β for β > -q the optimal Gaussian prior (2.1) is not Π_β, but rather Π_{β-1/2}. The resulting, optimal rate is of the order n^{-(β+q)/(2β+2p)} (see [22], Sect. 5). An example of this phenomenon occurs when considering global L²-loss estimation of a function versus pointwise estimation. If for instance the μ_i are the Fourier coefficients of a smooth function of interest f ∈ L²[0, 1] relative to the standard Fourier basis (e_i) and, for a fixed t ∈ [0, 1], l_i = e_i(t), then estimating μ relative to ℓ²-loss corresponds to estimating f relative to L²-loss, and estimating the functional Lμ in (2.9) corresponds to pointwise estimation of f at the point t (in this case q = -1/2). Theorems 2 and 3 show that the empirical and hierarchical Bayes procedures automatically achieve a bias-variance-posterior spread trade-off that is optimal for the recovery of the full parameter μ_0 relative to the global ℓ²-norm. As conjectured in a similar setting in [22], this suggests that the adaptive approaches might be sub-optimal outside the ℓ²-setting. In view of the findings in the non-adaptive case we might expect however that we can slightly alter the procedures to deal with linear functionals. For instance, it is natural to expect that for the linear functional (2.9), the empirical Bayes posterior Π_{α̂_n - 1/2}(·|Y) yields optimal rates.
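Since the posteriors here are product Gaussians, the induced marginal posterior of a linear functional Lμ = Σ l_i μ_i is again Gaussian, which is how such marginals are computed in practice. A small sketch under that product-Gaussian assumption (function name ours):

```python
import numpy as np

def functional_posterior(l, mean, var):
    """If mu_i ~ N(mean_i, var_i) independently under the posterior, then
    L mu = sum_i l_i mu_i is N(sum_i l_i mean_i, sum_i l_i^2 var_i)."""
    l, mean, var = map(np.asarray, (l, mean, var))
    return float(l @ mean), float(l ** 2 @ var)

# example coefficients for pointwise evaluation at t (here q = -1/2):
t = 0.3
i = np.arange(1, 101.0)
l = np.sqrt(2) * np.cos((i - 0.5) * np.pi * t)   # l_i = e_i(t)
```

The example coefficients correspond to the pointwise-evaluation functional discussed above; feeding in the coordinatewise posterior moments gives the plug-in marginal for f(t).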
Matters seem to be more delicate however. A combination of elements of the proof of Theorem 5.1 of [22] and new results on the coverage of credible sets from the paper [36] lead us to conjecture that for linear functionals L with coefficients l_i ≍ i^{-q-1/2} for some q < p and β > -q, there exists a μ_0 ∈ S^β for which, along a subsequence n_j, the posterior for Lμ contracts at a rate strictly slower than the minimax rate n^{-(β+q)/(2β+2p)} for S^β. This means that there exist "bad truths" for which the adjusted empirical Bayes procedure does not concentrate at the optimal rate along a subsequence. For linear functionals (2.9) the empirical Bayes posterior Π_{α̂_n - 1/2}(·|Y) seems only to contract at an optimal rate for "sufficiently nice" truths, for instance of the form μ_{0,i} ≍ i^{-1/2-β}, or the more general polished-tail sequences considered in [36].
Similar statements are expected to hold for hierarchical Bayes procedures. This adds to the list of remarkable behaviours of marginal posteriors for linear functionals, cf. also [31], for instance. Further research is necessary to shed more light on these matters.

Numerical illustration
Consider the inverse signal-in-white-noise problem in which we observe the process

Y_t = ∫_0^t μ(s) ds + (1/√n) W_t,  t ∈ [0, 1],

with W a standard Brownian motion, and the aim is to recover the function μ. If, slightly abusing notation, we define Y_i = ∫_0^1 e_i(t) dY_t, for e_i the orthonormal basis functions given by e_i(t) = √2 cos((i - 1/2)πt), then it is easily verified that the observations Y_i satisfy (1.1), with κ_i² = ((i - 1/2)²π²)^{-1}, i.e. p = 1 in (1.2), and μ_i the Fourier coefficients of μ relative to the basis e_i.
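In code, the basis and singular values above can be set up as follows (a sketch; `synthesize` is our own helper for evaluating a function from its coefficients):

```python
import numpy as np

def e(i, t):
    """Orthonormal basis e_i(t) = sqrt(2) cos((i - 1/2) pi t) on [0, 1]."""
    return np.sqrt(2) * np.cos((np.asarray(i) - 0.5) * np.pi * np.asarray(t))

def kappa(i):
    """Singular values: kappa_i^2 = ((i - 1/2)^2 pi^2)^{-1}, i.e. p = 1."""
    return 1.0 / ((np.asarray(i) - 0.5) * np.pi)

def synthesize(coef, t):
    """Evaluate sum_i coef_i e_i(t) on a grid of points t."""
    i = np.arange(1, len(coef) + 1.0)
    return e(i[:, None], np.asarray(t)[None, :]).T @ np.asarray(coef)
```

With these helpers, a posterior mean in coefficient space is turned into a curve by calling `synthesize` on a grid of t values.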
We first consider simulated data from this model for μ_0 the function with Fourier coefficients μ_{0,i} = i^{-3/2} sin(i), so we have a truth which essentially has regularity 1. In the following figure we plot the true function μ_0 (black dashed curve) and the empirical Bayes posterior mean (red curve) in the left panels, and the corresponding normalized likelihood exp(ℓ_n)/max(exp(ℓ_n)) in the right panels (we truncated the sum in (2.2) at a high level). Figure 1 shows the results for the empirical Bayes procedure with simulated data for n = 10³, 10⁵, 10⁷, 10⁹, and 10¹¹, from top to bottom. The figure shows that the estimator α̂_n does a good job in this case at estimating the regularity level 1, at least for large enough n. We also see however that due to the ill-posedness of the problem, a large signal-to-noise ratio n is necessary for accurate recovery of the function μ.
We applied the hierarchical Bayes method to the simulated data as well. We chose a standard exponential prior distribution on α, which satisfies Assumption 1. Since the posterior cannot be computed explicitly, we implemented an MCMC algorithm that generates (approximate) draws from the posterior distribution of the pair (α, μ). More precisely, we fixed a large index J ∈ ℕ and defined the vector μ^J = (μ_1, ..., μ_J) consisting of the first J coefficients of μ. (If μ has positive Sobolev regularity, then taking J at least of the order n^{1/(1+2p)} ensures that the approximation error ‖μ^J - μ‖ is of lower order than the estimation rate.) Then we devised a Metropolis-within-Gibbs algorithm for sampling from the posterior distribution of (α, μ^J) (e.g. [37]). The algorithm alternates between draws from the conditional distribution μ^J | α, Y and the conditional distribution α | μ^J, Y. The former is explicitly given by (2.3). To sample from α | μ^J, Y we used a standard Metropolis-Hastings step; the acceptance probability for a move from (α, μ) to (α′, μ) is given by the usual Metropolis-Hastings ratio, where q is the transition kernel of the proposal chain. We used a proposal chain that, if it is currently at location α, moves to a new N(α, σ²)-distributed location provided the latter is positive. We omit further details; the implementation is straightforward.
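A minimal sketch of such a Metropolis-within-Gibbs sampler (our own implementation, not the paper's code): it assumes κ_i = i^{-p}, the standard exponential hyper-prior, and treats a negative proposal for α as an automatic rejection, which is equivalent to extending λ by zero on (-∞, 0]:

```python
import numpy as np

def mwg(Y, n, p, J=100, n_iter=1000, sigma=0.5, rng=None):
    """Alternate (i) an exact draw of mu^J | alpha, Y from the conjugate
    normal posterior and (ii) a random-walk Metropolis step for alpha | mu^J.
    Returns the chain of alpha values."""
    rng = np.random.default_rng(rng)
    i = np.arange(1, J + 1.0)
    kappa = i ** (-float(p))
    YJ = np.asarray(Y)[:J]

    def log_post_alpha(alpha, mu):
        # log lambda(alpha) + log-density of mu^J under Pi_alpha, up to constants;
        # lambda is the standard exponential density (our running example)
        lam = i ** (-1.0 - 2.0 * alpha)
        return -alpha - 0.5 * np.sum(np.log(lam) + mu ** 2 / lam)

    alpha, mu = 1.0, np.zeros(J)
    alphas = np.empty(n_iter)
    for k in range(n_iter):
        # Gibbs step: mu^J | alpha, Y is an explicit product of normals
        lam = i ** (-1.0 - 2.0 * alpha)
        denom = 1.0 + n * kappa ** 2 * lam
        mu = n * kappa * lam / denom * YJ + np.sqrt(lam / denom) * rng.standard_normal(J)
        # Metropolis step: alpha | mu^J (Y enters only through mu here)
        prop = rng.normal(alpha, sigma)
        if prop > 0 and np.log(rng.uniform()) < log_post_alpha(prop, mu) - log_post_alpha(alpha, mu):
            alpha = prop
        alphas[k] = alpha
    return alphas
```

Because the proposal is symmetric, the kernel q cancels from the acceptance ratio, leaving only the ratio of the α-targets evaluated at the current μ^J.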
The results for the hierarchical Bayes procedure are given in Fig. 2. The figure shows the results for simulated data with n = 10³, 10⁵, 10⁷, 10⁹ and 10¹¹, from top to bottom. In each case we see the posterior mean (in blue) and the true curve (black, dashed) on the left and a histogram for the posterior of α on the right. The results are comparable to what we found for the empirical Bayes procedure.
To illustrate the impact of ill-posedness on the quality of the empirical Bayes procedure we also considered simulated data from the model (1.1) with κ_i = i^{-0.1}. Recall that μ_{0,i} = i^{-3/2} sin(i) are the coefficients of μ_0 relative to the basis e_i as before. Figure 3 shows the results for the empirical Bayes procedure with simulated data for n = 10, 10², 10³, 10⁴, and 10⁵, from top to bottom. Again, in the left panels we plot the true function μ_0 (black dashed curve) and the empirical Bayes posterior mean (red curve), and the corresponding normalized likelihood exp(ℓ_n)/max(exp(ℓ_n)) in the right panels. In this case the estimator α̂_n does a good job at estimating the regularity level 1 already for n = 100. Moreover, the true function μ_0 is accurately recovered for moderate values of n.
We also considered simulated data from the original model with κ_i ≍ i^{-1} for μ_0 with Fourier coefficients μ_{0,i} = (-1)^{i+1} exp(-2i). This function μ_0 is essentially of infinite regularity. Figure 4 shows the results of the empirical Bayes procedure with simulated data for n = 10², 10³, 10⁴, 10⁵, and 10⁶, from top to bottom. We can observe that the empirical Bayes posterior mean recovers the function μ_0 well for n = 10⁵ or 10⁶. We also note that the estimated value α̂_n is rather large and unstable. This is not surprising in this case: item (ii) of Lemma 1 shows that the lower bound for α̂_n diverges to infinity. However, large values of α are good enough to capture the infinite regularity of the truth in the empirical Bayes posterior.

Proof of Lemma 1
In the proofs we assume for brevity that we have the exact equality κ_i = i^{-p}. Dealing with the general case (1.2) is straightforward, but makes the proofs somewhat lengthier. Since the function x ↦ x^{-γ} log x is decreasing on [e^{1/γ}, ∞), this is further bounded by 1+2α+2 p log n.
Combining the bounds for the two sums we obtain an upper bound valid for all α > 0. Now suppose that α ≤ β - c_0/log n. Then for n large enough, the power of n on the right-hand side is bounded by Hence, given l > 0, we can choose c_0 so large, depending only on l, β, ∥μ_0∥_β and p, that h_n(α) ≤ l for α ≤ β - c_0/log n.
(ii) We show that in this case we have h_n(α) ≤ l for α ≤ √(log n)/(log log n) and n ≥ n_0, where n_0 depends only on ∥μ_0∥_{A^γ}. Again we give an upper bound for h_n by splitting the sum in its definition into two smaller sums. The sum over indices i > n^{1/(1+2α+2p)} is bounded by Using the fact that for δ > 0 the function x ↦ x^{-δ} e^{-2γx} log x is decreasing on [e^{1/δ}, ∞), we can see that this is further bounded by 1+2α+2 p log n.
The sum over indices i ≤ n^{1/(1+2α+2p)} is bounded by Since the maximum on (0, ∞) of the function x ↦ x^{1+2α} exp(-2γx) equals exp((1+2α)(log((1+2α)/(2γ)) - 1)), we have the subsequent bound Combining the two bounds we find that for all α > 0. It is then easily verified that for the given constant l > 0, we have h_n(α) ≤ l for n ≥ n_0 if α ≤ √(log n)/log log n, where n_0 depends only on ∥μ_0∥_{A^γ}.

(iii) Let γ_n = γ + C_0 (log log n)/(log n). We will show that for n large enough, By monotonicity and the fact that ⌊x⌋ ≥ x/2 for x large, the sum on the right is bounded from below by the integral This integral can be computed explicitly and is for large n bounded from below by a constant times (log n)/(1+2γ_n+2p) times n^{(2γ_n-2γ+1)/(1+2γ_n+2p)}.
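The maximum used above follows from elementary calculus; writing f(x) = x^{1+2α} e^{-2γx}:

```latex
f(x) = x^{1+2\alpha} e^{-2\gamma x}, \qquad
f'(x) = x^{2\alpha} e^{-2\gamma x}\bigl((1+2\alpha) - 2\gamma x\bigr),
% f' changes sign from + to - at x = (1+2a)/(2g), so the maximum is
f\Bigl(\frac{1+2\alpha}{2\gamma}\Bigr)
  = \Bigl(\frac{1+2\alpha}{2\gamma}\Bigr)^{1+2\alpha} e^{-(1+2\alpha)}
  = \exp\Bigl((1+2\alpha)\bigl(\log\tfrac{1+2\alpha}{2\gamma} - 1\bigr)\Bigr).
```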
It follows that, for large enough n, h_n(γ_n) is bounded from below by a constant times c^2 n^{2(γ_n-γ)/(1+2γ_n+2p)}. Since (log log n)/(log n) ≤ 1/4 for n large enough, we obtain Hence for C_0 large enough, depending only on c and γ, we indeed have that and h_n(γ_n) ≥ L(log n)^2 for large n.

Proof of Theorem 1
With the help of the dominated convergence theorem one can see that the random function ℓ_n is (P_0-a.s.) differentiable, and its derivative, which we denote by M_n, is given by We will show that on the interval (0, α_n + 1/log n] the random function M_n is positive and bounded away from 0 with probability tending to one; hence ℓ_n has no local maximum in this interval. Next we distinguish two cases according to the value of ᾱ_n. If ᾱ_n > log n, then the inequality α̂_n ≤ ᾱ_n trivially holds. In the case ᾱ_n ≤ log n we show that for a constant C_1 > 0 we a.s. have for all α ≥ ᾱ_n. Then we prove that for any given C_2 > 0, the constant L can be set such that for γ ∈ [ᾱ_n - 1/log n, ᾱ_n] we have M_n(γ) ≤ -C_2 n^{1/(1+2ᾱ_n+2p)} (log n)^3/(1+2ᾱ_n+2p) with probability tending to one uniformly. Together with (5.1) this means that on the interval [ᾱ_n - 1/log n, ᾱ_n] the function ℓ_n decreases more than it can possibly increase on the interval [ᾱ_n, ∞). Therefore, it holds with probability tending to one that ℓ_n has no global maximum on (ᾱ_n - 1/log n, ∞).

M_n(α) on [ᾱ_n, ∞)
In this section we give a deterministic upper bound for the integral of M_n(α) over the interval [ᾱ_n, ∞).
For α ≥ log n we apply Lemma 7(ii), and see that M_n(α) ≲ n2^{-1-2α-2p}. Using the fact that x ↦ 2^{-x} x^3 is decreasing for large x, it is easily seen that n2^{-1-2α-2p} ≲ (log n)^3/(1+2α+2p)^3 for α ≥ log n, hence By Lemma 1 we have β/2 < ᾱ_n for large enough n, both in the case μ_0 ∈ S^β and μ_0 ∈ A^γ, since for any β > 0 we have √(log n)/log log n ≥ β/2 for large enough n. It follows that the integral we want to bound is bounded by a constant times This quantity is bounded by a constant times n^{1/(1+2ᾱ_n+2p)} (log n)^2/(1+2ᾱ_n+2p).
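The monotonicity fact invoked here is elementary: for g(x) = 2^{-x} x^3,

```latex
g'(x) = 2^{-x} x^{2}\bigl(3 - x \log 2\bigr) < 0
\qquad \text{for } x > 3/\log 2 \approx 4.33,
```

so g is decreasing on that range, which covers all α ≥ log n for n large.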

M_n(α) on [ᾱ_n - 1/log n, ᾱ_n]
In this section we show that the process M_n(α) is, with probability tending to one, smaller than an arbitrarily large negative constant times n^{1/(1+2ᾱ_n+2p)} (log n)^3 The expected value of the normalized version of the process M_n given on the left-hand side of (5.2) is equal to We write this as the sum of two terms and bound the first term by We want to bound this quantity for α ∈ [ᾱ_n - 1/log n, ᾱ_n]. By Lemma 1, β/4 < ᾱ_n - 1/log n for large enough n, both in the case μ_0 ∈ S^β and μ_0 ∈ A^γ, so this interval is included in (β/4, ∞). Taking c = β/2 + 2p in Lemma 7(i) then shows that the first term is bounded by a multiple of 1/(log n)^2 and hence tends to zero, uniformly over [ᾱ_n - 1/log n, ᾱ_n]. We now consider the second term in (5.4), which equals h_n(α)/(log n)^2. By Lemma 2, for any μ_0 ∈ ℓ^2 and n ≥ e^4 we have where the last equality holds by the definition of ᾱ_n. This concludes the proof of (5.2).
To verify (5.3) it suffices, by Corollary 2.2.5 in [39] (applied with ψ(x) = x^2), to show that where d_n is the semimetric defined by diam_n is the diameter of [ᾱ_n - 1/log n, ᾱ_n] relative to d_n, and N(ε, B, d) is the minimal number of d-balls of radius ε needed to cover the set B. By Lemma 3, var_0 (1+2α+2p)M_n(α) 4 1 + h_n(α), (5.6) (with an implicit constant that does not depend on μ_0 and α). By the definition of ᾱ_n, the function h_n(α) is bounded above by L(log n)^2 on the interval [ᾱ_n - 1/log n, ᾱ_n].
The last bound also shows that the d_n-diameter of the set [ᾱ_n - 1/log n, ᾱ_n] is bounded above by a constant times (log n)^{-1}, with a constant that does not depend on μ_0 and α. By Lemma 4 and the fact that h_n(α) ≤ L(log n)^2 for α ∈ [ᾱ_n - 1/log n, ᾱ_n], we get the upper bound, for α_1, α_2 ∈ [ᾱ_n - 1/log n, ᾱ_n], with a constant that does not depend on μ_0. Therefore N(ε, [ᾱ_n - 1/log n, ᾱ_n], d_n) ≲ 1/(ε log n), and hence

M_n(α) on (0, α_n + 1/log n]
In this subsection we prove that if the constant l in the definition of α_n is small enough, then This shows that M_n is positive throughout (0, α_n + 1/log n] with probability tending to one, uniformly over ℓ^2. Since E_0 Y_i^2 = κ_i^2 μ_{0,i}^2 + 1/n, the expected value on the left-hand side of (5.7) is equal to We first find a lower bound for the first term. Since α_n ≤ √(log n) by definition, we have α ≲ √(log n) for all α ∈ (0, α_n + 1/log n]. Then it follows from Lemma 9 that for n large enough, the first term in (5.9) is bounded from below by 1/12 for all α ∈ (0, α_n + 1/log n]. Next note that by the definition of h_n and Lemma 2, we have As before, diam_n denotes the diameter of (0, α_n + 1/log n] relative to d_n, and N(ε, B, d) the minimal number of d-balls of radius ε needed to cover the set B.
The variance bound above also implies that the d_n-diameter of the set (0, α_n + 1/log n] is bounded by a multiple of e^{-(1/6)√(log n)}. By Lemma 4, the definition of α_n and Lemma 2, with constants that do not depend on μ_0. Hence for the covering number of (0, α_n + 1/log n] ⊂ (0, 2√(log n)) we have

Bounds on h n (α), variances and distances
In this section we prove a number of auxiliary lemmas used in the preceding. The first one is about the behavior of the function h_n in a neighborhood of α_n and ᾱ_n.
Proof We provide a detailed proof of the first inequality; the second one can be proved using similar arguments. Let S_n(α) be the sum in the definition of h_n. Splitting the sum into two parts we get, for α ∈ [ᾱ_n - 1/log n, ᾱ_n], In the first sum, i^{-2/log n} can be bounded from below by exp(-2). Furthermore, for i ∈ [n^{1/(1+2ᾱ_n+2p)}, n^{1/(1+2α+2p)}], we have the inequality Therefore S_n(α) can be bounded from below by a constant times Hence, we have S_n(α) ≳ S_n(ᾱ_n) for α ∈ [ᾱ_n - 1/log n, ᾱ_n].
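The lower bound on i^{-2/log n} used in the first sum is immediate once one notes that the indices there do not exceed n (an assumption on the summation range, but clear from the truncation of the sums):

```latex
i^{-2/\log n} = \exp\Bigl(-\frac{2\log i}{\log n}\Bigr) \ge e^{-2}
\qquad \text{for } 1 \le i \le n .
```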
Next we present two results on variances involving the random function M n .

Lemma 3 For any
Proof The random variables Y_i^2 are independent and var_0 Y_i^2 = 2/n^2 + 4κ_i^2 μ_{0,i}^2/n, hence the variance in the statement of the lemma is equal to By Lemma 10 the first term is bounded by Lemma 7(i) further bounds the right-hand side of the above display by a multiple of n^{-1/(1+2α+2p)} (log n)^2, uniformly for α > c, where c > 0 is an arbitrary constant.
For α ≤ c we get the same bound by applying Lemma 8 (with m = 2, l = 4, r = 1 + 2α + 2p, r_0 = 1 + 2c + 2p, and s = 2r) to the first term in (5.12). By Lemma 10, the second term in (5.12) is bounded by Combining the upper bounds for the two terms we arrive at the assertion of the lemma.

Lemma 4 For any
with a constant that does not depend on α and μ_0.
Proof The variance we have to bound can be written as and for i ≥ 2, It follows that the variance is bounded by a constant times Since var_0 Y_i^2 = 2/n^2 + 4κ_i^2 μ_{0,i}^2/n, it suffices to show that both and n^3 sup are bounded by a constant times (log n)^4 sup_{α ∈ [α_1, α_2]} n^{-1/(1+2α+2p)} (1 + h_n(α)).
By applying Lemma 10 twice (once the first statement with r = 1 + 2α + 2p and m = 1, and once the second one with the same r and m = 3 and ξ = 1), the expression in (5.14) is seen to be bounded above by a constant times The expression in parentheses equals h_n(α) n^{-1/(1+2α+2p)} log n. Now fix c > 0. Applying Lemma 10 twice again, we get that (5.13) is bounded above by Using the inequality x/(x + y) ≤ 1 and Lemma 7(i), the expression in parentheses can be bounded by a constant times n^{-1/(1+2α+2p)} log n for α > c. For α ≤ c, Lemma 8 (with m = 2 or m = 4, l = 4, r = 1 + 2α + 2p, r_0 = 1 + 2c + 2p, and s = 2r) gives the same bound (or even a better one) for (5.13). The proof is completed by combining the obtained bounds.

Proof of Theorem 2
We only present the details of the proof for the Sobolev case μ_0 ∈ S^β. The analytic case differs from the Sobolev case mainly in the upper bound for n^{-2α_n/(1+2α_n+2p)}; see also Sect. 6.5. Again, we assume the exact equality κ_i = i^{-p} for simplicity. By Markov's inequality and Theorem 1, is the posterior risk. We will show in the subsequent subsections that for ε_n = n^{-β/(1+2β+2p)} (log n)^2 (log log n)^{1/2} and arbitrary M_n → ∞, the first term on the right of (6.1) vanishes as n → ∞. Note that by the explicit posterior computation (2.3), we have where μ̂_{α,i} = n i^p (i^{1+2α+2p} + n)^{-1} Y_i is the i-th coefficient of the posterior mean. We divide the Sobolev ball ∥μ_0∥_β ≤ R into two subsets and show that on both subsets the posterior risks are of the order ε_n^2.

Bound for the expected posterior risk over P_n
In this section we prove that The second term of (6.2) is deterministic. The expectation of the first term can be split into squared bias and variance terms. We find that the expectation of (6.2) is given by Note that the second and third terms in (6.4) are independent of μ_0, and that the second is bounded by the third. By Lemma 8 (with m = 0, l = 1, r = 1 + 2α + 2p and s = 2p) the latter is, for α ≥ α_n, further bounded by In view of Lemma 1(i), the right-hand side is bounded by a constant times n^{-2β/(1+2β+2p)} for large n. It remains to consider the first sum in (6.4), which we divide into three parts and show that each of the parts has the stated order. First we note that Next, observe that elementary calculus shows that for α > 0 and n ≥ e, the maximum of the function i ↦ i^{1+2α+4p}/log i over the interval [2, n^{1/(1+2α+2p)}] is attained at i = n^{1/(1+2α+2p)}, for α ≤ log n/(2 log 2) - 1/2 - p. It follows that for α > 0, 1+2α+2 p h_n(α).
By Lemma 1, α_n ≥ β - c_0/log n for a constant c_0 > 0 (depending only on β, R, p). Hence, using that x ↦ x/(c + x) is increasing for every c > 0, the right-hand side is bounded by a constant times n^{-2β/(1+2β+2p)} log^2 n.
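The elementary-calculus claim above, that the maximum of i ↦ i^{1+2α+4p}/log i over [2, n^{1/(1+2α+2p)}] sits at the right endpoint, can be verified as follows (writing s = 1 + 2α + 4p):

```latex
g(x) = \frac{x^{s}}{\log x}, \qquad
g'(x) = \frac{x^{s-1}\,(s \log x - 1)}{(\log x)^{2}} > 0
\quad \text{for } x > e^{1/s},
```

so g is increasing on [2, ∞) whenever s ≥ 1/log 2, and the condition α ≤ log n/(2 log 2) - 1/2 - p is exactly what guarantees that the right endpoint n^{1/(1+2α+2p)} is at least 2.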
To complete the proof we deal with the terms between n^{1/(1+2ᾱ_n+2p)} and n^{1/(1+2β+2p)}. Let J = J(n) be the smallest integer such that ᾱ_n/(1 + 1/log n)^J ≤ β. One can see that J is bounded above by a multiple of (log n)(log log n) for any positive β. We partition the summation range under consideration into J pieces using the auxiliary numbers b_j = 1 + 2ᾱ_n(1 + 1/log n)^{-j} + 2p, j = 0, …, J.
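The stated bound on J can be seen as follows, using log(1 + x) ≥ x/2 for 0 < x ≤ 1 and the fact that ᾱ_n ≲ log n on the relevant event:

```latex
J \le 1 + \frac{\log(\bar{\alpha}_n/\beta)}{\log\bigl(1 + 1/\log n\bigr)}
  \le 1 + 2\,(\log n)\,\log\frac{\bar{\alpha}_n}{\beta}
  \lesssim (\log n)(\log\log n).
```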
Note that the sequence b_j is decreasing. Now we have and the upper bound is uniform in α.
On the same interval, i^{2p} is bounded by n^{2p/b_{j+1}}. Therefore the right-hand side of the preceding display is further bounded by a constant times In the last step we used the fact that, by construction, b_j/2 - 1/2 - p ≥ β/(1 + 1/log n) for j ≤ J. Because b_j/2 - 1/2 - p ≤ ᾱ_n for every j ≥ 0, it follows from the definition of ᾱ_n that h_n(b_j/2 - 1/2 - p) is bounded above by L(log n)^2, and we recall that J = J(n) is bounded above by a multiple of (log n)(log log n). Finally we note that n^{-(2β/(1+1/log n))/(1+2β/(1+1/log n)+2p)} ≤ e n^{-2β/(1+2β+2p)}.

Bound for the centered posterior risk over P_n
We show in this section that for the set P_n we also have for ε_n = n^{-β/(1+2β+2p)} (log n)^2 (log log n)^{1/2}. Using the explicit expression for the posterior mean μ̂_{α,i}, we see that the random variable in the supremum is the absolute value of V(α)/n - 2W(α)/√n, where We deal with the two processes separately. For the process V, Corollary 2.2.5 in [39] implies that Using Lemma 8 (with m = 0, l = 4, r = 1 + 2α + 2p and s = 4p), we can conclude that the variance of V(α) is bounded above by a multiple of n^{(1+4p)/(1+2α+2p)}. It follows that for the diameter of the interval we have diam_n^2 ≲ n^{(1+4p)/(1+2α_n+2p)}. To compute the covering number of the interval [α_n, ∞) we first note that for 0 < α_1 < α_2, Hence for ε > 0, a single ε-ball covers the whole interval [K log(n/ε), ∞) for some constant K > 0. By Lemma 5, the distance d_n(α_1, α_2) is bounded above by a multiple of |α_1 - α_2| n^{(1+4p)/(2+4α_n+4p)} (log n). Therefore the covering number of the interval [α_n, K log(n/ε)] relative to the metric d_n is bounded above by a multiple of (log n) n^{(1+4p)/(2+4α_n+4p)} (log(n/ε))/ε. Combining everything we see that By the fact that x ↦ x/(x + c) is increasing and Lemma 1(i), the right-hand side divided by n is bounded by n^{-2α_n/(1+2α_n+2p)} (log n) ≲ n^{-2β/(1+2β+2p)} (log n).
It remains to deal with the process W. The basic line of reasoning is the same as that followed above for V. An essential difference, however, is the derivation of a bound for the variance of W, for which we provide the details. The rest of the proof is left to the reader. The variance of W(α)/√n is given by We show that uniformly for α ∈ [α_n, ᾱ_n], this variance is bounded above by a constant (depending only on ∥μ_0∥_β) times n^{-(1+4β)/(1+2β+2p)} (log n)^2. We note that on the set P_n the upper bound ᾱ_n ≤ log n/log 2 - 1/2 - p is finite. For the sum over i ≤ n^{1/(1+2α+2p)} we have We note that the second term on the right-hand side of the preceding display disappears for α > log n/(2 log 2) - 1/2 - p. We have used again the fact that on the range i ≤ n^{1/(1+2α+2p)}, the quantity i^{1+2α+6p} (log i)^{-1} is maximal for the largest i. Now the function x ↦ -(1 + 2x)/(x + c) is decreasing on (0, ∞) for any c > 1/2. Moreover, h_n(α) ≤ L(log n)^2 for any α ≤ ᾱ_n; thus the preceding display is bounded above by a multiple of n^{-(1+4α_n)/(1+2α_n+2p)} (log n)^2. Using Lemma 1(i), this is further bounded by a constant times n^{-(1+4β)/(1+2β+2p)} (log n)^2. Next we consider the sum over the range i > n^{1/(1+2α+2p)}. We distinguish two cases according to the value of α. First suppose that 1 + 2α ≥ 2p.

Bound for the expected and centered posterior risk over Q_n
To complete the proof of Theorem 2 we show that results similar to those of Sects. 6.1 and 6.2 hold over the set Q_n as well: For the first statement (6.7) we follow the same line of reasoning as in Sect. 6.1. The second and third terms in (6.4) are free of μ_0, and hence the same upper bounds as in Sect. 6.1 apply. The first term in (6.4) is also treated exactly as in Sect. 6.1, except that n^{1/(1+2ᾱ_n+2p)} ≤ 2 if μ_0 ∈ Q_n, so the sum over the terms i < n^{1/(1+2ᾱ_n+2p)} need not be treated, and we can proceed by replacing ᾱ_n by log n/(2 log 2) - 1/2 - p in the definition of J and the sequence b_j.
To bound the centered posterior risk (6.8) we follow the proof given in Sect. 6.2. There the process V(α) was already bounded uniformly over [α_n, ∞), whence it remains to deal with the process W(α). The only essential difference is the upper bound for the variance of the process W(α)/√n. In Sect. 6.2 this was shown to be bounded above by a multiple of the desired rate (log n)^2 n^{-(1+4β)/(1+2β+2p)} for α ∈ [α_n, ᾱ_n ∧ (log n/log 2 - 1/2 - p)], which is α ∈ [α_n, log n/log 2 - 1/2 - p] on the set Q_n. Finally, for α ≥ log n/log 2 - 1/2 - p we have This completes the proof.
6.4 Bounds for the semimetrics associated to V and W

The following lemma is used in Sect. 6.2.

Lemma 5
For any α_n ≤ α_1 < α_2 ≤ ᾱ_n the following inequalities hold: with a constant that does not depend on α and μ_0.
Proof The left-hand side of the first inequality is equal to The derivative of f_i is given by f_i′(α) = -4 i^{1+2α+2p} (log i)/(i^{1+2α+2p} + n)^3, hence the preceding display is bounded above by a multiple of with the help of Lemma 10 (with r = 1 + 2α + 2p and m = 2), and Lemma 8 (with m = 0, l = 4, r = 1 + 2α + 2p, and s = r + 4p). Since α ≥ α_n, we get the first assertion of the lemma.
We next consider W/√n. The left-hand side of the second inequality in the statement of the lemma is equal to The derivative of this f_i satisfies |f_i′(α)| ≤ 2 (log i) f_i(α), hence we get the upper bound The proof is completed by arguing as in (6.6) or (6.9).
6.5 Proof of Theorem 2 in the case of the analytic truth

The assertion of Theorem 2 in the case of the analytic truth μ_0 ∈ A^γ can be proven along the lines of the proof presented above. In view of Lemma 1(ii), √(log n)/(log log n) < α_n, and hence We note that the computations in Sect. 6.1 go through for the analytic case after replacing β and ∥μ_0∥_β by √(log n)/log log n and ∥μ_0∥_{A^γ}, respectively. Furthermore, in Sect. 6.2 it suffices to consider the case 1 + 2α ≥ 2p, which follows from Lemma 1(ii).

Proof of Theorem 3
Let B(R) denote a Sobolev or analytic ball of radius R, and ε_{n,B} the corresponding contraction rate. Let A_n be the event that α̂_n ∈ [α_n, ᾱ_n]. Then with α ↦ λ_n(α|Y) denoting the posterior Lebesgue density of α, we have By Theorem 1 the first term on the right vanishes as n → ∞, provided l and L in the definitions of α_n and ᾱ_n are chosen small and large enough, respectively. We will show that the other terms tend to 0 as well.
It is easy to see that this quantity tends to 0 as n → ∞.
In bounding the third term on the right-hand side of (7.1) we replace the supremum over B(R) by the suprema over the sets P_n and Q_n defined in the beginning of Sect. 6. The supremum over Q_n is bounded above by This goes to zero by Sects. 6.1 and 6.2 and Markov's inequality. In Sect. 5.1 we showed that the differentiated log-likelihood function M_n can increase on the interval [ᾱ_n, ∞) at most by a multiple of n^{1/(1+2ᾱ_n+2p)} (log n)^2/(1+2ᾱ_n+2p).

Lemma 6
Suppose that the prior density λ satisfies Assumption 1 for some c_1 > 0. Then there exist positive constants C_1, …, C_6 and C_7 < 1, depending only on c_1, such that for all x ≥ c_1, every δ_n → 0, and n large enough, ∫_{x+δ_n}^{x+2δ_n} λ(α) dα ≥ C_1 δ_n^{C_2} exp(-C_3 exp(…)) Proof The proof only involves straightforward calculus.

Auxiliary lemmas
In this section we collect several lemmas that we use throughout the proofs to upper and lower bound certain sums.
Since ⌈x⌉/x ≤ 2 for x ≥ 1, and n^{1/r} ≥ 2,

Lemma 10 Let m, i, r, and ξ be positive reals. Then for n ≥ e^m, n i^r (r log i)^m / (i^r + n)^2 ≤ (log n)^m, and n^ξ (r log i)^{ξm} / (i^r + n)^ξ ≤ (log n)^{ξm}.
Proof Assume first that i ≤ n^{1/r}. Then the left-hand side of the first inequality is bounded above by n^2 (r log n^{1/r})^m / n^2 = (log n)^m.
Next assume that i > n^{1/r}. The function x ↦ x^{-c} (log x)^m is monotone decreasing for x ≥ e^{m/c}. Therefore the function f(i) = i^{-r} (log i)^m is monotone decreasing for i ≥ e^{m/r}, and since by assumption i > n^{1/r}, we get that for n ≥ e^m the function f takes its maximum over the relevant range at i = n^{1/r}. Hence the left-hand side of the inequality is bounded above by n (r log i)^m i^{-r} ≤ n r^m (log n^{1/r})^m n^{-1} = (log n)^m.
The second inequality can be proven similarly.
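As a quick numerical sanity check of Lemma 10 (a sketch only; the parameter values m, r, ξ and the range of indices i are arbitrary choices of ours):

```python
import math

def lhs1(n, i, r, m):
    # left-hand side of the first inequality of Lemma 10
    return n * i**r * (r * math.log(i)) ** m / (i**r + n) ** 2

def lhs2(n, i, r, m, xi):
    # left-hand side of the second inequality
    return n**xi * (r * math.log(i)) ** (xi * m) / (i**r + n) ** xi

m, r, xi = 2.0, 3.0, 1.5              # arbitrary positive parameters
n = 10**4                             # satisfies n >= e^m
bound1 = math.log(n) ** m
bound2 = math.log(n) ** (xi * m)
ok = all(lhs1(n, i, r, m) <= bound1 and lhs2(n, i, r, m, xi) <= bound2
         for i in range(2, 2000))
print(ok)  # -> True
```

The bounds are far from tight away from i ≈ n^{1/r}, in line with the proof, which only needs the crude comparisons i^r ≤ n and (i^r + n)^2 ≥ max(i^{2r}, n^2).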
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.