Statistical Inference for Stochastic Processes

Volume 21, Issue 3, pp 603–628

Adaptive nonparametric drift estimation for diffusion processes using Faber–Schauder expansions

  • Frank van der Meulen
  • Moritz Schauer
  • Jan van Waaij
Open Access

Abstract

We consider the problem of nonparametric estimation of the drift of a continuously observed one-dimensional diffusion with periodic drift. Motivated by computational considerations, van der Meulen et al. (Comput Stat Data Anal 71:615–632, 2014) defined a prior on the drift as a randomly truncated and randomly scaled Faber–Schauder series expansion with Gaussian coefficients. We study the behaviour of the posterior obtained from this prior from a frequentist asymptotic point of view. If the true data generating drift is smooth, it is proved that the posterior is adaptive with posterior contraction rates for the \(L_2\)-norm that are optimal up to a log factor. Contraction rates in \(L_p\)-norms with \(p\in (2,\infty ]\) are derived as well.

1 Introduction

Assume continuous time observations \(X^T=\left\{ X_t,\, :t\in [0,T]\right\} \) from a diffusion process X defined as (weak) solution to the stochastic differential equation (sde)
$$\begin{aligned} {\,\mathrm {d}}X_t=b_0(X_t) {\,\mathrm {d}}t+{\,\mathrm {d}}W_t, \qquad X_0=x_0. \end{aligned}$$
(1)
Here W is a Brownian motion and the drift \(b_0\) is assumed to be a real-valued measurable function on the real line that is 1-periodic and square integrable on [0, 1]. The assumed periodicity implies that we can alternatively view the process X as a diffusion on the circle. This model has been used for dynamic modelling of angles, see for instance Pokern (2007) and Hindriks (2011).
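For intuition, paths from the model (1) can be simulated with a simple Euler–Maruyama scheme. The sketch below uses the illustrative 1-periodic drift \(b_0(x)=\sin (2\pi x)\); the drift choice, step size and horizon are our assumptions, not part of the paper.

```python
import numpy as np

def simulate_diffusion(b, T, dt, x0=0.0, rng=None):
    """Euler-Maruyama discretisation of dX_t = b(X_t) dt + dW_t, X_0 = x0."""
    rng = np.random.default_rng(rng)
    n = int(round(T / dt))
    x = np.empty(n + 1)
    x[0] = x0
    dW = rng.normal(0.0, np.sqrt(dt), size=n)
    for i in range(n):
        x[i + 1] = x[i] + b(x[i]) * dt + dW[i]
    return x

# Illustrative 1-periodic drift (our choice, not from the paper):
b0 = lambda x: np.sin(2 * np.pi * x)
path = simulate_diffusion(b0, T=10.0, dt=1e-3, rng=1)
```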

We are interested in nonparametric adaptive estimation of the drift. This problem has recently been studied by multiple authors. Spokoiny (2000) proposed a locally linear smoother with a data-driven bandwidth choice that is rate adaptive with respect to \(|b''(x)|\) for all x and optimal up to a log factor. Interestingly, the result is non-asymptotic and does not require ergodicity. Dalalyan and Kutoyants (2002) and Dalalyan (2005) consider ergodic diffusions and construct estimators that are asymptotically minimax and adaptive under Sobolev smoothness of the drift. Their results were extended to the multidimensional case by Strauch (2015).

In this paper we focus on Bayesian nonparametric estimation, a paradigm that has become increasingly popular over the past two decades. An overview of some advances in Bayesian nonparametric estimation for diffusion processes is given in van Zanten (2013).

The Bayesian approach requires the specification of a prior. Ideally, the prior on the drift is chosen such that drawing from the posterior is computationally efficient, while at the same time ensuring that the resulting inference has good theoretical properties, as quantified by a contraction rate. This is the rate at which balls around the true parameter value can be shrunk while still retaining most of the posterior mass. More formally, if d is a semimetric on the space of drift functions, a contraction rate \(\varepsilon _T\) is a sequence of positive numbers \(\varepsilon _T\downarrow 0\) for which the posterior mass of the balls \(\{b\,:\, d(b,b_0)\le \varepsilon _T\}\) converges in probability to 1 as \(T\rightarrow \infty \), under the law of X with drift \(b_0\). For a general discussion of contraction rates, see for instance Ghosal et al. (2000) and Ghosal and van der Vaart (2007).

For diffusions, the problem of deriving optimal posterior convergence rates has been studied recently under the additional assumption that the drift integrates to zero, \(\int _0^1 b_0(x) d x =0\). In Papaspiliopoulos et al. (2012) a mean zero Gaussian process prior is proposed together with an algorithm to sample from the posterior. The precision operator (inverse covariance operator) of the proposed Gaussian process is given by \(\eta \left( (-\Delta )^{\alpha +1/2} + \kappa I\right) \), where \(\Delta \) is the one-dimensional Laplacian, I is the identity operator, \(\eta , \kappa >0\) and \(\alpha +1/2 \in \{2,3,\ldots \}\). A first consistency result was shown in Pokern et al. (2013).

In van Waaij and van Zanten (2016) it was shown that this rate result can be improved upon for a slightly more general class of priors on the drift. More specifically, in that paper the authors consider a prior which is defined as
$$\begin{aligned} b = L \sum _{k=1}^\infty k^{-1/2-\alpha } \varphi _k Z_k, \end{aligned}$$
(2)
where \(\varphi _{2k}(x)=\sqrt{2} \cos (2\pi k x)\), \(\varphi _{2k-1}(x)=\sqrt{2} \sin (2\pi k x)\) are the standard Fourier series basis functions, \(\{Z_k\}\) is a sequence of independent standard normally distributed random variables and \(\alpha \) is positive. It is shown that when L and \(\alpha \) are fixed and \(b_0\) is assumed to be \(\alpha \)-Sobolev smooth, then the optimal posterior rate of contraction, \(T^{-\alpha /(1+2\alpha )}\), is obtained. Note that this result is nonadaptive, as the regularity of the prior must match the regularity of \(b_0\). For obtaining optimal posterior contraction rates for the full range of possible regularities of the drift, two options are investigated: endowing either L or \(\alpha \) with a hyperprior. Only the second option results in the desired adaptivity over all possible regularities.
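A truncated draw from the series prior (2) is easy to generate; in the sketch below the truncation level K and the evaluation grid are implementation choices of ours, not part of the prior's definition.

```python
import numpy as np

def fourier_prior_draw(alpha, L=1.0, K=500, n_grid=256, rng=None):
    """Truncated draw from b = L * sum_k k^{-1/2-alpha} phi_k Z_k (first K terms)."""
    rng = np.random.default_rng(rng)
    x = np.linspace(0.0, 1.0, n_grid, endpoint=False)
    b = np.zeros(n_grid)
    for k in range(1, K + 1):
        m = (k + 1) // 2  # frequency index: phi_{2m} = cos-type, phi_{2m-1} = sin-type
        phi = (np.sqrt(2) * np.cos(2 * np.pi * m * x) if k % 2 == 0
               else np.sqrt(2) * np.sin(2 * np.pi * m * x))
        b += L * k ** (-0.5 - alpha) * rng.standard_normal() * phi
    return x, b

xs, bs = fourier_prior_draw(alpha=1.0, rng=0)
```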
While the prior in (2) (with additional prior on \(\alpha \)) has good asymptotic properties, from a computational point of view the infinite series expansion is inconvenient. Clearly, in any implementation this expansion needs to be truncated. Random truncation of a series expansion is a well-known method for defining priors in Bayesian nonparametrics, see for instance Shen and Ghosal (2015). Exactly this idea was exploited in van der Meulen et al. (2014), where the prior is defined as the law of the random function
$$\begin{aligned} b^{R,S}= S Z_1 \psi _1+S\sum _{j=0}^R\sum _{k=1}^{2^{j}}Z_{jk}\psi _{jk}, \end{aligned}$$
(3)
where the functions \(\psi _{jk}\) constitute the Faber–Schauder basis (see Fig. 1).
Fig. 1

Elements \(\psi _1\) and \(\psi _{j,k}\), \(0 \le j \le 2\) of the Faber–Schauder basis

These functions feature prominently in the Lévy-Ciesielski construction of Brownian motion (see for instance (Bhattacharya and Waymire 2007, paragraph 10.1)).

The prior coefficients \(Z_{jk}\) are equipped with a Gaussian distribution, and the truncation level R and the scaling factor S are equipped with independent priors. Truncation in absence of scaling increases the apparent smoothness of the prior (as illustrated for deterministic truncation by example 4.5 in van der Vaart and van Zanten (2008)), whereas scaling by a number \(\ge 1\) decreases the apparent smoothness. (Scaling with a number \(\le 1\) only increases the apparent smoothness to a limited extent, see for example Knapik et al. (2011).)

The simplest type of prior is obtained by taking the coefficients \(Z_{jk}\) independent. We do however also consider the prior that is obtained by first expanding a periodic Ornstein–Uhlenbeck process into the Faber–Schauder basis, followed by random scaling and truncation. We will explain that specific stationarity properties of this prior make it a natural choice.

Draws from the posterior can be computed using a reversible jump Markov chain Monte Carlo (MCMC) algorithm (cf. van der Meulen et al. (2014)). For both types of priors, fast computation is facilitated by leveraging inherent sparsity properties stemming from the compact support of the functions \(\psi _{jk}\). In the discussion of van der Meulen et al. (2014) it was argued that inclusion of both the scaling and the random truncation in the prior is beneficial. However, this claim was only supported by simulation results.

In this paper we support this claim theoretically by proving adaptive contraction rates of the posterior distribution in case the prior (3) is used. We start from a general result in van der Meulen et al. (2006) on Brownian semimartingale models, which we adapt to our setting. Here we take into account that as the drift is assumed to be one-periodic, information accumulates in a different way compared to (general) ergodic diffusions. Subsequently we verify that the resulting prior mass, remaining mass and entropy conditions appearing in this adapted result are satisfied for the prior defined in Eq. (3). An application of our results shows that if the true drift function is \(B_{\infty ,\infty }^\beta \)-Besov smooth, \(\beta \in (0,2)\), then by appropriate choice of the variances of \(Z_{jk}\), as well as the priors on R and S, the posterior for the drift b contracts at the rate \((T/\log T)^{-\beta /(1+2\beta )}\) around the true drift in the \(L_2\)-norm. Up to the log factor this rate is minimax-optimal (see for instance Kutoyants 2004, Theorem 4.48). Moreover, it is adaptive: the prior does not depend on \(\beta \). In case the true drift has Besov-smoothness greater than or equal to 2, our method guarantees contraction rates equal to essentially \(T^{-2/5}\) (corresponding to \(\beta =2\)). A further application of our results shows that for \(L_p\)-norms we obtain contraction rate \(T^{-(\beta -1/2+1/p)/(1+2\beta )}\), up to log-factors.

The paper is organised as follows. In the next section we give a precise definition of the prior. In Sect. 3 a general contraction result for the class of diffusion processes considered here is derived. Our main result on posterior contraction for \(L^p\)-norms with \(p\ge 2\) is presented in Sect. 4. Many results of this paper concern general properties of the prior and their application is not confined to drift estimation of diffusion processes. To illustrate this, we show in Sect. 5 how these results can easily be adapted to nonparametric regression and nonparametric density estimation. Proofs are gathered in Sect. 6. The appendix contains a couple of technical results.

2 Prior construction

2.1 Model and posterior

Let
$$\begin{aligned} L^2(\mathbb {T})=\left\{ b : \mathbb {R}\rightarrow \mathbb {R}\,\, \bigg |\,\, \int _0^1 b(x)^2 {\,\mathrm {d}}x <\infty \,\,\text {and}\,\, b \,\,\text {is }1-periodic\right\} \end{aligned}$$
be the space of square integrable 1-periodic functions.

Lemma 1

If \(b_0 \in L^2({{\mathrm{\mathbb {T}}}}),\) then the SDE Eq. (1) has a unique weak solution.

The proof is in Sect. 6.1.

For \(b\in L^2(\mathbb {T})\), let \(P^{b}=P^{b,T}\) denote the law of the process \(X^T\) generated by Eq. (1) when \(b_0\) is replaced by b. If \(P^0\) denotes the law of \(X^T\) when the drift is zero, then \(P^b\) is absolutely continuous with respect to \(P^0\) with Radon-Nikodym density
$$\begin{aligned} p_b\left( X^T\right) =\exp \left( \int _0^Tb(X_t){\,\mathrm {d}}X_t-\frac{1}{2}\int _0^T b^2(X_t){\,\mathrm {d}}t\right) . \end{aligned}$$
(4)
Given a prior \(\Pi \) on \(L^2(\mathbb {T})\) and path \(X^T\) from (1), the posterior is given by
$$\begin{aligned} \Pi (b \in A\mid X^T) = \frac{\int _A p_b(X^T)\,\Pi ({\,\mathrm {d}}b)}{\int p_b(X^T)\,\Pi ({\,\mathrm {d}}b)}, \end{aligned}$$
(5)
where A is a Borel set of \(L^2(\mathbb {T})\). These assertions are verified as part of the proof of Theorem 3.

2.2 Motivating the choice of prior

We are interested in randomly truncated, scaled series priors that simultaneously enable a fast algorithm for obtaining draws from the posterior and enjoy good contraction rates.

To explain what we mean by the first item, consider first a finite series prior. Let \(\{\psi _1,\ldots , \psi _r\}\) denote basis functions and \(Z=(Z_1,\ldots , Z_r)\) a mean zero Gaussian random vector with precision matrix \(\Gamma \). Assume that the prior for b is given by \(b=\sum _{i=1}^r Z_i \psi _i\). By conjugacy, it follows that \( Z \mid X^T \sim \mathrm N(W^{-1}\mu , W^{-1})\), where \(W= G + \Gamma \),
$$\begin{aligned} \mu _{i} = \int _0^T \psi _{i}(X_t) {\,\mathrm {d}}X_t\quad \text {and}\quad G_{i, i'} = \int _0^T \psi _{i}(X_t)\psi _{i'}(X_t) {\,\mathrm {d}}t \end{aligned}$$
(6)
for \(i, i' \in \{1,\ldots ,r\}\), cf. (van der Meulen et al. 2014, Lemma 1). The matrix G is referred to as the Grammian. From these expressions it follows that it is computationally advantageous to exploit compactly supported basis functions. Whenever \(\psi _{i}\) and \(\psi _{i'}\) have nonoverlapping supports, we have \(G_{i, i'}=0\). Depending on the choice of such basis functions, the Grammian G will have a specific sparsity structure (a set of index pairs \((i,i')\) for which \(G_{i,i'} = 0\), independently of \(X^T\)). This sparsity structure is inherited by W as long as the sparsity structure of the prior precision matrix matches that of G.
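For a discretely observed path, the quantities in (6) can be approximated by Riemann–Itô sums, after which the conjugate Gaussian update is a single linear solve. The sketch below uses a two-element hat-function basis and a constant drift of 1/2 (which is 1-periodic) purely for illustration; these choices are ours, not the paper's.

```python
import numpy as np

def hat(x):
    # the "hat" function: 1 - |2x - 1| on [0, 1], zero elsewhere
    return np.maximum(0.0, 1.0 - np.abs(2.0 * np.asarray(x) - 1.0))

def conjugate_posterior(path, dt, basis, Gamma):
    """Gaussian posterior N(W^{-1} mu, W^{-1}) of the coefficients Z,
    with mu_i and the Grammian G approximated by Riemann/Ito sums."""
    x, dx = path[:-1], np.diff(path)
    Psi = np.array([f(np.mod(x, 1.0)) for f in basis])  # 1-periodic evaluation
    mu = Psi @ dx                # mu_i ~ int_0^T psi_i(X_t) dX_t
    G = (Psi * dt) @ Psi.T       # G_{ii'} ~ int_0^T psi_i(X_t) psi_{i'}(X_t) dt
    W = G + Gamma
    return np.linalg.solve(W, mu), np.linalg.inv(W)

# Toy usage: simulate dX = 0.5 dt + dW and update two hat-function coefficients.
rng = np.random.default_rng(0)
dt, n = 1e-3, 50_000
dW = rng.normal(0.0, np.sqrt(dt), n)
path = np.concatenate([[0.0], np.cumsum(0.5 * dt + dW)])
basis = [lambda u: hat(u), lambda u: hat(np.mod(u + 0.5, 1.0))]
post_mean, post_cov = conjugate_posterior(path, dt, basis, Gamma=np.eye(2))
```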

In the next section we make a specific choice for the basis functions and the prior precision matrix \(\Gamma \).

2.3 Definition of the prior

Define the “hat” function \(\Lambda \) by \(\Lambda (x) = (2x){\mathbf {1}}_{[0,\frac{1}{2})}(x) + 2(1-x) {\mathbf {1}}_{[\frac{1}{2},1]}(x)\). The Faber–Schauder basis functions are given by
$$\begin{aligned} \psi _{j, k}(x) = \Lambda \left( 2^{j}x - k+1\right) , \quad j\ge 0, \,k=1,\ldots , 2^{j} \end{aligned}$$
Let
$$\begin{aligned} \psi _1 = \left( \psi _{0,1}\left( x-\tfrac{1}{2}\right) + \psi _{0,1}\left( x+\tfrac{1}{2}\right) \right) I_{[0,1]}(x). \end{aligned}$$
In Fig. 1 we have plotted \(\psi _1\) together with \(\psi _{j,k}\) where \(j \in \{0, 1, 2\}\).
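As a direct transcription of these definitions, the hat function \(\Lambda \), the basis functions \(\psi _{j,k}\) and the periodic element \(\psi _1\) can be sketched in Python as follows.

```python
import numpy as np

def Lam(x):
    """Hat function: 2x on [0, 1/2), 2(1 - x) on [1/2, 1], zero elsewhere."""
    x = np.asarray(x, dtype=float)
    return np.where((x >= 0) & (x < 0.5), 2 * x,
                    np.where((x >= 0.5) & (x <= 1), 2 * (1 - x), 0.0))

def psi(j, k, x):
    """Faber-Schauder basis function psi_{j,k}(x) = Lam(2^j x - k + 1)."""
    return Lam(2.0 ** j * np.asarray(x, dtype=float) - k + 1)

def psi1(x):
    """Extra element psi_1: half-hats at both endpoints of [0, 1]."""
    x = np.asarray(x, dtype=float)
    return np.where((x >= 0) & (x <= 1),
                    psi(0, 1, x - 0.5) + psi(0, 1, x + 0.5), 0.0)
```

Note that \(\psi _{j,k}\) is supported on \([(k-1)2^{-j}, k2^{-j}]\) and peaks (with value 1) at the midpoint of that interval.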
We define our prior as in (3) with Gaussian coefficients \(Z_1\) and \(Z_{jk}\), where the truncation level R and the scaling factor S are equipped with (hyper)priors. We extend b periodically if we want to consider b as function on the real line. If we identify the double index (j, k) in (3) with the single index \(i = 2^{j}+k\), then we can write \(b^{R,S} = S \sum _{i=1}^{2^{R+1}} \psi _i Z_i\). Let
$$\begin{aligned} \ell (i) = {\left\{ \begin{array}{ll} 0 &{} \quad \text {if} \quad i\in \{1, 2\} \\ j &{} \quad \text {if} \quad i\in \left\{ 2^j +1,\ldots , 2^{j+1}\right\} \quad \text {and}\, j\ge 1 \end{array}\right. }. \end{aligned}$$
We say that \(\psi _i\) belongs to level \(j\ge 0\) if \(\ell (i) = j\). Thus both \(\psi _1\) and \(\psi _{0,1}\) belong to level 0, which is convenient for notational purposes. For levels \(j\ge 1\), the basis functions within each level are orthogonal, having essentially disjoint supports. Define for \(r\in \{0,1,\ldots \}\)
$$\begin{aligned} {\mathscr {I}}_r = \left\{ i\,:\, \ell (i)\le r\right\} = \left\{ 1,2,\ldots ,2^{r+1}\right\} . \end{aligned}$$
Let \(A = ({\text {Cov}}(Z_i, Z_{i'}))_{i,i'\in \mathbb {N}}\) and define its finite-dimensional restriction by \(A^r=(A_{ii'})_{i, i' \in {\mathscr {I}}_r}\). If we denote \(Z^r=\{Z_i,\, i \in {\mathscr {I}}_r\}\), and assume that \(Z^r\) is multivariate normally distributed with mean zero and covariance matrix \(A^r\), then the prior has the following hierarchy
$$\begin{aligned} b\mid R, S, Z^R&= S\sum _{i \in {\mathscr {I}}_R} Z_i\psi _i \end{aligned}$$
(7)
$$\begin{aligned} Z^R \mid R&\sim \mathrm N(0, A^R) \end{aligned}$$
(8)
$$\begin{aligned} (R,S)&\sim \Pi (\cdot ) . \end{aligned}$$
(9)
Here, we use \(\Pi \) to denote the joint distribution of (R, S).

We will consider two choices of priors for the sequence \(Z_1,Z_2,\ldots \). The first choice consists of independent Gaussian random variables. If the coefficients \(Z_{i}\) are independent with standard deviation \(2^{-\ell (i)/2}\), random draws from this prior are scaled piecewise linear interpolations, on a dyadic grid, of a Brownian bridge on [0, 1], plus the random function \(Z_1\psi _1.\) The choice of \(\psi _1\) is motivated by the fact that in this case \({{\text {Var}}}\left( b(t) \big | S=s, R=\infty \right) = s^2\) is independent of t.
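A draw from this first (Brownian-bridge-type) prior, truncated at level R and scaled by S, can be generated directly from the definitions; the grid size and the values R = 6, S = 1 below are illustrative choices of ours.

```python
import numpy as np

def hat(x):
    return np.maximum(0.0, 1.0 - np.abs(2.0 * np.asarray(x) - 1.0))

def level(i):
    """ell(i): 0 for i in {1, 2}; j for i in {2^j + 1, ..., 2^(j+1)}, j >= 1."""
    return 0 if i <= 2 else int(np.floor(np.log2(i - 1)))

def prior_draw(R, S, x, rng=None):
    """b^{R,S} with independent coefficients Z_i of standard deviation 2^{-ell(i)/2}."""
    rng = np.random.default_rng(rng)
    b = S * rng.normal() * (hat(x - 0.5) + hat(x + 0.5))   # S * Z_1 * psi_1
    for j in range(R + 1):
        for k in range(1, 2 ** j + 1):
            z = 2.0 ** (-j / 2) * rng.normal()             # sd 2^{-j/2} at level j
            b = b + S * z * hat(2.0 ** j * x - k + 1)
    return b

xs = np.linspace(0.0, 1.0, 513)
draw = prior_draw(6, 1.0, xs, rng=2)
```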

The second type of prior is constructed as follows. For \(\gamma , \sigma ^2 >0\), define \(V\equiv (V_t,\, t\in [0,1])\) to be the cyclically stationary and centred Ornstein–Uhlenbeck process. This is a periodic Gaussian process with covariance kernel
$$\begin{aligned} {{\text {Cov}}}\left( V(s) , V(t) \right) = \frac{\sigma ^2}{2\gamma }\frac{e^{-\gamma h}+e^{-\gamma (1-h)}}{1-e^{-\gamma }},\quad h = t-s \ge 0. \end{aligned}$$
(10)
This process is cyclically stationary, that is, the covariance only depends on \(|t-s|\) and \(1-|t-s|\). It is the unique Gaussian and Markovian prior with continuous periodic paths with this property. This makes the cyclically stationary Ornstein–Uhlenbeck prior an appealing choice which respects the symmetries of the problem.
Each realisation of V is continuous and can be extended to a periodic function on \(\mathbb {R}\). Then V can be represented as an infinite series expansion in the Faber–Schauder basis:
$$\begin{aligned} V_t= \sum _{i\ge 1} Z_i \psi _i(t) =Z_1 \psi _1(t) + \sum _{j=0}^\infty \sum _{k=1}^{2^j} Z_{j,k} \psi _{j,k}(t) \end{aligned}$$
(11)
Finally, by scaling by S and truncating at R we obtain from V the second choice of prior on the drift function b. Visualisations of the covariance kernels \({{\text {Cov}}}\left( b(s) , b(t) \right) \) for the first prior (Brownian bridge type) and the second prior (periodic Ornstein–Uhlenbeck process prior with parameter \(\gamma = 1.48\)) are shown in Fig. 2 (for \(S=1\) and \(R =\infty \)).
Fig. 2

Heat maps of \((s,t) \mapsto {{\text {Cov}}}\left( b(s) , b(t) \right) \), in case \(S=1\) and \(R=\infty \). Left Brownian bridge plus the random function \(Z_1\psi _1\). Right periodic Ornstein–Uhlenbeck process with parameter \(\gamma =1.48\) and \(\sigma ^2\) chosen such that \({{\text {Var}}}\left( b(s) \right) =1\)
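The kernel (10) is straightforward to evaluate; one can check numerically that it depends on s and t only through \(|t-s|\) and \(1-|t-s|\). The parameter value \(\gamma = 1.48\) below mirrors Fig. 2.

```python
import numpy as np

def ou_cov(s, t, gamma=1.48, sigma2=1.0):
    """Covariance kernel (10) of the cyclically stationary OU process."""
    h = np.abs(t - s)
    return (sigma2 / (2 * gamma)
            * (np.exp(-gamma * h) + np.exp(-gamma * (1 - h)))
            / (1 - np.exp(-gamma)))

# The variance ou_cov(t, t) is the same for every t (cyclic stationarity).
var0 = ou_cov(0.3, 0.3)
```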

2.4 Sparsity structure induced by choice of \(Z_i\)

Conditional on R and S, the posterior of \(Z^R\) is Gaussian with precision matrix \(G^R+\Gamma ^R\) (here \(G^R\) is the Grammian corresponding to using all basis functions up to and including level R).

If the coefficients are independent it is trivial to see that the precision matrix \(\Gamma \) does not destroy the sparsity structure of G, as defined in (6). This is convenient for numerical computations. The next lemma details the situation for periodic Ornstein–Uhlenbeck processes.

Lemma 2

Let V be defined as in Eq. (10).
  1. The sparsity structure of the precision matrix of the infinite stochastic vector Z (appearing in the series representation (11)) equals the sparsity structure of G, as defined in (6).
  2. The entries of the covariance matrix of the random Gaussian coefficients \(Z_i\) and \(Z_{i'}\), \(A_{i,i'} = \mathbb {E}Z_i Z_{i'}\), satisfy the following bounds: \(A_{11} = A_{22} = \tfrac{\sigma ^2}{2\gamma }\coth (\gamma /2)\) and, for \(\gamma \le 1.5\) and \(i\ge 3\),
     $$\begin{aligned} 0.95 \cdot 2^{-\ell (i)}\sigma ^2/4 \le A_{ii} \le 2^{-\ell (i)}\sigma ^2/4; \end{aligned}$$
     furthermore \(A_{12} = A_{21} = \tfrac{\sigma ^2}{2\gamma }\sinh ^{-1}(\gamma /2)\) and, for \(i\ne i'\),
     $$\begin{aligned} |A_{ii'}| \le {\left\{ \begin{array}{ll} 0.20\sigma ^22^{-1.5(\ell (i)\vee \ell ( i'))}&{} \qquad i \wedge i'\le 2<i\vee i',\\ 0.37 \sigma ^2 2^{-1.5(\ell (i)+\ell (i'))}&{} \qquad \text {otherwise.} \end{array}\right. } \end{aligned}$$

The proof is given in Sect. 6.2. By the first part of the lemma, this prior likewise does not destroy the sparsity structure of G. The second part asserts that while the off-diagonal entries of \(A^{r}\) are not zero, they are of smaller order than the diagonal entries, quantifying that the covariance matrix of the coefficients in the Schauder expansion is close to a diagonal matrix.
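The diagonal bounds in the second part of the lemma can be checked numerically: via the midpoint-displacement representation of Sect. 6.2, \(A_{ii} = {\text {Var}}\, Z_{j,k}\) can be computed directly from the kernel (10). The sketch below checks a handful of levels for \(\gamma = 1.48\), \(\sigma ^2 = 1\); the restriction to levels \(j \ge 1\) matches the lemma's condition \(i \ge 3\).

```python
import numpy as np

def K(s, t, gamma=1.48, sigma2=1.0):
    # covariance kernel (10)
    h = np.abs(t - s)
    return (sigma2 / (2 * gamma)
            * (np.exp(-gamma * h) + np.exp(-gamma * (1 - h)))
            / (1 - np.exp(-gamma)))

def var_Z(j, k, gamma=1.48, sigma2=1.0):
    """A_ii = Var Z_{j,k} for Z_{j,k} = V_m - (V_l + V_r)/2 (midpoint displacement)."""
    l, m, r = (k - 1) / 2 ** j, (k - 0.5) / 2 ** j, k / 2 ** j
    g, s2 = gamma, sigma2
    return (K(m, m, g, s2) + 0.25 * (K(l, l, g, s2) + K(r, r, g, s2))
            - K(m, l, g, s2) - K(m, r, g, s2) + 0.5 * K(l, r, g, s2))

# Diagonal bounds of Lemma 2 for levels j = 1..6 (i.e. i >= 3):
bounds_hold = all(
    0.95 * 2.0 ** (-j) / 4 <= var_Z(j, k) <= 2.0 ** (-j) / 4
    for j in range(1, 7) for k in (1, 2 ** j)
)
```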

3 Posterior contraction for diffusion processes

The main result in van der Meulen et al. (2006) gives sufficient conditions for deriving posterior contraction rates in Brownian semimartingale models. The following theorem is an adaptation and refinement of Theorem 2.1 and Lemma 2.2 of van der Meulen et al. (2006) for diffusions defined on the circle. We assume observations \(X^{T}\), where \(T \rightarrow \infty .\) Let \(\Pi ^T\) be a prior on \(L^2(\mathbb {T})\) (which henceforth may depend on T) and choose measurable subsets (sieves) \({\mathscr {B}}_T \subset L^2({{\mathrm{\mathbb {T}}}})\). Define the balls
$$\begin{aligned} B^T(b_0, \varepsilon ) = \left\{ b\in {\mathscr {B}}_T\,:\, \Vert b_0-b\Vert _2<\varepsilon \right\} . \end{aligned}$$
The \(\varepsilon \)-covering number of a set A for a semimetric \(\rho \), denoted by \(N(\varepsilon ,A,\rho )\), is defined as the minimal number of \(\rho \)-balls of radius \(\varepsilon \) needed to cover the set A. The logarithm of the covering number is referred to as the entropy.

The following theorem characterises the rate of posterior contraction for diffusions on the circle in terms of properties of the prior.

Theorem 3

Suppose \(\{\varepsilon _T\}\) is a sequence of positive numbers such that \(T \varepsilon _T^2\) is bounded away from zero. Assume that there is a constant \(\xi >0\) such that for every \(K>0\) there is a measurable set \({\mathscr {B}}_T\subseteq L^2(\mathbb {T})\) and for every \(a>0\) there is a constant \(C>0\) such that for T big enough
$$\begin{aligned} \log N\bigl (a\varepsilon _T,B^T(b_0, \varepsilon _T), \Vert \cdot \Vert _2\bigr )&\le C T \varepsilon _T^2,\end{aligned}$$
(12)
$$\begin{aligned} \Pi ^T\bigl (B^T(b_0,\varepsilon _T)\bigr )&\ge e^{-\xi T \varepsilon ^2_T}, \end{aligned}$$
(13)
and
$$\begin{aligned} \Pi ^T\bigl (L^2({{\mathrm{\mathbb {T}}}}){\setminus } {\mathscr {B}}_T\bigr ) \le e^{-KT \varepsilon ^2_T}. \end{aligned}$$
(14)
Then for every \(M_T\rightarrow \infty \)
$$\begin{aligned} P_{b_0} \Pi ^T\bigl (b\in L^2({{\mathrm{\mathbb {T}}}}): \Vert b-b_0\Vert _2\ge M_T \varepsilon _T \mid X^T\bigr )&\rightarrow 0 \end{aligned}$$
and for K big enough,
$$\begin{aligned} \Pi ^T\bigl (L^2({{\mathrm{\mathbb {T}}}}){\setminus } {\mathscr {B}}_T\mid X^T\bigr )&\rightarrow 0. \end{aligned}$$
(15)

Equations (12), (13) and (14) are referred to as the entropy condition, small ball condition and remaining mass condition of Theorem 3 respectively. The proof of this theorem is in Sect. 6.3.

4 Theorems on posterior contraction rates

The main result of this section, Theorem 9, characterises the frequentist rate of contraction of the posterior around a fixed parameter \(b_0\) of unknown smoothness, using the truncated series prior from Sect. 2.

We make the following assumption on the true drift function.

Assumption 4

The true drift \(b_0\) can be expanded in the Faber–Schauder basis, \(b_0=z_1\psi _1+\sum _{j=0}^\infty \sum _{k=1}^{2^{j}}z_{jk}\psi _{jk}=\sum _{i\ge 1}z_i\psi _i\) and there exists a \(\beta \in (0,\infty )\) such that
$$\begin{aligned} \llbracket b_0\rrbracket _{\beta }:=\sup _{i\ge 1} 2^{\beta \ell (i)}|z_{i}|<\infty . \end{aligned}$$
(16)

Note that we use a slightly different symbol for the norm, as we denote the \(L^2\)-norm by \(\Vert \cdot \Vert _2\).

Remark 5

If \(\beta \in (0,2)\), then Assumption 4 on \(b_0\) is equivalent to assuming \(b_0\) to be \(B_{\infty ,\infty }^\beta \)-Besov smooth. It follows from the definition of the basis functions that
$$\begin{aligned} z_{jk}=b_0\left( (2k-1)2^{-(j+1)}\right) -\frac{1}{2} b_0\left( (2k-2)2^{-(j+1)}\right) -\frac{1}{2} b_0\left( 2k\, 2^{-(j+1)}\right) . \end{aligned}$$
Therefore it follows from equations (4.72) (with \(r=2\)) and (4.73) (with \(p=\infty \)) in combination with equation (4.79) (with \(q=\infty \)) in Giné and Nickl (2016), Section 4.3, that \(\Vert b_0\Vert _\infty +\llbracket b_0\rrbracket _\beta \) is equivalent to the \(B_{\infty ,\infty }^\beta \)-norm of \(b_0\) for \(\beta \in (0,2)\).

If \(\beta \in (0,1)\), then \(\beta \)–Hölder smoothness and \(B^\beta _{\infty ,\infty }\)–smoothness coincide (cf. Proposition 4.3.23 in Giné and Nickl (2016)).
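Since the coefficients are second differences of \(b_0\) at dyadic midpoints, the seminorm (16) can be approximated from point evaluations. The sketch below computes a finite-level proxy (levels \(j \le 10\), ignoring the \(\psi _1\)-coefficient) for the smooth test function \(\sin (2\pi x)\), for which \(\beta = 2\) yields a finite value (close to \(\pi ^2/2\), by a Taylor expansion); the test function and cutoff are illustrative.

```python
import numpy as np

def fs_coeffs(b0, jmax):
    """z_{jk} = b0(midpoint) - average of b0 at the two dyadic endpoints."""
    out = []
    for j in range(jmax + 1):
        k = np.arange(1, 2 ** j + 1)
        m = (2 * k - 1) * 2.0 ** -(j + 1)
        l = (2 * k - 2) * 2.0 ** -(j + 1)
        r = 2 * k * 2.0 ** -(j + 1)
        out.append((j, b0(m) - 0.5 * b0(l) - 0.5 * b0(r)))
    return out

def besov_seminorm(b0, beta, jmax=10):
    """Finite-level proxy for (16): max over j <= jmax of 2^{beta*j} max_k |z_{jk}|."""
    return max(2.0 ** (beta * j) * float(np.max(np.abs(z)))
               for j, z in fs_coeffs(b0, jmax))

sn = besov_seminorm(lambda x: np.sin(2 * np.pi * x), beta=2.0)
```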

For the prior defined in Eqs. (7)–(9) we make the following assumptions.

Assumption 6

The covariance matrix A satisfies one of the following conditions:
  1. (A)

    For fixed \(\alpha >0\), \(A_{ii}=2^{-2\alpha \ell (i)}\) and \(A_{ii'}=0\) for \(i\ne i'\).

     
  2. (B)
    There exist constants \(0< c_1 < c_2\) and \(c_3 > 0\) with \(3 c_3 < c_1\), independent of r, such that for all \(i, i' \in {\mathscr {I}}_r\)
    $$\begin{aligned}&c_1 2^{-\ell (i)} \le A_{ii} \le c_2 2^{-\ell (i)},\\&|A_{ii'}| \le c_3 2^{-1.5(\ell (i)+\ell (i'))} \quad \text { if } i \ne i'. \end{aligned}$$
     

In particular, the second assumption is fulfilled by the prior defined by Eq. (10) for any \(0 < \gamma \le 3/2\) and \(\sigma ^2 > 0\).

Assumption 7

The prior on the truncation level satisfies for some positive constants \(c_1,c_2\),
$$\begin{aligned} {\mathrm {P}}(R>r)&\le \exp (-c_12^r r),\nonumber \\ {\mathrm {P}}(R=r)&\ge \exp (-c_22^r r). \end{aligned}$$
(17)
For the prior on the scaling we assume existence of constants \(0<p_1<p_2\), \(q>0\) and \(C>1\) with \(p_1>q|\alpha -\beta |\) such that
$$\begin{aligned} {\mathrm {P}}(S\in [x^{p_1},x^{p_2}])\ge \exp \big (-x^q\big ) \quad \text { for all }x \ge C. \end{aligned}$$
(18)

The prior on R can be defined as \(R= \lfloor \log _2 Y\rfloor \), where Y is Poisson distributed. Equation (18) is satisfied for a whole range of distributions, including the popular family of inverse gamma distributions. Since the inverse gamma prior on \(S^2\) decays polynomially (Lemma 17), condition (A2) of Shen and Ghosal (2015) is not satisfied and hence their posterior contraction results cannot be applied to our prior. We obtain the following result for our prior.
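Sampling these hyperpriors is elementary. In the sketch below, Y is forced to be at least 1 so that \(\log _2 Y\) is defined (a pragmatic convention on our part), and the inverse gamma draw for \(S^2\) is the reciprocal of a gamma draw; the Poisson mean and the inverse gamma parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_R(lam=10.0, size=1):
    """R = floor(log2 Y) with Y ~ Poisson(lam); Y is clipped to >= 1 (our convention)."""
    y = np.maximum(rng.poisson(lam, size=size), 1)
    return np.floor(np.log2(y)).astype(int)

def sample_S2(shape=2.0, scale=1.0, size=1):
    """S^2 ~ inverse gamma(shape, scale): reciprocal of a gamma draw."""
    return scale / rng.gamma(shape, 1.0, size=size)

R = sample_R(size=10_000)
S2 = sample_S2(size=10_000)
```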

Theorem 8

Assume \(b_0\) satisfies Assumption 4. Suppose the prior satisfies assumptions 6 and 7. Let \(\{\varepsilon _n\}_{n=1}^\infty \) be a sequence of positive numbers that converges to zero. There is a constant \(C_1>0\) such that for any \(C_2>0\) there is a measurable set \({\mathscr {B}}_n\subseteq L^2({{\mathrm{\mathbb {T}}}})\) such that for every \(a>0\) there is a positive constant \(C_3\) such that for n sufficiently large
$$\begin{aligned}&\log {\mathrm {P}}\left( \Vert b^{R,S}-b_0\Vert _\infty <\varepsilon _n\right) \ge -C_1\varepsilon _n^{-1/\beta }|\log \varepsilon _n| \end{aligned}$$
(19)
$$\begin{aligned}&\log {\mathrm {P}}\left( b^{R,S}\notin {\mathscr {B}}_n\right) \le -C_2\varepsilon _n^{-1/\beta }|\log \varepsilon _n|\end{aligned}$$
(20)
$$\begin{aligned}&\log N(a\varepsilon _n, \{b\in {\mathscr {B}}_n\,:\,\Vert b-b_0\Vert _2\le \varepsilon _n\},\Vert \,\cdot \,\Vert _\infty )\le C_3\varepsilon _n^{-1/\beta }|\log \varepsilon _n|. \end{aligned}$$
(21)

The following theorem is obtained by applying these bounds to Theorem 3 after taking \(\varepsilon _n=(T / \log T)^{-\beta /(1+2\beta )}\).

Theorem 9

Assume \(b_0\) satisfies Assumption 4. Suppose the prior satisfies assumptions 6 and 7. Then for all \(M_T\rightarrow \infty \)
$$\begin{aligned} P_{b_0} \Pi ^T \left( \left. b:\Vert b - b_0\Vert _2 \ge M_T\left( \frac{T}{\log T}\right) ^{-\frac{\beta }{1+2\beta }} \;\right| \, X^T\right) \rightarrow 0 \end{aligned}$$
as \(T \rightarrow \infty \).

This means that when the true parameter is in \(B_{\infty ,\infty }^\beta [0,1]\) with \(\beta <2\), a rate is obtained that is optimal up to a log factor. When \(\beta \ge 2\), then \(b_0\) is in particular in the space \(B_{\infty ,\infty }^{2-\delta }[0,1]\) for every small positive \(\delta \), and therefore the posterior contracts at rate essentially \(T^{-2/5}\).

Theorem 9 and the derived results for the applications below remain valid when a different function \(\Lambda \), defined on a compact interval of \(\mathbb {R}\), is used and the basis elements are defined by \(\psi _{jk}(x)=\sum _{m\in \mathbb {Z}}\Lambda (2^{j}(x-m)+k-1)\), forcing them to be 1-periodic, provided that \(\Vert \psi _{jk}\Vert _\infty = 1\), that \(\psi _{j,k}\cdot \psi _{j,l}\equiv 0\) when \(|k-l|\ge d\) for a fixed \(d \in \mathbb {N}\), and that the smoothness assumptions on \(b_0\) are changed accordingly. A finite number of basis elements can be added or redefined as long as they are 1-periodic.

It is easy to see that our results imply posterior convergence rates in the weaker \(L^p\)-norms, \(1\le p<2\), at the same rate. When \(p\in (2,\infty ]\) the \(L^p\)-norm is stronger than the \(L^2\)-norm. We apply ideas of Knapik and Salomond (2014) to obtain rates for these stronger \(L^p\)-norms.

Theorem 10

Assume the true drift \(b_0\) satisfies Assumption 4. Suppose the prior satisfies assumptions 6 and 7. Let \(p\in (2,\infty ]\). Then for all \(M_T\rightarrow \infty \)
$$\begin{aligned} P_{b_0} \Pi ^T \left( b:\Vert b - b_0\Vert _p \ge M_TT^{-\frac{\beta -1/2+1/p}{1+2\beta }}(\log T)^{\frac{2\beta -2\beta /p}{1+2\beta }} \Bigm \vert X^T\right) \rightarrow 0 \end{aligned}$$
as \(T \rightarrow \infty .\)

These rates are similar to the rates obtained for density estimation in Giné and Nickl (2011); our proof, however, is less involved. Note that consistency is only obtained for \(\beta >1/2-1/p\).

5 Applications to nonparametric regression and density estimation

Our general results also apply to other models. The following results are obtained for \(b_0\) satisfying Assumption 4 and the prior satisfying assumptions 6 and 7.

5.1 Nonparametric regression model

As a direct application of the properties of the prior shown in the previous section, we obtain the following result for a nonparametric regression problem. Assume
$$\begin{aligned} X_{i}^n=b_0(i/n)+\eta _i, \quad 0 \le i \le n, \end{aligned}$$
(22)
with independent Gaussian observation errors \(\eta _i\sim \mathrm N(0,\sigma ^2)\). When we apply Ghosal and van der Vaart (2007), example 7.7 to Theorem 8 we obtain, for every \(M_n\rightarrow \infty \),
$$\begin{aligned} \Pi \left. \left( b:\Vert b - b_0\Vert _2 \ge M_n\left( \frac{n}{\log n}\right) ^{-\frac{\beta }{1+2\beta }} \;\right| \; X^n\right) \overset{{\mathrm {P}}^{b_0}}{\longrightarrow } 0 \end{aligned}$$
as \(n\rightarrow \infty \) and (in a similar way as in Theorem 10) for every \(p\in (2,\infty ]\),
$$\begin{aligned} \Pi \left. \left( b:\Vert b - b_0\Vert _p \ge M_nn^{-\frac{\beta -1/2+1/p}{1+2\beta }}(\log n)^{\frac{2\beta -2\beta /p}{1+2\beta }} \;\right| \; X^n\right) \overset{{\mathrm {P}}^{b_0}}{\longrightarrow } 0 \end{aligned}$$
as \(n \rightarrow \infty \).

5.2 Density estimation

Let us consider n independent observations \(X^n := (X_1,\ldots ,X_n)\) with \(X_i\sim p_0\) where \(p_0\) is an unknown density on [0, 1] relative to the Lebesgue measure. Let \(\mathscr {P}\) denote the space of densities on [0, 1] relative to the Lebesgue measure. The natural distance for densities is the Hellinger distance h defined by
$$\begin{aligned} h(p,q)^2=\int _0^1\left( \sqrt{p(x)}-\sqrt{q(x)}\right) ^2{\,\mathrm {d}}x. \end{aligned}$$
Define the prior on \(\mathscr {P}\) by \(p=\frac{\mathrm {e}^b}{\Vert e^b\Vert _1},\) where b is endowed with the prior of Theorem 9 or its non-periodic version. Assume that \(\log p_0\) is \(\beta \)-smooth in the sense of Assumption 4. Applying Ghosal et al. (2000), theorem 2.1 and van der Vaart and van Zanten (2008), lemma 3.1 to Theorem 8, we obtain for a big enough constant \(M>0\)
$$\begin{aligned} \Pi \left. \left( p\in \mathscr {P}:h(p,p_0)\ge M\left( \frac{n}{\log n}\right) ^{-\frac{\beta }{1+2\beta }}\;\right| \; X^n\right) \xrightarrow {{\mathrm {P}}_0} 0, \end{aligned}$$
as \(n\rightarrow \infty \).
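The normalisation \(p=\mathrm {e}^b/\Vert \mathrm e^b\Vert _1\) and the Hellinger distance are easy to evaluate on a grid. The sketch below uses trapezoidal quadrature and the illustrative choice \(b(x)=\sin (2\pi x)\); with \(b=0\) one recovers the uniform density.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 2001)
dx = np.diff(x)

def integrate(f):
    # trapezoidal rule on the fixed grid over [0, 1]
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * dx))

def density_from_b(b_vals):
    """p = exp(b) / ||exp(b)||_1, the prior's push-forward to densities."""
    w = np.exp(b_vals)
    return w / integrate(w)

def hellinger(p, q):
    """h(p, q) = ( int_0^1 (sqrt p - sqrt q)^2 dx )^{1/2}."""
    return np.sqrt(integrate((np.sqrt(p) - np.sqrt(q)) ** 2))

p = density_from_b(np.sin(2 * np.pi * x))
q = density_from_b(np.zeros_like(x))  # b = 0 gives the uniform density
d = hellinger(p, q)
```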

6 Proofs

6.1 Proof of lemma 1

Since conditions (ND) and (LI) of (Karatzas and Shreve 1991, theorem 5.15) hold, the SDE Eq. (1) has a unique weak solution up to an explosion time.

Assume without loss of generality that \(X_0 = 0\). Define \(\tau _0=0\) and for \(i\ge 1\) the random times
$$\begin{aligned} \tau _i = \inf \{t \ge \tau _{i-1} :|X_{t} - X_{\tau _{i-1}}| = 1\}. \end{aligned}$$
By periodicity of the drift and the Markov property, the random variables \(U_i = \tau _{i}-\tau _{i-1}\) are independent and identically distributed.
Note that
$$\begin{aligned} \inf \{t:X_t = \pm n\} \ge \sum _{i=1}^n U_i \end{aligned}$$
and hence non-explosion follows from \(\lim _{n\rightarrow \infty }\sum _{i=1}^n U_i=\infty \) almost surely. The latter holds true since \(U_1>0\) with positive probability, which is clear from the continuity of diffusion paths.

6.2 Proof of lemma 2

Proof of the first part

For the proof we introduce some notation: for any \((j, k)\), \((j', k')\) we write \((j, k) \prec (j', k')\) if \(\text {supp}\, \psi _{j',k'}\subset \text {supp}\, \psi _{j,k}\). The set of indices becomes a lattice with partial order \(\prec \), and by \((j,k) \vee (j',k')\) we denote the supremum. Identify i with \((j, k)\) and similarly \(i'\) with \((j',k')\).

For \(i>1\), denote by \(t_i\) the time point in [0, 1] corresponding to the maximum of \(\psi _i\). Without loss of generality assume \(t_i<t_{i'}\). We have \(G_{i,i'}=0\) if and only if the interiors of the supports of \(\psi _i\) and \(\psi _{i'}\) are disjoint. In that case
$$\begin{aligned} \max \text {supp}\,\psi _{j,k} \le t_{(j,k) \vee (j',k')} \le \min \text {supp}\, \psi _{j',k'}. \end{aligned}$$
(23)
The values of \(Z_i\) can be found by the midpoint displacement technique. The coefficients are given by \(Z_1 = V_0\), \(Z_2 = V_{\frac{1}{2}}\) and for \(j\ge 1\)
$$\begin{aligned} Z_{j,k} = V_{2^{-j}\left( k-1/2\right) } - \frac{1}{2}\left( V_{2^{-j}(k-1)} + V_{{2^{-j}k}}\right) . \end{aligned}$$
As V is a Gaussian process, the vector Z is mean-zero Gaussian, say with (infinite) precision matrix \(\Gamma \). Now \(\Gamma _{i,i'}=0\) if there exists a set \({\mathscr {L}}\subset \mathbb {N}\) with \({\mathscr {L}} \cap \{i,i'\}=\varnothing \) such that, conditional on \(\{ Z_{i^\star },\, i^\star \in {\mathscr {L}}\}\), \(Z_i\) and \(Z_{i'}\) are independent.
Define \((j^\star , k^\star )=(j,k) \vee (j',k')\) and
$$\begin{aligned} {\mathscr {L}}=\{i^\star \in \mathbb {N}\,:\, i^\star =2^j+k, \text { with } j\le j^\star \}. \end{aligned}$$
The set \(\{ Z_{i^\star },\, i^\star \in {\mathscr {L}}\}\) determines the process V at all times \(k 2^{-j^\star -1}\), \(k=0,\ldots ,2^{j^\star +1}\). Now \(Z_i\) and \(Z_{i'}\) are conditionally independent given \(\{V_t, t=k 2^{-j^\star -1},\, k=0,\ldots ,2^{j^\star +1}\}\) by (23) and the Markov property of the nonperiodic Ornstein–Uhlenbeck process. The result follows since \(\sigma (\{Z_{i^\star },\, i^\star \in {\mathscr {L}}\})=\sigma (\{V_t, t=k 2^{-j^\star -1},\, k=0,\ldots ,2^{j^\star +1}\})\). \(\square \)
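The midpoint-displacement formula above can be illustrated numerically. The sketch below (not part of the paper; the helper name `fs_coefficients` and the test function are arbitrary choices) computes the coefficients \(Z_1, Z_2, Z_{j,k}\) of a given function on [0, 1]:

```python
import numpy as np

def fs_coefficients(V, levels):
    """Faber-Schauder coefficients of a function V on [0, 1] via midpoint
    displacement: Z_1 = V(0), Z_2 = V(1/2), and
    Z_{j,k} = V(2^{-j}(k - 1/2)) - (V(2^{-j}(k-1)) + V(2^{-j} k)) / 2."""
    Z = {1: V(0.0), 2: V(0.5)}
    for j in range(levels + 1):
        for k in range(1, 2 ** j + 1):
            a, b = 2.0 ** -j * (k - 1), 2.0 ** -j * k
            Z[(j, k)] = V((a + b) / 2) - (V(a) + V(b)) / 2
    return Z

# For V(t) = t^2, every level-j coefficient equals -(b - a)^2 / 4 = -2^{-2j} / 4.
Z = fs_coefficients(lambda t: t ** 2, levels=3)
assert np.isclose(Z[(2, 3)], -2.0 ** -4 / 4)
```

The deterministic test function makes the coefficients explicit; for the Ornstein–Uhlenbeck process V the same recursion yields the Gaussian vector Z discussed above.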

Lemma 11

Let \(K(s,t)=\mathbb {E}{V}_s {V}_t= \frac{\sigma ^2}{2\gamma }\frac{1}{1-e^{-\gamma }}\left( e^{-\gamma |t-s|}+e^{-\gamma (1-|t-s|)}\right) \). If \(x \notin (s,t)\), then
$$\begin{aligned} \frac{1}{2}K(s, x) - K\left( \tfrac{s+t}{2},x\right) +\frac{1}{2} K(t,x) = 2 \sinh ^2\left( \gamma \tfrac{t-s}{4}\right) K\left( \tfrac{t+s}{2}, x\right) . \end{aligned}$$

Proof

Without loss of generality assume that \( t \le x \le 1\). With \(m= (t+s)/2\) and \(\delta = (t-s)/2\)
$$\begin{aligned}&\left( e^{-\gamma |s-x|}+e^{-\gamma (1-|s-x|)}\right) - 2\left( e^{-\gamma |m-x|}+e^{-\gamma (1-|m-x|)}\right) + \left( e^{-\gamma |t-x|}+e^{-\gamma (1-|t-x|)}\right) \\&\quad = e^{-\gamma |t-x|}e^{-2\gamma \delta } - 2 e^{-\gamma |t-x|} e^{-\gamma \delta }+ e^{-\gamma |t-x|} + e^{-\gamma (1-|s-x|)} - 2e^{-\gamma (1-|s-x|)}e^{-\gamma \delta }\\&\qquad +\,e^{-\gamma (1-|s-x|)}e^{-2\gamma \delta }= (1-e^{-\gamma \delta })^2 (e^{-\gamma |t-x|} + e^{-\gamma (1-|s-x|)}) \\&\quad = \left( 1-e^{-\gamma \delta }\right) ^2 e^{\gamma \delta }\left( e^{-\gamma |m-x|} + e^{-\gamma (1-|m-x|)}\right) \end{aligned}$$
The result follows from \((1-e^{-\gamma \delta })^2 e^{\gamma \delta }= 4\sinh ^2(\gamma \delta /2)\) and multiplying both sides by \(\tfrac{1}{2} \frac{\sigma ^2}{2\gamma }\frac{1}{1-e^{-\gamma }} \). \(\square \)
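The identity of Lemma 11 can be spot-checked numerically; the parameter values below are arbitrary choices with \(x\notin (s,t)\):

```python
import numpy as np

gamma, sigma = 1.2, 1.0  # arbitrary parameter values for the spot check

def K(s, t):
    # Covariance kernel of the periodic Ornstein-Uhlenbeck process, as in Lemma 11.
    d = abs(t - s)
    return (sigma ** 2 / (2 * gamma)) / (1 - np.exp(-gamma)) * (
        np.exp(-gamma * d) + np.exp(-gamma * (1 - d)))

s, t, x = 0.2, 0.3, 0.6                       # x lies outside (s, t)
lhs = 0.5 * K(s, x) - K((s + t) / 2, x) + 0.5 * K(t, x)
rhs = 2 * np.sinh(gamma * (t - s) / 4) ** 2 * K((t + s) / 2, x)
assert np.isclose(lhs, rhs)
```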

Proof of the second part

Denote by [a, b] and [c, d] the supports of \(\psi _i\) and \(\psi _{i'}\) respectively and let \(m = (b+a)/2\) and \(n = (d+c)/2\), but for \(i=1\), let \(m=0\). We have \(Z_1 = V_0\), \(Z_2 = V_{1/2}\) and \({{\text {Var}}}\left( Z_1 \right) = {{\text {Var}}}\left( Z_2 \right) = \frac{\sigma ^2}{2\gamma }\coth (\gamma /2)\), and \({{\text {Cov}}}\left( Z_1 , Z_2 \right) = \frac{\sigma ^2}{2\gamma }\sinh ^{-1}(\gamma /2)\). Note that the \(2\times 2\) covariance matrix of \(Z_1\) and \(Z_2\) has eigenvalues \(\tfrac{\sigma ^2}{2\gamma } {\text {tanh}}(\gamma /4)\) and \(\tfrac{\sigma ^2}{2\gamma } \coth (\gamma /4)\) and is strictly positive definite.
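The eigenvalue claim for this \(2\times 2\) covariance matrix can be spot-checked numerically (the values of \(\gamma \) and \(\sigma \) below are arbitrary):

```python
import numpy as np

gamma, sigma = 1.2, 1.0  # arbitrary parameter values
# Covariance matrix of (Z_1, Z_2): diagonal (sigma^2/2gamma) coth(gamma/2),
# off-diagonal (sigma^2/2gamma) / sinh(gamma/2).
a = sigma ** 2 / (2 * gamma) / np.tanh(gamma / 2)
b = sigma ** 2 / (2 * gamma) / np.sinh(gamma / 2)
eig = np.linalg.eigvalsh(np.array([[a, b], [b, a]]))
# Claimed eigenvalues: (sigma^2/2gamma) tanh(gamma/4) and (sigma^2/2gamma) coth(gamma/4).
expected = sigma ** 2 / (2 * gamma) * np.array([np.tanh(gamma / 4), 1 / np.tanh(gamma / 4)])
assert np.allclose(np.sort(eig), np.sort(expected))
```

The check rests on the half-angle identities \(\coth (x)+\sinh ^{-1}(x)=\coth (x/2)\) and \(\coth (x)-\sinh ^{-1}(x)=\tanh (x/2)\).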

By midpoint displacement, \(2Z_{i} = 2V_{m} - V_{a} - V_{b}\), \(i > 2\) and \(K(s,t)=\mathbb {E}{V}_s {V}_t= \frac{\sigma ^2}{2\gamma }\frac{1}{1-e^{-\gamma }} ( e^{-\gamma |t-s|}+e^{-\gamma (1-|t-s|)})\).

Assume without loss of generality \(b-a \ge d-c\). Define \(\delta \) to be the half-width of the smaller interval, so that \(\delta := (d-c)/2= 2^{-j'-1}\). Then
$$\begin{aligned} (b-a)/2 =2^{-j-1}=h \delta ,\quad \text {with}\quad h=2^{j'-j}. \end{aligned}$$
Consider three cases:

1. The entries on the diagonal, \(i = i'\);
2. The interiors of the supports of \(\psi _i\) and \(\psi _{i'}\) are non-overlapping;
3. The support of \(\psi _{i'}\) is contained in the support of \(\psi _i\).
Case 1. By elementary computations for \(i > 2\),
$$\begin{aligned} 4 \frac{2\gamma }{\sigma ^2} (1-e^{-\gamma }) A_{ii}= & {} 6(1+e^{-\gamma }) + 2(e^{-\gamma 2 \delta } + e^{-\gamma (1- 2\delta )}) - 8 ( e^{-\gamma \delta } + e^{-\gamma (1-\delta )} )\\= & {} 2 (1-e^{-\gamma \delta }) ( 3 -e^{-\gamma \delta }) + 2e^{-\gamma } (1-e^{\gamma \delta }) ( 3 -e^{\gamma \delta }) . \end{aligned}$$
As \(\delta \le \tfrac{1}{4}\) and under the assumption \(\gamma \le 3/2\) the last display can be bounded by
$$\begin{aligned} 0.9715\cdot 4 \gamma \delta (1-e^{-\gamma }) \le 4 \frac{2\gamma }{\sigma ^2} (1-e^{-\gamma }) A_{ii} \le 4 \gamma \delta (1-e^{-\gamma }) . \end{aligned}$$
Hence \( 0.9715\cdot 2^{-j}\sigma ^2/4\le A_{ii}\le 2^{-j}\sigma ^2/4\).
Case 2. Necessarily \(i, i' > 2\). By twofold application of lemma 11
$$\begin{aligned} A_{ii'}&= (K(c,b)- 2K(n,b)+K(d,b))/4\nonumber \\&\quad -2(K(c,m) - 2K(n,m) + K(d,m))/4\nonumber \\&\quad +(K(c,a)- 2K(n,a)+K(d,a))/4\nonumber \\&=2 \sinh ^2(\gamma \tfrac{d-c}{4}) (K(n,b) - 2K(n,m) + K(n,a))/2\nonumber \\&=4 \sinh ^2(\gamma \tfrac{b-a}{4})\sinh ^2(\gamma \tfrac{d-c}{4}) K(n, m). \end{aligned}$$
(24)
Using the convexity of \(\sinh \) we obtain the bound
$$\begin{aligned} 2\sinh ^2(x/2) \le 0.55 x^2 \end{aligned}$$
(25)
for \(0 \le x \le 1\). Note that \(f(x)=e^{-\gamma x}+e^{-\gamma (1-x)}\) is convex on [0, 1], from which we derive \(f(x)\le 1+e^{-\gamma }\). Using this bound, and the fact that for \(\gamma \le 3/2\),
$$\begin{aligned} \gamma ^2 K(n,m)\le \tfrac{\sigma ^2}{2}\gamma \coth (\gamma /2) \le \sigma ^2( 1 +\gamma /2), \end{aligned}$$
(26)
which can be verified numerically, we obtain
$$\begin{aligned} | A_{ii'}|&\le 0.55^2\gamma ^4\cdot 2^{-2j-2} \cdot 2^{-2j'-2}|K(n,m)|\\&\le 0.0095\sigma ^2\gamma ^2(1+\gamma /2)2^{-1.5(j + j')}. \end{aligned}$$
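The elementary bound (25) used in this case can be verified on a grid (a numeric sketch, not part of the proof):

```python
import numpy as np

# Grid check of the bound (25): 2 sinh^2(x/2) <= 0.55 x^2 for 0 <= x <= 1.
# By convexity the ratio 2 sinh^2(x/2) / x^2 is increasing, so the worst
# case is x = 1, where 2 sinh^2(1/2) = 0.5431 < 0.55.
x = np.linspace(0.0, 1.0, 1001)
assert np.all(2 * np.sinh(x / 2) ** 2 <= 0.55 * x ** 2)
```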
Case 3.
For \( i' > 2\), \(i = 1\) with \(m = 0\) or \(i = 2\) with \(m = \frac{1}{2}\), using Eq. (26), we obtain
$$\begin{aligned} |A_{ii'}|&= |K(m,n) - \frac{1}{2}K(m,c) - \frac{1}{2}K(m,d)|\nonumber \\&\le 2 \sinh ^2(\gamma \tfrac{d-c}{4}) K(m,n) \nonumber \\&\le 0.55\gamma ^22^{-2j'-2}K(m,n)\nonumber \\&\le 0.098\sigma ^2(1+\gamma /2) 2^{-1.5j}. \end{aligned}$$
(27)
When \(i,i'>2\), using the calculation in Eq. (24) and Lemma 11 and noting that a, b and m are not in (c, d), we obtain
$$\begin{aligned} A_{ii'} = 2 \sinh ^2\left( \gamma \tfrac{d-c}{4}\right) (K(n,b) - 2K(n,m) + K(n,a))/2. \end{aligned}$$
Write \(x=\gamma |a-m|=\gamma |b-m|=\gamma h\delta \) and \(\alpha =\frac{|m-n|}{|b-m|}\in (0,1)\). A simple computation then shows
$$\begin{aligned} e^{-\gamma |b-n|} - 2e^{-\gamma |m-n|} + e^{-\gamma |a-n|}=e^{-(1+\alpha )x}-2e^{-\alpha x}+e^{-(1-\alpha )x}. \end{aligned}$$
The derivative of \(f(\alpha ):=e^{-(1+\alpha )x}-2e^{-\alpha x}+e^{-(1-\alpha )x}\) is nonnegative for \(\alpha ,x>0\); hence \(f(\alpha )\) is increasing and so \(f(0)\le f(\alpha )\le f(1)\). Note that \(f(0)= 2e^{-x}-2\ge -2x\) for \(x>0\) and \(f(1)= e^{-2x}-2e^{-x}+1=:g(x)\). Maximising \(g'(x)\) over \(x>0\) gives \(g'(x)\le 1/2\); since \(g(0)=0\), it follows that \(f(1)=g(x)\le x/2\).
It follows that
$$\begin{aligned} -2\gamma h\delta \le e^{-\gamma |b-n|} - 2e^{-\gamma |m-n|} + e^{-\gamma |a-n|}\le \gamma h\delta / 2. \end{aligned}$$
For the other terms we derive the following bounds. Write
$$\begin{aligned}&e^{-\gamma (1-|b-n|)} - 2e^{-\gamma (1-|m-n|)} + e^{-\gamma (1-|a-n|)}\\&\quad = e^{-\gamma +(1+\alpha )x}-2e^{-\gamma +\alpha x}+e^{-\gamma +(1-\alpha )x}=:h(\alpha ). \end{aligned}$$
Now \(h(\alpha )\) is decreasing for \(x\le \log 2\) and convex and positive for \(x\ge \log 2\). In both cases we can bound \(h(\alpha )\) by its values at the endpoints \(\alpha =0\) and \(\alpha =1\). Using that \(2x\le \gamma \) we obtain \(0\le h(0)= e^{-\gamma }(2e^x-2)\le 2x\) and \( 0 \le h(1)= e^{-\gamma }\big (e^{2x}-2e^x+1\big )\le 2x\). So \(0\le h(\alpha )\le 2\gamma h\delta \).
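The two sandwich bounds just derived, \(-2x\le f(\alpha )\le x/2\) and \(0\le h(\alpha )\le 2x\) on a grid with \(2x\le \gamma \), can be spot-checked numerically (a sketch with \(\gamma =1.5\), the largest value allowed in the proof):

```python
import numpy as np

gamma = 1.5  # largest gamma allowed in the proof; the grids are arbitrary
for x in np.linspace(0.01, gamma / 2, 30):
    a = np.linspace(0.0, 1.0, 101)  # the variable alpha
    f = np.exp(-(1 + a) * x) - 2 * np.exp(-a * x) + np.exp(-(1 - a) * x)
    h = np.exp(-gamma) * (np.exp((1 + a) * x) - 2 * np.exp(a * x) + np.exp((1 - a) * x))
    assert np.all(f >= -2 * x) and np.all(f <= x / 2)
    assert np.all(h >= 0) and np.all(h <= 2 * x)
```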
Using the bound Eq. (25) and \(x/(1-\exp (-x))\le (1+x)\) we obtain
$$\begin{aligned} |A_{ii'}|\le 0.061 \sigma ^2 \gamma (1+\gamma ) 2^{-1.5(j+j')}. \end{aligned}$$

6.3 Proof of theorem 3

A general result for deriving contraction rates for Brownian semi-martingale models was proved in van der Meulen et al. (2006). Theorem 3 follows upon verifying the assumptions of this result for the diffusion on the circle. These assumptions are easily seen to boil down to:
1. For every \(T>0\) and \(b_1,b_2\in L^2(\mathbb {T})\) the measures \(P^{b_1,T}\) and \(P^{b_2,T}\) are equivalent.
2. The posterior as defined in Eq. (5) is well defined.
3. Define the (random) Hellinger semimetric \(h_T\) on \(L^2(\mathbb {T})\) by
   $$\begin{aligned} h_T^2(b_1,b_2):= \int _0^{T} \Bigl (b_1-b_2\Bigr )^2(X_t)\,{\,\mathrm {d}}t, \quad b_1,\, b_2 \in L^2(\mathbb {T}). \end{aligned}$$
   (28)
   There are constants \(0<c<C\) for which
   $$\begin{aligned} \lim _{T\rightarrow \infty } P^{b_0,T}\Bigl (c\sqrt{T}\Vert b_1-b_2\Vert _2\le h_T(b_1,b_2) \le C\,\sqrt{T}\Vert b_1-b_2\Vert _2, \forall \, b_1, b_2\in L^2(\mathbb {T}) \Bigr ) =1. \end{aligned}$$
We start by verifying the third condition. Recall that the local time of the process \(X^T\) is defined as the random process \(L_T(x)\) which satisfies
$$\begin{aligned} \int _0^T f(X_t){\,\mathrm {d}}t=\int _\mathbb {R}f(x)L_T(x){\,\mathrm {d}}x. \end{aligned}$$
for every measurable function f for which the above integrals are defined. Since we are working with 1-periodic functions, we define the periodic local time by
$$\begin{aligned} \mathring{L}_T(x)=\sum _{k\in \mathbb {Z}}L_T(x+k). \end{aligned}$$
Note that \(t\mapsto X_t\) is continuous with probability one. Hence the range of \(t\mapsto X_t\) on [0, T] is compact with probability one. Since \(x \mapsto L_T(x)\) is only positive on this range, it follows that the sum in the definition of \(\mathring{L}_T(x)\) has only finitely many nonzero terms and is therefore well defined. For a 1-periodic function f we have
$$\begin{aligned} \int _0^Tf(X_t){\,\mathrm {d}}t=\int _0^1f(x)\mathring{L}_T(x){\,\mathrm {d}}x, \end{aligned}$$
provided the involved integrals exist. It follows from (Schauer and van Zanten 2017, Theorem 5.3) that \(\mathring{L}_T(x)/T\) converges to a positive deterministic function depending only on \(b_0\) that is bounded away from zero and infinity. Since the Hellinger distance can be written as
$$\begin{aligned} h_T(b_1,b_2) = \sqrt{T}\sqrt{\int _0^1(b_1(x)-b_2(x))^2\frac{\mathring{L}_T(x)}{T}{\,\mathrm {d}}x} \end{aligned}$$
it follows that the third assumption is satisfied with \(d_T(b_1,b_2)=\sqrt{T}\Vert b_1-b_2\Vert _2\).

Conditions 1 and 2 now follow by arguing precisely as in lemmas A.2 and 3.1 of van Waaij and van Zanten (2016) respectively (the key observation being that the convergence result for \(\mathring{L}_T(x)/T\) also holds when \(\int _0^1b(x){\,\mathrm {d}}x\) is nonzero, whereas that integral is assumed to vanish in that paper).

The stated result follows from Theorem 2.1 in van der Meulen et al. (2006) (taking \(\mu _T=\sqrt{T} \varepsilon _T\) in their paper).

6.4 Proof of theorem 8 with Assumption 6 (A)

The proof proceeds by verifying the conditions of theorem 3. By Assumption 4 the true drift can be represented as \( b_0=z_1\psi _1+\sum _{j=0}^\infty \sum _{k=1}^{2^{j}}z_{jk}\psi _{jk}\). For \(r\ge 0\), define its truncated version by
$$\begin{aligned} b^r_0=z_1\psi _1+\sum _{j=0}^r\sum _{k=1}^{2^{j}}z_{jk}\psi _{jk}. \end{aligned}$$

6.4.1 Small ball probability

For \(\varepsilon >0 \) choose an integer \(r_\varepsilon \) with
$$\begin{aligned} C_\beta \varepsilon ^{-1/\beta } \le 2^{r_\varepsilon } \le 2C_\beta \varepsilon ^{-1/\beta } \quad \text { where }\quad C_\beta = \frac{\llbracket b_0\rrbracket _\beta ^{1/\beta }}{(2^\beta -1)^{1/\beta }}. \end{aligned}$$
(29)
For notational convenience we will write r instead of \(r_\varepsilon \) in the remainder of the proof. By lemma 16 we have \(\Vert b_0^{r}-b_0\Vert _\infty \le \varepsilon \). Therefore
$$\begin{aligned} \Vert b^{r,s}-b_0\Vert _2 \le \Vert b^{r,s}-b_0^{r}\Vert _2 + \Vert b^{r}_0-b_0\Vert _2 \le \Vert b^{r,s} - b_0^{r}\Vert _\infty + \varepsilon \end{aligned}$$
which implies
$$\begin{aligned} {\mathrm {P}}\left( \Vert b^{r,s}-b_0\Vert _2< 2\varepsilon \right) \ge {\mathrm {P}}\left( \Vert b^{r,s}-b_0^{r}\Vert _\infty <\varepsilon \right) . \end{aligned}$$
Let \(f_S\) denote the probability density of S. We have
$$\begin{aligned} {\mathrm {P}}\left( \Vert b^{R,S}-b_0\Vert _2< 2\varepsilon \right)&= \sum _{r\ge 1} {\mathrm {P}}(R=r) \int _0^\infty {\mathrm {P}}\left( \Vert b^{r,s}-b_0\Vert _2< 2\varepsilon \right) f_S(s) \,{\,\mathrm {d}}s\nonumber \\&\ge {\mathrm {P}}(R=r) \inf _{s\in [L_\varepsilon , U_\varepsilon ]} {\mathrm {P}}\left( \Vert b^{r,s}-b_0^{r}\Vert _\infty < \varepsilon \right) \int _{L_\varepsilon }^{U_\varepsilon } f_S(s) {\,\mathrm {d}}s, \end{aligned}$$
(30)
where
$$\begin{aligned} L_\varepsilon = \varepsilon ^{-\frac{p_1}{q\beta }} \qquad \text {and} \qquad U_\varepsilon = \varepsilon ^{-\frac{p_2}{q\beta }} \end{aligned}$$
and \(p_1, p_2\) and q are taken from Assumption 7. For \(\varepsilon \) sufficiently small, we have by the second part of Assumption 7
$$\begin{aligned} \int _{L_\varepsilon }^{U_\varepsilon } f_S(s) {\,\mathrm {d}}s \ge \exp \big (-\varepsilon ^{-\frac{1}{\beta }}\big ) \end{aligned}$$
By choice of r and the first part of Assumption 7, there exists a positive constant C such that
$$\begin{aligned} {\mathrm {P}}(R=r)\ge \exp \Big (-c_22^{r}{r}\Big )\ge \exp \Big (-C\varepsilon ^{-\frac{1}{\beta }}|\log \varepsilon |\Big ), \end{aligned}$$
for \(\varepsilon \) sufficiently small.
For lower bounding the middle term in Eq. (30), we write
$$\begin{aligned} b^{r,s}-b_0^{r} = (sZ_1-z_1)\psi _1+\sum _{j=0}^{r} \sum _{k=1}^{2^{j}} (s Z_{jk}-z_{jk}) \psi _{jk} \end{aligned}$$
which implies
$$\begin{aligned} \Vert b^{r,s}-b_0^{r}\Vert _\infty \le |sZ_1-z_1|+ \sum _{j=0}^{r} \max _{1\le k \le 2^{j}} |s Z_{jk}-z_{jk}| \le (r+2) \max _{i \in {\mathscr {I}}_{r}} |s Z_{i}-z_{i}| . \end{aligned}$$
This gives the bound
$$\begin{aligned} {\mathrm {P}}\left( \Vert b^{r,s}-b_0^{r}\Vert _\infty<\varepsilon \right) \ge \prod _{i \in {\mathscr {I}}_{r}} {\mathrm {P}}\Big (|sZ_{i}-z_{i}| <\frac{\varepsilon }{ r+2}\Big ). \end{aligned}$$
By the choice of the \(Z_i\), for all \(i\in \{1,2,\ldots \}\) the variable \(2^{\alpha \ell (i)}Z_i\) is standard normally distributed and hence
$$\begin{aligned} \log {\mathrm {P}}\left( |sZ_{i}-z_{i}|<\frac{\varepsilon }{r+2}\right)&= \log {\mathrm {P}}\left( \left| 2^{\alpha \ell (i)} Z_{i} - 2^{\alpha \ell (i)} z_{i}/s\right| < \frac{2^{\alpha \ell (i)} \varepsilon }{(r+2) s}\right) \\&\ge \log \left( \frac{2^{\alpha \ell (i)}\varepsilon }{ (r+2) s}\right) - \frac{2^{2\alpha \ell (i)}\varepsilon ^2}{(r+2)^2 s^2} - \frac{2^{2\alpha \ell (i)}z_{i}^2}{s^2} +\tfrac{1}{2}\log \left( \tfrac{2}{\pi }\right) , \end{aligned}$$
where the inequality follows from lemma 18. The third term can be further bounded as we have
$$\begin{aligned} 2^{2\alpha \ell (i)} z_{i}^2 = 2^{2(\alpha -\beta ) \ell (i)} 2^{2\beta \ell (i)} z_{i}^2 \le 2^{2(\alpha -\beta ) \ell (i)} \llbracket b_0\rrbracket _\beta ^2. \end{aligned}$$
Hence
$$\begin{aligned} \log P\left( |sZ_{i}-z_{i}| <\frac{\varepsilon }{r+2}\right)\ge & {} \log \left( \frac{2^{\alpha \ell (i)}\varepsilon }{ (r+2) s}\right) - \frac{2^{2\alpha \ell (i)}\varepsilon ^2}{(r+2)^2 s^2}\nonumber \\&- \frac{2^{2(\alpha -\beta ) \ell (i)} \llbracket b_0\rrbracket _\beta ^2}{s^2} +\tfrac{1}{2}\log \left( \tfrac{2}{\pi }\right) .\nonumber \\ \end{aligned}$$
(31)
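The inequality from lemma 18 used above reduces, for a standard normal X, to the elementary Gaussian small-ball bound \(\log {\mathrm {P}}(|X-m|<u)\ge \log u+\tfrac{1}{2}\log (2/\pi )-u^2-m^2\) (apply it with \(u = 2^{\alpha \ell (i)}\varepsilon /((r+2)s)\) and \(m=2^{\alpha \ell (i)}z_i/s\)). A numeric spot check on an arbitrary grid:

```python
from math import erf, log, pi, sqrt

def log_prob(m, u):
    # log P(|X - m| < u) for X standard normal, via the normal CDF.
    Phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))
    return log(Phi(m + u) - Phi(m - u))

# Check log P(|X - m| < u) >= log u + 0.5 log(2/pi) - u^2 - m^2.
for m in [0.0, 0.5, 2.0]:
    for u in [0.05, 0.3, 1.0]:
        assert log_prob(m, u) >= log(u) + 0.5 * log(2 / pi) - u ** 2 - m ** 2
```

The bound follows from \({\mathrm {P}}(|X-m|<u)\ge 2u\varphi (|m|+u)\) together with \((|m|+u)^2\le 2m^2+2u^2\).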
For \(s \in [L_\varepsilon , U_\varepsilon ]\) and \(i\in {\mathscr {I}}_{r}\) we will now derive bounds on the first three terms on the right of Eq. (31). For \(\varepsilon \) sufficiently small we have \(r\le r+2\le 2r\) and then inequality (29) implies
$$\begin{aligned} \log C_\beta \le r+2\le 2\log (4C_\beta ) +\frac{2}{\beta } |\log \varepsilon |. \end{aligned}$$
Bounding the first term on the RHS of (31). For \(\varepsilon \) sufficiently small, we have
$$\begin{aligned} \log \left( \frac{(r+2) s}{2^{\alpha \ell (i)} \varepsilon }\right)&\le \log \left( \frac{(r+2) U_\varepsilon }{\varepsilon }\right) =\log \left( (r+2) \varepsilon ^{-\left( 1+\frac{p_2}{q\beta }\right) }\right) \\&\le \log \left\{ 2 \log (4C_\beta ) +\frac{2}{\beta } |\log \varepsilon | \right\} + \left( 1+\frac{p_2}{q\beta }\right) |\log \varepsilon | \\&\le \tilde{C}_{p_2, q, \beta } |\log \varepsilon |, \end{aligned}$$
where \(\tilde{C}_{p_2, q, \beta }\) is a positive constant.
Bounding the second term on the RHS of (31). For \(\varepsilon \) sufficiently small, we have
$$\begin{aligned} \frac{2^{2\alpha \ell (i)} \varepsilon ^2}{(r+2)^2 s^2} \le \frac{2^{2\alpha r} \varepsilon ^2}{(\log C_\beta )^2 L_\varepsilon ^2} \le \frac{(2C_\beta )^{2\alpha }}{(\log C_\beta )^2} \varepsilon ^{\frac{2}{\beta }\left( -\alpha +\beta +p_1/q\right) } \le 1. \end{aligned}$$
The final inequality is immediate in case \(\alpha =\beta \); otherwise it suffices to verify that the exponent is non-negative under the assumption \(p_1>q|\alpha -\beta |\).
Bounding the third term on the RHS of (31). For \(\varepsilon \) sufficiently small, in case \(\beta \ge \alpha \) we have
$$\begin{aligned} \frac{2^{2(\alpha -\beta ) \ell (i)} \llbracket b_0\rrbracket _\beta ^2}{s^2} \le \llbracket b_0\rrbracket _\beta ^2 L_\varepsilon ^{-2}\le 1. \end{aligned}$$
In case \(\beta <\alpha \) we have
$$\begin{aligned} \frac{2^{2(\alpha -\beta ) \ell (i)} \llbracket b_0\rrbracket _\beta ^2}{s^2} \le \frac{2^{2(\alpha -\beta ) r} \llbracket b_0\rrbracket _\beta ^2}{L_\varepsilon ^2} \le (2C_\beta )^{2(\alpha -\beta )} \varepsilon ^{\frac{2}{\beta }\left( p_1/q-\alpha +\beta \right) } \le 1 \end{aligned}$$
as the exponent of \(\varepsilon \) is positive under the assumption \(p_1>q|\alpha -\beta |\).
Hence for \(\varepsilon \) small enough, we have
$$\begin{aligned} \log P\left( |sZ_{i}-z_{i}| <\frac{\varepsilon }{r+2}\right) \ge -\tilde{C}_{p_2,q, \beta } |\log \varepsilon | -3. \end{aligned}$$
As \(-2^{r+1}\ge -4C_\beta \varepsilon ^{-1/\beta }\) we get
$$\begin{aligned} \log \inf _{s\in [L_\varepsilon , U_\varepsilon ]}P\left( \Vert b^{r,s}-b_0^{r}\Vert _\infty <\varepsilon \right)&\ge -4C_\beta \varepsilon ^{-1/\beta } \left( \tilde{C}_{p_2,q, \beta } |\log \varepsilon | +3\right) \\&\gtrsim -\varepsilon ^{-1/\beta }|\log \varepsilon |. \end{aligned}$$
We conclude that the right hand side of Eq. (30) is bounded below by \(\exp \big ({-C_1}\varepsilon ^{-1/\beta }|\log \varepsilon |\big )\), for some positive constant \(C_1\) and sufficiently small \(\varepsilon \).

6.4.2 Entropy and remaining mass conditions

For \(r\in \{0,1,\ldots \}\) denote by \({\mathscr {C}}_r\) the linear space spanned by \(\psi _1\) and \(\psi _{jk}\), \(0 \le j \le r\), \(k =1,\ldots , 2^j\), and define
$$\begin{aligned} {\mathscr {C}}_{r,t} := \left\{ b \in {\mathscr {C}}_r, \llbracket b \rrbracket _\alpha \le t\right\} . \end{aligned}$$

Proposition 12

For any \(\varepsilon >0\)
$$\begin{aligned} \log N(\varepsilon ,{\mathscr {C}}_{r,t},\Vert \,\cdot \,\Vert _\infty )\le 2^{r+1}\log (3A_\alpha t\varepsilon ^{-1}), \end{aligned}$$
where \(A_\alpha =\sum _{k=0}^\infty 2^{-k\alpha }\).

Proof

We follow (van der Meulen et al. 2006, §3.2.2). Choose \(\varepsilon _0, \ldots , \varepsilon _r>0\) such that \(\sum _{j=0}^r \varepsilon _j \le \varepsilon \). Define
$$\begin{aligned} U_j ={\left\{ \begin{array}{ll} \left[ -2^{-\alpha j} t, 2^{-\alpha j} t\right] ^{2^j} &{} \quad \text {if } j\in \{1,\ldots , r\} \\ \left[ -t, t \right] ^{2} &{} \quad \text {if }j=0\end{array}\right. }. \end{aligned}$$
For each \(j\in \{1,\ldots , r\}\), let \(E_j\) be a minimal \(\varepsilon _j\)-net for \(U_j\) with respect to the max-distance on \(\mathbb {R}^{2^j}\) and let \(E_0\) be a minimal \(\varepsilon _0\)-net for \(U_0\) with respect to the max-distance on \(\mathbb {R}^2\). Hence, if \(x\in U_j\), then there exists an \(e \in E_j\) such that \(\max _k |x_k-e_k| \le \varepsilon _j\).
Take \(b\in {\mathscr {C}}_{r,t}\) arbitrary: \(b=z_1\psi _1 + \sum _{j=0}^r \sum _{k=1}^{2^j} z_{jk}\psi _{jk}\). Let \(\tilde{b} = e_1\psi _1 + \sum _{j=0}^r \sum _{k=1}^{2^j} e_{jk}\psi _{jk}\), where \((e_1, e_{0,1})\in E_0\) and \((e_{j1},\ldots , e_{j2^j}) \in E_j\) (for \(j=1,\ldots , r\)). We have
$$\begin{aligned} \Vert b-\tilde{b}\Vert _\infty\le & {} |z_1-e_1| \Vert \psi _1\Vert _\infty + \sum _{j=0}^r \max _{1\le k\le 2^j} |z_{jk}-e_{jk}| \Vert \psi _{jk}\Vert _\infty \\\le & {} |z_1-e_1| + \sum _{j=0}^r \max _{1\le k\le 2^j} 2^{j\alpha } |2^{-j\alpha } z_{jk}-2^{-j\alpha } e_{jk}| . \end{aligned}$$
This can be bounded by \(\sum _{j=0}^r \varepsilon _j\) by an appropriate choice of the coefficients in \(\tilde{b}\). In that case we obtain that \(\Vert b-\tilde{b}\Vert _\infty \le \varepsilon \). This implies
$$\begin{aligned} \log N(\varepsilon ,{\mathscr {C}}_{r,t},\Vert \,\cdot \,\Vert _\infty )\le \sum _{j=0}^r \log |E_j| \le \sum _{j=0}^r 2^j \log \left( \frac{3\cdot 2^{-\alpha j} t}{\varepsilon _j}\right) . \end{aligned}$$
The asserted bound now follows upon choosing \(\varepsilon _j =\varepsilon 2^{-j\alpha } /A_\alpha \). \(\square \)
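The final step of the proof can be made concrete numerically: with \(\varepsilon _j=\varepsilon 2^{-j\alpha }/A_\alpha \) every summand equals \(2^j\log (3A_\alpha t/\varepsilon )\), so the total is \((2^{r+1}-1)\log (3A_\alpha t/\varepsilon )\le 2^{r+1}\log (3A_\alpha t/\varepsilon )\). A sketch with arbitrary values of \(\alpha , t, \varepsilon , r\):

```python
import math

alpha, t, eps, r = 0.7, 2.0, 0.1, 6          # arbitrary choices for the check
A_alpha = 1 / (1 - 2 ** -alpha)              # A_alpha = sum_{k>=0} 2^{-k alpha}
eps_j = lambda j: eps * 2 ** (-j * alpha) / A_alpha
total = sum(2 ** j * math.log(3 * 2 ** (-alpha * j) * t / eps_j(j))
            for j in range(r + 1))
# Each summand is 2^j log(3 A_alpha t / eps); summing the geometric weights
# gives (2^{r+1} - 1) log(3 A_alpha t / eps) <= 2^{r+1} log(3 A_alpha t / eps).
assert math.isclose(total, (2 ** (r + 1) - 1) * math.log(3 * A_alpha * t / eps))
assert total <= 2 ** (r + 1) * math.log(3 * A_alpha * t / eps)
```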

Proposition 13

There exists a positive constant K such that
$$\begin{aligned} \log N\left( a\varepsilon , \left\{ b\in {\mathscr {C}}_r\,:\, \Vert b-b_0\Vert _2\le \varepsilon \right\} , \Vert \cdot \Vert _2\right) \le 2^{r+1} \log \left( 6 A_\alpha K 2^{\alpha r}\right) . \end{aligned}$$

Proof

There exists a positive K such that
$$\begin{aligned} \left\{ b\in {\mathscr {C}}_r\,:\, \Vert b-b_0\Vert _2\le a\varepsilon \right\} \subset \left\{ b\in {\mathscr {C}}_r\,:\, \Vert b\Vert _2\le K \right\} . \end{aligned}$$
By lemma 21, this set is included in the set
$$\begin{aligned} \left\{ b\in {\mathscr {C}}_r\,:\, \Vert b\Vert _\infty \le \sqrt{3} 2^{(r+1)/2} K\right\} . \end{aligned}$$
(32)
By lemma 20, for any \(b=z_1\psi _1 +\sum _{j=0}^r \sum _{k=1}^{2^j} z_{jk}\psi _{jk}\) in this set we have
$$\begin{aligned} \max \left\{ |z_1|, |z_{jk}|, j=0,\ldots , r,\, k=1,\ldots ,2^j\right\} \le 2 \Vert b\Vert _\infty \le 2\sqrt{3}\, 2^{(r+1)/2} K. \end{aligned}$$
Hence, the set Eq. (32) is included in the set \(\left\{ b\in {\mathscr {C}}_r\,:\, \llbracket b\rrbracket _\alpha \le a(r,\varepsilon )\right\} ={\mathscr {C}}_{r, a(r,\varepsilon )}\), where \(a(r,\varepsilon )=2^{1+\alpha r} \sqrt{3} 2^{(r+1)/2} K\).
Hence,
$$\begin{aligned} N\left( a\varepsilon , \left\{ b\in {\mathscr {C}}_r\,:\, \Vert b-b_0\Vert _2\le \varepsilon \right\} , \Vert \cdot \Vert _2\right) \le N\left( \varepsilon , {\mathscr {C}}_{r, a(r,\varepsilon )}, \Vert \cdot \Vert _2\right) . \end{aligned}$$
Using Lemma 21 again the latter can be bounded by
$$\begin{aligned} N\left( \varepsilon \sqrt{3}2^{(r+1)/2}, {\mathscr {C}}_{r, a(r,\varepsilon )}, \Vert \cdot \Vert _\infty \right) \end{aligned}$$
The result follows upon applying Proposition 12. \(\square \)

We can now finish the proof for the entropy and remaining mass conditions. Choose \(r_n\) to be the smallest integer so that \(2^{r_n}\ge L\varepsilon _n^{-\frac{1}{\beta }}\), where L is a constant, and set \( {\mathscr {B}}_n={\mathscr {C}}_{r_n}\). The entropy bound then follows directly from Proposition 13.

For the remaining mass condition, using Assumption 7, we obtain
$$\begin{aligned} {\mathrm {P}}\left( b^{R,S}\notin {\mathscr {B}}_n\right) ={\mathrm {P}}(R>r_n) \le \exp \big (-c_12^{r_n}r_n\big ) \le \exp \big (-C_3\varepsilon _n^{-\frac{1}{\beta }}|\log \varepsilon _n|\big ), \end{aligned}$$
and note that the constant \(C_3\) can be made arbitrarily big by choosing L big enough.

6.5 Proof of theorem 8 under assumption 6 (B)

We start with a lemma.

Lemma 14

Assume there exist constants \(\,0< c_1 < c_2\) and \(\,0 < c_3\) with \(c_3 < c_1\), independent of r, such that for all \(i, i'\) with \(2\le \ell (i),\ell (i')\le r\),
$$\begin{aligned}&c_1 2^{-\ell (i)} \le A_{ii} \le c_2 2^{-\ell (i)},\end{aligned}$$
(33)
$$\begin{aligned}&|A_{ii'}| \le c_3 2^{-1.5(\ell (i)+\ell (i'))} \quad \text { if } i \ne i'. \end{aligned}$$
(34)
Let \(\widetilde{A}=(A_{ii'})_{2\le \ell (i),\ell (i')\le r}\) (the lower-right submatrix of \(A^r\)). Then for all \(x \in \mathbb {R}^{|{\mathscr {I}}_r|-2}\)
$$\begin{aligned} (c_1 -c_3) x'\widetilde{\Lambda }x \le x' \widetilde{A} x \le 2 c_2 x' \widetilde{\Lambda }x . \end{aligned}$$
where \(\widetilde{\Lambda }=(\widetilde{\Lambda }_{ii'})_{2\le \ell (i),\ell (i')\le r}\) is the diagonal matrix with \(\widetilde{\Lambda }_{ii} = 2^{-\ell (i)}\).

Proof

In the following the summations are over \(i,i'\) with \(2\le \ell (i),\ell (i')\le r\). Trivially, \(x' \widetilde{A} x = \sum _i x_i^2 A_{ii}+ \sum _{i \ne i'} x_i A_{ii'} x_{i'}\). By the first inequality
$$\begin{aligned} c_1 x' \widetilde{\Lambda } x= c_1 \sum _{i} x_i^2 2^{-\ell (i)}\le \sum _i x_i^2 A_{ii} \le c_2 \sum _i x_i^2 2^{-\ell (i)}= c_2 x' \widetilde{\Lambda } x . \end{aligned}$$
On the other hand
$$\begin{aligned} \left| \sum _{i \ne i'} x_i A_{ii'} x_{i'} \right| \le c_3 \sum _{i \ne i'} |x_i| 2^{-1.5 \ell (i)} |x_{i'}|2^{-1.5\ell (i') } \le c_3 \left( \sum _i |x_i| 2^{-1.5 \ell (i)}\right) ^2. \end{aligned}$$
At the first inequality we used Eq. (34). The second inequality follows upon including the diagonal. By Cauchy–Schwarz, this can be further bounded by
$$\begin{aligned} c_3 \left( \sum _i x_i^2 2^{-\ell (i)} \right) \left( \sum _i 2^{-2\ell (i)}\right) \le c_3 x' \widetilde{\Lambda } x, \end{aligned}$$
where the final inequality follows from \(\sum _i 2^{-2\ell (i)}\le \sum _{i=3}^\infty 2^{-2\ell (i)} =\sum _{j=1}^\infty 2^j 2^{-2j}=1\). The result follows by combining the derived inequalities. \(\square \)
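Lemma 14 can be sanity-checked numerically on a randomly generated matrix satisfying (33) and (34); the constants and the level range below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
c1, c2, c3, r = 1.0, 2.0, 0.5, 6             # arbitrary constants with c3 < c1 < c2
# Levels l(i) for indices i = 2^j + k with 2 <= j <= r.
levels = np.concatenate([np.full(2 ** j, j) for j in range(2, r + 1)])
n = len(levels)
# Diagonal satisfying (33), off-diagonal satisfying (34).
A = np.zeros((n, n))
A[np.diag_indices(n)] = rng.uniform(c1, c2, n) * 2.0 ** -levels
off = rng.uniform(-c3, c3, (n, n)) * 2.0 ** (-1.5 * (levels[:, None] + levels[None, :]))
off = (off + off.T) / 2                      # symmetrise; bounds are preserved
A += off - np.diag(np.diag(off))
Lam = np.diag(2.0 ** -levels)
for _ in range(100):
    x = rng.standard_normal(n)
    qA, qL = x @ A @ x, x @ Lam @ x
    assert (c1 - c3) * qL <= qA <= 2 * c2 * qL
```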
We continue with the proof of Theorem 8. Write A as block matrix
$$\begin{aligned} A = \begin{bmatrix} A_{1} & B' \\ B & A_{2}\end{bmatrix}, \end{aligned}$$
with \(A_1\) a \(2\times 2\)-matrix, and B, \(A_2\) defined accordingly. By lemma 2
$$\begin{aligned} A_1 = \frac{\sigma ^2}{2\gamma }\begin{bmatrix} \coth (\gamma /2) & \sinh ^{-1}(\gamma /2)\\ \sinh ^{-1}(\gamma /2) & \coth (\gamma /2)\end{bmatrix}. \end{aligned}$$
Define the \(2\times 2\)-matrix
$$\begin{aligned} \Lambda _1 = c\tfrac{ \sigma ^2}{2\gamma } \tanh (\gamma /4)\, \mathrm {I},\quad c \in (0,1), \end{aligned}$$
where \(\mathrm {I}\) is the \(2\times 2\)-identity matrix. It is easy to see that \(A_1-\Lambda _1\) is positive definite.
If \(A_2 - \Lambda _2 - B(A_1-\Lambda _1)^{-1} B'\) is positive definite, then it follows from the (block) Cholesky decomposition that \(A-\Lambda \) is positive definite, where \(\Lambda =\text {diag}(\Lambda _1,\Lambda _2)\). Note
$$\begin{aligned} (B A_1^{-1} B')_{i,i'} = \sum _{k,k'} B_{ik}(A_1)^{-1}_{kk'}B_{i'k'} \le \left( \sum _{k,k'} (A_1)^{-1}_{kk'}\right) (B_{i,1} \vee B_{i,2}) (B_{i',1} \vee B_{i',2}) \end{aligned}$$
where
$$\begin{aligned} \left( \sum _{k,k'} (A_1)^{-1}_{kk'}\right) = \frac{2}{\sigma ^2\gamma } \frac{2}{\sinh ^{-1}(\gamma /2) + \coth (\gamma /2)} \le \frac{2}{\sigma ^2(1+\gamma )}. \end{aligned}$$
Therefore
$$\begin{aligned} |(B A_1^{-1} B')_{ii'}| \le 0.020\sigma ^2(1+\gamma /4) 2^{-1.5(\ell (i) + \ell (i'))} \end{aligned}$$
Now consider \( \tilde{A} = A_2 - B(A_1-\Lambda _1)^{-1} B' \). By lemma 2 and the bound on \( |(B A_1^{-1} B')_{ii'} | \), and choosing \(c>0\) in the definition of \(\Lambda _1\) small enough, under the assumption that \(\gamma \le 1.5\),
$$\begin{aligned} 0.945\cdot 2^{-\ell (i)}\sigma ^2/4< \tilde{A}_{ii} < 1.03\cdot 2^{-\ell (i)}\sigma ^2/4, \end{aligned}$$
and for \(i \ne i'\), \( |\tilde{A}_{ii'}| \le 0.9415\frac{\sigma ^2}{4}2^{-1.5(\ell (i)+\ell (i'))} \). Therefore, by lemma 14, \(\tilde{A} - \Lambda _{2}\) is positive definite for a diagonal matrix \(\Lambda _{2}\) with diagonal entries proportional to \(2^{-\ell (i)}\).

It follows that \(x'\Lambda x \asymp x'Ax\). This implies that the small ball probabilities and the mass outside a sieve behave similarly under Assumption 6(B) as when the \(Z_{i}\) are independent normal random variables with zero mean and variance \(\xi _i^2=\Lambda _{ii}\). As this case corresponds to Assumption 6(A) with \(\alpha = \frac{1}{2}\), for which posterior contraction has already been established, the stated contraction rate under Assumption 6(B) follows from Anderson's lemma (lemma 19).

6.6 Proof of theorem 10: convergence in stronger norms

The linear embedding operator \(T:L^p(\mathbb {T})\rightarrow L^2(\mathbb {T}),x\mapsto x\) is a well-defined injective continuous operator for all \(p\in (2,\infty ]\). Its inverse is easily seen to be a densely defined, closed unbounded linear operator. Following Knapik and Salomond (2014) we define the modulus of continuity m as
$$\begin{aligned} m({\mathscr {B}}_n,\varepsilon ):=\sup \left\{ \Vert f-f_0\Vert _p:f\in {\mathscr {B}}_n,\Vert f-f_0\Vert _2\le \varepsilon \right\} . \end{aligned}$$
Theorem 2.1 of Knapik and Salomond (2014), adapted to our case, reads as follows.

Theorem 15

(Knapik and Salomond (2014)) Let \(\varepsilon _n\downarrow 0\), \(T_n\uparrow \infty \) and \(\Pi \) be a prior on \(L^p({{\mathrm{\mathbb {T}}}})\) such that
$$\begin{aligned} \mathbb {E}_0\,\Pi \left( {\mathscr {B}}_n^c\mid X^{T_n}\right) \rightarrow 0, \end{aligned}$$
for measurable sets \({\mathscr {B}}_{n}\subset L^p({{\mathrm{\mathbb {T}}}})\). If for every positive sequence \(M_n\)
$$\begin{aligned} \mathbb {E}_0\,\Pi \left( b\in {\mathscr {B}}_n:\Vert b-b_0\Vert _2\ge M_n\varepsilon _n \mid X^{T_n}\right) \rightarrow 0, \end{aligned}$$
then
$$\begin{aligned} \mathbb {E}_0\,\Pi \left( b\in L^p({{\mathrm{\mathbb {T}}}}):\Vert b-b_0\Vert _p\ge m({\mathscr {B}}_n,M_n\varepsilon _n)\mid X^{T_n}\right) \rightarrow 0. \end{aligned}$$

Note that the sieves \({\mathscr {C}}_{r,t}\) defined in Sect. 6.4.2 have, by Eq. (15), the property \(\Pi ({\mathscr {C}}_{r,t}^c\mid X^T)\rightarrow 0.\) By lemmas 21 and 23, the modulus of continuity satisfies \(m({\mathscr {C}}_{r,u},\varepsilon _n)\lesssim 2^{r(1/2-1/p)}\varepsilon _n\) for all \(p\in (2,\infty ]\) (with the convention \(1/\infty =0\)), and the result follows.

Acknowledgements

This work was partly supported by the Netherlands Organisation for Scientific Research (NWO) under the research programme “Foundations of nonparametric Bayes procedures”, 639.033.110 and by the ERC Advanced Grant “Bayesian Statistics in Infinite Dimensions”, 320637.

References

  1. Anderson TW (1955) The integral of a symmetric unimodal function over a symmetric convex set and some probability inequalities. Proc Am Math Soc 6:170–176
  2. Bhattacharya R, Waymire E (2007) A basic course in probability theory. Universitext, Springer, New York
  3. Dalalyan A (2005) Sharp adaptive estimation of the drift function for ergodic diffusions. Ann Stat 33(6):2507–2528
  4. Dalalyan AS, Kutoyants YA (2002) Asymptotically efficient trend coefficient estimation for ergodic diffusion. Math Methods Stat 11(4):402–427
  5. Ghosal S, van der Vaart AW (2007) Convergence rates of posterior distributions for noniid observations. Ann Stat 35(1):192–223
  6. Ghosal S, Ghosh JK, van der Vaart AW (2000) Convergence rates of posterior distributions. Ann Stat 28(2):500–531
  7. Giné E, Nickl R (2011) Rates of contraction for posterior distributions in \(L^r\)-metrics, \(1\le r\le \infty \). Ann Stat 39(6):2883–2911
  8. Giné E, Nickl R (2016) Mathematical foundations of infinite-dimensional statistical models. Cambridge series in statistical and probabilistic mathematics. Cambridge University Press, Cambridge
  9. Hindriks R (2011) Empirical dynamics of neuronal rhythms. PhD thesis, Vrije Universiteit Amsterdam
  10. Karatzas I, Shreve SE (1991) Brownian motion and stochastic calculus, volume 113 of Graduate texts in mathematics, 2nd edn. Springer, New York
  11. Knapik BT, van der Vaart AW, van Zanten JH (2011) Bayesian inverse problems with Gaussian priors. Ann Stat 39(5):2626–2657
  12. Knapik B, Salomond J-B (2014) A general approach to posterior contraction in nonparametric inverse problems. Bernoulli
  13. Kutoyants YA (2004) Statistical inference for ergodic diffusion processes. Springer, New York
  14. Papaspiliopoulos O, Pokern Y, Roberts GO, Stuart AM (2012) Nonparametric estimation of diffusions: a differential equations approach. Biometrika 99(3):511
  15. Pokern Y (2007) Fitting stochastic differential equations to molecular dynamics data. PhD thesis, University of Warwick
  16. Pokern Y, Stuart AM, van Zanten JH (2013) Posterior consistency via precision operators for Bayesian nonparametric drift estimation in SDEs. Stoch Process Appl 123(2):603–628
  17. Schauer M, van Zanten JH (2017) Uniform central limit theorems for additive functionals of diffusions on the circle. In preparation
  18. Shen W, Ghosal S (2015) Adaptive Bayesian procedures using random series priors. Scand J Stat 42(4):1194–1213
  19. Spokoiny VG (2000) Adaptive drift estimation for nonparametric diffusion model. Ann Stat 28(3):815–836
  20. Strauch C (2015) Sharp adaptive drift estimation for ergodic diffusions: the multivariate case. Stoch Process Appl 125(7):2562–2602
  21. van der Meulen FH, van der Vaart AW, van Zanten JH (2006) Convergence rates of posterior distributions for Brownian semimartingale models. Bernoulli 12(5):863–888
  22. van der Meulen FH, Schauer M, van Zanten JH (2014) Reversible jump MCMC for nonparametric drift estimation for diffusion processes. Comput Stat Data Anal 71:615–632
  23. van der Vaart AW, van Zanten JH (2008) Rates of contraction of posterior distributions based on Gaussian process priors. Ann Stat 36(3):1435–1463
  24. van Waaij J, van Zanten H (2016) Gaussian process methods for one-dimensional diffusions: optimal rates and adaptation. Electron J Stat 10(1):628–645
  25. van Zanten JH (2013) Nonparametric Bayesian methods for one-dimensional diffusion models. Math Biosci 243(2):215–222

Copyright information

© The Author(s) 2017

Open Access. This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Frank van der Meulen, TU Delft, Delft, The Netherlands
  2. Moritz Schauer, Leiden University, Leiden, The Netherlands
  3. Jan van Waaij, Korteweg-de Vries Institute for Mathematics, Amsterdam, The Netherlands
