Abstract
We consider the problem of nonparametric estimation of the drift of a continuously observed one-dimensional diffusion with periodic drift. Motivated by computational considerations, van der Meulen et al. (Comput Stat Data Anal 71:615–632, 2014) defined a prior on the drift as a randomly truncated and randomly scaled Faber–Schauder series expansion with Gaussian coefficients. We study the behaviour of the posterior obtained from this prior from a frequentist asymptotic point of view. If the true data generating drift is smooth, it is proved that the posterior is adaptive with posterior contraction rates for the \(L_2\)-norm that are optimal up to a log factor. Contraction rates in \(L_p\)-norms with \(p\in (2,\infty ]\) are derived as well.
1 Introduction
Assume continuous time observations \(X^T=\left\{ X_t : t\in [0,T]\right\} \) from a diffusion process X defined as (weak) solution to the stochastic differential equation (sde)
Here W is a Brownian motion and the drift \(b_0\) is assumed to be a real-valued measurable function on the real line that is 1-periodic and square integrable on [0, 1]. The assumed periodicity implies that we can alternatively view the process X as a diffusion on the circle. This model has been used for dynamic modelling of angles, see for instance Pokern (2007) and Hindriks (2011).
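To make the model concrete, the sde (1) can be simulated on a time grid with the Euler–Maruyama scheme; the drift chosen below and all numerical settings are our own illustrations, not taken from the paper.

```python
import numpy as np

def euler_maruyama(b, T=5.0, dt=1e-3, x0=0.0, seed=0):
    """Simulate dX_t = b(X_t) dt + dW_t on [0, T] with step size dt."""
    rng = np.random.default_rng(seed)
    n = int(round(T / dt))
    x = np.empty(n + 1)
    x[0] = x0
    dw = rng.normal(scale=np.sqrt(dt), size=n)  # Brownian increments
    for i in range(n):
        x[i + 1] = x[i] + b(x[i]) * dt + dw[i]
    return x

# a 1-periodic, square integrable drift (illustrative choice)
b0 = lambda x: np.sin(2 * np.pi * x)
path = euler_maruyama(b0)
circle_path = path % 1.0   # the same path viewed as a diffusion on the circle
```

Since the drift is 1-periodic, reducing the path modulo 1 gives the corresponding diffusion on the circle.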
We are interested in nonparametric adaptive estimation of the drift. This problem has recently been studied by multiple authors. Spokoiny (2000) proposed a locally linear smoother with a data-driven bandwidth choice that is rate adaptive with respect to \(b''(x)\) for all x and optimal up to a log factor. Interestingly, the result is non-asymptotic and does not require ergodicity. Dalalyan and Kutoyants (2002) and Dalalyan (2005) consider ergodic diffusions and construct estimators that are asymptotically minimax and adaptive under Sobolev smoothness of the drift. Their results were extended to the multidimensional case by Strauch (2015).
In this paper we focus on Bayesian nonparametric estimation, a paradigm that has become increasingly popular over the past two decades. An overview of some advances of Bayesian nonparametric estimation for diffusion processes is given in van Zanten (2013).
The Bayesian approach requires the specification of a prior. Ideally, the prior on the drift is chosen such that drawing from the posterior is computationally efficient while at the same time ensuring that the resulting inference has good theoretical properties, as quantified by a contraction rate. This is a rate at which we can shrink balls around the true parameter value while maintaining most of the posterior mass. More formally, if d is a semimetric on the space of drift functions, a contraction rate \(\varepsilon _T\) is a sequence of positive numbers \(\varepsilon _T\downarrow 0\) for which the posterior mass of the balls \(\{b\,:\, d(b,b_0)\le \varepsilon _T\}\) converges in probability to 1 as \(T\rightarrow \infty \), under the law of X with drift \(b_0\). For a general discussion on contraction rates, see for instance Ghosal et al. (2000) and Ghosal and van der Vaart (2007).
For diffusions, the problem of deriving optimal posterior convergence rates has been studied recently under the additional assumption that the drift integrates to zero, \(\int _0^1 b_0(x) d x =0\). In Papaspiliopoulos et al. (2012) a mean zero Gaussian process prior is proposed together with an algorithm to sample from the posterior. The precision operator (inverse covariance operator) of the proposed Gaussian process is given by \(\eta \left( (-\Delta )^{\alpha +1/2} + \kappa I\right) \), where \(\Delta \) is the one-dimensional Laplacian, I is the identity operator, \(\eta , \kappa >0\) and \(\alpha +1/2 \in \{2,3,\ldots \}\). A first consistency result was shown in Pokern et al. (2013).
In van Waaij and van Zanten (2016) it was shown that this rate result can be improved upon for a slightly more general class of priors on the drift. More specifically, in this paper the authors consider a prior which is defined as
where \(\varphi _{2k}(x)=\sqrt{2} \cos (2\pi k x)\), \(\varphi _{2k-1}(x)=\sqrt{2} \sin (2\pi k x)\) are the standard Fourier series basis functions, \(\{Z_k\}\) is a sequence of independent standard normally distributed random variables and \(\alpha \) is positive. It is shown that when L and \(\alpha \) are fixed and \(b_0\) is assumed to be \(\alpha \)-Sobolev smooth, then the optimal posterior rate of contraction, \(T^{-\alpha /(1+2\alpha )}\), is obtained. Note that this result is non-adaptive, as the regularity of the prior must match the regularity of \(b_0\). For obtaining optimal posterior contraction rates for the full range of possible regularities of the drift, two options are investigated: endowing either L or \(\alpha \) with a hyperprior. Only the second option results in the desired adaptivity over all possible regularities.
While the prior in (2) (with additional prior on \(\alpha \)) has good asymptotic properties, from a computational point of view the infinite series expansion is inconvenient. Clearly, in any implementation this expansion needs to be truncated. Random truncation of a series expansion is a well known method for defining priors in Bayesian nonparametrics, see for instance Shen and Ghosal (2015). Exactly this idea was exploited in van der Meulen et al. (2014), where the prior is defined as the law of the random function
where the functions \(\psi _{jk}\) constitute the Faber–Schauder basis (see Fig. 1).
These functions feature prominently in the Lévy–Ciesielski construction of Brownian motion (see for instance (Bhattacharya and Waymire 2007, paragraph 10.1)).
The prior coefficients \(Z_{jk}\) are equipped with a Gaussian distribution, and the truncation level R and the scaling factor S are equipped with independent priors. Truncation in absence of scaling increases the apparent smoothness of the prior (as illustrated for deterministic truncation by example 4.5 in van der Vaart and van Zanten (2008)), whereas scaling by a number \(\ge 1\) decreases the apparent smoothness. (Scaling with a number \(\le 1\) only increases the apparent smoothness to a limited extent, see for example Knapik et al. (2011).)
The simplest type of prior is obtained by taking the coefficients \(Z_{jk}\) independent. We do however also consider the prior that is obtained by first expanding a periodic Ornstein–Uhlenbeck process into the Faber–Schauder basis, followed by random scaling and truncation. We will explain that specific stationarity properties of this prior make it a natural choice.
Draws from the posterior can be computed using a reversible jump Markov Chain Monte Carlo (MCMC) algorithm (cf. van der Meulen et al. (2014)). For both types of priors, fast computation is facilitated by leveraging inherent sparsity properties stemming from the compact support of the functions \(\psi _{jk}\). In the discussion of van der Meulen et al. (2014) it was argued that inclusion of both the scaling and random truncation in the prior is beneficial. However, this claim was only supported by simulations results.
In this paper we support this claim theoretically by proving adaptive contraction rates of the posterior distribution in case the prior (3) is used. We start from a general result in van der Meulen et al. (2006) on Brownian semimartingale models, which we adapt to our setting. Here we take into account that as the drift is assumed to be one-periodic, information accumulates in a different way compared to (general) ergodic diffusions. Subsequently we verify that the resulting prior mass, remaining mass and entropy conditions appearing in this adapted result are satisfied for the prior defined in Eq. (3). An application of our results shows that if the true drift function is \(B_{\infty ,\infty }^\beta \)-Besov smooth, \(\beta \in (0,2)\), then by appropriate choice of the variances of \(Z_{jk}\), as well as the priors on R and S, the posterior for the drift b contracts at the rate \((T/\log T)^{-\beta /(1+2\beta )}\) around the true drift in the \(L_2\)-norm. Up to the log factor this rate is minimax-optimal (see for instance Kutoyants 2004, Theorem 4.48). Moreover, it is adaptive: the prior does not depend on \(\beta \). In case the true drift has Besov smoothness greater than or equal to 2, our method guarantees contraction rates equal to essentially \(T^{-2/5}\) (corresponding to \(\beta =2\)). A further application of our results shows that for \(L_p\)-norms we obtain the contraction rate \(T^{-(\beta -1/2+1/p)/(1+2\beta )}\), up to log factors.
The paper is organised as follows. In the next section we give a precise definition of the prior. In Sect. 3 a general contraction result for the class of diffusion processes considered here is derived. Our main result on posterior contraction for \(L^p\)norms with \(p\ge 2\) is presented in Sect. 4. Many results of this paper concern general properties of the prior and their application is not confined to drift estimation of diffusion processes. To illustrate this, we show in Sect. 5 how these results can easily be adapted to nonparametric regression and nonparametric density estimation. Proofs are gathered in Sect. 6. The appendix contains a couple of technical results.
2 Prior construction
2.1 Model and posterior
Let
be the space of square integrable 1periodic functions.
Lemma 1
If \(b_0 \in L^2({{\mathrm{\mathbb {T}}}}),\) then the SDE Eq. (1) has a unique weak solution.
The proof is in Sect. 6.1.
For \(b\in L^2(\mathbb {T})\), let \(P^{b}=P^{b,T}\) denote the law of the process \(X^T\) generated by Eq. (1) when \(b_0\) is replaced by b. If \(P^0\) denotes the law of \(X^T\) when the drift is zero, then \(P^b\) is absolutely continuous with respect to \(P^0\) with Radon–Nikodym density
Given a prior \(\Pi \) on \(L^2(\mathbb {T})\) and path \(X^T\) from (1), the posterior is given by
where A is a Borel set of \(L^2(\mathbb {T})\). These assertions are verified as part of the proof of Theorem 3.
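By Girsanov's theorem, the Radon–Nikodym density above takes the standard form \(\exp \bigl (\int _0^T b(X_t)\,\mathrm dX_t - \tfrac{1}{2}\int _0^T b^2(X_t)\,\mathrm dt\bigr )\); on a discretised path the two integrals can be approximated by Itô-type left-point sums. A sketch (the discretisation scheme is ours, not the paper's):

```python
import numpy as np

def log_lik(b, path, dt):
    """log dP^b/dP^0 ~ sum b(X_{t_i}) dX_i - 0.5 * sum b(X_{t_i})^2 dt
    (left-point / Ito evaluation of both integrals)."""
    x, dx = path[:-1], np.diff(path)
    return float(np.sum(b(x) * dx) - 0.5 * np.sum(b(x) ** 2) * dt)

rng = np.random.default_rng(1)
dt = 1e-3
# a driftless (P^0) path: pure Brownian motion started at 0
path = np.concatenate([[0.0], np.cumsum(rng.normal(scale=np.sqrt(dt), size=5000))])
ll_zero = log_lik(lambda x: np.zeros_like(x), path, dt)   # identically 0
ll_sin = log_lik(lambda x: np.sin(2 * np.pi * x), path, dt)
```

The zero drift gives a log-likelihood ratio of exactly zero by construction, which provides a quick sanity check of the discretisation.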
2.2 Motivating the choice of prior
We are interested in randomly truncated, scaled series priors that simultaneously enable a fast algorithm for obtaining draws from the posterior and enjoy good contraction rates.
To explain what we mean by the first item, consider first a prior that is a finite series prior. Let \(\{\psi _1,\ldots , \psi _r\}\) denote basis functions and \(Z=(Z_1,\ldots , Z_r)\) a mean zero Gaussian random vector with precision matrix \(\Gamma \). Assume that the prior for b is given by \(b=\sum _{i=1}^r Z_i \psi _i\). By conjugacy, it follows that \( Z \mid X^T \sim \mathrm N(W^{-1}\mu , W^{-1})\), where \(W= G + \Gamma \),
for \(i, i' \in \{1,\ldots ,r\}\), cf. (van der Meulen et al. 2014, Lemma 1). The matrix G is referred to as the Grammian. From these expressions it follows that it is computationally advantageous to exploit compactly supported basis functions. Whenever \(\psi _{i}\) and \(\psi _{i'}\) have nonoverlapping supports, we have \(G_{i, i'}=0\). Depending on the choice of such basis functions, the Grammian G will have a specific sparsity structure (a set of index pairs \((i,i')\) such that \(G_{i,i'} = 0\), independently of \(X^T\)). This sparsity structure is inherited by W as long as the sparsity structure of the prior precision matrix matches that of G.
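A toy illustration of this sparsity, with generic tent functions of disjoint interior supports and a dense uniform grid standing in for the occupation measure of the observed path (all choices here are ours):

```python
import numpy as np

r = 8
centers = (np.arange(r) + 0.5) / r          # tent centres
half = 1.0 / (2 * r)                        # half-width: interiors are disjoint

def psi_i(i, x):
    """Tent function supported on [i/r, (i+1)/r]."""
    return np.maximum(0.0, 1.0 - np.abs(x - centers[i]) / half)

x = np.linspace(0.0, 1.0, 4000, endpoint=False)   # stand-in for the path
dt = 1.0 / x.size
Psi = np.stack([psi_i(i, x) for i in range(r)])
G = Psi @ Psi.T * dt                              # Grammian G_{i,i'}

Gamma = np.eye(r)        # diagonal prior precision: sparsity pattern preserved
W = G + Gamma            # posterior precision
```

Because the tents overlap at most in single boundary points, G is diagonal here, and with a diagonal prior precision \(\Gamma \) the posterior precision W inherits exactly the same zero pattern.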
In the next section we make a specific choice for the basis functions and the prior precision matrix \(\Gamma \).
2.3 Definition of the prior
Define the “hat” function \(\Lambda \) by \(\Lambda (x) = (2x){\mathbf {1}}_{[0,\frac{1}{2})}(x) + 2(1-x) {\mathbf {1}}_{[\frac{1}{2},1]}(x)\). The Faber–Schauder basis functions are given by
Let
In Fig. 1 we have plotted \(\psi _1\) together with \(\psi _{j,k}\) where \(j \in \{0, 1, 2\}\).
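In code, \(\Lambda \) and \(\psi _{jk}\) read as follows; this is a direct transcription of the definitions above, with the 1-periodic reduction (x mod 1) being our addition:

```python
import numpy as np

def Lam(x):
    """Hat function: Lam(x) = 2x on [0,1/2), 2(1-x) on [1/2,1], 0 elsewhere."""
    x = np.asarray(x, dtype=float)
    inside = (x >= 0) & (x <= 1)
    return np.where(inside, np.where(x < 0.5, 2 * x, 2 * (1 - x)), 0.0)

def psi(j, k, x):
    """Faber-Schauder function psi_{jk}(x) = Lam(2^j x - k + 1), k = 1..2^j,
    evaluated 1-periodically."""
    return Lam(2.0 ** j * (np.asarray(x, dtype=float) % 1.0) - k + 1)

peak = (2 * 3 - 1) / 2 ** 3     # psi_{2,3} peaks at the dyadic midpoint 5/8
```

Each \(\psi _{jk}\) has sup-norm 1, peaks at the dyadic midpoint \((2k-1)2^{-j-1}\), and vanishes at all dyadic points of resolution \(2^{-j}\), which is what produces the sparsity exploited above.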
We define our prior as in (3) with Gaussian coefficients \(Z_1\) and \(Z_{jk}\), where the truncation level R and the scaling factor S are equipped with (hyper)priors. We extend b periodically if we want to consider b as function on the real line. If we identify the double index (j, k) in (3) with the single index \(i = 2^{j}+k\), then we can write \(b^{R,S} = S \sum _{i=1}^{2^{R+1}} \psi _i Z_i\). Let
We say that \(\psi _i\) belongs to level \(j\ge 0\) if \(\ell (i) = j\). Thus both \(\psi _1\) and \(\psi _{0,1}\) belong to level 0, which is convenient for notational purposes. For levels \(j\ge 1\), the basis functions within a level are orthogonal, with essentially disjoint supports. Define for \(r\in \{0,1,\ldots \}\)
Let \(A = ({\text {Cov}}(Z_i, Z_{i'}))_{i,i'\in \mathbb {N}}\) and define its finitedimensional restriction by \(A^r=(A_{ii'})_{i, i' \in {\mathscr {I}}_r}\). If we denote \(Z^r=\{Z_i,\, i \in {\mathscr {I}}_r\}\), and assume that \(Z^r\) is multivariate normally distributed with mean zero and covariance matrix \(A^r\), then the prior has the following hierarchy
Here, we use \(\Pi \) to denote the joint distribution of (R, S).
We will consider two choices of priors for the sequence \(Z_1,Z_2,\ldots \) Our first choice consists of taking independent Gaussian random variables. If the coefficients \(Z_{i}\) are independent with standard deviation \(2^{-\ell (i)/2}\), random draws from this prior are piecewise linear interpolations, on a dyadic grid, of a scaled Brownian bridge on [0, 1], plus the random function \(Z_1\psi _1\). The choice of \(\psi _1\) is motivated by the fact that in this case \({{\text {Var}}}\left( b(t) \mid S=s, R=\infty \right) = s^2\) is independent of t.
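The constancy of \({\text {Var}}(b(t)\mid S=s, R=\infty )\) can be checked numerically. Assuming \(\psi _1\) is the 1-periodic hat peaking at 0, i.e. \(\psi _1(x)=\Lambda ((x+\tfrac12) \bmod 1)\) (our reading of Fig. 1), the identity holds exactly at dyadic points, where all levels finer than the resolution of t contribute zero:

```python
import numpy as np

def Lam(x):
    x = np.asarray(x, dtype=float)
    return np.where((x >= 0) & (x <= 1),
                    np.where(x < 0.5, 2 * x, 2 * (1 - x)), 0.0)

def psi(j, k, x):
    return Lam(2.0 ** j * (np.asarray(x, dtype=float) % 1.0) - k + 1)

def psi1(x):
    # assumed form of psi_1: the hat peaking at 0, extended 1-periodically
    return Lam((np.asarray(x, dtype=float) + 0.5) % 1.0)

def var_b(t, J=8):
    """Var b(t) for S=1 with independent Z_i of sd 2^{-l(i)/2}, levels 0..J."""
    v = float(psi1(t)) ** 2 + float(psi(0, 1, t)) ** 2     # level 0
    for j in range(1, J + 1):
        for k in range(1, 2 ** j + 1):
            v += 2.0 ** (-j) * float(psi(j, k, t)) ** 2
    return v
```

At a dyadic point of resolution \(2^{-J}\) the truncated sum is already the full series, and it evaluates to 1, matching the claim with \(s=1\).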
We construct this second type of prior as follows. For \(\gamma , \sigma ^2 >0\), define \(V\equiv (V_t,\, t\in [0,1])\) to be the cyclically stationary and centred Ornstein–Uhlenbeck process. This is a periodic Gaussian process with covariance kernel
This process is cyclically stationary, that is, the covariance only depends on \(|t-s|\) and \(1-|t-s|\). It is the unique Gaussian and Markovian prior with continuous periodic paths having this property. This makes the cyclically stationary Ornstein–Uhlenbeck prior an appealing choice which respects the symmetries of the problem.
Each realisation of V is continuous and can be extended to a periodic function on \(\mathbb {R}\). Then V can be represented as an infinite series expansion in the Faber–Schauder basis:
Finally, by scaling by S and truncating at R we obtain from V the second choice of prior on the drift function b. Visualisations of the covariance kernels \({{\text {Cov}}}\left( b(s) , b(t) \right) \) for the first prior (Brownian bridge type) and the second prior (periodic Ornstein–Uhlenbeck process prior with parameter \(\gamma = 1.48\)) are shown in Fig. 2 (for \(S=1\) and \(R =\infty \)).
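For reference, the covariance kernel of V (in the form given in Lemma 11, Sect. 6) in code, together with a check of cyclic stationarity; \(\gamma =1.48\) mirrors Fig. 2 and \(\sigma ^2=1\) is our choice:

```python
import math

def K(s, t, gamma=1.48, sigma2=1.0):
    """Covariance of the cyclically stationary OU process (cf. Lemma 11):
    K(s,t) = sigma^2/(2 gamma) * (e^{-gamma d} + e^{-gamma(1-d)}) / (1 - e^{-gamma}),
    with d = |t - s|."""
    d = abs(t - s)
    c = sigma2 / (2 * gamma * (1 - math.exp(-gamma)))
    return c * (math.exp(-gamma * d) + math.exp(-gamma * (1 - d)))
```

The kernel depends on (s, t) only through \(|t-s|\), symmetrically in \(|t-s|\) and \(1-|t-s|\), which is exactly the cyclic stationarity described above.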
2.4 Sparsity structure induced by choice of \(Z_i\)
Conditional on R and S, the posterior of \(Z^R\) is Gaussian with precision matrix \(G^R+\Gamma ^R\) (here \(G^R\) is the Grammian corresponding to using all basis functions up to and including level R).
If the coefficients are independent it is trivial to see that the precision matrix \(\Gamma \) does not destroy the sparsity structure of G, as defined in (6). This is convenient for numerical computations. The next lemma details the situation for periodic Ornstein–Uhlenbeck processes.
Lemma 2
Let V be defined as in Eq. (10)

1.
The sparsity structure of the precision matrix of the infinite stochastic vector Z (appearing in the series representation (11)) equals the sparsity structure of G, as defined in (6).

2.
The entries of the covariance matrix of the random Gaussian coefficients \(Z_i\) and \(Z_{i'}\), \(A_{i,i'} = \mathbb {E}Z_i Z_{i'}\), satisfy the following bounds: \(A_{11} = A_{22} = \tfrac{\sigma ^2}{2\gamma }\coth (\gamma /2)\) and for \(\gamma \le 1.5\) and \(i\ge 3\),
$$\begin{aligned} 0.95 \cdot 2^{-\ell (i)}\sigma ^2/4 \le A_{ii} \le 2^{-\ell (i)}\sigma ^2/4 \end{aligned}$$and \(A_{12} = A_{21} = \tfrac{\sigma ^2}{2\gamma }\sinh ^{-1}(\gamma /2)\) and for \(i\ne i'\)
$$\begin{aligned} |A_{ii'}| \le {\left\{ \begin{array}{ll} 0.20\,\sigma ^2\, 2^{-1.5(\ell (i)\vee \ell ( i'))}&{} \qquad i \wedge i'\le 2<i\vee i',\\ 0.37\, \sigma ^2\, 2^{-1.5(\ell (i)+\ell (i'))}&{} \qquad \text {otherwise.} \end{array}\right. } \end{aligned}$$
The proof is given in Sect. 6.2. By the first part of the lemma, this prior also does not destroy the sparsity structure of G. The second part asserts that while the off-diagonal entries of \(A^{r}\) are not zero, they are of smaller order than the diagonal entries, quantifying that the covariance matrix of the coefficients in the Schauder expansion is close to a diagonal matrix.
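The diagonal bounds of Lemma 2 are easy to confirm numerically from the kernel and the midpoint-displacement formula \(2Z_i = 2V_m - V_a - V_b\) used in its proof (\(\gamma =1\), \(\sigma ^2=1\) here; our choice):

```python
import math

def K(s, t, gamma=1.0, sigma2=1.0):
    """Covariance kernel of the cyclically stationary OU process (Lemma 11)."""
    d = abs(t - s)
    c = sigma2 / (2 * gamma * (1 - math.exp(-gamma)))
    return c * (math.exp(-gamma * d) + math.exp(-gamma * (1 - d)))

def var_midpoint(a, b):
    """Variance of Z = V_m - (V_a + V_b)/2 with m the midpoint of [a, b]."""
    m = (a + b) / 2
    return (K(m, m) + 0.25 * K(a, a) + 0.25 * K(b, b)
            + 0.5 * K(a, b) - K(m, a) - K(m, b))

# first basis function on level j has support [0, 2^{-j}]
ratios = [var_midpoint(0.0, 2.0 ** (-j)) / (2.0 ** (-j) / 4) for j in range(1, 8)]
```

The computed variances stay within the band \([0.95, 1]\cdot 2^{-j}\sigma ^2/4\) asserted by the lemma, approaching the upper endpoint as the level grows.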
3 Posterior contraction for diffusion processes
The main result in van der Meulen et al. (2006) gives sufficient conditions for deriving posterior contraction rates in Brownian semimartingale models. The following theorem is an adaptation and refinement of Theorem 2.1 and Lemma 2.2 of van der Meulen et al. (2006) for diffusions defined on the circle. We assume observations \(X^{T}\), where \(T \rightarrow \infty .\) Let \(\Pi ^T\) be a prior on \(L^2(\mathbb {T})\) (which henceforth may depend on T) and choose measurable subsets (sieves) \({\mathscr {B}}_T \subset L^2({{\mathrm{\mathbb {T}}}})\). Define the balls
The \(\varepsilon \)covering number of a set A for a semimetric \(\rho \), denoted by \(N(\varepsilon ,A,\rho )\), is defined as the minimal number of \(\rho \)balls of radius \(\varepsilon \) needed to cover the set A. The logarithm of the covering number is referred to as the entropy.
The following theorem characterises the rate of posterior contraction for diffusions on the circle in terms of properties of the prior.
Theorem 3
Suppose \(\{\varepsilon _T\}\) is a sequence of positive numbers such that \(T \varepsilon _T^2\) is bounded away from zero. Assume that there is a constant \(\xi >0\) such that for every \(K>0\) there is a measurable set \({\mathscr {B}}_T\subseteq L^2(\mathbb {T})\) and for every \(a>0\) there is a constant \(C>0\) such that for T big enough
and
Then for every \(M_T\rightarrow \infty \)
and for K big enough,
Equations (12), (13) and (14) are referred to as the entropy condition, small ball condition and remaining mass condition of Theorem 3 respectively. The proof of this theorem is in Sect. 6.3.
4 Theorems on posterior contraction rates
The main result of this section, Theorem 9, characterises the frequentist rate of contraction of the posterior probability around a fixed parameter \(b_0\) of unknown smoothness using the truncated series prior from Sect. 2.
We make the following assumption on the true drift function.
Assumption 4
The true drift \(b_0\) can be expanded in the Faber–Schauder basis, \(b_0=z_1\psi _1+\sum _{j=0}^\infty \sum _{k=1}^{2^{j}}z_{jk}\psi _{jk}=\sum _{i\ge 1}z_i\psi _i\) and there exists a \(\beta \in (0,\infty )\) such that
Note that we use a slightly different symbol for the norm, as we denote the \(L^2\)norm by \(\Vert \cdot \Vert _2\).
Remark 5
If \(\beta \in (0,2)\), then Assumption 4 on \(b_0\) is equivalent to assuming \(b_0\) to be \(B_{\infty ,\infty }^\beta \)Besov smooth. It follows from the definition of the basis functions that
Therefore it follows from equations (4.72) (with \(r=2\)) and (4.73) (with \(p=\infty \)) in combination with equation (4.79) (with \(q=\infty \)) in Giné and Nickl (2016), Section 4.3, that \(\Vert b_0\Vert _\infty +\llbracket b_0\rrbracket _\beta \) is equivalent to the \(B_{\infty ,\infty }^\beta \)norm of \(b_0\) for \(\beta \in (0,2)\).
If \(\beta \in (0,1)\), then \(\beta \)–Hölder smoothness and \(B^\beta _{\infty ,\infty }\)–smoothness coincide (cf. Proposition 4.3.23 in Giné and Nickl (2016)).
For the prior defined in Eqs. (7)–(9) we make the following assumptions.
Assumption 6
The covariance matrix A satisfies one of the following conditions:

(A)
For fixed \(\alpha >0\), \(A_{ii}=2^{-2\alpha \ell (i)}\) and \(A_{ii'}=0\) for \(i\ne i'\).

(B)
There exist constants \(0< c_1 < c_2\) and \(c_3 > 0\) with \(3 c_3 < c_1\), independent of r, such that for all \(i, i' \in {\mathscr {I}}_r\)
$$\begin{aligned}&c_1 2^{-\ell (i)} \le A_{ii} \le c_2 2^{-\ell (i)},\\&|A_{ii'}| \le c_3 2^{-1.5(\ell (i)+\ell (i'))} \quad \text { if } i \ne i'. \end{aligned}$$
In particular, condition (B) is fulfilled by the prior defined by Eq. (10) for \(0 < \gamma \le 3/2\) and any \(\sigma ^2 > 0\).
Assumption 7
The prior on the truncation level satisfies for some positive constants \(c_1,c_2\),
For the prior on the scaling we assume existence of constants \(0<p_1<p_2\), \(q>0\) and \(C>1\) with \(p_1>q\alpha \beta \) such that
The prior on R can be defined as \(R= \lfloor \log _2 Y\rfloor \), where Y is Poisson distributed. Equation (18) is satisfied for a whole range of distributions, including the popular family of inverse gamma distributions. Since the inverse gamma prior on \(S^2\) decays polynomially (Lemma 17), condition (A2) of Shen and Ghosal (2015) is not satisfied and hence their posterior contraction results cannot be applied to our prior. We obtain the following result for our prior.
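For instance, with Y Poisson the truncation prior can be sampled as follows; conditioning on \(Y\ge 1\) so that \(\log _2 Y\) is defined is our convention, and the rate \(\lambda =2\) is illustrative (pure-Python sketch):

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's multiplicative Poisson sampler."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def sample_R(lam=2.0, rng=None):
    """R = floor(log2 Y) with Y ~ Poisson(lam), conditioned on Y >= 1."""
    rng = rng or random.Random(0)
    while True:
        y = sample_poisson(lam, rng)
        if y >= 1:
            return int(math.log2(y))     # floor, since y is a positive integer

def poisson_tail(lam, k):
    """P(Y >= k), exactly, by summing the pmf of Y ~ Poisson(lam)."""
    p, term = 0.0, math.exp(-lam)
    for i in range(k):
        p += term
        term *= lam / (i + 1)
    return 1.0 - p

draws = [sample_R(2.0, random.Random(s)) for s in range(1000)]
```

Since \(\{R\ge r\}=\{Y\ge 2^r\}\), the Poisson tail makes large truncation levels extremely unlikely, which is the qualitative behaviour the truncation prior is meant to have.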
Theorem 8
Assume \(b_0\) satisfies Assumption 4. Suppose the prior satisfies assumptions 6 and 7. Let \(\{\varepsilon _n\}_{n=1}^\infty \) be a sequence of positive numbers that converges to zero. There is a constant \(C_1>0\) such that for any \(C_2>0\) there is a measurable set \({\mathscr {B}}_n\subseteq L^2({{\mathrm{\mathbb {T}}}})\) such that for every \(a>0\) there is a positive constant \(C_3\) such that for n sufficiently large
The following theorem is obtained by applying these bounds to Theorem 3 after taking \(\varepsilon _n=(T / \log T)^{-\beta /(1+2\beta )}\).
Theorem 9
Assume \(b_0\) satisfies Assumption 4. Suppose the prior satisfies assumptions 6 and 7. Then for all \(M_T\rightarrow \infty \)
as \(T \rightarrow \infty \).
This means that when the true parameter is in \(B_{\infty ,\infty }^\beta [0,1]\) with \(\beta <2\), a rate is obtained that is optimal, possibly up to a log factor. When \(\beta \ge 2\), then \(b_0\) is in particular in the space \(B_{\infty ,\infty }^{2-\delta }[0,1]\) for every small positive \(\delta \), and therefore the posterior contracts at rate essentially \(T^{-2/5}\).
Theorem 9 and the derived results for the applications continue to hold when a different function \(\Lambda \), defined on a compact interval of \(\mathbb {R}\), is used and the basis elements are defined by \(\psi _{jk}(x)=\sum _{m\in \mathbb {Z}}\Lambda (2^{j}(x-m)-k+1)\), forcing them to be 1-periodic, provided that \(\Vert \psi _{jk}\Vert _\infty = 1\), that \(\psi _{j,k}\cdot \psi _{j,l}\equiv 0\) when \(|k-l|\ge d\) for a fixed \(d \in \mathbb {N}\), and that the smoothness assumptions on \(b_0\) are changed accordingly. A finite number of basis elements can be added or redefined, as long as they are 1-periodic.
It is easy to see that our results imply posterior convergence rates in weaker \(L^p\)-norms, \(1\le p<2,\) with the same rate. When \(p\in (2,\infty ]\) the \(L^p\)-norm is stronger than the \(L^2\)-norm. We apply ideas of Knapik and Salomond (2014) to obtain rates for these stronger \(L^p\)-norms.
Theorem 10
Assume the true drift \(b_0\) satisfies assumption 4. Suppose the prior satisfies assumptions 6 and 7. Let \(p\in (2,\infty ]\). Then for all \(M_T\rightarrow \infty \)
as \(T \rightarrow \infty .\)
These rates are similar to the rates obtained for density estimation in Giné and Nickl (2011). However, our proof is less involved. Note that we only have consistency for \(\beta >1/2-1/p\).
5 Applications to nonparametric regression and density estimation
Our general results also apply to other models. The following results are obtained for \(b_0\) satisfying Assumption 4 and the prior satisfying assumptions 6 and 7.
5.1 Nonparametric regression model
As a direct application of the properties of the prior shown in the previous section, we obtain the following result for a nonparametric regression problem. Assume
with independent Gaussian observation errors \(\eta _i\sim \mathrm N(0,\sigma ^2)\). When we apply Ghosal and van der Vaart (2007), example 7.7 to Theorem 8 we obtain, for every \(M_n\rightarrow \infty \),
as \(n\rightarrow \infty \) and (in a similar way as in Theorem 10) for every \(p\in (2,\infty ]\),
as \(n \rightarrow \infty \).
5.2 Density estimation
Let us consider n independent observations \(X^n := (X_1,\ldots ,X_n)\) with \(X_i\sim p_0\) where \(p_0\) is an unknown density on [0, 1] relative to the Lebesgue measure. Let \(\mathscr {P}\) denote the space of densities on [0, 1] relative to the Lebesgue measure. The natural distance for densities is the Hellinger distance h defined by
Define the prior on \(\mathscr {P}\) by \(p=\frac{\mathrm {e}^b}{\Vert \mathrm {e}^b\Vert _1},\) where b is endowed with the prior of Theorem 9 or its nonperiodic version. Assume that \(\log p_0\) is \(\beta \)-smooth in the sense of Assumption 4. Applying Ghosal et al. (2000), theorem 2.1 and van der Vaart and van Zanten (2008), lemma 3.1 to Theorem 8, we obtain for a big enough constant \(M>0\)
as \(n\rightarrow \infty \).
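The map from a drift-type draw b to a density \(p=\mathrm e^b/\Vert \mathrm e^b\Vert _1\) is straightforward to implement; a sketch with a stand-in draw and trapezoidal quadrature (both our choices):

```python
import numpy as np

def density_from_b(b_vals, x):
    """p = exp(b) / ||exp(b)||_1 on [0, 1], via trapezoidal quadrature."""
    w = np.exp(b_vals)
    dx = np.diff(x)
    z = np.sum((w[1:] + w[:-1]) / 2 * dx)    # ||exp(b)||_1
    return w / z

x = np.linspace(0.0, 1.0, 2001)
b = np.sin(2 * np.pi * x)                    # stand-in for a prior draw of b
p = density_from_b(b, x)
integral = np.sum((p[1:] + p[:-1]) / 2 * np.diff(x))
```

By construction p is strictly positive and integrates to one, so the prior indeed charges the space \(\mathscr {P}\) of densities on [0, 1].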
6 Proofs
6.1 Proof of lemma 1
Since conditions (ND) and (LI) of (Karatzas and Shreve 1991, theorem 5.15) hold, the SDE Eq. (1) has a unique weak solution up to an explosion time.
Assume without loss of generality that \(X_0 = 0\). Define \(\tau _0=0\) and for \(i\ge 1\) the random times
By periodicity of the drift and the Markov property, the random variables \(U_i = \tau _{i}-\tau _{i-1}\) are independent and identically distributed.
Note that
and hence nonexplosion follows from \(\lim _{n\rightarrow \infty }\sum _{i=1}^n U_i=\infty \) almost surely. The latter holds true since \(U_1>0\) with positive probability, which is clear from the continuity of diffusion paths.
6.2 Proof of lemma 2
Proof of the first part
For the proof we introduce some notation: for any (j, k), \((j', k')\) we write \((j, k) \prec (j', k')\) if \(\text {supp}\, \psi _{j',k'}\subset \text {supp}\, \psi _{j,k}\). The set of indices becomes a lattice with partial order \(\prec \), and by \((j,k) \vee (j',k')\) we denote the supremum. Identify i with (j, k) and similarly \(i'\) with \((j',k')\).
For \(i>1\), denote by \(t_i\) the time point in [0, 1] at which \(\psi _i\) attains its maximum. Without loss of generality assume \(t_i<t_{i'}\). We have \(G_{i,i'}=0\) if and only if the interiors of the supports of \(\psi _i\) and \(\psi _{i'}\) are disjoint. In that case
\(\square \)
The values of \(Z_i\) can be found by the midpoint displacement technique. The coefficients are given by \(Z_1 = V_0\), \(Z_2 = V_{\frac{1}{2}}\) and for \(j\ge 1\)
As V is a Gaussian process, the vector Z is mean-zero Gaussian, say with (infinite) precision matrix \(\Gamma \). Now \(\Gamma _{i,i'}=0\) if there exists a set \({\mathscr {L}}\subset \mathbb {N}\) with \({\mathscr {L}} \cap \{i,i'\}=\varnothing \) such that, conditional on \(\{ Z_{i^\star },\, i^\star \in {\mathscr {L}}\}\), \(Z_i\) and \(Z_{i'}\) are independent.
Define \((j^\star , k^\star )=(j,k) \vee (j',k')\) and
The set \(\{ Z_{i^\star },\, i^\star \in {\mathscr {L}}\}\) determines the process V at all times \(k 2^{-j^\star -1}\), \(k=0,\ldots ,2^{j^\star +1}\). Now \(Z_i\) and \(Z_{i'}\) are conditionally independent given \(\{V_t,\, t=k 2^{-j^\star -1},\, k=0,\ldots ,2^{j^\star +1}\}\) by (23) and the Markov property of the nonperiodic Ornstein–Uhlenbeck process. The result follows since \(\sigma (\{Z_{i^\star },\, i^\star \in {\mathscr {L}}\})=\sigma (\{V_t,\, t=k 2^{-j^\star -1},\, k=0,\ldots ,2^{j^\star +1}\})\).
Lemma 11
Let \(K(s,t)=\mathbb {E}{V}_s {V}_t= \frac{\sigma ^2}{2\gamma }\frac{1}{1-e^{-\gamma }}\left( e^{-\gamma |t-s|}+e^{-\gamma (1-|t-s|)}\right) \). If \(x \notin (s,t) \)
Proof
Without loss of generality assume that \( t \le x \le 1\). With \(m= (t+s)/2\) and \(\delta = (t-s)/2\)
The result follows from \((1-e^{-\gamma \delta })^2 e^{\gamma \delta }= 4\sinh ^2(\gamma \delta /2)\) and scaling both sides with \(\tfrac{1}{2} \frac{\sigma ^2}{2\gamma }\frac{1}{1-e^{-\gamma }} \). \(\square \)
Proof of the second part
Denote by [a, b], [c, d] the support of \(\psi _i\) and \(\psi _{i'}\) respectively and let \(m = (b+a)/2\) and \(n = (d+c)/2\), but for \(i=1\), let \(m=0\). \(Z_1 = V_0\), \(Z_2 = V_{1/2}\) and \({{\text {Var}}}\left( Z_1 \right) = {{\text {Var}}}\left( Z_2 \right) = \frac{\sigma ^2}{2\gamma }\coth (\gamma /2)\), and \({{\text {Cov}}}\left( Z_1 , Z_2 \right) = \frac{\sigma ^2}{2\gamma }\sinh ^{-1}(\gamma /2)\). Note that the \(2\times 2\) covariance matrix of \(Z_1\) and \(Z_2\) has eigenvalues \(\tfrac{\sigma ^2}{2\gamma } {\text {tanh}}(\gamma /4)\) and \(\tfrac{\sigma ^2}{2\gamma } \coth (\gamma /4)\) and is strictly positive definite. \(\square \)
By midpoint displacement, \(2Z_{i} = 2V_{m} - V_{a} - V_{b}\), \(i > 2\), and \(K(s,t)=\mathbb {E}{V}_s {V}_t= \frac{\sigma ^2}{2\gamma }\frac{1}{1-e^{-\gamma }} ( e^{-\gamma |t-s|}+e^{-\gamma (1-|t-s|)})\).
Assume without loss of generality \(b-a \ge d-c\). Define \(\delta \) to be the half-width of the smaller interval, so that \(\delta := (d-c)/2= 2^{-j'-1}\). Then
Consider three cases:

1.
The entries on the diagonal, \(i = i'\);

2.
The interiors of the supports of \(\psi _i\) and \(\psi _{i'}\) are nonoverlapping;

3.
The support of \(\psi _{i'}\) is contained in the support of \(\psi _i\).
Case 1. By elementary computations for \(i > 2\),
As \(\delta \le \tfrac{1}{4}\) and under the assumption \(\gamma \le 3/2\) the last display can be bounded by
Hence \( 0.9715\cdot 2^{-j}\sigma ^2/4\le A_{ii}\le 2^{-j}\sigma ^2/4\).
Case 2. Necessarily \(i, i' > 2\). By twofold application of lemma 11
Using the convexity of \(\sinh \) we obtain the bound
for \(0 \le x \le 1\). Note that \(f(x)=e^{-\gamma x}+e^{-\gamma (1-x)}\) is convex on [0, 1], from which we derive \(f(x)\le 1+e^{-\gamma }\). Using this bound, and the fact that for \(\gamma \le 3/2\),
which can be easily seen from a plot, that
Case 3.
For \( i' > 2\), \(i = 1\) with \(m = 0\) or \(i = 2\) with \(m = \frac{1}{2}\), using Eq. (26), we obtain
When \(i,i'>2\) then, using the calculation in Eq. (24) and Lemma 11, noting that a, b and m are not in (c, d), we obtain
Write \(x=\gamma |a-m|=\gamma |b-m|=\gamma h\delta \) and \(\alpha =\frac{|m-n|}{|b-m|}\in (0,1)\). A simple computation then shows
The derivative of \(f(\alpha ):=e^{-(1+\alpha )x}-2e^{-\alpha x}+e^{-(1-\alpha )x}\) with respect to \(\alpha \) is nonnegative for \(\alpha ,x>0\), hence \(f(\alpha )\) is increasing and so \(f(0)\le f(\alpha )\le f(1)\). Note that \(f(0)= 2e^{-x}-2\ge -2x \text { for } x>0\) and \(f(1)= e^{-2x}-2e^{-x}+1=:g(x)\). Maximising \(g'(x)\) over \(x>0\) gives \(g'(x)\le 1/2\); since \(g(0)=0\), it follows that \(f(1)=g(x)\le x/2\).
It follows that
For the other terms we derive the following bounds. Write
Now \(h(\alpha )\) is decreasing for \(x\le \log 2\), and convex and positive for \(x\ge \log 2\). In both cases we can bound \(h(\alpha )\) by its values at the endpoints \(\alpha =0\) and \(\alpha =1\). Using that \(2x\le \gamma \) we obtain \(0\le h(0)= e^{-\gamma }(2e^{x}-2)\le 2x\) and \( 0 \le h(1)= e^{-\gamma }\big (e^{2x}-2e^{x}+1\big )\le 2x\). So \(0\le h(\alpha )\le 2\gamma h\delta \).
Using the bound Eq. (25) and \(x/(1-\exp (-x))\le 1+x\) we obtain
6.3 Proof of theorem 3
A general result for deriving contraction rates for Brownian semimartingale models was proved in van der Meulen et al. (2006). Theorem 3 follows upon verifying the assumptions of this result for the diffusion on the circle. These assumptions are easily seen to boil down to:

1.
For every \(T>0\) and \(b_1,b_2\in L^2(\mathbb {T})\) the measures \(P^{b_1,T}\) and \(P^{b_2,T}\) are equivalent.

2.
The posterior as defined in equation Eq. (5) is well defined.

3.
Define the (random) Hellinger semimetric \(h_T\) on \(L^2(\mathbb {T})\) by
$$\begin{aligned} h_T^2(b_1,b_2):= \int _0^{T} \Bigl (b_1-b_2\Bigr )^2(X_t)\,{\,\mathrm {d}}t, \quad b_1,\, b_2 \in L^2(\mathbb {T}). \end{aligned}$$(28) There are constants \(0<c<C\) for which
$$\begin{aligned} \lim _{T\rightarrow \infty } P^{\theta _0,T}\Bigl (c\sqrt{T}\Vert b_1-b_2\Vert _2\le h_T(b_1,b_2) \le C\,\sqrt{T}\Vert b_1-b_2\Vert _2, \quad \forall \, b_1, b_2\in L^2(\mathbb {T}) \Bigr ) =1. \end{aligned}$$
We start by verifying the third condition. Recall that the local time of the process \(X^T\) is defined as the random process \(L_T(x)\) which satisfies
for every measurable function f for which the above integrals are defined. Since we are working with 1-periodic functions, we define the periodic local time by
Note that \(t\mapsto X_t\) is continuous with probability one. Hence its range is compact with probability one. Since \(x \mapsto L_T(x)\) is only positive on this range, it follows that the sum in the definition of \(\mathring{L}_T(x)\) has only finitely many nonzero terms and is therefore well defined. For a 1-periodic function f we have
provided the involved integrals exist. It follows from (Schauer and van Zanten 2017, Theorem 5.3) that \(\mathring{L}_T(x)/T\) converges to a positive deterministic function that depends only on \(b_0\) and is bounded away from zero and infinity. Since the Hellinger distance can be written as
it follows that the third assumption is satisfied with \(d_T(b_1,b_2)=\sqrt{T}\Vert b_1-b_2\Vert _2\).
Conditions 1 and 2 now follow by arguing precisely as in Lemmas A.2 and 3.1 of van Waaij and van Zanten (2016), respectively (the key observation being that the convergence result for \(\mathring{L}_T(x)/T\) also holds when \(\int _0^1b(x){\,\mathrm {d}}x\) is nonzero, whereas this integral is assumed to be zero in that paper).
The stated result follows from Theorem 2.1 in van der Meulen et al. (2006) (taking \(\mu _T=\sqrt{T} \varepsilon _T\) in their paper).
6.4 Proof of Theorem 8 under Assumption 6(A)
The proof proceeds by verifying the conditions of Theorem 3. By Assumption 4 the true drift can be represented as \( b_0=z_1\psi _1+\sum _{j=0}^\infty \sum _{k=1}^{2^{j}}z_{jk}\psi _{jk}\). For \(r\ge 0\), define its truncated version by
6.4.1 Small ball probability
For \(\varepsilon >0 \) choose an integer \(r_\varepsilon \) with
For notational convenience we will write r instead of \(r_\varepsilon \) in the remainder of the proof. By Lemma 16 we have \(\Vert b_0^{r}-b_0\Vert _\infty \le \varepsilon \). Therefore
which implies
Let \(f_S\) denote the probability density of S. For any \(x > 0\), we have
where
and \(p_1, p_2\) and q are taken from Assumption 7. For \(\varepsilon \) sufficiently small, we have by the second part of Assumption 7
By choice of r and the first part of Assumption 7, there exists a positive constant C such that
for \(\varepsilon \) sufficiently small.
For lower bounding the middle term in Eq. (30), we write
which implies
This gives the bound
By the choice of the \(Z_i\), for every \(i\in \{1,2,\ldots \}\) the random variable \(2^{\alpha \ell (i)}Z_i\) is standard normally distributed, and hence
where the inequality follows from Lemma 18. The third term can be bounded further: we have
Hence
For \(s \in [L_\varepsilon , U_\varepsilon ]\) and \(i\in {\mathscr {I}}_{r}\) we will now derive bounds on the first three terms on the right of Eq. (31). For \(\varepsilon \) sufficiently small we have \(r\le r+2\le 2r\) and then inequality (29) implies
Bounding the first term on the RHS of (31). For \(\varepsilon \) sufficiently small, we have
where \(\tilde{C}_{p_2, q, \beta }\) is a positive constant.
Bounding the second term on the RHS of (31). For \(\varepsilon \) sufficiently small, we have
The final inequality is immediate in case \(\alpha =\beta \); otherwise it suffices to verify that the exponent is nonnegative under the assumption \(p_1>q\alpha \beta \).
Bounding the third term on the RHS of (31). For \(\varepsilon \) sufficiently small, in case \(\beta \ge \alpha \) we have
In case \(\beta <\alpha \) we have
as the exponent of \(\varepsilon \) is positive under the assumption \(p_1>q\alpha \beta \).
Hence for \(\varepsilon \) small enough, we have
As \(2^{r+1}\ge 4C_\beta \varepsilon ^{-1/\beta }\) we get
We conclude that the right-hand side of Eq. (30) is bounded below by \(\exp \big ({C_1}\varepsilon ^{-1/\beta }\log \varepsilon \big )\), for some positive constant \(C_1\) and sufficiently small \(\varepsilon \).
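The truncated expansion \(b_0^r\) underlying this small-ball argument can be sketched numerically. The snippet below is an illustration only: it assumes the standard triangular bump \(\psi \) and the midpoint-difference coefficients of Lemma 20, which may differ from the paper's exact normalisation of \(\psi _{jk}\).

```python
import math

def psi(x):
    # standard hat function: 0 outside [0,1], peak 1 at x = 1/2
    return max(0.0, 1.0 - abs(2.0 * x - 1.0))

def coef(f, j, k):
    # z_{jk}: value at the dyadic midpoint minus the average endpoint value
    return (f((2 * k - 1) / 2 ** (j + 1))
            - 0.5 * f((k - 1) / 2 ** j) - 0.5 * f(k / 2 ** j))

def truncated(f, r, x):
    # constant term z_1 = f(0), then levels j = 0, ..., r
    s = f(0.0)
    for j in range(r + 1):
        for k in range(1, 2 ** j + 1):
            s += coef(f, j, k) * psi(2 ** j * x - k + 1)
    return s

f = lambda x: math.sin(2 * math.pi * x)   # a smooth 1-periodic test drift
err = max(abs(truncated(f, 8, i / 1000) - f(i / 1000)) for i in range(1001))
assert err < 1e-3   # for C^2 drifts the uniform error decays like 2^{-2r}
```

For periodic \(f\) the truncation at level r coincides with the piecewise linear interpolant of \(f\) on the dyadic grid of width \(2^{-(r+1)}\), which is what makes the uniform bound of Lemma 16 plausible.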
6.4.2 Entropy and remaining mass conditions
For \(r\in \{0,1,\ldots \}\) denote by \({\mathscr {C}}_r\) the linear space spanned by \(\psi _1\) and \(\psi _{jk}\), \(0 \le j \le r\), \(k \in \{1, \dots , 2^j\}\), and define
Proposition 12
For any \(\varepsilon >0\)
where \(A_\alpha =\sum _{k=0}^\infty 2^{-k\alpha }\).
Proof
We follow (van der Meulen et al. 2006, §3.2.2). Choose \(\varepsilon _0, \ldots , \varepsilon _r>0\) such that \(\sum _{j=0}^r \varepsilon _j \le \varepsilon \). Define
For each \(j\in \{1,\ldots , r\}\), let \(E_j\) be a minimal \(\varepsilon _j\)-net with respect to the max-distance on \(\mathbb {R}^{2^j}\) and let \(E_0\) be a minimal \(\varepsilon _0\)-net with respect to the max-distance on \(\mathbb {R}^2\). Hence, if \(x\in U_j\), then there exists an \(e \in E_j\) such that \(\max _k |x_k-e_k| \le \varepsilon _j\).
Take \(b\in {\mathscr {C}}_{r,t}\) arbitrary: \(b=z_1\psi _1 + \sum _{j=0}^r \sum _{k=1}^{2^j} z_{jk}\psi _{jk}\). Let \(\tilde{b} = e_1\psi _1 + \sum _{j=0}^r \sum _{k=1}^{2^j} e_{jk}\psi _{jk}\), where \((e_1, e_{0,1})\in E_0\) and \((e_{j1},\ldots , e_{j2^j}) \in E_j\) (for \(j=1,\ldots , r\)). We have
This can be bounded by \(\sum _{j=0}^r \varepsilon _j\) by an appropriate choice of the coefficients in \(\tilde{b}\). In that case we obtain \(\Vert b-\tilde{b}\Vert _\infty \le \varepsilon \). This implies
The asserted bound now follows upon choosing \(\varepsilon _j =\varepsilon 2^{-j\alpha } /A_\alpha \). \(\square \)
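The geometric budget allocation at the end of the proof can be checked numerically: with \(\varepsilon _j=\varepsilon 2^{-j\alpha }/A_\alpha \) the level budgets always sum to at most \(\varepsilon \). A small sketch:

```python
# Check the budget allocation eps_j = eps * 2^{-j*alpha} / A_alpha,
# with A_alpha = sum_{k>=0} 2^{-k*alpha} (a convergent geometric series).
def A(alpha, terms=10_000):
    return sum(2.0 ** (-k * alpha) for k in range(terms))

for alpha in (0.5, 1.0, 1.5):
    Aa = A(alpha)
    # closed form of the geometric series: 1 / (1 - 2^{-alpha})
    assert abs(Aa - 1.0 / (1.0 - 2.0 ** (-alpha))) < 1e-9
    for eps in (1.0, 0.1):
        for r in (3, 10):
            budgets = [eps * 2.0 ** (-j * alpha) / Aa for j in range(r + 1)]
            assert sum(budgets) <= eps + 1e-12   # partial sums never exceed eps
```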
Proposition 13
There exists a positive constant K such that
Proof
There exists a positive K such that
By lemma 21, this set is included in the set
By lemma 20, for any \(b=z_1\psi _1 +\sum _{j=0}^r \sum _{k=1}^{2^j} z_{jk}\psi _{jk}\) in this set we have
Hence, the set Eq. (32) is included in the set \(\left\{ b\in {\mathscr {C}}_r\,:\, \llbracket b\rrbracket _\alpha \le a(r,\varepsilon )\right\} ={\mathscr {C}}_{r, a(r,\varepsilon )}\), where \(a(r,\varepsilon )=2^{1+\alpha r} \sqrt{3} 2^{(r+1)/2} K\).
Hence,
Using Lemma 21 again the latter can be bounded by
The result follows upon applying Proposition 12. \(\square \)
We can now finish the proof for the entropy and remaining mass conditions. Choose \(r_n\) to be the smallest integer so that \(2^{r_n}\ge L\varepsilon _n^{-\frac{1}{\beta }}\), where L is a constant, and set \( {\mathscr {B}}_n={\mathscr {C}}_{r_n}\). The entropy bound then follows directly from Proposition 13.
For the remaining mass condition, using Assumption 7, we obtain
and note that the constant \(C_3\) can be made arbitrarily big by choosing L big enough.
6.5 Proof of Theorem 8 under Assumption 6(B)
We start with a lemma.
Lemma 14
Assume there exist \(\,0< c_1 < c_2\) and \(\,0 < c_3\) with \(c_3 < c_1\), independent of r, such that for all \(i, i'\) with \(2\le \ell (i),\ell (i')\le r\),
Let \(\widetilde{A}=(A_{ii'})_{2\le \ell (i),\ell (i')\le r}\) (the lower-right submatrix of \(A^r\)). Then for all \(x \in \mathbb {R}^{|{\mathscr {I}}_r|-2}\)
where \(\widetilde{\Lambda }=(\widetilde{\Lambda }_{ii'})_{2\le \ell (i),\ell (i')\le r}\) is the diagonal matrix with \(\widetilde{\Lambda }_{ii} = 2^{-\ell (i)}\).
Proof
In the following the summations are over \(i,i', 2\le \ell (i),\ell (i')\le r\). Trivially, \(x' A^r x = \sum _i x_i^2 A_{ii}+ \sum _{i \ne j} x_i A_{ij} x_j\). By the first inequality
On the other hand
At the first inequality we used the second part of (33). The second inequality follows upon including the diagonal. By Cauchy–Schwarz, this can be further bounded by
where the final inequality follows from \(\sum _i 2^{-2\ell (i)}\le \sum _{i=3}^\infty 2^{-2\ell (i)} =\sum _{j=1}^\infty 2^j 2^{-2j}=1\). The result follows by combining the derived inequalities. \(\square \)
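The geometric identity used in this final step (each level j contributes \(2^j\) indices with weight \(2^{-2j}\)) is quickly verified:

```python
# sum_{j>=1} 2^j * 2^{-2j} = sum_{j>=1} 2^{-j} = 1
s = sum(2.0 ** j * 2.0 ** (-2 * j) for j in range(1, 60))
assert abs(s - 1.0) < 1e-9
```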
We continue with the proof of Theorem 8. Write A as block matrix
with \(A_1\) a \(2\times 2\) matrix, and B, \(A_2\) defined accordingly. By Lemma 2
Define the \(2\times 2\)matrix
where \(\mathrm {I}\) is the \(2\times 2\) identity matrix. It is easy to see that \(A_1-\Lambda _1\) is positive definite.
If \(A_2 - \Lambda _2 - B(A_1-\Lambda _1)^{-1} B'\) is positive definite, then it follows from the Cholesky decomposition that \(A-\Lambda \) is positive definite, where \(\Lambda =\text {diag}(\Lambda _1,\Lambda _2)\). Note
where
Therefore
Now consider \( \tilde{A} = A_2 - \Lambda _2 - B(A_1-\Lambda _1)^{-1} B' \). By Lemma 2, the bound on \( |(B A_1^{-1} B')_{ii'}| \) and choosing \(c>0\) in the definition of \(\Lambda _1\) small enough, under the assumption that \(\gamma \le 1.5\),
and for \(i \ne i'\), \( |\tilde{A}_{ii'}| \le 0.9415\frac{\sigma ^2}{4}2^{-1.5(\ell (i)+\ell (i'))} \). Therefore, by Lemma 14, \(\tilde{A} - \Lambda _{2}\) is positive definite, where \(\Lambda _{2}\) is the diagonal matrix with diagonal entries \(2^{-\ell (i)}\).
It follows that \(x'\Lambda x \asymp x'Ax\). This implies that the small ball probabilities and the mass outside a sieve behave similarly under Assumption 6(B) as when the \(Z_{i}\) are independent normal random variables with zero mean and variance \(\xi _i^2=\Lambda _{ii}\). As this case corresponds to Assumption 6(A) with \(\alpha = \frac{1}{2}\), for which posterior contraction has already been established, the stated contraction rate under Assumption 6(B) follows from Anderson's lemma (Lemma 19).
6.6 Proof of Theorem 10: convergence in stronger norms
The linear embedding operator \(T:L^p(\mathbb {T})\rightarrow L^2(\mathbb {T}),\ x\mapsto x\) is a well-defined injective continuous operator for all \(p\in (2,\infty ]\). Its inverse is easily seen to be a densely defined, closed, unbounded linear operator. Following Knapik and Salomond (2014) we define the modulus of continuity m as
Theorem 2.1 of Knapik and Salomond (2014) adapted to our case is
Theorem 15
(Knapik and Salomond 2014) Let \(\varepsilon _n\downarrow 0\), \(T_n\uparrow \infty \) and let \(\Pi \) be a prior on \(L_p({{\mathrm{\mathbb {T}}}})\) such that
for measurable sets \({\mathscr {B}}_{n}\subset L^p({{\mathrm{\mathbb {T}}}})\). Assume that for any positive sequence \(M_n\)
then
Note that the sieves \({\mathscr {C}}_{r,t}\) defined in Sect. 6.4.2 have by Eq. (15) the property \(\Pi ({\mathscr {C}}_{r,t}^c\mid X^T)\rightarrow 0\). By Lemmas 21 and 23, the modulus of continuity satisfies \(m({\mathscr {C}}_{r,u},\varepsilon _n)\lesssim 2^{r(1/2-1/p)}\varepsilon _n\) for all \(p\in (2,\infty ]\) (with the convention \(1/\infty =0\)), and the result follows.
References
Anderson TW (1955) The integral of a symmetric unimodal function over a symmetric convex set and some probability inequalities. Proc Am Math Soc 6:170–176
Bhattacharya R, Waymire E (2007) A basic course in probability theory. Universitext, Springer, New York
Dalalyan A (2005) Sharp adaptive estimation of the drift function for ergodic diffusions. Ann Stat 33(6):2507–2528
Dalalyan AS, Kutoyants YA (2002) Asymptotically efficient trend coefficient estimation for ergodic diffusion. Math Methods Stat 11(4):402–427
Ghosal S, van der Vaart AW (2007) Convergence rates of posterior distributions for non-i.i.d. observations. Ann Stat 35(1):192–223
Ghosal S, Ghosh JK, van der Vaart AW (2000) Convergence rates of posterior distributions. Ann Stat 28(2):500–531
Giné E, Nickl R (2011) Rates of contraction for posterior distributions in \(L^r\)-metrics, \(1\le r\le \infty \). Ann Stat 39(6):2883–2911
Giné E, Nickl R (2016) Mathematical foundations of infinite-dimensional statistical models. Cambridge series in statistical and probabilistic mathematics. Cambridge University Press, Cambridge
Hindriks R (2011) Empirical dynamics of neuronal rhythms. PhD thesis, Vrije Universiteit Amsterdam
Karatzas I, Shreve SE (1991) Brownian motion and stochastic calculus. Graduate texts in mathematics, vol 113, 2nd edn. Springer, New York
Knapik BT, van der Vaart AW, van Zanten JH (2011) Bayesian inverse problems with Gaussian priors. Ann Stat 39(5):2626–2657
Knapik B, Salomond JB (2014) A general approach to posterior contraction in nonparametric inverse problems. Bernoulli
Kutoyants YA (2004) Statistical inference for ergodic diffusion processes. Springer, New York
Papaspiliopoulos O, Pokern Y, Roberts GO, Stuart AM (2012) Nonparametric estimation of diffusions: a differential equations approach. Biometrika 99(3):511
Pokern Y (2007) Fitting Stochastic Differential Equations to Molecular Dynamics Data. PhD thesis, University of Warwick
Pokern Y, Stuart AM, van Zanten JH (2013) Posterior consistency via precision operators for Bayesian nonparametric drift estimation in SDEs. Stoch Process Appl 123(2):603–628
Schauer M, van Zanten JH (2017) Uniform central limit theorems for additive functionals of diffusions on the circle. In preparation
Shen W, Ghosal S (2015) Adaptive Bayesian procedures using random series priors. Scand J Stat 42(4):1194–1213
Spokoiny VG (2000) Adaptive drift estimation for nonparametric diffusion model. Ann Stat 28(3):815–836
Strauch C (2015) Sharp adaptive drift estimation for ergodic diffusions: the multivariate case. Stoch Process Appl 125(7):2562–2602
van der Meulen FH, van der Vaart AW, van Zanten JH (2006) Convergence rates of posterior distributions for Brownian semimartingale models. Bernoulli 12(5):863–888
van der Meulen FH, Schauer M, van Zanten JH (2014) Reversible jump MCMC for nonparametric drift estimation for diffusion processes. Comput Stat Data Anal 71:615–632
van der Vaart AW, van Zanten JH (2008) Rates of contraction of posterior distributions based on Gaussian process priors. Ann Stat 36(3):1435–1463
van Waaij J, van Zanten H (2016) Gaussian process methods for one-dimensional diffusions: optimal rates and adaptation. Electron J Stat 10(1):628–645
van Zanten JH (2013) Nonparametric Bayesian methods for one-dimensional diffusion models. Math Biosci 243(2):215–222
Acknowledgements
This work was partly supported by the Netherlands Organisation for Scientific Research (NWO) under the research programme “Foundations of nonparametric Bayes procedures”, 639.033.110 and by the ERC Advanced Grant “Bayesian Statistics in Infinite Dimensions”, 320637.
Lemmas used in the proofs
Lemma 16
Suppose z has Faber–Schauder expansion
If \(\llbracket z\rrbracket _\beta <\infty \) (with the norm defined in (16)), then for \(r\ge 1\)
Proof
This follows from
Lemma 17
If \(X \sim \mathrm{IG}(A,B)\) then for any \(M>0\),
Proof
This follows from
\(\square \)
Lemma 18
Let \(X \sim \mathrm N(0,1)\), \(\theta \in \mathbb {R}\) and \(\varepsilon > 0\). Then
Proof
Note that
and
thus \(e^{-\frac{1}{2}(x+\theta )^2}\ge e^{-\theta ^2}e^{-\frac{1}{2}(\sqrt{2} x)^2},\) hence
Now the elementary bound \(\int _{-y}^y e^{-\frac{1}{2} x^2 }{\,\mathrm {d}}x\ge 2y e^{-\frac{1}{2} y^2}\) gives
\(\square \)
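As an illustration (not part of the proof), the elementary bound \(\int _{-y}^{y} e^{-x^2/2}\,\mathrm dx\ge 2ye^{-y^2/2}\) used in the last step, which holds because the integrand exceeds its value at the endpoints, can be checked with a simple midpoint rule:

```python
import math

def integral(y, n=20_000):
    # midpoint rule for the integral of e^{-x^2/2} over [-y, y]
    h = 2 * y / n
    return sum(math.exp(-0.5 * (-y + (i + 0.5) * h) ** 2) for i in range(n)) * h

for y in (0.1, 0.5, 1.0, 2.0, 5.0):
    assert integral(y) >= 2 * y * math.exp(-0.5 * y * y)
```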
Lemma 19
(Anderson’s lemma) Define a partial order on the space of \(n\times n\)matrices (\(n\in \mathbb {N}\cup \{\infty \}\)) by setting \(A\le B,\) when \(BA\) is positive definite. If \(X \sim \mathrm N(0,\Sigma _X)\) and \(Y \sim \mathrm N(0, \Sigma _Y)\) independently with \(\Sigma _X \le \Sigma _Y \), then for all symmetric convex sets C
Proof
See Anderson (1955). \(\square \)
Lemma 20
Let
Then
Proof
Note that \(|z_{1}|=|f(0)|\le 2\Vert f\Vert _\infty \) and \(|z_{0,1}|=|f(1/2)|\le 2\Vert f\Vert _\infty \), and inductively, for \(j\ge 1\), \(z_{jk}=f\big ((2k-1)2^{-(j+1)}\big )-\frac{1}{2}f\big (2^{-j}(k-1)\big )-\frac{1}{2}f\big (2^{-j}k\big )\), hence \(|z_{jk}|\le 2\Vert f\Vert _\infty \). \(\square \)
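The coefficient bound \(|z_{jk}|\le 2\Vert f\Vert _\infty \) can be spot-checked numerically. The snippet assumes the midpoint-difference form of the coefficients as in the proof above; the test function is an arbitrary smooth 1-periodic example.

```python
import math

# Spot check of |z_{jk}| <= 2 * ||f||_inf for the midpoint-difference
# coefficients, on an arbitrary smooth 1-periodic test function.
f = lambda x: math.sin(2 * math.pi * x) + 0.3 * math.cos(6 * math.pi * x)
sup = max(abs(f(i / 10_000)) for i in range(10_001))   # approximates ||f||_inf
for j in range(1, 7):
    for k in range(1, 2 ** j + 1):
        z = (f((2 * k - 1) / 2 ** (j + 1))
             - 0.5 * f((k - 1) / 2 ** j) - 0.5 * f(k / 2 ** j))
        assert abs(z) <= 2 * sup
```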
Lemma 21
Let \({\mathscr {C}}_{r}\) be as in Sect. 6.4.2. Then
Proof
Let \(f\in {\mathscr {C}}_{r}\) be nonzero. Note that for any constant \(c>0\),
Hence, we may and do assume that \(\Vert f\Vert _\infty =1\). Furthermore, since the \(L^2\)- and \(L^\infty \)-norms of f and \(-f\) are the same, we may also assume that f is nonnegative.
Let \(x_0\) be a global maximum of f. Clearly \(f(x_0)=1\). Since f is a linear interpolation between the points \(\{k2^{-r-1}: k=0,1,\ldots , 2^{r+1}\}\), we may also assume that \(x_0\) is of the form \(x_0=k2^{-r-1}\). We consider two cases:

(i) \(0\le k<2^{r+1}\), and (ii) \(k=2^{r+1}\).
In case (i) we have that \(f(x)\ge \big (1-2^{r+1}(x-k2^{-r-1})\big )I_{[k2^{-r-1},(k+1)2^{-r-1}]}(x)\), for all \(x\in [k2^{-r-1},(k+1)2^{-r-1}]\). In case (ii) \(f(x)\ge 2^{r+1}(x-1+2^{-r-1})I_{[1-2^{-r-1},1]}(x)\), for all \(x\in [1-2^{-r-1},1]\). Hence, in both cases,
Thus
uniformly over all nonzero \(f\in {\mathscr {C}}_{r,s}\). \(\square \)
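The norm comparison behind this lemma, namely \(\Vert f\Vert _\infty \le \sqrt{3}\, 2^{(r+1)/2}\Vert f\Vert _2\) for piecewise linear f on the dyadic grid of width \(2^{-(r+1)}\) (the constant that reappears in \(a(r,\varepsilon )\) in Sect. 6.4.2), can be illustrated numerically on random elements of \({\mathscr {C}}_r\):

```python
import random

# ||f||_inf <= sqrt(3) * 2^{(r+1)/2} * ||f||_2 for the linear interpolant of
# random node values on the grid of width 2^{-(r+1)}.
random.seed(42)
for r in range(1, 7):
    n = 2 ** (r + 1)                       # number of grid cells
    vals = [random.uniform(-1.0, 1.0) for _ in range(n)]
    vals.append(vals[0])                   # 1-periodicity: f(1) = f(0)
    sup = max(abs(v) for v in vals)        # ||f||_inf is attained at a node
    # exact squared L2 norm of the interpolant: each cell of width h = 1/n
    # contributes h * (a^2 + a*b + b^2) / 3
    l2sq = sum((a * a + a * b + b * b) / 3.0
               for a, b in zip(vals, vals[1:])) / n
    assert sup <= 3 ** 0.5 * 2 ** ((r + 1) / 2) * l2sq ** 0.5 + 1e-12
```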
Lemma 22
Let \(a_1,a_2,x_1,x_2\) be positive numbers. Then
Proof
Suppose that the lemma is not true, so there are positive \(a_1,a_2,x_1,x_2\) such that,
Hence, both terms on the right-hand side are negative. In particular, this means for the first term that \(x_2/x_1< a_2 /a_1\). For the second term this gives \(x_1/x_2<a_1 /a_2\). These two inequalities cannot hold simultaneously and we have reached a contradiction. \(\square \)
Lemma 23
Let \({\mathscr {C}}_r\) and \({\mathscr {C}}_{r,s}\) be as in Sect. 6.4.2. Then for \(p\in [2,\infty )\),
Proof
Let \(f\in {\mathscr {C}}_{r}\). Just as in the proof of Lemma 21 we may assume that f is nonnegative and \(\Vert f\Vert _2=1\). Hence
Note that
Hence, by repeatedly applying lemma 22
Note that f is a linear interpolation between the points \(k2^{-r-1}\), \(k\in \{0,1,\ldots ,2^{r+1}\}\).
Now consider affine functions \(g:[0,2^{-r-1}]\rightarrow \mathbb {R}\) which are positive. The maximum of g is attained at either 0 or \(2^{-r-1}\); without loss of generality assume it is attained at 0. Using scaling at a later stage of the proof, we assume for the moment that \(g(0)=1\). Hence \(a:=g(2^{-r-1})\in [0,1]\). Note that
When \(a=1\), \(\Vert g\Vert _p=\Vert g\Vert _2=1\). Now consider \(a<1\),
Let \(y=\frac{2^{-r-1}}{1-a}-x\); then \(x=\frac{2^{-r-1}}{1-a}-y\) and \({\,\mathrm {d}}x=-{\,\mathrm {d}}y\). Hence
Note that for a constant \(c>0\) and a function h,
Let
Hence cg has \(L^2\)-norm one and
The maximum is attained for \(a=0\), then
Hence
and the result follows, using that \(\Vert fI_{(k2^{-r-1},(k+1)2^{-r-1})}\Vert _2^2\le \Vert f\Vert _2^2\) and that for \(0<c'<c\),
\(\square \)
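The final optimisation step, that among affine \(g_a(t)=1-(1-a)t\) with \(a\in [0,1]\) the ratio \(\Vert g_a\Vert _p/\Vert g_a\Vert _2\) is largest at \(a=0\), can be explored numerically (after rescaling the domain to [0, 1], which leaves the ratio unchanged):

```python
def norm(a, q, n=50_000):
    # midpoint rule for (integral_0^1 (1 - (1-a) t)^q dt)^{1/q}
    h = 1.0 / n
    return (sum((1 - (1 - a) * ((i + 0.5) * h)) ** q
                for i in range(n)) * h) ** (1.0 / q)

for p in (3, 4, 8):
    ratios = [norm(a / 20, p) / norm(a / 20, 2) for a in range(21)]
    assert max(ratios) == ratios[0]                    # maximum at a = 0
    # closed form at a = 0: ||g_0||_p / ||g_0||_2 = sqrt(3) / (p+1)^{1/p}
    assert abs(ratios[0] - 3 ** 0.5 / (p + 1) ** (1.0 / p)) < 1e-3
```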
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Cite this article
van der Meulen, F., Schauer, M. & van Waaij, J. Adaptive nonparametric drift estimation for diffusion processes using Faber–Schauder expansions. Stat Inference Stoch Process 21, 603–628 (2018). https://doi.org/10.1007/s11203-017-9163-7