
Is the Brownian bridge a good noise model on the boundary of a circle?

Annals of the Institute of Statistical Mathematics

Abstract

In this paper, we study periodic stochastic processes and define the conditions a model must satisfy to be a good noise model on the circumference of a circle. The classes of processes that fit the required conditions are studied, together with their expansions in random Fourier series, to provide results about their path regularity. Finally, we discuss a simple and flexible parametric model with prescribed regularity that is used in applications, and we prove the asymptotic properties of the maximum likelihood estimates of the model parameters.


References

  • Ash, R. B. (1990). Information theory. New York: Dover Publications Inc. (corrected reprint of the 1965 original).


  • Bass, R. F. (2011). Stochastic processes. Cambridge series in statistical and probabilistic mathematics (Vol. 33). Cambridge: Cambridge University Press.


  • Brigham, E. O. (1982). FFT: schnelle Fourier-Transformation. Einführung in die Nachrichtentechnik [Introduction to Information Technology]. Munich: R. Oldenbourg Verlag (translated from the English by Seyed Ali Azizi).

  • Burrough, P. A., Frank, A. (1996). Geographic objects with indeterminate boundaries: GISDATA 2 (GISDATA Series). Great Britain: Taylor & Francis.

  • Dioguardi, N., Franceschini, B., Aletti, G., Russo, C., Grizzi, F. (2003). Fractal dimension rectified meter for quantification of liver fibrosis and other irregular microscopic objects. Analytical and Quantitative Cytology and Histology, 25(6), 312–320.

  • Dutt, A., Rokhlin, V. (1993). Fast Fourier transforms for nonequispaced data. SIAM Journal on Scientific Computing, 14(6), 1368–1393. doi:10.1137/0914081.

  • Hall, P., Heyde, C. C. (1980). Martingale limit theory and its application. New York: Academic Press Inc. [Harcourt Brace Jovanovich Publishers].

  • Heyde, C. C. (1997). Quasi-likelihood and its application: a general approach to optimal parameter estimation. Springer series in statistics. New York: Springer. doi:10.1007/b98823.

  • Hobolth, A. (2003). The spherical deformation model. Biostatistics, 4(4), 583–595.


  • Hobolth, A., Vedel Jensen, E. (2002). A note on design-based versus model-based estimation in stereology. Advances in Applied Probability, 34(1), 484–490.

  • Hobolth, A., Pedersen, J., Vedel Jensen, E. (2003). A continuous parametric shape model. Annals of the Institute of Statistical Mathematics, 55(2), 227–242.

  • Karhunen, K. (1947). Über lineare Methoden in der Wahrscheinlichkeitsrechnung. Annales Academiæ Scientiarum Fennicæ. Series A 1, Mathematica Physica, 1947(37), 79.


  • Kroese, D. P., Taimre, T., Botev, Z. I. (2011). Handbook of Monte Carlo methods. New Jersey: Wiley.

  • van Lieshout, M. N. M. (2013). A spectral mean for point sampled closed curves. arXiv:1310.7838 (arXiv preprint).

  • Lorentz, G. G. (1948). Fourier-Koeffizienten und Funktionenklassen. Mathematische Zeitschrift, 51, 135–149.


  • Manganaro, G. (2011). Advanced Data Converters. Cambridge: Cambridge University Press.


  • Patel, J. K., Read, C. B. (1982). Handbook of the normal distribution, statistics: textbooks and monographs (Vol. 40). New York: Marcel Dekker Inc.

  • Revuz, D., Yor, M. (1999). Continuous martingales and Brownian motion, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], (3rd ed., Vol. 293). Berlin: Springer.

Author information

Corresponding author: Matteo Ruffini.

Appendices

Appendix 1: Proofs of results of Sect. 2

Proof of Theorem 2

By Mercer's Theorem (see, e.g., Ash 1990), we know that if \( \{e_n\}_{n\ge 0} \) is an orthonormal basis for the space spanned by the eigenfunctions corresponding to nonzero eigenvalues of the integral operator (1), then, uniformly, absolutely and in \( L^2([0,1]\times [0,1]) \),

$$\begin{aligned} C(s,t) = \sum _{k=0}^{\infty }{e_k(t)e_k(s)\lambda _k}, \end{aligned}$$
(9)

where \(\lambda _k\) is the eigenvalue corresponding to \(e_k\). By hypothesis, since \(C(s,t) = \tilde{C}(|t-s|) = \tilde{C}(|t-s|+1)\) by Remark 1, we get

$$\begin{aligned} \int _{0}^{1}{\tilde{C}(s)\cos (2n\pi s)\mathrm{d}s} = a_n, \quad \int _{0}^{1}{\tilde{C}(s)\sin (2n\pi s)\mathrm{d}s} = 0, \end{aligned}$$
(10)

and hence

$$\begin{aligned} \tilde{C}(\tau ) = a_0 + 2 \sum _{n=1}^\infty a_n \cos (2 n \pi \tau ). \end{aligned}$$
(11)

It is simple to prove that the sequence \( \{\mathbf {s}_{n}(t),\mathbf {c}_{n}(t)\}_{n\ge 0} \) contains all the eigenfunctions of the operator (1). In fact,

$$\begin{aligned} \int _{0}^{1}{C}(t,\tau )\mathbf {c}_{n}(\tau ) \mathrm{d}\tau&= \sqrt{2}\int _{0}^{1}{\tilde{C}(s)\cos (2n\pi (t+s))\mathrm{d}s}\nonumber \\&=\mathbf {c}_{n}(t)\int _{0}^{1}{\cos (2n\pi s)\tilde{C}(s)\mathrm{d}s} - \mathbf {s}_{n}(t)\int _{-1/2}^{1/2}{\sin (2n\pi s)\tilde{C}(s)\mathrm{d}s} \nonumber \\&=a_n\mathbf {c}_{n}(t), \end{aligned}$$
(12)

the same relation holding when \(\mathbf {c}_{n}(t)\) is replaced by \(\mathbf {s}_{n}(t)\). By (11), we get

$$\begin{aligned} C(s,t)&= \tilde{C}(s-t) = a_0+ 2\sum _{k=1}^{\infty }{a_k\cos (2k\pi (s- t))}\\&= a_0+ 2 \sum _{k=1}^{\infty }{a_k\cos (2k\pi s)\cos (2k\pi t)} + 2 \sum _{k=1}^{\infty }{a_k\sin (2k\pi s)\sin (2k\pi t)} \\&= a_0+\sum _{k=1}^{\infty }{a_k\mathbf {c}_{k}(s)\mathbf {c}_{k}(t)} + \sum _{k=1}^{\infty }{a_k\mathbf {s}_{k}(s)\mathbf {s}_{k}(t)} \end{aligned}$$

where this equality holds uniformly, absolutely and in \( L^2([0,1]\times [0,1]) \) by Mercer's Theorem (cf. (9)).

Now, since C(s,t) is a covariance function, it is positive semi-definite, and hence \(a_n\ge 0\) for all \(n\). Moreover, since \( \{a_n\}_{n\ge 0}\in \ell ^1 \), if we define \(c_n = \sqrt{a_n}\), then \( \{c_n\}_{n\ge 0} \in \ell ^2\). From Theorem 1, we deduce the existence of two independent sequences of independent standard Gaussian variables \( \{Y_k\}_{k \ge 1}\) and \(\{Y'_k\}_{k \ge 0}\) such that, in mean square and uniformly in t,

$$\begin{aligned} x_t = c_0Y'_0+\sum _{k=1}^{\infty }c_k(Y_k \mathbf {s}_{k}(t) + Y'_k \mathbf {c}_{k}(t)). \end{aligned}$$

\(\square \)
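The expansion in Theorem 2 is straightforward to simulate. Below is a minimal sketch (ours, not from the paper) that draws sample paths from the truncated random Fourier series with the hypothetical choice \(c_k = k^{-p}\), \(p>1/2\), and compares the empirical covariance with \(C(s,t) = c_0^2 + 2\sum _k c_k^2\cos (2k\pi (s-t))\):

```python
# Simulation sketch of the random Fourier series of Theorem 2.
# Assumption (not from the paper): coefficients c_k = k^(-p) with p > 1/2.
import numpy as np

rng = np.random.default_rng(0)
K, n_paths, n_grid, p = 200, 20_000, 50, 1.0

t = np.linspace(0.0, 1.0, n_grid, endpoint=False)
k = np.arange(1, K + 1)
c = k.astype(float) ** (-p)     # hypothetical c_k, in l^2
c0 = 1.0

# s_k(t) = sqrt(2) sin(2 k pi t), c_k(t) = sqrt(2) cos(2 k pi t)
S = np.sqrt(2.0) * np.sin(2.0 * np.pi * np.outer(k, t))
Cb = np.sqrt(2.0) * np.cos(2.0 * np.pi * np.outer(k, t))

Y = rng.standard_normal((n_paths, K))
Yp = rng.standard_normal((n_paths, K))
Y0 = rng.standard_normal(n_paths)

# x_t = c_0 Y'_0 + sum_k c_k (Y_k s_k(t) + Y'_k c_k(t))
X = c0 * Y0[:, None] + (Y * c) @ S + (Yp * c) @ Cb

emp_cov = np.cov(X, rowvar=False)
lag = t[None, :] - t[:, None]
th_cov = c0**2 + 2.0 * np.einsum(
    'k,kij->ij', c**2, np.cos(2.0 * np.pi * k[:, None, None] * lag))
print(np.abs(emp_cov - th_cov).max())   # Monte Carlo error, O(1/sqrt(n_paths))
```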

Proof of Theorem 3

The sequence of Gaussian processes \( y^{(n)}_t \) converges in mean square, uniformly in t, to a periodic process \( \{y_t\}_{t\in [0,1]} \), since it is a Cauchy sequence:

$$\begin{aligned} \sup _{t\in [0,1]}E[| y^{(n)}_t - y^{(m)}_t |^{2}] =2 \sum _{k=n+1}^{m}c_k^2 \mathop {\longrightarrow }_{m,n\rightarrow \infty } 0. \end{aligned}$$

Hence, \( E[y_t] \equiv 0 \), and

$$\begin{aligned} Cov(y_t,y_s) = c_0^2+ 2\sum _{k=1}^{\infty }{c_k^2\cos (2k\pi (s-t))} \end{aligned}$$

is a continuous function. Finally, \( \{y_t\}_{t\in [0,1]} \) is a Gaussian process, since the two sequences \( \{Y_k\}_{k \ge 1}\) and \(\{Y'_k\}_{k \ge 0}\) are formed by independent Gaussian variables. \(\square \)
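The Cauchy bound above also tells how to truncate the series in practice: the mean-square error of the truncation at level n is \(2\sum _{k>n}c_k^2\). A small sketch (again with the hypothetical coefficients \(c_k=k^{-p}\)) that picks the truncation level for a target accuracy:

```python
# Truncation-level sketch based on the tail bound 2 * sum_{k>n} c_k^2 < eps.
# Assumption (not from the paper): c_k = k^(-p) with p = 1.
import numpy as np

p, eps = 1.0, 1e-4
c2 = np.arange(1, 1_000_001, dtype=float) ** (-2.0 * p)   # c_k^2
tail = c2.sum() - np.cumsum(c2)        # tail[n-1] = sum_{k>n} c_k^2
n_eps = int(np.argmax(2.0 * tail < eps)) + 1
print(n_eps)   # smallest n with 2 * sum_{k>n} c_k^2 < eps
```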

Proof of Theorem 4

Necessity. Assume there exists a process \(\{x_t\}_{t\in [0,1]}\in \mathscr {H}_Z\) which generates \(\{y_t\}_{t\in [0,1]}\in \mathscr {H}_0\). The covariance function C(st) of \(\{x_t\}_{t\in [0,1]}\) is given as in (2):

$$\begin{aligned} C(s,t) = c_0^2+\sum _{k=1}^{\infty }{c_k^2\mathbf {c}_{k}(s)\mathbf {c}_{k}(t)} + \sum _{k=1}^{\infty }{c_k^2\mathbf {s}_{k}(s)\mathbf {s}_{k}(t)}. \end{aligned}$$

If we define \(x={{C}(0,0)} = \sum _0^\infty c_k^2\), \(p_i = c_i^2/x\), and

$$\begin{aligned} D(s,t) = \frac{C(s,t)}{x} = p_0+\sum _{k=1}^{\infty }{p_k\mathbf {c}_{k}(s)\mathbf {c}_{k}(t)} + \sum _{k=1}^{\infty }{p_k\mathbf {s}_{k}(s)\mathbf {s}_{k}(t)}, \end{aligned}$$

then, \(x>0\) and, by (3), we obtain

$$\begin{aligned} x R(s,t)&= {D}(s,t) - {D}(0,t) {D}(s,0) \nonumber \\&= p_0+\sum _{k=1}^{\infty }{p_k\mathbf {c}_{k}(s)\mathbf {c}_{k}(t)} + \sum _{k=1}^{\infty }{p_k\mathbf {s}_{k}(s)\mathbf {s}_{k}(t)}\nonumber \\&\quad - \Big (p_0+\sum _{k=1}^{\infty }p_k\mathbf {c}_{k}(s) \Big )\Big ( p_0+\sum _{k=1}^{\infty }p_k\mathbf {c}_{k}(t) \Big ). \end{aligned}$$
(13)

(13) and (4) give

$$\begin{aligned} x r^{ss}_{kj}&= {\left\{ \begin{array}{ll} p_k &{}\quad \text {if }k=j >0\\ 0 &{}\quad \text {if }k\ne j \end{array}\right. }\end{aligned}$$
(14)
$$\begin{aligned} x r^{cc}_{kj}&= {\left\{ \begin{array}{ll} p_k - p_k^2 &{}\quad \text {if }k=j \ge 0\\ -p_kp_j &{}\quad \text {if }k\ne j \end{array}\right. } \end{aligned}$$
(15)
$$\begin{aligned} r^{sc}_{kj}&= r^{cs}_{kj} =0. \end{aligned}$$

Since \(\sum _0^\infty p_k = 1\), if \(\bar{r} = \sum _{k=1}^\infty r^{ss}_{kk}\), we obtain by (14)

$$\begin{aligned} x\bar{r} = 1-p_0. \end{aligned}$$

Assume \(\bar{r}=0\); then \(p_0=1\), which is absurd since \(R(s,t)\ne 0\). Hence \(\bar{r}>0\), and we obtain

$$\begin{aligned} x = \frac{\bar{r}-r^{cc}_{00}}{\bar{r}^2}. \end{aligned}$$
(16)

The thesis follows by combining (16) and (15).

Sufficiency. Given the matrices of the 2-D Fourier series as in the theorem's assumptions, set \(x>0\) as in (16). Define

$$\begin{aligned} p_k = r^{ss}_{kk}\frac{\bar{r}-r^{cc}_{00}}{\bar{r}^2}, \qquad p_0 = \frac{r^{cc}_{00}}{\bar{r}}. \end{aligned}$$

Then, \(\{p_k\}_{k\ge 0}\) is a nonnegative sequence such that \(\sum _kp_k=1\). Define

$$\begin{aligned} x_t = \sqrt{xp_0}Y'_0+\sum _{k=1}^{\infty }\sqrt{xp_k}(Y_k \mathbf {s}_{k}(t)+ Y'_k \mathbf {c}_{k}(t)). \end{aligned}$$

By Theorem 2, we have

$$\begin{aligned} C(s,t) = x\Bigg (p_0+\sum _{k=1}^{\infty }{p_k\mathbf {c}_{k}(s)\mathbf {c}_{k}(t)} + \sum _{k=1}^{\infty }{p_k\mathbf {s}_{k}(s)\mathbf {s}_{k}(t)}\Bigg ). \end{aligned}$$

It is straightforward to check that (14) and (15) hold. Uniqueness of the solution follows immediately from the necessity part. \(\square \)
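The inversion formulas of Theorem 4 can be checked numerically. The following toy sketch (ours; the values of x and \(\{p_k\}\) are hypothetical) builds the Fourier coefficients of R as in (14)–(15) and recovers x, \(p_0\) and \(p_k\) via (16):

```python
# Numerical check of the inversion formulas in the proof of Theorem 4.
import numpy as np

x = 2.5
p = np.array([0.3, 0.4, 0.2, 0.1])   # p_0, p_1, p_2, p_3; sums to 1

r_ss = p[1:] / x                     # (14): x r^{ss}_{kk} = p_k, k >= 1
r_cc00 = (p[0] - p[0] ** 2) / x      # (15) with k = j = 0
r_bar = r_ss.sum()                   # = (1 - p_0) / x

x_rec = (r_bar - r_cc00) / r_bar**2  # (16)
p0_rec = r_cc00 / r_bar
pk_rec = x_rec * r_ss

print(np.isclose(x_rec, x), np.isclose(p0_rec, p[0]),
      np.allclose(pk_rec, p[1:]))    # all True
```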

Appendix 2: Proof of Theorem 5

The case \(x_t\equiv k\) is obvious. Let \( C(t,s) = \tilde{C}(t-s) \) be the covariogram function of \(\{x_t\}_{t\in [0,1]}\) [see (2) for its expansion]. Since \(x_t\equiv k\iff \tilde{C}(0)=0\), we assume, without loss of generality, that \(\tilde{C}(0)=1\).

A straightforward computation shows that, if \( \{y_t\}_{t\in [0,1]}\in \mathscr {H}_0 \) is generated by \( \{x_t\}_{t\in [0,1]}\in \mathscr {H}\), then \(\{y_t\}_{t\in [0,1]}\) is a Gaussian process with zero mean and continuous covariance function

$$\begin{aligned} R(t,s) = \tilde{C}(t-s) - \frac{\tilde{C}(t)\tilde{C}(s)}{\tilde{C}(0)} = \tilde{C}(t-s) - {\tilde{C}(t)\tilde{C}(s)}. \end{aligned}$$
(17)

Hence, given the covariogram function \(C(s,t)=\tilde{C}(t-s)\) of the generating process \( \{x_t\}_{t\in [0,1]}\), we need to study the spectrum of the operator (1), where C is replaced by R given in (17).

As in (11) and (2), we write \( \tilde{C}(t) = a_0 + 2\sum _{n=1}^{\infty }a_n\cos (2n\pi t) \) with \( 1 = a_0 + 2\sum _{n=1}^{\infty }a_n \) since \(\tilde{C}(0)=1\).

Let f(t) be an eigenfunction of the operator with kernel (17); from the expansion theorem (see Ash 1990), we have, in \( L^2[0,1] \),

$$\begin{aligned} f(t) =f_0 + \sum _{n=1}^{\infty }\big (f_n^{c}\mathbf {c}_{n}(t) + f_n^{s}\mathbf {s}_{n}(t)\big ), \end{aligned}$$
(18)

where \( f_0 = \int _{0}^{1}f(\tau )\mathrm{d}\tau \), \( f_n^{c} = \int _{0}^{1} \mathbf {c}_{n}(\tau )f(\tau )\mathrm{d}\tau \) and \( f_n^{s} = \int _{0}^{1} \mathbf {s}_{n}(\tau )f(\tau )\mathrm{d}\tau \). Let us look for the eigenvalue related to f:

$$\begin{aligned} \int _{0}^{1}{R(s,t)f(t)\mathrm{d}t} = \int _{0}^{1}{\tilde{C}(t-s)f(t)\mathrm{d}t} -\tilde{C}(s) \int _{0}^{1}{\tilde{C}(t)f(t)\mathrm{d}t} = \tilde{a} f(s). \end{aligned}$$
(19)

Substituting (18) into (19), and integrating with the results in (10) and (12), yields

$$\begin{aligned} a_0f_0 +\sum _{n=1}^{\infty }a_n(f_n^{c} \mathbf {c}_{n}(s) + f_n^{s} \mathbf {s}_{n}(s)) - \tilde{C}(s)\left( a_0 f_0+\sqrt{2}\sum _{n=1}^{\infty }a_n f_n^{c} \right) = \tilde{a} f(s). \end{aligned}$$
(20)

1.1 \(\mathbf {s}_{n}(s)\) eigenfunctions

For any \(a_n\ne 0\), a direct substitution in (20) shows that \(f(s)=\mathbf {s}_{n}(s)\) is an eigenfunction with \(\tilde{a}=a_n\). Moreover, we will show more: the only eigenfunctions containing some \(f_k^{s}\ne 0\) are precisely the \(\mathbf {s}_{n}(s)\) (when \(a_n\ne 0\)).

Assume that \(\exists k:f_k^{s}\ne 0\) and, by contradiction, \(f(t)\ne \mathbf {s}_{k}(t)\).

By multiplying both sides of (20) by \( \mathbf {s}_{k}(s) \) and integrating, we obtain \(a_kf_k^{s} = \tilde{a} f_k^{s} \), i.e., \( a_k= \tilde{a} \). Since \(a_k\ne 0\), \(\mathbf {s}_{k}(t)\) is an eigenfunction. This eigenfunction is orthogonal to f(s) by Mercer's Theorem, and hence

$$\begin{aligned} 0 = \int _{0}^{1}\mathbf {s}_{k}(s)f(s)\mathrm{d}s = f_k^{s}. \end{aligned}$$

Summing up, for any \(a_n\ne 0\), \(\mathbf {s}_{n}(t)\) is an eigenfunction associated with \(\tilde{a}=a_n\), and the other eigenfunctions do not contain the terms in \(\{\mathbf {s}_{n}(t)\}_{n\ge 1}\) (they are even functions).

1.2 The other eigenfunctions of (20)

To conclude the proof, we must find another sequence of eigenfunctions, with eigenvalues \(\{\tilde{a}_n\}_{n\ge 1}\asymp \{a_n\}_{n\ge 1}\). We will first obtain a simple result on the coefficients of the eigenfunctions. Then, we will introduce the multiplicity of the eigenvalues \(\{a_n\}_{n\ge 1}\) to conclude the proof accordingly.

The other eigenfunctions take the form \( f(t) =f_0 + \sum _{k=1}^{\infty }f_k^{c}\mathbf {c}_{k}(t) \). By multiplying both sides of (20) by \( \mathbf {c}_{n}(s) \) and integrating, we obtain

$$\begin{aligned} {\left\{ \begin{array}{ll} a_0 f_0 - a_0(a_0 f_0 +\sqrt{2}\sum _{k=1}^{\infty }a_k f^{c}_k ) = \tilde{a} f_0, &{} n= 0; \\ a_n f^{c}_n - \sqrt{2}a_n(a_0 f_0 + \sqrt{2}\sum _{k=1}^{\infty }a_k f^{c}_k ) = \tilde{a} f^{c}_n , &{} n>0 .\end{array}\right. } \end{aligned}$$
(21)

As an immediate consequence, \((a_n = 0) \Rightarrow (f_n^{c}=0)\).

Lemma 1

\( \{f_n^{c}\}_{n\ge 0} \in \ell ^1\), and \( f_0 + \sqrt{2}\sum _{n=1}^{\infty }f_n^{c}= 0. \)

Proof

Recall that \(a_n\ge 0\), and that \(a_0 + 2\sum _{n=1}^{\infty }a_n = \tilde{C}(0) = 1 \). For \(n>0\), by (21), we have

$$\begin{aligned} | f_n^{c} | \le \frac{a_n |f^{c}_n| + \sqrt{2}a_n(a_0 |f_0| + \sqrt{2}\sum _{k=1}^{\infty }a_k |f^{c}_k| )}{\tilde{a}}, \end{aligned}$$

and since \(\{a_k |f^{c}_k|\}_{k\ge 0}\in \ell ^1\) (as a product of two \(\ell ^2\) sequences) and \(\{a_n\}_{n\ge 1}\in \ell ^1\), we obtain the first part of the thesis. By (21) and \(a_0 + 2\sum _{n=1}^{\infty }a_n = \tilde{C}(0) =1 \), we get

$$\begin{aligned} f_0 + \sqrt{2}\sum _{n=1}^{\infty }f_n^{c}&= \frac{a_0 f_0 - a_0(a_0 f_0 +\sqrt{2}\sum _{k=1}^{\infty }a_k f^{c}_k )}{\tilde{a}} + \sqrt{2}\sum _{n=1}^{\infty } \frac{a_n f^{c}_n - \sqrt{2}a_n(a_0 f_0 + \sqrt{2}\sum _{k=1}^{\infty }a_k f^{c}_k )}{\tilde{a}}\\&= \frac{a_0 f_0 +\sqrt{2}\sum _{n=1}^{\infty }a_n f^{c}_n }{\tilde{a}} - \frac{a_0 f_0 +\sqrt{2}\sum _{k=1}^{\infty }a_k f^{c}_k }{\tilde{a}} \Big ( a_0 + 2\sum _{n=1}^{\infty }a_n \Big ) =0. \end{aligned}$$

\(\square \)

Definition 3

(Multiplicity and support) Given \(\{a_n\}_{n\ge 1}\), we define the support \(S_{\tilde{a}}\) of \(\tilde{a}\):

$$\begin{aligned} S_{\tilde{a}} = \{k :a_k = \tilde{a}\}. \end{aligned}$$

The multiplicity \(m_{\tilde{a}} \) of a number \(\tilde{a}>0\) is the cardinality of \(S_{\tilde{a}}\):

$$\begin{aligned} m_{\tilde{a}} = \#\{k :a_k = \tilde{a}\}. \end{aligned}$$

It is clear that \(m_{\tilde{a}}<\infty \) because \(\{a_n\}_{n\ge 1}\in \ell ^1\).

Lemma 2

If \(m_{\tilde{a}} = k>0\), then there are exactly \(k-1\) orthogonal eigenfunctions of R related to \(\tilde{a}\). Moreover, for each of these \(k-1\) eigenfunctions,

$$\begin{aligned} n \not \in S_{\tilde{a}} \Longrightarrow f_n^{c} = 0 . \end{aligned}$$

Proof

Let \(\tilde{a}>0\) be such that \(m_{\tilde{a}}>1\).

It is simple to prove that there always exist \(m_{\tilde{a}}-1\) orthogonal eigenfunctions related to \(\tilde{a}\) with \(f_n^{c} = 0\) if \(n\not \in S_{\tilde{a}}\). We have two possibilities:

  • \(0\in S_{\tilde{a}}\) or, equivalently, \(a_0 = \tilde{a}\). In this case, (21) is equivalent to the following system:

    $$\begin{aligned} {\left\{ \begin{array}{ll} f_n^{c}=0, &{} n \not \in S_{\tilde{a}}\\ \tilde{a} \big (f_0 + \sqrt{2}\sum _{n\in S_{\tilde{a}}\setminus \{0\}} f_n^{c} \big )=0. \end{array}\right. } \end{aligned}$$
  • \(0\not \in S_{\tilde{a}}\). In this case, (21) is equivalent to the following system:

    $$\begin{aligned} {\left\{ \begin{array}{ll} f_n^{c}=0, &{} n \not \in S_{\tilde{a}}\\ \tilde{a} \big (\sum _{n\in S_{\tilde{a}}} f_n^{c} \big )=0. \end{array}\right. } \end{aligned}$$

In both cases, there exists a \((k-1)\)-dimensional orthogonal basis for the solution space.

We now need to prove that there are no other eigenfunctions related to \(\tilde{a}\). Assume that \(f_{\bar{n}}^{c}\ne 0\); recall that this implies \(a_{\bar{n}}\ne 0\). If \(\bar{n}=0\), from (21) we have that

$$\begin{aligned} {\left\{ \begin{array}{ll} \frac{(a_0 -\tilde{a} )f_0}{a_0} = a_0 f_0 +\sqrt{2}\sum _{k=1}^{\infty }a_k f^{c}_k ,\\ \frac{(a_n -\tilde{a} )f^{c}_n }{\sqrt{2}a_n} = a_0 f_0 + \sqrt{2}\sum _{k=1}^{\infty }a_k f^{c}_k , &{} n \in S_{\tilde{a}}. \end{array}\right. } \end{aligned}$$

The second equation shows that \(a_0 f_0 + \sqrt{2}\sum _{k=1}^{\infty }a_k f^{c}_k=0\), since \(a_n = \tilde{a}\); hence \(a_0=\tilde{a}\), which means that \(\bar{n}\in S_{\tilde{a}}\). Analogously, if \(\bar{n}\ne 0\), from (21) we can prove that \(\bar{n}\in S_{\tilde{a}}\), which completes the proof. \(\square \)

Let \(\{a_{(n)}\}_{n\ge 1}\) be the decreasing reordering of the sequence \(\{a_n\}_{n\ge 1}\), positive and without repetitions: \(a_{(1)}>a_{(2)}>\cdots >a_{(n)}>\cdots \), and for every \(a_n>0\) there exists k such that \(a_n =a_{(k)}\). To conclude the proof, we must find a sequence of eigenvalues \(\{\tilde{a}_n\}_{n\ge 1}\) such that \(a_{(n)}>\tilde{a}_n>a_{(n+1)}\).

Lemma 3

For each \(n\in \mathbb {N}\), there exists a unique eigenvalue \(\tilde{a}_n\) such that \(a_{(n)}>\tilde{a}_n>a_{(n+1)}\). Moreover, \(m_{\tilde{a}_n}=1\).

Proof

We have already observed that \(a_0 f_0 +\sqrt{2}\sum _{k=1}^{\infty }a_k f^{c}_k=0\) implies, for any n, \(a_n=\tilde{a}\) or \(f_n^{c}=0\). Hence, without loss of generality, we assume \(a_0 f_0 +\sqrt{2}\sum _{k=1}^{\infty }a_k f^{c}_k=c \ne 0\) and continue the proof. From (21), we obtain

$$\begin{aligned} f_0 = c\frac{a_0}{a_0 - \tilde{a}}, \quad f^{c}_n = c\frac{\sqrt{2}a_n }{a_n - \tilde{a}}. \end{aligned}$$
(22)

These relations with, again, \(a_0 f_0 +\sqrt{2}\sum _{n=1}^{\infty }a_n f^{c}_n=c\), imply

$$\begin{aligned} \frac{a_0^2}{a_0 - \tilde{a}} + 2\sum _{n=1}^{\infty } \frac{ a_n^2 }{a_n - \tilde{a}}=1. \end{aligned}$$
(23)

We are going to show that there exists a unique solution \(\tilde{a}_n\) of (23) such that \(a_{(n)}>\tilde{a}_n>a_{(n+1)}\). This solution is the sought eigenvalue, whose corresponding eigenfunction expansion is given in (22).

Let us consider the series

$$\begin{aligned} S(x) = \frac{a_0^2}{a_0 - x} + 2\sum _{n=1}^{\infty } \frac{ a_n^2 }{a_n - x} \end{aligned}$$

and the termwise derivative series

$$\begin{aligned} s(x) = \frac{a_0^2}{(a_0 - x)^2} + 2\sum _{n=1}^{\infty } \frac{ a_n^2 }{(a_n - x)^2}. \end{aligned}$$

Both series converge absolutely in each compact set not containing \(\{a_n\}_{n\ge 0}\). We have that

$$\begin{aligned} \text {dom}(S) = \text {dom}(s) = \cup _n (a_{(n+1)},a_{(n)}) , \qquad S'(x) = s(x), \forall x \in \text {dom}(S). \end{aligned}$$

Moreover for each n,

$$\begin{aligned} \lim _{x\rightarrow a_{(n+1)}^+} S(x) = -\infty , \qquad \lim _{x\rightarrow a_{(n)}^-} S(x) = +\infty , \qquad S'(x) > 0 , \forall x\in (a_{(n+1)},a_{(n)}) . \end{aligned}$$

Hence, there exists a unique \(\tilde{a}_n\in (a_{(n+1)},a_{(n)}) \) such that \(S(\tilde{a}_n)=1\), i.e., for which (23) holds. The unique corresponding eigenfunction is given by (22), which also implies \(m_{\tilde{a}_n}=1\):

$$\begin{aligned} f (t) = \frac{a_0}{a_0 - \tilde{a}_n} + \sqrt{2}\sum _{k=1}^\infty \frac{a_k }{a_k - \tilde{a}_n} \mathbf {c}_{k}(t). \end{aligned}$$

To complete the proof, we show that there are no eigenvalues greater than \(a_{(1)}=\max _n a_{n}\) or smaller than every \(a_{n}>0\).

In fact, if we assume that there exists an eigenvalue \(\hat{a} > \max a_n\), then (22) shows that the sequence \(\{f^{c}_k\}_{k\ge 0}\) is made of either all nonnegative or all nonpositive numbers, which together with Lemma 1 implies \(f^{c}_k=0\) for all k.

In the same way, it can be shown that there are no eigenvalues smaller than every \(a_n>0\). \(\square \)
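Lemma 3 also yields a numerical procedure: on each interval \((a_{(n+1)},a_{(n)})\) the function S increases from \(-\infty \) to \(+\infty \), so the eigenvalue solving \(S(x)=1\) can be located by bisection. A minimal sketch, assuming a truncated, strictly decreasing sequence \(\{a_n\}\) normalized so that \(a_0+2\sum a_n=1\):

```python
# Bisection for the eigenvalues tilde-a_n of Lemma 3: S is increasing
# between consecutive poles, so S(x) = 1 has one root per interval.
# Assumption (not from the paper): a_n = 0.2 / n^2, truncated at N terms.
import numpy as np

N = 50
n = np.arange(1, N + 1)
a = 0.2 / n**2                    # hypothetical a_n, n >= 1 (in l^1)
a0 = 1.0 - 2.0 * a.sum()          # enforce a_0 + 2 sum a_n = C~(0) = 1

def S(x):
    return a0**2 / (a0 - x) + 2.0 * np.sum(a**2 / (a - x))

def bisect_root(lo, hi, tol=1e-12):
    # S(x) - 1 goes from -inf (x -> lo+) to +inf (x -> hi-)
    lo, hi = lo + 1e-13, hi - 1e-13
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if S(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# interlacing: one eigenvalue strictly between consecutive poles
for i in range(3):
    root = bisect_root(a[i + 1], a[i])
    print(f"a_({i+2})={a[i+1]:.5f} < tilde_a={root:.5f} < a_({i+1})={a[i]:.5f}")
```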

Appendix 3: Proofs of results of Sect. 4

We simply deduce the results from the fact that if \( Y\sim N(0,\sigma ^2) \), then \( E(|Y|^p) = \sigma ^p \frac{2^{\frac{p}{2}}\Gamma \big (\frac{p+1}{2}\big )}{\sqrt{\pi }} \) (see, e.g., Patel and Read 1982).
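A quick Monte Carlo sanity check (ours, not from the paper) of this absolute-moment formula:

```python
# Monte Carlo check of E|Y|^p = sigma^p * 2^(p/2) * Gamma((p+1)/2) / sqrt(pi)
# for Y ~ N(0, sigma^2); sigma and p below are arbitrary test values.
import numpy as np
from math import gamma, pi, sqrt

rng = np.random.default_rng(0)
sigma, p = 1.7, 2.5
y = sigma * rng.standard_normal(1_000_000)

mc = np.mean(np.abs(y) ** p)
exact = sigma**p * 2 ** (p / 2) * gamma((p + 1) / 2) / sqrt(pi)
print(mc, exact)   # agree up to Monte Carlo error
```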

Proof of Theorem 7

Observe that

$$\begin{aligned} E(|x_{t+h} - x_t|^2) = E(x_t^2 + x_{t+h}^2 - 2x_{t+h}x_{t}) = R(t+h,t+h) + R(t,t) - 2R(t+h,t). \end{aligned}$$

Since there exists an M such that \(|R(s+\delta _1,t+\delta _2)-R(s,t)|\le M \Vert (\delta _1,\delta _2) \Vert ^{\alpha } \), then there exists a D such that

$$\begin{aligned} E(|x_{t+h} - x_t|^2) \le D|h|^{\alpha }. \end{aligned}$$

The thesis follows. \(\square \)
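For a stationary process the second moment of the increment is explicit, \(E(|x_{t+h}-x_t|^2) = 2(\tilde{C}(0)-\tilde{C}(h))\), so the exponent \(\alpha \) can be estimated from a log-log fit. A small illustration (with the hypothetical coefficients \(c_k=k^{-p}\), not a model from the paper):

```python
# Estimating the exponent alpha in E|x_{t+h} - x_t|^2 <= D |h|^alpha
# for a stationary covariogram with hypothetical coefficients c_k^2 = k^(-2p).
import numpy as np

p = 1.0
k = np.arange(1, 200_001)
c2 = k ** (-2.0 * p)   # c_k^2

def msq_increment(h):
    # 2*(C~(0) - C~(h)), with C~(h) = c_0^2 + 2 sum_k c_k^2 cos(2 k pi h)
    return 4.0 * np.sum(c2 * (1.0 - np.cos(2.0 * np.pi * k * h)))

hs = np.logspace(-4, -2, 10)
vals = np.array([msq_increment(h) for h in hs])
slope = np.polyfit(np.log(hs), np.log(vals), 1)[0]
print(slope)   # close to 2p - 1 = 1 here, i.e. alpha = 1, Holder beta < 1/2
```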

Proof of Theorem 8

The first part of the theorem is a simple calculation. The second is a consequence of Theorem 6, since

$$\begin{aligned} E(|x_{t+h} - x_t|^2)= & {} E(E(|x_{t+h} - x_t|^2\mid x_0)) = E(E(((x_{t+h}- x_0) - (x_t-x_0))^2\mid x_0))\\= & {} R(t+h,t+h) + R(t,t) - 2R(t+h,t) \le D|h|^{\alpha }. \end{aligned}$$

\(\square \)

Proof of Theorem 10

It is clear that

$$\begin{aligned} \partial ^2 \tilde{C}(\delta ) = 2\partial ^2 \sum _{k=1}^{\infty }{c^2_k\cos (2k\pi (\delta ))} = -2\sum _{k=1}^{\infty }{(2\pi )^2k^2c^2_k\cos (2k\pi (\delta ))} \end{aligned}$$

and that \( \partial ^2 \tilde{C} \in C^{0, \alpha } ([0,1])\) for some \( 0<\alpha \le 1 \). Moreover, uniformly in t and in mean square,

$$\begin{aligned} x_t = c_0Y'_0 + \sum _{k=1}^{\infty }c_k(Y_k\mathbf {s}_{k}(t) + Y'_k\mathbf {c}_{k}(t)), \end{aligned}$$

and, from Theorem 2, there also exists a stochastic process in \( \mathscr {H}\) such that uniformly in t and in mean square

$$\begin{aligned} \tilde{x}_t = 2\pi \sum _{k=1}^{\infty }kc_k(Y_k\mathbf {c}_{k}(t) - Y'_k\mathbf {s}_{k}(t)), \end{aligned}$$

which has covariogram function belonging to \(C^{0, \alpha } ([0,1])\) given by

$$\begin{aligned} \tilde{\bar{C}}(\delta ) = 2\sum _{k=1}^{\infty }{(2\pi )^2k^2c^2_k\cos (2k\pi (\delta ))}. \end{aligned}$$

If we define

$$\begin{aligned} y^{(n)}_t:= & {} c_0Y'_0 + \sum _{k=1}^{n}c_k(Y_k\mathbf {s}_{k}(t) + Y'_k\mathbf {c}_{k}(t))\\ \tilde{y}^{(n)}(t):= & {} 2 \pi \sum _{k=1}^{n}kc_k(Y_k\mathbf {c}_{k}(t) - Y'_k\mathbf {s}_{k}(t)), \end{aligned}$$

then \( y^{(n)}_t = {y}^{(n)}_0 + \int _{0}^{t}\tilde{y}^{(n)}_{\tau }\mathrm{d}\tau \) a.s. for any n, while for each fixed t, in mean square, \( \int _{0}^{t}\tilde{y}^{(n)}_{\tau }\mathrm{d}\tau \rightarrow \int _{0}^{t}\tilde{x}_{\tau }\mathrm{d}\tau \). Since

$$\begin{aligned}&\sqrt{E\Big ([x_t - x_0 - \int _{0}^{t}\tilde{x}_{\tau }\mathrm{d}\tau ]^2\Big )} \le \sqrt{E\Big ([x_t - y^{(n)}_t]^2\Big )}\\&\quad + \sqrt{E\Big ([y^{(n)}_0 + \int _{0}^{t}\tilde{y}^{(n)}_{\tau }\mathrm{d}\tau - x_0 - \int _{0}^{t}\tilde{x}_{\tau }\mathrm{d}\tau ]^2\Big )} \mathop {\longrightarrow }_{n\rightarrow \infty } 0, \end{aligned}$$

it follows that, a.s., \( x_t = x_0 + \int _{0}^{t}\tilde{x}_{\tau }\mathrm{d}\tau \). By Theorem 7, we know that almost every trajectory of \( \tilde{x}_t\) belongs to \(C^{0, \beta } ([0,1])\) with \( \beta < \frac{\alpha }{2} \), and the thesis follows. \(\square \)

Appendix 4: Proofs of results of Sect. 5

Proof of Theorem 11

Let \(p_0\) be the true parameter. We may define a sequence of i.i.d. random variables \(\{Z_k\}_{k\ge 1}\) in the following way:

$$\begin{aligned} Z_k = \frac{k^{2p_0} o_k}{a_0^2} \sim \chi ^2_2 = \mathrm{Exp}\left( \tfrac{1}{2}\right) . \end{aligned}$$
(24)

Equation (7), as a function of \((p,p_0)\) and \(\{Z_k\}_{k\ge 1}\), becomes

$$\begin{aligned} \frac{\partial \ell _n}{\partial p} = \sum _{k=1}^n \log (k)\Big ( 2 -k^{2(p-p_0)} Z_k \Big ). \end{aligned}$$

With the notation of (Hall and Heyde 1980, pp. 155–161), we have

$$\begin{aligned} I_n(p)&= \sum _1^n \log ^2(k) E\left( (2 -k^{2(p-p_0)} Z_k)^2| Z_1,\ldots , Z_{k-1} \right) \\&= \sum _1^n \log ^2(k) 2( 1 + ( 1 - 2k^{2(p-p_0)})^2 ) ,\\ J_n(p)&= -\frac{2}{a_0^2} \sum _{k=1}^n \log ^2(k) k^{2p} o_k = -2 \sum _{k=1}^n \log ^2(k) k^{2(p-p_0)} Z_k \end{aligned}$$

and, in particular,

$$\begin{aligned} I_n(p_0)&= 4 \sum _1^n \log ^2(k) ,\end{aligned}$$
(25)
$$\begin{aligned} J_n(p_0)&= -2 \sum _1^n \log ^2(k) Z_k . \end{aligned}$$
(26)

The thesis is a consequence of (Hall and Heyde 1980, pp. 155–161), where Assumptions 1 and 2 on page 160 guarantee the existence of an ML estimator \(\{\hat{p}_n\}_{n\ge 1}\) such that

$$\begin{aligned} \hat{p}_n \mathop {\longrightarrow }\limits ^{a.s.}_{n\rightarrow \infty } p_0, \quad \sqrt{I_n(p_0)}\,(\hat{p}_n-p_0) \mathop {\longrightarrow }\limits ^{L}_{n\rightarrow \infty } N(0,1). \end{aligned}$$
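These asymptotics are easy to probe by simulation. The sketch below (ours; it assumes \(a_0\) known and generates \(Z_k\) as in (24)) solves the score equation (7) by bisection, using that the score is decreasing in p, and checks that \(\sqrt{I_n(p_0)}(\hat{p}_n-p_0)\) is approximately standard normal:

```python
# Monte Carlo check of consistency and asymptotic normality of the MLE of p.
# Assumptions (not from the paper): a_0 known, n = 2000, 500 replicates.
import numpy as np

rng = np.random.default_rng(0)
n, p0, a0, n_rep = 2000, 1.2, 1.0, 500
k = np.arange(1, n + 1)
logk = np.log(k)

def score(p, o):
    # d ell_n / d p = sum_k log(k) * (2 - k^{2p} o_k / a_0^2)
    return np.sum(logk * (2.0 - k ** (2 * p) * o / a0**2))

estimates = []
for _ in range(n_rep):
    Z = rng.exponential(2.0, size=n)     # Exp(1/2), i.e. chi^2_2, mean 2
    o = a0**2 * Z / k ** (2 * p0)        # observations o_k as in (24)
    lo, hi = p0 - 1.0, p0 + 1.0          # score is decreasing in p
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if score(mid, o) > 0 else (lo, mid)
    estimates.append(0.5 * (lo + hi))

I_n = 4.0 * np.sum(logk**2)              # (25)
z = np.sqrt(I_n) * (np.array(estimates) - p0)
print(z.mean(), z.std())                 # close to 0 and 1
```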

Check of (Hall and Heyde 1980, Assumption 1, p. 160). The fact that \(I_n(p_0) \mathop {\longrightarrow }\limits _{n\rightarrow \infty } \infty \) is a consequence of (25). Since \(I_n(p_0)=E(I_n(p_0))\) is deterministic, \(I_n(p_0)/E(I_n(p_0))\rightarrow 1\) uniformly on compacts. By (25) and (26), we have

$$\begin{aligned} \frac{J_n(p_0)}{I_n(p_0)} = \frac{-2 \sum _1^n \log ^2(k) Z_k}{ 4 \sum _1^n \log ^2(k)}, \end{aligned}$$

and hence, by (24), we have

$$\begin{aligned} E\Bigg ( \frac{J_n(p_0)}{I_n(p_0)} \Bigg ) = -1, \qquad \text {Var}\Bigg ( \frac{J_n(p_0)}{I_n(p_0)} \Bigg ) = \frac{\sum _1^n \log ^4(k)}{\big (\sum _1^n \log ^2(k)\big )^2}. \end{aligned}$$

Since, for \(n\ge 4\),

$$\begin{aligned} \frac{ \log ^2(n) }{ \sum _1^n \log ^2(m)} \le \frac{ 1 }{ \sum _{n/2}^n \big (\frac{\log (n/2)}{\log (n)}\big )^2} \le \frac{ 1 }{ \sum _{n/2}^n \big (\frac{1}{2}\big )^2} \le \frac{8}{n} \end{aligned}$$
(27)

then \(\sum _{n=1}^\infty \big (\frac{\log ^2(n)}{{\sum _1^n \log ^2(m)}}\big )^{2}<\infty \), and hence \( \mathrm{Var}\big ( \frac{J_n(p_0)}{I_n(p_0)} \big ) \rightarrow 0 \) by Kronecker's Lemma, which ensures that \(J_n(p_0)/I_n(p_0)\rightarrow -1\) in probability uniformly on compacts.
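A quick numeric check (ours) of inequality (27):

```python
# Verify log^2(n) / sum_{m<=n} log^2(m) <= 8/n for 4 <= n <= 10^4.
import numpy as np

m = np.arange(1, 10_001)
cum = np.cumsum(np.log(m) ** 2)
n = np.arange(4, 10_001)
print(np.all(np.log(n) ** 2 / cum[n - 1] <= 8.0 / n))   # True
```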

Check of (Hall and Heyde 1980, Assumption 2, p. 160). Since \(E_{p}(I_n(p))\) does not change with p, Assumption 2.i) is automatically satisfied.

Now, if \(|p_n-p_0|\le \delta /\sqrt{I_n(p_0)}\), we get

$$\begin{aligned} |J_n(p_n) - J_n(p_0) | \le 2 \sum _{k=1}^n \log ^2(k) \left( k^{\frac{\delta }{\sqrt{\sum _1^n \log ^2(m)}}}-1\right) Z_k \end{aligned}$$
(28)
$$\begin{aligned} |I_n(p_n) - I_n(p_0) | \le \sum _{k=1}^n \log ^2(k) 8 k^{2\frac{\delta }{\sqrt{I_n(p_0)}}} (k^{2\frac{\delta }{\sqrt{I_n(p_0)}}}-1) . \end{aligned}$$

Note that, since \(k\le n\), we have

$$\begin{aligned} 1\le k^{2\frac{\delta }{\sqrt{I_n(p_0)}}} \le e^{2\frac{\delta }{\sqrt{I_n(p_0)}}\log (n)} \le \exp (2\delta ) \end{aligned}$$

and hence, for sufficiently large n and \(k\le n\), since

$$\begin{aligned} k^{2\frac{\delta }{\sqrt{I_n(p_0)}}}-1 \le C_0 {2\frac{\delta }{\sqrt{I_n(p_0)}}\log (k)}, \end{aligned}$$
(29)

we obtain

$$\begin{aligned} \Big |\frac{I_n(p_n) - I_n(p_0)}{I_n(p_0)} \Big | \le \frac{C_1\frac{\sum _{k=1}^n \log ^3(k)}{\sqrt{\sum _1^n \log ^2(k)}}}{4\sum _1^n \log ^2(k)} = C_2\sum _{k=1}^n \left( \frac{\log ^2(k)}{{\sum _1^n \log ^2(m)}}\right) ^{\frac{3}{2}}. \end{aligned}$$

By (27), \(\sum _{n=1}^\infty \big (\frac{\log ^2(n)}{{\sum _1^n \log ^2(m)}}\big )^{\frac{3}{2}}<\infty \), and hence, by Kronecker's Lemma, we get Assumption (2.ii), namely

$$\begin{aligned} \Big |\frac{I_n(p_n) - I_n(p_0)}{I_n(p_0)} \Big | \rightarrow 0. \end{aligned}$$

The last Assumption (2.iii) requires that

$$\begin{aligned} \frac{J_n(p_n) - J_n(p_0)}{I_n(p_0)} \rightarrow 0, \quad \text {a.s.} \end{aligned}$$

To check this, we first note that

$$\begin{aligned} \frac{\sum _{k=1}^n \log ^2(k) \big (k^{\frac{\delta }{\sqrt{\sum _1^n \log ^2(m)}}}-1\big ) }{\sum _1^n \log ^2(k)} \rightarrow 0, \end{aligned}$$

as a consequence of Kronecker’s Lemma, (29) and (27). Then,

$$\begin{aligned} \Big | E \Big (\frac{J_n(p_n) - J_n(p_0)}{I_n(p_0)} \Big )\Big | \le \frac{E(|J_n(p_n) -J_n(p_0)|)}{I_n(p_0)} \rightarrow 0, \end{aligned}$$

and hence a sufficient condition for \( \frac{J_n(p_n) - J_n(p_0)}{I_n(p_0)} \rightarrow 0 \) to hold is that

$$\begin{aligned} Var \Big (\frac{J_n(p_n) - J_n(p_0)}{I_n(p_0)} \Big ) \rightarrow 0. \end{aligned}$$
(30)

By (28), since \(\mathrm{Var}(Z_k) = 4\), we obtain

$$\begin{aligned} Var ({J_n(p_n) - J_n(p_0)}) \le 8 \sum _{k=1}^n \log ^4(k) \left( k^{\frac{\delta }{\sqrt{\sum _1^n \log ^2(m)}}}-1\right) ^2. \end{aligned}$$

Again, by (29), we obtain

$$\begin{aligned} Var \left( \frac{J_n(p_n) - J_n(p_0)}{I_n(p_0)} \right) \le \frac{C_1\frac{\sum _{k=1}^n \log ^6(k)}{{\sum _1^n \log ^2(k)}}}{\left( 4\sum _1^n \log ^2(k)\right) ^2} = C_2\sum _{k=1}^n \left( \frac{\log ^2(k)}{{\sum _1^n \log ^2(m)}}\right) ^{3}. \end{aligned}$$

As above, by (27) and Kronecker’s Lemma, we obtain (30).

We sketch the second part of the proof, with the notation of (Heyde 1997, p. 191). If we define

$$\begin{aligned} \mathbf {G}_n(\varvec{\theta }) = \left\{ \begin{aligned} G_n^{(1)}(a,p)&= \frac{1}{a} \sum _{k=1}^n \Big (\frac{k^{2p}}{a^2} o_k -2\Big ) = \frac{1}{a} \sum _{k=1}^n \Big ( \frac{k^{2(p-p_0)}a_0^2}{a^2} Z_k -2 \Big )\\ G_n^{(2)}(a,p)&= \sum _{k=1}^n \log (k) \Big ( 2 -\frac{k^{2p}}{a^2} o_k\Big ) = \sum _{k=1}^n \log (k) \Big ( 2 -\frac{k^{2(p-p_0)}a_0^2}{a^2} Z_k \Big ) \end{aligned} \right. \end{aligned}$$

and

$$\begin{aligned} \mathbf {H}^{-1}_n(a_0,p_0) = \begin{pmatrix} \frac{a_0}{2\sqrt{n}} &{}\quad 0\\ 0 &{}\quad \frac{1}{2\sqrt{\sum _{k=1}^n \log ^2(k)}} \end{pmatrix} \end{aligned}$$

it is simple to show that

$$\begin{aligned} \mathbf {H}^{-1}_n(a_0,p_0)\cdot \mathbf {G}_n(a_0,p_0) \mathop {\longrightarrow }\limits _{n\rightarrow \infty }^L \begin{pmatrix} 1\\ -1 \end{pmatrix}Z. \end{aligned}$$
(31)

In fact, since \(\{Z_k\}_{k\ge 1}\) is an i.i.d. sequence of random variables with mean 2 and variance 4 (see (24)), we get

$$\begin{aligned} E( G_n^{(1)}(a_0,p_0) G_n^{(2)}(a_0,p_0) ) = - E\Bigg ( \frac{1}{a_0} \sum _{k=1}^n \log (k) ( 2 - Z_k )^2 \Bigg ) = -\frac{4}{a_0} \sum _{k=1}^n \log (k) \end{aligned}$$

and hence

$$\begin{aligned} \mathrm{Corr}\left( \frac{a_0}{2\sqrt{n}} G_n^{(1)}(a_0,p_0) , \frac{G_n^{(2)}(a_0,p_0) }{2\sqrt{\sum _{k=1}^n \log ^2(k)}} \right) = \frac{ -\sum _{k=1}^n \log (k) }{ \sqrt{n}\sqrt{\sum _{k=1}^n \log ^2(k)} } \mathop {\longrightarrow }\limits _{n\rightarrow \infty } -1. \end{aligned}$$

Now, since

$$\begin{aligned}&\dot{\mathbf {G}}(a_0,p_0)\\&\quad = \begin{pmatrix} -\frac{4n}{a_0^2} \Big ( 1 +\frac{3}{4} \frac{\sum _{k=1}^n (Z_k-2)}{n} \Big ) &{}\quad \frac{4 \sum _{k=1}^n \log (k)}{a_0} \Big ( 1 + \frac{\sum _{k=1}^n \log (k) (Z_k-2)}{ 2\sum _{k=1}^n \log (k) } \Big )\\ \frac{4 \sum _{k=1}^n \log (k)}{a_0} \Big ( 1 + \frac{\sum _{k=1}^n \log (k) (Z_k-2)}{ 2\sum _{k=1}^n \log (k) } \Big ) &{}\quad -4 \sum _{k=1}^n \log ^2(k) \Big ( 1 + \frac{\sum _{k=1}^n \log ^2(k) (Z_k -2)}{ 2\sum _{k=1}^n \log ^2(k) } \Big ) \end{pmatrix}\\&\quad \asymp \begin{pmatrix} -\frac{4n}{a_0^2} &{} \frac{4 \sum _{k=1}^n \log (k)}{a_0}\\ \frac{4 \sum _{k=1}^n \log (k)}{a_0} &{} -4 \sum _{k=1}^n \log ^2(k) \end{pmatrix} \end{aligned}$$

then, by (31) (see Heyde 1997, pag.191), we get

$$\begin{aligned} \mathbf {H}^{-1}_n(a_0,p_0) \cdot (- \dot{\mathbf {G}}(a_0,p_0)) \cdot \begin{pmatrix} \hat{a}_n-a_0\\ \hat{p}_n-p_0 \end{pmatrix} \mathop {\longrightarrow }\limits ^{L}_{n\rightarrow \infty } \begin{pmatrix} 1\\ -1 \end{pmatrix} Z, \end{aligned}$$

which is the thesis, once the uniform boundedness conditions are checked as in the previous case. \(\square \)

About this article


Cite this article

Aletti, G., Ruffini, M. Is the Brownian bridge a good noise model on the boundary of a circle? Ann Inst Stat Math 69, 389–416 (2017). https://doi.org/10.1007/s10463-015-0546-5
