
Estimation of the pointwise Hölder exponent of hidden multifractional Brownian motion using wavelet coefficients

Published in: Statistical Inference for Stochastic Processes

Abstract

We propose a wavelet-based approach to construct consistent estimators of the pointwise Hölder exponent of a multifractional Brownian motion, in the case where this underlying process is not directly observed. The relative merits of our estimator are discussed, and we introduce an application to the problem of estimating the functional parameter of a nonlinear model.

Fig. 1


Notes

  1. We will denote this relation by \(a\sim b\).

References

  • Abry P, Flandrin P, Taqqu M S, Veitch D (2002) Self-similarity and long-range dependence through the wavelet lens. In: Theory and Applications of Long-range Dependence, pp. 527–556

  • Abry P, Gonçalvès P (1997) Multiple-window wavelet transform and local scaling exponent estimation. In: Proceedings of the IEEE-ICASSP-97, pp 3433–3436

  • Ayache A, Lévy-Véhel J (2004) On the identification of the pointwise Hölder exponent of the generalized multifractional Brownian motion. Stoch Process Appl 111:119–156

  • Ayache A, Shieh NR, Xiao Y (2011) Multiparameter multifractional Brownian motion: local nondeterminism and joint continuity of the local times. Ann Inst H Poincaré Probab Stat 47(4):1029–1054

  • Bardet JM (2000) Testing for the presence of self-similarity of Gaussian time series having stationary increments. J Time Ser Anal 21(5):497–515

  • Bardet JM (2002) Statistical study of the wavelet analysis of fractional Brownian motion. IEEE Trans Inf Theory 48(4):991–999

  • Bardet JM, Surgailis D (2013) Nonparametric estimation of the local Hurst function of multifractional Gaussian processes. Stoch Process Appl 123(3):1004–1045

  • Barrière O (2007) Synthèse et estimation de mouvements Browniens multifractionnaires et autres processus à régularité prescrite: définition du processus autorégulé multifractionnaire et applications. Ph.D. Dissertation, Ecole Centrale de Nantes

  • Bayraktar E, Horst U, Sircar R (2006) A limit theorem for financial markets with inert investors. Math Oper Res 31(4):789–810

  • Benassi A, Jaffard S, Roux D (1997) Elliptic Gaussian random processes. Rev Mat Iberoam 13:19–90

  • Bertrand PR, Fhima M, Guillin A (2013) Local estimation of the Hurst index of multifractional Brownian motion by increment ratio statistic method. ESAIM 17:307–327

  • Bertrand PR, Hamdouni A, Khadhraoui S (2012) Modelling NASDAQ series by sparse multifractional Brownian motion. Methodol Comput Appl Probab 14(1):107–124

  • Bianchi S, Pantanella A, Pianese A (2013) Modeling stock prices by multifractional Brownian motion: an improved estimation of the pointwise regularity. Quant Finance 13(8):1317–1330

  • Chan G, Hall P, Poskitt DS (1995) Periodogram-based estimators of fractal properties. Ann Stat 23(5):1684–1711

  • Chan G, Wood ATA (1998) Simulation of multifractional Brownian motion. In: Proceedings of the COMPSTAT 13th Symposium, Bristol, Great Britain, pp. 233–238

  • Cheridito P (2003) Arbitrage in fractional Brownian motion models. Finance Stoch 7:533–553

  • Corlay S, Lebovits J, Lévy-Véhel J (2014) Multifractional stochastic volatility models. Math Finance 24(2):364–402

  • Coeurjolly JF (2005) Identification of multifractional Brownian motion. Bernoulli 11(6):987–1008

  • Coeurjolly JF (2006) Erratum: identification of multifractional Brownian motion. Bernoulli 12(2):381–382

  • Comte F, Renault E (1998) Long memory in continuous-time stochastic volatility models. Math Finance 8(4):291–323

  • Delbeke L, Van A W (1995) A wavelet based estimator for the parameter of self-similarity of fractional Brownian motion. In: Proceedings of the 3rd International Conference on Approximation and Optimization in the Caribbean (electronic), Benemérita Univ. Autón

  • Frezza M (2014) Goodness of fit assessment for a fractal model of stock markets. Chaos Solitons Fractals 66:41–50

  • Jin S, Peng Q, Schellhorn H (2015) Estimation of the pointwise Hölder exponent of hidden multifractional Brownian motion using wavelet coefficients, long version. arXiv:1512.05054

  • Ledoux M, Talagrand M (2010) Probability in Banach spaces. Springer, Berlin

  • Lévy-Véhel J (1995) Fractal approaches in signal processing. Fractals 3:755–775

  • Oehlert GW (1992) A note on the delta method. Am Stat 46(1):27–29

  • Ohashi A (2009) Fractional term structure models: no-arbitrage and consistency. Ann Appl Probab 19(4):1553–1580

  • Peng Q (2011a) Uniform Hölder exponent of a stationary increments Gaussian process: estimation starting from average values. Stat Probab Lett 81:1326–1335

  • Peng Q (2011b) Statistical inference for hidden multifractional processes in a setting of stochastic volatility models. Ph.D. Dissertation, Lille 1 University

  • Rogers LCG (1997) Arbitrage with fractional Brownian motion. Math Finance 7:95–105

  • Rosenbaum M (2008) Estimation of the volatility persistence in a discretely observed diffusion model. Stoch Process Appl 118:1434–1462

  • Xiao W, Zhang W, Zhang Z, Wang Y (2010) Pricing currency options in a fractional Brownian motion with jumps. Econ Model 27:935–942


Acknowledgments

We gratefully acknowledge the anonymous reviewers for their careful reading of our manuscript and their many insightful comments. Their suggestions led to substantial improvements of the manuscript.

Author information

Corresponding author

Correspondence to Qidi Peng.

Appendix

1.1 Proofs of (1.8) and (1.9)

First, by using the triangle inequality, we get

$$\begin{aligned}&|\widehat{d_{Y,n}}(2^{-j},k)-d_Y(2^{-j},k)|\nonumber \\&\qquad \le 2^{j/2}\sum _{l=0}^{2^{n-j}-1}\int _{\frac{l}{2^n}}^{\frac{l+1}{2^n}}|\psi (2^jt) ||Y(l2^{-n}+k2^{-j})-Y(t+k2^{-j})|{\,\mathrm {d}} t. \end{aligned}$$
(4.1)

Recall that \(Y(t)=\Phi (X(t))\) for \(t\in [0,1]\). Define the random variable \(\Vert X\Vert _{\infty }\) to be

$$\begin{aligned} \Vert X\Vert _{\infty }=\sup _{t\in [0,1]}|X(t)|. \end{aligned}$$
(4.2)

Since \(\theta \) is continuous and not equal to 0 almost everywhere, \(\{X(t)\}_{t\in [0,1]}\) is a Gaussian process with continuous trajectories. By applying Dudley’s theorem and Borell’s inequality (more precisely, by the same arguments as in the proof of \(\mathbb {E}(e^{\widetilde{V}})<+\infty \) on pp. 1445–1446 of Rosenbaum (2008); see also Ledoux and Talagrand (2010)), we can show that \( \mathbb {E}(e^{\Vert X\Vert _{\infty }})<+\infty . \) This means that all moments of \(\Vert X\Vert _{\infty }\) are finite. Hence, using the mean value theorem, we get

$$\begin{aligned} |Y(l2^{-n}+k2^{-j})-Y(t+k2^{-j})|\le C_1|X(l2^{-n}+k2^{-j})-X(t+k2^{-j})|, \end{aligned}$$
(4.3)

where \(C_1=\sup \limits _{s\in [-\Vert X\Vert _{\infty },\Vert X\Vert _{\infty }]}|\Phi '(s)|\) is a random variable. It follows from (4.1) and (4.3) that

$$\begin{aligned}&|\widehat{d_{Y,n}}(2^{-j},k)-d_Y(2^{-j},k)|\nonumber \\&\quad \le C_12^{j/2}\sum _{l=0}^{2^{n-j}-1}\int _{\frac{l}{2^n}}^{\frac{l+1}{2^n}} |\psi (2^jt)||X(l2^{-n}+k2^{-j})-X(t+k2^{-j})|{\,\mathrm {d}} t. \end{aligned}$$
(4.4)

In order to get (1.8) we need (1.7), from which we see that there exists a positive random variable \(C_2\), all of whose moments are finite, such that

$$\begin{aligned}&|X(l2^{-n}+k2^{-j})-X(t+k2^{-j})|\nonumber \\&\quad \le |\theta (l2^{-n}+k2^{-j})||B_{H(l2^{-n}+k2^{-j})}(l2^{-n}+k2^{-j}) -B_{H(t+k2^{-j})}(t+k2^{-j})|\nonumber \\&\qquad +\,|\theta (l2^{-n}+k2^{-j})-\theta (t+k2^{-j})||B_{H(t+k2^{-j})} (t+k2^{-j})|\nonumber \\&\quad \le C_2 \left( |l2^{-n}-t|^{H(t+k2^{-j})}|\log |l2^{-n}-t||^{1/2}+|l2^{-n}-t|\right) . \end{aligned}$$
(4.5)

Observe that for \(t\in [l2^{-n},(l+1)2^{-n}]\),

$$\begin{aligned} |l2^{-n}-t|^{H(t+k2^{-j})}|\log |l2^{-n}-t||^{1/2}\ge |l2^{-n}-t| \end{aligned}$$

and

$$\begin{aligned} \sup \limits _{{}^{n\ge 0,j\le n,l\le 2^{n-j}-1,k\le 2^j-1}_{l2^{-n}\le t\le (l+1)2^{-n}}}|l2^{-n}-t|^{H(t+k2^{-j})-H(k2^{-j})}|\log |l2^{-n}-t||^{1/2}<+\infty . \end{aligned}$$
(4.6)

This, together with the fact that \(|l2^{-n}-t|\le 2^{-n}\) for \(t\in [l2^{-n},(l+1)2^{-n}]\), yields that there exists a positive random variable \(C_3\), all of whose moments are finite, such that

$$\begin{aligned} |X(l2^{-n}+k2^{-j})-X(t+k2^{-j})|\le C_3 2^{-nH(2^{-j}k)}n^{1/2}. \end{aligned}$$
(4.7)

Then (1.8) results from (4.4) and (4.7). \(\square \)
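As a rough numerical illustration of the discretization bound (1.8), the following sketch (under simplifying assumptions not imposed by the paper: standard Brownian motion, i.e. \(H\equiv 1/2\), \(\theta \equiv 1\), \(\Phi =\mathrm {id}\), and a Haar wavelet) compares Riemann-sum wavelet coefficients computed from a path sampled at resolution \(2^{-n}\) with their fine-grid counterparts, and checks that the average error shrinks as \(n\) grows:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 2**16      # fine reference grid on [0, 1]
j = 3          # wavelet scale 2^{-j}

def haar_coeffs(B, step):
    """Haar coefficients d(2^{-j}, k), k = 0..2^j - 1, computed from the
    piecewise-constant path holding B at the left endpoint of each block
    of `step` grid points (step = N * 2^{-n} mimics resolution 2^{-n})."""
    m = N >> j            # grid points per dyadic interval
    half = m // 2
    Bs = B[(np.arange(N) // step) * step]   # left-endpoint approximation
    out = []
    for k in range(2**j):
        seg = Bs[k * m:(k + 1) * m]
        # Haar psi: +1 on the first half, -1 on the second half
        out.append(2**(j / 2) * (seg[:half].sum() - seg[half:].sum()) / N)
    return np.array(out)

resolutions = (8, 10, 12)
errs = {n_res: [] for n_res in resolutions}
for _ in range(20):       # average the error over independent paths
    B = np.cumsum(rng.normal(0.0, N**-0.5, N))   # Brownian motion, H = 1/2
    exact = haar_coeffs(B, 1)                    # full-resolution reference
    for n_res in resolutions:
        errs[n_res].append(np.mean(np.abs(haar_coeffs(B, N >> n_res) - exact)))
mean_err = {n_res: float(np.mean(v)) for n_res, v in errs.items()}
# (1.8) bounds the error by C * 2^{-nH} * n^{1/2}; the observed average
# error should decay at least this fast as the resolution is refined.
```

The observed decay can be faster than the bound in (1.8), which is a worst-case estimate obtained via the triangle inequality.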

Now we prove (1.9). For \(r\ge 1\), we consider the \(r\)-th moment of \(|\widehat{d_{Y,n}}(2^{-j},k)-d_Y(2^{-j},k)|\) in (4.4). By applying the following two versions of Jensen’s inequality:

$$\begin{aligned} \left( \sum _{i=1}^n|a_i|\right) ^r\le n^{r-1}\sum _{i=1}^n|a_i|^r \quad \text{ and } \quad \left( \int _a^b|f(s)|{\,\mathrm {d}} s\right) ^r\le |b-a|^{r-1}\int _a^b|f(s)|^r{\,\mathrm {d}} s, \end{aligned}$$
(4.8)

and the Cauchy–Schwarz inequality, we obtain

$$\begin{aligned}&\mathbb {E}|\widehat{d_{Y,n}}(2^{-j},k)-d_Y(2^{-j},k)|^r\nonumber \\&\quad \le 2^{jr/2}2^{-j(r-1)}\sum _{l=0}^{2^{n-j}-1} \int _{\frac{l}{2^n}}^{\frac{l+1}{2^n}}|\psi (2^jt)|^r\mathbb {E}\left( C_1|X(l2^{-n}+k2^{-j})-X(t+k2^{-j})|\right) ^r{\,\mathrm {d}} t\nonumber \\&\quad \le \left( \sup _{s\in [0,1]}|\psi (s)|^r\right) 2^{-j(r/2-1)}\nonumber \\&\qquad \times \sum _{l=0}^{2^{n-j}-1}\int _{\frac{l}{2^n}}^{\frac{l+1}{2^n}} \left( \mathbb {E}\left( C_1^{2r}\right) \right) ^{1/2}\left( \mathbb {E}|X(l2^{-n}+k2^{-j})-X(t+k2^{-j}) |^{2r}\right) ^{1/2}{\,\mathrm {d}} t. \end{aligned}$$
(4.9)
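The two forms of Jensen's inequality in (4.8) are elementary; a quick numerical sanity check (with arbitrary illustrative data, the integral discretized by a left-endpoint Riemann sum) confirms both:

```python
import numpy as np

rng = np.random.default_rng(7)
r = 3.0

# Discrete form: (sum |a_i|)^r <= n^{r-1} * sum |a_i|^r
a = rng.normal(size=10)
n = len(a)
lhs_d = np.abs(a).sum() ** r
rhs_d = n ** (r - 1) * (np.abs(a) ** r).sum()

# Integral form: (int_a^b |f|)^r <= (b - a)^{r-1} * int_a^b |f|^r,
# illustrated with f(s) = sin(s) on [0, 2] via a Riemann sum
s, ds = np.linspace(0.0, 2.0, 100_000, retstep=True)
f = np.abs(np.sin(s))
lhs_c = (f.sum() * ds) ** r
rhs_c = 2.0 ** (r - 1) * (f ** r).sum() * ds
```

Both inequalities are instances of Jensen's inequality applied to the convex map \(x\mapsto x^r\) under the uniform measure, which is why the constants \(n^{r-1}\) and \(|b-a|^{r-1}\) appear.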

Note that by Lemma 2.12 (i) in Ayache et al. (2011), there exists a constant \(c_1>0\) which does not depend on \(n\), \(l\), \(j\), \(k\) and \(H\) such that

$$\begin{aligned}&\mathbb {E}|B_{H(l2^{-n}+k2^{-j})}(l2^{-n}+k2^{-j})-B_{H(t+k2^{-j})}(t+k2^{-j})|^2\\&\quad \le c_1|l2^{-n}+k2^{-j}-(t+k2^{-j}) |^{2\max \{H(l2^{-n}+k2^{-j}),H(t+k2^{-j})\}}\\&\quad \le c_1|l2^{-n}+k2^{-j}-(t+k2^{-j}) |^{2H(l2^{-n}+k2^{-j})}\\&\quad =c_1|l2^{-n}-t|^{2H(k2^{-j})}|l2^{-n}-t |^{2H(l2^{-n}+k2^{-j})-2H(k2^{-j})}. \end{aligned}$$

Therefore, using (4.6) again and the fact that \(|l2^{-n}-t|\le 2^{-n}\), there exists some constant \(c>0\) such that

$$\begin{aligned} \mathbb {E}|B_{H(l2^{-n}+k2^{-j})}(l2^{-n}+k2^{-j})-B_{H(t+k2^{-j})}(t+k2^{-j})|^2\le c2^{-2nH(k2^{-j})}. \end{aligned}$$
(4.10)

Using (4.10) and computations similar to those in (4.5), we obtain that there exists \(c_2>0\) such that

$$\begin{aligned} \mathbb {E}|X(l2^{-n}+k2^{-j})-X(t+k2^{-j})|^2\le c_22^{-2nH(k2^{-j})}. \end{aligned}$$
(4.11)

Since all the moments of a Gaussian random variable are equivalent, there exists some constant \(c_3>0\) (depending only on \(r\)) such that

$$\begin{aligned} \mathbb {E}|X(l2^{-n}+k2^{-j})-X(t+k2^{-j})|^{2r}\le c_32^{-2rnH(k2^{-j})}. \end{aligned}$$
(4.12)

Finally, it results from (4.9) and (4.12) that

$$\begin{aligned} \mathbb {E}|\widehat{d_{Y,n}}(2^{-j},k)-d_Y(2^{-j},k)|^r\le c 2^{-r(nH(2^{-j}k)+j/2)}, \end{aligned}$$

where \(c=\big (\sup _{s\in [0,1]}|\psi (s)|^r\big )\big (c_3\mathbb {E}(C_1^{2r})\big )^{1/2}\). This proves (1.9). \(\square \)

1.2 Proof of (1.11)

First notice that, by the definition of \(\nu _{t_0,2^j}\), we have

$$\begin{aligned} |card(\nu _{t_0,2^j})- 2^{j+1}\epsilon _j|\le 3 \end{aligned}$$
(4.13)

for \(j\) large enough, because \(2^{j}\epsilon _j\ge 1\).

It follows from (4.8), the Cauchy–Schwarz inequality, (4.13) and the fact that \((a+b)^4\le 2^3(a^4+b^4)\) that

$$\begin{aligned}&\mathbb {E}|\widehat{V_{n,t_0,j}}-V_{Y,t_0,j}|^2\le card(\nu _{t_0,2^j})\sum _{k\in \nu _{t_0,2^j}}\mathbb {E}\big |\widehat{d_{Y,n}}(2^{-j},k)^2-d_Y(2^{-j},k)^2\big |^2\nonumber \\&\quad \le 3\times 2^j\epsilon _j\sum _{k\in \nu _{t_0,2^j}}2^{3/2}\left( \mathbb {E}|\widehat{d_{Y,n}}(2^{-j},k)|^4+\mathbb {E}|d_Y(2^{-j},k)|^4\right) ^{1/2}\nonumber \\&\qquad \times \left( \mathbb {E}|\widehat{d_{Y,n}}(2^{-j},k)-d_Y(2^{-j},k)|^4\right) ^{1/2}. \end{aligned}$$
(4.14)

Roughly speaking (and this can be proven without effort), since the trajectory \(\{\Phi (X(t))\}_{t\ge 0}\) is at least as smooth as \(\{X(t)\}_{t\ge 0}\), for \(r\ge 1\) there exists a constant \(c_4>0\) (depending only on \(r\)) such that

$$\begin{aligned}&\mathbb {E}|\widehat{d_{Y,n}}(2^{-j},k)|^r\le c_42^{-jr\left( H(k2^{-j})+1/2\right) };\nonumber \\&\mathbb {E}|d_Y(2^{-j},k)|^r\le c_42^{-jr\left( H(k2^{-j})+1/2\right) }. \end{aligned}$$
(4.15)

Then it results from (4.14), (4.15), (1.9) and (4.13) that

$$\begin{aligned} \mathbb {E}|\widehat{V_{n,t_0,j}}-V_{Y,t_0,j}|^2\le 9\times 2^{2j}\epsilon _j^2\left( (8c_4)^{1/2}2^{-j(2H(k2^{-j})+1)}\right) \left( c^{1/2}2^{-2nH(k2^{-j})-j}\right) . \end{aligned}$$
(4.16)

In view of the equivalence between \(H(k2^{-j})\) and \(H(t_0)\) for \(k\in \nu _{t_0,2^j}\) as \(j\rightarrow +\infty \), (1.11) finally results from (4.16). \(\square \)

1.3 Proof of Lemma 2

By using Chebyshev’s inequality, (2.9) and the condition that \(\epsilon _j=\mathcal {O}(j^{-1})\), we get for any \(\eta >0\),

$$\begin{aligned}&\mathbb {P}\left( 2^{j(2H(t_0)+1/2)}\epsilon _j^{-1/2}\big |V_{X,t_0,j}-\mathbb {E}(V_{X,t_0,j})\big |\ge \eta \right) \le \frac{ 2^{j(4H(t_0)+1)}\epsilon _j^{-1} Var \left( V_{X,t_0,j}\right) }{\eta ^2}\nonumber \\&\qquad \le c\frac{2^{j(4H(t_0)+1)}2^{-j(4H(t_0)+1)}}{\eta ^2}= \frac{c}{\eta ^2}, \end{aligned}$$
(4.17)

where \(c>0\) is some constant which does not depend on j. This implies

$$\begin{aligned} V_{X,t_0,j}=\mathbb {E}(V_{X,t_0,j})+\mathcal {O}_{\mathbb {P}}\left( 2^{-j(2H(t_0)+1/2)}\epsilon _j^{1/2}\right) . \end{aligned}$$
(4.18)

Then it follows from (2.8), (4.18) and the fact that \(\lim _{j\rightarrow +\infty }2^{-j/2}\epsilon _{j}^{-1/2}=0 \) that

$$\begin{aligned}&V_{X,t_0,j}=2c_2(t_0)2^{-2jH(t_0)}\epsilon _j+\mathcal {O}_{a.s.}\left( 2^{-2jH(t_0)}\left( 2^{-j}+j\epsilon _j^2+2^{-j}j^4\epsilon _j\right) \right) \nonumber \\&\qquad \qquad \quad +\,\mathcal {O}_{\mathbb {P}}(2^{-j(2H(t_0)+1/2)}\epsilon _j^{1/2})\nonumber \\&\qquad \qquad =2c_2(t_0)2^{-2jH(t_0)}\epsilon _j+\mathcal {O}_{a.s.}\left( 2^{-2jH(t_0)}\left( j\epsilon _j^2+2^{-j}j^4\epsilon _j\right) \right) \nonumber \\&\qquad \qquad \quad +\,\mathcal {O}_{\mathbb {P}}\left( 2^{-j(2H(t_0)+1/2)}\epsilon _j^{1/2}\right) . \end{aligned}$$

This proves (2.11). Note that (2.12) follows straightforwardly from (2.11). It remains to show that (2.13) holds. From (4.18) and Chebyshev’s inequality, we observe that there exists a constant \(c>0\), depending on neither \(\eta \) nor \(j\), such that for any \(\eta >0\),

$$\begin{aligned} \mathbb {P}\left( (2^j\epsilon _j)^{(1-\delta )/2}\left| \frac{V_{X,t_0,j}}{\mathbb {E}(V_{X,t_0,j})}-1\right| >\eta \right) \le \frac{c(2^j\epsilon _j)^{-\delta }}{\eta ^2}. \end{aligned}$$
(4.19)

Since \(\sum \limits _{j=1}^{+\infty }(2^j\epsilon _j)^{-\delta }<+\infty \), applying the Borel–Cantelli lemma leads to

$$\begin{aligned} \frac{V_{X,t_0,j}}{\mathbb {E}(V_{X,t_0,j})}-1=\mathcal {O}_{a.s.} \left( (2^j\epsilon _j)^{-1/2+\delta /2}\right) . \end{aligned}$$

Further observe that

$$\begin{aligned} \mathbb {E}(V_{X,t_0,j})=\mathcal {O}_{a.s.}\left( 2^{-2jH(t_0)}\epsilon _j\right) . \end{aligned}$$

Therefore (2.13) follows, and Lemma 2 is proven. \(\square \)

1.4 Proof of Theorem 2

The proof of Theorem 2 relies heavily on the following Propositions 4 and 5.

Proposition 4

For any \(t_0\in (0,1)\) and any integer \(j\ge 1\), set

$$\begin{aligned} U_{t_0,j}=\sqrt{card(\nu _{t_0,2^j})}\left( \frac{1}{card(\nu _{t_0,2^j})} \sum _{k\in \nu _{t_0,2^j}}\frac{d_X(2^{-j},k)^2}{\mathbb {E}(d_X(2^{-j},k)^2)}-1\right) . \end{aligned}$$

Then

$$\begin{aligned} \big (U_{t_0,j},U_{t_0,j+1}\big )\xrightarrow [j\rightarrow +\infty ]{dist}\mathcal N(0,\Sigma ), \end{aligned}$$

where \(\Sigma =(\sigma _{ij})_{i,j\in \{1,2\}}\) with \(\sigma _{11}=\sigma _{22}=c_3(t_0)/(c_2(t_0))^2\) and

$$\begin{aligned} \sigma _{12}= & {} \sigma _{21}=c_4(t_0):=\frac{(2c_0)^{1/2}}{c_2(t_0)^2}\left( C_2(t_0,1/2)^22^{2H(t_0)+1}+2c_1(t_0,t_0)^2\right. \nonumber \\&\left. +\,2^{2Q-2H(t_0)+1}C_1(H(t_0),H(t_0),Q,2)^2\theta (t_0)^4\sum _{l\in \mathbb {Z},|l|\ge 2}\frac{1}{|l|^{4Q-4H(t_0)}}\right) . \end{aligned}$$
(4.20)

Proof

Following a method similar to that of Bardet (2000), we show that a multivariate central limit theorem holds for the empirical average of \(d_X( 2^{-j},k)^2/\mathbb {E}(d_X(2^{-j},k)^2)\), thanks to a Lindeberg condition. More precisely, for \(j\) large enough and for \(k\in \nu _{t_0,2^j}\), \(k'\in \nu _{t_0,2^{j+1}}\), set

$$\begin{aligned} T_{j,k,k'}=Cov\left( \frac{d_X( 2^{-j},k)^2}{\mathbb {E}(d_X(2^{-j},k)^2)} ,\frac{d_X( 2^{-{(j+1)}},k')^2}{\mathbb {E}(d_X(2^{-{(j+1)}},k')^2)} \right) , \end{aligned}$$
(4.21)

the Lindeberg condition can thus be deduced from the following relations:

  • For \(|2k-k'|=0\),

    $$\begin{aligned} T_{j,k,k'}=2^{2H(2^{-j}k)+2}\left( \frac{C_2(2^{-j}k,1/2)}{c_2(2^{-j}k)}\right) ^2 +\mathcal {O}(2^{-j}j^4). \end{aligned}$$
    (4.22)
  • For \(|2k-k'|=1\),

    $$\begin{aligned} T_{j,k,k'}=2\left( \frac{c_1(2^{-j}k,2^{-(j+1)}k')^2}{c_2(2^{-j}k)c_2 (2^{-(j+1)}k')}\right) +\mathcal {O}\left( T(2k,k',Q,2^{-j},2^{-j})\right) . \end{aligned}$$
    (4.23)
  • For \(|2k-k'|\ge 2\),

    $$\begin{aligned} T_{j,k,k'}= & {} 2\left( \frac{(C_1(H(2^{-j}k)+H(2^{-(j+1)}k'),Q,2)\theta (2^{-j}k) \theta (2^{-(j+1)}k'))^2}{c_2(2^{-j}k)c_2(2^{-(j+1)}k')|2k-k'|^{4Q-2H(2^{-j}k) -2H(2^{-(j+1)}k')}}\right) \nonumber \\&+\,\mathcal {O}\left( T(k,k',Q,2^{-j},2^{-(j+1)})\right) . \end{aligned}$$
    (4.24)

To show that (4.22)–(4.24) hold, we first recall that if \((Z,Z')\) has a zero-mean joint normal distribution, then (see, e.g., Lemma 5.3.4 in Peng (2011b))

$$\begin{aligned} Cov\big (Z^2,{Z'}^2\big )=2\big (Cov(Z,Z')\big )^2. \end{aligned}$$
(4.25)
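Identity (4.25) is the classical Gaussian (Isserlis–Wick) covariance formula for squares; it is easy to confirm by Monte Carlo (the correlation value below is an arbitrary illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
rho = 0.6                                   # illustrative correlation Cov(Z, Z')
cov = np.array([[1.0, rho], [rho, 1.0]])
Z = rng.multivariate_normal([0.0, 0.0], cov, size=500_000)

# Gaussian identity: Cov(Z^2, Z'^2) = 2 * Cov(Z, Z')^2
emp = float(np.cov(Z[:, 0] ** 2, Z[:, 1] ** 2)[0, 1])
theory = 2.0 * rho**2                       # = 0.72 for rho = 0.6
```

The identity fails for non-Gaussian pairs in general, which is why the proof invokes it only for the jointly Gaussian wavelet coefficients.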

We thus observe, from (4.21) and (4.25), that

$$\begin{aligned} T_{j,k,k'}=2\left( Cov\left( \frac{d_X( 2^{-j},k)}{\sqrt{\mathbb {E}(d_X(2^{-j},k)^2)}} , \frac{d_X( 2^{-{(j+1)}},k')}{\sqrt{\mathbb {E}(d_X(2^{-{(j+1)}},k')^2)}} \right) \right) ^2. \end{aligned}$$
(4.26)

Therefore (4.22) results from (4.26), (2.3) and (2.6). In order to obtain (4.23), it suffices to take \(l=2k\) for \(k\in \nu _{t_0,2^j}\); then \(l,k'\) belong to \(\nu _{t_0,2^{j+1}}\) and satisfy \(\sup _{s,t\in [0,1]}|(t-s)/(l-k')|=\sup _{s,t\in [0,1]}|s-t|\le 1\). This entails that (2.2) can be applied to \((a,b,l,k')\) with \(a=b=2^{-(j+1)}\) and \(|l-k'|=1\). As a consequence, (4.23) follows from (4.26), (2.4) and (2.6). To prove (4.24), we simply plug \(a=2^{-j}\), \(b=2^{-(j+1)}\) into (2.2) and then use (4.26) and (2.6). Using this identification of the \(T_{j,k,k'}\), we show that the Lindeberg condition (the same as in Bardet (2000)) is verified and the central limit theorem holds. It now remains to show

$$\begin{aligned}&\displaystyle Cov\big (U_{t_0,j},U_{t_0,j+1}\big )\xrightarrow [j\rightarrow +\infty ]{} c_4(t_0) \quad (\text{ given } \text{ in } \text{(4.20) });&\end{aligned}$$
(4.27)
$$\begin{aligned}&\displaystyle Var(U_{t_0,j})\xrightarrow [j\rightarrow +\infty ]{}\frac{c_3(t_0)}{c_2(t_0)^2}.&\end{aligned}$$
(4.28)

We only prove that (4.27) holds, since (4.28) follows in quite a similar way. Remark that \(\lim _{j\rightarrow +\infty }\sup _{k\in \nu _{t_0,2^j}}2^{-j}k=t_0\), and that for \(j\) large enough,

$$\begin{aligned}&card\left( \{k\in \nu _{t_0,2^j},k'\in \nu _{t_0,2^{j+1}},|2k-k'|=0\}\right) =card(\nu _{t_0,2^j});\\&card\left( \{k\in \nu _{t_0,2^j},k'\in \nu _{t_0,2^{j+1}},|2k-k'|=l\}\right) =2card(\nu _{t_0,2^j})+\mathcal {O}(1) \quad \text{ for }~l\ge 1; \end{aligned}$$

and that for a function \(f\) continuous at \(t_0\),

$$\begin{aligned} \lim \limits _{j\rightarrow +\infty }\frac{1}{card(\nu _{t_0,2^j})} \sum \limits _{k\in \nu _{t_0,2^j}}f(2^{-j}k)=f(t_0). \end{aligned}$$

Considering all the above facts together with (4.22)–(4.24), (5.49)–(5.51) in Jin et al. (2015), \(card(\nu _{t_0,2^j})\sim 2^{j}\epsilon _j\) and assumption (A3), we obtain

$$\begin{aligned}&Cov\left( U_{t_0,j},U_{t_0,j+1}\right) =\frac{1}{\sqrt{card(\nu _{t_0,2^j}) card(\nu _{t_0,2^{j+1}})}}\nonumber \\&\qquad \times \left( \sum _{(k\in \nu _{t_0,2^j},k'\in \nu _{t_0,2^{j+1}},2k=k')}T_{j,k,k'} +\sum _{|2k-k'|=1}T_{j,k,k'}\right. \\&\left. \qquad +\sum _{l=2}^{+\infty }\sum _{|2k-k'|=l}T_{j,k,k'}\right) \xrightarrow [j\rightarrow +\infty ]{}c_4(t_0)~~~~\text{(given } \text{ in } \text{(4.20)) }. \end{aligned}$$

This proves Proposition 4. \(\square \)

Proposition 5

(Multivariate delta method, see, e.g., Oehlert (1992)) Let the estimators \(\{(X_n,Y_n)\}_{n\in \mathbb N}\) (valued in \((0,+\infty )^2\)) of \((\theta _1,\theta _2)\) satisfy the following central limit theorem:

$$\begin{aligned} h(n)\left( (X_n,Y_n)-(\theta _1,\theta _2)\right) \xrightarrow [n\rightarrow +\infty ]{dist}\mathcal N(0,\Sigma ), \end{aligned}$$

where \(\Sigma \) denotes the covariance matrix of the limit distribution and \((h(n))_n\) is a sequence of positive numbers tending to infinity. Let \(g:~(0,+\infty )^2\rightarrow \mathbb R^p\) (\(p=1\) or 2) belong to \(C^1((0,+\infty )^2)\), then the following convergence in law holds:

$$\begin{aligned} h(n)\left( g(X_n,Y_n)-g(\theta _1,\theta _2)\right) \xrightarrow [n\rightarrow +\infty ]{dist}\mathcal N\left( 0,\nabla g(\theta _1,\theta _2)^T\Sigma \nabla g(\theta _1,\theta _2)\right) , \end{aligned}$$

where \(\nabla g(\theta _1,\theta _2)^T\) denotes the transpose of the gradient of \(g\) at \((\theta _1,\theta _2)\).

Note that if \(p=2\) in Proposition 5, the gradient of g becomes a Jacobian matrix.
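Proposition 5 can be illustrated numerically. The sketch below, with an arbitrary covariance matrix and \(g(x,y)=\log x-\log y\) (mirroring the role of \(g_1\) and \(g_2\) in the proof that follows), checks that the limiting variance of \(h(n)\big (g(X_n,Y_n)-g(\theta _1,\theta _2)\big )\) matches \(\nabla g^T\Sigma \nabla g\):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 400, 200_000
theta = np.array([1.0, 1.0])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])   # illustrative covariance

# Sample means of n i.i.d. N(theta, Sigma) pairs are exactly N(theta, Sigma/n),
# so with h(n) = sqrt(n) we have sqrt(n)*((X_n, Y_n) - theta) ~ N(0, Sigma).
means = rng.multivariate_normal(theta, Sigma / n, size=reps)

# g(x, y) = log(x) - log(y); its gradient at (theta1, theta2) = (1, 1) is (1, -1)
g_vals = np.log(means[:, 0]) - np.log(means[:, 1])
emp_var = float(np.var(np.sqrt(n) * g_vals))

grad = np.array([1.0, -1.0])
theory = float(grad @ Sigma @ grad)          # limiting variance from the delta method
```

Since \(g\) is nonlinear here, the agreement is only asymptotic in \(n\); the second-order (bias) terms are of order \(1/\sqrt{n}\) and vanish in the limit, exactly as Proposition 5 asserts.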

Proof of Theorem 2

To simplify notation, we set

$$\begin{aligned} \widehat{U_{t_0,j}}=\frac{1}{2^{j+1}\epsilon _j} \frac{\sum _{k\in \nu _{t_0,2^j}}d_X(2^{-j},k)^2}{c_2(t_0)2^{-j(2H(t_0)+1)}}-1. \end{aligned}$$

Therefore the following decomposition holds:

$$\begin{aligned}&\sqrt{2^{j+1}\epsilon _j}\widehat{U_{t_0,j}}-U_{t_0,j}=\frac{1}{\sqrt{2^{j+1}\epsilon _j}}\sum _{k\in \nu _{t_0,2^j}}d_X(2^{-j},k)^2\nonumber \\&\qquad \times \left( \frac{1}{c_2(t_0) 2^{-j(2H(t_0)+1)}}-\frac{1}{\mathbb {E}(d_X(2^{-j},k)^2)}\sqrt{\frac{2^{j+1}\epsilon _j}{card(\nu _{t_0,2^j})}}\right) \nonumber \\&\qquad +\left( \sqrt{card(\nu _{t_0,2^j})}-\sqrt{2^{j+1}\epsilon _j}\right) \nonumber \\&\quad =\frac{1}{\sqrt{2^{j+1}\epsilon _j}}\sum _{k\in \nu _{t_0,2^j}}d_X(2^{-j},k)^2\nonumber \\&\qquad \times \left( \frac{\mathbb {E}\left( d_X(2^{-j},k)^2\right) \sqrt{card(\nu _{t_0,2^j})}-c_2(t_0) 2^{-j(2H(t_0)+1)}\sqrt{2^{j+1}\epsilon _j}}{c_2(t_0) 2^{-j(2H(t_0)+1)}\mathbb {E}\left( d_X(2^{-j},k)^2\right) \sqrt{card(\nu _{t_0,2^j})}}\right) \nonumber \\&\qquad +\left( \sqrt{card(\nu _{t_0,2^j})}-\sqrt{2^{j+1}\epsilon _j}\right) . \end{aligned}$$
(4.29)

The fact that \(card(\nu _{t_0,2^j})=2^{j+1}\epsilon _j+\mathcal {O}(1)\) implies

$$\begin{aligned} \sqrt{card(\nu _{t_0,2^j})}=\sqrt{2^{j+1}\epsilon _j}+\mathcal {O}\left( 2^{-j/2}\epsilon _j^{-1/2}\right) . \end{aligned}$$
(4.30)

Moreover, taking \(r=4\) in (4.15), we have

$$\begin{aligned} \mathbb {E}|d_X(2^{-j},k)|^4=\mathcal {O}\left( 2^{-j(4H(t_0)+2)}\right) . \end{aligned}$$
(4.31)

By (2.6), (4.29), (4.8), (4.30) and (4.31), we then obtain

$$\begin{aligned}&\mathbb {E}\left| \sqrt{2^{j+1}\epsilon _j}\widehat{U_{t_0,j}}-U_{t_0,j}\right| ^2 =\frac{card(\nu _{t_0,2^j})^2}{card(\nu _{t_0,2^j})}\mathcal {O}\left( 2^{-j(4H(t_0)+2)}\right) \nonumber \\&\qquad \times \left( \frac{\big (c_2(t_0)2^{-j(2H(t_0)+1\big )}+\mathcal {O}\big (2^{-j(2H(t_0) +2)}j^4)\big )(\sqrt{2^{j+1}\epsilon _j}+\mathcal {O}\big (2^{-(j+1)/2}\epsilon _j^{-1/2})\big )}{\big (c_2(t_0) 2^{-j(2H(t_0)+1)}\big )^2\sqrt{2^{j+1}\epsilon _j}}\right. \nonumber \\&\left. \qquad -\,\frac{1}{c_2(t_0) 2^{-j(2H(t_0)+1)}}\right) ^2+\left( \sqrt{2^{j+1}\epsilon _j}+\mathcal {O}\big (2^{-(j+1)/2} \epsilon _j^{-1/2}\big )-\sqrt{2^{j+1}\epsilon _j}\right) ^2\nonumber \\&\quad =\mathcal {O}\big (2^{-j}\epsilon _jj^8+(2^{j}\epsilon _j)^{-1}\big ). \end{aligned}$$

It follows from Markov’s inequality that there exists \(c>0\) such that

$$\begin{aligned} \mathbb {P}\left( \big |\sqrt{2^{j+1}\epsilon _j}\widehat{U_{t_0,j}}-U_{t_0,j}\big |> \eta \right)&\le \eta ^{-2}\mathbb {E}\big |\sqrt{2^{j+1}\epsilon _j}\widehat{U_{t_0,j}}-U_{t_0,j}\big |^2\\&\le c\left( 2^{-j}\epsilon _jj^8+(2^{j}\epsilon _j)^{-1}\right) . \end{aligned}$$

Assumption (A2) then allows us to apply the Borel–Cantelli lemma to obtain

$$\begin{aligned} \big |\sqrt{2^{j+1}\epsilon _j}\widehat{U_{t_0,j}}-U_{t_0,j}\big |\xrightarrow []{a.s.}0. \end{aligned}$$

Therefore, it follows from Proposition 4 and the continuous mapping theorem that

$$\begin{aligned} \sqrt{2^{j+1}\epsilon _j}\left( \widehat{U_{t_0,j}},\sqrt{2\epsilon _{j+1}/\epsilon _j} \widehat{U_{t_0,j+1}}\right) \xrightarrow [j\rightarrow +\infty ]{dist}\mathcal {N}(0,\Sigma ). \end{aligned}$$

Since \(\lim _{j\rightarrow +\infty }\sqrt{2\epsilon _{j+1}/{\epsilon _j}}=\sqrt{2c_0}\), using Slutsky’s theorem, we get

$$\begin{aligned} \sqrt{2^{j+1}\epsilon _j}\left( \widehat{U_{t_0,j}},\sqrt{2c_0} \widehat{U_{t_0,j+1}}\right) \xrightarrow [j\rightarrow +\infty ]{dist}\mathcal N(0,\Sigma ). \end{aligned}$$

This is in fact equivalent to

$$\begin{aligned} \sqrt{2^{j+1}\epsilon _j}\left( \widehat{U_{t_0,j}},\widehat{U_{t_0,j+1}}\right) \xrightarrow [j\rightarrow +\infty ]{dist}\mathcal N(0,\widetilde{\Sigma }), \end{aligned}$$
(4.32)

with

$$\begin{aligned} \widetilde{\Sigma }=\begin{pmatrix} \sigma _{11} & (2c_0)^{-1/2}\sigma _{12}\\ (2c_0)^{-1/2}\sigma _{21} & (2c_0)^{-1}\sigma _{22} \end{pmatrix}. \end{aligned}$$

Then, by applying Proposition 5 to (4.32) with \(g_1(x,y)=(\log (x),\log (y))\), \((X_j,Y_j)=\big (\widehat{U_{t_0,j}}+1,\widehat{U_{t_0,j+1}}+1\big )\), \((\theta _1,\theta _2)=(1,1)\) and \(h(j)=\sqrt{2^{j+1}\epsilon _j}\), we get

$$\begin{aligned}&\sqrt{2^{j+1}\epsilon _j}\left( \log \left( \sum _{k\in \nu _{t_0,2^i}}d_X(2^{-i},k)^2\right) +2iH(t_0)\log (2)-\log (2c_2(t_0)\epsilon _i)\right) _{i=j,j+1}\nonumber \\&\quad ~~\xrightarrow [j\rightarrow +\infty ]{dist}\mathcal N(0,\widetilde{\Sigma }), \end{aligned}$$
(4.33)

because the Jacobian matrix \(\nabla g_1(1,1)=Id\).

Now we apply Proposition 5 again, this time to (4.33) with \(g_2(x,y)=\frac{x-y}{2\log 2}\), to get

$$\begin{aligned} \sqrt{2^{j+1}\epsilon _j}\Big (\widehat{H_{X,2^j}}(t_0)-H(t_0)\Big ) \xrightarrow [j\rightarrow +\infty ]{dist}\mathcal N\big (0,\tilde{c}(t_0)\big ), \end{aligned}$$

where

$$\begin{aligned} \tilde{c}(t_0)=\frac{1}{(2\log 2)^2}\begin{pmatrix} 1\\ -1 \end{pmatrix}^T\widetilde{\Sigma }\begin{pmatrix} 1\\ -1 \end{pmatrix}=\frac{1}{(2\log 2)^2}\Big (\big ((2c_0)^{-1}+1\big )\frac{c_3(t_0)}{c_2(t_0)^2} -2(2c_0)^{-1/2}c_4(t_0)\Big ), \end{aligned}$$
(4.34)

because \(\nabla g_2(0,0)=\frac{1}{2\log 2}\begin{pmatrix} 1\\ -1 \end{pmatrix}\). \(\square \)
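To accompany Theorem 2, here is a minimal numerical sketch, not the paper's exact procedure: it uses standard Brownian motion (so \(H\equiv 1/2\)), Haar coefficients, and a global log-regression across scales instead of the localized sums over \(\nu _{t_0,2^j}\). The scaling \(\mathbb {E}\,d_X(2^{-j},k)^2\asymp 2^{-j(2H+1)}\) underlying (4.15) lets one recover \(H\) from the slope of the log energies:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 2**16
dt = 1.0 / N
# Standard Brownian motion on [0, 1] (so the true exponent is H = 1/2)
B = np.cumsum(rng.normal(0.0, np.sqrt(dt), N))

def haar_energy(B, j):
    """Mean of d(2^{-j}, k)^2 over k, where d is the Haar wavelet
    coefficient 2^{j/2} * int psi(2^j t - k) B(t) dt, via Riemann sums."""
    m = N >> j            # grid points per dyadic interval
    half = m // 2
    coeffs = []
    for k in range(2**j):
        seg = B[k * m:(k + 1) * m]
        # Haar psi: +1 on the first half, -1 on the second half
        coeffs.append(2**(j / 2) * (seg[:half].sum() - seg[half:].sum()) * dt)
    return float(np.mean(np.square(coeffs)))

js = np.arange(4, 9)
log2_energy = np.log2([haar_energy(B, j) for j in js])
# E d^2 scales like 2^{-j(2H+1)}, so the regression slope is about -(2H+1)
slope = np.polyfit(js, log2_energy, 1)[0]
H_hat = (-slope - 1.0) / 2.0
```

For a genuinely multifractional field one would instead localize the sums around \(t_0\) and compare the two consecutive scales \(j\) and \(j+1\), as in the statistic \(\widehat{U_{t_0,j}}\) used in the proof above.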

1.5 Proof of Theorem 3

(3.8) and (3.9) follow directly from Lemma 3 and the fact that convergence in probability and almost sure convergence are preserved under continuous transformations. In order to show (3.10), we only need to verify

$$\begin{aligned} \sqrt{2^{j}\epsilon _j}|\widehat{H_{Y,2^j}}(t_0)-\widehat{H_{X,2^j}} (t_0)|\xrightarrow [j\rightarrow +\infty ]{a.s.}0. \end{aligned}$$
(4.35)

Equivalently, it suffices to show

$$\begin{aligned} \sqrt{2^{j}\epsilon _j}\Big |\log \left( \frac{V_{Y,t_0,j}}{|\Phi '(X(t_0)) |^2V_{X,t_0,j}}\right) -\log (1)\Big |\xrightarrow [j\rightarrow +\infty ]{a.s.}0. \end{aligned}$$
(4.36)

By applying the mean value theorem to \(\log (\cdot )\), we have

$$\begin{aligned} \sqrt{2^{j}\epsilon _j}\left| \log \left( \frac{V_{Y,t_0,j}}{|\Phi '(X(t_0)) |^2V_{X,t_0,j}}\right) -\log (1)\right| =\frac{\sqrt{2^{j}\epsilon _j}}{\gamma _j}\left| \frac{V_{Y,t_0,j}}{|\Phi '(X(t_0))|^2V_{X,t_0,j}}-1\right| , \end{aligned}$$

where \(\gamma _j\) is a random variable valued in the open interval with endpoints \(|\Phi '(X(t_0))|^2V_{X,t_0,j}/V_{Y,t_0,j}\) and 1. Since \(|\gamma _j|\) tends to 1 a.s. as \(j\rightarrow +\infty \), according to (3.4) and (A1) the right-hand side of the above equation can be bounded by

$$\begin{aligned} c2^{j(H(t_0)+1/2)}\epsilon _j^{2H(t_0)+1/2}j^{1/2}|\log \epsilon _j| \quad \text{ with } \text{ some }~c>0, \end{aligned}$$

which converges to 0 as \(j\rightarrow +\infty \), thanks to assumption (A4). \(\square \)

1.6 Proof of Theorem 4

In order to prove (3.11) and (3.12), we rely on the following relation: under assumptions (A1)–(A2),

$$\begin{aligned} \frac{\widehat{V_{n,t_0,J_n}}}{V_{Y,t_0,J_n}}-1=\mathcal {O}_{\mathbb {P}} \left( 2^{(J_n-n)H(t_0)}\right) . \end{aligned}$$
(4.37)

This is because, by using Markov’s inequality, the Cauchy–Schwarz inequality, (1.11), (3.7) and the dominated convergence theorem,

$$\begin{aligned}&\mathbb {P}\left( \left| \frac{\widehat{V_{n,t_0,J_n}}-V_{Y,t_0,J_n}}{V_{Y,t_0,J_n}} \right| >\eta \right) \le \frac{1}{\eta }\mathbb {E}\left| \frac{\widehat{V_{n,t_0,J_n}}-V_{Y,t_0,J_n}}{V_{Y,t_0,J_n}}\right| \nonumber \\&\quad \le \frac{1}{\eta }\left( \mathbb {E}\left| \frac{\widehat{V_{n,t_0,J_n}} -V_{Y,t_0,J_n}}{2^{-2J_nH(t_0)}\epsilon _{J_n}}\right| ^2\right) ^{1/2}\left( \mathbb {E}\left| \frac{2^{-2J_nH(t_0)}\epsilon _{J_n}}{V_{Y,t_0,J_n}}\right| ^2\right) ^{1/2}\nonumber \\&\quad \le c2^{(J_n-n)H(t_0)}/\eta . \end{aligned}$$
(4.38)

Similarly to (4.38), we also obtain, for any arbitrarily small \(\delta '>0\),

$$\begin{aligned} \mathbb {P}\left( 2^{(n-J_n-\delta ' n)H(t_0)}\left| \frac{\widehat{V_{n,t_0,J_n}}-V_{Y,t_0,J_n}}{V_{Y,t_0,J_n}}\right| >\eta \right) \le c2^{-\delta ' nH(t_0)}/\eta . \end{aligned}$$

The fact that \(\beta <1\) implies \(\sum _{n\in \mathbb N}c2^{-\delta ' n H(t_0)}/\eta <+\infty \); then, by the Borel–Cantelli lemma,

$$\begin{aligned} \frac{\widehat{V_{n,t_0,J_n}}}{V_{Y,t_0,J_n}}-1=\mathcal {O}_{a.s.} \left( 2^{(J_n-n+\delta ' n)H(t_0)}\right) =\mathcal {O}_{a.s.}\left( 2^{(\beta -1+\delta ')nH(t_0)}\right) . \end{aligned}$$
(4.39)

Therefore, (3.11) (resp. (3.12)) follows from the following two decompositions:

$$\begin{aligned} \widehat{H_{Y,2^{J_n},n}}(t_0)-H(t_0)=\left( \widehat{H_{Y,2^{J_n},n}}(t_0) -\widehat{H_{Y,2^{J_n}}}(t_0)\right) +\left( \widehat{H_{Y,2^{J_n}}}(t_0)-H(t_0)\right) ; \end{aligned}$$
$$\begin{aligned}&\frac{\widehat{V_{n,t_0,J_n}}}{2|\Phi '(X(t_0))|^2c_2(t_0) 2^{-2J_nH(t_0)}\epsilon _{J_n}}-1\nonumber \\&\quad = \left( \frac{V_{Y,t_0,J_n}}{2|\Phi '(X(t_0))|^2c_2(t_0)2^{-2J_nH(t_0)}\epsilon _j} -1\right) \left( \frac{\widehat{V_{n,t_0,J_n}}}{V_{Y,t_0,J_n}}\right) +\left( \frac{\widehat{V_{n,t_0,J_n}}-V_{Y,t_0,J_n}}{V_{Y,t_0,J_n}}\right) ; \end{aligned}$$

together with equations (3.8) and (4.37) (resp. (3.9) and (4.39)).

For showing (3.13), we only need to show

$$\begin{aligned} \sqrt{2^{J_n+1}\epsilon _{J_n}}\big |\widehat{H_{Y,2^{J_n},n}}(t_0) -\widehat{H_{Y,2^{J_n}}}(t_0)\big |\xrightarrow [n\rightarrow +\infty ]{\mathbb {P}}0. \end{aligned}$$
(4.40)

By the same argument as the one used to prove (4.35), we just need to verify

$$\begin{aligned} \sqrt{2^{J_n+1}\epsilon _{J_n}}\Big |\frac{\widehat{V_{n,t_0,J_n}} -V_{Y,t_0,J_n}}{V_{Y,t_0,J_n}}\Big |\xrightarrow [n\rightarrow +\infty ]{\mathbb {P}}0. \end{aligned}$$

This is true since, according to (4.38) and the fact that (by assumption (A4)) \(\epsilon _{J_n}=o\big (2^{-(\frac{2H(t_0)+1}{4H(t_0)+1})J_n}\big )\), the left-hand side above can be bounded in probability by

$$\begin{aligned} c2^{J_n/2}\epsilon _{J_n}^{1/2}2^{(J_n-n)H(t_0)}=o\left( 2^{H(t_0) (J_n\frac{4H(t_0)+2}{4H(t_0)+1}-n)}\right) . \end{aligned}$$

The fact that \(0<\beta \le \frac{4H(t_0)+1}{4H(t_0)+2}\) entails \(J_n\frac{4H(t_0)+2}{4H(t_0)+1}-n\le 0\). Consequently,

$$\begin{aligned} \sqrt{2^{J_n+1}\epsilon _{J_n}}\Big |\frac{\widehat{V_{n,t_0,J_n}} -V_{Y,t_0,J_n}}{V_{Y,t_0,J_n}}\Big |\xrightarrow [n\rightarrow +\infty ]{\mathbb {P}}0, \end{aligned}$$

hence (3.13) holds. \(\square \)


Cite this article

Jin, S., Peng, Q. & Schellhorn, H. Estimation of the pointwise Hölder exponent of hidden multifractional Brownian motion using wavelet coefficients. Stat Inference Stoch Process 21, 113–140 (2018). https://doi.org/10.1007/s11203-016-9145-1

