Abstract
We propose a wavelet-based approach to construct consistent estimators of the pointwise Hölder exponent of a multifractional Brownian motion, in the case where this underlying process is not directly observed. The relative merits of our estimator are discussed, and we introduce an application to the problem of estimating the functional parameter of a nonlinear model.
Notes
We will denote this relation by \(a\sim b\).
References
Abry P, Flandrin P, Taqqu M S, Veitch D (2002) Self-similarity and long-range dependence through the wavelet lens. In: Theory and Applications of Long-range Dependence, pp. 527–556
Abry P, Gonçalvès P (1997) Multiple-window wavelet transform and local scaling exponent estimation. In: Proceedings of the IEEE-ICASSP-97, pp 3433–3436
Ayache A, Lévy-Véhel J (2004) On the identification of the pointwise Hölder exponent of the generalized multifractional Brownian motion. Stoch Process Appl 111:119–156
Ayache A, Shieh NR, Xiao Y (2011) Multiparameter multifractional Brownian motion: local nondeterminism and joint continuity of the local times. Ann Inst H Poincaré Probab Stat 47(4):1029–1054
Bardet JM (2000) Testing for the presence of self-similarity of Gaussian time series having stationary increments. J Time Ser Anal 21(5):497–515
Bardet JM (2002) Statistical study of the wavelet analysis of fractional Brownian motion. IEEE Trans Inf Theory 48(4):991–999
Bardet JM, Surgailis D (2013) Nonparametric estimation of the local Hurst function of multifractional Gaussian processes. Stoch Process Appl 123(3):1004–1045
Barrière O (2007) Synthèse et estimation de mouvements Browniens multifractionnaires et autres processus à régularité prescrite: définition du processus autorégulé multifractionnaire et applications. Ph.D. Dissertation, Ecole Centrale de Nantes
Bayraktar E, Horst U, Sircar R (2006) A limit theorem for financial markets with inert investors. Math Oper Res 31(4):789–810
Benassi A, Jaffard S, Roux D (1997) Elliptic Gaussian random processes. Rev Mat Iberoam 13:19–90
Bertrand PR, Fhima M, Guillin A (2013) Local estimation of the Hurst index of multifractional Brownian motion by increment ratio statistic method. ESAIM 17:307–327
Bertrand PR, Hamdouni A, Khadhraoui S (2012) Modelling NASDAQ series by sparse multifractional Brownian motion. Methodol Comput Appl Probab 14(1):107–124
Bianchi S, Pantanella A, Pianese A (2013) Modeling stock prices by multifractional Brownian motion: an improved estimation of the pointwise regularity. Quant Finance 13(8):1317–1330
Chan G, Hall P, Poskitt DS (1995) Periodogram-based estimators of fractal properties. Ann Stat 23(5):1684–1711
Chan G, Wood ATA (1998) Simulation of multifractional Brownian motion. In: Proceedings of the COMPSTAT 13th Symposium, Bristol, Great Britain, pp. 233–238
Cheridito P (2003) Arbitrage in fractional Brownian motion models. Finance Stoch 7:533–553
Corlay S, Lebovits J, Lévy-Véhel J (2014) Multifractional stochastic volatility models. Math Finance 24(2):364–402
Coeurjolly JF (2005) Identification of multifractional Brownian motion. Bernoulli 11(6):987–1008
Coeurjolly JF (2006) Erratum: identification of multifractional Brownian motion. Bernoulli 12(2):381–382
Comte F, Renault E (1998) Long memory in continuous-time stochastic volatility models. Math Finance 8(4):291–323
Delbeke L, Van Assche W (1995) A wavelet based estimator for the parameter of self-similarity of fractional Brownian motion. In: Proceedings of the 3rd International Conference on Approximation and Optimization in the Caribbean (electronic), Benemérita Univ. Autón
Frezza M (2014) Goodness of fit assessment for a fractal model of stock markets. Chaos Solitons Fractals 66:41–50
Jin S, Peng Q, Schellhorn H (2015) Estimation of the pointwise Hölder exponent of hidden multifractional Brownian motion using wavelet coefficients, long version. arXiv:1512.05054
Ledoux M, Talagrand M (2010) Probability in Banach spaces. Springer, Berlin
Lévy-Véhel J (1995) Fractal approaches in signal processing. Fractals 3:755–775
Oehlert GW (1992) A note on the delta method. Am Stat 46(1):27–29
Ohashi A (2009) Fractional term structure models: no-arbitrage and consistency. Ann Appl Probab 19(4):1553–1580
Peng Q (2011a) Uniform Hölder exponent of a stationary increments Gaussian process: estimation starting from average values. Stat Probab Lett 81:1326–1335
Peng Q (2011b) Statistical inference for hidden multifractional processes in a setting of stochastic volatility models. Ph.D. Dissertation, Lille 1 University
Rogers LCG (1997) Arbitrage with fractional Brownian motion. Math Finance 7:95–105
Rosenbaum M (2008) Estimation of the volatility persistence in a discretely observed diffusion model. Stoch Process Appl 118:1434–1462
Xiao W, Zhang W, Zhang Z, Wang Y (2010) Pricing currency options in a fractional Brownian motion with jumps. Econ Model 27:935–942
Acknowledgments
We gratefully acknowledge the anonymous reviewers for their careful reading of our manuscript and their many insightful comments. Their suggestions led to a substantial improvement of the manuscript.
Appendix
1.1 Proofs of (1.8) and (1.9)
First, by using the triangle inequality, we get
Recall that \(Y(t)=\Phi (X(t))\) for \(t\in [0,1]\). Define the random variable \(\Vert X\Vert _{\infty }\) to be
Since \(\theta \) is continuous and nonzero almost everywhere, \(\{X(t)\}_{t\in [0,1]}\) is a Gaussian process with continuous trajectories. By applying Dudley’s theorem and Borell’s inequality (more precisely, by the same arguments as in the proof of \(\mathbb {E}(e^{\widetilde{V}})<+\infty \) on pp. 1445–1446 of Rosenbaum (2008); see also Ledoux and Talagrand (2010)), we can show that \( \mathbb {E}(e^{\Vert X\Vert _{\infty }})<+\infty . \) This implies that all moments of \(\Vert X\Vert _{\infty }\) are finite. Hence, using the mean value theorem, we get
where \(C_1=\sup \limits _{s\in [-\Vert X\Vert _{\infty },\Vert X\Vert _{\infty }]}|\Phi '(s)|\) is a random variable. It follows from (4.1) and (4.3) that
In order to get (1.8), we need (1.7), from which we see that there exists a positive random variable \(C_2\), with all moments finite, such that
Observe that for \(t\in [l2^{-n},(l+1)2^{-n}]\),
and
This, together with the fact that \(|l2^{-n}-t|\le 2^{-n}\) for \(t\in [l2^{-n},(l+1)2^{-n}]\), yields that there exists a positive random variable \(C_3\), with all moments finite, such that
Then (1.8) results from (4.4) and (4.7). \(\square \)
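For the reader’s convenience, the concentration inequality invoked at the beginning of this proof (Borell’s inequality, in the form used by Rosenbaum (2008)) can be stated as follows: if \((X(t))_{t\in[0,1]}\) is a centered Gaussian process with almost surely bounded trajectories and \(\sigma^{2}=\sup_{t\in[0,1]}\mathbb{E}(X(t)^{2})\), then for every \(u>0\),

```latex
\mathbb{P}\left(\Vert X\Vert_{\infty}-\mathbb{E}\Vert X\Vert_{\infty}>u\right)
\le \exp\left(-\frac{u^{2}}{2\sigma^{2}}\right).
```

Combined with Dudley’s theorem, which gives \(\mathbb{E}\Vert X\Vert_{\infty}<+\infty\), this Gaussian tail bound yields \(\mathbb{E}(e^{\Vert X\Vert_{\infty}})<+\infty\).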
Now we are going to prove (1.9). For \(r\ge 1\), we consider the \(r\)-th order moment of \(|\widehat{d_{Y,n}}(2^{-j},k)-d_Y(2^{-j},k)|\) in (4.4). By applying the following two versions of Jensen’s inequality:
and the Cauchy–Schwarz inequality, we obtain
Note that by Lemma 2.12 (i) in Ayache et al. (2011), there exists a constant \(c_1>0\) which does not depend on n, l, j, k and H such that
Therefore by using again (4.6) and the fact that \(|l2^{-n}-t|\le 2^{-n}\), there exists some constant \(c>0\) such that
Using (4.10) and similar computations as in (4.5), we obtain that there exists \(c_2>0\) such that
By using the fact that all moments of a Gaussian variable are equivalent, we get that there exists some constant \(c_3>0\) (only depending on r) such that
Finally it results from (4.9) and (4.12) that
where \(c=\big (\sup _{s\in [0,1]}|\psi (s)|^r\big )\big (c_3\mathbb {E}(C_1^{2r})\big )^{1/2}\). Therefore (1.9) has been proven. \(\square \)
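The auxiliary inequalities used in this proof are presumably the standard ones; for completeness: for real numbers \(a_1,\dots,a_m\), \(r\ge 1\), a random variable \(Z\), and a centered Gaussian variable \(G\),

```latex
\Big|\sum_{i=1}^{m}a_{i}\Big|^{r}\le m^{r-1}\sum_{i=1}^{m}|a_{i}|^{r},
\qquad
\big(\mathbb{E}|Z|\big)^{r}\le \mathbb{E}\big(|Z|^{r}\big),
\qquad
\mathbb{E}|G|^{r}=m_{r}\big(\mathbb{E}G^{2}\big)^{r/2},
```

where \(m_{r}=\mathbb{E}|N|^{r}\) with \(N\sim\mathcal{N}(0,1)\); the last identity is the “equivalence of Gaussian moments” invoked just before (4.12).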
1.2 Proof of (1.11)
First notice that, by the definition of \(\nu _{t_0,2^j}\), we have
as \(j\rightarrow +\infty \), because \(2^{j}\epsilon _j\ge 1\).
It follows from (4.8), Cauchy–Schwarz inequality, (4.13) and the fact that \((a+b)^4\le 2^3(a^4+b^4)\) that
Roughly speaking (and it can be proven without effort), since the trajectory \(\{\Phi (X(t))\}_{t\ge 0}\) is at least as smooth as \(\{X(t)\}_{t\ge 0}\), for \(r\ge 1\) there exists a constant \(c_4>0\) (only depending on r) such that
Then it results from (4.14), (4.15), (1.9) and (4.13) that
In view of the equivalence relation between \(H(k2^{-j})\) and \(H(t_0)\) as \(k\in \nu _{t_0,2^j}\) and \(j\rightarrow +\infty \), (1.11) finally results from (4.16). \(\square \)
1.3 Proof of Lemma 2
By using Chebyshev’s inequality, (2.9) and the condition that \(\epsilon _j=\mathcal {O}(j^{-1})\), we get for any \(\eta >0\),
where \(c>0\) is some constant which does not depend on j. This implies
Then it follows from (2.8), (4.18) and the fact that \(\lim _{j\rightarrow +\infty }2^{-j/2}\epsilon _{j}^{-1/2}=0 \) that
(2.11) has been proven. Note that (2.12) follows straightforwardly from (2.11). Now we only need to show that (2.13) holds. From (4.18) and Chebyshev’s inequality, we observe that there exists a constant \(c>0\), which depends neither on \(\eta \) nor on j, such that for any \(\eta >0\),
Since \(\sum \limits _{j=1}^{+\infty }(2^j\epsilon _j)^{-\delta }<+\infty \), applying the Borel–Cantelli lemma leads to
Further observe that
Therefore (2.13) follows. Lemma 2 has been proven. \(\square \)
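The Borel–Cantelli step in this proof follows the usual pattern: writing \(Z_j\) for the quantity controlled in (4.18) (our notation), summability of the tail probabilities gives, for every \(\eta>0\),

```latex
\sum_{j=1}^{+\infty}\mathbb{P}\big(|Z_{j}|>\eta\big)<+\infty
\;\Longrightarrow\;
\mathbb{P}\Big(\limsup_{j\rightarrow+\infty}\{|Z_{j}|>\eta\}\Big)=0,
```

and letting \(\eta\) run over a sequence decreasing to 0 yields \(Z_{j}\rightarrow 0\) almost surely.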
1.4 Proof of Theorem 2
The proof of Theorem 2 relies heavily on the following Propositions 4 and 5.
Proposition 4
For any \(t_0\in (0,1)\) and any integer \(j\ge 1\), denote by
Then
where \(\Sigma =(\sigma _{ij})_{i,j\in \{1,2\}}\) with \(\sigma _{11}=\sigma _{22}=c_3(t_0)/(c_2(t_0))^2\) and
Proof
Following a method similar to that in Bardet (2000), we show that for the empirical average of \(d_X( 2^{-j},k)^2/\mathbb {E}(d_X(2^{-j},k)^2)\), a multivariate central limit theorem holds thanks to a Lindeberg condition. More precisely, for j big enough and for \(k\in \nu _{t_0,2^j}\), \(k'\in \nu _{t_0,2^{j+1}}\), denote by
A Lindeberg condition can thus be deduced from the following relations:
- For \(|2k-k'|=0\),
  $$\begin{aligned} T_{j,k,k'}=2^{2H(2^{-j}k)+2}\left( \frac{C_2(2^{-j}k,1/2)}{c_2(2^{-j}k)}\right) ^2 +\mathcal {O}(2^{-j}j^4). \end{aligned}$$ (4.22)
- For \(|2k-k'|=1\),
  $$\begin{aligned} T_{j,k,k'}=2\left( \frac{c_1(2^{-j}k,2^{-(j+1)}k')^2}{c_2(2^{-j}k)c_2 (2^{-(j+1)}k')}\right) +\mathcal {O}\left( T(2k,k',Q,2^{-j},2^{-j})\right) . \end{aligned}$$ (4.23)
- For \(|2k-k'|\ge 2\),
  $$\begin{aligned} T_{j,k,k'}= & {} 2\left( \frac{(C_1(H(2^{-j}k)+H(2^{-(j+1)}k'),Q,2)\theta (2^{-j}k) \theta (2^{-(j+1)}k'))^2}{c_2(2^{-j}k)c_2(2^{-(j+1)}k')|2k-k'|^{4Q-2H(2^{-j}k) -2H(2^{-(j+1)}k')}}\right) \nonumber \\&+\,\mathcal {O}\left( T(k,k',Q,2^{-j},2^{-(j+1)})\right) . \end{aligned}$$ (4.24)
To show (4.22)–(4.24) hold, we first recall that if \((Z,Z')\) has a zero-mean joint normal distribution, then (see e.g., Lemma 5.3.4 in Peng (2011b))
We thus observe, from (4.21) and (4.25), that
Therefore (4.22) results from (4.26), (2.3) and (2.6). In order to obtain (4.23), it suffices to take \(l=2k\) for \(k\in \nu _{t_0,2^j}\); then \(l,k'\) belong to \(\nu _{t_0,2^{j+1}}\) and satisfy \(\sup _{s,t\in [0,1]}|(t-s)/(l-k')|=\sup _{s,t\in [0,1]}|s-t|\le 1\). This entails that (2.2) can be applied to \((a,b,l,k')\) by setting \(a=b=2^{-(j+1)}\) and \(|l-k'|=1\). As a consequence, (4.23) follows from (4.26), (2.4) and (2.6). For proving (4.24), we just plug \(a=2^{-j}\), \(b=2^{-(j+1)}\) into (2.2) and then use (4.26) and (2.6). Using this identification of the \(T_{j,k,k'}\), we show that the Lindeberg condition (the same as in Bardet (2000)) is verified and the central limit theorem holds. Now it remains to show
We only prove that (4.27) holds, since (4.28) follows in quite a similar way. Remark that \(\lim _{j\rightarrow +\infty }\sup _{k\in \nu _{t_0,2^j}}2^{-j}k=t_0\); and for j large enough,
and for a function f continuous at \(t_0\),
Combining all the above facts with (4.22)–(4.24), (5.49)–(5.51) in Jin et al. (2015), \(card(\nu _{t_0,2^j})\sim 2^{j}\epsilon _j\) and assumption (A3), we obtain
Finally, we have proved Proposition 4. \(\square \)
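The Gaussian identity (4.25) invoked above is presumably the classical fact that for a zero-mean jointly normal pair \((Z,Z')\), \(\mathrm{Cov}(Z^2,Z'^2)=2\,\mathrm{Cov}(Z,Z')^2\). A quick Monte Carlo sanity check of this identity (the correlation value `rho` below is an arbitrary illustrative choice, not a quantity from the paper):

```python
import numpy as np

# Monte Carlo check of Cov(Z^2, Z'^2) = 2 * Cov(Z, Z')^2 for a zero-mean
# jointly Gaussian pair with unit variances and correlation rho.
rng = np.random.default_rng(0)
rho = 0.6                                    # illustrative correlation
n = 1_000_000
z = rng.standard_normal(n)
w = rho * z + np.sqrt(1 - rho**2) * rng.standard_normal(n)  # Corr(z, w) = rho

empirical = np.cov(z**2, w**2)[0, 1]
theoretical = 2 * rho**2                     # since Var(z) = Var(w) = 1
print(empirical, theoretical)                # the two should agree to ~1e-2
```

The identity is a special case of Isserlis’ theorem for Gaussian fourth moments.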
Proposition 5
(Multivariate delta rule, see e.g., Oehlert (1992)) Let the estimators \(\{(X_n,Y_n)\}_{n\in \mathbb N}\) (valued in \((0,+\infty )^2\)) of \((\theta _1,\theta _2)\) satisfy the following central limit theorem:
where \(\Sigma \) denotes the covariance matrix of the limit distribution and \((h(n))_n\) is a sequence of positive numbers tending to infinity. Let \(g:~(0,+\infty )^2\rightarrow \mathbb R^p\) (\(p=1\) or 2) belong to \(C^1((0,+\infty )^2)\); then the following convergence in law holds:
where \(\nabla g(\theta _1,\theta _2)^T\) denotes the transpose of the gradient of g at \((\theta _1,\theta _2)\).
Note that if \(p=2\) in Proposition 5, the gradient of g becomes a Jacobian matrix.
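Proposition 5 can be checked numerically. The sketch below uses a one-dimensional instance with illustrative choices that are not from the paper: \(X_n\) the sample mean of \(n\) Exp(1) draws, \(\theta=1\), \(\sigma^2=1\) and \(g=\log\), so the limit variance is \(g'(1)^2\sigma^2=1\).

```python
import numpy as np

# Numerical sketch of the delta method (dimension 1):
# if sqrt(n)(X_n - theta) -> N(0, sigma^2), then
# sqrt(n)(g(X_n) - g(theta)) -> N(0, g'(theta)^2 * sigma^2).
rng = np.random.default_rng(1)
n, m = 1000, 5000                        # sample size, Monte Carlo replications
samples = rng.exponential(1.0, size=(m, n))
x_bar = samples.mean(axis=1)             # m realizations of X_n (theta = 1)
stat = np.sqrt(n) * np.log(x_bar)        # sqrt(n)(g(X_n) - g(1)), g(1) = 0

print(stat.var())                        # should be close to 1
```

The same computation with a bivariate \(g\) (as in the proof of Theorem 2) replaces \(g'(\theta)^2\sigma^2\) by \(\nabla g\,\Sigma\,\nabla g^{T}\).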
Proof of Theorem 2
To simplify notation, we denote by
Therefore the following decomposition holds:
The fact that \(card(\nu _{t_0,2^j})=2^{j+1}\epsilon _j+\mathcal {O}(1)\) implies
Also, by taking \(r=4\) in (4.15), we have
By (2.6), (4.29), (4.8), (4.30) and (4.31), we then obtain
It follows from Markov’s inequality that there exists \(c>0\) such that
Assumption (A2) then allows us to apply the Borel–Cantelli lemma to obtain
Therefore, it follows from Proposition 4 and the continuous mapping theorem that
Since \(\lim _{j\rightarrow +\infty }\sqrt{2\epsilon _{j+1}/{\epsilon _j}}=\sqrt{2c_0}\), using Slutsky’s theorem, we get
This is in fact equivalent to
with \(\widetilde{\Sigma }=\begin{pmatrix}\sigma _{11} &amp; (2c_0)^{-1/2}\sigma _{12}\\ (2c_0)^{-1/2}\sigma _{21} &amp; (2c_0)^{-1}\sigma _{22}\end{pmatrix}\). Then by applying Proposition 5 to (4.32) with \(g_1(x,y)=(\log (x),\log (y))\), \((X_j,Y_j)=\big (\widehat{U_{t_0,j}}+1,\widehat{U_{t_0,j+1}}+1\big )\), \((\theta _1,\theta _2)=(1,1)\) and \(h(j)=\sqrt{2^{j+1}\epsilon _j}\), we get
because the Jacobian matrix \(\nabla g_1(1,1)=Id\).
Now we apply Proposition 5 again, to (4.33) with \(g_2(x,y)=\frac{x-y}{2\log 2}\), to get
where
because \(\nabla g_2(0,0)=\frac{1}{2\log 2}\begin{pmatrix}1\\ -1\end{pmatrix}\). \(\square \)
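The estimator behind Theorem 2 compares coefficient energies at two consecutive dyadic scales and maps their log-ratio to a regularity exponent via \(g_2(x,y)=(x-y)/(2\log 2)\). As a toy illustration (ordinary Brownian motion with \(H=1/2\) standing in for mBm, raw increments standing in for wavelet coefficients — a much simpler setting than the paper’s hidden-process one), the two-scale log-ratio recovers \(H\):

```python
import numpy as np

# Toy two-scale estimator: for a process with Var(X(t+a) - X(t)) = a^{2H},
# increment variances V at scales a and 2a give
# H = (log V_coarse - log V_fine) / (2 log 2).
rng = np.random.default_rng(2)
N = 2**20
dt = 1.0 / N
path = np.cumsum(rng.standard_normal(N) * np.sqrt(dt))  # Brownian motion on [0,1]

lag = 2**6                                   # fine scale a = lag/N
v_fine = np.var(path[lag:] - path[:-lag])
v_coarse = np.var(path[2 * lag:] - path[:-2 * lag])
h_hat = (np.log(v_coarse) - np.log(v_fine)) / (2 * np.log(2))
print(h_hat)                                 # should be close to H = 0.5
```

In the paper the scales are the wavelet levels \(2^{-j}\) and \(2^{-(j+1)}\), the averages are localized around \(t_0\) via \(\nu_{t_0,2^j}\), and the correct normalization involves \(2H+1\) rather than \(2H\); this sketch only conveys the log-ratio mechanism.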
1.5 Proof of Theorem 3
(3.8) and (3.9) are obvious by Lemma 3 and the fact that convergence in probability and almost sure convergence are preserved under continuous transformations. In order to show (3.10), we only need to verify
Equivalently, it suffices to show
We have, by applying the mean value theorem to \(\log (\cdot )\),
where \(\gamma _j\) is some random variable valued in the open interval with endpoints \(|\Phi '(X(t_0))|^2V_{X,t_0,j}/V_{Y,t_0,j}\) and 1. Since \(|\gamma _j|\) tends to 1 a.s. as \(j\rightarrow +\infty \), according to (3.4) and (A1) the right-hand side of the above equation can be bounded by
which converges to 0 as \(j\rightarrow +\infty \), thanks to assumption (A4). \(\square \)
1.6 Proof of Theorem 4
In order to prove (3.11) and (3.12), we rely on the following relation: under assumptions (A1)–(A2),
This is because, by using Markov’s inequality, the Cauchy–Schwarz inequality, (1.11), (3.7) and the dominated convergence theorem,
Similarly to (4.38), we also obtain for any \(\delta '>0\) arbitrarily small,
The fact that \(\beta <1\) implies \(\sum _{n\in \mathbb N}c2^{-\delta ' n H(t_0)}/\eta <+\infty \); then by the Borel–Cantelli lemma,
Therefore, (3.11) (resp. (3.12)) follows from the following two decompositions:
and equations (3.8), (4.37) (resp. (3.9), (4.39)).
For showing (3.13), we only need to show
Using the same idea as in the proof of (4.35), we just need to verify
This is true since, according to (4.38) and the fact that (by assumption (A4)) \(\epsilon _{J_n}=o\big (2^{-(\frac{2H(t_0)+1}{4H(t_0)+1})J_n}\big )\), the left-hand side of the above expression can be bounded in probability by
The fact that \(0<\beta \le \frac{4H(t_0)+1}{4H(t_0)+2}\) entails \(J_n\frac{4H(t_0)+2}{4H(t_0)+1}-n\le 0\). Consequently,
hence (3.13) holds. \(\square \)
Jin, S., Peng, Q. & Schellhorn, H. Estimation of the pointwise Hölder exponent of hidden multifractional Brownian motion using wavelet coefficients. Stat Inference Stoch Process 21, 113–140 (2018). https://doi.org/10.1007/s11203-016-9145-1