Abstract
We establish consistency and asymptotic normality of the maximum likelihood estimator in the level-effect ARCH model of Chan et al. (J Financ 47(3):1209–1227, 1992). Simulations further show that these asymptotic properties provide a good approximation in finite samples.
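As a stand-alone illustration of how such a finite-sample check can be run (a minimal Monte Carlo sketch, not the paper's actual simulation design: the level parameter is treated as known so that the rescaled series reduces to a standard ARCH(1), and all parameter values below are hypothetical):

```python
import numpy as np

# Hypothetical setup: with the level parameter treated as known, the rescaled
# series ytil_t behaves as a standard ARCH(1):
#   ytil_t = sigma_t * z_t,  sigma_t^2 = w + alpha * ytil_{t-1}^2.
def simulate(w, alpha, T, rng):
    y = np.empty(T)
    y[0] = np.sqrt(w / (1 - alpha)) * rng.standard_normal()
    for t in range(1, T):
        y[t] = np.sqrt(w + alpha * y[t - 1] ** 2) * rng.standard_normal()
    return y

def loglik(w, alpha, y):
    # Gaussian quasi log-likelihood (up to constants)
    s2 = w + alpha * y[:-1] ** 2
    return -0.5 * np.sum(np.log(s2) + y[1:] ** 2 / s2)

def mle_alpha(y, w):
    # crude grid-search MLE for alpha, with w treated as known for simplicity
    grid = np.linspace(0.01, 0.95, 95)
    return grid[np.argmax([loglik(w, a, y) for a in grid])]

rng = np.random.default_rng(42)
w0, a0, T = 0.5, 0.3, 1000
est = [mle_alpha(simulate(w0, a0, T, rng), w0) for _ in range(100)]
print(np.mean(est), np.std(est))  # alpha-hat centered near a0 = 0.3
```

A full replication would estimate all parameters jointly and compare the distribution of \(\sqrt{T}(\widehat{\alpha }-\alpha _{0})\) across replications with its Gaussian limit.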
Notes
Alternative representations for a level-effect ARCH model may be considered, such as
$$\begin{aligned} y_{t} &= \sigma _{t}z_{t} \\ \sigma _{t}^{2} &= w+\alpha y_{t-1}^{2}+\gamma y_{t-1} \end{aligned}$$
although they are outside the scope of this paper.
If \(\delta \) is estimated, this will affect the asymptotic properties of the estimators of the volatility parameters; we leave this for future research.
Note that the data generating process (DGP) is assumed to be ergodic. It might be possible to relax this assumption and instead assume only that the DGP is initiated at some fixed value and has an ergodic solution (see e.g. Kristensen and Rahbek (2005) and Jensen and Rahbek (2007)); we leave this for future research.
We let \(\overset{a.s.}{\longrightarrow }\) denote convergence “almost surely” as \(T\rightarrow \infty .\) Also note that in this case \(\frac{\partial }{\partial \alpha }L_{T}\left( \theta \right) =-\sum _{t=1}^{T}\frac{1}{2}\left( 1-\frac{{\widetilde{y}}_{t}^{2}}{\sigma _{t}^{2}}\right) \frac{{\widetilde{y}}_{t-1}^{2}}{\sigma _{t}^{2}},\) \(\frac{\partial ^{2}}{\partial \alpha ^{2}}L_{T}\left( \theta _{0}\right) =\frac{1}{2}\sum _{t=1}^{T}\left( 1-2\frac{{\widetilde{y}}_{t}^{2}}{\sigma _{t}^{2}}\right) \frac{{\widetilde{y}}_{t-1}^{4}}{\sigma _{t}^{4}}\) and \(\frac{\partial ^{3}}{\partial \alpha ^{3}}L_{T}\left( \theta _{0}\right) =-\sum _{t=1}^{T}\left( 1-3\frac{{\widetilde{y}}_{t}^{2}}{\sigma _{t}^{2}}\right) \frac{{\widetilde{y}}_{t-1}^{6}}{\sigma _{t}^{6}},\) as shown in Results 1, 2 and 3 in the Technical Appendix; these correspond to Eqs. (4), (5) and (6) of Jensen and Rahbek (2004a).
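The first of these expressions can be sanity-checked against a finite-difference derivative (a sketch under the assumption, consistent with the displayed derivatives, that \(L_{T}(\theta )=-\frac{1}{2}\sum _{t}\left( \ln \sigma _{t}^{2}+{\widetilde{y}}_{t}^{2}/\sigma _{t}^{2}\right) \) with \(\sigma _{t}^{2}=w+\alpha {\widetilde{y}}_{t-1}^{2}\); the draws below are arbitrary placeholders, not real data):

```python
import numpy as np

# Assumed quasi log-likelihood (consistent with the displayed score):
#   L_T = -(1/2) * sum_t [ ln(sigma_t^2) + ytil_t^2 / sigma_t^2 ],
# with sigma_t^2 = w + alpha * ytil_{t-1}^2.
def L(alpha, w, ytil):
    s2 = w + alpha * ytil[:-1] ** 2
    return -0.5 * np.sum(np.log(s2) + ytil[1:] ** 2 / s2)

def dL_dalpha(alpha, w, ytil):
    # the analytic first derivative quoted above
    s2 = w + alpha * ytil[:-1] ** 2
    return -0.5 * np.sum((1 - ytil[1:] ** 2 / s2) * ytil[:-1] ** 2 / s2)

rng = np.random.default_rng(0)
ytil = rng.standard_normal(500)   # placeholder data
a, w, h = 0.3, 1.0, 1e-6
fd = (L(a + h, w, ytil) - L(a - h, w, ytil)) / (2 * h)  # central difference
print(abs(fd - dL_dalpha(a, w, ytil)))  # close to zero
```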
In a Supplementary Appendix that is available upon request from any of the authors, we provide additional simulation results where \(\delta =\left( a,b\right) ^{\prime }\) is estimated jointly with the variance parameters. Based on those results, we conjecture that when \(\delta \) is also estimated and Assumptions A and B hold, the ML estimator remains asymptotically normally distributed.
References
Andersen TG, Lund J (1997) Estimating continuous time stochastic volatility models of the short term interest rate. J Econom 77:343–377
Ball CA, Torous WN (1999) The stochastic volatility of short-term interest rates: some international evidence. J Financ 54(6):2339–2359
Berkes I, Horváth L (2004) The efficiency of the estimators of the parameters in GARCH processes. Ann Stat 32(2):633–655
Billingsley P (1961) The Lindeberg–Lévy theorem for martingales. Proc Am Math Soc 12:788–792
Bouezmarni T, Rombouts JVK (2010) Nonparametric density estimation for positive time series. Comput Stat Data Anal 54(2):245–261
Brenner R, Harjes R, Kroner K (1996) Another look at models of the short term interest rates. J Financ Quant Anal 31:85–107
Brown BM (1971) Martingale central limit theorems. Ann Math Stat 42:59–66
Brown TC, Feigin PD, Pallant DL (1996) Estimation for a class of positive nonlinear time series models. Stoch Process Appl 63(2):139–152
Broze L, Scaillet O, Zakoïan JM (1995) Testing for continuous time models of the short term interest rate. J Empir Financ 2:199–223
Bu R, Chen J, Hadri K (2017) Specification analysis in regime-switching continuous-time diffusion models for market volatility. Stud Nonlinear Dyn Econom 21:1
Chan KC, Karolyi GA, Longstaff F, Sanders A (1992) An empirical comparison of alternative models of short term interest rates. J Financ 47(3):1209–1227
Dias-Curto J, Castro-Pinto J, Nuno-Tavares G (2009) Modeling stock markets’ volatility using GARCH models with normal, Student’s t and stable Paretian distributions. Stat Pap 50:311–321
Engle RF (1982) Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometrica 50:987–1007
Fornari F, Mele A (2006) Approximating volatility diffusions with CEV-ARCH models. J Econ Dyn Control 30(6):931–966
Francq C, Wintenberger O, Zakoïan J-M (2018) Goodness-of-fit tests for Log-GARCH and EGARCH models. TEST 27(1):27–51
Frydman H (1994) Asymptotic inference for the parameters of a discrete-time square-root process. Math Financ 4(2):169–181
Hamadeh T, Zakoïan J-M (2011) Asymptotic properties of LS and QML estimators for a class of nonlinear GARCH processes. J Stat Plan Inference 141(1):488–507
Han H, Zhang S (2009) Nonstationary semiparametric ARCH models. Manuscript, Department of Economics, National University of Singapore
Jensen ST, Rahbek A (2004a) Asymptotic normality of the QML estimator of ARCH in the nonstationary case. Econometrica 72(2):641–646
Jensen ST, Rahbek A (2004b) Asymptotic inference for nonstationary GARCH. Econ Theory 20(6):1203–1226
Jensen ST, Rahbek A (2007) On the law of large numbers for (geometrically) Ergodic Markov chains. Econ Theory 23:761–766
Klüppelberg C, Lindner A, Maller R (2004) A continuous-time GARCH process driven by a Lévy process: stationarity and second-order behaviour. J Appl Probab 41:601–622
Kristensen D, Rahbek A (2005) Asymptotics of the QMLE for a class of ARCH(q) models. Econ Theory 21:946–961
Kristensen D, Rahbek A (2008) Asymptotics of the QMLE for non-linear ARCH models. J Time Ser Econom 1(1): Article 2
Lehmann EL (1999) Elements of large sample theory. Springer, New York
Ling S (2004) Estimation and testing stationarity for double-autoregressive models. J R Stat Soc B 66(1):63–78
Lumsdaine RL (1996) Asymptotic properties of the maximum likelihood estimator in GARCH(1,1) and IGARCH(1,1) models. Econometrica 64(3):575–596
Maheu JM, Yang Q (2016) An infinite hidden Markov model for short-term interest rates. J Empir Financ 38:202–220
Pedersen RS, Rahbek A (2016) Nonstationary GARCH with t-distributed innovations. Econ Lett 138:19–21
Straumann D, Mikosch T (2006) Quasi-maximum-likelihood estimation in conditionally heteroscedastic time series: a stochastic recurrence equations approach. Ann Stat 34(5):2449–2495
Triffi A (2006) Issues of aggregation over time of conditional heteroskedastic volatility models: what type of diffusion do we recover? Stud Nonlinear Dyn Econom 10:4
Wang Q, Phillips PCB (2009a) Asymptotic theory for local time density estimation and nonparametric cointegrating regression. Econ Theory 25:1–29
Wang Q, Phillips PCB (2009b) Structural nonparametric cointegrating regression. Econometrica 77(6):1901–1948
White H (1984) Asymptotic theory for econometricians. Academic Press, New York
We wish to thank the Co-Editor and two referees for very helpful comments. E. M. Iglesias is very grateful for the financial support from the Spanish Ministry of Science and Innovation, Project ECO2015-63845-P. Webpage: www.gcd.udc.es/emma.
Appendix
The analytical expressions for the first-, second- and third-order derivatives of the quasi log-likelihood function are given in a Supplementary Appendix available upon request from any of the authors. We now state three propositions that we need in order to prove Theorem 1. The proof technique for the MLE uses the classical Cramér-type conditions for consistency and asymptotic normality (a central limit theorem for the score, convergence of the Hessian, and uniformly bounded third-order derivatives); see e.g. Lehmann (1999).
Proposition 1
Let \(u_{jt}\left( \theta _{0}\right) \) be defined as in Theorem 1. Under Assumptions A and B, the joint distribution of the score functions evaluated at \(\theta =\theta _{0}\) is asymptotically Gaussian,
where
and \({\overline{m}}_{ij}=E\left( u_{it}\left( \theta _{0}\right) u_{jt}\left( \theta _{0}\right) \right) \) for \(i=1,2,3\) and \(j=1,2,3\).
Proof of Proposition 1
For the proof of Proposition 1, we first need the following two lemmas. \(\square \)
Lemma A
Let Assumptions A and B hold and define \(u_{1t}\left( \theta _{0}\right) =\ln \left| y_{t-1}\right| -w_{0}\left( \frac{1}{w_{0}}-\frac{1}{\sigma _{t}^{2}\left( \theta _{0}\right) }\right) \ln \left| y_{t-2}\right| ,\)
\(u_{2t}\left( \theta _{0}\right) =\left( \frac{1}{\sigma _{t}^{2}\left( \theta _{0}\right) }\right) \) and \(u_{3t}\left( \theta _{0}\right) =\left( \frac{w_{0}}{\alpha _{0}}\right) \left( \frac{1}{w_{0}}-\frac{1}{\sigma _{t}^{2}\left( \theta _{0}\right) }\right) \). Then \(u_{it}\left( \theta _{0}\right) \) is a stationary and ergodic sequence. In addition \(\frac{1}{T} \sum _{t=1}^{T}u_{it}\left( \theta _{0}\right) \overset{p}{\rightarrow } E\left( u_{it}\left( \theta _{0}\right) \right) \equiv {\overline{u}}_{i}\) and \(\frac{1}{T}\sum _{t=1}^{T}u_{it}^{2}\left( \theta _{0}\right) \overset{p}{ \rightarrow }E\left( u_{it}^{2}\left( \theta _{0}\right) \right) \equiv {\overline{m}}_{ii}\) for \(i=1,2,3.\)
Proof of Lemma A
Define \(I_{t}=\{y_{t},z_{t},y_{t-1},z_{t-1},y_{t-2},z_{t-2},\ldots \}.\) Note first that
hence
where we have used Assumptions A and B; the last inequality follows from A3, under which the first two moments of \(\ln \left| y_{t}\right| \) are assumed to be bounded. Hence we can write
where \(g_{1}\) is an \(I_{t}\)-measurable function and where all arguments \(y_{t-1},y_{t-2}\) and \(\sigma _{t}^{2}\left( \theta _{0}\right) \) are stationary and ergodic as a consequence of Lemmas 1 and 2. This implies that \(u_{1t}\left( \theta _{0}\right) \) is stationary and ergodic by Theorem 3.35 in White (1984). Consequently, \(\frac{1}{T}\sum _{t=1}^{T}u_{1t}\left( \theta _{0}\right) \overset{p}{\rightarrow }E\left( u_{1t}\left( \theta _{0}\right) \right) \) follows by the ergodic theorem. Similarly, it follows straightforwardly that \(E\left| u_{2t}\left( \theta _{0}\right) \right| \le \left( \frac{1}{w_{0}}\right) \) and \(E\left| u_{3t}\left( \theta _{0}\right) \right| \le \left( \frac{2}{\alpha _{0}}\right) .\) We can write \(u_{2t}\left( \theta _{0}\right) \equiv g_{2}\left( \sigma _{t}^{2}\left( \theta _{0}\right) \right) \) and \(u_{3t}\left( \theta _{0}\right) \equiv g_{3}\left( \sigma _{t}^{2}\left( \theta _{0}\right) \right) \) and, as above, conclude that \((u_{2t}\left( \theta _{0}\right) ,u_{3t}\left( \theta _{0}\right) )\) is stationary and ergodic; hence \(\frac{1}{T}\sum _{t=1}^{T}u_{it}\left( \theta _{0}\right) \overset{p}{\rightarrow }E\left( u_{it}\left( \theta _{0}\right) \right) \) for \(i=2,3\). Second, notice that
such that
On the right hand side of the first inequality we have used Lemmas 1 and 2, and the second inequality follows from A3 (existence of second order moments). In addition, \(E\left| u_{2t}^{2}\left( \theta _{0}\right) \right| \le \left( \frac{1}{w_{0}^{2}}\right) \) and \(E\left| u_{3t}^{2}\left( \theta _{0}\right) \right| \le \left( \frac{4}{\alpha _{0}^{2}}\right) .\) We can therefore conclude, by Theorem 3.35 in White (1984), that since \(u_{it}\left( \theta _{0}\right) \) is stationary and ergodic, so is \(u_{it}^{2}\left( \theta _{0}\right) \) for \(i=1,2,3.\) Furthermore, as \(E|u_{it}^{2}\left( \theta _{0}\right) |\) is bounded, \(\frac{1}{T}\sum _{t=1}^{T}u_{it}^{2}\left( \theta _{0}\right) \overset{p}{\rightarrow }E\left( u_{it}^{2}\left( \theta _{0}\right) \right) \) for \(i=1,2,3\) follows from the ergodic theorem. This completes the proof of Lemma A. \(\square \)
Lemma B
Under Assumptions A and B, the marginal distributions of the score functions given by Eqs. (9)–(11) evaluated at \(\theta =\theta _{0}\) are asymptotically Gaussian,
where \({\overline{m}}_{ii},\) \(i=1,2,3\) and \(\zeta \) are defined by Lemma A and A3 respectively.
Proof of Lemma B
We will prove (4) in detail; the results in (5) and (6) hold by identical arguments. Define again \(I_{t}=\{y_{t},z_{t},y_{t-1},z_{t-1},y_{t-2},z_{t-2},\ldots \}\) and recall from Result 1 that
Consequently
Since \(\{ s_{1t},I_{t}\} \) is an adapted stochastic sequence, the result in (7) implies that \(\{s_{1t},I_{t}\} \) is a martingale difference sequence according to Definition 3.75 in White (1984). Further, notice that
Hence,
Furthermore, according to Lemma A we have that
implying that
From this we see that
Importantly, the result given by Eq. (8) corresponds to Condition (1), p. 60, in Brown (1971).
Finally, we need to verify the Lindeberg-type condition, Condition (2) in Brown (1971). In particular, we need to show that
for all \(\epsilon >0.\) By inserting the expression for \(s_{1t}^{2}\) and \( E(V_{1T}^{2}\left( \theta _{0}\right) )\) we get
which converges to zero because, from Lemma A and A1, \(u_{1t}^{2}\left( \theta _{0}\right) \) and \(z_{t}^{2}\) have finite moments and are stationary and ergodic. Consequently, the Lindeberg condition holds.
According to Theorem 2, p. 60, in Brown (1971) we can therefore conclude that
which completes the proof. \(\square \)
Along the same lines
and
for some \(\delta >0\) as \(T\rightarrow \infty .\) \(\square \)
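The structure of this argument can be illustrated with a generic simulation (a sketch with hypothetical processes, unrelated to the paper's DGP): for a martingale difference sequence \(s_{t}=x_{t-1}z_{t}\), the sum normalized as in Brown (1971) is approximately standard normal.

```python
import numpy as np

# Generic martingale CLT illustration in the spirit of Brown (1971).
# s_t = x_{t-1} * z_t is a martingale difference sequence: z_t is i.i.d.
# mean-zero and independent of the past, and x_{t-1} is past-measurable.
rng = np.random.default_rng(7)

def standardized_sum(T, rng):
    z = rng.standard_normal(T)            # i.i.d. innovations
    e = rng.standard_normal(T)
    x = np.zeros(T)
    for t in range(1, T):
        x[t] = 0.5 * x[t - 1] + e[t]      # a stationary AR(1) scale process
    s = x[:-1] * z[1:]                    # martingale differences
    # normalize by the square root of the sum of conditional variances
    return s.sum() / np.sqrt((x[:-1] ** 2).sum())

vals = np.array([standardized_sum(500, rng) for _ in range(400)])
print(round(vals.mean(), 2), round(vals.var(), 2))  # approximately 0 and 1
```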
Proof of Proposition 1
In order to fully characterize the asymptotic distribution we need to determine the off-diagonal elements of the variance covariance matrix of the score vectors given by \(\Lambda .\) In particular, because \(u_{1t}\left( \theta _{0}\right) ,u_{2t}\left( \theta _{0}\right) \) and \(u_{3t}\left( \theta _{0}\right) \) are all stationary and ergodic with finite first moments (from Lemma A) it follows straightforwardly that
Since all the elements in the score vector are asymptotically normal (see Lemma B), the result follows directly from an application of the Cramér–Wold device, see for example Proposition 5.1 in White (1984), which completes the proof. \(\square \)
Proposition 2
Let \(u_{jt}\left( \theta _{0}\right) \) be defined as in Theorem 1. Under Assumptions A and B, the observed information evaluated at \(\theta =\theta _{0}\) converges in probability, i.e.,
where
and \({\overline{m}}_{ij}=E\left( u_{it}\left( \theta _{0}\right) u_{jt}\left( \theta _{0}\right) \right) \) for \(i=1,2,3\) and \(j=1,2,3\).
Proof of Proposition 2
Recall from Result 2 (see Supplementary Appendix) that
Since \(z_{t}^{2}\) and \(u_{1t}^{2}\left( \theta _{0}\right) \) are independent, the first term on the right hand side converges to \(2{\overline{m}}_{11}\) by Lemma A. Furthermore, since \(\left( \ln \left| y_{t-2}\right| \right) ^{2}w_{0}^{2}\left( \frac{1}{w_{0}}-\frac{1}{ \sigma _{t}^{2}\left( \theta _{0}\right) }\right) ^{2}\) and \(w_{0}\left( \frac{1}{w_{0}}-\frac{1}{\sigma _{t}^{2}\left( \theta _{0}\right) }\right) \left( \ln \left| y_{t-2}\right| \right) ^{2}\) have bounded moments, they are ergodic and stationary and since \(E\left( 1-z_{t}^{2}\right) =0,\) it follows from the ergodic theorem that the last term on the right hand side converges in probability to zero. Therefore, the result follows. Using identical arguments we find
We proceed now to show that \(\Lambda \) is positive definite. \(\Lambda \) is positive definite if, for any non-zero column vector z with entries a, b and c, we have \(z^{T}\Lambda z>0\). In our case
where we have written \(u_{it}\left( \theta _{0}\right) =u_{it}\) for simplicity. Since \(\zeta ,\) by Assumption A1, is strictly positive, and from Lemma A we have that
then, we need to show that the following term is strictly positive:
Finally notice that since \(\Omega =2\Lambda \zeta ^{-1}=\Lambda ,\) then \( \Omega >0.\) This completes the proof of Proposition 2. \(\square \)
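Numerically, positive definiteness of a matrix of this form can be checked through its eigenvalues (a generic sketch with hypothetical i.i.d. draws standing in for \((u_{1t},u_{2t},u_{3t})\), not the paper's process):

```python
import numpy as np

# A sample second-moment matrix E[u u'] is positive definite whenever the
# components of u are not linearly dependent; positive eigenvalues of the
# symmetric matrix are equivalent to z' Lam z > 0 for all non-zero z.
rng = np.random.default_rng(1)
U = rng.standard_normal((5000, 3))   # hypothetical draws for (u1t, u2t, u3t)
Lam_hat = U.T @ U / len(U)           # sample analogue of E[u u']
eig = np.linalg.eigvalsh(Lam_hat)
print(np.all(eig > 0))
```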
Proposition 3
Define the lower and upper values for each parameter in \(\theta _{0}\) as \( \gamma _{L}<\gamma _{0}<\gamma _{U},w_{L}<w_{0}<w_{U},\) and \(\alpha _{L}<\alpha _{0}<\alpha _{U},\) respectively and the neighborhood \(N\left( \theta _{0}\right) \) around \(\theta _{0}\) as
Under Assumptions A and B, there exists a neighborhood \(N\left( \theta _{0}\right) \) such that, for \(i,j,k=1,2,3,\)
where \(w_{ijkt}\) is stationary. Furthermore, \(\frac{1}{T}\sum _{t=1}^{T}w_{ijkt}\overset{a.s.}{\longrightarrow }E\left( w_{ijkt}\right) <\infty \) for all \(i,j,k.\)
Proof of Proposition 3
Let us start from the components of \(\left| \frac{1}{T}\frac{\partial ^{3}}{\partial \gamma ^{3}}L_{T}\left( \theta \right) \right| \) defined in Result 3 (see Supplementary Appendix). Part I (which is also defined in Result 3) can be written as
where we can define the lower bound for all t, \(y_{L}\le \left| y_{t-1}\right| ,\) \(y_{L}\le \left| y_{t-2}\right| ,\) \(\Lambda _{t-1}=\max \left\{ y_{L}^{2\left| \gamma _{U}-\gamma _{L}\right| },y_{t-1}^{2\left| \gamma _{U}-\gamma _{L}\right| }\right\} \) and \(\Lambda _{t-2}=\max \left\{ 1,\left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| }\right) ^{2\left| \gamma _{U}-\gamma _{L}\right| }\right\} ;\) the result then follows by setting \(2\left| \gamma _{U}-\gamma _{L}\right| =\varphi \) and applying Assumptions A and B together with the law of large numbers (see Jensen and Rahbek (2004a), Lemma 5). Part II also requires Assumption A3, since
Parts III, IV, V and VI follow the same argument.
Along the same lines for \(\left| \frac{1}{T}\frac{\partial ^{3}}{ \partial \alpha ^{3}}L_{T}\left( \theta \right) \right| \)
The rest of the cases follow directly using the same argument. This completes the proof of Proposition 3. \(\square \)
Proof of Theorem 1
Given the conditions provided by Propositions 1–3, Theorem 1 follows from Lumsdaine (1996, pp. 593–595, Theorem 3), the ergodic theorem and Lemma 1, p. 1206 in Jensen and Rahbek (2004b). \(\square \)
Dahl, C.M., Iglesias, E.M. Asymptotic normality of the MLE in the level-effect ARCH model. Stat Papers 62, 117–135 (2021). https://doi.org/10.1007/s00362-019-01086-y