
Asymptotic normality of the MLE in the level-effect ARCH model

  • Regular Article
  • Published in Statistical Papers

Abstract

We establish consistency and asymptotic normality of the maximum likelihood estimator in the level-effect ARCH model of Chan et al. (J Financ 47(3):1209–1227, 1992). Furthermore, simulations show that the asymptotic properties provide a good approximation in finite samples.


Notes

  1. Alternative representations of a level-effect ARCH model may be considered, such as

    $$\begin{aligned} y_{t}= & {} \sigma _{t}z_{t} \\ \sigma _{t}^{2}= & {} w+\alpha y_{t-1}^{2}+\gamma y_{t-1} \end{aligned}$$

    although they are outside the scope of this paper.

  2. If \(\delta \) is estimated, this will affect the asymptotic properties of the estimators of the volatility parameters, but we leave this for future research.

  3. See Brown et al. (1996) and Bouezmarni and Rombouts (2010) for examples showing the relevance of studying positive time series. See also Broze et al. (1995, p. 202) for a discussion of continuous-time models that preclude negative values under some parameter restrictions.

  4. Note that the data generating process (DGP) is assumed to be ergodic. It might be possible to relax this assumption and instead assume only that the DGP is initiated at some fixed value and has an ergodic solution (see e.g. Kristensen and Rahbek 2005; Jensen and Rahbek 2007); we leave that for further research.

  5. We let \(\overset{a.s.}{\longrightarrow }\) denote convergence “almost surely” as \(T\rightarrow \infty .\) Also note that in this case \(\frac{\partial }{ \partial \alpha }L_{T}\left( \theta \right) =-\sum _{t=1}^{T}\frac{1}{2} \left( 1-\frac{{\widetilde{y}}_{t}^{2}}{\sigma _{t}^{2}}\right) \frac{ {\widetilde{y}}_{t-1}^{2}}{\sigma _{t}^{2}};\frac{\partial ^{2}}{\partial \alpha ^{2}}L_{T}\left( \theta _{0}\right) =\frac{1}{2}\sum _{t=1}^{T}\left( 1-2\frac{{\widetilde{y}}_{t}^{2}}{\sigma _{t}^{2}}\right) \frac{{\widetilde{y}} _{t-1}^{4}}{\sigma _{t}^{4}};\frac{\partial ^{3}}{\partial \alpha ^{3}} L_{T}\left( \theta _{0}\right) =-\sum _{t=1}^{T}\left( 1-3\frac{{\widetilde{y}} _{t}^{2}}{\sigma _{t}^{2}}\right) \frac{{\widetilde{y}}_{t-1}^{6}}{\sigma _{t}^{6}},\) as shown in Results 1, 2 and 3 in the Technical Appendix; they correspond to Eqs. (4), (5) and (6) of Jensen and Rahbek (2004a).

  6. In a Supplementary Appendix, available upon request from any of the authors, we provide additional simulation results where \(\delta =\left( a,b\right) ^{\prime }\) is estimated jointly with the variance parameters. Based on those results, we conjecture that when \(\delta \) is also estimated and Assumptions A and B hold, the ML estimator still follows an asymptotically normal distribution.

  7. Note that since \(s_{1t}\left( \theta _{0}\right) \) is a martingale difference sequence, we may use the central limit theorem (CLT) of Billingsley (1961) instead of that of Brown (1971).
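The analytic score in footnote 5 can be cross-checked against a numerical derivative. This is a minimal sketch, assuming the Gaussian quasi log-likelihood \(L_{T}\left( \theta \right) =-\frac{1}{2}\sum _{t}\left( \ln \sigma _{t}^{2}+{\widetilde{y}}_{t}^{2}/\sigma _{t}^{2}\right) \) with \(\sigma _{t}^{2}=w+\alpha {\widetilde{y}}_{t-1}^{2}\); the data series and parameter values below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)
y_tilde = rng.standard_normal(200)   # illustrative stand-in for the scaled series
w0, alpha0 = 0.1, 0.3                # illustrative parameter values

def loglik(alpha):
    # Gaussian quasi log-likelihood with sigma_t^2 = w + alpha * ytilde_{t-1}^2
    sig2 = w0 + alpha * y_tilde[:-1] ** 2
    return -0.5 * np.sum(np.log(sig2) + y_tilde[1:] ** 2 / sig2)

def score_alpha(alpha):
    # analytic first derivative in alpha, as in footnote 5
    sig2 = w0 + alpha * y_tilde[:-1] ** 2
    return -0.5 * np.sum((1.0 - y_tilde[1:] ** 2 / sig2) * y_tilde[:-1] ** 2 / sig2)

h = 1e-6
fd = (loglik(alpha0 + h) - loglik(alpha0 - h)) / (2 * h)   # central difference
print(abs(fd - score_alpha(alpha0)))
```

The analytic and finite-difference values agree up to finite-difference error, a quick guard against sign or factor mistakes when coding such scores.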

References

  • Andersen TG, Lund J (1997) Estimating continuous time stochastic volatility models of the short term interest rate. J Econom 77:343–377


  • Ball CA, Torous WN (1999) The stochastic volatility of short-term interest rates: some international evidence. J Financ 54(6):2339–2359


  • Berkes I, Horváth L (2004) The efficiency of the estimators of the parameters in GARCH processes. Ann Stat 32(2):633–655


  • Billingsley P (1961) The Lindeberg–Lévy theorem for martingales. Proc Am Math Soc 12:788–792


  • Bouezmarni T, Rombouts JVK (2010) Nonparametric density estimation for positive time series. Comput Stat Data Anal 54(2):245–261


  • Brenner R, Harjes R, Kroner K (1996) Another look at models of the short term interest rates. J Financ Quant Anal 31:85–107


  • Brown BM (1971) Martingale central limit theorems. Ann Math Stat 42:59–66


  • Brown TC, Feigin PD, Pallant DL (1996) Estimation for a class of positive nonlinear time series models. Stoch Process Appl 63(2):139–152


  • Broze L, Scaillet O, Zakoïan JM (1995) Testing for continuous time models of the short term interest rate. J Empir Financ 2:199–223


  • Bu R, Chen J, Hadri K (2017) Specification analysis in regime-switching continuous-time diffusion models for market volatility. Stud Nonlinear Dyn Econom 21:1


  • Chan KC, Karolyi GA, Longstaff F, Sanders A (1992) An empirical comparison of alternative models of short term interest rates. J Financ 47(3):1209–1227


  • Dias-Curto J, Castro-Pinto J, Nuno-Tavares G (2009) Modeling stock markets’ volatility using GARCH models with normal, Student’s t and stable Paretian distributions. Stat Pap 50:311–321


  • Engle RF (1982) Autoregressive conditional heteroskedasticity with estimates of the variance of United Kingdom inflation. Econometrica 50:987–1007


  • Fornari F, Mele A (2006) Approximating volatility diffusions with CEV-ARCH models. J Econ Dyn Control 30(6):931–966


  • Francq C, Wintenberger O, Zakoïan J-M (2018) Goodness-of-fit tests for Log-GARCH and EGARCH models. TEST 27(1):27–51


  • Frydman H (1994) Asymptotic inference for the parameters of a discrete-time square-root process. Math Financ 4(2):169–181


  • Hamadeh T, Zakoïan J-M (2011) Asymptotic properties of LS and QML estimators for a class of nonlinear GARCH processes. J Stat Plan Inference 141(1):488–507


  • Han H, Zhang S (2009) Nonstationary semiparametric ARCH models. Manuscript, Department of Economics, National University of Singapore

  • Jensen ST, Rahbek A (2004a) Asymptotic normality of the QML estimator of ARCH in the nonstationary case. Econometrica 72(2):641–646


  • Jensen ST, Rahbek A (2004b) Asymptotic inference for nonstationary GARCH. Econ Theory 20(6):1203–1226


  • Jensen ST, Rahbek A (2007) On the law of large numbers for (geometrically) Ergodic Markov chains. Econ Theory 23:761–766


  • Klüppelberg C, Lindner A, Maller R (2004) A continuous-time GARCH process driven by a Lévy process: stationarity and second-order behaviour. J Appl Probab 41:601–622


  • Kristensen D, Rahbek A (2005) Asymptotics of the QMLE for a class of ARCH(q) models. Econ Theory 21:946–961


  • Kristensen D, Rahbek A (2008) Asymptotics of the QMLE for non-linear ARCH models. J Time Series Econ 1(1), Article 2

  • Lehmann EL (1999) Elements of large sample theory. Springer, New York


  • Ling S (2004) Estimation and testing stationarity for double-autoregressive models. J R Stat Soc B 66(1):63–78


  • Lumsdaine RL (1996) Asymptotic properties of the maximum likelihood estimator in GARCH(1,1) and IGARCH(1,1) models. Econometrica 64(3):575–596


  • Maheu JM, Yang Q (2016) An infinite hidden Markov model for short-term interest rates. J Empir Financ 38:202–220


  • Pedersen RS, Rahbek A (2016) Nonstationary GARCH with t-distributed innovations. Econ Lett 138:19–21


  • Straumann D, Mikosch T (2006) Quasi-maximum-likelihood estimation in conditionally heteroscedastic time series: a stochastic recurrence equations approach. Ann Stat 34(5):2449–2495


  • Triffi A (2006) Issues of aggregation over time of conditional heteroskedastic volatility models: what type of diffusion do we recover? Stud Nonlinear Dyn Econom 10:4


  • Wang Q, Phillips PCB (2009a) Asymptotic theory for local time density estimation and nonparametric cointegrating regression. Econ Theory 25:1–29


  • Wang Q, Phillips PCB (2009b) Structural nonparametric cointegrating regression. Econometrica 77(6):1901–1948


  • White H (1984) Asymptotic theory for econometricians. Academic Press, New York



Author information

Correspondence to Emma M. Iglesias.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

We wish to thank the Co-Editor and two referees for very helpful comments. E. M. Iglesias is very grateful for the financial support from the Spanish Ministry of Science and Innovation, Project ECO2015-63845-P. Webpage: www.gcd.udc.es/emma.

Electronic supplementary material


Supplementary material 1 (pdf 93 KB)

Appendix

The analytical expressions for the first-, second- and third-order derivatives of the quasi log-likelihood function are given in a Supplementary Appendix available upon request from any of the authors. We now provide three propositions needed to prove Theorem 1. The proof technique for the MLE uses the classic Cramér-type conditions for consistency and asymptotic normality (a central limit theorem for the score, convergence of the Hessian and uniformly bounded third-order derivatives); see e.g. Lehmann (1999).
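For concreteness, a path of a level-effect ARCH process can be simulated as follows. This is a minimal sketch assuming the parameterisation \(y_{t}=\left| y_{t-1}\right| ^{\gamma }\sigma _{t}z_{t}\), \(\sigma _{t}^{2}=w+\alpha {\widetilde{y}}_{t-1}^{2}\) with \({\widetilde{y}}_{t}=y_{t}/\left| y_{t-1}\right| ^{\gamma }\); the parameter values and initialisation are illustrative, and the exact specification is the one given in the paper:

```python
import numpy as np

rng = np.random.default_rng(42)
w0, alpha0, gamma0 = 0.1, 0.3, 0.5    # illustrative parameter values
T, burn = 2000, 500

y = np.empty(T + burn)
y[0] = y[1] = 1.0                      # fixed, strictly positive initial values
z = rng.standard_normal(T + burn)
for t in range(2, T + burn):
    y_tilde = y[t - 1] / np.abs(y[t - 2]) ** gamma0   # level-adjusted lag
    sigma2 = w0 + alpha0 * y_tilde ** 2               # ARCH recursion for the scaled process
    y[t] = np.abs(y[t - 1]) ** gamma0 * np.sqrt(sigma2) * z[t]
y = y[burn:]
print(np.isfinite(y).all(), round(y.std(), 3))
```

With \(\alpha <1\) the scaled process \({\widetilde{y}}_{t}\) is a standard stationary ARCH(1), and \(\ln \left| y_{t}\right| \) follows an AR(1) with coefficient \(\gamma <1\), so the simulated path remains stable.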

Proposition 1

Let \(u_{jt}\left( \theta _{0}\right) \) be defined as in Theorem 1. Under Assumptions A and B, the joint distribution of the score functions evaluated at \(\theta =\theta _{0}\) is asymptotically Gaussian,

$$\begin{aligned} \frac{1}{\sqrt{T}}\frac{\partial }{\partial \theta }L_{T}\left( \theta _{0}\right) \overset{d}{\rightarrow }N\left( 0,\Lambda \right) , \end{aligned}$$

where

$$\begin{aligned} \Lambda =\zeta \left[ \begin{array}{ccc} {\overline{m}}_{11} &{} \frac{1}{2}{\overline{m}}_{12} &{} \frac{1}{2}{\overline{m}} _{13} \\ \frac{1}{2}{\overline{m}}_{12} &{} \frac{1}{4}{\overline{m}}_{22} &{} \frac{1}{4} {\overline{m}}_{23} \\ \frac{1}{2}{\overline{m}}_{13} &{} \frac{1}{4}{\overline{m}}_{23} &{} \frac{1}{4} {\overline{m}}_{33} \end{array} \right] >0, \end{aligned}$$

and \({\overline{m}}_{ij}=E\left( u_{it}\left( \theta _{0}\right) u_{jt}\left( \theta _{0}\right) \right) \) for \(i=1,2,3\) and \(j=1,2,3\).

Proof of Proposition 1

For the proof of Proposition 1, we first need the following two lemmas. \(\square \)

Lemma A

Let Assumptions A and B hold and define \(u_{1t}\left( \theta _{0}\right) =\ln \left| y_{t-1}\right| -w_{0}\left( \frac{1}{w_{0}}-\frac{1}{\sigma _{t}^{2}\left( \theta _{0}\right) }\right) \ln \left| y_{t-2}\right| ,\) \(u_{2t}\left( \theta _{0}\right) =\frac{1}{\sigma _{t}^{2}\left( \theta _{0}\right) }\) and \(u_{3t}\left( \theta _{0}\right) =\left( \frac{w_{0}}{\alpha _{0}}\right) \left( \frac{1}{w_{0}}-\frac{1}{\sigma _{t}^{2}\left( \theta _{0}\right) }\right) \). Then \(u_{it}\left( \theta _{0}\right) \) is a stationary and ergodic sequence. In addition \(\frac{1}{T} \sum _{t=1}^{T}u_{it}\left( \theta _{0}\right) \overset{p}{\rightarrow } E\left( u_{it}\left( \theta _{0}\right) \right) \equiv {\overline{u}}_{i}\) and \(\frac{1}{T}\sum _{t=1}^{T}u_{it}^{2}\left( \theta _{0}\right) \overset{p}{ \rightarrow }E\left( u_{it}^{2}\left( \theta _{0}\right) \right) \equiv {\overline{m}}_{ii}\) for \(i=1,2,3.\)

Proof of Lemma A

Define \(I_{t}=\{y_{t},z_{t},y_{t-1},z_{t-1},y_{t-2},z_{t-2},\ldots \}.\) Note first that

$$\begin{aligned} \left| u_{1t}\left( \theta _{0}\right) \right| \le \left| \ln \left| y_{t-1}\right| \right| +w_{0}\left| \ln \left| y_{t-2}\right| \right| \left( \frac{1}{w_{0}}+\frac{1}{\sigma _{t}^{2}\left( \theta _{0}\right) }\right) \le \left| \ln \left| y_{t-1}\right| \right| +2\left| \ln \left| y_{t-2}\right| \right| , \end{aligned}$$

hence

$$\begin{aligned} E\left| u_{1t}\left( \theta _{0}\right) \right| \le 3E\left| \ln \left| y_{t}\right| \right| <\infty , \end{aligned}$$

where we have used Assumptions A and B; the last inequality follows from A3, under which the first two moments of \(\ln \left| y_{t}\right| \) are assumed to be bounded. Hence we can write

$$\begin{aligned} u_{1t}\left( \theta _{0}\right) \equiv g_{1}\left( y_{t-1},y_{t-2},\sigma _{t}^{2}\left( \theta _{0}\right) \right) , \end{aligned}$$

where \(g_{1}\) is an \(I_{t}\)-measurable function and where all arguments \( y_{t-1},y_{t-2}\) and \(\sigma _{t}^{2}\left( \theta _{0}\right) \) are stationary and ergodic as a consequence of Lemmas 1 and 2. This implies that \(u_{1t}\left( \theta _{0}\right) \) is stationary and ergodic by Theorem 3.35 in White (1984). Consequently \(\frac{1}{T}\sum _{t=1}^{T}u_{1t}\left( \theta _{0}\right) \overset{p}{\rightarrow }E\left( u_{1t}\left( \theta _{0}\right) \right) \) follows by the ergodic theorem. Similarly, it follows straightforwardly that \(E\left| u_{2t}\left( \theta _{0}\right) \right| \le \frac{1}{w_{0}}\) and \(E\left| u_{3t}\left( \theta _{0}\right) \right| \le \frac{2}{\alpha _{0}}.\) We can write \(u_{2t}\left( \theta _{0}\right) \equiv g_{2}\left( \sigma _{t}^{2}\left( \theta _{0}\right) \right) \) and \(u_{3t}\left( \theta _{0}\right) \equiv g_{3}\left( \sigma _{t}^{2}\left( \theta _{0}\right) \right) \) and, as above, conclude that \((u_{2t}\left( \theta _{0}\right) ,u_{3t}\left( \theta _{0}\right) )\) is stationary and ergodic, and hence \( \frac{1}{T}\sum _{t=1}^{T}u_{it}\left( \theta _{0}\right) \overset{p}{ \rightarrow }E\left( u_{it}\left( \theta _{0}\right) \right) \) for \(i=2,3\). Second, notice that

$$\begin{aligned} \left| u_{1t}^{2}\left( \theta _{0}\right) \right|= & {} |\ln ^{2}\left| y_{t-1}\right| -2w_{0}\left( \frac{1}{w_{0}}-\frac{1}{ \sigma _{t}^{2}\left( \theta _{0}\right) }\right) \ln \left| y_{t-2}\right| \ln \left| y_{t-1}\right| \\&\quad +\left( \ln \left| y_{t-2}\right| \right) ^{2}w_{0}^{2}\left( \frac{1}{w_{0}}-\frac{1}{ \sigma _{t}^{2}\left( \theta _{0}\right) }\right) ^{2}| \\\le & {} \ln ^{2}\left| y_{t-1}\right| +\frac{4w_{0}^{2}}{\sigma _{t}^{4}\left( \theta _{0}\right) }\ln ^{2}\left| y_{t-2}\right| +4 \frac{w_{0}}{\sigma _{t}^{2}\left( \theta _{0}\right) }\left| \ln \left| y_{t-1}\right| \ln \left| y_{t-2}\right| \right| \\\le & {} \ln ^{2}\left| y_{t-1}\right| +4\ln ^{2}\left| y_{t-2}\right| +4\left| \ln \left| y_{t-1}\right| \ln \left| y_{t-2}\right| \right| , \end{aligned}$$

such that

$$\begin{aligned} E\left| u_{1t}^{2}\left( \theta _{0}\right) \right| \le 5E\left( (\ln \left| y_{t}\right| )^{2}\right) +4E\left| \ln \left| y_{t}\right| \ln \left| y_{t-1}\right| \right| <\infty . \end{aligned}$$

On the right-hand side, the first inequality uses Lemmas 1 and 2 and the second inequality follows from A3 (existence of second-order moments). In addition, \(E\left| u_{2t}^{2}\left( \theta _{0}\right) \right| \le \frac{1}{w_{0}^{2}}\) and \(E\left| u_{3t}^{2}\left( \theta _{0}\right) \right| \le \frac{4}{\alpha _{0}^{2}}.\) We can therefore conclude, by Theorem 3.35 in White (1984), that since \(u_{it}\left( \theta _{0}\right) \) is stationary and ergodic then so is \(u_{it}^{2}\left( \theta _{0}\right) \) for \(i=1,2,3.\) Furthermore, as \(E|u_{it}^{2}\left( \theta _{0}\right) |\) is bounded, \( \frac{1}{T}\sum _{t=1}^{T}u_{it}^{2}\left( \theta _{0}\right) \overset{p}{ \rightarrow }E\left( u_{it}^{2}\left( \theta _{0}\right) \right) \) for \( i=1,2,3\) follows from the ergodic theorem. This completes the proof of Lemma A. \(\square \)
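The ergodic-theorem step can be illustrated numerically. A minimal sketch, using a plain ARCH(1) recursion for \({\widetilde{y}}_{t}\) as an illustrative stand-in for the paper's process: the sample average of \(u_{2t}=1/\sigma _{t}^{2}\) stabilises across sample sizes and respects the bound \(1/w_{0}\):

```python
import numpy as np

rng = np.random.default_rng(7)
w0, alpha0 = 0.1, 0.3   # illustrative parameter values

def avg_u2(T):
    """Ergodic average of u_{2t} = 1/sigma_t^2 along one simulated ARCH(1) path."""
    y_tilde, s = 0.0, 0.0
    for _ in range(T):
        sigma2 = w0 + alpha0 * y_tilde ** 2
        s += 1.0 / sigma2                    # u_{2t}, bounded above by 1/w0
        y_tilde = np.sqrt(sigma2) * rng.standard_normal()
    return s / T

a1, a2 = avg_u2(2000), avg_u2(20000)
print(a1, a2)   # close to each other; both bounded by 1/w0 = 10
```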

Lemma B

Under Assumptions A and B, the marginal distributions of the score functions given by Eqs. (9)–(11) evaluated at \(\theta =\theta _{0}\) are asymptotically Gaussian,

$$\begin{aligned} \frac{1}{\sqrt{T}}\frac{\partial }{\partial \gamma }L_{T}\left( \theta _{0}\right)= & {} \frac{-1}{\sqrt{T}}\sum _{t=1}^{T}\left( 1-z_{t}^{2}\right) u_{1t}\left( \theta _{0}\right) \overset{d}{\rightarrow }N\left( 0,\zeta {\overline{m}}_{11}\right) , \end{aligned}$$
(4)
$$\begin{aligned} \frac{1}{\sqrt{T}}\frac{\partial }{\partial w}L_{T}\left( \theta _{0}\right)= & {} \frac{-1}{\sqrt{T}}\sum _{t=1}^{T}\frac{1}{2}\left( 1-z_{t}^{2}\right) u_{2t}\left( \theta _{0}\right) \overset{d}{\rightarrow }N\left( 0,\frac{\zeta }{4} {\overline{m}}_{22}\right) , \end{aligned}$$
(5)
$$\begin{aligned} \frac{1}{\sqrt{T}}\frac{\partial }{\partial \alpha }L_{T}\left( \theta _{0}\right)= & {} \frac{-1}{\sqrt{T}}\sum _{t=1}^{T}\frac{1}{2}\left( 1-z_{t}^{2}\right) u_{3t}\left( \theta _{0}\right) \overset{d}{\rightarrow } N\left( 0,\frac{\zeta }{4}{\overline{m}}_{33}\right) , \end{aligned}$$
(6)

where \({\overline{m}}_{ii},\) \(i=1,2,3,\) and \(\zeta \) are defined in Lemma A and Assumption A3, respectively.

Proof of Lemma B

We will prove (4) in detail. The results in (5) and (6) hold by identical arguments. Define again \(I_{t}= \{y_{t},z_{t},y_{t-1},z_{t-1},y_{t-2},z_{t-2},\ldots \}\) and recall from Result 1 that

$$\begin{aligned} s_{1t}\left( \theta _{0}\right) =-\left( 1-z_{t}^{2}\right) u_{1t}\left( \theta _{0}\right) . \end{aligned}$$

Consequently

$$\begin{aligned} E\left( s_{1t}|I_{t-1}\right)= & {} -E\left( \left( 1-z_{t}^{2}\right) u_{1t}\left( \theta _{0}\right) |I_{t-1}\right) =-E\left( \left( 1-z_{t}^{2}\right) \right) u_{1t}\left( \theta _{0}\right) \nonumber \\= & {} 0. \end{aligned}$$
(7)

Since \(\{ s_{1t},I_{t}\} \) is an adapted stochastic sequence, the result in (7) implies that \(\{s_{1t},I_{t}\} \) is a martingale difference sequence according to Definition 3.75 in White (1984). Further, notice that

$$\begin{aligned} V_{1T}^{2}\left( \theta _{0}\right) =\sum _{t=1}^{T}E\left( s_{1t}^{2}\left( \theta _{0}\right) |I_{t-1}\right) =\sum _{t=1}^{T}E\left( \left( 1-z_{t}^{2}\right) ^{2}\right) u_{1t}^{2}\left( \theta _{0}\right) =\zeta \sum _{t=1}^{T}u_{1t}^{2}\left( \theta _{0}\right) . \end{aligned}$$

Hence,

$$\begin{aligned} E(V_{1T}^{2}\left( \theta _{0}\right) )=\zeta \sum _{t=1}^{T}E\left( u_{1t}^{2}\left( \theta _{0}\right) \right) =T\zeta {\overline{m}}_{11}. \end{aligned}$$

Furthermore, according to Lemma A we have that

$$\begin{aligned} \frac{1}{T}\sum _{t=1}^{T}u_{1t}^{2}\left( \theta _{0}\right) \overset{p}{ \rightarrow }{\overline{m}}_{11}, \end{aligned}$$

implying that

$$\begin{aligned} \frac{1}{T}V_{1T}^{2}\left( \theta _{0}\right) \overset{p}{\rightarrow } \zeta {\overline{m}}_{11}. \end{aligned}$$

From this we see that

$$\begin{aligned} \left( V_{1T}^{2}\left( \theta _{0}\right) \right) \left( E(V_{1T}^{2}\left( \theta _{0}\right) )\right) ^{-1}\overset{p}{\rightarrow }1. \end{aligned}$$
(8)

Importantly, the result given by Eq. (8) corresponds to Condition (1), p. 60, in Brown (1971); see Note 7.

Finally, we need to verify the Lindeberg-type condition, Condition (2) in Brown (1971). In particular, we need to show that

$$\begin{aligned} \left( E(V_{1T}^{2}\left( \theta _{0}\right) )\right) ^{-1}\sum _{t=1}^{T}E\left( s_{1t}^{2}\left( \theta _{0}\right) 1\left\{ \left| s_{1t}\left( \theta _{0}\right) \right| >\epsilon \sqrt{ E(V_{1T}^{2}\left( \theta _{0}\right) )}\right\} \right) \overset{p}{ \rightarrow }0, \end{aligned}$$

for all \(\epsilon >0.\) By inserting the expression for \(s_{1t}^{2}\) and \( E(V_{1T}^{2}\left( \theta _{0}\right) )\) we get

$$\begin{aligned} \underset{T\rightarrow \infty }{\lim }\frac{1}{T\zeta {\overline{m}}_{11}} \sum _{t=1}^{T}E\left( s_{1t}^{2}\left( \theta _{0}\right) 1\left\{ \left| s_{1t}\left( \theta _{0}\right) \right| >\epsilon \sqrt{T\zeta {\overline{m}}_{11}}\right\} \right) =\underset{T\rightarrow \infty }{\lim }\frac{1}{\zeta {\overline{m}}_{11}}E\left( \left( 1-z_{t}^{2}\right) ^{2}u_{1t}^{2}\left( \theta _{0}\right) 1\left\{ \left| \left( 1-z_{t}^{2}\right) u_{1t}\left( \theta _{0}\right) \right| >\epsilon \sqrt{T\zeta {\overline{m}}_{11}}\right\} \right) =0, \end{aligned}$$

for all \(\epsilon >0\) because, from Lemma A and A1, \( u_{1t}^{2}\left( \theta _{0}\right) \) and \(z_{t}^{2}\) have finite moments and are stationary and ergodic. Consequently, the Lindeberg condition holds.

According to Theorem 2, p. 60, in Brown (1971) we can therefore conclude that

$$\begin{aligned} \frac{1}{\sqrt{T\zeta {\overline{m}}_{11}}}\sum _{t=1}^{T}s_{1t}\left( \theta _{0}\right) \overset{d}{\rightarrow }N(0,1), \end{aligned}$$

which completes the proof. \(\square \)

Along the same lines

$$\begin{aligned} \frac{1}{T}\sum _{t=1}^{T}E\left( s_{2t}^{2}\mid I_{t-1}\right)= & {} \frac{1}{T} \sum _{t=1}^{T}\frac{\zeta }{4}\frac{1}{\left( w_{0}+\alpha _{0}\left( \frac{ y_{t-1}^{*}}{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\right) ^{2}}\overset{p}{\longrightarrow }\frac{\zeta }{4w_{0}^{2}}>0, \\ \frac{1}{T}\sum _{t=1}^{T}E\left( s_{3t}^{2}\mid I_{t-1}\right)= & {} \frac{1}{T} \sum _{t=1}^{T}\frac{\zeta }{4}\frac{\left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| ^{\gamma }}\right) ^{4}}{\left( w_{0}+\alpha _{0}\left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\right) ^{2}}\overset{p}{\longrightarrow }\frac{\zeta }{4\alpha _{0}^{2}}>0. \end{aligned}$$

and

$$\begin{aligned}&\frac{1}{T}\sum _{t=1}^{T}E\left( s_{2t}^{2}1\left\{ \left| s_{2t}\right|>\sqrt{T}\delta \right\} \right) \\&\quad \le E\left( \left( \frac{ \left( 1-z_{t}^{2}\right) ^{2}}{4w_{0}^{2}}\right) 1\left\{ \left| \frac{ \left( 1-z_{t}^{2}\right) }{2w_{0}}\right|>\sqrt{T}\delta \right\} \right) \rightarrow 0, \\&\frac{1}{T}\sum _{t=1}^{T}E\left( s_{3t}^{2}1\left\{ \left| s_{3t}\right|>\sqrt{T}\delta \right\} \right) \\&\quad \le E\left( \left( \frac{ \left( 1-z_{t}^{2}\right) ^{2}}{4\alpha _{0}^{2}}\right) 1\left\{ \left| \frac{\left( 1-z_{t}^{2}\right) }{2\alpha _{0}}\right| >\sqrt{T}\delta \right\} \right) \rightarrow 0, \end{aligned}$$

for all \(\delta >0\) as \(T\) tends to \(\infty \). \(\square \)
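Brown's CLT scaling can be checked by Monte Carlo: across replications, the variance of the standardized score should match \(\zeta \) times the average of the squared predictable weights. A minimal sketch with Gaussian innovations (so \(\zeta =2\)) and an illustrative ARCH(1) stand-in for the volatility recursion:

```python
import numpy as np

rng = np.random.default_rng(3)
w0, alpha0 = 0.1, 0.3     # illustrative parameter values
T, R = 500, 2000
zeta = 2.0                # E(1 - z_t^2)^2 = 2 for standard Gaussian z_t

stats = np.empty(R)
m22_hat = 0.0
for r in range(R):
    z = rng.standard_normal(T)
    y_tilde, s, usq = 0.0, 0.0, 0.0
    for t in range(T):
        sigma2 = w0 + alpha0 * y_tilde ** 2
        u = 1.0 / sigma2                  # predictable weight u_{2t}
        s += -(1.0 - z[t] ** 2) * u       # martingale-difference score term
        usq += u * u
        y_tilde = np.sqrt(sigma2) * z[t]
    stats[r] = s / np.sqrt(T)             # standardized score, one replication
    m22_hat += usq / (T * R)
print(stats.var(), zeta * m22_hat)        # the two should be close
```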

Proof of Proposition 1

In order to fully characterize the asymptotic distribution, we need to determine the off-diagonal elements of the variance–covariance matrix of the score vector, \(\Lambda .\) In particular, because \(u_{1t}\left( \theta _{0}\right) ,u_{2t}\left( \theta _{0}\right) \) and \(u_{3t}\left( \theta _{0}\right) \) are all stationary and ergodic with finite second moments (from Lemma A), it follows straightforwardly that

$$\begin{aligned} \frac{1}{T}\sum _{t=1}^{T}s_{1t}\left( \theta _{0}\right) s_{2t}\left( \theta _{0}\right)= & {} \frac{1}{T}\sum _{t=1}^{T}\frac{1}{2}\left( 1-z_{t}^{2}\right) ^{2}u_{1t}\left( \theta _{0}\right) u_{2t}\left( \theta _{0}\right) \overset{ p}{\rightarrow }\frac{1}{2}\zeta {\overline{m}}_{12}, \\ \frac{1}{T}\sum _{t=1}^{T}s_{1t}\left( \theta _{0}\right) s_{3t}\left( \theta _{0}\right)= & {} \frac{1}{T}\sum _{t=1}^{T}\frac{1}{2}\left( 1-z_{t}^{2}\right) ^{2}u_{1t}\left( \theta _{0}\right) u_{3t}\left( \theta _{0}\right) \overset{ p}{\rightarrow }\frac{1}{2}\zeta {\overline{m}}_{13}, \\ \frac{1}{T}\sum _{t=1}^{T}s_{2t}\left( \theta _{0}\right) s_{3t}\left( \theta _{0}\right)= & {} \frac{1}{T}\sum _{t=1}^{T}\frac{1}{4}\left( 1-z_{t}^{2}\right) ^{2}u_{2t}\left( \theta _{0}\right) u_{3t}\left( \theta _{0}\right) \overset{ p}{\rightarrow }\frac{1}{4}\zeta {\overline{m}}_{23}. \end{aligned}$$

Since all the elements in the score vector are asymptotically normal (see Lemma B), the result follows directly from an application of the Cramér–Wold device, see for example Proposition 5.1 in White (1984), which completes the proof. \(\square \)
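The convergence of the cross products of the scores can also be checked by simulation. A minimal sketch for the \((2,3)\) element only, using the identity \(u_{3t}=\left( w_{0}/\alpha _{0}\right) \left( 1/w_{0}-u_{2t}\right) \), Gaussian innovations and illustrative parameter values:

```python
import numpy as np

rng = np.random.default_rng(11)
w0, alpha0 = 0.1, 0.3     # illustrative parameter values
T = 200000
zeta = 2.0                # E(1 - z_t^2)^2 for Gaussian innovations

y_tilde, acc23, m23 = 0.0, 0.0, 0.0
for t in range(T):
    sigma2 = w0 + alpha0 * y_tilde ** 2
    z = rng.standard_normal()
    u2 = 1.0 / sigma2
    u3 = (w0 / alpha0) * (1.0 / w0 - u2)   # identity linking u_3 and u_2
    s2 = -0.5 * (1.0 - z ** 2) * u2        # score terms for w and alpha
    s3 = -0.5 * (1.0 - z ** 2) * u3
    acc23 += s2 * s3
    m23 += u2 * u3
    y_tilde = np.sqrt(sigma2) * z
print(acc23 / T, 0.25 * zeta * m23 / T)    # sample cross moment vs (zeta/4) * m23
```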

Proposition 2

Let \(u_{jt}\left( \theta _{0}\right) \) be defined as in Theorem 1. Under Assumptions A and B, the observed information evaluated at \(\theta =\theta _{0}\) converges in probability, i.e.,

$$\begin{aligned} -\frac{1}{T}\frac{\partial ^{2}}{\partial \theta \partial \theta ^{\prime }} L_{T}\left( \theta _{0}\right) \overset{p}{\rightarrow }\Omega , \end{aligned}$$

where

$$\begin{aligned} \Omega =\left[ \begin{array}{ccc} 2{\overline{m}}_{11} &{} {\overline{m}}_{12} &{} {\overline{m}}_{13} \\ {\overline{m}}_{12} &{} \frac{1}{2}{\overline{m}}_{22} &{} \frac{1}{2}{\overline{m}} _{23} \\ {\overline{m}}_{13} &{} \frac{1}{2}{\overline{m}}_{23} &{} \frac{1}{2}{\overline{m}} _{33} \end{array} \right] >0, \end{aligned}$$

and \({\overline{m}}_{ij}=E\left( u_{it}\left( \theta _{0}\right) u_{jt}\left( \theta _{0}\right) \right) \) for \(i=1,2,3\) and \(j=1,2,3\).

Proof of Proposition 2

Recall from Result 2 (see Supplementary Appendix) that

$$\begin{aligned} -\frac{1}{T}\frac{\partial ^{2}}{\partial \gamma ^{2}}L_{T}\left( \theta _{0}\right)= & {} 2\frac{1}{T}\sum _{t=1}^{T}z_{t}^{2}u_{1t}^{2}\left( \theta _{0}\right) -\frac{2}{T}\sum _{t=1}^{T}\left( 1-z_{t}^{2}\right) \left( \ln \left| y_{t-2}\right| \right) ^{2}w_{0}^{2}\left( \frac{1}{w_{0}}- \frac{1}{\sigma _{t}^{2}\left( \theta _{0}\right) }\right) ^{2} \\& {} +\frac{2}{T}\sum _{t=1}^{T}\left( 1-z_{t}^{2}\right) w_{0}\left( \frac{1}{ w_{0}}-\frac{1}{\sigma _{t}^{2}\left( \theta _{0}\right) }\right) \left( \ln \left| y_{t-2}\right| \right) ^{2}. \end{aligned}$$

Since \(z_{t}^{2}\) and \(u_{1t}^{2}\left( \theta _{0}\right) \) are independent, the first term on the right-hand side converges to \(2{\overline{m}}_{11}\) by Lemma A. Furthermore, since \(\left( \ln \left| y_{t-2}\right| \right) ^{2}w_{0}^{2}\left( \frac{1}{w_{0}}-\frac{1}{ \sigma _{t}^{2}\left( \theta _{0}\right) }\right) ^{2}\) and \(w_{0}\left( \frac{1}{w_{0}}-\frac{1}{\sigma _{t}^{2}\left( \theta _{0}\right) }\right) \left( \ln \left| y_{t-2}\right| \right) ^{2}\) have bounded moments, they are stationary and ergodic, and since \(E\left( 1-z_{t}^{2}\right) =0,\) it follows from the ergodic theorem that the last two terms on the right-hand side converge in probability to zero. Therefore, the result follows. Using identical arguments we find

$$\begin{aligned} -\frac{1}{T}\frac{\partial ^{2}}{\partial w^{2}}L_{T}\left( \theta _{0}\right)= & {} -\frac{1}{2}\frac{1}{T}\sum _{t=1}^{T}\left( 1-2z_{t}^{2}\right) u_{2t}^{2}\left( \theta _{0}\right) \overset{p}{\rightarrow }\frac{1}{2} {\overline{m}}_{22}, \\ -\frac{1}{T}\frac{\partial ^{2}}{\partial \alpha ^{2}}L_{T}\left( \theta _{0}\right)= & {} -\frac{1}{2}\frac{1}{T}\sum _{t=1}^{T}\left( 1-2z_{t}^{2}\right) u_{3t}^{2}\left( \theta _{0}\right) \overset{p}{\rightarrow }\frac{1}{2} {\overline{m}}_{33}, \\ -\frac{1}{T}\frac{\partial ^{2}}{\partial \gamma \partial w}L_{T}\left( \theta _{0}\right)= & {} \frac{1}{T}\sum _{t=1}^{T}z_{t}^{2}u_{1t}\left( \theta _{0}\right) u_{2t}\left( \theta _{0}\right) \\& {} +\frac{1}{T}\sum _{t=1}^{T}\left( 1-z_{t}^{2}\right) \left( \ln \left| y_{t-2}\right| \right) \left( \frac{w_{0}}{\sigma _{t}^{2}\left( \theta _{0}\right) }\right) \left( \frac{1}{w_{0}}-\frac{1}{\sigma _{t}^{2}\left( \theta _{0}\right) }\right) \overset{p}{\rightarrow }{\overline{m}}_{12}, \\ -\frac{1}{T}\frac{\partial ^{2}}{\partial \gamma \partial \alpha }L_{T}\left( \theta _{0}\right)= & {} \frac{1}{T}\sum _{t=1}^{T}z_{t}^{2}u_{1t}\left( \theta _{0}\right) u_{3t}\left( \theta _{0}\right) \\& {} -\frac{1}{T}\sum _{t=1}^{T}\left( 1-z_{t}^{2}\right) w_{0}\left( \ln \left| y_{t-2}\right| \right) \left( \frac{w_{0}}{\alpha _{0}}\right) \left( \frac{1}{w_{0}}-\frac{1}{\sigma _{t}^{2}\left( \theta _{0}\right) }\right) ^{2}\overset{p}{\rightarrow }{\overline{m}}_{13}, \\ -\frac{1}{T}\frac{\partial ^{2}}{\partial w\partial \alpha }L_{T}\left( \theta _{0}\right)= & {} -\frac{1}{2}\frac{1}{T}\sum _{t=1}^{T}\left( 1-2z_{t}^{2}\right) u_{2t}\left( \theta _{0}\right) u_{3t}\left( \theta _{0}\right) \overset{p}{\rightarrow }\frac{1}{2}{\overline{m}}_{23}. \end{aligned}$$

We now show that \(\Lambda \) is positive definite, i.e., that \(z^{T}\Lambda z>0\) for any non-zero column vector \(z\) with entries \(a\), \(b\) and \(c\). In our case

$$\begin{aligned} z^{T}\Lambda z= & {} \zeta \left( \begin{array}{c} aE\left( u_{1t}^{2}\right) +\frac{b}{2}E\left( u_{1t}u_{2t}\right) +\frac{c}{ 2}E\left( u_{1t}u_{3t}\right) \\ \frac{a}{2}E\left( u_{1t}u_{2t}\right) +\frac{b}{4}E\left( u_{2t}^{2}\right) +\frac{c}{4}E\left( u_{2t}u_{3t}\right) \\ \frac{a}{2}E\left( u_{1t}u_{3t}\right) +\frac{b}{4}E\left( u_{2t}u_{3t}\right) +\frac{c}{4}E\left( u_{3t}^{2}\right) \end{array} \right) ^{T}\left( \begin{array}{c} a \\ b \\ c \end{array} \right) \\= & {} \zeta \Bigg [ a^{2}E\left( u_{1t}^{2}\right) +\frac{ab}{2}E\left( u_{1t}u_{2t}\right) +\frac{ac}{2}E\left( u_{1t}u_{3t}\right) +\frac{ab}{2} E\left( u_{1t}u_{2t}\right) \\&+\,\frac{b^{2}}{4}E\left( u_{2t}^{2}\right) +\frac{ bc}{4}E\left( u_{2t}u_{3t}\right) \\&+\,\frac{ac}{2}E\left( u_{1t}u_{3t}\right) +\frac{bc}{4}E\left( u_{2t}u_{3t}\right) +\frac{c^{2}}{4}E\left( u_{3t}^{2}\right) \Bigg ] \\= & {} \zeta \Bigg [ a^{2}E\left( u_{1t}^{2}\right) +\frac{b^{2}}{4}E\left( u_{2t}^{2}\right) +\frac{c^{2}}{4}E\left( u_{3t}^{2}\right) \\&+\, abE\left( u_{1t}u_{2t}\right) +acE\left( u_{1t}u_{3t}\right) +\frac{bc}{2}E\left( u_{2t}u_{3t}\right) \Bigg ] \end{aligned}$$

where we write \(u_{it}\left( \theta _{0}\right) =u_{it}\) for simplicity. Since \(\zeta \) is strictly positive by Assumption A1, and since from Lemma A we have that

$$\begin{aligned} u_{3t}=\left( \frac{w_{0}}{\alpha _{0}}\right) \left( \frac{1}{w_{0}} -u_{2t}\right) , \end{aligned}$$

it suffices to show that the following term is strictly positive:

$$\begin{aligned}&a^{2}E\left( u_{1t}^{2}\right) +\frac{b^{2}}{4}E\left( u_{2t}^{2}\right) +\left( \frac{cw_{0}}{2\alpha _{0}}\right) ^{2}E\left( \left( \frac{1}{w_{0}} -u_{2t}\right) ^{2}\right) \\&\qquad +abE\left( u_{1t}u_{2t}\right) +\frac{acw_{0}}{ \alpha _{0}}E\left( \frac{u_{1t}}{w_{0}}-u_{1t}u_{2t}\right) +\frac{bcw_{0}}{2\alpha _{0}}E\left( \frac{u_{2t}}{w_{0}}-u_{2t}^{2}\right) \\&\quad =a^{2}E\left( u_{1t}^{2}\right) +\frac{1}{4}\left( b-\frac{cw_{0}}{\alpha _{0}}\right) ^{2}E\left( u_{2t}^{2}\right) +a\left( b-\frac{cw_{0}}{\alpha _{0}}\right) E\left( u_{1t}u_{2t}\right) \\&\qquad +\frac{c}{\alpha _{0}}\left[ aE\left( u_{1t}\right) +\frac{1}{2}\left( b- \frac{cw_{0}}{\alpha _{0}}\right) E\left( u_{2t}\right) \right] +\left( \frac{c}{2\alpha _{0}}\right) ^{2} \\&\quad =E\left[ \left( au_{1t}+\frac{1}{2}\left( b-\frac{cw_{0}}{\alpha _{0}}\right) u_{2t}\right) ^{2}+\frac{c}{\alpha _{0}}\left( au_{1t}+\frac{1}{2}\left( b-\frac{cw_{0}}{\alpha _{0}}\right) u_{2t}\right) +\left( \frac{c}{2\alpha _{0}}\right) ^{2}\right] \\&\quad =E\left( au_{1t}+\frac{1}{2}\left( b-\frac{cw_{0}}{\alpha _{0}}\right) u_{2t}+\frac{c}{2\alpha _{0}}\right) ^{2}>0. \end{aligned}$$

Finally, notice that since \(\Omega =2\Lambda \zeta ^{-1}=\Lambda ,\) it follows that \(\Omega >0.\) This completes the proof of Proposition 2. \(\square \)
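The completing-the-square rearrangement above is a pointwise algebraic identity in \(u_{1t}\) and \(u_{2t}\) (it holds before taking expectations), so it can be spot-checked numerically. A minimal sketch, with purely illustrative values for \(a,b,c,w_{0},\alpha _{0}\) and a single random draw of \((u_{1t},u_{2t})\):

```python
import numpy as np

rng = np.random.default_rng(2)

# Purely illustrative values for a, b, c, w0, alpha0 (any positive values work)
a, b, c, w0, al0 = rng.uniform(0.1, 2.0, 5)
# One random draw for (u1, u2); the identity holds pointwise, before expectations
u1, u2 = rng.standard_normal(2)

# Left-hand side: the six-term expansion from the first display
lhs = (a**2 * u1**2 + (b**2 / 4) * u2**2
       + (c * w0 / (2 * al0))**2 * (1 / w0 - u2)**2
       + a * b * u1 * u2
       + (a * c * w0 / al0) * (u1 / w0 - u1 * u2)
       + (b * c * w0 / (2 * al0)) * (u2 / w0 - u2**2))

# Right-hand side: the completed square
rhs = (a * u1 + 0.5 * (b - c * w0 / al0) * u2 + c / (2 * al0)) ** 2

print(np.isclose(lhs, rhs))
```

Since the square is nonnegative pointwise and strictly positive unless \(a=b=c=0\), the expectation is strictly positive.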

Proposition 3

Define the lower and upper values for each parameter in \(\theta _{0}\) as \(\gamma _{L}<\gamma _{0}<\gamma _{U},\) \(w_{L}<w_{0}<w_{U},\) and \(\alpha _{L}<\alpha _{0}<\alpha _{U},\) respectively, and define the neighborhood \(N\left( \theta _{0}\right) \) around \(\theta _{0}\) as

$$\begin{aligned} N\left( \theta _{0}\right) =\left\{ \theta :\gamma _{L}\le \gamma \le \gamma _{U},\ w_{L}\le w\le w_{U},\text { and }\alpha _{L}\le \alpha \le \alpha _{U}\right\} . \end{aligned}$$

Under Assumptions A and B, there exists a neighborhood \(N\left( \theta _{0}\right) \) such that, for \(i,j,k=1,2,3,\)

$$\begin{aligned} \sup _{\theta \in N\left( \theta _{0}\right) }\left| \frac{1}{T}\frac{ \partial ^{3}}{\partial \theta _{i}\partial \theta _{j}\partial \theta _{k}} L_{T}\left( \theta \right) \right| \le \frac{1}{T} \sum _{t=1}^{T}w_{ijkt}, \end{aligned}$$

where \(w_{ijkt}\) is stationary. Furthermore, \(\frac{1}{T}\sum _{t=1}^{T}w_{ijkt}\overset{a.s.}{\longrightarrow }E\left( w_{ijkt}\right) <\infty \) for all \(i,j,k.\)

Proof of Proposition 3

Let us start with the components of \(\left| \frac{1}{T}\frac{\partial ^{3}}{\partial \gamma ^{3}}L_{T}\left( \theta \right) \right| \) defined in Result 3 (see the Supplementary Appendix). Part I (also defined in Result 3) can be written as

$$\begin{aligned}&\left| \frac{4}{T}\sum _{t=1}^{T}\frac{\alpha \left( \frac{y_{t-1}^{*} }{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\left( \ln \left( \left| y_{t-2}\right| \right) \right) ^{3}}{\left( w+\alpha \left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\right) }\left( 1-\frac{\left( y_{t}^{*}\right) ^{2}}{ y_{t-1}^{2\gamma }\left( w+\alpha \left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\right) }\right) \right| \end{aligned}$$
$$\begin{aligned}&\quad \le \left| \frac{4}{T}\sum _{t=1}^{T}\frac{\left( \left( w+\alpha \left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\right) -w\right) \left| \ln \left( \left| y_{t-2}\right| \right) \right| ^{3}}{\left( w+\alpha \left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\right) }\right. \\&\qquad \times \, \left. \left( \frac{\left( y_{t}^{*}\right) ^{2}}{y_{t-1}^{2\gamma }\left( w+\alpha \left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\right) }+1\right) \right| \\&\quad \le \left| \frac{4}{T}\sum _{t=1}^{T}\left( \frac{\left( y_{t}^{*}\right) ^{2}}{y_{t-1}^{2\gamma }\left( w+\alpha \left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\right) }+1\right) \left| \ln \left( \left| y_{t-1}\right| \right) \right| ^{3}\right| \\&\quad \le \left| \frac{4}{T}\sum _{t=1}^{T}\left( \left( \frac{y_{t-1}^{2\gamma _{0}}\left( w_{0}+\alpha _{0}\left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| ^{\gamma _{0}}}\right) ^{2}\right) }{y_{t-1}^{2\gamma }\left( w+\alpha \left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\right) }\right) z_{t}^{2}+1\right) \left| \ln \left( \left| y_{t-1}\right| \right) \right| ^{3}\right| \\&\quad \le \left| \frac{4}{T}\sum _{t=1}^{T}\left( \left( \frac{w_{0}}{w}y_{t-1}^{2\left( \gamma _{0}-\gamma \right) }+\frac{\alpha _{0}}{\alpha }\left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| }\right) ^{2\left( \gamma _{0}-\gamma \right) }\right) z_{t}^{2}+1\right) \left| \ln \left( \left| y_{t-1}\right| \right) \right| ^{3}\right| \\&\quad \le \left| \frac{4}{T}\sum _{t=1}^{T}\left( \left\{ \frac{w_{U}}{w_{L}}\Lambda _{t-1}+\frac{\alpha _{U}}{\alpha _{L}}\Lambda _{t-2}\right\} z_{t}^{2}+1\right) \left| \ln \left( \left| y_{t-1}\right| \right) \right| ^{3}\right| , \end{aligned}$$

where we can define the lower bound, for all t, as \(y_{L}\le \left| y_{t-1}\right| ,\) \(y_{L}\le \left| y_{t-2}\right| ,\) with \(\Lambda _{t-1}=\max \left\{ y_{L}^{2\left| \gamma _{U}-\gamma _{L}\right| },y_{t-1}^{2\left| \gamma _{U}-\gamma _{L}\right| }\right\} \) and \( \Lambda _{t-2}=\max \left\{ 1,\left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| }\right) ^{2\left| \gamma _{U}-\gamma _{L}\right| }\right\} ,\) and the result follows by setting \(2\left| \gamma _{U}-\gamma _{L}\right| =\varphi \), using Assumptions A and B and the law of large numbers (see Jensen and Rahbek (2004a), Lemma 5). Part II also requires Assumption A3, since

$$\begin{aligned}&\left| \frac{8}{T}\sum _{t=1}^{T}\frac{\alpha ^{3}\left( \frac{ y_{t-1}^{*}}{\left| y_{t-2}\right| ^{\gamma }}\right) ^{6}\left( \ln \left( \left| y_{t-2}\right| \right) \right) ^{3}}{\left( w+\alpha \left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\right) ^{3}}\left( 1-\frac{3\left( y_{t}^{*}\right) ^{2}}{y_{t-1}^{2\gamma }\left( w+\alpha \left( \frac{y_{t-1}^{*} }{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\right) }\right) \right| \\&\quad \le \left| \frac{8}{T}\sum _{t=1}^{T}\left( \frac{3\left( y_{t}^{*}\right) ^{2}}{y_{t-1}^{2\gamma }\left( w+\alpha \left( \frac{y_{t-1}^{*} }{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\right) }-1\right) \left| \ln \left( \left| y_{t-2}\right| \right) \right| ^{3}\right| \\&\quad \le \left| \frac{8}{T}\sum _{t=1}^{T}\left( 3\left\{ \frac{w_{U}}{w_{L} }\Lambda _{t-1}+\frac{\alpha _{U}}{\alpha _{L}}\Lambda _{t-2}\right\} z_{t}^{2}+1\right) \left| \ln \left( \left| y_{t-1}\right| \right) \right| ^{3}\right| . \end{aligned}$$

Parts III, IV, V and VI follow the same argument.
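The bounding step for Part II rests on the elementary inequality \(\left| 1-3x\right| \le 3x+1\) for \(x\ge 0\) (applied with \(x\) equal to the nonnegative standardized ratio). A minimal numerical spot-check of this inequality over a grid of illustrative values:

```python
import numpy as np

# Check |1 - 3x| <= 3x + 1 over a dense grid of nonnegative x values.
# The inequality is tight at x = 0 and loose for large x.
x = np.linspace(0.0, 100.0, 1_000_001)
ok = bool(np.all(np.abs(1.0 - 3.0 * x) <= 3.0 * x + 1.0))
print(ok)
```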

Along the same lines, for \(\left| \frac{1}{T}\frac{\partial ^{3}}{\partial \alpha ^{3}}L_{T}\left( \theta \right) \right| \) we have

$$\begin{aligned} \left| \frac{1}{T}\frac{\partial ^{3}}{\partial \alpha ^{3}}L_{T}\left( \theta \right) \right|= & {} \left| \frac{1}{T}\sum _{t=1}^{T}\left( 3 \frac{\left( y_{t}^{*}\right) ^{2}}{y_{t-1}^{2\gamma }\left( w+\alpha \left( \frac{y_{t-1}^{*}}{\left| y_{t-2}\right| ^{\gamma }} \right) ^{2}\right) }-1\right) \frac{\left( \frac{y_{t-1}^{*}}{ y_{t-2}^{\gamma }}\right) ^{6}}{\left( w+\alpha \left( \frac{y_{t-1}^{*} }{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\right) ^{3}} \right| \\\le & {} \left| \frac{1}{T}\sum _{t=1}^{T}\left( 3\frac{\left( y_{t}^{*}\right) ^{2}}{y_{t-1}^{2\gamma }\left( w+\alpha \left( \frac{y_{t-1}^{*} }{\left| y_{t-2}\right| ^{\gamma }}\right) ^{2}\right) }-1\right) \right| \frac{1}{\alpha _{L}^{3}} \\\le & {} \frac{1}{T}\sum _{t=1}^{T}\left( 3\left\{ \frac{w_{U}}{w_{L}}\Lambda _{t-1}+\frac{\alpha _{U}}{\alpha _{L}}\Lambda _{t-2}\right\} z_{t}^{2}+1\right) \frac{1}{\alpha _{L}^{3}}. \end{aligned}$$

The rest of the cases follow directly using the same argument. This completes the proof of Proposition 3. \(\square \)
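The key elementary bound in the last display, \(u^{6}/\left( w+\alpha u^{2}\right) ^{3}\le \alpha ^{-3}\le \alpha _{L}^{-3}\) for \(u=y_{t-1}^{*}/\left| y_{t-2}\right| ^{\gamma }\), follows from \(w+\alpha u^{2}\ge \alpha u^{2}\). A minimal numerical spot-check; the sampling ranges and the value of \(\alpha _{L}\) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
aL = 0.05  # illustrative lower bound alpha_L

# Draw arbitrary (w, alpha, u) with w > 0 and alpha >= alpha_L, and verify
#   u^6 / (w + alpha*u^2)^3 <= 1/alpha^3 <= 1/alpha_L^3,
# which holds because w + alpha*u^2 >= alpha*u^2.
n = 100_000
ws = rng.uniform(0.01, 5.0, n)
als = rng.uniform(aL, 5.0, n)
us = rng.uniform(-10.0, 10.0, n)
lhs = us**6 / (ws + als * us**2) ** 3
ok = bool(np.all(lhs <= 1.0 / aL**3 + 1e-9))
print(ok)
```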

Proof of Theorem 1

Given the conditions provided by Propositions 1–3, Theorem 1 follows from Lumsdaine (1996, pp. 593–595, Theorem 3), the ergodic theorem and Lemma 1, p. 1206 in Jensen and Rahbek (2004b). \(\square \)
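As a complement to the asymptotic result, the finite-sample behaviour of the MLE can be illustrated by simulation. The sketch below is not the authors' simulation design: it uses the representation from the proofs, \(y_{t}^{*}=\left| y_{t-1}\right| ^{\gamma }\sqrt{h_{t}}z_{t}\) with \(h_{t}=w+\alpha \left( y_{t-1}^{*}/\left| y_{t-2}\right| ^{\gamma }\right) ^{2}\), adds a known level \(\mu \) to keep the series positive, and replaces full numerical optimization by a coarse grid search; all parameter values are hypothetical:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true parameters (w0, alpha0, gamma0) and known level mu.
w0, a0, g0, mu = 0.1, 0.3, 0.5, 5.0
T = 20000

# Simulate: eps_t = |y_{t-1}|^g * sqrt(h_t) * z_t with
#           h_t = w + a * (eps_{t-1} / |y_{t-2}|^g)^2,   y_t = mu + eps_t.
y = np.full(T, mu)
eps = np.zeros(T)
for t in range(2, T):
    h = w0 + a0 * (eps[t - 1] / abs(y[t - 2]) ** g0) ** 2
    eps[t] = abs(y[t - 1]) ** g0 * np.sqrt(h) * rng.standard_normal()
    y[t] = mu + eps[t]

def negloglik(w, a, g):
    """Gaussian negative log-likelihood (additive constants dropped)."""
    e, e1 = eps[3:], eps[2:-1]
    y1, y2 = np.abs(y[2:-1]), np.abs(y[1:-2])
    s2 = y1 ** (2 * g) * (w + a * (e1 / y2 ** g) ** 2)
    return 0.5 * np.sum(np.log(s2) + e ** 2 / s2)

# Coarse grid search as a stand-in for full numerical maximization;
# the grid contains the true point, which should win for large T.
grid_w = [0.05, 0.1, 0.2, 0.4]
grid_a = [0.1, 0.3, 0.5, 0.7]
grid_g = [0.0, 0.5, 1.0, 1.5]
best = min(itertools.product(grid_w, grid_a, grid_g),
           key=lambda p: negloglik(*p))
print(best)
```

With \(T=20{,}000\) the grid point maximizing the likelihood coincides with the true parameter values, in line with the consistency result.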


Cite this article

Dahl, C.M., Iglesias, E.M. Asymptotic normality of the MLE in the level-effect ARCH model. Stat Papers 62, 117–135 (2021). https://doi.org/10.1007/s00362-019-01086-y
