Abstract
Because of the lack of a suitable modeling procedure and the difficulty of identifying the threshold variable and estimating the threshold values, the multi-regime threshold autoregressive moving average (TARMA) model has attracted little attention in applications. The chief goal of this paper is therefore to propose a simple yet widely applicable modeling procedure for multi-regime TARMA models. Under the null hypothesis of no threshold, we use the extended least squares estimate (ELSE) and an arranged linear regression to obtain a test statistic \(\hat{F}\), which is shown to follow an approximate F distribution. Based on the statistic \(\hat{F}\), we then employ scatter plots to identify the number and locations of the potential thresholds. Finally, a TARMA model is built from these statistics together with the Akaike information criterion (AIC). Simulation experiments and an application to a real data example demonstrate that both the test statistic and the model-building procedure work well for TARMA models.
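The arranged-regression idea behind the \(\hat{F}\) statistic can be sketched in code. The following is a simplified illustration in the spirit of Tsay (1989): it handles only the pure autoregressive part (the chapter's statistic also involves moving-average terms fitted by ELSE), and the function name, trimming fraction, and start-up rule are our own choices, not the paper's:

```python
import numpy as np

def tar_f_test(y, p=1, d=1, trim=0.1):
    """Arranged-autoregression F test for threshold nonlinearity
    (a simplified AR-only sketch; the chapter's statistic also
    includes moving-average terms fitted by ELSE)."""
    n = len(y)
    # Regress y_t on (1, y_{t-1}, ..., y_{t-p}); rows index t = p..n-1
    X = np.column_stack([np.ones(n - p)]
                        + [y[p - i:n - i] for i in range(1, p + 1)])
    Y = y[p:]
    z = y[p - d:n - d]          # threshold variable y_{t-d}
    order = np.argsort(z)       # arrange cases by the threshold variable
    Xs, Ys = X[order], Y[order]
    b = int(trim * (n - p)) + p + 1   # start-up cases before recursion
    # Recursive least squares on the arranged data, collecting
    # standardized predictive residuals
    e = []
    for t in range(b, len(Ys)):
        coef, *_ = np.linalg.lstsq(Xs[:t], Ys[:t], rcond=None)
        x = Xs[t]
        h = x @ np.linalg.pinv(Xs[:t].T @ Xs[:t]) @ x
        e.append((Ys[t] - x @ coef) / np.sqrt(1.0 + h))
    e = np.asarray(e)
    Xe = Xs[b:]
    # Regress the predictive residuals on the regressors; a large F
    # value indicates threshold nonlinearity
    g, *_ = np.linalg.lstsq(Xe, e, rcond=None)
    rss1 = np.sum((e - Xe @ g) ** 2)
    rss0 = np.sum(e ** 2)
    k = Xe.shape[1]
    return ((rss0 - rss1) / k) / (rss1 / (len(e) - k))
```

Under a linear AR model the statistic should be close to its F reference distribution, while threshold-type departures inflate it; in the paper the analogous \(\hat{F}(p, q, d)\) is computed over a grid of delays \(d\) to locate the threshold variable.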
References
Akaike, H. (1974). A new look at statistical model identification. IEEE Transactions on Automatic Control, 19, 716–722.
Billingsley, P. (1961). The Lindeberg–Lévy theorem for martingales. Proceedings of the American Mathematical Society, 12, 788–792.
Brockwell, P., Liu, J., & Tweedie, R. L. (1992). On the existence of stationary threshold autoregressive moving-average processes. Journal of Time Series Analysis, 13, 95–107.
Chan, K. S. (1990). Testing for threshold autoregression. Annals of Statistics, 18, 1886–1894.
Chan, K. S., Petruccelli, J. D., Tong, H., & Woolford, S. W. (1985). A multiple-threshold AR(1) model. Journal of Applied Probability, 22, 267–279.
Chan, K. S., & Tong, H. (1986). On estimating thresholds in autoregressive models. Journal of Time Series Analysis, 7, 179–190.
Christopeit, N., & Helmes, K. (1980). Strong consistency of least squares estimators in linear regression models. The Annals of Statistics, 4, 778–788.
Chen, W. S. C., So, K. P. M., & Liu, F. C. (2011). A review of threshold time series models in finance. Statistics and Its Interface, 4, 167–181.
Cryer, J. D., & Chan, K. S. (2008). Time series analysis with applications in R. Springer texts in statistics (2nd ed.). Berlin: Springer.
de Gooijer, J. G. (1998). On threshold moving-average models. Journal of Time Series Analysis, 19, 1–18.
Ertel, J. E., & Fowlkes, E. B. (1976). Some algorithms for linear spline and piecewise multiple linear regression. Journal of the American Statistical Association, 71, 640–648.
Goodwin, G. C., & Payne, R. L. (1977). Dynamic system identification: Experiment design and data analysis. New York, NY: Academic Press.
Haggan, V., Heravi, S. M., & Priestley, M. B. (1984). A study of the application of state-dependent models in nonlinear time series analysis. Journal of Time Series Analysis, 5, 69–102.
Hannan, E. J., & Rissanen, J. (1982). Recursive estimation of mixed autoregressive-moving average order. Biometrika, 69, 81–94.
Hansen, B. E. (2000). Sample splitting and threshold estimation. Econometrica, 68, 575–603.
Keenan, D. M. (1985). A Tukey nonadditivity-type test for time series nonlinearity. Biometrika, 72, 39–44.
Lai, T. L., & Wei, C. Z. (1982). Least squares estimates in stochastic regression models with applications to identification and control of dynamic systems. The Annals of Statistics, 10, 154–166.
Li, G., & Li, W. K. (2011). Testing a linear time series model against its threshold extension. Biometrika, 98, 243–250.
Li, D., Li, W. K., & Ling, S. (2011). On the least squares estimation of threshold autoregressive and moving-average models. Statistics and Its Interface, 4, 183–196.
Li, D., & Ling, S. (2012). On the least squares estimation of multiple-regime threshold autoregressive models. Journal of Econometrics, 167, 240–253.
Liang, R., Niu, C., Xia, Q., & Zhang, Z. (2015). Nonlinearity testing and modeling for threshold moving average models. Journal of Applied Statistics, 42, 2614–2630.
Ling, S. (1999). On the probabilistic properties of a double threshold ARMA conditional heteroskedastic model. Journal of Applied Probability, 36, 688–705.
Ling, S., & Tong, H. (2005). Testing a linear moving-average model against threshold moving-average models. The Annals of Statistics, 33, 2529–2552.
Ling, S., Tong, H., & Li, D. (2007). The ergodicity and invertibility of threshold moving-average models. Bernoulli, 13, 161–168.
Liu, J., & Susko, E. (1992). On strict stationarity and ergodicity of a nonlinear ARMA model. Journal of Applied Probability, 29, 363–373.
Ljung, L., & Soderstrom, T. (1983). Theory and practice of recursive identification. Cambridge: MIT Press.
Priestley, M. B. (1980). State-dependent models: A general approach to nonlinear time series analysis. Journal of Time Series Analysis, 1, 47–71.
Qian, L. (1998). On maximum likelihood estimators for a threshold autoregression. Journal of Statistical Planning and Inference, 75, 21–46.
Tong, H. (1978). On a threshold model. In C. H. Chen (Ed.), Pattern recognition and signal processing (pp. 101–141). Amsterdam: Sijthoff and Noordhoff.
Tong, H., & Lim, K. S. (1980). Threshold autoregressions, limit cycles, and data. Journal of the Royal Statistical Society B, 42, 245–292.
Tong, H. (1990). Non-linear time series: A dynamical system approach. Oxford: Oxford University Press.
Tsay, R. S. (1986). Nonlinearity tests for time series. Biometrika, 73, 461–466.
Tsay, R. S. (1987). Conditional heteroscedastic time series models. Journal of the American Statistical Association, 82, 590–604.
Tsay, R. S. (1989). Testing and modeling threshold autoregressive process. Journal of the American Statistical Association, 84, 231–240.
Tsay, R. S. (2005). Analysis of financial time series (2nd ed.). London: Wiley.
Wong, C. S., & Li, W. K. (1997). Testing for threshold autoregression with conditional heteroscedasticity. Biometrika, 84, 407–418.
Wong, C. S., & Li, W. K. (2000). Testing for double threshold autoregressive conditional heteroscedastic model. Statistica Sinica, 10, 173–189.
Acknowledgements
We thank the two Referees for their criticisms and suggestions, which have led to improvements of the paper. The research of Qiang Xia was supported by the National Social Science Foundation of China (No. 12CTJ019) and the Ministry of Education in China Project of Humanities and Social Sciences (Project No. 11YJCZH195). The research of Heung Wong was supported by a grant of the Research Committee of The Hong Kong Polytechnic University (Code: G-YBCV).
Appendix: Proofs of Theorems
Proof
(Theorem 1) For each regime of model (1.1), under Assumption 4.1, we substitute the fitted residuals \(\{\hat{\varepsilon }^{(j)}_{t-i}, i=1,\ldots ,q_j\}\) obtained by ELSE for \(\{\varepsilon ^{(j)}_{t-i}, i=1,\ldots ,q_j\}\). Then, within each regime, model (1.1) becomes a linear regression model, and we can obtain the least squares estimate \(\hat{\varPhi }^{(j)}\) of the jth regime. Under Assumptions 1–4, the conditions of Theorem 1 of Lai and Wei [17] and Theorem 2 of Liang et al. [21] are fulfilled. Therefore, for given k, d, and threshold values \(r_j\), the least squares estimates \(\{\hat{\varPhi }^{(j)}, j=1,2,\ldots ,l\}\) converge to \(\{\varPhi ^{(j)}, j=1,2,\ldots ,l\}\) almost surely. \(\square \)
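For concreteness, the per-regime estimate invoked above has the usual least squares form. The display equations did not survive extraction, so the following is our reconstruction; the exact ordering of the regressor vector is an assumption:

```latex
\hat{\varPhi}^{(j)}
  = \Bigl(\sum_{t \in R_j} X_t^{(j)} X_t^{(j)\prime}\Bigr)^{-1}
    \sum_{t \in R_j} X_t^{(j)} y_t,
\qquad
X_t^{(j)} = \bigl(1,\, y_{t-1},\ldots,y_{t-p_j},\,
  \hat{\varepsilon}^{(j)}_{t-1},\ldots,\hat{\varepsilon}^{(j)}_{t-q_j}\bigr)',
```

where \(R_j = \{t : r_{j-1} < y_{t-d} \le r_j\}\) collects the observations falling in the jth regime.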
Proof
(Theorem 2) Consider the observation \(\{y_t, t=1, 2, \ldots , n\}\) and define
with \(\hat{\varepsilon }_{t-i}\)’s being the residuals for model (1.1) fitted by the Hannan–Rissanen algorithm or ELSE.
Also define \(\varPhi , A_n, V_n\) by
Therefore, without loss of generality, the least squares estimate of \(\varPhi \) and the residuals are
Also define \(\varPsi _n\) and \(\hat{a}_t\) by
Hence,
Because \(X_t\) depends only on \(\{y_{t-k}, \hat{\varepsilon }_{t-l}, k = 1,\ldots ,p, l = 1,\ldots ,q\}\), which is independent of \(e_t\), the sequence \((n-p-q)^{-\frac{1}{2}}\sum X_t'e_t\) forms a stationary and ergodic martingale difference process, and hence is asymptotically normal by a multivariate version of the martingale central limit theorem [2]. Theorem 2.1 shows that \(\hat{\varPhi } - \varPhi \rightarrow 0\) almost surely. Since \(X_t\) is \((p+q+1)\)-dimensional, (4) approximately follows an F distribution with degrees of freedom \(p + q + 1\) and \(n-d-b-p-q-h\). From another point of view, \(\sum \hat{a}^2_t/\sigma ^2\) is a chi-square random variable with \(n-d-b-h-p-q\) degrees of freedom, and the numerator and denominator of (4) have the same asymptotic variance \(\sigma ^2\). Then \((p + q + 1)\hat{F}(p, q, d)\) is asymptotically a chi-square random variable with \(p + q + 1\) degrees of freedom, which is a straightforward generalization of Corollary 3.1 of Keenan [16] or Tsay [32]. This proves Theorem 2.2. \(\square \)
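The display equation for the statistic (4) did not survive extraction, but the degrees of freedom quoted in the proof match the standard arranged-regression statistic of Tsay (1989); a plausible reconstruction, offered here only as a sketch, is:

```latex
\hat{F}(p, q, d)
  = \frac{\bigl(\sum \hat{a}_{*t}^{\,2} - \sum \hat{a}_t^{\,2}\bigr)/(p+q+1)}
         {\sum \hat{a}_t^{\,2}/(n-d-b-p-q-h)},
```

where \(\hat{a}_{*t}\) denotes the standardized predictive residuals from the arranged regression and \(\hat{a}_t\) the residuals from regressing \(\hat{a}_{*t}\) on \(X_t\).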
Copyright information
© 2016 Springer Science+Business Media New York
Cite this chapter
Xia, Q., & Wong, H. (2016). Identification of threshold autoregressive moving average models. In W. Li, D. Stanford, & H. Yu (Eds.), Advances in time series methods and applications. Fields Institute Communications, vol 78. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-6568-7_9
Print ISBN: 978-1-4939-6567-0
Online ISBN: 978-1-4939-6568-7