
SYMARMA: a new dynamic model for temporal data on conditional symmetric distribution

Abstract

Gaussian time series models, such as ARMA, have been widely used in the literature. Benjamin et al. (J Am Stat Assoc 98:214–223, 2003) extended these models to distributions in the exponential family. In the same direction, Rocha and Cribari-Neto (Test 18:529–545, 2009) proposed a time series model for the class of beta distributions. In this paper, we develop an autoregressive moving average symmetric model, named SYMARMA, a dynamic model for random variables belonging to the class of symmetric distributions that also incorporates a set of regressors. We discuss methods for parameter estimation, hypothesis testing and forecasting. In particular, we provide closed-form expressions for the score function and the Fisher information matrix. A robustness study based on the influence function is also presented. We conduct simulation studies to evaluate the consistency and asymptotic normality of the conditional maximum likelihood estimator of the model parameters. An application to real data is presented and discussed.



Notes

  1. The return is defined as \(r_t = (p_t - p_{t-1})/p_{t-1}\) where \(p_t\) is the price of an asset at time t.

  2. The T-bill rates were divided by 100 to convert from a percentage and then by 253 to convert to a daily rate.
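These two conventions can be sketched in a few lines (the prices and the T-bill quote below are hypothetical numbers, not the data used in the paper):

```python
import numpy as np

# hypothetical prices p_t; the return is r_t = (p_t - p_{t-1}) / p_{t-1}
p = np.array([100.0, 101.0, 99.98, 102.0])
r = np.diff(p) / p[:-1]

# hypothetical annualized T-bill rate quoted in percent,
# converted to a daily decimal rate: divide by 100, then by 253
tbill_pct = 4.85
tbill_daily = tbill_pct / 100.0 / 253.0

print(r)
print(tbill_daily)
```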

References

  • Benjamin MA, Rigby RA, Stasinopoulos M (2003) Generalized autoregressive moving average models. J Am Stat Assoc 98:214–223

  • Cao CZ, Lin JG, Zhu LX (2010) Heteroscedasticity and/or autocorrelation diagnostics in nonlinear models with AR(1) and symmetrical errors. Stat Pap 51:813–836

  • Chen C, Liu LM (1993) Joint estimation of model parameters and outlier effects in time series. J Am Stat Assoc 88:284–297

  • Cox DR (1981) Statistical analysis of time series: some recent developments. Scand J Stat 8:93–115

  • Cox DR, Hinkley DV (1974) Theoretical statistics. Chapman and Hall, London

  • Creal D, Koopman SJ, Lucas A (2013) Generalized autoregressive score models with applications. J Appl Econom 28:777–795

  • Cysneiros FJA, Paula GA (2005) Restricted methods in symmetrical linear regression models. Comput Stat Data Anal 49:689–708

  • Efron B, Tibshirani RJ (1993) An introduction to the bootstrap. Chapman and Hall, New York

  • Fang KT, Kotz S, Ng KW (1990) Symmetric multivariate and related distributions. Chapman and Hall, London

  • Galea M, Paula GA, Uribe-Opazo M (2003) On influence diagnostic in univariate elliptical linear regression models. Stat Pap 44:23–45

  • Heyde CC, Feigin PD (1975) On efficiency and exponential families in stochastic process estimation. Stat Distrib Sci Work 1:227–240

  • Li WK (1994) Time series model based on generalized linear models: some further results. Biometrics 50:506–511

  • Ljung GM, Box GEP (1978) On a measure of a lack of fit in time series models. Biometrika 65:297–303

  • Lucas A (1997) Robustness of the Student-t based M-estimator. Commun Stat, Theory Methods 26:1165–1182

  • Paula GA, Cysneiros FJA (2009) Systematic risk estimation in symmetric models. Appl Econ 16:217–221

  • Paula GA, Leiva V, Barros M, Liu S (2012) Robust statistical modeling using Birnbaum-Saunders-\(t\) distribution applied to insurance. Appl Stoch Model Bus Ind 28:16–34

  • Peña D (1990) Influential observations in time series. J Bus Econ Stat 8:235–241

  • Ruppert D (2004) Statistics and finance. Springer, New York

  • R Core Team (2012) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org

  • Rocha AV, Cribari-Neto F (2009) Beta autoregressive moving average models. Test 18:529–545

  • Zeger SL (1988) A regression model for time series of counts. Biometrika 75:621–629

  • Zeger SL, Qaqish B (1988) Markov regression models for time series: a quasi-likelihood approach. Biometrics 44:1019–1031


Acknowledgments

The authors thank the Editor, Dr. Victor Leiva, an anonymous Associate Editor and the referees for their constructive comments on an earlier version of this manuscript, which resulted in this improved version. This research was partially supported by the Brazilian agencies CNPq, CAPES and FACEPE.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Francisco José A. Cysneiros.

Appendices

Appendix 1: Proof of Theorem 1

Proof

Let \(\Phi (B) = 1 - \phi _1B - \cdots - \phi _pB^p\) be the autoregressive polynomial, \(\Theta (B) = 1 + \theta _1B + \cdots + \theta _qB^q\) the moving-average polynomial and \(B\) the lag operator, with \(B^ky_t=y_{t-k}\). Assuming that \(\Phi (B)\) is invertible, define \(\Psi (B) =\sum \nolimits _{i=0}^{\infty }\psi _iB^i=\Theta (B)\Phi (B)^{-1}\), with \(\psi _0=1\). The SYMARMA model can be rewritten as

$$\begin{aligned} \Phi (B)(y_t-\mathbf{x}_{t}^{\top }\varvec{\beta }) = \Theta (B)r_t \end{aligned}$$

and, since \(\Phi (B)\) is invertible,

$$\begin{aligned} y_t = \mathbf{x}_{t}^{\top }\varvec{\beta }+ \Psi (B)r_t. \end{aligned}$$

Taking expectations and using \(\mathrm {E}(r_t) = 0\), the marginal mean of \(y_t\) in the SYMARMA model is given by

$$\begin{aligned} \mathrm {E}(y_t) = \mathbf{x}_t^{\top }\varvec{\beta }. \end{aligned}$$

\(\square \)

Appendix 2: Proof of Theorem 2

Proof

Let \(Y_t = \mu _t + r_t\), where the \(r_t\) are uncorrelated errors with mean zero. We have

$$\begin{aligned} \mathrm {Var}(r_t)= & {} \mathrm {E}(r_t^2) = \mathrm {E}(\mathrm {E}(r_t^2|\mathrm {\mathcal{F}_{t-1}})) = \mathrm {E}(\mathrm {Var}(r_t|\mathrm {\mathcal{F}_{t-1}})) = \mathrm {E}(\mathrm {Var}(Y_t - \mu _t|\mathrm {\mathcal{F}_{t-1}}))\\= & {} \mathrm {E}(\mathrm {Var}(Y_t|\mathrm {\mathcal{F}_{t-1}})) = \mathrm {E}(\xi \varphi ) = \xi \varphi . \end{aligned}$$

Note that \(\mu _t\) given in (2) is \(\mathcal{F}_{t-1}\)-measurable. Therefore, the marginal variance of \(Y_t\), \(\mathrm {Var}(Y_t)\), is given by

$$\begin{aligned} \mathrm {Var}(Y_t)= & {} \mathrm {Var}(\mathbf{x}_{t}^{\top }\varvec{\beta }+ \Psi (B)r_t) = \mathrm {Var}(\Psi (B)r_t) = \mathrm {E}[(\Psi (B)r_t)^2] \\= & {} \sum \limits _{i=0}^\infty \sum \limits _{j=0}^\infty \psi _i\psi _j\mathrm {E}(r_{t-i}r_{t-j}) = \sum \limits _{i=0}^\infty \psi _i^2\mathrm {E}(r_{t-i}^2) = \sum \limits _{i=0}^\infty \psi _i^2\mathrm {Var}(r_{t-i})\\= & {} \xi \varphi \sum \limits _{i=0}^\infty \psi _i^2. \end{aligned}$$

\(\square \)
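Theorem 2 can be checked numerically. Below is a minimal Monte Carlo sketch (not from the paper) for the Gaussian special case (\(\xi = 1\), with \(\varphi \) the error variance), using an ARMA(1,1) structure without regressors and the standard \(\psi \)-weights \(\psi _0 = 1\), \(\psi _i = (\phi _1+\theta _1)\phi _1^{i-1}\) for \(i \ge 1\); the coefficient values are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
phi1, theta1, varphi = 0.5, 0.3, 2.0  # AR and MA coefficients; varphi = error variance

# psi-weights of a causal ARMA(1,1): psi_0 = 1, psi_i = (phi1 + theta1) * phi1**(i-1)
psi = np.r_[1.0, (phi1 + theta1) * phi1 ** np.arange(50)]
var_theory = varphi * np.sum(psi ** 2)  # xi * varphi * sum psi_i^2, with xi = 1

# simulate a long Gaussian ARMA(1,1) path (beta = 0, so x_t' beta drops out)
n, burn = 200_000, 500
r = rng.normal(0.0, np.sqrt(varphi), n + burn)
y = np.zeros(n + burn)
for t in range(1, n + burn):
    y[t] = phi1 * y[t - 1] + r[t] + theta1 * r[t - 1]
y = y[burn:]

print(var_theory, y.var())  # the two should be close
```

The simulated variance agrees with \(\xi \varphi \sum _{i=0}^{\infty }\psi _i^2\) to within Monte Carlo error.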

Appendix 3: Proof of Theorem 3

Proof

By Theorems 1 and 2 we have

$$\begin{aligned} \mathrm {E}(Y_t) = \mathbf{x}_t^{\top }\varvec{\beta }\quad \mathrm {and}\quad \mathrm {Var}(Y_t) = \xi \varphi \sum _{i=0}^{\infty }\psi _i^2. \end{aligned}$$

Moreover,
$$\begin{aligned} \mathrm {Cov}(Y_t,Y_{t-k})= & {} \mathrm {Cov}(\mathbf{x}_{t}^{\top }\varvec{\beta }+ \sum _{i=0}^{\infty }\psi _ir_{t-i}, \mathbf{x}_{t-k}^{\top }\varvec{\beta }+ \sum _{i=0}^{\infty }\psi _ir_{t-k-i}) \\= & {} \mathrm {Cov}(\psi _0r_t + \psi _1r_{t-1} + \cdots , \psi _0r_{t-k} + \psi _1r_{t-k-1} + \cdots )\\= & {} \mathrm {Var}(r_t)\sum _{i=0}^{\infty }\psi _i\psi _{i+k} = \xi \varphi \sum _{i=0}^{\infty }\psi _i\psi _{i+k} \end{aligned}$$

where \(\psi _0=1\). Hence,

$$\begin{aligned} \mathrm {Corr}(Y_t,Y_{t-k})= & {} \dfrac{\mathrm {Cov}(Y_t,Y_{t-k})}{\sqrt{\mathrm {Cov}(Y_t,Y_{t})\mathrm {Cov}(Y_{t-k},Y_{t-k})}} = \dfrac{\xi \varphi \sum _{i=0}^{\infty }\psi _i\psi _{i+k}}{\xi \varphi \sum _{i=0}^{\infty }\psi _i^2} \\= & {} \dfrac{\sum _{i=0}^{\infty }\psi _i\psi _{i+k}}{\sum _{i=0}^{\infty }\psi _i^2}. \end{aligned}$$

\(\square \)
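As a concrete instance of Theorem 3 (a check not given in the paper), consider a pure AR(1) model (\(p=1\), \(q=0\)), for which \(\Psi (B) = (1-\phi _1B)^{-1}\) and hence \(\psi _i = \phi _1^i\). Then

$$\begin{aligned} \mathrm {Corr}(Y_t,Y_{t-k}) = \dfrac{\sum _{i=0}^{\infty }\phi _1^i\phi _1^{i+k}}{\sum _{i=0}^{\infty }\phi _1^{2i}} = \dfrac{\phi _1^k/(1-\phi _1^2)}{1/(1-\phi _1^2)} = \phi _1^k, \end{aligned}$$

the familiar autocorrelation function of a stationary AR(1) process.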

Expected conditional Fisher information matrix

The elements of the expected conditional Fisher information matrix, \(\mathbf{K}\), are obtained from the expression

$$\begin{aligned} \mathbf{K}_{\omega _r\omega _s} = -\mathrm {E}\left[ \dfrac{\partial ^2 \ell ({\varvec{\delta }},\varphi )}{\partial \omega _r\partial \omega _s}\left| \mathcal{F}_{t-1}\right. \right] = \mathrm {E}\left[ \dfrac{\partial \ell ({\varvec{\delta }},\varphi )}{\partial \omega _r}\dfrac{\partial \ell ({\varvec{\delta }},\varphi )}{\partial \omega _s}\left| \mathcal{F}_{t-1}\right. \right] , \end{aligned}$$

where \(\omega _r\) and \(\omega _s\) are model parameters and \(\ell \) is the logarithm of the conditional likelihood function.

Under suitable regularity conditions

$$\begin{aligned} \mathrm {E}\left( \dfrac{\partial \ell _t({\varvec{\delta }},\varphi )}{\partial \mu _t}\left| \right. \mathcal{F}_{t-1}\right)= & {} \mathrm {E}\left( \dfrac{\partial \mathrm {log}f(y_t|\mathcal{F}_{t-1})}{\partial \mu _t}\right) \nonumber \\= & {} \displaystyle \int _{-\infty }^{\infty }\dfrac{\partial \mathrm {log}f(y_t|\mathcal{F}_{t-1})}{\partial \mu _t}f(y_t|\mathcal{F}_{t-1})dy_t\nonumber \\= & {} \displaystyle \int _{-\infty }^{\infty }\left( \dfrac{1}{f(y_t|\mathcal{F}_{t-1})}\dfrac{\partial f(y_t|\mathcal{F}_{t-1})}{\partial \mu _t}\right) f(y_t|\mathcal{F}_{t-1})dy_t\nonumber \\= & {} \displaystyle \int _{-\infty }^{\infty }\dfrac{\partial f(y_t|\mathcal{F}_{t-1})}{\partial \mu _t}dy_t = \dfrac{\partial }{\partial \mu _t}\displaystyle \int _{-\infty }^{\infty }f(y_t|\mathcal{F}_{t-1})dy_t = 0.\nonumber \\ \end{aligned}$$
(10)

After some algebraic manipulation, we also obtain

$$\begin{aligned} \dfrac{\partial \ell _t({\varvec{\delta }},\varphi )}{\partial \mu _t} = -\dfrac{2}{\sqrt{\varphi }}W_g(u_t)z_t, \end{aligned}$$

with \(z_t = (y_t-\mu _t)/\sqrt{\varphi }\) and \(u_t = z_t^2\). Therefore, using the result in (10), we have

$$\begin{aligned} \mathrm {E}\left( W_g(u_t)z_t|\mathcal{F}_{t-1}\right) = 0. \end{aligned}$$
(11)

Furthermore, the expressions

$$\begin{aligned} \dfrac{\partial \mu _t}{\partial \beta _l} = x_{tl} - \sum \limits _{i=1}^{p}\phi _ix_{(t-i)l}, \qquad \ \dfrac{\partial \mu _t}{\partial \phi _i} = y_{t-i} - \mathbf{x}^\top _{t-i}\varvec{\beta }\qquad \ \mathrm {and} \qquad \ \dfrac{\partial \mu _t}{\partial \theta _j} = y_{t-j} - \mu _{t-j} \end{aligned}$$
(12)

are measurable with respect to \(\mathcal{F}_{t-1}\).

The \(\mathbf{K}_{{\varvec{\delta }}{\varvec{\delta }}}\) matrix elements

$$\begin{aligned}&\mathrm {E}\left( \dfrac{\partial \ell _t({\varvec{\delta }},\varphi )}{\partial \delta _i}\dfrac{\partial \ell _t({\varvec{\delta }},\varphi )}{\partial \delta _j}|\mathcal{F}_{t-1}\right) \\&\quad = \mathrm {E}\left[ \left( \dfrac{-2W_g(u_t)}{\sqrt{\varphi }}\dfrac{\partial \mu _t}{\partial \delta _i}z_t\right) \left( \dfrac{-2W_g(u_t)}{\sqrt{\varphi }}\dfrac{\partial \mu _t}{\partial \delta _j}z_t\right) |\mathcal{F}_{t-1}\right] \\&\quad = \dfrac{4}{\varphi }\mathrm {E}\left[ W^2_g(u_t)z_t^2\dfrac{\partial \mu _t}{\partial \delta _i}\dfrac{\partial \mu _t}{\partial \delta _j} |\mathcal{F}_{t-1}\right] \\&\quad = \dfrac{4}{\varphi }\mathrm {E}\left[ W^2_g(u_t)z_t^2|\mathcal{F}_{t-1}\right] \dfrac{\partial \mu _t}{\partial \delta _i}\dfrac{\partial \mu _t}{\partial \delta _j}\\&\quad = \dfrac{4}{\varphi }d_{g}\dfrac{\partial \mu _t}{\partial \delta _i}\dfrac{\partial \mu _t}{\partial \delta _j}, \end{aligned}$$

with \(d_{g} = \mathrm {E}\left[ W^2_g(u_t)z_t^2|\mathcal{F}_{t-1}\right] \), that is, \(d_{g} = \mathrm {E}\left[ W^2_g(U^2)U^2\right] \) with \(U \sim S(0,1,g)\).

From the results presented in (12) one can easily find the expressions for the elements of \(\mathbf{K}_{{\varvec{\delta }}{\varvec{\delta }}}\).
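For instance (a direct consequence, not displayed in the paper), combining this expression with the derivatives in (12), the element of \(\mathbf{K}_{{\varvec{\delta }}{\varvec{\delta }}}\) associated with the regression coefficients \(\beta _l\) and \(\beta _{l'}\) is

$$\begin{aligned} \mathbf{K}_{\beta _l\beta _{l'}} = \dfrac{4d_{g}}{\varphi }\sum \limits _{t=m+1}^n\left( x_{tl} - \sum \limits _{i=1}^{p}\phi _ix_{(t-i)l}\right) \left( x_{tl'} - \sum \limits _{i=1}^{p}\phi _ix_{(t-i)l'}\right) , \end{aligned}$$

where the summation range \(t=m+1,\ldots ,n\) follows the convention used for \(\mathbf{K}_{\varphi \varphi }\).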

The \(\mathbf{K}_{\varphi \varphi }\) matrix elements

$$\begin{aligned} \mathrm {E}\left( \dfrac{\partial \ell _t({\varvec{\delta }},\varphi )}{\partial \varphi }\dfrac{\partial \ell _t({\varvec{\delta }},\varphi )}{\partial \varphi }|\mathcal{F}_{t-1}\right)= & {} \mathrm {E}\left[ \left( -\dfrac{1}{2\varphi } - \dfrac{W_g(u_t)}{\varphi }u_t\right) \left( -\dfrac{1}{2\varphi } - \dfrac{W_g(u_t)}{\varphi }u_t\right) |\mathcal{F}_{t-1}\right] \\= & {} \mathrm {E}\left[ \dfrac{1}{4\varphi ^2} + \dfrac{W_g(u_t)u_t}{\varphi ^2} + \dfrac{W^2_g(u_t)u_t^2}{\varphi ^2} |\mathcal{F}_{t-1}\right] \\= & {} \dfrac{1}{4\varphi ^2} + \dfrac{1}{\varphi ^2}\mathrm {E}\left[ W_g(u_t)u_t|\mathcal{F}_{t-1}\right] + \dfrac{1}{\varphi ^2}\mathrm {E}\left[ W^2_g(u_t)u_t^2|\mathcal{F}_{t-1}\right] \\= & {} \dfrac{1}{4\varphi ^2} + \dfrac{1}{\varphi ^2}\left( -\dfrac{1}{2}\right) + \dfrac{1}{\varphi ^2}f_{g}\\= & {} \dfrac{1}{\varphi ^2}f_{g} - \dfrac{1}{4\varphi ^2} = \dfrac{1}{4\varphi ^2}\left( 4f_{g}-1\right) , \end{aligned}$$

with \(f_{g} = \mathrm {E}\left[ W^2_g(u_t)u_t^2|\mathcal{F}_{t-1}\right] \), that is, \(f_{g} = \mathrm {E}\left[ W^2_g(U^2)U^4\right] \) with \(U \sim S(0,1,g)\). From Fang et al. (1990, p. 94) we have \(\mathrm {E}\left[ W_g(u_t)u_t|\mathcal{F}_{t-1}\right] = -1/2\). Therefore,

$$\begin{aligned} \mathbf{K}_{\varphi \varphi }= & {} \sum \limits _{t=m+1}^n\dfrac{1}{4\varphi ^2}\left( 4f_{g}-1\right) = \dfrac{(n-m)}{4\varphi ^2}\left( 4f_{g}-1\right) . \end{aligned}$$
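As a sanity check (not part of the paper's derivation): in the Gaussian case, assuming the density generator \(g(u) \propto e^{-u/2}\) so that \(W_g(u) = g'(u)/g(u) = -1/2\), we get \(f_{g} = \frac{1}{4}\mathrm {E}(U^4) = \frac{3}{4}\) for \(U \sim N(0,1)\), and hence

$$\begin{aligned} \mathbf{K}_{\varphi \varphi } = \dfrac{(n-m)}{4\varphi ^2}\left( 4\cdot \dfrac{3}{4} - 1\right) = \dfrac{(n-m)}{2\varphi ^2}, \end{aligned}$$

recovering the usual Fisher information for the variance of a Gaussian sample with \(\varphi = \sigma ^2\).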

The \(\mathbf{K}_{{\varvec{\delta }}\varphi }\) matrix elements

$$\begin{aligned}&\mathrm {E}\left( \dfrac{\partial \ell _t({\varvec{\delta }},\varphi )}{\partial \delta _i}\dfrac{\partial \ell _t({\varvec{\delta }},\varphi )}{\partial \varphi }|\mathcal{F}_{t-1}\right) \\&\quad = \mathrm {E}\left[ \left( \dfrac{-2W_g(u_t)}{\sqrt{\varphi }}\dfrac{\partial \mu _t}{\partial \delta _i}z_t\right) \left( -\dfrac{1}{2\varphi } - \dfrac{W_g(u_t)}{\varphi }u_t\right) |\mathcal{F}_{t-1}\right] \\&\quad = \mathrm {E}\left[ \dfrac{W_g(u_t)}{\varphi \sqrt{\varphi }}z_t|\mathcal{F}_{t-1}\right] \dfrac{\partial \mu _t}{\partial \delta _i} + \mathrm {E}\left[ \dfrac{2W^2_g(u_t)}{\varphi \sqrt{\varphi }}z_tu_t|\mathcal{F}_{t-1}\right] \dfrac{\partial \mu _t}{\partial \delta _i}\\&\quad = \dfrac{1}{\varphi \sqrt{\varphi }}\left\{ \mathrm {E}\left[ W_g(u_t)z_t|\mathcal{F}_{t-1}\right] + 2\mathrm {E}\left[ W^2_g(u_t)z_tu_t|\mathcal{F}_{t-1}\right] \right\} \dfrac{\partial \mu _t}{\partial \delta _i}\\&\quad = 0. \end{aligned}$$

From Fang et al. (1990, p. 94) we have \(\mathrm {E}\left[ W^2_g(u_t)z_tu_t|\mathcal{F}_{t-1}\right] = 0\) and, in addition, \(\mathrm {E}\left[ W_g(u_t)z_t|\mathcal{F}_{t-1}\right] =0\) by (11).

About this article

Cite this article

Maior, V.Q.S., Cysneiros, F.J.A. SYMARMA: a new dynamic model for temporal data on conditional symmetric distribution. Stat Papers 59, 75–97 (2018). https://doi.org/10.1007/s00362-016-0753-z


Keywords

  • Conditional maximum likelihood
  • Outlier
  • Symmetric distributions
  • Time series