
Inference of Autoregressive Model with Stochastic Exogenous Variable Under Short-Tailed Symmetric Distributions

  • Research Paper
  • Published in: Iranian Journal of Science and Technology, Transactions A: Science

Abstract

In classical autoregressive models, the disturbances are assumed to be normally distributed and the exogenous variable is assumed to be non-stochastic. In practice, however, short-tailed symmetric disturbances occur frequently and the exogenous variable is actually stochastic. In this paper, estimation of the parameters of autoregressive models in which the stochastic exogenous variable and the disturbances both have short-tailed symmetric distributions is considered. To the best of the authors' knowledge, this is the first study in this area. In this situation, the maximum likelihood estimation technique is problematic: it requires a numerical solution, which may suffer from convergence problems and can cause bias, and the statistical properties of the estimators cannot be obtained because the estimators are not explicit functions. It is also known that the least squares estimation technique yields estimators that are neither efficient nor robust. Therefore, the modified maximum likelihood estimation technique is utilized in this study. The resulting estimators are shown to be highly efficient, robust to plausible alternatives having different forms of symmetric short-tailedness in the sample, and explicit functions of the data, which removes the need for a numerical solution. A real-life application is also given.

References

  • Akkaya AD, Tiku ML (2001) Estimating parameters in autoregressive models in non-normal situations: asymmetric innovations. Commun Stat Theory Methods 30:517–536

  • Akkaya AD, Tiku ML (2005a) Robust estimation and hypothesis testing under short-tailedness and inliers. Test 14(1):129–150

  • Akkaya AD, Tiku ML (2005b) Time series AR(1) model for short-tailed distributions. Statistics 39:117–132

  • Akkaya AD, Tiku ML (2008a) Short-tailed distributions and inliers. Test 17:282–296

  • Akkaya AD, Tiku ML (2008b) Autoregressive models with short-tailed symmetric distributions. Statistics 42:207–221

  • Bass FM, Clarke DG (1972) Testing distributed lag models of advertising effect. J Mark Res 9:298–308

  • Bayrak OT, Akkaya AD (2010) Estimating parameters of a multiple autoregressive model by the modified maximum likelihood method. J Comput Appl Math 233:1763–1772

  • Bayrak OT, Akkaya AD (2011) Autoregressive models with stochastic design variables and nonnormal innovations. In: Recent researches in applied mathematics, simulation and modeling, proceedings of the 5th international conference on applied mathematics, simulation, modeling (ASM’11), Corfu Island, Greece, pp 197–201. ISSN:1792-4332

  • Box GEP, Jenkins GM, Reinsel GC (2008) Time series analysis, forecasting and control. Wiley, New Jersey

  • Hand DJ, Daly F, Lunn AD, McConway KJ, Ostrowski E (1994) Small data sets. Chapman and Hall, New York

  • Hoeffding W (1953) On the distribution of the expected value of the order statistics. Ann Math Stat 24:93–100

  • Islam MQ, Tiku ML (2004) Multiple linear regression model under nonnormality. Commun Stat Theory Methods 33:2443–2467

  • Islam MQ, Tiku ML (2010) Multiple linear regression model with stochastic design variables. J Appl Stat 37:923–943

  • Joiner BL, Rosenblatt JR (1971) Some properties of the range in samples from Tukey’s symmetric lambda distributions. J Am Stat Assoc 66:394–400

  • Kendall MG, Stuart A (1979) The advanced theory of statistics, vol 2. Charles-Griffin, London

  • Pearson ES, Tiku ML (1970) Some notes on the relationship between the distributions of central and non-central F. Biometrika 57:175–179

  • Puthenpura S, Sinha NK (1986) Modified maximum likelihood method for the robust estimation of system parameters from very noisy data. Automatica 22:231–235

  • Qumsiyeh SB (2007) Non-normal bivariate distributions: estimation and hypothesis testing. Dissertation, Middle East Technical University

  • Sazak HS, Tiku ML, Islam MQ (2006) Regression analysis with a stochastic design variable. Int Stat Rev 74:77–88

  • Schneider H (1986) Truncated and censored samples from normal populations. Marcel Dekker, New York

  • Thode HC (2002) Testing for normality. Marcel Dekker, New York

  • Tiku ML (1967) Estimating the mean and standard deviation from a censored normal sample. Biometrika 54:155–165

  • Tiku ML (1968) Estimating the parameters of normal and logistic distributions from censored samples. Aust J Stat 10:64–74

  • Tiku ML (1988) Order statistics in goodness-of-fit tests. Commun Stat Theory Methods 17:2369–2387

  • Tiku ML, Akkaya AD (2004) Robust estimation and hypothesis testing. New Age International Publishers/Oscar Publications, New Delhi

  • Tiku ML, Suresh RP (1992) A new method of estimation for location and scale parameters. J Stat Plan Inference 30:281–292

  • Tiku ML, Vaughan DC (1997) Logistic and nonlogistic density functions in binary regression with nonstochastic covariates. Biom J 39:883–898

  • Tiku ML, Vaughan DC (1999) A family of short-tailed symmetric distributions. Technical report, McMaster University

  • Tiku ML, Tan WY, Balakrishnan N (1986) Robust inference. Marcel Dekker, New York

  • Tiku ML, Wong WK, Vaughan DC, Bian G (2000) Time series models in non-normal situations: symmetric innovations. J Time Ser Anal 21:571–596

  • Vaughan DC (2002) The generalized secant hyperbolic distribution and its properties. Commun Stat Theory Methods 31:219–238

  • Vaughan DC, Tiku ML (2000) Estimation and hypothesis testing for a non-normal bivariate distribution with applications. J Math Comput Model 32:53–67

  • Yee TW, Huang K (n.d.) Tikuv {VGAM}. Retrieved from http://finzi.psych.upenn.edu/R/library/VGAM/html/tikuvUC.html

Acknowledgements

We sincerely thank the Chief Editor Dr. Ahmad Sheykhi and the referees for their invaluable comments.

Author information

Corresponding author

Correspondence to Ayşen Dener Akkaya.

Appendices

Appendix 1

Proof of asymptotic equivalence of ML and MML

If f(z) is the probability density function of a random variable z, and \(z_{(i)}\) (1 ≤ i ≤ n) are the order statistics of a random sample of size n with \(t_{(i)} = E\{z_{(i)}\}\) all finite, then (Hoeffding 1953)

$$\mathop {\lim }\limits_{n \to \infty } \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} g\{ t_{(i)} \} = E\{ g(z)\} = \int_{ - \infty }^{\infty } {g(z)} f(z)\,{\text{d}}z,$$
(19)

where g(z) is a function satisfying the condition |g(z)| ≤ h(z), h(z) being convex. Since the nonlinear functions \(h(u_{i} ) = u_{i} /(1 + [\lambda_{2} /2r_{2} ]u_{i}^{2} )\) and \(g(z_{i} ) = z_{i} /(1 + [\lambda_{1} /2r_{1} ]z_{i}^{2} ),\) given in Eq. (6), are bounded, Eq. (19) is satisfied, yielding the asymptotic equivalence of the modified likelihood equations and the likelihood equations.
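
The limit in Eq. (19) can also be illustrated numerically. The sketch below is ours, not part of the paper: it takes z to be standard normal, uses Blom's approximation for the expected order statistics \(t_{(i)}\), and uses a bounded weight function of the form appearing in Appendix 3 with the illustrative values r = 2 and λ = 0.5; the order-statistic average approaches the integral as n grows.

```python
# A minimal numerical illustration of Eq. (19), assuming standard normal z
# and Blom's approximation for the expected order statistics t_(i).
# r = 2 and lambda = 0.5 are illustrative values, not the paper's.
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

r, lam = 2, 0.5
c = lam / (2 * r)

def g(z):
    # bounded nonlinear function, cf. w*_{1i} in Appendix 3
    return 1.0 - lam * (1.0 - c * z**2) / (1.0 + c * z**2) ** 2

# right-hand side of Eq. (19): E{g(z)} when f(z) is the standard normal density
rhs, _ = quad(lambda z: g(z) * norm.pdf(z), -np.inf, np.inf)

# left-hand side for increasing n, with t_(i) ~= Phi^{-1}((i - 0.375)/(n + 0.25))
for n in (20, 100, 1000, 10000):
    i = np.arange(1, n + 1)
    t = norm.ppf((i - 0.375) / (n + 0.25))   # Blom's approximation to E{z_(i)}
    lhs = g(t).mean()
    print(n, lhs, rhs)
```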

Appendix 2

2.1 Sample Information Matrix

The sample information matrix is −1 times the matrix of second derivatives of ln L evaluated at \(\mu_{1} = \hat{\mu }_{1} ,\) \(\sigma_{1} = \hat{\sigma }_{1} ,\) \(\gamma_{1} = \hat{\gamma }_{1} ,\) etc. Asymptotic variances and covariances are obtained from the inverse of this matrix. They provide accurate approximations; see, for example, the [100,000/n] Monte Carlo run results given in Table 4, where \(\gamma_{0} = 1.0,\) \(\gamma_{1} = 1.5,\) \(\phi = 0.5,\) \(\mu_{1} = 0.0\), and \(\sigma_{1}\) and \(\sigma\) are both 1.0. As can be seen, the simulated variances given in Table 1 and those obtained from the sample information matrix are close to each other.

Table 4 Variances of the MML estimators obtained from sample information matrix
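
As a generic illustration of this procedure (not the paper's closed-form expressions), the sketch below computes the sample information matrix by numerical differentiation of a log-likelihood and inverts it to obtain the asymptotic covariance matrix; `loglik`, the parameter vector and the step size h are placeholders.

```python
# A generic sketch (ours) of reading asymptotic variances off the inverse of
# the sample information matrix, i.e. -1 times the Hessian of ln L at the
# estimates. `loglik` is a placeholder for the model's log-likelihood.
import numpy as np

def observed_information(loglik, theta_hat, h=1e-5):
    """Numerical -Hessian of loglik at theta_hat (central differences)."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    k = theta_hat.size
    H = np.zeros((k, k))
    for a in range(k):
        for b in range(k):
            e_a = np.zeros(k); e_a[a] = h
            e_b = np.zeros(k); e_b[b] = h
            H[a, b] = (loglik(theta_hat + e_a + e_b)
                       - loglik(theta_hat + e_a - e_b)
                       - loglik(theta_hat - e_a + e_b)
                       + loglik(theta_hat - e_a - e_b)) / (4.0 * h * h)
    return -H

def asymptotic_covariance(loglik, theta_hat):
    # inverse of the sample information matrix; its diagonal gives the
    # asymptotic variances, the off-diagonal entries the covariances
    return np.linalg.inv(observed_information(loglik, theta_hat))
```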

Appendix 3

3.1 Asymptotic Variances of the Related Parameters

Write \(w_{1i}^{*} = 1 - \lambda_{1} \{ 1 - (\lambda_{1} /2r_{1} )z_{i}^{2} \} /\{ 1 + (\lambda_{1} /2r_{1} )z_{i}^{2} \}^{2} ,\) where the \(z_{i}\) are iid and have the same distribution as z = ε/σ in Eq. (3).

$$E(w_{1i}^{*} ) = \int_{ - \infty }^{\infty } {\left\{ {1 - \lambda_{1} \frac{{1 - (\lambda_{1} /2r_{1} )z^{2} }}{{\left\{ {1 + (\lambda_{1} /2r_{1} )z^{2} } \right\}^{2} }}} \right\}} f(z)\,{\text{d}}z = 1 - \lambda_{1} E(0,2;r_{1} ,\lambda_{1} ) + \frac{{\lambda_{1}^{2} }}{{2r_{1} }}E(1,2;r_{1} ,\lambda_{1} ) = Q$$

since \(\frac{1}{{\sqrt {2\pi } }}\int_{ - \infty }^{\infty } {z^{2j} } \exp \left\{ { - \frac{{z^{2} }}{2}} \right\}\,{\text{d}}z = \frac{(2j)!}{{2^{j} (j)!}}\) and

$$E(p,q;r,\lambda ) = E(z^{2p} \{ 1 + (\lambda /2r)z^{2} \}^{ - q} ) = \frac{{\left[ {\mathop \sum \nolimits_{j = 0}^{r - q} \left( {\begin{array}{*{20}c} {r - q} \\ j \\ \end{array} } \right)\left( {\frac{\lambda }{2r}} \right)^{j} \{ 2(p + j)\} !/2^{p + j} (p + j)!} \right]}}{{\left[ {\mathop \sum \nolimits_{j = 0}^{r} \left( {\begin{array}{*{20}c} r \\ j \\ \end{array} } \right)\left( {\frac{\lambda }{2r}} \right)^{j} \{ 2j\} !/2^{j} j!} \right]}}.$$
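
The closed form above can be checked numerically. The sketch below is ours: it assumes the short-tailed symmetric density is proportional to \(\{1+(\lambda/2r)z^{2}\}^{r}\exp(-z^{2}/2)\) (whose normalizing constant is exactly the denominator above) and uses the illustrative values r = 2 and λ = 0.5 to evaluate E(0,2;r,λ), E(1,2;r,λ) and hence Q.

```python
# A sketch (ours) comparing the closed-form E(p,q;r,lambda) with numerical
# integration under the assumed short-tailed symmetric density
# f(z) proportional to {1 + (lambda/2r) z^2}^r exp(-z^2/2).
import numpy as np
from math import comb, factorial
from scipy.stats import norm
from scipy.integrate import quad

r, lam = 2, 0.5          # illustrative values, not the paper's
c = lam / (2 * r)

def even_moment(j):
    # (2j)! / (2^j j!), the 2j-th moment of the standard normal kernel
    return factorial(2 * j) // (2 ** j * factorial(j))

def E_closed(p, q):
    num = sum(comb(r - q, j) * c**j * even_moment(p + j) for j in range(r - q + 1))
    den = sum(comb(r, j) * c**j * even_moment(j) for j in range(r + 1))
    return num / den

def E_numeric(p, q):
    dens = lambda z: (1 + c * z**2) ** r * norm.pdf(z)
    const, _ = quad(dens, -np.inf, np.inf)
    val, _ = quad(lambda z: z ** (2 * p) * (1 + c * z**2) ** (-q) * dens(z),
                  -np.inf, np.inf)
    return val / const

# Q = 1 - lambda*E(0,2;r,lambda) + (lambda^2 / 2r)*E(1,2;r,lambda), as above
Q = 1 - lam * E_closed(0, 2) + lam**2 / (2 * r) * E_closed(1, 2)
print(E_closed(0, 2), E_numeric(0, 2))
print(E_closed(1, 2), E_numeric(1, 2))
print("Q =", Q)
```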

It may be noted that \(w_{1i}^{*}\) is essentially an increasing function of \(z_{i}^{2}\) and is bounded. For large n, \(z_{(i)} \cong t_{(i)}\) since the variance of \(z_{(i)}\) tends to zero as n tends to infinity; hence, the following asymptotic results are obtained:

$$\frac{m}{n} = \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} \beta_{1i} = \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} \left[ {1 - \lambda_{1} \frac{{1 - (\lambda_{1} /2r_{1} )t_{(i)}^{2} }}{{\left\{ {1 + (\lambda_{1} /2r_{1} )t_{(i)}^{2} } \right\}^{2} }}} \right] \cong E\left( {\frac{1}{n}\mathop \sum \limits_{i = 1}^{n} w_{1(i)}^{*} } \right) = \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} E(w_{1i}^{*} ) = Q$$

and since \(u_{i}\) and \(z_{i}\) are independent of each other and complete sums are invariant to ordering (a simulation check of this step is sketched after the display below),

$$\bar{u}_{[i]} = \frac{1}{m}\mathop \sum \limits_{i = 1}^{n} \beta_{1i} u_{[i]} \cong \frac{1}{Q}E\left( {\frac{1}{n}\mathop \sum \limits_{i = 1}^{n} w_{1(i)}^{*} u_{[i]} } \right) = \frac{1}{Qn}\mathop \sum \limits_{i = 1}^{n} E(w_{1i}^{*} u_{i} ) = \frac{1}{Q}E(w_{1i}^{*} )E(u_{i} ) = E(u_{i} ).$$
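
The concomitant argument above can be checked by simulation. The sketch below is ours; it assumes, purely for illustration, that z is standard normal and u is Exponential(1), takes r = 2 and λ = 0.5, and uses Blom's approximation for \(t_{(i)}\); the β-weighted mean of the concomitants settles near E(u) = 1.

```python
# A simulation sketch (ours): because u_i and z_i are independent and complete
# sums are invariant to ordering, the beta-weighted mean of the concomitants
# u_[i] estimates E(u_i). All distributional choices below are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
r, lam = 2, 0.5
c = lam / (2 * r)
n, reps = 200, 2000

i = np.arange(1, n + 1)
t = norm.ppf((i - 0.375) / (n + 0.25))                  # t_(i) (Blom)
beta = 1 - lam * (1 - c * t**2) / (1 + c * t**2) ** 2   # beta_{1i}
m = beta.sum()

vals = []
for _ in range(reps):
    z = rng.standard_normal(n)
    u = rng.exponential(1.0, n)      # E(u) = 1
    u_conc = u[np.argsort(z)]        # concomitants u_[i], ordered by z_(i)
    vals.append((beta * u_conc).sum() / m)

print(np.mean(vals))   # close to E(u) = 1
```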

Similarly,

$$\begin{aligned} & \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} \beta_{1i} \left( {u_{\left[ i \right]} - \bar{u}_{\left[ . \right]} } \right)^{2} \cong E\left[ {\frac{1}{n}\mathop \sum \limits_{i = 1}^{n} w_{1\left( i \right)}^{*} \left( {u_{\left[ i \right]} - \bar{u}_{\left[ . \right]} } \right)^{2} } \right] \\ & \quad = \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} E\left( {w_{1i}^{*} } \right)E\left( {u_{[i]} - \bar{u}_{[.]} } \right)^{2} = V\left( u \right)Q;\quad V(u) = \mu_{2,2} \\ \end{aligned}$$

and

$$\begin{aligned} & \frac{1}{n}\left\{ {\mathop \sum \limits_{i = 1}^{n} \beta_{1i} y_{[i] - 1}^{2} - \frac{1}{{m_{1} }}\left( {\mathop \sum \limits_{i = 1}^{n} \beta_{1i} y_{[i] - 1} } \right)^{2} } \right\} = \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} \beta_{1i} \left( {y_{[i] - 1} - \bar{y}_{[i] - 1} } \right)^{2} \\ & \quad \cong E\left[ {\frac{1}{n}\mathop \sum \limits_{i = 1}^{n} w_{1\left( i \right)}^{*} \left( {y_{[i] - 1} - \bar{y}_{[i] - 1} } \right)^{2} } \right] \\ & \quad = \frac{1}{n}\mathop \sum \limits_{i = 1}^{n} E\left( {w_{1i}^{*} } \right)E\left( {y_{i - 1} - \bar{y}_{i - 1} } \right)^{2} \\ & \quad = QV(y_{i - 1} ); \quad V(y_{i - 1} ) = \frac{{\gamma_{1}^{2} V(u) + V(\varepsilon )}}{{(1 - \phi^{2} )}} , \\ \end{aligned}$$

where \(\bar{y}_{[i] - 1} = \frac{1}{{m_{1} }}\sum\nolimits_{i = 1}^{n} {\beta_{1i} y_{[i] - 1} } .\)

More specifically,

$$V(y_{i - 1} ) = V(y_{i} ) = \frac{{\gamma_{1}^{2} V(u) + V(\varepsilon )}}{{(1 - \phi^{2} )}} = \frac{{\gamma_{1}^{2} \mu_{2,2} + \sigma^{2} \mu_{2,1} }}{{(1 - \phi^{2} )}}$$
(20)

since −1 < ϕ < 1, so that Y is stationary.
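
Equation (20) can be checked by a short simulation. The sketch below is ours and makes two assumptions for illustration only: the model is taken in the form \(y_{i} = \gamma_{0} + \gamma_{1} u_{i} + \phi y_{i-1} + \varepsilon_{i}\) with iid \(u_{i}\) and \(\varepsilon_{i}\) (the form consistent with Eq. (20)), and normal u and ε are used, since the variance identity only requires iid finite-variance inputs; the parameter values follow Table 4.

```python
# A simulation sketch (ours) checking the stationary variance in Eq. (20).
# Assumed model form: y_i = gamma0 + gamma1*u_i + phi*y_{i-1} + eps_i with
# iid u and eps; normal inputs and the Table 4 parameter values are used
# purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
gamma0, gamma1, phi = 1.0, 1.5, 0.5
sigma1, sigma = 1.0, 1.0            # standard deviations of u and eps
n, burn = 200_000, 1_000

u = rng.normal(0.0, sigma1, n + burn)
eps = rng.normal(0.0, sigma, n + burn)
y = np.zeros(n + burn)
for i in range(1, n + burn):
    y[i] = gamma0 + gamma1 * u[i] + phi * y[i - 1] + eps[i]

empirical = y[burn:].var()
theoretical = (gamma1**2 * sigma1**2 + sigma**2) / (1 - phi**2)
print(empirical, theoretical)       # both close to 4.33
```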

Cite this article

Bayrak, Ö.T., Akkaya, A.D. Inference of Autoregressive Model with Stochastic Exogenous Variable Under Short-Tailed Symmetric Distributions. Iran J Sci Technol Trans Sci 42, 2105–2116 (2018). https://doi.org/10.1007/s40995-017-0448-x
