
On Asymmetric Regression Models with Allowance for Temporal Dependence

Original Article · Journal of Statistical Theory and Practice

Abstract

Log-symmetric regression models are particularly useful when the response variable is continuous, strictly positive, and asymmetric. In this paper, we propose a class of log-symmetric regression models with correlated errors. The proposed models provide a novel alternative to existing log-symmetric regression models owing to their flexibility in accommodating temporal correlation. We discuss some properties of the proposed model, parameter estimation by the conditional maximum likelihood method, and goodness of fit. We also provide expressions for the observed Fisher information matrix. Two Monte Carlo simulation studies are presented to evaluate the performance of the conditional maximum likelihood estimators and of two types of residuals. Finally, a full analysis of a real-world environmental data set illustrates the proposed approach.



Author information

Corresponding author

Correspondence to Helton Saulo.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix 1

1.1 Stationarity Conditions

In this subsection, we take \(\Lambda\), defined in (5), to be a linear function.

Theorem 1

The marginal mean of \(Y_t\) in the log-symmetric-ARMAX(p, q) model is given by

$$\begin{aligned} \mathrm{E}[Y_t] = \Lambda ({\varvec{x}}_t^{\top }\varvec{\beta }), \end{aligned}$$

provided that \(\Phi (B):\mathbb {R}\rightarrow \mathbb {R}\) is an invertible operator (the autoregressive polynomial) defined by \(\Phi (B) = -\sum _{i=0}^{p}\kappa _i B^i\) with \(\kappa _0=-1\), and \(B^i\) is the lag operator, i.e., \(B^i y_t = y_{t-i}\).

Proof

Since \(\Lambda\) is linear, using (5) and (6) we have

$$\begin{aligned} Y_t&= \lambda _t+[Y_t-\lambda _t] \nonumber \\&= \Lambda ({\varvec{x}}_t^{\top }\varvec{\beta }) + \Lambda \left( \sum _{l=1}^p \kappa _l\, [Y_{t-l}- \Lambda ({\varvec{x}}_{t-l}^{\top }\varvec{\beta })] + \sum _{j=1}^q \zeta _j\, r_{t-j} + r_t \right) . \end{aligned}$$
(12)

Let \(\Theta (B) = \sum _{i=0}^{q}\zeta _i B^i\), with \(\zeta _0=1\), be the moving average polynomial. Since \(\Theta (B)\Phi (B)^{-1}=\sum _{i=0}^{\infty }\psi _i B^i\) with \(\psi _0=1\), using (12), the log-symmetric-ARMAX(p, q) model can be rewritten as

$$\begin{aligned} w_t = \Lambda \left( \sum _{l=1}^{p}\kappa _l\, w_{t-l}+ \sum _{j=1}^{q}\zeta _j\, r_{t-j} + r_t \right) = \Lambda \big (\Theta (B)\Phi (B)^{-1} r_t\big ) = \Theta (B)\Phi (B)^{-1} \Lambda (r_t), \end{aligned}$$
(13)

where the error \(r_t=Y_t-\lambda _{t}\) is a martingale difference sequence (MDS) and \(w_t=Y_t-\Lambda ({\varvec{x}}_t^{\top }\varvec{\beta })\). Since \(\Lambda\) is linear and \(\mathrm{E}[r_t]=0\) for all t, we have \(\mathrm{E}[\Lambda (r_t)]=0\). Therefore, using (13), \(\mathrm{E}[w_t]=0\) for all t. Then

$$\begin{aligned} \mathrm{E}[Y_t] = \Lambda ({\varvec{x}}_t^{\top }\varvec{\beta })+\mathrm{E}[w_t] = \Lambda ({\varvec{x}}_t^{\top }\varvec{\beta }), \end{aligned}$$

whenever the series \(\Theta (B)\Phi (B)^{-1}r_t\) converges absolutely. \(\square\)
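For computation, the coefficients in the expansion \(\Theta (B)\Phi (B)^{-1}=\sum _{i=0}^{\infty }\psi _i B^i\) satisfy the standard ARMA recursion \(\psi _0=1\) and \(\psi _i=\zeta _i+\sum _{l=1}^{\min (i,p)}\kappa _l\, \psi _{i-l}\), with \(\zeta _i=0\) for \(i>q\). The following is a minimal Python sketch of this recursion; the helper name psi_weights is ours, not from the paper.

```python
import numpy as np

def psi_weights(kappa, zeta, n_terms=50):
    """psi_i in Theta(B)Phi(B)^{-1} = sum_i psi_i B^i, with
    Phi(B) = 1 - sum_l kappa_l B^l and Theta(B) = 1 + sum_j zeta_j B^j."""
    p, q = len(kappa), len(zeta)
    psi = np.zeros(n_terms)
    psi[0] = 1.0
    for i in range(1, n_terms):
        psi[i] = zeta[i - 1] if i <= q else 0.0   # zeta_i, zero past lag q
        for l in range(1, min(i, p) + 1):         # AR feedback kappa_l * psi_{i-l}
            psi[i] += kappa[l - 1] * psi[i - l]
    return psi

# example: p = q = 1 with kappa_1 = 0.5 and zeta_1 = 0.3
print(psi_weights([0.5], [0.3], 6))  # [1.0, 0.8, 0.4, 0.2, 0.1, 0.05]
```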

Theorem 2

Assuming that \(\Theta (B)\Phi (B)^{-1}=\sum _{i=0}^{\infty }\psi _i B^i\) and that \(\Phi (B)\) is invertible, the marginal variance of \(Y_t\) in the log-symmetric-ARMAX(p, q) model is given by

$$\begin{aligned} \mathrm {Var}[Y_t] = \sum _{i=0}^{\infty }\psi _i^2\, \mathrm{E}\big [\mathrm {Var}[\Lambda (Y_{t-i})|\mathcal {B}_{t-i-1}]\big ], \end{aligned}$$

where \({\mathcal {B}}_{t}=\sigma (\Lambda (Y_{t}),\Lambda (Y_{t-1}),\ldots )\) is the \(\sigma\)-field generated by the information up to time t.

Proof

Since \(\mathrm{E}[r_t|\mathcal {A}_{t-1}]=0\), a.s., for all t, and \(\mathrm {Cov}[r_s,r_t]=0\) for all \(t\ne s\), following the notation of Theorem 1, we have

$$\begin{aligned} \mathrm {Var}[Y_t]&= \mathrm {Var}[w_t] {\mathop {=}\limits ^{(13)}} \mathrm {Var}[\Theta (B)\,\Phi (B)^{-1} \Lambda (r_t)] = \mathrm {Var}\bigg [\sum _{i=0}^{\infty }\psi _i B^i \Lambda (r_t)\bigg ] \nonumber \\&= \sum _{i=0}^{\infty }\psi _i^2\, \mathrm{Var}[\Lambda (r_{t-i})]. \end{aligned}$$
(14)

On the other hand, the law of total variance states that

$$\begin{aligned} \mathrm {Var}[\Lambda (r_t)]&= \mathrm{E}\big [\mathrm {Var}[\Lambda (r_t)|\mathcal {B}_{t-1}]\big ] + \mathrm {Var}\big [\mathrm{E}[\Lambda (r_t)|\mathcal {B}_{t-1}]\big ] \nonumber \\&= \mathrm{E}\big [\mathrm {Var}[\Lambda (Y_t)|\mathcal {B}_{t-1}]\big ]. \end{aligned}$$
(15)

The last equality in (15) holds because \(\mathrm{E}[\Lambda (r_t)|\mathcal {B}_{t-1}]=0\) and because \(\lambda _t\) is \(\mathcal {B}_{t-1}\)-measurable, so that \(\mathrm {Var}[\Lambda (r_t)|\mathcal {B}_{t-1}]=\mathrm {Var}[\Lambda (Y_t)|\mathcal {B}_{t-1}]\). Combining (14) and (15), the proof follows. \(\square\)
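When the conditional variance \(\mathrm{E}[\mathrm {Var}[\Lambda (Y_t)|\mathcal {B}_{t-1}]]\) does not depend on t, say equal to \(\sigma ^2\), Theorem 2 reduces to \(\mathrm {Var}[Y_t]=\sigma ^2\sum _{i=0}^{\infty }\psi _i^2\). A quick numerical check under that simplifying assumption, reusing psi_weights from the sketch above:

```python
kappa, zeta, sigma2 = [0.5], [0.3], 1.0
psi = psi_weights(kappa, zeta, n_terms=500)   # truncate the infinite sum
var_y = sigma2 * np.sum(psi**2)

# for p = q = 1 this agrees with the classical ARMA(1,1) variance formula
closed_form = sigma2 * (1 + 2 * kappa[0] * zeta[0] + zeta[0]**2) / (1 - kappa[0]**2)
print(var_y, closed_form)                     # both approximately 1.8533
```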

Theorem 3

The covariance and correlation of \(Y_t\) and \(Y_{t-k}\) in the log-symmetric-ARMAX(p, q) model are given by

$$\begin{aligned} \mathrm {Cov}[Y_t,Y_{t-k}]&= \sum _{i=0}^{\infty } \psi _i \psi _{i-k}\, \mathrm{E}\big [\mathrm {Var}[\Lambda (Y_{t-i})|\mathcal {B}_{t-i-1}]\big ], \quad k>0, \\ \mathrm {Corr}[Y_t,Y_{t-k}]&= { \sum _{i=0}^{\infty } \psi _i \psi _{i-k}\, \mathrm{E}\big [\mathrm {Var}[\Lambda (Y_{t-i})|\mathcal {B}_{t-i-1}]\big ] \over \prod _{j\in \{0,k\}} \sqrt{ \sum _{i=0}^{\infty } \psi _i^2\, \mathrm{E}\big [\mathrm {Var}[\Lambda (Y_{t-j-i})|\mathcal {B}_{t-j-i-1}]\big ]} }, \end{aligned}$$

respectively, where \({\mathcal {B}}_{t}=\sigma (\Lambda (Y_{t}),\Lambda (Y_{t-1}),\ldots )\) is the \(\sigma\)-field generated by the information up to time t.

Proof

Since \(w_t=Y_t-\Lambda ({\varvec{x}}_t^{\top }\varvec{\beta })\) and \(\mathrm {Cov}[r_s,r_t]=0\) for all \(t\ne s\),

$$\begin{aligned} \mathrm {Cov}[Y_t,Y_{t-k}]&= \mathrm {Cov}[w_t,w_{t-k}] {\mathop {=}\limits ^{(13)}} \mathrm {Cov}\big [\Theta (B)\Phi (B)^{-1} \Lambda (r_t), \Theta (B)\Phi (B)^{-1} \Lambda (r_{t-k})\big ] \\&= \sum _{i=0}^{\infty } \psi _i \psi _{i-k}\, \mathrm {Var}[\Lambda (r_{t-i})]. \end{aligned}$$

Using (15), the right-hand side equals \(\sum _{i=0}^{\infty } \psi _i \psi _{i-k}\, \mathrm{E}\big [\mathrm {Var}[\Lambda (Y_{t-i})|\mathcal {B}_{t-i-1}]\big ]\), and the proof follows. \(\square\)
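Under the same time-constant conditional variance assumption as above, the two factors in the denominator of Theorem 3 coincide, so the autocorrelation reduces to \(\sum _{i\ge k}\psi _i\psi _{i-k}\big /\sum _{i\ge 0}\psi _i^2\); terms with \(i<k\) vanish since \(\psi _j=0\) for \(j<0\). A sketch, again with hypothetical helper names:

```python
def acf_from_psi(psi, k, sigma2=1.0):
    """Cov and Corr of Y_t and Y_{t-k} from the psi-weights, assuming a
    time-constant conditional variance sigma2 (Theorem 3 allows it to vary)."""
    cov = sigma2 * sum(psi[i] * psi[i - k] for i in range(k, len(psi)))
    var = sigma2 * float(np.sum(psi**2))
    return cov, cov / var

psi = psi_weights([0.5], [0.3], n_terms=500)
print(acf_from_psi(psi, 1))   # approximately (1.2267, 0.6619) at lag 1
```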

Appendix 2

The Hessian matrix is given by

$$\begin{aligned} \ddot{\varvec{\ell }}(\varvec{\theta }^*) = \begin{bmatrix} \displaystyle {\partial ^2 \ell _{0,1}\over \partial \beta _{r}^2}(\varvec{\theta }^*) &{}\displaystyle {\partial ^2 \ell _{0,1}\over \partial \beta _{r} \partial \tau _s}(\varvec{\theta }^*) &{}\displaystyle {\partial ^2 \ell _{0,1}\over \partial \beta _{r} \partial \kappa _l}(\varvec{\theta }^*) &{}\displaystyle {\partial ^2 \ell _{0,1}\over \partial \beta _{r} \partial \zeta _j}(\varvec{\theta }^*) \\ \displaystyle {\partial ^2 \ell _{0,1}\over \partial \tau _s \partial \beta _{r}}(\varvec{\theta }^*) &{}\displaystyle {\partial ^2 \ell _{0,1}\over \partial \tau _s^2}(\varvec{\theta }^*) &{}\displaystyle {\partial ^2 \ell _{0,1}\over \partial \tau _s \partial \kappa _l}(\varvec{\theta }^*) &{}\displaystyle {\partial ^2 \ell _{0,1}\over \partial \tau _s \partial \zeta _j}(\varvec{\theta }^*) \\ \displaystyle {\partial ^2 \ell _{0,1}\over \partial \kappa _l \partial \beta _{r}}(\varvec{\theta }^*) &{}\displaystyle {\partial ^2 \ell _{0,1}\over \partial \kappa _l\partial \tau _s}(\varvec{\theta }^*) &{}\displaystyle {\partial ^2 \ell _{0,1}\over \partial \kappa _l^2}(\varvec{\theta }^*) &{}\displaystyle {\partial ^2 \ell _{0,1}\over \partial \kappa _l \partial \zeta _j}(\varvec{\theta }^*) \\ \displaystyle {\partial ^2 \ell _{0,1}\over \partial \zeta _j \partial \beta _{r}}(\varvec{\theta }^*) &{}\displaystyle {\partial ^2 \ell _{0,1}\over \partial \zeta _j\partial \tau _s}(\varvec{\theta }^*) &{}\displaystyle {\partial ^2 \ell _{0,1}\over \partial \zeta _j\partial \kappa _l}(\varvec{\theta }^*) &{}\displaystyle {\partial ^2 \ell _{0,1}\over \partial \zeta _j^2}(\varvec{\theta }^*) \end{bmatrix}, \end{aligned}$$

where \(\varvec{\theta }^*=(\beta _r,\tau _s,\kappa _l,\zeta _j)\), with \(r=0,\ldots ,k\), \(s=0,\ldots ,l\), \(l=1,\ldots ,p\) and \(j=1,\ldots ,q\). Since the function \(\ell _{0,1}(\varvec{\theta }^*)\) has continuous second partial derivatives at a given point \(\varvec{\theta }^*\) in \(\mathbb {R}^{4}\), it follows from Schwarz's theorem that the mixed partial derivatives of this function commute at that point, that is,

$$\begin{aligned} {\partial ^2 \ell _{0,1}\over \partial a \partial b}(\varvec{\theta }^*) = {\partial ^2 \ell _{0,1}\over \partial b \partial a}(\varvec{\theta }^*), \quad \text {for} \ a\ne b \ \text {in} \ \{\beta _r,\tau _s,\kappa _l,\zeta _j\}. \end{aligned}$$
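In practice, the analytic Hessian above can be cross-checked numerically. The following generic central-difference sketch is ours, not part of the paper; it approximates the Hessian of any scalar log-likelihood and should return a symmetric matrix, as Schwarz's theorem requires:

```python
import numpy as np

def numerical_hessian(ell, theta, h=1e-5):
    """Central-difference Hessian of a scalar function ell at theta; a generic
    cross-check for the analytic matrix above."""
    d = len(theta)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            ei = np.zeros(d); ei[i] = h
            ej = np.zeros(d); ej[j] = h
            H[i, j] = (ell(theta + ei + ej) - ell(theta + ei - ej)
                       - ell(theta - ei + ej) + ell(theta - ei - ej)) / (4 * h**2)
    return H

# e.g. numerical_hessian(lambda th: -0.5 * th @ th, np.array([1.0, 2.0]))
# returns approximately -I, and is symmetric
```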

It can easily be seen that the first order partial derivatives of \(\ell _{0,1}(\varvec{\theta }^*)\) are

$$\begin{aligned} \begin{array}{llllll} \displaystyle {\partial \ell _{0,1}\over \partial a} (\varvec{\theta }^*) = {1\over g(z^2_t)}\, {\partial g(z^2_t)\over \partial a}, \ a\in \{\beta _r,\kappa _l,\zeta _j\}, \quad&\displaystyle {\partial \ell _{0,1}\over \partial \tau _{s}} (\varvec{\theta }^*) = -{1\over 2\phi _t}\, {\partial \phi _t\over \partial \tau _s}\, + {1\over g(z^2_t)}\, {\partial g(z^2_t)\over \partial \tau _s}, \end{array} \end{aligned}$$
(16)

the second order partial derivatives are

$$\begin{aligned} {\partial ^2 \ell _{0,1}\over \partial a^2} (\varvec{\theta }^*)&= - {1\over [g(z^2_t)]^2}\, \Big [{\partial g(z^2_t)\over \partial a}\Big ]^2 + {1\over g(z^2_t)}\, {\partial ^2 g(z^2_t)\over \partial a^2}, \quad a\in \{\beta _r,\kappa _l,\zeta _j\}, \nonumber \\ {\partial ^2 \ell _{0,1}\over \partial \tau _{s}^2} (\varvec{\theta }^*)&= {1\over 2\phi _t^2}\, \left( {\partial \phi _t\over \partial \tau _s}\right) ^2\, - {1\over 2\phi _t}\, {\partial ^2 \phi _t\over \partial \tau _s^2}\, - {1\over [g(z^2_t)]^2}\, \Big [{\partial g(z^2_t)\over \partial \tau _s}\Big ]^2 + {1\over g(z^2_t)}\, {\partial ^2 g(z^2_t)\over \partial \tau ^2_s}, \end{aligned}$$
(17)

and the mixed partial derivatives are given by

$$\begin{aligned} {\partial ^2 \ell _{0,1}\over \partial \beta _r\partial a} (\varvec{\theta }^*)&= -{1\over [g(z^2_t)]^2}\, {\partial g(z^2_t) \over \partial \beta _r}\, {\partial g(z^2_t)\over \partial a} + {1\over g(z^2_t)}\, {\partial ^2 g(z^2_t)\over \partial \beta _r \partial a}, \quad a\in \{\tau _s,\kappa _l,\zeta _j\}, \\ {\partial ^2 \ell _{0,1}\over \partial \tau _s\partial b} (\varvec{\theta }^*)&= -{1\over [g(z^2_t)]^2}\, {\partial g(z^2_t) \over \partial \tau _s}\, {\partial g(z^2_t)\over \partial b} + {1\over g(z^2_t)}\, {\partial ^2 g(z^2_t)\over \partial \tau _s \partial b}, \quad b\in \{\kappa _l,\zeta _j\}, \\ {\partial ^2 \ell _{0,1}\over \partial \kappa _l\partial \zeta _{j}} (\varvec{\theta }^*)&= -{1\over [g(z^2_t)]^2}\, {\partial g(z^2_t) \over \partial \kappa _l}\, {\partial g(z^2_t)\over \partial \zeta _j} + {1\over g(z^2_t)}\, {\partial ^2 g(z^2_t)\over \partial \kappa _l \partial \zeta _j}. \end{aligned}$$

Let

$$\begin{aligned} \eta _t:={z_t\over \sqrt{\phi _t}} = {[\log (y_t)-\log (\lambda _t)]\over \phi _t}, \quad t=m+1,\ldots ,n. \end{aligned}$$

The first order partial derivatives of g are

$$\begin{aligned} \begin{array}{llllll} \displaystyle {\partial g(z^2_t)\over \partial a} = - {2}\, {\eta _t\over \lambda _t} {\partial \lambda _t\over \partial a}\, g'(z^2_t), \quad a\in \{\beta _r,\kappa _l,\zeta _j\},&\qquad \displaystyle {\partial g(z^2_t)\over \partial \tau _s} = -\eta _t^2\, {\partial \phi _t\over \partial \tau _s}\, g'(z^2_t), \end{array} \end{aligned}$$
(18)

the second order partial derivatives are, for each \(a\in \{\beta _r,\kappa _l,\zeta _j\}\),

$$\begin{aligned} {\partial ^2 g(z^2_t)\over \partial a^2}&= 2 \Big [ {1\over \lambda _t^2} \left( {1\over \phi _t}+\eta _t\right) \left( {\partial \lambda _t \over \partial a} \right) ^2 - {\eta _t\over \lambda _t} {\partial ^2 \lambda _t \over \partial a^2} \Big ] g'(z_t^2) + 4\left( {\eta _t\over \lambda _t}\right) ^2 \left( {\partial \lambda _t \over \partial a}\right) ^2 g''(z_t^2), \nonumber \\ {\partial ^2 g(z^2_t)\over \partial \tau ^2_s}&= \eta _t^2 \Big [ {2\over \phi _t} \left( {\partial \phi _t\over \partial \tau _s }\right) ^2 - {\partial ^2\phi _t\over \partial \tau _s^2 } \Big ] g'(z^2_t) + \eta _t^4 \left( {\partial \phi _t\over \partial \tau _s}\right) ^2 g''(z^2_t), \end{aligned}$$
(19)

and the mixed partial derivatives are given by

$$\begin{aligned} {\partial ^2 g(z^2_t)\over \partial \beta _r \partial \tau _s}&= {2\eta _t\over \lambda _t} \Big [ {1\over \phi _t} g'(z_t^2) + \eta _t^2 g''(z_t^2) \Big ] {\partial \lambda _t\over \partial \beta _r} {\partial \phi _t\over \partial \tau _s}, \\ {\partial ^2 g(z^2_t)\over \partial \beta _r \partial a}&= {2\over \lambda _t} \Big \{ \Big [ {1\over \lambda _t} \left( {1\over \phi _t}+\eta _t\right) {\partial \lambda _t\over \partial \beta _r } {\partial \lambda _t\over \partial a } - \eta _t {\partial ^2 \lambda _t\over \partial \beta _r\partial a} \Big ] g'(z_t^2) + 2{\eta _t^2\over \lambda _t} {\partial \lambda _t\over \partial \beta _r} {\partial \lambda _t\over \partial a} g''(z_t^2) \Big \}, \quad a\in \{\kappa _l,\zeta _j\}, \\ {\partial ^2 g(z^2_t)\over \partial \tau _s \partial b}&= {2\eta _t\over \lambda _t} \Big [ {1\over \phi _t} g'(z_t^2) + \eta _t^2 g''(z_t^2) \Big ] {\partial \phi _t\over \partial \tau _s} {\partial \lambda _t\over \partial b}, \quad b\in \{\kappa _l,\zeta _j\}, \\ {\partial ^2 g(z^2_t)\over \partial \kappa _l \partial \zeta _j}&= {2\over \lambda _t} \Big \{ \Big [ {1\over \lambda _t} \left( {1\over \phi _t}+\eta _t\right) {\partial \lambda _t\over \partial \kappa _l } {\partial \lambda _t\over \partial \zeta _j } - \eta _t {\partial ^2 \lambda _t\over \partial \kappa _l\partial \zeta _j} \Big ] g'(z_t^2) + 2{\eta _t^2\over \lambda _t} {\partial \lambda _t\over \partial \kappa _l} {\partial \lambda _t\over \partial \zeta _j} g''(z_t^2) \Big \}, \end{aligned}$$

with

$$\begin{aligned} \begin{array}{llll} \displaystyle {\partial \phi _t\over \partial \tau _s} = w_{ts} \Lambda '(\varvec{w}^{\top }_{t}\varvec{\tau }),&\qquad \displaystyle {\partial ^2 \phi _t\over \partial \tau _s^2} = w^2_{ts} \Lambda ''(\varvec{w}^{\top }_{t}\varvec{\tau }). \end{array} \end{aligned}$$
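As a concrete check of (16)–(19), consider the log-normal member, for which \(g(u)\propto \exp (-u/2)\), so that \(g'(u)=-g(u)/2\) and \(g''(u)=g(u)/4\). The sketch below evaluates the analytic score and the \((\beta _r,\beta _r)\) second derivative for a single observation with exponential links \(\Lambda =\exp\) and compares them with finite differences; the one-covariate setup and all variable names are illustrative assumptions, not the paper's notation:

```python
import numpy as np

# log-normal member: g(u) = exp(-u/2) up to a constant that cancels in all ratios
g   = lambda u: np.exp(-u / 2)
dg  = lambda u: -0.5 * np.exp(-u / 2)   # g'
d2g = lambda u: 0.25 * np.exp(-u / 2)   # g''

y, x, w = 2.0, 1.5, 1.0                 # one observation, one covariate (illustrative)

def ell(beta, tau):                     # log-likelihood kernel: -log(phi)/2 + log g(z^2)
    lam, phi = np.exp(beta * x), np.exp(tau * w)
    z2 = (np.log(y) - np.log(lam))**2 / phi
    return -0.5 * np.log(phi) + np.log(g(z2))

beta, tau = 0.3, 0.2
lam, phi = np.exp(beta * x), np.exp(tau * w)
eta = (np.log(y) - np.log(lam)) / phi   # eta_t as defined above
z2  = (np.log(y) - np.log(lam))**2 / phi
dlam, d2lam = x * lam, x**2 * lam       # Lambda = exp, so Lambda' = Lambda'' = exp
dphi = w * phi

# score via (16) and (18)
dgb = -2 * eta / lam * dlam * dg(z2)                       # (18)
score_beta = dgb / g(z2)                                   # (16)
score_tau = -dphi / (2 * phi) + (-eta**2 * dphi * dg(z2)) / g(z2)

# (beta, beta) second derivative via (17) and (19)
d2gb = (2 * ((1 / lam**2) * (1 / phi + eta) * dlam**2 - (eta / lam) * d2lam) * dg(z2)
        + 4 * (eta / lam)**2 * dlam**2 * d2g(z2))          # (19)
hess_bb = -dgb**2 / g(z2)**2 + d2gb / g(z2)                # (17)

h = 1e-5
print(score_beta, (ell(beta + h, tau) - ell(beta - h, tau)) / (2 * h))
print(score_tau,  (ell(beta, tau + h) - ell(beta, tau - h)) / (2 * h))
h = 1e-4
print(hess_bb, (ell(beta + h, tau) - 2 * ell(beta, tau) + ell(beta - h, tau)) / h**2)
```

Here the analytic and finite-difference values agree: the score in \(\beta\) equals \(\eta _t x\) and the second derivative equals \(-x^2/\phi _t\), as direct differentiation of the Gaussian log-kernel confirms.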

By (8), the first order partial derivatives of \(\lambda _t\) are

$$\begin{aligned} {\partial \lambda _t\over \partial \beta _r}&= \left( x_{tr} - \sum \limits _{l=1}^p \kappa _l\, x_{(t-l)r} - \sum \limits _{j=1}^q\zeta _j\, {\partial \lambda _{t-j} \over \partial \beta _r} \right) \Lambda '({\varvec{x}}_t^{\top }\varvec{\beta } + \varrho _{t}), \end{aligned}$$
(20)
$$\begin{aligned} {\partial \lambda _t\over \partial \kappa _l}&= \left( y_{t-l} - \sum _{i=0}^{k} \beta _i\,x_{(t-l)i} - \sum \limits _{j=1}^q\zeta _j\, {\partial \lambda _{t-j}\over \partial \kappa _l} \right) \Lambda '({\varvec{x}}_t^{\top }\varvec{\beta } + \varrho _{t}), \end{aligned}$$
(21)
$$\begin{aligned} {\partial \lambda _t\over \partial \zeta _j}&= \left( y_{t-j}-\lambda _{t-j} - \sum \limits _{\tilde{j}=1}^q\zeta _{\tilde{j}}\, {\partial \lambda _{t-\tilde{j}}\over \partial \zeta _j} \right) \Lambda '({\varvec{x}}_t^{\top }\varvec{\beta } + \varrho _{t}), \end{aligned}$$
(22)

the second order partial derivatives are given by

$$\begin{aligned}&{\partial ^2 \lambda _t\over \partial \beta _r^2} = - \sum \limits _{j=1}^q\zeta _j\, {\partial ^2 \lambda _{t-j} \over \partial \beta _r^2} \, \Lambda '({\varvec{x}}_t^{\top }\varvec{\beta } + \varrho _{t}) + \left( x_{tr} - \sum \limits _{l=1}^p \kappa _l\, x_{(t-l)r} - \sum \limits _{j=1}^q\zeta _j\, {\partial \lambda _{t-j} \over \partial \beta _r} \right) ^2 \Lambda ''({\varvec{x}}_t^{\top }\varvec{\beta } + \varrho _{t}) , \\&{\partial ^2 \lambda _t\over \partial \kappa _l^2} = -\sum \limits _{j=1}^q\zeta _j\, {\partial ^2 \lambda _{t-j}\over \partial \kappa _l^2} \Lambda '({\varvec{x}}_t^{\top }\varvec{\beta }+\varrho _{t}) + \left( y_{t-l} - \sum _{i=0}^{k} \beta _i\,x_{(t-l)i} - \sum \limits _{j=1}^q\zeta _j\, {\partial \lambda _{t-j}\over \partial \kappa _l} \right) ^2 \Lambda ''({\varvec{x}}_t^{\top }\varvec{\beta } + \varrho _{t}) , \\&{\partial ^2 \lambda _t\over \partial \zeta _j^2} = -\left( 2{\partial \lambda _{t-j}\over \partial \zeta _j} + \sum \limits _{\tilde{j}=1}^q \zeta _{\tilde{j}}\, {\partial ^2 \lambda _{t-\tilde{j}}\over \partial \zeta _j^2} \right) \Lambda '({\varvec{x}}_t^{\top }\varvec{\beta }+\varrho _{t}) + \left( y_{t-j}-\lambda _{t-j} - \sum \limits _{\tilde{j}=1}^q\zeta _{\tilde{j}}\, {\partial \lambda _{t-\tilde{j}}\over \partial \zeta _j} \right) ^2 \Lambda ''({\varvec{x}}_t^{\top }\varvec{\beta } + \varrho _{t}) , \end{aligned}$$

with mixed partial derivatives

$$\begin{aligned} {\partial ^2 \lambda _t\over \partial \beta _r \partial \kappa _l}&= - \left( x_{(t-l)r} + \sum \limits _{j=1}^q\zeta _j\, {\partial ^2 \lambda _{t-j}\over \partial \beta _r\partial \kappa _l} \right) \Lambda '({\varvec{x}}_t^{\top }\varvec{\beta }+\varrho _{t}) \\&\qquad +\left( x_{tr} - \sum \limits _{l=1}^p \kappa _l\, x_{(t-l)r} - \sum \limits _{j=1}^q\zeta _j\, {\partial \lambda _{t-j} \over \partial \beta _r} \right) \left( y_{t-l} - \sum _{i=0}^{k} \beta _i\,x_{(t-l)i} - \sum \limits _{j=1}^q\zeta _j\, {\partial \lambda _{t-j}\over \partial \kappa _l} \right) \Lambda ''({\varvec{x}}_t^{\top }\varvec{\beta } + \varrho _{t}), \\ {\partial ^2 \lambda _t\over \partial \beta _r \partial \zeta _j}&= - \left( 2{\partial \lambda _{t-j}\over \partial \beta _r} + \sum \limits _{\tilde{j}=1}^q\zeta _{\tilde{j}}\, {\partial ^2 \lambda _{t-\tilde{j}}\over \partial \beta _r\partial \zeta _j} \right) \Lambda '({\varvec{x}}_t^{\top }\varvec{\beta } + \varrho _{t}) \\&\qquad + \left( y_{t-j}-\lambda _{t-j} - \sum \limits _{\tilde{j}=1}^q\zeta _{\tilde{j}}\, {\partial \lambda _{t-\tilde{j}}\over \partial \zeta _j} \right) \left( x_{tr} - \sum \limits _{l=1}^p \kappa _l\, x_{(t-l)r} - \sum \limits _{j=1}^q\zeta _j\, {\partial \lambda _{t-j} \over \partial \beta _r} \right) \Lambda ''({\varvec{x}}_t^{\top }\varvec{\beta } + \varrho _{t}), \\ {\partial ^2 \lambda _t\over \partial \kappa _l \partial \zeta _j}&= - \left( 2{\partial \lambda _{t-j}\over \partial \kappa _l} + \sum \limits _{\tilde{j}=1}^q\zeta _{\tilde{j}}\, {\partial ^2 \lambda _{t-\tilde{j}}\over \partial \kappa _l\partial \zeta _j} \right) \Lambda '({\varvec{x}}_t^{\top }\varvec{\beta } + \varrho _{t}) \\&\qquad + \left( y_{t-j}-\lambda _{t-j} - \sum \limits _{\tilde{j}=1}^q\zeta _{\tilde{j}}\, {\partial \lambda _{t-\tilde{j}}\over \partial \zeta _j} \right) \left( y_{t-l} - \sum _{i=0}^{k} \beta _i\,x_{(t-l)i} - \sum \limits _{j=1}^q\zeta _j\, {\partial \lambda _{t-j}\over \partial \kappa _l} \right) \Lambda ''({\varvec{x}}_t^{\top }\varvec{\beta } + \varrho _{t}). \end{aligned}$$
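The derivatives (20)–(22) are recursive in t, since \(\partial \lambda _t/\partial \beta _r\) depends on \(\partial \lambda _{t-j}/\partial \beta _r\). Below is a minimal sketch of the recursion (20), assuming zero initial conditions for \(t\le m\) as is usual in conditional likelihood settings; grad_lambda_beta and the precomputed array varrho are hypothetical names:

```python
import numpy as np

def grad_lambda_beta(X, beta, kappa, zeta, varrho, Lam_prime, r):
    """Evaluate (20) recursively: d lambda_t / d beta_r for t = m+1, ..., n.
    X is the (n x (k+1)) covariate matrix, varrho[t] the precomputed ARMA term;
    derivatives for t <= m are initialized to zero."""
    n, p, q = X.shape[0], len(kappa), len(zeta)
    m = max(p, q)
    d = np.zeros(n)
    for t in range(m, n):
        s = X[t, r]
        s -= sum(kappa[l - 1] * X[t - l, r] for l in range(1, p + 1))  # AR part
        s -= sum(zeta[j - 1] * d[t - j] for j in range(1, q + 1))      # MA feedback
        d[t] = s * Lam_prime(X[t] @ beta + varrho[t])
    return d
```

The derivatives with respect to \(\kappa _l\) and \(\zeta _j\) in (21) and (22) follow the same recursive pattern, with the corresponding term replacing \(x_{tr}-\sum _{l}\kappa _l\, x_{(t-l)r}\).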

Cite this article

Saulo, H., Vila, R., Vilca, F. et al. On Asymmetric Regression Models with Allowance for Temporal Dependence. J Stat Theory Pract 14, 40 (2020). https://doi.org/10.1007/s42519-020-00104-9
