
Estimating Functions for Circular Time Series Models


Abstract

In this article, we describe an estimating function (EF) approach for circular time series models. We construct EFs based on conditional trigonometric moments of the circular discrete-time stochastic processes and provide closed-form expressions for the optimal EF of the model parameters and its associated Godambe information. When the conditional circular mean and concentration are functions of the same parameters of interest, we show that the combined EF is more informative than its component sine and cosine EFs. We discuss recursive estimation of circular model parameters and illustrate the approach on two well-known circular time series models.


References

  • Artes, R., Paula, G.A. and Ranvaud, R. (2000). Analysis of circular longitudinal data based on generalized estimating equations. Aust. N. Z. J. Stat. 42, 347–358.


  • Downs, T.D. and Mardia, K.V. (2002). Circular regression. Biometrika 89, 683–698.


  • Fisher, N.I. and Lee, A.J. (1992). Regression models for an angular response. Biometrics 48, 665–677.


  • Fisher, N.I. (1993). Statistical Analysis of Circular Data. Cambridge University Press, Cambridge.


  • Fisher, N.I. and Lee, A.J. (1994). Time series of circular data. J. R. Stat. Soc. Series B 56, 327–339.


  • Gatto, R. and Jammalamadaka, S.R. (2003). Inference for wrapped symmetric α-stable circular models. Sankhyā 65, 333–355.


  • Godambe, V.P. (1985). The foundations of finite sample estimation in stochastic processes. Biometrika 72, 319–328.


  • Holzmann, H., Munk, A., Suster, M. and Zucchini, W. (2006). Hidden Markov models for circular and linear-circular time series. Environ. Ecol. Stat. 13, 325–347.


  • Hughes, G. (2007). Multivariate and time series models for circular data with applications to protein conformational angles. PhD thesis, University of Leeds, Leeds, UK.

  • Mardia, K.V. (1975a). Statistics of directional data. J. R. Stat. Soc. Series B 37, 349–393.


  • Mardia, K.V. (1975b). Characterizations of directional distributions. In Patil, G.P., Kotz, S. and Ord, J.K. (eds.). Reidel, Dordrecht, pp. 365–385.

  • Mardia, K.V. and Jupp, P.E. (2000). Directional Statistics. Wiley, Chichester.


  • Mardia, K.V., Hughes, G., Taylor, C.C. and Singh, H. (2008). A multivariate von Mises distribution with applications to bioinformatics. Can. J. Stat. 36, 99–109.


  • Nadarajah, S. and Zhang, Y. (2017). Wrapped: an R package for circular data. PLoS ONE 12, e0188512. https://doi.org/10.1371/journal.pone.0188512.


  • Pewsey, A., Neuhäuser, M. and Ruxton, G.D. (2013). Circular Statistics in R. Oxford University Press, Oxford.


  • Rao, C.R. (1973). Linear Statistical Inference and its Applications, 2nd edn. Wiley, New York.


  • Rivest, L.-P. (1988). A distribution for dependent unit vectors. Comm. Stat. Theory Methods 17, 461–483.


  • Samorodnitsky, G. and Taqqu, M.S. (1994). Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance. Chapman & Hall, New York.


  • Singh, H., Hnizdo, V. and Demchuk, E. (2002). Probabilistic model for two dependent circular variables. Biometrika 89, 719–723.


  • Stienne, G., Reboul, S., Azmani, M., Choquel, J.B. and Benjelloun, M.A. (2014). Multi-temporal multi-sensor circular fusion filter. Inf. Fusion 18, 86–100.


  • Thavaneswaran, A., Ravishanker, N. and Liang, Y. (2013). Inference for linear and nonlinear stable error processes via estimating functions. J. Stat. Plan. Inference 143, 827–841.


  • Thavaneswaran, A., Ravishanker, N. and Liang, Y. (2015). Generalized duration models and optimal estimation using estimating functions. Ann. Inst. Stat. Math. 67, 129–156.



Acknowledgements

The authors are grateful to referees for their suggestions and to Prof. N. Balakrishnan for discussions that helped to considerably improve the paper. The first author acknowledges support from an NSERC grant.


Appendix A: The Godambe EF Approach for Stochastic Processes

Let \(\mathbf{y}_{n}=(y_{1},\ldots,y_{n})^{\prime}\) denote the observations, let \(({\Omega}, \mathcal{F}, P_{\boldsymbol{\theta}})\) denote the underlying probability space, and let \(\mathcal{F}_{t}\) be the σ-field generated by \(\{y_{1},\ldots,y_{t}\}\), t ≥ 1. Let \(\mathbf{h}_{t}(\mathbf{y}_{t},\boldsymbol{\theta})\), 1 ≤ t ≤ n, be specified q-dimensional martingale differences (MDs). Let \(\mathcal{M}\) denote the class of zero mean, square integrable k-dimensional martingale EFs of the form

$$ \mathcal{M} = \left\{\mathbf{g} (\mathbf{y}_{n},\boldsymbol{\theta}): \mathbf{g} (\mathbf{y}_{n},\boldsymbol{\theta}) = {\sum}_{t=1}^{n} \mathbf{a}_{t-1} (\boldsymbol{\theta}) \mathbf{h}_{t} (\mathbf{y}_{t},\boldsymbol{\theta})\right\}, $$
(Appendix.1)

where \(\mathbf{a}_{t-1}(\boldsymbol{\theta})\) are k × q matrices depending on 𝜃 and on \(\mathbf{y}_{t-1}\), 1 ≤ t ≤ n. It is assumed that the EFs \(\mathbf{g}(\mathbf{y}_{n},\boldsymbol{\theta})\) are almost surely differentiable with respect to the components of 𝜃, that \(E\left(\left.\frac{\partial \mathbf{g}(\mathbf{y}_{n},\boldsymbol{\theta})}{\partial \boldsymbol{\theta}}\right|\mathcal{F}_{n-1}\right)\) is nonsingular, and that \(E(\mathbf{g}(\mathbf{y}_{n},\boldsymbol{\theta})\,\mathbf{g}(\mathbf{y}_{n},\boldsymbol{\theta})^{\prime}|\mathcal{F}_{n-1})\) is positive definite (p.d.) for all 𝜃 and for each n ≥ 1. Expectations are always taken with respect to \(P_{\boldsymbol{\theta}}\). Estimators of 𝜃 are obtained by solving the estimating equation (EE) \(\mathbf{g}(\mathbf{y}_{n},\boldsymbol{\theta}) = \mathbf{0}\). To simplify notation in the following equations, we write g and \(\mathbf{h}_{t}\) for \(\mathbf{g}(\mathbf{y}_{n},\boldsymbol{\theta})\) and \(\mathbf{h}_{t}(\mathbf{y}_{t},\boldsymbol{\theta})\), respectively.
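To make the EE concrete, the following is a minimal numerical sketch. The martingale difference \(h_t = y_t - \theta y_{t-1}\), the weight \(a_{t-1} = y_{t-1}\), and the simulated linear AR(1) series are illustrative assumptions chosen only to show how a root of the EE is computed; they are not one of the paper's circular models.

```python
# Minimal sketch: solve the estimating equation g(y_n, theta) = 0.
# The MD h_t = y_t - theta*y_{t-1}, the weight a_{t-1} = y_{t-1}, and the
# simulated linear AR(1) data are illustrative assumptions, not the paper's
# circular models.
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(0)
theta_true, n = 0.5, 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = theta_true * y[t - 1] + rng.normal()

def g(theta, y):
    """Martingale EF g = sum_t a_{t-1} h_t with a_{t-1} = y_{t-1}."""
    h = y[1:] - theta * y[:-1]   # h_t(y_t, theta): a martingale difference
    a = y[:-1]                   # F_{t-1}-measurable weight a_{t-1}
    return np.sum(a * h)

theta_hat = brentq(lambda th: g(th, y), -0.99, 0.99)  # root of the EE
print(f"theta_hat = {theta_hat:.3f}")
```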

Theorem 2.

The optimality result for EFs states that, in the class \(\mathcal{M}\) of all zero mean, square integrable martingale EFs, the optimal EF \(\mathbf{g}^{\ast}\) maximizes, in the partial order of nonnegative definite matrices, the information matrix

$$ \begin{array}{@{}rcl@{}} \mathbf{I}_{\mathbf{g}} &= & \left( \sum\limits_{t=1}^{n} \mathbf{a}_{t-1} E \left[\left. \frac{\partial \mathbf{h}_{t} }{\partial \boldsymbol{\theta}}\right| \mathcal{F}_{t -1}\right]\right)^{\prime} \left( \sum\limits_{t = 1}^{n} E [(\mathbf{a}_{t - 1} \mathbf{h}_{t} ) (\mathbf{a}_{t - 1} \mathbf{h}_{t})^{\prime}| \mathcal{F}_{t -1}]\right)^{-1} \\ && \times \left( \sum\limits_{t = 1}^{n} \mathbf{a}_{t - 1} E \left[\left. \frac{\partial \mathbf{h}_{t}} {\partial \boldsymbol{\theta}}\right|\mathcal{F}_{t - 1}\right]\right). \end{array} $$

The optimal EF and corresponding optimal information are given by

$$ \begin{array}{@{}rcl@{}} \mathbf{g}^{\ast} &=& \sum\limits_{t=1}^{n} \mathbf{a}_{t-1}^{\ast} \mathbf{h}_{t} = \sum\limits_{t=1}^{n} \left( E \left[\left. \frac{\partial \mathbf{h}_{t}}{\partial \boldsymbol{\theta}}\right| \mathcal{F}_{t-1}\right]\right)^{\prime} \left( E [\mathbf{h}_{t} \mathbf{h}_{t}^{\prime}| \mathcal{F}_{t-1}]\right)^{-1} \mathbf{h}_{t}, \\ \mathbf{I}_{\mathbf{g}^{\ast}} &=& \sum\limits_{t=1}^{n} \left( E \left[\left. \frac{\partial \mathbf{h}_{t}}{\partial \boldsymbol{\theta}}\right| \mathcal{F}_{t-1}\right]\right)^{\prime} \left( E [\mathbf{h}_{t} \mathbf{h}_{t}^{\prime}| \mathcal{F}_{t-1}]\right)^{-1} \left( E \left[\left. \frac{\partial \mathbf{h}_{t}}{\partial \boldsymbol{\theta}}\right| \mathcal{F}_{t-1}\right]\right). \end{array} $$
(Appendix.2)
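As a hedged illustration of Eq. Appendix.2, the sketch below evaluates the optimal weights and the Godambe information for the same assumed AR(1)-type martingale difference with constant conditional variance sigma2; none of these modeling choices come from the paper.

```python
# Sketch of the optimal EF and Godambe information in Eq. (Appendix.2) for the
# illustrative MD h_t = y_t - theta*y_{t-1} with constant conditional variance
# sigma2 (both assumptions, not the paper's circular models). Here
# E[dh_t/dtheta | F_{t-1}] = -y_{t-1} and E[h_t^2 | F_{t-1}] = sigma2,
# so a*_{t-1} = -y_{t-1}/sigma2.
import numpy as np

def optimal_ef_and_info(theta, y, sigma2=1.0):
    h = y[1:] - theta * y[:-1]        # h_t(y_t, theta)
    dh = -y[:-1]                      # E[dh_t/dtheta | F_{t-1}]
    a_star = dh / sigma2              # optimal weight a*_{t-1}
    g_star = np.sum(a_star * h)       # optimal EF g*(theta)
    info = np.sum(dh**2) / sigma2     # Godambe information I_{g*}
    return g_star, info
```

In this toy case, solving \(g^{\ast}(\theta)=0\) recovers \(\widehat\theta = \sum_t y_{t-1}y_t / \sum_t y_{t-1}^2\), and \(1/\mathbf{I}_{\mathbf{g}^{\ast}}\) estimates its asymptotic variance.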

From Eq. Appendix.2, the recursive estimate of 𝜃 becomes

$$ \begin{array}{@{}rcl@{}} \widehat{\boldsymbol{\theta}}_{t} &=& \widehat{\boldsymbol{\theta}}_{t-1} + \mathbf{K}_{t} \mathbf{a}_{t-1}^{\ast} (\widehat{\boldsymbol{\theta}}_{t-1}) \mathbf{h}_{t} (\widehat{\boldsymbol{\theta}}_{t-1}), \\ \mathbf{K}_{t} &=& \mathbf{K}_{t-1} \left[ \mathbf{I}_{p} - \left( \mathbf{a}_{t-1}^{\ast}(\widehat{\boldsymbol{\theta}}_{t-1}) \frac{\partial \mathbf{h}_{t} (\widehat{\boldsymbol{\theta}}_{t-1})}{\partial \boldsymbol{\theta}^{\prime}} + \frac{\partial \mathbf{a}_{t-1}^{\ast} (\widehat{\boldsymbol{\theta}}_{t-1})}{\partial \boldsymbol{\theta}} \mathbf{h}_{t} (\widehat{\boldsymbol{\theta}}_{t-1}) \right) \mathbf{K}_{t-1} \right]^{-1}, \end{array} $$
(Appendix.3)

for t = 1,…,n, where \(\mathbf{I}_{p}\) is the p × p identity matrix, p being the dimension of 𝜃. When 𝜃 is a scalar parameter, its recursive estimate is given by

$$ \begin{array}{@{}rcl@{}} \widehat{\theta}_{t} &=& \widehat{\theta}_{t-1} + K_{t} [a^{\ast}_{t-1} (\widehat{\theta}_{t-1}) h_{t} (\widehat{\theta}_{t-1})], \\ K_{t} &=& K_{t-1} + A_{t} K_{t-1} K_{t}, \quad \text{i.e., } K_{t} = K_{t-1}(1 - A_{t} K_{t-1})^{-1}, \end{array} $$
(Appendix.4)

where

\(A_{t} = a^{\ast }_{t-1} (\widehat {\theta }_{t-1}) \frac {\partial h_{t} (\widehat {\theta }_{t-1})}{\partial \theta } +\frac {\partial a^{\ast }_{t-1} (\widehat {\theta }_{t-1})}{\partial \theta } h_{t} (\widehat {\theta }_{t-1})\).
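The following runnable sketch implements the scalar recursion of Eq. Appendix.4 for the same illustrative AR(1)-type martingale difference used above; the model, the initial values \(\widehat\theta_0 = 0\) and \(K_0 = -1\), and the constant variance are all assumptions for illustration, not one of the paper's circular models.

```python
# Runnable sketch of the scalar recursion in Eq. (Appendix.4) for the
# illustrative MD h_t = y_t - theta*y_{t-1}, a*_{t-1} = -y_{t-1}/sigma2,
# constant sigma2. With K_0 < 0, K_t tracks -(K_0^{-1} + sum_{s<=t} A_s)^{-1},
# so the update theta + K*a*h moves toward the root of the cumulative EF.
import numpy as np

rng = np.random.default_rng(1)
theta_true, sigma2, n = 0.5, 1.0, 2000
y = np.zeros(n)
for t in range(1, n):
    y[t] = theta_true * y[t - 1] + rng.normal(scale=np.sqrt(sigma2))

theta_hat, K = 0.0, -1.0                     # arbitrary starting values
for t in range(1, n):
    h = y[t] - theta_hat * y[t - 1]          # h_t(theta_hat_{t-1})
    a = -y[t - 1] / sigma2                   # a*_{t-1}; d a*/d theta = 0 here
    A = a * (-y[t - 1])                      # A_t = a* dh_t/dtheta
    K = K / (1.0 - A * K)                    # gain update, Eq. (Appendix.4)
    theta_hat = theta_hat + K * a * h        # recursive estimate
print(f"recursive theta_hat = {theta_hat:.3f}")
```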

When the conditional mean and variance are functions of the same parameter, the following Lemma 1 gives the form of the combined EF based on two non-orthogonal martingale differences, which is more informative than each of the component EFs; see Thavaneswaran et al. (2015) for more details. Consider a discrete time stochastic process {yt, t = 1,2,…} with first two conditional moments given by

$$ \begin{array}{@{}rcl@{}} \mu_{t}(\boldsymbol \theta) &= & E \left( y_{t}|\mathcal{F}_{t-1}\right), \text{ and } {\sigma_{t}^{2}}(\boldsymbol \theta) = \text{Var} \left( y_{t}| \mathcal{F}_{t-1}\right). \end{array} $$

To estimate the parameter 𝜃 based on the observations \(\mathbf{y}_{n}=(y_{1},\ldots,y_{n})^{\prime}\), consider the two martingale differences (MDs) \(m_{t}(\boldsymbol\theta) = y_{t} - \mu_{t}(\boldsymbol\theta)\) and \(s_{t}(\boldsymbol\theta) = {m_{t}^{2}}(\boldsymbol\theta) - {\sigma_{t}^{2}}(\boldsymbol\theta)\), t = 1,…,n, with quadratic variations \(\langle m\rangle_{t}\), \(\langle s\rangle_{t}\) and quadratic covariation \(\langle m,s\rangle_{t}\).

Lemma 1.

In the class of all combined EFs of the form \(\mathcal {G}_{C} =\{\mathbf {g}_{C} (\boldsymbol \theta ): \mathbf {g}_{C} (\boldsymbol \theta ) = {\sum }_{t=1}^{n} \left (\mathbf {a}_{t-1} m_{t} + \mathbf {b}_{t-1} s_{t}\right )\}\),

(a) the optimal EF is given by \(\mathbf {g}_{C}^{\ast } (\boldsymbol \theta ) = {\sum }_{t=1}^{n} \left (\mathbf {a}_{t-1}^{*} m_{t} + \mathbf {b}_{t-1}^{*} s_{t}\right )\), where

$$ \begin{array}{@{}rcl@{}} \mathbf{a}_{t-1}^{\ast} &=& \left( 1 - \frac{\langle m,s\rangle_{t}^{2}}{\langle m\rangle_{t}\langle s\rangle_{t}}\right)^{-1} \left( -\frac{\partial \mu_{t}}{\partial \boldsymbol\theta}\frac{1}{\langle m\rangle_{t}} + \frac{\partial {\sigma_{t}^{2}}(\boldsymbol\theta)}{\partial \boldsymbol\theta} \frac{\langle m,s\rangle_{t}}{\langle m\rangle_{t}\langle s\rangle_{t}}\right), \text{ and } \\ \mathbf{b}_{t-1}^{\ast} &=& \left( 1 - \frac{\langle m,s\rangle_{t}^{2}}{\langle m\rangle_{t}\langle s\rangle_{t}}\right)^{-1} \left( \frac{\partial \mu_{t}}{\partial \boldsymbol\theta} \frac{\langle m,s\rangle_{t}}{\langle m\rangle_{t}\langle s\rangle_{t}} - \frac{\partial {\sigma_{t}^{2}}(\boldsymbol\theta)}{\partial \boldsymbol\theta}\frac{1}{\langle s\rangle_{t}}\right); \end{array} $$

(b) the information \(\mathbf {I}_{\mathbf {g}_{C}^{\ast }} (\boldsymbol \theta )\) is given by

$$ \begin{array}{@{}rcl@{}} \mathbf{I}_{\mathbf{g}_{C}^{\ast}} (\boldsymbol\theta) &=& \sum\limits_{t=1}^{n}\left( 1 - \frac{\langle m,s\rangle_{t}^{2}}{\langle m\rangle_{t}\langle s\rangle_{t}}\right)^{-1} \left( \frac{\partial \mu_{t}}{\partial \boldsymbol\theta} \frac{\partial \mu_{t}}{\partial \boldsymbol\theta^{\prime}} \frac{1}{\langle m\rangle_{t}} + \frac{\partial {\sigma_{t}^{2}}}{\partial \boldsymbol\theta} \frac{\partial {\sigma_{t}^{2}}}{\partial \boldsymbol\theta^{\prime}} \frac{1}{\langle s\rangle_{t}} \right.\\ && \left. - \left( \frac{\partial \mu_{t}}{\partial \boldsymbol\theta}\frac{\partial {\sigma_{t}^{2}}}{\partial \boldsymbol\theta^{\prime}} + \frac{\partial {\sigma_{t}^{2}}}{\partial \boldsymbol\theta} \frac{\partial \mu_{t}}{\partial \boldsymbol\theta^{\prime}}\right) \frac{\langle m,s\rangle_{t}}{\langle m\rangle_{t}\langle s\rangle_{t}}\right). \end{array} $$

The proof of this lemma is similar to the proof for combining linear and quadratic EFs shown in Thavaneswaran et al. (2015) and is omitted here.
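A schematic scalar-parameter implementation of Lemma 1 is sketched below. The arrays holding \(\mu_t\), \({\sigma_t^2}\), their 𝜃-derivatives, and the variations \(\langle m\rangle_t\), \(\langle s\rangle_t\), \(\langle m,s\rangle_t\) must be supplied by a specific model (and recomputed at each trial value of 𝜃), so everything here is a generic assumption rather than the paper's circular construction.

```python
# Schematic scalar-theta implementation of Lemma 1's combined EF
# g*_C(theta) = sum_t (a*_{t-1} m_t + b*_{t-1} s_t). All inputs are arrays
# over t, supplied by the model at the current theta; requires rho2 < 1.
import numpy as np

def combined_ef(y, mu, dmu, sigma2, dsigma2, var_m, var_s, cov_ms):
    m = y - mu                               # m_t = y_t - mu_t
    s = m**2 - sigma2                        # s_t = m_t^2 - sigma_t^2
    rho2 = cov_ms**2 / (var_m * var_s)       # <m,s>_t^2 / (<m>_t <s>_t)
    c = 1.0 / (1.0 - rho2)
    a_star = c * (-dmu / var_m + dsigma2 * cov_ms / (var_m * var_s))
    b_star = c * (dmu * cov_ms / (var_m * var_s) - dsigma2 / var_s)
    return np.sum(a_star * m + b_star * s)   # combined EF value at theta
```

A root-finding pass over 𝜃 of combined_ef then yields the combined estimate, which by Lemma 1 is at least as informative as either component EF alone.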


Cite this article

Thavaneswaran, A., Ravishanker, N. Estimating Functions for Circular Time Series Models. Sankhya A 85, 198–213 (2023). https://doi.org/10.1007/s13171-020-00237-w
