
Integer-valued Bilinear Model with Dependent Counting Series

Methodology and Computing in Applied Probability

Abstract

The present work proposes a new stationary integer-valued bilinear time series model with dependent counting series. The model makes it possible to account for correlation between the underlying events. Various probabilistic and statistical properties of the model are discussed, and the unknown parameters are estimated by several methods. Moreover, the performance of the estimation methods is illustrated through a simulation study and an empirical application to two data sets.


References

  • Basrak B, Davis RA, Mikosch T (1999) The sample ACF of a simple bilinear process. Stoch Process Appl 83:1–14

  • Bentarzi M, Bentarzi W (2017) Periodic integer-valued bilinear time series model. Commun Stat Theory Methods 46:1184–1201

  • Davis RA, Resnick SI (1996) Limit theory for bilinear processes with heavy-tailed noise. Ann Appl Probab 6:1191–1210

  • Drost FC, van den Akker R, Werker BJM (2008) Note on integer-valued bilinear time series. Stat Probab Lett 78:992–996

  • Doukhan P, Latour A, Oraichi D (2006) A simple integer-valued bilinear time series model. Adv Appl Probab 38(2):559–578

  • Granger CWJ, Andersen AP (1978a) An introduction to bilinear time series models. Vandenhoeck and Ruprecht, Göttingen

  • Granger CWJ, Andersen AP (1978b) On the invertibility of time series models. Stoch Process Appl 8:87–92

  • Keenan DM (1985) A Tukey non-additivity type test for time series nonlinearity. Biometrika 72:39–44

  • Kim HY, Park Y (2008) A non-stationary integer-valued autoregressive model. Stat Papers 49:485–502

  • Liu M, Li Q, Zhu F (2019) Threshold negative binomial autoregressive model. Stat 53:1–25

  • Liu M, Li Q, Zhu F (2020) Self-excited hysteretic negative binomial autoregression. AStA Adv Stat Anal 104:385–415

  • Mohammadpour M, Bakouch HS, Ramzani S (2019) An integer-valued bilinear time series model via two random operators. Math Comput Modell Dyn Syst 25:429–446

  • Nastić AS, Ristić MM, Miletić Ilić AV (2017) A geometric time series model with an alternative dependent Bernoulli counting series. Commun Stat Theory Methods 46:770–785

  • Pascual L, Romo J, Ruiz E (2004) Bootstrap predictive inference for ARIMA processes. J Time Ser Anal 25:449–465

  • Qin J, Lawless J (1994) Empirical likelihood and general estimating equations. Ann Stat 22:300–325

  • Ristić MM, Nastić AS, Miletić Ilić AV (2013) A geometric time series model with dependent Bernoulli counting series. J Time Ser Anal 34:423–516

  • Subba Rao T (1981) On the theory of bilinear time series models. J R Stat Soc Ser B 43:244–255

  • Turkman KF, Turkman MAA (1997) Extremes of bilinear time series models. J Time Ser Anal 18:305–319

  • Wang C, Liu H, Yao J, Davis RA, Li WK (2014) Self-excited threshold Poisson autoregression. J Amer Stat Assoc 109:776–787

  • Zhang Z, Tong H (2001) On some distributional properties of a first-order nonnegative bilinear time series model. J Appl Probab 38(3):659–671

  • Zhang H, Wang D, Zhu F (2011) Empirical likelihood inference for random coefficient INAR(p) process. J Time Ser Anal 32:195–203

  • Zhu F, Wang D (2008) Estimation of parameters in the NLAR(p) model. J Time Ser Anal 29:619–628

  • Zhu F (2011) A negative binomial integer-valued GARCH model. J Time Ser Anal 32:54–67

  • Zhu F, Wang D (2015) Empirical likelihood for linear and log-linear INGARCH models. J Korean Stat Soc 44:150–160


Author information

Correspondence to Mehrnaz Mohammadpour.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A: Proof of Theorem 2.1

To investigate the stationarity of the process, we follow an approach similar to that of Kim and Park (2008). Let \(\{X_{t}^{(n)}, t{\in \mathbb {Z}}\}\) be a sequence of random variables defined by

$$ X_{t}^{(n)}=\left \{ \begin{array}{cl} 0 & n<0 \\ e_{t} & n=0 \\ a\diamond_{\theta} X_{t-1}^{(n-1)}+b\diamond_{\delta}X_{t-1}^{(n-1)}e_{t-1} +e_{t} & n>0. \end{array} \right. $$
(12)

where the sequence \(\{X_{s}^{(n)}\}\) is independent of \(e_{t}\) for \(s < t\).

B1

The process \(\{X_{t}^{(n)}, t{\in \mathbb {Z}}\}\) is strictly stationary for any \(n \in \mathbb {N}\).

The strict stationarity of the process \(\{X_{t}^{(n)}, t{\in \mathbb {Z}}\}\) will be deduced by showing that the two vectors \((X_{1}^{(n)},...,X_{k}^{(n)})^{T}\) and \( (X_{1+h}^{(n)},...,X_{k+h}^{(n)})^{T}\) are identically distributed. It is clear that the process \(\{X_{t}^{(n)}, t{\in \mathbb {Z}}\}\) is strictly stationary for n = 0. Now suppose that the process \(\{X_{t}^{(m)}, t{\in \mathbb {Z}}\}\) is strictly stationary for all \(1\leq m\leq n-1\). Hence for m = n we have

$$ \begin{pmatrix} X_{1}^{(n)} \\ {\vdots} \\ X_{k}^{(n)} \\ \end{pmatrix} = \begin{pmatrix} a\diamond_{\theta} & {\cdots} & 0\diamond_{\theta} \\ {\vdots} & {\ddots} & {\vdots} \\ 0\diamond_{\theta} & {\cdots} & a\diamond_{\theta} \\ & & \end{pmatrix} \begin{pmatrix} X_{0}^{(n-1)} \\ {\vdots} \\ X_{k-1}^{(n-1)} \\ \end{pmatrix} + \begin{pmatrix} b\diamond_{\delta} & {\cdots} & 0\diamond_{\delta} \\ {\vdots} & {\ddots} & {\vdots} \\ 0\diamond_{\delta} & {\cdots} & b\diamond_{\delta} \\ & & \end{pmatrix} \begin{pmatrix} X_{0}^{(n-1)} e_{0} \\ {\vdots} \\ X_{k-1}^{(n-1)} e_{k-1} \\ \end{pmatrix} + \begin{pmatrix} e_{1} \\ {\vdots} \\ e_{k} \\ \end{pmatrix} $$

and

$$ \begin{pmatrix} X_{1+h}^{(n)} \\ {\vdots} \\ X_{k+h}^{(n)} \\ \end{pmatrix} = \begin{pmatrix} a\diamond_{\theta} & {\cdots} & 0\diamond_{\theta} \\ {\vdots} & {\ddots} & {\vdots} \\ 0\diamond_{\theta} & {\cdots} & a\diamond_{\theta} \\ & & \end{pmatrix} \begin{pmatrix} X_{h}^{(n-1)} \\ {\vdots} \\ X_{k+h-1}^{(n-1)} \\ \end{pmatrix} + \begin{pmatrix} b\diamond_{\delta} & {\cdots} & 0\diamond_{\delta} \\ {\vdots} & {\ddots} & {\vdots} \\ 0\diamond_{\delta} & {\cdots} & b\diamond_{\delta} \\ & & \end{pmatrix} \begin{pmatrix} X_{h}^{(n-1)} e_{h} \\ {\vdots} \\ X_{k+h-1}^{(n-1)} e_{k+h-1} \\ \end{pmatrix} + \begin{pmatrix} e_{h+1} \\ {\vdots} \\ e_{k+h} \\ \end{pmatrix} $$

According to the induction hypothesis, the random vectors \( (X_{1}^{(n)},...,X_{k}^{(n)})^{T}\) and \((X_{1+h}^{(n)},...,X_{k+h}^{(n)})^{T}\) are identically distributed.

B2

The sequence \(\{X_{t}^{(n)}, t{\in \mathbb {Z}}\}\) belongs to the space \(\mathcal{L}^{2}=\{X : EX^{2}<\infty \}\).

Let \(\mu _{n}=E(|X_{t}^{(n)}|)\). According to Eq. 12, we have

$$ \mu_{n}\leqslant |a|E|X_{t-1}^{(n-1)}|+|b|E|X_{t-1}^{(n-1)}e_{t-1}|+\mu . $$
(13)

As \(E\left \vert X_{t}^{(n)}-e_{t}\right \vert =E|a\mathbf {\diamond _{\theta }} X_{t-1}^{(n-1)}+b\mathbf {\diamond _{\delta } }X_{t-1}^{(n-1)}e_{t-1}| \leqslant E|X_{t}^{(n)}|+\mu ,\) we have

$$ E|X_{t-1}^{(n-1)}e_{t-1}|=E\left\vert \left( a\mathbf{\diamond_{\theta}} X_{t-2}^{(n-2)}+b\mathbf{\diamond_{\delta} }X_{t-2}^{(n-2)}e_{t-2}+e_{t-1} \right) e_{t-1}\right\vert \leqslant \mu_{e}E|X_{t-1}^{(n-1)}|+\mu_{e^{2}}+\mu^{2}. $$
(14)

Substituting Eq. 14 into Eq. 13 gives

$$ \begin{array}{@{}rcl@{}} \mu_{n} &\leqslant &(|a|+|b|\mu )\mu_{n-1}+|b|(\mu_{e^{2}}+\mu^{2})+\mu \\ &\leqslant &(|a|+|b|\mu)^{n}\mu +[|b|(\mu_{e^{2}}+\mu^{2})+\mu ]{\sum}_{i=0}^{n-1}(|a|+|b|\mu )^{i}<\infty . \end{array} $$

Now we show that \(E|X_{t}^{(n)}|^{2}<\infty \):

$$ \begin{array}{@{}rcl@{}} E|X_{t}^{(n)}|^{2} &\leqslant &E|a\mathbf{\diamond_{\theta}} X_{t-1}^{(n-1)}|^{2}+E|b\mathbf{\diamond_{\delta} } X_{t-1}^{(n-1)}e_{t-1}|^{2}+2E|(a\mathbf{\diamond_{\theta} } X_{t-1}^{(n-1)})(b\mathbf{\diamond_{\delta}}X_{t-1}^{(n-1)}e_{t-1})|+ \\ &&2E|(a\mathbf{\diamond_{\theta} }X_{t-1}^{(n-1)})e_{t}|+2E|(b\mathbf{ \diamond_{\delta}}X_{t-1}^{(n-1)}e_{t-1})e_{t}|+\mu_{e^{2}} \\ &\leqslant &|a \theta| E|X_{t-1}^{(n-1)}|^{2}+a(1-\theta) E|X_{t-1}^{(n-1)}|+|b \delta| E|X_{t-1}^{(n-1)}e_{t-1}|^{2}+b(1-\delta) E|X_{t-1}^{(n-1)}e_{t-1}|+ \\ &&2|ab|E|(X_{t-1}^{(n-1)})^{2}e_{t-1}|+2\mu |a|E|X_{t-1}^{(n-1)}|+2\mu |b|E|X_{t-1}^{(n-1)}e_{t-1}|+\mu_{e^{2}}. \end{array} $$
(15)

On the other hand, after some calculations we have

$$ E|(X_{t-1}^{(n-1)})^{2}e_{t-1}|\leqslant \mu E|X_{t-1}^{(n-1)}|^{2}+2(\mu^{2}+\mu_{e^{2}})E|X_{t-1}^{(n-1)}|+L_{1}(e) $$

and

$$ E|X_{t-1}^{(n-1)}e_{t-1}|^{2}\leqslant \mu_{e^{2}}E|X_{t-1}^{(n-1)}|^{2}+2(\mu \mu_{e^{2}}+\mu_{e^{3}})E|X_{t-1}^{(n-1)}|+L_{2}(e) $$

where \(L_{1}(e)=\mu (3\mu _{e^{2}}+2\mu ^{2})+2\mu _{e^{2}}\mu +\mu _{e^{3}}\) and \(L_{2}(e)=\mu _{e^{2}}(3\mu _{e^{2}}+2\mu ^{2})+2\mu _{e^{3}}\mu +\mu _{e^{4}}\). By substituting these equations and Eq. 14 into Eq. 15, we obtain

$$ E|X_{t}^{(n)}|^{2}\leqslant K_{1}(e)E|X_{t-1}^{(n-1)}|^{2}+K_{2}(e)\mu_{n-1}+K_{3}(e) $$
(16)

where \(K_{1}(e)=(|a \theta |+|b\delta |\mu _{e^{2}} )+2ab\mu \), \( K_{2}(e)=|a(1-\theta ) |+|b(1-\delta ) |\mu +2\mu _{e^{2}}\mu +\mu _{e^{3}}+4|a||b|(\mu _{e^{2}}+\mu ^{2})+2\mu (|a|+|b|\mu )\) and \( K_{3}(e)=|b(1-\delta ) |(\mu _{e^{2}}+\mu ^{2})+|b|^{2}L_{2}(e)+2|a||b|L_{1}(e)+2\mu |b|(\mu _{e^{2}}+{\mu _{e}^{2}})\).

Iterating (16), we have

$$ E|X_{t}^{(n)}|^{2}\leqslant K_{1}^{n}(e)\mu_{e^{2}}+K_{2}(e)\sum\limits_{i=0}^{n-1}K_{1}^{i}(e)\mu_{n-(i+1)}+K_{3}(e)\sum\limits_{i=0}^{n-1}{K_{1}^{i}}(e). $$

Therefore, under the strict stationarity condition, it can be concluded that \( E|X_{t}^{(n)}|^{2}<\infty \) for n ≥ 1. Hence \(\{X_{t}^{(n)}\}\in \mathcal{L}^{2}\).
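
Numerically, the contraction driving step B2 is easy to visualize. The following sketch (illustrative only; the parameter values and the Poisson innovations are assumptions, not taken from the paper) iterates the bound \(\mu_{n}\leqslant (|a|+|b|\mu )\mu_{n-1}+|b|(\mu_{e^{2}}+\mu^{2})+\mu\) and compares it with the limit of the geometric series, which is finite whenever \(|a|+|b|\mu <1\).

```python
# Iterate the first-moment bound from step B2 (a sketch; parameter values
# are illustrative assumptions, with Poisson(mu) innovations).

a, b = 0.3, 0.2              # model coefficients (assumed)
mu = 1.0                     # mu = E(e_t); Poisson(1) innovations assumed
mu_e2 = mu + mu**2           # E(e_t^2) for a Poisson(mu) variable

rho = abs(a) + abs(b) * mu   # contraction factor; requires rho < 1
const = abs(b) * (mu_e2 + mu**2) + mu

mu_n = mu                    # mu_0 = E|X_t^{(0)}| = E|e_t| = mu
for n in range(1, 51):
    mu_n = rho * mu_n + const            # one step of the recursion bound

print(mu_n, const / (1.0 - rho))         # approaches the geometric limit
```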

B3

The sequence \(\{X_{t}^{(n)}\}\) is Cauchy.

Let \(\psi (t,n,m)=|X_{t}^{(n)}-X_{t}^{(n-m)}|,\) m = 1, 2,.... Using the definition of \(\{X_{t}^{(n)}\}\), we have

$$ \begin{array}{@{}rcl@{}} E\psi (t,n,m) &\leq &E|a\mathbf{\diamond_{\theta} } (X_{t-1}^{(n-1)}-X_{t-1}^{(n-m-1)})|+E|b\mathbf{\diamond_{\delta}} (X_{t-1}^{(n-1)}-X_{t-1}^{(n-m-1)})e_{t-1}| \\ &\leq &(|a|+|b|\mu )E\psi (t-1,n-1,m) \\ &\leq &(|a|+|b|\mu )^{n}E\psi (t-n,0,m)=(|a|+|b|\mu )^{n}\mu . \end{array} $$

As \(n\rightarrow \infty \), under the strict stationarity condition, Eψ(t,n,m) converges to 0. Similarly, we have

$$ \begin{array}{@{}rcl@{}} E\psi^{2}(t,n,m) &\leq &(|a \theta|+|b\delta|\mu_{e^{2}}+2|ab|\mu)E\psi^{2}(t-1,n-1,m)+ (a(1-\theta)+b(1-\delta)\mu )E\psi (t-1,n-1,m) \\ &&{\vdots} \\ &\leq &(|a \theta|+|b\delta|\mu_{e^{2}}+2|ab|\mu)^{n}E\psi^{2}(t-n,0,m)+ \\ &&(a (1-\theta)+b (1-\delta) \mu )\sum\limits_{i=1}^{n}(|a \theta|+|b\delta|\mu_{e^{2}} +2|ab|\mu)^{i-1}E\psi (t-i,n-i,m). \end{array} $$

As \(n\rightarrow \infty \), under the given condition, it can easily be seen that Eψ2(t,n,m) converges to 0. So \(X_{t}^{\left (n\right ) }\) is a Cauchy sequence. Let \(\lim _{n\rightarrow \infty }X_{t}^{(n)}=X_{t}\); then \(X_{t}\in \mathcal{L}^{2}\).

B4

The process \(\{X_{t}\}\) satisfies Eq. 1.

Since \(X_{t}^{(n)}\rightarrow X_{t}\) in \(\mathcal{L}^{2}\), both

$$ E|a\mathbf{\diamond_{\theta} }X_{t-1}^{(n-1)}-a\mathbf{\diamond_{\theta} } X_{t-1}|^{2}=|a \theta| E|X_{t-1}^{(n-1)}-X_{t-1}|^{2}+a (1-\theta) E|X_{t-1}^{(n-1)}-X_{t-1}| $$

and

$$ E|b\mathbf{\diamond_{\delta} }(X_{t-1}^{(n-1)}-X_{t-1})e_{t-1}|^{2}=|b \delta| E|(X_{t-1}^{(n-1)}-X_{t-1})e_{t-1}|^{2}+b(1-\delta) E|(X_{t-1}^{(n-1)}-X_{t-1})e_{t-1}| $$

converge to zero. Hence the process \(\{X_{t}\}\) satisfies Eq. 1.

B5

Uniqueness

Suppose that another solution \(X_{t}^{\ast }\) of Eq. 1 exists. By the Minkowski inequality, we have

$$ E^{1/2}(|X_{t}-X_{t}^{\ast }|^{2})\leq E^{1/2}(|X_{t}^{(n)}-X_{t}^{\ast }|^{2})+E^{1/2}(|X_{t}^{(n)}-X_{t}|^{2}), $$

and both terms on the right-hand side converge to zero as \(n\rightarrow \infty \), so \(X_{t}=X_{t}^{\ast } \) a.s.

B6

Strict stationarity of the process \(\{X_{t}\}\).

As the process \(\{X_{t}^{(n)}\} \) is strictly stationary, \( (X_{0}^{(n)},...,X_{k}^{(n)})^{T}\) and \((X_{h}^{(n)},...,X_{k+h}^{(n)})^{T}\) have the same distribution for each \(n\), \(h\) and \(k\). Since \( X_{t}^{(n)}\rightarrow X_{t}\) in \(\mathcal{L}^{2}\), it can be deduced that \( X_{t}^{(n)}\rightarrow ^{P}X_{t}\) and hence \( {\sum }_{i=0}^{h}b_{i}X_{t+i}^{(n)}\rightarrow ^{P}{\sum }_{i=0}^{h}b_{i}X_{t+i}.\) Using the Cramér-Wold device, it can be concluded that

$$ (X_{0}^{(n)},...,X_{k}^{(n)})\Rightarrow (X_{0},...,X_{k}) $$

and

$$ (X_{h}^{(n)},...,X_{k+h}^{(n)})\Rightarrow (X_{h},...,X_{k+h}). $$

Since \((X_{0}^{(n)},...,X_{k}^{(n)})\) and \((X_{h}^{(n)},...,X_{k+h}^{(n)})\) have the same distribution, their limits \((X_{0},...,X_{k})\) and \((X_{h},...,X_{k+h})\) also have the same distribution. This completes the proof.

Appendix B: Proof of Proposition 2.1

Using the stationarity of the process and the properties of the operator, \(E(X_{t})\) can be obtained. To obtain \(E({X^{2}_{t}})\), we have

$$ \begin{array}{@{}rcl@{}} E({X^{2}_{t}})=E(B_{t-1}^{2} + {e_{t}^{2}}+2B_{t-1}e_{t}), \end{array} $$

where \(B_{t-1} = a\diamond_{\theta} X_{t-1} + b\diamond_{\delta}X_{t-1}e_{t-1}\). Using the stationarity of the process and the properties of the operator, we have

$$ \begin{array}{@{}rcl@{}} E(B_{t-1})=E(X_{t}-e_{t})=E(X_{t})-\mu, \end{array} $$

and

$$ \begin{array}{@{}rcl@{}} E(B_{t-1}^{2})&=&E(a\diamond_{\theta} X_{t-1})^{2} +E(b\diamond_{\delta}X_{t-1}e_{t-1})^{2}+2E((a\diamond_{\theta} X_{t-1})(b\diamond_{\delta}X_{t-1}e_{t-1})) \\ &=& a [\theta E(X_{t-1}^{2}) +(1-\theta)E(X_{t-1})]+b[\delta E(X_{t-1}^{2}e_{t-1}^{2}) +(1-\delta)E(X_{t-1}e_{t-1})]\\&&+2ab E(X_{t-1}^{2}e_{t-1}), \end{array} $$

where

$$ \begin{array}{@{}rcl@{}} E({X_{t}^{2}}{e_{t}^{2}})&=&E(B_{t-1}^{2}) E({e_{t}^{2}}) +E({e_{t}^{4}})+2E({e_{t}^{3}}) E(B_{t-1}) \\ E({X_{t}^{2}}e_{t})&=&E(B_{t-1}^{2}) E(e_{t}) +E({e_{t}^{3}})+2E({e_{t}^{2}}) E(B_{t-1}) \\ E(X_{t}e_{t})&=&E(B_{t-1}) E(e_{t}) +E({e_{t}^{2}}). \end{array} $$

After some simplifications, we have

$$ \begin{array}{@{}rcl@{}} E(B_{t-1}^{2})=\frac{a \theta E(X_{t-1}^{2}) + C}{1-[b\delta E({e_{t}^{2}}) +2ab\mu]}, \end{array} $$

where \(C =a(1-\theta )E(X_{t-1})+b\delta (E({e_{t}^{4}})+2E({e_{t}^{3}}) E(B_{t-1}))+b(1-\delta )(E(B_{t-1}) E(e_{t}) +E({e_{t}^{2}})) + 2ab(E({e_{t}^{3}})+2E({e_{t}^{2}}) E(B_{t-1}) )\). Hence, substituting \( E(B_{t-1}^{2})\) into the expression for \(E({X_{t}^{2}})\), \( E({X_{t}^{2}})\) can be obtained.
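
For concreteness, this fixed point can be solved in a few lines. The sketch below is not the authors' code; it assumes Poisson(\(\mu\)) innovations and takes the stationary mean as \(\mu_{X}=(\mu +b\sigma^{2})/(1-a-b\mu )\), the form implied by the relation quoted in Appendix C, then computes \(E({X_{t}^{2}})\) from \(E({X_{t}^{2}})=E(B_{t-1}^{2})+E({e_{t}^{2}})+2\mu E(B_{t-1})\) and the displayed expression for \(E(B_{t-1}^{2})\).

```python
# Solve the linear fixed point for E(X_t^2) in Proposition 2.1 (a sketch;
# Poisson(mu) innovations assumed, so sigma^2 = mu).

def second_moment(a, b, theta, delta, mu):
    # Raw moments of e_t ~ Poisson(mu)
    Ee2 = mu + mu**2
    Ee3 = mu**3 + 3 * mu**2 + mu
    Ee4 = mu**4 + 6 * mu**3 + 7 * mu**2 + mu
    sigma2 = mu

    mu_X = (mu + b * sigma2) / (1 - a - b * mu)   # stationary mean (assumed form)
    EB = mu_X - mu                                # E(B_{t-1}) = E(X_t) - mu

    D = 1 - (b * delta * Ee2 + 2 * a * b * mu)    # denominator of E(B_{t-1}^2)
    C = (a * (1 - theta) * mu_X
         + b * delta * (Ee4 + 2 * Ee3 * EB)
         + b * (1 - delta) * (EB * mu + Ee2)
         + 2 * a * b * (Ee3 + 2 * Ee2 * EB))

    # E(X^2) = E(B^2) + E(e^2) + 2*mu*E(B), with E(B^2) = (a*theta*E(X^2) + C)/D;
    # solving the resulting linear equation for E(X^2):
    return (C + D * (Ee2 + 2 * mu * EB)) / (D - a * theta)

print(second_moment(a=0.3, b=0.2, theta=0.5, delta=0.5, mu=1.0))
```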

Appendix C: Proof of Proposition 2.2

For the autocovariance function of order one, we have

$$ E(X_{t}X_{t+1})=aE({X_{t}^{2}})+bE({X_{t}^{2}}e_{t})+E(X_{t})E(e_{t+1}), $$

where \(E({X_{t}^{2}}e_{t})\) and \(E(B_{t-1}^{2})\) are obtained as

$$ \begin{array}{@{}rcl@{}} E({X_{t}^{2}}e_{t}) &=&E({e_{t}^{3}})+E(B_{t-1}^{2})E(e_{t})+2E(B_{t-1})E({e_{t}^{2}}) \\ E(B_{t-1}^{2}) &=&\frac{a \theta E(X_{t-1}^{2}) + C}{1-[b\delta E({e_{t}^{2}}) +2ab\mu]}. \end{array} $$

Substituting \(E(B_{t-1}^{2})\) into the second term of \(E({X_{t}^{2}}e_{t})\), after some calculations we have

$$ E({X_{t}^{2}}e_{t})=E({X_{t}^{2}})E(e_{t})+E({e_{t}^{3}})+2E({e_{t}^{2}})E(B_{t-1})-E(e_{t})(2\mu E(B_{t-1})+E({e_{t}^{2}})), $$

and hence

$$ \begin{array}{@{}rcl@{}} E(X_{t}X_{t+1})&=&(a+b\mu)E({X_{t}^{2}})+b[E({e_{t}^{3}})+2E({e_{t}^{2}})E(B_{t-1}) -\mu(2\mu E(B_{t-1})+E({e_{t}^{2}}))]+E(X_{t})\mu \\ &=&(a+b\mu )E({X_{t}^{2}})+\mu_{X}\mu +b[2\sigma^{2}(\mu_{X}-\mu )+E({e_{t}^{3}})-\mu (\mu^{2}+ \sigma^{2})]. \end{array} $$

Also

$$ E(X_{t}X_{t+2})=aE(X_{t}X_{t+1})+bE(X_{t}X_{t+1}e_{t+1})+E(X_{t})E(e_{t+2}). $$

After some tedious computations, we obtain \(E(X_{t}X_{t+1}e_{t+1}) = \sigma^{2}E(X_{t}) + \mu E(X_{t}X_{t+1})\). By substituting \(E(X_{t}X_{t+1}e_{t+1})\) into \(E(X_{t}X_{t+2})\), we have

$$ E(X_{t}X_{t+2})=(a+b\mu )E(X_{t}X_{t+1})+b\sigma^{2}\mu_{X}+\mu \mu_{X}. $$

On the other hand, Eq. 2 implies that \(b\sigma ^{2}\mu _{X}+\mu \mu _{X}+[(a+b\mu )-1]{\mu _{X}^{2}}=0\). Hence

$$ \gamma_{X}(2)=(a+b\mu )\gamma_{X}(1). $$

So by induction, we can conclude that

$$ \gamma_{X}(k)=(a+b\mu )\gamma_{X}(k-1)=(a+b\mu )^{k-1}\gamma_{X}(1). $$
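
As a sketch of the resulting autocovariance structure (hypothetical parameter values; \(\mu_{X}\) and \(E({X_{t}^{2}})\) may be obtained, e.g., from the previous sketch), \(\gamma_{X}(1)=E(X_{t}X_{t+1})-\mu_{X}^{2}\) with \(E(X_{t}X_{t+1})\) from the first display of this appendix, combined with the geometric decay just derived:

```python
# gamma_X(k) for k >= 1 via Proposition 2.2 (a sketch; Poisson(mu)
# innovations assumed, so sigma^2 = mu and E(e^3) = mu^3 + 3mu^2 + mu).

def autocovariance(k, a, b, mu, mu_X, EX2):
    sigma2 = mu
    Ee3 = mu**3 + 3 * mu**2 + mu
    # E(X_t X_{t+1}) from the first display of this appendix
    EXX1 = ((a + b * mu) * EX2 + mu_X * mu
            + b * (2 * sigma2 * (mu_X - mu) + Ee3 - mu * (mu**2 + sigma2)))
    gamma1 = EXX1 - mu_X**2                  # gamma_X(1)
    return (a + b * mu) ** (k - 1) * gamma1  # geometric decay in k

# mu_X = 2.4 matches (mu + b*sigma^2)/(1 - a - b*mu) for these values;
# EX2 = 15.35 is roughly the output of the second_moment() sketch above.
print([autocovariance(k, a=0.3, b=0.2, mu=1.0, mu_X=2.4, EX2=15.35)
       for k in (1, 2, 3)])
```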

Appendix D: Proof of Proposition 2.3

The one-step-ahead conditional expectation can be obtained directly. Since

$$ \begin{array}{@{}rcl@{}} E[X_{t+1}e_{t+1}|t] &=&E(e_{t+1}^{2})+ E(e_{t+1})E((a\diamond_{\theta} X_{t}+b\diamond_{\delta} X_{t}e_{t})|t) \\ &=&E(e_{t+1}^{2})+\mu (a+be_{t})X_{t}, \end{array} $$

we have

$$ \begin{array}{@{}rcl@{}} E(X_{t+2}|t) &=& E(a\diamond_{\theta} X_{t+1}+b\diamond_{\delta} X_{t+1}e_{t+1} +e_{t+2}|t) \\ &=& a E(X_{t+1}|t)+ b E (X_{t+1}e_{t+1}|t)+E(e_{t+2}) \\ &=& a E(X_{t+1}|t)+ b(\mu^{2} +\sigma^{2}+\mu (a+be_{t})X_{t})+\mu \\ &=& a E(X_{t+1}|t)+b\sigma^{2} +\mu +b\mu(\mu+ (a+be_{t})X_{t}). \end{array} $$

Using the one-step-ahead conditional expectation \(E[X_{t+1}|t] = \mu + (a + be_{t})X_{t}\), we can conclude that

$$ \begin{array}{@{}rcl@{}} E(X_{t+2}|t)=(a+b\mu)E[X_{t+1}|t]+b\sigma^{2} +\mu, \end{array} $$

and

$$ E(X_{t+3}|t)=(a+b\mu)E[X_{t+2}|t]+b\sigma^{2} +\mu. $$

Hence by induction, we conclude

$$ \begin{array}{@{}rcl@{}} E(X_{t+k}|t) &=&(a+b\mu)E(X_{t+k-1}|t)+b\sigma^{2} +\mu \\ &=&(a+b\mu)^{2}E(X_{t+k-2}|t)+(b\sigma^{2} +\mu)(1+(a+b\mu)) \\ &=&(a+b\mu)^{k-1}E(X_{t+1}|t)+(b\sigma^{2} +\mu)\sum\limits_{i=0}^{k-2}(a+b\mu)^{i}. \end{array} $$
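
This recursion translates directly into code. A minimal sketch (the parameter values are assumptions; \(X_{t}\) and \(e_{t}\) denote the observation and innovation at the forecast origin):

```python
# k-step-ahead conditional mean from Proposition 2.3 (a sketch):
#   E(X_{t+1}|t) = mu + (a + b*e_t) * X_t,
#   E(X_{t+k}|t) = (a + b*mu) * E(X_{t+k-1}|t) + b*sigma2 + mu  for k >= 2.

def forecast(k, x_t, e_t, a, b, mu, sigma2):
    pred = mu + (a + b * e_t) * x_t      # one-step-ahead conditional mean
    for _ in range(k - 1):               # iterate the recursion up to horizon k
        pred = (a + b * mu) * pred + b * sigma2 + mu
    return pred

print(forecast(k=3, x_t=4, e_t=1, a=0.3, b=0.2, mu=1.0, sigma2=1.0))
```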

Now we obtain an expression for \(E(X_{t+k}^{2}|t).\) One can easily find \( E(X_{t+1}^{2}|t) \). Also

$$ \begin{array}{@{}rcl@{}} E(X_{t+2}^{2}|t) &=& E([B_{t+1} + e_{t+2}]^{2}|t) \\ &=& E(B_{t+1}^{2}|t) +E(e_{t+2}^{2})+2E(B_{t+1}|t)E(e_{t+2}) \\ &=& a\theta E(X_{t+1}^{2}|t)+a(1-\theta)E(X_{t+1}|t)+b\delta E(X_{t+1}^{2}e_{t+1}^{2}|t)+b(1-\delta)E(X_{t+1}e_{t+1}|t) \\ &&+2abE(X_{t+1}^{2}e_{t+1}|t)+ E(e_{t+2}^{2}|t)+2\mu E(B_{t+1}|t), \end{array} $$

where \(E(X_{t+1}^{2}e_{t+1}|t)\) and \(E(X_{t+1}^{2}e_{t+1}^{2}|t)\) are obtained as

$$ E(X_{t+1}^{2}e_{t+1}|t)=E(e_{t+1})E({B_{t}^{2}}|t) +2E(e_{t+1}^{2})E(B_{t}|t)+E(e_{t+1}^{3}), $$
$$ E(X_{t+1}^{2}e_{t+1}^{2}|t)=E(e_{t+1}^{2})E({B_{t}^{2}}|t) +2E(e_{t+1}^{3})E(B_{t}|t)+E(e_{t+1}^{4}). $$

Substituting the above equations in \(E(X_{t+2}^{2}|t)\) and after some calculations, we have

$$ \begin{array}{@{}rcl@{}} E(X_{t+2}^{2}|t) &=& (a\theta + b\delta E(e_{t+1}^{2}) + 2ab \mu )E(X_{t+1}^{2}|t)+2abE(e_{t+1}^{3}) +b(1-\delta)E(X_{t+1}e_{t+1}|t)\\ &&+a(1-\theta)E(X_{t+1}|t) +b\delta(2E(e_{t+1}^{3})E(B_{t}|t) +E(e_{t+1}^{4}) )+4abE(e_{t+1}^{2})E(B_{t}|t) \\&&+(b\delta E(e_{t+1}^{2}) +2ab\mu)[-(2\mu E(B_{t}|t) +E(e_{t+1}^{2})) ] +2\mu E(B_{t+1}|t) +E(e_{t+2}^{2}). \end{array} $$

Also, we can find that

$$ \begin{array}{@{}rcl@{}} E(X_{t+3}^{2}|t) &=& (a\theta + b\delta E(e_{t+2}^{2}) + 2ab \mu )E(X_{t+2}^{2}|t)+b(1-\delta)E(X_{t+2}e_{t+2}|t) +2\mu E(B_{t+2}|t) \\ &&+a(1-\theta)E(X_{t+2}|t) +b\delta(2E(e_{t+2}^{3})E(B_{t+1}|t) +E(e_{t+2}^{4}) )+4a bE(e_{t+2}^{2})E(B_{t+1}|t) \\&&+2abE(e_{t+2}^{3}) +(b\delta E(e_{t+2}^{2}) +2ab\mu)[-(2\mu E(B_{t+1}|t) +E(e_{t+2}^{2}))]+E(e_{t+3}^{2}). \end{array} $$

So by induction and the properties of the i.i.d. process \(\{e_{t}, t{\in \mathbb {Z}}\}\), we can conclude that

$$ \begin{array}{@{}rcl@{}} E(X_{t+k}^{2}|t) &=&(a\theta + b\delta E({e_{t}^{2}}) + 2ab \mu )E(X_{t+k-1}^{2}|t)+2abE({e_{t}^{3}}) +b(1-\delta)E(X_{t+k-1}e_{t+k-1}|t) \\ &&+a(1-\theta)E(X_{t+k-1}|t) +b\delta(2E({e_{t}^{3}})E(B_{t+k-2}|t) +E({e_{t}^{4}}) )+4a bE({e_{t}^{2}})E(B_{t+k-2}|t) \\&&+(b\delta E(e_{t+k-1}^{2}) +2ab\mu)[-(2\mu E(B_{t+k-2}|t) +E(e_{t+k-1}^{2})) ]+2\mu E(B_{t+k-1}|t) +E(e_{t+k}^{2}), \end{array} $$

where \(E(B_{t+k-i}|t) = E(X_{t+k-i+1}|t)-\mu \) for i = 0, 1,...,k.

Appendix E

Some expectations used in the Yule-Walker method are calculated in this appendix.

Let \(B_{t-1} = a\diamond_{\theta} X_{t-1} + b\diamond_{\delta}X_{t-1}e_{t-1}\). Then \(B_{t-1} = X_{t}-e_{t}\) and we have

$$ \begin{array}{@{}rcl@{}} E(X_{t}e_{t}) &=&E((B_{t-1}+e_{t})e_{t})=E(B_{t-1})E(e_{t})+E({e_{t}^{2}}) =(E(X_{t})-\mu )\mu+E({e_{t}^{2}}) \\ &=&\mu_{X}\mu+\sigma^{2}, \end{array} $$
$$ \begin{array}{@{}rcl@{}} E({X_{t}^{2}}e_{t}) &=&E((B_{t-1}+e_{t})^{2}e_{t}) \\ &=&E(B_{t-1}^{2})E(e_{t})+E({e_{t}^{3}})+2E(B_{t-1})E({e_{t}^{2}}) \\ &=&[E({X_{t}^{2}})+\mu(-2E(X_{t})+\mu)-\sigma^{2}]\mu +E({e_{t}^{3}})+2[\mu_{X}- \mu](\sigma^{2}+\mu^{2}), \end{array} $$
$$ \begin{array}{@{}rcl@{}} E({X_{t}^{2}}{e_{t}^{2}})&=&E((B_{t-1}+e_{t})^{2}{e_{t}^{2}}) \\ &=&E(B_{t-1}^{2}) E({e_{t}^{2}}) +E({e_{t}^{4}})+2E({e_{t}^{3}}) E(B_{t-1}),\\ &=&[E({X_{t}^{2}})+\mu(-2\mu_{X}+\mu)-\sigma^{2}](\sigma^{2}+\mu^{2})+E({e_{t}^{4}})+2E({e_{t}^{3}})[\mu_{X}-\mu], \end{array} $$
$$ \begin{array}{@{}rcl@{}} E({X_{t}^{3}}e_{t})&=&E((B_{t-1}+e_{t})^{3}e_{t}) \\ &=&E(B_{t-1}^{3}) E(e_{t}) +E({e_{t}^{4}})+3E({e_{t}^{3}}) E(B_{t-1}) +3E({e_{t}^{2}}) E(B_{t-1}^{2}),\\ &=&[E({X_{t}^{3}})-E({e_{t}^{3}})-3[E({X_{t}^{2}})+\mu(-2\mu_{X}+\mu)-\sigma^{2}]\mu-3(\mu_{X}-\mu)(\sigma^{2}+\mu^{2})]\mu\\&&+E({e_{t}^{4}})+3E({e_{t}^{3}})[\mu_{X}-\mu]+3E({e_{t}^{2}})[E({X_{t}^{2}})+\mu(-2\mu_{X}+\mu)-\sigma^{2}], \end{array} $$

and

$$ \begin{array}{@{}rcl@{}} E({X_{t}^{3}}{e_{t}^{2}})&=&E((B_{t-1}+e_{t})^{3}{e_{t}^{2}}) \\ &=&E(B_{t-1}^{3}) E({e_{t}^{2}}) +E({e_{t}^{5}})+3E({e_{t}^{4}}) E(B_{t-1})+3E({e_{t}^{3}}) E(B_{t-1}^{2}),\\ &=&[E({X_{t}^{3}})-E({e_{t}^{3}})-3[E({X_{t}^{2}})+\mu(-2\mu_{X}+\mu)-\sigma^{2}]\mu-3(\mu_{X}-\mu)(\sigma^{2}+\mu^{2})](\sigma^{2}+\mu^{2})\\&&+E({e_{t}^{5}})+3E({e_{t}^{4}})[\mu_{X}-\mu] +3E({e_{t}^{3}})[E({X_{t}^{2}})+\mu(-2\mu_{X}+\mu)-\sigma^{2}]. \end{array} $$

It is noted that under the \(P(\mu)\) distribution of the innovations \(\{e_{t}\}\), we have

$$ \begin{array}{@{}rcl@{}} E({X_{t}^{3}}{e_{t}^{2}})&=&[E({X_{t}^{3}})-(\mu^{2}+\mu (1+\mu)^{2})-3[E({X_{t}^{2}})+\mu(-2\mu_{X}+\mu)-\mu]\mu -3(\mu_{X}-\mu) \\&&(\mu+\mu^{2})](\mu^{2}+\mu)+(3\mu^{3}+3\mu^{2}(1+\mu)^{2}+\mu(1+\mu)^{4}+2\mu^{2}(1+\mu)(2+\mu) \\&&+\mu^{2}(2+\mu)^{2})+3(\mu^{4}+6\mu^{3}+7\mu^{2}+\mu)(\mu_{X}-\mu)+3(\mu^{3}+3\mu^{2}+\mu )[E({X_{t}^{2}}) \\&&+\mu(-2\mu_{X}+\mu)-\mu], \\ && \\ E({X_{t}^{3}}e_{t})&=&[E({X_{t}^{3}})-(\mu^{2}+\mu (1+\mu)^{2})-3[E({X_{t}^{2}})+\mu(-2\mu_{X}+\mu)-\mu]\mu \\&& -3(\mu_{X}-\mu)(\mu+\mu^{2})]\mu+(2\mu^{2} (1 + \mu) + \mu (1 + \mu)^{3} + \mu^{2} (2 + \mu)) \\&&+3(\mu^{2} + \mu (1 + \mu)^{2})[\mu_{X}-\mu]+3(\mu^{2}+\mu)[E({X_{t}^{2}})+\mu(-2\mu_{X}+\mu)-\mu], \\ && \\ E({X_{t}^{2}}{e_{t}^{2}})&=&[E({X_{t}^{2}})+\mu(-2\mu_{X}+\mu)-\mu](\mu^{2}+\mu)+2(\mu^{3}+3\mu^{2}+ \mu)(\mu_{X}-\mu) \\&&+(\mu^{4}+6\mu^{3}+7\mu^{2}+\mu), \\ && \\ E({X_{t}^{2}}e_{t}) &=&[E({X_{t}^{2}})+\mu (-2\mu_{X}+\mu)-\mu] \mu+(\mu^{3}+3\mu^{2}+\mu)+2(\mu^{2}+\mu )(\mu_{X}-\mu), \\ && \\ E(X_{t}e_{t})&=&\mu_{X}\mu+\mu. \end{array} $$
(17)
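
The Poisson raw moments substituted into Eq. 17, e.g. \(E({e_{t}^{3}})=\mu^{3}+3\mu^{2}+\mu\) and \(E({e_{t}^{4}})=\mu^{4}+6\mu^{3}+7\mu^{2}+\mu\), can be verified with a quick Monte Carlo sketch (not part of the paper):

```python
# Monte Carlo check of the Poisson raw moments used in Eq. 17.
import numpy as np

rng = np.random.default_rng(0)
mu = 1.5
e = rng.poisson(mu, size=1_000_000).astype(float)

exact = {2: mu**2 + mu,
         3: mu**3 + 3 * mu**2 + mu,
         4: mu**4 + 6 * mu**3 + 7 * mu**2 + mu}
for p, ex in exact.items():
    print(p, (e ** p).mean(), ex)        # sample moment vs. exact moment
```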

About this article


Cite this article

Ramezani, S., Mohammadpour, M. Integer-valued Bilinear Model with Dependent Counting Series. Methodol Comput Appl Probab 24, 321–343 (2022). https://doi.org/10.1007/s11009-021-09853-x
