1 Introduction

In time series analysis autoregressive (AR) processes are widely used, for example as decorrelation filters (see Schubert et al. (2019), Schuh et al. (2014), or Schuh and Brockmann (2019)) or to estimate discrete covariances (see Schuh (2016, p. 32, eq. (182))). The transition to time variable AR processes (TVAR processes) has proven to be a suitable extension for non-stationary time series (see Charbonnier et al. (1987), or Kargoll et al. (2018)). In this paper we concentrate on TVAR(p) processes of order \(p=1\) or \(p=2\) and their estimation. The most common way to estimate TVAR processes is to choose the functional form of the coefficients directly, without going further into the characteristics of the processes. This has been done with trigonometric functions, modified Legendre polynomials, or even spheroidal sequences (see Grenier (1983), Hall et al. (1977), and Slepian (1978)). Kamen (1988) also studies the motion of the roots for time varying coefficients of a TVAR(2) process in the case that the first coefficient is constant and the second is a linear function. Here we go the other way around and determine the TVAR(p) coefficients assuming a linear model for the motion of the roots. This is done in seven sections. In Sect. 2 we derive the TVAR process and the associated auxiliary equations, which represent the connection between the coefficients and the roots. Section 3 proves that in the case of linear motions of the roots, the k-th coefficient of the TVAR process is a polynomial of order k. The polynomial representation results in a parameter change from the actual coefficients of the AR process to the coefficients of the individual polynomials. A new estimation equation is derived in Sect. 4. In order to guarantee that the motion of the roots is linear, the parameters must meet further conditions, which are derived in Sect. 5. The robustness of this estimation is tested on a simulation in Sect. 6.
A short summary of the paper as well as the results are presented in Sect. 7.

2 Relating Time Varying AR Process Coefficients to the Time Varying Roots

The definition of the time stable AR (TSAR) process can be found in a variety of textbooks. The process \(\mathcal {S}_t\) is called a time stable AR process of order p (TSAR(p) process) if it is described by the recursive equation

$$\displaystyle \begin{aligned} \mathcal{S}_t = \alpha_1 \mathcal{S}_{t-1} + \alpha_2 \mathcal{S}_{t-2} + ... + \alpha_p \mathcal{S}_{t-p} + \mathcal{E}_t, {} \end{aligned} $$
(1)

where \(\alpha _1\), \(\alpha _2\), ..., \(\alpha _p\) are the coefficients of the AR process and \(\mathcal {E}_t\) is an i.i.d. sequence with variance \(\sigma ^2_{\mathcal {E}}\) (see Hamilton (1994, p. 58, eq. (3.4.31))).

For an AR(p) process the auxiliary equation is defined by:

$$\displaystyle \begin{aligned} b(x) &= x^p-\alpha_1 x^{p-1} - \alpha_2 x^{p-2} - ... - \alpha_p \end{aligned} $$
(2)
$$\displaystyle \begin{aligned} & = (x-r_1)(x-r_2) ... (x-r_p). {} \end{aligned} $$
(3)

In (3) the \(r_k\) (with \(k=1,2,...,p\)) are the roots (\(b(x) \stackrel {!}{=} 0\)) of the auxiliary equation. These are either real valued or they appear as complex conjugate pairs. In Hamilton (1994, p. 34) it is shown that the AR process is stationary if and only if these roots lie inside the unit circle (\(\|r_k \|<1\)). For a time varying AR (TVAR) process of order p the definition is given by Kamen (1988) as:

$$\displaystyle \begin{aligned} \mathcal{S}_t = \alpha_1(t) \mathcal{S}_{t-1} + \alpha_2(t) \mathcal{S}_{t-2} + ... + \alpha_p(t) \mathcal{S}_{t-p} + \mathcal{E}_t, {} \end{aligned} $$
(4)

where \(\alpha _1(t)\), \(\alpha _2(t)\), ..., \(\alpha _p(t)\) are the time varying coefficients of the TVAR(p) process, which change their value with the time t, and \(\mathcal {E}_t\) remains an i.i.d. sequence with variance \(\sigma ^2_{\mathcal {E}}\). The TVAR process should be stationary at any fixed but arbitrary time \(\tau \) in a given interval I. This is equivalent to requiring \(\|r_k(\tau ) \| \stackrel {!}{<}1\) \(\forall ~ \tau \in I\). So for \(t=\tau \) the auxiliary equation can be computed by

$$\displaystyle \begin{aligned} b_\tau(x) &= x^p-\alpha_1(\tau) x^{p-1} - \alpha_2(\tau) x^{p-2} - ... - \alpha_p(\tau) {} \end{aligned} $$
(5)
$$\displaystyle \begin{aligned} & = (x-r_1(\tau))(x-r_2(\tau)) ... (x-r_p(\tau)). {} \end{aligned} $$
(6)

Expanding (6) into a polynomial again and comparing coefficients with (5), the coefficients \(\alpha _k(\tau )\) can be calculated directly:

$$\displaystyle \begin{aligned} &\alpha_k(t) \\ &= (-1)^{k+1} \sum_{m_1=1}^{p-k+1} \sum_{m_2 = m_1+1}^{p-k+2} \sum_{m_3 = m_2+1}^{p-k+3}\\ &\quad ... \sum_{m_k = m_{k-1}+1}^{p} r_{m_1}(t) r_{m_2}(t) r_{m_3}(t) ... r_{m_k}(t) {} \end{aligned} $$
(7)

This means that \(\alpha _k(\tau )\) can be written as the sum of all possible products of k different roots multiplied with \((-1)^{k+1}\).
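For a fixed time, relation (7) can be checked numerically. The following minimal sketch (assuming NumPy; the function name is ours, not from the paper) expands \((x-r_1)(x-r_2) ... (x-r_p)\) and reads off the AR coefficients by comparison with (2):

```python
import numpy as np

def ar_coeffs_from_roots(roots):
    """AR coefficients alpha_1..alpha_p for given roots r_1..r_p of Eq. (3)."""
    # np.poly expands (x - r_1)...(x - r_p) into the monic coefficients
    # [1, c_1, ..., c_p]; comparison with Eq. (2) gives alpha_k = -c_k.
    c = np.poly(roots)
    return -np.asarray(c)[1:].real  # imaginary parts cancel for conjugate pairs

# One real root and a complex conjugate pair, all inside the unit circle:
roots = [0.5, 0.3 + 0.4j, 0.3 - 0.4j]
alpha = ar_coeffs_from_roots(roots)
print(alpha)  # [1.1, -0.55, 0.125]: sum, -(pair products), +(triple product)
```

The printed values reproduce (7): \(\alpha_1\) is the sum of the roots, \(\alpha_2\) the negative sum of all products of two roots, and \(\alpha_3\) the product of all three.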

3 Derivation of the Time Varying AR(p) Process Coefficients from Linear Root Motions

To keep it simple, we assume a linear polynomial for the motion of the roots

$$\displaystyle \begin{aligned} r_k(t)=a_k+b_k t. {} \end{aligned} $$
(8)

Analogous to the time stable approach, the \(r_k(t)\) again occur as real roots or as pairs of complex conjugate roots, and therefore this also applies to \(a_k\) and \(b_k\). Since by (7) each \(\alpha _k(t)\) is a sum of products of k linear functions of t, it follows from the linear root motions in (8) that the coefficients \(\alpha _k(t)\) are polynomials of order k:

$$\displaystyle \begin{aligned} \alpha_k(t) = \sum_{j=0}^{k} \beta_j^{(k)} t^j. {} \end{aligned} $$
(9)

In this context the \(\beta _j^{(k)}\), with \(k \in \{1,2,...,p\}\) and \(j \in \{0,1,2,...,k\}\), is the \((j+1)\)-th parameter of the function \(\alpha _k(t)\). It should be mentioned that in this way the number of unknown parameters increases from p to \(\frac {p^2+3p}{2}\).

Unfortunately, the representation of coefficients by polynomials in (9) is not sufficient to guarantee linear root movements. Therefore, it is shown in Sect. 4 how the TVAR coefficients \(\beta _j^{(k)}\) are generally estimated and in Sect. 5 we derive the restrictions for linear root motions for the TVAR(1) and TVAR(2) process.

4 Parameter Estimation for TVAR Processes

In this section we show how the parameters \(\beta _j^{(k)}\) are estimated. First, the parameter vector of dimension \(\frac {p^{2}+3p}{2} \times 1\) is set up in ascending order of j. This means that the \(\beta _j^{(k)}\) belonging to the same \(\alpha _k(t)\) do not follow each other; instead, the \(\beta _j^{(k)}\) are sorted according to the order j of the monomials:

$$\displaystyle \begin{aligned} \boldsymbol{\beta} = \left[ \beta_0^{(1)}, ..., \beta_0^{(p)},\ \beta_1^{(1)}, ..., \beta_1^{(p)},\ \beta_2^{(2)}, ..., \beta_2^{(p)},\ ...,\ \beta_p^{(p)} \right]^T. {} \end{aligned} $$
(10)

With this reorganisation and using (9), the transformation between \(\boldsymbol{\alpha}(t)\) and \(\boldsymbol{\beta}\) is given by

$$\displaystyle \begin{aligned} \boldsymbol{\alpha}(t) = \boldsymbol{M}(t)\, \boldsymbol{\beta} {} \end{aligned} $$
(11)

with the transformation matrix

$$\displaystyle \begin{aligned} \boldsymbol{M}(t) = \begin{bmatrix} \boldsymbol{I}_p \vert\ t\, \boldsymbol{I}_p \vert\ t^2\, \boldsymbol{I}_p(1:p,2:p) \vert\ ... \vert\ t^p\, \boldsymbol{I}_p(1:p,p:p) \end{bmatrix}. {} \end{aligned} $$
(12)

We estimate the parameters directly from the observations by solving the least squares problem \(\mathcal {S}_t + \boldsymbol {v}_t = \boldsymbol {T} \boldsymbol {\alpha }(t)\). By equating the noise with the negative residuals (\(\mathcal {E}_t = -\boldsymbol {v}_t\)), this linear relationship between the TVAR coefficients \(\boldsymbol{\alpha }(t)\) and the observations \(\mathcal {S}_t\) follows from (4). So the design matrix \(\boldsymbol {T}\) is given by

$$\displaystyle \begin{aligned} \boldsymbol{T} = \begin{bmatrix} S_{p-1} & S_{p-2} & S_{p-3} & ... & S_{0}\\ S_{p} & S_{p-1} & S_{p-2} &... & S_{1}\\ \vdots & \vdots & \vdots & & \vdots\\ S_{n-1} & S_{n-2} & S_{n-3} & ... & S_{n-p} \end{bmatrix} \text{ with } n \ge \frac{p^2+3p}{2} > p. {} \end{aligned} $$
(13)

If we exchange the parameters for the LS problem from \(\alpha _k(t)\) to \(\beta _j^{(k)}\) like it is seen in (11), then the new estimation problem is given by

$$\displaystyle \begin{aligned} &\begin{bmatrix} \mathcal{S}_p \\ \mathcal{S}_{p+1} \\ \mathcal{S}_{p+2} \\ \vdots \\ \mathcal{S}_{n} \end{bmatrix} \stackrel{!}{=} \boldsymbol{T} \boldsymbol{M} \boldsymbol{\beta} \\ &\stackrel{!}{=} \begin{bmatrix} \boldsymbol{T} \vert \boldsymbol{T} \odot \boldsymbol{t} \vert \left(\boldsymbol{T} \odot \boldsymbol{t}.^2\right) (1:n,2:p) \vert ... \vert \left(\boldsymbol{T} \odot \boldsymbol{t}.^p\right) (1:n,p:p) \end{bmatrix} \boldsymbol{\beta}. {} \end{aligned} $$
(14)

Now \(\boldsymbol {t}\) is a vector containing the observation times. Here the l-th row of \(\boldsymbol {T} \odot \boldsymbol {t}\) results from the l-th row of \(\boldsymbol {T}\) multiplied by the l-th element in \(\boldsymbol {t}\), and \(\boldsymbol {t}.^h\) is the element-by-element exponentiation of \(\boldsymbol {t}\) to power h.
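As a minimal sketch of (13) and (14) (assuming NumPy; the function and variable names are ours, not from the paper), for \(p=2\) the blocks of the design matrix can be assembled and \(\boldsymbol{\beta}\) estimated by ordinary least squares:

```python
import numpy as np

def estimate_tvar2(s, t):
    """Estimate beta = (b0^(1), b0^(2), b1^(1), b1^(2), b2^(2)), cf. Eq. (14)."""
    p = 2
    n = len(s) - p
    # Lagged observations, one row per predicted sample, as in Eq. (13)
    T = np.column_stack([s[p - 1:p - 1 + n], s[p - 2:p - 2 + n]])
    tt = t[p:p + n][:, None]           # times of the predicted samples
    A = np.column_stack([
        T,                             # j = 0 block
        T * tt,                        # j = 1 block: T ⊙ t
        T[:, 1:] * tt**2,              # j = 2 block: only alpha_2 has a t^2 term
    ])
    beta, *_ = np.linalg.lstsq(A, s[p:p + n], rcond=None)
    return beta

# Synthetic check with known polynomial coefficients, cf. Eq. (9):
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 5000)
a1 = 0.5 + 0.2 * t                     # alpha_1(t), linear
a2 = -0.1 + 0.05 * t - 0.02 * t**2     # alpha_2(t), quadratic
s = np.zeros_like(t)
for i in range(2, len(t)):
    s[i] = a1[i] * s[i - 1] + a2[i] * s[i - 2] + 1e-3 * rng.standard_normal()
beta = estimate_tvar2(s, t)
print(beta)  # true values: 0.5, -0.1, 0.2, 0.05, -0.02
```

The ordering of the returned vector matches (10): first the constant terms of \(\alpha_1\) and \(\alpha_2\), then the linear terms, then the single quadratic term of \(\alpha_2\).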

5 Additional Conditions for Linear Root Motions

In this section we show which conditions must apply to the TVAR(1) and TVAR(2) estimation processes of Sect. 4 to result in linear root motion. For higher order TVAR processes, successive TVAR(1) and TVAR(2) processes are estimated in all possible combinations. The best combination is then found via the AIC for AR processes (see Buttkus (2000, p. 261, eq. (11.85))).

5.1 The TVAR(1) Process with Linear Motion of the Roots

Using (5) and (6) for the TVAR(1) process shows that \(r_1(\tau ) = \alpha _1(\tau )\). Since it follows from (9) that \(\alpha _1(t)\) is a linear function, the same is true for \(r_1(t)\). So every TVAR(1) process estimated by (14) has linear root motions.

5.2 The TVAR(2) Process with Linear Motions of the Roots

The analytical conversion from coefficients \(\alpha _1(\tau )\) and \(\alpha _2(\tau )\) to roots \(r_1(\tau )\) and \(r_2(\tau )\) for a TVAR process is given by the solution of the quadratic auxiliary equation (5), see Abramowitz and Stegun (1965, p. 17, eq. 3.8.1):

$$\displaystyle \begin{aligned} r_{1,2}(\tau) = \frac{\alpha_1(\tau)}{2} \pm \sqrt{\left(\frac{\alpha_1(\tau)}{2}\right)^2+\alpha_2(\tau)}. \end{aligned} $$
(15)

Because of (9), \(\alpha _1(t)\) is linear. But as we assume a linear root motion, the expression under the square root must be the square of a linear function, so that the sum of both terms remains linear:

$$\displaystyle \begin{aligned} \left(\frac{\alpha_1(\tau)}{2}\right)^2+\alpha_2(\tau) \stackrel{!}{=} (f+g \tau)^2. \end{aligned} $$
(16)

Both sides are quadratic polynomials in \(\tau \), which are equal if and only if each of the three polynomial coefficients is equal. The coefficient comparison leads to three conditions, two of which are used to determine f and g. After f and g have been inserted into the third condition, and the \(\alpha _k(\tau )\) have been replaced by the \(\beta _j^{(k)}\) (see (9)), the non-linear condition is given by

$$\displaystyle \begin{aligned} &\left(\beta_0^{(1)}\right)^2 \beta_2^{(2)} +\left(\beta_1^{(1)}\right)^2 \beta_0^{(2)} -\beta_0^{(1)} \beta_1^{(1)} \beta_1^{(2)}\\ &\quad +4\beta_0^{(2)} \beta_2^{(2)} - \left(\beta_1^{(2)}\right)^2 \stackrel{!}{=} 0. {} \end{aligned} $$
(17)

Adding this condition to the estimation in (14) leads to a TVAR(2) process with linear root motions.
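Condition (17) can be verified numerically. In the following sketch (assuming NumPy; the concrete numbers are purely illustrative), \(\alpha_2(t)\) is constructed from (16) for chosen f and g; condition (17) then holds and the roots of (5) move linearly, as required:

```python
import numpy as np

b0_1, b1_1 = 0.4, 0.3      # alpha_1(t) = 0.4 + 0.3 t
f, g = 0.2, 0.1            # right-hand side of Eq. (16)

# Coefficient comparison in Eq. (16) fixes the beta_j^(2):
b0_2 = f**2 - (b0_1 / 2)**2
b1_2 = 2 * f * g - b0_1 * b1_1 / 2
b2_2 = g**2 - (b1_1 / 2)**2

# Condition (17) is satisfied by construction:
cond = (b0_1**2 * b2_2 + b1_1**2 * b0_2 - b0_1 * b1_1 * b1_2
        + 4 * b0_2 * b2_2 - b1_2**2)
print(cond)  # 0 up to rounding

# The roots of x^2 - alpha_1(t) x - alpha_2(t) are linear in t, cf. Eq. (15):
for t in np.linspace(0, 1, 5):
    a1 = b0_1 + b1_1 * t
    a2 = b0_2 + b1_2 * t + b2_2 * t**2
    r = np.sort(np.roots([1, -a1, -a2]))
    expected = np.sort([a1 / 2 - (f + g * t), a1 / 2 + (f + g * t)])
    assert np.allclose(r, expected)
```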

6 Robustness of the Estimate Against Deviations

In this section we simulate 100 TVAR(3) processes, each consisting of 1000 observations. In each simulation, both the same linear root motions and the same standard deviation of the noise (\(\sigma _n = 10^{-3}\)) are used. The true roots are chosen as:

$$\displaystyle \begin{aligned} r_1 &= 0.2+0.9 \cdot t \\ r_{2,3} &= 0.3 \pm 0.6i + (0.5 \mp 0.2i) \cdot t \text{ with } t \in [0,1]. {} \end{aligned} $$
(18)

Furthermore, each TVAR process is initialized by p independent and identically distributed random variables with standard deviation \(\sigma _n\) and passes through a warm-up phase of 500 observations. To show the robustness of the TVAR estimate, the process is modelled 100 times with different noise realizations. As a reference we use the estimates of time stable AR coefficients under the assumption of a stationary process. (I.e. the TSAR coefficients are calculated using the Yule-Walker equations (Hamilton 1994, p. 59, eq. (3.4.36)).)
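The simulation setup above can be sketched as follows (assuming NumPy; the seed and variable names are ours): at every time step the TVAR(3) coefficients are recovered from the root motions (18), and the recursion (4) is applied after the warm-up phase.

```python
import numpy as np

rng = np.random.default_rng(42)
n, warmup, sigma_n = 1000, 500, 1e-3
# Warm-up at t = 0, then t runs through [0, 1]:
t = np.concatenate([np.zeros(warmup), np.linspace(0, 1, n)])

s = np.zeros(warmup + n)
s[:3] = sigma_n * rng.standard_normal(3)     # p = 3 i.i.d. starting values
for i in range(3, len(t)):
    r1 = 0.2 + 0.9 * t[i]                    # real root, Eq. (18)
    r2 = 0.3 + 0.6j + (0.5 - 0.2j) * t[i]    # one root of the conjugate pair
    # Coefficients from the roots via the expanded auxiliary equation (5):
    alpha = -np.asarray(np.poly([r1, r2, np.conj(r2)]))[1:].real
    s[i] = alpha @ s[i - 3:i][::-1] + sigma_n * rng.standard_normal()
s = s[warmup:]                               # discard the warm-up phase
print(s.shape)
```

Note that towards \(t=1\) the real root in (18) leaves the unit circle, which is exactly the issue raised in the outlook (Sect. 7).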

One of the 100 realizations can be seen in Fig. 1. Each dot in Fig. 2 shows one out of three roots of an AR process estimated from a window of 100 observations using the Yule-Walker equations. The change in brightness (from dark to light) visualizes the shift of the window; each new point represents a shift by one observation. The green lines represent the roots of the true TVAR process (from Eq. (18)), and it can be seen that they follow the estimated roots of the moving window. For all other estimates, the whole time series was used instead of a windowed approach.

Fig. 1 A time series for one set of white noise

Fig. 2 Roots of the windowed estimate, compared to the roots of the time variable estimate for the time series on the right

In Fig. 4 the estimated root motions of the TVAR estimate for all 100 simulations are shown. Comparing the roots of the TSAR process (Fig. 3) with the TVAR root motion (Fig. 4), it is noticeable that the roots of the TSAR process scatter around constant values, but the time-varying estimate tends to vary around the true root motions (which are shown in red).

Fig. 3 All root motions of the TSAR estimates for the 100 simulations

Fig. 4 All root motions of the TVAR estimates for the 100 simulations

Figure 5 shows the difference between one of the two estimation methods (TVAR (green) or TSAR (blue) estimation) and the true root movement. Instead of considering all 100 realizations individually, the deviations for each time are averaged over all realizations.

Fig. 5 Residuals between the roots from the TSAR estimate and the true roots (blue), and the residuals between the roots from the TVAR estimate and the true roots (green). Furthermore, a distinction is made between the complex roots, on the left side, and the real root, on the right side

It is immediately noticeable that the residuals for the real root in the TVAR estimate (right side of Fig. 5) are consistently smaller than in the TSAR estimate. Even in the case of the complex roots, the time variable estimate has on average smaller deviations, although the time stable estimate performs better in the interval \(t \in [130, 420]\). Due to the two dimensional representation in Fig. 4 it seems that smaller residuals occur for the real root than for the complex roots, but this is refuted by Fig. 5, where it can be seen that the residuals for the complex roots are smaller than those for the real root.

7 Conclusion and Outlook

In this paper we have shown that the use of TVAR processes with linear motions of the roots leads to an estimation where the k-th coefficient of the TVAR process is a polynomial of order k. But to construct linear root motions, additional conditions are necessary, which we derived here for the TVAR(1) and the TVAR(2) process. By successively calculating TVAR(1) and TVAR(2) processes, TVAR processes of higher order can also be estimated, whose roots then also move linearly. This can be seen directly in the example, where a TVAR(3) process was assembled from a TVAR(1) and a TVAR(2) process.

To show the robustness of the TVAR estimate, 100 time series were simulated, and the linear roots of the TVAR estimate were first compared with the roots of AR processes estimated on a moving window. Since the root motion of the TVAR estimate agrees on average with the roots computed by the moving window, the windowed estimate can be used as a test of whether the TVAR model with linear roots is suitable for a time series.

Second, both a TVAR and a TSAR process were estimated for each of the 100 simulated time series. The results show that the roots from the TVAR estimate fit the true roots better than the roots of the TSAR estimate. This means that the introduction of TVAR processes with linear root motions provides a suitable extension for time series analysis. The results also show that the estimation with the TVAR processes remains reasonably stable.

One problem that has gone unnoticed here is that the linear roots leave the unit circle over time. In order to solve this problem, future research should focus on root motions which guarantee stationarity for any length of time.