1 Introduction

In this paper we consider locally stationary processes, defined via a triangular sequence of stochastic processes \(\{\eta _{t,T}\}_{t=1,\ldots ,T}\) with \(T\in \mathbb {N}\), where every \(\eta _{t,T}\) has a representation of the form

$$\begin{aligned} \eta _{t,T} = \mu \left( \frac{t}{T}\right) + \sum _{j=0}^\infty \psi _{j,t,T}\varepsilon _{t-j},\quad t=1,\ldots ,T. \end{aligned}$$
(1)
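To fix ideas, the following is a minimal simulation sketch of (1) in Python, truncating the infinite sum at J lags. The trend \(\mu (u)=\sin (2\pi u)\) and the coefficient family \(\psi _{j,t,T} = (1+\tfrac{1}{2}t/T)\,\rho ^j\) are our own illustrative choices, not taken from the paper; they satisfy the assumptions imposed below with \(l(j)=\rho ^{-j}\).

```python
import numpy as np

def simulate_eta(T, J=200, rho=0.5, seed=0):
    """One path of (1), truncated at J lags.

    Illustrative choices (assumptions, not from the paper):
    mu(u) = sin(2*pi*u) and psi_{j,t,T} = (1 + u/2) * rho**j with u = t/T,
    so l(j) = rho**(-j) satisfies sum_j j/l(j) < infinity."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal(T + J)          # eps[i] plays the role of eps_{i-J+1}
    u = np.arange(1, T + 1) / T               # rescaled time points t/T
    weights = rho ** np.arange(J + 1)         # rho**j, j = 0..J
    eta = np.empty(T)
    for t in range(1, T + 1):
        lags = eps[J + t - 1 - np.arange(J + 1)]   # eps_{t-j}, j = 0..J
        eta[t - 1] = np.sin(2 * np.pi * u[t - 1]) + (1 + u[t - 1] / 2) * weights @ lags
    return eta

path = simulate_eta(T=500)
```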

Throughout this paper we impose the following assumption on the error sequence \(\{\varepsilon _t\}_{t\in \mathbb {Z}}\), on the moving average coefficients \(\psi _{j,t,T}\) and on the trend function \(\mu \).

Assumption 1.1

The random variables \(\{\varepsilon _t\}_{t\in \mathbb {Z}}\) are independent and identically distributed with \(\mathbb {E}\varepsilon _t=0\), \(\mathbb {E}\varepsilon _t^2=1\) and \(\mathbb {E}|\varepsilon _t|^{2+\kappa }<\infty \) for some \(\kappa >0\). The coefficients \(\psi _{j,t,T}\) in the moving average representation (1) fulfill

$$\begin{aligned} \sup _{t,T} |\psi _{j,t,T}| \le \frac{K}{l(j)}, \end{aligned}$$

with constant K independent of T and some positive deterministic sequence \(\{l(j)\}_{j\in \mathbb {N}_0}\) satisfying

$$\begin{aligned} \sum _{j=0}^\infty \frac{j}{l(j)} < \infty . \end{aligned}$$

The trend function \(\mu :[0,1]\rightarrow \mathbb {R}\) is assumed to be bounded and continuous almost everywhere.

Remark 1.2

In contrast to the definition of Dahlhaus and Polonik (2006) we restrict locally stationary processes to have a one-sided moving average representation. Nonetheless, our definition covers most of the important examples of locally stationary processes. For instance, it follows from Dahlhaus and Polonik (2009, Proposition 2.4) that time-varying causal ARMA processes have a representation of the form (1).
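To make this concrete, consider a tvAR(1) recursion \(\eta _{t,T} = a(t/T)\,\eta _{t-1,T} + \varepsilon _t\). Iterating the recursion suggests the one-sided coefficients \(\psi _{j,t,T} = \prod _{k=0}^{j-1} a((t-k)/T)\) with approximating functions \(\psi _j(u) = a(u)^j\). The following sketch computes these coefficients; the coefficient function \(a(u)=0.6+0.3u\) and the boundary convention \(a(u)=a(0)\) for \(u\le 0\) are our illustrative assumptions.

```python
import numpy as np

def tvar1_ma_coeffs(t, T, a, J):
    """psi_{j,t,T} = prod_{k=0}^{j-1} a((t-k)/T) for j = 0..J, using the
    (assumed) boundary convention a(u) = a(0) for u <= 0."""
    psi = np.empty(J + 1)
    psi[0] = 1.0
    for j in range(1, J + 1):
        psi[j] = psi[j - 1] * a(max(t - (j - 1), 0) / T)
    return psi

a = lambda u: 0.6 + 0.3 * u                   # sup |a| = 0.9 < 1: causal regime
psi_tT = tvar1_ma_coeffs(t=250, T=500, a=a, J=10)
psi_u = a(0.5) ** np.arange(11)               # psi_j(1/2) = a(1/2)**j
print(np.abs(psi_tT - psi_u).max())           # small gap of order 1/T
```

Since \(\sup _u|a(u)|<1\), the coefficients decay geometrically, so one may take \(l(j)=\rho ^{-j}\) for any \(\rho \in (\sup _u|a(u)|,1)\).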

The idea behind locally stationary processes is that, after rescaling the time domain to the unit interval, the process can be approximated locally in time by a stationary process. Therefore, one usually assumes that \(\psi _{j,t,T}\approx \psi _j(t/T)\) for some well-behaved functions \(\psi _j\).

Assumption 1.3

There exist functions \(\psi _j:[0,1]\rightarrow \mathbb {R}\) with

$$\begin{aligned} \Vert \psi _j\Vert _\infty&\le \frac{K}{l(j)},\\ V(\psi _j)&\le \frac{K}{l(j)} \end{aligned}$$

and

$$\begin{aligned} \sum _{t=1}^T \left| \psi _{j,t,T} - \psi _j\left( \frac{t}{T}\right) \right| \le \frac{K}{l(j)},\quad \text {for all }T\in \mathbb {N}, \end{aligned}$$
(2)

where V(f) denotes the total variation of a function f on [0, 1].

Remark 1.4

The coefficient functions are uniquely determined almost everywhere. To see this, let \(\{\eta _{t,T}\}_{t=1,\ldots ,T}\) be a locally stationary process with moving average coefficients \(\psi _{j,t,T}\) and corresponding coefficient functions \(\psi _j\), and let \(\phi _j\) be another set of coefficient functions that fulfills Assumption 1.3. Then it holds that

$$\begin{aligned} \Vert \psi _j-\phi _j\Vert _{L^1}&= \lim _{T\rightarrow \infty }\frac{1}{T}\sum _{t=1}^T\left| \psi _j\left( \frac{t}{T}\right) - \phi _j\left( \frac{t}{T}\right) \right| \\&\le \lim _{T\rightarrow \infty }\frac{1}{T}\left\{ \sum _{t=1}^T \left| \psi _{j,t,T} - \psi _j\left( \frac{t}{T}\right) \right| + \sum _{t=1}^T\left| \psi _{j,t,T} - \phi _j\left( \frac{t}{T}\right) \right| \right\} \\&\le \lim _{T\rightarrow \infty }\frac{2K}{Tl(j)} = 0, \end{aligned}$$

implying \(\psi _j=\phi _j\) almost everywhere.

For every \(u\in [0,1]\) we define the process \(\{\eta _t(u)\}_{t\in \mathbb {Z}}\) via

$$\begin{aligned} \eta _t(u) = \mu (u) + \sum _{j=0}^\infty \psi _j(u)\varepsilon _{t-j}. \end{aligned}$$

By Assumption 1.3 the centered process \(\{\eta _t(u)-\mu (u)\}_{t\in \mathbb {Z}}\) is weakly stationary with long-run variance given by \(\varPsi ^2(u)\), where

$$\begin{aligned} \varPsi (u) = \sum _{j=0}^\infty \psi _j(u). \end{aligned}$$

The main purpose of the process \(\{\eta _t(u)\}_{t\in \mathbb {Z}}\) is to approximate \(\{\eta _{t,T}\}_{t=1,\ldots ,T}\). In particular, the process \(\{\eta _{t,T}\}_{t=1,\ldots ,T}\) should behave approximately like the stationary process \(\{\eta _t(u)\}_{t\in \mathbb {Z}}\) at the rescaled time point \(u=t/T\). For brevity, we define the auxiliary process \(\{{\tilde{\eta }}_{t,T}\}_{t=1,\ldots ,T}\) via \({\tilde{\eta }}_{t,T} = \eta _t(t/T)\), i.e.

$$\begin{aligned} {\tilde{\eta }}_{t,T} = \mu \left( \frac{t}{T}\right) + \sum _{j=0}^\infty \psi _j\left( \frac{t}{T}\right) \varepsilon _{t-j}. \end{aligned}$$
(3)

Under the stated assumptions it holds that (cf. Lemma A.1 in the appendix)

$$\begin{aligned} \frac{1}{T}\sum _{t=1}^T\left( \eta _{t,T} - {\tilde{\eta }}_{t,T}\right) {\mathop {\rightarrow }\limits ^{P}}{0}, \end{aligned}$$

as \(T\rightarrow \infty \). Hence, the process \(\{\eta _t(u)\}_{t\in \mathbb {Z}}\) approximates the locally stationary process on average over all rescaled time points \(1/T, 2/T, \ldots ,1\). Later we will strengthen condition (2) in order to obtain a stronger approximation.
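This averaged approximation is easy to watch numerically. The sketch below reuses the hypothetical tvAR(1) family from the sketch after Remark 1.2 and evaluates \(T^{-1}\sum _{t=1}^T(\eta _{t,T}-{\tilde{\eta }}_{t,T})\) for growing T, truncating both moving averages at J lags.

```python
import numpy as np

def avg_gap(T, J=100, seed=0):
    """(1/T) * sum_t (eta_{t,T} - tilde_eta_{t,T}) for the hypothetical
    tvAR(1) family a(u) = 0.6 + 0.3*u; both MA sums truncated at J lags."""
    rng = np.random.default_rng(seed)
    a = lambda u: 0.6 + 0.3 * u
    eps = rng.standard_normal(T + J + 1)      # eps[i] plays the role of eps_{i-J}
    gap = 0.0
    for t in range(1, T + 1):
        lags = eps[J + t - np.arange(J + 1)]                 # eps_{t-j}, j = 0..J
        avals = a(np.maximum(t - np.arange(J), 0) / T)       # a((t-k)/T), k = 0..J-1
        psi_tT = np.concatenate(([1.0], np.cumprod(avals)))  # psi_{j,t,T}
        psi_u = a(t / T) ** np.arange(J + 1)                 # psi_j(t/T)
        gap += (psi_tT - psi_u) @ lags
    return gap / T

for T in (100, 1000, 10000):
    print(T, avg_gap(T))                      # shrinks as T grows
```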

Remark 1.5

The construction of locally stationary processes with time-dependent moving-average coefficients \(\psi _{j,t,T}\) on the one hand and approximating functions \(\psi _j\) on the other hand looks cumbersome at first glance. It seems more natural to define locally stationary processes directly via (3). However, it was already pointed out by Künsch (1995) and Dahlhaus and Polonik (2009) that this rules out interesting examples such as time-varying autoregressive processes.

Consider the stationary approximating process \(\{\eta _t(u)\}_{t\in \mathbb {Z}}\) for some fixed \(u\in [0,1]\). By the Beveridge–Nelson decomposition (cf. Phillips and Solo 1992) it holds that

$$\begin{aligned} \eta _t(u)&= \mu (u) + \sum _{j=0}^\infty \psi _j(u)\varepsilon _{t-j}\\&= \mu (u) + \left( \sum _{j=0}^\infty \psi _j(u)\right) \varepsilon _t - \sum _{j=0}^\infty \left( \sum _{k=j+1}^\infty \psi _k(u)\right) (\varepsilon _{t-j} - \varepsilon _{t-1-j}), \end{aligned}$$

which is well defined due to Assumptions 1.1 and 1.3. Setting \(u=t/T\) we obtain a time-varying Beveridge–Nelson decomposition for the auxiliary process \(\{{\tilde{\eta }}_{t,T}\}_{t=1,\ldots ,T}\).

Lemma 1.6

(Time-varying Beveridge–Nelson decomposition) The auxiliary process \(\{{\tilde{\eta }}_{t,T}\}_{t=1,\ldots ,T}\) admits a representation of the form

$$\begin{aligned} {\tilde{\eta }}_{t,T} = \mu \left( \frac{t}{T}\right) + \varPsi \left( \frac{t}{T}\right) \varepsilon _t - \sum _{j=0}^\infty {\tilde{\psi }}_j\left( \frac{t}{T}\right) (\varepsilon _{t-j}-\varepsilon _{t-1-j}), \end{aligned}$$

with

$$\begin{aligned} {\tilde{\psi }}_j(u) = \sum _{k=j+1}^\infty \psi _k(u). \end{aligned}$$

The time-varying Beveridge–Nelson decomposition will be useful for the derivation of the main results in this paper. In particular, we will use it to generalize the proof techniques of Phillips and Solo (1992) to the locally stationary framework.
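As a sanity check, the identity of Lemma 1.6 can be verified numerically at a fixed rescaled time point. Below, the geometric family \(\psi _j(u)=a(u)^j\) is again our illustrative assumption; then \(\varPsi (u)=(1-a(u))^{-1}\) and \({\tilde{\psi }}_j(u)=a(u)^{j+1}/(1-a(u))\), and truncating at J lags leaves only a tail of order \(a(u)^{J+1}\).

```python
import numpy as np

# Compare sum_j psi_j(u) eps_{t-j} with
# Psi(u) eps_t - sum_j tilde_psi_j(u) (eps_{t-j} - eps_{t-1-j})
# for the assumed geometric family psi_j(u) = a(u)**j.
rng = np.random.default_rng(1)
u, J = 0.3, 60
a = 0.6 + 0.3 * u
eps = rng.standard_normal(J + 2)              # eps[k] stands for eps_{t-k}
psi = a ** np.arange(J + 1)
Psi = 1.0 / (1.0 - a)
tilde_psi = a ** (np.arange(J + 1) + 1) / (1.0 - a)

lhs = psi @ eps[: J + 1]
rhs = Psi * eps[0] - tilde_psi @ (eps[: J + 1] - eps[1: J + 2])
print(abs(lhs - rhs))                         # ~ 1e-10, only the truncation tail
```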

2 Main results

The first limit theorem we present is a CLT for locally stationary processes. To motivate the result, we first derive it for a simple example. Let \(\{\eta _{t,T}\}_{t=1,\ldots ,T}\) be defined by

$$\begin{aligned} \eta _{t,T} = \phi \left( \frac{t}{T}\right) \varepsilon _t,\quad t=1,\ldots ,T, \end{aligned}$$

for some function \(\phi :[0,1]\rightarrow \mathbb {R}\) of bounded variation and a sequence \(\{\varepsilon _t\}_{t\in \mathbb {Z}}\) of independent and identically \({\mathcal {N}}(0,1)\) distributed random variables. Then, it holds that

$$\begin{aligned} \frac{1}{\sqrt{T}}\sum _{t=1}^T \eta _{t,T} \sim {\mathcal {N}}\left( 0,\frac{1}{T}\sum _{t=1}^T\phi ^2\left( \frac{t}{T}\right) \right) . \end{aligned}$$

Since \(\phi \) is of bounded variation, \(\phi ^2\) is Riemann integrable on the unit interval and it holds that

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{T}\sum _{t=1}^T\phi ^2\left( \frac{t}{T}\right) = \int _0^1\! \phi ^2(u)\, du \end{aligned}$$

and Lévy’s continuity theorem implies that

$$\begin{aligned} \frac{1}{\sqrt{T}}\sum _{t=1}^T \eta _{t,T} {\mathop {\rightarrow }\limits ^{d}} {\mathcal {N}}\left( 0,\int _0^1\! \phi ^2(u)\, du\right) . \end{aligned}$$
(4)

Note that the approximating stationary process \(\{\eta _t(u)\}_{t\in \mathbb {Z}}\) is defined by \(\eta _t(u)=\phi (u)\varepsilon _t\) with long-run variance given by \(\phi ^2(u)\). Hence, the variance of the limiting distribution in (4) is equal to the integrated long-run variance of the auxiliary process. This result also holds for arbitrary centered locally stationary processes.
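A short Monte Carlo illustration of (4), with the hypothetical choice \(\phi (u)=1+\cos (2\pi u)\), which is of bounded variation with \(\int _0^1\phi ^2(u)\,du = 3/2\):

```python
import numpy as np

# Empirical variance of T^{-1/2} sum_t phi(t/T) eps_t across replications,
# compared with int_0^1 phi(u)^2 du; phi(u) = 1 + cos(2*pi*u) is our choice.
rng = np.random.default_rng(2)
T, reps = 2000, 5000
u = np.arange(1, T + 1) / T
phi = 1 + np.cos(2 * np.pi * u)
sums = (phi * rng.standard_normal((reps, T))).sum(axis=1) / np.sqrt(T)
print(sums.var())                             # close to 1.5
print((phi ** 2).mean())                      # Riemann sum for the limit variance
```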

Theorem 2.1

(CLT) Let \(\{\eta _{t,T}\}_{t=1,\ldots ,T}\) be a locally stationary process with moving-average representation (1) that satisfies Assumptions 1.1 and 1.3. Then, as \(T\rightarrow \infty \), it holds that

$$\begin{aligned} \frac{1}{\sqrt{T}}\sum _{t=1}^T \left\{ \eta _{t,T} - \mu \left( \frac{t}{T}\right) \right\} {\mathop {\rightarrow }\limits ^{d}} {\mathcal {N}}\left( 0,\Vert \varPsi \Vert _{L^2}^2\right) , \end{aligned}$$

where \(\Vert \varPsi \Vert _{L^2}\) denotes the \(L^2\) norm of \(\varPsi \) on the unit interval.

Proof

It suffices to show the claim for the auxiliary process \(\{{\tilde{\eta }}_{t,T}\}_{t=1,\ldots ,T}\) since

$$\begin{aligned} \frac{1}{\sqrt{T}}\sum _{t=1}^T\eta _{t,T} = \frac{1}{\sqrt{T}}\sum _{t=1}^T{\tilde{\eta }}_{t,T} + \frac{1}{\sqrt{T}}\sum _{t=1}^T(\eta _{t,T}-{\tilde{\eta }}_{t,T}) \end{aligned}$$

and the second term vanishes in probability: by condition (2), \(\mathbb {E}\left| \sum _{t=1}^T(\eta _{t,T}-{\tilde{\eta }}_{t,T})\right| \le \mathbb {E}|\varepsilon _0|\sum _{j=0}^\infty K/l(j)\) is bounded uniformly in T (cf. Lemma A.1), so the difference is negligible even after normalization by \(T^{-1/2}\). Without loss of generality we assume that \(\mu (u) = 0\) for all \(u\in [0,1]\). By Lemma 1.6 it holds that

$$\begin{aligned} \frac{1}{\sqrt{T}}\sum _{t=1}^T{\tilde{\eta }}_{t,T} = \frac{1}{\sqrt{T}}\sum _{t=1}^T \varPsi \left( \frac{t}{T}\right) \varepsilon _t - \frac{1}{\sqrt{T}}\sum _{t=1}^T \sum _{j=0}^\infty {\tilde{\psi }}_j\left( \frac{t}{T}\right) (\varepsilon _{t-j}-\varepsilon _{t-1-j}). \end{aligned}$$
(5)

We show that the first term in (5) converges in distribution and the second term vanishes in probability. By the i.i.d. assumption on the innovation terms it holds that

$$\begin{aligned} \mathrm {Var}\left( \frac{1}{\sqrt{T}}\sum _{t=1}^T\varPsi \left( \frac{t}{T}\right) \varepsilon _t\right) = \frac{1}{T}\sum _{t=1}^T\varPsi ^2\left( \frac{t}{T}\right) \rightarrow \int _0^1\! \varPsi ^2(u)\, du. \end{aligned}$$

Next, we verify the Lyapunov condition. By Assumption 1.1 there exists some \(\kappa >0\) such that \(\mathbb {E}|\varepsilon _t|^{2+\kappa }\) is finite. If \(\Vert \varPsi \Vert _{L^2}=0\), the first term in (5) vanishes in \(L^2\) and the limit distribution is degenerate, so we may assume \(\Vert \varPsi \Vert _{L^2}>0\). Hence,

$$\begin{aligned}&\lim _{T\rightarrow \infty }\frac{\sum _{t=1}^T\mathbb {E}\left| \frac{1}{\sqrt{T}}\varPsi \left( \frac{t}{T}\right) \varepsilon _t\right| ^{2+\kappa }}{\left( \mathrm {Var}\left( \frac{1}{\sqrt{T}}\sum _{t=1}^T\varPsi \left( \frac{t}{T}\right) \varepsilon _t\right) \right) ^{1+\kappa /2}}\\&\quad = \lim _{T\rightarrow \infty }\frac{\mathbb {E}|\varepsilon _1|^{2+\kappa }}{T^{\kappa /2}}\lim _{T\rightarrow \infty }\frac{\frac{1}{T}\sum _{t=1}^T\left| \varPsi \left( \frac{t}{T}\right) \right| ^{2+\kappa }}{\left( \frac{1}{T}\sum _{t=1}^T\varPsi ^2\left( \frac{t}{T}\right) \right) ^{1+\kappa /2}} = 0. \end{aligned}$$

From the Lindeberg-CLT for triangular arrays we deduce that

$$\begin{aligned} \frac{1}{\sqrt{T}}\sum _{t=1}^T\varPsi \left( \frac{t}{T}\right) \varepsilon _t {\mathop {\rightarrow }\limits ^{d}}{\mathcal {N}}\left( 0,\int _0^1\! \varPsi ^2(u)\, du\right) . \end{aligned}$$

To finish the proof it remains to show that the second term in (5) goes to zero in probability. It holds that

$$\begin{aligned}&\sum _{j=0}^\infty {\tilde{\psi }}_j\left( \frac{t}{T}\right) (\varepsilon _{t-j}-\varepsilon _{t-1-j}) = \sum _{j=0}^\infty \left\{ {\tilde{\psi }}_j\left( \frac{t}{T}\right) \varepsilon _{t-j}-{\tilde{\psi }}_j\left( \frac{t}{T}\right) \varepsilon _{t-1-j}\right\} \nonumber \\&\quad =\sum _{j=0}^\infty \left\{ {\tilde{\psi }}_j\left( \frac{t}{T}\right) \varepsilon _{t-j} - {\tilde{\psi }}_j\left( \frac{t-1}{T}\right) \varepsilon _{t-1-j}\right. \nonumber \\&\qquad \left. +\ {\tilde{\psi }}_j\left( \frac{t-1}{T}\right) \varepsilon _{t-1-j}-{\tilde{\psi }}_j\left( \frac{t}{T}\right) \varepsilon _{t-1-j}\right\} \nonumber \\&\quad =\sum _{j=0}^\infty \left\{ {\tilde{\psi }}_j\left( \frac{t}{T}\right) \varepsilon _{t-j} - {\tilde{\psi }}_j\left( \frac{t-1}{T}\right) \varepsilon _{t-1-j}\right\} \nonumber \\&\qquad + \sum _{j=0}^\infty \left\{ {\tilde{\psi }}_j\left( \frac{t-1}{T}\right) \varepsilon _{t-1-j}-{\tilde{\psi }}_j\left( \frac{t}{T}\right) \varepsilon _{t-1-j}\right\} . \end{aligned}$$
(6)

Taking the partial sum of the first term and dividing by \(T^{1/2}\) leads to

$$\begin{aligned}&\frac{1}{\sqrt{T}}\sum _{t=1}^{T} \sum _{j=0}^\infty \left\{ {\tilde{\psi }}_j\left( \frac{t}{T}\right) \varepsilon _{t-j} - {\tilde{\psi }}_j\left( \frac{t-1}{T}\right) \varepsilon _{t-1-j}\right\} \nonumber \\&\quad = \frac{1}{\sqrt{T}}\sum _{j=0}^\infty \left\{ {\tilde{\psi }}_j\left( 1\right) \varepsilon _{T-j}-{\tilde{\psi }}_j(0)\varepsilon _{-j}\right\} , \end{aligned}$$
(7)

as the sum over t telescopes. Since

$$\begin{aligned} \mathbb {E}\left| \sum _{j=0}^\infty {\tilde{\psi }}_j(u)\varepsilon _{t-j}\right|&\le \sum _{j=0}^\infty \sup _{u\in [0,1]}|{\tilde{\psi }}_j(u)|\,\mathbb {E}|\varepsilon _1|\\&\le \sum _{j=0}^\infty \sum _{k=j+1}^\infty \sup _{u\in [0,1]}|\psi _k(u)|\,\mathbb {E}|\varepsilon _1| \le \sum _{j=0}^\infty \frac{j K \mathbb {E}|\varepsilon _1|}{l(j)}<\infty \end{aligned}$$

uniformly in \(u\in [0,1]\) and \(t\in \mathbb {Z}\), it follows from Markov's inequality that the term on the right-hand side of (7) converges to zero in probability.

It remains to prove that the scaled partial sum of the second term in (6) also vanishes asymptotically. It holds that

$$\begin{aligned}&\mathbb {E}\left| \frac{1}{\sqrt{T}}\sum _{t=1}^T \sum _{j=0}^\infty \left\{ {\tilde{\psi }}_j\left( \frac{t}{T}\right) - {\tilde{\psi }}_j\left( \frac{t-1}{T}\right) \right\} \varepsilon _{t-1-j}\right| \\&\quad \le \frac{1}{\sqrt{T}}\sum _{t=1}^T\sum _{j=0}^\infty \left| {\tilde{\psi }}_j\left( \frac{t}{T}\right) -{\tilde{\psi }}_j\left( \frac{t-1}{T}\right) \right| \mathbb {E}|\varepsilon _{1}|\\&\quad \le \frac{1}{\sqrt{T}}\sum _{j=0}^\infty V({\tilde{\psi }}_j)\mathbb {E}|\varepsilon _1|, \end{aligned}$$

which converges to zero if the \(V({\tilde{\psi }}_j)\) are summable. Using the definition of the total variation we obtain

$$\begin{aligned} \sum _{j=0}^\infty V({\tilde{\psi }}_j)&= \sum _{j=0}^\infty \sup _{\begin{array}{c} 0\le x_1<\ldots<x_M\le 1 \\ M\in \mathbb {N} \end{array}}\sum _{i=1}^{M-1}|{\tilde{\psi }}_j(x_{i+1}) - {\tilde{\psi }}_j(x_i)|\\&\le \sum _{j=0}^\infty \sum _{k=j+1}^\infty \sup _{\begin{array}{c} 0\le x_1<\ldots <x_M\le 1 \\ M\in \mathbb {N} \end{array}}\sum _{i=1}^{M-1}|\psi _k(x_{i+1}) - \psi _k(x_i)|\\&= \sum _{j=0}^\infty \sum _{k=j+1}^\infty V(\psi _k)\\&\le \sum _{j=0}^\infty \frac{jK}{l(j)}, \end{aligned}$$

which is finite by Assumptions 1.1 and 1.3. \(\square \)
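Theorem 2.1 can also be checked in simulation beyond the toy example. For the hypothetical tvAR(1) family used above, \(\varPsi (u)=(1-a(u))^{-1}\), so with \(a(u)=0.6+0.3u\) the limit variance is \(\Vert \varPsi \Vert _{L^2}^2 = \int _0^1 (0.4-0.3u)^{-2}\,du = 25\). The burn-in with the coefficient frozen at its left endpoint is a pragmatic stand-in for the infinite-past moving average.

```python
import numpy as np

# Monte Carlo check of the CLT variance for the assumed tvAR(1) example:
# eta_{t,T} = a(t/T) * eta_{t-1,T} + eps_t with a(u) = 0.6 + 0.3*u and mu = 0.
rng = np.random.default_rng(3)
T, reps, burn = 2000, 5000, 300
u = np.arange(1, T + 1) / T
a = 0.6 + 0.3 * u
eta = np.zeros(reps)                          # one AR state per replication
s = np.zeros(reps)
for t in range(-burn, T):
    coef = a[max(t, 0)]                       # frozen at a(1/T) during burn-in
    eta = coef * eta + rng.standard_normal(reps)
    if t >= 0:
        s += eta
print((s / np.sqrt(T)).var())                 # empirical variance of the sums
print((1.0 / (1.0 - a) ** 2).mean())          # Riemann sum for ||Psi||^2 = 25
```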

From Theorem 2.1 we immediately obtain a WLLN.

Corollary 2.2

(WLLN) Let \(\{\eta _{t,T}\}_{t=1,\ldots ,T}\) be a locally stationary process defined via its moving-average representation (1) with Assumptions 1.1 and 1.3 in place. Then, as \(T\rightarrow \infty \), it holds that

$$\begin{aligned} \frac{1}{T}\sum _{t=1}^T \left\{ \eta _{t,T} - \mu \left( \frac{t}{T}\right) \right\} {\mathop {\rightarrow }\limits ^{P}} 0. \end{aligned}$$

In order to prove a SLLN and a LIL we require a stronger assumption connecting the moving average coefficients \(\psi _{j,t,T}\) and the approximating functions \(\psi _j\). The following assumption, which immediately implies condition (2), corresponds to assumption (69) in Dahlhaus (2012).

Assumption 2.3

The functions \(\psi _j\) and the moving average coefficients \(\psi _{j,t,T}\) satisfy

$$\begin{aligned} \sup _{1\le t\le T} \left| \psi _{j,t,T} - \psi _j\left( \frac{t}{T}\right) \right| \le \frac{K}{Tl(j)},\quad \text {for all } T\in \mathbb {N}. \end{aligned}$$

Previously, we observed that the stationary process \(\{\eta _t(u)\}_{t\in \mathbb {Z}}\) approximates the locally stationary process on average over the series. Under Assumption 2.3 we obtain a better approximation, namely \(\eta _{t,T} - {\tilde{\eta }}_{t,T} = {\mathcal {O}}_P(T^{-1})\) uniformly in t. This follows from Markov's inequality together with the bound

$$\begin{aligned} \mathbb {E}\left| \eta _{t,T} - {\tilde{\eta }}_{t,T}\right| \le \sum _{j=0}^\infty \left| \psi _{j,t,T} - \psi _j\left( \frac{t}{T}\right) \right| \mathbb {E}|\varepsilon _{t-j}| \le \frac{K\mathbb {E}|\varepsilon _0|}{T}\sum _{j=0}^\infty \frac{1}{l(j)} = {\mathcal {O}}\left( \frac{1}{T}\right) . \end{aligned}$$

Consequently, the stationary process \(\{\eta _t(u)\}_{t\in \mathbb {Z}}\) approximates the locally stationary process \(\{\eta _{t,T}\}_{t=1,\ldots ,T}\) at every rescaled time point \(u=t/T\). In fact, we even have the following strong approximation. It holds that

$$\begin{aligned} \sup _{1\le t\le T}|\eta _{t,T} - {\tilde{\eta }}_{t,T}| {\mathop {\rightarrow }\limits ^{a.s.}}{0}, \end{aligned}$$
(8)

as \(T\rightarrow \infty \) (cf. Lemma A.2 in the appendix).
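Both the rate in Assumption 2.3 and, indirectly, the uniform approximation (8) can be inspected numerically. For the hypothetical tvAR(1) family, \(\psi _{j,t,T} = \prod _{k=0}^{j-1} a((t-k)/T)\) and \(\psi _j(u)=a(u)^j\); with \(l(j)=\rho ^{-j}\) for some \(\rho \in (\sup _u a(u),1)\), the scaled deviations \(T\,|\psi _{j,t,T}-\psi _j(t/T)|\,l(j)\) should remain bounded in T.

```python
import numpy as np

# max over t and j of T * |psi_{j,t,T} - psi_j(t/T)| / rho**j for the assumed
# tvAR(1) family; boundedness in T mirrors Assumption 2.3 with l(j) = rho**(-j).
a = lambda u: 0.6 + 0.3 * u
rho, J = 0.95, 60                             # any rho in (sup a, 1) = (0.9, 1)
for T in (100, 1000, 10000):
    worst = 0.0
    for t in range(1, T + 1):
        avals = a(np.maximum(t - np.arange(J), 0) / T)
        psi_tT = np.concatenate(([1.0], np.cumprod(avals)))
        psi_u = a(t / T) ** np.arange(J + 1)
        worst = max(worst, (T * np.abs(psi_tT - psi_u) / rho ** np.arange(J + 1)).max())
    print(T, worst)                           # roughly constant in T
```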

Theorem 2.4

(SLLN) Let \(\{\eta _{t,T}\}_{t=1,\ldots ,T}\) be a locally stationary process defined via its moving-average representation (1) with Assumptions 1.1, 1.3 and 2.3 in place. Then, as \(T\rightarrow \infty \), it holds that

$$\begin{aligned} \frac{1}{T}\sum _{t=1}^T \eta _{t,T} {\mathop {\rightarrow }\limits ^{a.s.}} \int _0^1\! \mu (u)\, du. \end{aligned}$$

Proof

It suffices to show the claim for the auxiliary process \(\{{\tilde{\eta }}_{t,T}\}_{t=1,\ldots ,T}\) since the strong approximation (8) implies that

$$\begin{aligned} \frac{1}{T}\sum _{t=1}^T \eta _{t,T} = \frac{1}{T}\sum _{t=1}^T {\tilde{\eta }}_{t,T} + o_{a.s.}(1). \end{aligned}$$

The trend function \(\mu \) is assumed to be bounded and continuous almost everywhere. By Lebesgue's criterion it is Riemann integrable and we immediately deduce that

$$\begin{aligned} \lim _{T\rightarrow \infty }\frac{1}{T}\sum _{t=1}^T \mu \left( \frac{t}{T}\right) = \int _0^1\! \mu (u)\, du. \end{aligned}$$

It remains to show that

$$\begin{aligned} \frac{1}{T}\sum _{t=1}^T \sum _{j=0}^\infty \psi _j\left( \frac{t}{T}\right) \varepsilon _{t-j} {\mathop {\rightarrow }\limits ^{a.s.}}0. \end{aligned}$$

Using Lemma 1.6, we first verify that

$$\begin{aligned} \frac{1}{T}\sum _{t=1}^T\varPsi \left( \frac{t}{T}\right) \varepsilon _t {\mathop {\rightarrow }\limits ^{a.s.}} 0. \end{aligned}$$
(9)

It holds that

$$\begin{aligned} \sup _{u\in [0,1]}\left| \varPsi (u)\right| \le \sup _{u\in [0,1]}\sum _{j=0}^\infty |\psi _j(u)| \le \sum _{j=0}^\infty \sup _{u\in [0,1]}\left| \psi _j(u)\right| \le \sum _{j=0}^\infty \frac{K}{l(j)}<\infty . \end{aligned}$$

Since the \(\varepsilon _t\)’s are independent and identically distributed with \(\mathbb {E}(\varepsilon _1)=0\) and \(\mathbb {E}\varepsilon _1^2<\infty \), the almost sure convergence in (9) follows from Cuzick (1995, Theorem 1.1) or Choi and Sung (1987, Theorem 5).

It remains to show that

$$\begin{aligned}&\frac{1}{T}\sum _{j=0}^\infty {\tilde{\psi }}_j(1)\varepsilon _{T-j} {\mathop {\rightarrow }\limits ^{a.s.}}0, \end{aligned}$$
(10)
$$\begin{aligned}&\frac{1}{T}\sum _{j=0}^\infty {\tilde{\psi }}_j(0)\varepsilon _{-j} {\mathop {\rightarrow }\limits ^{a.s.}}0 \end{aligned}$$
(11)

and

$$\begin{aligned} \frac{1}{T}\sum _{t=1}^T\sum _{j=0}^\infty \left\{ {\tilde{\psi }}_j\left( \frac{t-1}{T}\right) - {\tilde{\psi }}_j\left( \frac{t}{T}\right) \right\} \varepsilon _{t-1-j}{\mathop {\rightarrow }\limits ^{a.s.}}0. \end{aligned}$$
(12)

It holds that

$$\begin{aligned} \mathbb {E}\left( \frac{1}{T}\sum _{j=0}^\infty {\tilde{\psi }}_j(1)\varepsilon _{T-j} \right) ^2&= \frac{1}{T^2}\sum _{j=0}^\infty {\tilde{\psi }}_j(1)^2\\&= \frac{1}{T^2}\sum _{j=0}^\infty \left( \sum _{i=j+1}^\infty \psi _i(1)\right) ^2\\&= {\mathcal {O}}\left( \frac{1}{T^2}\right) , \end{aligned}$$

since the sequence \(\{{\tilde{\psi }}_j(1)\}_{j\in \mathbb {N}_0}\) is absolutely and hence square summable:

$$\begin{aligned} \sum _{j=0}^\infty |{\tilde{\psi }}_j(1)| \le \sum _{j=0}^\infty \sum _{i=j+1}^\infty \Vert \psi _i\Vert _\infty = \sum _{j=0}^\infty j\Vert \psi _j\Vert _\infty <\infty . \end{aligned}$$

Hence, by Chebyshev's inequality the probabilities \(\mathbb {P}\left( \left| \frac{1}{T}\sum _{j=0}^\infty {\tilde{\psi }}_j(1)\varepsilon _{T-j}\right| >\epsilon \right) \) are summable in T and almost sure convergence in (10) follows from the Borel–Cantelli lemma. The proof of (11) is identical. At last we have to show (12). It holds that

$$\begin{aligned}&\mathbb {E}\left( \frac{1}{T}\sum _{t=1}^T\sum _{j=0}^\infty \left\{ {\tilde{\psi }}_j\left( \frac{t-1}{T}\right) - {\tilde{\psi }}_j\left( \frac{t}{T}\right) \right\} \varepsilon _{t-1-j}\right) ^2\\&\quad \le \frac{1}{T^2} \sum _{j_1,j_2=0}^\infty \sum _{t_1,t_2=1}^T \left| {\tilde{\psi }}_{j_1}\left( \frac{t_1-1}{T}\right) - {\tilde{\psi }}_{j_1}\left( \frac{t_1}{T}\right) \right| \left| {\tilde{\psi }}_{j_2}\left( \frac{t_2-1}{T}\right) - {\tilde{\psi }}_{j_2}\left( \frac{t_2}{T}\right) \right| \\&\quad \le \frac{1}{T^2} \sum _{j=0}^\infty V({\tilde{\psi }}_j) \sum _{k=0}^\infty V({\tilde{\psi }}_k) = {\mathcal {O}}\left( \frac{1}{T^2}\right) . \end{aligned}$$

Hence, the second moment of the term in (12) converges sufficiently fast to zero, and almost sure convergence follows, again by Chebyshev's inequality and the Borel–Cantelli lemma.\(\square \)
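An illustrative run of Theorem 2.4 with the hypothetical trend \(\mu (u)=u\) (so that \(\int _0^1\mu (u)\,du = 1/2\)) and tvAR(1) noise. Note that, due to the triangular-array structure, each T comes with its own path, so the sequence of sample means is only indicative of the almost sure statement.

```python
import numpy as np

# Sample means of single paths for growing T; they approach int_0^1 mu(u) du.
# mu(u) = u and the tvAR(1) noise with a(u) = 0.6 + 0.3*u are our assumptions.
rng = np.random.default_rng(4)
for T in (10**3, 10**4, 10**5):
    u = np.arange(1, T + 1) / T
    a = 0.6 + 0.3 * u
    eps = rng.standard_normal(T)
    x, total = 0.0, 0.0
    for t in range(T):
        x = a[t] * x + eps[t]                 # centered locally stationary noise
        total += u[t] + x                     # eta_{t,T} = mu(t/T) + noise
    print(T, total / T)                       # -> 0.5
```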

Our last result is a LIL. In order to prove the theorem we impose an additional moment condition on the sequence \(\{\varepsilon _t\}_{t\in \mathbb {Z}}\): we assume that the fourth moment of \(\varepsilon _t\) is finite.

Theorem 2.5

(LIL) Let \(\{\eta _{t,T}\}_{t=1,\ldots ,T}\) be a locally stationary process with Assumptions 1.1, 1.3 and 2.3 in place and let \(d_T=T\log \log T\). Assume further that the innovation sequence \(\{\varepsilon _t\}_{t\in \mathbb {Z}}\) satisfies \(\mathbb {E}\varepsilon _t^4=\mu _4<\infty \). Then it holds that

$$\begin{aligned} \limsup _{T\rightarrow \infty } \frac{1}{\sqrt{d_T}} \sum _{t=1}^T \left\{ \eta _{t,T} - \mu \left( \frac{t}{T}\right) \right\} {\mathop {=}\limits ^{a.s.}} \sqrt{2}\Vert \varPsi \Vert _{L^2}. \end{aligned}$$

Proof

By Lemma A.3 it suffices to show the claim for the auxiliary process \(\{{\tilde{\eta }}_{t,T}\}_{t=1,\ldots ,T}\). Following the lines of the proof of Theorem 2.4 we first prove that

$$\begin{aligned} \limsup _{T\rightarrow \infty } \frac{1}{\sqrt{d_T}}\sum _{t=1}^T \varPsi \left( \frac{t}{T}\right) \varepsilon _t {\mathop {=}\limits ^{a.s.}} \sqrt{2}\Vert \varPsi \Vert _{L^2}. \end{aligned}$$

Since \(\{\varepsilon _t\}_{t\in \mathbb {Z}}\) is a sequence of independent random variables with finite variance, the claim follows immediately from Tomkins (1975, Theorem 1), Wichura (1973, page 279) and Lai and Wei (1982, Corollary 2). Therefore, it remains to prove that

$$\begin{aligned}&\frac{1}{\sqrt{d_T}}\sum _{j=0}^\infty {\tilde{\psi }}_j(1)\varepsilon _{T-j} {\mathop {\rightarrow }\limits ^{a.s.}}0, \end{aligned}$$
(13)
$$\begin{aligned}&\frac{1}{\sqrt{d_T}}\sum _{j=0}^\infty {\tilde{\psi }}_j(0)\varepsilon _{-j} {\mathop {\rightarrow }\limits ^{a.s.}}0 \end{aligned}$$
(14)

and

$$\begin{aligned} \frac{1}{\sqrt{d_T}}\sum _{t=1}^T\sum _{j=0}^\infty \left( {\tilde{\psi }}_j\left( \frac{t-1}{T}\right) - {\tilde{\psi }}_j\left( \frac{t}{T}\right) \right) \varepsilon _{t-1-j} {\mathop {\rightarrow }\limits ^{a.s.}}0. \end{aligned}$$
(15)

In contrast to the proof of Theorem 2.4 it is not sufficient to investigate the second moments of these terms, since \(d_T^{-1}\) is not summable over T. However, the argument can be adapted using fourth moments. For the term in (13) it holds that

$$\begin{aligned} \mathbb {E}\left( \sum _{j=0}^\infty {\tilde{\psi }}_j(1)\varepsilon _{T-j}\right) ^4&=\sum _{j_1,\ldots ,j_4=0}^\infty \left\{ \prod _{m=1}^4 {\tilde{\psi }}_{j_m}(1)\right\} \mathbb {E}\left( \prod _{m=1}^4\varepsilon _{T-j_m}\right) \\&\le \mu _4\left( \sum _{j=0}^\infty |{\tilde{\psi }}_j(1)| \right) ^4, \end{aligned}$$

since \(\left| \mathbb {E}\left( \prod _{m=1}^4\varepsilon _{T-j_m}\right) \right| \le \mu _4\) by the generalized Hölder inequality, implying

$$\begin{aligned} \sum _{T=3}^\infty \mathbb {E}\left( \frac{1}{\sqrt{d_T}}\sum _{j=0}^\infty {\tilde{\psi }}_j(1)\varepsilon _{T-j}\right) ^4 \le \sum _{T=3}^\infty \frac{C}{T^2(\log \log T)^2} < \infty \end{aligned}$$

and, by the Borel–Cantelli lemma, almost sure convergence. The claim in (14) is proven in exactly the same way. To show (15) consider

$$\begin{aligned}&\mathbb {E}\left( \frac{1}{\sqrt{d_T}}\sum _{t=1}^T\sum _{j=0}^\infty \left( {\tilde{\psi }}_j\left( \frac{t-1}{T}\right) - {\tilde{\psi }}_j\left( \frac{t}{T}\right) \right) \varepsilon _{t-1-j}\right) ^4\\&\quad = \frac{1}{d_T^2}\sum _{t_1,\ldots ,t_4=1}^T\sum _{j_1,\ldots ,j_4=0}^\infty \left\{ \prod _{m=1}^4\left( {\tilde{\psi }}_{j_m}\left( \frac{t_m-1}{T}\right) - {\tilde{\psi }}_{j_m}\left( \frac{t_m}{T}\right) \right) \right\} \mathbb {E}\left( \prod _{m=1}^4\varepsilon _{t_m-1-j_m}\right) \\&\quad \le \frac{\mu _4}{d_T^2}\sum _{j_1,\ldots ,j_4=0}^\infty \prod _{m=1}^4\sum _{t_m=1}^T\left| {\tilde{\psi }}_{j_m}\left( \frac{t_m-1}{T}\right) - {\tilde{\psi }}_{j_m}\left( \frac{t_m}{T}\right) \right| \\&\quad \le \frac{\mu _4}{d_T^2}\left( \sum _{j=0}^\infty V({\tilde{\psi }}_j)\right) ^4. \end{aligned}$$

The claim follows by the same arguments as above.\(\square \)

If the coefficients \(\psi _{j,t,T}\) do not depend on t and T, the statement of Theorem 2.5 coincides with the LIL for linear processes proven by Phillips and Solo (1992).
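In that stationary special case the LIL normalization can at least be eyeballed in simulation. The sketch below tracks the running maximum of \(|S_T|/\sqrt{d_T}\) for an iid \({\mathcal {N}}(0,1)\) path (so \(\varPsi \equiv 1\)); due to the \(\log \log \) rate the running maximum approaches \(\sqrt{2}\approx 1.414\) only very slowly, and \(10^6\) steps typically leave it visibly below the limit.

```python
import numpy as np

# Running maximum of |S_T| / sqrt(T * log log T) for an iid N(0,1) path;
# Theorem 2.5 with Psi = 1 predicts limsup = sqrt(2). Convergence is glacial.
rng = np.random.default_rng(5)
N = 10**6
s = np.cumsum(rng.standard_normal(N))         # S_1, ..., S_N
T = np.arange(3, N + 1)                       # log log T requires T >= 3
ratio = np.abs(s[2:]) / np.sqrt(T * np.log(np.log(T)))
running_max = np.maximum.accumulate(ratio)
print(running_max[[10**2, 10**4, N - 3]])     # creeps up toward sqrt(2)
```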