Neglected serial correlation tests in UCARIMA models

We derive computationally simple and intuitive score tests of neglected serial correlation in unobserved component univariate models using frequency domain techniques. In some common situations in which the alternative model information matrix is singular under the null, we derive one-sided extremum tests, which are asymptotically equivalent to likelihood ratio tests, and explain how to compute reliable Wald tests. We also explicitly relate the incidence of those problems to the model identification conditions and compare our tests with tests based on the reduced form prediction errors. Our Monte Carlo exercises assess the finite sample reliability and power of our proposed tests.


Introduction
The superposition of Arima time series models forms the basis of two dominant approaches to the classical decomposition of a univariate time series into trend, cyclical, seasonal and irregular components: the reduced form "model-based" decomposition analysed by Box et al. (1978) and Pierce (1978) and further extended by Agustín Maravall and his co-authors, and the so-called "structural time series" models studied by Nerlove (1967), Engle (1978) and Nerlove et al. (1979) and subsequently developed by Andrew Harvey and his co-authors.
In both cases, the model parameters are estimated by maximising the Gaussian log-likelihood function of the observed data, which can be readily obtained either as a by-product of the Kalman filter prediction equations or from Whittle's (1962) frequency domain asymptotic approximation. Once the parameters have been estimated, filtered values of the unobserved components can be extracted by means of the Kalman smoother or its Wiener-Kolmogorov counterpart. These estimation and filtering issues are well understood (see Harvey 1989; Durbin and Koopman 2012 for textbook treatments), and the same can be said of their efficient numerical implementation (see Commandeur et al. 2011 and the references therein).
In contrast, specification tests for these models are far less well known. While sophisticated users will often look at several diagnostics, such as the ones suggested by Maravall (1987, 1999, 2003), or the ones computed by the Stamp software package following Harvey and Koopman (1992) (see Koopman et al. 2009 for further details), formal tests are hardly ever reported in empirical work. One particularly relevant issue is the correct specification of the parametric Arima models for the unobserved components, as the various outputs of the model could be misleading under misspecified dynamics.
The objective of our paper is precisely to derive tests for neglected serial correlation in the underlying elements of univariate unobserved components (Ucarima) models. For computational reasons, we focus most of our discussion on score tests, which only require estimation of the model under the null. As is well known, though, in standard situations likelihood ratio (LR), Wald and Lagrange multiplier (LM) tests are asymptotically equivalent under the null and sequences of local alternatives, and therefore they share their optimality properties. Another important advantage of score tests is that they often coincide with tests of easy-to-interpret moment conditions (see Newey 1985; Tauchen 1985), which will continue to have non-trivial power even in situations for which they are not optimal.
Earlier work on specification testing in unobserved component models includes Engle and Watson (1980), who explained how to apply the LM testing principle in the time domain for dynamic factor models with static factor loadings, Harvey (1989), who provides a detailed discussion of time domain and frequency domain testing methods in the context of univariate "structural time series" models, and Fernández (1990), who applied the LM principle in the frequency domain to a multivariate structural time series model. More recently, in a companion paper (Fiorentini and Sentana 2013) we have derived tests for neglected serial correlation in the latent variables of dynamic factor models using frequency domain techniques.
In the specific context of Ucarima models, the contribution of this paper is threefold.
First, we propose dynamic specification tests which are very simple to implement, and even simpler to interpret. Once a model has been specified and estimated, the tests that we propose can be routinely computed from simple statistics of the smoothed values of the innovations of the different components. And even though our theoretical derivations make extensive use of spectral methods for time series, we provide both time domain and frequency domain interpretations of the relevant scores, so researchers who strongly prefer one method over the other could apply them without abandoning their favourite estimation techniques.
Second, we provide a thorough discussion of some common situations in which the standard form of LM tests cannot be computed because the information matrix of the alternative model is singular under the null. In those irregular cases, we derive versions of the score tests that remain asymptotically equivalent to the LR tests, which become one-sided, and explain how to compute asymptotically reliable Wald tests. We also explicitly relate the incidence of those problems to the identification conditions for Ucarima models, and highlight that they contradict the widely held view that increases in the Ma and Ar polynomials of the same order provide locally equivalent alternatives in univariate tests for serial correlation (see e.g. Godfrey 1988).
Third, we compare dynamic specification tests for the underlying components with tests based on the reduced form prediction errors. In this regard, we study their relative power and discuss some cases in which they are numerically equivalent.
The rest of the paper is organised as follows. In Sect. 2, we review the properties of Ucarima models, their estimators and filters. Then, in Sect. 3 we derive our tests and discuss their potential pitfalls, comparing them to reduced form tests in Sect. 4. This is followed by a Monte Carlo evaluation of their finite sample behaviour in Sect. 5. Finally, our conclusions can be found in Sect. 6. Auxiliary results are gathered in Appendices.

Theoretical background
As we have just mentioned, in this section we formally introduce Ucarima models, obtain their reduced form representation, review maximum likelihood estimation in the frequency domain, apply Wiener-Kolmogorov filtering theory to optimally extract the unobserved components and derive the time series properties of the smoothed series.

UCARIMA models
To keep the notation to a minimum, throughout the paper we focus on models for a univariate observed series y_t that can be defined in the time domain by the equations

y_t = μ + x_t + u_t,
α_x(L) x_t = β_x(L) f_t,
α_u(L) u_t = β_u(L) v_t,
(f_t, v_t | I_{t−1}; μ, θ) ~ N[0, diag(σ²_f, σ²_v)],

where x_t is the "signal" component, u_t the orthogonal "non-signal" component, α_x(L) and α_u(L) are one-sided polynomials of orders p_x and p_u, respectively, while β_x(L) and β_u(L) are one-sided polynomials of orders q_x and q_u coprime with α_x(L) and α_u(L), respectively, I_{t−1} is an information set that contains the values of y_t and x_t up to, and including, time t − 1, μ is the unconditional mean and θ refers to all the remaining model parameters. Importantly, we maintain the assumption that the researcher makes sure that the parameters θ are identified before estimating the model under the null.

Hotta (1989) provides a systematic way to check for identification (see Maravall 1979 for closely related results). Specifically, let c denote the degree of the greatest common divisor of the polynomials α_x(L) and α_u(L), so that they share c common roots. Then, the Ucarima model above will be identified (except at a set of parameter values of measure 0) when there are no restrictions on the Ar and Ma polynomials if and only if either p_x ≥ q_x + c + 1 or p_u ≥ q_u + c + 1, so that at least one of the components must be a "top-heavy" Arma process in the terminology of Burman (1980) (i.e. a process in which the Ar order exceeds the Ma one). Given the exchangeability of the signal and non-signal components in the formulation above, in what follows we assume without loss of generality that this identification condition is satisfied by the signal component. In particular, we assume that p_x ≥ q_x + c + 1, that p_x − q_x ≥ p_u − q_u and, in case of equality, that p_x ≥ p_u.
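As an illustration, the model above is easy to simulate in the time domain. The sketch below is our own (the function name and parameter values are hypothetical, not taken from the paper) and covers the special case of a pure Ar(2) signal plus a white noise non-signal component:

```python
import numpy as np

def simulate_ucarima(T, ar_x, sigma_f, sigma_v, mu=0.0, seed=0):
    """Simulate y_t = mu + x_t + u_t, where alpha_x(L) x_t = f_t is a pure
    AR signal and u_t = v_t is white noise (illustrative special case)."""
    rng = np.random.default_rng(seed)
    p = len(ar_x)
    burn = 200                      # burn-in to remove start-up effects
    x = np.zeros(T + burn)
    f = rng.normal(0.0, sigma_f, T + burn)
    for t in range(p, T + burn):
        x[t] = np.dot(ar_x, x[t - p:t][::-1]) + f[t]
    u = rng.normal(0.0, sigma_v, T + burn)  # orthogonal non-signal component
    return mu + x[burn:] + u[burn:]

y = simulate_ucarima(500, ar_x=[1.4, -0.5], sigma_f=1.0, sigma_v=0.5)
```

The Ar coefficients used here correspond to a stationary signal with complex roots inside the unit circle.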
In this paper we are interested in hypothesis tests of p_x = p⁰_x vs p_x = p⁰_x + k_x, or p_u = p⁰_u vs p_u = p⁰_u + k_u, as well as the analogous hypotheses for q_x and q_u. For simplicity, we focus most of the discussion on those cases in which k_x and k_u are in fact 1, which leads to the following four hypotheses of interest:

Sar1: p_x = p⁰_x + 1;  Sma1: q_x = q⁰_x + 1;  Nar1: p_u = p⁰_u + 1;  Nma1: q_u = q⁰_u + 1.

Given that they raise no additional issues, extensions to higher k_x and k_u are only briefly discussed in Sect. 3.1 below, as well as in our concluding remarks.

Reduced form representation of the model
Unobserved component models can readily handle integrated variables, but for simplicity of exposition in what follows we maintain the assumption that y t is a covariance stationary process, possibly after suitable differencing, as in Appendix 1.
Under stationarity, the spectral density of the observed variable is simply

g_yy(λ) = g_xx(λ) + g_uu(λ),

where

g_xx(λ) = (σ²_f/2π) |β_x(e^{−iλ})|² / |α_x(e^{−iλ})|²  and  g_uu(λ) = (σ²_v/2π) |β_u(e^{−iλ})|² / |α_u(e^{−iλ})|².

It follows that the reduced form model will be an Arma process with maximum orders p_y = p_x + p_u for the Ar polynomial α_y(.) = α_x(.)α_u(.) and q_y = max(p_x + q_u, q_x + p_u) for the Ma polynomial β_y(.). Cancellation will trivially occur when α_x(.) and α_u(.) share c common roots, but there could also be other cases (see Granger and Morris 1976 for further details). The coefficients of β_y(L), as well as σ²_a, which is the variance of the univariate Wold innovations a_t, are obtained by matching autocovariances (see Fiorentini and Planas 1998 for a comparison of numerical methods). Assuming strict invertibility of the Ma part, we could then obtain the reduced form innovations a_t from the observed process by means of the one-sided filter α_y(e^{−iλ})/β_y(e^{−iλ}).
But as is well known, these reduced form residuals can also be obtained from the prediction equations of the Kalman filter without making use of the expressions for α y (.) or β y (.).
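The order bookkeeping just described can be captured in a few lines; the helper below (our own, with a hypothetical name) returns the maximum reduced-form orders before any common-root cancellation:

```python
def reduced_form_orders(p_x, q_x, p_u, q_u):
    """Maximum ARMA orders of the reduced form of the sum of two orthogonal
    ARMA components (Granger and Morris 1976), before cancellation:
    p_y = p_x + p_u and q_y = max(p_x + q_u, q_x + p_u)."""
    return p_x + p_u, max(p_x + q_u, q_x + p_u)
```

For example, an Ar(2) signal plus white noise has an Arma(2, 2) reduced form, while an Ar(1) signal plus white noise has an Arma(1, 1) reduced form.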

Maximum likelihood estimation in the frequency domain
Let

I_yy(λ) = (1/2πT) |Σ_{t=1}^{T} (y_t − μ) e^{−iλt}|²

denote the periodogram of y_t and λ_j = 2πj/T (j = 0, ..., T − 1) the usual Fourier frequencies. If we assume that g_yy(λ) is not zero at any of those frequencies, the so-called Whittle (discrete) spectral approximation to the log-likelihood function is

−(T/2) ln(2π) − (1/2) Σ_{j=0}^{T−1} ln g_yy(λ_j) − (1/2) Σ_{j=0}^{T−1} I_yy(λ_j)/g_yy(λ_j).

The MLE of μ, which only enters through I_yy(λ), is the sample mean, so in what follows we focus on demeaned variables. In turn, the score with respect to all the remaining parameters is

(1/2) Σ_{j=0}^{T−1} [∂g_yy(λ_j)/∂θ] g_yy^{−2}(λ_j) [I_yy(λ_j) − g_yy(λ_j)].

The information matrix is block diagonal between μ and the elements of θ, with the (1, 1) element being g_yy(0) and the (2, 2) block

Q = (1/4π) ∫_{−π}^{π} [∂g_yy(λ)/∂θ] [∂g_yy(λ)/∂θ*] g_yy^{−2}(λ) dλ,   (8)

where * denotes the conjugate transpose of a matrix. A consistent estimator will be provided either by the outer product of the score or by the sample counterpart of (8) over the Fourier frequencies,

(1/2T) Σ_{j=0}^{T−1} [∂g_yy(λ_j)/∂θ] [∂g_yy(λ_j)/∂θ*] g_yy^{−2}(λ_j).   (9)

In fact, by selecting an artificially large value for T in (9), one can approximate (8) to any desired degree of accuracy. Formal results showing the strong consistency and asymptotic normality of the resulting ML estimators of dynamic latent variable models under suitable regularity conditions were provided by Dunsmuir (1979), who generalised earlier results for Varma models by Dunsmuir and Hannan (1976). These authors also show the asymptotic equivalence between time and frequency domain ML estimators.
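The Whittle approximation is straightforward to code. The sketch below is our own (the function names are hypothetical) and keeps only the frequency-dependent terms of the objective function:

```python
import numpy as np

def whittle_loglik(y, g):
    """Whittle's discrete spectral approximation to the Gaussian
    log-likelihood (up to an additive constant); g is the spectral density
    evaluated at the Fourier frequencies lambda_j = 2*pi*j/T."""
    T = len(y)
    yc = y - y.mean()                             # MLE of mu is the sample mean
    I = np.abs(np.fft.fft(yc))**2 / (2*np.pi*T)   # periodogram I_yy(lambda_j)
    return -0.5*np.sum(np.log(g) + I/g)

def spec_ar1_plus_noise(T, alpha, sf2, sv2):
    """g_yy on the Fourier grid for an AR(1) signal plus white noise."""
    lam = 2*np.pi*np.arange(T)/T
    return sf2/(2*np.pi)/np.abs(1 - alpha*np.exp(-1j*lam))**2 + sv2/(2*np.pi)
```

The closed-form g_yy used in the second function corresponds to the Ar(1) plus white noise special case that reappears in Sect. 3.4.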

The (Kalman-)Wiener-Kolmogorov filter
By working in the frequency domain we can easily obtain smoothed estimators of the latent variables too. Specifically, let

y_t − μ = ∫_{−π}^{π} e^{iλt} dZ_y(λ)

denote the spectral decomposition of the observed process. The Wiener-Kolmogorov two-sided filter for the signal x_t at each frequency is given by g_xx(λ)/g_yy(λ). Hence, the spectral density of the smoother x^K_{t|T} as T → ∞ will be

g_{x^K x^K}(λ) = g²_xx(λ)/g_yy(λ) = R²_xx(λ) g_xx(λ), where R²_xx(λ) = g_xx(λ)/g_yy(λ),   (10)

while the spectral density of the final estimation error x_t − x^K_{t|∞} will be given by

g_xx(λ) g_uu(λ)/g_yy(λ) = [1 − R²_xx(λ)] g_xx(λ).   (11)

It is easily seen that g_{x^K x^K}(λ) will approach g_xx(λ) at those frequencies for which g_xx(λ) is large relative to g_uu(λ), i.e. frequencies with a high signal to noise ratio.
In this regard, we can view R²_xx(λ) as a frequency-by-frequency coefficient of determination.
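As a numerical illustration of this decomposition (our own sketch, with illustrative parameter values), one can verify frequency by frequency that the spectra of the smoother and of the final estimation error add up to g_xx(λ):

```python
import numpy as np

# Frequency-by-frequency signal extraction gain for an AR(1) signal plus
# white noise (illustrative parameter values of our own choosing)
lam = np.linspace(-np.pi, np.pi, 1001)
alpha, sf2, sv2 = 0.8, 1.0, 0.5

g_xx = sf2/(2*np.pi)/np.abs(1 - alpha*np.exp(-1j*lam))**2
g_uu = np.full_like(lam, sv2/(2*np.pi))
g_yy = g_xx + g_uu

R2 = g_xx/g_yy                   # frequency-by-frequency R^2
g_smoother = R2*g_xx             # spectrum of the smoother x^K
g_error = (1 - R2)*g_xx          # spectrum of the final estimation error
```

Because g_uu(λ) > 0 at every frequency here, the smoother's spectrum lies strictly below g_xx(λ), which is the variance underestimation discussed in Sect. 2.5.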
Having smoothed y_t to estimate x_t, we can easily obtain the smoother for f_t, f^K_{t|∞}, by applying to x^K_{t|∞} the one-sided filter

α_x(e^{−iλ})/β_x(e^{−iλ}).   (12)

Likewise, we can derive its spectral density, as well as the spectral density of its final estimation error f_t − f^K_{t|∞}. Entirely analogous derivations apply to the non-signal component u_t, with the peculiarity that x^K_{t|∞} + u^K_{t|∞} = y_t − μ, so that the two smoothers add up to the demeaned observed series. Finally, we can obtain the autocovariances of x^K_{t|∞}, f^K_{t|∞}, u^K_{t|∞}, v^K_{t|∞} and their final estimation errors by applying the usual inverse Fourier transformation

γ(k) = ∫_{−π}^{π} g(λ) e^{iλk} dλ.
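The inverse Fourier transformation can be approximated on the Fourier grid with a single inverse FFT; the helper below is our own sketch (hypothetical name):

```python
import numpy as np

def autocov_from_spectrum(g, n_lags):
    """Approximate gamma(k) = int_{-pi}^{pi} g(lambda) e^{i lambda k} dlambda
    by a Riemann sum over the grid lambda_j = 2*pi*j/N, j = 0, ..., N-1.
    np.fft.ifft supplies the 1/N factor, so we only multiply by 2*pi."""
    gamma = np.fft.ifft(g).real * 2*np.pi
    return gamma[:n_lags + 1]

# AR(1) check: gamma(1)/gamma(0) should be close to the AR coefficient
lamg = 2*np.pi*np.arange(1024)/1024
g_ar1 = 1/(2*np.pi)/np.abs(1 - 0.5*np.exp(-1j*lamg))**2
gam = autocov_from_spectrum(g_ar1, 2)
```

The grid approximation aliases γ(k) with γ(k ± N), γ(k ± 2N), ..., which is negligible for stationary components once N is moderately large.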

Autocorrelation structure of the smoothed variables
As we have seen in the previous section, smoothed values of the latent variables are the result of optimal symmetric two-sided filters. As a consequence, their serial correlation structure is generally different from that of the unobserved state variables. To see the difference between the spectra of the signal and its estimators, recall that (10) implies that g x K x K (λ) < g x x (λ) for any λ ∈ (−π, π) for which g uu (λ) > 0. Therefore, the variance of the optimal estimator will underestimate the variance of the unobserved signal, as expected.
As argued by Maravall (1987, 1999, 2003), the serial dependence structure of the estimators of the unobserved components can be a useful tool for model diagnostics. Large discrepancies between theoretical and empirical autocovariance functions of those estimators can be interpreted as an indication of model misspecification. On this basis, Maravall (1987) suggested a (Gaussian) parametric bootstrap procedure to obtain confidence intervals for the empirical autocovariances of a single smoothed innovation. Similarly, Maravall (2003) derived expressions for the asymptotic variance of the sampling variances and autocorrelations of the smoothed components using classic results for linear stationary Gaussian processes (see e.g. Lomnicki and Zaremba 1959 or Anderson and Walker 1964). However, in both instances his main objective was to propose useful model diagnostics rather than deriving the null distribution of a formal statistical test. As we shall see in Sect. 3.2, our LM tests carry out the comparison between theoretical and empirical autocovariance functions of the smoothed components in a very precise statistical sense, taking into account both the sampling variability of the estimators of the parameters of the null model and the potential rank failure of the information matrix of the alternative model.
In this regard, an important advantage of our frequency domain approach is that we implicitly compute the required autocovariances without explicitly obtaining the time processes for the unobserved components. Nevertheless, for pedagogical purposes it is of interest to understand those processes.
Given (10), we can write the spectral density of x^K_{t|∞} as

(σ⁴_f/2πσ²_a) |β_x(e^{−iλ})|⁴ |α_u(e^{−iλ})|² / [|α_x(e^{−iλ})|² |β_y(e^{−iλ})|²],

which corresponds to an Arma(p_x + q_y, p_u + 2q_x) process in the absence of cancellation. Hence, the spectral density of the final estimation error x_t − x^K_{t|∞} in (11) will be

(σ²_f σ²_v/2πσ²_a) |β_x(e^{−iλ})|² |β_u(e^{−iλ})|² / |β_y(e^{−iλ})|²,

which shares the structure of an Arma(q_y, q_x + q_u) under the same circumstances. In turn, the application of (12) to x^K_{t|∞} implies that the spectral density of f^K_{t|∞} will be

(σ⁴_f/2πσ²_a) |β_x(e^{−iλ})|² |α_u(e^{−iλ})|² / |β_y(e^{−iλ})|²,

which suggests an Arma(q_y, p_u + q_x) process, while

(σ²_f σ²_v/2πσ²_a) |α_x(e^{−iλ})|² |β_u(e^{−iλ})|² / |β_y(e^{−iλ})|²

points instead to an Arma(q_y, p_x + q_u) for the final estimation error f_t − f^K_{t|∞}. There are special cases, however, in which the resulting models for the smoothed values of the unobserved variables and their innovations are much simpler. For example, if the signal follows a purely autoregressive process and the non-signal component is white noise, so that β_x(L) = α_u(L) = β_u(L) = 1, then these four spectral densities reduce to

(σ⁴_f/2πσ²_a) / [|α_x(e^{−iλ})|² |β_y(e^{−iλ})|²],  (σ²_f σ²_v/2πσ²_a) / |β_y(e^{−iλ})|²,
(σ⁴_f/2πσ²_a) / |β_y(e^{−iλ})|²,  and  (σ²_f σ²_v/2πσ²_a) |α_x(e^{−iλ})|² / |β_y(e^{−iλ})|²,

respectively, with p_y = q_y = p_x. Once again, entirely analogous derivations apply to the non-signal component u^K_{t|∞}.

Neglected serial correlation tests
In this section we begin by reviewing tests for neglected serial correlation in observable processes. Then, we derive the analogous tests for unobserved components, taking into account that the model parameters must be estimated under the null. Next, we investigate the non-standard situations that arise in Ucarima models which become underidentified under some of the alternatives that we consider. We conclude by providing a step-by-step procedure for the benefit of practitioners. For simplicity, we maintain the assumption that there are no common roots in the autoregressive polynomials of the signal and non-signal components.

Testing for serial correlation in univariate observable processes
For pedagogical purposes, let us initially assume that x_t is an observable univariate time series that has been modelled as an Ar(2) process. A natural generalisation is

(1 − ψ_x L) α_x(L) x_t = f_t,   α_x(L) = 1 − α_{x1} L − α_{x2} L²,

so that the null becomes H_0: ψ_x = 0. Under the alternative, the spectral density of x_t is

g_xx(λ) = (σ²_f/2π) |α_x(e^{−iλ})|^{−2} |1 − ψ_x e^{−iλ}|^{−2}.

Hence, the derivative of g_xx(λ) with respect to ψ_x under the null is 2 cos λ · g_xx(λ). As a result, the spectral version of the score with respect to ψ_x under H_0 is

Σ_j cos λ_j [I_xx(λ_j)/g_xx(λ_j) − 1] = Σ_j cos λ_j I_xx(λ_j)/g_xx(λ_j),

where we have exploited the fact that Σ_j cos λ_j = 0. Given that I_xx(λ)/g_xx(λ) = 2π I_ff(λ)/σ²_f under the null, where I_ff(λ) denotes the periodogram of the innovations f_t, the spectral version of the score becomes

(2π/σ²_f) Σ_j cos λ_j I_ff(λ_j).

This is a multiplicative alternative. Instead, we could test H_0: α_{x3} = 0 in the additive alternative with α_x(L) = 1 − α_{x1} L − α_{x2} L² − α_{x3} L³. In that case, it would be more convenient to reparametrise the model in terms of partial autocorrelations (see Barndorff-Nielsen and Schou 1973). We stick to multiplicative alternatives, which are more closely related to Ma alternatives.
In turn, the time domain version of the score will be (asymptotically proportional to) Σ_t f_t f_{t−1}. Therefore, the spectral LM test of Ar(2) versus Ar(3) simply checks that the first sample (circulant) autocovariance of f_t, which are the innovations in the observed process, coincides with its theoretical value under H_0, exactly like the usual Breusch (1978) and Godfrey (1978a) serial correlation LM test in the time domain (see also Pagan 1980 or Godfrey 1988).
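A minimal time domain sketch of this idea (our own simplification, which omits the refinements and exact weighting discussed in the text) fits an Ar(2) by least squares and bases the statistic on the first residual autocorrelation:

```python
import numpy as np

def lm_ar2_vs_ar3(x):
    """T times the squared first-order autocorrelation of AR(2) residuals:
    a simplified Breusch-Godfrey-style sketch of the score test described
    in the text (parameter-uncertainty corrections are ignored)."""
    X = np.column_stack([np.ones(len(x) - 2), x[1:-1], x[:-2]])
    e = x[2:] - X @ np.linalg.lstsq(X, x[2:], rcond=None)[0]
    r1 = np.sum(e[1:]*e[:-1]) / np.sum(e*e)
    return len(e)*r1**2           # asymptotically chi^2_1 under the null

rng = np.random.default_rng(0)
x = np.zeros(600)
for t in range(2, 600):           # AR(2) data satisfying the null
    x[t] = 1.2*x[t-1] - 0.4*x[t-2] + rng.normal()
stat = lm_ar2_vs_ar3(x[100:])     # discard burn-in
```

Because Ar(3) and Arma(2, 1) alternatives are locally equivalent here, the same statistic serves both.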
Let us now consider the following alternative generalisation of an Ar(2):

α_x(L) x_t = (1 − ψ_f L) f_t.

In this case, the null is H_0: ψ_f = 0. In turn, the spectral density of x_t under this alternative is

g_xx(λ) = (σ²_f/2π) |1 − ψ_f e^{−iλ}|² |α_x(e^{−iλ})|^{−2},

whose derivative with respect to ψ_f under the null is −2 cos λ · g_xx(λ). Therefore, the spectral LM test of Ar(2) versus Arma(2, 1) will be numerically identical to the corresponding test of Ar(2) versus Ar(3), which confirms that these two alternative hypotheses are locally equivalent for observable time series (see e.g. Godfrey 1988). Generalisations to test Arma(p, q) vs Arma(p + k, q) for k > 1 are straightforward, since they only involve higher order (circulant) autocovariances of f_t, as in Godfrey (1978b). Similarly, it is easy to show that Arma(p + k, q) and Arma(p, q + k) multiplicative alternatives are also locally equivalent. Finally, we could also consider (multiplicative) seasonal alternatives.

Testing for neglected serial correlation in the unobserved components
Let us now consider univariate unobserved components models, which are the objective of our study. Initially, we assume that the "top heavy" signal process is such that p x ≥ q x + 2, so that the model is identified under each of the four alternatives stated in Sect. 2.1 in view of Hotta's (1989) results, and postpone the discussion of the other cases to Sects. 3.4 and 3.5.
Let us start by considering neglected serial correlation in the signal. Under alternative Sar1 the model will be

y_t = μ + x_t + u_t,
(1 − ψ_x L) α_x(L) x_t = β_x(L) f_t,   (17)
α_u(L) u_t = β_u(L) v_t,

so that the null hypothesis is H_0: ψ_x = 0, as in Sect. 3.1. Given that the derivative of g_yy(λ) with respect to ψ_x under the null is again 2 cos λ · g_xx(λ), after some straightforward manipulations we can prove that the score of the spectral log-likelihood for the observed series y_t under the null will be given by

(2π/σ²_f) Σ_j cos λ_j [I_{f^K f^K}(λ_j) − g_{f^K f^K}(λ_j)],

where I_{f^K f^K}(λ) and g_{f^K f^K}(λ) denote the periodogram and spectral density of the smoothed innovations f^K_{t|∞}. Once more, the time domain counterpart to the spectral score with respect to ψ_x is (asymptotically) proportional to the difference between the first sample autocovariance of f^K_{t|∞} and its theoretical counterpart under H_0. Therefore, the only difference with the observable case is that the first autocovariance of f^K_{t|∞}, which is a forward filter of the Wold innovations of y_t, is no longer 0 when ψ_x = 0, although it approaches 0 as the signal to noise ratio increases. In that case, our proposed tests would converge to the usual Breusch-Godfrey LM tests for neglected serial correlation discussed in Sect. 3.1. In fact, the score with respect to ψ_x also has the interpretation of the expected value of the score when x_t is observed, conditional on the past, present and future values of y_t (see Fiorentini et al. 2014 for further details).

Let us illustrate our test by means of a simple example. Imagine that x_t follows an Ar(2) process while u_t is white noise. The results in Sect. 2.5 imply that when ψ_x = 0, f^K_{t|∞} will follow an Ar(2) with an autoregressive polynomial β_y(L) that satisfies the condition

σ²_a β_y(z) β_y(z^{−1}) = σ²_f + σ²_v α_x(z) α_x(z^{−1}),

so that the smaller σ²_v is, the closer f^K_{t|∞} will be to white noise. In any case, the LM test of H_0: ψ_x = 0 will simply compare the first sample autocovariance of f^K_{t|∞} with its theoretical value. As we mentioned before, the advantage of our frequency domain approach is that we obtain those autocovariances without explicitly computing σ²_a, β_y(L) or indeed f^K_{t|∞}.

In turn, under alternative Sma1 the equation for the signal in (17) is replaced by

α_x(L) x_t = (1 − ψ_f L) β_x(L) f_t,

so that the null hypothesis becomes H_0: ψ_f = 0. Then, it is straightforward to prove that this test will numerically coincide with the test of H_0: ψ_x = 0.
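For the Ar(1) plus white noise special case, the smoothed innovations can be approximated with a circulant frequency domain filter; the sketch below is our own (hypothetical function name, illustrative parameters, placeholder data):

```python
import numpy as np

def smoothed_signal_innovations(y, alpha, sf2, sv2):
    """Circulant (frequency-domain) sketch of f^K for an AR(1) signal plus
    white noise: apply the Wiener-Kolmogorov filter g_xx/g_yy to the
    demeaned data, then the one-sided filter 1 - alpha*e^{-i lambda}."""
    T = len(y)
    z = np.exp(-2j*np.pi*np.arange(T)/T)
    g_xx = sf2/(2*np.pi)/np.abs(1 - alpha*z)**2
    g_yy = g_xx + sv2/(2*np.pi)
    h = (g_xx/g_yy)*(1 - alpha*z)          # transfer function to f^K
    return np.fft.ifft(h*np.fft.fft(y - y.mean())).real

rng = np.random.default_rng(1)
y = rng.normal(size=400)                   # placeholder data for illustration
fK = smoothed_signal_innovations(y, alpha=0.8, sf2=1.0, sv2=0.5)
r1 = np.mean(fK[1:]*fK[:-1])               # first sample autocovariance
```

The test then compares r1 with its theoretical value under the null; in practice the frequency domain score makes even this explicit filtering unnecessary.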
On the other hand, under alternative Nar1 the model will be

y_t = μ + x_t + u_t,
α_x(L) x_t = β_x(L) f_t,   (19)
(1 − ψ_u L) α_u(L) u_t = β_u(L) v_t,

while under alternative Nma1 the equation for the non-signal component in (19) will be replaced by

α_u(L) u_t = (1 − ψ_v L) β_u(L) v_t.

The exchangeability of signal and non-signal implies that mutatis mutandis exactly the same derivations apply to tests of neglected serial correlation in u_t. Finally, joint tests that simultaneously look for neglected serial correlation in the signal and non-signal components can be easily obtained by combining the two scores involved.

Parameter uncertainty
So far we have implicitly assumed known model parameters. In practice, some of them will have to be estimated under the null. Maximum likelihood estimation of the state space model parameters can be done either in the time domain using the Kalman filter or in the frequency domain.
As we mentioned before, the sampling uncertainty surrounding the sample mean μ is asymptotically inconsequential because the information matrix is block diagonal. The sampling uncertainty surrounding the other parameters, say ϑ, is not necessarily so.
The solution is the standard one: replace the inverse of I_ψψ, which is the (ψ, ψ) block of the information matrix, by the (ψ, ψ) block of the inverse information matrix, I^ψψ = (I_ψψ − I_ψϑ I_ϑϑ^{−1} I_ϑψ)^{−1}, in the quadratic form that defines the LM test. As usual, this is equivalent to orthogonalising the spectral log-likelihood scores corresponding to the parameters in ψ with respect to the scores corresponding to the parameters ϑ estimated under the null. In this regard, the analytical expressions that we provide for the different derivatives involved can be combined with (9) to obtain computationally efficient expressions for the entire information matrix.
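The partitioned-inverse correction can be sketched as follows (our own helper, hypothetical name), and checked against the corresponding block of the full inverse:

```python
import numpy as np

def psi_block_of_inverse(I, idx_psi):
    """(psi, psi) block of the inverse of a symmetric information matrix I,
    computed as (I_pp - I_pt I_tt^{-1} I_tp)^{-1} (Schur complement)."""
    n = I.shape[0]
    idx_t = [i for i in range(n) if i not in idx_psi]
    Ipp = I[np.ix_(idx_psi, idx_psi)]
    Ipt = I[np.ix_(idx_psi, idx_t)]
    Itt = I[np.ix_(idx_t, idx_t)]
    return np.linalg.inv(Ipp - Ipt @ np.linalg.inv(Itt) @ Ipt.T)

# Sanity check against the full inverse on a synthetic SPD matrix
rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
I_full = A @ A.T + 4*np.eye(4)
block = psi_block_of_inverse(I_full, [3])
```

When the blocks are orthogonal (I_ψϑ = 0), the correction vanishes and the naive and corrected tests coincide.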

Potential pitfalls
As we mentioned in Sect. 2.1, we maintain the innocuous assumption that p_x > q_x, so that the signal component is a "top-heavy" model. However, by increasing the order of the Ma polynomial of the signal, as the Sma1 alternative hypothesis does, the extended Ucarima model may become underidentified despite the original null model being identified. This will happen when p_x = q_x + 1 but p_u < q_u + 1, in which case the null model will be just identified. An important example would be

y_t = μ + x_t + u_t,
(1 − αL) x_t = (1 − ψ_f L) f_t,   (20)

with f_t and u_t bivariate white noise orthogonal at all leads and lags. The null hypothesis of interest is H_0: ψ_f = 0, so that the model under the null is a univariate Ar(1) plus white noise process, while the signal under the alternative is an Arma(1, 1) instead with moving average parameter ψ_f. In this context, it is possible to formally prove that

Proposition 1
The score with respect to ψ f of model (20) reparametrised in terms of γ yy (0), γ yy (1), α and ψ f is 0 when α = 0 regardless of the value of ψ f .
Intuitively, the problem is that ψ_f cannot be identified because the reduced form model for the observed series is an Arma(1, 1) fully characterised by its variance, its first autocovariance and α under both the null and the alternative. As a result, the original and extended log-likelihood functions would be identical at their respective optima, which in turn implies that the LR and LM tests will be trivially 0. (The only possible exception arises when the model is exactly on the boundary of the admissibility region under the null but not under the alternative, but such anomalies tend to be associated with uninteresting cases.) A more difficult to detect problem arises when the original model is identified under the null hypothesis and the extended model is identified under the alternative, but the information matrix of the extended model is singular under the null. Following Sargan (1983), we shall refer to this situation as a first-order underidentified case because in effect the additional parameter is locally identified but the usual rank condition for identification breaks down.
Although this may seem a curiosity, it turns out that this problem necessarily occurs under the Sar1 alternative hypothesis whenever alternative Sma1 leads to an underidentified model.
Let us study in more detail the Ar(1) plus white noise example discussed in the previous paragraphs, for which (17) reduces to

y_t = μ + x_t + u_t,
(1 − ψ_x L)(1 − αL) x_t = f_t,   (21)

with f_t and u_t being bivariate white noise orthogonal at all leads and lags. The null hypothesis of interest is H_0: ψ_x = 0, so that the model under the null is still an Ar(1) plus white noise, while the signal under the alternative follows an Ar(2) process. We can then show that:

Proposition 2

The information matrix of model (21) is singular under the null hypothesis H_0: ψ_x = 0.
As we saw in Sect. 3.1, the intuition is that under the null the score of an additional Ar root is the opposite of the score of an additional Ma root, but the latter is identically 0 at the parameter values estimated for the original Ar(1) plus white noise model in view of Proposition 1. Therefore, a standard LM test is infeasible. In contrast, there is no linear combination of the first three scores that is equal to 0 under H_0 when α ≠ 0, so we can consistently estimate α, σ²_f and σ²_u if we impose the null hypothesis when it is indeed true. Likewise, there is no linear combination of the four scores that is equal to 0 when the true values of α and ψ_x are both different from 0, so again we can consistently estimate σ²_f, σ²_u, α and ψ_x in those circumstances, unlike what happened with model (20). For those reasons, it seems intuitive to report instead either a Wald test or an LR one. However, intuitions sometimes prove misleading.
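The singularity asserted in Proposition 2 can be illustrated numerically. The sketch below is our own: it evaluates the spectral information matrix of the four parameters on a fine frequency grid, using derivative expressions we computed for this special case, and finds a (numerically) zero eigenvalue:

```python
import numpy as np

# AR(2) = (1 - psi L)(1 - alpha L) signal plus white noise, under psi_x = 0
lam = np.linspace(-np.pi, np.pi, 4001)[:-1]
alpha, sf2, sv2 = 0.8, 1.0, 0.5            # illustrative parameter values
z = np.exp(-1j*lam)

g_xx = sf2/(2*np.pi)/np.abs(1 - alpha*z)**2
g_yy = g_xx + sv2/(2*np.pi)

# Rows: d log g_yy / d theta for theta = (sigma_f^2, sigma_v^2, alpha, psi_x)
dlogg = np.vstack([
    g_xx/sf2,
    np.full_like(lam, 1/(2*np.pi)),
    g_xx*2*np.real(z/(1 - alpha*z)),
    g_xx*2*np.cos(lam),                    # score direction of the extra AR root
]) / g_yy

Q = (dlogg @ dlogg.T)*(lam[1] - lam[0])/(4*np.pi)   # information matrix
eig = np.linalg.eigvalsh(Q)                # smallest eigenvalue is ~ 0
```

The last row turns out to be an exact linear combination of the first two, which is the frequency domain counterpart of the argument in the text.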
It turns out that one has to be very careful in computing the significance level for the LR test and especially the Wald test because, as we will discuss below, the asymptotic distribution of the ML estimator of ψ x will be highly unusual under the null. In contrast, there is a readily available LM-type test along the lines of Lee and Chesher (1986). Specifically, these authors propose to replace the usual score test by what they call an "extremum test". Given that the first-order conditions are identically 0, their suggestion is to study the restrictions that the null imposes on higher order conditions. An equivalent procedure to deal with the singularity of the information matrix is to choose a suitable reparametrisation. We follow this second route because it will allow us to obtain asymptotically valid LR and Wald tests too.
Our approach is as follows. First, we replace σ 2 f and σ 2 u by γ yy (0) and γ yy (1), as in Proposition 1. As the following result shows, this change confines the singularity to the last element of the score.

Proposition 3
The ψ_x ψ_x element of the information matrix of model (21) reparametrised in terms of γ_yy(0), γ_yy(1), α and ψ_x is zero under the null hypothesis.

Second, we reparametrise ψ_x as ±√ϕ, with ϕ ≥ 0, and retain the value of ϕ and the sign of the transformation which leads to the largest likelihood function under the alternative. Using the results of Rotnitzky et al. (2000), we can show that under the null the asymptotic distribution of the ML estimator of ϕ will be that of a normal variable censored from below at 0. In contrast, the asymptotic distribution of the corresponding estimator of ψ_x will be non-standard, with a faster rate of convergence, half of its density at 0 and the other half equally divided between the positive and negative sides. In this context, the LR test of the null hypothesis H_0: ϕ = 0 will be a 50:50 mixture of a χ²_0, which is 0 with probability 1, and a χ²_1.
As for the Wald test, the squared t-ratio associated with the ML estimator of ϕ will share the same asymptotic distribution. In contrast, Wald tests based on ψ_x will have a rather non-standard distribution, which will render the t-ratio usually reported for this coefficient very misleading.
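Since χ²_1 is the square of a standard normal, the critical value of the 50:50 mixture has a simple closed form; the helper below is our own sketch:

```python
from statistics import NormalDist

def mixture_critical_value(alpha):
    """(1 - alpha) quantile of the 50:50 mixture of chi^2_0 and chi^2_1:
    the critical value c solves 0.5 * P(chi^2_1 > c) = alpha, i.e.
    c = z_{1-alpha}^2, with z the standard normal quantile."""
    return NormalDist().inv_cdf(1.0 - alpha)**2
```

At the 5% level this gives roughly 2.71, rather than the 3.84 of the usual two-sided χ²_1 test, so ignoring the one-sided nature of the problem makes the test conservative.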
The following result explains how to conduct the score-type test.

Proposition 4 The extremum test of the null hypothesis H_0: ψ_x = 0 is a one-sided test based on the score with respect to ϕ in the reparametrised model.
Given the scores for γ_yy(0), γ_yy(1) and α under the null, this means that the extremum test is effectively comparing the second sample autocovariance of f^K_{t|∞} with its theoretical value after taking into account the estimated nature of those model parameters. Nevertheless, the test must be one-sided because (i) ϕ ≥ 0 under the alternative regardless of whether we reparametrise ψ_x as +√ϕ or −√ϕ and (ii) the score under the null is the same in both cases, which implies that the Kuhn-Tucker multiplier will also coincide. Finally, it is worth noting that although ψ_x is not first-order identified, because the derivative of the log-likelihood function with respect to this parameter is identically 0 and the expected value of the second derivative under the null is also 0, it is locally identified through higher order derivatives.

A somewhat surprising implication of our previous results is that in this instance the usual local equivalence between Ar(1) and Ma(1) alternative hypotheses for the signal breaks down. In contrast, there are other seemingly locally equivalent alternatives. Specifically, consider the following variation on model (21):

y_t = μ + x_t + u_t,
(1 − δ_x L²)(1 − αL) x_t = f_t.   (24)

In this case the null hypothesis of interest is H_0: δ_x = 0, so that the model under the null is still an Ar(1) signal plus white noise, while the signal under the alternative is a "seasonal" Ar(3) with restricted autoregressive polynomial 1 − αL − δ_x L² + αδ_x L³. The "top-heavy" nature of the signal together with the restrictions on the coefficients implies that the model under the alternative should remain identified. We can then show that:

Proposition 5

The LM test of the null hypothesis H_0: δ_x = 0 in model (24) numerically coincides with a two-sided version of the extremum test discussed in Proposition 4.

Nevertheless, such a test is suboptimal for testing the null hypothesis H_0: ψ_x = 0 because it ignores the effectively one-sided nature of its alternative.
For reasons analogous to the ones explained in Sect. 3.1, the test in Proposition 5 will also coincide with the LM test of H_0: δ_f = 0 in the alternative "seasonal" model

(1 − αL) x_t = (1 − δ_f L²) f_t,

which will again be two-sided. This equivalence is less obvious than it may seem because the signal follows a "bottom-heavy" process under the alternative. Nevertheless, the fact that the first Ma coefficient is 0 is sufficient to guarantee identifiability in this case.

Another seemingly locally equivalent alternative to the neglected Ar(1) component in the signal arises when we are interested in testing for first order serial correlation in the non-signal component u_t. In that case the model under the alternative becomes

y_t = μ + x_t + u_t,
(1 − αL) x_t = f_t,   (25)
(1 − ψ_u L) u_t = v_t,

with f_t and v_t orthogonal at all leads and lags. The null hypothesis of interest is H_0: ψ_u = 0. Further, we do not expect any singularity to be present under the alternative, on the grounds that the contemporaneous aggregation of an Ar(1) signal and an Ar(1) non-signal component is an Arma(2, 1). We can then show that:

Proposition 6
The LM test of the null hypothesis H 0 : ψ u = 0 in model (25) will numerically coincide with a two-sided version of the test discussed in Proposition 4 once we correct for the sampling uncertainty in the estimation of the model parameters under the null.
As expected, the LM test of the null hypothesis H0 : ψ_v = 0 in model (26) will also coincide, because the derivatives of g_yy(λ) with respect to ψ_v in model (26) and with respect to ψ_u in model (25) only differ in their signs.

An intermediate case
So far, we have dealt with regular models in which p_x ≥ q_x + 2 in Sect. 3.2 and irregular models in which p_x = q_x + 1 but p_u < q_u + 1 in Sect. 3.4. In this section, we study the intermediate case of p_x = q_x + 1 and p_u = q_u + 1, which shares some features of the other two. The results in Sect. 2.2 imply that the reduced form of such a model would be an Arma(p_x + p_u, p_x + p_u − 1), whose 2p_x + 2p_u parameters are generally sufficient to identify the structural parameters of the signal and non-signal components. Similarly, the reduced form models would be Arma(p_x + p_u + 1, p_x + p_u) under alternatives Sar1 and Nar1, and Arma(p_x + p_u, p_x + p_u) under alternatives Sma1 and Nma1. Since all these reduced form models identify the parameters of the associated structural models, the corresponding information matrices evaluated under the null will generally have full rank. Therefore, tests for neglected first-order serial correlation in the signal or the noise will usually be well behaved, as in Sect. 3.2.
Nevertheless, it turns out that both tests are numerically identical. To understand the reason, let us look at an Ar(1) + Ar(1) process, which is the simplest possible example. The joint alternatives that we consider are of the following form: In this context, we can prove the following proposition:

Proposition 7
The nullity of the information matrix of model (27) is one under the joint null hypothesis H0 : ψ_x = ψ_u = 0.
Not surprisingly, the same is true if we replace any of the Ar alternatives by its Ma counterpart. Intuitively, the reason is the following. The reduced form model under the combination of alternatives Sma1 and Nma1 is an Arma( p x + p u , p x + p u ), which does not have enough parameters to identify the structural parameters of the signal and non-signal components.
In principle, it might be possible to reparametrise model (27) in such a way that the single singularity of the information matrix is due to the score of one of the new parameters becoming identically 0. In that case, a square root transformation of this parameter should allow one to derive a joint extremum test of H 0 : ψ x = ψ u = 0 along the lines of Sect. 3.4. In the interest of space, we shall not explore this possibility.

The tests in practice
Taking into account the theoretical results obtained in the previous sections, the step-by-step testing procedure for dynamic misspecification of the unobserved components can be described as follows:

4. Identify the signal with the more "top-heavy" Arma component, so that p_x − q_x ≥ 1 and p_x − q_x ≥ p_u − q_u, where q_x, q_u and p_x, p_u are the orders of the Ma and Ar polynomials (including roots on the unit circle). In case of equality, choose the signal so that p_x ≥ p_u.
5. If p_x − q_x > 1, so that the model remains identified under all four alternatives and their combinations, then apply the following steps to both the signal and non-signal components:
(a) Compute the scores under the alternative but evaluate them at the null.
(b) Compute the information matrix under the alternative but evaluate it at the null.
(c) Invert the information matrix and retain the elements corresponding to the scores of the additional parameters.
(d) Compute the two quadratic forms defining the LM test statistics.
6. If p_x − q_x = 1 and p_u − q_u = 1, so that the model becomes underidentified under the combination of alternatives Sma1 and Nma1, then proceed as in point 5, but compute only one of the tests, since the other one is numerically identical.
7. If p_x − q_x = 1 and p_u − q_u < 1, so that the model becomes underidentified under the Sma1 alternative:

An example of a regular situation would be an Ar(2) + noise process, which is such that p_x − q_x = 2 and p_u − q_u = 0. In this case, the model is identified under all possible alternative hypotheses. In fact, it is overidentified when testing dynamic misspecification in the noise, while it is just identified in the Sma1 case.
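Steps (a)-(d) can be sketched with Whittle's frequency domain approximation. The following is a minimal illustration, not the authors' code: it pairs an Ar(2) signal plus white noise null with a neglected Ar(1) alternative for the noise, evaluates everything at the true parameter values instead of the restricted ML estimates, and all function and variable names are our own.

```python
import numpy as np

def g_spec(lam, sf2, su2, a1, a2, psi_u):
    """Spectral density (constant 2*pi omitted, consistently with the
    periodogram scaling below) of an Ar(2) signal plus possibly Ar(1)
    noise: the Nar1 alternative nests the null at psi_u = 0."""
    z = np.exp(-1j * lam)
    return (sf2 / np.abs(1 - a1*z - a2*z**2)**2
            + su2 / (1 + psi_u**2 - 2*psi_u*np.cos(lam)))

# simulate T observations under the null
rng = np.random.default_rng(0)
T, burn = 512, 200
sf2, su2, a1, a2 = 1.0, 1.0, 1.2, -0.5
f = rng.standard_normal(T + burn)
x = np.zeros(T + burn)
for t in range(2, T + burn):
    x[t] = a1*x[t-1] + a2*x[t-2] + f[t]
y = x[burn:] + rng.standard_normal(T)

lam = 2*np.pi*np.arange(1, T)/T                  # Fourier frequencies, skip 0
I = np.abs(np.fft.fft(y - y.mean()))[1:]**2 / T  # periodogram, same scaling as g_spec

theta0 = np.array([sf2, su2, a1, a2, 0.0])       # true values, psi_u = 0 under H0

# (a)-(b): scores and information via numerical derivatives of g
eps = 1e-6
dg = np.empty((5, lam.size))
for i in range(5):
    tp, tm = theta0.copy(), theta0.copy()
    tp[i] += eps; tm[i] -= eps
    dg[i] = (g_spec(lam, *tp) - g_spec(lam, *tm)) / (2*eps)
g0 = g_spec(lam, *theta0)
score = 0.5 * dg @ ((I/g0 - 1.0)/g0)             # Whittle scores
info = 0.5 * (dg/g0) @ (dg/g0).T                 # Whittle information

# (c)-(d): quadratic form of the score vector
lm = score @ np.linalg.solve(info, score)
```

At the restricted MLE the scores of the maintained parameters vanish, so only the block of the inverse information corresponding to ψ_u would contribute to the quadratic form, as in steps (c)-(d).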
An example of the intermediate case would be an Ar(1) + Ar(1) process, for which p_x − q_x = p_u − q_u = 1.
An example of an irregular case would be an Ar(1) plus noise model, including the popular random walk plus noise process. In this instance, p_x − q_x = 1 and p_u − q_u = 0. As we mentioned at the beginning of Sect. 3.4, this model becomes an Arma(1, 1) plus noise model under the Sma1 alternative, whose parameters are not identified.
We will study the finite sample behaviour of our tests for unit root versions of the first and third examples in Sects. 5.1 and 5.2, respectively.

Comparison with tests based on the reduced form residuals
In the context of univariate time series models written in state space form, Harvey (1989), Harvey and Koopman (1992) and Durbin and Koopman (2012) suggest the calculation of neglected serial correlation tests for the reduced form residuals, a_t, which should be white noise under the null of correct dynamic specification. For that reason, it is of some interest to compare such tests to the tests that we have derived in the previous sections. To do so, let us introduce the following two alternative hypotheses of interest:
5. Rar1: Arma(p_x + 1, q_x) + Arma(p_u + 1, q_u) with a common Ar root.
6. Rma1: Arma(p_x, q_x + 1) + Arma(p_u, q_u + 1) with a common Ma root.
In this context, we can prove the following result:

Proposition 8 Testing for Rar1 in the Ucarima model (1)-(4) is equivalent to testing for Ar(1)-type neglected serial correlation in the reduced form innovations, while testing for Rma1 in the structural form is the same as testing for Ma(1)-type neglected serial correlation in a t .
This means that when we test for first-order neglected serial correlation in the reduced form residuals, the model under the alternative hypothesis is in effect one in which both components share an additional common Ar root. In contrast, a test for neglected serial correlation in the signal makes use of the alternative model (17), while a test for neglected serial correlation in the non-signal component relies on (19). Therefore, while it is indeed true that misspecification of the dynamics of any of the components will generally result in the reduced form residuals of the null model being serially correlated under the alternative, as argued by Harvey and Koopman (1992), it does not necessarily follow that tests for neglected serial correlation in those residuals are asymptotically equivalent to our neglected serial correlation tests in the unobserved components. In fact, the relative power of those three tests when p_x − q_x > 1 will depend on the nature of the true model under the alternative. Specifically, if we represent ψ_x on the horizontal axis and ψ_u on the vertical axis, the reduced form test of the null hypothesis H0 : ψ_a = 0 will have maximum power for alternatives along the 45° line ψ_u = ψ_x, since it is locally the best test of neglected serial correlation in that direction in view of Proposition 8. In contrast, the structural form tests of the null hypotheses H0 : ψ_x = 0 and H0 : ψ_u = 0 will have maximum power along their respective axes (see Demos and Sentana 1998 for a related discussion in the context of Arch tests). For the intermediate parameter combinations, we could use local power calculations along the lines of Appendix B in Fiorentini and Sentana (2015) to compare our LM tests, which are based on the smoothed innovations of the state variables (the so-called auxiliary residuals), to the LM tests based on the reduced form innovations.13 Specifically, we could obtain two isopower lines, defined as the locus of points in (ψ_x, ψ_u) space for which the non-centrality parameter of the reduced form test is exactly the same as the non-centrality parameter of the structural tests of H0 : ψ_x = 0 and H0 : ψ_u = 0.
In principle, we could consider the joint test of the composite null hypothesis H 0 : ψ x = ψ u = 0 mentioned at the end of Sect. 3.2, which will generally have two degrees of freedom instead. For comparing the joint test against the simple tests, though, we would have to equate their local power directly since the number of degrees of freedom would be different.
In view of the discussion in Sects. 3.4 and 3.5, though, the reduced form test and the two-sided versions of the structural tests will be identical when p_x − q_x = 1.

A regular case
We first report the results of some simulation experiments based on a special case of the example discussed at the end of Sect. 3.2, in which the autoregressive polynomial of the signal contains a unit root. In this way, we can assess the finite sample reliability of the size of our proposed tests and their power relative to the reduced form test in a realistic situation in which the model remains identified under each of the four alternatives stated in Sect. 2.1.

Size experiment
To evaluate possible finite sample size distortions, we generate 10,000 samples of length 200 (equivalent to 50 years of quarterly data) of the following model, with f_t and u_t being contemporaneously uncorrelated bivariate Gaussian white noise. Thus, the signal component follows an Ari(1, 1) under the null, while the non-signal component is white noise. Given that μ is inconsequential, we fix its true value to 0. We also fix the variance of u_t to 1 without loss of generality. As for the remaining parameters, we choose σ_f² = 1 and α = .7 to clearly differentiate this design from the model in Sect. 5.2. For each simulated sample, we use the first differences of the data to compute the following LM tests:
1. first-order neglected serial correlation in the signal (χ²(1))
2. first-order neglected serial correlation in the non-signal (χ²(1))
3. first-order neglected serial correlation in the reduced form residuals (χ²(1))
4. the joint test of the null hypotheses in points 1 and 2 (χ²(2)).
The finite sample sizes for the four tests are displayed in the first panel of Table 1. As can be seen, the actual rejection rates of all four tests at the 10, 5 and 1% levels fall within the corresponding asymptotic confidence intervals of (9.41, 10.59), (4.57, 5.43) and (.80, 1.20), so one can reliably use them.
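As a cross-check on this design, the second moments of the first differences implied by the DGP can be verified by simulation. The sketch below is our own (with a much longer path than the T = 200 used in the experiments, purely to pin down the moments): under the null, Δy_t = v_t + u_t − u_{t−1} with v_t = αv_{t−1} + f_t, so γ_Δy(0) = σ_f²/(1 − α²) + 2σ_u² and γ_Δy(1) = ασ_f²/(1 − α²) − σ_u².

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, n, burn = 0.7, 200_000, 500
f = rng.standard_normal(n + burn)
u = rng.standard_normal(n + burn)

# v_t = alpha*v_{t-1} + f_t is the first difference of the Ari(1,1) signal
v = np.zeros(n + burn)
for t in range(1, n + burn):
    v[t] = alpha*v[t-1] + f[t]
dy = (v[1:] + u[1:] - u[:-1])[burn:]      # first difference of y_t = x_t + u_t

gamma0_hat = dy.var()
gamma1_hat = np.mean((dy[1:] - dy.mean())*(dy[:-1] - dy.mean()))
gamma0 = 1/(1 - alpha**2) + 2             # theoretical value, about 3.9608
gamma1 = alpha/(1 - alpha**2) - 1         # theoretical value, about 0.3725
```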

Power experiments
Next, we simulate and estimate 5,000 samples of length 200 of DGPs in which either the signal or the noise may have an additional autoregressive root, with everything else unchanged. In particular, we consider the following four alternatives:
(a) neglected serial correlation in the signal (ψ_x = .5; ψ_u = 0), for which the LM test in point 1 should be optimal
(b) neglected serial correlation in the noise (ψ_x = 0; ψ_u = .5), for which the LM test in point 2 should be optimal
(c) symmetric neglected serial correlation in signal and noise (ψ_x = .5; ψ_u = .5), for which the residual LM test in point 3 should be optimal
(d) asymmetric neglected serial correlation in signal and noise (ψ_x = .6; ψ_u = .3), for which the joint LM test in point 4 should be optimal.
The raw rejection rates are reported in the last four panels of Table 1. For alternative (a), the ranking of the tests is as expected. However, for alternative (b) the LM test for signal is able to match the power of the LM test for noise, closely followed by the residual and joint LM tests. Therefore, misspecification in the serial correlation of the non-signal component seems to substantially alter the serial correlation pattern of the filtered values of the correctly specified signal component because the parameter estimators at which the filter is evaluated are biased and the filter weights would be the wrong ones even if we knew the true values of the estimated parameters.
The most surprising result corresponds to alternative (c), in that the joint LM test has more power than the asymptotically optimal reduced form test. In contrast, the rejection rates for alternative (d) conform to the theoretical predictions.
In summary, our results show that the tests that look for neglected serial correlation in the signal and the noise, either separately or jointly, tend to dominate in terms of power the traditional tests based on the reduced form innovations.

Local level model
Next we analyse the local level model in Appendix 1, which is a rather important practical example of the situation discussed in Sect. 3.4.

Size experiment
To evaluate possible finite sample size distortions, we generate 10,000 samples of length 200 of the following model, with f_t and u_t being contemporaneously uncorrelated bivariate Gaussian white noise. As before, we fix the true value of μ to 0 and the variance of u_t to 1 without loss of generality. Therefore, the design depends on a single parameter: the signal-to-noise ratio q = σ_f²/σ_u² = σ_f², which we choose to be 1. This choice implies a mean square error of the final estimation error of f_t relative to σ_f² of 55.28% according to expression (33), which is neither too low nor too high.
For each simulated sample, we use the first differences of the data to compute the following statistics:
1. the one-sided version of the extremum test for first-order neglected serial correlation in the signal
2. the two-sided version of the same test
3. the likelihood ratio version
4. the Wald test based on ϕ
5. the Wald test based on ψ_x
6. the test for second-order neglected serial correlation in the signal
7. the test for first-order neglected serial correlation in the non-signal
8. the test for first-order neglected serial correlation in the reduced form residuals.
As expected from the theoretical results in Sect. 3.4, the test statistics in points 2, 6, 7 and 8 are numerically identical, so we only report one of them under the label LM2S.
It is also important to emphasise that the statistics in points 3, 4 and 5 require the estimation of model (29). For the reasons described in Sect. 3.4, this is a non-trivial numerical task because, when the true value of ψ_x is 0, (i) approximately half of the ML estimators of ψ_x are identically 0; (ii) the log-likelihood function is extremely flat in a neighbourhood of 0, especially if we parametrise it in terms of ψ_x; and (iii) when the maximum is not at 0, the log-likelihood tends to have two commensurate maxima, one for positive and one for negative values of ψ_x. To make sure we obtain the proper ML estimate, we maximise the spectral log-likelihood of model (29) four times: for positive and for negative values of ψ_x, and with this parameter replaced by ±√ϕ, retaining the maximum maximorum. A kernel density estimate of the mixed discrete-continuous distribution of the ML estimators is displayed in Fig. 1, with its continuous part scaled so that it integrates to .48, which is the fraction of non-zero estimates of ψ_x. In addition to bimodality, the sampling distribution shows positive skewness, which nevertheless tends to disappear in non-reported experiments with T = 10,000. The remaining 52% of the estimates of ψ_x are 0, in which case the test statistics in points 1, 3, 4 and 5 will all be 0 too.
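A simplified version of this multi-start strategy can be sketched as follows, assuming a Whittle likelihood for the first-differenced data; we only illustrate the searches over positive and negative ψ_x (not the ±√ϕ parametrisations), and all names are our own.

```python
import numpy as np
from scipy.optimize import minimize

# simulate a random walk signal plus white noise, T = 200 as in the experiments
rng = np.random.default_rng(42)
T = 200
y = np.cumsum(rng.standard_normal(T)) + rng.standard_normal(T)
dy = np.diff(y)
n = dy.size
lam = 2*np.pi*np.arange(1, n)/n                        # Fourier frequencies, skip 0
I = np.abs(np.fft.fft(dy - dy.mean()))[1:n]**2 / n     # periodogram (constants cancel)

def g(lam, sf2, su2, psi):
    # spectral density of dy under the Ari(1,1)-plus-noise alternative
    return sf2/(1 + psi**2 - 2*psi*np.cos(lam)) + 2*su2*(1 - np.cos(lam))

def nll(p):
    # negative Whittle log-likelihood; variances parametrised in logs
    sf2, su2, psi = np.exp(p[0]), np.exp(p[1]), p[2]
    gl = g(lam, sf2, su2, psi)
    return 0.5*np.sum(np.log(gl) + I/gl)

# restricted fit with psi fixed at 0
null = minimize(lambda p: nll([p[0], p[1], 0.0]), [0.0, 0.0], method="Nelder-Mead")

# unrestricted fits over positive and negative psi, keeping the best candidate
cands = [(-null.fun, 0.0)]                             # the pile-up candidate psi = 0
for lo, hi, start in [(0.0, 0.99, 0.3), (-0.99, 0.0, -0.3)]:
    r = minimize(nll, [null.x[0], null.x[1], start], method="L-BFGS-B",
                 bounds=[(-10, 10), (-10, 10), (lo, hi)])
    cands.append((-r.fun, r.x[2]))
best_ll, psi_hat = max(cands)
lr = 2*(best_ll + null.fun)     # LR statistic; 0 whenever the pile-up at 0 wins
```

Because the candidate set includes the restricted fit, the implied LR statistic is zero by construction whenever the pile-up at ψ_x = 0 wins, which in the experiments happens in roughly half of the samples.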
The rejection rates under the null at the 10, 5 and 1% levels are displayed in Table 2. The only procedure which seems to have a reliable size is the two-sided LM test. In contrast, its one-sided version is somewhat conservative, while the LR and especially the two Wald tests are liberal. Reassuringly, though, the size distortions of the one-sided LM test disappear fairly quickly in non-reported experiments with larger sample sizes, while the distortions of the LR and Wald tests for ϕ go down more slowly and are still noticeable even in samples as big as T = 50,000, despite the fact that the fraction of 0 estimates converges very quickly to 1/2. As expected, the distortions of the Wald test based on ψ_x persist no matter how big the sample size is, because the information matrix for this parametrisation is singular.

Power experiments
Next, we simulate and estimate 5,000 samples of length 200 of four alternative DGPs analogous to the ones described in (a)-(d) of the previous section. However, since our focus is on tests of the null hypothesis H0 : ψ_x = 0, we only estimate the model under the null and under alternative (a). In this regard, an additional issue that we encounter in some designs is that from time to time the estimated value of σ_u² is 0. In those "pile-up" cases we compute the LM and Wald tests excluding the corresponding row and column of the information matrix.
Given the substantial size distortions, we report not only raw rejection rates based on asymptotic critical values in Table 3 but also size-adjusted ones in Table 4, which exploit the Monte Carlo critical values obtained in the simulation described in the previous subsection. If we focus on this second table, we can conclude that the tests that explicitly acknowledge the implicit one-sided nature of the alternative to H 0 : ψ x = 0 dominate the two-sided test, except when ψ x = 0 but ψ u = .5, when they tend to be equally powerful. In particular, the one-sided tests for H 0 : ψ x = 0 dominate the residual correlation tests even when ψ x = ψ u = .5. We can also conclude that the relative ranking of the extremum, LR and Wald tests for H 0 : ϕ = 0 depends on the DGP, although when it coincides with the alternative for which they are asymptotically optimal, the extremum test dominates the LR test, which in turn dominates the Wald test.

Conclusions and extensions
We have derived computationally simple and intuitive expressions for score tests of neglected serial correlation in unobserved component univariate models using frequency domain methods. Our tests can focus on the state variables individually or jointly. The implicit orthogonality conditions are analogous to the conditions obtained by treating the Wiener-Kolmogorov-Kalman smoothed estimators of the innovations in the latent variables as if they were observed, but they account for their final estimation errors.
In some common situations in which the information matrix of the alternative model is singular under the null, we show that, contrary to popular belief, it is possible to derive extremum tests that are asymptotically equivalent to likelihood ratio tests, which become one-sided. We also explain how to compute asymptotically reliable Wald tests. As a result, empirical researchers will be able to report test statistics in those irregular situations too. Further, we explicitly relate the incidence of those problems to the model identification conditions and compare our tests with tests based on the reduced form prediction errors.
We conduct Monte Carlo exercises to study the finite sample reliability and power of our proposed tests. In the regular case of a latent Ari(1, 1) process cloaked in white noise, our results show that the finite sample size of the different tests is reliable. They also imply that the tests that look for neglected serial correlation in the signal and the noise, either separately or jointly, dominate in terms of power the traditional tests based on the reduced form innovations.
When we look at neglected serial correlation tests in the irregular local level model, our simulation results confirm that the finite sample distribution of the ML estimator of the additional autoregressive root in the signal is highly unusual under the null of correct specification, with almost half its mass at 0 and two modes, one positive and one negative. Not surprisingly, a Wald test based on this parameter is highly unreliable, even asymptotically. We also find some size distortions for the asymptotically valid one-sided tests of H 0 : ψ x = 0 (but not for the two-sided LM test), which nevertheless progressively disappear as the sample size increases. After correcting for those distortions, though, we find that the one-sided tests dominate the residual correlation tests even when ψ x = ψ u = .5, but the relative ranking of the extremum test, the likelihood ratio test and the Wald test depends on the DGP under the alternative.
Although we have considered reasonable Monte Carlo designs, a more thorough analysis of the determinants of the size and power properties of the different tests would constitute a valuable addition.
The testing procedures we have developed can be extended in several interesting directions. First, it would be tedious but straightforward to consider models with more than two components after dealing with identification issues. More interestingly, we could also consider models with purely seasonal components (see Harvey 1989 for some examples). Tests of higher order serial correlation also deserve further consideration since they might involve singularity problems too. For example, the Ari(1, 1) plus white noise process discussed in Sect. 5.1, which yields standard test statistics for neglected first order serial correlation, gives rise to a singular information matrix when we consider tests against first and second order serial correlation simultaneously because those tests are numerically equivalent to tests against the underidentified alternative of Arima(1, 1, 2) plus white noise.
Second, we have assumed throughout the paper that the model estimated under the null is parametrically identified. Nevertheless, Harvey (1989) discusses some examples in which an Ucarima model is underidentified under the null but identified under the alternative. He formally tackles the problem by using the procedure proposed by Aitchison and Silvey (1960), which effectively adds a matrix to the information matrix to make sure that it has full rank (see also Breusch 1986).
We have also maintained the assumption of normality. To understand its implications, let μ t|t−1 and σ 2 t|t−1 denote the conditional mean and variance of y t given its past alone, which can be obtained from the prediction equations of the Kalman filter. Given that the additional serial correlation parameters effectively enter through μ t|t−1 only, we would expect the asymptotic distribution of our proposed tests to remain valid in the presence of non-Gaussian innovations. Dunsmuir (1979) provides a formal result which confirms our conjecture for the important class of Ar( p) plus noise processes.
Although we have only considered unobserved components with rational spectral densities, in principle our methods could be applied to long memory processes too. In this regard, it would be worth exploring the fractionally integrated alternatives considered by Robinson (1994). More generally, it would also be interesting to consider non-parametric alternatives such as the ones studied by Hong (1996), in which the lag length is implicitly determined by the choice of bandwidth parameter in a kernel-based estimator of a spectral density matrix. Another potential extension would directly deal with non-stationary models without transforming the observed variables to achieve stationarity. All these topics constitute fruitful avenues for future research.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Against AR(1) alternatives
Consider the following modified version of model (21), with f_t and u_t orthogonal at all leads and lags. The main difference is that we have replaced the covariance stationarity hypothesis for the signal x_t by a unit root one. As before, the null hypothesis of interest remains H0 : ψ_x = 0, so that the model under the null is simply a random walk signal plus white noise, while the signal under the alternative is an Ari(1, 1) with autoregressive coefficient ψ_x. In order to use spectral methods, we need to take first differences of the observed variable to make it stationary. Hence, it is easy to see that the spectral density of Δy_t is the sum of the Ar(1) spectrum of Δx_t and the Ma(1) spectrum of Δu_t. The reduced form of y_t is an Ima(1, 1) process with Ma coefficient β_y given by β_y = [√(q² + 4q) − (q + 2)]/2, where q = σ_f²/σ_u² denotes the signal-to-noise ratio, and residual variance σ_a² = −σ_u²/β_y.
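The mapping from the structural variances to the Ima(1, 1) reduced form can be checked numerically. The sketch below assumes the standard invertible-root formula for β_y, which is consistent with the autocovariance restrictions γ_Δy(0) = σ_f² + 2σ_u² and γ_Δy(1) = −σ_u², and with σ_a² = −σ_u²/β_y as stated above:

```python
import numpy as np

def reduced_form_ima(sf2, su2):
    """Invertible Ma coefficient and innovation variance of the Ima(1,1)
    reduced form of a random walk signal plus white noise."""
    q = sf2 / su2                                 # signal-to-noise ratio
    beta = (np.sqrt(q*q + 4*q) - (q + 2)) / 2     # invertible root, -1 < beta < 0
    sa2 = -su2 / beta                             # residual variance, as in the text
    return beta, sa2

beta, sa2 = reduced_form_ima(1.0, 1.0)
# both sides must reproduce the autocovariances of the first difference:
#   gamma0 = sf2 + 2*su2 = sa2*(1 + beta**2),   gamma1 = -su2 = sa2*beta
```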
As is well known (see e.g. Priestley 1981, Section 10.3), the variance of the final estimation error of f_t will be given by the usual Wiener-Kolmogorov formula. Interestingly, we would obtain exactly the same expression by working with pseudo-spectral densities in levels. Under the null of H0 : ψ_x = 0, the partial derivatives of the spectral density become linearly dependent for all λ. Obviously, exactly the same linear combination of the elements of g_yy⁻¹(λ)∂g_yy(λ)/∂θ will be singular too. Therefore, the information matrix of the model, which is given by (1/4π)∫_{−π}^{π} [∂g_yy(λ)/∂θ][∂g_yy(λ)/∂θ′] g_yy⁻²(λ) dλ, will be singular under the null. In view of this result, Harvey (1989) rightly concludes that a standard LM test is infeasible. In contrast, there is no linear combination of the first two derivatives that is equal to 0 under H0, so we can consistently estimate σ_f² and σ_u² if we impose the null hypothesis when it is indeed true. Likewise, there is no linear combination of the three derivatives that is equal to 0 under the alternative either, so again we can consistently estimate σ_f², σ_u² and ψ_x in those circumstances. For that reason, Harvey (1989) recommends reporting either a Wald test or an LR one, which for the reasons explained in Sect. 3.4 turns out not to be sound advice.
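The singular linear combination can be exhibited explicitly. Under the null, the derivatives of g_yy(λ) with respect to σ_f², σ_u² and ψ_x are 1, 2(1 − cos λ) and 2σ_f² cos λ respectively (our reconstruction from the spectral density of the differenced model), and the third is exactly 2σ_f² times the first minus σ_f² times the second:

```python
import numpy as np

lam = np.linspace(-np.pi, np.pi, 721)
sf2, su2 = 1.3, 0.8                  # arbitrary positive values

# derivatives of g_yy(lam) = sf2/(1+psi^2-2*psi*cos(lam)) + 2*su2*(1-cos(lam))
# with respect to (sf2, su2, psi), all evaluated at the null psi = 0
d_sf2 = np.ones_like(lam)
d_su2 = 2*(1 - np.cos(lam))
d_psi = 2*sf2*np.cos(lam)

# project d_psi on the span of the other two: the fit should be exact,
# with coefficients (2*sf2, -sf2)
X = np.column_stack([d_sf2, d_su2])
coef, *_ = np.linalg.lstsq(X, d_psi, rcond=None)
resid = d_psi - X @ coef
```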
Although we have not yet eliminated the singularity, we have at least confined it to the last element of the score. If we further reparametrise ψ_x as ±√ϕ, tedious algebra shows that ∂g_yy(λ)/∂ϕ evaluated at ϕ = 0 will be equal to 2σ_f² cos 2λ, where we have used the fact that γ_yy(0) + 2γ_yy(1) = σ_f² under the null. Hence, the extremum test for ψ_x, which coincides with the LM test for ϕ, is going to be based on the second autocovariance of the smoothed estimates of f_t. Importantly, Lee and Chesher (1986) show that the one-sided version of this extremum test continues to be asymptotically equivalent to both the LR test and a one-sided version of the Wald test for ϕ.

Against MA(1) alternatives
Consider now the following variation on model (30), with f_t and u_t orthogonal at all leads and lags. The null hypothesis of interest is H0 : ψ_f = 0, so that the model under the null is still a random walk signal plus white noise, while the signal under the alternative is an Ima(1, 1) with moving average coefficient ψ_f. In this case, the spectral density of the stationary transformation Δy_t and its partial derivatives can be obtained along the same lines as in the previous section, and under the null of H0 : ψ_f = 0 those derivatives confirm that (34) also holds for this model. Let us now try to isolate the singularity in a single parameter by using the same procedure as in the previous section. First, we replace σ_f² and σ_u² by γ_yy(0) and γ_yy(1). Thus, it is easy to see from (36) and (37) that the resulting expressions are well defined as long as ψ_f ≠ 1.
With this notation, the derivative of the spectral density with respect to ψ_f is 0 not only under the null but also under the alternative, so ψ_f cannot be identified. Intuitively, the reason is that the process for Δy_t is an unrestricted Ma(1) under the alternative, which is fully characterised by γ_yy(0) and γ_yy(1).
Thus, the usual local equivalence between Ar(1) and Ma(1) alternative hypotheses for the signal breaks down once again.

Against restricted MA(2) alternatives
Consider this alternative variation on model (30), with u_t and w_t orthogonal at all leads and lags. The null hypothesis of interest is H0 : δ_f = 0, so that the model under the null is still a random walk signal plus white noise, while the signal under the alternative is an Ima(1, 2) with second moving average coefficient δ_f. Evaluating the partial derivatives of the spectral density of the stationary transformation under the null of H0 : δ_f = 0, and given that the linear span of ∂g_yy(λ)/∂σ_f² and ∂g_yy(λ)/∂σ_u² is the same as the linear span of ∂g_yy(λ)/∂γ_yy(0) and ∂g_yy(λ)/∂γ_yy(1), this test is going to coincide with the two-sided version of the extremum test against an Ar(1) alternative.

Against restricted AR(2) alternatives
Consider yet another variation on model (30), with f_t and u_t orthogonal at all leads and lags. The null hypothesis of interest is H0 : δ_x = 0, so that the model under the null is still a random walk signal plus white noise, while the signal under the alternative is an Ari(2, 1) with second autoregressive coefficient δ_x. Evaluating the partial derivatives of the spectral density of Δy_t under the null of H0 : δ_x = 0, we find that, as expected, this test is locally equivalent to a test against a restricted Ma(2), which is in turn locally equivalent to the two-sided version of the test against an unrestricted Ar(1).

Testing for neglected serial correlation in the noise
Let us now see what happens if we are interested in testing for first-order serial correlation in u_t. The model under the alternative becomes one in which u_t follows an Ar(1), with u_t and f_t orthogonal at all leads and lags. The null hypothesis of interest is H0 : ψ_u = 0.
Taking first differences of the observed variables to make them stationary, and using the expressions for the autocovariances of an Arma(1, 1) with a unit root in its Ma part, it is easy to obtain the autocovariances of Δy_t, its spectral density and the corresponding partial derivatives. Given the form of the spectral density under the null of H0 : ψ_u = 0, we can compute the information matrix by integrating the outer product of the score vector. Unlike what happens in the test for ψ_x = 0, the information matrix will be regular when ψ_u = 0. Given that the score with respect to ψ_u involves a squared cosine, which can always be expanded in terms of cos 2λ by using the trigonometric identity cos 2λ = 2cos²λ − 1, the test for neglected serial correlation in the noise will also coincide with the two-sided version of the extremum test. Finally, it is easy to see that, apart from a sign change, one would get the same derivative under the null if we considered an Ma(1) alternative for u_t.

Appendix 2: Proofs of propositions Proposition 1
Given that ψ f = α −1 if we choose an invertible Ma polynomial, Lemma 1 allows us to replace σ 2 f and σ 2 u by the theoretical variance and first autocovariance of the observed series as follows: Given that the spectral density under the null is
= 2 ∑_{j=0}^{T−1} cos λ_j · 2π I_aa(λ_j), which involves the first circulant autocorrelation of the reduced form residuals a_t. An analogous proof applies to the Ma tests.
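The link between this frequency domain sum and the first circulant autocorrelation of a_t rests on a standard periodogram identity, which the following sketch (our own, on simulated white noise) verifies numerically:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 64
a = rng.standard_normal(T)                 # stand-in for the reduced form residuals

lam = 2*np.pi*np.arange(T)/T               # Fourier frequencies lambda_j
two_pi_I = np.abs(np.fft.fft(a))**2 / T    # 2*pi times the periodogram I_aa(lambda_j)

# (1/T) * sum_j cos(lambda_j) * 2*pi*I_aa(lambda_j)
spectral_sum = np.mean(two_pi_I * np.cos(lam))
# first circulant autocovariance: (1/T) * sum_t a_t a_{t+1 mod T}
circulant_acov = np.mean(a * np.roll(a, -1))
# the two quantities are identical up to floating point error
```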
while the remaining ones can be obtained from the recursion As for the unconditional variance, we can use the fact that