A comparison of semiparametric tests for fractional cointegration

There are various competing procedures to determine whether fractional cointegration is present in a multivariate time series, but no standard approach has emerged. We provide a synthesis of this literature and conduct a detailed comparative Monte Carlo study to guide empirical researchers in their choice of appropriate methodologies. Special attention is paid to empirically relevant issues such as assumptions about the form of the underlying process and the ability of the procedures to distinguish between short-run correlation and long-run equilibria. It is found that several approaches are severely oversized in the presence of correlated short-run components and that the methods differ in power when applied to common-component models instead of triangular systems.


Introduction
The concept of cointegration derives its popularity from the fact that it allows equilibrium relationships between non-stationary time series to be modeled. The most popular tests in the standard I(1)/I(0) setting include the two-step procedure by Engle and Granger (1987), the trace test by Johansen (1988), and the principal component test by Phillips and Ouliaris (1988), which are the subject of several comparative studies such as Reimers (1992) and Höglund and Östermark (2003). In practice, however, standard cointegration analysis often cannot be applied, since the I(1)/I(0) framework is too restrictive.
For example, the series of interest may be persistent but not have a unit root, or the deviations from the equilibrium may be more persistent than the I (0) model allows.
Fractional cointegration overcomes these shortcomings by allowing for non-integer integration orders of the variables in the system and any (possibly non-zero) memory order in the cointegrating residuals, as long as it is reduced compared to that of the original system. Consequently, fractional cointegration promises to facilitate the modeling of a larger number of equilibrium relationships compared to standard cointegration. This has led to the development of various testing and rank estimation procedures to determine whether fractional cointegration is present in a multivariate time series.
In contrast to parametric methods, semiparametric approaches have the advantage that they allow the researcher to focus on the long-run relationship between the series and do not require the specification of short-run dynamics. This literature encompasses the spectral-based rank estimation procedure of Robinson and Yajima (2002) and its extension by Nielsen and Shimotsu (2007), a Hausman-type test based on the multivariate local Whittle estimator introduced by Robinson (2008a), a number of residual-based tests for the null hypothesis of no fractional cointegration developed by Marmol and Velasco (2004), Chen and Hurvich (2006), Hualde and Velasco (2008), and Wang et al. (2015), a variance-ratio test proposed by Nielsen (2010), a test based on a GPH-type estimate of the cointegration strength introduced by Souza et al. (2018), and a rank estimation procedure based on an eigenanalysis of the autocovariance function from Zhang et al. (2019).
Unfortunately, the domain of applicability of most of these procedures is much more restrictive than the definition of fractional cointegration. Some are only applicable in stationary systems, others only in non-stationary systems. Some procedures require the reduction in memory to be more than 1/2, while others require the memory of the cointegrating residuals to be less than 1/2. Furthermore, there are different assumptions about the form of the fractionally cointegrated system. Some approaches assume that one of the observed series is itself an observation of the common underlying trend; others assume an unobserved common underlying trend. We refer to these models as the triangular system and the common-components model, respectively. Which of these assumptions is more suitable in practice depends on the specific application. On the one hand, it may be appropriate to think of the risk-free interest rate as an observed common component that is perturbed by risk premia in risky bonds, so that a triangular model can be used. For cointegrated pairs of stocks, on the other hand, it is unclear why the price of one stock should be interpreted as a perturbed version of another stock price, so that a common-components model is more appropriate. Finally, even though the development of each of these procedures to determine whether fractional cointegration is present is a major theoretical contribution, relatively little effort has been devoted to analyzing how they perform relative to each other.
Here, we address these issues by providing a survey of all the rank estimation and testing procedures discussed above. To study the relative performance of the competing approaches, we conduct an extensive Monte Carlo analysis of their size and power properties. It is found that several procedures, namely those of Nielsen and Shimotsu (2007) and Robinson and Yajima (2002), Marmol and Velasco (2004), and Hualde and Velasco (2008), show severe finite-sample size distortions in systems with correlated short-run components. The relative performance in terms of power depends on the form of the system. For triangular systems and non-stationary common-components models the test of Souza et al. (2018) performs best overall, whereas the test of Chen and Hurvich (2006) is preferable for stationary common-components models.
The rest of the paper is structured as follows. The next section gives the definition and model of fractional cointegration we adopt and briefly reviews the basic estimation methods required by the tests. Section 3 is divided into two subsections describing two types of tests: Sect. 3.1 covers the tests based on the spectral matrix and Sect. 3.2 summarizes the tests based on cointegrating residuals. Section 4 presents finite-sample results, and Sect. 5 concludes.

Fractional cointegration: models and definitions
A p-dimensional vector-valued time series X_t has long memory if its spectral density fulfills

f_X(λ_j) ∼ Λ_j(d) G Λ_j(d)*, as λ_j → 0+, (1)

where G is a real, symmetric, and non-negative definite matrix, Λ_j(d) = diag(λ_j^{−d_1} e^{iπd_1/2}, ..., λ_j^{−d_p} e^{iπd_p/2}) is a p × p diagonal matrix, Λ_j(d)* is its complex conjugate transpose, and '∼' implies that for each element the ratio of real and imaginary parts on the left- and right-hand side tends to one. The element in the a-th row and b-th column of the spectral matrix f_X(λ) is denoted by f_ab(λ) ∼ g_ab λ^{−2d} for a, b ∈ {1, ..., p}, where g_ab denotes the respective element of G. The periodogram of X_t at the Fourier frequencies is given by

I_X(λ_j) = w_X(λ_j) w_X(λ_j)*, with w_X(λ) = (2πT)^{−1/2} Σ_{t=1}^T X_t e^{iλt}, (2)

and λ_j = 2πj/T, for j = 1, ..., ⌊T/2⌋, where ⌊·⌋ denotes the greatest integer smaller than or equal to the argument.
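As a concrete illustration (our own, not part of the original paper), the periodogram at the Fourier frequencies can be computed via the FFT. The sign convention of the FFT exponent and the shift of the time index do not affect the squared modulus:

```python
import numpy as np

def fourier_frequencies(T):
    """Fourier frequencies lambda_j = 2*pi*j/T for j = 1, ..., floor(T/2)."""
    return 2 * np.pi * np.arange(1, T // 2 + 1) / T

def periodogram(x):
    """Periodogram I_x(lambda_j) = |w_x(lambda_j)|^2 of a univariate series,
    with DFT w_x(lambda) = (2*pi*T)^(-1/2) * sum_t x_t * exp(i*lambda*t)."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    lam = fourier_frequencies(T)
    # np.fft.fft uses exp(-i*2*pi*j*t/T); the modulus is unaffected by the sign.
    w = np.fft.fft(x)[1:T // 2 + 1] / np.sqrt(2 * np.pi * T)
    return lam, np.abs(w) ** 2
```

For white noise with unit variance, the periodogram ordinates fluctuate around the flat spectral density 1/(2π), which provides a quick sanity check of the scaling.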
There is a number of different definitions of fractional cointegration in the literature. The most common one goes back to Engle and Granger (1987). According to this definition, the p-dimensional time series X_t is cointegrated of rank r if all components of X_t are integrated of order d (denoted by I(d)) and there exists a non-singular matrix β such that the r linear combinations v_at = β_a'X_t are integrated of orders d_{v_a} < d for all a = 1, ..., r. The matrix β is called the cointegrating matrix and each of its columns β_a is a cointegrating vector. The elements of the vector v_t are the cointegrating residuals. Other definitions are given by Johansen (1995), Flôres Jr and Szafarz (1996), Marinucci and Robinson (2001), and Robinson and Yajima (2002), who also provide a discussion of the implications of the different definitions.
Standard cointegration is a special case of the definition above where d = 1 and d_{v_a} = 0 for all a. In this setup the system is non-stationary, whereas the cointegrating residuals are stationary. In contrast to that, fractional cointegration allows for a more flexible model. Denoting the memory reduction by b = d − d_v, several cases can be distinguished: weak cointegration (b < 0.5), strong cointegration (b > 0.5), stationary cointegration (0 < d_v < d < 0.5), and non-stationary cointegration (0.5 < d_v < d).
In general, (fractional) cointegration is an equilibrium concept where the persistence of the cointegrating residual d v determines the speed of adjustment towards the cointegration equilibrium β X t , and shocks have no permanent influence on the equilibrium as long as d v < 1 holds.
As an example, consider the fractionally (co-)integrated bivariate model

X_1t = ξ_1 Y_t + Δ^{−(d−b_1)} u_1t 1(t > 0), (3)
X_2t = ξ_2 Y_t + Δ^{−(d−b_2)} u_2t 1(t > 0), (4)

with

Y_t = Δ^{−d} e_t 1(t > 0). (5)

Here, u_t = (u_1t, u_2t)' is a weakly dependent zero-mean process with constant covariance matrix Ω_u and spectral density matrix f_u(λ), e_t (with variance σ_e² and spectral density f_e(λ)) is a univariate weakly dependent zero-mean process that is allowed to be correlated with u_t, and L denotes the lag operator, so that LY_t = Y_{t−1}. The fractional difference operator Δ^d = (1 − L)^d is defined in terms of the binomial expansion Δ^d = Σ_{k=0}^∞ π_k(d) L^k with π_k(d) = Γ(k − d)/(Γ(k + 1)Γ(−d)). Furthermore, 1(·) denotes the indicator function that takes the value one if its argument is true and is zero otherwise. Finally, it is assumed that the memory orders satisfy 0 ≤ b_a ≤ d for a = 1, 2.

The truncated processes Δ^{−(d−b_a)} u_at 1(t > 0) are fractionally integrated processes of type II, which means they are only asymptotically stationary for d < 1/2, but in contrast to type-I processes they are still defined for d > 1/2. For a detailed discussion cf. Marinucci and Robinson (1999).

In this bivariate model there can be at most one cointegrating relationship. In this case r = 1 and β itself is a cointegrating vector. Obviously, if the linear combination β'X_t = v_t has reduced memory, the same is true for every scalar multiple of it. To identify the cointegrating vector, it is therefore customary to apply some kind of normalization such as setting the first element of the vector to unity. In Eqs. (3) to (5), fractional cointegration arises if ξ_1, ξ_2 ≠ 0 and b_1, b_2 > 0. In this case the normalized cointegrating vector is β = (1, −ξ_1/ξ_2)' = (1, −β̄)' and the cointegrating residual v_t = β'X_t has memory d_v = d − b with b = min(b_1, b_2). Note that this model is a common-components model, but it also nests a triangular system. This is obtained as a special case if Ω_u,22 = 0, so that X_2t is a direct (rescaled) observation of the underlying common trend and only X_1t is perturbed with a cointegration error, so that b = b_1. Standard cointegration in the I(1)/I(0) framework is obtained as a special case if d = 1 and b_1 = b_2 = 1.
It is also possible to have ξ_1, ξ_2 ≠ 0, so that both X_1t and X_2t contain the common component Y_t, but are not cointegrated because b_1 = b_2 = 0.
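To make the model concrete, the following sketch simulates the common-components model in Eqs. (3) to (5) with Gaussian innovations, using the truncated binomial expansion of Δ^{−d} for type-II fractional integration. The function names and the i.i.d. innovation choice are our own illustrative assumptions, not part of the paper:

```python
import numpy as np

def frac_integrate(u, d):
    """Type-II fractional integration x_t = sum_{k=0}^{t-1} psi_k * u_{t-k},
    with psi_0 = 1 and psi_k = psi_{k-1} * (k - 1 + d) / k, i.e. the binomial
    expansion of (1 - L)^{-d} truncated at t <= 0."""
    T = len(u)
    psi = np.ones(T)
    for k in range(1, T):
        psi[k] = psi[k - 1] * (k - 1 + d) / k
    # convolution of the weights with the innovations, truncated at t > 0
    return np.array([psi[:t + 1][::-1] @ u[:t + 1] for t in range(T)])

def simulate_common_components(T, d, b1, b2, xi1=1.0, xi2=1.0, seed=0):
    """Bivariate common-components model as in Eqs. (3)-(5) with uncorrelated
    Gaussian innovations (an assumption made here for simplicity)."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(T)
    u = rng.standard_normal((2, T))
    y = frac_integrate(e, d)                       # common trend, Eq. (5)
    x1 = xi1 * y + frac_integrate(u[0], d - b1)    # Eq. (3)
    x2 = xi2 * y + frac_integrate(u[1], d - b2)    # Eq. (4)
    return x1, x2
```

Setting b1 = b2 = 0 yields the non-cointegrated case discussed above, while Ω_u,22 = 0 (dropping the noise in x2) yields the triangular special case.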

Tests for no fractional cointegration
In the following, we provide a comprehensive review of semiparametric tests and estimation procedures that can be used to determine the order of fractional cointegration in a p-dimensional vector-valued time series X t . According to the definition discussed above, this requires that the components of X t are integrated of the same order. In practice, this can either be assumed based on domain-specific knowledge, or it can be tested with tests for the equality of memory parameters that allow for cointegration introduced by, for example, Robinson and Yajima (2002), Nielsen and Shimotsu (2007), Hualde (2013), and Wang and Chan (2016). In particular Robinson and Yajima (2002) discuss in detail how to partition a vector-valued time series into subvectors with equal memory parameters. These can then be used for further cointegration analysis.
In the following, it will be assumed that all components of X_t are I(d), which means we abstract from these pre-testing issues to focus on the actual tests for the null of no fractional cointegration. For all tests the hypotheses are

H_0 : r = 0 versus H_1 : r > 0.

In contrast to standard I(1)/I(0) cointegration, the memory parameter d is unknown in fractionally cointegrated systems and has to be estimated. Since multivariate memory estimation becomes inconsistent under cointegration, the memory parameters are estimated univariately and, if not stated otherwise, we employ the mean of the univariate memory estimates in the tests.
The tests presented in this section apply the most common estimators: the log-periodogram estimator d̂_GPH of Geweke and Porter-Hudak (1983) and Robinson (1995b), the local Whittle estimator d̂_LW of Künsch (1987) and Robinson (1995a), and the exact local Whittle estimator d̂_ELW of Shimotsu and Phillips (2005) and Shimotsu (2010). All of these estimators are periodogram-based and employ the first m Fourier frequencies. The general requirement is that m < T/2 tends to infinity more slowly than T, so that 1/m + m/T → 0 as T → ∞ and even the largest frequency 2πm/T is asymptotically local to the zero frequency.
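As an illustration of how such estimators work, a minimal grid-search implementation of the local Whittle estimator (our own sketch, not the implementation used in the paper) minimizes the concentrated objective R(d) = log((1/m) Σ_j λ_j^{2d} I(λ_j)) − 2d (1/m) Σ_j log λ_j:

```python
import numpy as np

def local_whittle(x, m):
    """Local Whittle estimate of d from the first m Fourier frequencies,
    via a simple grid search over the concentrated objective R(d)."""
    x = np.asarray(x, dtype=float)
    T = len(x)
    lam = 2 * np.pi * np.arange(1, m + 1) / T
    I = np.abs(np.fft.fft(x)[1:m + 1]) ** 2 / (2 * np.pi * T)
    mean_log_lam = np.mean(np.log(lam))

    def R(d):
        return np.log(np.mean(lam ** (2 * d) * I)) - 2 * d * mean_log_lam

    grid = np.linspace(-0.49, 0.99, 500)   # coarse grid; a real implementation would optimize
    return grid[np.argmin([R(d) for d in grid])]
```

With a bandwidth such as m = T^0.65 the estimate has standard deviation of roughly 1/(2√m), which is the trade-off against bias that the bandwidth conditions above formalize.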
To estimate the cointegrating relationship β X t = v t when r = 1, the vector is partitioned such that X t = (y t , x t ), where y t is a scalar and x t is ( p − 1) × 1. By doing so, the focus is on one possible cointegrating relation y t =βx t + v t whereβ is ( p − 1)-dimensional.
As in standard cointegration analysis, the vector β̄ can be estimated with ordinary least squares (OLS) as long as d > 1/2, i.e., as long as the series are non-stationary. In stationary long-memory time series, OLS is inconsistent in the presence of correlation between the stationary regressors and the innovation term v_t (cf. Robinson (1994)). Robinson (1994) and Robinson and Marinucci (2001) introduce an alternative estimator of the cointegrating vector that is based on the periodogram local to the zero frequency. In contrast to OLS, this narrow-band frequency domain least squares (NBLS) estimator is consistent under cointegration for all values of d and has a non-normal limiting distribution in the non-stationary region. Christensen and Nielsen (2006a) extend the asymptotic results to the stationary region, where the estimate follows an asymptotic normal distribution, and Nielsen and Frederiksen (2011) provide a correction of the asymptotic bias under weak fractional cointegration.
Estimating the linear cointegrating relationship with NBLS requires the averaged (cross-)periodograms of x_t with itself and with y_t,

I^av_xx(λ_m) = (2π/T) Σ_{j=1}^m Re[I_xx(λ_j)] and I^av_xy(λ_m) = (2π/T) Σ_{j=1}^m Re[I_xy(λ_j)].

The NBLS estimate of β̄ is then defined by

β̂_m = [I^av_xx(λ_m)]^{−1} I^av_xy(λ_m). (6)

The bandwidth m has to fulfill the usual local-to-zero condition as T → ∞. If not specified otherwise, we employ NBLS to estimate the cointegrating vector. Other estimators suggested in the literature include estimation based on the eigenvectors of a version of I^av_X(λ_j) (cf. Chen and Hurvich (2006)) and joint estimation with the memory parameters in multivariate local Whittle approaches such as those of Nielsen (2007), Robinson (2008b), and Shimotsu (2012).
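For the bivariate case, the NBLS estimate in Eq. (6) reduces to a ratio of averaged periodogram sums; the following sketch (our own illustration) exploits the fact that the 2π/T scaling cancels in the ratio:

```python
import numpy as np

def nbls(y, x, m):
    """Narrow-band least squares estimate of beta in y_t = beta * x_t + v_t,
    using the real part of the averaged (cross-)periodogram over the first m
    Fourier frequencies (bivariate case; the 2*pi/T factor cancels)."""
    T = len(y)
    wy = np.fft.fft(np.asarray(y, float))[1:m + 1] / np.sqrt(2 * np.pi * T)
    wx = np.fft.fft(np.asarray(x, float))[1:m + 1] / np.sqrt(2 * np.pi * T)
    Ixx = np.real(wx * np.conj(wx)).sum()   # averaged periodogram of x
    Ixy = np.real(wx * np.conj(wy)).sum()   # averaged cross-periodogram
    return Ixy / Ixx
```

Because only frequencies local to the origin enter, the estimate focuses on the long-run relationship and is robust to short-run correlation between the innovations, in contrast to OLS in the stationary case.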
The following review is divided into tests based on the spectral density local to the origin (Sect. 3.1) and tests based on estimates of the cointegrating residuals (Sect. 3.2). Of course, this distinction is not clear cut, since some of the residual-based approaches also use the spectral properties of the potential cointegrating residuals and for example the test of Nielsen (2010) is presented as a variance-ratio test. Many different categorizations would be possible. Here, we refer to those approaches as "spectral-based" that rely on the properties of the spectrum of the observed series X t itself, and those that rely on the spectrum of the cointegrating residual are called "residual-based".

Tests based on the spectral matrix
A number of procedures to determine the fractional cointegrating rank of the p-dimensional time series X_t are based on properties of the rescaled spectral matrix local to the zero frequency. This matrix is denoted by G in Eq. (1) and has reduced rank if and only if X_t is fractionally cointegrated. If fractional cointegration is present, the number of eigenvalues that are equal to zero corresponds to the cointegrating rank r. More details on the connection between fractional cointegration, unit coherence, and singularity of G are given in Velasco (2003b) and Nielsen (2004).
Based on this property, Robinson and Yajima (2002) introduce an information criterion to determine the fractional cointegration rank that is extended to non-stationary processes by Nielsen and Shimotsu (2007). To obtain an estimate Ĝ of G, the first step consists in applying the univariate exact local Whittle estimator of Shimotsu and Phillips (2005) and Shimotsu (2010) to each component of X_t separately, using bandwidth m, and pooling the estimates to the arithmetic mean d̄_ELW. The estimate of G is then given by

Ĝ(d̄_ELW) = (1/m_1) Σ_{j=1}^{m_1} Re[Λ_j(d̄_ELW)^{−1} I_X(λ_j) Λ_j(d̄_ELW)^{*−1}],

where the bandwidth m_1 has to fulfill m_1/m → 0 in order to ensure faster convergence of d̄_ELW than of Ĝ(d̄_ELW). Denote the empirical eigenvalues calculated from Ĝ(d̄_ELW) and sorted in descending order by δ_a,G for a = 1, ..., p. The cointegrating rank can then be estimated using a model selection criterion based on the partial sums of the sorted eigenvalues,

r̂ = arg min_{u = 0, ..., p−1} [ n(T)(p − u) − Σ_{a=1}^{p−u} δ_a,G ],

where n(T) is a function that fulfills n(T) + 1/(√m_1 n(T)) → 0 as T → ∞, so that n(T) goes to zero more slowly than the estimation error in the eigenvalues, which is of order O_P(m_1^{−1/2}). Asymptotically, the expression is therefore minimal if only estimates of non-zero eigenvalues are included in the sum.
To deal with situations in which the scales of the components of X_t differ, Nielsen and Shimotsu (2007) suggest basing the procedure on the correlation matrix P = D^{−1/2} Ĝ(d̄_ELW) D^{−1/2}, where D = diag(g_11, ..., g_pp) contains the diagonal elements of Ĝ(d̄_ELW). This is admissible since the rank of P is the same as that of G in the limit. Nielsen and Shimotsu (2007) point out that this approach works better in simulations and also recommend using the penalty term n(T) = m_1^{−0.3}. The cointegrating rank estimate is consistent for r ∈ {0, ..., p − 1}. It is applicable for systems of dimension p ≥ 2, and it does not impose restrictions on d and b. A similar rank estimation procedure based on the average of finitely many tapered periodogram ordinates local to the origin was also proposed by Chen and Hurvich (2003).
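The rank criterion can be sketched as follows for a given pooled memory estimate. The construction of Ĝ, the correlation-matrix rescaling, and the penalty n(T) = m_1^{−0.3} follow our reading of the description above, so the details should be treated as assumptions rather than the authors' exact implementation:

```python
import numpy as np

def coint_rank(X, d_hat, m1):
    """Sketch of the eigenvalue-based rank criterion. X is a (T x p) array,
    d_hat a (pooled) memory estimate. The rank estimate minimizes
    n(T)*(p - u) - sum of the (p - u) largest eigenvalues, u = 0, ..., p-1."""
    T, p = X.shape
    lam = 2 * np.pi * np.arange(1, m1 + 1) / T
    W = np.fft.fft(X, axis=0)[1:m1 + 1] / np.sqrt(2 * np.pi * T)  # (m1 x p) DFTs
    # rescale by Lambda_j(d)^{-1} and average the periodogram matrices
    V = W * (lam ** d_hat)[:, None] * np.exp(-1j * np.pi * d_hat / 2)
    G = np.real(V.conj().T @ V) / m1
    D = np.diag(1 / np.sqrt(np.diag(G)))
    P = D @ G @ D                                  # correlation matrix, same rank as G
    delta = np.sort(np.linalg.eigvalsh(P))[::-1]   # eigenvalues, descending
    n_T = m1 ** (-0.3)                             # recommended penalty
    crit = [n_T * (p - u) - delta[:p - u].sum() for u in range(p)]
    return int(np.argmin(crit))
```

Intuitively, increasing the candidate rank u removes one eigenvalue from the sum: this pays off only when that eigenvalue is smaller than the penalty n(T), i.e. indistinguishable from zero.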
The inconsistency of the multivariate local Whittle estimator under fractional cointegration is the basis for a test procedure originally proposed by Marinucci and Robinson (2001). They suggest a Hausman-type test that compares multivariate and univariate local Whittle estimates. Under the null hypothesis of no cointegration the multivariate estimator is efficient and both are consistent, whereas under the alternative of fractional cointegration the univariate estimator remains consistent, while the multivariate one does not. This idea is formalized by Robinson (2008a). The test statistic is based on the derivative s*(d) of the objective function of the multivariate local Whittle estimator (cf. Lobato (1999), Shimotsu (2007)), which involves the quantity

H*(d) = (1/m) Σ_{j=1}^m ν_j I_X(λ_j) λ_j^{2d}, with ν_j = log j − (1/m) Σ_{k=1}^m log k.

Similar to the previous procedure, the memory parameter d is estimated by pooling the univariate estimates obtained by applying the local Whittle estimator to each of the component series. The equally weighted average is denoted by d̄_LW. To obtain a test statistic, the derivative s*(d) is evaluated at this averaged univariate estimate. The resulting quantity is asymptotically normal, so that the test statistic, appropriately standardized by the variance term in the denominator, follows a χ²_1-distribution.
The test generates power because G(d) is singular under the alternative of fractional cointegration, so that the inverse Ĝ*(d̄_LW)^{−1} of the estimate and consequently the trace s*(d̄_LW) become large. This is a score-type test that avoids the calculation of the multivariate local Whittle estimator, which can be numerically expensive. Since the efficiency of the multivariate estimate is obtained with a single Newton step from the univariate estimate in the direction of the multivariate one, s*(d̄_LW) is directly proportional to the difference between the efficient and the inefficient estimate.
This test allows series of dimensions larger than two, but it is restricted to processes with d ∈ (−1/2, 1/2) and focuses on the empirically relevant range d ∈ (0, 1/2). A non-stationary extension based on a trimmed version of the local Whittle estimator is proposed, but the size and power properties of this test in simulations appear to depend heavily on the sample size. An alternative way to allow for non-stationary processes would be to base the test on the objective function of the multivariate exact local Whittle estimator (as in Shimotsu (2012), but without allowing for fractional cointegration) and univariate ELW estimates. Since the exact local Whittle estimates have the same asymptotic properties as the local Whittle estimates for d ∈ (−1/2, 1/2), the test would have the same limiting distribution.
For a bivariate process with known d ∈ (0, 1], Souza et al. (2018) propose a test based on an estimate of b obtained from the determinant of the trimmed and truncated spectral matrix of the fractionally differenced process via a log-periodogram regression. Denote the fractionally differenced process by Δ^d X_t. The determinant D_{Δ^d}(λ) of its spectral density matrix f_{Δ^d}(λ) depends on the memory reduction parameter b ∈ [0, d] and can be approximated by

D_{Δ^d}(λ) ≈ g̃ λ^{2b}, as λ → 0+, (10)

where g̃ is a constant and finite scalar. Under cointegration, f_{Δ^d}(λ) does not have full rank near the origin (like G in (1)), so that its determinant D_{Δ^d}(λ) approaches zero as λ → 0+. The memory reduction b can be estimated from the logged version of Eq. (10) using a log-periodogram-type regression of log D̂_{Δ^d}(λ_j) on log λ_j. In order to make the estimation of b feasible, the empirical determinant D̂_{Δ^d}(λ_j) has to be calculated from an estimate f̂_{Δ^d}(λ_j) of the spectral density at the Fourier frequencies. This estimate is a local average of the periodogram at frequency j and the l − 1 frequencies to its left and right, and the λ_j are spaced so that the local averages are non-overlapping.
The resulting estimator for the cointegrating strength b is the corresponding regression slope b̂. Under the null hypothesis of no fractional cointegration we have b = 0. Under this condition, and assuming that l and m fulfill suitable bandwidth conditions, b̂ is asymptotically normally distributed with a variance that involves the polygamma function ψ_1(·) of order 1, where Γ(·) denotes the gamma function.
The null hypothesis of no fractional cointegration can thus be tested using a simple t-test on b̂ = 0. The method has no restrictions regarding the range of d and b but is only applicable to bivariate processes. For practical purposes, d is usually unknown and has to be estimated, but as shown in our simulation study in Sect. 4, this has no severe implications for the quality of the test. A thorough theoretical examination of this aspect would, however, be an interesting topic for further research. Note that the work of Velasco (2003a), who introduced a similar estimator and distinguished between known and estimated residuals, might be helpful in this regard.
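The determinant-based estimation of b can be sketched as follows, assuming the input series have already been fractionally differenced with the known d, and using non-overlapping local averages of 2l − 1 periodogram ordinates. This is our own illustration of the idea, not the authors' code, and the band layout is an assumption:

```python
import numpy as np

def determinant_regression_b(x1, x2, l, m):
    """Sketch of the determinant-based estimate of the memory reduction b.
    Inputs are the components of the already fractionally differenced process.
    The spectral matrix is estimated by non-overlapping local averages of
    2l-1 periodogram ordinates; log-determinants are regressed on 2*log(lambda),
    whose slope estimates b by Eq. (10)."""
    T = len(x1)
    W = np.fft.fft(np.column_stack([x1, x2]), axis=0)[1:T // 2 + 1]
    W = W / np.sqrt(2 * np.pi * T)
    lam_all = 2 * np.pi * np.arange(1, T // 2 + 1) / T
    centers = np.arange(l, m, 2 * l - 1)        # non-overlapping bands
    logdet, loglam = [], []
    for j in centers:
        block = W[j - l:j + l - 1]              # 2l-1 ordinates around frequency j
        F = block.conj().T @ block / (2 * l - 1)
        logdet.append(np.log(np.real(np.linalg.det(F))))
        loglam.append(np.log(lam_all[j - 1]))
    slope = np.polyfit(2.0 * np.array(loglam), np.array(logdet), 1)[0]
    return slope
```

For two independent short-memory series (b = 0) the determinant is flat near the origin and the slope estimate fluctuates around zero, which is the behavior the t-test exploits.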

Tests based on cointegrating residuals
By the definition of fractional cointegration, the memory of the linear combination v_t = β'X_t is lower than that of X_t itself. Under the null hypothesis of no fractional cointegration one can still write v_t = β'X_t = y_t − β̄'x_t, since y_t can still depend on the values of the other components of X_t. The difference to the cointegrated case is that v_t then has the same memory d as X_t. It is therefore natural to test for fractional cointegration by testing whether the memory d_v of the residuals is reduced relative to d.

Under weak non-stationary fractional cointegration, i.e., d > d_v > 1/2, Marmol and Velasco (2004) suggest a Hausman (1978)-type F-test that compares the OLS estimate β̂_OLS of the cointegrating vector with an alternative estimate β̂_NB with opposite consistency characteristics.
The OLS estimator β̂_OLS is consistent for β̄ under the alternative (as long as d > 1/2) but inconsistent under the null hypothesis. Marmol and Velasco (2004) propose an alternative estimator β̂_NB that is consistent for the vector β̄ under the null hypothesis but inconsistent under the alternative. The estimator is constructed from I_xx(λ_j) and I_xy(λ_j), the respective elements of the periodogram I_{ΔXΔX}(λ_j) of the differenced process ΔX_t, where the bandwidth m_2 is subject to the usual conditions. The estimator is closely related to the narrow-band least squares estimator β̂_m from (6) but uses a rescaled version of the periodogram. In fact, β̂_NB(0, 0) would be equivalent to the NBLS estimate based only on the real part of the periodogram. Note that Nielsen (2005) introduced a very similar GLS-type estimate β̂_NB(d, d).
Inconsistency under the alternative is obtained through the choice of the memory estimates plugged into β̂_NB: d̂_x is estimated from the original series and is consistent for d under the null hypothesis, but not under the alternative. Since the process is non-stationary, the memory is estimated by local Whittle from the differenced process. Alternatively, d could be estimated using a tapered local Whittle estimator, or by the exact or fully extended local Whittle estimator. The test statistic W_MV compares both estimates of β̄, where the normalizing variance V_MV is estimated from the periodogram of the OLS residuals v̂_t^{OLS} and that of x_t. The choices of m and m_2 are not linked, but both have to satisfy the usual bandwidth conditions. The asymptotic distribution is non-standard, depends on the memory parameter d, and is expressed in terms of functionals of fractional Brownian motions. Critical values are tabulated in Marmol and Velasco (2004) for dimensions up to p = 5 and different forms of detrending that affect the type of the fractional Brownian bridges. The test statistic W_MV diverges under the alternative since both β̂_NB and V_MV^{−1} diverge under fractional cointegration. Although the consistency of the test is derived assuming d > d_v > 0.5, Marmol and Velasco (2004) state that the test remains consistent if the stationarity border is crossed by the cointegrating residuals, i.e., d > 0.5 > d_v. Our simulations in Sect. 4 confirm this.
A direct residual-based test is proposed by Chen and Hurvich (2006), who estimate the possible cointegrating subspaces using eigenvectors of the averaged periodogram local to the zero frequency. The process X_t is assumed to be stationary after taking (q − 1) integer differences, which allows d ∈ (q − 1.5, q − 0.5). In order to account for possible over-differencing, the complex-valued taper h_t = 0.5(1 − e^{i2πt/T}) of Hurvich and Chen (2000) is applied to the data, and the tapered discrete Fourier transform (DFT) w^h_X(λ_j) and periodogram I^h_X(λ_j) of X_t are defined accordingly. Based on the tapered periodogram, the averaged periodogram matrix of X_t is defined by

I^av_X = Σ_{j=1}^{m_3} Re[I^h_X(λ_j)],

where m_3 is a fixed positive integer fulfilling m_3 > p + 3. The eigenvalues of I^av_X sorted in descending order are denoted by δ_{a,I^av_X} and the corresponding eigenvectors are given by χ_{a,I^av_X}, for a = 1, ..., p. Under the alternative hypothesis, if there are r > 0 cointegrating relationships, the matrix consisting of the r eigenvectors associated with the smallest eigenvalues provides a consistent estimate of the cointegrating subspace.
To construct a test for the null hypothesis of no fractional cointegration, the potential cointegrating residuals are estimated by multiplying X_t with the eigenvectors χ_{a,I^av_X}, so that v̂_at = χ_{a,I^av_X}'X_t for a = 1, ..., p. The memory of the p residual processes is estimated with the local Whittle estimator using bandwidth m, but calculated at shifted Fourier frequencies λ_j̃ with j̃ = j + (q − 1)/2 to account for the tapering of order q. These estimates are denoted by d̂_{v_a,LW}, and they remain consistent and asymptotically normal.
Since there can be at most p − 1 cointegrating relationships in a p-dimensional time series, the first residual, corresponding to the largest eigenvalue, cannot be a cointegrating residual. Its memory must therefore equal the common memory d of X_t. In contrast, the last residual v̂_pt, corresponding to the smallest eigenvalue, is the most likely cointegrating residual, so that its memory is reduced by b under cointegration.
The test idea of Chen and Hurvich (2006) is therefore to compare the estimated memory orders of the residual series v̂_1t and v̂_pt, which correspond to d (first residual) and d_v (last residual). Chen and Hurvich (2006) show that the difference d̂_{v_1,LW} − d̂_{v_p,LW} is asymptotically normal when appropriately scaled, and a conservative test statistic W_CH is obtained by studentizing this difference. The test rejects if W_CH is larger than the standard normal quantile z_{1−α/2}. It is very versatile, since it does not impose restrictions on the cointegration strength b and can be applied to stationary as well as non-stationary long-memory processes, but it requires a priori knowledge about the location of d in the parameter space to determine the order of differencing.

Hualde and Velasco (2008) propose another testing strategy in a residual-based regression framework. As before, the series X_t is partitioned such that X_t = (y_t, x_t')', and they consider the single-equation regression y_t = β̄'x_t + v_t. The test idea is based on the observation that the fractionally differenced residual Δ^{d_x} v_t is unrelated to the long-run level of x_t under the null hypothesis, because it is then weakly dependent. The cross-spectrum of x_t and Δ^{d_x} v_t should therefore be zero at frequencies local to zero. Possible dependence between the short-run components u_t and e_t in (3) would manifest itself in the form of a non-zero cross-spectrum at higher frequencies.
The test statistic of Hualde and Velasco (2008) is therefore based on a quantity τ_m that averages the cross-periodogram of x_t and the estimated residuals over the first m Fourier frequencies. The projection vector ζ(λ_j) estimates the DFT of the residual process v_t from w_{Δ^{d_v,d}X}(λ_j), the DFT of the fractionally differenced process Δ^{d_v,d}X_t. As usual for these semiparametric approaches, it is assumed that m ≤ T/2 and m/T → 0 as T → ∞. The resulting test statistic is a quadratic form in τ_m with weights defined by a_j = 1 if j ∈ {0, T/2} and a_j = 2 otherwise. Under the null hypothesis this test statistic follows an asymptotic χ²_{p−1}-distribution. Under the alternative the test develops power, since d_v is estimated from the NBLS estimate of the cointegrating residuals. Since these have reduced memory under the alternative, the first component of Δ^{d_v,d}X_t (y_t) is I(b) instead of I(0) and the cross-spectrum of the under-differenced estimate of v_t and x_t in τ_m becomes non-zero. As before, the memory orders are estimated using consistent estimators that account for the (possible) non-stationarity of the data, for example the exact local Whittle estimator of Shimotsu and Phillips (2005).
A modified test with more power in bivariate systems X_t = (X_1t, X_2t)' replaces τ_m by τ̃_m, in which the estimate of the cross-spectral element f_12(λ_j) of the differenced processes is constructed using d̂_v instead of d̂. This modified estimate also diverges under the alternative and thus constitutes an additional source of power. The asymptotic χ²_{p−1}-distribution is unaffected by this modification.
It is not necessary to impose any restrictions on the range of d and d_v except for those implied by fractional cointegration, and processes of dimension higher than two are allowed. The asymptotic χ²_{p−1}-distribution depends only on the dimension of the process. Furthermore, the memory parameters are allowed to differ as long as two components of X_t share the same memory parameter and the vector is sorted so that the component with the highest memory comes first.
Nielsen (2010) introduces a sequential testing approach to test fractional cointegration and to determine the cointegrating rank. The method is based on a variance-ratio statistic and imposes the assumption that the process X t is non-stationary and the potential cointegrating residual process is stationary with d v < 0.5 < d.
Denote the demeaned process by Z_t = X_t − X̄, where X̄ is the vector of arithmetic means of the component series, and denote the ε-times fractionally integrated version of Z_t by Z̃_t = Δ^{−ε} Z_t 1(t > 0). The variance-ratio statistic is then based on the eigenvalues of

K_T(ε) = (Σ_{t=1}^T Z̃_t Z̃_t')^{−1} (Σ_{t=1}^T Z_t Z_t').

Taking the ratio has the advantage of eliminating the variance of the process from the asymptotic distribution. The eigenvalues of K_T(ε) sorted in ascending order are denoted by δ_{a,K} with a = 1, ..., p.
Similar to the spectral matrix G, the rank of K_T(ε) is reduced to p − r under fractional cointegration. This leads to a non-parametric trace statistic based on the partial sum of the eigenvalues, whose structure is similar to that of the trace statistic of Johansen (1991) in the parametric context, where r is the number of cointegrating relations under the null hypothesis. Using this statistic, the cointegrating rank can be determined by a sequence of tests of the null hypothesis H_0 : r = r_0 vs. H_1 : r > r_0. The limiting distribution is a functional of a (p − r)-dimensional vector of mutually independent standard fractional Brownian motions of type II, where the Brownian motions driving the fractional Brownian motions B^{p−r}_d and B^{p−r}_{d+ε} are identical. This asymptotic distribution is non-standard and depends on the dimension p, the cointegrating rank r, the order of fractional integration ε, and d. In practice, d can be estimated consistently, and the other parameters are known. Critical values for d = 1, ε = 0.1, and p − r = 1, 2, ..., 8 are given by Nielsen (2010), who recommends using ε = 0.1 to integrate the process because it leads to higher power than larger values, whereas smaller values improve power slightly but lead to size distortions at the same time. For more details confer Nielsen (2009). Note that choosing a different order of fractional integration ε changes the limiting distribution, which implies that the test performance is not free from user-chosen tuning parameters.
To see why this test can be considered residual-based, note that δ a,K = (Σ_t v̂ at ²) / (Σ_t ṽ at ²), where η a denotes the eigenvector corresponding to δ a,K , v̂ at = η a ′ Z t , and ṽ at is its d1 times integrated version, d1 being the order of fractional summation. Since the first r eigenvectors are consistent estimates of the cointegrating space, the first r eigenvalues are thus given by the ratio of the sum of the squared cointegrating residuals and the sum of squares of their d1 times integrated versions. Here the sums of squares are estimators of the respective process variances, and it is assumed that d > 1/2 > d v . Under the null hypothesis of no fractional cointegration, the numerator grows at rate O P (T^{2d}) and the more persistent denominator at rate O P (T^{2(d+d1)}), so that the eigenvalue has rate O P (T^{−2d1}). Under the alternative of fractional cointegration with d v < 1/2, the residual process v t is stationary, so that its variance is finite and the numerator grows at rate O P (T). The denominator, which may or may not be stationary due to the integration with d1, is O P (T^{2 max{1/2, d−b+d1}}). Consequently, the eigenvalue is O P (T^{min{0, 1−2(d−b+d1)}}), so that it goes to zero more slowly than under the null hypothesis.
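For illustration, the variance-ratio eigenvalues can be computed along the following lines. This is our own minimal sketch (the function names and the symbol d1 for the order of fractional summation are ours), not Nielsen's implementation, and it omits the critical-value lookup:

```python
import numpy as np

def frac_integrate(x, d1):
    """Type II fractional integration: apply Delta^{-d1} to each column of x,
    treating pre-sample values as zero. The MA coefficients of (1 - L)^{-d1}
    follow the recursion pi_0 = 1, pi_k = pi_{k-1} * (k - 1 + d1) / k."""
    T = x.shape[0]
    pi = np.ones(T)
    for k in range(1, T):
        pi[k] = pi[k - 1] * (k - 1 + d1) / k
    return np.column_stack(
        [np.convolve(pi, x[:, j])[:T] for j in range(x.shape[1])]
    )

def variance_ratio_eigenvalues(X, d1=0.1):
    """Ascending eigenvalues of the ratio of the sum of outer products of the
    demeaned process Z_t to that of its d1-times integrated version, i.e. of
    K_T(d1) = (sum Ztilde_t Ztilde_t')^{-1} (sum Z_t Z_t')."""
    Z = X - X.mean(axis=0)
    Zt = frac_integrate(Z, d1)
    A = Z.T @ Z          # sum of squares of the demeaned process
    B = Zt.T @ Zt        # sum of squares of the integrated version
    K = np.linalg.solve(B, A)
    return np.sort(np.linalg.eigvals(K).real)
```

Under the null of no cointegration all eigenvalues vanish at rate T^{−2d1}; a trace-type statistic then aggregates the relevant eigenvalues and is compared against the simulated critical values.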
The test is restrictive in that it requires non-stationary processes and, preferably, stationary residual processes, but as Nielsen's Monte Carlo simulations show, the test still exhibits power if d v > 0.5 and b > 0. Furthermore, it is applicable to multivariate systems and is able to estimate the number of cointegrating relations.

Wang et al. (2015) propose a simple residual-based test in a bivariate setting where X t = (X 1t , X 2t )′. The test statistic is based on the partial sum of Δ^{d̂ v} Z 2t , the demeaned second component series fractionally differenced with the estimated memory order of the potential cointegrating residual v t . The statistic is a suitably normalized version of this partial sum, in which the spectral density f 22 of either u 2t or e t in (3) appears, depending on whether a triangular model or a common-components model is assumed.
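A minimal (type II) fractional differencing routine of the kind needed to form Δ^{d v} Z 2t might look as follows; this is an illustrative sketch, not the authors' code, and the estimation of d v itself is omitted:

```python
import numpy as np

def frac_diff(x, d):
    """Truncated (type II) fractional difference Delta^d of a series x.
    The coefficients of (1 - L)^d follow b_0 = 1, b_k = b_{k-1} * (k - 1 - d) / k,
    and pre-sample observations are treated as zero."""
    T = len(x)
    b = np.ones(T)
    for k in range(1, T):
        b[k] = b[k - 1] * (k - 1 - d) / k
    return np.convolve(b, x)[:T]
```

For d = 1 this reduces to the first difference (with the initial observation kept as starting value); applying it with an estimated memory order to the demeaned series yields the input of the partial-sum statistic.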
Under the null hypothesis d v = d, so that Δ^{d v} Z 2t is I (0) and the appropriately rescaled partial sum is asymptotically standard normal. Under the alternative the statistic diverges. To make the test statistic feasible, the spectral density f 22 can be estimated from the periodogram of the fractionally differenced process Δ^{d̂} Z 2t following the approach of Hualde (2013). While Wang et al. (2015) are agnostic about the method used for the estimation of the memory parameters d and d v , they assume that d > 1/2 so that the cointegrating vector can be estimated using ordinary least squares. The memory orders can be estimated from the OLS residuals v̂ t and from Z 2t using any of the common semiparametric estimators, such as ELW, with a bandwidth m (as in the estimation of f 22 ) that fulfills the usual bandwidth conditions. The method does not impose any restrictions on the fractional cointegrating strength b. As the Monte Carlo simulations below show, the non-stationarity requirement (d > 1/2) can be circumvented if the cointegrating residual v t is based on the NBLS estimate of the cointegrating vector instead of the OLS estimate.

Zhang et al. (2019) propose an alternative estimator of the cointegrating space that is based on the eigenvectors of the non-negative definite matrix M = Σ_{j=1}^{j 0} Ω̂ Z ( j) Ω̂ Z ( j)′, where Ω̂ Z ( j) = T^{−1} Σ_t Z t+ j Z t ′ is the sample autocovariance matrix at lag j and j 0 is a fixed integer. The matrix M is thus the sum of the outer products of the first j 0 autocovariance matrices with themselves. The outer product is used instead of the autocovariance matrices Ω Z ( j) themselves to ensure that there is no information cancellation over different lags in M. It is assumed that d > 0.5 and d v < 0.5.
The eigenvalues of M in descending order are denoted by δ a,M for a = 1, . . . , p and the corresponding eigenvectors are denoted by χ a,M . Similar to the matrix G in (1), the first p − r eigenvalues of M are non-zero, whereas the remaining r are zero. For known r the eigenvectors corresponding to the r smallest eigenvalues provide a consistent estimate of the cointegrating space.
If r is unknown, the p potential cointegrating residuals are estimated using the eigenvectors so that v M at = χ a,M X t . By the same argument as in the procedure of Chen and Hurvich (2006), the residual corresponding to the smallest eigenvalue is most likely a cointegrating residual with reduced memory of d v = d − b and the residual corresponding to the largest eigenvalue is I (d).
The cointegrating rank can be estimated using a simple criterion based on the summed autocorrelations of the potential cointegrating residuals. Define Q a (k 0 ) = Σ_{k=1}^{k 0} ρ̂ a (k), where ρ̂ a (k) is the lag-k sample autocorrelation of the demeaned residual v M at − v̄ M a and v̄ M a is the mean of v M at . The cointegrating rank estimator counts the instances in which the averaged autocorrelation is smaller than a threshold c 0 ∈ (0, 1): r̂ = Σ_{a=1}^{p} 1{Q a (k 0 )/k 0 < c 0 }. If the residual v M at is stationary (d v < 1/2), the rescaled sum of autocorrelations Q a (k 0 )/k 0 converges to zero for k 0 → ∞, since the autocorrelations are asymptotically proportional to k^{2d v −1}. Under certain regularity conditions this estimator is consistent. Even though consistency is only proven for r ≥ 1 in Theorem 4.2 of Zhang et al. (2019), our simulations below show that the estimator also works well in discriminating between r = 0 and r = 1.
It should be noted that the authors define r = p if all components of X t are I (0). This leads to some abuse of notation, and r cannot be interpreted as the cointegrating rank in a narrow sense. Based on their simulations, Zhang et al. (2019) recommend using j 0 = 5, k 0 = 20, and c 0 = 0.3. The estimator is easy to implement and applicable to higher-dimensional processes. However, the requirement that d > 0.5 and d v < 0.5 is restrictive.
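The rank estimator can be sketched as follows. This is our own illustrative implementation of the steps described above (using the plain sum of sample autocorrelations in Q a; all names are ours), not the authors' code:

```python
import numpy as np

def zhang_rank(X, j0=5, k0=20, c0=0.3):
    """Rank estimator in the spirit of Zhang et al. (2019): eigen-decompose
    M = sum_{j=1}^{j0} Omega(j) Omega(j)' and count the candidate residuals
    whose averaged autocorrelation Q_a(k0)/k0 falls below the threshold c0."""
    T, p = X.shape
    Z = X - X.mean(axis=0)
    M = np.zeros((p, p))
    for j in range(1, j0 + 1):
        Om = Z[j:].T @ Z[:-j] / T      # sample autocovariance matrix at lag j
        M += Om @ Om.T                  # outer product: no cancellation over lags
    _, vecs = np.linalg.eigh(M)        # eigenvalues/eigenvectors in ascending order
    V = Z @ vecs                        # candidate residuals, one per eigenvector
    r_hat = 0
    for a in range(p):
        v = V[:, a] - V[:, a].mean()
        Q = sum(np.corrcoef(v[k:], v[:-k])[0, 1] for k in range(1, k0 + 1))
        if Q / k0 < c0:                 # stationary residual: averaged ACF is small
            r_hat += 1
    return r_hat
```

For a system of I(1) components without cointegration the averaged autocorrelations of all candidate residuals stay close to one and the estimator returns 0, whereas for an all-I(0) process it returns p, in line with the authors' convention.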

Monte Carlo Study
The asymptotic properties of all tests and rank estimators presented in Sect. 3 are derived by the respective authors, and some of them also present simulations to explore the finite-sample behavior of their statistics. This, however, is not the case for all tests, and a comprehensive comparative study suited to guide the choice of appropriate methods in practical applications has so far been missing. To close this gap, we conduct an extensive Monte Carlo study. In addition to general results, we are particularly interested in answering two empirically motivated questions.
(i) How does correlation between the underlying short-run components influence the size of the tests? This question is important, since applied researchers will generally want to test for fractional cointegration if two related series seem to be co-moving. Similar trajectories, however, can also be generated by persistent processes with highly correlated innovations. Tests for the null hypothesis of no fractional cointegration should therefore be robust to a relatively high degree of correlation between the short-run components of the series.
(ii) Is there a notable difference in the power of the tests depending on whether the data is generated from a triangular model or from a common-components model? Both models are used in the literature to motivate and construct testing procedures, but to our knowledge simulation results are typically based on the triangular representation. In practice, either model could be justified, depending on the application. For example, if one is concerned with potential fractional cointegrating relationships between stock prices, it is not clear why one of the stock prices should be seen as a perturbed version of the other (as is the case in the triangular model, which treats the series asymmetrically), so that the common-components model is more suitable. In contrast, in the case of the potential parity between implied volatility and the expected average realized volatility over the next month (the so-called implied-realized parity analyzed by Christensen and Prabhala (1998), Christensen and Nielsen (2006b), and Nielsen (2007), among others), there is a theoretical reason to assume that the implied volatility is a perturbed version of the expected average future realized volatility, since it contains a variance-risk premium (cf. Chernov (2007)). Therefore, a triangular model is more suitable.
We focus on three data generating processes (DGPs) based on the general model from Eqs. (3) to (5). For simplicity we set c 1 = c 2 = 0 and b = b 1 = b 2 , so that the processes have mean zero and a common memory reduction parameter. A simple bivariate model without fractional cointegration, referred to as DGP1, is obtained by setting ξ 1 = ξ 2 = 0, where correlation between u 1t and u 2t is allowed. This is our size DGP. For the power simulations we consider a triangular model (DGP2) and a common-components model (DGP3). In both cases we set ξ 1 = ξ 2 = 1, which implies a cointegrating vector of β = (1, −1)′. In both DGP2 and DGP3 we have Y t = Δ^{−d} e t 1{t > 0}. The underlying short-run components u 1t and u 2t , or u 1t and e t , depending on the DGP, have unit variance and correlation ρ.
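As an illustration, a draw from DGP1 can be generated along the following lines (our own sketch; the function name simulate_dgp1 and the use of Gaussian innovations are our choices):

```python
import numpy as np

def simulate_dgp1(T, d, rho, seed=0):
    """Bivariate type II fractionally integrated process of memory d without
    cointegration. The short-run components u_1t, u_2t have unit variance and
    correlation rho; pre-sample values are truncated to zero."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal((T, 2))
    u[:, 1] = rho * u[:, 0] + np.sqrt(1.0 - rho ** 2) * u[:, 1]
    pi = np.ones(T)                    # MA coefficients of (1 - L)^{-d}
    for k in range(1, T):
        pi[k] = pi[k - 1] * (k - 1 + d) / k
    return np.column_stack(
        [np.convolve(pi, u[:, j])[:T] for j in range(2)]
    )
```

Repeating such draws over the parameter grid and applying each procedure yields the rejection frequencies reported below.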
We consider sample sizes of T ∈ {100, 500, 1000, 2500} and values of d ∈ {0.4, 0.7, 1}, covering the stationary and the non-stationary region. Under fractional cointegration, the memory reduction b is linked to the value of d so that b ∈ {d/3, d}. Consequently, there is either a memory reduction to 0 if b = d or a weaker form of cointegration if b = d/3. In order to examine the impact of correlation between the short-run components, we consider ρ ∈ {0, 0.45, 0.9, 0.99}. Note that the results for b = d/3 and δ m = 0.55 as well as further robustness checks (other size DGPs and p = 3-dimensional processes) are available online as supplementary material.
The results presented are based on 5000 replications and a nominal significance level of α = 0.05. Since the tests impose different conditions on d and d v , we mark in bold the cells in the tables where the methods have well-defined asymptotic properties and are supposed to deliver good results. In some cases the methods give satisfactory results beyond these limitations. For example, we implement the method of Wang et al. (2015) using a NBLS estimate of the cointegrating vector instead of the OLS estimate. This makes the test applicable to stationary as well as non-stationary time series.
Since the limiting distributions of the non-pivotal test statistics of Marmol and Velasco (2004) and Nielsen (2010) depend on d and it is assumed that d > 1/2, it is unclear which critical values should be used in the stationary region. The respective cells are therefore left blank.
Further, it should be noted that the methods of Nielsen and Shimotsu (2007) (or Robinson and Yajima (2002)) and Zhang et al. (2019) are not tests but rank estimators. Instead of the rejection frequency, we therefore report the proportion of correctly estimated cointegrating ranks. These results cannot be interpreted as size or power; in particular, in the size table and graphs the rank estimators should ideally yield 0 instead of 0.05, since they do not involve any significance level.

Table 1 displays size results (rank estimation marked with *) based on DGP1 with δ m = 0.75; in the tables we abbreviate the methods with the initial letters of the authors' names and the year of publication. The methods that have well-defined asymptotic properties across all parameter constellations covered in the table are those of Nielsen and Shimotsu (2007), Chen and Hurvich (2006), Hualde and Velasco (2008), and Souza et al. (2018). All of these methods achieve good size properties for ρ = 0, except the test of Chen and Hurvich (2006) when d = 0.4. Among these four procedures, only the tests of Souza et al. (2018) (disregarding the smallest sample) and Chen and Hurvich (2006) do not over-reject as ρ increases. For low values of d the test of Hualde and Velasco (2008) already becomes oversized for ρ = 0.45, and as ρ increases it becomes oversized for higher values of d, too. The rank estimation procedure of Robinson and Yajima (2002) and Nielsen and Shimotsu (2007) is even more affected and estimates a cointegrating rank of one in nearly all cases if ρ ≥ 0.9.
In addition to the tests of Souza et al. (2018) and Chen and Hurvich (2006), the modified version of the test by Wang et al. (2015) that is based on the NBLS estimator instead of OLS also maintains satisfactory size properties across all values of ρ and d.
The group of procedures that are only applicable to non-stationary systems consists of Marmol and Velasco (2004), Nielsen (2010), and Zhang et al. (2019). The procedure of Marmol and Velasco (2004) behaves similarly to that of Hualde and Velasco (2008) in the sense that it is very liberal for higher values of ρ and lower values of d. For non-stationary series and larger sample sizes the procedure of Zhang et al. (2019) correctly estimates the cointegrating rank to be zero, independently of the degree of correlation. The variance-ratio statistic of Nielsen (2010) turns out to be slightly liberal for d = 0.7 in larger samples, but holds the nominal size for d = 1 even in small samples. In particular, its performance is independent of the degree of correlation.
Finally, the test of Robinson (2008a) is only applicable to stationary systems. Here, it can be observed that the test does not hold its size for ρ = 0. This is because the Hausman-testing principle requires one of the estimates of the memory parameter to be more efficient than the other, but the multivariate estimate is not more efficient in the absence of correlation. For other values of ρ, however, the test has good size properties. Interestingly, the test also has good size properties if d = 1, even though it assumes stationarity. The intermediate value of d = 0.7, on the other hand, leads to a moderately oversized test.

Figure 1 analyzes the interaction between the degree of correlation ρ and the choice of the bandwidth δ m . It shows the size of the tests in scatterplots where the results with no correlation (ρ = 0) are plotted against the results with high correlation (ρ = 0.99). The upper panel displays the tests that allow for stationary processes (d = 0.4), and the lower panel (d = 1) the non-stationarity-robust tests, i.e. all except that of Robinson (2008a). The dashed lines mark the nominal size level of 0.05, so that ideally all points would lie on the intersection of these two lines. The dotted line is the bisector, implying that methods above the bisector do better with correlation and methods below the bisector do better without. Black symbols give results with a bandwidth parameter of δ m = 0.75 and gray symbols with δ m = 0.55.
It can be observed that the procedures of Marmol and Velasco (2004), Nielsen and Shimotsu (2007), and Hualde and Velasco (2008) lie below the bisector and are thus negatively affected by high correlation, whereas the tests of Chen and Hurvich (2006) and Robinson (2008a) lie above the bisector. The remaining tests lie on the bisector, indicating robustness to correlation. Regarding the bandwidth choice, the tests of Marmol and Velasco (2004), Chen and Hurvich (2006), Hualde and Velasco (2008), and Wang et al. (2015) are more liberal with a small bandwidth, whereas the procedure of Nielsen and Shimotsu (2007) is relatively robust in terms of size. In general, correlation in the underlying short-run components is mistaken for cointegration more often in stationary systems than in non-stationary ones.
Overall, in terms of size for bivariate systems and taking the range of admissible parameter values into account, we find that the test of Souza et al. (2018) has the best performance, followed by those of Chen and Hurvich (2006) and Wang et al. (2015). Among the procedures only applicable to non-stationary systems, Nielsen (2010) and Zhang et al. (2019) are very reliable options as well.

To analyze the power of the procedures, we focus on the triangular representation in DGP2 with b = d, so that the memory reduces to zero in the cointegrating relation. Again, δ m is set to 0.75. The results are shown in Table 2. In the following, we focus on the results for parameter constellations for which the tests have reasonable size properties.
It can be seen that the rank estimate of Nielsen and Shimotsu (2007) correctly identifies the presence of fractional cointegration even in relatively small samples. Since the estimate works well under the null hypothesis if ρ is low, it clearly outperforms its competitors in this situation. The power of the test of Hualde and Velasco (2008) is also high, but it suffers from similar size issues in case of strongly correlated short-run components.
Among the more widely applicable tests, the approach of Souza et al. (2018) generates higher power than that of Wang et al. (2015) (except for ρ = 0.99), which in turn outperforms the approach of Chen and Hurvich (2006). Furthermore, in small samples the test of Souza et al. (2018) outperforms more restrictive approaches such as those of Robinson (2008a) and Nielsen (2010); in large samples this advantage vanishes. For the test of Chen and Hurvich (2006) we observe that the power is lower for d = 0.7 than for the other values of d. Furthermore, its power becomes non-monotonic in T in some cases. This effect is likely caused by the required order of integer differencing being determined incorrectly for intermediate values of d. The approach of Zhang et al. (2019) behaves similarly to that of Nielsen (2010).
With regard to the test of Robinson (2008a), it is noteworthy that the power is considerably lower for ρ = 0.9 than it is for ρ = 0.45 or ρ = 0.99. Further simulation results on this V-shaped dependence pattern between the power of the test and ρ (not reported here) show that the test has no power if ρ = 0.8 and its power is very low in a neighborhood of this point.
The test of Marmol and Velasco (2004) develops good power for stationary values of d v , even though its theoretical properties are derived under the assumption that d v > 0.5.
Overall, we find that the rank estimation of Nielsen and Shimotsu (2007) performs best in identifying the correct order of fractional cointegration if the correlation between the series is low. For non-stationary data Nielsen (2010) is a good choice, and among the more broadly applicable methods the test of Souza et al. (2018) clearly performs best in terms of size and power.
The previous table is based on the triangular model (DGP2), but we are also interested in the performance under the common-components model (DGP3). Those results are displayed in Table 3. It can be seen that there are a number of striking differences in the relative performance of the tests. For low values of d, the rank estimation procedure of Nielsen and Shimotsu (2007)/Robinson and Yajima (2002) loses precision. At the same time, the test of Chen and Hurvich (2006) becomes more powerful, so that overall the two procedures become comparable in terms of their ability to identify the correct rank. Unfortunately, the non-monotonic behavior of the power of the test of Chen and Hurvich (2006) persists. The test of Souza et al. (2018) still performs relatively well, especially for larger values of d. The same holds true for that of Wang et al. (2015), which reaches relatively high power in smaller samples but approaches 1 only slowly.
With respect to the other tests, it can be seen that the test of Hualde and Velasco (2008) has very good power properties, also for low values of ρ, where it maintains its size. The procedures of Nielsen (2010) and Zhang et al. (2019) have lower power for d = 0.7 in small samples, but in all other constellations their power results are very good.
In Figure 2 we conduct an analysis similar to that for the size in order to examine the effect of the model construction and of the bandwidth choice on the power of the procedures. As before, black symbols represent results with δ m = 0.75 and gray symbols results with δ m = 0.55. The values of d and b are selected such that the power of the procedures tends to be low and changes in their behavior are easier to identify. First of all, most procedures perform better in the triangular model, except for the tests of Chen and Hurvich (2006) and Robinson (2008a). Furthermore, while an increase of the bandwidth leads to a considerable power gain for the tests of Chen and Hurvich (2006), Robinson (2008a), and Souza et al. (2018), the approaches of Marmol and Velasco (2004), Hualde and Velasco (2008), and Nielsen and Shimotsu (2007) have higher power with a smaller bandwidth, at least in the common-components model. This, however, might be due to the larger size distortions visible in Figure 1. The performance of the approaches of Nielsen (2010) and Wang et al. (2015) is relatively independent of the bandwidth choice. For the test of Nielsen (2010) this is explained by the fact that the bandwidth only influences the estimate of d, which determines the correct set of critical values; the test statistic itself does not depend on the bandwidth.
A strong advantage of semiparametric methods is their theoretical robustness to short-run dynamics. We therefore repeat the previous experiments with u 1t , u 2t , and e t as AR(1) processes with φ ∈ {−0.5, 0.5}. Table 4 contains both size and power results. Most tests and rank estimators exhibit performance differences compared to the white-noise case, in particular with respect to size. For Souza et al. (2018), Wang et al. (2015), and Zhang et al. (2019) we observe sensitivity to the sign of the AR parameter: they tend to become liberal (or, for Zhang et al. (2019), to have a higher error rate) with φ = −0.5 and conservative (or to have a lower error rate) with φ = 0.5. Additionally, the test of Souza et al. (2018) does not hold the nominal size if d = 1 and φ = 0.5. Similarly to the estimator of Zhang et al. (2019), which no longer yields an error rate of 0, the one of Nielsen and Shimotsu (2007) has an error rate of 2-10% instead of 0%. The test of Chen and Hurvich (2006) becomes more conservative with stationary data and gets closer to the nominal significance level for non-stationary data, and the procedures of Hualde and Velasco (2008) and Marmol and Velasco (2004) both turn out very conservative. The test of Nielsen (2010) holds the nominal size only for d = 1 with a negative AR parameter and becomes very liberal in the other cases. Compared to the analogous results without short-run dynamics, we find weaker small-sample performance for the tests of Robinson (2008a) and Marmol and Velasco (2004) in terms of size, and for the methods of Nielsen and Shimotsu (2007), Souza et al. (2018), and Marmol and Velasco (2004) in terms of power. In general, we observe a tendency toward slightly lower power in some settings, such as stationary data for the tests of Chen and Hurvich (2006) and Souza et al. (2018), but overall the power results do not differ much from the white-noise case.
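The AR(1) short-run components used in this robustness check can be generated as follows (an illustrative sketch with our own function name):

```python
import numpy as np

def ar1_component(T, phi, seed=0):
    """AR(1) short-run component u_t = phi * u_{t-1} + e_t with iid N(0,1)
    innovations, to be passed through the fractional integration filter of the
    respective DGP in place of the white-noise component."""
    rng = np.random.default_rng(seed)
    e = rng.standard_normal(T)
    u = np.empty(T)
    u[0] = e[0]
    for t in range(1, T):
        u[t] = phi * u[t - 1] + e[t]
    return u
```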

Conclusion
This review is written with the objective of providing guidance for the selection of methods in practical applications. We judge the methods based on (i) the range of values of d and b that are allowed, (ii) the ability to distinguish correctly between common trends and correlated innovations, and (iii) the performance across different DGPs, namely triangular systems as well as common-components models.

Based on our Monte Carlo studies, we find that some of the proposed approaches have weaknesses in their finite-sample behavior in some empirically relevant scenarios, especially in the presence of correlated short-run components. This concerns mostly the methods of Nielsen and Shimotsu (2007) (or Robinson and Yajima (2002)), Marmol and Velasco (2004), and Hualde and Velasco (2008), which have the highest power but suffer from size issues in case of strongly correlated short-run components. With regard to (iii), we find that the size properties of the tests in the triangular case and the common-components model are generally comparable (see online material). For the power of the tests, however, there are important differences between the two cases. In particular, the test of Chen and Hurvich (2006) has much better power for stationary systems under the common-components specification, whereas the methods of Robinson and Yajima (2002) and Hualde and Velasco (2008) become worse in their ability to detect fractional cointegration.
Although the methods of Robinson (2008a), Nielsen (2010), and Zhang et al. (2019) turn out to be robust to short-run correlation and are appealing due to their simplicity, they impose practically relevant restrictions on the permissible range of d and b. However, if there is prior knowledge about the (non-) stationarity of the data, those procedures are very good options.
Overall, we conclude that the test of Souza et al. (2018) has the best properties for bivariate systems, both theoretically and empirically, and is a good choice for the applied econometrician. It allows for the whole empirically relevant range of d and b, it is robust to correlation and to short-run dynamics with positive coefficients, and it performs comparably in both triangular systems and common-components models.
In higher dimensional systems, however, the test of Souza et al. (2018) is no longer applicable and that of Chen and Hurvich (2006) turns out to be liberal in finite samples from stationary processes. Here, the procedure of Robinson (2008a) can be recommended for stationary processes and the rank estimation by Nielsen (2010) and Zhang et al. (2019) should be preferred for non-stationary systems if the cointegrating residuals can be expected to be stationary.