A simple wavelet-based test for serial correlation in panel data models

Hong and Kao (2004) proposed a class of generally applicable wavelet-based tests for serial correlation of unknown form in the residuals from a panel regression model. The tests can be applied to both static and dynamic panel models. Their test, however, is computationally difficult to implement, and simulation studies show that it has poor small-sample properties. In this paper, we extend Gençay’s (2010) time-series test for serial correlation to the panel data case. Our new test is also wavelet based and maintains the advantages of the Hong and Kao (2004) test, but it is much simpler and easier to implement. Furthermore, simulation results show that our test has a faster convergence rate and hence better small-sample properties than the Hong and Kao (2004) test. We also compare our test with several other existing tests for serial correlation; in general, our test has better statistical properties in terms of both size and power.

Testing for serially correlated errors in panel models is thus an essential part of econometric modeling.
Most panel data tests, for example Breusch and Pagan (1980), Bhargava et al. (1982), Baltagi and Li (1995) and Bera et al. (2001), test for no serial correlation against the alternative of serial correlation of some known form. Lee and Hong (2001) and Hong and Kao (2004) relaxed the assumption that the form of serial correlation must be known. This new class of tests is constructed from the wavelet spectrum, which can detect serial correlation where the spectrum has peaks or kinks. The tests can be applied to residuals from a wide range of panel data models: static models, dynamic models, one- or two-way error-component models, and fixed-effects and random-effects models. Because the Hong and Kao (2004) test is more general than other tests, it also has higher power. Among the weaknesses of the Hong and Kao (2004) test is its complex structure, which makes its convergence rate slow. Additionally, although the Hong and Kao test statistic has a standard normal limiting distribution under the null, in small samples critical values must be generated by simulation, which further complicates the test and makes it computationally time-consuming.
In this paper, we propose an alternative serial correlation test for panel data models that maintains the strengths of the Hong and Kao (2004) test while having a simpler structure, a faster convergence rate and better small-sample properties. Our test is also wavelet based and is constructed by combining the variance ratio test proposed by Gençay (2010) for time-series models with the Fisher-type combination test applied in Choi (2001). Our test has a simple construction and converges to a normal distribution quickly. No empirical critical values need to be simulated; thus, the computational burden is significantly reduced. Simulations show that the small-sample properties of our test compare favorably to those of other commonly used tests.
The rest of this paper is structured as follows: Sect. 2 introduces the wavelet transform and the Hong and Kao test, Sect. 3 introduces the panel data test, Sect. 4 contains the simulation study, and Sect. 5 concludes the paper.

Introduction to the wavelet transform
Wavelet transform methods began to gain the attention of statisticians and econometricians after a series of articles and monographs by, for example, Schleicher (2002), Crowley (2007), Vidakovic (1999), Percival and Walden (2000) and Gençay et al. (2001). The wavelet methodology represents an arbitrary time series in both the time and frequency domains by convolving the series with a family of small wave-like functions. In contrast to the time-infinite sinusoidal waves of the Fourier transform, the time-localized wavelet basis functions $\{\psi_{jk}(\cdot) : j, k \in \mathbb{Z}\}$ used in the wavelet transform are generated by translations and dilations of a basic mother wavelet $\psi \in L^2(\mathbb{R})$. The basis is constructed through $\psi_{jk}(t) = 2^{j/2}\psi(2^j t - k)$, where $k$ is the location index and $j$ is the scale index that corresponds to the information inside the frequency band $[1/(2^{j+1}\Delta),\, 1/(2^{j}\Delta)]$, in which $\Delta$ is the length of the sampling interval of the series. For a continuous signal $f$, the wavelet transform is given by the wavelet coefficients $f \mapsto \{\gamma(j,k)\}_{j,k \in \mathbb{Z}}$ with $\gamma(j,k) = \langle f, \psi_{jk}\rangle = \int f(t)\,\psi^{*}_{jk}(t)\,dt$, which represent the resolution at time $k$ and scale $j$. Resolution in the time domain and in the frequency domain is achieved by shifting the time index $k$ and the scale index $j$, respectively. A lower level of $j$ corresponds to higher-frequency bands, and a higher level of $j$ corresponds to lower-frequency bands. For a time series $Z = \{Z_t,\, t = 0, \ldots, T-1\}$ sampled at discrete time points, the wavelet coefficients are obtained via the discrete wavelet transform (DWT) or the maximum overlap discrete wavelet transform (MODWT).
The DWT is implemented by applying a cascade of orthonormal high-pass and low-pass filters to a time series, separating its characteristics at different frequency bands (Mallat 1989). The level-$J$ DWT transforms a dyadic time series into $T-1$ wavelet coefficients structured as $J$ scales plus one scaling coefficient. Scale $j = 1, \ldots, J$ contains the information in the frequency band $[1/2^{j+1},\, 1/2^{j}]$ and consists of $T/2^{j}$ coefficients that correspond to strictly adjacent wavelet functions. The maximum overlap wavelet transform is a variant of the DWT. The level-$J$ MODWT projects the time series $Z$ onto $J \times T$ wavelet functions, yielding wavelet coefficients $W_j = w_j Z$ for $j = 1, 2, \ldots, J$ and scaling coefficients $V_J = v_J Z$. The $T \times T$ matrices $w_j$ can be viewed as high-pass filters that extract the higher parts of the frequency band of $Z$; the output wavelet coefficients $W_j$ correspond to fluctuations, or changes in average, on a scale of $\tau_j = 2^{j-1}$. The $T \times T$ matrix $v_J$ is a low-pass filter that extracts the lowest part of the frequency band of $Z$; the output scaling coefficients $V_J$ correspond to averages on a scale of $\lambda_J = 2^{J}$. In the MODWT, the $T$ functions at each scale are translated by only one time period per iteration and thus overlap to a great extent, in contrast to the strictly adjacent wavelet functions of the DWT. This overlapping property allows considerably greater smoothness in the reconstruction of selected frequency bands, at the cost of losing orthogonality. For a detailed treatment of the DWT and MODWT, we refer to Vidakovic (1999), Percival and Walden (2000) and Gençay et al. (2001).
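The coefficient structure of the DWT described above can be illustrated numerically. The sketch below (Python with NumPy; the helper `haar_dwt` is our own minimal implementation, not a library routine) applies a level-3 orthonormal Haar DWT to a dyadic series of length 64 and checks that scale $j$ holds $T/2^j$ detail coefficients and that the orthonormal transform preserves energy exactly.

```python
import numpy as np

def haar_dwt(z, levels):
    """Minimal orthonormal Haar DWT (len(z) must be divisible by 2**levels).

    Returns a list of detail-coefficient arrays, one per scale j = 1..levels,
    plus the final scaling coefficients; scale j holds T / 2**j coefficients.
    """
    details = []
    approx = np.asarray(z, dtype=float)
    for _ in range(levels):
        even, odd = approx[0::2], approx[1::2]
        details.append((even - odd) / np.sqrt(2.0))   # high-pass output
        approx = (even + odd) / np.sqrt(2.0)          # low-pass output
    return details, approx

z = np.random.default_rng(3).standard_normal(64)
d, a = haar_dwt(z, 3)
print([len(x) for x in d], len(a))      # [32, 16, 8] 8

# The orthonormal DWT preserves energy exactly.
energy = sum(np.sum(x**2) for x in d) + np.sum(a**2)
print(np.allclose(energy, np.sum(z**2)))   # True
```

Note that the detail coefficients are computed from strictly adjacent, non-overlapping pairs, which is exactly the property the MODWT relaxes.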
Generally, the wavelet coefficients from either the DWT or the MODWT are separated into different scales of wavelet and scaling coefficients representing different frequencies. When the series $Z = \{Z_t,\, t = 0, \ldots, T-1\}$ is pure white noise, its variance is evenly distributed over the whole frequency interval, so after the DWT or MODWT the variance of the wavelet and scaling coefficients should be evenly distributed as well. On the other hand, if $Z$ is not white noise but contains serial correlation, this even distribution of variance is violated. This observation is the basis for the variance-ratio test for serial correlation developed in Sect. 3.
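The even-energy property can be checked directly. The following sketch (Python/NumPy; `haar_modwt_level1` is our own illustrative helper with circular boundary treatment) applies a level-1 Haar MODWT to white noise and to a strongly autocorrelated AR(1) series: for white noise the low-frequency energy share is close to 1/2, while for the AR(1) series it lies well above 1/2.

```python
import numpy as np

def haar_modwt_level1(z):
    """Level-1 Haar MODWT with circular boundary treatment.

    Returns wavelet coefficients W (high-frequency half, band [1/4, 1/2])
    and scaling coefficients V (low-frequency half, band [0, 1/4]).
    """
    z_lag = np.roll(z, 1)               # Z_{t-1}, wrapped circularly
    w = (z - z_lag) / 2.0               # high-pass (difference) filter
    v = (z + z_lag) / 2.0               # low-pass (average) filter
    return w, v

rng = np.random.default_rng(42)
T = 20000

# White noise: energy should split evenly between the two frequency halves.
z = rng.standard_normal(T)
w, v = haar_modwt_level1(z)

# This MODWT is energy preserving: ||W||^2 + ||V||^2 = ||Z||^2 exactly.
total = np.sum(w**2) + np.sum(v**2)
print(np.allclose(total, np.sum(z**2)))          # True

# Low-frequency energy share is close to 1/2 for white noise.
g_wn = np.sum(v**2) / np.sum(z**2)
print(abs(g_wn - 0.5) < 0.02)                    # True

# AR(1) errors concentrate energy at low frequencies, so the share exceeds 1/2.
ar = np.empty(T)
ar[0] = rng.standard_normal()
for t in range(1, T):
    ar[t] = 0.9 * ar[t - 1] + rng.standard_normal()
w_ar, v_ar = haar_modwt_level1(ar)
g_ar = np.sum(v_ar**2) / np.sum(ar**2)
print(g_ar > 0.6)                                # True
```

The violation of the even split under serial correlation is exactly what the variance-ratio statistic of the next section exploits.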

The Hong and Kao (2004) test
The panel data model in Hong and Kao (2004) is

$Y_{it} = \beta' X_{it} + \mu_i + \lambda_t + v_{it}, \qquad i = 1, \ldots, n,\; t = 1, \ldots, T_i, \qquad (1)$

where $X_{it}$ can be either static or dynamic (i.e., it may include lagged values of $Y_{it}$), $\{\mu_i\}$ is an individual effect that is mutually independent across individuals, and $\{\lambda_t\}$ is a common time effect. The Hong and Kao test considers the null hypothesis that the errors $\{v_{it}\}$ are serially uncorrelated against the alternative of serial correlation of unknown form, and the test is performed on the demeaned estimated residuals $\hat v_{it}$. Hong and Kao (2004) use the power spectrum of $v_{it}$ over the frequency band $[-\pi, \pi]$ to build the test statistic because it contains the information on serial correlation at all lags. Moreover, instead of a Fourier representation of the spectral density, a wavelet-based spectral representation built on basis functions $\Psi_{jk}(\omega)$, derived from the above-mentioned wavelet basis $\psi \in L^2(\mathbb{R})$, is used. $\Psi_{jk}(\omega)$ can effectively capture local peaks and spikes in the spectral density by shifting the time index $k$. Based on the empirical wavelet coefficients $\hat\alpha_{ijk}$, Hong and Kao construct a heteroscedasticity-consistent test statistic $\widehat W_1$ and a heteroscedasticity-corrected test statistic $\widehat W_2$, both of which are asymptotically standard normal under $H_0$. The Hong and Kao (2004) test is spectral density based and requires no specification of the alternative form; thus, it is applicable to a wide range of serial correlation structures. However, the test has three main disadvantages. First, both test statistics are constructed using the hyperparameters $J_i$ (the resolution level in the wavelet decomposition), which are determined by a computationally intensive, data-driven method. Second, although Hong and Kao (2004) show that $\widehat W_1 \xrightarrow{d} N(0,1)$ and $\widehat W_2 \xrightarrow{d} N(0,1)$, the slow convergence of both statistics produces a serious downward size distortion when the asymptotic critical values from the standard normal distribution are used directly. It is therefore necessary to use bootstrapped or simulated critical values, which further complicates the test. Third, because the test statistics are based on the DWT, the data set is restricted: $T_i$ must be a power of 2. All of these disadvantages hinder the test's popularity. We therefore propose a much simpler test and show that it overcomes the shortcomings of $\widehat W_1$ and $\widehat W_2$ while still performing well against a wide range of serial correlation structures in the generalized panel model.

A panel data test based on wavelet variance ratio
The test in Hong and Kao (2004) is the panel extension of the wavelet spectrum-based serial correlation test for a single series proposed by Lee and Hong (2001). However, the Lee and Hong (2001) test has a slow convergence rate because of the estimation of the nonparametric spectral density. An alternative time-series test for serial correlation of unknown form is the Gençay (2010) variance ratio test, which converges to the normal distribution at a much faster, parametric rate. In this paper, we extend the Gençay (2010) test to the panel data case by using a Fisher-type test that combines the p-values from individual serial correlation tests based on Gençay (2010). This p-value combination strategy is inspired by Maddala and Wu (1999) and Choi (2001). Choi (2001) noted that combining p-values allows more general assumptions about the underlying panel models, such as stochastic or non-stochastic models, balanced or unbalanced data and homogeneous or heterogeneous alternatives. This generality matches the wide range of panel models allowed in Hong and Kao (2004), which makes our later comparisons possible. Choi (2001) also shows that the p-value combination test generally has better size and power than previous panel unit root tests. A potential shortcoming of this testing procedure is that it requires the cross-sectional units to be uncorrelated. This assumption is also imposed by the Hong and Kao test, and we therefore do not consider the case of cross-sectional dependence in this paper.
Our test procedure for serial correlation is straightforward. First, the errors in Eq. (1) are estimated for each individual $i$. Second, the estimated errors are transformed to the wavelet domain using the Haar-filter-based MODWT. Unlike the DWT, the MODWT imposes no restrictions on the sample size, whereas in the Hong and Kao test the sample size is restricted to a power of 2. The MODWT of the estimated errors yields two sets of transform coefficients, $W_i$ and $V_i$. For a discrete time series, which occupies the frequency band $[0, 1/2]$, $W_i$ represents the upper half of the frequency content of the errors (the band $[1/4, 1/2]$) and $V_i$ represents the lower half (the band $[0, 1/4]$). Third, if the errors are white noise, each half of the frequency band carries the same energy, so the wavelet-ratio statistic $G_i$, the share of the total energy of the transformed errors that falls in the lower frequency band, has expected value $1/2$. Gençay (2010) shows that the standardized statistic $S_i \xrightarrow{d} N(0,1)$ under $H_0$ and that it can be used to test for serial correlation of unknown order in a time series; he further shows that the test based on $S_i$ performs well in small samples. The last step of our procedure is to combine the individual wavelet-ratio tests with a Fisher-type test based on the p-values of the individual tests (Choi 2001). Choi (2001) defines three p-value-based test statistics; among them, the inverse normal statistic $Z = N^{-1/2}\sum_{i=1}^{N}\Phi^{-1}(p_i) \xrightarrow{d} N(0,1)$ under $H_0$, and Choi (2001) showed that, compared with the other two Fisher-type statistics $P$ and $L$, the inverse normal statistic $Z$ performs best and is recommended for empirical applications. In our test, the p-value entering the $Z$ statistic is defined as $p_i = 1 - \Gamma(S_i^2)$, where $\Gamma$ is the cumulative distribution function of the $\chi^2(1)$ distribution.
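The procedure can be sketched end-to-end in a few lines. One caveat: we do not reproduce Gençay's (2010) exact standardization of $G_i$. Instead we use the fact that, for a level-1 circular Haar MODWT without demeaning, $G_i - 1/2$ equals half the lag-1 sample autocorrelation of the series, so $S_i = 2\sqrt{T}\,(G_i - 1/2)$ is asymptotically $N(0,1)$ under white noise; this is an illustrative stand-in for Gençay's statistic, and all helper names below are ours. The sketch applies the test to raw error series rather than estimated regression residuals.

```python
import math
from statistics import NormalDist

import numpy as np

def wavelet_ratio_stat(resid):
    """Low-frequency energy share G under a level-1 circular Haar MODWT,
    and its standardized version S.

    For this filter, G - 1/2 is half the lag-1 sample autocorrelation, so
    S = 2*sqrt(T)*(G - 1/2) is asymptotically N(0, 1) under white noise
    (an illustrative standardization, not Gencay's (2010) exact one).
    """
    resid = np.asarray(resid, dtype=float)
    T = len(resid)
    v = (resid + np.roll(resid, 1)) / 2.0        # low-pass (scaling) coefficients
    g = np.sum(v**2) / np.sum(resid**2)          # E[G] = 1/2 under H0
    s = 2.0 * math.sqrt(T) * (g - 0.5)
    return g, s

def chi2_1_cdf(x):
    """CDF of the chi-square distribution with one degree of freedom."""
    return math.erf(math.sqrt(x / 2.0))

def panel_z_test(residual_list):
    """Choi (2001) inverse normal combination of the individual tests.

    Reject H0 (no serial correlation) for large negative Z.
    """
    inv_norm = NormalDist().inv_cdf
    p_values = []
    for r in residual_list:
        _, s = wavelet_ratio_stat(r)
        p = 1.0 - chi2_1_cdf(s**2)
        p = min(max(p, 1e-15), 1.0 - 1e-15)      # guard against underflow
        p_values.append(p)
    z = sum(inv_norm(p) for p in p_values) / math.sqrt(len(p_values))
    return z, p_values

def ar1(T, phi, rng):
    """Simulate an AR(1) series with standard normal innovations."""
    u = np.empty(T)
    u[0] = rng.standard_normal()
    for t in range(1, T):
        u[t] = phi * u[t - 1] + rng.standard_normal()
    return u

rng = np.random.default_rng(0)
N, T = 20, 100

# Under H0 (white-noise errors) Z behaves like a standard normal draw.
z0, p0 = panel_z_test([rng.standard_normal(T) for _ in range(N)])

# Under AR(1) errors with coefficient 0.5, Z is large and negative.
z1, p1 = panel_z_test([ar1(T, 0.5, rng) for _ in range(N)])
print(abs(z0) < 4, z1 < -2)
```

Because the combination needs only the individual p-values, unbalanced panels (different $T_i$ per unit) are handled with no extra work.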
The reason we use $S_i^2$ instead of $S_i$ directly to obtain the p-values in the panel framework is that the test based on $S_i$ is a two-sided normal test and would lose power severely when the panel contains both positive and negative correlations. Because $S_i^2 \xrightarrow{d} \chi^2(1)$, the test based on $S_i^2$ is one-sided and can be used against heterogeneous alternatives. Another advantage of using $S_i^2$ instead of $S_i$ is that the convergence rate becomes $O_p(T_i^{-2})$, leading to even better small-sample performance. This last advantage follows directly from the product rule for big-O notation (Lemma 1.18, p. 16, Dinneen et al. 2009): if $g_1 \in O_p(f_1)$ and $g_2 \in O_p(f_2)$, then $g_1 g_2 \in O_p(f_1 f_2)$.
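The heterogeneous-alternatives point is easy to demonstrate numerically: when half the units are positively and half negatively autocorrelated, the signed statistics $S_i$ cancel on average, while the squared statistics $S_i^2$ are all inflated. The sketch below uses the illustrative standardization $S_i = 2\sqrt{T}\,(G_i - 1/2)$, which is asymptotically $N(0,1)$ for a level-1 circular Haar MODWT under white noise (a stand-in for Gençay's (2010) exact statistic); helper names are ours.

```python
import math

import numpy as np

def s_stat(resid):
    """Standardized level-1 circular Haar wavelet ratio.

    Illustrative standardization: S = 2*sqrt(T)*(G - 1/2), where G is the
    low-frequency energy share; asymptotically N(0, 1) under white noise.
    """
    resid = np.asarray(resid, dtype=float)
    T = len(resid)
    v = (resid + np.roll(resid, 1)) / 2.0
    g = np.sum(v**2) / np.sum(resid**2)
    return 2.0 * math.sqrt(T) * (g - 0.5)

def ar1(T, phi, rng):
    """Simulate an AR(1) series with standard normal innovations."""
    u = np.empty(T)
    u[0] = rng.standard_normal()
    for t in range(1, T):
        u[t] = phi * u[t - 1] + rng.standard_normal()
    return u

rng = np.random.default_rng(1)
N, T = 20, 200

# Half the units are positively autocorrelated, half negatively.
phis = [0.5] * (N // 2) + [-0.5] * (N // 2)
s = np.array([s_stat(ar1(T, phi, rng)) for phi in phis])

# The signed statistics cancel across the two groups, so a combination
# that averages S_i sees (almost) nothing ...
signed_combo = np.sum(s) / math.sqrt(N)

# ... while the squared statistics are uniformly large, so the
# chi-square-based one-sided combination retains its power.
squared_mean = np.mean(s**2)
print(abs(signed_combo) < 4, squared_mean > 10)
```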

Simulation study
The small-sample properties of our proposed inverse normal test statistic $Z$ are compared with those of the Hong and Kao (2004) tests $\widehat W_1$ and $\widehat W_2$ in a simulation study. We also compare it with Hong's (1996) kernel-based tests $\widehat K_1$ and $\widehat K_2$, Baltagi and Li's (1995) Lagrange multiplier (LM) test BL and Bera et al.'s (2001) modified LM test BSY. We use the same data-generating process as Hong and Kao (2004) to allow direct comparison. To evaluate the size of the test, we consider both a static panel model (DGP$_1$) and a dynamic panel model (DGP$_2$): $Y_{it} = 5 + 0.5\,Y_{it-1} + \mu_i + u_{it}$. In both DGP$_1$ and DGP$_2$, the individual effect $\mu_i \sim$ i.i.d. $N(0, 0.4)$ and the error process $u_{it} \sim$ i.i.d. $N(0, 1)$. The error processes $\{u_{it}\}$ and $\{u_{ls}\}$ are mutually independent for $i \neq l$ and all $t, s$; thus, we assume independence of the cross-sectional units. For the power study, the error process $\{u_{it}\}$ follows either an AR(1) alternative or an ARMA(12,4) alternative. In our simulations, we employ the Haar wavelet to limit the number of boundary coefficients, which may negatively affect the size and power of the test. Although other wavelet filters, such as the Daubechies filters or the least asymmetric filters, come closer to an ideal band-pass filter, they also generate more boundary coefficients. These filters are therefore more appropriate for longer time dimensions than those considered in our simulations. See Percival and Walden (2000) for a detailed discussion of how to choose an appropriate wavelet filter.
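A stripped-down version of the size experiment can be run as follows. To keep the sketch self-contained we apply the test to the simulated i.i.d. innovations $u_{it}$ directly, rather than to estimated residuals from DGP$_1$ or DGP$_2$, so it checks only the calibration of the combined statistic against the $N(0,1)$ critical value; the standardization of the individual statistic is the illustrative $S_i = 2\sqrt{T}\,(G_i - 1/2)$, not Gençay's (2010) exact form.

```python
import math
from statistics import NormalDist

import numpy as np

def panel_z(residual_list):
    """Inverse normal combination of squared wavelet-ratio statistics.

    Individual standardization S = 2*sqrt(T)*(G - 1/2) (illustrative,
    not Gencay's exact form); p = 1 - chi2(1) CDF of S^2.
    """
    inv = NormalDist().inv_cdf
    zs = []
    for r in residual_list:
        T = len(r)
        v = (r + np.roll(r, 1)) / 2.0
        g = np.sum(v**2) / np.sum(r**2)
        s2 = (2.0 * math.sqrt(T) * (g - 0.5)) ** 2
        p = 1.0 - math.erf(math.sqrt(s2 / 2.0))       # 1 - chi2(1) CDF
        p = min(max(p, 1e-15), 1.0 - 1e-15)           # guard the extremes
        zs.append(inv(p))
    return sum(zs) / math.sqrt(len(zs))

rng = np.random.default_rng(7)
n, T, reps, level = 25, 64, 1000, 0.05
crit = NormalDist().inv_cdf(level)     # left-tail critical value from N(0, 1)

# Size under H0: i.i.d. N(0, 1) errors u_it, as in DGP1 and DGP2.
rejections = sum(
    panel_z([rng.standard_normal(T) for _ in range(n)]) < crit
    for _ in range(reps)
)
size = rejections / reps
print(size)    # close to the 5% nominal level
```

Nothing here is bootstrapped or simulated beyond the Monte Carlo replications themselves; the critical value comes straight from the standard normal distribution, which is the practical advantage over $\widehat W_1$ and $\widehat W_2$.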
For the size and power of the $Z$ test, we take the critical values at the 10% and 5% significance levels directly from the $N(0,1)$ distribution and set the number of replications to 1000, as in Hong and Kao (2004). We use $Z_1$ to denote our $Z$ test constructed from the static data-generating process DGP$_1$ and $Z_2$ to denote the $Z$ test from the dynamic data-generating process DGP$_2$. Table 1 reports the size performance of our test statistic $Z_1$, and Table 2 reports the size performance of $Z_2$. Table 1 is compared with Hong and Kao's (2004) Table I for the static model, and Table 2 is compared with Hong and Kao's (2004) Table II for the dynamic model.
In Tables 1 and 2, $\widehat K_1$, $\widehat K_2$, BL and BSY denote, respectively, the heteroscedasticity-consistent Daniell kernel-based test, the heteroscedasticity-corrected Daniell kernel-based test, the Baltagi–Li test (Baltagi and Li 1995) and the Bera–Sosa-Escudero–Yoon test (Bera et al. 2001). With 1000 replications, confidence intervals for an unbiased size at the 10% and 5% significance levels can be computed from the binomial distribution. Tables I and II in Hong and Kao (2004) show that when the asymptotic critical values are used, the size is seriously under-biased for $\widehat W_1$, $\widehat W_2$, $\widehat K_1$ and $\widehat K_2$; for BL, the size is seriously over-biased, and for BSY, the size is either under- or over-biased. Hong and Kao (2004) then use bootstrapped critical values to adjust the size, and an under- or over-bias problem remains for all of the tests. In contrast, our test uses critical values directly from the normal distribution, and the resulting sizes are mostly unbiased.
To evaluate the power of the tests, we let the error process follow the AR(1) process in Eq. (2) and the ARMA(12,4) process in Eq. (3); the results are reported in Tables 3 and 4, which correspond to Tables III and IV in Hong and Kao (2004). We use $Z_1$ and $Z_2$ to report separately the results for the static and dynamic cases, whereas Hong and Kao (2004) did not report results for the dynamic model.
In the power study for DGP$_1$, because of the poor small-sample performance of their tests, Hong and Kao (2004) use simulated empirical critical values and bootstrapped critical values for the power comparison. This is a demanding task because critical values must be simulated for every combination of $(n, T)$. In contrast, we can use the critical values from the standard normal distribution directly, making the testing procedure much more straightforward to conduct. For the AR(1) type of error, Table 3 shows that our test improves modestly on the other tests overall; it performs much better than almost all of the tests for the sample size (5, 8) and better than all of the tests in all three models for the sample size (50, 64). These results reflect the much faster convergence rate of our test statistic. For the sample sizes (10, 16) and (25, 32), the power of our test is more modest but still quite acceptable. Table 4 shows that for the ARMA(12, 4) type of error, the tests in Hong and Kao (2004) have almost no power when the sample size is (10, 16), whereas our test performs much better. For the sample sizes (25, 32) and (50, 64) in all three models, the only Hong and Kao (2004) test that compares well with ours is their $\widehat W_1(J_0)$ test. However, that test requires a computationally intensive, data-driven procedure for the choice of $J_0$, making an already complex test even more burdensome. Moreover, our test places no restrictions on $T$, whereas $\widehat W_1$ and $\widehat W_2$ require $T$ to be a power of 2.

Conclusion
Compared with the test statistics $\widehat W_1$ and $\widehat W_2$ in Hong and Kao (2004), our test based on the statistic $Z$ has a much simpler construction, a faster convergence rate and better small-sample performance, especially with respect to size. The faster convergence rate of $Z$ can be explained by two factors: first, the nonparametric spectral density estimation in $\widehat W_1$ and $\widehat W_2$ slows their convergence; second, the p-values in $Z$ are derived from $S_i^2$ instead of $S_i$, which improves the individual convergence rate from $O_p(T_i^{-1})$ to $O_p(T_i^{-2})$. Moreover, because it uses the inverse normal combination, our test can easily be extended to a test robust to cross-sectional dependence, either by using a modified inverse normal combination (Hartung 1999) when combining the p-values or by obtaining critical values for the Fisher-type test by a wavestrapping (bootstrapping) method. Generally speaking, simply by using the $N(0,1)$ distribution, we obtain unbiased size and power quite comparable to that of all previous tests. Another shortcoming of Hong and Kao (2004) is that their wavelet transform is based on the orthonormal DWT, which requires $T$ to be a power of 2. By using the MODWT, our test relaxes this restriction. Moreover, in small samples the MODWT yields a more accurate energy decomposition owing to its smoothing property. A shortcoming of both the Hong and Kao (2004) test and our test is the assumption of no cross-sectional dependence. We leave the case of cross-sectional dependence to future research, since the main aim of this paper is to develop a test that maintains the strengths of the Hong and Kao test, has a faster convergence rate, and does not require bootstrapped critical values when the cross-sectional units are independent.