On the eigenvalues associated with the limit null distribution of the Epps-Pulley test of normality

The Shapiro–Wilk test (SW) and the Anderson–Darling test (AD) have proved to be powerful procedures for testing for normality. They are joined by a class of tests for normality proposed by Epps and Pulley that, in contrast to SW and AD, have been extended by Baringhaus and Henze to yield easy-to-use affine invariant and universally consistent tests for normality in any dimension. The limit null distribution of the Epps–Pulley test involves a sequence of eigenvalues of a certain integral operator induced by the covariance kernel of a Gaussian process. We solve the associated integral equation and present the corresponding eigenvalues.


Introduction
Let X, X_1, X_2, ... be a sequence of independent and identically distributed (i.i.d.) random variables with unknown distribution. To test the hypothesis H_0 that the distribution of X is some unspecified normal distribution, there is a myriad of testing procedures, among which the tests of Shapiro–Wilk (SW) and Anderson–Darling (AD) deserve special mention; see, e.g., the monographs [7,22]. There is, however, a further test, proposed by Epps and Pulley ([10]). This test, which is based on the empirical characteristic function, is a serious competitor to SW and AD, as shown in simulation studies (see, e.g., [1,4]). Baringhaus and Henze ([2]) extended the approach of Epps and Pulley to test for normality in any dimension. By now, the BHEP test (an acronym coined by S. Csörgő [6] after earlier developers of the idea) is known to be an affine invariant and universally consistent test of normality in any dimension, and limit distributions of the test statistic have been obtained under H_0 as well as under fixed and contiguous alternatives to normality (see the review article [8]). In this paper, we revisit the limit null distribution of the Epps–Pulley test statistic in the univariate case. The test statistic involves a positive tuning parameter β and, based on X_1, ..., X_n, is denoted by T_{n,β}. It is given by

T_{n,β} = n ∫_{−∞}^{∞} |ψ_n(t) − exp(−t²/2)|² φ_β(t) dt,

where ψ_n(t) = n^{−1} Σ_{j=1}^n exp(itY_j) is the empirical characteristic function of the scaled residuals Y_j = (X_j − X̄_n)/S_n, and

X̄_n = n^{−1} Σ_{j=1}^n X_j,   S_n² = n^{−1} Σ_{j=1}^n (X_j − X̄_n)²

denote the sample mean and the sample variance of X_1, ..., X_n, respectively. Moreover, φ_β(t) = (β√(2π))^{−1} exp(−t²/(2β²)) is the density of the centred normal distribution with variance β². A closed-form expression of T_{n,β} that is amenable to computational purposes is

T_{n,β} = (1/n) Σ_{j,k=1}^n exp(−β²(Y_j − Y_k)²/2) − (2/√(1+β²)) Σ_{j=1}^n exp(−β²Y_j²/(2(1+β²))) + n/√(1+2β²).

The limit null distribution of T_{n,β}, as n → ∞, is that of

T_∞ = ∫_{−∞}^{∞} Z(t)² φ_β(t) dt.

Here, Z(·) is a centred Gaussian element of the Hilbert space L² = L²(R, B, φ_β(t)dt) of Borel measurable real-valued functions that are square-integrable with respect to φ_β(t)dt, and the covariance function of Z(·) is given by

K(s,t) = exp(−(s−t)²/2) − (1 + st + (st)²/2) exp(−(s²+t²)/2)          (1.1)

(see [12]). The kernel K is the starting point of this paper. Writing ∼ for equality in distribution, it is well known that

T_∞ ∼ Σ_{j=1}^∞ λ_j N_j²,

where λ_1, λ_2, ... is the sequence of nonzero eigenvalues associated with the integral operator

(Af)(s) = ∫_{−∞}^{∞} K(s,t) f(t) φ_β(t) dt,

and N_1, N_2, ... is a sequence of i.i.d. standard normal random variables. In the next section, we obtain the eigenvalues of A by numerical methods. In Section 3, sums of powers of the largest eigenvalues are compared to normalized cumulants; the difference should be close to 0 if the eigenvalues have been computed correctly. Section 4 demonstrates that the results can be applied to fit a Pearson system of distributions, and that the fit is adequate to approximate critical values of the Epps–Pulley test. The article ends with some concluding remarks.
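As a quick cross-check of the closed form, the following sketch (hypothetical Python with numpy/scipy; the paper's own computations use R) evaluates the standard BHEP closed-form expression of T_{n,β} for d = 1 and compares it with direct numerical integration of the defining weighted L² distance:

```python
import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(0)
n, beta = 20, 1.0
X = rng.normal(size=n)
Y = (X - X.mean()) / X.std()          # scaled residuals (sample s.d. with 1/n)

# Closed form of T_{n,beta} (standard BHEP expression for d = 1)
D = Y[:, None] - Y[None, :]
T_closed = (np.exp(-beta**2 * D**2 / 2).sum() / n
            - 2 / np.sqrt(1 + beta**2)
              * np.exp(-beta**2 * Y**2 / (2 * (1 + beta**2))).sum()
            + n / np.sqrt(1 + 2 * beta**2))

# Direct numerical integration of n * |psi_n(t) - exp(-t^2/2)|^2 phi_beta(t)
def integrand(t):
    psi_re = np.cos(t * Y).mean()
    psi_im = np.sin(t * Y).mean()
    diff2 = (psi_re - np.exp(-t**2 / 2))**2 + psi_im**2
    phi_beta = np.exp(-t**2 / (2 * beta**2)) / (beta * np.sqrt(2 * np.pi))
    return n * diff2 * phi_beta

T_quad, _ = quad(integrand, -np.inf, np.inf)
print(T_closed, T_quad)
```

The two values agree to quadrature accuracy, which confirms that the closed form is a faithful discretisation-free evaluation of the weighted L² statistic.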

Solution of a Fredholm integral equation
To obtain the values λ_1, λ_2, ... that determine the distribution of T_∞, one has to solve the integral equation

∫_{−∞}^{∞} K(s,t) f(t) φ_β(t) dt = λ f(s),   s ∈ R.

In general, this is considered a hard problem, and solutions for kernels associated with testing problems involving composite hypotheses are very sparse; see [19,20] for the classical tests of normality and exponentiality that are based on the empirical distribution function. In what follows, we use a result of [23] to obtain the eigenvalues of A by a stable numerical method.
To this end, let

K_0(s,t) = exp(−(s−t)²/2)

and notice that

K(s,t) = K_0(s,t) − (1 + st + (st)²/2) exp(−(s²+t²)/2).

The first step is to solve the eigenvalue problem for the covariance kernel K_0, which corresponds to the limit distribution of a modified statistic T^{(0)}_{n,β} that originates from T_{n,β} by replacing ψ_n(t) with n^{−1} Σ_{j=1}^n exp(itX_j); i.e., the problem is to test for standard normality, and thus no estimation of parameters is involved. The associated eigenvalue problem, which leads to the kernel K_0, was solved in the context of machine learning in Chapter 4 of [23].
Here, we use the formulation given in [17], Subsection 4.3.1. In our case, we have

λ_k^{(0)} = √(2a/A) B^k,   k = 0, 1, 2, ...,

where a = 1/(4β²), b = 1/2, c = √(a² + 2ab), A = a + b + c and B = b/A, with corresponding normalized eigenfunctions proportional to exp(−(c−a)x²) H_k(√(2c) x) (see also the errata to [17] on the book's homepage). Here, H_k(x) = (−1)^k e^{x²} (d^k/dx^k) e^{−x²} is the kth order Hermite polynomial. To solve the eigenvalue problem of A figuring in (1.1), we adapt the methodology in [19] and define auxiliary coefficients a_{j,1}, a_{j,2}, a_{j,3}, j = 0, 1, 2, ..., together with functions S_1(λ), S_2(λ), S_3(λ) and S_{1,3}(λ). With this notation, we can formulate our main result.
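To make the spectrum of K_0 concrete, the following sketch (hypothetical Python; the parametrization a = 1/(4β²), b = 1/2, c = √(a² + 2ab), A = a + b + c, B = b/A is the one from [17], Subsection 4.3.1, specialised to our weight N(0, β²) and unit length-scale) compares the closed-form eigenvalues λ_k^{(0)} = √(2a/A) B^k with a Gauss–Hermite Nyström discretisation of the operator with kernel K_0:

```python
import numpy as np

beta = 1.0
a, b = 1 / (4 * beta**2), 0.5
c = np.sqrt(a**2 + 2 * a * b)
A, B = a + b + c, b / (a + b + c)
lam_formula = np.sqrt(2 * a / A) * B**np.arange(10)    # lambda_k^(0), k = 0..9

# Nystroem check: discretise the operator with kernel K_0 on Gauss-Hermite nodes
m = 80
x, w = np.polynomial.hermite.hermgauss(m)
t = np.sqrt(2) * beta * x                  # nodes transformed for the N(0, beta^2) weight
omega = w / np.sqrt(np.pi)                 # transformed quadrature weights (sum to 1)
K0 = np.exp(-(t[:, None] - t[None, :])**2 / 2)
Bmat = np.sqrt(np.outer(omega, omega)) * K0           # symmetrised Nystroem matrix
lam_nystroem = np.sort(np.linalg.eigvalsh(Bmat))[::-1][:10]
print(np.max(np.abs(lam_formula - lam_nystroem)))
```

A built-in sanity check: since K_0(t,t) = 1, the trace of the discretised operator equals Σ_k λ_k^{(0)} = 1, matching the geometric series √(2a/A)/(1 − B).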
Theorem 2.1. The eigenvalues of A are the reciprocals of the solutions λ > 0 of the equation

S_2(λ) (S_1(λ) S_3(λ) − S²_{1,3}(λ)) = 0,

where D_0(λ) = Π_{k=0}^∞ (1 − λ λ_k^{(0)}) is the Fredholm determinant connected to the eigenvalue problem of K_0. Moreover, none of the reciprocals of the eigenvalues λ_k^{(0)} is an eigenvalue of A.

Proof. Since a_{j,1} a_{j,2} = a_{j,2} a_{j,3} = 0 holds for all j = 0, 1, 2, ..., we use Theorem 2.2 of [21] to see that the Fredholm determinant D(λ) for the eigenvalue problem takes the stated product form. Hence, the reciprocals of the roots of D(λ) are the eigenvalues of A. By direct calculation, it follows that a_{j,2} = 0 if j is even, and a_{j,1} = a_{j,3} = 0 if j is odd. Consequently, none of the reciprocals of the eigenvalues λ_k^{(0)} is a root of D(λ) and thus a solution of the eigenvalue problem associated with the kernel K.

According to Theorem 2.1, the eigenvalues of A are the reciprocals of the roots of S_2(λ) and of S_1(λ)S_3(λ) − S²_{1,3}(λ). These roots have been obtained numerically, and the first twenty eigenvalues are displayed in Table 1 for different values of β. Note that, since the eigenvalues tend to be very small, the reciprocal approach used here leads to numerically stable procedures for finding the roots of the Fredholm determinant.
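An independent numerical route to the same eigenvalues, useful as a cross-check on Table 1, is a direct Nyström discretisation with Gauss–Hermite quadrature. The sketch below (hypothetical Python, not the Fredholm-determinant method of the theorem; the kernel is written out under the assumption that it is the d = 1 specialisation of Theorem 2.1 of [12]) also verifies that the eigenvalue sum matches the trace integral of K:

```python
import numpy as np
from scipy.integrate import quad

beta = 1.0

def K(s, t):
    # assumed covariance kernel of the limit process (d = 1 case of [12], Thm 2.1)
    return (np.exp(-(s - t)**2 / 2)
            - (1 + s * t + (s * t)**2 / 2) * np.exp(-(s**2 + t**2) / 2))

m = 100
x, w = np.polynomial.hermite.hermgauss(m)
t = np.sqrt(2) * beta * x                    # nodes for the N(0, beta^2) weight
omega = w / np.sqrt(np.pi)                   # transformed quadrature weights
Bmat = np.sqrt(np.outer(omega, omega)) * K(t[:, None], t[None, :])
lam = np.sort(np.linalg.eigvalsh(Bmat))[::-1]
print(lam[:5])                               # leading eigenvalues of A

# Sanity check: sum of eigenvalues equals the trace integral of K
trace, _ = quad(lambda u: K(u, u) * np.exp(-u**2 / (2 * beta**2))
                          / (beta * np.sqrt(2 * np.pi)), -np.inf, np.inf)
print(lam.sum(), trace)
```

Because the weight is Gaussian, the Gauss–Hermite nodes are the natural discretisation points, and the symmetrised matrix keeps the problem in the well-conditioned realm of symmetric eigensolvers.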

Accuracy of the numerical solutions
The accuracy of the values presented in Table 1 may be judged by a comparison with results of [11]. That paper gives the first four cumulants of the distribution of T_∞ in the special case β = 1. Writing K_1(x,y) := K(x,y) and

K_m(x,y) = ∫_{−∞}^{∞} K_{m−1}(x,t) K(t,y) φ_β(t) dt

for m ≥ 2 for the iterated kernels, the mth cumulant of T_∞ is

κ_m(T_∞) = 2^{m−1}(m−1)! ∫_{−∞}^{∞} K_m(t,t) φ_β(t) dt = 2^{m−1}(m−1)! Σ_{j=1}^∞ λ_j^m

(see, e.g., Chapter 5 of [18]). The results of [11] have been partially generalized in [12], Theorem 2.3, for the first three cumulants and a fixed tuning parameter β, and they thus lead to general formulae in the univariate case. For the first cumulant, we have

κ_1(T_∞) = 1 − (1+2β²)^{−1/2} (1 + β²/(1+2β²) + (3/2) β⁴/(1+2β²)²);

the lengthier formula for the second cumulant is given in [12], and the formula for the third cumulant is found in [12], Theorem 2.3, for the case d = 1. Table 2 exhibits the normalized cumulants, together with the corresponding sums of powers of the first 20 eigenvalues taken from Table 1. We stress that, by now, no formula for the fourth cumulant is known in the literature for general tuning parameter β.
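The cumulant–eigenvalue relation κ_m = 2^{m−1}(m−1)! Σ_j λ_j^m lends itself to a numerical illustration. The sketch below (hypothetical Python; the eigenvalues come from a Nyström discretisation rather than from Table 1, and the kernel is the assumed d = 1 specialisation of [12], Theorem 2.1) computes the first three cumulants from the spectrum and checks κ_2 against the independent double-integral identity Σ_j λ_j² = ∫∫ K(s,t)² φ_β(s)φ_β(t) ds dt:

```python
import numpy as np
from scipy.integrate import dblquad
from math import factorial, sqrt, pi

beta = 1.0

def K(s, t):
    # assumed covariance kernel (d = 1 case of [12], Thm 2.1)
    return (np.exp(-(s - t)**2 / 2)
            - (1 + s * t + (s * t)**2 / 2) * np.exp(-(s**2 + t**2) / 2))

def phi(u):
    return np.exp(-u**2 / (2 * beta**2)) / (beta * sqrt(2 * pi))

# Nystroem eigenvalues on Gauss-Hermite nodes
m = 100
x, w = np.polynomial.hermite.hermgauss(m)
t = sqrt(2) * beta * x
omega = w / sqrt(pi)
Bmat = np.sqrt(np.outer(omega, omega)) * K(t[:, None], t[None, :])
lam = np.sort(np.linalg.eigvalsh(Bmat))[::-1]

# kappa_m = 2^(m-1) (m-1)! * sum_j lambda_j^m, here for m = 1, 2, 3
kappa = [2**(r - 1) * factorial(r - 1) * float(np.sum(lam**r)) for r in (1, 2, 3)]
print(kappa)

# Independent check: kappa_2 = 2 * double integral of K(s,t)^2 phi(s) phi(t)
I2, _ = dblquad(lambda s, u: K(s, u)**2 * phi(s) * phi(u),
                -np.inf, np.inf, -np.inf, np.inf)
print(2 * I2)
```

Agreement of the two values for κ_2 mirrors the comparison carried out in Table 2 between eigenvalue power sums and theoretical cumulants.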

Pearson system fit for approximation of critical values
The first four cumulants can directly be used in packages that implement the Pearson system of distributions (see Section 4.1 of [14]). In the statistical computing language R (see [16]), we use the package PearsonDS (see [3]) to approximate critical values of the Epps–Pulley test statistic. The Epps–Pulley test itself is implemented in the R package mnt (see [5]) via the function BHEP. Each entry in a row named '∞' is the calculated (1 − α)-quantile of the fitted Pearson system using the cumulants given in Table 2. We conclude that, for larger sample sizes, the simulated critical values are close to the approximated counterparts of the Pearson system. Moreover, we have corroborated the results of [11] for the special case β = 1, and we have extended these results to general β > 0.
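An alternative to the Pearson-system route is to approximate critical values of T_∞ directly from the truncated spectral representation Σ_{j≤J} λ_j N_j². The sketch below (hypothetical Python; eigenvalues from a Nyström discretisation of the assumed d = 1 kernel of [12], Thm 2.1, truncated at J = 20 terms, not the paper's Table 1 values) simulates T_∞ and reads off empirical quantiles:

```python
import numpy as np

beta = 1.0

def K(s, t):
    # assumed covariance kernel (d = 1 case of [12], Thm 2.1)
    return (np.exp(-(s - t)**2 / 2)
            - (1 + s * t + (s * t)**2 / 2) * np.exp(-(s**2 + t**2) / 2))

# Nystroem eigenvalues, truncated to the 20 largest (as in Table 1)
m = 100
x, w = np.polynomial.hermite.hermgauss(m)
t = np.sqrt(2) * beta * x
omega = w / np.sqrt(np.pi)
Bmat = np.sqrt(np.outer(omega, omega)) * K(t[:, None], t[None, :])
lam = np.sort(np.linalg.eigvalsh(Bmat))[::-1][:20]

# Monte Carlo draws of the truncated series sum_j lam_j * N_j^2
rng = np.random.default_rng(1)
reps = 10**5
T_inf = (rng.standard_normal((reps, lam.size))**2) @ lam
q = np.quantile(T_inf, [0.90, 0.95, 0.99])   # approximate critical values
print(q)
```

Since E(T_∞) = Σ_j λ_j, the simulated mean provides a cheap internal consistency check on the truncation and the Monte Carlo sample size.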

Conclusions
We have solved the eigenvalue problem of the integral operator associated with the covariance kernel K of the limiting Gaussian process that occurs in the limit null distribution of the Epps–Pulley test statistic. In view of a comparison with the first three known cumulants from the literature, Table 2 shows that the eigenvalues obtained by numerical methods are very close to the corresponding theoretical values. In Section 5 of [9], the authors present a Monte Carlo based approximation method to find stochastic approximations of the eigenvalues. A comparison of Table 1 with Table 1 of [9] reveals significant differences for some values of β. This observation is of particular interest, since the largest eigenvalue is used in the derivation of approximate Bahadur efficiencies. Recent results concerning this topic for the Epps–Pulley test are presented in [9] and, for other normality tests based on the empirical distribution function, in [15].
We finally point out the difficulties encountered if one tries to generalize our findings to the multivariate case, i.e., to obtain the eigenvalues associated with the limit null distribution of the BHEP test of multivariate normality; see [2,12,13]. The first step is to derive explicit expressions for the eigenvalues w.r.t. the kernel K_0(s,t) = exp(−‖s − t‖²/2), s, t ∈ R^d. The second step is to find the corresponding multivariate representation of (2.1), which seems to be non-standard, since the quadratic summand (s⊤t)² in (5.1) does not factorize easily. Both problems have to be solved in order to successfully apply the method presented in Section 2.
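Although the analytic route does not extend easily, the purely numerical route does: the d-variate kernel can be discretised on a tensor Gauss–Hermite grid. The following hypothetical Python sketch does this for d = 2 and β = 1, using the d-variate kernel from Theorem 2.1 of [12]; the eigenvalue sum is checked against the trace integral, which a direct calculation gives as 8/27 in this case:

```python
import numpy as np

beta, m = 1.0, 20                                   # 20 Gauss-Hermite nodes per coordinate
x, w = np.polynomial.hermite.hermgauss(m)
t1 = np.sqrt(2) * beta * x
om1 = w / np.sqrt(np.pi)

# Tensor grid on R^2 with product weights for phi_beta (x) phi_beta
T = np.array([(u, v) for u in t1 for v in t1])      # (m^2, 2) grid points
W = np.array([wu * wv for wu in om1 for wv in om1]) # product quadrature weights

S = T @ T.T                                         # inner products s^T t
sq = np.sum(T**2, axis=1)
D2 = sq[:, None] + sq[None, :] - 2 * S              # squared distances ||s - t||^2
Kmat = (np.exp(-D2 / 2)
        - (1 + S + S**2 / 2) * np.exp(-(sq[:, None] + sq[None, :]) / 2))

Bmat = np.sqrt(np.outer(W, W)) * Kmat               # symmetrised Nystroem matrix
lam2d = np.sort(np.linalg.eigvalsh(Bmat))[::-1]
print(lam2d[:5], lam2d.sum())                       # eigenvalue sum ~ 8/27 for beta = 1
```

The grid has m² points, so the cost grows exponentially in d; this brute-force tensor approach is only a stopgap until the two analytic steps described above are resolved.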

From Table 2, we see that the sums of the first 20 numerical eigenvalues, of their squares, and of their cubes agree with the values figuring in (3.1) and (3.2), respectively, up to five significant digits in most cases.

Table 1: Eigenvalues of A for different tuning parameters β; here, E−j stands, as usual, for 10^{−j}.

Table 2: Sums over different powers of the first 20 eigenvalues and the corresponding theoretical cumulants for different values of β. The entry denoted by * could not be computed due to numerical instabilities.

Table 3: Empirical critical values (simulated with 10^6 replications) and approximated critical values by the Pearson system for different levels of significance α. Table 3 shows empirical critical values of the Epps–Pulley statistic for sample sizes n ∈ {10, 25, 50, 100, 200} and levels of significance α ∈ {0.1, 0.05, 0.01}. For each combination of n and β, the entries corresponding to different values of α are based on 10^6 replications under the null hypothesis. Each entry in a row named '∞' is the calculated (1 − α)-quantile of the fitted Pearson system using the cumulants given in Table 2.

The d-variate analog to the covariance kernel K in (1.1) is given in Theorem 2.1 of [12]: writing ‖·‖ for the Euclidean norm and ⊤ for the transpose, it reads

K(s,t) = exp(−‖s − t‖²/2) − (1 + s⊤t + (s⊤t)²/2) exp(−(‖s‖² + ‖t‖²)/2),   s, t ∈ R^d.          (5.1)