Abstract
The Shapiro–Wilk test (SW) and the Anderson–Darling test (AD) have proved to be powerful procedures for testing normality. A strong competitor is the class of tests proposed by Epps and Pulley which, in contrast to SW and AD, has been extended by Baringhaus and Henze to yield easy-to-use affine invariant and universally consistent tests for normality in any dimension. The limit null distribution of the Epps–Pulley test involves a sequence of eigenvalues of a certain integral operator induced by the covariance kernel of a Gaussian process. We solve the associated integral equation and present the corresponding eigenvalues.
1 Introduction
Let \(X,X_1, X_2, \ldots \) be a sequence of independent and identically distributed (i.i.d.) random variables with unknown distribution. To test the hypothesis \(H_0\) that the distribution of X is some unspecified normal distribution, there is a myriad of testing procedures, among which the tests of Shapiro–Wilk (SW) and Anderson–Darling (AD) deserve special mention; see, e.g., the monographs of D’Agostino and Stephens (1986) and Thode (2002). There is, however, a further test, which was proposed by Epps and Pulley (1983). This test, which is based on the empirical characteristic function, is a serious competitor to SW and AD, as shown in simulation studies (see, e.g., Baringhaus et al. 1989; Betsch and Ebner 2020). Baringhaus and Henze (1988) extended the approach of Epps and Pulley to test for normality in any dimension. By now, the BHEP test (an acronym coined by Csörgö (1989) after earlier developers of the idea) is known to be an affine invariant and universally consistent test of normality in any dimension, and limit distributions of the test statistic have been obtained under \(H_0\) as well as under fixed and contiguous alternatives to normality (see the review article Ebner and Henze 2020). In this paper, we revisit the limit null distribution of the Epps–Pulley test statistic in the univariate case. The test statistic involves a positive tuning parameter \(\beta \) and, based on \(X_1,\ldots ,X_n\), is denoted by \(T_{n,\beta }\). It is given by

$$\begin{aligned} T_{n,\beta } = n \int _{-\infty }^{\infty } \big | \psi _n(t) - \mathrm{e}^{-t^2/2} \big |^2 \, \varphi _\beta (t)\, \mathrm{d}t, \end{aligned}$$
where \(\psi _n(t) = n^{-1}\sum _{j=1}^n \exp \big (\mathrm{i}tY_{n,j}\big )\) is the empirical characteristic function of the scaled residuals \(Y_{n,1}, \ldots , Y_{n,n}\). Here, \(Y_{n,j} = S_n^{-1} (X_j - {\overline{X}}_n)\), \(j=1,\ldots ,n\), and \({\overline{X}}_n = n^{-1} \sum _{j=1}^n X_j\), \(S_n^2 = n^{-1}\sum _{j=1}^n (X_j- {\overline{X}}_n)^2\) denote the sample mean and the sample variance of \(X_1,\ldots ,X_n\), respectively. Moreover,

$$\begin{aligned} \varphi _\beta (t) = \frac{1}{\beta \sqrt{2\pi }} \exp \left( -\frac{t^2}{2\beta ^2}\right) , \quad t \in {\mathbb {R}}, \end{aligned}$$
is the density of the centred normal distribution with variance \(\beta ^2\). A closed-form expression of \(T_{n,\beta }\) that is amenable to computational purposes is

$$\begin{aligned} T_{n,\beta } = \frac{1}{n} \sum _{j,k=1}^n \exp \left( -\frac{\beta ^2 (Y_{n,j}-Y_{n,k})^2}{2}\right) - \frac{2}{\sqrt{1+\beta ^2}} \sum _{j=1}^n \exp \left( -\frac{\beta ^2 Y_{n,j}^2}{2(1+\beta ^2)}\right) + \frac{n}{\sqrt{1+2\beta ^2}}. \end{aligned}$$
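For illustration, the statistic is straightforward to compute: expanding the weighted \(\text {L}^2\) integral termwise yields a double sum over the scaled residuals. The following Python sketch (function name and the use of NumPy are our own; this is a plausibility implementation, not the authors' code) computes \(T_{n,\beta }\) from a sample:

```python
import numpy as np

def epps_pulley(x, beta=1.0):
    """Sketch of the Epps-Pulley statistic T_{n,beta}, obtained by
    expanding the weighted L^2 distance between the empirical
    characteristic function of the scaled residuals and exp(-t^2/2)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    # scaled residuals Y_{n,j} = (X_j - mean) / S_n, with S_n^2 using 1/n
    y = (x - x.mean()) / x.std()
    diff = np.subtract.outer(y, y)
    term1 = np.exp(-0.5 * beta**2 * diff**2).sum() / n
    term2 = (2.0 / np.sqrt(1.0 + beta**2)) * np.exp(
        -0.5 * beta**2 * y**2 / (1.0 + beta**2)).sum()
    term3 = n / np.sqrt(1.0 + 2.0 * beta**2)
    return term1 - term2 + term3
```

Since the statistic is a squared weighted distance, it is nonnegative, and standardizing by \({\overline{X}}_n\) and \(S_n\) makes it invariant under affine transformations of the data.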
The limit null distribution of \(T_{n,\beta }\), as \(n \rightarrow \infty \), is that of

$$\begin{aligned} T_\infty = \int _{-\infty }^{\infty } Z^2(t) \, \varphi _\beta (t)\, \mathrm{d}t. \end{aligned}$$
Here, \(Z(\cdot )\) is a centred Gaussian element of the Hilbert space \(\text {L}^2 = \text {L}^2({\mathbb {R}},\mathcal{B},\varphi _\beta (t)\text {d}t)\) (of equivalence classes) of Borel-measurable real-valued functions that are square-integrable with respect to \(\varphi _\beta (t)\text {d}t\), and the covariance function of \(Z(\cdot )\) is given by

$$\begin{aligned} K(s,t) = \mathrm{e}^{-(s-t)^2/2} - \left( 1 + st + \frac{(st)^2}{2}\right) \mathrm{e}^{-(s^2+t^2)/2}, \quad s,t \in {\mathbb {R}} \end{aligned}$$

(1.1)
(see Henze and Wagner 1997). The kernel K is the starting point of this paper. Writing \(\sim \) for equality in distribution, it is well known that

$$\begin{aligned} T_\infty \sim \sum _{k=0}^{\infty } \lambda _k N_k^2, \end{aligned}$$
where \(\lambda _0, \lambda _1, \ldots \) is the sequence of nonzero eigenvalues associated with the integral operator \({\mathbb {A}}: \text {L}^2 \rightarrow \text {L}^2\) defined by

$$\begin{aligned} ({\mathbb {A}}f)(s) = \int _{-\infty }^{\infty } K(s,t) f(t) \varphi _\beta (t)\, \mathrm{d}t, \quad s \in {\mathbb {R}}, \end{aligned}$$
and \(N_0, N_1, \ldots \) is a sequence of i.i.d. standard normal random variables. In the next section, we obtain the eigenvalues of \({\mathbb {A}}\) by numerical methods. In Sect. 3, the sums of powers of the largest eigenvalues are compared to normalized cumulants; the differences should be close to 0 if the eigenvalues have been computed correctly. Section 4 demonstrates that the results can be applied to fit a Pearson system of distributions, and that the fit yields reasonable approximations to the critical values of the Epps–Pulley test. The article ends with some concluding remarks. Finally, Appendix A extends the results to the cases in which no parameters, only the mean, or only the variance are estimated.
2 Solution of a Fredholm integral equation
To obtain the values \(\lambda _0,\lambda _1, \lambda _2, \ldots \) that determine the distribution of \(T_\infty \), one has to solve the integral equation

$$\begin{aligned} \int _{-\infty }^{\infty } K(s,t) f(t) \varphi _\beta (t)\, \mathrm{d}t = \lambda f(s), \quad s \in {\mathbb {R}}. \end{aligned}$$
In general, this task is considered a hard problem, and explicit solutions for kernels associated with testing problems involving composite hypotheses are scarce; see Stephens (1976, 1977) for the classical tests of normality and exponentiality that are based on the empirical distribution function. In what follows, we use a result of Zhu et al. (1997) to obtain the eigenvalues of \({\mathbb {A}}\) by a stable numerical method. To this end, let

$$\begin{aligned} K_0(s,t) = \mathrm{e}^{-(s-t)^2/2}, \quad s,t \in {\mathbb {R}}, \end{aligned}$$

and

$$\begin{aligned} \phi _1(t) = \frac{t^2}{\sqrt{2}}\, \mathrm{e}^{-t^2/2}, \quad \phi _2(t) = t\, \mathrm{e}^{-t^2/2}, \quad \phi _3(t) = \mathrm{e}^{-t^2/2}, \quad t \in {\mathbb {R}}. \end{aligned}$$

Notice that

$$\begin{aligned} K(s,t) = K_0(s,t) - \sum _{j=1}^{3} \phi _j(s) \phi _j(t), \quad s,t \in {\mathbb {R}}. \end{aligned}$$
The first step is to solve the eigenvalue problem for the covariance kernel \(K_0\). This eigenvalue problem was solved in the context of machine learning in Chapter 4 of Zhu et al. (1997). Here, we use the formulation given in Rasmussen and Williams (2008), Sect. 4.3.1. The eigenvalues of \(K_0\) are given by

$$\begin{aligned} \lambda _k^{(0)} = \sqrt{\frac{2}{1+2\beta ^2+\sqrt{1+4\beta ^2}}} \left( \frac{2\beta ^2}{1+2\beta ^2+\sqrt{1+4\beta ^2}}\right) ^k, \quad k = 0,1,2,\ldots , \end{aligned}$$

with corresponding normalized eigenfunctions

$$\begin{aligned} e_k(x) = h_k \exp \left( -\frac{\sqrt{1+4\beta ^2}-1}{4\beta ^2}\, x^2\right) H_k\left( \frac{(1+4\beta ^2)^{1/4}}{\sqrt{2}\,\beta }\, x\right) , \quad k = 0,1,2,\ldots \end{aligned}$$

(see also the errata to Rasmussen and Williams (2008) on the book’s homepage). Here, \(h_k^{-2} = (4\beta ^2+1)^{-1/4} 2^k k!\), and \(H_k(x)=(-1)^k\exp (x^2)\frac{\text{ d}^k}{\text{ d }x^k}\exp (-x^2)\) is the kth order Hermite polynomial.
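A quick sanity check on this spectrum: since \(K_0(t,t) \equiv 1\), the eigenvalues must sum to \(\int _{-\infty }^{\infty } K_0(t,t) \varphi _\beta (t)\,\mathrm{d}t = 1\). The Python sketch below implements the geometric sequence (our transcription of the Rasmussen–Williams formula with lengthscale 1 and measure variance \(\beta ^2\), hence to be read as an assumption-laden check) and, for \(\beta =1\), recovers the golden-ratio eigenvalues of Baringhaus (1996):

```python
import math

def k0_eigenvalues(beta, m=100):
    """First m eigenvalues lambda_k^(0) of K_0(s,t) = exp(-(s-t)^2/2)
    on L^2(R, phi_beta(t) dt): a geometric sequence, per the standard
    squared-exponential / Gaussian-measure spectrum (our transcription)."""
    root = math.sqrt(1.0 + 4.0 * beta**2)
    denom = 1.0 + 2.0 * beta**2 + root
    lam0 = math.sqrt(2.0 / denom)    # largest eigenvalue
    ratio = 2.0 * beta**2 / denom    # geometric decay factor, in (0, 1)
    return [lam0 * ratio**k for k in range(m)]
```

For \(\beta =1\) the largest eigenvalue is \((\sqrt{5}-1)/2\), the reciprocal of the golden ratio, and the trace identity \(\sum _k \lambda _k^{(0)} = 1\) holds for every \(\beta > 0\).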
Remark 2.1
Note that for the special case \(\beta =1\), the eigenvalues \(\lambda _k^{(0)}\) coincide with the formula given in (6) of Baringhaus (1996). That article analyzes the limit distribution of a modified statistic \(T_{n,\beta }^{(0)}\), which originates from \(T_{n,\beta }\) by replacing \(\psi _n(t)\) with \(\psi _n^{(0)}(t) = n^{-1}\sum _{j=1}^n \exp \big (\mathrm{i}tX_j\big )\); i.e., the problem is to test for standard normality, and thus no estimation of parameters is involved, cf. Appendix A. The corresponding covariance kernel is \(K_1(s,t)= K_0(s,t)-\phi _3(s)\phi _3(t)\), \(s,t\in {\mathbb {R}}\), and explicit formulae for the eigenvalues and eigenfunctions are given in Baringhaus (1996), p. 3878.
To solve the eigenvalue problem of \({\mathbb {A}}\) figuring in (1.1), we adapt the methodology in Stephens (1976). Define
With this notation, we can formulate our main result.
Theorem 2.2
The eigenvalues of \({\mathbb {A}}\) are the reciprocals of the solutions \(\lambda >0\) of the equation
where \(d(\lambda )=\prod _{k=0}^\infty \left( 1/\lambda -\lambda _k^{(0)}\right) \) is the Fredholm determinant connected to the eigenvalue problem of \(K_0\). Moreover, none of the reciprocals of the eigenvalues \(\lambda _k^{(0)}\) of \(K_0\) solve equation (2.2).
Proof
Since \(a_{j,1}a_{j,2}=a_{j,2}a_{j,3}=0\) holds for all \(j=0,1,2,\ldots \), we use Theorem 2.2 of Sukhatme (1972) to see that the Fredholm determinant for the eigenvalue problem takes the form
Hence, the reciprocals of the roots of \(D(\lambda )\) are the eigenvalues of \({\mathbb {A}}\). By direct calculation, it follows that \(a_{j,2}=0\) if j is even, and we have \(a_{j,1}=a_{j,3}=0\) if j is odd. Consequently, none of the reciprocals of the eigenvalues \(\lambda _k^{(0)}\) is a root of \(D(\lambda )\) and thus a solution of the eigenvalue problem associated with the kernel K. \(\square \)
According to Theorem 2.2, the eigenvalues of \({\mathbb {A}}\) are the reciprocals of the roots of \(S_2(\lambda )\) and of \(S_{1}(\lambda )S_3(\lambda )-S_{1,3}^2(\lambda )\). These reciprocals have been obtained numerically, and the twenty largest eigenvalues are displayed in Table 1 for different values of \(\beta \). Note that, since these values tend to be very small, the reciprocal approach used here leads to a numerically stable procedure for finding the roots of the Fredholm determinant.
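An independent cross-check on tabulated eigenvalues, separate from the Fredholm-determinant method above, is a simple Nyström discretization: replace the weighted integral by Gauss–Hermite quadrature and diagonalize the symmetrized kernel matrix. The Python sketch below assumes the univariate covariance kernel \(K\) of Henze and Wagner (1997); the computed eigenvalues must sum (approximately) to the trace \(\int K(t,t)\varphi _\beta (t)\,\mathrm{d}t\), which for \(\beta =1\) equals \(1-\sqrt{3}/2\):

```python
import numpy as np

def bhep_kernel(s, t):
    # univariate limit covariance kernel K(s, t) (Henze and Wagner 1997)
    return (np.exp(-0.5 * (s - t)**2)
            - (1.0 + s * t + 0.5 * (s * t)**2) * np.exp(-0.5 * (s**2 + t**2)))

def nystroem_eigenvalues(beta, m=60):
    """Approximate the eigenvalues of the operator A by a Nystroem scheme:
    Gauss-Hermite quadrature for the weight phi_beta, then a symmetric
    eigenvalue problem for sqrt(w_i w_j) K(t_i, t_j)."""
    x, w = np.polynomial.hermite.hermgauss(m)   # nodes/weights for e^{-x^2}
    t = np.sqrt(2.0) * beta * x                 # map nodes to the N(0, beta^2) weight
    omega = w / np.sqrt(np.pi)                  # normalized weights (sum to 1)
    S, T = np.meshgrid(t, t, indexing="ij")
    M = np.sqrt(np.outer(omega, omega)) * bhep_kernel(S, T)
    return np.sort(np.linalg.eigvalsh(M))[::-1]
```

Because \(K\) is a covariance kernel, the matrix is positive semidefinite up to rounding, and the leading entries of the returned array approximate \(\lambda _0> \lambda _1 > \cdots \).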
3 Accuracy of the numerical solutions
The accuracy of the values presented in Table 1 may be judged by a comparison with results of Henze (1990). That paper gives the first four cumulants of the distribution of \(T_\infty \) in the special case \(\beta =1\). The m-th cumulant of \(T_\infty \) is

$$\begin{aligned} \kappa _m = 2^{m-1}(m-1)! \int _{-\infty }^{\infty } K_m(x,x) \varphi _\beta (x)\, \mathrm{d}x, \end{aligned}$$
where \(K_1(x,y) {:}{=} K(x,y)\) and

$$\begin{aligned} K_m(x,y) = \int _{-\infty }^{\infty } K_{m-1}(x,s) K(s,y) \varphi _\beta (s)\, \mathrm{d}s \end{aligned}$$
for \(m \ge 2\) (see e.g., Chapter 5 of Shorack and Wellner (1986)). We have
and thus
Furthermore,
and thus
From Table 2, we see that the corresponding sums of the first 20 numerical values of the eigenvalues (as well as of their squares and cubes) agree approximately with the values figuring in (3.1) and (3.2), respectively, in most cases up to five significant digits.
The results of Henze (1990) have been partially generalized in Henze and Wagner (1997), Theorem 2.3, to the first three cumulants and a fixed tuning parameter \(\beta \); they thus lead to general formulae in the univariate case. For the sake of completeness, we restate the formulae for the first two cumulants here. For the first cumulant, we have

$$\begin{aligned} \kappa _1 = 1 - \frac{1}{\sqrt{1+2\beta ^2}} \left( 1 + \frac{\beta ^2}{1+2\beta ^2} + \frac{3\beta ^4}{2(1+2\beta ^2)^2}\right) , \end{aligned}$$
and the second cumulant is
The formula for the third cumulant is found in Henze and Wagner (1997), Theorem 2.3, for the case \(d=1\). Table 2 exhibits the normalized cumulants, together with the corresponding sums of the first 20 eigenvalues taken from Table 1. We stress that, to date, no formula for the fourth cumulant is known in the literature for a general tuning parameter \(\beta \).
4 Pearson system fit for approximation of critical values
The first four cumulants can be used directly in packages that implement the Pearson system of distributions (see Sect. 4.1 of Johnson et al. (1994)). In the statistical computing language R (see R Core Team 2021), we use the package PearsonDS (see Becker and Klößner 2017) to approximate critical values of the Epps–Pulley test statistic. The Epps–Pulley test is implemented in the R package mnt (see Butsch and Ebner 2020) via the function BHEP. Table 3 shows simulated empirical critical values of the Epps–Pulley statistic for sample sizes \(n\in \{10,25,50,100,200\}\) and levels of significance \(\alpha \in \{0.1,0.05,0.01\}\). For each combination of n and \(\beta \), the entries corresponding to different values of \(\alpha \) are based on \(10^6\) replications under the null hypothesis. Each entry in a row named ‘\(\infty \)’ is the calculated \((1-\alpha )\)-quantile of the fitted Pearson system using the cumulants given in Table 2. We conclude that, for larger sample sizes, the simulated critical values are close to the approximated counterparts of the Pearson system. Moreover, we have corroborated the results of Henze (1990) for the special case \(\beta =1\), and we have extended these results to general \(\beta >0\).
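In the same spirit, once eigenvalues are available, approximate critical values of \(T_\infty \sim \sum _k \lambda _k N_k^2\) can also be obtained by direct Monte Carlo simulation of the weighted chi-square series, bypassing the Pearson fit. The Python sketch below uses Nyström eigenvalues as a stand-in for the tabulated values; all function names, the kernel transcription, and the truncation threshold are our own assumptions:

```python
import numpy as np

def K(s, t):
    # univariate limit covariance kernel (Henze and Wagner 1997)
    return (np.exp(-0.5 * (s - t)**2)
            - (1.0 + s * t + 0.5 * (s * t)**2) * np.exp(-0.5 * (s**2 + t**2)))

def limit_quantiles(beta, alphas=(0.10, 0.05, 0.01), m=60,
                    reps=100_000, seed=1):
    """(1-alpha)-quantiles of T_infinity = sum lambda_k N_k^2, by Monte
    Carlo over weighted chi-square(1) variables with Nystroem eigenvalues."""
    x, w = np.polynomial.hermite.hermgauss(m)
    t = np.sqrt(2.0) * beta * x
    omega = w / np.sqrt(np.pi)
    S, T = np.meshgrid(t, t, indexing="ij")
    lam = np.linalg.eigvalsh(np.sqrt(np.outer(omega, omega)) * K(S, T))
    lam = lam[lam > 1e-12]          # keep numerically positive eigenvalues
    rng = np.random.default_rng(seed)
    samples = (rng.chisquare(1, size=(reps, lam.size)) * lam).sum(axis=1)
    return {a: float(np.quantile(samples, 1.0 - a)) for a in alphas}
```

The resulting quantiles correspond to the rows named ‘\(\infty \)’, i.e., to the limiting rather than the finite-sample critical values.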
5 Conclusions
We have solved the eigenvalue problem of the integral operator associated with the covariance kernel K of the limiting Gaussian process that occurs in the limit null distribution of the Epps–Pulley test statistic. A comparison with the first three known cumulants from the literature (see Table 2) shows that the numerically obtained eigenvalues are very close to the corresponding theoretical values. In Sect. 5 of Ebner and Henze (2021), the authors present a Monte Carlo based method to find stochastic approximations of the eigenvalues. A comparison of our Table 1 with Table 1 of Ebner and Henze (2021) reveals significant differences for some values of \(\beta \), which can be explained by the Monte Carlo approximation of the eigenvalues in Ebner and Henze (2021). This observation is of particular interest, since the largest eigenvalue is used in the derivation of approximate Bahadur efficiencies. Recent results concerning this topic for the Epps–Pulley test are presented in Ebner and Henze (2021) and, for other normality tests based on the empirical distribution function, in Milošević et al. (2021).
We point out the difficulties encountered if one tries to generalize our findings to the multivariate case, i.e., to obtain the eigenvalues associated with the limit null distribution of the BHEP test of multivariate normality; see Baringhaus and Henze (1988), Henze and Wagner (1997) and Henze and Zirkler (1990). The d-variate analog of the covariance kernel K in (1.1) is given in Theorem 2.1 of Henze and Wagner (1997): writing \(\Vert \cdot \Vert \) for the Euclidean norm and \(^\top \) for the transpose of vectors, we have

$$\begin{aligned} K(s,t) = \mathrm{e}^{-\Vert s-t\Vert ^2/2} - \left( 1 + s^\top t + \frac{(s^\top t)^2}{2}\right) \mathrm{e}^{-(\Vert s\Vert ^2 + \Vert t\Vert ^2)/2}, \quad s,t \in {\mathbb {R}}^d. \end{aligned}$$

(5.1)
The first step is to derive explicit expressions for the eigenvalues w.r.t. the kernel

$$\begin{aligned} K_0(s,t) = \mathrm{e}^{-\Vert s-t\Vert ^2/2}, \quad s,t \in {\mathbb {R}}^d \end{aligned}$$
(for a starting point, see Baringhaus 1996). The second step is to find the corresponding multivariate representation of (2.1), which seems to be non-standard, since the quadratic summand \((s^\top t)^2\) in (5.1) does not factorize easily. Both problems have to be solved in order to successfully apply the method presented in Sect. 2.
Finally, it is an interesting question whether the results may be extended to other recent tests of normality based on the empirical characteristic function, such as Ebner (2020), or on other empirical integral transforms, such as the moment generating function; see Henze and Koch (2020) and, for multivariate versions, Ebner et al. (2021) as well as Henze and Visagie (2020). In each of these papers, an explicit formula for the covariance kernel under the null hypothesis is derived, but it is again unclear how to find explicit expressions for the eigenvalues of the reduced kernel. Hence we leave these problems as open questions for future work.
References
Baringhaus L (1996) Fibonacci numbers, Lucas numbers and integrals of certain Gaussian processes. Proc Am Math Soc 124(12):3875–3884
Baringhaus L, Danschke R, Henze N (1989) Recent and classical tests for normality - a comparative study. Commun Stat Simul Comput 18:363–379
Baringhaus L, Henze N (1988) A consistent test for multivariate normality based on the empirical characteristic function. Metrika 35(1):339–348
Becker M, Klößner S (2017) PearsonDS: Pearson distribution system. R package version 1.1
Betsch S, Ebner B (2020) Testing normality via a distributional fixed point property in the Stein characterization. TEST 29(1):105–138
Butsch L, Ebner B (2020) mnt: Affine Invariant Tests of Multivariate Normality, R package version 1.3
Csörgö S (1989) Consistency of some tests for multivariate normality. Metrika 36:107–116
D’Agostino RB, Stephens MA (eds) (1986) Goodness-of-fit techniques. Statistics: textbooks and monographs, vol 68. Dekker, New York
Ebner B (2020) On combining the zero bias transform and the empirical characteristic function to test normality. ALEA 18:1029–1045
Ebner B, Henze N (2020) Tests for multivariate normality–a critical review with emphasis on weighted \({L}^2\)-statistics. TEST 29(4):845–892
Ebner B, Henze N (2021) Bahadur efficiencies of the Epps-Pulley test for normality. Zapiski Nauchnykh Semin 501:302–314
Ebner B, Henze N, Strieder D (2021) Testing normality in any dimension by Fourier methods in a multivariate stein equation. Can J Stat. https://doi.org/10.1002/cjs.11670
Epps TW, Pulley LB (1983) A test for normality based on the empirical characteristic function. Biometrika 70(3):723–726
Henze N (1990) An approximation to the limit distribution of the Epps-Pulley test statistic for normality. Metrika 37(1):7–18
Henze N, Koch S (2020) On a test of normality based on the empirical moment generating function. Stat Pap 61(1):17–29
Henze N, Visagie J (2020) Testing for normality in any dimension based on a partial differential equation involving the moment generating function. Ann Inst Stat Math 72(5):1109–1136
Henze N, Wagner T (1997) A new approach to the BHEP tests for multivariate normality. J Multivar Anal 62(1):1–23
Henze N, Zirkler B (1990) A class of invariant consistent tests for multivariate normality. Commun Stat Theory Methods 19(10):3595–3617
Johnson NL, Kotz S, Balakrishnan N (1994) Continuous univariate distributions, vol 1, 2nd edn. Wiley, New York
Milošević B, Nikitin YY, Obradović M (2021) Bahadur efficiency of edf based normality tests when parameters are estimated. Zapiski Nauchnykh Semin, vol 501
R Core Team (2021) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna
Rasmussen CE, Williams CKI (2008) Gaussian processes for machine learning. In: Adaptive computation and machine learning. MIT Press, Cambridge
Shorack GR, Wellner JA (1986) Empirical processes with applications to statistics. Wiley, New York
Stephens MA (1976) Asymptotic results for goodness-of-fit statistics with unknown parameters. Ann Stat 4(2):357–369
Stephens MA (1977) Goodness of fit for the extreme value distribution. Biometrika 64(3):583–588
Sukhatme S (1972) Fredholm determinant of a positive definite kernel of a special type and its application. Ann Math Stat 43(6):1914–1926
Thode HC (2002) Testing for normality, vol 164. Statistics: textbooks and monographs. Dekker, New York
Zhu H, Williams CK, Rohwer R, Morciniec M (1997) Gaussian regression and optimal finite dimensional linear models. In: Bishop CM (ed) Neural networks and machine learning. Springer, Berlin
Acknowledgements
The authors thank two anonymous referees for helpful comments that improved the paper.
Funding
Open Access funding enabled and organized by Projekt DEAL.
A Approximation of eigenvalues in case of testing under partially known parameters
In the spirit of the work of Stephens (1976), we provide the approximation of the eigenvalues for the following three related cases:
1. Both parameters known: the test statistic is applied to \(Y_{j}=(X_j-\mu )/\sigma \), where \(\mu \) and \(\sigma \) are the parameters under the null hypothesis. The covariance kernel then reduces to
$$\begin{aligned} K_{1}(s,t)=K_{0}(s,t)-\phi _{3}(s)\phi _{3}(t),\quad s,t\in {\mathbb {R}}, \end{aligned}$$and the Fredholm determinant is \(D(\lambda )=d(\lambda )S_3(\lambda )\). The zeros of \(D(\lambda )\) (note that some zeros of \(d(\lambda )\) are not zeros of \(D(\lambda )\)) provide the eigenvalues in Table 4.
2. The mean is unknown, but the variance is known: the test statistic is applied to \(Y_{n,j}=(X_j-{\overline{X}}_n)/\sigma \), and the covariance kernel reduces to
$$\begin{aligned} K_{2}(s,t)=K_{0}(s,t)-\sum _{j=2}^{3}\phi _j(s)\phi _{j}(t),\quad s,t\in {\mathbb {R}}. \end{aligned}$$Here, the Fredholm determinant takes the form \(D(\lambda )=d(\lambda )S_2(\lambda )S_3(\lambda )\). The zeros of \(D(\lambda )\) provide the eigenvalues in Table 5. Note that none of the zeros of \(d(\lambda )\) are zeros of \(D(\lambda )\).
3. The mean is known, but the variance is unknown: the test statistic is applied to \(Y_{n,j}=S_n^{-1}(X_j-\mu )\). The covariance kernel reduces to
$$\begin{aligned} K_{3}(s,t)=K_{0}(s,t)-\phi _{1}(s)\phi _{1}(t)-\phi _{3}(s)\phi _{3}(t),\quad s,t\in {\mathbb {R}}, \end{aligned}$$and the Fredholm determinant is \(D(\lambda )=d(\lambda )(S_1(\lambda )S_3(\lambda )-S_{1,3}^2(\lambda ))\). The zeros of \(D(\lambda )\) provide the eigenvalues in Table 6. Note that some zeros of \(d(\lambda )\) are also zeros of \(D(\lambda )\).
Note that the sums of powers of the eigenvalues are close to the respective cumulants in all of Tables 4–6, which confirms the accuracy of the approximated eigenvalues.
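The three reduced kernels can be treated numerically in a single routine. The Python sketch below assumes the rank-one decomposition \(K_0 - \sum _j \phi _j\phi _j\) with \(\phi _1(t)=t^2\mathrm{e}^{-t^2/2}/\sqrt{2}\), \(\phi _2(t)=t\,\mathrm{e}^{-t^2/2}\), \(\phi _3(t)=\mathrm{e}^{-t^2/2}\) (our reading of the components; the full kernel K removes all three) and checks the eigenvalue sums against the corresponding kernel traces:

```python
import numpy as np

# assumed rank-one components phi_j of the kernel decomposition
PHI = {1: lambda t: t**2 * np.exp(-0.5 * t**2) / np.sqrt(2.0),
       2: lambda t: t * np.exp(-0.5 * t**2),
       3: lambda t: np.exp(-0.5 * t**2)}

def reduced_eigenvalues(beta, removed, m=60):
    """Nystroem eigenvalues of K_0 minus the rank-one terms phi_j phi_j
    for j in `removed`: removed=(3,) is the known-parameters case,
    (2, 3) known variance, (1, 3) known mean, (1, 2, 3) the full kernel."""
    x, w = np.polynomial.hermite.hermgauss(m)
    t = np.sqrt(2.0) * beta * x
    omega = w / np.sqrt(np.pi)
    S, T = np.meshgrid(t, t, indexing="ij")
    ker = np.exp(-0.5 * (S - T)**2)          # K_0 on the quadrature grid
    for j in removed:
        ker = ker - np.outer(PHI[j](t), PHI[j](t))
    M = np.sqrt(np.outer(omega, omega)) * ker
    return np.sort(np.linalg.eigvalsh(M))[::-1]
```

For \(\beta =1\), the eigenvalue sum in the known-parameters case is \(1 - 1/\sqrt{3}\), shrinking to \(1-\sqrt{3}/2\) when both parameters are estimated; removing each rank-one term lowers the trace, mirroring the loss incurred by parameter estimation.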
Ebner, B., Henze, N. On the eigenvalues associated with the limit null distribution of the Epps-Pulley test of normality. Stat Papers 64, 739–752 (2023). https://doi.org/10.1007/s00362-022-01336-6