
Heteroskedasticity testing through a comparison of Wald statistics


Abstract

This paper shows that a test for heteroskedasticity within the context of classical linear regression can be based on the difference between Wald statistics in heteroskedasticity-robust and nonrobust forms. Under the null hypothesis of homoskedasticity, the test statistic is asymptotically distributed as chi-squared with one degree of freedom. The power of the test is sensitive to the choice of parametric restriction used by the Wald statistics, so the supremum of a range of individual test statistics is proposed. Two versions of a supremum-based test are considered. The first does not have a known asymptotic null distribution, so the bootstrap is employed to approximate its empirical distribution; the second has a known asymptotic distribution and, in some cases, is asymptotically pivotal under the null. A simulation study illustrates the use and finite-sample performance of both versions of the test. In this study, the bootstrap is found to provide better size control than asymptotic critical values, particularly with heavy-tailed, asymmetric distributions of the covariates. In addition, the use of well-known modifications of the heteroskedasticity-consistent covariance matrix estimator of the OLS coefficients is also found to benefit the tests’ overall behaviour.


Notes

  1. The idea is remotely inspired by the Hausman (1978) test, as applied to a test statistic contrast rather than an estimator difference.

  2. Different sample sizes (\(n=50\), 200, 500) yielded the expected results, with no significant difference in the relative performance of the various tests. Therefore, for brevity's sake, those results are omitted.

  3. This is the “boot1” method in Hodoshima and Ando (2007). For White’s “direct” test under homoskedasticity, the authors find this approach to work best overall among several bootstrap methods (including variants of the wild bootstrap of Mammen 1993 and Davidson and Flachaire 2008).

  4. With Design 2, \(x_{ik}>-\exp \left ( 1\right ) /\sqrt {\exp \left ( 4\right ) -\exp \left ( 2\right ) }>-.5\), so \(\omega \left ( x_{i}\right ) >0\) under \( H_{3} \).

  5. The HC\(_{3}\) form is used here instead of the jackknife variant investigated by MacKinnon and White (1985), because the two forms have been found to behave similarly in simulations (as mentioned in Davidson and MacKinnon 1993, Ch. 16.3) and HC\(_{3}\) is much easier to use in the direct tests.

  6. Note that, from White (1980), \(\delta _{n}=n^{-1/2}\sum_{i=1}^{n}\left(\sigma ^{2}-\varepsilon _{i}^{2}\right) \left( \alpha _{i}-\bar{\alpha}\right)+o_{p}\left( 1\right)\).

References

  • Anscombe F (1961) Examination of residuals. Proc Berkeley Symp 1:1–36
  • Beran R (1988) Prepivoting test statistics: a bootstrap view of asymptotic refinements. J Amer Statist Assoc 83:687–697
  • Breusch TS, Pagan AR (1979) A simple test for heteroskedasticity and random coefficient variation. Econometrica 47:1287–1294
  • Chesher A, Jewitt I (1987) The bias of a heteroskedasticity consistent covariance matrix estimator. Econometrica 55:1217–1222
  • Davidson R, Flachaire E (2008) The wild bootstrap, tamed at last. J Econom 146(1):162–169
  • Davidson R, MacKinnon JG (1993) Estimation and inference in econometrics. Oxford University Press, New York
  • Glejser H (1969) A new test for heteroskedasticity. J Amer Statist Assoc 64:316–323
  • Godfrey LG (1978) Testing for multiplicative heteroskedasticity. J Econom 8:227–236
  • Godfrey LG (1990) Misspecification tests in econometrics: the Lagrange multiplier principle and other approaches. Cambridge University Press, Cambridge
  • Godfrey LG (1996) Some results on the Glejser and Koenker tests for heteroskedasticity. J Econom 72:275–299
  • Godfrey LG, Orme CD (1999) The robustness, reliability and power of heteroskedasticity tests. Econom Rev 18(2):169–194
  • Godfrey LG, Orme CD, Santos Silva JMC (2006) Simulation-based tests for heteroskedasticity in linear regression models: some further results. Econom J 9:76–97
  • Hall BH, Cummins C (1999) TSP user’s guide, version 4.5. TSP International, Palo Alto
  • Hausman JA (1978) Specification tests in econometrics. Econometrica 46:1251–1272
  • Hodoshima J, Ando M (2007) The finite-sample performance of White’s test for heteroskedasticity under stochastic regressors. Commun Statist – Simul Comput 36:1201–1215
  • Horn RA, Johnson CR (1990) Matrix analysis. Cambridge University Press, Cambridge
  • Im KS (2000) Robustifying the Glejser’s test of heteroskedasticity. J Econom 97:179–188
  • Koenker R (1981) A note on studentizing a test for heteroskedasticity. J Econom 17:107–112
  • Machado JAF, Santos Silva JMC (2000) Glejser’s test revisited. J Econom 97:189–202
  • MacKinnon JG, White H (1985) Some heteroskedasticity consistent covariance matrix estimators with improved finite sample properties. J Econom 29:305–325
  • Mammen E (1993) Bootstrap and wild bootstrap for high dimensional linear models. Ann Statist 21:255–285
  • White H (1980) A heteroskedasticity-consistent covariance matrix estimator and a direct test for heteroskedasticity. Econometrica 48:817–838


Acknowledgments

We would like to thank Chris Orme for careful and very detailed comments, which helped to significantly improve previous versions of the present text. We also thank João Santos Silva and an Anonymous Referee for valuable substantive remarks. Remaining errors are obviously covered by the usual disclaimer. Financial support from Fundação para a Ciência e Tecnologia, program FEDER/POCI 2010, is gratefully acknowledged.

Author information

Corresponding author

Correspondence to José M. R. Murteira.

Appendix

Proof of Lemma 1

The scaled difference between Wald statistics can be successively written as

$$\begin{array}{rll} n^{-1/2}s^{2}\left( W_{R}-W_{NR}\right)&=&n^{-1/2}r\left( b\right) ^{\prime }\left[ s^{2}\left( T^{\prime }D_{e}T\right) ^{-1}-\left( T^{\prime }T\right) ^{-1}\right] r\left( b\right)\\ &=& n^{-1/2}r\left( b\right) ^{\prime }\left( T^{\prime }T\right) ^{-1}T^{\prime }\left( s^{2}I_{n}-D_{e}\right) T\left( T^{\prime }D_{e}T\right) ^{-1}r\left( b\right)\\ &=&c_{1}\left( b\right) ^{\prime }\left[ n^{-1/2}X^{\prime }\left( s^{2}I_{n}-D_{e}\right) X\right] c_{2}\left( b\right)\\ &=&c_{1}\left( b\right) ^{\prime }\left[ n^{-1/2}\sum\limits_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) x_{i}x_{i}^{\prime }\right] c_{2}\left( b\right)\\ &=&c_{1}\left( b\right) ^{\prime }\left[ n^{-1/2}\sum\limits_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) \left( x_{i}x_{i}^{\prime }-M_{n}\right) \right] c_{2}\left( b\right) \text{.}\end{array}$$
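For concreteness, here is a minimal computational sketch (not from the paper) of the two Wald forms and the scaled contrast above, assuming a linear auxiliary restriction \(r\left( \beta \right) =R\beta -q\), HC\(_{0}\) weights in \(D_{e}\) and \(s^{2}=e^{\prime }e/\left( n-k\right) \); the function name and these choices are illustrative.

```python
import numpy as np

def wald_contrast(y, X, R, q):
    """Robust and nonrobust Wald statistics for the restriction R b = q
    (R is j x k, q is a j-vector), and the scaled contrast
    n^{-1/2} s^2 (W_R - W_NR) shown above."""
    n, k = X.shape
    XtX_inv = np.linalg.inv(X.T @ X)
    b = XtX_inv @ X.T @ y                            # OLS coefficients
    e = y - X @ b                                    # OLS residuals
    s2 = e @ e / (n - k)                             # nonrobust error-variance estimate
    r = R @ b - q                                    # r(b)
    T = X @ XtX_inv @ R.T                            # T = X (X'X)^{-1} R(b)'
    De = np.diag(e**2)                               # HC0 weights; HC3-type variants could be used instead
    W_NR = r @ np.linalg.solve(s2 * (T.T @ T), r)    # nonrobust Wald statistic
    W_R = r @ np.linalg.solve(T.T @ De @ T, r)       # heteroskedasticity-robust Wald statistic
    return W_R, W_NR, n**-0.5 * s2 * (W_R - W_NR)
```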

White (1980, Theorem 2) shows that, under homoskedasticity, the elements of

$$n^{-1/2}\sum\limits_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) \left( x_{i}x_{i}^{\prime }-M_{n}\right)$$

are asymptotically normally distributed with null means. Also, under White’s Assumptions, \(n^{-1}X^{\prime }X=\Sigma _{X}+O_{p}\left ( n^{-1/2}\right ) \) and \(n^{-1}X^{\prime }D_{e}X=\Xi _{X}+O_{p}\left ( n^{-1/2}\right ) \), so

$$\begin{array}{rll} nT^{\prime }T &=&\Sigma _{T}+O_{p}\left( n^{-1/2}\right) =R\left( \beta \right) \Sigma _{X}^{-1}R\left( \beta \right) ^{\prime }+O_{p}\left( n^{-1/2}\right) \text{,} \\ nT^{\prime }D_{e}T &=&\Xi _{T}+O_{p}\left( n^{-1/2}\right) =R\left( \beta \right) \Sigma _{X}^{-1}\Xi _{X}\Sigma _{X}^{-1}R\left( \beta \right) ^{\prime }+O_{p}\left( n^{-1/2}\right) \text{.} \end{array}$$

From the definitions of \(\gamma _{j}\) and \(c_{j}\left ( b\right ) \), \(j=1,2\) [see Eqs. 2 and 5], it then follows that \(c_{j}\left ( b\right ) =\gamma _{j}+O_{p}\left ( n^{-1/2}\right ) \).

Considering now each case in Lemma 1:

  1. (i)

    \(r\left ( \beta \right ) \neq 0\) at the true value of the regression parameters: false auxiliary restriction. In this case, \(\gamma _{j}\neq 0\), \(j=1,2\), so one can write

    $$\begin{array}{lll} && n^{-1/2}s^{2}\left( W_{R}-W_{NR}\right) \\&&\qquad=\gamma _{1}^{\prime }\left[ n^{-1/2}\sum\limits_{i=1}^{n}{}\left( s^{2}-e_{i}^{2}\right) \left( x_{i}x_{i}^{\prime }-M_{n}\right) \right] \gamma _{2}+o_{p}\left( 1\right) =\delta _{n}+o_{p}\left( 1\right) \text{,} \end{array}$$

    where

    $$\delta _{n}\equiv \gamma _{1}^{\prime }\left[{} n^{-1/2}\sum\limits_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) \left( x_{i}x_{i}^{\prime }-M_{n}\right){}\right] \gamma _{2}=n^{-1/2}\sum\limits_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) \left( \alpha _{i}-\bar{\alpha}\right) \text{.}$$

    This is a linear combination of asymptotically normal elements, so, under \( H_{0},\upsilon ^{-1/2}\delta _{n}\overset {D}{\longrightarrow }\mathcal {N}\left ( 0,1\right ) \), with (see note 6)

    $$\upsilon \equiv n^{-1}\sum\limits_{i=1}^{n}E\left \{ \left[ \left( \sigma ^{2}-\varepsilon _{i}^{2}\right) \left( \alpha _{i}-\bar{\alpha}\right)\right] ^{2}\right \} \text{.}$$
    (14)

    Replacing \(\gamma _{j}\) with \(c_{j}\left ( b\right ) \), \(j=1,2\), and recalling the definition of \(a_{i}\) [see (4)], one has \(\upsilon ^{-1/2}n^{-1/2}s^{2}\left( W_{R}-W_{NR}\right) \overset{D}{\longrightarrow }\mathcal{N}\left( 0,1\right)\) as well. Then, the result in Part (i) of Lemma 1 follows.

  2. (ii)

    \(r\left ( \beta \right ) =0\) at the true value of the regression parameters: true auxiliary restriction. In this case, \(\gamma _{j}=0\) so \(c_{j}\left ( b\right ) =O_{p}\left ( n^{-1/2}\right ) \), \(j=1,2\). Thus,

    $$\begin{array}{lll} && n^{1/2}s^{2}\left( W_{R}-W_{NR}\right)\\ &&\quad = n^{1/2}c_{1}\left( b\right) ^{\prime }n^{-1/2}\sum\limits_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) \left( x_{i}x_{i}^{\prime }-M_{n}\right) n^{1/2}c_{2}\left( b\right)\\ &&\quad = n^{-1/2}\sum\limits_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) n^{1/2}c_{1}\left( b\right) ^{\prime }\left( x_{i}x_{i}^{\prime }-M_{n}\right) n^{1/2}c_{2}\left( b\right) =O_{p}\left( 1\right) \text{.}\end{array}$$
    (15)

    A standard second-order Taylor expansion of the l-th component of \(r\left ( b\right ) \) around \(\beta \) yields

    $$\begin{array}{rll} r_{l}\left( b\right)&=&r_{l}\left( \beta \right) +R_{l}\left( \beta \right) \left( b-\beta \right) +\frac{1}{2}\left( b-\beta \right) ^{\prime }D_{l}\left( \beta ^{\ast }\right) \left( b-\beta \right)\\ &=& R_{l}\left( \beta \right) \left( b-\beta \right) +O_{p}\left( n^{-1}\right) \text{, }l=1,...,j\text{,} \end{array} $$

    where \(R_{l}\left ( \beta \right ) \equiv \partial r_{l}\left ( \beta \right ) /\partial \beta ^{\prime }\), \(D_{l}\left ( \beta ^{\ast }\right ) \equiv \partial ^{2}r_{l}\left ( \beta ^{\ast }\right ) /\partial \beta \partial \beta ^{\prime }\) is evaluated at some convex combination of b and \(\beta \) , and the last equality results from \(r\left ( \beta \right ) =0\) and the fact that \(b-\beta =O_{p}\left ( n^{-1/2}\right ) \). Consequently [recall (5)],

    $$\begin{array}{rll} n^{1/2}c_{1}\left( b\right) &=&\Sigma _{X}^{-1}R\left( \beta \right) ^{\prime }\Sigma _{T}^{-1}R\left( \beta \right) \Sigma _{X}^{-1}n^{-1/2}X^{\prime }\varepsilon +O_{p}\left( n^{-1/2}\right) \text{,} \\ n^{1/2}c_{2}\left( b\right) &=&\Sigma _{X}^{-1}R\left( \beta \right) ^{\prime }\Xi _{T}^{-1}R\left( \beta \right) \Sigma _{X}^{-1}n^{-1/2}X^{\prime }\varepsilon +O_{p}\left( n^{-1/2}\right) \text{.} \end{array}$$

    Plugging these results in (15) and given that

    $$\begin{array}{rll} s^{2}-e_{i}^{2}&=&\sigma ^{2}+O_{p}\left( n^{-1}\right) -\left[ \varepsilon _{i}^{2}-2\varepsilon _{i}x_{i}^{\prime }\left( b-\beta \right) +\left( b-\beta \right) ^{\prime }x_{i}x_{i}^{\prime }\left( b-\beta \right) \right]\\ &=&\sigma ^{2}-\varepsilon _{i}^{2}+O_{p}\left( n^{-1/2}\right) \end{array}$$

    uniformly in i, one can write

    $$ n^{1/2}s^{2}\left( W_{R}-W_{NR}\right) =\delta _{n}^{\ast }+O_{p}\left( n^{-1/2}\right) \text{,}$$
    (16)

    with

    $$\delta _{n}^{\ast }\equiv n^{-1/2}\sum\limits_{i=1}^{n}\left( \sigma ^{2}-\varepsilon _{i}^{2}\right) \left( \alpha _{i}^{\ast }-\bar{\alpha} ^{\ast }\right) \text{.}$$

    [see (6)].

    Noting that \(\alpha _{i}^{\ast }\) is a quadratic form in the n elements of \(\varepsilon \), write the last summation as \(\delta _{n}^{\ast }=n^{-1/2}\sum _{i=1}^{n}\delta _{ni}^{\ast }\), where

    $$ \delta _{ni}^{\ast }=n^{-1}\left( \sigma ^{2}-\varepsilon _{i}^{2}\right) \sum\limits_{l=1}^{n}\sum\limits_{m=1}^{n}d_{ilm}\varepsilon _{l}\varepsilon _{m}\text{,}$$
    (17)

    and \(d_{ilm}\) denotes the element \(\left ( l,m\right ) \) of the matrix \(D_{i}\) , defined in (7). Under \(H_{0}\) the mean of \(\delta _{ni}^{\ast }\) is given by

    $$\begin{array}{rll} E\left( \delta _{ni}^{\ast }\right) &=&n^{-1}E\left[ \left( \sigma ^{2}-\varepsilon _{i}^{2}\right) \sum\limits_{l=1}^{n}\sum\limits_{m=1}^{n}d_{ilm}\varepsilon _{l}\varepsilon _{m}\right]\\ &=&n^{-1}E\left( \left( \sigma ^{2}-\varepsilon _{i}^{2}\right) d_{iii}\varepsilon _{i}^{2}\right) +n^{-1}E\left[ \left( \sigma ^{2}-\varepsilon _{i}^{2}\right) \underset{l\neq i\vee m\neq i}{ \sum\limits_{l=1}^{n}\sum\limits_{m=1}^{n}}d_{ilm}\varepsilon _{l}\varepsilon _{m}\right] \text{,} \end{array}$$

    which is \(O\left ( n^{-1}\right ) \), uniformly in i, under some bounded moment assumptions, homoskedasticity and independence. Under such assumptions, one can also show that \(E\left ( \delta _{ni}^{\ast \,2}\right ) =O\left ( 1\right ) \) and \(E\left ( \delta _{ni}^{\ast }\delta _{nl}^{\ast }\right ) =O\left ( n^{-1}\right ) \), uniformly in i and l, \(i\neq l\). Consequently,

    $$\lim\limits_{n\rightarrow \infty }V\left( \delta _{n}^{\ast }\right) =\lim\limits_{n\rightarrow \infty }n^{-1}\left[ \sum\limits_{i=1}^{n}V\left( \delta _{ni}^{\ast }\right) +\sum\limits_{i=1}^{n}\sum\limits_{l\neq i}^{n}COV\left( \delta _{ni}^{\ast },\delta _{nl}^{\ast }\right) \right]$$

    exists and is finite and nonzero. Thus, \(\upsilon ^{{\ast }-1/2}\delta _{n}^{\ast }\overset {D}{\longrightarrow }\mathcal {N}\left ( 0,1\right )\), with

    $$ \upsilon ^{\ast }\equiv V\left( \delta_{n}^{\ast }\right) =n^{-1}\sum\limits_{i=1}^{n}E\left \{ \left[ \left( \sigma ^{2}-\varepsilon _{i}^{2}\right) \left( \alpha _{i}^{\ast }-\bar{\alpha}^{\ast }\right) \right] ^{2}\right \} \text{.}$$
    (18)

    As \(n^{1/2}s^{2}\left ( W_{R}-W_{NR}\right ) =\delta _{n}^{\ast }+O_{p}\left ( n^{-1/2}\right )\), it follows that \(\upsilon ^{\ast -1/2} n^{1/2}s^{2}\left ( W_{R}-W_{NR}\right ) \overset {D}{\longrightarrow }\mathcal {N}\left ( 0,1\right )\) as well. This yields the result in part (ii).

Proof of Remark 1

Under homokurticity of \(\varepsilon _{i}\) one can write [see (14)]

$$\upsilon =\left( \mu _{4}-\sigma ^{4}\right) n^{-1}\sum\limits_{i=1}^{n}E\left[ \left( \alpha _{i}-\bar{\alpha}\right) ^{2}\right] \text{,}$$

estimable by

$$n^{-1}\sum\limits_{i=1}^{n}\left( e_{i}^{2}-s^{2}\right) ^{2}n^{-1}\sum\limits_{i=1}^{n}\left( a_{i}-\bar{a}\right) ^{2}\text{.}$$

This factorization allows an asymptotically equivalent version of the test statistic (8) to be written as

$$\begin{array}{lll}&& \frac{\left[ n^{-1/2}\sum_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) \left( a_{i}-\bar{a}\right) \right] ^{2}}{n^{-1}\sum_{i=1}^{n}\left( e_{i}^{2}-s^{2}\right) ^{2}n^{-1}\sum_{i=1}^{n}\left( a_{i}-\bar{a}\right) ^{2}} \\ &&\quad= n\left[ \frac{\sum_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) \left( a_{i}-\bar{a }\right) }{\sum_{i=1}^{n}\left( a_{i}-\bar{a}\right) ^{2}}\right]^{2}\frac{ \sum_{i=1}^{n}\left( a_{i}-\bar{a}\right) ^{2}}{\sum_{i=1}^{n}\left( e_{i}^{2}-s^{2}\right) ^{2}}\text{,} \end{array}$$

readily recognized as \(nR^{2}\) from the OLS regression of \(e_{i}^{2}\) on a constant and \(a_{i}\) [this reasoning closely follows the Proof of Corollary 1 of White (1980)].
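In computational terms, this direct form amounts to one auxiliary OLS regression; a minimal sketch (hypothetical helper, not the authors' code), with \(e\) the vector of OLS residuals and \(a\) the vector of the \(a_{i}\):

```python
import numpy as np

def direct_test_nR2(e, a):
    """n R^2 from the OLS regression of e_i^2 on a constant and a_i;
    under H0 this is compared with a chi-squared(1) critical value."""
    n = e.shape[0]
    y = e**2
    Z = np.column_stack([np.ones(n), a])             # constant and a_i
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ coef
    R2 = 1.0 - resid @ resid / np.sum((y - y.mean())**2)
    return n * R2
```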

In case \(r\left ( \beta \right ) =0\) (true auxiliary restriction), the deduction of a similar “direct test” would require factorization of \(E\left \{ \left [ \left ( \sigma ^{2}-\varepsilon _{i}^{2}\right ) \left ( \alpha _{i}^{\ast }-\bar {\alpha } ^{\ast }\right ) \right ] ^{2}\right \} \) into \(\left ( \mu _{4}-\sigma ^{4}\right ) E\left [ \left ( \alpha _{i}^{\ast }-\bar {\alpha }^{\ast }\right ) ^{2}\right ] \), say [see (18)]. However, recalling the Assumptions of error independence and \(E\left ( \varepsilon _{i}|x_{i}\right ) =0\), careful inspection of the expression of \(\alpha _{i}^{\ast }-\bar {\alpha }^{\ast }\) in (17) allows one to write

$$\begin{array}{lll} && E\left \{ \left[ \left( \sigma ^{2}-\varepsilon _{i}^{2}\right) \left(\alpha _{i}^{\ast }-\bar{\alpha}^{\ast }\right) \right] ^{2}\right \}\\&&\quad=n^{-1}E\left[ \left( \sigma ^{2}-\varepsilon _{i}^{2}\right) ^{2}\left( p_{i}\varepsilon _{i}^{4}+\varepsilon _{i}^{2}\sum_{l=1,l\neq i}^{n}q_{il}\varepsilon_{l}^{2}\right.\right.+\left.\left.\varepsilon _{i}\sum_{l=1,l\neq i}^{n}r_{il}\varepsilon _{l}^{3}\right) \right] \text{,} \end{array}$$

where the coefficients \(p_{i}\), \(q_{il}\) and \(r_{il}\) do not involve the error terms (only the regressors). Consequently, validity of the suggested factorization requires an assumption of the sort \(E\left ( \varepsilon _{i}^{p+q}|x_{i}\right ) =E\left ( \varepsilon _{i}^{p}|x_{i}\right ) E\left ( \varepsilon _{i}^{q}|x_{i}\right ) \), \(\forall i\), \(q=1,2\), \(p=2,4\) {the term \(p_{i}\varepsilon _{i}^{4}\) is neglected because only \( \lim _{n\rightarrow \infty }V\left ( \delta _{n}^{\ast }\right ) \) is of interest – not \(V\left ( \delta _{n}^{\ast }\right ) \) per se – and \(\lim _{n\rightarrow \infty }n^{-1}E\left [ \left ( \sigma ^{2}-\varepsilon _{i}^{2}\right ) ^{2}p_{i}\varepsilon _{i}^{4}\right ] =0\)}. Adopting such an assumption, hardly ever justifiable in practice, would greatly restrict the range of admissible error distributions. Thus, application of Remark 1 in this case requires rather more stringent assumptions than those needed for false auxiliary restrictions. □

Proof of Corollary 1

Note, first, the following general feature of the test when \(r\left ( \cdot \right ) \) is a scalar function. In this case T is a column \(n-\)vector and \( a_{i}=r\left ( b\right ) ^{2}T_{\left ( i\right ) }^{2}/\left ( T^{\prime }TT^{\prime }D_{e}T\right ) \), where \(T_{\left ( i\right ) }\equiv x_{i}^{\prime }\left ( X^{\prime }X\right ) ^{-1}R\left ( b\right ) ^{\prime }\) denotes the i-th element of T. Let \(\overline {T^{2}}\equiv n^{-1}\sum _{i=1}^{n}T_{\left ( i\right ) }^{2}\); cancelling out constant terms, the statistic in (8) becomes

$$\frac{\left[\sum_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) a_{i}\right] ^{2}}{ \sum_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) ^{2}\left( a_{i}-\bar{a}\right) ^{2}}=\frac{\left[ \sum_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) T_{\left( i\right) }^{2}\right] ^{2}}{\sum_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) ^{2}\left( T_{\left( i\right) }^{2}-\overline{T^{2}}\right) ^{2}}\text{,} $$

which does not involve \(r\left ( b\right ) \) [it depends on \(\beta \) only through \(R\left ( b\right ) \)].
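Computationally, this scalar-restriction form of the statistic only requires the elements \(T_{\left ( i\right ) }\); a minimal sketch (illustrative names) follows.

```python
import numpy as np

def scalar_restriction_statistic(e, s2, T):
    """Statistic in (8) for a scalar auxiliary restriction, in the form
    shown above; T holds the elements T_(i) = x_i'(X'X)^{-1} R(b)'."""
    d = s2 - e**2
    T2 = T**2
    return np.sum(d * T2)**2 / np.sum(d**2 * (T2 - T2.mean())**2)
```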

Considering now each statement in the Corollary:

  1. (i)

    If \(r\left ( \beta \right ) =R\beta -r\), a scalar, then \(\partial r\left ( \beta \right ) /\partial \beta ^{\prime }=R\), a vector of constants not involving \(\beta \). Thus, from the above, the test statistic (8) no longer depends on \(\beta \) and, consequently, is asymptotically pivotal (it is not pivotal, because its finite sample distribution depends upon the error distribution). The second part of the statement is obvious, given the asymptotic equivalence of Eqs. 8 and 9 with false auxiliary restrictions.

  2. (ii)

    The result is a direct consequence of the fact that \(u_{i}^{2}\) in (10) is proportional to \(\left [ x_{i}^{\prime }\left ( X^{\prime }X\right ) ^{-1}R^{\prime }\right ] ^{2}\) when \(r\left ( \cdot \right ) \) is a scalar affine function. To see this, start by writing the reparameterized model in matrix form as \(y^{\ast }=X^{\ast }\beta ^{\ast }+\varepsilon \), where

    $$\begin{array}{rll}X&\equiv& \left[ \underset{n\times \left( k-1\right) }{X_{1}}\vdots \text{ }\underset{n\times 1}{x_{2}}\right] \text{, \ \ }x_{2}^{\ast }\equiv \left(1/R_{2}\right) x_{2}\text{, \ \ }y^{\ast }\equiv y-x_{2}^{\ast }\text{, \ }\beta ^{\ast }\equiv\left( \beta _{1}^{\prime },\theta \right) ^{\prime }\text{,} \\ X^{\ast }&\equiv&\left[ X_{1}^{\ast }\text{ }\vdots \text{ }x_{2}^{\ast }\right] =XA\text{, \ \ \ }\underset{k\times k}{A}=\left[\begin{array}{cc}I_{k-1} & 0 \\-\left( 1/R_{2}\right) R_{1} & 1/R_{2}\end{array}\right] \text{,}\end{array}$$

    with \(\theta \) and R defined in the main text and \(I_{k-1}\) the \(\left ( k-1\right ) \)-identity matrix. Under the reparameterized model the auxiliary restriction becomes \(\theta =R^{\ast }\beta ^{\ast }=0\), \(R^{\ast }\equiv RA\).

As \(r\left ( \cdot \right ) \) is a scalar affine function [and \(r\left ( \beta \right ) \neq 0\)], the test can be computed as in (9), taking \(nR^{2}\) from the OLS regression of \(e_{i}^{2}\) on an intercept and the regressor \( \left [ x_{i}^{\prime }\left ( X^{\prime }X\right ) ^{-1}R^{\prime }\right ] ^{2} \). In matrix form, the n-vector with generic element \(x_{i}^{\prime }\left ( X^{\prime }X\right ) ^{-1}R^{\prime }\) can be written \(X\left ( X^{\prime }X\right ) ^{-1}R^{\prime }\). As A is invertible,

$$X\left( X^{\prime }X\right) ^{-1}R^{\prime }=XA\left( A^{\prime }X^{\prime }XA\right) ^{-1}A^{\prime }R^{\prime }=X^{\ast }\left( X^{\ast \prime }X^{\ast }\right) ^{-1}R^{\ast \prime }\text{.}$$

The residuals from the original and reparameterized model (\(e^{\ast }\)) are equal, because

$$e^{\ast }=M^{\ast }y^{\ast }=My^{\ast }=My-\left( 1/R_{2}\right) Mx_{2}=My=e \text{,}$$

where \(M^{\ast }=I-X^{\ast }\left ( X^{\ast \prime }X^{\ast }\right ) ^{-1}X^{\ast \prime }=I-X\left ( X^{\prime }X\right ) ^{-1}X^{\prime }=M\) and \( Mx_{2}=0\), since M projects onto the space orthogonal to the space spanned by the columns of X. Thus, the direct test can also be computed as \(nR^{2}\) from the OLS regression of \(e_{i}^{\ast 2}\) on an intercept and the regressor \(\left [ x_{i}^{\ast \prime }\left ( X^{\ast \prime }X^{\ast }\right ) ^{-1}R^{\ast \prime }\right ] ^{2}\). From the definition of \(R^{\ast }\) and the usual formulae for the inverse of partitioned matrices, the n -vector with generic element \(x_{i}^{\ast \prime }\left ( X^{\ast \prime }X^{\ast }\right ) ^{-1}R^{\ast \prime }\) is given by

$$X^{\ast }\left( X^{\ast \prime }X^{\ast }\right) ^{-1}R^{\ast \prime }=\left( x_{2}^{\ast \prime }M_{1}^{\ast }x_{2}^{\ast }\right) ^{-1}M_{1}^{\ast }x_{2}^{\ast }\text{,}$$

where \(M_{1}^{\ast }\equiv I_{n}-X_{1}^{\ast }\left ( X_{1}^{\ast \prime }X_{1}^{\ast }\right ) ^{-1}X_{1}^{\ast \prime }\) and \(X_{1}^{\ast }\equiv X_{1}-x_{2}^{\ast }R_{1}\). This is proportional to the vector of OLS residuals from the regression of \(x_{2}^{\ast }\) on \(X_{1}^{\ast }\), which is in turn proportional to \(M_{1}^{\ast }x_{2}\), the n-vector of OLS residuals from the regression of \(x_{2}\) on \(X_{1}^{\ast }\). Thus, \(\left [ x_{i}^{\ast \prime }\left ( X^{\ast \prime }X^{\ast }\right ) ^{-1}R^{\ast \prime }\right ] ^{2}\) and \(u_{i}^{2}\) are proportional, so the regression of \(e_{i}^{2}\) on an intercept and \(\left [ x_{i}^{\prime }\left ( X^{\prime }X\right ) ^{-1}R^{\prime }\right ] ^{2}\) and regression (10) yield the same \(nR^{2}\) statistic. □
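As a quick numerical check of the invariance used in this proof, the sketch below (hypothetical data; A is the reparameterization matrix defined above) verifies that \(X\left( X^{\prime }X\right) ^{-1}R^{\prime }=X^{\ast }\left( X^{\ast \prime }X^{\ast }\right) ^{-1}R^{\ast \prime }\).

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 30, 4
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
R = rng.normal(size=k)                       # scalar affine restriction R beta - q
R1, R2 = R[:-1], R[-1]

A = np.eye(k)                                # A = [[I_{k-1}, 0], [-(1/R2) R1, 1/R2]]
A[-1, :-1] = -R1 / R2
A[-1, -1] = 1.0 / R2

Xs, Rs = X @ A, R @ A                        # X* = X A, R* = R A
v = X @ np.linalg.solve(X.T @ X, R)          # X (X'X)^{-1} R'
vs = Xs @ np.linalg.solve(Xs.T @ Xs, Rs)     # X* (X*'X*)^{-1} R*'
print(np.allclose(v, vs))                    # True: the direct-test regressor is invariant
```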

Proof of Lemma 2

(i) \(r\left ( \beta \right ) \neq 0\): false auxiliary restriction.

Consider again the relation \(s^{2}-e_{i}^{2}=\sigma ^{2}-\varepsilon _{i}^{2}+O_{p}\left ( n^{-1/2}\right ) \), so that

$$ n^{-1/2}\sum\limits_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) \left( x_{i}x_{i}^{\prime }-M_{n}\right)=n^{-1/2}\sum\limits_{i=1}^{n}\left( \sigma ^{2}-\varepsilon _{i}^{2}\right) \left( x_{i}x_{i}^{\prime }-M_{n}\right)+O_{p}\left( n^{-1/2}\right) \text{.}$$

Under \(H_{1}\), the leading term in this expression can be written as

$$n^{-1/2}\sum\limits_{i=1}^{n}\left[ -\sigma ^{2}n^{-1/2}h\left( x_{i}\right) -u_{i} \right] \left( x_{i}x_{i}^{\prime }-M_{n}\right) \text{,} \qquad u_{i}\equiv \varepsilon _{i}^{2}-E\left( \varepsilon _{i}^{2}|x_{i}\right) \text{.} $$

Under some bounded moment assumptions and independence, the asymptotic mean of this random matrix can be written

$$\mu \equiv -\sigma ^{2}p\lim_{n\rightarrow \infty }n^{-1}\sum\limits_{i=1}^{n}h\left( x_{i}\right) \left( x_{i}x_{i}^{\prime }-M_{n}\right) \text{.}$$

Thus, under \(H_{1}\), the elements of \(n^{-1/2}\sum _{i=1}^{n}\left ( s^{2}-e_{i}^{2}\right ) \left ( x_{i}x_{i}^{\prime }-M_{n}\right ) \) are asymptotically normal with means given by the corresponding elements of \(\mu \). It immediately follows that \(\upsilon ^{-1/2}\delta _{n}\overset {D}{ \longrightarrow }\mathcal {N}\left ( \gamma _{1}^{\prime }\mu \gamma _{2},1\right) \), with \(\upsilon \) defined in (14). Then, \( \upsilon ^{-1/2}n^{-1/2}s^{2}\left ( W_{R}-W_{NR}\right ) \overset {D}{ \longrightarrow }\mathcal {N}\left ( \gamma _{1}^{\prime }\mu \gamma _{2},1\right ) \) as well. Standard statistical results ensure that, under the sequence \(H_{1}\),

$$\frac{1}{\upsilon }\left[ n^{-1/2}s^{2}\left( W_{R}-W_{NR}\right) \right] ^{2}\overset{D}{\longrightarrow }\chi _{1}^{2}\left( \lambda \right) \text{,}$$

with noncentrality parameter \(\lambda \equiv \left ( \gamma _{1}^{\prime }\mu \gamma _{2}\right ) ^{2}/\upsilon \).

(ii) \(r\left ( \beta \right ) =0\): true auxiliary restriction.

Recall expression (16) with \(\delta _{n}^{\ast }\) written as

$$\delta _{n}^{\ast }=n^{-1/2}\sum\limits_{i=1}^{n}n^{-1}\sum\limits_{l=1}^{n}\sum\limits_{m=1}^{n}\left( \sigma ^{2}-\varepsilon _{i}^{2}\right) d_{ilm}\varepsilon _{l}\varepsilon _{m} \text{.} $$

Under bounded moment assumptions and independence, its asymptotic mean is

$$\begin{array}{rll} \mu ^{\ast }\equiv p\lim\limits_{n\rightarrow \infty }\delta _{n}^{\ast }&=&p\lim\limits_{n\rightarrow \infty }n^{-3/2}\sum\limits_{i=1}^{n}\left( \sigma ^{2}-\varepsilon _{i}^{2}\right) \sum\limits_{l=1}^{n}d_{ill}\varepsilon _{l}^{2}\\ &=&p\lim\limits_{n\rightarrow \infty }n^{-3/2}\sum\limits_{i=1}^{n}d_{iii}\left( \sigma ^{2}-\varepsilon _{i}^{2}\right) \varepsilon _{i}^{2}\\ &&+p\lim\limits_{n\rightarrow\infty }n^{-3/2}\sum\limits_{i=1}^{n}\left( \sigma ^{2}-\varepsilon _{i}^{2}\right) \sum\limits_{l=1\text{, }l\neq i}^{n}d_{ill}\varepsilon _{l}^{2}\\ &=&p\lim\limits_{n\rightarrow \infty }n^{-3/2}\sum\limits_{i=1}^{n}\left( \sigma ^{2}-\varepsilon _{i}^{2}\right) \sum\limits_{l=1\text{, }l\neq i}^{n}d_{ill}\varepsilon _{l}^{2}\text{,} \end{array}$$

with the first equality resulting from the fact that, under independent errors, for \(l\neq m\),

$$E\left[ \left( \sigma ^{2}-\varepsilon _{i}^{2}\right) d_{ilm}\varepsilon _{l}\varepsilon _{m}\right] =0\text{,}$$

and the last equality due to

$$p\lim\limits_{n\rightarrow \infty }n^{-3/2}\sum\limits_{i=1}^{n}d_{iii}\left( \sigma ^{2}-\varepsilon _{i}^{2}\right) \varepsilon _{i}^{2}=0\text{,}$$

because of the factor \(n^{-3/2}\). As before, replace \(\varepsilon _{i}^{2}\) by \(\sigma ^{2}\left [ 1+n^{-1/2}h\left ( x_{i}\right ) \right ] +u_{i}\), \( u_{i}\equiv \varepsilon _{i}^{2}-E\left ( \varepsilon _{i}^{2}|x_{i}\right ) \) . Then,

$$\begin{array}{rll} \mu ^{\ast }&=&p\lim\limits_{n\rightarrow \infty }n^{-3/2}\sum\limits_{i=1}^{n}\sum\limits_{l=1 \text{, }l\neq i}^{n}d_{ill}\left[ -\sigma ^{2}n^{-1/2}h\left( x_{i}\right) +u_{i}\right] \\ &&\times~\left \{ \sigma ^{2}\left[ 1+n^{-1/2}h\left( x_{l}\right) \right]+u_{l}\right \}\\ &=&-\sigma ^{4}p\lim\limits_{n\rightarrow \infty }n^{-2}\sum\limits_{i=1}^{n}\sum\limits_{l=1\text{, }l\neq i}^{n}d_{ill}h\left( x_{i}\right)\\ &&-~\sigma ^{4}p\lim\limits_{n\rightarrow \infty }n^{-5/2}\sum\limits_{i=1}^{n}\sum\limits_{l=1\text{, }l\neq i}^{n}d_{ill}h\left( x_{i}\right) h\left( x_{l}\right)\\ &&-~\sigma ^{2}p\lim\limits_{n\rightarrow \infty }n^{-2}\sum\limits_{i=1}^{n}\sum\limits_{l=1\text{, } l\neq i}^{n}d_{ill}h\left( x_{i}\right) u_{l}+\sigma ^{2}p\lim\limits_{n\rightarrow \infty }n^{-3/2}\sum\limits_{i=1}^{n}\sum\limits_{l=1\text{, }l\neq i}^{n}d_{ill}u_{i}\\ &&+~\sigma ^{2}p\lim\limits_{n\rightarrow \infty }n^{-2}\sum\limits_{i=1}^{n}\sum\limits_{l=1\text{, } l\neq i}^{n}d_{ill}u_{i}h\left( x_{l}\right) +p\lim\limits_{n\rightarrow \infty }n^{-3/2}\sum\limits_{i=1}^{n}\sum\limits_{l=1\text{, }l\neq i}^{n}d_{ill}u_{i}u_{l}\\ &=&-\sigma ^{4}p\lim\limits_{n\rightarrow \infty }n^{-2}\sum\limits_{i=1}^{n}\sum\limits_{l=1\text{, }l\neq i}^{n}d_{ill}h\left( x_{i}\right) \end{array}$$

(because all the \(p\lim \)'s are null except the first). This last expression can be written as \(-\sigma ^{4}p\lim _{n\rightarrow \infty }n^{-2}\sum _{i=1}^{n}\left [ tr\left ( D_{i}\right ) -\left ( D_{i}\right ) _{i,i} \right ] h\left ( x_{i}\right ) \), with \(\left ( D_{i}\right ) _{i,i}\) the i-th diagonal element of the matrix \(D_{i}\) defined in (7). Finally, given that \(\sum _{i=1}^{n}\left ( D_{i}\right ) _{i,i}h\left ( x_{i}\right ) =O_{p}\left ( n\right ) \),

$$\mu ^{\ast }=-\sigma ^{4}p\lim\limits_{n\rightarrow \infty }n^{-2}\sum\limits_{i=1}^{n}tr\left( D_{i}\right) h\left( x_{i}\right) \text{.} $$

Thus, \(\upsilon ^{\ast -1/2}\delta _{n}^{\ast }\overset {D}{ \longrightarrow }\mathcal {N}\left ( \mu ^{\ast },1\right ) \), with \(\upsilon ^{\ast }\) defined in (18). Obviously, \(\upsilon ^{\ast -1/2}n^{1/2}s^{2}\left ( W_{R}-W_{NR}\right ) \overset {D}{\longrightarrow } \mathcal {N}\left ( \mu ^{\ast },1\right ) \) as well. Again, standard statistical results ensure that, under the sequence \(H_{1}\),

$$\frac{1}{\upsilon ^{\ast }}\left[ n^{1/2}s^{2}\left( W_{R}-W_{NR}\right) \right] ^{2}\overset{D}{\longrightarrow }\chi _{1}^{2}\left( \lambda ^{\ast }\right) \text{,} $$

with noncentrality parameter \(\lambda ^{\ast }\equiv \mu ^{\ast 2}/\upsilon ^{\ast }\). □
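As an illustration of Lemma 2, the local asymptotic power implied by the noncentral chi-squared limit can be evaluated directly; the sketch below uses a purely illustrative value for the noncentrality parameter.

```python
from scipy.stats import chi2, ncx2

alpha, lam = 0.05, 3.0                  # lam stands for lambda (or lambda*); illustrative value
crit = chi2.ppf(1 - alpha, df=1)        # central chi-squared(1) critical value
power = ncx2.sf(crit, df=1, nc=lam)     # Pr[ chi2_1(lam) > crit ] under the drifting alternative
print(round(crit, 3), round(power, 3))
```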

Proof of Lemma 3

Consider the g-th auxiliary restriction; write the corresponding element of the vector \(n^{-1/2}s^{2}wd\) as

$$\begin{array}{l} n^{-1/2}s^{2}\left[ W_{R}^{\left( g\right) }-W_{NR}^{\left( g\right) }\right] =\delta _{n}^{\left( g\right) }+o_{p}\left( 1\right) \text{,} \\ \delta _{n}^{\left( g\right) }\equiv n^{-1/2}\sum\limits_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) \left[ \alpha _{i}^{\left( g\right) }-\overline{ \alpha ^{\left( g\right) }}\right] \text{,} \end{array}$$

where, as mentioned, \(\alpha _{i}^{\left ( g\right ) }\) is defined analogously to \(\alpha _{i}\) or \(\alpha _{i}^{\ast }\), according to whether, respectively, \(r_{g}\left ( \beta \right ) \neq 0\) or \(r_{g}\left ( \beta \right ) =0\), \(g=1,...,m\), at the true value of the regression parameters [\(\overline {\alpha ^{\left ( g\right ) }}\) denotes the sample average of \(\alpha _{i}^{\left ( g\right ) }\), \( i=1,...,n\)]. Under the present assumptions, with homoskedastic errors, the multivariate Liapounov central limit theorem can be applied to the random \(m\)-vector \(n^{-1/2}\left ( \delta _{n}^{\left ( 1\right ) },\ldots ,\delta _{n}^{\left ( m\right ) }\right ) ^{\prime }\), that is,

$$\Psi \times n^{-1/2}\left( \delta _{n}^{\left( 1\right) },\ldots ,\delta _{n}^{\left( m\right) }\right) ^{\prime }\overset{D}{\longrightarrow } \mathcal{N}\left( 0_{m},I_{m}\right) \text{,}$$

with \(\Psi \) a symmetric pd matrix such that \(\Psi ^{2}=\Upsilon ^{-1}\) and \(\Upsilon \) the average covariance matrix defined in (11). The Assumptions of the present paper (namely uniform boundedness of moments of products of \(\varepsilon _{i}\) and components of \(x_{i}\), up to order eight) and White's Assumption 6 ensure that \(\Upsilon \) is a pd matrix with uniformly bounded elements for sufficiently large n, which in turn ensures the existence of the matrix \(\Psi \). The asymptotic equivalence between each \( n^{-1/2}s^{2}\left [ W_{R}^{\left ( g\right ) }-W_{NR}^{\left ( g\right ) }\right ] \) and \(\delta _{n}^{\left ( g\right ) }\) then yields the statement in the Lemma. □

Proof of Corollary 2

Consider the components of the \(m\times 1\) vector \(\Psi n^{-1/2}s^{2}wd\),

$$\left( \Psi _{1}^{\prime }n^{-1/2}s^{2}wd,\ldots ,\Psi _{m}^{\prime }n^{-1/2}s^{2}wd\right) ^{\prime }\text{.} $$

According to Lemma 3, these components are asymptotically uncorrelated standard normal rv’s, so they are asymptotically independent. Thus, the corresponding squared variables, \(\left ( \Psi _{g}^{\prime }n^{-1/2}s^{2}wd\right ) ^{2}\), are asymptotically independent chi-squared with one df. The desired result immediately follows from the well-known fact that, for independent rv’s \(t_{g}\), \(g=1,...,m\),

$$\Pr \left( \sup \left \{ t_{1},...,t_{m}\right \} \leq t\right) =\Pr \left( t_{1}\leq t,...,t_{m}\leq t\right) =\prod_{g=1}^{m}\Pr \left( t_{g}\leq t\right) \text{.}$$
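This product form yields the asymptotic critical value of the supremum statistic directly: with m asymptotically independent \(\chi _{1}^{2}\) components, one solves \(F_{\chi _{1}^{2}}\left( t\right) ^{m}=1-\alpha \). A sketch with illustrative values:

```python
from scipy.stats import chi2

alpha, m = 0.05, 4                           # nominal level and number of auxiliary restrictions (illustrative)
t_crit = chi2.ppf((1 - alpha)**(1 / m), df=1)
print(round(t_crit, 3))                      # reject H0 when the supremum exceeds t_crit
```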

Proof of Corollary 3

Consider, first, the following definitions, for scalar auxiliary restrictions (\(j=1\)):

If \(r_{g}\left ( \beta \right ) \neq 0\),

$$\begin{array}{rll} a_{i}^{\left( g\right) }&\equiv& r_{g}\left( b\right) ^{2}T_{g\left( i\right) }^{2}/\left( T_{g}^{\prime }T_{g}T_{g}^{\prime }D_{e}T_{g}\right) \text{,} \\ T_{g}&\equiv& X\left( X^{\prime }X\right) ^{-1}R_{g}\left( b\right) ^{\prime } \text{,} \end{array}$$

with \(T_{g\left ( i\right ) }\) the i-th element of \(T_{g}\); if \(r_{g}\left ( \beta \right ) =0\),

$$\begin{array}{rll} a_{i}^{\left( g\right) }&\equiv& nr_{g}\left( b\right) ^{2}T_{g\left( i\right) }^{2}/\left( T_{g}^{\prime }T_{g}T_{g}^{\prime }D_{e}T_{g}\right) \text{,} \\ T_{g}&\equiv& n^{1/2}X\left( X^{\prime }X\right) ^{-1}R_{g}\left( b\right) ^{\prime }\text{.} \end{array}$$

Thus, if \(r_{g}\left ( \beta \right ) \neq 0\),

$$s^{2}\left[ W_{R}^{\left( g\right) }-W_{NR}^{\left( g\right) }\right] =r_{g}\left( b\right) ^{2}/\left( T_{g}^{\prime }T_{g}T_{g}^{\prime }D_{e}T_{g}\right) \sum_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) ^{2}T_{g\left( i\right) }^{2} $$

and, if \(r_{g}\left ( \beta \right ) =0\),

$$s^{2}\left[ W_{R}^{\left( g\right) }-W_{NR}^{\left( g\right) }\right] =nr_{g}\left( b\right) ^{2}/\left( T_{g}^{\prime }T_{g}T_{g}^{\prime }D_{e}T_{g}\right) \sum_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) ^{2}T_{g\left( i\right) }^{2}\text{.}$$

Define the \(m\times m\) diagonal matrix \(DR\left ( b\right ) \) with g-th diagonal entry \(r_{g}\left ( b\right ) ^{2}/\left ( T_{g}^{\prime }T_{g}T_{g}^{\prime }D_{e}T_{g}\right ) \) or n times this [if \(r_{g}\left ( \beta \right ) \neq 0\) or \(r_{g}\left ( \beta \right ) =0\), respectively]. Then, the matrix V can be written as

$$ V=DR\left( b\right) \times M\times DR\left( b\right) \text{,}$$
(19)

where M is \(m\times m\) symmetric and pd for large enough n, with generic element

$$M_{gh}\equiv \sum_{i=1}^{n}\left( s^{2}-e_{i}^{2}\right) ^{2}\left[ T_{g\left( i\right) }^{2}-\overline{T_{g}^{2}}\right] \left[ T_{h\left( i\right) }^{2}-\overline{T_{h}^{2}}\right] \text{, \ \ }g,h=1,...,m\text{.} $$

Given the definition of \(T_{g}\), \(g=1,...,m\), it is clear that M depends on \(\beta \) only through the derivatives \(R_{g}\left ( b\right ) \equiv \partial r_{g}\left ( b\right ) /\partial b^{\prime }\) – not through the functions \(r_{g}\left ( b\right ) \) themselves.

From (19),

$$V^{-1}=DR\left( b\right) ^{-1}\times M^{-1}\times DR\left( b\right) ^{-1} \text{,}$$

where \(DR\left ( b\right ) ^{-1}\) is diagonal with g-th entry \(\left ( T_{g}^{\prime }T_{g}T_{g}^{\prime }D_{e}T_{g}\right ) /r_{g}\left ( b\right ) ^{2}\) or this quantity divided by n [if \(r_{g}\left ( \beta \right ) \neq 0\) or \(r_{g}\left ( \beta \right ) =0\), respectively]. Thus, considering PM, the symmetric square root matrix of \(M^{-1}\),

$$V^{-1}=DR\left( b\right) ^{-1}\times PM^{2}\times DR\left( b\right) ^{-1}= \left[ DR\left( b\right) ^{-1}PM\right] ^{2}=P^{2}\text{,} $$

from which \(P=DR\left ( b\right ) ^{-1}PM\), where PM depends on \(\beta \) only through the derivatives \(R_{g}\left ( b\right ) \), \(g=1,...,m\).
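Computationally, PM can be obtained from the spectral decomposition of M; a minimal sketch (assuming M is symmetric positive definite; dr below denotes a hypothetical vector holding the diagonal of \(DR\left ( b\right ) \)):

```python
import numpy as np

def sym_inv_sqrt(M):
    """Symmetric square root of M^{-1} (the matrix PM above), computed
    from the eigendecomposition of the symmetric pd matrix M."""
    vals, vecs = np.linalg.eigh(M)
    return vecs @ np.diag(vals**-0.5) @ vecs.T

# P = np.diag(1.0 / dr) @ sym_inv_sqrt(M)    # P = DR(b)^{-1} PM, as in the proof
```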

Then, as PM is symmetric, \(P\times wd=PM\times \left [ DR\left ( b\right ) ^{-1}wd\right ] \), which is an m-vector depending on the auxiliary restrictions only through \(R_{g}\left ( b\right ) \), \(g=1,...,m\), because the functions \(r_{g}\left ( b\right ) \) are canceled out in \(DR\left ( b\right ) ^{-1}wd\). Thus, when all the functions \(r_{g}\left ( \cdot \right ) \), \( g=1,...,m\), are scalars, the vector \(Pn^{-1/2}s^{2}wd\) does not depend directly on the value of the \(r_{g}\left ( \beta \right ) \).

Now, if \(r_{g}\left ( \beta \right ) \) is scalar affine, that is, \(r_{g}\left ( \beta \right ) =R_{g}\beta -q\), then \(R_{g}\left ( b\right ) =R_{g}\), \( g=1,...,m \), are vectors of constants not involving b. Thus, M (and PM) does not involve b, which is the only link between \( P\times wd \) and \(\beta \). Therefore \( P\times wd \) is a vector of asymptotically pivotal statistics. From Lemma 3, these statistics are asymptotically independent.

For independent rv’s \(t_{g}\), \(g=1,...,m\), \(\Pr \left ( \sup \left \{ t_{1},...,t_{m}\right \} \leq t\right ) =\prod_{g=1}^{m}\Pr \left ( t_{g}\leq t\right ) \). If every \(t_{g}\) is asymptotically pivotal for all DGP’s in \(H_{0}\), then \(\Pr \left ( t_{g}\leq t\right ) \), \(g=1,...,m –\) and so \(\Pr \left ( \sup \left \{ t_{1},...,t_{m}\right \} \leq t\right ) –\) is invariant under all DGP’s in \(H_{0}\). Thus, \(\sup \left \{ P_{1}^{\prime }wd,\ldots ,P_{m}^{\prime }wd\right \} \) is asymptotically pivotal because the \(P_{g}^{\prime }wd\), \(g=1,...,m\), are asymptotically pivotal, independent statistics. Obviously, this statement applies to \(\sup \left \{ \left ( P_{1}^{\prime }wd\right ) ^{2},\ldots ,\left ( P_{m}^{\prime }wd\right ) ^{2}\right \} \) as well. □

About this article

Cite this article

Murteira, J.M.R., Ramalho, E.A. & Ramalho, J.J.S. Heteroskedasticity testing through a comparison of Wald statistics. Port Econ J 12, 131–160 (2013). https://doi.org/10.1007/s10258-013-0087-x
