
A model specification test for the variance function in nonparametric regression


Abstract

The problem of testing for the parametric form of the conditional variance is considered in a fully nonparametric regression model. A test statistic based on a weighted \(L_2\)-distance between the empirical characteristic functions of residuals constructed under the null hypothesis and under the alternative is proposed and studied theoretically. The null asymptotic distribution of the test statistic is obtained and employed to approximate the critical values. Finite sample properties of the proposed test are numerically investigated in several Monte Carlo experiments. The developed results assume independent data. Their extension to dependent observations is also discussed.
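The construction can be illustrated with a short numerical sketch. The following Python code is not the authors' implementation: the statistic is written, consistently with the description above, as \(T_n = n\int |\hat{\varphi }(t)-\hat{\varphi }_0(t)|^2 w(t)\,\mathrm{d}t\), and the Gaussian weight \(w\) and the quadrature grid are illustrative assumptions.

```python
# Hedged sketch of a weighted L2-distance between empirical characteristic
# functions (ECFs) of two residual samples; NOT the authors' code.
import numpy as np

def ecf(residuals, t):
    """ECF phi(t) = (1/n) * sum_j exp(i * t * eps_j), evaluated on a grid t."""
    return np.exp(1j * np.outer(t, residuals)).mean(axis=1)

def test_statistic(eps_hat, eps_hat_0, half_width=10.0, n_grid=2001):
    """n times the weighted L2-distance between the two ECFs.

    eps_hat:   residuals standardized by the nonparametric variance estimator
    eps_hat_0: residuals standardized by the fitted parametric variance
    The Gaussian weight and the trapezoidal quadrature are assumptions.
    """
    n = len(eps_hat)
    t = np.linspace(-half_width, half_width, n_grid)
    w = np.exp(-t**2 / 2.0) / np.sqrt(2.0 * np.pi)  # assumed weight function
    diff = ecf(eps_hat, t) - ecf(eps_hat_0, t)
    return n * np.trapz(np.abs(diff) ** 2 * w, t)
```

Large values of the statistic indicate a discrepancy between the two sets of residuals and hence evidence against the hypothesized parametric variance function; in the paper, critical values are obtained from the null asymptotic distribution of the statistic.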


References

  • Alba-Fernández, V., Jiménez-Gamero, M.D., Muñoz-García, J.: A test for the two-sample problem based on empirical characteristic functions. Comput. Stat. Data Anal. 52, 3730–3748 (2008)

  • Bradley, R.C.: Basic properties of strong mixing conditions. A survey and some open questions. Probab. Surv. 2, 107–144 (2005)

  • Dette, H., Hetzler, B.: A simple test for the parametric form of the variance function in nonparametric regression. Ann. Inst. Statist. Math. 61, 861–886 (2009a)

  • Dette, H., Hetzler, B.: Khmaladze transformation of integrated variance processes with applications to goodness-of-fit testing. Math. Methods Statist. 18, 97–116 (2009b)

  • Dette, H., Marchlewski, M.: A robust test for homoscedasticity in nonparametric regression. J. Nonparametr. Stat. 22, 723–736 (2010)

  • Dette, H., Neumeyer, N., Van Keilegom, I.: A new test for the parametric form of the variance function in non-parametric regression. J. R. Statist. Soc. Ser. B 69, 903–917 (2007)

  • Fan, J., Gijbels, I.: Local Polynomial Modelling and Its Applications. Chapman & Hall, London (1996)

  • Fan, J., Yao, Q.: Nonlinear Time Series. Nonparametric and Parametric Methods. Springer, New York (2003)

  • Feller, W.: An Introduction to Probability Theory and Its Applications, vol. 2. Wiley, New Delhi (1971)

  • Hansen, B.E.: Uniform convergence rates for kernel estimation with dependent data. Econom. Theory 24, 726–748 (2008)

  • Hušková, M., Meintanis, S.G.: Goodness-of-fit tests for parametric regression models based on empirical characteristic functions. Kybernetika 45, 960–971 (2009)

  • Hušková, M., Meintanis, S.G.: Tests for the error distribution in nonparametric possibly heteroscedastic regression models. TEST 19, 92–112 (2010)

  • Koul, H.L., Song, W.: Conditional variance model checking. J. Stat. Plann. Inference 140, 1056–1072 (2010)

  • Liero, H.: Testing homoscedasticity in nonparametric regression. J. Nonparametr. Stat. 15, 31–51 (2003)

  • Neumeyer, N., Selk, L.: A note on non-parametric testing for Gaussian innovations in AR-ARCH models. J. Time Series Anal. 34, 362–367 (2013)

  • Neumeyer, N., Van Keilegom, I.: Bootstrap of residual processes in regression: to smooth or not to smooth? arXiv:1712.02685v1 (2017)

  • Pardo-Fernández, J.C., Jiménez-Gamero, M.D., El Ghouch, A.: A nonparametric ANOVA-type test for regression curves based on characteristic functions. Scand. J. Stat. 42, 197–213 (2015a)

  • Pardo-Fernández, J.C., Jiménez-Gamero, M.D., El Ghouch, A.: Tests for the equality of conditional variance functions in nonparametric regression. Electron. J. Stat. 9, 1826–1851 (2015b)

  • Samarakoon, N., Song, W.: Minimum distance conditional variance function checking in heteroscedastic regression models. J. Multivariate Anal. 102, 579–600 (2011)

  • Samarakoon, N., Song, W.: Empirical smoothing lack-of-fit tests for variance function. J. Stat. Plann. Inference 142, 1128–1140 (2012)

  • Selk, L., Neumeyer, N.: Testing for a change of the innovation distribution in nonparametric autoregression: the sequential empirical process approach. Scand. J. Stat. 40, 770–788 (2013)

  • Wang, L., Zhou, X.-H.: Assessing the adequacy of variance function in heteroscedastic regression models. Biometrics 63, 1218–1225 (2007)

  • Wu, C.F.: Asymptotic theory of nonlinear least squares estimation. Ann. Stat. 9, 501–513 (1981)

  • Yoshihara, K.I.: Limiting behavior of U-statistics for stationary, absolutely regular processes. Z. Wahrsch. Verw. Gebiete 35, 237–252 (1976)

  • Yuan, K.H.: A theorem on uniform convergence of stochastic functions with applications. J. Multivariate Anal. 62, 100–109 (1997)


Acknowledgements

The authors thank the anonymous referees for their valuable time and careful comments, which improved the presentation of this paper. The authors acknowledge financial support from Grants MTM2014-55966-P and MTM2017-89422-P, funded by the Spanish Ministerio de Economía, Industria y Competitividad, the Agencia Estatal de Investigación and the European Regional Development Fund. J.C. Pardo-Fernández also acknowledges funding from Banco Santander and Complutense University of Madrid (Project PR26/16-5B-1). M. D. Jiménez-Gamero also acknowledges support from CRoNoS COST Action IC1408.

Author information

Correspondence to Juan Carlos Pardo-Fernández.

A Proofs

A.1 Sketch of the proofs of the results in Sect. 3

Observe that, under Assumptions A.1, A.2 and A.3, for independent observations \((X_1,Y_1), \ldots , (X_n,Y_n)\) from model (1) (see, for example, Hansen 2008),

$$\begin{aligned} \begin{array}{lll} \displaystyle \sup _{x\in R}|\hat{m}(x)-m(x)|=o_P(n^{-1/4}),\\ \displaystyle \sup _{x\in R}|\hat{\sigma }(x)-\sigma (x)|=o_P(n^{-1/4}). \\ \end{array} \end{aligned}$$
(13)
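The estimators \(\hat{m}\) and \(\hat{\sigma }\) in (13) are nonparametric kernel smoothers of the regression and volatility functions. As a hedged illustration only (the exact smoother, kernel and bandwidth used in the paper may differ; local polynomial estimators in the spirit of Fan and Gijbels 1996 are a common alternative), a Nadaraya–Watson-type sketch in Python:

```python
# Hedged sketch of kernel estimators of m(x) and sigma(x); the Gaussian
# kernel and the fixed bandwidth h are illustrative assumptions.
import numpy as np

def nw_estimators(x0, X, Y, h):
    """Nadaraya-Watson estimates of m(x0) and sigma(x0)."""
    K = np.exp(-0.5 * ((x0 - X) / h) ** 2)  # kernel weights K((x0 - X_j)/h)
    K /= K.sum()
    m_hat = np.sum(K * Y)          # estimate of m(x0) = E(Y | X = x0)
    m2_hat = np.sum(K * Y ** 2)    # estimate of E(Y^2 | X = x0)
    # conditional variance sigma^2(x0) = E(Y^2 | x0) - m(x0)^2, floored at 0
    return m_hat, np.sqrt(max(m2_hat - m_hat ** 2, 0.0))
```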

Proof of (7) Let \(S_n(\theta )\) and \(S(\theta )\) be as defined in (3) and (5), respectively. From (13) and the SLLN, it follows that

$$\begin{aligned} S_n(\theta )=S(\theta )+o_P(1), \quad \forall \theta \in \Theta . \end{aligned}$$

Next we prove that the above convergence holds uniformly in \(\theta \). Routine calculations show that the derivative

$$\begin{aligned} \frac{\partial }{\partial \theta }\left\{ S_n(\theta )-S(\theta )\right\} , \end{aligned}$$

is bounded (in probability) \(\forall \theta \in \Theta _0\), which implies that the family \(\left\{ S_n(\theta )-S(\theta ),\, \theta \in \Theta _0\right\} \) is equicontinuous. By the lemma in Yuan (1997), it follows that

$$\begin{aligned} \inf _{\Vert \theta -\theta _0\Vert >\delta } \left\{ S_n(\theta )-S_n(\theta _0)\right\} =\inf _{\Vert \theta -\theta _0\Vert >\delta } \left\{ S(\theta )-S(\theta _0)\right\} +o_P(1). \end{aligned}$$

Now the result follows from Assumption B.1 and Lemma 1 in Wu (1981). \(\square \)

Proof of (8) Under the assumptions made, routine calculations show that under \(H_0\),

$$\begin{aligned} \sqrt{n} \frac{\partial }{\partial \theta } S_n(\theta _0)= & {} -\frac{2}{\sqrt{n}}\sum _{j=1}^n(\varepsilon ^2_j-1)\sigma ^2(X_j; \theta _0)\dot{\sigma }^2(X_j; \theta _0) +o_P(1), \end{aligned}$$
(14)
$$\begin{aligned} \frac{\partial ^2}{\partial \theta \partial \theta ^{T}} S_n(\theta _0)= & {} 2 \Omega +o_P(1). \end{aligned}$$
(15)

By Taylor expansion,

$$\begin{aligned} \frac{\partial }{\partial \theta } S_n(\hat{\theta })-\frac{\partial }{\partial \theta } S_n(\theta _0)=\frac{\partial ^2}{\partial \theta \partial \theta ^{T}} S_n(\theta _0)(\hat{\theta }-\theta _0)+o(\Vert \hat{\theta }-\theta _0\Vert ). \end{aligned}$$
(16)

Taking into account that \(\frac{\partial }{\partial \theta } S_n(\hat{\theta })=0\), the result follows from (14)–(16); explicitly, solving (16) for \(\hat{\theta }-\theta _0\) and substituting (14) and (15) yields
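$$\begin{aligned} \sqrt{n}(\hat{\theta }-\theta _0)=\Omega ^{-1}\frac{1}{\sqrt{n}}\sum _{j=1}^n(\varepsilon _j^2-1)\sigma ^2(X_j;\theta _0)\dot{\sigma }^2(X_j;\theta _0)+o_P(1) \end{aligned}$$

(compare with the corresponding expansion under \(H_{1n}\) in the proof of Theorem 9, which contains an additional drift term). \(\square \)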

Proof of Theorem 3

From Lemma 10 (i) in Pardo-Fernández et al. (2015b), it follows that

$$\begin{aligned} \begin{array}{rcl} \displaystyle \sqrt{n}\hat{\varphi }(t) &{} = &{} \displaystyle \sqrt{n}\tilde{\varphi }(t)+\mathrm{i}\frac{t}{\sqrt{n}}\sum _{j=1}^n \exp (\mathrm{i}t\varepsilon _j)\frac{m(X_j)-\hat{m}(X_j)}{\sigma (X_j)}\\ &{} &{} \displaystyle +\, \mathrm{i}\frac{t}{\sqrt{n}}\sum _{j=1}^n \exp (\mathrm{i}t\varepsilon _j)\varepsilon _j\frac{\sigma (X_j)-\hat{\sigma }(X_j)}{\sigma (X_j)}+tR_{11}(t)+t^2R_{12}(t), \end{array} \end{aligned}$$

with

$$\begin{aligned} \tilde{\varphi }(t)=\frac{1}{n}\sum _{j=1}^n\exp (\mathrm{i}t \varepsilon _j) \end{aligned}$$
(17)

and \(\sup _t |R_{1k}(t)|=o_P(1)\), \(k=1,2\). As for \(\hat{\varphi }_0(t)\), since

$$\begin{aligned} \begin{array}{rcl} \displaystyle \hat{\varepsilon }_{0j}-\varepsilon _j &{} = &{} \displaystyle \frac{m(X_j)-\hat{m}(X_j)}{\sigma (X_j)}+ \frac{\{m(X_j)-\hat{m}(X_j) \} \{ \sigma (X_j)-\sigma (X_j;\hat{\theta }) \}}{\sigma (X_j)\sigma (X_j;\hat{\theta })}\\ &{} &{} \displaystyle +\, \frac{ \sigma (X_j)-\sigma (X_j;\hat{\theta }) }{\sigma (X_j)}\varepsilon _j+ \frac{ \{ \sigma (X_j)-\sigma (X_j;\hat{\theta }) \}^2}{\sigma (X_j)\sigma (X_j;\hat{\theta })}\varepsilon _j, \end{array} \end{aligned}$$

we have that

$$\begin{aligned} \begin{array}{rcl} \displaystyle \sqrt{n}\hat{\varphi }_0(t) &{} = &{} \displaystyle \sqrt{n}\tilde{\varphi }(t)+\mathrm{i}\frac{t}{\sqrt{n}}\sum _{j=1}^n \exp (\mathrm{i}t\varepsilon _j)\frac{m(X_j)-\hat{m}(X_j)}{\sigma (X_j)}\\ &{} &{} \displaystyle +\, \mathrm{i}\frac{t}{\sqrt{n}}\sum _{j=1}^n \exp (\mathrm{i}t\varepsilon _j)\varepsilon _j\frac{\sigma (X_j)-{\sigma }(X_j;\hat{\theta })}{\sigma (X_j)}+tR_{13}(t)+t^2R_{14}(t), \end{array} \end{aligned}$$

with \(\sup _t |R_{1k}(t)|=o_P(1)\), \(k=3,4\). From Lemma 11 in Pardo-Fernández et al. (2015b),

$$\begin{aligned} \mathrm{i}\frac{t}{\sqrt{n}}\sum _{j=1}^n \exp (\mathrm{i}t\varepsilon _j)\varepsilon _j\frac{\sigma (X_j)-\hat{\sigma }(X_j)}{\sigma (X_j)}=-\frac{t}{2}\varphi '(t)\frac{1}{\sqrt{n}}\sum _{j=1}^n (\varepsilon _j^2-1)+R_{15}(t) \end{aligned}$$

with \(\Vert R_{15}\Vert _w=o_P(1)\). By Taylor expansion and (8),

$$\begin{aligned} \mathrm{i}\frac{t}{\sqrt{n}}\sum _{j=1}^n \exp (\mathrm{i}t\varepsilon _j)\varepsilon _j\frac{\sigma (X_j)-{\sigma }(X_j;\hat{\theta })}{\sigma (X_j)}=V(t)+R_{16}(t), \end{aligned}$$

with \(\Vert R_{16}\Vert _w=o_P(1)\) and

$$\begin{aligned} V(t)=\frac{1}{2}\frac{\mathrm{i}t}{n\sqrt{n}} \sum _{j,k=1}^n \exp (\mathrm{i}t\varepsilon _j)\varepsilon _j\frac{\dot{\sigma }^2(X_j;\theta _0)^T}{\sigma ^2(X_j;\theta _0)} \Omega ^{-1} \dot{\sigma }^2(X_k;\theta _0)\sigma ^2(X_k;\theta _0)(\varepsilon _k^2-1). \end{aligned}$$

Routine calculations show that

$$\begin{aligned} V(t)=-\frac{t}{2}\varphi '(t)\frac{1}{\sqrt{n}}\mu ^T\Omega ^{-1}\sum _{j=1}^n\dot{\sigma }^2(X_j;\theta _0)\sigma ^2(X_j;\theta _0)(\varepsilon _j^2-1)+R_{17}(t) \end{aligned}$$

with \(\Vert R_{17}\Vert _w=o_P(1)\). Combining the above expansions of \(\sqrt{n}\hat{\varphi }(t)\) and \(\sqrt{n}\hat{\varphi }_0(t)\),
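$$\begin{aligned} \sqrt{n}\left\{ \hat{\varphi }(t)-\hat{\varphi }_0(t)\right\} =-\frac{t}{2}\varphi '(t)\frac{1}{\sqrt{n}}\sum _{j=1}^n(\varepsilon _j^2-1)\left\{ 1-\mu ^T\Omega ^{-1}\dot{\sigma }^2(X_j;\theta _0)\sigma ^2(X_j;\theta _0)\right\} +R(t), \end{aligned}$$

where \(R(t)\) collects the remainder terms above and satisfies \(\Vert R\Vert _w=o_P(1)\). The result follows. \(\square \)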

Proof of Theorem 8

From Lemma 10 (i) in Pardo-Fernández et al. (2015b), it follows that

$$\begin{aligned} \hat{\varphi }(t)=\tilde{\varphi }(t)+tR(t), \end{aligned}$$
(18)

with \(\tilde{\varphi }(t)\) as defined in (17) and

$$\begin{aligned} \sup _t |R(t)|=o_P(1). \end{aligned}$$
(19)

As for \(\hat{\varphi }_0(t)\), since

$$\begin{aligned}&\hat{\varepsilon }_{0j}-\varepsilon _{0j} = \frac{m(X_j)-\hat{m}(X_j)}{\sigma (X_j;\hat{\theta })}+\varepsilon _j\frac{\sigma (X_j)}{\sigma (X_j;\hat{\theta })\sigma (X_j; \theta _0)} \left\{ \sigma (X_j;\theta _0)-\sigma (X_j;\hat{\theta })\right\} ,\\&\quad \left| \frac{1}{n}\sum _{j=1}^n\frac{m(X_j)-\hat{m}(X_j)}{\sigma (X_j;\hat{\theta })} \right| \le \frac{1}{\displaystyle \inf _{x\in R, \, \theta \in \Theta _0} \sigma (x;\theta )}\sup _{x\in R}|\hat{m}(x)-m(x)|=o_P(1), \end{aligned}$$

and

$$\begin{aligned}&\left| \frac{1}{n}\sum _{j=1}^n \varepsilon _j\frac{\sigma (X_j)}{\sigma (X_j;\hat{\theta })\sigma (X_j; \theta _0)} \left\{ \sigma (X_j;\theta _0)-\sigma (X_j;\hat{\theta })\right\} \right| \\&\quad \le \left( \frac{1}{n}\sum _{j=1}^n \varepsilon _j^2\right) ^{1/2} \frac{\displaystyle \sup _{x\in R} \sigma (x)}{\displaystyle \inf _{x\in R, \, \theta \in \Theta _0} \sigma ^3(x;\theta )} \left( \frac{1}{n}\sum _{j=1}^n \left\{ \sigma ^2(X_j;\theta _0)-\sigma ^2(X_j;\hat{\theta })\right\} ^2\right) ^{1/2}\\&\quad =o_P(1), \end{aligned}$$

by Taylor expansion, we get

$$\begin{aligned} \hat{\varphi }_0(t)=\tilde{\varphi }_0(t)+tR_0(t), \end{aligned}$$
(20)

with

$$\begin{aligned} \tilde{\varphi }_0(t)=\frac{1}{n}\sum _{j=1}^n\exp (\mathrm{i}t \varepsilon _{0j}), \qquad \sup _t |R_0(t)|=o_P(1). \end{aligned}$$
(21)

The result follows from (18)–(21), by taking into account that \(\Vert \tilde{\varphi }-{\varphi }\Vert _w=o_P(1)\) and \(\Vert \tilde{\varphi }_0-{\varphi }_0\Vert _w=o_P(1)\). \(\square \)

Proof of Theorem 9

Under \(H_{1n}\),

$$\begin{aligned} \sqrt{n}(\hat{\theta }-{\theta }_0)= & {} \displaystyle \Omega ^{-1}\frac{1}{\sqrt{n}}\sum _{j=1}^n(\varepsilon _j^2-1)\sigma ^2(X_j;\theta _0)\dot{\sigma }^2(X_j;\theta _0)\\&\displaystyle +\,\Omega ^{-1}E\{r(X)\dot{\sigma }^2(X;\theta _0)\}+o_P(1). \end{aligned}$$

By applying the results in Yuan (1997), we get that (13) also holds under \(H_{1n}\). Now, the result follows by proceeding as in the proof of Theorem 3. \(\square \)

A.2 Sketch of the proofs of the results in Sect. 6

Under Assumptions A.2, A.3, C.1–C.4 and C.6 (see, for example, Hansen 2008),

$$\begin{aligned} \begin{array}{lll} \displaystyle \sup _{x\in R_g}|\hat{m}(x)-m(x)|=o_P(n^{-1/4}),\\ \displaystyle \sup _{x\in R_g}|\hat{\sigma }(x)-\sigma (x)|=o_P(n^{-1/4}). \\ \end{array} \end{aligned}$$
(22)

Proof of Theorem 11

Proceeding as in the proof of Theorem 3, we obtain

$$\begin{aligned} \sqrt{n} \left\{ \hat{\varphi }_g(t)-\hat{\varphi }_{0g}(t)\right\} = V_1(t)+V_2(t)+tR_1(t)+t^2R_2(t), \end{aligned}$$

with \(\sup _t |R_{k}(t)|=o_P(1)\), \(k=1,2\),

$$\begin{aligned} V_1(t)= & {} \mathrm{i}\frac{t}{\sqrt{n}}\sum _{j=1}^n \exp (\mathrm{i}t\varepsilon _j)\varepsilon _jg(X_j)\frac{\sigma (X_j)-\hat{\sigma }(X_j)}{\sigma (X_j)},\\ V_2(t)= & {} \mathrm{i}\frac{t}{\sqrt{n}}\sum _{j=1}^n \exp (\mathrm{i}t\varepsilon _j)\varepsilon _jg(X_j)\frac{\sigma (X_j)-\sigma (X_j;\hat{\theta })}{\sigma (X_j)}. \end{aligned}$$

From (22),

$$\begin{aligned}&\sup _{x \in R_g}\left| \hat{\sigma }(x)-\sigma (x)-\frac{1}{2nf(x)\sigma (x)}\sum _{j=1}^{n} K_h(X_{j}-x)\left[ \left\{ Y_{j}-m(x)\right\} ^2-\sigma ^2(x)\right] \right| \\&\quad =o_p(n^{-1/2}), \end{aligned}$$

where \(K_h(\cdot )=\frac{1}{h}K(\frac{\cdot }{h})\). By using this expansion, we get that

$$\begin{aligned} V_1(t)=V_3(t)+tR_3(t), \end{aligned}$$

with \(\sup _t |R_{3}(t)|=o_P(1)\) and

$$\begin{aligned} V_3(t)= & {} -\mathrm{i}\frac{t}{2n\sqrt{n}}\sum _{j,k=1}^n \exp (\mathrm{i}t\varepsilon _j)\varepsilon _jg(X_j)\frac{1}{\sigma ^2(X_j)f(X_j)}\\&\times \, K_h(X_{j}-X_k)\left[ \left\{ Y_k-m(X_j)\right\} ^2-\sigma ^2(X_j)\right] . \end{aligned}$$

By applying the Hoeffding decomposition and Lemma 2 in Yoshihara (1976), we get that

$$\begin{aligned} V_3(t)=-\frac{t}{2} \varphi '(t)\frac{1}{\sqrt{n}}\sum _{j=1}^ng(X_j)(\varepsilon ^2_j-1)+tR_4(t), \end{aligned}$$

with \(\sup _t |R_{4}(t)|=o_P(1)\).

By Taylor expansion and following similar steps to those given for \(V_1(t)\), we obtain

$$\begin{aligned} V_2(t)=-\frac{t}{2} \varphi '(t)\frac{1}{\sqrt{n}}\sum _{j=1}^n\mu _g^T l(\varepsilon _j, X_j; \theta _0)+tR_5(t), \end{aligned}$$

with \(\sup _t |R_{5}(t)|=o_P(1)\). Putting together all of the above facts,
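$$\begin{aligned} \sqrt{n} \left\{ \hat{\varphi }_g(t)-\hat{\varphi }_{0g}(t)\right\} =-\frac{t}{2}\varphi '(t)\frac{1}{\sqrt{n}}\sum _{j=1}^n\left\{ g(X_j)(\varepsilon _j^2-1)+\mu _g^T l(\varepsilon _j, X_j; \theta _0)\right\} +tR_6(t)+t^2R_7(t), \end{aligned}$$

with \(\sup _t |R_{k}(t)|=o_P(1)\), \(k=6,7\), and the result follows. \(\square \)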


Cite this article

Pardo-Fernández, J.C., Jiménez-Gamero, M.D. A model specification test for the variance function in nonparametric regression. AStA Adv Stat Anal 103, 387–410 (2019). https://doi.org/10.1007/s10182-018-00336-y
