
Goodness-of-fit tests in linear EV regression with replications


Abstract

This paper proposes a goodness-of-fit test for checking the adequacy of parametric forms of the regression error density functions in linear errors-in-variables regression models. Instead of assuming the distribution of the measurement error to be known, we assume that replications of the surrogates of the latent variables are available. The test statistic is based upon a weighted integrated squared distance between a nonparametric estimator and a semi-parametric estimator of the density functions of certain residuals. Under the null hypothesis, the test statistic is shown to be asymptotically normal. Consistency and local power results of the proposed test under fixed alternatives and local alternatives are also established. Finite sample performance of the proposed test is evaluated via simulation studies. A real data example is also included to demonstrate an application of the proposed test.
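To fix ideas, the following minimal sketch (Python, with hypothetical names and tuning choices; it is not the authors' code) computes a weighted integrated squared distance between a kernel density estimate of residuals and a density fitted under a hypothetical Gaussian null. The statistic studied in the paper replaces the fitted parametric density by a semi-parametric estimator that accounts for the measurement error through the replicated surrogates, and it is centered and scaled as in the asymptotic normality result; none of that is reproduced here.

```python
import numpy as np
from scipy.stats import norm

def weighted_ise(resid, b, grid, weight):
    """Weighted integrated squared distance between two density estimates on a grid."""
    # Nonparametric kernel density estimate of the residual density (Gaussian kernel).
    kde = norm.pdf((grid[:, None] - resid[None, :]) / b).mean(axis=1) / b
    # Parametric estimate under a hypothetical Gaussian null for the error density.
    fitted = norm.pdf(grid, loc=resid.mean(), scale=resid.std(ddof=1))
    dv = grid[1] - grid[0]
    return np.sum((kde - fitted) ** 2 * weight(grid)) * dv

rng = np.random.default_rng(1)
resid = rng.standard_normal(300)          # stand-in for residuals from a fitted EV regression
grid = np.linspace(-4.0, 4.0, 401)
print(weighted_ise(resid, b=0.35, grid=grid, weight=lambda v: norm.pdf(v)))
```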


References

  • Beran R (1977) Minimum Hellinger distance estimates for parametric models. Ann Stat 5(3):445–463

  • Blas B, Bolfarine H, Lachos V (2013) Statistical analysis of controlled calibration model with replicates. J Stat Comput Simul 83:941–961

  • Buonaccorsi J (2010) Measurement error: models, methods and applications. Chapman & Hall, London

  • Carroll R, Ruppert D, Stefanski L, Crainiceanu C (2006) Measurement error in nonlinear models. Chapman & Hall/CRC, Boca Raton

  • Cheng C, Van Ness JW (1999) Statistical regression with measurement error. Wiley, New York

  • Dalen I, Buonaccorsi J, Sexton J, Laake P, Thoresen M (2009) Correction for misclassification of a categorized exposure in binary regression using replication data. Stat Med 28:3386–3410

  • Delaigle A, Hall P, Meister A (2008) On deconvolution with repeated measurements. Ann Stat 36:665–685

  • Eckert R, Carroll RJ, Wang N (1997) Transformations to additivity in measurement error models. Biometrics 53:262–272

  • Fuller W (1987) Measurement error models. Wiley, New York

  • Gao J, Gijbels I (2008) Bandwidth selection in nonparametric kernel testing. J Am Stat Assoc 103:1584–1594

  • Gimenez P, Patat M (2005) Estimation in comparative calibration models with replicate measurement. Stat Probab Lett 71:155–164

  • Holzmann H, Bissantz N, Munk A (2007) Density testing in a contaminated sample. J Multivar Anal 98:57–75

  • Huwang L (1995) Interval estimation in structural errors-in-variables model with partial replication. J Multivar Anal 55:230–245

  • Jennrich R (1969) Asymptotic properties of non-linear least squares estimators. Ann Math Stat 40:633–643

  • Khmaladze E, Koul H (2004) Martingale transforms goodness-of-fit tests in regression models. Ann Stat 32:995–1034

  • Khmaladze E, Koul H (2009) Goodness-of-fit problems for errors in nonparametric regression. Ann Stat 37:3165–3185

  • Koul H (2002) Weighted empirical processes in dynamic nonlinear models. Lecture notes in statistics, vol 166. Springer, New York

  • Koul H, Ni P (2004) Minimum distance regression model checking. J Stat Plan Inference 119:109–141

  • Koul H, Song W (2010) Model checking in partial linear regression models with Berkson measurement error. Stat Sin 20:1551–1579

  • Koul H, Song W (2012) Bickel–Rosenblatt type goodness-of-fit tests in errors-in-variables model. J Fr Stat Soc 153:52–70

  • Koul H, Song W, Zhu X (2018) Goodness-of-fit testing of error distribution in linear measurement error models. Ann Stat (accepted)

  • Laurent B, Loubes J, Marteau C (2011) Testing inverse problems: a direct or an indirect problem? J Stat Plan Inference 141:1849–1861

  • Li T, Vuong Q (1998) Nonparametric estimation of the measurement error model using multiple indicators. J Multivar Anal 65:139–165

  • Lin J, Cao C (2013) On estimation of measurement error models with replication under heavy-tailed distributions. Comput Stat 28:809–829

  • Rao BP (1992) Identifiability in stochastic models. Academic Press, San Diego

  • White H (1981) Consequences and detection of misspecified nonlinear regression models. J Am Stat Assoc 76:419–433

  • White H (1982) Maximum likelihood estimation of misspecified models. Econometrica 50:1–25

  • Xiao Z, Shao J, Palta M (2010) Instrumental variable and GMM estimation for panel data with measurement error. Stat Sin 20:1725–1747

Author information

Corresponding author

Correspondence to Weixing Song.

Additional information

This research was supported by NSF Grant DMS 1205276.

Appendix: Proofs of main results

This section contains the proofs of the main theorems stated in Sects. 3 and 4. Since the main ideas of the proofs are similar to those in Koul and Song (2012), only the differences are presented here for the sake of brevity. In particular, we focus on the statistic \(T_n({\hat{\alpha }}_n,{\hat{\beta }}_n,{\hat{\theta }}_n)\), which is decomposed into two parts: one part can be handled directly by the argument of Koul and Song (2012), while the other part, which collects all terms involving the kernel density estimator \({\hat{f}}_{{\bar{U}} n}\), has to be investigated separately. The treatment of the normalizing constants \({\hat{C}}_n\) and \({\hat{\Gamma }}_n\) is similar to that in Koul and Song (2012) and is therefore omitted.
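Throughout, recall (notation as in Sect. 3, restated here in the form made explicit by the first display of the proof of Theorem 1) that the two smoothed densities being compared are

$$\begin{aligned} \tilde{f}_{\xi ,b}(v;\beta ,\theta )=\iint K_b(v-u)f_\varepsilon \left( u+\beta ^Tt,\theta \right) {\hat{f}}_{{\bar{U}}n}(t)dtdu,\qquad f_{\xi ,b}(v;\beta ,\theta )=\iint K_b(v-u)f_\varepsilon \left( u+\beta ^Tt,\theta \right) f_{{\bar{U}}}(t)dtdu, \end{aligned}$$

so that their difference is exactly the remainder \(R_{bw}\) isolated at the beginning of the proof of Theorem 1.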

Proof of Theorem 1:

Note that

$$\begin{aligned} \tilde{f}_{\xi ,b}(v;\hat{\beta }_n,\hat{\theta }_n)&=f_{\xi ,b} (v;\hat{\beta }_n,\hat{\theta }_n)+\iint K_b(v-u)f_\varepsilon \left( u+\hat{\beta }_n^T t,\hat{\theta }_n\right) ({\hat{f}}_{{\bar{U}}n}(t)\\&\quad -f_{{\bar{U}}}(t))dudt\\&=f_{\xi ,b}(v;\hat{\beta }_n,\hat{\theta }_n)+R_{bw}(v;{\hat{\beta }}_n,{\hat{\theta }}_n), \end{aligned}$$

then the statistic \(T_n({\hat{\alpha }}_n,{\hat{\beta }}_n,{\hat{\theta }}_n)\) defined in (8) can be written as the sum of the following three terms,

$$\begin{aligned} T_{n1}=&\int [\hat{f}_{\xi ,n}(v;\hat{\alpha }_n,\hat{\beta }_n)-f_{\xi ,b} (v;\hat{\beta }_n,\hat{\theta }_n)]^2d\Pi (v),\\ T_{n2}=&\int [R_{bw}(v;\hat{\beta }_n,\hat{\theta }_n)]^2d\Pi (v),\\ T_{n3}=&-2\int [\hat{f}_{\xi ,n}(v;\hat{\alpha }_n,\hat{\beta }_n)-f_{\xi ,b} (v;\hat{\beta }_n,\hat{\theta }_n)]R_{bw}(v;\hat{\beta }_n,\hat{\theta }_n)d\Pi (v). \ \end{aligned}$$

We start by showing that \(nb^{1/2}T_{n2}=o_p(1)\). Adding and subtracting \(f_\varepsilon (u+\beta _0^Tt,\theta _0)\) from \(f_\varepsilon (u+\hat{\beta }_n^Tt,\hat{\theta }_n)\), and \(E{\hat{f}}_{{\bar{U}}n}(t)\) from \({\hat{f}}_{{\bar{U}}n}(t)\), we can write \(R_{bw}\) as the sum of the following four terms:

$$\begin{aligned} R_{bw1}= & {} \iint K_b(v-u)\left[ f_\varepsilon \left( u+\hat{\beta }_n^Tt,\hat{\theta }_n\right) -f_\varepsilon \left( u+\beta _0^Tt,\theta _0\right) \right] \\&\times \left[ {\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)\right] dudt,\\ R_{bw2}= & {} \iint K_b(v-u)f_\varepsilon \left( u+\beta _0^Tt,\theta _0\right) \left( {\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)\right) dudt,\\ R_{bw3}= & {} \iint K_b(v-u)\left[ f_\varepsilon \left( u+\hat{\beta }_n^Tt,\hat{\theta }_n\right) -f_\varepsilon \left( u+\beta _0^Tt,\theta _0\right) \right] \\&\times \left[ E{\hat{f}}_{{\bar{U}}n}(t)-f_{{\bar{U}}}(t)\right] dudt,\\ R_{bw4}= & {} \iint K_b(v-u)f_\varepsilon \left( u+\beta _0^Tt,\theta _0\right) \left[ E{\hat{f}}_{{\bar{U}}n}(t)-f_{{\bar{U}}}(t)\right] dudt. \end{aligned}$$

It is well known that \(E {\hat{f}}_{{\bar{U}}n}(t)=f_{{\bar{U}}}(t)+w^2\mu _2(L)\text{ tr }(f_{{\bar{U}}}''(t))/2+o(w^2)\). Then, from (f1), we obtain \( R_{bw4}=2^{-1}w^2\mu _2(L)\iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0)\text{ tr }(f_{{\bar{U}}}''(t))dudt+o(w^2). \) This, together with (g2), shows that \(|R_{bw4}(v)|=O(w^2)\) uniformly in v, which in turn implies that \(\int R_{bw4}^2(v)d\Pi (v)=O(w^4)\). Hence, by assumption (b1),

$$\begin{aligned} nb^{\frac{1}{2}}\int R_{bw4}^2(v)d\Pi (v)=O(nb^{1/2}w^4)=o(1). \end{aligned}$$
(11)
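For completeness, the bias expansion invoked at the beginning of this argument is the usual second-order Taylor expansion: assuming, as we read the kernel conditions, that L is symmetric with \(\int vv^TL(v)dv=\mu _2(L)I_d\), and using the smoothness of \(f_{{\bar{U}}}\),

$$\begin{aligned} E{\hat{f}}_{{\bar{U}}n}(t)&=\int \frac{1}{w^d}L\Big (\frac{t-s}{w}\Big )f_{{\bar{U}}}(s)ds=\int L(v)f_{{\bar{U}}}(t-wv)dv\\&=f_{{\bar{U}}}(t)-w\int L(v)v^Tf_{{\bar{U}}}'(t)dv+\frac{w^2}{2}\int L(v)v^Tf_{{\bar{U}}}''(t)vdv+o(w^2)\\&=f_{{\bar{U}}}(t)+\frac{w^2\mu _2(L)}{2}\text{ tr }\left( f_{{\bar{U}}}''(t)\right) +o(w^2), \end{aligned}$$

since the first-order term vanishes by the symmetry of L and \(\int L(v)v^TAv\,dv=\mu _2(L)\text{ tr }(A)\) for any matrix A.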

Now consider \(R_{bw3}\). Denote

$$\begin{aligned}&f_\varepsilon \left( u+\hat{\beta }_n^Tt,\hat{\theta }_n\right) -f_\varepsilon \left( u+\beta _0^Tt,\theta _0\right) -(\hat{\beta }_n-\beta _0)^T\dot{f}_{\varepsilon ,\beta }\left( u+\beta _0^Tt,\theta _0\right) \\&\quad -(\hat{\theta }_n-\theta _0)^T\dot{f}_{\varepsilon ,\theta }\left( u+\beta _0^Tt,\theta _0\right) \end{aligned}$$

by \(\Delta f_\varepsilon (t,u;{\hat{\beta }}_n,{\hat{\theta }}_n)\). First we can write \(R_{bw3}\) as the sum of the following two terms,

$$\begin{aligned} R_{bw31}=\iint K_b(v-u) \Delta f_\varepsilon (t,u;{\hat{\beta }}_n,{\hat{\theta }}_n) [E{\hat{f}}_{{\bar{U}}n}(t)-f_{{\bar{U}}}(t)]dudt, \end{aligned}$$

and

$$\begin{aligned} R_{bw32}= & {} \iint K_b(v-u)\left[ (\hat{\beta }_n-\beta _0)^T\dot{f}_{\varepsilon ,\beta } \left( u+\beta _0^Tt,\theta _0\right) \right. \\&\left. +\,(\hat{\theta }_n-\theta _0)^T\dot{f}_{\varepsilon ,\theta } \left( u+\beta _0^Tt,\theta _0\right) \right] \cdot \\&\quad \left[ E{\hat{f}}_{{\bar{U}}n}(t)-f_{{\bar{U}}}(t)\right] dudt. \end{aligned}$$

By (f1), \(R_{bw31}\) is bounded above by

$$\begin{aligned}&\sup _{u,t}\left| \Delta f(t,u;{\hat{\beta }}_n,{\hat{\theta }}_n)\right| \cdot \left[ \frac{1}{2}w^2\mu _2(L) \iint K_b(v-u)\big |\text{ tr }\left( f_{{\bar{U}}}''(t)\right) \big |dudt+o(w^2)\right] \\&\quad =O_p(w^2/n), \end{aligned}$$

and \(R_{bw32}\) is bounded above by

$$\begin{aligned}&\Vert \hat{\beta }_n-\beta _0\Vert \cdot \iint K_b(v-u)\Big \Vert \dot{f}_{\varepsilon ,\beta }\left( u+\beta _0^Tt,\theta _0\right) \Big \Vert \cdot \Big |E{\hat{f}}_{{\bar{U}}n}(t)-f_{{\bar{U}}}(t)\Big |dudt\\&\quad +\Big \Vert \hat{\theta }_n-\theta _0\Big \Vert \cdot \iint K_b(v-u)\Big \Vert \dot{f}_{\varepsilon ,\theta }\left( u+\beta _0^Tt,\theta _0\right) \Big \Vert \cdot \Big |E{\hat{f}}_{{\bar{U}}n}(t)-f_{{\bar{U}}}(t)\Big |dudt. \end{aligned}$$

Note that

$$\begin{aligned}&\iint K_b(v-u)\Big \Vert \dot{f}_{\varepsilon ,\beta }\left( u+\beta _0^Tt,\theta _0\right) \Big \Vert \Big |E{\hat{f}}_{{\bar{U}}n}(t)-f_{{\bar{U}}}(t)\Big |dudt\\&\quad \le O(w^2)\iint K_b(v-u)\Big \Vert \dot{f}_{\varepsilon ,\beta }\left( u+\beta _0^Tt,\theta _0\right) \Big \Vert dudt+o(w^2)=O(w^2), \end{aligned}$$

and by changing variables, \(u=v+bx\), from (f2),

$$\begin{aligned}&\iint K_b(v-u)\Big \Vert \dot{f}_{\varepsilon ,\beta }\left( u+\beta _0^Tt,\theta _0\right) \Big \Vert dudt\\&\quad =\iint K(x)\Big \Vert \dot{f}_{\varepsilon ,\beta }\left( v+bx+\beta _0^Tt,\theta _0\right) \Big \Vert dxdt\\&\quad \le \iint K(x)\Big \Vert \dot{f}_{\varepsilon ,\beta }\left( v+\beta _0^Tt,\theta _0\right) \Big \Vert dxdt+b\iint | x|K(x)B\left( v+\beta _0^Tt,\theta _0\right) dxdt\\&\quad =\int \Big \Vert \dot{f}_{\varepsilon ,\beta }\left( v+\beta _0^Tt,\theta _0\right) \Big \Vert dt+b\int |x|K(x)dx\cdot \int B\left( v+\beta _0^Tt,\theta _0\right) dt. \end{aligned}$$

The \(\sqrt{n}\)-consistency of \({\hat{\beta }}_n\) and \({\hat{\theta }}_n\), and the integrability of \(\dot{f}_{\varepsilon ,\beta }(v+\beta _0^Tt,\theta _0)\) and \(B(v+\beta _0^Tt,\theta _0)\) with respect to t imply \( \int |R_{bw3}|^2d\Pi (v)= 2\left[ O_p\left( n^{-2}w^4\right) +O_p\left( n^{-1}w^4\right) \right] . \) Thus

$$\begin{aligned} nb^{1/2}\cdot \int |R_{bw3}|^2d\Pi (v)=nb^{1/2}O_p\left( \frac{w^4}{n^2} \right) +nb^{1/2}O_p\left( \frac{w^4}{n}\right) =o_p(1). \end{aligned}$$
(12)

Next, we consider \(R_{bw1}\). Adding and subtracting \((\hat{\beta }_n-\beta _0)^T\dot{f}_{\varepsilon ,\beta }(u+\beta _0^Tt,\theta _0) +(\hat{\theta }_n-\theta _0)^T\dot{f}_{\varepsilon ,\theta }(u+\beta _0^Tt,\theta _0)\) from \(f_\varepsilon (u+\hat{\beta }_n^Tt,\hat{\theta }_n)-f_\varepsilon (u+\beta _0^Tt,\theta _0)\), we can rewrite \(R_{bw1}\) as the sum of the following three terms:

$$\begin{aligned} R_{bw11}&=\iint K_b(v-u)[f_\varepsilon \left( u+\hat{\beta }_n^Tt,\hat{\theta }_n\right) -f_\varepsilon \left( u+\beta _0^Tt,\theta _0\right) -(\hat{\beta }_n\\&\quad -\beta _0)^T\dot{f}_{\varepsilon ,\beta }\left( u+\beta _0^Tt, \theta _0\right) \\&\quad -(\hat{\theta }_n-\theta _0)^T\dot{f}_{\varepsilon ,\theta }\left( u+\beta _0^Tt,\theta _0\right) ] \left[ {\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)\right] dudt,\\ R_{bw12}&=(\hat{\beta }_n-\beta _0)^T\iint K_b(v-u)\dot{f}_{\varepsilon ,\beta }\left( u+\beta _0^Tt, \theta _0\right) \left[ {\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)\right] dudt,\\ R_{bw13}&=(\hat{\theta }_n-\theta _0)^T\iint K_b(v-u)\dot{f}_{\varepsilon ,\theta } \left( u+\beta _0^Tt,\theta _0\right) \left[ {\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)\right] dudt. \end{aligned}$$

From condition (f1),

$$\begin{aligned} |R_{bw11}|&\le \sup _{t,u}\Big |f_\varepsilon \left( u+\hat{\beta }_n^Tt,\hat{\theta }_n\right) -f_\varepsilon \left( u+\beta _0^Tt,\theta _0\right) -(\hat{\beta }_n-\beta _0)^T\dot{f}_{\varepsilon ,\beta } \left( u+\beta _0^Tt,\theta _0\right) \\&-(\hat{\theta }_n-\theta _0)^T\dot{f}_{\varepsilon ,\theta }\left( u+\beta _0^Tt,\theta _0\right) \Big | \iint K_b(v-u)\Big |{\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)\Big |dudt\\ =&\,O_p\left( n^{-1}\right) \int K_b(v-u)du\int \Big |{\hat{f}}_{{\bar{U}}n}(t) -E{\hat{f}}_{{\bar{U}}n}(t)\Big |dt=O_p(n^{-1}). \end{aligned}$$

To consider \(R_{bw12}\) and \(R_{bw13}\), we need an upper bound for \(E\int |{\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)|dt\). By the Cauchy–Schwarz inequality, we have \( E\int |{\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)|dt\le \int (E|{\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)|^2)^{\frac{1}{2}}dt.\) Note that \(E[{\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)]^2\) equals

$$\begin{aligned}&\frac{1}{n}\left\{ {\frac{f_{{\bar{U}}}(t)}{w^d}\int L^2(v)dv+\frac{1}{2w^{d-2}}\int L^2(v) v^Tf_{{\bar{U}}}''(\tilde{t}_1)v dv}\right. \\&\qquad \left. -\left[ f_{{\bar{U}}}(t)+\frac{w^2}{2}\int L(v)v^T f_{{\bar{U}}}''(\tilde{t}_2) vdv\right] ^2\right\} \\&\quad =\frac{f_{{\bar{U}}}(t)}{nw^d}\int L^2(v)dv+\frac{1}{2nw^{d-2}} \int L^2(v) v^Tf_{{\bar{U}}}''(\tilde{t}_1)vdv-\frac{1}{n}f_{{\bar{U}}}^2(t)\\&\qquad -\frac{w^4}{4n}\left( \int L(v)v^Tf_{{\bar{U}}}''(\tilde{t}_2) vdv\right) ^2-\frac{w^2 f_{{\bar{U}}}(t)}{n}\int L(v)v^Tf_{{\bar{U}}}'' (\tilde{t}_2)vdv. \end{aligned}$$

where \(\tilde{t}_1\) and \(\tilde{t}_2\) are between t and \(t+vw\). Then, by conditions (g2) and (g3), we have \(E\int |{\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)|dt=O((nw^d)^{-1/2})\). Hence \( \int |{\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)|dt=O_p \left( (nw^d)^{-1/2}\right) \).
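As an aside, this \((nw^d)^{-1/2}\) rate for the \(L_1\) fluctuation of the kernel density estimator is easy to check numerically. The following sketch (a minimal illustration with \(d=1\), a Gaussian kernel, and hypothetical sample sizes and bandwidths; it is not the paper's simulation code) approximates \(E\int |{\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)|dt\) by Monte Carlo; the last printed column should stay roughly constant across the three settings.

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(-4.0, 4.0, 401)
dt = grid[1] - grid[0]

def kde(z, w):
    # kernel density estimate on the grid with a standard Gaussian kernel (d = 1)
    u = (grid[:, None] - z[None, :]) / w
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (z.size * w * np.sqrt(2.0 * np.pi))

for n, w in [(200, 0.4), (800, 0.3), (3200, 0.2)]:
    # approximate E f_hat by averaging the estimator over many independent samples
    reps = np.array([kde(rng.standard_normal(n), w) for _ in range(200)])
    l1 = (np.abs(reps - reps.mean(axis=0)).sum(axis=1) * dt).mean()
    # l1 * sqrt(n * w) should remain roughly constant if the (n w)^{-1/2} rate holds
    print(n, w, round(l1, 4), round(l1 * np.sqrt(n * w), 3))
```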

For \(R_{bw12}\), we have

$$\begin{aligned}&\left\| \iint K_b(v-u)\dot{f}_{\varepsilon ,\beta }\left( u+\beta _0^T t,\theta _0\right) \left[ {\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)\right] dudt\right\| \\&\quad \le \iint K(x) \Big \Vert \dot{f}_{\varepsilon ,\beta }\left( v+bx+\beta _0^Tt,\theta _0\right) \Big \Vert \cdot \Big |{\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)\Big |dxdt\\&\quad \le \iint K(x)\left[ \Big \Vert \dot{f}_{\varepsilon ,\beta }\left( v+\beta _0^T t,\theta _0\right) \Big \Vert +b|x|B(v+\beta _0^T t,\theta _0)\right] \\&\qquad \cdot |{\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)|dxdt, \end{aligned}$$

which is of order \(O_p(1/\sqrt{nw^d})\) by condition (f2). This, together with the \(\sqrt{n}\)-consistency of \({\hat{\beta }}_n\), implies that \(R_{bw12}(v)=O_p((nw^{d/2})^{-1})\) uniformly in v. Similarly, \(R_{bw13}(v)=O_p((nw^{d/2})^{-1})\) uniformly in v. Therefore,

$$\begin{aligned} nb^{1/2}\int (R_{bw1}(v))^2d\Pi (v)= nb^{1/2}\cdot O_p\left( \frac{1}{n^2w^d}\right) =O_p\left( \frac{b^{1/2}}{nw^d}\right) =o_p(1) \end{aligned}$$
(13)

from assumption (b2). Next we consider \(R_{bw2}\). Note that

$$\begin{aligned} R_{bw2}(v)&=\frac{1}{nw^d}\iint K_b(v-u)f_\varepsilon (u+\beta _0^T t,\theta _0)\\&\quad \times \left[ \sum _{i=1}^nL\left( \frac{t-\tilde{Z}_i}{w}\right) -\sum _{i=1}^n EL\left( \frac{t-\tilde{Z}_i}{w}\right) \right] dudt\\&=\frac{1}{n}\sum _{i=1}^n\iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0)[L_w(t-\tilde{Z}_i)-EL_w(t-\tilde{Z}_i)]dudt. \end{aligned}$$

Therefore,

$$\begin{aligned} E(R_{bw2}(v))^2=&\frac{1}{n}E\left[ \iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt, \theta _0)[L_w(t-\tilde{Z})\right. \\&\qquad \left. -EL_w(t-\tilde{Z})]dtdu\right] ^2\\ =&\frac{1}{n}\int \left[ \iint K(x)f_\varepsilon (v+bx+\beta _0^Tt,\theta _0)\right. \\&\quad \left. \times \left[ \frac{1}{w^d}L\left( \frac{t-z}{w}\right) -f_{{\bar{U}}}(t)+O(w^2)\right] dtdx\right] ^2f_{\tilde{Z}}(z)dz, \end{aligned}$$

which is of order \(O(n^{-1})\), implying that

$$\begin{aligned} nb^{1/2}\int (R_{bw2}(v))^2d\Pi (v)=nb^{1/2}O_p\left( n^{-1}\right) =o_p(1). \end{aligned}$$
(14)

Therefore, combining (11), (12), (13) and (14), we conclude that

$$\begin{aligned} nb^{1/2}T_{n2}=nb^{1/2}\int [R_{bw}(v;\hat{\beta }_n,\hat{\theta }_n)]^2d\Pi (v)=o_p(1) \end{aligned}$$
(15)

Next, we investigate the asymptotic behaviour of \(T_{n3}({\hat{\alpha }}_n,{\hat{\beta }}_n,{\hat{\theta }}_n)\). By the decomposition of \(R_{bw}\), we can see that

$$\begin{aligned} \int [\hat{f}_{\xi ,n}(v;\hat{\alpha }_n,\hat{\beta }_n)-f_{\xi ,b} (v;\hat{\beta }_n,\hat{\theta }_n)]R_{bw}(v;\hat{\beta }_n,\hat{\theta }_n)d\Pi (v)= \sum _{j=1}^4Q_{nj}, \end{aligned}$$
(16)

where \( Q_{nj}=\int [\hat{f}_{\xi ,n}(v;\hat{\alpha }_n,\hat{\beta }_n)-f_{\xi ,b}(v; \hat{\beta }_n,\hat{\theta }_n)]R_{bwj}(v;{\hat{\beta }}_n,\hat{\theta }_n)d\Pi (v). \) From Koul and Song (2012), we know that

$$\begin{aligned}&nb^{\frac{1}{2}}\left[ \int [\hat{f}_{\xi ,n}(v;\hat{\alpha }_n,\hat{\beta }_n) -f_{\xi ,b}(v;\hat{\beta }_n,\hat{\theta }_n)]^2d\Pi (v)-\hat{C}_n\right] \Longrightarrow N(0,\Gamma ),\quad \nonumber \\&nb^{\frac{1}{2}}\hat{C}_n=O_p(b^{-1/2}), \end{aligned}$$
(17)

where \(\Gamma \) is defined in (10). By the Cauchy–Schwarz inequality, we can see that \(nb^{1/2}|Q_{n1}|\) is bounded above by

$$\begin{aligned}&\left\{ nb^{\frac{1}{2}}\int \left[ \hat{f}_{\xi ,n}\left( v;\hat{\alpha }_n,\hat{\beta }_n\right) -f_{\xi ,b}\left( v;\hat{\beta }_n,\hat{\theta }_n\right) \right] ^2d\Pi (v)\right\} ^{\frac{1}{2}}\\&\qquad \times \left\{ nb^{\frac{1}{2}} \int \left[ R_{bw1}\left( v;\hat{\beta }_n,\hat{\theta }_n\right) \right] ^2d\Pi (v)\right\} ^{\frac{1}{2}}\\&\quad = \left\{ nb^{\frac{1}{2}}\left[ \int [\hat{f}_{\xi ,n}(v;\hat{\alpha }_n, \hat{\beta }_n)-f_{\xi ,b}(v;\hat{\beta }_n,\hat{\theta }_n)]^2d\Pi (v)-\hat{C}_n\right] +nb^{\frac{1}{2}}\hat{C}_n\right\} ^{\frac{1}{2}}\\&\qquad \cdot \left\{ nb^{\frac{1}{2}}\int [R_{bw1}(v;\hat{\beta }_n,\hat{\theta }_n)]^2 d\Pi (v)\right\} ^{\frac{1}{2}}, \end{aligned}$$

this, together with (13), implies that

$$\begin{aligned} nb^{1/2}|Q_{n1}|=O_p(b^{-1/4})\cdot O_p\left( \frac{b^{1/4}}{\sqrt{nw^d}}\right) =o_p(1). \end{aligned}$$
(18)

Similarly, from (12), we can show that

$$\begin{aligned} nb^{1/2}|Q_{n3}|=O_p(b^{-1/4})\cdot O_p\left( b^{1/4}w^2\right) =o_p(1). \end{aligned}$$
(19)

Now we show that \(nb^{1/2}Q_{nj}=o_p(1)\) for \(j=2, 4\). Recalling the definitions of \({\hat{f}}_{\xi ,n}(v;{\hat{\alpha }}_n,{\hat{\beta }}_n)\) and \(f_{\xi ,b}(v;{\hat{\beta }}_n,{\hat{\theta }}_n)\), we see that \(Q_{n2}\) can be written as

$$\begin{aligned} Q_{n2}=&\int \left[ \frac{1}{nb}\sum _{i=1}^nK\Big (\frac{v-\hat{\xi }_i}{b}\Big )-\int K_b(v-u)f_\xi (u;\hat{\beta }_n,\hat{\theta }_n)du\right] \\&\cdot \left[ \iint K_b(v-u)f_\varepsilon \left( u+\beta _0^Tt,\theta _0\right) \left( {\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)\right) dudt\right] d\Pi (v)\\ =&\int \bigg [\frac{1}{nb}\sum _{i=1}^n K\Big (\frac{v-Y_i+\hat{\alpha }_n+\hat{\beta }_n^T\bar{Z}_i}{b}\Big )-\frac{1}{nb}\sum _{i=1}^n K\Big (\frac{v-Y_i+\alpha _0+\beta _0^T\bar{Z}_i}{b}\Big )\\&+\frac{1}{nb}\sum _{i=1}^n K\Big (\frac{v-Y_i+\alpha _0+\beta _0^T\bar{Z}_i}{b}\Big )-\int K_b(v-u)f_\xi (u;\beta _0,\theta _0)du\\&+\int K_b(v-u)f_\xi (u;\beta _0,\theta _0)du-\int K_b(v-u)f_\xi (u;\hat{\beta }_n,\hat{\theta }_n)du\bigg ]\\&\cdot \bigg [\iint K_b(v-u) f_\varepsilon \left( u+\beta _0^Tt,\theta _0\right) \left[ {\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)\right] dtdu\bigg ]d\Pi (v)\\ =&\int \left[ \frac{1}{nb}\sum _{i=1}^n K\Big (\frac{v-Y_i+\alpha _0+\beta _0^T\bar{Z}_i}{b}\Big )-\int K_b(v-u)f_\xi (u;\beta _0,\theta _0)du\right] \\&\left[ \iint K_b(v-u)f_\varepsilon \left( u+\beta _0^Tt,\theta _0\right) \left( {\hat{f}}_{{\bar{U}}n}(t)-E{\hat{f}}_{{\bar{U}}n}(t)\right) dudt\right] d\Pi (v)+R_n, \end{aligned}$$

where the remainder term \(R_n\) converges to 0 faster than the first term, so it suffices to consider the first term. By the definition of \({\hat{f}}_{{\bar{U}}n}(t)\), we can rewrite the first term as \(S_{n}\),

$$\begin{aligned} S_n=&\frac{1}{n^2}\sum _{i=1}^n\sum _{j=1}^n\int \left[ \frac{1}{b}K\Big (\frac{v-\xi _i}{b}\Big )-E\frac{1}{b}K\Big (\frac{v-\xi }{b}\Big )\right] \\&\cdot \left[ \iint K_b(v-u)f_\varepsilon \left( u+\beta _0^Tt,\theta _0\right) \left[ \frac{1}{w^d}L\Big (\frac{t-\tilde{Z}_j}{w}\Big )-E\frac{1}{w^d}L\Big (\frac{t-\tilde{Z}}{w}\Big )\right] dudt\right] \\&\quad d\Pi (v). \end{aligned}$$

Recall the notation \(\xi =Y-\alpha _0-\beta _0^T\bar{Z}=\varepsilon -\beta _0^T\Big (U_1+U_2\Big )/2\), \(\tilde{Z}=(U_1-U_2)/2\). We have

$$\begin{aligned} ES_{n}&=\frac{1}{n}E\int \frac{1}{b}K\Big (\frac{v-\xi }{b}\Big )\left[ \iint K_b(v-u)f_\varepsilon \left( u+\beta _0^Tt,\theta _0\right) \frac{1}{w^d}L\Big (\frac{t-\tilde{Z}}{w}\Big )dtdu\right] d\Pi (v)\\&\quad -\frac{1}{n}\int E\frac{1}{b}K\Big (\frac{v-\xi }{b}\Big )\iint K_b(v-u)f_\varepsilon \left( u+\beta _0^Tt,\theta _0\right) E\frac{1}{w^d}L\Big (\frac{t-\tilde{Z}}{w}\Big )dtdud\Pi (v)\\&=\frac{1}{n}\int \Bigg [\int \!\!\!\int \!\!\!\int \frac{1}{b}K\Big (\frac{v-\varepsilon +\beta _0^T(u_1+u_2)/2}{b}\Big )\\&\left[ \iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0)\frac{1}{w^d}L\Big (\frac{t-(u_1-u_2)/2}{w}\Big )dtdu\right] \\&f(\varepsilon )f_{U}(u_1)f_{U}(u_2)\ \mathrm {d}\varepsilon \ \mathrm {d}u_1\ \mathrm {d}u_2\Bigg ]d\Pi (v)\\&\quad - \frac{1}{n}\int E\frac{1}{b}K\Big (\frac{v-\xi }{b}\Big )\iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0)E\frac{1}{w^d}L\Big (\frac{t-\tilde{Z}}{w}\Big )dtdud\Pi (v)\\&=\,O(n^{-1}). \end{aligned}$$

We also have

$$\begin{aligned} ES^2_{n}&=E\Bigg [\frac{1}{n^2}\sum _{i,j}\int \left[ \frac{1}{b}K\Big (\frac{v-\xi _i}{b}\Big )-E\frac{1}{b}K\Big (\frac{v-\xi }{b}\Big )\right] \\&\quad \iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0)\Bigg [\frac{1}{w^d}L\Big (\frac{t-\tilde{Z}_j}{w}\Big )-E\frac{1}{w^d}L\Big (\frac{t-\tilde{Z}}{w}\Big )\Bigg ]dudtd\Pi (v)\Bigg ]^2\\&=\frac{1}{n^4}\sum _{i,j}E\Bigg [\int \Big [\frac{1}{b}K\Big (\frac{v-\xi _i}{b}\Big )-E\frac{1}{b}K\Big (\frac{v-\xi }{b}\Big )\Big ]\\&\quad \iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0)\Big [\frac{1}{w^d}L\Big (\frac{t-\tilde{Z}_j}{w}\Big )-E\frac{1}{w^d}L\Big (\frac{t-\tilde{Z}}{w}\Big )\Big ]dudtd\Pi (v)\Bigg ]^2\\&\quad +\frac{n(n-1)}{n^4}E\Bigg [\int \Big [\frac{1}{b}K\Big (\frac{v-\xi _1}{b}\Big )-E\frac{1}{b}K\Big (\frac{v-\xi }{b}\Big )\Big ]\iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0)\\&\quad \Big [\frac{1}{w^d}L\Big (\frac{t-\tilde{Z}_2}{w}\Big )-E\frac{1}{w^d}L\Big (\frac{t-\tilde{Z}}{w}\Big )\Big ]dudtd\Pi (v)\int \Big [\frac{1}{b}K\Big (\frac{v-\xi _2}{b}\Big )-E\frac{1}{b}K\Big (\frac{v-\xi }{b}\Big )\Big ]\\&\quad \iint K_b(v-u)f_\varepsilon (u+\beta _0^T t,\theta _0)\Big [\frac{1}{w^d}L\Big (\frac{t-\tilde{Z}_1}{w}\Big )-E\frac{1}{w^d}L\Big (\frac{t-\tilde{Z}}{w}\Big )\Big ]dudtd\Pi (v)\Bigg ], \end{aligned}$$

which is of order \(O(n^{-2})\). The expectation and variance calculations together imply \(S_{n}=O_p(1/n)\), and hence \(Q_{n2}=O_p(1/n)\). Since \(b\rightarrow 0\),

$$\begin{aligned} nb^{1/2}Q_{n2}=o_p(1). \end{aligned}$$
(20)

Finally, we prove that \(nb^{\frac{1}{2}}Q_{n4}=o_p(1)\). First note that \(nb^{\frac{1}{2}}Q_{n4}\) can be written as the sum of \(nb^{1/2}S_{nj}\), \(j=1,2,3\), where

$$\begin{aligned} S_{n1} =&\int \left[ \frac{1}{nb}\sum _{i=1}^n K\Big (\frac{v-Y_i+\alpha _0 +\beta _0^T\bar{Z}_i}{b}\Big )-\int K_b(v-u)f_\xi (u;\beta _0,\theta _0)du\right] \\&\left[ \iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0)(E {\hat{f}}_{{\bar{U}}n} (t)-f_{{\bar{U}}}(t))dudt\right] d\Pi (v),\\ S_{n2}=&\int \bigg [\frac{1}{nb}\sum _{i=1}^n K\Big (\frac{v-Y_i+\hat{\alpha }_n +\hat{\beta }_n^T\bar{Z}_i}{b}\Big )-\frac{1}{nb}\sum _{i=1}^n K\Big (\frac{v-Y_i +\alpha _0+\beta _0^T\bar{Z}_i}{b}\Big )\bigg ]\\&\cdot \bigg [\iint K_b(v-u) f_\varepsilon (u+\beta _0^Tt,\theta _0)[E {\hat{f}}_{{\bar{U}}n} (t)-f_{{\bar{U}}}(t)]dtdu\bigg ]d\Pi (v),\\ S_{n3}=&\int \bigg [\int K_b(v-u)f_\xi (u;\beta _0,\theta _0)du-\int K_b(v-u)f_\xi (u;\hat{\beta }_n,\hat{\theta }_n)du\bigg ]\\&\cdot \bigg [\iint K_b(v-u) f_\varepsilon (u+\beta _0^Tt,\theta _0)[E {\hat{f}}_{{\bar{U}}n} (t)-f_{{\bar{U}}}(t)]dtdu\bigg ]d\Pi (v). \end{aligned}$$

We can easily see that \(E S_{n1}=0\). From the boundedness of \(f_{{\bar{U}}}''(t)\), we further have

$$\begin{aligned} E S^2_{n1}&\le \,\frac{1}{n}E \Bigg \{\int \left| \frac{1}{b}K\Big (\frac{v-\xi _1}{b}\Big )-E \frac{1}{b}K\Big (\frac{v-\xi }{b}\Big )\right| \\&\quad \times \left[ \iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0)\frac{w^2}{2}\int L(z)|z^T f_{{\bar{U}}}''(\tilde{t})z|dzdudt\right] d\Pi (v)\Bigg \}^2\\&\le \frac{B^2w^4}{4n}E \Bigg \{\int \left| \frac{1}{b}K\Big (\frac{v-\xi _1}{b} \Big )-E \frac{1}{b}K\Big (\frac{v-\xi }{b}\Big )\right| \\&\quad \times \left[ \iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0)dudt\right] d\Pi (v)\Bigg \}^2 \end{aligned}$$

for some finite positive constant B. Note that

$$\begin{aligned}&E \Bigg \{\int \left| \frac{1}{b}K\Big (\frac{v-\xi }{b}\Big )-E \frac{1}{b}K\Big (\frac{v-\xi }{b}\Big )\right| \Bigg [\iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0)dudt\Bigg ]d\Pi (v)\Bigg \}^2 \end{aligned}$$

is of order O(1), so we have \(S_{n1}=O_p(w^2/\sqrt{n})\). Thus \(nb^{\frac{1}{2}} S_{n1}=O_p\left( \sqrt{n}b^{\frac{1}{2}}w^2\right) =O_p(\sqrt{nbw^4})=o_p(1)\). Using the Cauchy–Schwarz inequality and the proof of Theorem 3.1 in Koul and Song (2012), we have

$$\begin{aligned} nb^{1/2}|S_{n2}|&= nb^{\frac{1}{2}}\left| \int \left[ \frac{1}{n} \sum _{i=1}^n(K_b(v-\hat{\xi _i})-K_b(v-\xi _i))\right] \right. \\&\cdot \left. \left[ \iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0)(E {\hat{f}}_{{\bar{U}}n}(t)-f_{{\bar{U}}}(t))dudt\right] d\Pi (v)\right| \\&\le \left\{ nb^{1/2}\int \left[ \frac{1}{n}\sum _{i=1}^n(K_b(v-\hat{\xi _i}) -K_b(v-\xi _i))\right] ^2d\Pi (v)\right\} ^{1/2}\\&\cdot \left\{ nb^{1/2}\int \left[ \iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0) (E {\hat{f}}_{{\bar{U}}n}(t)-f_{{\bar{U}}}(t))dudt\right] ^2d\Pi (v)\right\} ^{1/2}\\&\le o_p(1)O(\sqrt{nb^{1/2}w^4})=o_p(1), \end{aligned}$$

and

$$\begin{aligned} nb^{1/2}|S_{n3}|&= nb^{\frac{1}{2}}\Big |\int [ f_{\xi ,b}(v;\beta _0, \theta _0)-f_{\xi ,b} (v;\hat{\beta }_n, \hat{\theta }_n)]\\&\quad \cdot \left[ \iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0)(E {\hat{f}}_{{\bar{U}}n}(t)-f_{{\bar{U}}}(t))dudt\right] d\Pi (v)\Big |\\&\le nb^{\frac{1}{2}}\int [(\hat{\theta }_n-\theta _0)^T\dot{f}_{\xi b\theta }(v;\beta _0,\theta _0)+(\hat{\beta }_n-\beta _0)^T\dot{f}_{\xi b\beta }(v;\beta _0,\theta _0)+O_p(1/n)]\\&\quad \cdot \left[ \iint K_b(v-u)f_\varepsilon (u+\beta _0^Tt,\theta _0)\frac{w^2}{2}\int L(z)|z^Tf_{{\bar{U}}}''(\tilde{t})z|dzdudt\right] d\Pi (v)\\&\le O_p(nb^{\frac{1}{2}}w^2/\sqrt{n})=o_p(1). \end{aligned}$$

Therefore, we have

$$\begin{aligned} nb^{1/2}Q_{n4}=o_p(1). \end{aligned}$$
(21)

Combining (18), (19), (20) and (21), we have

$$\begin{aligned} nb^{1/2}T_{n3}=-\,2nb^{1/2}(Q_{n1}+Q_{n2}+Q_{n3}+Q_{n4})=o_p(1). \end{aligned}$$
(22)

Note that

$$\begin{aligned} nb^{1/2}(T_n({\hat{\alpha }}_n,{\hat{\beta }}_n,{\hat{\theta }}_n) -{\hat{C}}_n)=nb^{\frac{1}{2}}\left[ T_{n1}-\hat{C}_n\right] +nb^{1/2}T_{n2}+nb^{1/2}T_{n3}, \end{aligned}$$

which, combined with (15), (17) and (22), yields

$$\begin{aligned} nb^{1/2}(T_n({\hat{\alpha }}_n,{\hat{\beta }}_n,{\hat{\theta }}_n) -{\hat{C}}_n)\Longrightarrow N(0,\Gamma ). \end{aligned}$$

This, together with the fact that \({\hat{\Gamma }}_n\rightarrow \Gamma \) in probability, which follows easily from the consistency of \({\hat{\alpha }}_n,{\hat{\beta }}_n\) and of the kernel density estimator \({\hat{f}}_{\xi ,n}\), completes the proof of Theorem 1. \(\square \)
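For orientation, we note the following immediate consequence of Theorem 1 (with \(z_\alpha \) the upper \(\alpha \) quantile of \(N(0,1)\)): the test that rejects \(H_0\) whenever

$$\begin{aligned} {\mathcal {T}}_n=nb^{1/2}{\hat{\Gamma }}_n^{-1/2}\left( T_n({\hat{\alpha }}_n,{\hat{\beta }}_n,{\hat{\theta }}_n)-{\hat{C}}_n\right) >z_\alpha \end{aligned}$$

has asymptotic level \(\alpha \); this standardized statistic \({\mathcal {T}}_n\) is the one analyzed in the proofs of Theorems 2 and 3 below.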

Proof of Theorem 2:

Define

$$\begin{aligned} \check{f}_{\xi ,b}(v;\beta )=\int K_b(v-u)\tilde{f}_{\xi ,a}(u;\beta )du, \quad \tilde{f}_{\xi ,a}(u;\beta )=\int f_{\varepsilon ,a}(u+\beta ^Tt){\hat{f}}_{{\bar{U}}n}(t)dt, \end{aligned}$$

By adding and subtracting \(\check{f}_{\xi ,b}(v;\hat{\beta }_n)\) from \(\hat{f}_{\xi ,n}(v;\hat{\alpha }_n,\hat{\beta }_n)-\tilde{f}_{\xi ,b}(v;\hat{\beta }_n, \hat{\theta }_n)\), we can rewrite \(T_n=T_{1n}-2T_{2n}+T_{3n}\), where

$$\begin{aligned} T_{1n}=&\int [\hat{f}_{\xi ,n}(v;\hat{\alpha }_n,\hat{\beta }_n)-\check{f}_{\xi ,b} (v;\hat{\beta }_n)]^2d\Pi (v),\\ T_{2n}=&\int [\hat{f}_{\xi ,n}(v;\hat{\alpha }_n,\hat{\beta }_n)-\check{f}_{\xi ,b} (v;\hat{\beta }_n)][\check{f}_{\xi ,b}(v;\hat{\beta }_n)-\tilde{f}_{\xi ,b} (v;\hat{\beta }_n,\hat{\theta }_n)]d\Pi (v),\\ T_{3n}=&\int [\check{f}_{\xi ,b}(v;\hat{\beta }_n)-\tilde{f}_{\xi ,b}(v;\hat{\beta }_n, \hat{\theta }_n)]^2d\Pi (v). \end{aligned}$$

Therefore,

$$\begin{aligned} \mathcal {T}_n=nb^{\frac{1}{2}}\hat{\Gamma }_n^{-\frac{1}{2}}(T_{1n}-\hat{C}_n)-2nb^{\frac{1}{2}} \hat{\Gamma }_n^{-\frac{1}{2}}T_{2n}+nb^{\frac{1}{2}} \hat{\Gamma }_n^{-\frac{1}{2}}T_{3n}. \end{aligned}$$

One can show that \(nb^{\frac{1}{2}}\hat{\Gamma }_n^{-\frac{1}{2}}(T_{1n}-\hat{C}_n)\Rightarrow N(0,1).\) The proof is similar to that of Theorem 1. Note that \( \hat{\Gamma }_n\rightarrow 2\int f_{\xi ,a}^2(v;\beta _a)\pi ^2(v)dv\int K_*^2(u)du={\tilde{\Gamma }}>0, \) where \(K_*(u)=\int K(t)K(u+t)dt\), and

$$\begin{aligned} T_{3n}=&\int \left[ \iint K_b(v-u)[f_{\varepsilon ,a}(u+\hat{\beta }_n^Tt)-f_\varepsilon (u+ \hat{\beta }_n^Tt,\hat{\theta }_n)] {\hat{f}}_{{\bar{U}}n}(t)dtdu\right] ^2d\Pi (v)\\ =&\int \left[ \iint K(x)[f_{\varepsilon ,a}(v+bx+\hat{\beta }_n^Tt)-f_\varepsilon (v+bx +\hat{\beta }_n^Tt,\hat{\theta }_n)] {\hat{f}}_{{\bar{U}}n}(t)dtdx\right] ^2d\Pi (v)\\ \rightarrow&\int \left[ \int f_{\varepsilon ,a}(v+\beta ^T_a t)f_{{\bar{U}}}(t)dt-\int f_\varepsilon (v+\beta ^T_a t, \theta _a)f_{{\bar{U}}}(t)dt\right] ^2d\Pi (v)\\ =&\int [f_{\xi ,a}(v;\beta _a)-f_\xi (v;\beta _a,\theta _a)]^2d\Pi (v)>0 \end{aligned}$$

so that \( nb^{1/2}\hat{\Gamma }_n^{-1/2}T_{3n}=nb^{1/2}{\tilde{\Gamma }}^{-1/2} \int [f_{\xi ,a}(v;\beta _a)-f_\xi (v;\beta _a,\theta _a)]^2d\Pi (v)+o_p(nb^{1/2})\) as \(n\rightarrow \infty \).

By the Cauchy–Schwarz inequality, and using the fact \(\hat{C}_n=O_p(1/(nb))\) from Koul and Song (2012), \(nb^{\frac{1}{2}}\hat{\Gamma }_n^{-1/2}|T_{2n}|\) is bounded above by

$$\begin{aligned}&[nb^{\frac{1}{2}}\hat{\Gamma }_n^{-1/2}T_{1n}]^{\frac{1}{2}}[nb^{\frac{1}{2}} \hat{\Gamma }_n^{-1/2}T_{3n}]^{\frac{1}{2}}= [nb^{\frac{1}{2}}\hat{\Gamma }_n^{-1/2} (T_{1n}-\hat{C}_n+\hat{C}_n)]^{\frac{1}{2}}[nb^{\frac{1}{2}}\hat{\Gamma }_n^{-1/2} T_{3n}]^{\frac{1}{2}}\\&\quad \le [nb^{\frac{1}{2}}\hat{\Gamma }_n^{-1/2}|T_{1n}-\hat{C}_n| +nb^{\frac{1}{2}}\hat{\Gamma }_n^{-1/2}\hat{C}_n]^{\frac{1}{2}}O_p(\sqrt{nb^{1/2}})\\&\quad = [O_p(1)+O_p(b^{-1/2})]^{\frac{1}{2}}O_p(\sqrt{nb^{1/2}})=o_p(nb^{1/2}) \end{aligned}$$

since \(nb\rightarrow \infty \) is guaranteed by assumption (b1). Therefore,

$$\begin{aligned} \mathcal {T}_n&=nb^{1/2}\hat{\Gamma }_n^{-1/2}(T_{1n}-\hat{C}_n)+nb^{1/2}\tilde{\Gamma }^{-1/2} \int [f_{\xi ,a}(v;\beta _a)-f_\xi (v;\beta _a,\theta _a)]^2d\Pi (v)\\&\quad +o_p(nb^{1/2}). \end{aligned}$$

Clearly, the right hand side of the above expression tends to \(\infty \) as \(n\rightarrow \infty \), implying that the proposed test is consistent. \(\square \)

Proof of Theorem 3:

Denote

$$\begin{aligned}&\tilde{f}_\xi ^\mathrm{loc}(v;\beta _0,\theta _0)=\int \left[ (1-\delta _n)f_\varepsilon \left( v+\beta _0^Tu, \theta _0\right) +\delta _n\varphi \left( v+\beta _0^Tu\right) \right] f_{{\bar{U}}}(u)du\\&\quad = \int f_\varepsilon \left( v+\beta _0^Tu,\theta _0\right) f_{{\bar{U}}}(u)du-\delta _n\int \left[ f_\varepsilon \left( v+\beta _0^Tu, \theta _0\right) -\varphi \left( v+\beta _0^Tu\right) \right] f_{{\bar{U}}}(u)du.\\&\tilde{f}_{\xi ,b}^\mathrm{loc}(v;\hat{\beta }_n,\hat{\theta }_n)=\int K_b(v-u)\tilde{f}_\xi ^\mathrm{loc}(u;\hat{\beta }_n,\hat{\theta }_n)du. \end{aligned}$$

Adding and subtracting \(\tilde{f}_{\xi ,b}^\mathrm{loc}(v;\hat{\beta }_n,\hat{\theta }_n)\) from \(\hat{f}_{\xi ,n}(v;\hat{\alpha }_n,\hat{\beta }_n)-\tilde{f}_{\xi ,b}(v; \hat{\beta }_n,\hat{\theta }_n)\), we can rewrite the test statistic as

$$\begin{aligned}&T_n({\hat{\alpha }}_n,{\hat{\beta }}_n,{\hat{\theta }}_n)=\int \left[ \hat{f}_{\xi ,n} (v;\hat{\alpha }_n,\hat{\beta }_n)-\tilde{f}_{\xi ,b}^\mathrm{loc}(v;\hat{\beta }_n, \hat{\theta }_n)+\tilde{f}_{\xi ,b}^\mathrm{loc}(v;\hat{\beta }_n,\hat{\theta }_n)\right. \\&\left. \quad -\tilde{f}_{\xi ,b}(v;\hat{\beta }_n, \hat{\theta }_n)\right] ^2d\Pi (v). \end{aligned}$$

Note that

$$\begin{aligned}&\tilde{f}_{\xi ,b}^\mathrm{loc}(v;\hat{\beta }_n,\hat{\theta }_n)-\tilde{f}_{\xi ,b} (v;\hat{\beta }_n,\hat{\theta }_n)=\int K_b(v-u)\left[ \tilde{f}_\xi ^\mathrm{loc}(u;\hat{\beta }_n, \hat{\theta }_n)-\tilde{f}_\xi (u;\hat{\beta }_n,\hat{\theta }_n)\right] du\\&\quad = -\int K_b(v-u)\cdot \delta _n\int \left[ f_\varepsilon \left( u+\hat{\beta }_n^Tt,\hat{\theta }_n\right) -\varphi \left( u+\hat{\beta }_n^Tt\right) \right] {\hat{f}}_{{\bar{U}}n}(t)dtdu\\&\quad =-\delta _n\iint K_b(v-u)\left[ f_\varepsilon \left( u+\hat{\beta }_n^Tt,\hat{\theta }_n\right) -\varphi \left( u +\hat{\beta }_n^Tt\right) \right] {\hat{f}}_{{\bar{U}}n}(t)dtdu\\&\quad =-\delta _n D_n(v;\hat{\beta }_n,\hat{\theta }_n). \end{aligned}$$

We can rewrite \(T_n\) as the sum of the following three terms

$$\begin{aligned} T_{n1}&=\int \left[ \hat{f}_{\xi ,n}(v;\hat{\alpha }_n,\hat{\beta }_n)-\tilde{f}_{\xi ,b}^\mathrm{loc}(v;\hat{\beta }_n,\hat{\theta }_n)\right] ^2d\Pi (v),\\ T_{n2}&=-\,2\delta _n\int \left[ \hat{f}_{\xi ,n}(v;\hat{\alpha }_n,\hat{\beta }_n) -\tilde{f}_{\xi ,b}^\mathrm{loc}(v;\hat{\beta }_n,\hat{\theta }_n)\right] D_n(v;\hat{\beta }_n,\hat{\theta }_n)d\Pi (v),\\ T_{n3}&=\delta ^2_n\int D_n^2(v;\hat{\beta }_n,\hat{\theta }_n)d\Pi (v). \end{aligned}$$

For the sake of simplicity, denote \( f_n(u,t)=(1-\delta _n)f_\varepsilon (u+\hat{\beta }_n^Tt,\hat{\theta }_n)+\delta _n \varphi (u+\hat{\beta }_n^Tt). \) Adding and subtracting \(f_{{\bar{U}}}(t)\) from \({\hat{f}}_{{\bar{U}} n}(t)\), we can further write \(T_{n1}\) as the sum of the following three terms,

$$\begin{aligned} T_{n11}=&\int \left[ \hat{f}_{\xi ,n}(v;\hat{\alpha }_n,\hat{\beta }_n) -\iint K_b(v-u)f_n(u,t)f_{{\bar{U}}}(t)dtdu\right] ^2d\Pi (v),\\ T_{n12}=&\int \left[ \iint K_b(v-u)f_n(u,t)\left[ {\hat{f}}_{{\bar{U}}n}(t) -f_{{\bar{U}}}(t)\right] dtdu\right] ^2d\Pi (v),\\ T_{n13}=&\int \left[ \hat{f}_{\xi ,n}(v;\hat{\alpha }_n,\hat{\beta }_n) -\iint K_b(v-u)f_n(u,t)f_{{\bar{U}}}(t)dtdu\right] \\&\cdot \left[ \iint K_b(v-u)f_n(u,t)\left[ {\hat{f}}_{{\bar{U}}n}(t)-f_{{\bar{U}}} (t)\right] dtdu\right] d\Pi (v). \end{aligned}$$

Similarly to the derivations for \(R_{bw}(v;\hat{\beta }_n,\hat{\theta }_n)\), one can show that \(nb^{\frac{1}{2}}T_{n12}=o_p(1)\). Following the proof of Theorem 1 in Koul and Song (2012), we can show that \( nb^{\frac{1}{2}}[T_{n11}-\hat{C}_n]\Rightarrow N(0,\Gamma )\), and, using the Cauchy–Schwarz inequality, we also have \( nb^{\frac{1}{2}}T_{n13}=o_p(1)\). Therefore, \( nb^{\frac{1}{2}}[T_{n1}-\hat{C}_n]=nb^{\frac{1}{2}}[T_{n11}-\hat{C}_n]+o_p(1). \)

By the boundedness of \(f''(t)\) and \(\varphi ''(t)\), we have

$$\begin{aligned}&nb^{\frac{1}{2}}\hat{\Gamma }_n^{-\frac{1}{2}}T_{n3}\\&\quad =nb^{\frac{1}{2}} \hat{\Gamma }_n^{-\frac{1}{2}}\delta _n^2\int D_n^2(v;\hat{\beta }_n,\hat{\theta }_n)d\Pi (v)=\hat{\Gamma }_n^{-\frac{1}{2}}\int D^2_n(v,\hat{\beta }_n,\hat{\theta }_n)d\Pi (v)\\&\quad =\hat{\Gamma }_n^{-\frac{1}{2}}\int \left[ \iint K_b(v-u)\left[ f_\varepsilon \left( u+\hat{\beta }^T_nt,\hat{\theta }_n\right) -\varphi \left( u+\hat{\beta }^T_nt\right) \right] {\hat{f}}_{{\bar{U}}n}(t)dtdu\right] ^2d\Pi (v)\\&\quad =\Gamma ^{-\frac{1}{2}}\int \left[ \int \left[ f_\varepsilon \left( v+\beta ^T_0 t,\theta _0\right) -\varphi \left( v+\beta ^T_0t\right) \right] f_{{\bar{U}}}(t)dt\right] ^2d\Pi (v)+o_p(1). \end{aligned}$$

Similarly, we can obtain

$$\begin{aligned} nb^{\frac{1}{2}}T_{n2}&=\sqrt{nb^{\frac{1}{2}}}\int \left[ \frac{1}{nb}\sum _{i=1}^n K\left( \frac{v-Y_i+\alpha _0+\beta ^T_0\bar{Z}_i}{b}\right) \right. \\&\quad \left. -\int K_b(v-u) \tilde{f}_\xi ^\mathrm{loc}(u;\beta _0,\theta _0)du\right] \\&\quad \times \left[ \iint K_b(v-u)\left[ f_\varepsilon \left( u+\beta _0^Tt,\theta _0\right) -\varphi \left( u+\beta _0^Tt\right) \right] f_{{\bar{U}}}(t)dudt\right] \\&\qquad d\Pi (v)+o_p(1)\\&= O_p(b^{\frac{1}{4}})=o_p(1). \end{aligned}$$
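Combining the three displays above with \({\hat{\Gamma }}_n\rightarrow \Gamma \) in probability, the standardized statistic therefore satisfies, under the local alternatives,

$$\begin{aligned} {\mathcal {T}}_n\Longrightarrow N\left( \Gamma ^{-1/2}\int \left[ \int \left[ f_\varepsilon \left( v+\beta _0^Tt,\theta _0\right) -\varphi \left( v+\beta _0^Tt\right) \right] f_{{\bar{U}}}(t)dt\right] ^2d\Pi (v),\,1\right) , \end{aligned}$$

which is the asymptotic shift that drives the local power of the test.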

This completes the proof of Theorem 3. \(\square \)

About this article

Cite this article

Jia, W., Song, W. Goodness-of-fit tests in linear EV regression with replications. Metrika 81, 395–421 (2018). https://doi.org/10.1007/s00184-018-0648-1
