Empirical likelihood for semivarying coefficient model with measurement error in the nonparametric part

  • Original Paper
  • Published in AStA Advances in Statistical Analysis

Abstract

A semivarying coefficient model with measurement error in the nonparametric part was proposed by Feng and Xue (Ann Inst Stat Math 66:121–140, 2014), but its inference has not been systematically studied. This paper applies the empirical likelihood method to construct confidence regions/intervals for the regression parameter and the coefficient function. When auxiliary information about the parametric part is available, an empirical log-likelihood ratio statistic for the regression parameter is introduced, based on the corrected local linear estimator of the coefficient function. A corrected empirical log-likelihood ratio statistic for the coefficient function is also investigated with the use of the auxiliary information. The limiting distributions of the resulting statistics, both for the regression parameter and for the coefficient function, are shown to be standard chi-squared. Simulation experiments and a real data set are presented to evaluate the finite-sample performance of the proposed method.
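As background, the empirical likelihood construction used throughout can be illustrated on the simplest case, the mean functional of Owen (1988). The sketch below is ours and purely illustrative (the function and variable names are not from the paper): it profiles the Lagrange multiplier by bisection and evaluates the log-likelihood ratio statistic, whose sublevel set at the \(\chi^2_1\) quantile 3.841 gives an asymptotic 95% confidence interval.

```python
import numpy as np

def el_logratio_mean(x, mu, tol=1e-10):
    """-2 log empirical likelihood ratio for a mean, as in Owen (1988).

    Solves sum_i z_i / (1 + lam * z_i) = 0 for the Lagrange multiplier
    lam by bisection (the score is strictly decreasing in lam), where
    z_i = x_i - mu, then returns 2 * sum_i log(1 + lam * z_i).
    """
    z = np.asarray(x, dtype=float) - mu
    if z.min() >= 0.0 or z.max() <= 0.0:
        return float("inf")            # mu outside the convex hull of the data
    lo = -1.0 / z.max() + tol          # lam must keep every 1 + lam*z_i > 0
    hi = -1.0 / z.min() - tol
    g = lambda lam: np.sum(z / (1.0 + lam * z))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if g(mid) > 0.0 else (lo, mid)
    lam = 0.5 * (lo + hi)
    return 2.0 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, size=200)
# At the sample mean the statistic is 0; it grows as mu moves away, and
# {mu : statistic <= 3.841} is an asymptotic 95% confidence interval.
print(el_logratio_mean(x, x.mean()), el_logratio_mean(x, 1.2))
```

The paper replaces \(z_i = x_i - \mu\) by bias-corrected estimating functions \(\psi_i(\beta)\) that absorb the measurement error and the auxiliary information, but the profiling step is of the same shape.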


References

  • Banerjee, A., Pitarakis, J.Y.: Functional cointegration: definition and nonparametric estimation. Stud. Nonlinear Dyn. Econom. 18, 507–520 (2014)
  • Carroll, R.J., Ruppert, D., Stefanski, L.A.: Measurement Error in Nonlinear Models. Chapman & Hall, London (1995)
  • Chen, J.H., Qin, J.: Empirical likelihood estimation for finite populations and the effective usage of auxiliary information. Biometrika 80, 107–116 (1993)
  • Cui, H.J., Li, R.C.: On parameter estimation for semi-linear error-in-variables models. J. Multivar. Anal. 64, 1–24 (1998)
  • Engle, R.F., Granger, C.W.J., Rice, J., Weiss, A.: Semiparametric estimates of the relation between weather and electricity sales. J. Am. Stat. Assoc. 80, 310–320 (1986)
  • Fan, G.L., Xu, H.X., Liang, H.Y.: Empirical likelihood inference for partially time-varying coefficient errors-in-variables models. Electron. J. Stat. 6, 1040–1058 (2012)
  • Fan, J., Huang, T.: Profile likelihood inferences on semiparametric varying-coefficient partially linear models. Bernoulli 11, 1031–1057 (2005)
  • Feng, S.Y., Xue, L.G.: Bias-corrected statistical inference for partially linear varying coefficient errors-in-variables models with restricted condition. Ann. Inst. Stat. Math. 66, 121–140 (2014)
  • Fuller, W.A.: Measurement Error Models. Wiley, New York (1987)
  • Härdle, W., Liang, H., Gao, J.T.: Partially Linear Models. Physica-Verlag, Heidelberg (2000)
  • Hastie, T.J., Tibshirani, R.: Varying-coefficient models. J. R. Stat. Soc. Ser. B 55, 757–796 (1993)
  • Huang, Z.S., Zhang, R.Q.: Profile empirical-likelihood inferences for the single-index-coefficient regression model. Stat. Comput. 23, 455–465 (2013)
  • Hwang, J.T.: The multiplicative errors-in-variables models with applications to the recent data released by the US Department of Energy. J. Am. Stat. Assoc. 81, 680–688 (1986)
  • Kai, B., Li, R., Zou, H.: New efficient estimation and variable selection methods for semiparametric varying-coefficient partially linear models. Ann. Stat. 39(1), 305–332 (2011)
  • Koenker, R.W.: Quantile Regression. Cambridge University Press, Cambridge (2005)
  • Liang, H., Härdle, W., Carroll, R.J.: Estimation in a semiparametric partially linear errors-in-variables model. Ann. Stat. 27, 1519–1535 (1999)
  • Niu, C.Z., Guo, X., Xu, W.L., Zhu, L.X.: Empirical likelihood inference in linear regression with nonignorable missing response. Comput. Stat. Data Anal. 79, 91–112 (2014)
  • Owen, A.B.: Empirical likelihood ratio confidence intervals for a single functional. Biometrika 75, 237–249 (1988)
  • Owen, A.B.: Empirical likelihood ratio confidence regions. Ann. Stat. 18, 90–120 (1990)
  • Qin, J., Lawless, J.F.: Empirical likelihood and general estimating equations. Ann. Stat. 22, 300–325 (1994)
  • Tosteson, T., Stefanski, L.A., Schafer, D.W.: A measurement error model for binary and ordinal regression. Stat. Med. 8, 1139–1147 (1989)
  • Wang, N., Carroll, R.J., Liang, K.Y.: Quasi-likelihood and variance functions in measurement error models with replicates. Biometrics 52, 423–432 (1996)
  • Wang, X.L., Li, G.R., Lin, L.: Empirical likelihood inference for semi-parametric varying-coefficient partially linear EV models. Metrika 73, 171–185 (2011)
  • Wei, C.H.: Statistical inference for restricted partially linear varying coefficient errors-in-variables models. J. Stat. Plan. Inference 142, 2464–2472 (2012)
  • Yan, L., Chen, X.: Empirical likelihood for partly linear models with errors in all variables. J. Multivar. Anal. 130, 275–288 (2014)
  • Yang, S.J., Park, B.U.: Efficient estimation for partially linear varying coefficient models when coefficient functions have different smoothing variables. J. Multivar. Anal. 126, 100–113 (2014)
  • Yang, Y., Li, G., Peng, H.: Empirical likelihood of varying coefficient errors-in-variables models with longitudinal data. J. Multivar. Anal. 127, 1–18 (2014)
  • You, J.H., Chen, G.M.: Empirical likelihood for semiparametric varying-coefficient partially linear regression models. Stat. Probab. Lett. 76, 412–422 (2006)
  • You, J.H., Zhou, Y., Chen, G.M.: Corrected local polynomial estimation in varying-coefficient models with measurement errors. Can. J. Stat. 34, 391–410 (2006)
  • Zhang, B.: Confidence intervals for a distribution function in the presence of auxiliary information. Comput. Stat. Data Anal. 21, 327–342 (1996)
  • Zhang, W., Lee, S.Y., Song, X.: Local polynomial fitting in semivarying coefficient models. J. Multivar. Anal. 82, 166–188 (2002)
  • Zhou, Y., Liang, H.: Statistical inference for semiparametric varying-coefficient partially linear models with error-prone linear covariates. Ann. Stat. 37, 427–458 (2009)
  • Zhu, L., Cui, H.: A semiparametric regression model with errors in variables. Scand. J. Stat. 30, 429–442 (2003)

Acknowledgments

The authors would like to thank the Editor and two referees for their truly helpful comments and suggestions which led to a much improved presentation. This research was supported by the National Natural Science Foundation of China (11401006, 11226218, 71171003, 71271003, 11471160, 11101114), the National Statistical Science Research Key Program of China (2013LZ45), the Programming Fund Project of the Humanities and Social Sciences Research of the Ministry of Education of China (12YJA790041), Jiangsu Provincial Basic Research Program (Natural Science Foundation) (BK20131345) and the Fundamental Research Funds for the Central Universities (30920130111015).

Author information

Correspondence to Hong-Xia Xu.

Appendix A: Assumptions and proofs

For convenience and simplicity, let \(Z=(X_1,\ldots ,X_n)^\mathrm{T}\), \(\varepsilon =(\varepsilon _1,\ldots ,\varepsilon _n)^\mathrm{T}\), \(\eta =(\eta _1,\ldots ,\eta _n)^\mathrm{T}\), \(\widetilde{Z}=(I-S)Z\), \(\widetilde{M}=(I-S)M\), \(\widetilde{\varepsilon }=(I-S)\varepsilon \), \(\widetilde{\eta }=(I-S)\eta \) and \(a_n=\{\frac{\log (1/h_1)}{nh_1}\}^{1/2}+h_1^2\), and let \(C\) denote a generic positive constant whose value may vary from one occurrence to another. Before proving the main theorems, we state the following assumptions.

  1. (C1)

    The kernel \(K(\cdot )\) is a symmetric probability density function with bounded support.

  2. (C2)

    The variable \(U\) has a bounded support \(\fancyscript{U}\), and its density function \(p(u)>0\) is Lipschitz continuous and bounded away from zero on \(\fancyscript{U}\).

  3. (C3)

    The matrix \(\Gamma (u)\) is non-singular. \(\Gamma ^{-1}(u)\), \(\Phi (u)\) and \(E(X_1X_1^\mathrm{T}|U_1=u)\) are all Lipschitz continuous. \(E\Vert Z_1\Vert ^{2s}<\infty \), \(E\Vert X_1\Vert ^{2s}<\infty \), \(E|\varepsilon _1|^{2s}<\infty \) and \(E\Vert \eta _1\Vert ^{2s}<\infty \) for some \(s>2\), where \(\Vert \cdot \Vert \) is the \(L_2\) norm.

  4. (C4)

    \(\{\alpha _j(\cdot ),j=1,\ldots ,q\}\) have continuous second derivatives on \(\fancyscript{U}\).

  5. (C5)

    There exists a \(\delta <2-s^{-1}\) such that \(\lim _{n\rightarrow \infty }n^{2\delta -1}h_1=\infty .\)

  6. (C6)

    The bandwidth \(h_1\) satisfies \(nh_1^2(\log n)^{-2}\rightarrow \infty \) and \(nh_1^8\rightarrow 0\).

  7. (C7)

    The bandwidth \(h_2\) satisfies \(nh_2\rightarrow \infty \) and \(h_2\rightarrow 0.\)

Lemma 5.1

Assume that conditions (C1)–(C7) are satisfied. Then we have

$$\begin{aligned}&(D_u^W)^\mathrm{T} w_u D_u^W-\Omega =nf(u)\Gamma (u)\otimes \left( \begin{array}{l@{\quad }l} 1&{} \mu _1\\ \mu _1 &{} \mu _2 \end{array}\right) \{1+O_p(a_n)\},\\&(D_u^W)^\mathrm{T} w_u D_u^X =nf(u)\Phi (u)\otimes (1,\mu _1)\{1+O_p(a_n)\},\\&(D_u^W)^\mathrm{T} w_u D_u^Z =nf(u)\Gamma (u)\otimes (1,\mu _1)\{1+O_p(a_n)\},\\&n^{-1}\sum _{i=1}^n(\widetilde{X}_i\widetilde{X}_i^\mathrm{T}-X^\mathrm{T}Q_i^\mathrm{T}\Sigma _{\eta } Q_iX)\longrightarrow \Delta _1, \quad \hbox {a.s.} \end{aligned}$$

Lemma 5.1 can be proved in the same way as Lemmas 2 and 3 in Feng and Xue (2014).

Lemma 5.2

Let \(D_1,\ldots ,D_n\) be independent and identically distributed random variables. If \(E|D_1|^s<\infty \) for some \(s>1,\) then \(\max _{1\le i\le n}|D_i|=o(n^{1/s})\) a.s.

Lemma 5.2 can be proved in the same way as Lemma 3 in Owen (1990).
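Lemma 5.2 lends itself to a quick Monte Carlo check. The sketch below is our own illustrative code: taking \(s=2\) and drawing from a Pareto law with tail index 2.5 (an arbitrary choice satisfying \(E|D_1|^2<\infty\)), the maximum in fact grows like \(n^{1/2.5}\), so the scaled maximum \(\max_{i\le n}|D_i|/n^{1/2}\) should decay roughly like \(n^{-0.1}\) on average over replications.

```python
import numpy as np

def scaled_max(n, s, alpha, rng, reps=500):
    """Average of max_{i<=n} |D_i| / n^(1/s) over `reps` replications,
    with D_i ~ Pareto(alpha) generated by inverse-transform sampling."""
    out = []
    for _ in range(reps):
        d = (1.0 - rng.random(n)) ** (-1.0 / alpha)  # Pareto(alpha) draws
        out.append(d.max() / n ** (1.0 / s))
    return float(np.mean(out))

rng = np.random.default_rng(1)
# E|D_1|^s < infinity holds since s = 2 < alpha = 2.5, so Lemma 5.2
# predicts max_i |D_i| = o(n^{1/2}) a.s.; the averaged scaled maximum
# should drift toward zero as n grows.
ratios = [scaled_max(n, s=2.0, alpha=2.5, rng=rng) for n in (10**2, 10**3, 10**4)]
print(ratios)
```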

Lemma 5.3

Under the conditions of Theorem 2.1, if \(\beta \) is the true value of the parameter, then we have

$$\begin{aligned} \mathrm{(i)}\quad&\frac{1}{\sqrt{n}}\sum _{i=1}^n\psi _i(\beta ) \mathop {\longrightarrow }\limits ^{\mathcal {D}}N(0,V_{AI}),\\ \mathrm{(ii)}\quad&\frac{1}{n}\sum _{i=1}^n \psi _i(\beta ) \psi _i^\mathrm{T}(\beta )\mathop {\longrightarrow }\limits ^{\mathcal {P}}V_{AI},\\ \mathrm{(iii)}\quad&\max _{1\le i\le n}\Vert \psi _i(\beta )\Vert =o_p(n^{1/2}), \quad \lambda _1=O_p(n^{-1/2}), \end{aligned}$$

where \(V_{AI}=\bigg (\begin{array}{ll} V_{A}&{}0\\ 0&{}V_1 \end{array}\bigg ),\) \(V_1 =E\big \{\big [E(X_1X_1^\mathrm{T}|U_1)-\Phi ^\mathrm{T}(U_1)\Gamma ^{-1}(U_1)\Phi (U_1)\big ](\varepsilon _1-\eta _1^\mathrm{T}\alpha (U_1))^2\big \} +\sigma ^2E\big \{\Phi ^\mathrm{T}(U_1)\Gamma ^{-1}(U_1)\Sigma _{\eta }\Gamma ^{-1}(U_1)\Phi (U_1)\big \} +E\big \{\Phi ^\mathrm{T}(U_1)\Gamma ^{-1}(U_1)(\eta _1\eta _1^\mathrm{T}-\Sigma _{\eta })\big \}^{\otimes 2}\) and \(V_A=E\{A(X_1)A^\mathrm{T}(X_1)\}\).

Proof

From the definition of \(\varphi _i(\beta )\) and Lemma 5.1, by simple calculation and the proof of Lemma 4 in Feng and Xue (2014), we have

$$\begin{aligned} \frac{1}{\sqrt{n}}\sum _{i=1}^n\varphi _i(\beta )&= \frac{1}{\sqrt{n}}\sum _{i=1}^n\big \{\widetilde{X}_i(\widetilde{Z}_i^\mathrm{T}\alpha (U_i)+\widetilde{\varepsilon }_i)-X^\mathrm{T}Q_i^\mathrm{T}\Sigma _{\eta } Q_i(M+\varepsilon )\big \} \nonumber \\&= \frac{1}{\sqrt{n}}\sum _{i=1}^n\big \{[X_i-\Phi ^\mathrm{T}(U_i)\Gamma ^{-1}(U_i)Z_i][\varepsilon _i-\eta _i^\mathrm{T}\alpha (U_i)] \nonumber \\&-\Phi ^\mathrm{T}(U_i)\Gamma ^{-1}(U_i)\eta _i\varepsilon _i+\Phi ^\mathrm{T}(U_i)\Gamma ^{-1}(U_i)(\eta _i\eta _i^\mathrm{T}-\Sigma _{\eta })\alpha (U_i)\big \}+o_p(1) \nonumber \\&:= \frac{1}{\sqrt{n}}\sum _{i=1}^n G_{i}+o_p(1). \end{aligned}$$
(5.1)

It is easy to see that the \(G_{i}\) are independent and identically distributed with mean zero and \(\text{ Var }(G_{i})=V_1\). Thus, by the Slutsky theorem and the central limit theorem, we obtain

$$\begin{aligned} \frac{1}{\sqrt{n}}\sum _{i=1}^n\varphi _i(\beta )\mathop {\longrightarrow }\limits ^{\mathcal {D}}N(0,V_{1}). \end{aligned}$$
(5.2)

Also, we find \(\frac{1}{\sqrt{n}}\sum _{i=1}^nA(X_i)\mathop {\longrightarrow }\limits ^{\mathcal {D}}N(0,V_{A})\) and \(\mathrm{Cov}\big (\frac{1}{\sqrt{n}}\sum _{i=1}^nA(X_i), \frac{1}{\sqrt{n}}\sum _{i=1}^n\varphi _i(\beta )\big ) \rightarrow 0\) by (5.1), which together with (5.2) and the central limit theorem yields Lemma 5.3(i).

As to Lemma 5.3(ii), observe that

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^n \psi _i(\beta )\psi _i^\mathrm{T}(\beta )= \frac{1}{n}\sum _{i=1}^n\left( \begin{array}{ll} A(X_i)A^\mathrm{T}(X_i)&{} A(X_i)\varphi _i^\mathrm{T}(\beta )\\ \varphi _i(\beta )A^\mathrm{T}(X_i)&{}\varphi _i(\beta )\varphi _i^\mathrm{T}(\beta ) \end{array}\right) . \end{aligned}$$

The law of large numbers implies \(\frac{1}{n}\sum _{i=1}^nA(X_i)A^\mathrm{T}(X_i)\mathop {\longrightarrow }\limits ^{\mathcal {P}}V_A.\) On the other hand, by Lemma 5.1 and the proof in (5.1), it follows that

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^n \varphi _i(\beta )\varphi _i^\mathrm{T}(\beta )&= \frac{1}{n} \sum _{i=1}^n \big \{\big [X_i-\Phi ^\mathrm{T}(U_i)\Gamma ^{-1}(U_i)Z_i][\varepsilon _i-\eta _i^\mathrm{T}\alpha (U_i)]\\&-\Phi ^\mathrm{T}(U_i)\Gamma ^{-1}(U_i)\eta _i\varepsilon _i+\Phi ^\mathrm{T}(U_i)\Gamma ^{-1}(U_i) (\eta _i\eta _i^\mathrm{T}-\Sigma _{\eta })\alpha (U_i)\big \}^{\otimes 2}\\&+\,o_p(1)\\&= V_1+o_p(1). \end{aligned}$$

Similar to the proof of (5.1), we can derive

$$\begin{aligned} \frac{1}{n}\sum _{i=1}^nA(X_i)\varphi _i^\mathrm{T}(\beta )=\frac{1}{n}\sum _{i=1}^nA(X_i)G^\mathrm{T}_i+o_p(1)\mathop {\longrightarrow }\limits ^{\mathcal {P}}0, \end{aligned}$$

where the convergence follows from \(E\Vert n^{-1}\sum _{i=1}^nA(X_i)G^\mathrm{T}_i\Vert ^2=n^{-1}E\Vert A(X_i)G_i^\mathrm{T}\Vert ^2\rightarrow 0\) and the Markov inequality. Thus, Lemma 5.3(ii) holds.

As to Lemma 5.3(iii), note that the \(A(X_i)\) are i.i.d. and \(E\{A(X_i)A^\mathrm{T}(X_i)\}\) is positive definite; then by Lemma 5.2, we have \(\max _{1\le i\le n}\Vert A(X_i)\Vert =o(n^{1/2})\) a.s. From (5.1) and standard calculations, when \(n\) is large enough, we can derive that

$$\begin{aligned}&\max _{1\le i\le n}\Vert \varphi _i(\beta )\Vert \le C\Big (\max _{1\le i\le n} \Vert X_i\Vert +\max _{1\le i\le n}\Vert \Phi ^\mathrm{T}(U_1)\Gamma ^{-1}(U_1)(Z_i+\eta _i)\Vert \Big )\\&\qquad \times \Big (\max _{1\le i\le n}|\varepsilon _i|+\max _{1\le i\le n}\Vert X_i+\eta _i\Vert + \max _{1\le i\le n}|Z_i^\mathrm{T}\alpha (U_i)|+\max _{1\le i\le n}|\eta _i^\mathrm{T}\alpha (U_i)|\Big )\\&\quad =o_p(n^{1/2}). \end{aligned}$$

Thus, \(\max _{1\le i\le n}\Vert \psi _i(\beta )\Vert =o_p(n^{1/2})\). Furthermore, one can obtain \(\lambda _1=O_p(n^{-1/2})\) by Lemma 5.3(ii) and arguments similar to those in Owen (1990). The proof of Lemma 5.3 is thus completed. \(\square \)

Set \(\lambda _2=(\lambda _{21}^\mathrm{T},\lambda _{22}^\mathrm{T})^\mathrm{T}\), where \(\lambda _{21}\) and \(\lambda _{22}\) are \(r\)-dimensional and \(q\)-dimensional column vectors, respectively.

Lemma 5.4

Under the conditions of Theorem 2.3, for a given \(u,\) if \(\alpha (u)\) is the true value of the parameter, then

$$\begin{aligned} \mathrm{(i)}\quad&\sum _{i=1}^n\widehat{\xi }_i(\alpha (u))\mathop {\longrightarrow }\limits ^{\mathcal {D}}N(0,\Sigma _{AI}(u)),\\ \mathrm{(ii)}\quad&\frac{1}{n}\sum _{i=1}^n \widehat{\xi }_i(\alpha (u)) \widehat{\xi }^\mathrm{T}_i(\alpha (u))\mathop {\longrightarrow }\limits ^{\mathcal {P}}\Sigma _{AI}(u),\\ \mathrm{(iii)}\quad&\max _{1\le i\le n}\Vert \widehat{\xi }_i(\alpha (u))\Vert =o_p(1),\quad \lambda _{21}=O_p(n^{-1/2}),\quad \lambda _{22}=O_p((nh_2)^{-1/2}), \end{aligned}$$

where \(\widehat{\xi }_i(\alpha (u))=\bigg (\begin{array}{l} \frac{1}{\sqrt{n}}A(X_i)\\ \frac{1}{\sqrt{nh_2}}\widehat{\zeta }_i(\alpha (u)) \end{array}\bigg )\), \(\Sigma _{AI}(u)=\bigg (\begin{array}{ll} V_{A}&{}0\\ 0&{}\Sigma _1(u) \end{array}\bigg )\) and \(\Sigma _1(u)=\Big \{(\Gamma (u)+\Sigma _{\eta })E(\varepsilon _1^2|U_1=u)+\alpha ^\mathrm{T}(u)\Sigma _{\eta }\alpha (u)\Gamma (u)\Big \}f(u)\int K^2(x)\mathrm{d}x.\)

Proof

Observe that

$$\begin{aligned} \frac{1}{\sqrt{nh_2}}\sum _{i=1}^n\hat{\zeta }_i(\alpha (u))&= \frac{1}{\sqrt{nh_2}}\sum _{i=1}^n \big [\varepsilon _i(Z_i+\eta _i)+\eta _i^\mathrm{T}\alpha (U_i)Z_i\big ]K\Big (\frac{U_i-u}{h_2}\Big ) \nonumber \\&+\frac{1}{\sqrt{nh_2}}\sum _{i=1}^nX_i^\mathrm{T}(\beta -\hat{\beta })W_iK\Big (\frac{U_i-u}{h_2}\Big )\nonumber \\&+ \frac{1}{\sqrt{nh_2}}\sum _{i=1}^nW_i^\mathrm{T}(\tilde{\alpha }(u)-\alpha (u))W_iK\Big (\frac{U_i-u}{h_2}\Big ) \nonumber \\&+\frac{1}{\sqrt{nh_2}}\sum _{i=1}^nZ_i^\mathrm{T}(\alpha (U_i)-\tilde{\alpha }(U_i))W_iK\Big (\frac{U_i-u}{h_2}\Big )\nonumber \\&-\frac{1}{\sqrt{nh_2}}\sum _{i=1}^n(\eta _i\eta _i^\mathrm{T}-\Sigma _{\eta })\tilde{\alpha }(U_i)K\Big (\frac{U_i-u}{h_2}\Big ) \nonumber \\&-\frac{1}{\sqrt{nh_2}}\sum _{i=1}^n\Sigma _{\eta }(\tilde{\alpha }(U_i)-\alpha (U_i))K\Big (\frac{U_i-u}{h_2}\Big )\nonumber \\&+\frac{1}{\sqrt{nh_2}}\sum _{i=1}^n\eta _i^\mathrm{T}(\alpha (U_i)-\tilde{\alpha }(U_i))Z_iK\Big (\frac{U_i-u}{h_2}\Big ) \nonumber \\&:= \sum _{i=1}^7B_{in}. \end{aligned}$$
(5.3)

Note that \(\big \{\big [\varepsilon _i(Z_i+\eta _i)+\eta _i^\mathrm{T}\alpha (U_i)Z_i\big ]K\big (\frac{U_i-u}{h_2}\big ),1\le i\le n\big \}\) is a sequence of independent and identically distributed random variables with mean zero and

$$\begin{aligned} \text{ Var }(B_{1n})&= h_2^{-1}E\bigg \{\Big [E(\varepsilon _1^2|U_1)\big [E(Z_1Z_1^\mathrm{T}|U_1)+\Sigma _{\eta }\big ]\\&+E\big [\alpha ^\mathrm{T}(U_1)\Sigma _{\eta }\alpha (U_1)|U_1\big ]E(Z_1Z_1^\mathrm{T}|U_1) \Big ]K^2\Big (\frac{U_1-u}{h_2}\Big )\bigg \}\\&= \Big \{(\Gamma (u)+\Sigma _{\eta })E(\varepsilon _1^2|U_1=u)+\alpha ^\mathrm{T}(u)\Sigma _{\eta }\alpha (u)\Gamma (u)\Big \}f(u)\int K^2(x)\mathrm{d}x\\&= \Sigma _1(u). \end{aligned}$$

Then we have \(B_{1n}\mathop {\longrightarrow }\limits ^{\mathcal {D}}N(0,\Sigma _1(u))\).

Theorems 2.2 and 2.4, Lemma 5.2 and condition (C3) imply that \(\hat{\beta }-\beta =O_p(n^{-1/2})\), \(\tilde{\alpha }(u)-\alpha (u)=O_p(n^{-1/2})\) and \(\eta _i\eta _i^\mathrm{T}-\Sigma _{\eta }=o(n^{1/s})\) a.s. Then, by some simple calculations, we obtain \(B_{in}=o_p(1)\) for \(i=2,\ldots ,7.\) Invoking the Slutsky theorem and (5.3), we get

$$\begin{aligned} \frac{1}{\sqrt{nh_2}}\sum _{i=1}^n\hat{\zeta }_i(\alpha (u))\mathop {\longrightarrow }\limits ^{\mathcal {D}}N(0,\Sigma _1(u)). \end{aligned}$$
(5.4)

Note that \(\frac{1}{\sqrt{n}}\sum _{i=1}^nA(X_i)\mathop {\longrightarrow }\limits ^{\mathcal {D}}N(0,V_{A})\) and \(\mathrm{Cov}\big (\frac{1}{\sqrt{n}}\sum _{i=1}^nA(X_i), \frac{1}{\sqrt{nh_2}}\sum _{i=1}^n\hat{\zeta }_i(\alpha (u))\big ) \rightarrow 0\), which together with (5.4) and the central limit theorem leads to Lemma 5.4(i).

Analogously to the proof of Lemma 5.3(ii), we can verify Lemma 5.4(ii) easily. As to Lemma 5.4(iii), we find

$$\begin{aligned} \max _{1\le i\le n}\Vert \widehat{\zeta }_i(\alpha (u))\Vert&\le \max _{1\le i\le n}\Big \Vert \big [\varepsilon _i(Z_i+\eta _i)+\eta _i^\mathrm{T}\alpha (U_i)Z_i\big ]K\Big (\frac{U_i-u}{h_2}\Big )\Big \Vert \\&+\max _{1\le i\le n}\Big \Vert X_i^\mathrm{T}(\beta -\hat{\beta }_\mathrm{ME})W_iK\Big (\frac{U_i-u}{h_2}\Big )\Big \Vert \\&+ \max _{1\le i\le n}\Big \Vert W_i^\mathrm{T}(\tilde{\alpha }(u)-\alpha (u))W_iK\Big (\frac{U_i-u}{h_2}\Big )\Big \Vert \\&+\max _{1\le i\le n}\Big \Vert Z_i^\mathrm{T}(\alpha (U_i)-\tilde{\alpha }(U_i))W_iK\Big (\frac{U_i-u}{h_2}\Big )\Big \Vert \\&+\max _{1\le i\le n}\Big \Vert (\eta _i\eta _i^\mathrm{T}-\Sigma _{\eta })\tilde{\alpha }(U_i)K\Big (\frac{U_i-u}{h_2}\Big )\Big \Vert \\&+\max _{1\le i\le n}\Big \Vert \Sigma _{\eta }(\tilde{\alpha }(U_i)-\alpha (U_i))K\Big (\frac{U_i-u}{h_2}\Big )\Vert \\&+\max _{1\le i\le n}\Big \Vert \eta _i^\mathrm{T}(\alpha (U_i)-\tilde{\alpha }(U_i))Z_iK\Big (\frac{U_i-u}{h_2}\Big )\Big \Vert \\&:= \sum _{i=1}^7J_{in}. \end{aligned}$$

From the Markov inequality, the moment condition (C3) and the bandwidth condition on \(h_2\), one can obtain that

$$\begin{aligned} P(J_{1n}\ge \sqrt{nh_2})&\le (nh_2)^{-s}\sum _{i=1}^n E\Big [\Big (\varepsilon _i(Z_i+\eta _i)+\eta _i^\mathrm{T}\alpha (U_i)Z_i\Big )K\Big (\frac{U_i-u}{h_2}\Big )\Big ]^{2s}\\&\le C(nh_2)^{1-s}\rightarrow 0, \end{aligned}$$

which implies that \(J_{1n}=o_p(\sqrt{nh_2})\). Similarly, by \(\hat{\beta }-\beta =O_p(n^{-1/2})\), \(\tilde{\alpha }(u)-\alpha (u)=O_p(n^{-1/2})\) and \(\eta _i\eta _i^\mathrm{T}-\Sigma _{\eta }=o(n^{1/s})\) a.s., it can be shown that \(J_{in}=o_p(\sqrt{nh_2})\) for \(i=2,\ldots ,7\). Therefore, \(\max _{1\le i\le n}\Vert \widehat{\zeta }_i(\alpha (u))\Vert =o_p((nh_2)^{1/2})\), which together with \(\max _{1\le i\le n}\Vert A(X_i)\Vert =o(n^{1/2})\) a.s. gives \(\max _{1\le i\le n}\Vert \widehat{\xi }_i(\alpha (u))\Vert =o_p(1)\).

Applying Lemma 5.4(ii) and the arguments in Owen (1990), one can derive that \(\lambda _{21}=O_p(n^{-1/2})\) and \(\lambda _{22}=O_p((nh_2)^{-1/2})\), which completes the proof of Lemma 5.4. \(\square \)

Proof of Theorem 2.1

Applying the Taylor expansion to (2.8) and invoking Lemma 5.3, we obtain that

$$\begin{aligned} {\mathcal {L}}_{1n,AI}(\beta _0)=2\sum _{i=1}^n\{\lambda _1^\mathrm{T}\psi _i(\beta _0)-[\lambda _1^\mathrm{T}\psi _i(\beta _0)]^2/2\}+o_p(1). \end{aligned}$$

From (2.9), we have

$$\begin{aligned} 0&= \frac{1}{n}\sum _{i=1}^n\frac{\psi _i(\beta _0)}{1+\lambda _1^\mathrm{T}\psi _i(\beta _0)}\\&= \frac{1}{n}\sum _{i=1}^n\psi _i(\beta _0)-\frac{1}{n}\sum _{i=1}^n\psi _i(\beta _0)\psi ^\mathrm{T}_i(\beta _0)\lambda _1 +\frac{1}{n}\sum _{i=1}^n\frac{\psi _i(\beta _0)[\lambda _1^\mathrm{T} \psi _i(\beta _0)]^2}{1+\lambda _1^\mathrm{T}\psi _i(\beta _0)}. \end{aligned}$$

Using Lemma 5.3, we find

$$\begin{aligned} \left\| \frac{1}{n}\sum _{i=1}^n\frac{\psi _i(\beta _0)[\lambda _1^\mathrm{T}\psi _i(\beta _0)]^2}{1+\lambda _1^\mathrm{T}\psi _i(\beta _0)}\right\|&\le \frac{1}{n}\sum _{i=1}^n\frac{\Vert \psi _i(\beta _0)\Vert ^3\Vert \lambda _1\Vert ^2}{|1+\lambda _1^\mathrm{T} \psi _i(\beta _0)|}\\&\le \Vert \lambda _1\Vert ^2\max _{1\le i\le n}\Vert \psi _i(\beta _0)\Vert \frac{1}{n}\sum _{i=1}^n\Vert \psi _i(\beta _0)\Vert ^2\\&= O_p(n^{-1})o_p(n^{1/2})O_p(1)=o_p(n^{-1/2}). \end{aligned}$$

Then \(\sum _{i=1}^n[\lambda _1^\mathrm{T} \psi _i(\beta _0)]^2=\sum _{i=1}^n\lambda _1^\mathrm{T} \psi _i(\beta _0)+o_p(1),\) and

$$\begin{aligned} \lambda _1=\left[ \sum _{i=1}^n\psi _i(\beta _0)\psi _i^\mathrm{T}(\beta _0)\right] ^{-1}\sum _{i=1}^n\psi _i(\beta _0) +o_p(n^{-1/2}). \end{aligned}$$

Thus,

$$\begin{aligned} {\mathcal {L}}_{1n,AI}(\beta _0)&= \left( \frac{1}{\sqrt{n}}\sum _{i=1}^n\psi _i(\beta _0)\right) ^\mathrm{T} \left( \frac{1}{n}\sum _{i=1}^n\psi _i(\beta _0)\psi ^\mathrm{T}_i(\beta _0)\right) ^{-1} \left( \frac{1}{\sqrt{n}}\sum _{i=1}^n\psi _i(\beta _0)\right) \\&+\,o_p(1). \end{aligned}$$

This together with Lemma 5.3 completes the proof. \(\square \)
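The proof above hinges on the quadratic approximation of the empirical log-likelihood ratio. The sketch below is ours and illustrative only (all names are our own); taking \(\psi_i(\beta)=X_i-\beta\) as a stand-in estimating function, it computes the ratio statistic by a damped Newton solve for the vector multiplier \(\lambda_1\) and compares it with the quadratic form appearing in the last display of the proof.

```python
import numpy as np

def el_statistic(psi, n_iter=50):
    """2 * sum_i log(1 + lam'psi_i), with lam solving
    sum_i psi_i / (1 + lam'psi_i) = 0 via damped Newton iterations."""
    n, d = psi.shape
    lam = np.zeros(d)
    for _ in range(n_iter):
        w = 1.0 + psi @ lam
        g = (psi / w[:, None]).sum(axis=0)      # score in lam
        J = -(psi.T * (1.0 / w**2)) @ psi       # its Jacobian (neg. definite)
        step = np.linalg.solve(J, -g)
        t = 1.0
        while np.any(1.0 + psi @ (lam + t * step) <= 0):
            t *= 0.5                            # keep all weights positive
        lam = lam + t * step
    return 2.0 * np.sum(np.log1p(psi @ lam))

rng = np.random.default_rng(2)
x = rng.normal(size=(300, 2))
psi = x - 0.0                 # psi_i(beta) at the true beta = (0, 0)
stat = el_statistic(psi)
psibar = psi.mean(axis=0)
# Quadratic form from the proof:
#   (n^{-1/2} sum psi)' (n^{-1} sum psi psi')^{-1} (n^{-1/2} sum psi)
quad = 300 * psibar @ np.linalg.solve(psi.T @ psi / 300, psibar)
print(stat, quad)             # agree up to the o_p(1) remainder; ~ chi^2_2
```

In the paper the same expansion is applied to the bias-corrected \(\psi_i(\beta)\), so the statistic is calibrated against a chi-squared distribution with the corresponding degrees of freedom.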

Proof of Theorem 2.2

Note that \(\hat{\beta }_\mathrm{ME}\) and \(\hat{\lambda }_1=\lambda (\hat{\beta }_\mathrm{ME})\) satisfy \(H_{1n}(\hat{\beta }_\mathrm{ME},\hat{\lambda }_1)=0\) and \(H_{2n}(\hat{\beta }_\mathrm{ME},\hat{\lambda }_1)=0\), where

$$\begin{aligned} H_{1n}(\beta ,\lambda _1)&= \frac{1}{n}\sum _{i=1}^n\frac{\varphi _i(\beta )}{1+\lambda _1^\mathrm{T}\varphi _i(\beta )} \quad \text{ and } \\ H_{2n}(\beta ,\lambda _1)&= \frac{1}{n}\sum _{i=1}^n\frac{1}{1+\lambda _1^\mathrm{T}\varphi _i(\beta )}\left( \frac{\partial \varphi _i(\beta )}{\partial \beta ^\mathrm{T}}\right) ^\mathrm{T}\lambda _1. \end{aligned}$$

Then by expanding \(H_{1n}(\hat{\beta }_\mathrm{ME},\hat{\lambda }_1)=0\) and \(H_{2n}(\hat{\beta }_\mathrm{ME},\hat{\lambda }_1)=0\) at \((\beta _0,0)\), we derive that

$$\begin{aligned} 0&= H_{1n}(\hat{\beta }_\mathrm{ME},\hat{\lambda }_1)=H_{1n}(\beta _0,0)+\frac{\partial H_{1n}(\beta _0,0)}{\partial \beta ^\mathrm{T}}(\hat{\beta }_\mathrm{ME}-\beta _0)\\&+\,\frac{\partial H_{1n}(\beta _0,0)}{\partial \lambda ^\mathrm{T}_1}(\hat{\lambda }-0)+o_p(\delta _n),\\ 0&= H_{2n}(\hat{\beta }_\mathrm{ME},\hat{\lambda }_1)=H_{2n}(\beta _0,0)+\frac{\partial H_{2n}(\beta _0,0)}{\partial \beta ^\mathrm{T}}(\hat{\beta }_\mathrm{ME}-\beta _0)\\&+\,\frac{\partial H_{2n}(\beta _0,0)}{\partial \lambda ^\mathrm{T}_1}(\hat{\lambda }_1-0)+o_p(\delta _n), \end{aligned}$$

where \(\delta _n=\Vert \hat{\beta }_\mathrm{ME}-\beta _0\Vert +\Vert \hat{\lambda }_1\Vert \). Then, we find

$$\begin{aligned} \left( \begin{array}{l} \hat{\lambda }_1\\ \hat{\beta }_\mathrm{ME}-\beta _0 \end{array}\right)&= \left( \begin{array}{ll} \frac{\partial H_{1n}(\beta ,\lambda _1)}{\partial \lambda ^\mathrm{T}_1} &{} \frac{\partial H_{1n}(\beta ,\lambda _1)}{\partial \beta ^\mathrm{T}}\\ \frac{\partial H_{2n}(\beta ,\lambda _1)}{\partial \lambda ^\mathrm{T}_1} &{} \frac{\partial H_{2n}(\beta ,\lambda _1)}{\partial \beta ^\mathrm{T}} \end{array}\right) _{(\beta _0,0)}^{-1} \left( \begin{array}{l} -H_{1n}(\beta _0,0)\!+\!o_p(\delta _n)\\ -H_{2n}(\beta _0,0)\!+\!o_p(\delta _n) \end{array}\right) \\&= \left( \begin{array}{ll} -\frac{1}{n}\sum _{i=1}^n \varphi _i(\beta _0)\varphi ^\mathrm{T}_i(\beta _0) &{} \frac{1}{n}\sum _{i=1}^n\frac{\partial \varphi _i(\beta _0)}{\partial \beta ^\mathrm{T}}\\ \frac{1}{n}\sum _{i=1}^n\frac{\partial \varphi _i(\beta _0)}{\partial \beta ^\mathrm{T}} &{} 0 \end{array}\right) ^{-1} \left( \begin{array}{l} -H_{1n}(\beta _0,0)+o_p(\delta _n)\\ -H_{2n}(\beta _0,0)+o_p(\delta _n) \end{array}\right) \\&= \left( \begin{array}{ll} -\frac{1}{n}\sum _{i=1}^n \varphi _i(\beta _0)\varphi ^\mathrm{T}_i(\beta _0) &{} -\widehat{\Delta }_1\\ -\widehat{\Delta }_1 &{} 0 \end{array}\right) ^{-1} \left( \begin{array}{l} -H_{1n}(\beta _0,0)+o_p(\delta _n)\\ -H_{2n}(\beta _0,0)+o_p(\delta _n) \end{array}\right) \end{aligned}$$

where \(\widehat{\Delta }_1=\frac{1}{n}\sum _{i=1}^n(\widetilde{X}_i\widetilde{X}_i^\mathrm{T}-X^\mathrm{T} Q_i^\mathrm{T} \Sigma _{\eta } Q_i X)\). Lemma 5.3 and \(H_{1n}(\beta _0,0)=n^{-1}\sum _{i=1}^n\varphi _i(\beta _0)=O_p(n^{-1/2})\) imply \(\delta _n=O_p(n^{-1/2})\). Therefore,

$$\begin{aligned} \sqrt{n}(\hat{\beta }_\mathrm{ME}-\beta _0)=\widehat{\Delta }_1^{-1}\frac{1}{\sqrt{n}}\sum _{i=1}^n\varphi _i(\beta _0)+o_p(1). \end{aligned}$$

This, together with (5.1), Lemma 5.1 and the Slutsky theorem, yields the result of Theorem 2.2. \(\square \)

Theorem 2.3 can be proved by the same argument as in Theorem 2.1. Theorem 2.4 can be verified by the proof of Theorem 3 in Feng and Xue (2014) and the asymptotic normality of \(\tilde{\beta }\) and \(\hat{\beta }_\mathrm{ME}\). We omit the details here.


Cite this article

Fan, GL., Xu, HX. & Huang, ZS. Empirical likelihood for semivarying coefficient model with measurement error in the nonparametric part. AStA Adv Stat Anal 100, 21–41 (2016). https://doi.org/10.1007/s10182-015-0247-7
