Abstract
For the partially linear varying-coefficient model in which the parametric covariates are measured with additive errors, we define an estimator of the error variance based on residuals of the model, and we construct the corresponding jackknife estimator and jackknife empirical likelihood statistic of the error variance. Assuming that the response variables and their associated covariates form a stationary \(\alpha \)-mixing sequence, we prove that the proposed estimators are asymptotically normal and that the jackknife empirical likelihood statistic has an asymptotic \(\chi ^2\) distribution. Numerical simulations are carried out to assess the performance of the proposed method.
References
Ahmad I, Leehalanon S, Li Q (2005) Efficient estimation of a semiparametric partially linear varying coefficient model. Ann Stat 33:258–283
Bravo F (2014) Varying coefficients partially linear models with randomly censored data. Ann Inst Stat Math 66:383–412
Doukhan P (1994) Mixing: properties and examples. Springer, New York
Fan GL, Xu HX, Liang HY (2012) Empirical likelihood inference for partially time-varying coefficient errors-in-variables models. Electron J Stat 6:1040–1058
Fan GL, Liang HY, Wang JF (2013) Statistical inference for partially linear time-varying coefficient errors-in-variables models. J Stat Plann Inference 143:505–519
Fan GL, Liang HY, Wang JF (2013) Empirical likelihood for heteroscedastic partially linear errors-in-variables model with \(\alpha \)-mixing errors. Stat Pap 54:85–112
Fan J, Huang T (2005) Profile likelihood inferences on semiparametric varying-coefficient partially linear models. Bernoulli 11:1031–1057
Feng H, Peng L (2012) Jackknife empirical likelihood tests for distribution functions. J Stat Plan Inference 142:1571–1585
Feng S, Xue L (2014) Bias-corrected statistical inference for partially linear varying coefficient errors-in-variables models with restricted condition. Ann Inst Stat Math 66:121–140
Golub GH, Van Loan CF (1996) Matrix computations, 3rd edn. Johns Hopkins University Press, Baltimore
Gong Y, Peng L, Qi Y (2010) Smoothed jackknife empirical likelihood method for ROC curve. J Multivar Anal 101:1520–1531
Hall P (1992) The bootstrap and Edgeworth expansion. Springer, New York
Hall P, La Scala B (1990) Methodology and algorithms of empirical likelihood. Int Stat Rev 58:109–127
Hong S, Cheng P (1994) The convergence rate of estimation for parameter in a semiparametric model. Chin J Appl Probab Stat 10:62–71
Huang Z, Zhang R (2009) Empirical likelihood for nonparametric parts in semiparametric varying-coefficient partially linear models. Stat Probab Lett 79:1798–1808
Jing BY, Yuan J, Zhou W (2009) Jackknife empirical likelihood. J Am Stat Assoc 104:1224–1232
Liang H, Härdle W, Carroll RJ (1999) Estimation in a semiparametric partially linear errors-in-variables model. Ann Stat 27:1519–1535
Liang HY, Jing BY (2009) Asymptotic normality in partially linear models based on dependent errors. J Stat Plan Inference 139:1357–1371
Liang HY, Mammitzsch V, Steinebach J (2006) On a semiparametric regression model whose errors form a linear process with negatively associated innovations. Statistics 40:207–226
Liebscher E (2001) Estimation of the density and the regression function under mixing conditions. Stat Decis 19:9–26
Lin Z, Lu C (1996) Limit theory for mixing dependent random variables. Science Press, New York
Miao Y, Zhao F, Wang K, Chen Y (2013) Asymptotic normality and strong consistency of LS estimators in the EV regression model with NA errors. Stat Pap 54:193–206
Miller RG (1974) An unbalanced jackknife. Ann Stat 2:880–891
Owen AB (1988) Empirical likelihood ratio confidence intervals for a single functional. Biometrika 75:237–249
Owen AB (1990) Empirical likelihood ratio confidence regions. Ann Stat 18:90–120
Peng L (2012) Approximate jackknife empirical likelihood method for estimating equations. Can J Stat 40:110–123
Peng L, Qi Y, Van Keilegom I (2012) Jackknife empirical likelihood method for copulas. Test 21:74–92
Shao QM (1993) Complete convergence for \(\alpha \)-mixing sequences. Stat Probab Lett 16:279–287
Singh S, Jain K, Sharma S (2014) Replicated measurement error model under exact linear restrictions. Stat Pap 55:253–274
Wang X, Li G, Lin L (2011) Empirical likelihood inference for semiparametric varying-coefficient partially linear EV models. Metrika 73:171–185
Wei C, Luo Y, Wu X (2012) Empirical likelihood for partially linear additive errors-in-variables models. Stat Pap 53:485–496
Yang SC (2007) Maximal moment inequality for partial sums of strong mixing sequences and application. Acta Math Sin Engl Ser 23:1013–1024
You J, Chen G (2006) Estimation of a semiparametric varying-coefficient partially linear errors-in-variables model. J Multivar Anal 97:324–341
You J, Chen G (2007) Semiparametric generalized least squares estimation in partially linear regression models with correlated errors. J Stat Plan Inference 137:117–132
You J, Zhou X, Chen G (2005) Jackknifing in partially linear regression models with serially correlated errors. J Multivar Anal 92:386–404
You J, Zhou Y (2006) Empirical likelihood for semiparametric varying-coefficient partially linear regression models. Stat Probab Lett 76:412–422
Zhang JJ, Liang HY (2012) Asymptotic normality of estimators in heteroscedastic semiparametric model with strong mixing errors. Commun Stat 41:2172–2201
Zhou H, You J, Zhou B (2010) Statistical inference for fixed-effects partially linear regression models with errors in variables. Stat Pap 51:629–650
Zi X, Zou C, Liu Y (2012) Two-sample empirical likelihood method for difference between coefficients in linear regression model. Stat Pap 53:83–93
Acknowledgments
The authors would like to thank the anonymous referees for their valuable comments and suggestions, which led to the improvement of the paper. This research was supported by the National Natural Science Foundation of China (11271286) and the Specialized Research Fund for the Doctoral Program of Higher Education (20120072110007).
Appendix
In this section, we give some preliminary lemmas, which are used in Section 5. Let \(\{X_i, i\ge 1\}\) be a stationary sequence of \(\alpha \)-mixing random variables with mixing coefficients \(\{\alpha (k)\}\).
Lemma 6.1
(Liebscher (2001), Proposition 5.1) Assume that \(EX_i=0\) and \(|X_i|\le S<\infty \) a.s. \((i=1,2,\cdots ,n)\). Then for n, \(m\in \mathbb {N}\), \(0<m\le n/2\) and \(\epsilon >0\), \( P(|\sum _{i=1}^nX_i|>\epsilon )\le 4\exp \{-\frac{\epsilon ^2}{16}(nm^{-1}D_m+\frac{1}{3}\epsilon Sm)^{-1}\}+32\frac{S}{\epsilon }n\alpha (m), \) where \(D_m=\max _{1\le j\le 2m}Var(\sum _{i=1}^jX_i)\).
Lemma 6.2
(Yang (2007), Theorem 2.2)
(i)
Let \(r>2,~\delta >0,~EX_i=0\) and \(E|X_i|^{r+\delta }<\infty \). Suppose that \(\lambda >r(r+\delta )/(2\delta )\) and \(\alpha (n)=O(n^{-\lambda })\). Then for any \(\epsilon >0\), there exists a positive constant \(C:=C(\epsilon ,r,\delta ,\lambda )\) such that \(E\max _{1\le m\le n}|\sum _{i=1}^mX_i|^r\le C\{n^\epsilon \sum _{i=1}^nE|X_i|^r+(\sum _{i=1}^n\Vert X_i\Vert _{r+\delta }^2)^{r/2}\}.\)
(ii)
If \(EX_i=0\) and \(E|X_i|^{2+\delta }<\infty \) for some \(\delta >0\), then \(E(\sum _{i=1}^nX_i)^2\le \{1+16\sum _{l=1}^n\alpha ^{\frac{\delta }{2+\delta }}(l)\}\sum _{i=1}^n\Vert X_i\Vert _{2+\delta }^2\).
Lemma 6.3
(Lin and Lu (1996), Theorem 3.2.1) Suppose that \(EX_1\!=\!0,~~E|X_1|^{2+\delta }\!<\!\infty \) for some \(\delta \!>\!0\) and \(\sum _{n=1}^{\infty }\alpha ^{\delta /(2+\delta )}(n)\!<\!\infty \). Then \(\sigma ^2\!:=\!EX_1^2+2\sum _{j=2}^\infty EX_1X_j<\infty \) and, if \(\sigma \ne 0\), \( \frac{S_n}{\sigma \sqrt{n}}\mathop {\rightarrow }\limits ^\mathcal{{D}}N(0,1). \)
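As a numerical illustration of the CLT in Lemma 6.3, the following sketch simulates a Gaussian AR(1) process, which is geometrically \(\alpha \)-mixing, and standardizes its partial sums by the long-run variance \(\sigma ^2\). The coefficient `rho`, the sample size and the number of replications are illustrative choices, not values from the paper.

```python
import numpy as np

# Monte Carlo illustration of Lemma 6.3 (CLT for stationary alpha-mixing
# sequences), using a Gaussian AR(1) process as the mixing sequence.
rng = np.random.default_rng(0)
rho, n, reps = 0.5, 1000, 2000

# Long-run variance sigma^2 = E X_1^2 + 2*sum_{j>=2} E X_1 X_j; for AR(1)
# with unit innovation variance this equals 1/(1-rho)^2, so sigma = 2 here.
sigma = 1.0 / (1.0 - rho)

eta = rng.standard_normal((reps, n))
x = np.empty((reps, n))
# start each path from the stationary distribution N(0, 1/(1-rho^2))
x[:, 0] = rng.standard_normal(reps) / np.sqrt(1.0 - rho**2)
for i in range(1, n):
    x[:, i] = rho * x[:, i - 1] + eta[:, i]

z = x.sum(axis=1) / (sigma * np.sqrt(n))  # S_n / (sigma * sqrt(n))
print(z.mean(), z.std())  # both should be close to (0, 1)
```

The empirical mean and standard deviation of the standardized sums are close to 0 and 1, consistent with the \(N(0,1)\) limit.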
Lemma 6.4
(Miller (1974), Lemma 2.1) For a nonsingular matrix A, and vectors U and V, we have \((A+UV^\tau )^{-1}=A^{-1}-\frac{(A^{-1}U)(V^\tau A^{-1})}{1+V^\tau A^{-1}U}\).
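Lemma 6.4 is the Sherman–Morrison formula and can be checked directly; the random matrix \(A\) and vectors \(U,V\) below are arbitrary illustrative inputs (with \(A\) made diagonally dominant so that it is nonsingular).

```python
import numpy as np

# Direct numerical check of Lemma 6.4 (Sherman-Morrison identity):
# (A + U V^T)^{-1} = A^{-1} - (A^{-1} U)(V^T A^{-1}) / (1 + V^T A^{-1} U)
rng = np.random.default_rng(1)
p = 5
A = rng.standard_normal((p, p)) + p * np.eye(p)  # nonsingular by dominance
U = rng.standard_normal((p, 1))
V = rng.standard_normal((p, 1))

Ainv = np.linalg.inv(A)
denom = (V.T @ Ainv @ U).item()  # scalar 1 + V^T A^{-1} U must be nonzero
lhs = np.linalg.inv(A + U @ V.T)
rhs = Ainv - (Ainv @ U) @ (V.T @ Ainv) / (1.0 + denom)
print(np.allclose(lhs, rhs))
```

The identity is exact, so the two sides agree to machine precision.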
Lemma 6.5
(Shao (1993), Corollary 1) Let \(EX_i=0\) and \(\sup _i E|X_i|^r<\infty \) for some \(r>1\). Suppose that \(\alpha (n)=O(\log ^{-\psi }n)\) for some \(\psi >r/(r-1)\). Then \(n^{-1}\sum _{i=1}^n X_i=o(1)~~a.s\).
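The strong law in Lemma 6.5 can likewise be illustrated by simulation: the sample mean of a centered, geometrically mixing AR(1) process tends to 0. The coefficient `rho` and the sample size are illustrative.

```python
import numpy as np

# Illustration of Lemma 6.5 (strong law for alpha-mixing sequences):
# n^{-1} sum X_i -> 0 a.s. for a centered AR(1) process.
rng = np.random.default_rng(2)
rho, n = 0.5, 200_000
eta = rng.standard_normal(n)
x = np.empty(n)
x[0] = eta[0] / np.sqrt(1.0 - rho**2)  # stationary start
for i in range(1, n):
    x[i] = rho * x[i - 1] + eta[i]
print(abs(x.mean()))  # close to 0
```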
Lemma 6.6
Suppose (A1)–(A3), (A5) and (A6) are satisfied, then
Proof
We only prove (6.1) here, because (6.2) can be proved similarly. Write
Here, we only give the proof of
We divide \(\Omega \) into subintervals \(\{\Delta _l\}\) (\(l=1,2,\cdots ,l_n\)) with length \(r_n=h\sqrt{\frac{\log n}{nh}}\), and the center of \(\Delta _l\) is at \(t_l\). Then the total number of the subintervals satisfies \(l_n=O(r_n^{-1})\). Then
Therefore, to prove (6.4), it is sufficient to show that \(I_k=O_p(c_n),~~k=1,2,3\).
Using the Lipschitz continuity of \(K(\cdot )\), we have \( |K_h(T_i-t)-K_h(T_i-t_l)|\le \frac{C_1}{h^2}|t-t_l|I(|T_i-t_l|\le C_2h)\le \frac{C_1 r_n}{h^2}I(|T_i-t_l|\le C_2h). \) Therefore, the \((k_1,k_2)\) component in \(I_1\), \(1\le k_1\le k_2\le p\), can be written as
For \(I_{11}\), applying Lemmas 6.1 and 6.2 we have
where \(D_m=\max _{1\le j\le 2m}E(h\sum _{i=1}^j[|W_{ik_1}W_{ik_2}|I(|T_i-t_l|\le C_2h) -E|W_{ik_1}W_{ik_2}|I(|T_i-t_l|\le C_2h)])^2 n^{-2/\delta } \le \frac{C_2mh}{n^{2/\delta }}\). Taking \(m=[\frac{n^{1-1/\delta }h}{C_0\sqrt{nh\log n}}]\), we have
On the other hand, we have \(E|W_{ik_1}W_{ik_2}|I(|T_i-t_l|\le C_2h)=O(h).\) Therefore \(I_{12}=O(\sqrt{\frac{\log n}{nh}}). \) Together with (6.5), one can derive \(I_1=O_p(c_n).\) One can rewrite \(I_2\) as
By the same technique used in proving (6.5), we have \(I_{21}=O_p\left( \sqrt{\frac{\log n}{nh}}\right) \), \(I_{22}=O_p\left( \sqrt{\frac{\log n}{nh}}\right) .\) Using Taylor’s expansion, we have \(I_{23}=O(h^2). \) From (A1), we have
Thus, (6.4) is proved, which completes the proof of this lemma. \(\square \)
Lemma 6.7
Suppose (A1)–(A3), (A5) and (A6) are satisfied, then \( \frac{1}{n}\sum _{i=1}^n\tilde{\xi _i}\tilde{\xi _i}^\tau \mathop {\rightarrow }\limits ^\mathrm{P} \Sigma _e+EX_1X_1^\tau -E[\Phi ^\tau (T_1)\Gamma ^{-1}(T_1)\Phi (T_1)]. \)
Proof
From the definition \(\tilde{\xi _i}^\tau =\xi _i^\tau -S_i{\varvec{\xi }}\) and (1.1), we have
where \(S_i=(W_i^\tau ,~0)(D_{T_i}^\tau \omega _{T_i}D_{T_i})^{-1}D_{T_i}^\tau \omega _{T_i}\). By (6.1) and (6.2) in Lemma 6.6, we have
Similarly, using the approaches above and those in the proof of (6.1) and (6.2), we have
From (6.6) and using Lemma 6.5, it follows that
Similarly \( \frac{1}{n}\sum _{i=1}^n(e_i^\tau -S_i\mathbf e )^\tau (X_i^\tau -S_i\mathbf X ) =\frac{1}{n}\sum _{i=1}^ne_i (X_i^\tau -W_i^\tau \Gamma ^{-1}(T_i)\Phi (T_i))\{1+O_p(c_n)\} \mathop {\rightarrow }\limits ^\mathrm{P} 0. \) According to (6.7), we have \( \frac{1}{n}\sum _{i=1}^n(e_i^\tau -S_i\mathbf e )^\tau (e_i^\tau -S_i\mathbf e ) =\frac{1}{n}\sum _{i=1}^ne_ie_i^\tau \mathop {\rightarrow }\limits ^\mathrm{a.s.} \Sigma _e. \) Thus the conclusion is proved. \(\square \)
Lemma 6.8
Suppose (A1)–(A6) are satisfied, then \(\sum _{i=1}^n\tilde{\xi _i}\tilde{M}_i=o_p(\sqrt{n}),\) where \(\tilde{M}_i=M_i-S_iM\) and \(M_i=W_i^\tau a(T_i)\).
Proof
According to the definition, we have
Note that \( D_t^\tau \omega _tM =\Big (\begin{array}{ccc} \sum _{i=1}^nW_iW^\tau _i a(T_i) K_h(T_i-t) \\ \sum _{i=1}^nW_iW^\tau _i a(T_i) \frac{T_i-t}{h}K_h(T_i-t) \end{array}\Big ). \) Using the similar techniques in the proof of Lemma 6.6, one can easily check that \( D_t^\tau \omega _tM=n\Gamma (t)f(t) a(t)\otimes \Big (\begin{array}{c} 1 \\ 0 \end{array} \Big )\{1+O_p(c_n)\}. \) Therefore \(S_iM=W_i^\tau a(T_i)\{1+O_p(c_n)\}\), furthermore,
Then, from (6.6) and the law of large numbers for stationary \(\alpha \)-mixing sequences, one can obtain
Similarly with (6.7), we have \( \frac{1}{n}\sum _{i=1}^n(e_i^\tau -S_i\mathbf e )^\tau (M_i^\tau -S_iM) \mathop {\rightarrow }\limits ^\mathrm{P} 0, \) which, together with (6.8) and (6.10), yields that \( \sum _{i=1}^n\tilde{\xi }_i\tilde{M}_i=O_p(nc_n^2)=o_p(\sqrt{n}). \) \(\square \)
Lemma 6.9
(i)
Suppose (A1)–(A6) are satisfied, then
$$\begin{aligned} \sqrt{n}(\hat{\beta }_n-\beta )\mathop {\rightarrow }\limits ^\mathcal{{D}} N(0,\Sigma _1^{-1}\Sigma _2\Sigma _1^{-1}), \end{aligned}$$where \(\Sigma _1=E(X_1X_1^\tau )-E[\Phi ^\tau (T_1)\Gamma ^{-1}(T_1)\Phi (T_1)]\), \(\Phi (T_1)=E(W_1X_1^\tau |T_1)\), \(\Gamma (T_1)=E(W_1W_1^\tau |T_1)\) and \(\Sigma _2=\lim _{n\rightarrow \infty }Var\{\frac{1}{\sqrt{n}}\sum _{i=1}^n[\xi _i-\Phi ^\tau (T_i)\Gamma ^{-1}(T_i)W_i][\epsilon _i-e_i^\tau \beta ]\}\). Further, \(\hat{\Sigma }_1^{-1}\hat{\Sigma }_2\hat{\Sigma }_1^{-1}\) is a consistent estimator of \(\Sigma _1^{-1}\Sigma _2\Sigma _1^{-1}\), where \(\hat{\Sigma }_1=\frac{1}{n}\sum _{i=1}^n\tilde{\xi }_i\tilde{\xi }_i^\tau -\Sigma _e\) and \(\hat{\Sigma }_2=\frac{1}{n}\sum _{i=1}^n \big [\tilde{\xi }_i(\tilde{Y}_i-\tilde{\xi }_i^\tau \hat{\beta }_n)+\Sigma _e\hat{\beta }_n\big ]^{\otimes 2}\), where \(C^{\otimes 2}\) denotes \(CC^\tau \).
(ii)
Suppose (A1)–(A6) are satisfied, then \(\sqrt{n}(\hat{\beta }_J-\beta )=\sqrt{n}(\hat{\beta }_n-\beta )+o_p(1)\).
Proof
(i) Let \(\sum _{i=1}^n\tilde{\xi }_i\tilde{\xi }^\tau _i-n\Sigma _e=A\), then \(\hat{\beta }_n=A^{-1}\sum _{i=1}^n\tilde{\xi }_i\tilde{Y}^\tau _i\). Write
From Lemma 6.7, we have \(A^{-1}=O(\frac{1}{n})\). According to the definition and (1.1), we write
Similar to the proof of (6.2) in Lemma 6.6, one can easily check that \( D_t^\tau \omega _t\epsilon =n\mathbf 1 _{2q}\otimes \Big (\begin{array}{c} 1 \\ 0 \end{array} \Big )O_p\Big (\sqrt{\frac{\log n}{nh}}\Big ). \) Together with (6.1), (A1) and (A2), we have
Therefore
Combining (6.11)–(6.15) and Lemma 6.8, we have
Let \(\eta _i=\Sigma _e\beta +[\xi _i-\Phi ^\tau (T_i)\Gamma ^{-1}(T_i)W_i][\epsilon _i-e_i^\tau \beta ]\). Clearly, \(\{\eta _i,i\ge 1\}\) is an \(\alpha \)-mixing sequence with \(E\eta _i=0\) and \(E|\eta _i|^\delta <\infty \) for \(\delta >4\). Applying Lemma 6.3, one can complete the proof of (i).
(ii) To prove \(\sqrt{n}(\hat{\beta }_J-\beta )=\sqrt{n}(\hat{\beta }_n-\beta )+o_p(1)\), it is sufficient to prove \(\hat{\beta }_J=\hat{\beta }_n+o_p(\frac{1}{\sqrt{n}})\).
Note that \(\hat{\beta }_J=\hat{\beta }_n+\frac{n-1}{n}\sum _{i=1}^n(\hat{\beta }_n-\hat{\beta }_{n,-i}).\) Therefore, we only need to prove that
From the definition,
Using the fact [see Theorem 11.2.3 in Golub and Van Loan (1996)] that \( (A+B)^{-1}=A^{-1}-A^{-1}BA^{-1}-A^{-1}B\sum _{k=1}^{\infty }C^kA^{-1}, \) where A is a nonsingular matrix and \(C=-A^{-1}B\), we write
where \(D=A^{-1}B\sum _{k=1}^{\infty }C^kA^{-1}, A=[\sum _{j\ne i}\tilde{\xi }_j\tilde{\xi }_j^\tau -n\Sigma _e], B=\Sigma _e, C=-A^{-1}B\).
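The series expansion from Golub and Van Loan (1996) used here can be checked numerically; it converges when the spectral radius of \(A^{-1}B\) is below 1. The matrices below are illustrative (a small perturbation \(B\)), not the \(A\) and \(B\) of the proof.

```python
import numpy as np

# Check of (A+B)^{-1} = A^{-1} - A^{-1} B A^{-1}
#                       - A^{-1} B (sum_{k>=1} C^k) A^{-1},  C = -A^{-1} B,
# valid when the spectral radius of A^{-1}B is below 1.
rng = np.random.default_rng(4)
p = 4
A = rng.standard_normal((p, p)) + p * np.eye(p)
B = 0.1 * rng.standard_normal((p, p))  # small, so ||A^{-1}B|| < 1

Ainv = np.linalg.inv(A)
C = -Ainv @ B
# truncate the Neumann series; the tail is geometrically negligible
tail = sum(np.linalg.matrix_power(C, k) for k in range(1, 60)) @ Ainv
approx = Ainv - Ainv @ B @ Ainv - Ainv @ B @ tail
print(np.allclose(approx, np.linalg.inv(A + B)))
```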
Applying Lemma 6.4, we write
Let \(A=[\sum _{j=1}^n\tilde{\xi }_j\tilde{\xi }_j^\tau -n\Sigma _e]\), as in the proof of part (i). Then, combining (6.17), (6.18) and the definitions of \(\hat{\beta }_n\) and \(\hat{\beta }_{n,-i}\), and noting that \(\sum _{i=1}^n[\tilde{\xi }_i(\tilde{Y}_i-\tilde{\xi }_i^\tau \hat{\beta }_n)+\Sigma _e\hat{\beta }_n]=0\), we write
where \(v_i=\tilde{\xi }_i^\tau A^{-1}\tilde{\xi }_i\), \(r_i=\tilde{\xi }_i^\tau A^{-1}\Sigma _e A^{-1}\tilde{\xi }_i\). By Lemma 6.7 and (A3), we have \(v_i=O_p(n^{-1})\) and \(r_i=O_p(n^{-2})\). Therefore, to prove (6.16), it is sufficient to prove that
First, we deal with \(I_1\). Since
to prove the desired result, one needs only to show that
In fact, \(\max _{1\le i\le n}|v_i|=o(n^{-3/4})~a.s.\) follows as in the proof of Lemma 3 in Owen (1990); together with Lemma 6.11, this yields \( \frac{1}{\sqrt{n}}I_1=o_p(1). \) Similarly, \(\frac{1}{\sqrt{n}}I_2=o_p(1)\) and \(\frac{1}{\sqrt{n}}I_3=o_p(1).\)
Meanwhile, \( \Vert \frac{1}{\sqrt{n}}I_4\Vert =\frac{n}{\sqrt{n}}O_p(\frac{1}{n})\rightarrow 0. \) Similarly, we have
Recalling the definitions of A, B, C, D and Lemma 6.7, we have \(A^{-1}=O(\frac{1}{n})\), \(C=O(\frac{1}{n})\) and
Therefore, by (A3), one can easily obtain that \(\sqrt{n}D\sum _{i=1}^n\sum _{j\ne i}^n\tilde{\xi }_j\tilde{Y}_j=\sqrt{n}O\Big (\frac{1}{n^3}\Big )n^2O_p(1)\rightarrow 0.\) \(\square \)
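The jackknife combination used in this proof, \(\hat{\beta }_J=\hat{\beta }_n+\frac{n-1}{n}\sum _{i=1}^n(\hat{\beta }_n-\hat{\beta }_{n,-i})\), is the classical bias-corrected jackknife. As a sanity check outside the present model (the plug-in variance of an i.i.d. sample is an illustrative choice, not the paper's estimator), the same combination applied to the biased plug-in variance recovers the unbiased sample variance exactly:

```python
import numpy as np

# theta_J = theta_n + ((n-1)/n) * sum_i (theta_n - theta_{n,-i})
#         = n*theta_n - (n-1)*mean_i(theta_{n,-i})   (classical jackknife).
# For the plug-in variance (divisor n), the bias is exactly -sigma^2/n,
# so the jackknife removes it exactly and returns the divisor-(n-1) variance.
rng = np.random.default_rng(3)
x = rng.standard_normal(30)
n = len(x)

theta_n = np.var(x)  # plug-in (biased) variance, divisor n
leave_one_out = np.array([np.var(np.delete(x, i)) for i in range(n)])
theta_J = theta_n + (n - 1) / n * np.sum(theta_n - leave_one_out)

print(np.allclose(theta_J, np.var(x, ddof=1)))
```

Here the agreement is exact because the bias of the plug-in variance is linear in \(1/n\); in general the jackknife removes only the leading bias term.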
Lemma 6.10
Suppose (A3) and (A6) are satisfied, then \(\frac{1}{n}\sum _{i=1}^n\epsilon _iW_{ik}\!=\!o(n^{-1/4})~~a.s. \) for \(1\le k\le p\).
Proof
Following the proof of Lemma 2 in Hong and Cheng (1994) under the independent case, using Lemmas 6.1 and 6.2, it is not difficult to prove this lemma. \(\square \)
Lemma 6.11
Suppose (A1)–(A3), (A5) and (A6) are satisfied, then \( \frac{1}{n}\sum _{i=1}^n[\tilde{\xi }_i(\tilde{Y}_i-\tilde{\xi }_i^\tau \hat{\beta }_n)+\Sigma _e\hat{\beta }_n] [\tilde{\xi }_i(\tilde{Y}_i-\tilde{\xi }_i^\tau \hat{\beta }_n)+\Sigma _e\hat{\beta }_n]^\tau \mathop {\rightarrow }\limits ^\mathrm{P} \Sigma _3\) and \(\max _{1\le i\le n}\Vert \hat{\beta }_n-\hat{\beta }_{n,-i}\Vert =O_p(n^{-1})\), where \(\Sigma _3=(\Sigma _1+\Sigma _e)(\sigma ^2+\beta ^\tau \Sigma _e\beta )-\Sigma _e\beta \beta ^\tau \Sigma _e\).
Proof
(i) Write
First, we evaluate the cross terms. By Lemmas 6.9 and 6.5, (A2) and (A3), we have
Similarly \(\Vert \frac{1}{n}\sum _{i=1}^n\tilde{\xi }_i(\tilde{Y}_i-\tilde{\xi }_i^\tau \beta )\tilde{\xi }_i^\tau \tilde{\xi }_i^\tau (\hat{\beta }_n-\beta )\Vert \mathop {\rightarrow }\limits ^\mathrm{P}0.\) Note that \(\sum _{i=1}^n[\tilde{\xi }_i(\tilde{Y}_i-\tilde{\xi }_i^\tau \hat{\beta }_n)+\Sigma _e\hat{\beta }_n]=0\), with Lemma 6.7 we have
Therefore, one can write (6.20) as
On applying Lemma 6.5 and (6.6) we have
With \(\max _{1\le i\le n}\Vert \tilde{\xi }_i\Vert =o(n^{1/{2\delta }})\), \(\Vert \hat{\beta }_n-\beta \Vert =O_p(n^{-1/2})\) and Lemma 6.7, one can derive that \(H_2\rightarrow 0\) and \(H_3\rightarrow \Sigma _e\beta \beta ^\tau \Sigma _e.\) Hence, the first conclusion is verified.
Similar to the derivation of (6.19), one can write
where \(v_i=\tilde{\xi }_i^\tau A^{-1}\tilde{\xi }_i\), \(r_i=\tilde{\xi }_i^\tau A^{-1}\Sigma _e A^{-1}\tilde{\xi }_i\). Then, it is sufficient to show that
For \(a_{1i}\), since \(E[\tilde{\xi }_i(\tilde{Y}_i-\tilde{\xi }_i^\tau \hat{\beta }_n)+\Sigma _e\hat{\beta }_n]=0\), \(E\Vert \tilde{\xi }_i(\tilde{Y}_i-\tilde{\xi }_i^\tau \hat{\beta }_n)+\Sigma _e\hat{\beta }_n\Vert ^\delta <\infty \) and \(\max _{1\le i\le n}|v_i|=o(n^{-3/4})~~a.s.\), we have \(\max _{1\le i\le n}\Vert \tilde{\xi }_i(\tilde{Y}_i-\tilde{\xi }_i^\tau \hat{\beta }_n)+\Sigma _e\hat{\beta }_n\Vert =O_p(1)\). Therefore, \( \max _{1\le i\le n}\Vert a_{1i}\Vert =O_p(n^{-1}). \) It is easy to see that
which implies \(\frac{n^2\max _{1\le i\le n}\Vert a_{3i}\Vert }{\sqrt{n}}\rightarrow 0\). Then \(\max _{1\le i\le n}\Vert a_{3i}\Vert =o_p(n^{-3/2}). \) Similarly, \(\max _{1\le i\le n}\Vert a_{6i}\Vert =o_p(n^{-3/2}).\)
From \(\max _{1\le i\le n}|v_i|\!=\!o(1)~a.s.\), \(\max _{1\le i\le n}|r_i|\!=\!o(n^{-1})~a.s.\) and \(\max _{1\le i\le n}\Vert \tilde{\xi }_i\Vert =o(n^{1/{2\delta }})~~a.s.\), it is easy to show that \(\max _{1\le i\le n}\Vert a_{2i}\Vert =o(n^{-1})\), \(\max _{1\le i\le n}\Vert a_{4i}\Vert =O(n^{-2})\), \(\max _{1\le i\le n}\Vert a_{5i}\Vert =o(n^{-1})\), \(\max _{1\le i\le n}\Vert a_{7i}\Vert =o(n^{-2})\), \(\max _{1\le i\le n}\Vert a_{8i}\Vert =o(n^{-1})\).
Then the proof of the second conclusion is completed. \(\square \)
Liu, AA., Liang, HY. Jackknife empirical likelihood of error variance in partially linear varying-coefficient errors-in-variables models. Stat Papers 58, 95–122 (2017). https://doi.org/10.1007/s00362-015-0689-8
Keywords
- Asymptotic normality
- Error variance
- Jackknife empirical likelihood
- Varying-coefficient errors-in-variables model
- \(\alpha \)-Mixing