Abstract
In this paper, we study a partially varying-coefficient single-index model in which both the response and the predictors are observed with multiplicative distortions that depend on a commonly observable confounding variable. Because of these measurement errors, existing methods cannot be applied directly, so we use nonparametric regression to estimate the distortion functions and obtain calibrated variables. With these calibrated variables, initial estimators of the unknown coefficient and link functions are constructed by treating the parameter vector \(\beta \) as known; the least squares estimators of the unknown parameters then follow. Moreover, we establish the asymptotic properties of the proposed estimators. Simulation studies and a real data analysis illustrate the advantages of the proposed method.
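The calibration step described in the abstract can be sketched numerically. In the multiplicative-distortion setting (as in covariate-adjusted regression), the observed response is \(\tilde{Y}=\psi (U)Y\) with \(E[\psi (U)]=1\), so \(\psi \) can be estimated by kernel smoothing of \(\tilde{Y}\) on U. The following Python sketch uses hypothetical data and a hand-rolled Nadaraya–Watson smoother; it illustrates the idea only and is not the authors' implementation:

```python
import numpy as np

def nw_smooth(u0, u, y, h):
    """Nadaraya-Watson kernel smoother with an Epanechnikov kernel."""
    w = np.maximum(1 - ((u0[:, None] - u[None, :]) / h) ** 2, 0) * 0.75
    return (w * y).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(0)
n = 2000
u = rng.uniform(0, 1, n)                 # observable confounder U
y = 2.0 + rng.normal(0, 0.5, n)          # unobserved true response Y
psi = 1 + 0.5 * (u - 0.5)                # hypothetical distortion, E[psi(U)] = 1
y_tilde = psi * y                        # observed distorted response

# psi_hat(u) = E[Y_tilde | U = u] / E[Y_tilde]; then Y_hat = Y_tilde / psi_hat(U)
g_hat = nw_smooth(u, u, y_tilde, h=0.1)
psi_hat = g_hat / y_tilde.mean()
y_hat = y_tilde / psi_hat
```

Because \(E[\tilde{Y}\mid U=u]=\psi (u)E[Y]\) and \(E[\tilde{Y}]=E[Y]\), the ratio recovers \(\psi (u)\), and dividing it out yields calibrated responses whose error is much smaller than that of the raw distorted observations.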
References
Ahmad I, Leelahanon S, Li Q (2005) Efficient estimation of a semiparametric partially linear varying coefficient model. Ann Stat 33:258–283
Carroll RJ, Fan JQ, Gijbels I, Wand MP (1997) Generalized partially linear single-index models. J Am Stat Assoc 92:477–489
Chang ZQ, Xue LG, Zhu LX (2010) On an asymptotically more efficient estimation of the single-index model. J Multivar Anal 101:1898–1901
Cui X, Guo WS, Lin L, Zhu LX (2009) Covariate-adjusted nonlinear regression. Ann Stat 37:1839–1870
Cui X, Härdle WK, Zhu LX (2011) The EFM approach for single-index models. Ann Stat 39:1658–1688
Dai S, Huang ZS (2019) Estimation for varying coefficient partially nonlinear models with distorted measurement errors. J Korean Stat Soc 48:117–133
Delaigle A, Hall P, Zhuo WX (2016) Nonparametric covariate-adjusted regression. Ann Stat 44:2190–2220
Einmahl U, Mason DM (2005) Uniform in bandwidth consistency of kernel-type function estimators. Ann Probab 33:1380–1403
Fan J, Gijbels I (1996) Local polynomial modelling and its applications. Chapman and Hall, London
Fan JQ, Huang T (2005) Profile likelihood inferences on semiparametric varying-coefficient partially linear models. Bernoulli 11:1031–1057
Feng SY, Xue LG (2013) Variable selection for partially varying coefficient single-index model. J Appl Stat 40:2637–2652
Feng SY, Xue LG (2014) Bias-corrected statistical inference for partially linear varying coefficient errors-in-variables models with restricted condition. Ann Inst Stat Math 66:121–140
Härdle W, Hall P, Ichimura H (1993) Optimal smoothing in single-index models. Ann Stat 21:157–178
Harrison D, Rubinfeld DL (1978) Hedonic housing prices and the demand for clean air. J Environ Econ Manag 5:81–102
Huang ZS (2012) Efficient inferences on the varying-coefficient single-index model with empirical likelihood. Comput Stat Data Anal 56:4413–4420
Huang ZS, Lin BQ, Feng F, Pang Z (2013) Efficient penalized estimating method in the partially varying-coefficient single-index model. J Multivar Anal 114:189–200
Huang ZS, Zhang RQ (2010a) Tests for varying-coefficient parts on varying-coefficient single-index model. J Korean Math Soc 47:385–407
Huang ZS, Zhang RQ (2010b) Empirical likelihood for the varying-coefficient single-index model. Can J Stat 38:434–452
Huang ZS, Zhang RQ (2011) Efficient empirical-likelihood-based inferences for the single-index model. J Multivar Anal 102:937–947
Li F, Lin L, Cui X (2010) Covariate-adjusted partially linear regression models. Commun Stat Theory Methods 39:1054–1074
Li J, Huang C, Zhu H (2017) A functional varying-coefficient single-index model for functional response data. J Am Stat Assoc 112:1169–1181
Li JB, Zhang RQ (2010) Penalized spline varying-coefficient single-index model. Commun Stat Simul Comput 39:221–239
Li JB, Zhang RQ (2011) Partially varying coefficient single index proportional hazards regression models. Comput Stat Data Anal 55:389–400
Li TZ, Mei CL (2013) Estimation and inference for varying coefficient partially nonlinear models. J Stat Plann Inference 143:2023–2037
Liang H, Liu X, Li RZ, Tsai CL (2010) Estimation and testing for partially linear single-index models. Ann Stat 38:3811–3836
Mack YP, Silverman BW (1982) Weak and strong uniform consistency of kernel regression estimates. Z Wahrscheinlichkeitstheorie Verw Gebiete 61:405–415
Qian YY, Huang ZS (2016) Statistical inference for a varying-coefficient partially nonlinear model with measurement errors. Stat Methodol 32:122–130
Sentürk D, Müller HG (2005a) Covariate-adjusted regression. Biometrika 92:75–89
Sentürk D, Müller HG (2005b) Covariate adjusted correlation analysis via varying coefficient models. Scand J Stat 32:365–383
Wang JL, Xue LG, Zhu LX, Chong YS (2010) Estimation for a partial-linear single-index model. Ann Stat 38:246–274
Wang QH, Xue LG (2011) Statistical inference in partially-varying-coefficient single-index model. J Multivar Anal 102:1–19
Wong H, Ip WC, Zhang RQ (2008) Varying-coefficient single-index model. Comput Stat Data Anal 52:1458–1476
You JH, Chen GM (2006) Estimation of a semiparametric varying-coefficient partially linear errors-in-variables model. J Multivar Anal 97:324–341
You JH, Zhou Y, Chen GM (2006) Corrected local polynomial estimation in varying-coefficient models with measurement errors. Can J Stat 34:391–410
Yu Y, Ruppert D (2002) Penalized spline estimation for partially linear single-index models. J Am Stat Assoc 97:1042–1054
Zhang J, Yu Y, Zhu LX, Liang H (2013) Partial linear single index models with distortion measurement errors. Ann Inst Stat Math 65:237–267
Zhu LX, Fang KT (1996) Asymptotics for kernel estimate of sliced inverse regression. Ann Stat 24:1053–1068
Zhu LX, Xue LG (2006) Empirical likelihood confidence regions in a partially linear single-index model. J R Stat Soc Ser B 68:549–570
Acknowledgements
This research was supported by the National Natural Science Foundation of China (Grant Nos. 11471160, 11101114), the National Statistical Science Research Major Program of China (Grant No. 2018LD01), the Fundamental Research Funds for the Central Universities (Grant No. 30920130111015) and sponsored by Qing Lan Project. The authors would like to thank the referees for their valuable comments that led to a greatly improved presentation of the paper.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Appendix: Proof of the main results
We first impose some regularity conditions for the proofs of the theorems (Fig. 4).
- C1.: For any \(\beta \) near \(\beta _0\), the density function r(t) of \(\beta ^TX\) is Lipschitz continuous and bounded away from 0 on the support \(\mathcal {T}_{\beta }=\left\{ \beta ^Tx: x \in \mathcal {X} \right\} \), where the set \(\mathcal {X}\) is an open convex set.
- C2.: The function \(\alpha (t)\) has bounded and continuous derivatives up to order 2 on \(\mathcal {T}_{\beta _0}\).
- C3.: For \(l=1,\ldots, p\) and \(r=1,\ldots, q\), the functions \(\mu _{1l}(t)\) and \(\mu _{2r}(t)\) have bounded and continuous derivatives up to order 2 on \(\mathcal {T}_{\beta _0}\), where \(\mu _{1l}(t)\) is the lth component of \(\mu _1(t)=E(X|\beta ^TX=t)\) and \(\mu _{2r}(t)\) is the rth component of \(\mu _2(t)=E(Z|\beta ^TX=t)\).
- C4.: The density function of U, f(u), is continuous in \(\mathcal {N}(u_0)\) and \(\theta _j(\cdot )\) has continuous derivatives of order 2 in \(\mathcal {N}(u_0)\), where \(\mathcal {N}(u_0)\) denotes some neighbourhood of \(u_0\) and \(f(u_0)>0\).
- C5.: The joint density function of \((\beta ^TX, U)\), f(t, u), is bounded away from 0 on the support \(\mathcal {T}_{\beta _0}\times \mathcal {N}(u_0)\), and the functions \(f(t, u)\), \(\varkappa _1(t, u)\) and \(\varkappa _{2j}(t, u)\) have bounded partial derivatives up to order 4, where \(\varkappa _{2j}(t, u)\) is the jth component of \(\varkappa _2(t, u)\), for \(j=1,\ldots,q\).
- C6.: For \(s>2\), \(l=1,\ldots, p\) and \(r=1,\ldots,q\), E[Y], \(E[X_l]\) and \(E[Z_r]\) are bounded away from 0, and \(E[Y^2]\), \(E[X^2_l]\), \(E[Z^2_r]\), \(E\left( |Z_{1r}|^{2s}\mid U=u\right) \), \(E\left( |\varepsilon |^{2s}\mid U=u\right) \), \(E(|Z_{1r}|^{s}\mid \beta ^TX=t, U=u)\) and \(E(|\varepsilon |^{s}\mid X=x, Z=z, U=u)\) are bounded.
- C7.: Suppose p(v) is the density function of V. For \(r \in R^{q}\) and \(l \in R^{p}\), \(g_{Y}(v)=\psi (v)p(v)\), \(g_{r}(v)=\phi _r(v)p(v)\) and \(g_{l}(v)=\varphi _l(v)p(v)\) are greater than a positive constant and are differentiable. For some neighbourhood of the origin, denoted by \(\Delta \), and some constant \(c>0\), for any \(\delta \in \Delta \) we have
$$\begin{aligned} \begin{aligned}&\left| g_{Y}^{(3)}(u+\delta )-g_{Y}^{(3)}(u)\right| \le c|\delta |,\\&\left| g_{r}^{(3)}(u+\delta )-g_{r}^{(3)}(u)\right| \le c|\delta |, \quad 1 \le r \le q ,\\&\left| g_{l}^{(3)}(u+\delta )-g_{l}^{(3)}(u)\right| \le c|\delta |, \quad 1 \le l \le p ,\\&\left| p^{(3)}(u+\delta )-p^{(3)}(u)\right| \le c|\delta |. \end{aligned} \end{aligned}$$
- C8.: As \(n\rightarrow \infty \), the bandwidths satisfy:
  - (1): \(h_1\) is in the range from \(O(n^{-1/4}\log n)\) to \(O(n^{-1/8})\);
  - (2): for some \(\varsigma <2-\iota ^{-1}\), where \(\iota >2\), \(n^{2\varsigma -1}h_{\iota }\rightarrow \infty \) for \(\iota =3,4\).
- C9.: The kernel functions \(K(\cdot )\) and \(K_1(\cdot , \cdot )\) have the following properties:
  - (1): \(K(\cdot )\) is a bounded symmetric density function on its support \([-1, 1]\) and satisfies the Lipschitz condition;
  - (2): \(K_1(\cdot , \cdot )\) is a right continuous kernel function of order 4 with bounded variation and has support \(\left[ -1, 1\right] ^2\);
  - (3): \(\int _{-1}^{1}K(v) d v=1\), \(\int _{-1}^{1}vK(v)dv=0\), \(\int _{-1}^{1}v^2K(v)dv\ne 0\) and \(\int _{-1}^{1}|v|^{i}K(v) d v<\infty \) for \(i=1, 2, 3\).
- C10.: The matrices \(\Gamma \) and \(\Omega (u)\) are positive definite, where these matrices are defined in Theorem 2. The components of \(\sigma ^2(u)\) and \(\Omega (u)\) are continuous at the point \(u_0\) and \(\sigma ^2(u_0)\ne 0\).
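As a concrete check of C9, the Epanechnikov kernel \(K(v)=\frac{3}{4}(1-v^2)\) on \([-1,1]\) is bounded, symmetric and Lipschitz, and its moments satisfy C9(3). The following numerical verification is an illustrative sketch, not part of the original conditions:

```python
import numpy as np

# Epanechnikov kernel: bounded, symmetric, Lipschitz, supported on [-1, 1]
def K(v):
    return 0.75 * (1.0 - v**2) * (np.abs(v) <= 1)

v = np.linspace(-1, 1, 200001)
dv = v[1] - v[0]

m0 = np.sum(K(v)) * dv          # int K(v) dv       -> 1
m1 = np.sum(v * K(v)) * dv      # int v K(v) dv     -> 0 by symmetry
m2 = np.sum(v**2 * K(v)) * dv   # int v^2 K(v) dv   -> 0.2, nonzero as required
```

The second moment \(\mu _2=\int v^2K(v)dv=0.2\) is the quantity that appears in the bias terms of the local polynomial estimators used in the proofs.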
Lemma 1
Let \(\eta (X)\) be a continuous function satisfying \(E[\eta (X)]<\infty \). Suppose that conditions C7–C9 hold. Then we have
Proof
This follows directly from Cui et al. (2009). \(\square \)
Lemma 2
Let \((X_1, Y_1),\ldots, (X_n, Y_n)\) be i.i.d. bivariate random vectors with joint density function f(x, y). Assume that \(K(\cdot )\) is a bounded positive function with bounded support and is Lipschitz continuous. If \(E|Y|^s<+\infty \) and \(\sup \limits _{x}\int {|y|^sf(x, y)dy}<\infty \), then
provided that \(0<h\rightarrow 0\) and \(n^{2\epsilon -1}h\rightarrow \infty \) for some \(\epsilon <1-s^{-1}\), where h is a bandwidth and D is some closed set.
Proof
This follows directly from Mack and Silverman (1982). \(\square \)
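Lemma 2 asserts uniform (over a closed set D) consistency of kernel estimators at a rate driven by \(\left( nh/\log n\right) ^{-1/2}\) plus a bias term. The following simulation is a hypothetical illustration: it fits a Nadaraya–Watson estimator to a known regression function and checks that the uniform error over an interior interval shrinks as n grows; the function m and all tuning choices are assumptions made for this demonstration only:

```python
import numpy as np

rng = np.random.default_rng(1)

def sup_error(n, h):
    """Uniform error of a Nadaraya-Watson fit of m(x) = sin(2*pi*x) over D = [0.1, 0.9]."""
    x = rng.uniform(0.0, 1.0, n)
    y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.3, n)
    grid = np.linspace(0.1, 0.9, 81)   # closed set D, away from the boundary
    # Epanechnikov weights K((x_i - u)/h)
    w = np.maximum(1.0 - ((grid[:, None] - x[None, :]) / h) ** 2, 0.0)
    m_hat = (w * y).sum(axis=1) / w.sum(axis=1)
    return np.max(np.abs(m_hat - np.sin(2 * np.pi * grid)))

err_small = sup_error(500, h=0.15)     # small sample, larger bandwidth
err_large = sup_error(50_000, h=0.05)  # large sample, smaller bandwidth
```

With the larger sample and smaller bandwidth the supremum error is noticeably smaller, consistent with the \(O\left( h^2+(nh/\log n)^{-1/2}\right) \) behaviour the lemma describes.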
Proof of Theorem 1
From (2.9), we have
and
It can be shown that, for any \(\beta \in \mathcal {B}_n\) and each \(j=0, 1, 2, 3\),
where \(\mu _j=\int u^jK(u)du\). By simple calculation, we have
Using the arguments proposed in Zhu and Fang (1996), we have
where p(u) and \(g_Y(u)\) are defined in condition C7. Then, under condition C6 and using Lemma 2, we have
where \(C_n=h_1^4+n^{-1/2}h_1^{-1}\log n\). By decomposing \(R_2(u_0; \beta )\), we have
According to (A.3) and (A.4), \(R_{21}(u_0; \beta _0)=o_{P}(1)\) can be easily proved. By using Taylor’s expansion for \(\varkappa _2\left( \beta _0^T\hat{X}_i, U_i\right) \) at \(\hat{X}_i\), we can obtain
where \(X^{*}_i=(X_{i1}^{*},\ldots, X_{ip}^{*})\) with \(X^{*}_{il}\) a point between \(\hat{X}_{il}\) and \(X_{il}\). Together with (A.5), we have \(R_2(u_0; \beta )=o_{P}(1)\). Under assumption (2), we can show that
for \(r=1,..., q\). Thus, by the same argument as that for (A.8), we have
Applying Theorem 2 of Einmahl and Mason (2005), we obtain
for \(v=1, 2\), with probability 1, where \(\mathcal {X}\) and \(\mathcal {B}_n\) are defined in Sect. 2 and \(\mathcal {N}(u_0)\) is defined in condition C4. Therefore, by direct calculation, it can be proved that \(R_4(u_0; \beta )=o_{P}(1)\) and \(R_5(u_0; \beta )=o_{P}(1)\). Together with (A.4)–(A.7), we can then establish (A.2). It follows immediately that
for any \(\beta \in \mathcal {B}_n\), where \(R(u_0)=f(u_0)\Omega (u_0)\otimes diag(1, \mu _2)\) and \(\otimes \) denotes the Kronecker product.
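The limiting matrix \(R(u_0)=f(u_0)\Omega (u_0)\otimes diag(1, \mu _2)\) has a simple block structure. The short sketch below, with hypothetical stand-in values for \(f(u_0)\), \(\Omega (u_0)\) and \(\mu _2\) (the Epanechnikov second moment), shows how the Kronecker product interleaves the factor \(\mu _2\):

```python
import numpy as np

f_u0 = 0.8                                 # hypothetical density value f(u0)
Omega = np.array([[2.0, 0.5],
                  [0.5, 1.0]])             # hypothetical positive definite Omega(u0)
mu2 = 0.2                                  # int v^2 K(v) dv for the Epanechnikov kernel

# R(u0) = f(u0) * ( Omega(u0) kron diag(1, mu2) )
R = f_u0 * np.kron(Omega, np.diag([1.0, mu2]))
```

Since both Kronecker factors are positive definite, R inherits positive definiteness, in line with condition C10.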
To prove the asymptotic normality of \(\check{\theta }(u_0; \beta )\), we define a centering vector for \(\mathcal {A}_n(u_0; \beta )\) as
then, we can obtain
For simplicity of expression, we denote that \(\mathcal {A}_{n, j}=\mathcal {A}_{n, j}(u_0; \beta )\), \(\mathcal {A}_n=\mathcal {A}_n(u_0; \beta )\), \(R_n=R(u_0; \beta )\) and \(R_{n, j}=R_{n, j}(u_0; \beta )\). Using Taylor’s expansion for \(\theta (U_i)\) at \(u_0\), we have
for any \(\beta \in \mathcal {B}_n\) and \(j=0,1\). Thus, it can be proved that
for any \(\beta \in \mathcal {B}_n\). According to (2.9), we obtain
for any \(\beta \in \mathcal {B}_n\), which implies
for any \(\beta \in \mathcal {B}_n\). Therefore, to get the asymptotic normality of \(\check{\theta }(u_0; \beta )\), we only need to prove that
for any \(\beta \in \mathcal {B}_n\). Mimicking the proof of (A.1), we have
\(J_1(u_0)\) can be decomposed as
Combining (A.3) with the assumption on \(\varepsilon \) implies that
where \(c_1\) is a constant. Hence, \(\sup \limits _{u_0\in \mathcal {N}(u_0)}||J_{11}(u_0)||=o_{P}\left( (nh_3)^{-1/2}\right) \). By the same arguments as those for (A.6) and (A.11), we can prove that \(\sup \limits _{u_0\in \mathcal {N}(u_0)}||J_{13}(u_0)||=o_{P}\left( (nh_3)^{-1/2}\right) \). Let \(J_{14, j}(\cdot )\), \(\varkappa _{2j}(\cdot , \cdot )\) and \(\hat{\varkappa }_{2j}(\cdot , \cdot ; \cdot )\) be the jth components of \(J_{14}(\cdot )\), \(\varkappa _{2}(\cdot , \cdot )\) and \(\hat{\varkappa }_{2}(\cdot , \cdot ; \cdot )\), respectively. Based on Eq. (A.13) in Wang and Xue (2011), we have
Considering the conditions for bandwidths defined in Theorem 1, we get
Noting that \(||\beta -\beta _0||\le C_1n^{-1/2}\) for \(\beta \in \mathcal {B}_n\), where \(C_1\) is a constant, we have
Together with (A.10)–(A.12), this gives
uniformly for \(\beta \in \mathcal {B}_n\) and \(u_0\in \mathcal {N}(u_0)\). Similarly, it can be proved that
\(J_{3}(u_0)=O_{P}\left( \left( nh_3\right) ^{-1}\right) \), \(J_{5}(u_0)=O_{P}\left( \left( nh_3\right) ^{-1}\right) \) and
Combining the above results with (A.11), (A.13) and (A.14), it follows immediately that
Then, Lemma 1 and Slutsky's theorem can be used to prove (A.10), and we complete the proof of Theorem 1. \(\square \)
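The local linear (local polynomial) smoothing used throughout the proof can be sketched as a kernel-weighted least squares problem. The following example is a stand-alone fit at a single point with hypothetical data; it illustrates the technique but is not the profile estimator of Sect. 2 itself:

```python
import numpy as np

def local_linear(u0, u, y, h):
    """Local linear fit at u0: minimize sum_i K((u_i - u0)/h) * (y_i - a - b*(u_i - u0))^2."""
    d = u - u0
    k = np.maximum(1 - (d / h) ** 2, 0) * 0.75       # Epanechnikov weights
    X = np.column_stack([np.ones_like(d), d])        # local design matrix
    A = X.T @ (k[:, None] * X)                       # weighted normal equations
    b = X.T @ (k * y)
    a_hat, b_hat = np.linalg.solve(A, b)
    return a_hat, b_hat                              # estimates of m(u0) and m'(u0)

rng = np.random.default_rng(2)
u = rng.uniform(-1, 1, 5000)
y = u**2 + rng.normal(0, 0.1, 5000)                  # hypothetical m(u) = u^2
level, slope = local_linear(0.5, u, y, h=0.1)
```

At u0 = 0.5 the intercept estimates m(0.5) = 0.25 and the slope estimates m'(0.5) = 1; fitting the slope locally is what removes the first-order bias that a plain Nadaraya–Watson fit would incur.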
Proof of Theorem 2
By a direct calculation, we have
where \(\check{\Theta }=(\check{\theta }^T(U_1)\hat{Z}_1,..., \check{\theta }^T(U_n)\hat{Z}_n)^T\), \(\hat{\Theta }=({\theta }^T(U_1)\hat{Z}_1,..., {\theta }^T(U_n)\hat{Z}_n)^T\), \(\hat{s}_i(\beta )=(M_{n1}(\beta ^T\hat{X}_i; \beta )\) \(,..., M_{nn}(\beta ^T\hat{X}_i; \beta ))^T\), \(\hat{S}_i(\beta )=(\hat{s}_1(\beta ),..., \hat{s}_n(\beta ))^T\) and \(\hat{W}^{*}=diag\{I_{\mathcal {X}}(\hat{X}_1),..., I_{\mathcal {X}}(\hat{X}_n)\}\). From the proof of Theorem 1, we can show that
where \(C_n\) is defined in (A.4). Then, we can easily prove that \(\Xi _2(\beta )=\Xi _0+o_{P}(n^{1/2})\) and \(\Xi _3(\beta )=o_{P}(n^{1/2})\), where \(\beta \in \mathcal {B}_n\) and \(\Xi _0\) is a constant. Thus, \(\Xi (\beta )=\Xi _1(\beta )-\Xi _0+o_{P}(n^{1/2})\). Let \(Q^{*}\left( \beta ^{(r)}\right) =(-1/2)\frac{\partial \Xi \left( \beta \right) }{\partial \beta ^{(r)}}\), so we have \(Q^{*}\left( \beta ^{(r)}\right) =Q\left( \beta ^{(r)}\right) +o_{P}(n^{1/2})\), where \(Q(\beta ^{(r)})=(-1/2)\frac{\partial \Xi _1(\beta ^{(r)})}{\partial \beta ^{(r)}}\), namely,
where
and
for \(\beta ^{(r)}\in \mathcal {B}_n^{*}\) with \(\mathcal {B}_n^{*}=\left\{ \beta ^{(r)}: ||\beta ^{(r)}-\beta _0^{(r)}||\le C^{*}n^{-1/2}\right\} \), where \(C^{*}\) is a constant. From the discussion in Sect. 2, we know that the estimator \(\hat{\beta }\) can be recovered from \(\hat{\beta }^{(r)}\), uniformly for \(\hat{\beta }\in \mathcal {B}_n\) and \(\beta ^{(r)}\in \mathcal {B}_n^{*}\). Thus, we only need to consider (A.14), which can be decomposed as
where \(I_{\mathcal {X}}(X^{*}_i)=I_{\mathcal {X}}(\hat{X}_i)\cdot I_{\mathcal {X}}(X_i)\). For \(Q_1(\beta ^{(r)})\), we have
Noting that \(O_{P}(n\cdot C_n^2)=o_{P}(n^{1/2})\), it can be proved that \(\sup \limits _{{\beta ^{(r)}}\in \mathcal {B}^{*}_n}||Q_{11}(\beta ^{(r)})||=o_{P}(n^{1/2})\) and \(\sup \limits _{{\beta ^{(r)}}\in \mathcal {B}^{*}_n}||Q_{12}(\beta ^{(r)})||=o_{P}(n^{1/2})\). For any \(\beta \in \mathcal {B}_n\) and \(\beta ^{(r)}\in \mathcal {B}^{*}_n\), we have \(||\beta -\beta _0||\le C^{*}n^{-1/2}\), \(||\beta ^{(r)}-\beta ^{(r)}_0||\le C^{*}n^{-1/2}\) and \(J_{\beta ^{(r)}}-J_{\beta _0^{(r)}}=O_{P}(n^{-1/2})\), which implies that
According to (A.3), for any \(\beta ^{(r)}\in \mathcal {B}_n^{*}\) and \(\beta \in \mathcal {B}_n\), we can prove that
where \(\bar{\beta }\) is a point between \(\beta \) and \(\beta _0\). From Lemmas 1 and 2 in Zhu and Xue (2006), we have
Thus, we can obtain \(\sup \limits _{{\beta ^{(r)}}\in \mathcal {B}^{*}_n}||Q_{14}(\beta ^{(r)})||=o_{P}(n^{1/2})\). Consequently, for any \(\beta ^{(r)}\in \mathcal {B}_n^{*}\), we have
Similarly, we have
for any \(\beta ^{(r)}\in \mathcal {B}_n^{*}\). Next, we deal with \(Q_3(\beta ^{(r)})\) and \(Q_4(\beta ^{(r)})\). It can be proved that
By direct calculations, we have
where
uniformly for \(\beta ^{(r)}\in \mathcal {B}^{*}_n\). Using Taylor's expansion for \(\tilde{\alpha }(\beta ^TX; \beta )\) at \(\beta _0\) and letting \(\bar{\beta }=\bar{\beta }(\bar{\beta }^{(r)})\) be a suitable mean value with \(\bar{\beta }^{(r)}\in \mathcal {B}_n^{*}\), we have
From Lemma A.4 in Wang et al. (2010), we have
According to the above equation and (A.3), we can show that
which means
Together with (A.15)–(A.21), this proves that
where
Let \(\hat{\beta }^{(r)}\in \mathcal {B}_n^{*}\) be a solution of \(Q^{*}(\beta ^{(r)})=0\), then we obtain
which implies
and
By the central limit theorem and Slutsky's theorem, Theorem 2 is proved. \(\square \)
Proof of Theorem 3
First, we define
From (A.9) and Lemma A.4 in Zhu and Xue (2010), we have
and
Combining Corollary 1 with the condition \(h_s=cn^{-1/5}\) for \(s=1, 2, 3\), where \(c>0\), we can prove that
Therefore, by simple calculation, we obtain
Theorem 3 then follows immediately. \(\square \)
Huang, Z., Sun, X. & Zhang, R. Estimation for partially varying-coefficient single-index models with distorted measurement errors. Metrika 85, 175–201 (2022). https://doi.org/10.1007/s00184-021-00823-4