
A constructive hypothesis test for the single-index models with two groups

Published in: Annals of the Institute of Statistical Mathematics

Abstract

This paper studies the comparison of two-sample heteroscedastic single-index models, in which both the scale and location functions are modeled as single-index models. We propose a test for checking the equality of the single-index parameters when the covariates of the two samples have equal dimension. Further, we propose two test statistics based on Kolmogorov–Smirnov and Cramér–von Mises type functionals; these statistics evaluate the difference of the empirical residual processes to test the equality of the mean functions of the two single-index models. Asymptotic distributions of the estimators and test statistics are derived. The Kolmogorov–Smirnov and Cramér–von Mises test statistics can detect local alternatives that converge to the null hypothesis at a parametric rate. To compute their critical values, a bootstrap procedure is proposed. Simulation studies and an empirical study demonstrate the performance of the proposed procedures.
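The two functionals named above admit a simple numerical form. The following is a minimal illustrative sketch, not the authors' implementation: it contrasts the empirical distribution functions of two residual samples via Kolmogorov–Smirnov and Cramér–von Mises type discrepancies. The inputs `res1`, `res2` and the pooled-sample normalisation are hypothetical stand-ins.

```python
import numpy as np

def ks_cvm_statistics(res1, res2):
    """Sketch of KS- and CvM-type functionals contrasting two
    empirical residual distributions (illustrative only)."""
    n1, n2 = len(res1), len(res2)
    grid = np.sort(np.concatenate([res1, res2]))
    # empirical CDFs of each residual sample, evaluated on the pooled grid
    F1 = np.searchsorted(np.sort(res1), grid, side="right") / n1
    F2 = np.searchsorted(np.sort(res2), grid, side="right") / n2
    diff = F1 - F2
    scale = np.sqrt(n1 * n2 / (n1 + n2))  # parametric-rate normalisation
    ks = scale * np.max(np.abs(diff))
    # CvM-type: squared difference integrated against the pooled empirical measure
    cvm = scale**2 * np.mean(diff**2)
    return ks, cvm

rng = np.random.default_rng(0)
ks, cvm = ks_cvm_statistics(rng.normal(size=200), rng.normal(size=150))
```

Under the null, both samples share one residual distribution, so statistics of this type stay stochastically bounded; under fixed alternatives, the difference of the empirical processes drives them to diverge.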


References

  • Akritas, M. G., Van Keilegom, I. (2001). Non-parametric estimation of the residual distribution. Scandinavian Journal of Statistics, 28(3), 549–567.

  • Carroll, R. J., Fan, J., Gijbels, I., Wand, M. P. (1997). Generalized partially linear single-index models. Journal of the American Statistical Association, 92(438), 477–489.

  • Cui, X., Härdle, W. K., Zhu, L. (2011). The EFM approach for single-index models. The Annals of Statistics, 39(3), 1658–1688.

  • Dette, H., Neumeyer, N., Keilegom, I. V. (2007). A new test for the parametric form of the variance function in non-parametric regression. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 69(5), 903–917.

  • Fan, J., Gijbels, I. (1996). Local Polynomial Modelling and Its Applications. London: Chapman & Hall.

  • Feng, L., Zou, C., Wang, Z., Zhu, L. (2015). Robust comparison of regression curves. TEST, 24(1), 185–204.

  • Feng, Z., Zhu, L. (2012). An alternating determination-optimization approach for an additive multi-index model. Computational Statistics & Data Analysis, 56(6), 1981–1993.

  • Feng, Z., Wen, X. M., Yu, Z., Zhu, L. (2013). On partial sufficient dimension reduction with applications to partially linear multi-index models. Journal of the American Statistical Association, 108(501), 237–246.

  • Gørgens, T. (2002). Nonparametric comparison of regression curves by local linear fitting. Statistics & Probability Letters, 60(1), 81–89.

  • Härdle, W., Hall, P., Ichimura, H. (1993). Optimal smoothing in single-index models. The Annals of Statistics, 21, 157–178.

  • Koul, H. L., Schick, A. (1997). Testing for the equality of two nonparametric regression curves. Journal of Statistical Planning and Inference, 65(2), 293–314.

  • Kulasekera, K. (1995). Comparison of regression curves using quasi-residuals. Journal of the American Statistical Association, 90(431), 1085–1093.

  • Kulasekera, K., Wang, J. (1997). Smoothing parameter selection for power optimality in testing of regression curves. Journal of the American Statistical Association, 92(438), 500–511.

  • Li, G., Zhu, L., Xue, L., Feng, S. (2010). Empirical likelihood inference in partially linear single-index models for longitudinal data. Journal of Multivariate Analysis, 101(5), 718–732.

  • Li, G., Peng, H., Dong, K., Tong, T. (2014). Simultaneous confidence bands and hypothesis testing for single-index models. Statistica Sinica, 24(2), 937–955.

  • Liang, H., Liu, X., Li, R., Tsai, C. L. (2010). Estimation and testing for partially linear single-index models. The Annals of Statistics, 38, 3811–3836.

  • Lin, W., Kulasekera, K. (2010). Testing the equality of linear single-index models. Journal of Multivariate Analysis, 101(5), 1156–1167.

  • Neumeyer, N. (2009). Smooth residual bootstrap for empirical processes of nonparametric regression residuals. Scandinavian Journal of Statistics, 36(2), 204–228.

  • Neumeyer, N., Dette, H. (2003). Nonparametric comparison of regression curves: An empirical process approach. The Annals of Statistics, 31(3), 880–920.

  • Neumeyer, N., Van Keilegom, I. (2010). Estimating the error distribution in nonparametric multiple regression with applications to model testing. Journal of Multivariate Analysis, 101(5), 1067–1078.

  • Peng, H., Huang, T. (2011). Penalized least squares for single index models. Journal of Statistical Planning and Inference, 141(4), 1362–1379.

  • Pollard, D. (1984). Convergence of Stochastic Processes. New York: Springer.

  • Stute, W., Zhu, L. X. (2005). Nonparametric checks for single-index models. The Annals of Statistics, 33, 1048–1083.

  • Stute, W., Xu, W. L., Zhu, L. X. (2008). Model diagnosis for parametric regression in high-dimensional spaces. Biometrika, 95(2), 451–467.

  • van der Vaart, A. W., Wellner, J. A. (1996). Weak Convergence and Empirical Processes: With Applications to Statistics. Springer Series in Statistics. New York: Springer.

  • Van Keilegom, I., Manteiga, W. G., Sellero, C. S. (2008). Goodness-of-fit tests in parametric regression based on the estimation of the error distribution. TEST, 17(2), 401–415.

  • Wang, J. L., Xue, L., Zhu, L., Chong, Y. S. (2010). Estimation for a partial-linear single-index model. The Annals of Statistics, 38, 246–274.

  • Wang, T., Xu, P., Zhu, L. (2015). Variable selection and estimation for semi-parametric multiple-index models. Bernoulli, 21(1), 242–275.

  • Xia, Y. (2006). Asymptotic distributions for two estimators of the single-index model. Econometric Theory, 22, 1112–1137.

  • Xia, Y., Tong, H., Li, W. K., Zhu, L. X. (2002). An adaptive estimation of dimension reduction space. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 64, 363–410.

  • Xu, P., Zhu, L. (2012). Estimation for a marginal generalized single-index longitudinal model. Journal of Multivariate Analysis, 105, 285–299.

  • Yu, Y., Ruppert, D. (2002). Penalized spline estimation for partially linear single-index models. Journal of the American Statistical Association, 97, 1042–1054.

  • Zhang, C., Peng, H., Zhang, J. T. (2010). Two samples tests for functional data. Communications in Statistics—Theory and Methods, 39(4), 559–578.

  • Zhang, J., Wang, X., Yu, Y., Gai, Y. (2014). Estimation and variable selection in partial linear single index models with error-prone linear covariates. Statistics, 48, 1048–1070.


Acknowledgements

The authors thank the editor, the associate editor and two referees for their constructive suggestions that helped them to improve the early manuscript. Jun Zhang’s research is supported by the National Natural Science Foundation of China (NSFC) (Grant No. 11401391). Zhenghui Feng’s research is supported by the Fundamental Research Funds for the Central Universities in China (Grant No. 20720171025). Xiaoguang Wang’s research is supported by the NSFC (Grant Nos. 11471065 and 11371077), and the Fundamental Research Funds for the Central Universities in China (Grant No. DUT15LK28).

Corresponding author

Correspondence to Zhenghui Feng.

Appendix

1.1 Proofs of Theorems 1 and 2

Lemma 1

Suppose that \(\varvec{X}_{i}\), \(i=1, \ldots , n\), are i.i.d. random vectors. Let \(m(\varvec{x})\) be a continuous function whose derivatives up to the second order are bounded, satisfying \(E[m^{2}(\varvec{X})]<\infty \), and suppose that \(E[m(\varvec{X})|\varvec{\beta }^{\tau }\varvec{X}=u]\) has a continuous, bounded second derivative in u. Let K(u) be a bounded positive function with bounded support satisfying the following Lipschitz condition: there exist a neighborhood of the origin, say \(\Upsilon \), and a constant \(c>0\) such that, for any \(\epsilon \in \Upsilon \), \(|K(u+\epsilon )-K(u)|<c|\epsilon |\). Given that \(h=n^{-d}\) for some \(d<1\), we have, for \(s_{0}>0\) and \(j=0,1,2\),

$$\begin{aligned}&\displaystyle {\sup _{(x, {\varvec{\beta }})\in \mathcal {X} \times \Delta }} \Bigg |\frac{1}{n}\sum _{i=1}^{n}K_{h}(\varvec{\beta }^{\tau }\varvec{X}_{i}-\varvec{\beta }^{\tau }x)\left( \frac{\varvec{\beta }^{\tau }\varvec{X}_{i}-\varvec{\beta }^{\tau }x}{h} \right) ^{j}m(\varvec{X}_{i})\\&\qquad -\,f_{{\varvec{\beta }}_{0}^{\tau }{\varvec{X}}}(\varvec{\beta }_{0}^{\tau }\varvec{x}) E\big [m(\varvec{X})|\varvec{\beta }_{0}^{\tau }\varvec{X}=\varvec{\beta }_{0}^{\tau }x\big ] \mu _{K,j}-hS(\varvec{\beta }_{0}^{\tau }\varvec{x})\mu _{K,j+1}\Bigg |\\&\quad =\,O_{P}(c_{n}), \end{aligned}$$

where \(\Delta = \{\varvec{\beta }\in \Theta , \Vert \varvec{\beta }-\varvec{\beta }_{0}\Vert \le Cn^{-1/2}\}\) for some positive constant C, \(\Theta =\{\varvec{\beta }, \Vert \varvec{\beta }\Vert =1, \beta _{1}>0\}\), \(\mu _{K,l}=\int t^{l}K(t)dt\), \(S(\varvec{\beta }_{0}^{\tau }\varvec{x})=\frac{d }{du}\left\{ f_{{\varvec{\beta }}_{0}^{\tau }{\varvec{X}}}(u) E\big [m(\varvec{X})|\varvec{\beta }_{0}^{\tau }\varvec{X}=u\big ]\right\} |_{u={\varvec{\beta }}_{0}^{\tau }{\varvec{x}}}\), and \(c_{n}=\left\{ \dfrac{(\log n)^{1+s_{0}}}{nh}\right\} ^{1/2}+h^{2}\).

Proof

This proof can be completed by an argument similar to that of Lemma A.4 in Wang et al. (2010); see also Lemma A6.1 in Xia (2006). \(\square \)
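For intuition, the kernel-weighted average in Lemma 1 can be evaluated directly. The sketch below is illustrative only: it uses an Epanechnikov kernel, which is bounded with compact support and Lipschitz as the lemma requires, a standard normal design, and the hypothetical choice \(m\equiv 1\), for which the leading term in the lemma reduces to \(f_{{\varvec{\beta }}_{0}^{\tau }{\varvec{X}}}(\varvec{\beta }_{0}^{\tau }\varvec{x})\,\mu _{K,j}\).

```python
import numpy as np

def kernel_moment(X, beta, x0, h, j, m):
    """(1/n) * sum_i K_h(b'X_i - b'x0) * ((b'X_i - b'x0)/h)^j * m(X_i),
    with K_h(u) = K(u/h)/h.  Illustrative sketch of the sum in Lemma 1."""
    u = (X @ beta - x0 @ beta) / h
    K = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)  # Epanechnikov kernel
    return np.mean(K / h * u**j * m(X))

rng = np.random.default_rng(1)
n, p = 5000, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, 0.0, 0.0])   # unit-norm index with beta_1 > 0
x0 = np.zeros(p)
h = n ** (-1 / 5)
m = lambda X: 1.0 + 0.0 * X[:, 0]  # m = 1, so the limit is phi(0) * mu_{K,0}
est = kernel_moment(X, beta, x0, h, 0, m)
```

With \(j=0\) and \(m\equiv 1\) this is a kernel density estimate of \(\varvec{\beta }_{0}^{\tau }\varvec{X}\) at 0, so `est` should be close to \(\phi (0)\approx 0.3989\).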

Proofs of Theorems 1 and 2

We present the proof of Theorem 2; the proof of Theorem 1 is similar, and we omit the details. In the following, we write \(c_{n_{s}}=\left\{ \dfrac{(\log n_{s})^{1+s_{0}}}{n_{s}h_{s}}\right\} ^{1/2}+h_{s}^{2}\) for \(s=1, 2\) for simplicity.

Proof

Note that \(\mathcal {W}_{n_{1}n_{2}}\left( \hat{\varvec{\beta }}_{\mathcal {H}_{0}}^{(1)}\right) =\mathbf{0}\). Taylor expansion entails that

$$\begin{aligned}&-\,\frac{1}{\sqrt{n_{1}+n_{2}}}\mathcal {W}_{n_{1}n_{2}}\left( {\varvec{\beta }}_{0}^{(1)}\right) \nonumber \\&\quad =\,\left[ \frac{1}{n_{1}+n_{2}}\frac{\partial \mathcal {W}_{n_{1}n_{2}}\left( {\varvec{\beta }}^{(1)}\right) }{\partial \varvec{\beta }^{(1)}}\Big |_{\varvec{\beta }^{(1)}=\tilde{\varvec{\beta }}^{(1)}_{0}}\right] \left[ \sqrt{n_{1}+n_{2}}\left( \hat{\varvec{\beta }}_{\mathcal {H}_{0}}^{(1)}-\varvec{\beta }_{0}^{(1)}\right) \right] ,\qquad \end{aligned}$$
(18)

where \(\tilde{\varvec{\beta }}^{(1)}_{0}\) is between \(\hat{\varvec{\beta }}_{\mathcal {H}_{0}}^{(1)}\) and \(\varvec{\beta }_{0}^{(1)}\).

Step 1:

In the following, we define \(N=n_{1}+n_{2}\) for simplicity. In this step, we deal with \({{N}^{-1/2}}\mathcal {W}_{n_{1}n_{2}}\left( {\varvec{\beta }}_{0}^{(1)}\right) \). Using Lemma 1 and the detailed proof of Lemma A.4 in Zhang et al. (2014), we have \(\hat{g}_{s}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{si}, \varvec{\beta }_{0})=g_{s}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{si})+O_{P}(c_{n_{s}})\) and \(\hat{V}_{s}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{si}, \varvec{\beta }_{0})=V_{s, {\varvec{\beta }}_{0}}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{si})+O_{P}(c_{n_{s}})\) for \(s=1,2\). Moreover,

$$\begin{aligned}&S_{n_{1},l_{1}1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i},\varvec{\beta }_{0})\nonumber \\&\quad =\, \frac{1}{n_{1}}\sum _{j=1}^{n_{1}}K_{h_{1}}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1j}-\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i}) (\varvec{\beta }_{0}^{\tau }\varvec{X}_{1j}-\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})^{l_{1}} \sigma _{1}^{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1j})\epsilon _{1j}^{2}\nonumber \\&\qquad +\,\frac{2}{n_{1}}\sum _{j=1}^{n_{1}}K_{h_{1}}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1j}-\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i}) (\varvec{\beta }_{0}^{\tau }\varvec{X}_{1j}-\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})^{l_{1}} \nonumber \\&\qquad \times \,\left[ g_{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1j})-\hat{g}_{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1j}, \varvec{\beta }_{0})\right] \sigma _{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1j})\epsilon _{1j}\nonumber \\&\qquad +\,\frac{1}{n_{1}}\sum _{j=1}^{n_{1}}K_{h_{1}}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1j}-\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i}) (\varvec{\beta }_{0}^{\tau }\varvec{X}_{1j}-\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})^{l_{1}}\nonumber \\&\qquad \times \,\left[ g_{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1j})-\hat{g}_{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1j}, \varvec{\beta }_{0})\right] ^{2}\nonumber \\&\quad =\,h_{1}^{l_{1}}f_{{\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1}}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i}) \sigma _{1}^{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})\mu _{Kl_{1}}+O_{P}(h_{1}^{l_{1}}c_{n1}+h_{1}^{l_{1}}c_{n1}^{2}), \end{aligned}$$
(19)

for \(l_{1}=0,1,2\). Using (19), we obtain \(\hat{\sigma }_{1}^{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i},\varvec{\beta }_{0})=\sigma _{1}^{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})+O_{P}(c_{n_{1}})\). Similarly, \(\hat{\sigma }_{2}^{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2i},\varvec{\beta }_{0})=\sigma _{2}^{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2i})+O_{P}(c_{n_{2}})\).
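As a quick numerical illustration of the kind of variance-function consistency just stated, one can smooth squared residuals with a kernel. This is a sketch with an illustrative weighting, not the exact \(S_{n_{1},l_{1}1}\) construction; the sine link and the constant variance 0.25 are hypothetical choices.

```python
import numpy as np

def variance_estimate(u0, U, Y, g_hat, h):
    """Kernel estimate of sigma^2(u0) = Var(Y | index = u0), obtained by
    smoothing the squared residuals (Y - g_hat(U))^2; illustrative sketch."""
    t = (U - u0) / h
    K = np.where(np.abs(t) <= 1, 0.75 * (1 - t**2), 0.0)  # Epanechnikov
    r2 = (Y - g_hat(U)) ** 2
    return np.sum(K * r2) / np.sum(K)  # Nadaraya-Watson smooth of r2

rng = np.random.default_rng(3)
n = 20000
U = rng.uniform(-1, 1, n)                 # single-index values
Y = np.sin(U) + 0.5 * rng.normal(size=n)  # true conditional variance is 0.25
s2 = variance_estimate(0.0, U, Y, np.sin, h=0.2)
```

Here the true mean function is supplied directly; in the paper's construction the estimated \(\hat{g}_{s}\) replaces it, contributing the additional \(O_{P}(c_{n_{s}})\) terms.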

Let \(G^{{\varvec{x}}}_{1,w}(u,\varvec{\beta })= E\left[ Y^{w}_{1}\{\varvec{X}_{1}-\varvec{x}\}|\varvec{\beta }^{\tau }\varvec{X}_{1}=u\right] f_{{\varvec{\beta }}^{\tau }{\varvec{X}}}(u)\) and \(K'_{h_{1}}(u)=\frac{1}{h_{1}}K'(u/h_{1})\). Using condition (C3), we have

$$\begin{aligned}&E\left[ \frac{\partial }{\partial \varvec{\beta }}T_{n_{1},l_{1}l_{2}}(\varvec{\beta }^{\tau }\varvec{x},\varvec{\beta })\right] \nonumber \\&\quad =\,\frac{1}{n_{1}}\sum _{i=1}^{n_{1}}E\left[ K'_{h_{1}} (\varvec{\beta }^{\tau }\varvec{X}_{1i}-\varvec{\beta }^{\tau }\varvec{x})J_{{\varvec{\beta }}}^{\tau }\left( \frac{\varvec{X}_{1i}-\varvec{x}}{h_{1}}\right) (\varvec{\beta }^{\tau }\varvec{X}_{1i}-\varvec{\beta }^{\tau }\varvec{x})^{l_{1}} Y_{1i}^{l_{2}}\right] \nonumber \\&\qquad +\,\frac{1}{n_{1}}\sum _{i=1}^{n_{1}}E\left[ K_{h_{1}}(\varvec{\beta }^{\tau }\varvec{X}_{1i}-\varvec{\beta }^{\tau }\varvec{x}) J_{{\varvec{\beta }}}^{\tau }\left( {\varvec{X}_{1i}-\varvec{x}}\right) l_{1}(\varvec{\beta }^{\tau }\varvec{X}_{1i}-\varvec{\beta }^{\tau }\varvec{x})^{l_{1}-1}I\{l_{1}\ge 1\} Y_{1i}^{l_{2}}\right] \nonumber \\&\quad =\,-\sum _{v=0}^{2}\frac{l_{1}+v}{v!}J_{{\varvec{\beta }}}^{\tau }G^{{\varvec{x}}(v)}_{1,l_{2}}(\varvec{\beta }^{\tau }\varvec{x},\varvec{\beta })h_{1}^{l_{1}-1+v} \mu _{K,l_{1}-1+v}I\{l_{1}+v\ge 1\}\nonumber \\&\qquad ~~~+\,\sum _{v=0}^{3}\frac{l_{1}}{v!}J_{{\varvec{\beta }}}^{\tau }G^{{\varvec{x}}(v)}_{1,l_{2}}(\varvec{\beta }^{\tau }\varvec{x},\varvec{\beta })h_{1}^{l_{1}-1+v}\mu _{K,l_{1}-1+v}I\{l_{1}\ge 1\} +O(h_{1}^{l_{1}+2}), \end{aligned}$$
(20)

where \(G^{{\varvec{x}}(v)}_{1,l_{2}}(u,\varvec{\beta })=\frac{\partial ^{v}}{\partial u^{v}}G^{{\varvec{x}}}_{1,l_{2}}(u,\varvec{\beta })\), and \(I\{u\}\) is the indicator function. Arguing as in the proof of Theorem 3.1 in Fan and Gijbels (1996) and Lemma A.5 in Zhang et al. (2014), together with (20) and Lemma 1, we have

$$\begin{aligned}&\frac{\partial \hat{g}_{1}(\varvec{\beta }^{\tau }\varvec{X}_{1i}, \varvec{\beta })}{\partial \varvec{\beta }^{(1)}}\Big |_{\varvec{\beta }^{(1)}=\tilde{\varvec{\beta }}^{(1)}_{0}}\nonumber \\&\quad =\,J^{\tau }_{{\varvec{\beta }}_{0}} \left[ \varvec{X}_{1i}-V_{1,{\varvec{\beta }}_{0}}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})\right] g_{1}'(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i}) +O_{P}\left( h_{1}^{2}+\sqrt{\frac{(\log n_{1})^{1+s_{0}}}{n_{1}h_{1}^{3}}}\right) .\nonumber \\ \end{aligned}$$
(21)

Under the null hypothesis \(\mathcal {H}_{0}\),

$$\begin{aligned}&\frac{\partial \hat{g}_{2}(\varvec{\beta }^{\tau }\varvec{X}_{2i}, \varvec{\beta })}{\partial \varvec{\beta }^{(1)}}\Big |_{\varvec{\beta }^{(1)}=\tilde{\varvec{\beta }}^{(1)}_{0}}\nonumber \\&\quad =\,J^{\tau }_{{\varvec{\beta }}_{0}} \left[ \varvec{X}_{2i}-V_{2,{\varvec{\beta }}_{0}}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2i})\right] g_{2}'(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2i}) +O_{P}\left( h_{2}^{2}+\sqrt{\frac{(\log n_{2})^{1+s_{0}}}{n_{2}h_{2}^{3}}}\right) .\nonumber \\ \end{aligned}$$
(22)

Define \(\mathcal {Q}_{n_{1}}(u,\varvec{\beta }_{0})=\frac{1}{n_{1}h_{1}^{2}}T_{n_{1},20}(u, \varvec{\beta }_{0})\frac{1}{n_{1}}T_{n_{1},00}(u, \varvec{\beta }_{0})-\frac{1}{n_{1}^{2}h_{1}^{2}}T_{n_{1},10}^{2}(u, \varvec{\beta }_{0})\) and \(\mathcal {L}_{n_{1}}(u,\varvec{\beta }_{0})=\frac{1}{n^{2}_{1}h_{1}^{2}}T_{n_{1},20}(u, \varvec{\beta }_{0})T_{n_{1},01}(u, \varvec{\beta }_{0})-\frac{1}{n^{2}_{1}h^{2}_{1}}T_{n_{1},10}(u, \varvec{\beta }_{0})T_{n_{1},11}(u, \varvec{\beta }_{0})\). Then, \(\hat{g}_{1}(u, \varvec{\beta }_{0})=\frac{\mathcal {L}_{n_{1}}(u,{\varvec{\beta }}_{0})}{\mathcal {Q}_{n_{1}}(u,{\varvec{\beta }}_{0})}\) and \(\hat{g}_{1}'(u, \varvec{\beta }_{0})=\frac{\partial \mathcal {L}_{n_{1}}(u,{\varvec{\beta }}_{0})/\partial u}{\mathcal {Q}_{n_{1}}(u,{\varvec{\beta }}_{0})}-\frac{\mathcal {L}_{n_{1}}(u,{\varvec{\beta }}_{0})\partial \mathcal {Q}_{n_{1}}(u,{\varvec{\beta }}_{0})/\partial u}{\mathcal {Q}^{2}_{n_{1}}(u,{\varvec{\beta }}_{0})}\). Following the proof of Lemma A.5 in Zhang et al. (2014), together with Lemma 1 and (20), we have \(\hat{g}_{1}'(u, \varvec{\beta }_{0})=g_{1}'(u)+O_{P}\left( h_{1}^{2}+\sqrt{\frac{(\log n_{1})^{1+s_{0}}}{n_{1}h_{1}^{3}}}\right) \) and \(\hat{g}_{1}'(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i}, \varvec{\beta }_{0})=g_{1}'(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})+O_{P}\left( h_{1}^{2}+\sqrt{\frac{(\log n_{1})^{1+s_{0}}}{n_{1}h_{1}^{3}}}\right) \). Similarly, \(\hat{g}_{2}'(u, \varvec{\beta }_{0})=g_{2}'(u)+O_{P}\left( h_{2}^{2}+\sqrt{\frac{(\log n_{2})^{1+s_{0}}}{n_{2}h_{2}^{3}}}\right) \) and \(\hat{g}_{2}'(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2i}, \varvec{\beta }_{0})=g_{2}'(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2i})+O_{P}\left( h_{2}^{2}+\sqrt{\frac{(\log n_{2})^{1+s_{0}}}{n_{2}h_{2}^{3}}}\right) \).
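The ratio representation \(\hat{g}_{1}=\mathcal {L}_{n_{1}}/\mathcal {Q}_{n_{1}}\) is the usual local linear smoother, whose intercept and slope estimate \(g_{1}\) and \(g_{1}'\). The following is a minimal sketch in that determinant form; the data-generating choices are hypothetical, and only the one-dimensional index variable is smoothed.

```python
import numpy as np

def local_linear(u0, U, Y, h):
    """Local linear estimates (g_hat(u0), g_hat'(u0)) at a point u0, written
    in the same determinant/ratio form as L_n / Q_n in the text (sketch)."""
    t = U - u0
    K = np.where(np.abs(t / h) <= 1, 0.75 * (1 - (t / h) ** 2), 0.0) / h
    # kernel-weighted design moments and responses
    S0, S1, S2 = (np.mean(K * t**l) for l in range(3))
    R0, R1 = np.mean(K * Y), np.mean(K * t * Y)
    Q = S2 * S0 - S1**2           # cf. Q_n in the text
    g = (S2 * R0 - S1 * R1) / Q   # intercept: cf. L_n / Q_n
    g1 = (S0 * R1 - S1 * R0) / Q  # slope: derivative estimate
    return g, g1

rng = np.random.default_rng(2)
n = 4000
U = rng.uniform(-1, 1, n)
Y = np.sin(U) + 0.1 * rng.normal(size=n)  # true g(u) = sin(u), g'(0) = 1
g, g1 = local_linear(0.0, U, Y, h=0.2)
```

The slower \(\sqrt{(\log n_{1})^{1+s_{0}}/(n_{1}h_{1}^{3})}\) rate quoted above for \(\hat{g}_{1}'\) reflects the extra factor of \(h\) paid for the slope component visible in this representation.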
Using the asymptotic results (21) and (22), the condition that \(\frac{n_{1}}{n_{1}+n_{2}}=\frac{n_{1}}{N}\rightarrow \lambda \in (0, 1)\), and the bandwidth conditions \(\max \left\{ \frac{(\log n_{1})^{2+2s_{0}}}{n_{1}h_{1}^{2}}, \frac{(\log n_{2})^{2+2s_{0}}}{n_{2}h_{2}^{2}}\right\} \rightarrow 0\) and \(\max \{n_{1}h_{1}^{8},n_{2}h_{2}^{8}\}\rightarrow 0\), we have

$$\begin{aligned}&{({n_{1}+n_{2})}^{-1/2}}\mathcal {W}_{n_{1}n_{2}}\left( {\varvec{\beta }}_{0}^{(1)}\right) \nonumber \\&\quad =\,\sqrt{\frac{n_{1}}{n_{1}+n_{2}}}n_{1}^{-1/2} \sum _{i=1}^{n_{1}}J_{{\varvec{\beta }}_{0}}^{\tau }\frac{\hat{g}_{1}'(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i}, \varvec{\beta }_{0})}{\hat{\sigma }_{1}^{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i}, \varvec{\beta }_{0})}\left[ \varvec{X}_{1i}-\hat{V}_{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i}, \varvec{\beta }_{0})\right] \nonumber \\&\qquad \times \left[ Y_{1i}-\hat{g}_{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i}, \varvec{\beta }_{0})\right] \nonumber \\&\qquad +\,\sqrt{\frac{n_{2}}{n_{1}+n_{2}}}n_{2}^{-1/2} \sum _{i=1}^{n_{2}}J_{{\varvec{\beta }}_{0}}^{\tau }\frac{\hat{g}_{2}'(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2i}, \varvec{\beta }_{0})}{\hat{\sigma }_{2}^{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2i}, \varvec{\beta }_{0})}\left[ \varvec{X}_{2i}-\hat{V}_{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2i}, \varvec{\beta }_{0})\right] \nonumber \\&\qquad \times \left[ Y_{2i}-\hat{g}_{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2i}, \varvec{\beta }_{0})\right] \nonumber \\&\quad =\sqrt{\frac{n_{1}}{n_{1}+n_{2}}} n_{1}^{-1/2} \sum _{i=1}^{n_{1}}J_{{\varvec{\beta }}_{0}}^{\tau }{g}_{1}'(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})\left[ \varvec{X}_{1i}-{V}_{1, {\varvec{\beta }}_{0}} (\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})\right] {\sigma }_{1}^{-1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})\epsilon _{1i}\nonumber \\&\qquad +\,\sqrt{\frac{n_{2}}{n_{1}+n_{2}}} n_{2}^{-1/2} \sum _{i=1}^{n_{2}}J_{{\varvec{\beta }}_{0}}^{\tau }{g}_{2}'(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2i})\left[ \varvec{X}_{2i}-{V}_{2, {\varvec{\beta }}_{0}} (\varvec{\beta }_{0}^{\tau }\varvec{X}_{2i})\right] {\sigma }_{2}^{-1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2i})\epsilon _{2i}\nonumber \\&\qquad +\,o_{P}(1), \end{aligned}$$
(23)

where \(V_{2, {\varvec{\beta }}_{0}}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2})=E[\varvec{X}_{2}|\varvec{\beta }^{\tau }_{0}\varvec{X}_{2}]\).

Step 2:

In this step, we deal with \(\frac{1}{n_{1}+n_{2}}\frac{\partial \mathcal {W}_{n_{1}n_{2}}\left( {\varvec{\beta }}^{(1)}\right) }{\partial {\varvec{\beta }}^{(1)}}\big |_{{\varvec{\beta }}^{(1)}=\tilde{{\varvec{\beta }}}^{(1)}_{0}}\). Define

$$\begin{aligned}&\mathcal {S}_{n_{1}n_{2}}(\tilde{\varvec{\beta }}^{(1)}_{0})\mathop {=}\limits ^\mathrm{def}\frac{1}{n_{1}+n_{2}}\sum _{s=1}^{2}\sum _{i=1}^{n_{s}} \left[ Y_{si}-\hat{g}_{s}(\tilde{\varvec{\beta }}_{0}^{\tau }\varvec{X}_{si}, \tilde{\varvec{\beta }}_{0})\right] \\&\times \frac{\partial }{\partial \varvec{\beta }^{(1)}}\left\{ J_{{\varvec{\beta }}}^{\tau }\hat{g}_{s}'(\varvec{\beta }^{\tau }\varvec{X}_{si}, \varvec{\beta })\left[ \varvec{X}_{si}-\hat{V}_{s}(\varvec{\beta }^{\tau }\varvec{X}_{si}, \varvec{\beta })\right] \hat{\sigma }_{s}^{-2}(\varvec{\beta }^{\tau }\varvec{X}_{si}, \varvec{\beta })\right\} \Big |_{\varvec{\beta }^{(1)}=\tilde{\varvec{\beta }}^{(1)}_{0}}, \end{aligned}$$

and

$$\begin{aligned}&\mathcal {L}_{n_{1}n_{2}}(\tilde{\varvec{\beta }}^{(1)}_{0})\\&\mathop {=}\limits ^\mathrm{def}\frac{1}{n_{1}+n_{2}}\sum _{s=1}^{2}\sum _{i=1}^{n_{s}}\Big \{J_{\tilde{{\varvec{\beta }}}_{0}}^{\tau } \hat{g}_{s}'(\tilde{\varvec{\beta }}_{0}^{\tau }\varvec{X}_{si}, \tilde{\varvec{\beta }}_{0})\left[ \varvec{X}_{si}-\hat{V}_{s}(\tilde{\varvec{\beta }}_{0}^{\tau }\varvec{X}_{si}, \tilde{\varvec{\beta }}_{0})\right] \\&\quad \times \hat{\sigma }_{s}^{-2}(\tilde{\varvec{\beta }}_{0}^{\tau }\varvec{X}_{si}, \tilde{\varvec{\beta }}_{0})\Big \}\frac{\partial \hat{g}_{s}(\varvec{\beta }^{\tau }\varvec{X}_{si}, \varvec{\beta })}{\partial \varvec{\beta }^{(1)}}\Bigg |_{\varvec{\beta }^{(1)}=\tilde{\varvec{\beta }}^{(1)}_{0}}. \end{aligned}$$

Then,

$$\begin{aligned} \frac{1}{n_{1}+n_{2}}\frac{\partial \mathcal {W}_{n_{1}n_{2}}\left( {\varvec{\beta }}^{(1)}\right) }{\partial \varvec{\beta }^{(1)}} \Bigg |_{\varvec{\beta }^{(1)}=\tilde{\varvec{\beta }}^{(1)}_{0}} =\mathcal {S}_{n_{1}n_{2}}(\tilde{\varvec{\beta }}^{(1)}_{0})+\mathcal {L}_{n_{1}n_{2}}(\tilde{\varvec{\beta }}^{(1)}_{0}), \end{aligned}$$
(24)

where \(\tilde{\varvec{\beta }}_{0}=\left( \sqrt{1-\tilde{\varvec{\beta }}_{0}^{(1)\tau }\tilde{\varvec{\beta }}_{0}^{(1)}}, \tilde{\varvec{\beta }}_{0}^{(1)\tau }\right) ^{\tau }\), and \(\tilde{\varvec{\beta }}^{(1)}_{0}\) is between \(\hat{\varvec{\beta }}_{\mathcal {H}_{0}}^{(1)}\) and \(\varvec{\beta }_{0}^{(1)}\). By using (18), we have \(\hat{\varvec{\beta }}_{\mathcal {H}_{0}}^{(1)}=\varvec{\beta }_{0}^{(1)}+O_P((n_{1}+n_{2})^{-1/2})\). Note that \(\tilde{\varvec{\beta }}^{(1)}_{0}\mathop {\longrightarrow }\limits ^{P}\varvec{\beta }_{0}^{(1)}\), \(\tilde{\varvec{\gamma }}^{(1)}_{0}\mathop {\longrightarrow }\limits ^{P}\varvec{\beta }_{0}^{(1)}\), \(\tilde{\varvec{\beta }}_{0}\mathop {\longrightarrow }\limits ^{P}\varvec{\gamma }_{0}\), and \(\tilde{\varvec{\gamma }}_{0}\mathop {\longrightarrow }\limits ^{P}\varvec{\gamma }_{0}\). Together with (21)–(22) and the condition that \(\frac{n_{1}}{n_{1}+n_{2}}\rightarrow \lambda \in (0, 1)\), we have

$$\begin{aligned}&\mathcal {L}_{n_{1}n_{2}}(\tilde{\varvec{\beta }}^{(1)}_{0})\mathop {\longrightarrow }\limits ^{P} \lambda J_{{{\varvec{\beta }}}_{0}}^{\tau }E\left[ \frac{g_{1}^{'2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1})}{ {\sigma }_{1}^{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1})}\left[ \varvec{X}_{1}-V_{1,{\varvec{\beta }}_{0}}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1})\right] ^{\otimes 2}\right] J_{{{\varvec{\beta }}}_{0}}\nonumber \\&\quad +\, (1-\lambda ) J_{{{\varvec{\beta }}}_{0}}^{\tau }E\left[ \frac{g^{'2}_{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2})}{{\sigma }_{2}^{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2})} \left[ \varvec{X}_{2}-V_{2, {\varvec{\beta }}_{0}}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{2})\right] ^{\otimes 2}\right] J_{{{\varvec{\beta }}}_{0}}. \end{aligned}$$
(25)

Moreover, a direct calculation for \(\mathcal {S}_{n_{1}n_{2}}(\tilde{\varvec{\beta }}^{(1)}_{0})\), together with Lemma 1, entails that \(\mathcal {S}_{n_{1}n_{2}}(\tilde{\varvec{\beta }}^{(1)}_{0})=o_{P}(1)\). Combining this with (23) and (25) completes the proof of Theorem 2. \(\square \)

1.2 Proof of Theorem 3

Proof

From the proof of Theorem 2, we have that

$$\begin{aligned}&\sqrt{n_{1}}\left( \hat{\varvec{\beta }}^{(1)}_{0}-{\varvec{\beta }}^{(1)}_{0}\right) \nonumber \\&\quad =\,\varvec{\varOmega }_{1}^{-1} n_{1}^{-1/2} \sum _{i=1}^{n_{1}}J_{{\varvec{\beta }}_{0}}^{\tau }{g}_{1}'(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})\left[ \varvec{X}_{1i}-{V}_{1, {\varvec{\beta }}_{0}} (\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})\right] {\sigma }_{1}^{-1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})\epsilon _{1i}+o_{P}(1),\nonumber \\ \end{aligned}$$
(26)
$$\begin{aligned}&\sqrt{n_{2}}\left( \hat{\varvec{\gamma }}^{(1)}_{0}-{\varvec{\gamma }}^{(1)}_{0}\right) \nonumber \\&\quad =\,\varvec{\varOmega }_{2}^{-1} n_{2}^{-1/2} \sum _{i=1}^{n_{2}}J_{{\varvec{\gamma }}_{0}}^{\tau }{g}_{2}'(\varvec{\gamma }_{0}^{\tau }\varvec{X}_{2i})\left[ \varvec{X}_{2i}-{V}_{2, {\varvec{\gamma }}_{0}} (\varvec{\gamma }_{0}^{\tau }\varvec{X}_{2i})\right] {\sigma }_{2}^{-1}(\varvec{\gamma }_{0}^{\tau }\varvec{X}_{2i})\epsilon _{2i}+o_{P}(1).\nonumber \\ \end{aligned}$$
(27)

Under the null hypothesis \(\mathcal {H}_{0}: \varvec{\beta }_{0}=\varvec{\gamma }_{0}\), we can have

$$\begin{aligned}&\sqrt{n_{1}+n_{2}}\left( \hat{\varvec{\beta }}^{(1)}_{0}-\hat{\varvec{\gamma }}^{(1)}_{0}\right) \\&\quad =\,\left( \frac{\sqrt{n_{1}}}{\sqrt{\lambda }}\left( \hat{\varvec{\beta }}_{0}^{(1)}-\varvec{\beta }_{0}^{(1)}\right) - \frac{\sqrt{n_{2}}}{\sqrt{1-\lambda }}\left( \hat{\varvec{\gamma }}_{0}^{(1)}-\varvec{\beta }_{0}^{(1)}\right) \right) +o_{P}(1)\\&\quad \xrightarrow [\mathcal {H}^{*}_{0}]{\mathcal {L}}N\left( \mathbf{0}_{p-1}, \frac{1}{\lambda }\varvec{\varOmega }_{1}^{-1}+ \frac{1}{1-\lambda }\varvec{\varOmega }_{2}^{-1}\right) . \end{aligned}$$

Moreover,

$$\begin{aligned}&(n_{1}+n_{2})\widehat{\varvec{A}}\\&\quad =\,\frac{n_{1}+n_{2}}{n_{1}}\left[ J^{\tau }_{\hat{{\varvec{\beta }}}_{0}} \frac{1}{n_{1}}\sum _{i=1}^{n_{1}}\frac{ \hat{g}_{1}^{'2}(\hat{\varvec{\beta }}_{0}^{\tau }\varvec{X}_{1i}, \hat{\varvec{\beta }}_{0})}{\hat{\sigma }_{1}^{2}(\hat{\varvec{\beta }}_{0}^{\tau }\varvec{X}_{1i}, \hat{\varvec{\beta }}_{0})}\left[ \varvec{X}_{1i}-\hat{V}_{1}(\hat{\varvec{\beta }}_{0}^{\tau }\varvec{X}_{1i}, \hat{\varvec{\beta }}_{0})\right] ^{\otimes 2}J_{\hat{{\varvec{\beta }}}_{0}}\right] ^{-1}\\&\qquad ~~~+\,\frac{n_{1}+n_{2}}{n_{2}}\left[ J^{\tau }_{\hat{{\varvec{\gamma }}}_{0}}\frac{1}{n_{2}} \sum _{i=1}^{n_{2}}\frac{ \hat{g}_{2}^{'2}(\hat{\varvec{\gamma }}_{0}^{\tau }\varvec{X}_{2i}, \hat{\varvec{\gamma }}_{0})}{\hat{\sigma }_{2}^{2}(\hat{\varvec{\gamma }}_{0}^{\tau }\varvec{X}_{2i}, \hat{\varvec{\gamma }}_{0})}\left[ \varvec{X}_{2i}-\hat{V}_{2}(\hat{\varvec{\gamma }}_{0}^{\tau }\varvec{X}_{2i}, \hat{\varvec{\gamma }}_{0})\right] ^{\otimes 2}J_{\hat{{\varvec{\gamma }}}_{0}}\right] ^{-1}\\&\quad \mathop {\longrightarrow }\limits ^{P}\frac{1}{\lambda }\varvec{\varOmega }_{1}^{-1}+ \frac{1}{1-\lambda }\varvec{\varOmega }_{2}^{-1}. \end{aligned}$$

Then, Slutsky's theorem and the continuous mapping theorem entail that

$$\begin{aligned}&\mathcal {T}_{n_{1}n_{2}}\\&\quad =\,\left[ \sqrt{n_{1}+n_{2}}\left( \hat{\varvec{\beta }}^{(1)}_{0}-\hat{\varvec{\gamma }}^{(1)}_{0}\right) \right] ^{\tau } \left( (n_{1}+n_{2})\widehat{\varvec{A}}\right) ^{-1}\left[ \sqrt{n_{1}+n_{2}}\left( \hat{\varvec{\beta }}^{(1)}_{0}-\hat{\varvec{\gamma }}^{(1)}_{0}\right) \right] \\&\quad \xrightarrow [\mathcal {H}^{*}_{0}]{\mathcal {L}}\chi ^{2}_{p-1}. \end{aligned}$$

We complete the proof of Theorem 3. \(\square \)
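The statistic \(\mathcal {T}_{n_{1}n_{2}}\) above is a standard Wald-type quadratic form. The following is a minimal sketch of how it would be computed once the free parts of the index estimates and the matrix \(\widehat{\varvec{A}}\) are in hand; all inputs are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import chi2

def wald_test(beta1_hat, gamma1_hat, A_hat, N):
    """T = d' (N * A_hat)^{-1} d with d = sqrt(N) * (beta1_hat - gamma1_hat);
    under the null, T is asymptotically chi^2 with p - 1 degrees of freedom,
    where beta1_hat is the (p-1)-dimensional free part of the index."""
    d = np.sqrt(N) * (beta1_hat - gamma1_hat)
    T = d @ np.linalg.solve(N * A_hat, d)
    pval = chi2.sf(T, df=len(beta1_hat))
    return T, pval

# toy check: identical index estimates give T = 0 and p-value 1
T, p = wald_test(np.array([0.6, 0.8]), np.array([0.6, 0.8]), np.eye(2), N=300)
```

Because the \(\sqrt{n_{1}+n_{2}}\) factors cancel between the difference vector and the normalising matrix, the statistic is invariant to how the two sample sizes are split, as the \(\lambda \)-free \(\chi ^{2}_{p-1}\) limit requires.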

1.3 Proof of Theorem 4

Lemma 2

Suppose that conditions (C1)–(C5) hold. Let \(F_{\hat{\epsilon }_{s}}(t|\mathscr {Q}_{n_{s}})\) be the distribution function of \(\hat{\epsilon }_{s}=\frac{Y_{s}-\hat{g}_{s}(\hat{\omega }_{s,0}^{\tau }{\varvec{X}}_{s}, \hat{\omega }_{s,0})}{\hat{\sigma }_{s}(\hat{\omega }_{s,0}^{\tau }{\varvec{X}}_{s},\hat{\omega }_{s,0})}\) conditional on the data \(\mathscr {Q}_{n_{s}}=\{\varvec{X}_{si}, Y_{si}\}_{i=1}^{n_{s}}\) (i.e., treating \(\hat{g}_{s}\left( \hat{\omega }_{s,0}^{\tau }\varvec{x}_{s}, \hat{\omega }_{s,0}\right) \) and \(\hat{\sigma }_{s}\left( \hat{\omega }_{s,0}^{\tau }\varvec{x}_{s},\hat{\omega }_{s,0}\right) \) as fixed functions of \(\varvec{x}_{s}\)), \(s=1,2\). Here, \(\hat{\omega }_{1,0}=\hat{\varvec{\beta }}_{0}\) and \(\hat{\omega }_{2,0}=\hat{\varvec{\gamma }}_{0}\). Then, we have

$$\begin{aligned}&\sup _{t\in \mathbb {R}}\left| n_{1}^{-1}\sum _{i=1}^{n_{1}}\left[ I\left\{ \hat{\epsilon }_{1i}\le t\right\} -I\{\epsilon _{1i}\le t\}- F_{\hat{\epsilon }_{1}}(t|\mathscr {Q}_{n_{1}})+F_{\epsilon _{1}}(t)\right] \right| \nonumber \\&\quad =\,o_{P}\left( n_{1}^{-1/2}\right) , \end{aligned}$$
(28)
$$\begin{aligned}&\sup _{t\in \mathbb {R}}\left| n_{2}^{-1}\sum _{i=1}^{n_{2}}\left[ I\left\{ \hat{\epsilon }_{2i}\le t\right\} -I\{\epsilon _{2i}\le t\}- F_{\hat{\epsilon }_{2}}(t|\mathscr {Q}_{n_{2}})+F_{\epsilon _{2}}(t)\right] \right| \nonumber \\&\quad =\,o_{P}\left( n_{2}^{-1/2}\right) . \end{aligned}$$
(29)

Proof

In the following, we only prove (28); the proof of (29) is similar, and we omit the details. Let

$$\begin{aligned} \mathscr {O}= & {} \Big \{I\{\epsilon _{1}\le tf_{2}(\varvec{X}_{1})+f_{1}(\varvec{X}_{1})\}-I\{\epsilon _{1}\le t\}- P(\epsilon _{1}\le tf_{2}(\varvec{X}_{1})+f_{1}(\varvec{X}_{1}))\nonumber \\&\quad ~~~~+\,P(\epsilon _{1}\le t); t\in \mathbb {R}, f_{1}, f_{2}\in M_{1}^{1+\delta }(\mathfrak {R}_{c}^{p}) \Big \}, \end{aligned}$$

where \(M_{1}^{1+\delta }( \mathfrak {R}_{c}^{p})\) is the class of all differentiable functions f(u) defined on the domain \(\mathfrak {R}_{c}^{p}\) of \(\varvec{x}_{1}\) with \(\Vert f\Vert _{1+\delta }\le 1\). Here, \( \mathfrak {R}_{c}^{p}\) is a compact subset of \(\mathbb {R}^{p}\) and

$$\begin{aligned}&\Vert f\Vert _{1+\delta }\\&\quad =\sup _{{\varvec{x}}_{1}\in \mathfrak {R}_{c}^{p}}\left| f(\varvec{x}_{1})\right| +\sum _{l=1}^{p}\sup _{{\varvec{x}}_{1}\in \mathfrak {R}_{c}^{p}}\left| \frac{\partial f({\varvec{x}}_{1})}{\partial x_{1l}}\right| +\sup _{{\varvec{x}}_{1,1}, {\varvec{x}}_{1,2}\in \mathfrak {R}_{c}^{p}}\frac{|\partial f( {\varvec{x}}_{1,1})-\partial f( {\varvec{x}}_{1,2})|}{\Vert {\varvec{x}}_{1,1}-{\varvec{x}}_{1,2}\Vert ^{\delta }}. \end{aligned}$$

Using Lemma 1 and \(\Vert \hat{\varvec{\beta }}_{0}-\varvec{\beta }_{0}\Vert =O_{P}(n_{1}^{-1/2})\), and arguing as in the proofs of (21) and (22), we have that

$$\begin{aligned} \hat{g}_{1}(\hat{\varvec{\beta }}_{0}^{\tau }\varvec{x}_{1}, \hat{\varvec{\beta }}_{0})= & {} g_{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})+O_{P}\left( n_{1}^{-1/2} +c_{n_{1}}\right) , \end{aligned}$$
(30)
$$\begin{aligned} \hat{\sigma }_{1}(\hat{\varvec{\beta }}_{0}^{\tau }\varvec{x}_{1}, \hat{\varvec{\beta }}_{0})= & {} \sigma _{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1}) +O_{P}\left( n_{1}^{-1/2}+c_{n_{1}}\right) , \end{aligned}$$
(31)

uniformly in \(\varvec{x}_{1}\in \mathfrak {R}_{c}^{p}\). Let \(A_{n_{1}}(\varvec{x}_{1})=\frac{\hat{g}_{1}(\hat{{\varvec{\beta }}}_{0}^{\tau }{\varvec{x}}_{1}, \hat{{\varvec{\beta }}}_{0})-{g}_{1}({{\varvec{\beta }}}_{0}^{\tau } {\varvec{x}}_{1})}{\sigma _{1}({{\varvec{\beta }}}_{0}^{\tau } {\varvec{x}}_{1})}\) and \(B_{n_{1}}(\varvec{x}_{1})=\frac{\hat{\sigma }_{1}(\hat{{\varvec{\beta }}}_{0}^{\tau }{\varvec{x}}_{1}, \hat{{\varvec{\beta }}}_{0})}{\sigma _{1}({{\varvec{\beta }}}_{0}^{\tau }{\varvec{x}}_{1})}\). So, (30) and (31) entail that \(P\left( A_{n_{1}}\in M_{1}^{1+\delta }(\mathfrak {R}_{c}^{p}) \right) \rightarrow 1\) and \(P\left( B_{n_{1}}\in M_{1}^{1+\delta }(\mathfrak {R}_{c}^{p})\right) \rightarrow 1\) as \(n_{1}\rightarrow \infty \), \(h_{1}\rightarrow 0\) and \(\frac{n_{1}h_{1}}{(\log n_{1})^{1+s_{0}}}\rightarrow \infty \).

By Corollary 2.7.2 of van der Vaart and Wellner (1996), the bracketing number \(N_{[~]}\left( \upsilon ^{2}, M_{1}^{1+\delta }(\mathfrak {R}_{c}^{p}), L_{2}(P)\right) \) is at most \(\exp \left( c_{0}\upsilon ^{-\frac{2p}{1+\delta }}\right) \) for some positive constant \(c_{0}\). Following the proof of Lemma 1 in Appendix B of Akritas and Van Keilegom (2001), the class \(\mathscr {O}\) defined above is then a Donsker class, i.e., \({\displaystyle \int }_{0}^{\infty }\sqrt{\log N_{[~]}(\bar{\upsilon }, \mathscr {O}, L_{2}(P))}\,d\bar{\upsilon }<\infty \). This completes the proof of (28). \(\square \)
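For completeness, the entropy calculation behind the Donsker claim can be spelled out (a sketch; the restriction \(p<2(1+\delta )\) is an assumption implicit in the smoothness conditions, and the integrand vanishes once \(\bar{\upsilon }\) exceeds the diameter of the class, so the upper limit may be taken finite). Substituting \(\bar{\upsilon }=\upsilon ^{2}\) in the bracketing bound gives \(\log N_{[~]}(\bar{\upsilon }, M_{1}^{1+\delta }(\mathfrak {R}_{c}^{p}), L_{2}(P))\le c_{0}\bar{\upsilon }^{-\frac{p}{1+\delta }}\), whence

$$\begin{aligned} \int _{0}^{1}\sqrt{\log N_{[~]}\left( \bar{\upsilon }, M_{1}^{1+\delta }(\mathfrak {R}_{c}^{p}), L_{2}(P)\right) }\,d\bar{\upsilon } \le \sqrt{c_{0}}\int _{0}^{1}\bar{\upsilon }^{-\frac{p}{2(1+\delta )}}\,d\bar{\upsilon }<\infty , \end{aligned}$$

since the exponent \(\frac{p}{2(1+\delta )}\) is smaller than 1.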

Proof of Theorem 4

We have that

$$\begin{aligned}&\hat{F}_{\epsilon _{1}}(t)-F_{\epsilon _{1}}(t)\nonumber \\&\quad =\frac{1}{n_{1}}\sum _{i=1}^{n_{1}}I\left\{ {\epsilon }_{1i}\le t\right\} -F_{\epsilon _{1}}(t)+\left( F_{\hat{\epsilon }_{1}}(t|\mathscr {Q}_{n_{1}})-F_{\epsilon _{1}}(t)\right) +R_{n_{1},1}(t), \end{aligned}$$
(32)

where \(R_{n_{1},1}(t)=o_{P}(n_{1}^{-1/2})\) uniformly in \(t\in \mathbb {R}\) by using Lemma 2. Taylor expansion entails that

$$\begin{aligned}&F_{\hat{\epsilon }_{1}}(t|\mathscr {Q}_{n_{1}})-F_{\epsilon _{1}}(t)\nonumber \\&\quad =\,\int \left[ F_{\epsilon _{1}}(t+t[B_{n_{1}}(\varvec{x}_{1})-1]+A_{n_{1}}(\varvec{x}_{1})) -F_{\epsilon _{1}}(t)\right] dF_{{\varvec{X}}_{1}}(\varvec{x}_{1})\nonumber \\&\quad =\,f_{\epsilon _{1}}(t)t\int [B_{n_{1}}(\varvec{x}_{1})-1]dF_{{\varvec{X}}_{1}}(\varvec{x}_{1})+f_{\epsilon _{1}}(t)\int A_{n_{1}}(\varvec{x}_{1})dF_{{\varvec{X}}_{1}}(\varvec{x}_{1})\nonumber \\&\qquad +\,\frac{1}{2}\int f'_{\epsilon _{1}}(t+v^{*}_{n_{1}}(t,\varvec{x}_{1}))\left\{ t[B_{n_{1}}(\varvec{x}_{1})-1]+A_{n_{1}}(\varvec{x}_{1}) \right\} ^{2}dF_{{\varvec{X}}_{1}}(\varvec{x}_{1})\nonumber \\&\quad =\,R_{n_{1},2}(t)+R_{n_{1},3}(t)+R_{n_{1},4}(t), \end{aligned}$$
(33)

where \(v_{n_{1}}^{*}(t,\varvec{x}_{1})\) is between 0 and \(t[B_{n_{1}}(\varvec{x}_{1})-1]+A_{n_{1}}(\varvec{x}_{1})\). Note that

$$\begin{aligned}&A_{n_{1}}(\varvec{x}_{1})\nonumber \\&\quad =\,\frac{\hat{g}_{1}(\hat{\varvec{\beta }}_{0}^{\tau }\varvec{x}_{1}, \varvec{\beta }_{0})- \hat{g}_{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1}, \varvec{\beta }_{0})}{\sigma _{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})}+ \frac{\hat{g}_{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1}, \varvec{\beta }_{0}) -g_{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})}{\sigma _{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})}. \end{aligned}$$
(34)

Recalling the definition of \(\hat{g}_{1}(u,\varvec{\beta }_{0})\) and using Lemma 1, we obtain

$$\begin{aligned}&\hat{g}_{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1},\varvec{\beta }_{0})-g_{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})\nonumber \\&\quad =\,\frac{1}{n_{1}f_{{\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1}}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})} \sum _{i=1}^{n_{1}}K_{h_{1}}\left( {\varvec{\beta }}_{0}^{\tau }\varvec{X}_{1i}-{\varvec{\beta }}_{0}^{\tau }\varvec{x}_{1}\right) \sigma _{1}({\varvec{\beta }}_{0}^{\tau }\varvec{X}_{1i})\epsilon _{1i}+O_{P}(c_{n_{1}}).\qquad \end{aligned}$$
(35)
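The estimator \(\hat{g}_{1}(u,\varvec{\beta }_{0})\) linearized in (35) is a kernel-weighted local average of the responses along the index direction. The following is a minimal numerical sketch: the Gaussian kernel, the function names, and the simulated model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def nw_index_regression(u, X, Y, beta, h):
    """Nadaraya-Watson estimate of g(u) in the single-index model
    Y = g(beta' X) + sigma(beta' X) * eps, at the index value u."""
    z = X @ beta                               # projected covariates beta' X_i
    w = np.exp(-0.5 * ((z - u) / h) ** 2)      # Gaussian kernel weights
    return np.sum(w * Y) / np.sum(w)           # kernel-weighted local average

# Illustrative check with a known link g(u) = u**2
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 2))
beta = np.array([0.6, 0.8])                    # unit-norm index direction
Y = (X @ beta) ** 2 + 0.1 * rng.normal(size=2000)
est = nw_index_regression(0.5, X, Y, beta, h=0.2)  # close to g(0.5) = 0.25
```

Replacing \(\varvec{\beta }_{0}\) by a root-\(n\) consistent estimator \(\hat{\varvec{\beta }}_{0}\) perturbs this local average only at the rates quantified in (35) and (36).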

Similar to (21), we also have

$$\begin{aligned}&\hat{g}_{1}(\hat{\varvec{\beta }}_{0}^{\tau }\varvec{x}_{1}, \varvec{\beta }_{0})- \hat{g}_{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1}, \varvec{\beta }_{0}) \nonumber \\&\quad =\, \left[ \varvec{x}_{1}-V_{1,{\varvec{\beta }}_{0}}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})\right] ^{\tau }g_{1}'(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})\left( \hat{\varvec{\beta }}_{0}-\varvec{\beta }_{0}\right) +O_{P}\left( h_{1}^{2}+\sqrt{\frac{(\log n_{1})^{1+s}}{n_{1}h_{1}^{3}}}\right) .\nonumber \\ \end{aligned}$$
(36)

Combining (34), (35) and (36), we have

$$\begin{aligned} R_{n_{1},3}(t)=f_{\epsilon _{1}}(t)\int A_{n_{1}}(\varvec{x}_{1})dF_{{\varvec{X}}_{1}}(\varvec{x}_{1})=\frac{f_{\epsilon _{1}}(t)}{n_{1}}\sum _{i=1}^{n_{1}}\epsilon _{1i}+o_{P}(n_{1}^{-1/2}). \end{aligned}$$
(37)

Similarly,

$$\begin{aligned}&\hat{\sigma }^{2}_{1}\left( \hat{{\varvec{\beta }}}_{0}^{\tau }{\varvec{x}}_{1}, \hat{{\varvec{\beta }}}_{0}\right) -\sigma _{1}^{2}({\varvec{\beta }}_{0}^{\tau }\varvec{x}_{1})\nonumber \\&\quad =\frac{1}{f^{}_{{\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1}}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})n_{1}} \sum _{i=1}^{n_{1}}K_{h_{1}}\left( {\varvec{\beta }}_{0}^{\tau }\varvec{X}_{1i}-{\varvec{\beta }}_{0}^{\tau }\varvec{x}_{1}\right) \sigma _{1}^{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})(\epsilon _{1i}^{2}-1)\nonumber \\&\quad ~~~+\,2\left[ \varvec{x}_{1}-V_{1,{\varvec{\beta }}_{0}}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})\right] ^{\tau }\sigma _{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1}) \sigma _{1}'(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1}) \left( \hat{\varvec{\beta }}_{0}-\varvec{\beta }_{0}\right) +o_{P}(n_{1}^{-1/2}).\qquad \qquad \end{aligned}$$
(38)
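The variance estimator \(\hat{\sigma }_{1}^{2}\) linearized in (38) is, analogously, a kernel average of squared deviations around a plug-in mean function. A hedged numerical sketch (the helper names, Gaussian kernel, and simulated homoscedastic model are assumptions for illustration):

```python
import numpy as np

def nw_index_variance(u, X, Y, beta, h, g_hat):
    """Kernel estimate of sigma^2(u) = Var(Y | beta' X = u), formed as a
    local average of squared deviations from a plug-in mean function g_hat."""
    z = X @ beta
    w = np.exp(-0.5 * ((z - u) / h) ** 2)      # Gaussian kernel weights
    resid2 = (Y - g_hat(z)) ** 2               # squared deviations
    return np.sum(w * resid2) / np.sum(w)

# Homoscedastic check: Y = beta' X + 0.5 * eps, so sigma^2(u) = 0.25 for all u
rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 2))
beta = np.array([0.6, 0.8])
Y = X @ beta + 0.5 * rng.normal(size=5000)
s2 = nw_index_variance(0.0, X, Y, beta, h=0.3, g_hat=lambda z: z)
```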

Then, a Taylor expansion of \(\sqrt{\hat{\sigma }^{2}_{1}\left( \hat{{\varvec{\beta }}}_{0}^{\tau }{\varvec{x}}_{1}, \hat{{\varvec{\beta }}}_{0}\right) }-\sqrt{\sigma ^{2}_{1}({\varvec{\beta }}_{0}^{\tau }\varvec{x}_{1})}\) and the asymptotic expression (38) entail that

$$\begin{aligned} R_{n_{1},2}(t)= & {} tf_{\epsilon _{1}}(t)\int [B_{n_{1}}(\varvec{x}_{1})-1]dF_{{\varvec{X}}_{1}}(\varvec{x}_{1})\nonumber \\= & {} tf_{\epsilon _{1}}(t)\frac{1}{2n_{1}}\sum _{i=1}^{n_{1}}(\epsilon _{1i}^{2}-1)+o_{P}(n_{1}^{-1/2}). \end{aligned}$$
(39)

Moreover, (34), (39) and Condition (C5) entail that \(R_{n_{1},4}(t)=o_{P}(n_{1}^{-1/2})\) uniformly in t. Combining (32), (26) and (37)–(39) completes the proof of Theorem 4.

\(\square \)

1.4 Proof of Theorems 5 and 6

Proof

Recall the definition \(F^{*}_{\widetilde{\mathcal {H}}_{0},\epsilon _{1}}(t)=E\left[ F_{\epsilon _{1}}\left( t+\frac{ g_{2}({\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1})-g_{1}({\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1})}{\sigma _{1}({\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1})}\right) \right] \). Then

$$\begin{aligned}&\hat{F}_{\widetilde{\mathcal {H}}_{0},\epsilon _{1}}(t)-F^{*}_{\widetilde{\mathcal {H}}_{0},\epsilon _{1}}(t)= \frac{1}{n_{1}}\sum _{i=1}^{n_{1}}I\{\hat{\epsilon }_{\widetilde{\mathcal {H}}_{0},i}\le t\}-F^{*}_{\widetilde{\mathcal {H}}_{0}, \epsilon _{1}}(t)\nonumber \\&\quad =\,\frac{1}{n_{1}}\sum _{i=1}^{n_{1}}I\left\{ {\epsilon }_{1i}+\frac{g_{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i}) -g_{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})}{\sigma _{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})}\le t\right\} -F^{*}_{\widetilde{\mathcal {H}}_{0},\epsilon _{1}}(t)\nonumber \\&\qquad + \,\left[ F_{\widetilde{\mathcal {H}}_{0},\hat{\epsilon }_{1}}(t|\mathscr {V}_{n_{1}n_{2}}) -F^{*}_{\widetilde{\mathcal {H}}_{0}, \epsilon _{1}}(t)\right] +S_{n_{1},1}(t), \end{aligned}$$
(40)

where \(F_{\widetilde{\mathcal {H}}_{0}, \hat{\epsilon }_{1}}(t|\mathscr {V}_{n_{1}n_{2}})\) is the distribution function of \(\hat{\epsilon }_{\widetilde{\mathcal {H}}_{0},1}=\frac{Y-\hat{g}_{2} (\hat{{\varvec{\beta }}}_{0}^{\tau } {\varvec{X}},\hat{{\varvec{\gamma }}}_{0})}{\hat{\sigma }_{1}(\hat{{\varvec{\beta }}}_{0}^{\tau }{\varvec{X}},\hat{{\varvec{\beta }}}_{0})} \) conditional on the data \(\mathscr {V}_{n_{1}n_{2}}=\{\varvec{X}_{1i}, Y_{1i},\varvec{X}_{2j}, Y_{2j}, 1\le i\le n_{1}, 1\le j \le n_{2} \}\). Similar to the analysis of Lemma 2, we have \(\displaystyle {\sup _{t\in \mathbb {R}}}|S_{n_{1},1}(t)|=o_{P}(n_{1}^{-1/2})\). A Taylor expansion entails that

$$\begin{aligned}&F_{\widetilde{\mathcal {H}}_{0},\hat{\epsilon }_{1}}(t|\mathscr {V}_{n_{1}n_{2}}) -F^{*}_{\widetilde{\mathcal {H}}_{0}, \epsilon _{1}}(t)\nonumber \\&\quad =\,\int F_{\epsilon _{1}}\Bigg (t+\frac{g_{2}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1}) -g_{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})}{\sigma _{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})}+[B_{n_{1}}(\varvec{x}_{1})-1]t\nonumber \\&\qquad +\,\frac{\hat{g}_{2}(\hat{\varvec{\beta }}_{0}^{\tau }\varvec{x}_{1}, \hat{\varvec{\gamma }}_{0}) -g_{2}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})}{\sigma _{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})}\Bigg )dF_{{\varvec{X}}_{1}}(\varvec{x}_{1})-F^{*}_{\widetilde{\mathcal {H}}_{0}, \epsilon _{1}}(t)\nonumber \\&\quad =\,t\int f_{\epsilon _{1}}\left( t+\frac{g_{2}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1}) -g_{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})}{\sigma _{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})}\right) [B_{n_{1}}(\varvec{x}_{1})-1]dF_{{\varvec{X}}_{1}}(\varvec{x}_{1})\nonumber \\&\qquad +\,\int f_{\epsilon _{1}}\left( t+\frac{g_{2}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1}) -g_{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})}{\sigma _{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})}\right) \frac{\hat{g}_{2}(\hat{\varvec{\beta }}_{0}^{\tau }\varvec{x}_{1}, \hat{\varvec{\gamma }}_{0}) -g_{2}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})}{\sigma _{1}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})}dF_{{\varvec{X}}_{1}}(\varvec{x}_{1})\nonumber \\&\qquad +\,R_{n_{1}n_{2}}(t). \end{aligned}$$
(41)

Similar to the analysis of (21), we have that

$$\begin{aligned}&\hat{g}_{2}(\hat{\varvec{\beta }}_{0}^{\tau }\varvec{x}_{1}, \varvec{\gamma }_{0})- \hat{g}_{2}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1}, \varvec{\gamma }_{0}) \nonumber \\&\quad =g_{2}'(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})\varvec{x}_{1}^{\tau }\left( \hat{\varvec{\beta }}_{0}-\varvec{\beta }_{0}\right) +O_{P}\left( h_{2}^{2}+\sqrt{\frac{(\log n_{2})^{1+s_{0}}}{n_{2}h_{2}^{3}}}\right) . \end{aligned}$$
(42)

Recalling the definition of \(\hat{g}_{2}(u,\varvec{\gamma }_{0})\) and using Lemma 1, we obtain

$$\begin{aligned}&\hat{g}_{2}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1},\varvec{\gamma }_{0})-g_{2}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})\nonumber \\&\quad =\frac{1}{n_{2}f_{{\varvec{\gamma }}_{0}^{\tau }{\varvec{X}}_{2}}(\varvec{\beta }_{0}^{\tau }\varvec{x}_{1})} \sum _{i=1}^{n_{2}}K_{h_{2}}\left( {\varvec{\gamma }}_{0}^{\tau }\varvec{X}_{2i}-{\varvec{\beta }}_{0}^{\tau }\varvec{x}_{1}\right) \sigma _{2}({\varvec{\gamma }}_{0}^{\tau }\varvec{X}_{2i})\epsilon _{2i} +O_{P}(c_{n_{2}}).\qquad \end{aligned}$$
(43)

We can also show that \(R_{n_{1}n_{2}}(t)\) defined in (41) is \(o_{P}(n^{-1/2}_{1}+n_{2}^{-1/2})\) uniformly in \(t\in \mathbb {R}\). Combining (39), (42) and (43), we have

$$\begin{aligned}&F_{\widetilde{\mathcal {H}}_{0},\hat{\epsilon }_{1}}(t|\mathscr {V}_{n_{1}n_{2}}) -F^{*}_{\widetilde{\mathcal {H}}_{0}, \epsilon _{1}}(t)\\&\quad =\frac{t}{2n_{1}}\sum _{i=1}^{n_{1}}(\epsilon _{1i}^{2}-1)f_{\epsilon _{1}}\left( t+ \frac{g_{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i}) -g_{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})}{\sigma _{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})}\right) \\&\qquad +\,E\left[ f_{\epsilon _{1}}\left( t+ \frac{g_{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1}) -g_{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1})}{\sigma _{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1})}\right) \frac{g'_{2}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1})}{\sigma _{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1})}V_{1, {\varvec{\beta }}_{0}}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1})\right] ^{\tau }\left( \hat{\varvec{\beta }}_{0}-\varvec{\beta }_{0}\right) \\&\qquad +\,\frac{1}{n_{2}}\sum _{i=1}^{n_{2}}f_{\epsilon _{1}}\left( t+ \frac{g_{2}(\varvec{\gamma }_{0}^{\tau }\varvec{X}_{2i}) -g_{1}(\varvec{\gamma }_{0}^{\tau }\varvec{X}_{2i})}{\sigma _{1}(\varvec{\gamma }_{0}^{\tau }\varvec{X}_{2i})}\right) \frac{f_{{\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1}} (\varvec{\gamma }_{0}^{\tau }\varvec{X}_{2i})}{f_{{\varvec{\gamma }}_{0}^{\tau }{\varvec{X}}_{2}}(\varvec{\gamma }_{0}^{\tau }\varvec{X}_{2i}) }\frac{\sigma _{2}(\varvec{\gamma }_{0}^{\tau }\varvec{X}_{2i})}{\sigma _{1}(\varvec{\gamma }_{0}^{\tau }\varvec{X}_{2i})}\epsilon _{2i}\\&\qquad +\,o_{P}(n_{1}^{-1/2}+n_{2}^{-1/2}). \end{aligned}$$

Recalling the definitions of \(D_{1}(u)\) and \(\rho _{f,\sigma }(u)\), we complete the proof of Theorem 5. The proof of Theorem 6 follows from the asymptotic result of Theorem 5 and the fact that \(D_{1}(u)\equiv D_{2}(u)\equiv 0\) under the null hypothesis; we omit the details. \(\square \)

1.5 Proof of Theorem 7

Proof

Following the proof of Theorem 1 in Stute et al. (2008), the class of functions

$$\begin{aligned} \ell _{t}(\epsilon _{1}, \varvec{x}_{1})= & {} I\left\{ \epsilon _{1}\le t-\frac{1}{\sqrt{n_{1}+n_{2}}}\frac{\mu ({\varvec{\beta }}_{0}^{\tau }{\varvec{x}}_{1})}{\sigma _{1}({\varvec{\beta }}_{0}^{\tau }{\varvec{x}}_{1})}\right\} -I\left\{ \epsilon _{1}\le t\right\} \\&\quad ~-\,F_{\epsilon _{1}}\left( t-\frac{1}{\sqrt{n_{1}+n_{2}}} \frac{\mu ({\varvec{\beta }}_{0}^{\tau }{\varvec{x}}_{1})}{\sigma _{1}({\varvec{\beta }}_{0}^{\tau }{\varvec{x}}_{1})}\right) +F_{\epsilon _{1}}(t) \end{aligned}$$

is a Vapnik–Chervonenkis class with envelope function 4 (Pollard 1984, Ch. 2). Then we have that

$$\begin{aligned}&\sup _{t\in \mathbb {R}}\Bigg |\frac{1}{n_{1}}\sum _{i=1}^{n_{1}}\Bigg [I\left\{ \epsilon _{1i}\le t-\frac{1}{\sqrt{n_{1}+n_{2}}}\frac{\mu ({\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1i})}{\sigma _{1}({\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1i})}\right\} -I\left\{ \epsilon _{1i}\le t\right\} \Bigg ]\nonumber \\&\quad ~~~~~~~-\,E\left[ F_{\epsilon _{1}}\left( t-\frac{1}{\sqrt{n_{1}+n_{2}}} \frac{\mu ({\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1})}{\sigma _{1}({\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1})}\right) \right] +F_{\epsilon _{1}}(t)\Bigg |=o_{P}\big (n_{1}^{-1/2}\big ). \end{aligned}$$
(44)

Moreover, Taylor expansion entails that

$$\begin{aligned}&E\left[ F_{\epsilon _{1}}\left( t-\frac{1}{\sqrt{n_{1}+n_{2}}} \frac{\mu ({\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1})}{\sigma _{1}({\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1})}\right) \right] -F_{\epsilon _{1}}(t)\nonumber \\&\quad =\,-\frac{1}{\sqrt{n_{1}+n_{2}}}f_{\epsilon _{1}}(t)E\left[ \frac{\mu ({\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1})}{\sigma _{1}({\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1})}\right] +o\big (n_{1}^{-1/2}+n_{2}^{-1/2}\big ). \end{aligned}$$
(45)

Under the local alternative hypothesis \(\mathcal {H}_{1n_{1}n_{2}}\), combining (44) and (45), we have

$$\begin{aligned}&\hat{F}_{\widetilde{\mathcal {H}}_{0},\epsilon _{1}}(t)-\hat{F}_{\epsilon _{1}}(t)=-\frac{1}{\sqrt{n_{1}+n_{2}}}f_{\epsilon _{1}}(t)E\left[ \frac{\mu ({\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1})}{\sigma _{1}({\varvec{\beta }}_{0}^{\tau }{\varvec{X}}_{1})}\right] \\&\quad +\,f_{\epsilon _{1}}(t)\mathcal {N}_{1}^{\tau }J_{{{\varvec{\beta }}}_{0}}\varvec{\varOmega }_{1}^{-1}\frac{1}{n_{1}} \sum _{i=1}^{n_{1}}J_{{\varvec{\beta }}_{0}}^{\tau }\frac{{g}_{1}'(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})}{{\sigma }_{1}(\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})}\left[ \varvec{X}_{1i}-{V}_{1, {\varvec{\beta }}_{0}} (\varvec{\beta }_{0}^{\tau }\varvec{X}_{1i})\right] \epsilon _{1i}\\&\quad +\,f_{\epsilon _{1}}(t)\frac{1}{n_{2}} \sum _{i=1}^{n_{2}}\rho _{f,\sigma }({\varvec{\gamma }}_{0}^{\tau }{\varvec{X}}_{2i})\epsilon _{2i} +f_{\epsilon _{1}}(t)\frac{1}{n_{1}}\sum _{i=1}^{n_{1}}\epsilon _{1i}\\&\quad +\,o_{P}(n_{1}^{-1/2}+n_{2}^{-1/2}). \end{aligned}$$

We can obtain a similar expression for \(\hat{F}_{\widetilde{\mathcal {H}}_{0},\epsilon _{2}}(t)-\hat{F}_{\epsilon _{2}}(t)\); we omit the details. The continuous mapping theorem then completes the proof of Theorem 7. \(\square \)
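Theorems 5–7 concern Kolmogorov–Smirnov and Cramér–von Mises functionals of the difference between two residual-based empirical distribution functions. The following sketch illustrates such functionals on pooled residual samples; the scaling conventions and helper names are illustrative assumptions, not the paper's exact statistic definitions.

```python
import numpy as np

def ks_cvm(res_h0, res_free):
    """Kolmogorov-Smirnov and Cramer-von Mises type distances between the
    empirical CDFs of null-restricted residuals (res_h0) and unrestricted
    residuals (res_free), evaluated on the pooled sample."""
    n = len(res_h0)
    grid = np.sort(np.concatenate([res_h0, res_free]))   # pooled evaluation grid
    F0 = np.searchsorted(np.sort(res_h0), grid, side="right") / n
    F1 = np.searchsorted(np.sort(res_free), grid, side="right") / len(res_free)
    diff = F0 - F1
    ks = np.sqrt(n) * np.max(np.abs(diff))               # sup-norm (KS) functional
    cvm = n * np.mean(diff ** 2)                         # L2 (CvM) functional
    return ks, cvm

# Identical residual samples give zero for both statistics
ks, cvm = ks_cvm(np.array([0.1, -0.4, 0.7]), np.array([0.1, -0.4, 0.7]))
```

In practice, critical values for such statistics come from the bootstrap procedure proposed in the paper, since the limiting distributions depend on unknown quantities.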

Cite this article

Zhang, J., Feng, Z. & Wang, X. A constructive hypothesis test for the single-index models with two groups. Ann Inst Stat Math 70, 1077–1114 (2018). https://doi.org/10.1007/s10463-017-0616-y
