Identifying shifts between two regression curves


This article studies the problem of whether two convex (concave) regression functions, modelling the relation between a response and a covariate in two samples, differ by a shift along the horizontal and/or vertical axis. We consider a nonparametric situation, assuming only smoothness of the regression functions. A graphical tool based on the derivatives of the regression functions and their inverses is proposed to answer this question and is studied in several examples. We also formalize this question as a corresponding hypothesis and develop a statistical test. The asymptotic properties of the corresponding test statistic are investigated under the null hypothesis and local alternatives. In contrast to most of the literature on comparing shape invariant models, which requires independent data, the procedure is applicable to dependent and non-stationary data. We also illustrate the finite sample properties of the new test by means of a small simulation study and two real data examples.




  1. Ait-Sahalia, Y., Duarte, J. (2003). Nonparametric option pricing under shape restrictions. Journal of Econometrics, 116, 9–47.


  2. Carroll, R. J., Hall, P. (1993). Semiparametric comparison of regression curves via normal likelihoods. Australian Journal of Statistics, 34(3), 471–487.


  3. Collier, O., Dalalyan, A. S. (2015). Curve registration by nonparametric goodness-of-fit testing. Journal of Statistical Planning and Inference, 162, 20–42.


  4. Dahlhaus, R. (1997). Fitting time series models to nonstationary processes. The Annals of Statistics, 25(1), 1–37.


  5. Degras, D., Xu, Z., Zhang, T., Wu, W. B. (2012). Testing for parallelism among trends in multiple time series. IEEE Transactions on Signal Processing, 60(3), 1087–1097.


  6. de Jong, P. (1987). A central limit theorem for generalized quadratic forms. Probability Theory and Related Fields, 75(2), 261–277.


  7. Dette, H., Munk, A. (1998). Nonparametric comparison of several regression curves: Exact and asymptotic theory. Annals of Statistics, 26(6), 2339–2368.


  8. Dette, H., Neumeyer, N. (2001). Nonparametric analysis of covariance. Annals of Statistics, 29(5), 1361–1400.


  9. Dette, H., Wu, W. (2019). Detecting relevant changes in the mean of non-stationary processes: A mass excess approach. The Annals of Statistics, 47(6), 3578–3608.


  10. Dette, H., Neumeyer, N., Pilz, K. F. (2006). A simple nonparametric estimator of a strictly monotone regression function. Bernoulli, 12, 469–490.


  11. Durot, C., Groeneboom, P., Lopuhaä, H. P. (2013). Testing equality of functions under monotonicity constraints. Journal of Nonparametric Statistics, 25(4), 939–970.


  12. Gamboa, F., Loubes, J., Maza, E. (2007). Semi-parametric estimation of shifts. Electronic Journal of Statistics, 1, 616–640.


  13. Hall, P., Hart, J. D. (1990). Bootstrap test for difference between means in nonparametric regression. Journal of the American Statistical Association, 85(412), 1039–1049.


  14. Hall, P., Huang, L.-S. (2001). Nonparametric kernel regression subject to monotonicity constraints. Annals of Statistics, 29(3), 624–647.


  15. Härdle, W., Marron, J. S. (1990). Semiparametric comparison of regression curves. Annals of Statistics, 18(1), 63–89.


16. Kneip, A., Engel, J. (1995). Model estimation in nonlinear regression under shape invariance. The Annals of Statistics, 23(2), 551–570.


  17. Maity, A. (2012). A powerful test for comparing multiple regression functions. Journal of Nonparametric Statistics, 24(3), 563–576.


  18. Mallat, S., Papanicolaou, G., Zhang, Z. (1998). Adaptive covariance estimation of locally stationary processes. The Annals of Statistics, 26(1), 1–47.


  19. Mammen, E. (1991). Estimating a smooth monotone regression function. Annals of Statistics, 19(2), 724–740.


  20. Matzkin, R. L. (1991). Semiparametric estimation of monotone and concave utility functions for polychotomous choice models. Econometrica, 59(5), 1315–1327.


  21. Nason, G. P., Von Sachs, R., Kroisandt, G. (2000). Wavelet processes and adaptive estimation of the evolutionary wavelet spectrum. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(2), 271–292.


  22. Neumeyer, N., Dette, H. (2003). Nonparametric comparison of regression curves: An empirical process approach. Annals of Statistics, 31(3), 880–920.


  23. Neumeyer, N., Pardo-Fernández, J. C. (2009). A simple test for comparing regression curves versus one-sided alternatives. Journal of Statistical Planning and Inference, 139(12), 4006–4016.


24. Ombao, H., Von Sachs, R., Guo, W. (2005). SLEX analysis of multivariate nonstationary time series. Journal of the American Statistical Association, 100(470), 519–531.


  25. Park, C., Hannig, J., Kang, K.-H. (2014). Nonparametric comparison of multiple regression curves in scale-space. Journal of Computational and Graphical Statistics, 23(3), 657–677.


  26. Rønn, B. (2001). Nonparametric maximum likelihood estimation for shifted curves. Journal of the Royal Statistical Society, Series B, 63(2), 243–259.


  27. Varian, H. R. (1984). The nonparametric approach to production analysis. Econometrica, 52(3), 579–597.


  28. Vilar-Fernández, J. M., Vilar-Fernández, J. A., González-Manteiga, W. (2007). Bootstrap tests for nonparametric comparison of regression curves with dependent errors. TEST, 16(1), 123–144.


  29. Vimond, M. (2010). Efficient estimation for a subclass of shape invariant models. Annals of Statistics, 38(3), 1885–1912.


  30. Vogt, M. (2012). Nonparametric regression for locally stationary time series. The Annals of Statistics, 40(5), 2601–2633.


  31. Wu, W. B., Zhou, Z. (2011). Gaussian approximations for non-stationary multiple time series. Statistica Sinica, 21(3), 1397–1413.


  32. Zhou, Z. (2010). Nonparametric inference of quantile curves for nonstationary time series. The Annals of Statistics, 38(4), 2187–2217.


  33. Zhou, Z., Wu, W. B. (2009). Local linear quantile estimation for nonstationary time series. The Annals of Statistics, 37(5B), 2696–2729.


  34. Zhou, Z., Wu, W. B. (2010). Simultaneous inference of linear models with time varying coefficients. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 72(4), 513–531.




The first author has been supported by the Collaborative Research Center “Statistical modelling of nonlinear dynamic processes” (SFB 823, Teilprojekt A1, C1) of the German Research Foundation (DFG). The second author is supported by MATRICS (MTR/2019/000039), a research grant from the SERB, Government of India and in part by SFB 823. The third author is supported by NSFC Young program (No.11901337) (a research grant of China) and in part by SFB 823. The authors are grateful to the referees and the Associate Editor for their constructive comments on an earlier version of this paper and to Martina Stein, who typed parts of this paper with considerable technical expertise.

Author information



Corresponding author

Correspondence to Weichi Wu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix: Proofs



In this section, we first state a few auxiliary results, which will be used in the proofs. We begin with a Gaussian approximation; a proof of this result can be found in Wu and Zhou (2011).

Proposition 10


Let
$$\begin{aligned} \varvec{S}_i=\sum _{s=1}^i {\mathbf {e}}_s, \end{aligned}$$

and assume that Assumption 1 is satisfied. Then on a possibly richer probability space, there exists a process \(\{{\mathbf {S}}_i^\dag \}_{i\in {\mathbb {Z}}}\) such that

$$\begin{aligned} \{\varvec{S}_i^\dag \}_{i=0}^n {\mathop {=}\limits ^{{\mathcal {D}}}}\{\varvec{S}_i\}_{i=0}^n \end{aligned}$$

(equality in distribution), and a sequence of independent two-dimensional standard normally distributed random vectors \(\{{\mathbf {V}}_i\}_{i\in {\mathbb {Z}}}\), such that

$$\begin{aligned} \max _{1\le j\le n}\Big |{\mathbf {S}}_j^\dag -\sum _{i=1}^j \varSigma (i/n){\mathbf {V}}_i\Big |= o_p(n^{1/4}\log ^{2} n), \end{aligned}$$

where \(\varSigma (t)\) is the square root of the long-run variance matrix \(\varSigma ^2(t)\) defined in Assumption 1.

Proposition 11

Let Assumptions 1 and 2 be satisfied.

  1. (i)

    For \(s=1,2\) we have

    $$\begin{aligned}&\sup _{ t\in [b_{n,s},1-b_{n,s}]}\Big | {\hat{m}}_s^{\prime}(t)-m_s^{\prime}(t)-\frac{1}{n_sb_{n,s}^2} \sum _{i=1}^{n_s}K^\circ \Big (\frac{i/n_s-t}{b_{n,s}}\Big )e_{i,s}\Big | \nonumber \\&\quad =O_P\Big (\frac{1}{n_sb_{n,s}^2}+b_{n,s}^2\Big ) \end{aligned}$$

    where the kernel \(K^\circ \) is defined in (23).

  2. (ii)

    For \(s=1,2\)

$$\begin{aligned}&\sup _{t\in [b_{n,s},1-b_{n,s}]}\Big |\frac{1}{n_sb_{n,s}^2}\sum _{i=1}^{n_s} K^\circ \Big (\frac{i/n_s-t}{b_{n,s}}\Big )\Big (e_{i,s}-\sigma _s(i/n_s) V_{i,s}\Big )\Big | \nonumber \\&\quad = o_p\Big (\frac{\log ^2n_s}{n^{3/4}_sb_{n,s}^2}\Big ), \end{aligned}$$

    where \(\{ V_{i,s}, \, i=1, \ldots , n_s ,s=1,2\}\) denotes a sequence of independent standard normally distributed random variables.

  3. (iii)

    For \(s=1,2\) we have

    $$\begin{aligned} \sup _{t\in [b_{n,s},1-b_{n,s}]}|{\hat{m}}^{\prime}_s(t)-m_s^{\prime}(t)|=O_p\Big (\frac{\log n_s}{\sqrt{n_sb_{n,s}}b_{n,s}}+\frac{\log ^2n_s}{n^{3/4}_sb_{n,s}^2}+b_{n,s}^2\Big ). \end{aligned}$$
  4. (iv)

    For \(s=1,2\) we have

    $$\begin{aligned} \sup _{t\in [0,b_{n,s}]\cup [1-b_{n,s},1] }|{\hat{m}}^{\prime}_s(t)-m_s^{\prime}(t)|=O_p\Big (\frac{\log n_s}{\sqrt{n_sb_{n,s}}b_{n,s}}+\frac{\log ^2n_s}{n_s^{3/4}b_{n,s}^2}+b_{n,s}\Big ). \end{aligned}$$


Proof
(i): Define for \(s=1,2\) and \(l=0,1,2\)

$$\begin{aligned} R_{n,s,l}(t)&=\frac{1}{n_sb_{n,s}}\sum _{i=1}^{n_s}Y_{i,s}K\Big (\frac{i/n_s-t}{b_{n,s}}\Big ) \Big (\frac{i/n_s-t}{b_{n,s}}\Big )^l, \\ S_{n,s,l}(t)&=\frac{1}{n_sb_{n,s}}\sum _{i=1}^{n_s}K\Big (\frac{i/n_s-t}{b_{n,s}}\Big ) \Big (\frac{i/n_s-t}{b_{n,s}}\Big )^l. \end{aligned}$$

Straightforward calculations show that \( ({\hat{m}}_s(t), b_{n,s}{\hat{m}}_s'(t))^\top =S_{n,s}^{-1}(t)R_{n,s}(t) \) \((s=1,2)\), where

$$\begin{aligned} R_{n,s}(t)= \begin{pmatrix} R_{n,s,0}(t) \\ R_{n,s,1}(t) \end{pmatrix}, \quad S_{n,s}(t)=\begin{pmatrix} S_{n,s,0}(t) & S_{n,s,1}(t) \\ S_{n,s,1}(t) & S_{n,s,2}(t) \end{pmatrix}. \end{aligned}$$

Note that Assumption 2 gives

$$\begin{aligned} S_{n,s,0}(t)&=1+O\Big (\frac{1}{n_sb_{n,s}}\Big ),\quad S_{n,s,1}(t) =O\Big (\frac{1}{n_sb_{n,s}}\Big ), \\ S_{n,s,2}(t)&=\int _{-1}^1K(x)x^2dx+O\Big (\frac{1}{n_sb_{n,s}}\Big ) \end{aligned}$$

uniformly with respect to \(t\in [b_{n,s},1-b_{n,s}]\). The first part of the proposition now follows by a Taylor expansion of \(R_{n,s,l}(t)\).
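As a numerical illustration of this local linear representation, the following minimal sketch (not from the paper; the Epanechnikov kernel and the noise-free quadratic trend are illustrative choices) computes the derivative estimate as the slope component of \(S_{n,s}^{-1}(t)R_{n,s}(t)\):

```python
import numpy as np

def epanechnikov(u):
    # illustrative choice for the kernel K (compact support on [-1, 1])
    return 0.75 * (1.0 - u**2) * (np.abs(u) <= 1)

def local_linear_deriv(x, y, t, b):
    # weighted least squares fit of y ~ a0 + a1 (x - t); the slope a1 is the
    # derivative estimate, cf. (m̂_s(t), b_{n,s} m̂_s'(t))^T = S_{n,s}^{-1}(t) R_{n,s}(t)
    w = epanechnikov((x - t) / b)
    X = np.column_stack([np.ones_like(x), x - t])
    XtW = X.T * w                       # X^T diag(w) via broadcasting
    a0, a1 = np.linalg.solve(XtW @ X, XtW @ y)
    return a1

n, b = 500, 0.1
x = np.arange(1, n + 1) / n             # design points i/n
y = x**2                                # m(t) = t^2, so m'(0.5) = 1
deriv_est = local_linear_deriv(x, y, 0.5, b)
```

With noise-free data and an interior point, the slope recovers \(m'(0.5)=1\) up to a small discretization and bias error.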

(ii): The fact asserted in (34) follows from (33), Proposition 10, the summation by parts formula and arguments similar to those used to derive equation (44) in Zhou (2010).

(iii) + (iv): Following Lemma C.3 of supplement of Dette and Wu (2019), we have

$$\begin{aligned} \sup _{t\in [b_{n,s},1-b_{n,s}]}\Big |\frac{1}{n_sb_{n,s}}\sum _{i=1}^{n_s}K^\circ \Big (\frac{i/n_s-t}{b_{n,s}}\Big )\sigma _s\Big (\frac{i}{n_s}\Big )V_{i,s} \Big |=O_p\Big (\frac{\log n_s}{\sqrt{n_sb_{n,s}}}\Big ). \end{aligned}$$

Finally, (35) follows from (33), (34) and (37), and (36) is obtained by similar arguments, using Lemma C.3 in the supplement of Dette and Wu (2019). This completes the proof of Proposition 11. \(\square \)

Proof of Theorem 4

We only prove the result in the case \(g \equiv 0 \). The general case follows by the same arguments. Under Assumptions 1 and 2, it follows from the proof of Theorem 4.1 in Dette and Wu (2019) that

$$\begin{aligned} \displaystyle \sup _{t\in (a+\eta , b - \eta )} \big |\big ({\hat{f}}_1(t) - {\hat{f}}_2(t)\big ) - \big (((m_1^{\prime})^{-1})^{\prime}(t) - ((m_{2}^{\prime})^{-1})^{\prime}(t)\big )\big |\rightarrow 0 \end{aligned}$$

in probability, where \({\hat{f}}_{1} (t)\) and \({\hat{f}}_{2}(t)\) are defined in (10) and (11), respectively. Next, since under the null hypothesis (5) we have \(((m_1^{\prime})^{-1})^{\prime}(t) - ((m_{2}^{\prime})^{-1})^{\prime}(t) = 0\) for all \(t\in (a+\eta , b - \eta )\) (see Lemma 1), it follows under the null hypothesis that

$$\begin{aligned} \displaystyle \sup _{t\in (a+\eta , b - \eta )} \big |{\hat{f}}_1(t) - {\hat{f}}_2(t) \big |\rightarrow 0 \end{aligned}$$

in probability. In other words, under \(H_{0}\), for any \(\epsilon > 0\), we have

$$\begin{aligned} \displaystyle \lim _{n\rightarrow \infty } {\mathbb {P}} \Big [\displaystyle \sup _{t\in (a+\eta , b - \eta )}\big |{\hat{f}}_1(t)- {\hat{f}}_2(t)\big | <\epsilon \Big ] = 1, \end{aligned}$$

and hence, under the null hypothesis \(g \equiv 0\), we have \(\lim _{n\rightarrow \infty } {\mathbb {P}} [{{\mathcal {C}}}_{n_1,n_2}\subset L(\epsilon )] = 1\). \(\square \)
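The identity behind Lemma 1 can be sketched directly. If the null hypothesis of a shift holds, say \(m_2(t) = m_1(t-c) + d\) for shift constants \(c\) and \(d\), then:

```latex
% m_2(t) = m_1(t - c) + d  implies  m_2'(t) = m_1'(t - c),
% hence (m_2')^{-1}(y) = (m_1')^{-1}(y) + c, and differentiating in y:
\[
   \big((m_2')^{-1}\big)'(y) \;=\; \big((m_1')^{-1}\big)'(y)
   \qquad \text{for all } y \text{ in the common range of } m_1' \text{ and } m_2'.
\]
```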

Proof of Theorem 5

To simplify the notation, we prove Theorem 5 in the case of equal sample sizes and equal bandwidths. The general case follows by the same arguments at the cost of additional notation. In this case \(c_2=r_2=1\), and we omit the subscript on the bandwidths when no confusion arises; for example, we write \(n_1=n_2=n\), \(b_{n,1}=b_{n,2} =b_n\), and use similar notation for other quantities depending on the sample size. In particular, we write \(T_n\) for \(T_{n_1,n_2}\) if \( n= n_1=n_2\).

Define the statistic

$$\begin{aligned} {\tilde{T}}_n=\int \Big ({\hat{f}}_1(t)-{\hat{f}}_2(t)\Big )^2 w(t)dt \end{aligned}$$

which is obtained from \(T_n\) by replacing the weight function \({\hat{w}}\) in (18) by its deterministic analogue (20). We prove Theorem 5 in two steps, establishing the assertions

$$\begin{aligned}&nb_n^{9/2}{\tilde{T}}_n- B_n(g) \Rightarrow {{\mathcal {N}}} (0, V_T) \end{aligned}$$
$$\begin{aligned}&nb_n^{9/2}(T_n-{\tilde{T}}_n)=o_p(1). \end{aligned}$$

For (39), note that the difference \(T_n-{\tilde{T}}_n\) is driven by the difference \({\hat{w}}(t)-w(t)\). The arguments used to prove (38) are also useful for the proof of (39) and are mathematically involved. We discuss the proof of (38) in detail in the next subsection.

Proof of (38)

By simple algebra, we obtain the decomposition

$$\begin{aligned} {\tilde{T}}_n=\int (I_1(t)-I_2(t)+II(t))^2w(t)dt, \end{aligned}$$

where for \(s=1,2\)

$$\begin{aligned} I_s(t)&=\frac{1}{Nh_d}\sum _{i=1}^N\Big (K_d\Big (\frac{{\hat{m}}_s^{\prime}(i/N)-t}{h_d}\Big ) -K_d\Big (\frac{ m_s^{\prime}(i/N)-t}{h_d}\Big )\Big ), \end{aligned}$$
$$\begin{aligned} II(t)&=\frac{1}{Nh_d}\sum _{i=1}^N\Big (K_d\Big (\frac{ m_1^{\prime}(i/N)-t}{h_d}\Big )-K_d\Big (\frac{ m_2^{\prime}(i/N)-t}{h_d}\Big )\Big ). \end{aligned}$$

We shall study \(I_s(t)\), \(s=1,2\), and II(t) by adapting the proof of Theorem 4.1 of Dette and Wu (2019). In fact, the functions \(m^{\prime}_s\) and \({\hat{m}}^{\prime}_s\) for \(s=1,2\) here play a similar role as the functions \(\mu \) and \({\tilde{\mu }}_{b_n}\) in Dette and Wu (2019). Observing the estimate on page 471 of Dette et al. (2006), it follows that

$$\begin{aligned} \frac{1}{Nh_d}\sum _{i=1}^NK_d\Big (\frac{m_s^{\prime}(i/N)-t}{h_d}\Big ) =\Big (((m_s^{\prime})^{-1}(t))^{\prime}+O\Big (h_d+\frac{1}{Nh_d}\Big )\Big ), \end{aligned}$$

(\(s=1,2\)) which yields the estimate

$$\begin{aligned} II(t)=((m_1^{\prime})^{-1}(t))^{\prime}-((m^{\prime}_2)^{-1}(t))^{\prime}+O\Big (h_d+\frac{1}{Nh_d}\Big ) \end{aligned}$$

uniformly with respect to \(t\in [a+\eta ,b-\eta ]\). For the two other terms, we use a Taylor expansion and obtain the decomposition

$$\begin{aligned} I_s(t)=I_{s,1}(t)+I_{s,2}(t) \,\,\,\,(s=1,2) , \end{aligned}$$

where
$$\begin{aligned} I_{s,1}(t)&=\tfrac{1}{Nh^2_d}\sum _{i=1}^N K'_d\Big (\frac{m_s'(\frac{i}{N} )-t}{h_d}\Big )({\hat{m}}_s'(\tfrac{i}{N} ) -m_s'(\tfrac{i}{N} )),\\ I_{s,2}(t)&=\tfrac{1}{2Nh^3_d}\sum _{i=1}^{N} K''_d\Big (\frac{m_s'(\frac{i}{N} )-t+\theta _s({\hat{m}}_s'(\frac{i}{N} )-m_s'(\frac{i}{N} ))}{h_d}\Big )({\hat{m}}_s'(\tfrac{i}{N} )-m_s'(\tfrac{i}{N} ))^2 \end{aligned}$$

for some \(\theta _s\in [-1,1]\) (\(s=1,2\)). We shall prove (38) in the following steps.

  1. (a)

    Using arguments of Dette and Wu (2019) we show that the leading term of \(I_s(t)\) is \(I_{s,1}(t)\), \(s=1,2\).

  2. (b)

    Using Proposition 11, we approximate the leading term of \(I_{1,1}(t)-I_{2,1}(t)\) via a Gaussian process. Therefore, \(T_n\) has the form

    $$\begin{aligned} T_n=\int \left( U_n(t)+((m^{\prime}_1)^{-1}(t))^{\prime}-((m^{\prime}_2)^{-1}(t))^{\prime}+R_n^\dag (t)\right) ^2w(t)dt, \end{aligned}$$

    where \(U_n(t)\) is a Gaussian process and \(R_n^\dag (t)\) is a negligible remainder term.

  3. (c)

    Under the considered alternative hypothesis, the asymptotic distribution is determined by \(\int U_n(t)^2w(t)dt\) and \(\int (((m^{\prime}_1)^{-1}(t))^{\prime}-((m^{\prime}_2)^{-1}(t))^{\prime})^2w(t)dt\). The latter accounts for a part of the bias. The former produces another part of the bias and determines the asymptotic stochastic behaviour. All terms in the expansion of \(T_n\) that involve \(R_n^\dag (t)\) are negligible.

    • (\(c_1\)) Further calculations show \(U_n(t)=U_{n,1}(t)-U_{n,2}(t)\), where

      $$\begin{aligned} U_{n,s}(t)=\displaystyle \sum _{j=1}^nG(m_s^{\prime}(\cdot ),j,t)V_{j,s}\,\, \,\, \,\, \,\, \,\, (s=1,2), \end{aligned}$$

      where \(G(m_s^{\prime}(\cdot ),j,t)\) depends on the kernels, curves, long-run variances and bandwidths. Using a Riemann sum approximation, we can further simplify the leading term of \(G(m_s^{\prime}(\cdot ),j,t)\), and hence the leading term of \(\int U_n^2(t)w(t)dt\) is a quadratic form in Gaussian random variables.

    • (\(c_2\)) The asymptotic normality is then guaranteed by Theorem 2.1 of de Jong (1987). The mean and variance are obtained via straightforward but tedious calculations and certain arguments from Zhou (2010).

  4. (d)

    Finally, we show that \(\int U_n(t)(((m^{\prime}_1)^{-1}(t))^{\prime}-((m^{\prime}_2)^{-1}(t))^{\prime})w(t)dt\) and \(\int U_n(t)R_n^\dag (t)w(t)dt\) are negligible.

Step (a): By parts (iii) and (iv) of Proposition 11 and the same arguments that were used in the online supplement of Dette and Wu (2019) to obtain the bound for the term \(\varDelta _{2,N}\) in the proof of their Theorem 4.1, it follows that

$$\begin{aligned} I_{s,2} (t)=O_p\Big (\frac{\pi _n^2}{h_d^3}(h_d+\pi _n)\Big )=O_p\Big (\frac{\pi _n^3}{h_d^3}\Big ) \,(s=1,2), \end{aligned}$$

uniformly with respect to \(t\in [a+\eta ,b-\eta ]\). Here, we used the fact that the number of nonzero summands in \(I_{s,2}(t)\) is of order \(O(N(h_d+\pi _n))\).

Step (b): Next, for the investigation of the difference \(I_{1,1}(t)-I_{2,1}(t)\), we define \({\mathbf {m}}^{\prime}= (m_1^{\prime},m_2^{\prime})^\top \) and consider the vector

$$\begin{aligned} K^{\prime}_d\Big ( \frac{{\mathbf {m}}^{\prime}(i/N)-t}{h_d}\Big )=\Big (K^{\prime}_d\Big ( \frac{ m^{\prime}_1(i/N)-t}{h_d}\Big ),-K^{\prime}_d\Big ( \frac{ m^{\prime}_2(i/N)-t}{h_d}\Big )\Big )^\top . \end{aligned}$$

By parts (i) and (ii) of Proposition 11, it follows that there exist independent 2-dimensional standard normally distributed random vectors \({\mathbf {V}}_i \) such that

$$\begin{aligned} I_{1,1} (t)-I_{2,1}(t) =&\frac{1}{nNb_n^2h^2_d}\sum _{j=1}^n\sum _{i=1}^NK^\circ \Big (\frac{j/n-i/N}{b_n}\Big )\\&\times ( K^{\prime}_d)^{\top}\Big (\frac{{\mathbf {m}}^{\prime}(i/N)-t}{h_d}\Big )\varSigma (j/n){\mathbf {V}}_j +O_p(\pi _n^{\prime}h_d^{-1}), \end{aligned}$$

uniformly with respect to \(t\in [a+\eta ,b-\eta ]\). Combining this estimate with equations (42) and (43), it follows

$$\begin{aligned} T_n=\int \left( U_n(t)+((m^{\prime}_1)^{-1}(t))^{\prime}-((m^{\prime}_2)^{-1}(t))^{\prime}+R_n^\dag (t)\right) ^2w(t)dt, \end{aligned}$$

where
$$\begin{aligned} U_n(t)&=\frac{1}{nNb_n^2h^2_d}\sum _{j=1}^n\sum _{i=1}^NK^\circ \Big (\frac{\frac{j}{n} -\frac{i}{N} }{b_n}\Big )(K^{\prime}_d)^\top \Big (\frac{{\mathbf {m}}^{\prime}(\frac{i}{N} )-t}{h_d}\Big )\varSigma (\tfrac{j}{n} ){\mathbf {V}}_j, \,\,\,\,\, \end{aligned}$$

and the remainder \(R_n^\dag (t)\) can be estimated as follows

$$\begin{aligned} \sup _{t\in [a+\eta ,b-\eta ]}|R^\dag _n(t)|&={O_p\Big (\frac{\pi _n^{\prime}}{h_d}+\frac{\pi _n^3}{h_d^3}+h_d +\frac{1}{Nh_d}\Big )}. \end{aligned}$$

Step (c): We now study the asymptotic properties of the quantities

$$\begin{aligned}&nb_n^{9/2}\int (U_n(t))^2w(t)dt, \end{aligned}$$
$$\begin{aligned}&nb_n^{9/2}\int U_n(t)\big (((m_1^{\prime})^{-1})^{\prime}(t)-((m_2^{\prime})^{-1})^{\prime}(t)\big )w(t)dt, \end{aligned}$$
$$\begin{aligned}&nb_n^{9/2}\int U_n(t)R_n^\dag (t)w(t)dt, \end{aligned}$$

which determine the asymptotic distribution of \(T_{n}\), since the bandwidth conditions yield under local alternatives of the form \(((m_1^{\prime})^{-1})^{\prime}(t)-((m_2^{\prime})^{-1})^{\prime}(t)=\rho _ng(t)\),

$$\begin{aligned} nb_n^{9/2}\int \rho _n^2g^2(t)w(t)dt=\int g^2(t)w(t)dt, \end{aligned}$$

and the other parts of the expansion are negligible, i.e.

$$\begin{aligned}&nb_n^{9/2}\int (R_n^\dag (t))^2w(t)dt=o(1), \end{aligned}$$
$$\begin{aligned}&nb_n^{9/2}\int \rho _ng(t)R^\dag _{n}(t)w(t)dt=o(1). \end{aligned}$$

Step (\(c_1\)): Asymptotic properties of (47): To address the expressions related to \(U_n(t)\) in (47)–(49), note that

$$\begin{aligned} U_n(t)=U_{n,1}(t)-U_{n,2}(t), \end{aligned}$$

where
$$\begin{aligned} U_{n,s}(t)=\frac{1}{nNb_n^2h_d^2}\sum _{j=1}^n\sum _{i=1}^NK^\circ \Big (\frac{j/n-i/N}{b_n}\Big )K^{\prime}_d\Big (\frac{ m_s^{\prime}(i/N)-t}{h_d}\Big )\sigma _s(j/n) V_{j,s} \end{aligned}$$

for \(s=1,2\), where \(\{V_{j,s}\}\) are independent standard normally distributed random variables. In order to simplify the notation, we write

$$\begin{aligned} U_{n,s}(t)=\displaystyle \sum _{j=1}^nG(m_s^{\prime}(\cdot ),j,t)V_{j,s}\,\, \,\, \,\, \,\, \,\, (s=1,2), \end{aligned}$$

where
$$\begin{aligned} G(m_s^{\prime}(\cdot ),j,t)=\frac{1}{nNb_n^2h_d^2}\sum _{i=1}^NK^\circ \Big (\frac{j/n-i/N}{b_n}\Big )K^{\prime}_d\Big (\frac{ m_s^{\prime}(i/N)-t}{h_d}\Big )\sigma _s(j/n). \end{aligned}$$

A straightforward calculation (using the change of variable \(v=(m_s^{\prime}(u)-t)/h_d\)) shows that

$$\begin{aligned} G(m^{\prime}_s(\cdot ),j,t)=&\frac{1}{nb_n^2h^2_d}\int _{0}^{1}K^\circ \left( \frac{j/n-u}{b_n}\right) K^{\prime}_d\left( \frac{ m_s^{\prime}(u)-t}{h_d}\right) \sigma _s(j/n) du+O\left( \delta _n \right) \\ =&\frac{1}{nb_n^2h_d}\sigma _s(j/n)\int _{{\mathcal {A}}_s(t)}K^{\prime}_d(v) ((m_s^{\prime})^{-1}(t+h_dv))^{\prime} \\&\times K^\circ \left( \frac{j/n-(m_s^{\prime})^{-1}(t+h_dv)}{b_n}\right) dv +O\left( \delta _n \right) , \end{aligned}$$

where \({\mathcal {A}}_s(t) =\big (\frac{m_s^{\prime}(0)-t}{h_d},\frac{ m_s^{\prime}(1)-t}{h_d}\big ) \) and the remainder is given by

$$\begin{aligned} \delta _n = O\Big (\Big (\frac{1}{nb_n^2h^2_dN}\Big ){\mathbf {1}} \Big (\Big |\frac{j/n-(m_s^{\prime})^{-1}(t)}{b_n+Mh_d}\Big |\le 1\Big )\Big ), \end{aligned}$$

and \({\mathbf {1}} (A) \) denotes the indicator function of the set A. As the kernel \(K^{\prime}_d(\cdot )\) has compact support and is symmetric, it follows by a Taylor expansion, for any t with \(w(t)\ne 0\),

$$\begin{aligned}&\int _{{\mathcal {A}}_s(t)}K^{\prime}_d(v)((m_s^{\prime})^{-1}(t+h_dv))^{\prime}K^\circ \Big (\frac{\frac{j}{n} -(m_s^{\prime})^{-1}(t+h_dv)}{b_n}\Big )dv\\&\quad =-\frac{h_d}{b_n}(((m_s^{\prime})^{-1}(t))^{\prime})^2(K^\circ )^{\prime}\Big (\frac{\frac{j}{n} -(m_s^{\prime})^{-1}(t)}{b_n}\Big )\int K_d^{\prime}(v)vdv\Big (1+O\Big (b_n+\frac{h_d^2}{b_n^2}\Big )\Big ). \end{aligned}$$

With the notation

$$\begin{aligned} {\tilde{G}}(m_s^{\prime}(\cdot ),j,t)=\frac{-1}{nb_n^3}(K^\circ )^{\prime}\Big (\frac{\frac{j}{n} -(m_s^{\prime})^{-1}(t)}{b_n}\Big )\sigma _s(\frac{j}{n} )(((m_s^{\prime})^{-1})^{\prime}(t))^2\int vK^{\prime}_d(v)dv \end{aligned}$$

(\(s=1,2\)) we thus obtain the approximation

$$\begin{aligned} \int U_n^2(t)w(t)dt =&\sum _{s=1}^2 \sum _{j=1}^nV_{j,s}^2\int G^2(m_s^{\prime}(\cdot ),j,t)w(t)dt \nonumber \\&+\sum _{s=1}^2\sum _{1\le i\ne j \le n}V_{i,s}V_{j,s}\int G(m_s^{\prime}(\cdot ),i,t) G(m_s^{\prime}(\cdot ),j,t)w(t)dt\nonumber \\&-2\sum _{1\le i\le n}V_{i,1}V_{i,2}\int G(m_1^{\prime}(\cdot ),i,t) G(m_2^{\prime}(\cdot ),i,t)w(t)dt\nonumber \\ =&\sum _{s=1}^2 \sum _{j=1}^nV_{j,s}^2\left( \int {\tilde{G}}^2(m_s^{\prime}(\cdot ),j,t)w(t) dt(1+r_{j,s})\right) \nonumber \\&+\sum _{s=1}^2\sum _{1\le i\ne j \le n}V_{i,s}V_{j,s} \left( \int {\tilde{G}}(m_s^{\prime}(\cdot ),i,t){\tilde{G}}(m_s^{\prime}(\cdot ),j,t) w(t)dt(1+r_{i,j,s})\right) \nonumber \\&-2\sum _{1\le i\le n}V_{i,1}V_{i,2}\left( \int {\tilde{G}} (m_1^{\prime}(\cdot ),i,t){\tilde{G}}(m_2^{\prime}(\cdot ),i,t)w(t)dt(1+r_{i,s}^{\prime})\right) , \end{aligned}$$

where the remainders satisfy

$$\begin{aligned} \max \left( \displaystyle \max _{i,j,s=1,2}(|r_{i,j,s}|),\displaystyle \max _{i,s=1,2}(|r_{i,s}|),\max _{i,s=1,2}(|r_{i,s}^{\prime}|)\right) =o(1). \end{aligned}$$

Let us now consider the statistics \({\tilde{U}}_{n,s}(t)=\sum _{j=1}^n{\tilde{G}}(m_s^{\prime}(\cdot ),j,t)V_{j,s}\) (\(s=1,2\)), and

$$\begin{aligned} {\tilde{U}}_n(t)={\tilde{U}}_{n,1}(t)-{\tilde{U}}_{n,2}(t), \end{aligned}$$

then, by the previous calculations, it follows that

$$\begin{aligned} nb_n^{9/2} \Big ( \int U_n^2(t) w(t)dt - \int {\tilde{U}}_n^2(t)w(t)dt \Big ) = o_P(1), \end{aligned}$$

and therefore, we investigate the weak convergence of \(nb_n^{9/2} \int {\tilde{U}}_n^2(t)w(t)dt\) in the following. For this purpose, we use a decomposition similar to (53) and obtain

$$\begin{aligned} \int {\tilde{U}}_n^2(t) w(t)dt&=\sum _{s=1}^2\int ({\tilde{U}}_{n,s}(t))^2w(t)dt-2\int ({\tilde{U}}_{n,1}(t){\tilde{U}}_{n,2}(t))w(t)dt\nonumber \\&=\sum _{s=1}^2 \sum _{j=1}^nV_{j,s}^2\int {\tilde{G}}^2(m_s^{\prime}(\cdot ),j,t)w(t)dt\nonumber \\&\quad +\sum _{s=1}^2\sum _{1\le i\ne j \le n}V_{i,s}V_{j,s}\int {\tilde{G}}(m_s^{\prime}(\cdot ),i,t){\tilde{G}}(m_s^{\prime}(\cdot ),j,t)w(t)dt\nonumber \\&\quad -2\sum _{1\le i\le n}V_{i,1}V_{i,2}\int {\tilde{G}}(m_1^{\prime}(\cdot ),i,t){\tilde{G}}(m_2^{\prime}(\cdot ),i,t)w(t)dt\nonumber \\&:=D_1+D_2+D_3, \end{aligned}$$

where the last equation defines \(D_1,D_2\) and \(D_3\) in an obvious manner.

Step \((c_2)\): Elementary calculations (using a Taylor expansion and the fact that the kernels have compact support) show that

$$\begin{aligned} {\mathbb {E}}(D_1)=&\sum _{s=1}^2\sum _{j=1}^n\int \Big (\tfrac{\sigma _s(\tfrac{j}{n} )}{nb_n^3}(K^\circ )^{\prime}\Big (\frac{\frac{j}{n} -(m_s^{\prime})^{-1}(t)}{b_n}\Big )(((m_s^{\prime})^{-1})^{\prime}(t))^2 \kappa _K \Big )^2w(t)dt\nonumber \\ =&\sum _{s=1}^2\sum _{j=1}^n\int \Big (\tfrac{\sigma _s((m_s^{\prime})^{-1}(t))}{nb_n^3}(K^\circ )^{\prime}\Big (\frac{\frac{j}{n} -(m_s^{\prime})^{-1}(t)}{b_n}\Big )(((m_s^{\prime})^{-1})^{\prime}(t))^2\kappa _K \Big )^2\nonumber \\&\times w(t)dt(1+O(b_n)), \end{aligned}$$

where \(\kappa _K= \int vK^{\prime}_d(v)dv\). Using the estimate

$$\begin{aligned} \frac{1}{nb_n}\sum _{j=1}^n\Big ((K^\circ )^{\prime}\Big (\frac{j/n-(m_s^{\prime})^{-1}(t)}{b_n}\Big )\Big )^2=\int ((K^\circ )^{\prime}(x))^2dx\Big (1+O\Big (\frac{1}{nb_n}\Big )\Big ), \end{aligned}$$

(uniformly with respect to \(t\in [a+\eta , b-\eta ]\)) and (57) gives

$$\begin{aligned} {\mathbb {E}}(D_1) =&\frac{1}{nb_n^5}\sum _{s=1}^2\int ((K^\circ )^{\prime}(x))^2dx\int \left( \sigma _s((m_s^{\prime})^{-1}(t))(((m_s^{\prime})^{-1}(t))^{\prime})^2\kappa _K \right) ^2w(t)dt \\&\times \Big (1+O\Big (b_n+\frac{1}{nb_n}\Big )\Big ), \end{aligned}$$

which implies

$$\begin{aligned} {\mathbb {E}}(nb_n^{9/2}D_1)= B_n(0) +O\Big (\sqrt{b_n}+\frac{1}{nb_n^{3/2}}\Big ), \end{aligned}$$

where \(B_n (g)\) is defined in Theorem 5 (and we use the notation with the function \(g \equiv 0\)). Here, we used the change of variable \((m_s^{\prime})^{-1}(t)=u\) and, subsequently, the identity \(((m_s')^{-1})'(t)=\frac{1}{m_s''((m_s')^{-1}(t))}\). Similar arguments establish that

$$\begin{aligned} \text{ Var } (D_1)=O\Big (\sum _{s=1}^2 \sum _{j=1}^n\Big (\int {\tilde{G}}^2(m_s^{\prime}(\cdot ),j,t)w(t)dt\Big )^2\Big ) =O\Big (\frac{nb_n^2}{n^4b_n^{12}}\Big )=O\Big (\frac{1}{n^3b_n^{10}}\Big ), \end{aligned}$$

where the first estimate is obtained from the fact that \(\int {\tilde{G}}^2(m_s^{\prime}(\cdot ),j,t)w(t)dt=O(b_n/(nb_n^3)^2)\). This leads to the estimate

$$\begin{aligned} \mathrm{Var} (nb_n^{9/2}D_1)=O\Big (\frac{1}{nb_n}\Big ). \end{aligned}$$
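For completeness, the inverse function derivative identity used in the change of variable above follows by applying the chain rule to \(m_s'\big((m_s')^{-1}(t)\big)=t\):

```latex
% differentiating m_s'((m_s')^{-1}(t)) = t with respect to t gives
\[
   m_s''\big((m_s')^{-1}(t)\big)\,\big((m_s')^{-1}\big)'(t) = 1
   \qquad\Longrightarrow\qquad
   \big((m_s')^{-1}\big)'(t) = \frac{1}{m_s''\big((m_s')^{-1}(t)\big)} .
\]
```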

For the term \(D_3\) in the decomposition (56), it follows that

$$\begin{aligned} {\mathbb {E}}(D_3^2)&=4\sum _{1\le i\le n}\Big (\int {\tilde{G}} (m_1^{\prime}(\cdot ),i,t){\tilde{G}}(m_2^{\prime}(\cdot ),i,t)w(t)dt\Big )^2 \nonumber \\&=\frac{4 \kappa _K^4}{n^4b_n^{12}}\sum _i\Big (\int (((m_1^{\prime})^{-1})^{\prime}(t))^2(((m_2^{\prime})^{-1})^{\prime}(t))^2 (K^\circ )^{\prime}\Big (\frac{i/n-(m^{\prime}_1)^{-1}(t)}{b_n}\Big )\nonumber \\&\quad \times (K^\circ )^{\prime}\Big (\frac{i/n-(m^{\prime}_2)^{-1}(t)}{b_n}\Big )w(t)dt\Big )^2\sigma ^2_1(i/n) \sigma ^2_2(i/n)=O((n^3b_n^{10})^{-1}). \end{aligned}$$

and therefore
$$\begin{aligned} nb_n^{9/2}D_3=O_p\Big (\Big (\frac{1}{nb_n}\Big )^{1/2}\Big ). \end{aligned}$$

Finally, we investigate the term \(D_2\) using a central limit theorem for quadratic forms (see de Jong 1987). For this purpose define the term (note that \((K^\circ )^{\prime}(\cdot )\) is symmetric and has bounded support)

$$\begin{aligned} V_{s,n}&=\sum _{1\le i\ne j\le n}\Big (\int (K^\circ )' \Big (\frac{i/n-(m_s')^{-1}(t)}{b_n}\Big )(K^\circ )' \Big (\frac{j/n-(m_s')^{-1}(t)}{b_n}\Big ) \\&\times \sigma _s( \frac{i}{n})\sigma _s(\frac{j}{n})(((m_s')^{-1})'(t))^4w(t)dt\Big )^2\\ &=n^2\int _0^1\int _0^1\Big (\int _{{\mathbb {R}}} (K^\circ )'\Big (\frac{u-(m_s')^{-1}(t)}{b_n}\Big )(K^\circ )' \Big (\frac{v-(m_s')^{-1}(t)}{b_n}\Big ) \\&\times \sigma _s(u)\sigma _s(v)(((m_s')^{-1})'(t))^4w(t)dt\Big )^2 dudv(1+o(1))\\ &=n^2b_n^2\int _0^1\int _0^1\Big (\int _{{\mathbb {R}}} (K^\circ )'(y)(K^\circ )'(\frac{v-u}{b_n}+y) \\&\times \sigma _s^2(u)w(m_s'(u)) (m_s''(u))^{-3}dy\Big )^2dudv(1+o(1)) \\ &=n^2b_n^3\int ((K^\circ )'*(K^\circ )'(z))^2dz \int (\sigma _s^2(u)w(m_s'(u))(m''_s(u))^{-3} )^2 du(1+o(1)), \end{aligned}$$

then \(\lim _{n\rightarrow \infty } V_{s,n} / (n^2 b_n^3) \) exists (\(s=1,2\)) and

$$\begin{aligned} \lim _{n\rightarrow \infty }\frac{2(\int vK^{\prime}_d(v)dv)^4n^2b_n^9}{(nb_n^3)^4}(V_{1,n}+V_{2,n}) = {V_T}, \end{aligned}$$

where the asymptotic variance \(V_T\) is defined in Theorem 5. Now similar arguments as in the proof of Lemma 4 in Zhou (2010) show that \( nb^{9/2}_nD_2\Rightarrow N(0,V_{T})\). Combining this statement with (55), (56), (58), (59) and (60) finally gives

$$\begin{aligned} nb_n^{9/2}\int U_n^2(t)w(t)dt- B_n(0) \Rightarrow N(0, V_T). \end{aligned}$$
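The role of de Jong's (1987) CLT in Step (\(c_2\)) can be illustrated numerically: a centred quadratic form \(\sum _{i\ne j} a_{ij}V_iV_j\) in independent standard normals, with a banded coefficient matrix mimicking the localized kernel weights \({\tilde{G}}\), has variance \(2\sum _{i\ne j}a_{ij}^2\) and is approximately normal. The band width, matrix size and simulation size below are illustrative choices, not quantities from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, nsim, band = 100, 5000, 5

# banded symmetric coefficients with zero diagonal (localized weights)
a = np.zeros((n, n))
for i in range(n):
    for j in range(max(0, i - band), min(n, i + band + 1)):
        if i != j:
            a[i, j] = 1.0 / n

V = rng.standard_normal((nsim, n))
D = np.einsum("ki,ij,kj->k", V, a, V)   # D_k = sum_{i != j} a_ij V_i V_j
theo_var = 2.0 * np.sum(a**2)           # variance of the quadratic form
emp_mean, emp_var = D.mean(), D.var()
```

The empirical mean is close to zero and the empirical variance close to \(2\sum _{i\ne j}a_{ij}^2\), in line with the normal limit.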

Step (d):

Asymptotic properties of (48): Define \(d(t)= ((m_1^{\prime})^{-1})^{\prime}(t)-((m_2^{\prime})^{-1})^{\prime}(t) \) and note that

$$\begin{aligned} \int U_n(t) d(t) w(t)dt=\int (U_{n,1}(t)-U_{n,2}(t)) d(t) w(t)dt, \end{aligned}$$

where
$$\begin{aligned} \int U_{n,s}(t) d(t) w(t)dt&=\sum _{j=1}^nV_{j,s}\int G(m_s^{\prime}(\cdot ),j,t) (\rho _ng(t)+o(\rho _n))w(t)dt\nonumber \\&=O_p\Big (\Big (\frac{nb_n^2\rho _n^2}{n^2b^6_n}\Big )^{1/2}\Big )=O_p \Big (\frac{\rho _n}{(nb_n^4)^{1/2}}\Big ). \end{aligned}$$

Observing that \( \int G(m_s^{\prime}(\cdot ),j,t)\rho _ng(t)w(t)dt=O(\rho _nb_n/(nb_n^3)), \) the bandwidth conditions and the definition of \(\rho _n\) give for \(s=1,2\),

$$\begin{aligned} nb_n^{9/2}\int U_{n,s}(t)\,d(t)\,w(t)dt=O_p(b_n^{1/4}). \end{aligned}$$

Asymptotic properties of (49): Note that it follows for the term (49)

$$\begin{aligned} \Big |\int U_{n,s}(t)R^\dag _n(t)w(t)dt\Big |\le \sup _t|R^\dag _n(t)|\,\sup _t\Big |\sum _{j=1}^nV_{j,s}G(m_s^{\prime}(\cdot ),j,t) \Big |\int w(t)dt . \end{aligned}$$

Observing that \(\displaystyle \sum _jG^2(m_s^{\prime}(\cdot ),j,t)=O(nb_n/(nb_n^3)^2)\) we have

$$\begin{aligned} \sup _t\Big |\sum _{j=1}^nV_{j,s}G(m_s^{\prime}(\cdot ),j,t)\Big |=O_p\Big (\frac{\log ^{1/2} n}{n^{1/2}b_n^{5/2}}\Big ), \end{aligned}$$

and the conditions on the bandwidths and (46) yield

$$\begin{aligned}&nb_n^{9/2}\Big |\int U_{n,s}(t)R^\dag _n(t)w(t)dt\Big |\nonumber \\&\quad =O_p\Big (\frac{\log ^{1/2} n}{n^{1/2}b_n^{5/2}}\Big (\frac{\pi _n^{\prime}}{h_d}+\frac{\pi _n^3}{h_d^3} +h_d+\frac{1}{Nh_d}\Big )nb_n^{9/2}\Big )=o_p(1). \end{aligned}$$

The proof of assertion (38) is now completed using the decomposition (44) and the results (50), (51), (52), (61), (62) and (63). \(\square \)

Proof of (39)

From the proof of (38), we have the decomposition

$$\begin{aligned} T_n-{\tilde{T}}_n&=\int \left( U_n(t)+((m^{\prime}_1)^{-1}(t))^{\prime}-((m^{\prime}_2)^{-1}(t))^{\prime}+R_n^\dag (t)\right) ^2 ({\hat{w}}(t)-w(t))dt, \end{aligned}$$

where the quantities \(I_s\), II, \(U_n(t)\) and \(R_n^\dag (t)\) are defined in (40), (41), and (45). By the proof of (38), it then suffices to show that

$$\begin{aligned} nb_n^{9/2}\int (U_n(t))^2({\hat{w}}(t)-w(t))dt=o_p(1). \end{aligned}$$

Using the same arguments as given in the proof of (38), this assertion follows from \( nb_n^{9/2}\int ({\tilde{U}}_n(t))^2({\hat{w}}(t)-w(t))dt=o_p(1), \) where \({\tilde{U}}_n(t)\) is defined in (54). Recalling the definition of the interval \([a,b]\) in (20), it then follows (using similar arguments as given for the derivation of (37)) that \( \sup _{t\in [a,b]}|{\tilde{U}}_n(t)|=O_p\left( \frac{\log n}{\sqrt{nb_{n}}b^2_n}\right) . \) Together with part (iii) of Proposition 11, it follows that

$$\begin{aligned} \int ({\tilde{U}}_n(t))^2({\hat{w}}(t)-w(t))dt\le \Big (\sup _{t\in [a,b]}|{\tilde{U}}_n(t)|\Big )^2\int |{\hat{w}}(t)-w(t)|dt=O_p\left( \tfrac{{\bar{\omega }}_n\log ^2 n}{nb_n^5}\right) , \end{aligned}$$

where \({\bar{\omega }}_n\) is defined in (22). Thus, by our choice of bandwidths, \(nb_n^{9/2}\frac{{\bar{\omega }}_n\log ^2 n}{nb_n^5}=o(1)\), from which (39) follows. Finally, the assertion of Theorem 5 follows from (38) and (39). \(\square \)
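To close, a self-contained numerical sketch of the quantity underlying \(T_n\) (with the true derivatives in place of the estimates \({\hat{m}}_s'\), a unit weight function, and illustrative bandwidths, none of which are the paper's choices): the kernel density of the derivative values estimates \(((m_s')^{-1})'\), so the integrated squared difference is numerically zero under a shift and positive when the shapes differ.

```python
import numpy as np

def f_hat(deriv_vals, tgrid, h):
    # (1/(N h)) sum_i K((m'(i/N) - t)/h): estimates ((m')^{-1})'(t)
    u = (deriv_vals[None, :] - tgrid[:, None]) / h
    K = 0.75 * (1.0 - u**2) * (np.abs(u) <= 1)    # Epanechnikov kernel
    return K.sum(axis=1) / (deriv_vals.size * h)

N, h = 2000, 0.05
grid = np.arange(1, N + 1) / N
tgrid = np.linspace(0.5, 1.2, 200)                # interval [a + eta, b - eta]

def T_stat(d1, d2):
    diff = f_hat(d1, tgrid, h) - f_hat(d2, tgrid, h)
    return np.mean(diff**2) * (tgrid[-1] - tgrid[0])   # Riemann sum, w == 1

# shift null: m_2(t) = m_1(t - 0.1) + 0.3 with m_1(t) = t^2, so m_2'(t) = m_1'(t - 0.1)
T_null = T_stat(2 * grid, 2 * (grid - 0.1))
# alternative with a genuinely different shape: m(t) = t^3, so m'(t) = 3 t^2
T_alt = T_stat(2 * grid, 3 * grid**2)
```

Under the shift the two density estimates coincide on the interior interval, while the cubic alternative produces a clearly positive integrated squared difference.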

About this article


Cite this article

Dette, H., Dhar, S.S. & Wu, W. Identifying shifts between two regression curves. Ann Inst Stat Math (2021).


Keywords
  • Comparison of curves
  • Nonparametric regression
  • Hypothesis testing