Abstract
In this paper, in order to test whether changes have occurred in a nonlinear parametric regression, we propose a nonparametric method based on the empirical likelihood. First, we test the null hypothesis of no change against the alternative of one change in the regression parameters. Under the null hypothesis, the consistency and the convergence rate of the regression parameter estimators are proved. The asymptotic distribution of the test statistic under the null hypothesis is obtained, which allows us to find the asymptotic critical value. On the other hand, we prove that the proposed test statistic has asymptotic power equal to 1. These theoretical results lead to a simple test statistic, very useful for applications. The epidemic model, a particular model with two change-points under the alternative hypothesis, is also studied. Numerical studies by Monte Carlo simulations show the performance of the proposed test statistic.
References
Antoch J, Gregoire G, Jaruskova D (2004) Detection of structural changes in generalized linear models. Stat Probab Lett 69:315–332
Bai J (1999) Likelihood ratio tests for multiple structural changes. J Econom 91:299–323
Bai J, Perron P (1998) Estimating and testing linear models with multiple structural changes. Econometrica 66(1):47–78
Boldea O, Hall AR (2013) Estimation and inference in unstable nonlinear least squares models. J Econom 172(1):158–167
Ciuperca G (2011) A general criterion to determinate the number of change-points. Stat Probab Lett 81(8):1267–1275
Ciuperca G (2011) Penalized least absolute deviations estimation for nonlinear model with change-points. Stat Pap 52(2):371–390
Ciuperca G (2013) Two tests for sequential detection of a change-point in a nonlinear model. J Stat Plan Inference 143(10):1621–1634
Ciuperca G (2013) Empirical likelihood for nonlinear model with missing responses. J Stat Comput Simul 83(4):737–756
Csörgö M, Horváth L (1997) Limit theorems in change-point analysis. Wiley, Hoboken
Guan Z (2004) A semiparametric changepoint model. Biometrika 91(3):849–862
Hall A, Sen A (1999) Structural stability testing in models estimated by generalized method of moments. J Bus Econ Stat 17(3):335–348
Horváth L, Hušková M, Kokoszka P, Steinebach J (2004) Monitoring changes in linear models. J Stat Plan Inference 126(1):225–251
Hušková M, Kirch C (2012) Bootstrapping sequential change-point tests for linear regression. Metrika 75(5):673–708
Lai TL, Xing H (2010) Sequential change-point detection when the pre- and post-change parameters are unknown. Seq Anal 29(2):162–175
Lee S, Seo MH, Shin Y (2011) Testing for threshold effects in regression models. J Am Stat Assoc 493:220–231
Liu Y, Zou C, Zhang R (2008) Empirical likelihood ratio test for a change-point in linear regression model. Commun Stat Theory Methods 37:2551–2563
Mei Y (2006) Sequential change-point detection when unknown parameters are present in the pre-change distribution. Ann Stat 34(1):92–122
Neumeyer N, Van Keilegom I (2009) Change-point tests for the error distribution in non-parametric regression. Scand J Stat 36(3):518–541
Ning W, Pailden J, Gupta A (2012) Empirical likelihood ratio test for the epidemic change model. J Data Sci 10:107–127
Nosek K (2010) Schwarz information criterion based tests for a change-point in regression models. Stat Pap 51(4):915–929
Owen AB (2001) Empirical likelihood. Chapman & Hall, Boca Raton
Qin J, Lawless J (1994) Empirical likelihood and general estimating equations. Ann Stat 22(1):300–325
Qu Z (2008) Testing for structural change in regression quantiles. J Econom 146:170–184
Ramanayake A, Gupta AK (2003) Tests for an epidemic change in a sequence of exponentially distributed random variables. Biom J 45:946–958
Ramanayake A (2004) Tests for a change point in the shape parameter of gamma random variables. Commun Stat Theory Methods 33:821–833
Seber G, Wild C (2003) Nonlinear regression. Wiley series in probability and mathematical statistics. Wiley, Hoboken
Van der Vaart AW, Wellner JA (1996) Weak convergence and empirical processes: with applications to statistics. Springer series in statistics. Springer, New York
Wu Y (2008) Simultaneous change point analysis and variable selection in a regression problem. J Multivar Anal 99:2154–2171
Yao QW (1993) Tests for change-points with epidemic alternatives. Biometrika 80:179–191
Yu W, Niu C, Xu W (2014) An empirical likelihood inference for the coefficient difference of a two-sample linear model with missing response data. Metrika. doi:10.1007/s00184-013-0459-3
Zi X, Zou C, Liu Y (2012) Two-sample empirical likelihood method for difference between coefficients in linear regression model. Stat Pap 53(1):83–93
Zou C, Liu Y, Qin P, Wang Z (2007) Empirical likelihood ratio test for a change point. Stat Probab Lett 77:374–382
Acknowledgments
The authors would like to thank the anonymous referee, the Associate Editor and the Editor for constructive comments and suggestions that have contributed to the improvement of the paper.
Appendix
The following lemma will be used in the proofs of the propositions, theorems and other lemmas.
Lemma 1
Let \(\mathbf{X}=(X_1,\ldots ,X_p)\) be a random (column) vector, with the random variables \(X_1,\ldots ,X_p\) not necessarily independent, and let \(\mathbf M =(m_{ij})_{1 \le i,j \le p}\) be such that \(\mathbf M = \mathbf{X}\mathbf{X}^t\). If for \(j=1,\ldots , p\) we have
then
- (i) \(I\!P\big [ \Vert \mathbf{X}\Vert _1 \ge p \max _{1 \le j \le p}\delta _j \big ]\le \max _{1 \le j \le p} \eta _j\),
- (ii) \(I\!P\big [ \Vert \mathbf{X}\Vert _2 \ge \sqrt{p} \max _{1 \le j \le p}\delta _j\big ]\le \max _{1 \le j \le p}\eta _j\),
- (iii) \(I\!P\big [ \Vert \mathbf M \Vert _1 \ge p \max _{1 \le i,j \le p} \{\delta _i^2,\delta _j^2\}\big ]\le \max _{1 \le i,j \le p} \{\eta _i^2,\eta _j^2\}\),
where \(\Vert \mathbf M \Vert _1= \max _{1 \le j \le p} \{ \sum _{i=1}^p |m_{ij}| \}\) is the matrix norm subordinate to the vector norm \(\Vert \cdot \Vert _1\).
Proof of Lemma 1
(i) Using relation (51), we can write
$$\begin{aligned} I\!P\left[ \Vert \mathbf{X}\Vert _1\ge p \max _{1\le j\le p}\delta _j\right] \le I\!P\left[ p\max _{1\le j\le p}|X_j|\ge p\max _{1\le j\le p}\delta _j\right] \le \max _{1\le j\le p}\eta _j. \end{aligned}$$
(ii) Relation (51) is equivalent to \(I\!P\big [X_j^2\ge \delta _j^2\big ] \le \eta _j\), which implies that
$$\begin{aligned} I\!P\left[ \Vert \mathbf{X}\Vert _2^2\ge p\max _{1\le j\le p}\delta _j^2\right] =I\!P\left[ \max _{1\le j \le p}X_j^2 \ge \max _{1\le j\le p}\delta _j^2\right] \le \max _{1\le j \le p}\eta _j. \end{aligned}$$
(iii) For \(1\le i,j \le p\), we have
$$\begin{aligned} I\!P\left[ |X_i X_j|\ge \max \{\delta _i^2,\delta _j^2\}\right] \le I\!P\left[ \max \{X_i^2,X_j^2\}\ge \max \{\delta _i^2,\delta _j^2\}\right] \le \max \{\eta _i^2,\eta _j^2\}. \end{aligned}$$
Then, \(I\!P[|m_{ij}|\ge \max \{\delta _i^2,\delta _j^2\}] \le \max \{\eta _i^2,\eta _j^2\}\). Hence, for each \(1\le j \le p\),
\(\square \)
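Lemma 1 can be checked numerically in a concrete case. The sketch below is our own illustration, assuming i.i.d. standard Gaussian components with Chebyshev thresholds \(\delta_j=\delta\) and \(\eta_j=1/\delta^2\); it estimates the left-hand sides of (i) and (ii) by Monte Carlo and compares them with the bound:

```python
import numpy as np

rng = np.random.default_rng(1)
p, delta, n_sim = 3, 3.0, 100_000

# Chebyshev: P[|X_j| >= delta] <= 1/delta**2 =: eta_j for X_j ~ N(0, 1)
eta = 1.0 / delta**2

X = rng.standard_normal((n_sim, p))
# Empirical frequency of the event {||X||_1 >= p * max_j delta_j} of (i)
freq_l1 = np.mean(np.abs(X).sum(axis=1) >= p * delta)
# Empirical frequency of the event {||X||_2 >= sqrt(p) * max_j delta_j} of (ii)
freq_l2 = np.mean(np.sqrt((X**2).sum(axis=1)) >= np.sqrt(p) * delta)
```

In this Gaussian example both empirical frequencies fall well below \(\max_j \eta_j = 1/9\), as the lemma requires.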
Lemma 2
Let the \(\eta \)-neighbourhood of \({\varvec{\beta }}^\mathbf{0}\), \(\mathcal{V}_{\eta }({\varvec{\beta }}^\mathbf{0})= \{ {\varvec{\beta }}\in \varGamma ; \Vert {\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0}\Vert _2 \le \eta \}\), with \(\eta \rightarrow 0\). Then, under assumptions (A1)–(A4), for all \(\epsilon >0\), there exists a positive constant \(M>0\), such that, for all \({\varvec{\beta }}\in \mathcal{V}_{\eta }({\varvec{\beta }}^\mathbf{0})\),
Proof of Lemma 2
In the following, for simplicity, we denote the functions \(\mathbf {\overset{.}{f}}(\mathbf{X}_i,{\varvec{\beta }})\) by \(\mathbf {\overset{.}{f}}_i({\varvec{\beta }})\), and \(\mathbf {\overset{..}{f}}(\mathbf{X}_i,{\varvec{\beta }})\) by \(\mathbf {\overset{..}{f}}_i({\varvec{\beta }})\). The Taylor expansion, up to order 2, of \(\mathbf {g}_i({\varvec{\beta }})\) at \({\varvec{\beta }}={\varvec{\beta }}^0\) is
where \(\mathbf {M}_{1i}= \Big (\frac{\partial ^2 f_i({\varvec{\beta }}^{(1)}_{i,jk})}{\partial \beta _j \partial \beta _k} \Big )_{1\le j,k \le d} \), \(\mathbf {M}_{2i}= \Big ( \frac{\partial ^2 f_i( {\varvec{\beta }}^{(2)}_{i,jk})}{\partial \beta _j \partial \beta _ k} \Big )_{1\le j,k \le d} \) and \( {\varvec{\beta }}^{(1)}_{i,jk}={\varvec{\beta }}^\mathbf{0}+u_{i,jk}({\varvec{\beta }}- {\varvec{\beta }}^\mathbf{0})\), \( {\varvec{\beta }}^{(2)}_{i,jk}={\varvec{\beta }}^\mathbf{0}+v_{i,jk}({\varvec{\beta }}- {\varvec{\beta }}^\mathbf{0})\), with \(u_{i,jk}, v_{i,jk} \in [0,1]\).
We note that \( {\varvec{\beta }}^{(1)}_{i,jk}\) and \( {\varvec{\beta }}^{(2)}_{i,jk}\) are random vectors which depend on \(\mathbf{X}_i\).
For \(\mathbf {\overset{.}{f}}_i({\varvec{\beta }}^\mathbf{0}) \varepsilon _i\), because \(\mathbf{X}_i \) and \(\varepsilon _i\) are independent and \(I\!E(\varepsilon _i)=0\), we have that \(I\!E[\mathbf {\overset{.}{f}}_i({\varvec{\beta }}^\mathbf{0}) \varepsilon _i]=0\) and \({\mathbb {V}}\hbox {ar}\,[\mathbf {\overset{.}{f}}_i({\varvec{\beta }}^\mathbf{0}) \varepsilon _i]= \sigma ^2 \mathbf{V}\). For the jth component of \(\mathbf {\overset{.}{f}}_i({\varvec{\beta }}^\mathbf{0})\), by the Bienaymé-Tchebychev inequality, for \(1 \le j \le d\) and all \(\epsilon _1>0\), we have
where \(V_{jj}\) is the jth diagonal term of the matrix \(\mathbf{V}\).
For all \( \epsilon >0\), taking \(\epsilon _1= \sigma \sqrt{6V_{jj}/\epsilon }\) in (53), we obtain \(I\!P\big [ | \frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _j} \varepsilon _i | \ge \sigma \sqrt{6V_{jj}/\epsilon } \big ] \le \epsilon / 6\). Applying Lemma 1 (i), we obtain, for all \( \epsilon >0\)
For the second term of the right-hand side of (52), using assumption (A3), we obtain that for \( 1\le j,k \le d\), for all \(\epsilon >0\) there exists \(\epsilon _2>0\), such that, \(I\!P\big [|\frac{\partial ^2 f_i( {\varvec{\beta }}^{(1)}_{i,jk})}{\partial \beta _j\partial \beta _k} | \ge \epsilon _2\big ] \le \epsilon /6\). By Lemma 1 (iii), we have that for all \(\epsilon >0\),
Using Bienaymé-Tchebychev’s inequality, and assumption (A1), we obtain that for all \(C_1>0\)
Recall that \(\Vert {\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0}\Vert _2<\eta \), with \(\eta \rightarrow 0\). Then, using (55) and (56), we can write that, for all \(\epsilon >0\), there exists \(\epsilon _2> 0\) such that, \(I\!P\big [\Vert \mathbf {M}_{1i}({\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0})\varepsilon _i\Vert _1\ge \epsilon _2 \big ]\le I\!P\big [\Vert \mathbf {M}_{1i}\Vert _1|\varepsilon _i|\,\, \Vert {\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0}\Vert _1\ge \epsilon _2\big ]\le I\!P\big [\Vert \mathbf {M}_{1i} \Vert _1\ge \epsilon _2/C_1\eta \big ]\le I\!P\big [\Vert \mathbf {M}_{1i}\Vert _1 \ge \epsilon _2 \big ]\le \epsilon /6\). Therefore, for all \(\epsilon >0\), there exists \(\epsilon _2>0\) such that
We consider now the term \(\mathbf {\overset{.}{f}}_i({\varvec{\beta }}^\mathbf{0})\mathbf {\overset{.}{f}}_i^t ({\varvec{\beta }}^\mathbf{0}) ({\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0})\) of relation (52). By Markov’s inequality, taking also into account assumption (A4), we obtain for \(1 \le j,l \le d\), for all \(\epsilon _3>0\), that \(I\!P\big [ | \frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _j} \frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _l} | \ge \epsilon _3 \big ] \le I\!E[| \frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _j} \frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _l} |]/\epsilon _3\). We choose, for all \(\epsilon >0, \epsilon _3= 6 I\!E[| \frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _j} \frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _l} | ]/\epsilon \). Then, the last relation becomes \(I\!P\big [ | \frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _j} \frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _l} | \ge 6 I\!E[| \frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _j} \frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _l} | \big ] / \epsilon \big ] \le \epsilon /6\). Using Lemma 1 (iii), we obtain
This relation implies, since \(\Vert {\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0} \Vert _1 \le C_2 \eta \) for some constant \(C_2 >0\), with \(\eta \rightarrow 0 \), that
Then, for all \(\epsilon >0\)
For the term \(\mathbf {M}_{1i} ({\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0}) \mathbf {\overset{.}{f}}^t_{i} ({\varvec{\beta }}^\mathbf{0}) ({\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0})\) of relation (52), using assumption (A3) and Markov's inequality, we obtain for each jth component \(\frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _j}\) of the vector \(\mathbf {\overset{.}{f}}_{i} ({\varvec{\beta }}^\mathbf{0})\), for all \(\epsilon _4>0\), that \(I\!P\big [ |\frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _j}| \ge \epsilon _4 \big ] \le I\!E[|\frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _j}|]/\epsilon _4\). We choose, for all \(\epsilon >0\), \(\epsilon _4=6I\!E[|\frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _j}|]/\epsilon \) and this last relation becomes \(I\!P\big [ |\frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _j}| \ge 6I\!E[|\frac{\partial f_i({\varvec{\beta }}^\mathbf{0})}{\partial \beta _j}|] / \epsilon \big ] \le \epsilon /6\). Applying Lemma 1 (i), for all \(\epsilon >0\) we obtain
Using assumption (A3), and relations (55), (59), we can write that
Therefore, for all \(\epsilon >0\),
Taking into account assumptions (A3), (A4), by relations (55), (59), we can prove in a similar way as for relation (60) that, for all \(\epsilon >0\),
For the last term on the right-hand side of (52), using assumption (A3), we have that, for all \({\varvec{\beta }}\in \mathcal{V}_ \eta ({\varvec{\beta }}^\mathbf{0})\) and all \(\epsilon >0\), there exists \(\epsilon _5>0\) such that \(I\!P[\Vert \mathbf {M}_{1i}\Vert _1 \Vert \mathbf {M}_{2i} \Vert _1 \ge \epsilon _5] \le \epsilon /6\). Using this relation, we show similarly that, for all \(\epsilon >0\), there exists \(\epsilon _5>0\) such that
Choosing
and combining (54), (57), (58), (60), (61) and (62), the lemma follows. \(\square \)
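The quadratic remainder control used in the expansion above can be illustrated numerically. The sketch below is our own example, with the hypothetical scalar model \(f(x,\beta)=e^{\beta x}\) and least-squares score \(g(\beta)=\dot f(\beta)\,(y-f(\beta))\): halving the radius \(\eta\) of the neighbourhood roughly quarters the first-order Taylor remainder, as a second-order expansion requires.

```python
import math

# Hypothetical scalar nonlinear model: f(x, beta) = exp(beta * x),
# least-squares score g(beta) = f'(beta) * (y - f(beta)), f' = df/dbeta.
x, y, beta0 = 1.0, 2.0, 0.5

def g(beta):
    f = math.exp(beta * x)
    return x * f * (y - f)

def dg(beta):
    # g'(beta) = x**2 * exp(beta * x) * (y - 2 * exp(beta * x))
    f = math.exp(beta * x)
    return x * x * f * (y - 2.0 * f)

def remainder(eta):
    """First-order Taylor remainder |g(beta0 + eta) - g(beta0) - g'(beta0) * eta|."""
    return abs(g(beta0 + eta) - g(beta0) - dg(beta0) * eta)

r1, r2 = remainder(1e-2), remainder(5e-3)
ratio = r2 / r1  # close to 1/4: the remainder is O(eta**2)
```

The ratio near \(1/4\) is exactly the quadratic decay exploited when bounding the \(\mathbf M_{1i}\) and \(\mathbf M_{2i}\) terms.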
Lemma 3
Under the same assumptions as in Theorem 2, we have
Proof of Lemma 3
By the Taylor expansion, up to order 3, of \(\mathbf {g}_i({\varvec{\beta }})\) at \({\varvec{\beta }}={\varvec{\beta }}^\mathbf{0}\), we obtain
with \(\mathbf {M}_{2i}\) given by Lemma 2 and
is a vector of dimension \((d \times 1)\), where \( {\varvec{\beta }}^{(3)}_{i,kl} ={\varvec{\beta }}^\mathbf{0}+ w_{i,kl}( {\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0})\), with \( w_{i,kl} \in [0,1]\).
For the first term of the right-hand side of (63), by the central limit theorem, and the fact that \(I\!E[\mathbf {g}_i({\varvec{\beta }}^\mathbf{0})]=0\), we have
For the second term of the right-hand side of (63), by the law of large numbers, the term \( (n\theta _{nk})^{-1} \sum _{i\in I}\mathbf {\overset{..}{f}}_i({\varvec{\beta }}^\mathbf{0}) ({\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0}) \varepsilon _i\) converges almost surely to the expected value of \(\mathbf {\overset{..}{f}}_i({\varvec{\beta }}^\mathbf{0}) ({\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0}) \varepsilon _i\) as \(n \rightarrow \infty \). Furthermore, since \(\varepsilon _i\) is independent of \(\mathbf{X}_i\) and \(I\!E[\varepsilon _i]=0\), we have
For the third term of the right-hand side of (63), by the law of large numbers and assumption (A4), the term \((n\theta _{nk})^{-1} \sum _{i\in I}\mathbf {\overset{.}{f}}_i({\varvec{\beta }}^\mathbf{0})\mathbf {\overset{.}{f}}_i^t ({\varvec{\beta }}^\mathbf{0}) ({\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0})\) converges almost surely to the expected value of \(\mathbf {\overset{.}{f}}_i({\varvec{\beta }}^\mathbf{0})\mathbf {\overset{.}{f}}_i^t ({\varvec{\beta }}^\mathbf{0}) ({\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0})\) as \(n \rightarrow \infty \). On the other hand, since \( (n\theta _{nk})^{-1} \sum _{i \in I} \mathbf {\overset{..}{f}}_i({\varvec{\beta }}^\mathbf{0}) \varepsilon _i \overset{a.s}{\longrightarrow } 0\), we have
For the fourth term of the right-hand side of (63), by the law of large numbers, using assumption (A3) and the relation (59), we can write
which implies
In the same way, using assumption (A3) and relation (59), we obtain, for the fifth term on the right-hand side of (63), that
For the sixth term of the right-hand side of (63), using the assumption (A3), we have
For \(1\le j \le d\), and for any fixed \(i\) such that \(1 \le i \le n \theta _{nk}\), denote by \(M_{ij}\) the jth component of the vector \(\mathbf M _i\), so that
Using assumption (A3), we have, with probability one, \(| M_{ij} |\le C_3 \Vert {\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0}\Vert _2^2\). Applying Lemma 1 (i), we obtain
For the term \((6n\theta _{nk})^ {-1} \sum _{i\in I} \mathbf M _i \varepsilon _i\), using relations (56) and (70), we have \((6n\theta _{nk})^{-1}\Vert \sum _{i\in I}\mathbf M _{i}\varepsilon _i\Vert _1 \le (6n\theta _{nk})^{-1}\sum _{i\in I}\Vert \mathbf M _{i}\Vert _1|\varepsilon _i| \le C_4(6n\theta _{nk})^{-1}n\theta _{nk}\Vert {\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0}\Vert _2^2 = C_4 \Vert {\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0}\Vert _2^2\). Then,
Finally, for the last term of the right-hand side of (63), using assumption (A3) and relation (70), we obtain with probability 1, \((12n\theta _{nk})^{-1}\Vert \sum _{i\in I} \mathbf M _{i} ({\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0})^t \mathbf {M}_{2i} ({\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0})\Vert _1 \le C_5 \Vert {\varvec{\beta }}-{\varvec{\beta }}^\mathbf{0}\Vert _2^2\), which gives,
Then, combining relations (64), (65), (66), (67), (68), (69), (71) and (72), we obtain the lemma. \(\square \)
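The law-of-large-numbers and central-limit arguments of Lemma 3 can be visualised on a simulated sample. The sketch below is our own illustration, again with the hypothetical model \(f(x,\beta)=e^{\beta x}\), design \(X_i \sim U(0,1)\) and errors \(\varepsilon_i \sim N(0,1)\) independent of \(X_i\): the empirical mean of \(\mathbf g_i({\varvec\beta}^\mathbf{0})=\dot f_i({\varvec\beta}^\mathbf{0})\varepsilon_i\) is close to its zero expectation, while its \(\sqrt n\)-rescaling stays of order one.

```python
import numpy as np

rng = np.random.default_rng(2)
n, beta0 = 20_000, 0.5

X = rng.uniform(0.0, 1.0, size=n)   # design, independent of the errors
eps = rng.standard_normal(n)        # centred errors, E[eps_i] = 0

# g_i(beta0) = f'(X_i, beta0) * eps_i with f(x, beta) = exp(beta * x),
# so f'(x, beta) = x * exp(beta * x); E[g_i(beta0)] = 0 by independence.
g = X * np.exp(beta0 * X) * eps

mean_g = g.mean()                   # LLN: close to 0
clt_scale = np.sqrt(n) * mean_g     # CLT: O_P(1) magnitude
```

With the margins chosen far beyond the Monte Carlo standard deviation, both statements hold comfortably on this seeded sample.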
Lemma 4
Under the same assumptions as in Theorem 3, for all \(\varrho >0\), there exist two positive constants \(B=B(\varrho )\), \(T=T(\varrho )\) such that
Proof of Lemma 4
The proof of this lemma is similar to that of Lemma 1.2.2 of Csörgö and Horváth (1997). \(\square \)
In order to prove Lemma 5, we consider
Recall that \(\mathbf{V}\equiv I\!E[ \mathbf {\overset{.}{f}}(\mathbf{X}_i,{\varvec{\beta }}^\mathbf{0}) \mathbf {\overset{.}{f}}^t(\mathbf{X}_i,{\varvec{\beta }}^\mathbf{0})]\), for all \(i=1,\ldots ,n\).
The results of Lemma 5 are similar to those of Theorem 1.1.1 of Csörgö and Horváth (1997).
Lemma 5
Suppose that the assumptions (A1)–(A4) hold. Under the null hypothesis \(H_0\), for all \( 0 \le \alpha < 1/2\) we have
- (i) \(n^{\alpha } \max \limits _{\theta _{nk} \in \varTheta _{nk}}[\theta _{nk}(1-\theta _{nk})]^{\alpha } | Z_{nk}(\theta _{nk}, \hat{{\varvec{\lambda }}}(\theta _{nk}), \hat{{\varvec{\beta }}}(\theta _{nk}))-R_k|= O_{I\!P}(1)\).
- (ii) \(\max \limits _{\theta _{nk} \in \varTheta _{nk}}[\theta _{nk}(1-\theta _{nk})]| Z_{nk}(\theta _{nk}, \hat{{\varvec{\lambda }}}(\theta _{nk}), \hat{{\varvec{\beta }}}(\theta _{nk}))-R_k| =O_{I\!P}(n^{-1/2}(\log \log n)^{3/2})\).
Proof of Lemma 5
For the score function \({\varvec{\phi }}_{1n}\) of relation (13), the two terms of the right-hand side are replaced by their decomposition obtained by the relations (22) and (25). On the other hand, we have \({\varvec{\phi }}_{1n}(\theta _{nk},\hat{{\varvec{\lambda }}}(\theta _{nk}),\hat{{\varvec{\beta }}}(\theta _{nk}))= \mathbf 0 _d \). Then, we can write
Hence,
with the matrices \(\mathbf {D}^0_{1n}\) and \(\mathbf {D}^0_{2n}\) given by relation (24).
On the other hand, by the law of large numbers, we have \(-\mathbf{V}_{1n}^0 \overset{a.s}{\longrightarrow } \mathbf{V}\) and \(-\mathbf{V}_{2n}^0 \overset{a.s}{\longrightarrow } \mathbf{V}\). Then, \(\mathbf{V}_{1n}^0 (\mathbf{V}_{2n}^0)^{-1}\overset{a.s}{\longrightarrow } I_d\). Again by the law of large numbers, \(\mathbf {D}_{1n}^0 \) and \(\mathbf {D}_{2n}^0\) converge almost surely to \(\sigma ^2 \mathbf{V}\) as \(n\rightarrow \infty \).
By Theorem 2, we have \(\hat{{\varvec{\lambda }}}(\theta _{nk})=\theta _{nk} O_{I\!P}((n\theta _{nk})^{-1/2})\). Then, we obtain
The Taylor expansion of the statistic \(Z_{nk}(\theta _{nk}, \hat{{\varvec{\lambda }}}(\theta _{nk}),\hat{{\varvec{\beta }}}(\theta _{nk}))\), specified by relation (12), in the neighbourhood of \(({\varvec{\lambda }},{\varvec{\beta }})=(\mathbf 0 _d,{\varvec{\beta }}^\mathbf{0})\), up to order 2, can be written
where
where, for \(1 \le j \le d\), \({\hat{\beta }}_j\) is the jth component of \({\hat{{\varvec{\beta }}}} (\theta _{nk})\), and \({\hat{\lambda }}_j\) is the jth component of \({\hat{{\varvec{\lambda }}}} (\theta _{nk})\). In the expressions of \(S_1\), \(S_2\), \(S_3\), \(S_4\) we also have, for all \(1 \le j,k,l \le d\), \({\varvec{\lambda }}^{(a)}_{jkl}= u^{(a)}_{jkl} (\hat{{\varvec{\beta }}}(\theta _{nk})-{\varvec{\beta }}^\mathbf{0})\), and \({\varvec{\beta }}^{(a)}_{jkl}= {\varvec{\beta }}^\mathbf{0}+ v^{(a)}_{jkl} (\hat{{\varvec{\beta }}}(\theta _{nk})-{\varvec{\beta }}^\mathbf{0})\), with \(u^{(a)}_{jkl},v^{(a)}_{jkl} \in [0,1]\) and \(a \in \{1,2,3,4\}\).
We note that the derivative \(\partial ( \mathbf{V}_{1n}({\varvec{\beta }})(\mathbf{V}_{2n}({\varvec{\beta }}))^{-1}) / \partial {\varvec{\beta }}\) is taken term by term.
Now, we replace \(\hat{{\varvec{\lambda }}}(\theta _{nk})\) in the relation (74) by the value obtained in (73). For the first term of (74), using notations given by relation (30), and the fact that \(\mathbf{V}_{1n}^0 (\mathbf{V}_{2n}^0)^{-1}\overset{a.s}{\longrightarrow } I_d\), as \(n\rightarrow \infty \), we find that this term is equal to \(2n \sigma ^{-2}\theta _{nk}(1-\theta _{nk}) (\mathbf W _{1n}^0-\mathbf W _{2n}^0)^t\mathbf{V}^{-1}(\mathbf W _{1n}^0-\mathbf W _{2n}^0)+o_{I\!P}(\Vert {\hat{{\varvec{\beta }}}}(\theta _{nk})-{\varvec{\beta }}^\mathbf{0}\Vert _2)\).
Similarly, for the second term of (74), using notations given by (24), and the fact that \(\mathbf {D}_{1n}^0 \) and \(\mathbf {D}_{2n}^0\) converge to \(\sigma ^2 \mathbf{V}\), as \(n\rightarrow \infty \), we obtain that this term is equal to \(n\sigma ^{-2}\theta _{nk}(1-\theta _{nk})(\mathbf W _{1n}^0-\mathbf W _{2n}^0)^t\mathbf{V}^{-1}(\mathbf W _{1n}^0-\mathbf W _{2n}^0)+o_{I\!P}(\Vert {\hat{{\varvec{\beta }}}}(\theta _{nk})-{\varvec{\beta }}^\mathbf{0}\Vert _2)\).
For the third term of (74), recall that \(\mathbf{V}_{1n}^0=(n\theta _{nk})^{-1} \sum _{i\in I}\mathbf {\overset{.}{g}}_i({\varvec{\beta }}^\mathbf{0})\) and \(\mathbf{V}_{2n}^0=(n(1-\theta _{nk}))^{-1} \sum _{j\in J}\mathbf {\overset{.}{g}}_j({\varvec{\beta }}^\mathbf{0})\). By the law of large numbers, \(\mathbf{V}_{1n}^0\) and \(\mathbf{V}_{2n}^0\) converge almost surely to \(-\mathbf{V}\) as \(n\rightarrow \infty \), and \(\mathbf{V}_{1n}^0(\mathbf{V}_{2n}^0)^{-1}\overset{a.s}{\longrightarrow }I_d\), which implies that the third term of (74) converges almost surely to zero as \(n\rightarrow \infty \).
By the central limit theorem, we have that \((n(1-\theta _{nk}))^{-1}\sum _{j \in J}\mathbf {g}_j({\varvec{\beta }}^\mathbf{0}) = O_{I\!P}((n(1-\theta _{nk}))^{-1/2})\). Then, the fourth term of (74) is \(o_{I\!P}\big (n \sigma ^{-2}\theta _{nk}(1-\theta _{nk})(\mathbf W _{1n}^0-\mathbf W _{2n}^0)^t\mathbf{V}^{-1}(\mathbf W _{1n}^0-\mathbf W _{2n}^0)\big )\).
For the last term of (74), using assumptions (A2)–(A4) and by elementary calculations, we prove that this term is \(o_{I\!P}(\Vert {\hat{{\varvec{\beta }}}}(\theta _{nk}) -{\varvec{\beta }}^\mathbf{0}\Vert _2)+o_{I\!P}(\Vert {\hat{{\varvec{\lambda }}}}(\theta _{nk}) \Vert _2)+o_{I\!P}(\Vert {\hat{{\varvec{\lambda }}}}(\theta _{nk})\Vert _2 \Vert {\hat{{\varvec{\beta }}}}(\theta _{nk})-{\varvec{\beta }}^\mathbf{0}\Vert _2)\). Combining the obtained results, we obtain
This last relation, together with Lemma 4 imply Lemma 5. \(\square \)
Ciuperca, G., Salloum, Z. Empirical likelihood test in a posteriori change-point nonlinear model. Metrika 78, 919–952 (2015). https://doi.org/10.1007/s00184-015-0534-z