Testing nonstationary and absolutely regular nonlinear time series models

Statistical Inference for Stochastic Processes

Abstract

We study some general methods for testing the goodness-of-fit of a general nonstationary and absolutely regular nonlinear time series model. These testing methods are based on some marked empirical processes that we show to converge in distribution to a zero-mean Gaussian process with respect to the Skorohod topology. We investigate the behavior of this process under fixed alternatives and under a sequence of local alternatives. Our results are applied to testing a general class of nonlinear semiparametric time series models. A simulation experiment shows that the Cramér–von Mises test studied behaves well on the examples considered.


References

  • Athreya KB, Roy V (2016) General Glivenko–Cantelli theorems. Stat 5:306–311
  • Benghabrit Y, Hallin M (1996) Locally asymptotically optimal tests for autoregressive against bilinear serial dependence. Stat Sin 6:147–169
  • Billingsley P (1968) Convergence of probability measures. Wiley, New York
  • Carbon M, Francq C, Tran L (2007) Kernel regression estimation for random fields. J Stat Plan Inference 137:778–798
  • Deheuvels P, Martynov GV (1995) Cramér–von Mises-type tests with applications to tests of independence for multivariate extreme-value distributions. Commun Stat Theory Methods 25:871–908
  • Diebolt J, Zuber J (1999) Goodness-of-fit tests for nonlinear heteroscedastic regression models. Stat Probab Lett 42:53–60
  • Diebolt J, Zuber J (2001) On testing the goodness-of-fit of nonlinear heteroscedastic regression models. Commun Stat Simul Comput 30:195–216
  • Doukhan P, Portal F (1987) Principe d’invariance faible pour la fonction de répartition empirique dans un cadre multidimensionnel et mélangeant. Probab Math Stat 8:117–132
  • Doukhan P, Massart P, Rio E (1995) Invariance principle for absolutely regular empirical processes. Ann Inst H Poincaré Sect B 31:393–427
  • Gao J, Tjøstheim D, Yin J (2013) Estimation in threshold autoregressive models with a stationary and a unit root regime. J Econom 172:1–13
  • Harel M, Elharfaoui E (2010) The marked empirical process to test nonlinear time series against a large class of alternatives when the random vectors are nonstationary and absolutely regular. Statistics 46:1–17
  • Harel M, Puri ML (1989) Limiting behavior of U-statistics, V-statistics and one-sample rank order statistics for nonstationary absolutely regular processes. J Multivar Anal 30:181–204
  • Harel M, Puri ML (1990) The space \(\widetilde{d}_k\) and weak convergence for the rectangle-indexed processes under mixing. Adv Appl Math 11:443–474
  • Imhof JP (1961) Computing the distribution of quadratic forms in normal variables. Biometrika 48:419–426
  • Koul H, Schick A (1997) Efficient estimation in nonlinear autoregressive time-series models. Bernoulli 3:247–277
  • Koul H, Stute W (1999) Nonparametric model checks for time series. Ann Stat 27:204–236
  • Le Cam L (1986) Asymptotic methods in statistical decision theory. Springer, Berlin
  • Liebscher E (2003) Strong convergence of estimators in nonlinear autoregressive models. J Multivar Anal 84:247–261
  • Mokkadem A (1990) Propriétés de mélange des processus autorégressifs polynomiaux. Ann Inst H Poincaré Probab Stat 26:219–260
  • Ngatchou-Wandji J (2002) Weak convergence of some marked empirical processes: application to testing heteroscedasticity. J Nonparametr Stat 14:325–339
  • Ngatchou-Wandji J (2008) Estimation in a class of nonlinear heteroscedastic time series models. Electron J Stat 2:40–62
  • Ngatchou-Wandji J, Harel M (2013) A Cramér–von Mises test for symmetry of the error distribution in asymptotically stationary stochastic models. Stat Inference Stoch Process 16:207–236
  • Ngatchou-Wandji J, Laïb N (2008) Local power of a Cramér–von Mises type test for parametric autoregressive models of order one. Comput Math Appl 56:918–929
  • Robinson PM (1989) Hypothesis testing in semiparametric and nonparametric models for econometric time series. Rev Econ Stud 56:511–534
  • Sen PK (1981) Sequential nonparametrics: invariance principles and statistical inference. Wiley, New York
  • Stute W (1984) Asymptotic normality of nearest neighbor regression function estimates. Ann Stat 12:917–926
  • Stute W (1986) On almost sure convergence of conditional empirical distribution functions. Ann Probab 14:891–901
  • Stute W (1997) Nonparametric model checks for regression. Ann Stat 25:613–641
  • Stute W, Presedo Quindimil M, González Manteiga W, Koul HL (2006) Model checks of higher order time series. Stat Probab Lett 76:1385–1396
  • Yoshihara K (1988) Asymptotic normality of nearest neighbor regression function estimates based on some dependent observations. Yokohama Math J 36:55–68
  • Yoshihara K (1990) Conditional empirical processes defined by \(\varphi \)-mixing sequences. Comput Math Appl 19:149–158


Author information


Correspondence to Joseph Ngatchou-Wandji.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 Mathematical tools

Lemma 1

(Harel and Puri 1989) Let \(\{X_{ni}^*\}\) be a sequence of zero-mean absolutely regular random variables (rv’s) with rates satisfying

$$\begin{aligned} \sum _{n\ge 1}[\beta (n)]^{\delta '/(2+\delta ')}<\infty \ \text{ for } \text{ some } \delta '>0. \end{aligned}$$
(29)

Suppose that for any K, there exists a sequence \(\{Y_{ni}^{K}\}\) of rv’s satisfying (29) such that

$$\begin{aligned} \sup _{n\in \mathbb {N}}\max _{0\le i\le n}|Y_{ni}^K|\le B_K<\infty , \end{aligned}$$
(30)

where \(B_K\) is some positive constant

$$\begin{aligned} \sup _{n\in \mathbb {N}}\max _{0\le i\le n}E \left( |X_{ni}^*-Y_{ni}^K|^{2+\delta '} \right)\rightarrow & {} 0 \ \text{ as } K\rightarrow \infty \end{aligned}$$
(31)
$$\begin{aligned} \frac{1}{n}E\left[ \left( \sum _{i=1}^nX_{ni}^*\right) ^2 \right]\rightarrow & {} c \ \text{ as } n\rightarrow \infty , \end{aligned}$$
(32)

where c is some positive constant

$$\begin{aligned} \frac{1}{n}E\left[ \left( \sum _{i=1}^n\left( Y_{ni}^K-E(Y_{ni}^K)\right) \right) ^2 \right] \rightarrow c_K<\infty \ \text{ as } n\rightarrow \infty , \end{aligned}$$
(33)

where \(c_K\) is some constant \(>0\)

$$\begin{aligned} c_K\rightarrow c \ \text{ as } K\rightarrow \infty . \end{aligned}$$
(34)

Then

$$\begin{aligned} n^{-1/2}\sum _{i=1}^n X_{ni}^* \end{aligned}$$

converges in distribution to the normal distribution with mean 0 and variance c.
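
The following sketch is not part of the original argument; it only illustrates Lemma 1 numerically. Assuming rows of a centered Gaussian AR(1) (absolutely regular with a geometric rate) and purely illustrative tuning constants, the normalized row sums should have variance close to the long-run variance c.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalized_row_sum(n, phi=0.5):
    """One row {X_ni} of a centered Gaussian AR(1) and its normalized sum
    n^{-1/2} * sum_i X_ni, as in the conclusion of Lemma 1."""
    x = np.empty(n)
    x[0] = rng.normal()
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x.sum() / np.sqrt(n)

n, reps, phi = 500, 2000, 0.5
sums = np.array([normalized_row_sum(n, phi) for _ in range(reps)])

# Long-run variance of this AR(1): c = sigma_eps^2 / (1 - phi)^2 = 4 here.
print("empirical variance:", round(float(sums.var()), 2),
      " expected c:", 1.0 / (1.0 - phi) ** 2)
```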

Lemma 2

(Ngatchou-Wandji and Harel 2013) Let \(\{V_i:i \in \mathbb {N}\}\) be a sequence of d-dimensional centered absolutely regular and not necessarily stationary random vectors with rates satisfying

$$\begin{aligned} \sum _{i \ge 0}(i+1)^{r/2}[\beta (i)]^{\delta _0 \over \delta _0+r}< & {} \infty \end{aligned}$$
(35)
$$\begin{aligned} \sup _{i \ge 0}E\Big (||V_i||_E^{r+\delta _0} \Big )< & {} \infty \end{aligned}$$
(36)

for some \(r>2\) and \(\delta _0>0\).

Then

$$\begin{aligned} n^{-1}\sum _{i=1}^nV_i \rightarrow 0 \text{ with } \text{ probability } \text{1, } \text{ as } n\rightarrow \infty . \end{aligned}$$

Proposition 2

(Ngatchou-Wandji and Harel 2013) Let \(\{W_i:i \in \mathbb {N} \}\) be a sequence of random vectors and \(\{\widetilde{W}_i:i \in \mathbb {N}\}\) a sequence of stationary random vectors. Denote \(H_{i,j}\) the distribution function of \((W_i,W_j)\) and assume that for all \(l'>1\), there exists a distribution function \(\widetilde{H}_{l'}\) such that \((\widetilde{W}_i, \widetilde{W}_j)\) has distribution function \(\widetilde{H}_{j-i}, \ i < j\), and for some \(\varrho _0 \in (0,1)\),

$$\begin{aligned} ||H_{i,j}-\widetilde{H}_{j-i} ||_{TV} ={\mathcal {O}}(\varrho _0^i), \ i<j, i, j \in \mathbb {N}. \end{aligned}$$
(37)

For any real-valued measurable function h such that for some \(\delta ' >0\)

$$\begin{aligned} \sup _{i<j, i,j \in \mathbb {N}}\int |h|^{2+ \delta '} d H_{i,j}< & {} \infty \end{aligned}$$
(38)
$$\begin{aligned} \sup _{i<j, i,j \in \mathbb {N}} \int |h|^{2+ \delta '} d \widetilde{H}_{j-i}< & {} \infty , \end{aligned}$$
(39)

one has

$$\begin{aligned} \left| \int h dH_{i,j}- \int h d \widetilde{H}_{j-i} \right| ={\mathcal {O}}(\varrho ^i), \ i<j, i, j \in \mathbb {N}, \end{aligned}$$
(40)

for some \(\varrho \in (0,1)\), \(\varrho > \varrho _0\).

A version of the above proposition for sequences of single vectors is given by the following result.

Proposition 3

Let \(\{Y_i : i \in \mathbb {N} \}\) be a sequence of random vectors and \(\{\widetilde{Y}_i : i \in \mathbb {N}\}\) a sequence of stationary random vectors. Denote \(K_i\) the distribution function of \(Y_i\) and assume there exists a distribution function K such that \(\widetilde{Y}_i\) has distribution function K, and for some \(\epsilon _0 \in (0,1)\),

$$\begin{aligned} ||K_i- K ||_{TV} ={\mathcal {O}}(\epsilon _0^i), \ i \in \mathbb {N}. \end{aligned}$$

For any real-valued measurable function \(\hbar \) such that for some \(\delta ^{\prime \prime } >0\)

$$\begin{aligned} \sup _{i\in \mathbb {N}}\int |\hbar |^{2+ \delta ^{\prime \prime }} d K_i< & {} \infty \\ \int |\hbar |^{2+ \delta ^{\prime \prime }} d K< & {} \infty , \end{aligned}$$

one has

$$\begin{aligned} \left| \int \hbar dK_i- \int \hbar d K \right| ={\mathcal {O}}(\epsilon ^i), \ i\in \mathbb {N}, \end{aligned}$$

for some \(\epsilon \in (0,1)\), \(\epsilon > \epsilon _0\).
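
As a numerical illustration of the geometric total-variation convergence assumed in Propositions 2 and 3 (a hedged sketch, not taken from the paper), consider a Gaussian AR(1) started at zero: the law \(K_i\) of \(X_i\) and the stationary law \(K\) are explicit normals, and \(\Vert K_i-K\Vert _{TV}\) decays geometrically in i. The coefficient and grid below are illustrative choices.

```python
import numpy as np

phi = 0.6
u = np.linspace(-10.0, 10.0, 20001)          # integration grid
du = u[1] - u[0]

def normal_pdf(x, var):
    return np.exp(-x ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

var_stat = 1.0 / (1.0 - phi ** 2)            # variance of the stationary law K

for i in (1, 5, 10, 20):
    var_i = (1.0 - phi ** (2 * i)) / (1.0 - phi ** 2)   # Var(X_i) when X_0 = 0
    tv = 0.5 * np.sum(np.abs(normal_pdf(u, var_i) - normal_pdf(u, var_stat))) * du
    print("i =", i, " ||K_i - K||_TV ~", float(tv))
```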

Theorem 7

(Athreya and Roy 2016) Let F be a nonnegative, nondecreasing function on \(\mathbb {R}\). Let \(A\equiv \{a: a \in \mathbb {R}, \ F(a^+)-F(a^-)>0 \}\) be the set of discontinuity points of F; A is at most countable and may be empty. Let \(F_n, n\ge 1\), be a sequence of nonnegative, nondecreasing functions on \(\mathbb {R}\). Assume that F and the \(F_n\)’s satisfy the following conditions:

  1. (i)

    \(F(\infty ) \equiv \lim _{x \uparrow \infty }F(x)< \infty \), \( F_n(\infty ) \equiv \lim _{x \uparrow \infty }F_n(x)< \infty \), \(F(-\infty ) \equiv \lim _{x \downarrow -\infty }F(x)=0\), \( F_n(-\infty ) \equiv \lim _{x \downarrow -\infty }F_n(x)=0\),

  2. (ii)

    \(\lim _{n \rightarrow \infty }F_n(\infty )=F(\infty )\),

  3. (iii)

    Letting \(A= \{a_j: j=1,2, \ldots \}\), let for all j, \(p_{nj}\equiv F_n(a_j^+)-F_n(a_j^-) \rightarrow p_j\equiv F(a_j^+)-F(a_j^-)\) as \(n \rightarrow \infty \),

  4. (iv)

    \(\sum _{a_j \in A} p_{nj} \rightarrow \sum _{a_j \in A} p_j \) and

  5. (v)

    \(F_n(x) \rightarrow F(x)\) for all \(x \in D\) where D is a dense set in \(\mathbb {R}\).

    Then

    $$\begin{aligned} \sup _{x \in \mathbb {R}}|F_n(x)-F(x)| \rightarrow 0 \ \text{ as } \ n \rightarrow \infty . \end{aligned}$$
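
A minimal numerical sketch (with an illustrative choice of \(F_n\) and \(F\), not from Athreya and Roy 2016) of the uniform convergence asserted by Theorem 7: the functions below satisfy conditions (i)–(v), and the sup-distance over a fine grid shrinks with n.

```python
import numpy as np

x = np.linspace(-8.0, 8.0, 4001)     # a fine grid standing in for a dense set D

def F(t):
    # continuous, nonnegative, nondecreasing limit with F(-inf)=0 and F(inf)=2
    return 2.0 / (1.0 + np.exp(-t))

def F_n(t, n):
    # nonnegative, nondecreasing, F_n(inf) -> F(inf), pointwise convergent to F
    return (1.0 + 1.0 / n) * 2.0 / (1.0 + np.exp(-(t + 1.0 / n)))

for n in (5, 50, 500, 5000):
    print(n, float(np.max(np.abs(F_n(x, n) - F(x)))))
```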

1.2 Proofs of the results

Proof of Proposition 1

For all \(i <j\), denote by \(f_{i,j}\) and \(\widetilde{f}_{j-i}\) respectively the density functions of \(F_{i,j}\) and \(\widetilde{F}_{j-i}\). Using (9) and assumption (11), one can write, for all \(i\in \mathbb {N}\):

$$\begin{aligned} ||F_i-F||_{TV}= & {} {1 \over 2} \int _{\mathbb {R}} \left| \int _{\mathbb {R}} \left[ f_{i,j}(x,y)-\widetilde{f}_{j-i}(x,y) \right] dx \right| dy\\\le & {} {1 \over 2} \int _{\mathbb {R}} \int _{\mathbb {R}}\left| f_{i,j}(x,y)-\widetilde{f}_{j-i}(x,y) \right| dxdy \\= & {} ||F_{i,j}-\widetilde{F}_{j-i}||_{TV}\\= & {} O(\rho ^i). \end{aligned}$$

Now, for all \(n \ge 1\), using the triangle inequality for the total variation norm and the above bound, one easily obtains:

$$\begin{aligned} ||\overline{F}_n -F ||_{TV}\le & {} n^{-1}Cst \rho {1-\rho ^n \over 1-\rho }. \end{aligned}$$

For the third result, for all \(i \in \mathbb {N}\), one has for all \(x \in \mathbb {R}\),

$$\begin{aligned} |F_i(x)-F(x)|= & {} \left| \int _{-\infty }^x \int _{\mathbb {R}} \left[ f_{i,j}(u,v)-\widetilde{f}_{j-i}(u,v) \right] dudv \right| \\\le & {} \int _{\mathbb {R}} \int _{\mathbb {R}}\left| f_{i,j}(u,v)-\widetilde{f}_{j-i}(u,v) \right| dudv\\= & {} 2||F_{i,j}-\widetilde{F}_{j-i}||_{TV}\\= & {} O(\rho ^i). \end{aligned}$$

Thus, for all \(i \in \mathbb {N}\),

$$\begin{aligned} \sup _{x \in \mathbb {R}}|F_i(x)-F(x)| = O(\rho ^i). \end{aligned}$$

The remaining results can be handled by the same reasoning. \(\square \)

Proof of Theorem 1

We must show that \(R_{n,\psi }\) is tight and that its finite-dimensional distributions converge to those of a zero-mean Gaussian process.

We start with the finite-dimensional distributions. From the Cramér–Wold device (see, e.g., Billingsley 1968), we must show that for any \(r>1\), any \(\lambda _{i}\in \mathbb {R}\) and \(x_{1}<\cdots <x_{r}\), \(\sum _{i=1}^{r}\lambda _{i}R_{n,\psi }(x_{i})\) converges in law to a normal distribution. Without loss of generality, we take \(r=2\) and \(x_{1}<x_{2}.\) Then, from Lemma 1 (see “Appendix”), we need to verify the assertions (29)–(34).

Assertion (29) readily follows from (10). To establish (32), we need to prove that

$$\begin{aligned} \lim _{n\rightarrow \infty }E(\lambda _{1}R_{n,\psi }(x_{1})+\lambda _{2}R_{n,\psi }(x_{2}))^{2} = \lambda _{1}^{2}I_{1}+\lambda _{2}^{2}I_{2}+2\lambda _{1}\lambda _{2}I_{3}, \end{aligned}$$

where for \(j=1,2\),

$$\begin{aligned} I_{j}= & {} \int _{-\infty }^{x_{j}}\text{ Var }(\psi (\widetilde{X}_{1}-m_{\psi }(\widetilde{X}_{0}))\mid \widetilde{X}_{0}=u)dF(u) +2\sum _{k=1}^{\infty }\int _{-\infty }^{x_{j}}\int _{-\infty }^{x_{j}}\\&\text{ Cov }(\psi (\widetilde{X}_{1}-m_{\psi }(\widetilde{X}_{0})),\psi (\widetilde{X}_{1+k}-m_{\psi }(\widetilde{X}_{k}))\mid \widetilde{X}_{0}=u,\widetilde{X}_{k}=v)d\widetilde{F}_{k}(u,v) \end{aligned}$$

and

$$\begin{aligned} I_{3}= & {} \int _{-\infty }^{x_{1}\vee x_{2}}\text{ Var }(\psi (\widetilde{X}_{1}-m_{\psi }(\widetilde{X}_{0}))\mid \widetilde{X}_{0}=u)dF(u) +2\sum _{k=1}^{\infty } \int _{-\infty }^{x_{1}}\int _{-\infty }^{x_{2}} \\&\text{ Cov }(\psi (\widetilde{X}_{1}-m_{\psi }(\widetilde{X}_{0})),\psi (\widetilde{X}_{1+k}-m_{\psi }(\widetilde{X}_{k}))\mid \widetilde{X}_{0}=u, \widetilde{X}_{k}=v)d\widetilde{F}_{k}(u,v). \end{aligned}$$

Expanding \(E(\lambda _{1}R_{n,\psi }(x_{1}) +\lambda _{2}R_{n,\psi }(x_{2}))^{2}\), we see that we have to study the limits \(\lim _{n\rightarrow \infty }E(R_{n,\psi }(x_{1}))^{2}\), \(\lim _{n\rightarrow \infty }E(R_{n,\psi }(x_{2}))^{2}\) and \(\lim _{n\rightarrow \infty }E(R_{n,\psi }(x_{1})R_{n,\psi }(x_{2}))\). We only study the first one, as the others can be handled along the same lines. We can write

$$\begin{aligned} E(R_{n,\psi }(x_{1}))^{2}= & {} E(n^{-\frac{1}{2}}\sum _{i=1}^{n}\psi (X_i-m_{\psi }(X_{i-1}))\mathbf {1}_{\{X_{i-1}\le x_{1}\}})^{2}\\= & {} n^{-1}\sum _{i=1}^{n}\sum _{j=1}^{n}E \Big [\psi (X_i-m_{\psi }(X_{i-1})) \psi (X_j-m_{\psi }(X_{j-1}))\\&\times \,{\mathbb {1} }_{\{X_{i-1}\le x_1\} \cap \{ X_{j-1}\le x_1\}} \Big ]\\= & {} n^{-1} \sum _{i=1}^{n}\sum _{j=1}^{n}\phi (i,j). \end{aligned}$$

For \(i \ge 1\) and \(k, l=1,2\), define

$$\begin{aligned} \mu _{k,l} (i)=E \left[ \psi (\widetilde{X}_i-m_{\psi }(\widetilde{X}_{i-1}))\psi (\widetilde{X}_1-m_{\psi }(\widetilde{X}_0)){\mathbb {1} }_{\{\widetilde{X}_{i-1}\le x_k\} \cap \{\widetilde{X}_0\le x_l\}} \right] . \end{aligned}$$

Rewrite \(E(R_{n,\psi }(x_{1}))^{2} \) as

$$\begin{aligned} E(R_{n,\psi }(x_{1}))^{2} = n^{-1}\sum _{i=1}^{n}\phi (i,i) +2n^{-1}\sum _{i=1}^{n}\sum _{j=1}^{n-i}\phi (i,i+j). \end{aligned}$$

It results that

$$\begin{aligned}&\left| E(R_{n,\psi }(x_{1}))^{2}-\mu _{11} (1)-2\sum _{i=2}^{\infty }\mu _{11} (i)\right| \\&\quad = \left| n^{-1}\sum _{i=1}^{n}\phi (i,i)+2n^{-1}\sum _{i=1}^{n}\sum _{j=1}^{n-i}\phi (i,i+j)-\mu _{11}(1)-2\sum _{i=2}^{\infty }\mu _{11} (i)\right| \\&\quad \le n^{-1}\sum _{i=1}^{n}\left| \phi (i,i)-\mu _{11} (1)\right| \\&\qquad +\,2n^{-1}\left| \sum _{i=1}^{n}\sum _{j=1}^{n-i}\phi (i,i+j)-\sum _{i=1}^{n}(n-i)\mu _{11} (i)\right| \\&\qquad +\,2\sum _{i=n+1}^{\infty }\left| \mu _{11} (i)\right| +2n^{-1}\sum _{i=1}^{n}i\left| \mu _{11} (i)\right| \\&\quad = \mathcal {L}_{1,n}+\mathcal {L}_{2,n}+\mathcal {L}_{3,n}+\mathcal {L}_{4,n}. \end{aligned}$$

Thus, to prove (32), it suffices to show that each of the sequences \(\mathcal {L}_{1,n}, \ \mathcal {L}_{2,n}, \ \mathcal {L}_{3,n}\) and \(\mathcal {L}_{4,n}\) tends to 0 as n tends to \(\infty \).

We start with \(\mathcal {L}_{1,n}\). Recall from (14) that for all \(i \in \mathbb {N}^*\) and \(x \in \mathbb {R}\),

$$\begin{aligned} \xi _i(x)=\psi (X_i-m_\psi (X_{i-1})){\mathbb {1} }_{\{X_{i-1}\le x\}}. \end{aligned}$$

For all \(i \ge 1\) and \(k=1,2\), let

$$\begin{aligned} \xi _{i,k}=\xi _i(x_k). \end{aligned}$$
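
For concreteness, here is a hedged sketch of how the marked empirical process \(R_{n,\psi }\) is computed from the marks \(\xi _i(x)\) above; the linear choice \(\psi (u)=u\) and the AR(1) conditional mean used below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def marked_empirical_process(X, m_psi, psi, grid):
    """R_{n,psi}(x) = n^{-1/2} * sum_{i=1}^n psi(X_i - m_psi(X_{i-1})) * 1{X_{i-1} <= x}."""
    X = np.asarray(X, dtype=float)
    n = len(X) - 1
    marks = psi(X[1:] - m_psi(X[:-1]))      # the xi_i's evaluated at the data
    lagged = X[:-1]
    return np.array([(marks * (lagged <= x)).sum() for x in grid]) / np.sqrt(n)

# illustrative AR(1) data generated under the null, with the true conditional mean
rng = np.random.default_rng(1)
theta0 = 0.4
X = np.zeros(1001)
for i in range(1, len(X)):
    X[i] = theta0 * X[i - 1] + rng.normal()

grid = np.linspace(-4.0, 4.0, 81)
R_n = marked_empirical_process(X, m_psi=lambda u: theta0 * u, psi=lambda u: u, grid=grid)
print(R_n[::20])
```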

From the well-known moment inequality for absolutely regular processes (see, e.g., Doukhan and Portal 1987), we have

$$\begin{aligned} |\mu _{11}(i)|\le 12\{\beta (i)\}^{\gamma _0/(2+\gamma _0)} \{E|\xi _{1,1}|^{2+\gamma _0}\}^{2/(2+\gamma _0)} \end{aligned}$$

and

$$\begin{aligned} \sup _{i\ge 1}|E(\xi _{i,1}\xi _{i+j,1})|\le 12[\beta (j)]^{\gamma _0/(2+\gamma _0)}\{E| \xi _{i,1}|^{2+\gamma _0}\}^{1/(2+\gamma _0)} \{E|\xi _{i+j,1}|^{2+\gamma _0}\}^{1/(2+\gamma _0)}. \end{aligned}$$

It follows from (12) that \(E|\xi _{i,1}|^{2+\gamma _0}\) is bounded by some finite positive constant Cst. This implies that for another finite positive constant Cst,

$$\begin{aligned} |\mu _{11}(i)|\le Cst\{\beta (i)\}^{\gamma _0/(2+\gamma _0)} \end{aligned}$$
(41)

and for all j,

$$\begin{aligned} \sup _{i\ge 1}|\phi (i,i+j)|\le Cst\{\beta (j)\}^{\gamma _0/(2+\gamma _0)}. \end{aligned}$$
(42)

Denoting by [x] the integer part of a real number x, one has

$$\begin{aligned} \mathcal {L}_{1,n}= & {} \frac{1}{n}\sum _{i=1}^{[\sqrt{n}]}|\phi (i,i)-\mu _{11}(1)|+\frac{1}{n}\sum _{i=[\sqrt{n}]+1}^{n}|\phi (i,i)-\mu _{11}(1)|\\= & {} \mathcal {L}_{1,n}'+\mathcal {L}_{1,n}''. \end{aligned}$$

One deduces from (41) and (42) that

$$\begin{aligned} \mathcal {L}_{1,n}'\le \frac{24}{n}\sqrt{n}\, Cst\{\beta (0)\}^{\gamma _0/(2+\gamma _0)}=\mathcal {O}\left( n^{-1/2}\right) . \end{aligned}$$

Using (11) and Proposition 2 (see “Appendix”), one obtains

$$\begin{aligned} \mathcal {L}_{1,n}''=n^{-1}\mathcal {O}\Bigg (\sum _{i=[\sqrt{n}]+1}^{n}\varrho ^i\Bigg )=n^{-1}\mathcal {O}\Big (\varrho ^{[\sqrt{n}]+1}\Big ). \end{aligned}$$

Consequently, both \(\mathcal {L}_{1,n}'\) and \(\mathcal {L}_{1,n}''\) tend to 0 as n tends to infinity.

One also has

$$\begin{aligned} \frac{1}{2}\mathcal {L}_{2,n}= & {} \frac{1}{n}\Big |\sum _{i=1}^{n}\sum _{j=1}^{n-i}\phi (i,i+j) -\sum _{i=1}^{n}(n-i)\mu _{11}(i)\Big |\\= & {} \frac{1}{n}\Big |\sum _{j=1}^{n} \sum _{i=1}^{n-j}\phi (i,i+j)-\sum _{j=1}^{n}(n-j)\mu _{11}(j)\Big |\\= & {} \frac{1}{n}\Big |\sum _{j=1}^{[\sqrt{n}]} \sum _{i=1}^{n-j}\phi (i,i+j)-\sum _{j=1}^{[\sqrt{n}]}(n-j)\mu _{11}(j)\Big |\\&+\,\frac{1}{n}\Big |\sum _{j=[\sqrt{n}]+1}^{n} \sum _{i=1}^{n-j}\phi (i,i+j)-\sum _{j=[\sqrt{n}]+1}^{n}(n-j)\mu _{11}(j)\Big |\\= & {} \mathcal {L}_{2,n}'+\mathcal {L}_{2,n}''. \end{aligned}$$

Then, one has to study the convergence of \(\mathcal {L}_{2,n}'\) and \(\mathcal {L}_{2,n}''\). One can write the following inequality

$$\begin{aligned} \frac{1}{2}\mathcal {L}_{2,n}'\le & {} \frac{1}{n}\Big |\sum _{j=1}^{[\sqrt{n}]} \sum _{i=1}^{[\root 4 \of {n}]}\phi (i,i+j) -\sum _{j=1}^{[\sqrt{n}]}[\root 4 \of {n}]\mu _{11}(j)\Big |\\&+\,\frac{1}{n}\Big |\sum _{j=1}^{[\sqrt{n}]} \sum _{i=[\root 4 \of {n}]+1}^{n-j}\phi (i,i+j) -\sum _{j=1}^{[\sqrt{n}]}(n-j-[\root 4 \of {n}])\mu _{11}(j)\Big |. \end{aligned}$$

It can be deduced from (41), (42), (11) and Proposition 2 that

$$\begin{aligned} \mathcal {L}_{2,n}'=\mathcal {O}(n^{-1/4})+\mathcal {O} \big (n^{1/2}\varrho ^{\root 4 \of {n}-1}\big ), \end{aligned}$$

and from (41), (42) and (10) that

$$\begin{aligned} \mathcal {L}_{2,n}''=\mathcal {O}\Big (n\tau ^{\frac{\gamma _0(\sqrt{n}+1)}{2+\gamma _0}}\Big ). \end{aligned}$$

Whence, \(\mathcal {L}_{2,n}\) goes to zero as n goes to infinity.

By (41) and (10), \(\mathcal {L}_{3,n}\) is the tail of a convergent series and hence tends to 0 as \(n\rightarrow \infty \). Finally, since

$$\begin{aligned} \mathcal {L}_{4,n}\le \frac{Cst}{n}\sum _{i=1}^n i \, \mathcal {O}(\tau ^{\frac{\gamma _0 i}{2+\gamma _0}}), \end{aligned}$$

it results that \(\mathcal {L}_{4,n}\rightarrow 0\) as \(n\rightarrow \infty \).

Thus,

$$\begin{aligned} \lim _{n\rightarrow \infty }E(R_{n,\psi }(x_{1}))^{2}=\mu _{11}(1)+2\sum _{i=2}^{\infty }\mu _{11}(i)=\sigma _{1}^2<\infty . \end{aligned}$$

Proceeding in the same way, we prove that

$$\begin{aligned} \lim _{n\rightarrow \infty }E(R_{n,\psi }(x_{2}))^{2} =\mu _{22}(1)+2\sum _{i=2}^{\infty }\mu _{22}(i)=\sigma _{2}^2<\infty \end{aligned}$$

and

$$\begin{aligned} \lim _{n\rightarrow \infty }E(R_{n,\psi }(x_{1})R_{n,\psi }(x_{2})) =\mu _{12}(1)+2\sum _{i=2}^{\infty }\mu _{12}(i)=\sigma _{1,2}. \end{aligned}$$

We conclude that

$$\begin{aligned} \lim _{n\rightarrow \infty }E(\lambda _{1}R_{n,\psi }(x_{1})+\lambda _{2}R_{n,\psi }(x_{2}))^{2} =\lambda _{1}^{2}\sigma _{1}^2+2\lambda _{1}\lambda _{2} \sigma _{1,2}+\lambda _{2}^{2}\sigma _{2}^2, \end{aligned}$$

which establishes (32).
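
As a hedged aside, \(\sigma _{1}^2=\mu _{11}(1)+2\sum _{i\ge 2}\mu _{11}(i)\) is a long-run variance of the marks \(\xi _{i,1}\), and it can be approximated from simulated data by truncating the autocovariance series; the model, truncation lag and sample size below are illustrative assumptions.

```python
import numpy as np

def long_run_variance(xi, max_lag=30):
    """Approximate mu_11(1) + 2 * sum_{i>=2} mu_11(i) by truncated sample
    autocovariances of the marks xi_{i,1} (no centering: zero mean under the null)."""
    xi = np.asarray(xi, dtype=float)
    n = len(xi)
    acov = [float(np.dot(xi[: n - k], xi[k:]) / n) for k in range(max_lag + 1)]
    return acov[0] + 2.0 * sum(acov[1:])

# marks xi_{i,1} = psi(X_i - m_psi(X_{i-1})) 1{X_{i-1} <= x_1} for an AR(1) example
rng = np.random.default_rng(2)
theta0, x1 = 0.4, 0.0
X = np.zeros(5001)
for i in range(1, len(X)):
    X[i] = theta0 * X[i - 1] + rng.normal()
xi = (X[1:] - theta0 * X[:-1]) * (X[:-1] <= x1)
print("approximate sigma_1^2:", long_run_variance(xi))   # close to 0.5 in this example
```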

Assertion (33) can be handled along the same lines.

Now, we turn to proving (31). For all \(i \ge 1\), \(k=1,2\), and for any \(K>0\) define \(\xi _{i,k}^{K}\) by

$$\begin{aligned} \xi _{i,k}^K=\left\{ \begin{array}{ll} \psi (X_{i}-m_{\psi }(X_{i-1})) &{} \text{ if } |\psi (X_{i}-m_{\psi }(X_{i-1}))|\le K \text{ and } X_{i-1} \le x_k\\ 0 &{} \text{ if } |\psi (X_{i}-m_{\psi }(X_{i-1}))|>K \text{ or } X_{i-1} >x_k\\ \end{array}\right. \end{aligned}$$

Then, one has,

$$\begin{aligned} \sup _{i\ge 1}E\big (\big |\xi _{i,1}-\xi _{i,1}^{K}\big |^{2+\gamma _0}\big ) =\sup _{i\ge 1} E\left( \big |\psi (X_{i}-m_{\psi }(X_{i-1}))\big |^{2+\gamma _0}{\mathbb {1} }_{\{X_{i-1}\le x_1\} \cap \{|\psi (X_{i}-m_{\psi }(X_{i-1}))|>K \}} \right) . \end{aligned}$$

It results from condition (12) that the sequence \(\{|\xi _{i,1}|^{2+\gamma _0}; i \ge 1 \}\) is uniformly integrable. Whence

$$\begin{aligned} \lim _{K\rightarrow \infty }\sup _{i\ge 1}E\left( \big |\psi (X_{i}-m_{\psi }(X_{i-1}))\big |^{2+\gamma _0}{\mathbb {1} }_{\{X_{i-1}\le x_1\} \cap \{|\psi (X_{i}-m_{\psi }(X_{i-1}))|>K \}} \right) =0. \end{aligned}$$

Consequently,

$$\begin{aligned} \sup _{i\ge 1}E\left| \xi _{i,1}-\xi _{i,1}^{K}\right| ^{2+\gamma _{0}}\rightarrow 0 \ \text {as }K\rightarrow \infty . \end{aligned}$$

Similarly, we prove that

$$\begin{aligned} \sup _{i\ge 1}E\left| \xi _{i,2}-\xi _{i,2}^{K}\right| ^{2+\gamma _{0}}\rightarrow 0 \text{ as } K\rightarrow \infty . \end{aligned}$$

Applying the inequality

$$\begin{aligned} \left| a+b\right| ^{p}\le 2^{p-1}\left( \left| a\right| ^{p}+\left| b\right| ^{p}\right) , \quad p\ge 1, \end{aligned}$$

with \(p=2+\gamma _{0}\) yields

$$\begin{aligned} \sup _{i\ge 1}E\left| (\lambda _{1}\xi _{i,1} +\lambda _{2}\xi _{i,2})-\left( \lambda _{1}\xi _{i,1}^{K} +\lambda _{2}\xi _{i,2}^{K}\right) \right| ^{2+\gamma _{0}}\rightarrow 0 \ \text{ as } K\rightarrow \infty , \end{aligned}$$

and (31) is proved.

We also have

$$\begin{aligned} \sup _{i\ge 1}\left| \lambda _{1}\xi _{i,1}^{K}+\lambda _{2}\xi _{i,2}^{K}\right| \le (|\lambda _{1}|+|\lambda _{2}|)K<\infty , \quad \forall K>0 \end{aligned}$$

which proves (30).

It remains to verify assertion (34). For all \(i\ge 1\) and \(k=1,2\), denote by \(\widetilde{\xi }_{i,k}\) the stationary counterpart of \( \xi _{i,k}\), obtained by substituting the \(\widetilde{X}_i\)’s for the \(X_i\)’s. Define, for all \(k=1,2\)

$$\begin{aligned} \sigma _{K,k}^{2}= & {} E\left[ \left( \widetilde{\xi }_{0,k}^{K} -E\left( \widetilde{\xi }_{0,k}^{K}\right) \right) \right] ^{2} +2\sum _{i=2}^{\infty }E\left[ \left( \widetilde{\xi }_{0,k}^{K} -E\left( \widetilde{\xi }_{0,k}^{K}\right) \right) \left( \widetilde{\xi }_{i,k}^{K} -E\left( \widetilde{\xi }_{i,k}^{K}\right) \right) \right] \end{aligned}$$

By the Lebesgue dominated convergence theorem, one obtains

$$\begin{aligned} E\left[ \left( \widetilde{\xi }_{0,1}^{K} -E\left( \widetilde{\xi }_{0,1}^{K}\right) \right) \right] ^{2}\rightarrow E\left( \widetilde{\xi }_{0,1}\right) ^{2} \ \ \text {as } K\rightarrow \infty \end{aligned}$$

and

$$\begin{aligned} E\left[ \left( \widetilde{\xi }_{0,1}^{K} -E\left( \widetilde{\xi }_{0,1}^{K}\right) \right) \left( \widetilde{\xi }_{i,1}^{K} -E\left( \widetilde{\xi }_{i,1}^{K}\right) \right) \right] \rightarrow E\left( \widetilde{\xi }_{0,1}\widetilde{\xi }_{i,1}\right) \ \text {as } K\rightarrow \infty . \end{aligned}$$

Therefore

$$\begin{aligned} \lim _{K\rightarrow \infty }\sigma _{K,1}^{2}=\sigma _{1}^{2}. \end{aligned}$$

Similarly we prove that

$$\begin{aligned} \lim _{K\rightarrow \infty }\sigma _{K,2}^{2}=\sigma _{2}^{2} \end{aligned}$$

and

$$\begin{aligned} \lim _{K\rightarrow \infty }\left\{ E\left[ \left( \widetilde{\xi }_{0,1}^{K} -E\left( \widetilde{\xi }_{0,1}^{K}\right) \right) \left( \widetilde{\xi }_{0,2}^{K} -E\left( \widetilde{\xi }_{0,2}^{K}\right) \right) \right] +2 \sum _{i=2}^{\infty } E\left[ \left( \widetilde{\xi }_{0,1}^{K} -E\left( \widetilde{\xi }_{0,1}^{K}\right) \right) \left( \widetilde{\xi }_{i,2}^{K} -E\left( \widetilde{\xi }_{i,2}^{K}\right) \right) \right] \right\} = \sigma _{1,2}. \end{aligned}$$

Then (34) is proved and the Cramér–Wold device is verified.

From Billingsley (1968), it remains to prove the tightness. For this purpose, we have to prove that, for any \(\varepsilon >0\), there exist \(\eta \in [0,1]\) and an integer \(N_0\) such that for any \(n\ge N_0\)

$$\begin{aligned} P\Big (\sup _{|u-v|<\eta }| R_{n,\psi }(u)- R_{n,\psi }(v)|\ge \varepsilon \Big )\le \varepsilon . \end{aligned}$$
(43)

The first step of the proof of (43) is a decomposition of \( R_{n,\psi }\) into the difference of two nonnegative processes.

For all \(u\in \mathbb {R}\), define the functions

$$\begin{aligned} \psi ^{+}(u)=\max \{\psi (u),0\} \ \ \text{ and } \ \ \psi ^{-}(u)=-\min \{\psi (u),0\}. \end{aligned}$$

Denote by \(R_{n,\psi }^{+}\) and \(R_{n,\psi }^{-}\) the processes obtained by substituting \(\psi ^{+}\) and \(\psi ^{-}\) respectively for \(\psi \) in the expression of \(R_{n,\psi }\). It is well known that

$$\begin{aligned} R_{n,\psi }=R_{n,\psi }^{+}-R_{n,\psi }^{-}. \end{aligned}$$
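
For intuition only (a sketch under the same illustrative assumptions as the earlier example, not the authors' code), the decomposition passes each mark through \(\psi ^{+}\) and \(\psi ^{-}\); each of \(R_{n,\psi }^{+}\) and \(R_{n,\psi }^{-}\) is nondecreasing in x because its summands are nonnegative.

```python
import numpy as np

def split_processes(X, m_psi, psi, grid):
    """Return (R_plus, R_minus) with R_{n,psi} = R_plus - R_minus; both are
    nondecreasing in x since their summands are nonnegative."""
    X = np.asarray(X, dtype=float)
    n = len(X) - 1
    marks = psi(X[1:] - m_psi(X[:-1]))
    pos, neg = np.maximum(marks, 0.0), np.maximum(-marks, 0.0)
    lagged = X[:-1]
    R_plus = np.array([(pos * (lagged <= x)).sum() for x in grid]) / np.sqrt(n)
    R_minus = np.array([(neg * (lagged <= x)).sum() for x in grid]) / np.sqrt(n)
    return R_plus, R_minus
```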

We will establish (43) for \(R_{n,\psi }^{+}\). The proof for \(R_{n,\psi }^{-}\) can be handled in the same way, and (43) then follows from the above decomposition of \(R_{n,\psi }\) and routine arguments.

From Harel and Puri (1990), we must show that there exists a sequence of grids \(\mathcal {G}_{n}\) with basis \(B_{n}=\{u_{i}\}_{0\le i\le n}\) asymptotically dense in \(\overline{\mathbb {R}}=\mathbb {R}\cup \{-\infty ,+\infty \}\) which accompanies \(R_{n,\psi }^+\).

A sequence of grids \(\mathcal {G}_{n}\) with basis \(B_{n}\) is said to accompany the process \(R_{n,\psi }^+\) if and only if for any \(\epsilon >0,\) there exists \( \epsilon '>0\) and, for any \(0\le \eta <\frac{1}{2},\) there exists \( N_{0}\ge 1\) such that for any \(n\ge N_{0}\),

$$\begin{aligned} P\left( \sup _{\left| u-v\right|<\eta }\left| R_{n,\psi }^+(u)-R_{n,\psi }^+(v)\right| >\epsilon ; \sup _{\left| u_{i}-u_{j}\right|<2\eta ,0\le i<j\le n,}\left| R_{n,\psi }^+(u_{i})-R_{n,\psi }^+(u_{j})\right| <\epsilon '\right) =0. \end{aligned}$$
(44)

We first prove that for \(0\le i<j\le n\),

$$\begin{aligned} P\left( |R_{n,\psi }^+(u_i)-R_{n,\psi }^+(u_j)|>\varepsilon \right) \le \frac{Cst}{\epsilon ^{4}} \left\{ n^{-1}(\nu ^{(n)}([u_{i},u_{j}]))^{\frac{2}{ 2+\gamma }}+(\nu ^{(n)}([u_{i},u_{j}]))^{\frac{4}{2+\gamma }} \right\} , \end{aligned}$$
(45)

where \(0<\gamma <\gamma _0\) with \(4/(2+\gamma )>1\), and \((\nu ^{(n)})_{n\ge 1}\) is a sequence of positive continuous measures on \(\overline{\mathbb {R}}\) such that as \(n\rightarrow \infty \),

$$\begin{aligned} \nu ^{(n)}([u,v])\longrightarrow \nu ([u,v]),\ -\infty \le u<v\le \infty , \end{aligned}$$
(46)

where \(\nu \) is a positive continuous measure on \(\overline{\mathbb {R}}\).

We start with the study of the moments of the random variables \(R_{n,\psi }^+(t_{2})-R_{n,\psi }^+(t_{1})\) where \(-\infty \le t_{1}<t_{2}\le \infty .\) Denote for all \( i\ge 1\),

$$\begin{aligned} \mathcal {B}_i={\mathbb {1} }_{\{t_1<X_{i-1}\le t_2\}}\psi ^+(X_i-m_{\psi } (X_{i-1})). \end{aligned}$$

We have the following inequalities:

$$\begin{aligned} E\big (R_{n,\psi }^+(t_{2})-R_{n,\psi }^+(t_{1})\big )^{4}\le & {} n^{-2}\sum _{1\le i,j,k,l\le n}|E(\mathcal {B}_{i} \mathcal {B}_{j}\mathcal {B}_{k}\mathcal {B}_{l})|\\\le & {} 4!\Bigg [n^{-2}\sum _{i=0}^{n-1}\sum _{j=1}^{n-i}\sum _{k\le i}\sum _{l\le i}| E(\mathcal {B}_{j}\mathcal {B}_{j+i} \mathcal {B}_{j+i+k}\mathcal {B}_{j+i+k+l})|\\&+\, n^{-2}\sum _{i=0}^{n-1}\sum _{j=1}^{n-i}\sum _{k\le i}\sum _{l\le i}| E(\mathcal {B}_{j-i-k-l}\mathcal {B}_{j-i-k}\mathcal {B}_{j-i} \mathcal {B}_{j})|\\&+\,n^{-2}\sum _{i=0}^{n-1}\sum _{j=1}^{n-i}\sum _{k\le i}\sum _{l\le i}| E(\mathcal {B}_{j-l}\mathcal {B}_{j}\mathcal {B}_{j+i}\mathcal {B}_{j+i+k})| \Bigg ]\\= & {} 4!(\mathcal {W}_{1,n}+\mathcal {W}_{2,n}+\mathcal {W}_{3,n}). \end{aligned}$$

The study of each of the three terms on the right-hand side of the last inequality is necessary. To proceed with their study, for all \(i \ge 1\), put \(\tau _i=E|\mathcal {B}_i|^{2+\gamma }\). Since \(E(\mathcal {B}_{i})=0\), well-known moment inequalities for mixing random variables yield

$$\begin{aligned} \mathcal {W}_{1,n}\le & {} n^{-2}\sum _{i=0}^{n-1}i^{2}\big [\beta (i)\big ]^{\gamma /(2+\gamma )}\sum _{j=1}^{n-i} \big (E|\mathcal {B}_{j}|^{2+\gamma }\big )^{1/(2+\gamma )} \big (E|\mathcal {B}_{j+i}|^{2+\gamma }\big )^{1/(2+\gamma )}\\\le & {} n^{-2}\sum _{i=0}^{n-1}i^{2}\big [\beta (i)\big ]^{\gamma /(2+\gamma )} \left( \sum _{j=1}^{n-i}\tau _{j}\right) ^{^{1/(2+\gamma )}}\left( \sum _{j=1}^{n-i} \tau _{j+i}\right) ^{^{1/(2+\gamma )}}\\\le & {} n^{-2}\sum _{i=0}^{n-1}i^{2}\big [\beta (i)\big ]^{\gamma /(2+\gamma )} \left( \sum _{j=1}^{n}\tau _{j}\right) ^{2/(2+\gamma )}\\= & {} n^{-2}\sum _{i=0}^{n-1}i^{2}\big [\beta (i)\big ]^{\gamma /(2+\gamma )} \left[ n(n^{-1}\sum _{j=1}^{n}\tau _{j})\right] ^{^{2/(2+\gamma )}}. \end{aligned}$$

Thus, for all \(i \ge 1\) defining the function

$$\begin{aligned} \kappa _i(u)=E\big (|\psi (X_i-m_{\psi }(X_{i-1}))|^{2+\gamma }\mid X_{i-1}=u\big ), \ u \in \mathbb {R}, \end{aligned}$$

it is easy to see that

$$\begin{aligned} \mathcal {W}_{1,n}\le Cst \ n^{-1} \left( n^{-1}\sum _{j=1}^{n}\int _{t_{1}}^{t_{2}}\kappa _{j}(u)dF_{j}(u) \right) ^{\frac{2}{2+\gamma }}. \end{aligned}$$

In a similar way, one obtains

$$\begin{aligned} \mathcal {W}_{2,n} \le Cst \ n^{-1} \left( n^{-1}\sum _{j=1}^{n}\int _{t_{1}}^{t_{2}}\kappa _{j}(u)dF_{j}(u) \right) ^{\frac{2}{2+\gamma }}. \end{aligned}$$

We now turn to the study of the last term. One can write

$$\begin{aligned} \mathcal {W}_{3,n}&\le n^{-2}\Bigg (\sum _{i=0}^{n-1}\sum _{j=1}^{n-i}\sum _{k\le i}\sum _{l\le i}| E(\mathcal {B}_{j-l}\mathcal {B}_{j}) E(\mathcal {B}_{j+i}\mathcal {B}_{j+i+k})|\Bigg )\\&\quad +\,n^{-2}\Bigg [\sum _{i=0}^{n-1}i^{2}\Big (\beta (i)\Big )^{\gamma /(2+\gamma )}\Big (\sum _{j=1}^{n-i} \tau _{j}\Big )^{^{1/(2+\gamma )}}\Big (\tau _{j+i} \Big )^{^{1/(2+\gamma )}}\Bigg ]\\&\le n^{-2}\Bigg (\sum _{i=0}^{n-1}\sum _{j=1}^{n-i}| E(\mathcal {B}_{j}\mathcal {B}_{j+i})|\sum _{k=0}^{n-1} \sum _{l=1}^{n-k}|E(\mathcal {B}_{l}\mathcal {B}_{l+k})|\Bigg )\\&\quad +\,Cst n^{-1}\Bigg (\sup _{1\le i\le n}\int _{t_1}^{t_2}\kappa _i(u)\; \text{ d }u\Bigg )^{2/(2+\gamma )}\\&\le n^{-2}\Bigg [\sum _{i=0}^{n-1}\Big (\beta (i)\Big )^{\gamma /(2+\gamma )} \sum _{j=1}^{n-i}\Big (\tau _{j}\Big )^{^{1/(2+\gamma )}} \Big (\tau _{j+i}\Big )^{^{1/(2+\gamma )}}\Bigg ]^2\\&\quad +\,Cst \ n^{-1}\left( n^{-1}\sum _{j=1}^{n}\int _{t_{1}}^{t_{2}} \kappa _{j}(u)dF_{j}(u) \right) ^{\frac{2}{2+\gamma }} . \end{aligned}$$

It results from the last inequality that

$$\begin{aligned} \mathcal {W}_{3,n}\le Cst \left[ \left( n^{-1}\sum _{j=1}^{n}\int _{t_{1}}^{t_{2}}\kappa _{j}(u)dF_{j}(u) \right) ^{ \frac{4}{2+\gamma }}+n^{-1} \left( n^{-1}\sum _{j=1}^{n}\int _{t_{1}}^{t_{2}}\kappa _{j}(u)dF_{j}(u) \right) ^{\frac{2}{2+\gamma }}\right] . \end{aligned}$$

Collecting all these bounds and using the fact that from (12) \( \sup _{n}\sup _{1\le i\le n}\kappa _{i}(u)<Cst \ \kappa (u),\) one has

$$\begin{aligned}&E \left( R_{n,\psi }^+(t_2)-R_{n,\psi }^+(t_1)\right) ^4 \nonumber \\&\quad \le Cst \left\{ \left( n^{-1}\sum _{j=1}^{n} \int _{t_1}^{t_2}\kappa (u)dF_{j}(u) \right) ^{\frac{4}{2+\gamma }} +n^{-1} \left( n^{-1}\sum _{j=1}^{n}\int _{t_1}^{t_2}\kappa (u) dF_{j}(u) \right) ^{\frac{2}{2+\gamma }} \right\} ,\nonumber \\ \end{aligned}$$
(47)

where \(\kappa \) is the stationary counterpart of the \(\kappa _j\)’s, defined on \(\mathbb {R}\) by \(u \mapsto \kappa (u)=E\big (|\psi (\widetilde{X}_1-m_{\psi }(\widetilde{X}_0))|^{2+\gamma }\mid \widetilde{X}_0=u\big )\). Let \(\nu \) be the probability measure on \(\overline{\mathbb {R}}\) defined for all \(-\infty \le a<b\le \infty \) by

$$\begin{aligned} \nu ([a,b])=\frac{\int _a^b \kappa (u)\text{ d }F(u)}{\int _{\overline{\mathbb {R}}} \kappa (v)\text{ d }F(v)}. \end{aligned}$$

For all \(1 \le j \le n\), let \(\nu _j\) be the probability measure defined as \(\nu \) with \(F_j\) substituted for \(F\) in the numerator. Define the probability measures \(\nu ^{(n)}\) by

$$\begin{aligned} \nu ^{(n)}=n^{-1}\sum _{j=1}^{n}\nu _{j}. \end{aligned}$$

If the basis \(B_{n}=\{u_{i}\}_{0\le i\le n}\) of the grid \(\mathcal {G}_{n}\) satisfies \(\nu ([u_{i},u_{i+1}])=\frac{ Cst}{n}\), \(0\le i\le n-1\), then (45) follows from (47).
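
One way to realize such a basis in practice (a hedged sketch, with an illustrative stand-in for the normalized weight \(\kappa \,dF\)) is to take the \(u_i\)'s as quantiles of \(\nu \), so that every cell \([u_{i},u_{i+1}]\) receives \(\nu \)-mass of order 1/n.

```python
import numpy as np

def grid_basis(nu_density, n, lo=-10.0, hi=10.0, mesh=20001):
    """Grid u_0 < ... < u_n whose cells carry nu-mass close to 1/n, obtained by
    inverting the discretized cdf of nu on [lo, hi] (endpoints stand in for -inf, +inf)."""
    u = np.linspace(lo, hi, mesh)
    w = nu_density(u)
    cdf = np.cumsum(w)
    cdf = cdf / cdf[-1]                      # normalize nu to a probability measure
    probs = np.linspace(0.0, 1.0, n + 1)
    return np.interp(probs, cdf, u)          # quantiles of nu

# illustrative nu: a standard normal weight, standing in for kappa(u) dF(u) normalized
basis = grid_basis(lambda t: np.exp(-t ** 2 / 2.0), n=20)
print(basis)
```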

Now, we show that the sequence \(\mathcal {G}_n\) of grids with basis \(B_n\) accompanies \(R_{n, \psi }^+\).

Since \(R_{n,\psi }^+\) is nondecreasing, for all \([t_1,t_2]\subset [t_1',t_2']\), one has

$$\begin{aligned} \left| R_{n,\psi }^+(t_1)-R_{n,\psi }^+(t_2)\right| \le \left| R_{n,\psi }^+(t_1')-R_{n,\psi }^+(t_2')\right| . \end{aligned}$$
(48)

For any \(t_1\) and \(t_2\) such that \(-\infty \le t_1<t_2\le \infty \) and any \(n\ge 1,\) there exist \(u_{i}\) and \(u_{j}\) in the basis \(B_{n}\) such that \(u_{i}\le t_1\le u_{i+1}\), \(0\le i\le n-1\), and \(u_{j}\le t_2\le u_{j+1}\), \(0<j\le n-1\), and (48) implies that

$$\begin{aligned} \left| R_{n,\psi }^+(t_2)-R_{n,\psi }^+(t_1)\right| \le \left| R_{n,\psi }^+(u_{j+1})-R_{n,\psi }^+(u_{i})\right| . \end{aligned}$$

We deduce that for any \(\epsilon >0,\) there exists \(\epsilon '>0\) and for any \(0\le \eta <\frac{1}{2},\) there exists \(N_{0}\ge 1\) such that for any \(n\ge N_{0}\)

$$\begin{aligned} \left\{ \sup _{_{\left| u-v\right|<\eta }}\left| R_{n,\psi }^+(u)-R_{n,\psi }^+(v)\right| >\epsilon \right\} \bigcap \left\{ \sup _{\left| u_{i}-u_{j}\right|<2\eta ,0\le i<j\le n,}\left| R_{n,\psi }^+(u_{i})-R_{n,\psi }^+(u_{j})\right| <\epsilon ' \right\} =\emptyset . \end{aligned}$$

This implies (44), and hence the grid \(\mathcal {G}_{n}\) defined above through the measure \(\nu \) accompanies the process \(R_{n,\psi }^+.\)

Now, proving (43) for \(R_{n,\psi }^+\) is equivalent to proving that for any \(\epsilon >0,\) there exists \(\eta \in [0,1]\) and an integer \(N_{0}\) such that for any \(n\ge N_{0}\),

$$\begin{aligned} P\left( \sup _{\left| u_{i}-u_{j}\right|<2\eta ,0\le i<j\le n,}\left| R_{n,\psi }^+(u_{i})-R_{n,\psi }^+(u_{j})\right| \ge \epsilon \right) \le \epsilon . \end{aligned}$$
(49)

From (45), we have

$$\begin{aligned}&P\left( \sup _{\left| u_{i}-u_{j}\right|<2\eta ,0\le i<j\le n,}\left| R_{n,\psi }^+(u_{i})-R_{n,\psi }^+(u_{j})\right| \ge \epsilon \right) \nonumber \\&\quad \le Cst \sum _{i=1}^{n-1}\left\{ n^{-1} \left( \nu ^{(n)} ([u_{i},u_{i+1}]) \right) ^{\frac{2}{2+\gamma }}+\left( \nu ^{(n)} ([u_{i},u_{i+1}]) \right) ^{\frac{4}{2+\gamma }}\right\} . \end{aligned}$$
(50)

From Propositions 1 and 3, there exists an integer \(N_0\) such that for any \(n \ge N_0\)

$$\begin{aligned}&P\left( \sup _{\left| u_{i}-u_{j}\right|<2\eta ,0\le i<j\le n,}\left| R_{n,\psi }^+(u_{i})-R_{n,\psi }^+(u_{j})\right| \ge \epsilon \right) \\&\quad \le Cst \sum _{i=1}^{n-1}\left( \nu ([u_{i},u_{i+1}]) +\mathcal {O}\left( \frac{1}{n} \right) \right) ^{\frac{4}{ 2+\gamma }} \\&\quad =Cst\sum _{i=1}^{n-1}\left( \frac{1}{n}+\mathcal {O}\left( \frac{1}{n} \right) \right) ^{\frac{4}{2+\gamma }}\\&\quad =Cst\sum _{i=1}^{n-1}\left( \mathcal {O}\left( \frac{1}{n} \right) \right) ^{\frac{4}{2+\gamma }}\\&\quad =\left( O\left( \frac{1}{n}\right) \right) ^{ \frac{4}{2+\gamma }-1}. \end{aligned}$$

The fact that \(\frac{4}{2+\gamma }-1>0\) implies (49). An analogous result can be established for \(R_{n,\psi }^-\). Whence, (43) is proved by elementary arguments.

Remark 7

For proving the case \(d>1\), we have to generalize the definition of the probability measure \(\nu \). For handling the analog of inequality (47), \(\gamma \) should be chosen smaller than \(2/(d-1)\), which implies \(2/(2+\gamma )+1/d>1\). Then we define a d-dimensional probability measure \({\nu }\) such that for all \(-\infty \le a_{i}<b_{i}\le \infty \), \( 1\le i\le d,\)

$$\begin{aligned} {\nu }\left( \prod \limits _{i=1}^{d}[a_{i},b_{i}] \right) =\frac{ \int _{a_{1}}^{b_{1}}\ldots \int _{a_{d}}^{b_{d}}{\kappa } (u_{1},\ldots ,u_{d})dF(u_{1})\ldots dF(u_{d})}{\int _{\mathbb {R}^{d}}{\kappa } (v_{1},\ldots ,v_{d})dF(v_{1})\ldots dF(v_{d})}, \end{aligned}$$

where \({\kappa } \) is the function defined for all \((u_{1},\ldots ,u_{d})\in \overline{\mathbb {R}}^{d}\) by

$$\begin{aligned} {\kappa } (u_{1},\ldots ,u_{d})=E\left( \left| \psi (\widetilde{\mathbf {X}} _{1}-m_{\psi }(\widetilde{\mathbf {X}}_{0}))\right| ^{2+\gamma } \mid \widetilde{\mathbf {X}}_{0}=(u_{1},\ldots ,u_{d}) \right) . \end{aligned}$$

To be able to build a sequence of grids \(\mathcal {G}_{n}^{d}\) with basis \(\{a_{i,j},1\le i\le n,1\le j\le d\}\) asymptotically dense in \(\overline{\mathbb {R}}^{d}\) which accompanies \(R_{n,\psi }\), the measure \( {\nu }\) should be the product of its marginals \((\nu _{i})_{1\le i\le d}\). As this is not the case in general, we use the fact that \({\nu } \le (\nu _1 \cdot \nu _2\cdots \nu _d)^{\frac{1}{ d}}\) and replace \({\nu }\) by the measure \({\nu }^{*}=(\nu _1 \cdot \nu _2\cdots \nu _d)^{\frac{1}{d}}.\) Then, the basis of the grid \(\mathcal {G}_{n}^{d}\) is a subdivision of \(\overline{ \mathbb {R}}^{d}\) satisfying

$$\begin{aligned} {\nu }^{*} \left( \prod \limits _{j=1}^{d}[a_{i_{j},j},a_{i_{j}+1,j}] \right) =\frac{Cst}{n^{d}}, \ 0\le i_{j}<n, \ \ 1\le j\le d. \end{aligned}$$

With this measure, the proof for the case \(d>1\) is similar to that of \(d=1\).

Proof of Theorem 2

One can write for all \(x \in \mathbb {R}\),

$$\begin{aligned} R^{*}_{n,\psi }(x)= & {} R_{n,\psi }(x)+n^{-1/2}\sum _{i=1}^n\big (\psi ({X}_i-m_{\psi }( {X}_{i-1};\widetilde{\theta }_n)) -\psi ({X}_i-m_{\psi }({X}_{i-1};\theta _0)) \big ){\mathbb {1} }_{\{{X}_{i-1}\le {x}\}} \\= & {} R_{n,\psi }(x) +J_{n,\psi }(x). \end{aligned}$$

By a suitable Taylor expansion, for all \(x \in \mathbb {R}\), \(-J_{n,\psi }(x)\) admits the representation

$$\begin{aligned} -J_{n,\psi }(x)= & {} n^{1/2}(\widetilde{\theta }_n-\theta _0)^tn^{-1}\sum _{i=1}^n \Big \{ \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}(X_{i-1};\theta _0) {\mathbb {1} }_{\{{X}_{i-1}\le {x}\}}\\&-\, E \left[ \psi '(\widetilde{X}_i-m_{\psi }(\widetilde{X}_{i-1};\theta _0)) \mathbf {g}(\widetilde{X}_{i-1};\theta _0) {\mathbb {1} }_{\{\widetilde{X}_{i-1}\le x\}} \right] \Big \}\\&+\,n^{1/2}(\widetilde{\theta }_n-\theta _0)^tn^{-1}\sum _{i=1}^n \left[ \psi '(X_i-m_{\psi }(X_{i-1};\theta _{ni}))\right. \\&\left. -\,\psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \right] \mathbf {g}(X_{i-1};\theta _{ni}) {\mathbb {1} }_{\{{X}_{i-1}\le {x}\}} \\&+\,n^{1/2}(\widetilde{\theta }_n-\theta _0)^tn^{-1}\sum _{i=1}^n \psi '(X_i-m_{\psi }(X_{i-1};\theta _0))\\&\left[ \mathbf {g}( X_{i-1};\theta _{ni}) -\mathbf {g}(X_{i-1};\theta _0)\right] {\mathbb {1} }_{\{X_{i-1}\le {x}\}}\\&+\,n^{1/2}(\widetilde{\theta }_n-\theta _0)^tE\left[ \psi '(\widetilde{X}_1-m_{\psi }(\widetilde{X}_0;\theta _0)) \mathbf {g}(\widetilde{X}_0;\theta _0) {\mathbb {1} }_{\{\widetilde{X}_0\le {x}\}}\right] \\= & {} n^{1/2}(\widetilde{\theta }_n-\theta _0)^t \left\{ J_{1n}(x)+ J_{2n}(x) + J_{3n}(x)\right. \\&\left. +\,E\left[ \psi '(\widetilde{X}_1-m_{\psi }(\widetilde{X}_0;\theta _0)) \mathbf {g}(\widetilde{X}_0;\theta _0) {\mathbb {1} }_{\{\widetilde{X}_0\le {x}\}}\right] \right\} , \end{aligned}$$

where for all \(i=1,\ldots ,n\), \(\theta _{ni}\) denotes a point lying between \(\widetilde{\theta }_n\) and \(\theta _0\).

Since \(n^{1/2}(\widetilde{\theta }_n-\theta _0)\) is tight, it suffices to show that for all \(k=1,2,3\), in probability,

$$\begin{aligned} \sup _{x \in \mathbb {R}} ||J_{kn}(x) ||\longrightarrow 0, \ n \rightarrow \infty . \end{aligned}$$

By (A3), it is easy to see that

$$\begin{aligned} \sup _{x \in \mathbb {R}} ||J_{2n}(x)||\le & {} Cst \ n^{-1}\sum _{i=1}^n ||\theta _{ni}-\theta _0|| C(X_{i-1},X_i)M(X_i) \nonumber \\\le & {} Cst \ ||\widetilde{\theta }_n-\theta _0|| n^{-1}\sum _{i=1}^n C(X_{i-1},X_i)M(X_i). \end{aligned}$$
(51)

Now, write

$$\begin{aligned} n^{-1}\sum _{i=1}^n C(X_{i-1},X_i)M(X_i)= & {} n^{-1}\sum _{i=1}^n \left\{ C(X_{i-1},X_i)M(X_i) - E[C(X_{i-1},X_i)M(X_i)] \right\} \\&\quad +\,n^{-1}\sum _{i=1}^n E[C(X_{i-1},X_i)M(X_i)]. \end{aligned}$$

It follows from Lemma 2 (see “Appendix”), that in probability,

$$\begin{aligned} n^{-1}\sum _{i=1}^n \left\{ C(X_{i-1},X_i) M(X_i) - E[C(X_{i-1},X_i)M(X_i)] \right\} \longrightarrow 0, \ n \rightarrow \infty . \end{aligned}$$

Next, observe that, for all \(i=1, \ldots , n\),

$$\begin{aligned}&|E[C(X_{i-1},X_i)M(X_i)]- E[C(\widetilde{X}_{i-1},\widetilde{X}_i)M(\widetilde{X}_i)]|\\&\quad = \left| \int _{-\infty }^{\infty } C(x,y)M(y)dF_{i-1,i}(x,y)-\int _{-\infty }^{\infty } C(x,y)M(y)d \widetilde{F}_1(x,y) \right| . \end{aligned}$$

By Proposition 2 applied to \(\{ W_i=(X_i,X_{i-1}): i\ge 1 \}\), one can see that there exists some \(\widetilde{\rho } \in (0,1)\) such that the right-hand side of the above inequality is bounded by \(O(\widetilde{\rho }^i)\). Since the sequence \(\{ \widetilde{X}_i : i \ge 0\}\) is stationary, it results from the above that, as i tends to infinity, \(E[C(X_{i-1},X_i)M(X_i)]\) tends to \(E[C(\widetilde{X}_0, \widetilde{X}_1)M(\widetilde{X}_0)]\). Whence, by Cesàro's theorem, as n tends to infinity,

$$\begin{aligned} n^{-1}\sum _{i=1}^n E[C(X_{i-1},X_i)M(X_i)] \longrightarrow E[C(\widetilde{X}_0, \widetilde{X}_1)M(\widetilde{X}_0)], \end{aligned}$$

and next,

$$\begin{aligned} n^{-1}\sum _{i=1}^n C(X_{i-1},X_i)M(X_i) \longrightarrow E[C(\widetilde{X}_0, \widetilde{X}_1)M(\widetilde{X}_0)]. \end{aligned}$$

Finally, since \(\widetilde{\theta }_n\) is consistent to \(\theta _0\), the right-hand side of (51) tends to 0 in probability and \(\sup _{x \in \mathbb {R}}||J_{2n}(x)||\) does the same.

We turn to the study of \(J_{3n}\). The assumption (A2) allows for writing

$$\begin{aligned} \sup _{x \in \mathbb {R}} ||J_{3n}(x)|| \le Cst \ ||\widetilde{\theta }_n-\theta _0|| \left( n^{-1}\sum _{i=1}^n D(X_i) M(X_i) \right) . \end{aligned}$$
(52)

The term in the big brackets in the above inequality can be decomposed as follows

$$\begin{aligned} n^{-1}\sum _{i=1}^n D(X_i)M(X_i)= & {} n^{-1}\sum _{i=1}^n \left\{ D(X_i)M(X_i) - E[D(X_i)M(X_i)] \right\} \\&\quad +\, n^{-1}\sum _{i=1}^n E[D(X_i)M(X_i)]. \end{aligned}$$

Making use again of Lemma 2 yields that in probability,

$$\begin{aligned} n^{-1}\sum _{i=1}^n \left\{ D(X_i) M(X_i) - E[D(X_i)M(X_i)] \right\} \longrightarrow 0, \ n \rightarrow \infty . \end{aligned}$$

Observing that, for all \(i=1, \ldots , n\),

$$\begin{aligned} |E[D(X_i)M(X_i)]- E[D(\widetilde{X}_i)M(\widetilde{X}_i)]|= \left| \int _{-\infty }^{\infty } D(x)M(x)dF_i(x)-\int _{-\infty }^{\infty } D(x)M(x)dF(x) \right| , \end{aligned}$$

from Propositions 1 and 3, one sees that there exists some \(\epsilon \in (0,1)\) such that the right-hand side of the above inequality is bounded by \(O(\epsilon ^i)\). By the stationarity of the sequence \(\{ \widetilde{X}_i : i \ge 0\}\), it results from the above that, as i tends to infinity, \(E[D(X_i)M(X_i)]\) tends to \(E[D(\widetilde{X}_0)M(\widetilde{X}_0)]\). Once again, by Cesàro's theorem, as n tends to infinity,

$$\begin{aligned} n^{-1}\sum _{i=1}^n E[D(X_i)M(X_i)] \longrightarrow E[D(\widetilde{X}_0)M(\widetilde{X}_0)], \end{aligned}$$

from which it follows that, as n tends to infinity, in probability, one has

$$\begin{aligned} n^{-1}\sum _{i=1}^n D(X_i)M(X_i) \longrightarrow E[D(\widetilde{X}_0)M(\widetilde{X}_0)]. \end{aligned}$$

Once more, by the consistency of \(\widetilde{\theta }_n\) to \(\theta _0\), the right-hand side of (52) tends to 0 in probability and \(\sup _{x \in \mathbb {R}}||J_{3n}(x)||\) does the same.

It remains to show that

$$\begin{aligned} \sup _{x \in \mathbb {R}} ||J_{1n}(x) || \longrightarrow 0, \ n \rightarrow \infty . \end{aligned}$$

We will just show that for any component \(J_{1n}^j(x)\), \(j=1,\ldots ,q\) of \(J_{1n}(x)\),

$$\begin{aligned} \sup _{x \in \mathbb {R}} |J_{1n}^j(x) | \longrightarrow 0, \ n \rightarrow \infty . \end{aligned}$$

For this, we use a general Glivenko–Cantelli theorem established in Athreya and Roy (2016) (see Theorem 7 in the “Appendix”). But we need to rewrite \(J_{1n}^j\) in a suitable form. Define for all \(x \in \mathbb {R}\) and \(j=1,\ldots ,q\)

$$\begin{aligned} J_{1n}^{j+}(x)= & {} n^{-1}\sum _{i=1}^n \Big \{ \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_{i-1};\theta _0) \rfloor ^+ {\mathbb {1} }_{\{X_{i-1}\le x\}}\\&-\, E \left[ \lfloor \psi '(\widetilde{X}_1-m_{\psi }(\widetilde{X}_0;\theta _0)) \mathbf {g}_j(\widetilde{X}_0;\theta _0) \rfloor ^+{\mathbb {1} }_{\{\widetilde{X}_0\le x\}} \right] \Big \}\\ J_{1n}^{j-}(x)= & {} n^{-1}\sum _{i=1}^n \Big \{ \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_{i-1};\theta _0) \rfloor ^- {\mathbb {1} }_{\{X_{i-1}\le x\}}\\&-\, E \left[ \lfloor \psi '(\widetilde{X}_1-m_{\psi }(\widetilde{X}_0;\theta _0)) \mathbf {g}_j(\widetilde{X}_0;\theta _0) \rfloor ^-{\mathbb {1} }_{\{\widetilde{X}_0\le x\}} \right] \Big \}, \end{aligned}$$

where for real number z, \(\lfloor z \rfloor ^+\) and \(\lfloor z \rfloor ^-\) stand respectively for

$$\begin{aligned} \lfloor z \rfloor ^+= \max (z,0) \text{ and } \lfloor z \rfloor ^- = \max (-z,0). \end{aligned}$$

It is well known that for all \(x \in \mathbb {R}\) and \(j=1,\ldots ,q\),

$$\begin{aligned} J_{1n}^j(x)=J_{1n}^{j+}(x)-J_{1n}^{j-}(x). \end{aligned}$$

Then, to show the uniform convergence in probability to 0 of \(J_{1n}^j\), it suffices to show that of \(J_{1n}^{j+}\) and \(J_{1n}^{j-}\). We only treat \(J_{1n}^{j+}\). This requires checking the conditions of Theorem 7 with the functions

$$\begin{aligned} \mathcal {F}_n(x)= & {} n^{-1}\sum _{i=1}^n \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_{i-1};\theta _0) \rfloor ^+ {\mathbb {1} }_{\{X_{i-1}\le x \}}\\ \mathcal {F}(x)= & {} E \left[ \lfloor \psi '(\widetilde{X}_1-m_{\psi }(\widetilde{X}_0;\theta _0)) \mathbf {g}_j(\widetilde{X}_0;\theta _0) \rfloor ^+{\mathbb {1} }_{\{\widetilde{X}_0\le x\}} \right] , \ x \in \mathbb {R}. \end{aligned}$$

It is an easy matter that both \(\mathcal {F}_n\) and \(\mathcal {F}\) are nonnegative, nondecreasing functions defined on \(\mathbb {R}\), and that \(\mathcal {F}\) is continuous. It is also easy to see that

$$\begin{aligned} \mathcal {F}(\infty )\equiv & {} \lim _{x \rightarrow \infty }\mathcal {F}(x) = E \left[ \lfloor \psi '(\widetilde{X}_1-m_{\psi }(\widetilde{X}_0;\theta _0)) \mathbf {g}_j(\widetilde{X}_0;\theta _0) \rfloor ^+\right] < \infty \\ \mathcal {F}_n(\infty )\equiv & {} \lim _{x \rightarrow \infty }\mathcal {F}_n(x) = n^{-1}\sum _{i=1}^n \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_{i-1};\theta _0) \rfloor ^+ < \infty . \end{aligned}$$

and that

$$\begin{aligned} \mathcal {F}(-\infty )\equiv \lim _{x \rightarrow - \infty }\mathcal {F}(x) =0 \text{ and } \mathcal {F}_n(-\infty ) \equiv \lim _{x \rightarrow -\infty }\mathcal {F}_n(x)=0. \end{aligned}$$

So, Point (i) of Theorem 7 is checked. For checking Point (ii), observe that

$$\begin{aligned} \mathcal {F}_n(\infty )= & {} n^{-1}\sum _{i=1}^n \Big \{ \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0))\mathbf {g}_j(X_i;\theta _0) \rfloor ^+\\&-\,E \left[ \lfloor \psi '( X_i-m_{\psi }( X_{i-1};\theta _0)) \mathbf {g}_j( X_i;\theta _0) \rfloor ^+\right] \Big \}\\&+\, n^{-1}\sum _{i=1}^n E \left[ \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_i;\theta _0) \rfloor ^+\right] \\&-\, E \left[ \lfloor \psi '(\widetilde{X}_1-m_{\psi }(\widetilde{X}_0;\theta _0)) \mathbf {g}_j(\widetilde{X}_0;\theta _0) \rfloor ^+\right] \\&+\,E \left[ \lfloor \psi '(\widetilde{X}_1-m_{\psi }(\widetilde{X}_0;\theta _0)) \mathbf {g}_j(\widetilde{X}_0;\theta _0) \rfloor ^+\right] \\= & {} \mathcal {F}_{1n} + \mathcal {F}_{2n}+ E \left[ \lfloor \psi '(\widetilde{X}_1-m_{\psi }(\widetilde{X}_0;\theta _0)) \mathbf {g}_j(\widetilde{X}_0;\theta _0) \rfloor ^+\right] . \end{aligned}$$

Using Lemma 2 once more, one shows that \(\mathcal {F}_{1n}\) tends in probability to 0 as n tends to infinity. Also, by Proposition 2, \(E \left[ \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_{i-1};\theta _0) \rfloor ^+\right] \) tends to \(E \left[ \lfloor \psi '(\widetilde{X}_1-m_{\psi }(\widetilde{X}_0;\theta _0)) \mathbf {g}_j(\widetilde{X}_0;\theta _0) \rfloor ^+ \right] \) as i tends to infinity. By Cesàro's theorem,

$$\begin{aligned}&n^{-1}\sum _{i=1}^n E \left[ \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_i;\theta _0) \rfloor ^+\right] \\&\quad \longrightarrow E \left[ \lfloor \psi '(\widetilde{X}_1-m_{\psi }(\widetilde{X}_0;\theta _0)) \mathbf {g}_j(\widetilde{X}_0;\theta _0) \rfloor ^+\right] . \end{aligned}$$

This clearly implies that \(\mathcal {F}_{2n}\) tends in probability to 0 as n tends to infinity. Consequently, \(\mathcal {F}_n(\infty )\) tends in probability to \(\mathcal {F}(\infty )\) as n tends to infinity. With this, the second point is handled. By the same techniques and arguments, one sees that Point (v) also holds. That is, as n tends to infinity, in probability,

$$\begin{aligned} \mathcal {F}_n(x) \longrightarrow \mathcal {F}(x), \ x \in \mathbb {R}. \end{aligned}$$

For checking the remaining points, let \(A= \{a_{\ell }: \ell =1, 2, \ldots \}\) and for all \(\ell =1, 2, \ldots \),

$$\begin{aligned} p_{n\ell } \equiv \mathcal {F}_n(a_{\ell }^+)-\mathcal {F}_n(a_{\ell }^-), \ \ \ \ \ \ p_{\ell } \equiv \mathcal {F}(a_{\ell }^+)-\mathcal {F}(a_{\ell }^-). \end{aligned}$$

Since \(\mathcal {F}\) is continuous, all the \(p_{\ell }\)’s are nil. Moreover, for all \(n \ge 1\), and all \(\ell \ge 1\),

$$\begin{aligned} p_{n\ell }= & {} n^{-1}\sum _{i=1}^n \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_{i-1};\theta _0) \rfloor ^+ \left( {\mathbb {1} }_{\{ X_{i-1}\le a_{\ell } \}} - {\mathbb {1} }_{\{ X_{i-1}<a_{\ell } \}}\right) \\= & {} n^{-1}\sum _{i=1}^n \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_{i-1};\theta _0) \rfloor ^+{\mathbb {1} }_{\{ X_{i-1}=a_{\ell } \}}. \end{aligned}$$

It is an easy matter that for all \(i=1, \ldots , n\), and \(\ell \ge 1\),

$$\begin{aligned} E \left\{ \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_{i-1};\theta _0) \rfloor ^+){\mathbb {1} }_{\{ X_{i-1}=a_{\ell } \}} \right\} =0. \end{aligned}$$

Whence, from Lemma 2, as n tends to infinity, one sees that in probability

$$\begin{aligned} p_{n\ell }= n^{-1}\sum _{i=1}^n \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_{i-1};\theta _0) \rfloor ^+){\mathbb {1} }_{\{ X_{i-1}=a_{\ell } \}} \longrightarrow 0=p_{\ell }. \end{aligned}$$

This yields Point (iii).

For Point (iv), write

$$\begin{aligned} \sum _{\ell \ge 1}p_{n\ell } = n^{-1}\sum _{i=1}^n \left\{ \sum _{\ell \ge 1} \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_{i-1};\theta _0) \rfloor ^+){\mathbb {1} }_{\{ X_{i-1}=a_{\ell }\}}\right\} . \end{aligned}$$

Observe that at most n of the \(a_{\ell }\)’s coincide with the \(X_{i-1}\)’s. Rename them as \(\widetilde{a}_k, \ k=1, \ldots ,n_0\), where \(n_0\) stands for their number. Then

$$\begin{aligned}&\sum _{\ell \ge 1} \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_{i-1};\theta _0) \rfloor ^+){\mathbb {1} }_{\{ X_{i-1}=a_{\ell } \}}\\&\qquad =\sum _{k =1}^{n_0}\lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_{i-1};\theta _0) \rfloor ^+){\mathbb {1} }_{\{ X_{i-1}= \widetilde{a}_k \}}. \end{aligned}$$

Thus, for all \(i=1, \ldots , n\),

$$\begin{aligned} E\left\{ \sum _{\ell \ge 1} \lfloor \psi '(X_i-m_{\psi }(X_{i-1};\theta _0)) \mathbf {g}_j(X_{i-1};\theta _0) \rfloor ^+){\mathbb {1} }_{\{ X_{i-1}=a_{\ell } \}}\right\} = 0. \end{aligned}$$

Then, we conclude by the same arguments as above that, as n tends to infinity, one has in probability,

$$\begin{aligned} \sum _{\ell \ge 1}p_{n\ell } \longrightarrow 0= \sum _{\ell \ge 1}p_{\ell } . \end{aligned}$$

By Theorem 7 one can conclude that, in probability,

$$\begin{aligned} \sup _{x \in \mathbb {R}} \left| J_{1n}^{j+}(x) \right| \longrightarrow 0, \ n \rightarrow \infty . \end{aligned}$$

Proceeding in the same lines as above, one can also show that in probability,

$$\begin{aligned} \sup _{x \in \mathbb {R}} \left| J_{1n}^{j-}(x) \right| \longrightarrow 0, \ n \rightarrow \infty . \end{aligned}$$

Since for all \(1\le j \le q\),

$$\begin{aligned} \sup _{x \in \mathbb {R}} \left| J_{1n}^j(x) \right| \le 2 \max \left( \sup _{x \in \mathbb {R}} \left| J_{1n}^{j-}(x) \right| , \sup _{x \in \mathbb {R}} \left| J_{1n}^{j+}(x) \right| \right) , \end{aligned}$$

it results that

$$\begin{aligned} \sup _{x \in \mathbb {R}} \left| J_{1n}^{j}(x) \right| \longrightarrow 0, \ n \rightarrow \infty \end{aligned}$$

and

$$\begin{aligned} \sup _{x \in \mathbb {R}} \left| \left| J_{1n}(x)\right| \right| \longrightarrow 0, \ n \rightarrow \infty . \end{aligned}$$

Whence, equality (17) is established and the first part of the proof of Theorem 2 is handled. The second part is an easy consequence of the first, as the residuals are uncorrelated with one another and with the past values \(X_{i-1}\). \(\square \)

Proof of Corollary 1

This result is established as soon as we prove that \(R^{*}_{n,\psi }\) is tight and its finite-dimensional distributions converge to those of a zero-mean Gaussian process with covariance kernel \( K^{*}_{\psi }\). The latter property can be handled along the same lines as for \(R_{n,\psi }\). For the tightness, recalling from Theorem 2 that for all \(x\in \mathbb {R}\) and \(\theta \in \varTheta \), \(\varDelta (x; \theta )= E\left[ \psi '(\widetilde{X}_1-m_{\psi }(\widetilde{X}_{0};\theta )) \mathbf {g}(\widetilde{X}_{0};\theta ) {\mathbb {1} }_{\{ \widetilde{X}_0\le x \}}\right] \), it suffices to prove the tightness of the process

$$\begin{aligned} \mathcal {Q}_n(x)=n^{-1/2}\varDelta ^t(x; \theta _0) \sum _{i=1}^n \ell ({X}_i,{X}_{i-1};\theta _0). \end{aligned}$$

In view of Part (a) of assumption (A1), it is very easy to see that for all \(x,y \in \mathbb {R}\),

$$\begin{aligned} E(|\mathcal {Q}_n(x)-\mathcal {Q}_n(y)|^2 )\le Cst |x-y|^2. \end{aligned}$$

Thus, from Billingsley (1968), \(\mathcal {Q}_n\) is tight and \(R^{*}_{n,\psi }\) is tight as the sum of two tight processes.
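The tightness criterion invoked here is, loosely stated, the classical moment condition of Billingsley (1968): if there exist constants \(C>0\), \(\gamma >0\) and \(\alpha >1\) such that, uniformly in \(n\),

$$\begin{aligned} E\left( |\mathcal {Q}_n(x)-\mathcal {Q}_n(y)|^{\gamma } \right) \le C |x-y|^{\alpha } \quad \text{ for } \text{ all } x,y \in \mathbb {R}, \end{aligned}$$

then the sequence \(\{\mathcal {Q}_n\}\) is tight; the bound displayed above corresponds to \(\gamma =\alpha =2\).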

The kernel \( K^{*}_{\psi }\) is obtained as the limit of the covariance kernel of the process \(R^{*}_{n,\psi }\). For all \(x,y \in \mathbb {R}\), one has

$$\begin{aligned}&\text{ Cov }(R^{*}_{n,\psi }(x), R^{*}_{n,\psi }(y))\\&\quad = \text{ Cov }(R_{n,\psi }(x), R_{n,\psi }(y))\\&\qquad -\,\varDelta ^t(x, \theta _0) n^{-1} \sum _{i=1}^n \sum _{j=1}^n E \left[ \ell ({X}_i,{X}_{i-1};\theta _0) \psi (X_j-m_{\psi }(X_{j-1})){\mathbb {1} }_{\{ X_{j-1}\le y \}} \right] \\&\qquad -\,\varDelta ^t(y, \theta _0) n^{-1} \sum _{i=1}^n \sum _{j=1}^n E \left[ \ell ({X}_i,{X}_{i-1};\theta _0) \psi (X_j-m_{\psi }(X_{j-1})){\mathbb {1} }_{\{ X_{j-1}\le x\}} \right] \\&\qquad +\,\varDelta ^t(x, \theta _0) n^{-1} \sum _{i=1}^n \sum _{j=1}^n L_{i,j}(\theta _0) \varDelta (y, \theta _0) +o(1). \end{aligned}$$

Clearly, for all \(x,y \in \mathbb {R}\), as n tends to infinity,

$$\begin{aligned} \text{ Cov }(R_{n,\psi }(x), R_{n,\psi }(y))=E \left[ R_{n,\psi }(x) R_{n,\psi }(y)\right] \longrightarrow K_{\psi }(x,y). \end{aligned}$$

For all \(x \in \mathbb {R}\), for all \(\theta \in \varTheta \) and for all \(n \ge 1\), define the following vectors and matrices

$$\begin{aligned} \mathcal {M}_n(x, \theta )= & {} n^{-1} \sum _{i=1}^n \sum _{j=1}^n E \left[ \ell ({X}_i,{X}_{i-1};\theta ) \psi (X_j-m_{\psi }(X_{j-1})){\mathbb {1} }_{\{ X_{j-1}\le x \}} \right] \\ \mathcal {N}_n(\theta )= & {} n^{-1} \sum _{i=1}^n \sum _{j=1}^n L_{i,j}(\theta ). \end{aligned}$$

Proceeding as in the study of the finite-dimensional distributions of \(R_{n,\psi }\), one sees that, for all \(x \in \mathbb {R}\), as n tends to infinity,

$$\begin{aligned} \mathcal {M}_n(x, \theta _0)\longrightarrow & {} E \left[ \ell (\widetilde{X}_1,\widetilde{X}_0;\theta _0) \psi (\widetilde{X}_1-m_{\psi }(\widetilde{X}_0)) {\mathbb {1}}_{\{ \widetilde{X}_0\le x \}} \right] \\&+\,2 \sum _{j> 1} E \left[ \ell (\widetilde{X}_1,\widetilde{X}_0;\theta _0) \psi (\widetilde{X}_j-m_{\psi }(\widetilde{X}_{j-1})){\mathbb {1} }_{\{ \widetilde{X}_{j-1}\le x \}} \right] \\ \mathcal {N}_n(\theta _0)\longrightarrow & {} \widetilde{L}_{1,1}(\theta _0)+ 2 \sum _{j > 1} \widetilde{L}_{1,j}(\theta _0). \end{aligned}$$

Whence, the weak convergence of \(R^{*}_{n,\psi }\) to a zero-mean Gaussian process with covariance kernel defined by (18) follows from Theorem 2. This establishes the first part of Corollary 1.

The second part can be established with the arguments invoked for the proof of the second part of Theorem 2. \(\square \)
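In practice, a Cramér–von Mises functional of the marked empirical process can be evaluated directly from the fitted residuals. The following minimal sketch (in Python, for a scalar series) is illustrative only: the function names `marked_process` and `cvm_statistic`, the choice \(\psi (u)=u\), and the integration of the squared process with respect to the empirical distribution of the \(X_{i-1}\) are assumptions made for this example, not the exact statistic used in the paper.

```python
import numpy as np

def marked_process(x_grid, X_lag, marks):
    """R_n(x) = n^{-1/2} * sum_i marks_i * 1{X_{i-1} <= x}, evaluated on x_grid."""
    n = len(X_lag)
    indic = (X_lag[None, :] <= x_grid[:, None]).astype(float)  # shape (len(x_grid), n)
    return indic @ marks / np.sqrt(n)

def cvm_statistic(X, m_hat, psi=lambda u: u):
    """Cramer-von Mises functional T_n = n^{-1} * sum_j R_n(X_{j-1})^2,
    with marks psi(X_i - m_hat(X_{i-1})) built from a fitted conditional mean m_hat."""
    X_lag, X_resp = X[:-1], X[1:]
    marks = psi(X_resp - m_hat(X_lag))       # psi of the fitted residuals
    R = marked_process(X_lag, X_lag, marks)  # process evaluated at the lagged observations
    return np.mean(R ** 2)

# Toy usage: AR(1) data with the (here known) null conditional mean m(x) = 0.3 x.
rng = np.random.default_rng(0)
X = np.zeros(501)
for t in range(1, 501):
    X[t] = 0.3 * X[t - 1] + rng.standard_normal()
print(cvm_statistic(X, m_hat=lambda u: 0.3 * u))
```

Since the limiting process depends on the model, on \(\psi \) and on the estimator through \(K^{*}_{\psi }\), critical values for such a statistic are in general model-dependent.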

Proof of Theorem 4

Under \(\mathcal {H}_1\), for all \(\mathbf {x}\in \overline{\mathbb {R}}^d\), by a first-order Taylor expansion, one has,

$$\begin{aligned} { R_{n,\psi }^* (\mathbf {x}) \over \sqrt{n}}= & {} {1 \over n} \sum _{i=1}^n\psi (m_{\psi }(\mathbf {X}_{i-1};\theta _0) +\varpi (\mathbf {X}_{i-1})+V(\mathbf {X}_{i-1}) \varepsilon _i-m_{\psi }(\mathbf {X}_{i-1}; \widetilde{\theta }_n)){\mathbb {1} }_{\{\mathbf {X}_{i-1}\le \mathbf {x}\}}\\= & {} {1 \over n} \sum _{i=1}^n\psi (m_{\psi }(\mathbf {X}_{i-1};\theta _0) +V(\mathbf {X}_{i-1})\varepsilon _i-m_{\psi }(\mathbf {X}_{i-1}; \widetilde{\theta }_n)){\mathbb {1} }_{\{\mathbf {X}_{i-1}\le \mathbf {x}\}}\\&+\,{1 \over n} \sum _{i=1}^n \varpi (\mathbf {X}_{i-1}) \psi '(Y_{i,n}){\mathbb {1} }_{\{\mathbf {X}_{i-1}\le \mathbf {x}\}}, \end{aligned}$$

for some \(Y_{i,n}\) lying between \(m_{\psi }(\mathbf {X}_{i-1};\theta _0)+\varpi (\mathbf {X}_{i-1}) +V(\mathbf {X}_{i-1})\varepsilon _i-m_{\psi }(\mathbf {X}_{i-1}; \widetilde{\theta }_n)\) and \(m_{\psi }(\mathbf {X}_{i-1};\theta _0) +V(\mathbf {X}_{i-1})\varepsilon _i-m_{\psi }(\mathbf {X}_{i-1}; \widetilde{\theta }_n)\). From the last equality, adding and subtracting an appropriate term and applying further first-order Taylor expansions, one can write, for all \(\mathbf {x} \in \mathbb {R}^d\),

$$\begin{aligned} { R_{n,\psi }^* (\mathbf {x}) \over \sqrt{n}}= & {} {1 \over n} \sum _{i=1}^n \left\{ \psi (m_{\psi }(\mathbf {X}_{i-1};\theta _0) +V(\mathbf {X}_{i-1})\varepsilon _i-m_{\psi }(\mathbf {X}_{i-1}; \widetilde{\theta }_n)) \right. \\&\left. -\,\psi (V(\mathbf {X}_{i-1})\varepsilon _i) \right\} {\mathbb {1} }_{\{\mathbf {X}_{i-1}\le \mathbf {x}\}}\\&+\,{1 \over n} \sum _{i=1}^n \psi (V(\mathbf {X}_{i-1})\varepsilon _i) {\mathbb {1} }_{\{\mathbf {X}_{i-1}\le \mathbf {x}\}} + {1 \over n} \sum _{i=1}^n \varpi (\mathbf {X}_{i-1}) \psi '(Y_{i,n}){\mathbb {1} }_{\{\mathbf {X}_{i-1} \le \mathbf {x}\}}\\= & {} {1 \over n} \sum _{i=1}^n \left( m_{\psi }(\mathbf {X}_{i-1};\theta _0) -m_{\psi }(\mathbf {X}_{i-1};\widetilde{\theta }_n) \right) \psi '(Z_{i,n}) {\mathbb {1} }_{\{\mathbf {X}_{i-1}\le \mathbf {x}\}}\\&+\,{1 \over n} \sum _{i=1}^n \psi (V(\mathbf {X}_{i-1})\varepsilon _i) {\mathbb {1} }_{\{\mathbf {X}_{i-1}\le \mathbf {x}\}} + {1 \over n} \sum _{i=1}^n \varpi (\mathbf {X}_{i-1}) \psi '(Y_{i,n}) {\mathbb {1} }_{\{\mathbf {X}_{i-1}\le \mathbf {x}\}}\\= & {} {1 \over n} (\theta _0- \widetilde{\theta }_n)^t \sum _{i=1}^n \mathbf {g}(\mathbf {X}_{i-1}, \dot{\theta }_n) \psi '(Z_{i,n}) {\mathbb {1} }_{\{\mathbf {X}_{i-1}\le \mathbf {x}\}}\\&+\,{1 \over n} \sum _{i=1}^n \psi (V(\mathbf {X}_{i-1})\varepsilon _i) {\mathbb {1} }_{\{\mathbf {X}_{i-1}\le \mathbf {x}\}} + {1 \over n} \sum _{i=1}^n \varpi (\mathbf {X}_{i-1}) \psi '(Y_{i,n}) {\mathbb {1} }_{\{\mathbf {X}_{i-1}\le \mathbf {x}\}}, \end{aligned}$$

for some \(Z_{i,n}\) between \(m_{\psi }(\mathbf {X}_{i-1};\theta _0) +V(\mathbf {X}_{i-1})\varepsilon _i-m_{\psi }(\mathbf {X}_{i-1}; \widetilde{\theta }_n)\) and \(V(\mathbf {X}_{i-1})\varepsilon _i\), and for some \(\dot{\theta }_n\) lying between \(\theta _0\) and \(\widetilde{\theta }_n\).

Now, by (A1)–(A3), one has:

$$\begin{aligned} \sup _{\mathbf {x} \in \mathbb {R}^d} \left| { R_{n,\psi }^* (\mathbf {x}) \over \sqrt{n}} \right|\le & {} \left( {1 \over n} \sum _{i=1}^n || \ell (\mathbf {X}_i,\mathbf {X}_{i-1}; \theta _0)|| \right) \left( {1 \over n} \sum _{i=1}^n M(\mathbf {X}_i) \right) \\&+\,{1 \over n} \sum _{i=1}^n |\psi (V(\mathbf {X}_{i-1})\varepsilon _i)| + {1 \over n} \sum _{i=1}^n |\varpi (\mathbf {X}_{i-1}) |+o_P(1). \end{aligned}$$

Then, using the same arguments as in the proof of Theorem 2, it is easy to see that the right-hand side of the above inequality converges in probability to

$$\begin{aligned} E(||\ell (\widetilde{\mathbf {X}}_1, \widetilde{\mathbf {X}}_0; \theta _0)||)\, E(M(\widetilde{\mathbf {X}}_0))+ E( |\psi (V(\widetilde{\mathbf {X}}_0) \widetilde{\varepsilon }_1)|)+ E(|\varpi (\widetilde{\mathbf {X}}_0)|). \end{aligned}$$

This completes the proof of Theorem 4. \(\square \)

Proof of Theorem 5

Under \(\mathcal {H}_0\), one has the following approximation for \(L_n\):

$$\begin{aligned} L_n = n^{-1/2} \sum _{i=1}^n Z_{i-1} \phi _{f}(\varepsilon _i) - (2n)^{-1} \sum _{i=1}^n Z_{i-1}^2 \phi _{f}^{\prime }\left( \varepsilon _i-\upsilon _{i-1}^n n^{-1/2} Z_{i-1}\right) + o_P(1), \end{aligned}$$

for some \(\upsilon _{i-1}^n \in (0,1), \ i=1, \ldots , n\). In view of (A4), we have

$$\begin{aligned}&\left| n^{-1} \sum _{i=1}^n Z_{i-1}^2 \phi _{f}^{\prime }\left( \varepsilon _i-\upsilon _{i-1}^n n^{-1/2} Z_{i-1}\right) - n^{-1} \sum _{i=1}^n Z_{i-1}^2 \phi _{f}^{\prime }(\varepsilon _i) \right| \nonumber \\&\le n^{-1/2} C_{\phi } \left( n^{-1} \sum _{i=1}^n\Big |Z_{i-1} \Big |^3 \right) . \end{aligned}$$
(53)

By Proposition 2, the term in brackets on the right-hand side of (53) converges a.s. to \(E( |\widetilde{Z}_0 |^3)\). Therefore, the right-hand side of (53) converges a.s. to 0, and one has

$$\begin{aligned} L_n = n^{-1/2} \sum _{i=1}^n Z_{i-1} \phi _{f}(\varepsilon _i) - (2n)^{-1} \sum _{i=1}^n Z_{i-1}^2 \phi _{f}^{\prime }(\varepsilon _i) + o_P(1). \end{aligned}$$
(54)

By Proposition 2 again, the second term on the right-hand side of (54) converges a.s. to the negative number \(-\lambda ^2/2\), where

$$\begin{aligned} \lambda ^2= I(f) E\left[ {\varpi ^2(\widetilde{X}_0) \over V^2(\widetilde{X}_0)} \right] . \end{aligned}$$

Assumption (A4) implies that \(\int \phi _f(x) f(x) dx =0\), \(\int \phi _f^2(x) f(x) dx = \int \phi _f^{\prime }(x) f(x) dx \) and \(\int x\phi _f(x) f(x) dx =1\). Hence, under \(\mathcal {H}_0\), the first term on the right-hand side of (54) is centered. As for its convergence, proceeding as in the study of the finite-dimensional distributions of \(R_{n,\psi }\), one sees that, under \(\mathcal {H}_0\), it converges in distribution to a \({\mathcal {N}}(0, \lambda ^2)\) random variable. It follows that, under \(\mathcal {H}_0\), \(L_n\) converges in distribution to \({\mathcal {N}}(-\lambda ^2/2, \lambda ^2)\) as \(n\rightarrow \infty \). Thus, the contiguity of the sequences \(\{ \mathcal {H}_{1,n}: n \ge 1 \}\) and \(\{\mathcal {H}_{0,n}=\mathcal {H}_0: n \ge 1 \}\) is a direct consequence of Theorem 1 and Proposition 7 of Le Cam (1986). \(\square \)
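For the reader's convenience, the contiguity argument concluding the preceding proof rests on the following elementary computation: since, under \(\mathcal {H}_0\), \(L_n\) converges in distribution to \({\mathcal {N}}(-\lambda ^2/2, \lambda ^2)\), the exponential of the limiting log-likelihood ratio has unit expectation,

$$\begin{aligned} E\left[ \exp \left\{ {\mathcal {N}}(-\lambda ^2/2,\, \lambda ^2) \right\} \right] = \exp \left( -{\lambda ^2 \over 2}+{\lambda ^2 \over 2}\right) =1, \end{aligned}$$

which is precisely the condition ensuring mutual contiguity of \(\{\mathcal {H}_{0,n}\}\) and \(\{\mathcal {H}_{1,n}\}\) (Le Cam's first lemma).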

Proof of Theorem 6

Here again, we have to study the tightness and the finite-dimensional distributions of \(R^{*}_{n,\psi }\) under \(\mathcal {H}_{1,n}\). The tightness of \(R^{*}_{n,\psi }\) under \(\mathcal {H}_{1,n}\) readily follows from an application of Theorem 4.3.4 of Sen (1981), together with the contiguity of \(\{\mathcal {H}_{0,n}=\mathcal {H}_0: n \ge 1 \}\) and \(\{\mathcal {H}_{1,n}: n \ge 1 \}\).

It is easy to check that under \({\mathcal {H}}_0\), for \(L_n\) given by (26), and for all \(x \in \mathbb {R}\),

$$\begin{aligned} \text{ Cov }(R_{n,\psi }^*(x), L_n)= & {} n^{-1} \sum _{i=1}^n \sum _{j=1}^n \text{ Cov }\left( \psi (\varepsilon _i){\mathbb {1} }_{\{X_{i-1}\le x\}}, Z_{j-1} \phi _{f}(\varepsilon _j) \right) \\&-\, {1 \over 2}n^{-3/2} \sum _{i=1}^n \sum _{j=1}^n \text{ Cov }\left( \psi (\varepsilon _i){\mathbb {1} }_{\{X_{i-1}\le x\}}, Z_{j-1}^2 \phi _{f}^{\prime }(\varepsilon _j) \right) \\&-\,n^{-1}\varDelta ^t(x; \theta _0) \sum _{i=1}^n \sum _{j=1}^n \text{ Cov } \left( \ell ({X}_i,{X}_{i-1};\theta _0), Z_{j-1} \phi _{f}(\varepsilon _j) \right) \\&+\,{1 \over 2}n^{-3/2} \varDelta ^t(x; \theta _0) \sum _{i=1}^n \sum _{j=1}^n \text{ Cov }\left( \ell ({X}_i,{X}_{i-1};\theta _0), Z_{j-1}^2 \phi _{f}^{\prime }(\varepsilon _j) \right) +o(1). \end{aligned}$$

Proceeding as in the study of the finite-dimensional distributions of \(R_{n,\psi }\), it is easy to see that

$$\begin{aligned} s_{\psi }(x)= & {} \lim _{n \rightarrow \infty } \text{ Cov }(R_{n,\psi }^*(x), L_n)\\= & {} E \left[ \psi (\widetilde{\varepsilon }_1) \phi _{f}(\widetilde{\varepsilon }_1)\widetilde{Z}_0 {\mathbb {1} }_{\{\widetilde{X}_0\le x\}}\right] +2\sum _{j> 1} E \left[ \psi (\widetilde{\varepsilon }_1) {\mathbb {1} }_{\{\widetilde{X}_0\le x\}} \widetilde{Z}_{j-1} \phi _{f}(\widetilde{\varepsilon }_j)\right] \\&-\, \varDelta ^t(x; \theta _0) \Bigg \{E \left[ \ell (\widetilde{X}_1,\widetilde{X}_0;\theta _0) \widetilde{Z}_0^2 \phi _{f}^{\prime }(\widetilde{\varepsilon }_1) \right] \\&+\,2\sum _{j > 1} E \left[ \ell (\widetilde{X}_1,\widetilde{X}_0;\theta _0) \widetilde{Z}_{j-1}^2 \phi _{f}^{\prime }(\widetilde{\varepsilon }_j) \right] \Bigg \}. \end{aligned}$$

By (A1)(a), writing the third expectation on the right-hand side of the above equality in terms of the conditional expectation given \(\widetilde{X}_0\), one sees that this expectation vanishes. Also, by our assumptions on the model, the first, second and fourth terms are respectively equal to

$$\begin{aligned}&E \left[ \psi (\widetilde{\varepsilon }_1) \phi _{f}(\widetilde{\varepsilon }_1) \right] E \left[ \widetilde{Z}_0 {\mathbb {1} }_{\{\widetilde{X}_0\le x\}}\right] \\&\quad 2 E \left[ \psi (\widetilde{\varepsilon }_1) \right] E \left[ \phi _{f}(\widetilde{\varepsilon }_1)\right] \sum _{j> 1}E \left[ {\mathbb {1} }_{\{\widetilde{X}_0\le x\}} \widetilde{Z}_{j-1} \right] \\&\qquad -\,2 E \left[ \phi _{f}'(\widetilde{\varepsilon }_1)\right] \varDelta ^t(x; \theta _0)\sum _{j > 1}E \left[ \ell (\widetilde{X}_1,\widetilde{X}_0;\theta _0) \widetilde{Z}_{j-1}^2 \right] . \end{aligned}$$

Collecting all these terms, and writing them in terms of integrals yields the expression of \(s_{\psi }(x)\) given by (28).

From what precedes, it follows that, for all \(k \in \mathbb {N}^*\) and for all \(x_1, \ldots , x_k \in \mathbb {R}\), the joint limiting distribution of the random vector \((R_{n,\psi }^*(x_1), \ldots , R_{n,\psi }^*(x_k), L_n)^t\) is Gaussian with mean \((0, \ldots , 0, - \lambda ^2/2)^t\) and covariance matrix \(\left( \begin{array}{lr} \varPsi &{} \vartheta \\ \vartheta ^t &{} \lambda ^2 \end{array}\right) \), where \(\varPsi =(K_{\psi }^*(x_{\ell },x_m): 1 \le \ell ,m \le k)\) and \(\vartheta =(s_{\psi }(x_1), \ldots , s_{\psi }(x_k))^t\), with \(K_{\psi }^*(\cdot ,\cdot )\) given by (18) and \(s_{\psi }(\cdot )\) given by (28). Whence, by Le Cam's third lemma, under \({\mathcal {H}}_{1,n}\), the finite-dimensional distributions of \(R_{n,\psi }^*\) converge to those of \(R_{\infty ,\psi }^*\). \(\square \)
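For the reader's convenience, the version of Le Cam's third lemma used in the preceding proof can be stated as follows: if, under \(\mathcal {H}_0\), the vector \((T_n^t, L_n)^t\) converges in distribution to

$$\begin{aligned} {\mathcal {N}}_{k+1}\left( \left( \begin{array}{c} \mu \\ -\lambda ^2/2 \end{array}\right) , \left( \begin{array}{lr} \varSigma &{} \tau \\ \tau ^t &{} \lambda ^2 \end{array}\right) \right) , \end{aligned}$$

then, under \(\mathcal {H}_{1,n}\), \(T_n\) converges in distribution to \({\mathcal {N}}_{k}(\mu +\tau , \varSigma )\). Applied with \(T_n=(R_{n,\psi }^*(x_1), \ldots , R_{n,\psi }^*(x_k))^t\), \(\mu =0\), \(\varSigma =\varPsi \) and \(\tau =\vartheta \), it yields finite-dimensional limits that are Gaussian with mean vector \(\vartheta \) and covariance matrix \(\varPsi \), which is the announced convergence to \(R_{\infty ,\psi }^*\).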


Cite this article

Ngatchou-Wandji, J., Puri, M.L., Harel, M. et al. Testing nonstationary and absolutely regular nonlinear time series models. Stat Inference Stoch Process 22, 557–593 (2019). https://doi.org/10.1007/s11203-018-9194-8
