
On the large-sample behavior of two estimators of the conditional copula under serially dependent data

Published in Metrika.

Abstract

The conditional copula of a random pair \((Y_1,Y_2)\) given the value taken by some covariate \(X \in {\mathbb {R}}\) is the function \(C_x:[0,1]^2 \rightarrow [0,1]\) such that \({\mathbb {P}}(Y_1 \le y_1, Y_2 \le y_2 | X=x) = C_x \{ {\mathbb {P}}(Y_1\le y_1 | X=x), {\mathbb {P}}(Y_2\le y_2 | X=x) \}\). In this note, the weak convergence of the two estimators of \(C_x\) proposed by Gijbels et al. (Comput Stat Data Anal 55(5):1919–1932, 2011) is established under \(\alpha \)-mixing. It is shown that under appropriate conditions on the weight functions and on the mixing coefficients, the limiting processes are the same as those obtained by Veraverbeke et al. (Scand J Stat 38(4):766–780, 2011) under the i.i.d. setting. The performance of these estimators in small sample sizes is investigated with simulations.
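The defining relation above can be illustrated by simulation. The following is a minimal sketch; the Clayton family and the covariate effect \(\theta (x) = 1 + x^2\) are illustrative assumptions, not choices made in the paper. Since \((U,V) = \{F_{1x}(Y_1), F_{2x}(Y_2)\}\) given \(X=x\) has distribution \(C_x\), the conditional probability \({\mathbb {P}}(Y_1 \le y_1, Y_2 \le y_2 | X=x)\) reduces to \(C_x(a,b)\) with \(a = F_{1x}(y_1)\), \(b = F_{2x}(y_2)\):

```python
import numpy as np

# Monte Carlo check of P(Y1<=y1, Y2<=y2 | X=x) = C_x{F_1x(y1), F_2x(y2)}.
# Clayton copula and theta(x) = 1 + x^2 are illustrative assumptions.

def clayton_cdf(u, v, theta):
    # Clayton copula C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta)
    return (u ** -theta + v ** -theta - 1.0) ** (-1.0 / theta)

def clayton_sample(n, theta, rng):
    # standard conditional-inversion sampler for the Clayton copula
    u = rng.uniform(size=n)
    w = rng.uniform(size=n)
    v = ((w ** (-theta / (1.0 + theta)) - 1.0) * u ** -theta + 1.0) ** (-1.0 / theta)
    return u, v

rng = np.random.default_rng(42)
x = 0.8
theta = 1.0 + x ** 2                  # hypothetical covariate effect
# (U, V) = (F_1x(Y1), F_2x(Y2)) given X = x is distributed as C_x
U, V = clayton_sample(200_000, theta, rng)
a, b = 0.4, 0.7                       # a = F_1x(y1), b = F_2x(y2)
print(np.mean((U <= a) & (V <= b)))   # empirical conditional probability
print(clayton_cdf(a, b, theta))       # ~ same value, by the defining relation
```

The two printed values agree up to Monte Carlo error, which is the content of the identity.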


References

  • Balacheff S, Dupont G (1980) Normalité asymptotique des processus empiriques tronqués et des processus de rang (cas multidimensionnel mélangeant). In: Nonparametric asymptotic statistics (Proc. Conf., Rouen, 1979) (French), Lecture Notes in Math., vol 821, Springer, Berlin, pp 19–45

  • Bickel PJ, Wichura MJ (1971) Convergence criteria for multiparameter stochastic processes and some applications. Ann Math Stat 42:1656–1670

  • Bouezmarni T, Camirand Lemyre F, Quessy JF (2019) Supplementary material for the paper “On the large-sample behavior of two estimators of the conditional copula under serially dependent data”. Tech. Rep. 2019-166, Département de mathématiques, Université de Sherbrooke, Sherbrooke, QC, Canada

  • Bücher A, Kojadinovic I (2016) A dependent multiplier bootstrap for the sequential empirical copula process under strong mixing. Bernoulli 22(2):927–968

  • Bücher A, Volgushev S (2013) Empirical and sequential empirical copula processes under serial dependence. J Multivar Anal 119:61–70

  • Carrasco M, Chen X (2002) Mixing and moment properties of various GARCH and stochastic volatility models. Econom Theory 18(1):17–39

  • Doukhan P (1994) Mixing: properties and examples. Lecture Notes in Statistics, vol 85. Springer, New York

  • Fan J, Gijbels I (1996) Local polynomial modelling and its applications. Monographs on statistics and applied probability: 66. Chapman & Hall/CRC, Boca Raton

  • Gijbels I, Veraverbeke N, Omelka M (2011) Conditional copulas, association measures and their applications. Comput Stat Data Anal 55(5):1919–1932

  • Masry E (1996) Multivariate local polynomial regression for time series: uniform strong consistency and rates. J Time Ser Anal 17(6):571–599

  • Masry E, Fan J (1997) Local polynomial estimation of regression functions for mixing processes. Scand J Stat 24(2):165–179

  • Meitz M, Saikkonen P (2008) Ergodicity, mixing, and existence of moments of a class of Markov models with applications to GARCH and ACD models. Econom Theory 24(5):1291–1320

  • Nelsen RB (2006) An introduction to copulas, 2nd edn. Springer Series in Statistics, Springer, New York

  • Patton AJ (2006) Modelling asymmetric exchange rate dependence. Int Econom Rev 47(2):527–556

  • van der Vaart AW, Wellner JA (1996) Weak convergence and empirical processes: with applications to statistics. Springer Series in Statistics, Springer, New York

  • Veraverbeke N, Omelka M, Gijbels I (2011) Estimation of a conditional copula and association measures. Scand J Stat 38(4):766–780

Author information

Corresponding author

Correspondence to Félix Camirand Lemyre.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This research was supported in part by individual grants from the Natural Sciences and Engineering Research Council of Canada (NSERC), by the Fonds de recherche du Québec—Nature et technologies (FRQNT) and by Australian Research Council Discovery Project DP140100125.

Appendix A: Proof of the main theoretical results

This section is devoted to the proofs of Propositions 1, 2 and 3. Some of the arguments rely on Lemmas 1–4, whose proofs can be found in the technical report by Bouezmarni et al. (2019). Lemma 1, stated below, is instrumental in establishing Propositions 1–3: it identifies the limiting behaviour of the local linear system of weights under \(\alpha \)-mixing as the sample size gets large.

Lemma 1

Under Assumptions (\(\mathcal {S}\)), (\(\mathcal {LL}\)) and (\(\mathcal {N}\)), one has almost surely that as \(n \rightarrow \infty \),

$$\begin{aligned} \sup _{z \in J_x} \sup _{u : K(u)>0} {1\over K(u)} \left| \mathcal {K}_{zn}(u) - { K(u) \over f_X(z) } \right| \rightarrow 0. \end{aligned}$$

1.1 A.1: Proof of Proposition 1

According, for instance, to Theorem 1.5.4 of van der Vaart and Wellner (1996), weak convergence in \(\ell ^\infty ({\mathbb {R}}^2)\) is equivalent to finite-dimensional convergence combined with asymptotic tightness. That the finite-dimensional distributions of \({\mathbb {H}}_{xh}\) converge to those of \({\mathbb {H}}_x\) under \(\alpha \)-mixing is a consequence of Theorem 6 of Masry and Fan (1997) and of the Cramér–Wold device. In particular, one deduces

$$\begin{aligned} \mathrm{E}\left\{ {\mathbb {H}}_x(y_1,y_2) \right\} = { \kappa \over 2 } \left( \int _{\mathbb {R}}z^2 K(z) \, \mathrm{d}z \right) {\ddot{H}}_x(y_1,y_2) \end{aligned}$$

and for \(\sigma ^2_{H_x}\) defined in (5),

$$\begin{aligned} \mathrm{Cov}\left\{ {\mathbb {H}}_x(y_1,y_2), {\mathbb {H}}_x(y_1',y_2') \right\} = {1 \over f_X(x) } \left( \int _{\mathbb {R}}\{ K(z) \}^2 \, \mathrm{d}z \right) \sigma ^2_{H_x}(y_1,y_2,y_1',y_2'). \end{aligned}$$

In order to show the asymptotic tightness of \({\mathbb {H}}_{xh}\), define \(Z_{xh}^\star = \sqrt{nh} \left( {\bar{H}}_{xh} - H_x \right) \) and \(Z_{xh} = \sqrt{nh} \left( H_{xh} - {\bar{H}}_{xh} \right) \), where

$$\begin{aligned} {\bar{H}}_{xh}(y_1,y_2) = { 1 \over nh } \sum _{i=1}^n \mathcal {K}_{xn} \left( X_i - x \over h \right) \, H_{X_i}(y_1,y_2). \end{aligned}$$

One can then write \({\mathbb {H}}_{xh} = Z_{xh}^\star + Z_{xh}\), so that the asymptotic tightness of \({\mathbb {H}}_{xh}\) will follow from that of both \(Z_{xh}^\star \) and \(Z_{xh}\). For \(Z_{xh}^\star \), note that a Taylor expansion of order two allows one to write that for some \(\zeta _i\) between \(X_i\) and x,

$$\begin{aligned} H_{X_i}(y_1,y_2) = H_x(y_1,y_2) + \left( X_i - x \right) \dot{H}_x(y_1,y_2) + { 1 \over 2} \left( X_i - x \right) ^2 {\ddot{H}}_{\zeta _i}(y_1,y_2). \end{aligned}$$

Using the fact that

$$\begin{aligned} \frac{1}{nh} \sum _{i=1}^n \mathcal {K}_{xn} \left( X_i - x \over h \right) = 1 \quad \text{ and } \quad \frac{1}{nh} \sum _{i=1}^n \left( X_i - x \right) \mathcal {K}_{xn} \left( X_i - x \over h \right) = 0, \end{aligned}$$

one deduces from straightforward computations that

$$\begin{aligned} Z_{xh}^\star (y_1,y_2) = { 1 \over \sqrt{nh} } \sum _{i=1}^n { 1 \over 2 } \left( X_i-x\right) ^2 \mathcal {K}_{xn} \left( X_i - x \over h \right) {\ddot{H}}_{\zeta _i}(y_1,y_2). \end{aligned}$$
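The two weight identities used in this step can be checked numerically. The sketch below assumes the standard local linear form of the weights, as in Fan and Gijbels (1996); the paper's exact definition of \(\mathcal {K}_{xn}\) is given in its main text. With \(S_j = (nh)^{-1} \sum _i K\{(X_i-x)/h\} (X_i-x)^j\), the weights \(K\{(X_i-x)/h\} \{S_2 - (X_i-x) S_1\}/(S_0 S_2 - S_1^2)\) satisfy both identities exactly:

```python
import numpy as np

# Numerical check of the two local-linear weight identities, assuming the
# standard local-linear construction (Fan and Gijbels 1996); the paper's
# exact definition of K_xn appears in its main text.

def local_linear_weights(X, x, h):
    K = lambda u: 0.75 * np.clip(1.0 - u ** 2, 0.0, None)  # Epanechnikov kernel
    Ku = K((X - x) / h)
    n = len(X)
    # S_j = (1/nh) sum_i K{(X_i - x)/h} (X_i - x)^j, j = 0, 1, 2
    S0 = np.sum(Ku) / (n * h)
    S1 = np.sum(Ku * (X - x)) / (n * h)
    S2 = np.sum(Ku * (X - x) ** 2) / (n * h)
    # built so that the constant and linear moment conditions hold exactly
    return Ku * (S2 - (X - x) * S1) / (S0 * S2 - S1 ** 2)

rng = np.random.default_rng(0)
X = rng.normal(size=500)
x, h = 0.1, 0.3
w = local_linear_weights(X, x, h)
n = len(X)
print(np.sum(w) / (n * h))            # = 1 (first identity, up to float error)
print(np.sum((X - x) * w) / (n * h))  # = 0 (second identity, up to float error)
```

Both identities are algebraic consequences of the construction, which is why the linear Taylor term drops out of \(Z_{xh}^\star \).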

Since Assumptions \((\mathcal {S})\), \((\mathcal {LL})\) and \((\mathcal {N})\) are satisfied, and because \(z\mapsto {\ddot{H}}_z\) is uniformly continuous in a neighbourhood of x (see Condition \((\mathcal {H})\)), one can invoke Lemma 1 and write

$$\begin{aligned} Z_{xh}^\star (y_1,y_2)= & {} { 1 \over \sqrt{nh} } \sum _{i=1}^n { 1 \over 2 } \left( X_i-x\right) ^2 \left\{ \mathcal {K}_{xn} \left( X_i - x \over h \right) \right. \\&\left. - { 1 \over f_X(x) } \, K \left( X_i - x \over h \right) \right\} {\ddot{H}}_{\zeta _i}(y_1,y_2) \\&+ \, { 1 \over 2 \, f_X(x) } \, { 1 \over \sqrt{nh} } \sum _{i=1}^n \left( X_i-x\right) ^2 K \left( X_i - x \over h \right) {\ddot{H}}_{\zeta _i}(y_1,y_2) \\= & {} { 1 + o_{as}(1) \over 2 \, f_X(x) } \, { 1 \over \sqrt{nh} } \sum _{i=1}^n \left( X_i-x\right) ^2 K \left( X_i - x \over h \right) {\ddot{H}}_{\zeta _i}(y_1,y_2) \\= & {} { {\ddot{H}}_x(y_1,y_2)+ o_{as}(1) \over 2 \, f_X(x) } \, { 1 \over \sqrt{nh} } \sum _{i=1}^n \left( X_i-x\right) ^2 K \left( X_i - x \over h \right) \\= & {} { {\ddot{H}}_x(y_1,y_2) + o_{as}(1) \over 2 \, f_X(x) } \sqrt{n h^5} \, S_{n,2}(x). \end{aligned}$$

Now according to Corollary 1 of Masry (1996), Assumptions (\(\mathcal {S}\)), (\(\mathcal {LL}\)) and (\(\mathcal {N}\)) ensure that

$$\begin{aligned} S_{n,2}(x) = f_X(x) \int _{\mathbb {R}}z^2 K(z) \, \mathrm{d}z + o_{as}(1). \end{aligned}$$

Since Assumption \((\mathcal {N})\) ensures that \(n h^5 \rightarrow \kappa ^2 < \infty \) as \(n\rightarrow \infty \), one can conclude that

$$\begin{aligned} Z_{xh}^\star (y_1,y_2) = { \kappa {\ddot{H}}_x(y_1,y_2) \over 2 } \, \int _{\mathbb {R}}z^2 K(z) \, \mathrm{d}z \, + o_{as}(1). \end{aligned}$$
(7)

In view of Assumption (\(\mathcal {H}\)), one can conclude that \(Z_{xh}^\star \) is asymptotically tight.

Now to show that \(Z_{xh}\) is also asymptotically tight, consider for a fixed \(x \in {\mathbb {R}}\) and \({\mathbf {y}}= (y_1,y_2)\), \({\mathbf {y}}' = (y_1',y_2')\), the semi-metric

$$\begin{aligned} \rho ({\mathbf {y}},{\mathbf {y}}') = \left| F_{1x}(y_1) - F_{1x}(y_1') \right| + \left| F_{2x}(y_2) - F_{2x}(y_2') \right| \end{aligned}$$

and define for \(\delta > 0\), \(f : {\mathbb {R}}^2 \rightarrow {\mathbb {R}}\) bounded and \(T \subseteq {\mathbb {R}}^2\),

$$\begin{aligned} \mathfrak {W}_\delta (f,T) = \sup _{{\mathbf {y}},{\mathbf {y}}' \in T;\rho ({\mathbf {y}},{\mathbf {y}}') < \delta } \left| f({\mathbf {y}}) - f({\mathbf {y}}') \right| . \end{aligned}$$

The modulus of \(\rho \)-equicontinuity of \(Z_{xn}\) is then given by \(\mathfrak {W}_\cdot ( Z_{xn}, {\mathbb {R}}^2)\). For a fixed \({\mathbf {y}}\in {\mathbb {R}}^2\), the random variable \(Z_{xn}({\mathbf {y}})\) is asymptotically tight in \({\mathbb {R}}\), so to prove that the process \(Z_{xn}\) is asymptotically tight in \(\ell ^\infty ({\mathbb {R}}^2)\), it suffices to show (see Theorem 1.5.7 in van der Vaart and Wellner (1996)) that for every \(\epsilon >0\),

$$\begin{aligned} \lim _{\delta \downarrow 0} \lim _{n\rightarrow \infty } {\mathbb {P}}\left\{ \mathfrak {W}_\delta ( Z_{xn}, {\mathbb {R}}^2) > \epsilon \right\} =0\,. \end{aligned}$$
(8)
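For intuition, the modulus \(\mathfrak {W}_\delta (f,T)\) can be computed directly on a finite grid. The sketch below uses a toy smooth function and uniform conditional margins (illustrative choices only, not objects from the paper):

```python
import numpy as np
from itertools import combinations

# Direct computation of the modulus W_delta(f, T) over a finite grid, with
# a toy smooth f and uniform conditional margins (illustrative choices).

def modulus(f, T, F1, F2, delta):
    # rho(y, y') = |F1(y1) - F1(y1')| + |F2(y2) - F2(y2')|
    rho = lambda y, yp: abs(F1(y[0]) - F1(yp[0])) + abs(F2(y[1]) - F2(yp[1]))
    diffs = [abs(f(y) - f(yp)) for y, yp in combinations(T, 2)
             if rho(y, yp) < delta]
    return max(diffs, default=0.0)

F1 = F2 = lambda t: t                 # uniform margins on [0, 1] (toy choice)
f = lambda y: y[0] * y[1]             # smooth toy function on [0, 1]^2
T = [(i / 10, j / 10) for i in range(11) for j in range(11)]
print(modulus(f, T, F1, F2, 0.15))    # only one-step grid pairs qualify
print(modulus(f, T, F1, F2, 0.05))    # no distinct pairs with rho < 0.05
```

As \(\delta \) shrinks, fewer pairs qualify and the modulus decreases, which is exactly the quantity controlled in (8).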

Note that if the available observations were serially independent, one could proceed as in Veraverbeke et al. (2011) and use the empirical process machinery developed in van der Vaart and Wellner (1996) to show that (8) holds by checking a few conditions on the process \(Z_{xh}\). Under a mixing assumption, however, these arguments can no longer be used. To overcome this problem, and following for example Bücher and Kojadinovic (2016) (see the proof of Lemma A.3 therein), a possibility is to proceed in the spirit of Theorem 3 of Bickel and Wichura (1971) and to study the increments of the process \(Z_{xn}\) on blocks. Specifically, for an arbitrary non-empty rectangle \(A \subseteq {\mathbb {R}}^2\), define

$$\begin{aligned} {\mathbb {H}}_{xh}(A) = { 1 \over \sqrt{nh} } \sum _{i=1}^n \mathcal {K}_{xn}\left( \frac{X_i-x}{h}\right) \left[ {\mathbb {I}}\left\{ (Y_{1i},Y_{2i}) \in A \right\} - \nu _{X_i}(A) \right] , \end{aligned}$$

where \(\nu _x(A) = {\mathbb {P}}\{ (Y_{1i},Y_{2i}) \in A | X_i = x \}\). The next result, whose proof can be found in Bouezmarni et al. (2019), provides a bound on the moment of order six of \({\mathbb {H}}_{xh}(A)\).

Lemma 2

Under Assumptions \(({\mathcal {S}})\), \(({\mathcal {H}})\) and \((\mathcal {LL})\), one can find a finite constant \(\omega >0\) such that for all b satisfying \(0< b < \min \{ (a-6)/a, 2/5 \}\),

$$\begin{aligned} \mathrm{E}\left\{ \left| {\mathbb {H}}_{xh}(A) \right| ^6 \right\}\le & {} \omega \left\{ \frac{\nu _{x}(A)}{n^2h^2}+ \frac{ \nu _{x}(A)^{2-\frac{4}{a}}}{nh} + \nu _{x}(A)^{3-\frac{6}{a}}+ \mathcal {J}_n(h,b) \right\} \,, \end{aligned}$$

where \(\mathcal {J}_n(h,b) = h^4 h^{2b} + {h^b}({nh})^{-1} + h^{5b}(nh)^{-2} \).

Now for \(\gamma \in (0,1/2)\), define the product space \(T_\gamma = T_\gamma ^{(1)} \times T_\gamma ^{(2)}\), where for \(\kappa _\gamma = \lfloor (nh)^{1/2+\gamma } \rfloor \),

$$\begin{aligned} T^{(j)}_\gamma= & {} \left\{ F_{jx}^{-1}(0), F_{jx}^{-1} \left( 1 \over \kappa _\gamma \right) , \ldots , F_{jx}^{-1}(1) \right\} , \quad j \in \{ 1,2 \}. \end{aligned}$$

Lemma 3

Under Assumptions \((\mathcal {S})\), \((\mathcal {L}\mathcal {L})\) and \((\mathcal {N})\), one has for n sufficiently large that for any \(\epsilon >0\) and \(\delta > 2\kappa _\gamma ^{-1}\),

$$\begin{aligned} {\mathbb {P}}\left\{ \mathfrak {W}_\delta (Z_{xn},{\mathbb {R}}^2)> \epsilon \right\} \le {\mathbb {P}}\left\{ \mathfrak {W}_{2\delta }(Z_{xn}, T_{\gamma }) > {\epsilon \over 3} \right\} . \end{aligned}$$

Lemma 3 entails that (8) will hold if for any \(\epsilon >0\),

$$\begin{aligned} \lim _{\delta \downarrow 0} \lim _{n\rightarrow \infty } {\mathbb {P}}\left\{ \mathfrak {W}_\delta ( Z_{xn}, T_{\gamma } ) > \epsilon \right\} = 0. \end{aligned}$$
(9)

According to Problem 2.1.5 in van der Vaart and Wellner (1996), Eq. (9) holds if and only if for any sequence \(\delta _n \downarrow 0\),

$$\begin{aligned} \lim _{n\rightarrow \infty } {\mathbb {P}}\left\{ \mathfrak {W}_{\delta _n}( Z_{xn}, T_{\gamma } ) > \epsilon \right\} = 0. \end{aligned}$$

In order to show that it is the case, one proceeds as in Bücher and Kojadinovic (2016) and uses Lemma 2 of Balacheff and Dupont (1980). To this end, define \(\lambda _x(B_1\times B_2 ) = {\mathbb {P}}( Y_1 \in B_1 | X=x) \times {\mathbb {P}}( Y_2 \in B_2 | X=x)\) for any \(B_1,B_2\subset {\mathbb {R}}\). At this point, letting \(\mu _x = \nu _x + \lambda _x\), note that for any rectangle \(A_\gamma \) whose corner points are all distinct and lie in \(T_\gamma \),

$$\begin{aligned} \left( 1 \over nh \right) ^{1+2\gamma } \le \mu _x(A_\gamma ) \le 2. \end{aligned}$$

As a consequence, the Markov inequality and Lemma 2 entail that for any \(\eta >0\) and \(\beta \) such that \(0< \beta < (a-5)/a\),

$$\begin{aligned} {\mathbb {P}}\left\{ \left| {\mathbb {H}}_{xh}(A_\gamma ) \right| > \eta \right\}\le & {} \frac{ \omega }{\eta ^6} \left\{ \frac{\mu _{x}(A_\gamma ) }{n^2h^2} + \frac{ \mu _{x}(A_\gamma )^{2-\frac{4}{a}} }{nh} + \mu _{x}(A_\gamma )^{3-\frac{6}{a}} + \mathcal {J}_n(h,b) \right\} \\\le & {} \frac{ \omega }{\eta ^6} \, \mu _x(A_\gamma )^{1+\beta } \left\{ \frac{\mu _{x}(A_\gamma )^{-\beta } }{n^2h^2} \right. \\&\left. + \, \frac{ \mu _{x}(A_\gamma )^{\frac{1}{a}}}{nh} + \mu _{x}(A_\gamma )^{1-\frac{1}{a}} + \mu _{x}(A_\gamma )^{-1-\beta } \mathcal {J}_n(h,b) \right\} \\\le & {} \frac{ \omega }{\eta ^6} \, \mu _x(A_\gamma )^{1+\beta } \left\{ \frac{(nh)^{(1+2\gamma )\beta } }{n^2h^2}+ 4\right. \\&\left. +(nh)^{(1+2\gamma )(1+\beta )}\mathcal {J}_n(h,b) \right\} . \end{aligned}$$

From Assumption \((\mathcal {N})\), \(nh^5 \rightarrow \kappa ^2\), so that \(nh^5\) is bounded above by some positive and finite constant \(\mathrm{cst}\) as \(n \rightarrow \infty \). It follows that

$$\begin{aligned} (nh)^{(1+2\gamma )(1+\beta )} h^4 h^{2b} \le \mathrm{cst}(h^{-4})^{\beta +2\gamma \beta + 2\gamma } h^{2b} =\mathrm{cst}(h^2)^{b - 2 (\beta +2\gamma \beta + 2\gamma )} \,. \end{aligned}$$

In addition, since \(h<1\) and \(nh>1\) for n sufficiently large, one has

$$\begin{aligned} (nh)^{(1+2\gamma )(1+\beta )} \left\{ \frac{h^{5b}}{(nh)^2} + \frac{h^b}{nh} \right\}\le & {} \mathrm{cst}\, (nh)^{2\gamma +\beta + 2\gamma \beta }h^b \\\le & {} \mathrm{cst}\, h^{b - 4(2\gamma +\beta + 2\gamma \beta )}. \end{aligned}$$

It follows that for any \(\beta ,\gamma \in (0,b/16)\) and n sufficiently large,

$$\begin{aligned} (nh)^{(1+2\gamma )(1+\beta )}\mathcal {J}_n(h,b)<1 \,. \end{aligned}$$
(10)

One can then write

$$\begin{aligned} {\mathbb {P}}\left\{ \left| {\mathbb {H}}_{xh}(A_\gamma ) \right| > \eta \right\} \le { 6 \omega \over \eta ^6 } \, \mu _x(A_\gamma )^{1+\beta } \, . \end{aligned}$$
(11)

Now let \({\widetilde{\mu }}_x\) be the finite positive measure such that for \((y_1,y_2) \in T_\gamma \), \({\widetilde{\mu }}_x (\{(y_1,y_2)\})\) vanishes if \(F_{1x}(y_1) = 0\) or \(F_{2x}(y_2) = 0\), and \({\widetilde{\mu }}_x (\{(y_1,y_2)\}) = \mu _x (]\underline{y}_1,y_1] \times ]\underline{y}_2,y_2])\) otherwise, where \(\underline{y}_j = \max \{ \xi \in T_\gamma ^{(j)}: \xi < y_j \}\). Since \( \mu _x(A_\gamma ) = {\widetilde{\mu }}_x ( A_\gamma \cap T_\gamma )\), the inequality in (11) may be expressed equivalently as

$$\begin{aligned} {\mathbb {P}}\left\{ \left| {\mathbb {H}}_{xh}(A_\gamma ) \right| > \eta \right\} \le { 6 \omega \over \eta ^6 } \, {\widetilde{\mu }}_x(A_\gamma \cap T_\gamma )^{1+\beta }. \end{aligned}$$

For \(\delta _n \downarrow 0\), define \(\delta _n'\downarrow 0\) in such a way that for each \(n \in \mathbb {N}\), \(\delta '_n \in \{ 1, 1/2, 1/3, 1/4, \ldots \}\) and \(\delta '_n \ge \max ( \delta _n, \kappa _\gamma ^{-1} )\). From Lemma 2 of Balacheff and Dupont (1980), one deduces by a straightforward reparametrization that there exists a constant \(\vartheta = \vartheta (\epsilon ,\beta ) > 0\) such that

$$\begin{aligned}&{\mathbb {P}}\left\{ \mathfrak {W}_{\delta _n}( Z_{xn}, T_{\gamma } )> \epsilon \right\} \\&\quad \le {\mathbb {P}}\left\{ \mathfrak {W}_{\delta '_n}( Z_{xn}, T_{\gamma } ) > \epsilon \right\} \\&\quad \le \vartheta (\epsilon ,\beta ) \, {\widetilde{\mu }}_x(T_\gamma ) \left\{ \sup _{\begin{array}{c} y< y' \in T_\gamma ^{(1)}\\ F_{1x}(y') - F_{1x}(y) \le 3 \delta '_n \end{array}} {\widetilde{\mu }}_x\left( ]y,y'] \cap T_{\gamma }^{(1)} \times T_\gamma ^{(2)} \right) \right. \\&\qquad \left. + \, \sup _{\begin{array}{c} y< y' \in T_\gamma ^{(2)}\\ F_{2x}(y') - F_{2x}(y) \le 3 \delta '_n \end{array}} {\widetilde{\mu }}_x\left( T_\gamma ^{(1)} \times ]y,y'] \cap T_{\gamma }^{(2)} \right) \right\} ^\beta . \end{aligned}$$

One can then conclude that as \(n \rightarrow \infty \),

$$\begin{aligned} {\mathbb {P}}\left\{ \mathfrak {W}_{\delta _n}(Z_{xn}, T_{\gamma }) > \epsilon \right\}\le & {} \vartheta (\epsilon ,\beta ) \, \mu _x({\mathbb {R}}^2) \left\{ \sup _{\begin{array}{c} y< y' \in T_\gamma ^{(1)} \\ F_{1x}(y') - F_{1x}(y) \le 3 \delta '_n \end{array}} \mu _x \left( ]y,y'] \times {\mathbb {R}}\right) \right. \\&\left. + \, \sup _{\begin{array}{c} y< y' \in T_\gamma ^{(2)}\\ F_{2x}(y') - F_{2x}(y) \le 3 \delta '_n \end{array}} \mu _x\left( {\mathbb {R}}\times ]y,y'] \right) \right\} ^\beta , \end{aligned}$$

so that \({\mathbb {P}}\left\{ \mathfrak {W}_{\delta _n}(Z_{xn}, T_{\gamma }) > \epsilon \right\} \rightarrow 0\). This finally entails that (9) is satisfied, which in turn ensures that (8) holds true. The proof is therefore complete.

1.2 A.2: Proof of Proposition 2

Let \(V_{1i}=F_{1x}(Y_{1i})\), \(V_{2i} =F_{2x}(Y_{2i}) \), and consider

$$\begin{aligned} J_{xh}(u,v) = \frac{1}{nh}\sum _{i=1}^n\mathcal {K}_{xn}\left( \frac{X_i-x}{h}\right) \, {\mathbb {I}}(V_{1i}\le u, V_{2i}\le v) \,, \end{aligned}$$

\(I_{1xh}(y) = J_{xh}(y,1)\) and \( I_{2xh}(y) = J_{xh}(1,y)\). As \(C_{xh}(u,v) = J_{xh}\lbrace I_{1xh}^{-1}(u),I_{2xh}^{-1}(v)\rbrace \), one has \({\mathbb {C}}_{xh} = \sqrt{nh}\lbrace J_{xh}( I_{1xh}^{-1},I_{2xh}^{-1}) - C_x\rbrace \). Now define \(\mathbb {D}\) as the space of bivariate distribution functions J on \([0,1]^2\) whose marginal cumulative distribution functions \(I_1\) and \(I_2\) satisfy \(I_1(0) = I_2(0) = 0\), and consider the mapping \(\varLambda : \mathbb {D} \rightarrow \mathbb {D}\) such that for \(J \in \mathbb {D}\),

$$\begin{aligned} \varLambda (J)(u_1,u_2) = J \left\{ I_1^{-1}(u_1), I_2^{-1}(u_2) \right\} . \end{aligned}$$

With this notation,

$$\begin{aligned} {\mathbb {C}}_{xh} = \sqrt{nh} \left\{ \varLambda (J_{xh}) - C_x \right\} =\sqrt{nh} \left\{ \varLambda (J_{xh}) - \varLambda (C_x) \right\} . \end{aligned}$$
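The map \(\varLambda \) can be sketched numerically: given a weighted empirical joint distribution function, plug the generalized inverses of its margins back into it. In the sketch below the weights are placeholders; in the paper they would be the local linear weights \(\mathcal {K}_{xn}\{(X_i-x)/h\}\):

```python
import numpy as np

# Sketch of Lambda(J)(u, v) = J{I1^{-1}(u), I2^{-1}(v)} for a weighted
# empirical joint df J.  The weights w are placeholders; in the paper they
# would be the local-linear weights K_xn{(X_i - x)/h}.

rng = np.random.default_rng(1)
n = 400
w = rng.uniform(0.5, 1.5, size=n)     # placeholder weights
W = np.sum(w)                         # normalize so that J(1, 1) = 1
V1, V2 = rng.uniform(size=n), rng.uniform(size=n)  # pseudo-observations

def J(u, v):
    return np.sum(w * (V1 <= u) * (V2 <= v)) / W

I1 = lambda y: J(y, 1.0)              # first margin of J
I2 = lambda y: J(1.0, y)              # second margin of J
grid = np.sort(np.concatenate([V1, V2, [0.0, 1.0]]))

def inv(I, u):
    # generalized inverse I^{-1}(u) = inf{ y : I(y) >= u } over the grid
    vals = np.array([I(y) for y in grid])
    idx = np.nonzero(vals >= u - 1e-12)[0]
    return grid[idx[0]]

def C(u, v):
    # Lambda(J)(u, v): the copula extracted from J
    return J(inv(I1, u), inv(I2, v))

print(C(1.0, 1.0))                    # = 1: Lambda(J) is a df on [0, 1]^2
print(C(0.5, 1.0))                    # ~ 0.5: margins are nearly uniform
```

The margins of \(\varLambda (J)\) are uniform up to the largest weight jump, which vanishes as the effective sample size grows.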

Also, let \(\mathbb {D}_0 = \lbrace \alpha \in C([0,1]^2) : \alpha (1,1) = 0 \, \text {and} \, \alpha (z_1,z_2)=0\,\text {if } \min (z_1,z_2)=0 \rbrace \), where \(C([0,1]^2)\) is the space of continuous functions on \([0,1]^2\). From Theorem 2.4 in Bücher and Volgushev (2013), one has in view of Assumption (\({\mathcal {C}}_x\)) that \(\varLambda \) is Hadamard differentiable at \(C_x\) tangentially to \(\mathbb {D}_0\), with derivative given for \(\varDelta \in \mathbb {D}_0\) by

$$\begin{aligned} \varLambda '_{C_x}(\varDelta )(u_1,u_2) = \varDelta (u_1,u_2) - C_x^{[1]}(u_1,u_2) \, \varDelta (u_1,1) - C_x^{[2]}(u_1,u_2) \, \varDelta (1,u_2). \end{aligned}$$

It is a consequence of Lemma 1 that under Assumptions \((\mathcal {S})\), \((\mathcal {L}\mathcal {L})\) and \((\mathcal {N})\) it holds for sufficiently large n that \(\mathcal {K}_{xn}\ge 0\) almost surely, yielding \(J_{xh}\in \mathbb {D}\). Moreover, under conditions (\({\mathcal {S}}\)), (\({\mathcal {H}}\)), (\(\mathcal {LL}\)) and (\({\mathcal {N}}\)), it can easily be shown that \((V_{11},V_{21},X_1),\ldots ,(V_{1n},V_{2n},X_n)\) fulfill the requirements of Proposition 1. One thus deduces that \(\sqrt{nh}(J_{xh}-C_x)\) converges weakly in \(\ell ^{\infty }([0,1]^2)\) to \({\mathbb {B}}_x\), where \({\mathbb {B}}_x(u_1,u_2) = {\mathbb {H}}_x \{ F_{1x}^{-1}(u_1), F_{2x}^{-1}(u_2) \} \in \mathbb {D}_0\). Hence, from the functional delta method, one can conclude that \({\mathbb {C}}_{xh}\) converges weakly to

$$\begin{aligned} {\mathbb {C}}_x = \varLambda '_{C_x}({\mathbb {B}}_x) = {\mathbb {B}}_x(u_1,u_2) - C_x^{[1]}(u_1,u_2) \, {\mathbb {B}}_x(u_1,1) - C_x^{[2]}(u_1,u_2) \, {\mathbb {B}}_x(1,u_2). \end{aligned}$$

1.3 A.3: Proof of Proposition 3

Consider a version of \(G_{xh}\) based on \((U_{1},V_{1},X_1), \ldots , (U_{n},V_{n},X_n)\), where \(U_{i} = F_{1 X_i}(Y_{1i})\) and \(V_{i} = F_{2 X_i }(Y_{2i})\), namely

$$\begin{aligned} {\widetilde{G}}_{xh}(u,v) = \frac{1}{nh} \sum _{i=1}^n \mathcal {K}_{xn}\left( \frac{X_i-x}{h} \right) \, {\mathbb {I}}\left( U_{i} \le u, V_{i} \le v \right) . \end{aligned}$$

One can then write for the functional \(\varLambda \) defined in the proof of Proposition 2 that

$$\begin{aligned} \widetilde{{\mathbb {C}}}_{xh} = \sqrt{n h} \left\{ \varLambda (\widetilde{G}_{xh}) - C_x \right\} + \sqrt{nh} \left\{ \varLambda ({G}_{xh}) - \varLambda (\widetilde{G}_{xh}) \right\} . \end{aligned}$$

The first summand is a special case of Proposition 2 with \((Y_{1i},Y_{2i},X_i)\) replaced by \((U_i,V_i,X_i)\). Because the conditional marginal distributions of \((U_i,V_i)\) are uniform on (0, 1), their joint conditional distribution is \(C_{X_i}\). Since Assumptions (\({\mathcal {S}}\)), (\({\mathcal {H}}^\star \)), (\(\mathcal {LL}\)) and (\({\mathcal {C}}_x\)) are satisfied, Proposition 2 ensures that \(\sqrt{n h} \{ \varLambda (\widetilde{G}_{xh}) - C_x \}\) converges weakly to \(\varLambda '_{C_x}({\mathbb {G}}_x) = {\widetilde{{\mathbb {C}}}}_x\). It remains to show that \(\sqrt{nh} \{ \varLambda ({G}_{xh}) - \varLambda (\widetilde{G}_{xh}) \}\) is asymptotically negligible. As pointed out by Veraverbeke et al. (2011), this is closely related to the asymptotic behavior of the processes \(\widetilde{Z}_{jxn} = Z_{jxn} - {\bar{Z}}_{jxn}\), \(j \in \{ 1, 2 \}\), where, for \(z_t = x + t \, C \, h\) and \(C= \inf \lbrace z>0: K(z) = 0\rbrace \),

$$\begin{aligned} Z_{jxn}(t,u) = \sqrt{n h_j} \, F_{j z_t h_j} \circ F_{j z_t}^{-1}(u) \quad \text{ and } \quad {\bar{Z}}_{jxn}(t,u) = { 1 \over \sqrt{n h_j} } \sum _{i=1}^n \mathcal {K}_{z_t n} \left( X_i - z_t \over h_j \right) F_{j X_i} \circ F_{j z_t}^{-1}(u). \end{aligned}$$

The key is the following lemma whose proof is to be found in Bouezmarni et al. (2019).

Lemma 4

Under Assumptions (\({\mathcal {S}}\)), (\({\mathcal {H}}^\star \)), (\(\mathcal {LL}\)) and (\({\mathcal {N}}^\star \)), the sequences \({\widetilde{Z}}_{1xn}\) and \({\widetilde{Z}}_{2xn}\) are asymptotically tight in \(\ell ^\infty ([-1,1] \times [0,1])\).

Finally, from arguments similar to those in Appendix B.2 of Veraverbeke et al. (2011), one obtains that \(\sqrt{nh} \{ \varLambda ({G}_{xh}) - \varLambda (\widetilde{G}_{xh}) \} = o_{\mathbb {P}}(1)\), and thus \(\widetilde{{\mathbb {C}}}_{xh} = \sqrt{n h} \{ \varLambda (\widetilde{G}_{xh}) - C_x \} + o_{\mathbb {P}}(1)\).

About this article

Cite this article

Bouezmarni, T., Camirand Lemyre, F. & Quessy, JF. On the large-sample behavior of two estimators of the conditional copula under serially dependent data. Metrika 82, 823–841 (2019). https://doi.org/10.1007/s00184-019-00711-y
