
Smoothed jackknife empirical likelihood for the difference of two quantiles

Annals of the Institute of Statistical Mathematics

Abstract

In this paper, we propose a smoothed estimating equation for the difference of quantiles based on two samples. Applying the jackknife pseudo-sample technique to this estimating equation, we construct the jackknife empirical likelihood (JEL) ratio and establish a Wilks' theorem for it. Because the JEL method avoids estimating link variables, simulation studies show that it is computationally more efficient than the traditional normal approximation method. We carry out a simulation study of the coverage probability and average length of the proposed confidence intervals, and a real data set is used to illustrate the JEL procedure.
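
To fix ideas, the following is a minimal Python sketch (not the authors' code) of the two ingredients used throughout the paper: the smoothed estimating function \(\Pi_{m,n}(p,\theta)\) built from a kernel-smoothed indicator, and the jackknife pseudo-values obtained by deleting one observation at a time from the combined sample. The Epanechnikov kernel, the bandwidth argument, and all function names are illustrative assumptions.

```python
import numpy as np

def K_smooth(t):
    """Smoothed indicator: integral of the Epanechnikov kernel w(u) = 0.75(1 - u^2) on [-1, 1].
    Any kernel satisfying the paper's conditions could be substituted; this choice is illustrative."""
    t = np.clip(t, -1.0, 1.0)
    return 0.25 * (2.0 + 3.0 * t - t ** 3)

def ecdf(sample, points):
    """Empirical CDF of `sample` evaluated at each entry of `points`."""
    return np.searchsorted(np.sort(sample), points, side="right") / len(sample)

def smoothed_pi(p, theta, X, Y, h):
    """Smoothed estimating function Pi_{m,n}(p, theta)
    = (1/m) * sum_j K{ (p - F_{n,2}(X_j - theta)) / h } - p."""
    F2 = ecdf(Y, X - theta)
    return np.mean(K_smooth((p - F2) / h)) - p

def pseudo_values(p, theta, X, Y, h):
    """Jackknife pseudo-values V_i = N*T - (N-1)*T_{-i}, N = m + n, deleting one
    observation at a time from the combined sample (X_1..X_m first, then Y_1..Y_n)."""
    m, n, N = len(X), len(Y), len(X) + len(Y)
    T = smoothed_pi(p, theta, X, Y, h)
    V = np.empty(N)
    for i in range(m):                                   # delete X_i
        V[i] = N * T - (N - 1) * smoothed_pi(p, theta, np.delete(X, i), Y, h)
    for i in range(n):                                   # delete Y_i
        V[m + i] = N * T - (N - 1) * smoothed_pi(p, theta, X, np.delete(Y, i), h)
    return V
```

Given a candidate value of \(\theta(p)\), the JEL ratio is then the ordinary empirical likelihood for a zero mean computed from these pseudo-values; a sketch of that final step, and of the resulting confidence interval, appears at the end of the Appendix.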

References

  • Baysal, E., Staum, J. (2008). Empirical likelihood for value-at-risk and expected shortfall. Journal of Risk, 11, 3–32.

  • Chen, J., Peng, L., Zhao, Y. (2009). Empirical likelihood based confidence intervals for copulas. Journal of Multivariate Analysis, 100, 137–151.

  • Chen, S., Hall, P. (1993). Smoothed empirical likelihood confidence intervals for quantiles. Annals of Statistics, 21, 1166–1181.

  • Csörgo, M. (1987). Quantile processes with statistical applications. Philadelphia, PA: SIAM Publications.

  • Gong, Y., Peng, L., Qi, Y. (2010). Smoothed jackknife empirical likelihood method for ROC curve. Journal of Multivariate Analysis, 101, 1520–1531.

  • Jing, B.-Y., Yuan, J., Zhou, W. (2009). Jackknife empirical likelihood. Journal of the American Statistical Association, 104, 1224–1232.

  • Kosorok, M. (1999). Two-sample quantile tests under general conditions. Biometrika, 86, 909–921.

  • Miller, R. G. (1974). The jackknife: a review. Biometrika, 61, 1–15.

  • Owen, A. (1988). Empirical likelihood ratio confidence intervals for a single functional. Biometrika, 75, 237–249.

  • Owen, A. (1990). Empirical likelihood and confidence regions. Annals of Statistics, 18, 90–120.

  • Owen, A. (2001). Empirical likelihood. New York: Chapman & Hall.

  • Qin, J., Lawless, J. (1994). Empirical likelihood and general estimating equations. Annals of Statistics, 22, 300–325.

  • Reiss, R. D. (1989). Approximate distributions of order statistics. New York: Springer.

  • Sheather, S., Marron, J. (1990). Kernel quantile estimators. Journal of the American Statistical Association, 85, 410–416.

  • van der Vaart, A. W. (2000). Asymptotic statistics. Cambridge, UK: Cambridge University Press.

  • Wang, R., Peng, L., Qi, Y. (2013). Jackknife empirical likelihood test for equality of two high-dimensional means. Statistica Sinica, 23(2), 667–690.

  • Yang, H., Zhao, Y. (2013). Smoothed jackknife empirical likelihood inference for the difference of ROC curves. Journal of Multivariate Analysis, 115, 270–284.

  • Yang, H., Zhao, Y. (2015). Smoothed jackknife empirical likelihood inference for ROC curves with missing data. Journal of Multivariate Analysis, 140, 123–138.

  • Zhou, W., Jing, B.-Y. (2003). Adjusted empirical likelihood method for quantiles. Annals of the Institute of Statistical Mathematics, 55, 689–703.

Acknowledgments

This research was supported by the National Natural Science Foundation of China (No. 11501567). Yichuan Zhao acknowledges support from NSF Grant DMS-1406163 and NSA Grant H98230-12-1-0209. The authors would like to thank an associate editor and the two referees for their helpful comments, which have improved the quality of the paper significantly.

Corresponding author

Correspondence to Yichuan Zhao.

Appendix: Proofs of Theorems

Proof of Theorem 1

We can decompose \(\Pi _{m ,n}\{p,\theta (p)\}\) as

$$\begin{aligned} \Pi _{m ,n}\{p,\theta (p)\}&= \frac{1}{m} \sum ^{m}_{j=1} K \left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} -p-\Pi _{m }\{p,\theta (p)\}+\Pi _{m }\{p,\theta (p)\} , \end{aligned}$$
(2)

where

$$\begin{aligned} \Pi _{m }\{p,\theta (p)\}=\frac{1}{m} \sum ^{m}_{j=1} K \left\{ \frac{p-{F}_{2}\{X_{j}-\theta (p)\}}{h}\right\} -p. \end{aligned}$$
(3)

Equation (3) can be simplified as follows:

$$\begin{aligned} \Pi _{m }\{p,\theta (p)\}&= \frac{1}{m} \sum ^{m}_{j=1} K \left\{ \frac{p-{F}_{2}\{X_{j}-\theta (p)\}}{h}\right\} -p \nonumber \\&= \int _{-\infty }^{\infty } K\left\{ \frac{p-{F}_{2}\{x-\theta (p)\}}{h}\right\} \mathrm {d}F_{m,1}(x)-p\nonumber \\&= K\left\{ \frac{p-{F}_{2}\{x-\theta (p)\}}{h} \right\} F_{m,1}(x)|^{\infty }_{-\infty } \nonumber \\&\quad -\int _{-\infty }^{\infty } F_{m,1}(x)\mathrm {d}K\left\{ \frac{p-{F}_{2}\{x-\theta (p)\}}{h}\right\} -p\nonumber \\&= \frac{1}{h}\int _{-\infty }^{\infty } F_{m,1}(x)w\left\{ \frac{p-{F}_{2}\{x-\theta (p)\}}{h}\right\} \mathrm {d}{F}_{2} \{x-\theta (p)\}-p \nonumber \\&= \int _{-1}^{1} F_{m,1}\{F_2^{-1}(p+uh)+\theta (p)\}w\left( u\right) \mathrm {d}u-p \nonumber \\&=\int _{-1}^{1} [ F_{m,1}\{F_2^{-1}(p+uh)+\theta (p)\}- F_{1}\{F_2^{-1}(p+uh)+\theta (p)\} \nonumber \\&\quad + F_{1}\{F_2^{-1}(p+uh)+\theta (p)\} -F_{1}\{F_2^{-1}(p)+\theta (p)\}]w\left( u\right) \mathrm {d}u\nonumber \\&= o_p(1). \end{aligned}$$
(4)

The final step follows from the Glivenko–Cantelli theorem applied to \(F_1\) and the bounded derivative of \(D\{\theta (p), p\}=F_1\{F_2^{-1}(p)+\theta (p)\}\). By Eqs. (10) and (11) in Gong et al. (2010), their result extends to our setting, i.e., \(\Pi _{m}\{p,\theta (p)\}-\Pi _{m,n}\{p,\theta (p)\}=o_p(1)\). Thus, combining Eqs. (3) and (4), we complete the proof that

$$\begin{aligned} \Pi _{m ,n}\{p,\theta (p)\} \mathop { \rightarrow }\limits ^\mathcal{P} 0. \end{aligned}$$
(5)

\(\square \)
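
As a sanity check on Theorem 1, the following Monte Carlo sketch (under an assumed normal model of our own choosing, with an illustrative bandwidth) evaluates the smoothed estimating function at the true quantile difference \(\theta(p)=F_1^{-1}(p)-F_2^{-1}(p)\) and shows it shrinking toward zero as the sample sizes grow.

```python
import numpy as np
from scipy.stats import norm

def K_smooth(t):                      # smoothed Epanechnikov indicator, as in the sketch after the Abstract
    t = np.clip(t, -1.0, 1.0)
    return 0.25 * (2.0 + 3.0 * t - t ** 3)

def smoothed_pi(p, theta, X, Y, h):   # Pi_{m,n}(p, theta), as displayed in Eq. (2)
    F2 = np.searchsorted(np.sort(Y), X - theta, side="right") / len(Y)
    return np.mean(K_smooth((p - F2) / h)) - p

rng = np.random.default_rng(0)
p = 0.5
theta_true = norm.ppf(p, loc=1.0) - norm.ppf(p)        # F1 = N(1,1), F2 = N(0,1): theta(p) = 1
for m, n in [(50, 50), (200, 200), (800, 800)]:
    h = (m + n) ** (-0.4)                              # illustrative bandwidth only
    vals = [smoothed_pi(p, theta_true, rng.normal(1.0, 1.0, m), rng.normal(0.0, 1.0, n), h)
            for _ in range(200)]
    print(m, n, np.mean(np.abs(vals)))                 # mean |Pi_{m,n}| decreases as m, n grow
```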

Proof of Theorem 2:

One has that

$$\begin{aligned} \sqrt{m+n}\Pi _{m,n}\{p,\theta (p)\}&=\frac{\sqrt{m+n}}{\sqrt{m}}\sqrt{m}[\Pi _{m} \{p,\theta (p)\}] \nonumber \\&\quad + \frac{\sqrt{m+n}}{\sqrt{n}}\sqrt{n}[\Pi _{m ,n}\{p,\theta (p)\}-\Pi _{m }\{p,\theta (p)\}]. \end{aligned}$$
(6)

For the first term of (6), one has

$$\begin{aligned}&\sqrt{m}[\Pi _{m }\{p,\theta (p)\}]\nonumber \\&\quad = \int _{-1}^{1}\sqrt{m} [ F_{m,1}\{F_2^{-1}(p+uh)+\theta (p)\}- F_{1}\{F_2^{-1}(p+uh)+\theta (p)\}]w\left( u\right) \mathrm {d}u \nonumber \\&\qquad + \sqrt{m}\int _{-1}^{1} [F_{1}\{F_2^{-1}(p+uh)+\theta (p)\} -F_{1}\{F_2^{-1}(p)+\theta (p)\}]w\left( u\right) \mathrm {d}u\nonumber \\&\quad =\int _{-1}^{1}W_{F_{1}}\{F_2^{-1}(p+uh)+\theta (p)\}w\left( u\right) \mathrm {d}u +\sqrt{m}\int _{-1}^{1}D^{'}\{\theta (p),p\}uhw\left( u\right) \mathrm {d}u\nonumber \\&\qquad +O_p(\sqrt{m}h^2) = I+II+O_p(\sqrt{m}h^2), \end{aligned}$$
(7)

where \(W_{F_1}(t)=\sqrt{m}\{F_{m,1}(t)-F_{1}(t)\}\). Because the kernel function is symmetric, the second term of (7) equals zero.

By the arguments on pp. 266 and 269 of van der Vaart (2000), the empirical processes based on \(F_{m,1}(x)\) and \(F_{n,2}(x)\) converge weakly:

$$\begin{aligned} \sqrt{m}\{F_{m,1}(x)-F_1(x)\}\Longrightarrow B_{1}(F_1(x)),\quad \sqrt{n}\{F_{n,2}(x)-F_2(x)\}\Longrightarrow B_{2}(F_2(x)), \end{aligned}$$

where \(B_{1}(\cdot )\) and \(B_{2}(\cdot )\) are two independent Brownian bridges on [0, 1]. Thus, \(B_{1}(F_1(x))\) and \(B_{2}(F_2(x))\) are independent. By the Donsker theorem and an argument similar to the proof of equation (9) in Gong et al. (2010), \(I\overset{\mathfrak {D}}{\longrightarrow }B_1(F_1\{F_2^{-1}(p)+\theta (p)\})\). Using \(F_1^{-1}(p)=F_2^{-1}(p)+\theta (p)\), it is clear that

$$\begin{aligned} \sqrt{m}[\Pi _{m}\{p,\theta (p)\}]\overset{\mathfrak {D}}{\longrightarrow }B_1(p). \end{aligned}$$
(8)
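
A small simulation illustrates the limit in Eq. (8): since \(B_1(p)\) has variance \(p(1-p)\), the sample variance of \(\sqrt{m}\,\Pi _{m}\{p,\theta (p)\}\) (computed with the true \(F_2\), as in Eq. (3)) should be close to \(p(1-p)\). The normal model and bandwidth below are our own illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def K_smooth(t):                      # smoothed Epanechnikov indicator, as before
    t = np.clip(t, -1.0, 1.0)
    return 0.25 * (2.0 + 3.0 * t - t ** 3)

rng = np.random.default_rng(2)
p, m, reps = 0.3, 400, 2000
h = (2 * m) ** (-0.4)                                  # illustrative bandwidth
theta = norm.ppf(p, loc=1.0) - norm.ppf(p)             # F1 = N(1,1), F2 = N(0,1)
stats = []
for _ in range(reps):
    X = rng.normal(1.0, 1.0, m)
    F2 = norm.cdf(X - theta)                           # the TRUE F_2, as in Eq. (3)
    stats.append(np.sqrt(m) * (np.mean(K_smooth((p - F2) / h)) - p))
print(np.var(stats), p * (1 - p))   # close to p(1-p) = 0.21, up to a small smoothing effect
```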

For the second term of Eq. (6), under condition C.1, we adopt a procedure similar to that in Gong et al. (2010):

$$\begin{aligned}&\sqrt{n}[\Pi _{m ,n}\{p,\theta (p)\}-\Pi _{m}\{p,\theta (p)\}]\nonumber \\&\quad =-\int _{-\infty }^{\infty }W_{F_2}(x)w\left\{ \frac{p-{F}_{2}\{x-\theta (p)\}}{h}\right\} \mathrm {d}F_{1}(x)+O_p(n^{-1/2}h^{-1})\nonumber \\&\quad =\int _{-1}^{1}W_{F_2}\{F_2^{-1}(p)\}w(u)D^{'}\{p,\theta (p)\} \mathrm {d}u+O_p(n^{-1/2}h^{-1})\nonumber \\&\quad \overset{\mathfrak {D}}{\longrightarrow }B_2(F_2\{F_2^{-1}(p)\})D^{'}\{p,\theta (p)\},\nonumber \\&\quad =B_2(p)D^{'}\{p,\theta (p)\}, \end{aligned}$$
(9)

where \(W_{F_2}(t)=\sqrt{n}\{F_{n,2}(t)-F_{2}(t)\}\). Combining (7), (8), (9) and the independence of \(B_1(F_1(x))\) and \(B_2(F_2(x))\), one has that

$$\begin{aligned} \sqrt{m+n}\Pi _{m,n}\{p,\theta (p)\}\overset{\mathfrak {D}}{\longrightarrow } N(0,\sigma ^{2}(p)). \end{aligned}$$
(10)

\(\square \)

Before proving Theorem 3, we need the asymptotic normality of the jackknife estimator and the consistency of the jackknife variance estimator. These asymptotic properties are given in Lemmas 1 and 2.

Lemma 1

Suppose conditions C.1–C.4 hold. We have

$$\begin{aligned} \sqrt{m+n}\left\{ \frac{1}{m+n}\sum ^{m+n}_{i=1}\hat{V}_{i}\{p, \theta (p)\}\right\} \mathop { \rightarrow }\limits ^\mathcal{D} N\{0, \sigma ^2(p)\}, \end{aligned}$$

where \(\sigma ^{2}(p)\) is defined in Theorem 2.

Proof of Lemma 1:

First, we introduce some properties of \(F_{n,2,-i}\) as follows:

$$\begin{aligned} F_{n,2,-i}(X_{j})-F_{n,2}(X_{j})=\frac{1}{n-1}\{F_{n,2}(X_{j})-I(Y_{ i} \le X_{j})\}=O_{p}\left( \frac{1}{n-1}\right) , i=1,\ldots ,n \end{aligned}$$
(11)

and

$$\begin{aligned} \sum ^{n}_{i=1}\{F_{n,2,-i}(X_{j})-F_{n,2}(X_{j})\}=0, \end{aligned}$$

because

$$\begin{aligned}&F_{n,2,-i}(X_{j})-F_{n,2}(X_{j})\nonumber \\&\quad = \frac{1}{n-1}\sum ^{n}_{k=1,k\ne i}I(Y_{k} \le X_{j})-\frac{1}{n} \sum ^{n}_{k=1}I(Y_{k} \le X_{j})\nonumber \\&\quad = \frac{1}{n-1}\left\{ \sum ^{n}_{k=1,k\ne i}I(Y_{k} \le X_{j})-\sum ^{n}_{k=1}I(Y_{ k} \le X_{j})\right\} +\left( \frac{1}{n-1}-\frac{1}{n} \right) \sum ^{n}_{k=1}I(Y_{k} \le X_{j})\nonumber \\&\quad = \frac{1}{n-1}\{F_{n,2}(X_{j})-I(Y_{ i} \le X_{j})\}. \end{aligned}$$
(12)
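
Identity (12), and the fact that the leave-one-out deviations sum to zero, are elementary but easy to verify numerically; a quick sketch with arbitrary simulated data:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 30, 20
Y, X = rng.normal(size=n), rng.normal(size=m)

ecdf = lambda sample, x: np.mean(sample <= x)          # empirical CDF at a single point

for j in range(m):
    Fn2 = ecdf(Y, X[j])                                # F_{n,2}(X_j)
    diffs = np.array([ecdf(np.delete(Y, i), X[j]) - Fn2 for i in range(n)])
    expected = (Fn2 - (Y <= X[j])) / (n - 1)           # right-hand side of Eq. (12)
    assert np.allclose(diffs, expected)                # identity (11)/(12)
    assert abs(diffs.sum()) < 1e-12                    # the deviations sum to zero over i
```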

For the pseudo-sample, based on equation (16) in Gong et al. (2010) and Eqs. (11) and (12), one has that

$$\begin{aligned}&\left\{ \frac{1}{m+n}\sum ^{m+n}_{i=1}\hat{V}_{i}\{p, \theta (p)\}\right\} \nonumber \\&\quad = \frac{1}{m+n} \sum ^{m}_{i=1} \left\{ \frac{m+n}{m} \sum ^{m}_{j=1} K \left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} \right\} \nonumber \\&\qquad -\frac{m+n-1}{(m+n)(m-1)}\sum ^{m}_{i=1} \left\{ \sum ^{m}_{j=1, j\ne i} K \left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} \right\} -p \nonumber \\&\qquad + \sum ^{m+n}_{i=m+1} \left\{ \frac{1}{m} \sum ^{m}_{j=1} K \left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} \right. \nonumber \\&\left. \qquad -\frac{m+n-1}{(m+n)m} \sum ^{m}_{j=1} K \left\{ \frac{p-{F}_{n,2,m-i}\{X_{j}-\theta (p)\}}{h}\right\} \right\} \nonumber \\&\quad =\frac{1}{m+n} \left[ \sum ^{m}_{j=1} K\left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} +\frac{n}{m}\sum ^{m}_{j=1} K\left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} \right] \nonumber \\&\qquad + \frac{m+n-1}{(m+n)m} \sum ^{m+n}_{i=m+1}\sum ^{m}_{j=1} \left[ K\left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} \right. \nonumber \\&\left. \qquad - K\left\{ \frac{p-{F}_{n,2,m-i}\{X_{j}-\theta (p)\}}{h}\right\} \right] -p\nonumber \\&\quad =\frac{1}{m}\sum ^{m}_{j=1} K\left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} -p+O_p\left\{ \frac{mn}{(m+n)(n-1)^2h}\right\} . \end{aligned}$$
(13)

Using (10) and (13), it is clear that

$$\begin{aligned} \sqrt{m+n}\left\{ \frac{1}{m+n}\sum ^{m+n}_{i=1}\hat{V}_{i}\{p, \theta (p)\}\right\}&=\sqrt{m+n}\Pi _{m ,n}\{p,\theta (p)\}+o_p(1)\nonumber \\&\overset{\mathfrak {D}}{\longrightarrow }N(0,\sigma ^{2}(p)). \end{aligned}$$
(14)

\(\square \)

In order to obtain the Wilks’ theorem for the JEL procedure, we need to check the consistency of the jackknife pseudo-sample variance in addition to Eq. (14). Define the pseudo-sample variance

$$\begin{aligned} v^2_{m,n}\{p,\theta (p)\}=\frac{1}{m+n}\sum ^{m+n}_{i=1}\left\{ \hat{V}_{i} \{p, \theta (p)\}-\frac{1}{m+n}\sum ^{m+n}_{i=1}\hat{V}_{i}\{p, \theta (p)\}\right\} ^{2}. \end{aligned}$$
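
In code, \(v^2_{m,n}\{p,\theta (p)\}\) is simply the sample variance (with divisor \(m+n\)) of the jackknife pseudo-values; a minimal sketch, assuming the `pseudo_values` helper from the sketch after the Abstract:

```python
import numpy as np

def jackknife_variance(V):
    """Pseudo-sample variance v^2_{m,n}: mean squared deviation of the V_i about their mean."""
    V = np.asarray(V, dtype=float)
    return np.mean((V - V.mean()) ** 2)

# Example usage (pseudo_values as in the sketch after the Abstract):
#   V = pseudo_values(p, theta, X, Y, h)
#   v2 = jackknife_variance(V)
# Lemma 2 states that v2 converges in probability to sigma^2(p) under conditions C.1-C.4.
```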

Lemma 2

Under the conditions C.1–C.4, one has that

$$\begin{aligned} v^2_{m ,n}\{p,\theta (p)\} \mathop { \rightarrow }\limits ^\mathcal{P} \sigma ^2(p). \end{aligned}$$

Proof of Lemma 2:

For \(1\le i\le m\),

$$\begin{aligned}&\hat{V}_{i}\{p, \theta (p)\}\\&\quad =\frac{m+n}{m} \sum ^{m}_{j=1} K \left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} -\frac{m+n-1}{m-1} \sum ^{m}_{j=1, j\ne i} K \left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} -p \\&\quad = \frac{m+n-1}{m-1} K \left\{ \frac{p-{F}_{n,2}(X_{i}-\theta (p))}{h}\right\} - \frac{n}{m(m-1)}\sum ^{m}_{j=1} K \left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} -p \end{aligned}$$

and

$$\begin{aligned}&\hat{V}_{i}^2\{p, \theta (p)\}\\&=\left[ \frac{m+n-1}{m-1} K \left\{ \frac{p-{F}_{n,2}(X_{i}-\theta (p))}{h}\right\} \right] ^2+\left[ \frac{n}{m(m-1)}\sum ^{m}_{j=1} K \left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} \right] ^2\\&\quad -\frac{2(m+n-1)n}{m(m-1)^2} K \left\{ \frac{p-{F}_{n,2}(X_{i}-\theta (p))}{h}\right\} \sum ^{m}_{j=1} K \left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} +p^2\\&\quad - 2p\left[ \frac{m+n-1}{m-1} K \left\{ \frac{p-{F}_{n,2}(X_{i}-\theta (p))}{h}\right\} - \frac{n}{m(m-1)}\sum ^{m}_{j=1} K \left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} \right] . \end{aligned}$$

By an argument similar to that in Gong et al. (2010), one has

$$\begin{aligned} \frac{1}{m+n}\sum ^{m}_{i=1}\hat{V}_{i}^2\{p, \theta (p)\}\overset{\mathcal {P}}{\longrightarrow }\frac{r+1}{r}p(1-p). \end{aligned}$$
(15)

For \(m+1\le i\le m+n\), one has that

$$\begin{aligned}&\hat{V}_{i}\{p, \theta (p)\} \\&\quad =\frac{m+n-1}{m} \sum ^{m}_{j=1} \left[ K\left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} - K\left\{ \frac{p-{F}_{n,2,m-i}\{X_{j}-\theta (p)\}}{h}\right\} \right] \\&\qquad +\frac{1}{m}\sum ^{m}_{j=1}K\left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} - p, \end{aligned}$$

and

$$\begin{aligned}&\hat{V}^2_{i}\{p, \theta (p)\} =\left\{ \frac{m+n-1}{m}\right\} ^2 \left[ \sum ^{m}_{j=1} K\left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} \right. \nonumber \\&\left. \qquad - K\left\{ \frac{p-{F}_{n,2,m-i}\{X_{j}-\theta (p)\}}{h}\right\} \right] ^2+o_p(1) =\left\{ \frac{m+n-1}{m}\right\} ^2 \nonumber \\&\qquad \times \left[ \sum ^{m}_{j=1} w\left\{ \frac{p-{F}_{n,2}\{X_{j}-\theta (p)\}}{h}\right\} \frac{{F}_{n,2}\{X_{j}-\theta (p)\}-{F}_{n,2,m-i}\{X_{j}-\theta (p)\}}{h}\right] ^2\\&\qquad + o_p(1). \end{aligned}$$

Under condition C.1, we follow an argument similar to that in Gong et al. (2010):

$$\begin{aligned}&\frac{1}{m+n}\sum ^{m+n}_{i=m+1}\hat{V}^2_{i}\{p, \theta (p)\} = \frac{m+n}{nh^2}\sum ^n_{i=1}\sum ^m_{j=1}\sum ^m_{l=1} \{F_{n,2}(X_{j}\nonumber \\&\quad -\theta (p))F_{n,2}(X_{l}-\theta (p))-F_{n,2}\{X_{j}-\theta (p)\}I(Y_{i} \le X_{l}-\theta (p))\\&\quad -F_{n,2}(X_{l}-\theta (p))I(Y_{i} \le X_{j}-\theta (p))+I(Y_{i} \le X_{j}-\theta (p)) I(Y_{i} \le X_{l}-\theta (p))\}\\&\quad w\left( \frac{p-F_{n,2}\{X_{j}-\theta (p)\}}{h}\right) w\left( \frac{p-F_{n,2} (X_{l}-\theta (p))}{h}\right) +o_p(1) \end{aligned}$$
$$\begin{aligned}&\quad =\frac{m+n}{nh^2}\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }\{F_{n,2}(x_{1}\wedge x_{2}-\theta (p))-F_{n,2}(x_{1}-\theta (p))F_{n,2}(x_{2}-\theta (p))\}\nonumber \\&\qquad w\left( \frac{p-F_{n,2}(x_{1}-\theta (p))}{h}\right) w \left( \frac{p-F_{n,2}(x_{2}-\theta (p))}{h}\right) \mathrm {d} F_{m,1}(x_1)\mathrm {d} F_{m,1}(x_2)+o_p(1)\nonumber \\&\quad =\frac{m+n}{nh^2}\int _{-1}^{1}\int _{-1}^{1}\{F_{2}\{F_{2}^{-1}(p-u_1h)\wedge F_{2}^{-1}(p-u_2h)\}\nonumber \\&\qquad -F_{2}\{F_{2}^{-1}(p-u_1h)\}F_{2}\{F_{2}^{-1}(p-u_2h)\}\}\nonumber \\&\qquad w\left( u_1\right) w\left( u_2\right) \mathrm {d} F_{1}\{F_{2}^{-1}(p-u_1h)+\theta (p)\}\mathrm {d} F_{1}\{F_{2}^{-1}(p-u_2h)+\theta (p)\}+o_p(1)\nonumber \\&\quad =\frac{m+n}{n}\int _{-1}^{1}\int _{-1}^{1}p(1-p)\{D^{'}\{p,\theta (p)\}\}^2w(u_1)w(u_2)\mathrm {d}u_1\mathrm {d}u_2\nonumber \\&\quad =\frac{m+n}{n}p(1-p)\{D^{'}\{p,\theta (p)\}\}^2+o_p(1). \end{aligned}$$
(16)

Thus, based on Eqs. (15) and (16),

$$\begin{aligned}&\frac{1}{m+n}\sum ^{m+n}_{i=1}\hat{V}^2_{i}\{p, \theta (p)\}\overset{\mathcal {P}}{\longrightarrow }\sigma ^2(p). \end{aligned}$$

From Eqs. (5) and (13),

$$\begin{aligned} v^2_{m ,n}\{p,\theta (p)\} \mathop { \rightarrow }\limits ^\mathcal{P} \sigma ^2(p). \end{aligned}$$

\(\square \)

Proof of Theorem 3:

Combining Lemmas 1 and 2, we can easily prove Theorem 3 by the standard arguments in Owen (1990). The details of the proof are omitted. \(\square \)
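
Theorem 3 is what makes the interval construction operational: treating the pseudo-values \(\hat{V}_i\{p,\theta \}\) as an i.i.d. sample with mean zero, the JEL log-ratio is computed through the usual Lagrange multiplier (Owen 1990), and the confidence interval collects the \(\theta \) values at which \(-2\log R\) stays below the \(\chi ^2_1\) quantile. The sketch below is a hedged illustration, assuming the `pseudo_values` helper from the sketch after the Abstract; the grid, bandwidth, and solver tolerances are our own choices.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import chi2

def neg2_log_el(V):
    """-2 log empirical likelihood ratio for a zero mean of V (standard EL computation)."""
    V = np.asarray(V, dtype=float)
    if V.min() >= 0.0 or V.max() <= 0.0:               # zero outside the convex hull of the V_i
        return np.inf
    g = lambda lam: np.sum(V / (1.0 + lam * V))        # score equation for the Lagrange multiplier
    eps = 1e-10
    lam = brentq(g, -1.0 / V.max() + eps, -1.0 / V.min() - eps)
    return 2.0 * np.sum(np.log1p(lam * V))

def jel_confidence_interval(p, X, Y, h, level=0.95, grid=None):
    """Invert -2 log R(p, theta) <= chi^2_{1, level} over a grid of candidate theta values.
    pseudo_values(p, theta, X, Y, h) is assumed from the sketch after the Abstract."""
    cutoff = chi2.ppf(level, df=1)
    if grid is None:
        center = np.quantile(X, p) - np.quantile(Y, p)         # rough plug-in estimate of theta(p)
        grid = center + np.linspace(-1.0, 1.0, 201)            # illustrative grid width only
    keep = [t for t in grid
            if neg2_log_el(pseudo_values(p, t, X, Y, h)) <= cutoff]
    return (min(keep), max(keep)) if keep else (np.nan, np.nan)
```

The coverage probability and average length of intervals produced in this way are exactly the quantities examined in the paper's simulation study.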

Cite this article

Yang, H., Zhao, Y. Smoothed jackknife empirical likelihood for the difference of two quantiles. Ann Inst Stat Math 69, 1059–1073 (2017). https://doi.org/10.1007/s10463-016-0576-7
