
Three-stage confidence intervals for a linear combination of locations of two negative exponential distributions

Abstract

Mukhopadhyay and Padmanabhan (Metrika 40:121–128, 1993) considered the construction of fixed-width confidence intervals, via triple sampling, for the difference of the location parameters of two negative exponential distributions when the scale parameters are unknown and unequal. Under the same setting, this paper deals with fixed-width confidence interval estimation for a linear combination of the location parameters, using the above-mentioned three-stage procedure.


References

  • Chow YS, Yu KF (1981) The performance of a sequential procedure for the estimation of the mean. Ann Stat 9:184–189

  • Gut A (2005) Probability: a graduate course. Springer, Berlin

  • Hall P (1981) Asymptotic theory of triple sampling for sequential estimation of a mean. Ann Stat 9:1229–1238

  • Hamdy HI (1997) Performance of fixed width confidence intervals under type II errors: the exponential case. S Afr Stat J 31:259–269

  • Hamdy HI, Al-Mahmeed M, Nigm A, Son MS (1989) Three-stage estimation procedure for the exponential location parameters. Metron 47:279–294

  • Hamdy HI, Son MS, Yousef AS (2015) Sensitivity analysis of multistage sampling to departure of an underlying distribution from normality with computer simulations. Seq Anal 34:532–558

  • Honda T (1992) Estimation of the mean by three stage procedure. Seq Anal 11:73–89

  • Isogai E, Futschik A (2010) Sequential estimation of a linear function of location parameters of two negative exponential distributions. J Stat Plan Inference 140:2416–2424

  • Lombard F, Swanepoel JWH (1978) On finite and infinite confidence sequences. S Afr Stat J 12:1–24

  • Mukhopadhyay N (1990) Some properties of a three-stage procedure with applications in sequential analysis. Sankhyā Ser A 52:218–231

  • Mukhopadhyay N, Hamdy HI (1984) On estimating the difference of location parameters of two negative exponential distributions. Can J Stat 12:67–76

  • Mukhopadhyay N, Mauromoustakos A (1987) Three-stage estimation procedures for the negative exponential distributions. Metrika 34:83–93

  • Mukhopadhyay N, Padmanabhan AR (1993) A note on three-stage confidence intervals for the difference of locations: the exponential case. Metrika 40:121–128

  • Mukhopadhyay N, Zacks S (2007) Bounded risk estimation of linear combinations of the location and scale parameters in exponential distributions under two-stage sampling. J Stat Plan Inference 137:3672–3686

  • Singh RK, Chaturvedi A (1991) A note on sequential estimation of the difference between location parameters of two negative exponential distributions. J Indian Stat Assoc 29:107–114

  • Son MS, Haugh LD, Hamdy HI, Costanza MC (1997) Controlling type II error while constructing triple sampling fixed precision confidence intervals for the normal mean. Ann Inst Stat Math 49:681–692

  • Yousef AS, Kimber AC, Hamdy HI (2013) Sensitivity of normal-based triple sampling sequential point estimation to the normality assumption. J Stat Plan Inference 143:1606–1618


Acknowledgements

The authors thank the anonymous referees for their constructive comments and suggestions, which helped improve the paper. The first author was supported by JSPS KAKENHI Grant Number 26400193.

Author information


Corresponding author

Correspondence to Chikara Uno.

Appendix

In this appendix we establish the uniform integrability of \(\{\tilde{S}_i^{\,p},\,0<d\le d_0\}\) for each \(p\ge 1\) in Lemma 3. Let \(Y_2,\,Y_3,\ldots \) be a sequence of independent and identically distributed positive continuous random variables with finite mean \(\theta =E(Y_2)\). We consider the following three-stage procedure defined by Mukhopadhyay (1990):

$$\begin{aligned} R=R(d)={\max }\left\{ m,\ N_1\right\} \quad \text{ and }\quad S=S(d)={\max }\left\{ R,\ N_2\right\} , \end{aligned}$$

where \(N_1=\langle \rho \lambda \overline{Y}_m \rangle +1\), \(N_2=\langle \lambda \overline{Y}_R \rangle +1\), \(0<\rho <1\), \(0<\lambda <\infty \), \(\overline{Y}_n=(n-1)^{-1}\sum _{i=2}^n Y_i\) for \(n\ge 2\) and \(m=m(d)\, (\ge 2)\) is the starting sample size such that \(m\rightarrow \infty \) as \(d\rightarrow 0\). Let \(n^*=\lambda \theta \), and suppose that the following conditions hold:

$$\begin{aligned} \lambda =\lambda (m)\rightarrow \infty \ \text{ as } m\rightarrow \infty ,\quad \limsup _{d\rightarrow 0}{m}/n^* <\rho ^2 \end{aligned}$$
(21)

and for some \(r>1\), as \(m\rightarrow \infty \)

$$\begin{aligned} n^*=O(m^r) . \end{aligned}$$
(22)

In the following we assume that \(E(Y_2^p)<\infty \) for some \(p\ge 2\) and let M denote a generic positive constant, not depending on d. Let \(V_j=Y_j/\theta \) for \(j=2,3,\cdots \) and \(\overline{V}_n=\sum _{j=2}^{n}V_j/(n-1)\). Then \(N_1=\langle \rho n^* \overline{V}_m \rangle +1\) and \(N_2=\langle n^* \overline{V}_R \rangle +1\). For \(\varepsilon \in (0,1)\), define a set \(B_{m,\varepsilon }\) by \(B_{m,\varepsilon }=\left\{ \overline{V}_m<1-\varepsilon \right\} \).
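To fix ideas, the following Python sketch simulates the sampling rule \(R=\max \{m,\ N_1\}\) and \(S=\max \{R,\ N_2\}\) from a stream of observations. It is only an illustration: the function name and the numerical values are not from the paper, and \(\langle x\rangle \) is read here as the integer part of \(x\).

```python
import numpy as np

def three_stage_sample_size(y, m, rho, lam):
    """Illustrative three-stage rule in the style of Mukhopadhyay (1990):
    R = max{m, N_1}, S = max{R, N_2}, with N_1 = <rho*lam*Ybar_m> + 1 and
    N_2 = <lam*Ybar_R> + 1, where Ybar_n averages Y_2, ..., Y_n (n - 1 terms)
    and <x> is taken as the integer part of x."""
    y = np.asarray(y, dtype=float)      # stream Y_2, Y_3, ... (index 0 holds Y_2)
    ybar_m = y[: m - 1].mean()          # pilot mean based on m - 1 observations
    n1 = int(np.floor(rho * lam * ybar_m)) + 1
    r = max(m, n1)                      # second-stage sample size R
    ybar_r = y[: r - 1].mean()          # updated mean based on R - 1 observations
    n2 = int(np.floor(lam * ybar_r)) + 1
    s = max(r, n2)                      # final sample size S
    return r, s

# Hypothetical usage with exponential data of mean theta, so that n* = lam * theta.
rng = np.random.default_rng(0)
theta, lam, rho, m = 1.0, 200.0, 0.5, 10
y = rng.exponential(theta, size=5000)   # ample stream of observations
print(three_stage_sample_size(y, m, rho, lam))
```

For small d (large \(\lambda \)) the returned S concentrates around \(n^*=\lambda \theta \), which is the regime studied below.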

Lemma 8

As \(d\rightarrow 0\), we have \(P(B_{m,\varepsilon })=O(m^{-p/2})\).

Proof

Since \(\{\overline{V}_n -1,\ n\ge m\}\) is a reversed martingale, we have from the submartingale inequality,

$$\begin{aligned} P(B_{m,\varepsilon })\le P\left\{ \sup _{n\ge m}\left| \overline{V}_n -1\right| >\varepsilon \right\} \le \varepsilon ^{-p}E\left| \overline{V}_m -1\right| ^{p}=O(m^{-p/2}) . \end{aligned}$$

\(\square \)
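The final equality in the display is the standard moment bound for sums of i.i.d. centered variables; spelled out (a supplementary step, with \(C_p\) a constant depending only on \(p\)),

$$\begin{aligned} E\left| \overline{V}_m -1\right| ^{p}=\frac{1}{(m-1)^{p}}\,E\left| \sum _{j=2}^{m}(V_j-1)\right| ^{p}\le \frac{C_p}{(m-1)^{p}}\,(m-1)^{p/2}\,E|V_2-1|^{p}=O(m^{-p/2}), \end{aligned}$$

by the Marcinkiewicz–Zygmund inequality, since \(p\ge 2\) and \(E|V_2-1|^{p}<\infty \).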

Lemma 9

As \(d\rightarrow 0\), we have

$$\begin{aligned} P(R\ne N_1)=O(m^{-p/2})\quad \text{ and }\quad P(S\ne N_2)=O(m^{-p/2}). \end{aligned}$$
(23)

Proof

Fix \(\varepsilon _0\in (0,1-\rho )\). By (21) and Lemma 8, for sufficiently small d,

$$\begin{aligned} P(R\ne N_1)\le P(\overline{V}_m<m/(\rho n^*))\le P(\overline{V}_m<1-\varepsilon _0)=O(m^{-p/2}), \end{aligned}$$

which implies the left side of (23). Next,

$$\begin{aligned}&P(S\ne N_2)\le P(\langle \rho \lambda \overline{Y}_m \rangle +1>\langle \lambda \overline{Y}_R \rangle +1, R= N_1)+P(R\ne N_1)\\&\le P(\rho \lambda \overline{Y}_m > \lambda \overline{Y}_R ) +O(m^{-p/2}) \end{aligned}$$

from the left side of (23). The first term is evaluated as follows.

$$\begin{aligned}&P(\rho \lambda \overline{Y}_m> \lambda \overline{Y}_R )\\&= P(\rho n^* \overline{V}_m> n^* \overline{V}_R, \overline{V}_R<1-\varepsilon _0)+P(\rho n^* \overline{V}_m> n^* \overline{V}_R, \overline{V}_R\ge 1-\varepsilon _0)\\&\le P\left( \left| \overline{V}_R-1\right|>\varepsilon _0\right) +P(\rho \overline{V}_m>1-\varepsilon _0). \end{aligned}$$

As in the proof of Lemma 8, we have that \(P\left( \left| \overline{V}_R-1\right| >\varepsilon _0\right) =O(m^{-p/2})\) and \(P(\rho \overline{V}_m>1-\varepsilon _0)=P\left( \left| \overline{V}_m-1\right| >(1-\varepsilon _0-\rho )/\rho \right) =O(m^{-p/2}).\) Hence, the right side of (23) holds.\(\square \)

Lemma 10

If \(0<q<p/(2r)\), where r is as in (22), then \(\left\{ (n^*/R)^{q}, 0<d\le d_0\right\} \) and \(\left\{ (n^*/S)^{q}, 0<d\le d_0\right\} \) are uniformly integrable for some \(d_0>0\).

Proof

Note that \((n^*/S)^{q}\le (n^*/R)^{q}\). From Lemma 1 of Chow and Yu (1981), it suffices to show that \( P(R<\varepsilon _1 n^*)=o(n^{*\,-q}) \) for some \(\varepsilon _1 \in (0,1)\). By choosing \(\varepsilon _1 \in (0,\rho )\), we have from (22)

$$\begin{aligned} P(R<\varepsilon _1 n^*)\le P(\rho \overline{V}_m<\varepsilon _1 )\le P\left( \left| \overline{V}_m-1\right| >1-\varepsilon _1/\rho \right) =o({n^*}^{-q}). \end{aligned}$$

\(\square \)

Lemma 11

For \(0<q\le p\), \(\left\{ (R/n^*)^{q}, 0<d\le d_0\right\} \) and \(\left\{ (S/n^*)^{q}, 0<d\le d_0\right\} \) are uniformly integrable for some \(d_0>0\).

Proof

From Corollary 4.1 of Gut (2005), if \(E\left\{ \sup _{0<d\le d_0}(R/n^*)^q\right\} <\infty \), then \(\left\{ (R/n^*)^{q}, 0<d\le d_0\right\} \) is uniformly integrable. By the definition of R, Doob’s maximal inequality for the reversed martingale and (21),

$$\begin{aligned}&E\left\{ \sup _{0<d\le d_0}(R/n^*)^{q}\right\} \le M\cdot E\left[ \sup _{0<d\le d_0}\{(m/n^*)^{q}+(\rho \overline{V}_m+(1/n^*))^{q}\}\right] \\&\quad \le M +M \rho ^{q}E\left( \sup _{0<d\le d_0}\overline{V}_m^q\right) \le M+M E\left( \sup _{n\ge 2}\overline{V}_n^{q}\right) \le M\quad \text{ for } 1<q\le p, \end{aligned}$$

which yields the uniform integrability of \(\left\{ (R/n^*)^{q}, 0<d\le d_0\right\} \) for \(1<q\le p\). When \(0<q\le 1\), we have that \(\sup _{0<d\le d_0}E(R/n^*)^{q\zeta }= \sup _{0<d\le d_0}E(R/n^*)^{p}<\infty \) for \(\zeta =p/q>1\). Therefore, \(\left\{ (R/n^*)^{q}, 0<d\le d_0\right\} \) is uniformly integrable for \(0<q\le p\). Next, we shall show the uniform integrability of \(\left\{ (S/n^*)^{q}, 0<d\le d_0\right\} \). Since \(S\le N_2+R\), it suffices to show that \(E\left\{ \sup _{0<d\le d_0}(N_2/n^*)^{q}\right\} <\infty \) which can be proved similarly. \(\square \)

Lemma 12

For \(0<q\le p\),

$$\begin{aligned} \left\{ \left| {n^*}^{-\frac{1}{2}}\sum _{j=2}^{R}(V_j-1)\right| ^{q}\!, 0<d\le d_0\right\} \ \text{ and }\ \left\{ \left| {n^*}^{-\frac{1}{2}}\sum _{j=2}^{S}(V_j-1)\right| ^{q}\!, 0<d\le d_0\right\} \end{aligned}$$

are uniformly integrable for some \(d_0>0\).

Proof

Follows from Lemma 5 of Chow and Yu (1981) and Lemma 11. \(\square \)

Proposition 2

We assume that \(E(Y_2^p)<\infty \) for some \(p\ge 2\). Let \(\tilde{S}={n^*}^{-\frac{1}{2}}(S-n^*)\). Under the conditions (21) and (22), if \(0<q< p/(2r+1)\), then \(\left\{ \tilde{S}^{q}, 0<d\le d_0\right\} \) is uniformly integrable for some \(d_0>0\).

Proof

Now,

$$\begin{aligned} |\tilde{S}^q|&= |n^{*\,-1/2}(S-n^*)|^{q}\\&= |n^{*\,-1/2}(\langle n^* \overline{V}_R \rangle +1-n^*)|^{q}I(S=N_2) +|n^{*\,-1/2}(R-n^*)|^{q}I(S\ne N_2)\\&\equiv K_1+K_2,\quad \text{ say }. \end{aligned}$$

Since \(K_3\equiv n^{*\,-1/2}(\langle n^* \overline{V}_R \rangle +1-n^* \overline{V}_R)\le n^{*\,-1/2}\le 1\) and \(0<R/(R-1)\le 2\), we have for some \(\zeta >1\), \(u=2r+1\) and \(v=\frac{1}{2r}+1\),

$$\begin{aligned}&E(K_1 ^{\zeta }) \le \ E|n^{*\,-1/2}(n^* \overline{V}_R -n^*)+ K_3|^{q\zeta } \\&\le \ M E\left| {n^*}^{-\frac{1}{2}}\sum _{j=2}^{R}(V_j-1)\cdot (R/(R-1))\cdot (n^*/R)\right| ^{q\zeta }+M \\&\le \ M \left\{ E\left| {n^*}^{-\frac{1}{2}}\sum _{j=2}^{R}(V_j-1)\right| ^{u q \zeta }\right\} ^{\frac{1}{u}} \left\{ E(n^*/R)^{v q \zeta }\right\} ^{\frac{1}{v}}+M=O(1) \end{aligned}$$

by Lemmas 10 and 12. Finally, for some \(\zeta >1\), \(u_0=r+1\) and \(v_0=\frac{1}{r}+1\), we have from (23) and Lemma 11

$$\begin{aligned} E(K_2^{\zeta })&\le M\, {n^*}^{\frac{1}{2} q \zeta }\{E(R/n^*)^{u_0 q\zeta }+1\}^{\frac{1}{u_0}}\left\{ P(S\ne N_2)\right\} ^{\frac{1}{v_0}} = O(m^{q\zeta r/2-p/(2v_0)})\\&= O(1). \end{aligned}$$

Hence, the proposition is proved.\(\square \)
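The exponent bookkeeping behind the two Hölder applications above can be checked directly (a supplementary remark, not part of the original proof):

$$\begin{aligned} \frac{1}{u}+\frac{1}{v}=\frac{1}{2r+1}+\frac{2r}{2r+1}=1, \qquad \frac{1}{u_0}+\frac{1}{v_0}=\frac{1}{r+1}+\frac{r}{r+1}=1, \end{aligned}$$

and, since \(q<p/(2r+1)\), one may choose \(\zeta >1\) so close to 1 that \(uq\zeta \le p\), \(vq\zeta <p/(2r)\), \(u_0 q\zeta \le p\) and \(q\zeta r/2\le p/(2v_0)\); these are exactly the ranges required by Lemmas 10, 11 and 12, and they make the exponent of m in the bound for \(E(K_2^{\zeta })\) nonpositive.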

Proof of the uniform integrability

We will show the uniform integrability of \(\{\tilde{S}_i^{\,p},\ 0<d\le d_0\}\) for each \(p\ge 1\). Let \(Y_{i j}^{\prime }=Y_{i j}/\sigma _i\) and \(C_i=\lambda _i=a_{*}\sigma _i d^{-1}\), where \(Y_{ij}\) has the exponential distribution \(\mathrm{Exp}(0,\sigma _i)\). Then \(Y_{i2}^{\prime },\,Y_{i3}^{\prime },\ldots \) are i.i.d. random variables distributed as \(\mathrm{Exp}(0,1)\), and \(R_i\) and \(S_i\) defined by (10) can be written as \(R_i={\max }\left\{ m,\ N_{1i}\right\} \) and \(S_i={\max }\left\{ R_i,\ N_{2i}\right\} \), where \(N_{1i}=\langle \rho _i \lambda _i \overline{Y^{\prime }}_{i m} \rangle +1\), \(N_{2i}=\langle \lambda _i \overline{Y^{\prime }}_{i R_i} \rangle +1\) and \(0<\rho _i<1\). Put \(n^*=C_i,\, \lambda =\lambda _i,\, \rho =\rho _i,\,R=R_i,\,S=S_i\), \(Y_j=Y_{ij}\) and \(V_j=Y_{ij}^{\prime }\) for \(i=1,2\). Since \(E(Y_{i 2}^p)<\infty \) for all \(p>0\) and \(m=O(d^{-1/r})\) for some \(r>1\), the conditions (21) and (22) are satisfied. Therefore, by Proposition 2, \(\{\tilde{S}_i^{\,p},\ 0<d\le d_0\}\) is uniformly integrable for some \(d_0>0\).

\(\square \)
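As a purely illustrative check of this instantiation (not part of the paper), the sketch below reuses the function three_stage_sample_size from the earlier block with standardized \(\mathrm{Exp}(0,1)\) observations, i.e. \(\sigma _i=1\) so that \(n^*=C_i=\lambda _i=a_{*}/d\); the values of \(a_{*}\), \(\rho _i\), r, the grid of d and the replication count are assumptions made only for the example.

```python
import numpy as np

# Monte Carlo illustration (assumed values throughout): with sigma_i = 1 we have
# n* = C_i = lambda_i = a_*/d, and the pilot size m = m(d) is taken of order d^{-1/r}.
# Reuses three_stage_sample_size from the earlier sketch.
rng = np.random.default_rng(1)
a_star, rho, r_exp = 2.0, 0.5, 2.0
for d in (0.2, 0.1, 0.05):
    lam = a_star / d                               # n* = lambda_i = a_* * sigma_i / d
    m = max(2, int(np.ceil(d ** (-1.0 / r_exp))))  # starting sample size m = O(d^{-1/r})
    vals = []
    for _ in range(2000):
        y = rng.exponential(1.0, size=int(6 * lam) + m)    # stream Y'_{i2}, Y'_{i3}, ...
        _, s = three_stage_sample_size(y, m, rho, lam)
        vals.append((s - lam) / np.sqrt(lam))              # standardized stopping time
    # Bounded second moments as d decreases are what the uniform
    # integrability of {tilde S_i^p} reflects (here p = 2).
    print(f"d = {d:4.2f}:  mean of tilde S^2 ~ {np.mean(np.square(vals)):.3f}")
```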



Cite this article

Isogai, E., Uno, C. Three-stage confidence intervals for a linear combination of locations of two negative exponential distributions. Metrika 81, 85–103 (2018). https://doi.org/10.1007/s00184-017-0635-y
