
On purely sequential estimation of an inverse Gaussian mean

Published in Metrika

Abstract

The first part of this paper deals with developing a purely sequential methodology for the point estimation of the mean \(\mu \) of an inverse Gaussian distribution having an unknown scale parameter \(\lambda \). We assume a weighted squared error loss function and aim at controlling the associated risk function per unit cost by bounding it from above by a known constant \(\omega \). We also establish first-order and second-order asymptotic properties of our stopping rule. The second part of this paper deals with obtaining a purely sequential fixed accuracy confidence interval for the unknown mean \(\mu \), assuming that the scale parameter \(\lambda \) is known. First-order asymptotic efficiency and asymptotic consistency properties are also built of our proposed procedures. We then provide extensive sets of simulation studies and real data analysis using data from fatigue life analysis to show encouraging performances of our proposed stopping strategies.


References

  • Anscombe FJ (1952) Large sample theory of sequential estimation. Proc Camb Philos Soc 48:600–607
  • Banerjee S, Mukhopadhyay N (2016) A general sequential fixed-accuracy confidence interval estimation methodology for a positive parameter: illustrations using health and safety data. Ann Inst Stat Math 68:541–570
  • Bapat SR (2018a) Purely sequential fixed accuracy confidence intervals for \(P(X < Y)\) under bivariate exponential models. Am J Math Manag Sci. https://doi.org/10.1080/01966324.2018.1465867
  • Birnbaum ZW, Saunders SC (1958) A statistical model for life length of material. J Am Stat Assoc 53:151–160
  • Birnbaum ZW, Saunders SC (1969) Estimation for a family of life distributions. J Appl Prob 6:319–327
  • Chaturvedi A (1996) Correction to sequential estimation of an inverse Gaussian parameter with prescribed proportional closeness. Calcutta Stat Assoc Bull 35:211–212
  • Chaturvedi A, Pandey SK, Gupta M (1991) On a class of asymptotically risk-efficient sequential procedures. Scand Actuar J 1:87–96
  • Chhikara RS, Folks JL (1989) The inverse Gaussian distribution: theory, methodology and applications. Marcel Dekker, New York
  • Chow YS, Robbins H (1965) On the asymptotic theory of fixed width sequential confidence intervals for the mean. Ann Math Stat 36:457–462
  • Edgeman RL, Salzburg PM (1991) A sequential sampling plan for the inverse Gaussian mean. Stat Pap 32:45–53
  • Folks JL, Chhikara RS (1978) The inverse Gaussian distribution and its statistical application—a review. J R Stat Soc B 40:263–289
  • Ghosh M, Mukhopadhyay N (1975) Asymptotic normality of stopping times in sequential analysis. Unpublished report
  • Ghosh M, Mukhopadhyay N (1979) Sequential point estimation of the mean when the distribution is unspecified. Commun Stat Ser A 8:637–652
  • Ghosh M, Mukhopadhyay N (1981) Consistency and asymptotic efficiency of two-stage and sequential procedures. Sankhya Ser A 43:220–227
  • Ghosh M, Mukhopadhyay N, Sen PK (1997) Sequential estimation. Wiley, New York
  • Johnson N, Kotz S, Balakrishnan N (1994) Continuous univariate distributions, vol 1. Wiley, New York
  • Joshi S, Shah M (1990) Sequential analysis applied to testing the mean of an inverse Gaussian distribution with known coefficient of variation. Commun Stat 19(4):1457–1466
  • Lai TL, Siegmund D (1977) A nonlinear renewal theory with applications to sequential analysis I. Ann Stat 5:946–954
  • Lai TL, Siegmund D (1979) A nonlinear renewal theory with applications to sequential analysis II. Ann Stat 7:60–76
  • Leiva V, Hernandez H, Sanhueza A (2008b) An R package for a general class of inverse Gaussian distributions. J Stat Softw 26(4):1–21
  • Mukhopadhyay N (1988) Sequential estimation problems for negative exponential populations. Commun Stat Theory Methods Ser A 17:2471–2506
  • Mukhopadhyay N, Banerjee S (2014) Purely sequential and two-stage fixed-accuracy confidence interval estimation methods for count data for negative binomial distributions in statistical ecology: one-sample and two-sample problems. Seq Anal 33:251–285
  • Mukhopadhyay N, Bapat SR (2016a) Multistage point estimation methodologies for a negative exponential location under a modified linex loss function: illustrations with infant mortality and bone marrow data. Seq Anal 35:175–206
  • Mukhopadhyay N, Bapat SR (2016b) Multistage estimation of the difference of locations of two negative exponential populations under a modified linex loss function: real data illustrations from cancer studies and reliability analysis. Seq Anal 35:387–412
  • Mukhopadhyay N, Bapat SR (2017a) Purely sequential bounded-risk point estimation of the negative binomial mean under various loss functions: one-sample problem. Ann Inst Stat Math. https://doi.org/10.1007/s10463-017-0620-2
  • Mukhopadhyay N, Bapat SR (2017b) Purely sequential bounded-risk point estimation of the negative binomial means under various loss functions: multisample problems. Seq Anal 36(4):490–512
  • Mukhopadhyay N, de Silva BM (2009) Sequential methods and their applications. CRC Press, Boca Raton
  • Mukhopadhyay N, Solanky TKS (1994) Multistage selection and ranking procedures. Marcel Dekker, New York
  • R Core Team (2014) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna
  • Schrödinger E (1915) Zur Theorie der Fall- und Steigversuche an Teilchen mit Brownscher Bewegung. Physikalische Zeitschrift 16:289–295
  • Sen PK (1981) Sequential nonparametrics. Wiley, New York
  • Seshadri V (1993) The inverse Gaussian distribution—a case study in exponential families. Clarendon Press, Oxford
  • Seshadri V (1999) The inverse Gaussian distribution: statistical theory and applications. Springer, New York
  • Wiener N (1939) The ergodic theorem. Duke Math J 5:1–18
  • Woodroofe M (1977) Second order approximation for sequential point and interval estimation. Ann Stat 5:984–995
  • Woodroofe M (1982) Nonlinear renewal theory in sequential analysis. CBMS 39, SIAM, Philadelphia

Acknowledgements

The author would like to sincerely thank the editor-in-chief, Dr. Hajo Holzmann, and the anonymous referee for their valuable and constructive comments, which greatly improved an earlier version of this manuscript.

Author information

Correspondence to Sudeep R. Bapat.

Ethics declarations

Conflicts of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Appendix

Proof of Theorem 2.1

Part (i)

From (9) we have,

$$\begin{aligned} \frac{1}{\sqrt{c\omega }}\left( \frac{1}{\widehat{\lambda }_{N}}\right) ^{(k+1)/2}\le N\le \frac{1}{\sqrt{c\omega }}\left( \frac{1}{\widehat{\lambda }_{N-1}}\right) ^{(k+1)/2}+m. \end{aligned}$$
(20)

On dividing throughout by \(n^{*}\) we get,

$$\begin{aligned} \left( \frac{\lambda }{\widehat{\lambda }_{N}}\right) ^{(k+1)/2}\le \frac{N}{n^{*}}\le \left( \frac{\lambda }{\widehat{\lambda }_{N-1}}\right) ^{(k+1)/2}+\frac{m}{n^{*}}. \end{aligned}$$
(21)

The proof follows by taking limits throughout and noting that \(N\rightarrow \infty \) w.p.1, \(\widehat{\lambda }_{N}\rightarrow \lambda \) w.p.1 and \(m/n^{*}\rightarrow 0\) as \(\omega \rightarrow 0\).
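The first-order property in part (i) can also be illustrated numerically. The following Python sketch simulates the boundary crossing in (20) for inverse Gaussian data: it assumes \(\widehat{\lambda }_{n}\) is the usual maximum likelihood estimator of \(\lambda \), draws variates via the standard Michael–Schucany–Haas transformation, and all function names and parameter values below are illustrative choices rather than part of the paper.

```python
import numpy as np

def rinvgauss(n, mu, lam, rng):
    """Sample n inverse Gaussian IG(mu, lam) variates
    (Michael-Schucany-Haas transformation method)."""
    y = rng.standard_normal(n) ** 2
    x = mu + (mu ** 2 * y) / (2 * lam) \
        - (mu / (2 * lam)) * np.sqrt(4 * mu * lam * y + (mu * y) ** 2)
    u = rng.uniform(size=n)
    return np.where(u <= mu / (mu + x), x, mu ** 2 / x)

def lambda_hat(x):
    """Usual MLE of the scale parameter lambda from an IG sample."""
    return 1.0 / float(np.mean(1.0 / x - 1.0 / x.mean()))

def stopping_time(mu, lam, c, omega, k, m, rng):
    """First n >= m with n >= (c*omega)^{-1/2} (1/lambda_hat_n)^{(k+1)/2},
    mirroring the boundary crossing in (20)."""
    x = list(rinvgauss(m, mu, lam, rng))
    n = m
    while True:
        lh = lambda_hat(np.asarray(x))
        if n >= (c * omega) ** -0.5 * (1.0 / lh) ** ((k + 1) / 2):
            return n
        x.append(float(rinvgauss(1, mu, lam, rng)[0]))
        n += 1

# Illustrative check of N/n* -> 1: here n* = (c*omega)^{-1/2} lambda^{-(k+1)/2} = 50
rng = np.random.default_rng(1)
mu, lam, c, omega, k, m = 1.0, 2.0, 1.0, 1e-4, 1, 10
n_star = (c * omega) ** -0.5 * lam ** (-(k + 1) / 2)
ratio = np.mean([stopping_time(mu, lam, c, omega, k, m, rng)
                 for _ in range(200)]) / n_star
```

With these choices \(n^{*}=50\), and the averaged ratio \(N/n^{*}\) settles near 1 as \(\omega \) decreases, in line with part (i).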

Part (ii)

From the right-hand side of (21), using \(m/n^{*}\le m\), we have (for sufficiently large \(n^{*}\)),

$$\begin{aligned} \frac{N}{n^{*}}\le \left( \frac{\lambda }{\widehat{\lambda }_{N-1}}\right) ^{(k+1)/2}+m. \end{aligned}$$
(22)

Now denoting \(\sup _{n\ge 2}(1/\widehat{\lambda }_{n})\) by W, we can claim w.p.1:

$$\begin{aligned} N/n^{*}\le \left( \lambda W\right) ^{(k+1)/2}+m. \end{aligned}$$
(23)

The right-hand side of (23) is free from \(\omega \), and by Wiener’s (1939) ergodic theorem all positive powers of \(N/n^{*}\) are uniformly integrable. The proof is then complete upon using part (i).

Part (iii)

As seen before, N from (9) can be expressed as \(J+1\), where J is as defined in (10). Using Lemma 2.3 from Woodroofe (1977) with \(b=1/2\) we claim:

$$\begin{aligned} P\left( J<\frac{1}{2}n^{*}\right) =O\left( n^{* ^{-\frac{2}{k+1}(m-1)}}\right) . \end{aligned}$$
(24)

Now since \(N=J+1\) w.p.1, we have \(0<\left( \frac{n^{*}}{N}\right) ^{s}<\left( \frac{n^{*}}{J}\right) ^{s}\) and it suffices to show that \(\left( \frac{n^{*}}{J}\right) ^{s}\) is uniformly integrable.

We can now split \(\left( \frac{n^{*}}{J}\right) ^{s}\) into two parts, namely \(\left( \frac{n^{*}}{J}\right) ^{s}I(J>\frac{1}{2}n^{*})\) and \(\left( \frac{n^{*}}{J}\right) ^{s}I(J<\frac{1}{2}n^{*})\), and show that the first part is \(1+o(1)\) while the second is \(o(1)\), under suitable restrictions on m. Now we have,

$$\begin{aligned} \left( \frac{n^*}{J}\right) ^s I\left( J>\frac{1}{2}n^*\right) <2^s, \end{aligned}$$
(25)

and so \(\left( \frac{n^*}{J}\right) ^s I(J>\frac{1}{2}n^*)\) is uniformly integrable. Moreover, \(\left( \frac{n^*}{J}\right) ^s I(J>\frac{1}{2}n^*)\overset{P}{\rightarrow }1\), and hence on applying the dominated convergence theorem we must have,

$$\begin{aligned} E\left[ \left( \frac{n^*}{J}\right) ^s I\left( J>\frac{1}{2}n^*\right) \right] =1+o(1). \end{aligned}$$
(26)

Also from (31) we have,

$$\begin{aligned} E\left[ \left( \frac{n^*}{J}\right) ^s I\left( J<\frac{1}{2}n^*\right) \right] \le \left( \frac{n^*}{m-1}\right) ^sP\left( J<\frac{1}{2}n^*\right) =O\left( n^{* ^{-\frac{2}{k+1}(m-1)+s}}\right) , \end{aligned}$$
(27)

which is o(1) if \(m>1+\frac{k+1}{2}s\). Hence combining (26) and (27) we have that \(\left( \frac{n^*}{J}\right) ^s\) is uniformly integrable, which gives,

$$\begin{aligned} E\left[ (n^*/J)^s\right] =1+o(1), \end{aligned}$$
(28)

if \(m>1+\frac{k+1}{2}s\) and \(s>0\).

Part (iv)

From (2), the associated loss function is given by:

$$\begin{aligned} L_{N}=\frac{(\overline{X}_{N}-\mu )^{2}}{\mu ^{3}}. \end{aligned}$$

Now, recalling \(\hbox {RPUC}_{N}\) from (5) and utilizing the facts that N is an observable finite random variable and the event \(N=n\) is measurable only with respect to \(\{\widehat{\lambda }_{j};\) \(m\le j\le n\}\) for all fixed \(n\ge m\), we can evaluate \(E\left[ \text {RPUC}_{N}\right] \) as:

$$\begin{aligned}&E\left[ \text {RPUC}_{N}\right] \\&\quad =\underset{m\le n<\infty }{\sum }E\left[ \frac{L_{N}}{C_{N} }\mid N=n\right] P(N=n)\\&\quad =\underset{m\le n<\infty }{\sum }E\left[ \frac{L_{n}}{C_{n} }\mid N=n\right] P(N=n)\\&\quad =\underset{m\le n<\infty }{\sum }E\left[ \frac{L_{n}}{C_{n} }\right] P(N=n)\\&\quad =\underset{m\le n<\infty }{\sum }\frac{E[L_{n}]}{C_{n}} P(N=n)\\&\quad =\underset{m\le n<\infty }{\sum }\frac{R_{n}}{C_{n}}P(N=n)\\&\quad =\underset{m\le n<\infty }{\sum }\frac{1}{cn^{2}\lambda ^{k+1}}P(N=n)\\&\quad =\underset{m\le n<\infty }{\sum }\left( \frac{n^{*}}{n}\right) ^{2}\omega \, P(N=n). \end{aligned}$$

Thus we have:

$$\begin{aligned} \omega ^{-1}E\left[ \text {RPUC}_{N}\right] =E\left[ \left( \frac{n^{*} }{N}\right) ^{2}\right] , \end{aligned}$$

which is clearly \(1+o(1)\) by utilizing part (iii), when \(m>k+2\). \(\square \)

Proof of Theorem 2.2

Let us first prove the following:

$$\begin{aligned} E\left[ \left( J/n^{*}\right) ^{t}\right] =1+\left\{ t\eta _{k}+\frac{1}{2}t(t-1)p\right\} n^{*-1}+o\left( n^{*-1}\right) , \end{aligned}$$
(29)

for different values of t, where J comes from (10). By utilizing Theorem 3 in Ghosh and Mukhopadhyay (1979) and Theorem 2.3 in Woodroofe (1977), we define V where,

$$\begin{aligned} V=(J-n^*)^2/n^*\overset{\mathcal {L}}{\rightarrow }p\chi ^2_1, \end{aligned}$$
(30)

and V is also uniformly integrable if \(m>k+1\). Also, from Theorem 2.4 of Woodroofe (1977) we have,

$$\begin{aligned} E(J)=n^*+\eta +o(1), \end{aligned}$$
(31)

if \(m>k+1\).

Case 1 \(t=-1\)

$$\begin{aligned} E\{(n^*/J)\}=n^{*^{-1}}\{E((J-n^*)^2/J)-E(J-n^*)+n^*\}. \end{aligned}$$
(32)
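The algebraic identity behind (32) can be checked directly. Writing \((J-n^{*})^{2}/J=J-2n^{*}+n^{*2}/J\) and rearranging (a routine step, included here only for completeness):

```latex
\frac{(J-n^{*})^{2}}{J}=J-2n^{*}+\frac{n^{*2}}{J}
\quad\Longrightarrow\quad
\frac{n^{*}}{J}
=\frac{1}{n^{*}}\left\{\frac{(J-n^{*})^{2}}{J}-(J-n^{*})+n^{*}\right\},
```

since \(-J+2n^{*}=-(J-n^{*})+n^{*}\).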

One can now follow along similar lines as part (iii) of Theorem 2.1, and split \(E[(J-n^*)^2/J]\) into two parts and show:

$$\begin{aligned} E\left\{ \frac{(J-n^*)^2}{J}I\left( J>\frac{1}{2}n^*\right) \right\} =p+o(1), \end{aligned}$$
(33)

if \(m>(k+1)\) and,

$$\begin{aligned} E\left\{ \frac{(J-n^*)^2}{J}I\left( J<\frac{1}{2}n^*\right) \right\} =o(1), \end{aligned}$$
(34)

if \(m>2(k+1)\), by Lemma 2.3 of Woodroofe (1977). Now combining (31)–(34) the expansion follows, if \(m>2(k+1)\).

Case 2 \(t\in (-\infty , 0)-\{-1\}\)

Let \(t=-r\), where \(r>0\). Now,

$$\begin{aligned} E\{(J/n^*)^t\}=1-rE(J-n^*)n^{*-1}+\frac{1}{2}r(r+1)n^{*-1}E\{V/Q^{r+2}\}, \end{aligned}$$
(35)

where V comes from (30) and Q is a suitable random variable between \(J/n^*\) and 1. As before, one can now easily show:

$$\begin{aligned} E\{VQ^{-r-2}I(J>\frac{1}{2}n^*)\}=p+o(1), \end{aligned}$$
(36)

if \(m>(k+1)\) and,

$$\begin{aligned} E\left\{ VQ^{-r-2}I\left( J<\frac{1}{2}n^*\right) \right\}&\le \left\{ n^{*r+1}m^{-r}+n^{*r+3}m^{-r-2}\right\} P\left( J<\frac{1}{2}n^*\right) \nonumber \\&=o(1), \end{aligned}$$
(37)

if \(m>(r+3)(k+1)\), by Lemma 2.3 of Woodroofe (1977). Now combining (35)–(37) the expansion follows, if \(m>(r+3)(k+1)\) which is nothing but \(m>(3-t)(k+1)\).

Case 3 \(0<t\le 2\)

If \(t=1\) or \(t=2\), the result readily follows by noting (30)–(31) and from Woodroofe (1977), if \(m>k+1\). So we can restrict attention to \(t\in (0,2)-\{1\}\). Now, following the lines of Case 2, one can show \(E\{VQ^{t-2}I(J>\frac{1}{2}n^*)\}=p+o(1)\) if \(m>(k+1)\), and \(E\{VQ^{t-2}I(J<\frac{1}{2}n^*)\}=o(1)\) if \(m>(3-t)(k+1)\). Hence the result follows under this case as well.

Case 4 \(t>2\)

Following along the lines of Case 2, one can show \(E\{VQ^{t-2}I(J<2n^*)\}=p+o(1)\) if \(m>(k+1)\), and \(E\{VQ^{t-2}I(J>2n^*)\}=o(1)\), which follows from Lemma 2.2 of Woodroofe (1977). Hence the result is true under this case also.

This proves (29) under all possible cases, with the corresponding conditions on m. Finally, noting that \(N=J+1\) completes the proof of Theorem 2.2. \(\square \)

For proving Theorem 2.3 we first note that \(\omega ^{-1}E\left[ \text {RPUC}_{N}\right] =E\left[ \left( n^{*}/N\right) ^{2}\right] \), from part (iv) of Theorem 2.1. One can now exploit Theorem 2.2 by setting \(t=-2\).

Theorem 2.4 follows readily from a result in Ghosh and Mukhopadhyay (1975), which gives:

$$\begin{aligned} U=n^{*^{-1/2}}(N-n^*)\overset{\mathcal {L}}{\rightarrow }N(0,p), \end{aligned}$$

where p comes from (12). One may also refer to Theorem 2.4.3 or Theorem 2.4.8, part (ii) in Mukhopadhyay and Solanky (1994).

Proof of Theorem 3.1

Part (i)

From (18) one can get,

$$\begin{aligned} \frac{\widehat{\mu }_{N}}{\lambda }\left( \frac{z_{\alpha /2}}{\log d}\right) ^{2}\le N\le \frac{\widehat{\mu }_{N-1}}{\lambda }\left( \frac{z_{\alpha /2} }{\log d}\right) ^{2}+m. \end{aligned}$$
(38)

Dividing throughout by \(n^{*}\) we have,

$$\begin{aligned} \frac{\widehat{\mu }_{N}}{\mu }\le \frac{N}{n^{*}}\le \frac{\widehat{\mu }_{N-1}}{\mu }+\frac{m}{n^{*}}. \end{aligned}$$
(39)

The proof follows by taking limits throughout and noting that \(N\rightarrow \infty \) w.p.1, \(\widehat{\mu }_{N}\rightarrow \mu \) w.p.1 and \(m/n^{*}\rightarrow 0\) as \(d\rightarrow 1\).

Part (ii)

From the right hand side of (39) we have (for sufficiently large \(n^{*}\)),

$$\begin{aligned} N/n^{*}\le \frac{\widehat{\mu }_{N-1}}{\mu }+m. \end{aligned}$$
(40)

Now denoting \(\sup _{n\ge 2}(\widehat{\mu }_{n})\) by U, we can claim w.p.1:

$$\begin{aligned} N/n^{*}\le \frac{U}{\mu }+m. \end{aligned}$$
(41)

Since the right hand side of (41) is free from d, one can apply Wiener’s (1939) ergodic theorem to conclude the uniform integrability of \(N/n^{*}\). The proof is hence complete by using part (i).

Part (iii)

From (19), the confidence interval for \(\mu \) is \(C_{N}=\left[ d^{-1} \widehat{\mu }_{N},\text { }d\widehat{\mu }_{N}\right] \). We may now write,

$$\begin{aligned} P_{\mu }(\mu \in C_{N})= & {} P_{\mu }(d^{-1}\widehat{\mu }_{N}\le \mu \le d\widehat{\mu }_{N})\nonumber \\= & {} P_{\mu }(\log \widehat{\mu }_{N}-\log d\le \log \mu \le \log \widehat{\mu }_{N}+\log d)\nonumber \\= & {} P_{\mu }\left( \left| \log \widehat{\mu }_{N} -\log \mu \right| \le \log d\right) . \end{aligned}$$
(42)

We now invoke Anscombe’s (1952) random CLT and can thus write,

$$\begin{aligned} \frac{\sqrt{N}(\widehat{\mu }_{N}-\mu )}{\sigma }\overset{D}{\rightarrow }N(0,1) \hbox { and } \frac{\sqrt{n^{*}}(\widehat{\mu }_{N}-\mu )}{\sigma }\overset{D}{\rightarrow }N(0,1) \hbox { as } d\rightarrow 1. \end{aligned}$$
(43)

Using (43) and the Mann–Wald theorem, we introduce \(V_{N}\), where,

$$\begin{aligned} V_{N}=\frac{z_{\alpha /2}(\log \widehat{\mu }_{N}-\log \mu )}{\log d}\overset{D}{\rightarrow }N(0,1) \hbox { as } d\rightarrow 1. \end{aligned}$$
(44)
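The step from (43) to (44) is the delta method applied to the logarithm; a sketch, assuming \(\sigma ^{2}=\mu ^{3}/\lambda \) denotes the inverse Gaussian variance and recalling \(n^{*}=(\mu /\lambda )(z_{\alpha /2}/\log d)^{2}\):

```latex
\sqrt{n^{*}}\left(\log\widehat{\mu}_{N}-\log\mu\right)
\overset{D}{\rightarrow}
N\!\left(0,\frac{\sigma^{2}}{\mu^{2}}\right)
=N\!\left(0,\frac{\mu}{\lambda}\right),
\qquad
\frac{z_{\alpha/2}}{\log d}=\sqrt{\frac{n^{*}\lambda}{\mu}},
```

so that \(V_{N}=\sqrt{\lambda /\mu }\cdot \sqrt{n^{*}}(\log \widehat{\mu }_{N}-\log \mu )\overset{D}{\rightarrow }N(0,1)\).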

The proof is hence complete by using (42) and noting,

$$\begin{aligned} P_{\mu }(d^{-1}\widehat{\mu }_{N}\le \mu \le d\widehat{\mu }_{N})=P_{\mu } (|V_{N}|\le z_{\alpha /2})\rightarrow 1-\alpha \hbox { as } d\rightarrow 1. \end{aligned}$$
(45)
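As an empirical illustration of (45), the Python sketch below simulates the stopping rule implicit in (38) with known \(\lambda \) and checks the coverage of \(C_{N}=[d^{-1}\widehat{\mu }_{N},\ d\widehat{\mu }_{N}]\). The sampler, the hard-coded normal quantile, and all parameter choices are illustrative assumptions, not values from the paper.

```python
import numpy as np

Z_975 = 1.959964  # standard normal 97.5% quantile, i.e. alpha = 0.05

def rinvgauss(n, mu, lam, rng):
    """Sample n inverse Gaussian IG(mu, lam) variates
    (Michael-Schucany-Haas transformation method)."""
    y = rng.standard_normal(n) ** 2
    x = mu + (mu ** 2 * y) / (2 * lam) \
        - (mu / (2 * lam)) * np.sqrt(4 * mu * lam * y + (mu * y) ** 2)
    u = rng.uniform(size=n)
    return np.where(u <= mu / (mu + x), x, mu ** 2 / x)

def ci_stop(mu, lam, d, z, m, rng):
    """First n >= m with n >= (mu_hat_n / lambda) (z / log d)^2, as in (38);
    returns the stopping time N and the terminal estimate mu_hat_N."""
    x0 = rinvgauss(m, mu, lam, rng)
    n, total = m, float(x0.sum())
    factor = (z / np.log(d)) ** 2 / lam
    while True:
        mu_hat = total / n
        if n >= mu_hat * factor:
            return n, mu_hat
        total += float(rinvgauss(1, mu, lam, rng)[0])
        n += 1

# Illustrative run: empirical coverage of [mu_hat/d, d*mu_hat] should be near 0.95
rng = np.random.default_rng(7)
mu, lam, d, m, reps = 1.0, 1.0, 1.2, 10, 300
hits = 0
for _ in range(reps):
    N, mu_hat = ci_stop(mu, lam, d, Z_975, m, rng)
    if mu_hat / d <= mu <= mu_hat * d:
        hits += 1
coverage = hits / reps
```

As \(d\downarrow 1\) the optimal size \(n^{*}=(\mu /\lambda )(z_{\alpha /2}/\log d)^{2}\) grows and the empirical coverage approaches \(1-\alpha \), matching part (iii).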

About this article

Bapat, S.R. On purely sequential estimation of an inverse Gaussian mean. Metrika 81, 1005–1024 (2018). https://doi.org/10.1007/s00184-018-0665-0