Abstract
The first part of this paper develops a purely sequential methodology for the point estimation of the mean \(\mu \) of an inverse Gaussian distribution having an unknown scale parameter \(\lambda \). We assume a weighted squared error loss function and aim to control the associated risk function per unit cost by bounding it from above by a known constant \(\omega \). We also establish first-order and second-order asymptotic properties of our stopping rule. The second part of this paper obtains a purely sequential fixed-accuracy confidence interval for the unknown mean \(\mu \), assuming that the scale parameter \(\lambda \) is known. First-order asymptotic efficiency and asymptotic consistency properties are also established for the proposed procedures. We then provide extensive simulation studies and a real data analysis using fatigue life data to demonstrate the encouraging performance of our proposed stopping strategies.
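The sequential procedures studied here repeatedly draw inverse Gaussian observations. As a minimal illustrative sketch, independent of the paper's specific stopping rules, one can sample from \(IG(\mu ,\lambda )\) using the standard Michael–Schucany–Haas transformation; the function name `rinvgauss` is our own.

```python
import math
import random

def rinvgauss(mu, lam, rng=random):
    """One draw from IG(mu, lam) via the Michael-Schucany-Haas method."""
    nu = rng.gauss(0.0, 1.0)
    y = nu * nu
    # Smaller root of the transformation's quadratic equation.
    x = (mu + (mu * mu * y) / (2.0 * lam)
         - (mu / (2.0 * lam)) * math.sqrt(4.0 * mu * lam * y + mu * mu * y * y))
    # Accept x with probability mu / (mu + x); otherwise return mu^2 / x.
    if rng.random() <= mu / (mu + x):
        return x
    return mu * mu / x
```

A sample mean of many such draws should be close to \(\mu \), since \(E(X)=\mu \) and \(\mathrm{Var}(X)=\mu ^3/\lambda \) for \(X\sim IG(\mu ,\lambda )\).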
References
Anscombe FJ (1952) Large sample theory of sequential estimation. Proc Camb Philos Soc 48:600–607
Banerjee S, Mukhopadhyay N (2016) A general sequential fixed-accuracy confidence interval estimation methodology for a positive parameter: illustrations using health and safety data. Ann Inst Stat Math 68:541–570
Bapat SR (2018a) Purely sequential fixed accuracy confidence intervals for \(P(X < Y)\) under bivariate exponential models. Am J Math Manag Sci. https://doi.org/10.1080/01966324.2018.1465867
Birnbaum ZW, Saunders SC (1958) A statistical model for life length of material. J Am Stat Assoc 53:151–160
Birnbaum ZW, Saunders SC (1969) Estimation for a family of life distributions. J Appl Prob 6:319–327
Chaturvedi A (1996) Correction to sequential estimation of an inverse Gaussian parameter with prescribed proportional closeness. Calcutta Stat Assoc Bull 35:211–212
Chaturvedi A, Pandey SK, Gupta M (1991) On a class of asymptotically risk-efficient sequential procedures. Scand Actuar J 1:87–96
Chhikara RS, Folks JL (1989) The inverse Gaussian distribution: theory, methodology and applications. Marcel Dekker Inc., New York
Chow YS, Robbins H (1965) On the asymptotic theory of fixed width sequential confidence intervals for the mean. Ann Math Stat 36:457–462
Edgeman RL, Salzburg PM (1991) A sequential sampling plan for the inverse Gaussian mean. Stat Pap 32:45–53
Folks JL, Chhikara RS (1978) The inverse Gaussian distribution and its statistical application—a review. J R Stat Soc Ser B 40:263–289
Ghosh M, Mukhopadhyay N (1975) Asymptotic normality of stopping times in sequential analysis. Unpublished Report
Ghosh M, Mukhopadhyay N (1979) Sequential point estimation of the mean when the distribution is unspecified. Commun Stat Ser A 8:637–652
Ghosh M, Mukhopadhyay N (1981) Consistency and asymptotic efficiency of two-stage and sequential procedures. Sankhya Ser A 43:220–227
Ghosh M, Mukhopadhyay N, Sen PK (1997) Sequential estimation. Wiley, New York
Johnson N, Kotz S, Balakrishnan N (1994) Continuous univariate distributions, vol 1. Wiley, New York
Joshi S, Shah M (1990) Sequential analysis applied to testing the mean of an inverse Gaussian distribution with known coefficient of variation. Commun Stat 19(4):1457–1466
Lai TL, Siegmund D (1977) A nonlinear renewal theory with applications to sequential analysis I. Ann Stat 5:946–954
Lai TL, Siegmund D (1979) A nonlinear renewal theory with applications to sequential analysis II. Ann Stat 7:60–76
Leiva V, Hernandez H, Sanhueza A (2008b) An R package for a general class of inverse Gaussian distributions. J Stat Softw 26(4):1–21
Mukhopadhyay N (1988) Sequential estimation problems for negative exponential populations. Commun Stat Theory Methods Ser A 17:2471–2506
Mukhopadhyay N, Banerjee S (2014) Purely sequential and two stage fixed-accuracy confidence interval estimation methods for count data for negative binomial distributions in statistical ecology: one-sample and two-sample problems. Seq Anal 33:251–285
Mukhopadhyay N, Bapat SR (2016a) Multistage point estimation methodologies for a negative exponential location under a modified linex loss function: illustrations with infant mortality and bone marrow data. Seq Anal 35:175–206
Mukhopadhyay N, Bapat SR (2016b) Multistage estimation of the difference of locations of two negative exponential populations under a modified linex loss function: real data illustrations from cancer studies and reliability analysis. Seq Anal 35:387–412
Mukhopadhyay N, Bapat SR (2017a) Purely sequential bounded-risk point estimation of the negative binomial mean under various loss functions: one sample problem. Ann Inst Stat Math. https://doi.org/10.1007/s10463-017-0620-2
Mukhopadhyay N, Bapat SR (2017b) Purely sequential bounded-risk point estimation of the negative binomial means under various loss functions: multisample problems. Seq Anal 36(4):490–512
Mukhopadhyay N, de Silva BM (2009) Sequential methods and their applications. CRC, Boca Raton
Mukhopadhyay N, Solanky TKS (1994) Multistage selection and ranking procedures. Marcel Dekker Inc., New York
R Core Team (2014) R: a language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria
Schrodinger E (1915) Zur Theorie der Fall- und Steigversuche an Teilchen mit Brownscher Bewegung. Physikalische Zeitschrift 16:289–295
Sen PK (1981) Sequential nonparametrics. Wiley, New York
Seshadri V (1993) The inverse Gaussian distribution—a case study in exponential families. Clarendon Press, Oxford
Seshadri V (1999) The inverse Gaussian distribution: statistical theory and applications. Springer, New York
Wiener N (1939) The ergodic theorem. Duke Math J 5:1–18
Woodroofe M (1977) Second order approximation for sequential point and interval estimation. Ann Stat 5:984–995
Woodroofe M (1982) Nonlinear renewal theory in sequential analysis, CBMS 39. SIAM, Philadelphia
Acknowledgements
The author would like to sincerely thank the Editor-in-Chief, Dr. Hajo Holzmann, and the anonymous referee for their valuable and constructive comments, which greatly improved an earlier version of the manuscript.
Ethics declarations
Conflicts of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Appendix
Proof of Theorem 2.1
Part (i)
From (9) we have,
On dividing throughout by \(n^{*}\) we get,
The proof follows by taking limits throughout and noting that \(N\rightarrow \infty \) w.p.1, \(\widehat{\lambda }_{N}\rightarrow \lambda \) w.p.1 and \(m/n^{*}\rightarrow 0\) as \(\omega \rightarrow 0\).
Part (ii)
From the right hand side of (21) we have (for sufficiently large \(n^{*} \)),
Now denoting \(\sup _{n\ge 2}\frac{1}{\widehat{\lambda }_{n}}\) by \(W\), we can claim w.p.1:
The right hand side of (23) is free of \(\omega \); hence, using Wiener’s (1939) ergodic theorem, we can claim the uniform integrability of all positive powers of \(N/n^{*}\). The proof is then complete by using part (i).
Part (iii)
As seen before, N from (9) can be expressed as \(J+1\), where J is as defined in (10). Using Lemma 2.3 from Woodroofe (1977) with \(b=1/2\) we claim:
Now since \(N=J+1\) w.p.1, we have \(0<\left( \frac{n^{*}}{N}\right) ^{s}<\left( \frac{n^{*}}{J}\right) ^{s}\) and it suffices to show that \(\left( \frac{n^{*}}{J}\right) ^{s}\) is uniformly integrable.
We can now split \(\left( \frac{n^{*}}{J}\right) ^{s}\) into two parts, \(\left( \frac{n^{*}}{J}\right) ^{s}I(J>\frac{1}{2}n^{*})\) and \(\left( \frac{n^{*}}{J}\right) ^{s}I(J<\frac{1}{2}n^{*})\), and show that the first part converges to 1 while the second is \(o(1)\), under suitable restrictions on m. Now we have,
and so \(\left( \frac{n^*}{J}\right) ^s I(J>\frac{1}{2}n^*)\) is uniformly integrable. Moreover, \(\left( \frac{n^*}{J}\right) ^s I(J>\frac{1}{2}n^*)\overset{P}{\rightarrow }\) 1, and hence, applying the dominated convergence theorem, we must have,
Also from (31) we have,
which is o(1) if \(m>1+\frac{k+1}{2}s\). Hence combining (26) and (27) we have that \(\left( \frac{n^*}{J}\right) ^s\) is uniformly integrable, which gives,
if \(m>1+\frac{k+1}{2}s\) and \(s>0\).
Part (iv)
From (2), the associated loss function is given by:
Now, recalling \(\hbox {RPUC}_{N}\) from (5) and utilizing the facts that N is an observable finite random variable and the event \(N=n\) is measurable only with respect to \(\{\widehat{\lambda }_{j};\) \(m\le j\le n\}\) for all fixed \(n\ge m\), we can evaluate \(E\left[ \text {RPUC}_{N}\right] \) as:
Thus we have:
which is clearly o(1) by utilizing part (iii), when \(m>k+2\). \(\square \)
Proof of Theorem 2.2
Let us first prove the following:
for different values of t, where J comes from (10). By utilizing Theorem 3 in Ghosh and Mukhopadhyay (1979) and Theorem 2.3 in Woodroofe (1977), we define V where,
and V is also uniformly integrable if \(m>k+1\). Also, from Theorem 2.4 of Woodroofe (1977) we have,
if \(m>k+1\).
Case 1 \(t=-1\)
One can now follow along similar lines as part (iii) of Theorem 2.1, and split \(E[(J-n^*)^2/J]\) into two parts and show:
if \(m>(k+1)\) and,
if \(m>2(k+1)\), by Lemma 2.3 of Woodroofe (1977). Now combining (31)–(34) the expansion follows, if \(m>2(k+1)\).
Case 2 \(t\in (-\infty , 0)\setminus \{-1\}\)
Let \(t=-r\), where \(r>0\). Now,
where V comes from (30) and Q is a suitable random variable between \(J/n^*\) and 1. As before, one can now easily show:
if \(m>(k+1)\) and,
if \(m>(r+3)(k+1)\), by Lemma 2.3 of Woodroofe (1977). Now combining (35)–(37), the expansion follows if \(m>(r+3)(k+1)\), which, since \(t=-r\), is equivalent to \(m>(3-t)(k+1)\).
Case 3 \(0<t\le 2\)
If \(t=1\) or \(t=2\), the result readily follows by noting (30)–(31) and from Woodroofe (1977), if \(m>k+1\). So we can restrict to \(t\in (0,2)\setminus \{1\}\). Now following along the lines of Case 2 one can show, \(E\{UQ^{t-2}I(J>\frac{1}{2}n^*)\}=p+o(1)\), if \(m>(k+1)\) and \(E\{UQ^{t-2}I(J<\frac{1}{2}n^*)\}=o(1)\), if \(m>(3-t)(k+1)\). Hence the result follows under this case as well.
Case 4 \(t>2\)
Following along the lines of Case 2 one can show, \(E\{UQ^{t-2}I(J<2n^*)\}=p+o(1)\), if \(m>(k+1)\) and \(E\{UQ^{t-2}I(J>2n^*)\}=o(1)\), which follows from Lemma 2.2 of Woodroofe (1977). Hence the result is true under this case also.
This proves (29) under all possible cases, with different conditions on m. Finally, noting \(N=J+1\) w.p.1 completes the proof of Theorem 2.2. \(\square \)
For proving Theorem 2.3 we first note that \(\omega ^{-1}E\left[ \text {RPUC}_{N}\right] =E\left[ \left( n^{*}/N\right) ^{2}\right] \), from part (iv) of Theorem 2.1. One can now exploit Theorem 2.2 by setting \(t=-2\).
Theorem 2.4 follows readily from an application found in Ghosh and Mukhopadhyay (1975) where we have:
where p comes from (12). One may also refer to Theorem 2.4.3 or Theorem 2.4.8, part (ii) in Mukhopadhyay and Solanky (1994).
Proof of Theorem 3.1
Part (i)
From (18) one can get,
Dividing throughout by \(n^{*}\) we have,
The proof follows by taking limits throughout and noting that \(N\rightarrow \infty \) w.p.1, \(\widehat{\mu }_{N}\rightarrow \mu \) w.p.1 and \(m/n^{*}\rightarrow 0\) as \(d\rightarrow 1\).
Part (ii)
From the right hand side of (39) we have (for sufficiently large \(n^{*}\)),
Now denoting \(\sup _{n\ge 2}(\widehat{\mu }_{n})\) by U, we can claim w.p.1:
Since the right hand side of (41) is free from d, one can apply Wiener’s (1939) ergodic theorem to conclude the uniform integrability of \(N/n^{*}\). The proof is hence complete by using part (i).
Part (iii)
From (19), the confidence interval for \(\mu \) is \(C_{N}=\left[ d^{-1} \widehat{\mu }_{N},\text { }d\widehat{\mu }_{N}\right] \). We may now write,
We now invoke Anscombe’s (1952) random central limit theorem and can thus write,
Using (43) and the Mann–Wald theorem, we introduce \(V_{N}\), where,
The proof is hence complete by using (42) and noting,
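To make the form of the interval in (19) concrete, here is a minimal sketch that computes \(C_{N}=\left[ d^{-1}\widehat{\mu }_{N},\text { }d\widehat{\mu }_{N}\right] \) from a terminal sample; the stopping rule (18) itself is not reproduced here, and the function name is our own.

```python
import math
import statistics

def fixed_accuracy_interval(sample, d):
    """Fixed-accuracy interval [mu_hat / d, d * mu_hat] for the IG mean,
    where mu_hat is the sample mean (the MLE of mu) and d > 1."""
    if d <= 1.0:
        raise ValueError("d must exceed 1")
    mu_hat = statistics.fmean(sample)
    return (mu_hat / d, d * mu_hat)
```

By construction the interval has fixed multiplicative accuracy: the ratio of its upper to lower endpoint is always \(d^{2}\), regardless of the data.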
Cite this article
Bapat, S.R. On purely sequential estimation of an inverse Gaussian mean. Metrika 81, 1005–1024 (2018). https://doi.org/10.1007/s00184-018-0665-0
Keywords
- Fatigue life
- Inverse Gaussian
- Purely sequential
- Fixed-accuracy intervals
- Point estimation
- First-order asymptotic efficiency
- First-order asymptotic consistency