
Some Inferential Studies on Inliers in Gompertz Distribution

Research Article · Journal of the Indian Society for Probability and Statistics

Abstract

This paper deals with inferential studies of inliers in the Gompertz distribution. Inliers are inconsistent observations that typically arise from instantaneous and early failures. Such situations are generally modeled by a non-standard mixture of distributions, with a failure time distribution (FTD) for the positive observations. Taking the FTD to be the Gompertz distribution, we study various methods of parameter estimation, including the uniformly minimum variance unbiased estimator of some parametric functions. An application of inlier-prone models is illustrated with a real data set.


Acknowledgments

We thank the referees and the editor for their careful reading, useful comments, and valuable suggestions, which greatly improved this paper.

Author information

Corresponding author

Correspondence to Pratima Bavagosai.

Appendix: Asymptotic Distribution of MLE

For the inlier-prone Gompertz distribution \(g\left( {x;p,\alpha ,\theta } \right) \) given by (4), with \(\alpha \) known,

$$\begin{aligned} \frac{\partial \ln g\left( x;p,\alpha ,\theta \right) }{\partial p}= \begin{cases} 0, &\quad x<d \\ -\dfrac{e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }}{1-p\,e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }}, &\quad x=d \\ \dfrac{1}{p}, &\quad x>d \end{cases} \end{aligned}$$

and

$$\begin{aligned} \frac{\partial \ln g\left( x;p,\alpha ,\theta \right) }{\partial \theta }= \begin{cases} 0, &\quad x<d \\ \dfrac{p\,e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }\,\frac{\left( e^{\alpha d}-1\right) }{\alpha }}{1-p\,e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }}, &\quad x=d \\ \dfrac{1}{\theta }-\dfrac{\left( e^{\alpha x}-1\right) }{\alpha }, &\quad x>d \end{cases} \end{aligned}$$

One can verify that \(E\left( \frac{\partial \ln g\left( x;p,\alpha ,\theta \right) }{\partial p}\right) =0\) and \(E\left( \frac{\partial \ln g\left( x;p,\alpha ,\theta \right) }{\partial \theta }\right) =0\).
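
These identities can also be checked by simulation. The sketch below is not part of the paper; it assumes, consistently with the score functions above, that the inlier-prone density (4) places probability mass \(1-p\,e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }\) at \(x=d\) and equals \(p\) times the Gompertz density for \(x>d\), and the parameter values are purely illustrative.

```python
# Hedged sketch (not from the paper): Monte Carlo check that both score
# functions have zero expectation. Assumed model: mass 1 - p*exp(-theta*t_d)
# at x = d (with t_d = (e^{alpha*d} - 1)/alpha) and p times the Gompertz
# density for x > d. Parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(2016)
p, alpha, theta, d = 0.8, 0.5, 1.2, 0.1        # illustrative values
n = 500_000

t_d = (np.exp(alpha * d) - 1.0) / alpha         # transformed threshold
s = np.exp(-theta * t_d)                        # Gompertz survival at d
p_star = 1.0 - p * s                            # assumed P(X = d)

at_d = rng.random(n) < p_star                   # inlier indicator
# Conditional on X > d, invert the Gompertz tail:
# e^{alpha*x} = e^{alpha*d} - (alpha/theta) * log(1 - v)
v = rng.random(n)
x_tail = np.log(np.exp(alpha * d) - (alpha / theta) * np.log1p(-v)) / alpha
x = np.where(at_d, d, x_tail)

# Score functions from the appendix
score_p = np.where(at_d, -s / p_star, 1.0 / p)
score_theta = np.where(at_d,
                       p * s * t_d / p_star,
                       1.0 / theta - (np.exp(alpha * x) - 1.0) / alpha)

print(score_p.mean(), score_theta.mean())       # both close to 0
```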

Also,

$$\begin{aligned} \frac{\partial ^{2}\ln g\left( x;p,\alpha ,\theta \right) }{\partial p^{2}}&= \begin{cases} 0, &\quad x<d \\ -\dfrac{e^{-\frac{2\theta }{\alpha }\left( e^{\alpha d}-1\right) }}{\left[ 1-p\,e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }\right] ^{2}}, &\quad x=d \\ -\dfrac{1}{p^{2}}, &\quad x>d \end{cases} \\ \frac{\partial ^{2}\ln g\left( x;p,\alpha ,\theta \right) }{\partial \theta ^{2}}&= \begin{cases} 0, &\quad x<d \\ -\dfrac{p\,e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }\left[ \frac{\left( e^{\alpha d}-1\right) }{\alpha }\right] ^{2}}{\left[ 1-p\,e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }\right] ^{2}}, &\quad x=d \\ -\dfrac{1}{\theta ^{2}}, &\quad x>d \end{cases} \\ \frac{\partial ^{2}\ln g\left( x;p,\alpha ,\theta \right) }{\partial p\,\partial \theta }&= \begin{cases} 0, &\quad x<d \\ \dfrac{e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }\,\frac{\left( e^{\alpha d}-1\right) }{\alpha }}{\left[ 1-p\,e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }\right] ^{2}}, &\quad x=d \\ 0, &\quad x>d \end{cases} \end{aligned}$$
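
As a further check, the second derivatives can be reproduced symbolically. The sketch below is not from the paper; the form of the two log-density branches (in particular, \(\log p\) plus the Gompertz log-density for \(x>d\)) is an assumption about (4).

```python
# Hedged sketch (not from the paper): reproduce the second derivatives of
# ln g with SymPy, under the assumed form of the two branches of (4).
import sympy as sp

p, theta, alpha, d, x = sp.symbols('p theta alpha d x', positive=True)

t_d = (sp.exp(alpha * d) - 1) / alpha                     # (e^{alpha d} - 1)/alpha
log_g_at_d = sp.log(1 - p * sp.exp(-theta * t_d))         # branch for x = d
log_g_tail = (sp.log(p) + sp.log(theta) + alpha * x
              - theta * (sp.exp(alpha * x) - 1) / alpha)  # assumed branch for x > d

for branch in (log_g_at_d, log_g_tail):
    print(sp.simplify(sp.diff(branch, p, 2)))        # d^2 ln g / dp^2
    print(sp.simplify(sp.diff(branch, theta, 2)))    # d^2 ln g / dtheta^2
    print(sp.simplify(sp.diff(branch, p, theta)))    # d^2 ln g / dp dtheta
```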

Hence, the elements of the Fisher information matrix are:

$$\begin{aligned} I_{pp}&=E\left( -\frac{\partial ^{2}\ln g\left( x;p,\alpha ,\theta \right) }{\partial p^{2}}\right) =\frac{e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }}{p\left( 1-p\,e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }\right) }=\frac{e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }}{p\,p^{*}}, \\ I_{\theta \theta }&=E\left( -\frac{\partial ^{2}\ln g\left( x;p,\alpha ,\theta \right) }{\partial \theta ^{2}}\right) =\frac{\left( 1-p^{*}\right) \left\{ \theta ^{2}\left[ \frac{\left( e^{\alpha d}-1\right) }{\alpha }\right] ^{2}+p^{*}\right\} }{\theta ^{2}\,p^{*}}, \\ I_{p\theta }&=E\left( -\frac{\partial ^{2}\ln g\left( x;p,\alpha ,\theta \right) }{\partial p\,\partial \theta }\right) =-\frac{e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }\,\frac{\left( e^{\alpha d}-1\right) }{\alpha }}{p^{*}}, \end{aligned}$$

where \(p^{*}=1-p\,e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }\).

Therefore, the Fisher information matrix \(I_g \left( {p,\theta } \right) \) is given by:

$$\begin{aligned} I_g \left( p,\theta \right) =\left[ \begin{array}{cc} I_{pp} & I_{p\theta } \\ I_{\theta p} & I_{\theta \theta } \end{array} \right] =\left[ \begin{array}{cc} \dfrac{e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }}{p\,p^{*}} & -\dfrac{e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }\,\frac{\left( e^{\alpha d}-1\right) }{\alpha }}{p^{*}} \\ -\dfrac{e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }\,\frac{\left( e^{\alpha d}-1\right) }{\alpha }}{p^{*}} & \dfrac{\left( 1-p^{*}\right) \left\{ \theta ^{2}\frac{\left( e^{\alpha d}-1\right) ^{2}}{\alpha ^{2}}+p^{*}\right\} }{\theta ^{2}p^{*}} \end{array} \right] . \end{aligned}$$

The inverse matrix \(I_g^{-1} \left( {p,\theta } \right) \) is given by:

$$\begin{aligned} I_g^{-1} \left( p,\theta \right) =\left[ \begin{array}{cc} \dfrac{p\left( \theta ^{2}\frac{\left( e^{\alpha d}-1\right) ^{2}}{\alpha ^{2}}+p^{*}\right) }{e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }} & \dfrac{\theta ^{2}\left( e^{\alpha d}-1\right) }{\alpha \,e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }} \\ \dfrac{\theta ^{2}\left( e^{\alpha d}-1\right) }{\alpha \,e^{-\frac{\theta }{\alpha }\left( e^{\alpha d}-1\right) }} & \dfrac{\theta ^{2}}{1-p^{*}} \end{array} \right] . \end{aligned}$$

The determinant of \(I_g \left( p,\theta \right) \) is \({\Delta }=\frac{e^{-\frac{2\theta }{\alpha }\left( e^{\alpha d}-1\right) }}{\theta ^{2}p^{*}}\).
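
The matrix algebra can be verified numerically. The sketch below is not from the paper and uses purely illustrative parameter values; it builds \(I_g \left( p,\theta \right) \) from the closed-form entries, inverts it with NumPy, and compares the result with the closed-form inverse and determinant quoted above.

```python
# Hedged sketch (not from the paper): numerical check of I_g, its inverse,
# and its determinant for illustrative parameter values.
import numpy as np

p, alpha, theta, d = 0.8, 0.5, 1.2, 0.1
t_d = (np.exp(alpha * d) - 1.0) / alpha
s = np.exp(-theta * t_d)                 # Gompertz survival at d
p_star = 1.0 - p * s

# Fisher information matrix from the closed-form entries
I_pp = s / (p * p_star)
I_tt = (1.0 - p_star) * (theta**2 * t_d**2 + p_star) / (theta**2 * p_star)
I_pt = -s * t_d / p_star
I_g = np.array([[I_pp, I_pt],
                [I_pt, I_tt]])

# Closed-form inverse and determinant from the appendix
I_inv = np.array([[p * (theta**2 * t_d**2 + p_star) / s, theta**2 * t_d / s],
                  [theta**2 * t_d / s,                   theta**2 / (p * s)]])
delta = s**2 / (theta**2 * p_star)       # Delta = e^{-2 theta t_d} / (theta^2 p*)

print(np.allclose(np.linalg.inv(I_g), I_inv))   # True
print(np.isclose(np.linalg.det(I_g), delta))    # True
```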

Using standard asymptotic theory for maximum likelihood estimation, we have

$$\begin{aligned} \left( {\hat{p},\hat{\theta }} \right) ^{{\prime }}\sim AN^{\left( 2 \right) }\left[ {\left( {p,\theta } \right) ^{{\prime }},\,\frac{1}{n}\,I_g^{-1} \left( {p,\theta } \right) } \right] . \end{aligned}$$

Using the estimated variances, one can also construct large-sample tests for \(p\) and \(\theta \).
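
For illustration only, the sketch below turns the asymptotic result into approximate standard errors and Wald-type tests based on \(\frac{1}{n}\,I_g^{-1} \left( \hat{p},\hat{\theta } \right) \). It is not code from the paper, and the plugged-in estimates, sample size, and null values are hypothetical.

```python
# Hedged sketch (not from the paper): large-sample Wald-type tests for p and
# theta using (p_hat, theta_hat) ~ AN((p, theta), I_g^{-1}(p, theta)/n).
# The estimates, sample size, and null values below are hypothetical.
import numpy as np
from scipy.stats import norm

def fisher_inverse(p, theta, alpha, d):
    """Closed-form I_g^{-1}(p, theta) for the inlier-prone Gompertz model."""
    t_d = (np.exp(alpha * d) - 1.0) / alpha
    s = np.exp(-theta * t_d)
    p_star = 1.0 - p * s
    return np.array([[p * (theta**2 * t_d**2 + p_star) / s, theta**2 * t_d / s],
                     [theta**2 * t_d / s,                   theta**2 / (p * s)]])

# Hypothetical MLEs, known alpha and d, and sample size
p_hat, theta_hat, alpha, d, n = 0.85, 1.05, 0.5, 0.1, 120

cov = fisher_inverse(p_hat, theta_hat, alpha, d) / n   # estimated asymptotic covariance
se_p, se_theta = np.sqrt(np.diag(cov))

p0, theta0 = 0.9, 1.0                                  # hypothetical null values
z_p = (p_hat - p0) / se_p
z_theta = (theta_hat - theta0) / se_theta
print(f"p:     z = {z_p:.2f}, two-sided p-value = {2 * norm.sf(abs(z_p)):.3f}")
print(f"theta: z = {z_theta:.2f}, two-sided p-value = {2 * norm.sf(abs(z_theta)):.3f}")
```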


Cite this article

Muralidharan, K., Bavagosai, P. Some Inferential Studies on Inliers in Gompertz Distribution. J Indian Soc Probab Stat 17, 35–55 (2016). https://doi.org/10.1007/s41096-016-0005-5
