1 Introduction

Gompertz [17] introduced the Gompertz distribution to fit mortality tables. This distribution is unimodal and positively skewed, and its hazard rate function increases monotonically. As a result, the Gompertz distribution is used to model phenomena that have an increasing failure rate. It is worth mentioning that it has some interesting relationships with well-known distributions such as the exponential, Weibull, Gumbel, generalized logistic and double exponential distributions (see [39]). Garg et al. [13] studied the maximum likelihood estimation of the parameters of the Gompertz survival function.

Ahuja and Nash [1] provide a survey and applications of the Gompertz distribution. Many researchers have contributed to the characterization and statistical methodology of this distribution for analyzing a variety of real-world applications, including medical, survival, behavioral, biological, environmental, and actuarial studies; see, for instance, [2,3,4, 8, 10, 12, 21, 22, 26, 27, 31,32,33, 35, 40, 42].

The maximum likelihood technique is the most popular estimation method. This is due to its appealing large-sample properties, such as asymptotic unbiasedness, consistency, efficiency and asymptotic normality. These properties, however, may not hold for small or even moderate sample sizes; see, for example, [14,15,16, 25, 34, 38], among others.

In this study, we consider two strategies. The first is a correction strategy known as the "analytical approach", presented by [7]. This method corrects the bias of the MLEs to the second order of magnitude by subtracting the estimated bias from the MLEs. Several researchers, including [20, 28, 36], developed software applications (albeit limited ones) that compute the analytic Cox–Snell bias corrections for various pre-specified distributions. The second strategy is based on the bootstrap resampling procedure, the "bootstrap approach" developed by [9], which also reduces the bias to the second order. In both strategies, we refer to the resulting estimators as bias-corrected estimators. To demonstrate the performance of these estimators, Monte Carlo simulations and real-world applications are used.

If X follows the Gompertz distribution (denoted by Gomp(\(\alpha ,\beta \))), then the cumulative distribution function (cdf) and the probability density function (pdf) of X are given, respectively, by (see, for example, [5, 19, 23])

$$\begin{aligned} F\left( x;\,\alpha ,\beta \right) =1-e^{-\left[ \frac{\alpha }{\beta }\left( e^{\beta x}-1\right) \right] }, \end{aligned}$$
(1.1)
$$\begin{aligned} f\left( x;\,\alpha ,\beta \right) =\alpha \ e^{\left( \beta x\right) }\ e^{-\left[ \frac{\alpha }{\beta }\left( e^{\beta x}-1\right) \right] }, \end{aligned}$$
(1.2)

where \(\beta >0\) is the shape parameter and \(\alpha >0\) is the scale parameter, as shown in Fig. 1; this is the parameterization used in the likelihood of Sect. 2. In addition, the role of the shape parameter \(\beta \) can also be seen in the hazard function (h(x)) of this distribution

$$\begin{aligned} h\left( x\right) =\alpha \ e^{\left( \beta x\right) }. \end{aligned}$$
(1.3)

It is clear that as \(\beta \rightarrow 0\), the Gompertz distribution approaches the exponential distribution with rate \(\alpha \).
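For concreteness, here is a minimal Python sketch (ours, not part of the original paper; all function names are hypothetical) of the pdf, cdf and hazard in Eqs. (1.1)–(1.3), together with an inverse-cdf sampler obtained by solving \(u=F(x)\) for x:

```python
import numpy as np

# Gompertz distribution in the parameterization of Eqs. (1.1)-(1.3):
# alpha > 0 is the scale parameter, beta > 0 is the shape parameter.

def gompertz_pdf(x, alpha, beta):
    # Eq. (1.2)
    return alpha * np.exp(beta * x) * np.exp(-(alpha / beta) * (np.exp(beta * x) - 1.0))

def gompertz_cdf(x, alpha, beta):
    # Eq. (1.1)
    return 1.0 - np.exp(-(alpha / beta) * (np.exp(beta * x) - 1.0))

def gompertz_hazard(x, alpha, beta):
    # Eq. (1.3): monotonically increasing in x for beta > 0
    return alpha * np.exp(beta * x)

def gompertz_rvs(n, alpha, beta, rng=None):
    # Inverting Eq. (1.1): x = (1/beta) * log(1 - (beta/alpha) * log(1 - u))
    rng = np.random.default_rng(rng)
    u = rng.uniform(size=n)
    return np.log1p(-(beta / alpha) * np.log1p(-u)) / beta
```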

Fig. 1 Plots of the pdf of the Gompertz distribution with (a) \(\beta =1\) and different values of \(\alpha \) and (b) \(\alpha =1\) and different values of \(\beta \)

The rest of the paper is organized as follows. Maximum likelihood estimation (MLE) is introduced in Sect. 2. In Sect. 3, we derive bias-corrected MLEs using both the analytical and bootstrap approaches. In Sect. 4, we present simulation results to evaluate the performance of the estimation methods. Two examples based on real data are then used to demonstrate the performance of these approaches in Sect. 5. Lastly, some concluding remarks are given in Sect. 6.

2 Maximum Likelihood Estimation

Let \(X_1,\cdots ,X_n\) be a random sample of size n from Gomp(\(\alpha ,\beta \)). The log-likelihood function (l) is

$$\begin{aligned} l=\frac{n\ \alpha }{\beta }+n\ \textrm{log}\left( \alpha \right) +\beta \sum ^n_{i=1}{x_i}-\frac{\alpha }{\beta }\sum ^n_{i=1}{e^{\beta x_i}} \end{aligned}$$
(2.1)

We maximize Eq. (2.1) with respect to \(\alpha \) and \(\beta \) in order to obtain the MLEs \({\widehat{\alpha }}\) and \({\widehat{\beta }}\) of \(\alpha \) and \(\beta \), respectively. Thus, we have the following equations:

$$\begin{aligned} \frac{\partial l}{\partial \alpha }=\frac{n}{\alpha }-\frac{1}{\beta }\sum ^n_{i=1}{\left( e^{\beta x_i}-1\right) } \end{aligned}$$
(2.2)
$$\begin{aligned} \frac{\partial l}{\partial \beta }=\sum ^n_{i=1}{x_i}-\frac{\alpha }{\beta }\sum ^n_{i=1}{x_i\ e^{\beta x_i}}+\frac{\alpha }{{\beta }^2}\sum ^n_{i=1}{\left( e^{\beta x_i}-1\right) } \end{aligned}$$
(2.3)

The non-linear Eqs. (2.2) and (2.3) cannot be solved analytically. However, programs such as R, Mathematica or Maple can be used to solve these equations numerically, yielding the MLEs denoted by \({{\widehat{\xi }}}_{MLE}\), where \(\xi =\left( \alpha ,\beta \right) ^{'}\). For small sample sizes, these MLEs will be biased, as mentioned previously. The bias may therefore produce misleading results and influence the interpretation of real-world applications. This motivates us to examine approximately unbiased estimators that reduce the bias of these Gompertz distribution MLEs.
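As an illustration, the MLEs can also be obtained by numerically minimizing the negative of the log-likelihood in Eq. (2.1) instead of solving Eqs. (2.2) and (2.3) directly. The following Python sketch (ours; the starting values are an arbitrary assumption and may need tuning in practice) uses scipy:

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(theta, x):
    # Negative of the log-likelihood in Eq. (2.1)
    alpha, beta = theta
    if alpha <= 0 or beta <= 0:
        return np.inf  # keep the search inside the parameter space
    n = len(x)
    return -(n * alpha / beta + n * np.log(alpha) + beta * np.sum(x)
             - (alpha / beta) * np.sum(np.exp(beta * x)))

def gompertz_mle(x, start=(0.5, 0.5)):
    # Derivative-free minimization; returns (alpha_hat, beta_hat)
    res = minimize(neg_log_lik, start, args=(np.asarray(x),), method="Nelder-Mead")
    return res.x
```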

3 Bias-Corrected MLEs

This section examines two bias-correction techniques. The first is the "analytical approach" of [7], described in Sect. 3.1; the second is the "bootstrap approach" of [9], presented in Sect. 3.2.

3.1 Analytical Approach

Assume \(l(\xi )\) is the log-likelihood function based on n observations with a p-dimensional parameter vector \(\xi ={\left( {\xi }_1,\dots ,{\xi }_p\right) }^{'}\), and assume \(l(\xi )\) is regular with respect to all derivatives up to and including the third order.

The joint cumulants of the derivatives of \(l=l(\xi )\) are then defined as

$$\begin{aligned} {\eta }_{ij}=E\left[ \frac{{\partial }^2l}{\partial {\xi }_i\partial {\xi }_j}\right] \ ,\ i,j=1,\dots ,p\ , \end{aligned}$$
(3.1)
$$\begin{aligned} {\eta }_{ijk}=E\left[ \frac{{\partial }^3l}{\partial {\xi }_i\partial {\xi }_j\partial {\xi }_k}\right] \ ,\ i,j,k=1,\dots ,p\ , \end{aligned}$$
(3.2)
$$\begin{aligned} {\eta }_{ij,k}=E\left[ \left( \frac{{\partial }^2l}{\partial {\xi }_i\partial {\xi }_j}\right) \left( \frac{\partial l}{\partial {\xi }_k}\right) \right] \ ,\ i,j,k=1,\cdots ,p\ . \end{aligned}$$
(3.3)

The derivatives of these joint cumulants are denoted by

$$\begin{aligned} {\eta }^{(k)}_{ij}=\frac{\partial {\eta }_{ij}}{\partial {\xi }_k} \ ,\ i,j,k=1,\dots ,p\ . \end{aligned}$$
(3.4)

In addition, the expressions in Eqs. (3.1) through (3.4) are assumed to be of order O(n).

Assume \(\varPhi =\left[ -{\eta }_{ij}\right] \), \(i,j=1,\cdots ,p\), denotes the Fisher information matrix of \(\xi \). For independent but not necessarily identically distributed samples, [7] demonstrated that the bias of the \(s{\textrm{th}}\) element of the MLE of \(\xi \) is

$$\begin{aligned} Bias\left( {{\widehat{\xi }}}_s\right) =\sum ^p_{i=1}\sum ^p_{j=1}\sum ^p_{k=1}{{\eta }^{si}{\eta }^{jk}\left[ \frac{1}{2}{\eta }_{ijk}+{\eta }_{ij,k}\right] }+O\left( n^{-2}\right) \ ,\ s=1,\dots ,p\ , \end{aligned}$$
(3.5)

where \({\eta }^{ij}\) denotes the \({\left( i,j\right) }^{th}\) element of the inverse of the information matrix. Thereafter, [6] demonstrated that Eq. (3.5) remains valid even for non-identical and non-independent observations, provided that all expressions in Eqs. (3.1) through (3.4) are of order O(n). Equation (3.5) can be re-expressed as

$$\begin{aligned} Bias\left( {{\widehat{\xi }}}_s\right) =\sum ^p_{i=1}{\eta }^{si}\sum ^p_{j=1}\sum ^p_{k=1}{{\eta }^{jk}\left[ {\eta }^{(k)}_{ij}-\frac{1}{2}{\eta }_{ijk}\right] }+O\left( n^{-2}\right) \ ,\ s=1,\dots ,p\ . \end{aligned}$$
(3.6)

Because terms of the form in Eq. (3.3) are eliminated from Eq. (3.6), the bias expression in Eq. (3.6) is often easier to compute than that of Eq. (3.5).

Let \({\omega }^{(k)}_{ij}={\eta }^{(k)}_{ij}-\frac{1}{2}{\eta }_{ijk}\), \(i,j,k=1,\dots ,p\). We then define the matrices \(\varPsi \) and \({\varOmega }^{(k)}\) as

$$\begin{aligned} \varPsi =\left[ {\varOmega }^{(1)}\,|\,{\varOmega }^{\left( 2\right) }\,|\,\cdots \,|\,{\varOmega }^{(p)}\right] ,\quad \text {where}\quad {\varOmega }^{(k)}=\left[ {\omega }^{(k)}_{ij}\right] \ ,\ i,j,k=1,\dots ,p. \end{aligned}$$
(3.7)

Hence, \({\widehat{\xi }}\)’s bias expression may be written in matrix form as

$$\begin{aligned} Bias\left( {\widehat{\xi }}\right) ={\varPhi }^{-1}\varPsi \cdot \textrm{vec}\left( {\varPhi }^{-1}\right) +O\left( n^{-2}\right) , \end{aligned}$$
(3.8)

where vec is the operator that stacks the columns of a matrix one on top of the other. Therefore, the bias-corrected MLE of \(\xi \), denoted \({{\widehat{\xi }}}_{BCMLE}\), is given by

$$\begin{aligned} {{\widehat{\xi }}}_{BCMLE}={\widehat{\xi }}-{{\widehat{\varPhi }}}^{-1}{\widehat{\varPsi }} \cdot \textrm{vec}\left( {{\widehat{\varPhi }}}^{-1}\right) , \end{aligned}$$
(3.9)

where \({\widehat{\xi }}\) is the MLE of \(\xi \), \({\widehat{\varPhi }}=\varPhi |_{\xi ={\widehat{\xi }}}\) and \({\widehat{\varPsi }}=\varPsi |_{\xi ={\widehat{\xi }}}\). It should be noted that the bias of \({{\widehat{\xi }}}_{BCMLE}\) is \(O\left( n^{-2}\right) \).
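In matrix form, Eq. (3.9) is straightforward to implement. The following generic Python sketch (ours; it assumes \({\widehat{\varPhi }}\) and \({\widehat{\varPsi }}\) have already been evaluated at the MLE) illustrates the computation:

```python
import numpy as np

def bias_corrected_mle(xi_hat, Phi_hat, Psi_hat):
    # Eq. (3.9): xi_BCMLE = xi_hat - Phi^{-1} Psi vec(Phi^{-1}),
    # with Phi_hat (p x p) and Psi_hat = [Omega^(1) | ... | Omega^(p)] (p x p^2).
    Phi_inv = np.linalg.inv(Phi_hat)
    vec_Phi_inv = Phi_inv.reshape(-1, 1, order="F")  # vec: stack the columns
    bias = Phi_inv @ Psi_hat @ vec_Phi_inv           # estimated second-order bias
    return np.asarray(xi_hat) - bias.ravel()
```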

Since we are studying the Gompertz distribution, we have \(\xi ={\left( \alpha ,\beta \right) }^{'}\) and \(p=2\). To obtain the bias-corrected MLEs, we must first compute the higher-order derivatives of the log-likelihood function of the Gompertz distribution with respect to \(\alpha \) and \(\beta \), as shown below:

$$\begin{aligned}&\frac{{\partial }^2l}{\partial {\alpha }^2}=-\frac{n}{{\alpha }^2} \end{aligned}$$
(3.10)
$$\begin{aligned}&\quad \frac{{\partial }^2l}{\partial {\beta }^2}=-\frac{\alpha }{\beta }\sum ^n_{i=1}{{x_i}^2\ e^{\beta x_i}}+\frac{2 \alpha }{{\beta }^2}\sum ^n_{i=1}{x_i\ e^{\beta x_i}}-\frac{2 \alpha }{{\beta }^3}\sum ^n_{i=1}{\left( e^{\beta x_i}-1\right) } \end{aligned}$$
(3.11)
$$\begin{aligned}&\quad \frac{{\partial }^2l}{\partial \alpha \partial \beta }=-\frac{1}{\beta }\sum ^n_{i=1}{x_i\ e^{\beta x_i}}+\frac{1}{{\beta }^2}\sum ^n_{i=1}{\left( e^{\beta x_i}-1\right) } \end{aligned}$$
(3.12)
$$\begin{aligned}&\quad \frac{{\partial }^3l}{\partial {\alpha }^3}=\frac{2n}{{\alpha }^3} \end{aligned}$$
(3.13)
$$\begin{aligned}&\quad \frac{{\partial }^3l}{\partial {\beta }^3}=-\frac{\alpha }{\beta }\sum ^n_{i=1}{{x_i}^3\ e^{\beta x_i}}+\frac{3 \alpha }{{\beta }^2}\sum ^n_{i=1}{{x_i}^2\ e^{\beta x_i}}-\frac{6 \alpha }{{\beta }^3}\sum ^n_{i=1}{x_i\ e^{\beta x_i}}+\frac{6 \alpha }{{\beta }^4}\sum ^n_{i=1}{\left( e^{\beta x_i}-1\right) } \end{aligned}$$
(3.14)
$$\begin{aligned}&\quad \frac{{\partial }^3l}{\partial {\alpha }^2\partial \beta }=0 \end{aligned}$$
(3.15)
$$\begin{aligned}&\quad \frac{{\partial }^3l}{\partial {\beta }^2\partial \alpha }=-\frac{1}{\beta }\sum ^n_{i=1}{{x_i}^2\ e^{\beta x_i}}+\frac{2}{{\beta }^2}\sum ^n_{i=1}{x_i\ e^{\beta x_i}}-\frac{2}{{\beta }^3}\sum ^n_{i=1}{\left( e^{\beta x_i}-1\right) } \end{aligned}$$
(3.16)

Notice that if X follows Gomp(\(\alpha ,\beta \)), then \(Y=\frac{\alpha }{\beta }\left( e^{\beta X}-1\right) \) has density \(f\left( y\right) =e^{-y}\) for \(y>0\); that is, Y follows the standard exponential distribution.
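This transformation is easy to check numerically; the short sketch below (ours, reusing the hypothetical sampler from Sect. 1) verifies that the sample mean and variance of Y are both close to 1, as expected for a standard exponential variate:

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, beta = 0.5, 1.5
x = gompertz_rvs(100_000, alpha, beta, rng=rng)  # sampler sketched in Sect. 1
y = (alpha / beta) * (np.exp(beta * x) - 1.0)    # Y = (alpha/beta)(e^{beta X} - 1)
print(y.mean(), y.var())                         # both approximately 1
```

The following formulas will be needed.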

The upper incomplete gamma function is defined as

$$\begin{aligned}&\varGamma \left( s,t\right) =\int ^{\infty }_t{w^{s-1}\ e^{-w}}dw\ \ for\ s,t>0. \end{aligned}$$
(3.17)

The derivatives of the upper incomplete gamma function are straightforward to obtain:

$$\begin{aligned}&Q\left( r,s,t\right) =\frac{{\partial }^r}{{\partial s}^r}\ \varGamma \left( s,t\right) =\int ^{\infty }_t{{\left[ {\log \left( w\right) \ }\right] }^rw^{s-1}\ e^{-w}}dw\ \ for\ \ r=0,1,\cdots \ \ and\ s,t>0, \end{aligned}$$
(3.18)
$$\begin{aligned}&\quad {D\varGamma }_a\left( s,\frac{a}{b}\right) =\frac{\partial }{\partial a}\varGamma \left( s,\frac{a}{b}\right) =-\frac{1}{b}\left\{ {\left( \frac{a}{b}\right) }^{s-1}\ e^{-\left( \frac{a}{b}\right) }\right\} , \end{aligned}$$
(3.19)
$$\begin{aligned}&\quad {D\varGamma }_b\left( s,\frac{a}{b}\right) =\frac{\partial }{\partial b}\varGamma \left( s,\frac{a}{b}\right) =\frac{a}{b^2}\left\{ {\left( \frac{a}{b}\right) }^{s-1}\ e^{-\left( \frac{a}{b}\right) }\right\} , \end{aligned}$$
(3.20)
$$\begin{aligned}&\quad {DQ}_a\left( r,s,\frac{a}{b}\right) =\frac{\partial }{\partial a}Q\left( r,s,\frac{a}{b}\right) =-\frac{1}{b}\left\{ {\left[ {\log \left( \frac{a}{b}\right) \ }\right] }^r{\left( \frac{a}{b}\right) }^{s-1}\ e^{-\left( \frac{a}{b}\right) }\right\} , \end{aligned}$$
(3.21)
$$\begin{aligned}&\quad {DQ}_b\left( r,s,\frac{a}{b}\right) =\frac{\partial }{\partial b}Q\left( r,s,\frac{a}{b}\right) =\frac{a}{b^2}\left\{ {\left[ {\log \left( \frac{a}{b}\right) \ }\right] }^r{\left( \frac{a}{b}\right) }^{s-1}\ e^{-\left( \frac{a}{b}\right) }\right\} , \end{aligned}$$
(3.22)

Eqs. (3.17)–(3.22) can be implemented easily in any mathematical or statistical program.
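For instance, a direct numerical implementation of Eqs. (3.17) and (3.18) in Python might look as follows (a sketch of ours; scipy quadrature is only one of many options):

```python
import numpy as np
from scipy.integrate import quad

def upper_inc_gamma(s, t):
    # Eq. (3.17): Gamma(s, t) = int_t^inf w^(s-1) e^(-w) dw
    return quad(lambda w: w ** (s - 1) * np.exp(-w), t, np.inf)[0]

def Q(r, s, t):
    # Eq. (3.18): Q(r, s, t) = int_t^inf [log(w)]^r w^(s-1) e^(-w) dw
    return quad(lambda w: np.log(w) ** r * w ** (s - 1) * np.exp(-w), t, np.inf)[0]

# Check against the closed form Gamma(2, t) = (1 + t) e^(-t) used in Eq. (3.24)
t = 0.7
print(upper_inc_gamma(2, t), (1 + t) * np.exp(-t))  # should agree
```

Moreover,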

$$\begin{aligned}&\int ^{\infty }_1{{\left[ \log \left( y\right) \right] }^r\ y^{s-1}\ e^{-ty}\ dy}=\frac{{\partial }^r}{{\partial s}^r}\left\{ t^{-s}\ \varGamma \left( s,t\right) \ \right\} \ \ for\ \ r=0,1,\dots \ \ and\ s,t>0, \end{aligned}$$
(3.23)

see Equation (4.358.1) in [18]. Hence, using the preceding Eqs. (3.17)–(3.23), we obtain

$$\begin{aligned}&\int ^{\infty }_1{y\ e^{-ty}\ dy}=t^{-2}\ \varGamma \left( 2,t\right) =\frac{\left( 1+t\right) \ e^{-t}}{t^2}, \end{aligned}$$
(3.24)
$$\begin{aligned} \int ^{\infty }_1{\log \left( y\right) \ y\ e^{-ty}\ dy}&={\left. \frac{\partial }{\partial s}\left\{ t^{-s}\ \varGamma \left( s,t\right) \right\} \right| }_{s=2}={\left. \left\{ t^{-s}\ \textrm{Q}\left( 1,s,t\right) -t^{-s}\ \varGamma \left( s,t\right) \ \log (t)\right\} \right| }_{s=2} \nonumber \\&=t^{-2}\ \left\{ \textrm{Q}\left( 1,2,t\right) -\log (t)\ \varGamma \left( 2,t\right) \right\} . \end{aligned}$$
(3.25)

Similarly,

$$\begin{aligned} \int ^{\infty }_1{{\left[ \log \left( y\right) \right] }^2\ y\ e^{-ty}\ dy}&={\left. \frac{{\partial }^2}{{\partial s}^2}\left\{ t^{-s}\ \varGamma \left( s,t\right) \ \right\} \right| }_{s=2}={\left. \frac{\partial }{\partial s}\left\{ t^{-s}\ \textrm{Q}\left( 1,s,t\right) -t^{-s}\ \varGamma \left( s,t\right) \ \log (t)\right\} \right| }_{s=2} \nonumber \\&={\left. \left\{ t^{-s}\ \textrm{Q}\left( 2,s,t\right) -2t^{-s}\ {\log \left( t\right) \ }\ \textrm{Q}\left( 1,s,t\right) +t^{-s}\ \varGamma \left( s,t\right) \ {\left[ \log (t)\right] }^2\right\} \right| }_{s=2} \nonumber \\&=t^{-2}\ \left\{ \textrm{Q}\left( 2,2,t\right) -2{\log \left( t\right) \ }\ \textrm{Q}\left( 1,2,t\right) +{\left[ {\log \left( t\right) \ }\right] }^2\mathrm{\ }\varGamma \left( 2,t\right) \right\} \end{aligned}$$
(3.26)
$$\begin{aligned} \int ^{\infty }_1{{\left[ \log \left( y\right) \right] }^3\ y\ e^{-ty}\ dy}&={\left. \frac{{\partial }^3}{{\partial s}^3}\left\{ t^{-s}\ \varGamma \left( s,t\right) \right\} \right| }_{s=2} \nonumber \\&={\left. \frac{\partial }{\partial s}\left\{ t^{-s}\ \textrm{Q}\left( 2,s,t\right) -2t^{-s}\ \log \left( t\right) \ \textrm{Q}\left( 1,s,t\right) +t^{-s}\ \varGamma \left( s,t\right) \ {\left[ \log (t)\right] }^2\right\} \right| }_{s=2} \nonumber \\&=t^{-2}\ \left\{ \textrm{Q}\left( 3,2,t\right) -3\log \left( t\right) \ \textrm{Q}\left( 2,2,t\right) +3\ {\left[ \log \left( t\right) \right] }^2\ \textrm{Q}\left( 1,2,t\right) -{\left[ \log \left( t\right) \right] }^3\ \varGamma \left( 2,t\right) \right\} . \end{aligned}$$
(3.27)

As a result,

$$\begin{aligned}&{{\mathbb {E}}}\left\{ e^{\beta X}-1\right\} ={{\mathbb {E}}}\left\{ \frac{\beta }{\alpha }Y\right\} =\frac{\beta }{\alpha }\int ^{\infty }_0{y\ f\left( y\right) dy}=\frac{\beta }{\alpha }\int ^{\infty }_0{y\ e^{-y}\ dy}=\frac{\beta }{\alpha } \end{aligned}$$
(3.28)
$$\begin{aligned} {{\mathbb {E}}}\left\{ X\ e^{\beta X}\right\}&={{\mathbb {E}}}\left\{ \left( \frac{\alpha +\beta \ Y}{\alpha \ \beta }\right) \ \log \left( \frac{\alpha +\beta \ Y}{\alpha }\right) \right\} =\frac{1}{\alpha \ \beta }\int ^{\infty }_0{\left( \alpha +\beta \ y\right) \ \log \left( \frac{\alpha +\beta \ y}{\alpha }\right) \ e^{-y}\ dy} \nonumber \\&=\frac{\alpha \ e^{\frac{\alpha }{\beta }}}{{\beta }^2}\int ^{\infty }_1{w\ \log \left( w\right) \ e^{-\frac{\alpha }{\beta }\ w}\ dw},\quad \text {where}\ w=\frac{\alpha +\beta \ y}{\alpha }, \nonumber \\&=\frac{e^{\frac{\alpha }{\beta }}}{\alpha }\left\{ \textrm{Q}\left( 1,2,\frac{\alpha }{\beta }\right) -\log \left( \frac{\alpha }{\beta }\right) \ \varGamma \left( 2,\frac{\alpha }{\beta }\right) \right\} \end{aligned}$$
(3.29)

Similarly,

$$\begin{aligned} {{\mathbb {E}}}\left\{ X^2\ e^{\beta X}\right\}&={{\mathbb {E}}}\left\{ \left( \frac{\alpha +\beta \ Y}{\alpha \ {\beta }^2}\right) \ {\left[ \log \left( \frac{\alpha +\beta \ Y}{\alpha }\right) \right] }^2\right\} \nonumber \\&=\left( \frac{e^{\left( \frac{\alpha }{\beta }\right) }}{\alpha \ \beta }\right) \ \left\{ \textrm{Q}\left( 2,2,\frac{\alpha }{\beta }\right) -2{\log \left( \frac{\alpha }{\beta }\right) \ }\ \textrm{Q}\left( 1,2,\frac{\alpha }{\beta }\right) +{\left[ {\log \left( \frac{\alpha }{\beta }\right) \ }\right] }^2\mathrm{\ }\varGamma \left( 2,\frac{\alpha }{\beta }\right) \right\} \end{aligned}$$
(3.30)
$$\begin{aligned} {{\mathbb {E}}}\left\{ X^3\ e^{\beta X}\right\}&={{\mathbb {E}}}\left\{ \left( \frac{\alpha +\beta \ Y}{\alpha \ {\beta }^3}\right) \ {\left[ \log \left( \frac{\alpha +\beta \ Y}{\alpha }\right) \right] }^3\right\} \nonumber \\&=\frac{e^{\frac{\alpha }{\beta }}}{\alpha \ {\beta }^2}\left\{ \textrm{Q}\left( 3,2,\frac{\alpha }{\beta }\right) -3\log \left( \frac{\alpha }{\beta }\right) \ \textrm{Q}\left( 2,2,\frac{\alpha }{\beta }\right) +3\ {\left[ \log \left( \frac{\alpha }{\beta }\right) \right] }^2\ \textrm{Q}\left( 1,2,\frac{\alpha }{\beta }\right) -{\left[ \log \left( \frac{\alpha }{\beta }\right) \right] }^3\ \varGamma \left( 2,\frac{\alpha }{\beta }\right) \right\} \end{aligned}$$
(3.31)

Refer to Appendix A for the joint cumulants of the derivatives of the log-likelihood function. As previously stated, any computer program may be used to compute the bias-corrected MLE of \(\xi \), \({{\widehat{\xi }}}_{BCMLE}\), given by

$$\begin{aligned} {{\widehat{\xi }}}_{BCMLE}={\widehat{\xi }}-{{\widehat{\varPhi }}}^{-1}{\widehat{\varPsi }}\cdot vec\left( {{\widehat{\varPhi }}}^{-1}\right) \end{aligned}$$

where \(\varPhi \) is the Fisher information matrix of \(\xi \) and \(\varPsi =\left[ {\varOmega }^{\left( 1\right) }\,|\,{\varOmega }^{\left( 2\right) }\right] \) with \({\varOmega }^{(k)}=\left[ {\omega }^{(k)}_{ij}\right] \), \(i,j,k=1,2\).
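As a sanity check, the closed-form expectations above can be compared with direct numerical integration against the pdf in Eq. (1.2). The following sketch (ours, reusing the hypothetical helpers upper_inc_gamma and Q defined after Eq. (3.18)) verifies Eq. (3.29):

```python
import numpy as np
from scipy.integrate import quad

alpha, beta = 0.5, 1.5
t = alpha / beta

# Closed form of E{X e^(beta X)} from Eq. (3.29)
closed_form = (np.exp(t) / alpha) * (Q(1, 2, t) - np.log(t) * upper_inc_gamma(2, t))

# Direct numerical integration of x e^(beta x) f(x) with the pdf of Eq. (1.2)
pdf = lambda x: alpha * np.exp(beta * x) * np.exp(-(alpha / beta) * (np.exp(beta * x) - 1.0))
numeric = quad(lambda x: x * np.exp(beta * x) * pdf(x), 0, np.inf)[0]

print(closed_form, numeric)  # should agree to quadrature accuracy
```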

3.2 Bootstrap Approach

[9] introduced the bootstrap resampling technique, which creates pseudo-samples from the original sample. The bias-corrected MLEs are obtained by subtracting the bias estimated from these pseudo-samples from the original MLEs, as follows.

Let x \(={\left( x_1,\cdots ,x_n\right) }^{'}\) be a random sample of size n from the distribution function (cdf) F. Let \(\nu =t\left( F\right) \) be a function of F, and let \({\widehat{\nu }}=g({{\textbf{x}}})\) be the estimator of \(\nu \). By drawing observations with replacement, we resample the original sample x into pseudo-samples of size n, x \(^{*}\) \(={\left( x^*_1,\dots ,x^*_n\right) }^{'}\). From these pseudo-samples we obtain the bootstrap replicates of \({\widehat{\nu }}\), denoted by \({{\widehat{\nu }}}^*=g({{{\textbf{x}}}}^{{\mathbf *}})\). The cdf of \({\widehat{\nu }}\), \(F_{{\widehat{\nu }}}\), may be estimated using the empirical cdf (ecdf) of \({{\widehat{\nu }}}^*\). The bootstrap bias of the estimator \({\widehat{\nu }}=g({{\textbf{x}}})\) is then

$$\begin{aligned} B_F\left( {\widehat{\nu }},\nu \right) ={{{\mathbb {E}}}}_F\left[ {\widehat{\nu }}\right] -\ \nu \left( F\right) . \end{aligned}$$
(3.32)

Since \(F_{{\widehat{\nu }}}\) is a consistent estimator, we may substitute it for F in Eq. (3.32) to obtain the estimated bootstrap bias

$$\begin{aligned} {\hat{B}}_{F_{{\widehat{\nu }}}}\left( {\widehat{\nu }},\nu \right) ={{{\mathbb {E}}}}_{F_{{\widehat{\nu }}}}\left[ {{\widehat{\nu }}}^*\right] -\ {\widehat{\nu }}. \end{aligned}$$
(3.33)

If N bootstrap estimates \(\left( {{\widehat{\nu }}}^{*\left( 1\right) },\dots ,{{\widehat{\nu }}}^{*\left( N\right) }\right) \) are available and N is sufficiently large, then \({{{\mathbb {E}}}}_{F_{{\widehat{\nu }}}}\left[ {{\widehat{\nu }}}^*\right] \) in Eq. (3.33) may be approximated by

$$\begin{aligned} {{\widehat{\nu }}}^{*\left( \cdot \right) }=\frac{1}{N}\sum ^N_{i=1}{{{\widehat{\nu }}}^{*\left( i\right) }}. \end{aligned}$$

Consequently, the estimated bootstrap bias is

$$\begin{aligned} {\hat{B}}_{F_{{\widehat{\nu }}}}\left( {\widehat{\nu }},\nu \right) ={{\widehat{\nu }}}^{*\left( \cdot \right) }-\ {\widehat{\nu }}. \end{aligned}$$
(3.34)

Therefore, using Efron’s bootstrap resampling approach, the bias-corrected estimator (\({\nu }^B\)) equals

$$\begin{aligned} {\nu }^B={\widehat{\nu }}-{\hat{B}}_{F_{{\widehat{\nu }}}}\left( {\widehat{\nu }},\nu \right) =2\ {\widehat{\nu }}-\ {{\widehat{\nu }}}^{*\left( \cdot \right) }. \end{aligned}$$
(3.35)

In our case, we let \({{\widehat{\xi }}}_{BCBOOT}={\nu }^B\).
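A compact Python sketch of this procedure (ours; it reuses the hypothetical gompertz_mle routine from Sect. 2, and the default N = 1000 follows the number of bootstrap replications used in our simulations) is:

```python
import numpy as np

def bootstrap_bias_corrected(x, n_boot=1000, rng=None):
    # Nonparametric bootstrap bias correction, Eqs. (3.34)-(3.35)
    rng = np.random.default_rng(rng)
    x = np.asarray(x)
    xi_hat = gompertz_mle(x)                   # original MLE (nu_hat)
    boot = np.empty((n_boot, 2))
    for b in range(n_boot):
        x_star = rng.choice(x, size=len(x), replace=True)  # pseudo-sample x*
        boot[b] = gompertz_mle(x_star, start=xi_hat)       # bootstrap replicate
    # Eq. (3.35): nu^B = 2 nu_hat - nu*(.), with nu*(.) the replicate average
    return 2.0 * xi_hat - boot.mean(axis=0)
```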

4 A Simulation Study

In this section, using the cdf and pdf given in Eqs. (1.1) and (1.2), respectively, we run Monte Carlo simulations to evaluate the performance of the various Gompertz distribution estimators under consideration. We draw random samples of size n = 5, 10, 20, 35, 50, 75, 100, 200 with \(\alpha \) = 0.1, 0.5, 1, 1.5, 3 and \(\beta \) = 0.1, 0.5, 1, 1.5, 3. For each combination of (n, \(\alpha \), \(\beta \)), we used \(M=5000\) Monte Carlo replications and \(B=1000\) bootstrap replications.

For an estimator \({\hat{\xi }}\) of the parameter \(\xi =(\alpha ,\beta )'\), we compute the average bias and RMSE, given by Bias(\({\widehat{\xi }}\))=\(\frac{1}{M}\sum ^M_{i=1}{\left( {{\widehat{\xi }}}_i-\xi \right) }\) and RMSE(\({\widehat{\xi }}\))=\(\sqrt{\frac{1}{M}\sum ^M_{i=1}{{\left( {{\widehat{\xi }}}_i-\xi \right) }^2}}\), respectively.
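These two criteria are simple to compute; a small sketch (ours, assuming the M estimates are stored row-wise in an array) follows:

```python
import numpy as np

def bias_and_rmse(estimates, xi_true):
    # estimates: (M x 2) array of (alpha_hat, beta_hat); xi_true: (alpha, beta)
    err = estimates - np.asarray(xi_true)     # xi_hat_i - xi
    bias = err.mean(axis=0)                   # (1/M) sum (xi_hat_i - xi)
    rmse = np.sqrt((err ** 2).mean(axis=0))   # sqrt((1/M) sum (xi_hat_i - xi)^2)
    return bias, rmse
```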

Figures 2 and 3 show the average biases and RMSEs of the estimates of \(\alpha \) and \(\beta \) across sample sizes. The following conclusions can be drawn.

  1.

    The MLE of \(\alpha \) shows a positive bias in every simulation considered; that is, it generally overestimates \(\alpha \), especially when the sample size is small. The MLE of \(\beta \) commonly shows a negative bias, consistently underestimating the true value of \(\beta \) across sample sizes, when the true value of \(\beta \) is greater than or equal to one. On the other hand, when the true value of \(\beta \) is less than one, the MLE tends to have a positive bias; that is, across sample sizes it on average overestimates the true value of \(\beta \).

  2.

    In most simulations and for various sample sizes, the bias-corrected estimators (BCMLE and BCBOOT) of \(\alpha \) and \(\beta \) outperformed the MLE in terms of both bias and RMSE. Therefore, the bias-corrected estimators are better options for estimating \(\alpha \) and \(\beta \) when bias is a concern.

  3.

    As expected, the biases and RMSEs of all examined estimators decrease as the sample size n increases, since most estimators perform better with more data. For the bias-corrected estimators, the reductions in bias and RMSE are quite considerable for small sample sizes. For example, when n = 10, \(\alpha \) = 0.1, and \(\beta \) = 1, we have Bias(\({\hat{\alpha }}_{MLE}\)) = 0.3229, Bias(\({\hat{\alpha }}_{BCMLE}\)) = 0.0676, Bias(\({\hat{\alpha }}_{BCBOOT}\)) = 0.0576, Bias(\({\hat{\beta }}_{MLE}\)) = \(-\)0.0435, Bias(\({\hat{\beta }}_{BCMLE}\)) = 0.0066, Bias(\({\hat{\beta }}_{BCBOOT}\)) = \(-\)0.0122, RMSE(\({\hat{\alpha }}_{MLE}\)) = 0.8725, RMSE(\({\hat{\alpha }}_{BCMLE}\)) = 0.4399, RMSE(\({\hat{\alpha }}_{BCBOOT}\)) = 0.5036, RMSE(\({\hat{\beta }}_{MLE}\)) = 0.4422, RMSE(\({\hat{\beta }}_{BCMLE}\)) = 0.3101, RMSE(\({\hat{\beta }}_{BCBOOT}\)) = 0.3437.

Fig. 2 Plots of the average bias comparisons of the three estimation methods for \(\alpha \) (left panel) and \(\beta \) (right panel)

Fig. 3 Plots of the RMSE comparisons of the three estimation methods for \(\alpha \) (left panel) and \(\beta \) (right panel)

5 Illustrative Applications

We consider the maximum likelihood estimator, denoted by \({{\widehat{\xi }}}_{\textrm{MLE}}\); the bias-corrected maximum likelihood estimator based on the analytical approach, denoted by \({{\widehat{\xi }}}_{\textrm{BCMLE}}\); and the bias-corrected estimator based on the bootstrap approach, denoted by \({{\widehat{\xi }}}_{\textrm{BCBOOT}}\). To compare the performance of these estimators, we consider two real data sets.

The first data set represents the lifetimes of 20 electronic components; see [30], page 100. It was also studied by [37]. It is provided, for convenience, as follows: 0.03, 0.12, 0.22, 0.35, 0.73, 0.79, 1.25, 1.41, 1.52, 1.79, 1.8, 1.94, 2.38, 2.4, 2.87, 2.99, 3.14, 3.17, 4.72 and 5.09.

Table 1 contains the estimated values of the parameters of the Gompertz distribution. It shows that the bias-corrected MLE and bootstrap estimates of \(\alpha \) are smaller than the MLE, indicating that the MLE approach overestimates this parameter. The pdf and cdf of the Gompertz distribution, evaluated at the \(\alpha \) and \(\beta \) estimates in Table 1, are shown in Fig. 4. Since the density shape based on the MLE approach may be misleading, as illustrated in this figure, we suggest using the bias-corrected MLEs for this data set.

Table 1 Point estimates of the parameters (\(\alpha \) and \(\beta \)) of the Gompertz distribution for the first data set
Fig. 4 The cdf and pdf of the Gompertz distribution fitted to the survival rate data using the various estimates of \(\alpha \) and \(\beta \)

The second data set represents the failure times (in minutes) of a sample of 15 electronic components in an accelerated life test. These data were taken from [24], page 204, and were also analyzed by [11, 29] and [41]. We provide them here for convenience: 1.4, 5.1, 6.3, 10.8, 12.1, 18.5, 19.7, 22.2, 23, 30.6, 37.3, 46.3, 53.9, 59.8 and 66.2.

Similarly, Table 2 lists the estimated values of the parameters of the Gompertz distribution. It shows that the bias-corrected MLE and bootstrap estimates of \(\alpha \) are smaller than the MLE, again indicating that the MLE approach overestimates this parameter. The pdf and cdf of the Gompertz distribution, evaluated at the \(\alpha \) and \(\beta \) estimates in Table 2, are shown in Fig. 5. Since the density shape based on the MLE technique may be misleading, as shown in this figure, we advise using the bias-corrected MLEs for this data set.

Table 2 Point estimates of the parameters (\(\alpha \) and \(\beta \)) of the Gompertz distribution for the second data set
Fig. 5 The cdf and pdf of the Gompertz distribution fitted to the failure time data using the various estimates of \(\alpha \) and \(\beta \)

6 Concluding Remarks

Based on the "analytical approach" of [7], we obtained the second-order bias-corrected MLEs of the Gompertz distribution. In addition to having straightforward formulas, the bias-corrected MLEs simultaneously reduce the bias and the root mean square errors (RMSEs) of the parameter estimates of the Gompertz distribution. We also assessed the resampling technique known as the "bootstrap approach" of [9] for parameter estimation. According to the numerical findings of both the simulation studies and the real-data applications, the bias-corrected MLEs are recommended for practical applications, particularly when the sample size is small or moderate.