An Improvement in Maximum Likelihood Estimation of the Gompertz Distribution Parameters

In this study, we consider estimation of the parameters of the Gompertz distribution. The maximum likelihood technique is the most widely used method for parameter estimation in the literature. However, it is well known that maximum likelihood estimators (MLEs) are biased for small sample sizes. This motivates us to produce nearly unbiased estimators for the parameters of this distribution. More specifically, we concentrate on two bias-correction strategies (an analytical approach and a bootstrap approach) that reduce the bias of the MLEs to the second order of magnitude. Monte Carlo simulations are used to compare the performance of these estimators. Finally, two real-data examples demonstrate the utility of the proposed estimators for small sample sizes.


Introduction
Gompertz [17] introduced the Gompertz distribution to fit mortality tables. This distribution is unimodal and positively skewed, and its hazard rate function increases monotonically. As a result, the Gompertz distribution is used to model phenomena that have an increasing failure rate. It is worth mentioning that it has some interesting relationships with well-known distributions such as the exponential, Weibull, Gumbel, generalized logistic and double exponential distributions (see [39]). Garg et al. [13] studied maximum likelihood estimation of the parameters of the Gompertz survival function.
Ahuja and Nash [1] provide a survey and applications of the Gompertz distribution. Many researchers have contributed to the characterization and statistical methodology of this distribution for analyzing a variety of real-world applications, including medical, survival, behavioral, biological, environmental, and actuarial studies; see, for instance, [2-4, 8, 10, 12, 21, 22, 26, 27, 31-33, 35, 40, 42].
The maximum likelihood technique is well known as the most popular estimation method. This is due to its appealing mathematical properties for large sample sizes, such as unbiasedness, consistency, efficiency and asymptotic normality. These properties, however, may not hold for small or even moderate sample sizes; see, for example, [14-16, 25, 34, 38], among others.
In this study, we consider two strategies. The first is a correction strategy known as the "analytical approach", presented by [7]. This method corrects the bias of the MLEs to the second order of magnitude by subtracting the estimated bias from the MLEs. Some researchers, including [20, 28, 36], have developed software routines (albeit limited to certain pre-specified distributions) that allow users to compute the analytic Cox-Snell bias corrections. The second strategy is based on the bootstrap resampling procedure of [9], the "bootstrap approach", which also reduces the bias to the second order. In both strategies, we refer to the corrected estimators as bias-corrected estimators. To demonstrate the performance of these estimators, Monte Carlo simulations and real-world applications are used.
If X follows the Gompertz distribution (denoted by Gomp(α, β)), then the distribution function (cdf) and the probability density function (pdf) of X are given respectively by (see, for example, [5, 19, 23])

F(x) = 1 - \exp\left\{-\frac{\beta}{\alpha}\left(e^{\alpha x} - 1\right)\right\}, \qquad x > 0,

f(x) = \beta e^{\alpha x} \exp\left\{-\frac{\beta}{\alpha}\left(e^{\alpha x} - 1\right)\right\}, \qquad x > 0,

where α > 0 is the shape parameter and β > 0 is the scale parameter, as shown in Fig. 1. In addition, the role of the shape parameter can also be seen in the hazard function of this distribution,

h(x) = \frac{f(x)}{1 - F(x)} = \beta e^{\alpha x}.

It is clear that as α → 0⁺, the Gompertz distribution approaches the exponential distribution with rate β. The rest of the paper is organized as follows. Maximum likelihood estimation (MLE) is introduced in Sect. 2. In Sect. 3, we give bias-corrected MLEs for both the analytical and bootstrap approaches. In Sect. 4, we report simulation results to evaluate the performance of the estimation methods. Moreover, two examples based on real data are used to demonstrate the performance of these approaches in Sect. 5. Lastly, some concluding remarks are given in Sect. 6.
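For concreteness, the density, distribution, and hazard functions of the Gompertz distribution can be evaluated with a few lines of code. The sketch below (in Python; the function names are ours, not from any package) also illustrates the limiting behaviour as α → 0⁺, where the Gompertz cdf approaches the exponential cdf 1 − e^{−βx}.

```python
import math

# A quick numerical sketch of the Gompertz cdf, pdf, and hazard under the
# parameterization used here (alpha = shape, beta = scale); the function
# names are ours, not from any library.

def gompertz_cdf(x, alpha, beta):
    # F(x) = 1 - exp{-(beta/alpha)(e^{alpha x} - 1)}, x >= 0
    return 1.0 - math.exp(-(beta / alpha) * (math.exp(alpha * x) - 1.0))

def gompertz_pdf(x, alpha, beta):
    # f(x) = beta e^{alpha x} exp{-(beta/alpha)(e^{alpha x} - 1)}
    return beta * math.exp(alpha * x) * math.exp(
        -(beta / alpha) * (math.exp(alpha * x) - 1.0))

def gompertz_hazard(x, alpha, beta):
    # h(x) = f(x) / (1 - F(x)) = beta e^{alpha x}, increasing in x
    return beta * math.exp(alpha * x)

# As alpha -> 0+, the cdf approaches the exponential cdf 1 - e^{-beta x}:
print(gompertz_cdf(1.0, 1e-8, 0.5))   # close to 1 - exp(-0.5) ~ 0.3935
print(1.0 - math.exp(-0.5))
```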

Maximum Likelihood Estimation
Let X_1, …, X_n be a random sample of size n from Gomp(α, β). The log-likelihood function (l) is

l(\alpha, \beta) = n \log \beta + \alpha \sum_{i=1}^{n} x_i - \frac{\beta}{\alpha} \sum_{i=1}^{n} \left(e^{\alpha x_i} - 1\right). \qquad (2.1)

We maximize Eq. (2.1) with respect to α and β in order to obtain the MLEs (α̂ and β̂) of α and β, respectively. Thus, we have the following score equations:

\frac{\partial l}{\partial \beta} = \frac{n}{\beta} - \frac{1}{\alpha} \sum_{i=1}^{n} \left(e^{\alpha x_i} - 1\right) = 0,

\frac{\partial l}{\partial \alpha} = \sum_{i=1}^{n} x_i + \frac{\beta}{\alpha^{2}} \sum_{i=1}^{n} \left(e^{\alpha x_i} - 1\right) - \frac{\beta}{\alpha} \sum_{i=1}^{n} x_i e^{\alpha x_i} = 0.

For small sample sizes, these MLEs will be biased, as mentioned previously. Therefore, the bias may provide misleading results, influencing the interpretation of occurrences in real-world applications. As a result, this motivates us to examine approximately unbiased estimators to reduce the bias of these Gompertz distribution MLEs.
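Because the score equations admit no closed-form solution, the MLEs must be found numerically. One convenient route, sketched below under our parameterization (the helper names are ours), is to solve the β-equation exactly for fixed α, giving β̂(α) = nα / Σ(e^{αx_i} − 1), and then maximize the resulting profile log-likelihood over α by a one-dimensional search.

```python
import math, random

def gompertz_mle(x, a_lo=1e-6, a_hi=10.0, iters=200):
    """Profile-likelihood MLE for Gomp(alpha, beta): for fixed alpha the score
    equation for beta gives beta_hat(alpha) = n*alpha / sum(e^{alpha x_i} - 1);
    the profile log-likelihood is then maximized over alpha by ternary search
    (assumed unimodal on [a_lo, a_hi])."""
    n, sx = len(x), sum(x)

    def profile_ll(a):
        s = sum(math.exp(a * xi) - 1.0 for xi in x)
        return n * math.log(n * a / s) + a * sx - n

    lo, hi = a_lo, a_hi
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if profile_ll(m1) < profile_ll(m2):
            lo = m1
        else:
            hi = m2
    a_hat = 0.5 * (lo + hi)
    b_hat = n * a_hat / sum(math.exp(a_hat * xi) - 1.0 for xi in x)
    return a_hat, b_hat

# Simulate via the inverse cdf: X = log(1 - (alpha/beta) log(1 - U)) / alpha
random.seed(1)
alpha, beta = 1.0, 0.5
data = [math.log(1.0 - (alpha / beta) * math.log(1.0 - random.random())) / alpha
        for _ in range(200)]
print(gompertz_mle(data))   # should be in the vicinity of (1.0, 0.5)
```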

Bias-Corrected MLEs
This section examines two bias-correction techniques. The first is [7]'s "analytical approach", which is described in Sect. 3.1, and the second is [9]'s "bootstrap approach", which is presented in Sect. 3.2.

Analytical Approach
Assume that l(θ) is the log-likelihood function based on n observations with a p-dimensional parameter vector θ = (θ_1, …, θ_p)′, and that l(θ) is regular with respect to all derivatives up to the third order.
The joint cumulants of the derivatives of l = l(θ) are defined as

\kappa_{ij} = E\left[\frac{\partial^2 l}{\partial \theta_i \partial \theta_j}\right], \qquad (3.1)

\kappa_{ijk} = E\left[\frac{\partial^3 l}{\partial \theta_i \partial \theta_j \partial \theta_k}\right], \qquad (3.2)

\kappa_{ij,k} = E\left[\frac{\partial^2 l}{\partial \theta_i \partial \theta_j}\,\frac{\partial l}{\partial \theta_k}\right]. \qquad (3.3)

The derivatives of these joint cumulants are denoted by

\kappa_{ij}^{(k)} = \frac{\partial \kappa_{ij}}{\partial \theta_k}. \qquad (3.4)

In addition, the expressions in Eqs. (3.1) through (3.4) are assumed to be of order O(n).
For independent but not necessarily identically distributed samples, [7] showed that the bias of the s-th element of the MLE θ̂ is

B(\hat{\theta}_s) = \sum_{i} \sum_{j} \sum_{k} \kappa^{si} \kappa^{jk} \left[\frac{1}{2}\kappa_{ijk} + \kappa_{ij,k}\right] + O(n^{-2}), \qquad (3.5)

where κ^{ij} denotes the (i, j)-th element of the inverse of the information matrix. Thereafter, [6] demonstrated that Eq. (3.5) remains valid even for non-identical and non-independent observations, provided that all terms in Eq. (3.5) are of order O(n). Let

a_{ij}^{(k)} = \kappa_{ij}^{(k)} - \frac{1}{2}\kappa_{ijk}, \qquad i, j, k = 1, \dots, p.

As a result, we have the two matrices A^{(k)} = \{a_{ij}^{(k)}\} and A = [A^{(1)} \,|\, A^{(2)} \,|\, \cdots \,|\, A^{(p)}]. Hence, the bias of θ̂ may be written in matrix form as

B(\hat{\theta}) = K^{-1} A \, \mathrm{vec}(K^{-1}) + O(n^{-2}),

where K is the information matrix and vec is an operator that stacks the column vectors of a matrix one on top of the other. Therefore, the bias-corrected MLE, denoted by θ̂_BCMLE, is given by

\hat{\theta}_{\mathrm{BCMLE}} = \hat{\theta} - \hat{K}^{-1} \hat{A} \, \mathrm{vec}(\hat{K}^{-1}),

where K̂ and Â denote K and A evaluated at θ̂. Since we are studying the Gompertz distribution, we have θ = (α, β)′ and p = 2. To obtain the bias-corrected MLEs, we must first compute the higher-order derivatives of the log-likelihood function of the Gompertz distribution with respect to α and β. In evaluating their expectations, it is convenient to note that Y = (β/α)(e^{αX} − 1) follows the standard exponential distribution. The upper incomplete gamma function,

\Gamma(a, x) = \int_{x}^{\infty} t^{a-1} e^{-t} \, dt,

and its derivatives, which are straightforward to obtain, will also be needed. Refer to Appendix A for the joint cumulants of the derivatives of the log-likelihood function.
As previously stated, any computer program may be used to compute the bias-corrected MLE of θ = (α, β)′, given by

\hat{\theta}_{\mathrm{BCMLE}} = \hat{\theta} - \hat{K}^{-1} \hat{A} \, \mathrm{vec}(\hat{K}^{-1}),

where K is the Fisher information matrix of θ and A = [A^{(1)} \,|\, A^{(2)}] with A^{(k)} = \{a_{ij}^{(k)}\}, i, j, k = 1, 2.
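As a small illustration of the matrix form of the correction, the sketch below implements θ̂_BCMLE = θ̂ − K̂⁻¹ Â vec(K̂⁻¹) for p = 2 with plain Python lists. The numbers in the demo are illustrative placeholders only; in practice the entries of K and A would be the Gompertz cumulant expressions, evaluated at the MLEs.

```python
# A sketch of the matrix form theta_bc = theta_hat - K^{-1} A vec(K^{-1})
# for p = 2, using plain lists; the demo values are placeholders, not the
# actual Gompertz cumulants.

def inv2(K):
    (a, b), (c, d) = K
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def vec(M):
    # stack the columns of a 2x2 matrix into a 4x1 column vector
    return [[M[0][0]], [M[1][0]], [M[0][1]], [M[1][1]]]

def cox_snell_correct(theta_hat, K, A):
    # bias = K^{-1} A vec(K^{-1}); A is the 2x4 block matrix [A^(1) | A^(2)]
    Kinv = inv2(K)
    bias = matmul(Kinv, matmul(A, vec(Kinv)))
    return [theta_hat[i] - bias[i][0] for i in range(2)]

theta_bc = cox_snell_correct(
    [2.0, 3.0],                    # placeholder MLEs
    [[4.0, 1.0], [1.0, 3.0]],      # placeholder information matrix K
    [[0.5, 0.1, 0.1, 0.2],         # placeholder A = [A^(1) | A^(2)]
     [0.3, 0.0, 0.0, 0.4]])
print(theta_bc)
```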

Bootstrap Approach
The bootstrap resampling technique was introduced by [9] to create pseudo-samples from the original sample. We subtract the bias estimated from these samples from the original MLEs, as follows, to produce the bias-corrected MLEs. Let x = (x_1, …, x_n)′ be a random sample of size n from a distribution with cdf F, let θ = t(F) be a function of F, and let θ̂ = g(x) be an estimator of θ. By drawing observations with replacement from the original sample x, we create pseudo-samples of size n, x* = (x*_1, …, x*_n)′. From these pseudo-samples we obtain the bootstrap replicates of θ̂, denoted θ̂* = g(x*). The cdf of θ̂, F̂_θ̂, may then be estimated using the empirical cdf (ecdf) of θ̂*. The bias of the estimator θ̂ = g(x) is

B_F(\hat{\theta}, \theta) = E_F[\hat{\theta}] - t(F).

Since the ecdf F̂ is a consistent estimator of F, we may replace F with F̂ in this equation, which, based on B bootstrap replicates, yields the bias estimate

\hat{B}_{\hat{F}}(\hat{\theta}, \theta) = \frac{1}{B}\sum_{b=1}^{B} \hat{\theta}^{*}_{b} - \hat{\theta},

and hence the bias-corrected estimator

\hat{\theta}_{\mathrm{BCBOOT}} = 2\hat{\theta} - \frac{1}{B}\sum_{b=1}^{B} \hat{\theta}^{*}_{b}.
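With B resamples, the bootstrap correction amounts to θ̂_BCBOOT = 2θ̂ − (1/B)Σ_b θ̂*_b. A minimal Python sketch follows (function names ours); for transparency the demo applies it to the plug-in variance, whose downward bias is known exactly, rather than to the Gompertz MLEs.

```python
import random

def bootstrap_bias_correct(x, estimator, B=1000, seed=0):
    """Efron-style correction: estimate the bias by mean(theta*_b) - theta_hat
    over B resamples drawn with replacement, then subtract it, i.e.
    theta_bc = 2 * theta_hat - mean(theta*_b)."""
    rng = random.Random(seed)
    n = len(x)
    theta_hat = estimator(x)
    boot_mean = 0.0
    for _ in range(B):
        resample = [x[rng.randrange(n)] for _ in range(n)]
        boot_mean += estimator(resample)
    boot_mean /= B
    return 2.0 * theta_hat - boot_mean

# Demo estimator: the plug-in variance, which underestimates the true
# variance by the known factor (n - 1) / n.
def var_mle(x):
    m = sum(x) / len(x)
    return sum((xi - m) ** 2 for xi in x) / len(x)

random.seed(3)
sample = [random.expovariate(1.0) for _ in range(30)]
print(var_mle(sample), bootstrap_bias_correct(sample, var_mle, B=500, seed=1))
```

The corrected value should sit slightly above the plug-in estimate, since the bootstrap detects its downward bias.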

A Simulation Study
In this section, we conduct Monte Carlo simulations in which samples are generated from the Gompertz distribution using the cdf and pdf given in Sect. 1. Figures 2 and 3 show the average biases and RMSEs of the estimates of α and β across sample sizes. Some inferences that can be drawn are the following.

1. The MLEs of α appear to have a positive bias in every simulation considered; that is, they generally overestimate the value of α, especially when the sample size is small. The MLEs of β, in contrast, show a negative bias (they consistently underestimate the true value of β across sample sizes) when the true value of β is equal to or greater than one, whereas they tend to have a positive bias (on average overestimating the true value of β) when the true value of β is less than one.

2. In most simulations and across sample sizes, the MLEs of α and β were outperformed by the BCMLEs and the BCBOOTs in terms of both bias and RMSE. Therefore, the bias-corrected estimators would be the better options for estimating α and β whenever bias is a concern.

3. As expected, the biases and RMSEs of all examined estimators decrease as the sample size n increases, since most estimators perform better as n grows. For the bias-corrected estimators, the reductions in bias and RMSE at small sample sizes are quite considerable.
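The qualitative pattern in points 1-3 is easy to reproduce in a setting where the bias is known in closed form. The sketch below (ours, in Python) uses the exponential distribution, the α → 0 limit of the Gompertz: the rate MLE λ̂ = 1/x̄ has expectation nλ/(n − 1), so its bias λ/(n − 1) shrinks as n grows, while the second-order corrected estimator (n − 1)λ̂/n is exactly unbiased.

```python
import math, random

def mc_bias(n, lam=2.0, reps=5000, seed=7):
    """Monte Carlo estimate of the bias of the exponential-rate MLE
    lambda_hat = n / sum(x_i) and of its second-order corrected version
    (n - 1) * lambda_hat / n, which is exactly unbiased."""
    rng = random.Random(seed)
    b_mle = b_bc = 0.0
    for _ in range(reps):
        s = sum(-math.log(1.0 - rng.random()) / lam for _ in range(n))
        mle = n / s
        b_mle += mle - lam
        b_bc += mle * (n - 1) / n - lam
    return b_mle / reps, b_bc / reps

for n in (10, 25, 50):
    print(n, mc_bias(n))   # MLE bias shrinks with n; corrected bias near zero
```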

Illustrative Applications
The maximum likelihood estimator, denoted by θ̂_MLE, the bias-corrected maximum likelihood estimator based on the analytical approach, denoted by θ̂_BCMLE, and the bias-corrected estimator based on the bootstrap approach, denoted by θ̂_BCBOOT, are considered. To compare the performance of these estimators, we analyze two real data sets. The first data set represents the lifetimes of 20 electronic components; see [30], page 100. It was also studied by [37]. It begins 0.03, 0.12, 0.22, …. Table 1 lists the estimated values of the parameters of the Gompertz distribution for this data set. The pdf and cdf of the Gompertz distribution, evaluated at the α and β estimates in Table 1, are shown in Fig. 4. We suggest using the bias-corrected MLEs for this data set, since the density shape based on the MLE approach may be misleading, as illustrated in this figure. The second data set represents the failure times (in minutes) for a sample of 15 electronic components in an accelerated life test. It was taken from [24], page 204, and has also been analyzed by [11, 29] and [41]. Similarly, Table 2 lists the estimated values of the parameters of the Gompertz distribution for this data set. Table 2 shows that the bias-corrected MLE and bootstrap estimates of the shape parameter are smaller than the MLE estimate, indicating that the MLE approach overestimates this parameter. The pdf and cdf of the Gompertz distribution, evaluated at the α and β estimates in Table 2, are shown in Fig. 5. Given that the density shape based on the MLE technique may be misleading, as shown in this figure, we again advise using the bias-corrected MLEs for this data set.

Concluding Remarks
Based on the "analytical approach" developed by [7], we obtained the second-order bias-corrected MLEs of the Gompertz distribution. In addition to having straightforward formulas, the bias-corrected MLEs simultaneously reduce the bias and root mean square errors (RMSEs) of the estimates of the Gompertz distribution parameters. We also assessed the resampling technique known as the "bootstrap approach" described by [9] for parameter estimation. According to the numerical findings of both the simulation studies and the real-data applications, bias-corrected MLEs should be recommended for use in practical applications, particularly when the sample size is small or moderate.