1 Introduction

The exponential probability distribution is a commonly used model for various real-world situations in fields such as engineering, business, economics, medicine, and biology. In a Poisson process, for example, the inter-arrival times follow the exponential distribution. In many real-life situations, however, the assumption of a constant hazard rate, and hence a constant rate of event occurrence, does not hold, which limits the model's adaptability. To address this issue, a new probability distribution called the Transformed MG-Extended Exponential Distribution has been developed. The distribution described here modifies the exponential distribution by raising its cumulative distribution function to a power determined by an additional parameter. This parameter governs the shape of the distribution and captures characteristics of real-world phenomena that the original exponential distribution cannot accommodate. The transformation thereby enhances the flexibility of the distribution.

New probability distributions have been introduced by adding new parameter(s) to existing distributions, making them more adaptable to various scenarios through transformation methods.

Several distributions derived from the exponential distribution have been studied in the literature. For example, Gupta and Kundu [1] introduced a new parameter to derive the exponentiated exponential distribution. Merovci [2] used the quadratic rank transmutation map to obtain the transmuted exponentiated exponential distribution from the exponentiated exponential distribution. Similarly, Oguntunde and Adejumo [3] generalized the exponential distribution to a two-parameter model using the same transformation technique. Hussian [4] examined the transmuted exponentiated gamma distribution, which is a generalization of the exponentiated gamma distribution. Enahoro et al. [5] applied the performance rating of the transmuted exponential distribution to a real-life dataset. Nadarajah and Kotz [6] studied the beta exponential distribution, which is generated from the logit of the exponential distribution.

Cordeiro and Castro [7] studied a new family of generalized distributions called Kumaraswamy distributions, which includes Weibull, gamma, normal, Gumbel, and inverse Gaussian distributions. Mahdavi and Kundu [8] developed a new alpha power transformation method by adding a new parameter to the exponential distribution, producing new probability distributions. Recently, Khalil et al. [9] developed a new Modified Frechet distribution by adding a shape parameter to the Frechet distribution. In their work, [10] introduced a new Modified Frechet-Rayleigh distribution. They used the Rayleigh distribution as a base distribution and added a shape parameter to their derivation.

Alzaatreh et al. [11] combined the T-X method with the probability density function of the exponential distribution to introduce a novel technique for generating new probability distributions. Marshall and Olkin [12] proposed a new technique by adding a parameter to a family of distributions using the Weibull distribution as a base distribution. Tahir et al. [13] explored a novel Weibull G-family distribution and its characteristics by using the Weibull distribution as a base distribution. When analyzing lifetime or failure time data, the Weibull and gamma distributions with two and three parameters are commonly used. However, these models have a limitation in that their hazard functions are monotonic, i.e., they either increase or decrease with time, which may not be suitable for all scenarios. For example, as pointed out by [14], the hazard function of the Weibull distribution grows from zero to infinity as its shape parameter increases, making it unsuitable for modeling lifetime data in survival analysis where some events have increasing risk over time and constant risk after a certain point.

The paper is organized into several sections for ease of navigation. The introduction can be found in Sect. 1. The transformed MG-extended exponential distribution and its associated density, cumulative distribution function, survival, and hazard function graphs are detailed in Sects. 2 and 3. The definitions of quantiles, mean, moments, and moment-generating functions are provided in Sects. 4 through 6. Section 7 explains the definition of order statistics, while Sect. 8 covers parameter estimation and the characteristics of estimators. The paper also includes a simulation study in Sect. 9 and applications discussed in Sect. 10. Finally, the conclusion can be found in Sect. 11.

2 The Transformed MG-Extended Exponential Distribution

A study conducted by [9] introduced a modified Frechet approach for generating probability distributions. The study presented the cumulative distribution function and the density function, which are defined by Eqs. 1 and 2, respectively. These functions were generated based on the base or induced distribution function, F(x), of a random variable X. The distribution function of the modified Frechet is provided below.

$$\begin{aligned} F_{New}(x)=\frac{e^{-{{(F(x))}^{\alpha }}}-1}{e^{-1}-1}, x>0, \alpha >0 \end{aligned}$$
(1)

The probability density function is subsequently specified to be

$$\begin{aligned} f_{New}(x)=\frac{d}{dx}{F_{New}(x)}=\frac{\alpha f(x) (F(x))^{\alpha -1} e^{-{{(F(x))}^{\alpha }}}}{1-e^{-1}}, x>0, \alpha >0 \end{aligned}$$
(2)

where \(\alpha\) is the distribution’s extra parameter.

The aforementioned approach is employed in this study to develop a new probability distribution based on the exponential probability distribution, named the Transformed MG-Extended Exponential (TMGEE) distribution. The TMGEE distribution is introduced to improve time-to-event data models for survival analysis and to increase the flexibility of the distribution for modeling real-world problems. Equations 3 and 4 define the cumulative distribution and density functions of the exponential distribution, respectively.

$$\begin{aligned} F(x;\lambda )= & {} 1-e^{-\lambda x} \end{aligned}$$
(3)
$$\begin{aligned} f(x;\lambda )= & {} \lambda e^{-\lambda x}, x \ge 0, \lambda >0 \end{aligned}$$
(4)

where \(\lambda\) is the distribution's rate parameter, indicating the rate at which events occur in a Poisson point process; the mean is \(1/\lambda\) and the variance is \(1/\lambda ^2\).

The new proposed TMGEE cumulative distribution function is defined in Eq. 5, for the shape parameter \(\alpha\) and scale parameter \(\lambda\), by substituting the cumulative distribution function defined in Eq. 3 into Eq. 1.

$$\begin{aligned} F_{TMGEE}(x;\alpha ,\lambda )=\frac{e^{-{(1-e^{-\lambda x})}^{\alpha }}-1}{e^{-1}-1}, x \ge 0, \alpha>0, \lambda >0 \end{aligned}$$
(5)

By differentiating the cumulative distribution with respect to x, we determine the probability density function. Equation 6 thus defines the probability density function of the TMGEE distribution.

$$\begin{aligned} f_{TMGEE}(x;\alpha ,\lambda )=\frac{\alpha \lambda (1-e^{-\lambda x})^{\alpha -1} e^{- \lambda x-{{(1-e^{-\lambda x})}^{\alpha }}}}{1-e^{-1}}, x \ge 0, \alpha>0, \lambda >0 \end{aligned}$$
(6)
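For reference, Eqs. 5 and 6 translate directly into R. The following is a minimal sketch (the function names ptmgee and dtmgee are our own, not from a published package); the last line numerically confirms Proposition 1 below.

ptmgee <- function(x, alpha, lambda) {                 # CDF, Eq. 5
  (exp(-(1 - exp(-lambda * x))^alpha) - 1) / (exp(-1) - 1)
}
dtmgee <- function(x, alpha, lambda) {                 # density, Eq. 6
  alpha * lambda * (1 - exp(-lambda * x))^(alpha - 1) *
    exp(-lambda * x - (1 - exp(-lambda * x))^alpha) / (1 - exp(-1))
}
integrate(dtmgee, 0, Inf, alpha = 1.5, lambda = 2)$value  # approx. 1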

Proposition 1

The TMGEE distribution, \(f_{TMGEE}(x;\alpha ,\lambda )\) is a legitimate probability density function.

Proof

\(f_{TMGEE}(x;\alpha ,\lambda )\) is obviously non-negative for all \(x \ge 0\), and

$$\begin{aligned} \int _{0}^{\infty } f(x) dx&= \int _{0}^{\infty } \frac{\alpha \lambda (1-e^{-\lambda x})^{\alpha -1} e^{- \lambda x-{{(1-e^{-\lambda x})}^{\alpha }}}}{1-e^{-1}} dx \\&= \frac{1}{1-e^{-1}} \int _{0}^{\infty } \alpha \lambda (1-e^{-\lambda x})^{\alpha -1} e^{- \lambda x-{{(1-e^{-\lambda x})}^{\alpha }}}dx \end{aligned}$$

Substituting \(u = (1-e^{-\lambda x})^{\alpha}\), and \(du = \alpha \lambda e^{-\lambda x} (1-e^{- \lambda x})^{\alpha -1}dx\), we have 

$$\begin{aligned} \int _{0}^{\infty } f(x) dx&= \frac{1}{1-e^{-1}} \int _{0}^{1} e^{- u}\,du = 1 \end{aligned}$$

\(\square\)

Note that for \(0<\alpha \le 1\) the TMGEE probability density function has a decreasing curve with a form akin to that of the exponential distribution's probability density function; the exponential distribution, however, is not a special case of the TMGEE distribution. The probability density function and cumulative distribution function of the TMGEE distribution are displayed in Fig. 1 for some selected \(\alpha\) and \(\lambda\) values.

Fig. 1 The density function and distribution function of the TMGEE distribution for some chosen values of \(\alpha\) and \(\lambda\)

3 Survival and Hazard Functions of the TMGEE Distribution

The survival and hazard functions of the TMGEE distribution are defined, respectively, by Eqs. 7 and 8 below.

$$\begin{aligned} S_{TMGEE}(x;\alpha ,\lambda )= & {} \frac{e^{-1}-e^{-{(1-e^{-\lambda x})}^{\alpha }}}{e^{-1}-1}, x \ge 0, \alpha>0, \lambda >0 \end{aligned}$$
(7)
$$\begin{aligned} h_{TMGEE}(x;\alpha ,\lambda )= & {} \frac{\alpha \lambda (1-e^{-\lambda x})^{\alpha -1} e^{- \lambda x-{{(1-e^{-\lambda x})}^{\alpha }}}}{e^{-{(1-e^{-\lambda x})}^{\alpha }}-e^{-1}}, x \ge 0, \alpha>0, \lambda >0 \end{aligned}$$
(8)

For some chosen values of \(\alpha\) and \(\lambda\), Fig. 2 displays the survival and hazard functions of the TMGEE distribution. It is worth noting that the hazard function has distinct characteristics depending on the selected parameters (a numerical check follows the list below):

  • When \(0 < \alpha \le 1\), the curve continuously decreases until settling to the value of \(\lambda\).

  • When \(\alpha > 1\) and \(\alpha > \lambda\), the curve rises until stabilizing at the value of \(\lambda\).

  • Finally, when \(\alpha > 1\) and \(\lambda \ge \alpha\), the curve reaches its highest value and then slightly decreases before settling to the value of \(\lambda\).
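The limiting value \(\lambda\) in each regime can be checked numerically. A small sketch in R, where htmgee (our own naming) implements Eq. 8:

htmgee <- function(x, alpha, lambda) {   # hazard function, Eq. 8
  u <- (1 - exp(-lambda * x))^alpha
  alpha * lambda * (1 - exp(-lambda * x))^(alpha - 1) *
    exp(-lambda * x - u) / (exp(-u) - exp(-1))
}
htmgee(10, alpha = 0.5, lambda = 1)   # decreasing case: approx. 1 = lambda
htmgee(10, alpha = 3,   lambda = 2)   # increasing case: approx. 2 = lambda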

Fig. 2 The survival and hazard functions of the TMGEE distribution for some chosen values of \(\alpha\) and \(\lambda\)

Proposition 2

The function \(f_{TMGEE}(x;\alpha ,\lambda )\) is convex for \(0<\alpha \le 1\) and concave for \(\alpha >1.\)

Proposition 2 can be verified by differentiating the logarithm of the TMGEE probability density function with respect to x. The results indicate that \(f_{TMGEE}(x;\alpha ,\lambda )\) is decreasing for \(0<\alpha \le 1\) and unimodal for \(\alpha >1\). The modal value is obtained by solving Eq. 9.

$$\begin{aligned} \frac{d}{dx}f_{TMGEE}(x)= \frac{d}{dx}\frac{\alpha \lambda (1-e^{-\lambda x})^{\alpha -1} e^{- \lambda x-{{(1-e^{-\lambda x})}^{\alpha }}}}{1-e^{-1}} = 0 \end{aligned}$$
(9)

4 Quantiles of the TMGEE Distribution

To obtain the pth quantile value for the TMGEE distribution, where X is a random variable such that X \(\sim\) TMGEE(\(\alpha\),\(\lambda\)), follow these steps:

  i. Find the inverse function, \(F^{-1}(.)\), of the cumulative distribution function defined in Eq. 5.

  ii. Generate a random variable U, such that \(U \sim U(0,1)\).

  iii. Then, use the formula \(X_{p} = F^{-1}(U)\) to obtain the pth quantile value \(X_{p}\).

From Eq. 5, \(F_{TMGEE}(x)=\frac{e^{-{(1-e^{-\lambda x})}^{\alpha }}-1}{e^{-1}-1}\), and setting \(F(X_{p}) = p\) gives

$$\begin{aligned} \frac{e^{-{(1-e^{-\lambda x_{p}})}^{\alpha }}-1}{e^{-1}-1} = p. \end{aligned}$$

Consequently, the quantile function of the TMGEE distribution is given by Eq. 10 as follows:

$$\begin{aligned} X_{p} = -\frac{1}{\lambda }log\left( 1-\left[ -log\left( 1+p(e^{-1}-1)\right) \right] ^{\frac{1}{\alpha }}\right) \end{aligned}$$
(10)

The median is defined in Eq. 11 and is found by substituting \(p=0.5\) into Eq. 10.

$$\begin{aligned} Median = -\frac{1}{\lambda }log\left( 1-\left[ -log\left( 1+0.5(e^{-1}-1)\right) \right] ^{\frac{1}{\alpha }}\right) \end{aligned}$$
(11)

Equations 12 and 13 give the following definitions for the first and third quartiles, \(Q_{1}\) and \(Q_{3}\), respectively.

$$\begin{aligned} Q_{1}= & {} -\frac{1}{\lambda }log\left( 1-\left[ -log\left( 1+0.25(e^{-1}-1)\right) \right] ^{\frac{1}{\alpha }}\right) \end{aligned}$$
(12)
$$\begin{aligned} Q_{3}= & {} -\frac{1}{\lambda }log\left( 1-\left[ -log\left( 1+0.75(e^{-1}-1)\right) \right] ^{\frac{1}{\alpha }}\right) \end{aligned}$$
(13)
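As an illustration of Eqs. 10 through 13, the quantile function translates into a few lines of R. This is a sketch: the names qtmgee and rtmgee are our own, and rtmgee implements the inverse-transform sampler of steps i through iii above.

qtmgee <- function(p, alpha, lambda) {   # quantile function, Eq. 10
  -log(1 - (-log(1 + p * (exp(-1) - 1)))^(1 / alpha)) / lambda
}
rtmgee <- function(n, alpha, lambda) qtmgee(runif(n), alpha, lambda)
qtmgee(c(0.25, 0.50, 0.75), alpha = 1.5, lambda = 2)   # Q1, median, Q3 (Eqs. 11-13)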

5 Mean and Moments of the TMGEE Distribution

5.1 The Mean

If X is a random variable such that X \(\sim\) TMGEE(\(\alpha\),\(\lambda\)), then the mean, \(\mu = E(X)\), can be derived as follows:

$$\begin{aligned} \mu&= \int _{0}^{\infty } \frac{x \alpha \lambda (1-e^{-\lambda x})^{\alpha -1} e^{- \lambda x-{{(1-e^{-\lambda x})}^{\alpha }}}}{1-e^{-1}} dx\\&= \frac{\alpha \lambda }{1-e^{-1}} \int _{0}^{\infty } x(1-e^{-\lambda x})^{\alpha -1} e^{- \lambda x-{{(1-e^{-\lambda x})}^{\alpha }}} dx \end{aligned}$$

If \(y = 1-e^{-\lambda x}\), so that \(e^{-\lambda x} = 1-y\) and \(dy = \lambda (1-y)\,dx\), the mean can be written as

$$\begin{aligned} \mu&= \frac{\alpha \lambda }{1-e^{-1}} \int _{0}^{1} -\frac{1}{\lambda }log(1-y) e^{-{y^\alpha }} (1-y) y^{\alpha -1}\frac{1}{\lambda (1-y)}dy\\&= -\frac{\alpha }{\lambda (1-e^{-1})} \int _{0}^{1}log(1-y) e^{-{y^\alpha }} y^{\alpha -1}dy \end{aligned}$$

If \(z=y^{\alpha }\), the mean can be evaluated as follows:

$$\begin{aligned} \mu = -\frac{1}{\lambda (1-e^{-1})} \int _{0}^{1} log(1-z^{\frac{1}{\alpha }}) e^{-z}dz \end{aligned}$$

By using the series expansion of \(log(1-z^{\frac{1}{\alpha }}) = -\sum _{k=1}^{\infty }\frac{z^{\frac{k}{\alpha }}}{k}\), for \(|{z^{\frac{1}{\alpha }}}| < 1\),

$$\begin{aligned} \mu = \frac{1}{\lambda (1-e^{-1})} \int _{0}^{1} \sum _{k=1}^{\infty }\frac{z^{\frac{k}{\alpha }}}{k} e^{-z}dz = \frac{1}{\lambda (1-e^{-1})}\sum _{k=1}^{\infty }\frac{1}{k} \int _{0}^{1}{z^{\frac{k}{\alpha }}}e^{-z}dz \end{aligned}$$

Therefore, using the infinite series representation of the incomplete gamma integral, the mean of a random variable following the TMGEE distribution is defined by Eq. 14 as:

$$\begin{aligned} \mu =\frac{1}{\lambda (1-e^{-1})} \sum _{k=1}^{\infty }\frac{1}{k}\left( \Gamma \left( \frac{\alpha +k}{\alpha }\right) -\Gamma \left( \frac{\alpha +k}{\alpha },1 \right) \right) \end{aligned}$$
(14)

where the discussion of the upper incomplete gamma function \(\Gamma \left( .,.\right)\) can be found in the study by [15].
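Equation 14 can be checked numerically. The following R sketch truncates the series at 500 terms (an illustrative choice) and uses the identity \(\Gamma (s)-\Gamma (s,1)=\gamma (s,1)\), the lower incomplete gamma function, which R exposes through pgamma.

alpha <- 1.5; lambda <- 2                # illustrative parameter values
k <- 1:500                               # series truncated at 500 terms
s <- (alpha + k) / alpha
# Gamma(s) - Gamma(s, 1) is the lower incomplete gamma, computed stably as
# exp(lgamma(s) + pgamma(1, shape = s, log.p = TRUE)).
inc <- exp(lgamma(s) + pgamma(1, shape = s, log.p = TRUE))
mean_series <- sum(inc / k) / (lambda * (1 - exp(-1)))   # Eq. 14

dtmgee <- function(x, a, l)              # density, Eq. 6
  a * l * (1 - exp(-l * x))^(a - 1) *
    exp(-l * x - (1 - exp(-l * x))^a) / (1 - exp(-1))
mean_numeric <- integrate(function(x) x * dtmgee(x, alpha, lambda), 0, Inf)$value
c(mean_series, mean_numeric)             # agree to roughly three decimals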

5.2 Moments of the TMGEE Distribution

5.2.1 Moments

The rth raw moment, denoted by \(\mu _{r}'\), of the TMGEE random variable can be defined as follows:

$$\begin{aligned} \mu _{r}'&= \int _{0}^{\infty } x^r \frac{\alpha \lambda (1-e^{-\lambda x})^{\alpha -1} e^{- \lambda x-{{(1-e^{-\lambda x})}^{\alpha }}}}{1-e^{-1}} dx\\&= \frac{\alpha \lambda }{(1-e^{-1})} \int _{0}^{\infty } x^r (1-e^{-\lambda x})^{\alpha -1} e^{- \lambda x-{{(1-e^{-\lambda x})}^{\alpha }}} dx. \end{aligned}$$

For \(y = 1-e^{-\lambda x}\), \(e^{-\lambda x} = 1-y\) and \(dx = \frac{dy}{\lambda (1-y)}\), and noting that \(x^r = (-1)^r \lambda ^{-r} \left( log(1-y)\right) ^r\), \(\mu _{r}'\) can be written as

$$\begin{aligned} \mu _{r}' = \frac{(-1)^r \alpha }{\lambda ^r (1-e^{-1})} \int _{0}^{1}\left( log(1-y)\right) ^r e^{-{y^\alpha }} y^{\alpha -1}dy \end{aligned}$$

When \(z=y^{\alpha }\), \(\mu _{r}'\) can be expressed as

$$\begin{aligned} \mu _{r}' = \frac{(-1)^r}{\lambda ^r (1-e^{-1})} \int _{0}^{1} \left( log(1-z^{\frac{1}{\alpha }})\right) ^r e^{-z}dz \end{aligned}$$
(15)

For \(|z^{\frac{1}{\alpha }}| <1\), the rth moment can be simplified by substituting the series expansion

$$\begin{aligned} log(1-z^{\frac{1}{\alpha }}) = - \sum _{k=1}^{\infty }\frac{z^\frac{k}{\alpha }}{k} \end{aligned}$$

in Eq. 15 as:

$$\begin{aligned} \mu _{r}' = \frac{1}{\lambda ^r(1-e^{-1})} \int _{0}^{1}\left( \sum _{k=1}^{\infty }\frac{z^\frac{k}{\alpha }}{k}\right) ^{\!r} e^{-z} dz \end{aligned}$$

The \(r^{th}\) moment of the TMGEE distribution is obtained by expanding the summation in the equation just above and integrating the resulting terms using the incomplete gamma function; the final result is defined in Eq. 16.

$$\begin{aligned} \mu _{r}'=\frac{1}{\lambda ^r (1-e^{-1})} \sum _{k=1}^{\infty }\frac{1}{k^r}\left( \Gamma \left( \frac{\alpha +rk}{\alpha }\right) -\Gamma \left( \frac{\alpha +rk}{\alpha },1 \right) \right) \end{aligned}$$
(16)

The first moment defined in Eq. 16 is equivalent to the mean of the random variable following the TMGEE distribution given in Eq. 14.

5.2.2 Central Moments

The rth central moment of a random variable X is defined as follows.

$$\begin{aligned} \mu _{r} = E(X-\mu )^r = E\left( \sum _{j=0}^{r} \left( {\begin{array}{c}r\\ j\end{array}}\right) X^{r-j} (-\mu )^j\right) \end{aligned}$$

For a random variable that follows the TMGEE distribution, Eq. 17 can be utilized to formulate the rth central moment.

$$\begin{aligned} \mu _{r} = \sum _{j=0}^{r} (-1)^j \left( {\begin{array}{c}r\\ j\end{array}}\right) \mu _{r-j}' \mu ^{j}, \hspace{0.1cm} r\in \mathbb {N} \end{aligned}$$
(17)

The zeroth raw and central moments are both equal to one, i.e., \(\mu _{0}' = \mu _{0} =1\), and the first central moment \(\mu _{1}\) is zero.

The variance of the random variable X, following the TMGEE distribution, follows from Eq. 17 with \(r=2\) and is given by Eq. 18.

$$\begin{aligned} \sigma ^2 = \mu _{2}' -\mu ^2 \end{aligned}$$
(18)

where \(\mu\) is the mean defined in Eq. 14, and the second moment \(\mu _{2}'\) can be found from Eq. 16. The standard deviation is the square root of the variance defined in Eq. 18.

5.3 Skewness and Kurtosis of the TMGEE Distribution

To determine the coefficients of skewness and kurtosis for the random variable \(X\) \(\sim\) \(TMGEE(\alpha ,\lambda )\), we apply the methods of [16, 17]. Equations 19 and 20 define the results.

$$\begin{aligned} CS_{TMGEE}= & {} \frac{\mu _{3}}{\sigma ^3} \end{aligned}$$
(19)
$$\begin{aligned} CK_{TMGEE}= & {} \frac{\mu _{4}}{\sigma ^4} \end{aligned}$$
(20)

where \(CS_{TMGEE}\) and \(CK_{TMGEE}\) represent the coefficients of skewness and kurtosis for the TMGEE distribution, respectively. The standard deviation is denoted by \(\sigma\), and the third and fourth-order central moments are represented by \(\mu _{3}\) and \(\mu _{4}\), respectively.

To illustrate the behaviour of the Transformed MG-Extended Exponential probability distribution, the inverse transform algorithm based on Eq. 10, with specified values of \(\alpha\) and \(\lambda\), is used to generate eight random samples of size 100,000.

The summary statistics for each sample are shown in Table 1, including the mean, standard deviation (Sd), skewness, and kurtosis. The table also shows how, depending on the value of \(\alpha\), the new distribution departs from the base distribution. For instance, when \(\lambda = 2\), the mean and standard deviation of the base exponential distribution are both \(\frac{1}{\lambda }\) and equal to 0.5, whereas the mean and standard deviation of the simulated data are approximately 0.22 and 0.35, respectively.
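A brief sketch of how such summaries can be reproduced in R; the seed and parameter values below are illustrative assumptions, not necessarily the exact settings behind Table 1.

set.seed(123)
rtmgee <- function(n, a, l)              # inverse-transform sampler, Eq. 10
  -log(1 - (-log(1 + runif(n) * (exp(-1) - 1)))^(1 / a)) / l

x <- rtmgee(1e5, a = 1.5, l = 2)
m <- mean(x); s <- sd(x)
c(mean = m, Sd = s,
  skewness = mean((x - m)^3) / s^3,      # sample analogue of Eq. 19
  kurtosis = mean((x - m)^4) / s^4)      # sample analogue of Eq. 20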

Table 1 Summary statistics for some parameter settings for a random sample of size 100,000 taken from the TMGEE distribution

6 The Moment Generating Function of the TMGEE Distribution

The moment-generating function (MGF), \(M_{x}(t)\), of the TMGEE random variable X is derived as follows:

$$\begin{aligned} M_{x}(t)&= \int _{0}^{\infty } e^{tx} \frac{ \alpha \lambda (1-e^{-\lambda x})^{\alpha -1} e^{- \lambda x-{{(1-e^{-\lambda x})}^{\alpha }}}}{1-e^{-1}} dx\\&= \frac{\alpha \lambda }{(1-e^{-1})} \int _{0}^{\infty } e^{tx} (1-e^{-\lambda x})^{\alpha -1} e^{- \lambda x-{{(1-e^{-\lambda x})}^{\alpha }}} dx \end{aligned}$$

By setting \(u = 1-e^{-\lambda x}\), \(du = \lambda e^{-\lambda x} dx\), and \(e^{tx} = (1-u)^{-\frac{t}{\lambda }}\), the MGF can be written as:

$$\begin{aligned} M_{u}(t) = \frac{\alpha }{(1-e^{-1})} \int _{0}^{1} e^{-{u^\alpha }} (1-u)^{-\frac{t}{\lambda }} u^{\alpha -1}du \end{aligned}$$

For \(z = u^\alpha , u=z^{\frac{1}{\alpha }}\) and \(dz = \alpha u^{\alpha -1} du\), we have

$$\begin{aligned} M_{z}(t) = \frac{1}{(1-e^{-1})} \int _{0}^{1} e^{-z} (1-z^{\frac{1}{\alpha }})^{-\frac{t}{\lambda }} dz \end{aligned}$$

We use the following series expansion to formulate the moment-generating function:

$$\begin{aligned} (z+a)^{-n}&= \sum _{k=0}^{\infty } \left( {\begin{array}{c}-n\\ k\end{array}}\right) z^k a^{-n-k}\\&= \sum _{k=0}^{\infty } (-1)^k \left( {\begin{array}{c}n+k-1\\ k\end{array}}\right) z^k a^{-n-k} \end{aligned}$$

For \(|z^{\frac{1}{\alpha }}| <1\), expanding the factor \((1-z^{\frac{1}{\alpha }})^{-\frac{t}{\lambda }}\) gives:

$$\begin{aligned} (1-z^{\frac{1}{\alpha }})^{-\frac{t}{\lambda }}&=\sum _{k=0}^{\infty } (-1)^k \left( {\begin{array}{c}\frac{t}{\lambda }+k-1\\ k\end{array}}\right) (-z^\frac{1}{\alpha })^k\\ M_{z}(t)&= \frac{1}{(1-e^{-1})} \int _{0}^{1} e^{-z} \sum _{k=0}^{\infty } \left( {\begin{array}{c}\frac{t}{\lambda }+k-1\\ k\end{array}}\right) (z^\frac{1}{\alpha })^k dz\\&= \frac{1}{(1-e^{-1})} \sum _{k=0}^{\infty } \left( {\begin{array}{c}\frac{t}{\lambda }+k-1\\ k\end{array}}\right) \int _{0}^{1}e^{-z}z^\frac{k}{\alpha } dz \end{aligned}$$

Consequently, the MGF is defined by Eq. 21.

$$\begin{aligned} M_{x}(t)=\frac{1}{(1-e^{-1})} \sum _{k=0}^{\infty } \left( {\begin{array}{c}\frac{t}{\lambda }+k-1\\ k\end{array}}\right) \left( \Gamma \left( \frac{\alpha +k}{\alpha }\right) -\Gamma \left( \frac{\alpha +k}{\alpha },1\right) \right) \end{aligned}$$
(21)
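As a numerical check of Eq. 21, the truncated series can be compared with direct integration. The truncation point and parameter values below are illustrative assumptions; note that the MGF is finite for \(t < \lambda\), since the density's tail decays like \(e^{-\lambda x}\).

alpha <- 1.5; lambda <- 2; t0 <- 0.5     # t < lambda, so the MGF is finite
k <- 0:2000                              # illustrative truncation point
s <- (alpha + k) / alpha
# Generalized binomial coefficient C(t/lambda + k - 1, k) via log-gammas.
coef <- exp(lgamma(t0 / lambda + k) - lgamma(k + 1) - lgamma(t0 / lambda))
inc  <- exp(lgamma(s) + pgamma(1, shape = s, log.p = TRUE))  # lower incomplete gamma
mgf_series <- sum(coef * inc) / (1 - exp(-1))                # Eq. 21

dtmgee <- function(x, a, l)              # density, Eq. 6
  a * l * (1 - exp(-l * x))^(a - 1) *
    exp(-l * x - (1 - exp(-l * x))^a) / (1 - exp(-1))
mgf_numeric <- integrate(function(x) exp(t0 * x) * dtmgee(x, alpha, lambda),
                         0, Inf)$value
c(mgf_series, mgf_numeric)               # should agree approximately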

7 The Order Statistics of the TMGEE Distribution

If \(X_{1}, X_{2},..., X_{n}\) are samples drawn at random from the TMGEE distribution and \(X_{(1)}, X_{(2)},..., X_{(n)}\) are the order statistics, then Eq. 22 defines the probability density function of the ith order statistic \(X_{i:n}\) as follows:

$$\begin{aligned} f_{i:n}(x)=\frac{n!}{(i-1)!(n-i)!} f(x) \left( F(x)\right) ^{i-1} \left( 1-F(x)\right) ^{n-i} \end{aligned}$$
(22)

By substituting the density function f(x) and the cumulative distribution function F(x) of the random variable X into Eq. 22, the density of the ith order statistic, \(f_{i:n}(x)\), can be written as

$$\begin{aligned} f_{i:n}(x)= & {} \frac{n!}{(i-1)!(n-i)!} \frac{\alpha \lambda \left( 1-e^{-\lambda x}\right) ^{\alpha -1} e^{- \lambda x-{{(1-e^{-\lambda x})}^{\alpha }}}}{1-e^{-1}} \left( \frac{e^{-{(1-e^{-\lambda x})}^{\alpha }}-1}{e^{-1}-1}\right) ^{i-1}\\{} & {} \times \left( \frac{e^{-1}-e^{-{(1-e^{-\lambda x})}^{\alpha }}}{e^{-1}-1}\right) ^{n-i} \end{aligned}$$

Consequently, Eq. 23 defines the density of the ith order statistic of the TMGEE distribution.

$$\begin{aligned} f_{i:n}(x)&=\frac{-\alpha \lambda \, n!}{\left( e^{-1}-1\right) ^n (i-1)!(n-i)!} \left( 1-e^{-\lambda x}\right) ^{\alpha -1} e^{- \lambda x-{{\left( 1-e^{-\lambda x}\right) }^{\alpha }}} \left( e^{-{(1-e^{-\lambda x})}^{\alpha }}-1\right) ^{i-1}\nonumber \\&\quad \times \left( e^{-1}-e^{-{(1-e^{-\lambda x})}^{\alpha }}\right) ^{n-i} \end{aligned}$$
(23)

Equations 24 and 25 define the densities of the first order statistic, \(f_{1:n}(x)\), and the nth order statistic, \(f_{n:n}(x)\), respectively.

$$\begin{aligned} f_{1:n}(x)= & {} \frac{-\alpha \lambda n}{\left( e^{-1}-1\right) ^n}(1-e^{-\lambda x})^{\alpha -1} e^{- \lambda x-{{\left( 1-e^{-\lambda x}\right) }^{\alpha }}} \left( e^{-1}-e^{-{(1-e^{-\lambda x})}^{\alpha }}\right) ^{n-1} \end{aligned}$$
(24)
$$\begin{aligned} f_{n:n}(x)= & {} \frac{-\alpha \lambda n}{\left( e^{-1}-1\right) ^n}\left( 1-e^{-\lambda x}\right) ^{\alpha -1} e^{- \lambda x-{{\left( 1-e^{-\lambda x}\right) }^{\alpha }}} \left( e^{-{(1-e^{-\lambda x})}^{\alpha }}-1\right) ^{n-1} \end{aligned}$$
(25)
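Equation 24 can be checked by simulation: a histogram of simulated minima should match \(f_{1:n}\). The following R sketch uses illustrative parameter values of our own choosing.

set.seed(1)
alpha <- 1.5; lambda <- 2; n <- 5        # illustrative choices
rtmgee <- function(m, a, l)              # inverse-transform sampler, Eq. 10
  -log(1 - (-log(1 + runif(m) * (exp(-1) - 1)))^(1 / a)) / l
mins <- replicate(1e4, min(rtmgee(n, alpha, lambda)))   # simulated minima

f1n <- function(x, a, l, n) {            # density of the minimum, Eq. 24
  u <- (1 - exp(-l * x))^a
  -a * l * n / (exp(-1) - 1)^n * (1 - exp(-l * x))^(a - 1) *
    exp(-l * x - u) * (exp(-1) - exp(-u))^(n - 1)
}
hist(mins, breaks = 50, freq = FALSE)
curve(f1n(x, alpha, lambda, n), from = 0, to = max(mins), add = TRUE)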

8 Parameter Estimation: The Maximum Likelihood Method

Let \(X_{1}, X_{2}, X_{3},..., X_{n}\) be independent and identically distributed (iid) random samples of size n from the TMGEE distribution, with respective realization \(\varvec{x} = (x_{1}, x_{2},x_{3},...,x_{n})\). We estimate the distribution's parameters using the maximum likelihood estimation approach.

The probability density function \(f_{TMGEE}(\varvec{x};\alpha ,\lambda )\) can be used to express the likelihood function, where \(\alpha\) and \(\lambda\) are unknown parameters.

$$\begin{aligned} L(x_{1}, x_{2},x_{3},...,x_{n}|\alpha ,\lambda ) = \prod _{i=1}^{n} \frac{\alpha \lambda \left( 1-e^{-\lambda x_{i}}\right) ^{\alpha -1} e^{- \lambda x_{i}-{{\left( 1-e^{-\lambda x_{i}}\right) }^{\alpha }}}}{1-e^{-1}} \end{aligned}$$

Equation 26 defines the likelihood function of the distribution's parameters.

$$\begin{aligned} L(\alpha ,\lambda | \varvec{x}) = \left( \frac{\alpha \lambda }{1-e^{-1}}\right) ^n e^{-\lambda \sum _{i=1}^{n} x_{i}-\sum _{i=1}^{n} {{\left( 1-e^{-\lambda x_{i}}\right) }^{\alpha }}} \prod _{i=1}^{n} \left( 1-e^{-\lambda x_{i}}\right) ^{\alpha -1} \end{aligned}$$
(26)

The log-likelihood function, which is defined by Eq. 27, is obtained by utilizing Eq. 26.

$$\begin{aligned} l(\alpha ,\lambda | \varvec{x})&= n\left( log\alpha + log\lambda -log(1-e^{-1})\right) -\lambda \sum _{i=1}^{n}x_{i}-\sum _{i=1}^{n} {{(1-e^{-\lambda x_{i}})}^{\alpha }} \nonumber \\&\quad +(\alpha -1)\sum _{i=1}^{n}log(1-e^{-\lambda x_{i}}) \end{aligned}$$
(27)

We obtain the maximum likelihood estimates (MLEs) by differentiating the log-likelihood function with respect to the parameters \(\alpha\) and \(\lambda\) and equating the results to zero, which gives the score equations in Eqs. 28 and 29.

$$\begin{aligned} \frac{\partial {l(\alpha ,\lambda |\varvec{x})}}{\partial {\alpha }}= & {} \frac{n}{\alpha }-\sum _{i=1}^{n} \left( {{(1-e^{-\lambda x_{i}})}^{\alpha }}\hspace{0.05 cm} log(1-e^{-\lambda x_{i}})\right) +\sum _{i=1}^{n}log(1-e^{-\lambda x_{i}})=0 \end{aligned}$$
(28)
$$\begin{aligned} \frac{\partial {l(\alpha ,\lambda |\varvec{x})}}{\partial {\lambda }}= & {} \frac{n}{\lambda }-\sum _{i=1}^{n} x_{i}-\alpha \sum _{i=1}^{n} \left( x_{i} e^{-\lambda x_{i}} {{(1-e^{-\lambda x_{i}})}^{\alpha -1}}\right) +(\alpha -1)\sum _{i=1}^{n}\left( \frac{x_{i}e^{-\lambda x_{i}}}{1-e^{-\lambda x{i}}}\right) =0 \end{aligned}$$
(29)

Equations 30 and 31 give the second derivatives of the log-likelihood function specified in Eq. 27 with respect to \(\alpha\) and \(\lambda\), respectively.

$$\begin{aligned} \frac{\partial ^{2}{l(\alpha ,\lambda |\varvec{x})}}{\partial {\alpha ^{2}}}= & {} -\frac{n}{\alpha ^2}-\sum _{i=1}^{n} \left( {{(1-e^{-\lambda x_{i}})}^{\alpha }} (log(1-e^{-\lambda x_{i}}))^2\right) \end{aligned}$$
(30)
$$\begin{aligned} \frac{\partial ^{2}{l(\alpha ,\lambda |\varvec{x})}}{\partial {\lambda ^{2}}}= & {} -\frac{n}{\lambda ^2}-(\alpha -1)\sum _{i=1}^{n}\frac{x_{i}^2 e^{-\lambda x_{i}}}{(1-e^{-\lambda x_{i}})^2}+\alpha \sum _{i=1}^{n} x_{i}^2 e^{-\lambda x_{i}} (1-e^{-\lambda x_{i}})^{\alpha -2}\left( 1-\alpha e^{-\lambda x_{i}}\right) \end{aligned}$$
(31)

Since the score equations defined in Eqs. 28 and 29 cannot be solved in closed form, numerical methods can be employed to estimate the parameters. Equation 32 defines the observed Fisher information matrix of the random variable X.

$$\begin{aligned} I(\hat{\theta })= \begin{pmatrix} -\frac{\partial ^2 {l(\alpha ,\lambda |\varvec{x})}}{\partial {\alpha ^2}} &{} -\frac{\partial ^2 {l(\alpha ,\lambda |\varvec{x})}}{\partial {\alpha }\partial {\lambda }} \\ - \frac{\partial ^2{l(\alpha ,\lambda |\varvec{x})}}{\partial {\lambda }\partial {\alpha }} &{} -\frac{\partial ^2{l(\alpha ,\lambda |\varvec{x})}}{\partial {\lambda ^2}} \end{pmatrix}_{\alpha = \hat{\alpha }, \lambda = \hat{\lambda }} \end{aligned}$$
(32)

The estimated variances of the parameters, which appear on the diagonal of the variance-covariance matrix defined in Eq. 33, are obtained by inverting the Fisher information matrix defined in Eq. 32. Taking the square roots of the estimated variances yields the estimated standard errors \(\hat{\sigma }(\hat{\alpha })\) and \(\hat{\sigma }(\hat{\lambda })\), while the off-diagonal elements represent the estimated covariances between the parameter estimates.

$$\begin{aligned} {[}I(\hat{\theta })]^{-1} = \begin{pmatrix} \hat{\sigma }^2(\hat{\alpha }) &{} \hat{\sigma }(\hat{\alpha },\hat{\lambda }) \\ \hat{\sigma }(\hat{\lambda },\hat{\alpha }) &{} \hat{\sigma }^2(\hat{\lambda }) \end{pmatrix} \end{aligned}$$
(33)
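A minimal sketch of this estimation workflow (Eqs. 26 through 34) in R, using optim(); the simulated dataset and starting values are illustrative assumptions, not the paper's settings.

nll <- function(par, x) {                # negative of the log-likelihood, Eq. 27
  a <- par[1]; l <- par[2]
  if (a <= 0 || l <= 0) return(Inf)
  -sum(log(a * l / (1 - exp(-1))) + (a - 1) * log(1 - exp(-l * x)) -
         l * x - (1 - exp(-l * x))^a)
}
set.seed(42)
x <- -log(1 - (-log(1 + runif(200) * (exp(-1) - 1)))^(1 / 1.5)) / 2  # TMGEE(1.5, 2) sample

fit <- optim(c(1, 1), nll, x = x, hessian = TRUE)
mle <- fit$par                           # (alpha-hat, lambda-hat)
vc  <- solve(fit$hessian)                # inverse observed information, Eq. 33
se  <- sqrt(diag(vc))                    # estimated standard errors
cbind(estimate = mle, lower = mle - 1.96 * se, upper = mle + 1.96 * se)  # Eq. 34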

8.1 Asymptotic Distribution of MLEs

The MLEs are random variables that depend on the sample size. [16] and [18] addressed the asymptotic distribution of MLEs and the properties of the estimators. The ML estimates are consistent: they converge in probability to the true values. By the general asymptotic theory of MLEs, the sampling distributions of \((\hat{\alpha }-\alpha )/ \sqrt{\hat{\sigma }^2(\hat{\alpha })}\) and \((\hat{\lambda }-\lambda )/ \sqrt{\hat{\sigma }^2(\hat{\lambda })}\) are approximately standard normal. The asymptotic confidence intervals of the parameter estimates \(\hat{\alpha }\) and \(\hat{\lambda }\) can be determined using Eq. 34.

$$\begin{aligned} \hat{\alpha } \pm z_{t/2} \sqrt{\hat{\sigma }^2(\hat{\alpha })} \hspace{0.2 cm}and \hspace{0.2 cm} \hat{\lambda } \pm z_{t/2} \sqrt{\hat{\sigma }^2(\hat{\lambda })} \end{aligned}$$
(34)

where \(\sqrt{\hat{\sigma }^2(\hat{\alpha })}\) and \(\sqrt{\hat{\sigma }^2(\hat{\lambda })}\) are, respectively, the estimated standard errors of the estimates \(\hat{\alpha }\) and \(\hat{\lambda }\), and \(z_{t/2}\) is the upper \(100(t/2)\) percentage point of the standard normal distribution.

9 Simulation Studies

9.1 Rejection Sampling Method

In our first simulation study, we used the rejection sampling method, also known as acceptance/rejection sampling, to generate samples from a target distribution with density function f(x). This method allows us to simulate a random variable, X, without directly sampling from its distribution. We achieve this by using two independent random variables, U and X, where U is a uniform random variable between 0 and 1, and X is a random variable with proposal density g(x). The method requires that f(x) be less than or equal to c times g(x) for every value of x, where g(x) is an arbitrary proposal density and c is a finite constant.

According to [19], the rejection sampling technique is versatile enough to derive values from g(x) even without complete information about the specification of f(x). The rejection sampling algorithm for generating a random variable X that follows the probability density function f(x) proceeds as follows.

  1. Generate a candidate X randomly from a distribution g(x).

  2. Compute the acceptance probability \(\alpha = \frac{1}{c} \cdot \frac{f(X)}{g(X)}\), where c is a constant such that \(cg(x) \ge f(x)\) for all x.

  3. Generate a random number u from a uniform distribution on the interval (0, 1).

  4. If \(u < \alpha\), accept X as a sample from f(x) and return X.

  5. If \(u \ge \alpha\), reject X and go back to step 1.

By repeating steps 1–5, we obtain a set of samples distributed according to the probability density function f(x); a sketch in R follows.
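The following is a vectorized sketch of steps 1–5 for the TMGEE(1.5, 1.5) target with an exponential proposal. The envelope constant c is found numerically here, which is an assumption of this illustration; the paper's own code is in Appendices A and B.

set.seed(7)
dtmgee <- function(x, a, l)              # target density, Eq. 6
  a * l * (1 - exp(-l * x))^(a - 1) *
    exp(-l * x - (1 - exp(-l * x))^a) / (1 - exp(-1))
a <- 1.5; l <- 1.5; rate <- 0.8          # target TMGEE(1.5, 1.5), proposal Exp(0.8)
ratio <- function(x) dtmgee(x, a, l) / dexp(x, rate)
c0 <- optimize(ratio, c(0, 20), maximum = TRUE)$objective  # envelope constant c

n <- 1e4
cand <- rexp(n, rate)                    # step 1: candidates from g
u <- runif(n)                            # step 3: uniform draws
accepted <- cand[u < ratio(cand) / c0]   # steps 2, 4, and 5 in batch form
length(accepted) / n                     # empirical acceptance rate, about 1/c0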

We used both the rejection sampling method and the inverse transform Monte Carlo method to generate random samples of a variable X following the TMGEE distribution.

To illustrate this, we simulated random samples for two probability density functions, \(f_{TMGEE}(1.5,1.5)\) and \(f_{TMGEE}(5,3)\), using both of the aforementioned methods. For the first distribution, we used the exponential distribution \(f_{exp}(0.8)\) as a proposal density, while for the second distribution, we employed the Weibull distribution \(f_{weib}(1.5,0.8)\). In each simulation, we generated 10,000 candidate samples. The R codes can be found in Appendices A and B.

To sample from \(f_{TMGEE}(1.5,1.5)\), we used the rejection sampling method and obtained 10,000 observations from the \(f_{exp}(0.8)\) distribution. Out of these samples, 7182 were accepted as draws from \(f_{TMGEE}(1.5,1.5)\), which resulted in an acceptance rate of 71.82%. The histogram in Fig. 3a shows the accepted draws from the rejection sampling method, while the density plot represents the target distribution.

Similarly, we used the rejection sampling method to obtain 10,000 observations from the \(f_{weib}(1.5,0.8)\) distribution. Out of the 10,000 samples, 6679 were accepted as draws from \(f_{TMGEE}(5,3)\), resulting in an acceptance rate of 66.79%. The accepted draws are displayed in the histogram in Fig. 3b, and the density plot in the figure represents the target distribution.

In both cases, the accepted draws from the rejection sampler show close agreement with the target TMGEE density obtained by inverse transform sampling.

Fig. 3 a Histogram created with accepted draws of rejection sampling using an exponential proposal density and density plot of the TMGEE(1.5, 1.5) distribution using inverse transform sampling. b Histogram created with accepted draws of rejection sampling using a Weibull proposal density and density plot of the TMGEE(5, 3) distribution using inverse transform sampling

9.2 Inverse Transform-Based Sampling Method

To examine the MLEs \(\hat{\theta } \in (\hat{\alpha }, \hat{\lambda })\) of \(\theta\), we conducted a Monte Carlo simulation study utilizing the quantile function of the TMGEE distribution defined by Eq. 10. We examined the precision of the estimators using bias and mean squared error (MSE) and observed their characteristics. We generated samples of sizes 50, 100, 150, 300, and 1000 using the quantile function of the TMGEE distribution and performed R = 1,000 simulations. Following that, we computed the MLEs and their bias, MSE, and variance. To simulate and estimate, we followed these steps:

  1. Define the likelihood function of the model parameters.

  2. Obtain the MLEs by minimizing the negative log-likelihood function using the optim approach of [20].

  3. Repeat the estimation for the R simulations.

  4. Compute the bias, MSE, and variance of the estimates.

To run the simulations, we assume that the distribution's \(\alpha\) values are 0.5, 2, 3, 5, and 8 and its \(\lambda\) values are 0.5, 3, 5, and 10, giving 20 different parameter combinations for each of the generated sample sizes. Bias and MSE are calculated using Eqs. 35 and 36, respectively.

$$\begin{aligned} Bias(\hat{\theta })= & {} \frac{1}{R} \sum _{i=1}^{R}(\hat{\theta }_{i}-\theta ) \end{aligned}$$
(35)
$$\begin{aligned} MSE(\hat{\theta })= & {} \frac{1}{R} \sum _{i=1}^{R}(\hat{\theta }_{i}-\theta )^2 \end{aligned}$$
(36)

where \(\theta \in (\alpha ,\lambda )\) and \(MSE(\hat{\theta }) = (Bias(\hat{\theta }))^2+Var(\hat{\theta })\).
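A condensed R sketch of this simulation loop; the sample size, replication count, and starting values are illustrative choices, smaller than those used in the study.

set.seed(2024)
alpha0 <- 2; lambda0 <- 3; n <- 100; R <- 200   # smaller than the study's settings

rtmgee <- function(m, a, l)              # inverse-transform sampler, Eq. 10
  -log(1 - (-log(1 + runif(m) * (exp(-1) - 1)))^(1 / a)) / l
nll <- function(par, x) {                # negative log-likelihood, Eq. 27
  a <- par[1]; l <- par[2]
  if (a <= 0 || l <= 0) return(Inf)
  -sum(log(a * l / (1 - exp(-1))) + (a - 1) * log(1 - exp(-l * x)) -
         l * x - (1 - exp(-l * x))^a)
}

est <- t(replicate(R, optim(c(1, 1), nll, x = rtmgee(n, alpha0, lambda0))$par))
err <- sweep(est, 2, c(alpha0, lambda0))        # theta-hat minus theta
rbind(bias = colMeans(err),                     # Eq. 35
      mse  = colMeans(err^2))                   # Eq. 36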

The Monte Carlo simulation was conducted for each parameter setting to estimate the bias, MSE, and variance of the parameter estimates. The results are presented in Tables 2, 3, 4, 5, and 6.

As part of our analysis, we ran R = 10,000 simulations, generating samples of size n = 1,000 in each. This helped us evaluate the asymptotic distribution of the TMGEE distribution's parameter estimates. To do this, we utilized the inverse transform method with the parameter values \(\alpha =1.5\), \(\lambda =2\) and \(\alpha =5\), \(\lambda =3\). We present the outcomes of the respective estimates in Figs. 4 and 5.

Based on the simulation studies carried out using varying sample sizes, small (n = 50) to large (n = 1000), we can draw the following conclusions: as the sample size increases, the MSE and variance of the parameter estimates decrease, and the estimates converge towards their true values. This indicates that the MLEs of the TMGEE parameters are unbiased and consistent. Upon examining the histograms in Figs. 4 and 5 and considering the large-sample properties of the MLEs, it has been established that the asymptotic distribution of the MLEs of the TMGEE distribution parameters is normal. Furthermore, the simulation study revealed a positive correlation between the MLEs of the TMGEE distribution's parameters for some parameter settings (refer to Fig. 6).

Fig. 4 The results of 10,000 simulations with samples of size 1,000 each, with corresponding histogram and density plots for \(\alpha = 1.5\) in a and for \(\lambda = 2\) in b

Fig. 5 The results of 10,000 simulations with samples of size 1,000 each, with corresponding histogram and density plots for \(\alpha = 5\) in a and for \(\lambda = 3\) in b

Fig. 6 The estimated alpha, estimated lambda, and Pearson correlations for different parameter settings from 1,000 simulations with samples of size 1,000 each

Table 2 Monte Carlo simulation results of bias, MSE, and variances for \(n = 50\)
Table 3 Monte Carlo simulation results of bias, MSE, and variances for \(n = 100\)
Table 4 Monte Carlo simulation results of bias, MSE, and variances for \(n = 150\)
Table 5 Monte Carlo simulation results of bias, MSE, and variances for \(n = 300\)
Table 6 Monte Carlo simulation results of bias, MSE, and variances for \(n = 1,000\)

10 Applications

In this section, we compare the TMGEE distribution with various probability distributions, including the exponential (Exp) distribution, the Weibull (WE) distribution, the lognormal (LN) distribution, and the alpha power exponential (APE) and alpha power Weibull (APW) distributions studied by [8]. We also compare the exponentiated Weibull distribution (EW) introduced by [21], the exponentiated Kumaraswamy G family distribution where the baseline distribution is exponential (EKG-E), as studied by [11], and the Kumaraswamy G family distributions (KG-W and KG-G) researched by [7], where the baseline distributions are Weibull and gamma, respectively. We define the probability density functions and cumulative distribution functions of some of these distributions below.

  • Exponentiated Weibull distribution(EW)

\(f(x;\beta , c,\alpha ) = \alpha \beta c (\beta x)^{c-1} (1-e^{-(\beta x)^c})^{\alpha -1}e^{-(\beta x)^c}\)

\(F(x;\beta , c,\alpha ) = (1-e^{-(\beta x)^c})^{\alpha }, x>0, c>0, \alpha >0\) and \(\beta >0\)

  • Alpha power exponential distribution(APE)

\(f(x;\alpha ,\lambda ) = \frac{log(\alpha )\lambda e^{-\lambda x} \alpha ^{1-e^{-\lambda x}}}{\alpha -1}\)

\(F(x,\alpha ,\lambda ) = \frac{\alpha ^{1-e^{-\lambda x}}-1}{\alpha -1}, x>0, \alpha >0, \alpha \ne 1\) and \(\lambda >0\)

  • The alpha power Weibull distribution (APW)

\(f(x;\alpha ,\lambda , \beta ) = \frac{log(\alpha )\lambda \beta e^{-\lambda x^\beta } x^{\beta -1} \alpha ^{1-e^{-\lambda x^\beta }}}{\alpha -1}\)

\(F(x;\alpha ,\lambda ,\beta ) = \frac{1-\alpha ^{1-e^{-\lambda x^\beta }}}{1-\alpha },x>0, \alpha >0, \alpha \ne 1\) and \(\lambda>0,\beta >0\)

  • Exponentiated Kumaraswamy G family distributions (EKG-E)

\(f(x;a,b,c) = abcg(x)G(x)^{a-1} (1-G(x)^a)^{b-1}(1-(1-G(x)^a)^b)^{c-1}\)

\(F(x;a,b,c) = (1-(1-G(x)^a)^b)^c, x>0, a>0, b>0,c>0\)

where g(x) and G(x) are, respectively, the pdf and CDF of the exponential distribution.

  • Kumaraswamy G family distributions (KG-W and KG-G)

\(f(x;a,b) = abg(x)G(x)^{a-1} (1-G(x)^a)^{b-1}\)

\(F(x;a,b) = 1-(1-G(x)^a)^b, x>0, a>0, b>0\)

where the Weibull and gamma distributions’ respective pdf and CDF are denoted as g(x) and G(x).

Three real datasets were used to illustrate the fitting of distributions. One of the datasets used in this study was obtained from [22]. The dataset consists of the frailty term assessed in a study of the recurrence time of infections in 38 patients receiving kidney dialysis, and it includes 76 observations. [23] explained that each person has a distinct level of frailty, or unobserved heterogeneity term, which determines their risk of death in proportional hazards models. Another dataset was obtained from the systematic review and meta-analysis conducted by [24]; it shows overall mortality rates among people who inject drugs. The third dataset consists of 128 individuals with bladder cancer and shows the duration of each patient's remission in months; it was used by [25] to compare the fits of the five-parameter beta-exponentiated Pareto distribution. Tables 7, 8, and 9 display these datasets.

Table 7 Kidney infection: frailty data
Table 8 Mortality rates data
Table 9 Bladder cancer patients’ data
Fig. 7 The empirical and density plots for the data sets shown in Tables 7, 8, and 9

MLEs and model fitting statistics were obtained by fitting the models with numerical techniques adapted from [26] and [17]. We provide the MLEs and standard errors of the model parameters for the three datasets in Tables 10, 11, and 12. To compare the fit of the models, we used various information criteria: the Akaike information criterion (AIC), the corrected Akaike information criterion (AICc), the Bayesian information criterion (BIC), the Hannan-Quinn information criterion (HQIC), and the Kolmogorov-Smirnov test statistic (K-S) with its p-value. These criteria, as explained in [27, 28], are useful for comparing model fits in various applications. However, AIC can be vulnerable to overfitting when sample sizes are small; to correct this, a second-order bias-correction term is added to AIC to obtain AICc, which performs better in small samples. The bias-correction term raises the penalty on the number of parameters compared to AIC; as the sample size increases, the term asymptotically approaches zero and AICc approaches AIC. In large samples, the HQIC penalizes complex models less than the BIC. However, individual AIC and AICc values are not interpretable [27], so they need to be rescaled to \(\Delta _i\), which is defined as follows.

Let \(\Delta _i\) denote the difference between \(AIC_i\) and \(AIC_{min}\), where \(AIC_{min}\) is the minimum AIC value among all models. According to [27], the best model has \(\Delta _i=0\). A smaller value of an information criterion indicates a better fit, regardless of the specific criterion used. Equations 37 to 40 define the model fitting statistics, where k is the number of parameters to be estimated and \(l(\hat{\theta })\) is the maximized log-likelihood.

$$\begin{aligned} AIC= & {} -2l(\hat{\theta })+2k \end{aligned}$$
(37)
$$\begin{aligned} AICc= & {} AIC+\frac{2k(k+1)}{n-k-1} \end{aligned}$$
(38)
$$\begin{aligned} BIC= & {} -2l(\hat{\theta })+k\,log(n) \end{aligned}$$
(39)
$$\begin{aligned} HQIC= & {} -2l(\hat{\theta })+2k\,log(log(n)) \end{aligned}$$
(40)
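Given the maximized log-likelihood, the number of parameters k, and the sample size n, Eqs. 37 through 40 are straightforward to compute. A small R helper, as a sketch; the example values in the final comment are hypothetical.

fit_stats <- function(ll, k, n) {
  aic <- -2 * ll + 2 * k                        # Eq. 37
  c(AIC  = aic,
    AICc = aic + 2 * k * (k + 1) / (n - k - 1), # Eq. 38
    BIC  = -2 * ll + k * log(n),                # Eq. 39
    HQIC = -2 * ll + 2 * k * log(log(n)))       # Eq. 40
}
# e.g., fit_stats(ll = -150.3, k = 2, n = 76)   # hypothetical values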

The values of the model-fitting statistics are summarized in Tables 13, 14, and 15. Goodness-of-fit plots were produced for the three datasets (shown in Tables 7, 8, and 9) using selected distributions (TMGEE, exponential, Weibull, and lognormal) with methods proposed by [26]. These plots are shown in Figs. 7, 8, and 9. Our analysis shows that the newly proposed TMGEE probability distribution provides a better fit than the previously considered distributions when applied to the three datasets.

Table 10 ML estimates and the associated standard errors (within parenthesis) for kidney infection: frailty data
Table 11 MLEs and associated standard errors (within parenthesis) of mortality rates data
Table 12 MLEs and the associated standard errors (within parenthesis) for bladder cancer patients’ data
Table 13 Model fitting statistics for kidney infection: frailty data
Table 14 Model fitting statistics for mortality rates data
Table 15 Model fitting statistics for bladder cancer patients’ data
Fig. 8 Q–Q plots (TMGEE, exponential, Weibull, and lognormal distributions)

Fig. 9 P–P plots (TMGEE, exponential, Weibull, and lognormal distributions)

11 Conclusions

In this research, a new probability distribution called the Transformed MG-Extended Exponential (TMGEE) distribution has been developed from the exponential probability distribution. The distribution's features have been explored and derived in detail, and the maximum likelihood approach has been used to estimate the parameters, yielding unbiased and consistent estimates. Two simulation experiments have been conducted using the rejection sampling and inverse-transform sampling techniques. The usefulness of the new distribution has been evaluated using three different real datasets.

To evaluate the maximum likelihood estimates, we have used different statistical tools such as the Kolmogorov-Smirnov test statistic, the Hannan-Quinn information criteria, the corrected Akaike information criteria, the Bayesian information criteria, and the Akaike information criteria. The newly introduced TMGEE distribution has been found to fit the three sets of data far better than some of the most commonly used probability distributions, including the Weibull, exponential, and lognormal distributions.

We recommend further studies on this probability distribution in statistical theory, such as the Bayesian parameter estimation method and its application to other datasets involving lifetime and time-to-event processes.