1 Introduction

A standard technique for deriving a new, flexible distribution is to add parameters to an existing one [1]. Generalizations produced by routine statistical manipulation often add little, so a newly proposed distribution should demonstrate authentic, valid applications relative to existing distributions [2]. The literature also suggests that the development of moments plays an important role in the generalization of probability distributions [3]. Some important distributions commonly used in reliability theory and survival analysis are the Exponential, Weibull and Gamma [4,5,6]. Cox and Oakes [7] present the importance and applications of these families. Koleoso [8] studied the Odd-Lomax Dagum distribution and explored several of its properties. Aslam et al. [9] presented a Bayesian version of the exponentiated Gompertz distribution using informative priors. Faryal et al. [10] discussed the Lomax distribution under different priors. In applications, these distributions have limited flexibility and cannot handle all situations. For instance, even though the exponential distribution is often described as flexible, its hazard function is restricted to being constant. The limitations of standard distributions often motivate researchers to construct new distributions by extending existing ones. Expanding a family of distributions for added flexibility, or constructing covariate models, is a well-known technique in the literature [10,11,12]. For instance, the family of Weibull distributions contains the exponential distribution and is constructed by taking powers of exponentially distributed random variables [13]. Marshall and Olkin [14] introduced a new method of adding a parameter to a family of distributions; Marshall-Olkin extended distributions offer a wider range of behavior than the basic distributions from which they are derived.
Because the extended distributions can have an interesting hazard function, depending on the value of the added parameter α, and can therefore model real situations better than the basic distribution, the Marshall-Olkin extended family of distributions has been studied in detail by many researchers [15,16,17,18].

In the 1970s, Camilo Dagum [19] embarked on a quest for a statistical distribution that fits empirical income and wealth distributions well. He was unsatisfied with the classical distributions used to summarize such data: the Pareto distribution [20] and the lognormal distribution [21]. The upper tail is captured well by the Pareto but not by the lognormal distribution; the rest of the distribution is captured by the lognormal but not the Pareto. Experimenting with a shifted log-logistic distribution, a generalization previously considered by Fisk [22], he quickly realized that a further parameter was needed. This led to the Dagum type I distribution, a three-parameter distribution, and to two four-parameter generalizations [23, 27].

Theorem 1

If \(X \sim {\text{MOED}}\left( { \propto , \theta , \beta ,\gamma ,a} \right)\), then the CDF and PDF of the MOED distribution are:

$$ F\left( x \right) = \frac{{1 - \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } }}{{1 - \left( {1 - \propto } \right)\left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } }} x > 0 , a > 0, \theta > 0, \beta > 0 ,\gamma > 0, \propto > 0 $$
(1)
$$ f\left( x \right) = \frac{{ \propto \gamma a\theta \beta x^{ - \theta - 1} (1 + ax^{ - \theta } )^{ - \beta - 1} \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma - 1} }}{{\left( {1 - \left( {1 - \propto } \right)\left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } } \right)^{2} }}\;\;x > 0 , a > 0, \theta > 0, \beta > 0 ,\gamma > 0, \propto > 0 $$
(2)

where “\(\propto\)”, “\(\theta\)”, “\(\beta\)” and “\(\gamma\)” are the shape parameters and “a” is the scale parameter.

Proof

The cumulative distribution function (cdf) of the Marshall-Olkin family of distributions [18] is

$$ F\left( x \right) = \frac{G\left( x \right)}{{1 - \left( {1 - \propto } \right)\overline{G}\left( x \right)}} - \infty < x < \infty $$
(3)

where G(x) is the cdf of any continuous distribution and \(\propto\) is the shape parameter [24].

The corresponding probability density function (pdf) of the Marshall-Olkin family of distributions is

$$ f\left( x \right) = \frac{ \propto g\left( x \right)}{{\left( {1 - \left( {1 - \propto } \right)\overline{G}\left( x \right)} \right)^{2} }}\;\;\; - \infty < x < \infty ,\;\;0 < \propto < \infty $$
(4)

where g(x) is the pdf of a continuous type distribution and \(\overline{G}\left( x \right)\) = 1-\(G\left( x \right)\) is the survival function of any continuous type distribution.

The cumulative distribution function (cdf) of the Dagum distribution [20] is

$$ F\left( {x,a,\theta ,\beta } \right) = (1 + ax^{ - \theta } )^{ - \beta } ,\;\;x > 0,\;a > 0,\; \theta > 0,\; \beta > 0, $$
(5)

The corresponding pdf of the Dagum distribution is

$$ g\left( x \right) = a\theta \beta x^{ - \theta - 1} (1 + ax^{ - \theta } )^{ - \beta - 1} ,\;\;x > 0,\; a > 0,\; \theta > 0,\; \beta > 0, $$
(6)

where \(\theta\) and \(\beta\) are shape parameters and \(a\) is the scale parameter.

The cdf of the exponentiated family of distribution [25] is

$$ G(x) = 1 - \left[ {1 - F\left( x \right)} \right]^{\gamma } $$
(7)

where x > 0, \({\upgamma } > 0{ }\)

The corresponding pdf of the exponentiated family of distributions (Nadarajah and Kotz [25]) is

$$ g(x) = \gamma f\left( x \right)\left[ {1 - F\left( x \right)} \right]^{\gamma - 1} ,\;\;x > 0,\; \gamma > 0 $$
(8)

where \(\gamma\) is a shape parameter, F(x) is the cdf, and f(x) is the pdf of any continuous distribution.

Substituting Eq. (5) into Eq. (7), the cdf of the exponentiated Dagum distribution and its survival function are,

$$ \begin{aligned} \overline{G}(x) = 1 - G\left( x \right) & = 1 - \left[ {1 - \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } } \right] = 1 - 1 + \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } \\ \overline{G}(x) & = \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } . \\ \end{aligned} $$
(8a)
$$ G(x) = 1 - \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } \;\;x > 0 , a > 0, \theta > 0, \beta > 0 ,\gamma > 0 $$
(9)

By putting Eqs. (5) and (6) in (8), the pdf of the exponentiated Dagum distribution is:

$$ g\left( x \right) = \gamma a\theta \beta x^{ - \theta - 1} (1 + ax^{ - \theta } )^{ - \beta - 1} \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma - 1} ,\;\;x > 0 , a > 0, \theta > 0, \beta > 0 ,\gamma > 0 $$
(10)

where “\(\theta\)”, “\(\beta\)” and “\(\gamma\)” are the shape parameters and “a” is the scale parameter.

The cdf of the Marshall-Olkin exponentiated Dagum distribution can be obtained by using Eqs. (9) and (8a) in (3)

$$ F\left( x \right) = \frac{{1 - \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } }}{{1 - \left( {1 - \propto } \right)\left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } }} x > 0 , a > 0, \theta > 0, \beta > 0 ,\gamma > 0, \propto > 0 $$
(11)

Equation (11) is the cdf of the Marshall-Olkin exponentiated Dagum (MOED) distribution, where “\(\propto\)”, “\(\theta\)”, “\(\beta\)” and “\(\gamma\)” are the shape parameters and “a” is the scale parameter. The corresponding pdf of the MOED distribution can be obtained by using Eqs. (8a) and (10) in (4).

$$ f\left( x \right) = \frac{{ \propto \gamma a\theta \beta x^{ - \theta - 1} (1 + ax^{ - \theta } )^{ - \beta - 1} \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma - 1} }}{{\left( {1 - \left( {1 - \propto } \right)\left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } } \right)^{2} }}\;\;x > 0 , a > 0, \theta > 0, \beta > 0 ,\gamma > 0, \propto > 0 . $$
(12)

Equation (12) is the Pdf of MOED, where “\(\propto\)”, “\(\theta\)”, “\(\beta\)” and “\(\gamma\)” are the shape parameters and “a” is the scale parameter.
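As a quick numerical sketch (our own code, not part of the paper), Eqs. (11) and (12) can be implemented directly; `alpha` stands for the shape parameter written as \(\propto\) in the text, and the test values below are arbitrary.

```python
# Sketch of the MOED cdf, Eq. (11), and pdf, Eq. (12).
# All parameters are assumed positive; `alpha` stands for \propto.
def moed_cdf(x, alpha, theta, beta, gamma, a):
    t = (1.0 + a * x ** (-theta)) ** (-beta)      # Dagum cdf, Eq. (5)
    g = 1.0 - (1.0 - t) ** gamma                  # exponentiated Dagum cdf, Eq. (9)
    return g / (1.0 - (1.0 - alpha) * (1.0 - g))  # Marshall-Olkin transform, Eq. (3)

def moed_pdf(x, alpha, theta, beta, gamma, a):
    t = (1.0 + a * x ** (-theta)) ** (-beta)
    num = (alpha * gamma * a * theta * beta * x ** (-theta - 1)
           * (1.0 + a * x ** (-theta)) ** (-beta - 1)
           * (1.0 - t) ** (gamma - 1))
    return num / (1.0 - (1.0 - alpha) * (1.0 - t) ** gamma) ** 2
```

For example, with \(\propto = 2\), \(\theta = 3\), \(\beta = 1.5\), \(\gamma = 2\), \(a = 1\), the cdf is strictly increasing on a grid of positive x values and the pdf is nonnegative, as expected of a proper distribution.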

Lemma 1

If \(X \sim {\text{MOED}}\left( { \propto , \theta , \beta ,\gamma ,a} \right)\), then (2) is a proper PDF.

Proof

The probability density function (2) of MOED distribution is a proper pdf if

$$ \mathop \int \limits_{ - \infty }^{\infty } f\left( x \right)dx = 1 $$

Substituting Eq. (12) into the above equation, we get

$$ \mathop \int \limits_{0}^{\infty } \frac{{ \propto \gamma a\theta \beta x^{ - \theta - 1} (1 + ax^{ - \theta } )^{ - \beta - 1} \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma - 1} }}{{\left( {1 - \left( {1 - \propto } \right)\left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } } \right)^{2} }}dx . $$
(13)

Making substitution

$$ \left( {1 + ax^{ - \theta } } \right)^{ - 1} = t,\;\;x = \frac{1}{{a^{{ - \frac{1}{\theta }}} }} \left[ {\frac{1}{t} - 1} \right]^{{ - \frac{1}{\theta }}} ,\;\;dx = \left( \frac{1}{a} \right)^{{ - \frac{1}{\theta }}} \left( {\frac{1}{{\theta t^{2} }}} \right)\left[ {\frac{1}{t} - 1} \right]^{{ - \frac{1}{\theta } - 1}} dt, $$

Limits: when \(x \to \infty\) then \(t = 1\), and when \(x \to 0\) then \(t = 0\).

So Eq. (13) becomes

$$ = \quad \propto \gamma \beta \mathop \int \limits_{0}^{1} \frac{{t^{\beta - 1} \left( {1 - t^{\beta } } \right)^{\gamma - 1} }}{{\left( {1 - \left( {1 - \propto } \right)\left( {1 - t^{\beta } } \right)^{\gamma } } \right)^{2} }}dt. $$

Let \(\left( {1 - \propto } \right)\left( {1 - t^{\beta } } \right)^{\gamma } = s\), \(t = \left( {1 - \frac{{s^{{\frac{1}{\gamma }}} }}{{\left( {1 - \propto } \right)^{{\frac{1}{\gamma }}} }}} \right)^{{\frac{1}{\beta }}}\),\(dt = - \frac{1}{\beta \gamma }\left( {\frac{{s^{{\frac{1}{\gamma } - 1}} }}{{\left( {1 - \propto } \right)^{{\frac{1}{\gamma }}} }}} \right)\left( {1 - \frac{{s^{{\frac{1}{\gamma }}} }}{{\left( {1 - \propto } \right)^{{\frac{1}{\gamma }}} }}} \right)^{{\frac{1}{\beta } - 1}} ds\).

Limits: when \(t \to 1\) then \(s = 0\), and when \(t \to 0\) then \(s = 1 - \propto\); so, after simplification, we get

$$ = \quad \propto \mathop \int \limits_{0}^{1 - \propto } \frac{{\left( {\frac{{s^{{1 - \frac{1}{\gamma } + \frac{1}{\gamma } - 1}} }}{{\left( {1 - \propto } \right)^{{1 - \frac{1}{\gamma } + \frac{1}{\gamma }}} }}} \right)}}{{\left( {1 - s} \right)^{2} }}ds = \frac{ \propto }{{\left( {1 - \propto } \right)}}\mathop \int \limits_{0}^{1 - \propto } \left( {1 - s} \right)^{ - 2} ds. $$

Let \(1 - s = u\), so that \(s = 1 - u\) and \(ds = - du\). Limits: when \(s \to 1 - \propto\) then \(u = \propto\), and when \(s \to 0\) then \(u = 1\). So we have

$$ = \frac{ \propto }{{\left( {1 - \propto } \right)}}\mathop \int \limits_{1}^{ \propto } {\text{u}}^{ - 2} {\text{( - du)}} = \frac{ \propto }{{\left( {1 - \propto } \right)}}\left[ {\frac{{\left( {1 - \propto } \right)}}{ \propto }} \right] = {1}. $$
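The normalization argument above can also be checked arithmetically; the sketch below (our own code, with arbitrary test parameters and `alpha` standing for \(\propto\)) integrates the pdf (2) by the trapezoidal rule and should return a value close to 1.

```python
# Trapezoidal check that the MOED pdf, Eq. (2), integrates to 1.
def moed_pdf(x, alpha, theta, beta, gamma, a):
    t = (1.0 + a * x ** (-theta)) ** (-beta)
    num = (alpha * gamma * a * theta * beta * x ** (-theta - 1)
           * (1.0 + a * x ** (-theta)) ** (-beta - 1)
           * (1.0 - t) ** (gamma - 1))
    return num / (1.0 - (1.0 - alpha) * (1.0 - t) ** gamma) ** 2

def trapezoid(f, lo, hi, n):
    h = (hi - lo) / n
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return h * s

# The interval [1e-4, 200] covers essentially all of the mass
# for these parameter values (both tails decay polynomially fast).
total = trapezoid(lambda x: moed_pdf(x, 2.0, 3.0, 2.0, 1.5, 1.0),
                  1e-4, 200.0, 200_000)
```

Up to truncation and discretization error, `total` agrees with the value 1 established by the lemma.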

Figure 1, shown below, represents the shapes of the pdf and cdf of the MOED distribution.

Fig. 1.

1.1: pdf plot of the MOED distribution. 1.2: pdf plot of the MOED distribution. 1.3: F(x) plot of MOED when a = 0.4, \(\beta = 1.2\), \(\gamma = 5\) and \(\theta = 2\). 1.4: F(x) plot of MOED when a = 0.4, \(\beta = 1.2\), \(\gamma = 5\) and \(\propto = 5\). 1.5: F(x) plot of MOED when a = 1, \(\beta = 1.2\), \(\propto = 5\) and \(\theta = 3\). 1.6: F(x) plot of MOED when a = 1, \(\gamma = 12\), \(\propto = 5\) and \(\theta = 3\)

The cdf plots for the MOED distribution show the expected increasing behavior: F(x) rises gradually as each of the parameters a, \(\beta\), \(\propto\) and \(\gamma\) is varied in turn while the remaining parameters are held fixed.

Lemma 1.2

If \(X \sim {\text{MOED}}\left( { \propto , \theta , \beta ,\gamma ,a} \right)\), then the CDF (1) can be expressed as

$$ F\left( x \right) = \mathop \sum \limits_{l = 0}^{\infty } S_{l} \left( {1 + ax^{ - \theta } } \right)^{ - \beta l} , $$
(14)

where

$$ s_{l} = \mathop \sum \limits_{i = 0}^{\infty } \mathop \sum \limits_{j = 0}^{\infty } \left( {1 - \propto } \right)^{i} \frac{{\left( { - 1} \right)^{j + l} }}{i! j!l!}\frac{{\Gamma \left( {1 + i} \right)}}{\Gamma \left( 1 \right)}\frac{\Gamma \left( 2 \right) }{{\Gamma \left( {2 - j} \right)}}\frac{{\Gamma \left( {\gamma \left( {j + i} \right) + 1} \right) }}{{\Gamma \left( {\gamma \left( {j + i} \right) + 1 - l} \right)}}. $$

Proof

For any positive real number “a” and for |z|< 1, we have the generalized binomial expansion

$$ \left( {1 - z} \right)^{ - a} = \mathop \sum \limits_{i = 0}^{\infty } \frac{{\Gamma \left( {a + i} \right) }}{{\Gamma \left( a \right)}}\frac{{z^{i} }}{i!} . $$
(15)

Applying expression (15), with \(z = \left( {1 - \propto } \right)\left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma }\) (so that |z| < 1 whenever \(0 < \propto < 2\)), to the denominator of Eq. (1),

$$ F\left( x \right) = \left( {1 - \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } } \right) \left( {\mathop \sum \limits_{i = 0}^{\infty } \frac{{\Gamma \left( {1 + i} \right)}}{\Gamma \left( 1 \right)i!}\left( {1 - \propto } \right)^{i} \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma i} } \right) , $$
$$ \left( {1 - \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } } \right) \mathop \sum \limits_{i = 0}^{\infty } \left( {1 - \propto } \right)^{i} \frac{{\Gamma \left( {1 + i} \right)}}{\Gamma \left( 1 \right)i!}\left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma i} , $$

or, \(\left( {1 - \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } } \right)^{2 - 1} \mathop \sum \limits_{i = 0}^{\infty } \left( {1 - \propto } \right)^{i} \frac{{\Gamma \left( {1 + i} \right)}}{\Gamma \left( 1 \right)i!}\left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma i} .\)

Consider the power series expansion by Cordeiro et al. [15]

$$ \left( {1 - t} \right)^{ \propto - 1} = \mathop \sum \limits_{i = 0}^{\infty } \left( { - 1} \right)^{i} \frac{\Gamma \left( \propto \right) }{{\Gamma \left( { \propto - i} \right)}}\frac{{t^{i} }}{i!}. $$
(16)

Using the expression (16) in the above equation, we get

$$ = \mathop \sum \limits_{i = 0}^{\infty } \mathop \sum \limits_{j = 0}^{\infty } \frac{{\left( { - 1} \right)^{j} }}{{i! j!}}\frac{{\Gamma \left( {1 + i} \right)}}{{\Gamma \left( 1 \right)}}\frac{{\Gamma \left( 2 \right) }}{{\Gamma \left( {2 - j} \right)}}\left( {1 - \propto } \right)^{i} \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{{\gamma \left( {j + i} \right) + 1 - 1}} $$

Again using the expression (16) in the above equation, we will get (14).

Lemma 1.3

If \(X \sim {\text{MOED}}\left( { \propto , \theta , \beta ,\gamma ,a} \right)\), then the PDF (2) can be expressed as

$$ f\left( x \right) = \, \propto \gamma a\theta \beta x^{ - \theta - 1} \mathop \sum \limits_{j = 0}^{\infty } t_{j} \left( {1 + ax^{ - \theta } } \right)^{{ - \beta \left( {j + 1} \right) - 1}} , $$
(17)

where, \(t_{j} = \mathop \sum \limits_{i = 0}^{\infty } \left( {1 - \propto } \right)^{i} \frac{{\left( { - 1} \right)^{j} }}{i!j!}\frac{{\Gamma \left( {2 + i} \right)}}{\Gamma \left( 2 \right)}\frac{{\Gamma \left( {\gamma \left( {i + 1} \right)} \right) }}{{\Gamma \left( {\gamma \left( {i + 1} \right) - j} \right)}}.\)

Proof

Applying the generalized Binomial expansion (15) in Eq. (2), we get

$$ f\left( x \right) = \, \propto \gamma a\theta \beta x^{ - \theta - 1} (1 + ax^{ - \theta } )^{ - \beta - 1} \mathop \sum \limits_{i = 0}^{\infty } \frac{{\Gamma \left( {2 + i} \right)\left( {1 - \propto } \right)^{i} }}{\Gamma \left( 2 \right)i!}\left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{{\gamma \left( {i + 1} \right) - 1}} . $$

Now applying the power series expansion (16), we have

$$ f\left( x \right) = \, \propto \gamma a\theta \beta x^{ - \theta - 1} (1 + ax^{ - \theta } )^{ - \beta - 1} \mathop \sum \limits_{i = 0}^{\infty } \frac{{\Gamma \left( {2 + i} \right)\left( {1 - \propto } \right)^{i} }}{\Gamma \left( 2 \right)i!} \mathop \sum \limits_{j = 0}^{\infty } \left( { - 1} \right)^{j} \frac{{\Gamma \left( {\gamma \left( {i + 1} \right)} \right) }}{{\Gamma \left( {\gamma \left( {i + 1} \right) - j} \right)}}\frac{{\left( {1 + ax^{ - \theta } } \right)^{ - \beta j} }}{j!}, $$
$$ f\left( x \right) = \, \propto \gamma a\theta \beta x^{ - \theta - 1} \mathop \sum \limits_{i = 0}^{\infty } \mathop \sum \limits_{j = 0}^{\infty } \left( {1 - \propto } \right)^{i} \frac{{\left( { - 1} \right)^{j} }}{i!j!}\frac{{\Gamma \left( {2 + i} \right)}}{\Gamma \left( 2 \right)}\frac{{\Gamma \left( {\gamma \left( {i + 1} \right)} \right) }}{{\Gamma \left( {\gamma \left( {i + 1} \right) - j} \right)}}\left( {1 + ax^{ - \theta } } \right)^{{ - \beta \left( {j + 1} \right) - 1}} , $$
$$ f\left( x \right) = \, \propto \gamma a\theta \beta x^{ - \theta - 1} \mathop \sum \limits_{j = 0}^{\infty } t_{j} \left( {1 + ax^{ - \theta } } \right)^{{ - \beta \left( {j + 1} \right) - 1}} , t_{j} = \mathop \sum \limits_{i = 0}^{\infty } \left( {1 - \propto } \right)^{i} \frac{{\left( { - 1} \right)^{j} }}{i!j!}\frac{{\Gamma \left( {2 + i} \right)}}{\Gamma \left( 2 \right)}\frac{{\Gamma \left( {\gamma \left( {i + 1} \right)} \right) }}{{\Gamma \left( {\gamma \left( {i + 1} \right) - j} \right)}}. $$

Corollary 1.1

If \(\gamma\) = 1 in (1), we get the cdf of the Marshall-Olkin Dagum distribution:

$$ F\left( x \right) = \frac{{1 - \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{1} }}{{1 - \left( {1 - \propto } \right)\left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{1} }} = \frac{{1 - 1 + (1 + ax^{ - \theta } )^{ - \beta } }}{{1 - \left( {1 - \propto } \right)\left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)}} = \frac{{(1 + ax^{ - \theta } )^{ - \beta } }}{{1 - \left( {1 - \propto } \right)\left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)}}. $$

Corollary 1.2

If \(\propto\) = 1 in (1), we get the cdf of the exponentiated Dagum distribution

$$ F\left( x \right) = \frac{{1 - \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } }}{{1 - \left( {1 - 1} \right)\left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } }} = \frac{{1 - \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } }}{1 - 0} = 1 - \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{\gamma } . $$

Corollary 1.3

If \(\propto\) = 1 and \(\gamma = 1\) in (1), we get the cdf of the Dagum distribution

$$ F\left( x \right) = \frac{{1 - \left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{1} }}{{1 - \left( {1 - 1} \right)\left( {1 - (1 + ax^{ - \theta } )^{ - \beta } } \right)^{1} }} = \frac{{1 - 1 + (1 + ax^{ - \theta } )^{ - \beta } }}{1 - 0} = (1 + ax^{ - \theta } )^{ - \beta } . $$

Corollary 1.4

If \(\propto\) = 1, \(\beta\) = 1 and \(\gamma = 1\) in (1), we get the cdf of the log-logistic or Fisk distribution

$$ F\left( x \right) = \frac{{1 - \left( {1 - (1 + ax^{ - \theta } )^{ - 1} } \right)^{1} }}{{1 - \left( {1 - 1} \right)\left( {1 - (1 + ax^{ - \theta } )^{ - 1} } \right)^{1} }} = \frac{{1 - 1 + (1 + ax^{ - \theta } )^{ - 1} }}{1 - 0} = (1 + ax^{ - \theta } )^{ - 1} . $$

Corollary 1.5

If \(\propto\) = 1, a = 1 and \(\gamma = 1\) in (1), we get the cdf of the Burr III distribution

$$ F\left( x \right) = \frac{{1 - \left( {1 - (1 + x^{ - \theta } )^{ - \beta } } \right)^{1} }}{{1 - \left( {1 - 1} \right)\left( {1 - (1 + x^{ - \theta } )^{ - \beta } } \right)^{1} }} = \frac{{1 - 1 + (1 + x^{ - \theta } )^{ - \beta } }}{1 - 0} = (1 + x^{ - \theta } )^{ - \beta } . $$

Corollary 1.6

If \(\propto\) = 1 and a = 1 in (1), we get the cdf of the Kumaraswamy Burr III distribution

$$ F\left( x \right) = \frac{{1 - \left( {1 - (1 + x^{ - \theta } )^{ - \beta } } \right)^{\gamma } }}{{1 - \left( {1 - 1} \right)\left( {1 - (1 + x^{ - \theta } )^{ - \beta } } \right)^{\gamma } }} = \frac{{1 - \left( {1 - (1 + x^{ - \theta } )^{ - \beta } } \right)^{\gamma } }}{1 - 0} = 1 - \left( {1 - (1 + x^{ - \theta } )^{ - \beta } } \right)^{\gamma } . $$

Corollary 1.7

If \(\propto\) = 1 and \(\beta\) = 1 in (1), we get the cdf of the Kumaraswamy Fisk or Kumaraswamy log-logistic distribution.

$$ F\left( x \right) = \frac{{1 - \left( {1 - (1 + ax^{ - \theta } )^{ - 1} } \right)^{\gamma } }}{{1 - \left( {1 - 1} \right)\left( {1 - (1 + ax^{ - \theta } )^{ - 1} } \right)^{\gamma } }} = \frac{{1 - \left( {1 - (1 + ax^{ - \theta } )^{ - 1} } \right)^{\gamma } }}{1 - 0} = 1 - \left( {1 - (1 + ax^{ - \theta } )^{ - 1} } \right)^{\gamma } . $$
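These special cases can be spot-checked numerically. The sketch below (our own code, arbitrary test values, `alpha` standing for \(\propto\)) verifies the Dagum reduction of Corollary 1.3 and the exponentiated Dagum reduction of Corollary 1.2.

```python
# Numeric spot-check that the MOED cdf, Eq. (1)/(11), collapses to its
# stated special cases.
def moed_cdf(x, alpha, theta, beta, gamma, a):
    t = (1.0 + a * x ** (-theta)) ** (-beta)
    g = 1.0 - (1.0 - t) ** gamma
    return g / (1.0 - (1.0 - alpha) * (1.0 - g))

def dagum_cdf(x, theta, beta, a):                     # Eq. (5)
    return (1.0 + a * x ** (-theta)) ** (-beta)

def exp_dagum_cdf(x, theta, beta, gamma, a):          # Eq. (9)
    return 1.0 - (1.0 - (1.0 + a * x ** (-theta)) ** (-beta)) ** gamma

for x in (0.3, 1.0, 2.5, 7.0):
    # Corollary 1.3: alpha = gamma = 1 gives the Dagum cdf exactly.
    assert abs(moed_cdf(x, 1.0, 2.0, 1.5, 1.0, 0.7)
               - dagum_cdf(x, 2.0, 1.5, 0.7)) < 1e-12
    # Corollary 1.2: alpha = 1 gives the exponentiated Dagum cdf exactly.
    assert abs(moed_cdf(x, 1.0, 2.0, 1.5, 3.0, 0.7)
               - exp_dagum_cdf(x, 2.0, 1.5, 3.0, 0.7)) < 1e-12
```

With \(\propto = 1\) the Marshall-Olkin denominator in (11) reduces to 1, so the agreement is exact rather than approximate.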

2 Statistical Properties of MOED Distribution

In this section, the statistical properties of the proposed MOED distribution will be developed and discussed.

Theorem 2.1

If \(X \sim {\text{MOED}}\left( { \propto , \theta , \beta ,\gamma ,a} \right)\), then the characteristic function for r.v X is

$$ \varphi_{x} \left( t \right) = \, \propto \gamma \beta \mathop \sum \limits_{j = 0}^{\infty } \mathop \sum \limits_{p = 0}^{\infty } \frac{{t_{j} (ita^{{\frac{1}{\theta }}} )^{p} }}{p!}\frac{{\Gamma \left( {\beta \left( {j + 1} \right) + \frac{p}{\theta }} \right)\Gamma \left( {1 - \frac{p}{\theta }} \right)}}{{\Gamma \left( {\beta \left( {j + 1} \right) + 1} \right)}},\;\;{\text{under the condition}}\;\frac{p}{\theta } < 1. $$
(18)

Proof

The characteristic function for MOED distribution can be defined as;

$$ \varphi_{x} \left( t \right) = \mathop \int \limits_{0}^{\infty } e^{itx} \propto \gamma a\theta \beta x^{ - \theta - 1} \mathop \sum \limits_{j = 0}^{\infty } t_{j} \left( {1 + ax^{ - \theta } } \right)^{{ - \beta \left( {j + 1} \right) - 1}} dx, $$
$$ \varphi_{x} \left( t \right) = \, \propto \gamma a\theta \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \mathop \int \limits_{0}^{\infty } e^{itx} x^{ - \theta - 1} \left( {1 + ax^{ - \theta } } \right)^{{ - \left( {\beta \left( {j + 1} \right) + 1} \right)}} dx. $$
(19)

Making substitution

$$ \left( {1 + ax^{ - \theta } } \right)^{ - 1} = s,x = \frac{1}{{a^{{ - \frac{1}{\theta }}} }} \left[ {\frac{1}{s} - 1} \right]^{{ - \frac{1}{\theta }}} ,dx = \frac{1}{{\theta s^{2} a^{{ - \frac{1}{\theta }}} }}\left[ {\frac{1}{s} - 1} \right]^{{ - \frac{1}{\theta } - 1}} ds, $$

Limits; when \(x \to \infty\) then \(s = 1 \) and when \(x \to 0\) then \(s = 0\). So Eq. (19) becomes

$$ \varphi_{x} \left( t \right) = \, \propto \gamma a\beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \mathop \int \limits_{0}^{1} e^{{it\left( {\frac{1}{{a^{{ - \frac{1}{\theta }}} }} \left[ {\frac{1}{s} - 1} \right]^{{ - \frac{1}{\theta }}} } \right)}} \left( {\frac{1}{{a^{{1 + \frac{1}{\theta }}} }} \left[ {\frac{1}{s} - 1} \right]^{{1 + \frac{1}{\theta }}} } \right)s^{{\left( {\beta \left( {j + 1} \right) + 1} \right)}} \frac{1}{{s^{2} a^{{ - \frac{1}{\theta }}} }}\left[ {\frac{1}{s} - 1} \right]^{{ - \frac{1}{\theta } - 1}} ds, $$
$$ \varphi_{x} \left( t \right) = \quad \propto \gamma \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \mathop \int \limits_{0}^{1} e^{{it\left( {\frac{1}{{a^{{ - \frac{1}{\theta }}} }} \left[ {\frac{1}{s} - 1} \right]^{{ - \frac{1}{\theta }}} } \right)}} s^{{\left( {\beta \left( {j + 1} \right) - 1} \right)}} ds. $$

Following Mead and Abd-Eltawab [26], using the series expansion \(e^{tx} = \mathop \sum \limits_{j = 0}^{\infty } \frac{{(tx)^{j} }}{j!}\), we have

$$ \varphi_{x} \left( t \right) = \, \propto \gamma \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \mathop \sum \limits_{p = 0}^{\infty } \frac{{(ita^{{\frac{1}{\theta }}} )^{p} }}{p!}\mathop \int \limits_{0}^{1} \left[ {1 - s} \right]^{{\left( { - \frac{p}{\theta } + 1} \right) - 1}} s^{{\left( {\beta \left( {j + 1} \right) + \frac{p}{\theta } - 1} \right)}} ds, $$
$$ \varphi_{x} \left( t \right) = \, \propto \gamma \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \mathop \sum \limits_{p = 0}^{\infty } \frac{{(ita^{{\frac{1}{\theta }}} )^{p} }}{p!}B\left( {\beta \left( {j + 1} \right) + \frac{p}{\theta },\left( {1 - \frac{p}{\theta }} \right)} \right), $$

or, \(\varphi_{x} \left( t \right) = \, \propto \gamma \beta \mathop \sum \limits_{j = 0}^{\infty } \mathop \sum \limits_{p = 0}^{\infty } \frac{{t_{j} (ita^{{\frac{1}{\theta }}} )^{p} }}{p!}\frac{{\Gamma \left( {\beta \left( {j + 1} \right) + \frac{p}{\theta }} \right)\Gamma \left( {1 - \frac{p}{\theta }} \right)}}{{\Gamma \left( {\beta \left( {j + 1} \right) + 1} \right)}}\), under the condition \(\frac{p}{\theta } < 1\).

Theorem 2.2

If \(X \sim {\text{MOED}}\left( { \propto , \theta , \beta ,\gamma ,a} \right)\), then the moment generating function for r.v X is

$$ M_{x} \left( t \right) = \mathop \sum \limits_{r = 0}^{\infty } \frac{{t^{r} }}{r!} \propto \gamma a^{{\frac{r}{\theta }}} \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \frac{{\Gamma \left( {\beta \left( {j + 1} \right) + \frac{r}{\theta }} \right)\Gamma \left( {1 - \frac{r}{\theta }} \right)}}{{\Gamma \left( {\beta \left( {j + 1} \right) + 1} \right)}} . $$
(20)

Proof

Moment generating function for MOED can be defined as,

$$ M_{x} \left( t \right) = \mathop \sum \limits_{r = 0}^{\infty } \frac{{t^{r} }}{r!}E(X^{r} ), $$
(21)

where, \({\text{E}}\left( {{\text{X}}^{{\text{r}}} } \right) = \mathop \int \limits_{0}^{\infty } {\text{x}}^{{\text{r}}} \propto {{\gamma a\theta \beta x}}^{{{{ - \theta - 1}}}} \mathop \sum \limits_{j = 0}^{\infty } t_{j} \left( {1 + ax^{ - \theta } } \right)^{{ - \beta \left( {j + 1} \right) - 1}} dx{,}\)

$$ E\left( {X^{r} } \right) = \quad \propto \gamma a\theta \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \mathop \int \limits_{0}^{\infty } x^{r - \theta - 1} \left( {1 + ax^{ - \theta } } \right)^{{ - \left( {\beta \left( {j + 1} \right) + 1} \right)}} dx. $$
(22)

Making substitution,

$$ \left( {1 + ax^{ - \theta } } \right)^{ - 1} = u,x = \frac{1}{{a^{{ - \frac{1}{\theta }}} }} \left[ {\frac{1}{u} - 1} \right]^{{ - \frac{1}{\theta }}} ,dx = \left( \frac{1}{a} \right)^{{ - \frac{1}{\theta }}} \left( {\frac{1}{{\theta u^{2} }}} \right)\left[ {\frac{1}{u} - 1} \right]^{{ - \frac{1}{\theta } - 1}} du, $$

Limits: when \(x \to \infty \; {\text{then}}\; u = 1\) and when \(x \to 0 \;{\text{then}}\; u = 0.\) So Eq. (22) becomes

$$ E\left( {X^{r} } \right) = \quad \propto \gamma a\beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \mathop \int \limits_{0}^{1} \frac{1}{{a^{{ - \frac{r}{\theta } + 1 + \frac{1}{\theta } - \frac{1}{\theta }}} }} \left[ {\frac{1}{u} - 1} \right]^{{ - \frac{r}{\theta } + 1 + \frac{1}{\theta } - \frac{1}{\theta } - 1}} u^{{\left( {\beta \left( {j + 1} \right) + 1 - 2} \right)}} du, $$
$$ E\left( {X^{r} } \right) = \quad \propto \gamma a^{{\frac{r}{\theta }}} \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \mathop \int \limits_{0}^{1} \left[ {1 - u} \right]^{{\left( { - \frac{r}{\theta } + 1} \right) - 1}} u^{{\left( {\beta \left( {j + 1} \right) + \frac{r}{\theta } - 1} \right)}} du, $$
$$ E\left( {X^{r} } \right) = \quad \propto \gamma a^{{\frac{r}{\theta }}} \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} B\left( {\beta \left( {j + 1} \right) + \frac{r}{\theta },\left( {1 - \frac{r}{\theta }} \right)} \right), $$
$$ E\left( {X^{r} } \right) = \quad \propto \gamma a^{{\frac{r}{\theta }}} \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \frac{{\Gamma \left( {\beta \left( {j + 1} \right) + \frac{r}{\theta }} \right)\Gamma \left( {1 - \frac{r}{\theta }} \right)}}{{\Gamma \left( {\beta \left( {j + 1} \right) + 1} \right)}} for\; \left( {\frac{r}{\theta } < 1} \right). $$
(23)

By substituting Eq. (23) in (21), we will get (20).

Putting r = 1, 2, 3, 4 in (23), we get the first four raw moments as:

$$ \mu_{1}^{{\prime}} = Mean = E\left( {X^{1} } \right) = \quad \propto \gamma a^{{\frac{1}{\theta }}} \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \frac{{\Gamma \left( {\beta \left( {j + 1} \right) + \frac{1}{\theta }} \right)\Gamma \left( {1 - \frac{1}{\theta }} \right)}}{{\Gamma \left( {\beta \left( {j + 1} \right) + 1} \right)}} for \left( {\frac{1}{\theta } < 1} \right). $$
(24)
$$ \mu_{2}^{{\prime}} = E\left( {X^{2} } \right) = \quad \propto \gamma a^{{\frac{2}{\theta }}} \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \frac{{\Gamma \left( {\beta \left( {j + 1} \right) + \frac{2}{\theta }} \right)\Gamma \left( {1 - \frac{2}{\theta }} \right)}}{{\Gamma \left( {\beta \left( {j + 1} \right) + 1} \right)}} for \left( {\frac{2}{\theta } < 1} \right). $$
(25)
$$ \mu_{3}^{{\prime}} = E\left( {X^{3} } \right) = \quad \propto \gamma a^{{\frac{3}{\theta }}} \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \frac{{\Gamma \left( {\beta \left( {j + 1} \right) + \frac{3}{\theta }} \right)\Gamma \left( {1 - \frac{3}{\theta }} \right)}}{{\Gamma \left( {\beta \left( {j + 1} \right) + 1} \right)}} for \left( {\frac{3}{\theta } < 1} \right), $$
(26)
$$ \mu_{4}^{{\prime}} = E\left( {X^{4} } \right) = \quad \propto \gamma a^{{\frac{4}{\theta }}} \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \frac{{\Gamma \left( {\beta \left( {j + 1} \right) + \frac{4}{\theta }} \right)\Gamma \left( {1 - \frac{4}{\theta }} \right)}}{{\Gamma \left( {\beta \left( {j + 1} \right) + 1} \right)}} for \left( {\frac{4}{\theta } < 1} \right), $$
(27)

Also, the variance of MOED is,

$$ {\text{Variance}} = a^{{\frac{2}{\theta }}} \left[ { \propto \gamma \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \frac{{\Gamma \left( {\beta \left( {j + 1} \right) + \frac{2}{\theta }} \right)\Gamma \left( {1 - \frac{2}{\theta }} \right)}}{{\Gamma \left( {\beta \left( {j + 1} \right) + 1} \right)}} - \left( { \propto \gamma \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \frac{{\Gamma \left( {\beta \left( {j + 1} \right) + \frac{1}{\theta }} \right)\Gamma \left( {1 - \frac{1}{\theta }} \right)}}{{\Gamma \left( {\beta \left( {j + 1} \right) + 1} \right)}}} \right)^{2} } \right] . $$
(28)
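The moment expressions above can be cross-checked without summing the series: the sketch below (our own code, arbitrary parameters chosen with \(\theta = 5\) so that \(2/\theta < 1\) and both moments exist, `alpha` standing for \(\propto\)) computes \(E(X)\), \(E(X^2)\) and the variance by direct quadrature of the pdf (12).

```python
# Raw moments of MOED, cf. Eq. (23), by trapezoidal quadrature of Eq. (12).
def moed_pdf(x, alpha, theta, beta, gamma, a):
    t = (1.0 + a * x ** (-theta)) ** (-beta)
    num = (alpha * gamma * a * theta * beta * x ** (-theta - 1)
           * (1.0 + a * x ** (-theta)) ** (-beta - 1)
           * (1.0 - t) ** (gamma - 1))
    return num / (1.0 - (1.0 - alpha) * (1.0 - t) ** gamma) ** 2

def raw_moment(r, alpha, theta, beta, gamma, a, lo=1e-4, hi=200.0, n=200_000):
    # E(X^r); requires r/theta < 1 for the moment to exist, cf. Eq. (23).
    h = (hi - lo) / n
    g = lambda x: x ** r * moed_pdf(x, alpha, theta, beta, gamma, a)
    return h * (0.5 * (g(lo) + g(hi)) + sum(g(lo + i * h) for i in range(1, n)))

params = (1.5, 5.0, 2.0, 2.0, 1.0)     # alpha, theta, beta, gamma, a
m1 = raw_moment(1, *params)            # mean, cf. Eq. (24)
m2 = raw_moment(2, *params)            # second raw moment, cf. Eq. (25)
variance = m2 - m1 ** 2                # cf. Eq. (28)
```

The quadrature values can then be compared against truncated versions of the series (24), (25) and (28) for the same parameters.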

Theorem 2.3

If \(X \sim {\text{MOED}}\left( { \propto , \theta , \beta ,\gamma ,a} \right)\), then the qth quantile for r.v X is,

$$ x_{q} = \left( {\frac{1}{{\text{a}}}\left( {\left( {1 - \left( {\frac{{\left( {1 - {\text{q}}} \right)}}{{\left( {1 - {\text{q}}\left( {1 - \propto } \right)} \right)}}} \right)^{{\frac{1}{\gamma }}} } \right)^{{ - \frac{1}{\beta }}} - 1} \right)} \right)^{{ - { }\frac{1}{{\uptheta }}}} . $$
(29)

Proof

The \(q^{th}\) quantile of any distribution can be obtained by solving

$$ F\left( {x_{q} } \right) = q. $$
(30)

Substituting Eq. (11) in Eq. (30), we get

$$ \frac{{1 - \left( {1 - (1 + ax_{q}^{ - \theta } )^{ - \beta } } \right)^{\gamma } }}{{1 - \left( {1 - \propto } \right)\left( {1 - (1 + ax_{q}^{ - \theta } )^{ - \beta } } \right)^{\gamma } }} = q. $$

After simplification for x, we get

$$ x_{q} = \left( {\frac{1}{{\text{a}}}\left( {\left( {1 - \left( {\frac{{\left( {1 - {\text{q}}} \right)}}{{\left( {1 - {\text{q}}\left( {1 - \propto } \right)} \right)}}} \right)^{{\frac{1}{\gamma }}} } \right)^{{ - \frac{1}{\beta }}} - 1} \right)} \right)^{{ - { }\frac{1}{{\uptheta }}}} , $$

which is the \(q^{th}\) quantile for MOED distribution.
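Equation (29) can be verified by a round trip through the cdf: for any \(q \in (0,1)\), evaluating F at \(x_q\) should return \(q\). A sketch (our own code, arbitrary test parameters, `alpha` standing for \(\propto\)):

```python
# Quantile function of MOED, Eq. (29), checked against the cdf, Eq. (11).
def moed_cdf(x, alpha, theta, beta, gamma, a):
    t = (1.0 + a * x ** (-theta)) ** (-beta)
    g = 1.0 - (1.0 - t) ** gamma
    return g / (1.0 - (1.0 - alpha) * (1.0 - g))

def moed_quantile(q, alpha, theta, beta, gamma, a):
    # Ratio (1 - q) / (1 - q(1 - alpha)) from Eq. (29).
    r = (1.0 - q) / (1.0 - q * (1.0 - alpha))
    inner = (1.0 - r ** (1.0 / gamma)) ** (-1.0 / beta) - 1.0
    return (inner / a) ** (-1.0 / theta)
```

In particular, `moed_quantile(0.5, ...)` reproduces the closed-form median of Eq. (30a).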

Corollary 2.1

The median of the MOED distribution is obtained by putting \(q = 0.5\) in Eq. (29):

$$ {\text{Median}} = x_{0.5} = \left( {\frac{1}{a}\left( {\left( {1 - \left( {1 + \propto } \right)^{{ - \frac{1}{\gamma }}} } \right)^{{ - \frac{1}{\beta }}} - 1} \right)} \right)^{{ - \frac{1}{\theta }}} . $$
(30a)

Theorem 2.4

If \(X \sim {\text{MOED}}\left( { \propto , \theta , \beta ,\gamma ,a} \right)\), then the mean deviations about the mean and the median are:

$$ M.D_{{\overline{x}}} = 2\mu \left( {\mathop \sum \limits_{l = 0}^{\infty } S_{l} \left( {1 + a\mu^{ - \theta } } \right)^{ - \beta l} } \right) - 2\left( { \propto \gamma a^{{ \frac{1}{\theta }}} \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} B_{t\left( \mu \right)} \left( {\beta \left( {j + 1} \right) + \frac{1}{\theta },\left( {1 - \frac{1}{\theta }} \right)} \right)} \right) $$
(31)

valid under the condition \(\frac{1}{\theta } < 1\). Similarly,

$$ M.D_{M} = \alpha \gamma a^{\frac{1}{\theta }} \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \frac{\Gamma \left( \beta \left( j + 1 \right) + \frac{1}{\theta } \right)\Gamma \left( 1 - \frac{1}{\theta } \right)}{\Gamma \left( \beta \left( j + 1 \right) + 1 \right)} + 2M\left( \frac{1}{a}\left( \left( 1 - \left( 1 + \alpha \right)^{ - \frac{1}{\gamma }} \right)^{ - \frac{1}{\beta }} - 1 \right) \right)^{ - \frac{1}{\theta }} $$
$$ - M - 2\alpha \gamma a^{\frac{1}{\theta }} \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} B_{t\left( M \right)} \left( \beta \left( j + 1 \right) + \frac{1}{\theta },1 - \frac{1}{\theta } \right), $$
(32)

where M is the median defined in Eq. (30a).
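The series expressions above are awkward to evaluate directly, but the underlying quantities \(E|X-\mu|\) and \(E|X-M|\) are easy to approximate through the quantile function, since \(E|X-c| = \int_0^1 |Q(q)-c|\,dq\). The sketch below is ours (function names and parameter values are illustrative; \(\theta = 3\) is chosen so that \(\frac{1}{\theta} < 1\) and the mean is finite) and checks the classical fact that the deviation about the median never exceeds the deviation about the mean:

```python
def moed_quantile(q, alpha, theta, beta, gamma, a):
    """Quantile function of the MOED distribution, Eq. (29)."""
    r = (1.0 - q) / (1.0 - q * (1.0 - alpha))
    inner = (1.0 - r ** (1.0 / gamma)) ** (-1.0 / beta) - 1.0
    return (inner / a) ** (-1.0 / theta)

def mean_abs_dev(center, n, **params):
    """E|X - c| = integral_0^1 |Q(q) - c| dq, midpoint rule with n nodes."""
    return sum(abs(moed_quantile((k + 0.5) / n, **params) - center)
               for k in range(n)) / n

# arbitrary illustrative parameter values with a finite mean (theta > 1)
params = dict(alpha=2.0, theta=3.0, beta=2.0, gamma=1.5, a=1.5)
n = 100000
mu = sum(moed_quantile((k + 0.5) / n, **params) for k in range(n)) / n  # E(X)
med = moed_quantile(0.5, **params)                                      # Eq. (30a)
md_mean = mean_abs_dev(mu, n, **params)
md_median = mean_abs_dev(med, n, **params)
```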

Corollary 2.2

The random number generator for the MOED distribution is

$$ x_{u} = \left( \frac{1}{a}\left( \left( 1 - \left( \frac{1 - U}{1 - U\left( 1 - \alpha \right)} \right)^{\frac{1}{\gamma }} \right)^{ - \frac{1}{\beta }} - 1 \right) \right)^{ - \frac{1}{\theta }} ,\quad \text{for } U\sim \text{Uniform}\left( 0,1 \right). $$
(33)
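Eq. (33) is the inverse-transform sampler: feeding standard uniform draws through the quantile function yields MOED variates. A minimal sketch, with function names, seed and parameter values of our own choosing; it checks that about half the simulated draws fall below the theoretical median:

```python
import random

def moed_quantile(q, alpha, theta, beta, gamma, a):
    """Quantile function of the MOED distribution, Eq. (29)."""
    r = (1.0 - q) / (1.0 - q * (1.0 - alpha))
    inner = (1.0 - r ** (1.0 / gamma)) ** (-1.0 / beta) - 1.0
    return (inner / a) ** (-1.0 / theta)

def moed_rvs(n, alpha, theta, beta, gamma, a, seed=1):
    """Inverse-transform sampling via Eq. (33): x = Q(U), U ~ Uniform(0, 1)."""
    rng = random.Random(seed)
    return [moed_quantile(rng.random(), alpha, theta, beta, gamma, a)
            for _ in range(n)]

params = dict(alpha=2.0, theta=3.0, beta=2.0, gamma=1.5, a=1.5)
sample = moed_rvs(20000, **params)
median = moed_quantile(0.5, **params)
below = sum(x < median for x in sample) / len(sample)  # should be near 0.5
```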

3 Measures of Uncertainty and Inequality

In this section, we derive measures of uncertainty for the MOED distribution, namely the Rényi entropy and the Q-entropy. We then derive the inequality measures given by the Lorenz and Bonferroni curves.

3.1 Renyi Entropy

The Rényi entropy is defined as

$$ I_{X;R} \left( \delta \right) = \frac{1}{1 - \delta }\log \left[ {I_{X} \left( \delta \right)} \right] , $$
(34)

where

$$ I_{X} \left( \delta \right) = \mathop \int \limits_{R} f_{X}^{\delta } \left( x \right)dx,\quad \text{for } \delta > 0 \text{ and } \delta \ne 1 . $$
(35)

Substituting (12) into (35) and applying the generalized binomial expansion (15), we get

$$ I_{X} \left( \delta \right) = \left[ \left( \alpha \gamma a\theta \beta \right)^{\delta } \mathop \sum \limits_{i = 0}^{\infty } \frac{\Gamma \left( 2\delta + i \right)}{\Gamma \left( 2\delta \right)i!}\left( 1 - \alpha \right)^{i} \mathop \int \limits_{0}^{\infty } \left( x^{ - \theta - 1} \right)^{\delta } \left( 1 + ax^{ - \theta } \right)^{ - \delta \left( \beta + 1 \right)} \left( 1 - (1 + ax^{ - \theta } )^{ - \beta } \right)^{\gamma \left( \delta + i \right) - \delta } dx \right]. $$

Applying the power series expansion (16) in the above equation, we get

$$ I_{X} \left( \delta \right) = \left[ \left( \alpha \gamma a\theta \beta \right)^{\delta } \mathop \sum \limits_{i = 0}^{\infty } \mathop \sum \limits_{j = 0}^{\infty } \frac{\left( - 1 \right)^{j} }{i!j!}\frac{\Gamma \left( 2\delta + i \right)}{\Gamma \left( 2\delta \right)}\frac{\Gamma \left( \gamma \left( \delta + i \right) - \delta + 1 \right) }{\Gamma \left( \gamma \left( \delta + i \right) - \delta + 1 - j \right)}\left( 1 - \alpha \right)^{i} \mathop \int \limits_{0}^{\infty } \left( x^{ - \theta - 1} \right)^{\delta } \left( 1 + ax^{ - \theta } \right)^{ - \beta \left( \delta + j \right) - \delta } dx \right]. $$
(36)

Making the substitution \(\left( 1 + ax^{ - \theta } \right)^{ - 1} = z\), so that \(x = a^{\frac{1}{\theta }} \left[ \frac{1}{z} - 1 \right]^{ - \frac{1}{\theta }}\) and \(dx = \frac{a^{\frac{1}{\theta }} }{\theta z^{2} }\left[ \frac{1}{z} - 1 \right]^{ - \frac{1}{\theta } - 1} dz\).

Limits: when \(x \to \infty\), \(z \to 1\); when \(x \to 0\), \(z \to 0\). Equation (36) becomes

$$ I_{X} \left( \delta \right) = \alpha^{\delta } \gamma^{\delta } a^{\frac{1}{\theta }\left( 1 - \delta \right)} \theta^{ - \left( 1 - \delta \right)} \beta^{\delta } \left[ \mathop \sum \limits_{i = 0}^{\infty } \mathop \sum \limits_{j = 0}^{\infty } \frac{\left( - 1 \right)^{j} }{i!j!}\frac{\Gamma \left( 2\delta + i \right)}{\Gamma \left( 2\delta \right)}\frac{\Gamma \left( \gamma \left( \delta + i \right) - \delta + 1 \right) }{\Gamma \left( \gamma \left( \delta + i \right) - \delta + 1 - j \right)}\left( 1 - \alpha \right)^{i} \mathop \int \limits_{0}^{1} \left[ 1 - z \right]^{\frac{1}{\theta }\left( \delta - 1 \right) + \delta - 1} z^{\beta \left( \delta + j \right) + \frac{1}{\theta }\left( 1 - \delta \right) - 1} dz \right]. $$

Equivalently,

$$ I_{X} \left( \delta \right) = \alpha^{\delta } \gamma^{\delta } a^{\frac{1}{\theta }\left( 1 - \delta \right)} \theta^{ - \left( 1 - \delta \right)} \beta^{\delta } \left[ \mathop \sum \limits_{i = 0}^{\infty } \mathop \sum \limits_{j = 0}^{\infty } \frac{\left( - 1 \right)^{j} }{i!j!}\frac{\Gamma \left( 2\delta + i \right)}{\Gamma \left( 2\delta \right)}\frac{\Gamma \left( \gamma \left( \delta + i \right) - \delta + 1 \right) }{\Gamma \left( \gamma \left( \delta + i \right) - \delta + 1 - j \right)}\left( 1 - \alpha \right)^{i} B\left( \beta \left( \delta + j \right) + \frac{1}{\theta }\left( 1 - \delta \right),\frac{1}{\theta }\left( \delta - 1 \right) + \delta \right) \right]. $$

After simplification and taking logarithms on both sides, we get

$$ \log I_{X} \left( \delta \right) = \delta \log \left( \alpha \gamma \beta \right) + \frac{1}{\theta }\left( 1 - \delta \right)\log (a) - \left( 1 - \delta \right)\log \left( \theta \right) + \log \left[ \mathop \sum \limits_{i = 0}^{\infty } \mathop \sum \limits_{j = 0}^{\infty } \frac{\left( - 1 \right)^{j} }{i!j!}\frac{\Gamma \left( 2\delta + i \right)}{\Gamma \left( 2\delta \right)}\frac{\Gamma \left( \gamma \left( \delta + i \right) - \delta + 1 \right) }{\Gamma \left( \gamma \left( \delta + i \right) - \delta + 1 - j \right)}\left( 1 - \alpha \right)^{i} \frac{\Gamma \left( \beta \left( \delta + j \right) + \frac{1}{\theta }\left( 1 - \delta \right) \right)\Gamma \left( \frac{1}{\theta }\left( \delta - 1 \right) + \delta \right)}{\Gamma \left( \beta \left( \delta + j \right) + \delta \right)} \right]. $$
(37)

Substituting (37) in (34), we get

$$ I_{X;R} \left( \delta \right) = \frac{\delta }{1 - \delta }\log \left( \alpha \gamma \beta \right) + \frac{\log (a)}{\theta } - \log \left( \theta \right) + \frac{1}{1 - \delta }\log \left[ \mathop \sum \limits_{i = 0}^{\infty } \mathop \sum \limits_{j = 0}^{\infty } \frac{\left( - 1 \right)^{j} }{i!j!}\frac{\Gamma \left( 2\delta + i \right)}{\Gamma \left( 2\delta \right)}\frac{\Gamma \left( \gamma \left( \delta + i \right) - \delta + 1 \right) }{\Gamma \left( \gamma \left( \delta + i \right) - \delta + 1 - j \right)}\left( 1 - \alpha \right)^{i} \frac{\Gamma \left( \beta \left( \delta + j \right) + \frac{1}{\theta }\left( 1 - \delta \right) \right)\Gamma \left( \frac{1}{\theta }\left( \delta - 1 \right) + \delta \right)}{\Gamma \left( \beta \left( \delta + j \right) + \delta \right)} \right]. $$
(38)

This expression is valid under the condition \(\delta > 1\).
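Because the double series in Eq. (38) converges slowly, a numerical cross-check is useful. Substituting \(x = Q(q)\) into Eq. (35) gives \(I_X(\delta) = \int_0^1 f(Q(q))^{\delta-1}\,dq\), which a midpoint rule handles without truncating the infinite support. The sketch below is ours (function names and parameter values are illustrative assumptions) and computes the Rényi entropy of order \(\delta = 2\) this way, comparing two grid resolutions for stability:

```python
import math

def moed_pdf(x, alpha, theta, beta, gamma, a):
    """Density of the MOED distribution, Eq. (12)."""
    w = 1.0 + a * x ** (-theta)
    v = w ** (-beta)
    num = (alpha * gamma * a * theta * beta * x ** (-theta - 1.0)
           * w ** (-beta - 1.0) * (1.0 - v) ** (gamma - 1.0))
    return num / (1.0 - (1.0 - alpha) * (1.0 - v) ** gamma) ** 2

def moed_quantile(q, alpha, theta, beta, gamma, a):
    """Quantile function of the MOED distribution, Eq. (29)."""
    r = (1.0 - q) / (1.0 - q * (1.0 - alpha))
    inner = (1.0 - r ** (1.0 / gamma)) ** (-1.0 / beta) - 1.0
    return (inner / a) ** (-1.0 / theta)

def renyi_entropy(delta, n, **params):
    """Eq. (34) by quadrature: with x = Q(q), dx = dq / f(Q(q)), so
    I_X(delta) = integral_0^1 f(Q(q))**(delta - 1) dq (midpoint rule)."""
    total = sum(moed_pdf(moed_quantile((k + 0.5) / n, **params),
                         **params) ** (delta - 1.0) for k in range(n))
    return math.log(total / n) / (1.0 - delta)

params = dict(alpha=2.0, theta=3.0, beta=2.0, gamma=1.5, a=1.5)
h_coarse = renyi_entropy(2.0, 20000, **params)
h_fine = renyi_entropy(2.0, 80000, **params)
```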

3.2 Q Entropy

Let X be a random variable following the MOED distribution with parameters \(a\), \(\beta\), \(\gamma\), \(\theta\) and \(\alpha\); then the Q-entropy is obtained from

$$ H_{q} \left( x \right) = \frac{1}{q - 1}\log \left[ 1 - \mathop \int \limits_{ - \infty }^{\infty } f\left( x \right)^{q} dx \right],\quad q > 0,\; q \ne 1 . $$
(39)

For MOED distribution

$$ H_{q} \left( x \right) = \frac{1}{q - 1}\log \left[ 1 - \mathop \int \limits_{0}^{\infty } f\left( x \right)^{q} dx \right]. $$

As derived in Sect. 3.1,

$$ \mathop \int \limits_{0}^{\infty } f\left( x \right)^{\delta } dx = \left[ \alpha^{\delta } \gamma^{\delta } a^{\frac{1}{\theta }\left( 1 - \delta \right)} \theta^{ - \left( 1 - \delta \right)} \beta^{\delta } \left[ \mathop \sum \limits_{i = 0}^{\infty } \mathop \sum \limits_{j = 0}^{\infty } \frac{\left( - 1 \right)^{j} }{i!j!}\frac{\Gamma \left( 2\delta + i \right)}{\Gamma \left( 2\delta \right)}\frac{\Gamma \left( \gamma \left( \delta + i \right) - \delta + 1 \right) }{\Gamma \left( \gamma \left( \delta + i \right) - \delta + 1 - j \right)}\left( 1 - \alpha \right)^{i} \frac{\Gamma \left( \beta \left( \delta + j \right) + \frac{1}{\theta }\left( 1 - \delta \right) \right)\Gamma \left( \frac{1}{\theta }\left( \delta - 1 \right) + \delta \right)}{\Gamma \left( \beta \left( \delta + j \right) + \delta \right)} \right] \right]. $$

Hence, the Q-entropy is

$$ H_{q} \left( x \right) = \frac{1}{q - 1}\log \left[ 1 - \alpha^{q} \gamma^{q} a^{\frac{1}{\theta }\left( 1 - q \right)} \theta^{ - \left( 1 - q \right)} \beta^{q} \left[ \mathop \sum \limits_{i = 0}^{\infty } \mathop \sum \limits_{j = 0}^{\infty } \frac{\left( - 1 \right)^{j} }{i!j!}\frac{\Gamma \left( 2q + i \right)}{\Gamma \left( 2q \right)}\frac{\Gamma \left( \gamma \left( q + i \right) - q + 1 \right) }{\Gamma \left( \gamma \left( q + i \right) - q + 1 - j \right)}\left( 1 - \alpha \right)^{i} \frac{\Gamma \left( \beta \left( q + j \right) + \frac{1}{\theta }\left( 1 - q \right) \right)\Gamma \left( \frac{1}{\theta }\left( q - 1 \right) + q \right)}{\Gamma \left( \beta \left( q + j \right) + q \right)} \right] \right]. $$

3.3 Lorenz Curve

Let the random variable X follow the MOED distribution with parameters \(a\), \(\beta\), \(\gamma\), \(\theta\) and \(\alpha\). The Lorenz curve, denoted by L(X), is defined as

$$ L\left( X \right) = \frac{{F^{*} \left( X \right)}}{E\left( X \right)} , $$
(40)
$$ F^{*} \left( X \right) = \mathop \int \limits_{0}^{x} f^{*} \left( z \right)dz , $$
(41)

where

$$ f^{*} \left( z \right) = \frac{zf\left( z \right)}{{E\left( x \right)}} , $$
(42)

where

$$ E\left( X \right) = \alpha \gamma a^{\frac{1}{\theta }} \beta \mathop \sum \limits_{j = 0}^{\infty } t_{j} \frac{\Gamma \left( \beta \left( j + 1 \right) + \frac{1}{\theta } \right)\Gamma \left( 1 - \frac{1}{\theta } \right)}{\Gamma \left( \beta \left( j + 1 \right) + 1 \right)}. $$
(43)

Substituting Eqs. (17) and (43) into Eq. (42) gives

$$ f^{*} \left( z \right) = \frac{z\,\alpha \gamma a\theta \beta z^{ - \theta - 1} \mathop \sum \nolimits_{j = 0}^{\infty } t_{j} \left( 1 + az^{ - \theta } \right)^{ - \beta \left( j + 1 \right) - 1} }{\alpha \gamma a^{\frac{1}{\theta }} \beta \mathop \sum \nolimits_{j = 0}^{\infty } t_{j} \frac{\Gamma \left( \beta \left( j + 1 \right) + \frac{1}{\theta } \right)\Gamma \left( 1 - \frac{1}{\theta } \right)}{\Gamma \left( \beta \left( j + 1 \right) + 1 \right)}} . $$
(44)

Substituting Eq. (44) into Eq. (41) gives

$$ F^{*} \left( X \right) = \frac{a^{1 - \frac{1}{\theta }} \theta }{\mathop \sum \nolimits_{j = 0}^{\infty } t_{j} \frac{\Gamma \left( \beta \left( j + 1 \right) + \frac{1}{\theta } \right)\Gamma \left( 1 - \frac{1}{\theta } \right)}{\Gamma \left( \beta \left( j + 1 \right) + 1 \right)}}\mathop \sum \limits_{j = 0}^{\infty } t_{j} \mathop \int \limits_{0}^{x} z^{ - \theta } \left( 1 + az^{ - \theta } \right)^{ - \left( \beta \left( j + 1 \right) + 1 \right)} dz . $$
(45)

Making the substitution \(\left( 1 + az^{ - \theta } \right)^{ - 1} = t\) and simplifying, we get the expression for the Lorenz curve:

$$ L\left( X \right) = \frac{\mathop \sum \nolimits_{j = 0}^{\infty } t_{j} B_{t\left( x \right)} \left( \beta \left( j + 1 \right) + \frac{1}{\theta },1 - \frac{1}{\theta } \right)}{\alpha \gamma a^{\frac{1}{\theta }} \beta \left( \mathop \sum \nolimits_{j = 0}^{\infty } t_{j} \frac{\Gamma \left( \beta \left( j + 1 \right) + \frac{1}{\theta } \right)}{\Gamma \left( \beta \left( j + 1 \right) + 1 \right)} \right)^{2} \left( \Gamma \left( 1 - \frac{1}{\theta } \right) \right)^{2} }. $$
(46)

3.4 Bonferroni Curve

The Bonferroni curve is defined as

$$ B\left( X \right) = \frac{L\left( X \right)}{{F\left( X \right)}} . $$

Substituting the Lorenz curve (46) and the cdf (11) into the above equation, we obtain the Bonferroni curve for the MOED distribution:

$$ B\left( X \right) = \frac{\mathop \sum \nolimits_{j = 0}^{\infty } t_{j} B_{t\left( x \right)} \left( \beta \left( j + 1 \right) + \frac{1}{\theta },1 - \frac{1}{\theta } \right)}{\alpha \gamma a^{\frac{1}{\theta }} \beta \left( \mathop \sum \nolimits_{j = 0}^{\infty } t_{j} \frac{\Gamma \left( \beta \left( j + 1 \right) + \frac{1}{\theta } \right)}{\Gamma \left( \beta \left( j + 1 \right) + 1 \right)} \right)^{2} \left( \Gamma \left( 1 - \frac{1}{\theta } \right) \right)^{2} \mathop \sum \nolimits_{l = 0}^{\infty } S_{l} \left( 1 + ax^{ - \theta } \right)^{ - \beta l} }. $$
(47)
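For plotting, the curves are most conveniently expressed through the quantile function: in the standard parameterization, \(L(p) = \frac{1}{\mu}\int_0^p Q(q)\,dq\) and \(B(p) = L(p)/p\), where \(p = F(x)\). The sketch below is ours (function names and parameter values are illustrative; \(\theta = 3\) keeps the mean finite) and confirms the expected qualitative behavior, namely that the Lorenz curve lies below the diagonal and the Bonferroni curve lies below one:

```python
def moed_quantile(q, alpha, theta, beta, gamma, a):
    """Quantile function of the MOED distribution, Eq. (29)."""
    r = (1.0 - q) / (1.0 - q * (1.0 - alpha))
    inner = (1.0 - r ** (1.0 / gamma)) ** (-1.0 / beta) - 1.0
    return (inner / a) ** (-1.0 / theta)

def lorenz(p, n=50000, **params):
    """L(p) = (1/mu) * integral_0^p Q(q) dq, midpoint rule with n nodes."""
    mu = sum(moed_quantile((k + 0.5) / n, **params) for k in range(n)) / n
    m = int(round(p * n))
    return sum(moed_quantile((k + 0.5) / n, **params) for k in range(m)) / (n * mu)

params = dict(alpha=2.0, theta=3.0, beta=2.0, gamma=1.5, a=1.5)
l30, l60 = lorenz(0.30, **params), lorenz(0.60, **params)
b60 = l60 / 0.60   # Bonferroni curve at p = 0.60
```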

4 Order Statistics

This section presents expressions for the \(i\)th order statistic of the MOED distribution.

Theorem 4.1

If \(X \sim {\text{MOED}}\left( \alpha , \theta , \beta , \gamma , a \right)\), then the coefficients \(D^{\left( l \right)}_{i:n}\) appearing in the density of the \(i\)th order statistic of the MOED distribution are

$$ D^{\left( l \right)}_{i:n} = \frac{n!}{\left( i - 1 \right)!\left( n - i \right)!} \alpha^{n - i + 1} \mathop \sum \limits_{k = 0}^{\infty } \mathop \sum \limits_{j = 0}^{\infty } \left( - 1 \right)^{k + l} \frac{\Gamma \left( i \right) }{\Gamma \left( i - k \right)k!}\frac{\Gamma \left( n + 1 + j \right)}{\Gamma \left( n + 1 \right)j!}\frac{\Gamma \left( \gamma \left( n - i + j + k + 1 \right) \right)}{\Gamma \left( \gamma \left( n - i + j + k + 1 \right) - l \right)l!}\left( 1 - \alpha \right)^{j} $$
(48)

Proof

The density function of the \(i\)th order statistic \(X_{i:n}\) can be obtained using

$$ f_{{X\left( {i:n} \right)}} \left( x \right) = \frac{n!}{{\left( {i - 1} \right)!\left( {n - i} \right)!}}f_{X} \left( x \right)\left( {F_{X} \left( x \right)} \right)^{i - 1} \left( {1 - F_{X} \left( x \right)} \right)^{n - i} . $$
(49)

Substituting (11) and (12) into Eq. (49), we get

$$ f_{X\left( i:n \right)} \left( x \right) = \frac{n!}{\left( i - 1 \right)!\left( n - i \right)!}\left( \frac{\alpha \gamma a\theta \beta x^{ - \theta - 1} (1 + ax^{ - \theta } )^{ - \beta - 1} \left( 1 - (1 + ax^{ - \theta } )^{ - \beta } \right)^{\gamma - 1} }{\left( 1 - \left( 1 - \alpha \right)\left( 1 - (1 + ax^{ - \theta } )^{ - \beta } \right)^{\gamma } \right)^{2} }\right)\left( \frac{1 - \left( 1 - (1 + ax^{ - \theta } )^{ - \beta } \right)^{\gamma } }{1 - \left( 1 - \alpha \right)\left( 1 - (1 + ax^{ - \theta } )^{ - \beta } \right)^{\gamma } }\right)^{i - 1} $$
$$ \times \left( 1 - \frac{1 - \left( 1 - (1 + ax^{ - \theta } )^{ - \beta } \right)^{\gamma } }{1 - \left( 1 - \alpha \right)\left( 1 - (1 + ax^{ - \theta } )^{ - \beta } \right)^{\gamma } }\right)^{n - i} . $$

After simplification, we obtain the coefficients \(D^{\left( l \right)}_{i:n}\) given in (48).

Corollary 4.1

The \(r\)th moment of the \(i\)th order statistic of the MOED distribution is:

$$ E\left( X_{i:n}^{r} \right) = \gamma a^{\frac{r}{\theta }} \beta \mathop \sum \limits_{l = 0}^{\infty } D^{\left( l \right)}_{i:n} \frac{\Gamma \left( \beta \left( l + 1 \right) + \frac{r}{\theta } \right)\Gamma \left( 1 - \frac{r}{\theta } \right)}{\Gamma \left( \beta \left( l + 1 \right) + 1 \right)},\quad \text{for } r = 1,2,3,4. $$
(50)
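Order-statistic results of this kind can be checked by simulation. Since \(P(X_{1:n} > x) = (1-F(x))^n\), the median of the sample minimum is \(Q\left(1 - 0.5^{1/n}\right)\), which gives a simple benchmark. The sketch below is ours (function names, seed and parameter values are illustrative) and compares this closed form with the empirical median of simulated minima:

```python
import random

def moed_quantile(q, alpha, theta, beta, gamma, a):
    """Quantile function of the MOED distribution, Eq. (29)."""
    r = (1.0 - q) / (1.0 - q * (1.0 - alpha))
    inner = (1.0 - r ** (1.0 / gamma)) ** (-1.0 / beta) - 1.0
    return (inner / a) ** (-1.0 / theta)

params = dict(alpha=2.0, theta=3.0, beta=2.0, gamma=1.5, a=1.5)
n, reps = 5, 4000
rng = random.Random(7)
# simulate reps independent minima of samples of size n
mins = sorted(min(moed_quantile(rng.random(), **params) for _ in range(n))
              for _ in range(reps))
emp_median = mins[reps // 2]
# median of X_(1:n): solve (1 - F(x))^n = 1/2, i.e. F(x) = 1 - 0.5**(1/n)
theo_median = moed_quantile(1.0 - 0.5 ** (1.0 / n), **params)
```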

5 MLE’s of MOED Distribution Parameters

This section obtains the maximum likelihood estimates of the unknown parameters of the newly introduced Marshall Olkin Exponentiated Dagum distribution. Suppose a random sample of size n is drawn from \(X\sim MOED\left( a,\beta ,\gamma ,\alpha ,\theta \right)\); then the likelihood function can be expressed as

$$ L = \mathop \prod \limits_{i = 1}^{n} \left[ \frac{\alpha \gamma a\theta \beta x_{i}^{ - \theta - 1} (1 + ax_{i}^{ - \theta } )^{ - \beta - 1} \left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right)^{\gamma - 1} }{\left( 1 - \left( 1 - \alpha \right)\left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right)^{\gamma } \right)^{2} } \right]. $$

Taking logarithms on both sides, we have

$$ \ln L = n\ln \alpha + n\ln \gamma + n\ln a + n\ln \theta + n\ln \beta - \left( \theta + 1 \right)\mathop \sum \limits_{i = 1}^{n} \ln x_{i} - \left( \beta + 1 \right)\mathop \sum \limits_{i = 1}^{n} \ln \left( 1 + ax_{i}^{ - \theta } \right) + \left( \gamma - 1 \right)\mathop \sum \limits_{i = 1}^{n} \ln \left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right) - 2\mathop \sum \limits_{i = 1}^{n} \ln \left( 1 - \left( 1 - \alpha \right)\left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right)^{\gamma } \right) . $$
(51)

On differentiating Eq. (51) w.r.t unknown parameters, we have:

$$ \frac{\partial \ln L}{\partial \alpha } = \frac{n}{\alpha } - 2\mathop \sum \limits_{i = 1}^{n} \frac{\left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right)^{\gamma } }{1 - \left( 1 - \alpha \right)\left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right)^{\gamma } } , $$
(52)
$$ \frac{\partial \ln L}{\partial \beta } = \frac{n}{\beta } - \mathop \sum \limits_{i = 1}^{n} \ln \left( 1 + ax_{i}^{ - \theta } \right) + \left( \gamma - 1 \right)\mathop \sum \limits_{i = 1}^{n} \frac{(1 + ax_{i}^{ - \theta } )^{ - \beta } \ln \left( 1 + ax_{i}^{ - \theta } \right)}{1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } } + 2\left( 1 - \alpha \right)\gamma \mathop \sum \limits_{i = 1}^{n} \frac{\left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right)^{\gamma - 1} (1 + ax_{i}^{ - \theta } )^{ - \beta } \ln \left( 1 + ax_{i}^{ - \theta } \right)}{1 - \left( 1 - \alpha \right)\left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right)^{\gamma } } , $$
(53)
$$ \frac{\partial \ln L}{\partial \gamma } = \frac{n}{\gamma } + \mathop \sum \limits_{i = 1}^{n} \ln \left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right) + 2\left( 1 - \alpha \right)\mathop \sum \limits_{i = 1}^{n} \frac{\left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right)^{\gamma } \ln \left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right)}{1 - \left( 1 - \alpha \right)\left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right)^{\gamma } } , $$
(54)
$$ \frac{\partial \ln L}{\partial a} = \frac{n}{a} - \left( \beta + 1 \right)\mathop \sum \limits_{i = 1}^{n} \frac{x_{i}^{ - \theta } }{1 + ax_{i}^{ - \theta } } + \left( \gamma - 1 \right)\beta \mathop \sum \limits_{i = 1}^{n} \frac{(1 + ax_{i}^{ - \theta } )^{ - \beta - 1} x_{i}^{ - \theta } }{1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } } + 2\left( 1 - \alpha \right)\gamma \beta \mathop \sum \limits_{i = 1}^{n} \frac{\left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right)^{\gamma - 1} (1 + ax_{i}^{ - \theta } )^{ - \beta - 1} x_{i}^{ - \theta } }{1 - \left( 1 - \alpha \right)\left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right)^{\gamma } } , $$
(55)
$$ \frac{\partial \ln L}{\partial \theta } = \frac{n}{\theta } - \mathop \sum \limits_{i = 1}^{n} \ln x_{i} + a\left( \beta + 1 \right)\mathop \sum \limits_{i = 1}^{n} \frac{x_{i}^{ - \theta } \ln x_{i} }{1 + ax_{i}^{ - \theta } } - \left( \gamma - 1 \right)a\beta \mathop \sum \limits_{i = 1}^{n} \frac{(1 + ax_{i}^{ - \theta } )^{ - \beta - 1} x_{i}^{ - \theta } \ln x_{i} }{1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } } - 2\left( 1 - \alpha \right)\gamma \beta a\mathop \sum \limits_{i = 1}^{n} \frac{\left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right)^{\gamma - 1} (1 + ax_{i}^{ - \theta } )^{ - \beta - 1} x_{i}^{ - \theta } \ln x_{i} }{1 - \left( 1 - \alpha \right)\left( 1 - (1 + ax_{i}^{ - \theta } )^{ - \beta } \right)^{\gamma } } . $$
(56)

The nonlinear Eqs. (52)–(56) cannot be solved in closed form; the maximum likelihood estimates (MLEs) are obtained by solving them iteratively using the Newton–Raphson method.
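Before running Newton–Raphson, it is prudent to verify a score equation against a finite-difference derivative of Eq. (51). The sketch below is ours (function names, data values and parameter values are illustrative assumptions, not taken from the paper's data sets); it checks the derivative with respect to \(\alpha\) in this way:

```python
import math

def loglik(data, alpha, theta, beta, gamma, a):
    """Log-likelihood of the MOED distribution, Eq. (51)."""
    s = len(data) * math.log(alpha * gamma * a * theta * beta)
    for x in data:
        w = 1.0 + a * x ** (-theta)
        u = 1.0 - w ** (-beta)
        s += (-(theta + 1.0) * math.log(x) - (beta + 1.0) * math.log(w)
              + (gamma - 1.0) * math.log(u)
              - 2.0 * math.log(1.0 - (1.0 - alpha) * u ** gamma))
    return s

def score_alpha(data, alpha, theta, beta, gamma, a):
    """Partial derivative of ln L with respect to alpha, cf. Eq. (52)."""
    s = len(data) / alpha
    for x in data:
        u = 1.0 - (1.0 + a * x ** (-theta)) ** (-beta)
        s -= 2.0 * u ** gamma / (1.0 - (1.0 - alpha) * u ** gamma)
    return s

data = [0.8, 1.1, 1.5, 2.0, 2.7, 3.3]   # illustrative positive observations
p = dict(alpha=2.0, theta=3.0, beta=2.0, gamma=1.5, a=1.5)
h = 1e-6
fd = (loglik(data, **dict(p, alpha=p["alpha"] + h))
      - loglik(data, **dict(p, alpha=p["alpha"] - h))) / (2.0 * h)
sc = score_alpha(data, **p)
```

Agreement between `fd` and `sc` confirms the analytical derivative; the same finite-difference check can be repeated for the other four parameters before handing the score vector to a Newton–Raphson routine.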

6 Simulation Study

The effect of sample size on the maximum likelihood estimates (MLEs) of the parameters of the Marshall Olkin Exponentiated Dagum distribution is examined using the Monte Carlo approach. The performance of the MLEs of the MOED distribution parameters is assessed for different sample sizes (25, 50, 100, 200 and 500), with the procedure replicated 5000 times for each sample size. The following two parameter settings are considered for estimating the parameters and assessing their stability.

  • \(a = 1.5\), \(\beta = 1.5\), \(\gamma = 1.5\), \(\alpha = 1.5\), \(\theta = 1.5\)

  • \(a = 2\), \(\beta = 3\), \(\gamma = 4\), \(\alpha = 5\), \(\theta = 6\)

Table 1 reports the MLEs together with the standard deviation, bias and mean square error (MSE) of each parameter of the MOED distribution. As the sample size increases, the standard deviation, bias and MSE of the estimators decrease, indicating that the estimators are consistent.

Table 1 Results of MLE’s Mean, Standard deviation, Bias and MSE

7 Applications of MOED Distribution on Real Data

In this section, three real-life data sets are fitted to Marshall Olkin Exponentiated Dagum distribution to analyze its applicability and flexibility. We compare the Marshall Olkin Exponentiated Dagum (MOED) distribution with seven different models that are the Marshall Olkin Dagum (MOD) distribution, Exponentiated Dagum (ED) distribution, Dagum (D) distribution, Burr III distribution, Kumaraswamy Burr III (KBurr III) distribution, Kumaraswamy Log Logistic (KLL) distribution and Log Logistic (LL) distribution.

7.1 Application 1: Fracture Toughness of Alumina (Al2O3) Data

The first data set is taken from Nadarajah [26] and is based on the fracture toughness of Alumina (Al2O3) (in units of MPa m\(^{1/2}\)).

Table 2 lists the maximum likelihood estimates (MLEs) of the unknown parameters of the fitted distributions for the fracture toughness of Alumina (Al2O3) data. The log-likelihood of the subject distribution is the highest among the competing distributions. The MOED distribution also gives the smallest values of the AIC, CAIC, HQIC, Cramér–von Mises (\({\text{W}}^{*}\)), Anderson–Darling (\({\text{A}}^{*}\)) and Kolmogorov–Smirnov (KS) statistics for this data set among all seven competing distributions. The histogram of the fracture toughness of Alumina (Al2O3) data and the fitted density functions are displayed in Fig. 2, which shows that the Marshall Olkin Exponentiated Dagum (MOED) model provides a better fit than the other distributions for this data set.

Table 2 MLE’s and goodness of fit statistics for the fracture toughness of Alumina (Al2O3) data
Fig. 2
figure 2

Fitted pdfs for the fracture toughness of Alumina (Al2O3) data

7.2 Application 2: Yarn Sample at 2.3% Strain Level Data

The second data set comprises a 100-cm yarn sample tested at a 2.3% strain level. The tensile fatigue characteristics of polyester/viscose yarn are studied using this data set to address the warp breakage issue. Queensberry et al. [27] and other researchers used this data set to assess the reliability of various probability models.

From Table 3, it can be observed that the log-likelihood of the subject distribution is the highest among the competing distributions. The MOED distribution also gives the smallest values of the AIC, CAIC, HQIC, Cramér–von Mises (\({\text{W}}^{*}\)), Anderson–Darling (\({\text{A}}^{*}\)) and Kolmogorov–Smirnov (KS) statistics for this data set among all seven competing distributions. The histogram of the yarn sample at 2.3% strain level data and the fitted density functions are displayed in Fig. 3, which shows that the Marshall Olkin Exponentiated Dagum (MOED) model provides a better fit than the other distributions for this data set.

Table 3 MLE’s and goodness of fit statistics for the yarn sample at 2.3% strain level data
Fig. 3
figure 3

Fitted pdfs for the yarn sample at 2.3% strain level data

7.3 Application 3: Breaking Stress of Carbon Fibres (in GPa) Data

The third example is a data set from Cordeiro et al. [28] consisting of 66 observations on the breaking stress of carbon fibres (in GPa) [29].

From Table 4, it can be observed that the log-likelihood of the subject distribution is the highest among the competing distributions. The MOED distribution also gives the smallest values of the AIC, CAIC, HQIC, Cramér–von Mises (\({\text{W}}^{*}\)), Anderson–Darling (\({\text{A}}^{*}\)) and Kolmogorov–Smirnov (KS) statistics for this data set among all seven competing distributions. The histogram of the breaking stress of carbon fibres data and the fitted density functions are displayed in Fig. 4, which shows that the Marshall Olkin Exponentiated Dagum (MOED) model provides a better fit than the other distributions for this data set.

Table 4 MLE’s and goodness of fit statistics for the breaking stress of carbon fibres (in GPa) data
Fig. 4
figure 4

Fitted pdfs for the breaking stress of carbon fibres data

8 Conclusion

A five-parameter Marshall Olkin Exponentiated Dagum distribution is introduced using the Marshall Olkin family of distributions. The newly proposed distribution is studied thoroughly, and various properties are derived. Parameters are estimated through the maximum likelihood method and computed numerically via the Newton–Raphson iterative method for various sample sizes. Lastly, the proposed distribution is applied to three real data sets and compared with allied distributions from the literature. The results show that the newly proposed MOED distribution performs better than the compared distributions.