1 Introduction

In survival and reliability analysis there are situations in which two lifetimes are observed for the same patient or device, which is known as bivariate survival data. Such data arise in a wide range of important problems in engineering, sports and medicine, so suitable bivariate distributions are needed to model them. The Marshall–Olkin (MO) construction is one of the most widely used approaches for modelling the failure of paired data (i.e., the lifetime of the first component may be smaller than, greater than, or equal to that of the second component). Various bivariate distributions have been obtained from the Marshall–Olkin construction. The bivariate exponential distribution of Marshall–Olkin type was introduced by Marshall and Olkin (1967). For a detailed survey of bivariate models built with the Marshall–Olkin method, see Kotz et al. (2000). Recently, many articles have been devoted to bivariate distributions of Marshall–Olkin type. Bai et al. (2019) proposed a bivariate Weibull distribution. Bakouch et al. (2019) suggested a bivariate Kumaraswamy-Exponential distribution. Shoaee and Khorram (2020) introduced a bivariate Pareto distribution. Ortega (2010) provided examples of bivariate distributions in survival analysis and reliability that illustrate the application of bivariate ageing models, including the Marshall–Olkin shock model.

The Extended Chen (EC) distribution was considered by Bhatti et al. (2021). The hazard rate function of the EC distribution can take various shapes (increasing, decreasing, bathtub, modified bathtub, decreasing–increasing and increasing–decreasing–increasing), which makes the EC distribution well suited to modelling survival, life-testing and reliability data. The purpose of this paper is therefore to present a Bivariate Extended Chen (BEC) distribution, whose marginals are EC distributions, based on the idea of Marshall and Olkin (1967). The main aim of the BEC distribution is to provide a powerful and flexible model for the various shapes of the hazard function encountered in bivariate data and for analysing bivariate data sets in a variety of practical situations.

If X is a random variable following the Extended Chen distribution, then its probability density function (pdf), cumulative distribution function (cdf), survival function and hazard rate function are, respectively,

$$f\left( {x;{ }\alpha ,{ }\beta ,{ }\lambda } \right) = \alpha \beta \lambda x^{\beta - 1} e^{{x^{\beta } }} \left[ {1 + \lambda \left( {e^{{x^{\beta } }} - 1} \right)} \right]^{ - \alpha - 1} ,$$
(1)
$$F\left( {x; \alpha , \beta , \lambda } \right) = 1 - \left[ {1 + \lambda \left( {e^{{x^{\beta } }} - 1} \right)} \right]^{ - \alpha } ,$$
(2)
$$S\left( {x; \alpha , \beta , \lambda } \right) = \left[ {1 + \lambda \left( {e^{{x^{\beta } }} - 1} \right)} \right]^{ - \alpha } ,$$
(3)
$$h\left( {x; \alpha , \beta , \lambda } \right) = \alpha \beta \lambda x^{\beta - 1} e^{{x^{\beta } }} \left[ {1 + \lambda \left( {e^{{x^{\beta } }} - 1} \right)} \right]^{ - 1} .$$
(4)

where \(x > 0\), \(\alpha , \beta > 0\) are shape parameters and \(\lambda > 0\) is a scale parameter.
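For readers who wish to work with these quantities numerically, a minimal Python sketch of Eqs. (1)–(4) is given below; the function names are our own illustrative choices and the functions are vectorized over NumPy arrays.

```python
# A minimal, illustrative implementation of Eqs. (1)-(4); the function names
# are ours and the functions accept NumPy arrays.
import numpy as np

def ec_pdf(x, alpha, beta, lam):
    """EC probability density function, Eq. (1)."""
    t = np.exp(x**beta)
    return alpha * beta * lam * x**(beta - 1) * t * (1 + lam * (t - 1)) ** (-alpha - 1)

def ec_cdf(x, alpha, beta, lam):
    """EC cumulative distribution function, Eq. (2)."""
    return 1 - (1 + lam * (np.exp(x**beta) - 1)) ** (-alpha)

def ec_survival(x, alpha, beta, lam):
    """EC survival function, Eq. (3)."""
    return (1 + lam * (np.exp(x**beta) - 1)) ** (-alpha)

def ec_hazard(x, alpha, beta, lam):
    """EC hazard rate function, Eq. (4) = pdf / survival."""
    t = np.exp(x**beta)
    return alpha * beta * lam * x**(beta - 1) * t / (1 + lam * (t - 1))
```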

The proposed bivariate Extended Chen (BEC) distribution is constructed from three independent EC random variables using a minimization process. The Marshall–Olkin BEC model can be interpreted through the following models.

1.1 Competing risks model

Suppose a system has two components, labelled 1 and 2, and let \({X}_{i}\) denote the survival time of component \(i\), \(i=1,2\). The system is exposed to three independent sources of failure. Component 1 can fail only because of source 1 and component 2 can fail only because of source 2, while source 3 can cause both components to fail at the same time. If \({U}_{1}\), \({U}_{2}\) and \({U}_{3}\) are the lifetimes associated with these sources and follow the EC distribution, then \(({X}_{1}, {X}_{2})\) has the BEC distribution.

1.2 Shock model

Consider three independent sources of shocks, labelled 1, 2 and 3, which affect a system consisting of two components, component 1 and component 2. Shock 1 destroys component 1 and shock 2 destroys component 2, while shock 3 destroys both components simultaneously. Let \({U}_{i}\) denote the inter-arrival time of shock \(i\), assumed to follow the EC distribution. If \({X}_{1}\) and \({X}_{2}\) are the survival times of the components, then \(({X}_{1}, {X}_{2})\) follows the BEC distribution.

1.3 Stress model

Assume a system has two components, each subject to its own independent stress, say \({U}_{1}\) and \({U}_{2}\). There is also an overall stress \({U}_{3}\) that is transmitted to both components equally, regardless of their individual stresses. Therefore, \({X}_{1}=\max({U}_{1},{U}_{3})\) and \({X}_{2}=\max({U}_{2},{U}_{3})\) are the observed stresses on the two components.

1.4 Maintenance model

Consider a system with two components, each of which receives its own maintenance as well as an overall maintenance. Assume that the lifetime of component \(i\) is increased by an amount \({U}_{i}\) due to its individual maintenance and by an amount \({U}_{3}\) due to the overall maintenance. Thus, \({X}_{1}=\max({U}_{1},{U}_{3})\) and \({X}_{2}=\max({U}_{2},{U}_{3})\) are the increased lifetimes of the two components.
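The minimum-based construction underlying the competing risks and shock interpretations can be simulated directly from three independent EC variables. The following Python sketch is illustrative only (the function names and parameter values are ours): it draws EC variates by inverting Eq. (2) and forms \(X_1=\min(U_1,U_3)\) and \(X_2=\min(U_2,U_3)\).

```python
# Illustrative simulation of the min-construction: X1 = min(U1, U3),
# X2 = min(U2, U3), with independent U_i ~ EC(alpha_i, beta, lambda).
import numpy as np

def ec_rvs(alpha, beta, lam, size, rng):
    """Draw EC(alpha, beta, lam) variates by inverting Eq. (2)."""
    u = rng.uniform(size=size)
    return np.log(1 + ((1 - u) ** (-1 / alpha) - 1) / lam) ** (1 / beta)

def bec_rvs(a1, a2, a3, beta, lam, size, rng):
    u1 = ec_rvs(a1, beta, lam, size, rng)
    u2 = ec_rvs(a2, beta, lam, size, rng)
    u3 = ec_rvs(a3, beta, lam, size, rng)
    return np.minimum(u1, u3), np.minimum(u2, u3)   # (X1, X2)

rng = np.random.default_rng(0)
x1, x2 = bec_rvs(0.8, 1.2, 0.5, 0.7, 1.5, size=100_000, rng=rng)
# The diagonal carries positive probability mass, P(X1 = X2) = a3/(a1+a2+a3) = 0.2:
print(np.mean(x1 == x2))
```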

The paper is structured as follows: Sect. 2 contains the formulation of the new bivariate model as well as the derivation of the joint probability density function, joint cumulative distribution function, joint survival function, and conditional probability density functions of the BEC distribution. Section 3 discusses reliability measures, including the bivariate hazard rate function and the stress-strength reliability. The maximum likelihood estimators of the unknown parameters are derived in Sect. 4. In Sect. 5, an application of the proposed distribution is presented along with comparisons to several existing bivariate distributions using real data sets. Section 6 concludes the paper.

2 Bivariate extended Chen distribution

The formulation of the Bivariate Extended Chen (BEC) distribution is introduced in this section. The joint survival function of the new model is derived, and the joint cumulative distribution function, joint probability density function and conditional probability density functions are obtained.

2.1 The joint survival function

Assume \({U}_{i}\sim EC\left({\alpha }_{i}, \beta ,\lambda \right)\), \(i=1,2,3\), where \({U}_{1}\), \({U}_{2}\) and \({U}_{3}\) are independent random variables. Define the random lifetime of component 1 as \({X}_{1}= \min ({U}_{1},{U}_{3})\) and that of component 2 as \({X}_{2}= \min ({U}_{2},{U}_{3})\). The vector \(({X}_{1}, {X}_{2})\) then follows the BEC distribution with parameters \(({\alpha }_{1}, {\alpha }_{2}, {\alpha }_{3}, \beta , \lambda )\). The following theorem gives the joint survival function of \({X}_{1}\) and \({X}_{2}\).

Theorem 1.

If \(({X}_{1}, {X}_{2})\sim BEC({\alpha }_{1}, {\alpha }_{2}, {\alpha }_{3}, \beta , \lambda )\), then the joint survival function of \({X}_{1}\) and \({X}_{2}\) is given by

$${S}_{{X}_{1}, {X}_{2}}\left({x}_{1}, {x}_{2}\right)=\left\{\begin{array}{c}{S}_{EC}\left({x}_{1};{\alpha }_{1}, \beta ,\lambda \right) {S}_{EC}\left({x}_{2};{\alpha }_{2}+{\alpha }_{3}, \beta ,\lambda \right),\ \ \ { x}_{2}>{x}_{1}\\{S}_{EC}\left({x}_{1};{\alpha }_{1}+{\alpha }_{3}, \beta ,\lambda \right) {S}_{EC}\left({x}_{2};{\alpha }_{2}, \beta ,\lambda \right),\ \ \ {x}_{1}>{x}_{2}\\{S}_{EC}\left(x;{\alpha }_{1}+{{\alpha }_{2}+\alpha }_{3}, \beta ,\lambda \right),\ \ \ \ \ \ {x}_{1}={x}_{2}=x\end{array}\right.$$
(5)


Proof

Indeed, the joint survival function of \(X_{1}\) and \(X_{2 }\) is defined as follows,

$$S_{{X_{1} , X_{2} }} \left( {x_{1} , x_{2} } \right) = P ( X_{1} > x_{1} ,X_{2} > x_{2} )$$

Thus,

$$\begin{aligned} & S_{{X_{1} , X_{2} }} \left( {x_{1} , x_{2} } \right) = P ( \left\{ {min \left( {U_{1} ,U_{3} } \right) > x_{1} } \right\} ,\{ min\left( {U_{2} ,U_{3} } \right) > x_{2 } \} ) \\ & \quad = P \left( { \left\{ {U_{1} > x_{1} , U_{3} > x_{1} } \right\} , \left\{ { U_{2} > x_{2} ,U_{3} > x_{2} } \right\} } \right) \\ & \quad = P ( U_{1} > x_{1 } ,U_{2} > x_{2 } ,U_{3} > max \left( {x_{1} , x_{2} } \right) ) \\ & \quad = P ( U_{1} > x_{1 } ,U_{2} > x_{2 } ,U_{3} > z) \\ \end{aligned}$$

where \(z = \max \left( {x_{1} , x_{2} } \right)\)

As the random variables \(U_{1} , U_{2} \ and \ U_{3}\) are independent, we may directly derive

$$\begin{aligned} & S_{{X_{1} , X_{2} }} \left( {x_{1} , x_{2} } \right) = P\left( {U_{1} > x_{1 } } \right) P\left( {U_{2} > x_{2 } } \right) P(U_{3} > z) \\ & \quad \quad = S_{EC} \left( {x_{1} ;\alpha_{1} , \beta ,\lambda } \right) S_{EC} \left( {x_{2} ;\alpha_{2} , \beta ,\lambda } \right) S_{EC} \left( {z;\alpha_{3} , \beta ,\lambda } \right) \\ & \quad \quad = \left[ {1 + \lambda \left( {e^{{x_{1}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{1} }} \left[ {1 + \lambda \left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{2} }} \left[ {1 + \lambda \left( {e^{{z^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{3} }} \\ \end{aligned}$$

If \(x_{2} > x_{1}\), then \(z = \max \left( {x_{1} , x_{2} } \right) = x_{2}\) and we get

$$\begin{aligned} & S_{{X_{1} , X_{2} }} \left( {x_{1} , x_{2} } \right) = \left[ {1 + \lambda \left( {e^{{x_{1}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{1} }} \left[ {1 + \lambda \left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{2} }} \left[ {1 + \lambda \left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{3} }} \\ & \quad \quad = \left[ {1 + \lambda \left( {e^{{x_{1}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{1} }} \left[ {1 + \lambda \left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \left( {\alpha_{2} + \alpha_{3} } \right)}} \\ & \quad \quad = S_{EC} \left( {x_{1} ;\alpha_{1} , \beta ,\lambda } \right) S_{EC} \left( {x_{2} ;\alpha_{2} + \alpha_{3} , \beta ,\lambda } \right) \\ \end{aligned}$$

Similarly, if \(x_{1} > x_{2}\), then \(z = \max \left( {x_{1} , x_{2} } \right) = x_{1}\). Thus,

$$\begin{aligned} & S_{{X_{1} , X_{2} }} \left( {x_{1} , x_{2} } \right) = \left[ {1 + \lambda \left( {e^{{x_{1}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{1} }} \left[ {1 + \lambda \left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{2} }} \left[ {1 + \lambda \left( {e^{{x_{1}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{3} }} \\ & \quad \quad = \left[ {1 + \lambda \left( {e^{{x_{1}^{\beta } }} - 1} \right)} \right]^{{ - (\alpha_{1} + \alpha_{3} )}} \left[ {1 + \lambda \left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{2} }} \\ & \quad \quad = S_{EC} \left( {x_{1} ;\alpha_{1} + \alpha_{3} , \beta ,\lambda } \right) S_{EC} \left( {x_{2} ;\alpha_{2} , \beta ,\lambda } \right) \\ \end{aligned}$$

Finally, if \(x_{1} = x_{2} = x\), then \(z = \max \left( {x_{1} , x_{2} } \right) = x\). Thus,

$$\begin{aligned} & S_{{X_{1} , X_{2} }} \left( {x_{1} , x_{2} } \right) = \left[ {1 + \lambda \left( {e^{{x^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{1} }} \left[ {1 + \lambda \left( {e^{{x^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{2} }} \left[ {1 + \lambda \left( {e^{{x^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{3} }} \\ & \quad \quad = \left[ {1 + \lambda \left( {e^{{x^{\beta } }} - 1} \right)} \right]^{{ - (\alpha_{1} + \alpha_{2} + \alpha_{3} )}} = S_{EC} \left( {x;\alpha_{1} + \alpha_{2} + \alpha_{3} , \beta ,\lambda } \right) \\ \end{aligned}$$
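As an informal check of Theorem 1, the empirical joint survival function of simulated pairs can be compared with Eq. (5). The sketch below is illustrative only; the helper names, parameter values and evaluation point are arbitrary choices.

```python
# Illustrative Monte Carlo check of Eq. (5): the empirical joint survival
# function of simulated (X1, X2) pairs is compared with the first branch
# (the evaluation point here satisfies x2 > x1).
import numpy as np

def ec_survival(x, alpha, beta, lam):
    return (1 + lam * (np.exp(x**beta) - 1)) ** (-alpha)

def ec_rvs(alpha, beta, lam, size, rng):
    u = rng.uniform(size=size)
    return np.log(1 + ((1 - u) ** (-1 / alpha) - 1) / lam) ** (1 / beta)

a1, a2, a3, beta, lam = 0.8, 1.2, 0.5, 0.7, 1.5
rng = np.random.default_rng(42)
n = 200_000
u1, u2, u3 = (ec_rvs(a, beta, lam, n, rng) for a in (a1, a2, a3))
x1, x2 = np.minimum(u1, u3), np.minimum(u2, u3)

t1, t2 = 0.6, 0.9                                   # a point with t2 > t1
empirical = np.mean((x1 > t1) & (x2 > t2))
closed_form = ec_survival(t1, a1, beta, lam) * ec_survival(t2, a2 + a3, beta, lam)
print(empirical, closed_form)                       # the two values should be close
```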

Proposition 1

If \(\left( {X_{1} , X_{2} } \right) \sim BEC\left( {\alpha_{1} , \alpha_{2} , \alpha_{3} , \beta , \lambda } \right)\), then:

(i) the marginal distributions of \(X_{1}\) and \(X_{2}\) are \(EC\left( {\alpha_{1} + \alpha_{3} , \beta , \lambda } \right)\) and \(EC\left( {\alpha_{2} + \alpha_{3} , \beta , \lambda } \right)\), respectively;

(ii) \(min(X_{1} ,X_{2} ) \sim EC\left( {\alpha_{1} + \alpha_{2} + \alpha_{3} , \beta , \lambda } \right)\).

Proof

(i) If \(x_{2} > x_{1}\), then \({\text{z}} = \max \left( {x_{1} ,{ }x_{2} } \right) = x_{2}\). Thus, from Eq. (5) we get

$$\begin{aligned} & \mathop {\lim }\limits_{{x_{1} \to 0}} S_{{X_{1} , X_{2} }} \left( {x_{1} , x_{2} } \right) = \left[ {1 + \lambda \left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{2} }} \left[ {1 + \lambda \left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{3} }} \\ & \quad \quad = \left[ {1 + \lambda \left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \left( {\alpha_{2} + \alpha_{3} } \right)}} = S_{EC} \left( {x_{2} ;\alpha_{2} + \alpha_{3} , \beta ,\lambda } \right) \\ \end{aligned}$$

Analogously, if \(x_{1} > x_{2}\), then \({\text{z}} = \max \left( {x_{1} ,{ }x_{2} } \right) = x_{1}\). Thus,

$$\mathop {\lim }\limits_{{x_{2} \to 0}} S_{{X_{1} , X_{2} }} \left( {x_{1} , x_{2} } \right) = \left[ {1 + \lambda \left( {e^{{x_{1}^{\beta } }} - 1} \right)} \right]^{{ - \left( {\alpha_{1} + \alpha_{3} } \right)}} = S_{EC} \left( {x_{1} ;\alpha_{1} + \alpha_{3} , \beta ,\lambda } \right)$$
(ii) Based on the fact that

    $$\begin{aligned} & P(min(X_{1} ,X_{2} ) > x) = P(X_{1} > x,X_{2} > x) = P(U_{1} > x,U_{2} > x,U_{3} > x) \\ & \quad \quad = P\left( {U_{1} > x} \right){ }P\left( {U_{2} > x} \right){ }P(U_{3} > x) \\ & \quad \quad = \left[ {1 + \lambda \left( {e^{{x^{\beta } }} - 1} \right)} \right]^{{ - \left( {\alpha_{1} + \alpha_{2} + \alpha_{3} } \right)}} { } \\ \end{aligned}$$

Thus, result (ii) holds.

2.2 The joint cumulative distribution function

The following Theorem provides the joint cdf of the new Bivariate Extended Chen distribution.

Theorem 2

If \(\left({X}_{1}, {X}_{2}\right) \sim BEC\left({\alpha }_{1}, {\alpha }_{2}, {\alpha }_{3},\beta ,\lambda \right),\) then the joint cumulative distribution function of \({X}_{1}\) and \({X}_{2}\) has the following form.

$${F}_{{X}_{1}, {X}_{2}}\left({x}_{1}, {x}_{2}\right)=\left\{\begin{array}{c}{F}_{EC}\left({x}_{1};{\alpha }_{1}+{\alpha }_{3},\beta ,\lambda \right)-{F}_{EC}\left({x}_{1};{\alpha }_{1},\beta ,\lambda \right)\left[1-{F}_{EC}\left({x}_{2};{\alpha }_{2}+{\alpha }_{3},\beta ,\lambda \right)\right],\ \ \ {x}_{2}>{x}_{1}\\ {F}_{EC}\left({x}_{2};{\alpha }_{2}+{\alpha }_{3},\beta ,\lambda \right)-{F}_{EC}\left({x}_{2};{\alpha }_{2},\beta ,\lambda \right)\left[1-{F}_{EC}\left({x}_{1};{\alpha }_{1}+{\alpha }_{3},\beta ,\lambda \right)\right],\ \ \ {x}_{1}>{x}_{2}\\ {F}_{EC}\left(x;{\alpha }_{1}+{\alpha }_{3},\beta ,\lambda \right)+{F}_{EC}\left(x;{\alpha }_{2}+{\alpha }_{3},\beta ,\lambda \right)-{F}_{EC}\left(x;{\alpha }_{1}+{\alpha }_{2}+{\alpha }_{3},\beta ,\lambda \right),\ \ \ {x}_{1}={x}_{2}=x\end{array}\right.$$
(6)

Proof

In the case \(x_{2} > x_{1}\), from Theorem 1 and Proposition 1 we have

$$\begin{aligned} & F_{{X_{1} , X_{2} }} \left( {x_{1} , x_{2} } \right) = P\left( {X_{1} > x_{1} , X_{2} > x_{2} } \right) + P\left( {X_{1} < x_{1} } \right) + P\left( {X_{2} < x_{2} } \right) - 1 \\ & \quad \quad = \left[ {1 - F_{EC} \left( {x_{1} ;\alpha_{1} , \beta ,\lambda } \right)} \right]\left[ {1 - F_{EC} \left( {x_{2} ;\alpha_{2} + \alpha_{3} , \beta ,\lambda } \right)} \right] + F_{EC} \left( {x_{1} ;\alpha_{1 } + \alpha_{3} , \beta ,\lambda } \right) \\ & \quad \quad + F_{EC} \left( {x_{2} ;\alpha_{2} + \alpha_{3} , \beta ,\lambda } \right) - 1 \\ & \quad \quad = F_{EC} \left( {x_{1} ;\alpha_{1} + \alpha_{3} ,{ }\beta ,\lambda } \right) - F_{EC} \left( {x_{1} ;\alpha_{1} ,{ }\beta ,\lambda } \right)\left[ {1 - F_{EC} \left( {x_{2} ;\alpha_{2} + \alpha_{3} ,{ }\beta ,\lambda } \right)} \right] \\ \end{aligned}$$

The case \(x_{1} > x_{2}\) follows analogously, and the case \(x_{1} = x_{2} = x\) follows by evaluating the same identity at \(x_{1} = x_{2} = x\).
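The diagonal branch of Eq. (6) can likewise be checked by Monte Carlo: the empirical probability \(P(X_1\le x, X_2\le x)\) should match the closed form. The following sketch is illustrative, with arbitrary names and parameter values.

```python
# Illustrative Monte Carlo check of the diagonal branch of Eq. (6).
import numpy as np

def ec_cdf(x, alpha, beta, lam):
    return 1 - (1 + lam * (np.exp(x**beta) - 1)) ** (-alpha)

def ec_rvs(alpha, beta, lam, size, rng):
    u = rng.uniform(size=size)
    return np.log(1 + ((1 - u) ** (-1 / alpha) - 1) / lam) ** (1 / beta)

a1, a2, a3, beta, lam = 0.8, 1.2, 0.5, 0.7, 1.5
rng = np.random.default_rng(7)
n = 200_000
u1, u2, u3 = (ec_rvs(a, beta, lam, n, rng) for a in (a1, a2, a3))
x1, x2 = np.minimum(u1, u3), np.minimum(u2, u3)

x = 0.8
empirical = np.mean((x1 <= x) & (x2 <= x))
closed_form = (ec_cdf(x, a1 + a3, beta, lam) + ec_cdf(x, a2 + a3, beta, lam)
               - ec_cdf(x, a1 + a2 + a3, beta, lam))
print(empirical, closed_form)                       # the two values should be close
```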

2.3 The joint probability density function

The joint pdf of the new Bivariate Extended Chen distribution is given by the following theorem.

Theorem 3

If \(\left( {X_{1} , X_{2} } \right) \sim BEC\left( {\alpha_{1} , \alpha_{2} , \alpha_{3} , \beta , \lambda } \right)\), then the joint probability density function of \(X_{1}\) and \(X_{2}\) is given by

$$f_{{X_{1} , X_{2} }} \left( {x_{1} , x_{2} } \right) = \left\{ {\begin{array}{*{20}c} {f_{1} \left( {x_{1} ,x_{2} } \right),\ \ \ { }x_{2} > x_{1} } \\ {f_{2} \left( {x_{1} ,x_{2} } \right),\ \ \ { }x_{1} > x_{2} } \\ \ \ {f_{3} \left( x \right),\ \ \ { }x_{1} = x_{2} = x} \\ \end{array} } \right.$$
(7)

where

$$f_{1} \left( {x_{1} ,x_{2} } \right) = { }\alpha_{1} \left( {\alpha_{2} + \alpha_{3} } \right){ }\beta^{2} { }\lambda^{2} { }x_{1}^{\beta - 1} x_{2}^{\beta - 1} { }e^{{x_{1}^{\beta } + x_{2}^{\beta } }} { }\left[ {1 + \lambda { }\left( {e^{{x_{1}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{1} - 1}} { } \times \left[ {1 + \lambda { }\left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \left( {\alpha_{2} + \alpha_{3} } \right) - 1}}$$
$$f_{2} \left( {x_{1} ,x_{2} } \right) = \alpha_{2} \left( {\alpha_{1} + \alpha_{3} } \right){ }\beta^{2} { }\lambda^{2} { }x_{1}^{\beta - 1} x_{2}^{\beta - 1} { }e^{{x_{1}^{\beta } + x_{2}^{\beta } }} { }\left[ {1 + \lambda { }\left( {e^{{x_{1}^{\beta } }} - 1} \right)} \right]^{{ - \left( {\alpha_{1} + \alpha_{3} } \right) - 1}} { } \times \left[ {1 + \lambda { }\left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{2} - 1}}$$
$$f_{3} \left( x \right) = \alpha_{3} { }\beta { }\lambda { }x^{\beta - 1} { }e^{{x^{\beta } }} { }\left[ {1 + { }\lambda \left( {e^{{x^{\beta } }} - 1} \right)} \right]^{{ - \left( {\alpha_{1} + { }\alpha_{2} + \alpha_{3} } \right) - 1}} { }$$

Proof

By taking the mixed second partial derivative \(\frac{{\partial^{2} F_{{X_{1} , X_{2} }} \left( {x_{1} , x_{2} } \right)}}{{\partial x_{1} \partial x_{2} }}\), we obtain \(f_{1} \left( {x_{1} ,x_{2} } \right)\) for \(x_{2} > x_{1}\) and \(f_{2} \left( {x_{1} ,x_{2} } \right)\) for \(x_{1} > x_{2}\). Then, \(f_{3} \left( x \right)\) is obtained from the identity in Eq. (8).

$$\mathop \smallint \limits_{0}^{\infty } \mathop \smallint \limits_{0}^{{x_{2} }} f_{1} \left( {x_{1} ,x_{2} } \right)dx_{1} dx_{2} + \mathop \smallint \limits_{0}^{\infty } \mathop \smallint \limits_{0}^{{x_{1} }} f_{2} \left( {x_{1} ,x_{2} } \right)dx_{2} dx_{1} + \mathop \smallint \limits_{0}^{\infty } f_{3} \left( x \right)dx = 1$$
(8)

Let \(I_{1} = \mathop \smallint \limits_{0}^{\infty } \mathop \smallint \limits_{0}^{{x_{2} }} f_{1} \left( {x_{1} ,x_{2} } \right)dx_{1} dx_{2}\) and \(I_{2} = \mathop \smallint \limits_{0}^{\infty } \mathop \smallint \limits_{0}^{{x_{1} }} f_{2} \left( {x_{1} ,x_{2} } \right)dx_{2} dx_{1}\). Then,

$$\begin{aligned} & I_{1} = \mathop \smallint \limits_{0}^{\infty } \mathop \smallint \limits_{0}^{{x_{2} }} \alpha_{1} \left( {\alpha_{2} + \alpha_{3} } \right){ }\beta^{2} { }\lambda^{2} { }x_{1}^{\beta - 1} x_{2}^{\beta - 1} { }e^{{x_{1}^{\beta } + x_{2}^{\beta } }} { }\left[ {1 + \lambda \left( {e^{{x_{1}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{1} - 1}} \left[ {1 + \lambda \left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \left( {\alpha_{2} + \alpha_{3} } \right) - 1}} { }dx_{1} dx_{2} { } \\ & \quad \quad = \mathop \smallint \limits_{0}^{\infty } \left( {\alpha_{2} + \alpha_{3} } \right){ }\beta { }\lambda { }x_{2}^{\beta - 1} e^{{x_{2}^{\beta } }} { }\left[ {\left[ {1 + \lambda \left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \left( {\alpha_{2} + \alpha_{3} } \right) - 1}} - { }\left[ {1 + \lambda \left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \left( {\alpha_{1} + \alpha_{2} + \alpha_{3} } \right) - 1}} } \right]{ }dx_{2} \\ & \quad \quad = \frac{{\alpha_{1} }}{{\left( {\alpha_{1} + \alpha_{2} + \alpha_{3} } \right)}} \\ \end{aligned}$$
(9)

Similarly,

$$I_{2} = \mathop \smallint \limits_{0}^{\infty } \mathop \smallint \limits_{0}^{{x_{1} }} \alpha_{2} \left( {\alpha_{1} + \alpha_{3} } \right){ }\beta^{2} { }\lambda^{2} { }x_{1}^{\beta - 1} x_{2}^{\beta - 1} { }e^{{x_{1}^{\beta } + x_{2}^{\beta } }} { }\left[ {1 + \lambda \left( {e^{{x_{1}^{\beta } }} - 1} \right)} \right]^{{ - \left( {\alpha_{1} + \alpha_{3} } \right) - 1}} { }\left[ {1 + \lambda \left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{2} - 1}} dx_{2} dx_{1} { } = { }\frac{{\alpha_{2} }}{{\left( {\alpha_{1} + \alpha_{2} + \alpha_{3} } \right)}}$$
(10)

Equations (9) and (10) are substituted into Eq. (8) to produce

$$\frac{{\alpha_{1} }}{{\left( {\alpha_{1} + \alpha_{2} + \alpha_{3} } \right)}} + \frac{{\alpha_{2} }}{{\left( {\alpha_{1} + \alpha_{2} + \alpha_{3} } \right)}} + \mathop \smallint \limits_{0}^{\infty } f_{3} \left( x \right)dx = 1$$

As a result, we obtain \(\mathop \smallint \limits_{0}^{\infty } f_{3} \left( x \right)dx = \frac{{\alpha_{3} }}{{\left( {\alpha_{1} + \alpha_{2} + \alpha_{3} } \right)}}\).

Therefore, \(f_{3} \left( x \right) = \frac{{\alpha_{3} }}{{\alpha_{1} + \alpha_{2} + \alpha_{3} }}{ }f_{EC} \left( {x;\alpha_{1} + \alpha_{2} + \alpha_{3} ,{ }\beta ,\lambda } \right)\) which completes the proof.
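Equation (8) can also be verified numerically by integrating the three pieces of Eq. (7); note that \(f_{1}(x_1,x_2)=f_{EC}(x_1;\alpha_1)\,f_{EC}(x_2;\alpha_2+\alpha_3)\) and \(f_{2}(x_1,x_2)=f_{EC}(x_1;\alpha_1+\alpha_3)\,f_{EC}(x_2;\alpha_2)\). The sketch below is illustrative and uses SciPy quadrature with arbitrary parameter values.

```python
# Illustrative numerical verification of Eq. (8): the absolutely continuous
# pieces f1 and f2 integrate to alpha1/a and alpha2/a, and the singular piece
# f3 to alpha3/a, with a = alpha1 + alpha2 + alpha3.
import numpy as np
from scipy.integrate import quad, dblquad

a1, a2, a3, beta, lam = 0.8, 1.2, 0.5, 1.3, 1.5
a = a1 + a2 + a3

def ec_pdf(x, alpha):
    t = np.exp(x**beta)
    return alpha * beta * lam * x**(beta - 1) * t * (1 + lam * (t - 1)) ** (-alpha - 1)

f1 = lambda x1, x2: ec_pdf(x1, a1) * ec_pdf(x2, a2 + a3)        # on x2 > x1
f2 = lambda x1, x2: ec_pdf(x1, a1 + a3) * ec_pdf(x2, a2)        # on x1 > x2
f3 = lambda x: (a3 / a) * ec_pdf(x, a)                          # on x1 = x2 = x

# dblquad integrates func(y, x) with y as the inner variable.
I1, _ = dblquad(lambda x1, x2: f1(x1, x2), 0, np.inf, lambda x2: 0.0, lambda x2: x2)
I2, _ = dblquad(lambda x2, x1: f2(x1, x2), 0, np.inf, lambda x1: 0.0, lambda x1: x1)
I3, _ = quad(f3, 0, np.inf)

print(I1, a1 / a)          # both approximately 0.32
print(I2, a2 / a)          # both approximately 0.48
print(I1 + I2 + I3)        # approximately 1
```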

2.4 Conditional probability density functions

If \(\left( {X_{1} , X_{2} } \right) \sim BEC\left( {\alpha_{1} , \alpha_{2} , \alpha_{3} , \beta , \lambda } \right)\), then by Proposition 1 the marginal pdfs of \(X_{1}\) and \(X_{2}\) are given by

$$f_{i} \left( {x_{i} } \right) = { }\left( {\alpha_{i} + \alpha_{3} } \right){ }\beta { }\lambda { }x_{i}^{\beta - 1} { }e^{{x_{i}^{\beta } }} { }\left[ {1 + \lambda \left( {e^{{x_{i}^{\beta } }} - 1} \right)} \right]^{{ - \left( {\alpha_{i} + \alpha_{3} } \right) - 1}} \left( {i = 1,2} \right) ,\,x_{i} > 0$$
(11)

In Theorem 4, the conditional probability density function for BEC distribution is derived.

Theorem 4

If \(\left( {X_{1} , X_{2} } \right) \sim BEC\left( {\alpha_{1} , \alpha_{2} , \alpha_{3} , \beta , \lambda } \right)\), the conditional probability density functions of \(X_{i}\), given \(X_{j} = x_{j} ,\left( {i ,j = 1 , 2} \right) ,\) \(\left( {i \ne j } \right)\) can be stated as follows:

$$f_{{X_{1} \left| {X_{2} } \right.}} (x_{1} \left| {x_{2} ) = } \right.\left\{ {\begin{array}{*{20}c} {f_{{X_{1} \left| {X_{2} } \right.}}^{\left( 1 \right)} (x_{1} \left| {x_{2} ),{ }} \right.\ \ x_{2} > x_{1} } \\ {f_{{X_{1} \left| {X_{2} } \right.}}^{\left( 2 \right)} (x_{1} \left| {x_{2} )} \right.,\ \ x_{1} > x_{2} } \\ \end{array} } \right.{ }$$
(12)

where

$$f_{{X_{1} \left| {X_{2} } \right.}}^{\left( 1 \right)} (x_{1} \left| {x_{2} ) = { }} \right.\alpha_{{1{ }}} \beta { }\lambda { }x_{1}^{\beta - 1} { }e^{{x_{1}^{\beta } }} { }\left[ {1 + \lambda \left( {e^{{x_{1}^{\beta } }} - 1} \right)} \right]^{{ - \alpha_{1} - 1}}$$
$$f_{{X_{1} \left| {X_{2} } \right.}}^{\left( 2 \right)} (x_{1} \left| {x_{2} )} \right. = \frac{{\alpha_{2} \left( {\alpha_{1} + \alpha_{3} } \right)}}{{\left( {\alpha_{2} + \alpha_{3} } \right)}}\beta { }\lambda { }x_{1}^{\beta - 1} { }e^{{x_{1}^{\beta } }} { }\left[ {1 + \lambda \left( {e^{{x_{1}^{\beta } }} - 1} \right)} \right]^{{ - (\alpha_{1} + \alpha_{3} ) - 1}} { }\left[ {1 + \lambda \left( {e^{{x_{2}^{\beta } }} - 1} \right)} \right]^{{\alpha_{3} }}$$

Proof.

The result follows by substituting the joint pdf in Eq. (7) and the marginal probability density function of \(X_{2}\) given in Eq. (11) into

$$f_{{X_{1} \left| {X_{2} } \right.}} (x_{1} \left| {x_{2} ) = } \right.\frac{{f_{{X_{1} , X_{2} }} \left( {x_{1} , x_{2} } \right)}}{{f_{{X_{2} }} \left( {x_{2} } \right)}}.$$

The conditional pdf of \(X_{2}\) given \(X_{1} = x_{1}\) is obtained in the same way.

The proof of Theorem 4 is complete.

3 Reliability measures

Reliability theory is concerned with the application of probability theory to the modelling of failures and the prediction of success probability. This section highlights some of the reliability measures.

3.1 Stress-strength reliability measure

The stress-strength reliability measure describes the behaviour of a component with strength \({X}_{2}\) subjected to stress \({X}_{1}\): the component functions as long as the stress is less than the strength and fails once the stress exceeds it. The stress-strength reliability measure is defined as

$$R=P\left({X}_{1}<{X}_{2}\right)$$

If \(({X}_{1},{X}_{2})\) has a BEC distribution, then the stress-strength reliability measure R is calculated as follows,

$$R=P\left({X}_{1}<{X}_{2}\right)={\int }_{0}^{\infty }{\int }_{{x}_{1}}^{\infty }{f}_{1}\left({x}_{1},{x}_{2}\right)d{x}_{2}d{x}_{1}$$
$$={\int }_{0}^{\infty }{\int }_{{x}_{1}}^{\infty }{\alpha }_{1}\left({\alpha }_{2} +{\alpha }_{3}\right){ \beta }^{2} {\lambda }^{2}{x}_{1}^{\beta -1} {x}_{2}^{\beta -1} {e}^{({x}_{1}^{\beta }+{x}_{2}^{\beta })} {[1+\lambda ({e}^{{x}_{1}^{\beta }}-1)]}^{-{\alpha }_{1}-1} {[1+\lambda ({e}^{{x}_{2}^{\beta }}-1)]}^{-{(\alpha }_{2}+ {\alpha }_{3})-1}d{x}_{2}d{x}_{1}$$
$$={\int }_{0}^{\infty }{\alpha }_{1} \beta \lambda {x}_{1}^{\beta -1} {e}^{{x}_{1}^{\beta }}{\left[1+\lambda \left({e}^{{x}_{1}^{\beta }}-1\right)\right]}^{-{({ \alpha }_{1}+ \alpha }_{2}+ {\alpha }_{3})-1} d{x}_{1}$$
$$= \frac{{\alpha }_{1}}{{\alpha }_{1}+{{\alpha }_{2}+\alpha }_{3}}$$
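The closed form \(R={\alpha }_{1}/({\alpha }_{1}+{\alpha }_{2}+{\alpha }_{3})\) is easy to confirm by simulation; the following sketch is illustrative, with arbitrary function names and parameter values.

```python
# Illustrative Monte Carlo confirmation of R = alpha1 / (alpha1 + alpha2 + alpha3).
import numpy as np

def ec_rvs(alpha, beta, lam, size, rng):
    u = rng.uniform(size=size)
    return np.log(1 + ((1 - u) ** (-1 / alpha) - 1) / lam) ** (1 / beta)

a1, a2, a3, beta, lam = 0.8, 1.2, 0.5, 0.7, 1.5
rng = np.random.default_rng(3)
n = 500_000
u1, u2, u3 = (ec_rvs(a, beta, lam, n, rng) for a in (a1, a2, a3))
x1, x2 = np.minimum(u1, u3), np.minimum(u2, u3)

print(np.mean(x1 < x2), a1 / (a1 + a2 + a3))        # both approximately 0.32
```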

3.2 Hazard rate function

In the literature, the bivariate failure rate function has been defined in various ways. One of these was proposed by Basu (1971) as

$${h}_{{X}_{1}, {X}_{2}}\left({x}_{1}, {x}_{2}\right)=\frac{{f}_{{X}_{1}, {X}_{2}}\left({x}_{1}, {x}_{2}\right)}{{S}_{{X}_{1}, {X}_{2}}\left({x}_{1}, {x}_{2}\right)}$$

If \(({X}_{1},{X}_{2})\) has a BEC distribution, then the joint hazard rate function of \({X}_{1}\) and \({X}_{2}\) has the following form

$${h}_{{X}_{1}, {X}_{2}}\left({x}_{1}, {x}_{2}\right)=\left\{\begin{array}{c}{h}_{1}\left({x}_{1}, {x}_{2}\right), {x}_{2}>{x}_{1}\\ {h}_{2}\left({x}_{1}, {x}_{2}\right), {x}_{1}>{x}_{2}\\ {h}_{3}\left(x\right), {x}_{1}={x}_{2}=x\end{array}\right.$$
(13)

where,

$${h}_{1}\left({x}_{1}, {x}_{2}\right)=\frac{{f}_{1}({x}_{1}, {x}_{2})}{{S}_{1}({x}_{1}, {x}_{2})}=\frac{{\alpha }_{1}\left({{\alpha }_{2}+\alpha }_{3}\right) {\beta }^{2} {\lambda }^{2}{ x}_{1}^{\beta -1}{x}_{2}^{\beta -1} {e}^{{x}_{1}^{\beta }+{x}_{2}^{\beta }} }{[1+\lambda ({e}^{{x}_{1}^{\beta }}-1)] [1+\lambda ({e}^{{x}_{2}^{\beta }}-1)]}$$
$${h}_{2}\left({x}_{1}, {x}_{2}\right)=\frac{{f}_{2}({x}_{1}, {x}_{2})}{{S}_{2}({x}_{1}, {x}_{2})}=\frac{{\alpha }_{2}\left({{\alpha }_{1}+\alpha }_{3}\right) {\beta }^{2} {\lambda }^{2}{ x}_{1}^{\beta -1}{x}_{2}^{\beta -1} {e}^{{x}_{1}^{\beta }+{x}_{2}^{\beta }} }{[1+\lambda ({e}^{{x}_{1}^{\beta }}-1)] [1+\lambda ({e}^{{x}_{2}^{\beta }}-1)]}$$
$${h}_{3}\left(x\right)=\frac{{f}_{3}\left(x\right)}{{S}_{3}\left(x\right)}=\frac{{\alpha }_{3} \beta \lambda { x}^{\beta -1}{ e}^{{x}^{\beta }} }{[1+\lambda ({e}^{{x}^{\beta }}-1)]}$$

The bivariate hazard rate function was also defined by Johnson and Kotz (1975) and Marshall (1975) in vector form as the hazard gradient,

$${h}_{{X}_{1}, {X}_{2}}\left({x}_{1}, {x}_{2}\right)=\left({h}_{{X}_{1}}\left({x}_{1}, {x}_{2}\right),{h}_{{X}_{2}}\left({x}_{1}, {x}_{2}\right)\right)=\left(\frac{- \partial \mathit{ln} {S}_{{X}_{1}, {X}_{2}}\left({x}_{1}, {x}_{2}\right) }{\partial {x}_{1}} , \frac{- \partial \mathit{ln} {S}_{{X}_{1}, {X}_{2}}\left({x}_{1}, {x}_{2}\right)}{\partial {x}_{2}}\right)$$

After some calculations, the hazard gradient of the BEC distribution is given by

$${h}_{{X}_{1}}\left({x}_{1}, {x}_{2}\right)=\left\{\begin{array}{c}{ \alpha }_{1} \beta \lambda {x}_{1}^{\beta -1} {e}^{{x}_{1}^{\beta }} {[1+\lambda ({e}^{{x}_{1}^{\beta }}-1)]}^{-1} , {x}_{2}>{x}_{1}\\ {(\alpha }_{1}+{\alpha }_{3}) \beta \lambda {x}_{1}^{\beta -1} {e}^{{x}_{1}^{\beta }} {[1+\lambda ({e}^{{x}_{1}^{\beta }}-1)]}^{-1}, {x}_{1}>{x}_{2}\\ {(\alpha }_{1}+{\alpha }_{2}+{\alpha }_{3}) \beta \lambda {x}_{1}^{\beta -1} {e}^{{x}_{1}^{\beta }} {[1+\lambda ({e}^{{x}_{1}^{\beta }}-1)]}^{-1}, {x}_{1}={x}_{2}\end{array}\right.$$
(14)

and

$${h}_{{X}_{2}}\left({x}_{1}, {x}_{2}\right)=\left\{\begin{array}{c}{(\alpha }_{2}+{\alpha }_{3}) \beta \lambda {x}_{2}^{\beta -1} {e}^{{x}_{2}^{\beta }} {[1+\lambda ({e}^{{x}_{2}^{\beta }}-1)]}^{-1}, {x}_{2}>{x}_{1}\\ {\alpha }_{2} \beta \lambda {x}_{2}^{\beta -1} {e}^{{x}_{2}^{\beta }}{ [1+\lambda ({e}^{{x}_{2}^{\beta }}-1)]}^{-1}, {x}_{1}>{x}_{2}\\ {(\alpha }_{1}+{\alpha }_{2}+{\alpha }_{3}) \beta \lambda {x}_{2}^{\beta -1} {e}^{{x}_{2}^{\beta }}{ [1+\lambda ({e}^{{x}_{2}^{\beta }}-1)]}^{-1}, {x}_{1}={x}_{2}\end{array}\right.$$
(15)
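As a quick consistency check, the first component of the hazard gradient in Eq. (14) can be compared with a finite-difference approximation of \(-\partial \ln S_{X_1,X_2}(x_1,x_2)/\partial x_1\) computed from Eq. (5). The sketch below is illustrative, with arbitrary parameter values and evaluation point.

```python
# Illustrative finite-difference check of the first component of the hazard
# gradient in Eq. (14), using the joint survival function of Eq. (5).
import numpy as np

def ec_survival(x, alpha, beta, lam):
    return (1 + lam * (np.exp(x**beta) - 1)) ** (-alpha)

def bec_survival(x1, x2, a1, a2, a3, beta, lam):
    """Joint survival function, Eq. (5)."""
    if x2 > x1:
        return ec_survival(x1, a1, beta, lam) * ec_survival(x2, a2 + a3, beta, lam)
    if x1 > x2:
        return ec_survival(x1, a1 + a3, beta, lam) * ec_survival(x2, a2, beta, lam)
    return ec_survival(x1, a1 + a2 + a3, beta, lam)

a1, a2, a3, beta, lam = 0.8, 1.2, 0.5, 0.7, 1.5
x1, x2, h = 0.6, 0.9, 1e-5                           # a point in the region x2 > x1

numeric = -(np.log(bec_survival(x1 + h, x2, a1, a2, a3, beta, lam))
            - np.log(bec_survival(x1 - h, x2, a1, a2, a3, beta, lam))) / (2 * h)
analytic = a1 * beta * lam * x1**(beta - 1) * np.exp(x1**beta) / (1 + lam * (np.exp(x1**beta) - 1))
print(numeric, analytic)                             # should agree to several digits
```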

4 Maximum likelihood estimation

The maximum likelihood method is used in this section to derive estimators of the five parameters of the BEC distribution. Consider a sample of size n from the BEC distribution with parameters \({\alpha }_{1}, {\alpha }_{2}, {\alpha }_{3}, \beta , \lambda\) and let

$${I}_{1}=\left\{\left({x}_{1i},{x}_{2i}\right): {x}_{2i}>{x}_{1i}, i=1, \dots , n\right\}, { I}_{2}=\left\{\left({x}_{1i},{x}_{2i}\right): {x}_{1i}>{x}_{2i}, i=1, \dots , n\right\}$$
$${I}_{3}=\left\{\left({x}_{1i},{x}_{2i}\right): {x}_{1i}={x}_{2i}, i=1, \dots , n\right\}, I={I}_{1}\cup { I}_{2}\cup {I}_{3}$$

\({n}_{1}=\left|{I}_{1}\right|, {n}_{2}=\left|{I}_{2}\right|, {n}_{3}=\left|{I}_{3}\right|\) and \(n= {n}_{1}+{n}_{2}+{n}_{3}\).

The likelihood function for the parameter vector \(\theta =\)(\({\alpha }_{1}, {\alpha }_{2}, {\alpha }_{3}, \beta , \lambda )\) is obtained as

$$l\left(\theta \left|{x}_{1},{x}_{2}\right.\right)=\prod_{i\in {I}_{1}}{f}_{1}\left({x}_{1i},{x}_{2i}\right) \prod_{i\in {I}_{2}}{f}_{2}\left({x}_{1i},{x}_{2i}\right) \prod_{i\in {I}_{3}}{f}_{3}\left({x}_{i}\right)$$
(16)

where,

$$\prod_{i\in {I}_{1}}{f}_{1}\left({x}_{1i},{x}_{2i}\right) = {\alpha }_{1}^{{n}_{1}} {\left({\alpha }_{2} +{\alpha }_{3}\right)}^{{n}_{1}} {\beta }^{2{n}_{1}} {\lambda }^{2{n}_{1}}$$
$$\times \prod_{i\in {I}_{1}}\left[{x}_{1i}^{\beta -1} {x}_{2i}^{\beta -1} {e}^{{x}_{1i}^{\beta }} {e}^{{x}_{2i}^{\beta }} {[1+\lambda ({e}^{{{x}_{1i}}^{\beta }}-1 )]}^{-{\alpha }_{1}-1} {[1+\lambda ({e}^{{{x}_{2i}}^{\beta }}-1)]}^{-{(\alpha }_{2}+ {\alpha }_{3})-1}\right]$$
$$\prod_{i\in {I}_{2}}{f}_{2}\left({x}_{1i},{x}_{2i}\right)={\alpha }_{2}^{{n}_{2}} {\left({\alpha }_{1}+{\alpha }_{3}\right)}^{{n}_{2}} {\beta }^{{2n}_{2}} {\lambda }^{2{n}_{2}}$$
$$\times \prod_{i\in {I}_{2}}\left[{x}_{1i}^{\beta -1} {x}_{2i}^{\beta -1} {e}^{{x}_{1i}^{\beta }} {e}^{{x}_{2i}^{\beta }} {[1+\lambda ({e}^{{{x}_{1i}}^{\beta }}-1)]}^{-({\alpha }_{1}+ {\alpha }_{3})-1} {[1+\lambda ({e}^{{{x}_{2i}}^{\beta }}-1)]}^{-{\alpha }_{2}-1}\right]$$
$$\prod_{i\in {I}_{3}}{f}_{3}\left({x}_{i}\right)={\alpha }_{3}^{{n}_{3}}{ \beta }^{{n}_{3}}{ \lambda }^{{n}_{3}} \prod_{i\in {I}_{3}}\left[{x}_{i}^{\beta -1} {e}^{{x}_{i}^{\beta }} {[1+\lambda ({e}^{{{x}_{i}}^{\beta }}-1)]}^{-({\alpha }_{1 }+{\alpha }_{2} + {\alpha }_{3})-1}\right]$$

The logarithm of the likelihood function in Eq. (16) is as follows,

$$L=\left(2{n}_{1}+2{n}_{2}+{n}_{3}\right) \mathit{ln}\beta +{n}_{1}\mathit{ln}{\alpha }_{1}+{n}_{2}\mathit{ln}{\alpha }_{2}+{n}_{3}\mathit{ln}{\alpha }_{3}+{n}_{1}\mathit{ln}{(\alpha }_{2}+{\alpha }_{3})+$$
$${n}_{2} \mathit{ln}{(\alpha }_{1}+{\alpha }_{3})+\left(2{n}_{1}+2{n}_{2}+{n}_{3}\right)\mathit{ln} \lambda +\sum_{i\in {I}_{1}}\mathit{ln}[{x}_{1i}^{\beta -1}]+\sum_{i\in {I}_{1}}\mathit{ln}[{x}_{2i}^{\beta -1}]+ \sum_{i\in {I}_{1}}{x}_{1i}^{\beta } +$$
$$\sum_{i\in {I}_{1}}{x}_{2i}^{\beta } - \left({\alpha }_{1}+1\right) \sum_{i\in {I}_{1}}\mathit{ln}\left[1+\lambda \left({e}^{{x}_{1i}^{\beta }}-1\right)\right] - \left({\alpha }_{2}+{\alpha }_{3}+1\right) \sum_{i\in {I}_{1}}\mathit{ln}\left[1+\lambda \left({e}^{{x}_{2i}^{\beta }}-1\right)\right]$$
$$+ \sum_{i\in {I}_{2}}\mathit{ln} \left[{x}_{1i}^{\beta -1}\right] + \sum_{i\in {I}_{2}}\mathit{ln} \left[{x}_{2i}^{\beta -1}\right]+\sum_{i\in {I}_{2}}{x}_{1i}^{\beta }+\sum_{i\in {I}_{2}}{x}_{2i}^{\beta }-$$
$$\left({\alpha }_{1}+{\alpha }_{3}+1\right)\sum_{i\in {I}_{2}}\mathit{ln}\left[1+\lambda \left({e}^{{x}_{1i}^{\beta }}-1\right)\right]-\left({\alpha }_{2}+1\right) \sum_{i\in {I}_{2}}\mathit{ln} \left[1+\lambda \left({e}^{{x}_{2i}^{\beta }}-1\right)\right]+\sum_{i\in {I}_{3}}\mathit{ln}\left[{x}_{i}^{\beta -1}\right]+\sum_{i\in {I}_{3}}{x}_{i}^{\beta }-({\alpha }_{1}+{\alpha }_{2}+{\alpha }_{3}+1) \sum_{i\in {I}_{3}}\mathit{ln}\left[1+\lambda \left({e}^{{x}_{i}^{\beta }}-1\right)\right]$$
(17)

By taking the first partial derivatives of Eq. (17) with respect to \({\alpha }_{1}, {\alpha }_{2}, {\alpha }_{3}, \beta , \lambda\) and setting them equal to zero, the likelihood equations are obtained as follows,

$$\frac{\partial L}{\partial {\alpha }_{1}}=\frac{{n}_{1}}{{\alpha }_{1}}+\frac{{n}_{2}}{{\alpha }_{1}+{\alpha }_{3}}-\sum_{i\in {I}_{1}}\mathit{ln}\left[1+\lambda \left({e}^{{x}_{1i}^{\beta }}-1\right)\right]-\sum_{i\in {I}_{2}}\mathit{ln}\left[1+\lambda \left({e}^{{x}_{1i}^{\beta }}-1\right)\right]-$$
$$\sum_{i\in {I}_{3}}\mathit{ln}\left[1+\lambda \left({e}^{{x}_{i}^{\beta }}-1\right)\right]=0$$
$$\frac{\partial L}{\partial {\alpha }_{2}}=\frac{{n}_{2}}{{\alpha }_{2}}+\frac{{n}_{1}}{{\alpha }_{2}+{\alpha }_{3}}-\sum_{i\in {I}_{1}}\mathit{ln}\left[1+\lambda \left({e}^{{x}_{2i}^{\beta }}-1\right)\right]-\sum_{i\in {I}_{2}}ln\left[1+\lambda \left({e}^{{x}_{2i}^{\beta }}-1\right)\right]-$$
$$\sum_{i\in {I}_{3}}\mathit{ln}\left[1+\lambda \left({e}^{{x}_{i}^{\beta }}-1\right)\right]=0$$
$$\frac{\partial L}{\partial {\alpha }_{3}}=\frac{{n}_{3}}{{\alpha }_{3}}+\frac{{n}_{2}}{{\alpha }_{1}+{\alpha }_{3}}+\frac{{n}_{1}}{{\alpha }_{2}+{\alpha }_{3}}-\sum_{i\in {I}_{2}}ln\left[1+\lambda \left({e}^{{x}_{1i}^{\beta }}-1\right)\right]-\sum_{i\in {I}_{1}}ln\left[1+\lambda \left({e}^{{x}_{2i}^{\beta }}-1\right)\right]-$$
$$\sum_{i\in {I}_{3}}ln\left[1+\lambda \left({e}^{{x}_{i}^{\beta }}-1\right)\right]=0$$
$$\frac{\partial L}{\partial \beta }=\frac{2{n}_{1}+2{n}_{2}+{n}_{3}}{\beta }+\sum_{i\in {I}_{1}}\mathit{ln}\left({x}_{1i}\right)+\sum_{i\in {I}_{2}}\mathit{ln}\left({x}_{1i}\right)+\sum_{i\in {I}_{1}}\mathit{ln}\left({x}_{2i}\right)+\sum_{i\in {I}_{2}}\mathit{ln}\left({x}_{2i}\right)+\sum_{i\in {I}_{3}}\mathit{ln}\left({x}_{i}\right)+\sum_{i\in {I}_{1}}{x}_{1i}^{\beta }\mathit{ln}\left({x}_{1i}\right)+\sum_{i\in {I}_{2}}{x}_{1i}^{\beta }\mathit{ln}\left({x}_{1i}\right)-\left({\alpha }_{1}+1\right)\sum_{i\in {I}_{1}}\frac{\lambda {x}_{1i}^{\beta } {e}^{{x}_{1i}^{\beta }} \mathit{ln}\left({x}_{1i}\right)}{1+\lambda \left({e}^{{x}_{1i}^{\beta }}-1\right)}-\left({\alpha }_{1}+{\alpha }_{3}+1\right)\sum_{i\in {I}_{2}}\frac{\lambda {x}_{1i}^{\beta } {e}^{{x}_{1i}^{\beta }} \mathit{ln}\left({x}_{1i}\right)}{1+\lambda \left({e}^{{x}_{1i}^{\beta }}-1\right)}+$$
$$\sum_{i\in {I}_{1}}{x}_{2i}^{\beta }\mathit{ln}\left({x}_{2i}\right)+\sum_{i\in {I}_{2}}{x}_{2i}^{\beta }\mathit{ln}\left({x}_{2i}\right)-\left({\alpha }_{2}+{\alpha }_{3}+1\right)\sum_{i\in {I}_{1}}\frac{\lambda {x}_{2i}^{\beta } {e}^{{x}_{2i}^{\beta }} \mathit{ln}\left({x}_{2i}\right)}{1+\lambda \left({e}^{{x}_{2i}^{\beta }}-1\right)}-\left({\alpha }_{2}+1\right)\sum_{i\in {I}_{2}}\frac{\lambda {x}_{2i}^{\beta } {e}^{{x}_{2i}^{\beta }} \mathit{ln}\left({x}_{2i}\right)}{1+\lambda \left({e}^{{x}_{2i}^{\beta }}-1\right)}+\sum_{i\in {I}_{3}}{x}_{i}^{\beta }\mathit{ln}\left({x}_{i}\right)-\left({\alpha }_{1}+{\alpha }_{2}+{\alpha }_{3}+1\right)\sum_{i\in {I}_{3}}\frac{\lambda {x}_{i}^{\beta } {e}^{{x}_{i}^{\beta }} \mathit{ln}\left({x}_{i}\right)}{1+\lambda \left({e}^{{x}_{i}^{\beta }}-1\right)}=0$$
$$\frac{\partial L}{{\partial \lambda }} = \frac{{2n_{1} + 2n_{2} + n_{3} }}{\lambda } - \left( {\alpha_{1} + 1} \right)\mathop \sum \limits_{{i \in I_{1} }} \frac{{e^{{x_{1i}^{\beta } }} - 1}}{{1 + \lambda \left( {e^{{x_{1i}^{\beta } }} - 1} \right)}} - \left( {\alpha_{1} + \alpha_{3} + 1} \right)\mathop \sum \limits_{{i \in I_{2} }} \frac{{e^{{x_{1i}^{\beta } }} - 1}}{{1 + \lambda \left( {e^{{x_{1i}^{\beta } }} - 1} \right)}} -$$
$$\left( {\alpha_{2} + \alpha_{3} + 1} \right)\mathop \sum \limits_{{i \in I_{1} }} \frac{{e^{{x_{2i}^{\beta } }} - 1}}{{1 + \lambda \left( {e^{{x_{2i}^{\beta } }} - 1} \right)}} - \left( {\alpha_{2} + 1} \right)\mathop \sum \limits_{{i \in I_{2} }} \frac{{e^{{x_{2i}^{\beta } }} - 1}}{{1 + \lambda \left( {e^{{x_{2i}^{\beta } }} - 1} \right)}} - \left( {\alpha_{1} + \alpha_{2} + \alpha_{3} + 1} \right)\mathop \sum \limits_{{i \in I_{3} }} \frac{{e^{{x_{i}^{\beta } }} - 1}}{{1 + \lambda \left( {e^{{x_{i}^{\beta } }} - 1} \right)}} = 0$$

Because the above system of five nonlinear equations cannot be solved analytically, a numerical method is required to obtain the MLEs.
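In practice the MLEs can be obtained by maximizing Eq. (17) numerically. The following Python sketch is a minimal illustration (not the code used in the paper): it works on the log scale to keep the parameters positive and uses a derivative-free optimizer; `data` is a placeholder \((n,2)\) array of observed pairs.

```python
# A minimal, illustrative sketch of computing the MLEs by minimizing the
# negative of the log-likelihood in Eq. (17) with SciPy. Ties on the diagonal
# are detected by exact equality, as in the definition of I3.
import numpy as np
from scipy.optimize import minimize

def ec_logpdf(x, alpha, beta, lam):
    """Logarithm of the EC pdf in Eq. (1)."""
    t = np.exp(x**beta)
    return (np.log(alpha) + np.log(beta) + np.log(lam) + (beta - 1) * np.log(x)
            + x**beta - (alpha + 1) * np.log(1 + lam * (t - 1)))

def neg_loglik(theta, x1, x2):
    a1, a2, a3, beta, lam = np.exp(theta)            # log-scale keeps all parameters positive
    i1, i2, i3 = x2 > x1, x1 > x2, x1 == x2
    ll = 0.0
    # I1 (x2 > x1): f1(x1, x2) = f_EC(x1; a1) f_EC(x2; a2 + a3)
    ll += np.sum(ec_logpdf(x1[i1], a1, beta, lam) + ec_logpdf(x2[i1], a2 + a3, beta, lam))
    # I2 (x1 > x2): f2(x1, x2) = f_EC(x1; a1 + a3) f_EC(x2; a2)
    ll += np.sum(ec_logpdf(x1[i2], a1 + a3, beta, lam) + ec_logpdf(x2[i2], a2, beta, lam))
    # I3 (x1 = x2 = x): f3(x) = (a3 / (a1 + a2 + a3)) f_EC(x; a1 + a2 + a3)
    ll += np.sum(np.log(a3 / (a1 + a2 + a3)) + ec_logpdf(x1[i3], a1 + a2 + a3, beta, lam))
    return -ll

def fit_bec(data):
    x1, x2 = data[:, 0], data[:, 1]
    start = np.zeros(5)                              # all parameters initialised at 1
    res = minimize(neg_loglik, start, args=(x1, x2), method="Nelder-Mead",
                   options={"maxiter": 20_000, "xatol": 1e-8, "fatol": 1e-8})
    return np.exp(res.x), -res.fun                   # (a1, a2, a3, beta, lam) and max log-likelihood
```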

5 Application

Using two real data sets, this section illustrates the behaviour of the new BEC model and compares its ability to fit data with that of other existing models.

5.1 Diabetic retinopathy data

Diabetic retinopathy is one of the main causes of blindness and visual loss in diabetic patients. The National Eye Institute presented a data set on diabetic patients who suffered blindness; see Huster et al. (1989). The study was performed to assess the effectiveness of laser treatment in delaying blindness. For each patient, one eye was randomly chosen for laser photocoagulation and the time (in months) until each eye went blind was recorded. The primary objective of the study was to determine whether laser treatment delays blindness. A subset of 38 of the 197 high-risk DRS patients is used here to investigate the efficacy of the new model. The data are presented in Table 1. Let X1 and X2 be defined as follows:

Table 1 Time of vision loss for diabetic retinopathy patients

X1: represents the time to blindness in the untreated or control eye (in months).

X2: represents the time to blindness in the treated eye (in months).

Before analysing the data with the BEC distribution, the Extended Chen distribution is fitted to \({X}_{1}\), \({X}_{2}\) and \(\mathit{min}\left({X}_{1},{X}_{2}\right)\). For numerical stability in fitting, all data points were divided by 10; this rescaling does not otherwise affect the analysis.

Table 2 displays the maximum likelihood estimates (MLEs) of the parameters, the associated Kolmogorov–Smirnov (K–S) distances and the corresponding p-values for the diabetic retinopathy data. According to the p-values, the Extended Chen distribution provides an adequate fit to both marginals as well as to the minimum.

Table 2 The MLEs, K–S and the p-values for Diabetic Retinopathy data
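The univariate step behind Table 2 can be reproduced along the following lines. The sketch is illustrative (not the authors' code): it fits the EC distribution to a one-dimensional sample by maximum likelihood and computes the K–S statistic; `sample` is a placeholder for one of the rescaled columns \(X_1\), \(X_2\) or \(\min(X_1,X_2)\).

```python
# Illustrative sketch of the univariate fits behind Table 2: maximum likelihood
# estimation of EC(alpha, beta, lambda) for a 1-D sample, followed by a
# Kolmogorov-Smirnov test against the fitted cdf.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import kstest

def ec_cdf(x, alpha, beta, lam):
    return 1 - (1 + lam * (np.exp(x**beta) - 1)) ** (-alpha)

def ec_neg_loglik(theta, x):
    alpha, beta, lam = np.exp(theta)                 # log-scale keeps parameters positive
    t = np.exp(x**beta)
    return -np.sum(np.log(alpha * beta * lam) + (beta - 1) * np.log(x)
                   + x**beta - (alpha + 1) * np.log(1 + lam * (t - 1)))

def fit_ec_and_ks(sample):
    res = minimize(ec_neg_loglik, np.zeros(3), args=(sample,), method="Nelder-Mead")
    alpha, beta, lam = np.exp(res.x)
    ks = kstest(sample, lambda x: ec_cdf(x, alpha, beta, lam))
    return (alpha, beta, lam), ks.statistic, ks.pvalue
```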

The goodness of fit of the BEC distribution is now compared with that of two bivariate distributions that have received considerable attention in the literature: the bivariate Pareto distribution introduced by Shoaee and Khorram (2020) and the bivariate Generalized Linear Failure Rate (GLFR) distribution introduced by Sarhan et al. (2011).

We evaluate the adequacy of fit of the new BEC model relative to the bivariate Pareto and bivariate GLFR models using the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC), Consistent AIC (CAIC), and Hannan–Quinn Information Criterion (HQIC). Table 3 provides a summary of the findings.

Table 3 Goodness of fit criteria for diabetic retinopathy data

5.2 American football league data

The data in Table 4 were gathered from the National Football League (NFL) and correspond to games played over three weekends in 1986; they are presented in Csorgo and Welsh (1989). Let X1 and X2 be the following variables:

Table 4 American football league (NFL) data

X1: the game time to the first points scored by kicking the ball between goal posts.

X2: the game time to the first points scored by moving the ball into the end zone.

These times are of interest to spectators who are new to the game or who are curious about how long they will have to wait to see a touchdown. The data are shown in Table 4 (scoring times in minutes and seconds). Before analysing the data with the BEC distribution, the Extended Chen distribution is fitted to \({X}_{1}\), \({X}_{2}\) and \(\mathit{min}\left({X}_{1},{X}_{2}\right)\). As before, all data points were divided by 10 for numerical stability.

The MLEs, the associated K–S distances and the p-values for the American Football League data are shown in Table 5. According to the p-values, the Extended Chen distribution provides an adequate fit to both marginals as well as to the minimum.

Table 5 The MLEs, K–S and the p-values for American football league data

Table 6 summarises the MLEs together with the AIC, BIC, CAIC and HQIC values used to compare the fit of the proposed BEC model with the other bivariate models. According to the results, the BEC model offers a better fit to both data sets, since it attains the smallest values of all the goodness-of-fit criteria. Therefore, the new Bivariate Extended Chen (BEC) distribution constructed with the Marshall–Olkin method is more suitable for these data than the other models.

Table 6 Goodness of fit Criteria for American football league data
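For reference, the information criteria reported in Tables 3 and 6 can be computed from each model's maximized log-likelihood \(\ell\), number of parameters \(k\) (five for the BEC model) and sample size \(n\). The helper below assumes the common definitions AIC \(=-2\ell+2k\), BIC \(=-2\ell+k\ln n\), CAIC \(=-2\ell+k(\ln n+1)\) and HQIC \(=-2\ell+2k\ln(\ln n)\); the exact variants used in the paper may differ.

```python
# Illustrative helper for the model-comparison criteria, assuming the common
# definitions of AIC, BIC, CAIC and HQIC stated above.
import numpy as np

def information_criteria(loglik, k, n):
    """loglik: maximized log-likelihood; k: number of parameters; n: sample size."""
    return {
        "AIC": -2 * loglik + 2 * k,
        "BIC": -2 * loglik + k * np.log(n),
        "CAIC": -2 * loglik + k * (np.log(n) + 1),
        "HQIC": -2 * loglik + 2 * k * np.log(np.log(n)),
    }
```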

6 Conclusions

In this paper we introduced and studied a new class of bivariate distributions, namely the Bivariate Extended Chen (BEC) distribution. Various properties of this distribution were derived and discussed, including the joint pdf and cdf, the marginal and conditional distributions, the hazard rate function, and the stress-strength reliability. The unknown parameters were estimated using the maximum likelihood method. In the applications, the bivariate Pareto and bivariate GLFR distributions were fitted to the same data sets for comparison, and the BEC distribution proved to be a good and adaptable model. According to the goodness-of-fit criteria, the BEC distribution offers a superior fit to these data compared with the competing bivariate models, demonstrating its flexibility and practical applicability in bivariate modelling.