Introduction

In the statistical literature the gamma distribution has been the subject of considerable interest, study and application for many years in areas such as medicine, engineering, economics and Bayesian analysis; in Bayesian analysis it serves as the conjugate prior for the precision (inverse variance) of a normal distribution. Although ample information about the gamma distribution is available, comparatively little work has been devoted to the distribution of its reciprocal, the inverse gamma (IG). For example, Gelen and Leemis [7] studied the inverse gamma as a survival distribution. Gelman [8] studied the inverse gamma distribution as a prior for variance parameters in hierarchical models. Llera and Beckmann [14] introduced five different algorithms, based on the method of moments, maximum likelihood and Bayesian approaches, to estimate the parameters of the inverted gamma distribution. Abid and Al-Hassany [1] studied the maximum likelihood, moments, percentile, least squares and weighted least squares estimators of the parameters of the inverted gamma distribution. The inverted gamma distribution is a two-parameter family of continuous probability distributions on the positive real line; it belongs to the exponential family and always has an upside-down bathtub shaped hazard function.

A random variable X is said to have a gamma distribution with parameters \(\alpha (>0)\) and \(\beta (>0)\), denoted by \(X \sim Ga(\alpha, \beta )\), if its probability density function (pdf) is given by

$$\begin{aligned} f(x)=\frac{\beta ^{\alpha }}{\Gamma (\alpha )}\;x^{\alpha -1}e^{-\beta x},\quad x>0, \end{aligned}$$

where \(\Gamma (\cdot )\) is the gamma function. If \(Y=1/X\), then the pdf of Y is given by

$$\begin{aligned} f(y)=\frac{\beta ^{\alpha }}{\Gamma (\alpha )}\;y^{-\alpha -1} e^{-\beta /y},\quad y>0. \end{aligned}$$
(1)

A random variable Y with pdf (1) is said to have an inverted gamma distribution with shape parameter \(\alpha (>0)\) and scale parameter \(\beta (>0)\), denoted by \(Y \sim IG(\alpha, \beta )\). The cumulative distribution function (cdf) of Y is given by

$$\begin{aligned} F_{Y}(y) = \frac{\Gamma \left( \alpha,\frac{\beta }{y}\right) }{\Gamma (\alpha )} = Q\left( \alpha,\frac{\beta }{y}\right),~~~y\ge 0,~~~\alpha>0,~~~~~\beta >0, \end{aligned}$$
(2)

where \(\Gamma (\cdot , \cdot )\) and \(Q(\cdot , \cdot )\) are the upper incomplete gamma function and the regularized upper incomplete gamma function, respectively. If \(\alpha =1\), the distribution of Y is called the inverted exponential distribution, denoted by \(Y \sim IED(\beta )\). In this case the pdf and cdf of Y are given by

$$\begin{aligned} f(y)=\beta y^{-2}e^{-\beta /y},\quad y>0 ,\quad \beta >0, \end{aligned}$$
(3)
$$\begin{aligned} F(y)=e^{-\beta /y},\quad y>0 , \quad \beta >0. \end{aligned}$$
(4)

Let X and Y be independent random variables. Then the stress-strength reliability R is given by

$$\begin{aligned} R&= P(X>Y)\\&= \int _0^{+\infty }\int _0^x f(x,y) \mathrm{d}y \mathrm{d}x \\&= \int _0^{+\infty }\left[ \int _0^x f_Y(y) \mathrm{d}y\right] f_X(x) \mathrm{d}x \\&= \int _0^{+\infty }F_Y(x)f_X(x) \mathrm{d}x. \end{aligned}$$

The estimation of the stress-strength parameter plays an important role in reliability analysis. For example, if X is the strength of a system which is subjected to a stress Y, then the parameter R measures the system performance; this formulation is frequently used in the context of the mechanical reliability of a system.

It seems that Birnbaum and McCarty [5] was the first paper with R in its title; they obtained a non-parametric upper confidence bound for R. There are several works on inference procedures for R based on complete and incomplete data from the X and Y samples. We refer the reader to Kotz et al. [11] and the references therein for applications of R; this book collects and digests theoretical and practical results on the theory and applications of stress-strength relationships in industrial and economic systems up to 2003. Kundu and Raqab [12] considered the estimation of the stress-strength parameter R when X and Y are independent and both follow three-parameter Weibull distributions.

Among the works on stress-strength reliability based on records, Baklizi [2, 4] studied point and interval estimation of the stress-strength reliability using record data in the one- and two-parameter exponential distributions. Baklizi [3] considered likelihood and Bayesian estimation of the stress-strength reliability using lower record values from the generalized exponential distribution.

The estimation of R has also been studied in the literature for the Weibull, exponential, inverted exponential, generalized Lindley, generalized exponential and many other distributions. Some of the recent work on the stress-strength model can be seen in [13, 16,17,18]. Recently, Singh et al. [19] considered the estimation of the parameter R when X and Y are independent inverted exponential random variables.

In this paper we let \(X \sim IG(\alpha _{1}, \beta _1)\) and \(Y\sim IG(\alpha _{2}, \beta _2)\) be independent random variables. Then the parameter R is given by

$$\begin{aligned} R= & {} \int _0^{+\infty } \frac{\Gamma \left( \alpha _2 , \frac{\beta _2}{x}\right) }{\Gamma (\alpha _2)} \times \frac{\beta _1^{\alpha _1}}{\Gamma (\alpha _1)} \times x^{-\alpha _1 -1} e^{\frac{-\beta _1}{x}} \mathrm{d}x \nonumber \\= & {} \frac{\beta _1^{\alpha _1}}{\Gamma (\alpha _1) \Gamma (\alpha _2)} \int _0^{+\infty }\Gamma \left( \alpha _2 , \frac{\beta _2}{x}\right) x^{-\alpha _1 -1} e^{\frac{-\beta _1}{x}} \mathrm{d}x. \end{aligned}$$
(5)

From the above, we observe that R is a function of the parameters \(\alpha _{1}\), \(\alpha _{2}\), \(\beta _1\) and \(\beta _2\). Therefore, to obtain the maximum likelihood estimate (MLE) of R, we need the MLEs of \(\alpha _{1}\), \(\alpha _{2}\), \(\beta _1\) and \(\beta _2\). In the special case \(\alpha _{2}=1\) we have \(X \sim IG(\alpha _{1}, \beta _1)\) and \(Y\sim IED(\beta _2)\), with X and Y independent, and the parameter R (denoted \(R_{1}\)) is obtained as:

$$\begin{aligned} R_1= & {} \frac{\beta _1^{\alpha _{1}}}{\Gamma (\alpha _{1})}\int _0^{+\infty }x^{-\alpha _{1}-1}\exp (-(\beta _1+\beta _2)/x)\mathrm{d}x\nonumber \\= & {} \left( \frac{\beta _1}{\beta _1+\beta _2}\right) ^{\alpha _{1}}. \end{aligned}$$
(6)
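For a quick numerical illustration (our own sketch, not part of the original analysis), the closed form (6) can be checked by simulation; the parameter values below are arbitrary assumptions, and the general expression (5) can be evaluated analogously by numerical quadrature.

```python
import numpy as np
from scipy.stats import invgamma

# Illustrative (assumed) parameter values: X ~ IG(2, 3), Y ~ IED(1.5) = IG(1, 1.5)
a1, b1, b2 = 2.0, 3.0, 1.5

rng = np.random.default_rng(0)
x = invgamma.rvs(a1,  scale=b1, size=200_000, random_state=rng)   # strength X
y = invgamma.rvs(1.0, scale=b2, size=200_000, random_state=rng)   # stress Y

print(np.mean(x > y))               # Monte Carlo estimate of P(X > Y)
print((b1 / (b1 + b2)) ** a1)       # closed form (6): (2/3)^2 ~ 0.444
```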

Maximum likelihood estimation for \(\alpha\) and \(\beta\)

Let \(x_1, x_2, \ldots , x_n\) be independent observations from \(IG(\alpha , \beta )\). Then the log-likelihood function of \(\alpha\) and \(\beta\) is given by

$$\begin{aligned} \ln L(\alpha ,\beta )= \ell (\alpha ,\beta ) = n\alpha \ln \beta - n\ln \Gamma (\alpha ) -(\alpha +1) \sum _{i=1}^n \ln x_i - \beta \sum _{i=1}^n \frac{1}{x_i}. \end{aligned}$$
(7)

Differentiating (7) with respect to \(\alpha\) and \(\beta\) and equating the derivatives to zero, we get the following normal equations

$$\begin{aligned} \frac{\partial l(\alpha , \beta )}{\partial \alpha } = n\ln \beta -n \psi (\alpha ) -\sum _{i=1}^{n}\ln {x_i} =0, \end{aligned}$$
(8)
$$\begin{aligned} \frac{\partial l(\alpha , \beta )}{\partial \beta } = \frac{n\alpha }{\beta } - \sum _{i=1}^n \frac{1}{x_i}=0, \end{aligned}$$
(9)

where \(\psi (\alpha )=\frac{\partial }{\partial \alpha } \ln \Gamma (\alpha )=\frac{\Gamma ^{\prime }(\alpha )}{\Gamma (\alpha )}\) is the digamma function, which can be approximated by [9]

$$\begin{aligned} \psi (\alpha )\sim \ln (\alpha )-\frac{1}{2\alpha }-\frac{1}{12\alpha ^2}+\frac{1}{120 \alpha ^4}-\frac{1}{252 \alpha ^6}+... \end{aligned}$$
(10)
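As a quick numerical sanity check (ours, not part of the original derivation), the truncations of (10) used below can be compared against an accurate digamma implementation; they are quite accurate for moderate to large \(\alpha\) and rougher for small \(\alpha\).

```python
import numpy as np
from scipy.special import digamma

# Compare one- and two-term truncations of (10) with scipy's digamma
for a in (0.5, 1.0, 2.0, 5.0, 10.0):
    t1 = np.log(a) - 1.0 / (2.0 * a)            # psi(a) ~ ln a - 1/(2a)
    t2 = t1 - 1.0 / (12.0 * a ** 2)             # one more term of the expansion
    print(f"alpha={a:5.1f}  psi={digamma(a):+.5f}  "
          f"err1={abs(digamma(a) - t1):.5f}  err2={abs(digamma(a) - t2):.5f}")
```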

From Eq. (9) we have \(\beta = \frac{n\alpha }{\sum _{i=1}^n \frac{1}{x_i}}\). Substituting this into (8), we obtain

$$\begin{aligned} \ln (\alpha )-\psi (\alpha )=\frac{1}{n}\sum _{i=1}^{n}\ln x_i+\ln \left( \sum _{i=1}^{n}\frac{1}{x_i}\right) - \ln (n). \end{aligned}$$

By approximating \(\psi (\alpha )\approx \ln (\alpha )-\frac{1}{2\alpha }\) from (10), we obtain

$$\begin{aligned} \widehat{\alpha } \approx \left[ \frac{2}{n}\sum _{i=1}^n\ln x_i+ 2 \ln \left( \sum _{i=1}^n\frac{1}{x_i} \right) -2\ln n \right] ^{-1},~~~~~\widehat{\beta }=\frac{n \widehat{\alpha }}{\sum _{i=1}^{n}\frac{1}{x_i}}. \end{aligned}$$
(11)

If we instead use the approximation \(\psi (\alpha ) \approx \ln \alpha -\frac{1}{2 \alpha } - \frac{1}{12 \alpha ^2}\) from (10), the normal equation becomes a quadratic in \(\alpha\); since \(\alpha >0\), only its positive root is admissible, which gives the refined approximation

$$\begin{aligned} \widehat{\alpha }\approx \frac{1 + \sqrt{1+\frac{4 \left( \frac{1}{n} \sum _{i=1}^n \ln x_i + \ln \left( \sum _{i=1}^n \frac{1}{x_i}\right) - \ln n\right) }{3}}}{4\left( \frac{1}{n} \sum _{i=1}^n \ln x_i + \ln \left( \sum _{i=1}^n \frac{1}{x_i}\right) - \ln n\right) }. \end{aligned}$$
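The following short sketch (ours; the function name is an illustrative choice, not from the paper) implements the first-order approximation (11) together with the positive root of the quadratic above for a single sample of positive observations.

```python
import numpy as np

def invgamma_mle_approx(x):
    """Approximate MLEs of (alpha, beta) for an IG(alpha, beta) sample, Eq. (11)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    # Right-hand side of the normal equation ln(alpha) - psi(alpha) = s
    s = np.mean(np.log(x)) + np.log(np.sum(1.0 / x)) - np.log(n)
    alpha_1 = 1.0 / (2.0 * s)                                    # from psi(a) ~ ln a - 1/(2a)
    alpha_2 = (1.0 + np.sqrt(1.0 + 4.0 * s / 3.0)) / (4.0 * s)   # positive root of the quadratic
    beta_hat = n * alpha_2 / np.sum(1.0 / x)
    return alpha_1, alpha_2, beta_hat

# Example usage on simulated data (illustrative only):
# from scipy.stats import invgamma
# sample = invgamma.rvs(2.5, scale=4.0, size=500, random_state=1)
# print(invgamma_mle_approx(sample))
```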

Maximum likelihood estimation for R

The main aim of this section is to derive the MLEs of R and \(R_1\) given in (5) and (6).

Now let \(x_1, x_2, \ldots , x_n\) and \(y_1, y_2, \ldots , y_m\) be two independent samples from \(IG(\alpha _1 , \beta _1)\) and \(IG(\alpha _2 , \beta _2)\), respectively. Then the log-likelihood function of \(\alpha _{1}\), \(\alpha _{2}\), \(\beta _1\) and \(\beta _2\) is given by

$$\begin{aligned} \ell (\alpha _1,\beta _1,\alpha _2,\beta _2)&= \ln L(\alpha _1,\beta _1,\alpha _2,\beta _2) \nonumber \\&= n \alpha _1 \ln \beta _1 - n\ln \Gamma (\alpha _1) - (\alpha _1+1) \sum _{i=1}^n \ln x_i - \beta _1 \sum _{i=1}^n\frac{1}{x_i} \nonumber \\&\quad + m \alpha _2 \ln \beta _2 - m\ln \Gamma (\alpha _2) - (\alpha _2+1) \sum _{j=1}^m \ln y_j - \beta _2 \sum _{j=1}^m\frac{1}{y_j}. \end{aligned}$$
(12)

Differentiating (12) with respect to \(\alpha _1\), \(\alpha _2\), \(\beta _1\) and \(\beta _2\) and equating the derivatives to zero, we get the following normal equations

$$\begin{aligned} \frac{\partial \ell }{\partial \alpha _1} = n \ln \beta _1 - n \psi (\alpha _1)-\sum _{i=1}^n \ln x_i = 0, \end{aligned}$$
$$\begin{aligned} \frac{\partial \ell }{\partial \beta _1} = \frac{n\alpha _1}{\beta _1}-\sum _{i=1}^n\frac{1}{x_i} =0, \end{aligned}$$
$$\begin{aligned} \frac{\partial \ell }{\partial \alpha _2} = m \ln \beta _2 -m \psi (\alpha _2)-\sum _{j=1}^m \ln y_j = 0, \end{aligned}$$
$$\begin{aligned} \frac{\partial \ell }{\partial \beta _2} = \frac{m\alpha _2}{\beta _2}-\sum _{j=1}^m\frac{1}{y_j} =0. \end{aligned}$$

As in the previous section, the approximate MLEs of \(\alpha _1\), \(\alpha _2\), \(\beta _1\) and \(\beta _2\) are given by

$$\begin{aligned} \widehat{\beta _1} = \frac{n \widehat{\alpha _1}}{\sum _{i=1}^n \frac{1}{x_i}} , \quad \widehat{\beta _2}= \frac{ m \widehat{\alpha _2}}{\sum _{j=1}^m \frac{1}{y_j}}, \end{aligned}$$
$$\begin{aligned} \widehat{\alpha _1} \approx \left[ \frac{2}{n}\sum _{i=1}^n\ln x_i+ 2 \ln \left( \sum _{i=1}^n\frac{1}{x_i} \right) -2\ln n \right] ^{-1}, \end{aligned}$$
$$\begin{aligned} \widehat{\alpha _2} \approx \left[ \frac{2}{m}\sum _{j=1}^m\ln y_j+ 2 \ln \left( \sum _{j=1}^m\frac{1}{y_j} \right) -2\ln m \right] ^{-1}. \end{aligned}$$

Hence, using the invariance property of MLEs, the MLEs of the parameters R and \(R_1\) are given by

$$\begin{aligned} \widehat{R} = \frac{\widehat{\beta _1}^{\widehat{\alpha _1}}}{\Gamma (\widehat{\alpha _1}) \Gamma (\widehat{\alpha _2})} \int _0^{+\infty }\Gamma \left( \widehat{\alpha _2} , \frac{\widehat{\beta _2}}{x} \right) x^{-\widehat{\alpha _1} -1} e^{\frac{-\widehat{\beta _1}}{x}} \mathrm{d}x, \end{aligned}$$
(13)

and

$$\begin{aligned} \widehat{R_1}=\left( \frac{\widehat{\beta _1}}{\widehat{\beta _1}+\widehat{\beta _2}}\right) ^{\widehat{\alpha _1}}. \end{aligned}$$
(14)
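A compact sketch of the plug-in estimators (13) and (14) is given below; it reuses the approximate MLEs of the previous section and evaluates the integral in (13) by numerical quadrature. The code and function names are our own illustrations under the assumptions stated above.

```python
import numpy as np
from scipy import integrate
from scipy.special import gammaincc        # regularized upper incomplete gamma Q(a, x)
from scipy.stats import invgamma

def approx_mle(sample):
    """Approximate MLEs (alpha, beta) for an IG sample, as in Eq. (11)."""
    z = np.asarray(sample, dtype=float)
    n = z.size
    s = np.mean(np.log(z)) + np.log(np.sum(1.0 / z)) - np.log(n)
    alpha = 1.0 / (2.0 * s)
    return alpha, n * alpha / np.sum(1.0 / z)

def mle_R_and_R1(x, y):
    """Plug-in MLEs of R (Eq. 13) and R_1 (Eq. 14) from two independent samples."""
    a1, b1 = approx_mle(x)
    a2, b2 = approx_mle(y)
    integrand = lambda t: gammaincc(a2, b2 / t) * invgamma.pdf(t, a1, scale=b1)
    R_hat, _ = integrate.quad(integrand, 0.0, np.inf)   # Eq. (13)
    R1_hat = (b1 / (b1 + b2)) ** a1                     # Eq. (14)
    return R_hat, R1_hat
```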

Bayes estimation

In this section we develop the Bayesian estimation procedure for the parameter R based on the inverted gamma and inverted exponential distributions, assuming independent gamma priors for the unknown model parameters. Let \(X_1, ..., X_n \sim IG(\alpha , \beta _1)\) and \(Y_1, ..., Y_m \sim IG(1, \beta _2)= IED(\beta _2)\), where the two samples are independent. Assume that \(\alpha\) is known, \(\beta _1 \sim Ga(a,b)\) and \(\beta _2 \sim Ga(c,d)\), where a, b, c and d are known. Since \(\beta _1\) and \(\beta _2\) are independent, the joint prior distribution of \(\beta _1\) and \(\beta _2\) is given by

$$\begin{aligned} \pi (\beta _1,\beta _2)= & {} \frac{b^a d^c}{\Gamma (a)\Gamma (c)}\beta _1^{a-1}\beta _2^{c-1}\exp ( -b \beta _1-d \beta _2) \\&\varpropto\, \beta _1^{a-1}\beta _2^{c-1}\exp \left( -b \beta _1-d \beta _2\right) . \end{aligned}$$

Therefore, the joint posterior distribution of \(\beta _1\) and \(\beta _2\) given the data is

$$\begin{aligned} \pi (\beta _1 , \beta _2 |\varvec{x}, \varvec{y})\propto &\, {} L(\varvec{x},\varvec{y}| \beta _1,\beta _2) \times \pi (\beta _1, \beta _2) \\\propto\, & {} \prod _{i=1}^n \frac{\beta _1 ^\alpha }{\Gamma (\alpha )} x_i ^{-\alpha -1} e^{\frac{-\beta _1}{x_i}} \times \prod _{j=1}^m \frac{\beta _2}{y_j^2} e^{\frac{-\beta _2}{y_j}} \times \pi (\beta _1 , \beta _2) \\\propto & {} \beta _1^{n\alpha +a-1} \beta _2^{m+c-1} e^{-\beta _1(s_1+b)-\beta _2(s_2+d)} \\\propto\, & {} Ga (n\alpha +a , s_1+b) \times Ga(m+c , s_2+d). \end{aligned}$$

where \(s_1=\sum _{i=1}^{n} \frac{1}{x_i}\) and \(s_2=\sum _{j=1}^{m}\frac{1}{y_j}\). Therefore, the posterior distributions of \(\beta _1\) and \(\beta _2\) are

$$\begin{aligned} \beta _1|\,\varvec{x}\thicksim Ga(n\alpha +a , s_1+b)~~,~~\beta _2 |\varvec{y}\,\thicksim Ga(m+c , s_2+d). \end{aligned}$$
(15)

Lemma 4.1

Let X and Y be independent random variables with \(X \sim Ga(\alpha _1, \beta _1)\) and \(Y \sim Ga (\alpha _2, \beta _2)\). Then the pdf of \(W=\frac{X}{X+Y}\) is given by

$$\begin{aligned} f_W(w)= \frac{\beta _1^{\alpha _1}\beta _2^{\alpha _2}}{B(\alpha _1,\alpha _2)} \times \frac{w^{\alpha _1-1}(1-w)^{\alpha _2-1}}{\left[ \beta _1 w+ \beta _2(1-w) \right] ^{\alpha _1+\alpha _2} }, \quad 0<w<1, \end{aligned}$$
(16)

where \(B(\cdot , \cdot )\) is the beta function.

If \(\beta _1=\beta _2=\beta\), then \(W \sim Beta(\alpha _1, \alpha _2)\). Now, if W has the pdf in (16), it is easy to show that the pdf of \(U=W^{\alpha }\) is

$$\begin{aligned} f_U(u)= \frac{\beta _1^{\alpha _1}\beta _2^{\alpha _2}}{\alpha B(\alpha _1 , \alpha _2)}\times \frac{u^{\frac{\alpha _1}{\alpha }-1}(1-u^{\frac{1}{\alpha }})^{\alpha _2-1}}{\left[ \beta _1 u^{\frac{1}{\alpha }}+\beta _2(1-u^{\frac{1}{\alpha }})\right] ^ {\alpha _1+\alpha _2}}, \quad 0<u<1. \end{aligned}$$
(17)
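A short simulation check (ours, with arbitrary illustrative parameters) of Lemma 4.1 compares draws of \(W=X/(X+Y)\) with probabilities obtained by integrating the density (16).

```python
import numpy as np
from scipy import integrate
from scipy.special import betaln

a1, b1, a2, b2 = 2.0, 1.5, 3.0, 0.7          # shapes and rate parameters (assumed)
rng = np.random.default_rng(0)
x = rng.gamma(shape=a1, scale=1.0 / b1, size=200_000)   # X ~ Ga(a1, b1), rate b1
y = rng.gamma(shape=a2, scale=1.0 / b2, size=200_000)   # Y ~ Ga(a2, b2), rate b2
w = x / (x + y)

def pdf_W(t):
    """Density (16) of W = X/(X+Y), evaluated on the log scale."""
    log_f = (a1 * np.log(b1) + a2 * np.log(b2) - betaln(a1, a2)
             + (a1 - 1) * np.log(t) + (a2 - 1) * np.log1p(-t)
             - (a1 + a2) * np.log(b1 * t + b2 * (1 - t)))
    return np.exp(log_f)

# P(W <= 0.5): empirical frequency vs. integral of (16); the two should agree
print(np.mean(w <= 0.5))
print(integrate.quad(pdf_W, 0.0, 0.5)[0])
```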

Since \(R=R_1=\left( \frac{\beta _1}{\beta _1+\beta _2}\right) ^{\alpha }\) is the transformation \(U=W^{\alpha }\) of \(W=\frac{\beta _1}{\beta _1+\beta _2}\), applying Lemma 4.1 to the posterior distributions in (15) and using (17), the posterior density of R is given by

$$\begin{aligned} \pi (r| \varvec{x}, \varvec{y})= & {} \frac{(s_1+b)^{a+n \alpha }(s_2+d)^{c+m}}{\alpha B(a+n \alpha ,c+m )} \nonumber \\&\times \frac{r^{ \frac{a+n \alpha }{\alpha }-1}(1-r^{\frac{1}{\alpha }})^{c+m-1}}{\left[ (s_1+b) r^{\frac{1}{\alpha }} + (s_2+d) (1-r^{\frac{1}{\alpha }}) \right] ^{a+c+m+n \alpha }}, \quad 0<r<1. \end{aligned}$$
(18)

To obtain the Bayes estimator of R, we consider the squared error loss function (SELF) and the LINEX loss function (LLF), given respectively by

$$\begin{aligned} L(\theta , \widehat{\theta })=(\theta -\widehat{\theta })^{2}, ~~~~~~L(\theta , \widehat{\theta })=e^{\delta (\widehat{\theta }-\theta )}-\delta (\widehat{\theta }-\theta )-1, \end{aligned}$$

where \(\delta\) is the loss parameter. Under the SELF, the Bayes estimator of R is the mean of the posterior distribution of R, which is given in the following equation.

$$\begin{aligned} \widehat{R}_{B}= &\, {} E\left[ R\,|\,\varvec{x},\varvec{y}\right] = \int _0^1 r~ \pi (r|\varvec{x},\varvec{y}) \mathrm{d}r \nonumber \\= & {} \int _0^1 \frac{(s_1+b)^{a+n \alpha }(s_2+d)^{c+m}}{\alpha B(a+n \alpha ,c+m )} \nonumber \\&\times \frac{r^{ \frac{a+n \alpha }{\alpha }}(1-r^{\frac{1}{\alpha }})^{c+m-1}}{\left[ (s_1+b) r^{\frac{1}{\alpha }} + (s_2+d) (1-r^{\frac{1}{\alpha }}) \right] ^{a+c+m+n \alpha }}\mathrm{d}r. \end{aligned}$$
(19)

Under LLF the Bayes estimator of R is

$$\begin{aligned} \widehat{R}_{BL}= & {} \frac{-1}{\delta } \ln E[\exp (-\delta R)] \nonumber \\= & {} -\frac{1}{\delta }\ln \int _0^1 e^{-\delta r} \pi (r|\varvec{x}, \varvec{y}) \mathrm{d}r \nonumber \\= & {} -\frac{1}{\delta } \ln \int _0^1 e^{-\delta r}\frac{(s_1+b)^{a+n \alpha }(s_2+d)^{c+m}}{\alpha B(a+n \alpha ,c+m )} \nonumber \\&\times \frac{r^{ \frac{a+n \alpha }{\alpha }-1}(1-r^{\frac{1}{\alpha }})^{c+m-1}}{\left[ (s_1+b) r^{\frac{1}{\alpha }} + (s_2+d) (1-r^{\frac{1}{\alpha }}) \right] ^{a+c+m+n \alpha }} \mathrm{d}r \end{aligned}$$
(20)

The above integrals cannot be evaluated analytically. Therefore, a numerical approximation technique is required; here, we suggest the Gauss quadrature method or the Lindley approximation.
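For illustration, the sketch below (our own code, not from the paper) evaluates (19) and (20) by quadrature, computing the posterior density (18) on the log scale for numerical stability; the default hyperparameters \(a=b=c=d=2\) mirror the choice used in the application section, and `delta` is the LINEX loss parameter.

```python
import numpy as np
from scipy import integrate
from scipy.special import betaln

def posterior_pdf_R(alpha, n, m, s1, s2, a, b, c, d):
    """Return the posterior density pi(r | x, y) of Eq. (18)."""
    A1, A2 = a + n * alpha, c + m            # posterior shapes of beta_1, beta_2
    B1, B2 = s1 + b, s2 + d                  # posterior rates  of beta_1, beta_2
    log_const = A1 * np.log(B1) + A2 * np.log(B2) - np.log(alpha) - betaln(A1, A2)

    def pdf(r):
        w = r ** (1.0 / alpha)
        return np.exp(log_const
                      + (A1 / alpha - 1.0) * np.log(r)
                      + (A2 - 1.0) * np.log1p(-w)
                      - (A1 + A2) * np.log(B1 * w + B2 * (1.0 - w)))
    return pdf

def bayes_estimates_R(alpha, x, y, a=2.0, b=2.0, c=2.0, d=2.0, delta=0.5):
    """Bayes estimates of R under SELF (Eq. 19) and LINEX loss (Eq. 20)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    s1, s2 = np.sum(1.0 / x), np.sum(1.0 / y)
    pdf = posterior_pdf_R(alpha, x.size, y.size, s1, s2, a, b, c, d)
    R_self, _ = integrate.quad(lambda r: r * pdf(r), 0.0, 1.0)                   # Eq. (19)
    e_term, _ = integrate.quad(lambda r: np.exp(-delta * r) * pdf(r), 0.0, 1.0)
    R_linex = -np.log(e_term) / delta                                            # Eq. (20)
    return R_self, R_linex
```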

Applications

In this section we use two real data sets to show that the inverse gamma (IGa) distribution can be a better model than several alternatives. The first data set consists of the active repair times (in hours) for an airborne communication transceiver (n = 40), originally reported by Jorgensen [10].

$$\begin{aligned} Data(X): 0.5, 0.6, 0.6, 0.7, 0.7, 0.7, 0.8, 0.8, 1, 1, 1, 1, 1.1, 1.3, 1.5, 1.5, 1.5, 1.5, 2, 2, 2.2, 2.5, 2.7, 3, 3, 3.3, 4, 4, 4.5, 4.7, 5, 5.4, 5.4, 7, 7.5, 8.8, 9, 10.2, 22, 24.5 \end{aligned}$$

We use this data set to show that the IGa distribution can be a better model than the inverted exponential (IED), generalized inverted exponential (GIED), inverse Rayleigh (IRD) and log-normal (LN) distributions. To compare the models, we use the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC). Table 1 lists the MLEs of the parameters of the fitted models together with the values of the AIC and BIC. Based on these statistics, we conclude that the IGa distribution fits better than the other models. The plots of the empirical and fitted cumulative distribution functions for the five distributions and the P–P plot for the IGa distribution are given in Fig. 1. This figure again shows that the IGa distribution gives a good fit to these data.
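The following sketch (ours) indicates how such a comparison can be carried out with generic maximum likelihood fitting; the AIC and BIC values in Table 1 come from the paper's own fits, so the numbers produced here are only indicative.

```python
import numpy as np
from scipy.stats import invgamma, lognorm

repair_times = np.array([
    0.5, 0.6, 0.6, 0.7, 0.7, 0.7, 0.8, 0.8, 1, 1, 1, 1, 1.1, 1.3, 1.5, 1.5,
    1.5, 1.5, 2, 2, 2.2, 2.5, 2.7, 3, 3, 3.3, 4, 4, 4.5, 4.7, 5, 5.4, 5.4,
    7, 7.5, 8.8, 9, 10.2, 22, 24.5])

def aic_bic(dist, data, **fixed):
    """Fit `dist` by maximum likelihood (with `fixed` parameters held) and return (AIC, BIC)."""
    params = dist.fit(data, **fixed)
    k = len(params) - len(fixed)                  # number of free parameters
    loglik = np.sum(dist.logpdf(data, *params))
    return 2 * k - 2 * loglik, k * np.log(data.size) - 2 * loglik

# Two-parameter fits (location fixed at zero), e.g. inverse gamma vs. log-normal
print("IGa:", aic_bic(invgamma, repair_times, floc=0))
print("LN :", aic_bic(lognorm,  repair_times, floc=0))
```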

Table 1 The MLEs of parameters for example 1
Fig. 1
figure 1

Empirical and theoretical CDFs and P–P plot for example 1

In the second example we consider data sets from two groups of patients suffering from head and neck cancer, originally reported in [6]. The data correspond to the survival times of 51 patients in one group treated using radiotherapy (X), whereas the 45 patients belonging to the other group were treated using a combination of radiotherapy and chemotherapy (Y). The data sets are given as follows

$$\begin{aligned} Data(X): 6.53, 7, 10.42, 14.48, 16.10, 22.70, 34, 41.55, 42, 45.28, 49.40, 53.62, 63, 83, 84, 91, 108, 112, 129, 133, 133, 139, 140, 140, 146, 149, 154, 157, 160, 160, 165, 173, 176, 218, 225, 241, 248, 273, 277, 297, 405, 417, 420, 440, 523, 583, 594, 1101, 1146, 1417. \end{aligned}$$
$$\begin{aligned} Data(Y): 12.20, 23.56, 23.74, 25.87, 31.98, 37, 41.35, 47.38, 55.46, 58.36, 63.47, 68.46, 74.48, 78.26, 81.43, 84, 92, 94, 110, 112, 119, 127, 130, 133, 140, 146, 155, 159, 173, 179, 194, 195, 209, 249, 281, 319, 339, 432, 469, 519, 633, 725, 817, 1776. \end{aligned}$$

Recently, Singh et al. [19] modelled these data using the inverted exponential (IED), inverse Rayleigh (IRD) and generalized inverted exponential (GIED) distributions. Makkar et al. [15] modelled the data using the log-normal (LN) distribution and carried out a Bayesian survival analysis.

Here we consider the IED, GIED and IGa distributions for the above data. Table 2 shows the MLEs of the parameters of the fitted models and the values of the AIC and BIC. Based on these statistics, we conclude that the IGa distribution is sometimes better than, and sometimes as good as, the other models. In Fig. 2 the empirical and fitted cumulative distribution functions for the three distributions are plotted, and Fig. 3 shows the P–P plots for the IGa distribution. These figures again illustrate that the IGa distribution fits the data well. We obtain the MLEs of \((\alpha _{i}, \beta _{i}), ~~i=1, 2\), as (0.761, 44.997) and (1.1373, 85.7317) for data sets X and Y, respectively. Therefore, the MLEs of R and \(R_1\) obtained from (13) and (14) are \(\widehat{R}=0.482\) and \(\widehat{R}_1=0.444\).
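For reference, plugging the reported MLEs into (13) and (14) can be reproduced in a few lines (our own check, using only the estimates quoted above); the printed values should be close to the quoted estimates up to numerical integration error.

```python
import numpy as np
from scipy import integrate
from scipy.special import gammaincc
from scipy.stats import invgamma

a1, b1 = 0.761, 44.997        # MLEs reported for data set X
a2, b2 = 1.1373, 85.7317      # MLEs reported for data set Y

integrand = lambda t: gammaincc(a2, b2 / t) * invgamma.pdf(t, a1, scale=b1)
R_hat = integrate.quad(integrand, 0.0, np.inf)[0]   # Eq. (13); reported value 0.482
R1_hat = (b1 / (b1 + b2)) ** a1                     # Eq. (14); reported value 0.444
print(R_hat, R1_hat)
```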

Table 2 The MLEs of parameters for example 2
Fig. 2
figure 2

Empirical and theoretical CDFs for example 2

Fig. 3
figure 3

P–P plots for data set X (left) and data set Y (right)

Following Table 2, let \(X_1, ..., X_n \sim IG(\alpha , \beta _1)\) and \(Y_1, ..., Y_m \sim IG(1, \beta _2)= IED(\beta _2)\), where the two samples are independent and \(\alpha\) is known. Taking \(\beta _1 \sim Ga(2, 2)\) and \(\beta _2 \sim Ga(2, 2)\) as priors, the Bayes estimates of R obtained from (19) and (20) are \(\widehat{R}_{B}=0.5544\), \(\widehat{R}_{BL}=0.5550~(\delta =0.5)\) and \(\widehat{R}_{BL}=0.5539~(\delta =-0.5)\).