1 Introduction

The Farlie-Gumbel-Morgenstern (FGM) family is an important and seminal class of bivariate distributions. For a given pair of marginal distribution functions (DFs) \(F_{X}(.)\) and \(F_{Y}(.),\) the FGM family is defined as \(G_{X,Y}(x,y)=F_X(x)F_Y(y)[1+\lambda {\overline{F}}_X(x){\overline{F}}_Y(y)],\) \(-1\le \lambda \le 1,\) where \({\overline{F}}_X(x)\) and \({\overline{F}}_Y(y)\) are the survival functions of the random variables (RVs) X and Y,  respectively. One important limitation of the FGM family is that the correlation coefficient between its marginals is restricted to the narrow range \(\left[ -\frac{1}{3},\frac{1}{3}\right] ,\) where the maximum value of the correlation is attained for the FGM copula, i.e., with uniform marginals (cf. Schucany et al. [33]). Accordingly, this family can only be used to model bivariate data that exhibit low correlation. For this reason, since the inception of this family, several extensions of it have been introduced in the literature in an attempt to improve the attainable correlation level. Among these extensions, we recall the one-shape-parameter Sarmanov family, which was suggested by Sarmanov [30] as a new mathematical model of hydrological processes. Its copula (with uniform marginals \(F_X(x)=x, F_Y(y)=y,\) \(0\le x,y\le 1\)) is given by

$$\begin{aligned} F_{X,Y}(x,y;\alpha )=xy\left( 1+3\alpha (1-x)(1-y)+5\alpha ^2 (1-x)(1-y)\varphi (x)\varphi (y)\right) ,~\mid \alpha \mid \le \frac{\sqrt{7}}{5},\nonumber \\ \end{aligned}$$
(1.1)

where \( \varphi (x)=2x-1.\) The corresponding PDF is given by

$$\begin{aligned} f_{X,Y}(x,y;\alpha )=1+3\alpha \varphi (x)\varphi (y)+\frac{5}{4} \alpha ^2 (3\varphi ^2(x)-1)(3\varphi ^2(y)-1). \end{aligned}$$
(1.2)

The correlation coefficient of the Sarmanov copula is \(\alpha ,\) which means that the copula’s maximum correlation coefficient is approximately 0.529 (cf. Balakrishnan and Lai [11], page 74). The Sarmanov family is thus the most efficient of the many known extensions of the FGM family, since it delivers the best improvement in the correlation level on both the positive and negative sides. Barakat et al. [17] and Husseiny et al. [21] revisited the Sarmanov family with general marginals \(F_X\) and \(F_Y\) (denoted by SAR\((\alpha )\)) and showed that this family is an extension of the FGM family and belongs to a wider family suggested by Sarmanov [29], which has many recorded applications in the literature (see, e.g., Bairamov et al. [10], and Tank and Gebizlioglu [37]). Moreover, they showed that SAR\((\alpha )\) is the only one of the extended FGM families with a copula that is radially symmetric about (0.5, 0.5). They used this property to reveal several prominent statistical properties of the concomitants of order statistics (OSs) and record values from this family, and several information measures, namely the Shannon entropy, inaccuracy measure, extropy, cumulative entropy, and Fisher information number (FIN), were studied theoretically and numerically.
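As a quick numerical sanity check of this value, the sketch below (our illustration in Python, using a simple midpoint rule; the grid size is an arbitrary choice) verifies that the correlation of the copula (1.1) equals \(\alpha ,\) so that the maximum \(\sqrt{7}/5\approx 0.529\) is attained at the endpoint of the admissible range:

```python
import numpy as np

def sarmanov_pdf(x, y, alpha):
    """Copula density (1.2), with phi(t) = 2t - 1."""
    px, py = 2 * x - 1, 2 * y - 1
    return 1 + 3 * alpha * px * py + 1.25 * alpha**2 * (3 * px**2 - 1) * (3 * py**2 - 1)

def copula_correlation(alpha, n=400):
    """Midpoint-rule estimate of corr(U, V) = 12 * (E[UV] - 1/4) for uniform marginals."""
    u = (np.arange(n) + 0.5) / n
    X, Y = np.meshgrid(u, u)
    e_uv = np.mean(X * Y * sarmanov_pdf(X, Y, alpha))  # integral over the unit square
    return 12.0 * (e_uv - 0.25)

for a in (0.2, -0.3, np.sqrt(7) / 5):
    print(a, copula_correlation(a))  # the estimate is close to a itself
```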

Due to their relevance in a wide range of scientific problems, concomitants of OSs have received considerable attention in recent years. Suppose \((X_i, Y_i), i = 1,2,...,n,\) is a random sample from a bivariate DF \(F_{X,Y}.\) If we order the sample by the \(X-\)variate and obtain the OSs \(X_{1:n}\le X_{2:n}\le ....\le X_{n:n}\) of the X sample, then the \(Y-\)variate associated with the rth OS \(X_{r:n}\) is called the concomitant of the rth OS and is denoted by \(Y_{[r:n]},r=1,...,n.\) The concept of concomitants of OSs was first introduced by David [20] and, almost simultaneously, under the name induced OSs, by Bhattacharya [19]. The biological selection problem is the most striking application of concomitants of OSs; another application is in reliability theory, see Barakat et al. [18]. Scaria and Mohan [31] investigated the concomitants of record values from the Cambanis family with logistic marginals. Moreover, Scaria and Mohan [32] investigated the concomitants of OSs based on the FGM and Cambanis families with exponentiated exponential marginals. Recently, Abd Elgawad et al. [4] studied the concomitants of m-generalized OSs of the Cambanis family and some information measures. Furthermore, Alawady et al. [7] studied the concomitants of m-generalized OSs from the Cambanis family in a general setting. For more recent works on this subject, see Abd Elgawad and Alawady [1], Abd Elgawad et al. [2,3,4,5], Alawady et al. [7,8,9], Barakat et al. [14, 16], Jafari et al. [22], and Tahmasebi et al. [35, 36].
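The pairing mechanism behind concomitants is easy to see in code. The sketch below is our illustration only: it draws an arbitrary correlated bivariate normal sample (not from SAR\((\alpha )\); the sample size and correlation are illustrative choices), orders it by the \(X-\)variate, and reads off the concomitants:

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary correlated bivariate sample (illustrative only; any F_{X,Y} works).
n, rho = 10, 0.7
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

order = np.argsort(x)   # ranks induced by the X-variate
x_os = x[order]         # order statistics X_{1:n} <= ... <= X_{n:n}
y_conc = y[order]       # concomitants Y_[1:n], ..., Y_[n:n]

print(np.round(x_os, 3))
print(np.round(y_conc, 3))
```

Note that the concomitants are simply a reshuffling of the Y sample: the ordering is done entirely through the X-variate.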

The Fisher information (FI) is a measure of how much information an observable RV carries about an unknown parameter on which its distribution depends. The FI is an important and fundamental criterion in statistical inference, physics, thermodynamics, information theory, and other fields. Traditionally, the FI has played a valuable role in statistical inference through the Cramér–Rao inequality and its association with the asymptotic properties of maximum likelihood estimators. Knowing how much information a sample contains about an unknown parameter can help determine bounds on the variance of a particular estimator of that parameter and approximate the sampling distribution of that estimator when the sample is large enough.

Consider a RV X with a PDF \(f_X(x;\theta ),\) where \(\theta \in \Theta \) is an unknown parameter with parameter space \(\Theta .\) Under certain regularity conditions (see Rao [28], p. 329, and Abo-Eleneen and Nagaraja [6]), the FI about \(\theta \in \Theta \) contained in the RV X is \(I_{\theta } (X;\theta ):= -\text{ E } \left( \frac{\partial ^{2} \ln f_X(x;\theta )}{\partial \theta ^{2}}\right) =\text{ E } \left( \frac{\partial \ln f_X(x;\theta )}{\partial \theta }\right) ^2.\) For some recent studies on the FI, one may refer to Barakat and Husseiny [13], Barakat et al. [15], and Kharazmi and Asadi [25]. Another important variant of the FI is the FIN, which is the second moment of the score function (the score function of a RV X with a PDF \(f_X\) is defined by \(\rho (x) = \frac{ \frac{\partial f_X(x;\theta )}{\partial x}}{f_X(x;\theta )}=\frac{\partial \ln f_X(x;\theta )}{\partial x}\)), where the derivative is taken with respect to x in a given PDF \(f_X(x;\theta )\) rather than with respect to the parameter \(\theta .\) The FIN is the FI for a location parameter. For some recent works on this measure, see Abd Elgawad et al. [2], Barakat and Husseiny [12], Barakat et al. [17], and Tahmasebi and Jafari [34].
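As a concrete illustration of the two definitions (our numerical sketch; the exponential example, the value of \(\theta ,\) and the integration grid are our choices), for the exponential PDF \(f(x;\theta )=\theta ^{-1}e^{-x/\theta }\) both the FI about \(\theta \) and the FIN equal \(1/\theta ^2,\) which direct numerical integration reproduces:

```python
import numpy as np

theta = 2.0
x = np.linspace(1e-6, 60.0, 600_000)   # fine grid; the tail beyond 60 is negligible
dx = x[1] - x[0]
f = np.exp(-x / theta) / theta          # exponential PDF with mean theta

# FI about theta: d ln f / d theta = (x - theta) / theta^2, so I_theta = 1/theta^2.
score_theta = (x - theta) / theta**2
fi = np.sum(score_theta**2 * f) * dx

# FIN: the score in x is d ln f / dx = -1/theta, so the FIN is also 1/theta^2.
fin = np.sum((1.0 / theta) ** 2 * f) * dx

print(fi, fin)  # both close to 1/theta^2 = 0.25
```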

One should be aware that the FI is not robust to the presence of outliers: since the FI is an expectation with respect to X, it weights all values of X equally, including large values that may be outliers. The cumulative residual Fisher information (abbreviated as CF), recently presented by Kharazmi and Balakrishnan [26], is naturally robust to the presence of outliers. This modified measure is defined as \(CF_{\theta }(X;\theta ):=\int _{-\infty }^{\infty }\left( \frac{\partial \ln \overline{F}_X(x;\theta )}{\partial \theta }\right) ^2 \overline{F}_X(x;\theta )dx,\) where \({\overline{F}}_X(x;\theta )\) is the survival function of X. It is worth mentioning that the CF can be used effectively for developing suitable goodness-of-fit tests for lifetime distributions by using empirical versions of this measure. Kharazmi and Balakrishnan [26] analogously defined the modified FIN via the survival function as \(CF(X;\theta ):=\int _{-\infty }^{\infty }\left( \frac{\partial \ln \overline{F}_X(x;\theta )}{\partial x}\right) ^2 \overline{F}_X(x;\theta )dx.\)
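As a check on these two definitions (our sketch; the exponential example and the grid are our choices), for the exponential survival function \(\overline{F}(x;\theta )=e^{-x/\theta }\) the measures have the closed forms \(CF_{\theta }(X;\theta )=2/\theta \) and \(CF(X;\theta )=1/\theta ,\) which direct numerical integration reproduces:

```python
import numpy as np

theta = 2.0
x = np.linspace(1e-6, 60.0, 600_000)
dx = x[1] - x[0]
sf = np.exp(-x / theta)                      # survival function of Exp(mean theta)

# CF about theta: d ln sf / d theta = x / theta^2; closed form CF_theta = 2/theta.
cf_theta = np.sum((x / theta**2) ** 2 * sf) * dx

# Location version: d ln sf / dx = -1/theta; closed form CF = 1/theta.
cf_loc = np.sum((1.0 / theta) ** 2 * sf) * dx

print(cf_theta, cf_loc)  # close to 2/theta = 1.0 and 1/theta = 0.5
```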

The purpose of this work is twofold. The first part (Sect. 2) reveals some new salient features of the dependence structure of SAR\((\alpha ),\) where bounds and relationships are derived for the correlation coefficient. The second part (Sects. 3, 4, and 5) is devoted to information measures: in Sect. 3, we theoretically and numerically study the FI, \(I_\alpha (X_{r:n},Y_{[r:n]};\alpha ),\) for a single pair \((X_{r:n},Y_{[r:n]});\) in Sect. 4, we theoretically and numerically discuss the FI about the mean of the exponential distribution marginals and about the shape parameter of the power function distribution marginals for SAR\((\alpha );\) and in Sect. 5, \(CF(Y_{[r:n]};\alpha )\) is theoretically and numerically investigated.

2 Dependence structure

In this section, we first obtain an upper bound for the correlation coefficient between two arbitrary continuous RVs with finite nonzero variances arising from SAR\((\alpha ),\) whose copula is defined by (1.1). Secondly, we deduce sufficient conditions under which the marginal components become uncorrelated; among these conditions is the obvious one \(\alpha =0.\) The upper bound for the correlation coefficient, given in the following theorem, is large enough to confirm the usefulness and distinction of the Sarmanov family.

Theorem 2.1

Let \(F_X(x)\) and \(F_Y(y)\) be the DFs of some RVs X and Y,  respectively. If the RVs X and Y are bivariate SAR\((\alpha )\) with finite nonzero variances and correlation coefficient, \(\rho (X,Y),\) where \(F_X=F_Y,\) then \(\rho (X,Y)\le 0.8091502.\)

Proof

First, we assume that \(F_X(x)\) and \(F_Y(y)\) are different. Then in view of (1.1) and (1.2) and by using the result of Schucany et al. [33], we get

$$\begin{aligned} COV(X,Y){} & {} =3\alpha \delta _{X}\delta _{Y}+\frac{5}{4}\alpha ^2\int _{-\infty }^{\infty } x(3\varphi ^2(F_X(x))-1)dF_X(x)\nonumber \\ {}{} & {} \quad \times \int _{-\infty }^{\infty } y(3\varphi ^2(F_Y(y))-1)dF_Y(y), \end{aligned}$$
(2.1)

where \(\delta _{j}=\int _{-\infty }^{\infty } xdF_j^{2}(x)-\int _{-\infty }^{\infty } xdF_j(x)=\int _{-\infty }^{\infty } x\varphi ^2(F_j(x))dF_j(x),\) \(~j=X,Y.\) Clearly, \(\delta _j>0,~j=X,Y,\) since \(\delta _j\) is the difference between the mean of the larger of two independent RVs from \(F_j\) and the mean of \(F_j\) (cf. Johnson and Kotz, [24], Schucany et al. [33]). On the other hand, we can easily verify that

$$\begin{aligned} W_{j}{} & {} :=\int _{-\infty }^{\infty } x(3\varphi ^2(F_j(x))-1)dF_j(x)\nonumber \\ {}{} & {} =4\int _{-\infty }^{\infty } xdF^3_j(x)-6\int _{-\infty }^{\infty } xdF^2_j(x)+2\int _{-\infty }^{\infty } xdF_j(x)\nonumber \\{} & {} =4\left( \int _{-\infty }^{\infty } xdF^3_j(x)-\int _{-\infty }^{\infty } xdF^2_j(x)\right) -2\left( \int _{-\infty }^{\infty } xdF^2_j(x)-\int _{-\infty }^{\infty } xdF_j(x)\right) \nonumber \\{} & {} :=4W_{j1}-2W_{j2}. \end{aligned}$$
(2.2)

Moreover, we can show that

$$\begin{aligned} W_{j}=0,~~\text{ if }~ F_j(x)=x,~\text{ i.e., } \text{ for } \text{ the } \text{ uniform } \text{ marginal } \text{ case }. \end{aligned}$$
(2.3)

Note that \(W_{j1}\) is the difference between the mean of the largest of three independent RVs from \(F_j\) and the mean of the larger of two independent RVs from \( F_j.\) Furthermore, \(W_{j2}\) is the difference between the mean of the larger of two independent RVs from \(F_j\) and the mean of \( F_j.\) Then we get \(W_{ji}>0,~~i=1,2.\) Upon combining (2.1) and (2.2), we get

$$\begin{aligned} {\textstyle COV}(X,Y)=3\alpha \delta _{X}\delta _{Y}+\frac{5}{4}\alpha ^2W_XW_Y, \end{aligned}$$
(2.4)

where for the uniform marginal DFs \(F_U\) and \(F_V,\) (2.3) entails that \( COV(U,V)=3\alpha \delta _{U}\delta _{V}=\frac{\alpha }{12},\) which implies the well-known result \(\rho (U,V)=\alpha .\) Now, let \(F:=F_X(x)=F_Y(y),\) where F has finite nonzero variance, \(\sigma ^2,\) and finite mean, \(\mu .\) Furthermore, let \(W:=W_j,~j=X,Y.\) Then, we get

$$\begin{aligned} W^{2}=\left( \int _{-\infty }^{\infty } x(3\varphi ^2(F(x))-1)dF(x)\right) ^2=\left( \int _{-\infty }^{\infty } (x-\mu )(3\varphi ^2(F(x))-1)dF(x)\right) ^2, \end{aligned}$$

since \(\int _{-\infty }^{\infty }(3\varphi ^2(F(x))-1)dF(x)=\int _{0}^{1}(3\varphi ^2(u)-1)du=0.\) Thus, by applying the Cauchy–Schwarz inequality, we get

$$\begin{aligned} W^{2}\le & {} \sigma ^2\int _{-\infty }^{\infty } (3\varphi ^2(F(x))-1)^2dF(x)=\sigma ^2\int _{0}^{1} (3\varphi ^2(u)-1)^2du \nonumber \\= & {} \frac{\sigma ^2}{2}\int _{-1}^{1}(3v^2-1)^2dv=0.8\sigma ^2. \end{aligned}$$
(2.5)

In view of the inequality \(\alpha \le \frac{\sqrt{7}}{5}\) and the bound \(3\delta ^2\le \sigma ^2,\) which follows from the result of Schucany et al. [33], a combination of (2.5) and (2.4) yields \(\rho (X,Y)\le \alpha +\alpha ^2\le \frac{\sqrt{7}}{5}+\frac{7}{25}\approx 0.8091502.\) This completes the proof. \(\square \)
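The two numerical ingredients of the proof are easy to verify directly (our sketch; in our reading of the proof the final bound is \(\rho (X,Y)\le \alpha +\alpha ^2\) evaluated at the extreme \(\alpha =\sqrt{7}/5\)):

```python
import numpy as np

# The quadrature constant in (2.5): (1/2) * Integral_{-1}^{1} (3v^2 - 1)^2 dv = 0.8.
v = np.linspace(-1.0, 1.0, 200_001)
g = (3 * v**2 - 1) ** 2
const = 0.5 * np.sum(0.5 * (g[1:] + g[:-1]) * (v[1] - v[0]))  # trapezoid rule
print(const)  # 0.8

# The theorem's constant: alpha + alpha^2 at the extreme alpha = sqrt(7)/5.
alpha = np.sqrt(7) / 5
print(alpha + alpha**2)  # 0.8091502...
```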

Corollary 2.1

Under the conditions of Theorem 2.1, possibly except the condition \(F_X=F_Y,\) we have \(0.8091502\le \max \rho (X,Y)\le 1.\)

Proof

The proof follows from the general relation \(\max \{x:x\in A\}\le \max \{x:x\in B\},\) if \(A\subseteq B,\) and the result of Theorem 2.1. \(\square \)

Corollary 2.2

Let the conditions of Theorem 2.1, possibly except the condition \(F_X=F_Y,\) be satisfied. Assume that \(W_X,W_Y\ne 0,\) \(\alpha \ne 0,\) and \(\left| \frac{\delta _{X}\delta _{Y}}{W_{X}W_{Y}}\right| \le \frac{\sqrt{7}}{12}.\) Then, if \(\alpha =\frac{-12}{5}\frac{\delta _{X}\delta _{Y}}{W_{X}W_{Y}},\) we get the interesting result that \(\rho (X,Y)=0,\) while X and Y are dependent.

Proof

The proof follows directly from (2.4). \(\square \)

3 FI in \((X_{r:n},Y_{[r:n]})\) about \(\alpha \) based on the copula of SAR\((\alpha )\)

Let X and Y be uniformly distributed RVs over (0, 1),  written \(X,Y\sim U(0,1),\) and let them be jointly distributed according to the Sarmanov copula (1.1). This copula is free of any unknown parameters except the parameter \(\alpha .\) The joint PDF of \((X_{r:n},Y_{[r:n]})\) is given by (cf. Abo-Eleneen and Nagaraja [6], Barakat et al. [15])

$$\begin{aligned} f_{[r:n]}(x,y;\alpha )= C_{r,n}f_{X,Y}(x,y;\alpha )F_X(x)^{r-1}(1-F_X(x))^{n-r}, \end{aligned}$$
(3.1)

where \(C_{r,n}=\frac{1}{\beta (r,n-r+1)}\) and \(\beta (a,b)=\int _{0}^{1}u^{a-1}(1-u)^{b-1}~du,~a,b>0,\) is the beta function.

3.1 Theoretical result

Before formulating Theorem 3.1 about the FI in \((X_{r:n},Y_{[r:n]}),\) we consider the set \(\Omega =\{\alpha :\,\mid 3\alpha \varphi (x)\varphi (y)+\frac{5}{4} \alpha ^2 (3\varphi ^2(x)-1)(3\varphi ^2(y)-1)\mid < 1,~ \forall ~ 0\le x,y\le 1\}.\) From now on, we deal only with \(\alpha \in \Omega \cap \Upsilon ,\) where \(\Upsilon =\{\alpha : \mid \alpha \mid \le \frac{\sqrt{7}}{5}\}.\) Since \(\alpha =0\in \Omega \cap \Upsilon ,\) the set \(\Omega \cap \Upsilon \) is not empty. On the other hand, the set \(\{\alpha :\,\mid 3\alpha \varphi (x)\varphi (y)+\frac{5}{4} \alpha ^2 (3\varphi ^2(x)-1)(3\varphi ^2(y)-1)\mid >1,~ \forall ~ 0\le x,y\le 1\}\) is empty. Therefore, \(\Omega \cup {\tilde{\Omega }}={\mathcal {U}}\) (with \(\Omega \cap {\tilde{\Omega }}=\emptyset \)), where \({\mathcal {U}}\) is the universal set and \({\tilde{\Omega }}=\{\alpha :\,\mid 3\alpha \varphi (x)\varphi (y)+\frac{5}{4} \alpha ^2 (3\varphi ^2(x)-1)(3\varphi ^2(y)-1)\mid <1,\,0\le x\le x_0,\,0\le y\le y_0,\,\text{ and }\,\mid 3\alpha \varphi (x)\varphi (y)+\frac{5}{4} \alpha ^2 (3\varphi ^2(x)-1)(3\varphi ^2(y)-1)\mid \ge 1,\, x>x_0,\,y> y_0,\,\text{ for } \text{ some }\, 0<x_0,y_0<1\}.\) In order to check whether \(\alpha \in \Omega \) for a given \(\alpha \in \Upsilon ,\) draw the function \({{\mathcal {F}}}(x,y;\alpha )=\mid 3\alpha \varphi (x)\varphi (y)+\frac{5}{4} \alpha ^2 (3\varphi ^2(x)-1)(3\varphi ^2(y)-1)\mid ,~0\le x,y\le 1,\) as a 3D diagram \((x,y,{{\mathcal {F}}}),\) using Mathematica 12. If the surface of \({{\mathcal {F}}}\) falls entirely within the cube \(\mathcal{C}=\{(x,y,z):-1\le x,y,z\le +1\},\) then \(\alpha \in \Omega ;\) otherwise, \(\alpha \notin \Omega .\) The fact that \(\Omega \cup {\tilde{\Omega }}={\mathcal {U}}\) means that there are only two possibilities: either the surface of \({{\mathcal {F}}}\) falls entirely within the cube \({\mathcal {C}}\) (the set \(\Omega \)), or a portion of the surface falls within \({\mathcal {C}}\) while the remainder falls outside it (the set \({\tilde{\Omega }}\)). Parts a, b, c, and d of Fig. 1 show how this check is carried out for some values of \(\alpha \in \Omega \cap \Upsilon .\)
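The same membership check can be automated on a grid instead of with 3D plots. The following is our Python sketch of the test (the grid resolution and the trial values \(\alpha =0.2\) and \(\alpha =0.5\) are illustrative choices):

```python
import numpy as np

def F_max(alpha, n=501):
    """Maximum of |3a*phi(x)phi(y) + (5/4)a^2(3phi^2(x)-1)(3phi^2(y)-1)| on a grid."""
    u = np.linspace(0.0, 1.0, n)        # includes the corners, where the max occurs
    X, Y = np.meshgrid(u, u)
    px, py = 2 * X - 1, 2 * Y - 1
    val = 3 * alpha * px * py + 1.25 * alpha**2 * (3 * px**2 - 1) * (3 * py**2 - 1)
    return np.abs(val).max()

def in_omega(alpha):
    return F_max(alpha) < 1

print(in_omega(0.2))   # True: the maximum is 0.8, attained at the corners
print(in_omega(0.5))   # False: the value at (0, 0) already exceeds 1
```

Here \(\alpha =0.2\) (the largest value used in Table 1) passes the check, while \(\alpha =0.5\) fails it.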

Fig. 1

3D Diagrams for checking the belonging relationship \(\alpha \in \Omega \cap \Upsilon \)

Theorem 3.1

Suppose that X and Y \(\sim U(0,1)\) with joint PDF (1.2), then for any \(1\le r\le n, \) and \(\alpha \in \Omega \cap \Upsilon ,\) the FI in \((X_{r:n},Y_{[r:n]})\) about \(\alpha \) is given by

$$\begin{aligned} I_{\alpha }(X_{r:n},Y_{[r:n]};\alpha )=C_{r,n}\sum \limits _{i=0}^{\infty }\sum \limits _{j=0}^{i}\left( {\begin{array}{c} i\!\\ \scriptstyle {j}\\ \end{array}}\right) (-1)^{i}(\alpha )^{i}\big (\frac{1}{2}\big )^{j} [I_1+I_2+I_3],\end{aligned}$$
(3.2)

where

$$\begin{aligned} I_1= & {} \sum \limits _{k=0}^{i-j+2}\sum \limits _{l=0}^{j}\sum \limits _{m=0}^{2l}\sum \limits _{s=0}^{i-j+2}\sum \limits _{t=0}^{j} \sum \limits _{p=0}^{2t}(-2)^{s+p+k+m}(-3)^{l+t}(3)^{i-j+2}(\frac{5\alpha }{2})^{j}\nonumber \\{} & {} \times \left( {\begin{array}{c} i-j+2\!\\ \scriptstyle {k}\end{array}}\right) \left( {\begin{array}{c} j\!\\ \scriptstyle {l}\end{array}}\right) \left( {\begin{array}{c} 2l\!\\ \scriptstyle {m}\end{array}}\right) \left( {\begin{array}{c} i-j+2\!\\ \scriptstyle {s}\end{array}}\right) \left( {\begin{array}{c} j\!\\ \scriptstyle {t}\end{array}}\right) \left( {\begin{array}{c} 2t\!\\ \scriptstyle {p}\end{array}}\right) \frac{\beta (k+m+r,n-r+1)}{s+p+1},\nonumber \\ \end{aligned}$$
(3.3)
$$\begin{aligned} I_2= & {} \sum \limits _{u=0}^{i-j}\sum \limits _{v=0}^{j+2}\sum \limits _{w=0}^{2v}\sum \limits _{z=0}^{i-j}\sum \limits _{c=0}^{j+2} \sum \limits _{b=0}^{2c}(-2)^{u+w+z+b}(-3)^{v+c}(3)^{i-j}(\frac{5\alpha }{2})^{j+2}\nonumber \\{} & {} \times \left( {\begin{array}{c} i-j\!\\ \scriptstyle {u}\end{array}}\right) \left( {\begin{array}{c} j+2\!\\ \scriptstyle {v}\end{array}}\right) \left( {\begin{array}{c} 2v\!\\ \scriptstyle {w}\end{array}}\right) \left( {\begin{array}{c} i-j\!\\ \scriptstyle {z}\end{array}}\right) \left( {\begin{array}{c} j+2\!\\ \scriptstyle {c}\end{array}}\right) \left( {\begin{array}{c} 2c\!\\ \scriptstyle {b}\end{array}}\right) \frac{\beta (u+w+r,n-r+1)}{z+b+1},\nonumber \\ \end{aligned}$$
(3.4)

and

$$\begin{aligned} I_3= & {} \sum \limits _{a=0}^{i-j+1}\sum \limits _{d=0}^{j+1}\sum \limits _{e=0}^{2d}\sum \limits _{f=0}^{i-j+1}\sum \limits _{g=0}^{j+1} \sum \limits _{h=0}^{2g}(-2)^{a+e+f+h}(-3)^{d+g}(3)^{i-j+1}(5\alpha )^{j+1}(\frac{1}{2})^j \nonumber \\{} & {} \times \left( {\begin{array}{c} i-j+1\!\\ \scriptstyle {a}\end{array}}\right) \left( {\begin{array}{c} j+1\!\\ \scriptstyle {d}\end{array}}\right) \left( {\begin{array}{c} 2d\!\\ \scriptstyle {e}\end{array}}\right) \left( {\begin{array}{c} i-j+1\!\\ \scriptstyle {f}\end{array}}\right) \left( {\begin{array}{c} j+1\!\\ \scriptstyle {g}\end{array}}\right) \left( {\begin{array}{c} 2g\!\\ \scriptstyle {h}\end{array}}\right) \frac{\beta (a+e+r,n-r+1)}{f+h+1}.\nonumber \\ \end{aligned}$$
(3.5)

Proof

From (1.2) and (3.1), we get

$$\begin{aligned} \ln f_{[r:n]}(x,y;\alpha )= & {} \ln C_{r,n} + \ln \left( 1+3\alpha \varphi (x)\varphi (y)+\frac{5}{4} \alpha ^2 (3\varphi ^2(x)-1)(3\varphi ^2(y)-1)\right) \\{} & {} +(r-1)\ln x + (n-r)\ln (1-x), \end{aligned}$$

Thus,

$$\begin{aligned} \frac{\partial ^{2}\ln f_{[r:n]}(x,y;\alpha )}{\partial \alpha ^{2}}= & {} \frac{\frac{5}{2}(3\varphi ^2(x)-1)(3\varphi ^2(y)-1)}{f_{X,Y}(x,y;\alpha )}\nonumber \\{} & {} -\frac{\left( 3\varphi (x)\varphi (y)+\frac{5}{2} \alpha (3\varphi ^2(x)-1)(3\varphi ^2(y)-1)\right) ^2}{f_{X,Y}^2(x,y;\alpha )}.\nonumber \\ \end{aligned}$$
(3.6)

Therefore, by using (3.1), (3.6) and the binomial expansion under the condition \(\alpha \in \Omega \cap \Upsilon ,\) the FI about the shape parameter \(\alpha \) is given as

$$\begin{aligned}{} & {} I_{\alpha }(X_{r:n},Y_{[r:n]};\alpha )\nonumber \\ {}{} & {} \quad =-\text{ E }\left( \frac{\partial ^{2}\ln f_{[r:n]}(X_{r:n},Y_{[r:n]};\alpha )}{\partial \alpha ^{2}}\right) \nonumber \\{} & {} \quad = C_{r,n} \sum \limits _{i=0}^{\infty }\sum \limits _{j=0}^{i}(-1)^{i}\left( {\begin{array}{c} i\\ \scriptstyle {j}\end{array}}\right) (\alpha )^{i}(\frac{1}{2})^{j} \nonumber \\{} & {} \qquad \times \int _{0}^{1}\int _{0}^{1}\left( 3\varphi (x)\varphi (y)+\frac{5}{2}\alpha (3\varphi ^2(x)-1)(3\varphi ^2(y)-1) \right) ^2(3\varphi (x)\varphi (y))^{i-j}\nonumber \\{} & {} \qquad \times \left( \frac{5}{2}\alpha (3\varphi ^2(x)-1)(3\varphi ^2(y)-1)\right) ^{j} x^{r-1}(1-x)^{n-r}~dxdy\nonumber \\{} & {} \quad =C_{r,n} \sum \limits _{i=0}^{\infty }\sum \limits _{j=0}^{i}(-1)^{i}\left( {\begin{array}{c} i\\ \scriptstyle {j}\end{array}}\right) (\alpha )^{i}(\frac{1}{2})^{j}[I_1+I_2+I_3], \end{aligned}$$
(3.7)

where

$$\begin{aligned} I_1= & {} \int _{0}^{1}\int _{0}^{1}(3\varphi (x)\varphi (y))^{i-j+2}(\frac{5}{2}\alpha (3\varphi ^2(x)-1)\nonumber \\ {}{} & {} \quad (3\varphi ^2(y)-1))^{j} x^{r-1}(1-x)^{n-r}dxdy \nonumber \\= & {} \int _{0}^{1}(3\varphi (x))^{i-j+2}(\frac{5}{2}\alpha (3\varphi ^2(x)-1))^{j} x^{r-1}(1-x)^{n-r}dx\nonumber \\ {}{} & {} \quad \int _{0}^{1}(\varphi (y))^{i-j+2}(3\varphi ^2(y)-1)^{j} dy \nonumber \\= & {} \sum \limits _{k=0}^{i-j+2}\sum \limits _{l=0}^{j}\sum \limits _{m=0}^{2l}\sum \limits _{s=0}^{i-j+2}\sum \limits _{t=0}^{j} \sum \limits _{p=0}^{2t}(-2)^{s+p+k+m}(-3)^{l+t}(3)^{i-j+2}(\frac{5\alpha }{2})^{j}\nonumber \\{} & {} \times \left( {\begin{array}{c} i-j+2\!\\ \scriptstyle {k}\end{array}}\right) \left( {\begin{array}{c} j\!\\ \scriptstyle {l}\end{array}}\right) \left( {\begin{array}{c} 2l\!\\ \scriptstyle {m}\end{array}}\right) \left( {\begin{array}{c} i-j+2\!\\ \scriptstyle {s}\end{array}}\right) \left( {\begin{array}{c} j\!\\ \scriptstyle {t}\end{array}}\right) \left( {\begin{array}{c} 2t\!\\ \scriptstyle {p}\end{array}}\right) \frac{\beta (k+m+r,n-r+1)}{s+p+1},\nonumber \\ \end{aligned}$$
(3.8)
$$\begin{aligned} I_2= & {} \int _{0}^{1}\int _{0}^{1}(3\varphi (x)\varphi (y))^{i-j}(\frac{5}{2}\alpha (3\varphi ^2(x)-1)\nonumber \\ {}{} & {} \quad (3\varphi ^2(y)-1))^{j+2} x^{r-1}(1-x)^{n-r}dxdy\nonumber \\= & {} \int _{0}^{1}(3\varphi (x))^{i-j}(\frac{5}{2}\alpha (3\varphi ^2(x)-1))^{j+2} x^{r-1}(1-x)^{n-r}dx\nonumber \\ {}{} & {} \quad \int _{0}^{1}(\varphi (y))^{i-j}(3\varphi ^2(y)-1)^{j+2} dy \nonumber \\= & {} \sum \limits _{u=0}^{i-j}\sum \limits _{v=0}^{j+2}\sum \limits _{w=0}^{2v}\sum \limits _{z=0}^{i-j}\sum \limits _{c=0}^{j+2} \sum \limits _{b=0}^{2c}(-2)^{u+w+z+b}(-3)^{v+c}(3)^{i-j}(\frac{5\alpha }{2})^{j+2} \nonumber \\{} & {} \times \left( {\begin{array}{c} i-j\!\\ \scriptstyle {u}\end{array}}\right) \left( {\begin{array}{c} j+2\!\\ \scriptstyle {v}\end{array}}\right) \left( {\begin{array}{c} 2v\!\\ \scriptstyle {w}\end{array}}\right) \left( {\begin{array}{c} i-j\!\\ \scriptstyle {z}\end{array}}\right) \left( {\begin{array}{c} j+2\!\\ \scriptstyle {c}\end{array}}\right) \left( {\begin{array}{c} 2c\!\\ \scriptstyle {b}\end{array}}\right) \frac{\beta (u+w+r,n-r+1)}{z+b+1},\nonumber \\ \end{aligned}$$
(3.9)

and

$$\begin{aligned} I_3= & {} \int _{0}^{1}\int _{0}^{1}(3\varphi (x)\varphi (y))^{i-j+1}(5\alpha (3\varphi ^2(x)-1)(3\varphi ^2(y)-1))^{j+1}\nonumber \\ {}{} & {} \quad (\frac{1}{2})^{j} x^{r-1}(1-x)^{n-r}dxdy \nonumber \\= & {} \int _{0}^{1}(3\varphi (x))^{i-j+1}(5\alpha (3\varphi ^2(x)-1))^{j+1}(\frac{1}{2})^{j} x^{r-1}(1-x)^{n-r}dx\nonumber \\{} & {} \times \int _{0}^{1}(\varphi (y))^{i-j+1}((3\varphi ^2(y)-1))^{j+1} dy \nonumber \\= & {} \sum \limits _{a=0}^{i-j+1}\sum \limits _{d=0}^{j+1}\sum \limits _{e=0}^{2d}\sum \limits _{f=0}^{i-j+1}\sum \limits _{g=0}^{j+1} \sum \limits _{h=0}^{2g}(-2)^{a+e+f+h}(-3)^{d+g}(3)^{i-j+1}(5\alpha )^{j+1}(\frac{1}{2})^{j}\nonumber \\{} & {} \times \left( {\begin{array}{c} i-j+1\!\\ \scriptstyle {a}\end{array}}\right) \left( {\begin{array}{c} j+1\!\\ \scriptstyle {d}\end{array}}\right) \left( {\begin{array}{c} 2d\!\\ \scriptstyle {e}\end{array}}\right) \left( {\begin{array}{c} i-j+1\!\\ \scriptstyle {f}\end{array}}\right) \left( {\begin{array}{c} j+1\!\\ \scriptstyle {g}\end{array}}\right) \left( {\begin{array}{c} 2g\!\\ \scriptstyle {h}\end{array}}\right) \frac{\beta (a+e+r,n-r+1)}{f+h+1}.\nonumber \\ \end{aligned}$$
(3.10)

Combining (3.8), (3.9), and (3.10), we get \(I_{\alpha }(X_{r:n},Y_{[r:n]};\alpha ).\) The theorem is proved. \(\square \)

3.2 Computing \(I_{\alpha }(X_{r:n},Y_{[r:n]};\alpha )\) with discussion

Table 1 displays the FI \(I_{\alpha }(X_{r:n},Y_{[r:n]};\alpha )\) as a function of \(n,~ r\le \frac{n+1}{2},\) and \(\alpha ,\) for \(n=1,3,5,15\) and \(\alpha =-0.2,-0.15,-0.1,0.1,0.15,0.2,\) where \(\alpha \in \Omega \cap \Upsilon .\) The entries are computed using the relations (3.2)–(3.5). We compute only nine terms of the infinite series in (3.2), as this already gives satisfactory accuracy. The first row of Table 1 represents the FI \(I_{\alpha }(X,Y;\alpha ).\) Since the FI about \(\alpha \) in a random sample of size n is \(nI_{\alpha }(X,Y;\alpha ),\) Table 1 allows us to compute the proportion of the sample FI contained in a single pair \((X_{r:n},Y_{[r:n]}).\) For example, when \(n=5,\) the FI about \(\alpha \) in the extreme pair ranges from \(26\%\) to \(35.5\%\) of the total information as \(\alpha \) ranges from \(-0.2\) to 0.2, while the FI in the central pair ranges from \(9\%\) to \(13\%\) of what is available in the complete sample in all cases. Table 1 may also be used to quickly extract the FI contained in a singly or multiply censored bivariate sample from SAR\((\alpha ):\) simply sum the FI over the pairs that constitute the censored sample. For example, when \(n = 5,\) the FI about \(\alpha \) in the type-II censored sample consisting of the bottom (or the top) three pairs ranges from \(52\%\) to \(65\%\) as \(\alpha \) ranges from \(-0.2\) to 0.2. The following interesting features can be extracted from Table 1:

  • \(I_{\alpha }(X_{r:n},Y_{[r:n]};\alpha )\) increases when the difference between the rank r and the sample size n increases, for \(r\le \frac{n+1}{2}\).

  • For fixed n and r,  the value of \(I_{\alpha }(X_{r:n},Y_{[r:n]};\alpha )\) is close to \(I_{\alpha }(X_{r:n},Y_{[r:n]};-\alpha ).\) Moreover, \(I_{\alpha }(X_{r:n},Y_{[r:n]};\alpha )=I_{\alpha }(X_{r:n},Y_{[r:n]};-\alpha )\) in some cases.
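For the first row of Table 1, i.e., for the single unordered pair \((X,Y),\) the FI can also be computed by direct quadrature of \(\text{E}\big (\partial \ln f_{X,Y}(X,Y;\alpha )/\partial \alpha \big )^2,\) which gives an independent check on the truncated series (3.2). The sketch below is our illustration (midpoint rule in Python; the grid size and the value \(\alpha =0.2\) are our choices):

```python
import numpy as np

def fi_pair(alpha, n=800):
    """Midpoint quadrature of the FI about alpha in one unordered pair (X, Y)."""
    u = (np.arange(n) + 0.5) / n
    X, Y = np.meshgrid(u, u)
    px, py = 2 * X - 1, 2 * Y - 1
    f = 1 + 3 * alpha * px * py + 1.25 * alpha**2 * (3 * px**2 - 1) * (3 * py**2 - 1)
    df = 3 * px * py + 2.5 * alpha * (3 * px**2 - 1) * (3 * py**2 - 1)  # df/d alpha
    return np.mean(df**2 / f)   # I_alpha(X, Y; alpha)

print(fi_pair(0.2), fi_pair(-0.2))
```

The equality of the two printed values reflects the radial symmetry of the copula about (0.5, 0.5).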

Table 1 FI for \((X_{r:n},Y_{[r:n]})\) about the parameter \(\alpha \)

4 FI in concomitant \(Y_{[r:n]}\) based on some distributions

Barakat et al. [17] studied the concomitants of OSs based on the Sarmanov family with generalized exponential marginal DFs, where the DF of the generalized exponential distribution is \(F_X(x)=\left( 1-e^{-\theta x}\right) ^{a},\) \( x>0,~a, \theta > 0,\) denoted by \(GE(\theta ;a).\) Barakat et al. [17] expressed the marginal PDF of the concomitant \(Y_{[r:n]}\) as

$$\begin{aligned} f_{[r:n]}(y;\alpha ,\theta ){} & {} =\left( 1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}\right) f_Y(y)\nonumber \\ {}{} & {} \quad + \left( 3\Delta ^{(\alpha )}_{1,r:n}-\frac{15}{2}\Delta ^{(\alpha )}_{2,r:n}\right) f_{V_1}(y)+5\Delta ^{(\alpha )}_{2,r:n}f_{V_2}(y), \end{aligned}$$
(4.1)

where \(V_{1}\sim \text{ GE }(\theta ;2a),\) \( V_{2}\sim \text{ GE }(\theta ;3a)\) , \(\Delta ^{(\alpha )}_{1,r:n}=\frac{\alpha (2r-n-1)}{n+1}\) and \(\Delta ^{(\alpha )}_{2,r:n}=2\alpha ^2 \left[ 1-6\frac{r(n-r+1)}{(n+1)(n+2)}\right] .\)

Proposition 1

Let the FI associated with concomitants of OSs based on SAR\((\alpha )\) with marginal DFs \(GE(\theta ;a)\) about the unknown parameter \(\theta \) be denoted by \(I_{\theta }(Y_{[r:n]};\alpha ,\theta ).\) Then,

  1. 1.

    \(I_{\theta }(Y_{[r:n]};-\alpha ,\theta )=I_{\theta }(Y_{[n-r+1:n]};\alpha ,\theta ).\)

  2. 2.

    \(I_{\theta }(Y_{[\frac{n+1}{2}:n]};\alpha ,\theta )=I_{\theta }(Y_{[\frac{n+1}{2}:n]};-\alpha ,\theta ).\)

Proof

From (4.1), we get

$$\begin{aligned} I_{\theta }(Y_{[r:n]};-\alpha ,\theta )= & {} \text{ E }\left[ \frac{\partial \ln f_{[r:n]}(Y_{[r:n]};-\alpha ,\theta )}{\partial \theta }\right] ^{2}. \end{aligned}$$

By applying the easy-check relations

$$\begin{aligned} \Delta ^{(\alpha )}_{1,r:n}= & {} \frac{\alpha (2r-n-1)}{n+1}=\Delta ^{(-\alpha )}_{1,n-r+1:n}, \\ \Delta ^{(\alpha )}_{2,r:n}= & {} 2\alpha ^2 \left[ 1-6\frac{r(n-r+1)}{(n+1)(n+2)}\right] =\Delta ^{(\alpha )}_{2,n-r+1:n},~\text{ and }~\Delta ^{(\alpha )}_{2,r:n}=\Delta ^{(-\alpha )}_{2,r:n}, \end{aligned}$$

we immediately get the relation \(f_{[r:n]}(y;-\alpha ,\theta )= f_{[n-r+1:n]}(y;\alpha ,\theta ).\) The first part of the proposition is thus proved. For the second part, we have \(\Delta ^{(\alpha )}_{1,\frac{n+1}{2}:n}=\Delta ^{(-\alpha )}_{1,\frac{n+1}{2}:n}=0,\) so that \(f_{[\frac{n+1}{2}:n]}(y;\alpha ,\theta )= f_{[\frac{n+1}{2}:n]}(y;-\alpha ,\theta ).\) This completes the proof. \(\square \)
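The identity \(f_{[r:n]}(y;-\alpha ,\theta )= f_{[n-r+1:n]}(y;\alpha ,\theta )\) behind part 1 is easy to confirm numerically from (4.1). Below is our sketch with \(GE(\theta ;a)\) marginals (the values \(\theta =1,\) \(a=2,\) \(n=5,\) and \(r=2\) are arbitrary illustrative choices):

```python
import numpy as np

def ge_pdf(y, theta, a):
    """PDF of the generalized exponential GE(theta; a)."""
    return a * theta * np.exp(-theta * y) * (1 - np.exp(-theta * y)) ** (a - 1)

def concomitant_pdf(y, r, n, alpha, theta, a):
    """Marginal PDF (4.1) of Y_[r:n] under SAR(alpha) with GE(theta; a) marginals."""
    d1 = alpha * (2 * r - n - 1) / (n + 1)
    d2 = 2 * alpha**2 * (1 - 6 * r * (n - r + 1) / ((n + 1) * (n + 2)))
    return ((1 - 3 * d1 + 2.5 * d2) * ge_pdf(y, theta, a)
            + (3 * d1 - 7.5 * d2) * ge_pdf(y, theta, 2 * a)
            + 5 * d2 * ge_pdf(y, theta, 3 * a))

y = np.linspace(0.01, 10.0, 200)
lhs = concomitant_pdf(y, r=2, n=5, alpha=-0.2, theta=1.0, a=2.0)
rhs = concomitant_pdf(y, r=4, n=5, alpha=0.2, theta=1.0, a=2.0)
print(np.max(np.abs(lhs - rhs)))  # ~0: the two densities coincide
```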

4.1 FI in \(Y_{[r:n]}\) about \(\text{ E }(Y)\) of exponential distribution marginal

By putting \(a=1\) in (4.1), we get the marginal PDF of the concomitant \(Y_{[r:n]}\) based on the exponential distribution as

$$\begin{aligned}{} & {} f_{[r:n]}(y;\alpha ,\theta )\\ {}{} & {} \quad =\frac{1}{\theta }\exp (\frac{-y}{\theta })+3\Delta ^{(\alpha )}_{1,r:n}\left( \frac{2}{\theta }\exp (\frac{-y}{\theta })\left( 1-\exp (\frac{-y}{\theta })\right) -\frac{1}{\theta }\exp (\frac{-y}{\theta })\right) \\{} & {} \qquad +\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n} \left( \frac{1}{\theta }\exp (\frac{-y}{\theta })\right. \left. -\frac{6}{\theta }\exp (\frac{-y}{\theta })\left( 1- \exp (\frac{-y}{\theta })\right) +\frac{6}{\theta }\exp (\frac{-y}{\theta })(1-\exp (\frac{-y}{\theta }))^2\right) . \end{aligned}$$

This expression, after some algebra, can be written as

$$\begin{aligned} f_{[r:n]}(y;\alpha ,\theta )=\frac{1}{\theta }\exp (\frac{-y}{\theta })\left( A_{1}+A_{2}\exp (\frac{-y}{\theta })+A_{3}\exp (\frac{-2y}{\theta })\right) , \end{aligned}$$

where \(A_{1}=\left( 1+3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}\right) ,\) \(A_{2}=-\left( 6\Delta ^{(\alpha )}_{1,r:n}+15\Delta ^{(\alpha )}_{2,r:n}\right) ,\) and \(A_{3}=\left( 15\Delta ^{(\alpha )}_{2,r:n}\right) \). Therefore,

$$\begin{aligned}{} & {} \frac{\partial \ln f_{[r:n]}(y;\alpha ,\theta )}{\partial \theta }\\{} & {} \quad =\frac{1}{\theta }\left( \frac{y}{\theta }-1+\frac{y}{\theta }\frac{A_{2}\exp (\frac{-y}{\theta })+2A_{3}\exp (\frac{-2y}{\theta })}{A_{1}+A_{2} \exp (\frac{-y}{\theta })+A_{3}\exp (\frac{-2y}{\theta })}\right) \\ {}{} & {} \quad =\frac{1}{\theta }\left( \frac{2y}{\theta }-1-\frac{A_{1}\frac{y}{\theta }}{A_{1}+A_{2}\exp (\frac{-y}{\theta })+A_{3}\exp (\frac{-2y}{\theta })}+ \frac{A_{3}\frac{y}{\theta }\exp (\frac{-2y}{\theta })}{A_{1}+A_{2}\exp (\frac{-y}{\theta })+A_{3}\exp (\frac{-2y}{\theta })}\right) . \end{aligned}$$

Thus,

$$\begin{aligned} \left( \frac{\partial \ln f_{[r:n]}(y;\alpha ,\theta )}{\partial \theta }\right) ^{2}= & {} \frac{1}{\theta ^{2}}\left( 4w^{2}+1+\frac{A_{1}^{2}w^{2}}{(A_{1}+A_{2}e^{-w}+A_{3}e^{-2w})^{2}}\right. \nonumber \\{} & {} \left. +\frac{A_{3}^{2}w^{2} e^{-4w}}{\left( A_{1}+A_{2}e^{-w}+A_{3}e^{-2w}\right) ^{2}}-4w -\frac{4A_{1}w^{2}}{A_{1}+A_{2}e^{-w}+A_{3}e^{-2w}}\right. \nonumber \\{} & {} \left. +\frac{4A_{3}w^{2}e^{-2w}}{A_{1}+A_{2} e^{-w}+A_{3}e^{-2w}}+\frac{2A_{1}w}{A_{1}+A_{2}e^{-w}+A_{3}e^{-2w}}\right. \nonumber \\{} & {} \left. -\frac{2A_{3}we^{-2w}}{A_{1}+A_{2}e^{-w}+A_{3}e^{-2w}}-\frac{2A_{1}A_{3}w^{2}e^{-2w}}{\left( A_{1}+A_{2} e^{-w}+A_{3}e^{-2w}\right) ^{2}}\right) ,\nonumber \\ \end{aligned}$$
(4.2)

where \(w=\frac{y}{\theta }.\) On the other hand, the PDF of the RV \(W=\frac{Y_{[r:n]}}{\theta }\) is \(f_{W}(w)=e^{-w}(A_{1}\) \(+A_{2}e^{-w}+A_{3}e^{-2w}).\) Therefore, the relation (4.2) yields

$$\begin{aligned} I_{\theta }(Y_{[r:n]};\alpha ,\theta )=\int _{0}^{\infty }\left( \frac{\partial \ln f_{[r:n]}(y;\alpha ,\theta )}{\partial \theta }\bigg |_{y=\theta w}\right) ^{2}f_{W}(w)~dw=\frac{1}{\theta ^{2}}\sum \limits _{i=1}^{10}T_{i}, \end{aligned}$$

where

$$\begin{aligned} T_{1}= & {} 4\int _{0}^{\infty }w^{2}e^{-w}\left( A_{1}+A_{2}e^{-w}+A_{3}e^{-2w}\right) ~dw=4\left( 2A_{1}+\frac{A_{2}}{4}+ \frac{2A_{3}}{27}\right) , \\ T_{2}= & {} \int _{0}^{\infty }e^{-w}\left( A_{1}+A_{2}e^{-w}+A_{3}e^{-2w}\right) ~dw=1, \\ T_{3}= & {} A_{1}^{2}\int _{0}^{\infty }\frac{w^{2}e^{-w}}{A_{1}+A_{2}e^{-w}+A_{3}e^{-2w}}~dw,\qquad T_{4}=A_{3}^{2}\int _{0}^{\infty }\frac{w^{2}e^{-5w}}{A_{1}+ A_{2}e^{-w}+A_{3} e^{-2w}}~dw, \\ T_{5}= & {} -4\int _{0}^{\infty }w e^{-w}\left( A_{1}+A_{2}e^{-w}+A_{3}e^{-2w}\right) ~dw=-4\left( A_{1}+\frac{A_{2}}{4}+\frac{A_{3}}{9}\right) , \\ T_{6}= & {} -4A_{1}\int _{0}^{\infty }w^{2}e^{-w}~dw=-8A_{1},\qquad T_{7}=4A_{3}\int _{0}^{\infty }w^{2}e^{-3w}~dw=\frac{8A_{3}}{27}, \\ T_{8}= & {} 2A_{1}\int _{0}^{\infty }we^{-w}~dw=2A_{1},\qquad T_{9}=-2A_{3}\int _{0}^{\infty }we^{-3w}~dw=\frac{-2A_{3}}{9}, \end{aligned}$$

and

$$\begin{aligned} T_{10}=-2A_{1}A_{3}\int _{0}^{\infty }\frac{w^{2} e^{-3w}}{A_{1}+A_{2}e^{-w}+A_{3}e^{-2w}}~dw. \end{aligned}$$

Therefore, we get

$$\begin{aligned} I_{\theta }(Y_{[r:n]};\alpha ,\theta )=\frac{1}{\theta ^{2}}\left( 1-2A_{1}-\frac{2A_{3}}{27}+T_3+T_4+T_{10} \right) .\end{aligned}$$
(4.3)
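The collapse of the seven closed-form terms \(T_1,T_2,T_5,\ldots ,T_9\) into the constant \(1-2A_1-\frac{2A_3}{27}\) of (4.3) can be verified symbolically. A minimal sketch with sympy, where `d1` and `d2` stand for \(\Delta ^{(\alpha )}_{1,r:n}\) and \(\Delta ^{(\alpha )}_{2,r:n}\) and are kept as free symbols:

```python
import sympy as sp

w = sp.symbols('w', positive=True)
d1, d2 = sp.symbols('d1 d2', real=True)

# A-coefficients as defined above
A1 = 1 + 3*d1 + sp.Rational(5, 2)*d2
A2 = -(6*d1 + 15*d2)
A3 = 15*d2

# PDF of W = Y_{[r:n]}/theta
fW = sp.exp(-w)*(A1 + A2*sp.exp(-w) + A3*sp.exp(-2*w))

# The seven T_i that admit closed forms (T3, T4, T10 retain the denominator)
T1 = sp.integrate(4*w**2*fW, (w, 0, sp.oo))
T2 = sp.integrate(fW, (w, 0, sp.oo))            # = 1, since fW is a proper density
T5 = sp.integrate(-4*w*fW, (w, 0, sp.oo))
T6 = sp.integrate(-4*A1*w**2*sp.exp(-w), (w, 0, sp.oo))
T7 = sp.integrate(4*A3*w**2*sp.exp(-3*w), (w, 0, sp.oo))
T8 = sp.integrate(2*A1*w*sp.exp(-w), (w, 0, sp.oo))
T9 = sp.integrate(-2*A3*w*sp.exp(-3*w), (w, 0, sp.oo))

closed = sp.simplify(T1 + T2 + T5 + T6 + T7 + T8 + T9)
# closed should coincide with the constant part of (4.3)
target = 1 - 2*A1 - sp.Rational(2, 27)*A3
```

The check holds identically in `d1` and `d2`, confirming that the normalization \(A_1+\frac{A_2}{2}+\frac{A_3}{3}=1\) is what reduces the sum to the stated constant.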

We use Mathematica 12 to compute the improper integrals \(T_3,T_4,\) and \(T_{10};\) consequently, the FI \(I_{\theta }(Y_{[r:n]};\alpha ,\theta )\) can be computed by using (4.3). Table 2 provides the values of \(I_{\theta }(Y_{[r:n]};\alpha ,\theta )\) for \(n=5,15\) and \(\theta =1.\) From Table 2, the following properties can be extracted:

  • In the vast majority of cases, \(I_{\theta }(Y_{[r:n]};\alpha ,1)\) increases as the difference \(n-r\) decreases. In contrast, \(I_{\theta }(Y_{[r:n]};-\alpha ,1)\) almost always increases as \(n-r\) increases. Moreover, Table 2 reveals that the greatest values of the FI are almost always attained at the maximum OSs.

  • Generally, we have \(I_{\theta }(Y_{[\frac{n+1}{2}:n]};\alpha ,1)=I_{\theta }(Y_{[\frac{n+1}{2}:n]};-\alpha ,1).\)

  • \(I_{\theta }(Y_{[r:n]};\alpha ,1)=I_{\theta }(Y_{[n-r+1:n]};-\alpha ,1),\) which endorses Proposition 2.

Table 2 FI in \(Y_{[r:n]}\) for \(\theta =1\)
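Readers without Mathematica can reproduce entries of this kind numerically. The sketch below uses scipy, assumes illustrative placeholder values for \(\Delta ^{(\alpha )}_{1,r:n}\) and \(\Delta ^{(\alpha )}_{2,r:n}\) (their actual values depend on \(\alpha ,\) r,  and n), and cross-checks (4.3) against a direct integration of the squared score (4.2):

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical placeholder values for Delta^{(alpha)}_{1,r:n} and
# Delta^{(alpha)}_{2,r:n}; the true values depend on alpha, r and n.
d1, d2 = 0.2, 0.05
A1, A2, A3 = 1 + 3*d1 + 2.5*d2, -(6*d1 + 15*d2), 15*d2
theta = 1.0

den = lambda w: A1 + A2*np.exp(-w) + A3*np.exp(-2*w)
f_W = lambda w: np.exp(-w)*den(w)               # PDF of W = Y_{[r:n]}/theta

# The three integrals of (4.3) that have no closed form
T3 = quad(lambda w: A1**2*w**2*np.exp(-w)/den(w), 0, np.inf)[0]
T4 = quad(lambda w: A3**2*w**2*np.exp(-5*w)/den(w), 0, np.inf)[0]
T10 = quad(lambda w: -2*A1*A3*w**2*np.exp(-3*w)/den(w), 0, np.inf)[0]
FI = (1 - 2*A1 - 2*A3/27 + T3 + T4 + T10)/theta**2     # relation (4.3)

# Direct check: integrate the squared score of (4.2) against f_W
score = lambda w: 2*w - 1 - A1*w/den(w) + A3*w*np.exp(-2*w)/den(w)
FI_direct = quad(lambda w: score(w)**2*f_W(w), 0, np.inf)[0]/theta**2
```

The two routes agree to quadrature precision, which also serves as a sanity check on the sign pattern of the ten \(T_i\) terms.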

4.2 FI in \(Y_{[r:n]}\) about the shape parameter of power distribution marginal

Let \(f_{Y}(y)=c y^{c-1},\) \(c>0,\) \( 0\le y\le 1.\) By using (4.1) we get the marginal PDF of the concomitant \(Y_{[r:n]}\) based on the power function distribution as:

$$\begin{aligned} f_{[r:n]}(y;\alpha ,c){} & {} =cy^{c-1}+3\Delta ^{(\alpha )}_{1,r:n}\left( 2cy^{2c-1}-cy^{c-1}\right) \\ {}{} & {} \quad +\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}\left( cy^{c-1}-6cy^{2c-1}+6cy^{3c-1}\right) . \end{aligned}$$

This expression, after some algebra, can be written as \(f_{[r:n]}(y;\alpha ,c)=cy^{c-1}\left( B_1+B_2y^{c}+B_3y^{2c}\right) ,\) where \(B_{1}=\left( 1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}\right) ,\) \(B_{2}=\left( 6\Delta ^{(\alpha )}_{1,r:n}-15\Delta ^{(\alpha )}_{2,r:n}\right) ,\) and \(B_{3}=\left( 15\Delta ^{(\alpha )}_{2,r:n}\right) \). Therefore,

$$\begin{aligned} \frac{\partial \ln f_{[r:n]}(y;\alpha ,c)}{\partial c}=\frac{1}{c}+\ln y+\frac{B_{2}y^{c}\ln y+2B_{3}y^{2c}\ln y}{B_{1}+B_{2} y^c+B_{3}y^{2c}}, \end{aligned}$$

which implies

$$\begin{aligned} \left( \frac{\partial \ln f_{[r:n]}(y;\alpha ,c)}{\partial c}\right) ^{2}= & {} \frac{1}{c^{2}}+(\ln y)^{2}+\left( \frac{B_{2}y^{c}\ln y\!+\!2B_{3}y^{2c}\ln y}{B_{1}\!+\!B_{2}y^{c}\!+\!B_{3}y^{2c}}\right) ^{2}\!+\!\frac{2\ln y}{c}\nonumber \\{} & {} +\frac{2}{c}\left( \frac{B_{2}y^{c}\ln y\!+\!2B_{3}y^{2c}\ln y}{B_{1}\!+\!B_{2}y^{c}\!+\!B_{3}y^{2c}}\right) +2\left( \!\frac{B_{2}y^{c}\ln y\!+\!2B_{3}y^{2c}\ln y}{B_{1}+B_{2}y^{c}\!+\!B_{3}y^{2c}}\!\right) \!\ln y.\nonumber \\ \end{aligned}$$
(4.4)

Thus, (4.4) yields

$$\begin{aligned} I_{c}(Y_{[r:n]};\alpha ,c)=\int _{0}^{1}\left( \frac{\partial \ln f_{[r:n]}(y;\alpha ,c)}{\partial c}\right) ^{2}f_{[r:n]}(y;\alpha ,c)~dy=\sum \limits _{i=1}^{6}k_{i}, \end{aligned}$$

where

$$\begin{aligned} k_{1}= & {} \frac{1}{c^{2}}\int _{0}^{1}c y^{c-1}\left( B_{1}+B_{2}y^{c}+B_{3}y^{2c}\right) dy=\frac{1}{c^{2}}, \\ k_{2}= & {} c\int _{0}^{1}y^{c-1}\left( B_{1}+B_{2}y^{c}+B_{3}y^{2c}\right) (\ln y)^{2}dy=\frac{27(8B_{1}+B_{2})+8B_{3}}{108c^{2}}, \\ k_{3}= & {} c\int _{0}^{1}y^{c-1}\frac{\left( B_{2}y^{c}\ln y+2B_{3}y^{2c}\ln y\right) ^{2}}{B_{1}+B_{2}y^{c}+B_{3}y^{2c}}dy, \\ k_{4}= & {} 2\int _{0}^{1}y^{c-1}(B_{1}+B_{2}y^{c}+B_{3} y^{2c})\ln y dy=-\frac{36B_{1}+9B_{2}+4B_{3}}{18c^{2}}, \\ k_{5}= & {} 2\int _{0}^{1}y^{c-1}\left( B_{2}y^{c}\ln y+2B_{3}y^{2c}\ln y\right) dy=-\frac{9B_{2}+8B_{3}}{18c^{2}}, \end{aligned}$$

and

$$\begin{aligned} k_{6}=2c\int _{0}^{1}y^{c-1}\left( B_{2}y^{c}\ln y+2B_{3}y^{2c}\ln y\right) \ln y dy=\frac{27B_{2}+16B_{3}}{54c^{2}}. \end{aligned}$$

Therefore, we get \(I_{c}(Y_{[r:n]};\alpha ,c)=\frac{1}{c^{2}}\left( 1-\frac{1}{4}B_{2}-\frac{8}{27}B_{3}\right) +k_{3}.\) Table 3 provides the values of \(I_{c}(Y_{[r:n]};\alpha ,c)\) for \(n=5,15\) and \(c=2.\) From Table 3, the following properties can be extracted:

  • Generally, we have that \(I_{c}(Y_{[r:n]};\alpha ,2)\) increases when the difference \(n-r\) increases. In contrast, \(I_{c}(Y_{[r:n]};-\alpha ,2)\) increases when the difference \(n-r\) decreases. Moreover, Table 3 reveals that the greatest values of the FI are obtained at the maximum OSs.

  • Generally, we have that \(I_{c}(Y_{[\frac{n+1}{2}:n]};\alpha ,2)=I_{c}(Y_{[\frac{n+1}{2}:n]};-\alpha ,2).\)

  • \(I_{c}(Y_{[r:n]};\alpha ,2)\) =\(I_{c}(Y_{[n-r+1:n]};-\alpha ,2).\)

Table 3 FI in \(Y_{[r:n]}\) for \(c\) at \(c=2\)
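As in the location-parameter case, the entries of this kind can be reproduced without Mathematica. The sketch below evaluates \(k_3\) numerically, again with hypothetical placeholder values for the \(\Delta\)-coefficients, and cross-checks the closed form of \(I_{c}(Y_{[r:n]};\alpha ,c)\) against a direct integration of the squared score (4.4):

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical placeholder values for the Delta-coefficients
d1, d2 = 0.2, 0.05
B1, B2, B3 = 1 - 3*d1 + 2.5*d2, 6*d1 - 15*d2, 15*d2
c = 2.0

den = lambda y: B1 + B2*y**c + B3*y**(2*c)
pdf = lambda y: c*y**(c - 1)*den(y)             # f_{[r:n]}(y; alpha, c)

# k3: the only k_i without a closed form
k3 = quad(lambda y: c*y**(c - 1)*((B2*y**c + 2*B3*y**(2*c))*np.log(y))**2/den(y),
          0, 1)[0]
FI = (1 - B2/4 - 8*B3/27)/c**2 + k3

# Direct check: integrate the squared score of (4.4) against the PDF
score = lambda y: 1/c + np.log(y) + (B2*y**c + 2*B3*y**(2*c))*np.log(y)/den(y)
FI_direct = quad(lambda y: score(y)**2*pdf(y), 0, 1)[0]
```

Note that the logarithmic singularity of the score at \(y=0\) is integrable, since the PDF vanishes like \(y^{c-1}\) there.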

5 Cumulative residual FI in \(Y_{[r:n]}\)

Theorem 5.1

Let \(F_{[r:n]}(y;\alpha )\) be the DF of the concomitant \(Y_{[r:n]}\) based on SAR\((\alpha ).\) Then, the CF for the location parameter based on \(Y_{[r:n]}\) is given by

$$\begin{aligned} CF(Y_{[r:n]};\alpha ){} & {} =\int _{-\infty }^{\infty }\left( \frac{\partial \ln \overline{F}_{[r:n]}(y;\alpha )}{\partial y}\right) ^2\overline{F}_{[r:n]}(y;\alpha )dy\\ {}{} & {} = CF(Y)+\zeta (\alpha )+2\eta (\alpha )+\xi (\alpha ),~ \end{aligned}$$

where

$$\begin{aligned}\zeta (\alpha )= & {} \int _{-\infty }^{\infty }\left( \frac{\partial \ln \overline{F}_Y(y)}{\partial y}\right) ^2 \left( 3\Delta ^{(\alpha )}_{1,r:n}F_Y(y)+ \frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}F_Y(y)\varphi (F_Y(y))\right) \overline{F}_Y(y)dy,\\ \eta (\alpha )= & {} -\int _{-\infty }^{\infty }\left( 3\Delta ^{(\alpha )}_{1,r:n}- \frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}+10\Delta ^{(\alpha )}_{2,r:n}F_Y(y)\right) f^{2}_Y(y)dy, \end{aligned}$$

and

$$\begin{aligned} \xi (\alpha )= & {} \int _{-\infty }^{\infty }\frac{\left[ f_Y(y)\left( 3\Delta ^{(\alpha )}_{1,r:n}-\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}+10\Delta ^{(\alpha )}_{2,r:n} F_Y(y)\right) \right] ^2}{1+(3\Delta ^{(\alpha )}_{1,r:n} -\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n})F_Y(y)+5\Delta ^{(\alpha )}_{2,r:n}F^2_Y(y)}\overline{F}_Y(y)dy. \end{aligned}$$

Proof

By using (4.1), the CF of \( Y_{[r:n]}\) is given by

$$\begin{aligned}{} & {} CF(Y_{[r:n]};\alpha )\\{} & {} \quad =\int _{-\infty }^{\infty }\left( \frac{\partial \ln \overline{F}_Y(y)}{\partial y}+ \frac{\partial \ln \left( 1+3\Delta ^{(\alpha )}_{1,r:n}F_Y(y)+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}F_Y(y)\varphi (F_Y(y))\right) }{\partial y}\right) ^2 \\{} & {} \qquad \times \left( \overline{F}_Y(y)\left[ 1+3\Delta ^{(\alpha )}_{1,r:n}F_Y(y)+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}F_Y(y)\varphi (F_Y(y))\right] \right) dy \\{} & {} \quad =\int _{-\infty }^{\infty }\left( \frac{\partial \ln \overline{F}_Y(y)}{\partial y}\right) ^2 \overline{F}_Y(y)dy+\int _{-\infty }^{\infty }\left( \frac{\partial \ln \overline{F}_Y(y)}{\partial y}\right) ^2\\{} & {} \qquad \times \left( 3\Delta ^{(\alpha )}_{1,r:n}F_Y(y)+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}F_Y(y)\varphi (F_Y(y))\right) \overline{F}_Y(y)dy\\{} & {} \qquad +\int _{-\infty }^{\infty } \left( \frac{\partial \ln \left( 1+3\Delta ^{(\alpha )}_{1,r:n}F_Y(y)+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}F_Y(y)\varphi (F_Y(y))\right) }{\partial y}\right) ^2 \nonumber \\{} & {} \qquad \times \left( 1+3\Delta ^{(\alpha )}_{1,r:n}F_Y(y)+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}F_Y(y)\varphi (F_Y(y))\right) \overline{F}_Y(y)dy\\{} & {} \qquad +2\int _{-\infty }^{\infty } \left( \frac{\partial \ln \overline{F}_Y(y)}{\partial y}\right) \times \left( \frac{\partial \ln \left( 1+3\Delta ^{(\alpha )}_{1,r:n}F_Y(y)+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}F_Y(y)\varphi (F_Y(y))\right) }{\partial y}\right) \nonumber \\ {}{} & {} \qquad \times \left( 1+3\Delta ^{(\alpha )}_{1,r:n}F_Y(y)+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}F_Y(y)\varphi (F_Y(y))\right) \overline{F}_Y(y)dy. \end{aligned}$$

The first integral is \(CF(Y)\) and the second is \(\zeta (\alpha );\) after simplification, the third and fourth integrals reduce to \(\xi (\alpha )\) and \(2\eta (\alpha ),\) respectively.

\(\square \)

Proposition 2

Let \({\overline{F}}_{[r:n]}(y;\alpha )\) and \(R_{[r:n]}(y;\alpha )\) be the survival and hazard functions of the concomitant \( Y_{[r:n]}\) from \(SAR(\alpha ),\) respectively, where \(R_{[r:n]}(y;\alpha )=\frac{f_{[r:n]}(y;\alpha )}{\overline{F}_{[r:n]}(y;\alpha )}.\) Then,

$$\begin{aligned} CF(Y_{[r:n]};\alpha ) = E\left[ R_{[r:n]}(Y_{[r:n]};\alpha )\right] \ge \frac{1}{E(Y_{[r:n]})}. \end{aligned}$$

Proof

From the definition of \(CF(Y_{[r:n]};\alpha )\) and by using the result of Nanda [27] that \(E[R_{F_{X}}(X)]\ge \frac{1}{E(X)},\) we get

$$\begin{aligned} CF(Y_{[r:n]};\alpha )= & {} \int _{-\infty }^{\infty }\left( \frac{\partial \ln \overline{F}_{[r:n]}(y;\alpha )}{\partial y}\right) ^2 \overline{F}_{[r:n]}(y;\alpha )dy\nonumber \\ {}= & {} \int _{-\infty }^{\infty }\left( -\frac{f_{[r:n]}(y;\alpha )}{{\overline{F}}_{[r:n]}(y;\alpha )}\right) ^{2}{\overline{F}}_{[r:n]}(y;\alpha )dy\nonumber \\ {}= & {} \int _{-\infty }^{\infty }R_{[r:n]}(y;\alpha )f_{[r:n]}(y;\alpha )dy\ge \frac{1}{E(Y_{[r:n]})}. \end{aligned}$$

The proof is completed. \(\square \)

Example 5.1

Let X and Y have exponential distributions with means \( \frac{1}{\theta ^{*}}\) and \(\frac{1}{\theta },\) respectively. Then, \( CF(Y)=\theta ,\) \(\zeta (\alpha )=\frac{\theta }{2}(3\Delta ^{(\alpha )}_{1,r:n}-\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n})+\frac{5\theta }{3}\Delta ^{(\alpha )}_{2,r:n},\) and \( \eta (\alpha )=\frac{\theta }{2}(-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n})-\frac{5\theta }{3}\Delta ^{(\alpha )}_{2,r:n}.\) Thus, the CF of \( Y_{[r:n]}\) is given by

$$\begin{aligned}{} & {} CF(Y_{[r:n]};\alpha )\\{} & {} \quad =(1-3\Delta ^{(\alpha )}_{1,r:n}+\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n})\theta +(3\Delta ^{(\alpha )}_{1,r:n}-\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n})\frac{\theta }{2}-\frac{5\theta }{3}\Delta ^{(\alpha )}_{2,r:n}+\theta ^2 J{(\theta )}, \end{aligned}$$

where

$$\begin{aligned} J{(\theta )}\!=\!\int _{0}^{\infty }\frac{\left( 3\Delta ^{(\alpha )}_{1,r:n}-\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n}+10\Delta ^{(\alpha )}_{2,r:n}(1-e^{-\theta y})\right) ^2e^{-3\theta y}}{1+(3\Delta ^{(\alpha )}_{1,r:n}-\frac{5}{2}\Delta ^{(\alpha )}_{2,r:n})(1-e^{-\theta y})+5\Delta ^{(\alpha )}_{2,r:n}(1-e^{-\theta y})^2}\,dy. \end{aligned}$$

Table 4 provides the values of \(CF(Y_{[r:n]};\alpha )\) for \(n=5,15\) and \(\theta =1.\) From Table 4, the following properties can be extracted:

  • Generally, we have \(CF(Y_{[\frac{n+1}{2}:n]};\alpha )=CF(Y_{[\frac{n+1}{2}:n]};-\alpha ).\) Moreover, \(CF(Y_{[r:n]};\alpha )=CF(Y_{[n-r+1:n]};-\alpha ).\)

  • With fixed \(n\) and \(\alpha >0,\) the value of \(CF(Y_{[r:n]};\alpha )\) decreases slowly as \(r\) increases. In contrast, for \(\alpha <0,\) the value of \(CF(Y_{[r:n]};\alpha )\) increases slowly as \(r\) increases.

Table 4 CF in \(Y_{[r:n]}\) for SAR\((\alpha )\) at \(\theta =1\)
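The closed form of Example 5.1 can also be checked numerically. The sketch below (with \(\theta =1\) and hypothetical placeholder \(\Delta\)-values) evaluates \(J(\theta )\) by quadrature and compares the resulting \(CF(Y_{[r:n]};\alpha )\) with a direct integration of \(\left( \frac{\partial \ln \overline{F}_{[r:n]}(y;\alpha )}{\partial y}\right) ^2\overline{F}_{[r:n]}(y;\alpha )\):

```python
import numpy as np
from scipy.integrate import quad

# Hypothetical placeholder values for the Delta-coefficients
d1, d2 = 0.2, 0.05
theta = 1.0
a, b = 3*d1 - 2.5*d2, 5*d2                      # bracket coefficients

F = lambda y: 1 - np.exp(-theta*y)
den = lambda y: 1 + a*F(y) + b*F(y)**2

# J(theta) from the example, then CF via the closed form
J = quad(lambda y: (a + 10*d2*F(y))**2*np.exp(-3*theta*y)/den(y), 0, np.inf)[0]
CF = (1 - 3*d1 + 2.5*d2)*theta + a*theta/2 - 5*theta*d2/3 + theta**2*J

# Direct check from the definition of CF
Sbar = lambda y: np.exp(-theta*y)*den(y)        # survival function of Y_{[r:n]}
dlnS = lambda y: -theta + theta*np.exp(-theta*y)*(a + 10*d2*F(y))/den(y)
CF_direct = quad(lambda y: dlnS(y)**2*Sbar(y), 0, np.inf)[0]
```

Both routes agree to quadrature precision for any admissible placeholder values.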

6 Concluding remarks

Among all the known families that generalize the FGM family, only the Sarmanov family shares two properties of theoretical and practical importance with the FGM family. The first property is that both families have a single shape parameter, which makes them easy to work with in bivariate data modeling. The second property is radial symmetry, which lends the information measures associated with these families considerable mathematical flexibility by simplifying the relationships that describe them. Moreover, the Sarmanov family surpasses all known extensions of the FGM family in the level of correlation attainable between its components, on both the positive and negative sides.

Turning to the modeling of multivariate data (with more than two variables), we find the multivariate FGM family proposed by Johnson and Kotz [23]. This family has a direct structure, can describe the interrelationships of two or more variables, is useful as an alternative to the multivariate normal distribution, and has been applied to statistical modeling in various research fields. Formulating the Sarmanov family as a multivariate distribution, which would benefit from the radial symmetry property and offer high partial correlations between its variables, would be a very useful contribution to multivariate data modeling and represents a potential future development of the present work.