1 Introduction

Let \(\{X_i, ~i \ge 1\}\) be a sequence of independent and identically distributed random variables (RVs) with a common continuous distribution function (DF) \(F_X(x)\) and probability density function (PDF) \(f_X(x).\) An observation \(X_j\) is called an upper record value if \(X_j>X_i\) for every \(i<j.\) An analogous definition can be given for lower record values. The model of record values becomes inadequate in several situations in which the expected waiting time between two record values is very large. In particular, serious difficulties arise for statistical inference based on records because record data are extremely rare in practical contexts and the expected waiting time for every record following the first is infinite. In such situations, the second or third largest values become important. These issues are avoided, however, if we consider the k-record value model; refer to Aly et al. [9], Berred [21], and Fashandi and Ahmadi [26]. The PDFs of the nth upper and lower k-record values are, respectively, given by Dziubdziela and Kopocinski [25] as

$$\begin{aligned} g_{n,k}^{(u)}(x)=\frac{k^n}{\Gamma (n)}(-\log (\overline{F}_X(x)))^{n-1} (\overline{F}_X(x))^{k-1}f_X(x) \end{aligned}$$

and

$$\begin{aligned} g_{n,k}^{(\ell )}(x)=\frac{k^n}{\Gamma (n)}(-\log (F_X(x)))^{n-1} (F_X(x))^{k-1}f_X(x), \end{aligned}$$

where \(\Gamma (.)\) is the gamma function and \(\overline{F}_X(x)=1-F_X(x).\) Moreover, the joint PDF of the nth upper k-record value, \(X_{n,k}^{(u)},\) and the mth upper k-record value, \(X_{m,k}^{(u)},\) is given by

$$\begin{aligned} g_{m,n,k}^{(u)}(x_{1},x_{2})= & {} \frac{k^m}{\Gamma (n)\Gamma (m-n)}(-\log (\overline{F}_X(x_1)))^{n-1}\left( \overline{F}_X(x_2)\right) ^{k-1}\nonumber \\ {}\times & {} \left( -\log \frac{ \overline{F}_X(x_2)}{\overline{F}_X(x_1)}\right) ^{m-n-1}\frac{f_X(x_1)f_X(x_2)}{\overline{F}_X(x_1)},~x_1\le x_2. \end{aligned}$$
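The densities above lend themselves to a simple numerical check, which is not part of the original exposition: for the nth upper k-record value, \(-\log \overline{F}_X(X_{n,k}^{(u)})\) follows a gamma distribution with shape n and rate k. A minimal Python sketch (with an illustrative standard exponential marginal and hypothetical choices of n and k) compares the displayed PDF with this representation:

```python
import numpy as np
from math import gamma
from scipy import integrate

n, k = 4, 3                                    # illustrative (hypothetical) choices
rng = np.random.default_rng(0)

# Standard exponential marginal: -log(1 - F_X(x)) = x, so X_{n,k}^{(u)} ~ Gamma(n, rate k).
records = rng.gamma(shape=n, scale=1.0 / k, size=200_000)

# g_{n,k}^{(u)} written exactly as in the display above, for F_X(x) = 1 - exp(-x).
def g_upper(x):
    return k**n / gamma(n) * x ** (n - 1) * np.exp(-x) ** (k - 1) * np.exp(-x)

mass, _ = integrate.quad(g_upper, 0, np.inf)
mean, _ = integrate.quad(lambda x: x * g_upper(x), 0, np.inf)
print(mass, mean, records.mean())              # ~1.0, and both means ~ n/k
```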

When prior information is available in the form of marginal distributions, it is advantageous to model bivariate data with families of bivariate DFs having specified marginals. Among these families, the Farlie–Gumbel–Morgenstern (FGM) family has been studied extensively by many authors. The FGM family is represented by the bivariate DF \(F_{X,Y}(x,y)=F_X(x)F_Y(y)[1+\theta (1-F_X(x))(1-F_Y(y))],\) \(-1\le \theta \le 1,\) where \(F_X(x)\) and \(F_Y(y)\) are the marginal DFs of two RVs X and Y, respectively. According to the literature, the FGM family has undergone a number of modifications intended to broaden the range of the correlation between its marginals. These extended families have been the subject of numerous studies from various angles. Examples of these studies are Abd Elgawad and Alawady [1], Abd Elgawad et al. [2, 5], Alawady et al. [6, 7], Barakat et al. [12,13,14, 16, 17], Beg and Ahsanullah [19], and Bekrizadeh et al. [20].

Sarmanov [34] proposed a new mathematical model of hydrological processes described by the following family of bivariate DFs

$$\begin{aligned} F_{X,Y}(x,y)= & {} F_X(x)F_Y(y)[1+3\alpha \bar{F}_X(x)\bar{F}_Y(y)\nonumber \\{} & {} +5\alpha ^2 (2F_X(x)-1)(2F_Y(y)-1)\bar{F}_X(x)\bar{F}_Y(y)],\nonumber \\ \mid \alpha \mid\le & {} \frac{\sqrt{7}}{5}, \end{aligned}$$
(1.1)

denoted by SAR\((\alpha ).\) The corresponding PDF is given by

$$\begin{aligned} f_{X,Y}(x,y)\!= & {} \!f_X(x)f_Y(y)[1+3\alpha (2F_X(x)\!-\!1)(2F_Y(y)\!-\!1)\!\nonumber \\{} & {} +\!\frac{5\alpha ^2}{4} (3(2F_X(x)\!-\!1)^2\!-\!1)(3(2F_Y(y)\!-\!1)^2\!-\!1)]. \end{aligned}$$
(1.2)

The correlation coefficient of the Sarmanov copula equals \(\alpha .\) As a result, the copula’s maximum attainable correlation coefficient is 0.529 (cf. [10]; page 74). Recently, some aspects of concomitants of order statistics (OSs) and record values from this family were presented by Barakat et al. [15] and Husseiny et al. [27], respectively. In the present paper, we reveal some additional motivating properties of SAR\((\alpha )\). We also discuss some aspects of the concomitants of k-record values and some information measures that are relevant to this hitherto little-studied family.
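These statements are easy to confirm numerically; the following sketch (ours, not from the paper, with Python used in place of the MATHEMATICA code behind the later numerical study) integrates the density (1.2) with uniform marginals and recovers unit mass and a correlation equal to \(\alpha \), so that the maximum attainable correlation is \(\sqrt{7}/5\approx 0.529\):

```python
import numpy as np
from scipy import integrate

def sar_density(u, v, a):
    """Sarmanov copula density from (1.2) with Uniform(0,1) marginals."""
    return (1 + 3 * a * (2 * u - 1) * (2 * v - 1)
            + 1.25 * a ** 2 * (3 * (2 * u - 1) ** 2 - 1) * (3 * (2 * v - 1) ** 2 - 1))

a = np.sqrt(7) / 5                          # boundary of the admissible range
mass, _ = integrate.dblquad(lambda v, u: sar_density(u, v, a), 0, 1, 0, 1)
e_uv, _ = integrate.dblquad(lambda v, u: u * v * sar_density(u, v, a), 0, 1, 0, 1)
corr = (e_uv - 0.25) / (1 / 12)             # Cov(U,V)/Var(U) for uniform marginals
print(mass, corr)                           # ~1.000 and ~0.529
```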

Let \((X_i, Y_i), i=1,2,...,\) be a random bivariate sample with a common continuous DF \(F_{X,Y}(x,y)\) \(=P(X\le x,Y\le y).\) When the investigator is interested only in the sequence of k-records of the first component X,  the second component associated with the k-record value of the first one is termed the concomitant of that k-record value. k-record values and their concomitants arise in a wide range of practical experiments; see, e.g., Bdair and Raqab [18], Chacko and Mary [22], Chacko and Muraleedharan [23], and Thomas et al. [35]. The PDFs of the concomitants \(Y_{[n,k]}^{(u)}\) (the nth upper concomitant of \(X_{n,k}^{(u)}\)) and \(Y_{[n,k]}^{(\ell )}\) (the nth lower concomitant of \(X_{n,k}^{(\ell )}\)) are given by

$$\begin{aligned} f_{[n,k]}^{(u)}(y)=\int _{0}^{\infty }f_{Y| X}(y|x)g_{n,k}^{(u)}(x)dx~~\text{ and }~~ f_{[n,k]}^{(\ell )}(y)=\int _{0}^{\infty }f_{Y| X}(y|x)g_{n,k}^{(\ell )}(x)dx, \end{aligned}$$
(1.3)

respectively, where \(f_{Y| X}(y|x)\) is the conditional PDF of Y given X. Moreover, the joint PDF of concomitants \(Y_{[n,k]}^{(u)}\) and \(Y_{[m,k]}^{(u)}\) (\( n<m\)) is given by

$$\begin{aligned} f_{[n,m,k]}^{(u)}(y_{1},y_{2})=\int _{0}^{\infty }\int _{x_{1}}^{\infty }f_{Y|X}(y_{1}| x_{1})f_{Y| X}(y_{2}| x_{2})g_{m,n,k}^{(u)}(x_{1},x_{2})dx_{2}dx_{1}. \end{aligned}$$
(1.4)

We now take a quick look at some of the information measures that will be discussed in this study. The Shannon entropy is a statistical measure of information that quantifies the average uncertainty associated with an RV. This measure is used in domains as disparate as computer science, financial analysis, and medical research. For a continuous RV X with a PDF \(f_X(x),\) the Shannon entropy is defined by \(H(X)=-\text{ E }(\log f_X(X)).\) Obviously, \(H(X + b)=H(X),\) for any \(b \ge 0.\) For further details about this measure, see Abd Elgawad et al. [3, 4], Alawady et al. [8], and Barakat and Husseiny [11]. Lad et al. [30] presented a new measure of uncertainty termed extropy, which is a complementary dual of the Shannon entropy and is defined by \(J(X)=-\frac{1}{2}\text{ E }(f_X(X)).\) Clearly, \(J(X)\le 0.\) One statistical application of extropy is the total log scoring rule, which is used to score forecasting distributions.

In the literature, there are several different versions of entropy, each one suitable for a specific situation. Rao et al. [33] introduced the cumulative residual entropy (abbreviated by CRE) of X as \(CRE(X)=-\int _{0}^{\infty }\overline{F}_X(x)\log \overline{F}_X(x)dx\). The Shannon entropy presents various drawbacks when used as a continuous counterpart of the entropy for discrete RVs, and various efforts have been made to define alternative information measures. One example is given in Di Crescenzo and Longobardi [24], where a measure called cumulative entropy (abbreviated by CE) was proposed; it is defined by \(CE(X)=-\int _{0}^{\infty }F_X(x)\log (F_X(x))dx,\) for any non-negative RV X with the DF \(F_X.\) Unlike the Shannon entropy, CE is always non-negative. Moreover, the CE is appropriate for reliability systems whose uncertainty is related to the past. We also consider the Kerridge measure of inaccuracy [28], which links two RVs as an extension of uncertainty; it is defined as \(I(X;Y)= -\int _{0}^{\infty }f_X(t)\log (f_Y(t))\,dt\) for any non-negative RVs X and Y. Kharazmi and Balakrishnan [29] recently introduced the cumulative residual Fisher information (abbreviated as CF) for the location parameter, which is defined as \(CF_{\overline{F}_Y(y)}=\int _{0}^{\infty }\left( \frac{\partial \log \overline{F}_Y(y)}{\partial y}\right) ^2 \overline{F}_Y(y)dy .\)

The rest of the paper is organized as follows. In Sect. 2, we obtain some new interesting results pertaining to SAR\((\alpha )\). In Sect. 3, we obtain the marginal DF of concomitants of k-record values and joint DF of concomitants of k-record values based on SAR\((\alpha ).\) In Sect. 4, we get some new elegant and useful relations for the Shannon entropy concerning the Sarmanov copula. In Sect. 5, the Shannon entropy, inaccuracy measure, extropy, CE, CRE, and CF for SAR\((\alpha )\) are derived with some illustrative examples. Finally, in Sect. 6, we perform some numerical studies which lend further support to our theoretical results with discussion.

2 Properties of Concomitants of k-Record Values Based on Sarmanov Copula

The FGM copula is \(C(u,v;\theta )=uv(1+\theta (1-u)(1-v)),~0\le u,v\le 1,\) and the corresponding copula density is \(\mathcal{C}(u,v;\theta )=1+\theta (1-2u)(1-2v).\) Moreover, in view of (1.1) and (1.2), the Sarmanov copula is \(S(u,v;\alpha )=uv[1+3\alpha (1-u)(1-v)+5\alpha ^2 (1-u)(1-v) (2u-1)(2v-1)],\,0\le u,v\le 1,\) with corresponding PDF \(\mathcal{S}(u,v;\alpha )=1+3\alpha (2u-1)(2v-1)+\frac{5}{4} \alpha ^2 (3(2u-1)^2-1)(3(2v-1)^2-1).\) Clearly, the FGM copula is radially symmetric about \((\frac{1}{2},\frac{1}{2}),\) i.e. \(\mathcal{C}(\frac{1}{2}-u,\frac{1}{2}-v;\theta )=\mathcal{C}(\frac{1}{2}+u,\frac{1}{2}+v;\theta )\) (cf. [32]). One can easily verify that, among the known extended FGM families, SAR\((\alpha )\) is the only one whose copula is radially symmetric about \((\frac{1}{2},\frac{1}{2}).\) In this section, we present two further intriguing properties of the Sarmanov copula.
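The radial symmetry of the Sarmanov copula density can also be checked numerically; the short sketch below (ours, with an arbitrary admissible \(\alpha \)) evaluates both sides of the symmetry relation on random points:

```python
import numpy as np

def sar_density(u, v, a):
    return (1 + 3 * a * (2 * u - 1) * (2 * v - 1)
            + 1.25 * a ** 2 * (3 * (2 * u - 1) ** 2 - 1) * (3 * (2 * v - 1) ** 2 - 1))

rng = np.random.default_rng(1)
u = rng.uniform(0, 0.5, 1_000)
v = rng.uniform(0, 0.5, 1_000)
a = 0.4                                              # any admissible alpha
gap = sar_density(0.5 - u, 0.5 - v, a) - sar_density(0.5 + u, 0.5 + v, a)
print(np.abs(gap).max())                             # ~0: radially symmetric about (1/2, 1/2)
```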

Proposition 1

Let \(\mathcal{S}_{[n,k]}^{(\ell )}(.;\alpha )\) and \(\mathcal{S}_{[n,k]}^{(u)}(.;\alpha )\) be the PDFs of concomitants of the nth lower and upper k-record values based on the Sarmanov copula, respectively. Then,

$$\begin{aligned} \mathcal{S}^{(u)}_{[n,k]}~ \left( \frac{1}{2}-v;\alpha \right) = \mathcal{S}^{(\ell )}_{[n,k]}\left( \frac{1}{2}+v;\alpha \right) ,~0\le v\le 1. \end{aligned}$$
(2.1)

Proof

According to (1.3), we get \(\mathcal{S}_{[n,k]}^{(u)}(v;\alpha )=\int _{0}^{1}\mathcal{S}(u,v;\alpha ) g_{n,k}^{(u)}(u) du,\) where \(g_{n,k}^{(u)}(u)\) is the PDF of the nth upper k-record value from the uniform distribution over (0,1). Taking the transformation \(u=\frac{1}{2}-z\) and changing v to \((\frac{1}{2}-v),\) we get

$$\begin{aligned} \mathcal{S}^{(u)}_{[n,k]}~ \left( \frac{1}{2}-v;\alpha \right)= & {} \int _{-\frac{1}{2}}^{\frac{1}{2}}\mathcal{S}\left( \frac{1}{2}-z,\frac{1}{2}-v;\alpha \right) g_{n,k}^{(u)}\left( \frac{1}{2}-z\right) dz\\= & {} \int _{-\frac{1}{2}}^{\frac{1}{2}}\mathcal{S}\left( \frac{1}{2}+z,\frac{1}{2}+v;\alpha \right) g_{n,k}^{(\ell )}\left( \frac{1}{2}+z\right) dz, \end{aligned}$$

since \(g_{n,k}^{(u)}(\frac{1}{2}-z)=g_{n,k}^{(\ell )}(\frac{1}{2}+z).\) Putting \(\frac{1}{2}+z =\eta ,\) we get \(\mathcal{S}^{(u)}_{[n,k]}~ (\frac{1}{2}-v;\alpha )=\int _{0}^{1}\mathcal{S}(\eta ,\frac{1}{2}+v;\alpha )g_{n,k}^{(\ell )}(\eta ) d\eta = \mathcal{S}^{(\ell )}_{[n,k]}(\frac{1}{2}+v;\alpha ).\) This proves the proposition. \(\square \)

Remark 2.1

The proof of Proposition 1 shows that for the concomitants of lower and upper k-records based on any radially symmetric copula, the relation (2.1) is valid.

Remark 2.2

The case \(k=1,\) which corresponds to the case of ordinary record values, was handled by Husseiny et al. [27].
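Relation (2.1) can also be verified numerically. The following sketch (ours; the indices and \(\alpha \) are hypothetical) computes the concomitant PDFs of the upper and lower k-records under the Sarmanov copula by quadrature and compares the two sides of (2.1):

```python
import numpy as np
from math import gamma
from scipy import integrate

n, k, a = 3, 2, 0.4                      # illustrative (hypothetical) choices

def sar_density(u, v):
    return (1 + 3 * a * (2 * u - 1) * (2 * v - 1)
            + 1.25 * a ** 2 * (3 * (2 * u - 1) ** 2 - 1) * (3 * (2 * v - 1) ** 2 - 1))

def g_upper(t):   # PDF of the nth upper k-record value for Uniform(0,1)
    return k ** n / gamma(n) * (-np.log(1 - t)) ** (n - 1) * (1 - t) ** (k - 1)

def g_lower(t):   # PDF of the nth lower k-record value for Uniform(0,1)
    return k ** n / gamma(n) * (-np.log(t)) ** (n - 1) * t ** (k - 1)

def conc_upper(v):
    return integrate.quad(lambda t: sar_density(t, v) * g_upper(t), 0, 1)[0]

def conc_lower(v):
    return integrate.quad(lambda t: sar_density(t, v) * g_lower(t), 0, 1)[0]

for v in (0.1, 0.25, 0.4):
    print(conc_upper(0.5 - v), conc_lower(0.5 + v))   # the two columns agree, as in (2.1)
```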

The FGM and Sarmanov copulas are linked by the concomitants of k-record values in the following elegant result.

Theorem 2.1

Let \(\mathcal{C}_{[n,k]}^{(\ell )}(.;\alpha )\) and \(\mathcal{C}_{[n,k]}^{(u)}(.;\alpha )\) be the PDFs of the concomitants of nth lower and upper k-record values based on the FGM copula, respectively. Then

$$\begin{aligned} \mathcal{S}^{(\ell )}_{[n,k]}(v;\alpha )-\mathcal{S}^{(u)}_{[n,k]}(v;\alpha )=3[\mathcal{C}^{(\ell )}_{[n,k]}(v;\alpha )-\mathcal{C}^{(u)}_{[n,k]}(v;\alpha )],~0\le v\le 1. \end{aligned}$$
(2.2)

Proof

A short glance at the Sarmanov copula density shows that we can write

$$\begin{aligned} \mathcal{S}(u,v;\alpha )=3 \mathcal{C}(u,v;\alpha )+L(u,v;\alpha ), \end{aligned}$$
(2.3)

where \(L(u,v;\alpha )=\frac{5}{4}\alpha ^2[3(2u-1)^2-1][3(2v-1)^2-1]-2.\) Clearly, the function \(L(u,v;\alpha )\) is radially symmetric. Moreover, the relation (2.3) yields

$$\begin{aligned} \mathcal{S}_{[n,k]}^{(i)}(v;\alpha )= & {} 3\mathcal{C}_{[n,k]}^{(i)}(v;\alpha ) +\int _{0}^{1}L(u,v;\alpha )g_{n,k}^{(i)}(u) du\nonumber \\= & {} 3\mathcal{C}_{[n,k]}^{(i)}(v;\alpha )+J_{[n,k]}^{(i)}(v;\alpha ),~i=\ell ,u, \end{aligned}$$
(2.4)

where \(g_{n,k}^{(i)}(u)\) is the PDF of the nth (lower, \(i=\ell ,\) or upper, \(i=u\)) k-record values based on the uniform distribution over (0,1). Because the function \(L(u,v;\alpha )\) is radially symmetric, we can proceed as we did in the proof of Proposition 1 to show that \(J ^{(\ell )}_{[n,k]}(v;\alpha )=J^{(u)}_{[n,k]}(v;\alpha ).\) Therefore, we can get the relation (2.2) by using the relation (2.4). \(\square \)

Remark 2.3

When \(k=1\) in (2.2), we recover the result of Husseiny et al. [27].

3 Concomitants of k-Record Values Based on SAR\((\alpha )\)

In this section, the marginal DF, moment generating function (MGF), and moments of the concomitants of upper k-record values based on SAR\((\alpha )\) are obtained. Moreover, the joint PDF of the bivariate concomitants of k-record values based on SAR\((\alpha )\) is derived.

3.1 Marginal DF of Concomitants of k-Record Values

The PDF of \(Y_{[n,k]}^{(u)}\) is represented in the following theorem in a useful way. We use the notation \(X\sim F\) to signify that X is distributed as F,  for a given RV X and some DF F.

Theorem 3.1

Let \(V_1\sim F_Y^{2}\) and \(V_2\sim F_Y^{3}.\) Then

$$\begin{aligned} f_{[n,k]}^{(u)}(y)=\left( 1-3\delta ^{(\alpha )}_{n,k:1} +\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) f_Y(y)+ \left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) f_{V_1}(y)+5\delta ^{(\alpha )}_{n,k:2}f_{V_2}(y),\nonumber \\ \end{aligned}$$
(3.1)

where \(\delta ^{(\alpha )}_{n,k:1}=\alpha \left[ 1-2\left( \frac{k}{k+1}\right) ^{n}\right] \) and \(\delta ^{(\alpha )}_{n,k:2}=\alpha ^{2}\left[ 12\left( \left( \frac{k}{k+2}\right) ^{n}-\left( \frac{k}{k+1}\right) ^{n}\right) +2\right] .\)

Proof

By using (1.2) and (1.3), we get

$$\begin{aligned} f_{[n,k]}^{(u)}(y)\!\!\!= & {} \!\!\!\int _{0}^{\infty }f_Y(y)\{1+3\alpha (2F_X(x)-1)(2F_Y(y)-1)+\frac{5}{4} \alpha ^2\left[ 3(2F_X(x)-1)^2-1\right] \nonumber \\{} & {} \times \left[ 3(2F_Y(y)-1)^2-1\right] \}\frac{k^n}{\Gamma (n)}(-\log (\overline{F}_X(x)))^{n-1} (\overline{F}_X(x))^{k-1}f_X(x)dx. \nonumber \\= & {} \int _{0}^{\infty }\{f_Y(y)+3\alpha (2F_X(x)-1)(f_{V_1}(y)-f_Y(y))+\frac{5}{4} \alpha ^2 [3(2F_X(x)-1)^2-1]\nonumber \\{} & {} \times [4f_{V_2}(y)-6f_{V_1}(y)+2f_Y(y)]\}\frac{k^n}{\Gamma (n)}(-\log (\overline{F}_X(x)))^{n-1} (\overline{F}_X(x))^{k-1}f_X(x)dx. \nonumber \\= & {} f_Y(y)+3(f_{V_1}(y)-f_Y(y))I_1+\frac{5}{4}\left[ 4f_{V_2}(y)-6f_{V_1}(y)+2f_Y(y)\right] I_2, \end{aligned}$$

where

$$\begin{aligned} I_1= & {} \frac{\alpha k^n}{\Gamma (n)}\int _{0}^{\infty }(2F_X(x)-1)(-\log (\overline{F}_X(x)))^{n-1} (\overline{F}_X(x))^{k-1}f_X(x)dx\\= & {} \alpha \left[ 1-2\left( \frac{k}{k+1}\right) ^{n}\right] =\delta ^{(\alpha )}_{n,k:1} \end{aligned}$$

and

$$\begin{aligned} I_2\!= & {} \!\frac{\alpha ^2 k^n}{\Gamma (n)}\int _{0}^{\infty }\left[ 3(2F_X(x)-1)^2-1\right] (-\log (\overline{F}_X(x)))^{n-1} (\overline{F}_X(x))^{k-1}f_X(x)dx\nonumber \\ \!= & {} \! \alpha ^{2}\left[ 12\left( \left( \frac{k}{k+2}\right) ^{n}-\left( \frac{k}{k+1}\right) ^{n}\right) +2\right] \!=\!\delta ^{(\alpha )}_{n,k:2}. \end{aligned}$$

This completes the proof. \(\square \)

Remark 3.1

If \(k=1\) in Theorem 3.1, which corresponds to the case of ordinary record values, we obtain the results of Husseiny et al. [27].

Relying on (3.1), the MGF of \(Y_{[n,k]}^{(u)}\) based on SAR\((\alpha )\) is given by

$$\begin{aligned} M_{Y_{[n,k]}^{(u)}}(t){} & {} =\left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) M_Y(t)+ \left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) M_{V_1}(t)\\ {}{} & {} \qquad +5\delta ^{(\alpha )}_{n,k:2}M_{V_2}(t), \end{aligned}$$

where \(M_{Y}(t), M_{V_1}(t),\) and \(M_{V_2}(t)\) are the MGFs of the RVs \(Y, V_1\), and \(V_2,\) respectively. Thus, by using (3.1) the pth moment of \(Y_{[n,k]}^{(u)}\) based on SAR\((\alpha )\) is given by

$$\begin{aligned} \mu _{Y_{[n,k]}^{(u)}}^{(p)}\!\!\!&=(1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}) \mu _Y^{(p)}+(3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,k:2})\mu _{V_1}^{(p)}+5\delta ^{(\alpha )}_{n,k:2}\mu _{V_2}^{(p)}, \end{aligned}$$

where \(\mu _Y^{(p)}=E[Y^{p}],~\mu _{V_1}^{(p)}=E[{V_1}^{p}]\), and \(\mu _{V_2}^{(p)}=E[{V_2}^{p}].\)

Remark 3.2

When n is large, we can use the approximations \(\delta ^{(\alpha )}_{n,k:1}\approx \alpha \) and \(\delta ^{(\alpha )}_{n,k:2}\approx 2\alpha ^{2}.\)
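The quantities \(\delta ^{(\alpha )}_{n,k:1}\) and \(\delta ^{(\alpha )}_{n,k:2}\), the mixture representation (3.1), the moment formula above, and the approximations of Remark 3.2 can all be illustrated with a short numerical sketch (ours, assuming a standard exponential Y, for which \(\mu _Y=1,\) \(\mu _{V_1}=3/2,\) and \(\mu _{V_2}=11/6\)):

```python
import numpy as np
from scipy import integrate

def delta1(n, k, a):
    return a * (1 - 2 * (k / (k + 1)) ** n)

def delta2(n, k, a):
    return a ** 2 * (12 * ((k / (k + 2)) ** n - (k / (k + 1)) ** n) + 2)

n, k, a = 4, 3, 0.4                                     # illustrative choices
d1, d2 = delta1(n, k, a), delta2(n, k, a)

fY  = lambda y: np.exp(-y)                              # standard exponential Y
FY  = lambda y: 1 - np.exp(-y)
fV1 = lambda y: 2 * FY(y) * fY(y)                       # V1 ~ F_Y^2
fV2 = lambda y: 3 * FY(y) ** 2 * fY(y)                  # V2 ~ F_Y^3

def f_conc(y):                                          # the mixture representation (3.1)
    return ((1 - 3 * d1 + 2.5 * d2) * fY(y)
            + (3 * d1 - 7.5 * d2) * fV1(y) + 5 * d2 * fV2(y))

mass, _ = integrate.quad(f_conc, 0, np.inf)
mean, _ = integrate.quad(lambda y: y * f_conc(y), 0, np.inf)
mean_formula = ((1 - 3 * d1 + 2.5 * d2) * 1.0 + (3 * d1 - 7.5 * d2) * 1.5
                + 5 * d2 * 11 / 6)                      # p = 1 in the moment formula
print(mass, mean, mean_formula)                         # ~1, and the two means agree
print(delta1(200, k, a), a, delta2(200, k, a), 2 * a ** 2)   # Remark 3.2 for large n
```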

The FGM and Sarmanov families share an intriguing property of the concomitants of k-record values based on them, as the following theorem shows.

Theorem 3.2

Let \(\mathcal{F}_{[n,k]}^{(i)}(y;\theta )\) be the PDF of \(Y_{[n,k]}^{(i)}, i=\ell ,u,\) based on the FGM\((\theta )\) family. Furthermore, throughout this theorem, let \(f_{[n,k]}^{(i)}(y;\alpha )\) denote the PDF of \(Y_{[n,k]}^{(i)}, ~i=\ell ,u,\) based on SAR\((\alpha ).\) Then

  1. 1.

    \(\mathcal{F}_{[n,k]}^{(u)}(y;-\theta )=\mathcal{F}_{[n,k]}^{(\ell )}(y;\theta ),\)

  2. 2.

    \(f_{[n,k]}^{(u)}~ (y;-\alpha )= f_{[n,k]}^{(\ell )}(y;\alpha ).\)

Proof

It is simple to verify that \(\mathcal{F}_{[n,k]}^{(u)}(y;-\theta )=f_Y(y)\left[ 1-\delta ^{(\theta )}_{n,k:1}(2F_Y(y)-1)\right] =\mathcal{F}_{[n,k]}^{(\ell )}(y;\theta ),\) where \(\delta ^{(\theta )}_{n,k:1} =\theta \left[ 1-2\left( \frac{k}{k+1}\right) ^{n}\right] ,\) since \(\delta ^{(-\theta )}_{n,k:1}=-\delta ^{(\theta )}_{n,k:1}.\) The first part of the theorem is thus proved. By using the same method as in the proof of Theorem 3.1, we can now prove that

$$\begin{aligned} f_{[n,k]}^{(\ell )}~ (y;\alpha ){} & {} =\left( 1+3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) f_Y(y)- \left( 3\delta ^{(\alpha )}_{n,k:1}+\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) f_{V_1}(y)\\ {}{} & {} \qquad +5\delta ^{(\alpha )}_{n,k:2}f_{V_2}(y). \end{aligned}$$

Using the simple-to-check relationships \(\delta ^{(-\alpha )}_{n,k:1}=-\delta ^{(\alpha )}_{n,k:1}\) and \(\delta ^{(-\alpha )}_{n,k:2}=\delta ^{(\alpha )}_{n,k:2},\) the relation (3.1) yields \(f_{[n,k]}^{(u)}(y;-\alpha )= f_{[n,k]}^{(\ell )}(y;\alpha ).\) The second part of the theorem is thus proved. \(\square \)

Remark 3.3

For \(k=1\) (record case) in Theorem 3.2, we obtain the results of Husseiny et al. [27].

3.2 Joint DF of Bivariate Concomitants of k-Record Values Based on SAR(\(\alpha \))

The following theorem gives the joint PDF \(f_{[n,m,k]}^{(u)}(y_1,y_2)\) (defined by (1.4)) of the concomitants \(Y_{[n,k]}^{(u)}\) and \(Y_{[m,k]}^{(u)},~ n<m,\) in SAR\((\alpha ).\)

Theorem 3.3

Let \(V_1\sim F_Y^{2}\) and \(V_2\sim F_Y^{3}.\) Then

$$\begin{aligned}{} & {} f_{[n,m,k]}^{(u)}(y_1,y_2)=f_Y(y_1)f_Y(y_2)+\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) f_Y(y_2)(f_{V_1}(y_1)-f_Y(y_1)) \nonumber \\{} & {} \qquad +\left( 3\delta ^{(\alpha )}_{m,k:1}-\frac{5}{2}\delta ^{(\alpha )}_{m,k:2}\right) f_Y(y_1)(f_{V_1}(y_2)-f_Y(y_2))+5\delta ^{(\alpha )}_{n,k:2}f_Y(y_2)(f_{V_2}(y_1)-f_{V_1}(y_1)) \nonumber \\{} & {} \qquad +5\delta ^{(\alpha )}_{m,k:2}f_Y(y_1)(f_{V_2}(y_2)-f_{V_1}(y_2))+ \left( 9\delta ^{(\alpha )}_{n,m,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,m,k:2}-\frac{15}{2}\delta ^{(\alpha )}_{n,m,k:3}\right. \nonumber \\{} & {} \qquad +\left. \frac{25}{4}\delta ^{(\alpha )}_{n,m,k:4}\right) (f_{V_1}(y_1)-f_Y(y_1))(f_{V_1}(y_2)-f_Y(y_2))+ \left( 15\delta ^{(\alpha )}_{n,m,k:2}-\frac{25}{2}\delta ^{(\alpha )}_{n,m,k:4}\right) \nonumber \\{} & {} \qquad \times (f_{V_2}(y_1)-f_{V_1}(y_1))(f_{V_1}(y_2)-f_Y(y_2))+\left( 15\delta ^{(\alpha )}_{n,m,k:3}-\frac{25}{2} \delta ^{(\alpha )}_{n,m,k:4}\right) (f_{V_1}(y_1) \nonumber \\{} & {} \qquad -f_{Y}(y_1))(f_{V_2}(y_2)\!-\!f_{V_1}(y_2))\!+\!25 \delta ^{(\alpha )}_{n,m,k:4}(f_{V_2}(y_1)\!-\!f_{V_1}(y_1))(f_{V_2}(y_2)\!-\!f_{V_1}(y_2)), \end{aligned}$$

where \(\delta ^{(\alpha )}_{m,k:1}\) and \(\delta ^{(\alpha )}_{m,k:2}\) are defined via Theorem 3.1 by replacing n with m in \(\delta ^{(\alpha )}_{n,k:1}\) and \(\delta ^{(\alpha )}_{n,k:2}\), respectively,

$$\begin{aligned} \delta ^{(\alpha )}_{n,m,k:1}= & {} \alpha ^2\left[ -2\left( \frac{k^m}{(k+1)^m}+\frac{k^m}{k^{m-n}(k+1)^n}\right) +\frac{4k^m}{(k+2)^n(k+1)^{m-n}}+1\right] ,\\ \delta ^{(\alpha )}_{n,m,k:2}= & {} \alpha ^3\left[ 4\left( 1-\frac{k^m}{(k+1)^m}\right) -24\left( \frac{k^m}{(k+3)^n(k+1)^{m-n}}\right. \right. \nonumber \\{} & {} \left. \left. -\frac{k^m}{(k+2)^n(k+1)^{m-n}}\right) -12\left( \frac{k^m}{(k+1)^nk^{m-n}}-\frac{k^m}{(k+2)^nk^{m-n}}\right) -2\right] ,\\ \delta ^{(\alpha )}_{n,m,k:3}= & {} \alpha ^3\left[ 4\left( 1-\frac{k^m}{(k+1)^nk^{m-n}}\right) -24\left( \frac{k^m}{(k+3)^n(k+2)^{m-n}}\right. \right. \nonumber \\{} & {} \left. \left. -\frac{k^m}{(k+2)^n(k+1)^{m-n}}\right) -12\left( \frac{k^m}{(k+1)^m}-\frac{k^m}{(k+2)^m}\right) -2\right] , \end{aligned}$$

and

$$\begin{aligned} \delta ^{(\alpha )}_{n,m,k:4}= & {} \alpha ^4\left[ 4-144\left( \frac{k^m}{(k+3)^n(k+2)^{m-n}}+ \frac{k^m}{(k+3)^n(k+1)^{m-n}}\right. \right. \nonumber \\{} & {} \left. \left. -\frac{k^m}{(k+4)^n(k+2)^{m-n}}-\frac{k^m}{(k+2)^n(k+1)^{m-n}}\right) \right. \\{} & {} \left. -24\left( \frac{k^m}{(k+1)^m}-\frac{k^m}{(k+2)^m}+\frac{k^m}{(k+1)^nk^{m-n}}-\frac{k^m}{(k+2)^nk^{m-n}}\right) \right] . \end{aligned}$$

Proof

Consider the integral

$$\begin{aligned} I_{p,q}^{}(n,m,k)= & {} \frac{k^m}{\Gamma (n)\Gamma (m-n)}\int _{0}^{\infty }\int _{x_1}^{\infty }F^p_X(x_1)F^q_X(x_2) (-\log (\overline{F}_X(x_1)))^{n-1}\left( \overline{F}_X(x_2)\right) ^{k-1}\nonumber \\{} & {} \times \left( -\log \frac{ \overline{F}_X(x_2)}{\overline{F}_X(x_1)}\right) ^{m-n-1}\frac{f_X(x_1)f_X(x_2)}{\overline{F}_X(x_1)}dx_{2}dx_{1}. \end{aligned}$$

Taking the transformation \( u_1=-\log (\overline{F}_X(x_1))\) and \( u_2=-\log (\overline{F}_X(x_2)),\) we get

$$\begin{aligned} I_{p,q}^{}(n,m,k)= & {} \frac{k^m}{\Gamma (n)\Gamma (m-n)}\int _{0}^{\infty }\int _{u_1}^{\infty } (1-e^{-u_1})^{p}(1-e^{-u_2})^{q}(u_1)^{n-1}\\{} & {} (u_2-u_1)^{m-n-1}e^{-ku_2}du_{2}du_{1}. \end{aligned}$$

By using the transformation \( z=u_2-u_1,\) and after some algebra, we get

$$\begin{aligned} I_{p,q}^{}(n,m,k)= & {} \sum _{i=0}^{p}\sum _{j=0}^{q}(-1)^{i+j} \left( \!\!{\begin{array}{c} p\!\\ {i}\end{array}}\!\!\right) \left( \!\!{\begin{array}{c} q\!\\ {j}\end{array}}\!\!\right) \frac{k^m}{(i+j+k)^n(j+k)^{m-n}},~~ p,q=0,1,2,....\nonumber \\ \end{aligned}$$
(3.2)

Now, by using (1.4), we get

$$\begin{aligned} f_{[n,m,k]}^{(u)}(y_1,y_2)= & {} \int _{0}^{\infty }\int _{x_1}^{\infty } g_{m,n,k}^{(u)}(x_{1},x_{2})[f_Y(y_1)+3\alpha f_Y(y_1)(2F_X(x_1)-1)(2F_Y(y_1)-1)\nonumber \\{} & {} +\frac{5}{4}\alpha ^2f_Y(y_1)(3(2F_X(x_1)-1)^2-1)(3(2F_Y(y_1)-1)^2-1)] \nonumber \\{} & {} \times [f_Y(y_2)+3\alpha f_Y(y_2)(2F_X(x_2)-1)(2F_Y(y_2)-1)+\frac{5}{4}\alpha ^2f_Y(y_2)\nonumber \\{} & {} \times (3(2F_X(x_2)-1)^2-1)(3(2F_Y(y_2)-1)^2-1)]dx_{2}dx_{1}. \end{aligned}$$

By using the relations \(f_{V_1}=2f_YF_Y\) and \(f_{V_2}=3f_YF^2_Y\) and carrying out some algebra, we get

$$\begin{aligned} f_{[n,m,k]}^{(u)}(y_1,y_2){} & {} =\int _{0}^{\infty }\int _{x_1}^{\infty } g_{m,n,k}^{(u)}(x_{1},x_{2}) \left[ f_Y(y_1)+3\alpha (2F_X(x_1)-1)(f_{V_1}(y_1)-f_Y(y_1))\right. \\{} & {} \qquad +\frac{5}{4}\alpha ^2(12F^2_X(x_1)-12F_X(x_1)+2)(4f_{V_2}(y_1)-6f_{V_1}(y_1)+2f_{Y}(y_1))]\\{} & {} \qquad \times [f_Y(y_2)+3\alpha (2F_X(x_2)-1)(f_{V_1}(y_2)-f_Y(y_2))+\frac{5}{4}\alpha ^2(12F^2_X(x_2)\\{} & {} \qquad -12F_X(x_2)+2) \times (4f_{V_2}(y_2)-6f_{V_1}(y_2)+2f_{Y}(y_2))]dx_{2}dx_{1}. \end{aligned}$$

On the other hand, upon using (3.2), with \(p=1\) and \(q=0,\) for \(t=1,\) and with \(p=0\) and \(q=1,\) for \(t=2,\) we get after some algebra

$$\begin{aligned}{} & {} \int _{0}^{\infty }\int _{x_1}^{\infty }\frac{\alpha k^m}{\Gamma (n)\Gamma (m\!-\!n)}(2F_X(x_{t})\!-\!1) \!(\!-\!\log (\overline{F}_X(x_1)))^{n-1}\left( \overline{F}_X(x_2)\right) ^{k-1}\\ {}{} & {} \quad \!\left( \!-\!\log \frac{\overline{F}_X(x_2)}{\overline{F}_X(x_1)}\right) ^{m-n-1}\times \frac{f_X(x_1)\!f_X(x_2)}{\overline{F}_X(x_1)}dx_{2}dx_{1}\\{} & {} \quad = \left\{ \begin{array}{ll} \alpha \left( 2I_{1,0}(n,m,k)-1\right) =\alpha \left( 1-2\left( \frac{k}{k+1}\right) ^{n}\right) =\delta ^{(\alpha )}_{n,k:1},&{}t=1,\\ \alpha \left( 2I_{0,1}(n,m,k)-1\right) =\alpha \left( 1-2\left( \frac{k}{k+1}\right) ^{m}\right) =\delta ^{(\alpha )}_{m,k:1},&{}t=2.\\ \end{array}\right. \end{aligned}$$

Similarly, we can obtain \(\delta ^{(\alpha )}_{n,k:2}, \delta ^{(\alpha )}_{m,k:2}, \delta ^{(\alpha )}_{n,m,k:1}, \delta ^{(\alpha )}_{n,m,k:2}, \delta ^{(\alpha )}_{n,m,k:3},\) and \(\delta ^{(\alpha )}_{n,m,k:4}.\) This completes the proof. \(\square \)

Remark 3.4

In Theorem 3.3 for \(k=1,\) we obtain the results of Husseiny et al. [27].
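The closed form (3.2), which underlies all the \(\delta \) quantities of Theorem 3.3, can be checked independently by simulating the k-record sequence on the \(-\log \overline{F}_X\) scale, where the records have independent gamma increments. The sketch below (ours, with hypothetical indices; the paper's own numerics used MATHEMATICA) compares the double sum with a Monte Carlo estimate of \(\text{ E }[F_X^p(X_{n,k}^{(u)})F_X^q(X_{m,k}^{(u)})]\):

```python
import numpy as np
from math import comb

def I_closed(p, q, n, m, k):
    """The double sum in (3.2)."""
    return sum((-1) ** (i + j) * comb(p, i) * comb(q, j)
               * k ** m / ((i + j + k) ** n * (j + k) ** (m - n))
               for i in range(p + 1) for j in range(q + 1))

n, m, k = 3, 6, 2                            # illustrative (hypothetical) indices
p, q = 2, 1
rng = np.random.default_rng(0)
N = 1_000_000
u_n = rng.gamma(n, 1 / k, N)                 # -log(Fbar_X) at the nth upper k-record
u_m = u_n + rng.gamma(m - n, 1 / k, N)       # ... and at the mth (independent gamma increment)
mc = np.mean((1 - np.exp(-u_n)) ** p * (1 - np.exp(-u_m)) ** q)
print(I_closed(p, q, n, m, k), mc)           # the two values agree to Monte Carlo error
```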

The joint MGF of upper concomitants \(Y_{[n,k]}^{(u)}\) and \(Y_{[m,k]}^{(u)},~ n<m,\) based on SAR(\(\alpha \)) is given by

$$\begin{aligned} M_{[n,m,k]}(t_1,t_2)= & {} M_Y(t_1)M_Y(t_2)+\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{5}{2}\delta ^{(\alpha )}_{n,k:2} \right) M_Y(t_2)(M_{V_1}(t_1)-M_Y(t_1))\nonumber \\ {}{} & {} \quad +\left( 3\delta ^{(\alpha )}_{m,k:1}-\frac{5}{2}\delta ^{(\alpha )}_{m,k:2}\right) M_Y(t_1)(M_{V_1}(t_2)-M_Y(t_2))\nonumber \\{} & {} +5\delta ^{(\alpha )}_{n,k:2}M_Y(t_2)(M_{V_2}(t_1)-M_{V_1}(t_1)) \nonumber \\{} & {} +5\delta ^{(\alpha )}_{m,k:2}M_Y(t_1)(M_{V_2}(t_2)-M_{V_1}(t_2))\nonumber \\{} & {} +\left( 9\delta ^{(\alpha )}_{n,m,k:1}-\frac{15}{2} \delta ^{(\alpha )}_{n,m,k:2}-\frac{15}{2}\delta ^{(\alpha )}_{n,m,k:3}+\frac{25}{4}\delta ^{(\alpha )}_{n,m,k:4}\right) \nonumber \\{} & {} \times (M_{V_1}(t_1)-M_Y(t_1))(M_{V_1}(t_2)-M_Y(t_2))\nonumber \\{} & {} +\left( 15\delta ^{(\alpha )}_{n,m,k:2}- \frac{25}{2}\delta ^{(\alpha )}_{n,m,k:4}\right) (M_{V_2}(t_1) \nonumber \\{} & {} -M_{V_1}(t_1))(M_{V_1}(t_2)-M_Y(t_2))\nonumber \\{} & {} +\left( 15\delta ^{(\alpha )}_{n,m,k:3}-\frac{25}{2} \delta ^{(\alpha )}_{n,m,k:4}\right) (M_{V_1}(t_1)-M_Y(t_1)) \nonumber \\{} & {} \times (M_{V_2}(t_2)-M_{V_1}(t_2))+25 \delta ^{(\alpha )}_{n,m,k:4}(M_{V_2}(t_1)\nonumber \\{} & {} -M_{V_1}(t_1))(M_{V_2}(t_2)-M_{V_1}(t_2)). \end{aligned}$$
(3.3)

Thus, the product moment \(\text{ E }[Y_{[n,k]}^{(u)}Y_{[m,k]}^{(u)}]=\mu _{[n,m,k]}^{(u)}\) is obtained from (3.3) as

$$\begin{aligned} \mu _{[n,m,k]}^{(u)}{} & {} =\left[ 3\left( \delta ^{(\alpha )}_{n,k:1}+\delta ^{(\alpha )}_{m,k:1}\right) -\frac{5}{2} \left( \delta ^{(\alpha )}_{n,k:2}+\delta ^{(\alpha )}_{m,k:2}\right) \right] \mu _Y(\mu _{V_1}-\mu _Y)\\{} & {} \qquad +25\delta ^{(\alpha )}_{n,m,k:4}(\mu _{V_2}-\mu _{V_1})^2+5\left( \delta ^{(\alpha )}_{n,k:2}+\delta ^{(\alpha )}_{m,k:2}\right) \mu _Y(\mu _{V_2} -\mu _{V_1})\\{} & {} \qquad +\left[ 15\left( \delta ^{(\alpha )}_{n,m,k:2}+\delta ^{(\alpha )}_{n,m,k:3}\right) -25\delta ^{(\alpha )}_{n,m,k:4}\right] (\mu _{V_1}-\mu _Y)\\{} & {} \qquad \times (\mu _{V_2}-\mu _{V_1})+\left( 9\delta ^{(\alpha )}_{n,m,k:1}-\frac{15}{2} \delta ^{(\alpha )}_{n,m,k:2}-\frac{15}{2}\delta ^{(\alpha )}_{n,m,k:3}\right. \\{} & {} \qquad \left. +\frac{25}{4}\delta ^{(\alpha )}_{n,m,k:4}\right) (\mu _{V_1}-\mu _Y)^2+\mu ^2_Y. \end{aligned}$$

4 General Theoretical Relations of Entropy, Extropy, CE, CRE, and CF Based on Sarmanov Copula

We have the following general results concerning any radially symmetric copula and especially concerning the FGM and Sarmanov copulas.

Proposition 2

For any radially symmetric copula about \((\frac{1}{2},\frac{1}{2}),\) we get \(H^{(u)}_{[n,k]}=H^{(\ell )}_{[n,k]},\) where \(H^{(\ell )}_{[n,k]}\) and \(H^{(u)}_{[n,k]}\) are the Shannon entropies of the concomitants of the nth lower and upper k-record values of that copula, respectively.

Proof

The Shannon entropies \(H^{(\ell )}_{[n,k]}\) and \(H^{(u)}_{[n,k]}\) are given by

$$\begin{aligned} H^{(i)}_{[n,k]} = -\int _{0}^{1}\mathcal{L}^{(i)}_{[n,k]}(v)\log \mathcal{L}^{(i)}_{[n,k]}(v) dv, ~i=\ell , u, \end{aligned}$$
(4.1)

where \(\mathcal{L}^{(\ell )}_{[n,k]}(.)\) and \(\mathcal{L}^{(u)}_{[n,k]}(.)\) are the PDFs of the concomitants of the nth lower and upper k-record values based on that copula, respectively. Putting \(i=u\) in (4.1) and using the transformation \(v=\frac{1}{2}-w,\) in view of Proposition 1 and Remark 2.1, we get \(H^{(u)}_{[n,k]} = -\int _{-\frac{1}{2}}^{\frac{1}{2}}\mathcal{L}^{(\ell )}_{[n,k]}(\frac{1}{2}+w)\log \mathcal{L}^{(\ell )}_{[n,k]}(\frac{1}{2}+w) dw.\) Thus, upon using the transformation \(\eta =\frac{1}{2}+w,\) we get

$$\begin{aligned} H^{(u)}_{[n,k]} = -\int _{0}^{1}\mathcal{L}^{(\ell )}_{[n,k]}(\eta )\log \mathcal{L}^{(\ell )}_{[n,k]}(\eta ) d\eta =H^{(\ell )}_{[n,k]}. \end{aligned}$$

This completes the proof. \(\square \)

Theorem 4.1

Let the Shannon entropy associated with the FGM and Sarmanov copulas be denoted by \(H^{*(u)}_{[n,k]}(\theta )\) and \(H^{(u)}_{[n,k]}(\alpha ),\) respectively. Then, we get

$$\begin{aligned} H^{*(u)}_{[n,k]}(\theta ) = H^{*(u)}_{[n,k]}(-\theta )~~\text{ and }~~H^{(u)}_{[n,k]}(\alpha )=H^{(u)}_{[n,k]}(-\alpha ). \end{aligned}$$

Proof

From (4.1), Theorem 3.2, and Proposition 2, we get

$$\begin{aligned} H_{[n,k]}^{*(u)}(\!-\theta )\!\!= & {} \!\!-\!\!\int _{0}^{1}\mathcal{C}^{(u)}_{[n,k]}(v,-\theta )\log \mathcal{C}^{(u)}_{[n,k]}(v,-\theta )dv\!\\= & {} -\!\int _{0}^{1}\mathcal{C}^{(\ell )}_{[n,k]}(v,\theta )\log \mathcal{C}^{(\ell )}_{[n,k]}(v,\theta )dv\!\!=\!\! H_{[n,k]}^{*(\ell )}(\theta )\!\!=\!\!H_{[n,k]}^{*(u)}(\theta ). \end{aligned}$$

This proves the first part of the theorem. The proof of the second part is similar. The proof is completed. \(\square \)

Proposition 3

For any radially symmetric copula about \((\frac{1}{2},\frac{1}{2}),\) we get \(J^{(u)}_{[n,k]}=J^{(\ell )}_{[n,k]},\) where \(J^{(\ell )}_{[n,k]}\) and \(J^{(u)}_{[n,k]}\) are extropies of the concomitants of the nth lower and upper k-record values of that copula, respectively.

Proof

The extropies \(J^{(\ell )}_{[n,k]}\) and \(J^{(u)}_{[n,k]}\) are given by \(J^{(i)}_{[n,k]} = -\frac{1}{2}\int _{0}^{1}\left[ \mathcal{L}^{(i)}_{[n,k]}(v)\right] ^{2} dv, ~i=\ell , u.\) In view of Proposition 1 and Remark 2.1, we get

$$\begin{aligned} J^{(u)}_{[n,k]} = -\frac{1}{2}\int _{-\frac{1}{2}}^{\frac{1}{2}}\left[ \mathcal{L}^{(\ell )}_{[n,k]}\left( \frac{1}{2}+w\right) \right] ^{2} dw. \end{aligned}$$

Thus, upon using the transformation \(\eta =\frac{1}{2}+w,\) we get

$$\begin{aligned} J^{(u)}_{[n,k]} = -\frac{1}{2}\int _{0}^{1}\left[ \mathcal{L}^{(\ell )}_{[n,k]}(\eta )\right] ^{2} d\eta =J^{(\ell )}_{[n,k]}. \end{aligned}$$

This completes the proof. \(\square \)

Theorem 4.2

Let the extropy associated with the FGM and Sarmanov copulas be denoted by \(J^{*(u)}_{[n,k]}(\theta )\) and \(J^{(u)}_{[n,k]}(\alpha ),\) respectively. Then, we get

$$\begin{aligned} J^{*(u)}_{[n,k]}(\theta ) = J^{*(u)}_{[n,k]}(-\theta )~~\text{ and }~~J^{(u)}_{[n,k]}(\alpha )=J^{(u)}_{[n,k]}(-\alpha ). \end{aligned}$$

Proof

From the definition of extropy, Theorem 3.2, and Proposition 3, we get

$$\begin{aligned} J_{[n,k]}^{*(u)}(\!-\theta ){} & {} =-\frac{1}{2}\int _{0}^{1}\left[ \mathcal{C}^{(u)}_{[n,k]}(v,-\theta )\right] ^{2}dv\\ {}{} & {} =-\frac{1}{2}\int _{0}^{1}\left[ \mathcal{C}^{(\ell )}_{[n,k]}(v,\theta )\right] ^{2}dv= J_{[n,k]}^{*(\ell )}(\theta )=J_{[n,k]}^{*(u)}(\theta ). \end{aligned}$$

This proves the part based on the FGM copula. The proof of the part based on the Sarmanov copula is similar. The proof is completed. \(\square \)

Theorem 4.3

Let \({\overline{F}}_{[n,k]}^{(\ell )}(.;\alpha )\) and \({F}_{[n,k]}^{(u)}(.;\alpha )\) be the survival function and DF of concomitants of the nth lower and upper k-record values based on the Sarmanov copula, respectively. Then,

$$\begin{aligned} {F}^{(u)}_{[n,k]}~ \left( \frac{1}{2}-v \right) = {\overline{F}}^{(\ell )}_{[n,k]}\left( \frac{1}{2}+v\right) . \end{aligned}$$

Proof

We have \({F}^{(u)}_{[n,k]}(v)=v\left[ 1+3\delta ^{(\alpha )}_{n,k:1}(v-1)+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}(2v-1)(v-1)\right] .\) Thus,

$$\begin{aligned} {F}^{(u)}_{[n,k]}~ \left( \frac{1}{2}-v\right)= & {} \left( \frac{1}{2}-v\right) \left( 1+3\delta ^{(\alpha )}_{n,k:1}\left( -\frac{1}{2}-v\right) +\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}(-2v)\left(-\frac{1}{2}-v\right)\right) \nonumber \\= & {} \left( \frac{1}{2}-v\right) \left( 1+3\delta ^{(-\alpha )}_{n,k:1}\left( \frac{1}{2}+v\right) +\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}(2v)\left( \frac{1}{2}+v\right) \right) \nonumber \\= & {} \left( \frac{1}{2}-v\right) +3\delta ^{(-\alpha )}_{n,k:1}\left( \frac{1}{2}+v\right) \left( \frac{1}{2}-v\right) +\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}(2v)\left( \frac{1}{2}-v\right) \left( \frac{1}{2}+v\right) \nonumber \\= & {} 1-\left( \frac{1}{2}+v\right) \left( 1+3\delta ^{(-\alpha )}_{n,k:1}\left( -\frac{1}{2}+v\right) +\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}(2v)\left( \frac{-1}{2}+v\right) \right) \\= & {} {\overline{F}}^{(\ell )}_{[n,k]}~ \left( \frac{1}{2}+v\right) . \end{aligned}$$

This completes the proof. \(\square \)

Proposition 4

For any radially symmetric copula about \((\frac{1}{2},\frac{1}{2}),\) we get \(CE^{(u)}_{[n,k]}=CRE^{(\ell )}_{[n,k]}.\)

Theorem 4.4

For the FGM and Sarmanov copulas, we, respectively, get

$$\begin{aligned} CE^{*(u)}_{[n,k]}(\theta ) = CRE^{*(\ell )}_{[n,k]}(\theta )~~\text{ and }~~CE^{(u)}_{[n,k]}(\alpha )=CRE^{(\ell )}_{[n,k]}(\alpha ). \end{aligned}$$

Proof

From Theorem 4.3 and Proposition 4, we get

$$\begin{aligned} CE_{[n,k]}^{*(u)}(\theta )\!= & {} \!-\!\int _{0}^{1}{F}^{(u)}_{[n,k]}(v,\theta )\log {F}^{(u)}_{[n,k]}(v,\theta )dv\\= & {} -\int _{0}^{1}{\overline{F}}^{(\ell )}_{[n,k]}(v,\theta )\log {\overline{F}}^{(\ell )}_{[n,k]}(v,\theta )dv\!=\! CRE^{*(\ell )}_{[n,k]}(\theta ). \end{aligned}$$

This proves the theorem for FGM copula. The proof for Sarmanov copula is similar. \(\square \)

Proposition 5

Let \(\widetilde{Y}=aY_{[n,k]}^{(u)}+b,\) where \(a>0\) and \(b\ge 0.\) Then \(CE^{(u)}_{[n,k]}(\widetilde{Y}) = a CE^{(u)}_{[n,k]}(Y).\)

Proof

The proof follows directly by the definition of CE and the obvious relation \(F_{aY_{[n,k]}^{(u)}+b}(y)=F_{Y_{[n,k]}^{(u)}}(\frac{y-b}{a}).\) \(\square \)

Proposition 5 means that \(CE_{[n,k]}^{(u)}\) is a shift-independent measure. This property also holds for \(CRE^{(u)}_{[n,k]}.\) On the other hand, the effect of linear transformations on the differential entropy and extropy is quite different, since \(H_{[n,k]}^{(u)}(\widetilde{Y})= H^{(u)}_{[n,k]}(Y)+\log a\) and \(J_{[n,k]}^{(u)}(\widetilde{Y})=\frac{1}{a} J^{(u)}_{[n,k]}(Y).\)
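These transformation rules are easy to illustrate numerically for any absolutely continuous non-negative RV; the sketch below (ours, using a standard exponential Y and arbitrary constants a and b) checks the shift/scale behaviour of CE, the Shannon entropy, and the extropy:

```python
import numpy as np
from scipy import integrate

a, b, eps = 2.0, 3.0, 1e-12                      # Y ~ Exp(1); Ytilde = a*Y + b
F  = lambda y: 1 - np.exp(-y)                    # DF of Y
f  = lambda y: np.exp(-y)                        # PDF of Y
Ft = lambda y: F((y - b) / a)                    # DF of a*Y + b  (for y >= b)
ft = lambda y: f((y - b) / a) / a                # PDF of a*Y + b

CE = lambda G, lo: -integrate.quad(lambda y: G(y) * np.log(G(y)), lo, np.inf)[0]
H  = lambda g, lo: -integrate.quad(lambda y: g(y) * np.log(g(y)), lo, np.inf)[0]
J  = lambda g, lo: -0.5 * integrate.quad(lambda y: g(y) ** 2, lo, np.inf)[0]

print(CE(Ft, b + eps), a * CE(F, eps))           # CE(aY+b) = a*CE(Y)
print(H(ft, b), H(f, 0) + np.log(a))             # H(aY+b)  = H(Y) + log(a)
print(J(ft, b), J(f, 0) / a)                     # J(aY+b)  = J(Y)/a
```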

Proposition 6

Let \(\overline{F}^{(u)}_{[n,k]}(y)\) and \(R_{F^{(u)}_{[n,k]}}(y)\) be the survival and hazard functions of the concomitant of the nth upper k-record value from \(SAR(\alpha ),\) respectively, where \(R_{F^{(u)}_{[n,k]}}(y)=\frac{f^{(u)}_{[n,k]}(y)}{\overline{F}^{(u)}_{[n,k]}(y)}.\) Then,

$$\begin{aligned} CF_{\overline{F}_{[n,k]}}^{(u)}(\alpha ) = E\left[ R_{F^{(u)}_{[n,k]}}(Y_{[n,k]}^{(u)})\right] \ge \frac{1}{\mu _{Y_{[n,k]}^{(u)}}}. \end{aligned}$$

Proof

From the definition of \(CF_{\overline{F}_{[n,k]}}^{(u)}(\alpha )\) and by using the result of Nanda [31], that \(E[R_{F_{X}}(X)]\ge \frac{1}{E(X)},\) we get

$$\begin{aligned} CF_{\overline{F}_{[n,k]}}^{(u)}(\alpha )= & {} \int _{0}^{\infty }\left( \frac{\partial \log \overline{F}^{(u)}_{[n,k]}}{\partial y}\right) ^2 \overline{F}^{(u)}_{[n,k]}dy\\= & {} \int _{0}^{\infty }\left( -\frac{f^{(u)}_{[n,k]}}{{\overline{F}}^{(u)}_{[n,k]}}\right) ^{2}{\overline{F}}^{(u)}_{[n,k]}dy=\int _{0}^{\infty }R_{F^{(u)}_{[n,k]}}(y)f^{(u)}_{[n,k]}dy\ge \frac{1}{\mu _{Y_{[n,k]}^{(u)}}}. \end{aligned}$$

The proof is completed. \(\square \)

5 Information Measures Based on SAR\((\alpha )\) with Illustrative Examples

Theorems 5.1-5.6 give explicit forms of the extropy, Shannon entropy, inaccuracy measure, CE, CRE, and CF for concomitants of the nth upper k-record value based on SAR\((\alpha )\), respectively.

Theorem 5.1

Let \(V_i\sim F_Y^{i+1},~i=1,2,3.\) Then, the extropy of \(Y_{[n,k]}^{(u)},\) for \(n\ge 1,\) is given by

$$\begin{aligned}{} & {} J_{Y}^{(u)}(\alpha )=\left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) ^{2}J_Y(Y) +\left( 3\delta ^{(\alpha )}_{n,k:1}- \frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) ^{2}\\{} & {} \qquad J_{Y}(V_1)+\left( 5\delta ^{(\alpha )}_{n,k:2}\right) ^{2}J_{Y}(V_2)\\{} & {} \qquad -\left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) \left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) E_Y(f_{V_1}) -\left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) \\{} & {} \qquad \times \left( 5\delta ^{(\alpha )}_{n,k:2}\right) E_Y(f_{V_2}) -\frac{3}{2}(5\delta ^{(\alpha )}_{n,k:2})\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) E_Y(f_{V_3}), \end{aligned}$$

where \(J_Y(Y)=-\frac{1}{2}\int _{0}^{\infty }f^{2}_Y(y)dy~\) and \(J_{Y}(V_i)=-\frac{1}{2}\int _{0}^{\infty }f^{2}_{V_i}(y)dy,~i=1,2,\) are the extropy measures of the RVs Y and \(V_i,\) respectively, and \( E_Y(f_{V_i})=\int _{0}^{\infty }f_Y(y)f_{V_i}(y)dy,~i=1,2,3.\)

Proof

Clearly, we have

$$\begin{aligned}{} & {} J_{Y}^{(u)}(\alpha )=-\frac{1}{2}\int _{0}^{\infty }\left( f_Y(y)[1+3\delta ^{(\alpha )}_{n,k:1}(2F_Y(y)-1)+\frac{5}{4}\delta ^{(\alpha )}_{n,k:2}(3(2F_Y(y)-1)^2-1)]\right) ^{2}dy\\{} & {} \quad =-\frac{1}{2}\int _{0}^{\infty }\left( f_Y(y)+3\delta ^{(\alpha )}_{n,k:1}(f_{V_1}(y)-f_Y(y))+\frac{5}{4}\delta ^{(\alpha )}_{n,k:2}(4f_{V_2}(y)\right. \\{} & {} \qquad \left. -6f_{V_1}(y)+2f_Y(y))\right) ^{2}dy\\{} & {} \quad =\left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) ^{2}J_Y(Y) +\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) ^{2}J_{Y}(V_1) +(5\delta ^{(\alpha )}_{n,k:2})^{2}J_{Y}(V_2)\\{} & {} \qquad -\left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) \left( 3\delta ^{(\alpha )}_{n,k:1} -\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) E_Y(f_{V_1})\\{} & {} \qquad -\left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) \left( 5\delta ^{(\alpha )}_{n,k:2}\right) E_Y(f_{V_2})\\{} & {} \qquad -\frac{3}{2}\left( 5\delta ^{(\alpha )}_{n,k:2}\right) \left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) E_Y(f_{V_3}). \end{aligned}$$

The proof is completed. \(\square \)

Example 5.1

Suppose that X and Y have exponential distributions with means \( \frac{1}{\theta ^*}\) and \(\frac{1}{\theta },\) respectively. We get \( J_Y(Y)=\frac{-\theta }{4},\) \( J_{Y}(V_1)=\frac{-\theta }{6},\) \(J_{Y}(V_2)=\frac{-3\theta }{20},\) \( E_Y(f_{V_1})=\frac{\theta }{3},\) \( E_Y(f_{V_2})=\frac{\theta }{4},\) and \(E_Y(f_{V_3})=\frac{\theta }{5}.\) Moreover,

$$\begin{aligned}{} & {} J_{Y}^{(u)}(\alpha )=-\frac{\theta }{4}\left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) ^2 -\frac{\theta }{6}\left( 3\delta ^{(\alpha )}_{n,k:1}- \frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) ^{2} -\frac{3\theta }{20}(5\delta ^{(\alpha )}_{n,k:2})^{2} \\{} & {} \quad -\frac{\theta }{3}\left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) \left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) -\frac{5\theta }{4} \delta ^{(\alpha )}_{n,k:2}\left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) \\{} & {} \quad -\frac{3\theta }{2}\delta ^{(\alpha )}_{n,k:2}\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) . \end{aligned}$$
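The closed form above can be cross-checked against a direct numerical evaluation of the extropy of the concomitant density of Theorem 3.1; the sketch below (ours, with hypothetical \(\theta ,\) n, k, and \(\alpha \)) performs this comparison:

```python
import numpy as np
from scipy import integrate

theta, n, k, a = 1.5, 4, 2, 0.3                  # illustrative (hypothetical) values
d1 = a * (1 - 2 * (k / (k + 1)) ** n)
d2 = a ** 2 * (12 * ((k / (k + 2)) ** n - (k / (k + 1)) ** n) + 2)

F = lambda y: 1 - np.exp(-theta * y)
f = lambda y: theta * np.exp(-theta * y)

def f_conc(y):                                   # concomitant PDF from Theorem 3.1
    return f(y) * (1 + 3 * d1 * (2 * F(y) - 1)
                   + 1.25 * d2 * (3 * (2 * F(y) - 1) ** 2 - 1))

J_direct = -0.5 * integrate.quad(lambda y: f_conc(y) ** 2, 0, np.inf)[0]

A = 1 - 3 * d1 + 2.5 * d2
B = 3 * d1 - 7.5 * d2
J_closed = (-theta / 4 * A ** 2 - theta / 6 * B ** 2 - 3 * theta / 20 * (5 * d2) ** 2
            - theta / 3 * A * B - 5 * theta / 4 * d2 * A - 3 * theta / 2 * d2 * B)
print(J_direct, J_closed)                        # the two values agree
```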

Example 5.2

Suppose that X and Y have power distributions (i.e. \(f_{Y}(y)=c y^{c-1},~\) \(c>0,\) \( 0\le y\le 1\)). After simple algebra, we get \( J_Y(Y)=\frac{-c^{2}}{2(2c-1)},\) \( J_{Y}(V_1)=\frac{-2c^{2}}{4c-1},\) \(J_{Y}(V_2)=\frac{-9c^{2}}{2(6c-1)},\) \( E_Y(f_{V_1})=\frac{2c^{2}}{3c-1},\) \( E_Y(f_{V_2})=\frac{3c^{2}}{4c-1},\) and \(E_Y(f_{V_3})=\frac{4c^{2}}{5c-1}.\) Then,

$$\begin{aligned}{} & {} J_{Y}^{(u)}(\alpha )\!=\!-\frac{c^{2}}{2(2c-1)}\left( 1\!-\!3\delta ^{(\alpha )}_{n,k:1}\!+\!\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) ^{2} \!-\!\frac{2c^{2}}{4c-1}\left( 3\delta ^{(\alpha )}_{n,k:1}\!-\!\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) ^{2}\\{} & {} \quad \!-\!\frac{9c^{2}}{2(6c-1)}\left( 5\delta ^{(\alpha )}_{n,k:2}\right) ^{2}-\frac{2c^{2}}{3c-1}\left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) \left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2} \delta ^{(\alpha )}_{n,k:2}\right) \\{} & {} \quad -\frac{3c^{2}}{4c-1}\left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) \left( 5\delta ^{(\alpha )}_{n,k:2} \right) \\{} & {} \quad -\frac{6c^{2}}{5c-1}\left( 5\delta ^{(\alpha )}_{n,k:2}\right) \left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2} \delta ^{(\alpha )}_{n,k:2}\right) . \end{aligned}$$

Theorem 5.2

Let \(a(n,k)=1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2},\) \(b(n,k)=3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,k:2},\) and \(c(n,k)=-(a(n,k)+b(n,k)-1)\). Furthermore, let \(3a(n,k)c(n,k)-b^2(n,k)>0\) and \( b(n,k)+2c(n,k)+1>0.\) Then the explicit form of the Shannon entropy of \(Y^{(u)}_{[n,k]},\) \(n\ge 1,\) based on SAR\((\alpha )\) is given by

$$\begin{aligned} H_{[n,k]}^{(u)}(\alpha )=\psi _{[n,k]}+a(n,k)H(Y)-2b(n,k)\phi _{f}(1) -15\delta ^{(\alpha )}_{n,k:2}\phi _{f}(2) , \end{aligned}$$

where \(H(Y)=-E(\log f_Y(Y))\) is the Shannon entropy of Y\(\phi _{f}(p)=\int _{0}^{\infty }F^{p}_Y(y)f_Y(y)\log f_Y(y)dy=\int _{0}^{1}z^{p}\log f_Y(F_Y^{-1}(z))dz,~p=1,2,~\) \(\psi _{[n,k]}=-\log (1+3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2})+2b(n,k)J_{0}(n,k)+6c(n,k)J_{1}(n,k),~\) and

$$\begin{aligned} J_{\ell }(n,k)=\int _{0}^{1}\frac{z^{\ell }(a(n,k)z+b(n,k)z^2+c(n,k)z^3)}{a(n,k)+2b(n,k)z+3c(n,k)z^2}dz,~\ell =0,1. \end{aligned}$$

Proof

The Shannon entropy of \( Y_{[n,k]}^{(u)}\) is given by

$$\begin{aligned} H_{[n,k]}^{(u)}(\alpha )= & {} -\int _{0}^{\infty }f_{[n,k]}^{(u)}(y)\log f_{[n,k]}^{(u)}(y)dy\\= & {} -\int _{0}^{\infty }f_Y(y)\left[ 1+3\delta ^{(\alpha )}_{n,k:1}(2F_Y(y)-1)+\frac{5}{4}\delta ^{(\alpha )}_{n,k:2}(3(2F_Y(y)-1)^2-1)\right] \\{} & {} \times \log \left[ f_Y(y)(1+3\delta ^{(\alpha )}_{n,k:1}(2F_Y(y)-1)+\frac{5}{4}\delta ^{(\alpha )}_{n,k:2}(3(2F_Y(y)-1)^2-1))\right] dy\\= & {} H(Y)(1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2})-\phi _{f}(1)(6\delta ^{(\alpha )}_{n,k:1}-15\delta ^{(\alpha )}_{n,k:2})\\{} & {} -\phi _{f}(2)(15\delta ^{(\alpha )}_{n,k:2})+\psi _{[n,k]}, \end{aligned}$$

where \(\psi _{[n,k]}=-E(\log (1+3\delta ^{(\alpha )}_{n,k:1}(2F_Y(Y_{[n,k]})-1)+\frac{5}{4}\delta ^{(\alpha )}_{n,k:2}(3(2F_Y(Y_{[n,k]})-1)^2-1))).\) Upon integrating by parts, we get

$$\begin{aligned} \psi _{[n,k]}= & {} -\int _{0}^{\infty }f_{[n,k]}^{(u)}(y)\log (1+3\delta ^{(\alpha )}_{n,k:1}(2F_Y(y)-1)+\frac{5}{4}\delta ^{(\alpha )}_{n,k:2}(3(2F_Y(y)-1)^2-1))dy\nonumber \\= & {} -\log (1+3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2})+\int _{0}^{\infty }V_{n,k}\,dU_{n,k}, \end{aligned}$$

where \(U_{n,k}=\log (1+3\delta ^{(\alpha )}_{n,k:1}(2F_Y(y)-1)+\frac{5}{4}\delta ^{(\alpha )}_{n,k:2}(3(2F_Y(y)-1)^2-1))\) and \(V_{n,k}=F_Y(y)(1+3\delta ^{(\alpha )}_{n,k:1}\) \((F_Y(y)-1)+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}(2F^2_Y(y)-3F_Y(y)+1)).\) Thus, by using the probability integral transformation \(z=F_Y(y)\) and simplifying the result, we get

$$\begin{aligned} \psi _{[n,k]}=-\log (1+3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2})+2b(n,k)J_{0}(n,k)+6c(n,k)J_{1}(n,k). \end{aligned}$$

The proof is completed. \(\square \)

Theorem 5.3

The inaccuracy measure between \(f_{[n,k]}^{(u)}(y)\) and \(f_Y(y)\) for \(n\ge 1,\) \(\alpha \ne 0,\) is given by

$$\begin{aligned} I_{[n,k]}^{(u)}(\alpha )= & {} (1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2})H(Y)-(6\delta ^{(\alpha )}_{n,k:1}-15\delta ^{(\alpha )}_{n,k:2})\phi _{f}(1) -15\delta ^{(\alpha )}_{n,k:2}\phi _{f}(2), \end{aligned}$$

where \(H(Y)=-\int _{0}^{\infty }f_Y(y)\log f_Y(y)dy~\) is the Shannon entropy of the RV Y and

$$\begin{aligned} \phi _{f}(p)=\int _{0}^{\infty }F^{p}{_Y(y)}f_Y(y)\log f_Y(y)dy=\int _{0}^{1}z^p\log f_Y(F^{-1}{_Y(z)})dz,~p=1,2. \end{aligned}$$

Proof

Clearly, we have

$$\begin{aligned}{} & {} I_{[n,k]}^{(u)}(\alpha )=-\int _{0}^{\infty }f_{[n,k]}^{(u)}(y)\log f_Y(y)dy\nonumber \\{} & {} \quad =(1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2})H(Y)- (6\delta ^{(\alpha )}_{n,k:1}-15\delta ^{(\alpha )}_{n,k:2})\int _{0}^{\infty }F_Y(y)f_Y(y)\log f_Y(y)dy \nonumber \\{} & {} \qquad -15\delta ^{(\alpha )}_{n,k:2}\int _{0}^{\infty }F^2_Y(y)f_Y(y)\log f_Y(y)dy =(1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2})H(Y)\nonumber \\{} & {} \qquad -(6\delta ^{(\alpha )}_{n,k:1}-15\delta ^{(\alpha )}_{n,k:2})\int _{0}^{1}u\log f_Y(F^{-1}_Y(u))du -15\delta ^{(\alpha )}_{n,k:2}\int _{0}^{1}u^2\log f_Y(F^{-1}_Y(u))du \nonumber \\{} & {} \quad =(1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2})H(Y)-(6\delta ^{(\alpha )}_{n,k:1}-15\delta ^{(\alpha )}_{n,k:2})\phi _{f}(1) -15\delta ^{(\alpha )}_{n,k:2}\phi _{f}(2). \end{aligned}$$

This completes the proof. \(\square \)

Example 5.3

Suppose that X and Y have exponential distributions with means \( \frac{1}{\theta ^*}\) and \(\frac{1}{\theta },\) respectively. After simple algebra, we get \( H(Y)= 1-\log \theta ,\) \( \phi _{f}(1)=\frac{-3+2\log \theta }{4},\) and \(\phi _{f}(2)=\frac{-11+6\log \theta }{18}.\) Then,

$$\begin{aligned} I_{[n,k]}^{(u)}(\alpha )\!= & {} \!\left( 1\!-\!3\delta ^{(\alpha )}_{n,k:1}\!+\!\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) (1\!-\!\log \theta )\!-\! \left( 6\delta ^{(\alpha )}_{n,k:1}\!-\!15\delta ^{(\alpha )}_{n,k:2}\right) \left( \frac{-3\!+\!2\log \theta }{4}\right) \!\\{} & {} -\!15\delta ^{(\alpha )}_{n,k:2}\left( \frac{-11\!+\!6\log \theta }{18}\right) . \end{aligned}$$

Theorem 5.4

The CE of \(Y_{[n,k]}^{(u)},\) for \(n\ge 1,\) is given by

$$\begin{aligned} CE_{[n,k]}^{(u)}(\alpha )= & {} \tau _{[n,k]}+CE(Y)\left( 1-3\delta ^{(\alpha )}_{n,k:1}+ \frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) -\phi _{f}(2)\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2} \delta ^{(\alpha )}_{n,k:2}\right) \\{} & {} -\phi _{f}(3)\left( 5\delta ^{(\alpha )}_{n,k:2}\right) , \end{aligned}$$

where  \(CE(Y)\!=-\!\int _{0}^{\infty }F_Y(y)\log F_Y(y)dy\) \(=-\!\int _{0}^{1}\frac{z \log z}{f(F^{-1}{_Y(z)})}dz,~\) \(\phi _{f}(p)\) \(=\int _{0}^{\infty }F^{p}_Y(y)\log F_Y(y)dy\) \(=\int _{0}^{1}\frac{z^{p} \log z}{f(F^{-1}{_Y(z)})}dz,\) \(p=2,3,\) and \(\tau _{[n,k]}=-\int _{0}^{\infty }F_{[n,k]}^{(u)}(y)\log (1+3\delta ^{(\alpha )}_{n,k:1}(F_Y(y)-1)+\frac{5}{4}\delta ^{(\alpha )}_{n,k:2}(4F^{2}_Y(y)-6F_Y(y)+2)) dy.~\)

Proof

Clearly, we have

$$\begin{aligned}{} & {} CE_{[n,k]}^{(u)}(\alpha )=-\int _{0}^{\infty }F_{[n,k]}^{(u)}(y)\log F_{[n,k]}^{(u)}(y)dy\\{} & {} \quad =-\int _{0}^{\infty }F_Y(y)\left[ 1+3\delta ^{(\alpha )}_{n,k:1}(F_Y(y)-1)+\frac{5}{4}\delta ^{(\alpha )}_{n,k:2}(4F^{2}_Y(y)-6F_Y(y)+2)\right] \\{} & {} \qquad \times \log \left[ F_Y(y)\left[ 1+3\delta ^{(\alpha )}_{n,k:1}(F_Y(y)-1)+\frac{5}{4}\delta ^{(\alpha )}_{n,k:2}(4F^{2}_Y(y)-6F_Y(y)+2)\right] \right] dy\\{} & {} \quad =CE(Y)(1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2})-\phi _{f}(2)(3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}) -\phi _{f}(3)(5\delta ^{(\alpha )}_{n,k:2})+\tau _{[n,k]}. \end{aligned}$$

The proof is completed. \(\square \)

Example 5.4

For the Sarmanov copula, we have after simple algebra, \(CE(Y)=\frac{1}{4},\) \(\phi _{f}(2)=-\frac{1}{9},\) and \(\phi _{f}(3)=-\frac{1}{16}.\) Thus, the CE of \( Y_{[n,k]}^{(u)}\) is given by

$$\begin{aligned} CE_{[n,k]}^{(u)}(\alpha )=\tau _{[n,k]}+\frac{1}{4}\left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2} \delta ^{(\alpha )}_{n,k:2}\right) +\frac{1}{9}\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) +\frac{1}{16}\left( 5\delta ^{(\alpha )}_{n,k:2}\right) , \end{aligned}$$

where

$$\begin{aligned} \tau _{[n,k]}\!= & {} \!-\int _{0}^{1}\left[ \left( \left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) z +\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) z^{2} +\left( 5\delta ^{(\alpha )}_{n,k:2}\right) z^{3}\right) \right. \\{} & {} \left. \times \log \left( \left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) +\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{15}{2}\delta ^{(\alpha )}_{n,k:2}\right) z +\left( 5\delta ^{(\alpha )}_{n,k:2}\right) z^{2}\right) \right] dz. \end{aligned}$$
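Example 5.4 can likewise be verified by quadrature. The sketch below (ours, with hypothetical n, k, and \(\alpha \)) evaluates the CE of the concomitant directly from its DF under the Sarmanov copula and compares it with the closed form above:

```python
import numpy as np
from scipy import integrate

n, k, a = 4, 2, 0.3                               # illustrative (hypothetical) values
d1 = a * (1 - 2 * (k / (k + 1)) ** n)
d2 = a ** 2 * (12 * ((k / (k + 2)) ** n - (k / (k + 1)) ** n) + 2)

def F_conc(z):                                    # DF of the concomitant, uniform marginals
    return (1 - 3 * d1 + 2.5 * d2) * z + (3 * d1 - 7.5 * d2) * z ** 2 + 5 * d2 * z ** 3

CE_direct = -integrate.quad(lambda z: F_conc(z) * np.log(F_conc(z)), 1e-12, 1)[0]

tau = -integrate.quad(
    lambda z: F_conc(z) * np.log((1 - 3 * d1 + 2.5 * d2)
                                 + (3 * d1 - 7.5 * d2) * z + 5 * d2 * z ** 2), 0, 1)[0]
CE_closed = (tau + 0.25 * (1 - 3 * d1 + 2.5 * d2)
             + (1 / 9) * (3 * d1 - 7.5 * d2) + (1 / 16) * 5 * d2)
print(CE_direct, CE_closed)                       # the two values agree
```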

Remark 5.1

For \(k=1\) (the record case), all the results concerning the Shannon entropy, inaccuracy measure, extropy, and CE were obtained by Husseiny et al. [27].

Theorem 5.5

The CRE of \(Y_{[n,k]}^{(u)}\) for \(n\ge 1,\) is given by

$$\begin{aligned} CRE_{[n,k]}^{(u)}(\alpha )=CRE(Y)-\phi _{\overline{F}}(1)\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{5}{2} \delta ^{(\alpha )}_{n,k:2}\right) -\phi _{\overline{F}}(2)\left( 5\delta ^{(\alpha )}_{n,k:2}\right) +\Omega _{[n,k]}, \end{aligned}$$

where  \(CRE(Y)\!=\!-\!\int _{0}^{\infty }\overline{F}_Y(y)\log \overline{F}_Y(y)dy \!=\!-\!\int _{0}^{1}\frac{z\log z}{f(F^{-1}{_Y(1-z)})}dz,\) \(\phi _{\overline{F}}(p+1)\!=\!\int _{0}^{\infty } F^{p+1}_Y(y)\overline{F}_Y(y)\log \overline{F}_Y(y)dy \) \(=\int _{0}^{1}\frac{z (1-z)^{p+1} \log z}{f(F^{-1}{_Y(1-z)})}dz,~p=0,1,~\) and \(\Omega _{[n,k]}=-\int _{0}^{\infty }\overline{F}_{[n,k]}^{(u)}(y)\log (1+3\delta ^{(\alpha )}_{n,k:1}F_Y(y)+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}(2F^{2}_Y(y)-F_Y(y))) dy.~\)

Proof

Clearly, we have

$$\begin{aligned}{} & {} CRE_{[n,k]}^{(u)}(\alpha )=-\int _{0}^{\infty }\overline{F}_{[n,k]}^{(u)}(y)\log \overline{F}_{[n,k]}^{(u)}(y)dy\\{} & {} \quad =-\int _{0}^{\infty }\overline{F}_Y(y)\left[ 1+3\delta ^{(\alpha )}_{n,k:1}F_Y(y)+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}F_Y(y)(2F_Y(y)-1)\right] \\{} & {} \qquad \times \log \left[ \overline{F}_Y(y)\left[ 1+3\delta ^{(\alpha )}_{n,k:1}F_Y(y)+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}F_Y(y)(2F_Y(y)-1)\right] \right] dy\\{} & {} \quad =CRE(Y)-\phi _{\overline{F}_Y}(1)\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) -\phi _{\overline{F}_Y}(2)\left( 5\delta ^{(\alpha )}_{n,k:2}\right) +\Omega _{[n,k]}. \end{aligned}$$

The proof is completed. \(\square \)

Example 5.5

For the Sarmanov copula, we have after simple algebra, \(CRE(Y)=\frac{1}{4},\) \(\phi _{\overline{F}_Y}(1)=-\frac{5}{36},\) and \(\phi _{\overline{F}_Y}(2)=-\frac{13}{144}.\) Thus, the CRE of \( Y_{[n,k]}^{(u)}\) is given by

$$\begin{aligned} CRE_{[n,k]}^{(u)}(\alpha )=\frac{1}{4}+\frac{5}{36}\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) +\frac{13}{144}\left( 5\delta ^{(\alpha )}_{n,k:2}\right) +\Omega _{[n,k]}, \end{aligned}$$

where

$$\begin{aligned} \Omega _{[n,k]}\!= & {} \!-\int _{0}^{1}\left[ \left( 1+\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) z +5\delta ^{(\alpha )}_{n,k:2}z^{2}\right) (1-z)\right. \\{} & {} \left. \times \log \left( 1+\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) z +5\delta ^{(\alpha )}_{n,k:2}z^{2}\right) \right] dz. \end{aligned}$$

Theorem 5.6

Let \(F_{[n,k]}^{(u)}(y)\) be the DF of the concomitant of the nth upper k-record value based on SAR\((\alpha ).\) Then, the CF for location parameter based on \(Y_{[n,k]}^{(u)},\) for \(n\ge 1,\) is given by

$$\begin{aligned} CF_{\overline{F}_{[n,k]}}^{(u)}(\alpha )= & {} \int _{0}^{\infty }\left( \frac{\partial \log \overline{F}_{[n,k]}^{(u)}(y)}{\partial y}\right) ^2\overline{F}_{[n,k]}^{(u)}(y)dy\\ {}= & {} CF_{\overline{F}_Y}(Y)+\tau _{\overline{F}_Y}+2\phi _{\overline{F}_Y}+\psi _{\overline{F}_Y}, \end{aligned}$$

where

$$\begin{aligned} \tau _{\overline{F}_Y}= & {} \int _{0}^{\infty }\left( \frac{\partial \log \overline{F}_Y(y)}{\partial y}\right) ^2 \left( 3\delta ^{(\alpha )}_{n,k:1}F_Y(y)+ \frac{5}{2}\delta ^{(\alpha )}_{n,k:2}F_Y(y)(2F_Y(y)-1)\right) \overline{F}_Y(y)dy,\\ \phi _{\overline{F}_Y}= & {} -\int _{0}^{\infty }\left( 3\delta ^{(\alpha )}_{n,k:1}- \frac{5}{2}\delta ^{(\alpha )}_{n,k:2}+10\delta ^{(\alpha )}_{n,k:2}F_Y(y)\right) f^{2}_Y(y)dy, \end{aligned}$$

and

$$\begin{aligned} \psi _{\overline{F}_Y}=\int _{0}^{\infty }\frac{\left[ f_Y(y)\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}+10\delta ^{(\alpha )}_{n,k:2} F_Y(y)\right) \right] ^2}{1+\left( 3\delta ^{(\alpha )}_{n,k:1} -\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) F_Y(y)+5\delta ^{(\alpha )}_{n,k:2}F^2_Y(y)}\overline{F}_Y(y)dy. \end{aligned}$$

Proof

By using (3.1), the CF of \( Y_{[n,k]}^{(u)}\) is given by

$$\begin{aligned}{} & {} CF_{\overline{F}_{[n,k]}}^{(u)}(\alpha )=\int _{0}^{\infty }\left( \frac{\partial \log \overline{F}_Y(y)}{\partial y}+ \frac{\partial \log (1+3\delta ^{(\alpha )}_{n,k:1}F_Y(y)+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}F_Y(y)(2F_Y(y)-1))}{\partial y}\right) ^2\\{} & {} \qquad \times \left( \overline{F}_Y(y)\left[ 1+3\delta ^{(\alpha )}_{n,k:1}F_Y(y)+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}F_Y(y)(2F_Y(y)-1)\right] \right) dy\\{} & {} \quad =\int _{0}^{\infty }\left( \frac{\partial \log \overline{F}_Y(y)}{\partial y}\right) ^2 \overline{F}_Y(y)dy+\int _{0}^{\infty }\left( \frac{\partial \log \overline{F}_Y(y)}{\partial y}\right) ^2\\{} & {} \qquad \times \left( 3\delta ^{(\alpha )}_{n,k:1}F_Y(y)+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}F_Y(y)(2F_Y(y)-1)\right) \overline{F}_Y(y)dy\\{} & {} \qquad +\int _{0}^{\infty } \left( \frac{\partial \log (1+3\delta ^{(\alpha )}_{n,k:1}F_Y(y)+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}F_Y(y)(2F_Y(y)-1))}{\partial y}\right) ^2 \\{} & {} \qquad \times \left( 1+3\delta ^{(\alpha )}_{n,k:1}F_Y(y)+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}F_Y(y)(2F_Y(y)-1)\right) \overline{F}_Y(y)dy \\{} & {} \qquad +2\int _{0}^{\infty } \left( \frac{\partial \log \overline{F}_Y(y)}{\partial y}\right) \\{} & {} \qquad \times \left( \frac{\partial \log (1+3\delta ^{(\alpha )}_{n,k:1}F_Y(y)+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}F_Y(y)(2F_Y(y)-1))}{\partial y}\right) \\{} & {} \qquad \times \left( 1+3\delta ^{(\alpha )}_{n,k:1}F_Y(y)+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}F_Y(y)(2F_Y(y)-1)\right) \overline{F}_Y(y)dy. \end{aligned}$$

\(\square \)

Remark 5.2

Let \(Y_{[n,k]}^{(i)},~i=\ell ,u,\) be based on SAR\((\alpha )\) with arbitrary marginals. In view of Theorem 3.2 and the relation between \(f_{[n,k]}^{(u)}(y)\) and \(f_{[n,k]}^{(\ell )}(y),\) it is easy to check that \(J^{(u)}_{[n,k]}(-\alpha )=J^{(\ell )}_{[n,k]}(\alpha ),\) \(H^{(u)}_{[n,k]}(-\alpha )=H^{(\ell )}_{[n,k]}(\alpha ),\) \(I^{(u)}_{[n,k]}(-\alpha )=I^{(\ell )}_{[n,k]}(\alpha ), CE^{(u)}_{[n,k]}(-\alpha )=CE^{(\ell )}_{[n,k]}(\alpha ), CRE^{(u)}_{[n,k]}(-\alpha )=CRE^{(\ell )}_{[n,k]}(\alpha ),\) and \(CF_{\overline{F}_{[n,k]}}^{(u)}(-\alpha )=CF_{\overline{F}_{[n,k]}}^{(\ell )}(\alpha ),\) where \(J^{(i)}_{[n,k]}(\alpha ), H^{(i)}_{[n,k]}(\alpha ), I^{(i)}_{[n,k]}(\alpha ), CE^{(i)}_{[n,k]}(\alpha ), CRE^{(i)}_{[n,k]}(\alpha ),\) and \(CF_{\overline{F}_{[n,k]}}^{(i)}(\alpha )\) are the extropy, Shannon entropy, inaccuracy measure, CE, CRE, and CF of \(Y_{[n,k]}^{(i)}, i=\ell ,u,\) respectively.

Example 5.6

Let X and Y have exponential distributions with means \( \frac{1}{\theta ^*}\) and \(\frac{1}{\theta },\) respectively. Then, \( CF_{\overline{F}_Y}(Y)=\theta ,\) \(\tau _{\overline{F}_Y}=\frac{\theta }{2}(3\delta ^{(\alpha )}_{n,k:1}-\frac{5}{2}\delta ^{(\alpha )}_{n,k:2})+\frac{5\theta }{3}\delta ^{(\alpha )}_{n,k:2},\) and \( \phi _{\overline{F}_Y}=\frac{\theta }{2}(-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2})-\frac{5\theta }{3}\delta ^{(\alpha )}_{n,k:2}.\) Thus, the CF of \( Y_{[n,k]}^{(u)}\) is given by

$$\begin{aligned} CF_{\overline{F}_{[n,k]}}^{(u)}(\alpha )=\left( 1-3\delta ^{(\alpha )}_{n,k:1}+\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) \theta +\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) \frac{\theta }{2}-\frac{5\theta }{3}\delta ^{(\alpha )}_{n,k:2} +\theta ^2 J{(\theta )}, \end{aligned}$$

where

$$\begin{aligned} J{(\theta )}\!=\!\int _{0}^{\infty }\frac{\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}+10\delta ^{(\alpha )}_{n,k:2}(1-e^{-\theta y})\right) ^2e^{-3\theta y}}{1+\left( 3\delta ^{(\alpha )}_{n,k:1}-\frac{5}{2}\delta ^{(\alpha )}_{n,k:2}\right) (1-e^{-\theta y})+5\delta ^{(\alpha )}_{n,k:2}(1-e^{-\theta y})^2}\,dy. \end{aligned}$$
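As with the previous examples, the CF formula above can be verified numerically; the following sketch (ours, with hypothetical \(\theta ,\) n, k, and \(\alpha \)) evaluates the CF of \(Y_{[n,k]}^{(u)}\) directly from its definition and compares it with the closed form of Example 5.6:

```python
import numpy as np
from scipy import integrate

theta, n, k, a = 1.0, 4, 2, 0.3                   # illustrative (hypothetical) values
d1 = a * (1 - 2 * (k / (k + 1)) ** n)
d2 = a ** 2 * (12 * ((k / (k + 2)) ** n - (k / (k + 1)) ** n) + 2)

F = lambda y: 1 - np.exp(-theta * y)
f = lambda y: theta * np.exp(-theta * y)

def Fbar_conc(y):                                 # survival function of the concomitant
    return (1 - F(y)) * (1 + (3 * d1 - 2.5 * d2) * F(y) + 5 * d2 * F(y) ** 2)

def f_conc(y):                                    # PDF of the concomitant (Theorem 3.1)
    return f(y) * (1 + 3 * d1 * (2 * F(y) - 1) + 1.25 * d2 * (3 * (2 * F(y) - 1) ** 2 - 1))

# CF for the location parameter, computed directly from its definition
CF_direct = integrate.quad(lambda y: f_conc(y) ** 2 / Fbar_conc(y), 0, np.inf)[0]

# the closed form of Example 5.6
J_theta = integrate.quad(
    lambda y: ((3 * d1 - 2.5 * d2 + 10 * d2 * F(y)) ** 2 * np.exp(-3 * theta * y))
              / (1 + (3 * d1 - 2.5 * d2) * F(y) + 5 * d2 * F(y) ** 2), 0, np.inf)[0]
CF_closed = ((1 - 3 * d1 + 2.5 * d2) * theta + (3 * d1 - 2.5 * d2) * theta / 2
             - 5 * theta / 3 * d2 + theta ** 2 * J_theta)
print(CF_direct, CF_closed)                       # the two values agree
```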
Table 1 Extropy and entropy for \(Y_{[n,k]}^{(u)}\) in SAR\((\alpha )\)
Table 2 Inaccuracy, CE and CF for \(Y_{[n,k]}^{(u)}\) in SAR\((\alpha )\)

6 Numerical Study and Discussion

Tables 1 and 2 display the values of extropy, entropy, inaccuracy, CE, and CF of concomitants of the nth upper k-record values based on SAR\((\alpha )\) with marginals from some of the most popular distributions. The entries were evaluated with MATHEMATICA version 12 by means of Theorems 5.1-5.6; a sketch of how such entries can be recomputed is given after the list below. From these tables, the following properties can be extracted:

  • For the exponential marginals, the greatest value of the extropy is \(J_{[8,2]}^{(u)}(0.52)\simeq -0.069\) and the smallest value is \(J_{[8,2]}^{(u)}(-0.52)\simeq -0.4.\) In addition, with fixed \(\alpha ,\) the value of \(J_{[n,k]}^{(u)}(\alpha )\) slowly increases as n increases for \(\alpha >0.\) In contrast, the value of \(J_{[n,k]}^{(u)}(\alpha )\) slowly decreases as n increases for \(\alpha <0\) (see Table 1, Part 1).

  • For the power marginals, the greatest value of the extropy is \(J_{[8,2]}^{(u)}(-0.3)\simeq -0.55\) and the smallest value is \(J_{[8,2]}^{(u)}(0.52)\simeq -1.76.\) Moreover, with fixed \(\alpha ,\) the value of \(J_{[n,k]}^{(u)}(\alpha )\) slowly decreases as n increases for \(\alpha >0.\) In contrast, the value of \(J_{[n,k]}^{(u)}(\alpha )\) slowly increases as n increases for \(\alpha <0\) (see Table 1, Part 2).

  • By using the Sarmanov copula, Table 1, Part 3, shows that \(H_{[n,k]}^{(u)}(-\alpha )=H_{[n,k]}^{(u)}(\alpha ),\) which endorses the theoretical results given in Theorem 4.1. Moreover, the greatest value of the entropy is \( H_{[3,4]}^{(u)}(0.2)\simeq -0.0004\) and the smallest value is \(H_{[8,2]}^{(u)}(-0.52)\simeq -0.42.\) Also, the value of \(H_{[n,k]}^{(u)}(\alpha )\) increases as the value of k increases for large n \((n\ge 5).\)

  • For the exponential marginals, the value of \(I_{[n,k]}^{(u)}(\alpha )\) slowly increases as n increases for \(\alpha >0\). In contrast, the value of \(I_{[n,k]}^{(u)}(\alpha )\) slowly decreases as n increases for \(\alpha <0\) (see Table 2, Part 1).

  • By using the Sarmanov copula, the greatest value of CE is \(CE_{[8,2]}^{(u)}(0.2)\simeq 0.259\) and the smallest value is \(CE_{[8,2]}^{(u)}(-0.52)\simeq 0.166.\) Also, with fixed \(\alpha \) and k, the value of \(CE_{[n,k]}^{(u)}(\alpha )\) slowly increases as n increases for \(0<\alpha \le 0.4,\) whereas it slowly decreases as n increases for \(\alpha <0.\) In addition, the value of \(CE_{[n,k]}^{(u)}(\alpha )\) decreases as \(|\alpha |\) increases (see Table 2, Part 2).

  • For the exponential marginals, the greatest value of CF is \(CF_{\overline{F}_{[8,2]}}^{(u)}(-0.52)\simeq 2.86\) and the smallest value is \(CF_{\overline{F}_{[8,2]}}^{(u)}(0.52)\simeq 0.683.\) Moreover, with fixed k and \(\alpha >0,\) the value of \(CF_{\overline{F}_{[n,k]}}^{(u)}(\alpha )\) slowly decreases as n increases. In contrast, the value of \(CF_{\overline{F}_{[n,k]}}^{(u)}(\alpha )\) slowly increases as n increases for \(\alpha <0\) (see Table 2, Part 3).
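For completeness, the sketch promised before the list recomputes entries of the type reported in Table 1 from Theorem 5.1 (ours; the marginal rate \(\theta \) below is illustrative, since the marginal parameters used for Tables 1 and 2 are not restated here):

```python
import numpy as np

def delta1(n, k, a):
    return a * (1 - 2 * (k / (k + 1)) ** n)

def delta2(n, k, a):
    return a ** 2 * (12 * ((k / (k + 2)) ** n - (k / (k + 1)) ** n) + 2)

def extropy_conc_exponential(n, k, a, theta):
    """J_{[n,k]}^{(u)}(alpha) for exponential Y with rate theta (Example 5.1)."""
    d1, d2 = delta1(n, k, a), delta2(n, k, a)
    A, B = 1 - 3 * d1 + 2.5 * d2, 3 * d1 - 7.5 * d2
    return (-theta / 4 * A ** 2 - theta / 6 * B ** 2 - 3 * theta / 20 * (5 * d2) ** 2
            - theta / 3 * A * B - 5 * theta / 4 * d2 * A - 3 * theta / 2 * d2 * B)

theta = 1.0                                  # illustrative rate; the tables fix their own marginals
for n in (3, 5, 8):
    for alpha in (-0.52, -0.3, 0.3, 0.52):
        print(n, 2, alpha, round(extropy_conc_exponential(n, 2, alpha, theta), 4))
```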