1 Introduction

The concept of generalized order statistics (GOSs) was introduced by [1] as a unified model for ascendingly ordered random variables. It includes a variety of models of ordered random variables, such as ordinary order statistics, sequential order statistics, progressive type-II censoring, record values, and Pfeifer's records. Let F be an absolutely continuous cumulative distribution function (cdf) with corresponding probability density function (pdf) f. The random variables \({\varvec{X}} =(X_{\left( 1,n,\tilde{m},k\right) },\dots , X_{\left( n,n,\tilde{m},k\right) })\) are GOSs based on F if their joint pdf is

$$\begin{aligned} f_{\varvec{X}}\left( x_{1}, \dots , x_n\right) =C_n\left( \prod ^{n-1}_{i =1} \left( 1-F(x_i )\right) ^{{\gamma }_i-{\gamma }_{i+1}-1}f(x_i)\right) \left( 1-F(x_n)\right) ^{k-1}f(x_{n}), \end{aligned}$$
(1)

where \(F^{-1}\left( 0\right)<x_{1}<\dots<x_n <F^{-1}\left( 1\right) \), \(C_r=\prod ^r_{i=1}{\gamma }_i\), \(r=1,\dots ,n\), and \(F^{-1}(t)=\inf \left\{ x:F(x)\ge t \right\} \) denotes the quantile function of F. The parameters in (1) are defined by \({\gamma }_i=k+n-i+\sum ^{n-1}_{l=i}m_l>0\), \(i=1,\dots ,n-1\), \({\gamma }_{n}=k\), \({\gamma }_{n+1}=0\), and \(\tilde{m}=\left( m_{1},\dots ,m_{n-1}\right) \in {{\mathbb R}}^{n-1}\).

Appropriate choices of the parameters \(m_i\) and k lead to special cases of GOSs. For example, the GOS \(X_{\left( r,n,\tilde{m},k\right) }\) reduces to the rth order statistic by choosing \(m_i = 0\) and \(k = 1\), whereas it becomes the rth upper record value for \(m_i =-1\) and \(k = 1\) ([1]). The GOSs also include sequential order statistics, k-record values, Pfeifer's records, and progressive type-II censored statistics. In this paper, we consider the case \(m_{1}=\dots =m_{n-1}=m\) (the so-called m-GOSs) and denote the random variable \(X_{\left( r,n,\tilde{m},k\right) }\) by \(X_{\left( r,n,m,k\right) }\), \(r=1,\dots ,n\). The marginal pdf of \(X_{\left( r,n,m,k\right) }\) is

$$\begin{aligned} f_{\left( r,n,m,k\right) }(x)=\frac{C_r}{(r-1)!}\bar{F}^{\gamma _r-1}(x)f(x)h_m^{r-1}(F(x)), \end{aligned}$$

where \(\bar{F}=1-F\) and

$$\begin{aligned} h_m(F(x))=\left\{ \begin{array}{ll} \frac{1}{m+1}\left[ 1-\bar{F}^{m+1}(x) \right] &{} m\ne -1\\ -\log \bar{F}(x) &{} m=-1,F(x)\in [0,1). \end{array}\right. \end{aligned}$$

Suppose that \((X_{i},Y_{i})\), \(i=1,2,\dots \), is a sequence of independent and identically distributed random vectors from a bivariate distribution. If the X-variates are arranged in increasing order as \(X_{\left( 1,n,m,k\right) }\le X_{\left( 2,n,m,k\right) }\le \dots \le X_{\left( n,n,m,k\right) }\), then the Y-variates paired with these m-GOSs are called the concomitants of m-GOSs and are denoted by \(Y_{\left[ r,n,m,k\right] }\), \(r=1,\dots ,n\). For the Farlie-Gumbel-Morgenstern (FGM) family, defined by [2] with pdf given by

$$\begin{aligned} f_{X,Y}(x,y)=f_{X}(x) f_{Y}(y) \left[ 1+\alpha (2F_{X}(x)-1)(2F_{Y}(y)-1)\right] , \; \; \; \; \vert \alpha \vert \le 1, \end{aligned}$$
(2)

the pdf and cdf of the concomitant of the rth m-GOS are given by [3] as follows:

$$\begin{aligned}&g_{[r,n,m,k]}(y)=f_Y(y)\left[ 1+\alpha C^*(r,n,m,k)(1-2F_Y(y))\right] , \end{aligned}$$
(3)
$$\begin{aligned}&G_{[r,n,m,k]}(y)=F_Y(y)\left[ 1+\alpha C^*(r,n,m,k)(1-F_Y(y))\right] , \end{aligned}$$
(4)

where \(C^*(r,n,m,k)=\frac{2\prod _{j=1}^r \gamma _j}{\prod _{i=1}^r (\gamma _i+1)}-1\).
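For concreteness, \(C^*(r,n,m,k)\) is easy to evaluate numerically. The following Python sketch (an illustration, not part of the paper) checks it against the two special cases used later, \(\delta_r\) for order statistics and \(c_r\) for upper records.

```python
# Illustrative sketch (not from the paper): evaluate C*(r, n, m, k) and check
# the two special cases used later, delta_r for order statistics and c_r for
# upper records.

def gamma_i(i, n, m, k):
    # gamma_i = k + n - i + (n - i) * m when m_1 = ... = m_{n-1} = m
    return k + (n - i) * (1 + m)

def C_star(r, n, m, k):
    num, den = 1.0, 1.0
    for j in range(1, r + 1):
        g = gamma_i(j, n, m, k)
        num *= g
        den *= g + 1.0
    return 2.0 * num / den - 1.0

n, r = 7, 3
# Order statistics (m = 0, k = 1): delta_r = (n - 2r + 1) / (n + 1).
assert abs(C_star(r, n, 0, 1) - (n - 2 * r + 1) / (n + 1)) < 1e-12
# Upper records (m = -1, k = 1): c_r = 2**(1 - r) - 1.
assert abs(C_star(r, n, -1, 1) - (2.0 ** (1 - r) - 1.0)) < 1e-12
```

The product in \(C^*\) telescopes for \(m=0\), \(k=1\), which is why the closed form \(\delta_r=(n-2r+1)/(n+1)\) emerges.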

The concomitants of m-GOSs in the FGM family were studied by [3]. Some properties of the concomitants of m-GOSs from the FGM type bivariate Rayleigh distribution were obtained by [4]. Some inaccuracy measures for the concomitants of m-GOSs in the FGM family were derived by [5]. Barakat and Husseiny [6] and Abd Elgawad et al. [7] studied some information measures for concomitants of m-GOSs under the iterated FGM and Huang-Kotz FGM families, respectively. Abd Elgawad et al. [8] and Alawady et al. [9] investigated some properties of concomitants of m-GOSs from the bivariate Cambanis family. Mohamed et al. [10] studied the residual extropy of concomitants of m-GOSs based on the FGM distribution and presented a nonparametric estimator for this measure.

Recently, an alternative measure of uncertainty, called extropy, was proposed by [11]. For an absolutely continuous nonnegative random variable X with pdf f and cdf F, the extropy is defined as

$$\begin{aligned} J(X)=-\frac{1}{2}\int _{0}^{+\infty }[f(x)]^{2}\mathrm{d}x=-\frac{1}{2}\int _{0}^{1}f(F^{-1}(u))du. \end{aligned}$$
(5)
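As an illustrative numerical check (not from the paper), the two integral forms in (5) can be compared for an exponential density \(f(x)=\lambda e^{-\lambda x}\), for which \(J(X)=-\lambda /4\).

```python
# Illustrative sketch (not from the paper): compare the two integral forms of
# the extropy (5) for an exponential density f(x) = lam * exp(-lam * x), for
# which J(X) = -lam / 4.
import math

lam = 2.0
f = lambda x: lam * math.exp(-lam * x)
Finv = lambda u: -math.log(1.0 - u) / lam  # quantile function F^{-1}(u)

def trapz(g, a, b, npts=100_000):
    h = (b - a) / npts
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, npts)))

J_density = -0.5 * trapz(lambda x: f(x) ** 2, 0.0, 40.0 / lam)
J_quantile = -0.5 * trapz(lambda u: f(Finv(u)), 1e-9, 1.0 - 1e-9)

assert abs(J_density + lam / 4.0) < 1e-4
assert abs(J_quantile + lam / 4.0) < 1e-4
```

Here \(f(F^{-1}(u))=\lambda (1-u)\), so the quantile-form integral is immediate; this substitution is used repeatedly below.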

Several authors have paid attention to extropy and its applications. Qiu [12] discussed some characterization results and lower bounds of extropy for order statistics and record values. Qiu and Jia [13] studied the residual extropy of order statistics, and [14] explored extropy estimators with applications in testing uniformity. Qiu et al. [15] obtained some results on the extropy properties of mixed systems. Zamanzade and Mahdizadeh [16] presented an extropy-based test of uniformity in ranked set sampling and compared it with simple random sampling. Mohamed et al. [17] applied the fractional and weighted cumulative residual entropy measures to test uniformity and discussed some of their properties.

The cumulative residual extropy (CREX) was proposed by [18] in analogy with (5) as a measure of uncertainty of random variables. The CREX is defined as

$$\begin{aligned} {\mathcal {J}}^{\star }(X)=-\frac{1}{2}\int _{0}^{\infty }\overline{F}^{2}(x)\mathrm{d}x. \end{aligned}$$
(6)

Since it is always non-positive, the negative CREX (NCREX) can be defined as

$$\begin{aligned} {\mathcal {J}}(X)=\frac{1}{2}\int _{0}^{\infty }\overline{F}^{2}(x)\mathrm{d}x. \end{aligned}$$
(7)

Some properties of the CREX, and hence of the NCREX, were studied by [18] and [19]. Recently, [20] proposed the negative cumulative extropy (NCEX), in analogy with (7), defined as

$$\begin{aligned} {\mathcal {CJ}}(X)=\frac{1}{2}\int _{0}^{\infty }\left[ 1-F^{2}(x)\right] \mathrm{d}x=\int _{0}^{1}\frac{\phi (u)}{f\left( F^{-1}(u)\right) }du, \end{aligned}$$
(8)

where \(\phi (u)=\frac{1-u^2}{2}, 0<u<1.\)
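As an illustrative numerical check (not from the paper), for an exponential marginal the NCEX can be computed both from the survival form and from the quantile form; both give \(3\theta /4\).

```python
# Illustrative sketch (not from the paper): for an exponential cdf
# F(x) = 1 - exp(-x / theta), both the survival form and the quantile form of
# the NCEX give 3 * theta / 4.
import math

theta = 1.5
F = lambda x: 1.0 - math.exp(-x / theta)
fq = lambda u: (1.0 - u) / theta            # f(F^{-1}(u)) for this model
phi = lambda u: (1.0 - u * u) / 2.0

def trapz(g, a, b, npts=100_000):
    h = (b - a) / npts
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, npts)))

CJ_direct = 0.5 * trapz(lambda x: 1.0 - F(x) ** 2, 0.0, 60.0 * theta)
CJ_quantile = trapz(lambda u: phi(u) / fq(u), 0.0, 1.0 - 1e-9)

assert abs(CJ_direct - 0.75 * theta) < 1e-3
assert abs(CJ_quantile - 0.75 * theta) < 1e-3
```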

Motivated by [5, 10, 16, 18, 20], we aim in this paper to present some results on extropy for concomitants of m-GOSs in the FGM family. The rest of the paper is organized as follows. In Sect. 2, we first obtain the extropy of \(Y_{[r,n,m,k]}\) in the FGM family. We also study some results on the CREX and NCEX associated with \(G_{[r,n,m,k]}(y)\). In Sect. 3, we discuss the problem of estimating the NCREX and NCEX by means of the empirical NCREX and NCEX for concomitants of m-GOSs. A real example is given in Sect. 4.

Note that the terms increasing and decreasing are used in the non-strict sense. Throughout this paper, all expectations are assumed to exist whenever they appear.

2 Some Measures for \(Y_{[r,n,m,k]}\)

In this section, we obtain the extropy, CREX, and NCEX measures for the rth concomitant \(Y_{[r,n,m,k]}\) of m-GOSs in the FGM family. Some properties of these measures are investigated, and applications of these results are given for the concomitants of order statistics and record values.

2.1 Extropy Measure for \(Y_{[r,n,m,k]}\)

If \(Y_{[r,n,m,k]}\) is the concomitant of the rth m-GOS with pdf (3), then its extropy measure is given by

$$\begin{aligned} J(Y_{[r,n,m,k]})= & {} -\frac{1}{2}\int _0^{\infty }g^{2}_{[r,n,m,k]}(y)dy \nonumber \\= & {}\, J(Y)\nonumber \\&-\frac{[\alpha C^*(r,n,m,k)]^{2}}{2}\mathbb {E}[f_Y(F_Y^{-1}(U))(1-2U)^{2}]\nonumber \\&-\alpha C^*(r,n,m,k)\mathbb {E}[f_Y(F_Y^{-1}(U))(1-2U)],\nonumber \\= & {}\, J\left( Y\right) {\left[ 1+\alpha C^*(r,n,m,k)\right] }^2\nonumber \\&+2\alpha C^*(r,n,m,k) \mathbb {E}\left[ f_Y\left( F^{-1}_Y\left( U\right) \right) U\right] \nonumber \\&-2{\left[ \alpha C^*(r,n,m,k)\right] }^2 \mathbb {E}\left[ f_Y\left( F^{-1}_Y\left( U\right) \right) \left( U^2-U\right) \right] , \end{aligned}$$
(9)

where U is a uniformly distributed random variable on (0, 1) and J(Y) is the extropy of the random variable Y.

Suppose that \(Q(u)=F_Y^{-1}(u)\) is the quantile function. We know that \(q(u)f_Y(Q(u))=1\), where \(q(u)=Q'(u)\) is the derivative of Q(u), known as the quantile density function. Now, by making use of (9), the corresponding quantile-based \(J(Y_{[r,n,m,k]})\) can be written as

$$\begin{aligned} J(Y_{[r,n,m,k]})= & {}\, J(Y)-\frac{[\alpha C^*(r,n,m,k)]^{2}}{2}\mathbb {E}\left[ \frac{(1-2U)^{2}}{q(U)}\right] \nonumber \\&-\alpha C^*(r,n,m,k)\mathbb {E}\left[ \frac{1-2U}{q(U)}\right] , \end{aligned}$$
(10)

where \(J(Y)=\frac{-1}{2}\mathbb {E}\left[ \frac{1}{q(U)}\right] \), which is easy to compute. In the following, we consider order statistics and record values as two special cases of m-GOSs and obtain the properties of the extropy measure for their concomitants.
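The quantile-based form can be verified numerically. The sketch below (an illustration, not from the paper) compares (10) with direct integration of \(-\frac{1}{2}\int g^2\) for an exponential Y-marginal and an arbitrary illustrative coefficient value \(C^*=0.4\).

```python
# Illustrative sketch (not from the paper): verify the quantile-based form (10)
# against direct integration of -(1/2) * g^2 for an exponential Y-marginal
# F_Y(y) = 1 - exp(-y / theta) and an arbitrary illustrative coefficient
# C* = 0.4 (any admissible value of C*(r, n, m, k) would do).
import math

theta, alpha, Cstar = 2.0, 0.8, 0.4
a = alpha * Cstar
fY = lambda y: math.exp(-y / theta) / theta
FY = lambda y: 1.0 - math.exp(-y / theta)
inv_q = lambda u: (1.0 - u) / theta  # 1 / q(u) = f_Y(F_Y^{-1}(u))

def trapz(g, lo, hi, npts=100_000):
    h = (hi - lo) / npts
    return h * (0.5 * (g(lo) + g(hi)) + sum(g(lo + i * h) for i in range(1, npts)))

# Direct: J = -(1/2) * int g^2, with g = f_Y * (1 + a * (1 - 2 F_Y)).
g = lambda y: fY(y) * (1.0 + a * (1.0 - 2.0 * FY(y)))
J_direct = -0.5 * trapz(lambda y: g(y) ** 2, 0.0, 60.0 * theta)

# Quantile form (10).
JY = -0.5 * trapz(inv_q, 0.0, 1.0)
term2 = trapz(lambda u: (1.0 - 2.0 * u) ** 2 * inv_q(u), 0.0, 1.0)
term3 = trapz(lambda u: (1.0 - 2.0 * u) * inv_q(u), 0.0, 1.0)
J_quantile = JY - 0.5 * a ** 2 * term2 - a * term3

assert abs(J_direct - J_quantile) < 1e-4
```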

Case 1: If \(m = 0\) and \(k = 1\), then the m-GOSs become order statistics. The pdf and cdf of the concomitant of the rth order statistic, \(Y_{[r:n]}\), are given by

$$\begin{aligned}&g_{Y_{[r:n]}}(y)=f_{Y}(y)\left[ 1+\alpha \delta _r(1-2F_{Y}(y))\right] ,\\&G_{Y_{[r:n]}}(y)=F_{Y}(y)\left[ 1+\alpha \delta _r(1-F_{Y}(y))\right] , \end{aligned}$$

respectively, where \(\delta _r=\frac{n-2r+1}{n+1}\). According to (10), the extropy measure of \(Y_{[r:n]}\) is obtained as

$$\begin{aligned} J(Y_{[r:n]}) =J(Y)-\frac{(\alpha \delta _r)^{2}}{2}\mathbb {E}\left[ \frac{(1-2U)^{2}}{q(U)}\right] -\alpha \delta _r\mathbb {E}\left[ \frac{1-2U}{q(U)}\right] . \end{aligned}$$
(11)

Define now \(a_n=\frac{n-1}{n+1}\). From (11), we have

$$\begin{aligned} J(Y_{[n:n]})&=J(Y)-\frac{(\alpha a_n)^{2}}{2}\mathbb {E}\left[ \frac{(1-2U)^{2}}{q(U)}\right] +\alpha a_n \mathbb {E}\left[ \frac{1-2U}{q(U)}\right] ,\\ J(Y_{[1:n]})&=J(Y)-\frac{(\alpha a_n)^{2}}{2}\mathbb {E}\left[ \frac{(1-2U)^{2}}{q(U)}\right] -\alpha a_n\mathbb {E}\left[ \frac{1-2U}{q(U)}\right] . \end{aligned}$$

Therefore,

$$\begin{aligned} J(Y_{[n:n]})-J(Y_{[1:n]}) =2\alpha a_n \mathbb {E}\left[ \frac{1-2U}{q(U)}\right] =-4\alpha a_n \left( J(Y)+ \mathbb {E}\left[ \frac{U}{q(U)}\right] \right) . \end{aligned}$$

Moreover, if \(\lambda \ge 1\) is an integer number and we change r to \(r\lambda \) and n to \((n+1)\lambda -1\), then, from (11), we have

$$\begin{aligned} J(Y_{[r:n]})=J(Y_{[r\lambda :(n+1)\lambda -1]}). \end{aligned}$$
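This identity holds simply because \(\delta_r\) is invariant under the substitution \((r,n)\mapsto (r\lambda ,(n+1)\lambda -1)\); the following sketch (an illustration, not from the paper) confirms it numerically.

```python
# Illustrative sketch (not from the paper): the identity above holds because
# delta_r = (n - 2r + 1) / (n + 1) is invariant under
# (r, n) -> (r * lam, (n + 1) * lam - 1).

def delta(r, n):
    return (n - 2 * r + 1) / (n + 1)

for r, n in [(1, 5), (3, 7), (4, 10)]:
    for lam in (1, 2, 3, 5):
        assert abs(delta(r, n) - delta(r * lam, (n + 1) * lam - 1)) < 1e-12
```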

In the following, we present some examples and properties of \(J(Y_{[r:n]})\).

Example 1

Suppose that we have the Morgenstern type bivariate extended Weibull distribution (MTEWD) with cdf

$$\begin{aligned} F(x,y)=[1-e^{-\theta _1 H(x;{\varvec{\xi }}_1)}][1-e^{-\theta _2 H(y;{\varvec{\xi }}_2)}]&[1+\alpha e^{-\theta _1 H(x;{\varvec{\xi }}_1)-\theta _2 H(y;{\varvec{\xi }}_2)}], \nonumber \\&\qquad \qquad x>0, \, \, \, y>0, \end{aligned}$$
(12)

where \(\theta _i >0\), \(\varvec{\xi }_i\) is a vector of parameters, and \(H\left( x;\varvec{\xi }\right) \) is a non-negative, continuous, monotone increasing, and differentiable function of x which depends on the parameter vector \(\varvec{\xi }\). Also, \(H\left( x;\varvec{\xi }\right) \rightarrow 0^+\) as \(x\rightarrow 0^+\) and \(H\left( x;\varvec{\xi }\right) \rightarrow \infty \) as \(x\rightarrow \infty \). From (11), we can find

$$\begin{aligned} J(Y_{[r:n]})&=-\frac{1}{2}\int _0^{\infty }g^{2}_{[r:n]}(y)dy \nonumber \\&=-\frac{1}{2}\int _0^{\infty }\left[ \theta _2 h(y;{\varvec{\xi }}_{2})e^{-(\theta _2 H(y;{\varvec{\xi }}_{2}))} \left( 1+\alpha \delta _r(2e^{-\theta _2 H(y;{\varvec{\xi }}_{2})}-1)\right) \right] ^2 dy\nonumber \\&=-\frac{1}{2}\int _0^{\infty }\left[ f_Y(y)(1-\alpha \delta _r) +\alpha \delta _r f_W(y)\right] ^2 dy\nonumber \\&=J\left( Y\right) {\left[ 1- \alpha \delta _r \right] }^2+{\left( \alpha \delta _r\right) }^2J\left( W\right) +\frac{16}{9}[\alpha \delta _r-(\alpha \delta _r)^2]J(V), \end{aligned}$$
(13)

where W and V follow \(EW(2\theta _2,{\varvec{\xi }}_2)\) and \(EW(\frac{3\theta _2}{2},{\varvec{\xi }}_2)\), respectively, and \(EW(\theta ,{\varvec{\xi }})\) denotes the extended Weibull distribution with the following cdf:

$$\begin{aligned} F(x)=1-e^{-\theta H(x;{\varvec{\xi }})},\,\,\, x>0. \end{aligned}$$

In the following, we consider some special cases of MTEWD.

Example 2

For the Morgenstern type bivariate exponential distribution (MTED) with cdf

$$\begin{aligned} F(x,y)=\left( 1-e^{-\frac{x}{\theta _{1}}}\right) \left( 1-e^{-\frac{y}{\theta _{2}}}\right) \left[ 1+\alpha e^{-\frac{x}{\theta _{1}}-\frac{y}{\theta _{2}}}\right] ,\,\,\,x>0,\,\,\,y>0, \end{aligned}$$
(14)

using (13), we have

$$\begin{aligned} J(Y_{[r:n]})=-\frac{1}{12\theta _{2}}\left[ 3+(\alpha \delta _r)^{2}+2\alpha \delta _r\right] . \end{aligned}$$

Also, we get

$$\begin{aligned} A_{\alpha }(n)= J(Y_{[n:n]})- J(Y_{[1:n]})=\frac{\alpha }{3\theta _{2}} a_n, \end{aligned}$$

which is positive, negative or zero whenever \(0 <\alpha \le 1\), \(-1\le \alpha < 0\), or \(\alpha = 0\), respectively. Also, the difference between \(J(Y_{[r:n]})\) and J(Y) is

$$\begin{aligned} B_{\alpha ,n}(r)=J(Y_{[r:n]})-J(Y)=-\frac{\alpha \delta _r}{12\theta _{2}}\left( \alpha \delta _r+2\right) . \end{aligned}$$

\(B_{\alpha ,n}(r)\) is positive for \(-1\le \alpha <0\) , \(1\le r<\frac{n+1}{2}\) (or \(0<\alpha \le 1\), \(\frac{n+1}{2}< r\le n)\). Also, it is negative for \(-1\le \alpha <0\), \(\frac{n+1}{2}< r\le n\) (or \(0<\alpha \le 1\), \(1\le r<\frac{n+1}{2}\)).
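The difference \(A_{\alpha }(n)=\alpha a_n/(3\theta _{2})\) can be checked numerically by integrating the concomitant densities directly; the Python sketch below (an illustration with arbitrary parameter choices, not from the paper) does so.

```python
# Illustrative sketch (not from the paper): check
# A_alpha(n) = J(Y_[n:n]) - J(Y_[1:n]) = alpha * a_n / (3 * theta2)
# for the MTED (14) by direct integration of the concomitant densities.
import math

theta2, alpha, n = 1.5, 0.7, 8
a_n = (n - 1) / (n + 1)
fY = lambda y: math.exp(-y / theta2) / theta2
FY = lambda y: 1.0 - math.exp(-y / theta2)

def J_concomitant(delta_r, npts=100_000):
    hi = 60.0 * theta2
    h = hi / npts
    g = lambda y: fY(y) * (1.0 + alpha * delta_r * (1.0 - 2.0 * FY(y)))
    s = 0.5 * (g(0.0) ** 2 + g(hi) ** 2) + sum(g(i * h) ** 2 for i in range(1, npts))
    return -0.5 * s * h

# delta_1 = a_n and delta_n = -a_n for the two extreme order statistics.
A = J_concomitant(-a_n) - J_concomitant(a_n)
assert abs(A - alpha * a_n / (3.0 * theta2)) < 1e-4
```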

Example 3

For the Morgenstern type bivariate logistic distribution with the cdf

$$\begin{aligned} F(x,y)=\left( 1+\exp (-x)\right) ^{-1}\left( 1+\exp (-y)\right) ^{-1}\left( 1+\frac{\alpha e^{-x-y}}{(1+e^{-x})(1+e^{-y})}\right) ,\qquad \qquad \\ -\infty<x<+\infty ,\;\;-\infty<y<+\infty , \end{aligned}$$

the marginal of Y is standard logistic, whose density is symmetric about zero; hence \(f_Y(F_Y^{-1}(u))=u(1-u)\) and \(\mathbb {E}\left[ \frac{1-2U}{q(U)}\right] =\int _0^1 u(1-u)(1-2u)du=0\). Using (11), we therefore have

$$\begin{aligned} J(Y_{[r:n]})=\frac{-1}{12}-\frac{(\alpha \delta _r)^{2}}{60}. \end{aligned}$$

Consequently,

$$\begin{aligned} D_{\alpha }(n)= J(Y_{[n:n]})- J(Y_{[1:n]})=0, \end{aligned}$$

so that, in contrast to the MTED, the concomitants of the two extremes have the same extropy for every \(\alpha \).

Example 4

For the Morgenstern type bivariate Rayleigh distribution with the cdf

$$\begin{aligned} F(x,y)=\left( 1-\exp (-\frac{x^{2}}{2\sigma _{1}^{2}})\right) \left( 1-\exp (-\frac{y^{2}}{2\sigma _{2}^{2}})\right) \left( 1+\alpha \exp \left( -\frac{x^{2}}{2\sigma _{1}^{2}}-\frac{y^{2}}{2\sigma _{2}^{2}}\right) \right) , \end{aligned}$$

we have

$$\begin{aligned} J(Y_{[r:n]})=-\frac{\sqrt{\pi }}{8\sigma _2}-\frac{0.053(\alpha \delta _r)^{2}}{\sigma _2}-\frac{0.039(\alpha \delta _r)}{\sigma _2}. \end{aligned}$$

Also,

$$\begin{aligned} W_{\alpha }(n)=J(Y_{[n:n]})-J(Y_{[1:n]})=\frac{0.079\alpha }{\sigma _2} a_n, \end{aligned}$$

which is positive, negative or zero whenever \(0 <\alpha \le 1\), \(-1\le \alpha < 0\) or \(\alpha = 0\), respectively.

Example 5

For the Morgenstern type bivariate exponentiated exponential distribution with the cdf

$$\begin{aligned} F_{X,Y}(x,y)=[(1-e^{-\theta _{1}x})(1-e^{-\theta _{2}y})]^{\lambda }[1+\alpha (1-(1-e^{-\theta _{1}x})^{\lambda })(1-(1-e^{-\theta _{2}y})^{\lambda })], \end{aligned}$$

if \(\lambda \ge 1\), we get

$$\begin{aligned} J(Y_{[r:n]})= & {} -\frac{\lambda \theta _2}{4(2\lambda -1)}-\frac{(\alpha \delta _r)^{2}}{2} \lambda \theta _{2}\left[ \frac{8\lambda ^2-3 \lambda +1}{6(2\lambda -1)(3\lambda -1)(4\lambda -1)}\right] \\&-\alpha \delta _r \lambda \theta _{2}\left[ \frac{\lambda +1}{6(2\lambda -1)(3\lambda -1)}\right] . \end{aligned}$$

Finally, we have

$$\begin{aligned} Q_{\alpha ,\lambda }(n)= J(Y_{[n:n]})- J(Y_{[1:n]})=\alpha a_n \lambda \theta _2\left[ \frac{\lambda +1}{3(2\lambda -1)(3\lambda -1)}\right] , \end{aligned}$$

which is positive, negative or zero whenever \(0 <\alpha \le 1\), \(-1\le \alpha < 0\) or \(\alpha = 0\), respectively.

Hereafter, we consider the concomitants of order statistics when \((X_1,Y_1),\) \((X_2,Y_2),\dots ,(X_n,Y_n)\) are independent but not necessarily identically distributed. Let us consider the Morgenstern family with cdf

$$\begin{aligned} {} F_{{X,Y}}^{i}(x,y)=F_{X}^{i}(x)F_{Y}^{i}(y)\left[ 1+\alpha _i(1-F_{X}^{i}(x))(1-F_{Y}^{i}(y))\right] . \end{aligned}$$
(15)

Now, suppose that \(F_{X}^{i}(x)=F_{X}(x)\), \(F^{i}_{Y}(y)=F_{Y}(y)\), and \(\vert \alpha _i\vert \le 1\). Then, the pdfs of \(Y_{[1:n]}\) and \(Y_{[n:n]}\) are given by [21] as follows:

$$\begin{aligned} g_{Y_{[1:n]}}(y)= & {} f_Y(y)\left( 1+b_n\sum _{j=1}^n\alpha _j (1-2F_Y(y))\right) ,\\ g_{Y_{[n:n]}}(y)= & {} f_Y(y)\left( 1-b_n\sum _{j=1}^n\alpha _j (1-2F_Y(y))\right) , \end{aligned}$$

where \(b_n=\frac{n-1}{(n+1)n}\). Furthermore, the extropy measures for concomitants of extremes of order statistics are presented as

$$\begin{aligned} J(Y_{[1:n]})= & {} J(Y)-\frac{[b_n\sum _{j=1}^n\alpha _j]^{2}}{2}\mathbb {E}\left[ \frac{(1-2U)^{2}}{q(U)}\right] -b_n\sum _{j=1}^n\alpha _j\mathbb {E}\left[ \frac{1-2U}{q(U)}\right] ,\\ J(Y_{[n:n]})= & {} J(Y)-\frac{[b_n\sum _{j=1}^n\alpha _j]^{2}}{2}\mathbb {E}\left[ \frac{(1-2U)^{2}}{q(U)}\right] +b_n\sum _{j=1}^n\alpha _j\mathbb {E}\left[ \frac{1-2U}{q(U)}\right] . \end{aligned}$$

Hence, we have

$$\begin{aligned} A_n=J(Y_{[n:n]})-J(Y_{[1:n]})=2b_n\sum _{j=1}^n\alpha _j\mathbb {E}\left[ \frac{1-2U}{q(U)}\right] . \end{aligned}$$

Finally, we get

$$\begin{aligned} J(Y)=\frac{1}{2}\left\{ J(Y_{[n:n]})+J(Y_{[1:n]})+\left[ b_n\sum _{j=1}^n\alpha _j\right] ^{2}\mathbb {E}\left[ \frac{(1-2U)^{2}}{q(U)}\right] \right\} . \end{aligned}$$

Case 2: Let \((X_{1},Y_{1}),(X_{2},Y_{2}),\dots \) be a sequence of bivariate random variables from a continuous distribution. If \(\{R_r, \ r\ge 1\}\) is the sequence of upper record values in the sequence of X's, then the Y-variate paired with the rth record is called the concomitant of the rth record, denoted by \(R_{[r]}\). Concomitants of record values arise in practical settings such as life-testing experiments, sporting matches, and other experimental fields. Chandler [22] initiated the statistical study of record values, record times, and inter-record times. Applications of record values and their concomitants are discussed in [23] and [24].

Record values form a special case of the m-GOSs, obtained by setting \(m = -1\) and \(k = 1\). Therefore, the pdf and cdf of \(R_{[r]}\) are given by

$$\begin{aligned} g_{R_{[r]}}(y)= & {} \,f_Y(y)[1+\alpha c_{r}(1-2F_Y(y))],\\ G_{R_{[r]}}(y)= & {} \,F_Y(y)[1+\alpha c_{r}(1-F_Y(y))], \end{aligned}$$

where \(c_{r}=2^{1-r}-1\) (see [23]). Therefore, the extropy measure for \(R_{[r]}\) is obtained as follows:

$$\begin{aligned} J(R_{[r]})=J(Y)- \frac{\alpha ^2 c_{r}^2}{2}\mathbb {E}\left[ \frac{(1-2U)^{2}}{q(U)}\right] -\alpha c_{r}\mathbb {E}\left[ \frac{1-2U}{q(U)}\right] . \end{aligned}$$
(16)

Example 6

For the MTEWD, we can find

$$\begin{aligned} J({R_{[r]}})=J\left( Y\right) {\left[ 1-\alpha c_r\right] }^2+\alpha ^2 c_{r}^2 J\left( W\right) -\frac{16}{9}\left[ \alpha ^2 c_{r}^2-\alpha c_r \right] J(V). \end{aligned}$$

Example 7

For the MTED, we can find

$$\begin{aligned} J({R_{[r]}})=-\frac{1}{12\theta _{2}} \left[ 3+\alpha ^2 c_r^{2}+2\alpha c_r\right] . \end{aligned}$$

Also, we get

$$\begin{aligned} A_{\alpha }(r) = J({R_{[r]}}) - J({R_{[r-1]}}) = \frac{\alpha \, 2^{-r}}{\theta _2}\left[ \alpha \left( 2^{-r} - \frac{1}{3} \right) + \frac{1}{3} \right] , \end{aligned}$$

which is positive, negative or zero whenever \(0<\alpha \le 1\), \(-1\le \alpha <0\) or \(\alpha = 0\), respectively.
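The closed form for \(A_{\alpha }(r)\) can be checked numerically by integrating the densities of \(R_{[r]}\) and \(R_{[r-1]}\) directly; the sketch below (an illustration with arbitrary parameter choices, not from the paper) does so.

```python
# Illustrative sketch (not from the paper): check the closed form for
# A_alpha(r) under the MTED by directly integrating the densities of R_[r]
# and R_[r-1].
import math

theta2, alpha, r = 2.0, -0.6, 3
fY = lambda y: math.exp(-y / theta2) / theta2
FY = lambda y: 1.0 - math.exp(-y / theta2)

def J_record(rr, npts=100_000):
    c = 2.0 ** (1 - rr) - 1.0  # c_r = 2**(1 - r) - 1
    hi = 60.0 * theta2
    h = hi / npts
    g = lambda y: fY(y) * (1.0 + alpha * c * (1.0 - 2.0 * FY(y)))
    s = 0.5 * (g(0.0) ** 2 + g(hi) ** 2) + sum(g(i * h) ** 2 for i in range(1, npts))
    return -0.5 * s * h

A_num = J_record(r) - J_record(r - 1)
A_closed = (alpha * 2.0 ** (-r) / theta2) * (alpha * (2.0 ** (-r) - 1.0 / 3.0) + 1.0 / 3.0)
assert abs(A_num - A_closed) < 1e-4
```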

2.2 CREX for \(Y_{[r,n,m,k]}\)

For the concomitant \(Y_{[r,n,m,k]}\) of the rth m-GOS, the CREX measure is given by

$$\begin{aligned} {\mathcal {J}}^{\star }(Y_{[r,n,m,k]})=&-\frac{1}{2}\int _0^{\infty }\bar{G}^2_{[r,n,m,k]} (y) dy \nonumber \\ =&{\mathcal {J}}^{\star }(Y)-\frac{[\alpha C^*(r,n,m,k)]^{2}}{2}\mathbb {E}[q(U)U^2(1-U)^2] \nonumber \\&+\alpha C^*(r,n,m,k)\mathbb {E}[q(U)U(1-U)^2], \end{aligned}$$
(17)

where \({\mathcal {J}}^{\star }(Y)\) is the CREX of the random variable Y.

Case 1: If we put \(m = 0\) and \(k = 1\), then the CREX measure for \(Y_{[r:n]}\) is presented as

$$\begin{aligned} {\mathcal {J}}^{\star }(Y_{[r:n]}) ={\mathcal {J}}^{\star }(Y)-\frac{(\alpha \delta _r)^{2}}{2}\mathbb {E}[q(U)(U(1-U))^2]+ \alpha \delta _r\mathbb {E}[q(U)U(1-U)^2]. \end{aligned}$$

Example 8

For the Morgenstern type bivariate uniform distribution (MTUD) with cdf

$$\begin{aligned} F(x,y)=\frac{xy}{\theta _{1}\theta _{2}}\left[ 1+\alpha (1-\frac{x}{\theta _{1}})(1-\frac{y}{\theta _{2}})\right] ,\;\;0<x<\theta _{1},\;\;0<y<\theta _{2}, \end{aligned}$$

we have

$$\begin{aligned} {\mathcal {J}}^{\star }({Y_{[r:n]}})=-\frac{\theta _{2}}{6}-\frac{\theta _{2}(\alpha \delta _r)^{2}}{60}+\frac{\theta _{2}\alpha \delta _r}{12}. \end{aligned}$$

Therefore,

$$\begin{aligned} D_{\alpha ,\theta _{2}}(n)= {\mathcal {J}}^{\star }({Y_{[n:n]}})- {\mathcal {J}}^{\star }({Y_{[1:n]}})=\frac{-\alpha a_n\theta _{2}}{6}, \end{aligned}$$

which is positive, negative or zero whenever \(-1\le \alpha <0\), \(0<\alpha \le 1\) or \(\alpha = 0\), respectively.
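The MTUD closed form can be verified by integrating the squared survival function of \(G_{[r:n]}\) on \((0,\theta _2)\); the sketch below (an illustration with arbitrary parameter choices, not from the paper) does so.

```python
# Illustrative sketch (not from the paper): check the MTUD result
# J*(Y_[r:n]) = -theta2/6 - (alpha*delta_r)^2*theta2/60 + alpha*delta_r*theta2/12
# by integrating the squared survival function of G_[r:n] on (0, theta2).
theta2, alpha, r, n = 2.0, 0.9, 2, 6
delta_r = (n - 2 * r + 1) / (n + 1)
b = alpha * delta_r

FY = lambda y: y / theta2                          # uniform marginal on (0, theta2)
G = lambda y: FY(y) * (1.0 + b * (1.0 - FY(y)))    # cdf of the concomitant

npts = 100_000
h = theta2 / npts
s = 0.5 * ((1.0 - G(0.0)) ** 2 + (1.0 - G(theta2)) ** 2)
s += sum((1.0 - G(i * h)) ** 2 for i in range(1, npts))
J_star_num = -0.5 * s * h

J_star_closed = -theta2 / 6.0 - b ** 2 * theta2 / 60.0 + b * theta2 / 12.0
assert abs(J_star_num - J_star_closed) < 1e-6
```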

Example 9

For the MTED, we have

$$\begin{aligned} {\mathcal {J}}^{\star }({Y_{[r:n]}})= \frac{\theta _2}{24} \left[ -6-(\alpha \delta _r)^{2}+4\alpha \delta _r\right] . \end{aligned}$$
(18)

Therefore,

$$\begin{aligned} Q_{\alpha ,\theta _{2}}(n)={\mathcal {J}}^{\star }({Y_{[n:n]}})- {\mathcal {J}}^{\star }({Y_{[1:n]}})=\frac{-\alpha \theta _2 a_n}{3}, \end{aligned}$$

which is positive, negative or zero whenever \(-1\le \alpha <0\), \(0<\alpha \le 1\) or \(\alpha = 0\), respectively.

Example 10

For the Morgenstern type bivariate Weibull distribution with the cdf

$$\begin{aligned} F(x,y)=\left( 1-e^{ -\theta _1 x^{\beta _1}}\right) \left( 1-e^{ -\theta _2 y^{\beta _2}} \right)&\left[ 1+\alpha e^{-\theta _1 x^{\beta _1}-\theta _2 y^{\beta _2}}\right] , \\&x>0,\ y>0, \beta _i>0, \ \theta _i>0,\ i=1,2, \end{aligned}$$

we have

$$\begin{aligned} {\mathcal {J}}^{\star }({Y_{[r:n]}})= & {} (\theta _{2})^{\frac{-1}{\beta _2}}(\beta _{2})^{-1}\Gamma \left( \frac{1}{\beta _2}\right) \\&\times \left[ -2^{\frac{-1}{\beta _2}-1}-(\alpha \delta _r)^{2}(2^{\frac{-1}{\beta _2}-1}+ 2^{\frac{-2}{\beta _2}-1}-3^{\frac{-1}{\beta _2}})-\alpha \delta _r(3^{\frac{-1}{\beta _2}}-2^{\frac{-1}{\beta _2}}) \right] . \end{aligned}$$

Therefore,

$$\begin{aligned} D_{\alpha ,\theta _{2}}(n)= {\mathcal {J}}^{\star }({Y_{[n:n]}})- {\mathcal {J}}^{\star }({Y_{[1:n]}})= -2\alpha a_n(\beta _{2})^{-1}\Gamma (\frac{1}{\beta _2})(\theta _{2})^{\frac{-1}{\beta _2}}(-3^{\frac{-1}{\beta _2}}+2^{\frac{-1}{\beta _2}}), \end{aligned}$$

which is positive, negative or zero whenever \(-1\le \alpha <0\), \(0<\alpha \le 1\) or \(\alpha = 0\), respectively.

Case 2: If we put \(m = -1\) and \(k = 1\), then the CREX measure for \(R_{[r]}\) is presented as

$$\begin{aligned} {\mathcal {J}}^{\star }(R_{[r]})={\mathcal {J}}^{\star }(Y)-\frac{\alpha ^2 c_{r}^{2}}{2}\mathbb {E}[q(U)(U(1-U))^2] +\alpha c_{r}\mathbb {E}[q(U)U(1-U)^2]. \end{aligned}$$

2.3 NCEX Measure for \(Y_{[r,n,m,k]}\)

For the concomitant of the rth m-GOS \(Y_{[r,n,m,k]}\), the NCEX measure is given by

$$\begin{aligned} {\mathcal {CJ}}(Y_{[r,n,m,k]})= & {}\, {\mathcal {CJ}}(Y)-\frac{[\alpha C^*(r,n,m,k)]^{2}}{2}\mathbb {E}\left[ q(U) \left( U(1-U)\right) ^{2}\right] \nonumber \\&-\alpha C^*(r,n,m,k)\mathbb {E}\left[ q(U) U^{2}(1-U)\right] , \end{aligned}$$
(19)

where \({\mathcal {CJ}}(Y)\) is the NCEX of the random variable Y. It can be observed that the results are similar in form to those for the CREX.

Case 1: If \(m = 0\) and \(k = 1\), then the NCEX measure for \(Y_{[r:n]}\) in Morgenstern family is obtained as

$$\begin{aligned} {\mathcal {CJ}}(Y_{[r:n]}) ={\mathcal {CJ}}(Y)-\frac{(\alpha \delta _r)^{2}}{2}\mathbb {E}\left[ q(U) \left( U(1-U)\right) ^{2}\right] -\alpha \delta _r\mathbb {E}\left[ U^{2}(1-U)q(U)\right] . \end{aligned}$$

Therefore, we have

$$\begin{aligned} {\mathcal {CJ}}(Y_{[n:n]})&={\mathcal {CJ}}(Y)-\frac{(\alpha a_n)^{2}}{2}\mathbb {E}\left[ q(U) \left( U(1-U)\right) ^{2}\right] +\alpha a_n\mathbb {E}\left[ U^{2}(1-U)q(U)\right] ,\\ {\mathcal {CJ}}(Y_{[1:n]})&={\mathcal {CJ}}(Y)-\frac{(\alpha a_n)^{2}}{2}\mathbb {E}\left[ q(U)\left( U(1-U)\right) ^{2}\right] -\alpha a_n\mathbb {E}\left[ U^{2}(1-U)q(U)\right] . \end{aligned}$$

Hence,

$$\begin{aligned} {\mathcal {CJ}}(Y_{[n:n]})-{\mathcal {CJ}}(Y_{[1:n]})=2\alpha a_n\mathbb {E}\left[ U^{2}(1-U)q(U)\right] . \end{aligned}$$

Case 2: If we put \(m = -1\) and \(k = 1\), then the NCEX measure for \(R_{[r]}\) is presented as

$$\begin{aligned} {\mathcal {CJ}} (R_{[r]})={\mathcal {CJ}}(Y)-\frac{\alpha ^2 c_{r}^{2}}{2}\mathbb {E}[q(U)(U(1-U))^2] -\alpha c_{r}\mathbb {E}[q(U)U^{2}(1-U)]. \end{aligned}$$

3 Empirical Measures for \(Y_{[r,n,m,k]}\)

In this section, we estimate the NCREX and NCEX for concomitants by means of the empirical estimators.

3.1 Empirical NCREX

Henceforward, we consider the problem of estimating the NCREX for concomitants using the empirical NCREX. Let \((X_{i},Y_{i})\), \(i=1,2,\dots \), be a sequence from the Morgenstern family. According to (17) and the relation \({\mathcal {J}}=-{\mathcal {J}}^{\star }\), the empirical NCREX of \(Y_{[r,n,m,k]}\) can be obtained as follows:

$$\begin{aligned} \hat{{\mathcal {J}}}(Y_{[r,n,m,k]})= & {} \frac{1}{2}\int _0^{\infty }[1-\hat{G}_{[r,n,m,k]}(y)]^{2} dy \nonumber \\= & {} \frac{1}{2}\sum _{j=1}^{n-1}\int _{Z_{(j)}}^{Z_{(j+1)}}[1-\hat{F}(y)]^2dy\nonumber \\&+\frac{[\alpha C^*(r,n,m,k)]^{2}}{2}\sum _{j=1}^{n-1}\int _{Z_{(j)}}^{Z_{(j+1)}}[\hat{F}(y)(1-\hat{F}(y))]^2dy\nonumber \\&-\alpha C^*(r,n,m,k)\sum _{j=1}^{n-1}\int _{Z_{(j)}}^{Z_{(j+1)}}\hat{F}(y)[1-\hat{F}(y)]^2 dy\nonumber \\= & {} \frac{1}{2}\sum _{j=1}^{n-1}U_{j}(1-\frac{j}{n})^2+\frac{[\alpha C^*(r,n,m,k)]^{2}}{2}\sum _{j=1}^{n-1}U_{j}\left( \frac{j}{n}\left( 1-\frac{j}{n}\right) \right) ^2 \nonumber \\&-\alpha C^*(r,n,m,k)\sum _{j=1}^{n-1}U_{j}\frac{j}{n}\left( 1-\frac{j}{n} \right) ^2 \nonumber \\= & {} \frac{1}{2}\sum _{j=1}^{n-1}U_{j}(1-\frac{j}{n})^2\left[ 1-\alpha C^*(r,n,m,k)\frac{j}{n}\right] ^2, \end{aligned}$$
(20)

where \(U_{j}=Z_{(j+1)}-Z_{(j)}\), \(j=1,2,\dots ,n-1\), are the sample spacings based on the order statistics \(Z_{(1)}\le \dots \le Z_{(n)}\) of the Y-sample, and \(\hat{F}\) is the empirical cdf, so that \(\hat{F}(y)=j/n\) for \(Z_{(j)}\le y < Z_{(j+1)}\).
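The estimator (20) is straightforward to compute from data. The sketch below (an illustration, not from the paper; the sample size, \(\alpha \), marginal, and coefficient value are arbitrary choices) evaluates it on a simulated sample.

```python
# Illustrative sketch (not from the paper): compute the empirical NCREX (20)
# from a simulated sample; the sample size, alpha, marginal, and coefficient
# value are illustrative choices.
import math
import random

random.seed(1)
alpha, n = 0.5, 200
# Illustrative C*(r, n, m, k); for order statistics (m = 0, k = 1) with r = 2
# it equals delta_2 = (n - 3) / (n + 1).
Cstar = (n - 3) / (n + 1)

# Ordered Y-sample from a standard exponential marginal.
z = sorted(-math.log(random.random()) for _ in range(n))

# hat J = (1/2) * sum_j U_j (1 - j/n)^2 [1 - alpha C* j/n]^2, U_j = Z_(j+1) - Z_(j).
J_hat = 0.5 * sum(
    (z[j] - z[j - 1]) * (1.0 - j / n) ** 2 * (1.0 - alpha * Cstar * j / n) ** 2
    for j in range(1, n)
)

# The population NCREX of a standard exponential is 1/4, so the estimate
# should be a positive number of comparable magnitude.
assert 0.0 < J_hat < 1.0
```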

Case 1: If \(m = 0\) and \(k = 1\), then the empirical NCREX of \({Y_{[r:n]}}\) is given by

$$\begin{aligned} \hat{{\mathcal {J}}}({Y_{[r:n]}})=\frac{1}{2}\sum _{j=1}^{n-1}U_{j}(1-\frac{j}{n})^2\left[ 1- \alpha \delta _r\frac{j}{n}\right] ^2. \end{aligned}$$
(21)

Case 2: If \(m = -1\) and \(k = 1\), then the empirical NCREX of \(R_{[r]}\) can be written as

$$\begin{aligned} {} \hat{{\mathcal {J}}}({R_{[r]}}) =\frac{1}{2}\sum _{j=1}^{n-1}U_{j}(1-\frac{j}{n})^2\left[ 1-\alpha c_r\frac{j}{n} \right] ^2. \end{aligned}$$
(22)

Let us consider the following examples.

Example 11

Let \((X_{i},Y_{i})\), \(i=1,2,\dots ,n\), be a random sample from the MTED; then the sample spacings \(U_{j}\) are independent and exponentially distributed with mean \(\frac{1}{\theta _2 (n-j)}\) (for more details, see [25]). Upon recalling (21), we obtain

$$\begin{aligned} E[ \hat{{\mathcal {J}}}({Y_{[r:n]}})]= & {} \frac{1}{2\theta _2 }\sum _{j=1}^{n-1}\frac{n-j}{n^2}\left[ 1-\alpha \delta _r\frac{j}{n}\right] ^2,\\ Var[\hat{{\mathcal {J}}}({Y_{[r:n]}})]= & {} \frac{1}{4n^{4}\theta _2 ^2}\sum _{j=1}^{n-1}(n-j)^{2}\left[ 1-\alpha \delta _r\frac{j}{n}\right] ^4, \end{aligned}$$

respectively. We computed the values of \(E[\hat{{\mathcal {J}}}({Y_{[2:n]}})]\) and \(Var[\hat{{\mathcal {J}}}({Y_{[2:n]}})]\) for sample sizes \(n=5,10, 15,20\), \(\theta _2 =0.5,1,2\), \(\alpha =-1,-0.5,0.5,1\) in Table 1. We can easily see that \(E[\hat{{\mathcal {J}}}({Y_{[2:n]}})]\) and \(Var[\hat{{\mathcal {J}}}({Y_{[2:n]}})]\) are decreasing in \(\alpha \). Also, \(\lim _{n\rightarrow \infty }Var[\hat{{\mathcal {J}}}({Y_{[2:n]}})]=0\).

Table 1 Numerical values of \(\mathbb {E}[ \hat{{\mathcal {J}}}({Y_{[2:n]}})]\) and \(Var[ \hat{{\mathcal {J}}}({Y_{[2:n]}})]\) for MTED

Example 12

For the MTUD with \(\theta _1=\theta _2=1\), the sample spacings \(U_{j}\) each follow a beta distribution with parameters 1 and n (for more details, see [25]). By making use of (22), we obtain

$$\begin{aligned} E[\hat{{\mathcal {J}}}({R_{[r]}})]= & {} \frac{1}{2(n+1)}\sum _{j=1}^{n-1}(1-\frac{j}{n})^2\left[ 1-\alpha c_r \frac{j}{n}\right] ^2,\end{aligned}$$
(23)
$$\begin{aligned} Var[\hat{{\mathcal {J}}}({R_{[r]}})]= & {} \frac{n}{4(n+1)^2(n+2)}\sum _{j=1}^{n-1}(1-\frac{j}{n})^4\left[ 1-\alpha c_r \frac{j}{n}\right] ^4. \end{aligned}$$
(24)

We computed the values of \(E[\hat{{\mathcal {J}}}({R_{[2]}})]\) and \(Var[\hat{{\mathcal {J}}}({R_{[2]}})]\) for sample sizes \(n=5,10, 15,20\), \(\alpha =-1,-0.5,0.5,1\) in Table 2. We can easily see that \(E[\hat{{\mathcal {J}}}({R_{[2]}})]\) and \(Var[\hat{{\mathcal {J}}}({R_{[2]}})]\) are decreasing in \(\alpha \). Also, \(\lim _{n\rightarrow \infty }Var[\hat{{\mathcal {J}}}({R_{[2]}})]=0\).

Table 2 Numerical values of \(\mathbb {E}[\hat{{\mathcal {J}}}({R_{[2]}})]\) and \(Var[\hat{{\mathcal {J}}}({R_{[2]}})]\) for MTUD with \(\theta _1=\theta _2=1\)

Theorem 1

Let \((X_{i},Y_{i})\), \(i=1,2,\dots ,n\), be a random sample of size n from the Morgenstern family. Then

$$\begin{aligned} \hat{{\mathcal {J}}}({R_{[r]}})\,\longrightarrow {\mathcal {J}}({R_{[r]}})\; \; \; as \; \; n\rightarrow \infty . \end{aligned}$$

We can now present a central limit theorem for \(\hat{{\mathcal {J}}}({R_{[r]}})\) based on a random sample from the MTED.

Theorem 2

Let \((X_{i},Y_{i})\), \(i=1,2,\dots ,n\) be a random sample from MTED. Then

$$\begin{aligned} Z_n= \frac{\hat{{\mathcal {J}}}({R_{[r]}})-E\left[ \hat{{\mathcal {J}}}({R_{[r]}})\right] }{\sqrt{Var\left[ \hat{{\mathcal {J}}}({R_{[r]}})\right] }}, \end{aligned}$$

converges in distribution to the standard normal distribution.

Proof

The empirical NCREX measure \(\hat{{\mathcal {J}}}({R_{[r]}})\) can be expressed as a sum of independent random variables,

$$\begin{aligned} \hat{{\mathcal {J}}}({R_{[r]}})=\sum _{j=1}^{n-1} W_j, \end{aligned}$$

where \(W_j=\frac{1}{2}U_{j}(1-\frac{j}{n})^2\left[ 1-\alpha c_r \frac{j}{n} \right] ^2\) are independent random variables with mean and variance given by

$$\begin{aligned} E[W_j]= & {} \frac{1}{2n\theta _2 }(1-\frac{j}{n})\left[ 1- \alpha c_r\frac{j}{n}\right] ^2,\\ Var[W_j]= & {} \frac{1}{4n^2\theta _2 ^2}(1-\frac{j}{n})^2\left[ 1-\alpha c_r\frac{j}{n}\right] ^4. \end{aligned}$$

Since \(E[|W_j-E(W_j)|^3]=2e^{-1}(6-e)[E(W_j)]^3\) for any exponentially distributed random variable \(W_j\), setting \(\alpha _{j,k}=E[|W_j-E(W_j)|^k]\), the following approximations hold for large n:

$$\begin{aligned}&\sum _{j=1}^{n} \alpha _{j,2}=\frac{1}{4n^2\theta _2 ^2}\sum _{j=1}^n(1-\frac{j}{n})^2\left[ 1-\alpha c_r\frac{j}{n}\right] ^4\approx \frac{c_2}{4n\theta _2 ^2},\\&\sum _{j=1}^{n} \alpha _{j,3}=\frac{2(6-e)}{en^3\theta _2^3}\sum _{j=1}^{n}\left( 1-\frac{j}{n}\right) ^3 \left[ 1-\alpha c_r\frac{j}{n}\right] ^6\approx \frac{2(6-e)c_3}{en^2\theta _2^3}, \end{aligned}$$

where \(c_k:=\int _0^1 (1-x)^k\left[ 1-\alpha c_rx\right] ^{2k}dx.\) Hence, Lyapunov's condition of the central limit theorem is satisfied (cf. [26]):

$$\begin{aligned} \frac{(\alpha _{1,3}+\cdots +\alpha _{n,3})^{1/3}}{(\alpha _{1,2}+\cdots +\alpha _{n,2})^{1/2}}\approx \frac{[2(6-e)c_3]^{1/3}}{e^{1/3}c_2^{1/2}}n^{-1/6} \rightarrow 0 \;\;\;\;\; as \;\; n\rightarrow \infty , \end{aligned}$$

which completes the proof. \(\square \)

If the random variables \((X_{i},Y_{i})\), \(i=1,2,\dots ,n\), are from the MTED, then, using the results of Theorem 2, an approximate confidence interval for \({{\mathcal {J}}}({R_{[r]}})\) can be constructed as

$$\begin{aligned} \hat{{\mathcal {J}}}({R_{[r]}})\pm z_{{q}/{2}}\sqrt{Var\left[ \hat{{\mathcal {J}}}({R_{[r]}})\right] }, \end{aligned}$$

where \(z_{q}\) denotes the upper qth quantile of the standard normal distribution.
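The construction can be sketched in a few lines. The following Python illustration (not from the paper's numerical study; the sample, r, \(\alpha \), \(\theta _2\), and level q are arbitrary choices) builds the interval from the spacing representation \(W_j\) used in the proof of Theorem 2.

```python
# Illustrative sketch (not from the paper's numerical study): an approximate
# normal confidence interval for the NCREX of R_[r] under the MTED, built from
# the spacing representation W_j used in the proof of Theorem 2.
import math
import random

random.seed(7)
theta2, alpha, r, n, q = 1.0, 0.5, 2, 300, 0.05
c_r = 2.0 ** (1 - r) - 1.0

# Ordered Y-sample with survival function exp(-theta2 * y), matching the rate
# parameterization used for the spacings U_j in the proof above.
z = sorted(-math.log(random.random()) / theta2 for _ in range(n))

J_hat = 0.5 * sum(
    (z[j] - z[j - 1]) * (1.0 - j / n) ** 2 * (1.0 - alpha * c_r * j / n) ** 2
    for j in range(1, n)
)

# Plug-in variance: sum of Var[W_j] over the spacings.
var_hat = sum(
    (1.0 - j / n) ** 2 * (1.0 - alpha * c_r * j / n) ** 4
    for j in range(1, n)
) / (4.0 * n ** 2 * theta2 ** 2)

z_half = 1.959963984540054  # upper q/2 quantile of N(0, 1) for q = 0.05
ci = (J_hat - z_half * math.sqrt(var_hat), J_hat + z_half * math.sqrt(var_hat))
assert var_hat > 0.0 and ci[0] < J_hat < ci[1]
```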

3.2 Empirical NCEX for Concomitants of m-GOSs

In this subsection, we address the problem of estimating the NCEX for concomitants using the empirical NCEX. Let \((X_{i},Y_{i})\), \(i=1,2,\dots ,n\), be a random sample of size n from the Morgenstern family. According to (19), the empirical NCEX of \(Y_{[r,n,m,k]}\) can be obtained as

$$\begin{aligned} \widehat{\mathcal {CJ}}({Y_{[r,n,m,k]}})= & {} \frac{1}{2}\sum _{j=1}^{n-1}U_{j}\left( 1-\left( \frac{j}{n}\right) ^2\right) \nonumber \\&-\frac{[\alpha C^*(r,n,m,k)]^{2}}{2}\sum _{j=1}^{n-1}U_{j}\left( \frac{j}{n}\left( 1-\frac{j}{n}\right) \right) ^2 \nonumber \\&-\alpha C^*(r,n,m,k)\sum _{j=1}^{n-1}U_{j}\left( \frac{j}{n}\right) ^2\left( 1-\frac{j}{n} \right) . \end{aligned}$$
(25)
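The estimator in (25) can be sketched in code as follows. Two assumptions are made explicit here: \(U_j\) is taken to be the sample spacing \(Y_{(j+1)}-Y_{(j)}\) of the ordered Y-sample, as is standard for empirical extropy-type estimators, and \(C^*(r,n,m,k)\) is supplied as a precomputed constant `c_star`.

```python
def empirical_ncex(y, alpha, c_star):
    """Empirical NCEX of (25); c_star plays the role of C*(r, n, m, k)."""
    n = len(y)
    ys = sorted(y)
    a = alpha * c_star
    total = 0.0
    for j in range(1, n):               # j = 1, ..., n-1
        u = ys[j] - ys[j - 1]           # assumed spacing U_j = Y_(j+1) - Y_(j)
        p = j / n
        total += u * (0.5 * (1 - p ** 2)
                      - 0.5 * a ** 2 * (p * (1 - p)) ** 2
                      - a * p ** 2 * (1 - p))
    return total
```

With \(\alpha =0\) the estimator reduces to \(\frac{1}{2}\sum _{j=1}^{n-1}U_{j}(1-(j/n)^2)\), and for general \(\alpha \) the three-term sum agrees with the factored forms in (26) and (27) when \(C^*\) reduces to \(\delta _r\) or \(c_r\).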

Case 1: If \(m = 0\) and \(k = 1\), then the empirical NCEX of \(Y_{[r:n]}\) is given by

$$\begin{aligned} \widehat{\mathcal {CJ}}(Y_{[r:n]})=\frac{1}{2}\sum _{j=1}^{n-1}U_{j}\left[ 1-( \frac{j}{n})^{2} \left[ 1+\alpha \delta _r(1-\frac{j}{n})\right] ^{2}\right] . \end{aligned}$$
(26)

Case 2: If \(m = -1\) and \(k = 1\), then the empirical NCEX of \(R_{[r]}\) can be written as

$$\begin{aligned} \widehat{{\mathcal {CJ}}}({R_{[r]}}) =\frac{1}{2}\sum _{j=1}^{n-1}U_{j}\left[ 1-( \frac{j}{n})^{2} \left[ 1+\alpha c_r(1-\frac{j}{n})\right] ^{2}\right] . \end{aligned}$$
(27)

Example 13

For the MTED, from (27) we obtain

$$\begin{aligned} E[ \widehat{{\mathcal {CJ}}}({R_{[r]}})]= & {} \frac{1}{2\theta _2 }\sum _{j=1}^{n-1}\frac{1}{ (n-j)}\left[ 1-( \frac{j}{n})^{2} \left[ 1+\alpha c_r(1-\frac{j}{n})\right] ^{2}\right] ,\\ Var[\widehat{{\mathcal {CJ}}}({R_{[r]}})]= & {} \frac{1}{4\theta _2 ^2}\sum _{j=1}^{n-1}(\frac{1}{ (n-j)})^2\left[ 1-( \frac{j}{n})^{2} \left[ 1+\alpha c_r(1-\frac{j}{n})\right] ^{2}\right] ^2. \end{aligned}$$

Table 3 reports the values of \(E[\widehat{{\mathcal {CJ}}}({R_{[2]}})]\) and \(Var[\widehat{{\mathcal {CJ}}}({R_{[2]}})]\) for sample sizes \(n=5, 10, 15, 20\), \(\theta _2 =0.5,1,2\), and \(\alpha =-1,-0.5,0.5,1\). It can be seen that \(E[\widehat{{\mathcal {CJ}}}({R_{[2]}})]\) and \(Var[\widehat{{\mathcal {CJ}}}({R_{[2]}})]\) are decreasing in \(\alpha \). Also, \(\lim _{n\rightarrow \infty }Var[\widehat{{\mathcal {CJ}}}({R_{[2]}})]=0\).

Table 3 Numerical values of \(E[\widehat{{\mathcal {CJ}}}({R_{[2]}})]\) and \(Var[\widehat{{\mathcal {CJ}}}({R_{[2]}})]\) for the MTED
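The two closed-form expressions of Example 13 translate directly into code; \(c_r\) is the record-value coefficient from the earlier sections and is treated here as an input. This sketch makes it easy to confirm numerically that the variance shrinks as n grows.

```python
def moments_record_ncex(n, theta2, alpha, c_r):
    """Closed-form mean and variance of the empirical NCEX of R_[r] for the MTED."""
    mean = var = 0.0
    for j in range(1, n):   # j = 1, ..., n-1
        term = 1 - (j / n) ** 2 * (1 + alpha * c_r * (1 - j / n)) ** 2
        mean += term / (n - j)
        var += (term / (n - j)) ** 2
    return mean / (2 * theta2), var / (4 * theta2 ** 2)

# Illustrative values only: the variance for n = 200 is well below that for n = 20.
print(moments_record_ncex(20, 1.0, 0.5, 0.25))
print(moments_record_ncex(200, 1.0, 0.5, 0.25))
```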

4 Real Data

We consider a data set on 137 bone marrow transplant (BMT) patients presented by [27]. The data set includes 22 attributes. The attribute ‘\(T_2\)’ represents the disease-free survival time (time to relapse, death, or end of study), and the attribute ‘\(T_P\)’ represents the time (in days) for platelets to return to normal levels. For \(T_2\) and \(T_P\), the Spearman correlation coefficient is \(-0.2544\) (p-value 0.0027), and the Kendall correlation coefficient is \(-0.1806\) (p-value 0.0020). [28] analyzed these two attributes and fitted several families of bivariate exponential distributions. For the MTED with cdf in (14), the maximum likelihood estimates of the parameters are \(\hat{\alpha }=-0.6703\), \({\hat{\theta }}_1=47.9682\), and \({\hat{\theta }}_2=730.8846\). We calculated the NCREX of \(Y_{[r,n]}\) for the MTED given in (18) and the empirical NCREX of \(Y_{[r,n]}\) given in (21) for all values of r and some values of \(\alpha \). The results are shown in Fig. 1. We observe the following:

  1. The values of the NCREX and empirical NCREX are close when \(-0.5\le \alpha \le 0.5\), especially when \(\alpha =0\), for all values of r.
  2. When \(\alpha >0.5\), the values of the NCREX and empirical NCREX are close for \(r\ge 50\).
  3. When \(\alpha >0.5\), the value of the NCREX is larger than that of the empirical NCREX for \(r<50\).
  4. When \(\alpha <-0.5\), the values of the NCREX and empirical NCREX are close for \(r\le 100\).
  5. When \(\alpha <-0.5\), the value of the NCREX is larger than that of the empirical NCREX for \(r>100\).
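The rank correlations reported for \(T_2\) and \(T_P\) can be reproduced from the raw data with any statistics package; since the BMT data are not reproduced here, the self-contained sketch below implements Kendall's \(\tau \) (the tau-a version, i.e., without tie corrections) and applies it to placeholder vectors with a negative association.

```python
def kendall_tau(x, y):
    """Kendall's tau-a: (concordant - discordant) / total number of pairs."""
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            d = (x[i] - x[j]) * (y[i] - y[j])
            s += (d > 0) - (d < 0)   # +1 concordant, -1 discordant, 0 tied
    return 2 * s / (n * (n - 1))

t2 = [5, 3, 8, 1, 9]   # placeholder values, not the real BMT data
tp = [2, 6, 1, 9, 3]
print(kendall_tau(t2, tp))   # → -0.6
```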

Fig. 1 The NCREX (dashed line) and empirical NCREX (solid line) of \(Y_{[r,n]}\) for the MTED

5 Conclusions

In this paper, we have obtained various properties of the extropy measure for concomitants of m-GOSs in the Morgenstern family. Applications of these results have been given for concomitants of order statistics and record values. We have also introduced the CREX and NCEX for \(G_{[r;n;m;k]}(y)\) in this family. These measures can be used to quantify the information volatility contained in concomitants of order statistics and record values. Finally, we have proposed estimators of the NCREX and NCEX for concomitants using an empirical approach, and the validity of the new measures has been supported by numerical computations for the MTED and MTUD. Using a real data set, we have shown that the NCREX of \(Y_{[r,n]}\) for the MTED and the empirical NCREX of \(Y_{[r,n]}\) are close in most cases.