1 Introduction

Let X and Y be two non-negative random variables with distribution functions \(F(x)\), \(G(x)\) and survival functions \(\bar{F}(x)\), \(\bar{G}(x)\), respectively. If \(f(x)\) is the actual probability density function (pdf) corresponding to the observations and \(g(x)\) is the density assigned by the experimenter, then the inaccuracy measure of X and Y is defined by Kerridge [9] as

$$ I(X,Y)=I(f,g)=- \int _{0}^{+\infty }f(x)\log g(x)\,dx. $$
(1.1)

Recently, Kundu [10] considered a weighted measure of inaccuracy as

$$ I^{w}(f,g)=- \int _{0}^{+\infty }xf(x)\log g(x)\,dx. $$
(1.2)

Analogous to the Kerridge measure of inaccuracy (1.1), Thapliyal and Taneja [17] proposed a cumulative past inaccuracy (CPI) measure as

$$ I(F,G)=- \int _{0}^{+\infty }F(x)\log G(x)\,dx. $$
(1.3)

When \(G(x)= F(x)\), Eq. (1.3) becomes cumulative entropy studied by Di Crescenzo and Longobardi [4]. Kundu et al. [11] studied some properties of CPI for truncated random variables. In analogy with (1.2), we define the weighted cumulative past inaccuracy (WCPI) as

$$ I^{w}(F,G)=- \int _{0}^{+\infty }xF(x)\log G(x)\,dx. $$
(1.4)

Similarly, Kundu et al. [11] introduced the concept of cumulative residual inaccuracy (CRI) which is defined as

$$ \bar{I}(\bar{F},\bar{G})=- \int _{0}^{+\infty }\bar{F}(x)\log \bar{G}(x)\,dx. $$
(1.5)

In analogy with (1.4), we define the weighted cumulative residual inaccuracy (WCRI) as

$$ \bar{I}^{w}(\bar{F},\bar{G})=- \int _{0}^{+\infty }x\bar{F}(x)\log \bar{G}(x)\,dx. $$
(1.6)
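To fix ideas, the measures (1.1)–(1.6) are easy to evaluate numerically. The following minimal sketch (Python with NumPy; the exponential pair with rates \(\lambda =1\) and \(\mu =2\), the grid, and the truncation of the infinite range are illustrative assumptions, not part of the theory) computes all six quantities; for this pair, (1.5) and (1.6) have the exact values \(\mu /\lambda ^{2}=2\) and \(2\mu /\lambda ^{3}=4\), which the output reproduces.

```python
import numpy as np

lam, mu = 1.0, 2.0                         # illustrative rates (assumption)
x = np.linspace(1e-9, 60.0, 400_000)       # grid truncating [0, infinity)
dx = x[1] - x[0]

f, g = lam * np.exp(-lam * x), mu * np.exp(-mu * x)    # densities
F, G = 1.0 - np.exp(-lam * x), 1.0 - np.exp(-mu * x)   # distribution functions
Fb, Gb = np.exp(-lam * x), np.exp(-mu * x)             # survival functions

def integral(y):
    # simple Riemann sum on the fixed grid
    return float(np.sum(y) * dx)

print("I(f,g)   =", integral(-f * np.log(g)))        # Kerridge inaccuracy (1.1)
print("I^w(f,g) =", integral(-x * f * np.log(g)))    # weighted inaccuracy (1.2)
print("CPI      =", integral(-F * np.log(G)))        # (1.3)
print("WCPI     =", integral(-x * F * np.log(G)))    # (1.4)
print("CRI      =", integral(-Fb * np.log(Gb)))      # (1.5); exact mu/lam^2 = 2
print("WCRI     =", integral(-x * Fb * np.log(Gb)))  # (1.6); exact 2*mu/lam^3 = 4
```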

Let \(X_{1},X_{2},\ldots \) be a sequence of iid random variables having an absolutely continuous cdf \(F(x)\) and pdf \(f(x)\). An observation \(X_{j}\) is called a lower record (upper record) value if its value is less (greater) than that of all previous observations; that is, \(X_{j}\) is a lower (upper) record if \(X_{j}<(>)X_{i}\) for every \(i< j\). Further, \(T_{1}=1\) and \(T_{n}=\min \{j: j>T_{n-1}, X_{j}< X_{T_{n-1}}\}\) define the sequence of lower record times. The lower record value sequence is then given by \(L_{n}=X_{T_{n}}\), \(n\geq 1\). The density function and cumulative distribution function (cdf) of \(L_{n}\), denoted by \(f_{L_{n}}\) and \(F_{L_{n}}\), respectively, are given by

$$\begin{aligned}& f_{L_{n}}(x)=\frac{ [-\log {F(x)} ]^{n-1}}{(n-1)!}f(x), \end{aligned}$$
(1.7)
$$\begin{aligned}& F_{L_{n}}(x)=\sum_{j=0}^{n-1} \frac{[-\log F(x)]^{j}}{j!}F(x). \end{aligned}$$
(1.8)

Similarly, \(Z_{1}=1\) and \(Z_{n}=\min \{j^{*}: j^{*}>Z_{n-1}, X_{j^{*}}>X_{Z_{n-1}}\}\) define the sequence of upper record times, and \(R_{n}=X_{Z_{n}}\), \(n\geq 1\), are said to be the upper record values. The pdf of \(R_{n}\) is given by

$$ f_{R_{n}}(x)=\frac{ [-\log {\bar{F}(x)} ]^{n-1}}{(n-1)!}f(x). $$
(1.9)

Also, the survival function of \(R_{n}\) can be obtained as

$$ \bar{F}_{R_{n}}(x)=\sum_{j=0}^{n-1} \frac{[-\log \bar{F}(x)]^{j}}{j!} \bar{F}(x). $$
(1.10)

Record values are applied in problems such as industrial stress testing, meteorological analysis, hydrology, sports, and economics. In reliability theory, record values are used to study, for example, technical systems which are subject to shocks, e.g., peaks of voltages. For more details about records and their applications, one may refer to Arnold et al. [1]. Several authors have worked on measures of inaccuracy for ordered random variables. Thapliyal and Taneja [16] proposed a measure of inaccuracy between the ith order statistic and the parent random variable. Thapliyal and Taneja [17] developed measures of dynamic cumulative residual and past inaccuracy and studied characterization results for these dynamic measures under the proportional hazard model and the proportional reversed hazard model. Thapliyal and Taneja [18] introduced the measure of residual inaccuracy of order statistics and proved a characterization result for it. Tahmasebi and Daneshi [14] and Tahmasebi et al. [15] obtained some results for inaccuracy measures of record values. In this paper, we propose a weighted cumulative past (residual) inaccuracy of record values and study its characterization results. The paper is organized as follows. In Sect. 2, we consider a weighted measure of inaccuracy associated with \(F_{L_{n}}\) and F and obtain some results on its properties. In Sect. 3, we study a dynamic version of WCPI between \(F_{L_{n}}\) and F. In Sect. 4, we propose the empirical WCPI of lower record values. In Sect. 5, we study WCRI and its dynamic version between \(\bar{F}_{R_{n}}\) and \(\bar{F}\), and obtain some results about their properties. Throughout the paper, the terms increasing and decreasing are used in the non-strict sense.

2 Weighted cumulative past inaccuracy for \(L_{n}\)

In this section, we propose a weighted measure of CPI between \(F_{L_{n}}\) and F. For this concept, we study some properties and characterization results under some assumptions.

Definition 2.1

Let X be a non-negative absolutely continuous random variable with cdf F. Then, we define the WCPI between \(F_{L_{n}}\) (distribution function of the nth lower record value \(L_{n}\)) and F as

$$\begin{aligned} I^{w}(F_{L_{n}},F) =&- \int _{0}^{\infty }xF_{L_{n}}(x)\log {F(x)}\,dx \\ =& \int _{0}^{\infty } \sum_{j=0}^{n-1}(j+1)x \bigl[F_{L_{j+2}}(x)-F_{L_{j+1}}(x) \bigr]\,dx \\ =&\sum_{j=0}^{n-1}(j+1)\mathbb{E}_{L_{j+2}} \biggl[\frac{X}{ \tilde{\lambda }(X)} \biggr], \end{aligned}$$
(2.1)

where \(\tilde{\lambda }(x)=\frac{f(x)}{F(x)}\) is the reversed hazard rate function and \(L_{j+2}\) is a random variable with density function \(f_{L_{j+2}}(x)=\frac{[-\log F(x)]^{j+1}f(x)}{(j+1)!}\).

In the following, we present some examples and properties of \(I^{w}(F_{L_{n}},F)\).

Example 2.1

(i) If X has an inverse Weibull distribution with the cdf \(F(x)=\exp (-(\frac{\alpha }{x})^{\beta })\), \(x>0\), with \(\beta >2\) (so that the gamma function below is finite for every \(j\geq 0\)), then we have

$$ I^{w}(F_{L_{n}},F)=\frac{\alpha ^{2}}{\beta }\sum _{j=0}^{n-1}\frac{ \varGamma (\frac{(j+1)\beta -2}{\beta } )}{j!}. $$

(ii) If X is uniformly distributed on \([0,\theta ]\), then we obtain

$$ I^{w}(F_{L_{n}},F)=\theta ^{2}\sum _{j=0}^{n-1}(j+1) \biggl(\frac{1}{3} \biggr) ^{j+2}. $$

(iii) If X has a power distribution with cdf \(F(x)=[\frac{x}{\alpha }]^{\beta }\), \(0< x<\alpha \), \(\beta >0\), then we obtain

$$ I^{w}(F_{L_{n}},F)=\alpha ^{2}\sum _{j=0}^{n-1}(j+1)\frac{\beta ^{j+1}}{(2+ \beta )^{j+2}}. $$
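The closed forms in parts (ii) and (iii) are easy to sanity-check. Below is a minimal numerical sketch (Python with NumPy; the parameter values and grid sizes are arbitrary illustrative choices) that integrates the representation (2.1) directly and compares it with the stated expressions.

```python
import numpy as np
from math import factorial

theta, alpha, beta, n = 2.0, 1.5, 3.0, 3   # illustrative parameters (assumption)

def wcpi_numeric(x, F, n):
    """Direct evaluation of (2.1): sum_j (1/j!) int x (-log F)^{j+1} F dx."""
    dx = x[1] - x[0]
    return sum(float(np.sum(x * (-np.log(F)) ** (j + 1) * F) * dx) / factorial(j)
               for j in range(n))

# (ii) uniform on [0, theta]
xu = np.linspace(theta * 1e-9, theta, 200_000)
print(wcpi_numeric(xu, xu / theta, n),
      theta**2 * sum((j + 1) / 3.0 ** (j + 2) for j in range(n)))

# (iii) power distribution on (0, alpha)
xp = np.linspace(alpha * 1e-9, alpha, 200_000)
print(wcpi_numeric(xp, (xp / alpha) ** beta, n),
      alpha**2 * sum((j + 1) * beta ** (j + 1) / (2 + beta) ** (j + 2)
                     for j in range(n)))
```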

Proposition 2.2

Let X be a non-negative random variable with cdf F, then we have

$$ I^{w}(F_{L_{n}},F)=\sum_{j=0}^{n-1} \frac{1}{j!} \int _{0}^{\infty } \tilde{\lambda }(z) \biggl[ \int _{0}^{z} x \bigl[-\log {F(x)} \bigr] ^{j} F(x)\,dx \biggr]\,dz . $$
(2.2)

Proof

By (2.1) and using the relation \(-\log F(x)=\int _{x}^{\infty } \tilde{\lambda }(z)\,dz\), we have

$$\begin{aligned} I^{w}(F_{L_{n}},F) &= \sum_{j=0}^{n-1} \int ^{\infty }_{0}x\frac{ [- \log {F(x)} ]^{j+1}}{j!}F(x)\,dx \\ & =\sum_{j=0}^{n-1} \int _{0}^{\infty } \biggl[ \int _{x}^{\infty } \tilde{\lambda }(z)\,dz \biggr]x \frac{ [-\log {F(x)} ]^{j}}{j!}F(x)\,dx \\ & =\sum_{j=0}^{n-1}\frac{1}{j!} \int _{0}^{\infty }\tilde{\lambda }(z) \biggl[ \int _{0}^{z} x \bigl[-\log {F(x)} \bigr]^{j}F(x)\,dx \biggr]\,dz. \end{aligned}$$

So, the proof is completed. □

The weighted mean inactivity time (WMIT) function of a non-negative random variable X is given by

$$ \mu ^{w}(t)=\frac{\int _{0}^{t}xF(x)\,dx}{tF(t)}, \quad t>0. $$

Now, the WMIT of \(L_{n}\) is given by

$$\begin{aligned} \mu ^{w}_{n}(t)=\frac{\sum_{j=0}^{n-1}\frac{1}{j!}\int _{0}^{t}xF(x)[- \log F(x)]^{j}\,dx}{t\sum_{j=0}^{n-1}\frac{1}{j!}F(t)[-\log F(t)]^{j}}. \end{aligned}$$
(2.3)

Note that \(\mu ^{w}_{n}(t)\) is analogous to the mean residual waiting time used in reliability and survival analysis (for more details, see Bdair and Raqab [2]).

Proposition 2.3

Let X be a non-negative random variable with cdf F. Then, we have

$$\begin{aligned} I^{w}(F_{L_{n}},F)=\sum_{j=0}^{n-1} \mathbb{E}_{L_{{j+1}}} \bigl[X \mu _{n}^{w}(X) \bigr]. \end{aligned}$$

Proof

From (2.2) and (2.3), we obtain

$$\begin{aligned} I^{w}(F_{L_{n}},F) =& \int _{0}^{\infty }\tilde{\lambda }(z) \Biggl[\sum _{j=0} ^{n-1}\frac{1}{j!} \biggl[ \int _{0}^{z}x \bigl[-\log F(x) \bigr]^{j}F(x)\,dx \biggr] \Biggr]\,dz \\ =& \int _{0}^{+\infty }\sum_{j=0}^{n-1}z \mu _{n}^{w}(z)f_{L_{{j+1}}}(z)\,dz \\ =&\sum_{j=0}^{n-1} \int _{0}^{+\infty }z \mu _{n}^{w}(z)f_{L_{{j+1}}}(z)\,dz \\ =&\sum_{j=0}^{n-1}\mathbb{E}_{L_{{j+1}}} \bigl[X \mu _{n}^{w}(X) \bigr], \end{aligned}$$

yielding the claim. □

Proposition 2.4

Let X be an absolutely continuous non-negative random variable with \(I^{w}(F_{L_{n}},F)<\infty \), for all \(n\geq 1\). Then we have

$$ I^{w}(F_{L_{n}},F)=\sum_{j=0}^{n-1} \frac{1}{j!} \mathbb{E} \bigl(\tilde{h} ^{w}_{j+1}(X) \bigr), $$
(2.4)

where \(\tilde{h}^{w}_{j+1}(t)=\int _{t}^{\infty }x [-\log F(x) ] ^{j+1}\,dx\).

Proof

By using (2.1) and Fubini’s theorem, we obtain

$$\begin{aligned} I^{w}(F_{L_{n}},F) & =\sum_{j=0}^{n-1} \int _{0}^{\infty }x\frac{ [- \log F(x) ]^{j+1}}{j!} \biggl[ \int _{0}^{x}f(t)\,dt \biggr]\,dx \\ & =\sum_{j=0}^{n-1} \int _{0}^{\infty }\frac{f(t)}{j!} \biggl[ \int _{t} ^{\infty }x \bigl[-\log F(x) \bigr]^{j+1}\,dx \biggr]\,dt \\ &=\sum_{j=0}^{n-1}\frac{1}{j!} \mathbb{E} \bigl[\tilde{h}^{w}_{j+1}(X) \bigr]. \end{aligned}$$

 □
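Representation (2.4) also suggests a simple Monte Carlo scheme: precompute \(\tilde{h}^{w}_{j+1}\) on a grid, draw samples from F, and average. A minimal sketch for the uniform distribution on \([0,\theta ]\) (Python with NumPy; the seed, sample size, and parameters are illustrative assumptions) compares the estimate with the closed form of Example 2.1(ii).

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(1)
theta, n, N = 2.0, 3, 200_000              # illustrative choices (assumption)

x = np.linspace(theta * 1e-9, theta, 100_000)
dx = x[1] - x[0]

total = 0.0
for j in range(n):
    # For the uniform law, -log F(x) = 0 beyond theta, so the integral in
    # h_tilde_{j+1}(t) = int_t^infinity z (-log F(z))^{j+1} dz stops at theta.
    integrand = x * (-np.log(x / theta)) ** (j + 1)
    tail = np.cumsum(integrand[::-1])[::-1] * dx   # tail[i] ~ h_tilde(x[i])
    xs = rng.uniform(0.0, theta, N)                # draws distributed as F
    total += np.interp(xs, x, tail).mean() / factorial(j)

print("Monte Carlo (2.4):", total)
print("Example 2.1(ii): ", theta**2 * sum((j + 1) / 3.0 ** (j + 2) for j in range(n)))
```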

Remark 2.1

Let X be a symmetric random variable with respect to the finite mean \(\mu =E(X)\), i.e., \(F(x+\mu )=1-F(\mu -x)\) for all \(x\in \mathbb{R}\). Then

$$ I^{w}(F_{L_{n}},F)=2\mu \bar{I}( \bar{F}_{R_{n}},\bar{F})-\bar{I}^{w}(\bar{F}_{R_{n}}, \bar{F}) , $$

where \(\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})\) is the weighted cumulative residual measure of inaccuracy between \(\bar{F}_{R_{n}}\) (survival function of the nth upper record value \(R_{n}\)) and \(\bar{F}\). Indeed, symmetry gives \(F(x)=\bar{F}(2\mu -x)\), and the change of variable \(x=2\mu -y\) in (2.1) yields the stated identity; for the uniform distribution on \([0,\theta ]\) it is readily verified from Examples 2.1(ii) and 5.1(a).

Now we can prove an important property of the inaccuracy measure using some properties of stochastic ordering. For that we present the following definitions:

1. A random variable X is said to be smaller than Y in the usual stochastic order (denoted by \(X\leq ^{st}Y\)) if \(P(X\geq x)\leq P(Y \geq x)\) for all x. It is known that \(X \leq ^{st}Y \Leftrightarrow \mathbb{E}(\phi (X))\leq \mathbb{E}(\phi (Y))\) for all increasing functions ϕ for which the expectations exist (equivalence (1.A.7) in Shaked and Shanthikumar [13]).

2. A random variable X is said to be smaller than Y in the likelihood ratio order (denoted by \(X\leq ^{lr}Y\)) if \(\frac{g(x)}{f(x)}\) is increasing in x.

3. A random variable X is said to be smaller than a random variable Y in the increasing convex order, denoted by \(X \leq ^{icx}Y\), if \(\mathbb{E}(\phi (X))\leq \mathbb{E}(\phi (Y))\) for all increasing convex functions ϕ such that the expectations exist.

4. A non-negative random variable X is said to have a decreasing reversed hazard rate on average (DRHRA) if \(\frac{\tilde{\lambda }(x)}{x}\) is decreasing in x.

5. A non-negative random variable X is said to have a decreasing failure rate on average (DFRA) if \(\frac{\lambda (x)}{x}\) is decreasing in x.

Theorem 2.5

Suppose that a non-negative random variable X is DRHRA, then

$$ I^{w}(F_{L_{n+1}},F)-I^{w}(F_{L_{n}},F)\leq \sum _{i=1}^{n+1} \mathbb{E}_{L_{i}} \biggl[ \frac{X}{\tilde{\lambda }(X)} \biggr]. $$
(2.5)

Proof

Let \(f_{L_{n}}(x)\) be the pdf of the nth lower record value \(L_{n}\). Then, the ratio \(\frac{f_{L_{n}}(x)}{f _{L_{n+1}}(x)}=\frac{-n}{\log F(x)}\) is increasing in x. Therefore, \(L_{n+1}\leq ^{lr}L_{n}\), and this implies that \(L_{n+1}\leq ^{st}L _{n}\), i.e., \(\bar{F}_{L_{n+1}}(x)\leq \bar{F}_{L_{n}}(x)\) (for more details, see Shaked and Shanthikumar ([13], Chap. 1)). This is equivalent (see Shaked and Shanthikumar ([13], p. 4)) to having

$$ \mathbb{E} \bigl(\phi (L_{n+1}) \bigr)\leq \mathbb{E} \bigl(\phi (L_{n}) \bigr) $$

for all increasing functions ϕ such that these expectations exist. Thus, if X is DRHRA and \(\tilde{\lambda }(x)\) is its reversed hazard rate, then \(\frac{x}{\tilde{\lambda }(x)}\) is increasing in x. From (2.1), we have that

$$\begin{aligned} I^{w}(F_{L_{n+1}},F) &=\sum_{j=0}^{n}(j+1) \mathbb{E}_{L_{j+2}} \biggl[\frac{X}{ \tilde{\lambda }(X)} \biggr] \\ &\leq \sum_{j=0}^{n}(j+1) \mathbb{E}_{L_{j+1}} \biggl[\frac{X}{ \tilde{\lambda }(X)} \biggr] \\ &= \sum_{i=-1}^{n-1}(i+2)\mathbb{E}_{L_{i+2}} \biggl[\frac{X}{ \tilde{\lambda }(X)} \biggr] \\ &= \sum_{i=0}^{n-1}(i+2)\mathbb{E}_{L_{i+2}} \biggl[\frac{X}{ \tilde{\lambda }(X)} \biggr]+\mathbb{E}_{L_{1}} \biggl[ \frac{X}{ \tilde{\lambda }(X)} \biggr] \\ &=I^{w}(F_{L_{n}},F)+\sum_{i=1}^{n+1} \mathbb{E}_{L_{i}} \biggl[\frac{X}{ \tilde{\lambda }(X)} \biggr]. \end{aligned}$$

Thus the proof is completed. □

Proposition 2.6

Let X be a non-negative random variable with absolutely continuous cumulative distribution function \(F(x)\). Then for \(n=1,2,\dots \), we have

$$ I^{w}(F_{L_{n}},F)\geq \sum_{j=0}^{n-1} \sum_{i=0}^{j+1} \frac{(-1)^{i}(j+1)}{i!(j+1-i)!} \int _{0}^{\infty }x \bigl[F(x) \bigr]^{i+1}\,dx. $$

Proof

Since \(-\log F(x)\geq 1-F(x)\), the proof follows from (2.1) by expanding \([1-F(x)]^{j+1}\) with the binomial theorem. □

Proposition 2.7

Let X be a non-negative random variable with absolutely continuous cumulative distribution function \(F(x)\). Then for \(n=1,2,\dots \), we have

$$ I^{w}(F_{L_{n}},F)\leq \sum_{j=0}^{n-1} \frac{1}{j!} \int _{0}^{\infty }x \bigl[- \log F(x) \bigr]^{j+1}\,dx, $$

which follows immediately from (2.1) since \(F(x)\leq 1\).

Assume that \(\tilde{X}_{\theta }\) denotes a non-negative absolutely continuous random variable with the distribution function \(H_{\theta }(x)=[F(x)]^{\theta }\), \(x\geq 0\); this model is known as the proportional reversed hazards model. We now obtain the cumulative measure of inaccuracy between \(H_{L_{n}}\) and H as follows:

$$\begin{aligned} I^{w}(H_{L_{n}},H) =&- \int _{0}^{+\infty }x H_{L_{n}}(x)\log \bigl(H(x) \bigr)\,dx \\ =&\sum_{j=0}^{n-1}\theta ^{j+1} \int _{0}^{+\infty }x \frac{[-\log F(x)]^{j+1}}{j!} \bigl[F(x) \bigr]^{\theta }\,dx. \end{aligned}$$
(2.6)

Proposition 2.8

If \(\theta \geq 1\), then for any \(n=1,2,\dots \), we have

$$ I^{w}(H_{L_{n}},H) =\sum_{j=0}^{n-1}(j+1) \mathcal{CE}^{w}_{j+1}( \tilde{X}_{\theta }) \leq \sum _{j=0}^{n-1}\theta ^{j+1}(j+1) \mathcal{CE}^{w}_{j+1}(X). $$
(2.7)

Proof

Suppose that \(\theta \geq 1\); then it is clear that \([F(x)]^{\theta }\leq F(x)\), and hence, by (2.6),

$$ I^{w}(H_{L_{n}},H) =\sum_{j=0}^{n-1}(j+1) \mathcal{CE}^{w}_{j+1}( \tilde{X}_{\theta })=\sum _{j=0}^{n-1}\theta ^{j+1} \int _{0}^{+\infty }x \frac{[-\log F(x)]^{j+1}}{j!} \bigl[F(x) \bigr]^{\theta }\,dx \leq \sum _{j=0}^{n-1}\theta ^{j+1}(j+1) \mathcal{CE}^{w}_{j+1}(X), $$

where \(\mathcal{CE}^{w}_{j+1}\) is the weighted generalized cumulative entropy defined in (2.9) below.

 □

Proposition 2.9

Let X be a non-negative random variable with cdf F, then an analytical expression for \(I^{w}(F_{L_{n}},F)\) is given by

$$ I^{w}(F_{L_{n}},F)=\sum _{j=0}^{n-1} \int _{0}^{\infty }x\frac{ [- \log F(x) ]^{j+1}}{j!}F(x)\,dx =\sum _{j=0}^{n-1}(j+1){\mathcal{CE}} ^{w}_{j+1}(X), $$
(2.8)

where

$$ {\mathcal{CE}}^{w}_{j+1}(X)= \int _{0}^{\infty }x\frac{ [-\log F(x) ] ^{j+1}}{(j+1)!}F(x)\,dx, $$
(2.9)

is a weighted generalized cumulative entropy (WGCE) which was introduced by Kayal and Moharana [7].

Proposition 2.10

Let \(a,b>0\). Then for \(n=1,2,\ldots \) , it holds that

$$ I^{w}(F_{aL_{n}+b},F_{aX+b})=a^{2}I^{w}(F_{L_{n}},F)+abI(F_{L_{n}},F). $$
(2.10)

Proof

From (2.8), we have

$$\begin{aligned} I^{w}(F_{aL_{n}+b},F_{aX+b}) =&\sum _{j=0}^{n-1}(j+1) {\mathcal{CE}} ^{w}_{j+1}(aX+b) \\ =&a^{2}\sum_{j=0}^{n-1}(j+1){ \mathcal{CE}}^{w}_{j+1}(X)+ab\sum_{j=0} ^{n-1}(j+1){\mathcal{CE}}_{j+1}(X) \\ =&a^{2}I^{w}(F_{L_{n}},F)+abI(F_{L_{n}},F). \end{aligned}$$

The proof is completed. □

Recently, Cali et al. [3] introduced a generalized CPI of order m defined as

$$ I_{m}(F,G)=\frac{1}{m!} \int _{0}^{+\infty }F(x) \bigl[-\log G(x) \bigr]^{m}\,dx. $$
(2.11)

In analogy with the measure defined in Eq. (2.11), we now introduce a weighted generalized CPI (WGCPI) of order m defined as

$$ I_{m}^{w}(F,G)=\frac{1}{m!} \int _{0}^{+\infty }x F(x) \bigl[-\log G(x) \bigr]^{m}\,dx. $$
(2.12)

Remark 2.2

Let X be a non-negative absolutely continuous random variable with cdf F. Then, the WGCPI of order m between \(F_{L_{n}}\) and F is

$$\begin{aligned} I_{m}^{w}(F_{L_{n}},F) =& \frac{1}{m!} \int _{0}^{\infty }xF_{L_{n}}(x) \bigl[- \log {F(x)} \bigr]^{m}\,dx \\ =&\sum_{j=0}^{n-1}\binom{m+j}{j}{ \mathcal{CE}}^{w}_{m+j}(X). \end{aligned}$$
(2.13)

3 Dynamic weighted cumulative past inaccuracy

In this section, we study a dynamic version of \(I^{w}( F_{L_{n}},F )\). If a system that begins to work at time 0 is observed only at deterministic inspection times, and is found to be ‘down’ at time t, then we consider a dynamic cumulative measure of inaccuracy as

$$\begin{aligned} I^{w}(F_{L_{n}},F;t) =&- \int _{0}^{t}x \frac{F_{L_{n}}(x)}{F_{L_{n}}(t)}\log \biggl( \frac{F(x)}{F(t)} \biggr)\,dx \\ =&t\mu _{n}^{w}(t)\log F(t)- \int _{0}^{t}x \frac{F_{L_{n}}(x)}{F_{L_{n}}(t)}\log \bigl(F(x) \bigr)\,dx \\ =&t\mu _{n}^{w}(t)\log F(t) +\frac{1}{F_{L_{n}}(t)}\sum _{j=0}^{n-1} \int _{0}^{t}x\frac{[-\log F(x)]^{j+1}}{j!}F(x)\,dx. \end{aligned}$$
(3.1)

Here, by (2.3), \(t\mu _{n}^{w}(t)=\int _{0}^{t}xF_{L_{n}}(x)\,dx/F_{L_{n}}(t)\). Note that \(\lim_{t\rightarrow \infty }I^{w}(F_{L_{n}},F;t)=I^{w}(F _{L_{n}},F)\). Since \(\log F(t)\leq 0\) for \(t\geq 0\), we have

$$\begin{aligned} I^{w}(F_{L_{n}},F;t) \leq & \frac{1}{F_{L_{n}}(t)}\sum _{j=0}^{n-1} \int _{0}^{t}x\frac{[-\log F(x)]^{j+1}}{j!}F(x)\,dx \\ \leq &\frac{1}{F_{L_{n}}(t)}\sum_{j=0}^{n-1} \int _{0}^{+\infty }x\frac{[- \log F(x)]^{j+1}}{j!}F(x)\,dx= \frac{I^{w}(F_{L_{n}},F)}{F_{L_{n}}(t)}. \end{aligned}$$

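Before turning to the characterization result, the following minimal sketch (Python with NumPy; the uniform parent and the values of \(\theta \) and n are illustrative assumptions) evaluates (3.1) on a grid and illustrates the convergence of \(I^{w}(F_{L_{n}},F;t)\) to \(I^{w}(F_{L_{n}},F)\) as t increases through the support.

```python
import numpy as np
from math import factorial

theta, n = 2.0, 3                           # illustrative choices (assumption)

def F_Ln(u):
    # (1.8) evaluated at F(x) = u
    return sum((-np.log(u)) ** j / factorial(j) for j in range(n)) * u

def dynamic_wcpi(t, num=200_000):
    x = np.linspace(t * 1e-9, t, num)
    dx = x[1] - x[0]
    u, ut = x / theta, t / theta
    integrand = -x * F_Ln(u) / F_Ln(ut) * np.log(u / ut)   # (3.1)
    return float(np.sum(integrand) * dx)

for t in (0.5, 1.0, 1.5, 2.0):
    print(t, dynamic_wcpi(t))
# Limit: I^w(F_{L_n}, F) from Example 2.1(ii)
print("limit:", theta**2 * sum((j + 1) / 3.0 ** (j + 2) for j in range(n)))
```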
In the following theorem, we prove that \(I^{w}(F_{L_{n}},F;t)\) uniquely determines the distribution function.

Theorem 3.1

Let X be a non-negative continuous random variable with distribution function \(F(\cdot )\). Let the weighted dynamic cumulative inaccuracy of the nth lower record value be finite, that is, \(I^{w}(F_{L_{n}},F;t)< \infty \), \(t\geq 0\). Then \(I^{w}(F_{L_{n}},F;t)\) characterizes the distribution function.

Proof

From (3.1) we have

$$\begin{aligned} I^{w}(F_{L_{n}},F;t) =t\mu _{n}^{w}(t)\log F(t)+\frac{1}{F_{L_{n}}(t)} \sum _{j=0}^{n-1} \int _{0}^{t}x\frac{[-\log F(x)]^{j+1}}{j!}F(x)\,dx. \end{aligned}$$
(3.2)

Differentiating both sides of (3.2) with respect to t, we obtain

$$\begin{aligned} \frac{\partial }{\partial t} \bigl[I^{w}(F_{L_{n}},F;t) \bigr] =&\tilde{ \lambda } _{F}(t)\,t\mu _{n}^{w}(t)-\tilde{\lambda }_{F_{L_{n}}}(t)I^{w}(F_{L_{n}},F;t) \\ =&\tilde{\lambda }_{F}(t)\,t\mu _{n}^{w}(t)-c(t) \tilde{\lambda }_{F}(t)I ^{w}(F_{L_{n}},F;t) \\ =&\tilde{\lambda }_{F}(t) \bigl[t\mu _{n}^{w}(t)-c(t)I^{w}(F_{L_{n}},F;t) \bigr], \end{aligned}$$

where \(c(t)=\tilde{\lambda }_{F_{L_{n}}}(t)/\tilde{\lambda }_{F}(t)\).

Writing \(z(t)=I^{w}(F_{L_{n}},F;t)\) and solving the above identity for \(t\mu _{n}^{w}(t)=z'(t)/\tilde{\lambda }_{F}(t)+c(t)z(t)\), differentiating once more with respect to t and using \(\frac{d}{dt}[t\mu _{n}^{w}(t)]=t-c(t)\tilde{\lambda }_{F}(t)\,t\mu _{n}^{w}(t)\), we get

$$\begin{aligned} \tilde{\lambda }'_{F}(t)= \frac{z''(t)\tilde{\lambda }_{F}(t)+(\tilde{\lambda }_{F}(t))^{2} (c'(t)z(t)+c(t)z'(t)-t+c(t) \tilde{\lambda }_{F}(t)\,t\mu _{n}^{w}(t) )}{z'(t)}. \end{aligned}$$
(3.3)

Suppose that there are two functions F and \(F^{*}\) such that

$$ I^{w}(F_{L_{n}},F;t)=I^{w} \bigl(F^{*}_{L_{n}},F^{*};t \bigr)=z(t). $$

Then for all t, from (3.3) we get

$$\begin{aligned} \tilde{\lambda }'_{F}(t)=\varphi \bigl(t,\tilde{\lambda }_{F}(t) \bigr), \qquad \tilde{\lambda }'_{F^{*}}(t)= \varphi \bigl(t,\tilde{\lambda }_{F ^{*}}(t) \bigr), \end{aligned}$$

where

$$ \varphi (t,y)=\frac{z''(t)y+y^{2} [c'(t)z(t)+c(t) (z'(t)+ys(t) )-t ]}{z'(t)}, $$

and \(s(t)=t\mu _{n}^{w}(t)\). By using Theorem 2.1 and Lemma 2.2 of Gupta and Kirmani [5], we have \(\tilde{\lambda }_{F}(t)=\tilde{\lambda } _{F^{*}}(t)\) for all t. Since the reversed hazard rate function characterizes the distribution function uniquely, the proof is complete. □

4 Empirical weighted cumulative past inaccuracy

In this section we address the problem of estimating the weighted cumulative measure of inaccuracy by means of the empirical weighted cumulative inaccuracy of lower record values. Let \(X_{1},X_{2}, \dots ,X_{m}\) be a random sample of size m from an absolutely continuous cumulative distribution function \(F(x)\). Then according to (2.8), the empirical cumulative measure of inaccuracy is

$$ \hat{I}^{w}(F_{L_{n}},F)=\sum _{j=0}^{n-1} \int _{0}^{\infty }x\frac{ [-\log \hat{F}_{m}(x) ]^{j+1}}{j!} \hat{F}_{m}(x)\,dx =\sum_{j=0}^{n-1}(j+1){ \mathcal{CE}}^{w}_{j+1}(\hat{F}_{m}), $$
(4.1)

where

$$ \hat{F}_{m}(x)=\frac{1}{m}\sum_{i=1}^{m}I_{(X_{i}\leq x)},\quad x\in \mathbb{R}, $$

is the empirical distribution of the sample and I is the indicator function. If we denote \(X_{(1)} \leq X_{(2)}\leq \cdots \leq X_{(m)}\) as the order statistics of the sample, then (4.1) can be written as

$$ \hat{I}^{w}(F_{L_{n}},F)=\sum _{j=0}^{n-1}\sum_{k=1}^{m-1} \int _{X_{(k)}} ^{X_{(k+1)}}x\frac{ [-\log \hat{F}_{m}(x) ]^{j+1}}{j!} \hat{F}_{m}(x)\,dx. $$
(4.2)

Moreover,

$$ \hat{F}_{m}(x)= \textstyle\begin{cases} 0, & x< X_{(1)}, \\ \frac{k}{m},& X_{(k)}\le x< X_{(k+1)}, k=1,2,\dots ,m-1, \\ 1, & x\ge X_{(m)}. \end{cases} $$

Hence, (4.2) can be written as

$$ \hat{I}^{w}(F_{L_{n}},F)=\sum _{j=0}^{n-1}\sum_{k=1}^{m-1} \frac{1}{j!}U _{k}\frac{k}{m} \biggl(-\log \frac{k}{m} \biggr)^{j+1}, $$
(4.3)

where \(U_{k}=\frac{X^{2}_{(k+1)}-X^{2}_{(k)}}{2}\), \(k=1,2,\dots ,m-1\) are the sample spacings.
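The estimator (4.3) is straightforward to implement. A minimal sketch (Python with NumPy; the uniform parent, seed, and sample size are illustrative assumptions) computes \(\hat{I}^{w}(F_{L_{n}},F)\) from a simulated sample and compares it with the exact value from Example 2.1(ii), in line with the consistency result of Theorem 4.1 below.

```python
import numpy as np
from math import factorial

def empirical_wcpi(sample, n):
    """Empirical WCPI (4.3): sum_{j<n} sum_k U_k (k/m) (-log(k/m))^{j+1} / j!."""
    xs = np.sort(np.asarray(sample, dtype=float))
    m = xs.size
    k = np.arange(1, m)                      # k = 1, ..., m-1
    U = (xs[1:] ** 2 - xs[:-1] ** 2) / 2.0   # sample spacings U_k
    p = k / m
    return sum(float(np.sum(U * p * (-np.log(p)) ** (j + 1))) / factorial(j)
               for j in range(n))

rng = np.random.default_rng(7)
theta, m, n = 2.0, 5_000, 3
print("empirical:", empirical_wcpi(rng.uniform(0.0, theta, m), n))
print("exact:    ", theta**2 * sum((j + 1) / 3.0 ** (j + 2) for j in range(n)))
```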

Example 4.1

Consider a random sample \(X_{1},X_{2}, \dots ,X_{m}\) from the Weibull distribution with density function

$$ f(x)=2\lambda x\exp \bigl(-\lambda x^{2} \bigr), \quad x>0. $$

Then \(Y_{k}=X_{k}^{2}\) has an exponential distribution with mean \(\frac{1}{\lambda }\). In this case, the sample spacings \(2U_{k}=X^{2} _{(k+1)}-X^{2}_{(k)}\) are independent and exponentially distributed with mean \(\frac{1}{\lambda (m-k)}\) (for more details, see Pyke [12]). Now from (4.3) we obtain

$$ \mathbb{E} \bigl[\hat{I}^{w}(F_{L_{n}},F) \bigr]=\sum _{j=0}^{n-1}\sum_{k=1}^{m-1} \frac{k}{2 \lambda j!(m-k)m} \biggl(-\log \frac{k}{m} \biggr)^{j+1} $$
(4.4)

and, since in (4.3) each spacing \(U_{k}\) is multiplied by the overall coefficient \(\frac{k}{m}\sum_{j=0}^{n-1}\frac{1}{j!} (-\log \frac{k}{m} )^{j+1}\), the independence of the spacings gives

$$ \operatorname{Var} \bigl[\hat{I}^{w}(F_{L_{n}},F) \bigr]=\sum_{k=1}^{m-1} \frac{k ^{2}}{4\lambda ^{2}(m-k)^{2}m^{2}} \Biggl[\sum _{j=0}^{n-1}\frac{1}{j!} \biggl(-\log \frac{k}{m} \biggr)^{j+1} \Biggr]^{2}. $$
(4.5)

We have computed the values of \(\mathbb{E}[\hat{I}^{w}(F_{L_{n}},F)]\) and \(\operatorname{Var}[\hat{I}^{w}(F_{L_{n}},F)]\) for sample sizes \(m=10, 15,20\), \(\lambda =0.5,1,2\), and \(n=2,3,4,5\) in Table 1. We can easily see that \(\mathbb{E}[\hat{I}^{w}(F_{L_{n}},F)]\) and \(\operatorname{Var}[\hat{I}^{w}(F_{L_{n}},F)]\) are decreasing in m. We also observe that \(\lim_{m\rightarrow \infty }\operatorname{Var}[\hat{I}^{w}(F_{L_{n}},F)]=0\).

Table 1 Numerical values of \(\mathbb{E}[\hat{I}^{w}(F_{L_{n}},F)]\) and \(\operatorname{Var}[\hat{I}^{w}(F_{L_{n}},F)]\) for Weibull distribution
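Formula (4.4) can also be cross-checked by simulation: generate many samples of size m from this distribution (note that \(X=\sqrt{Y}\) with Y exponential of mean \(1/\lambda \)) and average the empirical WCPI (4.3) over replications. A minimal sketch (Python with NumPy; the seed, replication count, and parameter values are illustrative assumptions):

```python
import numpy as np
from math import factorial, log

def empirical_wcpi(xs, n):
    # Empirical WCPI (4.3)
    xs = np.sort(xs)
    k = np.arange(1, xs.size)
    U = (xs[1:] ** 2 - xs[:-1] ** 2) / 2.0
    p = k / xs.size
    return sum(float(np.sum(U * p * (-np.log(p)) ** (j + 1))) / factorial(j)
               for j in range(n))

rng = np.random.default_rng(11)
lam, m, n, reps = 1.0, 10, 2, 20_000        # illustrative choices (assumption)

# X = sqrt(Y) with Y ~ Exp(mean 1/lam) has pdf 2*lam*x*exp(-lam*x^2)
mc = np.mean([empirical_wcpi(np.sqrt(rng.exponential(1.0 / lam, m)), n)
              for _ in range(reps)])

exact = sum(k / (2 * lam * factorial(j) * (m - k) * m) * (-log(k / m)) ** (j + 1)
            for j in range(n) for k in range(1, m))
print(mc, exact)   # the two values should agree up to Monte Carlo error
```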

Example 4.2

Let \(X_{1},X_{2}, \dots ,X_{m}\) be a random sample from a population with pdf \(f(x)=2x\), \(0< x<1\). Then \(Y_{k}=X_{k}^{2}\) is uniform on \((0,1)\), and the sample spacings \(2U_{k}=Y_{(k+1)}-Y_{(k)}\) are beta distributed with parameters 1 and m (for more details, see Pyke [12]); recall that uniform spacings are exchangeable but not independent. Now from (4.3) we obtain

$$ \mathbb{E} \bigl[\hat{I}^{w}(F_{L_{n}},F) \bigr]=\sum _{j=0}^{n-1}\sum_{k=1}^{m-1} \frac{k}{2j!(m+1)m} \biggl(-\log \frac{k}{m} \biggr)^{j+1} $$
(4.6)

and

$$ \operatorname{Var} \bigl[\hat{I}^{w}(F_{L_{n}},F) \bigr]\approx \sum_{k=1}^{m-1} \frac{k ^{2}}{4(m+1)^{2}(m+2)m} \Biggl[\sum _{j=0}^{n-1}\frac{1}{j!} \biggl(-\log \frac{k}{m} \biggr)^{j+1} \Biggr]^{2}, $$
(4.7)

where the (negative) covariances between distinct spacings have been neglected. We have computed the values of \(\mathbb{E}[\hat{I}^{w}(F_{L_{n}},F)]\) and \(\operatorname{Var}[\hat{I}^{w}(F_{L_{n}},F)]\) for sample sizes \(m=10, 15,20\) and \(n=2,3,4,5\) in Table 2. We can easily see that \(\operatorname{Var}[\hat{I}^{w}(F_{L_{n}},F)]\) is decreasing in m and \(\lim_{m\rightarrow \infty }\operatorname{Var}[\hat{I}^{w}(F _{L_{n}},F)]=0\).

Table 2 Numerical values of \(\mathbb{E}[\hat{I}^{w}(F_{L_{n}},F)]\) and \(\operatorname{Var}[\hat{I}^{w}(F_{L_{n}},F)]\) for beta distribution

Theorem 4.1

Let X be an absolutely continuous non-negative random variable such that \(I^{w}(F_{L_{n}},F)<\infty \) for all \(n\geq 1\), and assume \(\mathbb{E}(X^{p})<\infty \) for some \(p>2\). Then, as \(m\rightarrow \infty \), we have

$$\begin{aligned} \hat{I}^{w}(F_{L_{n}},F) \longrightarrow I^{w}(F_{L_{n}},F) \quad \textit{a.s.} \end{aligned}$$

Proof

From (2.8) we have

$$\begin{aligned} \hat{I}^{w}(F_{L_{n}},F)=\sum _{j=0}^{n-1}(j+1){\mathcal{CE}}^{w}_{j+1}( \hat{F}_{m}), \end{aligned}$$
(4.8)

where

$$ {\mathcal{CE}}^{w}_{j+1}(\hat{F}_{m})= \int _{0}^{\infty }x\frac{(- \log {\hat{F}_{m}(x)})^{j+1}}{(j+1)!} \hat{F}_{m}(x) \,dx. $$

Now we can obtain

$$\begin{aligned} \frac{(j+1)!\mathcal{CE}^{w}_{j+1}(\hat{F}_{m})}{(-1)^{j+1}} =& \int _{0}^{\infty }x \bigl(\log \hat{F}_{m}(x) \bigr)^{j+1}\hat{F}_{m}(x)\,dx \\ =& \int _{0}^{1} x \bigl(\log \hat{F}_{m}(x) \bigr)^{j+1}\hat{F}_{m}(x)\,dx+ \int _{1} ^{\infty }x \bigl(\log \hat{F}_{m}(x) \bigr)^{j+1}\hat{F}_{m}(x)\,dx \\ =:&W_{1}+W_{2}, \end{aligned}$$
(4.9)

where

$$\begin{aligned}& W_{1}= \int _{0}^{1} x \bigl(\log \hat{F}_{m}(x) \bigr)^{j+1}\hat{F}_{m}(x)\,dx, \\& W_{2}= \int _{1}^{\infty }x \bigl(\log \hat{F}_{m}(x) \bigr)^{j+1}\hat{F}_{m}(x)\,dx. \end{aligned}$$

Using dominated convergence (DCT) and Glivenko–Cantelli theorems, we have

$$\begin{aligned} \int _{0}^{1} x \bigl(\log \hat{F}_{m}(x) \bigr)^{j+1}\hat{F}_{m}(x)\,dx\longrightarrow \int _{0}^{1} x \bigl(\log F(x) \bigr)^{j+1} F(x)\,dx \quad \text{as } m\rightarrow \infty . \end{aligned}$$
(4.10)

For \(W_{2}\), Markov's inequality applied to the empirical survival function gives, for every \(x>0\),

$$\begin{aligned} x^{p} \bigl(1-\hat{F}_{m}(x) \bigr)\leq \frac{1}{m}\sum _{i=1}^{m} X_{i}^{p}. \end{aligned}$$

Moreover, by the SLLN, \(\frac{1}{m}\sum_{i=1}^{m} X_{i}^{p} \longrightarrow \mathbb{E}(X^{p})\) a.s., so that \(\sup_{m}(\frac{1}{m}\sum_{i=1}^{m} X_{i}^{p})< \infty \) a.s. and \(1-\hat{F}_{m}(x)\leq x^{-p} (\sup_{m}(\frac{1}{m} \sum_{i=1}^{m} X_{i}^{p}) )=Cx^{-p}\). Since \(u(-\log u)^{j+1}\leq 2^{j+1}(1-u)\) for \(u\in [\frac{1}{2},1 ]\), the integrand of \(W_{2}\) is eventually dominated by \(2^{j+1}Cx^{1-p}\), which is integrable on \((1,\infty )\) because \(p>2\). Now applying DCT, we have

$$\begin{aligned} \lim_{m\rightarrow \infty }W_{2}= \int _{1}^{\infty }xF(x) \bigl(\log F(x) \bigr) ^{j+1}\,dx. \end{aligned}$$
(4.11)

Finally, by combining (4.8)–(4.11), the result follows. □

5 Weighted cumulative residual inaccuracy for \(R_{n}\)

In this section, we propose WCRI between \(\bar{F}_{R_{n}}\) and \(\bar{F}\). We discuss some properties of WCRI such as the effect of a linear transformation, relationships with other reliability functions, bounds and stochastic ordering.

Definition 5.1

Let X be a non-negative absolutely continuous random variable with survival function \(\bar{F}\). Then, we define the WCRI between \(\bar{F}_{R_{n}}\) and \(\bar{F}\) as follows:

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F}) =&- \int _{0}^{+\infty }x\bar{F} _{R_{n}}(x)\log \bigl( \bar{F}(x) \bigr)\,dx \\ =&\sum_{j=0}^{n-1} \int _{0}^{+\infty }x \frac{[-\log \bar{F}(x)]^{j+1}}{j!}\bar{F}(x)\,dx \\ =&\sum_{j=0}^{n-1}(j+1)\mathbb{E}_{R_{j+2}} \biggl(\frac{X}{\lambda (X)} \biggr), \end{aligned}$$
(5.1)

where \(\lambda (x)=\frac{f(x)}{\bar{F}(x)}\) is the hazard rate function and \(R_{j+2}\) is a random variable with reliability \(\bar{F}_{R_{j+2}}\).

In the following example, we calculate \(\bar{I}^{w}(\bar{F}_{R_{n}}, \bar{F})\) for some specific lifetime distributions which are widely used in reliability theory and life testing.

Example 5.1

(a) If X is uniformly distributed on \([0,\theta ]\), then it is easy to see that \(\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})=\theta ^{2}\sum_{j=0} ^{n-1}\frac{3^{j+2}-2^{j+2}}{6^{j+2}}(j+1)\), for all integers \(n\geq 1\).

(b) If X has a Weibull distribution with survival function \(\bar{F}(x)=e^{-\alpha x^{\beta }}\), \(x\geq 0\), \(\alpha ,\beta >0\), then for all integers \(n\geq 1\), we obtain \(\bar{I}^{w}(\bar{F}_{R _{n}},\bar{F})=\frac{\alpha ^{-2/\beta }}{\beta }\sum_{j=0}^{n-1}\frac{ \varGamma (j+1+\frac{2}{\beta } )}{j!}\).

(c) If X has an exponential distribution with mean \(\frac{1}{ \lambda }\), then \(\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})=\frac{n(n+1)(n+2)}{3 \lambda ^{2}}\).

Proposition 5.2

Let X be an absolutely continuous non-negative random variable with survival function \(\bar{F}\). Then, we have

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})=\sum _{j=0}^{n-1}(j+1)[\nu _{j+2}- \nu _{j+1}], \end{aligned}$$

where \(\nu _{n}=\int _{0}^{+\infty }x\bar{F}_{R_{n}}(x)\,dx=\frac{1}{2}\mathbb{E}(R_{n}^{2})\).

Proof

From (1.10) and (5.1) we have

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F}) =&\sum _{j=0}^{n-1} \int _{0}^{+ \infty } x\frac{[-\log \bar{F}(x)]^{j+1}}{j!}\bar{F}(x)\,dx \\ =&\sum_{j=0}^{n-1}(j+1) \int _{0}^{+\infty }x \bigl[\bar{F}_{R_{j+2}}(x)- \bar{F}_{R_{j+1}}(x) \bigr]\,dx \\ =&\sum_{j=0}^{n-1}(j+1)[\nu _{j+2}-\nu _{j+1}]. \end{aligned}$$

 □

Proposition 5.3

Let \(a,b > 0\). For \(n =1,2,\dots \), it holds that

$$ \bar{I}^{w}(\bar{F}_{aR_{n}+b},\bar{F}_{aX+b})=a^{2} \bar{I}^{w}( \bar{F}_{R_{n}},\bar{F})+ab\bar{I}( \bar{F}_{R_{n}},\bar{F}). $$

Proof

From (5.1) and noting that \(\bar{F}_{aX+b}(x)= \bar{F}(\frac{x-b}{a})\), we have

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{aR_{n}+b},\bar{F}_{aX+b}) =& \int _{0}^{+\infty }\sum_{j=0}^{n-1}x \frac{[-\log \bar{F}_{aX+b}(x)]^{j+1}}{j!}\bar{F} _{aX+b}(x)\,dx \\ =& \int _{0}^{+\infty }\sum_{j=0}^{n-1}x \frac{[-\log \bar{F}_{X}( \frac{x-b}{a})]^{j+1}}{j!}\bar{F}_{X} \biggl(\frac{x-b}{a} \biggr)\,dx \\ =& \int _{0}^{+\infty }\sum_{j=0}^{n-1}a(ay+b) \frac{[-\log \bar{F} _{X}(y)]^{j+1}}{j!}\bar{F}_{X}(y)\,dy \\ =&a^{2}\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})+ab \bar{I}(\bar{F}_{R_{n}}, \bar{F}). \end{aligned}$$
(5.2)

 □

Kayid et al. [8] proposed the combination mean residual life (CMRL) function of X as the reciprocal hazard rate of the length-biased equilibrium distribution given by

$$ m^{c}(t)=\frac{\int _{t}^{+\infty }x\bar{F}(x)\,dx}{t\bar{F}(t)}, \quad t>0. $$

Now, the CMRL of \(R_{n}\) is given by

$$\begin{aligned} m^{c}_{n}(t)=\frac{\sum_{j=0}^{n-1}\frac{1}{j!}\int _{t}^{+\infty }x \bar{F}(x)[-\log \bar{F}(x)]^{j}\,dx}{t\sum_{j=0}^{n-1}\frac{1}{j!} \bar{F}(t)[-\log \bar{F}(t)]^{j}}. \end{aligned}$$
(5.3)

Proposition 5.4

Let X be an absolutely continuous non-negative random variable with survival function \(\bar{F}\). Then, we have

$$ \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})=\sum _{j=0}^{n-1}\frac{1}{j!} \int _{0}^{\infty }\lambda (z) \biggl[ \int _{z}^{\infty }x \bigl[-\log \bar{F}(x) \bigr]^{j} \bar{F}(x)\,dx \biggr]\,dz. $$

Proof

By (5.1) and the fact that \(-\log \bar{F}(x)=\int _{0}^{x}\lambda (z)\,dz\), we have

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F}) =&\sum _{j=0}^{n-1} \int _{0}^{+ \infty }x\frac{[-\log \bar{F}(x)]^{j+1}}{j!}\bar{F}(x)\,dx \\ =&\sum_{j=0}^{n-1} \int _{0}^{+\infty } \biggl[ \int _{0}^{x}\lambda (z)\,dz \biggr]x \frac{[- \log \bar{F}(x)]^{j}}{j!}\bar{F}(x)\,dx \\ =&\sum_{j=0}^{n-1}\frac{1}{j!} \int _{0}^{+\infty }\lambda (z) \biggl[ \int _{z}^{\infty }x \bigl[-\log \bar{F}(x) \bigr]^{j}\bar{F}(x)\,dx \biggr]\,dz. \end{aligned}$$

 □

Proposition 5.5

Let X be a non-negative random variable with survival function \(\bar{F}\). Then, we have

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})=\sum _{j=0}^{n-1}\mathbb{E}_{R _{{j+1}}} \bigl[X m_{n}^{c}(X) \bigr]. \end{aligned}$$

Proof

From (5.3) and using Proposition 5.4, we obtain

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F}) =& \int _{0}^{\infty }\lambda (z) \Biggl[\sum _{j=0}^{n-1}\frac{1}{j!} \biggl[ \int _{z}^{\infty }x \bigl[-\log \bar{F}(x) \bigr]^{j}\bar{F}(x)\,dx \biggr] \Biggr]\,dz \\ =& \int _{0}^{+\infty }\sum_{j=0}^{n-1}z m_{n}^{c}(z)f_{R_{{j+1}}}(z)\,dz \\ =&\sum_{j=0}^{n-1} \int _{0}^{+\infty }z m_{n}^{c}(z)f_{R_{{j+1}}}(z)\,dz \\ =&\sum_{j=0}^{n-1}\mathbb{E}_{R_{{j+1}}} \bigl[X m_{n}^{c}(X) \bigr]. \end{aligned}$$

This completes the proof. □

Proposition 5.6

Let X be an absolutely continuous non-negative random variable such that \(\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})<\infty \), for \(n\geq 1\). Then, we have

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})=\sum _{j=0}^{n-1}\frac{1}{j!} \mathbb{E} \bigl[h^{w}_{j+1}(X) \bigr], \end{aligned}$$
(5.4)

where

$$ h^{w}_{j+1}(x)= \int _{0}^{x}z \bigl[-\log \bar{F}(z) \bigr]^{j+1}\,dz, \quad x\geq 0. $$

Proof

From (5.1) and using Fubini’s theorem, we obtain

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F}) =&\sum _{j=0}^{n-1} \int _{0}^{ \infty }z\frac{[-\log \bar{F}(z)]^{j+1}}{j!}\bar{F}(z)\,dz \\ =&\sum_{j=0}^{n-1}\frac{1}{j!} \int _{0}^{\infty } \biggl[ \int _{z}^{ \infty }f(x)\,dx \biggr]z \bigl[-\log \bar{F}(z) \bigr]^{j+1}\,dz \\ =&\sum_{j=0}^{n-1}\frac{1}{j!} \int _{0}^{\infty }f(x) \biggl[ \int _{0} ^{x}z \bigl[-\log \bar{F}(z) \bigr]^{j+1}\,dz \biggr]\,dx=\sum_{j=0}^{n-1} \frac{1}{j!}\mathbb{E} \bigl[h^{w}_{j+1}(X) \bigr]. \end{aligned}$$

 □

Proposition 5.7

Let X be a non-negative and absolutely continuous random variable with cdf F. Then

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})\geq \sum _{j=0}^{n-1}\frac{1}{j!} \bigl[ \mathcal{E}^{w_{j+1}}(X) \bigr]^{j+1}, \end{aligned}$$

where \(\mathcal{E}^{w_{j+1}}(X)=-\int _{0}^{\infty }x^{ (\frac{1}{j+1} )} \bar{F}(x)\log {\bar{F}(x)}\,dx\).

Proof

From (5.1), we have

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F}) =&\sum _{j=0}^{n-1}\frac{1}{j!} \int _{0}^{+\infty }x \bar{F}(x) \bigl[-\log \bar{F}(x) \bigr]^{j+1}\,dx \\ \geq & \sum_{j=0}^{n-1}\frac{1}{j!} \int _{0}^{+\infty } \bigl[x^{ (\frac{1}{j+1} )}\bar{F}(x) \bigl[-\log \bar{F}(x) \bigr] \bigr]^{j+1}\,dx \\ \geq & \sum_{j=0}^{n-1}\frac{1}{j!} \biggl[- \int _{0}^{+\infty }x^{ (\frac{1}{j+1} )}\bar{F}(x)\log \bar{F}(x)\,dx \biggr]^{j+1}. \end{aligned}$$

This completes the proof. □

The next propositions give some lower and upper bounds for \(\bar{I} ^{w}(\bar{F}_{R_{n}},\bar{F})\).

Proposition 5.8

Let X be a non-negative random variable with absolutely continuous cumulative distribution function \(F(x)\). Then for \(n=1,2,\dots \), we have

$$ \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})\leq \sum _{j=0}^{n-1}\frac{1}{j!} \int _{0}^{\infty }x \bigl[-\log \bar{F}(x) \bigr]^{j+1}\,dx. $$

Proof

Since \(\bar{F}(x)\leq 1\), the bound follows immediately from (5.1). □

Proposition 5.9

Let X be a non-negative random variable with survival function \(\bar{F}(x)\). Then for \(n=1,2,\dots \), we have

$$ \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})\geq \sum _{j=0}^{n-1}\sum_{i=0} ^{j+1}\frac{(-1)^{i}(j+1)}{i!(j+1-i)!} \int _{0}^{\infty }x \bigl[\bar{F}(x) \bigr]^{i+1}\,dx. $$

Proof

Since \(-\log \bar{F}(x)\geq 1-\bar{F}(x)\), the proof follows from (5.1) by expanding \([1-\bar{F}(x)]^{j+1}\) with the binomial theorem. □

In the following, we obtain some results on \(\bar{I}^{w}(\bar{F}_{R_{n}}, \bar{F})\) and its connection with notions of reliability theory.

Proposition 5.10

If X is DFRA, then for \(n=1,2,\dots \), we have

$$ \bar{I}^{w}(\bar{F}_{R_{n+1}},\bar{F})- \bar{I}^{w}( \bar{F}_{R_{n}}, \bar{F})\geq \sum_{i=1}^{n+1} \mathbb{E}_{R_{i}} \biggl[\frac{X}{\lambda (X)} \biggr]. $$
(5.5)

Proof

Suppose that \(f_{R_{n}}\) is the pdf of the nth upper record value \(R_{n}\). Then, the ratio \(\frac{f_{R_{n+1}}(t)}{f _{R_{n}}(t)}=\frac{-\log \bar{F}(t)}{n}\) is increasing in t. Therefore, \(R_{n}\leq ^{lr}R_{n+1}\), and this implies that \(R_{n}\leq ^{st}R_{n+1}\), i.e., \(\bar{F}_{R_{n}}\leq \bar{F}_{R_{n+1}}\) (for more details, see Shaked and Shanthikumar ([13], Chap. 1)). Hence, if X is DFRA and \(\lambda (x)\) is its hazard rate, then \(\frac{x}{\lambda (x)}\) is an increasing function of x. So, from (5.1) we have

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n+1}},\bar{F}) =&\sum _{j=0}^{n}(j+1) \mathbb{E}_{R_{j+2}} \biggl[ \frac{X}{\lambda (X)} \biggr] \\ \geq &\sum_{j=0}^{n}(j+1) \mathbb{E}_{R_{j+1}} \biggl[\frac{X}{\lambda (X)} \biggr] \\ =&\sum_{i=-1}^{n-1}(i+2)\mathbb{E}_{R_{i+2}} \biggl[\frac{X}{\lambda (X)} \biggr] \\ =&\sum_{i=0}^{n-1}(i+2)\mathbb{E}_{R_{i+2}} \biggl[\frac{X}{\lambda (X)} \biggr]+ \mathbb{E}_{R_{1}} \biggl[ \frac{X}{\lambda (X)} \biggr] \\ =&\sum_{i=0}^{n-1}(i+1)\mathbb{E}_{R_{i+2}} \biggl[\frac{X}{\lambda (X)} \biggr]+ \sum_{i=0}^{n-1} \mathbb{E}_{R_{i+2}} \biggl[\frac{X}{\lambda (X)} \biggr]+ \mathbb{E}_{R_{1}} \biggl[\frac{X}{\lambda (X)} \biggr] \\ =&\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})+\sum _{i=1}^{n+1}\mathbb{E} _{R_{i}} \biggl[ \frac{X}{\lambda (X)} \biggr]. \end{aligned}$$
(5.6)

The proof is completed. □

Proposition 5.11

If X has the exponential distribution with mean \(\mu =\frac{1}{ \theta }\), then as the hazard rate is constant, we obtain that \(\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})=\frac{n(n+1)(n+2)}{3}\mu ^{2}\), which is an increasing function of n.

Proposition 5.12

Let X and Y be two non-negative random variables with reliability functions \(\bar{F}(x)\) and \(\bar{G}(x)\), respectively. If \(X\leq ^{hr}Y\) and X is DFRA, then

$$ \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})\leq \bar{I}^{w}( \bar{G}_{ \tilde{R}_{n}},\bar{G}), $$
(5.7)

for \(n=1,2,\dots \).

Proof

It is well known that \(X\leq ^{hr} Y\) implies \(X\leq ^{st} Y\) (see Shaked and Shanthikumar [13]). Hence, we have

$$ \bar{F}_{R_{j+2}}\leq \bar{G}_{\tilde{R}_{j+2}}, $$

where \(\bar{G}_{\tilde{R}_{j+2}}\) is the survival function of \(\tilde{R}_{j+2}\). That is, \(R_{j+2}\leq ^{st}\tilde{R}_{j+2}\) holds. This is equivalent (see Shaked and Shanthikumar [13], p. 4) to having

$$ \mathbb{E} \bigl(\phi (R_{j+2}) \bigr)\leq \mathbb{E} \bigl(\phi ( \tilde{R}_{j+2}) \bigr), $$

for all increasing functions ϕ such that these expectations exist. Thus, if we assume that X is DFRA and \(\lambda (x)\) is its failure rate, then \(\frac{x}{\lambda (x)}\) is increasing and we have

$$ \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})= \sum _{j=0}^{n-1}(j+1)\mathbb{E} _{R_{j+2}} \biggl( \frac{X}{\lambda _{F}(X)} \biggr)\leq \sum_{j=0}^{n-1}(j+1) \mathbb{E}_{\tilde{R}_{j+2}} \biggl(\frac{X}{\lambda _{F}(X)} \biggr). $$

On the other hand, \(X\leq ^{hr} Y\) means that the respective failure rate functions satisfy \(\lambda _{F}(x)\geq \lambda _{G}(x)\) for all x. Hence, we have

$$ \sum_{j=0}^{n-1}(j+1)\mathbb{E}_{\tilde{R}_{j+2}} \biggl(\frac{X}{ \lambda _{F}(X)} \biggr)\leq \sum_{j=0}^{n-1}(j+1) \mathbb{E}_{ \tilde{R}_{j+2}} \biggl(\frac{X}{\lambda _{G}(X)} \biggr)=\bar{I}^{w}( \bar{G} _{\tilde{R}_{n}},\bar{G}). $$

Therefore, using both expressions, we obtain \(\bar{I}^{w}(\bar{F}_{R _{n}},\bar{F})\leq \bar{I}^{w}(\bar{G}_{\tilde{R}_{n}},\bar{G})\). □

Proposition 5.13

Let X and Y be two non-negative random variables with reliability functions \(\bar{F}(x)\) and \(\bar{G}(x)\), respectively. If \(X\leq ^{icx}Y\), then

$$ \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})\leq \bar{I}^{w}( \bar{G}_{ \tilde{R}_{n}},\bar{G}). $$

Proof

Since \(h_{j+1}^{w}(\cdot )\) is an increasing convex function for \(j\geq 0\), it follows by Shaked and Shanthikumar [13] that \(X\leq ^{icx}Y\) implies \(h_{j+1}^{w}(X)\leq ^{icx}h_{j+1} ^{w}(Y)\). By recalling the definition of increasing convex order and Proposition 5.6, the proof is complete. □

Proposition 5.14

If X is IFRA (DFRA), then for \(n=1,2,\dots \), we have

$$ \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})\leq (\geq )\sum _{j=0}^{n-1} \frac{1}{j!}\mathbb{E} \bigl[X^{2} \bigl(-\log \bar{F}(X) \bigr)^{j} \bigr]. $$
(5.8)

Proof

From (5.1), we have

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})=\sum _{j=0}^{n-1} \int _{0}^{+ \infty }x\frac{[-\log \bar{F}(x)]^{j}}{j!} \bigl[-\log \bar{F}(x) \bigr]\bar{F}(x)\,dx. \end{aligned}$$
(5.9)

Now, since X is IFRA (DFRA), \(\frac{-\log \bar{F}(x)}{x}\) is increasing (decreasing) with respect to \(x>0\), which implies that

$$\begin{aligned} -\bar{F}(x)\log \bar{F}(x)\leq (\geq )x f(x), \quad x>0. \end{aligned}$$
(5.10)

Hence, the proof is completed by noting (5.9) and (5.10). □

Proposition 5.15

Let X and Y be two non-negative random variables with survival functions \(\bar{F}(x)\) and \(\bar{G}(x)\), respectively. If \(X\leq ^{hr}Y\), then for \(n=1,2,\dots \), it holds that

$$\begin{aligned} \frac{\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})}{\mathbb{E}(X)}\leq \frac{ \bar{I}^{w}(\bar{G}_{R_{n}},\bar{G})}{\mathbb{E}(Y)}. \end{aligned}$$

Proof

By noting that the function \(h_{j+1}^{w}(x)= \int _{0}^{x}z[-\log \bar{F}(z)]^{j+1}\,dz\) is an increasing convex function, under the assumption \(X\leq ^{hr}Y\), it follows, by Shaked and Shanthikumar [13], that

$$ \sum_{j=0}^{n-1}\frac{1}{j!} \biggl[ \frac{\mathbb{E} [h_{j+1}^{w}(X) ]}{ \mathbb{E}(X)} \biggr] \leq \sum_{j=0}^{n-1} \frac{1}{j!} \biggl[\frac{ \mathbb{E} [h_{j+1}^{w}(Y) ]}{\mathbb{E}(Y)} \biggr]. $$

Hence, the proof is completed by recalling (5.1). □

Proposition 5.16

(i) Let X be a continuous random variable with survival function \(\bar{F}(\cdot )\) that takes values in \([0, b]\), with finite b. Then,

$$ \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})\leq b\bar{I}( \bar{F}_{R_{n}}, \bar{F}). $$

(ii) Let X be a non-negative continuous random variable with survival function \(\bar{F}(\cdot )\) that takes values in \([a, \infty )\), with finite \(a>0\). Then,

$$ \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})\geq a\bar{I}( \bar{F}_{R_{n}}, \bar{F}). $$

Assume that \(X^{*}_{\theta }\) denotes a non-negative absolutely continuous random variable with the survival function \(\bar{H}_{ \theta }(x)=[\bar{F}(x)]^{\theta }\), \(x\geq 0\). This model is known as the proportional hazards model. We now obtain the weighted cumulative residual measure of inaccuracy between \(\bar{H}_{R_{n}}\) and \(\bar{H}\) as follows:

$$\begin{aligned} \bar{I}^{w}(\bar{H}_{R_{n}},\bar{H}) =&- \int _{0}^{+\infty }x\bar{H} _{R_{n}}(x)\log \bigl( \bar{H}(x) \bigr)\,dx \\ =&\sum_{j=0}^{n-1}\theta ^{j+1} \int _{0}^{+\infty }x\frac{[-\log \bar{F}(x)]^{j+1}}{j!} \bigl[\bar{F}(x) \bigr]^{\theta }\,dx. \end{aligned}$$
(5.11)

Proposition 5.17

If \(\theta \geq (\leq )1\), then for any \(n\geq 1\), we have

$$ \bar{I}^{w}(\bar{H}_{R_{n}},\bar{H})\leq (\geq ) \sum _{j=0}^{n-1}(j+1) \theta ^{j+1}{ \mathcal{E}}^{w}_{j+1}(X), $$

where \({\mathcal{E}}^{w}_{j+1}(X)\) is the weighted generalized cumulative residual entropy of X, defined by Kayal [6] as

$$ {\mathcal{E}}^{w}_{j+1}(X)= \int _{0}^{+\infty }x\frac{\bar{F}(x)[- \log \bar{F}(x)]^{j+1}}{(j+1)!}\,dx. $$

Proof

Suppose that \(\theta \geq (\leq )1\), then it is clear that \([\bar{F}(x)]^{\theta }\leq (\geq )\bar{F}(x)\), and hence (5.11) yields

$$ \bar{I}^{w}(\bar{H}_{R_{n}},\bar{H})\leq (\geq ) \sum _{j=0}^{n-1}(j+1) \theta ^{j+1}{ \mathcal{E}}^{w}_{j+1}(X). $$

 □

Proposition 5.18

Let X be a non-negative random variable with survival function \(\bar{F}(\cdot )\); then an analytical expression for \(\bar{I}^{w}(\bar{F} _{R_{n}},\bar{F})\) is given by

$$ \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})=\sum _{j=0}^{n-1} \int _{0}^{\infty }x\frac{ [-\log \bar{F}(x) ]^{j+1}}{j!}\bar{F}(x)\,dx =\sum _{j=0}^{n-1}(j+1){\mathcal{E}}^{w}_{j+1}(X). $$
(5.12)

Theorem 5.19

\(\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})=0\) if and only if X is degenerate.

Proof

Suppose X is degenerate at a point a. Then \(\bar{F}(x)\) takes only the values 0 and 1, so every integrand in (5.12) vanishes and \(\bar{I}^{w}(\bar{F} _{R_{n}},\bar{F})=0\).

Now, suppose that \(\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})=0\), i.e., for each \(j=0,1,\dots ,n-1\),

$$\begin{aligned} \mathcal{E}^{w}_{j+1}(X)= \int _{0}^{\infty }x\frac{\bar{F}(x) (-\log \bar{F}(x) )^{j+1}}{(j+1)!}\,dx=0. \end{aligned}$$
(5.13)

Then, by noting that the integrand of (5.13) is non-negative, we conclude that

$$ x\bar{F}(x) \bigl(-\log \bar{F}(x) \bigr)^{j+1}=0, $$

for almost all \(x\in \mathbb{R}^{+}\). Thus, \(\bar{F}(x)=0 \text{ or } 1\) for almost all \(x\in \mathbb{R}^{+}\), that is, X is degenerate. □

Remark 5.1

Let X be a non-negative absolutely continuous random variable with survival function \(\bar{F}(\cdot )\). Then in analogy with the measure defined in (2.13), the weighted generalized cumulative residual inaccuracy (WGCRI) of order m between \(\bar{F} _{R_{n}}\) and \(\bar{F}\) is given by

$$\begin{aligned} \bar{I}_{m}^{w}(\bar{F}_{R_{n}},\bar{F}) =& \frac{1}{m!} \int _{0}^{ \infty }x\bar{F}_{R_{n}}(x) \bigl[-\log {\bar{F}(x)} \bigr]^{m}\,dx \\ =&\sum_{j=0}^{n-1}\binom{m+j}{m}{ \mathcal{E}}^{w}_{m+j}(X). \end{aligned}$$

In the remainder of this section, we study a dynamic version of \(\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})\). Let X be the lifetime of a system under the condition that the system has survived up to age t. Analogously, we can also consider a dynamic version of \(\bar{I} ^{w}(\bar{F}_{R_{n}},\bar{F})\) as

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F};t) =&- \int _{t}^{+\infty }x\frac{ \bar{F}_{R_{n}}(x)}{\bar{F}_{R_{n}}(t)}\log \biggl( \frac{\bar{F}(x)}{ \bar{F}(t)} \biggr)\,dx \\ =&t m^{c}_{n}(t)\log \bar{F}(t)- \int _{t}^{+\infty }x\frac{\bar{F}_{R _{n}}(x)}{\bar{F}_{R_{n}}(t)}\log \bigl( \bar{F}(x) \bigr)\,dx \\ =&t m^{c}_{n}(t)\log \bar{F}(t)+\frac{1}{\bar{F}_{R_{n}}(t)}\sum _{j=0} ^{n-1} \int _{t}^{+\infty }x\frac{[-\log \bar{F}(x)]^{j+1}}{j!}\bar{F}(x)\,dx. \end{aligned}$$
(5.14)

Here, by (5.3), \(t m^{c}_{n}(t)=\int _{t}^{+\infty }x\bar{F}_{R_{n}}(x)\,dx/\bar{F}_{R_{n}}(t)\). Note that \(\lim_{t\rightarrow 0}\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F};t)= \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})\). Since \(\log \bar{F}(t)\leq 0\) for \(t\geq 0\), we have

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F};t) \leq & \frac{1}{\bar{F}_{R_{n}}(t)}\sum_{j=0}^{n-1} \int _{t}^{+\infty }x\frac{[- \log \bar{F}(x)]^{j+1}}{j!}\bar{F}(x)\,dx \\ \leq &\frac{1}{\bar{F}_{R_{n}}(t)}\sum_{j=0}^{n-1} \int _{0}^{+\infty }x\frac{[-\log \bar{F}(x)]^{j+1}}{j!}\bar{F}(x)\,dx= \frac{\bar{I}^{w}( \bar{F}_{R_{n}},\bar{F})}{\bar{F}_{R_{n}}(t)}. \end{aligned}$$
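A minimal numerical sketch of (5.14) for the exponential distribution (Python with NumPy; the values of \(\lambda \) and n and the tail truncation are illustrative assumptions) shows the dynamic measure approaching \(\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})\) as \(t\rightarrow 0\), in agreement with Example 5.1(c).

```python
import numpy as np
from math import factorial

lam, n = 1.5, 3                              # illustrative choices (assumption)

def Fbar_Rn(x):
    # (1.10) for the exponential law: -log(Fbar(x)) = lam * x
    return sum((lam * x) ** j / factorial(j) for j in range(n)) * np.exp(-lam * x)

def dynamic_wcri(t, width=60.0, num=400_000):
    x = np.linspace(t, t + width / lam, num)  # truncate the upper tail
    dx = x[1] - x[0]
    integrand = x * Fbar_Rn(x) / Fbar_Rn(t) * lam * (x - t)  # -log(Fbar(x)/Fbar(t))
    return float(np.sum(integrand) * dx)

for t in (2.0, 1.0, 0.1, 0.0):
    print(t, dynamic_wcri(t))
print("limit t -> 0:", n * (n + 1) * (n + 2) / (3 * lam**2))
```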

Theorem 5.20

Let X be a non-negative continuous random variable with distribution function \(F(\cdot )\). Let the weighted dynamic cumulative inaccuracy of the nth record value satisfy \(\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F};t)< \infty \), \(t\geq 0\). Then \(\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F};t)\) characterizes the distribution function.

Proof

From (5.14) we have

$$\begin{aligned} \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F};t) =t m_{n}^{c}(t)\log \bar{F}(t)+\frac{1}{ \bar{F}_{R_{n}}(t)}\sum _{j=0}^{n-1} \int _{t}^{+\infty }x\frac{[-\log \bar{F}(x)]^{j+1}}{j!}\bar{F}(x)\,dx. \end{aligned}$$
(5.15)

Differentiating both sides of (5.15) with respect to t, we obtain

$$\begin{aligned} \frac{\partial }{\partial t} \bigl[\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F};t) \bigr] =&- \lambda _{F}(t)\,t m_{n}^{c}(t)+\lambda _{F_{R_{n}}}(t)\bar{I}^{w}(\bar{F} _{R_{n}},\bar{F};t) \\ =&-\lambda _{F}(t)\,t m_{n}^{c}(t)+c(t)\lambda _{F}(t)\bar{I}^{w}(\bar{F} _{R_{n}},\bar{F};t) \\ =&\lambda _{F}(t) \bigl[c(t)\bar{I}^{w}( \bar{F}_{R_{n}},\bar{F};t)-t m _{n}^{c}(t) \bigr], \end{aligned}$$

where \(c(t)=\lambda _{F_{R_{n}}}(t)/\lambda _{F}(t)\).

Writing \(\tilde{z}(t)=\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F};t)\) and solving the above identity for \(t m_{n}^{c}(t)=c(t)\tilde{z}(t)-\tilde{z}'(t)/\lambda _{F}(t)\), differentiating once more with respect to t and using \(\frac{d}{dt}[t m_{n}^{c}(t)]=-t+c(t)\lambda _{F}(t)\,t m_{n}^{c}(t)\), we get

$$\begin{aligned} \lambda '_{F}(t)= \frac{\tilde{z}''(t)\lambda _{F}(t)+(\lambda _{F}(t))^{2} [c(t) \lambda _{F}(t)\,t m_{n}^{c}(t)-c'(t)\tilde{z}(t)-c(t)\tilde{z}'(t)-t ]}{\tilde{z}'(t)}. \end{aligned}$$
(5.16)

Suppose that there are two functions F and \(F^{*}\) such that

$$ \bar{I}^{w}(\bar{F}_{R_{n}},\bar{F};t)=\bar{I}^{w} \bigl(\bar{F}^{*}_{R_{n}}, \bar{F}^{*};t \bigr)= \tilde{z}(t). $$

Then for all t, from (5.16) we get

$$\begin{aligned} \lambda '_{F}(t)=\varphi \bigl(t,\lambda _{F}(t) \bigr), \qquad \lambda '_{F^{*}}(t)=\varphi \bigl(t,\lambda _{F^{*}}(t) \bigr), \end{aligned}$$

where

$$ \varphi (t,y)=\frac{\tilde{z}''(t)y+y^{2} [c(t) (y\tilde{s}(t)-\tilde{z}'(t) )-c'(t)\tilde{z}(t)-t ]}{\tilde{z}'(t)}, $$

and \(\tilde{s}(t)=t m_{n}^{c}(t)\). By using Theorem 3.2 and Lemma 3.3 of Gupta and Kirmani [5], we have \(\lambda _{F}(t)=\lambda _{F^{*}}(t)\) for all t. Since the hazard rate function characterizes the distribution function uniquely, the proof is complete. □

6 Conclusions

In this paper, we discussed the concept of a weighted past inaccuracy measure between \(F_{L_{n}}\) and F. We proposed a dynamic version of WCPI and studied its characterization results. We have also proved that \(I^{w}(F_{L_{n}},F;t)\) uniquely determines the parent distribution F. Moreover, we studied some new basic properties of \(I^{w}(F_{L _{n}},F)\) such as the effect of a linear transformation, relationships with other reliability functions, bounds and stochastic order properties. We estimated the WCPI by means of the empirical cumulative inaccuracy of lower record values. Finally, we proposed the WCRI measure between the survival function \(\bar{F}_{R_{n}}\) and \(\bar{F}\). We also studied some properties of \(\bar{I}^{w}(\bar{F}_{R_{n}},\bar{F})\) such as the connections with other reliability functions, several useful bounds and stochastic orderings. These concepts can be applied in measuring the weighted inaccuracy contained in the associated past (residual) lifetime.