1 Introduction

For a non-negative and absolutely continuous random variable X, the probability density function f, the cumulative distribution function F, and the survival function \(\overline{F}\) are three equivalent ways to describe the probability distribution. Further tools for such a description are the hazard rate and the reversed hazard rate of X, denoted by r(x) and q(x), respectively; see e.g. Barlow and Proschan (1996) and Chandra and Roy (2001) for details. These functions have many key applications in different applied fields. In a sense, the reversed hazard rate function q(x) is the dual of the hazard rate function, and it bears some interesting features useful in reliability analysis; see Block et al. (1998) and Finkelstein (2002). In particular, it is useful in the analysis of inactivity times.

For x such that \(F(x)>0\), the reversed hazard rate of X at x is defined by

$$\begin{aligned} q(x)&:=\lim _{\Delta x\rightarrow 0^{+}} \frac{\mathbb {P}(x-\Delta x< X\le x|X\le x)}{\Delta x} \\&=\frac{1}{F(x)}\lim _{\Delta x\rightarrow 0^{+}}\frac{\mathbb {P}(x-\Delta x < X\le x)}{\Delta x}=\frac{f(x)}{F(x)}. \end{aligned}$$

When X denotes the lifetime of a unit U, the function q(x) can be interpreted as the rate of instantaneous failure of U occurring immediately before the time-point x, given that the failure of U has occurred by time x.
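
As an elementary numerical illustration of the limiting definition above, the following sketch compares q(x) = f(x)/F(x) with a finite-increment approximation of the limit; the exponential distribution and its rate are illustrative choices of ours, not taken from the text.

```python
import math

# Reversed hazard rate q(x) = f(x)/F(x) for an exponential distribution
# with rate lam (an illustrative choice).
lam = 2.0

def F(x):            # cumulative distribution function
    return 1.0 - math.exp(-lam * x)

def f(x):            # probability density function
    return lam * math.exp(-lam * x)

def q(x):            # reversed hazard rate
    return f(x) / F(x)

# The ratio P(x - dx < X <= x | X <= x) / dx approaches q(x) as dx -> 0+.
x, dx = 1.5, 1e-7
q_limit = (F(x) - F(x - dx)) / (dx * F(x))
assert abs(q(x) - q_limit) < 1e-4
```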

The integrated past intensity function Q is defined by

$$\begin{aligned} Q(x):=\int _{x}^{+\infty }q(s)\mathrm {d}s. \end{aligned}$$

In terms of q and Q, the other characteristics of the distribution of X are given by

$$\begin{aligned} F(t)=\mathrm {e}^{-Q(t)},\;f(t)=q(t)\mathrm {e}^{-Q(t)}, \end{aligned}$$
$$\begin{aligned} r(t)=\frac{q(t)\mathrm {e}^{-Q(t)}}{1-\mathrm {e}^{-Q(t)}}. \end{aligned}$$
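
These relations can be verified numerically. In the following sketch, Q is obtained by numerically integrating q for an exponential toy example (rate and evaluation point are illustrative), and F, f and r are then recovered through the displayed formulas.

```python
import math

# Check that the pair (q, Q) recovers F, f and r via F = e^{-Q},
# f = q e^{-Q}, r = q e^{-Q} / (1 - e^{-Q}), for an exponential(lam)
# example with illustrative parameters.
lam = 2.0
F = lambda x: 1.0 - math.exp(-lam * x)
f = lambda x: lam * math.exp(-lam * x)
q = lambda x: f(x) / F(x)

def Q(t, upper=40.0, n=100000):
    # trapezoidal rule on [t, upper]; q decays like e^{-lam*s}, so the
    # truncated tail beyond `upper` is negligible
    h = (upper - t) / n
    s = 0.5 * (q(t) + q(upper))
    for i in range(1, n):
        s += q(t + i * h)
    return s * h

t = 0.9
Qt = Q(t)
assert abs(math.exp(-Qt) - F(t)) < 1e-4            # F = e^{-Q}
assert abs(q(t) * math.exp(-Qt) - f(t)) < 1e-4     # f = q e^{-Q}
r_t = q(t) * math.exp(-Qt) / (1.0 - math.exp(-Qt))
assert abs(r_t - f(t) / (1.0 - F(t))) < 1e-3       # hazard rate formula
```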

So far we have considered the case when X is a scalar random variable. Also, when we deal with a family of independent scalar random variables with absolutely continuous distributions, the family of the corresponding hazard rate functions, or of the reversed hazard rate functions, suffices to describe their joint distribution.

As is well known, in the case of non-negative random variables which are not independent but still have an absolutely continuous joint distribution, such a distribution can rather be described in terms of the so-called multivariate conditional hazard rate functions (m.c.h.r.); see, for instance, Shaked and Shanthikumar (1990, 2015) and Spizzichino (2018).

Let \(X_{1},\dots ,X_{n}\) be non-negative random variables with an absolutely continuous joint distribution. For a fixed index \(j\in [n]=\{1,\dots ,n\}\) and \(I=\{i_{1},\dots ,i_{k}\}\subset [n]\) with \(j\notin I\), and an ordered sequence \(0\le t_{1}\le \dots \le t_{k}\), the m.c.h.r. function \(\lambda _{j}(t|I;t_{1},\dots ,t_{k})\) is defined as follows:

$$\begin{aligned} \lambda _{j}(t|I;t_{1},\dots ,t_{k}):=\lim _{\Delta t\rightarrow 0^{+}}\frac{1}{\Delta t}\mathbb {P}\left( X_{j}\le t+\Delta t\left| X_{i_{1}}=t_{1},\dots ,X_{i_{k}}=t_{k},\min _{h\notin I}X_{h}>t\right. \right) . \end{aligned}$$

Furthermore, we use the notation

$$\begin{aligned} \lambda _{j}(t|\emptyset )&:=\lim _{\Delta t\rightarrow 0^{+}}\frac{1}{\Delta t}\mathbb {P}\left( X_{j}\le t+\Delta t\left| \min _{h\in [n]}X_{h}>t\right. \right) \\&=\lim _{\Delta t\rightarrow 0^{+}}\frac{1}{\Delta t}\mathbb {P}\left( X_{j}\le t+\Delta t\left| X_{1:n}>t\right. \right) . \end{aligned}$$

The main purpose of this paper is to introduce a concept of multivariate conditional reversed hazard rate function. In this respect, a family of functions \(\tau _{j}(t|I;t_{1},\dots ,t_{k})\) and \(\tau _{j}(t|\emptyset )\) will be suitably defined in the next section. For this purpose we will follow, on one side, the analogy with the definition of the function q in the scalar case and, on the other side, the analogy with the above definition of the m.c.h.r. functions for the multivariate case.

Several results based on the family of the \(\tau\)’s can be obtained by following the analogy with results presented in the literature concerning the family of the \(\lambda\)’s. In particular, the recent paper De Santis et al. (2020) has pointed out a natural role of the functions \(\lambda _{1}(t|\emptyset ),\lambda _{2}(t|\emptyset ),\dots ,\lambda _{n}(t|\emptyset )\) in the study of the minimum \(X_{1:n}\) among dependent variables. In this paper we show an analogous role of \(\tau _{j}(t|\emptyset )\), \(j=1,\dots ,n\), in the study of the maximum \(X_{n:n}\).

A further purpose of the paper is to introduce a concept of reversed time-homogeneous load-sharing models. The latter can be introduced in a natural way starting from the so-called time-homogeneous load-sharing models, a particular class of dependence models for the lifetimes \(X_{1},\dots ,X_{n}\) (see Spizzichino (2018)).

More precisely, the structure of the paper is as follows. After presenting the definition of the multivariate conditional reversed hazard rate functions, in Section 2 we study some related properties and show, in particular, how the functions \(\tau _{j}(t|\emptyset )\) emerge in the study of the maximum order statistic \(X_{n:n}\) (see Proposition 2). We also show how the two families \(\{\lambda _{j}(t|I,\mathbf {t})\}\) and \(\{\tau _{j}(t|I,\mathbf {t})\}\) can be connected to each other. As natural generalizations of the joint distributions of several independent variables marginally distributed according to inverse exponential distributions, in Section 3 we introduce and study reversed time-homogeneous load-sharing models. In this context, a relation will be shown between the class formed by those dependence models and the class of the (ordinary) time-homogeneous load-sharing models. Some properties related to reliability issues and distributions of inactivity times will be analyzed in the second part of the section. The paper concludes with a section containing a brief discussion and a few hints concerning future work.

2 Multivariate Conditional Reversed Hazard Rates and Related Properties

Let us consider a vector of n non-negative random variables \(X_{1},\dots ,X_{n}\) defined on the same probability space \((\Omega ,\mathcal {F},\mathbb {P})\). We assume that the joint probability distribution of \(X_{1},\dots ,X_{n}\) is absolutely continuous, so that ties among \(X_{1},\dots ,X_{n}\) have probability zero (see formula (1) below). In the following definition, for any fixed positive number t, the set I must be interpreted as the set of indices associated with the variables which take values greater than t. Correspondingly, \(\tilde{I}\) is the set of indices of the variables which take values less than or equal to t, so that \(\tilde{I}\) is the complement of I in [n].

Definition 1

For a fixed index \(j\in [n]\) and distinct indices \(i_{1},\dots ,i_{k}\in [n]\), let us set \(I\equiv \{i_{1},\dots ,i_{k}\}\subset [n]\). For \(j\notin I\) and an ordered sequence \(0\le t\le t_{k}\le \dots \le t_{1}\), the multivariate conditional reversed hazard rate (m.c.r.h.r.) function \(\tau _{j}(t|I;t_{1},\dots ,t_{k})\) is defined as follows:

$$\begin{aligned} \tau _{j}(t|I;t_{1},\dots ,t_{k})=\lim _{\Delta t\rightarrow 0^{+}}\frac{1}{\Delta t}\mathbb {P}\left( X_{j}\ge t-\Delta t\left| X_{i_{1}}=t_{1} ,\dots ,X_{i_{k}}=t_{k},\max _{h\in \tilde{I}}X_{h}\le t\right. \right) . \end{aligned}$$

Furthermore, we use the notation

$$\begin{aligned} \tau _{j}(t|\emptyset )&=\lim _{\Delta t\rightarrow 0^{+}}\frac{1}{\Delta t}\mathbb {P}\left( X_{j}\ge t-\Delta t\left| \max _{h\in [n]} X_{h}\le t\right. \right) \\&=\lim _{\Delta t\rightarrow 0^{+}}\frac{1}{\Delta t}\mathbb {P}\left( X_{j}\ge t-\Delta t\left| X_{n:n}\le t\right. \right) . \end{aligned}$$

Occasionally, when necessary to distinguish between different vectors of lifetimes, we shall use a notation of the form \(\tau _{j}^{\left( \mathbf {X}\right) }(t|\emptyset ), \ \lambda _{j}^{\left( \mathbf {X}\right) }(t|\emptyset )\) in place of \(\tau _{j}(t|\emptyset ), \ \lambda _{j}(t|\emptyset )\) and \(\tau _{j}^{\left( \mathbf {X}\right) }(t|I;t_{1},\dots ,t_{k}), \ \lambda _{j}^{\left( \mathbf {X}\right) }(t|I;t_{1},\dots ,t_{k})\) in place of \(\tau _{j}(t|I;t_{1},\dots ,t_{k})\), \(\lambda _{j}(t|I;t_{1},\dots ,t_{k})\).

Remark 1

In the case when \(X_{1},\dots ,X_{n}\) are independent, \(\tau _{j}(t|I)\), for \(j=1,\dots ,n\), does not depend on I. Actually, in this case, \(\tau _{j}(t|I)\) coincides with the classical, univariate, reversed hazard rate function \(q_{j}(t)\) of \(X_{j}\).

Remark 2

We recall that one can give formulas that express the joint density function in terms of m.c.h.r. functions and vice versa. An analogous equivalence can also be established for m.c.r.h.r. functions.

The information contained in the family of the m.c.r.h.r. functions allows us to analyze different types of properties of the order statistics \(X_{1:n},\dots ,X_{n:n}\) of the random variables \(X_{1},X_{2},\dots ,X_{n}\). In passing, we observe that the assumption of absolute continuity, in view of the implied no-tie condition, guarantees the property

$$\begin{aligned} P\left( X_{1:n}< \ldots < X_{n:n} \right) =1. \end{aligned}$$
(1)

In particular, the knowledge of the m.c.r.h.r. functions is relevant when studying the behaviour of the maximum order statistic \(X_{n:n}\) (see Proposition 2 below). Such a result can have different types of applications in view of the special role of the statistic \(X_{n:n}\) in many applied fields.

We denote by \(k_{(n)},K_{(n)},F_{(n)},f_{(n)}\), respectively, the past intensity function (i.e., the reversed hazard rate function), the integrated past intensity function, the distribution function, and the probability density function of \(X_{n:n}\). Namely,

$$\begin{aligned}&k_{(n)}(t)=\lim _{\Delta t\rightarrow 0^{+}}\frac{1}{\Delta t}\mathbb {P} \left( X_{n:n}\ge t-\Delta t\left| X_{n:n}\le t\right. \right) \nonumber \\&K_{(n)}(t)=\int _{t}^{+\infty }k_{(n)}(s)\mathrm {d}s \nonumber \\&F_{(n)}(t)=\mathrm {e}^{-K_{(n)}(t)} \nonumber \\&f_{(n)}(t)=k_{(n)}(t)\mathrm {e}^{-K_{(n)}(t)} \end{aligned}$$
(2)

For illustrative purposes, we can consider the case of i.i.d. variables with \(X_{i}\sim Exp(\lambda )\). In this case, the previous relations reduce to

$$\begin{aligned}&k_{(n)}(t)=\frac{\lambda n\mathrm {e}^{-\lambda t}}{1-\mathrm {e}^{-\lambda t}},\ \ \ \ \ \ \ \ K_{(n)}(t)=-n\log (1-\mathrm {e}^{-\lambda t}), \\&F_{(n)}(t)=(1-\mathrm {e}^{-\lambda t})^{n},\ \ \ \ \ f_{(n)}(t)=\lambda n\mathrm {e}^{-\lambda t}(1-\mathrm {e}^{-\lambda t})^{n-1}. \end{aligned}$$
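
These closed forms can be checked both algebraically and by simulation. The sketch below, with illustrative values of λ, n and t chosen by us, verifies the identities \(F_{(n)}=\mathrm e^{-K_{(n)}}\) and \(f_{(n)}=k_{(n)}\mathrm e^{-K_{(n)}}\), and compares \(F_{(n)}\) with a Monte Carlo estimate.

```python
import math, random

# Sanity check of the i.i.d. exponential formulas for the maximum.
# lam, n and t are illustrative.
random.seed(0)
lam, n, t = 1.5, 4, 1.2

Fn = (1.0 - math.exp(-lam * t)) ** n
Kn = -n * math.log(1.0 - math.exp(-lam * t))
kn = lam * n * math.exp(-lam * t) / (1.0 - math.exp(-lam * t))
fn = lam * n * math.exp(-lam * t) * (1.0 - math.exp(-lam * t)) ** (n - 1)

assert abs(Fn - math.exp(-Kn)) < 1e-12          # F_(n) = exp(-K_(n))
assert abs(fn - kn * math.exp(-Kn)) < 1e-12     # f_(n) = k_(n) exp(-K_(n))

# Monte Carlo estimate of P(X_{n:n} <= t)
N = 200000
hits = sum(max(random.expovariate(lam) for _ in range(n)) <= t
           for _ in range(N))
assert abs(hits / N - Fn) < 0.01
```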

In view of the assumption of absolute continuity and following the analogy with the definition of multivariate conditional hazard rate functions, we can define the following limits for \(j=1,\dots ,n\)

$$\begin{aligned} \delta _{j}(t)=\lim _{\Delta t\rightarrow 0^{+}}\mathbb {P}(X_{j}=X_{n:n} |X_{n:n}\in (t-\Delta t,t])=\mathbb {P}(X_{j}=X_{n:n}|X_{n:n}=t) \end{aligned}$$
(3)

and we notice that

$$\begin{aligned} \sum _{j=1}^{n}\delta _{j}(t)=1. \end{aligned}$$
(4)

Let us prove that

$$\begin{aligned} \tau _{j}(t|\emptyset )=k_{(n)}(t)\delta _{j}(t). \end{aligned}$$
(5)

In fact,

$$\begin{aligned} k_{(n)}(t)\delta _{j}(t)&=\lim _{\Delta t\rightarrow 0^{+}}\frac{\mathbb {P}(X_{j}=X_{n:n},X_{n:n}\in (t-\Delta t,t])}{\mathbb {P}(X_{n:n}\in (t-\Delta t,t])}\frac{\mathbb {P}(X_{n:n}\in (t-\Delta t,t])}{\Delta t\ \mathbb {P}(X_{n:n}\le t)}\\&=\lim _{\Delta t\rightarrow 0^{+}}\frac{\mathbb {P}(X_{j}>t-\Delta t,X_{n:n}\le t)}{\Delta t\ \mathbb {P}(X_{n:n}\le t)}=\tau _{j}(t|\emptyset ). \end{aligned}$$

By taking into account (4) and (5), and by performing an integration from t to \(+\infty\) we immediately get

Proposition 1

For any \(t\ge 0\) we have

$$\begin{aligned} k_{(n)}(t)=\sum _{j=1}^{n}\tau _{j}(t|\emptyset ),\ \ \ K_{(n)}(t)=\int _{t}^{+\infty }\sum _{j=1}^{n}\tau _{j}(s|\emptyset )\mathrm {d}s. \end{aligned}$$
(6)

The role of the functions \(\tau _{1}(t|\emptyset ),\dots ,\tau _{n}(t|\emptyset )\) in the study of the properties of the statistic \(X_{n:n}\) is described by the following result.

Proposition 2

For any \(t\ge 0\) and \(j=1,\dots ,n\), we have

$$\begin{aligned} \mathbb {P}(X_{j}=X_{n:n},X_{n:n}\le t)=\int _{0}^{t}\tau _{j}(s|\emptyset )\mathrm {e}^{-K_{(n)}(s)}\mathrm {d}s. \end{aligned}$$

Proof

Taking into account (2), (3) and (5) we obtain

$$\begin{aligned} \mathbb {P}(X_{j}=X_{n:n},X_{n:n}\le t)&=\int _{0}^{t}f_{(n)}(s)\mathbb {P}(X_{j}=X_{n:n}|X_{n:n}=s)\mathrm {d}s\\&=\int _{0}^{t}k_{(n)}(s)\mathrm {e}^{-K_{(n)}(s)}\delta _{j}(s)\mathrm {d}s=\int _{0}^{t}\tau _{j}(s|\emptyset )\mathrm {e}^{-K_{(n)}(s)}\mathrm {d}s, \end{aligned}$$

which completes the proof.

As an immediate consequence of the previous proposition we note that, for \(j=1,\dots ,n\), the probability \(\mathbb {P}(X_{j}=X_{n:n},X_{n:n}\le t)\) only depends on the functions \(\tau _{1}(t|\emptyset ),\dots ,\tau _{n}(t|\emptyset )\). Hence, we get the following result.
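
Proposition 2 can also be illustrated numerically in the independent case, where, by Remark 1, \(\tau _{j}(s|\emptyset )\) reduces to \(q_{j}(s)=f_{j}(s)/F_{j}(s)\) and \(\mathrm e^{-K_{(n)}(s)}\) is the distribution function of the maximum. In the sketch below, the exponential rates are illustrative choices of ours.

```python
import math, random

# Compare the integral of Proposition 2 (independent case) with a
# Monte Carlo estimate of P(X_j = X_{3:3}, X_{3:3} <= t).
random.seed(1)
lam = [1.0, 2.0, 3.0]          # illustrative rates
t, j = 0.8, 1                  # event (X_j = X_{3:3}, X_{3:3} <= t)

def F(i, x): return 1.0 - math.exp(-lam[i] * x)
def f(i, x): return lam[i] * math.exp(-lam[i] * x)

def integrand(s):
    # tau_j(s|∅) e^{-K_(3)(s)} = q_j(s) prod_i F_i(s) = f_j(s) prod_{i != j} F_i(s)
    val = f(j, s)
    for i in range(3):
        if i != j:
            val *= F(i, s)
    return val

# midpoint rule on (0, t]
m = 20000
h = t / m
integral = sum(integrand(h * (i + 0.5)) for i in range(m)) * h

# Monte Carlo estimate of the same probability
N = 200000
hits = 0
for _ in range(N):
    x = [random.expovariate(l) for l in lam]
    if max(x) == x[j] and x[j] <= t:
        hits += 1
assert abs(hits / N - integral) < 0.01
```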

Proposition 3

Take n independent random variables \(Z_{1},\dots ,Z_{n}\) with reversed hazard rate functions \(q_{j}(t)\) and let \((X_{1},\dots ,X_{n})\) be a vector with m.c.r.h.r. functions \(\tau _{j}^{\left( \mathbf {X}\right) }(t|\emptyset )\) such that

$$\begin{aligned} \tau _{j}^{\left( \mathbf {X}\right) }(t|\emptyset )=q_{j}(t),\;j=1,\dots ,n. \end{aligned}$$
(7)

Then, for any \(j\in [n]\) and for any \(t\ge 0\)

$$\begin{aligned} \mathbb {P}(X_{j}=X_{n:n},X_{n:n}\le t)=\mathbb {P}(Z_{j}=Z_{n:n},Z_{n:n}\le t). \end{aligned}$$
(8)

Proof

In view of independence, the m.c.r.h.r. functions \(\tau _{j}^{\left( \mathbf {Z}\right) }(t|\emptyset )\) for the vector \(Z_{1},\dots ,Z_{n}\) respectively coincide with the univariate reversed hazard rate functions \(q_{j}(t)\) (see Remark 1). The claim then follows immediately by applying Proposition 2 to both the vectors \((X_{1},\dots ,X_{n})\) and \((Z_{1},\dots ,Z_{n})\).

We also notice that the multivariate conditional reversed hazard rate functions \(\tau _{j}^{\left( \mathbf {X}\right) }\) of the variables \(X_{1},\dots ,X_{n}\) are closely related to the m.c.h.r. functions \(\lambda _{j}^{\left( \mathbf {Y}\right) }\) of the variables \(Y_{1}:=1/X_{1},\dots ,Y_{n}:=1/X_{n}\). More precisely, we have the following proposition.

Proposition 4

Let \(X_1,\dots ,X_n\) be absolutely continuous random variables and let \(Y_i=1/X_i\), for \(i=1,\dots ,n\). Then, we have

$$\begin{aligned} \tau _{j}^{\left( \mathbf {X}\right) }(t|\emptyset )&=\frac{1}{t^{2}}\lambda _{j}^{\left( \mathbf {Y}\right) }\left( \left. \frac{1}{t}\right| \emptyset \right) \end{aligned}$$
(9)

and, for \(\emptyset \ne I\subset [n]\), \(0\le t\le t_{k}\le \dots \le t_{1}\)

$$\begin{aligned} \tau _{j}^ {\left( \mathbf {X}\right) }(t|I;t_{1},\dots ,t_{k})&=\frac{1}{t^{2}}\lambda _{j}^{\left( \mathbf {Y}\right) }\left( \left. \frac{1}{t}\right| I;\frac{1}{t_{1}},\dots ,\frac{1}{t_{k}}\right) . \end{aligned}$$
(10)

Proof

One can obtain the stated result as follows:

$$\begin{aligned} \tau _{j}^{\left( \mathbf {X}\right) }(t|\emptyset )&=\lim _{\Delta t\rightarrow 0^{+}}\frac{1}{\Delta t}\mathbb {P}\left( X_{j}\ge t-\Delta t\left| X_{n:n}\le t\right. \right) \end{aligned}$$
(11)
$$\begin{aligned}&=\lim _{\Delta t\rightarrow 0^{+}}\frac{1}{\Delta t}\mathbb {P}\left( \frac{1}{Y_{j}}\ge t-\Delta t\left| Y_{1:n}\ge \frac{1}{t}\right. \right) \nonumber \\&=\lim _{\Delta t\rightarrow 0^{+}}\frac{1}{\Delta t}\mathbb {P}\left( Y_{j}\le \frac{1}{t-\Delta t}\left| Y_{1:n}\ge \frac{1}{t}\right. \right) \nonumber \\&=\lim _{\Delta t\rightarrow 0^{+}}\frac{1}{\Delta t}\mathbb {P}\left( Y_{j}\le \frac{1}{t}+\frac{\Delta t}{t(t-\Delta t)}\left| Y_{1:n}\ge \frac{1}{t}\right. \right) \nonumber \\&=\lim _{\Delta t\rightarrow 0^{+}}\frac{1}{t(t-\Delta t)}\cdot \frac{t(t-\Delta t)}{\Delta t}\mathbb {P}\left( Y_{j}\le \frac{1}{t}+\frac{\Delta t}{t(t-\Delta t)}\left| Y_{1:n}\ge \frac{1}{t}\right. \right) \nonumber \\&=\frac{1}{t^{2}}\lambda _{j}^{\left( \mathbf {Y}\right) }\left( \left. \frac{1}{t}\right| \emptyset \right) . \end{aligned}$$
(12)

Similarly, for \(\emptyset \ne I\subset [n]\), \(0\le t\le t_{k}\le \dots \le t_{1}\)

$$\begin{aligned} \tau _{j}^{\left( \mathbf {X}\right) }(t|I;t_{1},\dots ,t_{k})&=\lim _{\Delta t\rightarrow 0^{+}}\frac{1}{\Delta t}\mathbb {P}\left( X_{j}\ge t-\Delta t\left| X_{i_{1}}=t_{1},\dots ,X_{i_{k}}=t_{k},\max _{h\notin I}X_{h}\le t\right. \right) \nonumber \\&=\lim _{\Delta t\rightarrow 0^{+}}\frac{1}{\Delta t}\mathbb {P}\left( Y_{j}\le \frac{1}{t-\Delta t}\left| Y_{i_{1}}=\frac{1}{t_{1}},\dots ,Y_{i_{k}}=\frac{1}{t_{k}},\min _{h\notin I}Y_{h}\ge \frac{1}{t}\right. \right) \nonumber \\&=\frac{1}{t^{2}}\lambda _{j}^{\left( \mathbf {Y}\right) }\left( \left. \frac{1}{t}\right| I;\frac{1}{t_{1}},\dots ,\frac{1}{t_{k}}\right) . \end{aligned}$$
(13)

3 Reversed Load-Sharing Models

First of all, we recall the definition of the inverse exponential distribution. Let Y be an exponential random variable, \(Y\sim Exp(\lambda )\); then \(X=1/Y \sim invExp(\lambda )\) is an inverse exponential random variable. For \(t>0\), the cdf, pdf, and reversed hazard rate function of X are respectively given by

$$\begin{aligned}&F_X(t)=\overline{F}_Y\left( \frac{1}{t}\right) =\mathrm e^{-\lambda /t}, \nonumber \\&f_X(t)=\frac{1}{t^2}f_Y\left( \frac{1}{t}\right) =\frac{\lambda }{t^2} \mathrm e^{-\lambda /t}, \nonumber \\&q_X(t)=\frac{1}{t^2}r_Y\left( \frac{1}{t}\right) =\frac{\lambda }{t^2}. \end{aligned}$$
(14)

The inverse exponential distribution and some of its generalizations have found many key applications in several contexts, such as medicine and the survival analysis of patients and devices. For further details see Murty and Naikan (1996); Oguntunde et al. (2017); Pavlov et al. (2018).
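
The relations in (14) can be checked numerically; in the sketch below, the rate λ and the evaluation point t are illustrative choices of ours.

```python
import math, random

# If Y ~ Exp(lam), then X = 1/Y has cdf exp(-lam/t) and reversed
# hazard rate lam/t^2, as in (14).  lam and t are illustrative.
random.seed(2)
lam, t = 1.7, 2.0

F_X = math.exp(-lam / t)
f_X = (lam / t**2) * math.exp(-lam / t)
assert abs(f_X / F_X - lam / t**2) < 1e-12   # q_X(t) = lam / t^2

# Monte Carlo check of the cdf via X = 1/Y with Y ~ Exp(lam)
N = 200000
hits = sum(1.0 / random.expovariate(lam) <= t for _ in range(N))
assert abs(hits / N - F_X) < 0.01
```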

The following result concerns the behaviour of the maximum \(X_{n:n}\) among independent variables distributed according to inverse exponential distributions.

Proposition 5

Let \(X_1,\dots ,X_n\) be independent random variables, respectively distributed according to inverse exponential distributions with parameters \(\lambda _1,\) \(\dots ,\lambda _n\). Then, the following identities hold:

$$\begin{aligned}&\mathbb P(X_{n:n}=X_j,X_{n:n}\le t)=\mathbb P(X_{n:n}=X_j)\mathbb P(X_{n:n}\le t), \text{ for } \text{ any } t>0, \end{aligned}$$
(15)
$$\mathbb P(X_{n:n}=X_j)=\frac{\lambda _j}{\sum _{i=1}^n \lambda _i},$$
(16)
$$\mathbb P(X_{n:n}\le t)=\mathrm e^{-\frac{1}{t}\sum _{i=1}^n \lambda _i}.$$
(17)

Proof

We start from the event \((X_{n:n}\le t)\). In view of independence among variables, we have

$$\begin{aligned} \mathbb P(X_{n:n}\le t)= & {} \mathbb P(X_1\le t,\dots ,X_n\le t)\\= & {} \mathbb P(X_1\le t)\cdots \mathbb P(X_n\le t)\\= & {} \mathrm e^{-\frac{1}{t}\sum _{i=1}^n \lambda _i}. \end{aligned}$$

Let us then consider the event \((X_{n:n}=X_j,X_{n:n}\le t)\). By recalling Proposition 2, we can write

$$\begin{aligned} \mathbb P(X_{n:n}=X_j,X_{n:n}\le t)= & {} \mathbb P(X_j\le t \text{ and } X_j> X_i, i\ne j)\\= & {} \int _0^t \frac{\lambda _j}{s^2}\mathrm e^{-\frac{\lambda _j}{s}} \mathbb P(X_i\le s, i\ne j) \mathrm ds. \end{aligned}$$

By taking into account the independence among \(X_1,\dots ,X_n\) we can then write

$$\begin{aligned} \mathbb P(X_{n:n}=X_j,X_{n:n}\le t)= & {} \int _0^t \frac{\lambda _j}{s^2}\mathrm e^{-\frac{\lambda _j}{s}} \prod _{i=1, i\ne j}^n e^{-\frac{\lambda _i}{s}} \mathrm ds \\= & {} \int _0^t \frac{\lambda _j}{s^2}\prod _{i=1}^n e^{-\frac{\lambda _i}{s}} \mathrm ds \\= & {} \frac{\lambda _j}{\sum _{i=1}^n \lambda _i}\mathrm e^{-\frac{1}{t}\sum _{i=1}^n \lambda _i}. \end{aligned}$$

Hence, the events \((X_{n:n}\le t)\) and \((X_{n:n}=X_j)\) are independent and, by letting \(t\rightarrow +\infty \), we get

$$\begin{aligned} \mathbb P(X_{n:n}=X_j)=\frac{\lambda _j}{\sum _{i=1}^n \lambda _i}. \end{aligned}$$
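
Proposition 5 lends itself to a direct Monte Carlo illustration; the rates and the threshold in the following sketch are illustrative.

```python
import math, random

# Monte Carlo illustration of Proposition 5: for independent inverse
# exponential variables, the index achieving the maximum is independent
# of the level of the maximum.  Rates are illustrative.
random.seed(3)
lams = [1.0, 2.0, 3.0]
t = 1.5
N = 300000

both = idx = level = 0
for _ in range(N):
    x = [1.0 / random.expovariate(l) for l in lams]
    m = max(x)
    if m == x[0]:
        idx += 1
        if m <= t:
            both += 1
    if m <= t:
        level += 1

p_idx = idx / N
p_level = level / N
assert abs(p_idx - lams[0] / sum(lams)) < 0.01          # identity (16)
assert abs(p_level - math.exp(-sum(lams) / t)) < 0.01   # identity (17)
assert abs(both / N - p_idx * p_level) < 0.01           # identity (15)
```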

Remark 3

We highlight that Proposition 5 is analogous to a well-known result concerning the minimum among independent exponentially distributed variables. See e.g. Theorem 2.3.3 in Norris (1998): let \(Y_1,\dots ,Y_n\) be independent random variables such that \(Y_j\sim Exp(\lambda _j)\); then we have

$$\begin{aligned}&\mathbb P(Y_{1:n}=Y_j,Y_{1:n}> t)=\mathbb P(Y_{1:n}=Y_j)\mathbb P(Y_{1:n}> t), \text{ for } \text{ any } t>0, \\&\mathbb P(Y_{1:n}=Y_j)=\frac{\lambda _j}{\sum _{i=1}^n \lambda _i}, \\&\mathbb P(Y_{1:n}> t)=\mathrm e^{-t\sum _{i=1}^n \lambda _i}. \end{aligned}$$

We have preferred to provide a direct proof of Proposition 5, even though a proof can easily be obtained from the above result. In fact, it is sufficient to recall that when \(X_1,\dots ,X_n\) are independent and \(X_j \sim invExp(\lambda _j)\), then \(Y_1=1/X_1,\dots ,Y_n=1/X_n\) are independent with \(Y_j\sim Exp (\lambda _j)\) and, furthermore, that the following equivalences hold

$$\begin{aligned}&Y_{1:n}=Y_j \Leftrightarrow 1/X_{n:n}=1/X_j \Leftrightarrow X_{n:n}=X_j, \\&Y_{1:n}>t \Leftrightarrow 1/X_{n:n}>t \Leftrightarrow X_{n:n}<\frac{1}{t}. \end{aligned}$$

Before continuing, we focus attention on a special class of dependence models for lifetimes which has been considered several times in the applied literature, possibly under a variety of different terminologies. Here we will say that the random vector \((X_1,\dots ,X_n)\) is distributed according to a Load-Sharing model (LS) if, for non-empty \(I\subset [n]\) with cardinality k and \(j\in [n]\smallsetminus I\), there exist functions \(\mu _j(t|I)\) such that, for all \(0\le t_1\le \dots \le t_k\le t\),

$$\begin{aligned} \lambda _j(t|I;t_1,\dots ,t_k)=\mu _j(t|I). \end{aligned}$$

Furthermore, a load-sharing model is time-homogeneous (THLS) when there exist non-negative numbers \(\mu _j(I)\) and \(\mu _j(\emptyset )\) such that, for any \(t>0\),

$$\begin{aligned}&\mu _j(t|I)=\mu _j(I),\\&\lambda _j(t|\emptyset )=\mu _j(\emptyset ). \end{aligned}$$

For further details, see e.g. Shaked and Shanthikumar (2015); Spizzichino (2018) and references cited therein.

Remark 4

Notice that the joint distribution of n independent exponential variables (not necessarily identically distributed) is a special case of a THLS model.
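
A realization of a THLS model can be sampled by the standard sequential construction: given the set I of already failed components, the next failure occurs after an exponential waiting time with rate \(\sum _{h\notin I}\mu _h(I)\) and hits component j with probability proportional to \(\mu _j(I)\). The following sketch implements this construction; the dictionary-based parametrization is our own convention, not notation from the text.

```python
import random

# A minimal sketch of sampling one realization from a THLS model.
def sample_thls(n, mu, rng=random):
    """mu: maps frozenset I -> dict {j: mu_j(I)} for j not in I."""
    failed, times, t = [], {}, 0.0
    while len(failed) < n:
        rates = mu[frozenset(failed)]
        total = sum(rates.values())
        t += rng.expovariate(total)          # waiting time of next failure
        # choose the failing component proportionally to its rate
        u, acc = rng.random() * total, 0.0
        for j, r in rates.items():
            acc += r
            if u <= acc:
                failed.append(j)
                times[j] = t
                break
    return times

# Example: 2 exchangeable components with unit rates (illustrative),
# i.e., two independent Exp(1) lifetimes as per Remark 4.
random.seed(4)
mu = {frozenset(): {0: 1.0, 1: 1.0},
      frozenset({0}): {1: 1.0},
      frozenset({1}): {0: 1.0}}
times = sample_thls(2, mu)
assert len(times) == 2 and min(times.values()) > 0
```

In this special case the minimum of the two lifetimes is Exp(2)-distributed, which provides a simple check of the sampler.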

Definition 2

We say that the random vector \((X_1,\dots ,X_n)\) is distributed according to a Reversed Load–Sharing model (RLS) if, for non-empty \(I\subset [n]\) and \(j\in [n]\smallsetminus I\), the m.c.r.h.r. functions \(\tau _j(t|I;t_1,\dots ,t_{|I|})\) do not depend on \(t_1,\dots ,t_{|I|}\), for all \(0\le t\le t_{|I|}\le \dots \le t_1\), i.e.,

$$\begin{aligned} \tau _j(t|I;t_1,\dots ,t_{|I|})=\tau _j(t|I). \end{aligned}$$

We now concentrate attention on a special subclass of reversed load sharing models. Let us consider a vector \((Y_1,\dots ,Y_n)\) distributed according to a THLS model with parameters \(\mu _j(\emptyset ), \mu _j(I)\). Then \((X_1,\dots ,X_n)\), defined by \(X_j=1/Y_j\), for \(j=1,\dots ,n\), is such that the m.c.r.h.r. functions are expressed by (9) and (10) in the following way

$$\begin{aligned}&\tau _j^{\left( \mathbf {X}\right) }(t|\emptyset )=\frac{1}{t^2}\lambda _j^{\left( \mathbf {Y}\right) }\left( \left. \frac{1}{t}\right| \emptyset \right) =\frac{1}{t^2}\mu _j(\emptyset ) \nonumber \\&\tau _j^{\left( \mathbf {X}\right) }(t|I)=\frac{1}{t^2}\lambda _j^{\left( \mathbf {Y}\right) }\left( \left. \frac{1}{t}\right| I\right) =\frac{1}{t^2}\mu _j(I). \end{aligned}$$
(18)

By recalling the formula of the reversed hazard rate \(q_{X}\) in (14), we observe, in particular, that the m.c.r.h.r. functions of vectors of independent, inverse-exponentially distributed random variables satisfy the identities in (18). Also taking into account Remark 4 above, we then give the following definition.

Definition 3

We say that the random vector \((X_1,\dots ,X_n)\) is distributed according to a Reversed Time Homogeneous Load–Sharing model (RTHLS) if it is an RLS model and, in addition, for \(I\subset [n]\) and \(j\in [n]\smallsetminus I\), the m.c.r.h.r. functions are expressed as

$$\begin{aligned} \tau _j(t|I)=\frac{c_j(I)}{t^2}, \end{aligned}$$

where \(c_j(I)\ge 0\).

Remark 5

Notice that we have tacitly assumed that any coefficient \(c_j(I)\) depends only on the set I, and not on the order in which its elements are considered.

We emphasize that the vector \((X_1,\dots ,X_n)\) is distributed according to an RTHLS model if and only if the vector \((Y_1,\dots ,Y_n)\), where \(Y_j = 1/X_j\), \(j = 1,\dots ,n\), is distributed according to a THLS model. Furthermore, RTHLS models can be seen as natural generalizations of the case of independent variables with inverse exponential distributions and, in particular, they inherit several of its remarkable properties. See in particular the basic property shown in the following Proposition 6.

If \((X_1,\dots ,X_n)\) follows a RTHLS model, then we set, for \(j\in [n]\), \(I\subset [n]\), \(j\notin I\),

$$\begin{aligned}&N(I)=\sum _{h\notin I} c_h(I) \end{aligned}$$
(19)
$$\begin{aligned}&\eta _j(I)=\frac{\tau _j(t|I)}{\sum _{h\notin I} \tau _h(t|I)}=\frac{c_j(I)}{\sum _{h\notin I} c_h(I)}=\frac{c_j(I)}{N(I)}. \end{aligned}$$
(20)

We note furthermore that the parameters \(c_{j}\left( \emptyset \right) ,c_{j}\left( I\right)\) of a reversed THLS model for variables \(X_{1},\dots ,X_{n}\) actually coincide with the parameters of the THLS model for the reciprocal variables \(Y_{j}=1/X_{j}\) (\(j=1,\dots ,n\)), i.e.

$$\begin{aligned} c_j(\emptyset )=\mu _j(\emptyset ), \ \ \ \ c_j(I)=\mu _j(I). \end{aligned}$$
(21)

Before continuing, we present the following simple example.

Example 1

Consider a triple of lifetimes \(Y_{1},Y_{2},Y_{3}\) distributed according to a THLS model with parameters fixed as follows: for \(j=1,2,3\) and \(i\ne j\)

$$\begin{aligned}&\mu _{j}\left( \emptyset \right) =1, \ \ \ \mu _{j}\left( i\right) =1; \\&\mu _{2}\left( 1,3\right) =\mu _{3}\left( 1,2\right) =1, \end{aligned}$$

whereas

$$\begin{aligned} \mu _{1}\left( 2,3\right) =\varepsilon , \end{aligned}$$

where \(\varepsilon\) is a positive number, close to 0. Setting \(X_{j}:=\frac{1}{Y_{j}}\) (\(j=1,2,3\)) and noting that \(X_1,X_2,X_3\) are distributed according to an RTHLS model, we can write

$$\begin{aligned} \mathbb {P}\left( X_{j}>t-\delta |X_{3:3}<t\right)= & {} \mathbb {P}\left( \frac{1}{Y_{j}}>t-\delta |\frac{1}{Y_{1:3}}<t\right) \\= & {} \mathbb {P}\left( Y_{j}<\frac{1}{t-\delta }|Y_{1:3}>\frac{1}{t}\right) . \end{aligned}$$

Thus, by letting \(\omega :=t^{-1}\) and by taking into account that \(Y_{1},Y_{2},Y_{3}\) are jointly distributed according to a THLS model with parameters specified as above,

$$\begin{aligned} \mathbb {P}\left( Y_{j}<\omega +\left( \frac{1}{t-\delta }-\omega \right) |Y_{1:3}>\omega \right)= & {} \mathbb {P}\left( Y_{j}<\omega +\frac{\delta }{t^{2}-\delta t}|Y_{1:3}>\omega \right) \\= & {} \frac{\delta }{t^{2}-\delta t}+o(\delta )=\frac{\delta }{t^{2}}+o(\delta ). \end{aligned}$$

Then we obtain

$$\begin{aligned} \tau _{j}(t|\emptyset )=\lim _{\delta \rightarrow 0^+}\frac{1}{\delta }\left( \frac{\delta }{t^{2}}+o(\delta )\right) =\frac{1}{t^{2}}. \end{aligned}$$

Furthermore,

$$\begin{aligned} \mathbb {P}\left( X_{3:3}>t-\delta |X_{3:3}<t\right) =\mathbb {P}\left( Y_{1:3}<\frac{1}{t-\delta }|Y_{1:3}>\omega \right) =1-\exp \left\{ -3\frac{\delta }{t^{2}-\delta t}\right\} , \\ \mathbb {P}\left( X_{3:3}=X_{j}|X_{3:3}<t\right) =\mathbb {P}\left( Y_{1:3}=Y_{j}|Y_{1:3}>\omega \right) =\frac{1}{3}. \end{aligned}$$

The latter equation shows, in particular, that the events \(\left( X_{3:3}<t\right)\) and \(\left( X_{3:3}=X_{j}\right)\) are independent. In particular, this fact guarantees that the RTHLS class is different from the THLS class. Notice, on the contrary, that the events \(\left( X_{1:3}=X_{j}\right)\) and \(\left( X_{1:3}<t\right)\) cannot, in general, be independent. In fact, the conditional probability

$$\begin{aligned} \mathbb {P}\left( X_{1:3}=X_{j}|X_{1:3}<t\right) =\mathbb {P}\left( Y_{3:3}=Y_{j}|Y_{3:3}>\omega \right) \end{aligned}$$

generally depends on t. This fact also admits a heuristic explanation: when the value of \(\omega\) is very large (i.e., t very small), the condition \(\mu _{1}\left( 2,3\right) =\varepsilon\) with \(\varepsilon \ll 1\) leads us to assign much greater probability to the event \(\left( Y_{3:3}=Y_{1}\right)\) than to the complementary event \(\left( Y_{3:3}\ne Y_{1}\right)\).

The following result points out the appropriate way to extend Proposition 5 from the independent case to the case of time-homogeneous reversed load-sharing models.

Proposition 6

Let \(\left( X_{1},\dots ,X_{n}\right)\) be distributed according to a reversed time-homogeneous load-sharing model with parameters \(c_{j}(I)\). Then, the following identities hold:

$$\begin{aligned}&\mathbb {P}(X_{n:n}=X_{j},X_{n:n}\le v)=\mathbb {P}(X_{n:n}=X_{j} )\mathbb {P}(X_{n:n}\le v), \text{ for } \text{ any } v>0, \nonumber \\&\mathbb {P}(X_{n:n}=X_{j})=\eta _{j}\left( \emptyset \right) , \end{aligned}$$
(22)
$$\begin{aligned}&\mathbb {P}(X_{n:n}\le v)=\mathrm {e}^{-\frac{N\left( \emptyset \right) }{v}}. \end{aligned}$$
(23)

Proof

By applying Proposition 2 to the present case, we can write

$$\begin{aligned} \mathbb P(X_{n:n}=X_{j},X_{n:n}\le v)= & {} \int _0^{v} \tau _{j}(s|\emptyset )\mathrm e^{-\int _s^{+\infty }\sum _{i=1}^n \tau _i(w|\emptyset )\mathrm dw}\mathrm ds \\= & {} \int _0^{v} \frac{c_{j}(\emptyset )}{s^2}\mathrm e^{-\int _s^{+\infty }\frac{\sum _{i=1}^n c_i(\emptyset )}{w^2}\mathrm dw}\mathrm ds \\= & {} \int _0^{v} \frac{c_{j}(\emptyset )}{s^2}\mathrm e^{-\frac{\sum _{i=1}^n c_i(\emptyset )}{s}}\mathrm ds \\= & {} \frac{c_{j}(\emptyset )}{\sum _{i=1}^n c_i(\emptyset )}\mathrm e^{-\frac{\sum _{i=1}^n c_i(\emptyset )}{v}}=\eta _{j}(\emptyset )\mathrm e^{-\frac{N(\emptyset )}{v}}. \end{aligned}$$
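
The closing step of this computation rests on the closed-form integral \(\int _0^{v} (c/s^{2})\mathrm e^{-C/s}\mathrm ds=(c/C)\mathrm e^{-C/v}\); the following sketch confirms it numerically with illustrative constants.

```python
import math

# Numerical check of the integral used in the proof of Proposition 6:
# integral_0^v (c/s^2) e^{-C/s} ds = (c/C) e^{-C/v}.
# The constants c, C and v are illustrative.
c, C, v = 1.3, 4.0, 2.5

m = 200000
h = v / m
# midpoint rule; near s = 0 the factor e^{-C/s} vanishes faster than
# c/s^2 blows up, so the integrand tends to 0
integral = sum((c / s**2) * math.exp(-C / s)
               for s in (h * (i + 0.5) for i in range(m))) * h
closed = (c / C) * math.exp(-C / v)
assert abs(integral - closed) < 1e-4
```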

Notice also that, in the case when \(\left( X_{1},\dots ,X_{n}\right)\) is distributed according to a reversed THLS model, the independent variables \(Z_{1},\dots ,Z_{n}\) introduced in Proposition 3 are distributed according to inverse exponential distributions with parameters \(c_{j}\left( \emptyset \right)\).

Now, we introduce the discrete random variables \(J_{1},\dots ,J_{n}\), where

$$\begin{aligned} J_{h}=j\text { if }X_{h:n}=X_{j}. \end{aligned}$$

Other basic aspects of RTHLS models can be better understood by writing down, for \(k = 1,\dots ,n\), the joint pdf \(f_{X_{n:n},\dots ,X_{k:n},J_{n},\dots ,J_{k}}(t_{n},\dots ,t_{k};j_{n},\dots ,j_{k})\) of \((X_{n:n},\dots ,X_{k:n};J_{n},\dots ,J_{k})\) with respect to the product of the \((n-k+1)\)-dimensional Lebesgue measure and an appropriate counting measure. In this respect, for \(t_1\le t_2 \le \dots \le t_n\), we can write

$$\begin{aligned} f_{X_{n:n},\dots ,X_{k:n},J_{n},\dots ,J_{k}}(t_{n},\dots ,t_{k};j_{n},\dots ,j_{k}) \end{aligned}$$
$$\begin{aligned} =f_{X_{n:n},J_n}(t_n;j_n)\times f_{X_{n-1:n},J_{n-1}}(t_{n-1};j_{n-1}|t_n;j_n)\times \dots \end{aligned}$$
$$\begin{aligned} \times f_{X_{k:n},J_{k}}(t_{k};j_{k}|t_n,\dots , t_{k+1};j_n,\dots , j_{k+1}). \end{aligned}$$

Taking into account both the meaning of the m.c.r.h.r. functions and the definition of reversed THLS models, the above equation takes the form

$$\begin{aligned} f_{X_{n:n},\dots ,X_{k:n},J_{n},\dots ,J_{k}}(t_{n},\dots ,t_{k};j_{n},\dots ,j_{k}) \end{aligned}$$
$$\begin{aligned} =\frac{c_{j_{n}}(\emptyset )}{t_{n}^{2}}\exp \left\{ -\int _{t_{n}}^{+\infty }\frac{1}{u^{2}}\sum _{i=1}^{n}c_{i}(\emptyset ) \right\} du\times \\ \times \frac{c_{j_{n-1}}(j_{n})}{t_{n-1}^{2}}\exp \left\{ -\int _{t_{n-1}}^{t_{n}}\frac{1}{u^{2}}\left( \sum _{i \in [n] \setminus \{j_n \}}c_{i}(j_{n})\right) du \right\} \times \dots \end{aligned}$$
$$\begin{aligned} \times \frac{c_{j_{k}}(j_{n},\dots ,j_{k+1})}{t_{k}^{2}}\exp \left\{ -\int _{t_{k} }^{t_{k+1}}\frac{1}{u^{2}}\left( \sum _{i\in [n] \setminus \{ j_{n}, \ldots ,j_{k+1}\} }^{n}c_{i}(u|(j_{n},\dots ,j_{k+1}))\right) du \right\} . \end{aligned}$$
(24)

For \(I\subset \left[ n\right]\), we recall the notation

$$\begin{aligned} N(I)=\sum _{j\notin I}c_{j}(I). \end{aligned}$$

Thus we can write

$$\begin{aligned} f_{X_{n:n},\dots ,X_{k:n},J_{n},\dots ,J_{k}}(t_{n},\dots ,t_{k};j_{n},\dots ,j_{k}) \end{aligned}$$
$$\begin{aligned} =\frac{c_{j_{n}}(\emptyset )}{t_{n}^{2}}\exp \left\{ -\int _{t_{n}}^{+\infty }\frac{1}{u^{2}}N(\emptyset )du\right\} \times \frac{c_{j_{n-1}}(j_{n})}{t_{n-1}^{2}}\exp \left\{ -\int _{t_{n-1}}^{t_{n}}\frac{1}{u^{2}} N\left( j_{n}\right) du\right\} \times \dots \end{aligned}$$
$$\begin{aligned} \times \frac{c_{j_{k}}(j_{n},\dots ,j_{k+1})}{t_{k}^{2}}\exp \left\{ -\int _{t_{k}}^{t_{k+1}}\frac{1}{u^{2}} N\left( j_{n},\dots ,j_{k+1}\right) du\right\} . \end{aligned}$$
(25)

In view of Remark 5, we notice that it could be more precise to use the notation \(N(\{j_{n},\dots ,j_{k+1}\})\) in place of \(N(j_{n},\dots ,j_{k+1})\). Here, we omit the curly brackets for the sake of simplicity.

4 On Inactivity Times of Coherent Systems

In this section we aim to study the probability distribution of the inactivity time of a coherent system made up of components whose lifetimes are jointly distributed according to reversed THLS models. More precisely, let S be a coherent system, let \(T_S\) be its lifetime and \(\hat{T}_{v,S}:=v-T_S\) be its inactivity time at time v. We aim to compute the conditional probability

$$\begin{aligned} \mathbb P(\hat{T}_{v,S}\ge t|X_{n:n}\le v). \end{aligned}$$

Namely, we look for the distribution of the system’s inactivity time, conditional on the detailed information that all the components are down at time v. For this purpose, we will in particular employ the following results which, in a sense, are dual to results valid for the ordinary THLS models, as presented in Spizzichino (2018) and in De Santis et al. (2020).

First of all we notice that, in view of Proposition 6, the conditional distribution of the maximum order statistic \(X_{n:n}\) given the event \(\left( X_{n:n}=X_{j}\right)\) coincides with an inverse exponential distribution whose parameter is \(N\left( \emptyset \right)\). More precisely we can state the following result.

Proposition 7

We have, for any \(v>t>0\) and for any \(j\in [n]\)

$$\begin{aligned} \mathbb P(v-X_{n:n}\ge t|X_{n:n}=X_j,X_{n:n}\le v)=\exp \left( -\frac{t\ N(\emptyset )}{v(v-t)}\right) . \end{aligned}$$
(26)

Proof

From Proposition 2 and Equations (19)–(20), we have

$$\begin{aligned} \mathbb P(v-X_{n:n}\ge t|X_{n:n}=X_j,X_{n:n}\le v)= & {} \mathbb P(X_{n:n}\le v- t|X_{n:n}=X_j,X_{n:n}\le v) \nonumber \\= & {} \frac{\mathbb P(X_{n:n}\le v- t,X_{n:n}=X_j)}{\mathbb P(X_{n:n}\le v,X_{n:n}=X_j)} \nonumber \\= & {} \frac{\int _0^{v-t} \tau _{j}(s|\emptyset )\mathrm e^{-\int _s^{+\infty }\sum _{h=1}^n \tau _h(w|\emptyset )\mathrm dw}\mathrm ds}{\int _0^{v} \tau _{j}(s|\emptyset )\mathrm e^{-\int _s^{+\infty }\sum _{h=1}^n \tau _h(w|\emptyset )\mathrm dw}\mathrm ds} \nonumber \\= & {} \frac{\eta _j(\emptyset )\exp \left( -\frac{N(\emptyset )}{v-t}\right) }{\eta _j(\emptyset )\exp \left( -\frac{N(\emptyset )}{v}\right) } \nonumber \\= & {} \exp \left( -\frac{t\ N(\emptyset )}{v(v-t)}\right) , \end{aligned}$$
(27)

and this completes the proof of (26).
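Formula (26) can also be checked by simulation in the simplest reversed THLS model, namely the one with constant parameters \(c_{j}(I)=c_{j}\), under which the components are independent and \(X_{j}=1/Y_{j}\) with \(Y_{j}\) exponential of rate \(c_{j}\). The following Monte Carlo sketch (in Python; the parameter values are our own illustrative choice, not taken from the text) compares the empirical conditional frequency with the right-hand side of (26):

```python
# Monte Carlo check of (26), assuming the independent special case
# c_j(I) = c_j of a reversed THLS model, so that X_j = 1/Y_j with
# Y_j ~ Exp(c_j).  All numerical values below are illustrative.
import math
import random

rng = random.Random(42)
c = [1.0, 2.0, 3.0]          # c_j(emptyset); hence N(emptyset) = 6
N0 = sum(c)
v, t = 5.0, 1.0
j = 2                        # 0-based index: condition on X_{3:3} = X_3

hits = total = 0
for _ in range(200_000):
    x = [1.0 / rng.expovariate(cj) for cj in c]   # inverse exponentials
    m = max(x)
    if m <= v and x[j] == m:                      # conditioning event
        total += 1
        if v - m >= t:                            # inactivity of the maximum
            hits += 1

exact = math.exp(-t * N0 / (v * (v - t)))         # right-hand side of (26)
print(abs(hits / total - exact) < 0.02)
```

The agreement does not depend on which component attains the maximum, in accordance with the fact that the right-hand side of (26) is free of j.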

Proposition 8

Let \((X_1,\dots ,X_n)\) be distributed according to a reversed time homogeneous load–sharing model with parameters \(c_j(I)\), \(I\subset [n], j\in [n]\smallsetminus I\). Let us fix \(v>0\). We have for \(k=1,\dots ,n\),

$$\begin{aligned}&\mathbb P(X_{n:n}=X_{j_n}, X_{n-1:n}=X_{j_{n-1}},\dots ,X_{k:n}=X_{j_k},X_{n:n}\le v)\\&=\eta _{j_n}(\emptyset )\eta _{j_{n-1}}(\{j_n\})\cdots \eta _{j_k}(\{j_n,j_{n-1},\dots ,j_{k+1}\})\exp \left\{ {-\frac{N(\emptyset )}{v}}\right\} . \end{aligned}$$

Proof

By plugging the identity

$$\begin{aligned} \int _{a}^{b}\frac{A}{u^{2}}du=A\left( \frac{1}{a}-\frac{1}{b}\right) , \end{aligned}$$

for \(0< a < b\) and \(A>0\), into formula (25), we can write

$$\begin{aligned} f_{X_{n:n},\dots ,X_{1:n},J_{n},\dots ,J_{1}}(t_{n},\dots ,t_{1};j_{n},\dots ,j_{1}) \end{aligned}$$
$$\begin{aligned} =\frac{c_{j_{n}}(\emptyset )\cdot c_{j_{n-1}}(j_{n})\cdot \ldots \cdot c_{j_{1}} (j_{n},\dots ,j_{2})}{t_{n}^{2}\cdot t_{n-1}^{2}\cdot \ldots \cdot t_{1}^{2}}\times \end{aligned}$$
$$\begin{aligned} \times \exp \left\{ -\left[ N(\emptyset )\frac{1}{t_{n}}+N(j_{n})\left( \frac{1}{t_{n-1} }-\frac{1}{t_{n}}\right) +\dots +N(j_{n},\dots ,j_{2})\left( \frac{1}{t_{1} }-\frac{1}{t_{2}}\right) \right] \right\} \end{aligned}$$
$$\begin{aligned} =\frac{c_{j_{n}}(\emptyset )\cdot c_{j_{n-1}}(j_{n})\cdot \ldots \cdot c_{j_{1}} (j_{n},\dots ,j_{2})}{t_{n}^{2}\cdot t_{n-1}^{2}\cdot \ldots \cdot t_{1}^{2}}\times \end{aligned}$$
$$\begin{aligned} \times \exp \left\{ -\left[ \frac{1}{t_{n}}\left[ N(\emptyset )-N(j_{n})\right] +\frac{1}{t_{n-1}}\left[ N(j_{n})-N(j_{n},j_{n-1})\right] +\dots \right. \right. \end{aligned}$$
$$\begin{aligned} \left. \left. +\frac{1}{t_{2}}\left[ N(j_{n},\dots ,j_{3})-N(j_{n},\dots ,j_{2})\right] +\frac{1}{t_{1}}N(j_{n},\dots ,j_{2})\right] \right\} . \end{aligned}$$

By properly integrating the joint density function of \((X_{n:n},\dots , X_{1:n}; J_n,\dots ,J_1)\) over the appropriate domain, we obtain

$$\begin{aligned} \mathbb {P}\left( X_{n:n}=X_{j_{n}},\dots ,X_{k:n}=X_{j_{k}},X_{n:n}\le v \right) \end{aligned}$$
$$\begin{aligned} =c_{j_{n}}(\emptyset )\cdot \ldots \cdot c_{j_{k}}(j_{n},\dots ,j_{k+1})\sum _{j_1\ne \dots \ne j_{k-1}\ne j_{k}\ne j_{k+1}\ne \dots \ne j_{n} }c_{j_{k-1}}(j_{n},\dots ,j_{k})\cdot \ldots \cdot c_{j_{1}}(j_{n},\dots ,j_{2})\times \end{aligned}$$
$$\begin{aligned} \times \int _{0}^{v}dt_{n}\int _{0}^{t_{n}}dt_{n-1}\dots \int _{0}^{t_{2}} \frac{1}{t_{n}^{2}\cdot t_{n-1}^{2} \cdot \ldots \cdot t_{1}^{2}}\exp \left\{ -\left[ \frac{1}{t_{n}}\left[ N(\emptyset )-N(j_{n})\right] \right. \right. + \end{aligned}$$
$$\begin{aligned} \left. \left. \dots +\frac{1}{t_{2}}[N(j_{n},\dots ,j_{3})-N(j_{n},\dots ,j_{2})]+\frac{1}{t_{1}}N(j_{n},\dots ,j_{2})\right] \right\} dt_{1} \end{aligned}$$
$$\begin{aligned} =c_{j_{n}}(\emptyset )\cdot \ldots \cdot c_{j_{k}}(j_{n},\dots ,j_{k+1})\sum _{j_1\ne \dots \ne j_{k-1}\ne j_{k}\ne j_{k+1}\ne \dots \ne j_{n} }c_{j_{k-1}}(j_{n},\dots ,j_{k})\cdot \ldots \cdot c_{j_{1}}(j_{n},\dots ,j_{2})\times \end{aligned}$$
$$\begin{aligned} \int _{0}^{v}\frac{\exp \left\{ -\left[ \frac{1}{t_{n}}\left[ N(\emptyset )-N(j_{n})\right] \right] \right\} }{t_{n}^{2}}dt_{n}\dots \int _{0}^{t_{3}} \frac{\exp \left\{ -\left[ \frac{1}{t_{2}}\left[ N(j_{n},\dots ,j_{3})-N(j_{n},\dots ,j_{2})\right] \right] \right\} }{t_{2}^{2}}\times \end{aligned}$$
$$\begin{aligned} \times \int _{0}^{t_{2}}\frac{\exp \left\{ -\frac{1}{t_{1}}N(j_{n},\dots ,j_{2} )\right\} }{t_{1}^{2}}dt_{1}\,dt_{2}. \end{aligned}$$

Now, by taking into account the identity

$$\begin{aligned} \int _{0}^{t_{2}}\frac{\exp \left\{ -\frac{1}{t_{1}}N(j_{n},\dots ,j_{2})\right\} }{t_{1}^{2}}dt_{1}=\frac{\exp \left\{ -\frac{1}{t_{2}}N(j_{n},\dots ,j_{2} )\right\} }{N(j_{n},\dots ,j_{2})}, \end{aligned}$$

we write

$$\begin{aligned} \mathbb {P}\left( X_{n:n}=X_{j_{n}},\dots ,X_{k:n}=X_{j_{k}},X_{n:n}\le v\right) \end{aligned}$$
$$\begin{aligned} =c_{j_{n}}(\emptyset )\cdot \ldots \cdot c_{j_{k}}(j_{n},\dots ,j_{k+1})\sum _{j_1\ne \dots \ne j_{k-1}\ne j_{k}\ne j_{k+1}\ne \dots \ne j_{n} }c_{j_{k-1}}(j_{n},\dots ,j_{k})\cdot \ldots \cdot c_{j_{1}}(j_{n},\dots ,j_{2})\times \end{aligned}$$
$$\begin{aligned} \times \int _{0}^{v}\frac{\exp \left\{ -\left[ \frac{1}{t_{n}}\left[ N(\emptyset )-N(j_{n})\right] \right] \right\} }{t_{n}^{2}}dt_{n}\dots \int _{0}^{t_{3}}\frac{\exp \left\{ -\frac{1}{t_{2}}N(j_{n},\dots ,j_{3})\right\} }{N(j_{n},\dots ,j_{2})\cdot t_{2}^{2}}dt_{2}. \end{aligned}$$

Continuing in this way, we obtain

$$\begin{aligned} \mathbb {P}\left( X_{n:n}=X_{j_{n}},\dots ,X_{k:n}=X_{j_{k}},X_{n:n}\le v\right) \end{aligned}$$
$$\begin{aligned} =\frac{c_{j_{n}}(\emptyset )\cdot \ldots \cdot c_{j_{k}}(j_{n},\dots ,j_{k+1})}{N(j_{n})\cdot \ldots \cdot N(j_{n},\dots ,j_{k+1})}\sum _{j_1\ne \dots \ne j_{k-1}\ne j_{k}\ne j_{k+1}\ne \dots \ne j_{n} }\frac{c_{j_{k-1}} (j_{n},\dots ,j_{k})\cdot \ldots \cdot c_{j_{1}}(j_{n},\dots ,j_{2})}{N(j_{n} ,\dots ,j_{k})\cdot \ldots \cdot N(j_{n},\dots ,j_{2})}\times \end{aligned}$$
$$\begin{aligned} \times \int _{0}^{v}\frac{\exp \left\{ -\left[ \frac{1}{t_{n}}\left[ N(\emptyset )-N(j_{n})\right] \right] \right\} \exp \left\{ -\frac{1}{t_{n}}N(j_{n})\right\} }{t_{n}^{2} }dt_{n} \end{aligned}$$
$$\begin{aligned} =c_{j_{n}}\left( \emptyset \right) \cdot \eta _{j_{n-1}}(j_{n})\cdot \ldots \cdot \eta _{j_{k}}\left( j_{n},\dots ,j_{k+1}\right) \frac{1}{N(\emptyset )} \exp \left\{ -\frac{N(\emptyset )}{v}\right\} \end{aligned}$$
$$\begin{aligned} =\eta _{j_{n}}\left( \emptyset \right) \cdot \eta _{j_{n-1}}(j_{n})\cdot \ldots \cdot \eta _{j_{k}}\left( j_{n},\dots ,j_{k+1} \right) \exp \left\{ -\frac{N(\emptyset )}{v}\right\} . \end{aligned}$$
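As with Proposition 7, the statement of Proposition 8 can be checked by simulation in the independent special case \(c_{j}(I)=c_{j}\), where \(\eta _{j}(I)=c_{j}/\sum _{i\notin I}c_{i}\). A Monte Carlo sketch (in Python) with our own illustrative parameter values:

```python
# Monte Carlo check of Proposition 8 for n = 3, k = 2, assuming the
# independent special case c_j(I) = c_j, where eta_j(I) = c_j / sum_{i not in I} c_i.
import math
import random

rng = random.Random(7)
c = [1.0, 2.0, 3.0]
N0 = sum(c)
v = 5.0

# Target: P(X_{3:3} = X_3, X_{2:3} = X_1, X_{3:3} <= v)
eta_3 = c[2] / N0               # eta_{j_3}(emptyset)  with j_3 = 3
eta_1 = c[0] / (c[0] + c[1])    # eta_{j_2}({j_3})     with j_2 = 1
exact = eta_3 * eta_1 * math.exp(-N0 / v)

n = 300_000
hits = 0
for _ in range(n):
    x = [1.0 / rng.expovariate(cj) for cj in c]
    if x[1] < x[0] < x[2] and x[2] <= v:    # X_2 < X_1 < X_3 <= v
        hits += 1

print(abs(hits / n - exact) < 0.005)
```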

Remark 6

In the following, we give a different proof of Proposition 8, based on THLS models. Let us consider the variables \(Y_j=1/X_j\), \(j=1,\dots ,n\). We have

$$\begin{aligned} \mathbb {P}\left( X_{n:n}=X_{j_{n}},\dots ,X_{k:n}=X_{j_{k}},X_{n:n}\le v\right) \end{aligned}$$
$$\begin{aligned} =\mathbb {P}\left( Y_{1:n}=Y_{j_{n}},\dots ,Y_{n-k+1:n}=Y_{j_{k}},Y_{1:n}>\frac{1}{v}\right) \end{aligned}$$
$$\begin{aligned} =\mathbb {P}\left( Y_{1:n}=Y_{j_{n}},\dots ,Y_{n-k+1:n}=Y_{j_{k}}\left| Y_{1:n}>\frac{1}{v}\right. \right) \mathbb {P}\left( Y_{1:n}>\frac{1}{v}\right) \end{aligned}$$
$$\begin{aligned} =\mathbb {P}\left( Y_{1:n}=Y_{j_{n}},\dots ,Y_{n-k+1:n}=Y_{j_{k}}\left| Y_{1:n}>\frac{1}{v}\right. \right) \exp \left\{ -\frac{1}{v}N(\emptyset )\right\} . \end{aligned}$$

As pointed out in De Santis et al. (2020), an important property of THLS models is the following: conditioning on the event \(\{Y_{1:n}>t\}\), the joint distribution of the residual lifetimes \(Y_j-t\), \(j=1,\dots ,n\), is the same as that of the original variables. Then, we have

$$\begin{aligned} \mathbb {P}\left( X_{n:n}=X_{j_{n}},\dots ,X_{k:n}=X_{j_{k}},X_{n:n}\le v\right) \end{aligned}$$
$$\begin{aligned} =\mathbb {P}\left( Y_{1:n}=Y_{j_{n}},\dots ,Y_{n-k+1:n}=Y_{j_{k}}\right) \cdot \exp \left\{ -\frac{1}{v}N(\emptyset )\right\} . \end{aligned}$$

Now, by using Proposition 2 of Spizzichino (2018) we can conclude

$$\begin{aligned} \mathbb {P}\left( X_{n:n}=X_{j_{n}},\dots ,X_{k:n}=X_{j_{k}},X_{n:n}\le v\right) \end{aligned}$$
$$\begin{aligned} =\eta _{j_{n}}\left( \emptyset \right) \cdot \eta _{j_{n-1}}(j_{n})\cdot \ldots \cdot \eta _{j_{k}}\left( j_{n},\dots ,j_{k+1}\right) \cdot \exp \left\{ -\frac{1}{v}N(\emptyset )\right\} . \end{aligned}$$

It is convenient now to introduce the following notation. We denote by \(\overline{G}_{\lambda _1,\dots ,\lambda _r}\) the survival function of the distribution obtained as the convolution of r exponential distributions with parameters \(\lambda _1,\dots ,\lambda _r\), respectively. The next result, too, can easily be obtained by resorting to THLS models, recalling the notation introduced in (19). We can state

Proposition 9

Let \(\left( X_{1},\dots ,X_{n}\right)\) be distributed according to a reversed time-homogeneous load-sharing model. We have, for any \(v>u>0\) and \(k=1,\dots ,n\),

$$\begin{aligned} \mathbb {P}\left( X_{k:n}<u|X_{n:n}=X_{j_{n}},\dots ,X_{k:n}=X_{j_{k}} ,X_{n:n}\le v \right) \nonumber \\ =\overline{G}_{N(\emptyset ),\dots ,N(j_{n},\dots ,j_{k+1})}\left( \frac{1}{u} -\frac{1}{v}\right) . \end{aligned}$$
(28)

Proof

Let us consider the variables \(Y_j=1/X_j\), \(j=1,\dots ,n\). Then, \((Y_1,\dots ,Y_n)\) follows a THLS model with the same parameters as the RTHLS model associated with \((X_1,\dots ,X_n)\). Hence, we have

$$\begin{aligned} \mathbb {P}\left( X_{k:n}<u|X_{n:n}=X_{j_{n}},\dots ,X_{k:n}=X_{j_{k}} ,X_{n:n}\le v \right) \\ =\mathbb {P}\left( \left. Y_{n-k+1:n}>\frac{1}{u}\right| Y_{1:n}=Y_{j_{n}},\dots ,Y_{n-k+1:n}=Y_{j_{k}},Y_{1:n}>\frac{1}{v}\right) \\ =\mathbb {P}\left( \left. Y_{n-k+1:n}>\frac{1}{u}-\frac{1}{v}\right| Y_{1:n}=Y_{j_{n}},\dots ,Y_{n-k+1:n}=Y_{j_{k}}\right) \\ =\overline{G}_{N(\emptyset ),\dots ,N(j_{n},\dots ,j_{k+1})}\left( \frac{1}{u}-\frac{1}{v}\right) , \end{aligned}$$

where the last equality follows by Proposition 4 of Spizzichino (2018).
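Both \(\overline{G}_{\lambda _1,\dots ,\lambda _r}\) and formula (28) can be illustrated numerically. The sketch below (in Python) implements \(\overline{G}\) for pairwise distinct rates through the classical partial-fraction formula for the hypoexponential distribution, and then checks (28) by Monte Carlo for \(n=3\), \(k=2\), under the assumption of the independent special case \(c_{j}(I)=c_{j}\) (all numerical values are our own illustrative choice); there, \(N(\emptyset )=6\) and \(N(\{3\})=c_{1}+c_{2}=3\):

```python
# Numerical illustration of (28) in the independent special case
# c_j(I) = c_j, so that X_j = 1/Y_j with Y_j ~ Exp(c_j).
import math
import random

def G_bar(lam, t):
    """Survival function of Exp(lam[0]) + ... + Exp(lam[r-1]) with pairwise
    distinct rates, via the classical partial-fraction formula:
    sum_i (prod_{j != i} lam_j / (lam_j - lam_i)) * exp(-lam_i * t)."""
    total = 0.0
    for i, li in enumerate(lam):
        w = 1.0
        for j, lj in enumerate(lam):
            if j != i:
                w *= lj / (lj - li)
        total += w * math.exp(-li * t)
    return total

rng = random.Random(3)
c = [1.0, 2.0, 3.0]            # N(emptyset) = 6, N({3}) = c_1 + c_2 = 3
v, u = 5.0, 2.0
exact = G_bar([6.0, 3.0], 1.0 / u - 1.0 / v)   # right-hand side of (28)

hits = total = 0
for _ in range(400_000):
    x = [1.0 / rng.expovariate(cj) for cj in c]
    # For n = 3, k = 2 the conditioning event X_{3:3}=X_3, X_{2:3}=X_1
    # determines the full ordering X_2 < X_1 < X_3.
    if x[1] < x[0] < x[2] and x[2] <= v:
        total += 1
        if x[0] < u:                           # the event {X_{2:3} < u}
            hits += 1

print(abs(hits / total - exact) < 0.025)
```

Note that the partial-fraction formula requires pairwise distinct rates; repeated rates call for an Erlang-type expression instead.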

We now turn to the computation of reliability characteristics of coherent systems. Several properties can be obtained by assuming a THLS model for the components’ lifetimes. In particular a special formula is obtained for the computation of the survival function of the lifetime \(T_{S}\) of a given system S, in terms of appropriate convolutions of exponential distributions (see Spizzichino (2018)). In the following, we will show a dual result under the assumption of a reversed THLS model.

Let us consider a system formed by n components \(C_1,\dots ,C_n\), whose lifetimes are non-negative random variables \(X_1,\dots ,X_n\). We assume that the joint probability distribution of \(X_1,\dots ,X_n\) is absolutely continuous and so ties among \(X_1,\dots ,X_n\) have probability zero. Let us indicate by \(T_S\) the lifetime of the system and by \(\hat{T}_{v,S}\) the inactivity time at time v, namely \(\hat{T}_{v,S}=v-T_S\).

Let \(\mathcal P_n\) denote the set of permutations of \(\{1,\dots ,n\}\) and let \(B_k\) denote the subset of \(\mathcal P_n\) formed by the permutations \(\pi\) such that the event \(\{X_{n:n}=X_{\pi (n)},\dots ,X_{k:n}=X_{\pi (k)}\}\) implies that the system fails at the k-th failure at the component level, i.e.

$$\begin{aligned} B_k=\{\pi \in \mathcal P_n : \text{ if } X_{n:n}=X_{\pi (n)},\dots ,X_{k:n}=X_{\pi (k)} \text{ then } E_k\}, \end{aligned}$$

where, for \(k=1,\dots ,n\), \(E_k\) is the event \(E_k=\{T_S=X_{k:n}\}\). The events \(E_k\) are closely related to the structure of the system. They are also related to the concepts of signature and dual signature; see Samaniego (2007) for further details.

Proposition 10

Let S be a system formed by n components whose lifetimes are non-negative random variables \(X_1,\dots ,X_n\) distributed according to a reversed time-homogeneous load-sharing model and let \(\hat{T}_{v,S}\) be the inactivity time of the system at time v. We have, for \(0<t<v\),

$$\begin{aligned} \mathbb P(\hat{T}_{v,S}\ge t|X_{n:n}\le v)= & {} \sum _{k=1}^n \sum _{\pi \in B_k} \overline{G}_{N(\emptyset ),\dots ,N(\pi (n),\dots ,\pi (k+1))}\left( \frac{t}{v(v-t)}\right) \cdot \\&\cdot \eta _{\pi (n)}(\emptyset )\cdots \eta _{\pi (2)}(\{\pi (n),\pi (n-1),\dots ,\pi (3)\}). \end{aligned}$$

Proof

Taking into account that \(\{B_1,\dots ,B_n\}\) is a partition of \(\mathcal P_n\), we can write

$$\begin{aligned} \begin{aligned} \mathbb P(\hat{T}_{v,S}\ge t|X_{n:n}\le v)&= \sum _{\pi \in \mathcal P_n} \mathbb P(\hat{T}_{v,S}\ge t|X_{n:n}=X_{\pi (n)},\dots ,X_{1:n}=X_{\pi (1)},X_{n:n}\le v)\cdot \\&\ \ \ \cdot \mathbb P(X_{n:n}=X_{\pi (n)},\dots ,X_{1:n}=X_{\pi (1)}| X_{n:n}\le v)\\&=\sum _{k=1}^n \sum _{\pi \in B_k} \mathbb P(T_S\le v-t|X_{n:n}=X_{\pi (n)},\dots ,X_{1:n}=X_{\pi (1)}, X_{n:n}\le v)\cdot \\&\ \ \ \cdot \mathbb P(X_{n:n}=X_{\pi (n)},\dots ,X_{1:n}=X_{\pi (1)})\\&=\sum _{k=1}^n \sum _{\pi \in B_k} \mathbb P(X_{k:n}\le v-t|X_{n:n}=X_{\pi (n)},\dots ,X_{1:n}=X_{\pi (1)}, X_{n:n}\le v)\cdot \\&\ \ \ \cdot \mathbb P(X_{n:n}=X_{\pi (n)},\dots ,X_{1:n}=X_{\pi (1)})\\&=\sum _{k=1}^n \sum _{\pi \in B_k} \overline{G}_{N(\emptyset ),\dots ,N(\pi (n),\dots ,\pi (k+1))}\left( \frac{t}{v(v-t)}\right) \cdot \\&\ \ \ \cdot \eta _{\pi (n)}(\emptyset )\cdots \eta _{\pi (2)}(\{\pi (n),\pi (n-1),\dots ,\pi (3)\}). \end{aligned} \end{aligned}$$
In the second equality we also used the fact that, by Proposition 8 and since \(\mathbb P(X_{n:n}\le v)=\exp \left\{ -N(\emptyset )/v\right\}\), the conditional probability \(\mathbb P(X_{n:n}=X_{\pi (n)},\dots ,X_{1:n}=X_{\pi (1)}|X_{n:n}\le v)\) coincides with the unconditional probability \(\mathbb P(X_{n:n}=X_{\pi (n)},\dots ,X_{1:n}=X_{\pi (1)})\).

In the following, based on the result of Proposition 10, we give examples of evaluation of the probability distribution of the inactivity time for two different systems.

Example 2

Let us consider a coherent system S formed by three components with lifetimes \(X_1,X_2,X_3\) and whose lifetime \(T_S\) is given by

$$\begin{aligned} T_S= \max \{ X_1, \min \{ X_2,X_3\}\}. \end{aligned}$$

Now, as far as the joint distribution of \(X_1,X_2,X_3\) is concerned, we simply consider the RTHLS model introduced in the previous Example 1.

We want to apply the result of Proposition 10 to evaluate the distribution of the inactivity time of the system. In order to do this, we need to establish how the partition \(\{ B_1,B_2,B_3\}\) of \(\mathcal P_3\) is composed. Here, we have

$$\begin{aligned} B_1=\emptyset , \end{aligned}$$
$$\begin{aligned} B_2= \{ (1,2,3),(1,3,2),(2,1,3),(3,1,2)\}, \end{aligned}$$
$$\begin{aligned} B_3=\{(2,3,1),(3,2,1)\}. \end{aligned}$$
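This partition can also be obtained mechanically: if the full ordering is \(X_{1:3}=X_{\pi (1)}\), \(X_{2:3}=X_{\pi (2)}\), \(X_{3:3}=X_{\pi (3)}\), then component j occupies rank h exactly when \(\pi (h)=j\), and \(T_S\) equals the order statistic of rank \(\max \{r_1,\min \{r_2,r_3\}\}\), where \(r_j\) denotes the rank of component j. A short enumeration sketch (in Python; function names are ours):

```python
# Enumeration of the partition {B_1, B_2, B_3} of P_3 for the system with
# lifetime T_S = max{X_1, min{X_2, X_3}}.
from itertools import permutations

def failure_level(pi):
    """Index k such that T_S = X_{k:3} under the ordering encoded by pi."""
    rank = {j: h for h, j in enumerate(pi, start=1)}   # rank[j] = h iff X_{h:3} = X_j
    return max(rank[1], min(rank[2], rank[3]))

B = {1: set(), 2: set(), 3: set()}
for pi in permutations((1, 2, 3)):
    B[failure_level(pi)].add(pi)

print(B[1])          # set()
print(sorted(B[2]))  # [(1, 2, 3), (1, 3, 2), (2, 1, 3), (3, 1, 2)]
print(sorted(B[3]))  # [(2, 3, 1), (3, 2, 1)]
```

The output matches the sets \(B_1\), \(B_2\), \(B_3\) listed above.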

Hence, for the inactivity time of the system, we have

$$\begin{aligned} \mathbb P(\hat{T}_{v,S}\ge t|X_{n:n}\le v)= & {} \sum _{k=2}^3 \sum _{\pi \in B_k} \overline{G}_{N(\emptyset ),\dots ,N(\pi (3),\dots ,\pi (k+1))}\left( \frac{t}{v(v-t)}\right) \cdot \\&\cdot \eta _{\pi (3)}(\emptyset )\eta _{\pi (2)}(\pi (3)) \\= & {} \sum _{\pi \in B_2} \overline{G}_{N(\emptyset ),N(\pi (3))}\left( \frac{t}{v(v-t)}\right) \cdot \\&\cdot \eta _{\pi (3)}(\emptyset )\eta _{\pi (2)}(\pi (3)) + \nonumber \\&+\sum _{\pi \in B_3} \overline{G}_{N(\emptyset )}\left( \frac{t}{v(v-t)}\right) \cdot \eta _{\pi (3)}(\emptyset )\eta _{\pi (2)}(\pi (3)). \end{aligned}$$

By recalling the identities (19)–(20), the related coefficients are described as follows. Regardless of \(\pi \in \mathcal P_3\), we have

$$\begin{aligned} N(\emptyset )=3, \ \ \ N(\pi (3))=2, \ \ \ \eta _{\pi (3)}(\emptyset )=1/3,\ \ \ \eta _{\pi (2)}(\pi (3))=1/2. \end{aligned}$$

Then, we can conclude

$$\begin{aligned} \mathbb P(\hat{T}_{v,S}\ge t|X_{n:n}\le v)= & {} 4 \overline{G}_{3,2}\left( \frac{t}{v(v-t)}\right) \cdot \frac{1}{6} + 2 \overline{G}_{3}\left( \frac{t}{v(v-t)}\right) \cdot \frac{1}{6} \\ \nonumber= & {} \frac{2}{3} \overline{G}_{3,2}\left( \frac{t}{v(v-t)}\right) + \frac{1}{3} \exp \left( -3 \frac{t}{v(v-t)}\right) . \end{aligned}$$
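The closed form above can be cross-checked. The coefficients it involves, \(N(\emptyset )=3\), \(N(\pi (3))=2\), \(\eta _{\pi (3)}(\emptyset )=1/3\), \(\eta _{\pi (2)}(\pi (3))=1/2\), also arise in the special model with all \(c_{j}(I)=1\), i.e. i.i.d. components with cdf \(\mathrm e^{-1/x}\) (an assumption made only for this check; the model of Example 1 need not coincide with it beyond these coefficients). Under that model, with \(Y_{j}=1/X_{j}\) i.i.d. standard exponentials and \(x=t/(v(v-t))\), a direct computation based on memorylessness gives \(\mathbb P(\hat{T}_{v,S}\ge t|X_{3:3}\le v)=\mathbb P(\min \{Y_{1},\max \{Y_{2},Y_{3}\}\}\ge x)=2\mathrm e^{-2x}-\mathrm e^{-3x}\), and one can verify (in Python) that the closed form reduces to the same expression, using \(\overline{G}_{3,2}(x)=3\mathrm e^{-2x}-2\mathrm e^{-3x}\):

```python
# Cross-check of the closed form above in the special model with all
# c_j(I) = 1 (i.i.d. components with cdf exp(-1/x)), which shares the
# coefficients N(emptyset) = 3, N(pi(3)) = 2 and the eta values used above.
import math

def G_bar_32(x):
    """Survival of the Exp(3) + Exp(2) convolution (distinct-rate formula)."""
    return 3 * math.exp(-2 * x) - 2 * math.exp(-3 * x)

def closed_form(x):
    """(2/3) G_bar_{3,2}(x) + (1/3) exp(-3x), with x = t/(v(v-t))."""
    return (2.0 / 3.0) * G_bar_32(x) + (1.0 / 3.0) * math.exp(-3 * x)

def direct_iid(x):
    """P(min(Y_1, max(Y_2, Y_3)) >= x) for Y_j i.i.d. Exp(1)."""
    return 2 * math.exp(-2 * x) - math.exp(-3 * x)

ok = all(abs(closed_form(x) - direct_iid(x)) < 1e-12
         for x in [0.0, 0.1, 0.5, 1.0, 3.0])
print(ok)  # True
```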

Concerning the computation of the convolution of several exponential distributions, one can resort to a wide literature; see e.g. Akkouchi (2008); Cramer and Kamps (2003) and references cited therein.

Maintaining the same RTHLS model as above for the components’ lifetimes, we now turn to the system \(\widetilde{S}\), the dual of S, whose lifetime is

$$\begin{aligned} T_{\widetilde{S}}=\min \{X_1,\max \{X_2,X_3\}\}. \end{aligned}$$

In this case, the partition \(\{ B_1,B_2,B_3\}\) of \(\mathcal P_3\) is given by

$$\begin{aligned} B_1=\{(1,2,3),(1,3,2)\}, \end{aligned}$$
$$\begin{aligned} B_2= \{ (2,1,3),(2,3,1),(3,1,2),(3,2,1)\}, \end{aligned}$$
$$\begin{aligned} B_3=\emptyset . \end{aligned}$$

Hence, for the inactivity time of the system \(\widetilde{S}\), we have

$$\begin{aligned} \mathbb P(\hat{T}_{v,\widetilde{S}}\ge t|X_{n:n}\le v)= & {} \sum _{\pi \in B_1} \overline{G}_{N(\emptyset ),N(\pi (3)),N(\pi (3),\pi (2))}\left( \frac{t}{v(v-t)}\right) \cdot \\&\cdot \eta _{\pi (3)}(\emptyset )\eta _{\pi (2)}(\pi (3)) + \\&+\sum _{\pi \in B_2} \overline{G}_{N(\emptyset ),N(\pi (3))}\left( \frac{t}{v(v-t)}\right) \cdot \eta _{\pi (3)}(\emptyset )\eta _{\pi (2)}(\pi (3)). \end{aligned}$$

The parameters of the form \(N(\emptyset )\) and N(i) (\(i=1,2,3\)) have already been computed above. As for the parameters of the form \(N(i_{1},i_{2})\), with \(i_{1}\ne i_{2}\), the special structure of this system entails that we only need the value \(N(2,3)=\varepsilon\). Then, we can conclude

$$\begin{aligned} \mathbb P(\hat{T}_{v,\widetilde{S}}\ge t|X_{n:n}\le v)= & {} 2 \overline{G}_{3,2,\varepsilon }\left( \frac{t}{v(v-t)}\right) \cdot \frac{1}{6} + 4 \overline{G}_{3,2}\left( \frac{t}{v(v-t)}\right) \cdot \frac{1}{6} \\= & {} \frac{1}{3} \overline{G}_{3,2,\varepsilon }\left( \frac{t}{v(v-t)}\right) + \frac{2}{3} \overline{G}_{3,2}\left( \frac{t}{v(v-t)}\right) . \end{aligned}$$

5 Conclusions

For an n-tuple of non-negative random variables \(X_{1},\dots ,X_{n}\), we have introduced the concept of multivariate conditional reversed hazard rate (m.c.r.h.r.) functions and the related class of special dependence models, which we termed Reversed Time-Homogeneous Load-Sharing models.

Such notions can in particular be of interest when studying the probabilistic behavior of the inactivity time of a coherent system made up of n interdependent components (\(X_{1},\dots ,X_{n}\) being the corresponding lifetimes). Starting from the relation existing between the univariate concepts of failure rate and reversed failure rate functions, the new notions are inspired by a principle of duality between “forward” and “backward” longitudinal observation.

In this frame we have established several basic properties of the m.c.r.h.r. functions, along with the exact relations between them and the ordinary m.c.h.r. functions. The latter functions have been fruitfully applied to different types of problems in applied probability, for instance in simulation, in the definition of multivariate stochastic orderings, dependence concepts, and multivariate ageing notions (see Shaked and Shanthikumar (2015); Shaked et al. (1994), in particular).

By employing the afore-mentioned relations and duality, analogous and/or different results might be obtained in terms of m.c.r.h.r. functions. In this paper we have confined our analysis to basic properties; additional studies may be the object of future work.

Potential results concerning applications of m.c.r.h.r. functions to multivariate stochastic orderings and to inactivity times of systems might in particular be combined in order to deal with the important problem of obtaining stochastic comparisons between inactivity times of different systems (see e.g. Misra et al. (2008); Navarro et al. (2017)).

Computation of the m.c.r.h.r. functions in the case of multivariate mixture models (see Belzunce et al. (2009); Li and Da (2010)) and related applications may also be an interesting issue.

Furthermore, one might investigate properties and applications of a discrete-time version of the m.c.r.h.r. functions (see Shaked et al. (1994, 1995), and references therein, for what concerns the discrete versions of the usual notions of multivariate failure rates).