Stochastic monotonicity of dependent variables given their sum

Given a finite set of independent random variables, assume one can observe their sum, and denote its value by s. Efron in 1965, and Lehmann in 1966, described conditions on the involved variables under which each of them stochastically increases in the value s, i.e., such that the expected value of any non-decreasing function of the variable increases as s increases. In this paper, we investigate conditions under which this stochastic monotonicity property is preserved when the assumption of independence is removed. Comparisons in the stronger likelihood ratio order are considered as well.


Introduction
Consider a sample {X_1, X_2, . . . , X_n} of independent and identically distributed random variables having finite expected value, and denote by S = X_1 + X_2 + · · · + X_n their sum. If one considers the expected value of any of the variables X_i given that S = s ∈ R, i.e., E[X_i|S = s], then it is easy to verify that E[X_i|S = s] = s/n; thus, such a conditional expected value of X_i increases in s. However, this property is no longer satisfied if a stronger stochastic comparison is considered, such as, for example, the usual stochastic order, as the following simple counterexample shows. To this aim, recall that, given the variables Y_1 and Y_2, then Y_1 is said to be smaller than Y_2 in the usual stochastic order (denoted by Y_1 ≤_ST Y_2) if E[φ(Y_1)] ≤ E[φ(Y_2)] for all non-decreasing functions φ for which the expectations exist, or, equivalently, if P[Y_1 > y] ≤ P[Y_2 > y] for all y ∈ R (see, e.g., Belzunce et al. 2015 or Shaked and Shanthikumar 2007 for details, properties and applications of the usual stochastic order and other stochastic comparisons).
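The identity E[X_i|S = s] = s/n is easy to check by simulation. The sketch below is only an illustration: the exponential distribution, sample size and bin width are arbitrary choices of ours; it estimates the conditional mean by averaging X_1 over the simulated samples whose sum falls near s.

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 200_000

# Simulate N i.i.d. samples of (X_1, ..., X_n) with standard exponential margins.
X = rng.exponential(1.0, size=(N, n))
S = X.sum(axis=1)

# Estimate E[X_1 | S = s] by conditioning on S falling in a small bin around s.
s = 2.0
in_bin = np.abs(S - s) < 0.1
cond_mean = X[in_bin, 0].mean()

print(cond_mean)  # close to s/n = 2/3
```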
The monotonicity of E[φ(X_i)|S = s] in s for a non-decreasing function φ can find a wide range of applications in different research contexts, for example, in statistical estimation and testing, when one can only observe the sum of the sample and must make inferences on the distribution of the X_i, or in applied probability modeling, where one can observe only the total number of individuals in a population but needs to make decisions based on the proportion of a specific sub-category of members.
For this reason, sufficient conditions for the expectation E[φ(X_i)|S = s] to be increasing in s for any non-decreasing function φ have been investigated and finally provided by Efron (1965), who proved the following statement. For its statement, recall that an absolutely continuous random variable X is said to have a logconcave density f_X if it satisfies ln f_X(λx + (1 − λ)y) ≥ λ ln f_X(x) + (1 − λ) ln f_X(y) for all λ ∈ (0, 1) and all x, y in the support of X. Logconcavity of the density is a well-known property, which is satisfied by a large number of remarkable distributions, such as the normal or the exponential distributions, and has a straightforward analogous definition for discrete random variables (see, e.g., Bagnoli and Bergstrom 2005 or Saumard and Wellner 2014 for two recent comprehensive surveys). Moreover, it must be pointed out that alternative nomenclatures are commonly used in the literature for this property, such as PF_2 (Pólya frequency functions of order 2) or ILR (increasing likelihood ratio) densities.
Proposition 1.1 (Efron 1965) Let {X_1, X_2, . . . , X_n} be a set of independent random variables having logconcave densities, let S = X_1 + X_2 + · · · + X_n be their sum, and let φ : R^n → R be a real measurable function non-decreasing in each of its arguments. Then, E[φ(X_1, X_2, . . . , X_n)|S = s] is a non-decreasing function of s.

Proposition 1.1 provides conditions for stochastic monotonicity in s of the whole random vector (X_1, X_2, . . . , X_n) given S = s, which, in general, is a stronger property than the stochastic monotonicity in s of [X_i|S = s] for any i = 1, . . . , n. Also, the assumption of identical distribution for the X_i is not required. However, independence is still required.
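Efron's monotonicity can be illustrated numerically. The sketch below is an illustration under assumed standard exponential variables (which have logconcave densities); it estimates E[φ(X_1, X_2)|S ≈ s] for the non-decreasing function φ(x_1, x_2) = 1{x_1 > 1} at increasing values of s. For two i.i.d. standard exponentials [X_1|S = s] is uniform on (0, s), so the exact value 1 − 1/s is available for comparison.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 400_000

# Two i.i.d. standard exponentials (logconcave densities, as Efron's theorem requires).
X1 = rng.exponential(1.0, N)
X2 = rng.exponential(1.0, N)
S = X1 + X2

# phi(x1, x2) = 1{x1 > 1} is non-decreasing in each argument.
estimates = []
for s in (2.0, 3.0, 4.0):
    in_bin = np.abs(S - s) < 0.1
    estimates.append((X1[in_bin] > 1.0).mean())

# For two i.i.d. exponentials, [X1 | S = s] is uniform on (0, s),
# so E[phi | S = s] = 1 - 1/s, which increases in s.
print(estimates)
```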
The stochastic monotonicity property stated in Proposition 1.1 can be of interest in a variety of fields. It has been applied, for example, in queueing theory (see, e.g., Masuda 1995 and Yao 1987), in economic theory (Edered 2010; Wang 2012), in stochastic comparisons of order statistics (Boland et al. 1996; Zhuang et al. 2010), in dependence modeling (Block et al. 1985; Hu and Hu 1999) and in statistical testing, estimation and regression (Sackrowitz 1987, 1990; Hwang and Stefanski 1994). An interesting and exhaustive list of references where the property has been applied can be found in Saumard and Wellner (2018). Moreover, alternative proofs or generalizations of this property have been provided in Daduna and Szekli (1996), where applications in queueing networks are considered, in , where a more general result, of which Proposition 1.1 is just a corollary, is proved, or in Liggett (2000), where a discrete version of the statement is obtained (with applications in modeling for interacting particle systems). A different interesting generalization is also described in the recent paper (Oudghiri 2021).

In particular, an important alternative result was proved one year later by Lehmann, in Example 12 in Lehmann (1966). In it, he showed that under the same assumptions on the variables X_i (except for one of them), the monotonicity property holds for a stronger stochastic order, i.e., for the likelihood ratio order. Given the variables Y_1 and Y_2 having densities g_1 and g_2, Y_1 is said to be smaller than Y_2 in the likelihood ratio order (denoted by Y_1 ≤_LR Y_2) if, and only if, the ratio g_1(y)/g_2(y) is non-increasing in y over the union of the supports of Y_1 and Y_2 (for details see, e.g., Belzunce et al. 2015 or Shaked and Shanthikumar 2007). It must be pointed out that the likelihood ratio order is stronger than the usual stochastic order, in the sense that if Y_1 ≤_LR Y_2, then Y_1 ≤_ST Y_2, but not vice versa.
Proposition 1.2 (Lehmann 1966) Let {X_1, X_2, . . . , X_n} be a set of independent random variables having logconcave densities, except possibly X_1, and let S = X_1 + X_2 + · · · + X_n be their sum. Then, [X_1|S = s] is non-decreasing in the likelihood ratio order in s, i.e., [X_1|S = s_1] ≤_LR [X_1|S = s_2] whenever s_1 ≤ s_2.

Note that, since the likelihood ratio order implies the usual stochastic order, under the same assumptions one has [X_1|S = s_1] ≤_ST [X_1|S = s_2] whenever s_1 ≤ s_2. Also note that in this statement, as in the previous one, independence between the variables X_i is assumed.
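Lehmann's result can be illustrated in a fully explicit case. For three i.i.d. standard exponentials it is known that X_1/S has a Beta(1, 2) distribution, so [X_1|S = s] has density g_s(x) = 2(1 − x/s)/s on (0, s); this particular family is chosen here purely for illustration. The sketch below checks numerically that g_{s_1}/g_{s_2} is non-increasing, as Proposition 1.2 predicts.

```python
import numpy as np

def g(x, s):
    # Density of [X1 | S = s] for three i.i.d. standard exponentials:
    # X1/S ~ Beta(1, 2), hence g_s(x) = 2(1 - x/s)/s on (0, s).
    return np.where((x > 0) & (x < s), 2.0 * (1.0 - x / s) / s, 0.0)

s1, s2 = 1.0, 2.0
x = np.linspace(1e-6, s1 - 1e-6, 1000)  # interior of the support of [X1|S = s1]
ratio = g(x, s1) / g(x, s2)

# The ratio decreases from about 2 at x = 0 toward 0 as x approaches s1,
# so [X1|S = s1] <=_LR [X1|S = s2].
print(ratio[0], ratio[-1])
```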
Among the main reasons of interest in Proposition 1.2 is the fact that for many parametric families of distributions the likelihood ratio order coincides with the ordering of the parameters. This is the case, for example, of the exponential family, the Poisson family, and the normal family (with respect to the mean μ, for fixed variance σ²). Thus, for example, uniformly most powerful tests based on the value of the statistic S can be determined for composite hypotheses on the parameter, according to the Karlin-Rubin theorem (see, e.g., Brown et al. 1976).
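For instance, in the Poisson family the likelihood ratio order coincides with the ordering of the means; a quick numerical confirmation, with illustrative parameter values of our choosing:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

lam1, lam2 = 2.0, 3.0  # lam1 <= lam2
ratios = [poisson_pmf(k, lam1) / poisson_pmf(k, lam2) for k in range(30)]

# The pmf ratio equals e^{lam2-lam1} (lam1/lam2)^k, non-increasing in k:
# Poisson(lam1) is smaller than Poisson(lam2) in the likelihood ratio order.
print(ratios[:3])
```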
In practical situations, however, the assumption of independence is often too restrictive. This is the case in many applied fields, such as reliability, where items subject to common environments are usually considered, or actuarial science, where policyholders may have family relationships or share the same media channels. In these cases the independence assumption is not fulfilled, and the above monotonicity properties can fail even though the logconcavity property of the involved variables is satisfied, as shown in the following counterexample.
Observe that the density of [X_1|S = s] at x is proportional to f(x, s − x), where f is the joint density of (X_1, X_2), which is zero outside its support. The ratio between the densities of [X_1|S = s_1] and [X_1|S = s_2], for s_1 ≤ s_2, is then defined on the union of the supports of [X_1|S = s_1] and [X_1|S = s_2] (that is, on (0, s_2)).
With straightforward calculations, it is easy to verify that such a ratio is increasing for x ∈ [0, s_1], but then it collapses to zero on (s_1, s_2]. For example, for α_1 = α_2 = 1, γ = 0.5, s_1 = 1 and s_2 = 2, the ratio assumes the values 1.81 for x = 0, 2.20 for x = 0.5, 2.56 as x → 1⁻, and 0 for x ∈ (1, 2). Thus, it is not monotone, and [X_1|S = s_1] and [X_1|S = s_2] are not comparable in the likelihood ratio order. Actually, they are also not comparable in the usual stochastic sense, since the corresponding survival functions intersect.
Taking also into account the fact that there are few results where the distribution of the sum of dependent random variables is available in a closed form (see, e.g., Navarro and Sarabia 2020 for a detailed discussion on this topic), it becomes important to understand when the properties of monotonicity described above are satisfied also for dependent variables, even without explicitly knowing the distribution of their sum. To the best of our knowledge, generalizations to dependent variables of Proposition 1.1 have been provided only in the recent paper (Saumard and Wellner 2018), while no generalizations of Proposition 1.2 are available in the literature.
Therefore, the aim of this paper is to provide such generalizations of Proposition 1.2, together with further generalizations of Proposition 1.1, in the case where the variables X_1, X_2, . . . , X_n are not independent. The new extensions of Proposition 1.1 provided here describe conditions on the joint distribution of the X_i that seem easier to verify, and show that the class of bivariate distributions satisfying the property is wider than the one described in Saumard and Wellner (2018). Also, some generalizations to random vectors having more than two components are provided here.
Together with this, monotonicity properties for [S|X_1 = x] in x, which follow easily from the main results, are presented as well. The rest of the paper is organized as follows. Section 2 considers the case of bivariate vectors (X_1, X_2), while the multivariate case, i.e., the case (X_1, X_2, . . . , X_n) for n > 2, is considered in Sect. 3. Illustrative examples are provided in both sections. Finally, some conclusions are given in Sect. 4.

The bivariate case
First we consider the generalization of Proposition 1.2, for which, given an absolutely continuous random vector (X_1, X_2), one can observe that the monotonicity of [X_1|S = s] in s in the likelihood ratio order is actually equivalent to a property of its joint density related to the notion of total positivity. To this aim, recall that a function φ : R² → R_+ is said to be Totally Positive of order 2 (shortly, TP_2) in its arguments (x_1, x_2) if, and only if, for any x, y ∈ R² it satisfies φ(x)φ(y) ≤ φ(x ∧ y)φ(x ∨ y), where the operators ∧ and ∨ denote the coordinatewise minimum and maximum, respectively.
Proposition 2.1 Let the vector (X_1, X_2) have a joint density f. Then, the following conditions are equivalent:

(a) f(x, s − x) is TP_2 in (x, s);
(b) [X_1|S = s] is non-decreasing in the likelihood ratio order in s;
(c) [S|X_1 = x] is non-decreasing in the likelihood ratio order in x.

Proof For the equivalence between points (a) and (b) observe that, for any x, s ∈ R, the density of [X_1|S = s] at x is f(x, s − x)/f_S(s), where f_S denotes the density of S. Taking into account that a function of one variable does not affect the TP_2 property, one can immediately observe that this ratio is TP_2 in (x, s) if, and only if, f(x, s − x) is. For the equivalence between points (a) and (c), one can reason as above, just observing that the density of [S|X_1 = x] at s is f(x, s − x)/f_{X_1}(x).

Let us see some examples of bivariate random vectors that satisfy the conditions of Proposition 2.1.
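The TP_2 property can be checked numerically on a grid by testing all 2×2 "minors". The sketch below, with function and grid choices that are ours purely for illustration, implements such a check and applies it to f(x, y) = e^{xy}, a classical TP_2 kernel, and to e^{−xy}, which is not TP_2.

```python
import numpy as np

def is_tp2_on_grid(f, xs, ys, tol=1e-9):
    """Check f(x1,y1)f(x2,y2) >= f(x1,y2)f(x2,y1) for all x1<=x2, y1<=y2 on the grid."""
    F = f(xs[:, None], ys[None, :])
    n, m = F.shape
    for i1 in range(n):
        for i2 in range(i1, n):
            for j1 in range(m):
                for j2 in range(j1, m):
                    if F[i1, j1] * F[i2, j2] < F[i1, j2] * F[i2, j1] - tol:
                        return False
    return True

xs = np.linspace(0.0, 2.0, 12)
ys = np.linspace(0.0, 2.0, 12)

ok_pos = is_tp2_on_grid(lambda x, y: np.exp(x * y), xs, ys)
ok_neg = is_tp2_on_grid(lambda x, y: np.exp(-x * y), xs, ys)
print(ok_pos, ok_neg)  # True False
```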

Example 2.1
Let (X_1, X_2) have a Gompertz distribution, i.e., be such that it has joint survival function F̄. With straightforward computations, one can verify that h(t) is logconcave, i.e., that the ratio h(t + s)/h(t) is decreasing in t for every s ≥ 0. Assume that α_1 < α_2, and observe that in this case, for a negative β (and βx + α_2 s ≥ 0), the required TP_2 inequality is satisfied for x_1 ≤ x_2 and s_1 ≤ s_2. Thus, for 0 < α_1 < α_2 and any θ ∈ [1, ∞), one can apply Proposition 2.1, obtaining that [X_1|S = s] is non-decreasing in the likelihood ratio order in s and that [S|X_1 = x] is non-decreasing in the likelihood ratio order in x.
Example 2.2 Let (X_1, X_2) have a bivariate Pareto distribution, i.e., be such that it has joint survival function F̄. It is easy to verify that f(x, s − x) is TP_2 in (x, s) if, and only if, α_1 ≥ α_2. In this case, [X_1|S = s] is non-decreasing in the likelihood ratio order in s. On the contrary, if α_1 ≤ α_2, then [X_2|S = s] is non-decreasing in the likelihood ratio order in s.
It is interesting to observe that X_1 and X_2, marginally, have Pareto distributions, i.e., they have densities f_i, i = 1, 2, which are logconvex. Thus, this example shows that logconcavity of the density is not a necessary condition for the monotonicity of [X_1|S = s] in the likelihood ratio order.
If the vector satisfies properties similar to those stated in Proposition 2.1, then Proposition 1.1 in the bivariate case can also be generalized to dependent variables. Since the comparison considered next is the usual stochastic order between random vectors, rather than between random variables, we recall here its definition. Given the random vectors Y_1 = (Y_{1,1}, Y_{1,2}, . . . , Y_{1,n}) and Y_2 = (Y_{2,1}, Y_{2,2}, . . . , Y_{2,n}), then Y_1 is said to be smaller than Y_2 in the usual stochastic order (denoted by Y_1 ≤_ST Y_2) if E[φ(Y_1)] ≤ E[φ(Y_2)] for all functions φ : R^n → R that are non-decreasing in each argument and for which the expectations exist. Equivalently, Y_1 ≤_ST Y_2 if P[Y_1 ∈ U] ≤ P[Y_2 ∈ U] for any upper set U ⊆ R^n, i.e., a set such that (y_{2,1}, y_{2,2}, . . . , y_{2,n}) ∈ U whenever y_{1,i} ≤ y_{2,i} for all i = 1, 2, . . . , n and (y_{1,1}, y_{1,2}, . . . , y_{1,n}) ∈ U (see Shaked and Shanthikumar 2007 for details).
To prove such a generalization, which is an adaptation to the case of dependent variables of the proof given in Efron (1965) for Proposition 1.1, we need a preliminary statement.
Lemma 2.1 Let g : R² → R_+ be a function which is TP_2 in its arguments and is defined on the whole R². If y_1 ≤ y_2 and equality holds in (1), then x_1 ≤ x_2.
Proof First observe that, by the well-known Basic Composition Formula (see, e.g., Karlin 1968), it follows that, for x_2 < ∞ and y_1 ≤ y_2, inequality (2) holds. Since g assumes nonnegative values, the equality in (1) can be obtained by reducing the upper extreme of integration in the integral in the numerator of the left-hand side of (2), i.e., for x_1 ≤ x_2.
We can now describe the conditions for a vector (X 1 , X 2 ) to satisfy the monotonicity in the usual stochastic order given the value of the sum S = X 1 + X 2 .

Proposition 2.2 Let the vector (X_1, X_2) have a joint density f such that both f(x, s − x) and f(s − x, x) are TP_2 in (x, s). Then,

[(X_1, X_2)|S = s_1] ≤_ST [(X_1, X_2)|S = s_2]

for any s_1 ≤ s_2.
Note that the bivariate stochastic order implies the upper and lower orthant orders (see Shaked and Shanthikumar 2007, p. 308) and so, under the assumptions of the preceding proposition, we get P[X_1 > x_1, X_2 > x_2|S = s_1] ≤ P[X_1 > x_1, X_2 > x_2|S = s_2] and P[X_1 ≤ x_1, X_2 ≤ x_2|S = s_1] ≥ P[X_1 ≤ x_1, X_2 ≤ x_2|S = s_2] for any x_1, x_2 and any s_1 ≤ s_2.
The following example, showing a case where Proposition 2.2 can be applied, deals with frailty models. The frailty approach, introduced in Vaupel et al. (1979), provides a tool in survival analysis to model the dependence of lifetimes on common environmental conditions. According to this model, the frailty (an unobservable random variable that describes common risk factors) acts simultaneously on the hazard functions of the lifetimes. The vector (X_1, X_2) is said to be described by a bivariate frailty model if its joint survival function is defined as F̄(x_1, x_2) = E[Ḡ(x_1)^V Ḡ(x_2)^V], (3) where V is a random variable taking values in a subset of R_+ and having cumulative distribution H, while Ḡ is any suitable survival function, commonly called the baseline survival function of the X_i (different from the common marginal survival function of X_1 and X_2 unless V = 1 a.s.). Note that this model is based on the assumption that the components of the vector are independent given the common frailty V. Further details on frailty models can be found in Navarro and Mulero (2020), where Time Transformed Exponential models (a generalization of frailty models) are considered.
In the particular case where the baseline survival function is of exponential type, as shown below, the vector satisfies the assumptions of both Proposition 2.1 and Proposition 2.2.

Example 2.3
Let (X_1, X_2) have a joint survival function defined as in (3), where Ḡ(x) = exp(−λx), with λ > 0, and where H is any cumulative distribution of a random environment taking values in R_+. Then, its joint density function, for x_1, x_2 ≥ 0, is f(x_1, x_2) = λ² E[V² e^{−λV(x_1+x_2)}], so that, for 0 < x < s, one has f(x, s − x) = λ² E[V² e^{−λVs}]. Being constant in x, the latter is TP_2 in (x, s). Similarly, also f(s − x, x) is TP_2 in (x, s). Thus, one can apply Proposition 2.1, obtaining that [X_i|S = s_1] ≤_LR [X_i|S = s_2], for any i = 1, 2, whenever s_1 ≤ s_2, and that [S|X_1 = x] ≤_LR [S|X_1 = y] whenever x ≤ y. Also, one can apply Proposition 2.2, obtaining [(X_1, X_2)|S = s_1] ≤_ST [(X_1, X_2)|S = s_2] whenever s_1 ≤ s_2.
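The constancy of f(x, s − x) in x means that, given S = s, X_1 is uniform on (0, s); equivalently, X_1/S is uniform on (0, 1). This can be verified by simulation; the sketch below assumes a Gamma-distributed frailty V, an arbitrary choice of ours (any distribution on the positive half-line would do).

```python
import numpy as np

rng = np.random.default_rng(2)
N, lam = 200_000, 1.0

# Frailty model with exponential baseline: given V = v, X1 and X2 are
# i.i.d. exponential with rate lam * v.
V = rng.gamma(shape=2.0, scale=1.0, size=N)
X1 = rng.exponential(1.0 / (lam * V))
X2 = rng.exponential(1.0 / (lam * V))
S = X1 + X2

# Since f(x, s - x) is constant in x, [X1 | S = s] is uniform on (0, s),
# i.e., U = X1/S is uniform on (0, 1). Compare the empirical CDF of U
# with the uniform CDF.
U = np.sort(X1 / S)
ecdf = np.arange(1, N + 1) / N
max_dev = np.max(np.abs(ecdf - U))
print(max_dev)  # small: U is uniform on (0, 1)
```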

Remark 2.1
It must be pointed out that the assumptions of Proposition 2.1 and of Proposition 2.2 are not satisfied by every frailty model, as the following example shows. Let (X_1, X_2) have joint survival function defined as in (3), where Ḡ(x) = 1 − x, with x ∈ [0, 1], and where V has exponential distribution with hazard rate λ = 1. With straightforward calculations, one can get its joint density, which is not TP_2 in (x, s); for example, the TP_2 inequality fails for s_1 = 0.5, s_2 = 0.9 and x_1 = 0.05.

Proposition 2.2 can be extended to (φ_1(X_1), φ_2(X_2)), given the value of the sum S* = φ_1(X_1) + φ_2(X_2), for increasing functions φ_1 and φ_2 as follows. The proof, being easy, is omitted; on the other hand, the conditions described in the statement are quite strong.
for any s 1 ≤ s 2 .
The following statements provide simple sufficient conditions for a joint bivariate density to satisfy the conditions of Propositions 2.1 and 2.2.

Proposition 2.4 Let the vector (X_1, X_2) have a joint density f. If f(x_1, x_2) is TP_2 in (x_1, x_2) and logconcave in x_2 for every x_1, then f(x, s − x) is TP_2 in (x, s). Moreover, if f(x_1, x_2) is also logconcave in x_1 for every x_2, then also f(s − x, x) is TP_2 in (x, s).

Proof It is enough to verify inequality (4) for any y ∈ R, x_1 ≤ x_2 and ε_1, ε_2 > 0. Note that from the TP_2 property of f one obtains (5), while from logconcavity of f when the first argument is fixed one obtains (6). From (5) and (6), inequality (4) follows, thus the assertion. The TP_2 property in (x, s) of f(s − x, x) when f is logconcave in x_1 for every x_2 can be proved in the same manner.

Proposition 2.4 can be applied, for example, when one knows the marginal distributions of X_1 and X_2 and the connecting copula, or the survival copula, of (X_1, X_2) (see, e.g., Nelsen 2006 for the definition of the copula of a random vector).

Example 2.4 Let the vector (X_1, X_2) have a survival copula Ĉ and marginal univariate survival functions F̄_1 and F̄_2, i.e., let F̄(x_1, x_2) = Ĉ(F̄_1(x_1), F̄_2(x_2)) be its joint survival function. Then, as one can easily verify, its joint density can be expressed as f(x_1, x_2) = c(F̄_1(x_1), F̄_2(x_2)) f_1(x_1) f_2(x_2), (7) for all (x_1, x_2) in the support of (X_1, X_2), where c is the second mixed partial derivative of Ĉ, while f_1 and f_2 are the marginal densities (assuming all of them exist). From (7) it immediately follows that f(x_1, x_2) is TP_2 in (x_1, x_2) whenever the copula density c is TP_2. This latter property of copulas is satisfied by a number of well-known copulas, such as, for example, the Clayton copula, for which it holds for any value of its parameter θ ∈ (0, ∞) (see, e.g., Tenzer and Elidan 2016, where a list of copulas having TP_2 density is provided). Now note that logconcavity of f(x_1, x_2) in x_1 for every x_2 is satisfied if the corresponding ratio of densities is non-increasing in x_1 for all y ≥ 0 and v ∈ [0, 1]. This monotonicity, in turn, is satisfied if X_1 has a logconcave density, and if the copula and the marginal survival function F̄_1 are such that the ratio (8) is non-increasing in x_1 for all y ≥ 0 and v ∈ (0, 1). If, for example, X_1 has an exponential distribution, then the ratio (8) decreases if, and only if, c(au, v)/c(u, v) increases in u for all a, v ∈ (0, 1). It turns out that if (X_1, X_2) has a Clayton survival copula and exponentially distributed margins, then both [X_1|S = s] and [X_2|S = s] are non-decreasing in the likelihood ratio order in s.
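The TP_2 property of the Clayton copula density can be confirmed numerically. In the sketch below the density formula is the standard second mixed partial derivative of the Clayton copula, while the grid and the parameter value θ = 2 are arbitrary choices of ours.

```python
import numpy as np

def clayton_density(u, v, theta):
    # Second mixed partial derivative of C(u,v) = (u^-theta + v^-theta - 1)^(-1/theta).
    A = u**(-theta) + v**(-theta) - 1.0
    return (theta + 1.0) * (u * v)**(-theta - 1.0) * A**(-1.0 / theta - 2.0)

theta = 2.0
grid = np.linspace(0.05, 0.95, 15)
C = clayton_density(grid[:, None], grid[None, :], theta)

# Check all 2x2 minors: c(u1,v1)c(u2,v2) >= c(u1,v2)c(u2,v1) for u1<=u2, v1<=v2.
ok = all(
    C[i1, j1] * C[i2, j2] >= C[i1, j2] * C[i2, j1] - 1e-12
    for i1 in range(15) for i2 in range(i1, 15)
    for j1 in range(15) for j2 in range(j1, 15)
)
print(ok)  # True: the Clayton density is TP2
```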
For the next statement recall that, as in the univariate case, a function f : R^n → R is said to be logconcave if it satisfies ln f(λx + (1 − λ)y) ≥ λ ln f(x) + (1 − λ) ln f(y) for all λ ∈ (0, 1) and all x, y ∈ R^n.
Proposition 2.5 Let the vector (X_1, X_2) have a joint density f(x_1, x_2) which is logconcave and TP_2 in (x_1, x_2). Then:

(a) [X_i|S = s] is non-decreasing in the likelihood ratio order in s, for i = 1, 2;
(b) [S|X_i = x] is non-decreasing in the likelihood ratio order in x, for i = 1, 2;
(c) [(X_1, X_2)|S = s_1] ≤_ST [(X_1, X_2)|S = s_2] for any s_1 ≤ s_2;
(d) [(φ_1(X_1), φ_2(X_2))|S = s_1] ≤_ST [(φ_1(X_1), φ_2(X_2))|S = s_2] for any s_1 ≤ s_2 and any non-decreasing functions φ_1 and φ_2.
Proof For the proof, it is enough to observe that log f_{X_2|X_1=x_1}(x_2) = log f(x_1, x_2) − log f_{X_1}(x_1). For fixed x_1 the term log f_{X_1}(x_1) is constant, while log f(x_1, x_2) is concave, by definition of logconcavity. Thus, [X_2|X_1 = x_1] has a logconcave density. Similarly, one can prove that [X_1|X_2 = x_2] has a logconcave density. Thus, one can apply Proposition 2.4, obtaining that both f(x, s − x) and f(s − x, x) are TP_2 in (x, s). The assertions (a) and (b) now follow from Proposition 2.1, and assertion (c) from Proposition 2.2. The proof of (d) is a consequence of (c) and Theorem 6.B.20 in Shaked and Shanthikumar (2007), p. 276.
The following is an example of application of Proposition 2.5.
Example 2.5 As stated in Proposition 1.2 of Abdous et al. (2005), the required condition holds where T = {t ∈ R : φ(t) < 0}. This condition is actually satisfied for every r ≥ 0 (see Fang et al. 1990 for details). When g(t) = exp(−βt^α), then log g(t) = −βt^α, which is concave for any α ≥ 1; thus, f satisfies the assumptions of Proposition 2.5 for that g when 1 ≤ α ≤ (1 − r)^{−1} and β > 0.
Note that, as a particular case for α = 1 and β = 1/2, this example includes the bivariate normal distributions, whose density is always T P 2 when the covariance between X 1 and X 2 is non-negative (see, e.g., Theorem 3.3 in Fang et al. (2002)).
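The TP_2 property of the bivariate normal density for non-negative correlation can be checked directly on a grid; indeed, the cross-partial of the log-density equals ρ/(1 − ρ²), which is non-negative exactly when ρ ≥ 0. The sketch below uses illustrative values ρ = ±0.5 and an arbitrary grid.

```python
import numpy as np

def bvn_density(x, y, rho):
    # Standard bivariate normal density with correlation rho.
    z = (x**2 - 2.0 * rho * x * y + y**2) / (2.0 * (1.0 - rho**2))
    return np.exp(-z) / (2.0 * np.pi * np.sqrt(1.0 - rho**2))

def is_tp2(F, tol=1e-12):
    n, m = F.shape
    return all(
        F[i1, j1] * F[i2, j2] >= F[i1, j2] * F[i2, j1] - tol
        for i1 in range(n) for i2 in range(i1, n)
        for j1 in range(m) for j2 in range(j1, m)
    )

g = np.linspace(-2.0, 2.0, 13)
t_pos = is_tp2(bvn_density(g[:, None], g[None, :], 0.5))
t_neg = is_tp2(bvn_density(g[:, None], g[None, :], -0.5))
print(t_pos, t_neg)  # True False: TP2 holds precisely for non-negative correlation
```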

The multivariate case
Multivariate random vectors (X_1, X_2, . . . , X_n), with n > 2, are considered in this section, and a few examples are provided where [X_1|S = s] is monotone in s (in the likelihood ratio order) and [(X_1, . . . , X_n)|S = s] is monotone in s (in the usual stochastic order), where S = X_1 + · · · + X_n. First observe that, from Proposition 2.5 (a) and (b), the following statement easily follows.
Proposition 3.1 Given the vector (X_1, X_2, . . . , X_n), let Y_i = Σ_{j≠i} X_j. If for any i the vector (X_i, Y_i) has a joint density f(x, y) which is logconcave and TP_2 in (x, y), then [X_i|S = s] is non-decreasing in the likelihood ratio order in s, and [S|X_i = x] is non-decreasing in the likelihood ratio order in x.
Note that the likelihood ratio order implies the usual stochastic order and so, under the assumptions of the preceding proposition, we get P[X_i > x_i|S = s_1] ≤ P[X_i > x_i|S = s_2] and E[φ(X_i)|S = s_1] ≤ E[φ(X_i)|S = s_2] for any x_i, any s_1 ≤ s_2 and any increasing function φ such that these conditional expectations exist.
As an immediate example of application of this statement, one gets that the monotonicity in s of [X 1 |S = s] in the likelihood ratio order can be satisfied for multivariate normal distributions, as stated in the following corollary.
Corollary 3.1 Let (X_1, X_2, . . . , X_n) have a N(μ, Σ) distribution, fix any i = 1, . . . , n, and define Y_i = Σ_{j≠i} X_j. By closure properties of normal distributions, the vector (X_i, Y_i) has a bivariate normal distribution; thus, it has a logconcave density. Moreover, by Theorem 3.3 in Fang et al. (2002) (see also the remark before Proposition 1.2 in Abdous et al. 2005), the density of (X_i, Y_i) satisfies the TP_2 property if Σ_{j≠i} Cov(X_i, X_j) ≥ 0. Thus, from Proposition 3.1, one has that [X_i|S = s] is non-decreasing in the likelihood ratio order in s.
Example 3.1 Let Y (having normal distribution) be a signal from an item, which describes its working state, and assume the item fails when Y < 0. Assume also that Y cannot be directly read, since its reading is subject to a number n of noises, so that what one can actually read is the "proxy" variable S = Y + X_1 + · · · + X_n, where the X_i represent the noises. If the signal and the noises are described by a vector (Y, X_1, . . . , X_n) having a multivariate normal distribution, then by Corollary 3.1 one has that P[Y > t|S = s] is non-decreasing in s for all t ∈ R; thus, P[Y < 0|S = s] is non-increasing in s. It follows that if the reading of the signal is positive, i.e., if s > 0, then the probability of failure of the item has the upper bound P[Y < 0|S = 0], which can be easily calculated given the parameters μ, Σ of the vector (Y, X_1, . . . , X_n). Moreover, assume that T = φ(Y) is a non-decreasing function of the signal Y, representing a performance of the item. Corollary 3.1 also shows that the regression E[T|S = s] is monotone as well, when Σ_{i=1}^n Cov(Y, X_i) ≥ 0, so that the regression function with measurement error is a good proxy of the "true" regression function in the sense described in Hwang and Stefanski (1994), even if the noises are not independent of Y (as is assumed, on the contrary, in Hwang and Stefanski 1994).
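The monotonicity of P[Y < 0|S = s] in this example can be computed exactly from the joint normality of (Y, S), since Y|S = s is normal with mean linear in s. The sketch below uses an illustrative covariance structure of our choosing, with non-negative covariances between Y and the noises as Corollary 3.1 requires.

```python
import math

# Illustrative covariance matrix for (Y, X1, X2): variances on the diagonal,
# non-negative covariances of Y with the noises (hypothetical values).
Sigma = [[1.0, 0.2, 0.3],
         [0.2, 1.0, 0.0],
         [0.3, 0.0, 1.0]]
mu = [1.0, 0.0, 0.0]  # E[Y] = 1, centered noises

# S = Y + X1 + X2 is normal and (Y, S) is jointly normal, so Y | S = s is
# normal with mean and variance given by the usual conditioning formulas.
var_S = sum(Sigma[i][j] for i in range(3) for j in range(3))
cov_YS = sum(Sigma[0][j] for j in range(3))
mu_S = sum(mu)
cond_var = Sigma[0][0] - cov_YS**2 / var_S

def prob_failure(s):
    cond_mean = mu[0] + cov_YS / var_S * (s - mu_S)
    # P[Y < 0 | S = s] via the standard normal CDF, written with math.erf.
    return 0.5 * (1.0 + math.erf((0.0 - cond_mean) / math.sqrt(2.0 * cond_var)))

probs = [prob_failure(s) for s in (-1.0, 0.0, 1.0, 2.0, 3.0)]
print(probs)  # non-increasing in s, since cov_YS > 0
```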
Another interesting case where the monotonicity property in the likelihood ratio order is satisfied is that of vectors having Schur-constant joint survival functions, whose definition is recalled here. A vector (X_1, X_2, . . . , X_n) of random lifetimes (i.e., of non-negative random variables) is said to have a Schur-constant joint survival function if, for x_i ≥ 0, i = 1, 2, . . . , n, F̄(x_1, x_2, . . . , x_n) = Ḡ(x_1 + x_2 + · · · + x_n), where Ḡ is a non-increasing function, continuous from the right, such that Ḡ(0) = 1, lim_{t→∞} Ḡ(t) = 0 and other conditions for which it defines a bona fide joint survival function (see Caramellino and Spizzichino 1994 for details). The family of Schur-constant survival functions is an important family that has been extensively considered in a variety of applied fields such as reliability and insurance; we refer the reader to Caramellino and Spizzichino (1994) and references therein for applications in reliability, and to the recent paper (Genest and Kolev 2021) for applications in extensions of the law of uniform seniority for insurance contracts to the case of dependent lifetimes.
Proposition 3.2 Let the vector (X 1 , X 2 , . . . , X n ) have a Schur-constant joint survival function. Then, for any i = 1, 2, . . . , n, one has that [X i |S = s] is non-decreasing in s in the likelihood ratio order and that [S|X i = x] is non-decreasing in x in the likelihood ratio order.
If φ is non-decreasing and 0 < s_1 ≤ s_2, then we get the desired inequality, which concludes the proof.
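For n = 2, Proposition 3.2 can be verified directly: when Ḡ is twice differentiable, the joint density of a Schur-constant vector is Ḡ″(x_1 + x_2), a function of the sum only, so f(x, s − x) is constant in x and [X_1|S = s] is uniform on (0, s); uniform distributions on (0, s) are clearly non-decreasing in s in the likelihood ratio order. A numerical sketch with the illustrative choice Ḡ(t) = (1 + t)^{−3}:

```python
import numpy as np

# Schur-constant model with survival function Gbar(t) = (1+t)^(-3); for n = 2
# the joint density is the second derivative of Gbar evaluated at x1 + x2.
def joint_density(x1, x2):
    return 12.0 * (1.0 + x1 + x2) ** (-5.0)

s = 2.0
x = np.linspace(0.01, s - 0.01, 500)
vals = joint_density(x, s - x)

# f(x, s - x) does not vary with x, hence [X1 | S = s] is uniform on (0, s).
print(vals.max() - vals.min())  # zero up to rounding
```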
Note that from Theorem 6.B.16 in Shaked and Shanthikumar (2007), p. 273, the ST ordering obtained in the preceding proposition can be extended to (φ(X 1 , . . . , X n )|S = s) in s for any non-decreasing function φ : R n → R k .
It must be observed that Example 2.3 is, actually, a corollary of both Propositions 3.2 and 3.3, since the frailty model with exponential baseline survival functions reduces to a Schur-constant model.

Conclusions
We have studied monotonicity properties of dependent random variables conditioned on their sum, and we have obtained several results that extend the classic results for independent random variables. We have considered both the likelihood ratio order and the usual stochastic order in its univariate and multivariate versions.
The main task for future research could be the extension of the result given in Proposition 2.2 to the multivariate case and/or to other (stronger) stochastic orders. Proposition 3.3 can be seen as a first step in that direction. Another task could be to find more models in which the conditions assumed here are satisfied, so that the monotonicity properties hold. Inference tools to check those conditions in practice should be investigated as well.