Accelerating the Pool-Adjacent-Violators Algorithm for Isotonic Distributional Regression

In the context of estimating stochastically ordered distribution functions, the pool-adjacent-violators algorithm (PAVA) can be modified such that the computation times are reduced substantially. This is achieved by studying the dependence of antitonic weighted least squares fits on the response vector to be approximated.


Introduction
Let X be a set equipped with a binary relation ⪯, for instance some partial order. The general problem is as follows: For m ≥ 2 pairs (x_1, z_1), …, (x_m, z_m) ∈ X × ℝ and weights w_1, …, w_m > 0, let

A(z) := arg min_{f ∈ ℝ^m_{↓,x}} ∑_{j=1}^m w_j (z_j − f_j)²,   (1)

where ℝ^m_{↓,x} := {f ∈ ℝ^m : f_i ≥ f_j whenever x_i ⪯ x_j}. Suppose that z^{(0)}, z^{(1)}, …, z^{(n)} are vectors in ℝ^m such that for 1 ≤ t ≤ n, the two vectors z^{(t−1)} and z^{(t)} differ only in a few components, and our task is to compute all antitonic (i.e. monotone decreasing) approximations A(z^{(0)}), A(z^{(1)}), …, A(z^{(n)}). We show that A(z^{(t)}) can be computed efficiently, provided that A(z^{(t−1)}) is already known. Briefly speaking, this is achieved by noticing that A(z^{(t−1)}) and A(z^{(t)}) share some identical components, and that the remaining components of A(z^{(t)}) can be determined directly from A(z^{(t−1)}) and z^{(t)} with only a few operations.
The efficient computation of a sequence of antitonic approximations appears naturally in the context of isotonic distributional regression, see Henzi et al. (2021), Mösching and Dümbgen (2020) and Jordan et al. (2021). There, one observes random pairs (X_1, Y_1), (X_2, Y_2), …, (X_n, Y_n) in X × ℝ such that, conditional on (X_i)_{i=1}^n, the random variables Y_1, Y_2, …, Y_n are independent with distribution functions F_{X_1}, F_{X_2}, …, F_{X_n}, where (F_x)_{x∈X} is an unknown family of distribution functions. Then the goal is to estimate the latter family under the sole assumption that F_x ≥ F_{x′} pointwise whenever x ⪯ x′. This notion of ordering of distributions is known as stochastic ordering, or first order stochastic dominance. Isotonic distributional regression leads to the aforementioned least squares problem, where x_1, …, x_m denote the distinct elements of {X_1, X_2, …, X_n}, the weights are w_i := #{k : X_k = x_i}, and, with y_1 ≤ ⋯ ≤ y_n denoting the ordered response values, z^{(t)} has components z_i^{(t)} = w_i^{−1} ∑_{k : X_k = x_i} 1[Y_k ≤ y_t].

Section 2 provides some facts about monotone least squares which are useful for the present task. For a complete account and derivations, we refer to Barlow et al. (1972) and Robertson et al. (1988). Section 3 then shows how to turn this into an efficient computation scheme in the case of a total order ⪯. Finally, we discuss the specific application to isotonic distributional regression and provide numerical experiments which show that the computation times of the naive approach are decreased substantially with our procedure.
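To fix ideas, the following Python sketch builds such a sequence of vectors z^{(t)} for a totally ordered covariate. The toy arrays X and Y and the convention that z_i^{(t)} is the fraction of responses at x_i not exceeding the t-th smallest response are our own illustration, not taken from the cited papers:

```python
import numpy as np

# Toy data (our own illustration): covariates X and responses Y.
X = np.array([2.0, 1.0, 2.0, 3.0, 1.0])
Y = np.array([0.3, 1.2, 0.7, 0.1, 0.9])

xs = np.unique(X)                                  # distinct x_1 < ... < x_m
w = np.array([(X == x).sum() for x in xs], float)  # weights w_i = #{k : X_k = x_i}
ys = np.sort(Y)                                    # thresholds y_1 <= ... <= y_n

# z^(t) has components z_i = fraction of responses at x_i with Y <= y_t;
# successive vectors z^(t-1), z^(t) differ in only a few (here: one) components.
Z = [np.array([(Y[X == x] <= y).mean() for x in xs]) for y in ys]
```

Each step t increases only the component belonging to the covariate value of the t-th smallest response, which is exactly the situation in which the sequential scheme of Section 3 pays off.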

Some Facts About Antitonic Least Squares Estimation
Since the sum on the right hand side of (1) is a strictly convex and coercive function of f ∈ ℝ m , and since ℝ m ↓,x is a closed and convex set, A(z) is well-defined. It possesses several well-known characterizations, two of which are particularly useful for our considerations.
The first characterization uses local weighted averages. Let us first introduce some notation. In this article, upper, lower and level sets are viewed as subsets of {1, …, m} inheriting the structure of (X, ⪯). More precisely, a set U ⊂ {1, …, m} is an upper set if i ∈ U and x_i ⪯ x_j imply that j ∈ U. A set L ⊂ {1, …, m} is a lower set if j ∈ L and x_i ⪯ x_j imply that i ∈ L. The families of all upper and all lower sets are denoted by 𝒰 and 𝓛, respectively. For a non-empty set S ⊂ {1, …, m}, its weight and the weighted average of z over S are defined as w_S := ∑_{i∈S} w_i and M_S(z) := w_S^{−1} ∑_{i∈S} w_i z_i, respectively.

Characterization I For any index 1 ≤ j ≤ m,

A_j(z) = min_{U ∈ 𝒰 : j∈U} max_{L ∈ 𝓛 : j∈L} M_{L∩U}(z) = max_{L ∈ 𝓛 : j∈L} min_{U ∈ 𝒰 : j∈U} M_{L∩U}(z).

For all vectors f ∈ ℝ^m, numbers μ ∈ ℝ and relations ⋉ in {<, ≤, =, ≥, >}, let [f ⋉ μ] := {i ∈ {1, …, m} : f_i ⋉ μ}.

Characterization II A vector f ∈ ℝ^m_{↓,x} equals A(z) if and only if, for each r ∈ {f_1, …, f_m} and all L ∈ 𝓛 and U ∈ 𝒰 with L ∩ [f = r] ≠ ∅ ≠ U ∩ [f = r],

M_{L∩[f=r]}(z) ≤ r ≤ M_{U∩[f=r]}(z).

In particular, r = M_{[f=r]}(z). One possible reference for Characterizations I and II is Domínguez-Menchero and González-Rodríguez (2007). They treat the case of a quasi-order ⪯ and more general target functions ∑_{j=1}^m h_j(f_j) to be minimized over f ∈ ℝ^m_{↓,x}. For the present setting with an arbitrary binary relation ⪯ and weighted least squares, a relatively short and self-contained derivation of these two characterizations is available from the authors upon request.
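For a total order, the sets L ∩ U containing j are exactly the index ranges {a, …, b} with a ≤ j ≤ b, so both characterizations are easy to check numerically. A small Python sketch (the vector z and the unit weights are our own illustrative choice):

```python
import numpy as np

z = np.array([3.0, 1.0, 4.0, 1.0, 5.0])  # illustrative data, unit weights
m = len(z)

def seg_mean(a, b):
    """Average of z over the index range {a, ..., b} (unit weights)."""
    return z[a:b + 1].mean()

# Characterization I (total order): A_j(z) = min_{a <= j} max_{b >= j} M_{a..b}(z).
A = np.array([min(max(seg_mean(a, b) for b in range(j, m))
                  for a in range(j + 1)) for j in range(m)])

assert all(A[i] >= A[i + 1] for i in range(m - 1))   # A(z) is antitonic

# Consequence of Characterization II: each level set average equals its value.
for r in np.unique(A):
    assert np.isclose(z[A == r].mean(), r)
```

This brute-force evaluation of the min–max formula is only meant as a numerical check; it is far slower than the PAVA discussed in Section 3.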
The next lemma summarizes some facts about changes in A(z) if some components of z are increased.
Lemma 2.1 Let z, z̃ ∈ ℝ^m such that z̃ ≥ z component-wise. Then the following conclusions hold true for f := A(z), f̃ := A(z̃) and K := {k : z̃_k > z_k}:

(i) f̃ ≥ f component-wise.

[Figure 1: The values of f and f̃ coincide on the orange and yellow regions in the top right corner, which is indicated by saturated colors. Furthermore, when passing from z to z̃, the slightly transparent pink, blue and green level sets on the bottom left (including the point x_{j_o}) can only be merged, but never be split; this follows from part (iv) of Lemma 2.1. Finally, for all points in the faded pink, blue and green areas, there is no statement about the behavior of the antitonic regression when passing from z to z̃.]

Proof of Lemma 2.1 Part (i) is a direct consequence of Characterization I.
(2)

This inequality and part (i) show that f̃_i = f_i. Part (iv) follows directly from parts (i) and (iii). Let i and j be different indices such that …

The Special Case of a Total Order

If one replaces the binary relation ⪯ by a total order ≤ on X, as for example in the case of the usual total order on a subset of ℝ, the conclusions of Lemma 2.1 take a simpler form. In the case of a total order, we assume that the covariates are ordered as follows:

x_1 < x_2 < ⋯ < x_m.

A Sequential Algorithm for Total Orders
Lemma 2.1 is potentially useful to accelerate algorithms for isotonic distributional regression with arbitrary partial orders, possibly in conjunction with the recursive partitioning algorithm by Luss and Rosset (2014), but this will require additional research. Now we focus on improvements of the well-known pool-adjacent-violators algorithm (PAVA) for a total order.

General Considerations
In what follows, we assume that x_1 < x_2 < ⋯ < x_m, and we write ℝ^m_↓ for the set of antitonic vectors f ∈ ℝ^m, i.e. f_1 ≥ f_2 ≥ ⋯ ≥ f_m. To understand the different variants of the PAVA, let us recall two basic facts about A(z). For a partition P = (P_1, …, P_d) of {1, …, m} into blocks of consecutive indices, let ℝ^m_P be the set of vectors f ∈ ℝ^m such that f_i = f_j whenever i, j belong to the same block of P.
Fact 1 Let r_1 > ⋯ > r_d be the sorted elements of {A_i(z) : 1 ≤ i ≤ m}, and let P consist of the blocks P_s = {i : A_i(z) = r_s}. Then r_s = M_{P_s}(z) for 1 ≤ s ≤ d.

Fact 2 Suppose that A(z) ∈ ℝ^m_P and that M_{P_s}(z) ≤ M_{P_{s+1}}(z) for some 1 ≤ s < d. Then A(z) ∈ ℝ^m_{P′}, where P′ is obtained from P by replacing P_s and P_{s+1} with the single block P_s ∪ P_{s+1}. That means, one may replace P with this coarser partition and still A(z) ∈ ℝ^m_{P′}.

Fact 1 is a direct consequence of Characterization II. To verify Fact 2, suppose that f ∈ ℝ^m_↓ ∩ ℝ^m_P is such that f_i = r_s for i ∈ P_s, f_i = r_{s+1} for i ∈ P_{s+1}, and r_s > r_{s+1}. Now we show that f cannot be equal to A(z). For t ∈ [0, 1] let f^{(t)} ∈ ℝ^m_P be given by

f^{(t)}_i := (1 − t) f_i + t M_{P_s ∪ P_{s+1}}(z) for i ∈ P_s ∪ P_{s+1}, and f^{(t)}_i := f_i otherwise,

so for sufficiently small t > 0, f^{(t)} ∈ ℝ^m_↓ and attains a strictly smaller value of the sum in (1) than f^{(0)} = f. Hence f ≠ A(z).

Facts 1 and 2 already indicate a general PAV strategy to compute A(z): One starts with the finest partition P = ({1}, …, {m}). As long as P contains two neighboring blocks P_s and P_{s+1} such that M_{P_s}(z) ≤ M_{P_{s+1}}(z), the partition P is coarsened by replacing P_s and P_{s+1} with the block P_s ∪ P_{s+1}.
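This strategy translates directly into code. The following Python sketch (our own illustration for a total order, not the authors' implementation) repeatedly pools adjacent blocks whose weighted means violate the decreasing order:

```python
def antitonic_fit_by_pooling(z, w):
    """Antitonic (decreasing) weighted least squares fit on a total order,
    computed by coarsening the finest partition block by block."""
    # finest partition: one block [weight, mean, size] per index
    blocks = [[wi, zi, 1] for wi, zi in zip(w, z)]
    merged = True
    while merged:
        merged = False
        for s in range(len(blocks) - 1):
            # adjacent violators of a decreasing fit
            if blocks[s][1] <= blocks[s + 1][1]:
                Ws, Ms, ns = blocks[s]
                Wt, Mt, nt = blocks[s + 1]
                W = Ws + Wt
                blocks[s:s + 2] = [[W, (Ws * Ms + Wt * Mt) / W, ns + nt]]
                merged = True
                break
    # expand the block means to the fitted vector
    return [b[1] for b in blocks for _ in range(b[2])]
```

By Fact 2, the order in which violating pairs are pooled does not affect the result; the PAVA variants discussed next merely organize the pooling more efficiently, in a single left-to-right pass.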
Standard PAVA

Specifically, one works with three tuples: P = (P_1, …, P_d) is a partition of {1, …, b_d} into blocks of consecutive indices, where the number b_d runs from 1 to m and the number d ≥ 1 of blocks changes during the algorithm, too. The tuples W = (W_1, …, W_d) and M = (M_1, …, M_d) contain the corresponding weights W_s = w_{P_s} and weighted means M_s = M_{P_s}(z). Before increasing b_d, the tuples P, W and M describe the minimizer of ∑_{j=1}^{b_d} w_j (z_j − f_j)² over all antitonic vectors f ∈ ℝ^{b_d}. The complete algorithm alternates between appending the next index as a new block and pooling neighboring blocks whose weighted means violate the antitonic order, until b_d = m.

Modified PAVA

In our specific applications of the PAVA, we are dealing with vectors z containing larger index blocks {a, …, b} on which i ↦ z_i is constant. Indeed, in regression settings with continuously distributed covariates and responses, z will always be a {0, 1}-valued vector. Then it is worthwhile to utilize Fact 2 and to modify the initialization as well as the very beginning of the induction step as follows. For the initialization, we determine the largest index b_1 such that z_1 = ⋯ = z_{b_1} and the corresponding weight w_{P_1} of P_1 = {1, …, b_1}. Then we set P ← (P_1), W ← (w_{P_1}) and M ← (z_1).

At the beginning of the induction step, we determine the largest index b_{d+1} such that z_{b_d + 1} = ⋯ = z_{b_{d+1}} and append the block P_{d+1} = {b_d + 1, …, b_{d+1}} with weight W_{d+1} = w_{P_{d+1}} and mean M_{d+1} = z_{b_{d+1}} to the tuples, i.e. P ← (P_1, …, P_{d+1}), W ← (W_1, …, W_{d+1}), M ← (M_1, …, M_{d+1}), and d ← d + 1.
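The standard and modified variants can be sketched together in Python (our own illustration, not the authors' reference implementation): the run detection at the top of each step is the modification, and the scheme reduces to the standard PAVA when all runs have length 1.

```python
def modified_pava(z, w):
    """One-pass PAVA for a decreasing fit, with the 'modified' initialization:
    consecutive equal values of z enter as a single pre-pooled block."""
    P, W, M = [], [], []   # block end indices (exclusive), weights, means
    j, m = 0, len(z)
    while j < m:
        # modified step: absorb the whole run z_j = z_{j+1} = ... at once
        b, Wb = j, 0.0
        while b < m and z[b] == z[j]:
            Wb += w[b]
            b += 1
        P.append(b); W.append(Wb); M.append(z[j])
        # pool adjacent violators of the decreasing order
        while len(P) > 1 and M[-2] <= M[-1]:
            Wnew = W[-2] + W[-1]
            Mnew = (W[-2] * M[-2] + W[-1] * M[-1]) / Wnew
            P[-2:] = [P[-1]]; W[-2:] = [Wnew]; M[-2:] = [Mnew]
        j = b
    # expand the blocks to the fitted vector
    f, start = [], 0
    for b, Mv in zip(P, M):
        f.extend([Mv] * (b - start)); start = b
    return f
```

For {0, 1}-valued z with long constant runs, the run detection replaces many push-and-pool steps by a single one, which is the source of the speed-up reported in the experiments below.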

Abridged PAVA

Suppose that we have computed A(z) with corresponding tuples P, W and M, and let z̃ ≥ z be a new data vector with K := {k : z̃_k > z_k}. By parts (ii) and (iv) of Corollary 2.2, the partition corresponding to A(z̃) will be a coarsening of the partition obtained from P by splitting each block containing an index in K into singletons while keeping all other blocks. This allows us to compute A(z̃) as follows, keeping copies of the auxiliary objects for A(z) and indicating this with a superscript z. At the end, A_j(z̃) = M_s for j ∈ P_s and 1 ≤ s ≤ d.
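Assuming that z̃ exceeds z in a single component j_0 (the case arising in isotonic distributional regression), the abridged update can be sketched as follows. This is our own illustration, consistent with the description above: the blocks after the one containing j_0 are kept as they are, the affected block is re-fitted on the new data, and pooling may propagate into the blocks on its left.

```python
def abridged_update(z_new, w, blocks, j0):
    """Update a decreasing PAVA fit after z increased at index j0 only.

    blocks: previous fit as a list [end, W, M] per block, with 'end' the
    exclusive end index, W the block weight and M the block mean."""
    # locate the block containing j0
    s = next(i for i, (end, _, _) in enumerate(blocks) if j0 < end)
    start = blocks[s - 1][0] if s > 0 else 0
    stack, right = blocks[:s], blocks[s + 1:]
    # re-run the PAVA over the indices of the affected block; pooling may
    # reach into the stack of blocks on the left, but never into 'right'
    for j in range(start, blocks[s][0]):
        stack.append([j + 1, w[j], z_new[j]])
        while len(stack) > 1 and stack[-2][2] <= stack[-1][2]:
            end, W2, M2 = stack.pop()
            _, W1, M1 = stack.pop()
            stack.append([end, W1 + W2, (W1 * M1 + W2 * M2) / (W1 + W2)])
    return stack + right
```

Since the fitted values on the affected block can only increase (Lemma 2.1), the blocks to its right remain valid, so the work is governed by the size of the affected block and the number of poolings to the left.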

Computational Complexity

It follows directly from the algorithmic description that, when A(z) is available, the abridged PAVA for computing A(z̃) requires no more operations than the standard PAVA. Its computational complexity is therefore at most of order O(m) if x_1, …, x_m are already sorted. More precisely, the number of averaging operations in the abridged PAVA is bounded from above by the number of components that have to be inspected anew, and this number is typically much smaller than m.

Numerical Example

We illustrate the previous procedures with two vectors z, z̃ ∈ ℝ^9 and unit weights w = (1)_{j=1}^9. Table 1 shows the main steps of the PAVA for z. The first line shows the components of z, the other lines contain the current candidate for f, where f = A(z) eventually, and the current partition P is indicated by extra vertical bars. Table 2 shows the abridged PAVA for two different vectors z̃.

Numerical Experiment 1
We generated data sets with n = 1000 independent observations (X_i, Y_i), where the conditional distribution of Y_i, given X_i = x, is the gamma distribution with shape parameter √x and scale parameter 2 + (x − 5)/√(2 + (x − 5)²). Figure 2 shows one such data set. In addition, one sees estimated α-quantile curves for levels α ∈ {0.1, 0.25, 0.5, 0.75, 0.9}, resulting from the estimator F̂. We then simulated 1000 such data sets and measured the times T_1, T_2, T_3 for computing the estimator F̂ via the standard, the modified and the abridged PAVA, respectively. Table 3 reports the sample means and standard deviations of these computation times over the 1000 simulations. In addition, one sees the averages and standard deviations of the ratios T_i/T_j for 1 ≤ i < j ≤ 3. It turned out that using the modified instead of the standard PAVA reduced the computation time by a factor of 3.46 already. Using the abridged PAVA yielded a further improvement by a factor of 8.90. Figure 3 displays the results of simulation experiments for sample sizes n ranging from 200 to 10 000, where the data were generated as just described. The simulations indicate that the improvement due to using the modified instead of the standard PAVA is almost constant in n, whereas the improvement due to the abridged instead of the modified PAVA increases with n. Presumably, the complexity of the abridged PAVA for computing the isotonic distributional regression remains quadratic in n. But our numerical experiments show that the constant is substantially smaller than the one resulting from applying the usual PAVA, with complexity O(n), to the n − 1 different levels of the response.

Numerical Experiment 2
The goal of this experiment is to study how the strength of the monotone association between X and Y influences the efficiency gain of the abridged PAVA for isotonic distributional regression. The gains of the abridged PAVA are expected to be milder when Y is independent of X, and to become larger as the monotone association strengthens. The reason is that, while the standard PAVA proceeds independently of the stochastic order, the abridged PAVA relies, at a given stage t ∈ {1, …, n} of the procedure, on the index j_o of the component increasing in z^{(t−1)} and on the nature of the partition corresponding to A(z^{(t−1)}). If the monotone association is weak, then the partition corresponding to A(z^{(t−1)}) tends to contain fewer blocks in total and relatively large blocks in the middle of {1, …, n}. If the index j_o happens to lie in a block containing many indices to the right of j_o, even the abridged PAVA has to inspect all of these.
To demonstrate this claim, we simulated n independent bivariate Gaussian random vectors (X, Y)^⊤ with correlation Corr(X, Y) = ρ ≥ 0. Note that the respective means and variances of X and Y have no influence on the results of the experiment; indeed, the running times are invariant under strictly isotonic transformations of X and of Y. In particular, the simulations for ρ = 0 cover all situations in which X and Y are stochastically independent with continuous distribution functions. The stochastic ordering between L(Y | X = x_1) and L(Y | X = x_2) for x_1 < x_2 becomes stronger as the correlation ρ ∈ [0, 1) increases, from an equality in distribution when ρ = 0 to a deterministic ordering as ρ approaches 1. Now, for sample sizes n ranging from 200 to 10 000 and for each correlation ρ ∈ {0, 0.5, 0.9}, the mean and standard deviation of the time ratio T_1/T_3 were estimated from 1000 repetitions. The results are summarized in Table 4. As expected, the efficiency gain is smallest for ρ = 0. But even then, it is larger than 6 for n ≥ 200 and larger than 9 for n ≥ 1000.