Estimation of a Likelihood Ratio Ordered Family of Distributions

Consider bivariate observations (X_1, Y_1), ..., (X_n, Y_n) ∈ R × R with unknown conditional distributions Q_x of Y, given that X = x. The goal is to estimate these distributions under the sole assumption that Q_x is isotonic in x with respect to likelihood ratio order. If the observations are identically distributed, a related goal is to estimate the joint distribution L(X, Y) under the sole assumption that it is totally positive of order two. An algorithm is developed which estimates the unknown family of distributions (Q_x)_x via empirical likelihood. The benefit of the stronger regularization imposed by likelihood ratio order over the usual stochastic order is evaluated in terms of estimation and predictive performances on simulated as well as real data.


Introduction
Consider a univariate regression setting with observations (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) in X × R, where X is an arbitrary real set. We assume that conditional on X := (X_i)_{i=1}^n, the observations Y_1, Y_2, ..., Y_n are independent with distributions L(Y_i | X) = Q_{X_i}, where the distributions Q_x, x ∈ X, are unknown. The goal is to estimate the latter under the sole assumption that Q_x is isotonic in x in a certain sense. That means, if (X, Y) denotes a generic observation, the larger (or smaller) the value of X, the larger (or smaller) Y tends to be. An obvious notion of order would be the usual stochastic order: for x_1 ≤ x_2, the distribution Q_{x_1} is stochastically smaller than Q_{x_2}, meaning that G_{x_1}(y) ≥ G_{x_2}(y) for all y ∈ R, where G_x denotes the cumulative distribution function of Q_x. This concept has been investigated and generalized by numerous authors, see Mösching and Dümbgen (2020), Henzi et al. (2021b) and the references cited therein. The latter paper illustrates the application of isotonic distributional regression in weather forecasting, and Henzi et al. (2021a) use it to analyze the length of stay of patients in Swiss hospitals.
The present paper investigates a stronger notion of order, the so-called likelihood ratio order. The usual definition is that for arbitrary points x_1 < x_2 in X, the distributions Q_{x_1} and Q_{x_2} have densities g_{x_1} and g_{x_2} with respect to some dominating measure such that g_{x_2}/g_{x_1} is isotonic on the set {g_{x_1} + g_{x_2} > 0}, and this condition will be denoted by Q_{x_1} ≤_lr Q_{x_2}. At first glance, this looks like a rather strong assumption coming out of thin air, but it is familiar from mathematical statistics and discriminant analysis and has interesting properties. Furthermore, likelihood ratio ordering is a frequent assumption or implication of models in mathematical finance, see Beare and Moon (2015) and Jewitt (1991). The notion of likelihood ratio order is reviewed thoroughly in Dümbgen and Mösching (2023), showing that it defines a partial order on the set of all probability measures on the real line which is preserved under weak convergence. That material generalizes definitions and results in Shaked and Shanthikumar (2007).
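For discrete distributions on a common ordered grid, the defining condition can be checked directly via cross products: Q_1 ≤_lr Q_2 holds if and only if g_1(y_2) g_2(y_1) ≤ g_1(y_1) g_2(y_2) for all grid points y_1 < y_2 in the joint support. A minimal sketch in Python (the function name is ours, not the paper's):

```python
def is_lr_ordered(g1, g2, tol=1e-12):
    """Check Q1 <=_lr Q2 for two discrete densities g1, g2 given as
    probability vectors on a common ordered grid: the ratio g2/g1 must be
    isotonic on {g1 + g2 > 0}, which is equivalent to the cross-product
    inequalities g1[k2]*g2[k1] <= g1[k1]*g2[k2] for k1 < k2 in the support."""
    support = [k for k in range(len(g1)) if g1[k] + g2[k] > 0]
    for i, k1 in enumerate(support):
        for k2 in support[i + 1:]:
            if g1[k2] * g2[k1] > g1[k1] * g2[k2] + tol:
                return False
    return True
```

For instance, (0.5, 0.3, 0.2) ≤_lr (0.2, 0.3, 0.5), while the reversed pair violates the order.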
Thus far, estimation of distributions under a likelihood ratio order constraint was mainly limited to settings with two or finitely many samples and populations. First, Dykstra et al. (1995) estimated the parameters of two multinomial distributions that are likelihood ratio ordered via a restricted maximum likelihood approach. After reparametrization, they found that the maximization problem at hand reduced to a specific bioassay problem treated by Robertson et al. (1988) which makes use of the theory of isotonic regression. They then suggested that their approach generalizes well to any two distributions that are absolutely continuous with respect to some dominating measure. Later, Carolan and Tebbs (2005) focused on testing procedures for the equality of two distributions Q_1 and Q_2 versus the alternative hypothesis that Q_1 ≤_lr Q_2, in the specific case where the cumulative distribution functions G_i of Q_i, i = 1, 2, are continuous. To this end, they made use of the equivalence between likelihood ratio order and the convexity of the ordinal dominance curve α → G_2(G_1^{-1}(α)), α ∈ [0, 1], which holds in case of G_2 being absolutely continuous with respect to G_1. The convexity of the ordinal dominance curve was also exploited by Westling et al. (2023) to provide nonparametric maximum likelihood estimators of G_1 and G_2 under likelihood ratio order for discrete, continuous, as well as mixed continuous-discrete distributions using the greatest convex minorant of the empirical ordinal dominance curve. However, this method still necessitates the restrictive assumption that G_2 is absolutely continuous with respect to G_1. Other attempts at estimating two likelihood ratio ordered distributions include Yu et al. (2017), who treat the estimation problem with a maximum smoothed likelihood approach, requiring the choice of a kernel and bandwidth parameters, and Hu et al. (2023), who suppose absolutely continuous distributions and model the logarithm of the ratio of densities as a linear combination of Bernstein polynomials.
To the best of our knowledge, only Dardanoni and Forcina (1998) considered the problem of estimating an arbitrary fixed number ℓ ≥ 2 of likelihood ratio ordered distributions Q_1, Q_2, ..., Q_ℓ, all of them sharing the same finite support. They showed that the constrained maximum likelihood problem may be reparametrized to obtain a convex optimization problem with linear inequality constraints, and they proposed to solve the latter via a constrained version of the Fisher scoring algorithm. At each step of their procedure, it is necessary to solve a quadratic programming problem.
Within the setting of distributional regression, we follow an empirical likelihood approach (Owen, 1988, 2001) to estimate the family (Q_x)_{x∈X} for arbitrary real sets X. After a reparametrization similar to that of Dardanoni and Forcina (1998), we show that the problem of maximizing the (empirical) likelihood under the likelihood ratio order constraint yields again a finite-dimensional convex optimization problem with linear inequality constraints. We did experiments with active set algorithms in the spirit of Dümbgen et al. (2021) which are similar to the algorithms of Dardanoni and Forcina (1998). But, as explained later, the computational burden may become too heavy for large sample sizes n. Alternatively, we devise an algorithm which adapts and extends ideas from Jongbloed (1998) and Dümbgen et al. (2006) for the present setting. It makes use of a quasi-Newton approach, and new search directions are obtained via multiple isotonic weighted least squares regression.
There is an interesting aspect of the present estimation problem. If we assume that the observations (X_i, Y_i) are independent copies of a generic random pair (X, Y), the new estimation method may also be interpreted as an empirical likelihood estimator of the joint distribution of (X, Y), hypothesizing that the latter is bivariate totally positive of order two (TP2). That is, for arbitrary intervals A_1 < A_2 and B_1 < B_2 in R, P(X ∈ A_1, Y ∈ B_1) P(X ∈ A_2, Y ∈ B_2) ≥ P(X ∈ A_1, Y ∈ B_2) P(X ∈ A_2, Y ∈ B_1). If the joint distribution of (X, Y) has a density h with respect to Lebesgue measure on R × R, or if it is discrete with probability mass function h, then TP2 is equivalent to requiring that h(x_1, y_1) h(x_2, y_2) ≥ h(x_1, y_2) h(x_2, y_1) whenever x_1 < x_2 and y_1 < y_2, and this is just a special case of multivariate total positivity of order two (Karlin, 1968). For further equivalences and results in dimension two, see Dümbgen and Mösching (2023). Interestingly, this TP2 constraint is symmetric in X and Y, and our algorithm exploits this symmetry. A different, more restrictive approach to the estimation of a TP2 distribution is proposed by Hütter et al. (2020). They assume that the distribution of (X, Y) has a smooth density with respect to Lebesgue measure on a given rectangle and devise a sieve maximum likelihood estimator.
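For a discrete distribution given by a probability mass matrix, the TP2 property amounts to nonnegativity of all 2×2 "cross" minors. A small Python sketch (our own helper; it checks all pairs rather than only adjacent ones, since zero entries can break the usual adjacency reduction):

```python
def is_tp2(h, tol=1e-12):
    """Check TP2 for a matrix h of nonnegative weights:
    h[j1][k1]*h[j2][k2] >= h[j1][k2]*h[j2][k1] for all j1 < j2, k1 < k2."""
    l, m = len(h), len(h[0])
    for j1 in range(l):
        for j2 in range(j1 + 1, l):
            for k1 in range(m):
                for k2 in range(k1 + 1, m):
                    if h[j1][k1] * h[j2][k2] + tol < h[j1][k2] * h[j2][k1]:
                        return False
    return True
```

Note that TP2 is symmetric in the two coordinates: transposing h does not change the verdict.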
The rest of the article is structured as follows. Section 2 explains why empirical likelihood estimation of a family of likelihood ratio ordered distributions is essentially equivalent to the estimation of a discrete bivariate TP2 distribution. In Section 3 we present an algorithm to estimate a bivariate TP2 distribution. In Section 4, a simulation study illustrates the benefits of the new estimation paradigm compared to the usual stochastic order constraint. Proofs and technical details are deferred to the appendix.

Two versions of empirical likelihood modelling
Let x_1 < x_2 < ... < x_ℓ denote the unique elements of {X_1, ..., X_n}, let y_1 < y_2 < ... < y_m denote the unique elements of {Y_1, ..., Y_n}, and define the weights w_jk := #{i ≤ n : (X_i, Y_i) = (x_j, y_k)}. That means, the empirical distribution R_emp of the observations (X_i, Y_i) can be written as R_emp = n^{-1} Σ_{j=1}^ℓ Σ_{k=1}^m w_jk δ_{(x_j, y_k)}.
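The tabulation into unique values and weights can be sketched as follows (function name ours):

```python
def weight_matrix(pairs):
    """Tabulate raw observations (X_i, Y_i) into sorted unique covariates
    x_1 < ... < x_l, sorted unique responses y_1 < ... < y_m, and counts
    w[j][k] = #{i : (X_i, Y_i) = (x_j, y_k)}."""
    xs = sorted({x for x, _ in pairs})
    ys = sorted({y for _, y in pairs})
    col = {x: j for j, x in enumerate(xs)}
    row = {y: k for k, y in enumerate(ys)}
    w = [[0] * len(ys) for _ in xs]
    for x, y in pairs:
        w[col[x]][row[y]] += 1
    return xs, ys, w
```

By construction, the entries of w sum to n, so R_emp assigns mass w_jk/n to (x_j, y_k).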

Estimating the conditional distributions Q x
To estimate (Q_x)_{x∈X} under likelihood ratio ordering, we first estimate (Q_{x_j})_{1≤j≤ℓ}. If that results in (Q̂_{x_j})_{1≤j≤ℓ}, we may define Q̂_x for arbitrary x ∈ X by linear interpolation, that is, Q̂_x := ((x_{j+1} − x) Q̂_{x_j} + (x − x_j) Q̂_{x_{j+1}})/(x_{j+1} − x_j) for x_j ≤ x ≤ x_{j+1}, and by constant extrapolation beyond x_1 and x_ℓ. This piecewise linear extension preserves isotonicity with respect to ≤_lr, see Lemma A.1.
To estimate Q_{x_1}, ..., Q_{x_ℓ}, we restrict our attention to distributions with support {y_1, ..., y_m}. That means, we assume temporarily that Q_{x_j} = Σ_{k=1}^m q_jk δ_{y_k} for 1 ≤ j ≤ ℓ, with weights q_{j1}, ..., q_{jm} ≥ 0 summing to one. The empirical log-likelihood for the corresponding matrix q = (q_jk)_{j,k} ∈ [0, 1]^{ℓ×m} equals

    L_raw(q) := Σ_{j=1}^ℓ Σ_{k=1}^m w_jk log q_jk.    (2.1)

Then the goal is to maximize this log-likelihood over all matrices q ∈ [0, 1]^{ℓ×m} such that

    Σ_{k=1}^m q_jk = 1 for 1 ≤ j ≤ ℓ,    (2.2)
    q_{j_1 k_1} q_{j_2 k_2} ≥ q_{j_1 k_2} q_{j_2 k_1} whenever j_1 ≤ j_2 and k_1 ≤ k_2.    (2.3)

The latter constraint is equivalent to saying that Q_{x_j} is isotonic in j ∈ {1, ..., ℓ} with respect to ≤_lr.

Estimating the distribution of (X, Y )
Suppose that the observations (X_i, Y_i) are independent copies of a random pair (X, Y) with unknown TP2 distribution R on R × R. An empirical likelihood approach to estimating R is to restrict one's attention to distributions Σ_{j=1}^ℓ Σ_{k=1}^m h_jk δ_{(x_j, y_k)} with ℓm weights h_jk ≥ 0 summing to one. Instead of maximizing L_raw(h) = Σ_{j,k} w_jk log h_jk under the explicit constraint that h_{++} := Σ_{j,k} h_jk equals one, it is convenient to work with the modified target

    L(h) := Σ_{j,k} w_jk log h_jk − n h_{++} + n,

which satisfies L(h) = L_raw(h̃) + n (log h_{++} − h_{++} + 1) ≤ L_raw(h̃) for h̃ := h_{++}^{-1} h, with equality if and only if h_{++} = 1, that is, h̃ = h.

Equivalence of the two estimation problems
For any matrix a ∈ R^{ℓ×m} define the row sums a_{j+} := Σ_k a_jk and column sums a_{+k} := Σ_j a_jk. If h is an arbitrary matrix in [0, ∞)^{ℓ×m} such that L_raw(h) > −∞, and if we write h_jk = p_j q_jk with p_j := h_{j+} and q_jk := h_jk/h_{j+}, then h satisfies (2.3) if and only if q does. Furthermore, q satisfies (2.2), and elementary algebra shows that

    L(h) = L_raw(q) + Σ_{j=1}^ℓ (w_{j+} log p_j − n p_j + w_{j+}).
The unique maximizer p̂ = (p̂_j)_j of Σ_j (w_{j+} log p_j − n p_j + w_{j+}) is the vector (w_{j+}/n)_j, and this implies the following facts: • If ĥ is a maximizer of L(h) under the constraints (2.3), then ĥ_{j+} = w_{j+}/n for all j, and q̂_jk := ĥ_jk/ĥ_{j+} defines a maximizer q̂ of L_raw(q) under the constraints (2.2) and (2.3).
• If q̂ is a maximizer of L_raw(q) under the constraints (2.2) and (2.3), then ĥ_jk := (w_{j+}/n) q̂_jk defines a maximizer ĥ of L(h) under the constraints (2.3).
As a final remark, note that the two estimation problems are monotone equivariant in the following sense: If (X, Y) is replaced with (X̃, Ỹ) = (σ(X), τ(Y)) with strictly isotonic functions σ and τ, then the resulting estimators are obtained from the original ones via the same transformations. Furthermore, the constraints of likelihood ratio ordered conditional distributions or of a TP2 joint distribution remain valid under such transformations.

Calibration of rows and columns
The previous considerations motivate finding a maximizer ĥ ∈ [0, ∞)^{ℓ×m} of L(h) under the constraint (2.3), even if the ultimate goal is to estimate the conditional distributions Q_x, x ∈ X. They also indicate two simple ways to improve a current candidate h for ĥ. Let h̃ be defined via h̃_jk := (w_{j+}/n) h_jk/h_{j+}, i.e. we rescale the rows of h such that the new row sums h̃_{j+} coincide with the empirical weights w_{j+}/n. Then L(h̃) ≥ L(h), with equality if and only if h̃ = h. Similarly, one can improve h by rescaling its columns, i.e. replacing h with h̃, where h̃_jk := (w_{+k}/n) h_jk/h_{+k}.
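These two improvement steps can be alternated, in the spirit of iterative proportional fitting. A hedged sketch (function name ours); note that row and column rescaling each preserve the TP2 inequalities, since every entry is multiplied by a product of a row factor and a column factor:

```python
def calibrate(h, row_targets, col_targets, iters=50):
    """Alternately rescale rows of h so that row j sums to row_targets[j]
    (= w_{j+}/n) and columns so that column k sums to col_targets[k]
    (= w_{+k}/n).  Assumes all rescaled rows and columns have positive sums."""
    h = [row[:] for row in h]  # work on a copy
    l, m = len(h), len(h[0])
    for _ in range(iters):
        for j in range(l):
            s = sum(h[j])
            h[j] = [row_targets[j] * v / s for v in h[j]]
        for k in range(m):
            s = sum(h[j][k] for j in range(l))
            for j in range(l):
                h[j][k] *= col_targets[k] / s
    return h
```

After convergence the candidate is (approximately) calibrated both row-wise and column-wise.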

Dimension reduction
The minimization problem mentioned before involves a parameter h ∈ [0, ∞)^{ℓ×m} under ℓ²m² nonlinear inequality constraints. The parameter space and the number of constraints may be reduced as follows.
Lemma 3.1. Let P be the set of all index pairs (j, k) such that there exist indices j_1 ≤ j ≤ j_2 and k_1 ≤ k ≤ k_2 with w_{j_1 k_2} > 0 and w_{j_2 k_1} > 0. All in all, we may restrict our attention to parameters h ∈ (0, ∞)^P satisfying (3.1), where h_jk := 0 for (j, k) ∉ P. Note that (3.1) involves only (ℓ − 1)(m − 1) inequalities, and the inequality for one particular index pair (j, k) is nontrivial only if the two pairs (j − 1, k), (j, k − 1) belong to P.
The set P consists of all pairs (j, k) such that the support of the empirical distribution R_emp contains a point (x_{j_1}, y_{k_2}) "northwest" and a point (x_{j_2}, y_{k_1}) "southeast" of (x_j, y_k). If P contains two pairs (j_2, k_1), (j_1, k_2) with j_1 < j_2 and k_1 < k_2, then it contains the whole set {j_1, ..., j_2} × {k_1, ..., k_2}. Figure 1 illustrates the definition of P. It also illustrates two alternative codings of P: An index pair (j, k) belongs to P if and only if m_j ≤ k ≤ M_j, where m_j := min{k : (j, k) ∈ P} and M_j := max{k : (j, k) ∈ P}, and, analogously, if and only if j lies between column-wise bounds depending only on k. Note that by definition, the bounds m_j and M_j are isotonic in j, and similarly for the column-wise bounds.
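The "northwest/southeast" description translates directly into code. A small illustration (function name ours):

```python
def index_set_P(w):
    """(j, k) belongs to P iff some observed pair (w > 0) lies weakly
    northwest (j' <= j, k' >= k) and some lies weakly southeast
    (j' >= j, k' <= k) of (j, k)."""
    l, m = len(w), len(w[0])
    obs = [(j, k) for j in range(l) for k in range(m) if w[j][k] > 0]
    return {
        (j, k)
        for j in range(l)
        for k in range(m)
        if any(j2 <= j and k2 >= k for j2, k2 in obs)
        and any(j2 >= j and k2 <= k for j2, k2 in obs)
    }
```

The second test below shows the rectangle property: two observations in "anti-diagonal" position put the whole rectangle between them into P.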

Reparametrization and reformulation
If we replace a parameter h ∈ (0, ∞)^P with its component-wise logarithm θ ∈ R^P, then property (3.1) is equivalent to

    θ_{j−1,k−1} + θ_{jk} ≥ θ_{j−1,k} + θ_{j,k−1}    (3.4)

whenever all four index pairs belong to P. The set of all θ ∈ R^P satisfying (3.4) is a closed convex cone and is denoted by Θ.
Uniqueness follows directly from f being strictly convex, but existence is less obvious, unless w_jk > 0 for all (j, k). With θ̂ at hand, the corresponding solution ĥ ∈ [0, ∞)^{ℓ×m} of the original problem is given by ĥ_jk := exp(θ̂_jk) for (j, k) ∈ P and ĥ_jk := 0 otherwise. In the proof of Theorem 3.2 and from now on, we view R^P as a Euclidean space with inner product ⟨x, y⟩ := Σ_{(j,k)∈P} x_jk y_jk and the corresponding norm ∥x∥ := ⟨x, x⟩^{1/2}. For a differentiable function f : R^P → R, its gradient is defined as ∇f(x) := (∂f(x)/∂x_jk)_{(j,k)∈P}.
Let us explain briefly why traditional optimization algorithms may become infeasible for large sample sizes n. Depending on the input data, the set P may contain more than cn² parameters, and the constraint (3.4) may involve at least cn² linear inequalities, where c > 0 is some generic constant. Even if we restrict our attention to parameters θ ∈ Θ such that a given subset of the inequalities in (3.4) are equalities, they span a linear space of dimension at least max(ℓ, m), because all parameters θ_{j m_j} and θ_{ℓ_k k} are unconstrained, and max(ℓ, m) may be at least cn. Just determining a gradient and Hessian matrix of the target function f within this linear subspace would then require at least cn⁴ steps. Consequently, traditional minimization algorithms involving exact Newton steps may be computationally infeasible. Alternatively, we propose an iterative algorithm with quasi-Newton steps, each of which has running time O(n²), and the required memory is of this order, too.

Finding a new proposal
Version 1. To determine whether a given parameter θ ∈ R^P is already optimal and, if not, to obtain a better one, we reparametrize the problem a second time. Let θ̃ = T(θ) ∈ R^P be given by θ̃_{j m_j} := θ_{j m_j} and θ̃_jk := θ_jk − θ_{j,k−1} for m_j < k ≤ M_j. More importantly, we may represent P as a union of row segments {j} × {m_j, ..., M_j}, 1 ≤ j ≤ ℓ, as well as a union of column segments, where the latter representation follows from (3.2) and (3.3). Now the constraints (3.4) read

    θ̃_jk ≥ θ̃_{j−1,k}    (3.6)

whenever (j − 1, k − 1), (j − 1, k), (j, k − 1), (j, k) ∈ P. The set of θ̃ ∈ R^P satisfying (3.6) is denoted by Θ̃. For given θ and θ̃ = T(θ), we approximate f̃(x̃) := f(T^{−1} x̃) by a separable quadratic function of x̃. This quadratic function of x̃ is easily minimized over Θ̃ via the pool-adjacent-violators algorithm, applied to the subtuple (x̃_jk)_j, with j running over all rows such that (j, k − 1), (j, k) ∈ P, for each k = 2, ..., m separately. Then we obtain the proposal Ψ^row(θ) by applying T^{−1} to the minimizer. Interestingly, if θ is row-wise calibrated in the sense that n Σ_{k=m_j}^{M_j} exp(θ_jk) = w_{j+} for 1 ≤ j ≤ ℓ, then the minimizer x̃ satisfies x̃_{j m_j} = θ̃_{j m_j}, and thus Ψ^row(θ)_{j m_j} = θ_{j m_j} for 1 ≤ j ≤ ℓ.
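The pool-adjacent-violators algorithm (PAVA) solves a weighted least squares problem under a monotonicity constraint in linear time. A self-contained sketch (our own generic implementation, not the paper's code):

```python
def pava(y, w=None):
    """Weighted isotonic regression: minimize sum_i w[i]*(z[i]-y[i])**2
    over nondecreasing z, via the pool-adjacent-violators algorithm."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    levels, weights, counts = [], [], []
    for yi, wi in zip(y, w):
        levels.append(float(yi)); weights.append(float(wi)); counts.append(1)
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(levels) > 1 and levels[-2] > levels[-1]:
            tot = weights[-2] + weights[-1]
            avg = (weights[-2] * levels[-2] + weights[-1] * levels[-1]) / tot
            counts[-2:] = [counts[-2] + counts[-1]]
            levels[-2:] = [avg]
            weights[-2:] = [tot]
    out = []
    for lev, c in zip(levels, counts):
        out.extend([lev] * c)
    return out
```

Each merged block carries the weighted average of its members, which is exactly the solution of the least squares problem restricted to that block.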
Version 2. Instead of reparametrizing θ ∈ Θ in terms of its values θ_{j m_j}, 1 ≤ j ≤ ℓ, and its increments within rows, one could reparametrize it in terms of its values θ_{ℓ_k k}, 1 ≤ k ≤ m, and its increments within columns, leading to a proposal Ψ^col(θ). The details are analogous to Version 1, with the roles of rows and columns interchanged.

Calibration
In terms of the log-parametrization with θ ∈ Θ, the row-wise calibration mentioned earlier for h means to replace θ_jk with

    θ_jk + log(w_{j+}) − log(n Σ_{k'=m_j}^{M_j} exp(θ_{jk'})).

Analogously, replacing θ_jk with

    θ_jk + log(w_{+k}) − log(n Σ_{j' : (j',k)∈P} exp(θ_{j'k}))

leads to a column-wise calibrated parameter θ. Iterating these calibrations alternatingly leads to a parameter which is (approximately) calibrated, row-wise as well as column-wise.

From new proposal to new parameter
Both proposal functions Ψ ∈ {Ψ^row, Ψ^col} have some useful properties, summarized in the next lemma.
Table 1: Pseudo-code of our algorithm, returning an approximation of the minimizer θ̂.
We determine t_o := 2^{−n_o} with n_o the smallest integer such that ρ_θ(2^{−n_o}) ≥ 0. Then we define a Hermite interpolation ρ̃_θ of ρ_θ. This new function is such that ρ̃_θ(t) = ρ_θ(t) for t ∈ {0, t_o} and ρ̃′_θ(0) = ρ′_θ(0), and the maximizer t_* of ρ̃_θ over [0, t_o] is available in closed form. As shown in Lemma 1 of Dümbgen et al. (2006), this choice of t_* fulfils the requirements just stated, with κ = 1/4.
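The paper's exact Hermite interpolant is not reproduced above, but the mechanics of the rule can be sketched with a generic quadratic model q(t) = ρ′(0) t + (ρ(t_o) − ρ′(0) t_o)(t/t_o)², which matches ρ at 0 and t_o and ρ′ at 0 (function and formula are our illustration, not necessarily the paper's choice):

```python
def step_size(rho, drho0, max_halvings=30):
    """Halve t until rho(t) >= 0, then maximize the quadratic model
    q(t) = drho0*t + (rho(t_o) - drho0*t_o)*(t/t_o)**2 over [0, t_o].
    rho(t) is the improvement along the search direction, drho0 = rho'(0) > 0."""
    t_o = 1.0
    for _ in range(max_halvings):
        if rho(t_o) >= 0:
            break
        t_o /= 2
    c = rho(t_o) - drho0 * t_o      # curvature term of the model
    if c >= 0:                      # model is nondecreasing on [0, t_o]
        return t_o
    return min(t_o, -drho0 * t_o * t_o / (2 * c))
```

For a concave improvement such as ρ(t) = t − t², the rule recovers the exact maximizer t = 1/2.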

Complete algorithms
A possible starting point for the algorithm is given by θ^(0) := (−log(#P))_{(j,k)∈P}, but any other parameter θ^(0) ∈ Θ would work, too. Suppose we have already determined θ^(s) ∈ Θ. Then we compute a proposal Ψ(θ^(s)) and a step size t_s ∈ [0, 1] as described before. No matter which proposal function Ψ we are using in each step, the resulting sequence (θ^(s))_{s≥0} will always converge to θ̂.
Our numerical experiments showed that a particularly efficient refinement is as follows: Before computing a new proposal Ψ(θ^(s)), one should calibrate θ^(s) in the sense that it is row-wise and column-wise calibrated. If s is even, we compute Ψ^row(θ^(s)) to determine the next candidate θ^(s+1). If s is odd, we compute Ψ^col(θ^(s)) to obtain θ^(s+1). The algorithm stops as soon as δ(θ^(s)) := ⟨∇f(θ^(s)), θ^(s) − Ψ(θ^(s))⟩ is smaller than a prescribed small threshold. Table 1 provides corresponding pseudo-code.

Simulation study
In this section, we compare estimation and prediction performances of the likelihood ratio order constrained estimator presented in this article with the estimator under usual stochastic order obtained via isotonic distributional regression. The latter estimator was mentioned briefly in the introduction. It is extensively discussed in Henzi et al. (2021b) and Mösching and Dümbgen (2020).

A Gamma model
We choose a parametric family of distributions from which we draw observations. We will then use these data to provide distribution estimates which we then compare with the truth. The specific model we have in mind is a family (Q_x)_{x∈X} of Gamma distributions with densities with respect to Lebesgue measure on (0, ∞), with some shape function a : X → (0, ∞) and scale function b : X → (0, ∞). Then Q_x is isotonic in x ∈ X with respect to likelihood ratio ordering if and only if both functions a and b are isotonic. Recall that since the family is increasing in likelihood ratio order, it is also increasing with respect to the usual stochastic order. The specific shape and scale functions used for this study are both isotonic on X.

Figure 3: For each G ∈ {G, Ĝ, q G} (corresponding respectively to 'Truth', 'LR' and 'ST') and β ∈ {0.1, 0.25, 0.5, 0.75, 0.9}, a taut string (Dümbgen and Kovac, 2009) is computed between the lower quantile curve X ∋ x → min{y ∈ R : G_x(y) ≥ β} and the upper quantile curve X ∋ x → inf{y ∈ R : G_x(y) > β}.
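The likelihood ratio monotonicity of the Gamma family is easy to verify numerically: for shapes a_1 ≤ a_2 and scales b_1 ≤ b_2, the density ratio g_2(y)/g_1(y) is proportional to y^{a_2 − a_1} exp(−y (1/b_2 − 1/b_1)) and hence increasing in y. A quick check (helper name ours):

```python
import math

def gamma_pdf(y, a, b):
    """Density of the Gamma distribution with shape a and scale b at y > 0."""
    return y ** (a - 1) * math.exp(-y / b) / (math.gamma(a) * b ** a)

# Ratio of a Gamma(shape=2, scale=2) density to a Gamma(1, 1) density
# on an increasing grid; it should itself be increasing.
grid = [0.5, 1.0, 2.0, 4.0, 8.0]
ratios = [gamma_pdf(y, 2.0, 2.0) / gamma_pdf(y, 1.0, 1.0) for y in grid]
```

Here the ratio equals (y/4) e^{y/2}, which is strictly increasing on (0, ∞).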

Sampling method
Let ℓ_o ∈ {50, 1000} be a predefined number and let X_o ⊂ X be a corresponding finite grid of ℓ_o covariate values. For a given sample size n ∈ N, the sample (X_1, Y_1), (X_2, Y_2), ..., (X_n, Y_n) is obtained as follows: Draw X_1, X_2, ..., X_n uniformly from X_o and sample independently each Y_i from Q_{X_i}. This yields unique covariates x_1 < ... < x_ℓ as well as unique responses y_1 < ... < y_m. For each such sample, we compute estimates of (Q_{x_j})_{j=1}^ℓ under likelihood ratio order and usual stochastic order constraints. Using linear interpolation, we complete both families of estimates with covariates originally in {x_j}_{j=1}^ℓ to families of estimates with covariates in the full set X_o, see Lemma A.1. We therefore obtain estimates (Q̂_x)_{x∈X_o} and (q Q_x)_{x∈X_o} under likelihood ratio order and usual stochastic order constraint, respectively. The corresponding families of cumulative distribution functions are written (Ĝ_x)_{x∈X_o} and (q G_x)_{x∈X_o}, whereas the truth is denoted by (G_x)_{x∈X_o}. Although the performance of the empirical distribution is worse than those of the two order constrained estimators, it is still useful to study its behaviour, for instance to better understand boundary effects. The family of empirical cumulative distribution functions will be written (G̃_x)_{x∈X_o}.

Single sample
Figure 2 provides a visual comparison of a selection of true conditional distribution functions with their corresponding estimates under order constraint for a single sample generated in the setting ℓ_o = 1000 and n = 1000. It shows that the estimates under likelihood ratio order constraint are much smoother than those under usual stochastic order constraint. The former are in general also closer to the truth than the latter. This is also true on average, as demonstrated in the next paragraph. The smoothness and greater precision in estimation resulting from the likelihood ratio order are also apparent in Figure 3, which displays a selection of quantile curves for each G ∈ {G, Ĝ, q G}.

A simple score
To assess the ability of each estimator to retrieve the truth, we produce Monte Carlo estimates of the median of the score R_x(G, G) for each estimator G ∈ {Ĝ, q G, G̃} and for each x ∈ X_o. The above score may be decomposed as a sum of simple expressions involving the evaluations of the estimate and of the true distribution function on the finite set of unique responses, see Section A.3. We also compute Monte Carlo quartiles of the relative change in score when using Ĝ instead of q G. The results of the simulations are displayed in Figure 4. A first observation is that the performance of all three estimators decreases towards the boundary points of X, and this effect is more pronounced for the two order constrained estimators. This is a known phenomenon from shape constrained inference. However, in the interior of X, taking the stochastic ordering into account pays off. The second row of plots in Figure 4 shows the relative change in score when estimating the family of distributions with a likelihood ratio order constraint instead of the usual stochastic order constraint. It is observed that the improvement in score becomes larger and occurs on a wider sub-interval of X as ℓ_o and n increase. Only towards the boundary does the usual stochastic order seem to perform better.

Theoretical predictive performances
Using the same Gamma model, we evaluate predictive performances of both estimators using the continuous ranked probability score (CRPS),

    CRPS(G, y) := ∫_R (G(t) − 1{y ≤ t})² dt.

The CRPS is a strictly proper scoring rule which allows for comparisons of probabilistic forecasts, see Gneiting and Raftery (2007) and Jordan et al. (2019). It can be seen as an extension of the mean absolute error for probabilistic forecasts. The CRPS is therefore interpreted in the same unit of measurement as the true distribution or data.
Because the true underlying distribution is known in the present simulation setting, the expected CRPS score is given by a sum of integrals over the intervals (y_k, y_{k+1}), where y_0 := 0, y_{m+1} := +∞ and B(·, ·) is the beta function. As shown in Section A.3, the above sum of integrals may be rewritten as a sum of elementary expressions involving the evaluation of the estimate and of G_x on the finite set of unique responses, as well as two simple integrals which are computed via numerical integration. Consequently, we compute Monte Carlo estimates of the median of each score S_x(G, G), G ∈ {Ĝ, q G, G̃}, as well as estimates of quartiles of the relative change in score when choosing Ĝ over q G. Figure 5 outlines the results of the simulations. Similar boundary effects as for the simple score are observed. On the interior of X, the usual stochastic order improves on the naive empirical estimator, and the likelihood ratio order yields the best results. In terms of relative change in score, it appears that imposing a likelihood ratio order constraint to estimate the family of distributions yields an average score reduction of about 0.5% in comparison with the usual stochastic order estimator for a sample size of n = 50. For n = 1000, this improvement occurs on a wider subinterval of X and more frequently, as shown by the third quartile curve. Note further that the expected CRPS increases on the interior of X. This is due to the fact that the CRPS has the same unit of measurement as the response variable. Since the scale of the response, characterized by b, increases with x, so does the corresponding score.
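For a predictive distribution with finitely many support points, the CRPS integral reduces to a finite sum over the intervals between consecutive breakpoints. A hedged sketch for step CDFs (function name and interface ours):

```python
def crps_step(points, cdf_vals, y):
    """CRPS of a step predictive CDF against observation y:
    integral of (G(t) - 1{t >= y})^2 dt, where G(t) = cdf_vals[k] for
    points[k] <= t < points[k+1], G(t) = 0 before points[0], and
    cdf_vals[-1] must equal 1.  points must be sorted increasingly."""
    bps = sorted(set(points) | {y})
    total = 0.0
    for lo, hi in zip(bps, bps[1:]):
        g = 0.0
        for p, v in zip(points, cdf_vals):
            if p <= lo:
                g = v
        ind = 1.0 if lo >= y else 0.0
        total += (g - ind) ** 2 * (hi - lo)
    return total
```

For a point-mass forecast the CRPS equals the absolute error between forecast and observation, matching its interpretation as an extension of the mean absolute error.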

Empirical predictive performances
We use the weight for age dataset already studied in Mösching and Dümbgen (2020). It comprises the age and weight of n = 16 432 girls whose age in years lies within X := [2, 16]. A subsample of these data of size 2 000 is presented in Figure 6, along with estimated quantile curves under likelihood ratio order using that subsample. The dataset was publicly released as part of the National Health and Nutrition Examination Survey conducted in the US between 1963 and 1991 (data available from www.cdc.gov) and was analyzed by Kuczmarski et al. (2002) with parametric models to produce smooth quantile curves.

Figure 6: Subsample of the weight for age data and β-quantile curves computed from that sample under likelihood ratio order constraint, β ∈ {0.1, 0.25, 0.5, 0.75, 0.9}. A logarithmic scale was used for the weight variable.
Although the likelihood ratio order constraint is harder to justify than the very natural stochastic order constraint, we are interested in the effect of a stronger regularization imposed by the former constraint.
The forecast evaluation is performed using a leave-n_train-out cross-validation scheme. More precisely, we choose random subsets D_train of n_train observations which we use to train our estimators. Using the rest of the n_test := n − n_train data pairs in D_test, we evaluate predictive performance by computing the sample median of S_x(G, D_test) for each estimator G ∈ {Ĝ, q G, G̃} and each x ∈ X_o, where S_x(G, D_test) denotes the average CRPS of G_x over all observations in D_test with covariate x. Quartile estimates of the relative change in score are also computed.
Figure 7 shows the forecast evaluation results. As expected, the empirical CRPS increases with age, since the spread of the weight increases with age. As to the relative change in score, improvements of about 0.5% can be seen for both training sample sizes. The region of X where the estimator under likelihood ratio order constraint shows better predictive performance is widest for the largest training sample size. These results show the benefit of a stronger regularization.

A Proofs and technical details
A.1 Proofs for Sections 2 and 3

Proof. By assumption, there exist densities g_0 of Q_0 and g_1 of Q_1 with respect to some dominating measure µ such that g_1/g_0 is isotonic on {g_0 + g_1 > 0}, and this is equivalent to the property that g_0(y) g_1(x) ≤ g_0(x) g_1(y) whenever x < y.
Since h_{j_1 k_2}, h_{j_2 k_1} > 0, it follows from (2.3) that h_{j_1 k_1}, h_{j_2 k_2} > 0, too. (If j_1 = j_2 or k_1 = k_2, this conclusion is trivial.) This type of argument will reappear several times, so we denote it by A(j_1, j_2, k_1, k_2).
Analogously, one can deal with the remaining boundary cases. Finally, if j_1 < j < j_2 and k_1 < k < k_2, then we may apply A(j_1, j, k_1, k) or A(j, j_2, k, k_2) to deduce that h_jk > 0.
Proof of Theorem 3.2. Since f is strictly convex and Θ is convex, f has at most one minimizer in Θ. To prove existence of a minimizer, it suffices to show that

    f(θ) → ∞ as ∥θ∥ → ∞, θ ∈ Θ.    (A.1)

Suppose that (A.1) is false. Then there exists a sequence (θ^(s))_s in Θ such that ∥θ^(s)∥ → ∞ but (f(θ^(s)))_s is bounded. With r_s := ∥θ^(s)∥ and u^(s) := r_s^{-1} θ^(s), we may assume without loss of generality that u^(s) → u as s → ∞ for some u ∈ Θ with ∥u∥ = 1. For any fixed t > 0 and sufficiently large s, convexity and differentiability of f imply that

    ∂f(t u^(s))/∂t ≤ (f(θ^(s)) − f(t u^(s)))/(r_s − t).

Since lim_{s→∞} f(t u^(s)) = f(tu) and lim_{s→∞} ∂f(t u^(s))/∂t = ∂f(tu)/∂t, we conclude that ∂f(tu)/∂t ≤ 0 for all t > 0.
But as t → ∞, the directional derivative ∂f(tu)/∂t = Σ_{(j,k)∈P} (−w_jk u_jk + u_jk exp(t u_jk)) converges to +∞ if u_jk > 0 for some (j, k) ∈ P, and to Σ_{(j,k): u_jk < 0} (−w_jk u_jk) ≥ 0 otherwise. Consequently, the limiting direction u lies in Θ ∩ (−∞, 0]^P and satisfies u_jk = 0 whenever w_jk > 0. But as shown below, this implies that u = 0, a contradiction to ∥u∥ = 1. The proof of u = 0 is very similar to the proof of Lemma 3.1. If j_1 ≤ j_2 and k_1 ≤ k_2 are indices such that u_{j_1 k_2} = u_{j_2 k_1} = 0, then it follows from u ∈ (−∞, 0]^P and (3.4) that u_{j_1 k_1} + u_{j_2 k_2} ≥ 0, whence u_{j_1 k_1} = u_{j_2 k_2} = 0. Repeating this argument as in the proof of Lemma 3.1, one can show that for arbitrary (j_1, k_2), (j_2, k_1) ∈ P with j_1 ≤ j_2, k_1 ≤ k_2 and w_{j_1 k_2}, w_{j_2 k_1} > 0, we have u_jk = 0 for j_1 ≤ j ≤ j_2 and k_1 ≤ k ≤ k_2. By definition of P, this means that u = 0.
Proof of Lemma 3.3. With the linear bijection T : R^P → R^P and Θ̃ = T(Θ), θ̃ = T(θ), f̃ = f ∘ T^{−1}, one can show the asserted identities for arbitrary x ∈ R^P and x̃ = T(x). The next lemma clarifies some connections between θ_* and θ̂ in terms of directional derivatives of f.

A.3 Technical details for Section 4
For fixed ℓ_o, n ∈ N, let (Ĝ_x)_{x∈X_o}, (q G_x)_{x∈X_o} and (G̃_x)_{x∈X_o} be estimates of (G_x)_{x∈X_o} from a sample {(X_i, Y_i)}_{i=1}^n as described in Section 4.2. Then, for all G ∈ {Ĝ, q G, G̃} and x ∈ X_o, the estimate G_x is a step function with jumps in the set {y_1, ..., y_m} of unique observations. For convenience, we further denote y_0 := 0, y_{m+1} := ∞, and define G_jk := G_{x_j}(y_k), 0 ≤ k ≤ m, and G_{j,m+1} := 1, for all 1 ≤ j ≤ ℓ_o and G ∈ {G, Ĝ, q G, G̃}. For the remainder of this section, we fix 1 ≤ j ≤ ℓ_o and G ∈ {Ĝ, q G, G̃}. Observe that R_{x_j}(G, G) is the sum of the terms R^{(k)}_{x_j}(G, G) over 0 ≤ k ≤ m. With ρ(z_1, z_2) := z_1 z_2 − z_2²/2, the term for k = m equals 1/2 − ρ(1, G_jm), and the terms for 1 ≤ k < m are elementary expressions in G_jk and the true distribution function. Similarly, the computation of the CRPS involves the sum of integrals S^{(k)}_{x_j}(G, G) over the intervals (y_k, y_{k+1}), where c_j := b(x_j) Γ(a(x_j) + 1)/Γ(a(x_j)) and Ḡ_{x_j} denotes the cumulative distribution function of a Gamma distribution with shape a(x_j) + 1 and scale b(x_j). In consequence, if we define Ḡ_jk := Ḡ_{x_j}(y_k), the resulting expressions involve only evaluations of these distribution functions at the unique responses together with two remaining integrals, which are computed numerically.

Figure 1: In this specific example, n ≥ 8 raw observations yielded ℓ = 6 different values x_j and m = 7 different values y_k. The green dots represent those (j, k) with w_jk > 0. The green dots and black circles together represent the set P.
Figure 2: The true conditional Gamma distribution function G_x, the estimate under likelihood ratio (LR) order constraint Ĝ_x and the estimate under usual stochastic (ST) order constraint q G_x are displayed from left to right for x ∈ {1.5, 2, 2.5, 3, 3.5}.

Figure 4: Monte Carlo simulations to evaluate estimation performances with a simple score. First row: Simple scores with G being either Ĝ (solid line), q G (dashed line) or G̃ (dotted line). Second row: Relative change of score when enforcing a likelihood ratio order constraint over the usual stochastic order constraint. The thicker line is the median variation, whereas the thin lines are the first and third quartiles. Negative values represent an improvement in score.

Figure 5: Monte Carlo simulations to evaluate prediction performances using a CRPS-type score. First row: CRPS scores with G being either Ĝ (solid line), q G (dashed line) or G̃ (dotted line). Second row: Relative change of score when enforcing a likelihood ratio order constraint over the usual stochastic order constraint.

Figure 7: Monte Carlo simulations to evaluate prediction performances using an empirical CRPS score. First row: empirical CRPS scores with G being either Ĝ (solid line), q G (dashed line, hardly distinguishable from solid line) or G̃ (dotted line). Second row: Relative change of score when enforcing a likelihood ratio order constraint over the usual stochastic order constraint.