1 Introduction

Consider a univariate regression setting with observations \((X_1,Y_1)\), \((X_2,Y_2)\), ..., \((X_n,Y_n)\) in \({\mathfrak {X}} \times {\mathbb {R}}\), where \({\mathfrak {X}}\) is an arbitrary real set. We assume that conditional on \(\varvec{X} := (X_i)_{i=1}^n\), the observations \(Y_1,Y_2,\ldots ,Y_n\) are independent with distributions \(\mathcal {L}(Y_i \,|\, \varvec{X}) = Q_{X_i}\), where the distributions \(Q_x\), \(x \in {\mathfrak {X}}\), are unknown. The goal is to estimate the latter under the sole assumption that \(Q_x\) is isotonic in x in a certain sense. That means, if (X, Y) denotes a generic observation, the larger (or smaller) the value of X, the larger (or smaller) Y tends to be. An obvious notion of order would be the usual stochastic order, which states that \(Q_{x_1} \le _{\textrm{st}}Q_{x_2}\) whenever \(x_1 \le x_2\), that is, \(Q_{x_1}((-\infty ,y]) \ge Q_{x_2}((-\infty ,y])\) for all \(y\in {\mathbb {R}}\). This concept has been investigated and generalized by numerous authors, see Henzi et al. (2021b) and Mösching and Dümbgen (2020) and the references cited therein. The latter paper illustrates the application of isotonic distributional regression in weather forecasting, and Henzi et al. (2021a) use it to analyze the length of stay of patients in Swiss hospitals.

The present paper investigates a stronger notion of order, the so-called likelihood ratio order. The usual definition is that for arbitrary points \(x_1 < x_2\) in \({\mathfrak {X}}\), the distributions \(Q_{x_1}\) and \(Q_{x_2}\) have densities \(g_{x_1}\) and \(g_{x_2}\) with respect to some dominating measure such that \(g_{x_2}/g_{x_1}\) is isotonic on the set \(\{g_{x_1} + g_{x_2} > 0\}\), and this condition will be denoted by \(Q_{x_1} \le _{\textrm{lr}}Q_{x_2}\). At first glance, this looks like a rather strong assumption coming out of thin air, but it is familiar from mathematical statistics or discriminant analyses and has interesting properties. For instance, \(Q_{x_1} \le _{\textrm{lr}}Q_{x_2}\) if and only if \(Q_{x_1}(\cdot \,|\, B) \le _{\textrm{st}}Q_{x_2}(\cdot \,|\, B)\) for any real interval B such that \(Q_{x_1}(B), Q_{x_2}(B) > 0\), where \(Q_{x_j}(A \,|\, B) := Q_{x_j}(A\cap B)/Q_{x_j}(B)\). Furthermore, likelihood ratio ordering is a frequent assumption or implication of models in mathematical finance, see Beare and Moon (2015) and Jewitt (1991). The notion of likelihood ratio order is reviewed thoroughly in Dümbgen and Mösching (2023), showing that it defines a partial order on the set of all probability measures on the real line which is preserved under weak convergence. That material generalizes definitions and results in Shaked and Shanthikumar (2007).
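To make the discrete case concrete, the likelihood ratio order between two probability vectors on a common, increasingly ordered support can be checked directly via the product characterization used in (3) below. The following is a minimal sketch; the helper name is ours and not part of the paper's software.

```python
import numpy as np

def is_lr_ordered(g1, g2, tol=1e-12):
    """Check Q1 <=_lr Q2 for probability vectors g1, g2 on a common,
    increasingly ordered support: the ratio g2/g1 must be isotonic on
    {g1 + g2 > 0}, i.e. g1[k2]*g2[k1] <= g1[k1]*g2[k2] for all k1 < k2."""
    g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
    idx = np.flatnonzero(g1 + g2 > 0)
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            k1, k2 = idx[a], idx[b]
            if g1[k2] * g2[k1] > g1[k1] * g2[k2] + tol:
                return False
    return True

# Example: the ratio (0.1, 0.2, 0.3, 0.4) / (0.4, 0.3, 0.2, 0.1) is increasing.
print(is_lr_ordered([0.4, 0.3, 0.2, 0.1], [0.1, 0.2, 0.3, 0.4]))  # True
print(is_lr_ordered([0.1, 0.2, 0.3, 0.4], [0.4, 0.3, 0.2, 0.1]))  # False
```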

Thus far, estimation of distributions under a likelihood ratio order constraint has mainly been limited to settings with two or finitely many samples and populations. First, Dykstra et al. (1995) estimated the parameters of two multinomial distributions that are likelihood ratio ordered via a restricted maximum likelihood approach. After reparametrization, they found that the maximization problem at hand reduced to a specific bioassay problem treated by Robertson et al. (1988) which makes use of the theory of isotonic regression. They then suggested that their approach generalizes to any two distributions that are absolutely continuous with respect to some dominating measure. Later, Carolan and Tebbs (2005) focused on testing procedures for the equality of two distributions \(Q_1\) and \(Q_2\) versus the alternative hypothesis that \(Q_1 \le _{\textrm{lr}}Q_2\), in the specific case where the cumulative distribution functions \(G_i\) of \(Q_i\), \(i=1,2\), are continuous. To this end, they made use of the equivalence between likelihood ratio order and the convexity of the ordinal dominance curve \(\alpha \mapsto G_2\bigl (G_1^{-1}(\alpha )\bigr )\), \(\alpha \in [0,1]\), which holds when \(G_2\) is absolutely continuous with respect to \(G_1\). The convexity of the ordinal dominance curve was also exploited by Westling et al. (2023), who provide nonparametric maximum likelihood estimators of \(G_1\) and \(G_2\) under likelihood ratio order for discrete, continuous, as well as mixed continuous-discrete distributions using the greatest convex minorant of the empirical ordinal dominance curve. However, this method still necessitates the restrictive assumption that \(G_2\) is absolutely continuous with respect to \(G_1\). Other attempts at estimating two likelihood ratio ordered distributions include Yu et al. (2017), who treat the estimation problem with a maximum smoothed likelihood approach, requiring the choice of a kernel and bandwidth parameters, and Hu et al. (2023), who suppose absolutely continuous distributions and model the logarithm of the ratio of densities as a linear combination of Bernstein polynomials.

To the best of our knowledge, only Dardanoni and Forcina (1998) considered the problem of estimating an arbitrary fixed number \(\ell \ge 2\) of likelihood ratio ordered distributions \(Q_1,Q_2,\ldots ,Q_\ell \), all of them sharing the same finite support. They showed that the constrained maximum likelihood problem may be reparametrized to obtain a convex optimization problem with linear inequality constraints, and they proposed to solve the latter via a constrained version of the Fisher scoring algorithm. At each step of their procedure, it is necessary to solve a quadratic programming problem.

Within the setting of distributional regression, we follow an empirical likelihood approach (Owen 1988, 2001) to estimate the family \((Q_x)_{x\in {\mathfrak {X}}}\) for arbitrary real sets \({\mathfrak {X}}\). After a reparametrization similar to that of Dardanoni and Forcina (1998), we show that maximizing the (empirical) likelihood under the likelihood ratio order constraint again yields a finite-dimensional convex optimization problem with linear inequality constraints. We experimented with active set algorithms in the spirit of Dümbgen et al. (2021), which are similar to the algorithms of Dardanoni and Forcina (1998), but, as explained later, their computational burden may become too heavy for large sample sizes n. Alternatively, we devise an algorithm which adapts and extends ideas from Jongbloed (1998) and Dümbgen et al. (2006) to the present setting. It makes use of a quasi-Newton approach, and new search directions are obtained via multiple isotonic weighted least squares regression.

There is an interesting aspect of the present estimation problem. If we assume that the observations \((X_i,Y_i)\) are independent copies of a generic random pair (X, Y), the new estimation method may also be interpreted as an empirical likelihood estimator of the joint distribution R of (X, Y), hypothesizing that the latter is bivariate totally positive of order two (TP2). That is, for arbitrary intervals \(A_1, A_2\) and \(B_1, B_2\) such that \(A_1 < A_2\) and \(B_1 < B_2\) element-wise,

$$\begin{aligned} R(A_1 \times B_2) \, R(A_2 \times B_1) \ \le \ R(A_1 \times B_1) \, R(A_2 \times B_2) . \end{aligned}$$

If the joint distribution of (X, Y) has a density h with respect to Lebesgue measure on \({\mathbb {R}}\times {\mathbb {R}}\), or if it is discrete with probability mass function h, then TP2 is equivalent to requiring that

$$\begin{aligned} h(x_1,y_2) \, h(x_2,y_1) \ \le \ h(x_1,y_1) \, h(x_2, y_2) \quad \text{whenever } \ x_1< x_2 \ \text{and} \ y_1 < y_2 , \end{aligned}$$

and this is just a special case of multivariate total positivity of order two (Karlin 1968). For further equivalences and results in dimension two, see Dümbgen and Mösching (2023). Interestingly, this TP2 constraint is symmetric in X and Y, and our algorithm exploits this symmetry. A different, more restrictive approach to the estimation of a TP2 distribution is proposed by Hütter et al. (2020). They assume that the distribution of (X, Y) has a smooth density with respect to Lebesgue measure on a given rectangle and devise a sieve maximum likelihood estimator.

The rest of the article is structured as follows. Section 2 explains why empirical likelihood estimation of a family of likelihood ratio ordered distributions is essentially equivalent to the estimation of a discrete bivariate TP2 distribution. In Sect. 3 we present an algorithm to estimate a bivariate TP2 distribution. In Sect. 4, a simulation study illustrates the benefits of the new estimation paradigm compared to the usual stochastic order constraint. Proofs and technical details are deferred to the appendix.

2 Two versions of empirical likelihood modelling

With our observations \((X_i,Y_i)\in {\mathfrak {X}}\times {\mathbb {R}}\), \(1 \le i \le n\), let

$$\begin{aligned} \{X_1, X_2, \ldots , X_n\}&= \{x_1, \ldots , x_\ell \}, \\ \{Y_1, Y_2, \ldots , Y_n\}&= \{y_1, \ldots , y_m\}, \end{aligned}$$

with \(x_1< \cdots < x_\ell \) and \(y_1< \cdots < y_m\). For an index pair (j, k) with \(1 \le j \le \ell \) and \(1 \le k \le m\), let

$$\begin{aligned} w_{jk} \ := \ \# \bigl \{ i : (X_i,Y_i) = (x_j,y_k) \bigr \} . \end{aligned}$$

That means, the empirical distribution \({\widehat{R}}_{\textrm{emp}}\) of the observations \((X_i,Y_i)\) can be written as \({\widehat{R}}_{\textrm{emp}} = n^{-1} \sum _{j=1}^\ell \sum _{k=1}^m w_{jk}^{} \delta _{(x_j,y_k)}^{}\).
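For illustration, the unique values and the weight matrix \(\varvec{w} = (w_{jk})\) can be tabulated directly from the raw observations; a minimal sketch (helper name ours):

```python
import numpy as np

def empirical_weights(X, Y):
    """Tabulate w[j, k] = #{i : (X_i, Y_i) = (x_j, y_k)} for the unique,
    sorted covariate values x_1 < ... < x_l and responses y_1 < ... < y_m."""
    x, j_idx = np.unique(X, return_inverse=True)
    y, k_idx = np.unique(Y, return_inverse=True)
    w = np.zeros((len(x), len(y)), dtype=int)
    np.add.at(w, (j_idx, k_idx), 1)   # count multiplicities
    return x, y, w
```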

2.1 Estimating the conditional distributions \(Q_x\)

To estimate \((Q_x)_{x \in {\mathfrak {X}}}\) under likelihood ratio ordering, we first estimate \((Q_{x_j})_{1 \le j \le \ell }\). If that results in \(({\widehat{Q}}_{x_j})_{1 \le j \le \ell }\), we may define

$$\begin{aligned} {\widehat{Q}}_x \ := \ {\left\{ \begin{array}{ll} {\widehat{Q}}_{x_1} & \text{if } \ x < x_1 , \\ (1 - \lambda ) {\widehat{Q}}_{x_j} + \lambda {\widehat{Q}}_{x_{j+1}} & \text{if } \ x = (1 - \lambda ) x_j + \lambda x_{j+1}, \ 1 \le j < \ell , \ 0< \lambda < 1 , \\ {\widehat{Q}}_{x_\ell } & \text{if } \ x > x_\ell . \end{array}\right. } \end{aligned}$$

This piecewise linear extension preserves isotonicity with respect to \(\le _{\textrm{lr}}\), see Lemma 3.
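A small sketch of this piecewise linear extension, assuming the fitted distributions are stored as probability vectors over the unique responses, one row per grid point \(x_j\) (names illustrative):

```python
import numpy as np

def interpolate_Q(x_grid, Q_hat, x):
    """Piecewise linear extension of the fitted conditional distributions:
    Q_hat[j] is the probability vector of Q-hat at x_grid[j]; values of x
    outside [x_1, x_l] are clamped to the boundary estimates."""
    x_grid = np.asarray(x_grid, float)
    if x <= x_grid[0]:
        return Q_hat[0]
    if x >= x_grid[-1]:
        return Q_hat[-1]
    j = np.searchsorted(x_grid, x, side='right') - 1
    lam = (x - x_grid[j]) / (x_grid[j + 1] - x_grid[j])
    return (1 - lam) * Q_hat[j] + lam * Q_hat[j + 1]
```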

To estimate \(Q_{x_1}, \ldots , Q_{x_\ell }\), we restrict our attention to distributions with support \(\{y_1,\ldots ,y_m\}\). That means, we assume temporarily that for \(1 \le j \le \ell \),

$$\begin{aligned} Q_{x_j} \ = \ \sum _{k=1}^m q_{jk}^{} \delta _{y_k}^{} \end{aligned}$$

with weights \(q_{j1}, \ldots , q_{jm} \ge 0\) summing to one. The empirical log-likelihood for the corresponding matrix \(\varvec{q} = (q_{jk})_{j,k} \in [0,1]^{\ell \times m}\) equals

$$\begin{aligned} L_{\textrm{raw}}(\varvec{q}) \ := \ \sum _{j=1}^\ell \sum _{k=1}^m w_{jk}^{} \log q_{jk}^{} . \end{aligned}$$
(1)

Then the goal is to maximize this log-likelihood over all matrices \(\varvec{q} \in [0,1]^{\ell \times m}\) such that

$$\begin{aligned} \sum _{k=1}^m q_{jk}^{} \ &= \ 1 ,&1 \le j \le \ell , \end{aligned}$$
(2)
$$\begin{aligned} q_{j_1k_2}^{} q_{j_2k_1}^{} \ &\le \ q_{j_1k_1}^{} q_{j_2k_2}^{} ,&1 \le j_1< j_2 \le \ell , \ 1 \le k_1 < k_2 \le m . \end{aligned}$$
(3)

The latter constraint is equivalent to saying that \(Q_{x_j}\) is isotonic in \(j \in \{1,\ldots ,\ell \}\) with respect to \(\le _{\textrm{lr}}\).

2.2 Estimating the distribution of (XY)

Suppose that the observations \((X_i,Y_i)\) are independent copies of a random pair (XY) with unknown TP2 distribution R on \({\mathbb {R}}\times {\mathbb {R}}\). An empirical likelihood approach to estimating R is to restrict one’s attention to distributions

$$\begin{aligned} R \ = \ \sum _{j=1}^\ell \sum _{k=1}^m h_{jk}^{} \delta _{(x_j,y_k)}^{} \end{aligned}$$

with \(\ell m\) weights \(h_{jk} \ge 0\) summing to one. The empirical log-likelihood of the corresponding matrix \(\varvec{h} = (h_{jk})_{j,k}\) equals \(L_{\textrm{raw}}(\varvec{h})\) with the function \(L_{\textrm{raw}}\) defined in (1). But now the goal is to maximize \(L_\textrm{raw}(\varvec{h})\) over all matrices \(\varvec{h} \in [0,1]^{\ell \times m}\) satisfying the constraints

$$\begin{aligned} \sum _{j=1}^\ell \sum _{k=1}^m h_{jk}^{} \ = \ 1 \end{aligned}$$
(4)

and (3). As mentioned in the introduction, requirement (3) for \(\varvec{h}\) is equivalent to R being TP2. One can get rid of the constraint (4) via a Lagrange trick and maximize

$$\begin{aligned} L(\varvec{h}) \ := \ L_{\textrm{raw}}(\varvec{h}) - n h_{++} + n \end{aligned}$$

over all \(\varvec{h}\) satisfying (3), where \(h_{++} := \sum _j \sum _k h_{jk}\). Indeed, if \(\varvec{h}\) is a matrix in \([0,\infty )^{\ell \times m}\) such that \(L_{(\textrm{raw})}(\varvec{h}) > - \infty \), then \(\tilde{\varvec{h}} := (h_{jk}/h_{++})_{j,k}\) satisfies (3) if and only if \(\varvec{h}\) does, and

$$\begin{aligned} L(\varvec{h}) \ = \ L_{\textrm{raw}}(\tilde{\varvec{h}}) + n (\log h_{++} - h_{++} + 1) \ \le \ L_{\textrm{raw}}(\tilde{\varvec{h}}) \ = \ L(\tilde{\varvec{h}}) \end{aligned}$$

with equality if and only if \(h_{++} = 1\), that is, \(\varvec{h} = \tilde{\varvec{h}}\).

2.3 Equivalence of the two estimation problems

For any matrix \(\varvec{a} \in {\mathbb {R}}^{\ell \times m}\) define the row sums \(a_{j+} := \sum _k a_{jk}\) and column sums \(a_{+k} := \sum _j a_{jk}\). If \(\varvec{h}\) is an arbitrary matrix in \([0,\infty )^{\ell \times m}\) such that \(L_{\textrm{raw}}(\varvec{h}) > - \infty \), and if we write

$$\begin{aligned} h_{jk}^{} \ = \ p_j^{} q_{jk}^{} \quad \text {with} \ p_j^{} := h_{j+}^{} \ \text {and} \ q_{jk}^{} := h_{jk}^{} / h_{j+}^{} , \end{aligned}$$

then \(\varvec{h}\) satisfies (3) if and only if \(\varvec{q}\) does. Furthermore, \(\varvec{q}\) satisfies (2), and elementary algebra shows that

$$\begin{aligned} L(\varvec{h}) \ = \ L_{\textrm{raw}}(\varvec{q}) + \sum _{j=1}^\ell \bigl ( w_{j+}^{} \log p_j^{} - n p_j^{} + w_{j+}^{} \bigr ) . \end{aligned}$$

The unique maximizer \(\varvec{p} = (p_j)_j\) of \(\sum _j (w_{j+} \log p_j - n p_j + w_{j+})\) is the vector \((w_{j+}/n)_j\), and this implies the following facts:

  • If \(\widehat{\varvec{h}}\) is a maximizer of \(L(\varvec{h})\) under the constraints (3), then \({\widehat{h}}_{j+} = w_{j+}/n\) for all j, and \({\widehat{q}}_{jk} := {\widehat{h}}_{jk}/{\widehat{h}}_{j+}\) defines a maximizer \(\widehat{\varvec{q}}\) of \(L_{\textrm{raw}}(\varvec{q})\) under the constraints (2) and (3).

  • If \(\widehat{\varvec{q}}\) is a maximizer of \(L_{\textrm{raw}}(\varvec{q})\) under the constraints (2) and (3), then \({\widehat{h}}_{jk} := (w_{j+}/n) {\widehat{q}}_{jk}\) defines a maximizer \(\widehat{\varvec{h}}\) of \(L(\varvec{h})\) under the constraints (3).

As a final remark, note that the two estimation problems are monotone equivariant in the following sense: If (X, Y) is replaced with \(({\tilde{X}}, {\tilde{Y}})=(\sigma (X), \tau (Y))\) with strictly isotonic functions \(\sigma :{\mathfrak {X}}\rightarrow {\mathbb {R}}\) and \(\tau :{\mathbb {R}}\rightarrow {\mathbb {R}}\), then \({\mathcal {L}}({\tilde{Y}}|{\tilde{X}}=\sigma (x)) = {\mathcal {L}}(\tau (Y)|X = x)\) for \(x\in {\mathfrak {X}}\). Furthermore, the constraints of likelihood ratio ordered conditional distributions or of a TP2 joint distribution remain valid under such transformations.

2.4 Calibration of rows and columns

The previous considerations motivate finding a maximizer \(\widehat{\varvec{h}} \in [0,\infty )^{\ell \times m}\) of \(L(\varvec{h})\) under the constraint (3), even if the ultimate goal is to estimate the conditional distributions \(Q_x\), \(x \in {\mathfrak {X}}\). They also indicate two simple ways to improve a current candidate \(\varvec{h}\) for \(\widehat{\varvec{h}}\). Let \(\tilde{\varvec{h}}\) be defined via

$$\begin{aligned} {\tilde{h}}_{jk}^{} \ := \ (w_{j+}^{}/n) h_{jk}^{}/h_{j+}^{} , \end{aligned}$$

i.e. we rescale the rows of \(\varvec{h}\) such that the new row sums \({\tilde{h}}_{j+}\) coincide with the empirical weights \(w_{j+}/n\). Then

$$\begin{aligned} L(\tilde{\varvec{h}}) - L(\varvec{h}) \ = \ \sum _{j=1}^\ell \Bigl ( w_{j+}^{} \log \Bigl ( \frac{w_{j+}}{n h_{j+}} \Bigr ) + n h_{j+}^{} - w_{j+}^{} \Bigr ) \ \ge \ 0 \end{aligned}$$

with equality if and only if \(\tilde{\varvec{h}} = \varvec{h}\). Similarly, one can improve \(\varvec{h}\) by rescaling its columns, i.e. replacing \(\varvec{h}\) with \(\tilde{\varvec{h}}\), where

$$\begin{aligned} {\tilde{h}}_{jk}^{} \ := \ (w_{+k}^{}/n) h_{jk}^{}/h_{+k}^{} . \end{aligned}$$
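Both rescalings are one-line operations; a minimal sketch, assuming all row and column sums of \(\varvec{h}\) are positive (helper names ours):

```python
import numpy as np

def calibrate_rows(h, w):
    """Rescale the rows of h so that the new row sums equal w_{j+}/n;
    by the inequality displayed above, this never decreases L(h)."""
    n = w.sum()
    return (w.sum(axis=1) / n)[:, None] * h / h.sum(axis=1, keepdims=True)

def calibrate_cols(h, w):
    """Analogous rescaling of the columns of h to match w_{+k}/n."""
    n = w.sum()
    return (w.sum(axis=0) / n)[None, :] * h / h.sum(axis=0, keepdims=True)
```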

3 Estimation

3.1 Dimension reduction

The maximization problem from Sect. 2 involves a parameter \(\varvec{h} \in [0,\infty )^{\ell \times m}\) and \(\left( {\begin{array}{c}\ell \\ 2\end{array}}\right) \left( {\begin{array}{c}m\\ 2\end{array}}\right) \) nonlinear inequality constraints. The parameter space and the number of constraints may be reduced as follows.

Lemma 1

Let \(\mathcal {P}\) be the set of all index pairs (j, k) such that there exist indices \(1 \le j_1 \le j \le j_2 \le \ell \) and \(1 \le k_1 \le k \le k_2 \le m\) with \(w_{j_1k_2}, w_{j_2k_1} > 0\).

(a) If \(\varvec{h} \in [0,\infty )^{\ell \times m}\) satisfies (3) and \(L(\varvec{h}) > - \infty \), then \(h_{jk} > 0\) for all \((j,k) \in \mathcal {P}\).

(b) If such a matrix \(\varvec{h}\) is replaced with \(\tilde{\varvec{h}}\! := \!\bigl ( 1_{[(j,k) \!\in \mathcal {P}]} h_{jk} \bigr )_{j,k}\), then \(\tilde{\varvec{h}}\) satisfies (3), too, and \(L(\tilde{\varvec{h}}) \ge L(\varvec{h})\) with equality if and only if \(\tilde{\varvec{h}} = \varvec{h}\).

(c) If \(\varvec{h} \in [0,\infty )^{\ell \times m}\) such that \(\{(j,k):h_{jk} > 0\} = \mathcal {P}\), then constraint (3) is equivalent to

$$\begin{aligned} h_{j-1,k}^{} h_{j,k-1} \ \le \ h_{j-1,k-1}^{} h_{j,k}^{}, \quad 1< j \le \ell , \ 1 < k \le m . \end{aligned}$$
(5)

All in all, we may restrict our attention to parameters \(\varvec{h} \in (0,\infty )^{\mathcal {P}}\) satisfying (5), where \(h_{jk} := 0\) for \((j,k) \not \in \mathcal {P}\). Note that (5) involves only \((\ell -1)(m-1)\) inequalities, and the inequality for one particular index pair (j, k) is nontrivial only if the two pairs \((j-1,k),(j,k-1)\) belong to \(\mathcal {P}\).

The set \(\mathcal {P}\) consists of all pairs (j, k) such that the support of the empirical distribution \({\widehat{R}}_{\textrm{emp}}\) contains a point \((x_{j_1},y_{k_2})\) “northwest” and a point \((x_{j_2},y_{k_1})\) “southeast” of \((x_j,y_k)\). If \(\mathcal {P}\) contains two pairs \((j_2,k_1), (j_1,k_2)\) with \(j_1 < j_2\) and \(k_1 < k_2\), then it contains the whole set \(\{j_1,\ldots ,j_2\} \times \{k_1,\ldots ,k_2\}\). Figure 1 illustrates the definition of \(\mathcal {P}\). It also illustrates two alternative codings of \(\mathcal {P}\): An index pair (j, k) belongs to \(\mathcal {P}\) if and only if \(m_j \le k \le M_j\), where

$$\begin{aligned} m_j \ &:= \ \min \bigl \{ k : w_{j'k} > 0 \ \text {for some} \ j' \ge j \bigr \} , \\ M_j \ &:= \ \max \bigl \{ k : w_{j'k} > 0 \ \text {for some} \ j' \le j \bigr \} . \end{aligned}$$

Note that \(m_j \le M_j\) for all j, \(1 = m_1 \le \cdots \le m_\ell \), and \(M_1 \le \cdots \le M_\ell = m\). Analogously, a pair (j, k) belongs to \(\mathcal {P}\) if and only if \(\ell _k \le j \le L_k\), where

$$\begin{aligned} \ell _k \ &:= \ \min \bigl \{ j : w_{jk'} > 0 \ \text {for some} \ k' \ge k \bigr \} , \\ L_k \ &:= \ \max \bigl \{ j : w_{jk'} > 0 \ \text {for some} \ k' \le k \bigr \} . \end{aligned}$$

Here \(\ell _k \le L_k\) for all k, \(1 = \ell _1 \le \cdots \le \ell _m\), and \(L_1 \le \cdots \le L_m = \ell \).
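The bounds \(m_j, M_j\) (and, by applying the same routine to the transposed weight matrix, \(\ell _k, L_k\)) are easily computed from \(\varvec{w}\); a minimal sketch with 0-based indices (helper name ours):

```python
import numpy as np

def support_bounds(w):
    """Row-wise bounds of the index set P, as defined above (0-based):
    m_j = min{k : w[j', k] > 0 for some j' >= j},
    M_j = max{k : w[j', k] > 0 for some j' <= j}.
    A pair (j, k) belongs to P iff m_j[j] <= k <= M_j[j]."""
    l, m = w.shape
    m_j = np.empty(l, dtype=int)
    M_j = np.empty(l, dtype=int)
    for j in range(l):
        m_j[j] = np.min(np.nonzero(w[j:, :].sum(axis=0) > 0)[0])
        M_j[j] = np.max(np.nonzero(w[:j + 1, :].sum(axis=0) > 0)[0])
    return m_j, M_j

# column-wise bounds: ell_k, L_k = support_bounds(w.T)
```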

Fig. 1
figure 1

In this specific example, \(n \ge 8\) raw observations yielded \(\ell = 6\) different values \(x_j\) and \(m = 7\) different values \(y_k\). The green dots represent those (j, k) with \(w_{jk} > 0\). The green dots and black circles represent the set \(\mathcal {P}\)

Note that by definition, for any index pair (j, k),

$$\begin{aligned}&k \le M_j \quad \text {if and only if} \quad j \ge \ell _k, \end{aligned}$$
(6)
$$\begin{aligned}&k \ge m_j \quad \text {if and only if} \quad j \le L_k. \end{aligned}$$
(7)

3.2 Reparametrization and reformulation

If we replace a parameter \(\varvec{h} \in (0,\infty )^{\mathcal {P}}\) with its component-wise logarithm \(\varvec{\theta }\in {\mathbb {R}}^{\mathcal {P}}\), then property (5) is equivalent to

$$\begin{aligned} \theta _{j-1,k-1}^{} + \theta _{j,k}^{} - \theta _{j-1,k}^{} - \theta _{j,k-1}^{} \ \ge \ 0 \quad \text {whenever} \ (j-1,k), (j,k-1) \in \mathcal {P}. \end{aligned}$$
(8)

The set of all \(\varvec{\theta }\in {\mathbb {R}}^{\mathcal {P}}\) satisfying (8) is a closed convex cone and is denoted by \(\Theta \).

Now our goal is to minimize

$$\begin{aligned} f(\varvec{\theta }) \ := \ \sum _{(j,k) \in \mathcal {P}} \bigl ( - w_{jk}^{} \theta _{jk}^{} + n \exp (\theta _{jk}^{}) \bigr ) \end{aligned}$$
(9)

over all \(\varvec{\theta }\in \Theta \).

Theorem 1

There exists a unique minimizer \({\widehat{\varvec{\theta }}}\) of \(f(\varvec{\theta })\) over all \(\varvec{\theta }\in \Theta \).

Uniqueness follows directly from f being strictly convex, but existence is less obvious, unless \(w_{jk} > 0\) for all (j, k). With \({\widehat{\varvec{\theta }}}\) at hand, the corresponding solution \(\widehat{\varvec{h}} \in [0,\infty )^{\ell \times m}\) of the original problem is given by

$$\begin{aligned} {\widehat{h}}_{jk} \ = \ {\left\{ \begin{array}{ll} \exp ({\widehat{\theta }}_{jk}) &{} \text {if} \ (j,k) \in \mathcal {P}, \\ 0 &{} \text {else} . \end{array}\right. } \end{aligned}$$

In the proof of Theorem 1 and from now on, we view \({\mathbb {R}}^{\mathcal {P}}\) as a Euclidean space with inner product \(\langle \varvec{x},\varvec{y}\rangle := \sum _{(j,k) \in \mathcal {P}} x_{jk}^{} y_{jk}^{}\) and the corresponding norm \(\Vert \varvec{x}\Vert := \langle \varvec{x},\varvec{x}\rangle ^{1/2}\). For a differentiable function \(f : {\mathbb {R}}^{\mathcal {P}} \rightarrow {\mathbb {R}}\), its gradient is defined as \(\nabla f(\varvec{x}) := \bigl ( \partial f(\varvec{x}) / \partial x_{jk}^{} \bigr )_{(j,k) \in \mathcal {P}}\).
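For later reference, the target \(f\) and its gradient on \({\mathbb {R}}^{\mathcal {P}}\) are cheap to evaluate; a minimal sketch with \(\mathcal {P}\) encoded as a boolean mask (names illustrative):

```python
import numpy as np

def f_and_grad(theta, w, mask):
    """f(theta) = sum_{(j,k) in P} ( -w[j,k]*theta[j,k] + n*exp(theta[j,k]) )
    together with its gradient on P; 'mask' is the boolean indicator of P,
    and entries of theta outside P are ignored."""
    n = w.sum()
    e = np.exp(theta)
    val = np.sum((-w * theta + n * e)[mask])
    grad = np.where(mask, -w + n * e, 0.0)
    return val, grad
```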

Let us explain briefly why traditional optimization algorithms may become infeasible for large sample sizes n. Depending on the input data, the set \(\mathcal {P}\) may contain more than \(cn^2\) parameters, and the constraint (8) may involve at least \(cn^2\) linear inequalities, where \(c > 0\) is some generic constant. Even if we restrict our attention to parameters \(\varvec{\theta }\in \Theta \) such that a given subset of the inequalities in (8) are equalities, these parameters still span a linear space of dimension at least \(\max (\ell ,m)\), because all parameters \(\theta _{jm_j}\) and \(\theta _{\ell _kk}\) are unconstrained, and \(\max (\ell ,m)\) may be at least cn. Just determining a gradient and Hessian matrix of the target function f within this linear subspace would then require at least \(cn^4\) steps. Consequently, traditional minimization algorithms involving exact Newton steps may be computationally infeasible. Alternatively, we propose an iterative algorithm with quasi-Newton steps, each of which has running time \(O(n^2)\); the required memory is of this order, too.

3.3 Finding a new proposal

Version 1. To determine whether a given parameter \(\varvec{\theta }\in {\mathbb {R}}^{\mathcal {P}}\) is already optimal and, if not, to obtain a better one, we reparametrize the problem a second time. Let \(\tilde{\varvec{\theta }}= T(\varvec{\theta }) \in {\mathbb {R}}^{\mathcal {P}}\) be given by

$$\begin{aligned} {\tilde{\theta }}_{jk} \ = \ {\left\{ \begin{array}{ll} \theta _{jm_j} &{} \text {if} \ k = m_j , \\ \theta _{jk} - \theta _{j,k-1} &{} \text {if} \ m_j < k \le M_j . \end{array}\right. } \end{aligned}$$

Then \(\varvec{\theta }= T^{-1}(\tilde{\varvec{\theta }}) = \bigl ( \sum _{k'=m_j}^k {\tilde{\theta }}_{jk'} \bigr )_{j,k}\), and \(f(\varvec{\theta })\) is equal to

$$\begin{aligned} {\tilde{f}}(\tilde{\varvec{\theta }}) \ :=& \ \sum _{j=1}^\ell \sum _{k=m_j}^{M_j} \Bigl ( - w_{jk} \sum _{k'=m_j}^k {\tilde{\theta }}_{jk'} + n \exp \Bigl ( \sum _{k'=m_j}^k {\tilde{\theta }}_{jk'} \Bigr ) \Bigr ) \\ =& \ \sum _{j=1}^\ell \sum _{k=m_j}^{M_j} \Bigl ( - {\underline{w}}_{jk} {\tilde{\theta }}_{jk} + n \exp \Bigl ( \sum _{k'=m_j}^k {\tilde{\theta }}_{jk'} \Bigr ) \Bigr ) \quad \text{with} \ \ {\underline{w}}_{jk} := \sum _{k'=k}^{M_j} w_{jk'} . \end{aligned}$$

More importantly, we may represent \(\mathcal {P}\) as

$$\begin{aligned} \mathcal {P}\ &= \ \bigl \{ (j,m_j) : 1 \le j \le \ell \bigr \} \cup \bigl \{ (j,k) : 1 \le j \le \ell , \ m_j < k \le M_j \bigr \} \\ &= \ \bigl \{ (j,m_j) : 1 \le j \le \ell \bigr \} \cup \bigcup _{k=2}^m \bigl \{ (j,k) : \ell _k \le j \le L_{k-1} \bigr \}, \end{aligned}$$

where the latter equation follows from (6) and (7). Now the constraints (8) read

$$\begin{aligned} \bigl ( {\tilde{\theta }}_{jk} \bigr )_{j=\ell _k}^{L_{k-1}} \ \in \ {\mathbb {R}}_\uparrow ^{L_{k-1} - \ell _k + 1} \quad \text{ if } \ 2 \le k \le m \ \text{ and } \ L_{k-1} - \ell _k + 1 \ge 2 . \end{aligned}$$
(10)

Here \({\mathbb {R}}_\uparrow ^d := \{ \varvec{x} \in {\mathbb {R}}^d : x_1 \le \cdots \le x_d\}\). The set of \(\tilde{\varvec{\theta }}\in {\mathbb {R}}^{\mathcal {P}}\) satisfying (10) is denoted by \({\tilde{\Theta }}\).

For given \(\varvec{\theta }\) and \(\tilde{\varvec{\theta }}= T(\varvec{\theta })\), we approximate \({\tilde{f}}(\tilde{\varvec{x}})\) by the quadratic function

$$\begin{aligned} \tilde{\varvec{x}}\ \mapsto \ &{\tilde{f}}(\tilde{\varvec{\theta }}) + \bigl \langle \nabla {\tilde{f}}(\tilde{\varvec{\theta }}), \tilde{\varvec{x}}- \tilde{\varvec{\theta }}\bigr \rangle + 2^{-1} \sum _{(j,k) \in \mathcal {P}} \frac{\partial ^2 {\tilde{f}}}{\partial {\tilde{\theta }}_{jk}^2}(\tilde{\varvec{\theta }}) ({\tilde{x}}_{jk} - {\tilde{\theta }}_{jk})^2 \\ = \ &\textrm{const}(\varvec{\theta }) + 2^{-1} \sum _{(j,k) \in \mathcal {P}} {\tilde{v}}_{jk}(\varvec{\theta }) ({\tilde{x}}_{jk} - {\tilde{\gamma }}_{jk}(\varvec{\theta }))^2 \\ = \ &\textrm{const}(\varvec{\theta }) + 2^{-1} \sum _{j=1}^\ell {\tilde{v}}_{jm_j}(\varvec{\theta }) ({\tilde{x}}_{jm_j} - {\tilde{\gamma }}_{jm_j}(\varvec{\theta }))^2 + 2^{-1} \sum _{k=2}^m \sum _{\ell _k \le j \le L_{k-1}} {\tilde{v}}_{jk}(\varvec{\theta }) ({\tilde{x}}_{jk} - {\tilde{\gamma }}_{jk}(\varvec{\theta }))^2 \end{aligned}$$

with

$$\begin{aligned} {\tilde{v}}_{jk}(\varvec{\theta }) \ &:= \ \frac{\partial ^2 {\tilde{f}}}{\partial {\tilde{\theta }}_{jk}^2}(\tilde{\varvec{\theta }}) \ = \ n \sum _{k'=k}^{M_j} \exp (\theta _{jk'}) , \\ {\tilde{\gamma }}_{jk}(\varvec{\theta }) \ &:= \ {\tilde{\theta }}_{jk} - {\tilde{v}}_{jk}(\varvec{\theta })^{-1} \frac{\partial {\tilde{f}}}{\partial {\tilde{\theta }}_{jk}}(\tilde{\varvec{\theta }}) \ = \ T_{jk}(\varvec{\theta }) + {\tilde{v}}_{jk}(\varvec{\theta })^{-1} {\underline{w}}_{jk} - 1 . \end{aligned}$$

This quadratic function of \(\tilde{\varvec{x}}\) is easily minimized over \({\tilde{\Theta }}\) via the pool-adjacent-violators algorithm, applied to the subtuple \(({\tilde{x}}_{jk})_{j=\ell _k}^{L_{k-1}}\) for each \(k=2,\ldots ,m\) separately. Then we obtain the proposal

$$\begin{aligned} \Psi ^{\textrm{row}}(\varvec{\theta }) \ := \ T^{-1}(\tilde{\varvec{\theta }}_*(\varvec{\theta })) \quad \text {with}\quad \tilde{\varvec{\theta }}_*(\varvec{\theta }) \ := \ \mathop \mathrm{arg\,min}_{\tilde{\varvec{x}}\in {\tilde{\Theta }}} \sum _{(j,k) \in \mathcal {P}} {\tilde{v}}_{jk}(\varvec{\theta }) ({\tilde{x}}_{jk} - {\tilde{\gamma }}_{jk}(\varvec{\theta }))^2 . \end{aligned}$$

Interestingly, if \(\varvec{\theta }\) is row-wise calibrated in the sense that \(n \sum _{k=m_j}^{M_j} \exp (\theta _{jk}) = w_{j+}\) for \(1 \le j \le \ell \), then \({\tilde{\gamma }}_{jm_j}(\varvec{\theta }) = {\tilde{\theta }}_{jm_j}\) and thus \(\Psi ^{\textrm{row}}_{jm_j}(\varvec{\theta }) = \theta _{jm_j}\) for \(1 \le j \le \ell \).
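To illustrate Version 1, here is a sketch of \(\Psi ^{\textrm{row}}\) that combines the reparametrization \(T\), the quantities \({\tilde{v}}_{jk}\) and \({\tilde{\gamma }}_{jk}\), a weighted pool-adjacent-violators step within each column of increments, and the back-transformation \(T^{-1}\). It is our own rendering of the formulas above (0-based indices, helper names ours), not the authors' implementation:

```python
import numpy as np

def pava(y, v):
    """Weighted pool-adjacent-violators: minimizes sum v[i]*(x[i]-y[i])**2
    over all nondecreasing vectors x."""
    blocks = []                          # each block: [mean, weight, length]
    for yi, vi in zip(y, v):
        blocks.append([yi, vi, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, v2, c2 = blocks.pop()
            m1, v1, c1 = blocks.pop()
            blocks.append([(v1 * m1 + v2 * m2) / (v1 + v2), v1 + v2, c1 + c2])
    return np.concatenate([np.full(c, m) for m, _, c in blocks])

def psi_row(theta, w, m_j, M_j):
    """One row-based proposal Psi^row(theta): reparametrize theta into row-wise
    increments, minimize the separable quadratic approximation of f by PAVA
    within each column of increments, and transform back.  theta is an l x m
    float array used only on P; m_j, M_j are the 0-based bounds of P."""
    l, m = w.shape
    n = w.sum()
    t_theta = np.zeros_like(theta)       # T(theta)
    v = np.zeros_like(theta)             # second derivatives  ~v_{jk}
    gamma = np.zeros_like(theta)         # targets             ~gamma_{jk}
    for j in range(l):
        ks = np.arange(m_j[j], M_j[j] + 1)
        t_theta[j, ks[0]] = theta[j, ks[0]]
        t_theta[j, ks[1:]] = np.diff(theta[j, ks])
        v[j, ks] = n * np.cumsum(np.exp(theta[j, ks])[::-1])[::-1]
        w_under = np.cumsum(w[j, ks][::-1])[::-1]
        gamma[j, ks] = t_theta[j, ks] + w_under / v[j, ks] - 1.0
    # minimize the quadratic approximation over the cone tilde-Theta;
    # the coordinates (j, m_j) are unconstrained, so they keep their targets
    x = gamma.copy()
    for k in range(1, m):
        js = np.array([j for j in range(l) if m_j[j] <= k - 1 <= M_j[j] - 1])
        if js.size >= 2:
            x[js, k] = pava(gamma[js, k], v[js, k])
    # back-transform T^{-1}: cumulative sums of increments within each row
    prop = np.zeros_like(theta)
    for j in range(l):
        ks = np.arange(m_j[j], M_j[j] + 1)
        prop[j, ks] = np.cumsum(x[j, ks])
    return prop
```

The column-based proposal \(\Psi ^{\textrm{col}}\) of Version 2 below can be obtained from the same routine applied to the transposed arrays, with \((\ell _k, L_k)\) playing the roles of \((m_j, M_j)\).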

Version 2. Instead of reparametrizing \(\varvec{\theta }\in \Theta \) in terms of its values \(\theta _{jm_j}\), \(1 \le j \le \ell \), and its increments within rows, one could reparametrize it in terms of its values \(\theta _{\ell _kk}\), \(1 \le k \le m\), and its increments within columns, leading to a proposal \(\Psi ^{\textrm{col}}(\varvec{\theta })\). Here, \(\Psi ^{\textrm{col}}_{\ell _kk}(\varvec{\theta }) = \theta _{\ell _kk}\) for \(1 \le k \le m\), provided that \(\varvec{\theta }\) is column-wise calibrated.

3.4 Calibration

In terms of the log-parametrization with \(\varvec{\theta }\in \Theta \), the row-wise calibration mentioned earlier for \(\varvec{h}\) means to replace \(\theta _{jk}\) with

$$\begin{aligned} \theta _{jk} - \log \bigl ( \sum _{k'=m_j}^{M_j} \exp (\theta _{jk'}) \bigr ) + \log (w_{j+}/n). \end{aligned}$$

Analogously, replacing \(\theta _{jk}\) with

$$\begin{aligned} \theta _{jk} - \log \bigl ( \sum _{j'=\ell _k}^{L_k} \exp (\theta _{j'k}) \bigr ) + \log (w_{+k}/n) \end{aligned}$$

leads to a column-wise calibrated parameter \(\varvec{\theta }\). Iterating these calibrations alternately leads to a parameter which is (approximately) calibrated row-wise as well as column-wise.
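In code, one such alternating calibration sweep might look as follows (a sketch in the log-parametrization; names and the number of sweeps are illustrative):

```python
import numpy as np

def calibrate_rows_cols(theta, w, mask, sweeps=2):
    """Row- and column-wise calibration in the log-parametrization: shift each
    row of theta on P so that n * sum_k exp(theta[j, k]) = w_{j+}, then do the
    analogous shift for the columns, and repeat."""
    n = w.sum()
    theta = theta.copy()
    for _ in range(sweeps):
        for j in range(theta.shape[0]):                       # rows
            cols = np.flatnonzero(mask[j])
            theta[j, cols] += np.log(w[j].sum() / n) \
                              - np.log(np.exp(theta[j, cols]).sum())
        for k in range(theta.shape[1]):                       # columns
            rows = np.flatnonzero(mask[:, k])
            theta[rows, k] += np.log(w[:, k].sum() / n) \
                              - np.log(np.exp(theta[rows, k]).sum())
    return theta
```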

3.5 From new proposal to new parameter

Both functions \(\Psi = \Psi ^{\textrm{row}}, \Psi ^{\textrm{col}}\) have some useful properties summarized in the next lemma.

Lemma 2

The function \(\Psi \) is continuous on \(\Theta \) with \(\Psi ({\widehat{\varvec{\theta }}}) = {\widehat{\varvec{\theta }}}\). For \(\varvec{\theta }\in \Theta \setminus \{{\widehat{\varvec{\theta }}}\}\),

$$\begin{aligned} \delta (\varvec{\theta }) \ &:= \ \bigl \langle \nabla f(\varvec{\theta }), \varvec{\theta }- \Psi (\varvec{\theta }) \bigr \rangle \ > \ 0 , \\ f(\varvec{\theta }) - f({\widehat{\varvec{\theta }}}) \ &\le \ \max \bigl ( 2 \delta (\varvec{\theta }), \beta _1(\varvec{\theta }) \sqrt{\delta (\varvec{\theta })} \, \Vert \varvec{\theta }- {\widehat{\varvec{\theta }}}\Vert \bigr ) , \end{aligned}$$

and

$$\begin{aligned} \max _{t \in [0,1]} \, \Bigl ( f(\varvec{\theta }) - f \bigl ( (1 - t)\varvec{\theta }+ t \Psi (\varvec{\theta }) \bigr ) \Bigr ) \ \ge \ \min \Bigl ( 2^{-1} \delta (\varvec{\theta }), \ \frac{\delta (\varvec{\theta })^2}{\beta _2(\varvec{\theta }) \Vert \varvec{\theta }- \Psi (\varvec{\theta })\Vert ^2} \Bigr ) \end{aligned}$$

with continuous functions \(\beta _1, \beta _2 : \Theta \rightarrow (0,\infty )\).

Table 1 Pseudo code of our algorithm, returning an approximation \(\varvec{\theta }\) of \({\widehat{\varvec{\theta }}}\)

In view of this lemma, we want to replace \(\varvec{\theta }\ne {\widehat{\varvec{\theta }}}\) with \((1 - t_*) \varvec{\theta }+ t_* \Psi (\varvec{\theta })\) for some suitable \(t_* = t_*(\varvec{\theta }) \in [0,1]\) such that \(f(\varvec{\theta })\) really decreases. More specifically, with

$$\begin{aligned} \rho _{\varvec{\theta }}(t) \ := \ f(\varvec{\theta }) - f \bigl ( (1 - t)\varvec{\theta }+ t \Psi (\varvec{\theta }) \bigr ) , \end{aligned}$$

our goals are that for some constant \(\kappa \in (0,1]\),

$$\begin{aligned} \rho _{\varvec{\theta }}(t_*) \ \ge \ \kappa \max _{t \in [0,1]} \rho _{\varvec{\theta }}(t) , \end{aligned}$$

and in case of \(\rho _{\varvec{\theta }}\) being (approximately) a quadratic function, \(t_*\) should be (approximately) equal to \(\mathop \mathrm{arg\,max}_{t \in [0,1]} \rho _{\varvec{\theta }}(t)\). For that, we proceed similarly as in Dümbgen et al. (2006). We determine \(t_o := 2^{-n_o}\) with \(n_o\) the smallest integer such that \(\rho _{\varvec{\theta }}(2^{-n_o}) \ge 0\). Then we define a Hermite interpolation of \(\rho _{\varvec{\theta }}\):

$$\begin{aligned} {\tilde{\rho }}_{\varvec{\theta }}(t) \ &:= \ \rho _{\varvec{\theta }}'(0) t - c_o t^2 , \\ c_o \ &:= \ t_o^{-1} \bigl ( \rho _{\varvec{\theta }}'(0) - t_o^{-1}\rho _{\varvec{\theta }}(t_o) \bigr ) \ > \ 0 . \end{aligned}$$

This new function is such that \({\tilde{\rho }}_{\varvec{\theta }}(t)=\rho _{\varvec{\theta }}(t)\) for \(t = 0, t_o\), and \({\tilde{\rho }}_{\varvec{\theta }}'(0) = \rho _{\varvec{\theta }}'(0) > 0\). Since \({\tilde{\rho }}_{\varvec{\theta }}'(t) = \rho _{\varvec{\theta }}'(0) - 2 t c_o\), the maximizer of \({\tilde{\rho }}_{\varvec{\theta }}\) over \([0,t_o]\) is given by

$$\begin{aligned} t_* \ := \ \min \bigl ( t_o, 2^{-1} \rho _{\varvec{\theta }}'(0)/c_o \bigr ) . \end{aligned}$$

As shown in Lemma 1 of Dümbgen et al. (2006), this choice of \(t_*\) fulfils the requirements just stated, where \(\kappa = 1/4\).
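A sketch of this step-size rule, where `rho` stands for the function \(\rho _{\varvec{\theta }}\) (with \(\rho _{\varvec{\theta }}(0) = 0\)) and `rho_prime0` for \(\rho _{\varvec{\theta }}'(0) = \delta (\varvec{\theta })\); the guard for a nonpositive \(c_o\) and the cap on the number of halvings are our own additions:

```python
def step_size(rho, rho_prime0, max_halvings=30):
    """Choose t_* in [0, 1]: find t_o = 2**(-n_o), the largest of 1, 1/2, 1/4,
    ... with rho(t_o) >= 0, fit the quadratic Hermite interpolant matching
    rho at 0 and t_o with slope rho'(0) at zero, and return its maximizer
    over [0, t_o]."""
    t_o = 1.0
    for _ in range(max_halvings):
        if rho(t_o) >= 0:
            break
        t_o /= 2
    c_o = (rho_prime0 - rho(t_o) / t_o) / t_o
    if c_o <= 0:                 # interpolant nondecreasing on [0, t_o]
        return t_o
    return min(t_o, 0.5 * rho_prime0 / c_o)
```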

3.6 Complete algorithms

A possible starting point for the algorithm is given by \(\varvec{\theta }^{(0)} := ( - \log (\#\mathcal {P}) )_{(j,k) \in \mathcal {P}}\), but any other parameter \(\varvec{\theta }^{(0)} \in \Theta \) would work, too. Suppose we have already determined \(\varvec{\theta }^{(0)}, \ldots , \varvec{\theta }^{(s)}\) such that \(f(\varvec{\theta }^{(0)}) \ge \cdots \ge f(\varvec{\theta }^{(s)})\). Let \(\Psi (\varvec{\theta }^{(s)})\) be a new proposal with \(\Psi = \Psi ^{\textrm{row}}\) or \(\Psi = \Psi ^{\textrm{col}}\), and let \(\varvec{\theta }^{(s+1)} = (1 - t_*^{(s)}) \varvec{\theta }^{(s)} + t_*^{(s)} \Psi (\varvec{\theta }^{(s)})\) with \(t_*^{(s)} = t_*(\varvec{\theta }^{(s)}) \in [0,1]\) as described before. No matter which proposal function \(\Psi \) is used in each step, the resulting sequence \((\varvec{\theta }^{(s)})_{s\ge 0}\) converges to \({\widehat{\varvec{\theta }}}\).

Theorem 2

Let \((\varvec{\theta }^{(s)})_{s \ge 0}\) be the sequence just described. Then \(\lim _{s \rightarrow \infty } \varvec{\theta }^{(s)} = {\widehat{\varvec{\theta }}}\).

Our numerical experiments showed that a particularly efficient refinement is as follows: Before computing a new proposal \(\Psi (\varvec{\theta }^{(s)})\), one should calibrate \(\varvec{\theta }^{(s)}\) row-wise and column-wise. If s is even, we compute \(\Psi ^{\textrm{row}}(\varvec{\theta }^{(s)})\) to determine the next candidate \(\varvec{\theta }^{(s+1)}\). If s is odd, we compute \(\Psi ^\textrm{col}(\varvec{\theta }^{(s)})\) to obtain \(\varvec{\theta }^{(s+1)}\). The algorithm stops as soon as \(\delta (\varvec{\theta }^{(s)}) = \bigl \langle \nabla f(\varvec{\theta }^{(s)}), \varvec{\theta }^{(s)} - \Psi (\varvec{\theta }^{(s)}) \bigr \rangle \) is smaller than a prescribed small threshold. Table 1 provides corresponding pseudo code.
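As an outline only, the complete iteration (cf. Table 1) could be assembled from the helpers sketched in the preceding subsections; their names and signatures are illustrative, and this is not the authors' implementation:

```python
import numpy as np

def fit_tp2(w, mask, m_j, M_j, l_k, L_k, tol=1e-8, max_iter=10**4):
    """Outline of the full algorithm: alternate calibrated row- and column-based
    proposals, each followed by the step-size rule, until the directional
    derivative delta falls below 'tol'.  Uses f_and_grad, psi_row,
    calibrate_rows_cols and step_size from the sketches above; the column
    proposal is obtained by transposing the roles of rows and columns."""
    theta = np.where(mask, -np.log(mask.sum()), 0.0)   # uniform start on P
    for s in range(max_iter):
        theta = calibrate_rows_cols(theta, w, mask)    # Sect. 3.4
        if s % 2 == 0:
            prop = psi_row(theta, w, m_j, M_j)
        else:
            prop = psi_row(theta.T, w.T, l_k, L_k).T   # column-based proposal
        _, grad = f_and_grad(theta, w, mask)
        delta = np.sum((grad * (theta - prop))[mask])
        if delta <= tol:                               # stopping criterion
            break
        f0 = f_and_grad(theta, w, mask)[0]
        rho = lambda t: f0 - f_and_grad((1 - t) * theta + t * prop, w, mask)[0]
        t_star = step_size(rho, delta)                 # note rho'(0) = delta
        theta = (1 - t_star) * theta + t_star * prop
    return np.where(mask, np.exp(theta), 0.0)          # estimated h on P
```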

4 Simulation study

In this section, we compare estimation and prediction performances of the likelihood ratio order constrained estimator presented in this article with the estimator under usual stochastic order obtained via isotonic distributional regression. The latter estimator was mentioned briefly in the introduction. It is extensively discussed in Henzi et al. (2021b) and Mösching and Dümbgen (2020).

4.1 A Gamma model

We choose a parametric family of distributions from which we draw observations. These data are then used to compute distribution estimates, which we compare with the truth. The specific model we have in mind is a family \((Q_x)_{x\in {\mathfrak {X}}}\) of Gamma distributions with densities

$$\begin{aligned} g_x(y) \ := \ \frac{b(x)^{-a(x)}}{\Gamma \bigl (a(x)\bigr )} y^{a(x)-1} \exp \bigl (-y/b(x)\bigr ), \end{aligned}$$

with respect to Lebesgue measure on \((0,\infty )\), with some shape function \(a:{\mathfrak {X}}\rightarrow (0,\infty )\) and scale function \(b:{\mathfrak {X}}\rightarrow (0,\infty )\). Then \(Q_x\) is isotonic in \(x \in {\mathfrak {X}}\) with respect to likelihood ratio ordering if and only if both functions a and b are isotonic. Recall that since the family is increasing in likelihood ratio order, it is also increasing with respect to the usual stochastic order.

Fig. 2
figure 2

The true conditional Gamma distribution function \(G_x\), the estimate \({\widehat{G}}_x\) under the likelihood ratio (LR) order constraint, and the estimate under the usual stochastic (ST) order constraint are displayed from left to right for \(x\in \{1.5,2,2.5,3,3.5\}\)

The specific shape and scale functions used for this study are

$$\begin{aligned} a(x) \ := \ 2+(x+1)^2 \quad \text {and} \quad b(x) \ := \ 1 - \exp (-10x), \end{aligned}$$

defined for \(x\in {\mathfrak {X}}:=[1,4]\). Figure 2 displays corresponding true conditional distribution functions for a selection of x’s.

4.2 Sampling method

Let \(\ell _o\in \{50,1000\}\) be a predefined number and let

$$\begin{aligned} {\mathfrak {X}}_o \ := \ 1 + \frac{3}{\ell _o} \cdot \{1,2,\ldots , \ell _o\} \ \subset \ {\mathfrak {X}}. \end{aligned}$$

For a given sample size \(n\in {\mathbb {N}}\), the sample \((X_1,Y_1),(X_2,Y_2),\ldots ,(X_n,Y_n)\) is obtained as follows: Draw \(X_1,X_2,\ldots ,X_n\) uniformly from \({\mathfrak {X}}_o\) and sample each \(Y_k\) independently from \(Q_{X_k}\). This yields unique covariates \(x_1<\cdots <x_\ell \) as well as unique responses \(y_1< \cdots < y_m\), for some \(1\le \ell ,m \le n\).
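For reproducibility, a minimal sketch of this sampling scheme (function names and seed are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def a(x):              # shape function, isotonic on [1, 4]
    return 2 + (x + 1)**2

def b(x):              # scale function, isotonic on [1, 4]
    return 1 - np.exp(-10 * x)

def sample(n, l_o=50):
    """Draw X uniformly from the grid 1 + (3/l_o)*{1,...,l_o} and, given
    X = x, draw Y from the Gamma distribution with shape a(x), scale b(x)."""
    X = 1 + 3 * rng.integers(1, l_o + 1, size=n) / l_o
    Y = rng.gamma(shape=a(X), scale=b(X))
    return X, Y
```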

For each such sample, we compute estimates of \((Q_{x_j})_{j=1}^\ell \) under the likelihood ratio order and the usual stochastic order constraints. Using linear interpolation, we extend both families of estimates from the observed covariates \(\{x_j\}_{j=1}^\ell \) to the full set \({\mathfrak {X}}_o\), see Lemma 3. We therefore obtain families of estimates \(({\widehat{Q}}_{x})_{x\in {\mathfrak {X}}_o}\) under the likelihood ratio order constraint and analogous estimates under the usual stochastic order constraint. The family of cumulative distribution functions of the former is written \(({\widehat{G}}_x)_{x\in {\mathfrak {X}}_o}\), whereas the truth is denoted by \((G_x)_{x\in {\mathfrak {X}}_o}\). Although the performance of the empirical distribution is worse than that of the two order constrained estimators, it is still useful to study its behaviour, for instance to better understand boundary effects. The family of empirical cumulative distribution functions will be written \((\widehat{{\mathbb {G}}}_x)_{x\in {\mathfrak {X}}_o}\).

4.3 Single sample

Fig. 3
figure 3

Selection of \(\beta \)-quantile curves. Specifically, a taut-string (Dümbgen and Kovac 2009) is computed between the lower \({\mathfrak {X}}\ni x\mapsto \min \{y\in {\mathbb {R}}: {\tilde{G}}_x(y)\ge \beta \}\) and upper \({\mathfrak {X}}\ni x\mapsto \inf \{y\in {\mathbb {R}}: {\tilde{G}}_x(y)> \beta \}\) quantile curves for each of the three families of distribution functions (corresponding respectively to ‘Truth’, ‘LR’ and ‘ST’) and \(\beta \in \{0.1,0.25,0.5,0.75,0.9\}\)

Figure 2 provides a visual comparison of a selection of true conditional distribution functions with their corresponding estimates under order constraints for a single sample generated in the setting \(\ell _o=1000\) and \(n=1000\). It shows that the estimates under the likelihood ratio order constraint are much smoother than those under the usual stochastic order constraint. The former are in general also closer to the truth than the latter, and this also holds on average, as demonstrated in the next subsection. Smoothness and greater precision in estimation resulting from the likelihood ratio order are also apparent in Fig. 3, which displays a selection of quantile curves for each of the three families.

Fig. 4
figure 4

Monte Carlo simulations to evaluate estimation performance with a simple score. First row: Simple scores with \({\tilde{G}}\) being either \({\widehat{G}}\) (solid line), the estimate under the usual stochastic order constraint (dashed line) or \(\widehat{{\mathbb {G}}}\) (dotted line). Second row: Relative change of score when enforcing a likelihood ratio order constraint over the usual stochastic order constraint. The thicker line is the median variation, whereas the thin lines are the first and third quartiles. Negative values represent an improvement in score

4.4 A simple score

To assess the ability of each estimator to retrieve the truth, we produce Monte-Carlo estimates of the median of the score

$$\begin{aligned} R_x({\tilde{G}}, G) \ := \ \int \bigl |{\tilde{G}}_x(y) - G_x(y) \bigr | \, \text {d}Q_x(y), \end{aligned}$$

for each estimator and for each \(x\in {\mathfrak {X}}_o\). The above score may be decomposed as a sum of simple expressions involving the evaluation of \({\tilde{G}}_x\) and \(G_x\) on the finite set of unique responses, see Sect. 3. We also compute Monte-Carlo quartiles of the relative change in score when enforcing the likelihood ratio order constraint instead of the usual stochastic order constraint.
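One way such a decomposition can be carried out, assuming the estimate is a step function with jumps only at the unique responses (last value equal to one) and the true cdf is continuous and strictly increasing; helper names are ours:

```python
import numpy as np

def abs_int(c, u1, u2):
    """Closed form of  int_{u1}^{u2} |c - u| du  for u1 <= u2."""
    if c <= u1:
        return ((u2 - c)**2 - (u1 - c)**2) / 2
    if c >= u2:
        return ((c - u1)**2 - (c - u2)**2) / 2
    return ((c - u1)**2 + (u2 - c)**2) / 2

def simple_score(G_tilde_at_y, G_x, y):
    """R_x(G_tilde, G) = int |G_tilde_x - G_x| dQ_x for a step-function
    estimate (value G_tilde_at_y[k] on [y[k], y[k+1])) and a continuous true
    cdf G_x (a callable), via the substitution u = G_x(y)."""
    u = np.concatenate(([0.0], G_x(np.asarray(y, float)), [1.0]))
    c = np.concatenate(([0.0], np.asarray(G_tilde_at_y, float)))
    return sum(abs_int(c[k], u[k], u[k + 1]) for k in range(len(c)))
```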

Fig. 5
figure 5

Monte Carlo simulations to evaluate prediction performance using a CRPS-type score. First row: CRPS scores with \({\tilde{G}}\) being either \({\widehat{G}}\) (solid line), the estimate under the usual stochastic order constraint (dashed line) or \(\widehat{{\mathbb {G}}}\) (dotted line). Second row: Relative change of score when enforcing a likelihood ratio order constraint over the usual stochastic order constraint

The results of the simulations are displayed in Fig. 4. A first observation is that the performance of all three estimators decreases towards the boundary points of \({\mathfrak {X}}\), and this effect is more pronounced for the two order constrained estimators. This is a known phenomenon from shape constrained inference. However, in the interior of \({\mathfrak {X}}\), taking the stochastic ordering into account pays off. The second row of plots in Fig. 4 shows the relative change in score when estimating the family of distributions with a likelihood ratio order constraint instead of the usual stochastic order constraint. It is observed that the improvement in score becomes larger and occurs on a wider sub-interval of \({\mathfrak {X}}\) as \(\ell _o\) and n increase. Only towards the boundary does the usual stochastic order seem to perform better.

4.5 Theoretical predictive performances

Using the same Gamma model, we evaluate predictive performances of both estimators using the continuous ranked probability score

$$\begin{aligned} \text {CRPS}({\tilde{G}}_x, y) \ := \ \int \bigl ( {\tilde{G}}_x(z) - 1_{[y\le z]}\bigr ) ^2 \, \text {d}z. \end{aligned}$$

The CRPS is a strictly proper scoring rule which allows for comparisons of probabilistic forecasts, see Gneiting and Raftery (2007) and Jordan et al. (2019). It can be seen as an extension of the mean absolute error to probabilistic forecasts. The CRPS is therefore interpreted in the same unit of measurement as the true distribution or data.
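For a step-function forecast cdf, as produced by the order constrained estimators, the CRPS has a simple closed form; a minimal sketch (helper name ours), assuming the last cdf value equals one:

```python
import numpy as np

def crps_step_cdf(y_support, cdf_values, y_obs):
    """CRPS(G_tilde, y_obs) = int (G_tilde(z) - 1[y_obs <= z])^2 dz for a
    step-function cdf with value cdf_values[k] on [y_support[k], y_support[k+1]),
    0 below the support and cdf_values[-1] == 1 above it."""
    z = np.asarray(y_support, float)
    F = np.asarray(cdf_values, float)
    pts = np.unique(np.concatenate((z, [y_obs])))   # breakpoints of the integrand
    total = 0.0
    for a, b in zip(pts[:-1], pts[1:]):
        Fz = 0.0 if a < z[0] else F[np.searchsorted(z, a, side='right') - 1]
        ind = 1.0 if y_obs <= a else 0.0
        total += (Fz - ind)**2 * (b - a)
    return total
```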

Because the true underlying distribution is known in the present simulation setting, the expected CRPS is given by

$$\begin{aligned} S_x({\tilde{G}}, G) \ :=&\ \int \textrm{CRPS}({\tilde{G}}_x, y) \, \textrm{d}Q_x(y) \\ \ =&\ \sum _{k=0}^m \int _{[y_k,y_{k+1})} \bigl ( {\tilde{G}}_x(y_k) - G_x(y) \bigr )^2 \,\textrm{d}y + \frac{b(x)}{B(1/2,a(x))}, \end{aligned}$$

where \(y_0:=0\), \(y_{m+1}:=+\infty \) and \(B(\cdot ,\cdot )\) is the beta function. As shown in Sect. 3, the above sum of integrals may be rewritten as a sum of elementary expressions involving the evaluation of \({\tilde{G}}_{x}\) and \(G_{x}\) on the finite set of unique responses, as well as two simple integrals which are computed via numerical integration. Consequently, we compute Monte-Carlo estimates of the median of the score \(S_x({\tilde{G}}, G)\) for each estimator, as well as estimates of quartiles of the relative change in score when choosing \({\widehat{G}}\) over the estimate under the usual stochastic order constraint.

Fig. 6
figure 6

Subsample of the weight for age data and \(\beta \)-quantile curves computed from that sample under likelihood ratio order constraint, \(\beta \in \{0.1,0.25,0.5,0.75,0.9\}\). A logarithmic scale was used for the weight variable

Figure 5 outlines the results of the simulations. Similar boundary effects as for the simple score are observed. In the interior of \({\mathfrak {X}}\), the usual stochastic order improves upon the naive empirical estimator, and the likelihood ratio order yields the best results. In terms of relative change in score, it appears that imposing a likelihood ratio order constraint to estimate the family of distributions yields an average score reduction of about \(0.5\%\) in comparison with the usual stochastic order estimator for a sample of \(n=50\). For \(n=1000\), this improvement occurs on a wider subinterval of \({\mathfrak {X}}\) and more frequently, as shown by the third quartile curve. Note further that the expected CRPS increases with x in the interior of \({\mathfrak {X}}\). This is due to the fact that the CRPS has the same unit of measurement as the response variable. Since the scale of the response, characterized by b, increases with x, so does the corresponding score.

Fig. 7
figure 7

Monte Carlo simulations to evaluate prediction performance using an empirical CRPS score. First row: empirical CRPS scores with \({\tilde{G}}\) being either \({\widehat{G}}\) (solid line), the estimate under the usual stochastic order constraint (dashed line, hardly distinguishable from the solid line) or \(\widehat{{\mathbb {G}}}\) (dotted line). Second row: Relative change of score when enforcing a likelihood ratio order constraint over the usual stochastic order constraint

4.6 Empirical predictive performances

We use the weight for age dataset already studied in Mösching and Dümbgen (2020). It comprises the age and weight of \(n=16\,432\) girls whose age in years lies within \({\mathfrak {X}}:=[2,16]\). A subsample of these data of size \(2\,000\) is presented in Fig. 6, along with estimated quantile curves under likelihood ratio order using that subsample. The dataset was publicly released as part of the National Health and Nutrition Examination Survey conducted in the US between 1963 and 1991 (data available from www.cdc.gov) and was analyzed by Kuczmarski et al. (2002) with parametric models to produce smooth quantile curves.

Although the likelihood ratio order constraint is harder to justify than the very natural stochastic order constraint, we are interested in the effect of a stronger regularization imposed by the former constraint.

The forecast evaluation is performed using a leave-\(n_{\text {train}}\)-out cross-validation scheme. More precisely, we choose random subsets \(\mathcal {D}_{\text {train}}\) of \(n_{\text {train}}\) observations which we use to train our estimators. Using the rest of the \(n_{\text {test}}:=n-n_{\text {train}}\) data pairs in \(\mathcal {D}_{\text {test}}\), we evaluate predictive performance by computing the sample median of \({\widehat{S}}_x({\tilde{G}},\mathcal {D}_{\text {test}})\) for each estimator and each \(x\in {\mathfrak {X}}_o\), where

$$\begin{aligned} {\widehat{S}}_x({\tilde{G}},\mathcal {D}_{\text {test}}) \ := \ \frac{\sum _{(X,Y)\in \mathcal {D}_{\text {test}}:X=x} \textrm{CRPS}({\tilde{G}}_x, Y)}{\#\{(X,Y)\in \mathcal {D}_{\text {test}}:X=x\}}. \end{aligned}$$

Quartile estimates of the relative change in score are also computed.

Figure 7 shows the forecast evaluation results. As expected, the empirical CRPS increases with age, since the spread of the weight increases with age. As to the relative change in score, improvements of about \(0.5\%\) can be seen for both training sample sizes. The region of \({\mathfrak {X}}\) where the estimator under the likelihood ratio order constraint shows better predictive performance is widest for the larger training sample size. These results show the benefit of a stronger regularization.