Problem-driven scenario generation: an analytical approach for stochastic programs with tail risk measure

Scenario generation is the construction of a discrete random vector to represent the uncertain parameters of a stochastic program. Most approaches to scenario generation are distribution-driven; that is, they attempt to construct a random vector which captures the uncertainty well in a probabilistic sense. A problem-driven approach, on the other hand, may be able to exploit the structure of a problem to provide a more concise representation of the uncertainty. In this paper we propose an analytic approach to problem-driven scenario generation. This approach applies to stochastic programs where a tail risk measure, such as conditional value-at-risk, is applied to a loss function. Since tail risk measures depend only on the upper tail of a distribution, standard methods of scenario generation, which typically spread their scenarios evenly across the support of the random vector, struggle to represent tail risk adequately. Our scenario generation approach works by targeting the construction of scenarios in the areas of the distribution corresponding to the tails of the loss distributions. We provide conditions under which our approach is consistent with sampling, and as proof of concept demonstrate how our approach could be applied to two classes of problem, namely network design and portfolio selection. Numerical tests on the portfolio selection problem demonstrate that our approach yields better and more stable solutions than standard Monte Carlo sampling.


Introduction
Stochastic programming is a tool for making decisions under uncertainty. Under this modeling paradigm, uncertain parameters are modeled as a random vector, and one attempts to minimize (or maximize) the expectation or risk measure of some loss function which depends on the initial decision. However, what distinguishes stochastic programming from other stochastic modeling approaches is its ability to explicitly model future decisions based on outcomes of stochastic parameters and initial decisions, and the associated costs of these future decisions. The power and flexibility of the stochastic programming approach comes at a price: stochastic programs are usually analytically intractable, and often not susceptible to solution techniques for deterministic programs.
Typically, a stochastic program can only be solved when it is scenario-based, that is, when the random vector for the problem has a finite discrete distribution. For example, stochastic linear programs become large-scale linear programs when the underlying random vector is discrete. In the stochastic programming literature, the mass points of this random vector are referred to as scenarios, the discrete distribution as the scenario set, and the construction of this set as scenario generation. Scenario generation can consist of discretizing a continuous probability distribution, or directly modeling the uncertain quantities as discrete random variables. The more scenarios in a set, the more computational power is required to solve the problem. The key issue of scenario generation is therefore how to represent the uncertainty in a way that ensures the solution to the problem is reliable, while keeping the number of scenarios low enough that the problem remains computationally tractable.
A common approach to scenario generation is to fit a statistical model to the uncertain problem parameters and then generate a random sample from this model to form the scenario set. This has desirable asymptotic properties [22, 33], but may require large sample sizes to ensure the reliability of the solutions it yields. This can be mitigated somewhat by using variance reduction techniques such as stratified sampling and importance sampling [24]. Sampling also has the advantage that it can be used to construct confidence intervals on the true solution value [25]. Another approach is to construct a scenario set whose distance from the true distribution, with respect to some probability metric, is small [28, 19, 12]. These approaches tend to yield better and much more stable solutions to stochastic programs than sampling does.
A characteristic of these approaches to scenario generation is that they are distribution-driven; that is, they aim only to approximate a distribution and are divorced from the stochastic program for which they are producing scenarios. By exploiting the structure of a problem, it may be possible to find a more parsimonious representation of the uncertainty. Note that such a problem-driven approach may not yield a discrete distribution which is close to the true distribution in a probabilistic sense; the aim is only to find a discrete distribution which yields a high-quality solution to our problem.
Stochastic programs often have the objective of minimizing the expectation of a loss function. This is particularly appropriate when the initial decision represents a strategic decision that is going to be used again and again, and individual large losses do not matter in the long term. For example, in a stochastic facility location problem (e.g. see [5]) the locations of several facilities must be chosen, subject to the unknown demands of customers, in a way which minimizes fixed investment costs and future distribution costs. In other cases, the decision may be used only once or a few times, and the occurrence of large losses may have serious consequences such as bankruptcy. This is characteristic of the portfolio selection problem [26] studied in detail in the latter part of this paper. In this latter case, minimizing the expectation alone is not appropriate, as this does not necessarily mitigate the possibility of large losses. One possible remedy is to use a risk measure which penalizes in some way the likelihood and severity of potential large losses.
In this paper we are interested in stochastic programs which use tail risk measures. A precise definition of a tail risk measure will be given in Section 3, but for now one can think of a tail risk measure as a function of a random variable which depends only on the upper tail of its distribution function. Tail risk measures are useful as they summarize the extent of potential losses in the worst possible outcomes. Examples of tail risk measures include the Value-at-Risk (VaR) [21] and the Conditional Value-at-Risk (CVaR) [29], both of which are commonly used in financial contexts. Although the methodology developed in this paper can in principle be applied to any loss function, in this work we are mainly interested in loss functions which arise in one- and two-stage stochastic programs.
Distribution-driven scenario generation methods are particularly problematic for stochastic programs involving tail risk measures. This is because these methods tend to spread their scenarios evenly across the support of the distribution, and so struggle to represent the tail risk adequately without using a potentially prohibitively large number of scenarios.
In this paper, we propose an analytic problem-driven approach to scenario generation, applicable to stochastic programs which use tail risk measures of a form made precise in Section 3. We observe that the value of a tail risk measure depends only on scenarios confined to an area of the distribution that we call the risk region. This means that all scenarios that are not in the risk region can be aggregated into a single point. By concentrating scenarios in the risk region, we can calculate the value of a tail risk measure more accurately.
Given a risk region for a problem, we propose a simple algorithm for generating scenarios which we call aggregation sampling. This algorithm draws samples from the random vector until a specified number of samples in the risk region has been produced; all other samples are aggregated into a single scenario. We state and prove conditions under which this method is asymptotically consistent with standard Monte Carlo sampling.
In general, finding a risk region is difficult as it is determined by the loss function, the problem constraints and the distribution of the uncertain parameters. Therefore, we derive risk regions for two classes of problem, namely network design and portfolio selection.

Tail risk measures

We recall the following definition of a risk measure from [35]. Definition 3.1 (Risk Measure). Let (Ω, P) be a probability space, and Θ be the set of measurable real-valued random variables on (Ω, P). Then, a risk measure is a function ρ : Θ → R ∪ {∞}.
For a risk measure to be useful, it should in some way penalize potential large losses. For example, in the classical Markowitz problem [26], one aims to minimize the variance of the return of a portfolio. By choosing a portfolio with a low variance, we reduce the probability of large losses as a direct consequence of Chebyshev's inequality (see for instance [6]). Various criteria for risk measures have been proposed: in [3] a coherent risk measure is defined to be a risk measure which satisfies axioms such as positive homogeneity and subadditivity; another perhaps desirable criterion is that the risk measure is consistent with respect to first- and second-order stochastic dominance, see [27] for instance.
Besides not satisfying some of the above criteria, a major drawback of using variance as a risk measure is that it penalizes all large deviations from the mean; that is, it penalizes large profits as well as large losses. This motivates the idea of using risk measures which depend only on the upper tail of the loss distribution. To formalize this idea, we first recall the definition of the quantile function. Definition 3.2 (Quantile Function). Suppose θ is a random variable with distribution function F_θ. Then the generalized inverse distribution function, or quantile function, is defined as follows:

F_θ^{-1}(β) := inf{z ∈ R : F_θ(z) ≥ β}.

We refer to the quantile function evaluated at β, F_θ^{-1}(β), as the β-quantile.
The β-quantile can be interpreted as the smallest value for which the distribution function is greater than or equal to β. The β-tail of a distribution is the restriction of the distribution function to values equal to or above the β-quantile. In the context of risk management, we typically have 0.9 ≤ β < 1.0. The following definition says that a tail risk measure is a risk measure that depends only on the β-tail of a distribution. Definition 3.3 (Tail Risk Measure). Let ρ_β : Θ → R ∪ {∞} be a risk measure per Definition 3.1. Then ρ_β is a β-tail risk measure if ρ_β(θ) depends only on the restriction of the quantile function of θ above β, in the sense that if θ and θ̃ are random variables with F_θ^{-1}(u) = F_θ̃^{-1}(u) for all u ∈ [β, 1), then ρ_β(θ) = ρ_β(θ̃). To show that ρ_β is a β-tail risk measure, we must show that ρ_β(θ) can be written as a function of the quantile function at or above β. Two very popular tail risk measures are the value-at-risk [21] and the conditional value-at-risk [30]. Example 3.4 (Value at risk). Let θ be a random variable, and 0 < β < 1. Then, the β-VaR of θ is defined to be the β-quantile of θ:

β-VaR(θ) := F_θ^{-1}(β).

Example 3.5 (Conditional value at risk). Let θ be a random variable, and 0 < β < 1. The following alternative characterization of β-CVaR [2] shows directly that it is a β-tail risk measure:

β-CVaR(θ) = (1/(1 − β)) ∫_β^1 F_θ^{-1}(u) du.
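As a concrete illustration, both measures can be estimated from a finite loss sample. The following is a minimal sketch (the function name and toy data are ours, not the paper's): the β-VaR is the generalized inverse of the empirical distribution function, and the β-CVaR is computed via the Rockafellar-Uryasev identity CVaR = VaR + E[(θ − VaR)⁺]/(1 − β).

```python
import numpy as np

def var_cvar(losses, beta):
    """Empirical beta-VaR and beta-CVaR of a sample of losses.

    VaR is the generalized inverse of the empirical distribution
    function; CVaR uses CVaR = VaR + E[(loss - VaR)^+] / (1 - beta),
    which is valid when evaluated at any beta-quantile of the sample.
    """
    losses = np.sort(np.asarray(losses, dtype=float))
    n = len(losses)
    i = int(np.ceil(beta * n)) - 1      # smallest i with (i+1)/n >= beta
    var = losses[max(i, 0)]
    cvar = var + np.mean(np.maximum(losses - var, 0.0)) / (1.0 - beta)
    return var, cvar

# equiprobable losses 0..9: the 0.8-quantile is 7, and the 0.8-CVaR
# averages the quantile function over (0.8, 1], giving 8.5
v, c = var_cvar(np.arange(10.0), beta=0.8)
```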
Note that in the case that θ is a continuous random variable, the β-CVaR is the conditional expectation of the random variable above its β-quantile (e.g. see [30]).
The observation that we exploit in this work is that very different random variables will have the same value under a β-tail risk measure so long as their β-tails are the same.
When showing that two distributions have the same β-tails, it is convenient to use distribution functions rather than quantile functions. The following result gives conditions which ensure that the β-tails of two distributions are the same; we will make use of these in proofs later in this paper. Lemma 3.6. Suppose that θ and θ̃ are random variables such that one of the two following conditions holds: Then F_θ^{-1}(u) = F_θ̃^{-1}(u) for all u ∈ [β, 1). Proof. We first prove that condition (i) implies that the β-tails are the same; the key steps use the fact that quantile functions are non-decreasing.

Risk regions
In this paper we are primarily interested in problems of the following form:

min_{x ∈ X} ρ_β(f(x, ξ)),

where X ⊆ R^k is a deterministic set of feasible decisions, ξ ∈ Ξ ⊆ R^d is a random vector defined on a probability space (Ω, P), the set Ξ is convex, f : X × Ξ → R is a loss function, and ρ_β is a tail risk measure.
In order to solve these problems accurately, we need to be able to approximate well the tail risk measure of the loss function f(x, ξ) for all feasible decisions x ∈ X.
To avoid repeated use of cumbersome notation, we introduce the following short-hand for the distribution and quantile functions of the loss: F_x := F_{f(x,ξ)} and F_x^{-1} := F_{f(x,ξ)}^{-1}. In addition, since the loss function is only defined on Ξ, we frequently take complements of sets contained in Ξ. Again, to avoid repeated use of cumbersome notation, the standard notation for complements will apply with respect to Ξ; that is, for R ⊆ Ξ we write R^c in place of Ξ \ R.
Since tail risk measures depend only on those outcomes which are in the β-tail, we aim to identify which outcomes lead to a loss in the β-tail for some feasible decision. This motivates the following definition. Definition 3.7 (Risk region). For 0 < β < 1 the β-risk region with respect to the decision x ∈ X is defined as follows:

R_x(β) = {ξ* ∈ Ξ : f(x, ξ*) ≥ F_x^{-1}(β)}.

The risk region with respect to the feasible region X ⊂ R^k is defined to be

R_X(β) = ∪_{x ∈ X} R_x(β).

The complement of this region is called the non-risk region. This can also be written

R_X(β)^c = ∩_{x ∈ X} {ξ* ∈ Ξ : f(x, ξ*) < F_x^{-1}(β)}.

The following basic properties of the risk region follow directly from the definition.
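For a finite sample and a finite set of candidate decisions, membership of the empirical risk region can be checked directly from this definition: a sampled scenario lies in the risk region if, under at least one candidate decision, its loss reaches the empirical β-quantile. A sketch assuming equiprobable scenarios and a generic loss callback (the helper names and the toy single-asset loss are ours):

```python
import numpy as np

def empirical_quantile(values, beta):
    """Generalized inverse of the empirical distribution function."""
    v = np.sort(values)
    return v[max(int(np.ceil(beta * len(v))) - 1, 0)]

def empirical_risk_region(loss, decisions, scenarios, beta):
    """Flag sampled scenarios lying in the empirical beta-risk region R_X.

    A scenario belongs to R_x(beta) if its loss under x reaches the
    empirical beta-quantile of the losses, and to R_X(beta) if this
    holds for at least one candidate decision x.
    """
    in_region = np.zeros(len(scenarios), dtype=bool)
    for x in decisions:
        losses = np.array([loss(x, s) for s in scenarios])
        in_region |= losses >= empirical_quantile(losses, beta)
    return in_region

# toy single-asset portfolio: loss is the negated position-weighted return
flags = empirical_risk_region(lambda x, s: -x * s, [1.0],
                              [-2.0, -1.0, 0.0, 1.0, 2.0], beta=0.8)
```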
We now state a technical condition and prove that it ensures that the distribution of the random vector in a given region completely determines the value of a tail risk measure. In essence, this condition ensures that there is enough mass in the set to guarantee that the β-quantile does not depend on the probability distribution outside of it. Definition 3.8 (Aggregation condition). Suppose that R_X(β) ⊆ R ⊂ Ξ and that for all x ∈ X, R satisfies the following condition:

P({f(x, ξ) < F_x^{-1}(β)} ∩ {ξ ∈ R}) > 0.  (8)

Then R is said to satisfy the β-aggregation condition.
The motivation for the term aggregation condition comes from Theorem 3.9 below. This result ensures that if a set satisfies the aggregation condition, then we can transform the probability distribution of ξ so that all the mass in the complement of this set is aggregated into a single point, without affecting the value of the tail risk measure. This property is particularly relevant to scenario generation: if we have such a set, then all scenarios which it does not contain can be aggregated, reducing the size of the stochastic program. Note that the β-aggregation condition does not hold if ξ is a discrete random vector; however, in this case the conclusion of the theorem holds without any extra conditions on R. Theorem 3.9. Suppose that R_X(β) ⊆ R ⊂ Ξ and that ξ̃ is a random vector which coincides with ξ on R, that is, P(ξ̃ ∈ A) = P(ξ ∈ A) for all measurable A ⊆ R. Then for any tail risk measure ρ_β we have ρ_β(f(x, ξ̃)) = ρ_β(f(x, ξ)) for all x ∈ X, if one of the following conditions holds: (a) R satisfies the β-aggregation condition; (b) ξ is a discrete random vector.
Proof. Fix x ∈ X. To show that ρ_β(f(x, ξ)) = ρ_β(f(x, ξ̃)) we must show that the β-quantiles and the β-tail distributions of f(x, ξ) and f(x, ξ̃) are the same. Using Lemma 3.6, the following two conditions are necessary and sufficient for this to occur: In the first case, suppose that θ ≥ F_x^{-1}(β); the required equality is a direct consequence of (9) and (10). In the second case, we suppose θ < F_x^{-1}(β) and show that F_{f(x,ξ̃)}(θ) < β for each of the two conditions (a) and (b) separately. In the case where condition (a) holds, that is, when R satisfies the β-aggregation condition, the required inequality follows from (9) and (10). In the case where condition (b) holds, that is, when ξ is discrete, the inequality follows directly.

It is difficult to verify that a set R ⊇ R_X(β) satisfies the β-aggregation condition by directly checking that condition (8) holds. The following proposition gives conditions under which it holds immediately for R_X(β') when β' < β. Proposition 3.10. Suppose β' < β and F_x is continuous at F_x^{-1}(β) for all x ∈ X. Then R_X(β') satisfies the β-aggregation condition. For convenience, we now drop β from our notation and terminology. Thus, we refer to the β-risk region and the β-aggregation condition simply as the risk region and the aggregation condition respectively, and write R_X(β) as R_X.
All sets satisfying the aggregation condition must contain the risk region; however, the aggregation condition does not necessarily hold for the risk region itself.
We must impose extra conditions on the problem to avoid some degenerate cases where the aggregation condition and the conclusion of Theorem 3.9 do not hold.The following example demonstrates such a degenerate case.
Example 3.11. Suppose that ξ ∼ Uniform(0, 1) and f(x, ξ) = ξ, so that F_x^{-1}(β) = β for all x ∈ X, and so R_X = [β, 1]. Now, consider the random variable φ(ξ), where φ : R → R is defined as follows: On the other hand, we have that The following result provides extra conditions for continuous distributions which ensure that the aggregation condition holds for the risk region R_X. Proposition 3.12. Suppose that ξ is a continuous random vector whose support coincides with Ξ, and that the following conditions hold: Then the risk region R_X satisfies the aggregation condition. Proof.
By the conditions of the proposition, there exist points ξ_0, ξ_1 ∈ Ξ with f(x, ξ_0) < F_x^{-1}(β) and f(x, ξ_1) ≥ F_x^{-1}(β), and so, given that t → f(x, γ(t)) is continuous, there must exist 0 < t < 1 at which the loss crosses the quantile. This yields a non-empty open set contained in the support of ξ, which therefore has positive probability; hence the aggregation condition holds for R_X.
The following proposition gives a condition under which the non-risk region is convex. Proposition 3.13. Suppose that for each x ∈ X the function ξ → f(x, ξ) is quasi-convex. Then the non-risk region R_X^c is convex.
This convexity condition holds for a large class of stochastic programs. Two-stage stochastic linear programs have loss functions of the following general form:

Q(x, ξ) = min_y { q^T y : W y = h − T x, y ≥ 0 },

where q, y ∈ R^r, h ∈ R^t, W ∈ R^{t×r} and T ∈ R^{t×k}, and ξ is the concatenation of all the stochastic components of the problem; that is, ξ^T = (q^T, h^T, T_1, ..., T_t, W_1, ..., W_t), where T_i and W_i denote the i-th rows of the matrices T and W respectively. Standard results in stochastic programming guarantee that ξ → Q(x, ξ) is convex if the only random components of the problem are h and T, that is, if ξ^T = (h^T, T_1, ..., T_t). See for instance [7, Chapter 3, Theorem 2].
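As a small numerical illustration of this convexity (a sketch of our own, not one of the paper's test problems): the simple-recourse instance below, with a single constraint row W = [1, −1], a random right-hand side h, and penalty costs q_plus and q_minus of our choosing, admits a closed-form recourse function, and a midpoint test confirms convexity of h → Q(x, h).

```python
import numpy as np

# Simple-recourse instance of a two-stage problem:
# Q(x, h) = min { q_plus*y1 + q_minus*y2 : y1 - y2 = h - x, y1, y2 >= 0 },
# whose optimal value has the closed form below (shortfall/surplus penalty)
def Q(x, h, q_plus=4.0, q_minus=1.0):
    return q_plus * max(h - x, 0.0) + q_minus * max(x - h, 0.0)

# midpoint convexity check of h -> Q(x, h) on a grid
x = 2.0
hs = np.linspace(-5.0, 5.0, 41)
convex = all(
    Q(x, 0.5 * (h1 + h2)) <= 0.5 * (Q(x, h1) + Q(x, h2)) + 1e-12
    for h1 in hs for h2 in hs
)
```

Since Q is piecewise linear with a kink at h = x, the midpoint inequality holds everywhere on the grid.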
The random vector in the following definition plays a special role in our theory.
Definition 3.14 (Aggregated random vector). For a set R with R_X ⊆ R ⊂ Ξ, the aggregated random vector is defined as follows:

ψ_R(ξ) = ξ if ξ ∈ R, and ψ_R(ξ) = E[ξ | ξ ∈ R^c] otherwise.

If R satisfies the aggregation condition and E[ξ | ξ ∈ R^c] ∈ R^c, then ρ_β(f(x, ψ_R(ξ))) = ρ_β(f(x, ξ)) for all x ∈ X. The latter condition holds, for example, if ξ → f(x, ξ) is convex for all x ∈ X, since by Proposition 3.13 we have that R_X^c is convex and also R^c ⊆ R_X^c. Under these conditions, as well as preserving the value of the tail risk measure, the mapping ψ_R will also preserve the expectation for affine loss functions.
Corollary 3.15. Suppose for each x ∈ X the function ξ → f(x, ξ) is affine, and for a set R ⊂ Ξ satisfying the aggregation condition we have that E[ξ | ξ ∈ R^c] ∈ R^c. Then for all x ∈ X, ρ_β(f(x, ψ_R(ξ))) = ρ_β(f(x, ξ)) and E[f(x, ψ_R(ξ))] = E[f(x, ξ)]. Proof. The equality of the tail risk measures follows immediately from Theorem 3.9. For the expectation, conditioning on the events {ξ ∈ R} and {ξ ∈ R^c} and using the affinity of ξ → f(x, ξ) gives E[f(x, ψ_R(ξ))] = E[f(x, ξ)].
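The preservation of the tail risk measure under aggregation can be sanity-checked numerically on a toy discrete example (our construction; by part (b) of Theorem 3.9 no extra condition is needed in the discrete case). Collapsing every scenario outside the risk region into its conditional mean leaves the β-CVaR unchanged:

```python
import numpy as np

def cvar(losses, probs, beta):
    """beta-CVaR of a discrete loss distribution via the
    Rockafellar-Uryasev identity CVaR = t + E[(loss - t)^+]/(1 - beta),
    evaluated at a beta-quantile t (the identity is insensitive to
    which beta-quantile is used)."""
    order = np.argsort(losses)
    losses = np.asarray(losses, dtype=float)[order]
    probs = np.asarray(probs, dtype=float)[order]
    t = losses[np.searchsorted(np.cumsum(probs), beta)]
    return t + np.maximum(losses - t, 0.0) @ probs / (1.0 - beta)

beta = 0.8
# equiprobable loss outcomes 0..9; risk region R = {xi >= 7} contains
# every outcome at or above the 0.8-quantile
xi = np.arange(10.0)
p = np.full(10, 0.1)

# psi_R collapses the seven non-risk outcomes 0..6 into their mean 3.0,
# which carries the aggregated mass 0.7
xi_agg = np.array([3.0, 7.0, 8.0, 9.0])
p_agg = np.array([0.7, 0.1, 0.1, 0.1])

c_full, c_agg = cvar(xi, p, beta), cvar(xi_agg, p_agg, beta)  # both 8.5
```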

Scenario generation
In the previous section, we showed that under mild conditions the value of a tail risk measure depends only on the distribution of outcomes in the risk region. In this section we demonstrate how this feature may be exploited for the purposes of scenario generation. We assume throughout this section that our scenario sets are constructed from some underlying probabilistic model from which we can draw independent identically distributed samples. We also assume we have a set R with R_X ⊆ R ⊂ Ξ which satisfies the aggregation condition for the problem under consideration, and for which we can easily test membership. The set R may be an exact risk region, that is R = R_X, or it could be a conservative risk region, that is R ⊃ R_X. To avoid repeating cumbersome terminology, we simply refer to R as a risk region, differentiating between the conservative and exact cases only where necessary. The complement R^c will be referred to as the aggregation region, for reasons which will become clear. Our general approach to scenario generation is to prioritize the construction of scenarios in the risk region R.
In Section 4.1 we present and analyse a scenario generation method which we call aggregation sampling. In Section 4.2 we briefly discuss alternative ways of exploiting risk regions for scenario generation.

Aggregation sampling
In aggregation sampling the user specifies a number of risk scenarios, that is, the number of scenarios used to represent the risk region. The algorithm then draws samples from the distribution, storing those samples which lie in the risk region and aggregating those in the aggregation region into a single point; in particular, the samples in the aggregation region are aggregated into their mean. The algorithm terminates when the specified number of risk scenarios has been reached. This is detailed in Algorithm 1.
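A direct transcription of this procedure follows (a sketch: the sampler and the membership test are user-supplied callbacks, and the one-dimensional risk region in the toy example is assumed for illustration rather than derived from a loss function):

```python
import numpy as np

def aggregation_sampling(sample, in_risk_region, n_risk, rng=None):
    """Draw samples until n_risk of them lie in the risk region R;
    samples falling in the aggregation region R^c are collapsed into
    their mean, which carries the aggregated probability mass.

    Returns (scenarios, probabilities, effective_sample_size)."""
    rng = rng if rng is not None else np.random.default_rng()
    risk, non_risk = [], []
    while len(risk) < n_risk:
        xi = sample(rng)
        (risk if in_risk_region(xi) else non_risk).append(xi)
    n_eff = len(risk) + len(non_risk)
    scen, probs = list(risk), [1.0 / n_eff] * len(risk)
    if non_risk:  # single aggregated scenario for R^c
        scen.append(np.mean(non_risk, axis=0))
        probs.append(len(non_risk) / n_eff)
    return np.array(scen), np.array(probs), n_eff

# toy run: standard normal scenarios with assumed risk region {xi >= 1}
scen, probs, n_eff = aggregation_sampling(
    lambda r: r.normal(), lambda xi: xi >= 1.0,
    n_risk=20, rng=np.random.default_rng(0))
```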
Aggregation sampling can be thought of as equivalent, for large sample sizes, to sampling from the aggregated random vector of Definition 3.14. Algorithm 1 takes as input a set R ⊂ Ξ satisfying the aggregation condition and the required number of risk scenarios N_R, and outputs a scenario set. Aggregation sampling is thus consistent with standard Monte Carlo sampling only if R satisfies the aggregation condition and E[ξ | ξ ∈ R^c] ∈ R^c. In Section 5, we provide conditions under which we can prove consistency. Note that it is possible that the algorithm terminates without sampling any scenario in the aggregation region. This could happen in cases where P(ξ ∈ R^c) is very small and the specified number of risk scenarios n is relatively small. In this case, to ensure that the algorithm terminates in a reasonable amount of time and that the scenario set which the algorithm outputs always has a consistent number of scenarios, we sample an arbitrary scenario in place of a scenario representing the aggregated scenarios. This situation is irrelevant for the asymptotic analysis of the algorithm.
We now study the performance of our aggregation sampling algorithm. Let a = P(ξ ∈ R^c) be the probability of the aggregation region, and n the desired number of risk scenarios. Let N(n) denote the effective sample size for aggregation sampling, that is, the number of samples drawn until the algorithm terminates. The aggregation sampling algorithm can be viewed as a sequence of Bernoulli trials, where a trial is a success if the corresponding sample lies in the aggregation region, and which terminates once we have reached n failures, that is, once we have sampled n scenarios from the risk region. We can therefore write down the distribution of N(n):

N(n) ∼ n + NB(n, a),

where NB(n, a) denotes a negative binomial random variable (the number of successes before the n-th failure), whose probability mass function is as follows:

P(NB(n, a) = k) = ((k + n − 1)! / (k! (n − 1)!)) a^k (1 − a)^n, k = 0, 1, 2, ....

The expected effective sample size of aggregation sampling is thus:

E[N(n)] = n + n a/(1 − a) = n/(1 − a).  (12)

The expected effective sample size can be thought of as the sample size required for Monte Carlo sampling to construct a scenario set with n scenarios in the risk region R. Thus, the greater the expected effective sample size, the greater the benefit of using aggregation sampling over standard Monte Carlo sampling. From (12) we can see that the expected effective sample size increases as the probability a of the aggregation region increases. Therefore, when constructing a risk region R ⊇ R_X for the purposes of scenario generation, it is important that R is as tight an approximation of the exact risk region R_X as possible, in order that a = P(ξ ∈ R^c) is as large as possible. The fact that the advantage of aggregation sampling over standard Monte Carlo sampling improves as the probability of the aggregation region increases also tells us that this methodology will potentially work better for problems with higher values of β and which are more constrained, due to relations (5) and (6).
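The formula (12) is easy to sanity-check by simulation (the parameter values below are arbitrary): for a = 0.9, each batch of n = 50 risk scenarios summarizes roughly 500 Monte Carlo draws.

```python
import numpy as np

# E[N(n)] = n / (1 - a): expected effective sample size, eq. (12)
n, a = 50, 0.9
expected = n / (1.0 - a)

# simulate: risk-region hits are "successes" with probability 1 - a, and
# numpy's negative_binomial counts failures (aggregation-region hits)
# drawn before the n-th success
rng = np.random.default_rng(1)
sims = n + rng.negative_binomial(n, 1.0 - a, size=200_000)
simulated = sims.mean()  # close to 500
```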

Alternative approaches
Aggregation reduction. In aggregation reduction one draws a fixed number of samples n from the distribution and then aggregates all those in the aggregation region. As opposed to aggregation sampling, this method uses a fixed number of samples, but constructs a scenario set with a random number of scenarios. Let R(n) denote the number of scenarios which are aggregated in the aggregation reduction method. Aggregation reduction can similarly be viewed as a sequence of n Bernoulli trials, where success and failure are defined as above. The number of aggregated scenarios in aggregation reduction is therefore distributed as follows:

R(n) ∼ B(n, a),

where B(n, a) denotes a binomial random variable, and so we have

E[R(n)] = n a.  (13)

Again, the performance of this method, in terms of the expected number of aggregated scenarios, improves as the probability of the aggregation region increases.
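The reduction variant is a sketch away from the sampling variant; here with equiprobable samples and an assumed membership test, where on average n·a of the n samples are removed:

```python
import numpy as np

def aggregation_reduction(scenarios, in_risk_region):
    """Keep the risk-region samples of a fixed sample of size n and
    collapse all aggregation-region samples into their mean; the number
    aggregated is Binomial(n, a) with a = P(xi in R^c)."""
    scenarios = np.asarray(scenarios, dtype=float)
    mask = np.array([in_risk_region(s) for s in scenarios])
    scen = list(scenarios[mask])
    probs = [1.0 / len(scenarios)] * int(mask.sum())
    if (~mask).any():
        scen.append(scenarios[~mask].mean(axis=0))
        probs.append((~mask).sum() / len(scenarios))
    return np.array(scen), np.array(probs)

# four equiprobable draws, assumed risk region {xi >= 1}: the two
# non-risk draws 0.0 and 0.5 collapse to 0.25 with mass 0.5
scen, probs = aggregation_reduction([0.0, 0.5, 2.0, 3.0],
                                    lambda s: s >= 1.0)
```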

Alternative sampling methods
The above algorithms and analyses assume that the samples of ξ are independently and identically distributed. However, in principle the algorithms will work for any unbiased sequence of samples. This opens up the possibility of enhancing the scenario aggregation and reduction algorithms by using them in conjunction with variance reduction techniques such as importance sampling or antithetic sampling [20]. The formulae (12) and (13) will still hold, but a will be the probability of a sample occurring in the aggregation region rather than the actual probability of the aggregation region itself.

Alternative representations of the aggregation region
The above algorithms can also be generalized in how they represent the non-risk region. Because aggregation sampling and aggregation reduction represent the non-risk region with only a single scenario, they do not in general preserve the overall expectation of the loss function, or any statistic of the loss function other than the value of the tail risk measure. These algorithms should therefore generally only be used for problems which involve only tail risk measures. However, if the loss function is affine (in the sense of Corollary 3.15), then collapsing all points in the non-risk region to their conditional expectation preserves the overall expectation.
If the expectation or any other statistic of the loss function appears in the optimization problem, then one could represent the non-risk region with many scenarios. For example, instead of aggregating all scenarios in the non-risk region into a single point, we could apply a clustering algorithm to them, such as k-means. The ideal allocation of points between the risk and non-risk regions will be problem-dependent and is beyond the scope of this paper.
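A sketch of this variant using plain Lloyd's k-means (our implementation; a library routine such as scikit-learn's KMeans would serve equally well): the cluster centres become the scenarios representing the non-risk region, and the cluster shares become their probabilities.

```python
import numpy as np

def kmeans_scenarios(points, k, iters=50, seed=0):
    """Represent non-risk-region samples by k weighted scenarios:
    cluster centres from Lloyd's algorithm, with cluster shares as
    probabilities."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centres = pts[rng.choice(len(pts), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centre, then recompute centres
        d = np.linalg.norm(pts[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = pts[labels == j].mean(axis=0)
    probs = np.bincount(labels, minlength=k) / len(pts)
    return centres, probs

# two tight clusters of non-risk samples collapse to two weighted scenarios
pts = np.vstack([np.zeros((10, 2)), np.full((10, 2), 5.0)])
centres, probs = kmeans_scenarios(pts, k=2)
```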

Consistency of aggregation sampling
The reason that aggregation sampling and aggregation reduction work is that, for large sample sizes, they are equivalent to sampling from the aggregated random vector, and if the aggregation condition holds then the aggregated random vector yields the same optimization problem as the original random vector. We only prove consistency for aggregation sampling and not aggregation reduction, as the proofs are very similar; essentially, the only difference is that aggregation sampling has the additional complication of terminating after a random number of samples.

We suppose in this section that we have a sequence of independent identically distributed (i.i.d.) random vectors ξ_1, ξ_2, ... with the same distribution as ξ, defined on the product probability space Ω^∞.

Uniform convergence of empirical β-quantiles
The i.i.d. sequence of random vectors ξ_1, ξ_2, ... can be used to estimate the distribution and quantile functions of ξ. We introduce the additional short-hand for the empirical distribution and quantile functions:

F_{n,x}(θ) := (1/n) Σ_{i=1}^n 1{f(x, ξ_i) ≤ θ}, F_{n,x}^{-1}(β) := inf{θ ∈ R : F_{n,x}(θ) ≥ β}.

Note that these are random-valued functions on the probability space Ω^∞. It is immediate from the strong law of large numbers that for all x ∈ X and θ ∈ R we have F_{n,x}(θ) → F_x(θ) with probability 1. If in addition F_x is strictly increasing and continuous at F_x^{-1}(β), then we also have F_{n,x}^{-1}(β) → F_x^{-1}(β) with probability 1. The following result extends this pointwise convergence to a convergence which is uniform with respect to x ∈ X. Theorem 5.1. Suppose the following hold: (i) for each x ∈ X, F_x is strictly increasing and continuous at F_x^{-1}(β); (ii) for all x ∈ X, with probability 1 the mapping x → f(x, ξ) is continuous at x; (iii) X is compact. Then, with probability 1,

lim_{n→∞} sup_{x∈X} |F_{n,x}^{-1}(β) − F_x^{-1}(β)| = 0.

The proof of this result relies on various continuity properties of the distribution and quantile functions which are provided in Appendix A. Some elements of the proof below have been adapted from [34, Theorem 7.48], a result which concerns the uniform convergence of expectation functions.
Proof. Fix ε_0 > 0 and x ∈ X. Since F_x is right-continuous with left limits, it has only countably many discontinuities, and so there exists 0 < ε < ε_0 such that F_x is continuous at F_x^{-1}(β) − ε. By Corollary A.2 in Appendix A the mapping x → F_x(F_x^{-1}(β) − ε) is continuous at x. Applying Lemma A.4 in Appendix A, there exists a neighborhood W of x such that with probability 1 lim sup In addition, by the strong law of large numbers, with probability 1 Note that for all x ∈ W ∩ X Thus, with probability 1 lim sup Similarly, we can choose W such that with probability 1 lim sup Using (14), (16) and (17) we can conclude that for all x ∈ W ∩ X with probability 1 Hence, we have that for all x ∈ W ∩ X, with probability 1, there exists N such that for all n > N and so we can conclude that lim sup Also, by Proposition A.3 in Appendix A the function x → F_x^{-1}(β) is continuous, and so the neighborhood W can also be chosen so that sup and so combining (18) and (19) we have that with probability 1 Finally, since X is compact, there exists a finite number of points x_1, ..., x_m ∈ X with corresponding neighborhoods W_1, ..., W_m covering X, such that with probability 1 the following holds: that is, with probability 1, lim sup Since the choice of ε_0 was arbitrary, the result follows.
To facilitate the statement and proofs of the following results, we introduce index sets which keep track of the indices of the samples lying in the risk and aggregation regions:

I_R(n) := {1 ≤ i ≤ n : ξ_i ∈ R}, I_{R^c}(n) := {1 ≤ i ≤ n : ξ_i ∈ R^c}.
The following corollary shows that we have uniform convergence of the β-quantiles when sampling from the aggregated random vector ψ_R(ξ). In order to state and prove this result, we introduce the following additional notation for the distribution and quantile functions of f(x, ψ_R(ξ)), and their empirical counterparts for the sample ψ_R(ξ_1), ψ_R(ξ_2), ...: Like F_{n,x} and F_{n,x}^{-1}, the final two functions are random-valued functions on the probability space Ω^∞.
Corollary 5.2. Let R_X ⊆ R ⊂ R^d be a set satisfying the aggregation condition, and suppose that conditions (i)-(iii) from Theorem 5.1 hold and in addition: Then with probability 1 Proof. Since R satisfies the aggregation condition and condition (a) holds, by Theorem 3.9 we have that F̃_x^{-1}(β) = F_x^{-1}(β) for all x ∈ X. Therefore, to prove this result we will apply Theorem 5.1 to f(x, ψ_R(ξ)), and so must show that conditions (i)-(iii) from Theorem 5.1 also hold for f(x, ψ_R(ξ)). Condition (iii) holds immediately, and condition (ii) holds for f(x, ψ_R(ξ)) since x → f(x, ξ) is continuous with probability 1. It remains to show that F̃_x is continuous and strictly increasing at F_x^{-1}(β) for all x ∈ X. Fix x ∈ X. Since F_x(θ) and F̃_x(θ) coincide for θ ≥ F_x^{-1}(β) and F_x is strictly increasing at F_x^{-1}(β), it follows that F̃_x is also strictly increasing at F_x^{-1}(β). Finally, to show that F̃_x is continuous at F_x^{-1}(β), it suffices to show that it is left-continuous, since all distribution functions are right-continuous. Now, since by assumption F_x is continuous at F_x^{-1}(β), we have lim_{ε↓0} F_x(F_x^{-1}(β)) − F_x(F_x^{-1}(β) − ε) = 0, and so we must also have lim_{ε↓0} F̃_x(F_x^{-1}(β)) − F̃_x(F_x^{-1}(β) − ε) = 0, as required.
In the next subsection this result will be used to show that any point in the interior of the non-risk region R c will, with probability 1, be in the non-risk region of the sampled scenario set as the sample size grows large.

Equivalence of aggregation sampling with sampling from the aggregated random vector
The main obstacle in showing that aggregation sampling is equivalent to sampling from the aggregated random vector is to show that the aggregated scenario in the non-risk region converges almost surely to the conditional expectation of the non-risk region as the number of specified risk scenarios tends to infinity. Recall from Section 4 that N(n) denotes the effective sample size in aggregation sampling when we require n risk scenarios, and is distributed as n + NB(n, a), where a is the probability of the non-risk region. The purpose of the next lemma is to show that as n → ∞ the number of samples drawn from the non-risk region almost surely tends to infinity.
Lemma 5.3. Suppose M(n) ∼ NB(n, p), where 0 < p < 1. Then, with probability 1, lim_{n→∞} M(n) = ∞.

Proof. First note that the complement of {lim_{n→∞} M(n) = ∞} is the union over k ∈ N of the events {M(n) ≤ k infinitely often}. Hence, to show that P({lim_{n→∞} M(n) = ∞}) = 1 it is enough to show that for each k ∈ N we have P(M(n) ≤ k infinitely often) = 0. Now, fix k ∈ N. Then for all n ∈ N we have P(M(n) = j) = ((n+j−1) choose j) (1−p)^n p^j, and in particular P(M(n) ≤ k) ≤ (k+1) ((n+k) choose k) (1−p)^n. For large enough n we have ((n+k) choose k)^{1/n} (1−p) ≤ c < 1 for some constant c, hence Σ_{n=1}^∞ P(M(n) ≤ k) < ∞. The result (20) now holds by the first Borel-Cantelli lemma [6, Section 4].
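The almost-sure growth of M(n) is easy to check empirically. Below is a quick illustrative sketch, using the parametrization in which M(n) counts draws of an event of probability p made before the n-th draw of the complementary event, so that E[M(n)] = np/(1 − p).

```python
import random

def neg_binomial(n, p, rng):
    """Sample M ~ NB(n, p): the number of draws of an event with
    probability p observed before the n-th draw of its complement."""
    hits, misses = 0, 0
    while misses < n:
        if rng.random() < p:
            hits += 1
        else:
            misses += 1
    return hits

rng = random.Random(42)
# Sample means of M(n) for increasing n: they grow like n * p / (1 - p),
# which equals n when p = 0.5.
means = {n: sum(neg_binomial(n, 0.5, rng) for _ in range(2000)) / 2000
         for n in (10, 50)}
```

The sample means track np/(1 − p) closely, consistent with M(n) → ∞ as n → ∞.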
The next corollary shows that the strong law of large numbers still applies to the conditional expectation over the non-risk region in aggregation sampling, despite the sample size being a random quantity.
Corollary 5.4. Suppose E[‖ξ‖] < +∞ and P(ξ ∈ R^c) > 0. Then, with probability 1, the average of the samples falling in the non-risk region converges to E[ξ | ξ ∈ R^c] as n → ∞.

Proof. Define the appropriate measurable subsets Ω_1, Ω_2, Ω_3 of Ω^∞. By the strong law of large numbers, Ω_2 and Ω_3 have probability 1. Since N(n) − n ∼ NB(n, a), where a = P(ξ ∈ R^c), Ω_1 has probability 1 by Lemma 5.3. Therefore Ω_1 ∩ Ω_2 ∩ Ω_3 has probability 1, and so it is enough to show that the convergence holds for any element of this set, which follows by writing the average over the non-risk samples as a ratio of quantities to which the strong law applies.

To show that aggregation sampling yields solutions consistent with the underlying random vector ξ, we show that, with probability 1, for n large enough it is equivalent to sampling from the aggregated random vector ψ_R(ξ), as defined in Definition 3.14. If the region R satisfies the aggregation condition, then Theorem 3.9 tells us that ρ_β(f(x, ψ_R(ξ))) = ρ_β(f(x, ξ)) for all x ∈ X. Hence, if sampling is consistent for the risk measure ρ_β, then aggregation sampling is also consistent.
Noting that |I_{R^c}(N(n))| = N(n) − n, we introduce the following notation for the empirical distribution and quantile functions of the loss function with the scenario set constructed by aggregation sampling with n risk scenarios. Note that these latter functions will depend on the sample ξ_1, …, ξ_{N(n)}.
Theorem 5.5. Let R_X ⊆ R ⊂ R^d be a set satisfying the aggregation condition. Suppose that conditions (i)-(v) from Theorem 5.1 and Corollary 5.2 hold, and in addition that condition (vi) holds. Then, with probability 1, for all u ≥ β the tail quantiles obtained by aggregation sampling coincide with those obtained by sampling from ψ_R(ξ).

Proof. We actually prove a slightly stronger result: with probability 1, there exists N > 0 such that for all n > N, x ∈ X and u ≥ β, the empirical u-quantiles of the two schemes coincide. By Lemma 3.6, it therefore suffices to show that (22) holds with probability 1. Since F_x is strictly increasing at F_x^{-1}(β) for all x ∈ X, and X is compact, there exists δ > 0 bounding the corresponding infimum over X away from zero. By Corollary 5.4, the compactness of X and the continuity assumptions, and also by Corollary 5.2, the relevant lim sup bound holds with probability 1. In particular, with probability 1 there exists N such that the required inequality holds for all n > N, in which case, for n > N, the claim follows from (23) and (26).
We can similarly show that the corresponding lim sup is strictly positive with probability 1. Hence (22) holds with probability 1 and the proof is complete.

Note that although the continuity conditions (ii), (v) and (vi) look complicated, the loss function f : X × Ξ → R will typically be continuous everywhere, and so these conditions will be satisfied automatically.

A conservative risk region for monotonic loss functions
In order to use risk regions for scenario generation, we need a characterization of the risk region which conveniently allows us to test membership. In general this is difficult, as the risk region depends on the loss function, the distribution and the problem constraints. Therefore, as a proof-of-concept, in the following two sections we derive risk regions for two classes of problems. In this section we propose a conservative risk region for problems which have monotonic loss functions.

Definition 6.1 (Monotonic loss function). A loss function f : X × Ξ → R is monotonic increasing if for all x ∈ X and ξ, ξ′ ∈ Ξ such that ξ < ξ′ we have f(x, ξ) < f(x, ξ′). Similarly, we say it is monotonic decreasing if for all x ∈ X and ξ, ξ′ ∈ Ξ such that ξ < ξ′ we have f(x, ξ) > f(x, ξ′).

Monotonic loss functions occur naturally in stochastic linear programming. The following result presents a class of loss functions which arise in the context of network design, and gives conditions under which they are monotonic.

Proposition 6.2. Suppose X ⊆ R^k_+, Ξ ⊆ R^d_+ and the loss function Q(x, ξ) is defined to be the optimal value of the linear program (27)-(32), whose constraints include W y + z ≥ ξ (28) and N y = 0 (31), where W, B, T, V, N are matrices and q, u, b are vectors of compatible dimensions. Then Q(x, ξ) is monotonic increasing under the following conditions: 1. q, u > 0, 2. W ≥ 0.

Proof. Fix x ∈ X. The problem is always feasible, since y = 0 and z = ξ is a feasible solution. Since x ≥ 0 and q, u > 0, the problem (27)-(32) is bounded below by zero. In addition, when ξ ≥ 0 with at least one component strictly greater than zero, the optimal solution (y*, z*) must contain at least one strictly positive element due to constraint (28) and the fact that W ≥ 0, and so in this case the optimal value is strictly positive. Because the problem is both bounded below and feasible, strong duality applies, and so Q(x, ξ) is also equal to the optimal value of the dual problem (33)-(36), in which the dual variables satisfy π, ν, η ≥ 0.
Let ξ, ξ′ ∈ Ξ be such that ξ < ξ′. In the first case suppose that ξ ≠ 0, and let (π, ν, η, λ) be the optimal dual variables of (33)-(36) for ξ. As discussed above, this means that Q(x, ξ) > 0, and given that x^T V^T ν + b^T η ≥ 0, at least one component of π must be strictly positive in order for the objective of the dual to be strictly positive. Now, (π, ν, η, λ) is also a feasible solution to the dual problem for ξ′, and since ξ < ξ′ and π ≥ 0 has a strictly positive component, the dual objective at this point is strictly larger for ξ′ than for ξ, so Q(x, ξ′) > Q(x, ξ). In the second case suppose that ξ = 0. Then y = 0, z = 0 is a feasible solution to the primal problem (27)-(32) with objective value zero; since the objective is bounded below by zero, this solution is also optimal, and so Q(x, ξ) = 0. Since ξ′ > 0 we have Q(x, ξ′) > 0, and so Q(x, ξ) < Q(x, ξ′). Hence Q(x, ξ) is monotonic increasing as required.
This recourse function arises in stochastic network design, and the problem formulation in the previous proposition was adapted from a model in the paper [31]. In this type of problem, we have a network consisting of suppliers, processing units and customers, and decisions must be made relating to opening facilities and to the capacities of nodes and arcs. The problem which defines the recourse function Q(x, ξ) depends on the capacity and opening decisions x of the first stage, and on the demand of the customers ξ. The aim of the problem is to construct a flow of products y which minimizes the transportation costs of satisfying customer demand, plus penalties for any unsatisfied demand z.
For a problem with a monotonic loss function, the following result defines a conservative risk region.
Proposition 6.3. Suppose the loss function f : X × Ξ → R is monotonic increasing. Then the following set is a conservative risk region: R_1 = {ξ̄ ∈ Ξ : P(ξ > ξ̄) ≤ 1 − β}. Similarly, if the loss function is monotonic decreasing, then the following set is a conservative risk region: R_2 = {ξ̄ ∈ Ξ : P(ξ < ξ̄) ≤ 1 − β}.

Proof. Suppose f(x, ξ) is monotonic increasing and let ξ̄ ∈ R_X. Then f(x, ξ̄) ≥ F_x^{-1}(β) for some x ∈ X, and by monotonicity P(ξ > ξ̄) ≤ P(f(x, ξ) > f(x, ξ̄)) ≤ 1 − β, and so ξ̄ ∈ R_1 as required. The set R_2 can similarly be shown to be a conservative risk region when f(x, ξ) is monotonic decreasing.
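Membership of such a conservative risk region can be tested empirically. The sketch below assumes the region for a monotonic decreasing loss takes the form {ξ̄ : P(ξ < ξ̄) ≤ 1 − β} with "<" read as strict componentwise dominance, and replaces the probability by an empirical estimate from a reference sample, as is done in the numerical section; both the assumed region form and all names here are illustrative.

```python
import random

def in_conservative_risk_region(point, reference_sample, beta):
    """Empirical membership test for a monotonic *decreasing* loss.

    Assumed region form: a point is in the conservative risk region when
    P(xi < point) <= 1 - beta, where '<' means strict componentwise
    dominance; the probability is replaced by its empirical estimate.
    """
    dominated = sum(1 for xi in reference_sample
                    if all(c < p for c, p in zip(xi, point)))
    return dominated / len(reference_sample) <= 1.0 - beta

# Reference sample: 5,000 draws from a standard normal in d = 2.
rng = random.Random(1)
sample = [[rng.gauss(0.0, 1.0) for _ in range(2)] for _ in range(5000)]
```

For a decreasing loss (e.g. the portfolio loss −x^T ξ with x ≥ 0), points deep in the lower tail dominate almost no mass and so lie in the risk region, while points in the upper tail do not.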

An exact risk region for the portfolio selection problem
In this section, we characterize exactly the risk region of the portfolio selection problem when the distribution of asset returns belongs to a certain class of distributions.
In the portfolio selection problem one aims to choose a portfolio of financial assets with uncertain returns. For i = 1, …, d, let x_i denote the amount to invest in asset i, and ξ_i the random return of asset i. The loss function in this problem is the negative total return, that is, f(x, ξ) = −x^T ξ. Feasible portfolios may be subject to constraints such as no short-selling (x ≥ 0), total investment (Σ_{i=1}^d x_i = 1) and quotas on certain stocks (x ≤ c).
The following corollary gives sufficient conditions for the risk region to satisfy the aggregation condition, and for aggregation sampling to be consistent.

Corollary 7.1. Suppose that R ⊇ R_X and that the following conditions hold: 1. ξ is continuous with support R^d; 2. there exist x_1, x_2 ∈ X which are linearly independent; 3. 0 ∉ X; 4. X is compact.
Then R satisfies the aggregation condition, and aggregation sampling with respect to R is consistent in the sense of Theorem 5.5.
Proof. To prove that R satisfies the aggregation condition, it is enough to show that R_X satisfies the aggregation condition. We prove this by showing that all the conditions of Proposition 3.12 hold. Note that x ↦ −x^T ξ is continuous, so condition (i) of Proposition 3.12 holds immediately.
For each x ∈ X, the interiors of the corresponding risk region and non-risk region are open half-spaces. Fix x ∈ X. Then either x is linearly independent of x_1 or it is linearly independent of x_2; assume it is linearly independent of x_1. Now, int(R_x) and int(R_{x_1}) are non-parallel open half-spaces and so both are non-empty, and since we also have Ξ = R^d, condition (ii) of Proposition 3.12 is satisfied.
Since R_{x_1} and R_{x_2} are non-parallel half-spaces, their union R_{x_1} ∪ R_{x_2} is connected. Similarly, for any x ∈ X, R_x must be non-parallel with either R_{x_1} or R_{x_2}, and so R_x ∪ R_{x_1} ∪ R_{x_2} must also be connected.

Numerical tests
In this section, we test the performance of the methodology developed in this paper. For the portfolio selection problem, when X ⊆ R^d_+ \ {0} the loss function f(x, ξ) = −x^T ξ is monotonic decreasing. We therefore use this problem throughout this section to test both the conservative risk region presented in Section 6 and the exact risk region presented in Section 7.
Testing whether a point belongs to the exact non-risk region in (41) requires the projection of a point onto a convex cone. This can be done by solving a small linear complementarity problem; see [36] or our follow-up paper [15] for more details. We solve linear complementarity problems using code from the Siconos numerics library [1]. Testing whether a point ξ̄ ∈ Ξ belongs to the conservative risk region in (38) involves evaluating the probability P(ξ < ξ̄). Since calculating this probability exactly involves evaluating a multidimensional integral, we approximate it by taking a large sample from ξ and using the empirical distribution function of this sample. Repeatedly testing membership of both types of risk region is therefore computationally intensive; ways of mitigating this issue are discussed in our follow-up paper [15]. These membership tests and the aggregation sampling algorithm have been implemented and made available as a package for the Julia programming language [14]. All experiments were conducted on a laptop with an Intel Core i7-720QM CPU at 1.6 GHz.

Probability of risk regions
As discussed in Section 4.1, the performance of the aggregation sampling algorithm relative to standard Monte Carlo sampling improves as the probability of the aggregation region increases. In this first experiment we observe the behavior of this probability over a range of dimensions.
For this experiment, we suppose that K = conic(X) = R^d_+, and that the random vector follows a multivariate Normal distribution N(0, Λ(ρ)), where the covariance matrix Λ(ρ), for 0 ≤ ρ < 1, has unit variances on the diagonal and constant correlation ρ off the diagonal. In particular, we calculate the probability for the case ρ = 0, that is, where the asset returns are independently distributed, and the case ρ = 0.3, that is, where the asset returns are positively correlated.
The probabilities of the non-risk regions are estimated by sampling and testing membership for 20000 points.
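Equicorrelated normal draws of this kind can be generated without factorizing the covariance: if Λ(ρ) has unit variances and all off-diagonal entries equal to ρ ≥ 0, then X_i = √ρ·Z_0 + √(1−ρ)·Z_i, with Z_0, …, Z_d independent standard normals, has covariance exactly Λ(ρ). Below is a sketch of the sampling-based probability estimate; the half-space used is only an illustrative stand-in for the actual risk-region membership test.

```python
import math
import random

def equicorrelated_normal(d, rho, rng):
    """One draw from N(0, Lambda(rho)): unit variances, pairwise corr rho."""
    z0 = rng.gauss(0.0, 1.0)
    return [math.sqrt(rho) * z0 + math.sqrt(1.0 - rho) * rng.gauss(0.0, 1.0)
            for _ in range(d)]

def estimate_region_probability(in_region, d, rho, n_points, seed=0):
    """Monte Carlo estimate of P(xi in region) from n_points draws."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_points)
               if in_region(equicorrelated_normal(d, rho, rng)))
    return hits / n_points

# Illustrative region: a coordinate half-space, whose probability is 1/2.
p = estimate_region_probability(lambda xi: xi[0] <= 0.0, 5, 0.3, 20000)
```

With 20,000 points the standard error of such an estimate is at most about 0.0035, which is adequate for the plots in Figure 1.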
The results of this experiment are plotted in Figure 1. Figures 1a and 1b plot the probabilities of the conservative and exact aggregation regions. To aid the reader's intuition, we have also plotted reduced scenario sets in two dimensions using the conservative and exact risk regions in Figures 1c and 1d, for ρ = 0.3 and β = 0.95.
The figures show that not only is the probability of the conservative aggregation region smaller than that of the exact aggregation region, but it also decays much more quickly. This emphasizes the importance of using an exact risk region for aggregation sampling where possible. Interestingly, the probability of the aggregation regions for the correlated asset returns is greater, and decays more slowly, than that for the independent asset returns. This tells us that, in addition to the loss function, the performance of our methodology depends strongly on the distribution of the random vector. Although the probability of the conservative aggregation region decays fairly rapidly, it remains non-negligible for random vectors of moderate dimension, around 15, for the correlated asset returns. For exact aggregation regions, the probability remains high for the correlated asset returns up to dimension 40.

Performance of aggregation sampling
We now test the performance of the aggregation sampling algorithm using conservative and exact risk regions against standard Monte Carlo sampling in terms of the quality of the solutions each method yields.
Experimental set-up. We use the following problem, referred to as (P): select a portfolio which minimizes the conditional value-at-risk of the one-month return, subject to a minimum expected return of t and no short-selling. The asset returns follow a multivariate Normal distribution N(µ, Σ). We use two distributions: one of dimension 5 and another of dimension 10. These distributions have been fitted from monthly return data for randomly selected companies in the FTSE 100 index, and have been made available online [13] in an HDF5 file, where they can be accessed using the keys "normal/dim = 5/dist 1" and "normal/dim = 10/dist 1". We use the target expected one-month return t = 0.005, which is feasible for the constructed problems.
This problem has been chosen so that we can solve it exactly for Normally distributed returns, and so calculate the optimality gap of solutions found by solving scenario-based approximations. The following formula is easily verified by recalling that for continuous probability distributions, the β-CVaR is just the conditional expectation of the random variable above the β-quantile (see [29] for instance): β-CVaR(−x^T ξ) = −x^T µ + √(x^T Σ x) · φ(Φ^{-1}(β)) / (1 − β), (42) where φ and Φ denote the density and distribution functions of the standard Normal distribution. The problem (P) can therefore be solved exactly using an interior point algorithm, and in our experiments we use the software package IPOPT [38] to do this. Denote by {(ξ_s, p_s)}_{s=1}^n a scenario set of size n, where ξ_s denotes the vector of asset returns in scenario s, and p_s the corresponding probability. The scenario-based approximation to (P) using this scenario set is the linear program which minimizes α + (1/(1 − β)) Σ_{s=1}^n p_s y_s over x, α and y, subject to y_s ≥ −ξ_s^T x − α for s = 1, …, n, the expected return and budget constraints on x, and x, y ≥ 0.
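For Normally distributed returns, the β-CVaR of the loss −x^T ξ has the closed form −x^T µ + √(x^T Σ x)·φ(Φ^{-1}(β))/(1 − β), where φ and Φ are the standard Normal density and distribution functions. A sketch evaluating it with only the standard library (Φ via the error function, Φ^{-1} by bisection); this is illustrative, not the paper's implementation:

```python
import math

def std_normal_cdf(z):
    """Phi(z) via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def std_normal_quantile(beta, lo=-10.0, hi=10.0):
    """Phi^{-1}(beta) by bisection; adequate for beta away from 0 and 1."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if std_normal_cdf(mid) < beta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def normal_portfolio_cvar(x, mu, sigma, beta):
    """beta-CVaR of the loss -x^T xi for xi ~ N(mu, Sigma):
       -x^T mu + sqrt(x^T Sigma x) * phi(Phi^{-1}(beta)) / (1 - beta)."""
    d = len(x)
    mean_loss = -sum(x[i] * mu[i] for i in range(d))
    var = sum(x[i] * sigma[i][j] * x[j] for i in range(d) for j in range(d))
    z = std_normal_quantile(beta)
    phi_z = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return mean_loss + math.sqrt(var) * phi_z / (1.0 - beta)
```

For a single standard-normal asset and β = 0.95 this gives approximately 2.063, the familiar expected shortfall of the standard Normal.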
See [29] for more details on how β-CVaR is linearized for discrete random vectors in this way. These scenario-based problems are modelled using JuMP [10] and solved using Gurobi 7.5 [18].
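For a fixed portfolio, the value of this linearization over the auxiliary variables can be checked directly: for a discrete loss distribution, β-CVaR equals min over α of α + (1/(1 − β)) Σ_s p_s max(ℓ_s − α, 0), and since this objective is piecewise linear and convex, the minimum is attained at one of the scenario losses. A small illustrative sketch:

```python
def discrete_cvar(losses, probs, beta):
    """beta-CVaR of a discrete loss via the Rockafellar-Uryasev formula:
    min over alpha of alpha + (1/(1-beta)) * sum_s p_s * max(l_s - alpha, 0).
    The objective is piecewise linear and convex, so the minimum is
    attained at one of the scenario losses."""
    def objective(alpha):
        return alpha + sum(p * max(l - alpha, 0.0)
                           for l, p in zip(losses, probs)) / (1.0 - beta)
    return min(objective(alpha) for alpha in losses)
```

For instance, with four equiprobable losses [0.1, 0.0, -0.1, -0.2] and β = 0.75, the worst 25% of the mass is the loss 0.1, so the 0.75-CVaR is 0.1.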
We are interested in the quality and stability of the solutions yielded by our scenario generation method as compared with standard Monte Carlo sampling for a given scenario set size. To this end, in each experiment, for a range of scenario set sizes, we construct 100 scenario sets using sampling and aggregation sampling with conservative and exact risk regions, solve the resulting problems, and calculate the optimality gaps of the solutions these yield.
Denote by z* the optimal value of problem (P), and by x̂ a solution found by solving a scenario-based approximation. Then the optimality gap of x̂ is given by β-CVaR(−x̂^T ξ) − z*, where β-CVaR(−x̂^T ξ) is calculated using (42).

Results
Figure 2 presents the results of these stability tests for two different problems. In the first problem we have d = 5 and β = 0.95; in the second we have d = 10 and β = 0.99. For each scenario set size and scenario generation method we have drawn a box plot of the optimality gaps of the 100 constructed scenario sets. The legend of each plot gives the estimated probability of the aggregation region, a, and the true optimal value z* is included in the title. Note that Cons. Agg. Sampling and Exact Agg. Sampling are abbreviations for, respectively, aggregation sampling using the conservative risk region and aggregation sampling using the exact risk region.
In both cases, both aggregation sampling methods outperform standard Monte Carlo sampling for all scenario set sizes in terms of the size and variability of the calculated optimality gaps. This is because with aggregation sampling we are effectively sampling more scenarios than with standard Monte Carlo sampling. Aggregation sampling with exact risk regions also significantly outperforms aggregation sampling with conservative risk regions. This improved performance is to be expected, given that the probability of the exact non-risk region is greater than that of the conservative non-risk region, which gives a greater effective sample size.

Conclusions
In this paper we have demonstrated that for stochastic programs which use a tail risk measure, a significant portion of the support of the random vector in the problem may not participate in the calculation of that tail risk measure, whatever feasible decision is used. As a consequence, for scenario-based problems, if we concentrate our scenarios in the region of the distribution which is important to the problem, the risk region, we can represent the uncertainty in our problem in a more parsimonious way, thus reducing the computational burden of solving it. We have proposed and analyzed two specific methods of scenario generation using risk regions: aggregation sampling and aggregation reduction. Both of these methods were shown to become more effective, in comparison with standard Monte Carlo sampling, as the probability of the non-risk region increases: in essence, the higher this probability, the more redundancy there is in the original distribution. The application of our methodology relies on having a convenient characterization of a risk region. For portfolio selection problems we derived the exact risk region when returns have an elliptical distribution. However, a characterization of the exact risk region will generally not be possible. Nevertheless, it is sufficient to have a conservative risk region, and for stochastic programs with monotonic loss functions, a wide problem class which includes some network design problems, we were able to derive such a region.
The effectiveness of our methodology depends on the probability of the aggregation region, that is, the exact or conservative non-risk region used in our scenario generation algorithms. We observed, for both stochastic programs with monotonic loss functions and portfolio selection problems, that this probability tends to zero as the dimension of the random vector in the problem increases. However, in some circumstances this effect is mitigated: we observed that small positive correlations slowed this convergence for the portfolio selection problem.
We tested the performance of our aggregation sampling algorithm for portfolio selection problems using both the exact non-risk region and the conservative risk region for monotonic loss functions. This demonstrated a significant improvement over standard Monte Carlo sampling, particularly when an exact non-risk region was used.
The methodology has much potential. For some small to moderately-sized network design problems this methodology could yield much better solutions. In particular, the methodology is agnostic to the presence of integer variables, and so could be used to solve difficult mixed-integer programs.
In our follow-up paper [15] we demonstrate that our methodology may be applied to more difficult and realistic portfolio selection problems, such as those involving integer variables, and those for which the asset returns are no longer elliptically distributed. In the same paper we also discuss some of the technical issues involved in applying the method, such as finding the conic hull of the feasible region and projecting points onto it. We also investigate the use of artificial constraints as a way of making our methodology more effective.
Proposition A.3. Suppose for some x ∈ X and z = F_x^{-1}(β) that the conditions of Corollary A.2 hold, and in addition that F_x is strictly increasing at F_x^{-1}(β), that is, for all ε > 0, F_x(F_x^{-1}(β) − ε) < F_x(F_x^{-1}(β)) < F_x(F_x^{-1}(β) + ε). Then x ↦ F_x^{-1}(β) is continuous at x.
Proof. Assume x ↦ F_x^{-1}(β) is not continuous at x. This means there exists ε > 0 such that for every neighborhood W of x there exists x′ ∈ W such that |F_{x′}^{-1}(β) − F_x^{-1}(β)| > ε.
By the continuity of x ↦ F_x(F_x^{-1}(β)) at x, there exists a neighborhood W of x for which (43) holds. But for the x′ identified above, given that F_{x′} is non-decreasing and by the definition of γ, we obtain a bound which contradicts (43).
Recall that, for a sequence of i.i.d. random vectors ξ_1, ξ_2, … with the same distribution as ξ, the sampled distribution function is defined by F̂_{N,x}(z) = (1/N) Σ_{i=1}^N 1{f(x, ξ_i) ≤ z}. The final result concerns the continuity of the sampled distribution function.
Lemma A.4. Suppose that for g : X × Ξ → R and x ∈ X the conditions from Proposition A.1 hold. Then for all ε > 0 there exists a neighborhood W of x such that, with probability 1, the stated bound holds. In particular, if x ↦ f(x, ξ) is continuous at x with probability 1 and F_x is continuous at z ∈ R, then for all ε > 0 there exists a neighborhood W of x such that, with probability 1, (44) holds.

Proof. Note first that the quantity δ_k(ξ) is Lebesgue measurable (see [34, Theorem 7.37] for instance). By assumption (i) of Proposition A.1, the mapping x ↦ g(x, ξ) is continuous at x with probability 1, from which the first statement follows. The result (44) then follows immediately as the special case g(x, ξ) = 1{f(x, ξ) ≤ z}.

B Convex cone results
The results in this appendix relate to the characterization of the non-risk region for the portfolio selection problem with elliptically distributed returns.
The following two propositions establish properties of projections onto convex cones which are required in the proofs of the main results of this appendix.

Figure 1: Probabilities of conservative and exact aggregation regions

Figure 2: Error bar plot of optimality gaps yielded by sampling and aggregation sampling