On convex lower-level black-box constraints in bilevel optimization with an application to gas market models with chance constraints

Bilevel optimization is an increasingly important tool for modeling hierarchical decision making. However, the ability to model such settings makes bilevel problems hard to solve in theory and practice. In this paper, we add to the general difficulty of this class of problems by further incorporating convex black-box constraints in the lower level. For this setup, we develop a cutting-plane algorithm that computes approximate bilevel-feasible points. We apply this method to a bilevel model of the European gas market in which we use a joint chance constraint to model uncertain loads. Since the chance constraint is not available in closed form, it fits into the black-box setting studied before. For the applied model, we use further problem-specific insights to derive bounds on the objective value of the bilevel problem. By doing so, we are able to show that we solve the application problem to approximate global optimality. In our numerical case study, we are thus able to evaluate the sensitivity of the welfare with respect to the achieved safety level of uncertain load coverage.


Introduction
Bilevel optimization has become more and more important during the last years and decades. The reason is that this class of problems enables the modeler to describe hierarchical decision processes, which is often of great importance in practical applications. On the other hand, this ability also makes bilevel problems hard to solve, both in theory and in practice. For instance, it is well known that even the easiest class of bilevel problems, i.e., models with a linear lower and upper level, is strongly NP-hard [24] and that even checking the local optimality of a given point is NP-hard [54]. Nevertheless, starting in the 1980s, researchers developed increasingly powerful algorithms to tackle different challenging classes of bilevel problems; see, e.g., [10,11,31] for an overview of the field. The most recent trend is to consider more and more challenging bilevel models by also incorporating (mixed-)integer aspects [13,17,18,42,57], continuous nonconvexities [40,41], or uncertainty modeling [4,7,8,46,59]. The resulting bilevel models are usually harder to solve if these complicating aspects appear in the lower level of the bilevel problem. For instance, a convex lower-level problem (that also satisfies a reasonable constraint qualification) can be replaced by its necessary and sufficient optimality conditions, leading to an equivalent single-level reformulation of the bilevel problem. This is, of course, not possible anymore if the lower-level problem does not possess compact optimality certificates, as is the case for mixed-integer or other nonconvex problems.
One main field of application for bilevel optimization is energy markets, in which one is often faced with a hierarchical structure of decision making; see, e.g., the recent survey [56] on bilevel optimization applied in the energy sector. In this paper, we consider a simplified version of the European entry-exit gas market [15,16], which can be modeled as a bilevel problem. The main goal of this market design is to decouple the trade and transport of natural gas. This is also reflected in the bilevel structure since the transport-related aspects are incorporated in the upper level, in which the transmission system operator (TSO) acts while anticipating the market outcome that is determined in the lower-level problem. We refer to [23] for the derivation of this bilevel model and to [6,50] for some further studies that tackle this bilevel model of the European gas market. All papers mentioned so far, however, make the assumption that all data of the model are deterministic. Obviously, this is not the case in practice. In particular, the future energy consumption is not known and it is rather standard to tackle such aspects using techniques from stochastic optimization.
Stochastic bilevel problems or, more generally, stochastic mathematical programs with equilibrium constraints (SMPECs), have been introduced in [43] and further investigated theoretically, e.g., in [51]. They have found numerous applications in specific electricity market models; see, e.g., [27]. In these models, the typical chronology of first- and second-stage decisions x, y accompanied by the observation of the random parameter ξ is x → ξ → y. In this case, x is a here-and-now decision (taken before randomness is revealed) and y is a wait-and-see decision (reacting to randomness). This implies, in particular, that y can be chosen (as a function of x and ξ) such that the lower-level constraints are satisfied almost surely with respect to randomness for any x. However, in the gas market model we are considering in this paper, the chronology is x → y → ξ, which means that both first- and second-stage decisions are of here-and-now type. In this case, the satisfaction of random lower-level constraints cannot be guaranteed almost surely in general, but only with a certain probability. This leads in a natural way to the use of probabilistic or chance constraints. The research on this topic is rather young. Exemplarily, we refer to the theoretical paper [29] devoted to a chance-constrained bilevel model in the chronology x → ξ → y mentioned above and to the two papers [47,58] dealing with the numerical solution of chance constraints as parts of bilevel problems. These latter articles make simplifying assumptions by restricting randomness to discrete measures or by imposing individual rather than joint chance constraints; see also our critical discussion in the case study of Section 3.4. The main contribution of this paper is the consideration of joint chance constraints with a continuous multivariate random distribution in the lower level of a bilevel problem.
We thus extend the deterministic setting to incorporate uncertain loads in the lower, i.e., the market, level of the bilevel problem. In the literature, gas buying firms are usually modeled using inverse market demand functions to capture price elasticity. While this is a reasonable model in the deterministic setting, the bought quantities do not solely follow an economic rationale but also need to cover mostly random fluctuations in the day-ahead market. This is not covered by the models in the literature, and we capture this aspect by introducing a respective chance constraint in the lower level of the bilevel problem. On the other hand, we make simplifying assumptions such as considering a linear gas flow model and restricting ourselves to passive networks to keep the model practically tractable and the presentation streamlined.
This extension of the model adds a significant mathematical challenge to the overall bilevel problem. Although the chance constraint is convex (and, thus, first-order optimality conditions are usable in general), it is not given in closed form. Instead, only function and gradient evaluations are available and we have to design methods that can cope with such black-box functions. To the best of our knowledge, this problem class has not been considered in the literature so far.
In this setup, our contribution is the following. First, we study bilevel problems with a convex upper and lower level that contain a black-box constraint in the lower level in Section 2. Since a closed form of the latter is not available, one has to resort to iterative approaches that successively approximate the lower level's feasible set, which is possible via outer approximations since function evaluations and first-order information are available. Thus, the lower level's feasible set can only be approximated up to a certain prescribed tolerance. We discuss in detail that this makes it challenging (if possible at all) to ensure global optimality for the bilevel problem. Nevertheless, we can show that, if our algorithm terminates, it terminates with a point that is approximately feasible for the lower level. Second, we present a chance-constrained extension of a simplified bilevel model of the European entry-exit gas market and also discuss the economic interpretation of the added chance constraint; cf. Section 3. For this application, we can further derive provable lower bounds by adjusting our method, which allows us to assess the quality of the obtained solutions. It turns out that we are always very close to a globally optimal solution of the chance-constrained bilevel problem. Third and finally, we present numerical results for the chance-constrained bilevel problem of the European gas market on an academic instance and discuss the effect of uncertainty on the results.

Bilevel Problems with Convex Lower-Level Black-Box Constraints
2.1. Problem Formulation. We consider convex bilevel problems of the form

min_{x,y} F(x, y)    (1a)
s.t. G(x, y) ≤ 0,    (1b)
     y ∈ S(x),       (1c)

in which the solution set of the convex lower-level problem is given by

S(x) := argmin_y { f(x, y) : g(x, y) ≤ 0, b(y) ≤ 0 }.    (2)

The variable vector x denotes the n_x upper-level (or leader) decisions and y denotes the n_y lower-level (or follower) decisions. We assume that F : R^{n_x} × R^{n_y} → R, G : R^{n_x} × R^{n_y} → R^{m_u}, f : R^{n_x} × R^{n_y} → R, g : R^{n_x} × R^{n_y} → R^m, and b : R^{n_y} → R are convex and differentiable functions. Further, we suppose that b(y) ≤ 0 is a black-box constraint for which we make the following assumptions.
Assumption 1. The black-box function b is convex and for all (x, y) ∈ {(x, y) : G(x, y) ≤ 0, g(x, y) ≤ 0}, (i) we can evaluate the function b(y), (ii) we can evaluate the gradient ∇b(y), and (iii) the gradient is bounded, i.e., ‖∇b(y)‖ ≤ K for a fixed K ∈ R.
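For intuition, Assumption 1 only requires zeroth- and first-order oracle access to b. A minimal illustrative sketch of such an oracle for a joint chance constraint of the type used later in Section 3 could look as follows; the independent standard normal loads, the safety level 0.9, and the finite-difference step are our assumptions, not part of the model above.

```python
import math

P_SAFE = 0.9  # required safety level p (illustrative choice)

def std_normal_cdf(x):
    """CDF of the standard normal distribution."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def b(y):
    """Black-box constraint value b(y) = p - P(xi <= y) for a load
    vector xi with independent standard normal components; b(y) <= 0
    means the joint chance constraint P(xi <= y) >= p holds."""
    prob = 1.0
    for yi in y:
        prob *= std_normal_cdf(yi)
    return P_SAFE - prob

def grad_b(y, h=1e-6):
    """Central finite-difference gradient of b; in general, only such
    numerical first-order information is available for b."""
    g = []
    for i in range(len(y)):
        yp, ym = list(y), list(y)
        yp[i] += h
        ym[i] -= h
        g.append((b(yp) - b(ym)) / (2.0 * h))
    return g
```

Note that this particular b need not be convex as written; Assumption 1 additionally requires convexity, which for Gaussian chance constraints is typically obtained via a suitable (e.g., logarithmic) transformation. For correlated loads, the product of univariate CDFs would be replaced by a numerical evaluation of the multivariate normal CDF.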
Overall, the upper-level problem (1a)-(1c) is a convex minimization problem and for fixed upper-level variables x =x, the lower level is a convex minimization problem as well. In the case that the follower problem (2) has multiple solutions, i.e., S(x) is not a singleton, Problem (1) models the so-called optimistic bilevel solution, which realizes the lower-level solution y ∈ S(x) that minimizes the upper-level objective function F .
In the following, we denote the shared constraint set by

Ω := {(x, y) : G(x, y) ≤ 0, g(x, y) ≤ 0, b(y) ≤ 0},

its projection onto the decision space of the leader by Ω_u := {x : ∃y with (x, y) ∈ Ω}, and the feasible set of the lower-level problem for a fixed leader decision x = x̄ by

Y(x̄) := {y : g(x̄, y) ≤ 0, b(y) ≤ 0}.

Further, we denote the optimal value function of the lower-level problem by

ϕ(x) := min_y { f(x, y) : y ∈ Y(x) }.

Using the latter, we rewrite Problem (1) as the equivalent single-level problem

min_{x,y} F(x, y)              (3a)
s.t. G(x, y) ≤ 0,              (3b)
     g(x, y) ≤ 0, b(y) ≤ 0,    (3c)
     f(x, y) ≤ ϕ(x),           (3d)
     x ∈ R^{n_x}, y ∈ R^{n_y}.
2.2. Obstacles and Pitfalls. Besides being a bilevel problem, the main challenge of Problem (1), and also of Problem (3), is the black-box constraint b(y) ≤ 0 in the lower-level problem. Typical methods for solving bilevel problems with convex lower levels replace the lower level with its necessary and sufficient optimality conditions in algebraic form. This is, however, not possible if the lower level contains a black-box function, i.e., a function for which an explicit expression is not available, since without such an expression, optimality conditions are also not available in closed form. Since our method later on needs to employ global optimization solvers and since general-purpose global optimization software usually requires all constraints to be given in algebraic form (e.g., to derive suitable over- and underestimators), the considered black-box setting adds to the hardness of the bilevel problem at hand. As a remedy, we need to resort to iterative approaches, e.g., cutting-plane techniques [30] or outer approximation [14,19], and the best we can expect is to fulfill the black-box constraint up to an a priori specified tolerance. In order to specify the quality of such solutions, we make use of the concepts of δ-feasibility and ε-δ-optimality known from global optimization; see, e.g., [37]. The same concept is also used in a bilevel-specific context in, e.g., [41], where the authors require this notion since they consider a nonconvex optimization problem in the lower level. In this paper, we use the following definition.
Definition 1. Let δ = (δ_f, δ_g, δ_b, δ_G) ≥ 0 be given tolerances. A point (x̄, ȳ) is called δ-feasible for Problem (1) if G(x̄, ȳ) ≤ δ_G, g(x̄, ȳ) ≤ δ_g, b(ȳ) ≤ δ_b, and f(x̄, ȳ) ≤ ϕ(x̄) + δ_f hold. For ε ≥ 0, a δ-feasible point (x̄, ȳ) is called ε-δ-optimal for Problem (1) if, in addition, F(x̄, ȳ) ≤ F* + ε holds, where F* denotes the optimal objective value of Problem (1).

We highlight that a δ-feasible point (x̄, ȳ) of Problem (1) is, in particular, δ_f-(δ_g, δ_b)-optimal for the lower-level problem (2) with fixed x = x̄. The point is (δ_g, δ_b)-feasible because g(x̄, ȳ) ≤ δ_g as well as b(ȳ) ≤ δ_b hold and its objective function value is bounded by f(x̄, ȳ) ≤ ϕ(x̄) + δ_f. If the functions f and g do not lead to further difficulties, we can choose δ_f = δ_g = 0. If we can, in addition, also optimize exactly over F and G, then the best we can hope for in our bilevel setting is to obtain a 0-δ-optimal solution of Problem (1) with δ = (0, 0, δ_b, 0).
In practice, however, this turns out to be not so easy. We already mentioned that we can only expect to fulfill the black-box constraint up to a tolerance. Let us consider the following relaxation of the lower level for a given upper-level decision x = x̄:

min_y f(x̄, y)  s.t.  g(x̄, y) ≤ 0,  b(y) ≤ δ_b.    (4)

We denote the optimal value function of this problem by φ̃(x̄). Since Problem (4) is a relaxation of the original lower-level problem (2), φ̃(x) ≤ ϕ(x) holds for all feasible x ∈ Ω_u. Virtually all solution techniques for bilevel problems with a convex lower-level problem exploit the single-level reformulation (3) or similar variants that express the optimal value function ϕ more explicitly, e.g., by using the optimality conditions of the convex lower-level problem. If we replace the lower-level problem with its relaxation (4) and derive a single-level reformulation in analogy to Problem (3), then we need to (i) relax Constraint (3c) and (ii) replace Constraint (3d) by f(x, y) ≤ φ̃(x). If φ̃(x) < ϕ(x) holds for some x ∈ Ω_u, then such a "first-relax-then-reformulate" approach yields a single-level reformulation that is not a relaxation of the single-level reformulation (3). Thus, it is also not a relaxation of the original bilevel problem (1). Consequently, it is not clear whether and how ε-δ-optimality can be guaranteed.
This observation is closely related to the missing independence of irrelevant constraints (IIC) property that is known from single-level optimization. In [38], it is shown that adding an inequality to the lower-level problem that is valid for the bilevel optimal solution may result in a better bilevel solution that is not feasible without the added valid inequality. We demonstrate this effect using a simple linear bilevel problem.
Example 1. Consider a linear bilevel problem that possesses the optimal solution (2, 2) with objective function value 2. The inequality y ≥ 1 is valid but inactive at the optimal solution (2, 2). Adding this constraint to the lower level yields a problem with optimal solution (0, 1) and objective value 0.
For our application, Example 1 particularly suggests that stopping a cutting-plane approach once the black-box constraint is fulfilled up to a δ_b-tolerance may not yield a globally optimal solution. According to [38], a cutting-plane approach is only applicable if the solution of the so-called high-point relaxation, i.e., the relaxation of the bilevel problem that disregards lower-level optimality, is also a solution of the original bilevel problem. This, however, means that we do not have a "real" bilevel problem in the first place. In [12], it is shown that a globally optimal solution of a bilevel problem remains locally optimal when adding valid inequalities to the lower level in the case of inner semicontinuity of the solution set mapping S(x). Thus, under this restrictive assumption, a cutting-plane approach may at least yield locally optimal bilevel solutions.
In any case, determining an ε-optimality certificate, i.e., finding an ε such that F(x, y) ≤ F* + ε holds for a point (x, y) derived by a cutting-plane algorithm, is, in our opinion, an extremely challenging and, up to now, open task. Thus, the best we can hope for when applying a cutting-plane approach is to arrive at a δ-feasible point with δ = (0, 0, δ_b, 0). In other words, the best we can hope for is an approximate heuristic for the bilevel problem with lower-level black-box constraints.

2.3. A "First-Relax-Then-Reformulate" Approach to Compute δ-Feasible Points. In this section, we formalize a cutting-plane approach that proceeds in a "first-relax-then-reformulate" way. We will show that this approach can compute δ-feasible points of Problem (1). After that, we briefly discuss why an intuitive extension, a "first-reformulate-then-relax" approach, fails.
Since the black-box constraint b(y) ≤ 0 is convex, we can construct a sequence of linear outer approximations (E_r, e_r)_{r∈N} of the black-box constraint b(y) ≤ 0 with the property

{y ∈ R^{n_y} : b(y) ≤ 0} ⊆ {y ∈ R^{n_y} : E_{r+1} y ≤ e_{r+1}} ⊆ {y ∈ R^{n_y} : E_r y ≤ e_r}.    (5)

Such descriptions can be obtained, e.g., by a classic cutting-plane approach [30] that exploits the fact that the first-order Taylor approximation of b is a global underestimator of b.
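Concretely, evaluating b and ∇b at an iterate y^r yields the gradient cut b(y^r) + ∇b(y^r)ᵀ(y − y^r) ≤ 0, i.e., one new row E = ∇b(y^r)ᵀ and right-hand side e = ∇b(y^r)ᵀ y^r − b(y^r). A minimal sketch (the helper name is ours):

```python
def gradient_cut(b_val, grad, y_r):
    """Given b(y_r) and the gradient of b at y_r, return one row (E, e)
    of the outer approximation E y <= e.  By convexity of b, the cut
    b(y_r) + grad^T (y - y_r) <= 0 is valid for {y : b(y) <= 0}."""
    e = sum(gi * yi for gi, yi in zip(grad, y_r)) - b_val
    return list(grad), e
```

For instance, for b(y) = ‖y‖² − 1 at y_r = (2, 0) with ∇b(y_r) = (4, 0), the resulting cut is 4 y_1 ≤ 5, which cuts off y_r but keeps the whole unit ball.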
We note that for a given upper-level solution x̄ ∈ Ω_u and r ∈ N, the adapted lower-level problem

min_y f(x̄, y)  s.t.  g(x̄, y) ≤ 0,  E_r y ≤ e_r    (6)

is a convex relaxation of the original lower-level problem (2). If we denote the optimal value function of Problem (6) by φ̃_r(x̄), then the following proposition follows directly from (5).
Proposition 1. For every r ∈ N and every upper-level decision x ∈ Ω_u, it holds

φ̃_r(x) ≤ φ̃_{r+1}(x) ≤ ϕ(x).

We now follow a "first-relax-then-reformulate" approach by replacing the original lower-level problem (2) with its relaxation (6). This yields the following modified variant of the single-level reformulation (3):

min_{x,y} F(x, y)               (7a)
s.t. G(x, y) ≤ 0,               (7b)
     g(x, y) ≤ 0, E_r y ≤ e_r,  (7c)
     f(x, y) ≤ φ̃_r(x),          (7d)
     x ∈ R^{n_x}, y ∈ R^{n_y}.
For the practical tractability of Problem (7), we need the following assumption.

Assumption 2. For every upper-level decision x̄ ∈ Ω_u and every r ∈ N, Slater's constraint qualification holds for the relaxed lower-level problem (6).
With Assumption 2, we can solve Problem (7) to global optimality, e.g., by replacing the optimal-value-function constraint (7d), which models the optimality of the relaxed lower level (6), and the lower-level primal feasibility with the necessary and sufficient KKT conditions of Problem (6). Note that such an explicit expression is not available for Constraint (3d), because it would require explicit knowledge of the black-box function b.
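For concreteness, the resulting mixed-integer reformulation can be sketched as follows; the multiplier names λ, μ, the binaries z, w, and the constant M > 0 are our notation, and valid big-M values must be justified problem-specifically:

```latex
% KKT system of the relaxed lower level (6), replacing (7c)-(7d):
\nabla_y f(x,y) + \sum_{j=1}^{m} \lambda_j \nabla_y g_j(x,y) + E_r^\top \mu = 0,
\qquad g(x,y) \le 0, \quad E_r y \le e_r, \quad \lambda, \mu \ge 0,
% complementarity, linearized with big-M constants and binaries:
\lambda_j \le M z_j, \quad -g_j(x,y) \le M (1 - z_j), \quad z_j \in \{0,1\},
\qquad
\mu_k \le M w_k, \quad e_{r,k} - (E_r y)_k \le M (1 - w_k), \quad w_k \in \{0,1\}.
```

Under Assumption 2, this system is necessary and sufficient for optimality in the relaxed lower level (6), so Problem (7) becomes a single-level mixed-integer program.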
Following the discussion of the last section, we re-iterate that, although we relax the black-box constraint, we do not obtain a relaxation of Problem (3) due to the tightened constraint (7d); see Proposition 1. Still, we can exploit Problem (7) to compute a δ-feasible point of Problem (1) as summarized in Algorithm 1 ("First-Relax-Then-Reformulate"). After the initialization (Step 1), we enter a while-loop in which we first check, among others, for approximate feasibility w.r.t. the chance constraint. Inside the loop, we first update the linear outer approximation in Step 3. This can be achieved, e.g., by using the first-order Taylor approximation of the black-box function at the last iterate, which is possible due to Assumption 1. Then, we solve Problem (7) to obtain a point (x^{r+1}, y^{r+1}). In case Problem (7) is infeasible, we resort to the following feasibility problem:

min_{x,y,s} s                                  (8)
s.t. G(x, y) ≤ 0,
     g(x, y) ≤ 0, E_{r+1} y ≤ e_{r+1},
     f(x, y) ≤ φ̃_{r+1}(x) + s, s ≥ 0,
     x ∈ R^{n_x}, y ∈ R^{n_y}.
We argued before that Problem (7) is not a relaxation of the single-level reformulation (3), because the optimality condition of the lower level is tightened. We thus relax the optimality of the lower level to compute a new iterate (x^{r+1}, y^{r+1}, s). Since Problem (7) is infeasible, s > 0 must hold. Note also that Problem (8) with s = 0 coincides with Problem (7), so the feasibility problem only needs to be solved as long as Problem (7) is infeasible. This is ensured by the check on s in Step 2.

Theorem 1. Suppose that Problems (7) and (8) are solved exactly and to global optimality in every iteration. If Algorithm 1 terminates, then it returns a (0, 0, δ_b, 0)-feasible point of Problem (1).
Note that we have to solve Problems (7) and (8) exactly and to global optimality to get a (0, 0, δ_b, 0)-feasible point as stated in the last theorem. While suitable convexity or even linearity assumptions on F, G, g, and f might render this possible also for larger instances, this clearly underpins the hardness of the original problem addressed in this paper and, of course, imposes a significant computational burden when it comes to real-world and large-scale instances. We also point out that Theorem 1 makes no statement regarding the convergence of Algorithm 1. In fact, convergence cannot be guaranteed. Consider, for example, the case in which Problem (7) is infeasible and, for the solution (x^{r+1}, y^{r+1}, s) of the feasibility problem (8), the black-box constraint is fulfilled, i.e., b(y^{r+1}) ≤ 0 holds. Then, updating the linear outer approximation using the respective first-order Taylor approximation of b at y^{r+1} does not exclude the point (x^{r+1}, y^{r+1}). In the new iteration, the algorithm then delivers either the same solution (x^{r+1}, y^{r+1}, s) again, or we are lucky and the missing IIC property helps to find a new iterate: adding the first-order Taylor approximation may change φ̃_r(x) such that either Problem (7) becomes feasible or the solution of Problem (8) differs from the one of the previous iteration. In any case, Algorithm 1 may not terminate if such a lucky situation does not appear. However, we demonstrate in Section 3 that it works very well in practice.
We also note that Algorithm 1 requires the solution of a bilevel problem in every iteration. In order to reduce the computational burden, we can initialize the algorithm with a tight initial linear outer approximation E_0 y ≤ e_0, obtained, e.g., by an initial cut-sampling phase.
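The loop just described can be sketched in Python as follows; this is a strongly simplified sketch under our own assumptions: all names are ours, the master solver for Problem (7) is passed in as a callable assumed to return a global optimizer for the current cuts, and the feasibility fallback (8) is omitted.

```python
def first_relax_then_reformulate(b, grad_b, solve_master, delta_b, max_iter=100):
    """Cutting-plane loop of Algorithm 1 (sketch): solve the relaxed
    single-level problem (7) for the current outer approximation and
    add a first-order Taylor cut of b at the iterate until the
    black-box constraint is satisfied up to delta_b."""
    E, e = [], []                  # linear outer approximation E y <= e
    x, y = solve_master(E, e)      # global solution of Problem (7) for r = 0
    for _ in range(max_iter):
        if b(y) <= delta_b:        # approximate feasibility check (Step 2)
            return x, y
        g = grad_b(y)              # Taylor cut at the last iterate (Step 3)
        E.append(g)
        e.append(sum(gi * yi for gi, yi in zip(g, y)) - b(y))
        x, y = solve_master(E, e)  # re-solve Problem (7) with the new cut
    raise RuntimeError("no delta_b-feasible point found within max_iter")
```

On a one-dimensional toy lower level (maximize y subject to y ≤ 2 and b(y) = y² − 1 ≤ 0, no upper level), the master problem reduces to y = min(2, min_i e_i/E_i), and the loop coincides with Newton's method on b(y) = 0; it reaches δ_b = 10⁻⁶ after four cuts.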
Before we conclude this section, we briefly discuss a "first-reformulate-then-relax" approach. We already highlighted in the last section that there is not much hope to derive algorithms that yield more than a δ-feasible point of Problem (1). We also discussed that, for the "first-relax-then-reformulate" approach, the main problem is that Problem (7) is not a relaxation of Problem (1). However, Problem (8) might lead to the intuition that relaxing the optimality of the follower problem appropriately resolves this issue. Consequently, we may apply a "first-reformulate-then-relax" approach as follows.

Remark 1. We may first reformulate Problem (1) to Problem (3) and then relax the black-box constraint (3c) and the optimality of the lower level (3d). We recap that the latter is necessary because we cannot describe ϕ(x) explicitly. We therefore need to find functions φ̂_r with

ϕ(x) ≤ φ̂_r(x) for all x ∈ Ω_u

such that the resulting modified single-level problem (9) is a relaxation of Problem (1) for every r ∈ N. This would allow us to compute 0-δ-optimal points of Problem (1). It is, however, not clear how to derive suitable functions φ̂_r. One strategy might be to use a sequence (H_r, h_r)_{r∈N} of linear inner approximations of the lower-level problem such that

{y ∈ R^{n_y} : H_r y ≤ h_r} ⊆ {y ∈ R^{n_y} : b(y) ≤ 0}

holds. By replacing the black-box constraint with the linear inner description, we would obtain the tightened lower-level problem

min_y f(x̄, y)  s.t.  g(x̄, y) ≤ 0,  H_r y ≤ h_r.    (10)

In order to use Problem (9) algorithmically, we need an explicit description of φ̂_r(x). Therefore, we may use the KKT conditions or strong duality of Problem (10) and reformulate Problem (9) accordingly. This, in turn, requires Problem (10) to be feasible for every upper-level decision x ∈ Ω_u, which could be fulfilled, e.g., by a sufficiently tight linear inner description (H_r, h_r) of b(y) ≤ 0 throughout its entire domain. This is not practicable from a computational point of view because such a linearization over the entire domain, in general, requires a significant number of inequalities, so that the resulting problem becomes very large. In addition, if such a sufficiently tight linear description over the entire domain were available, we could directly use it to replace the lower level by its KKT conditions.

2.4. An Academic Example.
In this section, we test Algorithm 1 on an academic instance. Since we are not aware of any bilevel test instances with black-box constraints, we use a problem that is taken from [9] and contained in BASBLib, a library of bilevel test problems; see [44]. Problem (11) has a convex-quadratic objective function. In addition, the lower-level problem is a convex-quadratic minimization problem over linear constraints. Note that we could replace the lower level by its necessary and sufficient KKT conditions and solve the resulting nonconvex single-level problem to global optimality using the well-known mixed-integer linearization of the KKT complementarity conditions with big-M values. However, in order to apply Algorithm 1, we equivalently rewrite the lower-level problem (13) in epigraph form (14), in which the "black-box function" b(y, t) fulfills Assumption 1. Overall, we apply Algorithm 1 to the resulting problem (15). We first initialize δ_b = 10^{-6}, E_0 = [0, 0], as well as e_0 = 0 and enter the while-loop. In order to obtain E_1 and e_1, i.e., a tightening of the description of the black-box constraint, we solve an auxiliary problem that is a relaxation of Problem (15). We obtain the solution x̄ = 2.8, ȳ = 2.4, and t̄ = 0 with corresponding objective value 0.2. Note that there is a strong ambiguity in the value of t̄ in this situation, which is due to the fact that we moved the objective function of the lower-level problem into its constraint set. However, t̄ = 0 is a reasonable choice because we minimize t in the reformulated lower-level problem (14). Thus, we proceed with this value to illustrate the further behavior of our algorithm. In order to construct E_1 and e_1, we compute the first-order Taylor approximation at the point (ȳ, t̄). Using the gradient ∇b(y, t) = (2y − 10, −1)^T, this translates to E_1 = [−5.2, −1] and e_1 = −19.24.
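These numbers can be reproduced directly. The closed form b(y, t) = (y − 5)² − t used below is our reading of the epigraph reformulation; it is consistent with the reported gradient component 2y − 10 and with the reported values of E_1 and e_1.

```python
def taylor_cut(b_val, grad, point):
    """One outer-approximation row (E, e) from the first-order Taylor
    cut b(p) + grad^T ((y, t) - p) <= 0 at the point p."""
    e = sum(g * p for g, p in zip(grad, point)) - b_val
    return list(grad), e

y_bar, t_bar = 2.4, 0.0
b_val = (y_bar - 5.0) ** 2 - t_bar   # assumed closed form of b(y, t)
grad = [2.0 * y_bar - 10.0, -1.0]    # gradient (2y - 10, -1) at (2.4, 0)
E1, e1 = taylor_cut(b_val, grad, [y_bar, t_bar])
```

Evaluating this yields E_1 = [−5.2, −1] and e_1 = (−5.2)·2.4 − 6.76 = −19.24, matching the values above.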
We then solve the single-level problem (7) by describing φ̃_1(x) with the linearized KKT conditions of the relaxed lower-level problem; see also the explanations after Assumption 2. We use a big-M value of 10^6 in our computations. We obtain the solution x̄ = 1, ȳ = 3, t̄ = 3.64 with objective function value 5 and χ = 0.36.

A European Gas Market Model with Chance Constraints
In this section, we use the algorithmic approach developed in the previous section to solve a simplified model of the European entry-exit gas market with uncertain load modeling. To this end, we take the model from [23] and extend it by a chance constraint to model uncertain loads. This results in a bilevel problem with a black-box constraint in the lower level; see Section 3.2. Afterward, we tailor the proposed solution approach to the specific setting of the considered gas market model and give some more details on the handling of the chance constraint, which is a black-box constraint as discussed in the previous section. Finally, we provide some algorithmic insights and discuss the obtained results from an applied perspective in Section 3.4.
In a nutshell, the European entry-exit gas market is organized as follows: (i) The transmission system operator (TSO) announces so-called technical capacities and booking price floors for every entry and exit node of the network to maximize the overall welfare. (ii) Gas selling and buying firms located at entry and exit nodes, respectively, afterward sign capacity-right contracts (called bookings), which are limited by the technical capacities. The prices of these bookings are determined by the booking price floors set by the TSO plus a scarcity-based markup price. (iii) The gas selling and buying firms then nominate entry and exit quantities on a day-ahead market. These nominations need to be balanced and are bounded above by the bookings. No other technical or physical restrictions are considered in this level. (iv) The realized nominations are cost-optimally transported through the network by the TSO. This setting is adequately captured by the four-level model developed in [23]. A peculiarity of the European gas market is the so-called robustness constraint. It requires that the TSO announces only those technical capacities that guarantee that every possibly occurring balanced nomination (i.e., entry and exit quantities match) that is compatible with the given set of bookings can indeed be transported through the network. This means that every such nomination needs to be physically and technically feasible. This leads to unnecessarily restrictive technical capacities, which themselves result in welfare losses (cf., e.g., [6]) on the one hand and in mathematically extremely challenging models (cf. [35,36,50,53]) on the other hand. From a mathematical point of view, the robustness constraint can be seen as an adjustable robust constraint; see, e.g., [5,60]. As shown in [6], an economically and algorithmically more reasonable approach is to require that only the actually realized nominations need to be transportable through the network.
This is the setup we consider in what follows and we will come back to this aspect in the next section as well.
All papers published so far that tackle this four-level setup consider the deterministic case, i.e., exit nominations are purely driven by economic mechanisms that are modeled using inverse market demand functions to capture price elasticity. In what follows, we analyze the effects of uncertain demand. Before we do so in Section 3.2, we first recap the deterministic model from the literature in the next section.

3.1. A Deterministic Bilevel Model.
In Section 3 of [23], it is shown that the special structure of the four-level gas market model can be exploited to equivalently re-state it as a bilevel problem. In particular, the original second- and third-level problems, which model the profit- or surplus-maximizing bookings and nominations of the gas selling and buying firms, can be merged into a single level; see Theorem 5 in [23]. Moreover, the original fourth-level problem (i.e., the transportation level) can be merged into the original first level, in which the technical capacities and booking price floors are determined; see Theorem 7 in [23]. Before we state the resulting bilevel problem, we introduce some notation that is in line with [6,23]. The gas network is modeled as a directed and connected graph G = (V, A) with nodes V and arcs A. The node set is split into the set of entry nodes V_+ ⊆ V at which gas is supplied, the set of exit nodes V_− ⊆ V at which gas is discharged, and the set of inner nodes V_0 ⊆ V without gas supply or withdrawal. Thus, V = V_+ ∪ V_− ∪ V_0 holds. The model allows for multiple gas selling or gas buying firms i ∈ P_u for u ∈ V_+ or u ∈ V_−, respectively. We denote the set of entry players by P_+ := ∪_{u∈V_+} P_u and the set of exit players by P_− := ∪_{u∈V_−} P_u. We consider multiple time periods t ∈ T with |T| < ∞ of gas trading and transport and model the rational behavior of gas buying firms i ∈ P_− based on linear inverse market demand functions (see [39] for the economic background; thus, we can replace the integrals that appear in the objective function (18a), and also later on in other objective functions, by the explicit quadratic expression for the integral). Further, we characterize gas selling firms using pairwise distinct variable costs of production c^var_i > 0, i ∈ P_+. When abstracting from the above mentioned robustness constraint, the bilevel reformulation of the four-level model from [23] is given by Problem (18),
where the lower level is given by Problem (19). In the upper level (18), the TSO specifies technical capacities q^TC_u and booking price floors π̄^book_u for every entry and exit node u ∈ V_+ ∪ V_−. The TSO uses the booking price floors to cover the transportation costs; see the bilinear constraint (18c). This constraint involves the bookings q^book, which are, together with the nominations q^nom, the outcome of the lower-level problem; see Constraint (18e). The actual nominations are transported in a cost-optimal way through the network. To be more specific, the squared nodal pressures π and the mass flows q on the arcs resulting from the nomination need to fulfill the technical network limitations denoted by F(q^nom) such that the transportation costs c^trans are minimized. The generic notation in Constraint (18d) allows to use various transportation models. In this paper, we use the modeling and notation of [6]. For an arc a = (u, v), q_{a,t} > 0 denotes the mass flow in the direction of the arc, i.e., from u to v, and q_{a,t} < 0 denotes flow in the opposite direction. The mass flow has to satisfy the given bounds of the pipes:

q⁻_a ≤ q_{a,t} ≤ q⁺_a,  a ∈ A,  t ∈ T.

In addition, we model mass balance at every node of the network, where δ^out_u and δ^in_u represent the outgoing and incoming arcs at node u, respectively. Moreover, the pressure loss law

π_{u,t} − π_{v,t} = Φ_a(q_{a,t}),  a = (u, v) ∈ A,  t ∈ T,    (20)

couples the mass flows q_{a,t} on each arc with the squared pressures π_{u,t}, π_{v,t} at the incident nodes, where Φ_a denotes the pressure loss function for arc a ∈ A. The squared pressures are bounded from below and above by given squared pressure bounds. In this paper, we make the following assumption, which is also used in [6].
Assumption 3. The pressure loss function Φ a is linear for all a ∈ A.
We discuss this simplification of the physics model at the end of this subsection. The transportation costs c^trans generally arise from compensating pressure losses via controllable elements. However, in this paper, we only consider passive networks without controllable elements such as compressors or control valves. We also discuss the simplification of considering passive networks in more detail at the end of this subsection. To mimic transportation costs in our passive network model, we penalize squared pressure losses on all arcs of the network via the proxy (21), where c^trans_t > 0 is a given parameter and a = (u, v) ∈ A, t ∈ T. Note that the squared pressures π_{u,t}, π_{v,t} are coupled to the mass flow q_{a,t} via Constraint (20).
In the lower-level problem (19), gas buyers and sellers choose their booking and nomination quantities to maximize their individual profits, i.e., each player solves an individual maximization problem. It is shown in [23] that, under the assumption of perfect competition, this is equivalent to the welfare-maximization problem (19), which consists of the merged second and third level of the original four-level model. We briefly shed some more light on the derivation of Problem (19). In the second level of the original four-level model, each player i chooses the bookings q^book_i to maximize the anticipated outcome that is realized after trade in the third level. Under the assumption of perfect competition, the individual second-level problems can be equivalently reformulated as the aggregated second-level problem (22). Moreover, all players act as price takers and, consequently, every gas seller i ∈ P_+ maximizes its profit (23) in every time period t ∈ T. Similarly, every gas buyer i ∈ P_− maximizes its surplus (24) in every time period t ∈ T. The endogenously resulting equilibrium market price π^nom_t ensures that the shared market-clearing condition (19d) is satisfied in every time period t ∈ T. It is shown in [23] that the nomination level can be modeled as a mixed nonlinear complementarity problem (MNCP) by stacking the first-order optimality conditions of all Problems (23) and (24) and by further adding the market-clearing constraint (19d). The market price π^nom_t is then exactly the dual variable of the market-clearing condition (19d). In Theorem 3 of [23] it is shown that the resulting MNCP can be recast as an equivalent welfare-maximization problem. Further, in Theorem 5 of [23], it is shown that integrating this problem into the second-level problem (22) yields Problem (19).
Overall, the upper-level objective maximizes total welfare (18a), which consists of the exit players' utilities minus the costs of the entry players and the transportation costs of the TSO. Since the inverse demand functions P_{i,t} are linear, the exit players' utilities are concave-quadratic. In addition, the nonsmooth absolute values that appear in the transportation costs can be linearized with additional binary and continuous variables and constraints, exactly as done in Section 4 of [6]. After this linearization, we maximize a concave-quadratic objective function over linear and bilinear constraints in the upper level. For fixed upper-level decisions, the lower level is a continuous and concave-quadratic maximization problem.
Remark 3. We extend this setting by integrating uncertain loads into the lower level (19) in the next section. However, this means adding a further complicating aspect to an already challenging model. In this light, we also want to discuss the simplifications that we made in the modeling of the deterministic model. First, we simplified the transportation model by imposing Assumption 3, i.e., by assuming a linear pressure loss function. Using other transportation models, e.g., the Weymouth equation [21,55], is possible but results in additional nonlinear constraints, leading to further nonconvexities in the upper level. Second, we abstract from controllable elements such as compressors and instead use the proxy (21) to account for transportation costs, which are mostly driven by these controllable elements. Allowing for active instead of passive networks requires additional integer variables in the upper level. Thus, we would obtain a nonconvex mixed-integer nonlinear problem in the upper level. Third and finally, we also simplified the market mechanism by abstracting from the robustness constraint. This constraint states that every balanced nomination that fulfills the technical capacities must be guaranteed to be transportable by the TSO. In [6], a characterization of feasible bookings from [35] is used to obtain a tractable reformulation of the robustness constraint. This characterization is only valid for passive networks with linear transportation models. It is further reported that adding the robustness constraint results in a significant computational burden.
Overall, we abstract from all these aspects because the primary focus of this paper is on the handling of black-box constraints in the lower level; see also the next section, in which we extend the deterministic lower level by chance constraints. As discussed in Section 2.2, a setting with black-box constraints in a bilevel lower-level problem is very challenging, and dropping any of the mentioned simplifications would complicate this already demanding setting even further.

A Probabilistic Extension.
We now extend the deterministic setting of the last section to capture demand uncertainties. In reality, exit players i ∈ P_− nominate quantities q^nom_{i,t} without exactly knowing the actual load ξ_{i,t} that they need to cover, say one day ahead. We assume that the load vector ξ = (ξ_{i,t})_{i∈P_−,t∈T} has a log-concave cumulative distribution function. Thanks to Prékopa's theorem [48, Theorem 4.2.1], this is in particular true if the distribution has a log-concave density. It is well known that many prominent multivariate distributions do have a log-concave density, e.g., (for any choice of parameters) Gaussian distributions and the uniform distribution on convex and compact sets, as well as (for a restricted range of parameters) Dirichlet, Gamma, Wishart, and log-normal distributions. In the following, we will assume that ξ obeys a multivariate Gaussian distribution, i.e., ξ ∼ N(m, Σ). We further assume that the TSO is interested in a fail-safe network and imposes a fee µ on the exit players i ∈ P_− to ensure that the realized loads are covered up to a specified safety level p ∈ [0, 1]. We model this by the joint (over all times and exit players) probabilistic constraint (25), which we add to the lower-level problem (19). The left-hand side of (25) is nothing but the cumulative distribution function of ξ evaluated at the vector q^nom_− of nominations at all exits and times. The mentioned log-concavity of this Gaussian distribution function implies that the log-transformed probabilistic load coverage constraint (26) is convex. Consequently, adding Constraint (26) to the lower-level problem (19) yields the extended lower-level problem (27), which is still convex. In summary, we obtain the probabilistic bilevel problem (28): max ϕ_u(q^nom, q) s.t. (18b)-(18d), (q^book, q^nom) ∈ arg max (27).
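For illustration, the chance constraint (25) and its log-transform (26) can be evaluated numerically, e.g., with SciPy, whose multivariate Gaussian CDF internally uses an algorithm by Genz. The following sketch uses toy numbers, not the paper's instance, and the function name is ours:

```python
import numpy as np
from scipy.stats import multivariate_normal

def chance_constraint_value(q_nom, mean, cov, p):
    """Evaluate the log-transformed chance constraint
    h(q) = log(p) - log(P(xi <= q)) for a Gaussian load xi ~ N(mean, cov).
    h(q) <= 0 means the nomination q covers the load with probability >= p."""
    prob = multivariate_normal(mean=mean, cov=cov).cdf(q_nom)
    return np.log(p) - np.log(prob)

# toy two-dimensional example (illustrative numbers only)
mean = np.array([10.0, 12.0])
cov = np.array([[4.0, 1.0], [1.0, 4.0]])
q = np.array([14.0, 16.0])          # two standard deviations above the mean
h = chance_constraint_value(q, mean, cov, p=0.9)
# here h is negative, i.e., this nomination meets the 90 % safety level
```

Note that `multivariate_normal.cdf` approximates a multivariate integral, which matches the paper's point that no closed formula for Φ is available.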
Remark 4. Let us finally discuss in detail the economic interpretation of the probabilistic lower-level problem (27). We argued above that Constraint (25) models a situation in which the TSO forces the exit players to fulfill a given safety level with respect to the actual loads ξ by imposing a sufficiently large fee µ. Thus, the maximization problem (24), which each gas buyer i ∈ P_− solves in the third level, changes to Problem (29). One can think of the fee µ as the "price of stochasticity". Similarly to the deterministic case, we can again use the first-order optimality conditions to model the nomination level as an MNCP. In addition to the shared market-clearing condition (19d), we also need to add the further "clearing condition" (30) to the MNCP, which couples the fee µ with the log-transformed chance constraint (26).
The dual variable of the latter then corresponds to the fee µ. As in the deterministic case, we can recast the resulting MNCP as an equivalent maximization problem (given that the latter satisfies Slater's constraint qualification). When integrating this problem into the second-level problem, we obtain Problem (27).

A Tailored Solution Approach.
The probabilistic bilevel model (28) is a problem in which the lower level is convex. However, we do not have a closed-form expression of the log-transformed chance constraint (26), which is part of the lower-level constraint set. To circumvent this difficulty, we tailor the solution approach for bilevel problems with black-box constrained lower levels from Section 2.3 to the specific setting of Problem (28). We therefore first give some more details on the log-transformed chance constraint (26). The probability function h in (26) plays the role of the black-box constraint function b in (2). Next, we check that h satisfies Assumption 1. By (26), we have the representation (31), where Φ is a multivariate Gaussian distribution function. The basic requirement in Assumption 1 that h is convex has already been verified above using the log-concavity of Φ. Proceeding with the three items of Assumption 1, the first two items claim the possibility of evaluating h along with its gradient at any given argument y. By (31), this amounts to evaluating Φ and ∇Φ at any given nomination vector q^nom_−. As a multivariate Gaussian distribution function, Φ is given as a multivariate integral. Therefore, one has no access to a closed formula but is constrained to sufficiently precise approximations. An efficient way to get fairly precise values of Φ has been described and implemented by Genz [22]. Moreover, it is well known (see, e.g., [3, Lemma 1]) that the partial derivatives of Φ can be represented as in (32), where m_i, Σ_ii are the corresponding components of the parameters m, Σ of the given Gaussian distribution and Φ̃ is another Gaussian distribution function evaluated in one dimension less. Here, the parameters of Φ̃ can be explicitly derived from the parameters m, Σ of Φ and, similarly, the argument ũ is an explicit function of the original argument u.
Hence, one and the same code (such as the one by Genz) may serve to provide values and gradients of the cumulative distribution function at any argument. This provides items (i) and (ii) of Assumption 1. Another possibility to do so, which we use here, is the application of the spheric-radial decomposition of Gaussian random vectors, which applies not just to probabilities of random inequality systems with random left-hand side as in (25) but to arbitrary structures, preferably convex in the random vector. Values and gradients of the probability function can be determined simultaneously by evaluating certain spherical integrals. This approach has been intensively analyzed theoretically (see, e.g., [1,2]) and successfully applied in practice [25]. It remains to justify item (iii) of Assumption 1. By (26), h is given in terms of a multivariate Gaussian distribution function Φ. As such, Φ is continuously differentiable and strictly positive. Since Assumption 1 refers in particular to all x, y satisfying the general constraint g(x, y) ≤ 0, this implies in the concrete problem (28) the constraint q^nom_− ≥ 0; see (19c). From the fact that distribution functions are non-decreasing with respect to the partial order of the space they are defined on, and combining this with (32) and (33), one arrives at the desired boundedness property, where Σ_min := min_i Σ_ii and where we used that Φ̃, being a distribution function, satisfies Φ̃ ≤ 1.
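A minimal sketch of the gradient representation (32): each partial derivative of Φ is a one-dimensional Gaussian density times a Gaussian distribution function Φ̃ in one dimension less, whose parameters are the standard conditional mean and covariance of a Gaussian vector. This is our own illustration (the paper itself uses the spheric-radial decomposition), and the function name is hypothetical:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def gaussian_cdf_gradient(u, m, S):
    """Partial derivatives of the Gaussian CDF Phi(u) for xi ~ N(m, S):
    d/du_i Phi(u) = f_i(u_i) * P(xi_rest <= u_rest | xi_i = u_i),
    where f_i is the marginal density of component i."""
    n = len(u)
    grad = np.empty(n)
    for i in range(n):
        rest = [j for j in range(n) if j != i]
        # conditional mean and covariance of the remaining components
        m_t = m[rest] + S[rest, i] * (u[i] - m[i]) / S[i, i]
        S_t = S[np.ix_(rest, rest)] - np.outer(S[rest, i], S[i, rest]) / S[i, i]
        dens = norm.pdf(u[i], loc=m[i], scale=np.sqrt(S[i, i]))
        grad[i] = dens * multivariate_normal(mean=m_t, cov=S_t).cdf(u[rest])
    return grad
```

A quick sanity check against finite differences of `multivariate_normal.cdf` confirms the formula on small examples.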
Overall, the probabilistic bilevel problem (28) matches the setting of the general bilevel problem (1), Assumptions 1 and 2, as well as its extensions as discussed in Remark 2, so that we obtain δ-feasible points by applying Algorithm 1. The main steps in every iteration of Algorithm 1 are (i) to derive a tighter linear outer approximation (E_r, e_r) and (ii) to solve Problem (7) to global optimality. In our setting, we can achieve the former by a classic cutting-plane approach [30]. In iteration r, we use the point q^r_− = (q^nom_−)^r, which is part of the solution provided by Algorithm 1 in iteration r − 1, as the linearization point. Exploiting that, for convex functions, the first-order Taylor approximation is a global underestimator, we derive the cut (34), i.e., h(q^r_−) + ∇h(q^r_−)^⊤ (q^nom_− − q^r_−) ≤ 0. Thus, in iteration r, we construct E_{r+1} by adding the row ∇h(q^r_−)^⊤ to E_r and construct e_{r+1} by adding the entry ∇h(q^r_−)^⊤ q^r_− − h(q^r_−) to e_r. Consequently, we obtain the relaxation (35) of the lower-level problem (27) by replacing Constraint (26) with the Constraints in (35e). As already discussed for the general bilevel problem (1), if Assumption 2 holds, then Task (ii) from above can be achieved by replacing lower-level optimality, i.e., Constraint (7d), with the KKT conditions of the relaxed lower-level problem (6). In this way, we obtain an algorithmically tractable reformulation of Problem (7). For our application, the KKT conditions of the relaxed lower-level problem (35) consist of primal feasibility (35b)-(35e), the stationarity conditions (36), the complementarity conditions (37), and the nonnegativity conditions (38). Altogether, in every iteration of Algorithm 1 applied to Problem (28), we have to solve Problem (39): max ϕ_u(q^nom, q) s.t. (18b)-(18d), (35b)-(35e), (36)-(38).
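The update of the outer approximation (E_r, e_r) by the cut (34) can be sketched as follows; the helper name is ours, and the example data are illustrative:

```python
import numpy as np

def add_kelley_cut(E, e, h_val, grad, q_lin):
    """Append the first-order (Kelley) cut
        h(q^r) + grad(q^r)^T (q - q^r) <= 0,
    rewritten as grad^T q <= grad^T q^r - h(q^r),
    to the current outer approximation E q <= e."""
    E_new = np.vstack([E, grad])
    e_new = np.append(e, grad @ q_lin - h_val)
    return E_new, e_new

# toy example: h(q) = q_1^2 + q_2^2 - 1 linearized at q^r = (1, 1),
# where h = 1 and grad = (2, 2); the resulting cut is 2 q_1 + 2 q_2 <= 3
E, e = add_kelley_cut(np.zeros((0, 2)), np.zeros(0),
                      h_val=1.0, grad=np.array([2.0, 2.0]),
                      q_lin=np.array([1.0, 1.0]))
```

Since the first-order Taylor approximation underestimates the convex function h, every point with h(q) ≤ 0 satisfies the appended cut, so the feasible set is only relaxed.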
This problem contains several difficulties. First, although the primal constraints (35b)-(35e) of the relaxed lower-level problem are linear, the KKT complementarity constraints (37) are nonconvex. Fortunately, they can be expressed as a set of mixed-integer linear constraints by introducing additional binary variables and sufficiently large big-M values; see [20]. For a linear constraint a^⊤x ≤ b and its dual variable λ ≥ 0, the complementarity condition λ(b − a^⊤x) = 0 can be linearized via λ ≤ Mz and b − a^⊤x ≤ M(1 − z) with a binary variable z ∈ {0, 1} and sufficiently large constants M; see (40) and [45]. In fact, finding correct big-Ms may in general be as hard as solving the original bilevel problem [32]. Sometimes, however, problem-specific knowledge can be used to obtain correct big-M values; see, e.g., [33,50,52]. For the deterministic model that we discussed in Section 3.1, valid big-M values are derived from economic relationships in [6]. In the probabilistic setting, these economic relationships are disturbed. Still, we can derive valid big-Ms in a similar way as in [6] by exploiting technical limits of the network. For the only Lagrangian multiplier that does not appear in the deterministic setting, we derive a valid upper bound in Appendix B.
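The logic of this big-M linearization can be illustrated in isolation (our own sketch with a hypothetical helper name; in the actual reformulation (40) these constraints are part of a MILP):

```python
def complementarity_milp_ok(slack, lam, z, M_p, M_d, tol=1e-9):
    """Check the big-M linearization of lam * slack = 0 with
    slack = b - a^T x >= 0 and lam >= 0:
        lam <= M_d * z   and   slack <= M_p * (1 - z),   z in {0, 1}.
    Any point satisfying these is complementary: either z = 1 forces the
    slack to zero (constraint active) or z = 0 forces the multiplier to zero."""
    return (0 <= lam <= M_d * z + tol) and (0 <= slack <= M_p * (1 - z) + tol)
```

Note that the check is only valid if M_d and M_p are indeed upper bounds on the multiplier and the slack, which is exactly the point of the big-M discussion above.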
In addition, we are facing the bilinear upper-level constraint (18c). Hence, after having obtained the reformulation (40) of the complementarity conditions (37), we need to solve a mixed-integer maximization problem with a concave-quadratic objective function over linear and bilinear constraints in every iteration of Algorithm 1. Such problems can be solved by modern global solvers such as Gurobi, which tackle bilinear constraints by spatial branching [28]. We highlight that by design of spatial branching, a solution of Problem (39) fulfills Constraint (18c) only up to a pre-specified solver tolerance δ G > 0. Consequently, the solution we obtain from Algorithm 1 is a (δ G , 0, δ b , 0)-feasible point of Problem (28); see also Remark 2.
Finally, we show how to exploit structural properties of the specific model at hand to extend Algorithm 1. The extended variant is capable of computing a tight upper bound on the optimal objective function value of Problem (28) such that we get an ex-post guarantee regarding the optimality gap of the δ-feasible solution obtained by Algorithm 1. To this end, we compare the KKT conditions (36)-(38) of the relaxed lower-level problem (35) with the KKT conditions of the original probabilistic lower level (27). The latter are given by primal feasibility (19b)-(19d) and (26), KKT stationarity (36a), (36c), and (41), KKT complementarity (37a)-(37c) and (30), as well as the KKT nonnegativity conditions (38). We notice two differences when comparing the two sets of KKT conditions. First, instead of the single complementarity condition (30), we obtain the set of complementarity conditions (37d). Second, we note that in the stationarity condition (36b) we approximate the product ∇_{q^nom_{i,t}} h(q^nom_−) µ that appears in (41) by the sum ∑_{j=1}^r ∇_{q^nom_{i,t}} h(q^j_−) µ_j. As a consequence, the single-level problem (39) that we solve in every iteration r of Algorithm 1 (applied to Problem (28)) is neither a relaxation of Problem (28), nor is Problem (39) in iteration r a relaxation of Problem (39) in iteration r + 1. This is in line with the previous discussions in Sections 2.2 and 2.3. However, by omitting the Constraints (36b), (37d), and (38c) from Problem (39), we obtain the desired relaxation (42). Algorithm 2 states the resulting bounding procedure: if Problem (42) is infeasible, it returns "Problem (28) is infeasible"; otherwise, it extracts q^{r+1}_− and the optimal objective value φ̄_u from the solution.
Lemma. Problem (42) is a relaxation of the probabilistic bilevel problem (28). In addition, Problem (42) in iteration r is a relaxation of Problem (42) in iteration r + 1.
The lemma follows directly from the construction of the problems. We note that Problem (42) amounts to relaxing the optimal response of the exit players i ∈ P_−. For general bilevel problems, relaxing the optimality of the lower-level problem may result in weak bounds on the optimal objective value of the bilevel problem. In our application, however, we observe that the objective functions (18a) and (19a) of the two levels are rather aligned. In fact, the only difference is that in the upper level, we account for the transportation costs ∑_{t∈T} ∑_{a∈A} c^trans(q_{a,t}) of the TSO, and in the lower level, we account for the booking costs ∑_{u∈V_+∪V_−} ∑_{i∈P_u} π̄^book_u q^book_i of the players. These terms are forced to be equal by the upper-level constraint (18c). Thus, our working hypothesis is that relaxing the optimality of the exit players as done in Problem (42) still yields a good approximation of their optimal response. We thus expect that the bounds obtained by Problem (42) are rather tight. We solve Problem (42) iteratively as described in Algorithm 2, which essentially applies Kelley's cutting-plane approach [30] directly to the single-level problem (42). Since the hypothesis is that the obtained bound is tight, we also expect that the cuts obtained in Algorithm 2 are useful for the original problem.
In practice, we thus proceed as follows. We invoke Algorithm 2 until it converges. We store the bound φ̄_u and initialize (E_0, e_0) in Algorithm 1 with all inequalities (35e) that we generated in Algorithm 2. This corresponds to extending Problem (42) to Problem (39), which we solve in every iteration of Algorithm 1. If Algorithm 1 then terminates with a δ-feasible point of Problem (28), we compute the relative gap between the bound φ̄_u and the objective value of this point, which gives an ex-post optimality certificate. We analyze the quality of this gap in more detail in the next section.

Numerical Study. In this section, we illustrate the proposed approach using the example of a specific instance of the probabilistic gas market model introduced in Section 3.2. We first give details on this instance before we evaluate the approach from a computational point of view. Finally, we interpret the results that we obtain for various probability levels of the chance constraint.
Physical and Economic Setup. For our analysis, we consider a network with eleven nodes as shown in Figure 1. Three of the nodes are entries at which gas is injected into the network, three others are exits at which gas is withdrawn from the network, and the remaining nodes are so-called inner nodes. The nodes are connected by eleven arcs and the network contains one cycle. The length L_a of each pipe is specified in Figure 1. In addition, we assume an equal diameter D_a = 0.5 m, roughness k_a = 0.1 mm, and capacities q^±_a = ±266 kg s−1 for all pipes a ∈ A. With respect to the flow model, we follow [6]. This means we consider stationary gas flow, i.e., we abstract from temporal dependencies, and only consider horizontal pipes. In this setting, we can use the pressure loss function (43); see, e.g., [21]. In it, the constant c = 340 m s−1 denotes the speed of sound in natural gas, and the friction coefficient λ_a can be approximated by the formula of Nikuradse; see [21]. Equation (43) is a suitable simplification of gas flow physics but is still nonlinear. In order to arrive at a linear approximation, see Assumption 3 and Remark 3, we replace |q_a| by a mean flow q^mean_a = 100 kg s−1. Again, exactly as done in [6], we set c^trans_t = 1. For all inner nodes u, we have lower and upper pressure bounds of 15 bar and 140 bar. At all entries, we have a lower pressure bound of 20 bar. Further, at Entry 1 and Entry 2 we have an upper pressure bound of 80 bar, and at Entry 3 we have 160 bar. At Exit 1, Exit 2, and Exit 3, we have lower pressure bounds of 20 bar, 10 bar, and 5 bar, and all upper pressure bounds are set to 120 bar. We note that the pressure bounds are rather large such that the network is not very restrictive. The reason behind this choice is that we want to analyze the effect of uncertain loads.
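The pipe data above can be turned into the linear pressure-loss coefficient required by Assumption 3. The following sketch is our own illustration, not the paper's code: it uses the formula of Nikuradse and the standard stationary, isothermal pressure loss model π_u − π_v = (λ c² L)/(D A²) |q| q with |q| replaced by the mean flow; the exact scaling and units of the paper's implementation may differ:

```python
import math

def nikuradse_friction(D, k):
    """Friction coefficient via the formula of Nikuradse (rough-pipe law):
    lambda = (2 log10(D/k) + 1.138)^(-2), with D and k in meters."""
    return (2 * math.log10(D / k) + 1.138) ** -2

def linearized_pressure_loss_coeff(D, k, L, c=340.0, q_mean=100.0):
    """Slope of the linearized pressure loss law pi_u - pi_v = coeff * q_a,
    obtained by replacing |q_a| with the mean flow q_mean (Assumption 3).
    D, k, L in meters; c in m/s; q_mean in kg/s."""
    A = math.pi * D**2 / 4            # cross-sectional area of the pipe
    lam = nikuradse_friction(D, k)
    return lam * c**2 * L / (D * A**2) * q_mean
```

With the stated data (D = 0.5 m, k = 0.1 mm), the friction coefficient comes out at roughly 0.014, a plausible value for rough pipes.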
To this end, we analyze various choices of p ∈ [0, 1] in the chance constraint (25). Hedging against the uncertainty increases nominated quantities, especially for p close to 1. Assuming large pressure bounds prevents technical infeasibilities that may otherwise result from large nominations.
In our example, we consider one player at each of the entry and exit nodes and |T| = 12 time steps, which refer to the months of a year. We thus set the initial willingness to pay, which corresponds to the intercepts a_{i,t}, higher in winter months than in summer months. For a fixed time step t, fluctuations of the intercepts a_{i,t} across the players i ∈ P_− are rather small. We specify the exact choices for a_{i,t} in Table 2 in the appendix. In contrast to the intercepts, we assume that the price elasticities, i.e., the slopes b_{i,t} of the exit players, are constant over time, and we specify b_i = b_{i,t} in Figure 1. Exit 2 has a rather elastic demand, while the demand of Exit 3 is the least elastic. For the gas sellers, we set the variable costs to be constant over time as specified in Figure 1. Entries 1 and 2 have similar variable costs, while Entry 3 has a significantly lower variable cost.
The vector ξ of loads at the exits is random and we assume that it follows a multivariate Gaussian distribution ξ ∼ N(m, Σ) with mean vector m and covariance matrix Σ. In reality, other types of distributions (e.g., log-normal, uniform, discrete) and mixtures thereof may also be relevant [34, Chapter 13]. When specifying the parameters of this distribution, we do not rely on a statistical data analysis of a concrete real-life network. We rather follow the idea that the mean vector of loads represents the equilibrium exit nominations of the deterministic bilevel model from Section 3.1, thus modeling an elastic demand that changes only slowly over time. Hence, our mean load is, among other parameters, a consequence of the inverse demand functions P_{i,t} as specified in Table 2. For real-world problems, one would rather solve an inverse problem and calibrate the inverse demand functions in such a way that a specific mean vector results as a solution of the deterministic bilevel problem. We specify our actual choices of m and Σ in the appendix.
The load vector ξ itself can be understood as a random inelastic deviation from its mean, which is more relevant on the shorter time scale of a day-ahead market. This scattering around the mean is defined by the covariance matrix. In order to fix the covariance matrix, it is sufficient to define standard deviations of the single time- and node-components as well as correlations between these components. Orientation for both features can be obtained from general gas load data as analyzed in [34]. As a consequence, we built our covariance matrix in a fairly heuristic way: zero correlation between exits at different times, a common positive correlation between exits at a fixed time, and a common relative standard deviation across exits that is constant over time. Clearly, the dimension of the random vector (or its multivariate distribution) is the same as that of the vector of exit nominations, namely 36 (3 exits combined with 12 time steps). All data required to reproduce our experiments are publicly available; see [26].
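The heuristic covariance construction just described can be sketched as follows. The parameter values `rel_std` and `corr` as well as the time-major ordering of the components are illustrative assumptions, not the paper's calibrated data:

```python
import numpy as np

def build_covariance(mean, n_exits, n_times, rel_std=0.1, corr=0.5):
    """Heuristic covariance for the load vector: zero correlation between
    different times, a common correlation `corr` between exits at the same
    time, and a common relative standard deviation `rel_std` applied to the
    mean loads. The vector is assumed to be ordered time-major, i.e., all
    exits of time step 1 first, then all exits of time step 2, etc."""
    sigma = rel_std * np.asarray(mean)          # componentwise std deviations
    # block-diagonal correlation: one (n_exits x n_exits) block per time step
    block = np.full((n_exits, n_exits), corr)
    np.fill_diagonal(block, 1.0)
    R = np.kron(np.eye(n_times), block)
    return np.outer(sigma, sigma) * R
```

For the paper's setting, the call would use `n_exits=3` and `n_times=12`, yielding a 36-dimensional covariance matrix.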
Computational Analysis. In this section we briefly evaluate the tailored solution approach presented in Section 3.3 from a computational perspective. We implemented the approach in Python 3.7 and used Gurobi 9.1.1 to solve all involved optimization problems. All computations have been carried out on a compute cluster; see [49] for details on the installed hardware.
In order to have a benchmark, we first consider the deterministic model from Section 3.1, which can be solved by global solvers out of the box after a classic single-level reformulation, e.g., using the KKT conditions of the lower level, has been applied. This model has 930 variables, 282 of which are binary, and 1260 constraints. It is solved in 1.17 s and can thus be considered easy. We now turn to the probabilistic model for various probability levels p of the chance constraint (25). The model sizes increase slightly compared to the deterministic model because we obtain a new complementarity condition in every iteration of the algorithm. Since we linearize the complementarity conditions, we in particular obtain an additional binary variable in every iteration.
For our computations, we set δ_b = 10^−3, i.e., we terminate Algorithms 1 and 2 if h(q^nom_−) ≤ 10^−3 holds. Preliminary results showed that it is very important to equip Algorithm 2 with initial cuts. The reason is that, without an initial cut, we obtain a solution of the bounding problem (42) for which it is very likely that P(ξ_{i,t} ≤ q^nom_{i,t} for all i ∈ P_−, t ∈ T) is close to zero. Consequently, we would linearize the chance constraint at the tails of the Gaussian distribution, where the gradient of the chance constraint is almost zero. For the log-transformed chance constraint (26), this results in partial derivatives going to infinity such that we cannot construct the cuts (34). The situation can be resolved by adding initial valid inequalities that bound the nominations appropriately from below. A straightforward way to achieve this is to exploit the p-quantile Q_p of the standard Gaussian distribution; see (44). These "quantile cuts" correspond to individual p-level constraints (45) on load coverage for all exit players and for all time steps separately, which are clearly valid for the original chance constraint. Another approach is to add a single first-order cut (34) at a point q̄^nom_− with p − δ_p ≤ P(ξ_{i,t} ≤ q̄^nom_{i,t} for all i ∈ P_−, t ∈ T) ≤ p for a given δ_p > 0. We can find such a point, e.g., via bisection. The latter approach turned out to be very effective in our computations such that we apply it by default.
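Both initialization strategies can be sketched as follows. The quantile cuts lower-bound each nomination by its marginal p-quantile; for the bisection, the paper does not specify the search direction, so a ray from the mean along the componentwise standard deviations is assumed here purely for illustration:

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def quantile_cuts(mean, cov, p):
    """Individual p-level lower bounds on the nominations ("quantile cuts"):
        q_nom[i] >= mean[i] + Q_p * sqrt(cov[i, i]),
    valid for the joint chance constraint since each marginal must hold."""
    return mean + norm.ppf(p) * np.sqrt(np.diag(cov))

def bisect_initial_point(mean, cov, p, delta_p=0.01, max_iter=60):
    """Bisection along the (assumed) ray mean + t * std to find a point
    whose joint probability lies in [p - delta_p, p]; the joint CDF is
    monotone along this ray, so bisection brackets the target band."""
    std = np.sqrt(np.diag(cov))
    lo, hi = 0.0, 10.0               # hi chosen large enough that prob > p
    dist = multivariate_normal(mean=mean, cov=cov)
    for _ in range(max_iter):
        t = 0.5 * (lo + hi)
        prob = dist.cdf(mean + t * std)
        if prob > p:
            hi = t
        elif prob < p - delta_p:
            lo = t
        else:
            return mean + t * std
    return mean + hi * std
```

The bisection places the linearization point just below the target safety level, away from the flat Gaussian tails where the cuts (34) degenerate.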
In Table 1, we present the numbers of iterations and the runtimes, in total and separated by the three phases of our tailored algorithm, i.e., the bisection to find an initial cut, the bounding procedure (Algorithm 2), and the feasibility procedure (Algorithm 1). We observe several aspects in Table 1. First, in general, the larger fraction of the total iterations is spent in the bounding procedure, while mostly only a few iterations are needed afterward to compute a δ-feasible point. However, the bounding problems are rather cheap, so we cannot say that the bounding phase requires more runtime than the feasibility phase. Second, we cannot observe a relationship between the p-level and the total runtime, i.e., in general a larger p-level does not result in longer runtimes. The longer runtimes for p = 0.98 and p = 0.99 are caused by a single iteration of Algorithm 1 that takes much more time than every other iteration. However, we do not see any explanation of these "outlier iterations" in terms of larger values of p. Third, we spend a significant amount of the runtime on finding an initial cut via bisection. We already discussed that initial cuts are crucial for the applicability of the approach. However, the quantile cuts (44) can be added for free, so it appears to be a reasonable option to add these cuts instead. We shed some more light on this by discussing the evolution of the p-level over the iterations of our approach for both cases, i.e., using quantile cuts or running an initial bisection. By p-level, we mean the value of the original chance constraint (25) when it is evaluated at the solution of the respective iteration. Figure 2 demonstrates this comparison exemplarily for the instance p = 0.90. We see that we start with a much higher p-level when we equip our approach with the bisection phase. This results in faster convergence to the desired p-level. Ultimately, this reduction in iterations overcompensates for the expensive bisection.
This justifies using the bisection instead of the quantile cuts. Another conclusion from Figure 2 can be drawn with respect to the usefulness of the cuts generated in the bounding phase. The vertical lines in Figure 2 mark the end of the bounding phase. While it takes several iterations in the bounding phase to finally arrive at the desired p-level, we remain close to this p-level when entering the feasibility phase. This behavior can be observed for all tested instances and means that the cuts generated in the bounding phase are very useful in Algorithm 1, most likely because they linearize the log-transformed chance constraint around the δ-feasible point to which we converge.
We now turn back to Table 1 and discuss the most important observation. Except for p = 0.99, for which we terminate with a gap of 0.187 %, we always terminate with a negligibly small gap that allows us to consider the computed solutions as approximately optimal. Thus, the computational results allow us to use our tailored approach from Section 3.3 in a case study to analyze and interpret the effects of uncertain loads on economic quantities like nominations and total welfare.

Case Study. In the following, we discuss the outcomes of our computations for different constraints on load coverage. The first instance is defined as the solution of the nominal problem without any load coverage constraint. As noted before, this solution corresponds to mean exit nominations. The second instance imposes individual 90 %-constraints on load coverage for all exit players and for all time steps as in Constraint (44), respectively (45). As discussed, the advantage of this quantile approach relies on the fact that it is as easy to obtain as the nominal solution, where the lower bound is just zero. The drawback is that these solutions, while yielding robust load coverage for each exit player and each time step separately, may be far from guaranteeing robust load coverage for all exit players and times simultaneously. This observation, which will be supported by our results below, is not surprising because only a very restricted part of the underlying multivariate distribution is taken into account, i.e., no covariances between load components are considered. Our main interest is in the third instance, namely the joint (over all times and exits) probabilistic load coverage, which results in the probabilistic constraint (25). In contrast to the quantile-based solution, robust load coverage can be achieved here. As seen in the last section, this robust solution of the chance-constrained problem comes at the cost of an additional computational burden.
In the following, we evaluate the impact of different probability levels p ∈ [0, 1] as detailed in Table 1. Figure 3 shows the time-dependent profiles of nominations at the three exits for the mean vector, the 0.9-quantile constraint, and the probabilistic constraints with safety levels from the indicated range. It can be observed how much nominations have to be increased in order to satisfy the desired safety level. While the mean profiles are just scaled and shifted versions of each other, which is a consequence of keeping the slopes of the inverse demand functions P_{i,t} time-invariant in our model, the other nomination profiles change their shape over time and across exits. Figure 4 provides a plot of the total welfare (18a) and of the fee µ for uniform load coverage occurring in (29) (also interpreted as the "price of stochasticity") as a function of the chosen probability level. We reiterate that, in our iterative solution approach, µ is approximated by a series of µ_j. However, we can compute the "real" fee µ ex post via the KKT stationarity condition (41) of the original probabilistic lower-level problem (27). It can be seen that the loss of welfare is moderate when increasing the safety level towards 0.9, whereas welfare severely decreases when driving safety further towards 1. A similar pattern is observed for the price of uniform load coverage. These diagrams suggest that, in the given example, 90 % safety of load coverage represents a reasonable compromise between the TSO's interest in nominations covering the random load on the one hand and the welfare losses induced by this safety of load coverage on the other.
The probabilistic effect of exit nominations is illustrated in Figure 5: The thick black curves correspond to nominations according to the mean (top), the 0.9-quantile solution (middle), and the probabilistic constraint for p = 0.9 (bottom). In order to visualize load coverage by these nomination profiles, we simulated ten scenarios for load profiles according to the given multivariate Gaussian distribution. Note that each scenario relates to all exits and all times. Hence, for instance, the three thin blue curves in the lower diagrams correspond to one scenario. Nominating according to the mean load unsurprisingly yields that, at each exit and each time, the load of approximately one half of the scenarios is covered by the nomination. On the other hand, every one of the ten scenarios fails to be covered by the nomination at some exit at some time (such scenarios not uniformly covered by the nomination are colored in the diagrams). This means that the TSO has to expect uncovered load in the network almost surely over the considered time horizon. In contrast, the quantile-based exit nomination guarantees, at each exit and each time separately, load coverage for all but one scenario on average, which corresponds well to the chosen 90 %-quantile. However, uniform load coverage is as poor as for the mean-based nomination: none of the ten scenarios is uniformly covered over all exits and times (i.e., all curves are colored). Things change when imposing a joint chance constraint: here, nine out of ten scenarios are uniformly covered by the corresponding nomination profile.
Only one scenario (colored in blue) remains unsatisfied at Exits 1 and 3 at time 2. We note that the number of violating scenarios is itself random. Thus, repeating the analysis with a new sample of load scenarios might result in slightly different numbers. However, on average, it reflects the true probabilities of violation. For instance, in a sample of 100 load scenarios (which we do not illustrate here for reasons of visibility), we found 100, 89, and 12 violating scenarios for the mean, the quantile, and the joint probabilistic solution, respectively. Finally, the nomination counterparts at the entries are plotted in Figure 6 for six different safety instances. The arrangement in these diagrams differs from that of the exits in Figure 3: here, we contrast the three entries within each diagram. Three major observations can be made.
(i) The sum of entry nominations increases with the required safety level. (ii) The nomination profiles at Entries 2 and 3 are almost constant over time. (iii) Both entries change their role from inactive to strongly active when increasing the safety level. The first observation is not surprising since exit nominations increase as well and entry and exit nominations have to be balanced. The second observation is due to technical limitations in combination with economic reasons. The maximum throughput of the network is limited mainly by pressure bounds, and the technical capacities are set to ensure that the pressure bounds are fulfilled. Entries 2 and 3 have lower variable costs than Entry 1, so it makes economic sense to use as much gas as possible from these two entries. Since total demand in most cases exceeds the technical capacities at Entries 2 and 3, we observe an almost constant profile.
Finally, we turn to the third observation, which is the least intuitive. For low safety levels, the cheapest entry, Entry 3, supplies a lot of gas. With higher safety levels, Entry 3 is substituted by the significantly more expensive Entry 2 until Entry 3 is driven out of the market entirely. One possible explanation for this behavior is as follows. Whenever Entry 3 supplies large amounts of gas, the pressure loss over Pipe 2 results in a low pressure at Node 1; see Figure 1. When the nominations increase, more gas has to be transported from left to right, in particular more gas than Entry 3 is able to deliver due to technical limitations. Thus, Entry 2, the second cheapest producer, needs to step in to supply Exits 2 and 3. However, the low pressure at Node 1 forces Entry 2 to "send" gas over Pipe 11 (and thus over Pipe 3 and Pipe 5 to Node 4) to satisfy all pressure loss constraints. This results in very high transportation costs compared to "sending" gas directly via Pipe 7 to Node 4. Consequently, with higher overall nominations, it becomes more attractive for the TSO to shut down Entry 3 entirely because its cheaper production is overcompensated by higher overall transportation costs. This also underlines the complicated interplay of physics and economics.
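The scenario-based coverage comparison behind Figure 5 can be mimicked in a small Monte Carlo sketch (again with assumed toy data, not the paper's instance): sample correlated Gaussian load scenarios and count how many are covered uniformly, i.e., at every exit and time simultaneously, by the mean-based and by the quantile-based nomination.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)

# Toy correlated Gaussian loads (NOT the paper's data): 3 exits x 2 times,
# flattened to dimension 6, equicorrelated with rho = 0.5.
dim, rho, sd = 6, 0.5, 10.0
mean = np.full(dim, 100.0)
cov = sd**2 * ((1 - rho) * np.eye(dim) + rho * np.ones((dim, dim)))
scenarios = rng.multivariate_normal(mean, cov, size=10_000)

def uniform_coverage(nomination):
    """Fraction of scenarios covered at ALL exits and times at once."""
    return np.mean(np.all(scenarios <= nomination, axis=1))

nom_mean = mean                                  # mean-based nomination
nom_quant = norm.ppf(0.9, loc=mean, scale=sd)    # individual 0.9 quantiles

print(uniform_coverage(nom_mean))   # far below one half
print(uniform_coverage(nom_quant))  # clearly below the targeted 0.9
```

The mean-based nomination covers only a small fraction of scenarios uniformly, and even the individual-quantile nomination stays clearly below 90 % uniform coverage, which is consistent with the qualitative picture described above; only a joint chance constraint targets the uniform coverage probability directly.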

Conclusion
We considered bilevel optimization problems with convex lower levels that involve black-box constraints, i.e., constraint functions for which no closed form is available.
To tackle such problems, we developed a cutting-plane algorithm that computes approximate bilevel-feasible points. Afterward, we applied this method to a bilevel model of the European gas market in which we model uncertain gas loads via a joint chance constraint in the lower level. Since such a constraint cannot be stated in closed form, this setting fits the setup of a bilevel problem with a lower-level black-box constraint. Using further problem-specific techniques, we can also show that the computed approximately feasible points are indeed approximately optimal as well, which allowed us to analyze the sensitivity of welfare outcomes in dependence of the achieved safety level for load coverage. The considered setting is a rather new sub-field of bilevel optimization. Thus, many aspects can be elaborated on further in the future. Let us sketch two of them. First, it might be interesting to study a bilevel setting in which the lower-level black-box constraint also depends on the leader's decision. In the context of chance constraints, this can be seen as a variant of decision-dependent uncertainty in the lower-level problem. Second, we are sure that other algorithmic approaches to bilevel problems with black-box constraints are possible, and every algorithmic improvement will be beneficial for tackling relevant problems from practice.

Table 2. Intercepts a i,t (in EUR/(1000 Nm 3 /h)) and slopes b i (in EUR/(1000 Nm 3 /h) 2 ) of the exit players.
Appendix: Bounding the Lagrangian Multipliers

In this section, we derive an upper bound on the Lagrangian multipliers $\mu_j$ as used in Section 3.2. To this end, we consider the stationarity condition (36b), i.e.,
$$a_{i,t} + b_{i,t} q^{\mathrm{nom}}_{i,t} - \gamma^+_{i,t} - \pi^{\mathrm{nom}}_t - \sum_{k=1}^{r} \nabla_{q^{\mathrm{nom}}_{i,t}} h(q^k_-)\, \mu_k = 0, \quad i \in P_-,\ t \in T.$$
First, we assume $q^{\mathrm{nom}}_{i,t} > 0$ for all $i \in P_-$ and $t \in T$, which is justified in the probabilistic setting: according to (44), we can bound the nominations of the exit players from below by a positive quantity. From now on, we consider an arbitrary but fixed index $j$. Since the multivariate Gaussian distribution function $\Phi$ is strictly positive, it follows from (32) that $\nabla_{q^{\mathrm{nom}}_{i,t}} h(q^j_-) < 0$ holds for all $i \in P_-$, $t \in T$, and $j = 1, \dots, r$. Hence, we can solve the stationarity condition for $\mu_j$ and obtain
$$\mu_j = \frac{a_{i,t} + b_{i,t} q^{\mathrm{nom}}_{i,t} - \gamma^+_{i,t} - \pi^{\mathrm{nom}}_t - \sum_{k \in \{1,\dots,r\}\setminus\{j\}} \nabla_{q^{\mathrm{nom}}_{i,t}} h(q^k_-)\, \mu_k}{\nabla_{q^{\mathrm{nom}}_{i,t}} h(q^j_-)}, \quad i \in P_-,\ t \in T.$$
Note that all $\nabla_{q^{\mathrm{nom}}_{i,t}} h(q^j_-)$ for $i \in P_-$, $t \in T$, and $j = 1, \dots, r$ are given (and negative) constants. Hence, to obtain a valid upper bound on $\mu_j$, we first have to bound the other terms in the numerator appropriately. Since we divide by a negative number, a valid upper bound for $\mu_j$ is obtained by taking the smallest possible numerator. Due to $b_{i,t} < 0$, a lower bound for $a_{i,t} + b_{i,t} q^{\mathrm{nom}}_{i,t}$ is given by $L_1 = a_{i,t} + b_{i,t} \bar{q}^{\mathrm{nom}}_{i,t}$, where $\bar{q}^{\mathrm{nom}}_{i,t}$ is an upper bound for $q^{\mathrm{nom}}_{i,t}$. Taking the network constraints into account, we can set $\bar{q}^{\mathrm{nom}}_{i,t}$ to the technical capacity at the node $u \in V$ at which player $i \in P_-$ is located.

Let us now further assume that the overall booking price is bounded from above, i.e., $\pi^{\mathrm{book}}_u + \pi^{\mathrm{book}}_u \le L_2$ for some $L_2 > 0$ and all $i \in P_u$, $u \in V_+ \cup V_-$. Thus, with $\gamma^+_{i,t} \ge 0$ and (36a), we can also bound all $\gamma^+_{i,t}$ from above by $L_2$. By using (36c), we then obtain a constant $L_3$ that is a valid upper bound for the resulting market equilibrium price $\pi^{\mathrm{nom}}_t$. The last term in the numerator, $-\sum_{k \in \{1,\dots,r\}\setminus\{j\}} \nabla_{q^{\mathrm{nom}}_{i,t}} h(q^k_-)\, \mu_k$, becomes as small as possible for $\mu_k = 0$ for all $k \in \{1,\dots,r\}\setminus\{j\}$ since $\nabla_{q^{\mathrm{nom}}_{i,t}} h(q^k_-) < 0$. Putting all of this together, we see that $\mu_j \le L^j$ with
$$L^j = \frac{L_1 - L_2 - L_3}{\nabla_{q^{\mathrm{nom}}_{i,t}} h(q^j_-)}$$
holds. Note that we can, in principle, carry out this derivation for every $i \in P_-$ and $t \in T$ to obtain the tightest possible upper bound $L^j$.
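The bounding argument can be sanity-checked numerically on random data (hypothetical numbers, not the model's): build a stationarity relation of the above form with negative gradients, solve it for one multiplier, and verify that the result never exceeds (L1 - L2 - L3) divided by the corresponding (negative) gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

for _ in range(1000):
    r = 4                                  # number of multipliers
    grad = -rng.uniform(0.5, 3.0, size=r)  # gradients of h: all negative
    mu = rng.uniform(0.0, 2.0, size=r)     # nonnegative multipliers mu_k
    a, b = 2.0, -0.5                       # intercept and negative slope
    q = rng.uniform(0.0, 6.0)              # nomination, bounded by 6
    gamma = rng.uniform(0.0, 1.5)          # capacity multiplier, <= L2
    pi = rng.uniform(0.0, 3.5)             # equilibrium price, <= L3

    # Solve the stationarity relation for mu_j with j = 0:
    # numerator = a + b*q - gamma - pi - sum_{k != j} grad_k * mu_k.
    numerator = a + b * q - gamma - pi - grad[1:] @ mu[1:]
    mu_j = numerator / grad[0]

    # Bound ingredients: L1 lower-bounds a + b*q (since b < 0, q <= 6),
    # L2 upper-bounds gamma, L3 upper-bounds pi. Dividing the smallest
    # possible numerator by the negative gradient yields the upper bound.
    L1, L2, L3 = a + b * 6.0, 1.5, 3.5
    bound = (L1 - L2 - L3) / grad[0]

    # mu_j from random data may be negative; the bound holds regardless.
    assert mu_j <= bound + 1e-12
```

The check passes for all random draws, since the numerator is at least L1 - L2 - L3 (every dropped term only increases it) and dividing by a negative gradient reverses the inequality, which is exactly the argument used in the derivation above.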