Abstract
Optimization problems under uncertain conditions abound in many real-life applications. Solution approaches based on probabilistic constraints are typically developed when the uncertainties can be assumed to follow a certain probability distribution, whereas robust approaches are applied when solutions are sought that are feasible for all realizations of uncertainties within some predefined uncertainty set. As many applications contain different types of uncertainties that require robust as well as probabilistic treatments, we deal with a class of joint probabilistic/robust constraints. Focusing on complex uncertain gas network optimization problems, we show the relevance of this class of problems for the task of maximizing free booked capacities in an algebraic model for a stationary gas network. We furthermore present approaches for finding their solution. Finally, we study the problem of controlling a transient system that is governed by the wave equation. The task consists in determining controls such that a certain robustness measure remains below some given upper bound with high probability.
1 Introduction
1.1 Joint Probabilistic/Robust Constraints
Decision making problems are more often than not affected by parameter uncertainty. An optimization problem dealing with uncertain variables has the typical form
Here \(x\in \mathbb {R}^{n}\) is a decision vector, \(z\in \mathbb {R}^{m}\) is an uncertain parameter, \(g_{0}\colon \mathbb {R}^{n} \to \mathbb {R}\) is the objective function and \(g\colon \mathbb {R}^{n}\times \mathbb {R}^{m}\rightarrow \mathbb {R}^{k}\) is the constraint mapping. Decision support schemes with nondeterministic parameters have to take into account the nature and source of uncertainty while balancing the objective and the constraints of the problem. There are two main approaches for dealing with uncertainty in the constraints of an optimization problem. The first one is the use of probabilistic constraints. This approach is based on the assumption that historical data are available to estimate the probability distribution of the uncertain parameter, which can thus be considered a random vector ξ taking values in \(\mathbb {R}^{m}\). Then (1) may be given the form
(note that the first ‘≥’ sign is to be understood componentwise). Here, a decision x is declared to be feasible if and only if the original random inequality system (1) is satisfied with a prescribed probability p, a level usually chosen close to, but not identical to, one. Of course, higher values of p lead to a smaller set of feasible decisions x in (2), hence to optimal solutions at higher costs. Concerning the random variable ξ, we essentially focus on continuous ones. For a standard reference on optimization problems with probabilistic constraints we refer to the monograph [22].
An alternative approach is given by robust optimization. It applies when the uncertain parameter u is completely unknown or nonstochastic due to a lack of available data. Then, satisfaction of the uncertain inequality system (1) is required for every realization of the uncertain parameter within some uncertainty set \(\mathcal {U}\subseteq \mathbb {R}^{m}\) containing infinitely many elements in general, so that one arrives at the following optimization problem:
For a basic monograph on robust optimization, we refer to [3].
We notice that both optimization models with probabilistic and robust constraints are deterministic reformulations of (1), since they depend only on the decision vector x but no longer on the outcome of the uncertain parameter z.
A central issue in robust optimization is the appropriate choice of the uncertainty set \(\mathcal {U}\). Simply shaped sets like polyhedra or ellipsoids ensure computational tractability [2] and allow one to deal with much larger dimensions than in the case of probabilistic constraints. Moreover, when choosing \(\mathcal {U}\) such that \(\mathbb {P}(\xi \in \mathcal {U})=p\), the feasible set of decision variables x of (3) is contained in that of probabilistic programming (2), so that an optimal solution to (3) is a feasible solution to (2). For these two reasons, robust optimization is preferred not only in the absence of statistical data, but also as a conservative approximation of the probabilistic programming setting. This conservatism, however, may be significant, up to the point of ending up with very small or even empty feasible sets possibly coming at much higher costs than under a probabilistic constraint. This trade-off propels the use of probabilistic constraints in the presence of statistical information, at least in moderate dimensions.
Although these two approaches, probabilistic programming and robust optimization, are often dealt with separately, in many applications one is faced with uncertain variables of both mentioned types. This leads us naturally to the consideration of uncertain inequalities (2) in which the uncertain variable has a stochastic and a nonstochastic part, i.e., z = (ξ, u). A typical example is the optimization of gas transport in the presence of uncertain loads. In many situations, historical data are available (stochastic uncertainty) for the exit loads. On the other hand, observations can hardly be accessed (nonstochastic uncertainty) for entry loads or, for example, for uncertain roughness coefficients in pipes. Therefore, it seems natural to combine the originally separate models (2) and (3). An appropriate way to do so is to formulate a probabilistic constraint (w.r.t. ξ) involving a robustified (w.r.t. u) infinite inequality system:
This class of constraints has been recently investigated in [23] under the name of hybrid robust/chance-constraints and in [10] under the name of probabilistic/robust constraints. For easier reference, we shall use in the following the natural acronym of probust constraints. We note that even the more complex situation of the uncertainty set depending on the decision and random variable plays an increasing role in applications. Here, the constraint becomes
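To make the probust structure concrete, the following sketch estimates the left-hand side of such a constraint by Monte Carlo sampling for a toy inequality g(x, ξ, u) = x − ξ − u with ξ ~ N(0,1) and uncertainty set \(\mathcal {U}=[0,1]\). The function name and toy data are our own illustration, not part of the models above; since g is decreasing in u, the inner ‘for all’ reduces to the worst case u = 1.

```python
import random


def probust_prob(x, n_samples=100_000, seed=0):
    """Monte Carlo estimate of P( x - xi - u >= 0 for all u in [0, 1] )
    with xi ~ N(0, 1).  Because the toy constraint is decreasing in u,
    the inner robustification reduces to the single worst case u = 1."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if x - rng.gauss(0.0, 1.0) - 1.0 >= 0.0)
    return hits / n_samples
```

For x = 1 the exact value is Φ(0) = 0.5; comparing the estimate against a prescribed level p then mimics the feasibility test for a decision x.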
where the inner part resembles constraint sets arising in generalized semi-infinite optimization [25].
We note that yet another form of combining the probabilistic and robust parts of the inequality system would result from interchanging their arrangements in (4):
In this way one does not arrive at a probabilistic constraint involving infinitely many random inequalities as in (4) but rather at an infinite system of probabilistic constraints. This setting is related to (robust) first-order stochastic dominance constraints [6] and to distributionally robust probabilistic constraints [26], which currently receive increased attention. We will not deal with this model here but rather focus our considerations on (4) and (5), respectively, which impose stronger conditions in the sense of joint probabilistic constraints compared to individual ones.
The aim of this paper is to illustrate the application of this new class of probust constraints to optimization problems in gas transport under uncertainty in the exit and entry loads. Uncertainty in the roughness coefficients of the pipes is not part of this paper; it has been analyzed, for example, in [1]. For a recent monograph on gas transport optimization we refer to [18]. We will first consider the problem of maximizing free booked capacities in an algebraic model for a stationary gas network. The corresponding model is presented in Section 2. This overall problem is too complex, however, to be dealt with in this paper. Therefore we will split it into two subproblems, namely capacity maximization at exits (consumer side), which is discussed in Section 3, and capacity maximization at entries (provider side), which is analyzed in Section 4. Without loss of generality, we follow the convention of entry loads being nonpositive and exit loads being nonnegative.
While research in optimization in finite-dimensional spaces (including mixed-integer nonlinear optimization) and in PDE-constrained optimization often takes place within disjoint communities, we think that it is important to identify common problem structures that occur in both classes of problems, in particular in the development of methods that allow one to take uncertainties into account. Therefore, in Section 5 we discuss an optimization problem with a probust constraint for a system that is governed by a PDE related to the application in gas transport. The PDE model allows us to consider a transient system in Section 5, whereas in the first two parts of the paper, stationary problems are studied. The transient system is governed by the wave equation under probabilistic initial and Dirichlet boundary data at one end of the space interval. At the other end of the space interval, Neumann velocity feedback is active. For the system a desired stationary state is given. The robustness of the system is measured by the \(L^{\infty }\)-norm of the difference between the actual state \(\tilde {v}\) and the desired state \(\bar {v}\). Due to the definition of the \(L^{\infty }\)-norm (as the essential supremum of the absolute value), this approach gives information about the maximal pointwise distance in space and time. Since our solutions are in fact continuous for appropriate initial and boundary data, the \(L^{\infty }\)-norm is equal to the maximum norm. The robustness requirement is that this pointwise distance remains below a given upper bound v_{max}. In our system, the state depends on uncertain initial and boundary data with a given probability distribution. The meaning of the probabilistic constraint is the following: the probability that the robustness requirement is satisfied is sufficiently large, i.e., greater than or equal to a given value p ∈ (0,1]. This probabilistic constraint can be written in the form of (4); for details see Section 5, (36).
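The meaning of this probabilistic robustness requirement can be sketched by a small Monte Carlo estimate: sample the uncertain data, evaluate the maximal pointwise deviation from the desired state, and count how often it stays below v_{max}. The wave-equation solver is replaced here by a trivial stand-in (a single “grid point” with Gaussian deviation); names and data are assumed purely for illustration.

```python
import random


def prob_robustness(simulate, v_max, n_samples=20_000, seed=0):
    """Estimate P( max |v_tilde - v_bar| <= v_max ) by sampling the
    uncertain initial/boundary data; `simulate(rng)` stands in for a
    wave-equation solver and returns pointwise deviations on a grid."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if max(simulate(rng)) <= v_max)
    return hits / n_samples


# toy stand-in: one grid point whose deviation is |N(0, 1)|, so the
# exact probability for v_max = 1.96 is about 0.95
p = prob_robustness(lambda rng: [abs(rng.gauss(0.0, 1.0))], 1.96)
```

The probabilistic constraint then simply requires that this estimated probability exceeds the prescribed level p.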
As a numerical example, we consider the optimization with respect to the feedback parameter in Subsection 5.3.
2 Maximization of Free Capacities in a Stationary Network
We consider a passive stationary gas network given by a directed graph \(\mathcal {G}=(V,E)\) with a set V of vertices and a set E of edges. We shall assume that the set of nodes decomposes into a set V_{+} of entries, where gas is injected, and a set V_{−} of exits, where gas is withdrawn. Hence, V = V_{+} ∪ V_{−} and V_{+} ∩ V_{−} = ∅. Without loss of generality we label the nodes in a way that entries come first and exits last. The gas transport along the network is triggered by bilateral delivery contracts between traders who inject gas at entries and traders covering customer demands by withdrawing gas at exits. An amount of gas injected into or withdrawn from the network at certain nodes is called a nomination. We shall refer to b ≤ 0 and ξ ≥ 0 as the vectors of nominations at entries and exits, respectively.
Nominations have to satisfy three conditions:

1. At each node (entry or exit) of the network, nominations must not exceed the capacity booked for that node by the respective trader.

2. Nominations must be balanced over the whole network, i.e., the sum of nominations at entries equals the sum of nominations at exits.

3. Nominations must be technically feasible in the sense that there exist pressures within given bounds at the nodes and a flow through the network such that the nominations at the exits can be served by the nominations at the entries.
The first condition has to be satisfied by the traders. Referring to C_{+}, C_{−} ≥ 0 as the vectors of booked capacities at entries and exits, respectively, it can be written as
where the intervals are to be understood in a multidimensional sense. The second condition is an automatic consequence of the collection of bilateral delivery contracts between entries and exits and can be written as
where 1_{+} and 1_{−} are vectors filled with entries 1 of the respective dimension of entries and exits.
The third condition of technical feasibility of some joint nomination vector (b, ξ) can be characterized by the existence of vectors q of flows along the edges of the network and π of pressure squares at the nodes satisfying the conditions
Here, \(\mathcal {A}\) is the incidence matrix of the network graph, Φ := diag((Φ_{e}))_{e∈E} is a diagonal matrix of roughness coefficients and the modulus sign for a vector has to be understood componentwise. The first two equations in (8) correspond to Kirchhoff’s first and second laws (mass conservation and pressure drop), whereas the interval condition imposes bounds on the pressure. It is actually these bounds that constrain the feasibility of nominations b, ξ; see, e.g., [18].
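On a tree rooted at a single entry, the system (8) can be resolved explicitly: the flow on each edge equals the total load of the subtree behind it, and the pressure squares propagate from the root via the pressure drop relation. The following sketch checks feasibility of an exit load vector under these assumptions; the data layout and names are our own, not taken from [18].

```python
def feasible_on_tree(parent, loads, phi, pi_root, pi_bounds):
    """Feasibility check for exit loads on a tree network rooted at
    node 0 (the single entry).  `parent` maps each non-root node to its
    parent, `phi` holds the roughness coefficient of the edge into each
    node, and pi_* are pressure squares."""
    children = {}
    for node, par in parent.items():
        children.setdefault(par, []).append(node)

    def subtree_load(node):
        # flow on the edge into `node` = total load of its subtree
        return loads.get(node, 0.0) + sum(subtree_load(c)
                                          for c in children.get(node, []))

    pi = {0: pi_root}

    def propagate(node):
        # pressure drop along each edge: pi_head = pi_tail - phi*q*|q|
        for c in children.get(node, []):
            q = subtree_load(c)
            pi[c] = pi[node] - phi[c] * q * abs(q)
            propagate(c)

    propagate(0)
    return all(lo <= pi[k] <= hi for k, (lo, hi) in pi_bounds.items())


# toy path network 0 -> 1 -> 2 with unit loads and unit roughness
ok = feasible_on_tree({1: 0, 2: 1}, {1: 1.0, 2: 1.0}, {1: 1.0, 2: 1.0},
                      pi_root=10.0,
                      pi_bounds={1: (0.0, 10.0), 2: (0.0, 10.0)})
```

In the toy example, the flow into node 1 is 2, giving pressure squares 10, 6 and 5 along the path, all within the bounds; tightening the lower bound at node 2 above 5 would render the same load vector infeasible.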
It is the network owners’ responsibility to make sure—without knowledge of concrete bilateral delivery contracts between entries and exits—that condition 3. is satisfied for all nominations fulfilling conditions 1. and 2. This requirement clearly imposes a constraint on the booked capacities C_{+}, C_{−} via (6) saying for all (b, ξ) satisfying (6), (7) there exists (q, π) satisfying (8). It can be formally written as:
Satisfying (9) in a rigorous way would yield (too) small booking capacities at the nodes of the network. Here, the network owner can benefit from the fact that nominations at the exits (gas withdrawn for consumption) are endowed with a typically large historical data base, so that they can be modeled as random vectors obeying some appropriate multivariate distribution. This offers the possibility to relax the ‘for all’ condition on ξ in a probabilistic sense so as to satisfy (9) with sufficiently high probability p. In this way, by choosing p close to one, it is possible to keep a robust satisfaction of technical feasibility while allowing for considerably larger booked capacities. A similar probabilistic modeling of entry nominations would not be justified (although historical data might be available here too) because their outcome is market driven and thus not a genuine random vector.
This motivates the network owner to relax the worst case condition in a probabilistic sense on the side of exits but keeping it on the side of entries. He then arrives at the following probabilistic formulation of feasible booked capacities C_{+}, C_{−}:
Here, \(\mathbb {P}\) refers to a probability measure associated with the random vector ξ and p ∈ (0,1] is a desired probability level chosen by the network owner. The expression on the left-hand side of this inequality provides the probability that a random exit nomination (within booked capacity) combined with an arbitrary entry nomination (within booked capacity and in balance with the exit nomination) is technically feasible.
Now, for a given capacity vector (C_{+}, C_{−}) it may turn out that the associated probability on the lefthand side of (10) is larger than the desired minimum probability p, e.g., 0.96 vs. 0.9. This would give the network owner the opportunity to offer larger capacities while still keeping the desired probability p. Therefore, he might be led to determine the largest possible additional capacities (x_{+}, x_{−}) he could offer for booking by new clients. This would lead to the following optimization problem:
Here, the weight vector w in the objective reflects some preferences the network owner could have in order to offer new booking capacities at different nodes. In the absence of preferences, he could simply choose w := 1. Note that the nomination vector at exits has been split into ξ and y, where ξ refers to the nominations of already existing clients (thus endowed with historical data and amenable to stochastic modeling), while y refers to nominations of potentially new clients without nomination history. As these can therefore not be treated stochastically, they are considered with a ‘for all’ requirement similar to entry nominations. No such splitting is necessary on the side of entries because nominations there have to be considered with a ‘for all’ requirement anyway, as they cannot be modeled stochastically in a straightforward manner. In the following section, we shall address in detail the capacity maximization problem for exits only, a restriction which allows us to solve numerically the entire arising optimization problem subject to probust constraints. In contrast, Section 4 will focus on entries only and discuss essential issues related to the solution of this alternatively restricted optimization problem.
3 Maximization of Booked Capacities for Exits in a Stationary Gas Network
As mentioned in the introduction, the overall problem of capacity maximization (11) is too complex to be dealt with here. Therefore, we shall focus in a first step on maximizing capacities at exits.
3.1 The Capacity Maximization Problem for Several Exits and One Entry
In the following we will make the assumption that the network is a tree and that there exists only one entry point serving m exits. The presence of cycles in the network would significantly complicate the numerical solution of the problem. Nonetheless, in Section 3.4, we sketch a possible methodology in the presence of cycles. The restriction to a single entry is made here in order not to deal with the robust uncertainty related to the splitting of nominations among several entry nodes (see the ‘∀b ∈...’ condition in (11)), which will be considered separately in Section 4. Without loss of generality, we define the entry to be the root of the network, labeled by index ‘0’. For simplicity, we assume moreover that the booked capacity C_{+} of the entry is large enough to meet the maximum possible load by all exits as well as possible extensions thereof after adding additional capacity at the exits as a result of an optimization problem. As a consequence of this constellation, our general capacity maximization problem (11) reduces to an exit capacity maximization problem of the form
Here, the remaining decision variables are just the extensions of exit capacities. Since no capacity extension for the single entry is intended and since its existing capacity is not constrained by our assumption, the corresponding constraint disappears, as does the balance equation, which is just substituted into the description of technical feasibility. The resulting optimization problem no longer contains entry nominations at all but only random exit nominations ξ and deterministic exit nominations y of new clients, along with the additionally allocated booking capacities x_{−}.
Clearly, the probabilistic constraint in (12) is not yet in the explicit form of the probust constraint (4). This can be achieved in our case thanks to the network being a tree having the single entry as its root. Note that by this special structure the direction of the gas flow is completely determined. Moreover, by directing all edges in E away from the root, according to [8], a vector z of exit loads in this configuration is technically feasible if and only if, in the notation introduced above, the inequality system
is satisfied, where
In order to explain the definition of the functions h_{k} above, we denote k ≽ l for k, l ∈ V if the unique directed path from the root to k, denoted π(k), passes through l. The term H(e) refers to the head of the (directed) arc e ∈ E.
With these specifications, which are fully explicit in terms of the initial data of the problem, we reformulate problem (12) with the aid of inequalities (13) as
The meaning of this constraint is as follows: the capacity extension x_{−} is feasible if and only if, with probability at least p ∈ (0,1], the sum ξ + y of the original random nomination vector and of a new nomination vector can be technically realized for every such new nomination vector within the limits [0,x_{−}]. Clearly, the probust constraint (15) is of the form (5), with u := y, x := x_{−} and the uncertainty set \(\mathcal {U}(x):=[0,x]\).
In [16] it is shown that the infinite random inequality system
inside (15) can be reduced—using (13) and (14)—to the following finite one
For the random vector ξ of stochastic exit nominations we will suppose that it follows a truncated multivariate Gaussian distribution:
More precisely, the distribution of ξ is obtained by truncating an mdimensional Gaussian distribution with mean μ and covariance matrix Σ to an mdimensional rectangle [0,C_{−}] representing the (historical) booked capacity at all exit nodes. By definition of truncation, this means that there exists a Gaussian random vector \(\tilde {\xi }\sim \mathcal {N}(\mu ,{\Sigma })\) such that
holds true for all Borel measurable subsets \(A\subseteq \mathbb {R}^{m}\). Hence, in order to determine probabilities under a truncated Gaussian distribution, it is sufficient to be able to determine probabilities under a Gaussian distribution itself. Applying this observation to the probabilistic constraint (15) and combining it with (16) yields the equivalent description
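In one dimension, this truncation identity can be sketched directly with the Gaussian distribution function. The function names below are our own, and the setting is deliberately univariate, whereas the paper works with a multivariate truncated Gaussian.

```python
import math


def norm_cdf(x, mu=0.0, sigma=1.0):
    """Gaussian distribution function via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))


def truncated_prob(a, b, mu, sigma, lo, hi):
    """P(xi in [a, b]) for xi ~ N(mu, sigma^2) truncated to [lo, hi]:
    Gaussian mass of the intersection, renormalized by the mass of the
    truncation window -- the one-dimensional analogue of the identity
    above."""
    aa, bb = max(a, lo), min(b, hi)
    if aa >= bb:
        return 0.0
    num = norm_cdf(bb, mu, sigma) - norm_cdf(aa, mu, sigma)
    den = norm_cdf(hi, mu, sigma) - norm_cdf(lo, mu, sigma)
    return num / den
```

For instance, with mu = 1 and the window [0, 2], symmetry gives probability 1/2 to the interval [0, 1], and the whole window has probability 1, as the renormalization requires.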
This is now, in contrast to (15), a conventional probabilistic constraint over a finite inequality system. We aim to apply an efficient gradient-based solution algorithm. To this end, in order to deal algorithmically with the probabilistic constraint (17), one evidently has to be able to calculate, for each fixed decision vector x_{−}, the probabilities occurring there, as well as their derivatives with respect to x_{−}. In the following section we briefly sketch the methodology used here.
3.2 Spheric-Radial Decomposition of Gaussian Random Vectors
From the well-known spheric-radial decomposition (see, e.g., [7]) of a Gaussian random vector \(\tilde {\xi }\sim \mathcal {N}(\mu ,{\Sigma })\) it follows that the probability of an arbitrary Borel measurable subset M of \(\mathbb {R}^{m}\) may be represented as the following integral over the unit sphere \(\mathbb {S}^{m-1}\):
Here, μ_{χ} refers to the one-dimensional Chi-distribution with m degrees of freedom, μ_{η} is the uniform distribution on \(\mathbb {S}^{m-1}\) and
where P is a factor from a decomposition Σ = PP^{T} of the covariance matrix Σ. Following these remarks, the probability on the lefthand side of (17) (depending also on the decision variable x_{−}) can be represented as
where
and, with P_{t} denoting row number t of P, for k, l = 0,…,m:
In order to evaluate the integrand in (18), one has to be able to characterize (for each given \(v\in \mathbb {S}^{m-1}\) and \(x_{-}\in \mathbb {R}^{m}\)) the set E(v, x_{−}) and to determine its Chi probability. The latter task is easily accomplished whenever the set E(v, x_{−}) can be represented as a finite union of intervals, because there exist numerically highly precise approximations of the one-dimensional Chi distribution function.
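The outer spherical integration can be sketched for a two-dimensional standard Gaussian (μ = 0, P = I), where the Chi distribution with 2 degrees of freedom has the closed-form distribution function F(r) = 1 − exp(−r²/2). The callback `ray_intervals`, our own placeholder, plays the role of the interval representation of E(v, x_{−}); everything else is an assumed toy setup.

```python
import math
import random


def chi_cdf_2dof(r):
    # Chi distribution function with 2 degrees of freedom
    return 1.0 - math.exp(-0.5 * r * r) if r > 0.0 else 0.0


def sr_probability(ray_intervals, n_dirs=2000, seed=1):
    """Spheric-radial estimate of P(xi in M) for a 2-D standard
    Gaussian: average over directions v on the unit circle the
    Chi-measure of the one-dimensional set {r >= 0 : r*v in M},
    supplied by `ray_intervals(v)` as a list of (a, b) pairs."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_dirs):
        t = rng.uniform(0.0, 2.0 * math.pi)
        v = (math.cos(t), math.sin(t))
        total += sum(chi_cdf_2dof(b) - chi_cdf_2dof(a)
                     for a, b in ray_intervals(v))
    return total / n_dirs


# M = disk of radius 2: every ray meets M in [0, 2], so the estimate
# equals F(2) = 1 - exp(-2)
p_disk = sr_probability(lambda v: [(0.0, 2.0)])

# direction-dependent example: the half-plane {x : x[0] <= 1},
# whose exact probability is Phi(1) ~ 0.84
p_half = sr_probability(lambda v: [(0.0, 1.0 / v[0])] if v[0] > 0
                        else [(0.0, math.inf)])
```

The disk example is exact because the intervals do not depend on the direction; the half-plane example shows the genuinely direction-dependent case that arises for the sets E(v, x_{−}) above.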
Hence, we are left with the task of efficiently representing E(v, x_{−}) as a finite union of intervals. This is easily done for the first set in the intersection providing E(v, x_{−}) in (19) which can be shown either to be empty or an interval:
As for the second part of the intersection in (19), we will provide for each k, l an explicit representation of the set E^{k, l}(v, x) either as a single interval or as the disjoint union of two intervals, such that the intersection of all these sets (and the first set determined above) is readily obtained in the form of a finite union of disjoint intervals. Indeed, upon developing the expressions in (20) in terms of r, one arrives at the representation
where, for k, l = 0,…,m:
This leads, by case distinction and elementary calculus, to the following explicit representation of E^{k, l}(v, x_{−}) for k, l = 0,…,m:
where the case study is done according to
Along with (21) we may use this explicit description in order to efficiently represent the set E(v, x_{−}) in (19) as the desired finite union of intervals by determining the finite intersection of sets which are intervals or disjoint unions of intervals.
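The underlying one-dimensional computation is the solution set of a quadratic inequality in r over the nonnegative ray. The following is a hedged sketch of the case distinction for generic coefficients α, β, γ, not the specific coefficients from (20):

```python
import math


def quad_ge_zero_nonneg(alpha, beta, gamma, r_max=math.inf):
    """Solution set {r in [0, r_max] : gamma*r^2 + beta*r + alpha >= 0}
    as a list of disjoint intervals -- a single interval, a union of
    two, the whole range, or empty, depending on signs and roots."""
    eps = 1e-12
    if abs(gamma) < eps:                          # linear (or constant) case
        if abs(beta) < eps:
            return [(0.0, r_max)] if alpha >= 0.0 else []
        root = -alpha / beta
        if beta > 0.0:
            lo = max(0.0, root)
            return [(lo, r_max)] if lo < r_max else []
        return [(0.0, min(root, r_max))] if root > 0.0 else []
    disc = beta * beta - 4.0 * alpha * gamma
    if disc < 0.0:                                # no real roots
        return [(0.0, r_max)] if gamma > 0.0 else []
    s = math.sqrt(disc)
    r1, r2 = sorted(((-beta - s) / (2.0 * gamma),
                     (-beta + s) / (2.0 * gamma)))
    if gamma > 0.0:                               # nonnegative outside (r1, r2)
        out = []
        if r1 > 0.0:
            out.append((0.0, min(r1, r_max)))
        if r2 < r_max:
            out.append((max(r2, 0.0), r_max))
        return out
    lo, hi = max(r1, 0.0), min(r2, r_max)         # gamma < 0: inside [r1, r2]
    return [(lo, hi)] if lo < hi else []
```

Intersecting finitely many such solution sets (one per pair k, l) then yields the desired finite union of disjoint intervals representing E(v, x_{−}).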
It is important to note that, at the same time, the partial derivatives of the probability with respect to the decision variable x_{−} can be calculated as a spherical integral of type (18) again, however with a different integrand which is easily obtained from the partial derivatives of the initial data [24]. In this gradient formula, the same disjoint union of intervals as in the computation of the probability itself is employed. The spherical integrals can be approximated by finite sums using Quasi-Monte Carlo sampling on the sphere (see, e.g., [4]). Then, for each sampled direction v on the sphere, one may first update the probability itself and then, simultaneously, the gradient of the probability with respect to x_{−} by using the same disjoint union of intervals in both cases. This approach makes the gradient come almost for free as far as computation time is concerned. Having access to values and gradients of the probabilistic constraint (17), one may set up an appropriate nonlinear optimization solver for solving (15). For the subsequent numerical results, we employed a simple projected gradient method.
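In its generic form, a projected gradient iteration takes a step along the negative gradient and projects back onto an easy feasible set. The sketch below, on a toy box-constrained objective, illustrates the scheme only; it is not the actual solver used for (15), whose feasible set is given by the probabilistic constraint.

```python
def projected_gradient(grad, project, x0, step=0.1, iters=500):
    """Generic projected gradient descent: x <- P(x - step * grad(x)),
    where `project` is the (assumed cheap) projection onto the
    feasible set."""
    x = x0
    for _ in range(iters):
        x = project(x - step * grad(x))
    return x


# toy usage: minimize (x - 2)^2 over the box [0, 1]; the iterates are
# drawn towards the unconstrained minimizer 2 and clipped to the box,
# so they settle at the boundary point x = 1
x_star = projected_gradient(lambda x: 2.0 * (x - 2.0),
                            lambda x: min(max(x, 0.0), 1.0),
                            x0=0.0)
```

The design choice is standard: as long as the projection is cheap (a box here), each iteration costs one gradient evaluation plus a clipping step.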
3.3 Numerical Results for an Example
As an illustrating example, similar to [16, Section 6], we considered a network as displayed in Fig. 1 with one entry (filled black circle) and 26 exits. The parameters (i.e., pressure bounds, roughness coefficients, truncated Gaussian distribution for the random nominations at exits) were chosen as modified quantities of a real network.
The applied gradient method cannot be guaranteed to find a globally optimal solution because the optimization problem is nonconvex in x_{−}. However, a stationary point can be computed that satisfies the probust constraint with high accuracy. We are able to control the quality of the accuracy via the Quasi-Monte Carlo sampling. In our computations, 10 000 samples turned out to allow for reasonably accurate results.
We did not assume any preferences in the allocation of new capacities; hence the weight vector in the objective of (15) was chosen as w_{−} := 1_{−}. The colored rings around exit points refer to the optimal cumulative capacities (historical + new), i.e., C_{−} + x_{−} after maximization, upon choosing probability levels p = 0.95, 0.9, 0.85, 0.8. The radii of the rings are proportional to the available capacities. It can be clearly seen how decreasing the probability level allows for increasing the allocated capacity in certain regions of the network.
Figure 2 illustrates how the computed solution for a probability level p = 0.8 works for two random exit nomination scenarios ξ simulated a posteriori according to the chosen truncated multivariate Gaussian distribution. The first scenario is feasible because one could add a common capacity to every exit (green color) in order to satisfy this scenario. In contrast, the second scenario is infeasible because one would have to reduce the capacities by an amount corresponding to the dark red rings, in order to satisfy this scenario. When simulating a large set of such scenarios, say 1000, it would turn out that according to the probability level p = 0.8 approximately 800 are feasible, while 200 are infeasible.
3.4 Methodology in the Presence of Cycles
It is generally acknowledged that the presence of cycles in gas networks is both realistic for applications and demanding for formal analysis. In what follows we elucidate this by calculating the probability of feasible nominations in a gas network with cycles. Networks with a single cycle or with multiple node-disjoint cycles are covered in [8], which essentially relies on explicit formulas for the roots of univariate real polynomials of degree less than 5.
If cycles in a gas network share edges, then the approach via the mentioned formulas is no longer valid, nor can it be extended to more general cases. A first alternative attempt in this respect has been undertaken recently in [9] for networks with up to three mutually interconnected cycles.
To display the state of the art in calculating, by spheric-radial decomposition, probabilities of sets of feasible nominations in gas networks with cycles, consider again
for every sample \(v_{1},\dots , v_{s}\) on the sphere. Analogously to the case of trees, the set E_{i} can be expressed as a finite union of disjoint intervals, \(E_{i} = \cup _{j=1}^{l} [a_{j}, b_{j}]\). To calculate its probability, it is sufficient to determine all points where the ray rPv_{i} + μ enters or exits the set of feasible load vectors M.
With cycles, the matrix \(\mathcal {A}\) decomposes into a basis part \(\mathcal {A}_{B}\) and a nonvacuous nonbasis part \(\mathcal {A}_{N}\) whose columns correspond to the fundamental cycles with respect to the tree behind \(\mathcal {A}_{B}\). Accordingly q, Φ are split into q_{B}, q_{N} and Φ_{B}, Φ_{N}.
In [8] it is shown that a given load (−1^{⊤}ξ, ξ) is feasible iff there exists a q_{N} such that
with the function \(g \colon \mathbb {R}^{V-1} \times \mathbb {R}^{N} \to \mathbb {R}^{V-1}\) where
Having in mind the sphericradial decomposition and the sets E_{i}, we insert ξ(r) = rPv_{i} + μ into the above characterization of feasible loads and reformulate the min, max expressions. Then E_{i} consists of all \(r\in \mathbb {R}_{\ge 0}\) for which there is a q_{N} such that
To decide, for a given sample point v_{i}, whether the ray rPv_{i} + μ enters or exits the set of feasible load vectors M and, in the affirmative case, to compute an entry or exit point, the following basic procedure is possible: augment one of the inequalities of the system (23–25) as an equation to (22), yielding a system of N + 1 degree-2 multivariate polynomial equations in N + 1 unknowns.
Notice that the above considerations hold for gas networks with arbitrary cardinality N of nonbasis columns in \(\mathcal {A}\). Of course, the number of augmentations, and hence the number of passes through some polynomial-equation solver, can become exorbitant.
A first attempt at solving systems of multivariate polynomials via computer algebra is reported in [9] for N ≤ 3. At the core of the method are Gröbner bases inducing “triangular” representations of the polynomial equations, allowing for “reverse propagation” of solutions, in the spirit of Gaussian elimination, with multivariate quadratic polynomials instead of linear forms.
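The “triangular representation plus reverse propagation” idea can be sketched on a toy lexicographic system, our own example and far simpler than the network polynomials: once the first equation involves only x, its roots are propagated into the second equation, which then becomes univariate in y.

```python
import math


def quad_roots(a, b, c):
    """Real roots of a*x^2 + b*x + c = 0, a != 0."""
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return []
    s = math.sqrt(disc)
    return sorted({(-b - s) / (2.0 * a), (-b + s) / (2.0 * a)})


# toy triangular system (as a lex Groebner basis would provide):
#   x^2 - 4     = 0
#   y^2 + x - 5 = 0    (univariate in y once x is fixed)
solutions = []
for x in quad_roots(1.0, 0.0, -4.0):          # x = -2, 2
    for y in quad_roots(1.0, 0.0, x - 5.0):   # y^2 = 5 - x
        solutions.append((x, y))
```

Here the back-substitution produces four real solutions, (±2, ±√(5 ∓ 2)); in the network setting the propagated polynomials are genuinely multivariate quadratics rather than pure squares.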
In contrast with gradient-type analytical solvers, algebraic solvers using symbolic computation usually detect infeasibility of the system under consideration much faster, which can be crucial, as the decision of (in)feasibility is one of the fundamental tasks in this context. These methods rely on iterating bases of ideals. Emptiness follows as soon as a constant polynomial arises in the current ideal basis.
There are a number of possible improvements, some of which are investigated in [9], that deserve further exploration: identifying and removing redundant inequalities in (23–25), studying the special structure of the system (22) and exploring the impact of “Comprehensive Gröbner Systems” that were developed for parametric polynomial equations; see [20].
4 Capacity Maximization Under Uncertain Loads and Uncertain Distribution of Entry Nominations
After discussing the methodology for the case of uncertain exit loads, we address the case of uncertain entry loads and fixed exit capacities, i.e., x_{−} = 0, and we only take extensions x_{+} of the entry capacities C_{+} into account. Thus, we consider the following optimization problem:
In other words, for a realization d of the random variable ξ and for an extension x_{+}, we want every entry nomination of the uncertainty set
to be feasible with probability p. The set of realizations of ξ for which every entry nomination is feasible for a given x_{+} is henceforth denoted as D(x_{+}):
Applying this notation, we can formulate the probust constraint of problem (26) more compactly:
We note that the ‘probust’ nature of the constraint is ‘hidden’ in the probability constraint that is expressed in D(x_{+}).
Analogously to Section 3, we assume that \(\xi \sim \mathcal {T}\mathcal {N}(\mu , {\Sigma }, [0,C_{}])\). Furthermore, we assume that the following mild condition holds:

(C1)
There is a bound y_{+} ≥ 0, such that μ ∈ D(x_{+}) for all x_{+} ∈ [0,y_{+}], i.e., the mean μ of ξ is a feasible exit booking nomination for all admissible x_{+}.
Since ξ is based on historical data, the feasibility of the mean for a capacity extension x_{+} is a natural assumption in practical applications, at least if the upper bound y_{+} is not too large. Moreover, since no infinite amount of capacity extension is available, such a bound y_{+} naturally exists.
In the following, we apply the spherical radial decomposition, see Section 3.2, to reformulate the probust constraint of problem (28) with an integral:
As before, m is the number of exit nodes, \(\mathbb {S}^{m-1}\) the unit sphere, μ_{χ} refers to the one-dimensional χ-distribution with m degrees of freedom, μ_{η} is the uniform distribution on \(\mathbb {S}^{m-1}\) and
where P is a factor from a decomposition Σ = PP^{T} of the covariance matrix Σ.
We aim to solve problem (29) numerically and therefore approximate the integral. We briefly summarize the method of Section 3.2 for our purposes: We sample N vectors \(v_{1}, \dots , v_{N}\) on the unit sphere \(\mathbb {S}^{m-1}\) and compute E(v_{i}, x_{+}) for each sampled vector v_{i}. Hence
In [8], it is pointed out that E(v_{i}, x_{+}) is a finite union of intervals:
and that in this case,
where F_{χ} is the distribution function of the respective χ-distribution. Now, for the numerical accessibility of (31), we make an additional assumption:

(C2)
For any x_{+} ≥ 0, D(x_{+}) is star-shaped with respect to μ.
Using (C2), it is immediately seen that E(v_{i}, x_{+}) = [a_{0}, b_{0}], i.e., E(v_{i}, x_{+}) is a simple interval. Thus, (31) becomes
Furthermore, due to condition (C1), we have 0 ∈ E(v_{i}, x_{+}), which implies a_{0} = 0. Thus, b_{0} is the length of the interval E(v_{i}, x_{+}), i.e., b_{0} = |E(v_{i}, x_{+})|. As there exist high-quality numerical approximations of F_{χ}, the value of μ_{χ}(E(v_{i}, x_{+})) can be computed whenever b_{0} is numerically accessible.
In summary, under conditions (C1) and (C2), approximating the integral in (30) for a given capacity extension x_{+} effectively reduces to sampling vectors v_{i} on the unit sphere and determining E(v_{i}, x_{+}).
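As a toy illustration of this sampling scheme, the following sketch approximates the probability that a two-dimensional standard Gaussian lands in a box via the spherical radial decomposition: each sampled direction v contributes F_{χ}(b_{0}) with b_{0} the length of the feasible ray segment, exactly as for E(v_{i}, x_{+}) above. The box and all function names are illustrative stand-ins (for m = 2, F_{χ} has the closed form 1 − exp(−r²/2)).

```python
import math
import random

def chi_cdf_2dof(r):
    """CDF of the chi distribution with 2 degrees of freedom (Rayleigh)."""
    return 1.0 - math.exp(-0.5 * r * r)

def ray_length_box(v, a, b):
    """Length b0 of the segment {r*v : r >= 0} inside the box [-a,a] x [-b,b]."""
    rx = a / abs(v[0]) if v[0] != 0.0 else math.inf
    ry = b / abs(v[1]) if v[1] != 0.0 else math.inf
    return min(rx, ry)

def srd_probability(a, b, n_dirs=20000, seed=0):
    """Approximate P(xi in box) for xi ~ N(0, I_2) by averaging F_chi(b0)
    over sampled directions on the unit sphere (spherical radial decomposition)."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n_dirs):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        v = (math.cos(theta), math.sin(theta))
        acc += chi_cdf_2dof(ray_length_box(v, a, b))
    return acc / n_dirs
```

For a = 1, b = 2 the estimate agrees with the exact product (2Φ(1) − 1)(2Φ(2) − 1) ≈ 0.652 up to sampling error; in the application above, `ray_length_box` is replaced by the (much more expensive) computation of |E(v_{i}, x_{+})|.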
Before we continue, we briefly discuss the role of (C2). First of all, it is important to state that without (C2) it is not clear how (30) can be evaluated numerically. Second, to the best of our knowledge, there is no applicable criterion for testing (C2), so in practice we need to assume that it holds. Fortunately, this is not as bad as it sounds at first glance: we are able to show that, in case E(v_{i}, x_{+}) consisted of more than one interval, our algorithm would still correctly approximate the length of the first interval, i.e., the interval with lower bound 0. Due to the simple estimate
this would mean that, instead of approximating the length |E(v, x_{+})|, we would compute a valid lower bound for it. As a consequence, the computed result would be feasible for the optimization problem (29) and thus a (potentially conservative) valid underestimator of the true maximum.
In the remainder of this section, we present and discuss an algorithm for approximating E(v_{i}, x_{+}) for any sampled v_{i}. In particular, we apply a binary search: In every iteration, it is decided whether a given r ≥ 0 is an element of E(v_{i}, x_{+}). This decision is made by solving a nonlinear optimization problem (NLP) which checks, for all \(b \in \mathcal {U}(\mu + rPv,x_{+})\), whether the henceforth so-called pressure flow solution (π, q) fulfills (8). We prove the correctness of this decision procedure under the following assumption:

(C3)
There is a node j ∈ V with fixed pressure, i.e., \(\pi _{j,\ast } = \pi _{j}^{\ast }\).
Condition (C3) implies that there exists exactly one solution (π, q) which fulfills \(\mathcal {A}q = (b, \xi )^{T}\) and \(\mathcal {A}^{T} \pi = {\Phi } q|q|\), see for example Theorem 7.2 of [18]. As we will see later, the uniqueness of the pressure flow solution is crucial for the correctness of the presented algorithm.
In order to ensure that all potential violations of pressure bounds are detected, globally optimal solutions are required. To achieve this, we relax the NLP to a mixed-integer linear problem (MIP) by interpolating the nonlinearities with linear splines and modeling these splines through linear constraints and additional binary and continuous variables. The resulting MIP can be solved to global optimality with off-the-shelf software. The effects of the linearization are pointed out in the remainder of the section.
We would like to point out that, like condition (C1), condition (C3) is not a critical assumption in reality. Since gas is injected at some entry node, we can assume that the pressure at this node is known.
We conclude this section with the presentation of computational results that show the effectiveness of our method.
4.1 Methodology for General Stationary Networks
As discussed beforehand, approximating the integral under conditions (C1) and (C2) can be reduced to deciding whether r ∈ E(v, x_{+}) for a real number r ≥ 0 and a sampled vector \(v \in \mathbb {S}^{m-1}\). In other words, we need to check whether μ + rPv ∈ D(x_{+}), i.e., whether μ + rPv is robust feasible:
Definition 1
Let d be a realization of the random variable ξ and let x_{+} be an entry capacity extension. If d ∈ D(x_{+}), d is called robust feasible. The problem of deciding d ∈ D(x_{+}) is denoted as DecProb(d, x_{+}). Analogously, for a sampled vector \(v \in \mathbb {S}^{m-1}\), the real number r ≥ 0 is called robust feasible for v, if μ + rPv ∈ D(x_{+}), i.e., r ∈ E(v, x_{+}). The problem of deciding the robust feasibility of a number r for a vector v is denoted as DecProb(r, v, x_{+}).
In the special case of unbounded pressure at every node, DecProb(d, x_{+}) would be answered positively for every feasible realization d and every extension x_{+} (see [18], Theorem 7.1). Although in real gas network operations and in our setting the pressures are bounded, we can use the following: We introduce penalty functions for every u ∈ V, in formulas,
If f_{u}(π_{u}) > 0 for a node u ∈ V, π_{u} lies outside its bounds. Consequently, π ∈ [π_{∗}, π^{∗}] if and only if
Now consider a balanced nomination (b, d)^{T} and the optimization problem
Since the pressure is unbounded, there exists a pressure flow solution (π, q), and since condition (C3) holds, it is unique, so the feasible set of problem (33) is a singleton. Consequently, (b, d) is a realizable nomination if the optimal value of problem (33) is 0, i.e., (32) holds.
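The penalty-based feasibility test just described can be sketched as follows. The exact penalty formula from the (elided) display is not reproduced here; the hinge penalty below is one common choice with the stated property that f_{u}(π_{u}) > 0 exactly when π_{u} violates its bounds, and the function names are illustrative.

```python
def pressure_penalty(pi_u, pi_lo, pi_hi):
    """Hinge penalty f_u: strictly positive iff pi_u lies outside [pi_lo, pi_hi]."""
    return max(pi_lo - pi_u, pi_u - pi_hi, 0.0)

def pressures_within_bounds(pi, pi_lo, pi_hi):
    """pi lies componentwise within its bounds iff the summed penalty vanishes,
    mirroring the equivalence between (32) and feasibility of the pressures."""
    return sum(pressure_penalty(p, lo, hi)
               for p, lo, hi in zip(pi, pi_lo, pi_hi)) == 0.0
```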
However, for a given sampled vector v and a real number r, we want to check whether DecProb(r, v, x_{+}) is answered positively, i.e., if for any b which results in a balanced nomination (μ + rPv, b), there exists a pressure flow solution which satisfies (8). Therefore, we modify (33):
The feasible set of Pen(r, v, x_{+}) consists of the vectors (b, π, q) for which (μ + rPv, b) is a balanced nomination and for which there exists a pressure flow solution which satisfies (8) but could violate the pressure bounds.
In particular, we will show in the following theorem that r is robust feasible for v if and only if the optimal value of problem Pen(r, v, x_{+}) is 0.
Theorem 1
Let \(v \in \mathbb {S}^{m-1}\), r ≥ 0 and let x_{+} ≥ 0 be a capacity extension at the entries. Assume that condition (C3) holds. Then DecProb(r, v, x_{+}) is answered positively if and only if problem Pen(r, v, x_{+}) is solvable with optimal value 0.
Proof
Since f_{u}(π_{u}) is nonnegative for all nodes u ∈ V, the optimal value of Pen(r, v, x_{+}) is at least zero.
Assume on the one hand that DecProb(r, v, x_{+}) is answered positively, i.e., μ + rPv is robust feasible. Now consider a solution (b, π, q) which is feasible for problem Pen(r, v, x_{+}). If the objective value of (b, π, q) were strictly positive, there would exist a node j ∈ V with \(\pi _{j} \notin [\pi _{\ast ,j}, \pi ^{\ast }_{j}]\). Since the pressure flow solution is unique (due to condition (C3)), this contradicts the robust feasibility of μ + rPv. Therefore, the objective value is 0. Since this applies to every feasible solution, the optimal value of Pen(r, v, x_{+}) is 0.
On the other hand, assume that the optimal value is 0. Since the (nonnegative) penalty objective is maximized, an optimal value of 0 implies that for all feasible solutions (b, π, q), π lies within its prescribed bounds. In other words, for every \(b \in \mathcal {U}(\mu + rPv,x_{+})\), the unique pressure flow solution (π, q) satisfies (8). This implies the robust feasibility of (μ + rPv, x_{+}), i.e., DecProb(r, v, x_{+}) is answered positively. □
We note that condition (C3) is crucial in the proof of Theorem 1. Without pressure bounds, the projection of the pressure flow solutions to the squared pressure component has, for a fixed \(\overline {\pi }\), the form
see Theorem 7.2 of [18]. Hence, unless one pressure is fixed, the pressure values are not necessarily unique and problem Pen(r, v, x_{+}) can be unbounded although r is robust feasible.
As already discussed, we need to determine the length of the interval E(v, x_{+}) and as a consequence of Theorem 1, problem Pen(r, v, x_{+}) can be used to determine the length. In particular, by applying a binary search, which solves the subproblem Pen(r, v, x_{+}) in every iteration, we can determine the length of E(v, x_{+}).
A binary search algorithm requires a lower and an upper bound. Since \(0 \in E(v,x_{+}) \subset \mathbb {R}_{\geq 0}\) (condition (C1)), we choose 0 as a lower bound. A trivial upper bound for E(v, x_{+}) is given by the exit capacities
However, we can even give a tighter bound. Due to (8), the pressure drop constraint
holds for every arc (i, j) ∈ E. Since the pressures are bounded and Φ_{i, j} > 0 for every arc (i, j), we can derive flow bounds for every arc which do not depend on the actual nomination. In the following, these flow bounds, which are called implicit flow bounds and are denoted by q_{∗} and q^{∗}, are exploited to determine a tighter upper bound for E(v, x_{+}):
Lemma 1
Let \(v \in \mathbb {S}^{m-1}\), let q_{∗} and q^{∗} be the implicit flow bounds, and let x_{+} ≥ 0 be a capacity extension at the entry nodes. For a node u ∈ V, let δ^{−}(u) denote the set of incoming arcs and δ^{+}(u) the set of outgoing arcs. Then an upper bound for E(v, x_{+}) is given by the optimal value of the optimization problem
Proof
Since we are interested in an upper bound for E(v, x_{+}), 0 ≤ μ + rPv ≤ C_{−} and r ≥ 0 are satisfied. Furthermore, Kirchhoff’s first law demands
for a flow q. Substituting the flow variables by the implicit flow bounds q_{∗} and q^{∗} results in
and
This concludes the proof. □
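As a sketch, the implicit flow bounds on a single arc follow directly from the pressure drop relation Φ_{i,j} q|q| = π_{i} − π_{j} together with the pressure bounds (π denoting squared pressures as in the Weymouth model). The interface below is illustrative, not the paper's implementation:

```python
import math

def implicit_flow_bounds(phi_ij, pi_lo_i, pi_hi_i, pi_lo_j, pi_hi_j):
    """Flow bounds on arc (i, j) implied by phi_ij * q * |q| = pi_i - pi_j:
    q|q| lies in [(pi_lo_i - pi_hi_j)/phi_ij, (pi_hi_i - pi_lo_j)/phi_ij],
    independent of the actual nomination."""
    q_hi = math.sqrt(max(pi_hi_i - pi_lo_j, 0.0) / phi_ij)   # largest forward flow
    q_lo = -math.sqrt(max(pi_hi_j - pi_lo_i, 0.0) / phi_ij)  # largest backward flow
    return q_lo, q_hi
```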
With Lemma 1, the prerequisites for the binary search are in place. However, problem Pen(r, v, x_{+}) is a nonconvex nonlinear optimization problem. Since we require global optima, we aim to linearize the nonlinear constraints of problem Pen(r, v, x_{+}), i.e., the Weymouth equation
by interpolating \({\Phi }_{i,j} q_{i,j}|q_{i,j}|\) with linear splines s_{i, j}(q_{i, j}). For a given linearization error 𝜖 > 0, the linear splines are constructed such that
for every (i, j) ∈ E. Therefore, in Pen(r, v, x_{+}), we relax (34) with
for all (i, j) ∈ E. The splines s_{i, j}(q_{i, j}) are modeled with the incremental method, see [21], using an additional set of linear inequalities and equations and additional continuous and binary variables. Hence subproblem Pen(r, v, x_{+}) is relaxed to a MIP which can be solved to global optimality. The optimal value of the relaxation is an upper bound for the optimal value of problem Pen(r, v, x_{+}). Due to the objective being nonnegative, if the optimal value of the linearized problem is zero, the optimal value of problem Pen(r, v, x_{+}) is zero as well. The linearized version of problem Pen(r, v, x_{+}) is henceforth denoted as Pen(r, v, x_{+}, 𝜖).
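A minimal sketch of how such spline breakpoints can be chosen for a prescribed error 𝜖, assuming uniform spacing on each side of 0 (the paper may use a different breakpoint strategy, and the incremental MIP modeling of the spline is omitted): on a cell of width h contained in q ≥ 0 or q ≤ 0, the function is ±Φq², so the chord interpolation error is Φh²/4 and h = 2√(𝜖/Φ) suffices.

```python
import bisect
import math

def uniform_grid(a, b, h):
    """Grid over [a, b] with spacing at most h."""
    n = max(1, math.ceil((b - a) / h))
    return [a + (b - a) * k / n for k in range(n + 1)]

def spline_breakpoints(phi, q_lo, q_hi, eps):
    """Breakpoints so that the linear interpolant of f(q) = phi*q*|q| deviates
    by at most eps; 0 is made a breakpoint to separate the two regimes."""
    h = 2.0 * math.sqrt(eps / phi)
    if q_lo >= 0.0 or q_hi <= 0.0:
        return uniform_grid(q_lo, q_hi, h)
    return uniform_grid(q_lo, 0.0, h)[:-1] + uniform_grid(0.0, q_hi, h)

def spline_eval(breaks, phi, q):
    """Evaluate the interpolating linear spline s(q) with s(b_k) = phi*b_k*|b_k|."""
    f = lambda x: phi * x * abs(x)
    k = min(max(bisect.bisect_right(breaks, q) - 1, 0), len(breaks) - 2)
    a, b = breaks[k], breaks[k + 1]
    t = (q - a) / (b - a)
    return (1.0 - t) * f(a) + t * f(b)
```

Halving 𝜖 multiplies the number of cells by √2, which is the running-time trade-off discussed in Section 4.2.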
Before we summarize our algorithm for finding a lower bound for the length of E(v, x_{+}), we note that the incremental method is applied for modeling linear splines which are defined on compact intervals. In our case, the spline variables are the flow variables which are, at least by definition, unbounded. In practice, one can for example apply the preprocessing developed in [1] for determining flow bounds. This method neglects the pressure bounds as well, which is the reason why we can apply it.
In the following, we summarize our procedure to find a lower bound for the length of E(v, x_{+}). The tolerance for the binary search is henceforth given by tol > 0: Algorithm 1 bounds E(v, x_{+}) from below with an error of at most tol. Due to Theorem 1 and Lemma 1, Algorithm 1 terminates with a correct lower bound.
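The bisection skeleton of Algorithm 1 can be sketched as follows; here `is_feasible(r)` is a hypothetical oracle standing in for solving the MIP relaxation Pen(r, v, x_{+}, 𝜖) and testing whether its optimal value is zero, and `upper` is the bound from Lemma 1.

```python
def bisection_lower_bound(is_feasible, upper, tol):
    """Binary search for a lower bound on b0, where [0, b0] is the feasible
    interval E(v, x_+). Maintains the invariant that lo is feasible and hi is
    infeasible, so on termination b0 - tol <= lo <= b0 (assuming is_feasible(0)
    holds and is_feasible(upper) does not)."""
    lo, hi = 0.0, upper
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_feasible(mid):
            lo = mid
        else:
            hi = mid
    return lo
```

The number of oracle calls is ⌈log₂(upper/tol)⌉, which is why a tighter upper bound (Lemma 1) and a moderate tol pay off directly in running time.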
This concludes the presentation of our method to determine a lower bound for E(v, x_{+}) under the conditions (C1), (C2) and (C3). We note that there are several sources of approximation errors which are caused by the binary search and the linearization of Pen(r, v, x_{+}). Yet, those can be limited by reducing tol and 𝜖 in Algorithm 1, respectively. However, one has to keep in mind that reducing tol results in more iterations and that reducing 𝜖, i.e., a tighter linearization, results in more binary variables for Pen(r, v, x_{+}, 𝜖). Both lead to an increase of the running time, see Section 4.2.
Before we discuss our numerical results, we would like to demonstrate how our algorithm can be modified to produce a lower bound on E(v, x_{+}) in case condition (C2) fails to hold. As pointed out before, without condition (C2) the set E(v, x_{+}) is in general not an interval but a finite union of intervals. Due to condition (C1), one of those intervals, henceforth denoted by E_{0}(v, x_{+}), has 0 as its lower bound. Obviously, R ∈ E_{0}(v, x_{+}) only holds if [0,R] ⊂ E_{0}(v, x_{+}), i.e., if all r ∈ [0,R] are robust feasible for v. Therefore, we can check whether R ∈ E_{0}(v, x_{+}) by modifying problem Pen(r, v, x_{+}) and solving
The feasible set of this optimization problem consists of the vectors (r, b, π, q) for which (μ + rPv, b) is a balanced nomination (with r ≤ R) with pressure flow solution (π, q), which is unique due to condition (C3). Thus, the only difference between problem Pen(r, v, x_{+}) and the above optimization problem is that r is a variable, since we have to check whether μ + rPv is robust feasible for every r ∈ [0,R]. Consequently, using this modified problem in Algorithm 1 instead of problem Pen(r, v, x_{+}) yields a lower bound for E_{0}(v, x_{+}) and thus a (potentially conservative) lower bound on E(v, x_{+}).
In the next subsection, we evaluate Algorithm 1 with respect to the quality of the obtained solutions and the running time.
4.2 Numerical Results
We modify the gas network instance of Section 3.3 by adding a second entry next to the first entry (the black filled node), which implies that \(x_{+} \in \mathbb {R}^{2}\). We note that the instance is still a tree and that its structure has not changed substantially, which is desired since we do not want to analyze an instance very different from the one of Section 3. However, if the instance had only one entry, the uncertainty set would have only one element, which is not interesting in the context of this section. In addition, we fix the pressure at a leaf node. Beyond that, we provide sphere vectors v by sampling a collection of 10 000 elements on the unit sphere using a quasi-Monte Carlo method. Our goal is to approximate the probability of robust feasibility for this network and uncertain entry loads by using the spherical radial decomposition and applying Algorithm 1. Since we cannot verify condition (C2), we assume that D(x_{+}) is not star-shaped with respect to the mean.
The performance of the algorithm is investigated by testing it on the given instance under a variety of parameter combinations. All experiments were carried out using Gurobi 7.5 [15] with 4 threads running on machines with Xeon E3-1240 v5 CPUs (4 cores, 3.5 GHz each).
We apply Algorithm 1 to all 10 000 rays using all combinations of relaxation parameters 𝜖 ∈ {2^{−6}, 2^{−5}, …, 2^{4}} and bisection termination tolerances tol ∈ {0.001, 0.01, 0.1}. Experiments for smaller tolerances down to tol = 10^{−6} were carried out as well but are omitted here, since they produced almost identical probabilities compared to tol = 10^{−3}. The results of this study are displayed in Fig. 3. The approximated probabilities for robust feasibility of the exit nomination and the capacity extension of the instance are displayed in Fig. 3(a). We approximate the overall probability to be between 78% and 78.5%, depending on 𝜖 and tol. As expected, we obtain more conservative solutions for increasing approximation parameters 𝜖. However, the influence of 𝜖 is much smaller than expected, even for large 𝜖. In the same fashion, increasing the bisection termination tolerance tol leads to more conservative solutions. We note that the combination of \(\epsilon = \frac {1}{2}\) and tol = 0.001 produces solutions that can be improved only very little (within the scope of this study) by decreasing both parameters further. The overall running times for all rays, i.e., the cumulated running time of Algorithm 1 applied to each ray, are plotted in Fig. 3(b). As expected, the running times increase for decreasing parameter 𝜖, as the latter leads to more complex MIP models. A decrease of the tolerance tol leads to a larger number of iterations of the bisection algorithm and thus to longer running times as well. In the previous experiments, we focused on the influence of the relaxation parameter 𝜖 and the bisection tolerance tol on the algorithm’s running time and on the reliability of the obtained probability. Another important factor in the overall running time is the number of rays that needs to be used. Figure 4(a) shows the resulting probability when only the first k rays of the 10 000 given samples are used for the probability approximation. At a glance, we observe large fluctuations when using only up to about 2500 rays. A considerable decrease in the magnitude of the probability fluctuation can be seen for values of k ≥ 2500. We further strengthen this observation in Fig. 4(b) by comparing the first graph with a second graph that was obtained in the same fashion from 5000 other random sphere vectors. Since the second graph follows the same pattern, we conclude that for the instance considered here, the number of rays should not be smaller than 2500 if the approximation of the probability is supposed to be reliable. Assuming that the parameter selection k = 2500, \(\epsilon =\frac {1}{2}\), and tol = 0.001 is sufficient for a reliable probability calculation, the sum of all MIP running times is about 8 min.
As a final experiment, we demonstrate the practical applicability of our method by solving a simple optimization task where we assume that the number of sampled points k is large enough and that our approximation is good enough to check whether the probust constraint is satisfied for a given capacity extension.
The goal is to determine capacities for the two entry nodes such that the probability of robust feasibility is at least 75%. We use a linear cost function with equal costs for expansion at each node. In Section 3, the capacity problem with uncertain exit loads has been solved using (sub)gradient information in the sense of [24]. However, due to the different situation here, caused by the MIP approximations and the robust constraints, the derivation of suitable (sub)gradients needs further research that is beyond the scope of this article. Instead, we decided to apply a gradient-free pattern search algorithm available in MATLAB [19]. It is important to note that, since the probust constraint is in general nonsmooth, no convergence of this algorithm to a stationary point can be guaranteed. Instead, one can expect convergence to a point in which the directional derivatives are nonnegative for all directions in the positive basis used by the algorithm. We refer to [5] for a further discussion of this, as well as for a general overview of derivative-free optimization.
In every major iteration, a capacity extension x_{+} is proposed by the algorithm. For all sampled vectors v_{i}, Algorithm 1 is applied to approximate E_{0}(v_{i}, x_{+}). Hence, the probability of robust feasibility is estimated from below by \(\frac {1}{2500}{\sum }_{i=1}^{2500} F_{\chi }(|E_{0}(v_{i},x_{+})|)\) and thus, the inequality
is checked.
In our experiment, we consider the entry capacities in the box [28000,32000] × [4000,8000] and start with C_{+} = (− 28000,− 4000). In other words, the extension x_{+} is an element of the box [0,4000] × [0,4000] and our starting point is (0,0).
Overall, the pattern search algorithm converged after 149 function evaluations; see Fig. 5. The red lines connecting the black filled dots show the trajectory of the pattern search algorithm, with the optimum marked by a red cross; the extra function evaluations are represented by black circles. Obviously, the probability is not very sensitive with respect to capacity changes at entry 1 but clearly decreases when the capacity is increased at entry 2.
This concludes the discussion and presentation of the methodology for stationary gas networks. In the next section, transient systems controlled by the wave equation are discussed.
5 Stabilization with Probabilistic Constraints of a System Governed by the Wave Equation
Now, we consider a transient system that is governed by the wave equation, which is a linear model of gas flow in pipelines for sufficiently small velocities. The state is determined by an initial boundary value problem with Dirichlet boundary data at one end and Neumann boundary feedback at the other end of the space interval. The initial data and the boundary data are given by a stochastic process. The aim is to maximize the probability of staying near a desired state everywhere in the time-space domain.
Let a finite length L > 0 and a finite time T > 0 be given. In this section, let \(\mathcal {U} = [0,T] \times [0,L]\). Let c > 0 denote the sound speed in the gas. Let a stationary velocity field \(\bar v\) be given, see [13]. Let \(v= \tilde v - \bar v\) denote the difference between the velocity and the stationary state. If the norm of \(\bar v\) is sufficiently small, the dynamics for v are governed by the wave equation v_{tt} = c^{2}v_{xx}. Moreover, the gas density ρ satisfies the wave equation ρ_{tt} = c^{2}ρ_{xx} and the flow rate q of the gas satisfies the wave equation q_{tt} = c^{2}q_{xx}; see [14]. For given uncertain boundary data (that model the uncertain demand) \(\xi \in L^{\infty }(0,T)\), an uncertain initial state \((v_{0}, v_{1})\in L^{\infty }(0, L) \times L^{1}(0, L)\) and a feedback parameter η > 0, we consider the closed loop system that is governed by the initial boundary value problem for \((t,x) \in \mathcal {U}\)
An explicit representation of the generated state in terms of travelling waves (d’Alembert’s solution) is given in [11, 12]. This allows the computation of the system state \(v\in L^{\infty }(\mathcal {U})\) without discretization errors. In the operation of pipeline networks, there is a constraint on the magnitude of the flow velocity in the pipe. Let \(v_{\max \limits }>0\) be an upper bound for the velocity. We consider the probabilistic constraint for the probability
where v solves (S) and \(\|\cdot \|_{L^{\infty }}\) denotes the norm on \(L^{\infty }(\mathcal {U})\).
In order to write the probabilistic constraint similar to (4), we introduce the notation
with ξ = (a, b), \(a = (a_{k})_{k=1}^{N}\), \(b = (b_{k})_{k=1}^{N}\), \(u = (t,x) \in \mathcal {U}\), where \(\tilde {v}\) solves the initial boundary value problem (S) with initial and boundary data that depend on the parameter (a, b) (see (KLid) and (KLbd) below).
Theorem 2
(Solution of system (S)) Consider system (S) with \(\xi \in L^{\infty }(0,T)\) and \((v_{0}, v_{1}) \in L^{\infty }(0,L)\times L^{1}(0,L)\) for the feedback parameter \(\eta = \frac {1}{c}\). Define the antiderivative of v_{1} by
and define for
and
Then the function
solves system (S) and the solution v lies in \(L^{\infty }(\mathcal {U})\).
Proof
We show that v defined in (37) fulfills the PDE system (S). First we see that v satisfies the wave equation, because we have
Now we show that v satisfies the initial conditions. At t = 0, we have for all x ∈ (0,L)
For the time derivative at t = 0, x ∈ (0,L) we have
where the derivatives are to be understood in the sense of distributions. Finally, we show that the boundary conditions are fulfilled. Now, we prove that the Dirichlet boundary condition at x = L is fulfilled for t > 0. We have
For the feedback law at x = 0, we have
Now we show that v lies in \(L^{\infty }(\mathcal {U})\). By the assumptions, we have \(v_{0} \in L^{\infty }(0,L)\) and \(\xi \in L^{\infty }(0,T)\). The claim is true if V_{1} is in \(L^{\infty }(0,L)\). We know that v_{1} is in L^{1}(0,L). This implies
This finishes the proof of Theorem 2. □
Theorem 3
(Value of \(\|v\|_{L^{\infty }}\) in terms of initial and boundary data) Let v be a solution of system (S) under the assumptions of Theorem 2. For \((t,x) \in \mathcal {U}\), define
Set
(see Fig. 6). Furthermore, for i ∈{1,…,5}, set
Then the \(L^{\infty }\)-norm of the velocity v is given by
Proof
By Theorem 2 the solution of system (S) is given by
By the definition of α and β, there are four cases to consider; the last case is split into two subcases. The first case, \(t<\min \limits \left \{\tfrac {x}{c}, \tfrac {L-x}{c}\right \}\), is the first case for both α and β. We have
For \(\tfrac {x}{c} \le t <\tfrac {L-x}{c}\), we are in the first case for α and in the second case for β. Note that the interval for t can only be nonempty for \(x \in (0,\tfrac {L}{2})\). We have
For \(\tfrac {L-x}{c} \le t < \frac {x}{c}\), we are in the second case for α and in the first case for β. Note that the interval for t can only be nonempty for \(x \in (\tfrac {L}{2}, L)\). Since \(t<\tfrac {x}{c}<\tfrac {L}{c}\) and \(\tfrac {x-L}{c}<0\), we have \(t+\tfrac {x-L}{c}<\tfrac {L}{c}\) and therefore
The last case to consider is \(t \ge \max \limits \{\frac {L-x}{c}, \frac {x}{c}\}\). It leads to
This yields the assertion of Theorem 3. □
5.1 Boundary Data with Random Amplitude, Frequency and Phaseshift
For the boundary data, we consider the parametric family
with a random variable (λ, κ, ω) and the compatible initial data
We assume that (λ, κ, ω) is normally distributed with expected value \(\mu \in \mathbb {R}^{3}\) and a positive definite covariance matrix \({\Sigma } \in \mathbb {R}^{3 \times 3}\). For the numerical computation of the probability, we use the spheric radial decomposition described in Section 3.2.
Corollary 1
(Analytic formula for \(\|v\|_{L^{\infty }}\)) Let v be a solution of system (S) under the assumptions of Theorem 2 with the initial conditions given by (cosid) and the Dirichlet boundary data at x = L given by (cosbd). Then
Proof
With the definitions from Theorem 3 and (cosbd) as well as (cosid), we have
By \(|m_{i}(t, x)| \le |\lambda |\) for i = 1,…,5, the claim follows. □
Remark 1
If ω ≠ 0 and T is sufficiently large, then \(\|v\|_{L^{\infty }} = |\lambda |\) holds.
5.2 Karhunen–Loève Approximation of a Wiener Process as Initial and Boundary Data
We consider the Karhunen–Loève representation (see [17]) of a Wiener process on [0,T] with covariance function \(\text {Cov}(W_{t}, W_{s}) = \min \limits (s,t)\) given by
with independently normally distributed random variables a_{k}. It is reasonable to use a finite approximation of it as boundary data, i.e.,
Analogously, we choose the compatible initial data
with independently normally distributed random variables b_{k}. We have the compatibility condition ξ(0) = v_{0}(L) = 0. Furthermore, we set v_{1} = 0 (Fig. 7). Different realizations of the initial and boundary data are shown in Figs. 8 and 9. The solution of the wave equation for different realizations of the initial and boundary data is depicted in Fig. 10.
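Sampling such truncated Karhunen–Loève data can be sketched as follows, using the standard KL expansion of a Wiener process on [0, T] with eigenfunctions proportional to sin((k − 1/2)πt/T); the function name and interface are illustrative:

```python
import math
import random

def kl_wiener_path(n_terms, T, ts, rng):
    """Sample a path of the truncated Karhunen-Loeve expansion of a Wiener
    process on [0, T]:
        W_t = sum_{k>=1} a_k * sqrt(2T) * sin((k - 1/2) pi t / T) / ((k - 1/2) pi)
    with i.i.d. a_k ~ N(0, 1), evaluated at the times in ts."""
    a = [rng.gauss(0.0, 1.0) for _ in range(n_terms)]
    path = []
    for t in ts:
        s = sum(a[k] * math.sin((k + 0.5) * math.pi * t / T) / ((k + 0.5) * math.pi)
                for k in range(n_terms))
        path.append(math.sqrt(2.0 * T) * s)
    return path
```

As the number of terms grows, the sampled variance at time t approaches Cov(W_{t}, W_{t}) = t, consistent with the covariance function min(s, t) above.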
This case is much more involved than that of Section 5.1, since the value of \(\|v\|_{L^{\infty }}\) is not easily expressed as an analytic function of the random variables. This means that a sampling scheme based on the spherical radial decomposition cannot be applied directly. Instead, we use a quasi-Monte Carlo method based on a Sobol sequence.
If one wants to approximate the \(L^{\infty }\)-norm of the velocity by pointwise evaluation on a grid, Lipschitz continuity of the velocity is required.
Theorem 4
(Lipschitz continuity of the solution) Assume the boundary data \(\xi \in \mathcal {C}^{0,1}(0,T)\) and initial data \(v_{0} \in \mathcal {C}^{0,1}(0,L)\) to be Lipschitz continuous and assume that Lipschitz compatibility over the edge holds, i.e., we have
with a Lipschitz constant K > 0. Furthermore, let \(v_{1}\in L^{\infty }(0,L)\). Then, under the assumptions of Theorem 2, the solution v of system (S) is Lipschitz continuous on \(\mathcal {U}\), i.e., \(v \in \mathcal {C}^{0,1}(\mathcal {U})\).
Proof
The sum of Lipschitz continuous functions is Lipschitz continuous. It is therefore sufficient to show the Lipschitz continuity of α and β defined as in Theorem 2. Without loss of generality—by going to the maximum of the occurring Lipschitz constants—we assume that they are all the same and denote each of them by K > 0. First, we show the Lipschitz continuity of V_{1}. We have, for x, y ∈ [0,L]
The Lipschitz continuity of β is clear on each of the intervals \(\left [0, \tfrac {L}{c}\right )\) and \(\left [\tfrac {L}{c}, T+\tfrac {L}{c}\right )\). Consider \(s \in \left [0, \tfrac {L}{c}\right )\) and \(r \in \left [\tfrac {L}{c}, T+\tfrac {L}{c}\right )\). Then, using V_{1}(0) = 0 and \(\left |\tfrac {L}{c} - s\right | = \tfrac {L}{c} - s \le r - s\), this leads to
The Lipschitz continuity of β and of ξ implies that α is Lipschitz continuous on \(t\ge \tfrac {L}{c}\), and by the Lipschitz continuity of v_{0} and V_{1} it is Lipschitz continuous on \(\left [0, \tfrac {L}{c}\right )\). Again, the case \(s \in \left [0, \tfrac {L}{c}\right )\) and \(r \ge \tfrac {L}{c}\) remains. We obtain
For \(\frac {L}{c}\le r < \tfrac {2L}{c}\), this yields by the definition of β
By the triangle inequality and the compatibility v_{0}(L) = ξ(0), we obtain
since \(\tfrac {L}{c} \le s\). For \(r\ge \tfrac {2L}{c}\), we have by the definition of β and V_{1}(0) = 0
because \(2L \le rc\), \(s \le \tfrac {L}{c}\) and \(r-s \ge \tfrac {2L}{c} - s \ge \tfrac {L}{c} \ge s\). This shows the Lipschitz continuity of α and concludes the proof. □
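The Lipschitz continuity established above is exactly what justifies approximating \(\|v\|_{L^{\infty }}\) by pointwise evaluation. A minimal sketch of this grid evaluation, with an illustrative interface (the paper does not specify its grid routine):

```python
def sup_norm_on_grid(v, T, L, n):
    """Approximate the L^inf norm of v on [0,T] x [0,L] by the maximum of |v|
    over an (n+1) x (n+1) uniform grid. If v is Lipschitz with constant K with
    respect to the max-norm, every point is within max(T, L)/(2n) of a grid
    point, so the true supremum exceeds this value by at most K*max(T, L)/(2n)."""
    return max(abs(v(T * i / n, L * j / n))
               for i in range(n + 1) for j in range(n + 1))
```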
Remark 2
Results similar to Theorems 2 and 4 also hold for general feedback gains η > 0.
5.3 Optimization of the Feedback Parameter
The feedback parameter η can be chosen such that the probability (35), as a function G(η) of η, is maximized. We consider the probability of staying below the bound \(v_{\max \limits } = 5\) for feedback parameters η > 0 on a grid with step size 0.05 between 1.5 and 4. The data for the example has been chosen as L = 2, T = 2, c = 0.5. For the approximation of the probability, 2000 samples were used for each value of η. The maximum of the probability is attained for the completely absorbing feedback η = 1/c = 2. The peak in the probability is very distinct, and at the peak the probability function appears to be nonsmooth. Numerically, we find that the choice η = 1/c is optimal; see Fig. 11.
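The grid search over η described above amounts to the following sketch, where G stands in for the sampled probability estimate (in the test below it is replaced by a toy surrogate peaked at η = 2):

```python
def grid_argmax(G, lo, hi, step):
    """Evaluate G on the grid lo, lo + step, ... up to hi and return the
    maximizing grid point together with its value."""
    best_eta, best_val = lo, G(lo)
    k, eta = 1, lo + step
    while eta <= hi + 1e-12:  # small slack against floating-point drift
        val = G(eta)
        if val > best_val:
            best_eta, best_val = eta, val
        k += 1
        eta = lo + k * step
    return best_eta, best_val
```

Note that since each G(η) is itself a Monte Carlo estimate (2000 samples per grid point above), the located maximum is only as reliable as the per-point sampling error.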
6 Conclusion
In this paper, we dealt with a joint model of probabilistic and robust constraints, so-called probust constraints, and illustrated their importance for gas transport under uncertainty. In particular, we addressed the problem of capacity maximization under uncertainty, thereby distinguishing between the cases of uncertain exit and uncertain entry loads. Moreover, we considered a stabilization problem in a transient system governed by the wave equation and subject to probust constraints. By applying the spherical radial decomposition of Gaussian random vectors, we approximated the occurring probabilities and, where possible, their sensitivities with respect to the decision variables in order to numerically solve the resulting optimization problems. Many challenges remain for future work, such as the efficient incorporation of cycles or active elements in the network. Moreover, a full integration of the methodology outlined in Section 4 for the robust treatment of uncertain entries with the capacity maximization problem described in Section 3 would ultimately allow an application of the probust approach to arbitrary network topologies.
Acknowledgements
The authors thank the DFG for their support within Projects B04, B05, B06, C03, and Z01 in CRC TRR 154. This paper has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Actions, Grant Agreement No. 764759.
Funding
Open Access funding provided by Projekt DEAL.
Rights and permissions
Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given to the original author(s) and the source. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Adelhütte, D., Aßmann, D., Grandón, T.G. et al.: Joint Model of Probabilistic-Robust (Probust) Constraints Applied to Gas Network Optimization. Vietnam J. Math. 49, 1097–1130 (2021). https://doi.org/10.1007/s10013-020-00434-y