1 Introduction

1.1 Joint Probabilistic/Robust Constraints

Decision-making problems are more often than not affected by parameter uncertainty. An optimization problem involving uncertain parameters typically has the form

$$ \begin{aligned} & \min\limits_{x} & & g_{0}(x) \\ & \text{subject to} & & g_{i}(x,z)\geq 0\quad (i=1,\ldots,k). \end{aligned} $$
(1)

Here \(x\in \mathbb {R}^{n}\) is a decision vector, \(z\in \mathbb {R}^{m}\) is an uncertain parameter, \(g_{0}\colon \mathbb {R}^{n} \to \mathbb {R}\) is the objective function and \(g\colon \mathbb {R}^{n}\times \mathbb {R}^{m}\rightarrow \mathbb {R}^{k}\) is the constraint mapping. Decision support schemes with non-deterministic parameters have to take the nature and source of the uncertainty into account while balancing the objective and the constraints of the problem. There are two main approaches for dealing with uncertainty in the constraints of an optimization problem. The first one is the use of probabilistic constraints. This approach is based on the assumption that historical data are available to estimate the probability distribution of the uncertain parameter, so that it can be considered as a random vector ξ taking values in \(\mathbb {R}^{m}\). Then (1) may be given the form

$$ \begin{aligned} & \min\limits_{x} & & g_{0}(x) \\ & \text{subject to} & & \mathbb{P}\left( g(x, \xi)\geq 0\right)\geq p\in (0,1] \end{aligned} $$
(2)

(note that the first ‘≥’ sign is to be understood component-wise). Here, a decision x is declared feasible if and only if the original random inequality system (1) is satisfied with a prescribed probability p, a level usually chosen close to, but not identical to, one. Of course, higher values of p lead to a smaller set of feasible decisions x in (2), hence to optimal solutions at higher costs. Concerning the random vector ξ, we essentially focus on continuous distributions. For a standard reference on optimization problems with probabilistic constraints we refer to the monograph [22].

An alternative approach is given by robust optimization. It applies when the uncertain parameter u is completely unknown or non-stochastic due to a lack of available data. Then, satisfaction of the uncertain inequality system (1) is required for every realization of the uncertain parameter within some uncertainty set \(\mathcal {U}\subseteq \mathbb {R}^{m}\), which in general contains infinitely many elements, so that one arrives at the following optimization problem:

$$ \begin{aligned} & \min\limits_{x} & & g_{0}(x) \\ & \text{subject to} & & g(x,u)\geq 0 \quad\forall u\in\mathcal{U}. \end{aligned} $$
(3)

For a basic monograph on robust optimization, we refer to [3].

We note that both optimization models, with probabilistic and with robust constraints, are deterministic reformulations of (1), since they depend only on the decision vector x and no longer on the outcome of the uncertain parameter z.

A central issue in robust optimization is the appropriate choice of the uncertainty set \(\mathcal {U}\). Simple-shaped sets like polyhedra or ellipsoids induce computational tractability [2] and allow one to deal with much larger dimensions than in the case of probabilistic constraints. Moreover, when choosing \(\mathcal {U}\) such that \(\mathbb {P}(\xi \in \mathcal {U})=p\), the feasible set of decision variables x of (3) is contained in that of the probabilistic program (2), so that an optimal solution to (3) is a feasible solution to (2). For these two reasons, robust optimization is preferred not only in the absence of statistical data, but also as a conservative approximation of the probabilistic programming setting. This conservatism, however, may be significant, up to the point of ending up with very small or even empty feasible sets, possibly coming at much higher costs than under a probabilistic constraint. This trade-off motivates the use of probabilistic constraints in the presence of statistical information, at least in moderate dimension.

Although these two approaches, probabilistic programming and robust optimization, are often dealt with separately, in many applications one is faced with uncertain variables of both mentioned types. This leads us naturally to the consideration of uncertain inequalities (2) in which the uncertain variable has a stochastic and a non-stochastic part, i.e., z = (ξ, u). A typical example is the optimization of gas transport in the presence of uncertain loads. In many situations, historical data are available for the exit loads (stochastic uncertainty). On the other hand, observations can hardly be accessed (non-stochastic uncertainty) for entry loads or, for example, for uncertain roughness coefficients in pipes. Therefore, it seems natural to combine the originally separate models (2) and (3). An appropriate way to do so is to formulate a probabilistic constraint (w.r.t. ξ) involving a robustified (w.r.t. u) infinite inequality system:

$$ \mathbb{P}\left( g(x, \xi, u)\geq 0 \quad\forall u\in\mathcal{U} \right)\geq p. $$
(4)

This class of constraints has been recently investigated in [23] under the name of hybrid robust/chance-constraints and in [10] under the name of probabilistic/robust constraints. For easier reference, we shall be using in the following the natural acronym of probust constraints. We note that even the more complex situation of the uncertainty set depending on the decision and random variable plays an increasing role in applications. Here, the constraint becomes

$$ \mathbb{P}\left( g(x, \xi, u)\geq 0 \quad\forall u\in\mathcal{U}(x,\xi) \right)\geq p, $$
(5)

where the inner part resembles constraint sets arising in generalized semi-infinite optimization [25].
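As a numerical illustration of how a probust constraint of type (4) can be evaluated, the following sketch estimates the probability by sampling ξ and checking the worst case over a discretization of the uncertainty set. The constraint function g, the distribution of ξ and the set \(\mathcal{U}\) are toy assumptions for illustration only, not taken from the gas transport application below.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy data (assumptions): g(x, xi, u) = x - xi*u with xi ~ N(1, 0.1^2)
# (stochastic part) and u ranging over the uncertainty set U = [0, 1]
# (non-stochastic part).
def g(x, xi, u):
    return x - xi * u

def probust_probability(x, n_samples=50_000, n_grid=101):
    """Estimate P( g(x, xi, u) >= 0 for all u in U ) by sampling xi and
    checking the worst case over a grid discretizing U = [0, 1]."""
    xi = rng.normal(1.0, 0.1, size=n_samples)
    u_grid = np.linspace(0.0, 1.0, n_grid)
    # worst case over u for each sample of xi: min over u of g(x, xi, u)
    worst = np.min(g(x, xi[:, None], u_grid[None, :]), axis=1)
    return float(np.mean(worst >= 0.0))

# For this g the worst case is attained at u = 1 whenever xi > 0, so the
# probability equals P(xi <= x); for x = 1.2 this is roughly 0.977.
p_hat = probust_probability(1.2)
```

Such a brute-force discretization only serves to clarify the meaning of (4); the structured reformulations developed in Sections 3 and 4 avoid the grid over \(\mathcal{U}\) entirely.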

We note that yet another form of combining the probabilistic and robust parts of the inequality system would result from interchanging their arrangements in (4):

$$ \mathbb{P}\left( g(x, \xi, u)\geq 0 \right)\geq p \quad\forall u\in\mathcal{U}. $$

In this way one does not arrive at a probabilistic constraint involving infinitely many random inequalities as in (4) but rather at an infinite system of probabilistic constraints. This setting is related to (robust) first-order stochastic dominance constraints [6] and to distributionally robust probabilistic constraints [26], a topic currently receiving increased attention. We will not deal with this model here but rather focus our considerations on (4) and (5), respectively, which impose stronger conditions in the sense of joint probabilistic constraints compared to individual ones.

The aim of this paper is to illustrate the application of this new class of probust constraints to optimization problems in gas transport under uncertainty in the exit and entry loads. Uncertainty in the roughness coefficients of the pipes is not part of this paper; it has been analyzed, for example, in [1]. For a recent monograph on gas transport optimization we refer to [18]. We will first consider the problem of maximizing free booked capacities in an algebraic model for a stationary gas network. The corresponding model is presented in Section 2. This overall problem is too complex, however, to be dealt with in this paper. Therefore we will split it into two subproblems, namely capacity maximization at exits (consumer side), which is discussed in Section 3, and capacity maximization at entries (provider side), which is analyzed in Section 4. Without loss of generality, we follow the convention of entry loads being nonpositive and exit loads being nonnegative.

While research in finite-dimensional optimization (including mixed-integer nonlinear optimization) and in PDE-constrained optimization often takes place within disjoint communities, we think that it is important to identify common problem structures that occur in both classes of problems, in particular in the development of methods that allow one to take uncertainties into account. Therefore, in Section 5 we discuss an optimization problem with a probust constraint for a system governed by a PDE related to the application in gas transport. The PDE model allows us to consider a transient system in Section 5, whereas in the first two parts of the paper stationary problems are studied. The transient system is governed by the wave equation under probabilistic initial and Dirichlet boundary data at one end of the space interval. At the other end of the space interval, Neumann velocity feedback is active. For the system a desired stationary state is given. The robustness of the system is measured by the \(L^{\infty }\)-norm of the difference between the actual state \(\tilde {v}\) and the desired state \(\bar {v}\). Due to the definition of the \(L^{\infty }\)-norm (as the essential supremum of the absolute value), this approach gives information about the maximal pointwise distance in space and time. Since our solutions are in fact continuous for appropriate initial and boundary data, the \(L^{\infty }\)-norm is equal to the maximum norm. The robustness requirement is that this pointwise distance remains below a given upper bound vmax. In our system, the state depends on uncertain initial and boundary data with a given probability distribution. The meaning of the probabilistic constraint is the following: the probability that the robustness requirement is satisfied is sufficiently large, i.e., greater than or equal to a given value p ∈ (0,1]. This probabilistic constraint can be written in the form of (4); for details see Section 5, (36).
As a numerical example, we consider the optimization with respect to the feedback parameter in Subsection 5.3.

2 Maximization of Free Capacities in a Stationary Network

We consider a passive stationary gas network given by a directed graph \(\mathcal {G}=(V,E)\) with a set V of vertices and a set E of edges. We shall assume that the set of nodes decomposes into a set \(V_{+}\) of entries, where gas is injected, and a set \(V_{-}\) of exits, where gas is withdrawn. Hence, \(V=V_{+}\cup V_{-}\) and \(V_{+}\cap V_{-}=\emptyset \). Without loss of generality we label the nodes in such a way that entries come first and exits last. The gas transport along the network is triggered by bilateral delivery contracts between traders who inject gas at entries and traders covering customer demands by withdrawing gas at exits. An amount of gas injected into or withdrawn from the network at certain nodes is called a nomination. We shall refer to b ≤ 0 and ξ ≥ 0 as the vectors of nominations at entries and exits, respectively.

Nominations have to satisfy three conditions:

  1. At each node (entry or exit) of the network, nominations must not exceed the capacity booked for that node by the respective trader.

  2. Nominations must be balanced over the whole network, i.e., the sum of nominations at entries equals the sum of nominations at exits.

  3. Nominations must be technically feasible in the sense that there exist pressures within given bounds at the nodes and a flow through the network such that the nominations at the exits can be served by the nominations at the entries.

The first condition has to be satisfied by the traders. Referring to \(C_{+},C_{-}\geq 0\) as the vectors of booked capacities at entries and exits, respectively, it can be written as

$$ b\in [-C_{+},0] ,\quad \xi \in [0,C_{-}], $$
(6)

where the intervals are to be understood in a multidimensional sense. The second condition is an automatic consequence of the collection of bilateral delivery contracts between entries and exits and can be written as

$$ \mathbf{1}_{+}^{T}b+\mathbf{1}_{-}^{T}\xi =0, $$
(7)

where \(\mathbf {1}_{+}\) and \(\mathbf {1}_{-}\) are all-ones vectors of the respective dimensions of entries and exits.

The third condition of technical feasibility of some joint nomination vector (b, ξ) can be characterized by the existence of vectors q of flows along the edges of the network and π of pressure squares at the nodes satisfying the conditions

$$ \mathcal{A}q=\left( \begin{array}{c} b \\ \xi \end{array}\right);\quad \mathcal{A}^{T}\pi =-{\Phi} q |q|;\quad \pi \in [\pi_{\ast},\pi^{\ast}]. $$
(8)

Here, \(\mathcal {A}\) is the incidence matrix of the network graph, \({\Phi} :=\text {diag}((\Phi _{e})_{e\in E})\) is a diagonal matrix of roughness coefficients, and the modulus sign for a vector has to be understood componentwise. The first two equations in (8) correspond to Kirchhoff's first and second laws (mass conservation and pressure drop), whereas the interval condition imposes bounds on the pressure. It is actually these bounds that constrain the feasibility of nominations b, ξ, see, e.g., [18].
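To make the feasibility system (8) concrete, here is a minimal sketch for a small tree network; the incidence matrix, roughness coefficients and pressure bounds below are invented for illustration. On a tree the flow q is uniquely determined by the loads, and the pressure squares π are fixed up to their value at the root, so checking (8) reduces to an interval test.

```python
import numpy as np

# Hypothetical tree: entry 0 -> junction 1; junction 1 -> exits 2 and 3.
# Columns of the incidence matrix A are the edges (0,1), (1,2), (1,3);
# A[v, e] = -1 if edge e leaves node v, +1 if it enters node v.
A = np.array([[-1.,  0.,  0.],
              [ 1., -1., -1.],
              [ 0.,  1.,  0.],
              [ 0.,  0.,  1.]])
Phi = np.diag([1e-4, 2e-4, 1.5e-4])          # roughness coefficients (assumed)
pi_lo = np.array([900., 800., 800., 800.])   # lower pressure-square bounds
pi_hi = np.array([1600., 1500., 1500., 1500.])

def feasible(b, xi):
    """Check technical feasibility (8) for entry load b <= 0 and exit
    loads xi >= 0 (node 1 is a junction with zero load).  On a tree the
    flow q is unique and the pressure squares pi are determined up to the
    root value pi[0], so (8) reduces to an interval condition."""
    load = np.concatenate(([b, 0.0], xi))
    # A has full column rank; the rows of the non-root nodes determine q.
    q = np.linalg.solve(A[1:, :], load[1:])
    drop = Phi @ (q * np.abs(q))             # pressure-square drop per edge
    # accumulate drops along root-to-node paths: pi[k] = pi[0] - h[k]
    h = np.array([0.0, drop[0], drop[0] + drop[1], drop[0] + drop[2]])
    # feasible iff some pi[0] satisfies pi_lo <= pi[0] - h <= pi_hi
    return bool(np.max(pi_lo + h) <= np.min(pi_hi + h))

ok = feasible(-30.0, np.array([10.0, 20.0]))          # moderate loads
bad = feasible(-4000.0, np.array([2000.0, 2000.0]))   # excessive loads
```

The interval test at the end is exactly the mechanism that, in Section 3, leads to the pairwise inequalities (13).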

It is the network owner's responsibility to make sure, without knowledge of the concrete bilateral delivery contracts between entries and exits, that condition 3 is satisfied for all nominations fulfilling conditions 1 and 2. This requirement clearly imposes a constraint on the booked capacities \(C_{+},C_{-}\) via (6), saying that for all (b, ξ) satisfying (6) and (7) there exists (q, π) satisfying (8). It can be formally written as:

$$ \forall (b,\xi):~(6), (7)~\exists (q,\pi) :~(8). $$
(9)

Satisfying (9) in a rigorous way would yield (too) small booked capacities at the nodes of the network. Here, the network owner can benefit from the fact that nominations at the exits (gas withdrawn for consumption) are endowed with a typically large historical data base, so that they can be modeled as random vectors obeying some appropriate multivariate distribution. This offers the possibility to relax the ‘for all’ condition on ξ in a probabilistic sense, so as to satisfy (9) with sufficiently high probability p. In this way, by choosing p close to one, it is possible to keep a robust satisfaction of technical feasibility while allowing for considerably larger booked capacities. A similar probabilistic modeling of entry nominations would not be justified (although historical data might be available here too) because their outcome is market driven and thus not a genuine random vector.

This motivates the network owner to relax the worst-case condition in a probabilistic sense on the side of the exits while keeping it on the side of the entries. He then arrives at the following probabilistic formulation of feasible booked capacities \(C_{+},C_{-}\):

$$ \mathbb{P}\left( \xi \in [0,C_{-}],~\forall b\in [-C_{+},0]:~\mathbf{1}_{+}^{T}b+\mathbf{1}_{-}^{T}\xi = 0~\exists (q,\pi): (8)\right) \geq p. $$
(10)

Here, \(\mathbb {P}\) refers to a probability measure associated with the random vector ξ and p ∈ (0,1] is a desired probability level chosen by the network owner. The expression on the left-hand side of this inequality provides the probability that a random exit nomination (within booked capacity) combined with an arbitrary entry nomination (within booked capacity and in balance with the exit nomination) is technically feasible.

Now, for a given capacity vector \((C_{+},C_{-})\) it may turn out that the associated probability on the left-hand side of (10) is larger than the desired minimum probability p, e.g., 0.96 vs. 0.9. This would give the network owner the opportunity to offer larger capacities while still keeping the desired probability p. Therefore, he might be led to determine the largest possible additional capacities \((x_{+},x_{-})\) he could offer for booking by new clients. This leads to the following optimization problem:

$$ \begin{array}{@{}rcl@{}} &\max\limits_{x_{+}, x_{-}} w_{+}^{T}x_{+}+w_{-}^{T}x_{-}& \\ &\mathbb{P}\left( \begin{array}{c} \xi\in [0,C_{-}]~\forall y\in [0,x_{-}],~\forall b\in [-C_{+} - x_{+},0]:\\ \mathbf{1}_{+}^{T}b+\mathbf{1}_{-}^{T}\xi +\mathbf{1}_{-}^{T}y=0\\ \exists (q,\pi):~\mathcal{A}q=\left( \begin{array}{c} b \\ \xi +y \end{array}\right);~\mathcal{A}^{T}\pi =-{\Phi} q |q|;~\pi \in [\pi_{\ast},\pi^{\ast}] \end{array}\right)\geq p&. \end{array} $$
(11)

Here, the weight vector w in the objective reflects preferences the network owner could have in offering new booking capacities at different nodes. In the absence of preferences, he could simply choose w := 1. Note that the nomination vector at the exits has been split into ξ and y, where ξ refers to the nominations of already existing clients (thus endowed with historical data and amenable to stochastic modeling), while y refers to nominations of potentially new clients without nomination history. As these cannot be treated stochastically, they are considered with a ‘for all’ requirement similar to entry nominations. No such splitting is necessary on the side of the entries because nominations there have to be considered with a ‘for all’ requirement anyway, as they cannot be modeled stochastically in a straightforward manner. In the following section, we shall address in detail the capacity maximization problem for exits only, a restriction which allows us to solve the arising optimization problem subject to probust constraints numerically in its entirety. In contrast, Section 4 will focus on entries only and discuss essential issues related to the solution of this alternatively restricted optimization problem.

3 Maximization of Booked Capacities for Exits in a Stationary Gas Network

As mentioned in the introduction, the overall problem of capacity maximization (11) is too complex to be dealt with here. Therefore, we shall focus in a first step on maximizing capacities at exits.

3.1 The Capacity Maximization Problem for Several Exits and One Entry

In the following we make the assumption that the network is a tree and that there exists only one entry point serving m exits. The presence of cycles in the network would significantly complicate the numerical solution of the problem. Nonetheless, in Section 3.4, we sketch a possible methodology in the presence of cycles. The restriction to a single entry is made here in order not to deal with the robust uncertainty related to the splitting of nominations among several entry nodes (see the ‘∀b ∈...’ condition in (11)), which will be considered separately in Section 4. Without loss of generality, we define the entry to be the root of the network, labeled by index ‘0’. For simplicity, we assume moreover that the booked capacity \(C_{+}\) of the entry is large enough to meet the maximum possible load of all exits as well as possible extensions thereof after adding additional capacity at the exits as a result of the optimization problem. As a consequence, our general capacity maximization problem (11) reduces to an exit capacity maximization problem of the form

$$ \begin{array}{@{}rcl@{}} &\max\limits_{x_{-}} w_{-}^{T}x_{-}& \\ &\mathbb{P}\left( \begin{array}{c} \xi\in [0,C_{-}]~\forall y\in [0,x_{-}]~\exists (q,\pi):\\ \mathcal{A}q=\left( \begin{array}{c} -\mathbf{1}_{-}^{T}\xi-\mathbf{1}_{-}^{T}y \\ \xi +y \end{array}\right);~\mathcal{A}^{T}\pi =-{\Phi} q|q|;~\pi \in [\pi_{\ast},\pi^{\ast}] \end{array}\right)\geq p&. \end{array} $$
(12)

Here, the remaining decision variables are just the extensions \(x_{-}\) of the exit capacities. Since no capacity extension for the single entry is intended and since its existing capacity is not constrained by our assumption, the corresponding constraint disappears, as does the balance equation, which is just substituted into the description of technical feasibility. The resulting optimization problem no longer contains entry nominations at all but only random exit nominations ξ and deterministic exit nominations y of new clients along with the additionally allocated booking capacities \(x_{-}\).

Clearly, the probabilistic constraint in (12) is not yet in the explicit form of the probust constraint (4). This can be achieved in our case thanks to the network being a tree with the single entry as its root. Note that by this special structure the direction of the gas flow is completely determined. Moreover, directing all edges in E away from the root, according to [8] a vector z of exit loads in this configuration is technically feasible if and only if, in the notation introduced above, the inequality system

$$ g_{k,l}(z) :=h_{k}(z)+\pi^{\ast}_{k}-h_{l}(z)-\pi_{\ast,l}\geq 0\quad (k,l=0,\ldots,m) $$
(13)

is satisfied, where

$$ h_{k}(z):=\left\{ \begin{array}{ll} \sum\limits_{e\in {\Pi} (k)}{\Phi}_{e}\left( \sum\limits_{t\succeq H(e)}z_{t}\right)^{2}&\text{ if }k\geq 1, \\ 0 &\text{ if }k=0. \end{array}\right. $$
(14)

In order to explain the definition of the functions hk above, we write \(k\succeq l\) for \(k,l\in V\) if the unique directed path from the root to k, denoted \({\Pi} (k)\), passes through l. The term H(e) refers to the head of the (directed) arc eE.
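The notation \({\Pi}(k)\), H(e) and \(\succeq\) and the functions (13)–(14) can be illustrated by a small sketch; the tree, loads and roughness coefficients below are hypothetical.

```python
# Hypothetical tree: root 0 with children 1 and 2; node 2 has child 3.
parent = {1: 0, 2: 0, 3: 2}            # edge e_k = (parent[k], k), head H(e_k) = k
children = {}
for c, p in parent.items():
    children.setdefault(p, []).append(c)
Phi = {1: 1e-4, 2: 2e-4, 3: 1.5e-4}    # roughness of the edge with head k (assumed)

def path_to_root(k):
    """Heads of the edges on Pi(k), the unique path from the root to k."""
    heads = []
    while k != 0:
        heads.append(k)
        k = parent[k]
    return heads

def subtree_sum(k, z):
    """Sum of loads z_t over all t >= k, i.e. over the subtree rooted at k."""
    return z[k] + sum(subtree_sum(c, z) for c in children.get(k, []))

def h(k, z):
    """h_k(z) from (14): accumulated pressure-square drop from the root to k."""
    if k == 0:
        return 0.0
    return sum(Phi[j] * subtree_sum(j, z) ** 2 for j in path_to_root(k))

def g_kl(k, l, z, pi_hi, pi_lo):
    """g_{k,l}(z) from (13); feasibility means g_{k,l}(z) >= 0 for all k, l."""
    return h(k, z) + pi_hi[k] - h(l, z) - pi_lo[l]

z = {0: 0.0, 1: 10.0, 2: 5.0, 3: 20.0}   # exit loads (illustrative)
# h(3, z) = Phi_2*(z_2 + z_3)**2 + Phi_3*z_3**2 = 2e-4*625 + 1.5e-4*400 = 0.185
```

Note how each edge carries the summed loads of all nodes downstream of its head, which is exactly the inner sum over \(t\succeq H(e)\) in (14).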

With these specifications, which are fully explicit in terms of the initial data of the problem, we reformulate problem (12) with the aid of inequalities (13) as

$$ \begin{array}{@{}rcl@{}} &\max w_{-}^{T}x_{-}&\\ &\mathbb{P}\left( \xi\in [0,C_{-}],~g_{k,l}(\xi +y) \geq 0\quad \forall y\in [0,x_{-}];\quad \forall k,l=0,{\ldots} ,m\right)\geq p.& \end{array} $$
(15)

The meaning of this constraint is as follows: the capacity extension \(x_{-}\) is feasible if and only if, with probability at least p ∈ (0,1], the sum ξ + y of the original random nomination vector and a new nomination vector can be technically realized for every new nomination vector within the limits \([0,x_{-}]\). Clearly, the probust constraint (15) is of the form (5), with u := y, the decision variable \(x_{-}\) and the uncertainty set \(\mathcal {U}(x_{-}):=[0,x_{-}]\).

In [16] it is shown that the infinite random inequality system

$$ g_{k,l}(\xi +y) \geq 0\quad \forall y\in [0,x];\quad \forall k,l=0,{\ldots} ,m $$

inside (15) can be reduced—using (13) and (14)—to the following finite one

$$ \begin{array}{@{}rcl@{}} \sum\limits_{e\in {\Pi} (k)\backslash {\Pi} (l)}{\Phi}_{e}\left( \sum\limits_{t\succeq H(e)}\xi_{t}\right)^{2}-\sum\limits_{e\in {\Pi}(l)\backslash {\Pi} (k)}{\Phi}_{e}\left( \sum\limits_{t\succeq H(e)}\xi_{t}+(x_{-})_{t}\right)^{2}&\geq&\pi_{\ast,l}-\pi^{\ast}_{k};\\ && \forall k,l=0,\ldots,m. \end{array} $$
(16)

For the random vector ξ of stochastic exit nominations we will suppose that it follows a truncated multivariate Gaussian distribution:

$$ \xi \sim \mathcal{T}\mathcal{N}(\mu,{\Sigma},[0,C_{-}]). $$

More precisely, the distribution of ξ is obtained by truncating an m-dimensional Gaussian distribution with mean μ and covariance matrix Σ to an m-dimensional rectangle [0,C] representing the (historical) booked capacity at all exit nodes. By definition of truncation, this means that there exists a Gaussian random vector \(\tilde {\xi }\sim \mathcal {N}(\mu ,{\Sigma })\) such that

$$ \mathbb{P}(\xi \in A) = \frac{\mathbb{P}\left( \tilde{\xi}\in A\cap [0,C_{-}] \right)}{\mathbb{P}\left( \tilde{\xi}\in [0,C_{-}]\right)} $$

holds true for all Borel measurable subsets \(A\subseteq \mathbb {R}^{m}\). Hence, in order to determine probabilities under a truncated Gaussian distribution, it is sufficient to be able to determine probabilities under a Gaussian distribution itself. Applying this observation to the probabilistic constraint (15) and combining it with (16) yields the equivalent description
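The truncation identity above can be checked numerically in one dimension; the parameters below are arbitrary illustration values, and the truncated sample is realized by simple rejection.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Illustrative 1-d parameters: xi ~ TN(mu, sigma^2, [0, C]).
mu, sigma, C = 1.0, 1.0, 2.0
xi_tilde = rng.normal(mu, sigma, size=1_000_000)   # the Gaussian xi_tilde
inside = (xi_tilde >= 0.0) & (xi_tilde <= C)
xi = xi_tilde[inside]                              # rejection sampling gives TN

# Check the identity for the test set A = [0.5, 1.5] (a subset of [0, C]):
A_lo, A_hi = 0.5, 1.5
lhs = np.mean((xi >= A_lo) & (xi <= A_hi))         # Monte Carlo P(xi in A)
rhs = (Phi((A_hi - mu) / sigma) - Phi((A_lo - mu) / sigma)) / \
      (Phi((C - mu) / sigma) - Phi((0.0 - mu) / sigma))  # exact quotient
```

Up to sampling error, `lhs` and `rhs` agree, which is precisely the statement that truncated-Gaussian probabilities reduce to quotients of Gaussian probabilities.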

$$ \begin{array}{@{}rcl@{}} &&\mathbb{P}\left( \tilde{\xi}\in [0,C_{-}] : ~\sum\limits_{e\in {\Pi}(k)\backslash {\Pi}(l)}{\Phi}_{e}\left( \sum\limits_{t\succeq H(e)}\tilde{\xi}_{t}\right)^{2}\right.\\ &&\quad\left. -\sum\limits_{e\in {\Pi}(l)\backslash {\Pi} (k)}{\Phi}_{e}\left( \sum\limits_{t\succeq H(e)}\tilde{\xi}_{t}+(x_{-})_{t}\right)^{2}\geq \pi_{\ast,l}-\pi^{\ast}_{k};\quad\forall k,l=0,{\ldots} ,m\right)\\ &&\geq p\cdot \mathbb{P}\left( \tilde{\xi}\in [0,C_{-}] \right). \end{array} $$
(17)

This is now, in contrast to (15), a conventional probabilistic constraint over a finite inequality system. We aim to apply an efficient gradient-based solution algorithm. To this end, in order to deal algorithmically with the probabilistic constraint (17), one evidently has to be able to calculate, for each fixed decision vector \(x_{-}\), the probabilities occurring there as well as their derivatives with respect to \(x_{-}\). In the following section we briefly sketch the methodology used here.

3.2 Spheric-Radial Decomposition of Gaussian Random Vectors

From the well-known spheric-radial decomposition (see, e.g., [7]) of a Gaussian random vector \(\tilde {\xi }\sim \mathcal {N}(\mu ,{\Sigma })\) it follows that the probability of an arbitrary Borel measurable subset M of \(\mathbb {R}^{m}\) may be represented as the following integral over the unit sphere \(\mathbb {S}^{m-1}\):

$$ \mathbb{P}(\tilde{\xi}\in M) = {\int}_{v\in \mathbb{S}^{m-1}}\mu_{\chi}(E(v)) d\mu_{\eta}(v). $$

Here, μχ refers to the one-dimensional Chi-distribution with m degrees of freedom, μη is the uniform distribution on \(\mathbb {S}^{m-1}\) and

$$ E(v):=\{r\geq 0~|~\mu +rPv\in M\}, $$

where P is a factor from a decomposition Σ = PPT of the covariance matrix Σ. Following these remarks, the probability on the left-hand side of (17) (depending also on the decision variable x) can be represented as

$$ {\int}_{v\in \mathbb{S}^{m-1}}\mu_{\chi}(E(v,x_{-})) d\mu_{\eta }(v), $$
(18)

where

$$ E(v,x_{-})=\{r\geq 0~|~\mu +rPv\in [0,C_{-}]\} \cap \bigcap\limits_{k,l=0,\ldots,m}E^{k,l}(v,x_{-}) $$
(19)

and, with Pt denoting row number t of P, for k, l = 0,…,m:

$$ \begin{array}{@{}rcl@{}} E^{k,l}(v,x_{-})&:=&\left\{r\geq 0~|\sum\limits_{e\in {\Pi} (k)\backslash {\Pi} (l)} {\Phi}_{e}\left( \sum\limits_{t\succeq H(e)}\mu_{t}+rP_{t}v\right)^{2}\right.\\ &&\left. - \sum\limits_{e\in {\Pi} (l)\backslash {\Pi} (k)} {\Phi}_{e}\left( \sum\limits_{t\succeq H(e)}\mu_{t}+rP_{t}v+(x_{-})_{t}\right)^{2}\geq \pi_{\ast,l}-\pi^{\ast}_{k}\right\}. \end{array} $$
(20)

In order to evaluate the integrand in (18), one has to be able to characterize (for each given \(v\in \mathbb {S}^{m-1}\) and \(x_{-}\in \mathbb {R}^{m}\)) the set \(E(v,x_{-})\) and to determine its chi probability. The latter task is easily accomplished whenever the set \(E(v,x_{-})\) can be represented as a finite union of intervals, because numerically highly precise approximations of the one-dimensional chi distribution function exist.
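Once such a finite union of intervals is available, its chi probability is a sum of differences of the chi distribution function. The sketch below, with illustrative intervals, does this for three degrees of freedom, where the chi distribution function (the Maxwell CDF) has a closed form expressible with stdlib functions.

```python
from math import erf, exp, pi, sqrt

def chi3_cdf(r):
    """CDF of the chi distribution with 3 degrees of freedom (Maxwell CDF)."""
    if r <= 0.0:
        return 0.0
    return erf(r / sqrt(2.0)) - sqrt(2.0 / pi) * r * exp(-r * r / 2.0)

def chi3_measure(intervals):
    """Chi-3 probability of a finite union of disjoint intervals
    [(a, b), ...], where b may be float('inf')."""
    total = 0.0
    for a, b in intervals:
        upper = 1.0 if b == float("inf") else chi3_cdf(b)
        total += upper - chi3_cdf(max(a, 0.0))
    return total

# e.g. the chi-3 measure of a set E(v) = [0, 1] ∪ [2, inf):
m = chi3_measure([(0.0, 1.0), (2.0, float("inf"))])  # roughly 0.46
```

For general dimension m one would use a library implementation of the chi distribution function instead of the closed Maxwell form.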

Hence, we are left with the task of efficiently representing \(E(v,x_{-})\) as a finite union of intervals. This is easily done for the first set in the intersection defining \(E(v,x_{-})\) in (19), which can be shown to be either empty or an interval:

$$ \begin{array}{@{}rcl@{}} \left\{r\geq 0~|~\mu +rPv\in [0,C_{-}]\right\}{} &=&{} \left\{{}\begin{array}{ll} \emptyset &{}\quad\text{if }\exists t\in {}\{{}1,\ldots,m\}:P_{t}v=0,~\mu_{t}\notin [0,C_{-,t}],\\ {[}L,R] &\quad{} \text{else}; \end{array} \right.\quad\\ L &:= & \max \left\{0,~\max\limits_{P_{t}v>0}\frac{-\mu_{t}}{P_{t}v},~\max\limits_{P_{t}v<0}\frac{C_{-,t}-\mu_{t}}{P_{t}v}\right\}\quad \text{ and}\\ R &:= & \min \left\{\min\limits_{P_{t}v>0}\frac{C_{-,t}-\mu_{t}}{P_{t}v},~\min\limits_{P_{t}v<0} \frac{-\mu_{t}}{P_{t}v}\right\}. \end{array} $$
(21)
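Formula (21) translates directly into code; the following sketch (with invented data) returns the interval [L, R], or None if the set is empty.

```python
import numpy as np

def ray_box_interval(mu, Pv, C):
    """The set { r >= 0 : mu + r*Pv in [0, C] } from (21): None (empty)
    or the interval (L, R)."""
    # degenerate coordinates: zero direction component, mean outside the box
    zero = (Pv == 0.0)
    if np.any(zero & ((mu < 0.0) | (mu > C))):
        return None
    pos, neg = Pv > 0.0, Pv < 0.0
    L = max(0.0,
            np.max(-mu[pos] / Pv[pos], initial=-np.inf),
            np.max((C[neg] - mu[neg]) / Pv[neg], initial=-np.inf))
    R = min(np.min((C[pos] - mu[pos]) / Pv[pos], initial=np.inf),
            np.min(-mu[neg] / Pv[neg], initial=np.inf))
    return (L, R) if L <= R else None

# Illustrative data (not from the paper):
mu = np.array([1.0, 1.0])
Pv = np.array([1.0, -0.5])
C = np.array([3.0, 2.0])
iv = ray_box_interval(mu, Pv, C)   # here L = 0.0 and R = 2.0
```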

As for the second part of the intersection in (19), we will provide for each k, l an explicit representation of the set \(E^{k,l}(v,x_{-})\) either as a single interval or as the disjoint union of two intervals, such that the intersection of all these sets (and the first set determined above) is readily obtained in the form of a finite union of disjoint intervals. Indeed, upon expanding the expressions in (20) in terms of r, one arrives at the representation

$$ E^{k,l}(v,x_{-})=\left\{r \geq 0~|~\alpha^{k,l}r^{2}+{\upbeta}^{k,l}r+\gamma^{k,l}\geq 0\right\}\quad (k,l=0,\ldots,m), $$

where, for k, l = 0,…,m:

$$ \begin{array}{@{}rcl@{}} \alpha^{k,l} &:=&\sum\limits_{e\in {\Pi} (k)\backslash {\Pi} (l)}{\Phi}_{e}\left( \sum\limits_{t\succeq H(e)}P_{t}v\right)^{2}-\sum\limits_{e\in {\Pi} (l)\backslash {\Pi} (k)}{\Phi}_{e}\left( \sum\limits_{t\succeq H(e)}P_{t}v\right)^{2} \\ {\upbeta}^{k,l} &:=&2\sum\limits_{e\in {\Pi} (k)\backslash {\Pi} (l)}{\Phi}_{e}\left( \sum\limits_{t\succeq H(e)}\mu_{t}\right) \left( \sum\limits_{t\succeq H(e)}P_{t}v\right) \\ &&-2\sum\limits_{e\in {\Pi}(l)\backslash {\Pi} (k)}{\Phi}_{e}\left( \sum\limits_{t\succeq H(e)}\mu_{t}+x_{-,t}\right) \left( \sum\limits_{t\succeq H(e)}P_{t}v\right) \\ \gamma^{k,l} &:=&\sum\limits_{e\in {\Pi} (k)\backslash {\Pi} (l)}{\Phi}_{e}\left( \sum\limits_{t\succeq H(e)}\mu_{t}\right)^{2}-\sum\limits_{e\in {\Pi} (l)\backslash {\Pi} (k)}{\Phi}_{e}\left( \sum\limits_{t\succeq H(e)}\mu_{t}+x_{-,t}\right)^{2}+\pi^{\ast}_{k}-\pi_{\ast,l}. \end{array} $$

This leads, by case distinction and elementary calculus, to the following explicit representation of \(E^{k,l}(v,x_{-})\) for k, l = 0,…,m:

$$ \begin{array}{@{}rcl@{}} E^{k,l}(v,x_{-}) = \left\{ \begin{array}{ll} \emptyset & \text{ 1) or 2)}, \\ \mathbb{R} & \text{ 3) or 4)}, \\ \left[-\frac{\gamma^{k,l}}{{\upbeta}^{k,l}},\infty \right) &\text{ 5)}, \\ \left( -\infty,-\frac{\gamma^{k,l}}{{\upbeta}^{k,l}}\right] &\text{ 6)}, \\ \left( -\infty,\frac{-{\upbeta}^{k,l}-\sqrt{\left( {\upbeta}^{k,l}\right)^{2}-4\alpha^{k,l}\gamma^{k,l}}}{2\alpha^{k,l}}\right]\cup \left[ \frac{-{\upbeta}^{k,l}+\sqrt{\left( {\upbeta}^{k,l}\right)^{2}-4\alpha^{k,l}\gamma^{k,l}}}{2\alpha^{k,l}},\infty \right) &\text{ 7)}, \\ \left[\frac{-{\upbeta}^{k,l}+\sqrt{\left( {\upbeta}^{k,l}\right)^{2}-4\alpha^{k,l}\gamma^{k,l}}}{2\alpha^{k,l}},\frac{-{\upbeta}^{k,l}-\sqrt{\left( {\upbeta}^{k,l}\right)^{2}-4\alpha^{k,l}\gamma^{k,l}}}{2\alpha^{k,l}}\right] &\text{ 8)}, \end{array}\right. \end{array} $$

where the case study is done according to

$$ \begin{array}{@{}rcl@{}} {1)} &&\alpha^{k,l}={\upbeta}^{k,l}=0,\quad\gamma^{k,l}<0, \\ {2)} &&\alpha^{k,l}<0,\quad\left( {\upbeta}^{k,l}\right)^{2}< 4\alpha^{k,l}\gamma^{k,l}, \\ {3)} &&\alpha^{k,l}={\upbeta}^{k,l}=0,\quad\gamma^{k,l}\geq 0, \\ {4)} &&\alpha^{k,l}>0,\quad\left( {\upbeta}^{k,l}\right)^{2}< 4\alpha^{k,l}\gamma^{k,l}, \\ {5)} &&\alpha^{k,l}=0,\quad{\upbeta}^{k,l}>0, \\ {6)} &&\alpha^{k,l}=0,\quad{\upbeta}^{k,l}<0, \\ {7)} &&\alpha^{k,l}>0,\quad\left( {\upbeta}^{k,l}\right)^{2}\geq 4\alpha^{k,l}\gamma^{k,l}, \\ {8)} &&\alpha^{k,l}<0,\quad\left( {\upbeta}^{k,l}\right)^{2}\geq 4\alpha^{k,l}\gamma^{k,l}. \end{array} $$

Along with (21), we may use this explicit description to efficiently represent the set \(E(v,x_{-})\) in (19) as the desired finite union of intervals by computing the finite intersection of sets which are intervals or disjoint unions of intervals.
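The eight-case distinction above translates directly into code. The following sketch returns the solution set of the quadratic inequality as a list of closed intervals over the whole real line; intersecting with [0, ∞) and with the other sets is then elementary.

```python
from math import inf, sqrt

def quad_ge_zero(a, b, c):
    """Solution set of a*r**2 + b*r + c >= 0 as a list of closed intervals
    over the real line, following the eight-case distinction above."""
    if a == 0.0 and b == 0.0:
        return [] if c < 0.0 else [(-inf, inf)]             # cases 1) / 3)
    if a == 0.0:
        return [(-c / b, inf)] if b > 0.0 else [(-inf, -c / b)]  # 5) / 6)
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return [(-inf, inf)] if a > 0.0 else []             # cases 4) / 2)
    r1 = (-b - sqrt(disc)) / (2.0 * a)
    r2 = (-b + sqrt(disc)) / (2.0 * a)
    lo, hi = min(r1, r2), max(r1, r2)
    if a > 0.0:
        return [(-inf, lo), (hi, inf)]                      # case 7)
    return [(lo, hi)]                                       # case 8)

# e.g. -r**2 + 3r - 2 >= 0 holds exactly on [1, 2] (case 8)
sol = quad_ge_zero(-1.0, 3.0, -2.0)
```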

It is important to note that, at the same time, the partial derivatives of the probability with respect to the decision variable \(x_{-}\) can again be calculated as a spherical integral of type (18), however with a different integrand which is easily obtained from the partial derivatives of the initial data [24]. In this gradient formula, the same disjoint union of intervals as in the computation of the probability itself is employed. The spherical integrals can be approximated by finite sums using Quasi-Monte Carlo sampling on the sphere (see, e.g., [4]). Then, for each sampled direction v on the sphere, one may first update the probability itself and then, simultaneously, the gradient of the probability with respect to \(x_{-}\) by using the same disjoint union of intervals in both cases. This approach makes the gradient come almost for free in terms of computation time. Having access to values and gradients of the probabilistic constraint (17), one may set up an appropriate nonlinear optimization solver for (15). For the subsequent numerical results, we employed a simple projected gradient method.
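Putting the pieces together, the spheric-radial representation (18) can be tested on the simple event \(\{\tilde{\xi}\in [0,C_{-}]\}\) in dimension two, where the chi distribution with two degrees of freedom has the closed-form CDF 1 − exp(−r²/2). All data below are invented, and a plain midpoint grid on the circle stands in for the Quasi-Monte Carlo sphere sampling.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative two-dimensional data (not from the paper):
mu = np.array([1.0, 1.0])
Sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
C = np.array([2.5, 2.5])
P = np.linalg.cholesky(Sigma)                # factor with Sigma = P P^T

def interval(v):
    """[L, R] = { r >= 0 : mu + r*P v in [0, C] }, cf. (21); None if empty.
    (The zero-component case of P v is omitted since mu lies inside the box.)"""
    Pv = P @ v
    lo = np.where(Pv > 0, -mu / Pv, np.where(Pv < 0, (C - mu) / Pv, -np.inf))
    hi = np.where(Pv > 0, (C - mu) / Pv, np.where(Pv < 0, -mu / Pv, np.inf))
    L, R = max(0.0, lo.max()), hi.min()
    return (L, R) if L <= R else None

# spheric-radial estimate of P(xi_tilde in [0, C]) via a midpoint grid on S^1,
# using mu_chi([a, b]) = exp(-a^2/2) - exp(-b^2/2) for 2 degrees of freedom
n = 20_000
est = 0.0
for t in (np.arange(n) + 0.5) / n * 2.0 * np.pi:
    iv = interval(np.array([np.cos(t), np.sin(t)]))
    if iv is not None:
        est += np.exp(-iv[0] ** 2 / 2.0) - np.exp(-iv[1] ** 2 / 2.0)
est /= n

# crude Monte Carlo cross-check of the same probability
samples = rng.multivariate_normal(mu, Sigma, size=200_000)
mc = np.mean(np.all((samples >= 0.0) & (samples <= C), axis=1))
```

With far fewer sphere points than Monte Carlo samples, the spheric-radial estimate already matches the cross-check closely, which is what makes the approach attractive in moderate dimensions.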

3.3 Numerical Results for an Example

As an illustrative example, similar to [16, Section 6], we considered a network as displayed in Fig. 1 with one entry (filled black circle) and 26 exits. The parameters (i.e., pressure bounds, roughness coefficients, truncated Gaussian distribution of the random nominations at the exits) were chosen as modified quantities of a real network.

Fig. 1 Solution of the capacity maximization problem at exits for different probability levels: 0.95 (top left); 0.9 (top right); 0.85 (bottom left); 0.8 (bottom right)

The applied gradient method cannot guarantee finding a globally optimal solution because the optimization problem is non-convex in \(x_{-}\). However, a stationary point satisfying the probust constraint with high accuracy can be computed. We are able to control this accuracy via the Quasi-Monte Carlo sampling. In our computations, 10 000 samples turned out to yield reasonably accurate results.

We did not assume any preferences in the allocation of new capacities, hence the weight vector in the objective of (15) was chosen as w := 1. The colored rings around the exit points refer to the optimal cumulative capacities (historical + new), i.e., C + x after maximization, for the probability levels p = 0.95, 0.9, 0.85, 0.8. The radii of the rings are proportional to the available capacities. It can clearly be seen how decreasing the probability level allows for increased allocation of capacity in certain regions of the network.

Figure 2 illustrates how the computed solution for a probability level p = 0.8 works for two random exit nomination scenarios ξ simulated a posteriori according to the chosen truncated multivariate Gaussian distribution. The first scenario is feasible because one could add a common capacity to every exit (green color) in order to satisfy this scenario. In contrast, the second scenario is infeasible because one would have to reduce the capacities by an amount corresponding to the dark red rings, in order to satisfy this scenario. When simulating a large set of such scenarios, say 1000, it would turn out that according to the probability level p = 0.8 approximately 800 are feasible, while 200 are infeasible.
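Such an a posteriori validation amounts to simple scenario counting; a minimal sketch, with toy stand-ins for both the scenario sampler and the feasibility test (in the application, the test would check the existence of a feasible flow for the simulated nomination):

```python
import numpy as np

rng = np.random.default_rng(1)

def empirical_feasibility(is_feasible, sample_scenario, n=1000):
    # a posteriori validation: fraction of simulated scenarios that are feasible
    return sum(is_feasible(sample_scenario()) for _ in range(n)) / n

# toy stand-ins: scenarios uniform on [0, 1], "feasible" iff below 0.8,
# mimicking a probability level p = 0.8
frac = empirical_feasibility(lambda s: s < 0.8, rng.random)
print(frac)  # close to 0.8
```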

Fig. 2

Two scenarios for random exit loads ξ according to the chosen multivariate truncated Gaussian distribution. Left: feasible scenario; Right: infeasible scenario

3.4 Methodology in the Presence of Cycles

It is generally acknowledged that the presence of cycles in gas networks is both realistic for applications and demanding for formal analysis. In what follows, we elucidate this by calculating the probability of feasible nominations in a gas network with cycles. Networks with a single cycle or with multiple node-disjoint cycles are covered in [8], which essentially relies on explicit formulas for the roots of univariate real polynomials of degree less than 5.

If cycles in a gas network share edges, then the approach via the mentioned formulas is no longer valid, nor can it be stretched to more general cases. A first alternative attempt in this respect has been undertaken recently in [9] for networks with up to three mutually interconnected cycles.

To display the state of the art in calculating, by spheric-radial decomposition, probabilities of sets of feasible nominations in gas networks with cycles, consider again

$$ E_{i}:= E(v_{i}) = \{r\ge 0 | \mu+rPv_{i}\in M\} $$

for every sample \(v_{1},\dots , v_{s}\) on the sphere. Analogously to the case of trees, the set Ei can be expressed as a finite union of disjoint intervals, \(E_{i} = \cup _{j=1}^{l} [a_{j}, b_{j}]\). For calculating its probability, it is sufficient to determine all points where the ray rPvi + μ enters or exits the set of feasible load vectors M.

With cycles, the matrix \(\mathcal {A}\) decomposes into a basis part \(\mathcal {A}_{B}\) and a non-vacuous non-basis part \(\mathcal {A}_{N}\) whose columns correspond to the fundamental cycles with respect to the tree behind \(\mathcal {A}_{B}\). Accordingly q, Φ are split into qB, qN and ΦB, ΦN.

In [8] it is shown that a given load \((-\mathbf{1}^{\top}\xi, \xi)\) is feasible iff there exists a qN such that

$$ \begin{array}{@{}rcl@{}} \mathcal{A}_{N}^{\top} g(\xi,q_{N}) & = & {\Phi}_{N} |q_{N}| q_{N},\\ \min\limits_{i=1,\ldots,|V|-1} \left[\pi^{\ast}_{i} + g_{i}(\xi,q_{N}) \right] & \ge & \max\limits_{i=1,\ldots,|V|-1} \left[\pi_{\ast,i}+ g_{i}(\xi,q_{N})\right],\\ \pi_{\ast,0} &\le & \min\limits_{i=1,\ldots,|V|-1} \left[\pi^{\ast}_{i} + g_{i}(\xi,q_{N}) \right],\\ \pi_{0}^{\ast} & \ge & \max\limits_{i=1,\ldots,|V|-1} \left[\pi_{\ast,i}+ g_{i}(\xi,q_{N})\right] \end{array} $$

with the function \(g \colon \mathbb {R}^{|V| -1} \times \mathbb {R}^{|N|} \to \mathbb {R}^{|V|-1}\) where

$$ g(s,t) := \left( \mathcal{A}^{\top}_{B}\right)^{-1}{\Phi}_{B} |\mathcal{A}_{B}^{-1}(s-\mathcal{A}_{N}t)| \left( \mathcal{A}_{B}^{-1}(s-\mathcal{A}_{N}t)\right)~\forall (s,t) \in \mathbb{R}^{|V| -1} \times \mathbb{R}^{|N|}. $$

Having in mind the spheric-radial decomposition and the sets Ei, we insert ξ(r) = rPvi + μ into the above characterization of feasible loads and reformulate the min, max expressions. Then Ei consists of all \(r\in \mathbb {R}_{\ge 0}\) for which there is a qN such that

$$ \begin{array}{@{}rcl@{}} \mathcal{A}_{N}^{\top} g(rPv_{i} +\mu,q_{N}) & = & {\Phi}_{N}|q_{N}|q_{N} \end{array} $$
(22)
$$ \begin{array}{@{}rcl@{}} \pi^{\ast}_{j} + g_{j}(rPv_{i} +\mu,q_{N}) & \ge & \pi_{\ast,k}+ g_{k}(rPv_{i} +\mu,q_{N})~\text{ for all } j,k = 1,\ldots,|V|-1,~j\neq k, \end{array} $$
(23)
$$ \begin{array}{@{}rcl@{}} \pi_{\ast,0} &\le & \pi^{\ast}_{j} + g_{j} (rPv_{i} +\mu,q_{N})~ \text{ for all } j = 1,\ldots,|V|-1, \end{array} $$
(24)
$$ \begin{array}{@{}rcl@{}} \pi_{0}^{\ast} & \ge & \pi_{\ast,j}+ g_{j}(rPv_{i} +\mu,q_{N})~ \text{ for all } j = 1,\ldots,|V|-1. \end{array} $$
(25)

To decide, for a given sample point vi, whether the ray rPvi + μ enters or exits the set of feasible load vectors M and, in the affirmative, to compute an entry or exit point, the following basic procedure is possible: augment one of the inequalities of the system (23)–(25) as an equation to (22), yielding a system of |N| + 1 degree-2 multivariate polynomial equations with |N| + 1 unknowns.

Notice that the above considerations hold for gas networks with arbitrary cardinality |N| of non-basis columns in \(\mathcal {A}\). Of course, the number of augmentations, and hence the number of passes through some polynomial-equation solver can become exorbitant.

A first attempt on solving systems with multivariate polynomials via computer algebra is reported in [9] for |N|≤ 3. In the core of the method there are Gröbner bases inducing “triangular” representations of the polynomial equations, allowing for “reverse propagation” of solutions, in the spirit of Gaussian elimination, with multivariate quadratic polynomials instead of linear forms.

In contrast with gradient-type analytical solvers, algebraic solvers using symbolic computation usually detect infeasibility of the system under consideration much faster, which can be crucial, as the decision of (in-)feasibility is one of the fundamental tasks in this context. These methods rely on iteratively transforming bases of ideals: emptiness follows as soon as a constant polynomial arises in the current ideal basis.
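Both mechanisms, the triangular lex structure and the constant-polynomial infeasibility certificate, can be illustrated with a toy quadratic system in SymPy; the system below is purely illustrative and not derived from (22)–(25):

```python
import sympy as sp

x, y = sp.symbols("x y")

# toy quadratic system (purely illustrative, not derived from (22)-(25))
F = [x**2 + y**2 - 5, x*y - 2]
G = sp.groebner(F, y, x, order="lex")   # lex order with y > x eliminates y

# the lex basis is "triangular": it contains a univariate polynomial in x
uni = [g for g in G.exprs if g.free_symbols == {x}][0]
print(uni)                              # x**4 - 5*x**2 + 4
print(sorted(sp.solve(uni, x)))         # [-2, -1, 1, 2]; back-substitute for y

# infeasibility certificate: the basis of an empty (complex) variety is [1]
print(sp.groebner([x, x - 1], x, order="lex").exprs)  # [1]
```

Solving the univariate polynomial and propagating its roots backwards through the remaining basis elements mirrors the "reverse propagation" described above.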

There are a number of possible improvements, some of which are investigated in [9], that deserve further exploration: identifying and removing redundant inequalities in (23)–(25), studying the special structure of the system (22), and exploring the impact of “Comprehensive Gröbner Systems”, which were developed for parametric polynomial equations, see [20].

4 Capacity Maximization Under Uncertain Loads and Uncertain Distribution of Entry Nominations

After discussing the methodology for the case of uncertain exit loads, we now address the case of uncertain entry loads and fixed exit capacities, i.e., x = 0, and take only extensions x+ of the entry capacities C+ into account. Thus, we consider the following optimization problem:

$$ \begin{array}{@{}rcl@{}} &\max\limits_{x_{+}} w_{+}^{T}x_{+}& \\ &\mathbb{P}\left( \begin{array}{c} \xi \in [0,C_{-}]~\forall b\in [-C_{+} - x_{+},0]:\mathbf{1}_{+}^{T}b+\mathbf{1}_{-}^{T}\xi=0 \\ \exists (q,\pi):\mathcal{A}q= \left( \begin{array}{l} b \\ \xi \end{array}\right);~\mathcal{A}^{T}\pi =-{\Phi} q|q|;~\pi \in [\pi_{\ast},\pi^{\ast}] \end{array}\right)\geq p.& \end{array} $$
(26)

In other words, for a realization d of the random variable ξ and for an extension x+, we want every entry nomination of the uncertainty set

$$ \mathcal{U}(d,x_{+}) := \{b \in [-C_{+} - x_{+},0]:~ \mathbf{1}_{+}^{T}b+\mathbf{1}_{-}^{T}d=0 \} $$
(27)

to be feasible with probability p. The set of realizations of ξ for which every entry nomination is feasible for a given x+ is henceforth denoted as D(x+):

$$ D(x_{+}):=\left\{d \in [0,C_{-}]:\forall b \in \mathcal{U}(d, x_{+}) ~\exists (q,\pi):\mathcal{A}q = \left( \begin{array}{l} b \\ d \end{array}\right),\mathcal{A}^{T}\pi = -{\Phi}|q|q,\pi \in [\pi_{\ast},\pi^{\ast}] \right\}. $$

Applying this notation, we can formulate the probust constraint of problem (26) more compactly:

$$ \begin{array}{@{}rcl@{}} &&\max\limits_{x_{+}} w_{+}^{T} x_{+} \\ &&\mathbb{P}(\xi \in D(x_{+})) \geq p. \end{array} $$
(28)

We note that the ‘probust’ nature of the constraint is now ‘hidden’ in the set D(x+) appearing in the probability constraint.

Analogously to Section 3, we assume that \(\xi \sim \mathcal {T}\mathcal {N}(\mu , {\Sigma }, [0,C_{-}])\). Furthermore, we assume that the following mild condition holds:

  1. (C1)

    There is a bound y+ ≥ 0, such that μD(x+) for all x+ ∈ [0,y+], i.e., the mean μ of ξ is a feasible exit booking nomination for all admissible x+.

Since ξ is based on historical data, the mean being feasible for a capacity extension x+ is a natural assumption for practical applications, at least if the upper bound y+ is not too large. Furthermore, since capacity extensions cannot be arbitrarily large in practice, such a bound y+ naturally exists.

In the following, we apply the spheric-radial decomposition, see Section 3.2, to reformulate the probust constraint of problem (28) as an integral:

$$ \begin{array}{@{}rcl@{}} &&\max\limits_{x_{+} \in [0,y_{+}]} w_{+}^{T} x_{+} \\ &&{\int}_{v\in \mathbb{S}^{m-1}}\mu_{\chi}\left( E(v,x_{+})\right) d\mu_{\eta}(v) \geq p. \end{array} $$
(29)

As before, m is the number of exit nodes, \(\mathbb {S}^{m-1}\) the unit sphere, μχ refers to the one-dimensional χ-distribution with m degrees of freedom, μη is the uniform distribution on \(\mathbb {S}^{m-1}\) and

$$ E(v,x_{+}):=\{r\geq 0 ~|~ \mu + rPv \in D(x_{+})\}, $$

where P is a factor from a decomposition Σ = PPT of the covariance matrix Σ.

We aim to solve problem (29) numerically, and to this end we approximate the integral. We briefly summarize the method of Section 3.2 for our purposes: we sample N vectors \(v_{1}, \dots , v_{N}\) on the unit sphere \(\mathbb {S}^{m-1}\) and compute E(vi, x+) for each sampled vector vi. Hence

$$ {\int}_{v\in \mathbb{S}^{m-1}}\mu_{\chi}\left( E(v,x_{+})\right) d\mu_{\eta}(v) \approx \frac1N \sum\limits_{i=1}^{N} \mu_{\chi} \left( E(v_{i},x_{+})\right). $$
(30)

In [8], it is pointed out that E(vi, x+) is a finite union of intervals:

$$ E(v_{i},x_{+}) =: \cup_{j=0}^{k} [a_{j},b_{j}] $$

and that in this case,

$$ \mu_{\chi} (E(v_{i},x_{+})) = \sum\limits_{j=0}^{k} \left( F_{\chi}(b_{j}) - F_{\chi}(a_{j})\right), $$
(31)

where Fχ is the distribution function of the respective χ distribution. Now, for the numerical accessibility of (31) we make an additional assumption:

  1. (C2)

    For any x+ ≥ 0, D(x+) is star-shaped with respect to μ.

Using (C2), it is immediately seen that E(vi, x+) = [a0, b0], i.e., E(vi, x+) is a simple interval. Thus, (31) becomes

$$ \mu_{\chi} (E(v_{i},x_{+})) = F_{\chi}(b_{0}) - F_{\chi}(a_{0}). $$

Furthermore, due to condition (C1), we have 0 ∈ E(vi, x+), which implies a0 = 0. With this, b0 is the length of the interval E(vi, x+), i.e., b0 = |E(vi, x+)|. As there exist high-quality numerical approximations of Fχ, the value of μχ(E(vi, x+)) can be computed if b0 is numerically accessible.
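Formula (31) and its single-interval special case are straightforward to evaluate with a standard implementation of Fχ, e.g., scipy.stats.chi; a minimal sketch:

```python
from scipy.stats import chi

def chi_measure(intervals, df):
    # mu_chi of a finite union of disjoint intervals [(a_0, b_0), ..., (a_k, b_k)],
    # evaluated as the telescoping sum of chi-distribution function values (31)
    return sum(chi.cdf(b, df) - chi.cdf(a, df) for a, b in intervals)

# single-interval case [0, b_0], as under conditions (C1) and (C2), with m = 3
print(chi_measure([(0.0, 2.0)], df=3))
print(chi_measure([(0.0, 1.0), (2.0, 3.0)], df=3))
```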

In summary, under conditions (C1) and (C2), approximating the integral in (30) for a given capacity extension x+ effectively reduces to sampling vectors vi on the unit sphere and determining |E(vi, x+)|.

Before we continue, we briefly discuss the role of (C2). First of all, it is important to state that without (C2) it is not clear how (30) can be evaluated numerically. Second, to the best of our knowledge, there is no applicable criterion for testing (C2). That means we need to assume that (C2) holds in practice. Fortunately, this is not as bad as it sounds at first glance. The reason is that we are able to show that, in the case where E(vi, x+) consisted of more than one interval, our algorithm would still correctly approximate the length of the first interval, i.e., the interval with lower bound 0. Due to the simple estimate

$$ {\int}_{v\in \mathbb{S}^{m-1}}\mu_{\chi }\left( E(v,x_{+})\right) d\mu_{\eta }(v) \approx \frac1N \sum\limits_{i=1}^{N} \mu_{\chi} (E(v_{i},x_{+})) \geq \frac1N \sum\limits_{i=1}^{N} \mu_{\chi} (E_{0}(v_{i},x_{+})), $$

this would mean that, instead of approximating |E(v, x+)|, we would compute a valid lower bound. As a consequence, the computed result would be feasible for the optimization problem (29) and thus yield a (potentially conservative) valid lower bound for the true maximum.

During the remainder of this section, we will present and discuss an algorithm for approximating |E(vi, x+)| for any sampled vi. In particular, we will apply binary search: in every iteration, it will be decided whether a given r ≥ 0 is an element of E(vi, x+). This decision is made by solving a nonlinear optimization problem (NLP) which checks whether, for all \(b \in \mathcal {U}(\mu + rPv,x_{+})\), the henceforth so-called pressure flow solution (π, q) fulfills (8). We prove the correctness of this decision procedure under the following assumption:

  1. (C3)

There is a node j ∈ V with fixed pressure, i.e., \(\pi _{\ast,j} = \pi _{j}^{\ast }\).

Condition (C3) implies that there exists exactly one solution (π, q) which fulfills \(\mathcal {A}q = (b, \xi )^{T}\) and \(\mathcal {A}^{T} \pi = -{\Phi } |q|q\), see for example Theorem 7.2 of [18]. As we will see later, the uniqueness of the pressure flow solution is crucial for the correctness of the presented algorithm.

In order to ensure that all potential violations of pressure bounds are detected, globally optimal solutions are required. To achieve this, we relax the NLP to a mixed-integer linear problem (MIP) by interpolating the nonlinearities with linear splines and modeling these splines through linear constraints and additional binary and continuous variables. The resulting MIP can be solved to global optimality with off-the-shelf software. The effects of the linearization are pointed out in the remainder of the section.

We would like to point out that, like condition (C1), condition (C3) is not a critical assumption in reality. Since gas is injected at some entry node, we can assume that the pressure at this node is known.

We conclude this section by the presentation of computational results that show the effectiveness of our method.

4.1 Methodology for General Stationary Networks

As discussed beforehand, approximating the integral under conditions (C1) and (C2) can be reduced to deciding whether r ∈ E(v, x+) for a real number r ≥ 0 and a sampled vector \(v \in \mathbb {S}^{m-1}\). In other words, we need to check whether μ + rPv ∈ D(x+), i.e., whether μ + rPv is robust feasible:

Definition 1

Let d be a realization of the random variable ξ and let x+ be an entry capacity extension. If d ∈ D(x+), then d is called robust feasible. The problem of deciding whether d ∈ D(x+) is denoted as DecProb(d, x+). Analogously, for a sampled vector \(v \in \mathbb {S}^{m-1}\), the real number r ≥ 0 is called robust feasible for v if μ + rPv ∈ D(x+), i.e., r ∈ E(v, x+). The problem of deciding the robust feasibility of a number r for a vector v is denoted as DecProb(r, v, x+).

In the special case of unbounded pressure at every node, DecProb(d, x+) would be answered positively for every feasible realization d and every extension x+ (see [18], Theorem 7.1). Although in real gas network operations and in our setting the pressures are bounded, we can use the following: we introduce penalty functions for every u ∈ V, in formulas,

$$ f_{u}\colon \mathbb{R} \to \mathbb{R}^{+}, \quad \pi_{u} \mapsto \max\{0, \pi_{\ast,u} - \pi_{u}, \pi_{u} - \pi^{\ast}_{u}\}. $$

If fu(πu) > 0 for a node u ∈ V, then πu lies outside its bounds. Consequently, \(\pi \in [\pi _{\ast }, \pi ^{\ast }]\) if and only if

$$ \sum\limits_{u \in V} f_{u}(\pi_{u}) = 0. $$
(32)
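The penalty criterion (32) is elementary to evaluate; a minimal sketch, where the node pressures and bounds are illustrative placeholders:

```python
def pressure_penalty(pi, pi_lo, pi_hi):
    # sum of f_u(pi_u) = max(0, pi_lo_u - pi_u, pi_u - pi_hi_u) over all nodes;
    # the sum is zero iff every pressure lies within its bounds, cf. (32)
    return sum(max(0.0, lo - p, p - hi) for p, lo, hi in zip(pi, pi_lo, pi_hi))

print(pressure_penalty([1.0, 2.0], [0.0, 0.0], [3.0, 3.0]))  # 0.0: within bounds
print(pressure_penalty([4.0], [0.0], [3.0]))                 # 1.0: exceeds by 1
```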

Now consider a balanced nomination \((b, d)^{T}\) and the optimization problem

$$ \begin{array}{@{}rcl@{}} && \max\limits_{\pi,q}~\sum\limits_{u \in V} f_{u}(\pi_{u}) \\ \text{s.t. } &&\mathcal{A}q = \left( \begin{array}{l} b \\ d \end{array}\right), \\ && \mathcal{A}^{T} \pi = -{\Phi}|q|q. \end{array} $$
(33)

Since the pressure is unbounded in (33), there exists a pressure flow solution (π, q), and since condition (C3) holds, it is unique, i.e., the feasible set of problem (33) is a singleton. Consequently, (b, d) is a realizable nomination if and only if the optimal value of problem (33) is 0, i.e., (32) holds.

However, for a given sampled vector v and a real number r, we want to check whether DecProb(r, v, x+) is answered positively, i.e., whether for every b which results in a balanced nomination (μ + rPv, b) there exists a pressure flow solution which satisfies (8). Therefore, we modify (33):

$$ \text{Pen}(r,v,x_{+}):\quad \begin{array}{@{}rcl@{}} && \max\limits_{b,\pi,q}~\sum\limits_{u \in V} f_{u}(\pi_{u}) \\ \text{s.t. }&& \mathcal{A}q = \left( \begin{array}{c} b \\ \mu + rPv \end{array}\right), \\ && \mathcal{A}^{T} \pi = -{\Phi}|q|q, \\ && \mathbf{1}_{-}^{T} (\mu + rPv) + \mathbf{1}_{+}^{T}b = 0, \\ && b \in [-C_{+} - x_{+}, 0]. \end{array} $$

The feasible set of Pen(r, v, x+) consists of the vectors (b, π, q) for which (μ + rPv, b) is a balanced nomination and for which there exists a pressure flow solution which satisfies (8) but could violate the pressure bounds.

In particular, we will show in the following theorem that r is robust feasible for v if and only if the optimal value of problem Pen(r, v, x+) is 0.

Theorem 1

Let \(v \in \mathbb {S}^{m-1}\), r ≥ 0 and let x+ ≥ 0 be a capacity extension at the entries. Assume that condition (C3) holds. Then DecProb(r, v, x+) is answered positively if and only if problem Pen(r, v, x+) is solvable with optimal value 0.

Proof

Since fu(πu) is nonnegative for all nodes u ∈ V, the optimal value of Pen(r, v, x+) is at least zero.

Assume on the one hand that DecProb(r, v, x+) is answered positively, i.e., μ + rPv is robust feasible. Now consider a solution (b, π, q) which is feasible for problem Pen(r, v, x+). If the objective value of (b, π, q) were strictly positive, there would exist a node j ∈ V with \(\pi _{j} \notin [\pi _{\ast ,j}, \pi ^{\ast }_{j}]\). Since the pressure flow solution is unique (due to condition (C3)), this contradicts the robust feasibility of μ + rPv. Therefore, the objective value is 0. Since this applies to every feasible solution, the optimal value of Pen(r, v, x+) is 0.

On the other hand, assume that the optimal value is 0. Since we maximize, this implies that for all feasible solutions (b, π, q), π lies within its prescribed bounds. In other words, for every \(b \in \mathcal {U}(\mu + rPv,x_{+})\), the unique pressure flow solution (π, q) satisfies (8). This implies the robust feasibility of μ + rPv, i.e., DecProb(r, v, x+) is answered positively. □

We note that condition (C3) is crucial in the proof of Theorem 1. Without pressure bounds, the projection of the pressure flow solutions to the squared pressure component has, for a fixed \(\overline {\pi }\), the form

$$ \left\{\overline{\pi} + \eta \mathbf{1} ~|~ \eta \in \left[\max\limits_{u \in V} \{\pi_{\ast,u} - \overline{\pi}_{u}\},~\min\limits_{u \in V} \{\pi_{u}^{\ast} - \overline{\pi}_{u}\}\right]\right\}, $$

see Theorem 7.2. of [18]. Hence, unless one pressure is fixed, the pressure values are not necessarily unique and problem Pen(r, v, x+) can be unbounded although r is robust feasible.

As already discussed, we need to determine the length of the interval E(v, x+), and by Theorem 1, problem Pen(r, v, x+) can be used for this purpose. In particular, by applying a binary search which solves the subproblem Pen(r, v, x+) in every iteration, we can approximate the length of E(v, x+).

A binary search algorithm requires a lower and an upper bound. Since \(0 \in E(v,x_{+}) \subset \mathbb {R}_{\geq 0}\) by condition (C1), we choose 0 as a lower bound. A trivial upper bound for E(v, x+) is given by the largest r compatible with the exit capacities:

$$ 0 \leq \mu +rPv \leq C_{-}. $$

However, we can even give a tighter bound. Due to (8), the pressure drop constraint

$$ \pi_{i} - \pi_{j} = {\Phi}_{i,j} \left| q_{i,j} \right| q_{i,j} $$
(34)

holds for every arc (i, j) ∈ E. Since the pressures are bounded and Φi, j > 0 for every arc (i, j), we can derive flow bounds for every arc which do not depend on the actual nomination. In the following, these flow bounds, which are called implicit flow bounds and are denoted by \(q_{\ast}\) and \(q^{\ast}\), are exploited to determine a tighter upper bound for E(v, x+):

Lemma 1

Let \(v \in \mathbb {S}^{m-1}\), let \(q_{\ast}\) and \(q^{\ast}\) be the implicit flow bounds and let x+ ≥ 0 be a capacity extension at the entry nodes. For a node u ∈ V, let δ−(u) denote the set of incoming arcs and let δ+(u) denote the set of outgoing arcs. Then an upper bound for E(v, x+) is given by the optimal value of the optimization problem

$$ \begin{array}{@{}rcl@{}} && \max\limits_{r}~r \\ \text{s.t. }&& 0 \leq \mu + rPv \leq C_{-},\quad r \geq 0, \\ && \sum\limits_{e \in \delta^{-}(u)} q_{e}^{\ast} - \sum\limits_{e \in \delta^{+}(u)} q_{\ast,e} \geq \mu_{u} + r(Pv)_{u} \quad \forall u \in V_{-}, \\ && \sum\limits_{e \in \delta^{-}(u)} q_{\ast,e} - \sum\limits_{e \in \delta^{+}(u)} q_{e}^{\ast} \leq \mu_{u} + r(Pv)_{u} \quad \forall u \in V_{-}. \end{array} $$

Proof

Since we are interested in an upper bound for E(v, x+), the constraints \(0 \leq \mu + rPv \leq C_{-}\) and r ≥ 0 are satisfied. Furthermore, Kirchhoff’s first law demands

$$ \sum\limits_{e \in \delta^{-}(u)} q_{e} - \sum\limits_{e \in \delta^{+}(u)} q_{e} = \mu_{u} + r(Pv)_{u} \quad \forall u \in V_{-} $$

for a flow q. Substituting the flow variables by the implicit flow bounds \(q_{\ast}\) and \(q^{\ast}\) results in

$$ \sum\limits_{e \in \delta^{-}(u)} q_{e}^{\ast} - \sum\limits_{e \in \delta^{+}(u)} q_{\ast,e} \geq \mu_{u} + r(Pv)_{u} \quad \forall u \in V_{-} $$

and

$$ \sum\limits_{e \in \delta^{-}(u)} q_{\ast,e} - \sum\limits_{e \in \delta^{+}(u)} q_{e}^{\ast} \leq \mu_{u} + r(Pv)_{u} \quad \forall u \in V_{-}. $$

This concludes the proof. □

With Lemma 1, the prerequisites for the binary search are in place. However, problem Pen(r, v, x+) is a non-convex nonlinear optimization problem. Since we require global optima, we aim to linearize the nonlinear constraints of problem Pen(r, v, x+), i.e., the Weymouth equation

$$ \pi_{i} - \pi_{j} = {\Phi}_{i,j} \left| q_{i,j} \right| q_{i,j} $$

by interpolating Φi, j|qi, j|qi, j with linear splines si, j(qi, j). For a given linearization error 𝜖 > 0, the linear splines are constructed such that

$$ s_{i,j}(q_{i,j}) - \epsilon \leq {\Phi}_{i,j}|q_{i,j}|q_{i,j} \leq s_{i,j}(q_{i,j}) + \epsilon $$

for every (i, j) ∈ E. Therefore, in Pen(r, v, x+), we relax (34) with

$$ s_{i,j}(q_{i,j}) - \epsilon \leq \pi_{i} - \pi_{j} \leq s_{i,j}(q_{i,j}) + \epsilon $$

for all (i, j) ∈ E. The splines si, j(qi, j) are modeled with the incremental method, see [21], using an additional set of linear inequalities and equations as well as additional continuous and binary variables. Hence, subproblem Pen(r, v, x+) is relaxed to a MIP which can be solved to global optimality. The optimal value of this relaxation is an upper bound for the optimal value of problem Pen(r, v, x+). Since the objective is non-negative, if the optimal value of the linearized problem is zero, then the optimal value of problem Pen(r, v, x+) is zero as well. The linearized version of problem Pen(r, v, x+) is henceforth denoted as Pen(r, v, x+, 𝜖).
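The construction of such splines can be sketched as follows; the uniform knot spacing \(h = 2\sqrt{\epsilon/{\Phi}_{i,j}}\) used here is one simple choice that guarantees the error bound, and the incremental MIP modeling of the spline itself is omitted:

```python
import numpy as np

def weymouth_spline_knots(phi, q_lo, q_hi, eps):
    # Knots of a linear spline s with |s(q) - phi*|q|*q| <= eps on [q_lo, q_hi].
    # Since |d^2/dq^2 (phi*|q|*q)| = 2*phi, linear interpolation on a segment of
    # width h has error at most 2*phi*h^2/8 = phi*h^2/4, so h = 2*sqrt(eps/phi)
    # guarantees the tolerance eps.
    h = 2.0 * np.sqrt(eps / phi)
    n = max(1, int(np.ceil((q_hi - q_lo) / h)))
    knots = np.linspace(q_lo, q_hi, n + 1)
    if q_lo < 0.0 < q_hi:                 # keep the kink point q = 0 as a knot
        knots = np.union1d(knots, [0.0])
    return knots, phi * np.abs(knots) * knots

# check the guaranteed accuracy on a fine grid
knots, vals = weymouth_spline_knots(phi=1.0, q_lo=-2.0, q_hi=3.0, eps=0.01)
q = np.linspace(-2.0, 3.0, 20001)
err = np.max(np.abs(np.interp(q, knots, vals) - np.abs(q) * q))
print(err)  # <= 0.01 by construction
```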

Before we summarize our algorithm for finding a lower bound for the length of E(v, x+), we note that the incremental method is applied for modeling linear splines which are defined on compact intervals. In our case, the spline variables are the flow variables which are, at least by definition, unbounded. In practice, one can for example apply the preprocessing developed in [1] for determining flow bounds. This method neglects the pressure bounds as well, which is the reason why we can apply it.

In the following, we summarize our procedure to find a lower bound for the length of E(v, x+); the tolerance for the binary search is henceforth given by tol > 0. Algorithm 1 bounds |E(v, x+)| from below with an error of at most tol. Due to Theorem 1 and Lemma 1, Algorithm 1 terminates with a correct lower bound.

This concludes the presentation of our method to determine a lower bound for |E(v, x+)| under the conditions (C1), (C2) and (C3). We note that there are several sources of approximation errors which are caused by the binary search and the linearization of Pen(r, v, x+). Yet, those can be limited by reducing tol and 𝜖 in Algorithm 1, respectively. However, one has to keep in mind that reducing tol results in more iterations and that reducing 𝜖, i.e., a tighter linearization, results in more binary variables for Pen(r, v, x+, 𝜖). Both lead to an increase of the running time, see Section 4.2.

Algorithm 1: Binary search for a lower bound on |E(v, x+)|
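Under conditions (C1)–(C3), Algorithm 1 can be sketched as a plain bisection, where the oracle `is_robust_feasible` stands in for solving the MIP relaxation Pen(r, v, x+, 𝜖) and checking whether its optimal value is 0:

```python
def lower_bound_interval_length(is_robust_feasible, r_upper, tol):
    # Bisection sketch of Algorithm 1: is_robust_feasible(r) stands in for
    # "the MIP relaxation Pen(r, v, x_+, eps) has optimal value 0".
    lo, hi = 0.0, r_upper        # lo = 0 is robust feasible by condition (C1)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if is_robust_feasible(mid):
            lo = mid             # invariant: lo is robust feasible
        else:
            hi = mid             # shrink towards the boundary of E(v, x_+)
    return lo                    # lower bound on |E(v, x_+)|, error at most tol

# toy oracle: robust feasible exactly on [0, 1.3], upper bound from Lemma 1 = 4.0
lb = lower_bound_interval_length(lambda r: r <= 1.3, r_upper=4.0, tol=1e-3)
print(lb)  # within tol below 1.3
```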

Before we discuss our numerical results, we would like to demonstrate how our algorithm could be modified to produce a lower bound on E(v, x+) in the case when condition (C2) fails to hold. As pointed out before, without condition (C2) the set E(v, x+) is in general not an interval but a finite union of intervals. Due to condition (C1), one of those intervals, henceforth denoted as E0(v, x+), has 0 as its lower bound. Obviously, R ∈ E0(v, x+) only holds if [0,R] ⊂ E0(v, x+), i.e., if all r ∈ [0,R] are robust feasible for v. Therefore, we can check whether R ∈ E0(v, x+) by modifying problem Pen(r, v, x+) and solving

$$ \begin{array}{@{}rcl@{}} && \max\limits_{r,b,\pi,q} \sum\limits_{u \in V} f_{u}(\pi_{u}) \\ \text{s.t. }&& \mathcal{A}q = \left( \begin{array}{c} b \\ \mu + rPv \end{array}\right), \\ && \mathcal{A}^{T} \pi = -{\Phi}|q|q, \\ && \mathbf{1}_{-}^{T} (\mu + rPv) + \mathbf{1}_{+}^{T}b = 0, \\ && b \in [-C_{+} - x_{+}, 0],\\ && r \in [0,R]. \end{array} $$

The feasible set of this optimization problem consists of the vectors (r, b, π, q) for which (μ + rPv, b) is a balanced nomination (with r ≤ R) with pressure flow solution (π, q), which is unique due to condition (C3). Thus, the only difference between problem Pen(r, v, x+) and the above optimization problem is that r is a variable, since we have to check whether μ + rPv is robust feasible for every r ∈ [0,R]. Consequently, using this modified problem in Algorithm 1 instead of problem Pen(r, v, x+) yields a lower bound for E0(v, x+) and thus a (potentially conservative) lower bound on E(v, x+).

In the next subsection, we evaluate Algorithm 1 with respect to quality of the obtained solutions and running time.

4.2 Numerical Results

We modify the gas network instance of Section 3.3 by adding a second entry next to the first entry (the black filled node), which implies that \(x_{+} \in \mathbb {R}^{2}\). We note that the instance is still a tree and that its structure has not changed substantially, which is desired since we do not want to analyze an instance very different from the one of Section 3. However, if the instance had only one entry, the uncertainty set would have only one element, which is not interesting in the context of this section. In addition, we fix the pressure at a leaf node, so that condition (C3) holds. Beyond that, we provide sphere vectors v by sampling a collection of 10 000 elements on the unit sphere using a Quasi-Monte Carlo method. Our goal is to approximate the probability of robust feasibility for this network under uncertain entry loads by using the spheric-radial decomposition and applying Algorithm 1. Since we cannot verify condition (C2), we do not assume that D(x+) is star-shaped with respect to the mean and hence apply the conservative modification described at the end of Section 4.1.

The performance of the algorithm is investigated by testing the algorithm on the given instance under a variety of parameter combinations. All experiments were carried out using Gurobi 7.5 [15] with 4 threads running on machines with Xeon E3-1240 v5 CPUs (4 cores, 3.5 GHz each).

We apply Algorithm 1 to all 10 000 rays using all combinations of relaxation parameters \(\epsilon \in \{2^{-6},2^{-5},\ldots,2^{4}\}\) and bisection termination tolerances tol ∈ {0.001, 0.01, 0.1}. Experiments for smaller tolerances down to tol = \(10^{-6}\) were carried out as well but are omitted here since they produced almost identical probabilities when compared to tol = \(10^{-3}\). The results of this study are displayed in Fig. 3. The approximated probabilities for robust feasibility of the exit nomination and the capacity extension of the instance are displayed in Fig. 3(a). We approximate the overall probability to be between 78% and 78.5%, depending on 𝜖 and tol. As expected, we obtain more conservative solutions for increasing approximation parameters 𝜖. However, the influence of 𝜖 is much smaller than expected, even for large 𝜖. In the same fashion, increasing the bisection termination tolerance tol leads to more conservative solutions. We note that the combination of \(\epsilon = \frac {1}{2}\) and tol = 0.001 produces solutions that can be improved only very little (within the scope of this study) by decreasing both parameters further. The overall running times for all rays, i.e., the cumulated running time of Algorithm 1 applied to each ray, are plotted in Fig. 3(b). As expected, the running times increase for decreasing parameter 𝜖, as the latter leads to more complex MIP models. A decrease of the tolerance tol leads to a larger number of iterations of the bisection algorithm and thus to longer running times as well.

In the previous experiments, we focused on the influence of the relaxation parameter 𝜖 and the bisection tolerance tol on the algorithm’s running time and on the reliability of the obtained probability. Another important factor for the overall running time is the number of rays used. Figure 4(a) shows the resulting probability when only the first k rays of the 10 000 given samples are used for the probability approximation.
At a glance, we observe large fluctuations when using only up to about 2500 rays. A considerable decrease in the magnitude of the probability fluctuation can be seen for values of k ≥ 2500. We further strengthen this observation in Fig. 4(b) by comparing the first graph with a second graph that was obtained in the same fashion from 5000 other random sphere vectors. Since the second graph follows the same pattern, we conclude that, for the instance considered here, the number of rays should not be smaller than 2500 if the approximation of the probability is supposed to be reliable. Assuming that the parameter selection k = 2500, \(\epsilon =\frac {1}{2}\), and tol = 0.001 is sufficient for a reliable probability calculation, the sum of all MIP running times is about 8 min.

Fig. 3

Resulting probability and total running time for 10 000 rays using different relaxation qualities 𝜖 and bisection termination tolerances tol

Fig. 4

Probability plot when using only the first k rays for its computation. Parameters used in ray length calculation: \(\epsilon = \frac {1}{2}\) and tol = 0.001

As a final experiment, we demonstrate the practical applicability of our method by solving a simple optimization task where we assume that the number of sampled points k is large enough and that our approximation is good enough to check whether the probust constraint is satisfied for a given capacity extension.

The goal is to determine capacities for the two entry nodes such that the probability of robust feasibility is at least 75%. We use a linear cost function with equal costs for expansion at each node. In Section 3, the capacity problem with uncertain exit loads was solved using (sub)gradient information in the sense of [24]. However, due to the different situation here, caused by the MIP approximations and the robust constraints, the derivation of suitable (sub)gradients needs further research that is beyond the scope of this article. Instead, we decided to apply a gradient-free pattern search algorithm available in MATLAB [19]. It is important to note that, due to the probust constraint being non-smooth in general, no convergence of this algorithm to a stationary point can be guaranteed. Instead, one can expect convergence to a point at which the directional derivatives are nonnegative for all directions in the positive basis used by the algorithm. We refer to [5] for a further discussion of this, as well as for a general overview of derivative-free optimization.
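The pattern search follows the classical compass-search template; the following is a minimal, purely illustrative sketch (not MATLAB's patternsearch), where in our setting f would be, e.g., a penalized version of the negative objective of (28):

```python
import itertools

def compass_search(f, x0, bounds, step, tol):
    # Minimal compass (pattern) search: probe +/- step along each coordinate,
    # move to the best improving point, otherwise halve the step size.
    x, fx = list(x0), f(x0)
    while step > tol:
        candidates = []
        for i, s in itertools.product(range(len(x)), (step, -step)):
            y = list(x)
            y[i] = min(max(y[i] + s, bounds[i][0]), bounds[i][1])  # stay in box
            candidates.append((f(y), y))
        fy, y = min(candidates)
        if fy < fx:
            x, fx = y, fy        # accept the best improving probe
        else:
            step *= 0.5          # no improvement: refine the pattern

    return x, fx

# purely illustrative smooth objective on a box
x_opt, f_opt = compass_search(lambda z: (z[0] - 1.0)**2 + (z[1] - 2.0)**2,
                              [0.0, 0.0], [(-5.0, 5.0), (-5.0, 5.0)],
                              step=4.0, tol=1e-3)
print(x_opt)  # close to [1.0, 2.0]
```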

In every major iteration, a capacity extension x+ is proposed by the algorithm. For all sampled vectors vi, Algorithm 1 is applied to approximate |E0(vi, x+)|. Hence, the probability of robust feasibility is estimated from below by \(\frac {1}{2500}{\sum }_{i=1}^{2500} F_{\chi }(|E_{0}(v_{i},x_{+})|)\) and thus the inequality

$$ \frac{1}{2500}\sum\limits_{i=1}^{2500} F_{\chi}(|E_{0}(v_{i},x_{+})|) \geq 0.75 $$

is checked.
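This check can be sketched in a few lines. Note that the ray lengths below are synthetic stand-ins for the values |E0(vi, x+)| produced by Algorithm 1, and the degrees of freedom d of the Chi distribution (the dimension of the underlying Gaussian load vector) is a hypothetical choice, not a value from the experiment.

```python
import numpy as np
from scipy.stats import chi

# Sketch of the sample-average feasibility check. The ray lengths are
# synthetic placeholders for |E0(v_i, x_+)|, and the chi degrees of
# freedom d is a hypothetical choice (it equals the dimension of the
# underlying Gaussian load vector in the actual experiment).
rng = np.random.default_rng(42)
d = 4                                            # hypothetical dimension
ray_lengths = rng.uniform(1.0, 6.0, size=2500)   # stand-ins for |E0(v_i, x_+)|

# Lower estimate of the probability of robust feasibility via the chi CDF:
prob_lower = chi(df=d).cdf(ray_lengths).mean()
is_feasible = prob_lower >= 0.75                 # the probust constraint check
print(prob_lower, is_feasible)
```

In the actual method, each ray length comes from the bisection procedure with the chosen relaxation quality ϵ and tolerance tol.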

In our experiment, we consider the entry capacities in the box [28000,32000] × [4000,8000] and start with C+ = (− 28000,− 4000). In other words, the extension x+ is an element of the box [0,4000] × [0,4000] and our starting point is (0,0).

Overall, the pattern search algorithm converged after 149 function evaluations. The result is shown in Fig. 5: the red lines connecting the black filled dots show the trajectory of the pattern search algorithm, with the optimum marked by a red cross; the extra function evaluations are represented by black circles. Evidently, the probability is not very sensitive to capacity changes at entry 1, but it clearly decreases when the capacity at entry 2 is increased.

Fig. 5
figure 5

Contour plot of the probability for robust feasible entry capacities together with the trajectory of a gradient-free optimization method to determine a capacity with 75% feasibility

This concludes the discussion and presentation of the methodology for stationary gas networks. In the next section, transient systems controlled by the wave equation are discussed.

5 Stabilization with Probabilistic Constraints of a System Governed by the Wave Equation

Now, we consider a transient system that is governed by the wave equation. The wave equation is a linear model of the gas flow in gas pipelines for sufficiently small velocities. The state is determined by an initial boundary value problem with Dirichlet boundary data at one end and Neumann boundary feedback at the other end of the space interval. The initial data and the boundary data are given by a stochastic process. The aim is to maximize the probability to stay near a desired state everywhere in the time space domain.

Let a finite length L > 0 and a finite time T > 0 be given. In this section, let \(\mathcal {U} = [0,T] \times [0,L]\). Let c > 0 denote the sound speed in the gas. Let a stationary velocity field \(\bar v\) be given, see [13]. Let \(v= \tilde v - \bar v\) denote the difference between the velocity and the stationary state. If the norm of \(\bar v\) is sufficiently small, the dynamics for v are governed by the wave equation vtt = c2vxx. Moreover the gas density ρ satisfies the wave equation ρtt = c2ρxx and the flow rate q of the gas satisfies the wave equation qtt = c2qxx; see [14]. For given uncertain boundary data (that model the uncertain demand) \(\xi \in L^{\infty }(0,T)\), an uncertain initial state \((v_{0}, v_{1})\in L^{\infty }(0, L) {\times } L^{1}(0, L)\) and a feedback parameter η > 0, we consider the closed loop system that is governed by the initial boundary value problem for \((t,x) \in \mathcal {U}\)

$$ \left\{ \begin{array}{rll} v(0,x)&= v_{0}(x), & v_{t}(0, x)= v_{1}(x), \\ v_{x}(t,0) &= \eta v_{t}(t, 0), & v(t, L) = \xi(t), \\ v_{tt}(t,x) &= c^{2} v_{xx}(t,x).& \end{array} \right. $$
(S)

An explicit representation of the generated state in terms of travelling waves (d’Alembert’s solution) is given in [11, 12]. This allows the computation of the system state \(v\in L^{\infty }(\mathcal {U})\) without discretization errors. In the operation of pipeline networks, there is a constraint on the magnitude of the flow velocity in the pipe. Let \(v_{\max \limits }>0\) be an upper bound for the velocity. We consider the probabilistic constraint for the probability

$$ \mathbb{P}\left( \|v\|_{L^{\infty}}\leq v_{\max}\right), $$
(35)

where v solves (S) and \(\|\cdot \|_{L^{\infty }}\) denotes the norm on \(L^{\infty }(\mathcal {U})\).

In order to write the probabilistic constraint similar to (4), we introduce the notation

$$ g(\tilde{v} ,\xi, u) := v_{\max} - |\tilde{v}(u) - \bar{v}(u)| , $$
(36)

with ξ = (a, b), \(a = (a_{k})_{k=1}^{N}\), \(b = (b_{k})_{k=1}^{N}\), \(u = (t,x) \in \mathcal {U}\), where \(\tilde {v}\) solves the initial boundary value problem (S) with initial and boundary data that depend on the parameter (a, b) (see (KL-id) and (KL-bd) below).

Theorem 2

(Solution of system (S)) Consider system (S) with \(\xi \in L^{\infty }(0,T)\) and \((v_{0}, v_{1}) \in L^{\infty }(0,L)\times L^{1}(0,L)\) for the feedback parameter \(\eta = \frac {1}{c}\). Define the antiderivative of v1 by

$$ V_{1}(x):= {{\int}_{0}^{x}} v_{1}(s) \mathrm{d} s $$

and define

$$ \alpha(s) := \left\{\begin{array}{ll} v_{0}(cs) + \frac{1}{c}V_{1}(cs)&\quad \text{ for } s \in \left[0,\frac{L}{c}\right),\\ 2\xi\left( s- \frac{L}{c}\right) - \upbeta\left( s - \frac{L}{c}\right)&\quad \text{ for } s \in \left[\frac{L}{c}, T+\frac{L}{c}\right] \end{array}\right. $$

and

$$ \upbeta(s):= \left\{\begin{array}{ll} v_{0}(L-cs) - \frac{1}{c} V_{1}(L-cs) &\quad \text{ for } s \in \left[0, \frac{L}{c}\right),\\ v_{0}(0)&\quad \text{ for } s \in \left[\frac{L}{c}, T+\frac{L}{c}\right]. \end{array}\right. $$

Then the function

$$ v(t,x):=\tfrac{1}{2}\alpha\left( t+\tfrac{x}{c}\right) + \tfrac{1}{2} \upbeta\left( t + \tfrac{L-x}{c}\right) $$
(37)

solves system (S) and the solution v lies in \(L^{\infty }(\mathcal {U})\).

Proof

We show that v defined in (37) fulfills the PDE system (S). First we see that v satisfies the wave equation, because we have

$$ \begin{array}{@{}rcl@{}} v_{tt} &=& \tfrac{1}{2}\alpha^{\prime\prime} \left( t + \tfrac{x}{c}\right) + \tfrac{1}{2} {\upbeta}^{\prime\prime} \left( t+ \tfrac{L-x}{c}\right),\\ v_{xx} &=& \tfrac{1}{2c^{2}} \alpha^{\prime\prime} \left( t+ \tfrac{x}{c}\right) + \tfrac{1}{2c^{2}} {\upbeta}^{\prime\prime} \left( t + \tfrac{L-x}{c}\right) = \tfrac{1}{c^{2}}v_{tt}. \end{array} $$

Now we show that v satisfies the initial conditions. At t = 0, we have for all x ∈ (0,L)

$$ \begin{array}{@{}rcl@{}} v(0,x) &=& \tfrac{1}{2}\alpha\left( \tfrac{x}{c}\right) + \tfrac{1}{2}\upbeta\left( \tfrac{L-x}{c}\right) \\ &=& \tfrac{1}{2} \left[v_{0}(x) + \tfrac{1}{c} V_{1}(x)\right] + \tfrac{1}{2} \left[v_{0}(x) - \tfrac{1}{c}V_{1}(x)\right] = v_{0}(x). \end{array} $$

For the time derivative at t = 0, x ∈ (0,L) we have

$$ \begin{array}{@{}rcl@{}} v_{t}(0,x) &=& \tfrac{1}{2}\alpha^{\prime}\left( \tfrac{x}{c}\right) + \tfrac{1}{2} {\upbeta}^{\prime}\left( \tfrac{L-x}{c}\right) \\ &=&\tfrac{1}{2}\left[c v_{0}^{\prime}(x) + v_{1}(x)\right] + \tfrac{1}{2} \left[-c v_{0}^{\prime}(x) + v_{1}(x)\right] = v_{1}(x), \end{array} $$

where the derivatives are to be understood in the sense of distributions. Finally, we show that the boundary conditions are fulfilled. Now, we prove that the Dirichlet boundary condition at x = L is fulfilled for t > 0. We have

$$ v(t,L) = \tfrac{1}{2} \alpha(t+\tfrac{L}{c}) + \tfrac{1}{2}\upbeta(t) = \tfrac{1}{2} [2\xi(t) - \upbeta(t)] + \tfrac{1}{2}\upbeta(t) = \xi(t). $$

For the feedback law at x = 0, we have

$$ \begin{array}{@{}rcl@{}} v_{x}(t,0) &=& \tfrac{1}{2c} \alpha^{\prime}(t) -\tfrac{1}{2c} {\upbeta}^{\prime}\left( t+\tfrac{L}{c}\right) = \tfrac{1}{2c} \alpha^{\prime}(t), \\ \eta v_{t}(t,0)&=& \tfrac{\eta}{2}\alpha^{\prime}(t) + \tfrac{\eta}{2} {\upbeta}^{\prime}\left( t+\tfrac{L}{c}\right) = \tfrac{1}{2c}\alpha^{\prime}(t), \end{array} $$

since β is constant on \(\left[\tfrac{L}{c}, T+\tfrac{L}{c}\right]\), so that \({\upbeta}^{\prime}\left(t+\tfrac{L}{c}\right) = 0\) for t > 0.

Now we show that v lies in \(L^{\infty }(\mathcal {U})\). By the assumptions, we have \(v_{0} \in L^{\infty }(0,L)\) and \(\xi \in L^{\infty }(0,T)\). The claim is true if V1 is in \(L^{\infty }(0,L)\). We know that v1 is in L1(0,L). This implies

$$ \|V_{1}\|_{L^{\infty}} = \underset{x \in (0,L)}{\mathrm{ess sup}}\left|{{\int}_{0}^{x}} v_{1}(s) \mathrm{d} s\right| \le \underset{x \in (0,L)}{\mathrm{ess sup}}{{\int}_{0}^{x}} |v_{1}(s)| \mathrm{d} s \le {{\int}_{0}^{L}} |v_{1}(s)| \mathrm{d} s = \|v_{1}\|_{L^{1}}. $$

This finishes the proof of Theorem 2. □
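Formula (37) is easy to verify numerically. The following minimal sketch checks the initial and Dirichlet boundary conditions for illustrative data v0, v1, ξ (chosen with the compatibility ξ(0) = v0(L); these functions and the constants are not taken from the paper's experiments).

```python
import numpy as np

# Numerical sanity check of the travelling-wave formula (37) with eta = 1/c.
# The data v0, v1, xi are illustrative choices satisfying xi(0) = v0(L).
L, T, c = 2.0, 6.0, 0.5

v0 = lambda x: np.cos(np.pi * x / L)   # initial position, v0(L) = -1
v1 = lambda x: 0.0 * x                 # initial velocity v1 = 0 ...
V1 = lambda x: 0.0 * x                 # ... so its antiderivative vanishes
xi = lambda t: -np.cos(t)              # Dirichlet data with xi(0) = v0(L)

def beta(s):
    s = np.asarray(s, dtype=float)
    return np.where(s < L / c, v0(L - c * s) - V1(L - c * s) / c, v0(0.0))

def alpha(s):
    s = np.asarray(s, dtype=float)
    return np.where(s < L / c,
                    v0(c * s) + V1(c * s) / c,
                    2.0 * xi(s - L / c) - beta(s - L / c))

def v(t, x):
    # d'Alembert-type solution (37)
    return 0.5 * alpha(t + x / c) + 0.5 * beta(t + (L - x) / c)

# Initial condition v(0, x) = v0(x) on (0, L):
x = np.linspace(0.0, L, 101)[1:-1]
assert np.allclose(v(0.0, x), v0(x))

# Dirichlet boundary condition v(t, L) = xi(t) for t > 0:
t = np.linspace(0.0, T, 101)[1:]
assert np.allclose(v(t, L), xi(t))
```

For the boundary check, note that α(t + L/c) falls into its second branch, so the β-terms cancel exactly as in the proof.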

Theorem 3

(Value of \(\|v\|_{L^{\infty }}\) in terms of initial and boundary data) Let v be a solution of system (S) under the assumptions of Theorem 2. For \((t,x) \in \mathcal {U}\), define

$$ \begin{array}{@{}rcl@{}} m_{1}(t, x) &:=& \tfrac{1}{2} \left[v_{0}(x+ct) +\tfrac{1}{c} V_{1}(x+ct)\right]+ \tfrac{1}{2} \left[v_{0}(x-ct) -\tfrac{1}{c} V_{1}(x-ct)\right], \\ m_{2}(t, x) &:=& \tfrac{1}{2} \left[v_{0}(ct +x) +\tfrac{1}{c} V_{1}(ct+x) +v_{0}(0)\right],\\ m_{3}(t, x) &:=& \xi \left( t + \tfrac{x-L}{c}\right) + \tfrac{1}{2} \left[v_{0}(x-ct) - \tfrac{1}{c}V_{1}(x-ct)\right.\\ &&\left. - v_{0}(2L-x-ct) + \tfrac{1}{c} V_{1}(2L-x-ct) \right],\\ m_{4}(t, x) &:=& \xi\left( t + \tfrac{x-L}{c}\right) + \tfrac12\left[\tfrac{1}{c} V_{1}(2L-x-ct) + v_{0}(0) - v_{0}(2L-x-ct)\right],\\ m_{5}(t, x) &:=& \xi\left( t + \tfrac{x-L}{c}\right). \end{array} $$

Set

$$ \begin{array}{@{}rcl@{}} {\Omega}_{1} &:=& \left\{(t,x) \in \mathcal{U} ~|~ t < \min\left\{\tfrac{L-x}{c}, \tfrac{x}{c}\right\}\right\},\\ {\Omega}_{2} &:=& \left\{(t,x) \in \mathcal{U} ~|~ \tfrac{x}{c} \le t < \tfrac{L-x}{c}\right\},\\ {\Omega}_{3} &:=& \left\{(t,x) \in \mathcal{U} ~|~ \tfrac{L-x}{c} \le t < \tfrac{x}{c}\right\},\\ {\Omega}_{4} &:=& \left\{(t,x) \in \mathcal{U} ~|~\max\left\{\tfrac{L-x}{c}, \tfrac{x}{c}\right\} \le t < \tfrac{L}{c} + \tfrac{L-x}{c} \right\},\\ {\Omega}_{5} &:=& \left\{(t,x) \in \mathcal{U} ~|~ t \ge \tfrac{L}{c} + \tfrac{L-x}{c}\right\} \end{array} $$

(see Fig. 6). Furthermore, for i ∈{1,…,5}, set

$$ M_{i} := \sup\{|m_{i}(t,x)| : (t, x) \in {\Omega}_{i}\}. $$

Then the \(L^{\infty }\)-norm of the velocity v is given by

$$ \|v\|_{L^{\infty}} = \max\{M_{1}, M_{2}, M_{3}, M_{4}, M_{5}\}. $$
Fig. 6
figure 6

Decomposition of the space-time domain \(\mathcal {U}\)

Proof

By Theorem 2 the solution of system (S) is given by

$$ v(t,x):=\tfrac{1}{2}\alpha\left( t+\tfrac{x}{c}\right) + \tfrac{1}{2} \upbeta\left( t + \tfrac{L-x}{c}\right). $$

By the definition of α and β, there are four cases to consider, the last of which is split into two subcases. The first case \(t<\min \limits \left \{\tfrac {x}{c}, \tfrac {L-x}{c}\right \}\) is the first case for both α and β. We have

$$ v(t,x)= \tfrac{1}{2} \left[v_{0}(x+ct) +\tfrac{1}{c} V_{1}(x+ct)\right] + \tfrac{1}{2} \left[v_{0}(x-ct) -\tfrac{1}{c} V_{1}(x-ct)\right]. $$

For \(\tfrac {x}{c} \le t <\tfrac {L-x}{c}\), we are in the first case for α and in the second case for β. Note that the interval for t can only be nonempty for \(x \in (0,\tfrac {L}{2})\). We have

$$ v(t,x) = \tfrac{1}{2} \left[v_{0}(ct +x) +\tfrac{1}{c} V_{1}(ct+x) +v_{0}(0)\right]. $$

For \(\tfrac {L-x}{c} \le t < \frac {x}{c}\), we are in the second case for α and in the first case for β. Note that the interval for t can only be nonempty for \(x \in (\tfrac {L}{2}, L)\). Since \(t<\tfrac {x}{c}<\tfrac {L}{c}\) and \(\tfrac {x-L}{c}<0\), we have \(t+\tfrac {x-L}{c}<\tfrac {L}{c}\) and therefore

$$ \begin{array}{@{}rcl@{}} v(t,x) &=& \tfrac{1}{2}\left[2\xi\left( t+\tfrac{x-L}{c}\right) - \upbeta\left( t+\tfrac{x-L}{c}\right)+ \upbeta\left( t+ \tfrac{L-x}{c}\right)\right]\\ &=& \xi\left( t + \tfrac{x-L}{c}\right) - \tfrac12 \upbeta\left( t + \tfrac{x-L}{c}\right) + \tfrac{1}{2} \left[v_{0}(x-ct) -\tfrac{1}{c} V_{1}(x-ct)\right]\\ &=& \xi\left( t + \tfrac{x-L}{c}\right) - \tfrac{1}{2} \left[v_{0}(2L-x-ct) - \tfrac{1}{c} V_{1}(2L-x-ct)\right] \\ && +\tfrac{1}{2} \left[v_{0}(x-ct) -\tfrac{1}{c} V_{1}(x-ct)\right]. \end{array} $$

The last case to consider is \(t \ge \max \limits \{\frac {L-x}{c}, \frac {x}{c}\}\). It leads to

$$ \begin{array}{@{}rcl@{}} v(t,x) &=& \tfrac12 \left[2 \xi\left( t + \tfrac{x-L}{c}\right) - \upbeta\left( t + \tfrac{x-L}{c}\right) + v_{0}(0)\right]\\ &=&\begin{cases} \xi(t + \tfrac{x-L}{c}) +\tfrac12\left[\tfrac{1}{c} V_{1}(2L-x-ct) + v_{0}(0) - v_{0}(2L-x-ct)\right], & t < \tfrac{L}{c} + \tfrac{L-x}{c},\\ \xi(t + \tfrac{x-L}{c}), & t \ge \tfrac{L}{c} + \tfrac{L-x}{c}. \end{cases} \end{array} $$

This yields the assertion of Theorem 3. □
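As a small consistency check, the regions Ω1,…,Ω5 of Theorem 3 indeed partition the space-time domain \(\mathcal{U}\). The following sketch classifies points accordingly; the values of L, T and c are illustrative.

```python
import numpy as np

# Classify (t, x) in U = [0,T] x [0,L] into the regions Omega_1..Omega_5
# from Theorem 3 and check that every point lies in exactly one region.
L, T, c = 2.0, 6.0, 0.5

def region(t, x):
    if t < min((L - x) / c, x / c):
        return 1
    if x / c <= t < (L - x) / c:
        return 2
    if (L - x) / c <= t < x / c:
        return 3
    if max((L - x) / c, x / c) <= t < L / c + (L - x) / c:
        return 4
    return 5  # t >= L/c + (L - x)/c

rng = np.random.default_rng(1)
for _ in range(1000):
    t = rng.uniform(0.0, T)
    x = rng.uniform(0.0, L)
    hits = [t < min((L - x) / c, x / c),
            x / c <= t < (L - x) / c,
            (L - x) / c <= t < x / c,
            max((L - x) / c, x / c) <= t < L / c + (L - x) / c,
            t >= L / c + (L - x) / c]
    r = region(t, x)
    assert sum(hits) == 1 and hits[r - 1]  # exactly one region contains (t, x)
```

With these illustrative constants, Ω5 is nonempty only for x close to L, consistent with Fig. 6.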

5.1 Boundary Data with Random Amplitude, Frequency and Phaseshift

For the boundary data, we consider the parametric family

$$ \xi(t) := \lambda \cos(\omega t + \kappa) $$
(cos-bd)

with a random variable (λ, κ, ω) and the compatible initial data

$$ v_{0}(x) = \lambda \cos(\kappa),\quad v_{1}=0. $$
(cos-id)

We assume that (λ, κ, ω) is normally distributed with expected value \(\mu \in \mathbb {R}^{3}\) and a positive definite covariance matrix \({\Sigma } \in \mathbb {R}^{3 \times 3}\). For the numerical computation of the probability, we use the spheric radial decomposition described in Section 3.2.

Corollary 1

(Analytic formula for \(\|v\|_{L^{\infty }}\)) Let v be a solution of system (S) under the assumptions of Theorem 2 for the initial conditions given by (cos-id) and the Dirichlet boundary data at x = L given by (cos-bd). Then

$$ \|v\|_{L^{\infty}} \le |\lambda|. $$

Proof

With the definitions from Theorem 3 and the data (cos-bd) and (cos-id), and noting that v1 = 0 implies V1 ≡ 0 while v0 ≡ λcos(κ) is constant, we have

$$ \begin{array}{@{}rcl@{}} m_{1}(t, x) &=& \tfrac{1}{2} \left[v_{0}(x+ct) + v_{0}(x-ct)\right] = \lambda \cos(\kappa),\\ m_{2}(t, x) &=& \tfrac{1}{2} \left[v_{0}(ct +x) +v_{0}(0)\right] = \lambda \cos(\kappa),\\ m_{3}(t, x) &=& \xi \left( t + \tfrac{x-L}{c}\right) + \tfrac{1}{2} \left[v_{0}(x-ct) - v_{0}(2L-x-ct) \right]\\ &=& \lambda \cos\left( \omega \left( t + \tfrac{x-L}{c}\right) + \kappa\right), \\ m_{4}(t, x) &=& \xi\left( t + \tfrac{x-L}{c}\right) +\tfrac12\left[v_{0}(0) - v_{0}(2L-x-ct)\right]\\ &=& \lambda \cos\left( \omega \left( t + \tfrac{x-L}{c}\right) + \kappa\right),\\ m_{5}(t, x) &=& \xi\left( t + \tfrac{x-L}{c}\right) = \lambda \cos\left( \omega \left( t + \tfrac{x-L}{c}\right) + \kappa\right). \end{array} $$

Since |mi(t, x)|≤|λ| for i = 1,…,5, the claim follows. □

Remark 1

If ω≠ 0 and T is sufficiently large, then \(\|v\|_{L^{\infty}} = |\lambda |\) holds.
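By Remark 1, the probability (35) reduces in this case to \(\mathbb{P}(|\lambda| \le v_{\max})\), so the spheric radial decomposition can be sketched in a few lines and checked against the closed-form value. The sketch below uses the data of Fig. 7 (μ = (1,1,1), Σ = I, vmax = 1.8); the sample count and seed are illustrative.

```python
import numpy as np
from scipy.stats import chi, norm

# Spheric radial decomposition for P(||v||_Linf <= vmax) in the setting of
# Fig. 7: (lambda, omega, kappa) ~ N(mu, I) with mu = (1, 1, 1), vmax = 1.8.
# By Remark 1 the constraint reduces to |lambda| <= vmax, so along a ray
# mu + r*w the feasible radii r >= 0 form an interval whose chi(3) mass
# is averaged over uniformly sampled directions w.
mu_lam, vmax, dim = 1.0, 1.8, 3
F_chi = chi(df=dim).cdf
rng = np.random.default_rng(0)

n = 20_000
w = rng.standard_normal((n, dim))
w /= np.linalg.norm(w, axis=1, keepdims=True)   # uniform directions on S^2
w1 = w[:, 0]                                    # only the lambda-component matters

probs = np.empty(n)
for i, wi in enumerate(w1):
    if abs(wi) < 1e-12:                         # ray parallel to the constraint
        probs[i] = 1.0 if abs(mu_lam) <= vmax else 0.0
        continue
    lo, hi = sorted(((-vmax - mu_lam) / wi, (vmax - mu_lam) / wi))
    lo, hi = max(lo, 0.0), max(hi, 0.0)
    probs[i] = F_chi(hi) - F_chi(lo)            # chi(3) mass of feasible radii

est = probs.mean()
exact = norm.cdf(vmax - mu_lam) - norm.cdf(-vmax - mu_lam)  # closed form
print(est, exact)
```

The closed-form value is approximately 0.7856, in agreement with the probability reported in the caption of Fig. 7.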

5.2 Karhunen–Loève Approximation of a Wiener Process as Initial and Boundary Data

We consider the Karhunen–Loève representation (see [17]) of a Wiener process on [0,T] with covariance function \(\text {Cov}(W_{t}, W_{s}) = \min \limits (s,t)\) given by

$$ W_{t} = \sqrt{2 T} \sum\limits_{k=1}^{\infty} a_{k} \frac{\sin \left( \omega_{k} \pi \tfrac{t}{T}\right)}{\omega_{k}\pi}, \quad \omega_{k} = k-\tfrac{1}{2}, $$

with independent standard normally distributed random variables ak. It is reasonable to use a finite truncation of this sum as boundary data, i.e.,

$$ \xi(t) = \sqrt{2 T} \sum\limits_{k=1}^{N} a_{k} \frac{\sin \left( \omega_{k} \pi \tfrac{t}{T}\right)}{\omega_{k}\pi}, \quad \omega_{k} = k-\tfrac{1}{2}~ \text{ on } [0,T]. $$
(KL-bd)

Analogously, we choose the compatible initial data

$$ v_{0}(x) = \sqrt{2L} \sum\limits_{k=1}^{N} b_{k} \frac{\sin \left( \omega_{k} \pi \tfrac{L-x}{L}\right)}{\omega_{k}\pi}, \quad \omega_{k} = k-\tfrac{1}{2}~\text{ on } [0,L], $$
(KL-id)

with independent standard normally distributed random variables bk. The compatibility condition ξ(0) = v0(L) = 0 holds. Furthermore, we set v1 = 0 (Fig. 7). Different realizations of the initial and boundary data are shown in Figs. 8 and 9. The solution of the wave equation for different realizations of the initial and boundary data is depicted in Fig. 10.
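The truncated Karhunen–Loève sum reproduces the Wiener covariance min(s, t). A quick deterministic check (with an illustrative horizon T and a large truncation N, both assumptions for the sketch):

```python
import numpy as np

# The truncated Karhunen-Loeve sum has covariance
#   Cov(W_s, W_t) = 2T * sum_k sin(w_k pi s/T) sin(w_k pi t/T) / (w_k pi)^2,
# which converges to min(s, t) as N -> infinity. T and N are illustrative.
T, N = 6.0, 2000
omega = np.arange(1, N + 1) - 0.5            # w_k = k - 1/2

def kl_cov(s, t):
    phi_s = np.sin(omega * np.pi * s / T) / (omega * np.pi)
    phi_t = np.sin(omega * np.pi * t / T) / (omega * np.pi)
    return 2.0 * T * np.sum(phi_s * phi_t)

for s, t in [(1.0, 4.0), (2.5, 2.5), (5.0, 3.0)]:
    assert abs(kl_cov(s, t) - min(s, t)) < 1e-2
```

The truncation error is of order T/N, which also indicates how large N must be for a prescribed accuracy of the boundary data (KL-bd).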

Fig. 7
figure 7

The solution v of the wave equation for nine samples (λ, ω, κ) on the sphere. The radius r is scaled such that \(\|v\|_{L^{\infty }} = v_{\max \limits }\) holds. The value of the cumulative distribution function of the Chi distribution evaluated at this radius is given on top of each picture. The probability for the solution to be bounded by \(v_{\max \limits }\) is \(\mathbb {P}(\|v\|_{L^{\infty }} \le v_{\max \limits }) \approx 0.7856\) for the data T = 6, L = 2, c = 0.5, \(v_{\max \limits } = 1.8\). The random vector (λ, ω, κ) is normally distributed with expected value μ = (1,1,1) and covariance matrix Σ = I. The number of samples used to approximate the probability is 20 000

Fig. 8
figure 8

Different realizations (21) of the initial data for a Karhunen–Loève sum with 20 standard normally distributed coefficients

Fig. 9
figure 9

Different realizations (21) of the boundary data for a Karhunen–Loève sum with 20 standard normally distributed coefficients

Fig. 10
figure 10

The solution v of the wave equation with boundary and initial data given by the functions in (KL-bd) and (KL-id) for nine samples of the standard normally distributed random vector (a, b) with realizations in \(\mathbb {R}^{40}\), i.e., N = 20. The constants T = 6, L = 2, c = 0.5 were used. The bound \(v_{\max \limits }= 5\) was chosen and \(\bar {v}= 0\) was used. The probability of \(\|v+\bar {v}\|_{L^{\infty }}\le v_{\max \limits }\) is 0.8808 with 10 000 samples used. The value of the \(L^{\infty }\)-norm is approximated by evaluation on a 100 × 100 grid on \(\mathcal {U}\). The points where the \(L^{\infty }\)-norm is attained are marked with a dot

This case is much more involved than that in Section 5.1, since the value of \(\|v\|_{L^{\infty }}\) is not easily expressed as an analytic function of the random variables. Consequently, a sampling scheme based on the spheric radial decomposition cannot be applied directly. Instead, we use a quasi-Monte Carlo method based on a Sobol sequence.
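One possible implementation of this sampling step generates a low-discrepancy Sobol sequence and maps it to standard normal coefficients (a, b) via the inverse normal CDF. The sketch below uses `scipy.stats.qmc`; the dimension 2N = 40 matches the experiment of Fig. 10, while the scramble seed and sample count are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm, qmc

# Quasi-Monte Carlo sampling of the N(0, I) coefficient vector (a, b):
# a scrambled Sobol sequence in [0, 1)^40 is mapped through the inverse
# normal CDF. Dimension 2N = 40 as in Fig. 10; seed and count illustrative.
dim = 40
sobol = qmc.Sobol(d=dim, scramble=True, seed=7)
u = sobol.random_base2(m=12)                 # 2^12 low-discrepancy points
u = np.clip(u, 1e-12, 1.0 - 1e-12)           # guard the inverse CDF
samples = norm.ppf(u)                        # standard normal coefficients
print(samples.shape)
```

Each row of `samples` then yields one realization of the data (KL-bd) and (KL-id), for which \(\|v\|_{L^\infty}\) is approximated on a grid.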

If one wants to approximate the \(L^{\infty }\)-norm of the velocity by pointwise evaluation on a grid, Lipschitz continuity of the velocity is required.

Theorem 4

(Lipschitz continuity of the solution) Assume the boundary data \(\xi \in \mathcal {C}^{0,1}(0,T)\) and initial data \(v_{0} \in \mathcal {C}^{0,1}(0,L)\) to be Lipschitz continuous and assume that Lipschitz compatibility over the edge holds, i.e., we have

$$ |\xi(t)- v_{0}(L-x)| \le K |t - L + x| \quad\text{ for } (t,x) \in\mathcal{U} $$

with a Lipschitz constant K > 0. Furthermore, let \(v_{1}\in L^{\infty }(0,L)\). Then, under the assumptions of Theorem 2, the solution v of system (S) is Lipschitz continuous on \(\mathcal {U}\), i.e., \(v \in \mathcal {C}^{0,1}(\mathcal {U})\).

Proof

The sum of Lipschitz continuous functions is Lipschitz continuous. It is therefore sufficient to show the Lipschitz continuity of α and β defined as in Theorem 2. Without loss of generality (by passing to the maximum of the occurring Lipschitz constants), we assume that they all coincide and denote the common constant by K > 0. First, we show the Lipschitz continuity of V1. For x, y ∈ [0,L], we have

$$ \begin{array}{@{}rcl@{}} |V_{1}(x)-V_{1}(y)| &=& \left|{{\int}_{0}^{x}} v_{1}(s) \mathrm{d} s - {{\int}_{0}^{y}} v_{1}(s) \mathrm{d} s\right| = \left|{{\int}_{y}^{x}} v_{1}(s) \mathrm{d} s\right|\\ &\le& |x-y| \|v_{1}\|_{L^{\infty}} \le K|x-y|. \end{array} $$

The Lipschitz continuity of β is clear in the individual intervals \(\left [0, \tfrac {L}{c}\right )\) and \(\left [\tfrac {L}{c}, T+\tfrac {L}{c}\right )\). Consider \(s \in \left [0, \tfrac {L}{c}\right )\) and \(r \in \left [\tfrac {L}{c}, T+\tfrac {L}{c}\right )\). Then, using V1(0) = 0 and \(\left |\tfrac {L}{c} -s\right | = \tfrac {L}{c} -s \le r - s = |r-s|\), leads to

$$ \begin{array}{@{}rcl@{}} |\upbeta(s) - \upbeta(r)| &=& \tfrac{1}{c} |c v_{0}(L-cs) - V_{1}(L-cs) - cv_{0}(0) + V_{1}(0)| \\ &\le& \tfrac{(c+1)K}{c}|L-cs| \le (c+1)K |r-s|. \end{array} $$

The Lipschitz continuity of β and of ξ implies that α is Lipschitz continuous on \(\left[\tfrac {L}{c}, T+\tfrac{L}{c}\right]\), and by the Lipschitz continuity of v0 and V1 it is Lipschitz continuous on \(\left [0, \tfrac {L}{c}\right )\). Again, the case \(s \in \left [0, \tfrac {L}{c}\right )\) and \(r \ge \tfrac {L}{c}\) remains. We obtain

$$ |\alpha(s) - \alpha(r)| = \left|v_{0}(cs) + \tfrac{1}{c} V_{1}(cs) - 2 \xi\left( r-\tfrac{L}{c}\right) + \upbeta\left( r- \tfrac{L}{c}\right)\right|. $$

For \(\frac {L}{c}\le r < \tfrac {2L}{c}\), this yields by the definition of β

$$ \begin{array}{@{}rcl@{}} |\alpha(s) - \alpha(r)| &=& \left|v_{0}(cs) + \tfrac{1}{c} V_{1}(cs) - 2 \xi\left( r-\tfrac{L}{c}\right) + v_{0}(2L-cr)- \tfrac{1}{c} V_{1}(2L- cr)\right| \\ &=& \left|v_{0}(cs) - v_{0}(L) +2\left( v_{0}(L) - \xi \left( r- \tfrac{L}{c}\right)\right)\right.\\ && \left. + v_{0}(2L-cr)- v_{0}(L) + \tfrac{1}{c}\left( V_{1}(cs)- V_{1}(2L-cr)\right)\right|. \end{array} $$

By the triangle inequality and the compatibility v0(L) = ξ(0), we obtain

$$ \begin{array}{@{}rcl@{}} |\alpha(s) - \alpha(r)| &\le& |v_{0}(cs) -v_{0}(L)| + 2 \left|\xi(0) -\xi \left( r- \tfrac{L}{c}\right)\right|\\ && + |v_{0}(2L-cr)- v_{0}(L)| + \tfrac{1}{c}|V_{1}(cs)- V_{1}(2L-cr)|\\ &\le& K|cs-L| + 2 K\left|-r+\tfrac{L}{c}\right| + K|L-cr| \\ &&+ \tfrac{K}{c} |cs -L| + \tfrac{K}{c}|-L + cr|\\ &=& K\left[L - cs + 2\left( r- \tfrac{L}{c}\right) + cr - L + \tfrac{L}{c} -s + r - \tfrac{L}{c}\right]\\ &\le& K[c(r-s) + 2 (r-s) + (r-s)] = K(c+3)|r-s|, \end{array} $$

since \(-\tfrac {L}{c} \le -s\). For \(r\ge \tfrac {2L}{c}\), we have by the definition of β and V1(0) = 0

$$ \begin{array}{@{}rcl@{}} |\alpha(s) - \alpha(r)| & =& \left|v_{0}(cs) + \tfrac{1}{c} V_{1}(cs) - 2 \xi\left( r - \tfrac{L}{c}\right) + v_{0}(0)\right|\\ & =& \left|v_{0}(cs) - v_{0}(L) +2\left( v_{0}(L) -\xi\left( r- \tfrac{L}{c}\right)\right) + v_{0}(0)\right. \\ && \left. - v_{0}(L) + \tfrac{1}{c} V_{1}(cs) - \tfrac{1}{c} V_{1}(0) \right|\\ & \le& |v_{0}(cs) - v_{0}(L)| + 2 \left| \xi(0) -\xi\left( r- \tfrac{L}{c}\right)\right| \\ && + |v_{0}(0)- v_{0}(L)| + \tfrac{1}{c} |V_{1}(cs) - V_{1}(0)|\\ & \le& K|cs - L| + 2 K \left|-r + \tfrac{L}{c}\right| + K |L| + \tfrac{K}{c} |cs|\\ & =& K \left[L -cs + 2\left( r- \tfrac{L}{c}\right) + L + s \right]\\ & \le& K [c (r-s) + 2 (r-s) + (r-s)] \le K(c+3) |r-s|, \end{array} $$

because \(2L \le rc\), \(-\tfrac {L}{c} \le -s\) and \(r-s \ge \tfrac {2L}{c} - s \ge \tfrac {L}{c} \ge s\). This shows the Lipschitz continuity of α and concludes the proof. □

Remark 2

Results similar to Theorems 2 and 4 also hold for general feedback gains η > 0.

5.3 Optimization of the Feedback Parameter

The feedback parameter η can be chosen such that the probability (35), viewed as a function G(η) of η, is maximized. We consider the probability to stay below the bound \(v_{\max \limits } = 5\) for feedback parameters η > 0 on a grid with stepsize 0.05 between 1.5 and 4. The data for this example are L = 2, T = 2, c = 0.5. For the approximation of the probability, 2000 samples were used for each value of η. The maximum of the probability is attained for the completely absorbing feedback η = 1/c = 2. The peak in the probability is very distinct, and at the peak the probability function appears to be nonsmooth. Numerically, we thus find that the choice η = 1/c is optimal; see Fig. 11.

Fig. 11
figure 11

The probability to stay under the bound \(v_{\max \limits } = 5\) over the feedback parameter η for the data L = 2, c = 0.5, T = 2 using 2000 samples. The maximum of the probability is reached for completely absorbing feedback η = 1/c = 2

6 Conclusion

In this paper, we dealt with a joint model of probabilistic and robust constraints, so-called probust constraints, and illustrated their importance for gas transport under uncertainty. In particular, we addressed the problem of capacity maximization under uncertainty, thereby distinguishing between the cases of uncertain exit and uncertain entry loads. Moreover, we considered a stabilization problem in a transient system governed by the wave equation and subject to probust constraints. By applying the spheric radial decomposition of Gaussian random vectors, we approximated the occurring probabilities and, where possible, their sensitivities with respect to the decision variables in order to numerically solve the resulting optimization problems. Many challenges remain for future work, such as the efficient incorporation of cycles or active elements in the network. Moreover, a full integration of the methodology outlined in Section 4 for the robust treatment of uncertain entries with the capacity maximization problem described in Section 3 would ultimately allow an application of the probust approach to arbitrary network topologies.