# Multi-usage hydropower single dam management: chance-constrained optimization and stochastic viability

## Abstract

We consider the management of a single hydroelectric dam, subject to uncertain inflows and electricity prices and to a so-called “tourism constraint”: the water storage level must be high enough during the tourist season with high enough probability. We cast the problem in the stochastic optimal control framework: we search at each time *t* the optimal control as a function of the available information at *t*. We lay out two approaches. First, we formulate a chance-constrained stochastic optimal control problem: we maximize the expected gain while guaranteeing a minimum storage level with a minimal prescribed probability level. Dualizing the chance constraint by a multiplier, we propose an iterative algorithm alternating additive dynamic programming and update of the multiplier value “à la Uzawa”. Our numerical results reveal that the random gain is very dispersed around its expected value; in particular, low gain values have a relatively high probability to materialize. This is why, to put emphasis on these low values, we outline a second approach. We propose a so-called stochastic viability approach that focuses on jointly guaranteeing a minimum gain and a minimum storage level during the tourist season. We solve the corresponding problem by multiplicative dynamic programming. To conclude, we discuss and compare the two approaches.

### Keywords

Stochastic optimal control · Chance constraints · Stochastic viability · Dynamic programming · Hydroelectric dam management · Energy management

## 1 Introduction

As a source of electricity, hydropower is an interesting asset: it emits no greenhouse gases and provides fast-usable energy, cheap and substitutable for thermal generation. On the flip side, hydropower management has to deal with uncertain water inflows, uncertain electricity prices, and multiple uses (agriculture, tourism, flood prevention). This paper depicts the case of a single dam subject to conflicting economic and touristic demands, as faced by Electricité de France (the main French electricity provider).

We will explore two ways to reconcile, under uncertainty, the economic objective of maximizing the payoff and a tourist objective. In this case, the local authorities prescribe a so-called “contrainte de cote touristique” (tourism water level constraint) as a chance constraint: there must be enough water during the tourist season with high enough probability.

Chance-constrained programming was introduced more than fifty years ago by [5] and then widely developed by many authors (see e.g. [9, 16, 23, 24]). Its application to dam management problems can be found in [2, 4, 22, 23] and the references therein. Most of the above literature deals with chance constraints where the decision variables are static or open-loop, that is, solutions are deterministic vectors. By contrast, in this paper we focus on chance constraints in the framework of discrete-time stochastic optimal control: at each stage *t*, decision variables are “closed-loop” variables, that is, functions depending on the information available at *t*. To the best of our knowledge, few papers have addressed closed-loop stochastic dynamic optimization problems subject to probability constraints. Let us mention [4], in which the (joint) probability constraint is handled through a discretization of the control variables, modeled as functions of the trajectory of past noises (rather than as functions of the state variables): while effective, this approach does not easily scale to a large number of time steps. In [21], an approach based on dynamic programming is proposed: however, the joint chance constraint is not treated as such, but is replaced in a conservative way by another constraint, so that the solution obtained is admissible, but possibly suboptimal, for the original problem.

In this paper, we analyze how to handle the conflicting objectives of maximizing the payoff from dam energy production and of satisfying a tourist objective, formulated as a chance constraint. In Sect. 2, we present the dam hydroelectric dynamics over a discrete-time span \(\{0,\ldots ,T\}\), and the economic objective. In Sect. 3, we aim to maximize the expectation of the economic gain while satisfying the tourist constraint: we formulate a so-called chance-constrained stochastic optimal control problem. To prepare a resolution by stochastic dynamic programming (SDP), we add a binary random variable to the storage level of the dam at time *t* to form an extended dynamic state. This new random variable allows us to represent the chance constraint as an expectation constraint involving the extended state at final time *T*. As formulated, the problem is amenable to SDP, but at the price of an infinite dimensional state (see [6]). To overcome this obstacle, we present an original approach: after dualizing the expectation constraint, we apply additive dynamic programming for every fixed value of the multiplier, and we iteratively update the multiplier until numerical convergence is obtained. We provide numerical results for this method, based on a real-life example provided by Electricité de France. We observe that the random gain is noticeably dispersed around its expected value; in particular, low gain values have a relatively high probability to materialize. We focus on these low gains in Sect. 4. We propose a so-called stochastic viability approach (see [8, 10]) that symmetrizes the economic and the tourist stakes. More precisely, we aim to maximize the probability to jointly guarantee storage levels and gains. We propose another extended dynamic state making it possible to solve the problem by multiplicative dynamic programming. We provide numerical results on the same instance, together with a graphical description of the trade-offs between the economic and tourist objectives.
To conclude, we discuss and compare the two approaches in Sect. 5.

## 2 Dam modeling

We present the dynamics of the dam, and the production model. As far as the decisions we are looking for depend on the available information, most of the variables involved in the model are random variables, denoted by bold letters.

### 2.1 Dynamics of the dam

For *t* in \( \{0,\ldots ,T\} \), we consider the following real valued random variables:

- \( \mathbf {X}_{t} \), the water *storage level* in the dam at the beginning of period \([t, t+1[\),
- \( \mathbf {U}_{t} \), the dam *turbined outflow* during \([t, t+1[\),
- \( \mathbf {A}_{t} \) and \( \mathbf {C}_{t} \), the dam *inflow* and the *electricity price* during \([t, t+1[\).

We call \( \mathbf {W}_{t} = \left( \mathbf {A}_{t}, \, \mathbf {C}_{t} \right) \) the *noise* process and assume that \( \mathbf {W}_{0}, \, \ldots , \, \mathbf {W}_{T-1} \) are independent random variables. The independence assumption is of paramount importance in order to use Stochastic Dynamic Programming and obtain optimal closed-loop decision variables as feedbacks on the water storage \( \mathbf {X}_{t} \). This assumption can be relaxed when it is possible to identify a dynamics in the noise process, by incorporating this new dynamics in the state variables, that is, by extending the state (see e.g. [17, 19] on this topic). Note that the \(\mathbf {W}_{t}\) need to be statistically independent, but that their distribution can depend upon time *t*. This makes it possible to handle part of a temporal dependency, such as seasonal effects (more inflow in winter, less in summer). Finally, we do not require that inflow \( \mathbf {A}_{t} \) and price \( \mathbf {C}_{t} \) be statistically independent, thus opening the possibility to take into account the customary correlation^{1} between these random variables.
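To fix ideas, such a noise process can be sketched as follows; the distributions and all numerical parameters are hypothetical, not the EDF data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_noise(t, n=1):
    """Sample n realizations of W_t = (A_t, C_t) for stage t.

    The W_t are drawn independently across time (the key assumption for
    dynamic programming), but their distribution depends on t, which
    captures seasonal effects; inflow and price are correlated within a
    given stage. All numbers are illustrative stand-ins.
    """
    season = 1.0 + 0.5 * np.cos(2 * np.pi * t / 12)       # more inflow in winter
    inflow = rng.gamma(shape=2.0, scale=5.0 * season, size=n)
    # price negatively correlated with inflow within the same stage
    price = 40.0 * season - 0.5 * inflow + rng.normal(0.0, 3.0, size=n)
    return inflow, price
```

Independence across stages is what lets a dynamic programming recursion use only the current state; the within-stage correlation between inflow and price costs nothing, since both components are drawn jointly at each stage.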

### 2.2 Constraints on the control

The control \(\mathbf {U}_{t}\) is measurable w.r.t.^{2} the information available at time *t*, namely the past noises \(\left( \mathbf {W}_{0}, \, \ldots , \, \mathbf {W}_{t} \right) \). More precisely, this corresponds to the so-called Hazard-Decision framework: indeed, the control \(\mathbf {U}_{t}\) depends upon the past and the current realizations of the noise at time *t*.^{3}

### 2.3 Dam production and valuation

The gain \( \mathbf {G} \) over the horizon is defined by
$$\begin{aligned} \mathbf {G} = \sum \limits _{t = 0}^{T-1} \left( \mathbf {C}_{t} \, \eta _{t} (\mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t}) - \epsilon \, {\mathbf {U}_{t}}^{2} \right) + v_{f}(\mathbf {X}_{T}), \end{aligned}$$(4)
where:

- the electricity production \( \eta _{t} (\mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t}) \) at time *t* is sold at price \(\mathbf {C}_{t}\),
- the quadratic term \(\epsilon \, {\mathbf {U}_{t}}^{2}\) is a (small) technical term introduced for differentiability purposes (more details are given in Remark 1),
- the non zero final value of water \( v_{f}(\mathbf {X}_{T}) \) prevents the dam reservoir from being empty at the end of the horizon.

## 3 Chance-constrained stochastic optimal management of a dam

The gain \( \mathbf {G} \) defined in Eq. (4) represents the economic stakes of the dam hydropower management. However, a dam is a facility that may serve several uses. Here, water sports are possible during the tourist season, provided that a minimal reference water level \(x_{\text {ref}}\) is ensured.

### 3.1 Mathematical problem statement

The tourist requirement is formalized as the chance constraint
$$\begin{aligned} \mathbb {P}\left[ \mathbf {X}_{t} \ge x_{\text {ref}}, \; \forall t \in \mathcal {T} \right] \ge p_\mathrm{{ref}}, \end{aligned}$$(6)
where \(\mathcal {T}\subset \{0,\ldots ,T\}\) is the tourist season and \(p_\mathrm{{ref}}\) a prescribed probability level. The chance-constrained stochastic optimal control problem (7) then consists in maximizing the expected gain \(\mathbb {E}\left[ { \mathbf {G} } \right] \), s.t.^{4} the dam dynamics, the control constraints and the chance constraint (6).

### 3.2 Discussion on chance constraints with closed-loop decisions

The optimization problem (7) is a so-called chance constrained stochastic optimal control problem. Chance constrained optimization problems were introduced by [5] with an individual chance constraint and by [20] with a joint chance constraint.

Such problems raise theoretical and numerical difficulties: indeed, it is mathematically difficult to guarantee the connectedness, the convexity or the closedness of the feasible set induced by the chance constraint, although these properties play key roles in optimization. When solutions are sought in open-loop form, connectedness, convexity or closedness properties may be proven to hold under assumptions on the constraint structure and on the distribution laws of the random variables (see [1, 9, 14, 23, 24] and the references therein). However, even a very general continuity result such as [1, Theorem 2.3.3] cannot be extended in a straightforward manner to the closed-loop situation. As a matter of fact, in the open-loop situation, that is, in the case where the control *u* lives in a standard vector space \(\mathbb {U}\) (e.g. \(\mathbb {R}^{n}\)), the control “does not move” with the randomness, whereas in the closed-loop situation, that is, in the case where the control \(\mathbf {U}\) is a random variable, namely a function defined on \(\Omega \) and valued in \(\mathbb {U}\), the control and the noise live in two spaces of random variables defined on the same probability space, and thus both vary with the randomness. So, the usual proofs (designed for open-loop solutions) no longer work for closed-loop solutions.

To bypass these difficulties, we proceed as follows:

- on the one hand, we will use stochastic dynamic programming, a method agnostic to whether variables are continuous, discrete or both, whether constraints define a convex domain or not, whether cost functions are convex or not, etc.;
- on the other hand, we will use duality only to obtain bounds, without relying on connectedness, convexity or closedness of the feasible set.

### 3.3 Reformulation with an additional binary process

It happens that problem (7) is amenable to SDP, but at the price of an infinite dimensional state. More precisely, using the *probability distribution* of \(\mathbf {X}_{t}\) rather than the value of \(\mathbf {X}_{t}\) as the state variable, DP can be used in a straightforward manner on problem (7) (see [6] for further details). But such an approach involving an infinite dimensional state is usually intractable, so we have to find an alternative solution. Since our primary goal is to design practical algorithms that produce closed-loop solutions, we now present an original approach to overcome this dimensionality obstacle. As a first step, we develop an equivalent formulation of (7) that involves an additional binary random process.

We introduce a binary random process \( \varvec{\pi }_{} = \left( \varvec{\pi }_{0}, \ldots , \varvec{\pi }_{T} \right) \), where \( \varvec{\pi }_{t} \) indicates whether the storage level has remained above \(x_{\text {ref}}\) during the tourist season up to time *t*. Then, the chance constraint (6) can be written as the expectation constraint \( \mathbb {E}\left[ { \varvec{\pi }_{T} } \right] \ge p_\mathrm{{ref}} \) on the extended state \( (\mathbf {X}_{T}, \varvec{\pi }_{T}) \) at final time.

### 3.4 Theoretical analysis

For a multiplier \(\lambda \ge 0\) associated with the constraint (9f), we define:

- the criterion
  $$\begin{aligned} J({\mathbf {X}_{}}, {\varvec{\pi }_{}}, {\mathbf {U}_{}}) = \mathbb {E}\left[ { \mathbf {G} } \right] = \mathbb {E}\left[ { \sum \limits _{t = 0}^{T-1} {\mathbf {C}_{t} \, \eta _{t} (\mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t}) - \epsilon \, {\mathbf {U}_{t}}^{2} } + v_f(\mathbf {X}_{T}) } \right] , \end{aligned}$$(10)
- the value \(J^{\sharp }\) of the optimization problem (9)
  $$\begin{aligned} J^{\sharp } = \mathbb {E}\left[ { \mathbf {G}^{\sharp } } \right] = \max _{\mathbf {X}_{}, \varvec{\pi }_{}, \mathbf {U}_{} } J({\mathbf {X}_{}}, {\varvec{\pi }_{}}, {\mathbf {U}_{}}) \text { s.t. }(9\mathrm{b}){-}(9\mathrm{c}){-}(9\mathrm{d}){-}(9\mathrm{e}){-}(9\mathrm{f}), \end{aligned}$$(11)
- and the *dual function*
  $$\begin{aligned} D(\lambda ) = \max _{\mathbf {X}_{}, \, \varvec{\pi }_{}, \, \mathbf {U}_{} } \;&J({\mathbf {X}_{}}, {\varvec{\pi }_{}}, {\mathbf {U}_{}}) + \lambda \mathbb {E}\left[ { \varvec{\pi }_{T} - p_\mathrm{{ref}} } \right] \nonumber \\ \text {s.t.} \quad&(9\mathrm{b}){-}(9\mathrm{c}){-}(9\mathrm{d}){-}(9\mathrm{e}). \end{aligned}$$(12)

**Proposition 1**

*Let \(\lambda ^{\star } \ge 0\) and \((\mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star })\) be such that:*

- 1. \((\mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star })\) is solution of
  $$\begin{aligned} \max _{\mathbf {X}_{}, \, \varvec{\pi }_{}, \, \mathbf {U}_{} } \;&\mathbb {E}\left[ { \sum \limits _{t = 0}^{T-1} {\mathbf {C}_{t} \eta _{t} (\mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t})} - \epsilon \, {\mathbf {U}_{t}}^{2} + v_f(\mathbf {X}_{T}) + \lambda ^{\star } \left( \varvec{\pi }_{T} - p_\mathrm{{ref}}\right) } \right] \nonumber \\ \text {s.t.} \quad&(9\mathrm{b}){-}(9\mathrm{c}){-}(9\mathrm{d}){-}(9\mathrm{e}), \end{aligned}$$(13a)
- 2. \(\varvec{\pi }_{T}^{\star }\) satisfies the chance constraint (9f), that is,
  $$\begin{aligned} \mathbb {E}\left[ { \varvec{\pi }_{T}^{\star } } \right] \ge p_\mathrm{{ref}}. \end{aligned}$$(13b)

*Then the following estimate holds:*
$$\begin{aligned} J({\mathbf {X}_{}^{\star }}, {\varvec{\pi }_{}^{\star }}, {\mathbf {U}_{}^{\star }}) \le J^{\sharp } \le J({\mathbf {X}_{}^{\star }}, {\varvec{\pi }_{}^{\star }}, {\mathbf {U}_{}^{\star }}) + \lambda ^{\star } \left( \mathbb {E}\left[ { \varvec{\pi }_{T}^{\star } } \right] - p_\mathrm{{ref}} \right) . \end{aligned}$$(14)

*Proof*

Equation (14) is reminiscent of the marginalist interpretation of multipliers. It allows us to control the error on the optimal gain. In addition, when equality holds in the chance constraint (9f), \((\mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star })\) is an optimal solution of the optimization problem (9): hence, we recover an application of Everett’s theorem (see [13]).

### 3.5 Iterative algorithm and numerical convergence

The algorithm, sketched in Fig. 1, alternates two steps at each iteration *k*:

- the maximization (13a) that leads, for the given multiplier value \(\lambda ^{(k)}\), to the computation of the strategy \(\mathbf {U}_{}^{(k+1)}\),
- the update of the multiplier value \(\lambda ^{(k)}\) to \(\lambda ^{(k+1)}\) by a gradient step method.^{5}

Proving the convergence of the algorithm in Fig. 1—for instance, by applying general results on the convergence of the Uzawa method (see e.g. [12])—raises delicate issues. Indeed, as mentioned in Sect. 3.2, there is hardly any functional property (connectedness, convexity, etc.) that can be proved to hold for the chance constraint (6). However, we shall content ourselves with approximate convergence, with an estimate given by Proposition 1.

In Sect. 3.5.1, we detail the maximization (13a) by additive dynamic programming and, in Sect. 3.5.2, we detail the multiplier update by a gradient step method. Finally, we discuss approximate convergence in Sect. 3.5.3.
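The two-step scheme can be sketched as follows, where `solve_inner` is a hypothetical callback standing for the dynamic programming solver of the inner maximization (13a), and the selection of the best admissible iterate follows the criterion of Sect. 3.5.3:

```python
def uzawa_chance_constrained(solve_inner, p_ref, lam0=0.0, rho=3000.0, n_iter=800):
    """Uzawa-like iteration dualizing the constraint E[pi_T] >= p_ref.

    `solve_inner(lam)` is an assumed callback that solves the inner
    maximization for a fixed multiplier `lam` (e.g. by dynamic programming)
    and returns (expected_gain, prob_level), where prob_level approximates
    E[pi_T] under the computed strategy. A sketch, not EDF's code.
    """
    lam, history = lam0, []
    for _ in range(n_iter):
        gain, prob = solve_inner(lam)            # primal step: maximize the Lagrangian
        history.append((lam, gain, prob))
        # dual gradient step: increase lam while the constraint is violated
        lam = max(0.0, lam + rho * (p_ref - prob))
    # among admissible iterates, keep the one with the smallest gap, as in (14)
    admissible = [h for h in history if h[2] >= p_ref]
    best = min(admissible, key=lambda h: h[0] * (h[2] - p_ref)) if admissible else None
    return best, history
```

With a toy inner solver whose probability level increases with the multiplier, the iteration stabilizes and an admissible iterate with a small gap is returned; on the real problem, the discrete probability levels produce the cyclic behavior discussed later.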

#### 3.5.1 Primal maximization by additive dynamic programming

#### 3.5.2 Multiplier update by a gradient step method

*Remark 1*

The multipliers are updated by a gradient method essentially for the sake of simplicity. In fact, a more sophisticated update scheme (subgradient, cutting plane, bundle methods) could be used, because the dual function (12) might be only subdifferentiable. Note however that we added the technical term \(- \epsilon \, {\mathbf {U}_{t}}^{2}\) in the hydroelectric production gain \(\mathbf {G}\) (see Sect. 2.3) in order to reinforce the strong concavity of the gain function, and hence the differentiability of the dual function. As the dual problem consists in minimizing a function defined on the real line, another possibility would be to use a dichotomy method (see [21] for such an application).

#### 3.5.3 Numerical convergence

When implementing the algorithm, at least two questions arise. First, as theoretical properties such as convexity cannot be established, there may be an irreducible duality gap between the optimal gain \(J^{\sharp }\) and the values obtained for the dual function. Second, as the random variables involved in the problem modeling (inflows and prices) are discrete, the variable \(\mathbb {E} \big [ \varvec{\pi }_{T} \big ]\) takes a finite number of possible values; therefore, any probability level \(p_\mathrm{{ref}}\) that is not among this finite number of possible values can never be attained.

In the light of these findings, we seek an admissible solution that is *as-good-as-possible*. The multiplier update makes this possible since the gradient step increases both the multiplier value and the probability level, up to a point where the probability level \(p^{(k)}\) obtained by (16) at some iteration *k* of the algorithm is greater than the required level \(p_\mathrm{{ref}}.\)^{6} Then, a straightforward application of Proposition 1 (in the form of Everett’s theorem with saturation of the chance constraint) shows that the solution of the inner maximization problem (15) leading to \(p^{(k)}\) is also a solution of problem (9) when replacing constraint (9f) by \(\mathbb {E} \big [\varvec{\pi }_{T}\big ]\ge p_\mathrm{{ref}}+ \epsilon ^{(k)}\), where \(\epsilon ^{(k)}= p^{(k)}-p_\mathrm{{ref}}>0\). This way, an admissible solution of problem (7) is exhibited, if one exists. Since the probability level \(p^{(k)}\) is above its prescribed value \(p_\mathrm{{ref}}\), the gradient step method then makes it decrease to a value just below \(p_\mathrm{{ref}}\). In this case, the solution of the inner maximization problem (15) leading to \(p^{(k+l)}\) is also a solution of problem (9) when replacing constraint (9f) by \(\mathbb {E} [\varvec{\pi }_{T}]\ge p_\mathrm{{ref}}- \epsilon ^{(k+l)}\), where \(\epsilon ^{(k+l)}=p_\mathrm{{ref}}-p^{(k+l)}>0\). And so on, resulting in the cyclic behavior that can be observed in Figs. 4, 5 and 6.

The algorithm can be stopped when the cycling is identified, which is easy to detect since the dual minimization in \(\lambda \) corresponds to a one-dimensional optimization problem. Then, the best iteration within the cycle is selected, that is, the iteration corresponding to a solution \((\mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star })\) such that \(\mathbb {E} [\varvec{\pi }_{T}^{\star }] \ge p_\mathrm{{ref}}\) and with the lowest possible gap \(\lambda ^{\star } ( \mathbb {E}\left[ { \varvec{\pi }_{T}^{\star } } \right] -p_\mathrm{{ref}})\) given by estimate (14).

### 3.6 Numerical experiment

We now solve the optimization problem (7) (or (9)) for a specific numerical instance. The instance is based on a real case given by Electricité de France, the main French electricity provider. We graphically display the almost-optimal solution, given by the algorithm developed in Sect. 3.5, and exhibit the variability of the corresponding almost-optimal gain.

#### 3.6.1 Numerical instance

The characteristics of the instance are the following:

- maximum capacity of the dam reservoir: \( \overline{x} = 80~\mathrm{hm^3} \),
- time horizon: \( T = 11 \) (11 time steps),
- tourist reference storage level: \( x_{\text {ref}}= 50~\mathrm{hm^3} \),
- tourist season: \(\mathcal {T}= \{7, \, 8\}\), i.e. the months of July and August,
- tourist reference probability level: \( p_\mathrm{{ref}}= 0.9 \),
- maximum water volume which can be turbined: \( \overline{u} = 40~\mathrm{hm^3} \),
- electricity production function (in (4)): \( \eta _{t} (\mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t}) = 66 \times \mathbf {U}_{t} \),
- final value of water (in (4)): \( v_{f}(\mathbf {X}_{T}) = 500 \times {\max \{\mathbf {X}_{T}-x_{0}, \, 0\}}^{2} \),
- technical term in the cost (in (4)): \( {\mathbf {U}_{t}}^{2} \), that is, \( \epsilon = 1 \).

At each time *t*, the noise random variables \(\mathbf {C}_{t}\) and \(\mathbf {A}_{t}\) are supposed to be independent^{7} and to take equiprobable values in discrete sets \(\mathbb {C}_{t}\) and \(\mathbb {A}_{t}\) respectively, each containing a few tens of values. The sets \(\mathbb {C}_{t}\) and \(\mathbb {A}_{t}\) explicitly depend on time *t* to account for seasonality effects. Representative scenarios of the noise are displayed in Fig. 2 for the inflows and in Fig. 3 for the electricity prices.

#### 3.6.2 Implementation of the algorithm described in Sect. 3.5

According to Sect. 3.6.1, the dynamic state \( (\mathbf {X}_{t}, \varvec{\pi }_{t}) \) is such that \( \varvec{\pi }_{t} \) is a binary variable and that \(\mathbf {X}_{t}\) takes its values in \( \mathbb {X} = [0, \, 80] \). Moreover, the control variable \( \mathbf {U}_{t} \) takes its values in \( \mathbb {U} = [0, \, 40] \). To solve the dynamic programming equations (15), the continuous spaces \( \mathbb {X} \) and \( \mathbb {U} \) are discretized by regular grids with \( 2~\mathrm{hm^{3}} \) steps.^{8} Thus, \( \mathbb {X} \) is reduced to a set of 40 points and \( \mathbb {U} \) is reduced to a set of 20 points. Regarding the implementation of the gradient step method, we set \( \rho = 3000 \) in (17).
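As an illustration, the inner maximization (13a) can be sketched as a backward recursion over the extended state \((x, \pi )\) on discretized grids. The dynamics, the noise values, the initial level \(x_{0} = 40~\mathrm{hm^{3}}\) and the multiplier value below are simplified stand-ins, not the EDF model or data:

```python
import numpy as np

xs = np.arange(0.0, 81.0, 2.0)          # storage grid, 2 hm^3 steps
us = np.arange(0.0, 41.0, 2.0)          # turbined-outflow grid
T, tourist, x_ref = 11, {7, 8}, 50.0
p_ref, lam, x0 = 0.9, 1000.0, 40.0      # lam: current multiplier; x0: assumed initial level
inflows = np.array([5.0, 10.0, 15.0])   # equiprobable toy inflows
prices = np.array([30.0, 40.0, 50.0])   # equiprobable toy prices

# terminal value: final water value plus the dualized chance-constraint term
V = {(i, p): 500.0 * max(x - x0, 0.0) ** 2 + lam * (p - p_ref)
     for i, x in enumerate(xs) for p in (0, 1)}

for t in reversed(range(T)):
    newV = {}
    for i, x in enumerate(xs):
        for p in (0, 1):
            best = -np.inf
            for u in us:
                total = 0.0
                for a in inflows:
                    for c in prices:
                        u_eff = min(u, x + a)                # cannot turbine more than available
                        x_next = min(x + a - u_eff, xs[-1])  # reservoir capped at capacity
                        # pi stays at 1 only if the tourist level holds in season
                        p_next = p * (x_next >= x_ref) if (t + 1) in tourist else p
                        j = int(np.abs(xs - x_next).argmin())
                        total += c * 66.0 * u_eff - u_eff ** 2 + V[(j, int(p_next))]
                best = max(best, total / (inflows.size * prices.size))
            newV[(i, p)] = best
    V = newV
# V[(i, p)] approximates the optimal Lagrangian value from (xs[i], p) at t = 0
```

The additive structure is visible in the recursion: the stage payoff is added to the expected future value, and the multiplier only enters through the terminal term \(\lambda (\pi - p_{\mathrm{ref}})\).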

#### 3.6.3 Numerical results

*Numerical convergence.* We have run the algorithm described in Sect. 3.5 on an Intel i7-based personal computer. The convergence is obtained in less than 800 iterations (100 s), and the associated behavior of the algorithm is represented in Figs. 4, 5 and 6. These figures depict the evolution of the tourist probability level, the value of the dual function (12), and the multiplier values along the algorithm iterations. They zoom in on the last 100 iterations, in order to observe the cyclic phenomenon that characterizes the numerical convergence, as discussed in Sect. 3.5.3.

*Quality assessment of the solution.* Using the stopping criterion defined in Sect. 3.5.3, the algorithm converges numerically to an approximate solution \( \left( \mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star }, \lambda ^{\star } \right) \) such that \( \mathbb {E}\left[ { \varvec{\pi }_{T}^{\star } } \right] \ge p_\mathrm{{ref}} \). From these values, we deduce

- the gap \( \lambda ^{\star } \left( \mathbb {E}\left[ { \varvec{\pi }_{T}^{\star } } \right] - p_\mathrm{{ref}} \right) \) given by estimate (14),
- the gain \( \mathbb {E}\left[ { \mathbf {G}^{\star } } \right] = 250{,}116 \) €.

Thus, we refer to the approximate solution \( \left( \mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star }, \lambda ^{\star } \right) \) as the *chance-constrained quasi-optimal solution*.

*Remark 2*

*Chance-constrained quasi-optimal storage level trajectories.* To depict the storage level process \( \mathbf {X}_{}^{\star } \) and the probability distribution of the gain \(\mathbf {G}^{\star }\), we draw a sample of 1,000,000 realizations of the noise process \( \left( \mathbf {A}_{t}, \, \mathbf {C}_{t} \right) _{t \in \{0, \, \ldots , \, T-1\}} \). We then apply the control strategy \( \mathbf {U}_{}^{\star } \) along each of these noise trajectories and obtain the associated storage level trajectories and gain values, from which Monte Carlo approximations are computed. By contrast, notice that the algorithm outputs given in the above paragraph were exact up to numerical precision.

For this 1,000,000-trajectory sample, we observe that 891,875 trajectories satisfy the tourism constraint, leading to a tourist probability level of about 0.892 (the Monte Carlo approximated value could be above or below 0.9, depending on the sample). This can also be seen in Fig. 7, which represents 100 (among 1,000,000) trajectories of the dam storage level: the tourist storage level \(x_{\text {ref}}\) is respected during the tourist season \(\mathcal {T}\) for 90 % of the trajectories.
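The simulation step can be sketched as follows; the `policy` and `sample_noise` callbacks, the initial level and the simplified dynamics are assumptions of this sketch, not the EDF model:

```python
import numpy as np

def monte_carlo_assess(policy, sample_noise, x0=40.0, T=11, tourist=(7, 8),
                       x_ref=50.0, x_max=80.0, n=10_000, seed=0):
    """Estimate the tourist probability level and the empirical gain
    distribution for a feedback policy, in the spirit of Sect. 3.6.3.

    `policy(t, x)` returns the turbined outflow and `sample_noise(t, rng)`
    returns one (inflow, price) pair; both signatures are assumed here.
    """
    rng = np.random.default_rng(seed)
    ok = np.zeros(n, dtype=bool)
    gains = np.zeros(n)
    for i in range(n):
        x, satisfied, gain = x0, True, 0.0
        for t in range(T):
            a, c = sample_noise(t, rng)
            u = min(policy(t, x), x + a)            # feasible outflow
            gain += c * 66.0 * u - u ** 2           # stage payoff as in (4), with eps = 1
            x = min(x + a - u, x_max)               # storage dynamics, capped at capacity
            if t + 1 in tourist and x < x_ref:
                satisfied = False                   # tourist level violated in season
        ok[i], gains[i] = satisfied, gain
    return ok.mean(), gains
```

The returned array of gains can then be histogrammed to obtain an empirical distribution such as the one discussed below, and the mean of `ok` estimates \(\mathbb {E}\left[ \varvec{\pi }_{T} \right] \).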

*Empirical probability distribution of the chance-constrained quasi-optimal gain.* Figure 8 represents the empirical probability distribution of the gain associated with the sample. We observe that the deviation of the random variable \( \mathbf {G}^{\star } \) from its expected value is substantial: the standard deviation is about 40 % of \( \mathbb {E}\left[ { \mathbf {G}^{\star } } \right] \). Such a property might disappoint a dam manager who would expect to observe a gain rather close to the theoretical mean. In particular, losses can be substantial with a non-negligible probability. This is why we focus on the left tail of the gain distribution in the next section.

## 4 Stochastic viability approach to the dam management

In this section, we put emphasis on the low gain realizations that we detected in Sect. 3.6.3. We propose a so-called *stochastic viability approach* ([8, 10]) that symmetrizes the economic and the tourist stakes by maximizing the probability to jointly guarantee storage levels and gains.

### 4.1 Description of the approach

Given a guaranteed storage level \(x_{\text {ref}}\) and a guaranteed gain \(g_{\text {ref}}\), we associate with any admissible strategy its *viability probability*: the probability that the storage level remains above \(x_{\text {ref}}\) throughout the tourist season \(\mathcal {T}\) and that the gain \(\mathbf {G}\) is at least \(g_{\text {ref}}\).

The optimal value (21a) is called the *maximal viability probability*. In Sect. 4.2, we show how we can solve the optimization problem (21) by *multiplicative* stochastic dynamic programming.

### 4.2 Solving the stochastic viability problem by dynamic programming

**Theorem 1**

Thus, solving the dynamic programming equation (24) gives the solution of the stochastic viability problem (23), hence the solution of (21) since \(\mathbf {S}_{T}=\mathbf {G}\) by (22).

Now, we apply the stochastic viability approach to the numerical instance of Sect. 3.6.1. Then, we plot and interpret the function \(\phi ^{*}(x_{\text {ref}}, \, g_{\text {ref}})\) computed in Algorithm 1.

### 4.3 Numerical experiment

We consider the numerical instance described in Sect. 3.6.1.

#### 4.3.1 Implementation of the algorithm described in Sect. 4.2

To implement the resolution of the dynamic programming equation (24), we use the discretization scheme given in Sect. 3.6.2 for the continuous sets \( [0, \, 80] \) and \( [0, \, 40] \) in which the storage state variable \( \mathbf {X}_{t} \) and the outflow control variable \( \mathbf {U}_{t} \) take their values. For the second state variable \( \mathbf {S}_{t} \), that lives in a continuous set \( \mathbb {S}\) by (22), we fix \( \mathbb {S} = [0, \, 7.5 \times 10^{5}] \) and we discretize \( \mathbb {S}\) as a set of 2000 points. As mentioned in Sect. 4.2, the use of such a state variable substantially increases the algorithm running time.^{9}
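A minimal sketch of the multiplicative dynamic programming recursion (24) on the extended state \((x, s)\) may look as follows; the grids are coarsened for speed compared with Sect. 4.3.1, the dynamics and noise are toy stand-ins, and the final water value is omitted from the cumulated gain for brevity:

```python
import numpy as np

xs = np.arange(0.0, 81.0, 10.0)           # coarse storage grid (paper: 2 hm^3 steps)
ss = np.linspace(0.0, 7.5e5, 76)          # cumulated-gain grid (paper: 2000 points)
us = np.arange(0.0, 41.0, 10.0)
T, tourist, x_ref, g_ref = 11, {7, 8}, 50.0, 2.5e5
inflows = np.array([5.0, 15.0])           # equiprobable toy inflows
prices = np.array([30.0, 50.0])           # equiprobable toy prices

# terminal value: indicator that the guaranteed gain is reached
V = np.tile((ss >= g_ref).astype(float), (xs.size, 1))

for t in reversed(range(T)):
    newV = np.zeros_like(V)
    for i, x in enumerate(xs):
        for k, s in enumerate(ss):
            best = 0.0
            for u in us:
                total = 0.0
                for a in inflows:
                    for c in prices:
                        u_eff = min(u, x + a)
                        x_next = min(x + a - u_eff, xs[-1])
                        s_next = min(s + c * 66.0 * u_eff - u_eff ** 2, ss[-1])
                        # the storage indicator multiplies the future value
                        ok = 0.0 if ((t + 1) in tourist and x_next < x_ref) else 1.0
                        j = int(np.abs(xs - x_next).argmin())
                        l = int(np.abs(ss - s_next).argmin())
                        total += ok * V[j, l]
                best = max(best, total / (inflows.size * prices.size))
            newV[i, k] = best
    V = newV
# V[i, k]: maximal viability probability starting from (xs[i], ss[k]) at t = 0
```

The multiplicative structure appears in `ok * V[j, l]`: indicators are multiplied along the trajectory, so the value function stays in \([0, 1]\) and is interpreted directly as a probability; carrying the cumulated gain \(s\) as a second state variable is what makes the joint gain threshold dynamic-programming-compatible, at the cost of the extra grid.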

On an Intel i7-based personal computer, the computation associated with a single value of the pair \(\left( x_{\text {ref}}, \, g_{\text {ref}}\right) \) in the multiplicative dynamic programming algorithm (24) requires a CPU time of 229 s. We perform this computation for each guaranteed storage \(x_{\text {ref}}\) varying from 20 to 70 \(\mathrm {hm^{3}}\) by \(5~\mathrm{hm^{3}}\) steps, and for each guaranteed gain \(g_{\text {ref}}\) varying from 100,000 to 400,000 € by 25,000 € steps.

#### 4.3.2 Numerical results

*Isovalues of the maximal viability probability.* Figure 9 displays the isovalues of the maximal viability probability in (21a) as a function of the guaranteed gain \( g_{\text {ref}}\) in € and of the guaranteed storage \( x_{\text {ref}}\) in \(\mathrm{hm^{3}}\).

In Sect. 3, we maximized the expected gain subject to the tourist 0.9-level chance constraint (of having at least \(50~\mathrm{hm^3}\) in the tourist season), and obtained a quasi-optimal expected gain of 250,116 €. To what do these numbers correspond in Fig. 9? We see that *jointly* guaranteeing such a gain value and the same tourist constraint leads to no more than a 59 % probability level. Looking from another angle, if we keep the guaranteed storage \(x_{\text {ref}}\) at \(50~\mathrm{hm^3}\) and want a maximal viability probability (21a) of at least 0.9 (as was prescribed in Sect. 3), we cannot guarantee a gain higher than 175,000 €.

Thus, by symmetrizing the economic and the tourist stakes with the stochastic viability approach, we offer a complementary view on the management of the dam hydroelectric production by focusing on joint tails of economic and tourist indicators. As a practical application to tackle multiple uses, we suggest that the decision-making process start by drawing the viability probability isovalues, and then let stakeholders discuss the threshold values to set.

*Simulation of stochastic viability optimal trajectories* Now, we focus our attention on the solutions \( \mathbf {X}_{}^{\star \star } \), \( \mathbf {S}_{}^{\star \star } \) and \( \mathbf {G}^{\star \star } \) of the maximal viability problem (21a) for the thresholds \( x_{\text {ref}}= 50~\mathrm{hm^{3}}\) and \( g_{\text {ref}}= 250{,}116 \) €. These two values respectively correspond to the tourist threshold prescribed in Sect. 3.6.1 and to the value of the expected total gain \( \mathbb {E}\left[ { \mathbf {G}^{\star } } \right] \) as computed in Sect. 3.6.3. As we mentioned previously, the maximum probability to jointly respect these thresholds only equals 0.59.

*Stochastic viability optimal storage and gain trajectories.* The grey rectangle on the far right of Fig. 10 is hit by the trajectories that do not achieve the guaranteed gain threshold. In the same vein, the grey rectangle in the middle of Fig. 11 is hit by the trajectories that do not achieve the guaranteed tourist storage threshold.

By comparison, we observe that more trajectories hit the rectangle in Fig. 10 than the one in Fig. 11. This means that, for the 100 representative realizations drawn, the gain threshold is more critical than the tourist threshold for maximizing the viability probability. This observation is confirmed by the shape of the isovalues of the maximal viability probability in Fig. 9. Indeed, the isovalue curves change much more when the guaranteed gain \( g_{\text {ref}}\) changes than when the guaranteed storage \(x_{\text {ref}}\) does. Moreover, it is interesting to notice that Fig. 11 displays many trajectories that ensure the tourist constraint but do not meet the gain threshold. This is due to the fact that only the joint satisfaction of the storage and gain thresholds matters.

*Stochastic viability empirical probability distribution of the gain \(\mathbf {G}^{\star \star }\).* To conclude the numerical experiments, we depict in Fig. 12 the empirical probability distribution of the gain \(\mathbf {S}_{T}^{\star \star }=\mathbf {G}^{\star \star }\) and compare it to that of \(\mathbf {G}^{\star }\) obtained in Sect. 3.6.3.

## 5 Conclusion

When moving from deterministic to stochastic control, modelers traditionally take the expected value of the original criterion (economists call this approach “risk-neutral”). They tackle constraints in various senses, such as robust, with probability one, or with a given probability level. In the first part, we chose to handle the dam management issue in the latter way. We considered a chance-constrained stochastic optimal control problem, and we obtained satisfactory results with an algorithm that converged to an almost optimal solution. However, our numerical simulations revealed that the optimal random gain displayed a substantial dispersion. This is why, in the second part, we proposed a stochastic viability approach that symmetrizes the economic and the tourist stakes, and jointly guarantees minimal thresholds. We computed the isovalues of the maximal probability to jointly guarantee these thresholds. With this second approach, we obtained a more complete picture of how to deal with the management of multi-purpose facilities under risk, as dam reservoirs often are. Thus, we illustrated, on a case-based dam management problem, that risk in a dynamic setting can be formalized in various ways. More precisely, we shed light on a multi-purpose dam management issue from two angles, stochastic optimal control under a chance constraint and stochastic viability, which offered complementary insights.

Extensions are possible in two directions for chance-constrained stochastic optimal control (Sect. 3). For multiple constraints, one needs to use a multidimensional multiplier; numerical illustrations can be found in [3]. The extension to multiple dams requires the ability to implement stochastic dynamic programming techniques on large numerical instances.

## Footnotes

- 1.
Even if it was not the case in the data provided by Electricité de France.

- 2.
The abbreviation w.r.t. stands for “with respect to”.

- 3.
Whereas it would correspond to the Decision-Hazard framework if \( \mathbf {U}_{t} \) were measurable w.r.t. \( \sigma \left( \mathbf {W}_{0}, \, \ldots , \, \mathbf {W}_{t-1} \right) \) (see [7]).

- 4.
The abbreviation s.t. stands for “such that”.

- 5.
The gradient step method for the dual minimization problem may be replaced by a more efficient method such as bisection, conjugate gradient or quasi-Newton.

- 6.
Otherwise, the multiplier goes to infinity with the iteration index, which means that Constraint (9f) is infeasible.

- 7.
Although this assumption is by no means required.

- 8.
See Remark 2 for the influence of these discretization choices on the quality of the solution.

- 9.
We could consider that the set in which \( \mathbf {S}_{t} \) takes its values might vary with respect to *t*. This would certainly reduce the algorithm running time, but it would not reduce it by orders of magnitude.

## Notes

### Acknowledgments

The authors thank Electricité de France Research and Development for initiating this research through the CIFRE PhD funding of Jean-Christophe Alais and for supplying us with data.

### References

- 1. Van Ackooij, W.: Chance Constrained Programming with Applications in Energy Management. PhD thesis, École Centrale des Arts et Manufactures (2013)
- 2. Van Ackooij, W., Zorgati, R., Henrion, R., Möller, A.: Joint chance-constrained programming for hydro reservoir management. Optim. Eng. **15**, 509–531 (2014)
- 3. Alais, J.C.: Risque et optimisation pour le management d'énergies: application à l'hydraulique. PhD thesis, Université Paris-Est (2013)
- 4. Andrieu, L., Henrion, R., Römisch, W.: A model for dynamic chance constraints in hydro power reservoir management. Eur. J. Oper. Res. **207**, 579–589 (2010)
- 5. Charnes, A., Cooper, W.W.: Chance-constrained programming. Manag. Sci. **6**, 73–79 (1959)
- 6. Carpentier, P., Chancelier, J.-P., Cohen, G., De Lara, M., Girardeau, P.: Dynamic consistency for stochastic optimal control problems. Ann. Oper. Res. **200**, 247–263 (2012)
- 7. Carpentier, P., Chancelier, J.-P., Cohen, G., De Lara, M.: Stochastic Multi-Stage Optimization. At the Crossroads Between Discrete Time Stochastic Control and Stochastic Programming. Springer, Berlin (2015)
- 8. De Lara, M., Doyen, L.: Sustainable Management of Natural Resources: Mathematical Models and Methods. Environmental Science and Engineering. Springer, Berlin (2008)
- 9. Dentcheva, D.: Optimization models with probabilistic constraints. In: Lectures on Stochastic Programming: Modeling and Theory, pp. 87–153 (2009)
- 10. Doyen, L., De Lara, M.: Stochastic viability and dynamic programming. Syst. Control Lett. **59**(10), 629–634 (2010)
- 11. Dupačová, J.: Stability and sensitivity analysis for stochastic programming. Ann. Oper. Res. **27**(1), 115–142 (1990)
- 12. Ekeland, I., Temam, R.: Convex Analysis and Variational Problems. Studies in Mathematics and its Applications. North-Holland Pub. Co., New York (1999)
- 13. Everett, H.: Generalized Lagrange multiplier method for solving problems of optimum allocation of resources. Oper. Res. **11**, 399–417 (1963)
- 14. Henrion, R.: On the connectedness of probabilistic constraint sets. J. Optim. Theory Appl. **112**(3), 657–663 (2002)
- 15. Henrion, R.: A critical note on empirical (sample average, Monte Carlo) approximation of solutions to chance constrained programs. In: Hömberg, D., Tröltzsch, F. (eds.) System Modeling and Optimization. IFIP Advances in Information and Communication Technology, vol. 391, pp. 25–37. Springer, Heidelberg (2013)
- 16. Henrion, R., Römisch, W.: Hölder and Lipschitz stability of solution sets in programs with probabilistic constraints. Math. Program. **100**(3), 589–611 (2004)
- 17. Infanger, G., Morton, D.P.: Cut sharing for multistage stochastic linear programs with interstage dependency. Math. Program. **75**(2), 241–256 (1996)
- 18. Luedtke, J., Ahmed, S.: A sample approximation approach for optimization with probabilistic constraints. SIAM J. Optim. **19**(2), 674–699 (2008)
- 19. Maceira, M.E., Damazio, J.M.: The use of PAR(p) model in the stochastic dual dynamic programming optimization scheme used in the operation planning of the Brazilian hydropower system. In: IEEE 8th International Conference on Probabilistic Methods Applied to Power Systems, pp. 397–402, Ames, Iowa (2004)
- 20. Miller, L.B., Wagner, H.: Chance-constrained programming with joint constraints. Oper. Res. **13**, 930–945 (1965)
- 21. Ono, M., Kuwata, Y., Balaram, J.: Joint chance-constrained dynamic programming. In: 51st IEEE Conference on Decision and Control, pp. 1915–1922, Maui, Hawaii (2012)
- 22. Prékopa, A., Szántai, T.: On optimal regulation of a storage level with application to the water level regulation of a lake. Eur. J. Oper. Res. **3**(3), 175–189 (1979)
- 23. Prékopa, A.: Stochastic Programming. Mathematics and Its Applications. Kluwer Academic Publishers, Dordrecht (1995)
- 24. Prékopa, A.: Probabilistic programming. In: Ruszczyński, A., Shapiro, A. (eds.) Stochastic Programming. Handbooks in Operations Research and Management Science, vol. 10, pp. 267–351. Elsevier, Amsterdam (2003)