Energy Systems

Volume 8, Issue 1, pp 7–30

Multi-usage hydropower single dam management: chance-constrained optimization and stochastic viability

  • Jean-Christophe Alais
  • Pierre Carpentier
  • Michel De Lara
Original Paper

Abstract

We consider the management of a single hydroelectric dam, subject to uncertain inflows and electricity prices and to a so-called “tourism constraint”: the water storage level must be high enough during the tourist season with high enough probability. We cast the problem in the stochastic optimal control framework: we search, at each time t, for the optimal control as a function of the information available at t. We lay out two approaches. First, we formulate a chance-constrained stochastic optimal control problem: we maximize the expected gain while guaranteeing a minimum storage level with a minimal prescribed probability level. Dualizing the chance constraint by a multiplier, we propose an iterative algorithm alternating additive dynamic programming and updates of the multiplier value “à la Uzawa”. Our numerical results reveal that the random gain is widely dispersed around its expected value; in particular, low gain values have a relatively high probability of materializing. This is why, to put emphasis on these low values, we outline a second approach. We propose a so-called stochastic viability approach that focuses on jointly guaranteeing a minimum gain and a minimum storage level during the tourist season. We solve the corresponding problem by multiplicative dynamic programming. To conclude, we discuss and compare the two approaches.

Keywords

Stochastic optimal control · Chance constraints · Stochastic viability · Dynamic programming · Hydroelectric dam management · Energy management

1 Introduction

As a source of electricity, hydropower is an attractive asset: it emits no greenhouse gases and provides quickly dispatchable energy, cheap and substitutable for thermal generation. On the flip side, hydropower management has to deal with uncertain water inflows, uncertain electricity prices, and multiple uses (agriculture, tourism, flood prevention). This paper depicts the case of a single dam subject to conflicting economic and tourist demands, as faced by the Electricité de France company (the main French electricity provider).

We will explore two ways to reconcile, under uncertainty, the economic objective of maximizing the payoff with a tourist objective. In our case, the local authorities prescribe a so-called “contrainte de cote touristique” (tourism water level constraint) as a chance constraint: there must be enough water during the tourist season with high enough probability.

Chance-constrained programming was introduced more than fifty years ago by [5] and then widely developed by many authors (see e.g. [9, 16, 23, 24]). Its application to dam management problems can be found in [2, 4, 22, 23] and the references therein. Most of the above literature deals with chance constraints where the decision variables are static or open-loop, that is, solutions are deterministic vectors. By contrast, in this paper we focus on chance constraints in the framework of discrete-time stochastic optimal control: at each stage t, decision variables are “closed-loop” variables, that is, functions depending on the information available at t. To the best of our knowledge, few papers have addressed closed-loop stochastic dynamic optimization problems subject to probability constraints. Let us mention [4], in which the (joint) probability constraint is handled through a discretization of the control variables, modeled as functions of the trajectory of past noises (rather than as functions of the state variables): while effective, this approach does not easily accommodate a large number of time steps. In [21], an approach based on dynamic programming is proposed; however, the joint chance constraint is not treated as such, but is replaced in a conservative way by another constraint, so that the solution obtained is admissible, but possibly suboptimal, for the original problem.

In this paper, we analyze how to handle the conflicting objectives of maximizing the payoff from dam energy production and of satisfying a tourist objective, formulated as a chance constraint. In Sect. 2, we present the dam hydroelectric dynamics over a discrete-time span \(\{0,\ldots ,T\}\), and the economic objective. In Sect. 3, we aim to maximize the expectation of the economic gain while satisfying the tourist constraint: we formulate a so-called chance-constrained stochastic optimal control problem. To prepare a resolution by stochastic dynamic programming (SDP), we add a binary random variable to the storage level of the dam at time t to form an extended dynamic state. This new random variable makes it possible to represent the chance constraint as an expectation constraint involving the extended state at final time T. As formulated, the problem is amenable to SDP, but at the price of an infinite-dimensional state (see [6]). To overcome this obstacle, we present an original approach: after dualizing the expectation constraint, we apply additive dynamic programming for every fixed value of the multiplier, and we iteratively update the multiplier until numerical convergence is obtained. We provide numerical results for this method, based on a real-life example provided by Electricité de France. We observe that the random gain is noticeably dispersed around its expected value; in particular, low gain values have a relatively high probability of materializing. We focus on these low gains in Sect. 4. We propose a so-called stochastic viability approach (see [8, 10]) that symmetrizes the economic and the tourist stakes. More precisely, we aim to maximize the probability of jointly guaranteeing storage levels and gains. We propose another extended dynamic state making it possible to solve the problem by multiplicative dynamic programming. We provide numerical results on the same instance, together with a graphical description of the trade-offs between the economic and tourist objectives. To conclude, we discuss and compare the two approaches in Sect. 5.

2 Dam modeling

We present the dynamics of the dam, and the production model. Since the decisions we are looking for depend on the available information, most of the variables involved in the model are random variables, denoted by bold letters.

2.1 Dynamics of the dam

Let \( \{0,1,\ldots ,T\} \) be the (integer) time span, where \( T\in \mathbb {N}^{*} \), and let \( \left( \Omega , \, \mathcal {F}, \, \mathbb {P} \right) \) be the underlying probability space. For any time t in \( \{0,\ldots ,T\} \), we consider the following real valued random variables:
  • \( \mathbf {X}_{t} \), the water storage level in the dam at the beginning of period \([t, t+1[\),

  • \( \mathbf {U}_{t} \), the dam turbined outflow during \([t, t+1[\),

  • \( \mathbf {A}_{t} \) and \( \mathbf {C}_{t} \), the dam inflow and the electricity price during \([t, t+1[\).

We set \( \mathbf {W}_{t} = \left( \mathbf {A}_{t}, \, \mathbf {C}_{t} \right) \), call \( \mathbf {W}_{} = (\mathbf {W}_{t})_{t \in \{0, \, \ldots , \, T-1\}} \) the noise process and assume that \( \mathbf {W}_{0}, \, \ldots , \, \mathbf {W}_{T-1} \) are independent random variables. The independence assumption is of paramount importance in order to use Stochastic Dynamic Programming and obtain optimal closed-loop decision variables as feedbacks on the water storage \( \mathbf {X}_{t} \). This assumption can be relaxed when it is possible to identify a dynamics in the noise process, by incorporating this dynamics into the state variables, that is, by extending the state (see e.g. [17, 19] on this topic). Note that the \(\mathbf {W}_{t}\) need to be statistically independent, but that their distributions can depend upon time t. This makes it possible to handle part of a temporal dependency, such as seasonal effects (more inflow in winter, less in summer). Finally, we do not require that the inflow \( \mathbf {A}_{t} \) and the price \( \mathbf {C}_{t} \) be statistically independent, thus opening the possibility to take into account the customary correlation1 between these random variables.
Let \( \overline{x} \) denote the maximum water volume of the dam and \( x_0 \) its initial value. The dynamics of the storage level process \( \mathbf {X}_{} = (\mathbf {X}_{t})_{t\in \{0, \, \ldots , \, T\}} \) reads:
$$\begin{aligned} \mathbf {X}_{t+1} = f_{ t }^{ \mathbf {X}_{} }\left( { \mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t} } \right) = \min \left\{ \mathbf {X}_{t} + \mathbf {A}_{t} - \mathbf {U}_{t}, \, \overline{x} \right\} , \quad \forall t \in \{0, \ldots , T - 1 \}, \end{aligned}$$
(1)
with \(\mathbf {X}_{0} = x_0\). Equation (1) describes a typical dam reservoir storage dynamics which takes into account the possible overflow of the dam: if the forthcoming water volume \( \mathbf {X}_{t} + \mathbf {A}_{t} - \mathbf {U}_{t} \) is greater than \( \overline{x} \), then the dam water surplus \( \mathbf {X}_{t} + \mathbf {A}_{t} - \mathbf {U}_{t} - \overline{x} \) spills out.
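To fix ideas, here is a minimal Python sketch of the storage dynamics (1); the function name and arguments are ours, not part of the paper's formalism.

```python
def storage_dynamics(x_t, u_t, a_t, x_bar):
    """Storage dynamics (1): water balance capped at the dam capacity x_bar.

    Whatever exceeds x_bar spills out of the dam (overflow)."""
    return min(x_t + a_t - u_t, x_bar)
```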

2.2 Constraints on the control

The control \( \mathbf {U}_{t} \) is nonnegative and cannot exceed either the available water volume \( \mathbf {X}_{t} + \mathbf {A}_{t} \) or the maximum turbined capacity \(\overline{u}\), that is,
$$\begin{aligned} 0 \le \mathbf {U}_{t} \le \min \{\mathbf {X}_{t} + \mathbf {A}_{t}, \overline{u}\} , \quad \forall t \in \{0, \ldots , T - 1 \}. \end{aligned}$$
(2)
As we deal with decision-making in a dynamic and stochastic context, we require the control strategy \( \mathbf {U}_{} = (\mathbf {U}_{t})_{t \in \{0, \, \ldots , \, T-1\}} \) to be non-anticipative, that is,2
$$\begin{aligned} \mathbf {U}_{t} \; \; \text {is measurable} \; \;\text {w.r.t.}\; \; \sigma \left( \mathbf {W}_{0}, \, \ldots , \, \mathbf {W}_{t} \right) , \quad \forall t \in \{0, \ldots , T - 1 \}, \end{aligned}$$
(3)
where \( \sigma \left( \mathbf {W}_{0}, \, \ldots , \, \mathbf {W}_{t} \right) \) stands for the sigma-algebra generated by \( \left( \mathbf {W}_{0}, \, \ldots , \, \mathbf {W}_{t} \right) \). The measurability constraint (3) expresses the property that the control \(\mathbf {U}_{t}\) is a function of the available information at time t, namely the past noises \(\left( \mathbf {W}_{0}, \, \ldots , \, \mathbf {W}_{t} \right) \). More precisely here, it corresponds to the so-called Hazard-Decision framework: indeed, the control \(\mathbf {U}_{t}\) depends upon the past and the current realizations of the noise at time t.3

2.3 Dam production and valuation

The hydroelectric production gain \(\mathbf {G}\) is given by
$$\begin{aligned} \mathbf {G} = \sum _{t = 0}^{T-1} \left( \mathbf {C}_{t} \, \eta _{t} (\mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t}) - \epsilon \, {\mathbf {U}_{t}}^{2} \right) + v_f(\mathbf {X}_{T}), \end{aligned}$$
(4)
where
  • the electricity production \( \eta _{t} (\mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t}) \) at time t is sold at price \(\mathbf {C}_{t}\),

  • the quadratic term \(\epsilon \, {\mathbf {U}_{t}}^{2}\) is a (small) technical term introduced for differentiability purposes (more details are given in Remark 1),

  • the nonzero final value of water \( v_{f}(\mathbf {X}_{T}) \) prevents the dam reservoir from being empty at the end of the horizon.
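To make the model concrete, the following sketch accumulates the gain (4) along one noise scenario; the functions `eta` and `v_f` and the scenario data are placeholders to be supplied by the user, and all names are ours rather than the paper's.

```python
def gain_along_scenario(x0, controls, inflows, prices, eta, v_f, eps, x_bar):
    """Accumulate the gain (4) along one scenario of inflows and prices."""
    x, g = x0, 0.0
    for t, (u, a, c) in enumerate(zip(controls, inflows, prices)):
        g += c * eta(t, x, u, a) - eps * u ** 2  # revenue minus technical term
        x = min(x + a - u, x_bar)                # storage dynamics (1)
    return g + v_f(x)                            # final value of water v_f(X_T)
```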

3 Chance-constrained stochastic optimal management of a dam

The gain \( \mathbf {G} \) defined in Eq. (4) represents the economic stakes of the dam hydropower management. However, a dam is a facility that may serve several purposes. Here, water sports are possible during the tourist season, provided that a minimal reference water level \(x_{\text {ref}}\) is ensured.

3.1 Mathematical problem statement

Let us denote the tourist season period by a subset
$$\begin{aligned} \mathcal {T}\subset \{1, \, \ldots , \, T-1\}. \end{aligned}$$
(5)
We formulate a chance constraint
$$\begin{aligned} \mathbb {P}\left[ { \mathbf {X}_{\tau } \ge x_{\text {ref}}, \quad \forall \tau \in \mathcal {T} } \right] \ge p_\mathrm{{ref}}, \end{aligned}$$
(6)
which consists in ensuring a minimal reference storage level \(x_{\text {ref}}\) during the tourist season \(\mathcal {T}\) at a probability level \(p_\mathrm{{ref}}\in (0,1)\).
Now, we can formulate the dam management problem exposed in the introduction as one where we maximize the expected value \(\mathbb {E}\left[ { \mathbf {G} } \right] \) of the gain \( \mathbf {G} \) in (4) under three types of constraints:
  • bounds on the control: almost sure constraints (2),

  • non-anticipativity of the strategy: measurability constraints (3),

  • tourist requirements: chance constraint (6).

Summing up, we address the dam hydroelectric production management by the formulation4
$$\begin{aligned} \max _{\mathbf {X}_{}, \, \mathbf {U}_{} } \quad&\mathbb {E}\left[ { \sum _{t = 0}^{T-1} \left( \mathbf {C}_{t} \, \eta _{t} (\mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t}) - \epsilon \, {\mathbf {U}_{t}}^{2} \right) + v_f(\mathbf {X}_{T}) } \right] \end{aligned}$$
(7a)
$$\begin{aligned} \text {s.t.}\quad&\mathbf {U}_{t} \; \; \text {is measurable} \; \;\text {w.r.t.}\ \sigma (\mathbf {W}_{0}, \, \ldots , \, \mathbf {W}_{t}),\end{aligned}$$
(7b)
$$\begin{aligned}&0 \le \mathbf {U}_{t} \le \min \{\mathbf {X}_{t} + \mathbf {A}_{t}, \; \overline{u}\} , \quad \forall t \in \{0, \ldots , T - 1 \}, \end{aligned}$$
(7c)
$$\begin{aligned}&\mathbf {X}_{t+1} = f_{ t }^{ \mathbf {X}_{} }\left( { \mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t} } \right) , \quad \forall t \in \{0, \ldots , T - 1 \}, \mathbf {X}_{0} = x_0, \end{aligned}$$
(7d)
$$\begin{aligned}&\mathbb {P}\left[ { \mathbf {X}_{\tau } \ge x_{\text {ref}}, \quad \forall \tau \in \mathcal {T} } \right] \ge p_\mathrm{{ref}}. \end{aligned}$$
(7e)

3.2 Discussion on chance constraints with closed-loop decisions

The optimization problem (7) is a so-called chance constrained stochastic optimal control problem. Chance constrained optimization problems were introduced by [5] with an individual chance constraint and by [20] with a joint chance constraint.

Such problems raise theoretical and numerical difficulties: indeed, it is mathematically difficult to guarantee the connectedness, convexity or closedness of the feasible set induced by the chance constraint, although these properties play key roles in optimization. When solutions are sought in open-loop form, connectedness, convexity or closedness properties may be proven to hold under assumptions on the constraint structure and on the probability distributions of the random variables (see [1, 9, 14, 23, 24] and the references therein). However, even a very general continuity result such as [1, Theorem 2.3.3] cannot be extended in a straightforward manner to the closed-loop situation. As a matter of fact, in the open-loop situation, that is, when the control u lives in a standard vector space \(\mathbb {U}\) (e.g. \(\mathbb {R}^{n}\)), the control “does not move” with the randomness, whereas in the closed-loop situation, that is, when the control \(\mathbf {U}\) is a random variable, namely a function defined on \(\Omega \) and valued in \(\mathbb {U}\), the control and the noise live in two spaces of random variables defined on the same probability space, and thus both vary with the randomness. So, the usual proofs (designed for open-loop solutions) no longer work for closed-loop solutions.

However, we do not need to dwell on such issues because
  • on the one hand, we will use stochastic dynamic programming, a method agnostic to whether variables are continuous, discrete or both, whether constraints define a convex domain or not, whether cost functions are convex or not, etc.,

  • on the other hand, we will use duality only to obtain bounds, without relying on connectedness, convexity or closedness of the feasible set.

3.3 Reformulation with an additional binary process

Problem (7) happens to be amenable to SDP, but at the price of an infinite-dimensional state. More precisely, using the probability distribution of \(\mathbf {X}_{t}\), rather than the value of \(\mathbf {X}_{t}\), as the state variable, DP can be applied in a straightforward manner to problem (7) (see [6] for further details). But such an approach involving an infinite-dimensional state is usually intractable, and we have to find an alternative solution. Our very first concern being to design practical algorithms that produce closed-loop solutions, we now present an original approach to overcome this dimensionality obstacle. As a first step, we develop an equivalent formulation of (7) that involves an additional binary random process.

To prepare a resolution of the optimization problem (7) by stochastic dynamic programming, we introduce the binary valued random process \( \varvec{\pi }_{} = \left( \varvec{\pi }_{t} \right) _{t \in \{0, \, \ldots , \, T\}} \), that follows the dynamics:
$$\begin{aligned} \varvec{\pi }_{t + 1} = f_{ t }^{ \varvec{\pi }_{} }\left( { \mathbf {X}_{t}, \, \varvec{\pi }_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t} } \right) = \left\{ \begin{array}{ll} \mathbf {1}_{\left\{ \mathbf {X}_{t+1} \ge x_{\text {ref}}\right\} } \, \varvec{\pi }_{t} &{} \text {if } t \in \mathcal {T}, \\ \varvec{\pi }_{t} &{} \text {otherwise}, \end{array} \right. \quad \forall t \in \{0, \ldots , T - 1 \}, \end{aligned}$$
(8)
with \(\varvec{\pi }_{0} = 1\), and where \(\mathbf {1}_{K}\) denotes the indicator function of a set K. Then, the chance constraint (6) can be written as
$$\begin{aligned} \mathbb {P}\left[ { \mathbf {X}_{\tau } \ge x_{\text {ref}}, \; \forall \tau \in \mathcal {T} } \right] = \mathbb {E}\left[ { \mathbf {1}_{ \left\{ \mathbf {X}_{\tau } \ge x_{\text {ref}}, \; \forall \tau \in \mathcal {T} \right\} } } \right] = \mathbb {E}\left[ { \prod _{\tau \in \mathcal {T}}\mathbf {1}_{ \left\{ \mathbf {X}_{\tau } \ge x_{\text {ref}}\right\} } } \right] = \mathbb {E}\left[ { \varvec{\pi }_{T} } \right] . \end{aligned}$$
Contrary to [21], we do not approximate the chance constraint (6) by a conservative one that is easier to handle; rather, we reformulate the chance constraint (6) with a new state variable. Now, the optimization problem (7) reads
$$\begin{aligned} \max _{\mathbf {X}_{}, \, \varvec{\pi }_{}, \, \mathbf {U}_{} } \quad&\mathbb {E}\left[ { \sum _{t = 0}^{T-1} \left( \mathbf {C}_{t} \, \eta _{t} (\mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t}) - \epsilon \, {\mathbf {U}_{t}}^{2} \right) + v_f(\mathbf {X}_{T}) } \right] \end{aligned}$$
(9a)
$$\begin{aligned} \text {s.t.}\quad&\mathbf {U}_{t} \; \; \text {is measurable} \; \;\text {w.r.t.}\ \sigma (\mathbf {W}_{0}, \, \ldots , \, \mathbf {W}_{t}), \end{aligned}$$
(9b)
$$\begin{aligned}&0 \le \mathbf {U}_{t} \le \min \{\mathbf {X}_{t} + \mathbf {A}_{t}, \; \overline{u}\}, \quad \forall t \in \{0, \ldots , T - 1 \}, \end{aligned}$$
(9c)
$$\begin{aligned}&\mathbf {X}_{t+1} = f_{ t }^{ \mathbf {X}_{} }\left( { \mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t} } \right) , \quad \forall t \in \{0, \ldots , T - 1 \}, \mathbf {X}_{0} = x_0, \end{aligned}$$
(9d)
$$\begin{aligned}&\varvec{\pi }_{t+1} = f_{ t }^{ \varvec{\pi }_{} }\left( { \mathbf {X}_{t}, \, \varvec{\pi }_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t} } \right) , \quad \forall t \in \{0, \ldots , T - 1 \}, \varvec{\pi }_{0} = 1, \end{aligned}$$
(9e)
$$\begin{aligned}&\mathbb {E}\left[ { \varvec{\pi }_{T} } \right] \ge p_\mathrm{{ref}}. \end{aligned}$$
(9f)
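For concreteness, a minimal Python sketch of the binary dynamics (9e), following (8) as written (the indicator bears on the next storage level); the season is encoded as the set of time indices in \(\mathcal {T}\), and the names are ours.

```python
def pi_dynamics(t, x_next, pi_t, season, x_ref):
    """Binary dynamics (8): pi drops to 0 as soon as the tourist level x_ref
    is violated during the season, and then stays at 0."""
    if t in season:
        return pi_t * (1 if x_next >= x_ref else 0)
    return pi_t
```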

3.4 Theoretical analysis

Here, we present estimates for the value of the optimization problem (9). For this purpose, we introduce three objects:
  • the criterion
    $$\begin{aligned} J({\mathbf {X}_{}}, {\varvec{\pi }_{}}, {\mathbf {U}_{}}) = \mathbb {E}\left[ { \mathbf {G} } \right] = \mathbb {E}\left[ { \sum _{t = 0}^{T-1} \left( \mathbf {C}_{t} \, \eta _{t} (\mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t}) - \epsilon \, {\mathbf {U}_{t}}^{2} \right) + v_f(\mathbf {X}_{T}) } \right] , \end{aligned}$$
    (10)
  • the value \(J^{\sharp }\) of the optimization problem (9)
    $$\begin{aligned} J^{\sharp } = \mathbb {E}\left[ { \mathbf {G}^{\sharp } } \right] = \max _{\mathbf {X}_{}, \varvec{\pi }_{}, \mathbf {U}_{} } J({\mathbf {X}_{}}, {\varvec{\pi }_{}}, {\mathbf {U}_{}}) \text { s.t. }(9\mathrm{b}){-}(9\mathrm{c}){-}(9\mathrm{d}){-}(9\mathrm{e}){-}(9\mathrm{f}), \end{aligned}$$
    (11)
  • and the dual function
    $$\begin{aligned} D(\lambda ) = \max _{\mathbf {X}_{}, \, \varvec{\pi }_{}, \, \mathbf {U}_{} } \;&J({\mathbf {X}_{}}, {\varvec{\pi }_{}}, {\mathbf {U}_{}}) + \lambda \mathbb {E}\left[ { \varvec{\pi }_{T} - p_\mathrm{{ref}} } \right] \nonumber \\ \text {s.t.} \quad&(9\mathrm{b}){-}(9\mathrm{c}){-}(9\mathrm{d}){-}(9\mathrm{e}). \end{aligned}$$
    (12)
The following proposition provides an estimate of the gap between the optimal gain \(J^{\sharp }\) and a solution obtained by computing the dual function (12) for a certain multiplier \(\lambda ^{\star }\).

Proposition 1

Let \(\lambda ^{\star } \ge 0\) and \((\mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star })\) be such that
  1. \((\mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star })\) is solution of
    $$\begin{aligned} \max _{\mathbf {X}_{}, \, \varvec{\pi }_{}, \, \mathbf {U}_{} } \;&\mathbb {E}\left[ { \sum _{t = 0}^{T-1} \left( \mathbf {C}_{t} \, \eta _{t} (\mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t}) - \epsilon \, {\mathbf {U}_{t}}^{2} \right) + v_f(\mathbf {X}_{T}) + \lambda ^{\star } \left( \varvec{\pi }_{T} - p_\mathrm{{ref}}\right) } \right] \nonumber \\ \text {s.t.} \quad&(9\mathrm{b}){-}(9\mathrm{c}){-}(9\mathrm{d}){-}(9\mathrm{e}), \end{aligned}$$
    (13a)
  2. \(\varvec{\pi }_{T}^{\star }\) satisfies the chance constraint (9f), that is,
    $$\begin{aligned} \mathbb {E}\left[ { \varvec{\pi }_{T}^{\star } } \right] \ge p_\mathrm{{ref}}. \end{aligned}$$
    (13b)
Then, we have the estimate
$$\begin{aligned} 0 \le J^{\sharp } - J(\mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star }) \le \lambda ^{\star } \left( \mathbb {E}\left[ { \varvec{\pi }_{T}^{\star } } \right] - p_\mathrm{{ref}}\right) . \end{aligned}$$
(14)

Proof

First, we prove the left-hand side of (14). Since \((\mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star })\) satisfies (9b)–(9c)–(9d)–(9e) by the first assumption, and satisfies the chance constraint (9f) by the second assumption, it is a feasible solution of problem (9), so that
$$\begin{aligned} J(\mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star }) \le J^{\sharp }. \end{aligned}$$
Second, we prove the right-hand side of (14). We have
$$\begin{aligned}&J(\mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star }) + \lambda ^{\star } \left( \mathbb {E}\left[ { \varvec{\pi }_{T}^{\star } } \right] - p_\mathrm{{ref}}\right) \\&\quad = \max _{\mathbf {X}_{}, \varvec{\pi }_{}, \mathbf {U}_{} } J({\mathbf {X}_{}}, {\varvec{\pi }_{}}, {\mathbf {U}_{}}) + \lambda ^{\star } \left( \mathbb {E}\left[ { \varvec{\pi }_{T} } \right] - p_\mathrm{{ref}}\right) \text { s.t. } (9\mathrm{b}){-}(9\mathrm{c}){-}(9\mathrm{d}){-}(9\mathrm{e}), \end{aligned}$$
by the first assumption,
$$\begin{aligned} \ge \max _{\mathbf {X}_{}, \varvec{\pi }_{}, \mathbf {U}_{} } J({\mathbf {X}_{}}, {\varvec{\pi }_{}}, {\mathbf {U}_{}}) + \lambda ^{\star } \left( \mathbb {E}\left[ { \varvec{\pi }_{T} } \right] - p_\mathrm{{ref}}\right) \text { s.t. } (9\mathrm{b}){-}(9\mathrm{c}){-}(9\mathrm{d}){-}(9\mathrm{e}){-}(9\mathrm {f}), \end{aligned}$$
by adding the chance constraint (9f), hence reducing the feasible set,
$$\begin{aligned} \ge \max _{\mathbf {X}_{}, \varvec{\pi }_{}, \mathbf {U}_{} } J({\mathbf {X}_{}}, {\varvec{\pi }_{}}, {\mathbf {U}_{}}) \text { s.t. } (9\mathrm{b}){-}(9\mathrm{c}){-}(9\mathrm{d}){-}(9\mathrm{e}){-}(9\mathrm {f}), \end{aligned}$$
since \(\lambda ^{\star } \left( \mathbb {E}\left[ { \varvec{\pi }_{T} } \right] - p_\mathrm{{ref}}\right) \ge 0 \) by \(\lambda ^{\star }\ge 0\) and by (9f),
$$\begin{aligned} = J^{\sharp }. \end{aligned}$$
This ends the proof. \(\square \)

Equation (14) is reminiscent of the marginalist interpretation of multipliers. It allows us to control the error on the optimal gain. In addition, when equality holds in the chance constraint (9f), \((\mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star })\) is an optimal solution of the optimization problem (9): hence, we recover an application of Everett’s theorem (see [13]).

3.5 Iterative algorithm and numerical convergence

Proposition 1 will prove useful in an algorithmic context. To find a multiplier value \( \lambda ^{\star } \) such that a solution \( \mathbf {U}_{}^{\star } \) to the optimization problem (13a) satisfies the chance constraint (9f), we appeal to a gradient-like algorithm involving the dual variable \(\lambda \). This algorithm “à la Uzawa” (see [12] for the origin of the terminology) is illustrated in Fig. 1. It alternates two steps for each iteration k:
  • the maximization (13a) that leads, for the given multiplier value \(\lambda ^{(k)}\), to the computation of the strategy \(\mathbf {U}_{}^{(k+1)}\),

  • the update of the multiplier value \(\lambda ^{(k)}\) to \(\lambda ^{(k+1)}\) by a gradient step method,5

until \(\left( \lambda ^{(k)}, \, \mathbf {U}_{}^{(k)} \right) \) possibly converges to \(\left( \lambda ^{\star }, \, \mathbf {U}_{}^{\star } \right) \), or at least till enough precision is achieved thanks to the estimate (14).
Fig. 1

Algorithm “à la Uzawa”

Proving the convergence of the algorithm in Fig. 1, for instance by applying general results on the convergence of the Uzawa method (see e.g. [12]), raises delicate issues. Indeed, as mentioned in Sect. 3.2, there is hardly any functional property (connectedness, convexity, etc.) that can be proved to hold for the chance constraint (6). However, we shall content ourselves with approximate convergence, with an estimate given by Proposition 1.

In Sect. 3.5.1, we detail the maximization (13a) by additive dynamic programming and, in Sect. 3.5.2, we detail the multiplier update by a gradient step method. Finally, we discuss approximate convergence in Sect. 3.5.3.

3.5.1 Primal maximization by additive dynamic programming

The criterion of the optimization problem (13a) is additive with respect to time and the random variables \((\mathbf {W}_{t})_{t \in \{0, \, \ldots , \, T-1\}} \) are independent. Therefore, we can obtain the solution of (13a), where \( \lambda ^{\star }= \lambda ^{(k)} \) is a deterministic scalar, by additive dynamic programming with state \( (\mathbf {X}_{t}, \, \varvec{\pi }_{t}) \). This amounts to solving the following backward induction equation, for all \( \left( x, \, \pi \right) \in [0,\, \overline{x}] \times \{0, \, 1\} \):
$$\begin{aligned} V_{T}^{(k+1)}(x,\pi )&= \lambda ^{(k)} \left( \pi - p_\mathrm{{ref}} \right) + v_f(x), \nonumber \\ V_{t}^{(k+1)}(x,\pi )&= \mathbb {E}\left[ { \max \limits _{u \in \mathfrak {U}\left( x,\mathbf {A}_{t} \right) } \; \mathbf {C}_{t} \, \eta _{t} (x, u, \mathbf {A}_{t}) - \epsilon \, {u}^{2} + V_{t+1}^{(k+1)}\left( \mathbf {X}_{t+1},\varvec{\pi }_{t+1} \right) } \right] , \end{aligned}$$
(15)
where the set \(\mathfrak {U}\left( x, \, a \right) \) is defined by \(\mathfrak {U} \left( x, \, a \right) = \left\{ u \in \mathbb {R} \; \vert \; 0 \le u \le \min \{x + a, \; \overline{u}\} \right\} \) and where we use the notations \(\mathbf {X}_{t+1}=f_{ t }^{ \mathbf {X}_{} }\left( { x, \, u, \, \mathbf {A}_{t} } \right) \) and \(\varvec{\pi }_{t+1}=f_{ t }^{ \varvec{\pi }_{} }\left( { x, \, \pi , \, u, \, \mathbf {A}_{t} } \right) \). Observe that, although the dynamic state is two-dimensional, solving equations (15) is only twice as complicated as solving one-dimensional dynamic programming equations, thanks to the fact that \( \varvec{\pi }_{} \) is a binary valued process.
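The following is a hedged Python sketch of the backward induction (15) on discretized grids, for a fixed multiplier. The NumPy grids `xs` (storage) and `us` (control), the per-stage lists `noise[t]` of equiprobable pairs \((a, c)\), the model functions and the nearest-neighbor interpolation are our assumptions standing in for the instance of Sect. 3.6, not the authors' implementation.

```python
import numpy as np

def additive_dp(lam, xs, us, noise, eta, v_f, eps, x_bar, x_ref, season, p_ref, T):
    """Backward induction (15) for a fixed multiplier lam (a sketch)."""
    V = np.empty((T + 1, len(xs), 2))            # state: (storage grid index, pi)
    for i, x in enumerate(xs):
        for pi in (0, 1):
            V[T, i, pi] = lam * (pi - p_ref) + v_f(x)
    for t in reversed(range(T)):
        for i, x in enumerate(xs):
            for pi in (0, 1):
                total = 0.0
                for a, c in noise[t]:            # expectation over equiprobable (A_t, C_t)
                    best = -np.inf
                    for u in us:                 # hazard-decision: max inside expectation
                        if u > min(x + a, us[-1]):
                            continue             # bound constraint (2)
                        x1 = min(x + a - u, x_bar)
                        pi1 = pi * (x1 >= x_ref) if t in season else pi
                        i1 = int(np.abs(xs - x1).argmin())   # nearest grid point
                        best = max(best, c * eta(t, x, u, a) - eps * u ** 2
                                   + V[t + 1, i1, int(pi1)])
                    total += best
                V[t, i, pi] = total / len(noise[t])
    return V
```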

3.5.2 Multiplier update by a gradient step method

Let \(p^{(k+1)}\) stand for the probability to respect the tourist constraint (9f) when the optimal control obtained at Sect. 3.5.1 is used:
$$\begin{aligned} p^{(k+1)} = \mathbb {E}\left[ { \varvec{\pi }_{T}^{(k+1)} } \right] \, . \end{aligned}$$
(16)
In the algorithm “à la Uzawa” described in Fig. 1, we need to compute \(p^{(k+1)}\) to update the multiplier \( \lambda ^{(k)} \) to \( \lambda ^{(k+1)} \) by the gradient step
$$\begin{aligned} \lambda ^{(k+1)} = \max \left\{ \lambda ^{(k)} + \rho \left( p_\mathrm{{ref}}- p^{(k+1)} \right) , \, 0\right\} , \; \;\rho > 0 \; \;\text {being a given step size}. \end{aligned}$$
(17)
Knowing the strategy \( \mathbf {U}_{}^{(k+1)} \) obtained by solving the inner maximization problem (15), we can compute \(p^{(k+1)}\) by solving the following (non controlled) dynamic programming equation:
$$\begin{aligned}&P_{T}^{(k+1)}(x, \, \pi ) = \pi , \\&P_{t}^{(k+1)}(x, \, \pi ) = \mathbb {E}\left[ { P_{t + 1}^{(k+1)}\left( f_{ t }^{ \mathbf {X}_{} }\left( { x, \, \mathbf {U}_{t}^{(k+1)}, \, \mathbf {A}_{t} } \right) , \, f_{ t }^{ \varvec{\pi }_{} }\left( { x, \, \pi , \, \mathbf {U}_{t}^{(k+1)}, \, \mathbf {A}_{t} } \right) \right) } \right] , \end{aligned}$$
because the value \(P_{0}^{(k+1)}(x_{0}, \, \pi _{0})\) is equal to \(\mathbb {E} \big [ \varvec{\pi }_{T}^{(k+1)} \big ]\) by construction. Note that the computation of \(p^{(k+1)}\) is exact and does not require approximations, such as those associated with a Monte Carlo method. In doing so, we avoid the stability difficulties we could have encountered with approximations ([11, 15, 16, 18]).
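Putting the two steps together, a schematic version of the iteration of Fig. 1 could read as follows; `solve_inner` stands for the dynamic programming resolution of (15) and `constraint_probability` for the exact computation of \(p^{(k+1)}\) above (both are placeholders for the computations just described).

```python
def uzawa(lam0, rho, p_ref, solve_inner, constraint_probability, n_iter=800):
    """Iteration "a la Uzawa": alternate inner maximization and update (17)."""
    lam, history = lam0, []
    for _ in range(n_iter):
        policy = solve_inner(lam)                # maximization (13a) for fixed lam
        p = constraint_probability(policy)       # p^(k+1) = E[pi_T], Eq. (16)
        history.append((lam, p, policy))
        lam = max(lam + rho * (p_ref - p), 0.0)  # projected gradient step (17)
    return history
```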

Remark 1

The multipliers are updated by a gradient method essentially for the sake of simplicity. In fact, a more sophisticated update scheme (subgradient, cutting plane, bundle methods) could be used, because the dual function (12) might be only subdifferentiable. Note however that we added the technical term \(- \epsilon \, {\mathbf {U}_{t}}^{2}\) to the hydroelectric production gain \(\mathbf {G}\) (see Sect. 2.3) in order to reinforce the strong concavity of the gain function, and hence the differentiability of the dual function. As the dual problem consists in minimizing a function defined on the real line, another possibility would be to use a dichotomy method (see [21] for such an application).

3.5.3 Numerical convergence

When implementing the algorithm, at least two questions arise. First, as theoretical properties such as convexity cannot be established, there may be an irreducible duality gap between the optimal gain \(J^{\sharp }\) and the values obtained for the dual function. Second, as the random variables involved in the problem modeling (inflows and prices) are discrete, the quantity \(\mathbb {E} \big [ \varvec{\pi }_{T} \big ]\) takes a finite number of possible values; therefore, any probability level \(p_\mathrm{{ref}}\) that is not among these values can never be attained exactly.

In the light of these findings, we seek an admissible solution that is as good as possible. The multiplier update makes this possible since the gradient step increases both the multiplier value and the probability level, up to an iteration k of the algorithm at which the probability level \(p^{(k)}\) obtained by (16) is greater than the required level \(p_\mathrm{{ref}}.\)6 Then, a straightforward application of Proposition 1 (in the form of Everett’s theorem with saturation of the chance constraint) shows that the solution of the inner maximization problem (15) leading to \(p^{(k)}\) is also a solution of problem (9) when replacing constraint (9f) by \(\mathbb {E} \big [\varvec{\pi }_{T}\big ]\ge p_\mathrm{{ref}}+ \epsilon ^{(k)}\), where \(\epsilon ^{(k)}= p^{(k)}-p_\mathrm{{ref}}>0\). In this way, an admissible solution of problem (7) is exhibited, if one exists. Since the probability level \(p^{(k)}\) is above its prescribed value \(p_\mathrm{{ref}}\), the gradient step method then makes it decrease to a value just below \(p_\mathrm{{ref}}\). In this case, the solution of the inner maximization problem (15) leading to \(p^{(k+l)}\) is also a solution of problem (9) when replacing constraint (9f) by \(\mathbb {E} [\varvec{\pi }_{T}]\ge p_\mathrm{{ref}}- \epsilon ^{(k+l)}\), where \(\epsilon ^{(k+l)}=p_\mathrm{{ref}}-p^{(k+l)}>0\). And so on, resulting in a cyclic behavior that can be observed in Figs. 4, 5 and 6.

The algorithm can be stopped when the cycling is identified, which is easy to detect since the dual minimization in \(\lambda \) corresponds to a one-dimensional optimization problem. Then, the best iteration within the cycle is selected, that is, the iteration corresponding to a solution \((\mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star })\) such that \(\mathbb {E} [\varvec{\pi }_{T}^{\star }] \ge p_\mathrm{{ref}}\) and with the lowest possible gap \(\lambda ^{\star } ( \mathbb {E}\left[ { \varvec{\pi }_{T}^{\star } } \right] -p_\mathrm{{ref}})\) given by estimate (14), as sketched below.
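In code, this selection might look like the following sketch, reusing the hypothetical `history` list produced by the loop above; it assumes at least one feasible iterate exists (otherwise the constraint is infeasible, see footnote 6).

```python
def best_feasible_iterate(history, p_ref):
    """Keep the feasible iterates (p >= p_ref) and return the one with the
    smallest duality-gap bound lam * (p - p_ref) from estimate (14)."""
    feasible = [item for item in history if item[1] >= p_ref]
    return min(feasible, key=lambda item: item[0] * (item[1] - p_ref))
```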

3.6 Numerical experiment

We now solve the optimization problem (7) (or (9)) for a specific numerical instance. The instance is based on a real case given by Electricité de France, the main French electricity provider. We graphically display the almost-optimal solution, given by the algorithm developed in Sect. 3.5, and exhibit the variability of the corresponding almost-optimal gain.

3.6.1 Numerical instance

We consider a dam management problem with the following features:
  • maximum capacity of the dam reservoir: \( \overline{x} = 80~\mathrm{hm^3} \),

  • time horizon: \( T = 11 \) (11 time steps),

  • tourist reference storage level: \( x_{\text {ref}}= 50~\mathrm{hm^3} \),

  • tourist season: \(\mathcal {T}= \{7, \, 8\}\), i.e., the months of July and August,

  • tourist reference probability level: \( p_\mathrm{{ref}}= 0.9 \),

  • maximum water volume which can be turbined: \( \overline{u} = 40~\mathrm{hm^3} \),

  • electricity production function (in (4)): \( \eta _{t} (\mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t}) = 66 \times \mathbf {U}_{t} \),

  • final value of water (in (4)): \( v_{f}(\mathbf {X}_{T}) = 500 \times {\max \{\mathbf {X}_{T}-x_{0}, \, 0\}}^{2} \),

  • technical term in the cost (in (4)): \( {\mathbf {U}_{t}}^{2} \), that is, \( \epsilon = 1 \).

The units are given in \(\mathrm{hm^3}\) for the inflows and in € \(\text {per} \; \mathrm{hm^{3}} \) for the electricity prices. For each time t, the noise random variables \(\mathbf {C}_{t}\) and \(\mathbf {A}_{t}\) are supposed to be independent7 and to take equiprobable values in discrete sets \(\mathbb {C}_{t}\) and \(\mathbb {A}_{t}\) respectively, each containing a few tens of values. The sets \(\mathbb {C}_{t}\) and \(\mathbb {A}_{t}\) explicitly depend on time t to account for seasonality effects. Representative scenarios of the noise are displayed in Fig. 2 for the inflows and in Fig. 3 for the electricity prices.
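A hypothetical stand-in for such time-dependent equiprobable noise supports could be built as follows; the numerical ranges are illustrative placeholders only and do not come from the EDF data.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 11
# noise[t] lists equiprobable (A_t, C_t) pairs; A_t and C_t are drawn
# independently, with supports of a few tens of values each.
noise = [[(a, c)
          for a in rng.uniform(0.0, 20.0, 20)
          for c in rng.uniform(20.0, 80.0, 20)]
         for _ in range(T)]
```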
Fig. 2

Sample of ten inflow scenarios (in \(\mathrm{hm^3}\), over the 11 time steps)

Fig. 3

Sample of ten price scenarios (in € per \(\mathrm{hm^3}\), over the 11 time steps)

3.6.2 Implementation of the algorithm described in Sect. 3.5

According to Sect. 3.6.1, the dynamic state \( (\mathbf {X}_{t}, \varvec{\pi }_{t}) \) is such that \( \varvec{\pi }_{t} \) is a binary variable and that \(\mathbf {X}_{t}\) takes its values in \( \mathbb {X} = [0, \, 80] \). Moreover, the control variable \( \mathbf {U}_{t} \) takes its values in \( \mathbb {U} = [0, \, 40] \). To solve the dynamic programming equations (15), the continuous spaces \( \mathbb {X} \) and \( \mathbb {U} \) are discretized by regular grids with \( 2~\mathrm{hm^{3}} \) steps.8 Thus, \( \mathbb {X} \) is reduced to a set of 40 points and \( \mathbb {U} \) is reduced to a set of 20 points. Regarding the implementation of the gradient step method, we set \( \rho = 3000 \) in (17).

3.6.3 Numerical results

Numerical convergence We ran the algorithm described in Sect. 3.5 on an Intel i7-based personal computer. Convergence is obtained in less than 800 iterations (100 s), and the corresponding behavior of the algorithm is represented in Figs. 4, 5 and 6. These figures depict the evolution of the tourist probability level, of the value of the dual function (12), and of the multiplier values along the algorithm iterations. They include a zoom on the last 100 iterations, in order to observe the cyclic phenomenon that characterizes the numerical convergence, as discussed in Sect. 3.5.3.
Fig. 4

Evolution of the probability level \( p^{(k)} \) at iteration k, for k from 1 to 800, with a zoom on the last 100 iterations

Fig. 5

Evolution of the dual function at iteration k, for k from 1 to 800, with a zoom on the last 100 iterations

Fig. 6

Evolution of the multiplier value \( \lambda ^{(k)} \) at iteration k, for k from 1 to 800, with a zoom on the last 100 iterations

Quality assessment of the solution Using the stopping criterion defined in Sect. 3.5.3, the algorithm converges numerically to an approximate solution \( \left( \mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star }, \lambda ^{\star } \right) \) that satisfies the chance constraint, \(\mathbb {E}\left[ { \varvec{\pi }_{T}^{\star } } \right] \ge p_\mathrm{{ref}}\). Thanks to Proposition 1 and estimate (14), we obtain that the gain attached to the feasible solution \(\left( \mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star }\right) \) is equal to the optimal gain \(J^{\sharp }\) up to a relative precision of less than \(0.01~\%\).

Thus, we refer to the approximate solution \( \left( \mathbf {X}_{}^{\star }, \varvec{\pi }_{}^{\star }, \mathbf {U}_{}^{\star }, \lambda ^{\star } \right) \) as the chance-constrained quasi-optimal solution.

Remark 2

We also performed numerical experiments showing that the optimal solution does not change much when the discretization employed in the DP algorithm is refined. More precisely, refining the discretization of both the state and the control, namely using \(1~\mathrm{hm^{3}}\) steps rather than \(2~\mathrm{hm^{3}}\) steps, leads to small variations in the results: the associated value of the gain \(\mathbb {E}\left[ { \mathbf {G}^{\star } } \right] \) displays a relative variation of \(0.052~\%\) with respect to the coarser discretization. By contrast, the CPU time is multiplied by four when using this finer mesh.
Chance-constrained quasi-optimal storage level trajectories To depict the storage level process \( \mathbf {X}_{}^{\star } \) and the probability distribution of the gain \(\mathbf {G}^{\star }\), we draw a sample of 1,000,000 realizations of the noise process \( \left( \mathbf {A}_{t}, \, \mathbf {C}_{t} \right) _{t \in \{0, \, \ldots , \, T-1\}} \). We then apply the control strategy \( \mathbf {U}_{}^{\star } \) to each of these noise trajectories and obtain the associated storage level trajectory and gain value by means of Monte Carlo approximations. By contrast, notice that the algorithm outputs given in the above paragraph are exact up to numerical precision.
Fig. 7

Realizations of the storage level process \( \mathbf {X}_{}^{\star } \) (100 realizations); the dotted lines stand for the realizations that do not respect the tourist level \( x_{\text {ref}}\) during the tourist season \( \mathcal {T}\)

For this 1,000,000-realization sample, we observe that 891,875 trajectories satisfy the tourism constraint, leading to a tourist probability level of about 0.892 (the Monte Carlo approximated value can be above or below 0.9, depending on the sample). This can also be seen in Fig. 7, which represents 100 (among 1,000,000) trajectories of the dam storage level: the tourist storage level \(x_{\text {ref}}\) is respected during the tourist season \(\mathcal {T}\) for 90 % of the trajectories.
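A sketch of this Monte Carlo evaluation, under the same naming assumptions as in our earlier sketches (`policy` is the hazard-decision feedback and `sample_noise` draws one pair \((a_t, c_t)\), both placeholders):

```python
def tourist_probability(policy, sample_noise, x0, x_bar, x_ref, season, T,
                        n=1_000_000):
    """Fraction of simulated trajectories keeping X_tau >= x_ref in season."""
    ok = 0
    for _ in range(n):
        x, respected = x0, True
        for t in range(T):
            a, c = sample_noise(t)
            u = policy(t, x, a, c)       # hazard-decision feedback control
            x = min(x + a - u, x_bar)    # storage dynamics (1)
            if (t + 1) in season and x < x_ref:
                respected = False
        ok += respected
    return ok / n
```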

Empirical probability distribution of the chance-constrained quasi-optimal gain. Figure 8 represents the empirical probability distribution of the gain associated with the sample. We observe that the deviation of the random variable \( \mathbf {G}^{\star } \) from its expected value is substantial: the standard deviation is about 40 % of \( \mathbb {E}\left[ { \mathbf {G}^{\star } } \right] \). Such a property might disappoint a dam manager who would expect to observe a gain rather close to the theoretical mean. In particular, losses can be substantial with non-negligible probability. This is why we highlight the left tail of the gain distribution in the next section.
Fig. 8

Empirical probability distribution of the gain (1,000,000 realizations with a 3500 € discretization step), the mean gain value is in the white box

4 Stochastic viability approach to the dam management

In this section, we put emphasis on the low gain realizations that we detected in Sect. 3.6.3. We propose a so-called stochastic viability approach ([8, 10]) that symmetrizes the economic and the tourist stakes by maximizing the probability to jointly guarantee storage levels and gains.

4.1 Description of the approach

In addition to the storage threshold \(x_{\text {ref}}\), we introduce a threshold \(g_{\text {ref}}\) for guaranteed gain, and we now aim to maximize the following so-called viability probability:
$$\begin{aligned} \mathbb {P}\left[ { \mathbf {G} \ge g_{\text {ref}}\; \; \text {and} \; \;\mathbf {X}_{\tau } \ge x_{\text {ref}}, \quad \forall \tau \in \mathcal {T} } \right] . \end{aligned}$$
(20)
This way, we address the management of the dam by symmetrizing the economic and the tourist stakes whereas, in Sect. 3, the former lay in the criterion \( \mathbb {E}\left[ { \mathbf {G} } \right] \) to be maximized and the latter in the chance constraint \( \mathbb {P}\left[ { \mathbf {X}_{\tau } \ge x_{\text {ref}}, \quad \forall \tau \in \mathcal {T} } \right] \ge p_\mathrm{{ref}}\).
We now consider the optimization problem
$$\begin{aligned} \max _{ \mathbf {X}_{}, \, \mathbf {U}_{}} \quad&\mathbb {P}\left[ { \mathbf {G} \ge g_{\text {ref}}\; \; \text {and} \; \;\mathbf {X}_{\tau } \ge x_{\text {ref}}, \quad \forall \tau \in \mathcal {T} } \right] \end{aligned}$$
(21a)
$$\begin{aligned} \text {s.t.} \quad&\mathbf {U}_{t} \; \; \text {is measurable} \; \;\text {w.r.t.}\ \sigma (\mathbf {W}_{0}, \, \ldots , \, \mathbf {W}_{t}), \end{aligned}$$
(21b)
$$\begin{aligned}&0 \le \mathbf {U}_{t} \le \min \{\mathbf {X}_{t} + \mathbf {A}_{t}, \; \overline{u}\} \; \;\forall t \in \{0, \ldots , T - 1 \}, \end{aligned}$$
(21c)
$$\begin{aligned}&\mathbf {X}_{t+1} = f_{ t }^{ \mathbf {X}_{} }\left( { \mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t} } \right) , \, \forall t \in \{0, \ldots , T - 1 \}, \mathbf {X}_{0} = x_{0}. \end{aligned}$$
(21d)

The optimal value (21a) is called the maximal viability probability. In Sect. 4.2, we show how we can solve the optimization problem (21) by multiplicative stochastic dynamic programming.

4.2 Solving the stochastic viability problem by dynamic programming

To prepare a resolution of the optimization problem (21) by stochastic dynamic programming, we represent the dynamics of gain accumulation by introducing a new real valued process \(\mathbf {S}_{}\) given by
$$\begin{aligned} \left\{ \begin{array}{rl} \mathbf {S}_{0} &= 0, \\ \mathbf {S}_{t+1} &= f_{ t }^{ \mathbf {S}_{} }\left( { \mathbf {X}_{t},\,\mathbf {S}_{t},\,\mathbf {U}_{t},\,\mathbf {W}_{t} } \right) = \mathbf {C}_{t}\,\eta _{t}(\mathbf {X}_{t},\,\mathbf {U}_{t},\,\mathbf {A}_{t}) - \epsilon \, {\mathbf {U}_{t}}^{2} + \mathbf {S}_{t}, \quad \forall t \in \{0, \ldots , T - 2 \}, \\ \mathbf {S}_{T} &= f_{ T-1 }^{ \mathbf {S}_{} }\left( { \mathbf {X}_{T-1},\,\mathbf {S}_{T-1},\,\mathbf {U}_{T-1},\, \mathbf {W}_{T-1} } \right) = \mathbf {C}_{T-1}\,\eta _{T-1}(\mathbf {X}_{T-1},\,\mathbf {U}_{T-1},\,\mathbf {A}_{T-1}) - \epsilon \,{\mathbf {U}_{T-1}}^{2} + v_{f}(\mathbf {X}_{T}) + \mathbf {S}_{T-1}. \end{array}\right. \end{aligned}$$
(22)
We have that \(\mathbf {S}_{T}=\mathbf {G}\), the total gain in (4).
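A minimal sketch of the accumulation dynamics (22), with the same hypothetical names as in our earlier sketches; the final stage also collects the value of the remaining water.

```python
def gain_accumulation(t, x_t, s_t, u_t, a_t, c_t, eta, v_f, eps, x_bar, T):
    """Dynamics (22) of the accumulated gain S; S_T equals the total gain G."""
    s_next = c_t * eta(t, x_t, u_t, a_t) - eps * u_t ** 2 + s_t
    if t == T - 1:                            # last stage adds the water value
        s_next += v_f(min(x_t + a_t - u_t, x_bar))
    return s_next
```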
Then, we write the viability probability (20) as an expectation over a product of indicator functions:
$$\begin{aligned} \mathbb {P}\left[ { \mathbf {G} \ge g_{\text {ref}}\; \; \text {and} \; \;\mathbf {X}_{\tau } \ge x_{\text {ref}}, \; \;\forall \tau \in \mathcal {T} } \right] = \mathbb {E}\left[ { \prod _{\;\tau \in \mathcal {T}}\mathbf {1}_{ \left\{ \mathbf {X}_{\tau } \ge x_{\text {ref}}\right\} } \; \mathbf {1}_{ \left\{ \mathbf {S}_{T} \ge g_{\text {ref}}\right\} } } \right] . \end{aligned}$$
Finally, the optimization problem (21) reads:
$$\begin{aligned} \max _{\mathbf {X}_{}, \, \mathbf {S}_{}, \, \mathbf {U}_{} } \quad&\mathbb {E}\left[ { \prod _{\tau \in \mathcal {T}}\mathbf {1}_{ \left\{ \mathbf {X}_{\tau } \ge x_{\text {ref}}\right\} } \; \mathbf {1}_{ \left\{ \mathbf {S}_{T} \ge g_{\text {ref}}\right\} } } \right] \end{aligned}$$
(23a)
$$\begin{aligned} \text { s.t. } \quad&\mathbf {U}_{t} \; \; \text {is measurable} \; \;\text {w.r.t.}\ \sigma (\mathbf {W}_{0}, \, \ldots , \, \mathbf {W}_{t}), \end{aligned}$$
(23b)
$$\begin{aligned}&0 \le \mathbf {U}_{t} \le \min \{\mathbf {X}_{t} + \mathbf {A}_{t}, \; \overline{u}\}, \; \;\forall t \in \{0, \ldots , T - 1 \}, \end{aligned}$$
(23c)
$$\begin{aligned}&\mathbf {X}_{t+1} = f_{ t }^{ \mathbf {X}_{} }\left( { \mathbf {X}_{t}, \, \mathbf {U}_{t}, \, \mathbf {A}_{t} } \right) , \; \;\forall t \in \{0, \ldots , T - 1 \}, \, \mathbf {X}_{0} = x_{0}, \end{aligned}$$
(23d)
$$\begin{aligned}&\mathbf {S}_{t + 1} = f_{ t }^{ \mathbf {S}_{} }\left( { \mathbf {X}_{t}, \, \mathbf {S}_{t}, \, \mathbf {U}_{t}, \, \mathbf {W}_{t} } \right) , \; \;\forall t \in \{0, \ldots , T - 1 \}, \, \mathbf {S}_{0}=s_0, \end{aligned}$$
(23e)
with \(s_0=0\). Note that the criterion (23a) is multiplicative. Using the following Theorem 1, we can solve (23) by multiplicative dynamic programming with state \( ( \mathbf {X}_{t}, \, \mathbf {S}_{t} ) \) (the proof is a variant of the one given in [10]).

Theorem 1

Consider the following backward induction equation:
$$\begin{aligned} V_{T}(x, \, s)&= \mathbf {1}_{\{s \ge g_{\text {ref}}\}}, \nonumber \\ V_{t}(x, \, s)&= \mathbb {E}\left[ { \max _{u \in \mathfrak {U}\left( x, \, \mathbf {A}_{t} \right) } \; \mathbf {1}_{\{x \ge x_{\text {ref}}\}} \, V_{t+1} \left( \mathbf {X}_{t+1}, \, \mathbf {S}_{t+1} \right) } \right] \quad \text {if} \;\; t \in \mathcal {T}, \nonumber \\ V_{t}(x, \, s)&= \mathbb {E}\left[ { \max _{u \in \mathfrak {U}\left( x, \, \mathbf {A}_{t} \right) } \; V_{t+1} \left( \mathbf {X}_{t+1}, \, \mathbf {S}_{t+1} \right) } \right] \quad \text {if} \;\; t \notin \mathcal {T}\cup \{T\}, \end{aligned}$$
(24)
where the set \(\mathfrak {U}\left( x, \, a \right) \) is defined by \(\mathfrak {U}\left( x, \, a \right) = \left\{ u \in \mathbb {R} \; \vert \; 0 \le u \le \min \{x + a, \; \overline{u}\} \right\} \) and where we use the notations \(\mathbf {X}_{t+1}=f_{ t }^{ \mathbf {X}_{} }\left( { x, \, u, \, \mathbf {A}_{t} } \right) \) and \(\mathbf {S}_{t+1}=f_{ t }^{ \mathbf {S}_{} }\left( { x, \, s , \, u, \, \mathbf {W}_{t} } \right) \). Then, for all \( (x_{0}, \, s_{0}) \in \mathbb {R}_{+}^{2} \), the maximal viability probability (21a) is given by
$$\begin{aligned} V_{0}(x_{0}, \, s_{0}) =&\max _{\mathbf {X}_{}, \, \mathbf {S}_{}, \, \mathbf {U}_{} } \; \mathbb {P}\left[ { \mathbf {S}_{T} \ge g_{\text {ref}}\; \; \text {and} \; \;\mathbf {X}_{\tau } \ge x_{\text {ref}}, \; \forall \tau \in \mathcal {T} } \right] , \\ \text {s.t.} \; \;&(23\mathrm{b}){-}(23\mathrm{c}){-}(23\mathrm{d}){-}(23\mathrm{e}). \end{aligned}$$

Thus, solving the dynamic programming equation (24) gives the solution of the stochastic viability problem (23), hence the solution of (21) since \(\mathbf {S}_{T}=\mathbf {G}\) by (22).

We now embed the multiplicative dynamic programming equation (24) in a loop where the thresholds \(\left( x_{\text {ref}},\,g_{\text {ref}}\right) \) vary. This gives Algorithm 1.
Whereas the dynamic programming state belonged to \( [0, \, \overline{x}] \times \{0, \, 1\} \) in Sect. 3.5.1, it belongs to \( [0, \, \overline{x}] \times [0,+\infty [\) here. This is why the dynamic programming equation (24) may be much slower to compute than (15), and the loop over the thresholds \( \left( x_{\text {ref}}, \, g_{\text {ref}}\right) \) in Algorithm 1 makes it even harder.
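A hedged sketch of the multiplicative backward induction (24), in the same discretized setting as the additive sketch of Sect. 3.5.1; the NumPy gain grid `ss` and the nearest-neighbor interpolation are our assumptions, and an outer loop over \((x_{\text {ref}}, g_{\text {ref}})\) pairs would reproduce Algorithm 1.

```python
import numpy as np

def viability_dp(xs, ss, us, noise, eta, v_f, eps, x_bar, x_ref, g_ref, season, T):
    """Multiplicative backward induction (24) on grids xs (storage), ss (gain)."""
    V = np.zeros((T + 1, len(xs), len(ss)))
    V[T] = (ss >= g_ref).astype(float)[None, :]    # V_T(x, s) = 1_{s >= g_ref}
    for t in reversed(range(T)):
        for i, x in enumerate(xs):
            for j, s in enumerate(ss):
                total = 0.0
                for a, c in noise[t]:
                    best = 0.0
                    for u in us:
                        if u > min(x + a, us[-1]):
                            continue               # bound constraint (2)
                        x1 = min(x + a - u, x_bar)
                        s1 = c * eta(t, x, u, a) - eps * u ** 2 + s
                        if t == T - 1:
                            s1 += v_f(x1)          # S_T collects v_f(X_T), Eq. (22)
                        i1 = int(np.abs(xs - x1).argmin())
                        j1 = int(np.abs(ss - s1).argmin())
                        best = max(best, V[t + 1, i1, j1])
                    total += best
                V[t, i, j] = total / len(noise[t])
        if t in season:                            # factor 1_{x >= x_ref} for t in T
            V[t][xs < x_ref, :] = 0.0
    return V  # V[0] at (x0, 0) approximates the maximal viability probability (21a)
```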

Now, we apply the stochastic viability approach to the numerical instance of Sect. 3.6.1. Then, we plot and interpret the function \(\phi ^{*}(x_{\text {ref}}, \, g_{\text {ref}})\) computed in Algorithm 1.

4.3 Numerical experiment

We consider the numerical instance described in Sect. 3.6.1.

4.3.1 Implementation of the algorithm described in Sect. 4.2

To implement the resolution of the dynamic programming equation (24), we use the discretization scheme given in Sect. 3.6.2 for the continuous sets \( [0, \, 80] \) and \( [0, \, 40] \) in which the storage state variable \( \mathbf {X}_{t} \) and the outflow control variable \( \mathbf {U}_{t} \) take their values. For the second state variable \( \mathbf {S}_{t} \), that lives in a continuous set \( \mathbb {S}\) by (22), we fix \( \mathbb {S} = [0, \, 7.5 \times 10^{5}] \) and we discretize \( \mathbb {S}\) as a set of 2000 points. As mentioned in Sect. 4.2, the use of such a state variable substantially increases the algorithm running time.9

On an Intel i7-based personal computer, the computation associated with a single value of the pair \(\left( x_{\text {ref}}, \, g_{\text {ref}}\right) \) in the multiplicative dynamic programming algorithm (24) requires a CPU time of 229 s. We perform this computation for each guaranteed tourist storage \(x_{\text {ref}}\) varying from 20 to 70 \(\mathrm {hm^{3}}\) by \(5~\mathrm{hm^{3}}\) steps, and for each guaranteed gain \(g_{\text {ref}}\) from 100,000 to 400,000 € by 25,000 € steps.

4.3.2 Numerical results

Isovalues of the maximal viability probability Figure 9 displays the isovalues of the maximal viability probability in (21a) as a function of the guaranteed gain \( g_{\text {ref}}\) (in €) and the guaranteed storage \( x_{\text {ref}}\) (in \(\mathrm{hm^{3}}\)).
Fig. 9

Isovalues of the maximal viability probability (21a), as function of the guaranteed gain \(g_{\text {ref}}\) and guaranteed storage \(x_{\text {ref}}\)

In Sect. 3, we maximized the expected gain subject to the tourist 0.9-level chance constraint (of having at least \(50~\mathrm{hm^{3}}\) in the tourist season), and obtained a quasi-optimal expected gain of 250,116 €. To what do these numbers correspond in Fig. 9? We see that jointly guaranteeing such a gain value and the same tourist constraint leads to no more than a 59 % probability level. Looking from another angle, if we keep the guaranteed storage \(x_{\text {ref}}\) at \(50~\mathrm{hm^{3}}\) and want a maximal viability probability (21a) of at least 0.9 (as was prescribed in Sect. 3), we cannot guarantee a gain higher than 175,000 €.

Thus, by symmetrizing the economic and the tourist stakes with the stochastic viability approach, we offer a complementary view on the management of the dam hydroelectric production, focusing on the joint tails of economic and tourist indicators. As a practical way to tackle multiple uses, we suggest that the decision-making process could start by drawing the viability probability isovalues, and then let stakeholders discuss which threshold values to set.

Simulation of stochastic viability optimal trajectories Now, we focus our attention on the solutions \( \mathbf {X}_{}^{\star \star } \), \( \mathbf {S}_{}^{\star \star } \) and \( \mathbf {G}^{\star \star } \) of the maximal viability problem (21a) for the thresholds \( x_{\text {ref}}= 50~\mathrm{hm^{3}}\) and \( g_{\text {ref}}= 250{,}116 \) €. These two values correspond respectively to the tourist threshold prescribed in Sect. 3.6.1 and to the expected total gain \( \mathbb {E}\left[ { \mathbf {G}^{\star } } \right] \) computed in Sect. 3.6.3. As mentioned previously, the maximal probability to jointly respect these thresholds is only 0.59.

Now, to draw realizations of the water storage \( \mathbf {X}_{}^{\star \star } \) (Fig. 11) and of the cumulated gain \(\mathbf {S}_{}^{\star \star } \) (Fig. 10), and to depict the empirical probability distribution of the total gain \(\mathbf {G}^{\star \star }\) (Fig. 12), we apply the control strategy \( \mathbf {U}_{}^{\star \star } \) to the dynamics (1), using the same sample of 1,000,000 realizations of the noise process \( \left( \mathbf {A}_{t}, \, \mathbf {C}_{t} \right) _{t \in \{0, \, \ldots , \, T-1\}} \) as in Sect. 3.6.3.
Fig. 10

Cumulated gain up to time t, \( \mathbf {S}_{t} \), in M€, for t from 1 to 11 (100 realizations)

Stochastic viability optimal storage and gain trajectories The grey rectangle at the far right of Fig. 10 is hit by the trajectories that do not achieve the guaranteed gain threshold. In the same vein, the grey rectangle in the middle of Fig. 11 is hit by the trajectories that do not achieve the guaranteed tourist storage threshold.
Fig. 11

Storage level process \( \mathbf {X}_{}^{\star \star } \) in \(\mathrm{hm^{3}}\) (100 realizations)

By comparison, we observe that more trajectories hit the rectangle in Fig. 10 than the one in Fig. 11. This means that, for the 100 representative realizations drawn, the gain threshold is more critical than the tourist threshold when maximizing the viability probability. This observation is confirmed by the shape of the isovalues of the maximal viability probability in Fig. 9: indeed, the isovalue curves change much more when the guaranteed gain \( g_{\text {ref}}\) varies than when the guaranteed storage \(x_{\text {ref}}\) does. Moreover, it is interesting to notice that Fig. 11 displays many trajectories which satisfy the tourist constraint but do not meet the gain threshold. This is because only the joint satisfaction of the storage and gain thresholds matters.

Stochastic viability empirical probability distribution of the gain \(\mathbf {G}^{\star \star }\). To conclude the numerical experiments, we depict in Fig. 12 the empirical probability distribution of the gain \(\mathbf {S}_{T}^{\star \star }=\mathbf {G}^{\star \star }\) and compare it with that of \(\mathbf {G}^{\star }\) obtained in Sect. 3.6.3.

Not surprisingly, they differ substantially. Indeed, whereas the distribution of \(\mathbf {G}^{\star }\) is balanced around its mean, that of \(\mathbf {G}^{\star \star }\) displays a peak of probability at the value \( g_{\text {ref}}\) and almost no probability mass beyond. We must admit that Fig. 12 baffles us. Suffice it to say that the viability probability criterion, unlike the gain, is not an economic quantity; maximizing the viability probability does not yield a “smooth” random gain.
Fig. 12

Empirical probability distribution of the gain (1,000,000 realizations with a 3500 € discretization scheme), the one we obtained in Sect. 3.6.3 is grey

5 Conclusion

When moving from deterministic to stochastic control, modelers traditionally take the expected value of the original criterion (economists call this approach “risk-neutral”). They tackle constraints in various senses: robust, in probability one, or with a given probability level. In a first part, we chose to handle the dam management issue in the latter way. We considered a chance-constrained stochastic optimal control problem, and we obtained satisfactory results with an algorithm that converged to an almost optimal solution. However, our numerical simulations revealed that the optimal random gain displayed a substantial dispersion. This is why, in a second part, we proposed a stochastic viability approach that symmetrizes the economic and the tourist stakes, and jointly guarantees minimal thresholds. We computed the isovalues of the maximal probability to jointly guarantee these thresholds. With this second approach, we obtained a more complete picture of how to deal with the management of multi-purpose facilities under risk, as dam reservoirs often are. Thus, we illustrated, on a case-based dam management problem, that risk in a dynamic setting can be formalized in various ways. More precisely, we shed light on a multi-purpose dam management issue from two angles, stochastic optimal control under chance constraint and stochastic viability, which offered complementary insights.

Extensions are possible in two directions for chance-constrained stochastic optimal control (Sect. 3). For multiple constraints, one needs a multidimensional multiplier; numerical illustrations can be found in [3]. The extension to multiple dams requires the ability to implement stochastic dynamic programming techniques on large numerical instances.

Footnotes

  1. Even if it was not the case in the data provided by Electricité de France.

  2. The abbreviation w.r.t. stands for “with respect to”.

  3. Whereas it would correspond to the Decision-Hazard framework if \( \mathbf {U}_{t} \) were measurable w.r.t. \( \sigma \left( \mathbf {W}_{0}, \, \ldots , \, \mathbf {W}_{t-1} \right) \) (see [7]).

  4. The abbreviation s.t. stands for “such that”.

  5. The gradient step method for the dual minimization problem may be replaced by a more efficient method such as dichotomy, conjugate gradient or quasi-Newton.

  6. Otherwise, the multiplier goes to infinity with the iteration index, which means that Constraint (9f) is infeasible.

  7. Although this assumption is by no means required.

  8. See Remark 2 for the influence of these discretization choices on the quality of the solution.

  9. We could consider that the set in which \( \mathbf {S}_{t} \) takes its values might vary with respect to t. This would certainly reduce the algorithm running time, but it would not reduce it by orders of magnitude.

Acknowledgments

The authors thank Electricité de France Research and Development for initiating this research through the CIFRE PhD funding of Jean-Christophe Alais and for supplying us with data.

References

  1. Van Ackooij, W.: Chance Constrained Programming with Applications in Energy Management. PhD thesis, École Centrale des Arts et Manufactures (2013)
  2. Van Ackooij, W., Zorgati, R., Henrion, R., Möller, A.: Joint chance-constrained programming for hydro reservoir management. Optim. Eng. 15, 509–531 (2014)
  3. Alais, J.C.: Risque et optimisation pour le management d’énergies: application à l’hydraulique. PhD thesis, Université Paris-Est (2013)
  4. Andrieu, L., Henrion, R., Römisch, W.: A model for dynamic chance constraints in hydro power reservoir management. Eur. J. Oper. Res. 207, 579–589 (2010)
  5. Charnes, A., Cooper, W.W.: Chance-constrained programming. Manag. Sci. 6, 73–79 (1959)
  6. Carpentier, P., Chancelier, J.-P., Cohen, G., De Lara, M., Girardeau, P.: Dynamic consistency for stochastic optimal control problems. Ann. Oper. Res. 200, 247–263 (2012)
  7. Carpentier, P., Chancelier, J.-P., Cohen, G., De Lara, M.: Stochastic Multi-Stage Optimization. At the Crossroads Between Discrete Time Stochastic Control and Stochastic Programming. Springer, Berlin (2015)
  8. De Lara, M., Doyen, L.: Sustainable Management of Natural Resources: Mathematical Models and Methods. Environmental Science and Engineering. Springer, Berlin (2008)
  9. Dentcheva, D.: Optimization models with probabilistic constraints. In: Lectures on Stochastic Programming: Modeling and Theory, pp. 87–153 (2009)
  10. Doyen, L., De Lara, M.: Stochastic viability and dynamic programming. Syst. Control Lett. 59(10), 629–634 (2010)
  11. Dupačová, J.: Stability and sensitivity analysis for stochastic programming. Ann. Oper. Res. 27(1), 115–142 (1990)
  12. Ekeland, I., Temam, R.: Convex Analysis and Variational Problems. Studies in Mathematics and its Applications. North-Holland, New York (1999)
  13. Everett, H.: Generalized Lagrange multiplier method for solving problems of optimum allocation of resources. Oper. Res. 11, 399–417 (1963)
  14. Henrion, R.: On the connectedness of probabilistic constraint sets. J. Optim. Theory Appl. 112(3), 657–663 (2002)
  15. Henrion, R.: A critical note on empirical (sample average, Monte Carlo) approximation of solutions to chance constrained programs. In: Hömberg, D., Tröltzsch, F. (eds.) System Modeling and Optimization. IFIP Advances in Information and Communication Technology, vol. 391, pp. 25–37. Springer, Heidelberg (2013)
  16. Henrion, R., Römisch, W.: Hölder and Lipschitz stability of solution sets in programs with probabilistic constraints. Math. Program. 100(3), 589–611 (2004)
  17. Infanger, G., Morton, D.P.: Cut sharing for multistage stochastic linear programs with interstage dependency. Math. Program. 75(2), 241–256 (1996)
  18. Luedtke, J., Ahmed, S.: A sample approximation approach for optimization with probabilistic constraints. SIAM J. Optim. 19(2), 674–699 (2008)
  19. Maceira, M.E., Damazio, J.M.: The use of PAR(p) model in the stochastic dual dynamic programming optimization scheme used in the operation planning of the Brazilian hydropower system. In: IEEE 8th International Conference on Probabilistic Methods Applied to Power Systems, pp. 397–402, Ames, Iowa (2004)
  20. Miller, L.B., Wagner, H.: Chance-constrained programming with joint constraints. Oper. Res. 13, 930–945 (1965)
  21. Ono, M., Kuwata, Y., Balaram, J.: Joint chance-constrained dynamic programming. In: 51st IEEE Conference on Decision and Control, pp. 1915–1922, Maui, Hawaii (2012)
  22. Prékopa, A., Szántai, T.: On optimal regulation of a storage level with application to the water level regulation of a lake. Eur. J. Oper. Res. 3(3), 175–189 (1979)
  23. Prékopa, A.: Stochastic Programming. Mathematics and Its Applications. Kluwer Academic Publishers, Dordrecht (1995)
  24. Prékopa, A.: Probabilistic programming. In: Ruszczyński, A., Shapiro, A. (eds.) Stochastic Programming. Handbooks in Operations Research and Management Science, vol. 10, pp. 267–351. Elsevier, Amsterdam (2003)

Copyright information

© Springer-Verlag Berlin Heidelberg 2015

Authors and Affiliations

  • Jean-Christophe Alais, Artelys, Paris, France
  • Pierre Carpentier, UMA, ENSTA ParisTech, Université Paris-Saclay, Palaiseau, France
  • Michel De Lara, Université Paris-Est, CERMICS (ENPC), Marne-la-Vallée, France
