# Dynamic consistency for stochastic optimal control problems

## Abstract

For a sequence of dynamic optimization problems, we aim at discussing a notion of consistency over time. This notion can be informally introduced as follows. At the very first time step *t*_{0}, the decision maker formulates an optimization problem that yields optimal decision rules for all the forthcoming time steps *t*_{0},*t*_{1},…,*T*; at the next time step *t*_{1}, he is able to formulate a new optimization problem starting at time *t*_{1} that yields a new sequence of optimal decision rules. This process can be continued until the final time *T* is reached. A family of optimization problems formulated in this way is said to be dynamically consistent if the optimal strategies obtained when solving the original problem remain optimal for all subsequent problems. The notion of dynamic consistency, well-known in the field of economics, has been recently introduced in the context of risk measures, notably by Artzner et al. (Ann. Oper. Res. 152(1):5–22, 2007) and studied in the stochastic programming framework by Shapiro (Oper. Res. Lett. 37(3):143–147, 2009) and for Markov Decision Processes (MDP) by Ruszczynski (Math. Program. 125(2):235–261, 2010). We here link this notion with the concept of “state variable” in MDP, and show that a significant class of dynamic optimization problems are dynamically consistent, provided that an adequate state variable is chosen.

### Keywords

Stochastic optimal control · Dynamic consistency · Dynamic programming · Risk measures

## 1 Introduction

Stochastic Optimal Control (SOC) is concerned with sequential decision-making under uncertainty. Consider a dynamical process that can be influenced by exogenous noises as well as decisions one has to make at every time step. The decision maker wants to optimize the behavior of the dynamical system (for instance, minimize a production cost) over a certain time horizon. As the system evolves, observations of the system are made; we here suppose that the decision maker is able to keep in memory all the past observations. Naturally, it is generally more profitable for him to adapt his decisions to the observations he makes of the system. He is hence looking for strategies rather than simple decisions, that is, for mappings that assign a decision to every possible history of the observations. Because the number of time steps may be large, the representation of such an object is in general numerically intractable.

However, an amount of information lighter than the whole history of the system is often sufficient to make an optimal decision. In the seminal work of Bellman (1957), the minimal information on the system that is necessary to make the optimal decision plays a crucial role; it is called the *state variable* (see Whittle 1982, for a more formal definition). Moreover, the Dynamic Programming (DP) principle provides a way to compute the optimal strategies when the state space dimension is not too large (see Bertsekas 2000, for a broad overview on DP). The aim of this paper is to establish a link between the concept of state variable and the notion of dynamic consistency.

The notion of “consistent course of action” (see Peleg and Yaari 1973) is well-known in the field of economics, with the seminal work of Strotz (1955–1956): an individual having planned his consumption trajectory is consistent if, reevaluating his plans later on, he does not deviate from the originally chosen plan. This idea of consistency as “sticking to one’s plan” may be extended to the uncertain case where plans are replaced by decision rules (“Do thus-and-thus if you find yourself in this portion of state space with this amount of time left”, Richard Bellman cited in Dreyfus 2002): Hammond (1976) addresses “consistency” and “coherent dynamic choice”, Kreps and Porteus (1978) refers to “temporal consistency”. Dynamic or time consistency has been introduced in the context of risk measures (see Riedel 2004; Detlefsen and Scandolo 2005; Cheridito et al. 2006; Artzner et al. 2007, for definitions and properties of coherent and consistent dynamic risk measures). Time consistency has then been studied in the stochastic programming framework by Shapiro (2009) and for Markov Decision Processes by Ruszczynski (2010). In this paper, we rather use the (almost equivalent) definition of time consistency given by Ekeland and Lazrak (2006), which is more intuitive and seems better suited in the framework of optimal control problems. We see that terminology varies among authors between dynamic, temporal or time consistency. The term “time consistency” being attached in our community to dynamic risk measures, we will rather use the term “dynamic consistency” for optimal control problems. In this context, the property of dynamic consistency is loosely stated as follows. The decision maker formulates an optimization problem at time *t*_{0} that yields a sequence of optimal decision rules for *t*_{0} and for the following time steps *t*_{1},…,*t*_{N}=*T*. Then, at the next time step *t*_{1}, he formulates a new problem starting at *t*_{1} that yields a new sequence of optimal decision rules from time steps *t*_{1} to *T*. Suppose the process continues until time *T* is reached. The sequence of optimization problems is said to be dynamically consistent if the optimal strategies obtained when solving the original problem at time *t*_{0} remain optimal for all subsequent problems. In other words, dynamic consistency means that strategies obtained by solving the problem at the very first stage do not have to be questioned later on.

The notion of information here plays a crucial role. Indeed, we show in this paper that a sequence of problems may be consistent for some information structure while inconsistent for a different one. Consider for example a standard stochastic optimization problem solvable using DP. We will observe that the sequence of problems formulated after the original one, at the later time steps, is dynamically consistent. Add now a probabilistic constraint involving the state at the final time *T*. We will show that such a constraint brings dynamic inconsistency, in the sense that optimal strategies based on the usual state variable have to be reconsidered at each time step. This is because, roughly speaking, a probabilistic constraint involves not only the state variable values but also their probability distributions. Hence the knowledge of the usual state variable alone is insufficient to formulate consistent problems at subsequent time steps. So, in addition to the usual technical difficulties regarding probabilistic constraints (mainly related to the non-convexity of the feasible set of strategies), an additional difficulty arises in the dynamic case. We will see that this new difficulty in fact stems from the information on which the optimal decision is based. Therefore, with a well-suited state variable, the sequence of problems regains dynamic consistency.

In Sect. 2, we carefully examine the notion of dynamic consistency in the context of a deterministic optimal control problem. The main ideas of the paper are introduced there, and then extended, in Sect. 3, to a sequence of SOC problems. Next, in Sect. 4, we show that simply adding a probability constraint (or, equivalently in our context, an expectation constraint) to the problem makes dynamic consistency fall apart when using the original state variable. We then establish that dynamic consistency can be recovered provided that an adequate state variable is chosen. A toy example is presented in Sect. 5. We conclude that, for a broad class of SOC problems, dynamic consistency has to be considered with respect to the notion of state variable and of DP.

## 2 A first example

We introduce sequential deterministic optimal control problems, indexed by time, and derive the notion of dynamic consistency in this setting. We then illustrate the fact that the decision-making process may be dynamically consistent or not, depending on the information on which decisions are based. The discussion is informal, in the sense that we do not enter into technical details regarding the existence of solutions for the problems we introduce.

Let us consider a discrete and finite time horizon *t*_{0},…,*t*_{N}=*T*.^{1} The decision maker has to optimize (according to a cost function we introduce below) the management of an amount of stock *x*_{t}, which lies in some space \(\mathcal{X}_{t}\), at every time step *t*=*t*_{0},…,*T*. Let \(\mathcal{U}_{t}\) be some other space, for every time step *t*=*t*_{0},…,*T*−1. At each time step *t*, a decision \(u_{t}\in\mathcal{U}_{t}\) has to be made. Then a cost *L*_{t} is incurred by the system, depending on the values of the control and on the auxiliary variable *x*_{t} that we call the state of the system. This state variable is driven from time *t* to time *t*+1 by some dynamics \(f_{t}: \mathcal{X}_{t} \times\mathcal{U}_{t} \rightarrow \mathcal{X}_{t+1}\). The aim of the decision maker is to minimize the sum of the intermediate costs *L*_{t} at all time steps plus a final cost *K*.

The problem formulated at time *t*_{0}, for a given initial condition \(x_{t_{0}}\), reads:

$$\min_{u_{t_{0}},\dots,u_{T-1}}\ \sum_{t=t_{0}}^{T-1} L_{t}(x_{t},u_{t}) + K(x_{T}), \tag{1a}$$
$$\text{s.t.}\quad x_{t_{0}}\ \text{given}, \tag{1b}$$
$$\qquad\ \ x_{t+1} = f_{t}(x_{t},u_{t}),\quad t=t_{0},\dots,T-1. \tag{1c}$$

Note that each decision *u*_{t} is taken knowing the current time step and the initial condition (the decision is generally termed “open-loop”). A priori, there is no need for more information since the model is deterministic (hence the evolution of the system is entirely determined by the initial condition of the state and by the control values).

Consider now the same problem formulated at each later time step *t*_{i}=*t*_{1},…,*T*−1, with initial condition \(x_{t_{i}}\) given:

$$\min_{u_{t_{i}},\dots,u_{T-1}}\ \sum_{t=t_{i}}^{T-1} L_{t}(x_{t},u_{t}) + K(x_{T}), \tag{2a}$$
$$\text{s.t.}\quad x_{t_{i}}\ \text{given}, \tag{2b}$$
$$\qquad\ \ x_{t+1} = f_{t}(x_{t},u_{t}),\quad t=t_{i},\dots,T-1. \tag{2c}$$

We denote the solutions of these problems by \(u_{t_{i},t_{i}}^{*},\dots,u_{t_{i},T-1}^{*}\), for every time step *t*_{i}=*t*_{1},…,*T*−1. Those notations however leave implicit the fact that the solutions generally depend on the initial condition \(x_{t_{i}}\). We now make a first observation.

### Lemma 1

(Independence of the initial condition)

*In the very particular case when the solution to Problem *(1a)*–*(1c) *and the solutions to Problems *(2a)*–*(2c) *for every time step t*_{i}=*t*_{1},…,*T*−1 *do not depend on the initial state conditions*, *problems are dynamically consistent*.

### Proof

Let us denote by \(x_{t_{0}, t_{i}}^{*}\) the optimal value of the state variable within Problem (1a)–(1c) at time *t*_{i}. If we suppose that solutions to Problems (2a)–(2c) do not depend on the initial condition, then they are the same as the solutions obtained with the initial condition \(x_{t_{0}, t_{i}}^{*}\), namely \(u_{t_{0},t_{i}}^{*}, \dots, u_{t_{0},T-1}^{*}\). In other words, the sequence of decisions \(u_{t_{0},t_{0}}^{*}\), …, \(u_{t_{0},T-1}^{*}\) remains optimal for the subsequent problems starting at a later date. □

This property is of course not true in general, but we see in Example 1 hereafter and in Sect. 3 that some very practical problems do have this surprising property.

### Example 1

Consider, for every time step *t*=*t*_{0},…,*T*−1, functions \(l_{t}:\mathcal{U}_{t} \rightarrow\mathbb{R}\) and \(g_{t}: \mathcal{U}_{t}\rightarrow\mathbb{R}\), and assume that the state *x*_{t} is scalar. Let *K* be a scalar constant and consider the following deterministic optimal control problem:

$$\min_{u_{t_{0}},\dots,u_{T-1}}\ \sum_{t=t_{0}}^{T-1} x_{t}\,l_{t}(u_{t}) + K\,x_{T},\qquad \text{s.t.}\ x_{t_{0}}\ \text{given},\quad x_{t+1}=x_{t}\,g_{t}(u_{t}).$$

Variables *x*_{t} can be recursively replaced using the dynamics *g*_{t}. Therefore, the above optimization problem can be written:

$$\min_{u_{t_{0}},\dots,u_{T-1}}\ x_{t_{0}}\Biggl(\,\sum_{t=t_{0}}^{T-1}\biggl(\,\prod_{s=t_{0}}^{t-1} g_{s}(u_{s})\biggr) l_{t}(u_{t}) + K\prod_{s=t_{0}}^{T-1} g_{s}(u_{s})\Biggr).$$

The initial condition \(x_{t_{0}}\) being a multiplicative factor of the criterion, it has no influence on the optimal controls, and the same remains true for the problems formulated at times *t*_{i}>*t*_{0}, provided that dynamics are such that *x*_{t} remains positive for every time step *t*=*t*_{1},…,*T*. Now, formulate the same problem at a later date *t*_{i}=*t*_{1},…,*T*−1, with initial condition \(x_{t_{i}}\) given. By the same token as for the first stage problem, the value of the initial condition \(x_{t_{i}}\) has no influence on the optimal controls. The assumptions made in Lemma 1 are thus fulfilled, so that the dynamic consistency property holds true for open-loop decisions, without reference to initial state conditions.
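
As a quick sanity check, here is a minimal numerical sketch of Example 1; the stage functions *l*, *g*, the horizon and the control grid below are illustrative assumptions, not data from the paper. It verifies that the optimal open-loop sequence is the same for any positive initial condition, since \(x_{t_0}\) factors out of the criterion.

```python
# Toy check of Example 1: cost sum_t x_t l(u_t) + K x_T with dynamics
# x_{t+1} = x_t g(u_t). All data below are assumed, for illustration only.
from itertools import product

l = lambda u: (u - 0.3) ** 2          # stage cost factor l_t, assumed
g = lambda u: 1.0 + 0.5 * u           # dynamics factor g_t, assumed
K, T, U = 2.0, 3, [0.0, 0.5, 1.0]     # final cost, horizon, control grid

def total_cost(x0, controls):
    """Criterion along the trajectory driven by the open-loop controls."""
    x, cost = x0, 0.0
    for u in controls:
        cost += x * l(u)
        x *= g(u)
    return cost + K * x

# The argmin is identical for every x0 > 0: x0 is a positive multiplicative
# factor of the whole criterion.
for x0 in (1.0, 10.0):
    best = min(product(U, repeat=T), key=lambda us: total_cost(x0, us))
    print(x0, best)
```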

Although, for the time being, this example may look very special, we will see later on that it is analogous to familiar SOC problems.

In practice, the decision-making process for such a problem goes through the following stages:

- At time *t*_{0}, Problem (1a)–(1c) is solved. One obtains a decision \(u_{t_{0}, t_{0}}^{*}\) to apply at time *t*_{0}, as well as decisions \(u_{t_{0}, t_{1}}^{*}, \dots, u_{t_{0}, T-1}^{*}\) for future time steps.
- At time *t*_{1}, one formulates and solves the problem starting at time *t*_{1} with initial condition \(x_{t_{1}}=f_{t_{0}}(x_{t_{0}}, u_{t_{0}, t_{0}}^{*}) +\varepsilon_{t_{1}}\), \(\varepsilon_{t_{1}}\) being some perturbation of the model. There is indeed no reason not to use the observation of the actual value of the variable \(x_{t_{1}}\) at time *t*_{1}, as long as we have it at our disposal. Hence a decision \(u_{t_{1}, t_{1}}^{*}\) is obtained, which is different (once again, in general) from the initially obtained optimal decision \(u_{t_{0},t_{1}}^{*}\).
- The same process continues at times *t*_{2},…,*T*−1.

Let us now state the following two lemmas.

### Lemma 2

(True deterministic world)

*If the deterministic model is actually exact*, i.e. *if all perturbations* \(\varepsilon_{t_{i}}\) *introduced above equal zero*, *then Problems* (2a)–(2c) *with initial conditions* \(x_{t_{i}} =x_{t_{i}}^{*} :=f_{t_{i-1}}(x_{t_{i-1}}^{*}, u_{t_{0}, t_{i-1}}^{*})\) *are dynamically consistent*.

### Proof

When all perturbations equal zero, the initial condition observed at time *t*_{1} coincides with the optimal state value \(x_{t_{1}}^{*}\) predicted at time *t*_{0}, so that the decisions \(u_{t_{0},t_{1}}^{*},\dots,u_{t_{0},T-1}^{*}\) remain optimal for the problem formulated at time *t*_{1}. The same argument applies recursively for subsequent time steps. □

It is clear that the assumption made in Lemma 2 is not satisfied in real life. Therefore, adding disturbances to the problem seems to bring inconsistency to the sequence of optimization problems: decisions that are optimal for the first stage problem do not remain optimal for the subsequent problems if we do not let decisions depend on the initial conditions.

In fact, as stated next, dynamic consistency is recovered provided we let decisions depend upon the right information.

### Lemma 3

(Right amount of information)

*Suppose that the solutions of Problem *(1a)*–*(1c) *are looked for in the set of control strategies*\((\phi_{t_{0},t_{0}}^{*}\), …, \(\phi_{t_{0}, T-1}^{*})\), *where each feedback function *\(\phi_{t_{0}, t}^{*}\)*is defined on *\(\mathcal{X}_{t}\)*and takes values in *\(\mathcal{U}_{t}\). *Then Problems *(2a)*–*(2c) *are dynamically consistent for every time step t*=*t*_{0},…,*T*−1.

### Proof

The result is a direct application of the DP principle, which states that there exists such a feedback function \(\phi^{*}_{t_{0}, t_{i}}\) that is optimal for Problem (1a)–(1c) and is still optimal for Problem (2a)–(2c) at time *t*_{i}, whatever initial condition \(x_{t_{i}}\) is. □

We thus retrieve the dynamic consistency property provided that we use the feedback functions \(\phi_{t_{0},t}^{*}\) rather than the controls \(u_{t_{0},t}^{*}\). In other words, problems are dynamically consistent as soon as the control strategy is based on a sufficiently rich amount of information (time instant *t* *and* state variable *x* in the deterministic case).
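
To make Lemma 3 concrete, here is a minimal sketch of the backward DP recursion on finite grids (with *t*_{0}=0); the names `X`, `U`, `f`, `L`, `K` stand for the paper's objects, and any concrete instantiation of them is an assumption for illustration.

```python
# Minimal sketch of backward DP for the deterministic problem (1a)-(1c),
# assuming finite grids and a dynamics f that maps back into the grid X.
def solve_dp(X, U, f, L, K, T):
    """Return feedback maps phi[t][x] and value functions V[t][x]."""
    V = {T: {x: K(x) for x in X}}
    phi = {}
    for t in reversed(range(T)):
        V[t], phi[t] = {}, {}
        for x in X:
            # Cost-to-go of each admissible control at (t, x).
            costs = {u: L(t, x, u) + V[t + 1][f(t, x, u)] for u in U}
            phi[t][x] = min(costs, key=costs.get)
            V[t][x] = costs[phi[t][x]]
    return phi, V
```

Whatever initial condition \(x_{t_{i}}\) is observed later on, the same maps `phi[t]` solve the problem restarted at *t*_{i}, which is precisely the dynamic consistency asserted by Lemma 3.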

The considerations we made so far seem to be somewhat trivial. However, we shall observe that for SOC problems, which may seem more complicated at first sight, the same considerations remain true. Most of the time, decision making processes are dynamically consistent, provided we choose the correct information on which decisions are based.

## 3 Stochastic optimal control without constraints

We now consider a more general case in which a controlled dynamical system is influenced by modeled exogenous disturbances. The decision maker has to find strategies to drive the system so as to minimize some objective function over a certain time horizon. This is a sequential decision-making process for which the question of dynamic consistency can be raised. As in the previous example, the family of optimization problems is derived from the original one by truncating the dynamics and the cost function (the final time step *T* remains unchanged in each problem), and strategies are defined relying on the same information structure as in the original problem. In the sequel, random variables are denoted using bold letters.

### 3.1 The classical case

Consider a dynamical system characterized by state^{2} variables \(\boldsymbol{x}=(\boldsymbol{x}_{t})_{t=t_{0},\dots,T}\), where \(\boldsymbol{x}_{t}\) takes values in \(\mathcal{X}_{t}\). The system can be influenced by control variables \(\boldsymbol{u}=(\boldsymbol{u}_{t})_{t=t_{0},\dots,T-1}\) and by exogenous noise variables \(\boldsymbol{w}=(\boldsymbol{w}_{t})_{t=t_{0},\dots,T}\) (\(\boldsymbol{u}_{t}\) and \(\boldsymbol{w}_{t}\) taking values in \(\mathcal{U}_{t}\) and \(\mathcal{W}_{t}\) respectively). Noises that affect the system can be correlated through time. All random variables are defined on a probability space \((\varOmega,\mathcal{A},\mathbb{P})\). The problem we consider consists in minimizing the expectation of a sum of costs depending on the state, the control and the noise variables over a discrete finite time horizon. The state variable evolves with respect to some dynamics that depend on the current state, noise and control values. The problem starting at *t*_{0} is written:^{3}

$$\min_{\boldsymbol{u}}\ \mathbb{E}\Biggl(\,\sum_{t=t_{0}}^{T-1} L_{t}(\boldsymbol{x}_{t},\boldsymbol{u}_{t},\boldsymbol{w}_{t+1}) + K(\boldsymbol{x}_{T})\Biggr), \tag{3a}$$
$$\text{s.t.}\quad \boldsymbol{x}_{t_{0}} \sim \mu_{t_{0}}, \tag{3b}$$
$$\qquad\ \ \boldsymbol{x}_{t+1} = f_{t}(\boldsymbol{x}_{t},\boldsymbol{u}_{t},\boldsymbol{w}_{t+1}),\quad t=t_{0},\dots,T-1, \tag{3c}$$
$$\qquad\ \ \boldsymbol{u}_{t} \preceq (\boldsymbol{x}_{t_{0}},\boldsymbol{w}_{t_{1}},\dots,\boldsymbol{w}_{t}),\quad t=t_{0},\dots,T-1. \tag{3d}$$

Let us comment on these notations. On the one hand, (3b) states that the *probability law* \(\mu_{t_{0}}\) of the initial state \(\boldsymbol{x}_{t_{0}}\) is part of the known data of the problem (such a knowledge is mandatory for evaluating the criterion in (3a)). On the other hand, (3d) implies that each control variable \(\boldsymbol{u}_{t}\) may be modeled as a function notably depending on the *realization* of the random variable \(\boldsymbol{x}_{t_{0}}\). Of course, such a function may also parametrically depend on the data of the problem, and thus on the probability law \(\mu_{t_{0}}\). We shall see now that this last dependency does not occur for Problem (3a)–(3d) under the so-called Markovian assumption (Assumption 1 hereafter).

A general approach in optimal control consists in assuming that noise variables \(\boldsymbol{w}_{t_{1}}, \dots, \boldsymbol{w}_{T}\) are independent through time, so that all necessary information is included in the state variables \(\boldsymbol{x}\).^{4} We hence make the following assumption.

### Assumption 1

(Markovian setting)

Random variables \(\boldsymbol{x}_{t_{0}}, \boldsymbol{w}_{t_{1}}, \dots, \boldsymbol{w}_{T}\) are independent.

Under Assumption 1, two classical results hold:

- there is no loss of optimality in looking for the optimal strategy \(\boldsymbol{u}_{t}\) at time *t* as a feedback function depending on the state variable \(\boldsymbol{x}_{t}\), i.e. as a (measurable) function of the form \(\phi_{t_{0}, t}: \mathcal{X}_{t} \rightarrow\mathcal{U}_{t}\);
- the optimal strategies \(\phi_{t_{0}, t_{0}}^{*}, \dots, \phi_{t_{0},T-1}^{*}\) can be obtained by solving the classical DP equation. Let *V*_{t}(*x*) denote the optimal cost when being at time step *t* with state value *x*; this equation reads:

$$V_{T}(x) = K(x), \tag{4a}$$
$$V_{t}(x) = \min_{u\in\mathcal{U}_{t}}\ \mathbb{E}\bigl(L_{t}(x,u,\boldsymbol{w}_{t+1}) + V_{t+1}\bigl(f_{t}(x,u,\boldsymbol{w}_{t+1})\bigr)\bigr). \tag{4b}$$

We refer to this setting as the *classical case*. A consequence of the DP equation is that the argmin in (4b) depends on both *x* and the probability law of \(\boldsymbol{w}_{t+1}\), but not on \(\mu_{t_{0}}\), so that the optimal feedback functions do not depend on the initial condition \(\mu_{t_{0}}\). The probability law of \(\boldsymbol{x}_{t_{0}}\) only affects the optimal cost value, not the argmin. Moreover, it is clear while inspecting the DP equation that the optimal strategies \(\phi_{t_{0},t_{0}}^{*}, \dots, \phi_{t_{0}, T-1}^{*}\) remain optimal for the subsequent optimization problems, which read, for every *t*_{i}=*t*_{1},…,*T*−1:

$$\min_{\boldsymbol{u}}\ \mathbb{E}\Biggl(\,\sum_{t=t_{i}}^{T-1} L_{t}(\boldsymbol{x}_{t},\boldsymbol{u}_{t},\boldsymbol{w}_{t+1}) + K(\boldsymbol{x}_{T})\Biggr), \tag{5a}$$
$$\text{s.t.}\quad \boldsymbol{x}_{t_{i}} \sim \mu_{t_{i}}, \tag{5b}$$
$$\qquad\ \ \boldsymbol{x}_{t+1} = f_{t}(\boldsymbol{x}_{t},\boldsymbol{u}_{t},\boldsymbol{w}_{t+1}),\quad t=t_{i},\dots,T-1, \tag{5c}$$
$$\qquad\ \ \boldsymbol{u}_{t} \preceq (\boldsymbol{x}_{t_{i}},\boldsymbol{w}_{t_{i+1}},\dots,\boldsymbol{w}_{t}),\quad t=t_{i},\dots,T-1. \tag{5d}$$

In other words, these problems are dynamically consistent provided the information variable at time *t* contains at least the state variable \(\boldsymbol{x}_{t}\), that is, provided the control variable \(\boldsymbol{u}_{t}\) is modeled as a function of the realizations of \(\boldsymbol{x}_{t}\). While building an analogy with the properties described in the deterministic example of Sect. 2, the reader should be aware that the case we consider here is closer to Lemma 1 than to Lemma 3, as we now explain in more detail.
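
For concreteness, here is a minimal sketch of the recursion (4a)–(4b) on finite grids, with *t*_{0}=0 and a finite noise law; the names `X`, `U`, `noise`, `f`, `L`, `K` are placeholders for the paper's objects, and any instantiation of them is an assumption. It makes visible that the initial law \(\mu_{t_{0}}\) never enters the recursion.

```python
# Minimal sketch of the classical stochastic DP recursion (4a)-(4b),
# assuming finite grids and noise given as a dict {w: probability}.
def solve_stochastic_dp(X, U, noise, f, L, K, T):
    """Return feedback maps phi[t][x] and value functions V[t][x]."""
    V = {T: {x: K(x) for x in X}}   # (4a): V_T = K
    phi = {}
    for t in reversed(range(T)):
        V[t], phi[t] = {}, {}
        for x in X:
            # (4b): only x and the law of w_{t+1} enter the argmin;
            # the initial law mu_{t0} appears nowhere in the recursion.
            costs = {
                u: sum(p * (L(t, x, u, w) + V[t + 1][f(t, x, u, w)])
                       for w, p in noise.items())
                for u in U
            }
            phi[t][x] = min(costs, key=costs.get)
            V[t][x] = costs[phi[t][x]]
    return phi, V
```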

### 3.2 The distributed formulation

In fact, we are within the same framework as in Example 1. Indeed, Problem (3a)–(3d) can be written as a deterministic distributed optimal control problem involving the probability laws of the state variable, the dynamics of which are given by the so-called Fokker–Planck equation. Let us detail this last formulation (see Witsenhausen 1973).

Let *Ξ*_{t} be a linear space of ℝ-valued measurable functions on \(\mathcal{X}_{t}\) (equipped with an appropriate *σ*-field). We can consider a dual pairing between *Ξ*_{t} and a space *ϒ*_{t} of signed measures on \(\mathcal{X}_{t}\) by considering the bilinear form:^{5}

$$\langle \xi_{t},\mu_{t}\rangle = \int_{\mathcal{X}_{t}} \xi_{t}(x)\,\mathrm{d}\mu_{t}(x).$$

For *ξ*_{t}∈*Ξ*_{t}, let \(\boldsymbol{x}_{t}\) be a random variable taking values in \(\mathcal{X}_{t}\) and distributed according to the probability law *μ*_{t}. The former bilinear form reads as follows:

$$\langle \xi_{t},\mu_{t}\rangle = \mathbb{E}\bigl(\xi_{t}(\boldsymbol{x}_{t})\bigr).$$

Given feedback functions \(\phi_{t}:\mathcal{X}_{t}\rightarrow\mathcal{U}_{t}\), for every *t*=*t*_{0},…,*T*−1, we define the operator \(\mathbb{T}_{t}^{\phi_{t}}: \varXi_{t+1} \rightarrow\varXi_{t}\) as:

$$\bigl(\mathbb{T}_{t}^{\phi_{t}}\xi\bigr)(x) = \mathbb{E}\bigl(\xi\bigl(f_{t}(x,\phi_{t}(x),\boldsymbol{w}_{t+1})\bigr)\bigr).$$

Its adjoint \(\bigl(\mathbb{T}_{t}^{\phi_{t}}\bigr)^{\star}:\varUpsilon_{t}\rightarrow\varUpsilon_{t+1}\) drives the probability laws *μ*_{t} of the state variables (controlled by the feedback functions *ϕ*_{t}) by the following Fokker–Planck equation:

$$\mu_{t+1} = \bigl(\mathbb{T}_{t}^{\phi_{t}}\bigr)^{\star}\mu_{t},$$

which has to be understood in the weak sense: for every *ξ*_{t+1}∈*Ξ*_{t+1} we have:

$$\bigl\langle \xi_{t+1},\bigl(\mathbb{T}_{t}^{\phi_{t}}\bigr)^{\star}\mu_{t}\bigr\rangle = \bigl\langle \mathbb{T}_{t}^{\phi_{t}}\xi_{t+1},\mu_{t}\bigr\rangle.$$

Next we introduce the operator \(\varLambda_{t}^{\phi_{t}} : \mathcal{X}_{t} \rightarrow\mathbb{R}\):

$$\varLambda_{t}^{\phi_{t}}(x) = \mathbb{E}\bigl(L_{t}\bigl(x,\phi_{t}(x),\boldsymbol{w}_{t+1}\bigr)\bigr),$$

which corresponds to the expected cost at time *t* for each possible state value when feedback function *ϕ*_{t} is applied. Using \(\varLambda_{t}^{\phi_{t}}\) we have:

$$\min_{\phi=(\phi_{t_{0}},\dots,\phi_{T-1})}\ \sum_{t=t_{0}}^{T-1}\bigl\langle \varLambda_{t}^{\phi_{t}},\mu_{t}\bigr\rangle + \langle K,\mu_{T}\rangle,\qquad \text{s.t.}\ \mu_{t_{0}}\ \text{given},\quad \mu_{t+1}=\bigl(\mathbb{T}_{t}^{\phi_{t}}\bigr)^{\star}\mu_{t},$$

which is the announced deterministic distributed formulation of Problem (3a)–(3d): the “state” is now the probability law *μ*_{t}, driven by the (linear) Fokker–Planck dynamics.
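
On a finite state space these objects become matrices, and the adjoint relation can be checked numerically. The following sketch uses arbitrary made-up data and is only meant to illustrate the pairing, not to reproduce any computation from the paper.

```python
# Finite-space sketch: T_t^phi is a row-stochastic matrix (row i is the law
# of f_t(x_i, phi(x_i), w_{t+1})), its adjoint is the transpose, and
# <T^phi xi, mu> = <xi, (T^phi)* mu>.
import numpy as np

rng = np.random.default_rng(0)
n = 4                                       # card of the state space, assumed
T_phi = rng.random((n, n))
T_phi /= T_phi.sum(axis=1, keepdims=True)   # make the rows stochastic

xi = rng.random(n)                          # a function xi_{t+1} on X_{t+1}
mu = rng.random(n)
mu /= mu.sum()                              # a probability law mu_t on X_t

lhs = mu @ (T_phi @ xi)                     # <T_t^phi xi_{t+1}, mu_t>
mu_next = T_phi.T @ mu                      # Fokker-Planck step: (T_t^phi)* mu_t
rhs = mu_next @ xi                          # <xi_{t+1}, mu_{t+1}>
assert np.isclose(lhs, rhs)
```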

### Remark 1

One may also consider the equivalent *backward formulation*:

$$\min_{\phi}\ \langle \xi_{t_{0}},\mu_{t_{0}}\rangle,\qquad \text{s.t.}\ \xi_{T}=K,\quad \xi_{t}=\varLambda_{t}^{\phi_{t}} + \mathbb{T}_{t}^{\phi_{t}}\xi_{t+1},\quad t=t_{0},\dots,T-1,$$

in which the new variables *ξ*_{t}∈*Ξ*_{t} are real-valued functions on \(\mathcal{X}_{t}\). This may be called “*the backward formulation*” since the “state” *ξ*_{t} follows an affine dynamics which is backward in time, with an initial-only cost function (whereas the previous forward formulation follows a forward linear dynamics with an integral + final cost function). Both formulations are infinite-dimensional linear programming problems which are dual to each other (see Theorem 1 in Witsenhausen 1973). The measures *μ*_{t} and the functions *ξ*_{t} are the distributed state and/or co-state (according to which one is considered the primal problem) of this distributed deterministic optimal control problem, of which *ϕ* is the distributed control.

Note that the laws *μ*_{t} are by definition positive and appear only in a multiplicative manner in the problem. Hence we are in a case similar to Example 1. The main difference is rather technical: since we here have probability laws instead of scalars, we need to apply, backwards in time, interchange theorems between expectation and minimization in order to prove that the solution of the problem actually does not depend on the initial condition \(\mu_{t_{0}}\). Indeed, suppose that *μ*_{T−1} is given at time step *T*−1. Then the innermost optimization problem reads:

$$\min_{\phi_{T-1}}\ \bigl\langle \varLambda_{T-1}^{\phi_{T-1}},\mu_{T-1}\bigr\rangle + \bigl\langle K,\bigl(\mathbb{T}_{T-1}^{\phi_{T-1}}\bigr)^{\star}\mu_{T-1}\bigr\rangle,$$

which is equivalent to:

$$\min_{\phi_{T-1}}\ \int_{\mathcal{X}_{T-1}} \mathbb{E}\Bigl( L_{T-1}\bigl(x,\phi_{T-1}(x),\boldsymbol{w}_{T}\bigr) + K\bigl(f_{T-1}\bigl(x,\phi_{T-1}(x),\boldsymbol{w}_{T}\bigr)\bigr)\Bigr)\,\mathrm{d}\mu_{T-1}(x).$$

The minimization can in fact be performed pointwise, for each state value *x*. Indeed, let \(g:\,\mathcal{X}_{T-1}\times\mathcal{U}_{T-1}\rightarrow\mathbb{R}\) be defined as:

$$g(x,u) = \mathbb{E}\bigl( L_{T-1}(x,u,\boldsymbol{w}_{T}) + K\bigl(f_{T-1}(x,u,\boldsymbol{w}_{T})\bigr)\bigr),$$

the problem being to minimize \(\int_{\mathcal{X}_{T-1}} g\bigl(x,\phi_{T-1}(x)\bigr)\,\mathrm{d}\mu_{T-1}(x)\) over *ϕ*_{T−1}, a (measurable) function defined on \(\mathcal{X}_{T-1}\). The interchange theorem (Rockafellar and Wets 1998, Theorem 14.60) applies:^{6}

$$\min_{\phi_{T-1}} \int_{\mathcal{X}_{T-1}} g\bigl(x,\phi_{T-1}(x)\bigr)\,\mathrm{d}\mu_{T-1}(x) = \int_{\mathcal{X}_{T-1}} \min_{u\in\mathcal{U}_{T-1}} g(x,u)\,\mathrm{d}\mu_{T-1}(x).$$

We are thus in the case of Example 1 for every *x*. Therefore, the minimizer does not depend on *μ*_{T−1}. The same argument applies recursively to every time step before *T*−1 so that, at time *t*_{0}, the initial condition \(\mu_{t_{0}}\) only influences the optimal cost of the problem, but not the argument of the minimum itself (here, the feedback laws \(\phi_{t_{0},t}^{*}\)).
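
The following finite-space sketch (with arbitrary data) illustrates this interchange step: minimizing over all feedback maps and integrating the pointwise minimum give the same value, and the pointwise argmin involves no *μ*.

```python
# Interchange of minimization and integration on a finite space: brute force
# over all card(U)^card(X) feedback maps equals the pointwise minimization.
from itertools import product
import numpy as np

rng = np.random.default_rng(1)
nx, nu = 5, 3
g = rng.random((nx, nu))                    # g(x, u) as defined above, assumed
mu = rng.random(nx)
mu /= mu.sum()                              # law mu_{T-1}

pointwise = g.argmin(axis=1)                # argmin_u g(x, u): mu-free
lhs = (g[np.arange(nx), pointwise] * mu).sum()

rhs = min((g[np.arange(nx), list(phi)] * mu).sum()
          for phi in product(range(nu), repeat=nx))
assert np.isclose(lhs, rhs)
```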

Hence, following Lemma 1, Problems (5a)–(5d) are naturally dynamically consistent when strategies are searched for as feedback functions on \(\boldsymbol{x}_{t}\) only. It thus appears that the rather general class of stochastic optimal control problems shaped as Problem (3a)–(3d) is in fact very specific. However, such a property does not remain true when adding new ingredients to the problem, as we show in the next section.

## 4 Stochastic optimal control with constraints

Let us now add to Problem (3a)–(3d) a probabilistic constraint involving the state variable at the final time *T*. For instance, we want the system to be in a certain state at the final time step with a given probability:

$$\mathbb{P}\bigl(\boldsymbol{x}_{T}\in A\bigr)\ \ge\ \pi,\qquad\text{i.e.}\qquad \mathbb{E}\bigl(\mathbf{1}_{A}(\boldsymbol{x}_{T})\bigr)\ \ge\ \pi,$$

where **1**_{A} refers to the indicator function of set *A*. Note however that chance constraints bring important theoretical and numerical difficulties, notably regarding connectedness and convexity of the feasible set of controls, even in the static case. The interested reader should refer to the work of Prékopa (1995), and to the handbook by Ruszczynski and Shapiro (2003, Chap. 5), for mathematical properties and numerical algorithms in Probabilistic Programming (see also Henrion 2002; Henrion and Strugarek 2008, for related studies). We do not discuss them here. The difficulty we are interested in is common to both chance and expectation constraints. This is why we concentrate in the sequel on adding to Problem (3a)–(3d) an expectation constraint of the form:

$$\mathbb{E}\bigl(g(\boldsymbol{x}_{T})\bigr)\ \le\ a.$$

Taking \(g=-\mathbf{1}_{A}\) and \(a=-\pi\) retrieves the chance constraint above, so that one may think of *a* as a level of probability that one wants to satisfy for a certain event at the final time step.

We now show that, when adding such an expectation constraint, the dynamic consistency property falls apart. More precisely, the sequence of SOC problems is no longer dynamically consistent when using the usual state variable. Nevertheless, we observe that the lack of consistency comes from an inappropriate choice of the state variable: by choosing the appropriate state variable, one regains dynamic consistency.

### 4.1 Problem setting

Let *a*∈ℝ and consider Problem (3a)–(3d) with the additional final expectation constraint:

$$\mathbb{E}\bigl(g(\boldsymbol{x}_{T})\bigr)\ \le\ a. \tag{9}$$

The subsequent problems starting at time *t*_{i}>*t*_{0} are naturally deduced from this problem, the level *a* of the expectation constraint remaining the same for every problem. Admittedly, this corresponds to a somewhat naive formulation of the family of optimization problems under consideration; such a choice is questionable, since the perception of the risk level *a* may evolve over time.

Suppose there exists a solution for the problem at *t*_{0}. As previously, we are looking for the optimal control at time *t* as a feedback function \(\phi_{t_{0}, t}^{*}\) depending on the realizations of the variable \(\boldsymbol{x}_{t}\). The first index *t*_{0} refers to the time step at which the problem is stated, while the second index *t* refers to the time step at which the decision is taken.

Let *μ*_{T} be the probability law of \(\boldsymbol{x}_{T}\). Constraint (9) can be written \(\langle g, \mu_{T}\rangle\leq a\), so that the equivalent distributed formulation of the initial time problem is:

$$\min_{\phi}\ \sum_{t=t_{0}}^{T-1}\bigl\langle \varLambda_{t}^{\phi_{t}},\mu_{t}\bigr\rangle + \langle K,\mu_{T}\rangle, \tag{10a}$$
$$\text{s.t.}\quad \mu_{t_{0}}\ \text{given},\qquad \mu_{t+1}=\bigl(\mathbb{T}_{t}^{\phi_{t}}\bigr)^{\star}\mu_{t}, \tag{10b}$$
$$\qquad\ \ \langle g,\mu_{T}\rangle \le a. \tag{10c}$$

When introducing the probability laws *μ*_{t}, the last constraint amounts to an additional, highly nonlinear term in the cost function, namely:

$$\chi_{\{\mu:\,\langle g,\mu\rangle\le a\}}(\mu_{T}),$$

where *χ*_{A} stands for the characteristic function^{7} of set *A*. The dynamics are still linear and the variables *μ*_{t} are still positive, but the objective function is not linear with respect to *μ*_{T} anymore, and therefore not linear with respect to the initial law \(\mu_{t_{0}}\) either. Hence there is no reason for feedback laws to be independent of the initial condition, as in the case without constraint presented in Sect. 3.

Moreover, the reasoning made for the problem initiated at time *t*_{0} remains true for the subsequent problems starting at times *t*_{i}: an observation \(x_{t_{i}}\) of the state variable \(\boldsymbol{x}_{t_{i}}\) becomes available before solving Problem (5a)–(5d) supplemented with constraint (9), so that its natural initial condition is in fact the Dirac measure:^{8}

$$\mu_{t_{i}} = \delta_{x_{t_{i}}}.$$

On the contrary, the problems formulated at times *t*_{i} will be dynamically consistent provided their initial conditions are given by the optimal Fokker–Planck equation:

$$\mu_{t+1}^{*} = \bigl(\mathbb{T}_{t}^{\phi_{t_{0},t}^{*}}\bigr)^{\star}\mu_{t}^{*},\qquad \mu_{t_{0}}^{*}=\mu_{t_{0}},$$

that is, by the probability law of \(\boldsymbol{x}_{t_{i}}\) as predicted at time *t*_{0}, rather than by the Dirac measure at the state actually observed at time *t*_{i}. As a conclusion, the sequence of subsequent optimization problems deduced from (10a)–(10c) for initial times *t*_{i}>*t*_{0} is not dynamically consistent as long as we consider feedback laws *ϕ*_{t} depending on \(\boldsymbol{x}_{t}\) only.

### Remark 2

(Joint probability constraints)

### 4.2 Back to dynamic consistency

We now show that dynamic consistency can be recovered provided we choose the right state variable on which to base decisions. We hence establish a link between dynamic consistency of a family of optimization problems and the notion of state variable.

Denote by *V*_{t}(*μ*_{t}) the optimal cost of the problem starting at time *t* with initial condition *μ*_{t}. Using the notations of the distributed formulation of a SOC problem, one can write a DP equation depending on the probability laws: for every probability law *μ* on \(\mathcal{X}\),

$$V_{T}(\mu) = \langle K,\mu\rangle + \chi_{\{\langle g,\mu\rangle\le a\}}(\mu),$$

and, for every *t*=*t*_{0},…,*T*−1 and every probability law *μ* on \(\mathcal{X}\):

$$V_{t}(\mu) = \min_{\phi_{t}}\ \bigl\langle \varLambda_{t}^{\phi_{t}},\mu\bigr\rangle + V_{t+1}\Bigl(\bigl(\mathbb{T}_{t}^{\phi_{t}}\bigr)^{\star}\mu\Bigr).$$

The context is similar to the one of the deterministic example of Sect. 2, and Lemma 3 states that solving the deterministic infinite-dimensional problem associated with the constrained problem leads to dynamic consistency, provided DP is used. For the problem under consideration, we thus obtain optimal feedback functions *ϕ*_{t} which depend on the probability laws *μ*_{t}. Otherwise stated, the family of constrained problems introduced in Sect. 4.1 is dynamically consistent provided one looks for strategies as feedback functions depending on both the realization of variable \(\boldsymbol{x}_{t}\) and the probability law of \(\boldsymbol{x}_{t}\).

Naturally, this DP equation is rather conceptual. The resolution of such an equation is intractable in practice since probability laws *μ*_{t} are infinite-dimensional objects.

## 5 A toy example with finite probabilities

We illustrate the previous developments on the management of a water reservoir whose level takes one of three possible values, \(\mathbb{X}=\{\underline{x},\widehat{x},\overline{x}\}\), which we identify with {0,1,2}. The control variable *u* takes values in \(\mathbb{U}=\{0,1\}\), the value 1 corresponding to using some given water release to produce some electricity. The noise variable affecting the reservoir (stochastic inflow/outflow) takes its values in \(\mathbb{W}=\{-1,0,1\}\). We suppose that Assumption 1 is fulfilled, the discrete probability law of each noise variable \(\boldsymbol{w}_{t}\) being characterized by the weights {*ϖ*_{−}, *ϖ*_{0}, *ϖ*_{+}}. Finally, the reservoir dynamics is of the form:

$$\boldsymbol{x}_{t+1} = \min\bigl(\max\bigl(\boldsymbol{x}_{t}-\boldsymbol{u}_{t}+\boldsymbol{w}_{t+1},\,\underline{x}\bigr),\,\overline{x}\bigr),$$

the level being truncated so as to remain within the bounds of \(\mathbb{X}\).

Let *μ*_{t} be the discrete probability law associated with the state random variable \(\boldsymbol{x}_{t}\), the initial probability law \(\mu_{t_{0}}\) being given. In such a discrete case, it is easy to compute the transition matrix *P*^{u} giving the Markov chain transitions for each possible value *u* of the control, with the following interpretation:

$$P^{u}_{ij} = \mathbb{P}\bigl(\boldsymbol{x}_{t+1}=j \mid \boldsymbol{x}_{t}=i,\ \boldsymbol{u}_{t}=u\bigr).$$

We obtain for the reservoir problem two 3×3 matrices *P*^{0} and *P*^{1}, the entries of each row being sums of the weights {*ϖ*_{−}, *ϖ*_{0}, *ϖ*_{+}} according to the truncation of the dynamics at the bounds.

A feedback strategy at time *t* is a function \(\phi_{t}:\mathbb{X}\rightarrow\mathbb{U}\). We denote by **Φ**_{t} the set of admissible feedbacks at time *t*. In our discrete case, \(\mathrm{card}(\boldsymbol{\varPhi}_{t}) =\mathrm{card}(\mathbb{U})^{\mathrm{card}(\mathbb{X})} =8\) for all *t*. The transition matrix \(P^{\phi_{t}}\) associated with such a feedback is obtained by properly selecting rows of the transition matrices *P*^{u}, namely:

$$P^{\phi_{t}}_{i\cdot} = P^{\phi_{t}(i)}_{i\cdot},\quad i=1,2,3,$$

where \(P^{u}_{i\cdot}\) denotes the *i*th row of matrix *P*^{u}. The associated Fokker–Planck equation reads:^{9}

$$\mu_{t+1} = \bigl(P^{\phi_{t}}\bigr)^{\top}\mu_{t},$$

the probability law *μ*_{t} involved in this dynamic equation being a three-dimensional (column) vector.

The gain at time *t* is supposed to be linear, equal to \(p_{t}\boldsymbol{u}_{t}\), *p*_{t} being a (negative) deterministic price. There is no final cost (*K*≡0), but the reservoir level at final time *T* must be equal to \(\overline{x}\) with a probability level at least equal to *π*. The distributed formulation (10a)–(10c) associated with the reservoir control problem is thus:

$$\min_{\phi}\ \sum_{t=t_{0}}^{T-1}\bigl\langle \varLambda_{t}^{\phi_{t}},\mu_{t}\bigr\rangle, \tag{11a}$$
$$\text{s.t.}\quad \mu_{t_{0}}\ \text{given},\qquad \mu_{t+1}=\bigl(P^{\phi_{t}}\bigr)^{\top}\mu_{t}, \tag{11b}$$
$$\qquad\ \ \mu_{3,T} \ge \pi, \tag{11c}$$

with \(\varLambda_{t}^{\phi_{t}}(i)=p_{t}\,\phi_{t}(i)\), the last constraint expressing that the third component of *μ*_{T} (the one associated with the level \(\overline{x}\)) has to be greater than or equal to *π*. Here, 〈⋅,⋅〉 denotes the standard scalar product in ℝ^{3}.

Problem (11a)–(11c) can be solved using dynamic programming: the state equation is three-dimensional,^{10} so that the computational burden of such a resolution remains achievable. Suppose now that the number of discrete reservoir levels is *n* (rather than 3), with *n* large: the resolution of the distributed formulation suffers from the curse of dimensionality ((*n*−1)-dimensional state), the ultimate curse being to consider that the level takes values in the interval \([\,\underline{x},\overline{x}\,]\), so that the *μ*_{t}’s are continuous probability laws.
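
To make the toy problem concrete, here is a small numerical sketch. The truncated dynamics, the noise weights, the price, the horizon, the probability level and the initial law are all illustrative assumptions, not values from the paper. Since the distributed problem is a deterministic control problem on *μ* with finitely many feedbacks per stage, the sketch simply enumerates the \(8^{T-t_{0}}\) sequences of feedback maps.

```python
# Numerical sketch of the reservoir problem (11a)-(11c), under the assumed
# truncated dynamics x_{t+1} = clip(x_t - u_t + w_{t+1}) on levels {0, 1, 2}.
import itertools
import numpy as np

levels, controls, T = [0, 1, 2], [0, 1], 5   # horizon T - t0 = 5, assumed
noise = {-1: 0.3, 0: 0.4, 1: 0.3}            # weights {w_-, w_0, w_+}, assumed
price, pi_level = -1.0, 0.3                  # negative price and pi, assumed

def transition_matrix(u):
    """P^u with P^u[i, j] = P(x_{t+1} = j | x_t = i, u_t = u)."""
    P = np.zeros((3, 3))
    for i in levels:
        for w, prob in noise.items():
            j = min(max(i - u + w, 0), 2)    # truncation at the bounds
            P[i, j] += prob
    return P

P = {u: transition_matrix(u) for u in controls}
feedbacks = list(itertools.product(controls, repeat=3))  # card(Phi_t) = 8

def step(mu, phi):
    """Fokker-Planck step mu_{t+1} = (P^phi)^T mu_t and cost <Lambda^phi, mu>."""
    P_phi = np.array([P[phi[i]][i, :] for i in levels])  # select rows by phi
    cost = price * sum(phi[i] * mu[i] for i in levels)
    return P_phi.T @ mu, cost

mu0 = np.array([0.0, 1.0, 0.0])              # start half-full, assumed
best = None
for plan in itertools.product(feedbacks, repeat=T):      # 8^5 plans
    mu, total = mu0, 0.0
    for phi in plan:
        mu, c = step(mu, phi)
        total += c
    if mu[2] >= pi_level and (best is None or total < best[0]):
        best = (total, plan, mu)

print("optimal cost:", best[0], " P(x_T = full):", best[2][2])
```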

## 6 Conclusion

We informally introduced a notion of dynamic consistency for a sequence of decision-making problems, which basically requires that plans made at the very first time step remain optimal when the optimization problems are rewritten at subsequent time steps. We have shown that, for several classes of optimal control problems, this concept is not new and can be directly linked with the notion of state variable, which is the minimal information one must use to be able to take the optimal decision.

We show that, in general, feedback laws have to depend on the probability law of the usual state variable for Stochastic Optimal Control problems to be dynamically consistent. This is necessary, for example, when the model contains expectation or chance constraints.

Future work will focus on three main directions. The first is to better formalize the notion of state, in the vein of the works by Witsenhausen (1971, 1973) and Whittle (1982). The second is to establish the link with the literature on risk measures, in particular the work by Ruszczynski (2010). Finally, the DP equations we last introduced are in general intractable; in a forthcoming paper, we will provide a way to get back to a finite-dimensional information variable, which makes a resolution by DP tractable.

## Footnotes

1. Where \(t_{i}+1=t_{i+1}\).
2. The use of the terminology “state” is somewhat abusive until we make Assumption 1.
3. We here use the notations ∼ for “is distributed according to” and ⪯ for “is measurable with respect to”. We suppose that all measurability and integrability conditions are fulfilled, so that all stochastic optimization problems in the sequel are well defined.
4. If the noise variables are not independent, one has to include (at most) all the past values of the noise variables in order to obtain a valid state variable.
5. We do not aim at discussing technical details concerning spaces here. Following Witsenhausen (1973), a natural choice for *Ξ*_{t} is the Banach space of real bounded measurable functions with the supremum norm, *ϒ*_{t} being the Banach space of signed measures with the total variation norm. In the same vein, as far as integrability is concerned, we suppose that all the operators we introduce are well defined.
6. One needs several technical assumptions concerning spaces and measurability in order to use (Rockafellar and Wets 1998, Theorem 14.60). We do not intend to discuss them in this paper.
7. As defined in convex analysis: $$\chi_{A}(x) = \begin{cases} 0 & \text{if } x \in A, \\ +\infty & \text{otherwise}. \end{cases}$$
8. The initial law \(\mu_{t_{0}}\) in Problem (3a)–(3d) corresponds to the information available on \(\boldsymbol{x}_{t_{0}}\) *before* \(\boldsymbol{x}_{t_{0}}\) is observed, but it seems more reasonable in a practical situation to use all the available information when setting the problem again at each new initial time, and thus to use a Dirac function as the initial condition.
9. The dynamics of a controlled Markov chain is traditionally written \(\mu_{t+1} = \mu_{t} P^{\phi_{t}}\), *μ*_{t} being represented by a row vector. We rather use here a column vector (the transpose of the previous one) in order to have consistent notations throughout the paper.
10. In fact two-dimensional, because *μ*_{1,t}+*μ*_{2,t}+*μ*_{3,t}=1.

## Notes

### Acknowledgements

This study was made within the Systems and Optimization Working Group (SOWG), which is composed of Laetitia Andrieu, Kengy Barty, Pierre Carpentier, Jean-Philippe Chancelier, Guy Cohen, Anes Dallagi, Michel De Lara and Pierre Girardeau, and based at Université Paris-Est, CERMICS, Champs sur Marne, 77455 Marne la Vallée Cedex 2, France.

### References

- Artzner, P., Delbaen, F., Eber, J.-M., Heath, D., & Ku, H. (2007). Coherent multiperiod risk-adjusted values and Bellman’s principle. *Annals of Operations Research*, *152*(1), 5–22.
- Bellman, R. (1957). *Dynamic programming*. Princeton: Princeton University Press.
- Bertsekas, D. (2000). *Dynamic programming and optimal control* (2nd ed.). Nashua: Athena Scientific.
- Cheridito, P., Delbaen, F., & Kupper, M. (2006). Dynamic monetary risk measures for bounded discrete-time processes. *Electronic Journal of Probability*, *11*(3), 57–106.
- Detlefsen, K., & Scandolo, G. (2005). Conditional and dynamic convex risk measures. *Finance and Stochastics*, *9*(4), 539–561.
- Dreyfus, S. (2002). Richard Bellman on the birth of dynamic programming. *Operations Research*, *50*(1), 48–51.
- Ekeland, I., & Lazrak, A. (2006). Being serious about non-commitment: subgame perfect equilibrium in continuous time. arXiv:math.OC/0604264.
- Hammond, P. J. (1976). Changing tastes and coherent dynamic choice. *Review of Economic Studies*, *43*(1), 159–173.
- Henrion, R. (2002). On the connectedness of probabilistic constraint sets. *Journal of Optimization Theory and Applications*, *112*(3), 657–663.
- Henrion, R., & Strugarek, C. (2008). Convexity of chance constraints with independent random variables. *Computational Optimization and Applications*, *41*(2), 263–276.
- Kreps, D. M., & Porteus, E. L. (1978). Temporal resolution of uncertainty and dynamic choice theory. *Econometrica*, *46*(1), 185–200.
- Peleg, B., & Yaari, M. E. (1973). On the existence of a consistent course of action when tastes are changing. *Review of Economic Studies*, *40*(3), 391–401.
- Prékopa, A. (1995). *Stochastic programming*. Dordrecht: Kluwer Academic.
- Riedel, F. (2004). Dynamic coherent risk measures. *Stochastic Processes and Their Applications*, *112*(2), 185–200.
- Rockafellar, R. T., & Wets, R. J.-B. (1998). *Variational analysis*. Berlin: Springer.
- Ruszczynski, A. (2010). Risk-averse dynamic programming for Markov decision processes. *Mathematical Programming*, *125*(2), 235–261.
- Ruszczynski, A., & Shapiro, A. (Eds.) (2003). *Handbooks in operations research and management science: Vol. 10. Stochastic programming*. Amsterdam: Elsevier.
- Shapiro, A. (2009). On a time consistency concept in risk averse multistage stochastic programming. *Operations Research Letters*, *37*(3), 143–147.
- Strotz, R. H. (1955–1956). Myopia and inconsistency in dynamic utility maximization. *Review of Economic Studies*, *23*(3), 165–180.
- Whittle, P. (1982). *Optimization over time*. New York: Wiley.
- Witsenhausen, H. S. (1971). On information structures, feedback and causality. *SIAM Journal on Control*, *9*(2), 149–160.
- Witsenhausen, H. S. (1973). A standard form for sequential stochastic control. *Mathematical Systems Theory*, *7*(1), 5–11.