Policy iteration for Hamilton-Jacobi-Bellman equations with control constraints

Policy iteration is a widely used technique for solving the Hamilton-Jacobi-Bellman (HJB) equation, which arises in nonlinear optimal feedback control theory. Its convergence analysis has attracted much attention in the unconstrained case. Here we analyze the case with control constraints, both for the HJB equations which arise in deterministic and in stochastic control. The linear equations in each iteration step are solved by an implicit upwind scheme. Numerical experiments are conducted for the HJB equation with control constraints, and comparisons with the unconstrained case are shown.


1. Introduction
Stabilization is one of the major objectives of optimal control theory. Since an open loop control depends only on time and the initial state, the control unfortunately has to be recomputed whenever the initial state changes. In many physical situations we are therefore particularly interested in a control law which depends on the state and which can cope with additional external perturbations or model errors. When the dynamical system is linear with unconstrained control and the corresponding cost functional, or performance index, is quadratic, the associated closed loop, or feedback, control law which minimizes the cost functional is obtained from the so-called Riccati equation. When the dynamics are nonlinear, one popular approach is to linearize the dynamics, obtain the Riccati-based controller, and apply it to the original nonlinear system, see e.g. [33].
If the optimal feedback cannot be obtained by LQR theory, it can be approached via the verification theorem and the value function, which in turn is a solution of the Hamilton-Jacobi-Bellman (HJB) equation associated with the optimal control problem, see e.g. [21]. In most situations, however, it is very difficult to obtain the value function exactly, and thus one has to resort to iterative techniques. One possible approach utilizes the so-called value iteration. Here we are interested in a more efficient technique, known as policy iteration. The discrete counterpart of policy iteration is also known as Howard's algorithm [27]. Policy iteration can be interpreted as a Newton method applied to the HJB equation. Under policy iteration, the HJB equation thus reduces to a sequence of linearized HJB equations, which for historical reasons are called 'generalized HJB equations'. Policy iteration requires a stabilizing control as initialization. If such a control is not available, one can use a discounted path-following policy iteration, see e.g. [28]. The focus of the present paper is the analysis of the policy iteration in the presence of control constraints. To the authors' knowledge, such an analysis is not available in the literature.
Let us next mention, very selectively, some of the literature on the numerical solution of control-HJB equations. Specific comments on the constrained case are given further below. Finite difference methods and the vanishing viscosity method were developed in the work of Crandall and Lions [17]. Other notable techniques are finite difference methods [16], semi-Lagrangian schemes [4,19], the finite element method [26], filtered schemes [11], domain decomposition methods [14], and level set methods [36]. For an overview we refer to [20]. Policy iteration algorithms are developed in [32,37,8,9]. If the dynamics are given by a partial differential equation (PDE), the corresponding HJB equation becomes infinite dimensional. Applying a grid-based scheme to convert the PDE dynamics to ODE dynamics leads to a high-dimensional HJB equation. This phenomenon is known as the curse of dimensionality. The literature offers different techniques to tackle this situation. Related recent works include polynomial approximation [28], deep neural networks [31], tensor calculus [18], Taylor series expansions [13], and graph-tree structures [5].
Iteration in policy space for second order HJB PDEs arising in stochastic control is discussed in [40]. A tensor calculus technique is used in [39], where the associated HJB equation becomes linear after an exponential transformation with a scaling factor. For an overview of numerical approximations to stochastic HJB equations, we refer to [30]. To solve the continuous time stochastic control problem by the Markov chain approximation method (MCAM), the diffusion operator is approximated by a finite dimensional Markov decision process, which is then solved by the policy iteration algorithm. For the connection between the finite difference method and the MCAM approach for the second order HJB equation, see [12]. Let us next recall contributions for the case with control constraints. Regularity of the value function in the presence of control constraints has been discussed in [24,25] for linear dynamics and in [15] for nonlinear systems. In [34] non-quadratic penalty functionals were introduced to include control constraints approximately. Later, in [1], such functionals were discussed in the context of policy iteration for HJB equations.
Convergence of the policy iteration without constraints has been investigated in earlier work, see in particular [37,40]. In our work, the control constraints are realized exactly by means of a projection operator. This approach has been used earlier only for numerical purposes in the context of the value iteration, see [23] and [29]. In our work we prove the convergence of the policy iteration with control constraints for both the first and the second order HJB-PDEs. For numerical experiments we use an implicit upwind scheme as proposed in [2].
The rest of the paper is organized as follows. Section 2 provides a convergence analysis of the policy iteration for the first order HJB equation in the presence of control constraints. Section 3 establishes the corresponding convergence result for the second order HJB equation with control constraints. Numerical tests are presented in Section 4. Finally, concluding remarks are given in Section 5.

2. Nonlinear $H_2$ feedback control problem subject to a deterministic system

2.1. First order Hamilton-Jacobi-Bellman equation. We consider the following infinite horizon optimal control problem

$$\min_{u(\cdot)\in\mathcal U} J(x,u) := \int_0^\infty \big(\ell(y(t)) + \|u(t)\|_R^2\big)\,dt \tag{2.1}$$

subject to the nonlinear deterministic dynamical constraint

$$\dot y(t) = f(y(t)) + g(y(t))\,u(t), \qquad y(0) = x, \tag{2.2}$$

where $y(t) = (y_1(t),\dots,y_d(t))^t \in \mathbb{R}^d$ is the state vector, and $u(\cdot)\in\mathcal U$ is the control input with $\mathcal U = \{u : \mathbb{R}^+ \to U \subset \mathbb{R}^m\}$. Further, $\ell(y) > 0$, for $y \neq 0$, is the state running cost, and $\|u\|_R^2 = u^t R u$ represents the control cost, with $R \in \mathbb{R}^{m\times m}$, $R > 0$, a positive definite matrix. Throughout we assume that the dynamics $f$ and $g$, as well as $\ell$, are Lipschitz continuous on $\mathbb{R}^d$, and that $f(0) = 0$ and $\ell(0) = 0$. We also require that $g$ is globally bounded on $\mathbb{R}^d$. This set-up relates to our aim of asymptotic stabilization to the origin by means of the control $u$. We shall concentrate on the case where the initial conditions are chosen from a domain $\Omega \subset \mathbb{R}^d$ containing the origin in its interior.
The specificity of this work relates to controls which need to obey constraints $u(t) \in U$, where $U$ is a closed convex subset of $\mathbb{R}^m$ containing $0$. As a special case we mention bilateral point-wise constraints of the form

$$U = \{u \in \mathbb{R}^m : \alpha \le u \le \beta\}, \tag{2.3}$$

where $\alpha = (\alpha_1,\dots,\alpha_m) \in \mathbb{R}^m$, $\beta = (\beta_1,\dots,\beta_m) \in \mathbb{R}^m$, $\alpha \le 0$, $\beta \ge 0$, and the inequalities act coordinate-wise.
The optimal value function associated to (2.1)-(2.2) is given by

$$V(x) = \min_{u(\cdot)\in\mathcal U} J(x,u),$$

where $x \in \Omega$. Here and throughout the paper we assume that for every $x \in \Omega$ a solution to (2.1) exists. Thus, implicitly, we also assume the existence of a control $u \in L^2(0,\infty;\mathbb{R}^m)$ such that (2.2) admits a solution $y \in W^{1,2}(0,\infty;\mathbb{R}^d)$. If required by the context, we shall indicate the dependence of $y$ on $x \in \Omega$ or $u \in \mathcal U$ by writing $y(\cdot\,;x)$, respectively $y(\cdot\,;u)$. In case $V \in C^1(\mathbb{R}^d)$ it satisfies the Hamilton-Jacobi-Bellman (HJB) equation

$$\min_{u \in U}\big\{\nabla V(x)^t\big(f(x) + g(x)u\big) + \ell(x) + \|u\|_R^2\big\} = 0, \qquad V(0) = 0. \tag{2.4}$$

Otherwise, sufficient conditions which guarantee that $V$ is the unique viscosity solution to (2.4) are well investigated, see for instance [6, Chapter III].
In the unconstrained case, with $U = \mathbb{R}^m$, the value function is always differentiable (see [6, pg. 80]). If $\ell = x^t Q x$, with $Q$ a positive definite matrix, and the control dynamics are linear, of the form $f(y) + g(y)u = Ay + Bu$, the value function is differentiable provided that $U$ is a nonempty polyhedral set (finite horizon) or a closed convex set with $0 \in \operatorname{int} U$ (infinite horizon), see [24,25]. Sufficient conditions for (local) differentiability of the value function associated to finite horizon optimal control problems with nonlinear dynamics and control constraints have been obtained in [15]. For our analysis we make the following assumption.

Assumption 1. The value function satisfies $V \in C^1(\mathbb{R}^d)$. Moreover it is radially unbounded, i.e. $\lim_{\|x\|\to\infty} V(x) = \infty$.
With Assumption 1 holding, the verification theorem, see e.g. [22, Chapter 1], implies that an optimal control in feedback form is given as the minimizer in (2.4), and thus

$$u^*(x) = P_U\Big(-\tfrac12 R^{-1} g(x)^t \nabla V(x)\Big), \tag{2.5}$$

where $P_U$ is the orthogonal projection in the $R$-weighted inner product on $\mathbb{R}^m$ onto $U$, i.e. $P_U(v) = \operatorname{argmin}_{w \in U}\|w - v\|_R$.
Alternatively, $u^*$ can be expressed as

$$u^*(x) = R^{-1/2}\, P_{\bar U}\Big(-\tfrac12 R^{-1/2} g(x)^t \nabla V(x)\Big),$$

where $P_{\bar U}$ is the orthogonal projection in $\mathbb{R}^m$, with Euclidean inner product, onto $\bar U = R^{1/2} U$. For the particular case of (2.3) we have

$$u^*(x) = \min\Big(\beta,\ \max\big(\alpha,\ -\tfrac12 R^{-1} g(x)^t \nabla V(x)\big)\Big),$$

where the max and min operations act coordinate-wise. It is common practice to refer to the open loop control as in (2.1) and to the closed loop control as in (2.5) by the same letter. For the unconstrained case the control law reduces to $u^*(x) = -\tfrac12 R^{-1} g(x)^t \nabla V(x)$. Inserting the feedback law into (2.4) (where we now drop the superscript $*$), we obtain the equivalent form of the HJB equation

$$\nabla V(x)^t\big(f(x) + g(x)u(x)\big) + \ell(x) + \|u(x)\|_R^2 = 0, \qquad V(0) = 0, \tag{2.6}$$

with $u(x) = P_U\big(-\tfrac12 R^{-1} g(x)^t \nabla V(x)\big)$. We shall require the notion of generalized Hamilton-Jacobi-Bellman (GHJB) equations (see e.g. [37,7]), given as follows:

$$GHJB(V, \nabla V; u) := \nabla V(x)^t\big(f(x) + g(x)u(x)\big) + \ell(x) + \|u(x)\|_R^2 = 0, \qquad V(0) = 0. \tag{2.7}$$

Here $f$, $g$ and $u$ are considered as functions of $x \in \mathbb{R}^d$. If $(f'(0), g(0))$ is a stabilizable pair, then (2.8) admits a unique locally positive definite solution $V(x)$, which is locally analytic at the origin.
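For the box constraints (2.3) the projected feedback law is cheap to evaluate. A minimal sketch follows (the helper names are illustrative; a diagonal $R$ is assumed, so that the $R$-weighted projection reduces to the coordinate-wise clamp above):

```python
import numpy as np

def feedback(x, grad_V, g, R_inv_diag, alpha, beta):
    """Constrained feedback u*(x) = P_U(-1/2 R^{-1} g(x)^t grad V(x)), cf. (2.5).

    Assumes box constraints U = [alpha, beta] and a diagonal R, so that
    the R-weighted projection P_U reduces to a coordinate-wise clamp.
    """
    u_unc = -0.5 * R_inv_diag * (g(x).T @ grad_V(x))   # unconstrained minimizer
    return np.minimum(beta, np.maximum(alpha, u_unc))  # coordinate-wise projection
```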
For solving (2.6) we analyse the following iterative scheme, which is referred to as policy iteration or Howard's algorithm, see e.g. [37,4,7,10], where the case without control constraints is treated. We require the notion of admissible controls, a concept introduced in [32,7].
Definition 1. A feedback control $u : \mathbb{R}^d \to U$ is called admissible with respect to $\Omega$, denoted $u \in \mathcal A(\Omega)$, if (i) $u$ is continuous on $\mathbb{R}^d$, (ii) $u(0) = 0$, (iii) $\lim_{t\to\infty} y(t;u) = 0$ for every $x \in \Omega$, and (iv) $J(x,u) < \infty$ for every $x \in \Omega$.

Here $y(t;u)$ denotes the solution to (2.2), where $x \in \Omega$, with the control in feedback form $u(t) = u(y(t;u))$. As in (2.1), the value of the cost in (iv) associated to the closed loop control is

$$J(x,u) = \int_0^\infty \big(\ell(y(t;u)) + \|u(y(t;u))\|_R^2\big)\,dt.$$

In general we cannot guarantee that the controlled trajectories $t \mapsto y(t;u)$ remain in $\Omega$ for all $t$. For this reason we demand continuity and differentiability properties of $u$ and $V$ on all of $\mathbb{R}^d$. Under additional assumptions we could introduce a set $\tilde\Omega$ with $\Omega \subset \tilde\Omega \subset \mathbb{R}^d$ with the property that $\{y(t;u) : t \in [0,\infty),\ u \in \mathcal A(\Omega)\} \subset \tilde\Omega$, and demand the regularity properties of $u$ and $V$ on $\tilde\Omega$ only. We shall not pursue such a set-up here.
We are now prepared to present the algorithm.

Algorithm 1 Continuous policy iteration algorithm for first order HJB equation
Input: Let $u^{(0)} : \mathbb{R}^d \to U \subset \mathbb{R}^m$ be an admissible control law for the dynamics (2.2) and let $\epsilon > 0$ be a given tolerance.

For $i = 0, 1, 2, \dots$:

Solve for $V^{(i)}$ the generalized HJB equation

$$\nabla V^{(i)}(x)^t\big(f(x) + g(x)u^{(i)}(x)\big) + \ell(x) + \|u^{(i)}(x)\|_R^2 = 0, \qquad V^{(i)}(0) = 0. \tag{2.8}$$

Update the control:

$$u^{(i+1)}(x) = P_U\Big(-\tfrac12 R^{-1} g(x)^t \nabla V^{(i)}(x)\Big). \tag{2.9}$$

Stop if $\|V^{(i)} - V^{(i-1)}\|_{\infty,\Omega} \le \epsilon$.

End
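The loop structure of Algorithm 1 is simple; the work lies in the linear GHJB solve. The following Python skeleton makes this explicit, with user-supplied callables `solve_ghjb` and `update_control` standing in for a discretization of (2.8) and (2.9) (these names are illustrative; a concrete discretization is discussed in Section 4):

```python
import numpy as np

def policy_iteration(solve_ghjb, update_control, u0, tol=1e-8, max_iter=100):
    """Algorithm 1: policy iteration with projected control updates.

    solve_ghjb(u)     -- value function V on the grid solving the linear GHJB (2.8)
    update_control(V) -- projected update P_U(-1/2 R^{-1} g^t grad V), cf. (2.9)
    u0                -- admissible (stabilizing) initial control on the grid
    """
    u, V_old = u0, None
    for i in range(max_iter):
        V = solve_ghjb(u)              # policy evaluation: linear equation for V^(i)
        u = update_control(V)          # policy improvement with projection onto U
        if V_old is not None and np.max(np.abs(V - V_old)) <= tol:
            break                      # successive value functions agree to tolerance
        V_old = V
    return V, u
```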
Note that (2.8) can equivalently be expressed as $GHJB(V^{(i)}, \nabla V^{(i)}; u^{(i)}) = 0$, $V^{(i)}(0) = 0$.

Lemma 1. Assume that $u(\cdot)$ is an admissible feedback control with respect to $\Omega$. If there exists a function $V(\cdot\,;u) \in C^1(\mathbb{R}^d)$ satisfying

$$GHJB(V(\cdot\,;u), \nabla V(\cdot\,;u); u) = 0, \qquad V(0;u) = 0, \tag{2.10}$$

then $V(x;u) = J(x,u)$ for all $x \in \Omega$. Moreover the optimal feedback law $u^*$ from (2.5) is admissible and satisfies $0 < V(x;u^*) \le V(x;u)$ for all $x \in \Omega \setminus \{0\}$.

Proof. The proof takes advantage, in part, of the verification of an analogous result in the unconstrained case, see e.g. [37], where a finite horizon problem is treated. Let $x \in \Omega$ be arbitrary and fixed, and choose any $T > 0$. Then we have

$$V(y(T;u);u) - V(x;u) = \int_0^T \nabla V(y;u)^t\big(f(y) + g(y)u(y)\big)\,dt. \tag{2.11}$$

Since $\lim_{T\to\infty} y(T;u) = 0$ by (iii), and due to $V(0;u) = 0$, we can take the limit $T \to \infty$ in this equation to obtain

$$V(x;u) = -\int_0^\infty \nabla V(y;u)^t\big(f(y) + g(y)u(y)\big)\,dt, \tag{2.12}$$

where $y = y(t;u)$ on the right hand side of the above equation. Adding both sides of $J(x,u) = \int_0^\infty \big(\ell(y) + \|u\|_R^2\big)\,dt$ to the respective sides of (2.12), and using (2.10), we obtain $J(x,u) = V(x;u)$.

Let us next consider the optimal feedback law $u^*(x)$. We need to show that it is admissible in the sense of Definition 1. By Assumption 1 and (2.5) it is continuous on $\mathbb{R}^d$. Moreover $V(0) = 0$ and $V(x) > 0$ for all $0 \neq x \in \Omega$, thus $\nabla V(0) = 0$, and consequently $u^*(0) = 0$; here we also use that $0 \in U$. Thus (i) and (ii) of Definition 1 are satisfied for $u^*$, and (iv) follows from our general assumption that (2.1) has a solution for every $x \in \Omega$. To verify (iii), note that from (2.11), (2.5), and (2.6) we have for every $T > 0$

$$V(y(T;u^*)) - V(x) = -\int_0^T \big(\ell(y(t;u^*)) + \|u^*(y(t;u^*))\|_R^2\big)\,dt \le 0. \tag{2.13}$$

Thus $T \mapsto V(y(T;u^*))$ is strictly monotonically decreasing (unless $y(T;u^*) = 0$ for some $T$).
Suppose that $y(T;u^*)$ does not converge to $0$ as $T \to \infty$. Since $T \mapsto V(y(T;u^*))$ is decreasing and bounded from below, it converges to some $c$, and necessarily $c > 0$ in this case. Consider the set $S = \{z \in \mathbb{R}^d : c \le V(z) \le V(x)\}$. The radial unboundedness assumption on $V$ further implies that $S$ is bounded and thus compact. Let us set

$$\dot V(z) := \nabla V(z)^t\big(f(z) + g(z)u^*(z)\big).$$

Then by (2.6) we have $\dot V(z) = -\big(\ell(z) + \|u^*(z)\|_R^2\big) < 0$ for $z \neq 0$. By compactness of $S$, and since $0 \notin S$, we have $\max_{z\in S}\dot V(z) =: \zeta < 0$. Note that $\{y(T;u^*) : T \ge 0\} \subset S$. Hence by (2.13) we find $V(y(T;u^*)) \le V(x) + \zeta\,T \to -\infty$ as $T \to \infty$, which is impossible. Hence $\lim_{T\to\infty} y(T;u^*) = 0$ and $u^* \in \mathcal A(\Omega)$. Now we can apply the arguments from the first part of the proof with $u = u^*$ to obtain $V(x;u^*) = J(x,u^*)$, and by optimality $0 < V(x;u^*) \le V(x;u)$. This concludes the proof.

2.2. Convergence of policy iteration. The previous lemma establishes that the value $J(x,u)$, for a given admissible control $u$ and $x \in \Omega$, can be obtained as the evaluation of the solution of the GHJB equation (2.10). In the following lemma we commence the analysis of the convergence of Algorithm 1.
Lemma 2. Let $V^{(i)} \in C^1(\mathbb{R}^d)$ solve the GHJB equation (2.8) for an admissible control $u^{(i)}$, and let $u^{(i+1)}$ be given by (2.9). Then $u^{(i+1)}$ is admissible.

Proof. Let us recall that $u^{(i+1)}(y) = P_U\big(-\tfrac12 R^{-1} g(y)^t \nabla V^{(i)}(y)\big)$ for $y \in \mathbb{R}^d$, and set $y^{(i+1)} = y(u^{(i+1)})$. Then we find

$$\frac{d}{dt}\, V^{(i)}\big(y^{(i+1)}(t)\big) = \nabla V^{(i)}(y^{(i+1)})^t\big(f(y^{(i+1)}) + g(y^{(i+1)})\,u^{(i+1)}(y^{(i+1)})\big). \tag{2.14}$$

Throughout the following computation we do not indicate the dependence on $t$ on the right hand side of the equality. Next we need to rearrange the terms on the right hand side of the last expression. For this purpose it will be convenient to introduce $z = \tfrac12 R^{-1} g(y^{(i+1)})^t \nabla V^{(i)}(y^{(i+1)})$ and to observe that $u^{(i+1)} = P_U(-z)$. Expressing (2.14) in terms of $z$, and using (2.8) together with the variational inequality characterizing the projection $P_U$, we arrive at

$$\frac{d}{dt}\, V^{(i)}\big(y^{(i+1)}(t)\big) \le -\ell(y^{(i+1)}) - \|u^{(i+1)}\|_R^2 < 0 \quad \text{for } y^{(i+1)} \neq 0. \tag{2.15}$$

Hence $t \mapsto V^{(i)}(y^{(i+1)}(t))$ is strictly monotonically decreasing. As mentioned above, $V^{(i)}$ is positive definite. With the arguments as in the last part of the proof of Lemma 1 it follows that $\lim_{t\to\infty} y^{(i+1)}(t) = \lim_{t\to\infty} y(t;u^{(i+1)}) = 0$. Finally, (2.15) implies that

$$0 \le \int_0^\infty \big(\ell(y(t;u^{(i+1)})) + \|u^{(i+1)}\|_R^2\big)\,dt \le V^{(i)}(x) < \infty.$$

Since $x \in \Omega$ was chosen arbitrarily, it follows that $u^{(i+1)}$ defined in (2.9) is admissible.

Proposition 1. The sequence $\{V^{(i)}\}$ generated by Algorithm 1 is monotonically decreasing on $\Omega$ and converges pointwise to some $\bar V \ge V$.

Proof. Since for each $x \in \Omega$ the trajectory $y^{(i+1)}$ corresponding to $u^{(i+1)}$ and satisfying

$$\dot y^{(i+1)} = f(y^{(i+1)}) + g(y^{(i+1)})\,u^{(i+1)}(y^{(i+1)}), \qquad y^{(i+1)}(0) = x,$$

is asymptotically stable, the difference between $V^{(i+1)}(x)$ and $V^{(i)}(x)$ can be obtained as

$$V^{(i+1)}(x) - V^{(i)}(x) = -\int_0^\infty \frac{d}{dt}\Big(V^{(i+1)}\big(y^{(i+1)}(t)\big) - V^{(i)}\big(y^{(i+1)}(t)\big)\Big)\,dt,$$

where, after differentiation, $f$ and $g$ are evaluated at $y^{(i+1)}$. Utilizing the generalized HJB equation (2.8) for the iterates $i$ and $i+1$, the last two terms in the resulting integrand are those which appeared in (2.14) and were estimated in the subsequent steps. We can reuse this estimate and obtain $V^{(i+1)}(x) \le V^{(i)}(x)$ for all $x \in \Omega$. Hence $\{V^{(i)}\}$ is a monotonically decreasing sequence which is bounded below by the optimal value function $V$, see Lemma 1, and thus it converges pointwise to some $\bar V \ge V$.
To show convergence of $u^{(i)}$ to $\bar u := P_U\big(-\tfrac12 R^{-1} g(x)^t \nabla\bar V(x)\big)$, additional assumptions are needed. This is considered in the following proposition. In the literature on the unconstrained case one can find the statement that, based on Dini's theorem, the monotonically convergent sequence $\{V^{(i)}\}$ converges uniformly to $\bar V$ if $\bar\Omega$ is compact. This, however, only holds true once it is argued that $\bar V$ is continuous. For the following it will be useful to recall that $C^m(\bar\Omega)$, $m \in \mathbb{N}_0$, consists of all functions $\phi \in C^m(\Omega)$ such that $D^\alpha\phi$ is bounded and uniformly continuous on $\Omega$ for all multi-indices $\alpha$ with $0 \le |\alpha| \le m$, see e.g. [3, pg. 10].
Proposition 2. Assume that $\Omega$ is bounded, and that the dynamics (2.2) are linear, i.e.

$$\dot y = Ay + Bu, \tag{2.20}$$

with the spectrum of $A$ strictly contained in the left half plane. Then $\bar V \in C^1(\bar\Omega)$, $V^{(i)} \to \bar V$ in $C^1(\bar\Omega)$, and $u^{(i)} \to \bar u$ uniformly on $\Omega$.

Proof. We denote by $y^{(i)}(x)$ the solution to (2.20) with initial condition $y(0) = x$ and control $u^{(i)}$ as in (2.9). In particular the controls $u^{(i)}$ are uniformly bounded; i.e. there exists $M > 0$ such that $|u^{(i)}(t)| \le M$ for all $t \in [0,\infty)$ and $i = 1, 2, \dots$. Let us first show that the $V^{(i)}$ are equicontinuous on $\Omega$. Let $x_1, x_2 \in \Omega$ and observe that $y^{(i)}(x_j) = e^{At}x_j + \int_0^t e^{A(t-s)}Bu^{(i)}(s)\,ds$ for $j = 1, 2$. Since the spectrum of $A$ is strictly in the left half plane, there exist $C > 0$ and $w > 0$ such that $\|e^{At}\| \le Ce^{-wt}$ for all $t > 0$. For every $t > 0$ this yields $|y^{(i)}(x_1)(t) - y^{(i)}(x_2)(t)| \le Ce^{-wt}|x_1 - x_2|$.

Consequently we obtain an estimate of $|V^{(i)}(x_1) - V^{(i)}(x_2)|$ in terms of $|x_1 - x_2|$, uniformly in $i$, from which the equicontinuity of $\{V^{(i)}\}$ on $\Omega$ follows. Since $V^{(i)}$ is bounded on $\Omega$ for each $i$ and each $V^{(i)}$ is uniformly continuous on $\Omega$, we have $V^{(i)} \in C(\bar\Omega)$. Thus, by the Arzelà-Ascoli theorem and the pointwise convergence $V^{(i)} \to \bar V$, we have uniform convergence of $V^{(i)}$ to $\bar V$ and $\bar V \in C(\bar\Omega)$.

An analogous computation for the gradients yields an estimate of $|\nabla V^{(i)}(x) - \nabla V^{(i)}(\bar x)|$ in terms of $|x - \bar x|$, uniformly in $i$, and the desired equicontinuity of $\{\nabla V^{(i)}\}$ on $\Omega$ follows.

It remains to verify that $\bar V \in C^1(\bar\Omega)$. We observe that $\{\nabla V^{(i)}\}$ is equibounded on $\Omega$; clearly $\{V^{(i)}\}$ is also equibounded on $\Omega$. Thus we can apply the Arzelà-Ascoli theorem to $\{V^{(i)}\}$ in $C^1(\bar\Omega)$. It follows that there exist a subsequence $\{V^{(i_j)}\}$, a function $\tilde V$, and some $G \in C(\bar\Omega;\mathbb{R}^d)$ such that $V^{(i_j)} \to \tilde V$ in $C(\bar\Omega)$ and $\nabla V^{(i_j)} \to G$ in $C(\bar\Omega;\mathbb{R}^d)$. By uniqueness of the limit we have $\tilde V = \bar V$. Moreover $G = \nabla\bar V$. In fact, by equicontinuity of $\{\nabla V^{(i)}\}$ on $\Omega$, there exists for every $\epsilon > 0$ and $x \in \Omega$ some $\delta > 0$ such that $|\nabla V^{(i)}(x) - \nabla V^{(i)}(y)| \le \epsilon$ for all $y \in \Omega$ with $|x - y| \le \delta$ and all $i$. Now let $x \in \Omega$ be arbitrary. Since $\Omega$ is open there exists $\delta_1 \in (0,\delta]$ such that $\{x + t(y - x) : t \in [0,1]\} \subset \Omega$ for all $y$ with $|y - x| < \delta_1$. For this $x$, all $y$ with $|y - x| \le \delta_1$, and all $i$ we have

$$\big|V^{(i)}(y) - V^{(i)}(x) - \nabla V^{(i)}(x)^t(y - x)\big| \le \epsilon\,|y - x|.$$

Taking the limit $i_j \to \infty$ this implies

$$\big|\bar V(y) - \bar V(x) - G(x)^t(y - x)\big| \le \epsilon\,|y - x|.$$

Therefore $\nabla\bar V(x) = G(x)$ at each $x \in \Omega$, and $\bar V \in C^1(\bar\Omega)$ as desired. Now, since $\{V^{(i)}\}$ admits a unique limit in $C(\bar\Omega)$, every $C^1$-convergent subsequence of $\{V^{(i)}\}$ converges to the same limit $\bar V$ in $C^1(\bar\Omega)$, and thus the whole sequence converges to $\bar V$ in $C^1(\bar\Omega)$.

Finally we address the uniform convergence of the controls. Since $\Omega$ is bounded, $\bar\Omega$ is compact; hence, by Dini's theorem and the continuity of $\bar V$ established above, the monotone sequence $\{V^{(i)}\}$ converges to $\bar V$ uniformly on $\bar\Omega$. We can now choose $\bar i > 0$ such that (2.17) holds for all $i \ge \bar i$. The terms $I_1$ and $I_3$ are then estimated by using (2.17), and $I_2$ by using (2.18); combining these estimates, and since $x \in \Omega$ was arbitrary, it follows from (2.9) that $u^{(i)}$ converges uniformly to $\bar u$ on $\Omega$.

3. Nonlinear $H_2$ control subject to a stochastic system

3.1. Second order Hamilton-Jacobi-Bellman equation. Here we consider the stochastic infinite horizon optimal control problem

$$\min_{u(\cdot)\in\mathcal U} J(x,u) := \mathbb{E}\int_0^\infty \big(\ell(y(t)) + \|u(t)\|_R^2\big)\,dt \tag{3.1}$$

subject to the nonlinear stochastic dynamical constraint

$$dy(t) = \big(f(y(t)) + g(y(t))\,u(t)\big)\,dt + g_1(y(t))\,dW(t), \qquad y(0) = x, \tag{3.2}$$

where $y(t) \in \mathbb{R}^d$ is the state vector, $W(t) \in \mathbb{R}^k$ is a standard multi-dimensional separable Wiener process defined on a complete probability space, and the control input $u(\cdot) \in \mathcal U$ is a process adapted to the natural filtration. The functions $f$, $g$, and $g_1$ are assumed to be Lipschitz continuous on $\mathbb{R}^d$, and satisfy $f(0) = 0$, $g(0) = 0$ and $g_1(0) = 0$. For details about existence and uniqueness for (3.2), we refer to e.g. [35, Chapter 2]. As in the previous section, $\Omega$ denotes a domain in $\mathbb{R}^d$ containing the origin.
If sufficiently smooth, the value function of (3.1)-(3.2) satisfies the second order HJB equation

$$\min_{u\in U}\Big\{\nabla V(x)^t\big(f(x) + g(x)u\big) + \tfrac12\operatorname{tr}\big(g_1(x)\,g_1(x)^t\,\nabla^2 V(x)\big) + \ell(x) + \|u\|_R^2\Big\} = 0, \qquad V(0) = 0. \tag{3.3}$$

From the verification theorem [22, Theorem 3.1], the explicit minimizer $u^*$ of (3.3) is given by

$$u^*(x) = P_U\Big(-\tfrac12 R^{-1} g(x)^t \nabla V(x)\Big), \tag{3.4}$$

where the projection $P_U$ is defined as in (2.5). The infinitesimal generator of the stochastic system (3.2) for the optimal pair $(u, V)$ (dropping the superscript $*$) is denoted by

$$\mathcal L_u V(x) = \nabla V(x)^t\big(f(x) + g(x)u(x)\big) + \tfrac12\operatorname{tr}\big(g_1(x)\,g_1(x)^t\,\nabla^2 V(x)\big).$$

Using the optimal control law (3.4) in (3.3), we obtain the equivalent form of the HJB equation

$$\mathcal L_u V(x) + \ell(x) + \|u(x)\|_R^2 = 0, \qquad V(0) = 0, \quad u(x) = P_U\big(-\tfrac12 R^{-1} g(x)^t \nabla V(x)\big). \tag{3.5}$$

For unconstrained controls the feedback law reduces to $u(x) = -\tfrac12 R^{-1} g(x)^t \nabla V(x)$ and the above HJB equation becomes

$$\nabla V^t f + \tfrac12\operatorname{tr}\big(g_1 g_1^t \nabla^2 V\big) + \ell - \tfrac14\,\nabla V^t g R^{-1} g^t \nabla V = 0.$$

The associated generalized second order GHJB equation is of the form

$$GHJB(V, \nabla V, \nabla^2 V; u) := \mathcal L_u V + \ell + \|u\|_R^2 = 0, \qquad V(0) = 0. \tag{3.8}$$

We next define the admissible feedback controls in the stochastic set-up.

Definition 2. A feedback control $u : \mathbb{R}^d \to U$ is called admissible with respect to $\Omega$, denoted $u \in \mathcal A(\Omega)$, if (i) $u$ is continuous on $\mathbb{R}^d$, (ii) $u(0) = 0$, (iii) $P\big(\lim_{t\to\infty} y(t;u) = 0\big) = 1$ for every $x \in \Omega$, and (iv) $J(x,u) < \infty$ for every $x \in \Omega$.
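For illustration, closed loop trajectories of (3.2) under the projected feedback (3.4) can be simulated with the Euler-Maruyama method. A minimal sketch for scalar state and control follows (the callables `f`, `g`, `g1` and `feedback` are placeholders for a concrete model and a computed feedback law):

```python
import numpy as np

def simulate_sde(x0, f, g, g1, feedback, T=10.0, dt=1e-3, seed=0):
    """Euler-Maruyama simulation of dy = (f(y) + g(y)u) dt + g1(y) dW,
    cf. (3.2), with the projected feedback u = feedback(y), cf. (3.4)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    y = np.empty(n + 1)
    y[0] = x0
    for k in range(n):
        u = feedback(y[k])                  # u = P_U(-1/2 R^{-1} g^t grad V)
        dW = rng.normal(scale=np.sqrt(dt))  # Brownian increment over [t, t+dt]
        y[k + 1] = y[k] + (f(y[k]) + g(y[k]) * u) * dt + g1(y[k]) * dW
    return y
```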
In Algorithm 2, the policy iteration for the second order HJB equation is documented, see [40].

Algorithm 2 Policy iteration algorithm for the second order HJB equation
Input: Let $u^{(0)}$ be an initial stabilizing control law in the stability region $\Omega$ for system (3.2) and let $\epsilon > 0$ be a given tolerance.

For $i = 0, 1, 2, \dots$:

Solve for $V^{(i)}$ the second order generalized HJB equation

$$\mathcal L_{u^{(i)}} V^{(i)}(x) + \ell(x) + \|u^{(i)}(x)\|_R^2 = 0, \qquad V^{(i)}(0) = 0.$$

Update the control:

$$u^{(i+1)}(x) = P_U\Big(-\tfrac12 R^{-1} g(x)^t \nabla V^{(i)}(x)\Big).$$

Stop if $\|V^{(i)} - V^{(i-1)}\|_{\infty,\Omega} \le \epsilon$.

End

Lemma 3. Assume that $u(\cdot)$ is an admissible feedback control in $\Omega$. If there exists a function $V(\cdot\,;u) \in C^2(\mathbb{R}^d)$ satisfying

$$GHJB(V(\cdot\,;u), \nabla V(\cdot\,;u), \nabla^2 V(\cdot\,;u); u) = 0, \qquad V(0;u) = 0,$$

then $V(x;u)$ is the value function of the system (3.2) and $V(x;u) = J(x,u)$ for all $x \in \Omega$. Moreover, the optimal control law $u^*(x)$ is admissible, and the optimal value function $V(x;u^*)$, if it exists and is radially unbounded, satisfies $V(x;u^*) = J(x,u^*)$ and $0 < V(x;u^*) \le V(x;u)$.
Proof. Applying Itô's formula to $t \mapsto V(y(t;u);u)$ along (3.2) on $[0,T]$, taking expectations, and letting $T \to \infty$, the above GHJB equation yields $V(x;u) = J(x,u)$. Concerning the admissibility of $u^*$, properties (i), (ii) and (iv) in Definition 2 can be shown similarly as in Lemma 1. Since the stochastic Lyapunov functional $V$ is radially unbounded by assumption and its differential generator satisfies $\mathcal L_{u^*} V(x) < 0$ for $x \neq 0$, one can follow the standard arguments given in [35, Theorems 2.3-2.4] to obtain $P\big(\lim_{t\to\infty} y(t;u^*) = 0\big) = 1$ for all $x \in \Omega$, and hence $u^* \in \mathcal A(\Omega)$.

3.2. Convergence of policy iteration. Here we turn to the convergence analysis of the iterates $V^{(i)}$.
Using that the closed loop system $dy = \big(f(y) + g(y)u^{(i)}\big)\,dt + g_1(y)\,dW$ is asymptotically stable for all $i$, we apply Itô's formula along the trajectory

$$dy = \big(f(y) + g(y)u^{(i+1)}\big)\,dt + g_1(y)\,dW \tag{3.14}$$

and use the generalized HJB equation (3.8) for two consecutive iterates $i$ and $i+1$. The resulting difference $V^{(i+1)}(x) - V^{(i)}(x)$ can be calculated as in Proposition 1, and we obtain that $\{V^{(i)}\}$ is monotonically decreasing and converges pointwise to some $\bar V \ge V$. Convergence of the derivatives of $V^{(i)}$ can be argued as for the first derivatives in the proof of Proposition 2. We can now pass to the limit $i \to \infty$ in (3.8) to obtain that $\bar V$ satisfies the HJB equation (3.5). Concerning the uniqueness of the value function, $\bar V = V$, we refer to e.g. [22, pg. 247].

4. Numerical examples
Here we conduct numerical experiments to demonstrate the feasibility of the policy iteration in the presence of constraints and to compare the solutions of the constrained and unconstrained formulations. To describe a setup which is independent of a stabilizing feedback, we also introduce a discount factor $\lambda > 0$. Unless specified otherwise we choose $\lambda = 0.05$, and we also give results for $\lambda = 0$.
4.1. Test 1: One dimensional linear equation. We consider the discounted infinite horizon problem, subject first to a deterministic dynamics with control constraint, and subsequently to the corresponding stochastic system with control constraint, where $\lambda > 0$ is the discount factor. We solve the GHJB equation (3.8) by combining an implicit method with an upwind scheme over $\Omega = (-2, 2)$ as follows:

$$\frac{V^{n+1}_i - V^n_i}{dt} = \big(f(x_i) + g(x_i)u^n_i\big)\,D V^{n+1}_i + \tfrac12\,g_1(x_i)^2\,D^2 V^{n+1}_i - \lambda V^{n+1}_i + \ell(x_i) + \|u^n_i\|_R^2, \tag{4.3}$$

where the superscript $n$ stands for the iteration loop, the subscript $i$ stands for the mesh point numbering in the state space ($V_i \approx V(x_i)$ for $i = 1,\dots,I$, the total number of grid points), $dt$ is the time step, $\Delta x = x_{i+1} - x_i$ is the distance between equi-spaced grid points, and $D^2 V^n_i = (V^n_{i+1} - 2V^n_i + V^n_{i-1})/\Delta x^2$ is the approximation of the second order derivative of the value function. To solve the first order HJB (2.8), we take $g_1 = 0$ in (4.3). For the upwind scheme, $D V^{n+1}_i$ is the forward difference $(V^{n+1}_{i+1} - V^{n+1}_i)/\Delta x$ on the set where the drift $f(x_i) + g(x_i)u^n_i$ is positive, and the backward difference $(V^{n+1}_i - V^{n+1}_{i-1})/\Delta x$ where it is negative; the switch is realized with the characteristic function $\mathbb{1}$. The updated control policy for iteration $n+1$ becomes $u^{n+1}_i = P_U\big(-\tfrac12 R^{-1} g(x_i)\,D V^{n+1}_i\big)$. Note that since we solve the HJB backward in time, it is necessary to construct the upwind scheme as above, which is reversed compared to the form of an upwind scheme for dynamics forward in time. For more details on upwind schemes for HJB equations, see [2], including its appendix.
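A possible realization of one implicit upwind step for (4.3) is sketched below (one spatial dimension, scalar control; the boundary rows simply keep the boundary values fixed, which is one of several reasonable choices and not prescribed by the description above):

```python
import numpy as np

def ghjb_step(V, u, x, f, g, g1, ell, R, lam, dt):
    """One implicit upwind step for the discretized GHJB (4.3) on a uniform grid x.

    V, u          -- current value function and control on the grid
    f, g, g1, ell -- callables evaluated at grid points (vectorized)
    Solves (1/dt + lam - a*D_up - 0.5*g1^2*D2) V_new = V/dt + ell + R*u^2.
    """
    n = len(x)
    dx = x[1] - x[0]
    a = f(x) + g(x) * u                       # drift of the closed loop system
    A = np.zeros((n, n))
    b = V / dt + ell(x) + R * u**2
    for i in range(1, n - 1):
        A[i, i] = 1.0 / dt + lam
        if a[i] > 0:                          # forward difference (reversed upwinding)
            A[i, i]     += a[i] / dx
            A[i, i + 1] -= a[i] / dx
        else:                                 # backward difference
            A[i, i]     -= a[i] / dx
            A[i, i - 1] += a[i] / dx
        d = 0.5 * g1(x[i])**2 / dx**2         # diffusion coefficient of (4.3)
        A[i, i - 1] -= d
        A[i, i]     += 2.0 * d
        A[i, i + 1] -= d
    A[0, 0] = A[-1, -1] = 1.0                 # keep boundary values fixed
    b[0], b[-1] = V[0], V[-1]
    return np.linalg.solve(A, b)
```

The resulting system matrix is diagonally dominant with non-positive off-diagonal entries, so the implicit step remains stable even for the large time step used below.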
We choose $R = 0.1$, $dt = 2$, $I = 400$, and two different initial conditions $x_0 = -1.8$ and $x_0 = 1$. Figure 1(i) depicts the value functions in the unconstrained and the constrained case. As expected they are convex. Outside $[-1, 1]$ the two value functions differ significantly. In the constrained case the value function tends to infinity at $\pm 2$, indicating that for such initial conditions the constrained control cannot stabilize anymore. Figure 1(ii) shows the evolution of the states under the effect of the controls shown in Figure 1(iv). In the transient phase, the decay rate for the constrained control system is slower than in the unconstrained case, which is the expected behavior. The temporal behavior of the running cost along $y(t)$ is documented in Figure 1(iii). It is clear that the uncontrolled solution diverges, whereas $\|y(t)\|^2$ tends to zero for the cases with control, both constrained and unconstrained. The pointwise error for the HJB equation (2.6) is documented in Figure 1(v). Similar behavior is obtained for the stochastic dynamics. It is observed from Figure 2(i) that even with a small noise intensity ($g_1 = 0.005$) the effect of the Brownian motion is visible in the state cost of the controlled system. Figure 2(ii) plots the error for the value function over the iteration count.
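With the hypothetical helpers sketched above, the reported parameters translate into a driver along the following lines (illustrative only: the model functions `f`, `g`, `g1`, `ell` and the constraint bounds `alpha`, `beta` must be supplied by the concrete test problem; the bounds chosen here are placeholders):

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 400)       # I = 400 grid points on (-2, 2)
V = np.zeros_like(x)                  # initial guess for the value function
u = np.zeros_like(x)                  # initial (admissible) control
R, lam, dt = 0.1, 0.05, 2.0           # parameters reported for Test 1
alpha, beta = -0.5, 0.5               # hypothetical box constraint bounds
for n in range(200):                  # outer iteration of the scheme (4.3)
    V = ghjb_step(V, u, x, f, g, g1, ell, R, lam, dt)
    dV = np.gradient(V, x)            # grid approximation of V'
    u = np.minimum(beta, np.maximum(alpha, -0.5 * (1.0 / R) * g(x) * dV))
```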
We have also solved this problem with $\lambda = 0$. For this purpose we initialized the algorithm with the solution of the discounted problem with $\lambda = 0.05$. When $dt$ was further reduced, the algorithm converged. Moreover, the value functions for $\lambda = 0.05$ and $\lambda = 0$ are visually indistinguishable.
The next example focuses on the exclusion of the discount factor once a proper initialization is available; the initialization is obtained by solving the HJB equation with a discount factor.

4.2. Test 2: One dimensional nonlinear equation. We consider an infinite horizon problem of the above type, subject to a one dimensional nonlinear dynamics with control constraint.

4.3. Test 3: Linearized Lorenz system. Here we consider the problem subject to a 3D linear control system with $\sigma = 10$, $\rho = 1.1$, and $\beta = 8/3$. System (4.6)-(4.8) arises from linearization of the Lorenz system at $(0,0,0)$. For $\rho > 1$, the equilibrium solution $(0,0,0)$ is unstable [38, Chapter 1]. Figure 4(i) depicts the unconstrained and the constrained controls, where one component of the control constraint is active up to $t = 0.5$, and the other one up to $t = 1$. The first two components of the state of the uncontrolled solution diverge, while the third one converges, see the dotted curves in Figure 4(ii). For the unconstrained HJB control the state tends to zero with a faster decay rate than for the constrained one, see again Figure 4(ii). Figures 4(iii)-(iv) document the running and state costs, respectively. We next turn to the stochastic version of problem (4.5)-(4.8), with noise of intensity $0.05$ added to the second and third components. For the stochastic system the constrained and unconstrained states tend to zero with some oscillations.

4.4. Test 4: Lorenz system. For this test we choose the Lorenz system, which appears in weather prediction, for example. This is a nonlinear three dimensional system with control.

[Figure: i) Control, ii) State, iii) Running cost $x(t)^2 + y(t)^2 + z(t)^2 + \|u_1(t)\|_R^2 + \|u_2(t)\|_R^2$, iv) State cost $x(t)^2 + y(t)^2 + z(t)^2$.]
Similar numerical results were also obtained in the stochastic case.

5. Concluding remarks
We investigated the convergence of the policy iteration technique for stationary HJB equations which arise in the deterministic and stochastic control of dynamical systems in the presence of control constraints. The control constraints are realized as hard constraints by means of a projection operator, rather than approximately by, for example, a penalty technique. Numerical examples illustrate the feasibility of the approach and compare the behavior of the closed loop controls with and without control constraints. The algorithmic realization is based on an implicit upwind scheme.