On the Existence of Lipschitz Continuous Optimal Feedback Control

We consider an optimal control problem involving a nonlinear ODE with control, an integral cost functional, and a control constraint. Our main assumptions include a coercivity condition and the condition that the optimal control is an isolated solution of the variational inequality appearing in the first-order optimality condition. We show that the optimal open-loop control is Lipschitz continuous in time; moreover, we identify the dependence of the Lipschitz constant of the optimal control on the data of the problem. Then, we establish the existence of a Lipschitz continuous optimal feedback control. As an application, we study regularity properties of the optimal value function. A main tool for obtaining these results is the property of uniform strong metric regularity.


Introduction
In this paper, we consider an optimal control problem for a time-dependent nonlinear control system over a fixed time interval [0, T] with an integral cost functional. The set of feasible controls consists of all functions in L^∞ (the space of measurable and essentially bounded functions over [0, T]) with values in a given convex and closed set in R^m. We assume twice differentiability with respect to the state and the control of the functions involved in the problem and local Lipschitz continuity of these functions together with all their derivatives with respect to all arguments. We also assume the existence of a reference optimal solution. Since the reference optimal control is a function in L^∞, its values can be changed on a subset of [0, T] of Lebesgue measure zero without violating the optimality. In fact, the optimal control is a class of functions that differ from each other on a set of measure zero.
Our first task is to prove that, under an integral coercivity condition at the reference solution, we can select from the class of optimal controls a function which satisfies the first-order optimality condition for all t ∈ [0, T], instead of for almost every (a.e.) t ∈ [0, T]. Then, we show that under the coercivity condition this representative of the optimal control is Lipschitz continuous with respect to time t ∈ [0, T], provided that it is an isolated solution of the Hamiltonian variational inequality in the first-order optimality condition. Moreover, we establish that the Lipschitz constant of the optimal control depends only on two constants: the coercivity constant and a common Lipschitz constant of all functions defining the problem and of their first and second derivatives over a bounded set in the space of variables (time, state, control).
The integral coercivity condition is a rather standard assumption in optimal control; the specific condition we use here goes back to the work of Hager [7]. In contrast, the isolatedness condition was introduced only recently in [2, Definition 3.6] in the context of the so-called differential variational inequalities, with the aim to prevent different solution curves from crossing each other. The isolatedness assumption is automatically satisfied when the Hamiltonian has a unique minimizer for each t ∈ [0, T], e.g., when the Hamiltonian is strictly convex. In [2, Theorem 4.1], it was established that if an optimal control ū is an isolated solution of the Hamiltonian variational inequality and for each t ∈ [0, T] the mapping defining this variational inequality is strongly metrically regular at ū for 0, then the optimal control ū is Lipschitz continuous on [0, T]. We also mention the earlier work [4] in that direction for an optimal control problem with linear dynamics and a strongly convex cost, for which strong regularity holds automatically; in fact, only continuity of the optimal control is claimed there, but the Lipschitz continuity can be gleaned from the proof. We note that the coercivity condition implies strong metric regularity in the respective function spaces; see [2, Theorem 4.2].
Our next task is to prove the existence of a Lipschitz continuous optimal feedback control. We show that under the coercivity and isolatedness conditions for the optimal control, there exists an optimal feedback control (τ, ξ) → u*(τ, ξ) which is a Lipschitz continuous function; here (τ, ξ) represents the parametrizing pair of initial time and initial state.
Our third and last task is to show that the existence of a Lipschitz continuous optimal feedback control implies that the optimal value function (τ, ξ ) → V (τ, ξ) is differentiable with respect to ξ and its derivative is Lipschitz continuous.
An outline of the paper follows. In Section 2, we introduce the optimal control problem considered and set the stage for the further developments. Section 3 contains preliminary material showing in particular that the optimal control can be redefined on a set of measure zero so that the first-order optimality system holds for all t ∈ [0, T ]. Section 4 gives conditions for Lipschitz continuity in time of the optimal open-loop control while Section 5 is devoted to the existence of a Lipschitz continuous optimal feedback control. The last Section 6 applies the latter result to show Lipschitz differentiability of the value function.

The Optimal Control Problem
We consider the following optimal control problem:

minimize J(u) := g(x(T)) + ∫_0^T h(t, x(t), u(t)) dt (1)

subject to ẋ(t) = f(t, x(t), u(t)), x(0) = x_0, u(t) ∈ U for a.e. t ∈ [0, T], (2)

where the state x(t) ∈ R^n, the set U of feasible control values is a closed and convex subset of R^m, and the functions g : R^n → R, h : [0, T] × R^n × R^m → R and f : [0, T] × R^n × R^m → R^n define the cost and the dynamics. The final time T and the initial state x_0 are fixed. Throughout we assume that the function g is twice differentiable and its second derivative is locally Lipschitz continuous, the functions h(t, ·, ·) and f(t, ·, ·) are twice continuously differentiable (with respect to (x, u)), and these functions, together with all their derivatives, are locally Lipschitz continuous (with respect to (t, x, u)).
We also assume that problem (1)-(2) has a locally optimal solution (x̄, ū). The local optimality is understood in the following way: there exists a number e_0 > 0 such that for every feasible control u with ‖u − ū‖_∞ ≤ e_0 either there is no solution of (2) over [0, T] or such a solution exists and J(u) ≥ J(ū).
In this paper, we employ the standard function spaces L^∞, L^2, W^{1,∞}, W^{1,2}, all over [0, T]. Specifically, the space of controls u is L^∞, the space of measurable and essentially bounded functions. The state trajectory x is in W^{1,∞}, the space of Lipschitz continuous functions. For the controls we also use the space L^2 of measurable square-integrable functions, and for the state trajectory x the space W^{1,2} of functions x such that both x and its derivative ẋ are in L^2. Furthermore, for an element x of a metric space we denote by IB_a(x) (respectively, IB°_a(x)) the closed (respectively, open) ball centered at x with radius a.
Clearly, any feasible control u is actually a class of functions which differ from each other on a set of Lebesgue measure zero. We call any particular function from this class a representative and denote it in the same way, by u.
Introducing the Hamiltonian

H(t, x, u, λ) = h(t, x, u) + λ^⊤ f(t, x, u),

where ^⊤ denotes transposition, we employ the standard first-order necessary optimality condition (a consequence of the Pontryagin maximum principle) in the form used, e.g., in [7], according to which there exists a Lipschitz continuous function λ̄ : [0, T] → R^n such that the triple (x, u, λ) = (x̄, ū, λ̄) satisfies for a.e. t ∈ [0, T] the following optimality system:

ẋ(t) = f(t, x(t), u(t)), x(0) = x_0,
λ̇(t) = −H_x(t, x(t), u(t), λ(t))^⊤, λ(T) = g_x(x(T))^⊤, (3)
0 ∈ H_u(t, x(t), u(t), λ(t)) + N_U(u(t)),

where H_x denotes the derivative of H with respect to x, etc., and N_U is the normal cone mapping to the set U defined as

N_U(u) = {v ∈ R^m : v^⊤(w − u) ≤ 0 for all w ∈ U} if u ∈ U, and N_U(u) = ∅ if u ∉ U.

Next, we give the following long but important remark, which summarizes various observations that will be used later on.
Remark 1 It is a standard fact that under our assumptions there exist positive reals d_0 and d such that for every feasible control ũ with ‖ũ − ū‖_∞ ≤ d and for every ξ ∈ IB_{d_0}(x_0) there exists a unique solution x̃ of the differential equation

ẋ(t) = f(t, x(t), ũ(t)) for a.e. t ∈ [0, T], x(0) = ξ, (4)

which satisfies ‖x̃ − x̄‖_{W^{1,∞}} ≤ 1. Moreover, making d_0 and d smaller if necessary, we obtain that the (unique) solution λ̃ of the linear adjoint equation

λ̇(t) = −H_x(t, x̃(t), ũ(t), λ(t))^⊤, λ(T) = g_x(x̃(T))^⊤, (5)

satisfies ‖λ̃ − λ̄‖_{W^{1,∞}} ≤ 1. Without loss of generality, we assume that d ≤ 1 and d ≤ e_0, where e_0 appears in the definition of local optimality given in the beginning of this section. Since ū ∈ L^∞, there exists a compact set Ū such that ū(t) ∈ Ū for a.e. t ∈ [0, T]. Define the set

Δ := {(t, x, u) : t ∈ [0, T], |x − x̄(t)| ≤ 1, dist(u, Ū) ≤ 1},

and denote by L a common Lipschitz constant on Δ of the functions defining the problem and of their first and second derivatives. Further, denote by x̃ and λ̃, as above, the solutions of (4) and (5), respectively, and define the functions

H̄_u(t, u) := H_u(t, x̄(t), u, λ̄(t)), H̄_uu(t, u) := H_uu(t, x̄(t), u, λ̄(t)).

To shorten the notations, we skip arguments with "bar", shifting the "bar" to the functions, e.g., f̄_x(t) := f_x(t, x̄(t), ū(t)), ḡ_xx := g_xx(x̄(T)), etc. Define the matrices

A(t) := f̄_x(t), B(t) := f̄_u(t), W := ḡ_xx, Q(t) := H_xx(t, x̄(t), ū(t), λ̄(t)), S(t) := H_xu(t, x̄(t), ū(t), λ̄(t)), R(t) := H̄_uu(t, ū(t)).

Our first main assumption is the following:

- COERCIVITY: there exists a constant ρ > 0 such that

y(T)^⊤ W y(T) + ∫_0^T [ y(t)^⊤ Q(t) y(t) + 2 y(t)^⊤ S(t) w(t) + w(t)^⊤ R(t) w(t) ] dt ≥ ρ ‖w‖²_{L²} (6)

for every w ∈ L² with w(t) ∈ U − U for a.e. t ∈ [0, T], where y ∈ W^{1,2} is the solution of ẏ = A(t) y + B(t) w, y(0) = 0.

The coercivity condition was first used in [7] to show convergence of the multiplier method and later in [5] to establish Lipschitz stability as well as convergence of discrete approximations in optimal control. It can be viewed as a strong second-order sufficient condition in optimal control. Checking this condition very much depends on the specific problem at hand; sometimes it is enforced numerically by adding penalty terms to the cost. The coercivity condition has also been used for a posteriori numerical verification of optimality after an approximate solution is found.
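When U − U spans R^m (for instance, when U is a box with nonempty interior), the coercivity condition (6) forces the matrices R(t) to be positive definite along the reference trajectory, which can be screened numerically. The following Python sketch is illustrative only and not part of the paper; the sampled matrices stand in for values of H_uu along a hypothetical optimal trajectory.

```python
import numpy as np

def coercivity_constant(Huu_samples):
    """Smallest eigenvalue of R(t) = Huu(t, u(t)) over the sampled times.
    When U - U spans R^m, pointwise coercivity with constant rho holds
    iff this value is at least rho > 0."""
    return min(np.linalg.eigvalsh(R).min() for R in Huu_samples)

# Hypothetical samples of Huu along a trajectory on [0, 1] (invented data):
ts = np.linspace(0.0, 1.0, 101)
samples = [np.array([[2.0 + t, 0.5],
                     [0.5, 1.0 + t]]) for t in ts]

rho = coercivity_constant(samples)
print(rho > 0.0)   # True: the sampled condition is coercive
```

In this invented example the minimum is attained at t = 0, where the smallest eigenvalue of the 2×2 matrix equals (3 − √2)/2 ≈ 0.79.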
In the following section, we present some preparatory material. In particular, we show that the coercivity condition implies a pointwise in time coercivity property which plays an important role in further analysis.

Preliminaries
Denote by meas(E) the Lebesgue measure of a set E. Let Ω ⊂ [0, T] be a measurable set with meas(Ω) > 0, and let v : Ω → R^m be a measurable and bounded function. For t ∈ Ω, denote by V(v; t) the set of points w ∈ R^m with the following property: there is a sequence of measurable sets E_k ⊂ Ω ∩ [t − 1/k, t + 1/k] with meas(E_k) > 0 such that

lim_{k→∞} sup_{s ∈ E_k} |v(s) − w| = 0.

A point t ∈ Ω is said to be essentially non-isolated if for every ε > 0 the set [t − ε, t + ε] ∩ Ω is of positive measure.
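To build intuition for V(v; t), the following Python sketch (purely illustrative, not part of the paper) approximates the set of essential cluster values by sampling; the discretization of the range and the measure threshold are ad hoc choices. For a function with a jump, both one-sided values belong to V(v; t) at the jump point:

```python
import numpy as np

rng = np.random.default_rng(0)

def essential_cluster_values(v, t, radii, n=100_000, frac=0.01):
    """Sampled surrogate for V(v; t): keep the values w that occupy a positive
    fraction of uniform samples in every interval [t - r, t + r]."""
    result = None
    for r in radii:
        s = rng.uniform(t - r, t + r, n)
        vals = np.round(v(s), 6)                 # discretize the range
        uniq, counts = np.unique(vals, return_counts=True)
        present = {w for w, c in zip(uniq, counts) if c > frac * n}
        result = present if result is None else result & present
    return result

v = lambda s: np.where(s < 0.5, -1.0, 1.0)       # jump at t = 0.5
V = essential_cluster_values(v, 0.5, [0.1, 0.01, 0.001])
print(sorted(V))                                 # both one-sided values survive
```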

Lemma 1
Let Ω ⊂ [0, T] be a measurable set and let v : Ω → R^m be a measurable and bounded function. Then, for any t ∈ Ω, the following statements are equivalent: (i) V(v; t) ≠ ∅; (ii) t is an essentially non-isolated point of Ω.

Proof If (i) holds, then the very definition of V(v; t) implies that t is essentially non-isolated.
Conversely, let us pick an essentially non-isolated point t of Ω. Let K ⊂ R^m be a compact set such that v(s) ∈ K for every s ∈ Ω. Take an arbitrary w ∈ K. If for every ε > 0 and every natural number k there exists E_k ⊂ [t − 1/k, t + 1/k] ∩ Ω such that meas(E_k) > 0 and sup_{s∈E_k} |v(s) − w| < ε, then w ∈ V(v; t). If this is not the case, then there exist ε(w) > 0 and a natural number k(w) such that the set {s ∈ [t − 1/k(w), t + 1/k(w)] ∩ Ω : |v(s) − w| < ε(w)} has measure zero. Suppose the latter happens for every w ∈ K. By compactness, K is covered by finitely many balls IB°_{ε(w_i)}(w_i), i = 1, ..., N. Then, for every k > max_i k(w_i), the value v(s) belongs to none of these balls for almost every s ∈ [t − 1/k, t + 1/k] ∩ Ω; hence v(s) ∉ K for almost every such s, and therefore meas([t − 1/k, t + 1/k] ∩ Ω) = 0. This contradicts the essential non-isolatedness of t, so some w ∈ K belongs to V(v; t).

Lemma 2 Let u and ũ be two measurable and bounded functions acting from [0, T] into R^m, and let u(t) ∈ V(u; t) for every t ∈ [0, T].
Then, the function ũ can be redefined on a set of measure zero in such a way that ũ(t) ∈ V(ũ; t) and |ũ(t) − u(t)| ≤ ‖ũ − u‖_∞ for every t ∈ [0, T].

Proof Take an arbitrary t ∈ [0, T]. Consider first the case where both functions u and ũ are approximately continuous at t. We recall that u is approximately continuous at t ∈ (0, T) if there exists a measurable set E ⊂ [0, T] containing t such that lim_{ε→0+} meas(E ∩ [t − ε, t + ε])/(2ε) = 1 and the restriction of u to E is continuous. Let Ẽ be the set in the definition of approximate continuity of ũ at t ∈ (0, T). Then, the set E_k := E ∩ Ẽ ∩ [t − 1/k, t + 1/k] satisfies lim_{k→∞} (k/2) meas(E_k) = 1; in particular, meas(E_k) > 0 for all sufficiently large k. Due to the continuity of u and ũ on E ∩ Ẽ, we have

sup_{s ∈ E_k} |u(s) − u(t)| → 0 and sup_{s ∈ E_k} |ũ(s) − ũ(t)| → 0 as k → ∞,

hence ũ(t) ∈ V(ũ; t); moreover, choosing s_k ∈ E_k with |ũ(s_k) − u(s_k)| ≤ ‖ũ − u‖_∞ (which is possible for almost every point of E_k), we obtain |ũ(t) − u(t)| = lim_k |ũ(s_k) − u(s_k)| ≤ ‖ũ − u‖_∞. Now let t ∈ [0, T] be such that u or ũ is not approximately continuous at t, or t equals 0 or T.
We will now redefine ũ(t) to fit the claim. It is well known (see, e.g., [8, Theorem 7.54]) that almost all t ∈ [0, T] are points of approximate continuity of both u and ũ; therefore, we need to redefine ũ only on a set of measure zero. Note that the sets V(ũ; t) are invariant with respect to changes of ũ on a set of measure zero.
Denote w := u(t) ∈ V(u; t) and let E_k be the sets in the definition of V(u; t), so that meas(E_k) > 0 and sup_{s ∈ E_k} |u(s) − w| → 0. Since almost every point of [0, T] is a point of approximate continuity of both u and ũ, we may pick points s_k ∈ E_k of approximate continuity such that, in addition, |ũ(s_k) − u(s_k)| ≤ ‖ũ − u‖_∞; set w_k := ũ(s_k). Let w̄ be a cluster point of the bounded sequence {w_k}. To show that w̄ ∈ V(ũ; t), we employ the following argument involving choosing a diagonal sequence. For an arbitrary natural number j, choose k = k_j so large that |w_{k_j} − w̄| ≤ 1/(2j) and |s_{k_j} − t| ≤ 1/(2j). By the first part of the proof, ũ(s_{k_j}) ∈ V(ũ; s_{k_j}); hence, there is a measurable set Ẽ_j ⊂ [s_{k_j} − 1/(2j), s_{k_j} + 1/(2j)] with meas(Ẽ_j) > 0 and sup_{s ∈ Ẽ_j} |ũ(s) − w_{k_j}| ≤ 1/(2j). We have

Ẽ_j ⊂ [t − 1/j, t + 1/j] and sup_{s ∈ Ẽ_j} |ũ(s) − w̄| ≤ 1/j.

Taking also into account that meas(Ẽ_j) > 0, the last two relations imply that w̄ ∈ V(ũ; t). Hence,

|w_k − w| ≤ |ũ(s_k) − u(s_k)| + |u(s_k) − w| ≤ ‖ũ − u‖_∞ + sup_{s ∈ E_k} |u(s) − w|.

Passing to the limit along the subsequence converging to w̄, we obtain |w̄ − w| ≤ ‖ũ − u‖_∞. Then, we redefine ũ(t) as ũ(t) = w̄. This completes the proof.
Corollary 1 Any measurable and bounded function v : [0, T] → R^m can be redefined on a set of measure zero in such a way that v(t) ∈ V(v; t) for every t ∈ [0, T].

For a proof, apply Lemma 2 with ũ = v and with u an arbitrary constant function (every t ∈ [0, T] is essentially non-isolated, and a constant function u clearly satisfies u(t) ∈ V(u; t) for every t).

Remark 2 From now on, the element ū ∈ L^∞ will be identified with a function (denoted in the same way) such that ū(t) ∈ V(ū; t) for every t ∈ [0, T]; such a representative exists by Corollary 1. Observe that the coercivity condition (6) does not depend on the particular representative of ū.

Lemma 3 Let the coercivity condition (6) hold, where ū is identified as in Remark 2. Then, for every t ∈ [0, T],

w^⊤ H̄_uu(t, ū(t)) w ≥ ρ |w|² for every w ∈ U − U. (7)
Proof Fix t ∈ [0, T] and let E_k be the sets in the definition of V(ū; t) corresponding to the value ū(t) ∈ V(ū; t) (see Remark 2); in particular,

meas(E_k) > 0, E_k ⊂ [t − 1/k, t + 1/k], sup_{s ∈ E_k} |ū(s) − ū(t)| → 0 as k → ∞. (8)

For an arbitrary w ∈ U − U, we define a function w_k as w_k(s) := w for s ∈ E_k and w_k(s) := 0 for s ∈ [0, T] \ E_k. Using the Cauchy formula for the equation ẏ = A(s) y + B(s) w_k, y(0) = 0, we obtain that its solution y_k satisfies ‖y_k‖_∞ ≤ c_1 meas(E_k), where here and further c_1, c_2, ... are positive reals independent of k. Then, for the terms involved in (6), we have

|y_k(T)^⊤ W y_k(T)| ≤ c_2 meas(E_k)², |∫_0^T y_k^⊤ Q y_k dt| ≤ c_3 meas(E_k)², |∫_0^T y_k^⊤ S w_k dt| ≤ c_4 meas(E_k)².

Since R(s) = H̄_uu(s, ū(s)), using (8), we obtain (see Remark 1) that for s ∈ E_k one has |R(s) − H̄_uu(t, ū(t))| ≤ δ_k with δ_k → 0 as k → ∞. Using these estimates in (6) applied with w = w_k, we obtain

meas(E_k) w^⊤ H̄_uu(t, ū(t)) w + δ_k meas(E_k) |w|² + c_5 meas(E_k)² ≥ ρ meas(E_k) |w|².

Dividing by meas(E_k) (here we use the first inequality in (8)) and passing to the limit with k, we obtain (7).

Lipschitz Continuity of the Optimal Control
Let us recall the variational inequality contained in the optimality system (3):

0 ∈ H_u(t, x̄(t), ū(t), λ̄(t)) + N_U(ū(t)). (9)

Lemma 4 Let the coercivity condition hold. Then, the optimal control ū ∈ L^∞ has a representative ū such that the matrix R(t) = H̄_uu(t, ū(t)) satisfies (7) and the inclusion (9) holds for every t ∈ [0, T]. In fact, any representative of the optimal control that satisfies ū(t) ∈ V(ū; t) for every t ∈ [0, T] has these properties.

Proof Let us redefine ū so that ū(t) ∈ V(ū; t) for all t ∈ [0, T] (see Corollary 1 and Remark 2). Then, according to Lemma 3, the pointwise coercivity condition (7) holds. Now fix t ∈ [0, T] and let E_k be the sets in the definition of V(ū; t), so that (8) holds. Since meas(E_k) > 0 and (9) is satisfied by (x̄(t), ū(t), λ̄(t)) almost everywhere, there exists t_k ∈ E_k such that (9) holds for t_k. From (8), we obtain that t_k → t and ū(t_k) → ū(t). Then, due to the continuity of the function (t, u) → H_u(t, x̄(t), u, λ̄(t)) and the upper semi-continuity of the mapping u → N_U(u), (9) holds for t as well.
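For a convex closed U, the inclusion (9) is equivalent to the fixed-point relation u = proj_U(u − γ H_u) for any γ > 0, which is convenient for numerical checks. The Python sketch below is illustrative only: the box U, the step γ and the quadratic toy gradient Hu are invented, and in_normal_cone encodes the normal cone of a box.

```python
import numpy as np

def proj_box(u, lo, hi):
    """Projection onto the box U = [lo, hi] (componentwise)."""
    return np.minimum(np.maximum(u, lo), hi)

def in_normal_cone(v, u, lo, hi, tol=1e-8):
    """v ∈ N_U(u) for the box U: v_i ≥ 0 where u_i = hi_i, v_i ≤ 0 where
    u_i = lo_i, and v_i = 0 where u_i is in the interior of [lo_i, hi_i]."""
    ok = True
    for vi, ui, l, h in zip(v, u, lo, hi):
        if abs(ui - h) <= tol:
            ok = ok and vi >= -tol
        elif abs(ui - l) <= tol:
            ok = ok and vi <= tol
        else:
            ok = ok and abs(vi) <= tol
    return ok

# Toy gradient of a Hamiltonian in u (invented data): minimizer at (1.5, -0.3).
Hu = lambda u: 2.0 * (u - np.array([1.5, -0.3]))
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

u = np.zeros(2)
for _ in range(200):                       # projected-gradient fixed point
    u = proj_box(u - 0.25 * Hu(u), lo, hi)

print(u)                                   # the projection of the minimizer onto U
print(in_normal_cone(-Hu(u), u, lo, hi))   # True: the analogue of (9) holds at u
```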
We recall next the property of strong metric regularity of a general set-valued mapping F : Y ⇒ Z, where Y and Z are Banach spaces (for more on that, see, e.g., [6, Section 3.7]). A mapping F is said to be strongly metrically regular at ŷ for ẑ if there exist constants κ ≥ 0, a > 0 and b > 0 such that the truncated inverse mapping

IB_b(ẑ) ∋ z → F^{−1}(z) ∩ IB_a(ŷ)

is single-valued (a function) and Lipschitz continuous on IB_b(ẑ) with Lipschitz constant κ. Our further analysis is based on the following version of Robinson's implicit function theorem. It was first stated as [6, Theorem 5G.3] and then in corrected form as Theorem 3.2 in [2] (see also [3, Theorem 2.3] for a slight extension):

Theorem 1 Let a, b, and κ be positive scalars and let a mapping F : Y ⇒ Z be strongly metrically regular at ŷ for ẑ with neighborhoods IB_a(ŷ) and IB_b(ẑ) and constant κ. Let μ > 0 be such that κμ < 1 and let κ′ > κ/(1 − κμ). Then, for every positive α and β such that α ≤ a/2, 2μα + 2β ≤ b and 2κ′β ≤ α, and for every function g : Y → Z satisfying ‖g(ŷ)‖ ≤ β and ‖g(y) − g(y′)‖ ≤ μ‖y − y′‖ for every y, y′ ∈ IB_{2α}(ŷ), the mapping

IB_β(ẑ + g(ŷ)) ∋ z → (g + F)^{−1}(z) ∩ IB_α(ŷ)

is single-valued and Lipschitz continuous with Lipschitz constant κ′.

Compared with the standard Robinson implicit function theorem, see [6, Theorem 2B.1], Theorem 1 exhibits the fact that everything hinges on the constants involved; that is, the constants of metric regularity of the perturbed mapping g + F do not depend on the actual perturbation but only on ‖g(ŷ)‖, the Lipschitz constant of g, and the constants of the strong regularity of F. In that sense, Theorem 1 shows strong metric regularity which is uniform with respect to perturbations.
Let us get back to the optimal control problem at hand. If (t, u) ∈ cl gph(ū), then there exists a sequence t_k → t such that ū(t_k) → u. According to (7), we have w^⊤ H̄_uu(t_k, ū(t_k)) w ≥ ρ|w|² for every w ∈ U − U.
Passing to the limit, we obtain that

w^⊤ H̄_uu(t, u) w ≥ ρ|w|² (10)

for every (t, u) ∈ cl gph(ū) and every w ∈ U − U. It is well known that the property (10) implies that for every (t, u) ∈ cl gph(ū) the mapping

v → H̄_u(t, v) + N_U(v) (11)

is strongly metrically regular at u for 0 with constants κ = 1/ρ, a = b = +∞ (that is, with any positive a and b), see, e.g., [7, Lemma 1]. Note that these constants are independent of t. Next, we reformulate, adapted to our notations and needs, a simplified version of Theorem 3.5 in [2], which in turn is a corollary of Theorem 1.

Theorem 2
Assume that for every (t, u) ∈ cl gph(ū) the mapping in (11) is strongly metrically regular at u for 0 with constants κ′, a′, b′ that are independent of (t, u). Then, for every t ∈ [0, T], the mapping u → H̄_u(t, u) + N_U(u) is strongly metrically regular at ū(t) for 0 with any constants κ, a, b satisfying the inequalities (12), in which L is a Lipschitz constant of the mapping u → H̄_uu(t, u) on IB_a(ū(t)), for every t ∈ [0, T].
The conditions (12) are not stated in Theorem 3.5 in [2], but are explicitly written in the beginning of its proof there.
Continuing the analysis of (11), we apply Theorem 2 with a′ = 1, b′ = +∞ and κ′ = 1/ρ, which reduces the inequalities (12) to the conditions (13) on κ, a and b, where now L is the constant from Remark 1.

Remark 3
The important consequence of (13) is that the constants κ, a, b of strong regularity of u → H̄_u(t, u) + N_U(u) at ū(t) for 0 can be chosen to depend only on the constant ρ in the coercivity condition (6) and the constant L in Remark 1.
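The uniformity described in this remark can be seen on a one-dimensional toy model (invented for illustration, not taken from the paper): with H̄_u(t, u) = ρ(u − c) and U = [0, 1], the solution map of the perturbed inclusion z ∈ H̄_u(t, u) + N_U(u) is u(z) = proj_U(c + z/ρ), single-valued and Lipschitz with constant exactly κ = 1/ρ:

```python
import numpy as np

rho, c = 2.0, 0.4                       # invented coercivity constant and data
proj = lambda u: min(max(u, 0.0), 1.0)  # projection onto U = [0, 1]

def solve_vi(z):
    """Unique u ∈ U with z ∈ rho*(u - c) + N_U(u); closed form via projection."""
    return proj(c + z / rho)

# Lipschitz dependence of the solution on the perturbation z:
zs = np.linspace(-1.0, 1.0, 401)
us = np.array([solve_vi(z) for z in zs])
slopes = np.abs(np.diff(us) / np.diff(zs))
print(slopes.max() <= 1.0 / rho + 1e-9)   # True: (1/rho)-Lipschitz
```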
We introduce next our second main assumption:

- ISOLATEDNESS: there exists a (relatively) open set O ⊂ [0, T] × R^m containing gph(ū) such that, for every t ∈ [0, T], the only solution u of the inclusion 0 ∈ H̄_u(t, u) + N_U(u) with (t, u) ∈ O is u = ū(t). (14)

For example, the isolatedness assumption holds if for every t ∈ [0, T] the inclusion 0 ∈ H̄_u(t, u) + N_U(u) has a unique solution (which has to be ū(t)). In this case, one can verify the isolatedness condition taking any (relatively) open set O ⊂ [0, T] × R^m containing gph(ū).

Theorem 3 Suppose that the isolatedness assumption (14) and condition (7) hold. Then, the optimal control ū is Lipschitz continuous on [0, T]. Moreover, the Lipschitz constant of ū depends only on the number ρ in (7) and the constant L in Remark 1.
Proof The proof is somewhat parallel to the proof of Theorem 3.7 in [2]. Here we use Theorem 2 and (13) instead of the more general Theorem 3.5 in [2] (used in the proof of Theorem 3.7 in [2]), which does not imply the second claim of Theorem 3.
As mentioned around (10), condition (7) implies that for every (t, u) ∈ cl gph(ū) the mapping in (11) is strongly metrically regular at u for 0. Then, we can apply Theorem 2. Let the numbers a, b, κ be chosen to satisfy conditions (13), so that for every t ∈ [0, T] the mapping u → H̄_u(t, u) + N_U(u) is strongly metrically regular at ū(t) for 0 (see Theorem 2). Let L be the constant in Remark 1; then, the mappings (t, u) → H̄_u(t, u) and (t, u) → H̄_uu(t, u) are Lipschitz continuous with constant L on the set {(t, u) : t ∈ [0, T], u ∈ IB_a(ū(t))}. Without loss of generality, we consider ū as taking values in the set Ū from Remark 1; we also recall that a ≤ 1.
The second claim of the theorem follows from Remark 3 concerning κ.
The example displayed in Remark 9 in [5] demonstrates that the isolatedness assumption (14) is essential for the Lipschitz continuity of the optimal control shown in Theorem 3. In this example, h = (u² − 1)², g = 0, f = 0, U = R, T = 1. Here, for each measurable set Ω ⊂ [0, 1] the function defined as u(t) = −1 for t ∈ Ω and u(t) = 1 for t ∈ [0, 1] \ Ω is an optimal control, and the coercivity condition is satisfied. However, the isolatedness condition is satisfied only if the measure of Ω is either zero or one. In these two cases, the optimal control is Lipschitz continuous.
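This example is easy to verify numerically; the discretized sketch below (illustrative only) evaluates the cost for a Lipschitz optimal control and for a wildly switching one:

```python
import numpy as np

# h(u) = (u^2 - 1)^2, g = 0, f = 0, U = R, T = 1: the cost J(u) = ∫ h(u(t)) dt
# vanishes for ANY control with values in {-1, +1}, however irregular.
h = lambda u: (u**2 - 1.0)**2
ts = np.linspace(0.0, 1.0, 10_001)

u_const = np.ones_like(ts)                               # u ≡ 1: Lipschitz
u_chatter = np.where(np.sin(200.0 * ts) >= 0, 1.0, -1.0) # rapid switching

J = lambda u: h(u).mean()        # Riemann approximation of the integral
print(J(u_const), J(u_chatter))  # both zero: optimality alone gives no regularity
```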

Lipschitz Continuous Optimal Feedback Control
In this section, we prove the existence of a Lipschitz continuous locally optimal feedback control for problem (1)-(2). For this purpose, we embed the problem into a family of problems by replacing the initial time 0 with any τ ∈ [0, T] and the initial condition x(0) = x_0 with x(τ) = ξ ∈ R^n. Denote this new family of problems by P(τ, ξ), so that P(0, x_0) is (1)-(2). Also, denote by J(τ, ξ; u) the value of the objective function of P(τ, ξ) for a feasible control u, defined as

J(τ, ξ; u) := g(x(T)) + ∫_τ^T h(t, x(t), u(t)) dt,

where x is the solution of the initial-value problem

ẋ(t) = f(t, x(t), u(t)) for a.e. t ∈ [τ, T], x(τ) = ξ.

To set the stage, we give first the following definition, which recasts the usual way a locally optimal feedback control is understood. Recall that (x̄, ū) is a locally unique solution of problem (1)-(2).

Definition 1
The function u* : [0, T] × R^n → U is said to be a locally optimal feedback control around the reference solution pair (x̄, ū) if there exist positive numbers ε_0 and ā, and a set Ω ⊂ [0, T] × R^n containing {(τ, ξ) : τ ∈ [0, T], |ξ − x̄(τ)| ≤ ε_0}, such that for every (τ, ξ) ∈ Ω the closed-loop system ẋ(t) = f(t, x(t), u*(t, x(t))), x(τ) = ξ, has a unique solution x̂[τ, ξ] on [τ, T] with gph(x̂[τ, ξ]) ⊂ Ω, and the control û[τ, ξ](t) := u*(t, x̂[τ, ξ](t)), t ∈ [τ, T], is a locally optimal control of problem P(τ, ξ) in the ball IB_ā(ū).

Theorem 4 Let the coercivity condition (6) and the isolatedness assumption (14) hold. Then, there exists a locally optimal feedback control u* around (x̄, ū) which is Lipschitz continuous on its domain.

Let us first sketch the idea of the proof. First, we prove that for ξ close to x̄(τ) a unique solution (x̄[τ, ξ], ū[τ, ξ]) of P(τ, ξ) exists and it is close to the restriction of (x̄, ū) to [τ, T]; moreover, ū[τ, ξ] depends in a Lipschitz way on ξ (in the space L^∞). Then, we show that ū[τ, ξ] is Lipschitz continuous.
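For orientation, in the linear-quadratic case the feedback can be computed explicitly. The sketch below treats an invented scalar instance of P(τ, ξ), minimize ∫_τ^T (x² + u²)/2 dt subject to ẋ = u, x(τ) = ξ; its optimal feedback u*(τ, ξ) = −p(τ)ξ with p(t) = tanh(T − t) is Lipschitz in (τ, ξ), and p is computed here by integrating the Riccati equation ṗ = p² − 1, p(T) = 0:

```python
import numpy as np

T, N = 1.0, 100_000
dt = T / N

# Integrate the Riccati equation backward in time via s = T - t:
# dp/ds = 1 - p^2 with p(s=0) = 0, whose solution is p(t) = tanh(T - t).
p, ps = 0.0, [0.0]
for _ in range(N):
    p += dt * (1.0 - p * p)
    ps.append(p)
ps = np.array(ps[::-1])        # ps[k] ≈ p(k * dt)

def u_star(tau, xi):
    """Optimal feedback of the scalar LQ toy problem: linear, hence Lipschitz in xi."""
    return -ps[int(round(tau / dt))] * xi

print(abs(u_star(0.0, 1.0) + np.tanh(1.0)) < 1e-3)   # gain at tau = 0 matches tanh(T)
```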
For any τ ∈ [0, T), we define the spaces

Y_τ := W^{1,∞} × L^∞ × W^{1,∞}, Z_τ := L^∞ × R^n × L^∞ × R^n × L^∞,

where the time interval for these function spaces is [τ, T]. It is convenient to define the norm in Y_τ as ‖(x, u, λ)‖ := max{‖x‖_{1,∞}, ‖u‖_∞, ‖λ‖_{1,∞}}. For any fixed τ ∈ [0, T), any (locally) optimal solution-multiplier triple y := (x, u, λ) ∈ Y_τ for P(τ, x̄(τ)) satisfies the inclusion

0 ∈ F_τ(y) + G_τ(y), (20)

where F_τ : Y_τ → Z_τ and G_τ : Y_τ ⇒ Z_τ are defined as

F_τ(x, u, λ) := ( ẋ − f(·, x, u), x(τ) − x̄(τ), λ̇ + H_x(·, x, u, λ)^⊤, λ(T) − g_x(x(T))^⊤, H_u(·, x, u, λ) ),
G_τ(x, u, λ) := {0} × {0} × {0} × {0} × N^∞_U(u).

Here N^∞_U(u) := {v ∈ L^∞ : v(t) ∈ N_U(u(t)) for a.e. t ∈ [τ, T]}. By using the superscript ∞ in the notation of the latter set, we emphasize that the cone N^∞_U(u) includes only a part of the normal cone to the set of feasible controls, which is a subset of the dual space of L^∞; note that the dependence on τ is not indicated.

Proposition 1 Let the coercivity condition (6) hold. Then, the mapping F_τ + G_τ is strongly metrically regular at the restriction of ȳ := (x̄, ū, λ̄) to [τ, T] (denoted in the same way) for 0. Moreover, the constants of strong regularity, call them κ̄, ā, b̄, can be chosen independent of τ.
Proof The strong metric regularity of the mapping F_τ + G_τ follows from [5, Theorem 5], with the only difference that in [5] there is no terminal term in the cost functional and the functions h and f do not depend on the time t. As is well known, under the smoothness conditions imposed, the problem with a terminal cost can be transformed into an equivalent problem without a terminal cost. In addition, the time-dependent problem is handled in exactly the same way as the time-invariant one; thus, the difference is basically formal. For the reader's convenience, below we outline the proof by highlighting the main steps and utilizing Theorem 1 as a shortcut.
First, observe that the coercivity condition (6) is fulfilled for problem P(τ, x̄(τ)) with the same constant ρ for all τ. To show this, it is enough to extend w by w(t) = 0 on [0, τ) in (6). The next step is to linearize the generalized equation (20) at ȳ = (x̄, ū, λ̄), obtaining

0 ∈ F_τ(ȳ) + A_τ(y − ȳ) + G_τ(y), (21)

where A_τ is the derivative of F_τ at ȳ. The strong regularity of the mapping appearing in the linearization (21), say with constants κ, a, b independent of τ, is established in [7, Lemma 3] (with the caveat concerning the terminal cost and the dependence on t). Consider the function

g_τ(y) := F_τ(y) − F_τ(ȳ) − A_τ(y − ȳ).

Then, g_τ(ȳ) = 0. Since A_τ is the strict derivative (in L^∞) of F_τ at ȳ, the Lipschitz modulus of g_τ at ȳ is zero. Thus, in the notation of Theorem 1, taking α sufficiently small one can make μ arbitrarily close to zero; furthermore, κ′ and β can be chosen accordingly to satisfy (17). It remains to put κ̄ = κ′, ā = α, b̄ = β and to observe that these constants are independent of τ.
It is important to note that, assuming b̄ small enough, we may guarantee that Remark 1 is still valid with e_0, d_0 and d replaced by e_0/2, d_0/2 and d/2, respectively, and for the interval [τ, T] and the function ū[τ, ξ], ξ ∈ IB_b̄(x̄(τ)), instead of [0, T] and ū. The constant L remains the same.
Note that the right side of (22) is contained in the left side; thus, it is sufficient to prove the opposite inclusion. Aiming at a contradiction, let us assume that there exists a point (t_0, u_0) in the left side of (22) which is not in gph(ũ); then ũ(t_0) ≠ u_0. From (26) and the second relation in (24), and then using (25) (notice that u_0 ∈ U, since otherwise N_U(u_0) = ∅), we can continue the inequality (26) so that the resulting inequality contradicts (23). Hence, (22) holds.
Having proved that the isolatedness condition is also fulfilled for problem P(τ, ξ), we can apply Theorem 3 to this problem and obtain that the (locally) optimal control ū[τ, ξ] is Lipschitz continuous. Its Lipschitz constant L̄ depends on the problem only through the constant ρ (now ρ/2) and the constant L, and therefore can be chosen independent of τ and ξ, provided that |ξ − x̄(τ)| ≤ ε, where ε > 0 is sufficiently small (independent of τ).

Remark 4
The last part of the proof and the uniqueness claim in Proposition 2 imply that the function û[τ, ξ] appearing in Definition 1 is the unique locally optimal control of problem P(τ, ξ) in the set IB_ā(ū).

Regularity of the Value Function
In this section, we show that the existence of a Lipschitz continuous optimal feedback control established in Theorem 4 implies certain smoothness properties of the value function. In the preceding sections, we assumed only local optimality at the reference point; see Definition 1. In line with that assumption, we introduce the following definition:

Definition 2 Given a set Ω ⊂ [0, T] × R^n and a number ā > 0, the local value function V associated with Ω and the neighborhood IB_ā(ū) is defined for (τ, ξ) ∈ Ω as

V(τ, ξ) := inf { J(τ, ξ; u) : u is a feasible control with ‖u − ū‖_∞ ≤ ā and gph(x) ⊂ Ω },

where x is the solution of the initial-value problem ẋ(t) = f(t, x(t), u(t)), x(τ) = ξ.
By this definition, the local value function, with a set Ω and a neighborhood IB_ā(ū), is finite if for every (τ, ξ) ∈ Ω there exists at least one admissible pair (x, u) satisfying ‖u − ū‖_∞ ≤ ā and gph(x) ⊂ Ω. Clearly, in that case (x̄, ū) is a locally optimal solution.

Theorem 5 Let the coercivity condition and the isolatedness condition hold. Then, problem (1)-(2) has a (finite) local value function V around (x̄, ū) (with a set Ω and parameters ε_0 and ā); moreover, V(τ, ·) is differentiable with respect to ξ whenever (τ, ξ) belongs to the interior of Ω, and the derivative V_ξ is Lipschitz continuous there.

Proof The proof is routine, in principle, but we present it in full, because we deal here with a local value function, which requires some attention to detail. We will prove the theorem with Ω, ε_0 and ā as in Theorem 4. Then, there is a locally optimal Lipschitz continuous feedback control u* in the sense of Definition 1, with the corresponding pairs (x̂[τ, ξ], û[τ, ξ]). According to this definition, we have V(τ, ξ) = J(τ, ξ; û[τ, ξ]) for every (τ, ξ) ∈ Ω. First, we prove the following claim.
Observe that Ω, ε_0 and ā in this theorem can be taken to be those in the proof of Theorem 4. Also, observe that at the end of the last proof we obtained the equality V_ξ(τ, ξ_0) = λ[τ, ξ_0](τ), which, as is well known, holds under various sets of assumptions. Moreover, based on Theorem 5, one can verify that if ū is a globally optimal solution, then the value function V is a classical solution of the corresponding Hamilton-Jacobi-Bellman equation (see, e.g., [1, Chapter III.3]).
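The identity V_ξ(τ, ξ_0) = λ[τ, ξ_0](τ) can be sanity-checked on an invented scalar linear-quadratic instance of P(τ, ξ) (illustrative only, not the paper's general setting): for minimize ∫_τ^T (x² + u²)/2 dt, ẋ = u, x(τ) = ξ, one has V(τ, ξ) = tanh(T − τ) ξ²/2 and λ[τ, ξ](τ) = tanh(T − τ) ξ, so a finite-difference derivative of V in ξ should match the adjoint:

```python
import numpy as np

T = 1.0
V = lambda tau, xi: 0.5 * np.tanh(T - tau) * xi**2   # value function of the LQ toy
lam_at_tau = lambda tau, xi: np.tanh(T - tau) * xi   # adjoint at the initial time

tau, xi, eps = 0.3, 2.0, 1e-6
V_xi = (V(tau, xi + eps) - V(tau, xi - eps)) / (2 * eps)  # central difference
print(abs(V_xi - lam_at_tau(tau, xi)) < 1e-6)             # True: V_xi equals lambda(tau)
```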