A neural network based policy iteration algorithm with global $H^2$-superlinear convergence for stochastic games on domains

In this work, we propose a class of numerical schemes for solving semilinear Hamilton-Jacobi-Bellman-Isaacs (HJBI) boundary value problems which arise naturally from exit time problems of diffusion processes with controlled drift. We exploit policy iteration to reduce the semilinear problem into a sequence of linear Dirichlet problems, which are subsequently approximated by a multilayer feedforward neural network ansatz. We establish that the numerical solutions converge globally in the $H^2$-norm, and further demonstrate that this convergence is superlinear, by interpreting the algorithm as an inexact Newton iteration for the HJBI equation. Moreover, we construct the optimal feedback controls from the numerical value functions and deduce convergence. The numerical schemes and convergence results are then extended to HJBI boundary value problems corresponding to controlled diffusion processes with oblique boundary reflection. Numerical experiments on the stochastic Zermelo navigation problem are presented to illustrate the theoretical results and to demonstrate the effectiveness of the method.


Introduction
In this article, we propose a class of numerical schemes for solving Hamilton-Jacobi-Bellman-Isaacs (HJBI) boundary value problems of the following form: where Ω is an open bounded domain, G is the (nonconvex) Hamiltonian defined below with given nonempty compact sets A and B, and B is a boundary operator; that is, if B is the identity operator, (1.1) is an HJBI Dirichlet problem, while if Bu = γ^i ∂_i u + γ^0 u with some functions {γ^i}_{i=0}^n, (1.1) is an HJBI oblique derivative problem. Above and hereafter, when there is no ambiguity, we shall adopt the summation convention as in [20], i.e., repeated equal dummy indices indicate summation from 1 to n.
It is well-known that the value function of zero-sum stochastic differential games in domains satisfies the HJBI equation (1.1), and that the optimal feedback controls can be constructed from the derivatives of the solutions (see e.g. [32] and the references therein; see also Section 6 for a concrete example). In particular, the HJBI Dirichlet problem corresponds to exit time problems of diffusion processes with controlled drift (see e.g. [32,10,37]), while the HJBI oblique derivative problem corresponds to state constraints (see e.g. [36,34]). A nonconvex HJBI equation as above also arises from a penalty approximation of hybrid control problems involving continuous controls, optimal stopping and impulse controls, where the HJB (quasi-)variational inequality can be reduced to an HJBI equation by penalizing the difference between the value function and the obstacles (see e.g. [27,47,39,40]). As (1.1) in general cannot be solved analytically, it is important to construct effective numerical schemes to find the solution of (1.1) and its derivatives.
The standard approach to solving (1.1) is to first discretize the operators in (1.1) by finite difference or finite element methods, and then solve the resulting nonlinear discretized equations by using policy iteration, also known as Howard's algorithm, or, more generally, (finite-dimensional) semismooth Newton methods (see e.g. [18,8,45,39]). However, this approach has the following drawbacks, as do most mesh-based methods: (1) it can be difficult to generate meshes and to construct consistent numerical schemes for problems in domains with complicated geometries; (2) the number of unknowns in general grows exponentially with the dimension n, i.e., it suffers from Bellman's curse of dimensionality, and hence this approach is infeasible for solving high-dimensional control problems. Moreover, since policy iteration is applied to a fixed finite-dimensional equation resulting from a particular discretization, it is difficult to infer whether the same convergence rate of policy iteration remains valid as the mesh size tends to zero ([42,8]). We further remark that, for a given discrete HJBI equation, it can be difficult to determine a good initialization of policy iteration to ensure fast convergence of the algorithm; see [2] and the references therein on possible accelerated methods.
Recently, numerical methods based on deep neural networks have been designed to solve high-dimensional partial differential equations (PDEs) (see e.g. [35,15,6,16,26,44]). Most of these methods reformulate (1.1) into a nonlinear least-squares problem (1.3), where F is a collection of neural networks with a smooth activation function. Based on collocation points chosen randomly from the domain, (1.3) is then reduced to an empirical risk minimization problem, which is subsequently solved by stochastic optimization algorithms, in particular the Stochastic Gradient Descent (SGD) algorithm or its variants. Since these methods avoid mesh generation, they can be adapted to solve PDEs in high-dimensional domains with complex geometries. Moreover, the choice of smooth activation functions leads to smooth numerical solutions, whose values can be evaluated everywhere without interpolation. In the following, we shall refer to these methods as the Direct Method, since no policy iteration is involved. We observe, however, that the Direct Method also has several serious drawbacks, especially for solving nonlinear nonsmooth equations including (1.1). Firstly, the nonconvexity of both the deep neural networks and the Hamiltonian G leads to a nonconvex empirical minimization problem, for which there is no theoretical guarantee on the convergence of SGD to a minimizer (see e.g. [43]). In practice, training a network to a desired accuracy can take hours or days (with hundreds of thousands of iterations) due to the slow convergence of SGD. Secondly, each SGD iteration requires the evaluation of ∇G (with respect to u and ∇u) on sample points, but ∇G is not necessarily defined everywhere due to the nonsmoothness of G. Moreover, evaluating the function G (again on a large set of sample points) can be expensive, especially when the sets A and B are of high dimensions, since we do not require more regularity than continuity of the coefficients with respect to the controls, so that approximate optimization may only be achieved by exhaustive search over a discrete coverage of the compact control set. Finally, as we shall see in Remark 4.2, merely including an L^2(∂Ω)-norm of the boundary data in the loss function (1.3) does not generally lead to convergence of the derivatives of the numerical solutions or of the corresponding feedback control laws.
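As a concrete illustration of the least-squares reformulation (1.3) and its collocation-based empirical risk, the following minimal sketch minimizes the interior residual of a toy one-dimensional Dirichlet problem −u″ = π² sin(πx) on (0, 1) over a one-parameter trial family that satisfies the boundary conditions exactly; the toy problem, trial family, and plain gradient descent (in place of SGD) are all illustrative assumptions, not the setting of the cited works.

```python
import numpy as np

# Toy Dirichlet problem: -u'' = f on (0,1), u(0) = u(1) = 0,
# with f(x) = pi^2 sin(pi x), so the exact solution is u(x) = sin(pi x).
rng = np.random.default_rng(0)
xs = rng.uniform(0.0, 1.0, size=200)           # random collocation points

def residual(theta, x):
    # Trial family u_theta(x) = theta * sin(pi x) satisfies the boundary
    # conditions exactly, so the empirical loss needs only the interior
    # residual: -u_theta'' - f = (theta - 1) * pi^2 * sin(pi x).
    return (theta - 1.0) * np.pi**2 * np.sin(np.pi * x)

theta = 0.0                                    # initial guess
lr = 5e-3
for _ in range(200):                           # plain gradient descent on the
    r = residual(theta, xs)                    # empirical least-squares loss
    grad = 2.0 * np.mean(r * np.pi**2 * np.sin(np.pi * xs))
    theta -= lr * grad

print(theta)   # converges to 1, recovering the exact solution
```

For a nonsmooth Hamiltonian G the residual is no longer differentiable in the parameters everywhere, which is precisely the first difficulty noted above.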
In this work, we propose an efficient neural network based policy iteration algorithm for solving (1.1). At the (k + 1)th iteration, k ≥ 0, we shall update the control laws (α_k, β_k) by performing pointwise maximization/minimization of the Hamiltonian G based on the previous iterate u_k, and obtain the next iterate u_{k+1} by solving a linear boundary value problem whose coefficients involve the control laws (α_k, β_k). This reduces the (nonconvex) semilinear problem to a sequence of linear boundary value problems, which are subsequently approximated by a multilayer neural network ansatz. Note that compared to Algorithm Ho-3 in [8] for discrete HJBI equations, which requires solving a nonlinear HJB subproblem (involving minimization over the set B) at each iteration, our algorithm only requires solving a linear subproblem per iteration, and hence is in general more efficient, especially when the dimension of B is high.
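A minimal sketch of this reduction, assuming a toy one-dimensional HJB model −u″ + max_{α∈{−1,1}} αu′ + u = f with homogeneous Dirichlet data and a central finite-difference discretization (the paper's scheme itself is mesh-free, so the grid here is purely illustrative): each iteration freezes the pointwise maximizing control and solves the resulting linear Dirichlet problem.

```python
import numpy as np

# Model problem on (0,1): -u'' + max_{a in {-1,1}} a*u' + u = f, u(0)=u(1)=0.
# Policy iteration: freeze the maximizing control a_k, solve the resulting
# *linear* Dirichlet problem for the next iterate, and repeat.
N = 49                       # interior grid points
h = 1.0 / (N + 1)
f = np.ones(N)

# Central-difference matrices on interior nodes (homogeneous Dirichlet data).
D2 = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
      + np.diag(np.ones(N - 1), -1)) / h**2
D1 = (np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / (2 * h)

u = np.zeros(N)
for k in range(50):
    a = np.where(D1 @ u >= 0.0, 1.0, -1.0)     # pointwise maximizer of a*u'
    L = -D2 + np.diag(a) @ D1 + np.eye(N)      # frozen-control linear operator
    u_new = np.linalg.solve(L, f)              # linear Dirichlet solve
    if np.max(np.abs(u_new - u)) < 1e-12:      # policy (and iterate) settled
        u = u_new
        break
    u = u_new

# Residual of the *nonlinear* discrete equation at the computed solution
# (max_{a in {-1,1}} a*v = |v|):
res = -D2 @ u + np.abs(D1 @ u) + u - f
print(np.max(np.abs(res)))
```

With the finite control set {−1, 1}, the frozen-control policy stabilizes after a few iterations and the nonlinear residual then vanishes up to rounding.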
Policy iteration (or Successive Galerkin Approximation) was employed in [4,5,28,30] to solve convex HJB equations on the whole space R^n. Specifically, [4,5,28] approximate the solution to each linear equation via a separable polynomial ansatz (without concluding any convergence rate), while [30] assumes each linear equation is solved sufficiently accurately (without specifying a numerical method), and deduces pointwise linear convergence. The continuous policy iteration in [28] has also been applied to solve HJBI equations on R^n in [29], which is a direct extension of Algorithm Ho-3 in [8] and still requires solving a nonlinear HJB subproblem at each iteration. In this paper, we propose an easily implementable accuracy criterion for the numerical solutions of the linear PDEs which ensures that the numerical solutions converge superlinearly in a suitable function space for nonconvex HJBI equations from an arbitrary initial guess.
Our algorithm enjoys the main advantage of the Direct Method, i.e., it is a mesh-free method and can be applied to solve high-dimensional stochastic games. Moreover, by utilizing the superlinear convergence of policy iteration, our algorithm effectively reduces the number of pointwise maximizations/minimizations over the sets A and B, and significantly reduces the computational cost of the Direct Method, especially for high-dimensional control sets. The superlinear convergence of policy iteration also helps eliminate the oscillation caused by SGD, which leads to smoother and more rapidly decaying loss curves in both the training and validation processes (see Figure 7). Our algorithm further allows training of the feedback controls on a separate network architecture from that representing the value function, or adaptively adjusting the architecture of the networks at each policy iteration.
A major theoretical contribution of this work is the proof of global superlinear convergence of the policy iteration algorithm for the HJBI equation (1.1) in H^2(Ω), which is novel even for HJB equations (i.e., when one of the sets A and B is a singleton). Although the (local) superlinear convergence of policy iteration for discrete equations has been proved in various works (e.g. [38,42,18,8,47,45,39]), to the best of our knowledge, there is no published work on the superlinear convergence of policy iteration for HJB PDEs in a function space, nor on the global convergence of policy iteration for solving nonconvex HJBI equations.
Moreover, this is the first paper which demonstrates the convergence of neural network based methods for the solutions of nonlinear PDEs with merely measurable coefficients, together with their first and second order derivatives (cf. [22,23,26,44]). We will also prove the pointwise convergence of the numerical solutions and their derivatives, which subsequently enables us to construct the optimal feedback controls from the numerical value functions and deduce their convergence.
Let us briefly comment on the main difficulties encountered in studying the convergence of policy iteration for HJBI equations. Recall that at the (k + 1)th iteration, we need to solve a linear boundary value problem whose coefficients involve the control laws (α_k, β_k), obtained by performing pointwise maximization/minimization of the Hamiltonian G. The uncountability of the state space Ω and the nonconvexity of the Hamiltonian require us to exploit several technical measurable selection arguments to ensure the measurability of the controls (α_k, β_k), which is essential for the well-definedness of the linear boundary value problems and of the algorithm.
Moreover, the nonconvexity of the Hamiltonian prevents us from following the arguments in [42,18,8] for discrete HJB equations to establish the global convergence of our inexact policy iteration algorithm for HJBI equations. In fact, a crucial step in the arguments for discrete HJB equations is to use the discrete maximum principle and show that the iterates generated by policy iteration converge monotonically from an arbitrary initial guess, which subsequently implies the global convergence of the iterates. However, this monotone convergence is in general false for the iterates generated by the inexact policy iteration algorithm, due to the nonconvexity of the Hamiltonian and the fact that each linear equation is only solved approximately. We shall present a novel analysis technique for establishing the global convergence of our inexact policy iteration algorithm, by interpreting it as a fixed point iteration in H^2(Ω).
Finally, we remark that the proof of superlinear convergence of our algorithm is significantly different from the arguments for discrete equations. Instead of working with the sup-norm for (finite-dimensional) discrete equations as in [42,18,8,47,39], we employ a two-norm framework to establish the generalized differentiability of HJBI operators, where the norm gap is essential, as has already been pointed out in [24,46,45]. Moreover, by taking advantage of the fact that the Hamiltonian only involves low order terms, we further demonstrate that the inverse of the generalized derivative is uniformly bounded. Furthermore, we include a suitable fractional Sobolev norm of the boundary data in the loss functions used in the training process, which is crucial for the H^2(Ω)-superlinear convergence of the neural network based policy iteration algorithm.
We organize this paper as follows. Section 2 states the main assumptions and recalls basic results for HJBI Dirichlet problems. In Section 3 we propose a policy iteration scheme for HJBI Dirichlet problems and establish its global superlinear convergence. Then in Section 4, we shall introduce the neural network based policy iteration algorithm, establish its various convergence properties, and construct convergent approximations to optimal feedback controls. We extend the algorithm and convergence results to HJBI oblique derivative problems in Section 5. Numerical examples for two-dimensional stochastic Zermelo navigation problems are presented in Section 6 to confirm the theoretical findings and to illustrate the effectiveness of our algorithms. The Appendix collects some basic results which are used in this article, and gives a proof for the main result on the HJBI oblique derivative problem.

HJBI Dirichlet problems
In this section, we introduce the HJBI Dirichlet boundary value problems of our interest, recall the appropriate notion of solutions, and state the main assumptions on their coefficients. We start with several important spaces used frequently throughout this work.
Let n ∈ N and Ω be a bounded C^{1,1} domain in R^n, i.e., a bounded open connected subset of R^n with a C^{1,1} boundary. For each integer k ≥ 0 and real p with 1 ≤ p < ∞, we denote by W^{k,p}(Ω) the standard Sobolev space of real functions whose weak derivatives of order up to k lie in the Lebesgue space L^p(Ω). When p = 2, we write H^k(Ω) for W^{k,2}(Ω). We further denote by H^{1/2}(∂Ω) and H^{3/2}(∂Ω) the spaces of traces from H^1(Ω) and H^2(Ω), respectively (see [21, Proposition 1.1.17]), which can be equivalently defined by using the surface measure σ on the boundary ∂Ω (see e.g. [19]). We shall consider the HJBI equation (2.2) with nonhomogeneous Dirichlet boundary data, where the nonlinear Hamiltonian is given as in (1.1). We now list the main assumptions on the coefficients of (2.2).
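Although the displayed definitions are omitted here, a standard Sobolev–Slobodeckij characterization of these trace norms (consistent, up to equivalent norms, with the references cited above) can be sketched as follows; the exponent n in the kernel comes from the surface dimension n − 1 plus twice the fractional order 1/2:

```latex
\|g\|_{H^{1/2}(\partial\Omega)}^{2}
  = \|g\|_{L^{2}(\partial\Omega)}^{2}
  + \int_{\partial\Omega}\!\int_{\partial\Omega}
      \frac{|g(x)-g(y)|^{2}}{|x-y|^{n}}\,
      \mathrm{d}\sigma(x)\,\mathrm{d}\sigma(y),
\qquad
\|g\|_{H^{3/2}(\partial\Omega)}^{2}
  = \|g\|_{H^{1}(\partial\Omega)}^{2}
  + \int_{\partial\Omega}\!\int_{\partial\Omega}
      \frac{|\nabla_{T} g(x)-\nabla_{T} g(y)|^{2}}{|x-y|^{n}}\,
      \mathrm{d}\sigma(x)\,\mathrm{d}\sigma(y),
```

where ∇_T denotes the tangential gradient on ∂Ω; the H^{3/2} expression in particular uses the C^{1,1} regularity of ∂Ω assumed above.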
H.1. Let n ∈ N, Ω ⊂ R^n be a bounded C^{1,1} domain, A be a nonempty finite set, and B be a nonempty compact metric space. Let g ∈ H^{3/2}(∂Ω), and let {a^{ij}}_{i,j=1}^n ⊆ C(Ω̄) satisfy the following ellipticity condition with a constant λ > 0: a^{ij}(x) ξ_i ξ_j ≥ λ|ξ|^2 for all ξ ∈ R^n and x ∈ Ω.

As we shall see in Theorem 3.3 and Corollary 3.5, the finiteness of the set A enables us to establish the semismoothness of the HJBI operator (2.2a), whose coefficients involve a general nonlinear dependence on the parameters α and β. If all coefficients of (2.2a) are in separable form, i.e., it holds for all φ = b^i, c, f that φ(x, α, β) = φ_1(x, α) + φ_2(x, β) for some functions φ_1, φ_2 (e.g. the penalized equation for variational inequalities with bilateral obstacles in [27]), then we can relax the finiteness of A to the same compactness conditions as imposed on B.
Finally, in this work we focus on boundary value problems in a C^{1,1} domain to simplify the presentation, but the numerical schemes and their convergence analysis can be extended to problems in nonsmooth convex domains with sufficiently regular coefficients (see e.g. [21,45]).
We end this section by proving the uniqueness of solutions to the Dirichlet problem (2.2) in H^2(Ω). The existence of strong solutions shall be established constructively via policy iteration below (see Theorem 3.7).
Proof. Let u, v ∈ H^2(Ω) be two strong solutions to (2.2), and consider the linear homogeneous Dirichlet problem (2.4), where the measurable coefficients b̃^i and c̃ are defined from the Hamiltonian G in (2.3). Note that the boundedness of the coefficients implies that {b̃^i}_{i=1}^n ⊆ L^∞(Ω) and c̃ ∈ L^∞(Ω). Moreover, one can directly verify that the following inequality holds for all parametrized functions (f^{α,β}, g^{α,β})_{α∈A,β∈B}: for all x ∈ R^n,

|inf_{α∈A} sup_{β∈B} f^{α,β}(x) − inf_{α∈A} sup_{β∈B} g^{α,β}(x)| ≤ sup_{α∈A, β∈B} |f^{α,β}(x) − g^{α,β}(x)|,

which together with (H.1) leads to the desired lower bound on c̃, and hence we have c̃ ≥ 0 a.e. in Ω. Then we can deduce from Theorem A.1 that the Dirichlet problem (2.4) admits a unique strong solution w* ∈ H^2(Ω), and w* = 0. Since w = u − v ∈ H^2(Ω) satisfies (2.4) a.e. in Ω and τw = 0, we see that w = u − v is a strong solution to (2.4), and hence u − v = w* = 0, which subsequently implies the uniqueness of strong solutions to the Dirichlet problem (2.2).

Policy iteration for HJBI Dirichlet problems
In this section, we propose a policy iteration algorithm for solving the Dirichlet problem (2.2). We shall also establish the global superlinear convergence of the algorithm, which subsequently gives a constructive proof for the existence of a strong solution to the Dirichlet problem (2.2).
We start by presenting the policy iteration scheme for the HJBI equations in Algorithm 1, which extends the policy iteration algorithm (or Howard's algorithm) for discrete HJB equations (see e.g.[18,8,39]) to the continuous setting.
The remaining part of this section is devoted to the convergence analysis of Algorithm 1. For notational simplicity, we first introduce two auxiliary functions defined for each (x, u, α, β) as in (3.1). Note that for all k ≥ 0 and x ∈ Ω, setting u_k(x) = (u_k(x), ∇u_k(x)) allows us to express the scheme in terms of these auxiliary functions via (3.1). We then recall several important concepts, which play a pivotal role in our subsequent analysis. The first concept ensures the existence of measurable feedback controls and the well-posedness of Algorithm 1.
Algorithm 1 Policy iteration algorithm for Dirichlet problems.
1. Choose an initial guess u_0 ∈ H^2(Ω), and set k = 0.
2. Given the iterate u_k ∈ H^2(Ω), update the control laws (α_k, β_k) by the pointwise optimization (3.2) for all α ∈ A, x ∈ Ω.
3. Solve the linear Dirichlet problem (3.3) for u_{k+1} ∈ H^2(Ω). If ‖u_{k+1} − u_k‖_{H^2(Ω)} = 0, then terminate with outputs u_{k+1}, α_k and β_k; otherwise increment k by one and go to step 2.

Definition 3.1. Let (S, Σ) be a measurable space, and let X and Y be topological spaces. A function ψ : S × X → Y is a Carathéodory function if: 1. for each x ∈ X, the function ψ(·, x) : S → Y is measurable; and 2. for each s ∈ S, the function ψ(s, ·) : X → Y is continuous.

We now recall a generalized differentiability concept for nonsmooth operators between Banach spaces, which is referred to as semismoothness in [46] and slant differentiability in [12,24]. It is well-known (see e.g. [8,45,39]) that the HJBI operator in (2.2a) is in general not Fréchet-differentiable, and this generalized differentiability is essential for showing the superlinear convergence of policy iteration applied to HJBI equations.

Definition 3.2. Let F : V ⊂ Y → Z be defined on an open subset V of the Banach space Y with images in the Banach space Z. In addition, let ∂*F : V ⇒ L(Y, Z) be a given set-valued mapping with nonempty images, i.e., ∂*F(y) ≠ ∅ for all y ∈ V. We say F is ∂*F-semismooth in V if for any given y ∈ V, we have that F is continuous near y, and

sup_{M ∈ ∂*F(y+s)} ‖F(y + s) − F(y) − Ms‖_Z = o(‖s‖_Y), as ‖s‖_Y → 0.

The set-valued mapping ∂*F is called a generalized differential of F in V.

Remark 3.2. As in [46], we always require that ∂*F has nonempty images, and hence the ∂*F-semismoothness of F in V automatically implies that the image of ∂*F is nonempty on V.

Now we are ready to analyze Algorithm 1. We first prove the semismoothness of the Hamiltonian G defined as in (2.3), by viewing it as the composition of a pointwise maximum operator and a family of HJB operators parameterized by the control α. Moreover, we shall simultaneously establish that, at each iteration, one can select measurable control laws α_k, β_k to ensure the measurability of the controlled coefficients b^i_k, c_k, f_k in the linear problem (3.3), which is essential for the well-posedness of strong solutions to (3.3), and for the well-definedness of Algorithm 1.
The following proposition establishes the semismoothness of a parameterized family of first-order HJB operators, which extends the result for scalar-valued HJB operators in [45]. Moreover, by taking advantage of the fact that the operators involve only first-order derivatives, we are able to establish that they are semismooth from H^2(Ω) to L^p(Ω) for some p > 2 (cf. [45, Theorem 13]), which is essential for the superlinear convergence of Algorithm 1.
We proceed to show that the operator F_1 is in fact ∂*F_1-semismooth from W^{1,r}(Ω) to (L^p(Ω))^{|A|}, which implies the desired conclusion due to the continuous embedding H^2(Ω) ↪ W^{1,r}(Ω). For each α ∈ A, we denote by F_{1,α} : W^{1,r}(Ω) → L^p(Ω) the α-th component of F_1, and by ∂*F_{1,α} the α-th component of ∂*F_1. Theorem A.4 and the continuity of ℓ in u show that for each (x, α) ∈ Ω × A, the associated set-valued mapping is upper hemicontinuous, from which, by following precisely the steps in the arguments for [45, Theorem 13], we can prove that F_{1,α} : W^{1,r}(Ω) → L^p(Ω) is ∂*F_{1,α}-semismooth. Then, by using the fact that a direct product of semismooth operators is again semismooth with respect to the direct product of the generalized differentials of the components (see [46, Proposition 3.6]), we deduce that F_1 : W^{1,r}(Ω) → (L^p(Ω))^{|A|} is semismooth with respect to the generalized differential ∂*F_1, which finishes the proof.
We then establish the semismoothness of a general pointwise maximum operator, by extending the result in [24] for the max-function f : x ∈ R → max(x, 0).

Proposition 3.2. Let p ∈ (2, ∞) be a given constant, A be a finite set, and Ω be a bounded subset of R^n. Let F_2 : (L^p(Ω))^{|A|} → L^2(Ω) be the pointwise maximum operator, with generalized differential ∂*F_2 defined as in (3.9): for any u = (u(·, α))_{α∈A}, ∂*F_2(u) is defined through a measurable function α_u : Ω → A satisfying (3.10).

Proof. Let the Banach space (L^p(Ω))^{|A|} be endowed with the product norm ‖·‖_{p,A} defined as in the proof of Proposition 3.1. We first show that the mappings F_2 and ∂*F_2 are well-defined, and that ∂*F_2 has nonempty images. The finiteness of A implies that any u ∈ (L^p(Ω))^{|A|} can also be viewed as a Carathéodory function u : Ω × A → R. Hence for any given u ∈ (L^p(Ω))^{|A|}, we can deduce from Theorem A.3 the existence of a measurable function α_u : Ω → A satisfying (3.10). Moreover, for any given measurable function α_u : Ω → A and v ∈ (L^p(Ω))^{|A|}, the function ∂*F_2(u)v remains Lebesgue measurable (see Remark 3.1). Then, for any given u ∈ (L^p(Ω))^{|A|} with p > 2, one can easily check that F_2(u) ∈ L^2(Ω), which subsequently implies that F_2 and ∂*F_2 are well-defined, and that the image of ∂*F_2 is nonempty on (L^p(Ω))^{|A|}. Moreover, for any u, v ∈ (L^p(Ω))^{|A|}, Hölder's inequality leads to the corresponding boundedness estimate.

Now we prove by contradiction that the operator F_2 is ∂*F_2-semismooth. Suppose there exist a constant δ > 0 and functions u, {v_k}_{k∈N} violating the semismoothness estimate (3.11), where for each k ∈ N, ∂*F_2(u + v_k) is defined with some measurable function α_{u+v_k} : Ω → A.
Then, by passing to a subsequence, we may assume that for all α ∈ A, the sequence {v k (•, α)} k∈N converges to zero pointwise a.e. in Ω, as k → ∞.
For notational simplicity, we define Σ(x, u) := arg max_{α∈A} u(x, α) for all u ∈ (L^p(Ω))^{|A|} and x ∈ Ω. Then for a.e. x ∈ Ω, we have lim_{k→∞} v_k(x, α) = 0 for all α ∈ A, and α_{u+v_k}(x) ∈ Σ(x, u + v_k) for all k ∈ N. By using the finiteness of A and the convergence of {v_k(·, α)}_{k∈N}, it is straightforward to prove by contradiction that for all such x ∈ Ω, α_{u+v_k}(x) ∈ Σ(x, u) for all large enough k.
We now derive an upper bound for the left-hand side of (3.11). For a.e. x ∈ Ω and any α_u(x) ∈ Σ(x, u), we have for each k ∈ N the corresponding pointwise bound in terms of a function φ_k, where, by applying Theorem A.3 twice, we can see that both the set-valued mapping x ⇒ Σ(x, u) and the function φ_k are measurable. We then introduce the set Ω_k. The measurability of the set-valued mapping x ⇒ Σ(x, u) implies that the associated distance function ρ(x, α) := dist(α, Σ(x, u)) is a Carathéodory function (see [1, Theorem 18.5]), which subsequently leads to the measurability of Ω_k for all k. Hence we can deduce the desired estimate, where we have used the bounded convergence theorem and the fact that for a.e. x ∈ Ω, 1_{Ω_k}(x) = 0 for all large enough k. This contradicts the hypothesis (3.11), and hence finishes our proof.

Now we are ready to conclude the semismoothness of the HJBI operator. Note that the argument in [45] does not apply directly to the HJBI operator, due to the nonconvexity of the Hamiltonian G defined as in (2.3).

Theorem 3.3. Suppose (H.1) holds, and let F be the HJBI operator in (2.2a), with the generalized differential (3.12) defined through a jointly measurable function β_u : Ω × A → B satisfying (3.8), and a measurable function α : Ω → A attaining the corresponding pointwise maximum.

Proof. Note that we can decompose the HJBI operator as F = F_2 ∘ F_1, where F_2 is the pointwise maximum operator defined in Proposition 3.2, and p is a constant satisfying p > 2 if n ≤ 2, and p ∈ (2, 2n/(n − 2)) if n > 2. Proposition 3.1 shows that F_1 is Lipschitz continuous and semismooth with respect to the generalized differential ∂*F_1 defined by (3.7), while Proposition 3.2 shows that F_2 is semismooth with respect to the uniformly bounded generalized differential ∂*F_2 defined by (3.9). Hence the composed operator F_2 ∘ F_1 is semismooth with respect to the composition of the generalized differentials (see [46, Proposition 3.8]). We can therefore conclude from Propositions 3.1 and 3.2 that F : H^2(Ω) → L^2(Ω) is semismooth on H^2(Ω), and that (3.12) is a desired generalized differential of F at u.

Note that the above characterization of the generalized differential of the HJBI operator involves a jointly measurable feedback function defined on Ω × A.
We now present a technical lemma, which allows us to view the control law β_k in (3.2) as such a feedback control on x ∈ Ω and α ∈ A.

Lemma 3.4. Suppose (H.1) holds. Let h, {h_i}_{i=1}^n : Ω → R, α_h : Ω → A be given measurable functions, and β_h : Ω → B be a measurable function satisfying (3.14) for all x ∈ Ω. Then there exists a jointly measurable function β̃_h : Ω × A → B such that β_h(x) = β̃_h(x, α_h(x)) for all x ∈ Ω, and such that β̃_h(x, α) attains the minimum in (3.15) for all x ∈ Ω and α ∈ A.

The measurability of α_h and the finiteness of A imply that the set C is measurable in the product σ-algebra on Ω × A, which, along with the joint measurability of β, leads to the joint measurability of the function β̃_h.
For any given x ∈ Ω, we have (x, α_h(x)) ∈ C, from which we can deduce from the definition of β̃_h that β̃_h(x, α_h(x)) = β_h(x) for all x ∈ Ω. Finally, for any given α_i ∈ A, we shall verify (3.15) for all x ∈ Ω and α = α_i. Let x ∈ Ω be fixed. If α_h(x) = α_i, then the fact that (x, α_i) ∈ C and the definition of β̃_h imply that β̃_h(x, α_i) = β_h(x), which along with (3.14) and α_h(x) = α_i shows that (3.15) holds at the point (x, α_i). On the other hand, if α_h(x) ≠ α_i, then (x, α_i) ∉ C and β̃_h(x, α_i) = β(x, α_i) satisfies condition (3.15) due to the selection of β.
As a direct consequence of the above extension result, we now present an equivalent characterization of the generalized differential of the HJBI operator.
The above characterization of the generalized differential of the HJBI operator enables us to demonstrate the superlinear convergence of Algorithm 1 by reformulating it as a semismooth Newton method for an operator equation.

Theorem 3.6. Suppose (H.1) holds and let u* ∈ H^2(Ω) be a strong solution to the Dirichlet problem (2.2). Then there exists a neighborhood N of u*, such that for all u_0 ∈ N, Algorithm 1 either terminates with u_k = u* for some k ∈ N, or generates a sequence {u_k}_{k∈N} that converges q-superlinearly to u* in H^2(Ω).

Proof. Note that the Dirichlet problem (2.2) can be written as an operator equation F(u) = 0, where the operator is built from the HJBI operator F defined as in (2.2a) and the trace operator τ : H^2(Ω) → H^{3/2}(∂Ω). Moreover, one can directly check that, given an iterate u_k ∈ H^2(Ω), k ≥ 0, the next iterate u_{k+1} solves the linear Dirichlet problem with the differential operator L_k ∈ ∂*F(u_k) defined as in (3.16). Since F is semismooth and τ ∈ L(H^2(Ω), H^{3/2}(∂Ω)), we can conclude that Algorithm 1 is in fact a semismooth Newton method for solving the operator equation F(u) = 0. Note that the boundedness of the coefficients and the classical theory of elliptic regularity (see Theorem A.1) imply that, under condition (H.1), there exists a constant C > 0 such that for any u ∈ H^2(Ω) and any L ∈ ∂*F(u), the inverse operator (L, τ)^{−1} : L^2(Ω) × H^{3/2}(∂Ω) → H^2(Ω) is well-defined, and the operator norm of (L, τ)^{−1} is bounded by C, uniformly in u ∈ H^2(Ω). Hence one can conclude from [46, Theorem 3.13] (see also Theorem A.5) that the iterates {u_k}_{k∈N} converge superlinearly to u* in a neighborhood N of u*.
The next theorem strengthens Theorem 3.6, and establishes a novel global convergence result for Algorithm 1 applied to the Dirichlet problem (2.2), which subsequently provides a constructive proof for the existence of solutions to (2.2). The following additional condition is essential for our proof of the global convergence of Algorithm 1:

H.2. Let the function c in (H.1) be given as c(x, α, β) = c̃(x, α, β) + c_0 for all (x, α, β) ∈ Ω × A × B, where c_0 is a sufficiently large constant depending on Ω and the coefficients.

In practice, (H.2) can be satisfied if (2.2) arises from an infinite-horizon stochastic game with a large discount factor (see e.g. [10]), or if (2.2) stems from an implicit (time-)discretization of parabolic HJBI equations with a small time stepsize.

Theorem 3.7. Suppose (H.1) and (H.2) hold. Then the Dirichlet problem (2.2) admits a unique strong solution u* ∈ H^2(Ω). Moreover, for any initial guess u_0 ∈ H^2(Ω), Algorithm 1 either terminates with u_k = u* for some k ∈ N, or generates a sequence {u_k}_{k∈N} that converges q-superlinearly to u* in H^2(Ω).

Proof. If Algorithm 1 terminates at iteration k, we have F(u_k) = L_k u_k − f_k = 0 and τu_k = g, from which we obtain from the uniqueness of strong solutions to (2.2) (Proposition 2.1) that u_k = u* is the strong solution to the Dirichlet problem (2.2). Hence we shall assume without loss of generality that Algorithm 1 runs infinitely. We now establish the global convergence of Algorithm 1 by first showing that the iterates {u_k}_{k∈N} form a Cauchy sequence in H^2(Ω). For each k ≥ 0, we deduce from (3.6) and (H.2) that τu_{k+1} = g on ∂Ω and that, for a.e. x ∈ Ω, u_{k+1} satisfies the corresponding equation with the function c̃_k(x) := c̃(x, α_k(x), β_k(x)) for all x ∈ Ω and a suitably modified Hamiltonian. Hence, by taking the difference of the equations corresponding to the indices k − 1 and k, one can obtain an equation for u_{k+1} − u_k in Ω, with τ(u_{k+1} − u_k) = 0 on ∂Ω.
It has been proved in [20, Theorem 9.14] that there exist positive constants C and γ_0, depending only on {a^{ij}}_{i,j=1}^n and Ω, such that the corresponding a priori estimate holds for all u ∈ H^2(Ω) with τu = 0 and all γ ≥ γ_0; together with the identity γu = (−a^{ij}∂_{ij}u + γu) + a^{ij}∂_{ij}u and the boundedness of {a^{ij}}_{ij}, this implies that the same estimate also holds for u itself. Thus, by assuming c_0 ≥ γ_0 and using the boundedness of the coefficients, we can deduce from (3.20) an estimate with a constant C independent of c_0 and the index k. Now we apply the following interpolation inequality (see [20, Theorem 7.28]): there exists a constant C such that for all u ∈ H^2(Ω) and ε > 0, the H^1(Ω)-norm of u is bounded by ε times its H^2(Ω)-norm plus a constant multiple (depending on ε) of its L^2(Ω)-norm. Hence, for any given ε_1 ∈ (0, 1) and ε_2 > 0, we obtain a refined estimate. Then, by taking ε_1 ∈ (0, 1), ε_2 < 1 − ε_1, and assuming that c_0 is sufficiently large, we conclude that {u_k}_{k∈N} is a Cauchy sequence with respect to an equivalent norm on H^2(Ω), and hence {u_k}_{k∈N} converges to some ū in H^2(Ω). By passing k → ∞ in (3.18) and using Proposition 2.1, we can deduce that ū = u* is the unique strong solution of (2.2). Finally, for a sufficiently large K_0 ∈ N, we can conclude the superlinear convergence of {u_k}_{k≥K_0} from Theorem 3.6.
We end this section with an important remark: if one of the sets A and B is a singleton, and a ij ∈ C 0,1 ( Ω) for all i, j, then Algorithm 1 applied to the Dirichlet problem (2.2) is in fact monotonically convergent for an arbitrary initial guess. Suppose, for instance, that A is a singleton. Then for each k ∈ N ∪ {0}, we have an inequality from which we can deduce that w k+1 := u k+1 − u k+2 is a weak subsolution to L k+1 w = 0. Thus, the weak maximum principle (see [19, Theorem 1.3.7]) and the fact that u k+1 − u k+2 = 0 for a.e. x ∈ ∂Ω (with respect to the surface measure) lead to the estimate ess sup Ω (u k+1 − u k+2 ) ≤ 0, which consequently implies that u k ≤ u k+1 for all k ∈ N and a.e. x ∈ Ω.
4 Inexact policy iteration for HJBI Dirichlet problems

Note that at each policy iteration, Algorithm 1 requires us to obtain an exact solution to a linear Dirichlet boundary value problem, which is generally infeasible. Moreover, an accurate computation of numerical solutions to linear Dirichlet boundary value problems could be expensive, especially in a high-dimensional setting. In this section, we shall propose an inexact policy iteration algorithm for (2.2), where we compute an approximate solution to (3.3) by solving an optimization problem over a family of trial functions, while maintaining the superlinear convergence of policy iteration.
We shall make the following assumption on the trial functions of the optimization problem.
H.3. The collections of trial functions {F M } M ∈N satisfy the following properties: F M ⊆ F M +1 for all M ∈ N, and ∪ M ∈N F M is dense in H 2 (Ω).

It is clear that (H.3) is satisfied by any reasonable H 2 -conforming finite element spaces (see e.g. [9]), and by the high-order polynomial spaces or kernel-function spaces used in global spectral methods (see e.g. [4,5,13,28,29]). We now demonstrate that (H.3) can also be easily satisfied by sets of multilayer feedforward neural networks, which provide effective trial functions for high-dimensional problems. Let us first recall the definition of a feedforward neural network.

Definition 4.1 (Artificial neural networks). Let L, N 0 , N 1 , . . . , N L ∈ N be given constants, and let ̺ : R → R be a given function. For each l = 1, . . . , L, let T l : R N l−1 → R N l be an affine function given as T l x = W l x + b l with W l ∈ R N l ×N l−1 and b l ∈ R N l . Then the composition F := T L ◦ ̺ ◦ T L−1 ◦ · · · ◦ ̺ ◦ T 1 is called a feedforward neural network, where the activation function ̺ is applied componentwise. We shall refer to the quantity L as the depth of F , to N 1 , . . . , N L−1 as the dimensions of the hidden layers, and to N 0 and N L as the dimensions of the input and output layers, respectively. We also refer to the number of entries of {W l , b l } L l=1 as the complexity of F .

Let {L (M ) } M ∈N and the hidden-layer dimensions {N l (M ) } M ∈N , l = 1, . . . , L (M ) − 1, be nondecreasing sequences of natural numbers. We define for each M the set F M of all neural networks with depth L (M ) , input dimension equal to n, output dimension equal to 1, and dimensions of hidden layers equal to {N l (M ) } l . The following proposition, proved in [25, Corollary 3.8], shows that neural networks with one hidden layer are dense in H 2 (Ω).
Then the family of all neural networks with depth L = 2 is dense in H 2 (Ω).

Now we discuss how to approximate the strong solutions of Dirichlet problems by reformulating the equations into optimization problems over trial functions. The idea is similar to least-squares finite-element methods (see e.g. [7]), and has been employed previously to develop numerical methods for PDEs based on neural networks (see e.g. [35,6,44]). However, compared to [6,35], we do not impose additional constraints on the trial functions by requiring that the networks exactly agree with the boundary conditions, since there is no theoretical guarantee that the constrained neural networks remain dense in the solution space. Moreover, to ensure the convergence of solutions in the H 2 (Ω)-norm, we include the H 3/2 (∂Ω)-norm of the boundary data in the cost function, instead of the L 2 (∂Ω)-norm used in [44] (see Remark 4.2 for more details).
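To make Definition 4.1 concrete, the following sketch (in NumPy, with hypothetical layer sizes of our own choosing) evaluates a feedforward network F = T L ◦ ̺ ◦ T L−1 ◦ · · · ◦ ̺ ◦ T 1 with ̺ = tanh and counts its complexity; it illustrates the definition only and is not the paper's implementation.

```python
import numpy as np

def feedforward(x, weights, biases, rho=np.tanh):
    """Evaluate F = T_L ∘ ρ ∘ T_{L-1} ∘ ... ∘ ρ ∘ T_1 at x.

    weights[l], biases[l] define the affine map T_{l+1}; the activation
    rho is applied componentwise after every layer except the last one
    (cf. Definition 4.1)."""
    a = x
    for W, b in zip(weights[:-1], biases[:-1]):
        a = rho(W @ a + b)          # hidden layers: ρ ∘ T_l
    W, b = weights[-1], biases[-1]
    return W @ a + b                # output layer: T_L (no activation)

# A depth-3 network with N_0 = 2, N_1 = N_2 = 8, N_3 = 1 (hypothetical sizes).
rng = np.random.default_rng(0)
dims = [2, 8, 8, 1]
weights = [rng.standard_normal((m, n)) for n, m in zip(dims[:-1], dims[1:])]
biases = [rng.standard_normal(m) for m in dims[1:]]
y = feedforward(np.array([0.3, -0.7]), weights, biases)
# complexity = total number of entries of {W_l, b_l}
complexity = sum(W.size + b.size for W, b in zip(weights, biases))
```

The set F M then simply collects all such functions with a fixed depth and fixed hidden-layer dimensions, the parameters {W l , b l } ranging freely.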
For each k ∈ N ∪ {0}, let u k+1 ∈ H 2 (Ω) be the unique solution to the Dirichlet problem (3.3), where L k and f k denote the linear elliptic operator and the source term in (3.3), respectively. For each M ∈ N, we shall consider the following optimization problems (4.1). The following result shows that the cost functional J k provides a computable indicator of the error.
Then, by using the assumption that u k+1 solves (3.3), we deduce that the residual term satisfies the following Dirichlet problem. Hence the boundedness of the coefficients and the regularity theory of elliptic operators (see Theorem A.1) lead to the desired estimate, where the constants C 1 , C 2 > 0 depend only on the L ∞ (Ω)-norms of a ij , b i k , c k , f k , and are independent of k. The above estimate, together with the facts that ∪ M F M is dense in H 2 (Ω) and F M ⊆ F M +1 , leads to the desired result that lim M →∞ J k,M = 0.
We now present the inexact policy iteration algorithm for the HJBI problem (2.2), where at each policy iteration we solve the linear Dirichlet problem within a given accuracy.

3. Find u k+1 ∈ F such that (4.2) holds, where L k and f k denote the linear operator and the source term in (3.3), respectively.
4. If ‖u k+1 − u k ‖ H 2 (Ω) = 0, then terminate with outputs u k+1 , α k and β k ; otherwise increment k by one and go to step 2.
Remark 4.1. In practice, the evaluation of the squared residuals J k in (4.2) depends on the choice of trial functions. For trial functions with a linear architecture, e.g. if {F M } M ∈N are finite element spaces, high-order polynomial spaces or kernel-function spaces (see [9,13,28,29]), one may evaluate the norms by applying high-order quadrature rules to the basis functions involved.
For trial functions with a nonlinear architecture, such as feedforward neural networks, we can replace the integrals in J k by empirical means over suitable collocation points in Ω and on ∂Ω, such as pseudorandom points or quasi-Monte Carlo points (see Section 6; see also [35,6,44]). In particular, due to the existence of local coordinate charts of the boundary, we can evaluate the double integral in the definition of the H 3/2 (∂Ω)-norm (see (2.1)) by first generating points in R 2(n−1) and then mapping the samples onto ∂Ω × ∂Ω. The resulting empirical least-squares problem for the (k + 1)-th policy iteration step (cf. (4.1)) can then be solved by stochastic gradient descent (SGD) algorithms; see Section 6. We remark that, instead of pre-generating all the collocation points in advance, one can perform gradient descent based on a sequence of mini-batches of points generated at each SGD iteration. This is particularly useful in higher dimensions, where many collocation points may be needed to cover the boundary, and using mini-batches avoids having to evaluate the trial functions at all collocation points in each iteration.
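As a minimal illustration of this collocation idea (our own toy example: the H 3/2 (∂Ω) boundary term is simplified to an L 2 term, and all names are hypothetical), the integrals in a least-squares cost of this type can be replaced by empirical means as follows:

```python
import numpy as np

def empirical_cost(residual, boundary_residual, interior_pts, boundary_pts,
                   vol_omega, vol_boundary):
    """Monte Carlo approximation of a least-squares cost of the form
       J(u) = ∫_Ω |L u - f|^2 dx + ∫_∂Ω |τ u - g|^2 ds,
    where `residual` and `boundary_residual` are callables returning the
    pointwise PDE and boundary residuals of a fixed trial function u."""
    r_in = np.array([residual(x) for x in interior_pts])
    r_bd = np.array([boundary_residual(s) for s in boundary_pts])
    return vol_omega * np.mean(r_in**2) + vol_boundary * np.mean(r_bd**2)

# Toy check on (0,1): u(x) = x(1-x) solves -u'' = 2 with u(0) = u(1) = 0,
# so both residuals vanish and the empirical cost is zero.
u2 = lambda x: -2.0                  # u'' of the trial function
res = lambda x: -u2(x) - 2.0         # pointwise residual -u'' - f with f = 2
bres = lambda s: s * (1.0 - s)       # boundary residual u - g with g = 0
pts = np.random.default_rng(1).uniform(0, 1, 200)
J = empirical_cost(res, bres, pts, np.array([0.0, 1.0]), 1.0, 1.0)
```

In the paper's setting the residual callables are evaluated by automatic differentiation of the network, and the cost is minimized over the network parameters by mini-batch SGD as described above.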
It is well-known (see e.g. [46,14]) that an accuracy requirement in terms of the residual ‖u k+1 − u k ‖ H 2 (Ω) is crucial for the superlinear convergence of inexact Newton methods. The next theorem establishes the global superlinear convergence of Algorithm 2.
Proof. Let u 0 ∈ F be an arbitrary initial guess. We first show that Algorithm 2 is always well-defined. For each k ∈ N ∪ {0}, if u k ∈ F is the strong solution to (3.3), then we can choose u k+1 = u k , which satisfies (4.2) and terminates the algorithm. If u k does not solve (3.3), the fact that F is dense in H 2 (Ω) enables us to find u k+1 ∈ F satisfying the criterion (4.2).
Moreover, one can clearly see from (4.2) that if Algorithm 2 terminates at iteration k, then u k is the exact solution to the Dirichlet problem (2.2). Hence in the sequel we shall assume without loss of generality that Algorithm 2 runs infinitely, i.e., ‖u k+1 − u k ‖ H 2 (Ω) > 0 and u k ≠ u * for all k ∈ N ∪ {0}.
We next show that the iterates converge to u * in H 2 (Ω) by following arguments similar to those for Theorem 3.7. For each k ≥ 0, we can deduce from (4.2) that there exist residuals f e k ∈ L 2 (Ω) and g e k ∈ H 3/2 (∂Ω) satisfying (4.3), with lim k→∞ η k = 0. Then, by taking the difference between (4.3) and (2.2), we obtain an equation in Ω and τ (u k+1 − u * ) = g e k on ∂Ω, where Ḡ is the modified Hamiltonian defined as in (3.19). Then, by proceeding along the lines of Theorem 3.7, we can obtain a positive constant C, independent of c 0 and the index k, such that the corresponding estimate holds as k → ∞, where the additional higher-order terms are due to the residuals f e k and g e k . Then, by using the interpolation inequality and assuming c 0 is sufficiently large, we can deduce that {u k } k∈N converges linearly to u * in H 2 (Ω).
We then reformulate Algorithm 2 as a quasi-Newton method for the operator equation F (u) = 0, with the operator F : u ∈ H 2 (Ω) → (F (u), τ u − g) ∈ L 2 (Ω) × H 3/2 (∂Ω) defined in the proof of Theorem 3.6. Let H 2 (Ω) * denote the strong dual space of H 2 (Ω), and •, • the dual pairing on H 2 (Ω) * × H 2 (Ω). For each k ∈ N ∪ {0}, by using the fact that ‖u k+1 − u k ‖ H 2 (Ω) > 0, we can choose w k ∈ H 2 (Ω) * satisfying w k , u k+1 − u k = −1, and introduce the following linear operators δL k ∈ L(H 2 (Ω), L 2 (Ω)) and δτ k ∈ L(H 2 (Ω), H 3/2 (∂Ω)). Then we can apply the identity F (u k ) = L k u k − f k and rewrite (4.3) accordingly, as shown in Theorem 3.6. Hence one can clearly see that (4.3) is precisely a Newton step with a perturbed operator for the equation F (u) = 0.

Now we are ready to establish the superlinear convergence of {u k } k∈N . For notational simplicity, in the subsequent analysis we shall denote by Z := L 2 (Ω) × H 3/2 (∂Ω) the Banach space with the usual product norm ‖z‖ Z := ‖z 1 ‖ L 2 (Ω) + ‖z 2 ‖ H 3/2 (∂Ω) for each z = (z 1 , z 2 ) ∈ Z. By using the semismoothness of F : H 2 (Ω) → Z (see Theorem 3.6) and the strong convergence of {u k } k∈N in H 2 (Ω), we can directly infer from Theorem A.5 that it remains to show that there exist an open neighborhood V of u * and a constant L > 0 such that the required bounds hold, and also that (4.5) holds in the limit k → ∞. The criterion (4.2) and the definitions of δL k and δτ k imply that (4.5) holds as k → ∞. Moreover, the boundedness of the coefficients a ij , b i , c, f shows that F is Lipschitz continuous. Finally, the characterization of the generalized differential of F in Theorem 3.6 and the regularity theory of elliptic operators (see Theorem A.1) show that for each v in some neighborhood V of u * we can choose an invertible operator in the generalized differential, which completes our proof of the q-superlinear convergence of {u k } k∈N .

Finally, we establish the pointwise convergence of {u k } ∞ k=1 and their derivatives. For any given γ ∈ (0, 1), the superlinear
convergence of {u k } ∞ k=1 implies that there exists a constant C > 0, depending on γ, such that ‖u k − u * ‖ 2 H 2 (Ω) ≤ Cγ 2k for all k ∈ N. Taking the summation over the index k, where we use the monotone convergence theorem in the first equality, we deduce the convergence of the corresponding series, which leads us to the pointwise convergence of u k and its partial derivatives as k → ∞.
Remark 4.2. We reiterate that merely including the L 2 (∂Ω)-norm of the boundary data in the cost functional (4.2) in general cannot guarantee the convergence of the derivatives of the numerical solutions {u k } ∞ k=1 , as can be seen from the following simple example. Let {g k } ∞ k=1 ⊆ H 3/2 (∂Ω) be a sequence such that g k → 0 in L 2 (∂Ω) but not in H 1/2 (∂Ω), and for each k ∈ N, let h k ∈ H 2 (Ω) be the strong solution to −∆h k = 0 in Ω and h k = g k on ∂Ω.
The fact that g k does not converge to 0 in H 1/2 (∂Ω) implies that h k does not converge to 0 in H 1 (Ω) as k → ∞. We now show that lim k→∞ h k = 0 in L 2 (Ω). Let w ∈ H 2 (Ω) be the solution to −∆w = h k in Ω and w = 0 on ∂Ω. Then integration by parts and the a priori estimate ‖w‖ H 2 (Ω) ≤ C‖h k ‖ L 2 (Ω) show that ‖h k ‖ L 2 (Ω) ≤ C‖g k ‖ L 2 (∂Ω) → 0 as k → ∞. Now let F be a given family of trial functions which is dense in H 2 (Ω). One can find {u k } ∞ k=1 ⊆ F satisfying lim k→∞ ‖u k − h k ‖ H 2 (Ω) = 0, and consequently u k → 0 in L 2 (Ω) as k → ∞, while {u k } ∞ k=1 does not converge to 0 in H 1 (Ω). However, the cost with only the L 2 (∂Ω)-boundary term evaluated at u k tends to 0. Similarly, one can construct functions for which the corresponding sequence does not converge to 0 in H 2 (Ω).

We end this section with a convergent approximation of the optimal control strategies based on the iterates {u k } ∞ k=1 generated by Algorithm 2. For any given u ∈ H 2 (Ω), we denote by A u (x) and B u (x, α) the sets of optimal control strategies for all α ∈ A and for a.e. x ∈ Ω. As an important consequence of the superlinear convergence of Algorithm 2, we now conclude that the feedback control strategies {α k } ∞ k=1 and {β k } ∞ k=1 generated by Algorithm 2 converge to the optimal control strategies.
Proof. Let ℓ and h be the Carathéodory functions defined by (3.4) and (3.5), respectively, and consider the following set-valued mappings:

Remark 4.3. If we assume in addition that A ⊂ X A and B ⊂ Y B for some Banach spaces X A and Y B , then by using the compactness of A and B (see (H.1)), we can conclude from the dominated convergence theorem that α k → α * in L p (Ω; X A ) and

5 Inexact policy iteration for HJBI oblique derivative problems
In this section, we extend the algorithms introduced in the previous sections to more general boundary value problems. In particular, we shall propose a neural network based policy iteration algorithm with global H 2 -superlinear convergence for solving HJBI boundary value problems with oblique derivative boundary conditions. Similar arguments can be adapted to design superlinearly convergent schemes for mixed boundary value problems with both Dirichlet and oblique derivative boundary conditions.
We consider the following HJBI oblique derivative problem (5.1), where (5.1a) is the HJBI equation given in (2.2a), posed for x ∈ Ω, and (5.1b) is an oblique boundary condition. Note that the boundary condition Bu on ∂Ω involves the traces of u and its first partial derivatives, which exist almost everywhere on ∂Ω (with respect to the surface measure).
The next proposition establishes the well-posedness of the oblique derivative problem.

Proposition 5.1. Suppose (H.4) holds. Then the oblique derivative problem (5.1) admits a unique strong solution u * ∈ H 2 (Ω).
Proof. We shall establish the uniqueness of strong solutions to (5.1) in this proof, and then explicitly construct the solution in Theorem 5.2 with the help of policy iteration; see also Theorem 3.7. Suppose that u, v ∈ H 2 (Ω) are two strong solutions to (5.1). Then w = u − v is a strong solution to the following linear oblique derivative problem (5.2), where b̃ i is defined as in Proposition 2.1. By following the same arguments as in the proof of Proposition 2.1, we can show that b̃ i , c̃ ∈ L ∞ (Ω) and c̃ ≥ µ > 0 a.e. in Ω, which, along with Theorem A.2, implies that w = 0 is the unique strong solution to (5.2), and consequently u = v in H 2 (Ω).
Now we present the neural network based policy iteration algorithm for solving the oblique derivative problem and establish its rate of convergence.

3. Find u k+1 ∈ F such that (5.3) holds, where L k , f k and B denote the linear operator in (3.3), the source term in (3.3) and the boundary operator in (5.1b), respectively.
4. If ‖u k+1 − u k ‖ H 2 (Ω) = 0, then terminate with outputs u k+1 , α k and β k ; otherwise increment k by one and go to step 2.
Note that the H 1/2 (∂Ω)-norm of the boundary term is included in the cost function J k , instead of the H 3/2 (∂Ω)-norm as in Algorithm 2. It is straightforward to see that Algorithm 3 is well-defined under (H.3) and (H.4). In fact, for each k ∈ N ∪ {0}, given the iterate u k ∈ F ⊂ H 2 (Ω), Corollary 3.5 shows that one can select measurable control laws (α k , β k ) such that the following linear oblique boundary value problem has measurable coefficients, and hence admits a unique strong solution ū k in H 2 (Ω) (see Theorem A.2). If u k = ū k , then u k solves the HJBI oblique derivative problem (5.1), and we can select u k+1 = u k and terminate the algorithm. Otherwise, the facts that J k (u k+1 ) ≤ C‖ū k − u k+1 ‖ 2 H 2 (Ω) and that F is dense in H 2 (Ω) allow us to choose u k+1 ∈ F sufficiently close to ū k such that the criterion (5.3) is satisfied, and proceed to the next iteration.
The following result is analogous to Theorem 4.3, and shows the global superlinear convergence of Algorithm 3 for solving the oblique derivative problem (5.1). The proof follows precisely the lines of Theorem 4.3, hence we shall only present the main steps in Appendix B for the reader's convenience. The convergence of the feedback control laws can be concluded similarly to Corollary 4.4 and Remark 4.3.

6 Numerical experiments: Zermelo's Navigation Problem
In this section, we illustrate the theoretical findings and demonstrate the effectiveness of the schemes through numerical experiments. We present a two-dimensional convection-dominated HJBI Dirichlet boundary value problem on an annulus, which is related to stochastic minimum time problems.
In particular, we consider the stochastic Zermelo navigation problem (see e.g. [37]), a time-optimal control problem whose objective is to find the optimal trajectories of a ship/aircraft navigating a region of strong winds, modelled by a random vector field. Given a bounded open set Ω ⊂ R n and an adapted control strategy {α t } t≥0 taking values in A, we assume the position X x,α of the ship is governed by the following controlled dynamics: where the drift coefficient b : Ω × A → R n is the sum of the velocity of the wind and the relative velocity of the ship, the nondegenerate diffusion coefficient σ : Ω → R n×n describes a random perturbation of the velocity field of the wind, and W is an n-dimensional Brownian motion defined on a filtered probability space ( Ω, {F t } t≥0 , P).
The aim of the controller is to minimize the expected exit time from the region Ω, taking model ambiguity into account in the spirit of [41]. More generally, we consider the following value function, minimized over all admissible choices of α ∈ A, where τ x,α := inf{t ≥ 0 | X x,α t ∉ Ω} denotes the first exit time of the controlled dynamics X x,α , the functions f and g denote the running cost and the exit cost, respectively, which encode the desired destinations, and M is a family of probability measures absolutely continuous with respect to P, whose densities are parametrized by a predictable process {β t } t≥0 satisfying ‖β t ‖ ∞ = max i |β t,i | ≤ κ for all t, for a given parameter κ ≥ 0. In other words, we would like to minimize a functional of the trajectory up to the exit time under the worst-case scenario, with uncertainty arising from the unknown law of the random perturbation.
By using the dual representation of E[•] and the dynamic programming principle (see e.g. [41,10]), we can characterize the value function u as the unique viscosity solution to an HJBI Dirichlet boundary value problem of the form (2.2). Moreover, under suitable assumptions, one can further show that u is the strong (Sobolev) solution to this Dirichlet problem (see e.g. [33]).
For our numerical experiments, we assume that the domain Ω is an annulus, i.e., Ω = {(x, y) ∈ R 2 | r 2 < x 2 + y 2 < R 2 }, and that the wind blows along the positive x-axis with a magnitude v c depending on a constant a ∈ [0, 1), which decreases with the distance from the bank; the random perturbation of the wind is given by the constant diffusion coefficient σ = diag(σ x , σ y ). We also assume that the ship moves with a constant velocity v s , and that the captain can control the boat's direction instantaneously, which leads to the following dynamics of the boat in the region: where α t ∈ A = [0, 2π] represents the angle (measured counter-clockwise) between the positive x-axis and the direction of the boat. Finally, we assume the exit cost satisfies g ≡ 0 on ∂B r (0) and g ≡ 1 on ∂B R (0), which reflects that the controller prefers to exit the domain through the inner boundary instead of the outer one (see Figure 1). Then the corresponding Dirichlet problem for the value function u in (6.1) is given by (6.2): u ≡ 0 on ∂B r (0), u ≡ 1 on ∂B R (0), and where ‖•‖ ℓ 1 and ‖•‖ ℓ 2 denote the ℓ 1 -norm and ℓ 2 -norm on R 2 , respectively. The optimal feedback control laws can be further computed as in (6.3), where u ∈ H 2 (Ω) is the strong solution to (6.2), and θ ∈ (−π, π] is the angle between ∇u and the positive x direction. Note that the equation (6.2) is neither convex nor concave in ∇u.

6.1 Implementation details
In this section, we discuss the implementation details of Algorithm 2 for solving (6.2) with multilayer neural networks (see Definition 4.1) as the trial functions. We introduce the architecture of the neural networks, the hyper-parameters involved, and various computational aspects of the training process.
For simplicity, we shall adopt a fixed set of trial functions F M for all policy iterations, consisting of fully-connected networks with the activation function ̺(y) = tanh(y), depth L, and hidden-layer dimension H. The hyper-parameters L and H will be chosen depending on the complexity of the problem, ensuring that F M admits sufficient flexibility to approximate the solutions within the desired accuracy. More complicated neural network architectures with shortcut connections can be adopted to further improve the performance of the algorithm (see e.g. [16,44]).
We then proceed to discuss the computation of the cost functional J k in (4.2) for each policy iteration. It is well-known that Sobolev norms of functions on sufficiently smooth boundaries can be explicitly computed via local coordinate charts of the boundaries (see e.g. [21]). In particular, due to the annulus-shaped domain and the constant boundary conditions used in our experiment, we can express the cost functional J k as in (6.4), for all k ∈ N and u ∈ F M , where we define the map Φ l : θ ∈ (−π, π) → (l cos(θ), l sin(θ)) ∈ ∂B l (0) for l = r, R. Note that we introduce an extra weighting parameter γ > 0 in (6.4), which helps achieve an optimal balance between the residual of the PDE and the residuals of the boundary data. We set the parameter γ = 0.1 for all the computations. The cost functional (6.4) is further approximated by an empirical cost via the collocation method (see [35,6]), where we discretize Ω and Θ = (−π, π) 2 by sets of collocation points, and write the discrete form of (6.4) as (6.5), for all k ∈ N and u ∈ F M , where |Ω| = π(R 2 − r 2 ) and |∂B l (0)| = 2πl for l = r, R are, respectively, the Lebesgue measures of the domain and the boundaries. Note that the choice of the smooth activation function ̺(y) = tanh(y) implies that every trial function u ∈ F M is smooth, hence all its derivatives are well-defined at any given point. For simplicity, we take the same number of collocation points in the domain and on the boundaries. It is clear that the choice of collocation points is crucial for the accuracy and efficiency of the algorithm. Since the total number of points in a regular grid grows exponentially with respect to the dimension, such a construction is infeasible for high-dimensional problems. Moreover, it is well-known that uniformly distributed pseudorandom points in high dimensions tend to cluster on hyperplanes and lead to a suboptimal distribution by relevant measures of uniformity (see e.g. [11,6]). Therefore, we shall generate
collocation points by a quasi-Monte Carlo (QMC) method based on low-discrepancy sequences. In particular, we first define points in [0, 1] 2 from the generalized Halton sequence (see [17]), and then map those points into the annulus via the polar map (x, y) → (l cos(ψ), l sin(ψ)), where l = ((R 2 − r 2 )x + r 2 ) 1/2 and ψ = 2πy for all (x, y) ∈ [0, 1] 2 . This transformation preserves fractional area, which ensures that a set of well-distributed points on the square maps to a set of points spread evenly over the annulus. We also use Halton points to approximate the (one-dimensional) boundary segments.

Now we are ready to describe the training process, i.e., how to optimize (6.5) over all trial functions in F M . The optimization is performed by the well-known Adam stochastic gradient descent (SGD) algorithm [31] with a decaying learning rate schedule. At each SGD iteration, we randomly draw a mini-batch of points of size B = 25 from the collection of collocation points, and perform gradient descent based on these samples. We initialize the learning rate at 10 −3 and decrease it by a factor of 0.5 every 2000 SGD iterations for the examples with analytic solutions in Section 6.2, while for the examples without analytic solutions in Section 6.3 we decrease the learning rate by a factor of 0.5 once the total number of iterations reaches one of the milestones 2000, 4000, 6000, 10000, 20000, 30000.
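A minimal version of this construction (using the plain Halton sequence in bases 2 and 3 rather than the generalized Halton sequence of [17]) can be sketched as follows; the square root in the radial map is what makes the transformation preserve fractional area:

```python
import numpy as np

def van_der_corput(n, base):
    """First n terms of the van der Corput sequence in the given base."""
    out = np.empty(n)
    for i in range(n):
        q, denom, x = i + 1, 1.0, 0.0
        while q > 0:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        out[i] = x
    return out

def halton_annulus(n, r, R):
    """Map 2-D Halton points in [0,1]^2 onto the annulus r < |z| < R via the
    area-preserving polar map l = sqrt((R^2 - r^2) x + r^2), psi = 2*pi*y."""
    x, y = van_der_corput(n, 2), van_der_corput(n, 3)
    l = np.sqrt((R**2 - r**2) * x + r**2)
    psi = 2 * np.pi * y
    return np.column_stack([l * np.cos(psi), l * np.sin(psi)])

# Collocation points for the annulus used in Section 6 (r = 0.5, R = sqrt(2)).
pts = halton_annulus(1000, r=0.5, R=np.sqrt(2))
```

To check the area-preservation property, note that the fraction of points falling inside radius l tends to (l 2 − r 2 )/(R 2 − r 2 ), i.e., the fractional area of the sub-annulus.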
We implement Algorithm 2 using PyTorch and perform all computations on an NVIDIA Tesla K40 GPU with 12 GB memory. The entire algorithm can be briefly summarized as follows. Let {η k } ∞ k=0 be a given sequence denoting the accuracy requirement for each policy iteration. For each k ∈ N ∪ {0}, given the previous iterate u k , we compute the feedback controls as in (6.3) and obtain the controlled coefficients as defined in (3.3). Then we apply the SGD method with analytically derived gradients to optimize J k,d over F M until we obtain a solution u k+1 satisfying J k,d (u k+1 ) ≤ η k min(‖u k+1 − u k ‖ 2 2,d , η 0 ), where ‖•‖ 2,d denotes the discrete H 2 -norm evaluated based on the training samples in Ω d . We then proceed to the next policy iteration, and terminate Algorithm 2 once the desired accuracy is achieved.
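To illustrate the outer policy iteration loop in isolation, the following self-contained toy example (entirely our own: it replaces the neural-network least-squares solve by an exact finite-difference solve, so it is a simplified finite-dimensional analogue of Algorithms 1 and 2, not the paper's solver) runs Howard's policy iteration on the one-dimensional model problem −u″ + max_{a∈[−1,1]}(a u′) + u = −u″ + |u′| + u = f on (0, 1) with u(0) = u(1) = 0:

```python
import numpy as np

def policy_iteration_1d(n=50, tol=1e-12, max_iter=30):
    """Howard's policy iteration for the 1-D model HJB problem
       -u'' + |u'| + u = f on (0,1), u(0) = u(1) = 0,
    discretized by central finite differences."""
    h = 1.0 / (n + 1)
    f = np.ones(n)                                   # constant running cost
    I = np.eye(n)
    D2 = (2 * I - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
    D1 = (np.eye(n, k=1) - np.eye(n, k=-1)) / (2 * h)
    u = np.zeros(n)
    for k in range(1, max_iter + 1):
        a = np.sign(D1 @ u)                          # policy improvement step
        L = D2 + np.diag(a) @ D1 + I                 # frozen-policy operator
        u_new = np.linalg.solve(L, f)                # policy evaluation step
        if np.linalg.norm(u_new - u) < tol:
            u = u_new
            break
        u = u_new
    residual = np.linalg.norm(D2 @ u + np.abs(D1 @ u) + u - f)
    return u, residual, k
```

In Algorithm 2 the exact linear solve in the policy evaluation step is replaced by an approximate minimization of J k,d over the network parameters, up to the tolerance η k min(‖u k+1 − u k ‖ 2 2,d , η 0 ) described above.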

6.2 Examples with analytical solutions
In this section, we shall examine the convergence of Algorithm 2 for solving Dirichlet problems of the form (6.2) with known solutions. In particular, we shall choose a running cost f such that the analytical solution to (6.2) is given by u * (x, y) = sin(πr 2 /2) − sin(π(x 2 + y 2 )/2) for all (x, y) ∈ Ω. To demonstrate the generalizability and the superlinear convergence of the numerical solutions obtained by Algorithm 2, we generate a different set of collocation points in Ω of size N val = 2000, and use them to estimate the relative error and the q-factor of the numerical solution u k obtained from the k-th policy iteration for all k ∈ N. We use neural networks of depth L = 4 and varying H as trial functions, initialize Algorithm 2 with u 0 = 0, and perform experiments with the following model parameters: a = 0.04, σ x = 0.5, σ y = 0.2, r = 0.5, R = √2, κ = 0.1 and v s = 0.6. Figure 2 depicts the performance of Algorithm 2 with different sizes of training samples and dimensions of hidden layers, denoted by N and H respectively. The hyper-parameters {η k } ∞ k=0 are chosen as η 0 = 10 and η k = 2 −k for all k ∈ N.
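The q-factors used in this section can be estimated directly from successive errors; the snippet below (with a synthetic, quadratically convergent error sequence of our own, for illustration only) shows the computation:

```python
import numpy as np

def q_factors(errors):
    """q_k = ||u_{k+1} - u*|| / ||u_k - u*||; q-superlinear convergence
    of the iterates corresponds to q_k -> 0 as k grows."""
    e = np.asarray(errors, dtype=float)
    return e[1:] / e[:-1]

# Synthetic, quadratically convergent errors e_{k+1} = e_k^2.
errs = [0.5]
for _ in range(4):
    errs.append(errs[-1] ** 2)
q = q_factors(errs)
```

For a merely linearly convergent sequence the q-factors would stagnate at a positive constant, which is how superlinear convergence is distinguished in Figures 2 and 3.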
One can clearly see from Figure 2 (left) and (middle) that, despite the fact that Algorithm 2 is initialized with a relatively poor initial guess, the numerical solutions converge superlinearly to the exact solution in the H 2 -norm for all these combinations of H and N , which confirms the theoretical result in Theorem 4.3. It is interesting to observe from Figure 2 that, even though increasing either the complexity of the networks (the red lines) or the size of training samples (the black lines) seems to accelerate the training process slightly (right), neither of them ensures a higher generalization accuracy on the testing samples (left). In all our computations, the accuracies of the numerical solutions in the L 2 -norm and the H 1 -norm are in general higher than the accuracy in the H 2 -norm. For example, both the L 2 -relative error and the H 1 -relative error of the numerical solution obtained at the 9th policy iteration with H = 80, N = 1000 are 0.0045.

Figure 3: Impact of η 0 on the performance of Algorithm 2; from left to right: relative errors (plotted in a log scale), q-factors and the overall runtime for all policy iterations.
We then proceed to analyze the effects of the hyper-parameters {η k } ∞ k=0 . Roughly speaking, the magnitude of η 0 indicates the accuracy of the iterates {u k } ∞ k=1 for the linear Dirichlet problems in the initial stage of Algorithm 2, while the decay of {η k } ∞ k=1 determines the speed at which the q-factors {q k } ∞ k=1 converge to 0, at the extra cost of solving the optimization problem in a given iteration more accurately for smaller η k . Figure 3 presents the numerical results for different choices of η 0 with a fixed training sample size N = 1000, hidden width H = 100 and η k = 2 −k for all k ≥ 1. Note that solving each linear equation extremely accurately in the initial stage, i.e., choosing η 0 to be a small value (the blue line), may not be beneficial for the overall performance of the algorithm in terms of either accuracy or computational efficiency. This is due to the fact that the initialization of the algorithm is in general far from the exact solution to the semilinear boundary value problem, and so are the solutions of the linear equations arising from the first few policy iterations. In fact, it appears in our experiments that the choices η 0 = 20, 40 lead to the optimal performance of Algorithm 2, which solves the initial equations sufficiently accurately, and leverages the superlinear convergence of policy iteration to achieve a higher accuracy. The relative errors below are evaluated on testing samples in Ω of size N val = 2000, where u * denotes the analytical solution to (6.2).
Figure 5 (left) depicts the H 2 -convergence of "DM with ‖•‖ 2 3/2,∂Ω " and "DM with ϑ‖•‖ 2 0,∂Ω " (with various choices of ϑ > 0) as the number of SGD iterations tends to infinity, which clearly shows that, compared with using the L 2 -boundary norm as in [16,44], incorporating the H 3/2 -boundary norm in the loss function helps achieve a higher H 2 -accuracy of the numerical solutions. It is interesting to point out that, even though penalizing the L 2 -norm of the boundary term with a suitable parameter ϑ helps improve the accuracy of "DM with ϑ‖•‖ 2 0,∂Ω " as suggested in [16], in our experiments ϑ = 10 leads to the best H 2 -convergence of "DM with ϑ‖•‖ 2 0,∂Ω " (after 10 4 SGD iterations) among the choices ϑ ∈ {0.1, 1, 5, 10, 20, 50, 100, 500, 1000}.
Figure 5 (right) presents the decay of the H 2 -relative errors with respect to the number of SGD iterations used in "DM with ‖•‖ 2 0,∂Ω ", "DM with ‖•‖ 2 3/2,∂Ω " and Algorithm 2, which clearly demonstrates that the superlinear convergence of policy iteration significantly accelerates the convergence of the algorithm. In particular, the accuracy enhancement of Algorithm 2 over "DM with ‖•‖ 2 0,∂Ω " (or equivalently the Deep Galerkin Method proposed in [44]) is a factor of 20 after 10 4 SGD iterations. We remark that the training time of Algorithm 2 is only slightly longer than that of "DM with ‖•‖ 2 0,∂Ω " (the runtimes of Algorithm 2 and "DM with ‖•‖ 2 0,∂Ω " with 10 4 SGD iterations are 333 and 308 seconds, respectively), since Algorithm 2 requires determining whether a given iterate solves the policy evaluation equations sufficiently accurately (see (4.2)) in order to proceed to the next policy iteration step.

6.3 Examples without analytical solutions
In this section, we shall demonstrate the performance of Algorithm 2 by solving (6.2) with f ≡ 1. This corresponds to a minimum time problem with preferred targets, whose solution in general is not known analytically. Numerical simulations are conducted with the following model parameters: a = 0.2, σ x = 0.5, σ y = 0.2, r = 0.5, R = √2 and κ = 0.1, and with two different values of v s , namely v s = 0.5 and v s = 1.2, which are associated with the two scenarios where the ship moves slower than and faster than the wind, respectively. The algorithm is initialized with u 0 = 0. We remark that this is a numerically challenging problem, since the convection term in (6.2) dominates the diffusion term, which leads to sharp changes of the solution and its derivatives near the boundaries. However, as we shall see, these boundary layers can be captured effectively by the numerical solutions of Algorithm 2. Figure 6 presents the numerical results for the two scenarios obtained by Algorithm 2 with N = 2000, η 0 = 40 and η k = 1/k for all k ∈ N. The set of trial functions consists of all fully-connected neural networks with depth L = 7 and hidden width H = 50 (the total number of parameters in this network is 12951). We can clearly see from Figure 6 (left) and (middle) that for both scenarios, the numerical solution ū and its derivatives are symmetric with respect to the axis y = 0, and change rapidly near the boundaries.
The feedback control strategies, computed by (6.3), are depicted in Figure 6 (right). If the ship starts from the left-hand side and travels toward the inner boundary, then the expected travel time to $\partial B_r(0)$ is around $(R-r)/(v_s+v_c)$, which is smaller than the exit cost along $\partial B_R(0)$. Hence the ship would move in the direction of the positive $x$-axis in both cases, $v_s < v_c$ and $v_s > v_c$. However, the optimal control is different for the two scenarios if the ship is on the right-hand side. For the case where the ship's speed is less than the wind ($v_s = 0.5$), if the ship is close to $\partial B_r(0)$, then it would move in the direction of the negative $x$-axis, hoping that the random perturbation of the wind would bring it to the preferred target, while if it is far from $\partial B_r(0)$, then it has less chance to reach $\partial B_r(0)$, so it would move along the positive $x$-axis. On the other hand, for the case where the ship's speed is larger than the wind ($v_s = 1.2$), the ship would in general try to reach the inner boundary. However, if the ship is sufficiently close to $\partial B_R(0)$ in the right-hand half-plane, then the expected travel time to $\partial B_r(0)$ is around $(R-r)/(v_s-v_c)$, which is larger than the exit cost along $\partial B_R(0)$. Hence the ship would choose to exit directly from the outer boundary.
We then analyze the convergence of Algorithm 2 by performing computations with 7-layer networks of different hidden widths $H$ (wider hidden layers are employed so that every linear Dirichlet problem can be solved more accurately) and different parameters $\{\eta_k\}_{k=0}^\infty$. For any given iterate $u_k$, we shall consider the (squared) residual of the semilinear boundary value problem (6.2), which will be evaluated similarly to (6.5) based on testing samples in $\Omega$ and on $\partial\Omega$ of the same size $N_{\rm val} = 2000$. Figure 7 presents the decay of the residuals in terms of the number of policy iterations, which suggests the $H^2$-superlinear convergence of the iterates $\{u_k\}_{k=0}^\infty$ (the $H^2$-norms of the last iterates for $v_s = 0.5$ and $v_s = 1.2$ are 20.3 and 31.8, respectively). Note that the parameter choice $\eta_k = 1/k$, $k \in \mathbb{N}$, leads to a slower and more oscillatory convergence of the iterates $\{u_k\}_{k=0}^\infty$, due to the fact that we apply a mini-batch SGD method to optimize the discrete cost functional $J_{k,d}$ for each policy iteration. A faster and smoother convergence can be achieved by choosing a more rapidly decaying $\{\eta_k\}_{k=1}^\infty$.
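The Monte Carlo evaluation of such a squared residual on validation samples can be sketched as follows. This is a minimal illustration, not the paper's implementation: the callables `F_interior` and `boundary_mismatch` are hypothetical stand-ins for the pointwise HJBI residual $F(u)$ in $\Omega$ and the boundary mismatch $u - g$ on $\partial\Omega$.

```python
import numpy as np

def squared_residual(F_interior, boundary_mismatch, x_interior, x_boundary):
    """Monte Carlo estimate of a squared BVP residual on validation samples.

    F_interior(x)        -> pointwise interior residual values, shape (N,)
    boundary_mismatch(x) -> pointwise boundary mismatch values, shape (M,)
    Both callables are hypothetical placeholders for the trained network's
    PDE residual and boundary error.
    """
    interior = np.mean(F_interior(x_interior) ** 2)   # approximates the L^2(Omega) term
    boundary = np.mean(boundary_mismatch(x_boundary) ** 2)  # boundary-norm surrogate
    return interior + boundary
```

In practice the boundary term would use the appropriate fractional Sobolev norm rather than a plain mean of squares; the sketch only shows the sampling structure.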
We further investigate the influence of the parameters $\{\eta_k\}_{k=1}^\infty$ on the accuracy and efficiency of Algorithm 2 in detail. Algorithm 2 is carried out with the same trial functions (7-layer networks with hidden width $H = 80$ and complexity 32721) but different $\{\eta_k\}_{k=1}^\infty$ (we choose different $\eta_0$ to keep the quantity $\eta_0\eta_1$ constant), and the numerical results are summarized in Table 1. One can clearly see that a more rapidly decaying $\{\eta_k\}_{k=1}^\infty$ results in a better overall performance (in terms of accuracy and computational time), even though it requires more time to solve the linear equation in each policy iteration. The rapid decay of $\{\eta_k\}_{k=1}^\infty$ not only accelerates the superlinear convergence of the iterates $\{u_k\}_{k=1}^\infty$, but also helps to eliminate the oscillation caused by the randomness in the SGD algorithm (see Figure 7), which enables us to achieve a higher accuracy with less total computational effort. However, we should keep in mind that a smaller $\{\eta_k\}_{k=1}^\infty$ means that we need to solve all linear equations with higher accuracy, which in turn requires more complex networks and more careful choices of the optimizers for $J_{k,d}$. Therefore, in general, we need to balance the superlinear convergence rate of policy iteration against the computational costs of the linear solvers, in order to achieve the optimal performance of the algorithm.

Finally, we shall compare the performance of Algorithm 2 (with $\eta_0 = 80$, $\eta_k = 2^{-k}$) and the Direct Method (with $\|\cdot\|_{X,\partial\Omega,{\rm tra}} = \|\cdot\|_{3/2,\partial\Omega,{\rm tra}}$ in (6.6)) by fixing the trial functions (7-layer networks with hidden width $H = 80$), the training samples and the learning rates of the SGD algorithms. For both methods, we shall consider the following squared residual for each iterate $\hat u_i$ obtained from the $i$-th SGD iteration (see (6.7)): $\text{HJBI Residual} := \|F(\hat u_i)\|^2_{0,\Omega,{\rm val}} + \|\hat u_i - g\|^2_{3/2,\partial\Omega,{\rm val}}$.
Figure 8 (left) presents the decay of the residuals as the number of SGD iterations tends to infinity, which demonstrates the efficiency improvement of Algorithm 2 over the Direct Method. The superlinear convergence of policy iteration helps to provide better initial guesses for the SGD algorithm, which leads to a more rapidly decaying loss curve with smaller noise (on the validation samples); the HJBI residuals obtained in the last 1000 SGD iterations of Algorithm 2 (resp. the Direct Method) oscillate around the value 0.0028 (resp. 0.0144) with a standard deviation of 0.00096 (resp. 0.0093).

Conclusions
This paper develops a neural network based policy iteration algorithm for solving HJBI boundary value problems arising in stochastic differential games of diffusion processes with controlled drift and state constraints. We establish the q-superlinear convergence of the algorithm in $H^2(\Omega)$ for an arbitrary initial guess, as well as the pointwise (almost everywhere) convergence of the numerical solutions and their (first and second order) derivatives, which subsequently leads to convergent approximations of optimal feedback controls. The convergence results also hold for general trial functions, including kernel functions and high-order separable polynomials used in global spectral methods. Numerical examples for stochastic Zermelo navigation problems are presented to illustrate the theoretical findings.
To the best of our knowledge, this is the first paper that demonstrates the global superlinear convergence of policy iteration for nonconvex HJBI equations in function spaces, and that proposes convergent neural network based numerical methods for computing the solutions of nonlinear boundary value problems and their derivatives. Natural next steps would be to extend the inexact policy iteration algorithm to parabolic HJBI equations, and to employ neural networks with tailored architectures to enhance the efficiency of the algorithm for solving high-dimensional problems.
and the following estimate holds with a constant $C$ independent of $f$ and $g$: $\|u\|_{H^2(\Omega)} \le C(\|f\|_{L^2(\Omega)} + \|g\|_{H^{1/2}(\partial\Omega)})$.
We then recall several important measurability results. The following measurable selection theorem follows from Theorems 18.10 and 18.19 in [1], and ensures the existence of a measurable selector maximizing (or minimizing) a Carathéodory function.
Theorem A.3. Let $(S, \Sigma)$ be a measurable space and $X$ be a separable metrizable space. Let $\Gamma : S \rightrightarrows X$ be a measurable set-valued mapping with nonempty compact values, and suppose $g : S \times X \to \mathbb{R}$ is a Carathéodory function. Define the value function $m : S \to \mathbb{R}$ by $m(s) = \max_{x \in \Gamma(s)} g(s, x)$, and the set-valued map $\mu : S \rightrightarrows X$ by $\mu(s) = \{x \in \Gamma(s) \mid g(s, x) = m(s)\}$. Then we have:
1. The value function $m$ is measurable.
2. The set-valued mapping $\mu$ is measurable and has nonempty compact values. Moreover, there exists a measurable function $\psi : S \to X$ satisfying $\psi(s) \in \mu(s)$ for each $s \in S$.
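To illustrate the maximizing selector of Theorem A.3 in a computable setting, the sketch below performs the pointwise maximization over a finite grid standing in for the compact set $\Gamma(s)$; this discretization is our own illustrative simplification (mirroring how the policy improvement step is carried out over a compact control set), not the paper's construction.

```python
import numpy as np

def argmax_selector(g, grid, states):
    """Pointwise maximizing selector over a discretized compact set.

    g(s, x)  -> Caratheodory objective, vectorized over arrays (assumption)
    grid     -> finite grid approximating the compact set Gamma(s), shape (n_x,)
    states   -> sample points s, shape (n_s,)

    Returns, for each state s, a grid point attaining max_x g(s, x)
    and the corresponding value m(s).
    """
    values = g(states[:, None], grid[None, :])       # shape (n_s, n_x) by broadcasting
    idx = np.argmax(values, axis=1)                   # index of a maximizer per state
    return grid[idx], values[np.arange(len(states)), idx]
```

For example, with $g(s, x) = -(x - s)^2$ the selector recovers $\psi(s) \approx s$ and $m(s) \approx 0$ up to the grid resolution.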
The following theorem shows that the arg max set-valued mapping is upper hemicontinuous. Moreover, if $Y$ is Hausdorff, then $\mu$ is upper hemicontinuous, i.e., for every $x \in X$ and every neighborhood $U$ of $\mu(x)$, there is a neighborhood $V$ of $x$ such that $z \in V$ implies $\mu(z) \subset U$.
Finally, we present a special case of [14, Theorem 2], which characterizes the q-superlinear convergence of quasi-Newton methods for a class of semismooth operator equations. For some starting point $y_0$ in $V$, let the sequence $\{y_k\}_{k\in\mathbb{N}} \subset V$ satisfy $y_k \neq y^*$ for all $k$, and be generated by the following quasi-Newton method:

Let $u_0 \in \mathcal{F}$ be an arbitrary initial guess; we shall assume without loss of generality that Algorithm 3 runs infinitely, i.e., $\|u_{k+1} - u_k\|_{H^2(\Omega)} > 0$ and $u_k \neq u^*$ for all $k \in \mathbb{N} \cup \{0\}$.
We first show that $\{u_k\}_{k\in\mathbb{N}}$ converges to the unique solution $u^*$ in $H^2(\Omega)$. For each $k \ge 0$, we can deduce from (5.3) that there exist $f^e_k \in L^2(\Omega)$ and $g^e_k \in H^{1/2}(\partial\Omega)$ such that $\|g^e_k\|_{H^{1/2}(\partial\Omega)} \le \eta_{k+1} \|u_{k+1} - u_k\|^2_{H^2(\Omega)}$, with $\lim_{k\to\infty} \eta_k = 0$. Then, we can proceed as in the proof of Theorem 4.3 and conclude that if $c \ge c_0$ for a sufficiently large $c_0$, then $\{u_k\}_{k\in\mathbb{N}}$ converges to the solution $u^*$ of (5.1).

Proposition 4.1. Let $\Omega \subset \mathbb{R}^n$ be an open bounded star-shaped domain, and

Algorithm 2. Inexact policy iteration algorithm for Dirichlet problems.
1. Choose a family of trial functions $\mathcal{F} = \{\mathcal{F}_M\}_{M\in\mathbb{N}} \subset H^2(\Omega)$, an initial guess $u_0$ in $\mathcal{F}$, a sequence $\{\eta_k\}_{k\in\mathbb{N}\cup\{0\}}$ of positive scalars, and set $k = 0$.
2. Given the iterate $u_k$, update the control laws $\alpha_k$ and $\beta_k$ by (3.1) and (3.2), respectively.
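A minimal sketch of the loop structure of Algorithm 2 is given below. All callables are hypothetical placeholders of our own naming: `update_controls` for the policy improvement step, `solve_linear_pde` for the SGD-based inexact solution of the linear Dirichlet problem up to the tolerance $\eta_k$, and `residual` for the residual of the semilinear problem; the stopping threshold is illustrative, not the paper's criterion.

```python
def inexact_policy_iteration(update_controls, solve_linear_pde, residual,
                             u0, etas, max_iter=50, tol=1e-8):
    """Sketch of an inexact policy iteration loop (assumed interfaces):

    update_controls(u)               -> feedback laws (alpha_k, beta_k)
    solve_linear_pde(u, ctrl, eta)   -> approximate solution of the linear
                                        policy evaluation problem, accurate
                                        up to the tolerance eta
    residual(u)                      -> residual of the semilinear problem
    """
    u = u0
    for k in range(max_iter):
        controls = update_controls(u)                      # policy improvement
        u_next = solve_linear_pde(u, controls, etas[k])    # inexact policy evaluation
        if residual(u_next) < tol:                         # illustrative stopping test
            return u_next, k + 1
        u = u_next
    return u, max_iter
```

The nested structure, an outer superlinearly convergent iteration whose inner linear solves only need accuracy $\eta_k$, is what allows coarse inner solves early on and high accuracy only near convergence.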

Algorithm 3. Inexact policy iteration algorithm for oblique derivative problems.
1. Choose a family of trial functions $\mathcal{F} = \{\mathcal{F}_M\}_{M\in\mathbb{N}} \subset H^2(\Omega)$, an initial guess $u_0$ in $\mathcal{F}$, a sequence $\{\eta_k\}_{k\in\mathbb{N}\cup\{0\}}$ of positive scalars, and set $k = 0$.
2. Given the iterate $u_k$, update the control laws $\alpha_k$ and $\beta_k$ by (3.1) and (3.2), respectively.

Figure 2: Impact of the training sample size N and the hidden width H on the performance of Algorithm 2; from left to right: relative errors (plotted in a log scale), q-factors and the overall runtime for all policy iterations.

Figure 5: Relative errors of the Direct Methods and Algorithm 2 with different numbers of SGD iterations (plotted in a log scale); from left to right: improvements caused by the $H^{3/2}$-boundary norm and by policy iteration.

Figure 6: Numerical results for the two different scenarios; from top to bottom: the ship moves slower than the wind ($v_s = 0.5$) and faster than the wind ($v_s = 1.2$); from left to right: the value function $\bar u$, the numerical solutions along $y = 0$, and feedback control strategies.

Figure 7: Residuals of the HJBI Dirichlet problems for the two different scenarios with respect to the number of policy iterations (plotted in a log scale); from left to right: the ship moves slower than the wind ($v_s = 0.5$) and faster than the wind ($v_s = 1.2$).

Figure 8: Performance comparison of the Direct Method and Algorithm 2 for the scenario where the ship is slower than the wind ($v_s = 0.5$); from left to right: residuals (plotted in a log scale) and overall runtime for all SGD iterations.

Theorem A.5. Let $Y, Z$ be two Banach spaces, and let $F : Y \to Z$ be a given function with a zero $y^* \in Y$. Suppose there exists an open neighborhood $V$ of $y^*$ such that $F$ is semismooth with a generalized differential $\partial^* F$ in $V$, and there exists a constant $L > 0$ such that $\|y - y^*\|_Y / L \le \|F(y) - F(y^*)\|_Z \le L \|y - y^*\|_Y$ for all $y \in V$.

$B_k s_k = -F(y_k), \qquad y_{k+1} = s_k + y_k, \qquad k = 0, 1, \ldots,$
where $\{B_k\}_{k\in\mathbb{N}}$ is a sequence of bounded linear operators in $\mathcal{L}(Y, Z)$. Let $\{A_k\}_{k\in\mathbb{N}}$ be a sequence of generalized differentials of $F$ such that $A_k \in \partial^* F(y_k)$ for all $k$, and let $E_k = B_k - A_k$. Then $y_k \to y^*$ q-superlinearly if and only if $\lim_{k\to\infty} y_k = y^*$ and $\lim_{k\to\infty} \|E_k s_k\|_Z / \|s_k\|_Y = 0$.

B Proof of Theorem 5.2
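As a concrete illustration of the quasi-Newton setting above, the following sketch runs the iteration $B_k s_k = -F(y_k)$ for a scalar equation; the function names and the test problem ($F(y) = y^2 - 2$, root $\sqrt{2}$) are illustrative choices of ours, not from the paper. When $B_k$ is the exact derivative (so $E_k = 0$), the condition $\|E_k s_k\| / \|s_k\| \to 0$ holds trivially and the iterates converge q-superlinearly (here quadratically).

```python
def quasi_newton(F, B, y0, n_steps=8):
    """Quasi-Newton iteration B_k s_k = -F(y_k), y_{k+1} = y_k + s_k,
    for a scalar equation (a sketch of the setting of Theorem A.5).

    F(y)    -> residual of the equation
    B(k, y) -> approximate derivative used at step k (hypothetical interface)
    """
    ys = [y0]
    for k in range(n_steps):
        y = ys[-1]
        s = -F(y) / B(k, y)   # solve the scalar linear system B_k s = -F(y_k)
        ys.append(y + s)
    return ys
```

Replacing `B` by a fixed scalar (a chord method) makes $E_k$ bounded away from zero and degrades the convergence to linear, which is exactly the dichotomy the theorem characterizes.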
where $\mathcal{B}_Y$ is the Borel $\sigma$-algebra of the topological space $Y$; and
2. for each $s \in S$, the function $\psi_s = \psi(s, \cdot) : X \to Y$ is continuous.