Volume 10, Issue 1, pp 367-377

On a discrete approximation of the Hamilton-Jacobi equation of dynamic programming


Abstract

An approximation of the Hamilton-Jacobi-Bellman equation associated with the discounted infinite-horizon optimal control problem is proposed. The approximate solutions are shown to converge uniformly to the viscosity solution, in the sense of Crandall-Lions, of the original problem. Moreover, the approximate solutions are interpreted as value functions of a discrete-time control problem. This makes it possible to construct, by dynamic programming, a minimizing sequence of piecewise constant controls.
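
The following is a minimal sketch of the kind of discrete-time dynamic-programming scheme the abstract describes, not the paper's own construction. The dynamics b, running cost f, discount rate lam, grids, and the fixed-point form v_h(x) = min_a [(1 - lam*h) v_h(x + h*b(x,a)) + h*f(x,a)] are illustrative assumptions chosen so that the map is a contraction for 0 < lam*h < 1.

```python
import numpy as np

# Illustrative problem data (assumptions, not taken from the paper):
lam = 1.0                          # discount rate
h = 0.1                            # time step; need 0 < lam*h < 1
xs = np.linspace(-2.0, 2.0, 201)   # spatial grid on [-2, 2]
acts = np.linspace(-1.0, 1.0, 21)  # discretized control set A

def b(x, a):                       # controlled dynamics y' = b(y, a)
    return a

def f(x, a):                       # running cost
    return x**2 + 0.1 * a**2

def interp(v, x):                  # piecewise-linear extension of v off the grid
    return np.interp(x, xs, v)

# Value iteration: T v(x) = min_a [(1 - lam*h) v(x + h*b(x,a)) + h*f(x,a)]
# is a contraction with factor (1 - lam*h), so the iterates converge to
# the approximate value function v_h.
v = np.zeros_like(xs)
for _ in range(1000):
    # candidate values for every (grid point, action) pair
    cand = np.array([(1 - lam*h) * interp(v, xs + h*b(xs, a)) + h*f(xs, a)
                     for a in acts])
    v_new = cand.min(axis=0)
    if np.max(np.abs(v_new - v)) < 1e-10:
        break
    v = v_new

# A piecewise-constant (in time) control: at each step, take a
# minimizing action and hold it over the interval [t, t + h).
def policy(x):
    vals = [(1 - lam*h) * interp(v, x + h*b(x, a)) + h*f(x, a) for a in acts]
    return acts[int(np.argmin(vals))]

x, traj = 1.5, []
for _ in range(50):                # simulate the discrete closed loop
    a = policy(x)
    traj.append((x, a))
    x = x + h * b(x, a)
```

Holding the minimizing action constant over each step of length h is one way to realize the minimizing sequence of piecewise constant controls mentioned in the abstract; refining h would, under the abstract's convergence result, drive the discrete value toward the viscosity solution of the original problem.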

Supported in part by a CNR-NATO grant during a visit to the University of Maryland.
Communicated by A. V. Balakrishnan