SIAM Journal on Control and Optimization, Vol. 40, No. 5, pp. 1358-1383, 2002
Viscosity solutions of the Bellman equation for exit time optimal control problems with vanishing Lagrangians
We study the Hamilton-Jacobi-Bellman equation for undiscounted exit time optimal control problems with fully nonlinear systems and fully nonlinear singular Lagrangians using the dynamic programming approach. We prove a local uniqueness theorem characterizing the value functions for these problems as the unique viscosity solutions of the corresponding Hamilton-Jacobi-Bellman equations that satisfy appropriate boundary conditions. The novelty of this theorem lies in the relaxed hypotheses on the lower bound of the Lagrangian and the very general assumptions on the target set. As a corollary, we show that the value function for the Fuller problem is the unique viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation that vanishes at the origin and satisfies certain growth conditions. This implies, as special cases, first that the value function of this problem is the unique proper viscosity solution of the corresponding Hamilton-Jacobi-Bellman equation in the class of all functions which are continuous in the plane and null at the origin, and second that this value function is the unique viscosity solution of that equation in a class which includes functions that are not bounded below. We also apply our results to the degenerate eikonal equation of geometric optics and to the shape-from-shading equations of image processing. Our theorem also applies to problems with noncompact targets and unbounded control sets whose Lagrangians take negative values.
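For orientation, the following sketch records the standard form of the objects named in the abstract; the notation ($f$, $\ell$, $\mathcal{T}$, $A$) is a common convention and not taken from the paper. For an exit time problem with dynamics $\dot{x} = f(x, a)$, control set $A$, Lagrangian (running cost) $\ell$, and target set $\mathcal{T}$, the value function and its Hamilton-Jacobi-Bellman equation read, schematically:

```latex
% Value function: infimal cost over controls steering x to the target T,
% where t_x(alpha) is the first time the trajectory from x under alpha hits T.
v(x) \;=\; \inf_{\alpha(\cdot)} \int_0^{t_x(\alpha)} \ell\big(y_x(s,\alpha), \alpha(s)\big)\, ds

% Undiscounted HJB equation on the complement of the target,
% with zero boundary data on the target itself:
\sup_{a \in A} \Big\{ -f(x,a)\cdot Dv(x) \;-\; \ell(x,a) \Big\} \;=\; 0
    \quad \text{in } \mathbb{R}^N \setminus \mathcal{T},
\qquad v = 0 \ \text{on } \partial\mathcal{T}.
```

A "vanishing Lagrangian" means $\ell$ is not bounded below by a positive constant; in the Fuller problem, for instance, the dynamics are $\dot{x}_1 = x_2$, $\dot{x}_2 = u$ with $|u| \le 1$, the running cost is $\ell = x_1^2$ (which vanishes on the $x_2$-axis), and the target is the origin, which is why standard uniqueness theorems requiring $\ell \ge \ell_0 > 0$ do not apply.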