SIAM Journal on Control and Optimization, Vol. 32, No. 3, pp. 612-634, 1994
Optimal Control on the L∞ Norm of a Diffusion Process
Stochastic control problems are considered in which the cost to be minimized is either a running maximum of the state variable or, more generally, a running maximum of a function of the state variable and the control. In both cases it is proved that the value function, which must be defined on an augmented state space to account for the non-Markovian character of the running maximum, is the unique viscosity solution of the associated Bellman equation, which turns out, in the second case, to be a variational inequality with an oblique derivative boundary condition. Most of this work consists of proving the convergence of L^p approximations, and this is done by purely partial differential equation (PDE) methods.
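The L^p approximation exploits the elementary fact that, over a time interval of unit measure, the L^p norm of a path increases to its sup (L∞) norm as p grows. The following sketch (not from the paper, which proceeds by PDE methods) simply illustrates this pointwise convergence on a simulated Brownian path; the path, the seed, and the discretization are illustrative choices only.

```python
import numpy as np

# Illustration: sup_{s<=T} |X_s| is the limit, as p -> infinity, of the
# L^p norm (int_0^T |X_s|^p ds)^(1/p) when T = 1 (a probability measure),
# and the L^p norms increase monotonically to the sup norm.

rng = np.random.default_rng(0)
T, n = 1.0, 100_000
dt = T / n
# Simulated standard Brownian motion path on [0, T].
x = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])

sup_norm = np.max(np.abs(x))

def lp_norm(p):
    # Riemann-sum approximation of (int_0^T |X_s|^p ds)^(1/p);
    # factoring out the max avoids floating-point overflow for large p.
    return sup_norm * (np.sum((np.abs(x) / sup_norm) ** p) * dt) ** (1.0 / p)

for p in (2, 8, 32, 128, 512):
    print(f"p = {p:4d}   L^p norm = {lp_norm(p):.6f}   sup norm = {sup_norm:.6f}")
```

On a unit-length interval the printed L^p values increase with p toward the sup norm, which is why the running maximum in the cost can be recovered as the limit of the (Markovian-friendly) L^p running costs.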
Keywords: partial differential equations; Hamilton-Jacobi equations; viscosity solutions; time problems; principle