SIAM Journal on Control and Optimization, Vol. 36, No. 2, pp. 719-741, 1998
Rates of convergence for approximation schemes in optimal control
We present a simple method for obtaining rate-of-convergence estimates for approximations in optimal control problems. Although the method applies to a wide range of approximation problems, it requires in every case some type of smoothness of the quantity being approximated. We illustrate the method with a number of examples, including finite difference schemes for stochastic and deterministic optimal control problems. A general principle can be abstracted from these examples, and the method extends to approximation problems not a priori related to control theory, such as the numerical approximation of nonlinear partial differential equations.
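As a concrete illustration of the kind of discretization the abstract alludes to (and not the specific construction analyzed in the paper), the sketch below implements a standard monotone Markov chain / semi-Lagrangian-type approximation of a one-dimensional discounted deterministic control problem in Python. The dynamics, running cost, grid size, and stopping tolerance are all illustrative assumptions.

```python
import numpy as np

# Hypothetical example: a monotone finite-difference / Markov chain
# approximation for the 1-D discounted deterministic control problem
#     minimize  integral_0^inf exp(-lam*t) * x(t)**2 dt,   dx/dt = a,  a in {-1, +1},
# whose value function v solves the HJB equation  lam*v(x) + |v'(x)| - x**2 = 0.
# All parameters below are chosen only for illustration.

lam = 1.0                 # discount rate
h = 0.01                  # mesh size (also the time step, since |a| = 1)
x = np.arange(-1.0, 1.0 + h, h)
n = x.size
beta = np.exp(-lam * h)   # one-step discount factor

v = np.zeros(n)
for _ in range(10_000):
    # Candidate one-step costs for controls a = +1 (move right) and a = -1
    # (move left); at the boundary the state is simply held in place.
    right = np.empty(n)
    left = np.empty(n)
    right[:-1] = x[:-1] ** 2 * h + beta * v[1:]
    right[-1] = x[-1] ** 2 * h + beta * v[-1]
    left[1:] = x[1:] ** 2 * h + beta * v[:-1]
    left[0] = x[0] ** 2 * h + beta * v[0]
    v_new = np.minimum(right, left)     # optimize over the two controls
    if np.max(np.abs(v_new - v)) < 1e-10:
        v = v_new
        break
    v = v_new

print("approximate value at x = 0:", v[n // 2])
```

The fixed-point iteration is a contraction with factor exp(-lam*h), so it converges to the unique grid solution; rate-of-convergence questions of the type studied in the paper concern how this grid solution differs from the exact value function as h tends to zero.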
Keywords: Hamilton-Jacobi equations; differential equations; Bellman equation; large deviations; upper bounds