SIAM Journal on Control and Optimization, Vol. 56, No. 1, pp. 231-252, 2018
STABLE OPTIMAL CONTROL AND SEMICONTRACTIVE DYNAMIC PROGRAMMING
We consider discrete-time infinite horizon deterministic optimal control problems with nonnegative cost per stage, and a destination that is cost free and absorbing. The classical linear-quadratic regulator problem is a special case. Our assumptions are very general, and allow the possibility that the optimal policy may not stabilize the system, e.g., may not drive the state to the destination either asymptotically or in a finite number of steps. We introduce a new unifying notion of stable feedback policy, based on perturbation of the cost per stage, which, in addition to implying convergence of the generated states to the destination, quantifies the speed of convergence. We consider the properties of two distinct cost functions: J*, the overall optimal, and Ĵ, the restricted optimal over just the stable policies. Different classes of stable policies (with different speeds of convergence) may yield different values of Ĵ. We show that for any class of stable policies, Ĵ is a solution of Bellman's equation, and we characterize the smallest and the largest such solutions: they are J* and J^+, the restricted optimal cost function over the class of (finitely) terminating policies. We also characterize the regions of convergence of various modified versions of the value and policy iteration algorithms, as substitutes for the standard algorithms, which may not work in general.
Keywords: stable policy; dynamic programming; shortest path; value iteration; policy iteration; discrete-time optimal control
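
As a point of reference for the setting described in the abstract, the following is a minimal sketch (not taken from the paper; the graph, stage costs, and function names are hypothetical) of standard value iteration on a toy deterministic shortest path problem with nonnegative stage costs and a cost-free, absorbing destination. In this toy instance all stage costs away from the destination are strictly positive, so the iteration started from the zero function converges to J* and the resulting one-step-lookahead policy is terminating; the modified algorithms mentioned in the abstract are aimed at the general case, where the standard algorithms may fail.

```python
# A minimal sketch (hypothetical example, not from the paper): standard value
# iteration J_{k+1}(x) = min_u [ g(x,u) + J_k(f(x,u)) ] on a toy deterministic
# shortest path problem. State 0 is the cost-free, absorbing destination and
# all stage costs are nonnegative.

# successors[x] = list of (next_state, stage_cost) pairs reachable from x.
successors = {
    0: [(0, 0.0)],                 # destination: cost free and absorbing
    1: [(0, 2.0), (2, 0.5)],
    2: [(1, 1.0), (3, 1.5)],
    3: [(0, 4.0), (2, 0.5)],
}

def value_iteration(succ, num_iters=1000, tol=1e-9):
    """Run value iteration from the zero function; return the final iterate."""
    J = {x: 0.0 for x in succ}
    for _ in range(num_iters):
        J_new = {x: min(g + J[y] for y, g in succ[x]) for x in succ}
        if max(abs(J_new[x] - J[x]) for x in succ) < tol:
            return J_new
        J = J_new
    return J

def greedy_policy(succ, J):
    """One-step lookahead policy: from x, move to the minimizing successor."""
    return {x: min(succ[x], key=lambda yg: yg[1] + J[yg[0]])[0] for x in succ}

if __name__ == "__main__":
    J_star = value_iteration(successors)
    print("Value iteration limit:", J_star)            # {0: 0.0, 1: 2.0, 2: 3.0, 3: 3.5}
    print("Greedy policy (next state):", greedy_policy(successors, J_star))
```

If some cycle in such a graph had zero cost, a policy that never reaches the destination could attain the overall optimal cost, and the zero-initialized iteration would then approach J* rather than the restricted optimal Ĵ over stable policies; this is roughly the kind of discrepancy between solutions of Bellman's equation that the abstract refers to.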