SIAM Journal on Control and Optimization, Vol. 57, No. 4, pp. 2843-2872, 2019
INFINITE HORIZON AVERAGE COST DYNAMIC PROGRAMMING SUBJECT TO TOTAL VARIATION DISTANCE AMBIGUITY
We analyze the per unit time infinite horizon average cost Markov control model, subject to total variation distance ambiguity on the conditional distribution of the controlled process. This stochastic optimal control problem is formulated as a minimax optimization problem in which the minimization is over the admissible set of control strategies, while the maximization is over the set of conditional distributions that lie in a ball, with respect to the total variation distance, centered at a nominal distribution. We derive two new equivalent dynamic programming equations and a new policy iteration algorithm. The main feature of the new dynamic programming equations is that the optimal control strategies are insensitive to inaccuracies or ambiguities in the conditional distribution of the controlled process. The main feature of the new policy iteration algorithm is that the policy evaluation and policy improvement steps are performed using the maximizing conditional distribution, which is obtained via a water-filling solution that aggregates states to form new states. Throughout the paper we illustrate the new dynamic programming equations and the corresponding policy iteration algorithm with various examples.
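In schematic form, and under notation assumed here rather than taken from the abstract (f the one-stage cost, g an admissible control strategy, Q^o the nominal conditional distribution, and R >= 0 the radius of the total variation ball), the minimax problem described above may be sketched as

\[
J^{*} \;=\; \inf_{g}\;\sup_{Q\,:\,\|Q(\cdot\mid x,u)-Q^{o}(\cdot\mid x,u)\|_{TV}\,\le\,R}\;\limsup_{n\to\infty}\frac{1}{n}\,\mathbb{E}^{g,Q}\!\left[\sum_{t=0}^{n-1} f(x_t,u_t)\right],
\]

where the inner supremum ranges over the conditional distributions in the total variation ball centered at the nominal distribution, and the outer infimum ranges over the admissible control strategies.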
Keywords: stochastic control; Markov control models; minimax; dynamic programming; average cost; infinite horizon; total variation distance; policy iteration