IEEE Transactions on Automatic Control, Vol. 48, No. 11, pp. 1951-1961, 2003
Two-person zero-sum Markov games: Receding horizon approach
We consider a receding horizon approach as an approximate solution to two-person zero-sum Markov games with infinite-horizon discounted cost and average cost criteria. We first present error bounds from the optimal equilibrium value of the game when both players take "correlated" receding horizon policies that are based on exact or approximate solutions of receding finite-horizon subgames. Motivated by the worst-case optimal control of queueing systems by Altman, we then analyze error bounds when the minimizer plays the (approximate) receding horizon control and the maximizer plays the worst-case policy. We finally discuss some state-space-size-independent methods to compute the value of the subgame approximately for the approximate receding horizon control, along with heuristic receding horizon policies for the minimizer.
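To make the receding horizon idea concrete, the following is a minimal sketch (not the paper's notation or algorithm) of a horizon-H receding horizon control for the minimizer in a discounted zero-sum Markov game: the finite-horizon subgame value is computed by a backward Shapley recursion, each stage matrix game is solved by linear programming, and the minimizer plays the first-stage equilibrium strategy of the subgame rooted at the current state. The data layout (cost[s, a, b], P[s, a, b, s']) and function names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game(Q):
    """Value and minimizer's optimal mixed strategy of the zero-sum matrix
    game Q (rows: minimizer actions, columns: maximizer actions)."""
    m, n = Q.shape
    # Decision variables: x (m mixed-strategy weights) and v (game value).
    c = np.zeros(m + 1)
    c[-1] = 1.0                                 # minimize v
    A_ub = np.hstack([Q.T, -np.ones((n, 1))])   # sum_a x_a Q[a, b] <= v for every b
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0
    b_eq = np.array([1.0])                      # x is a probability vector
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

def receding_horizon_action(s, cost, P, gamma, H):
    """Minimizer's receding horizon (mixed) action at state s, obtained from
    the H-step finite-horizon subgame with discount factor gamma.
    cost[s, a, b] is the one-stage cost, P[s, a, b, s'] the transition law."""
    S = cost.shape[0]
    V = np.zeros(S)                             # terminal value of the subgame
    # Backward (Shapley) recursion for the (H-1)-step value function.
    for _ in range(H - 1):
        V_new = np.empty(S)
        for t in range(S):
            Q = cost[t] + gamma * (P[t] @ V)    # stage matrix game at state t
            V_new[t], _ = matrix_game(Q)
        V = V_new
    # Play the first-stage equilibrium strategy of the subgame rooted at s.
    Q = cost[s] + gamma * (P[s] @ V)
    _, x = matrix_game(Q)
    return x
```

At each decision epoch the minimizer would recompute (or approximate) this quantity at the newly observed state, which is what makes the policy "receding horizon"; the paper's error bounds quantify how far such policies are from the optimal equilibrium value as a function of the horizon length and the subgame approximation error.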