IEEE Transactions on Automatic Control, Vol. 52, No. 1, pp. 89-94, 2007
An asymptotically efficient simulation-based algorithm for finite horizon stochastic dynamic programming
We present a simulation-based algorithm called "Simulated Annealing Multiplicative Weights" (SAMW) for solving large finite-horizon stochastic dynamic programming problems. At each iteration of the algorithm, a probability distribution over candidate policies is updated by a simple multiplicative weight rule, and with proper annealing of a control parameter, the generated sequence of distributions converges to a distribution concentrated only on the best policies. The algorithm is "asymptotically efficient" in the sense that, for the goal of estimating the value of an optimal policy, a provably convergent finite-time upper bound on the sample mean is obtained.
Keywords: learning algorithms; Markov decision processes; simulation; simulated annealing; stochastic dynamic programming
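The multiplicative weight rule described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the toy "policies," their true values, the simulation noise model, and the particular annealing schedule for the control parameter beta are all assumptions chosen for the sketch. Each policy's weight is multiplied by beta raised to a simulated value estimate, and the annealed distribution concentrates on the best policy.

```python
import math
import random

random.seed(0)

# Hypothetical setup: a handful of candidate policies, each with an
# unknown expected finite-horizon value; simulating a policy returns
# a noisy value estimate clipped to [0, 1]. All names are illustrative.
true_values = [0.3, 0.5, 0.8, 0.6]  # unknown to the algorithm

def simulate(policy):
    """One simulated episode: noisy estimate of the policy's value."""
    return min(1.0, max(0.0, true_values[policy] + random.gauss(0.0, 0.1)))

n = len(true_values)
weights = [1.0 / n] * n  # initial distribution over policies (uniform)
T = 5000

for i in range(1, T + 1):
    # Annealed control parameter: beta > 1, decreasing toward 1.
    # (One plausible schedule; the paper's schedule may differ.)
    beta = 1.0 + math.sqrt(2.0 * math.log(n) / i)
    # Multiplicative weight rule: boost each policy by beta^(value estimate).
    for p in range(n):
        weights[p] *= beta ** simulate(p)
    # Renormalize so the weights remain a probability distribution.
    z = sum(weights)
    weights = [w / z for w in weights]

best = max(range(n), key=lambda p: weights[p])
print(best, round(weights[best], 3))
```

Because every policy is updated with the same beta, the log-weight gap between two policies grows with the cumulative difference of their simulated values, so the distribution provably drifts toward the policies with the highest expected value (index 2 in this toy instance).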