IEEE Transactions on Automatic Control, Vol. 66, No. 3, pp. 1314-1320, 2021
Stochastic Approximation for Risk-Aware Markov Decision Processes
We develop a stochastic approximation-type algorithm to solve finite state/action, infinite-horizon, risk-aware Markov decision processes. Our algorithm has two loops. The inner loop computes the risk by solving a stochastic saddle-point problem. The outer loop performs Q-learning to compute an optimal risk-aware policy. Several widely investigated risk measures (e.g., conditional value-at-risk, optimized certainty equivalent, and absolute semideviation) are covered by our algorithm. Almost sure convergence and the convergence rate of the algorithm are established. For an error tolerance $\epsilon > 0$ on the optimal Q-value estimation gap and a learning rate $k \in (1/2, 1]$, the overall convergence rate of our algorithm is $\Omega\big((\ln(1/(\delta\epsilon))/\epsilon^2)^{1/k} + (\ln(1/\epsilon))^{1/(1-k)}\big)$ with probability at least $1-\delta$.
Keywords: Markov decision processes (MDPs); risk measure; saddle point; stochastic approximation; Q-learning
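As a rough illustration of the two-loop structure described in the abstract, the following Python sketch instantiates the inner saddle-point step for the conditional value-at-risk (CVaR) case via the Rockafellar-Uryasev dual representation, and lets the outer Q-learning update track the resulting risk estimate. This is a minimal sketch under stated assumptions, not the paper's exact algorithm: the simulator sample_next, the loop counts, and the step-size constants are illustrative choices; only the learning-rate exponent k in (1/2, 1] mirrors the abstract.

    import numpy as np

    def cvar_q_learning(sample_next, n_states, n_actions, gamma=0.95,
                        alpha=0.9, outer_iters=2000, inner_iters=50,
                        k=0.7, seed=0):
        """Hypothetical two-loop sketch: inner stochastic approximation on the
        CVaR dual variable eta (Rockafellar-Uryasev form), outer Q-learning
        step toward the estimated risk of the one-step cost target."""
        rng = np.random.default_rng(seed)
        Q = np.zeros((n_states, n_actions))
        for t in range(1, outer_iters + 1):
            beta = 1.0 / t**k          # outer learning rate, exponent k in (1/2, 1]
            for s in range(n_states):
                for a in range(n_actions):
                    # --- inner loop: solve min_eta eta + E[(Z - eta)^+]/(1 - alpha),
                    #     where Z = c + gamma * min_a' Q(s', a') is the random target
                    eta = 0.0
                    for m in range(1, inner_iters + 1):
                        s2, c = sample_next(s, a, rng)   # assumed simulator: next state, cost
                        z = c + gamma * Q[s2].min()
                        # stochastic subgradient of the dual objective in eta
                        g = 1.0 - (z > eta) / (1.0 - alpha)
                        eta -= g / m**k
                    # fresh sample to evaluate the CVaR objective at the fitted eta
                    s2, c = sample_next(s, a, rng)
                    z = c + gamma * Q[s2].min()
                    cvar_est = eta + max(z - eta, 0.0) / (1.0 - alpha)
                    # --- outer loop: Q-learning step toward the risk of the target
                    Q[s, a] += beta * (cvar_est - Q[s, a])
        return Q

Resetting eta and re-running the inner loop before each outer update keeps the sketch faithful to the nested two-loop structure the abstract describes; in general the inner saddle-point problem must be resolved accurately enough that each outer Q-update sees a reliable risk estimate of its moving target.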