Applied Mathematics and Optimization, Vol. 82, No. 2, pp. 433-450, 2020
On the Expected Total Reward with Unbounded Returns for Markov Decision Processes
We consider a discrete-time Markov decision process with Borel state and action spaces. The performance criterion is to maximize a total expected utility determined by an unbounded return function. The existence of optimal strategies is established under general conditions that allow the reward function to be unbounded both from above and from below, and the action sets available to the decision maker at each step to be noncompact. To deal with unbounded reward functions, a new characterization of the weak convergence of probability measures is derived. Our results are illustrated by examples.
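For orientation, the expected total reward criterion described above can be written in standard MDP notation; the symbols below ($x_t$, $a_t$, $r$, $\pi$) are conventional choices assumed here for illustration, not necessarily the notation used in the paper's body:

\[
V(\pi, x) \;=\; \mathbb{E}_x^{\pi}\!\left[\sum_{t=0}^{\infty} r(x_t, a_t)\right],
\qquad
V^*(x) \;=\; \sup_{\pi} V(\pi, x),
\]

where $x_t$ and $a_t$ denote the state and action at time $t$, $r$ is the (possibly unbounded) return function, and the supremum is taken over admissible strategies $\pi$; an optimal strategy $\pi^*$ satisfies $V(\pi^*, x) = V^*(x)$ for every initial state $x$.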
Keywords: Markov decision processes; Expected total reward; Unbounded return; Weak convergence of measures