IEEE Transactions on Automatic Control, Vol.65, No.12, 5399-5406, 2020
Finding the Equilibrium for Continuous Constrained Markov Games Under the Average Criteria
For Markov games with cost constraints and continuous actions, the local constraint of a single decision maker depends on the joint actions taken by the other decision makers. Such constraints are usually handled by imposing penalties on undesired states and policies, but the penalties may fail as the game policy changes, and the mixed policies may not exist. In this article, an actor-critic deep neural network framework is used to solve this problem. The actor network establishes a continuous pure policy to replace the mixed policy, and the critic network converts the global interaction results into a local performance potential. The local search for a constrained equilibrium of the average objective is converted into an unconstrained minimax optimization. Based on this equivalent conversion, an optimality function of the local action is given to evaluate the influence of a single decision maker's action on the global system. The proposed algorithm simultaneously iterates the local constraint multiplier and the policy along opposite directions, and a numerical congestion-control example from the emerging Internet of Things demonstrates its efficiency.
Keywords: Optimization; Games; Markov processes; Internet of Things; Reinforcement learning; Heuristic algorithms; Neural networks; Constrained Markov game (MG); continuous state and action; expected average criteria; optimality equation; performance potential
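The abstract describes iterating the constraint multiplier and the policy along opposite directions, i.e., a primal-dual (gradient-descent-ascent) scheme on a Lagrangian. The sketch below is a minimal toy illustration of that idea only, not the paper's actor-critic algorithm: the objective f(x) = x^2 and constraint x >= 1 are invented for illustration, with the primal variable descending on the Lagrangian while the multiplier ascends on the constraint violation.

```python
# Toy primal-dual iteration: minimize x**2 subject to x >= 1.
# Lagrangian L(x, lam) = x**2 + lam * (1 - x); the primal variable x
# descends on L while the multiplier lam ascends, mirroring the
# "opposite directions" iteration mentioned in the abstract.
# The problem and step sizes are illustrative assumptions.

def primal_dual(lr=0.05, steps=2000):
    x, lam = 0.0, 0.0                        # primal variable, multiplier
    for _ in range(steps):
        grad_x = 2.0 * x - lam               # dL/dx: descend
        grad_lam = 1.0 - x                   # dL/dlam: ascend (violation)
        x -= lr * grad_x
        lam = max(0.0, lam + lr * grad_lam)  # multiplier stays nonnegative
    return x, lam
```

Under these assumptions the iteration converges to the constrained optimum x ≈ 1 with multiplier λ ≈ 2, the saddle point of the Lagrangian; in the paper this simple gradient pair is replaced by actor (policy) and critic (potential) network updates.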