IEEE Transactions on Automatic Control, Vol.59, No.3, 629-644, 2014
Learning in Mean-Field Games
The purpose of this paper is to show how insight obtained from a mean-field model can be used to create an architecture for approximate dynamic programming (ADP) for a certain class of games comprising a large number of agents. The general technique is illustrated with the aid of a mean-field oscillator game model introduced in our prior work. The states of the model are interpreted as the phase angles of a collection of nonhomogeneous oscillators, and in this way the model may be regarded as an extension of the classical coupled oscillator model of Kuramoto. The paper introduces ADP techniques for the design and adaptation (learning) of approximately optimal control laws for this model. For this purpose, a parameterization is proposed, based on an analysis of the mean-field PDE model for the game. In an offline setting, a Galerkin procedure is introduced to choose the optimal parameters, while in an online setting a steepest descent algorithm is proposed. The paper provides a detailed analysis of the optimal parameter values as well as of the Bellman error under both the Galerkin approximation and the online algorithm. Finally, a phase transition result is described for the large-population limit when each oscillator uses the approximately optimal control law. A critical value of the control penalty parameter is identified: above this value the oscillators remain incoherent, and below this value (when control is sufficiently cheap) the oscillators synchronize. These conclusions are illustrated with results from numerical experiments.
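As a rough point of reference for the classical Kuramoto coupled-oscillator model that the abstract says the game model extends, the sketch below simulates the standard (uncontrolled) Kuramoto dynamics and reports the order parameter that distinguishes incoherence from synchrony. This is not the paper's mean-field game or its ADP control law; the population size, coupling strength, frequency spread, and step size are illustrative assumptions only.

```python
import numpy as np

# Minimal sketch of the classical Kuramoto model:
#   dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
# The parameters below are illustrative, not taken from the paper.

rng = np.random.default_rng(0)

N = 200        # number of oscillators
K = 2.0        # coupling strength (larger K favors synchrony)
dt = 0.01      # Euler step size
steps = 5000

omega = rng.normal(0.0, 0.5, size=N)          # heterogeneous natural frequencies
theta = rng.uniform(0.0, 2 * np.pi, size=N)   # random initial phases

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]:
    r near 0 indicates incoherence, r near 1 indicates synchrony."""
    return np.abs(np.mean(np.exp(1j * theta)))

for _ in range(steps):
    # Mean-field form of the coupling: each oscillator interacts with the
    # population only through the complex order parameter z = r * exp(i*psi),
    # so K * Im(z * exp(-i*theta_i)) equals (K/N) * sum_j sin(theta_j - theta_i).
    z = np.mean(np.exp(1j * theta))
    dtheta = omega + K * np.imag(z * np.exp(-1j * theta))
    theta = (theta + dt * dtheta) % (2 * np.pi)

print(f"order parameter r = {order_parameter(theta):.3f}")
```

In this classical model, synchrony emerges when the coupling strength exceeds a critical threshold relative to the frequency spread; the phase transition described in the abstract is analogous in spirit, but there the bifurcation parameter is the control penalty in the game, not a fixed coupling gain.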