IEEE Transactions on Automatic Control, Vol.62, No.11, 6026-6031, 2017
Single Sample Fictitious Play
This paper is concerned with distributed learning and optimization in large-scale settings. The well-known fictitious play (FP) algorithm has been shown to achieve Nash equilibrium learning in certain classes of multiagent games. However, FP can be computationally difficult to implement when the number of players is large. Sampled FP (SFP) is a variant of FP that mitigates the computational difficulties arising in FP by using a Monte Carlo (i.e., sampling-based) approach. Despite its computational advantages, a shortcoming of SFP is that the number of samples that must be drawn at each iteration grows without bound as the algorithm progresses. In this paper, we propose single sample FP (SSFP), a variant of SFP in which only one sample needs to be drawn in each round of the algorithm. Convergence of SSFP to the set of Nash equilibria is proven. Simulation results show that the performance of SSFP is comparable to that of SFP, despite drawing far fewer samples.
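To illustrate the idea behind the abstract, the sketch below runs a sampling-based fictitious play on a simple two-player coordination game. It is not the paper's exact SSFP algorithm; it only demonstrates the single-sample principle: each round, every player draws just one opponent action from the empirical frequency of past play and folds it into a running payoff estimate, rather than drawing a growing batch of samples as in SFP. The payoff matrices, horizon, and update rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-player coordination game (not from the paper):
# both players receive payoff 1 when their actions match, 0 otherwise.
A = np.eye(2)   # row player's payoff matrix
B = np.eye(2)   # column player's payoff matrix

T = 5000
# Empirical action counts for each player (initialized to 1 for smoothing).
counts = [np.ones(2), np.ones(2)]
# Running estimate of the payoff of each of a player's own actions.
est = [np.zeros(2), np.zeros(2)]

for t in range(1, T + 1):
    actions = []
    # For player i, payoff[:, opp_action] gives the payoff of each own
    # action against the opponent's action; B.T arranges the column
    # player's matrix the same way.
    for i, payoff in enumerate((A, B.T)):
        opp = 1 - i
        # Single-sample step: draw ONE opponent action from the
        # empirical distribution of the opponent's past play.
        p = counts[opp] / counts[opp].sum()
        sample = rng.choice(2, p=p)
        # Fold the single sample into the running payoff estimate
        # (incremental average), instead of re-sampling a batch.
        est[i] += (payoff[:, sample] - est[i]) / t
        # Best-respond to the current estimate.
        actions.append(int(np.argmax(est[i])))
    for i in range(2):
        counts[i][actions[i]] += 1

# Empirical play frequencies; in this coordination game they concentrate
# on one of the pure Nash equilibria (both players matching).
freqs = [c / c.sum() for c in counts]
```

The key contrast with SFP is the cost per iteration: here each player draws exactly one sample per round, so the total sampling effort grows linearly in the number of iterations rather than superlinearly.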