IEEE Transactions on Automatic Control, Vol.62, No.12, 6407-6414, 2017
Stochastic Dual Averaging for Decentralized Online Optimization on Time-Varying Communication Graphs
We consider a decentralized online convex optimization problem in a network of agents, where each agent controls only a coordinate (or a part) of the global decision vector. For such a problem, we propose two decentralized stochastic variants (SODA-C and SODA-PS) of Nesterov's dual averaging method (DA), where each agent uses only a coordinate of the noise-corrupted gradient in the dual-averaging step. We show that the expected regret bounds for both algorithms grow sublinearly as O(√T) with the time horizon T, even when the underlying communication topology is time-varying. The sublinear regret is obtained when the stepsize is of the form 1/√t, the objective functions are Lipschitz-continuous convex functions with Lipschitz gradients, and the variance of the noisy gradients is bounded. We also provide simulation results for the proposed algorithms on sensor networks to complement our theoretical analysis.
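To make the ingredients of the abstract concrete, the following is a minimal, hypothetical sketch of (centralized, single-agent) stochastic dual averaging with the 1/√t stepsize mentioned above; the function and parameter names are illustrative assumptions, not the paper's SODA-C/SODA-PS algorithms, which additionally distribute coordinates across agents and communicate over a time-varying graph.

```python
import numpy as np

def stochastic_dual_averaging(grad_oracle, x0, T, radius=1.0,
                              noise_std=0.1, seed=0):
    """Sketch of Nesterov's dual averaging with noisy gradients.

    grad_oracle(x) returns the exact gradient; Gaussian noise with
    bounded variance (noise_std**2) is added to model the stochastic
    setting described in the abstract. Names here are illustrative.
    """
    rng = np.random.default_rng(seed)
    z = np.zeros_like(x0)          # running sum of (noisy) gradients
    x = x0.copy()
    avg = np.zeros_like(x0)        # running average of iterates
    for t in range(1, T + 1):
        g = grad_oracle(x) + noise_std * rng.standard_normal(x.shape)
        z += g                     # dual-averaging accumulation step
        alpha = 1.0 / np.sqrt(t)   # 1/sqrt(t) stepsize from the abstract
        x = -alpha * z             # primal update for psi(x) = ||x||^2 / 2
        # project onto a Euclidean ball (compact feasible set)
        norm = np.linalg.norm(x)
        if norm > radius:
            x *= radius / norm
        avg += (x - avg) / t
    return avg
```

Under the stated assumptions (convex Lipschitz objective, bounded gradient-noise variance), the averaged iterate returned here is the quantity whose suboptimality shrinks at the O(1/√T) rate implied by O(√T) regret.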
Keywords: Decentralized optimization; online optimization; stochastic dual-averaging method; pseudo-regret