IEEE Transactions on Automatic Control, Vol. 57, No. 9, pp. 2348-2354, 2012
Zero-Gradient-Sum Algorithms for Distributed Convex Optimization: The Continuous-Time Case
This technical note presents a set of continuous-time distributed algorithms that solve unconstrained, separable, convex optimization problems over undirected networks with fixed topologies. The algorithms are developed using a Lyapunov function candidate that exploits convexity and are called Zero-Gradient-Sum (ZGS) algorithms because they yield nonlinear networked dynamical systems that evolve invariantly on a zero-gradient-sum manifold and converge asymptotically to the unknown optimizer. We also describe a systematic way to construct ZGS algorithms, show that a subset of them converges exponentially, and obtain lower and upper bounds on their convergence rates in terms of the network topologies, problem characteristics, and algorithm parameters, including the algebraic connectivity, Laplacian spectral radius, and function curvatures. The findings of this technical note may be regarded as a natural generalization of several well-known algorithms and results for distributed consensus to distributed convex optimization.
Keywords: Distributed consensus; distributed convex optimization; multi-agent systems; networked dynamical systems
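To illustrate the idea behind the abstract, the following is a minimal numerical sketch of a ZGS flow on a small undirected network, using scalar quadratic local costs f_i(x) = (a_i/2)(x - b_i)^2, for which the Hessian is a_i and the local minimizer is b_i. Each agent starts at its own minimizer, so the sum of local gradients is zero initially and remains (exactly) zero along the discretized flow; all states converge to the minimizer of the aggregate cost. The specific curvatures, minimizers, 4-node ring topology, gain, and step size are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative local costs f_i(x) = (a_i/2)*(x - b_i)^2 (assumed values)
a = np.array([1.0, 2.0, 0.5, 3.0])   # curvatures (Hessians) of the local costs
b = np.array([4.0, -1.0, 2.0, 0.5])  # local minimizers: argmin f_i = b_i

# Fixed undirected 4-node ring topology (assumed)
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}

gamma, dt, steps = 1.0, 0.01, 20000
x = b.copy()  # ZGS initialization: each agent starts at its own minimizer

for _ in range(steps):
    # Forward-Euler step of  x_i' = gamma * (f_i''(x_i))^{-1} * sum_{j in N_i} (x_j - x_i)
    dx = np.array([gamma / a[i] * sum(x[j] - x[i] for j in neighbors[i])
                   for i in range(4)])
    x = x + dt * dx

x_star = np.dot(a, b) / a.sum()   # unique minimizer of sum_i f_i for quadratics
grad_sum = np.dot(a, x - b)       # sum of local gradients: invariantly ~0 (ZGS manifold)
print(x, x_star, grad_sum)
```

On an undirected graph the gradient-sum derivative telescopes edge by edge, which is why the Hessian-inverse weighting keeps the trajectory on the zero-gradient-sum manifold even under Euler discretization; consensus on that manifold then forces every state to the weighted average x*.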