IEEE Transactions on Automatic Control, Vol. 64, No. 8, pp. 3355-3361, 2019
Dual Prediction-Correction Methods for Linearly Constrained Time-Varying Convex Programs
Devising efficient algorithms to solve continuously-varying strongly convex optimization programs is key in many applications, from control systems to signal processing and machine learning. In this context, solving means finding and tracking the optimizer trajectory of the continuously-varying convex program. Recently, a novel prediction-correction methodology has been put forward to set up iterative algorithms that sample the continuously-varying optimization program at discrete time steps, perform a limited number of computations to correct their approximate optimizer against the newly sampled problem, and predict how the optimizer will change at the next time step. Prediction-correction algorithms have been shown to outperform more classical, correction-only strategies: prediction-correction methods typically achieve asymptotic tracking errors of order O(h^2), where h is the sampling period, whereas classical strategies achieve order O(h). Up to now, prediction-correction algorithms have been developed in the primal space, both for unconstrained and simply constrained convex programs. In this paper, we show how to tackle linearly constrained continuously-varying problems by prediction-correction in the dual space, and we prove asymptotic error bounds similar to those of the primal versions.
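To make the sample-predict-correct cycle concrete, the following is a minimal illustrative sketch (not the paper's dual method) on a toy unconstrained problem f(x, t) = 0.5 * ||x - r(t)||^2, whose optimizer trajectory is x*(t) = r(t). The trajectory r, the step size, and the number of correction steps are all hypothetical choices for illustration; because the Hessian is the identity here, the prediction step reduces to a finite-difference shift of the iterate.

```python
import numpy as np

def r(t):
    """Time-varying target: the optimizer trajectory to track (assumed)."""
    return np.array([np.cos(t), np.sin(t)])

def grad_x(x, t):
    """Gradient of f(x, t) = 0.5 * ||x - r(t)||^2 with respect to x."""
    return x - r(t)

h = 0.1          # sampling period
alpha = 0.5      # correction step size (contraction factor 1 - alpha)
T = 100          # number of sampled time steps
x = np.zeros(2)  # initial approximate optimizer

errors = []
for k in range(T):
    t_k = k * h
    # Prediction: estimate how the optimizer moves between samples.
    # For this quadratic, Hessian = I, so the Newton-type prediction
    # reduces to shifting the iterate by the drift of the target.
    x = x + (r(t_k + h) - r(t_k))
    # Correction: a fixed, limited number of gradient steps on the
    # newly sampled problem f(., t_{k+1}).
    for _ in range(3):
        x = x - alpha * grad_x(x, t_k + h)
    errors.append(np.linalg.norm(x - r(t_k + h)))

print(f"final tracking error: {errors[-1]:.2e}")
```

On this toy problem the prediction is exact, so the tracking error contracts geometrically; in general, inexact prediction leaves the O(h^2) asymptotic floor discussed in the abstract, versus O(h) when the prediction step is skipped.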
Keywords: Dual ascent; parametric programming; prediction-correction methods; time-varying convex optimization