Automatica, Vol. 71, pp. 348-360, 2016
Value iteration and adaptive dynamic programming for data-driven adaptive optimal control design
This paper presents a novel non-model-based, data-driven adaptive optimal controller design for linear continuous-time systems with completely unknown dynamics. Inspired by stochastic approximation theory, a continuous-time version of the traditional value iteration (VI) algorithm is presented with rigorous convergence analysis. This VI method is crucial for developing new adaptive dynamic programming methods that solve the adaptive optimal control problem and the stochastic robust optimal control problem for linear continuous-time systems. Fundamentally different from existing results, a priori knowledge of an initial admissible control policy is no longer required. The efficacy of the proposed methodology is illustrated by two examples and a brief comparative study between VI and earlier policy iteration methods.
Keywords: Value iteration; Adaptive dynamic programming; Optimal control; Adaptive control; Stochastic approximation
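To make the idea of a continuous-time VI recursion concrete, the following is a minimal sketch only, not the paper's data-driven algorithm: it assumes the dynamics (A, B) and weights (Q, R) are known and iterates the standard continuous-time Riccati map with diminishing step sizes of the Robbins-Monro type suggested by the abstract's stochastic-approximation viewpoint. The function name, step-size schedule, and example matrices are illustrative assumptions; the paper's actual contribution replaces the model-based terms with quantities identified from input/state data.

```python
# Illustrative sketch of a continuous-time value-iteration (VI) recursion for LQR.
# Assumes KNOWN dynamics (A, B) purely for exposition; the paper's method is
# data-driven and does not require (A, B) or an initial stabilizing gain.
import numpy as np

def ct_value_iteration(A, B, Q, R, P0=None, iters=2000):
    """Iterate P_{k+1} = P_k + eps_k * H(P_k), where
    H(P) = A'P + P A + Q - P B R^{-1} B' P  (continuous-time Riccati map).
    Step sizes eps_k satisfy sum eps_k = inf, sum eps_k^2 < inf,
    mimicking a stochastic-approximation schedule."""
    n = A.shape[0]
    P = np.zeros((n, n)) if P0 is None else P0.copy()
    Rinv = np.linalg.inv(R)
    for k in range(iters):
        H = A.T @ P + P @ A + Q - P @ B @ Rinv @ B.T @ P
        eps = 1.0 / (k + 10)          # diminishing step size (illustrative choice)
        P = P + eps * H
    K = Rinv @ B.T @ P                # resulting feedback gain, u = -K x
    return P, K

if __name__ == "__main__":
    # Hypothetical second-order example with an unstable open-loop mode.
    A = np.array([[0.0, 1.0], [1.0, -1.0]])
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)
    R = np.eye(1)
    P, K = ct_value_iteration(A, B, Q, R)
    print("Approximate ARE solution P:\n", P)
    print("Feedback gain K:", K)
```

Under these assumptions the iterate P_k is an Euler discretization of the Riccati differential flow and approaches the stabilizing solution of the algebraic Riccati equation without ever requiring an initial admissible (stabilizing) policy, which is the qualitative point the abstract makes about VI versus earlier policy-iteration schemes.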