Applied Mathematics and Optimization, Vol.31, No.3, 297-326, 1995
Optimality, Stability, and Convergence in Nonlinear Control
Sufficient optimality conditions for infinite-dimensional optimization problems are derived in a setting that is applicable to optimal control with endpoint constraints and with equality and inequality constraints on the controls. These conditions involve controllability of the system dynamics, independence of the gradients of active control constraints, and a relatively weak coercivity assumption for the integral cost functional. Under these hypotheses, we show that the solution to an optimal control problem is Lipschitz stable relative to problem perturbations. As an application of this stability result, we establish convergence results for the sequential quadratic programming algorithm and for penalty and multiplier approximations applied to optimal control problems.
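To make the sequential quadratic programming (SQP) convergence result concrete, here is a minimal sketch of one classical SQP iteration for a finite-dimensional equality-constrained problem: at each step, the KKT system of the local quadratic subproblem is solved for the step and multiplier. The toy problem (quadratic cost, one linear constraint) and all function names are illustrative assumptions, not taken from the paper, whose setting is infinite-dimensional optimal control.

```python
import numpy as np

# Hypothetical toy problem, chosen for illustration only:
#   minimize  f(x) = x1^2 + x2^2   subject to  c(x) = x1 + x2 - 1 = 0.
# The constrained minimizer is (0.5, 0.5).

def f_grad(x):    # gradient of f
    return 2.0 * x

def lag_hess(x):  # Hessian of the Lagrangian (here just the Hessian of f)
    return 2.0 * np.eye(2)

def c(x):         # equality constraint residual
    return np.array([x[0] + x[1] - 1.0])

def c_jac(x):     # constraint Jacobian
    return np.array([[1.0, 1.0]])

def sqp(x, iters=10):
    """Basic SQP: solve the KKT system of the QP subproblem each step."""
    for _ in range(iters):
        H, A = lag_hess(x), c_jac(x)
        m = A.shape[0]
        kkt = np.block([[H, A.T], [A, np.zeros((m, m))]])
        rhs = np.concatenate([-f_grad(x), -c(x)])
        step = np.linalg.solve(kkt, rhs)
        x = x + step[: x.size]
    return x

x_star = sqp(np.array([3.0, -2.0]))
print(x_star)  # converges to (0.5, 0.5)
```

Because the toy objective is quadratic and the constraint linear, a single KKT solve already lands on the minimizer; the paper's contribution is showing that, under its coercivity and controllability hypotheses, this kind of iteration remains well behaved for perturbed optimal control problems.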