SIAM Journal on Control and Optimization, Vol. 54, No. 6, pp. 3106-3125, 2016
LINEARLY SOLVABLE STOCHASTIC CONTROL LYAPUNOV FUNCTIONS
This paper presents a new method for synthesizing stochastic control Lyapunov functions for a class of nonlinear stochastic control systems. The technique relies on transforming the classical nonlinear Hamilton-Jacobi-Bellman partial differential equation into a linear partial differential equation, which is possible for a class of problems satisfying a particular constraint on the stochastic forcing. This linear partial differential equation can then be relaxed to a linear differential inclusion, allowing relaxed solutions to be generated using sum of squares programming. The resulting relaxed solutions are in fact viscosity super-/subsolutions and, by the maximum principle, are pointwise upper and lower bounds on the underlying value function, even for coarse polynomial approximations. Furthermore, the pointwise upper bound is shown to be a stochastic control Lyapunov function, yielding a method for generating nonlinear controllers whose cost remains within a pointwise bound of the cost incurred by the optimal controller. These approximate solutions may be computed with nonincreasing error via a hierarchy of semidefinite optimization problems. Finally, this paper develops a priori bounds on trajectory suboptimality when using these approximate value functions and demonstrates that these methods and bounds can be applied to a more general class of nonlinear systems that do not obey the constraint on the stochastic forcing. Simulated examples illustrate the methodology.
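For readers unfamiliar with the linearization step, the following LaTeX sketch records the standard logarithmic (desirability) transform from the linearly solvable stochastic control literature, which is presumably the transformation the abstract refers to. The dynamics, cost structure, and symbols (f, G, B, q, R, lambda, Psi) are assumptions based on that standard setting, not notation taken from the paper itself.

% A minimal sketch of the standard desirability transform (assumed forms;
% cf. the linearly solvable optimal control literature).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Assume dynamics $dx = \bigl(f(x) + G(x)u\bigr)\,dt + B(x)\,d\omega$ with unit
noise covariance and instantaneous cost $q(x) + \tfrac{1}{2}u^{\top}R\,u$.
Minimizing over $u$ in the stationary Hamilton--Jacobi--Bellman equation gives
$u^{*} = -R^{-1}G^{\top}\nabla_x V$ and the nonlinear PDE
\begin{equation*}
0 = q + f^{\top}\nabla_x V
  - \tfrac{1}{2}\,\nabla_x V^{\top} G R^{-1} G^{\top} \nabla_x V
  + \tfrac{1}{2}\operatorname{tr}\bigl(B B^{\top}\nabla_{xx} V\bigr).
\end{equation*}
Under the constraint $\lambda\, G R^{-1} G^{\top} = B B^{\top}$, the
logarithmic transform $V(x) = -\lambda \log \Psi(x)$ cancels the quadratic
gradient term, leaving a \emph{linear} PDE in the desirability $\Psi$:
\begin{equation*}
0 = -\frac{q}{\lambda}\,\Psi + f^{\top}\nabla_x \Psi
  + \tfrac{1}{2}\operatorname{tr}\bigl(B B^{\top}\nabla_{xx} \Psi\bigr).
\end{equation*}
\end{document}

In this standard setting, the compatibility condition $\lambda\, G R^{-1} G^{\top} = B B^{\top}$ is the kind of "particular constraint on the stochastic forcing" the abstract mentions: noise must enter the dynamics through the same channels as the control, scaled consistently with the control penalty.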
Keywords: stochastic control Lyapunov function; sum of squares programming; Hamilton-Jacobi-Bellman equation; nonlinear systems; optimal control