SIAM Journal on Control and Optimization, Vol.48, No.5, 3501-3531, 2010
OPTIMAL CONTROL UNDER STOCHASTIC TARGET CONSTRAINTS
We study a class of Markovian optimal stochastic control problems in which the controlled process $Z^\nu$ is constrained to satisfy an almost sure constraint $Z^\nu(T) \in G \subset \mathbb{R}^{d+1}$ $\mathbb{P}$-a.s. at some final time $T > 0$. When the set is of the form $G := \{(x,y) \in \mathbb{R}^d \times \mathbb{R} : g(x,y) \ge 0\}$, with $g$ nondecreasing in $y$, we provide a Hamilton-Jacobi-Bellman characterization of the associated value function. It gives rise to a state constraint problem, where the constraint can be expressed in terms of an auxiliary value function $w$ which characterizes the set $D := \{(t,z) \in [0,T] \times \mathbb{R}^{d+1} : Z^\nu_{t,z}(T) \in G \text{ a.s. for some } \nu\}$. Contrary to standard state constraint problems, the domain $D$ is not given a priori, and we do not need to impose conditions on its boundary. Instead, it is naturally incorporated in the auxiliary value function $w$, which is itself a viscosity solution of a nonlinear parabolic PDE. Applying ideas recently developed in Bouchard, Elie, and Touzi [SIAM J. Control Optim., 48 (2009), pp. 3123-3150], our general result also allows us to consider optimal control problems with moment constraints of the form $\mathbb{E}[g(Z^\nu(T))] \ge 0$ or $\mathbb{P}[g(Z^\nu(T)) \ge 0] \ge p$.
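The reduction described above can be sketched in formulas. The following display is an illustrative reconstruction (the notation $V$, $\mathcal{U}$, and $J$ for the admissible controls and reward functional is assumed here, not quoted from the paper): since $g$ is nondecreasing in its last argument $y$, the target region $D$ is the epigraph of $w$, so the a.s. target constraint becomes the state constraint $(t, X^\nu(t)) \in \{y \ge w\}$.

```latex
% Constrained control problem: maximize over controls nu for which
% the terminal target Z^nu_{t,z}(T) in G holds almost surely.
V(t,z) := \sup \Big\{ J(t,z;\nu) \;:\; \nu \in \mathcal{U},\;
          Z^{\nu}_{t,z}(T) \in G \ \text{a.s.} \Big\}.

% Writing z = (x,y) and using that g is nondecreasing in y,
% the target set D is characterized by an auxiliary value function w:
w(t,x) := \inf \big\{ y \in \mathbb{R} \;:\;
          Z^{\nu}_{t,x,y}(T) \in G \ \text{a.s. for some } \nu \in \mathcal{U} \big\},

% so that the unknown domain is simply the epigraph of w:
D = \big\{ (t,x,y) \in [0,T] \times \mathbb{R}^{d} \times \mathbb{R}
          \;:\; y \ge w(t,x) \big\}.
```

This is why no a priori boundary condition on $D$ is needed: the boundary $\{y = w(t,x)\}$ is recovered from $w$, which solves its own nonlinear parabolic PDE in the viscosity sense.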
Keywords: optimal control; state constraint problems; stochastic target problem; discontinuous viscosity solutions