SIAM Journal on Control and Optimization, Vol.52, No.3, 1712-1744, 2014
A BELLMAN APPROACH FOR REGIONAL OPTIMAL CONTROL PROBLEMS IN R^N
This article is a continuation of a previous work in which we studied infinite horizon control problems whose dynamics, running cost, and control space may be different in two half-spaces of some Euclidean space R^N. In this article we extend our results in several directions: (i) to more general domains; (ii) to finite horizon control problems; (iii) to weaker controllability assumptions. We use a Bellman approach, and our main results are to identify the right Hamilton-Jacobi-Bellman equation (and, in particular, the right conditions to be imposed on the interfaces separating the regions where the dynamics and running cost differ) and to provide the maximal and minimal solutions, as well as conditions for uniqueness. We also provide stability results for such equations.
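For orientation, the display below sketches the generic form of a two-domain, infinite-horizon Hamilton-Jacobi-Bellman system with Ishii-type interface conditions, which is the kind of problem this line of work addresses; the notation (regions Ω_1 and Ω_2, interface H, Bellman Hamiltonians H_1 and H_2, discount λ, control sets A_i, dynamics b_i, running costs ℓ_i) is generic and is not claimed to match the paper's exact formulation.

\[
\begin{aligned}
&\lambda u(x) + H_1\big(x, Du(x)\big) = 0 \quad \text{in } \Omega_1, \qquad
 \lambda u(x) + H_2\big(x, Du(x)\big) = 0 \quad \text{in } \Omega_2,\\
&\min\big\{H_1(x, Du(x)),\, H_2(x, Du(x))\big\} \le 0 \quad \text{on } \mathcal{H} \quad \text{(subsolution condition)},\\
&\max\big\{H_1(x, Du(x)),\, H_2(x, Du(x))\big\} \ge 0 \quad \text{on } \mathcal{H} \quad \text{(supersolution condition)},\\
&\text{where } H_i(x,p) = \sup_{\alpha \in A_i}\big\{-b_i(x,\alpha)\cdot p - \ell_i(x,\alpha)\big\}.
\end{aligned}
\]

These Ishii conditions alone generally do not select a unique solution at the interface; identifying the additional interface condition that characterizes the value function, and distinguishing the maximal from the minimal solution, is the type of question the abstract refers to.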