SIAM Journal on Control and Optimization, Vol.58, No.5, 2711-2739, 2020
A UNIVERSAL DYNAMIC PROGRAM AND REFINED EXISTENCE RESULTS FOR DECENTRALIZED STOCHASTIC CONTROL
For sequential stochastic control problems with standard Borel measurement and control action spaces, we introduce a general (universally applicable) dynamic programming formulation, establish its well-posedness, and provide new existence results for optimal policies. Our dynamic program builds in part on Witsenhausen's standard form, but with a different formulation of the state, action, and transition dynamics. Using recent results on measurability properties of strategic measures in decentralized control, we obtain a standard Borel controlled Markov model. This allows for a well-defined dynamic programming recursion through universal measurability properties of the value functions at each time stage. In addition, we obtain new existence results for optimal policies in decentralized stochastic control. These state that for a static team with independent measurements, continuity of the cost function in the actions alone suffices for the existence of an optimal policy under mild compactness (or tightness) conditions. These results also apply to dynamic teams that admit static reductions with independent measurements through a change-of-measure transformation. We show through a counterexample that weaker conditions may not guarantee the existence of an optimal team policy. The paper's existence results generalize those previously reported in the literature. A summary of, and comparison with, previously reported results is presented, along with some applications.
Keywords: stochastic control; decentralized control; information structures; existence of optimal policies