SIAM Journal on Control and Optimization, Vol. 54, No. 2, pp. 987-1016, 2016
RATIONALLY INATTENTIVE CONTROL OF MARKOV PROCESSES
The article poses a general model for optimal control subject to information constraints, motivated in part by recent work of Sims and others on information-constrained decision making by economic agents. In the average-cost optimal control framework, the general model introduced in this paper reduces to a variant of the linear-programming representation of the average-cost optimal control problem, subject to an additional mutual information constraint on the randomized stationary policy. The resulting optimization problem is convex and admits a decomposition based on the Bellman error, which is the object of study in approximate dynamic programming. The theory is illustrated through the example of an information-constrained linear-quadratic-Gaussian (LQG) control problem. Some results on the infinite-horizon discounted-cost criterion are also presented.
Keywords: stochastic control; information theory; observation channels; optimization; Markov decision processes
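For concreteness, the following display sketches the type of information-constrained linear program described in the abstract. The notation is illustrative rather than the paper's exact formulation: $c$ denotes a one-stage cost, $Q$ the controlled transition kernel, $\pi$ a state distribution, $\Phi$ a randomized stationary policy, and $R$ an assumed bound on the information rate.

% Schematic sketch of an information-constrained average-cost LP;
% all symbols below are illustrative assumptions, not the paper's notation.
\begin{equation*}
\begin{aligned}
\text{minimize}\quad & \int_{\mathsf{X}\times\mathsf{U}} c(x,u)\,\Phi(\mathrm{d}u \mid x)\,\pi(\mathrm{d}x) \\
\text{over}\quad & \text{probability measures } \pi \text{ on } \mathsf{X} \text{ and Markov kernels } \Phi(\mathrm{d}u \mid x) \\
\text{subject to}\quad & \pi(A) = \int_{\mathsf{X}\times\mathsf{U}} Q(A \mid x,u)\,\Phi(\mathrm{d}u \mid x)\,\pi(\mathrm{d}x)
\quad \text{for all measurable } A \subseteq \mathsf{X}, \\
& I(\pi,\Phi) \le R,
\end{aligned}
\end{equation*}

where $I(\pi,\Phi)$ denotes the mutual information between a state $X \sim \pi$ and an action $U$ drawn according to $\Phi(\cdot \mid X)$. The first constraint is the usual invariance (balance) condition of the LP representation of average-cost optimal control; the second is the added mutual information constraint on the stationary policy.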