IEEE Transactions on Automatic Control, Vol. 62, No. 9, pp. 4549-4563, 2017
An Approximate Dynamic Programming Approach to Multiagent Persistent Monitoring in Stochastic Environments With Temporal Logic Constraints
We consider the problem of generating control policies for a team of robots moving in a stochastic environment. The team is required to achieve an optimal surveillance mission, in which a certain "optimizing proposition" needs to be satisfied infinitely often. In addition, a correctness requirement expressed as a temporal logic formula is imposed. By modeling the robots as game transition systems and the environmental elements as Markov chains, the problem reduces to finding an optimal control policy for a Markov decision process that also satisfies a temporal logic specification. The existing approaches based on dynamic programming are computationally intensive and thus infeasible for large environments and/or large numbers of robots. We propose an approximate dynamic programming (ADP) framework to obtain suboptimal policies with reduced computational complexity. Specifically, we choose a set of basis functions to approximate the optimal costs and find the best approximation through the least-squares method. We also propose a simulation-based ADP approach that further reduces the computational complexity by employing low-dimensional calculations and simulation samples.
Keywords: Approximate dynamic programming (ADP); Markov decision process (MDP); multiagent system; simulation-based method; temporal logic constraints
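To illustrate the least-squares approximation idea mentioned in the abstract, the following is a minimal sketch, not the paper's algorithm: on a toy MDP under a fixed policy, it computes the exact cost-to-go by fixed-point iteration and then fits a linear combination of basis functions to it via least squares. All names (`P`, `g`, `Phi`, `J_star`) and the discounted-cost setting are illustrative assumptions.

```python
# Sketch: least-squares fit of a linear cost approximation
# J*(x) ~= sum_k w_k * phi_k(x) on a small MDP where the exact
# cost vector is still computable. Toy setup, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_basis, gamma = 50, 5, 0.95

# Random row-stochastic transition matrix P and stage costs g
# (a fixed policy, so evaluation is a linear fixed-point problem).
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)
g = rng.random(n_states)

# Exact cost-to-go via fixed-point iteration: J = g + gamma * P @ J.
J_star = np.zeros(n_states)
for _ in range(1000):
    J_star = g + gamma * P @ J_star

# Basis matrix Phi: one column per basis function, one row per state.
Phi = rng.random((n_states, n_basis))
w, *_ = np.linalg.lstsq(Phi, J_star, rcond=None)
print("approximation error:", np.linalg.norm(Phi @ w - J_star))
```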
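The simulation-based variant can be sketched in the same spirit: instead of forming the full transition model, the weights are estimated from sampled transitions using only low-dimensional (basis-sized) matrices. The sketch below uses a generic LSTD(0)-style estimator under a discounted-cost assumption as a stand-in; it is not the paper's exact method.

```python
# Sketch: simulation-based weight estimation (LSTD(0)-style).
# Only d x d quantities are accumulated, where d = n_basis, so the
# |S| x |S| model is never needed. Illustrative stand-in only.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_basis, gamma, n_samples = 50, 5, 0.95, 20000

P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)
g = rng.random(n_states)
Phi = rng.random((n_states, n_basis))

A = np.zeros((n_basis, n_basis))
b = np.zeros(n_basis)
x = 0
for _ in range(n_samples):
    x_next = rng.choice(n_states, p=P[x])  # simulate one transition
    A += np.outer(Phi[x], Phi[x] - gamma * Phi[x_next])
    b += Phi[x] * g[x]
    x = x_next

w = np.linalg.solve(A, b)  # low-dimensional solve for the weights
print("fitted weights:", w)
```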