IEEE Transactions on Automatic Control, Vol.63, No.9, 3046-3053, 2018
On the Whittle Index for Restless Multiarmed Hidden Markov Bandits
We consider a restless multiarmed bandit in which each arm can be in one of two states. When an arm is sampled, the state of the arm is not available to the sampler. Instead, a binary signal with known state-dependent statistics is available. No signal is available if the arm is not sampled. An arm-dependent reward is accrued from each sampling. In each time step, each arm changes state according to known transition probabilities, which, in turn, depend on whether or not the arm is sampled. Since the state of the arm is never visible and has to be inferred from the current belief and a possible binary signal, we call this the hidden Markov bandit. Our interest is in a policy to select the arm(s) in each time step to maximize the infinite-horizon discounted reward. Specifically, we seek to use the Whittle index in selecting the arms. We first analyze the single-armed bandit and show that, in general, it admits an approximate threshold-type optimal policy when there is a positive reward for the "no-sample" action. We also identify several special cases for which the threshold policy is indeed the optimal policy. Next, we show that such a single-armed bandit also satisfies an approximate-indexability property. For the case when the single-armed bandit admits a threshold-type optimal policy, we compute the Whittle index for each arm. Numerical examples illustrate the analytical results.
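To make the belief evolution described above concrete, the following is a minimal Python sketch of the one-step Bayesian belief update for a single two-state hidden Markov arm, assuming the setup in the abstract: a binary signal with known state-dependent statistics when the arm is sampled, no signal otherwise, and action-dependent transition matrices. All names (`P_sampled`, `P_rest`, `rho`) are illustrative placeholders, not the paper's notation.

```python
import numpy as np

def belief_update(pi, sampled, signal=None,
                  P_sampled=None, P_rest=None, rho=None):
    """One-step belief update for a two-state hidden Markov arm.

    pi        : current belief P(state = 1)
    sampled   : whether the arm is sampled this step
    signal    : observed binary signal (only meaningful if sampled)
    P_sampled : 2x2 transition matrix when the arm is sampled
                (P[i, j] = P(next state = j | current state = i))
    P_rest    : 2x2 transition matrix when the arm rests
    rho       : rho[s] = P(signal = 1 | state = s), the known
                state-dependent signal statistics
    """
    if sampled:
        # Bayes correction using the observed binary signal.
        like1 = rho[1] if signal == 1 else 1.0 - rho[1]
        like0 = rho[0] if signal == 1 else 1.0 - rho[0]
        post = pi * like1 / (pi * like1 + (1.0 - pi) * like0)
        P = P_sampled
    else:
        # No signal when the arm is not sampled: prediction only.
        post = pi
        P = P_rest
    # Propagate the (possibly corrected) belief one Markov step.
    return post * P[1, 1] + (1.0 - post) * P[0, 1]

# Example with hypothetical parameters:
P_s = np.array([[0.9, 0.1], [0.2, 0.8]])
P_r = np.array([[0.7, 0.3], [0.1, 0.9]])
rho = np.array([0.2, 0.8])
pi = belief_update(0.5, sampled=True, signal=1,
                   P_sampled=P_s, P_rest=P_r, rho=rho)
```

Under a threshold-type policy of the kind the paper establishes, the arm would be sampled whenever this belief crosses an arm-specific threshold; the Whittle index is then the subsidy for the "no-sample" action at which the two actions are equally attractive at a given belief.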