Journal of Energy Engineering-ASCE, Vol. 137, No. 3, pp. 116–129, 2011
Stochastic Optimal CPS Relaxed Control Methodology for Interconnected Power Systems Using Q-Learning Method
This paper presents the design and application of a novel stochastic optimal control methodology, based on the Q-learning method, for solving the automatic generation control (AGC) problem under the control performance standards (CPS) of the North American Electric Reliability Council (NERC). The aims of CPS are to relax the control constraints on AGC plant regulation and to enhance frequency support from interconnected control areas. The NERC CPS-based AGC problem is a dynamic stochastic decision problem that can be modeled as a reinforcement learning (RL) problem using Markov decision process theory. In this paper, the Q-learning method is adopted as the core RL algorithm, with CPS values regarded as the rewards from the interconnected power systems; the CPS control and relaxed-control objectives are formulated as immediate reward functions by means of a linear weighted aggregative approach. By regulating a closed-loop CPS control rule to maximize the long-term discounted reward during online learning, the optimal CPS control strategy is gradually obtained. The paper also introduces a practical semisupervisory group prelearning method to improve the stability and convergence of Q-learning controllers during the prelearning process. Tests on the China Southern Power Grid demonstrate that the proposed control strategy can effectively enhance the robustness and relaxation property of AGC systems while ensuring CPS compliance. DOI: 10.1061/(ASCE)EY.1943-7897.0000017. © 2011 American Society of Civil Engineers.
Keywords: Q-learning algorithm; Reinforcement learning; Automatic generation control; Control performance standard; Markov decision process; Optimal control; China Southern Power Grid
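The abstract describes the learning loop only at a high level. The following minimal Python sketch illustrates how a tabular Q-learning AGC controller with a linearly weighted immediate reward might be structured. It is an illustration under stated assumptions, not the paper's actual formulation: the state discretization, the candidate regulation command set, and all thresholds and weights (CPS1_BINS, ACTIONS, W_CPS, W_RELAX) are hypothetical values chosen for the example.

```python
import random
from collections import defaultdict

# Hypothetical discretization and action set (not from the paper).
CPS1_BINS = [100, 150, 200]                # example CPS1 thresholds (%)
ACTIONS = [-50.0, -20.0, 0.0, 20.0, 50.0]  # candidate AGC regulation commands (MW)

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1      # learning rate, discount factor, exploration rate
W_CPS, W_RELAX = 0.8, 0.2                  # assumed linear weights: CPS compliance vs. relaxed control

Q = defaultdict(float)                     # tabular Q(s, a), initialized to 0

def discretize(cps1, ace):
    """Map continuous CPS1 (%) and ACE (MW) measurements into a coarse state label."""
    cps_band = sum(cps1 >= b for b in CPS1_BINS)
    ace_band = 0 if abs(ace) < 30 else (1 if ace > 0 else -1)
    return (cps_band, ace_band)

def reward(cps1, action):
    """Linear weighted aggregation of a CPS-compliance term and a relaxed-control term."""
    r_cps = 1.0 if cps1 >= 100 else -1.0                    # reward CPS compliance
    r_relax = -abs(action) / max(abs(a) for a in ACTIONS)   # penalize regulation effort
    return W_CPS * r_cps + W_RELAX * r_relax

def choose_action(state):
    """Epsilon-greedy selection over the discrete regulation command set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, r, next_state):
    """Standard Q-learning backup toward the long-term discounted reward."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])

# Toy usage: one learning step with made-up measurements.
state = discretize(cps1=120.0, ace=45.0)
action = choose_action(state)
r = reward(120.0, action)
next_state = discretize(cps1=135.0, ace=10.0)
update(state, action, r, next_state)
```

The relaxed-control objective appears here only as an effort penalty on the regulation command; the paper's semisupervisory group prelearning stage, which warm-starts the Q-table before online operation, is omitted from this sketch.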