AIChE Journal, Vol. 52, No. 7, 2473-2485, 2006
Approximate dynamic programming: Application to process supply chain management
Many continuous process industries must operate multiproduct supply chains under significant uncertainty in market demands and prices, and mathematical programming formulations have been developed for the deterministic version of the problem. However, available extensions to the stochastic case cannot be readily used to obtain an optimal operating policy in practice, because the uncertainty causes an explosion in the number of scenarios to be considered over an extended time horizon. In this article, we represent the uncertainty through Markov chains and use an approach based on stochastic dynamic programming (DP), which can generate a dynamic operating policy that incorporates information about the uncertainty at each time step. Exact DP is computationally infeasible for realistic problems because of the size of the state and action spaces. We therefore propose a novel stochastic optimization framework called "DP in a heuristically restricted state space." The restricted state space is generated by simulating various potential scenarios under a centralized dynamic inventory and production policy obtained by combining static local inventory-policy heuristics; DP is then solved over only the states visited in these simulations. The resulting DP policy responds to time-varying product demand by appropriately stitching together decisions made by the local static policies, providing a new formulation for effectively combining heuristic policies. (c) 2006 American Institute of Chemical Engineers.
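The core idea can be illustrated with a minimal toy sketch: simulate a static heuristic (here a simple base-stock rule) to collect the set of visited states, then run backward-induction DP restricted to that set. Everything below — the single-product setting, the two-state Markov demand chain, the cost parameters, and the base-stock heuristic — is an illustrative assumption, not the paper's actual multiproduct formulation.

```python
import random

# Toy instance (all numbers are illustrative assumptions):
# one product, integer inventory, demand driven by a 2-state
# Markov chain ("low"/"high" market), production chosen each period.
DEMAND = {0: 1, 1: 3}                 # demand in market state 0 / 1
P = {0: {0: 0.7, 1: 0.3},             # Markov transition probabilities
     1: {0: 0.4, 1: 0.6}}
MAX_PROD = 4                          # production capacity per period
HOLD, SHORT = 1.0, 10.0               # holding / shortage cost per unit
T = 6                                 # horizon length


def base_stock_action(inv, target=4):
    """Static local heuristic: produce up to a base-stock target."""
    return max(0, min(MAX_PROD, target - inv))


def step(inv, prod, demand):
    """Inventory balance with lost sales (no backorders carried)."""
    avail = inv + prod
    next_inv = max(0, avail - demand)
    cost = HOLD * next_inv + SHORT * max(0, demand - avail)
    return next_inv, cost


def restricted_states(n_sims=200, seed=0):
    """Simulate the heuristic policy to collect the visited state space."""
    rng = random.Random(seed)
    visited = set()
    for _ in range(n_sims):
        inv, m = 0, 0
        for _ in range(T):
            visited.add((inv, m))
            prod = base_stock_action(inv)
            m = 0 if rng.random() < P[m][0] else 1
            inv, _ = step(inv, prod, DEMAND[m])
    return visited


def dp_policy(states):
    """Backward induction restricted to the simulated state space."""
    V = {s: 0.0 for s in states}      # terminal values
    policy = {}
    for t in reversed(range(T)):
        V_new, pol_t = {}, {}
        for (inv, m) in states:
            best, best_a = float("inf"), 0
            for a in range(MAX_PROD + 1):
                exp_cost = 0.0
                for m2, p in P[m].items():
                    inv2, c = step(inv, a, DEMAND[m2])
                    # states outside the restricted space get value 0
                    # (a simple boundary treatment for this sketch)
                    exp_cost += p * (c + V.get((inv2, m2), 0.0))
                if exp_cost < best:
                    best, best_a = exp_cost, a
            V_new[(inv, m)] = best
            pol_t[(inv, m)] = best_a
        V = V_new
        policy[t] = pol_t
    return policy, V


states = restricted_states()
policy, V = dp_policy(states)
```

The resulting `policy[t]` maps each visited state to a production decision, so at run time the controller looks up the DP action rather than the static heuristic — mirroring how the paper's approach stitches together decisions informed by the local policies.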