IEE Proceedings-Control Theory & Applications, Vol.142, No.5, 486-492, 1995
Network-Growth Approach to Design of Feedforward Neural Networks
Multilayer feedforward networks (MFNs) are a class of artificial-neural-network (ANN) models that has been widely used in the field of pattern classification. Training in this model is usually accomplished by the well-known backpropagation algorithm, a steepest-descent optimisation that can get 'stuck' in local minima, resulting in inferior performance. On the other hand, a critical issue in applying MFNs is the need to predetermine an appropriate network size for the problem being solved. A network-growth approach is pursued to address both problems concurrently, and a progressive-training (PT) algorithm is proposed. The algorithm starts training with a one-hidden-node network and a one-pattern training subset. The training subset is then expanded by including one more pattern, and the previously trained network, with or without a new hidden node grown, is trained again to cater for the new pattern. This process continues until all the available training patterns have been taken into account. At each training stage, convergence is guaranteed and at most one hidden node is added to the previously trained network. Thus the PT algorithm is guaranteed to converge to a finite-size network with a global-minimum solution. This holds true for any real-to-real mapping task. The algorithm's effectiveness in growing reasonably sized, globally minimised networks with superior generalisation performance is demonstrated on three representative data sets.
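The abstract's training loop (start with one hidden node and one pattern, expand the subset one pattern at a time, retrain, and grow at most one hidden node per stage) can be sketched as below. This is only an illustrative reconstruction: the network class, the gradient-descent trainer, the learning rate, and the convergence tolerance are all assumptions of this sketch, not the paper's actual procedure.

```python
# Hedged sketch of the progressive-training (PT) loop described in the
# abstract.  All names, hyperparameters, and the convergence test are
# illustrative assumptions, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MFN:
    """One-hidden-layer feedforward network with a scalar output."""
    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(0.0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0

    def forward(self, x):
        h = sigmoid(self.W1 @ x + self.b1)
        return sigmoid(self.W2 @ h + self.b2), h

    def add_hidden_node(self):
        # Grow the network by exactly one hidden node.
        self.W1 = np.vstack([self.W1, rng.normal(0.0, 0.5, self.W1.shape[1])])
        self.b1 = np.append(self.b1, 0.0)
        self.W2 = np.append(self.W2, rng.normal(0.0, 0.5))

def train(net, X, y, lr=1.0, epochs=2000, tol=1e-3):
    """Per-pattern gradient descent; True if squared error falls below tol."""
    for _ in range(epochs):
        err = 0.0
        for x, t in zip(X, y):
            o, h = net.forward(x)
            e = o - t
            err += e * e
            do = e * o * (1 - o)                 # output-layer delta
            dh = do * net.W2 * h * (1 - h)       # hidden-layer deltas
            net.W2 -= lr * do * h
            net.b2 -= lr * do
            net.W1 -= lr * np.outer(dh, x)
            net.b1 -= lr * dh
        if err < tol:
            return True
    return False

def progressive_train(X, y):
    net = MFN(X.shape[1], n_hidden=1)         # one-hidden-node start
    for k in range(1, len(X) + 1):            # expand the training subset
        if not train(net, X[:k], y[:k]):      # retrain on the new subset
            net.add_hidden_node()             # at most one new node per stage
            train(net, X[:k], y[:k])
    return net

# Tiny real-to-real mapping task (XOR) as a usage example.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])
net = progressive_train(X, y)
```

Note that the abstract's convergence guarantee at each stage relies on the paper's specific construction; the plain gradient-descent trainer above may simply exhaust its epoch budget and fall back to growing a node, which is why the growth step is tied to the `train` return value.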