IEEE Transactions on Automatic Control, Vol. 40, No. 8, pp. 1359-1375, 1995
Minimum-Seeking Properties of Analog Neural Networks with Multilinear Objective Functions
In this paper, we study the problem of minimizing a multilinear objective function over the discrete set {0,1}^n. This extends earlier work that addressed the problem of minimizing a quadratic function over {0,1}^n. A gradient-type neural network is proposed to perform the optimization. A novel feature of the network is the introduction of a so-called bias vector. The network is operated in the high-gain region of the sigmoidal nonlinearities. The following comprehensive theorem is proved: for all sufficiently small bias vectors except those belonging to a set of measure zero, for all sufficiently large sigmoidal gains, and for all initial conditions except those belonging to a nowhere dense set, the state of the network converges to a local minimum of the objective function. This is a considerable generalization of earlier results for quadratic objective functions. Moreover, the proofs here are completely rigorous. The neural network-based approach to optimization is briefly compared to the so-called interior-point methods of nonlinear programming, as exemplified by Karmarkar's algorithm. Some problems for future research are suggested.
Keywords: polynomial-time algorithm; optimization problems; nonlinear geometry; trajectories; computation; systems; solve
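
To make the high-gain gradient dynamics concrete, the following Python sketch simulates one plausible reading of the network described in the abstract: the internal state evolves as u' = -grad f(sigma_lambda(u)) - b, where sigma_lambda is a sigmoid with gain lambda and b is a small, randomly chosen bias vector. The specific multilinear objective, the forward-Euler integrator, and all parameter values are illustrative assumptions, not the paper's exact construction.

    import numpy as np

    def sigma(u, gain):
        """Sigmoidal nonlinearity with adjustable gain (clipped to avoid overflow)."""
        z = np.clip(gain * u, -60.0, 60.0)
        return 1.0 / (1.0 + np.exp(-z))

    # Example multilinear objective on {0,1}^3 (degree <= 1 in each variable):
    # f(y) = 2*y0*y1 - 3*y1*y2 + y0 - y2
    def f(y):
        return 2*y[0]*y[1] - 3*y[1]*y[2] + y[0] - y[2]

    def grad_f(y):
        return np.array([2*y[1] + 1.0,
                         2*y[0] - 3*y[2],
                         -3*y[1] - 1.0])

    def run_network(u0, b, gain=50.0, dt=1e-3, steps=20000):
        """Forward-Euler integration of the assumed dynamics u' = -grad f(sigma(u)) - b."""
        u = u0.copy()
        for _ in range(steps):
            y = sigma(u, gain)          # network output, pushed toward {0,1}^n by high gain
            u += dt * (-grad_f(y) - b)  # gradient descent on f plus a small bias term
        return sigma(u, gain)

    rng = np.random.default_rng(0)
    b = 1e-3 * rng.standard_normal(3)   # small generic bias vector
    u0 = 0.1 * rng.standard_normal(3)   # generic initial condition
    y_final = run_network(u0, b)
    print("final state (near a vertex):", np.round(y_final, 3))
    print("objective at nearest vertex:", f(np.round(y_final)))

In this toy instance the state saturates near the vertex (0, 1, 1), where f = -4; one can check by flipping each coordinate that this vertex is a local (here also global) minimum over {0,1}^3, consistent with the convergence behavior the theorem describes for generic small biases and large gains.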