A Recurrent Neural Network for Linear Fractional Programming with Bound Constraints
This paper presents a novel recurrent, time-continuous neural network model that performs linear fractional optimization subject to bound constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of minimizers of the objective function under the bound constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent: its trajectory cannot escape the feasible region and converges to an exact optimal solution from any initial point chosen in the feasible bound region. Simulation results further demonstrate the global convergence and good performance of the proposed neural network on linear fractional programming problems with bound constraints.
Keywords: Neural Network, Variational Inequality, Neural Network Model, Global Convergence, Recurrent Neural Network
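The paper's exact network equations are not reproduced in this abstract. As an illustrative sketch only, a projection-type continuous-time network of the general kind used in the neural-optimization literature can be Euler-integrated for a small linear fractional program with box constraints; all problem data, function names, and step sizes below are hypothetical, not taken from the paper.

```python
import numpy as np

# Hypothetical problem (not from the paper): minimize
#   f(x) = (c^T x + c0) / (d^T x + d0)  subject to  l <= x <= u,
# assuming the denominator d^T x + d0 stays positive on the box.
c, c0 = np.array([1.0, -2.0]), 3.0
d, d0 = np.array([1.0, 1.0]), 4.0
l, u = np.array([0.0, 0.0]), np.array([2.0, 2.0])

def grad_f(x):
    """Gradient of the linear fractional objective via the quotient rule."""
    den = d @ x + d0
    return (c * den - d * (c @ x + c0)) / den**2

def project(x):
    """Projection onto the bound-constraint box [l, u] (componentwise clip)."""
    return np.clip(x, l, u)

def solve(x0, alpha=1.0, dt=0.05, steps=4000):
    """Euler integration of the projection dynamics
       dx/dt = P(x - alpha * grad f(x)) - x,
    started from a feasible point so the trajectory stays in the box."""
    x = project(np.asarray(x0, dtype=float))
    for _ in range(steps):
        x = x + dt * (project(x - alpha * grad_f(x)) - x)
    return x

x_star = solve([1.0, 1.0])  # converges to the vertex (0, 2) for this data
```

For this toy data the objective is pseudoconvex where the denominator is positive, so the projected dynamics settle at the global minimizer, which here is the box vertex `(0, 2)` with value `-1/6`; this mirrors, but does not reproduce, the global-convergence behavior the paper establishes for its model.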