
A Recurrent Neural Network for Linear Fractional Programming with Bound Constraints

  • Fuye Feng
  • Yong Xia
  • Quanju Zhang
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)

Abstract

This paper presents a novel continuous-time recurrent neural network model that performs linear fractional optimization subject to bound constraints on each of the optimization variables. The network is proved to be complete in the sense that the set of minima of the objective function over the bound constraints coincides with the set of equilibria of the neural network. It is also shown that the network is primal and globally convergent in the sense that its trajectory cannot escape from the feasible region and converges to an exact optimal solution from any initial point chosen in the feasible bound region. Simulation results are given to further demonstrate the global convergence and good performance of the proposed neural network on linear fractional programming problems with bound constraints.
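The abstract does not spell out the network dynamics, but a common construction for bound-constrained optimization of this kind is a projection-type recurrent network whose equilibria coincide with the constrained minimizers. The sketch below is only an illustration of that idea, not the authors' exact model: the linear fractional objective f(x) = (cᵀx + c₀)/(dᵀx + d₀), the Euler step size, and the two-variable test instance are all assumptions introduced for demonstration.

```python
# A minimal sketch (not the authors' exact model): a projection-type recurrent
# network for minimizing f(x) = (c^T x + c0) / (d^T x + d0) over l <= x <= u,
# integrated with a forward-Euler scheme.  All problem data are illustrative.
import numpy as np

def grad_f(x, c, c0, d, d0):
    """Gradient of the linear fractional objective (d^T x + d0 assumed > 0 on the box)."""
    num = c @ x + c0
    den = d @ x + d0
    return (c * den - d * num) / den**2

def project(x, l, u):
    """Projection onto the bound constraints l <= x <= u."""
    return np.clip(x, l, u)

def simulate(c, c0, d, d0, l, u, x0, step=1e-3, iters=20000):
    """Euler discretization of the dynamics  dx/dt = P_[l,u](x - grad f(x)) - x."""
    x = project(np.asarray(x0, dtype=float), l, u)  # start inside the feasible box
    for _ in range(iters):
        x = x + step * (project(x - grad_f(x, c, c0, d, d0), l, u) - x)
    return x

if __name__ == "__main__":
    # Illustrative instance: minimize (x1 + 2*x2 + 1) / (x1 + x2 + 4)
    # subject to 0 <= x <= 3; the data are made up for demonstration only.
    c, c0 = np.array([1.0, 2.0]), 1.0
    d, d0 = np.array([1.0, 1.0]), 4.0
    l, u = np.zeros(2), 3.0 * np.ones(2)
    x_star = simulate(c, c0, d, d0, l, u, x0=[2.0, 2.0])
    print("approximate equilibrium / minimizer:", x_star)
```

For this illustrative instance the trajectory remains in the box [0, 3]² by construction and drifts toward the lower bound, where the fractional objective attains its minimum, mirroring the completeness and global-convergence properties claimed in the paper.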

Keywords

Neural Network · Variational Inequality · Neural Network Model · Global Convergence · Recurrent Neural Network


Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Fuye Feng¹
  • Yong Xia¹
  • Quanju Zhang¹

  1. Dongguan University of Technology, Dongguan, China
