Transformation of Nonlinear Programming Problems into Separable ones Using Multilayer Neural Networks

  • Bao-Liang Lu
  • Koji Ito
Part of the Operations Research/Computer Science Interfaces Series book series (ORCS, volume 8)

Abstract

In this paper we present a novel method for transforming nonseparable nonlinear programming (NLP) problems into separable ones using multilayer neural networks. The method rests on a useful property of multilayer neural networks: any nonseparable function can be approximately expressed as a separable one by such a network. With this method, the nonseparable objective and/or constraint functions of an NLP problem can be approximated by multilayer neural networks, so that any nonseparable NLP problem can be transformed into a separable one. The importance of the method lies in the fact that it offers a promising way of applying modified simplex methods to general NLP problems.
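The separability property the abstract appeals to can be illustrated concretely. A one-hidden-layer network computes f̂(x) = Σⱼ vⱼ tanh(wⱼ·x + bⱼ) + c, which is nonseparable in x; but after the linear substitution zⱼ = wⱼ·x + bⱼ (a set of linear equality constraints), the objective becomes a sum of univariate functions vⱼ tanh(zⱼ). The sketch below is not the authors' code; it is a minimal NumPy illustration, assuming a toy nonseparable target f(x₁, x₂) = x₁x₂ and a small network trained by plain gradient descent (backpropagation):

```python
import numpy as np

# Hedged sketch (not the paper's implementation): fit a one-hidden-layer
# tanh network to a nonseparable target, then show that the fitted model
# is separable in the substituted variables z = W x + b.

rng = np.random.default_rng(0)

# Nonseparable target f(x1, x2) = x1 * x2 sampled on [-1, 1]^2.
X = rng.uniform(-1, 1, size=(512, 2))
y = X[:, 0] * X[:, 1]

# Tiny multilayer network: H hidden tanh units, linear output.
H = 8
W = rng.normal(0, 1, size=(H, 2))
b = np.zeros(H)
v = rng.normal(0, 0.1, size=H)
c = 0.0
lr, n = 0.1, len(y)

# Full-batch gradient descent on mean squared error.
for _ in range(20000):
    Z = X @ W.T + b              # pre-activations, shape (N, H)
    A = np.tanh(Z)
    err = A @ v + c - y          # prediction residual
    gZ = (err[:, None] * v) * (1 - A**2)   # backprop through tanh
    v -= lr * (A.T @ err) / n
    c -= lr * err.mean()
    W -= lr * (gZ.T @ X) / n
    b -= lr * gZ.mean(axis=0)

mse = np.mean((np.tanh(X @ W.T + b) @ v + c - y) ** 2)

# Separable reformulation at a test point x0: the substitution
# z = W x0 + b is LINEAR, and the objective value is a sum of
# univariate terms v_j * tanh(z_j) -- separable in z.
x0 = np.array([0.3, -0.7])
z0 = W @ x0 + b
sep_val = np.sum(v * np.tanh(z0)) + c      # separable form
direct = np.tanh(x0 @ W.T + b) @ v + c     # original network output
```

After training, `sep_val` and `direct` agree to machine precision by construction, which is exactly why replacing a nonseparable objective with its trained-network approximation yields a separable program: the only coupling between variables is pushed into the linear constraints z = Wx + b.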

Keywords

separable nonlinear programming, linear programming, multilayer neural network

Copyright information

© Springer Science+Business Media New York 1997

Authors and Affiliations

  • Bao-Liang Lu (1)
  • Koji Ito (2)

  1. The Institute of Physical and Chemical Research (RIKEN), Atsuta-ku, Nagoya, Japan
  2. Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Midori-ku, Yokohama, Japan