Continuous Function Optimisation via Gradient Descent on a Neural Network Approximation Function

  • Kate A. Smith
  • Jatinder N. D. Gupta
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2084)

Abstract

Existing neural network approaches to optimisation are limited in the types of problems they can solve: convergence theorems that rely on Liapunov functions usually restrict these techniques to the minimisation of quadratic functions. This paper proposes a new neural network approach that can solve a broad variety of continuous optimisation problems, since it makes no assumptions about the nature of the objective function. The approach comprises two stages: first, a feedforward neural network is used to approximate the optimisation function from a sample of evaluated data points; then, a feedback neural network performs gradient descent on this approximation function. The final solution is a local minimum of the approximated function, which should coincide with a true local minimum if the learning has been accurate. The proposed method is evaluated on the De Jong test suite: a collection of continuous optimisation problems featuring characteristics such as saddle points, discontinuities, and noise.
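As a rough illustration of the two-stage approach, the sketch below fits a small one-hidden-layer feedforward network to samples of De Jong's sphere function and then performs gradient descent on the learned approximation with respect to the input. This is an illustrative reconstruction in plain NumPy, not the authors' implementation: the network size, sampling range, learning rates, and iteration counts are all assumptions, and a direct input-gradient loop stands in for the paper's feedback network.

    import numpy as np

    rng = np.random.default_rng(0)

    def f(x):
        # De Jong's sphere function: the simplest member of the test suite.
        return np.sum(x**2, axis=-1)

    # Stage 1: train a one-hidden-layer feedforward network to approximate f
    # from a sample of evaluated points (sizes and rates are assumptions).
    X = rng.uniform(-2.0, 2.0, size=(500, 2))
    y = f(X)[:, None]
    H = 32
    W1 = rng.normal(0.0, 0.5, (2, H)); b1 = np.zeros(H)
    W2 = rng.normal(0.0, 0.5, (H, 1)); b2 = np.zeros(1)

    def forward(x):
        h = np.tanh(x @ W1 + b1)
        return h @ W2 + b2, h

    lr = 0.01
    for _ in range(5000):
        pred, h = forward(X)
        err = (pred - y) / len(X)          # gradient of 0.5 * mean squared error
        dh = err @ W2.T * (1.0 - h**2)     # backprop through the tanh layer
        W2 -= lr * (h.T @ err); b2 -= lr * err.sum(0)
        W1 -= lr * (X.T @ dh);  b1 -= lr * dh.sum(0)

    # Stage 2: gradient descent on the *approximation*, updating the input x
    # (the role the paper assigns to the feedback network).
    x = rng.uniform(-2.0, 2.0, size=(1, 2))
    for _ in range(500):
        _, h = forward(x)
        grad_x = (W2.T * (1.0 - h**2)) @ W1.T   # d(network output)/dx by chain rule
        x -= 0.05 * grad_x

    print("approximate minimiser:", x.ravel())  # should lie near (0, 0)
    print("objective value there:", f(x))

Because the descent in the second stage runs on the learned surface rather than on f itself, no further evaluations of the true objective are needed once training is complete; the quality of the final solution therefore depends entirely on how accurately the first stage has approximated the function.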


Copyright information

© Springer-Verlag Berlin Heidelberg 2001

Authors and Affiliations

  • Kate A. Smith: School of Business Systems, Monash University, Clayton, Victoria, Australia
  • Jatinder N. D. Gupta: Department of Management, Ball State University, Muncie, Indiana, USA
