Classical Optimisation

  • Adrian J. Shepherd
Part of the Perspectives in Neural Computing book series (PERSPECT.NEURAL)

Abstract

This chapter serves as an introduction to the field of classical optimisation in general, and to second-order classical methods in particular. The aim of the chapter is to explain why second-order methods have superior convergence characteristics to first-order methods such as steepest descent (and, in the context of MLP training, the ‘traditional’ backpropagation algorithm). The chapter also discusses various topics of general relevance when implementing classical optimisation methods in practice. The particular characteristics and implementation requirements of specific second-order methods are the subject of Chapter 3.
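To make the contrast concrete, here is a minimal sketch in standard optimisation notation; the symbols below are assumed for illustration and are not taken from the chapter. Writing E(w) for the error as a function of the weights w, steepest descent uses only first-derivative information,

  w_{k+1} = w_k - \eta \, \nabla E(w_k),

with step size (learning rate) \eta, whereas a Newton-type second-order step also exploits curvature through the Hessian \nabla^2 E:

  w_{k+1} = w_k - \left[ \nabla^2 E(w_k) \right]^{-1} \nabla E(w_k).

Near a minimum where E is well approximated by a quadratic, the Newton step converges quadratically, while steepest descent converges only linearly, at a rate that degrades as the Hessian becomes ill-conditioned. This is the standard argument behind the superior convergence claimed in the abstract.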

Reference

  1. Good general surveys of the field are provided by Fletcher (1987), Gill et al. (1981), Luenberger (1984) and Wolfe (1978).

Copyright information

© Springer-Verlag London Limited 1997

Authors and Affiliations

  • Adrian J. Shepherd
  1. Department of Biochemistry and Molecular Biology, University College London, London, UK
