
Conjugate Gradient Methods

  • Shashi Kant Mishra
  • Bhagwat Ram
Chapter

Abstract

Our interest in the conjugate gradient methods is twofold. First, they are among the most useful techniques for solving large systems of linear equations. Second, they can be adapted to solve large nonlinear optimization problems. In the previous chapters, we studied two important methods for finding a minimum point of real-valued functions of n real variables, namely, the steepest descent method and Newton’s method. The steepest descent method is easy to apply; however, its convergence is often very slow. Newton’s method, on the other hand, normally converges rapidly but involves considerable computation at each step. Recall that Newton’s method requires the Hessian of the function at every iteration, so space must be reserved for storing the \(n\times n\) Hessian to run the algorithm. Moreover, Newton’s method does not generate n suitable search directions for a function of n variables, and if the inverse of the Hessian is not available, Newton’s method fails to find the minimum point. These drawbacks are the central theme in the development of an important class of minimization algorithms, the so-called conjugate direction algorithms, which use the history of previous iterations to create new search directions. The conjugate direction method acts as an intermediate between the steepest descent method and Newton’s method.
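As a concrete illustration of the first use case, the following is a minimal sketch of the conjugate gradient iteration for a linear system \(Ax = b\) with a symmetric positive definite matrix \(A\). The function name, tolerance, and test system are illustrative choices, not taken from the chapter.

import numpy as np

def conjugate_gradient(A, b, x0=None, tol=1e-10, max_iter=None):
    """Solve A x = b for a symmetric positive definite matrix A."""
    n = b.shape[0]
    x = np.zeros(n) if x0 is None else x0.astype(float)
    max_iter = n if max_iter is None else max_iter
    r = b - A @ x          # residual; negative gradient of 0.5 x^T A x - b^T x
    p = r.copy()           # first search direction is the steepest descent direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # exact step length along p
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        beta = rs_new / rs_old      # ratio of successive squared residual norms
        p = r + beta * p            # new direction built from the previous one
        rs_old = rs_new
    return x

# Usage: a small symmetric positive definite system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
print(x, A @ x - b)

In exact arithmetic the iteration terminates in at most n steps, reflecting the finite-termination property of conjugate direction methods on quadratic functions.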

Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  1. Department of Mathematics, Banaras Hindu University, Varanasi, India
