Conjugate Gradient Methods

Chapter in Introduction to Unconstrained Optimization with R

Abstract

Our interest in the conjugate gradient methods is twofold. First, they are among the most useful techniques for solving large systems of linear equations. Second, they can be adapted to solve large nonlinear optimization problems. In the previous chapters, we studied two important methods for finding a minimum point of real-valued functions of n real variables, namely, the steepest descent method and Newton’s method. The steepest descent method is easy to apply, but its convergence is often very slow. Newton’s method, on the other hand, normally converges rapidly but involves considerable computation at each step: the Hessian of the function must be computed at every iteration, and space must always be reserved for storing the \(n\times n\) Hessian. Moreover, Newton’s method does not generate n suitable search directions for a function of n variables, and if the inverse of the Hessian is not available, it fails to find the minimum point. These drawbacks are the central theme in the development of an important class of minimization algorithms, the so-called conjugate direction algorithms, which use the history of previous iterations to create new search directions. The conjugate direction method acts as an intermediate method between the steepest descent method and Newton’s method.
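
As a concrete illustration of the first use, the following is a minimal R sketch of the linear conjugate gradient method for solving Ax = b with a symmetric positive definite matrix A. It is not the implementation given in the chapter; the function name cg_solve and the arguments tol and maxit are chosen for this sketch only.

# Linear conjugate gradient for A x = b, A symmetric positive definite.
# Illustrative sketch; names and defaults are assumptions, not the chapter's code.
cg_solve <- function(A, b, x0 = rep(0, length(b)), tol = 1e-10, maxit = length(b)) {
  x <- x0
  r <- b - A %*% x                 # residual; the negative gradient of 0.5 x'Ax - b'x
  d <- r                           # first search direction: steepest descent direction
  rs_old <- sum(r * r)
  if (sqrt(rs_old) < tol) return(drop(x))
  for (k in seq_len(maxit)) {
    Ad <- A %*% d
    alpha <- rs_old / sum(d * Ad)  # exact line search step along d
    x <- x + alpha * d
    r <- r - alpha * Ad
    rs_new <- sum(r * r)
    if (sqrt(rs_new) < tol) break
    d <- r + (rs_new / rs_old) * d # next direction is A-conjugate to the previous ones
    rs_old <- rs_new
  }
  drop(x)                          # return as a plain vector
}

# Example: a small symmetric positive definite system
A <- matrix(c(4, 1, 1, 3), nrow = 2)
b <- c(1, 2)
cg_solve(A, b)                     # agrees with solve(A, b)

Note how the history of previous iterations enters only through the update of the search direction d, which is what distinguishes the method from plain steepest descent.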

Author information

Correspondence to Shashi Kant Mishra.

Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

Cite this chapter

Mishra, S.K., Ram, B. (2019). Conjugate Gradient Methods. In: Introduction to Unconstrained Optimization with R. Springer, Singapore. https://doi.org/10.1007/978-981-15-0894-3_8
