Newton’s Method

Introduction to Unconstrained Optimization with R

Abstract

Newton’s method was originally devised by Newton to solve one-dimensional nonlinear equations and was later extended to multivariable nonlinear optimization problems. It is well known that the method of steepest descent uses only first-order derivative information (the gradient), whereas Newton’s method also exploits second-order information through the Hessian matrix.
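Since the book works in R, a minimal sketch of the basic Newton iteration for a two-variable problem is given below. The test function (the Rosenbrock function), the stopping rule, and all names are illustrative assumptions, not taken from the chapter itself.

# Minimal sketch of Newton's method for unconstrained minimization.
# The test function, tolerance, and step rule are illustrative choices.
newton_method <- function(f, grad, hess, x0, tol = 1e-8, max_iter = 100) {
  x <- x0
  for (k in seq_len(max_iter)) {
    g <- grad(x)
    if (sqrt(sum(g^2)) < tol) break   # stop when the gradient norm is small
    H <- hess(x)
    p <- solve(H, -g)                 # Newton direction: solve H p = -g
    x <- x + p                        # full Newton step (no line search)
  }
  list(minimizer = x, value = f(x), iterations = k)
}

# Example: the Rosenbrock function in two variables
f    <- function(x) (1 - x[1])^2 + 100 * (x[2] - x[1]^2)^2
grad <- function(x) c(-2 * (1 - x[1]) - 400 * x[1] * (x[2] - x[1]^2),
                      200 * (x[2] - x[1]^2))
hess <- function(x) matrix(c(2 - 400 * x[2] + 1200 * x[1]^2, -400 * x[1],
                             -400 * x[1],                     200),
                           nrow = 2, byrow = TRUE)

newton_method(f, grad, hess, x0 = c(-1.2, 1))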


Author information

Correspondence to Shashi Kant Mishra.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Mishra, S.K., Ram, B. (2019). Newton’s Method. In: Introduction to Unconstrained Optimization with R. Springer, Singapore. https://doi.org/10.1007/978-981-15-0894-3_7

