Improving Deep Neural Networks

Abstract

We are still on the subject of gradient descent, but let us now turn to gradient-based optimization more broadly, because of its importance to training. Gradient descent is an optimization method for finding the minimum of a function, and it is central to deep learning: it updates the weights of a neural network using gradients computed through backpropagation.
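To make the update rule concrete, here is a minimal sketch of gradient descent on a one-dimensional function (an illustrative example, not code from the chapter): minimizing f(w) = (w − 3)², whose gradient is f′(w) = 2(w − 3). The function name and parameters are assumptions chosen for illustration.

```python
# Minimal gradient descent sketch (illustrative; names are hypothetical).
# Minimize f(w) = (w - 3)^2, whose gradient is f'(w) = 2 * (w - 3).
def gradient_descent(grad, w0, lr=0.1, steps=100):
    """Repeatedly apply the update rule w <- w - lr * grad(w)."""
    w = w0
    for _ in range(steps):
        w -= lr * grad(w)  # step downhill along the negative gradient
    return w

w_min = gradient_descent(lambda w: 2 * (w - 3), w0=0.0)
print(w_min)  # converges toward the minimizer w = 3
```

In a neural network the scalar `w` becomes the full weight vector and `grad` is supplied by backpropagation, but the update rule is the same.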


Copyright information

© 2020 Hisham El-Amir and Mahmoud Hamdy

Cite this chapter

El-Amir, H., Hamdy, M. (2020). Improving Deep Neural Networks. In: Deep Learning Pipeline. Apress, Berkeley, CA. https://doi.org/10.1007/978-1-4842-5349-6_10
