
Deep Neural Networks-II

Chapter in Deep Learning with R

Abstract

We will implement a multi-layered neural network and experiment with the following hyperparameters:

  • Hidden layer activations

  • Hidden layer nodes

  • Output layer activation

  • Learning rate

  • Mini-batch size

  • Initialization

  • Value of \(\beta \)

  • Value of \(\beta _1\)

  • Value of \(\beta _2\)

  • Value of \(\epsilon \)

  • Value of keep_prob

  • Value of \(\lambda \)

  • Model training time
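The hyperparameters listed above can be gathered into a single configuration object before training. The sketch below is illustrative only; the variable names and default values are assumptions chosen to match common practice (momentum, Adam, dropout, and L2 regularization), not the chapter's actual code.

```r
# Illustrative hyperparameter configuration for the multi-layered network.
# All names and values are assumptions, not taken from the chapter.
hyperparams <- list(
  hidden_activation = "relu",      # hidden layer activations
  hidden_nodes      = c(128, 64),  # nodes per hidden layer
  output_activation = "softmax",   # output layer activation
  learning_rate     = 0.001,
  mini_batch_size   = 64,
  initialization    = "he",        # weight initialization scheme
  beta              = 0.9,         # momentum parameter
  beta1             = 0.9,         # Adam first-moment decay rate
  beta2             = 0.999,       # Adam second-moment decay rate
  epsilon           = 1e-8,        # Adam numerical-stability term
  keep_prob         = 0.8,         # dropout keep probability
  lambda            = 0.01         # L2 regularization strength
)
```

Collecting the settings in one list makes it straightforward to vary a single hyperparameter per run and compare model training time and accuracy across configurations.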

The purpose of computing is insights, not numbers.

R.W. Hamming


Notes

  1. Downloaded from http://yann.lecun.com/exdb/mnist/.

Author information

Correspondence to Abhijit Ghatak.


Copyright information

© 2019 Springer Nature Singapore Pte Ltd.

About this chapter


Cite this chapter

Ghatak, A. (2019). Deep Neural Networks-II. In: Deep Learning with R. Springer, Singapore. https://doi.org/10.1007/978-981-13-5850-0_6


  • DOI: https://doi.org/10.1007/978-981-13-5850-0_6

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-13-5849-4

  • Online ISBN: 978-981-13-5850-0

  • eBook Packages: Computer Science, Computer Science (R0)
