Abstract
We will implement a multi-layered neural network and study the effect of different hyperparameters:
- Hidden layer activations
- Hidden layer nodes
- Output layer activation
- Learning rate
- Mini-batch size
- Initialization
- Value of \(\beta \)
- Value of \(\beta _1\)
- Value of \(\beta _2\)
- Value of \(\epsilon \)
- Value of keep_prob
- Value of \(\lambda \)
- Model training time
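The hyperparameters listed above can be gathered into a single configuration object before training. A minimal sketch in R follows; every name and value here is illustrative, chosen only to show the shape of such a configuration, and does not reflect the chapter's own settings.

```r
# Illustrative hyperparameter configuration for a multi-layer network.
# All values are assumptions for this sketch, not the chapter's settings.
hyperparams <- list(
  hidden_activations = c("relu", "relu"),  # activation per hidden layer
  hidden_nodes       = c(128, 64),         # node count per hidden layer
  output_activation  = "softmax",          # output layer activation
  learning_rate      = 0.01,
  mini_batch_size    = 64,
  initialization     = "he",               # weight initialization scheme
  beta               = 0.9,                # momentum decay
  beta1              = 0.9,                # Adam first-moment decay
  beta2              = 0.999,              # Adam second-moment decay
  epsilon            = 1e-8,               # numerical-stability constant
  keep_prob          = 0.8,                # dropout keep probability
  lambda             = 0.01                # L2 regularization strength
)
```

Keeping all tuning knobs in one list like this makes it easy to run the experiments the abstract describes: each run swaps one entry and holds the rest fixed.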
The purpose of computing is insight, not numbers.
R.W. Hamming
Notes
1. Downloaded from http://yann.lecun.com/exdb/mnist/.
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
Cite this chapter
Ghatak, A. (2019). Deep Neural Networks-II. In: Deep Learning with R. Springer, Singapore. https://doi.org/10.1007/978-981-13-5850-0_6
DOI: https://doi.org/10.1007/978-981-13-5850-0_6
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-5849-4
Online ISBN: 978-981-13-5850-0
eBook Packages: Computer Science (R0)