Abstract
In this chapter we approach the study of neural networks from the perspective of Information Geometry, which applies techniques from both Differential Geometry and Probability Theory to neural networks.
Notes
- 1.
Sometimes, this is stated equivalently as \(\operatorname{Var}(\hat{\theta}) \ge \frac{1}{I(\theta)}\).
- 2.
If A and B are two square matrices, we write \(A\ge B\) if \(A-B\) is positive semidefinite, i.e., all its eigenvalues are nonnegative.
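The scalar bound in Note 1 can be checked numerically. As a sketch (assuming an unbiased estimator of a Bernoulli parameter from \(n\) i.i.d. observations, for which the bound becomes \(1/(nI(\theta))\) and is attained by the sample mean; the variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
theta, n, trials = 0.3, 100, 20000

# Sample-mean estimator of theta from n Bernoulli(theta) observations,
# repeated over many independent trials.
estimates = rng.binomial(n, theta, size=trials) / n

# Fisher information of a single Bernoulli(theta) observation.
fisher = 1.0 / (theta * (1 - theta))

# Cramer-Rao lower bound for n observations: Var >= 1 / (n * I(theta)).
bound = 1.0 / (n * fisher)

print(estimates.var(), ">=", bound)  # empirical variance sits at the bound
```

For the Bernoulli model the sample mean attains the bound exactly, so the empirical variance should agree with `bound` up to sampling noise.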
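The matrix ordering in Note 2 can likewise be tested numerically: \(A \ge B\) holds when every eigenvalue of \(A - B\) is nonnegative. A minimal sketch, assuming symmetric real matrices (the helper name `loewner_ge` is ours):

```python
import numpy as np

def loewner_ge(A, B, tol=1e-10):
    """Return True if A - B is positive semidefinite, i.e. all
    eigenvalues of A - B are nonnegative (up to a small tolerance)."""
    diff = (A + A.T - B - B.T) / 2  # symmetrize so eigvalsh applies
    return bool(np.all(np.linalg.eigvalsh(diff) >= -tol))

A = np.array([[2.0, 0.0], [0.0, 3.0]])
B = np.array([[1.0, 0.0], [0.0, 1.0]])
print(loewner_ge(A, B))  # True:  A - B = diag(1, 2) is PSD
print(loewner_ge(B, A))  # False: B - A = diag(-1, -2) is not
```

Note that the ordering is only partial: for many pairs of matrices neither \(A \ge B\) nor \(B \ge A\) holds.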
Copyright information
© 2020 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Calin, O. (2020). Neuromanifolds. In: Deep Learning Architectures. Springer Series in the Data Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-36721-3_14
Print ISBN: 978-3-030-36720-6
Online ISBN: 978-3-030-36721-3
eBook Packages: Mathematics and Statistics (R0)