Abstract
In this paper we study the natural gradient method for overparametrised systems. The method is based on the natural gradient field, which is invariant under coordinate transformations. One calculates the natural gradient of a function on the manifold by multiplying the ordinary gradient of the function by the inverse of the Fisher Information Matrix (FIM). In overparametrised models the FIM is degenerate, and one therefore needs to use a generalised inverse. We show explicitly that using a generalised inverse, and in particular the Moore-Penrose inverse, does not affect the parametrisation independence of the natural gradient. Furthermore, we show that at singular points on the manifold, parametrisation independence is not guaranteed even for non-overparametrised models.
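Concretely (a standard formulation in our notation, with \(\alpha \) a step size and \(G(\xi )\) the FIM; the paper's own equations are not reproduced in this excerpt), the update reads:

\[
\widetilde{\nabla } \mathcal {L}(\xi ) = G(\xi )^{+}\, \nabla \mathcal {L}(\xi ), \qquad \xi \leftarrow \xi - \alpha \, \widetilde{\nabla } \mathcal {L}(\xi ),
\]

where \(G(\xi )^{+}\) denotes a generalised inverse, e.g. the Moore-Penrose inverse, which reduces to \(G(\xi )^{-1}\) when the FIM is non-degenerate.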
Notes
1. Note that this becomes the FIM when \(g\) is the Fisher metric.
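For reference, the standard coordinate expression (notation ours, not taken from the paper): for a model \(p(x; \xi )\), the Fisher metric has matrix entries

\[
g_{ij}(\xi ) = \mathbb {E}_{p(x;\xi )}\!\left[ \frac{\partial \log p(x;\xi )}{\partial \xi _i}\, \frac{\partial \log p(x;\xi )}{\partial \xi _j} \right],
\]

which is exactly the FIM.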
Acknowledgements
The authors acknowledge the support of the Deutsche Forschungsgemeinschaft Priority Programme “The Active Self” (SPP 2134).
Appendix
Example 1
Let us consider the specific example where:
Plugging this into the expressions derived above gives:
Now we fix \(\eta = (1,1)\) and \(\xi = f(\eta ) = (2,1)\). We start by computing \(y_\varXi \). From the above we know that:
It is easily verified that this gives \(y_\varXi = (3,3)\). For \(y_H\) we get:
which gives \(y_H = (4\tfrac{4}{5}, 1\tfrac{1}{5})\). Evidently, \(y_\varXi \ne y_H\). Note, however, that when we map the difference of the two gradient vectors from \(T_{(2,1)} \varXi \) to \(T_{(3,0)}\mathcal {M}\) through \(d\phi _{(2,1)}\), we get:
which shows that although the gradient vectors can depend on the parametrisation, or on the inner product on the parameter space, their images on the manifold \(\mathcal {M}\) are invariant.
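Since the concrete maps of this example are not reproduced in this excerpt, here is a minimal numerical sketch of the same invariance check with hypothetical maps \(\phi \) and \(f\) (our choices, not the paper's): the natural gradients computed in the two parameter spaces differ, but their pushforwards to \(\mathcal {M}\) coincide, because both equal the orthogonal projection of the loss gradient onto the range of \(d\phi \).

```python
# Minimal sketch with hypothetical maps (not the paper's Example 1):
# natural gradients in two parametrisations agree after pushforward to M.
import numpy as np

def phi(xi):
    # hypothetical overparametrised parametrisation Xi = R^3 -> M = R^2
    a, b, c = xi
    return np.array([a + b, b * c])

def f(eta):
    # hypothetical diffeomorphism H = R^3 -> Xi = R^3
    u, v, w = eta
    return np.array([u + v, v, np.exp(w)])

def jacobian(g, x, h=1e-6):
    # forward-difference Jacobian of g at x
    y0 = g(x)
    J = np.zeros((y0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = h
        J[:, i] = (g(x + dx) - y0) / h
    return J

grad_L = np.array([1.0, -2.0])              # ordinary gradient of a loss on M
eta = np.array([0.3, 0.5, 0.1])
xi = f(eta)

J_xi = jacobian(phi, xi)                    # d(phi) at xi
J_eta = jacobian(lambda e: phi(f(e)), eta)  # d(phi o f) at eta

# Natural gradients w.r.t. the pulled-back (Euclidean) metric G = J^T J,
# computed with the Moore-Penrose inverse since G is rank-deficient:
y_xi = np.linalg.pinv(J_xi.T @ J_xi) @ J_xi.T @ grad_L
y_eta = np.linalg.pinv(J_eta.T @ J_eta) @ J_eta.T @ grad_L

# Pushed forward to the tangent space of M, the two coincide
# (up to finite-difference error):
print(J_xi @ y_xi)
print(J_eta @ y_eta)
```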
Example 2
We illustrate the discussion in Sect. 2.2 with a specific example. Let us consider the following parametrisation:
This gives \(\xi _1 = 0, \xi _2 = \frac{1}{2} \pi \) in the above discussion. We get the following calculation for \(\overline{\mathrm {grad}}_p^{\partial ^{(0)}} \mathcal {L}\):
Now let:
We define the alternative parametrisation \(\tilde{\phi } = \phi \circ f\). Note that we have \(\tilde{\phi }(t) = (-\sin (2t), \sin (t))\) and thus \(\tilde{\phi }(0) = \phi (0) = (0,0)\). A calculation similar to the one above gives:
Note that because \(\partial ^{(0)} \ne \tilde{\partial }^{(0)}\), (53) and (57) are not equal to each other. We can therefore conclude that in this case the expression for the gradient is parametrisation-dependent.
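As a quick check (our own differentiation of the formula for \(\tilde{\phi }\) given above):

\[
\tilde{\phi }'(t) = (-2\cos (2t), \cos (t)), \qquad \tilde{\phi }'(0) = (-2, 1),
\]

so the first-order data of \(\tilde{\phi }\) at \(t = 0\) are pinned down by the vector \((-2, 1)\); on our reading, this is what drives the discrepancy between \(\partial ^{(0)}\) and \(\tilde{\partial }^{(0)}\) at the singular point.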
Cite this paper
van Oostrum, J., Ay, N. (2021). Parametrisation Independence of the Natural Gradient in Overparametrised Systems. In: Nielsen, F., Barbaresco, F. (eds.) Geometric Science of Information. GSI 2021. Lecture Notes in Computer Science, vol. 12829. Springer, Cham. https://doi.org/10.1007/978-3-030-80209-7_78