Invariance properties of the natural gradient in overparametrised systems

The natural gradient field is a vector field that lives on a model equipped with a distinguished Riemannian metric, e.g. the Fisher–Rao metric, and represents the direction of steepest ascent of an objective function on the model with respect to this metric. In practice, one tries to obtain the corresponding direction on the parameter space by multiplying the ordinary gradient by the inverse of the Gram matrix associated with the metric. We refer to this vector on the parameter space as the natural parameter gradient. In this paper we study when the pushforward of the natural parameter gradient is equal to the natural gradient. Furthermore we investigate the invariance properties of the natural parameter gradient. Both questions are addressed in an overparametrised setting.


Introduction
Within the field of deep learning, gradient methods have become ubiquitous tools for parameter optimisation. Standard gradient optimisation procedures use the vector of coordinate derivatives of the objective function as the update direction of the parameters. This implicitly assumes a Euclidean geometry on the space of parameters. It can be argued that this is not always the most natural choice of geometry. Instead, one can choose a more natural geometry for the problem at hand and then determine the Riemannian gradient of the objective function with respect to this geometry, resulting in the so-called natural gradient. The natural gradient method is the optimisation algorithm that performs discrete parameter updates in the direction of the natural gradient. This method was first proposed by Amari [1] using the geometry induced by the Fisher-Rao metric. It is an active field of study within information geometry [3,10,6] and has been shown to be extremely effective in many applications [4,16,17]. More recently, other geometries on the model have been studied as well, such as the Wasserstein geometry [11,9]. The natural gradient is defined independently of a specific parametrisation. Although it remains an open problem, there is work supporting the idea that the efficiency of learning of the method is due to this invariance [18].
In practice, the update direction of the parameters is given by the ordinary gradient multiplied by the inverse of the Gram matrix associated with the metric on the model. We will refer to this vector on the parameter space as the natural parameter gradient. In order to determine whether this direction is the desired one, we have to map this vector to the model, since it is the location on the model, not on the parameter space, that determines the performance of the model. In non-overparametrised systems it can be shown that, in a non-singular point on the model, the pushforward of the natural parameter gradient is equal to the natural gradient. Furthermore, the natural parameter gradient can be called parametrisation invariant in this case [12]. In many practical applications of machine learning, and in particular deep learning, one however deals with overparametrised models, in which different directions on the parameter space correspond to a single direction on the model. In this case, the Gram matrix is degenerate and we use a generalised inverse to calculate the natural parameter gradient. In this paper, we will investigate whether the pushforward of the natural parameter gradient remains equal to the natural gradient in the overparametrised setting.
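The computation described above can be sketched in a few lines. This is our own minimal illustration, not code from the paper: we take a Euclidean metric on the ambient space and the Moore-Penrose pseudoinverse as the generalised inverse.

```python
# A minimal sketch (ours, not the paper's code) of the natural parameter
# gradient: the ordinary gradient multiplied by a generalised inverse of the
# Gram matrix. Euclidean metric on Z, Moore-Penrose pseudoinverse for G^+.
import numpy as np

def natural_parameter_gradient(J, grad_z):
    """J: (n, d) Jacobian of phi at xi (columns are the coordinate vectors
    pushed to Z); grad_z: gradient of L at phi(xi). Returns G^+ grad_xi L."""
    G = J.T @ J                # Gram matrix G_ij = g(d_i(xi), d_j(xi))
    grad_xi = J.T @ grad_z     # chain rule: ordinary parameter gradient
    return np.linalg.pinv(G) @ grad_xi

# Overparametrised toy model: phi(a, b) = (a + b, 2(a + b)); both parameter
# directions collapse onto one model direction, so G is degenerate.
J = np.array([[1.0, 1.0],
              [2.0, 2.0]])
v = natural_parameter_gradient(J, np.array([1.0, 0.0]))
# The pushforward J @ v is the projection of (1, 0) onto the line
# spanned by (1, 2), i.e. (0.2, 0.4).
```

Here the two-dimensional parameter space maps onto a one-dimensional model, which is the degenerate situation the paper studies.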
The Moore-Penrose (MP) inverse is the canonical choice of generalised inverse for the natural parameter gradient [5]. The definition of the MP inverse is based on the Euclidean inner product on the parameter space. Using the MP inverse is therefore thought to affect the parametrisation invariance of the natural parameter gradient [13], and thus potentially the performance of the natural gradient method. In this paper we propose two different notions of invariance. The first evaluates the invariance of the natural parameter gradient by examining the behaviour of its pushforward on the model. The second looks at the behaviour on the parameter space itself. Since the location and direction on the model are what matters, we argue that the former is of greater importance.

The natural gradient
Let (Z, g) be a Riemannian manifold, let Ξ be a parameter space that we assume to be an open subset of R^d, let φ : Ξ → Z be a smooth map taking the role of the parametrisation, and let L : Z → R be a smooth objective function. We call the image M := φ(Ξ) the model. We call p ∈ M non-singular if M is locally an embedded submanifold of Z around p, and we denote the set of non-singular points by Smooth(M). A point p is called singular if it is not non-singular.
The Riemannian gradient of L on Z is defined implicitly as follows:

    g_p(grad_p L, v) = dL_p(v)    for all v ∈ T_p Z.

By the Riesz representation theorem, this defines the gradient uniquely.
Definition 1 (Natural gradient). For p ∈ Smooth(M) the Riemannian gradient of L|_M on the model M is called the natural gradient and is denoted grad^M_p L. It is easy to show that

    grad^M_p L = Π_p(grad_p L),

where Π_p is the orthogonal projection onto T_p M. We define the pushforward of a tangent vector v = v^i ∂/∂ξ^i|_ξ on the parameter space through the parametrisation as

    dφ_ξ(v) = v^i ∂_i(ξ),    where ∂_i(ξ) := dφ_ξ(∂/∂ξ^i|_ξ),

and the Gram matrix G(ξ) by G_ij(ξ) := g_φ(ξ)(∂_i(ξ), ∂_j(ξ)). We denote the vector of coordinate derivatives by ∇_ξ L, with components (∇_ξ L)_i := ∂(L∘φ)/∂ξ^i (ξ). Let ξ be such that φ(ξ) ∈ Smooth(M). We say that a parametrisation is proper in ξ when span({∂_1(ξ), ..., ∂_d(ξ)}) = T_φ(ξ) M. Furthermore, following the Einstein summation convention, we write a_i b^i for the sum Σ_i a_i b^i.
Definition 2 (Generalised inverse). A generalised inverse of an n × m matrix A, denoted A^+, is an m × n matrix satisfying the following property:

    A A^+ A = A.

Note that this definition implies that for w ∈ R^n in the image of A, i.e. w = Av for some v ∈ R^m, we have:

    A A^+ w = A A^+ A v = A v = w.

This shows that AA^+ is the identity operator on the image of A.
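This property is easy to verify numerically. The following check is our own illustration, using the Moore-Penrose inverse of a deliberately rank-deficient matrix:

```python
# Quick numerical check (our illustration) that the Moore-Penrose inverse of
# a rank-deficient matrix is a generalised inverse in the sense above, and
# that A A^+ acts as the identity on the image of A.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 6))  # rank-2, 4 x 6
Ap = np.linalg.pinv(A)                   # MP inverse, one choice of A^+
assert np.allclose(A @ Ap @ A, A)        # defining property A A^+ A = A

v = rng.standard_normal(6)
w = A @ v                                # w lies in the image of A
assert np.allclose(A @ Ap @ w, w)        # A A^+ w = w on im(A)
```

Note that A A^+ is not the identity on all of R^4 here, only on the two-dimensional image of A.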
Definition 3 (Natural parameter gradient). We define the natural parameter gradient to be the following vector on the parameter space:

    (G^+(ξ) ∇_ξ L)^i ∂/∂ξ^i|_ξ.    (5)

The pushforward of this vector, given by

    dφ_ξ(G^+(ξ) ∇_ξ L) = (G^+(ξ) ∇_ξ L)^i ∂_i(ξ),

is called the natural parameter gradient on M.
Often, the natural parameter gradient is denoted in matrix notation as G^+(ξ) ∇_ξ L ∈ R^d, where an identification between the canonical basis of R^d and the vectors ∂/∂ξ^i|_ξ is made implicitly.
We are now in the position to state the main result of the paper.

Theorem 1. Let ξ ∈ Ξ and p = φ(ξ) ∈ M. We have

    dφ_ξ(G^+(ξ) ∇_ξ L) = Π_ξ(grad_p L),    (8)

where Π_ξ is the orthogonal projection onto span{∂_i(ξ)}_i. In particular, when φ(ξ) is non-singular and span{∂_i(ξ)}_i = T_p M, we have

    dφ_ξ(G^+(ξ) ∇_ξ L) = grad^M_p L.    (9)

This theorem implies that under certain conditions the pushforward of the natural parameter gradient is equal to the natural gradient. Furthermore, we see that in general the natural parameter gradient on M depends on the choice of parametrisation through Π_ξ, but becomes invariant when the coordinate vectors span the full tangent space of M. In the next section we will study the invariance properties of the natural parameter gradient in more detail.
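Theorem 1 can be checked numerically. The sketch below is ours: we take a Euclidean metric on Z, so that span{∂_i(ξ)}_i is the column span of the Jacobian J of φ and Π_ξ is the usual orthogonal projector onto that span.

```python
# Numerical sanity check (our sketch, Euclidean metric on Z) of Theorem 1:
# the pushforward of the natural parameter gradient equals the orthogonal
# projection of grad_p L onto span{d_i(xi)}, the column span of J.
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))  # rank-2 Jacobian, d = 3
grad_L = rng.standard_normal(5)                                # grad_p L in T_p Z = R^5

G = J.T @ J                                                    # Gram matrix
pushforward = J @ (np.linalg.pinv(G) @ (J.T @ grad_L))         # dphi_xi(G^+ grad_xi L)
Pi = J @ np.linalg.pinv(J)                                     # orthogonal projector onto col(J)
assert np.allclose(pushforward, Pi @ grad_L)
```

The rank-deficient J makes this an overparametrised case: G is singular, yet the identity of equation (8) still holds with the pseudoinverse.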
The proof of Theorem 1 will be based on the following result from linear algebra.

Lemma 1. Let (V, ⟨·,·⟩) be a finite-dimensional inner product space and V* its dual space. Let {e_i}_{i∈{1,...,d}} ⊂ V (not necessarily linearly independent), let G be the matrix defined by G_ij := ⟨e_i, e_j⟩, and let Π be the orthogonal projection onto span{e_i}_i. Then, for v ∈ V and ω := ⟨v, ·⟩ ∈ V*,

    Π(v) = (G^+ ω(e))^i e_i,    (10)

where ω(e) ∈ R^d denotes the vector with components ω(e_i).

Proof. Start by noting that Π(v) is uniquely determined by the facts that ⟨Π(v), w⟩ = ω(w) for w ∈ span{e_i}_i and ⟨Π(v), w⟩ = 0 for w ∈ (span{e_i}_i)^⊥. Since the RHS of (10) lies in the span of {e_i}_i, it remains to show that for an arbitrary vector w = w^i e_i ∈ span{e_i}_i we have

    ⟨(G^+ ω(e))^j e_j, w^i e_i⟩ = ω(w).

Working out the LHS gives

    (G^+ ω(e))^j w^i G_ji = w^i (G G^+ ω(e))_i = w^i ω(e_i) = ω(w),

where we use the symmetry of G in the first equality, and in the second equality the fact that ω(e) lies in the image of G (writing Π(v) = c^j e_j gives ω(e_i) = ⟨Π(v), e_i⟩ = G_ij c^j), so that G G^+ acts on it as the identity.
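The lemma holds for any inner product and for linearly dependent spanning vectors. The check below is our own illustration with a non-Euclidean inner product ⟨x, y⟩ = x^T M y:

```python
# Numerical check (our illustration) of the lemma: with G_ij = <e_i, e_j> and
# w_i = omega(e_i) = <v, e_i>, the vector (G^+ w)^i e_i is the orthogonal
# projection of v onto span{e_i}, even for linearly dependent e_i and a
# non-Euclidean inner product <x, y> = x^T M y.
import numpy as np

rng = np.random.default_rng(2)
E = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))  # columns e_1..e_5, rank 2
A = rng.standard_normal((4, 4))
M = A @ A.T + 4 * np.eye(4)                                    # SPD matrix defining <.,.>
v = rng.standard_normal(4)

G = E.T @ M @ E                  # Gram matrix of the e_i
w = E.T @ M @ v                  # components omega(e_i) = <v, e_i>
p = E @ (np.linalg.pinv(G) @ w)  # the lemma's formula (G^+ w)^i e_i
# p lies in span{e_i} by construction, and the residual v - p is
# <,>-orthogonal to every e_i: exactly the two properties that characterise
# the projection in the proof above.
assert np.allclose(E.T @ M @ (v - p), 0)
```

The two asserted properties (membership in the span and orthogonality of the residual) are precisely the characterisation of Π(v) used at the start of the proof.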
Proof of Theorem 1. We let T_p Z take the role of V, g_p the role of the inner product, dL_p the role of ω, ∂_i(ξ) the role of e_i, and grad_p L the role of v. By the chain rule, ω(e) = ∇_ξ L, and the matrix of the lemma is exactly the Gram matrix G(ξ). Equation (8) now follows immediately. When the tangent vectors {∂_i(ξ)}_i span the whole tangent space of M at p, Π_ξ coincides with the projection Π_p onto T_p M, so the RHS of (8) equals grad^M_p L. This gives Equation (9).

Invariance properties of the natural parameter gradient
In this section we study the invariance properties of the natural parameter gradient by comparing with an alternative parametrisation ψ : Θ → Z of M, i.e. a smooth map with ψ(Θ) = M. Note that G^+(ξ), ∇_ξ L and ∂_i(ξ) in the definition of dφ_ξ(G^+(ξ)∇_ξ L) all implicitly depend on the parametrisation φ. For an alternative parametrisation ψ we will therefore write dψ_θ(G^+(θ)∇_θ L), with the Gram matrix G(θ) and coordinate vectors ∂_j(θ) defined analogously. The invariance properties can be studied from the perspective of the model and from the perspective of the parameter space itself. Since the former is of more importance, we will start with it.

Parametrisation dependence and reparametrisation invariance on the model
A parametrisation can be used to represent tangent vectors on the model by elements of R^d. A representation (of vectors on M) can be interpreted as a map O(·, ·) that takes a parametrisation-coordinate pair and assigns a tangent vector on the parameter space to it. The natural parameter gradient defined in Equation (5) is an example of a representation, where the dependence on φ on the RHS is implicit. Naively, one could define invariance of a representation in the following way:

Definition 4 (Parametrisation independence). Let M be a model. A representation O(·, ·) is called parametrisation independent if for any pair φ, ψ of parametrisations of M, and coordinates ξ, θ such that ψ(θ) = φ(ξ), the following holds:

    dψ_θ O(ψ, θ) = dφ_ξ O(φ, ξ).    (18)

It turns out that this is not a very useful definition. As we will see, no non-trivial representation can be parametrisation independent in the sense of this definition. We will illustrate this in Examples 1 and 2 below for the natural parameter gradient on specific models. A formal proof can be found in Appendix A. In order to overcome the limitation of Definition 4, we propose the following more suitable definition of invariance of a representation:

Definition 5 (Reparametrisation invariance). Let M be a model. A representation O(·, ·) is called reparametrisation invariant if for any pair φ, ψ of parametrisations of M such that ψ = φ ∘ f for a diffeomorphism f : Θ → Ξ, and coordinates ξ, θ such that θ = f^{-1}(ξ), the equality (18) holds.

Due to the extra requirement of the existence of the reparametrisation function f in Definition 5, we get the following central result of this paper:

Theorem 2. The natural parameter gradient is reparametrisation invariant.

Proof. By Definition 5, we need to show that for ψ = φ ∘ f and θ = f^{-1}(ξ) we have

    dψ_θ(G^+(θ)∇_θ L) = dφ_ξ(G^+(ξ)∇_ξ L).

Since the differential df_θ is surjective, we have span{∂_i(ξ)}_i = span{∂_j(θ)}_j. Therefore, by using Equation (8) of Theorem 1, we get

    dψ_θ(G^+(θ)∇_θ L) = Π_θ(grad_p L) = Π_ξ(grad_p L) = dφ_ξ(G^+(ξ)∇_ξ L),

which is what we wanted to show.
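Theorem 2 is also easy to check numerically. The sketch below is our own: for a Euclidean metric on Z, composing a parametrisation with a diffeomorphism multiplies the Jacobian by an invertible matrix, which leaves the pushforward of the natural parameter gradient unchanged.

```python
# Numerical check (our sketch, Euclidean metric on Z) of Theorem 2: for
# psi = phi ∘ f with f a diffeomorphism, the natural parameter gradients of
# phi and psi agree once pushed to the model.
import numpy as np

rng = np.random.default_rng(3)
J_phi = rng.standard_normal((5, 2)) @ rng.standard_normal((2, 3))  # dphi_xi, rank 2, d = 3
F = rng.standard_normal((3, 3)) + 3 * np.eye(3)                    # df_theta, invertible
J_psi = J_phi @ F                                                  # chain rule: dpsi_theta
grad_L = rng.standard_normal(5)                                    # grad_p L in T_p Z

def push_nat_param_grad(J, g):
    """Pushforward of G^+(.) times the parameter gradient through phi."""
    return J @ (np.linalg.pinv(J.T @ J) @ (J.T @ g))

assert np.allclose(push_nat_param_grad(J_phi, grad_L),
                   push_nat_param_grad(J_psi, grad_L))
```

Both pushforwards equal the projection of grad_L onto the common column span of J_phi and J_psi, exactly as the proof argues via equation (8).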
Remark 1. Note that under the extra assumptions that M is a smooth manifold and all parametrisations are required to be diffeomorphisms, Definitions 4 and 5 are equivalent, since the composition f = φ^{-1} ∘ ψ is a diffeomorphism. These assumptions are often implicitly made when referring to the invariance of the natural gradient. However, as we will see below, this is no longer the case in our more general setting.

Example 1
This example is of a graphical nature. Consider the parametrisation φ that is the composition of the two maps in Figure 3. Now let ψ : Ξ → M be the same parametrisation, but with a 90 degree rotation around φ(ξ) applied before projecting down to M.
Figure 3: Parametrisation with a non-surjective span of the parameter vectors

Note that the spans of the parameter vectors have trivial intersection, as depicted in Figure 4a. This immediately implies that for any representation we have dψ_θ O(ψ, θ) ≠ dφ_ξ O(φ, ξ), except when both sides are equal to zero. In particular, we can let the natural gradient be as in Figure 4b. We know from Theorem 1 that the natural parameter gradients on M will be the projections of the natural gradient onto the respective spans of the parameter vectors, as depicted in the figure. Note that the projection should be orthogonal with respect to the inner product g_p, which we have chosen here to be Euclidean for ease of illustration. This example shows that for non-singular points, we can construct two parametrisations that give different natural parameter gradients on the same point of the model. Note that this is not in violation of Theorem 2, since there does not exist a diffeomorphism f such that ψ = φ ∘ f.

Example 2

Let us consider the case in which φ is a smooth map from an interval on the real line to R^2, as depicted in Figure 5. We have that ξ_1 and ξ_2 are both mapped to the same point p in R^2. Note that M is in this case not a locally embedded submanifold around p, and thus p is a singular point. Note that G(ξ_1) is a real number different from zero and therefore non-degenerate. Calculating the natural parameter gradient on M for ξ = ξ_1 gives

    dφ_{ξ_1}(G^{-1}(ξ_1) ∇_{ξ_1} L) = (G^{-1}(ξ_1) ∂(L∘φ)/∂ξ (ξ_1)) ∂(ξ_1).    (22)

Since G^{-1}(ξ_1) ∂(L∘φ)/∂ξ (ξ_1) is a scalar, the resulting vector lies in the span of ∂(ξ_1), illustrated by the blue arrows in the figure. Now let f : Θ → Ξ be a diffeomorphism such that f(θ_1) = ξ_2. An alternative parametrisation of M is given by ψ := φ ∘ f. Calculating the natural parameter gradient at θ_1 for this parametrisation gives

    dψ_{θ_1}(G^{-1}(θ_1) ∇_{θ_1} L) = (G^{-1}(θ_1) ∂(L∘ψ)/∂θ (θ_1)) ∂(θ_1).

Note that this vector is in the span of ∂(θ_1), denoted by the red arrows in the figure, and therefore in general different from (22). This shows that when span{∂_i(ξ)}_i ≠ span{∂_j(θ)}_j, the outcome of (G^+(ξ)∇_ξ L)^i ∂_i(ξ) can be dependent on the choice of parametrisation, and therefore the natural parameter gradient is not parametrisation independent. Note, however, that this result is not in contradiction with Theorem 2, since we do not have θ_1 = f^{-1}(ξ_1). See Appendix A.2 for a worked-out example of this.
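The mechanism behind both examples can be demonstrated in two lines of linear algebra. The snippet below is our own illustration, with the two branch directions chosen as the coordinate axes as a stand-in for the blue and red arrows of the figures:

```python
# Numerical illustration (ours) of Examples 1 and 2: when the spans of the
# coordinate vectors at the same model point differ between two
# parametrisations, the natural parameter gradients on M differ too.
import numpy as np

def push(J, g):
    """Pushforward of the natural parameter gradient, Euclidean metric."""
    return J @ (np.linalg.pinv(J.T @ J) @ (J.T @ g))

grad_L = np.array([1.0, 1.0])        # grad_p L at the common point p
d_phi = np.array([[1.0], [0.0]])     # span of the first parametrisation
d_psi = np.array([[0.0], [1.0]])     # span of the second parametrisation

p1, p2 = push(d_phi, grad_L), push(d_psi, grad_L)
# Two different projections of the same gradient: (1, 0) versus (0, 1).
assert not np.allclose(p1, p2)
```

Each parametrisation projects the same gradient onto a different line, so the two natural parameter gradients on M disagree, exactly as in the figures.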

Reparametrisation (in)variance on the parameter space
In the previous section we have looked at the invariance properties of the natural parameter gradient from the perspective of the model. One can also study the invariance properties from the perspective of the parameter space, as is done for example in Section 12 of [12]. Translating the definition of invariance given there to our notation gives the following:

Definition 6 (Reparametrisation invariance on the parameter space). A representation O(·, ·) is called reparametrisation invariant on the parameter space if for any pair of parametrisations φ, ψ such that ψ = φ ∘ f for a diffeomorphism f : Θ → Ξ, and coordinates ξ, θ such that θ = f^{-1}(ξ), we have:

    df_θ O(ψ, θ) = O(φ, ξ).    (25)

Note that reparametrisation invariance on the parameter space implies reparametrisation invariance on the model as defined in Definition 5. Furthermore, it can be shown that when M is a smooth manifold and all parametrisations are required to be diffeomorphisms, as in Remark 1, this definition is equivalent to Definitions 4 and 5. In that case, the natural parameter gradient satisfies Equation (25). As we will see below, this is not true for general φ. We would like to argue, however, that this is not a suitable definition of invariance, since multiple vectors on the parameter space can be mapped to the same vector on the model. Therefore, inequality on the parameter space need not imply inequality on the model.
We will now make the above explicit. Let us choose the MP inverse as the generalised inverse and consider an alternative parametrisation ψ = φ ∘ f for a diffeomorphism f : Θ → Ξ (see Figure 6). We denote the matrix of partial derivatives of f at θ by F(θ), with entries F^j_i(θ) = ∂f^j/∂θ^i (θ), the row index being i. For ξ = f(θ), the chain rule gives the following relations:

    ∂_i(θ) = F^j_i(θ) ∂_j(ξ),    ∇_θ L = F(θ) ∇_ξ L,    G(θ) = F(θ) G(ξ) F^T(θ).

We map the natural parameter gradient G^+(θ)∇_θ L to T_ξ Ξ through df_θ and get:

    df_θ(G^+(θ)∇_θ L) = F^T(θ) (F(θ) G(ξ) F^T(θ))^+ F(θ) ∇_ξ L.

We will write y_Ξ, y_Θ for the coefficients of G^+(ξ)∇_ξ L and df_θ(G^+(θ)∇_θ L) respectively. From Theorem 1 and the fact that F(θ) is of full rank, we know that F(θ)∇_ξ L lies in the image of F(θ)G(ξ)F^T(θ). Therefore, by the minimum-norm property of the MP inverse, we have that

    y_Θ = F^T(θ) argmin_{x : F(θ)G(ξ)F^T(θ) x = F(θ)∇_ξ L} ||x||
        = argmin_{y : G(ξ) y = ∇_ξ L} ||F^{T,-1}(θ) y||,    (33)

where we substitute y = F^T(θ) x in the last line.
Remark 2. Note that ||F^{T,-1}(θ)(·)|| is the pushforward of the norm on Θ through f. This nicely shows the equivalence between, on the one hand, constructing a different parametrisation (ψ) and, on the other hand, defining a different inner product ||F^{T,-1}(θ)(·)|| for the existing parametrisation (φ).
Comparing the result to the natural parameter gradient on Ξ,

    y_Ξ = G^+(ξ)∇_ξ L = argmin_{y : G(ξ) y = ∇_ξ L} ||y||,    (36)

we see that, because the norms in (33) and (36) are different, generally y_Θ ≠ y_Ξ. However, both satisfy G(ξ)y = ∇_ξ L, and therefore G(ξ)(y_Θ − y_Ξ) = 0. This implies

    dφ_ξ(y_Θ) − dφ_ξ(y_Ξ) = (y_Θ − y_Ξ)^i ∂_i(ξ) = 0,    (37)

where the last equality can be verified by taking the norm of the RHS of (37) and using that the norm is non-degenerate, like so:

    ||(y_Θ − y_Ξ)^i ∂_i(ξ)||^2_{g_p} = (y_Θ − y_Ξ)^T G(ξ) (y_Θ − y_Ξ) = 0.

This shows that for overparametrised systems the natural parameter gradient is not reparametrisation invariant on the parameter space. However, as implied by Theorem 1, the dependence on the parametrisation disappears when the gradient is mapped to the model. See Appendix A.3 for a worked-out example of the above discussion.
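The discussion above can be reproduced numerically. The snippet is our own illustration (Euclidean metric, MP inverse, standard Jacobian convention): the two coefficient vectors differ on the parameter space, yet both solve the same linear system and therefore have the same pushforward.

```python
# Numerical illustration (ours): y_Theta and y_Xi differ as vectors on the
# parameter space, yet both solve G y = grad_xi L, so their pushforwards to
# the model coincide.
import numpy as np

rng = np.random.default_rng(4)
J = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))  # dphi_xi, d = 3, rank 2
Fj = rng.standard_normal((3, 3)) + 3 * np.eye(3)               # Jacobian of f, invertible
grad_z = rng.standard_normal(4)
G, grad_xi = J.T @ J, J.T @ grad_z

y_Xi = np.linalg.pinv(G) @ grad_xi                              # minimum-norm MP solution
y_Th = Fj @ (np.linalg.pinv(Fj.T @ G @ Fj) @ (Fj.T @ grad_xi))  # psi-solution mapped to T_xi

assert not np.allclose(y_Th, y_Xi)       # not invariant on the parameter space
assert np.allclose(G @ y_Th, grad_xi)    # but both solve G y = grad_xi L
assert np.allclose(J @ y_Th, J @ y_Xi)   # hence equal pushforwards on the model
```

The difference y_Th − y_Xi lies in the kernel of G, i.e. in the directions the parametrisation collapses, which is why it is invisible on the model.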

Practical considerations for the natural gradient method
The natural gradient method is performed by updating the current parameter vector ξ ∈ Ξ in the direction of the vector G^+(ξ)∇_ξ L. In the case of a constrained parameter space, i.e. when Ξ is not the full space R^d, such as for the space of covariance matrices, one runs the risk of stepping outside the parameter space; see also [2]. This is called a constraint violation. One can use backprojection [8], the addition of a penalty term, or weight clipping [7] to avoid these violations. Note that for a variety of neural network applications, including many supervised learning tasks, the parameter space is unconstrained. For these models, however, the generalised inverse is often hard to compute due to the high number of parameters. In this context, the Woodbury matrix identity with damping is often used instead [15]. Investigating these topics further falls outside the scope of this paper.
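A common practical variant replaces the exact pseudoinverse by a damped inverse (G + λI)^{-1}. The sketch below is our own illustration of such a damped update, not an algorithm from the paper:

```python
# A minimal sketch (ours, not the paper's algorithm) of a damped
# natural-gradient update: (G + damping*I)^(-1) replaces the exact G^+.
import numpy as np

def damped_natural_step(J, grad_z, lr=0.1, damping=1e-3):
    """One update direction for minimisation, using a damped Gram matrix."""
    G = J.T @ J                      # Gram matrix for a Euclidean metric on Z
    grad_xi = J.T @ grad_z           # ordinary parameter gradient (chain rule)
    return -lr * np.linalg.solve(G + damping * np.eye(G.shape[0]), grad_xi)

# Toy use: linear model phi(xi) = J xi with objective L(z) = ||z||^2 / 2,
# so grad_z L = z = J xi. The step strictly decreases L on the model.
rng = np.random.default_rng(5)
J = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 3))   # overparametrised
xi = rng.standard_normal(3)
step = damped_natural_step(J, J @ xi)
```

Damping makes the linear solve well-posed even though G itself is singular in the overparametrised case; as the damping tends to zero, the step approaches the MP-inverse step for gradients in the image of G.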

Reparametrisation (in)variance of the natural gradient method trajectory
We have shown in Theorem 2 that, from the perspective of the model, the natural parameter gradient is reparametrisation invariant. That is, for two parametrisations φ, ψ for which ψ = φ ∘ f for a diffeomorphism f and θ = f^{-1}(ξ), we have

    dψ_θ(G^+(θ)∇_θ L) = dφ_ξ(G^+(ξ)∇_ξ L).

Since the natural gradient method updates the current parameter vector ξ ∈ Ξ in the direction of this vector, this implies that if we were to update the parameters for both parametrisations by an infinitesimal amount, this would give us the same result on the model. We would like to emphasise, however, that updating the parameters by a finite amount will in general result in different locations on the model. Therefore the natural gradient method trajectory depends on the choice of parametrisation. This is, however, not an issue specific to overparametrised models, but one of the natural gradient method in general. See Section 12 of [12] for exact bounds on the invariance.
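The gap between infinitesimal and finite updates is easy to see in one dimension. The example below is our own: the identity parametrisation and its composition with the diffeomorphism f(t) = t + t^3, started at the same model point.

```python
# Numerical illustration (ours): two parametrisations related by a
# diffeomorphism agree infinitesimally on the model, but finite steps of the
# natural gradient method land on different model points.
import numpy as np

f = lambda t: t + t**3          # diffeomorphism of R (f' > 0 everywhere)
df = lambda t: 1 + 3 * t**2
grad_L = lambda z: z            # objective L(z) = z^2 / 2 on the model Z = R

lr = 0.5
xi0 = 2.0                       # phi = identity parametrisation
theta0 = 1.0                    # psi = f, same model point: f(1.0) = 2.0

xi1 = xi0 - lr * grad_L(xi0)                           # G(xi) = 1
theta1 = theta0 - lr * grad_L(f(theta0)) / df(theta0)  # G(theta) = f'(theta)^2

# xi1 = 1.0 while f(theta1) = f(0.75) = 1.171875: different model points.
assert not np.isclose(xi1, f(theta1))
```

At the starting point the two update directions project to the same tangent vector on the model, but after one finite step of size lr the trajectories have already separated.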

Occurrence of non-proper points
We saw in the proof of Theorem 2 that when span{∂_i(ξ)}_i = span{∂_j(θ)}_j for two parametrisations φ and ψ with φ(ξ) = ψ(θ), we have dφ_ξ(G^+(ξ)∇_ξ L) = dψ_θ(G^+(θ)∇_θ L). For φ(ξ) ∈ Smooth(M), note that this equality holds in particular when span{∂_i(ξ)}_i = span{∂_j(θ)}_j = T_p M, i.e. when φ is proper in ξ. Therefore we will now study when this is the case. We start by recalling some basic facts from smooth manifold theory. Let M, N be smooth manifolds and F : M → N a smooth map. We call a point p ∈ M a regular point if dF_p : T_p M → T_{F(p)} N is surjective, and a critical point otherwise. A point q ∈ N is called a regular value if all elements of F^{-1}(q) are regular points, and a critical value otherwise. If M is n-dimensional, we say that a subset S ⊂ M has measure zero in M if for every smooth chart (U, ψ) for M, the subset ψ(S ∩ U) ⊂ R^n has n-dimensional measure zero; that is, for every δ > 0 there exists a countable cover of ψ(S ∩ U) consisting of open rectangles the sum of whose volumes is less than δ. We have the following result, based on Sard's theorem:

Proposition 1. If Smooth(M) is a manifold, then the image of the set of points for which φ is not proper has measure zero in Smooth(M).
Proof. From the definition of Smooth(M) we know that for every p ∈ Smooth(M) there exists a U_p open in Z such that U_p ∩ M is an embedded submanifold of Z. Let U := ∪_{p ∈ Smooth(M)} U_p. Note that U ∩ M = Smooth(M), and therefore φ^{-1}(Smooth(M)) = φ^{-1}(U) is an open subset of Ξ and thus an embedded submanifold. Therefore we can consider the map

    φ|_{φ^{-1}(Smooth(M))} : φ^{-1}(Smooth(M)) → Smooth(M),

and note that the image of the set of points for which φ is not proper is equal to the set of critical values of φ|_{φ^{-1}(Smooth(M))} in Smooth(M). A simple application of Sard's theorem yields the result.
This proposition implies that when Smooth(M) is a manifold, the set of points for which the pushforward of the natural parameter gradient is unequal to the natural gradient has measure zero in Smooth(M).

Conclusion
In this paper we have studied the natural parameter gradient, which was defined as the update direction of the natural gradient method, and its pushforward to the model in an overparametrised setting. We have seen that the latter is equal to the natural gradient under certain conditions. Furthermore, we have proposed different notions of invariance and studied whether the natural parameter gradient satisfies them. From the perspective of the model, we have seen that the natural parameter gradient is reparametrisation invariant, but that it is not parametrisation independent. Additionally, we saw that the natural parameter gradient is not reparametrisation invariant on the parameter space. We have argued, however, that this notion is less suitable in an overparametrised setting, since multiple vectors on the parameter space can correspond to the same vector on the model. Finally, we have given some practical considerations for the natural gradient method.
[18] Guodong Zhang, James

A.1 Proof that no non-trivial representation is parametrisation independent

Proof. Let M be a model and assume that O is a representation satisfying Definition 4. Let φ be a parametrisation of M and ξ* ∈ Ξ a fixed (arbitrary) element of the domain of φ. Now consider the following function:

    f : R^d → R^d,    f(θ) = ((θ^1)^3 + (ξ*)^1, ..., (θ^d)^3 + (ξ*)^d).

We define Θ := f^{-1}(Ξ) and ψ := φ ∘ f|_Θ. First note that, since f is continuous, Θ is an open set. Secondly, since f is surjective, we have ψ(Θ) = M; therefore ψ is a parametrisation of M. It is easy to see that the differential of f at θ = 0, df_0, is equal to zero, and therefore by the chain rule we have dψ_0 = d(φ ∘ f|_Θ)_0 = dφ_{f(0)} ∘ df_0 = 0. Furthermore, we have ψ(0) = φ(ξ*). Therefore, in order for O to satisfy Equation (18), we need that

    dφ_{ξ*} O(φ, ξ*) = dψ_0 O(ψ, 0) = 0.

Since ξ* was chosen arbitrarily, this implies that O is a trivial representation.
Remark 3. The function f used in the proof above is actually a homeomorphism, since it is a continuous bijection and its inverse, given by

    f^{-1}(ξ) = ((ξ^1 − (ξ*)^1)^{1/3}, ..., (ξ^d − (ξ*)^d)^{1/3}),

is also continuous. The inverse is, however, not differentiable, and therefore f is not a diffeomorphism, which is required for Definition 5.

A.3 Example calculation of reparametrisation (in)variance on the parameter space
We illustrate the discussion in Section 3.2 with a specific calculation. Let us consider the following setting:

    L(x, y) = x^2.    (60)

Figure 1: Parametrisation and objective function

Figure 4: Spans and gradient vectors on the model of both parametrisations