Asymptotic Freeness of Layerwise Jacobians Caused by Invariance of Multilayer Perceptron: The Haar Orthogonal Case

Free Probability Theory (FPT) provides rich tools for handling the mathematical difficulties caused by random matrices that appear in research related to deep neural networks (DNNs), such as dynamical isometry, the Fisher information matrix, and training dynamics. FPT suits these studies because the DNN's parameter-Jacobian and input-Jacobian are polynomials of layerwise Jacobians. However, the critical assumption of asymptotic freeness of the layerwise Jacobians has not been proven completely so far. The asymptotic freeness assumption plays a fundamental role when propagating spectral distributions through the layers. Haar distributed orthogonal matrices are essential for achieving dynamical isometry. In this work, we prove the asymptotic freeness of the layerwise Jacobians of a multilayer perceptron (MLP) in this case. A key step in the proof is an invariance of the MLP: considering the orthogonal matrices that fix the hidden units in each layer, we can replace each layer's parameter matrix with itself multiplied by such an orthogonal matrix without changing the MLP. Furthermore, if the original weights are Haar orthogonal, the Jacobian is also unchanged by this replacement. Using this key fact, we can finally replace each weight with a Haar orthogonal random matrix independent of the Jacobian of the activation function.


Introduction
Free Probability Theory (FPT) provides essential insight when handling mathematical difficulties caused by random matrices that appear in deep neural networks (DNNs) [18,6,7]. DNNs have been successfully used to achieve empirically high performance in various machine learning tasks [12,5]. However, their understanding at a theoretical level is limited, and their success relies heavily on heuristic search over settings such as architecture and hyperparameters. To understand and improve the training of DNNs, researchers have developed several theories to investigate, for example, the vanishing/exploding gradient problem [22], the shape of the loss landscape [19,10], and the global convergence of training and generalization [8]. The nonlinearity of activation functions, the depth of the DNN, and the noncommutativity of random matrices result in significant mathematical challenges. In this respect, FPT, invented by Voiculescu [24,25,26], is well suited for this kind of analysis.
FPT essentially appears in the analysis of dynamical isometry [17,18]. It is well known that reducing the training error in very deep models is difficult without carefully preventing vanishing/exploding gradients. Naive settings (i.e., of activation function and initialization) cause vanishing/exploding gradients whenever the network is sufficiently deep. Dynamical isometry [21,18] was proposed to solve this problem: it facilitates training by setting the singular values of the input-output Jacobian to be one, where the input-output Jacobian is the Jacobian matrix of the DNN at a given input. Experiments have shown that very deep models satisfying dynamical isometry at initialization can be trained without vanishing/exploding gradients; [18,28,23] found that DNNs achieve approximate dynamical isometry over random orthogonal weights, but not over random Gaussian weights. To outline the theory, let J be the Jacobian of the multilayer perceptron (MLP), the fundamental model of DNNs. The Jacobian J is given by the product of layerwise Jacobians

J = D_L W_L · · · D_1 W_1,

where each W_ℓ is the ℓ-th weight matrix, each D_ℓ is the Jacobian of the ℓ-th activation function, and L is the number of layers. Under an assumption of asymptotic freeness, the limit spectral distribution is given in [18].
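This product structure is easy to probe numerically. The sketch below is not code from the paper; the hard-tanh activation, the sizes, and the helper `haar_orthogonal` are illustrative choices. It forms J = D_L W_L · · · D_1 W_1 with Haar orthogonal weights and checks that the singular values stay bounded by one.

```python
import numpy as np

def haar_orthogonal(n, rng):
    # Haar sample on O(n): QR of a Gaussian matrix, with the signs of
    # R's diagonal fixed so that the law is exactly Haar.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

n, L = 200, 10
rng = np.random.default_rng(0)
phi = lambda h: np.clip(h, -1.0, 1.0)               # hard tanh
dphi = lambda h: (np.abs(h) < 1.0).astype(float)    # its a.e. derivative

Ws = [haar_orthogonal(n, rng) for _ in range(L)]
x = rng.standard_normal(n)

# J = D_L W_L ... D_1 W_1, with D_l = diag(phi'(W_l x^{l-1})).
J = np.eye(n)
for W in Ws:
    h = W @ x
    J = np.diag(dphi(h)) @ W @ J
    x = phi(h)

svals = np.linalg.svd(J, compute_uv=False)
# Orthogonal W_l and |phi'| <= 1 force every singular value into [0, 1].
print(svals.max() <= 1.0)
```

Each factor D_ℓ W_ℓ has operator norm at most one here, which is why the whole product remains well conditioned in a way that Gaussian weights do not guarantee.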
To examine the training dynamics of MLPs achieving dynamical isometry, [7] introduced a spectral analysis of the Fisher information matrix per sample of the MLP. The Fisher information matrix (FIM) has been a fundamental quantity for such theoretical understanding. The FIM describes the local metric of the loss surface with respect to the KL-divergence [1]. The neural tangent kernel [8], which has the same eigenvalue spectrum as the FIM except for trivial zeros, also describes the learning dynamics of DNNs when the dimension of the last layer is much smaller than that of the hidden layers. In particular, the FIM's eigenvalue spectrum describes the efficiency of optimization methods. For instance, the maximum eigenvalue determines an appropriate learning rate of first-order gradient methods for convergence [13,10,27]. Despite its importance in neural networks, the FIM spectrum has received very little theoretical study: existing analyses were limited to random matrix theory for shallow networks [19] or to mean-field eigenvalue bounds, which may be loose in general [9]. Thus, [7] focused on the FIM per sample and found an alternative approach applicable to DNNs. The FIM per sample is equal to J_θ^⊤ J_θ, where J_θ is the parameter Jacobian. Moreover, the eigenvalues of the FIM per sample are equal, up to trivial zero eigenvalues and normalization, to the eigenvalues of the matrix H_L defined recursively as follows: where I is the identity matrix, and q_ℓ is the empirical variance of the ℓ-th hidden unit. Under an asymptotic freeness assumption, [7] gave limit spectral distributions of H_L.
The asymptotic freeness assumptions play a critical role in these studies [18,7] in obtaining the propagation of spectral distributions through the layers. However, the proof of the asymptotic freeness had not been completed. In the present work, we prove the asymptotic freeness of the layerwise Jacobians of multilayer perceptrons with Haar orthogonal weights.

Main Results
Our results are as follows. Firstly, the following L + 1 tuple of families is asymptotically free almost surely (see Theorem 4.1): Secondly, for each ℓ = 1, . . ., L − 1, the following pair is almost surely asymptotically free (see Proposition 4.2): The asymptotic freeness is at the heart of the spectral analysis of the Jacobian. Lastly, for each ℓ = 1, . . ., L − 1, the following pair is almost surely asymptotically free (see Proposition 4.3). The asymptotic freeness of this pair is the key to the analysis of the conditional Fisher information matrix.
The fact that each parameter matrix W_ℓ contains entries correlated with the activation's Jacobian matrix D_ℓ is a hurdle to showing asymptotic freeness. Therefore, among the entries of W_ℓ, we move those that appear in D_ℓ to the N-th row or column. This is achieved by changing the basis of W_ℓ. The orthogonal matrix (3.2) that defines the change of basis can be chosen so that each hidden layer is fixed, and as a result, the MLP does not change. The dependency between W_ℓ and D_ℓ is then confined to the N-th row or column, so it can be ignored by taking the limit N → ∞. From this result, we can conclude that (W_ℓ, W_ℓ^⊤) and D_ℓ are asymptotically free for each ℓ. However, this is still not enough to prove the asymptotic freeness between the families (W_ℓ, W_ℓ^⊤)_{ℓ=1,...,L} and (D_ℓ)_{ℓ=1,...,L}. Therefore, we complete the proof by additionally considering another change of basis (3.3) that rotates the (N − 1) × (N − 1) submatrix of each W_ℓ by independent Haar orthogonal matrices. The key to the desired asymptotic freeness is the invariance of the MLP described in Lemma 3.1. The invariance follows from a structural property of the MLP and an invariance property of Haar orthogonal random matrices, and it allows us to apply the asymptotic freeness of Haar orthogonal random matrices [2] to our situation.

Related Works
Asymptotic freeness is weaker than the forward-backward independence assumed in research on dynamical isometry [17,18,10]. Although studies of mean-field theory [21,12,4] have succeeded in explaining many experimental deep learning results, they use an artificial assumption (gradient independence [29]) that is not rigorously true. Asymptotic freeness is weaker than this artificial assumption. Our work clarifies that asymptotic freeness is exactly the property that is both useful and rigorously valid for the analysis.
Several works prove or treat asymptotic freeness with Gaussian initialization [6,29,30,16]. However, asymptotic freeness had not been proven for orthogonal initialization. As dynamical isometry can be achieved under orthogonal initialization but not under Gaussian initialization [18], a proof of asymptotic freeness for orthogonal initialization is essential. Since our proof makes crucial use of the invariance properties of Haar distributed random matrices, it is transparent: we only need to replace the weights with Haar orthogonal matrices independent of the other Jacobians. While [6] restricts the activation function to ReLU, our proof covers a comprehensive class of activation functions, including smooth ones.

Organization of the Paper
Section 2 is devoted to preliminaries. It contains the setting of the MLP and notation concerning random matrices, spectral distributions, and free probability theory. Section 3 presents the two keys to proving the main results: one is the invariance of the MLP, and the other is the cutoff of a dimension. Section 4 is devoted to proving the main results on asymptotic freeness. In Section 5, we show applications of the asymptotic freeness to the spectral analysis of random matrices that appear in the theory of dynamical isometry and the training dynamics of DNNs. Section 6 is devoted to discussion and future work.

Setting of MLP
We consider the usual multilayer perceptron setting, as in studies of the FIM [19,10] and dynamical isometry [21,18,7]. Fix L, N ∈ N. We consider an L-layer multilayer perceptron as a parametrized map f = (f_θ | θ = (W_1, . . ., W_L)) with weight matrices W_1, W_2, . . ., W_L ∈ M_N(R) as follows. Firstly, consider functions ϕ_1, . . ., ϕ_{L−1} on R; we assume that each ϕ_ℓ is continuous and differentiable except at finitely many points. Secondly, for a single input x ∈ R^N we set x_0 = x. In addition, for ℓ = 1, . . ., L, set inductively

x_ℓ = ϕ_ℓ(W_ℓ x_{ℓ−1} + b_ℓ),

where ϕ_ℓ acts on R^N entrywise. Note that we set b_ℓ = 0 to simplify the analysis. Write f_θ(x) = x_L. Denote by D_ℓ the Jacobian of the activation ϕ_ℓ, given by the diagonal matrix D_ℓ = diag((ϕ_ℓ)′(W_ℓ x_{ℓ−1})). Lastly, we assume that the W_ℓ (ℓ = 1, . . ., L) are independent Haar orthogonal random matrices, and we further consider the conditions (d1), . . ., (d4) on distributions below. In Fig. 1, we visualize the dependency of the random variables.
Figure 1: A graphical model of random matrices and random vectors drawn by the following rules (i)-(iii). (i) A node's boundary is drawn as a square or a rectangle if it contains a square random matrix; otherwise, it is drawn as a circle. (ii) For each node, its parent node is the source node of a directed arrow. A node is measurable with respect to the σ-algebra generated by all of its parent nodes. (iii) Nodes which have no parent node are independent.
(d3) For fixed N, the family Let us define r_ℓ > 0 and q_ℓ > 0 by the following recurrence relations: The inequality r_ℓ < ∞ holds by assumption (a2) on the activation functions. We further assume that each activation function satisfies the following conditions (a1), . . ., (a5).
(a1) It is a continuous function on R and is not the identically zero function.
(a3) It is differentiable almost everywhere with respect to Lebesgue measure. We denote by (ϕ_ℓ)′ the derivative, defined almost everywhere.
Example 2.1 (Activation Functions). The following activation functions, used in [17, 18, 7], satisfy the above conditions.
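The recurrence for q_ℓ above can be checked by simulation. The sketch below is illustrative only: it assumes the standard mean-field form q_ℓ = E[ϕ_ℓ(√q_{ℓ−1} Z)^2] with Z ∼ N(0, 1) and unit weight scale (the precise normalization in the paper's recurrence may differ), and it uses hard tanh for concreteness.

```python
import numpy as np

def haar_orthogonal(n, rng):
    # Haar sample on O(n) via QR with sign correction.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

n = 4000
rng = np.random.default_rng(1)
phi = lambda h: np.clip(h, -1.0, 1.0)        # hard tanh

q0 = 1.0
x0 = rng.standard_normal(n)
x0 *= np.sqrt(n * q0) / np.linalg.norm(x0)   # empirical variance exactly q0

x1 = phi(haar_orthogonal(n, rng) @ x0)       # one MLP layer
q1_layer = np.sum(x1 ** 2) / n               # empirical q_1

z = rng.standard_normal(10 ** 6)             # Monte Carlo for E[phi(sqrt(q0) Z)^2]
q1_rec = np.mean(phi(np.sqrt(q0) * z) ** 2)
print(abs(q1_layer - q1_rec) < 0.05)         # True: the recurrence predicts q_1
```

The agreement reflects the fact, formalized in Lemma 2.11 below, that for a Haar orthogonal W the coordinates of W x_0 behave like N(0, q_0) samples in the large-N limit.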

Basic Notations
Linear Algebra. We denote by M_N(K) the algebra of N × N matrices with entries in a field K. Write the unnormalized and normalized traces of A ∈ M_N(K) as follows: In this work, a random matrix is an M_N(R)-valued Borel measurable map from a fixed probability space, for some N ∈ N. We denote by O_N the group of N × N orthogonal matrices. It is well known that O_N admits a unique left and right translation invariant probability measure, called the Haar probability measure.
Spectral Distribution. Recall that the spectral distribution µ of a linear operator a is the probability measure satisfying tr(a^k) = ∫ x^k µ(dx) for all k ∈ N, where tr is the normalized trace. If A is an N × N symmetric matrix with N ∈ N, its spectral distribution is given by N^{-1} Σ_{n=1}^N δ_{λ_n}, where λ_n (n = 1, . . ., N) are the eigenvalues of A, and δ_λ is the discrete probability distribution whose support is {λ} ⊂ R.
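A small illustration (not from the paper; the GOE normalization is an arbitrary choice): the empirical spectral distribution of a normalized Wigner matrix, whose moments tr(A^k) approach the semicircle moments as N grows.

```python
import numpy as np

n = 500
rng = np.random.default_rng(2)
g = rng.standard_normal((n, n))
A = (g + g.T) / np.sqrt(2.0 * n)   # symmetric Wigner matrix, semicircle limit

eigs = np.linalg.eigvalsh(A)
# k-th moment of the spectral distribution = tr(A^k) = mean of eigs**k.
m2, m4 = np.mean(eigs ** 2), np.mean(eigs ** 4)
print(m2, m4)   # close to the semicircle moments 1 and 2
```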
Joint Distribution of All Entries For random matrices X 1 , . . ., X L , Y 1 , . . ., Y L and random vectors x 1 , . . ., x L , y 1 , . . ., y L , we write if the joint distributions of all entries of corresponding matrices and vectors in the families match.

Asymptotic Freeness
In this section, we summarize the required background on random matrices and free probability theory. We start with the following definition. We omit the definition of a C*-algebra; for complete details, we refer to [14]. Definition 2.2. A noncommutative C*-probability space (NCPS, for short) is a pair (A, τ) of a unital C*-algebra A and a faithful tracial state τ on A, defined as follows. A linear map τ on A is said to be a tracial state on A if the following four conditions are satisfied.
In addition, we say that τ is faithful if τ (a * a) = 0 implies a = 0.
For N ∈ N, the pair of the algebra M_N(C) of N × N complex matrices and the normalized trace tr is an NCPS. Consider the algebra M_N(R) of N × N real matrices with the normalized trace tr. The pair itself is not an NCPS in the sense of Definition 2.2 since it is not a C-linear space. However, M_N(C) contains M_N(R), and the *-operation is preserved by setting, for A ∈ M_N(R): Also, the inclusion M_N(R) ⊂ M_N(C) preserves the trace. Therefore, we consider the joint distributions of matrices in M_N(R) as those of elements in the NCPS (M_N(C), tr). Definition 2.3 (Joint Distribution in an NCPS). Let a_1, . . ., a_k ∈ A and let C⟨X_1, . . ., X_k⟩ be the free algebra of noncommutative polynomials over C generated by the k indeterminates X_1, . . ., X_k. Then the joint distribution of the k-tuple (a_1, . . ., a_k) is the linear form µ_{a_1,...,a_k} : C⟨X_1, . . ., X_k⟩ → C defined by µ_{a_1,...,a_k}(P) = τ(P(a_1, . . ., a_k)) for any P ∈ C⟨X_1, . . ., X_k⟩. Definition 2.5 (Freeness). Let (A, τ) be an NCPS. Let A_1, . . ., A_k be subalgebras having the same unit as A. They are said to be free if the following holds: for any n ∈ N, any sequence j_1, . . ., j_n ∈ [k], and any choice of elements as below, the following holds true: Moreover, elements of A are said to be free if and only if the unital subalgebras they generate are free.
The example below is basically a reformulation of freeness, and follows from [26].
Let us now introduce the asymptotic freeness of random matrices with compactly supported limit spectral distributions. Since we consider a family of finitely many random matrices, we restrict ourselves to a finite index set. Note that finiteness of the index set is not required for the general definition of freeness.
It is then said to be almost surely asymptotically free as N → ∞ if the following two conditions are satisfied.
1. There exists a family (a_i)_{i∈I} of elements in A such that the following k-tuple is free:

2. For every
almost surely, where |I| is the number of elements of I.

Haar Distributed Orthogonal Random Matrices
We introduce asymptotic freeness of Haar distributed orthogonal random matrices.
Proposition 2.8. For N ∈ N, let V_1(N), . . ., V_L(N) be independent O_N Haar random matrices, and let A_1(N), . . ., A_{L′}(N) be symmetric random matrices that have an almost-sure limit joint distribution. Assume that all entries of (V_ℓ(N))_{ℓ=1}^L are independent of those of (A_1(N), . . ., A_{L′}(N)) for each N. Then the families are asymptotically free as N → ∞.
Proof.This is a particular case of [2, Theorem 5.2].
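The statement can be probed numerically. A minimal sketch (the diagonal test matrices and the size are arbitrary illustrative choices): for deterministic symmetric A, B and Haar V, freeness predicts that normalized traces of alternating products of centered factors vanish as N → ∞.

```python
import numpy as np

def haar_orthogonal(n, rng):
    # Haar sample on O(n) via QR with sign correction.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

def tr(M):
    return float(np.trace(M)) / M.shape[0]   # normalized trace

n = 1000
rng = np.random.default_rng(3)
A = np.diag(np.where(np.arange(n) < n // 2, 1.0, -1.0))  # symmetric, tr(A) = 0
B = np.diag(np.where(np.arange(n) < n // 4, 3.0, 0.0))   # symmetric
V = haar_orthogonal(n, rng)

A0 = A - tr(A) * np.eye(n)                 # centered copies
Bt = V @ B @ V.T
B0 = Bt - tr(Bt) * np.eye(n)

# Freeness: normalized traces of alternating centered products vanish.
print(tr(A0 @ B0), tr(A0 @ B0 @ A0 @ B0))  # both O(1/n)
```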
The following proposition is a direct consequence of Proposition 2.8. Proposition 2.9. For N ∈ N, let A(N) and B(N) be N × N symmetric random matrices, and let V(N) be an N × N Haar distributed orthogonal random matrix. Assume that: 1. The random matrix V(N) is independent of A(N) and B(N) for every N ∈ N.

2. The spectral distribution of A(N) (resp. B(N)) converges in distribution to a compactly supported probability measure µ (resp. ν), almost surely.
Then the following pair is asymptotically free as N → ∞: Proof. Instead of proving directly that A(N) and V(N)B(N)V(N)^⊤ are asymptotically free, we prove that U(N)A(N)U(N)^⊤ and U(N)V(N)B(N)V(N)^⊤U(N)^⊤ are asymptotically free for any orthogonal matrix U(N), and in particular for an independent Haar distributed orthogonal matrix. This is equivalent because a global conjugation by U(N) does not affect the joint distribution. In turn, since (U(N), U(N)V(N)) has the same distribution as (U(N), V(N)) thanks to the Haar property, it is enough to prove that U(N)A(N)U(N)^⊤ and V(N)B(N)V(N)^⊤ are asymptotically free as N → ∞. Let us replace A(N) by Ã(N), where Ã(N) is diagonal with the same eigenvalues as A(N) arranged in non-increasing order, and likewise construct B̃(N) from B(N). It is clear that the resulting pairs have the same distribution. In addition, (Ã(N), B̃(N)) have a limit joint distribution by construction, and therefore we can apply Proposition 2.8.
Note that we do not require independence between A(N) and B(N) in Proposition 2.9. Here we recall the following result, which is a direct consequence of the translation invariance of Haar random matrices. Lemma 2.10. Fix N ∈ N. Let V_1, . . ., V_L be independent O_N Haar random matrices. Let T_1, . . ., T_L and S_1, . . ., S_L be O_N-valued random matrices, and let A_1, . . ., A_L be N × N random matrices. Assume that all entries of (V_ℓ)_{ℓ=1}^L are independent of (T_1, . . ., T_L, S_1, . . ., S_L, A_1, . . ., A_L). Then, Proof. For the readers' convenience, we include a proof. Consider the characteristic function (2.1) of the joint distribution of all entries. By using conditional expectation, (2.1) is equal to (2.2). By the property of the Haar measure and the independence, the conditional expectation contained in (2.2) is unchanged under the stated replacement. Thus the assertion holds.

Action of Haar Orthogonal Matrices
Firstly, we consider the action of a Haar orthogonal matrix on a random vector with finite second moment. For an N-dimensional random vector x = (x_1, . . ., x_N), we denote its empirical distribution by N^{-1} Σ_{n=1}^N δ_{x_n}, where δ_x is the delta probability measure at the point x ∈ R.
Lemma 2.11. Let (Ω, F, P) be a probability space and let x(N) be an R^N-valued random variable for each N ∈ N. Assume that there exists r > 0 such that Furthermore, we assume that x(N) and O(N) are independent. Then as N → ∞ almost surely.
Proof. Firstly, by the assumption, Now, convergence in moments to the Gaussian distribution implies convergence in law. Therefore, almost surely. This completes the proof.
Note that we do not assume that entries of x(N) are independent.
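A minimal simulation of the lemma (the size and the target second moment r^2 are illustrative): even for a constant vector, whose entries are maximally dependent, the coordinates of O(N)x(N) behave like N(0, r^2) samples.

```python
import numpy as np

def haar_orthogonal(n, rng):
    # Haar sample on O(n) via QR with sign correction.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

n, r2 = 2000, 1.5                 # illustrative size and second moment
rng = np.random.default_rng(4)

x = np.full(n, np.sqrt(r2))       # constant vector: entries fully dependent
y = haar_orthogonal(n, rng) @ x   # uniform on the sphere of radius sqrt(n*r2)

# The empirical moments of the coordinates approach those of N(0, r2);
# the second moment is exactly r2 since orthogonal maps preserve the norm.
print(np.mean(y), np.mean(y ** 2), np.mean(y ** 4))   # ~0, r2, ~3*r2**2
```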
Lemma 2.12. Let g be a measurable function and let N_g be the set of its discontinuity points. Let Z ∼ N(0, 1). Assume that P(Z ∈ N_g) = 0. Then, under the setting of Lemma 2.11, it holds that as N → ∞ almost surely.
as N → ∞}. By Lemma 2.11, P(F) = 0. Fix ω ∈ Ω \ F. For N ∈ N, let X_N be a real random variable on the probability space with By the assumption, we have P(Z ∈ N_g) = 0. Then the continuous mapping theorem (see [3, Theorem 3.2.4]) implies that g(X_N) ⇒ g(Z).

Convergence of Empirical Distribution
For any measurable function g on R and probability measure µ, we denote by g_*(µ) the push-forward of µ; that is, if a real random variable X is distributed according to µ, then g_*(µ) is the distribution of g(X).

Key to Asymptotic Freeness
Here we introduce the key lemmas used to prove the asymptotic freeness. One is about an invariance of the MLP, and the other is about cutting off matrices.

Notations
We prepare notation related to the change of basis used to cut off the entries of W_ℓ that are correlated with D_ℓ.
Lastly, let V_0, . . ., V_{L−1} be independent Haar distributed (N − 1) × (N − 1) orthogonal random matrices whose entries are independent of those of (x_0, W_1, . . ., W_L). Set Then Each V_ℓ is the (N − 1) × (N − 1) random matrix that determines the action of U_ℓ on the orthogonal complement of R x_ℓ. Further, for any ℓ = 0, . . ., L − 1, all entries of (U_0, . . ., U_{ℓ−1}) are independent of those of (W_ℓ, . . ., W_L), since each U_ℓ is G(x_ℓ, V_ℓ)-measurable, where G(x_ℓ, V_ℓ) is the σ-algebra generated by x_ℓ and V_ℓ. This completes the construction of the U_ℓ. Fig. 2 visualizes the dependency of the random variables appearing in the above discussion.
Figure 2: A graphical model of random variables in a specific case using V ℓ for U ℓ .See Fig. 1 for the graph's drawing rule.The node of W ℓ , . . ., W L is an isolated node in the graph.
In addition, let P(N) be the N × N diagonal matrix given by If there is no confusion, we omit the index N and simply write P. The matrix P(N) is an orthogonal projection onto an (N − 1)-dimensional subspace.

Invariance of MLP
Since the invariance of Haar random matrices leads to asymptotic freeness (Proposition 2.8), it is essential to investigate the network's invariance. The following invariance is the key to the main theorem. Note that the Haar property of V_ℓ is not necessary to construct U_ℓ in Lemma 3.1, but it is used in the proof of Theorem 4.1.
Here we visualize in Fig. 3 the dependency of the random variables in the case of the specific (U_ℓ)_{ℓ=0}^{L−1} in (3.3) constructed with (V_ℓ)_{ℓ=0}^{L−1}. Note that we do not use this specific construction in the proof of Lemma 3.1.
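The invariance itself is elementary to verify numerically. In the sketch below, the helper `rotation_fixing` is our own construction built from a Householder reflection, in the spirit of (3.2)-(3.3) but not the paper's code; it produces U_ℓ ∈ O_N with U_ℓ x_ℓ = x_ℓ and a Haar action on the orthogonal complement of R x_ℓ, and we check that replacing each weight W_{ℓ+1} by W_{ℓ+1} U_ℓ leaves the forward pass unchanged.

```python
import numpy as np

def haar_orthogonal(n, rng):
    # Haar sample on O(n) via QR with sign correction.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

def rotation_fixing(x, rng):
    # U in O(N) with U x = x: a Householder Y sends x/||x|| to e_N, then
    # U = Y^T diag(V, 1) Y acts by a Haar V only on the complement of R x.
    n = len(x)
    u = x / np.linalg.norm(x)
    e = np.zeros(n); e[-1] = 1.0
    v = u - e
    Y = np.eye(n) if v @ v < 1e-24 else np.eye(n) - 2.0 * np.outer(v, v) / (v @ v)
    blk = np.eye(n)
    blk[:-1, :-1] = haar_orthogonal(n - 1, rng)
    return Y.T @ blk @ Y

def forward(weights, x, phi=np.tanh):
    xs = [x]                      # xs[l] is the hidden state x_l
    for W in weights:
        xs.append(phi(W @ xs[-1]))
    return xs

n, L = 50, 4
rng = np.random.default_rng(5)
Ws = [haar_orthogonal(n, rng) for _ in range(L)]
x0 = rng.standard_normal(n)
xs = forward(Ws, x0)

# Replace each W_{l+1} by W_{l+1} U_l with U_l x_l = x_l: MLP unchanged.
Us = [rotation_fixing(xs[l], rng) for l in range(L)]
Ws2 = [Ws[l] @ Us[l] for l in range(L)]
print(np.allclose(forward(Ws2, x0)[-1], xs[-1]))   # True
```

Since W_ℓ is Haar and U_{ℓ−1} is an independent orthogonal matrix given x_{ℓ−1}, the replaced weight W_ℓ U_{ℓ−1} is again Haar distributed, which is the invariance exploited in the proof.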

Matrix Size Cutoff
The invariance described in Lemma 3.1 fixes the vector x_{ℓ−1}, and there are no restrictions on the remaining (N − 1)-dimensional space P(N)R^N. We call P(N)AP(N) the cutoff of an N × N matrix A. This section quantifies the fact that cutting off the fixed direction has no significant effect in the large-dimensional limit.
For p ≥ 1, we denote by ||X|| p the L p -norm of X ∈ M N (R) defined by Recall that the following non-commutative Hölder's inequality holds: for any r, p, q ≥ 1 with 1/r = 1/p + 1/q.
Let P(N) be the orthogonal projection defined in (3.4). Then the following holds almost surely: In particular, the left-hand side of (3.14) goes to 0 as N → ∞ almost surely.
Proof. We only need to show that Tr[(1 − D)^p] ≤ 1, where Tr is the unnormalized trace. Write R = P − (P W P)^⊤ P W P = P W^⊤ (1 − P) W P.
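The trace bound in the proof can be observed directly (an illustrative check, not the paper's code): for a Haar orthogonal W, the cutoff P W P loses at most one dimension of spectral weight.

```python
import numpy as np

def haar_orthogonal(n, rng):
    # Haar sample on O(n) via QR with sign correction.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

n = 300
rng = np.random.default_rng(6)
W = haar_orthogonal(n, rng)
B = W[:-1, :-1]    # the cutoff P W P viewed as an (N-1) x (N-1) block

# Tr R = Tr[P - (PWP)^T PWP] = Tr[(1-P) W P W^T] <= Tr[1-P] = 1: the
# cutoff loses at most one dimension of spectral weight, which is
# negligible after normalizing by N.
loss = (n - 1) - np.sum(np.linalg.svd(B, compute_uv=False) ** 2)
print(0.0 <= loss <= 1.0)   # True
```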

Asymptotic Freeness of Layerwise Jacobians
This section contains our main results. The first is the most general form, but it relies on the existence of the limit joint moments of (D_ℓ)_ℓ. The second is required for the analysis of dynamical isometry, and the last is needed for the analysis of the Fisher information matrix; neither of the latter two assumes the existence of the limit joint moments of (D_ℓ)_ℓ. We use the notations of Section 3.1. In the sequel, for each ℓ and N ∈ N, Y_ℓ is the x_ℓ-measurable, O_N-valued random matrix described in (3.2); it satisfies Y_ℓ x_ℓ = ||x_ℓ||_2 e_N, where e_N is the N-th vector of the standard basis of R^N. Recall that V_0, . . ., V_{L−1} are independent O_{N−1}-valued Haar random matrices whose entries are independent of those of (x_0, W_1, . . ., W_L). In addition, U_ℓ x_ℓ = x_ℓ. Further, for any ℓ = 0, . . ., L − 1, all entries of (U_0, . . ., U_{ℓ−1}) are independent of those of (W_ℓ, . . ., W_L). Thus, by Lemma 3.1, In addition, for any n ∈ N, almost surely we have since each D_ℓ has a limit spectral distribution by Corollary 2.14.
We are now prepared to prove our main theorem. Proof. Without loss of generality, we may assume that σ_{w,1} = · · · = σ_{w,L} = 1. Set, for each ℓ = 1, . . ., L, where P = P(N) is defined in (3.4). By Lemma 3.2 and (4.1), we only need to show the asymptotic freeness of the families In addition, let D̃_ℓ be the (N − 1) × (N − 1) matrix determined by By Lemma 3.3, there are O_{N−1}-valued random matrices W̃_ℓ and Ỹ_{ℓ−1} satisfying for any n ∈ N. Therefore, we only need to show the asymptotic freeness of the following L + 1 families: Now all entries of the Haar random matrices (V_ℓ)_ℓ are independent of those of (W̃_ℓ, Ỹ_{ℓ−1}, D̃_ℓ)_ℓ. Thus, by Lemma 2.10 and Proposition 2.8, the asymptotic freeness of (4.4) holds as N → ∞ almost surely. This completes the proof.
The following result is useful in the study of dynamical isometry and the spectral analysis of Jacobians of DNNs. It follows directly from Theorem 4.1 if we assume the existence of the limit joint moments of (D_ℓ)_{ℓ=1}^L; note, however, that the following result does not assume their existence. Proposition 4.2. For each ℓ = 1, . . ., L − 1, let J_ℓ be the Jacobian of the ℓ-th layer, that is, Then J_ℓ J_ℓ^⊤ has a limit spectral distribution and the pair is asymptotically free as N → ∞ almost surely.
Proof. Let ℓ = 1. Then J_1 J_1^⊤ = D_1^2 has a limit spectral distribution by Proposition 2.13. By Lemma 3.2 and (4.1), we only need to show the asymptotic freeness of the pair By Lemma 3.3, there are W̃_2 ∈ O_{N−1} and Ỹ_1 ∈ O_{N−1} which approximate P W_2 P and P Y_1 P in the sense of (4.3). Let D̃_2 be the (N − 1) × (N − 1) random matrix given by (4.2). Then we only need to show the asymptotic freeness of the following pair: By the independence and Lemma 2.10, the asymptotic freeness holds almost surely. Next, fix ℓ ∈ [1, L − 1] and assume that the limit spectral distribution of J_ℓ J_ℓ^⊤ exists and that the asymptotic freeness holds for this ℓ. Now By the asymptotic freeness for the case ℓ, J_{ℓ+1} J_{ℓ+1}^⊤ has a limit spectral distribution. There exists J̃_ℓ ∈ M_{N−1}(R) such that P J_ℓ P = ( J̃_ℓ 0 ; 0 0 ).
Then for the case ℓ + 1, by the same argument as above, we only need to show the asymptotic freeness of Now, all entries of V_{ℓ+1} are independent of those of (J̃_{ℓ+1}, W̃_{ℓ+1}, Ỹ_{ℓ+1}, D̃_{ℓ+2}). By the independence and Lemma 2.10, we only need to show the asymptotic freeness of The asymptotic freeness of the pair follows from Proposition 2.9. The assertion follows by induction.
Proof. We proceed by induction on ℓ. The case ℓ = 1 is trivial. Assume that the assertion holds for some ℓ ≥ 1 and consider the case ℓ + 1. Then, by (4.5) and the induction hypothesis, H_{ℓ+1} has a limit spectral distribution. Let H̃_{ℓ+1} be the (N − 1) × (N − 1) matrix determined by By the same arguments as above, we only need to prove the asymptotic freeness of the following pair: By Lemma 2.10, considering the joint distributions of all entries, we only need to show the asymptotic freeness of the following pair: By the induction hypothesis, D̃_ℓ H̃_ℓ D̃_ℓ has a limit spectral distribution. Then, by Proposition 2.9, the assertion holds for ℓ + 1. The assertion follows by induction.

Application
Let ν ℓ be the limit spectral distribution of D 2 ℓ for each ℓ.We introduce applications of the main results.

Jacobian and Dynamical Isometry
Let J be the Jacobian of the network with respect to the input vector. In [21,17,18], a DNN is said to achieve dynamical isometry if J acts as a near isometry, up to some overall global O(1) scaling, on a subspace of as high a dimension as possible. Note that a rigorous definition is not given in [21,17,18], and many variants of this definition are likely to be acceptable for the theory. In their theory, they first take the wide limit N → ∞. To examine dynamical isometry in the wide limit N → ∞ and the deep limit L → ∞, [17,18,7] consider the S-transform of the spectral distribution (see [25,20] for the definition of the S-transform). Recall that the existence of the limit spectral distribution of each J_ℓ (ℓ = 1, . . ., L) is supported by Proposition 4.2.
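A quick illustration of dynamical isometry in this setting (an illustrative sketch: hard tanh and the small input scale are our choices, made so that every D_ℓ is the identity and J reduces to a product of orthogonal matrices):

```python
import numpy as np

def haar_orthogonal(n, rng):
    # Haar sample on O(n) via QR with sign correction.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

n, L = 200, 20
rng = np.random.default_rng(7)
phi = lambda h: np.clip(h, -1.0, 1.0)               # hard tanh
dphi = lambda h: (np.abs(h) < 1.0).astype(float)

x = rng.standard_normal(n)
x *= 0.05 * np.sqrt(n) / np.linalg.norm(x)          # tiny pre-activations

J = np.eye(n)
for _ in range(L):
    W = haar_orthogonal(n, rng)
    h = W @ x
    J = np.diag(dphi(h)) @ W @ J
    x = phi(h)

svals = np.linalg.svd(J, compute_uv=False)
# All pre-activations stay in (-1, 1), so each D_l = I and J is a product
# of orthogonal matrices: every singular value equals 1, an exact isometry.
print(svals.min(), svals.max())
```

With Gaussian weights of the same depth, the singular values would instead spread over several orders of magnitude, which is the failure mode dynamical isometry is designed to avoid.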

Fisher Information Matrix and Training Dynamics
We focus on the Fisher information matrix (FIM) for supervised learning with a mean squared error (MSE) loss [15,19,9]. Let us summarize its definition and basic properties. Given x ∈ R^N and parameters θ = (W_1, . . ., W_L), we consider a Gaussian probability model Now, the normalized MSE loss L is given by L(u) = ||u||_2^2 / 2N for u ∈ R^N, where || · ||_2 is the Euclidean norm. In addition, consider a probability density function p(x) and the joint density p_θ(x, y) = p_θ(y|x)p(x). Then the FIM is defined by which is an LN^2 × LN^2 matrix. As is known in information geometry [1], the FIM works as a degenerate metric on the parameter space: the Kullback-Leibler divergence between the statistical model and itself perturbed by an infinitesimal shift dθ is given by D_KL(p_θ || p_{θ+dθ}) = dθ^⊤ I(θ) dθ. More intuitively, we can write the Hessian of the loss as Hence the FIM also characterizes the local geometry of the loss surface around a global minimum with zero training error. When we regard p(x) as the empirical distribution of input samples, the FIM is usually referred to as the empirical FIM [11,19,9]. The conditional FIM is used in [7] for the analysis of the training dynamics of DNNs achieving dynamical isometry. We denote by I(θ|x) the conditional FIM (or FIM per sample) given a single input x, defined by Clearly, ∫ I(θ|x) p(x) dx = I(θ). Since p_θ(y|x) is Gaussian, we have Now, in order to ignore the trivial zero eigenvalues of I(θ|x), consider a dual of I(θ|x) given by which is an N × N matrix. Except for trivial zero eigenvalues, I(θ|x) and J(x, θ) share the same eigenvalues: where µ_A denotes the spectral distribution of a matrix A. Now, for simplicity, consider the case where the bias parameters are zero. Then it holds that where Since δ_{L→ℓ} = W_L D_{L−1} δ_{L−1→ℓ} (ℓ < L), it holds that where I is the identity matrix.
Proof.The assertion directly follows from Proposition 4.3 and by induction.
[7] uses the recursive equation (5.2) to compute the maximum value of the limit spectrum of H L .
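The passage from I(θ|x) to its N × N dual rests on the linear-algebra fact that M^⊤M and MM^⊤ share their nonzero eigenvalues. A standalone sketch (the shapes below are illustrative stand-ins for the parameter Jacobian J_θ):

```python
import numpy as np

rng = np.random.default_rng(8)
N, P = 8, 40                           # output dim and parameter dim (toy sizes)
J_theta = rng.standard_normal((N, P))  # stand-in for the parameter Jacobian

fim = J_theta.T @ J_theta              # FIM per sample: P x P, rank <= N
dual = J_theta @ J_theta.T             # dual matrix: N x N

big = np.sort(np.linalg.eigvalsh(fim))
small = np.sort(np.linalg.eigvalsh(dual))
# The nonzero spectra coincide; the remaining P - N eigenvalues are zero,
# so the small dual carries all spectral information of the large FIM.
print(np.allclose(big[-N:], small), np.max(np.abs(big[:-N])) < 1e-8)
```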

Discussion
We have proved the asymptotic freeness for MLPs with Haar orthogonal initialization by focusing on the invariance of the MLP. [6] shows the asymptotic freeness of MLPs with Gaussian initialization and ReLU activation; the proof relies on the observation that each ReLU derivative can be replaced by a Bernoulli random variable independent of the weight matrices. By contrast, our proof builds on the observation that the weight matrices can be replaced by random matrices independent of the activations' Jacobians, based on the invariance of Haar orthogonal random matrices. In addition, [29,30] prove the asymptotic freeness of MLPs with Gaussian initialization, relying on Gaussianity. Since our proof relies on the orthogonal invariance of the weight matrices, it covers and generalizes the GOE case.
It is straightforward to extend our results, including Theorem 4.1, to MLPs with Haar unitary weights, since the proof essentially relies on the invariance of the weight matrices (see Lemma 3.1) and the cutoff (see Lemma 3.3). We expect that our theorem can also be extended to Haar permutation weights, since Haar distributed random permutation matrices and independent random matrices are asymptotically free [2]. Moreover, we expect that the principal results can be extended to cover MLPs with orthogonal/unitary/permutation invariant random weights, since each proof is based on the invariance of the MLP.
The neural tangent kernel theory [8] describes the learning dynamics of DNNs when the dimension of the last layer is much smaller than that of the hidden layers. We do not consider such a case in our analysis; instead, we consider the case where the last layer has a dimension of the same order as the hidden layers.

B Review on Free Multiplicative Convolution
Let (A, τ) be an NCPS. For a ∈ A with τ(a) ≠ 0, the S-transform of a is defined as the formal power series Note that the S-transform can be defined in more general situations [20].

Definition 2.7 (Asymptotic Freeness of Random Matrices). Consider a nonempty finite index set I and a family A_i(N) of N × N random matrices, where N ∈ N. Given a partition {I_1, . . ., I_k} of I, consider a sequence of k-tuples

Figure 3: A graphical model of random variables for computing characteristic functions in the specific case of using V_ℓ to construct U_ℓ. See (3.7) and (3.8) for the definitions of α_ℓ and β_ℓ. See Fig. 1 for the graph's drawing rule.