A Spin Glass Model for the Loss Surfaces of Generative Adversarial Networks

We present a novel mathematical model that seeks to capture the key design feature of generative adversarial networks (GANs). Our model consists of two interacting spin glasses, and we conduct an extensive theoretical analysis of the complexity of the model’s critical points using techniques from Random Matrix Theory. The result is insights into the loss surfaces of large GANs that build upon prior insights for simpler networks, but also reveal new structure unique to this setting which explains the greater difficulty of training GANs.


Introduction
By making various modelling assumptions about standard multi-layer perceptron neural networks, [Cho+15] argued heuristically that the training loss surfaces of large networks could be modelled by a spherical multi-spin glass. Using theoretical results of [Auf13], they were able to arrive at quantitative asymptotic characterisations, in particular the existence of a favourable 'banded structure' of local optima of the loss. There are clear and acknowledged deficiencies with their assumptions [CLA15], and recent observations have shown that the Hessians of real-world deep neural networks do not behave like random matrices from the Gaussian Orthogonal Ensemble (GOE) of Random Matrix Theory at the macroscopic scale [Pap18; Gra+19; Gra20], despite this being implied by the spin-glass model of [Cho+15]. Moreover, questions have been raised about whether the mean asymptotic properties of loss surfaces for deep neural networks (or energy surfaces of glassy objects) are even relevant practically for gradient-based optimisation in sub-exponential time [Bai+19; Man+19a; FFR19], though interpretation of experiments with deep neural networks remains difficult and the discussion about the true shape of their loss surfaces, and the implications thereof, is far from settled. Nevertheless, spin-glass models present a tractable example of high-dimensional complex random functions that may well provide insights into aspects of deep learning. Rather than trying to improve or reduce the assumptions of [Cho+15], various authors have recently opted to skip the direct derivation from a neural network to a statistical physics model, instead proposing simple models designed to capture aspects of training dynamics and studying those directly. Examples include: the modified spin glass model of [Ros+19a] with some explicitly added 'signal'; the simple explicitly non-linear model of [MAB19]; the spiked tensor 'signal-in-noise' model of [Man+19b]. In a slightly different direction, [Bas+20] removed one of the main assumptions from the [Cho+15] derivation, and in so doing arrived at a deformed spin-glass model. All of this recent activity sits in the context of earlier work connecting spin-glass objects with simple neural networks [KS87; Gar88; EV01] and, more generally, with image reconstruction and other signal processing problems [Nis01].
One area that has not been much explored in the line of the above-mentioned literature is the study of architectural variants. Modern deep learning contains a very large variety of different design choices in network architecture, such as convolutional networks for image and text data (among others) [Goo+16; Con+17], recurrent networks for sequence data [HS97] and self-attention transformer networks for natural language [Dev+19; Rad+18]. Given the ubiquity of convolutional networks, one might seek to study those, presumably requiring consideration of local correlations in data. One could imagine some study of architectural quirks such as residual connections [He+16], and batch-norm has been considered to some extent by [PW17]. In this work, we propose a novel model for generative adversarial networks (GANs) [Goo+14] as two interacting spherical spin glasses. GANs have been the focus of intense research and development in recent years, with a large number of variants being proposed [RMC15; Zha+18; LT16; KLA20; MO14; ACB17; Zhu+17] and rapid progress particularly in the field of image generation. From the perspective of optimisation, GANs have much in common with other deep neural networks, being complicated high-dimensional functions optimised using local gradient-based methods such as stochastic gradient descent and variants. On the other hand, the adversarial training objective of GANs, with two deep networks competing, is clearly an important distinguishing feature, and GANs are known to be more challenging to train than single deep networks. Our objective is to capture the essential adversarial aspect of GANs in a tractable model of high-dimensional random complexity which, though being a significant simplification, has established connections to neural networks and high dimensional statistics.
Our model is inspired by [Cho+15; Ros+19b; Man+19b; Aro+19], with spherical multi-spin glasses being used in place of deep neural networks. We thus provide a complicated, random, high-dimensional model with the essential feature of GANs clearly reflected in its construction. By employing standard Kac-Rice complexity calculations [Fyo04; FW07; Auf13] we are able to reduce the loss landscape complexity calculation to a random matrix theoretic calculation. We then employ various Random Matrix Theory techniques as in [Bas+20] to obtain rigorous, explicit leading order asymptotic results. Our calculations rely on the supersymmetric method in Random Matrix Theory; in particular, the approach to calculating limiting spectral densities follows [Ver04], and the calculation also follows [GW90; Guh91] in important ways. The greater complexity of the random matrix spectra encountered presents some challenges over previous such calculations, which we overcome with a combination of analytical and numerical approaches. Using our complexity results, we are able to draw qualitative implications about GAN loss surfaces analogous to those of [Cho+15] and also investigate the effect of a few key design parameters included in the GAN. We compare the effect of these parameters on our spin glass model and also on the results of experiments training real GANs. Our calculations include some novel details; in particular, we use precise sub-leading terms for a limiting spectral density obtained from supersymmetric methods to prove a required concentration result.
The role that statistical physics models such as spherical multi-spin glasses are ultimately to play in the theory of deep learning is not yet clear, with arguments both for and against their usefulness and applicability. We provide a first attempt to model an important architectural feature of modern deep neural networks within the framework of spin glass models, and provide a detailed analysis of the properties of the resulting loss (energy) surface. Our analysis reveals potential explanations for observed properties of GANs and demonstrates that it may be possible to inform practical hyperparameter choices using models such as ours. Much of the advancement in practical deep learning has come from innovation in network architecture, so if deep learning theory based on simplified physics models like spin glasses is to keep pace with practical advances in the field, then it will be necessary to account for architectural details within such models. Our work is a first step in that direction, and the mathematical techniques used may prove more widely valuable.
The paper is structured as follows: in Section 2 we introduce the interacting spin glass model; in Section 3 we use a Kac-Rice formula to derive random matrix expressions for the asymptotic complexity of our model; in Section 4 we derive the limiting spectral density of the relevant random matrix ensemble; in Section 5 we use the Coulomb gas approximation to compute the asymptotic complexity, and legitimise its use by proving a concentration result; in Section 6 we derive some implications of our model for GAN training and compare to experimental results from real GANs; in Section 7 we conclude. All code used for numerical calculations of our model, training real GANs, analysing the results and generating plots is made available.

An interacting spin glass model
We use multi-spin glasses in high dimensions as a toy model for neural network loss surfaces without any further justification, beyond that found in [Cho+15; Bas+20]. GANs are composed of two networks: a generator (G) and a discriminator (D). G's purpose is to generate synthetic data samples by transforming random input noise, while D's is to distinguish between real data samples and those generated by G. Given some probability distribution P data on some R d , GANs have the following minimax training objective where Θ D , Θ G are the parameters of the discriminator and generator respectively. With z ∼ N (0, σ z 2 ), G(z) has some probability distribution P gen . Early in training, samples from the initially unstructured P gen are easily distinguished by D; this in turn drives improvements in G, bringing P gen closer to P data . Ultimately, the process successfully terminates when P gen is very close to P data and D performs little better than random at the distinguishing task. To construct our model, we introduce two spin glasses: where all the X i1,...,ip are i.i.d. N (0, 1) and the Z j1,...,jq are similarly i.i.d. N (0, 1). We then define the generator and discriminator spin glasses: L (D) plays the role of the loss of the discriminator network when trying to classify genuine examples as such.
L (G) plays the role of the loss of the discriminator when applied to samples produced by the generator, hence the sign difference between L (D) and L (G) . w (D) are the weights of the discriminator, and w (G) the weights of the generator. The X i are surrogates for the training data (i.e. samples from P data ) and the Z j are surrogates for the noise distribution of the generator. For convenience, we have chosen to pull the σ z scale outside of the Z j and include it as a constant multiplier in (5)-(6). In reality, we should like to keep Z j as i.i.d. N (0, 1) but take X i to have some other more interesting distribution, e.g. normally or uniformly distributed on some manifold. Using [x] to denote the integer part of x, we take N D = [κN ] and N G = [κ̄N ] for fixed κ ∈ (0, 1), κ̄ = 1 − κ, and study the regime N → ∞. Note that there is no need to distinguish between [κN ] and κN in the N → ∞ limit.
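As a concrete illustration, the spin-glass building blocks above can be sampled numerically. The following sketch evaluates an order-p spin glass by repeated tensor contraction; the radius-√N normalisation of the weight sphere is an assumption here, chosen to match the usual spherical-model convention, and the overall N-dependent prefactor is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def spin_glass(X, w):
    # Contract the order-p coupling tensor X against p copies of the weight
    # vector w:  H(w) = sum_{i1,...,ip} X_{i1,...,ip} w_{i1} ... w_{ip}
    out = np.asarray(X)
    for _ in range(out.ndim):
        out = np.tensordot(out, w, axes=([0], [0]))
    return float(out)

# Order p = 3 couplings with i.i.d. N(0, 1) entries, as in the model.
N, p = 20, 3
X = rng.standard_normal((N,) * p)

# Weights on the sphere of radius sqrt(N) (a common normalisation; an
# assumption here, not stated explicitly in the text above).
w = rng.standard_normal(N)
w *= np.sqrt(N) / np.linalg.norm(w)

H = spin_glass(X, w)
```

Homogeneity of degree p (here H(cw) = c³ H(w)) gives a quick sanity check on the contraction.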

Kac-Rice formulae for complexity
Training GANs involves jointly minimising the losses of the discriminator and the generator. Therefore, rather than being interested simply in upper-bounding a single spin glass and counting its stationary points, the complexity of interest comes from jointly upper-bounding both L (D) and L (G) and counting points where both are stationary. Using S M to denote the M-sphere, we define the complexity for some Borel sets B D , B G ⊂ R and where ∇ D , ∇ G denote the Riemannian covariant derivatives with respect to the discriminator and generator weights respectively. Note: 1. We have chosen to treat the parameters of each network as somewhat separate by placing them on their own hyper-spheres. This reflects the minimax nature of GAN training, where there really are two networks being optimised in an adversarial manner, rather than one network with some peculiar structure.
2. We could have taken ∇ = (∇ D , ∇ G ) and required ∇L (D) = ∇L (G) = 0 but, as in the previous comment, our choice is more in keeping with the adversarial set-up, with each network seeking to optimise separately its own parameters in spite of the other.
3. We will only be interested in the case
So that the finer structure of local minima and saddle points can be probed, we also define the corresponding complexity with Hessian index prescription, where i(M ) is the index of M (i.e. the number of negative eigenvalues of M ). To calculate the complexities, we follow the well-trodden route of Kac-Rice formulae as pioneered by [Fyo04; FW07]. For a fully rigorous treatment, we proceed as in [Auf13; Bas+20] by turning to the following result from [AT09].
Theorem 3.1 ([AT09] Theorem 12.1.1). Let M be a compact, oriented, N-dimensional C 1 manifold with a C 1 Riemannian metric g. Let φ : M → R N and ψ : M → R K be random fields on M. For an open set A ⊂ R K for which ∂A has dimension K − 1 and a point u ∈ R N let Assume that the following conditions are satisfied for some orthonormal frame field E:
(a) All components of φ, ∇ E φ, and ψ are a.s. continuous and have finite variances (over M).
(b) For all x ∈ M, the marginal densities p x of φ(x) (implicitly assumed to exist) are continuous at u.
(d) The conditional densities p x (•|φ(x) = z) of det(∇ Ej φ i (x)) given are continuous in a neighbourhood of 0 for z in a neighbourhood of u uniformly in M.
(e) The conditional densities p x (•|φ(x) = z) are continuous for z in a neighbourhood of u uniformly in M.
(f) The following moment condition holds
(g) The moduli of continuity with respect to the (canonical) metric induced by g of each component of ψ, each component of φ and each ∇ Ej φ i all satisfy, for any ε > 0, where the modulus of continuity of a real-valued function G on a metric space (T, τ ) is defined as (cf. [AT09] around (1.3.6)) ω(η) := sup s,t : τ (s,t)≤η |G(s) − G(t)|.
Then where p x is the density of φ and Vol g is the volume element induced by g on M.
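For orientation, the conclusion of the theorem (the display elided above) is the standard Kac-Rice identity, which in this notation reads schematically as follows (our paraphrase of the statement in [AT09], included for readability):

```latex
\mathbb{E}\,\#\{x \in M :\ \varphi(x) = u,\ \psi(x) \in A\}
\;=\; \int_M \mathbb{E}\Big[\,\big|\det \nabla_E \varphi(x)\big|\,
\mathbf{1}\{\psi(x) \in A\}\ \Big|\ \varphi(x) = u\Big]\, p_x(u)\,\mathrm{Vol}_g(\mathrm{d}x).
```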
In the notation of Theorem 3.1, we make the following choices: and so and the manifold M is taken to be S N D × S N G with the product topology.
and therefore where ϕ L (D) is the density of L (D) , and ϕ L (G) the density of L (G) , all implicitly evaluated at (w (G) , w (D) ).
Proof. It is sufficient to check the conditions of Theorem 3.1 with the above choices.
Conditions (a)-(f) are satisfied due to Gaussianity and the manifestly smooth definition of L (D) , L (G) . The moduli of continuity conditions as in (g) are satisfied separately for L (D) and its derivatives on S N D and for L (G) and its derivatives on S N G , as seen in the proof of the analogous result for a single spin glass in [Auf13]. But since M is just a direct product with product topology, it immediately follows that (g) is satisfied, so Theorem 3.1 applies and we obtain (14). (15) follows simply, using the rules of conditional expectation.
Define the Hessian matrix.
To make use of (15), we need the joint distribution of (D) , ∂ and the independent G) . As in [Auf13], we will simplify the calculation by evaluating in the region of the north poles on each hyper-sphere. (D) behaves just like a single spin glass, and so we have [Auf13]: Cov(∂ To find the joint and thence conditional distributions for (G) , we first note that from which, by comparing with [Auf13], one can write down the necessary expressions, at the north poles in a coordinate basis: Cov(∂ Cov(∂ Also, all first derivatives of (G) are clearly independent of (G) and its second derivatives, by the same reasoning as in [Auf13]. Note that and so We need now to calculate the joint distribution of (∂ where The conditional covariance is then In summary, from (37) and (25)-(27) we obtain where where is another independent GOE N D −1 matrix. We can simplify to obtain: where and Alternatively, because 1,2 , let us write H as where It follows that Then The desired complexity term then comes from (15), (31) where and and the variances are (recall (16), (20), (46)) and ω N = 2π N/2 /Γ(N/2) is the surface area of the N-sphere. The domain of integration B arises from the constraints We will need the asymptotic behaviour of the constant K N , which we now record in a small lemma.
Proof. By Stirling's formula where we have used (N − 2)
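The Stirling manipulation underlying the lemma can be checked numerically. The following sketch compares the exact surface-area constant ω N = 2π^{N/2}/Γ(N/2) with its Stirling approximation; this is an illustrative check of the asymptotic, not a reproduction of the (elided) constant K N itself.

```python
import math

def log_omega(N):
    # log of the surface area of the N-sphere: omega_N = 2 * pi^(N/2) / Gamma(N/2)
    return math.log(2.0) + 0.5 * N * math.log(math.pi) - math.lgamma(0.5 * N)

def log_omega_stirling(N):
    z = 0.5 * N
    # Stirling: log Gamma(z) ~ (z - 1/2) log z - z + (1/2) log(2 pi)
    lgamma_approx = (z - 0.5) * math.log(z) - z + 0.5 * math.log(2.0 * math.pi)
    return math.log(2.0) + z * math.log(math.pi) - lgamma_approx
```

For N = 200 the two expressions already agree to within Stirling's O(1/N) error term.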

Limiting spectral density of the Hessian
Our intention now is to compute the expected complexity EC N via the Coulomb gas method. The first step in this calculation is to obtain the limiting spectral density of the random matrix where, note, H = H − xI. Here the upper-left block is of dimension κN , and the overall dimension is N . Let µ eq be the limiting spectral measure of H and ρ eq its density. The supersymmetric method provides a way of calculating the expected Stieltjes transform of ρ eq [Ver04]: Recall that a density and its Stieltjes transform are related by the Stieltjes inversion formula The function Z(J) can be computed using a supersymmetric representation of the ratio of determinants. Firstly, we recall an elementary result from multivariate calculus, where M is a real matrix: By introducing the notion of Grassmann variables and Berezin integration, we obtain a complementary expression: Here the χ i , χ * i are purely algebraic objects defined by the anti-commutation rule. The χ i and χ * i are separate objects, with the complex conjugation unary operator * defined so that (χ * i ) * = −χ * i , and Hermitian conjugation is then defined as usual by χ † = (χ T ) * . The set of variables {χ i , χ * i } N i=1 generate a graded algebra over C. Mixed vectors of commuting and anti-commuting variables are called supervectors, and they belong to a vector space called superspace. The integration symbol dχ i dχ * i is defined as a formal algebraic linear operator by the properties Functions of the Grassmann variables are defined by their formal power series, e.g.
where the termination of the series follows from χ i 2 = 0 ∀i, which is an immediate consequence of (61). From this it is apparent that (62), along with (61), is sufficient to define Berezin integration over arbitrary functions of arbitrary combinations of Grassmann variables. Using the integral results (59), (60) we can then write where the measure is φ is a vector of N complex commuting variables, χ and χ * are vectors of N Grassmann variables, and we use the [t] notation to denote the splitting of each of the vectors into the first κN and last (1 − κ)N components, as seen in [GW90]: We then split the quadratic form expressions in (64) Taking the GOE averages is now simple [Ver04; Noc17]: where the supersymmetric matrices are given by Introducing the tensor notation and we can compactly write We now perform two Hubbard-Stratonovich transformations [Ver04], where σ and σ[1] are supermatrices whose off-diagonal entries are Grassmanns; the factor i is introduced to ensure convergence. Integrating out over dΨ is now a straightforward Gaussian integral in superspace, giving Recalling the definition of ζ, we have and so one immediately obtains To obtain the limiting spectral density (LSD), or rather its Stieltjes transform, one must find the leading order term in the N → ∞ expansion of (78). This can be done by using the saddle point method on the σ, σ[1] manifolds. We know that the contents of the exponential must vanish at the saddle point, since the LSD is O(1), so we in fact need only compute σ F F at the saddle point. We can diagonalise σ within the integrand of (78) and absorb the diagonalising graded U (1/1) matrix into σ[1]. The resulting saddle point equations for the off-diagonal entries of the new (rotated) σ[1] dummy variable are trivial and immediately give that σ[1] is also diagonal at the saddle point. The saddle point equations are then (81) and (82) combine to give an explicit expression for σ F F [1]: With a view to simplifying the numerical solution of the coming quartic, we define t = i(σ F F − iz), and then a line of manipulation with (82) and (83) gives By solving (84) numerically for fixed values of κ, b, b 1 , x 1 , we can obtain the four solutions t 1 (z), t 2 (z), t 3 (z), t 4 (z). These four solution functions arise from choices of branch for (z, x 1 ) ∈ C 2 , and determining the correct branch directly is highly non-trivial. However, for any z ∈ R, at most one of the t i leads to a positive LSD, which gives a simple way to compute ρ eq numerically using (58) and (78): Plots generated using (85) and eigendecompositions of matrices sampled from the distribution of H are given in Figure 1 and show good agreement between the two.
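The branch-selection recipe can be illustrated on a toy case. For the GOE alone, the Stieltjes transform satisfies the quadratic t² − zt + 1 = 0 rather than the quartic (84), and the same rule (solve at z = x + iε and keep the root giving a non-negative density) recovers the semicircle law. This is a hedged illustration of the procedure, not a solution of (84) itself.

```python
import numpy as np

def semicircle_from_roots(x, eps=1e-9):
    # Solve the self-consistent equation t^2 - z t + 1 = 0 at z = x + i*eps and
    # keep the root yielding a non-negative density, mimicking the branch
    # selection used for the quartic (84).  (Toy case: the semicircle law.)
    z = x + 1j * eps
    roots = np.roots([1.0, -z, 1.0])
    dens = -roots.imag / np.pi          # rho(x) = -(1/pi) Im g(x + i0)
    return max(dens.max(), 0.0)

xs = np.linspace(-3, 3, 7)
rho = np.array([semicircle_from_roots(x) for x in xs])
exact = np.where(np.abs(xs) <= 2,
                 np.sqrt(np.clip(4 - xs**2, 0, None)) / (2 * np.pi), 0.0)
```

Outside the support both roots are (essentially) real and the recovered density vanishes, exactly as in the prescription for ρ eq.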

The asymptotic complexity
In the previous section, we have found the equilibrium measure, µ eq , of the ensemble of random matrices The Coulomb gas approximation gives us a method of computing E| det(H − x)|: We have access to the density of µ eq pointwise (in x and x 1 ) numerically, and so (87) is a matter of one-dimensional quadrature. Recalling (49), we then have where Due to Lemma 3.3, the constant term has asymptotic form We then define the desired Θ(u D , u G ) as and we have Using these numerical methods, we obtain the plot of Φ in B and a plot of Θ for some example p, q, σ z , κ values, shown in Figures 2 and 3. Numerically obtaining the maximum of Φ on B is not as onerous as it may appear, since −Φ grows quadratically in |x|, |x 1 | at moderate distances from the origin. Here p = q = 3, σ z = 1, κ = 0.9.
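The one-dimensional quadrature in (87) can be sketched in the case where µ eq is replaced by the semicircle law, for which the logarithmic potential has the known closed form x²/4 − 1/2 on |x| ≤ 2, giving a check on the quadrature scheme. This is an illustration only; the true µ eq is available only numerically.

```python
import numpy as np
from scipy.integrate import quad

def rho_sc(lam):
    # Semicircle density on [-2, 2].
    return np.sqrt(np.clip(4.0 - lam**2, 0.0, None)) / (2.0 * np.pi)

def log_potential(x):
    # One-dimensional quadrature of  int log|x - lambda| rho(lambda) dlambda,
    # splitting at the (integrable) logarithmic singularity lambda = x.
    integrand = lambda lam: np.log(np.abs(x - lam)) * rho_sc(lam)
    pts = [x] if -2.0 < x < 2.0 else None
    val, _ = quad(integrand, -2.0, 2.0, points=pts, limit=200)
    return val
```

The same splitting-at-the-singularity device applies verbatim when ρ eq is supplied as a numerical table.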
We numerically verify the legitimacy of this Coulomb gas approximation with Monte Carlo integration where λ (i) j is the j-th eigenvalue of the i-th i.i.d. sample from the distribution of H . The results, comparing N −1 log E| det(H − x)| at N = 50 for a variety of x, x 1 , are shown in Figure 4. Note the strong agreement even at such modest N ; however, to rigorously substantiate the Coulomb gas approximation in (87), we must prove a concentration result.
Figure 4: Comparison of (87) and (93), verifying the Coulomb gas approximation numerically. Here p = q = 3, σ z = 1, κ = 0.9. Sampled matrices for MC approximation are dimension N = 50, and n = 50 MC samples have been used.
Lemma 5.1. Let (H N ) ∞ N =1 be a sequence of random matrices, where for each N and M ∼ GOE N , M 1 ∼ GOE κN . Let µ N be the empirical spectral measure of H N and say µ N → µ eq weakly almost surely. Then for any (x, x 1 ), as N → ∞.
Proof. We begin by establishing an upper bound. For any α, β > 0, Considering B N , we have and so The entries of H N are Gaussians, and all the diagonal and upper-diagonal entries are independent. All of these variances are O(N −1 ), so where the X ij are i.i.d. standard Gaussians for i ≤ j. It follows that Elementary calculations give and so thus when we take β → ∞, we have B N ≤ e o(N ) .
Considering A N , it is sufficient now to show where f (z) = 2 max(min(log |x − z|, β), −α), a continuous and bounded function. For any ε > 0, we have The entries of H N are Gaussian with O(N −1 ) variance and so obey a log-Sobolev inequality, as required by Theorem 1.5 from [GZ+00]. The constant, c, in the inequality is independent of N, x, x 1 , so we need not compute it exactly. The theorem from [GZ+00] then gives We have shown We now need to establish a complementary lower bound to complete the proof. By Jensen's inequality, for any α, β > 0, Convergence in law of µ N to µ eq and the dominated convergence theorem give for large enough β, because µ eq has compact support. It remains to show that the expectation inside the exponent in the second term of (107) converges to zero uniformly in N in the limit α → ∞.
By (58), it is sufficient to consider G N (z), which is computed via (78). Let us define the function Ψ so that * are the solution to the saddle point equations (79)-(82) and σ F F , σ F F [1], σ BB , σ BB [1] are integration variables. Around the saddle point for some r ≥ 2. We use the notation σ for (σ BB , σ ) and similarly σ BB , σ F F . A superscript asterisk on Ψ or any of its derivatives is shorthand for evaluation at the saddle point. While the Hessian of Ψ may not in general vanish at the saddle point, we must go to at least the cubic term in the expansion of Ψ around the saddle point, i.e.
The bosonic (BB) and fermionic (FF) coordinates do not interact, so we can consider derivatives of Ψ as block tensors. Simple differentiation gives where (∇ 3 Ψ) * F follows similarly with By the saddle point equations (79)-(82) we have Let or perhaps with the integration ranges reversed depending on the signs of A F , A B , D F , D B . We have where we have defined using (81)-(82). We immediately see that where is positive and is decreasing in α. Since µ eq is bounded, it follows that Eµ N is bounded, and therefore as α → ∞ uniformly in N , and so the lower bound is completed.
Equipped with this result, we can now prove the legitimacy of the Coulomb gas approximation in our complexity calculation. The proof will require an elementary intermediate result which has undoubtedly appeared in various places before, but we prove it here anyway for the avoidance of doubt.
Lemma 5.2. Let M N be a random N × N symmetric real matrix with independent centred Gaussian upper-diagonal and diagonal entries. Suppose that the variances of the entries are bounded above by cN −1 for some constant c > 0. Then there exists some constant c e such that
Proof. Let σ 2 ij denote the variance of M ij . Then Simple integration with a change of variables gives and then, for large enough N , Stirling's formula gives So finally so defining c e = (1/2) log 2 + 2 gives the result.
Theorem 5.3. For any x 1 ∈ R, let H N be a random N × N matrix distributed as in the statement of Lemma 5.1. Then as N → ∞.
Proof. Let R > 0 be some constant, independent of N . Introduce the notation We have the upper bound (106) of Lemma 5.1, but this cannot be directly applied to (132) since the bound relies on uniformity in x, x 1 , which can only be established for bounded x, x 1 . We use a much cruder bound instead. First, let and then J N has centred Gaussian entries with variance O(N −1 ), so Lemma 5.2 applies, and we find for some constant c e > 0 which is independent of x, x 1 and N , but we need not compute it.

Now we have
But, since µ eq is bounded and has compact support, we can choose R large enough (independent of N ) so that for all (x, x 1 ) with x 2 + x 2 1 > R and for some fixed L independent of N . Whence as N → ∞. Finally, for x, x 1 in B ≤R , the result of Lemma 5.1 holds uniformly in x, x 1 , so The result follows from (138), (139) and the triangle inequality.
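The Monte Carlo verification of Figure 4 can be sketched in the plainer GOE setting, where the Coulomb gas prediction is known exactly. Here we average N⁻¹ log|det(M − x)| over i.i.d. GOE samples via slogdet; this is an illustration under the standard GOE normalisation, not a simulation of the ensemble H N itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def goe(N):
    # GOE normalised so the spectrum concentrates on [-2, 2].
    A = rng.standard_normal((N, N)) / np.sqrt(N)
    return (A + A.T) / np.sqrt(2.0)

N, n_samples, x = 400, 10, 0.0
vals = [np.linalg.slogdet(goe(N) - x * np.eye(N))[1] / N
        for _ in range(n_samples)]
mc_estimate = float(np.mean(vals))

# Coulomb gas prediction:  int log|x - lambda| d(semicircle) = x^2/4 - 1/2
# for |x| <= 2.
prediction = x**2 / 4.0 - 0.5
```

Even at this modest N the Monte Carlo average sits within a few percent of the prediction, mirroring the agreement seen in Figure 4.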

Asymptotic complexity with prescribed Hessian index
Recall the complexity defined in (8): The extra Hessian signature conditions in (8) enforce that both generator and discriminator are at low-index saddle points. Our method for computing the complexity C N in the previous subsection relies on the Coulomb gas approximation applied to the spectrum of H . However, the Hessian index constraints are formulated for the natural Hessian matrix (40), while our spectral calculations proceed from the rewritten form (45). We find, however, that we can indeed proceed much as in [Bas+20]. Recall the key Hessian matrix H given in (40) by and all are independent. Note that we have used (46) to slightly rewrite (40). We must address the problem of computing Indeed, we introduce integration variables y 1 , -vectors of commuting and anticommuting variables respectively. We use the [t] notation to split all vectors into the first κN − 1 and last κ̄N − 1 components. Let With these definitions, we have [Bas+20] | det H| = (2(N − 2)) where dΞ is the normalised measure of the 2 and the ellipsis represents terms with no dependence on M (D) or M (G) , which we need not write down. The crux of the matter is that we must compute but [Bas+20] has performed exactly these calculations (see around (5.146) therein) and so there exist constants and where Here I 1 is the rate function of the largest eigenvalue of the GOE, as obtained in [ADG01] and used in [Auf13; Bas+20].

Effect of variance ratio
In the definition of complexity, u D and u G are upper bounds on the loss of the discriminator and generator, respectively. We are interested in the region of the (u D , u G ) plane such that Θ(u D , u G ) > 0, this being the region where gradient descent algorithms are expected to become trapped. We therefore investigate the minimum loss such that Θ > 0, this being, for a given σ z , the theoretical minimum loss attainable by the GAN. We consider two natural notions of loss: We vary σ z over a range of values in (10 −5 , 10 2 ) and compute ϑ D , ϑ G . Here p = q = 3, κ = 0.9.
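Computing ϑ D , ϑ G amounts to locating the smallest u with Θ(u) > 0 along one axis. A minimal sketch of such a threshold search, using bisection on a hypothetical monotone stand-in for Θ (the true Θ is only available numerically), is:

```python
def min_loss_threshold(theta, lo=-10.0, hi=10.0, tol=1e-8):
    # Bisect for the smallest u with theta(u) > 0, assuming theta is
    # increasing in u (larger upper bounds admit more critical points).
    assert theta(lo) <= 0 < theta(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if theta(mid) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Placeholder complexity curve (hypothetical, for illustration only):
# positive above u = -1.5.
theta_toy = lambda u: u + 1.5
```

In practice the same search is run for each value of σ z (or κ) on the numerically tabulated Θ.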
To compare the theoretical predictions of the effect of σ z to real GANs, we perform a simple set of experiments. We use a DCGAN architecture [RMC15] with 5 layers in each network, using the reference PyTorch implementation from [Ink18]; however, we introduce the generator noise scale σ z . For a given σ z , we train the GANs for 10 epochs on CIFAR10 [KH+09] and record the generator and discriminator losses. For each σ z , we repeat the experiment 30 times and average the minimum attained generator and discriminator losses, to account for random variations between runs with the same σ z . We note that the sample variances of the loss were typically very high, despite the PyTorch random seed being fixed across all runs. We plot the sample means, smoothed with rolling averaging over a short window, in the interest of clearly visualising whatever trends are present. The results are shown in Figure 8.
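The rolling-average smoothing applied to the loss curves can be sketched as follows; the window length of 5 is an assumption, as the text specifies only 'a short window'.

```python
import numpy as np

def rolling_mean(x, window=5):
    # Smooth a noisy loss curve with a short centred moving average
    # ('same' mode shrinks the effective window near the ends).
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(x, dtype=float), kernel, mode="same")

smoothed = rolling_mean([0.9, 1.1, 1.0, 0.8, 1.2, 1.0], window=3)
```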
There is a striking similarity between the generator plots, with a sharp decline between σ z = 10 −5 and around 10 −3 , after which the minimum loss is approximately constant. The picture for the discriminator is less clear. Focusing on the sections σ z > 10 −3 , both plots show a clear minimum, at around σ z = 10 −1 in experiments and σ z = 10 −2 in theory. Note that the scales on the y-axes of these plots should not be considered meaningful. Though there is no precise correspondence between the discriminator curves, we claim that both theory and experiment tell the same qualitative story: increasing σ z to at least around 10 −3 gives the lowest theoretical generator loss, and then further increasing to, tentatively, some value in (10 −2 , 10 −1 ) gives the lowest possible discriminator loss at no detriment to the generator.

Effect of size ratio
Similarly to the previous section, we can investigate the effect of κ using ϑ D , ϑ G while varying κ over (0, 1). To achieve this variation in the DCGAN, we vary the number of convolutional filters in each network. The generator and discriminator are essentially mirror images of each other, and the numbers of filters in each intermediate layer are defined as increasing functions of some positive integers n G , n D . We fix n D + n G = 128 and vary n D to obtain a range of κ values, with κ = n D /(n D + n G ). The results are shown in Figure 9. The theoretical model predicts a broad range of equivalently optimal κ values centred on κ = 0.5 from the perspective of the discriminator loss, and no effect of κ on the generator loss. The experimental results similarly show a broad range of equivalently optimal κ centred around κ = 0.5; however, there appear to be deficiencies in our model, particularly for higher κ values. The results of the experiments are intuitively sensible: the generator loss deteriorates for κ closer to 1, i.e. when the discriminator has very many more parameters than the generator, and vice-versa for small κ.
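The κ sweep can be sketched as follows; the grid of n D values is hypothetical, and only the constraint n D + n G = 128 and the ratio κ = n D /(n D + n G ) come from the text (the DCGAN filter-count functions are not reproduced here).

```python
# Size-ratio sweep mirroring the experiments: fix n_D + n_G = 128, vary n_D.
total = 128
n_ds = list(range(8, 128, 8))                  # hypothetical grid of n_D values
kappas = [n_d / total for n_d in n_ds]         # kappa = n_D / (n_D + n_G)
kappa_bars = [1.0 - k for k in kappas]         # kappa-bar = 1 - kappa
```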

Conclusions and outlook
We have contributed a novel model for the study of large neural network gradient descent dynamics with statistical physics techniques, namely an interacting spin-glass model for generative adversarial neural networks. We believe this is the first attempt in the literature to incorporate advanced architectural features of modern neural networks, beyond basic single-network multi-layer perceptrons, into such statistical physics style models. We have conducted an asymptotic complexity analysis, via Kac-Rice formulae and Random Matrix Theory calculations, of the energy surface of this model, acting as a proxy for GAN training loss surfaces of large networks. Our analysis has revealed a banded critical point structure as seen previously for simpler models, explaining the surprising success of gradient descent in such complicated loss surfaces, but with added structural features that offer explanations for the greater difficulty of training GANs compared to single networks. We have used our model to study the effect of some elementary GAN hyper-parameters and compared with experiments training real GANs on a standard computer vision dataset. We believe that the interesting features of our model, and their correspondence with real GANs, are yet further compelling evidence for the role of statistical physics effects in deep learning and the value of studying such models as proxies for real deep learning models, and in particular the value of concocting more sophisticated models that reflect aspects of modern neural network design and practice.
From a mathematical perspective, we have extensively studied the limiting spectral density of a novel random matrix ensemble using supersymmetric methods. In the preparation of this paper, we made considerable efforts to complete the average absolute value determinant calculations directly using a supersymmetric representation, as seen in [Bas+20]; however, this was found to be analytically intractable (as expected), and also extremely troublesome numerically (essentially due to an analytically intractable and highly complicated Riemann sheet structure in C 2 ). We were able to sidestep these issues by instead using a Coulomb gas approximation, whose validity we have rigorously proved using a novel combination of concentration arguments and supersymmetric asymptotic expansions. We have verified with numerical simulations our derived mean spectral density for the relevant Random Matrix Theory ensemble, and also the accuracy of the Coulomb gas approximation.
We hope that future work will be inspired to further study models of neural networks such as the one we have considered here. Practically, it would be exciting to explore the possibility of using our insights into GAN loss surfaces to devise algorithmic methods of avoiding training failure. Mathematically, the local spectral statistics of our random matrix ensemble may be interesting to study, particularly around the cusp where the two disjoint components of the limiting spectral density merge.
This last bound follows from a standard Cauchy rotation of the integration contour if any of D F , A F , D B , A B has vanishing real part. (122) is valid for D B , A B , D F , A F = 0, but if D B = 0 and A B = 0, then the preceding calculations are simplified and we still obtain an upper bound, but proportional to (|MA B |) −1/3 . Similarly with A B = 0 and D B = 0, and similarly for A F , D F . The only remaining cases are A B = D B = 0 or A F = D F = 0. But recall (118) and (

Figure 6: Contours in the (u D , u G ) plane of the maximum k D and k G such that Θ k D ,k G (u D , u G ) > 0. k D results shown with a red colour scheme, and k G with green. Only alternate contours are shown, in the interests of clarity. Here p = q = 3, σ z = 1, κ = 0.9.

Figure 7: Contours in the (u D , u G ) plane of the maximum k D and k G such that Θ k D ,k G (u D , u G ) > 0. k D results on the left, k G on the right. Here p = q = 3, σ z = 1, κ = 0.9.

Figure 8: The effect of σ z . Comparison of theoretical predictions of minimum possible discriminator and generator losses to observed minimum losses when training DCGAN on CIFAR10. The left plots in each case (cross-dashed) show the experimental DCGAN results, and the right plots show the theoretical results ϑ G , ϑ D . p = q = 5 and κ = 0.5 are used in the theoretical calculations, to best match the DCGAN architecture. σ z is shown on a log-scale.

Figure 9: The effect of κ. Comparison of theoretical predictions of minimum possible discriminator and generator losses to observed minimum losses when training DCGAN on CIFAR10. The left plots in each case (cross-dashed) show the experimental DCGAN results, and the right plots show the theoretical results ϑ G , ϑ D . p = q = 5 and σ z = 1 are used in the theoretical calculations, to best match the DCGAN architecture.