The Brown measure of the free multiplicative Brownian motion

The free multiplicative Brownian motion $b_{t}$ is the large-$N$ limit of the Brownian motion on $\mathsf{GL}(N;\mathbb{C}),$ in the sense of $\ast $-distributions. The natural candidate for the large-$N$ limit of the empirical distribution of eigenvalues is thus the Brown measure of $b_{t}$. In previous work, the second and third authors showed that this Brown measure is supported in the closure of a region $\Sigma_{t}$ that appeared in work of Biane. In the present paper, we compute the Brown measure completely. It has a continuous density $W_{t}$ on $\bar{\Sigma}_{t},$ which is strictly positive and real analytic on $\Sigma_{t}$. This density has a simple form in polar coordinates: \[ W_{t}(r,\theta)=\frac{1}{r^{2}}w_{t}(\theta), \] where $w_{t}$ is an analytic function determined by the geometry of the region $\Sigma_{t}$. We show also that the spectral measure of free unitary Brownian motion $u_{t}$ is a "shadow" of the Brown measure of $b_{t}$, precisely mirroring the relationship between Wigner's semicircle law and Ginibre's circular law. We develop several new methods, based on stochastic differential equations and PDE, to prove these results.

Each is a Gaussian random matrix; the Ginibre ensemble has i.i.d. complex Gaussian entries with variance $1/N$, while the Gaussian unitary ensemble is the Hermitian part of the Ginibre ensemble. For our purposes, it is natural to think of these two ensembles as endpoints of Brownian motion on Lie algebras. Indeed, the Lie algebra $\mathfrak{gl}(N;\mathbb{C})$ of the general linear group consists of all $N\times N$ complex matrices. For each fixed $t$, the Brownian motion $Z_{t}$ on this space (with an appropriately scaled time parameter) is distributed as the Ginibre ensemble, scaled by $\sqrt{t}$. Similarly, the space of Hermitian matrices is equal to $i\mathfrak{u}(N)$, where $\mathfrak{u}(N)$ is the Lie algebra of the unitary group, namely the space of skew-Hermitian matrices. For each fixed $t$, the Brownian motion $X_{t}$ on this space is distributed as the Gaussian unitary ensemble, scaled by $\sqrt{t}$. Among the earliest results in random matrix theory is the discovery of the large-$N$ limits of the empirical eigenvalue distributions of these ensembles. That is to say, the (random) counting measure $\frac{1}{N}\sum_{j=1}^{N}\delta_{\lambda_{j}}$ of the eigenvalues $\{\lambda_{j}\}$ of each ensemble has an almost-sure limit, which is a deterministic measure. The eigenvalues of $X_{t}$ are real, and their limiting empirical eigenvalue distribution is the semicircle law on the interval $[-2\sqrt{t},2\sqrt{t}]$ (cf. [49]); the eigenvalues of $Z_{t}$ are complex, and their limiting empirical eigenvalue distribution is the uniform probability measure on the disk of radius $\sqrt{t}$ (cf. [20]). We note for later reference a simple but intriguing link between these two limiting distributions: the push-forward under the "real part" map of the uniform measure on the disk is the semicircular distribution on an interval.
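Both limiting statements are easy to test numerically. The following is a minimal NumPy sketch of our own (the normalizations are spelled out in the comments; the sizes and seed are arbitrary choices): it samples a GUE-type matrix $X_t$ and a Ginibre-type matrix $Z_t$ and checks the second moments predicted by the semicircle and circular laws.

```python
import numpy as np

rng = np.random.default_rng(0)
N, t = 400, 1.0

# Complex Gaussian matrix with i.i.d. entries, E|entry|^2 = 2
G = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))

# GUE-type matrix X_t: Hermitian, E|X_ij|^2 = t/N;
# eigenvalues approach the semicircle law on [-2 sqrt(t), 2 sqrt(t)]
X = np.sqrt(t) * (G + G.conj().T) / (2 * np.sqrt(N))

# Ginibre-type matrix Z_t: E|Z_ij|^2 = t/N;
# eigenvalues approach the uniform law on the disk of radius sqrt(t)
H = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
Z = np.sqrt(t / (2 * N)) * H

eig_X = np.linalg.eigvalsh(X)            # real eigenvalues
eig_Z = np.linalg.eigvals(Z)             # complex eigenvalues

moment_X = np.mean(eig_X ** 2)           # semicircle second moment: t
moment_Z = np.mean(np.abs(eig_Z) ** 2)   # uniform-disk second moment: t/2
```

The semicircle law on $[-2\sqrt{t},2\sqrt{t}]$ has second moment $t$, while the uniform measure on the disk of radius $\sqrt{t}$ has $\mathbb{E}|\lambda|^{2}=t/2$; at $N=400$ both sample moments are already close to these values.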
It is convenient to recast the empirical eigenvalue distribution in an analytic form. For an $N\times N$ matrix $A=A_{N}$, the function \[ L_{A}(\lambda)=\frac{1}{N}\log\left|\det(A-\lambda)\right| \tag{1.1} \] is subharmonic on $\mathbb{C}$. Actually, $L_{A}$ is harmonic on the complement of the spectrum, and $-\infty$ at the eigenvalues. The (distributional) Laplacian of $L_{A}$ is therefore a positive measure; in fact, it is (up to a factor of $2\pi$) equal to the empirical eigenvalue distribution of the matrix $A$. Hence, if one can compute an appropriate "large-$N$ limit" of the functions $L_{A_{N}}$, the Laplacian of the limiting function will provide a natural candidate for the limiting empirical eigenvalue distribution. Free probability theory affords a medium in which to identify abstract limits of random matrix ensembles themselves. The limits are constructed as operators in a tracial von Neumann algebra $(\mathcal{A},\tau)$, and the limit is with respect to $\ast$-distribution. If $A_{N}$ is a sequence of $N\times N$ random matrices, an operator $a\in\mathcal{A}$ is said to be a limit in $\ast$-distribution of $A_{N}$ if, for each polynomial $p$ in two noncommuting variables, we have \[ \lim_{N\to\infty}\frac{1}{N}\operatorname{trace}\left[p(A_{N},A_{N}^{\ast})\right]=\tau[p(a,a^{\ast})] \] almost surely. For $X_{t}$ and $Z_{t}$, Voiculescu [48] showed that the large-$N$ limits can be identified as certain free stochastic processes, namely the free additive Brownian motion $x_{t}$ and the free circular Brownian motion $c_{t}$. These limiting processes are no longer random: they are one-parameter families of operators with freely independent increments, and $\ast$-distributions that can be described elegantly in the combinatorial framework of free probability. In particular, $x_{t}$ is a self-adjoint operator, and it therefore has a spectral resolution $E^{x_{t}}$: a projection-valued measure such that $x_{t}=\int_{\mathbb{R}}\lambda\,E^{x_{t}}(d\lambda)$. The composition $\mu_{x_{t}}:=\tau\circ E^{x_{t}}$ is a probability measure on $\mathbb{R}$ called the spectral measure of $x_{t}$ (in the state $\tau$). The statement that $X_{t}$ has the semicircle law as its limiting empirical eigenvalue distribution is equivalent to the statement that $\mu_{x_{t}}$ is semicircular.
On the other hand, since the operator $c_{t}$ is not normal, there is no spectral theorem to yield a spectral measure. There is, however, a substitute: the Brown measure, introduced in [8]. For any operator $a\in\mathcal{A}$, define a function $L_{a}$ on $\mathbb{C}$ by \[ L_{a}(\lambda)=\tau[\log(|a-\lambda|)], \tag{1.2} \] where $|a-\lambda|$ is the self-adjoint operator $((a-\lambda)^{\ast}(a-\lambda))^{1/2}$. The quantity $L_{a}(\lambda)$ is the logarithm of the Fuglede-Kadison determinant [18,19] of $a-\lambda$. It is finite outside the spectrum of $a$ but may become $-\infty$ at points in the spectrum. If $\mathcal{A}$ is the space of all $N\times N$ matrices and $\tau=\frac{1}{N}\operatorname{trace}$, then $L_{a}$ agrees with the function in (1.1). In a general tracial von Neumann algebra $(\mathcal{A},\tau)$, the function $L_{a}$ is subharmonic.
The Brown measure of $a$ is then defined in terms of the distributional Laplacian of $L_{a}$: \[ \mu_{a}=\frac{1}{2\pi}\Delta L_{a}. \tag{1.3} \] If $a$ is self-adjoint, the Brown measure of $a$ coincides with the spectral measure. If $a=c_{t}$ is the free circular Brownian motion at time $t$, its Brown measure $\mu_{c_{t}}$ is equal to the uniform probability measure on the disk of radius $\sqrt{t}$. By regularizing the right-hand side of (1.2), one can construct the Brown measure $\mu_{a}$ as a weak limit, \[ \mu_{a}=\lim_{\varepsilon\to 0^{+}}\frac{1}{4\pi}\Delta_{\lambda}\,\tau\left[\log((a-\lambda)^{\ast}(a-\lambda)+\varepsilon)\right]\,d\lambda, \] where $\Delta_{\lambda}$ is the Laplacian with respect to $\lambda$ and $d\lambda$ is the Lebesgue measure on the plane. (See [43, Section 11.5] and [35, Eq. (2.11)].) It is not hard to see that the Brown measure of $a$ is determined by the $\ast$-moments of $a$, but the dependence is singular: if a sequence of operators $a_{n}$ converges in $\ast$-distribution to $a$, the Brown measures $\mu_{a_{n}}$ need not converge to $\mu_{a}$.
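In the finite-dimensional case these properties of $L_{a}$ are elementary to check. The sketch below (our own illustration on a hypothetical $2\times 2$ example, with $\tau=\frac{1}{N}\mathrm{trace}$) evaluates $L_{A}$ for a diagonal matrix with spectrum $\{0,1\}$ and applies a five-point stencil: the discrete Laplacian vanishes away from the spectrum (harmonicity) and is large and positive when the stencil surrounds an eigenvalue.

```python
import numpy as np

# L_A(lam) = (1/N) log|det(A - lam)| for a 2x2 example with spectrum {0, 1}
A = np.array([[0.0, 0.0], [0.0, 1.0]])
N = A.shape[0]

def L(lam):
    return np.log(np.abs(np.linalg.det(A - lam * np.eye(N)))) / N

h = 1e-3

def stencil_laplacian(lam0):
    # Five-point finite-difference approximation of the Laplacian in the lambda plane
    return (L(lam0 + h) + L(lam0 - h) + L(lam0 + 1j * h) + L(lam0 - 1j * h)
            - 4 * L(lam0)) / h**2

lap_far = stencil_laplacian(3.0 + 0.0j)    # far from {0, 1}: L_A is harmonic, so ~ 0
lap_near = stencil_laplacian(h / 2)        # stencil straddles the eigenvalue 0: large, positive
```

The positive mass picked up by the stencil near an eigenvalue is exactly the mechanism by which $\frac{1}{2\pi}\Delta L_{A}$ recovers the empirical eigenvalue distribution.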
1.2. Brownian motions on $\mathsf{U}(N)$, $\mathsf{GL}(N;\mathbb{C})$, and their large-$N$ limits. As we have noted, the Gaussian unitary ensemble and the Ginibre ensemble can be described in terms of Brownian motions on the Lie algebras $\mathfrak{u}(N)$ and $\mathfrak{gl}(N;\mathbb{C})$. Specifically, the Brownian motions are induced by a choice of inner product on these finite-dimensional vector spaces; in both cases, we use the inner product \[ \langle X,Y\rangle_{N}=N\,\operatorname{Re}\left[\operatorname{trace}(X^{\ast}Y)\right]. \]
(The factor of $N$ in the definition produces the scaling of $1/N$ in the variances of the two ensembles.) It is natural to consider the counterpart Brownian motions on the Lie groups $\mathsf{U}(N)$ and $\mathsf{GL}(N;\mathbb{C})$. In general, if $G\subset M_{N}(\mathbb{C})$ is a matrix Lie group with Lie algebra $\mathfrak{g}\subset M_{N}(\mathbb{C})$, there is a simple relationship between the Brownian motion $B_{t}$ on $G$ and the Brownian motion $A_{t}$ on $\mathfrak{g}$ (the latter being the standard Brownian motion determined by an inner product on $\mathfrak{g}$). It is known as the rolling map, and it can be written as a Stratonovich stochastic differential equation (SDE): \[ dB_{t}=B_{t}\circ dA_{t},\qquad B_{0}=I. \] The solution of this SDE is a diffusion process on $G$ whose increments (computed in the left multiplicative sense) are independent and whose generator is half the Laplacian on $G$ determined by the left-invariant Riemannian metric induced by the given inner product on $\mathfrak{g}$. Thus, its distribution at each time is the heat kernel on the group. For computational purposes, it is useful to write the SDE in Itô form; the result depends on the structure of the group. Letting $U_{t}=U_{t}^{N}$ denote the Brownian motion on $\mathsf{U}(N)$ and $B_{t}=B_{t}^{N}$ the Brownian motion on $\mathsf{GL}(N;\mathbb{C})$, the corresponding Itô SDEs are \[ dU_{t}=iU_{t}\,dX_{t}-\frac{1}{2}U_{t}\,dt,\qquad dB_{t}=B_{t}\,dZ_{t}. \] It is then natural to investigate the large-$N$ limits of these random matrix processes. The candidate large-$N$ limits are the free stochastic processes generated by the analogous free SDEs: \[ du_{t}=iu_{t}\,dx_{t}-\frac{1}{2}u_{t}\,dt,\qquad db_{t}=b_{t}\,dc_{t},\qquad u_{0}=b_{0}=1. \] (For the theory of free stochastic calculus, see [6,7,40].) In 1997, Biane [4,5] introduced these processes: the free unitary Brownian motion $u_{t}$ and the free multiplicative Brownian motion $b_{t}$. (He denoted $b_{t}$ by $\Lambda_{t}$, and wrote a slightly different but equivalent free SDE for it.) The main result of [4] was the theorem that the process $u_{t}$ is indeed the large-$N$ limit in $\ast$-distribution of the unitary Brownian motion $U_{t}=U_{t}^{N}$. Since $U_{t}$ and $u_{t}$ are unitary (hence normal) operators, this also means that the empirical eigenvalue distribution of $U_{t}^{N}$ converges to the spectral measure of $u_{t}$.
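The Itô form for $\mathsf{GL}(N;\mathbb{C})$ is easy to simulate. The following Euler-Maruyama sketch is our own illustration (the step count, size, and seed are arbitrary): it iterates $dB=B\,dZ$ with the entry scaling induced by $\langle X,Y\rangle_{N}$, and checks the moment identity $\mathbb{E}[\frac{1}{N}\operatorname{Tr}(B_{t}B_{t}^{\ast})]=e^{t}$, which follows from the Itô rule $dZ\,dZ^{\ast}=dt\,I$ for this scaling.

```python
import numpy as np

rng = np.random.default_rng(1)
N, t, n_steps = 100, 1.0, 400
dt = t / n_steps

# Euler-Maruyama for dB = B dZ, B_0 = I, where dZ has i.i.d. complex Gaussian
# entries with E|dZ_jk|^2 = dt/N (the scaling coming from <X,Y>_N = N Re trace(X*Y))
B = np.eye(N, dtype=complex)
for _ in range(n_steps):
    dZ = np.sqrt(dt / (2 * N)) * (
        rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    )
    B = B @ (np.eye(N) + dZ)

# Since dZ dZ* = dt I, each step multiplies E Tr(B B*) by (1 + dt), so
# (1/N) Tr(B_t B_t*) concentrates around e^t for large N
norm_sq = np.trace(B @ B.conj().T).real / N
```

At $N=100$ the normalized trace is already tightly concentrated, so a single sample lands close to $e^{1}\approx 2.72$.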
Biane also computed the spectral measure $\nu_{t}$ of $u_{t}$. We now record this result, since it relates closely to the results of the present paper. Let $f_{t}$ denote the holomorphic function on $\mathbb{C}\setminus\{1\}$ defined by \[ f_{t}(\lambda)=\lambda e^{\frac{t}{2}\frac{1+\lambda}{1-\lambda}}. \tag{1.6} \] Then $f_{t}$ has a holomorphic inverse $\chi_{t}$ in the open unit disk, and $\chi_{t}$ extends continuously to the closed unit disk. Biane showed that \[ \chi_{t}(z)=\frac{\psi_{u_{t}}(z)}{1+\psi_{u_{t}}(z)}, \] where $\psi_{u_{t}}(z)=\tau[(1-zu_{t})^{-1}]-1$ is the (recentered) moment-generating function of $u_{t}$. From this (and other SDE computations) he determined the following result.

Theorem 1.1 ([4,5]). The spectral measure $\nu_{t}$ of the free unitary Brownian motion $u_{t}$ is supported in the arc \[ \left\{e^{i\phi}:|\phi|<\phi_{\max}(t):=\tfrac{1}{2}\sqrt{(4-t)t}+\cos^{-1}(1-t/2)\right\} \] for $t<4$, and is fully supported on the circle for $t\geq 4$. The measure $\nu_{t}$ has a continuous density $\kappa_{t}$, which is real analytic on the interior of its support arc; see, for example, p. 275 in [5].

In the same papers [4,5] in which he introduced $u_{t}$, Biane considered the free multiplicative Brownian motion $b_{t}$ as well (for example, computing its norm). He conjectured that it should be the large-$N$ limit of the Brownian motion $B_{t}=B_{t}^{N}$ on $\mathsf{GL}(N;\mathbb{C})$, in $\ast$-distribution. This was proved by Kemp in [38], with complementary estimates of moments given in [39]. The goal of the present paper is to fully determine the Brown measure of the free multiplicative Brownian motion $b_{t}$, giving the full complex analog of Theorem 1.1.
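Two elementary properties of $f_{t}$ are easy to confirm numerically from the formula (1.6): $|f_{t}|=1$ on the unit circle (since $(1+\lambda)/(1-\lambda)$ is purely imaginary there), and $f_{t}(1/\lambda)=1/f_{t}(\lambda)$, which reflects the inversion symmetry of the regions $\Sigma_{t}$ appearing below. A minimal sketch of our own:

```python
import numpy as np

def f(t, lam):
    # f_t(lambda) = lambda * exp((t/2)(1 + lambda)/(1 - lambda)), lambda != 1
    return lam * np.exp((t / 2) * (1 + lam) / (1 - lam))

t = 2.0
theta = np.linspace(0.2, 2 * np.pi - 0.2, 101)   # avoid the singularity at lambda = 1
on_circle = np.abs(f(t, np.exp(1j * theta)))     # all equal to 1

lam = 0.3 + 0.4j
inv_product = f(t, 1 / lam) * f(t, lam)          # equals 1: f_t(1/lam) = 1/f_t(lam)

# At t = 4 the support arc of nu_t closes up: phi_max(4) = pi
phi_max_4 = 0.5 * np.sqrt((4 - 4) * 4) + np.arccos(1 - 4 / 2)
```

The last line checks that $\phi_{\max}(4)=\pi$, the value at which the support of $\nu_{t}$ first wraps all the way around the circle.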
1.3. The Brown measure of $b_{t}$. The main result of this paper is a formula for the Brown measure $\mu_{b_{t}}$ of the free multiplicative Brownian motion $b_{t}$. We expect that $\mu_{b_{t}}$ coincides with the large-$N$ limit of the empirical eigenvalue distribution of the Brownian motion $B_{t}^{N}$ on $\mathsf{GL}(N;\mathbb{C})$. Techniques for proving results of this sort (that a limiting eigenvalue distribution agrees with a Brown measure) have been developed in the context of the general circular law, in which the entries are independent and identically distributed but not necessarily Gaussian. Analysis of this model began with the work of Girko [21], continued with results of Bai [2] and of Götze and Tikhomirov [22], and culminated in the definitive version of the circular law established by Tao and Vu [47]. All of these works compute the empirical eigenvalue distribution by taking the Laplacian of the quantity $L_{A}$ in (1.1). They then consider the limiting eigenvalue distribution of $(A-\lambda)^{\ast}(A-\lambda)$ for each $\lambda$, from which they compute (essentially) the Brown measure of the limiting object, which in this case is uniform on a disk. Then, to prove convergence of the eigenvalue distribution of the random matrices to this uniform measure, they develop techniques for controlling the singularities of the logarithm function in (1.1) at infinity and (especially) at zero.
A previous result [35] of the second and third authors showed that $\mu_{b_{t}}$ is supported on the closure of a certain region $\Sigma_{t}$ introduced by Biane in [5]; see the figures below. (We reprove that result in the present paper by a different method; see Theorem 7.2 in Section 7.2.) The proof in [35] is based on Biane's "free Hall transform" $G_{t}$, introduced in [5]. This transform was conjectured by Biane to be the large-$N$ limit of the generalized Segal-Bargmann transform of Hall [29,30], a conjecture that was verified independently by Cébron [10] and the authors of the present paper [14]. A key idea in Biane's work is Gross and Malliavin's probabilistic interpretation [26] of the transform in [29].
For each $\lambda$ outside $\Sigma_{t}$, [35] uses $G_{t}$ to construct an "inverse" of $b_{t}-\lambda$ and of $(b_{t}-\lambda)^{2}$. These inverses are not necessarily bounded operators, but live in the noncommutative $L^{2}$ space, that is, the completion of $\mathcal{A}$ with respect to the inner product $\langle a,b\rangle:=\tau(a^{\ast}b)$. We then strengthen the standard result that the Brown measure is supported on the spectrum of the operator to show that existence of an $L^{2}$ inverse of $(b_{t}-\lambda)^{2}$ guarantees that $\lambda$ is outside the support of $\mu_{b_{t}}$. We note, however, that the methods of [35] do not give any information about the distribution of the Brown measure $\mu_{b_{t}}$ inside the region $\Sigma_{t}$.
1.4. Connection to Physics. The eigenvalue distribution of Brownian motion in $\mathsf{GL}(N;\mathbb{C})$, in the large-$N$ limit, has been studied in the physics literature, first by Gudowska-Nowak, Janik, Jurkiewicz, and Nowak [27] and then by Lohmayer, Neuberger, and Wettig [42]. At least in the case of [42], the motivation for considering this model is a connection to two-dimensional Yang-Mills theory. Yang-Mills quantum field theory is a key part of the standard model of particle physics, and the two-dimensional case can be treated in a mathematically rigorous fashion. Two-dimensional Yang-Mills theory is a much-studied model, in part as a toy model of the four-dimensional theory and in part because of its connections to string theory [23,24].
Yang-Mills theory with structure group $G$ describes a random connection on a principal $G$-bundle. One typically studies the theory through the associated Wilson loop functionals, given by the expectation value of the trace of the holonomy around a loop. Assume at first that $G$ is compact, that the spacetime manifold is the plane, and that the loop is a simple closed curve in the plane. Then the distribution of the holonomy is described by Brownian motion in $G$, with time parameter proportional to the area enclosed by the curve. (See works of Driver [13] and Gross-King-Sengupta [25] and the references therein.) Of particular importance is the case $G=\mathsf{U}(N)$, with $N$ tending to infinity; the resulting theory is called the "master field." The master field in the plane is therefore built around the large-$N$ limit of Brownian motion in $\mathsf{U}(N)$, i.e., the free unitary Brownian motion $u_{t}$. (See works of Singer [45], Anshelevich-Sengupta [1], and Lévy [41].) In this context, the change in behavior of $u_{t}$ at $t=4$ (in which the support of the spectral measure wraps all the way around the circle) is called a "topological phase transition." (See also [15], [16], [11], and [32] for recent progress constructing a rigorous large-$N$ Yang-Mills theory on surfaces other than the plane.) Although Yang-Mills theory is typically constructed when the structure group $G$ is compact, it requires only a small step of imagination to consider also the case $G=\mathsf{GL}(N;\mathbb{C})$. Thus, in [42], Brownian motion in $\mathsf{GL}(N;\mathbb{C})$ is considered as a sort of "complex Wilson loop" computation. It is of interest to determine whether the large-$N$ limit, namely the free multiplicative Brownian motion $b_{t}$, still has a topological phase transition at $t=4$.
The papers [27] and [42] both derive, using nonrigorous methods, the region into which the eigenvalues of $B_{t}^{N}$ cluster in the large-$N$ limit. Both papers find this domain to be precisely the region $\Sigma_{t}$ considered in the present work. Since $\Sigma_{t}$ wraps around the origin precisely at time $t=4$, the authors conclude that the topological phase transition indeed persists after the change from $\mathsf{U}(N)$ to $\mathsf{GL}(N;\mathbb{C})$. The paper [42] also considers a two-parameter extension of the Brownian motion of the sort considered in [14,36,38], and finds that the eigenvalues cluster into the domain denoted $\Sigma_{s,t}$ in [36]. A rigorous version of these results, specifically, that the Brown measure of the relevant free Brownian motion is supported in $\Sigma_{t}$ or $\Sigma_{s,t}$, was then obtained by the second and third authors in [35].
We emphasize that the papers [27], [42], and [35] are concerned only with the region into which the eigenvalues cluster. Nothing is said there about how the eigenvalues are distributed within the region. By contrast, in the present work, we not only prove (again) that the Brown measure of $b_{t}$ is supported in $\Sigma_{t}$, we actually compute the Brown measure (Theorem 2.2). Furthermore, we not only see the same transition at $t=4$ for the $\mathsf{GL}(N;\mathbb{C})$ case as for the $\mathsf{U}(N)$ case, we actually find a direct connection (Proposition 2.6) between the Brown measure of $b_{t}$ and the spectral measure of $u_{t}$.
1.5. Subsequent work. Since the first version of this paper appeared on the arXiv, three subsequent works have appeared that use the techniques developed here to analyze Brown measures of other operators. First, work of Ho and Zhong [37] has extended the results of the present paper to the case of a free multiplicative Brownian motion with an arbitrary unitary initial condition. This means that they compute the Brown measure of $ub_{t}$, where $u$ is a unitary element freely independent of $b_{t}$. Ho and Zhong also compute the Brown measure of $x_{0}+c_{t}$, where $c_{t}$ is a free circular Brownian motion and $x_{0}$ is a self-adjoint element freely independent of $c_{t}$. Second, Demni and Hamdi [12] have analyzed the support of the Brown measure of $u_{t}P$, where $u_{t}$ is the free unitary Brownian motion and $P$ is a projection freely independent of $u_{t}$. Last, Hall and Ho [34] have computed the Brown measure of $x_{0}+ix_{t}$, where $x_{t}$ is the free additive Brownian motion and $x_{0}$ is a self-adjoint element freely independent of $x_{t}$.
The reader may also consult the expository article [33] by the second author, which provides a nontechnical introduction to the techniques used in the present paper.

2.1. A formula for the Brown measure. In this paper, we compute the Brown measure $\mu_{b_{t}}$ of the free multiplicative Brownian motion $b_{t}$, using completely different methods from those in [35]. To state our main result, we need to briefly describe the regions $\Sigma_{t}$. For each $t>0$, consider the holomorphic function $f_{t}$ on $\mathbb{C}\setminus\{1\}$ defined by (1.6). It is easily verified that if $|\lambda|=1$ then $|f_{t}(\lambda)|=1$. There are, however, other points where $|f_{t}(\lambda)|=1$. We then define \[ E_{t}:=\overline{\{\lambda\in\mathbb{C}:|\lambda|\neq 1,\ |f_{t}(\lambda)|=1\}}. \]

Definition 2.1. For each $t>0$, we define $\Sigma_{t}$ to be the connected component of the complement of $E_{t}$ containing 1.
We will show (Theorem 4.1) that $\Sigma_{t}$ may also be characterized as \[ \Sigma_{t}=\{\lambda\in\mathbb{C}\setminus\{0\}:T(\lambda)<t\}, \tag{2.3} \] where the function $T$ is defined in (4.1). Each region $\Sigma_{t}$ is invariant under the maps $\lambda\mapsto 1/\lambda$ and $\lambda\mapsto\bar{\lambda}$. If we consider a ray from the origin with angle $\theta$, and if this ray intersects $\Sigma_{t}$ at all, it does so in an interval of the form $1/r_{t}(\theta)<r<r_{t}(\theta)$ for some $r_{t}(\theta)>1$. (See Figures 3 and 4.) See Section 4 for more information.
We are now ready to state our main result.

Figure 3. The region $\Sigma_{t}$; $r_{t}(\theta)$ denotes the larger of the two radii at which the ray with angle $\theta$ intersects $\partial\Sigma_{t}$. Shown for $t=1.5$.

Theorem 2.2. For all $t>0$, the Brown measure $\mu_{b_{t}}$ of $b_{t}$ is absolutely continuous with respect to the Lebesgue measure on the plane and supported in the domain $\Sigma_{t}$. In $\Sigma_{t}$, the density $W_{t}$ of $\mu_{b_{t}}$ with respect to the Lebesgue measure is strictly positive and real analytic, with the following form in polar coordinates: \[ W_{t}(r,\theta)=\frac{1}{r^{2}}w_{t}(\theta) \tag{2.4} \] for a certain even function $w_{t}$. This function may be computed as \[ w_{t}(\theta)=\frac{1}{2\pi t}+\frac{1}{4\pi}\frac{d}{d\theta}\left[\frac{2r_{t}(\theta)\sin\theta}{r_{t}(\theta)^{2}+1-2r_{t}(\theta)\cos\theta}\right], \tag{2.5} \] where $r_{t}(\theta)$ is the larger of the two radii at which the ray with angle $\theta$ intersects the boundary of $\Sigma_{t}$.
Since $\Sigma_{t}$ is invariant under $\lambda\mapsto\bar{\lambda}$, the function $r_{t}(\theta)$ is an even function of $\theta$, from which it is easy to check that the second term on the right-hand side of (2.5) is also an even function of $\theta$. Although we will customarily let $r_{t}(\theta)$ denote the larger of the two radii, we note that the quantity \[ \frac{2r\sin\theta}{r^{2}+1-2r\cos\theta} \tag{2.6} \] appearing in (2.5) is invariant under $r\mapsto 1/r$. Thus, the value of $w_{t}$ does not actually depend on which radius is used. It is noteworthy that the one nonexplicit part of the formula for $w_{t}$, namely the second term on the right-hand side of (2.5), is computable entirely in terms of the geometry of the region $\Sigma_{t}$. According to Proposition 8.5, $w_{t}$ can also be computed as a logarithmic derivative along the boundary of $\Sigma_{t}$ of the function $f_{t}$ in (1.6). It follows from (2.3) that the function $T$ equals $t$ on the boundary of $\Sigma_{t}$. It is then possible to use implicit differentiation in the equation $T(\lambda)=t$ to compute $dr_{t}(\theta)/d\theta$ as a function of $r_{t}(\theta)$ and $\theta$. We may then use this computation to rewrite (2.5) in a form that no longer involves a derivative with respect to $\theta$, as follows.

Figure 4. Graphs of $r_{t}(\theta)$ (black) and $1/r_{t}(\theta)$ (dashed) for $t=2$, 3.5, 4, and 7.

Proposition 2.3. The function $w_{t}$ in Theorem 2.2 may also be computed in the form \[ w_{t}(\theta)=\frac{1}{2\pi t}\,\omega(r_{t}(\theta),\theta). \] Here \[ \omega(r,\theta)=1+h(r)\,\frac{\alpha(r)\cos\theta+\beta(r)}{\beta(r)\cos\theta+\alpha(r)}, \] where \[ h(r)=\frac{r\log(r^{2})}{r^{2}-1};\qquad \alpha(r)=r^{2}+1-2rh(r);\qquad \beta(r)=(r^{2}+1)h(r)-2r. \]
Thus, to compute $w_{t}(\theta)$, we evaluate $\omega/(2\pi t)$ on the boundary of $\Sigma_{t}$ and then parametrize the boundary by the angle $\theta$; see Figure 5. Using Proposition 2.3, we can also derive small-$t$ and large-$t$ asymptotics of $w_{t}(\theta)$; see Section 8 for details.
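Proposition 2.3 is straightforward to implement numerically. The sketch below is our own illustration: it computes $r_{t}(\theta)$ by solving $T(\lambda)=t$ along each ray, taking as an assumption the formula $T(\lambda)=|\lambda-1|^{2}\log(|\lambda|^{2})/(|\lambda|^{2}-1)$ from Section 4, evaluates $w_{t}=\omega/(2\pi t)$ there, and runs two sanity checks: $\omega$ is invariant under $r\mapsto 1/r$, and the total mass $\int 2\log[r_{t}(\theta)]\,w_{t}(\theta)\,d\theta$ of the Brown measure is 1 (here for $t=1$, where the support confines $\theta$ to $|\theta|<\theta_{\max}(1)=\pi/3$).

```python
import numpy as np

def T(lam):
    # T(lam) = |lam - 1|^2 log(|lam|^2) / (|lam|^2 - 1); assumes |lam| != 1
    x = np.abs(lam) ** 2
    return np.abs(lam - 1) ** 2 * np.log(x) / (x - 1)

def h(r):
    return r * np.log(r ** 2) / (r ** 2 - 1)

def omega(r, theta):
    a = r ** 2 + 1 - 2 * r * h(r)
    b = (r ** 2 + 1) * h(r) - 2 * r
    return 1 + h(r) * (a * np.cos(theta) + b) / (b * np.cos(theta) + a)

def r_t(theta, t):
    # Outer radius of Sigma_t along the ray with angle theta, by bisection on T = t
    lo, hi = 1.0 + 1e-9, 50.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if T(mid * np.exp(1j * theta)) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t = 1.0
theta_max = np.arccos(1 - t / 2)              # = pi/3 for t = 1
thetas = np.linspace(-theta_max + 1e-4, theta_max - 1e-4, 2001)
r = np.array([r_t(th, t) for th in thetas])
w = omega(r, thetas) / (2 * np.pi * t)        # w_t on the boundary (Proposition 2.3)

a_dens = 2 * np.log(r) * w                    # density of arg(lambda), cf. (2.8)
mass = float(np.sum(0.5 * (a_dens[1:] + a_dens[:-1]) * np.diff(thetas)))
```

For $t=1$ the computed $w_{t}$ is nearly constant (about $0.29$ across the support), matching the remark below that the variation of $W_{t}$ at $t=1$ comes almost entirely from the factor $1/r^{2}$.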
The following simple consequences of Theorem 2.2 help explain the significance of the factor of $1/r^{2}$ in the formula (2.4) for $W_{t}$.

Corollary 2.4. (1) $\mu_{b_{t}}$ is invariant under the maps $\lambda\mapsto 1/\lambda$ and $\lambda\mapsto\bar{\lambda}$.
(2) Let $\Xi_{t}$ denote the image of $\Sigma_{t}\setminus(-\infty,0)$ under the complex logarithm map, using the standard branch cut along the negative real axis. We write points $z\in\Xi_{t}$ as $z=\rho+i\theta$. Then for points in $\Xi_{t}$, the push-forward of $\mu_{b_{t}}$ by the logarithm map has density \[ \omega_{t}(\rho,\theta)=w_{t}(\theta), \] independent of $\rho$.

Figure 5. The function $w_{t}(\theta)$ is computed by evaluating $\omega$ on the boundary of $\Sigma_{t}$ and parametrizing the boundary by the angle $\theta$.

Figure 6. Plots of $w_{t}(\theta)$ for $t=2$, 3.5, 4, and 7.
Plots of $w_{t}(\theta)$ are shown in Figure 6. Note that for $t<4$, not all angles $\theta$ actually occur in the domain $\Sigma_{t}$. Thus, for $t<4$, the function $w_{t}(\theta)$ is only defined for $\theta$ in a certain interval $(-\theta_{\max}(t),\theta_{\max}(t))$, where, as shown in Section 4, $\theta_{\max}(t)=\cos^{-1}(1-t/2)$. Plots of $W_{t}$ for $t=1$ and $t=4$ are then shown in Figures 7, 8, and 9. Actually, when $t=1$, the function $w_{t}$ is almost constant (see Figure 19). Thus, the variation in $W_{t}$ in Figure 7 comes almost entirely from the variation in the factor of $1/r^{2}$ in (2.4).
We also observe that, by Point 1 of Corollary 2.4, half the mass of $\mu_{b_{t}}$ is contained in the unit disk and half in the complement of the unit disk. Thus, although the density $W_{t}$ becomes large near the origin in, say, Figures 8 and 9, it is not correct to say that most of the mass of $\mu_{b_{t}}$ is near the origin.

2.2. A connection to free unitary Brownian motion. It follows easily from Theorem 2.2 that the distribution of the argument of $\lambda$ with respect to $\mu_{b_{t}}$ has a density given by \[ a_{t}(\theta)=2\log[r_{t}(\theta)]\,w_{t}(\theta), \tag{2.8} \] where, as in Theorem 2.2, we take $r_{t}(\theta)$ to be the outer radius of the domain (with $r_{t}(\theta)>1$). After all, the Brown measure in the domain is computed in polar coordinates as $(1/r^{2})w_{t}(\theta)\,r\,dr\,d\theta$. Integrating with respect to $r$ from $1/r_{t}(\theta)$ to $r_{t}(\theta)$ then gives the claimed density for $\theta$. Recall from Theorem 1.1 that the limiting eigenvalue distribution $\nu_{t}$ for Brownian motion in the unitary group was determined by Biane. We now claim that the distribution in (2.8) is related to Biane's measure $\nu_{t}$ by a natural change of variable.
To each angle $\theta$ arising in the region $\Sigma_{t}$, we associate another angle $\phi$ by the formula \[ f_{t}(r_{t}(\theta)e^{i\theta})=e^{i\phi}. \tag{2.9} \]

Proposition 2.5. If $\theta$ is distributed according to the density in (2.8) and $\phi$ is defined by (2.9), then $\phi$ is distributed as Biane's measure $\nu_{t}$.
We may think of this result in a more geometric way, as follows. Define a map $\Phi_{t}:\Sigma_{t}\to S^{1}$ by requiring (a) that $\Phi_{t}$ should agree with $f_{t}$ on the boundary of $\Sigma_{t}$, and (b) that $\Phi_{t}$ should be constant along each radial segment inside $\Sigma_{t}$, as in Figure 10. (This specification makes sense because $f_{t}$ has the same value at the two boundary points on each radial segment.) Explicitly, $\Phi_{t}$ may be computed as \[ \Phi_{t}(\lambda)=f_{t}(r_{t}(\arg\lambda)e^{i\arg\lambda}). \]
Then Proposition 2.5 gives the following result, which may be summarized by saying that the distribution $\nu_{t}$ of free unitary Brownian motion is a "shadow" of the Brown measure of $b_{t}$.
Proposition 2.6. The push-forward of the Brown measure of $b_{t}$ under the map $\Phi_{t}$ is Biane's measure $\nu_{t}$ on $S^{1}$. Indeed, the Brown measure of $b_{t}$ is the unique measure $\mu$ on $\Sigma_{t}$ with the following two properties: (1) the push-forward of $\mu$ by $\Phi_{t}$ is $\nu_{t}$, and (2) $\mu$ is absolutely continuous with respect to the Lebesgue measure with a density $W$ having the form $W(r,\theta)=\frac{1}{r^{2}}g(\theta)$ in polar coordinates, for some continuous function $g$.
Now, the results of [5] and [35] already indicate a relationship between the free unitary Brownian motion $u_{t}$ (whose spectral measure is $\nu_{t}$) and the free multiplicative Brownian motion $b_{t}$ (whose Brown measure we are studying in this paper). It is nevertheless striking to see such a direct relationship between $\mu_{b_{t}}$ and $\nu_{t}$. Indeed, Proposition 2.6 precisely mirrors the relationship between the semicircle law and the circular law. If $c_{t}$ is a circular random variable of variance $t$, and $x_{t}$ is semicircular of variance $t$, then the distribution of $x_{t}$ (the semicircle law on the interval $[-2\sqrt{t},2\sqrt{t}]$) is the push-forward of the Brown measure of $c_{t}$ (the uniform probability measure on the disk $D(\sqrt{t})$ of radius $\sqrt{t}$) under a similar "shadow map": first project the disk onto its upper boundary circle via $(x,y)\mapsto(x,\sqrt{t-x^{2}})$, and then apply the conformal map $z\mapsto z+t/z$, which carries the exterior of $D(\sqrt{t})$ onto the complement of $[-2\sqrt{t},2\sqrt{t}]$. (The net result of these two operations is $(x,y)\mapsto 2x$.) Since, as described in the introduction, $u_{t}$ and $b_{t}$ are the "Lie group" versions of the "Lie algebra" operators $x_{t}$ and $c_{t}$, it is pleasing that this shadow relationship between their Brown measures persists.
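The claim that the net result of the shadow map is $(x,y)\mapsto 2x$ can be checked in closed form: the real-part marginal of the uniform measure on $D(\sqrt{t})$ has density $(2/\pi t)\sqrt{t-x^{2}}$, and rescaling $x\mapsto 2x$ turns it into exactly the semicircle density of variance $t$. A short sketch of our own:

```python
import numpy as np

t = 1.0
x = np.linspace(-np.sqrt(t), np.sqrt(t), 2001)[1:-1]   # interior of [-sqrt(t), sqrt(t)]

# Marginal density of Re(z) for z uniform on the disk of radius sqrt(t):
# chord length 2 sqrt(t - x^2) divided by the disk area pi t
marginal = 2 * np.sqrt(t - x ** 2) / (np.pi * t)
norm = float(np.sum(0.5 * (marginal[1:] + marginal[:-1]) * np.diff(x)))

# Push forward by x -> u = 2x (the change of variables divides the density by 2)
u = 2 * x
pushforward = marginal / 2

# Semicircle density of variance t on [-2 sqrt(t), 2 sqrt(t)]
semicircle = np.sqrt(4 * t - u ** 2) / (2 * np.pi * t)
```

The two density arrays agree pointwise, which is the finite-dimensional shadow of Proposition 2.6.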
2.3. The structure of the formula. We now explain the significance of the two terms on the right-hand side of the formula (2.5) for $w_{t}$. Following the general construction of Brown measures in (1.3), the density of the Brown measure is computed as $\frac{1}{4\pi}\Delta s_{t}(\lambda)$, where $\Delta$ is the Laplacian with respect to $\lambda$ and where $s_{t}(\lambda)=\lim_{\varepsilon\to 0^{+}}\tau[\log((b_{t}-\lambda)^{\ast}(b_{t}-\lambda)+\varepsilon)]$. It is then convenient to work in polar coordinates $(r,\theta)$. In these coordinates, we may write $\Delta$ as \[ \Delta=\frac{\partial^{2}}{\partial r^{2}}+\frac{1}{r}\frac{\partial}{\partial r}+\frac{1}{r^{2}}\frac{\partial^{2}}{\partial\theta^{2}}. \tag{2.10} \]

Theorem 2.7. For each $t>0$, the function $s_{t}(\lambda)$ is real analytic for $\lambda\in\Sigma_{t}$ and also for $\lambda\in(\bar{\Sigma}_{t})^{c}$. At each boundary point, $s_{t}(\lambda)$ and its first derivatives with respect to $\lambda$ approach the same value from the inside of $\Sigma_{t}$ as from the outside of $\Sigma_{t}$. For $\lambda$ inside $\Sigma_{t}$, we have \[ \frac{\partial^{2}s_{t}}{\partial r^{2}}+\frac{1}{r}\frac{\partial s_{t}}{\partial r}=\frac{2}{tr^{2}}. \tag{2.11} \] For $\lambda$ inside $\Sigma_{t}$, we also have that $\partial s_{t}/\partial\theta$ is independent of $r$ with $t$ and $\theta$ fixed. Indeed, $\partial s_{t}/\partial\theta(\lambda)$ is the unique function on $\Sigma_{t}$ that is independent of $r$ and agrees with the angular derivative of $\log(|\lambda-1|^{2})$ as we approach $\partial\Sigma_{t}$.
The formula (2.11), along with (2.10), accounts for the first term on the right-hand side of (2.5). Then, since the angular derivative of $\log(|\lambda-1|^{2})$ is computable as \[ \frac{\partial}{\partial\theta}\log(|\lambda-1|^{2})=\frac{2r\sin\theta}{r^{2}+1-2r\cos\theta}, \] as in (2.6), we can recognize the second term on the right-hand side of (2.5) as the $\theta$-derivative of $\partial s_{t}/\partial\theta$. Thus, Theorem 2.7, together with the formula (2.10) for the Laplacian in polar coordinates, accounts for the formula (2.5) for $w_{t}$.

2.4. Deriving the formula. We now briefly indicate the method we will use to compute the Brown measure $\mu_{b_{t}}$. Following the general construction of the Brown measure in (1.3), we consider the function $S$ defined by \[ S(t,\lambda,\varepsilon)=\tau\left[\log((b_{t}-\lambda)^{\ast}(b_{t}-\lambda)+\varepsilon)\right] \tag{2.12} \] for $\lambda\in\mathbb{C}$ and $\varepsilon>0$, where $b_{t}$ is the free multiplicative Brownian motion and $\tau$ is the trace in the von Neumann algebra in which $b_{t}$ lives. It is easily verified that as $\varepsilon$ decreases with $t$ and $\lambda$ fixed, $S(t,\lambda,\varepsilon)$ also decreases. Hence, the limit \[ s_{t}(\lambda)=\lim_{\varepsilon\to 0^{+}}S(t,\lambda,\varepsilon) \] exists, possibly with the value $-\infty$.
The general theory developed by Brown [8] shows that $s_{t}(\lambda)$ is a subharmonic function of $\lambda$ for each fixed $t$, so that the Laplacian (in the distributional sense) of $s_{t}(\lambda)$ with respect to $\lambda$ is a positive measure. If this measure happens to be absolutely continuous with respect to the Lebesgue measure, then the density $W(t,\lambda)$ of the Brown measure is computed in terms of $s_{t}(\lambda)$ as follows: \[ W(t,\lambda)=\frac{1}{4\pi}\Delta s_{t}(\lambda). \tag{2.13} \] See also Chapter 11 in [43] and Section 2.3 in [35] for general information on Brown measures. The first major step toward proving Theorem 2.2 is the following result.
Theorem 2.8. The function $S$ in (2.12) satisfies the following PDE: \[ \frac{\partial S}{\partial t}=\varepsilon\frac{\partial S}{\partial\varepsilon}\left(1+(|\lambda|^{2}-\varepsilon)\frac{\partial S}{\partial\varepsilon}-a\frac{\partial S}{\partial a}-b\frac{\partial S}{\partial b}\right),\qquad\lambda=a+bi, \tag{2.14} \] with the initial condition \[ S(0,\lambda,\varepsilon)=\log(|\lambda-1|^{2}+\varepsilon). \] We emphasize that $S(t,\lambda,\varepsilon)$ is only defined for $\varepsilon>0$. Although, as we will see, $\lim_{\varepsilon\to 0^{+}}S(t,\lambda,\varepsilon)$ is finite, $\partial S/\partial\varepsilon$ develops singularities in this limit. Thus, it is not correct to formally set $\varepsilon=0$ in (2.14) to obtain $\partial s_{t}/\partial t=0$. (Actually, it will turn out that $s_{t}(\lambda)$ is independent of $t$ for as long as $\lambda$ remains outside $\Sigma_{t}$, but not after this time; see Section 7.2.) After verifying this equation (Section 5), we will use the Hamilton-Jacobi formalism to analyze the solution (Section 6). In the remaining sections, we will then analyze the limit of the solution as $\varepsilon$ tends to zero and compute the Laplacian in (2.13). The expository article [33] of the second author provides an introduction to the methods used in the present paper.
By way of comparison, we mention that a similar PDE was used in Biane's paper [3]. There he studies the spectral measure $\mu_{t}$ of $x_{0}+x_{t}$, the free additive Brownian motion with a nonconstant initial distribution $x_{0}$ freely independent of $x_{t}$. Biane studies the Cauchy transform $G$ of $\mu_{t}$: \[ G(t,z)=\int_{\mathbb{R}}\frac{1}{z-x}\,d\mu_{t}(x), \tag{2.15} \] and shows that $G$ satisfies the complex inviscid Burgers equation \[ \frac{\partial G}{\partial t}+G\frac{\partial G}{\partial z}=0. \tag{2.16} \] The measure $\mu_{t}$ may then be recovered, up to a constant, as $\lim_{\varepsilon\to 0^{+}}\operatorname{Im}G(t,x+i\varepsilon)$.
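For the pure semicircular case $x_{0}=0$, the Cauchy transform is the root of $tG^{2}-zG+1=0$ with $G\sim 1/z$ at infinity, and the Burgers equation can be checked directly by finite differences. A minimal sketch of our own (an illustration of the equation, not Biane's computation):

```python
import numpy as np

def G(t, z):
    # Cauchy transform of the semicircle law of variance t:
    # the root of t G^2 - z G + 1 = 0 with G(t, z) ~ 1/z as z -> infinity
    s = np.sqrt(z * z - 4 * t)
    r1, r2 = (z - s) / (2 * t), (z + s) / (2 * t)
    return r1 if abs(r1) < abs(r2) else r2

t, z, h = 1.0, 2.0 + 3.0j, 1e-5
G_t = (G(t + h, z) - G(t - h, z)) / (2 * h)     # dG/dt
G_z = (G(t, z + h) - G(t, z - h)) / (2 * h)     # dG/dz
residual = abs(G_t + G(t, z) * G_z)             # Burgers equation: ~ 0
```

Implicit differentiation of $tG^{2}-zG+1=0$ shows the residual vanishes identically; the finite-difference value is nonzero only through discretization error.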
In our paper, we similarly use a first-order, nonlinear PDE whose solution in a certain limit gives the desired measure. We note, however, that the PDE (2.16) is not actually the main source of information about $\mu_{t}$ in [3]. By contrast, our analysis of the Brown measure of the free multiplicative Brownian motion $b_{t}$ is based entirely on the PDE in Theorem 2.8. Finally, we mention that for the case of the circular Brownian motion $c_{t}$, a PDE similar to the one in Theorem 2.8 appeared in work of Burda, Grela, Nowak, Tarnowski, and Warchoł [9, Equation (9)].

Figure 11. A plot of $W_{t}$ with $t=2$ (left). A histogram of the eigenvalues of $B_{t}^{N}$ with $N=2{,}000$ and $t=2$ (right) is shown for comparison.
3. Comparison with the eigenvalue distribution of $B_{t}^{N}$

As mentioned in Section 1.3, the Brown measure of the free multiplicative Brownian motion $b_{t}$ is a natural candidate for the limiting empirical eigenvalue distribution of the Brownian motion $B_{t}^{N}$ in $\mathsf{GL}(N;\mathbb{C})$. We may express this idea formally as a conjecture.

Conjecture 3.1. The empirical eigenvalue distribution of $B_{t}^{N}$ converges almost surely, as $N\to\infty$, to the Brown measure $\mu_{b_{t}}$.

While natural, Conjecture 3.1 is technically difficult to approach. It is by now well known that the logarithmic singularity in the definition of the Brown measure can result in failure of convergence of the empirical distribution of eigenvalues to the Brown measure of the limit (in $\ast$-distribution) of the random matrix ensembles. Suppose, for example, that $T_{N}$ is an $N\times N$ matrix with all entries 0 except for 1's just below the diagonal. Then all of $T_{N}$'s eigenvalues are 0, and hence the empirical eigenvalue distribution is a point mass at 0 for each $N$. However, in $\ast$-distribution, $T_{N}$ converges to a Haar unitary $u$, whose Brown measure is the uniform probability measure on the unit circle. (See, for example, Section 2.6 of [46].) In [46], Śniady proved that convergence to the Brown measure is "generic," in the sense that a small (vanishing in the limit) independent Gaussian perturbation of the original ensemble will always yield convergence to the Brown measure. (The required size of the perturbation was more recently explored in [28].) A main step in Śniady's proof is to show (in our language) that the empirical eigenvalue distribution of a complex Brownian motion $Z_{t}$ on $\mathfrak{gl}(N;\mathbb{C})$, with any deterministic initial condition, converges to the appropriate Brown measure. That is to say, there is enough regular noise in such matrix diffusions to kill any pseudospectral discontinuities. It is natural to expect that the same should hold true for our geometric matrix diffusion $B_{t}$.
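The shift-matrix example is easy to reproduce. In the sketch below (our own illustration; the size, perturbation scale, and seed are arbitrary choices), $T_{N}$ is exactly nilpotent, yet a Gaussian perturbation of size $10^{-6}$ moves its eigenvalues out to a circle of radius roughly $\varepsilon^{1/N}\approx 0.93$, already close to the unit circle supporting the Brown measure of the Haar-unitary limit.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

# Nilpotent shift: 1's just below the diagonal; all N eigenvalues are 0
T = np.diag(np.ones(N - 1), k=-1)
nilpotent_check = np.abs(np.linalg.matrix_power(T, N)).max()   # T^N = 0 exactly

# Small independent Gaussian perturbation, in the spirit of Sniady's theorem
eps = 1e-6
E = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
radii = np.abs(np.linalg.eigvals(T + eps * E))

# The perturbed eigenvalues concentrate near a circle of radius about eps**(1/N)
mean_radius, spread = radii.mean(), radii.std()
```

This is the pseudospectral discontinuity in action: an arbitrarily small perturbation moves the spectrum from the point mass at 0 toward the unit circle.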
In support of Conjecture 3.1, we first note that for small $t$, the distribution of $B^N_t$ (namely, the time-$t$ heat kernel measure on $\mathsf{GL}(N;\mathbb{C})$, based at the identity) is approximately Gaussian. Thus, $B^N_t$ is distributed, for small $t$, similarly to the Ginibre ensemble, shifted by the identity and scaled by $\sqrt{t}$. Conjecture 3.1 therefore leads us to expect that $\mu_{b_t}$ will be close, for small $t$, to the uniform probability measure on a disk of radius $\sqrt{t}$. This expectation is confirmed by the asymptotics in Section 8.1.
We now offer several numerical tests of the conjecture. First, Figure 11 directly compares the density $W_t$ with $t=2$ to the distribution of eigenvalues of $B^N_t$ with $t=2$ and $N=2{,}000$. Next, in light of Conjecture 3.1 and Point 2 of Corollary 2.4, we expect that the limiting distribution of the logarithms of the eigenvalues of $B^N_t$ will be constant in the horizontal direction. This expectation is confirmed by simulations, as in the right-hand side of Figure 12.
Furthermore, Conjecture 3.1 predicts that the large-$N$ distribution of the arguments of the eigenvalues will be given by the density $a_t$ in (2.8).

Figure 13. The density $a_t(\theta)$ in (2.8) plotted against a histogram of the arguments of the eigenvalues of $B^N_t$, for $N=2{,}000$ and $t=2$, 3.5, 4, and 7.

Figure 14. The density of Biane's measure $\nu_t(\phi)$ plotted against a histogram of $\{\Phi_t(\lambda_j)\}_{j=1}^N$, for $N=2{,}000$ and $t=2$, 3.5, 4, and 7.

4. Properties of $\Sigma_t$
We now verify some important properties of the regions $\Sigma_t$ in Definition 2.1. Define
\[ T(\lambda) = |\lambda-1|^2\,\frac{\log(|\lambda|^2)}{|\lambda|^2-1}. \tag{4.1} \]
Note that the function $\log x/(x-1)$ has a removable singularity at $x=1$, with a limiting value of 1 at $x=1$. Thus, $T(\lambda)$ is a real analytic function on all of $\mathbb{C}\setminus\{0\}$. Since, also, $|\lambda-1|^2\to 1$ and $\log(|\lambda|^2)/(|\lambda|^2-1)\to+\infty$ as $\lambda\to 0$, we see that $T(\lambda)\to+\infty$ as $\lambda\to 0$. By checking the three cases $|\lambda|>1$, $|\lambda|=1$, and $|\lambda|<1$, we may verify that $T(\lambda)\ge 0$ for all $\lambda$, with equality only if $\lambda=1$.
Theorem 4.1. For all $t>0$, the region $\Sigma_t$ may be expressed as
\[ \Sigma_t = \{\lambda\in\mathbb{C} \mid T(\lambda) < t\}, \]
and the boundary of $\Sigma_t$ may be expressed as
\[ \partial\Sigma_t = \{\lambda\in\mathbb{C} \mid T(\lambda) = t\}. \]
Thus, each fixed $\lambda\in\mathbb{C}$ will be outside $\Sigma_t$ until $t=T(\lambda)$ and will be inside $\Sigma_t$ for all $t>T(\lambda)$. We may therefore say that $T(\lambda)$ is the time at which the domain $\Sigma_t$ gobbles up $\lambda$. See Figures 15 and 16.
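As a numerical sanity check (a sketch, not part of the proofs, assuming the formula $T(\lambda)=|\lambda-1|^2\log(|\lambda|^2)/(|\lambda|^2-1)$ above), one can verify that $T\ge 0$ with equality only at $\lambda=1$, that $T$ is invariant under $\lambda\mapsto\bar\lambda$ and under replacing the radius by its reciprocal, and that the minimum of $T$ along each ray is $2(1-\cos\theta)$:

```python
import numpy as np

def T(lam):
    """T(lambda) = |lambda-1|^2 * log(|lambda|^2)/(|lambda|^2 - 1),
    with the removable singularity at |lambda| = 1 given the value 1."""
    m = abs(lam) ** 2
    factor = 1.0 if abs(m - 1.0) < 1e-12 else np.log(m) / (m - 1.0)
    return abs(lam - 1.0) ** 2 * factor

# T >= 0, with equality only at lambda = 1
samples = [0.3 + 0.4j, -1.2 + 0.1j, 2.0 - 3.0j, 0.01 + 0j, np.exp(2.0j)]
assert all(T(z) >= 0 for z in samples)
assert T(1.0 + 0j) == 0.0

# invariance under reciprocal radius (same angle) and under conjugation
for r, th in [(0.5, 1.0), (2.3, -0.7), (1.7, 3.0)]:
    z = r * np.exp(1j * th)
    assert np.isclose(T(z), T((1.0 / r) * np.exp(1j * th)))
    assert np.isclose(T(z), T(np.conj(z)))

# for each angle, the minimum over r occurs at r = 1, with value 2(1-cos(theta))
for th in [0.5, 2.0, 3.0]:
    vals = [T(r * np.exp(1j * th)) for r in np.linspace(0.05, 5.0, 2000)]
    assert min(vals) >= 2 * (1 - np.cos(th)) - 1e-3
    assert min(vals) <= 2 * (1 - np.cos(th)) + 1e-2
```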
For each $t>0$, the region $\Sigma_t$ has the following properties.
1. For $t<4$, $\Sigma_t$ is simply connected, while for $t\ge 4$, $\Sigma_t$ is doubly connected, with 0 lying in the bounded component of the complement.
2. For each angle $\theta$ with $2(1-\cos\theta)<t$, the ray with angle $\theta$ intersects $\Sigma_t$ in the set of points $re^{i\theta}$ with $1/r_t(\theta)<r<r_t(\theta)$; for all other angles, the ray does not intersect $\Sigma_t$.
3. The boundary of $\Sigma_t$ is smooth for $t\ne 4$, while for $t=4$, the boundary is smooth except at the single point $\lambda=-1$.
4. $\Sigma_t$ is invariant under the maps $\lambda\mapsto 1/\lambda$ and $\lambda\mapsto\bar\lambda$, and coincides with the domain introduced by Biane in [5].
For Point 1, see Figure 17. We now begin working toward the proofs of Theorems 4.1 and 4.2.
We now state some important properties of the function r t occurring in the statement of Theorem 2.2; the proof is given on p. 24.
Using the proposition, we can now compute the sets $F_t$ and $E_t=\overline{F_t}$ that enter into the definition of $\Sigma_t$. (Recall (2.1) and (2.2).)

Corollary 4.5. For $t\le 4$, the set $F_t$ consists of points of the form $r_t(\theta)e^{i\theta}$ and $(1/r_t(\theta))e^{i\theta}$ for $-\cos^{-1}(1-t/2)<\theta<\cos^{-1}(1-t/2)$. In this case, the closure of $F_t$ consists of $F_t$ together with the points $e^{i\theta}$ on the unit circle with $\cos\theta=1-t/2$. There are two such points when $t<4$ and one such point when $t=4$, namely $-1$.
For t > 4, the set F t consists of points of the form r t (θ)e iθ and (1/r t (θ))e iθ , where θ ranges over all possible angles, and this set is closed.
Figure 18. The set $F_t$ with $t=1.3$, with the unit circle (dashed) shown for comparison.
Lemma 4.6. Let us write the function $T$ in (4.1) in polar coordinates. Then for each $\theta$, the function $r\mapsto T(r,\theta)$ is strictly decreasing for $0<r<1$ and strictly increasing for $r>1$. For each $\theta$, the minimum value of $T(r,\theta)$, achieved at $r=1$, is $2(1-\cos\theta)$, and we have
\[ \lim_{r\to 0}T(r,\theta) = \lim_{r\to\infty}T(r,\theta) = +\infty. \]

Proof. We will show in Proposition 6.13 that the function $T(\lambda)$ is the limit of another function $t_*(\lambda_0,\varepsilon_0)$ as $\varepsilon_0$ goes to zero. Explicitly, this amounts to saying that $T(r,\theta)=g_\theta(\delta)$, where $g$ is defined in (6.61) and $\delta=r+1/r$. Now, $\delta$ is decreasing for $0<r<1$ and increasing for $r>1$. Thus, the claimed monotonicity of $T$ follows if $g_\theta(\delta)$ is an increasing function of $\delta$ for each $\theta$, which we will show in the proof of Proposition 6.16.
For the convenience of the reader, we briefly outline how the argument goes in the context of the function $T(r,\theta)$. We note that, in polar coordinates,
\[ T(r,\theta) = \left(r^2+1-2r\cos\theta\right)\frac{\log(r^2)}{r^2-1}, \tag{4.2} \]
where if we assign $\log(r^2)/(r^2-1)$ the value 1 at $r=1$, then $T$ is analytic except at $r=0$. We then compute that, after simplification,
\[ \frac{\partial T}{\partial r} = \frac{4r\left[(r^2+1)\cos\theta-2r\right]\log r + 2(r^2-1)\left(r^2+1-2r\cos\theta\right)}{r\,(r^2-1)^2}. \tag{4.3} \]
We then claim that for all $\theta$, we have $\partial T/\partial r>0$ for $r>1$ and $\partial T/\partial r<0$ for $r<1$.
Note that for each fixed $r$, the right-hand side of (4.3) depends linearly on $\cos\theta$. Thus, if, for a fixed $r$, $\partial T/\partial r$ is positive both when $\cos\theta=1$ and when $\cos\theta=-1$, it will be positive for all $\theta$. Specifically, we may say that
\[ \frac{\partial T}{\partial r}(r,\theta) \ge \min\left\{\frac{\partial T}{\partial r}(r,0),\ \frac{\partial T}{\partial r}(r,\pi)\right\}. \tag{4.4} \]
It is now an elementary (if slightly messy) computation to check that the right-hand side of (4.4) is strictly positive for all $r>1$. A similar argument then shows that $\partial T/\partial r$ is negative for all $\theta$ and all $0<r<1$. We conclude that for each $\theta$, the function $r\mapsto T(r,\theta)$ is decreasing for $0<r<1$ and increasing for $r>1$. The minimum value therefore occurs at $r=1$, and this value is the value of $r^2+1-2r\cos\theta$ at $r=1$, namely $2(1-\cos\theta)$. Finally, we can easily see that for $r$ approaching zero, we have $T(r,\theta)\sim-\log(r^2)\to+\infty$, and for $r$ approaching infinity, we have $T(r,\theta)\sim\log(r^2)\to+\infty$.

Proof of Proposition 4.4. The minimum value of $T(r,\theta)$, achieved at $r=1$, is $2-2\cos\theta$. This value is always less than $t$, as can be verified separately in the cases $t>4$ (all $\theta$) and $t\le 4$ ($|\theta|<\cos^{-1}(1-t/2)$). Thus, Lemma 4.6 tells us that the equation $T(r,\theta)=t$ has exactly one solution $r$ with $0<r<1$ and exactly one solution with $r>1$. Since, as is easily verified, $T(1/r,\theta)=T(r,\theta)$, the two solutions are reciprocals of each other, and we let $r_t(\theta)$ denote the solution with $r>1$. Since $\partial T/\partial r$ is nonzero for all $r\ne 1$, the implicit function theorem tells us that $r_t(\theta)$ depends analytically on $\theta$.
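The defining equation $T(r,\theta)=t$ in Proposition 4.4 is easy to solve numerically. The following sketch (assuming the polar formula for $T$ used above) brackets and bisects the unique root $r_t(\theta)>1$, and checks that its reciprocal solves the same equation inside the unit disk:

```python
import math

def T_polar(r, theta):
    # T(r, theta) = (r^2 + 1 - 2 r cos(theta)) * log(r^2)/(r^2 - 1),
    # with the removable singularity at r = 1 given the value 1.
    m = r * r
    factor = 1.0 if abs(m - 1.0) < 1e-14 else math.log(m) / (m - 1.0)
    return (m + 1.0 - 2.0 * r * math.cos(theta)) * factor

def r_t(theta, t):
    """Solve T(r, theta) = t for the root with r > 1 by bisection.
    Requires 2(1 - cos(theta)) < t, so that such a root exists."""
    assert 2.0 * (1.0 - math.cos(theta)) < t
    lo, hi = 1.0 + 1e-12, 2.0
    while T_polar(hi, theta) < t:      # bracket the root; T increases for r > 1
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if T_polar(mid, theta) < t:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t, theta = 2.0, 0.5
r = r_t(theta, t)
assert abs(T_polar(r, theta) - t) < 1e-9
# the reciprocal point 1/r_t(theta) solves the same equation inside the disk
assert abs(T_polar(1.0 / r, theta) - t) < 1e-9
```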
We are now ready for the proofs of our main results about Σ t .
Proof of Theorem 4.1. We first claim that the set $E_t=\overline{F_t}$ is precisely the set where $T(\lambda)=t$. To see this, first note that $F_t$ is, by Lemma 4.3, the set of $\lambda$ with $|\lambda|\ne 1$ where $T(\lambda)=t$. Then by Corollary 4.5, the closure of $F_t$ is obtained by adding in the points on the unit circle (zero, one, or two such points, depending on $t$) where $\cos\theta=1-t/2$. But these points are easily seen to be the points on the unit circle where $T(\lambda)=t$.
Using Corollary 4.5, we see that the complement of the set $E_t=\{\lambda \mid T(\lambda)=t\}$ has two connected components when $t<4$ and three connected components when $t\ge 4$. Since $T(1)=0<t$, we have $T(\lambda)<t$ on the entire connected component of $E_t^c$ containing 1, which is, by definition, the region $\Sigma_t$. The remaining components of $E_t^c$ are the unbounded component and (for $t\ge 4$) the component containing 0. Since $T(\lambda)$ tends to $+\infty$ at zero and at infinity, we see that $T(\lambda)>t$ on these regions, so that $T(\lambda)<t$ precisely on $\Sigma_t$.

It is also clear from Corollary 4.5 that the boundary of the region $\Sigma_t$ (i.e., of the connected component of $E_t^c$ containing 1) contains the entire set $E_t=\{\lambda \mid T(\lambda)=t\}$.
Proof of Theorem 4.2. Point 1 follows easily from Corollary 4.5. For Point 2, we note that by Proposition 4.4, we have $T(r,\theta)<t$ for $1/r_t(\theta)<r<r_t(\theta)$, and $T(r,\theta)\ge t$ for $0<r\le 1/r_t(\theta)$ and for $r\ge r_t(\theta)$. Thus, by Theorem 4.1, the ray with angle $\theta$ intersects $\Sigma_t$ precisely in the claimed interval.

For Point 3, we have already shown that $\partial T/\partial r$ is nonzero except when $r=1$. When $r=1$, we know from (4.1) that $T(1,\theta)=2-2\cos\theta$. Thus, when $r=1$, we have $\partial T/\partial\theta=2\sin\theta$, which is nonzero except when $\theta=0$ or $\theta=\pi$. Thus, the gradient of $T(\lambda)$ is nonzero except when $\lambda=0$ (where $T(\lambda)$ is undefined), when $\lambda=1$, and when $\lambda=-1$. Since 0 is never in $\Sigma_t$ and 1 is always in $\Sigma_t$, the only possible singular point in the boundary of $\Sigma_t$ is at $\lambda=-1$. Since $T(r,\theta)=2-2\cos\pi=4$ when $r=1$ and $\theta=\pi$, the point $\lambda=-1$ belongs to the boundary of $\Sigma_4$. Meanwhile, the Taylor expansion of $T$ to second order at $\lambda=-1$ is easily found to be
\[ T(\lambda) \approx 4 + (\operatorname{Re}\lambda+1)^2/3 - (\operatorname{Im}\lambda)^2. \]
By the Morse lemma, we can then make a smooth change of variables $(u,v)$ so that in the new coordinate system,
\[ T = 4 + u^2 - v^2. \]
Thus, near $\lambda=-1$, the set $T(\lambda)=4$ is the union of the curves $u+v=0$ and $u-v=0$.

The invariance of $\Sigma_t$ under $\lambda\mapsto 1/\lambda$ and under $\lambda\mapsto\bar\lambda$ follows from the easily verified invariance of $T(\lambda)$ under these transformations.

Finally, we verify that the domain $\Sigma_t$, as we have defined it, coincides with the one originally introduced by Biane in [5]. Let us start with the case $t<4$. According to the discussion at the bottom of p. 273 in [5], the boundary of Biane's domain $\Sigma_t$ consists in this case of two analytic arcs. The interior of one arc lies in the open unit disk and the interior of the other arc lies in the complement of the closed unit disk, while the endpoints of both arcs lie on the unit circle. The first arc is computed by applying a certain holomorphic function $\chi(t,\cdot)$ to the support of Biane's measure $\nu_t$ in the unit circle. Now, $\chi(t,\cdot)$ satisfies $f_t(\chi(t,z))=z$ on the closed unit disk.
(Combine the identity involving $\kappa$ on p. 266 of [5] with the definition of $\chi$ on p. 273.) We see that the interior of the first arc consists of points $\lambda$ with $|\lambda|<1$ but $|f_t(\lambda)|=1$. This arc must, therefore, coincide with the arc of points with radius $1/r_t(\theta)$. The second arc is obtained from the first by the map $\lambda\mapsto 1/\lambda$ and therefore coincides with the points of radius $r_t(\theta)$. We can now see that the boundary of Biane's domain coincides with the boundary of the domain we have defined. A similar analysis applies to the cases $t>4$ and $t=4$, using the description of the boundary of $\Sigma_t$ in those cases at the top of p. 274 in [5].

5. The PDE for $S$
In this section, we will verify the PDE for S in Theorem 2.8. The claimed initial condition (2.15) holds because b 0 = 1. We now proceed to verify the equation (2.14) itself.
Let $(c_t)_{t\ge 0}$ denote a free circular Brownian motion. The rules of free stochastic calculus, in "stochastic differential" form, are as follows; see [38, Lemma 2.5, Lemma 4.3]. If $g_t$ and $h_t$ are processes adapted to $c_t$, then
\[ dc_t\,g_t\,dc_t^* = dc_t^*\,g_t\,dc_t = \tau(g_t)\,dt, \tag{5.1} \]
\[ dc_t\,g_t\,dc_t = dc_t^*\,g_t\,dc_t^* = 0, \tag{5.2} \]
\[ dc_t\,dt = dc_t^*\,dt = dt\,dc_t = dt\,dc_t^* = 0, \tag{5.3} \]
\[ \tau(g_t\,dc_t\,h_t) = \tau(g_t\,dc_t^*\,h_t) = 0. \tag{5.4} \]
In addition, we have the following Itô product rule: if $a^1_t,\dots,a^n_t$ are processes adapted to $c_t$, then
\[ d(a^1_t\cdots a^n_t) = \sum_{j=1}^n a^1_t\cdots da^j_t\cdots a^n_t \tag{5.5} \]
\[ \phantom{d(a^1_t\cdots a^n_t) = } + \sum_{1\le j<k\le n} a^1_t\cdots da^j_t\cdots da^k_t\cdots a^n_t. \tag{5.6} \]
We let $b_t$ be the free multiplicative Brownian motion, which satisfies the free stochastic differential equation
\[ db_t = b_t\,dc_t, \qquad b_0 = 1. \]
Throughout the rest of this section, we will use the notation $b_{t,\lambda} := b_t - \lambda$.
Note that the second factor on the right-hand side of (5.7) has $(b_{t,\lambda}b^*_{t,\lambda})^{n-j}$, with the adjoint on the second factor.
by moving the d inside the trace and then applying the product rule in (5.5) and (5.6). By (5.4), the terms arising from (5.5) will not contribute. Furthermore, by (5.2), the only terms from (5.6) that contribute are those where one d goes on a factor of b t,λ and one goes on a factor of b * t,λ .
By choosing all possible factors of $b_{t,\lambda}$ and all possible factors of $b^*_{t,\lambda}$, we get $n^2$ terms. In each term, after putting the $d$ inside the trace, we can cyclically permute the factors until, say, the $db_{t,\lambda}$ factor is at the end. There are then only $n$ distinct terms that occur, each of which occurs $n$ times. By (5.1), each distinct term may be computed. (The reader who doubts the validity of using the cyclic invariance of the trace when some factors are differentials may compute each term by first using (5.1) and then using the cyclic invariance of the trace, with the same result.) Since each distinct term occurs $n$ times, we obtain (5.8). Of course, since $b_t = b_{t,\lambda}+\lambda$, we can rewrite (5.8) in a way that involves only $b_{t,\lambda}$ and not $b_t$.
Proof. We note that the definition of $S$ actually makes sense for all $\varepsilon\in\mathbb{C}$ with $\operatorname{Re}(\varepsilon)>0$, using the standard branch of the logarithm function. We note that for $|\varepsilon|>|z|$, we have
\[ \frac{1}{z+\varepsilon} = \sum_{n=0}^\infty\frac{(-1)^n z^n}{\varepsilon^{n+1}}. \tag{5.9} \]
Integrating with respect to $z$ gives
\[ \log(z+\varepsilon) = \log\varepsilon + \sum_{n=1}^\infty\frac{(-1)^{n-1}z^n}{n\,\varepsilon^n}, \]
so that
\[ S(t,\lambda,\varepsilon) = \log\varepsilon + \sum_{n=1}^\infty\frac{(-1)^{n-1}}{n\,\varepsilon^n}\,\tau\!\left[\left(b^*_{t,\lambda}b_{t,\lambda}\right)^n\right] \tag{5.10} \]
for $|\varepsilon|$ sufficiently large. Assume for the moment that it is permissible to differentiate (5.10) term by term with respect to $t$. Then by Lemma 5.1, we obtain (5.11). Now, by [6, Proposition 3.2.3], the map $t\mapsto b_t$ is continuous in the operator norm topology; in particular, $b_t$ is a locally bounded function of $t$. From this observation, it is easy to see that the right-hand side of (5.11) converges locally uniformly in $t$. Thus, a standard result about interchange of limit and derivative (e.g., Theorem 7.17 in [44]) shows that the term-by-term differentiation is valid.

Now, in (5.11), we let $k=j$ and $l=n-j-1$, so that $n=k+l+1$. Then $k$ and $l$ run from 0 to $\infty$. (We may check that the power of $\varepsilon$ in the denominator is $k+l+1=n$ and that the power of $-1$ is $k+l=n-1$.) Thus, moving the sums inside the traces and using (5.9), we obtain the claimed form of $\partial S/\partial t$. We have now established the claimed formula for $\partial S/\partial t$ for $\varepsilon$ in the right half-plane, provided $|\varepsilon|$ is sufficiently large, depending on $t$ and $\lambda$. Since, also, $S(0,\lambda,\varepsilon)=\log(|\lambda-1|^2+\varepsilon)$, we have, for sufficiently large $|\varepsilon|$, the integrated identity (5.12). We now claim that both sides of (5.12) are well-defined, holomorphic functions of $\varepsilon$, for $\varepsilon$ in the right half-plane. This claim is easily established from the standard power-series representation of the inverse and a similar power-series representation of the logarithm. Thus, (5.12) actually holds for all $\varepsilon$ in the right half-plane. Differentiating with respect to $t$ then establishes the claimed formula (5.8) for $dS/dt$ for all $\varepsilon$ in the right half-plane.
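The scalar expansions underlying this proof can be checked numerically. The following sketch verifies the geometric series for $1/(z+\varepsilon)$ and the integrated series for $\log(z+\varepsilon)$ when $|\varepsilon|>|z|$:

```python
import cmath

# For |eps| > |z|:
#   1/(z + eps) = sum_{n>=0} (-1)^n z^n / eps^(n+1)
# and, integrating in z,
#   log(z + eps) = log(eps) + sum_{n>=1} (-1)^(n-1) z^n / (n eps^n).
z, eps = 0.4 + 0.3j, 2.0 + 1.0j

inv = sum((-1) ** n * z ** n / eps ** (n + 1) for n in range(200))
assert abs(inv - 1.0 / (z + eps)) < 1e-12

log_series = cmath.log(eps) + sum(
    (-1) ** (n - 1) * z ** n / (n * eps ** n) for n in range(1, 200)
)
assert abs(log_series - cmath.log(z + eps)) < 1e-12
```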
Lemma 5.3. We have the following formulas for the derivatives of $S$ with respect to $\varepsilon$ and $\lambda=a+ib$:
\[ \frac{\partial S}{\partial\varepsilon} = \tau\!\left[\left(b^*_{t,\lambda}b_{t,\lambda}+\varepsilon\right)^{-1}\right], \]
\[ \frac{\partial S}{\partial a} = -\tau\!\left[\left(b^*_{t,\lambda}b_{t,\lambda}+\varepsilon\right)^{-1}\left(b_{t,\lambda}+b^*_{t,\lambda}\right)\right], \]
\[ \frac{\partial S}{\partial b} = i\,\tau\!\left[\left(b^*_{t,\lambda}b_{t,\lambda}+\varepsilon\right)^{-1}\left(b_{t,\lambda}-b^*_{t,\lambda}\right)\right]. \]

Proof. We use the formula for the derivative of the trace of a logarithm:
\[ \frac{d}{du}\,\tau[\log(f(u))] = \tau\!\left[f(u)^{-1}\frac{df}{du}\right]. \]
(We emphasize that there is no such simple formula for the derivative of $\log(f(u))$ without the trace, unless $df/du$ commutes with $f(u)$.) The lemma easily follows from this formula.
We are now ready for the verification of the differential equation for S.
Proof of Theorem 2.8. We note that
\[ b_{t,\lambda}\left(b^*_{t,\lambda}b_{t,\lambda}+\varepsilon\right) = \left(b_{t,\lambda}b^*_{t,\lambda}+\varepsilon\right)b_{t,\lambda}. \]
Multiplying by $(b^*_{t,\lambda}b_{t,\lambda}+\varepsilon)^{-1}$ on the right and $(b_{t,\lambda}b^*_{t,\lambda}+\varepsilon)^{-1}$ on the left gives a useful identity:
\[ \left(b_{t,\lambda}b^*_{t,\lambda}+\varepsilon\right)^{-1}b_{t,\lambda} = b_{t,\lambda}\left(b^*_{t,\lambda}b_{t,\lambda}+\varepsilon\right)^{-1}. \tag{5.13} \]
Replacing $b_{t,\lambda}$ by its adjoint gives another version of the identity:
\[ \left(b^*_{t,\lambda}b_{t,\lambda}+\varepsilon\right)^{-1}b^*_{t,\lambda} = b^*_{t,\lambda}\left(b_{t,\lambda}b^*_{t,\lambda}+\varepsilon\right)^{-1}. \tag{5.14} \]
Note that in both (5.13) and (5.14), both sides have the same pattern of starred and unstarred variables, always with the two outer variables being the same and the middle one being different.
We also claim that
\[ \tau\!\left[\left(b^*_{t,\lambda}b_{t,\lambda}+\varepsilon\right)^{-1}\right] = \tau\!\left[\left(b_{t,\lambda}b^*_{t,\lambda}+\varepsilon\right)^{-1}\right]. \tag{5.15} \]
To verify this identity for large $\varepsilon$, we replace $z$ by $b^*_{t,\lambda}b_{t,\lambda}$ in the series (5.9) and note that, by the cyclic invariance of the trace,
\[ \tau\!\left[\left(b^*_{t,\lambda}b_{t,\lambda}\right)^n\right] = \tau\!\left[\left(b_{t,\lambda}b^*_{t,\lambda}\right)^n\right]. \]
The result for general $\varepsilon$ follows by an analyticity argument as in the proof of Lemma 5.2.
We start from the formula for $\partial S/\partial t$ in Lemma 5.2. Noting that $b_t = b_{t,\lambda}+\lambda$, we expand the second factor on the right-hand side of (5.8) into the four terms in (5.16). We then simplify the first term using (5.15). In the middle two terms, we use (5.13), (5.14), and the cyclic invariance of the trace. Using also (5.15), we get (5.17), combining all the terms in (5.16). All terms on the right-hand side of (5.17) are expressible, using Lemma 5.3, in terms of derivatives of $S$, and the claimed differential equation follows.

6. The Hamilton-Jacobi method

6.1. Setting up the method. The equation (2.14) is a first-order, nonlinear PDE of Hamilton-Jacobi type. (The reader may consult, for example, Section 3.3 in the book of Evans [17], but we will give a brief self-contained account of the theory in the proof of Proposition 6.3.) We consider a Hamiltonian function obtained from the right-hand side of (2.14) by replacing each partial derivative with a momentum variable, with an overall minus sign. Thus, we define
\[ H(a,b,\varepsilon,p_a,p_b,p_\varepsilon) = -\varepsilon p_\varepsilon\left(1+(a^2+b^2)p_\varepsilon-\varepsilon p_\varepsilon-ap_a-bp_b\right). \tag{6.1} \]
We then consider Hamilton's equations for this Hamiltonian. That is to say, we consider the following system of six coupled ODEs:
\[ \frac{da}{dt}=\frac{\partial H}{\partial p_a},\quad \frac{db}{dt}=\frac{\partial H}{\partial p_b},\quad \frac{d\varepsilon}{dt}=\frac{\partial H}{\partial p_\varepsilon},\quad \frac{dp_a}{dt}=-\frac{\partial H}{\partial a},\quad \frac{dp_b}{dt}=-\frac{\partial H}{\partial b},\quad \frac{dp_\varepsilon}{dt}=-\frac{\partial H}{\partial\varepsilon}. \tag{6.2} \]
As convenient, we will let
\[ \lambda(t) = a(t)+ib(t). \]
The initial conditions for $a$, $b$, and $\varepsilon$ are arbitrary:
\[ a(0)=a_0,\qquad b(0)=b_0,\qquad \varepsilon(0)=\varepsilon_0, \tag{6.3} \]
while those for $p_a$, $p_b$, and $p_\varepsilon$ are determined by those for $a$, $b$, and $\varepsilon$ as follows:
\[ p_a(0)=2(a_0-1)p_0,\qquad p_b(0)=2b_0p_0,\qquad p_\varepsilon(0)=p_0, \tag{6.4} \]
where
\[ p_0 = \frac{1}{|\lambda_0-1|^2+\varepsilon_0}. \tag{6.5} \]
The motivation for (6.4) is that the momentum variables $p_a$, $p_b$, and $p_\varepsilon$ will correspond to the derivatives of $S$ along the curves $(a(t),b(t),\varepsilon(t))$; see (6.8). Thus, the initial momenta are simply the derivatives of the initial value (2.15) of $S$, evaluated at $(a_0,b_0,\varepsilon_0)$. For future reference, we record the value $H_0$ of the Hamiltonian at time $t=0$.

Lemma 6.1. We have
\[ H_0 = -\varepsilon_0\,p_0^2. \tag{6.6} \]
The main result of this section is the following; the proof is given on p. 33.

Theorem 6.2. Assume $\lambda_0\ne 0$ and $\varepsilon_0>0$. Suppose a solution to the system (6.2) with initial conditions (6.3) and (6.4) exists with $\varepsilon(t)>0$ for $0\le t<T$. Then we have
\[ S(t,\lambda(t),\varepsilon(t)) = S(0,\lambda_0,\varepsilon_0) + \int_0^t p(s)\cdot\frac{\partial H}{\partial p}(x(s),p(s))\,ds + \varepsilon_0p_0^2\,t \tag{6.7} \]
for all $t\in[0,T)$, where $x(s)=(a(s),b(s),\varepsilon(s))$ and $p(s)=(p_a(s),p_b(s),p_\varepsilon(s))$. Furthermore, the derivatives of $S$ with respect to $a$, $b$, and $\varepsilon$ satisfy
\[ \frac{\partial S}{\partial a}(t,\lambda(t),\varepsilon(t))=p_a(t),\qquad \frac{\partial S}{\partial b}(t,\lambda(t),\varepsilon(t))=p_b(t),\qquad \frac{\partial S}{\partial\varepsilon}(t,\lambda(t),\varepsilon(t))=p_\varepsilon(t). \tag{6.8} \]
Note that $S(t,\lambda,\varepsilon)$ is only defined for $\varepsilon>0$. Thus, (6.7) and (6.8) only make sense as long as the solution to (6.2) exists with $\varepsilon(t)>0$.
Since our objective is to compute $\Delta s_t(\lambda)=\partial^2s_t/\partial a^2+\partial^2s_t/\partial b^2$, the formula (6.8) for the derivatives of $S$ will ultimately be of as great importance as the formula (6.7) for $S$ itself. We emphasize that we are not using the Hamilton-Jacobi method to construct a solution to (2.14); the function $S(t,\lambda,\varepsilon)$ is already defined in (2.12) in terms of free probability and is known (Theorem 2.8) to satisfy (2.14). Rather, we are using the Hamilton-Jacobi method to analyze a solution that is already known to exist.
We begin by briefly recapping the general form of the Hamilton-Jacobi method.

Proposition 6.3. Let $U\subset\mathbb{R}^n$ be open, and suppose $S(t,x)$ is a smooth function on $[0,T]\times U$ satisfying a PDE of Hamilton-Jacobi form,
\[ \frac{\partial S}{\partial t}(t,x) = -H(x,\nabla_xS(t,x)), \tag{6.9} \]
for some smooth Hamiltonian $H(x,p)$. Consider a pair $(x(t),p(t))$ with $x(t)\in U$, $p(t)\in\mathbb{R}^n$, and $t\in[0,T_1]$ with $T_1\le T$. Assume this pair satisfies Hamilton's equations,
\[ \frac{dx}{dt}=\frac{\partial H}{\partial p}(x(t),p(t)),\qquad \frac{dp}{dt}=-\frac{\partial H}{\partial x}(x(t),p(t)), \]
with initial conditions
\[ x(0)=x_0,\qquad p(0)=p_0=(\nabla_xS)(0,x_0). \tag{6.10} \]
Then we have
\[ S(t,x(t)) = S(0,x_0) + \int_0^t p(s)\cdot\frac{\partial H}{\partial p}(x(s),p(s))\,ds - H(x_0,p_0)\,t \tag{6.11} \]
and
\[ (\nabla_xS)(t,x(t)) = p(t). \tag{6.12} \]
Again, we are not trying to construct solutions to (6.9), but rather to analyze a solution that is already assumed to exist.
Proof. Take an arbitrary (for the moment) smooth curve $x(t)$ and note that
\[ \frac{d}{dt}S(t,x(t)) = \frac{\partial S}{\partial t}(t,x(t)) + \frac{\partial S}{\partial x_j}(t,x(t))\,\frac{dx_j}{dt}, \tag{6.13} \]
where we use the Einstein summation convention. Let us use the notation
\[ p_j(t) = \frac{\partial S}{\partial x_j}(t,x(t)). \]
Then (6.13) may be rewritten as
\[ \frac{d}{dt}S(t,x(t)) = -H(x(t),p(t)) + p(t)\cdot\frac{dx}{dt}. \tag{6.14} \]
If we can choose $x(t)$ so that $p(t)$ is somehow computable, then the right-hand side of (6.14) would be known and we could integrate to get $S(t,x(t))$.
To see how we might be able to compute $p(t)$, we try differentiating:
\[ \frac{dp_j}{dt} = \frac{\partial^2S}{\partial t\,\partial x_j}(t,x(t)) + \frac{\partial^2S}{\partial x_k\,\partial x_j}(t,x(t))\,\frac{dx_k}{dt}. \tag{6.15} \]
Now, from (6.9), we have
\[ \frac{\partial^2S}{\partial t\,\partial x_j} = -\frac{\partial H}{\partial x_j}(x,\nabla_xS) - \frac{\partial H}{\partial p_k}(x,\nabla_xS)\,\frac{\partial^2S}{\partial x_j\,\partial x_k}. \]
Thus, (6.15) becomes (suppressing the dependence on the path)
\[ \frac{dp_j}{dt} = -\frac{\partial H}{\partial x_j} + \frac{\partial^2S}{\partial x_j\,\partial x_k}\left(\frac{dx_k}{dt}-\frac{\partial H}{\partial p_k}\right). \tag{6.16} \]
If we now take $x(t)$ to satisfy
\[ \frac{dx}{dt} = \frac{\partial H}{\partial p}(x(t),p(t)), \tag{6.17} \]
the second term on the right-hand side of (6.16) vanishes, and we find that $p(t)$ satisfies
\[ \frac{dp}{dt} = -\frac{\partial H}{\partial x}(x(t),p(t)). \tag{6.18} \]
With this choice of $x(t)$, (6.14) becomes
\[ \frac{d}{dt}S(t,x(t)) = p(t)\cdot\frac{\partial H}{\partial p}(x(t),p(t)) - H(x_0,p_0), \]
because $H$ is constant along the solutions to Hamilton's equations.
Note that not all solutions (x(t), p(t)) to Hamilton's equations (6.17) and (6.18) will arise by the above method. After all, we are assuming that p(t) = (∇ x S)(t, x(t)), from which it follows that the initial conditions (x 0 , p 0 ) will be of the form in (6.10).
On the other hand, suppose we take a pair $(x_0,p_0)$ as in (6.10). Let us then take $x(t)$ to be the solution to
\[ \frac{dx}{dt} = \frac{\partial H}{\partial p}\big(x(t),(\nabla_xS)(t,x(t))\big), \tag{6.20} \]
where, since $S$ is a fixed, "known" function, this ODE for $x(t)$ will have unique solutions for as long as they exist. If we set $p(t)=(\nabla_xS)(t,x(t))$, then $p(0)=p_0$ as in (6.10), and (6.20) says that the pair $(x(t),p(t))$ satisfies the first of Hamilton's equations. Applying (6.16) with this choice of $x(t)$ shows that the pair $(x(t),p(t))$ also satisfies the second of Hamilton's equations. Thus, $(x(t),p(t))$ must be the unique solution to Hamilton's equations with the given initial conditions $(x_0,p_0)$. We conclude that for any solution to Hamilton's equations with initial conditions of the form (6.10), the formula (6.14) holds. Since, also, $H$ is constant along solutions to Hamilton's equations, we may replace $H(x(t),p(t))$ by $H(x_0,p_0)$ in (6.14), at which point integration with respect to $t$ gives (6.11). Finally, (6.12) holds by the definition of $p(t)$.
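The mechanics of the proposition can be illustrated on a toy Hamiltonian (our choice for illustration, not the Hamiltonian of this paper): with $n=1$, $H(x,p)=p^2/2$, and $S(0,x)=x^2$, the PDE $\partial S/\partial t=-H(x,\partial S/\partial x)$ is solved exactly by $S(t,x)=x^2/(1+2t)$, and the formulas (6.11) and (6.12) can be checked along the characteristics:

```python
# Toy check of (6.11)-(6.12): H(x, p) = p^2/2, S(0, x) = x^2.
# Hamilton's equations with p0 = S_x(0, x0) = 2*x0 give
#   x(t) = x0*(1 + 2t),  p(t) = 2*x0   (p is constant since H has no x).
x0, t = 0.7, 1.3
p0 = 2 * x0
x_t, p_t = x0 * (1 + 2 * t), p0

S = lambda time, x: x ** 2 / (1 + 2 * time)   # exact PDE solution
H = lambda p: p ** 2 / 2

# (6.11): S(t, x(t)) = S(0, x0) + int_0^t p * dH/dp ds - H(x0, p0)*t;
# here p * dH/dp = p0^2 is constant, so the integral equals p0^2 * t.
lhs = S(t, x_t)
rhs = S(0.0, x0) + p0 ** 2 * t - H(p0) * t
assert abs(lhs - rhs) < 1e-12

# (6.12): the gradient of S along the curve equals the momentum.
dSdx = 2 * x_t / (1 + 2 * t)
assert abs(dSdx - p_t) < 1e-12
```

One checks directly that $S(t,x)=x^2/(1+2t)$ satisfies $\partial S/\partial t=-(\partial S/\partial x)^2/2$, so this is a genuine instance of (6.9).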
We are now ready for the proof of Theorem 6.2.
Proof of Theorem 6.2. We apply Proposition 6.3 with n = 3 and the open set U consisting of triples (a, b, ε) with ε > 0. The PDE (2.14) is of the type in (6.9), with H given by (6.1). The initial conditions (6.4) are obtained by differentiating the initial condition S(0, λ, ε) = log(|λ − 1| 2 + ε). We let x(t) = (a(t), b(t), ε(t)) and p(t) = (p a (t), p b (t), p ε (t)). For the case of the Hamiltonian (6.1), a simple computation shows that Thus, the general formula (6.11) becomes, in this case, But we also may compute that If we now plug in the value of S(0, x 0 ) = S(0, λ 0 , ε 0 ) and use Lemma 6.1 along with the definition (6.5) of p 0 , we obtain (6.7). Finally, (6.8) is just the general formula (6.12), applied to the case at hand.

6.2. Constants of motion.
We now identify several constants of motion for the system (6.2), from which various useful formulas can be derived. Throughout the section, we assume we have a solution to (6.2) with the initial conditions (6.3) and (6.4), defined on a time-interval of the form 0 ≤ t < T. We continue the notation λ(t) = a(t) + ib(t).
Proof. For any system of the form (6.2), the Hamiltonian $H$ itself is a constant of motion, as may be verified easily from the equations. The conservation of the angular momentum $ap_b-bp_a$ is a consequence of the invariance of $H$ under simultaneous rotations of $(a,b)$ and $(p_a,p_b)$; see Proposition 2.30 and Conclusion 2.31 in [31]. This result can also be verified by direct computation from (6.2). Finally, note from (6.22) that if $\lambda_0\ne 0$, then $\log|\lambda(t)|$ remains finite as long as the solution to (6.2) exists, so that $\lambda(t)$ cannot pass through the origin. We then compute that
\[ \frac{d}{dt}\tan(\arg\lambda(t)) = \frac{d}{dt}\frac{b(t)}{a(t)} = 0. \]
(If $a=0$, we instead compute the time-derivative of $\cot(\arg\lambda)$, which also equals zero.)

Proposition 6.5. The Hamiltonian $H$ in (6.1) is invariant under the family of transformations
\[ (a,b,\varepsilon,p_a,p_b,p_\varepsilon) \mapsto \left(e^{\sigma/2}a,\ e^{\sigma/2}b,\ e^{\sigma}\varepsilon,\ e^{-\sigma/2}p_a,\ e^{-\sigma/2}p_b,\ e^{-\sigma}p_\varepsilon\right), \tag{6.23} \]
with $\sigma$ varying over $\mathbb{R}$. Thus, the generator of this family of transformations, namely,
\[ \Psi = \frac{1}{2}\left(ap_a+bp_b\right)+\varepsilon p_\varepsilon, \tag{6.24} \]
is a constant of motion for the system (6.2). The constant $\Psi$ may be computed in terms of $\varepsilon_0$ and $\lambda_0$ as
\[ \Psi = p_0\left(|\lambda_0|^2-\operatorname{Re}\lambda_0+\varepsilon_0\right), \tag{6.25} \]
where $p_0$ is as in (6.5).
Proof. The claimed invariance of H is easily checked from the formula (6.1). One can easily check that Ψ is the generator of this family. That is to say, if we replace H by Ψ in (6.2), the solution is given by the map in (6.23). Thus, by a simple general result, Ψ will be a constant of motion; see Conclusion 2.31 in [31]. Of course, one can also check by direct computation that the function in (6.24) is constant along solutions to (6.2). The expression (6.25) then follows easily from the initial conditions in (6.4).
Proposition 6.6. For all $t$, we have
\[ \varepsilon(t)\,p_\varepsilon(t)^2 = \varepsilon_0\,p_0^2\,e^{-Ct}, \tag{6.26} \]
where $C=2\Psi-1$ and $\Psi$ is as in (6.24). The constant $C$ in (6.26) may be computed in terms of $\varepsilon_0$ and $\lambda_0$ as
\[ C = \frac{|\lambda_0|^2+\varepsilon_0-1}{|\lambda_0-1|^2+\varepsilon_0}. \tag{6.27} \]

Proof. We compute that
\[ \dot p_\varepsilon = -\frac{H}{\varepsilon}-\varepsilon p_\varepsilon^2, \tag{6.28} \]
and then that
\[ \frac{d}{dt}\left(\varepsilon p_\varepsilon^2\right) = (1-2\Psi)\,\varepsilon p_\varepsilon^2 = -C\,\varepsilon p_\varepsilon^2. \]
The unique solution to this equation is (6.26). The expression (6.27) is obtained by evaluating $\Psi$ at $t=0$, using the initial conditions (6.4), and simplifying.
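The constants of motion can be tested numerically by integrating Hamilton's equations. The sketch below assumes the Hamiltonian $H=-\varepsilon p_\varepsilon(1+(a^2+b^2)p_\varepsilon-\varepsilon p_\varepsilon-ap_a-bp_b)$ (our reading of (6.1), consistent with the rewritten form (6.31)) and checks that $H$, $\Psi$, and $\varepsilon p_\varepsilon^2e^{Ct}$ stay constant along a Runge-Kutta trajectory:

```python
import math

def H(s):
    a, b, e, pa, pb, pe = s
    return -e * pe * (1 + (a*a + b*b) * pe - e * pe - a * pa - b * pb)

def rhs(s):
    a, b, e, pa, pb, pe = s
    B = 1 + (a*a + b*b) * pe - e * pe - a * pa - b * pb
    return [a * e * pe,                        # da/dt  =  dH/dp_a
            b * e * pe,                        # db/dt  =  dH/dp_b
            -e * B - e * pe * ((a*a + b*b) - e),  # de/dt = dH/dp_eps
            e * pe * (2 * a * pe - pa),        # dp_a/dt = -dH/da
            e * pe * (2 * b * pe - pb),        # dp_b/dt = -dH/db
            pe * B - e * pe * pe]              # dp_eps/dt = -dH/deps

def rk4_step(s, h):
    add = lambda u, v, c: [x + c * y for x, y in zip(u, v)]
    k1 = rhs(s); k2 = rhs(add(s, k1, h/2)); k3 = rhs(add(s, k2, h/2)); k4 = rhs(add(s, k3, h))
    return [x + (h/6) * (u + 2*v + 2*w + z) for x, u, v, w, z in zip(s, k1, k2, k3, k4)]

# initial conditions (6.3)-(6.5) for lambda_0 = 0.5 + 0.3i, eps_0 = 1
a0, b0, e0 = 0.5, 0.3, 1.0
p0 = 1.0 / ((a0 - 1)**2 + b0**2 + e0)
s = [a0, b0, e0, 2 * (a0 - 1) * p0, 2 * b0 * p0, p0]
Psi0 = 0.5 * (s[0]*s[3] + s[1]*s[4]) + s[2]*s[5]
C = 2 * Psi0 - 1

assert abs(H(s) + e0 * p0 * p0) < 1e-12    # Lemma 6.1: H_0 = -eps_0 p_0^2
t, h = 0.0, 1e-3
for _ in range(300):
    s = rk4_step(s, h); t += h
a, b, e, pa, pb, pe = s
assert abs(H(s) + e0 * p0 * p0) < 1e-9                      # H conserved
assert abs(0.5 * (a*pa + b*pb) + e*pe - Psi0) < 1e-9        # Psi conserved
assert abs(e * pe * pe - e0 * p0 * p0 * math.exp(-C * t)) < 1e-9  # Prop. 6.6
```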
We now make an important application of the preceding results.

Theorem 6.7. Suppose a solution to (6.2) exists with $\varepsilon(t)>0$ for $0\le t<t_*$, but that $\lim_{t\to t_*}\varepsilon(t)=0$. Then
\[ \lim_{t\to t_*}\varepsilon(t)\,p_\varepsilon(t)^2 = \frac{\varepsilon_0\,p_0^2}{|\lambda(t_*)|^2} \tag{6.29} \]
and
\[ \lim_{t\to t_*}\big(a(t)p_a(t)+b(t)p_b(t)\big) = 2\Psi. \tag{6.30} \]

Equation (6.30) is a key step in the derivation of our main result; see Section 7.1. We will write (6.29) in a more explicit way in Proposition 6.12, after the time $t_*$ has been determined. We note also from Proposition 6.6 that since $\varepsilon(t)$ approaches zero as $t$ approaches $t_*$, the momentum $p_\varepsilon(t)$ must be blowing up, so that $\varepsilon(t)p_\varepsilon(t)^2$ can remain positive in this limit.
Proof. Using the constant of motion $\Psi$ in (6.24), we can rewrite the Hamiltonian $H$ as
\[ H = -\varepsilon p_\varepsilon\left(1+(a^2+b^2)p_\varepsilon-2\Psi+\varepsilon p_\varepsilon\right). \tag{6.31} \]
Now, by assumption, the variable $\varepsilon$ approaches zero as $t$ approaches $t_*$. Furthermore, by Proposition 6.6, $\varepsilon p_\varepsilon^2$ remains finite in this limit, so that $\varepsilon p_\varepsilon=\sqrt{\varepsilon}\sqrt{\varepsilon p_\varepsilon^2}$ tends to zero. Thus, in the $t\to t_*$ limit, the $\varepsilon p_\varepsilon$ terms in (6.31) vanish while $\varepsilon p_\varepsilon^2$ remains finite, leaving us with
\[ \lim_{t\to t_*}H = -|\lambda(t_*)|^2\lim_{t\to t_*}\varepsilon(t)\,p_\varepsilon(t)^2. \]
Since $H$ is a constant of motion, we may write this result as
\[ \lim_{t\to t_*}\varepsilon(t)\,p_\varepsilon(t)^2 = -\frac{H}{|\lambda(t_*)|^2} = -\frac{H_0}{|\lambda(t_*)|^2} = \frac{\varepsilon_0\,p_0^2}{|\lambda(t_*)|^2}, \]
where we have used Lemma 6.1 in the last equality. The formula (6.29) follows. Meanwhile, as $t$ approaches $t_*$, the $\varepsilon p_\varepsilon$ term in the formula (6.24) for $\Psi$ vanishes, since by (6.29) the quantity $\varepsilon p_\varepsilon^2$ remains bounded, and we find that
\[ \lim_{t\to t_*}\tfrac{1}{2}\big(a(t)p_a(t)+b(t)p_b(t)\big) = \Psi, \]
as claimed in (6.30).
6.3. Solving the equations. We now solve the system (6.2) subject to the initial conditions (6.3) and (6.4). The formula in Proposition 6.6 for $\varepsilon(t)p_\varepsilon(t)^2$ will be a key tool. Although we are mainly interested in the case $\varepsilon_0>0$, we will need in Section 7.2 to allow $\varepsilon_0$ to be slightly negative. We begin with the following elementary lemma.

Lemma 6.8. Fix $y_0>0$ and let $a\in\mathbb{C}$ be such that $a^2$ is real. Then the solution to the ODE
\[ \frac{dy}{dt} = y^2-a^2,\qquad y(0)=y_0, \tag{6.33} \]
is given by
\[ y(t) = \frac{y_0\cosh(at)-a\sinh(at)}{\cosh(at)-y_0\,\dfrac{\sinh(at)}{a}}. \tag{6.34} \]
If $a^2\ge y_0^2>0$, the solution exists for all $t>0$, while if $a^2<y_0^2$, the solution blows up at the time
\[ t_* = \frac{1}{a}\tanh^{-1}\left(\frac{a}{y_0}\right) \tag{6.35} \]
\[ \phantom{t_*} = \frac{1}{2a}\log\left(\frac{y_0+a}{y_0-a}\right). \tag{6.36} \]
Here, we use the principal branch of the inverse hyperbolic tangent, with branch cuts $(-\infty,-1]$ and $[1,\infty)$ on the real axis, which corresponds to using the principal branch of the logarithm. When $a=0$, we interpret the right-hand side of (6.35) or (6.36) as having its limiting value as $a$ approaches zero, namely $1/y_0$.
In passing from (6.35) to (6.36), we have used the standard formula for the inverse hyperbolic tangent,
\[ \tanh^{-1}(z) = \frac{1}{2}\log\frac{1+z}{1-z}. \tag{6.37} \]
In (6.34), we interpret $\sinh(at)/a$ as having the value $t$ when $a=0$. If $a^2<0$, so that $a$ is pure imaginary, one can rewrite the solution in terms of ordinary trigonometric functions, using the identities $\cosh(i\alpha)=\cos\alpha$ and $\sinh(i\alpha)=i\sin\alpha$. For each fixed $t$, the solution is an even analytic function of $a$ and is therefore an analytic function of $a^2$.
Proof. If $a$ is nonzero and real, we may integrate (6.33) by partial fractions to obtain
\[ \frac{1}{2a}\log\left(\frac{y(t)-a}{y(t)+a}\right) - \frac{1}{2a}\log\left(\frac{y_0-a}{y_0+a}\right) = t. \]
It is then straightforward to solve for $y(t)$ and simplify to obtain (6.34). Similar computations give the result when $a$ is zero (recalling that we interpret $\sinh(at)/a$ as equaling $t$ when $a=0$) and when $a$ is nonzero and pure imaginary. Alternatively, one may check by direct computation that the function on the right-hand side of (6.34) satisfies the equation (6.33) for all $a\in\mathbb{C}$.

Now, if $a^2\ge y_0^2>0$, the denominator in (6.34) is easily seen to be nonzero for all $t$ and there is no singularity. If $a^2$ is positive but less than $y_0^2$, the denominator remains positive until it becomes zero, when $\tanh(at)=a/y_0$. If $a^2$ is negative, so that $a=i\alpha$ for some nonzero $\alpha\in\mathbb{R}$, we write the solution using ordinary trigonometric functions as
\[ y(t) = \frac{y_0\cos(\alpha t)+\alpha\sin(\alpha t)}{\cos(\alpha t)-\dfrac{y_0}{\alpha}\sin(\alpha t)}. \tag{6.38} \]
The denominator in (6.38) becomes zero at $\alpha t=\tan^{-1}(\alpha/y_0)<\pi/2$. Finally, if $a^2=0$, the solution is $y(t)=y_0/(1-y_0t)$, which blows up at $t=1/y_0$. It is then not hard to check that for all cases with $a^2<y_0^2$, the blow-up time can be computed as $t_*=\frac{1}{a}\tanh^{-1}(a/y_0)$, where we use the principal branch of the inverse hyperbolic tangent, with branch cuts $(-\infty,-1]$ and $[1,\infty)$ on the real axis. (At $a=0$ we have a removable singularity with a value of $1/y_0$.) This recipe corresponds to using the principal branch of the logarithm in the last expression in (6.36).
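Lemma 6.8 is easy to test numerically. The sketch below integrates $dy/dt=y^2-a^2$ (our reading of (6.33)) with a Runge-Kutta scheme and compares with the closed-form solution and the blow-up time $t_*=(1/a)\tanh^{-1}(a/y_0)$:

```python
import math

def y_closed(t, y0, a):
    # closed-form solution (6.34), for real a (with the a = 0 limit filled in)
    if a == 0.0:
        return y0 / (1.0 - y0 * t)
    return (y0 * math.cosh(a*t) - a * math.sinh(a*t)) / \
           (math.cosh(a*t) - (y0 / a) * math.sinh(a*t))

def y_rk4(t_end, y0, a, n=20000):
    # direct RK4 integration of dy/dt = y^2 - a^2
    f = lambda y: y*y - a*a
    y, h = y0, t_end / n
    for _ in range(n):
        k1 = f(y); k2 = f(y + 0.5*h*k1); k3 = f(y + 0.5*h*k2); k4 = f(y + h*k3)
        y += (h / 6.0) * (k1 + 2*k2 + 2*k3 + k4)
    return y

y0, a = 2.0, 1.0                        # a^2 < y0^2: blow-up case
t_star = (1.0 / a) * math.atanh(a / y0)  # = (1/2a) log((y0+a)/(y0-a))
t = 0.8 * t_star
assert abs(y_rk4(t, y0, a) - y_closed(t, y0, a)) < 1e-6
assert y_closed(0.999 * t_star, y0, a) > 1e2   # large just before blow-up

# a^2 >= y0^2: no blow-up; solution stays bounded on a long interval
assert abs(y_rk4(5.0, 1.0, 2.0) - y_closed(5.0, 1.0, 2.0)) < 1e-6
```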
We now apply Lemma 6.8 to compute the $p_\varepsilon$-component of the solution to (6.2). We use the following notations, some of which have been introduced previously:
\[ p_0 = \frac{1}{|\lambda_0-1|^2+\varepsilon_0}, \tag{6.39} \]
\[ \delta = \frac{1+|\lambda_0|^2+\varepsilon_0}{|\lambda_0|}, \tag{6.40} \]
\[ C = p_0\left(|\lambda_0|^2+\varepsilon_0-1\right), \tag{6.41} \]
\[ y_0 = p_0+\frac{C}{2} = \frac{1}{2}\,p_0\,|\lambda_0|\,\delta. \tag{6.42} \]
We now make the following standing assumptions:
\[ \lambda_0\ne 0,\qquad p_0>0,\qquad \delta>0. \tag{6.44} \]
We note that under these assumptions, $y_0$ is positive. Furthermore, defining
\[ a = \frac{1}{2}\,p_0\,|\lambda_0|\sqrt{\delta^2-4}, \tag{6.45} \]
we may compute that
\[ a^2 = y_0^2-p_0^2\,|\lambda_0|^2,\qquad \frac{a}{y_0} = \frac{\sqrt{\delta^2-4}}{\delta}, \tag{6.46} \]
so that $a^2<y_0^2$. Now, the assumptions $p_0>0$ and $\delta>0$ can be written as $\varepsilon_0>-|\lambda_0-1|^2$ and $\varepsilon_0>-(1+|\lambda_0|^2)$. Thus, for $\lambda_0\ne 0$, the assumptions (6.44) are always satisfied if $\varepsilon_0>0$. Furthermore, except when $\lambda_0=1$, some negative values of $\varepsilon_0$ are allowed.

Proposition 6.9. Under the assumptions (6.44), the $p_\varepsilon$-component of the solution to (6.2) subject to the initial conditions (6.3) and (6.4) is given by
\[ p_\varepsilon(t) = p_0\,e^{-Ct}\,\frac{\cosh(at)+\dfrac{|\lambda_0|^2-\varepsilon_0-1}{|\lambda_0|}\,\dfrac{\sinh(at)}{\sqrt{\delta^2-4}}}{\cosh(at)-\delta\,\dfrac{\sinh(at)}{\sqrt{\delta^2-4}}} \tag{6.47} \]
for as long as the solution to the system (6.2) exists. Here we write $a$ as in (6.45), and we use the same choice of $\sqrt{\delta^2-4}$ in the computation of $a$ as in the two places $\sqrt{\delta^2-4}$ appears explicitly in (6.47). If $\delta=2$, we interpret $\sinh(at)/\sqrt{\delta^2-4}$ as equaling $\frac{1}{2}p_0|\lambda_0|t$. If $\varepsilon_0\ge 0$, the numerator in the fraction on the right-hand side of (6.47) is positive for all $t$. Hence, when $\varepsilon_0\ge 0$, we see that $p_\varepsilon(t)$ is positive for as long as the solution exists, and $1/p_\varepsilon(t)$ extends to a real-analytic function of $t$ defined for all $t\in\mathbb{R}$.
The first time $t_*(\lambda_0,\varepsilon_0)$ at which the expression on the right-hand side of (6.47) blows up is
\[ t_*(\lambda_0,\varepsilon_0) = (\delta-2\cos\theta_0)\,\frac{2}{\sqrt{\delta^2-4}}\,\tanh^{-1}\!\left(\frac{\sqrt{\delta^2-4}}{\delta}\right) \tag{6.48} \]
\[ \phantom{t_*(\lambda_0,\varepsilon_0)} = \frac{\delta-2\cos\theta_0}{\sqrt{\delta^2-4}}\,\log\!\left(\frac{\delta+\sqrt{\delta^2-4}}{\delta-\sqrt{\delta^2-4}}\right), \tag{6.49} \]
where $\theta_0=\arg\lambda_0$ and $\sqrt{\delta^2-4}$ is either of the two square roots of $\delta^2-4$. The principal branch of the inverse hyperbolic tangent should be used in (6.48), with branch cuts $(-\infty,-1]$ and $[1,\infty)$ on the real axis, which corresponds to using the principal branch of the logarithm in (6.49). When $\delta=2$, we interpret $t_*(\lambda_0,\varepsilon_0)$ as having its limiting value as $\delta$ approaches 2, namely $\delta-2\cos\theta_0$.
Note that the expression $\frac{1}{a}\tanh^{-1}\frac{a}{b}$ is an even function of $a$ with $b$ fixed, with a removable singularity at $a=0$. This expression is therefore an analytic function of $a^2$ near the origin. In particular, the value of $t_*(\lambda_0,\varepsilon_0)$ does not depend on the choice of square root of $\delta^2-4$.
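One can also check numerically that the blow-up time (6.49), computed with $\delta=(1+|\lambda_0|^2+\varepsilon_0)/|\lambda_0|$ as above, tends to $T(\lambda_0)$ from (4.1) as $\varepsilon_0\to 0^+$, anticipating Proposition 6.13 (a sketch under our reconstruction of these formulas):

```python
import cmath, math

def T(lam):
    # T(lambda) = |lambda-1|^2 log(|lambda|^2)/(|lambda|^2-1), as in (4.1)
    m = abs(lam) ** 2
    factor = 1.0 if abs(m - 1.0) < 1e-12 else math.log(m) / (m - 1.0)
    return abs(lam - 1.0) ** 2 * factor

def t_star(lam0, eps0):
    # t*(lambda_0, eps_0) per (6.49), with the principal logarithm
    r = abs(lam0)
    delta = (1.0 + r * r + eps0) / r
    theta0 = cmath.phase(lam0)
    s = cmath.sqrt(delta * delta - 4.0)
    if abs(s) < 1e-9:                      # delta = 2: use the limiting value
        return delta - 2.0 * math.cos(theta0)
    val = (delta - 2.0 * math.cos(theta0)) / s * cmath.log((delta + s) / (delta - s))
    return val.real

# as eps_0 -> 0+, t*(lambda_0, eps_0) approaches T(lambda_0)
for lam0 in [0.7 + 0.4j, 1.5 - 0.2j, cmath.exp(0.9j)]:
    assert abs(t_star(lam0, 1e-8) - T(lam0)) < 1e-5

# t* is increasing in eps_0, consistent with the monotonicity of g below
assert t_star(0.7 + 0.4j, 0.5) > t_star(0.7 + 0.4j, 0.1)
```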
Proof of Proposition 6.9. We assume at first that $\varepsilon_0\ne 0$. We recall from Proposition 6.6 that $\varepsilon(t)p_\varepsilon(t)^2$ is equal to $\varepsilon_0p_0^2e^{-Ct}$, which is never zero, since we assume $\varepsilon_0$ is nonzero and $p_0$ is positive. Thus, as long as the solution to the system (6.2) exists, both $\varepsilon(t)$ and $p_\varepsilon(t)$ must be nonzero, and must have the same signs they had at $t=0$. Using (6.28) and the fact that $H$ is a constant of motion, we obtain
\[ \dot p_\varepsilon(t) = \frac{\varepsilon_0p_0^2}{\varepsilon(t)} - \varepsilon(t)p_\varepsilon(t)^2. \]
But $\varepsilon_0p_0^2/\varepsilon(t)=p_\varepsilon(t)^2e^{Ct}$, and we obtain
\[ \dot p_\varepsilon(t) = p_\varepsilon(t)^2e^{Ct} - \varepsilon_0p_0^2e^{-Ct}. \]
Then if $y(t)=e^{Ct}p_\varepsilon(t)+C/2$, we find that $y$ satisfies (6.33). Thus, we obtain $p_\varepsilon(t)=(y(t)-C/2)e^{-Ct}$, where $y(t)$ is as in (6.34), which simplifies to the claimed formula for $p_\varepsilon$. The same formula holds for $\varepsilon_0=0$, by the continuous dependence of the solutions on initial conditions. (It is also possible to solve the system (6.2) with $\varepsilon_0=0$ by postulating that $\varepsilon(t)$ is identically zero and working out the equations for the other variables.)

In this paragraph only, we assume $\varepsilon_0\ge 0$. Then $a^2\ge 0$, with $a=0$ occurring only if $\varepsilon_0=0$ and $|\lambda_0|=1$, so that $\delta=2$. In that case, the numerator on the right-hand side of (6.47) is identically equal to 1. If $a^2>0$, then the numerator will always be positive provided that
\[ a^2 - \left(\frac{C}{2}-\varepsilon_0p_0\right)^2 = \varepsilon_0\,p_0^2\,|\lambda_0|^2 \ge 0, \tag{6.50} \]
and we are assuming $\varepsilon_0\ge 0$. Now, since the numerator in (6.47) is always positive, we conclude that $p_\varepsilon$ remains positive until it blows up.

For any value of $\varepsilon_0$, the blow-up time for the function on the right-hand side of (6.47) is computed by plugging the expression (6.46) for $a/y_0$ into the formula (6.36), giving
\[ t_* = \frac{1}{2a}\log\left(\frac{\delta+\sqrt{\delta^2-4}}{\delta-\sqrt{\delta^2-4}}\right). \]
After computing that
\[ \frac{1}{a} = \frac{2}{p_0|\lambda_0|\sqrt{\delta^2-4}} = \frac{2(\delta-2\cos\theta_0)}{\sqrt{\delta^2-4}}, \]
we obtain the claimed formula (6.48) for $t_*(\lambda_0,\varepsilon_0)$.

Remark 6.10. If $\varepsilon_0<0$, then the numerator on the right-hand side of (6.47) can become zero. The time $\sigma$ at which this happens is computed using (6.45) and (6.50) as
\[ \sigma = \frac{1}{a}\tanh^{-1}\left(\frac{a}{\varepsilon_0p_0-C/2}\right). \]
By considering separately the cases $|\lambda_0|=1$ and $|\lambda_0|\ne 1$, we can verify that $\sigma$ tends to infinity, locally uniformly in $\lambda_0$, as $\varepsilon_0$ tends to zero from below.
Thus, for small negative values of ε 0 , the function on the right-hand side of (6.47) will remain positive until the time t * (λ 0 , ε 0 ) at which it blows up.
We now show that the whole system (6.2) has a solution up to the time at which the function on the right-hand side of (6.47) blows up. Proposition 6.11. Assume that ε 0 and λ 0 satisfy the assumptions (6.44). Assume further that if ε 0 < 0, then |ε 0 | is sufficiently small that p ε remains positive until it blows up, as in Remark 6.10. Then the solution to the system (6.2) exists up to the time t * (λ 0 , ε 0 ) in Proposition 6.9.
Proof. Let $T$ be the maximum time such that the solution to (6.2) exists on $[0,T)$. We now compute formulas for the solution on this interval. Recall from Proposition 6.9 that if $\varepsilon_0\ge 0$, then $p_\varepsilon(t)$ remains positive for as long as the solution exists; by Remark 6.10, the same assertion holds if $\varepsilon_0$ is small and negative. Now, since $\varepsilon p_\varepsilon^2=\varepsilon_0p_0^2e^{-Ct}$, we see that
\[ \varepsilon(t) = \frac{\varepsilon_0\,p_0^2\,e^{-Ct}}{p_\varepsilon(t)^2}. \tag{6.52} \]
Next, since $da/dt=a\varepsilon p_\varepsilon$ and $db/dt=b\varepsilon p_\varepsilon$, we have
\[ \lambda(t) = \lambda_0\exp\left(\int_0^t\varepsilon(s)p_\varepsilon(s)\,ds\right). \tag{6.53} \]
Finally,
\[ \frac{dp_a}{dt} = -\frac{\partial H}{\partial a} = 2a\varepsilon p_\varepsilon^2-\varepsilon p_\varepsilon p_a, \tag{6.54} \]
which is a first-order, linear equation for $p_a$, which can be solved using an integrating factor. A similar calculation applies to $p_b$. Suppose now that the existence time $T$ of the whole system were smaller than the time $t_*(\lambda_0,\varepsilon_0)$ at which the right-hand side of (6.47) blows up. Then from the formulas (6.52), (6.53), and (6.54), we see that all the functions involved would remain bounded up to time $T$. But then, by a standard result, $T$ could not actually be the maximal time. The solution to the system (6.2) must therefore exist all the way up to time $t_*(\lambda_0,\varepsilon_0)$.
Notice that there is a strong similarity between the formula (6.49) for t * (λ 0 , ε 0 ) and the expression on the right-hand side of (6.55).
Proof of Proposition 6.13. In the limit as ε 0 → 0, we have so that In the case |λ 0 | = 1, the limiting value of δ is 2. We then make use of the elementary limit Thus, using (6.49), we obtain in this case, which agrees with the value of T (λ 0 ) when |λ 0 | = 1.
In the case |λ_0| ≠ 1, we note that the quantity (1/b) log((a+b)/(a−b)) is an even function of b with a fixed. We may therefore choose the plus sign on the right-hand side of (6.58), regardless of the sign of |λ_0|² − 1. We then obtain the desired limit using (6.49). A similar calculation, beginning from (6.55), establishes (6.57).
Remark 6.15. If we began with (6.48) instead of (6.49), we would obtain by similar reasoning Using (6.37), this expression is easily seen to agree with T (λ 0 ) but is more transparent in its behavior at |λ 0 | = 1.
Proof. We note that the quantity δ in (6.40) is an increasing function of ε_0 with λ_0 fixed, with δ tending to infinity as ε_0 tends to infinity. We note also that if ε_0 ≥ 0, then δ ≥ 2. It therefore suffices to show that for each angle θ_0, the function g_{θ_0} is a strictly increasing, non-negative, continuous function of δ for δ ≥ 2 that tends to +∞ as δ tends to infinity. Here, when δ = 2, we interpret g_{θ_0}(δ) as having the value 2 − 2 cos θ_0, in accordance with the limit (6.59).
Our definition of g θ0 (δ) for δ = 2, together with (6.59), shows that g θ0 is nonnegative and continuous there. To show that g θ0 is an increasing function of δ, we show that ∂g θ0 /∂δ is positive for δ > 2. The derivative is computed, after simplification, as Since this expression depends linearly on cos θ 0 with δ fixed, if it is positive when cos θ 0 = 1 and also when cos θ 0 = −1, it will be positive always. Thus, it suffices to verify the positivity of the functions Now, (6.62) is clearly positive for all δ > 2. Meanwhile, a computation shows that from which we conclude that (6.63) is also positive for all δ > 2.
6.5. Surjectivity. In Section 7.3, we will compute s_t(λ) := lim_{ε→0+} S(t, λ, ε) for λ in Σ_t. We will do so by evaluating S (and its derivatives) along curves of the form (t, λ(t), ε(t)) and then taking the limit as we approach the time t* when ε(t) becomes zero. For this method to be successful, we need the following result, whose proof appears on p. 45.
We first recall that we have shown (Proposition 6.16) that the lifetime of the path is a strictly increasing function of ε_0 ≥ 0 with λ_0 fixed. If λ_0 is outside Σ_t, then by Theorem 4.1 and Proposition 6.13, the lifetime is at least t, even at ε_0 = 0. (That is to say, T(λ_0) = t*(λ_0, 0) ≥ t for λ_0 outside Σ_t.) Thus, for λ_0 outside Σ_t, the lifetime cannot equal t for ε_0 > 0. On the other hand, if λ_0 ∈ Σ_t, then t*(λ_0, 0) < t and Proposition 6.16 tells us that there is a unique ε_0 > 0 with t*(λ_0, ε_0) = t.
We note that the desired function Λ_0^t in Theorem 6.17 is the inverse function to λ_t and that E_0^t(λ) = ε_0^t(λ_t^{-1}(λ)). Recall from Proposition 6.4 that the argument of λ(t) is constant. By the formula (6.29) in Theorem 6.7 together with the expression (6.27) for the constant C, we can write λ_t explicitly, where we have used that t*(λ_0, ε_0^t(λ_0)) = t. As noted in the proof of Proposition 6.12, this formula can also be written in a second form.

Proof. We start by trying to compute the function ε_0^t, which we will do by finding the correct value of δ and then solving for ε_0^t. Recall that the lifetime t*(λ_0, ε_0) is computed as g_{θ_0}(δ), where δ is as in (6.40) and g_{θ_0} is as in (6.61). As we have computed in (6.60), we have an explicit expression for this lifetime. Assume, then, that the ray with angle θ_0 intersects Σ_t and let r_t(θ_0) be the outer (for definiteness) radius at which this ray intersects the boundary of Σ_t. Then Theorem 4.1 tells us that T(r_t(θ_0)e^{iθ_0}) = t, and we conclude that (6.66) holds. Consider, then, some λ_0 ∈ Σ_t with arg(λ_0) = θ_0. By the formula (6.49), to find ε_0 with t*(λ_0, ε_0) = t, we first find δ so that g_{θ_0}(δ) = t. (Note that the value of δ depends only on the argument of λ_0.) We then adjust ε_0 so that (|λ_0|² + ε_0 + 1)/|λ_0| = δ. Since the correct value of δ is given in (6.66), this determines the choice of ε_0. We can solve this relation for ε_0 to obtain the formula (6.67). Now, we have shown that r_t(θ) is continuous for the full range of angles θ occurring in Σ_t. Since 0 is not in Σ_t, we can then see that the formula (6.67) is well defined and continuous on all of Σ_t. For λ_0 ∈ ∂Σ_t, we have that |λ_0| equals r_t(arg λ_0) or 1/r_t(arg λ_0), so that ε_0^t(λ_0) equals zero. Now, the point 0 is always outside Σ_t, while the point 1 is always in Σ_t and therefore not on the boundary of Σ_t. Thus, since ε_0^t is continuous on Σ_t and zero precisely on the boundary, we see from (6.64) that λ_t is continuous on Σ_t.
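The relation (|λ_0|² + ε_0 + 1)/|λ_0| = δ is trivial to solve for ε_0 once δ is known. Assuming, as the boundary condition ε_0 = 0 suggests, that (6.66) reads δ = (r_t(θ_0)² + 1)/r_t(θ_0), the resulting ε_0 vanishes exactly at |λ_0| = r_t(θ_0) and at |λ_0| = 1/r_t(θ_0), and is positive in between. A quick check with a hypothetical outer radius:

```python
def delta_from_outer_radius(r_boundary):
    """Value of delta forced by the boundary condition eps0 = 0 at r_boundary."""
    return (r_boundary**2 + 1) / r_boundary

def eps0(abs_lam0, r_boundary):
    """Solve (|lam0|^2 + eps0 + 1)/|lam0| = delta for eps0."""
    return delta_from_outer_radius(r_boundary) * abs_lam0 - abs_lam0**2 - 1

r_out = 1.7  # hypothetical outer radius r_t(theta_0) along some ray
print(eps0(r_out, r_out))      # ~ 0 at the outer boundary (up to rounding)
print(eps0(1 / r_out, r_out))  # ~ 0 at the inner boundary (up to rounding)
print(eps0(1.0, r_out))        # positive at an interior point
```

At |λ_0| = 1, the value simplifies to (r − 1)²/r with r = r_t(θ_0), which is visibly positive, consistent with ε_0^t being zero precisely on the boundary.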
Furthermore, on ∂Σ_t, we compute λ_t(λ_0) by putting ε_0^t(λ_0) = 0 in (6.64). Suppose now that λ_0 is in ∂Σ_t. Then ε_0^t(λ_0) = 0 and, by Theorem 4.1, the function T(λ_0) in (4.1) has the value t; it then follows from (6.64) that λ_t(λ_0) = λ_0. Consider an angle θ_0 for which the ray Ray(θ_0) with angle θ_0 intersects Σ_t, and let δ be chosen so that g_{θ_0}(δ) = t, noting again that the value of δ depends only on θ_0 = arg λ_0. We now observe from (6.65) that |λ_t(λ_0)| is a strictly increasing function of |λ_0| with δ fixed. Thus, λ_t is a strictly increasing map of the interval Ray(θ_0) ∩ Σ_t into Ray(θ_0) that fixes the endpoints. Thus, λ_t actually maps this interval bijectively onto itself. Since this holds for all θ_0, we conclude that λ_t maps Σ_t bijectively onto itself. The continuity of the inverse then holds because λ_t is continuous and Σ_t is compact.

7. Letting ε tend to zero

7.1. Outline. Our goal is to compute the Laplacian with respect to λ of the function s_t(λ) := lim_{ε→0+} S(t, λ, ε), using the Hamilton-Jacobi method of Theorem 6.2. We want the curve ε(·) occurring in (6.7) and (6.8) to approach zero at time t; a simple way we might try to accomplish this is to let the initial condition ε_0 approach zero. Suppose, then, that ε_0 is very small. Using various formulas from Section 6.3, we then find that for as long as the solution to the system (6.2) exists, the whole curve ε(·) will be small and the whole curve λ(·) will be approximately constant. Thus, by taking ε_0 ≈ 0 and λ_0 ≈ λ, we obtain a curve with ε(t) ≈ 0 and λ(t) ≈ λ. We may then hope to compute s_t(λ) by letting λ_0 and λ(t) approach λ and ε_0 approach zero in the Hamilton-Jacobi formula (6.7), with the result that
\[ s_{t}(\lambda)=\log(|\lambda-1|^{2}). \qquad (7.1) \]
It is essential to note, however, that this approach is only valid if the solution to the system (6.2) exists up to time t. Corollary 6.14 tells us that for ε_0 ≈ 0, the solution will exist beyond time t provided λ is outside Σ_t. Thus, we expect that for λ outside Σ_t, the function s_t will be given by (7.1) and therefore that ∆s_t will be zero. (The function log(|λ − 1|²) is harmonic except at the point λ = 1, which is always inside Σ_t.) To analyze s_t(λ) for λ inside Σ_t, we first make use of the surjectivity result in Theorem 6.17. The theorem says that for each t > 0 and λ ∈ Σ_t, there exist ε_0 > 0 and λ_0 ∈ Σ_t such that ε(u) approaches 0 and λ(u) approaches λ as u approaches t. We then use the formula (6.30) in Theorem 6.7.
In light of the second Hamilton-Jacobi formula (6.8), we can write (6.30) as the formula (7.2). Once we have established enough regularity in the function S(t, λ, ε) near ε = 0, we will be able to identify the left-hand side of (7.2) with the corresponding derivative of s_t, giving the explicit formula (7.3) for one of the derivatives of s_t. We now compute in logarithmic polar coordinates, with ρ = log|λ| and θ = arg λ. We may recognize the left-hand side of (7.3) as the derivative of s_t with respect to ρ, giving
\[ \frac{\partial s_{t}}{\partial\rho}=\frac{2\rho}{t}+1 \qquad (7.4) \]
for points inside Σ_t. Remarkably, ∂s_t/∂ρ is independent of θ! Thus,
\[ \frac{\partial}{\partial\rho}\frac{\partial s_{t}}{\partial\theta}=\frac{\partial}{\partial\theta}\frac{\partial s_{t}}{\partial\rho}=0, \]
meaning that ∂s_t/∂θ is independent of ρ. Now, we will show in Section 7.4 that the first derivatives of s_t have the same value as we approach a point λ ∈ ∂Σ_t from the inside as when we approach λ from the outside. We can therefore give a complete description of the function ∂s_t/∂θ on Σ_t as follows. It is the unique function on Σ_t that is independent of ρ (or, equivalently, independent of r = |λ|) and whose boundary values agree with those of the angular derivative of log(|λ − 1|²), namely
\[ \frac{2r\sin\theta}{r^{2}+1-2r\cos\theta}. \qquad (7.5) \]
Since the points on the outer boundary of Σ_t have the polar form (r_t(θ), θ), we conclude that
\[ \frac{\partial s_{t}}{\partial\theta}=\frac{2r_{t}(\theta)\sin\theta}{r_{t}(\theta)^{2}+1-2r_{t}(\theta)\cos\theta}. \]
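The boundary formula can be checked directly: in polar coordinates, log(|λ − 1|²) = log(r² + 1 − 2r cos θ), and its angular derivative is 2r sin θ/(r² + 1 − 2r cos θ). A finite-difference comparison at a sample point:

```python
import math

def log_abs_lam_minus_1_sq(r, theta):
    """log(|lambda - 1|^2) in polar coordinates: |r e^{i t} - 1|^2 = r^2 + 1 - 2 r cos t."""
    return math.log(r**2 + 1 - 2 * r * math.cos(theta))

def dtheta_numeric(r, theta, h=1e-6):
    """Central-difference angular derivative."""
    return (log_abs_lam_minus_1_sq(r, theta + h)
            - log_abs_lam_minus_1_sq(r, theta - h)) / (2 * h)

def dtheta_closed(r, theta):
    """The closed-form angular derivative appearing in (7.5)."""
    return 2 * r * math.sin(theta) / (r**2 + 1 - 2 * r * math.cos(theta))

r, theta = 1.3, 0.7  # sample point away from lambda = 1
print(dtheta_numeric(r, theta), dtheta_closed(r, theta))
```

Note also that the closed-form expression is unchanged when r is replaced by 1/r, consistent with the claim that the same boundary value is obtained at the outer radius r_t(θ) and the inner radius 1/r_t(θ).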
We now briefly discuss what is needed to make the preceding arguments rigorous. If λ is outside Σ t and ε is small and positive, we need to know that we can find a λ 0 close to λ and a small, positive ε 0 such that with these initial conditions, ε(t) = ε and λ(t) = λ. To show this, we apply the inverse function theorem to the map U t (λ 0 , ε 0 ) := (λ(t), ε(t)) in a neighborhood of the point (λ 0 , ε 0 ) = (λ, 0).
For λ inside Σ_t, we need to know first that S(t, λ, ε) is continuous, in all three variables, up to ε = 0. After all, s_t(λ) is defined by letting ε tend to zero in the expression S(t, λ, ε), with t and λ fixed. But the Hamilton-Jacobi formula (6.7) gives a formula for S(u, λ(u), ε(u)), in which the first two variables in S are not remaining constant. Furthermore, we also want to apply the Hamilton-Jacobi formula (6.8) for the derivatives of S, which means we also need continuity of the derivatives of S with respect to λ up to ε = 0. Using another inverse function theorem argument, we will show that after making the change of variable z = √ε, the function S will extend smoothly up to ε = z = 0, from which the needed regularity will follow. We use the following notation throughout the section.

7.2. Outside Σ_t. The goal of this subsection is to prove the following result.

Theorem 7.2. For all λ outside Σ_t, we have
\[ s_{t}(\lambda)=\log(|\lambda-1|^{2}). \qquad (7.6) \]
Thus, ∆s_t(λ) = 0 whenever λ is outside Σ_t.
As we have discussed in Section 7.1, the idea is that for λ outside Σ_t and ε small and positive, we should try to find a λ_0 close to λ and a small, positive ε_0 such that ε(u) and λ(u) will approach 0 and λ, respectively, as u approaches t. To that end, we define, for each t > 0, a map U_t from an open subset of C × R into C × R by U_t(λ_0, ε_0) = (λ(t; λ_0, ε_0), ε(t; λ_0, ε_0)).
We wish to evaluate the derivative of this map at the point (λ 0 , ε 0 ) = (λ, 0). For this idea to make sense, λ(t; λ 0 , ε 0 ) and ε(t; λ 0 , ε 0 ) must be defined in a neighborhood of (λ, 0); it is for this reason that we have allowed ε 0 to be negative in Section 6.3.
Proof. The claimed form of the second column of the derivative of U_t at (λ, 0) follows immediately from (7.7). We then compute from (6.52) the remaining entry, which is positive.
Proof of Theorem 7.2. We note that the inverse of the matrix in (7.8) will have a positive entry in the bottom right corner, meaning that U −1 t has the property that ∂ε 0 /∂ε > 0. It follows that the ε 0 -component of U −1 t (λ, ε) will be positive for ε small and positive. In that case, the solution to the system (6.2) will have ε(u) > 0 up to the blow-up time. The blow-up time, in turn, exceeds t for all points in the domain of U t .
Finally, when λ 0 = 0, we can use continuous dependence of the solutions on the initial conditions. The formula for p ε (t) in Proposition 6.9 has a limit as |λ 0 | tends to zero, so that δ tends to +∞. From (6.46), we find that a 2 = y 2 0 , so that from (6.34), y(t) ≡ y 0 . We then obtain p ε (t) = e −Ct p 0 , which remains nonsingular for all t. We can then continue to use the formula (6.52) for ε(t). Furthermore, by exponentiating (6.53) and letting |λ 0 | tend to zero, we find that λ(t) ≡ 0. We then continue to use the remaining formulas in the proof of Proposition 6.11 and find that the solution to the system exists for all time.
When λ 0 = 0, we apply the Hamilton-Jacobi formula in the form (6.21), which is to say that we replace the last two terms in (6.7) by t 0 ε(s)p ε (s) ds. We then compute as in (7.9) that the derivative of ε(t; 0, ε 0 ) with respect to ε 0 is positive at ε 0 = 0. Thus, by the inverse function theorem, for small positive ε, we can find a small positive ε 0 that gives ε(t; 0, ε 0 ) = ε. We then apply (6.21) with λ 0 = 0 and λ(t) = 0, and let ε tend to zero, which means that ε 0 also tends to zero. As ε 0 tends to zero, the function tends to zero uniformly and we obtain (7.6).
7.3. Inside Σ t . In this subsection, we establish the needed regularity of S(t, λ, ε) as ε tends to zero, for λ in Σ t . This result, whose proof is on p. 52, together with Theorem 6.7, will allow us to understand the structure of s t and its derivatives on Σ t .
Corollary 7.5. Fix a pair (σ, µ) with µ in Σ_σ. Then the functions in (7.12) all have extensions that are continuous in all three variables to the set of (t, λ, ε) with λ ∈ Σ_t and ε ≥ 0. Furthermore, for each t > 0, the function s_t is infinitely differentiable on Σ_t, and its derivatives with respect to a and b agree with the ε → 0+ limits of ∂S/∂a and ∂S/∂b. If we let t* be short for t*(λ_0, ε_0), then for all λ_0 and ε_0 > 0, we have the claimed limiting formulas.

Proof. We note that the four functions in (7.12) may be computed accordingly, and that S(t, λ, 0) agrees with the extended function at ε = 0. The first claim then follows from Theorem 7.4. Now that the continuity of S and its derivatives has been established, we may let t approach t*(λ_0, ε_0) in the Hamilton-Jacobi formulas (6.7) and (6.8) to obtain the second claim.
Corollary 7.6. Let us write λ ∈ Σ_t in logarithmic polar coordinates, with ρ = log|λ| and θ = arg λ. Then for each pair (t, λ) with λ ∈ Σ_t, we have
\[ \frac{\partial s_{t}}{\partial\rho}(t,\lambda)=\frac{2\rho}{t}+1. \qquad (7.14) \]
Furthermore, ∂s_t/∂θ is independent of ρ; that is,
\[ \frac{\partial s_{t}}{\partial\theta}(t,\lambda)=m_{t}(\theta) \]
for some smooth function m_t. Thus,
\[ \frac{\partial^{2}s_{t}}{\partial\rho^{2}}=\frac{2}{t} \qquad (7.15) \]
and
\[ \Delta s_{t}(\lambda)=\frac{1}{r^{2}}\left(\frac{2}{t}+m_{t}'(\theta)\right). \qquad (7.16) \]
In Section 7.4, we will obtain a formula for the function m_t(θ) appearing in Corollary 7.6.
Proof. In light of (7.13), the formula (7.14) follows from the formula (6.30) in Theorem 6.7. Now, ∂s_t/∂ρ is manifestly independent of θ. Since, by Corollary 7.5, s_t is an analytic, hence C², function on Σ_t, we conclude that
\[ \frac{\partial}{\partial\rho}\frac{\partial s_{t}}{\partial\theta}=\frac{\partial}{\partial\theta}\frac{\partial s_{t}}{\partial\rho}=0, \]
showing that ∂s_t/∂θ is independent of ρ. The formula (7.15) then follows by differentiating (7.14) with respect to ρ. Finally, if we use the standard formula for the Laplacian in polar coordinates, we obtain (7.16) from (7.15).
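The polar-coordinate Laplacian computation can be sanity-checked numerically. For a hypothetical function of the claimed shape, with ∂s/∂ρ = 2ρ/t + 1 and ∂s/∂θ = M′(θ), the Cartesian Laplacian should be (1/r²)(2/t + M″(θ)). A sketch with the test choice M(θ) = −cos θ (so M″(θ) = cos θ):

```python
import math

t = 2.0
M = lambda th: -math.cos(th)  # test angular part; M'' (theta) = cos(theta)

def s(x, y):
    """A function with ds/drho = 2 rho/t + 1 and ds/dtheta = M'(theta)."""
    rho = 0.5 * math.log(x * x + y * y)   # rho = log r
    th = math.atan2(y, x)
    return rho * rho / t + rho + M(th)

def laplacian(f, x, y, h=1e-4):
    """5-point finite-difference Laplacian in Cartesian coordinates."""
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h) - 4 * f(x, y)) / (h * h)

x, y = 1.1, 0.6                 # sample point away from the origin and the branch cut
r2 = x * x + y * y
th = math.atan2(y, x)
predicted = (2 / t + math.cos(th)) / r2
print(laplacian(s, x, y), predicted)
```

The finite-difference Laplacian matches the predicted (1/r²)(2/t + M″(θ)), which is the structure claimed in (7.16).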
We are now ready for the proof of the main result of this section.
Remark 7.8. Neither the method of Section 7.2 nor the method of Section 7.3 allows us to compute the value of s_t(λ) for λ in the boundary of Σ_t. Although we expect that this value will be log(|λ − 1|²), the question is irrelevant to the computation of the Brown measure. After all, we are supposed to consider ∆s_t computed in the distribution sense, that is, the distribution whose value on a test function ψ is
\[ \int_{\mathbb{C}}s_{t}(\lambda)\,\Delta\psi(\lambda)\,d^{2}\lambda. \qquad (7.21) \]
The value of (7.21) is unaffected by the value of s_t(λ) for λ in ∂Σ_t, which is a set of measure zero in C.
It is nevertheless essential to understand the behavior of s_t(λ) as λ approaches the boundary of Σ_t.

Definition 7.9. We say that a function f : C → R is analytic up to the boundary from inside Σ_t if the following conditions hold. First, f is real analytic on Σ_t. Second, for each λ ∈ ∂Σ_t, we can find an open set U containing λ and a real analytic function g on U such that g agrees with f on U ∩ Σ_t. We may similarly define what it means for f to be analytic up to the boundary from outside Σ_t.

Proposition 7.10. For each t > 0, the function s_t is analytic up to the boundary from inside Σ_t and analytic up to the boundary from outside Σ_t.
Note that the proposition is not claiming that s t is an analytic function on all of C. Indeed, our main results tell us that 1 4π ∆s t (λ) is identically zero for λ outside Σ t but approaches a typically nonzero value as λ approaches a boundary point from the inside. As we approach from the inside a boundary point with polar coordinates (r, θ), the limiting value of 1 4π ∆s t (λ) is w t (θ)/r 2 . This quantity certainly cannot always be zero, or the Brown measure of b t would be identically zero. Actually, we will see in Section 8.1 that w t (θ) is strictly positive except when t = 4 and θ = π.
Proof. We have shown that s t (λ) = log(|λ − 1| 2 ) for λ in (Σ t ) c . Since 1 ∈ Σ t , we see that s t is analytic from the outside of Σ t .
To address the analyticity from the inside, first note that by applying (7.20) with z = 0, we obtain an expression for s_t in terms of the function HJ in (7.10). But if ε_0^t : Σ_t → R and λ_t : Σ_t → C are as in Lemma 6.18, then we conclude that the formula (7.22) holds. We now claim that the function ε_0^t(λ_0), initially defined for λ_0 ∈ Σ_t, extends to an analytic function in a neighborhood of Σ_t. For t ≥ 4, we can simply use the formula (6.67) for all nonzero λ_0. For t < 4, however, the formula (6.67) becomes undefined in a neighborhood of a point where ∂Σ_t intersects the unit circle.
We now consider the function λ_t, defined as λ_t(λ_0) = λ(t; λ_0, ε_0^t(λ_0)), and we recall that λ_t(λ_0) = λ_0 for λ_0 ∈ ∂Σ_t. Although λ_t was initially defined for λ_0 in Σ_t, it has an analytic extension to a neighborhood of Σ_t, namely the set of λ_0 in the domain of the extended function ε_0^t for which the pair (λ_0, ε_0) satisfies the assumptions in (6.44). We now claim that the derivative of λ_t(λ_0) is invertible at each point in its domain. We use polar coordinates in both domain and range. Since arg(λ_t(λ_0)) = arg λ_0, the derivative has triangular form, and it therefore suffices to check that ∂|λ_t|/∂r is nonzero. To see this, we use the formula (6.65), where δ = δ_{θ_0,t} is as in the previous paragraph. Since δ is independent of |λ_0| with t and arg λ_0 fixed, we can easily verify from (6.65) that ∂|λ_t|/∂r > 0. Now, we have already established that s_t is analytic in the interior of Σ_t. Consider, then, a point λ in ∂Σ_t, so that λ_t(λ) = λ. Since the derivative of λ_t is invertible at λ, the map λ_t has an analytic local inverse λ_t^{-1} defined near λ. Then the formula (7.22) gives an analytic extension of s_t to a neighborhood of λ.
Proposition 7.11. Fix t > 0 and a point µ in ∂Σ_t. Then the functions s_t, ∂s_t/∂ρ, and ∂s_t/∂θ all approach the same value when λ approaches µ from inside Σ_t as when λ approaches µ from outside Σ_t.
Proof. We begin by considering s_t itself. The limit as λ approaches µ from the inside may be computed by using (7.22). By Lemma 6.18, as λ approaches µ from the inside, λ_t^{-1}(λ) approaches λ_t^{-1}(µ) = µ, and ε_0^t(λ_t^{-1}(λ)) approaches 0. Thus, the limiting value of s_t from the inside is computed from the function HJ in (7.10), where we have used that λ(t; µ, 0) = µ. (See the last part of Proposition 6.11.) Since s_t(λ) = log(|λ − 1|²) outside Σ_t, the limit of s_t from the outside agrees with the limit from the inside.
Next we consider the derivatives, which we compute in logarithmic polar coordinates ρ = log|λ| and θ = arg λ. By (7.14), we have ∂s_t/∂ρ = 2ρ/t + 1 for λ ∈ Σ_t. Letting λ approach µ from the inside gives the value log(|µ|²)/t + 1. Since µ is on the boundary of Σ_t, Theorem 4.1 says that T(µ) = t, so that, with θ = arg µ,
\[ \frac{\log(|\mu|^{2})}{t}+1=\frac{|\mu|^{2}-1}{|\mu|^{2}+1-2|\mu|\cos\theta}+1=\frac{2|\mu|^{2}-2|\mu|\cos\theta}{|\mu|^{2}+1-2|\mu|\cos\theta}. \]
Taking the corresponding derivative of the "outside" function log(|λ − 1|²) and letting λ tend to µ from the outside gives the same result.
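The matching of the two limits can also be verified numerically using the polar form (8.1) of T: for a point (r, θ) regarded as lying on the boundary at time t = T(r, θ), the inside limit log(r²)/t + 1 agrees with the radial log-derivative of log(|λ − 1|²).

```python
import math

def T(r, theta):
    """The function T of (8.1) in polar coordinates."""
    return (r*r + 1 - 2*r*math.cos(theta)) * math.log(r*r) / (r*r - 1)

def inside_limit(r, t):
    """Limit of ds_t/drho from inside: log(|mu|^2)/t + 1."""
    return math.log(r*r) / t + 1

def outside_limit(r, theta):
    """Radial log-derivative of log(|lambda - 1|^2) at (r, theta)."""
    return (2*r*r - 2*r*math.cos(theta)) / (r*r + 1 - 2*r*math.cos(theta))

r, theta = 1.4, 0.9   # sample point; it lies on the boundary for t = T(r, theta)
t = T(r, theta)
print(inside_limit(r, t), outside_limit(r, theta))
```

The agreement is exact (up to floating-point rounding), since it reduces to the algebraic identity log(r²)/T(r, θ) = (r² − 1)/(r² + 1 − 2r cos θ).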
Finally, we recall from Proposition 6.4 that ap_b − bp_a is a constant of motion. Thus, by the second Hamilton-Jacobi formula (6.8) and the initial conditions (6.4), we have a formula for the angular derivative of S along the solution curve. If we choose ε_0 and λ_0 so that t*(λ_0, ε_0) = t, we can use the regularity result in Corollary 7.5 to let u tend to t. This gives a limiting formula, where now λ_0 = λ_t^{-1}(λ) and ε_0 = ε_0^t(λ_t^{-1}(λ)). As λ approaches µ, Theorem 6.17 says that the value of λ_0 approaches µ and ε_0 approaches 0, so we obtain the limiting value of ∂s_t/∂θ from the inside. Taking the corresponding derivative of log(|λ − 1|²) and letting λ tend to µ from the outside gives the same result.

7.5. Proof of the main result. In this subsection, we prove our first main result, Theorem 2.2. Proposition 2.3 will then be proved in Section 8.1, while Propositions 2.5 and 2.6 will be proved in Section 8.2.
Proposition 7.12. For each fixed t, the restriction to Σ_t of the function in (7.5) is the unique function on Σ_t that (1) extends continuously to the boundary, (2) agrees with the θ-derivative of log(|λ − 1|²) on the boundary, and (3) is independent of r = |λ|. Thus, the function m_t in Corollary 7.6 is given by
\[ m_{t}(\theta)=\frac{2r_{t}(\theta)\sin\theta}{r_{t}(\theta)^{2}+1-2r_{t}(\theta)\cos\theta}, \]
where r_t(θ) is the outer radius of the domain Σ_t (Figure 3).
Proof. We have already established in Corollary 7.6 that ∂s_t/∂θ is independent of ρ (or equivalently, of r) in Σ_t. Then Propositions 7.10 and 7.11 tell us that ∂s_t/∂θ is continuous up to the boundary and agrees there with the angular derivative of log(|λ − 1|²). Thus, to compute ∂s_t/∂θ at a point in Σ_t, we travel along a radial segment (in either direction) until we hit the boundary at radius r_t(θ) or 1/r_t(θ). We then evaluate the angular derivative of log(|λ − 1|²), as in (7.5), giving the claimed expression for ∂s_t/∂θ = m_t(θ).

Proposition 7.13. For each t > 0, the distributional Laplacian of s_t(λ) with respect to λ may be computed as follows. Take the pointwise Laplacian of s_t outside Σ_t (giving zero), take the pointwise Laplacian of s_t inside Σ_t (giving the expression (7.16) in Corollary 7.6), and ignore the boundary of Σ_t.
Proof. Since, by Proposition 7.10, s_t is analytic up to the boundary of Σ_t from the inside, Green's second identity says that
\[ \int_{\Sigma_{t}}s_{t}(\lambda)\Delta\psi(\lambda)\,d^{2}\lambda=\int_{\Sigma_{t}}(\Delta s_{t}(\lambda))\psi(\lambda)\,d^{2}\lambda+\int_{\partial\Sigma_{t}}\left(s_{t}(\lambda)\nabla\psi(\lambda)-\psi(\lambda)\nabla s_{t}(\lambda)\right)\cdot\hat{n}\,dS \]
for any test function ψ, where in the last integral, the limiting value of ∇s_t from the inside should be used. This identity holds because the boundary of Σ_t is smooth for t ≠ 4 and piecewise smooth when t = 4 (Point 3 of Theorem 4.2). We also have a similar formula for the integral over the complement of Σ_t, provided that ψ is compactly supported, but with the direction of the unit normal reversed. Proposition 7.11 then tells us that the boundary terms in the two integrals cancel, giving
\[ \int_{\mathbb{C}}s_{t}(\lambda)\Delta\psi(\lambda)\,d^{2}\lambda=\int_{\Sigma_{t}}(\Delta s_{t}(\lambda))\psi(\lambda)\,d^{2}\lambda+\int_{(\Sigma_{t})^{c}}(\Delta s_{t}(\lambda))\psi(\lambda)\,d^{2}\lambda, \qquad (7.24) \]
where the integral over (Σ_t)^c is actually zero, since ∆s_t(λ) = 0 there. The formula (7.24) says that the distributional Laplacian of s_t may be computed by taking the ordinary, pointwise Laplacian in Σ_t and in (Σ_t)^c and ignoring the boundary of Σ_t.
We now have all the ingredients for a proof of Theorem 2.2.
Proof of Theorem 2.2. Proposition 7.13 tells us that we can compute the distributional Laplacian of s_t separately inside Σ_t and outside Σ_t, ignoring the boundary. Theorem 7.2 tells us that the Laplacian outside Σ_t is zero. Corollary 7.6 gives us the form of ∆s_t inside Σ_t, while Proposition 2.6 identifies the function m_t appearing in Corollary 7.6. The claimed formula for the Brown measure therefore holds.

8. Further properties of the Brown measure

8.1. The formula for ω. In this subsection, we derive the formula for w_t given in Proposition 2.3 in terms of the density ω. Throughout, we will write the function T in (4.1) in polar coordinates as
\[ T(r,\theta)=\frac{(r^{2}+1-2r\cos\theta)\log(r^{2})}{r^{2}-1}. \qquad (8.1) \]
We start with a simple rewriting of the expression for w_t in Theorem 2.2.
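Two properties of T in the form (8.1), the invariance under r → 1/r (reflecting the invariance of Σ_t under λ → 1/λ) and the limiting value T(1, θ) = 2 − 2 cos θ as r → 1, can be confirmed numerically:

```python
import math

def T(r, theta):
    """T in polar coordinates, equation (8.1)."""
    return (r*r + 1 - 2*r*math.cos(theta)) * math.log(r*r) / (r*r - 1)

r, theta = 1.9, 2.1
print(T(r, theta), T(1/r, theta))                 # equal: invariance under r -> 1/r
print(T(1 + 1e-6, theta), 2 - 2*math.cos(theta))  # nearly equal: limit as r -> 1
```

The invariance is an exact algebraic identity: replacing r by 1/r multiplies both log(r²) and r² − 1 by −1 after clearing the factor r² from the first parenthesis.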
Lemma 8.1. The density w t (θ) in Theorem 2.2 may also be written as
We now formulate the main result of this subsection, whose proof is on p. 59.
The function ω has the following properties.
See Figures 19 and 20. The small- and large-t behavior of the region Σ_t can also be determined using the behavior of the function T(λ) near λ = 1 (small t) and near λ = 0 (large t), together with the invariance of the region under λ → 1/λ. For small t, the region resembles a disk of radius √t around 1, while for large t, the region resembles an annulus with inner radius e^{−t/2} and outer radius e^{t/2}. In particular, the expected behavior of the Brown measure for small t can be observed: it resembles the uniform probability measure on a disk of radius √t centered at 1.

[Figure 20: plots of w_t(θ) (black) and 1/(2πt) (dashed) for t = 7 and 10.]
Proof. When t is small, the entire boundary of Σ t will be close to λ = 1, since this is the only point where T (λ) = 0. Furthermore, when t is small, θ max (t) = cos −1 (1 − t/2) is close to zero. When t is small, therefore, the quantity πtw t (θ) = 1 2 ω(r t (θ), θ) will be close to ω(1, 0)/2 = 1 for all θ ∈ (−θ max (t), θ max (t)), by Point 2 of Theorem 8.2. When t is large (in particular, greater than 4), the inner boundary of the domain will be close to λ = 0, since this is the only point in the unit disk where T (λ) is large. Thus, for large t, the inner radius 1/r t (θ) of the domain will be uniformly small, and therefore 2πtw t (θ) = ω(r t (θ), θ) = ω(1/r t (θ), θ) will be uniformly close to 1, by Point 4 of Theorem 8.2.
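The small-t and large-t asymptotics just described follow from the shape of T: near λ = 1, the factor log(r²)/(r² − 1) is close to 1, so T(λ) ≈ |λ − 1|², while for r large, T(r, θ) ≈ 2 log r, uniformly in θ. A rough numerical confirmation that the level set {T = t} sits near |λ − 1| = √t for small t and near |λ| = e^{t/2} for large t:

```python
import math

def T_polar(r, theta):
    """T in polar coordinates, as in (8.1)."""
    return (r*r + 1 - 2*r*math.cos(theta)) * math.log(r*r) / (r*r - 1)

# small t: a point at distance sqrt(t) from 1 nearly satisfies T = t
t_small = 1e-4
ratio_small = T_polar(1 + math.sqrt(t_small), 0.0) / t_small
print(ratio_small)   # close to 1

# large t: the circle of radius e^{t/2} nearly satisfies T = t, for any theta
t_large = 20.0
ratio_large = T_polar(math.exp(t_large / 2), 1.0) / t_large
print(ratio_large)   # close to 1
```

Both ratios approach 1 as t_small decreases and t_large increases, consistent with the disk and annulus pictures above.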

(8.4)
After computing h′(r), it is a straightforward but tedious exercise to simplify (8.4) and obtain the claimed formula (2.7).
Since h(1/r) = h(r), we may readily verify Point (1); both numerator and denominator in the fraction on the right-hand side of (2.7) change by a factor of 1/r 2 when r is replaced by 1/r.
To verify the claimed positivity of ω, we first observe that β(r)z + α(r) is positive when z = 1 (with a value of 2 − (r − 1)² c(r) = 1 + h(r)) and also positive at the other endpoint of the relevant range of z.

We now observe a close relationship between the density w_t(θ) in Theorem 2.2 and the map in (8.8).
We are now ready for the proof of Proposition 2.5.
Proof of Proposition 2.6. The value Φ_t(λ) is computed by first taking the argument of λ to obtain θ and then applying the map in (8.8) to obtain φ. Thus, the first result is just a restatement of Proposition 2.5. For the uniqueness claim, suppose a measure µ on Σ_t has the form
\[ d\mu(\lambda)=\frac{1}{r^{2}}\,g(\theta)\,r\,dr\,d\theta. \]
Then the distribution of the argument θ of λ will be, by integrating out the radial variable, 2 log(r_t(θ)) g(θ) dθ. The distribution of φ will then be
\[ 2\log(r_{t}(\theta))\,g(\theta)\,\frac{d\theta}{d\phi}\,d\phi=2\log(r_{t}(\theta))\,g(\theta)\,\frac{1}{2\pi t\,w_{t}(\theta)}\,d\phi. \]
The only way this can reduce to Biane's measure as computed in (8.14) is if g coincides with w t .
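The change-of-variables step used in the proof, dividing a density in θ by dφ/dθ to obtain the density in φ, can be illustrated generically. The sketch below (with a hypothetical increasing map F and a sample density p, unrelated to the specific functions in the text) checks that the push-forward density p(θ(φ))/F′(θ(φ)) assigns each interval the same mass as the original density assigns its preimage.

```python
import math

# If phi = F(theta) with F increasing, a density p(theta) dtheta pushes forward
# to p(theta(phi)) / F'(theta(phi)) dphi.  Check: matching intervals carry equal mass.

F = lambda th: th + 0.5 * math.sin(th)              # sample increasing map (hypothetical)
Fp = lambda th: 1 + 0.5 * math.cos(th)              # its derivative
p = lambda th: (1 + math.cos(th)) / (2 * math.pi)   # sample density on (-pi, pi)

def integrate(f, a, b, n=20000):
    """Midpoint-rule integration."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

def F_inv(phi, lo=-math.pi, hi=math.pi):
    """Invert the increasing map F by bisection."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if F(mid) < phi:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def pushforward(phi):
    th = F_inv(phi)
    return p(th) / Fp(th)

a, b = -1.0, 2.0
mass_theta = integrate(p, a, b)
mass_phi = integrate(pushforward, F(a), F(b))
print(mass_theta, mass_phi)
```

The two masses agree, which is exactly the mechanism forcing g to coincide with w_t once the φ-distribution is pinned down to Biane's measure.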