Uniform probability distribution over all density matrices

Let H be a finite-dimensional complex Hilbert space and D the set of density matrices on H, i.e., the positive operators with trace 1. Our goal in this note is to identify a probability measure u on D that can be regarded as the uniform distribution over D.
We propose such a measure on D, argue that it can be so regarded, discuss its properties, and compute the joint distribution of the eigenvalues of a random density matrix distributed according to this measure.


Introduction
With every probability distribution μ over wave functions, i.e., over the unit sphere S(H) in a complex Hilbert space H, there is associated a density matrix

ρ = ∫_S(H) μ(dψ) |ψ⟩⟨ψ|.   (1)

In this note, in contrast, we consider a probability distribution over density matrices, and we ask whether there exists a distribution that should be regarded as the uniform distribution u over all density matrices. Our considerations involve certain applications of random matrix theory.
Here is our motivation for the question. Density matrices can arise not only as encoding random ψs, but also as partial traces of states of larger systems; moreover, it is conceivable that even the fundamental quantum state of the universe is represented by a density matrix ρ. For example, it is easy to set up a version of Bohmian mechanics in which the particles are guided, not by a wave function ψ, but by a density matrix ρ [4,12]. The density matrix, in such a theory, is not an expression of our ignorance of the actual pure state even if we continue to call it a "mixed state," nor is it an expression of entanglement with another system, but it is a fundamental object, a physical variable on which the motion of the Bohmian particles depends. Likewise, a density matrix can be a fundamental object in collapse theories or many-worlds theories [2,24]. But if a density matrix is a fundamental object in nature, then it makes sense to consider a random density matrix. For example, when considering the initial state of the universe, it is common to consider a random state in a particular subspace H_PH of the Hilbert space of the universe associated with very low entropy; the statement that the initial state of the universe lies in the subspace H_PH is often called the past hypothesis (PH) [1,7]. Here, one usually has in mind a random pure state ψ in H_PH, but if states ρ that are fundamentally mixed are possible, as illustrated by the above-mentioned versions of Bohmian mechanics, collapse theories, and many-worlds theories, we can also consider a random ρ in H_PH [9]. Since one considers for ψ the uniform distribution over S(H_PH), the analog would involve the uniform distribution u over all ρ concentrated in H_PH, which brings us to the question of whether such a distribution u exists, whether it is uniquely defined, and what it looks like.
Here, we propose a natural definition of u on any Hilbert space H of finite dimension d ∈ N. It will be clear from the definition that u exists and is unique. For infinite-dimensional Hilbert spaces H, it does not seem that a uniform distribution exists over the density matrices on H, which is not surprising, as there is no uniform distribution either over H itself or over S(H). Of course, our reasoning also yields, for any subspace H of a bigger Hilbert space K with dim H < ∞, a uniform probability distribution over the density matrices concentrated on H, regardless of whether dim K is finite or infinite. We show that u is invariant under unitary transformations of H, so that a u-distributed ρ has an eigenbasis that is uniformly distributed in the set of all orthonormal bases of H. Furthermore, we compute the joint distribution of the eigenvalues of ρ. The expectation value of ρ is d^{-1} I, where I is the identity operator on H.
In applications, the normalized measure u may often play the role of a typicality measure (see, e.g., [15, Sect. 6], [16, Sect. 7.1], [18,22,26]) rather than that of direct probability. That is, it may serve for defining what is true of most density matrices (that are, say, concentrated in a certain subspace such as H_PH). For example, the properties of u will entail that for a bipartite system, most density matrices are entangled, just as most pure states are [14].
Concerning the past hypothesis, another approach proposes to take the initial density matrix of the universe to be the normalized projection onto H_PH [8-10]. So, one could consider different kinds of initial conditions: a random ψ with uniform distribution over S(H_PH), a fixed density matrix proportional to the projection onto H_PH, or a random density matrix with distribution u over the density matrices in H_PH. It seems reasonable to expect that all three theories are empirically equivalent, according to the appropriate sense of typicality. We leave that issue to another paper.

Definition of the measure
Let S be the space of self-adjoint operators on H (a real vector space of dimension d²), P ⊂ S the set of positive operators on H, and T_c the set of self-adjoint operators with trace c (an affine subspace of S of dimension d² − 1); the set D of all density matrices is D = P ∩ T_1. Since for d = 1, D has only one element, we assume d ≥ 2. Let P° denote the interior of P, which is the set of positive definite operators on H, and D° = P° ∩ T_1 the interior of D in T_1 (the set of density matrices for which 0 is not an eigenvalue).
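As a concrete illustration of these definitions (not part of the argument), a minimal numpy sketch can test membership in D, i.e., self-adjointness, positivity, and unit trace; the function name is ours:

```python
import numpy as np

def is_density_matrix(rho, tol=1e-10):
    """Membership test for D: self-adjoint, positive semidefinite, trace 1."""
    rho = np.asarray(rho, dtype=complex)
    if not np.allclose(rho, rho.conj().T, atol=tol):
        return False                                  # not self-adjoint (not in S)
    if abs(np.trace(rho) - 1) > tol:
        return False                                  # not in T_1
    # eigvalsh applies to self-adjoint matrices; positivity = no negative eigenvalues
    return bool(np.all(np.linalg.eigvalsh(rho) >= -tol))

print(is_density_matrix(np.eye(3) / 3))            # the maximally mixed state: True
print(is_density_matrix(np.diag([1.5, -0.5, 0])))  # trace 1 but not positive: False
```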
As in every affine space of finite dimension, there is a natural notion of volume in T_1: a nonzero translation-invariant measure on the Borel σ-algebra of T_1. It is well known that this measure is unique up to a global positive factor.

Proposition 1 For every such measure, the volume of D is neither zero nor infinite.
Proof It is not zero because the interior D° is open and non-empty. That it is finite will follow once we show that D is compact and therefore bounded in T_1. The compactness of D will follow from the fact that the continuous image of any compact set is compact. Here, the relevant mapping is ϕ : Δ × U(d) → D, where Δ := {λ = (λ_1, …, λ_d) ∈ R^d : λ_1 ≥ … ≥ λ_d ≥ 0, Σ_k λ_k = 1} and U(d) is the unitary group of d × d matrices, here regarded as the set of orthonormal bases b = (b_1, …, b_d) of H, defined by

ϕ(λ, b) = Σ_{k=1}^{d} λ_k |b_k⟩⟨b_k|.   (2)

By the spectral theorem, ϕ maps onto D. ϕ is clearly continuous, and since U(d) is known to be compact and Δ is clearly compact, also Δ × U(d) is compact, which gets mapped onto D.
Thus, one can restrict the volume measure in T_1 to D and normalize, which removes the arbitrary constant. The resulting normalized measure is the desired measure u.
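The map ϕ from the proof of Proposition 1 can be sketched numerically; this is an illustration only, the helper names are ours, and the Dirichlet draw merely produces an example point of the simplex:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3

def phi(lam, b):
    """phi(lambda, b) = sum_k lambda_k |b_k><b_k|, mapping the simplex x U(d) onto D."""
    return sum(l * np.outer(col, col.conj()) for l, col in zip(lam, b.T))

# a point of the simplex: nonnegative entries summing to 1, sorted decreasingly
lam = np.sort(rng.dirichlet(np.ones(d)))[::-1]
# an orthonormal basis b, taken as the columns of a unitary matrix
b, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))

rho = phi(lam, b)
print(np.isclose(np.trace(rho).real, 1.0))        # trace 1
print(np.all(np.linalg.eigvalsh(rho) > -1e-12))   # positive: rho lies in D
```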

Proposition 2 u is invariant under unitary transformations U of H .
Proof U maps S to itself in a linear way and maps T_1 to itself in an affine-linear way. Thus, any translation-invariant measure on T_1 will be mapped by U to a multiple of itself. Since U also maps P to itself, it also maps D to itself; since, by Proposition 1, the volume of D is neither zero nor infinite, the multiple must be 1. As a consequence, U preserves volumes when acting on T_1, and so it preserves u.
Proof (Alternative proof.) Equip S with the Hilbert-Schmidt inner product ⟨A, B⟩ = tr(AB), which is invariant under U. Using the inner product, one has a notion of area on every surface, in particular on T_1. Since the inner product is invariant under U, so is surface area, and u is just the normalized surface area restricted to D.
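The invariance of the Hilbert-Schmidt inner product under conjugation by a unitary, on which this proof rests, can be spot-checked numerically (an illustrative sketch; the helper names are ours, and QR of a complex Gaussian matrix is a standard recipe for a random unitary):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def hs_inner(a, b):
    # Hilbert-Schmidt inner product; for self-adjoint A, B this equals tr(AB)
    return np.trace(a.conj().T @ b)

def rand_selfadjoint(rng, d):
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (g + g.conj().T) / 2

def rand_unitary(rng, d):
    # QR decomposition of a complex Gaussian matrix yields a unitary Q
    q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
    return q

A, B = rand_selfadjoint(rng, d), rand_selfadjoint(rng, d)
U = rand_unitary(rng, d)
lhs = hs_inner(U @ A @ U.conj().T, U @ B @ U.conj().T)
print(np.isclose(lhs, hs_inner(A, B)))  # conjugation by U preserves the inner product
```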
Note that unitary invariance does not uniquely select the measure u. Unitary invariance means that the joint distribution of the eigenvectors of ρ is uniform while saying nothing about the joint distribution of the eigenvalues. The property that selects u as the natural normalized measure on D is that u is, up to a normalizing factor, "just volume." (After all, it is the volume measure in T_1 applied to subsets of D ⊂ T_1; see Footnote 3.)

Expectation and covariance
The covariance of a random vector X in a real vector space V with inner product ⟨·, ·⟩ is defined to be the operator C : V → V with ⟨v, C v′⟩ = E[⟨v, X − EX⟩ ⟨X − EX, v′⟩] for all v, v′ ∈ V.

Proposition 3 A u-distributed ρ has expectation

E ρ = (1/d) I   (3)

and covariance (in V = S with the Hilbert-Schmidt inner product

⟨A, B⟩ = tr(AB)   (4)

) of the form

C = c(d) P_{T_0}   (5)

with c(d) > 0 some constant and P_{T_0} the orthogonal projection in S onto the subspace T_0 ⊂ S of traceless operators. 4

Proof Since u is unitarily invariant (Proposition 2), E ρ = U (E ρ) U^{-1} for every unitary U, so E ρ is a multiple of I; since tr(E ρ) = 1, this gives (3). For the covariance, consider the representation of U(d) on S given by A ↦ U A U^{-1}. Clearly, RI (the multiples of the identity) and T_0 are U(d)-invariant (as tr(U A U^{-1}) = tr(A)), they are orthogonal in the Hilbert-Schmidt inner product, their sum is S, and RI is irreducible because it is 1-dimensional. To show that T_0 is irreducible, we show that {0} and T_0 are its only invariant subspaces. To this end, let U ≠ {0} be an invariant subspace of T_0; we show that U + RI = S, which implies that U = T_0. Note that U + RI is invariant. Let 0 ≠ A ∈ U. Then A has at least two different eigenvalues; choose an orthonormal basis of H that diagonalizes A. We show that all B ∈ S that are diagonal in the same basis also lie in U + RI; it then follows by applying unitaries that U + RI = S. For this, it suffices to show that for d ≥ 2 the only subspace of R^d that is invariant under permutation of components and contains c := (1, 1, …, 1) and some vector not proportional to c is R^d itself. Indeed, if W is such a subspace and w ∈ W \ Rc, then w_i ≠ w_j for some i ≠ j. Let w′ be the vector obtained from w by permuting w_i and w_j; then w − w′ ∈ W is a nonzero multiple of e_i − e_j. By permutation, all e_k − e_l ∈ W, and together with c = Σ_k e_k ∈ W it follows that every standard basis vector e_k = (0, …, 0, 1, 0, …, 0) lies in W, so W = R^d. Now, since T_0 is irreducible, we can apply Schur's lemma [20]. Since the irreducible representations RI (which has dimension 1) and T_0 (which has dimension d² − 1 ≥ 3) are inequivalent, Schur's lemma yields that every U(d)-invariant operator C : S → S is of the form C = a P_{RI} + c P_{T_0}. The covariance operator C is U(d)-invariant, and since always ρ − E ρ ∈ T_0, we have that a = 0, which yields (5).

Footnote 3 (continued) For comparison, Hall [17] advocated a different measure on D, associated with the Bures metric [5,6], which quantifies the distinguishability between two density matrices (see also Sect. 4). A third, perhaps stronger while more vague, reason is simplicity: The volume measures on T_1 are the first measures to consider and the simplest choice. A final reason is intuition: the volume measures on T_1 are the measures that seem most obviously "uniform."

Footnote 4 After completion of this paper, we have become aware of results of Tucci [23] that imply the value of c(d).
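As a sanity check of the expectation E ρ = I/d stated in Proposition 3, one can sample from u via the Ginibre construction recalled in the concluding note (ρ = AA*/tr(AA*) for a complex Gaussian matrix A) and average; a Monte Carlo sketch, with names ours:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 2, 4000

def sample_u(rng, d):
    # Ginibre construction recalled in the concluding note:
    # rho = A A* / tr(A A*) is u-distributed for i.i.d. complex Gaussian entries of A
    a = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    m = a @ a.conj().T
    return m / np.trace(m).real

mean_rho = sum(sample_u(rng, d) for _ in range(n)) / n
print(np.allclose(mean_rho, np.eye(d) / d, atol=0.02))  # E rho is close to I/d
```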
We can characterize the value of c = c(d) as follows. Fix ψ ∈ S(H) and set v := |ψ⟩⟨ψ| − (1/d) I ∈ T_0; then ⟨v, Cv⟩ = c(d) ⟨v, v⟩ = c(d) (1 − 1/d). On the other hand, ⟨v, Cv⟩ = E ⟨v, ρ − Eρ⟩² = E (⟨ψ|ρ|ψ⟩ − 1/d)². Thus, c(d) = d/(d − 1) E (⟨ψ|ρ|ψ⟩ − 1/d)². We did not succeed in evaluating the expectation value. 5

Distribution of eigenvalues
Let T̃_1 be the plane {λ = (λ_1, …, λ_d) ∈ R^d : Σ_k λ_k = 1}, and recall Δ = {λ ∈ T̃_1 : λ_1 ≥ … ≥ λ_d ≥ 0}, the set of possible ordered eigenvalue lists.

Proposition 4 Under u, the eigenvalues λ_1 ≥ … ≥ λ_d of ρ have joint distribution with density

g(λ) = N ∏_{1≤i<j≤d} (λ_i − λ_j)²   (21)

relative to the volume measure in T̃_1, with normalization constant N > 0.
Proof The strategy of proof is to use, instead of volume on S, a Gaussian unitary ensemble, for which the distribution of the eigenvalues is known, and then let its variance tend to infinity, so that the distribution becomes flat on every compact set. The Gaussian unitary ensemble (e.g., [13,19,21]) is the probability distribution over self-adjoint d × d matrices X_ij = A_ij + i B_ij with real part A_ij = A_ji and imaginary part B_ij = −B_ji such that all A_ij (i ≤ j) and all B_ij (i < j) are independent random variables, where A_ij with i < j and B_ij are Gaussian with mean 0 and variance 1/(2d), while the A_ii are Gaussian with mean 0 and variance 1/d. Thus, the joint distribution of all X_ij has density (with lower-case symbols the possible values of random variables)

f_X(x_11, x_12, …, x_dd) ∝ e^{−d tr(x²)/2}.

It is known (e.g., [13,19,21]) that the eigenvalues μ_1 ≥ … ≥ μ_d of X have joint distribution with density

g_X(μ) ∝ ∏_{1≤i<j≤d} (μ_i − μ_j)² e^{−d Σ_k μ_k²/2}.

That is, ϕ^{-1} maps the distribution f_X(x) dx to the product of g_X(μ) dμ (with μ = (μ_1, …, μ_d)) and the uniform distribution on U(d). Now consider Y := σX with arbitrary σ > 0 that we will ultimately let tend to infinity. Y has density

f_Y(y_11, y_12, …, y_dd) ∝ e^{−d tr(y²)/2σ²},   (26)

and its eigenvalues ν_1 = σμ_1, …, ν_d = σμ_d have joint density

g_Y(ν) ∝ ∏_{1≤i<j≤d} ((ν_i − ν_j)/σ)² e^{−d Σ_k ν_k²/2σ²}.   (27)

Again, ϕ^{-1} maps the distribution f_Y(y) dy to the product of g_Y(ν) dν (with ν = (ν_1, …, ν_d)) and the uniform distribution on U(d).
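As a numerical aside (not part of the proof), the Gaussian unitary ensemble with exactly these variance conventions can be sketched and its entry variances checked empirically; the function name is ours:

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 4, 20000

def gue(rng, d):
    # off-diagonal A_ij, B_ij: mean 0, variance 1/(2d); diagonal A_ii: variance 1/d
    s = np.sqrt(1 / (2 * d))
    a = rng.normal(scale=s, size=(d, d))
    b = rng.normal(scale=s, size=(d, d))
    x = np.triu(a, 1) + 1j * np.triu(b, 1)
    return x + x.conj().T + np.diag(rng.normal(scale=np.sqrt(1 / d), size=d))

xs = np.array([gue(rng, d) for _ in range(n)])
print(np.allclose(xs[0], xs[0].conj().T))                 # self-adjoint
print(abs(xs[:, 0, 1].real.var() - 1 / (2 * d)) < 0.01)   # Var(A_01) is about 1/(2d)
print(abs(xs[:, 0, 0].real.var() - 1 / d) < 0.01)         # Var(A_00) is about 1/d
```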
Since ϕ maps T̃_1 × U(d) to T_1, it maps the conditional distribution of ν on T̃_1, times the uniform distribution on U(d), to the conditional distribution of Y on T_1. Likewise, it maps the conditional distribution of ν on Δ, times the uniform distribution on U(d), to the conditional distribution of Y on D. Note that the conditional distribution of Y on T_1 has density, up to a normalizing factor, given by f_Y restricted to T_1, and the conditional distribution of ν on T̃_1 has density g_Y on T̃_1 up to a factor. In the limit σ → ∞, the right-hand side of (26) converges to 1, in fact uniformly on the compact set D; thus, also f_Y (including the appropriate normalizing factor) converges uniformly to 1 on D.
On the other hand, in the same way, the right-hand side of (27), after dropping the factors of σ in the denominator, converges to ∏_{i<j} |ν_i − ν_j|², in fact uniformly on the compact set Δ. We want to draw the conclusion that ϕ maps the limit of g_Y conditional on Δ (times the uniform distribution on U(d)) to the limit of f_Y conditional on D (i.e., to u).
To justify this conclusion, we note the following. The interior of Δ is Δ° = {λ ∈ Δ : λ_1 > λ_2 > … > λ_d > 0}. Since Δ is a convex set, its boundary has measure zero in T̃_1; thus, it does not matter whether we consider continuous measures on Δ or Δ°. For eigenvalues in Δ°, the orthonormal basis of eigenvectors is unique up to phases; that is, ϕ maps Δ° × [U(d)/U(1)^d] bijectively to the set of non-degenerate positive definite density matrices, a dense set of full u-measure in D. Since ϕ is smooth (in particular) on T̃_1 × U(d), so is its Jacobian determinant; since Δ × U(d) is compact, the Jacobian is bounded on Δ × U(d). According to the transformation formula for integrals, the density of the pre-image is the Jacobian times the density of the image; as a consequence, if the Jacobian is bounded and the density of the image converges uniformly, then so does the density of the pre-image. That is, we can pull the limit through ϕ, as we claimed.
The upshot is that ϕ^{-1} maps u to the product of the distribution with density (21) on Δ and the uniform distribution on U(d), which proves (21) (and, by the way, proves again the unitary invariance of u).
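For d = 2, Proposition 4 can be spot-checked numerically: with λ_1 + λ_2 = 1, the density (21) makes the gap s = λ_1 − λ_2 proportional to s², hence distributed with density 3s² on [0, 1] and mean 3/4 (a short calculation of ours). Sampling u via the Ginibre construction recalled in the concluding note, as a consistency sketch:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 6000

def sample_u2(rng):
    # Ginibre construction for d = 2 (recalled in the concluding note)
    a = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    m = a @ a.conj().T
    return m / np.trace(m).real

# eigvalsh returns eigenvalues in ascending order, so the difference is the gap s >= 0
gaps = [np.diff(np.linalg.eigvalsh(sample_u2(rng)))[0] for _ in range(n)]
print(abs(np.mean(gaps) - 0.75) < 0.02)  # the density 3 s^2 on [0, 1] has mean 3/4
```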

Note added, concerning prior works
After completion of this paper, we have learned of prior works [17,23,27-29] that considered the measure we denote by u.
Hall [17] asked which distribution over the density matrices "corresponds to minimal prior knowledge" or is "most random." That is perhaps the same as asking which distribution is uniform, or perhaps it is subtly different. He came up with three proposed answers, one of which is u, and regarded another one as "most random," in fact the measure associated with the Bures metric [5,6], another unitarily invariant metric that quantifies how easy it is to distinguish two density matrices through measurements. Hall also arrived at the formula (21) for the distribution of the eigenvalues, but in a different way than we did.
While the Bures measure is not unreasonable, u seems more convincing to us (see Footnote 3). Presumably, the reasons why Hall prefers the Bures measure and we prefer u have to do with the subtle differences in motivation: Hall talked about "minimal knowledge" and "extracting information," whereas we are interested in a notion of typicality in nature, as reflected in the word "uniform"; one could say that we ask which measure would make "every ρ equally probable." After all, the distribution over a set of initial fundamental density matrices of the universe has a law-like character that is independent of anyone's knowledge or information, as it plays a role in grounding the objective arrows of time in such a universe. Now, if the target is an objective notion of "most," it is natural to consider the volume measure on that set, even though it does not perform all the roles of an information-theoretic notion. For example, there is no need for a typicality measure to reflect how easily measurements could distinguish between two density matrices. That is to be expected, as our inference about the initial state of the universe is largely theoretical. Any measurement scheme about the universe will ultimately be about measurements of its subsystems, whose resultant probability distribution may be different from the one we select for the universal state. At the subsystem level, Hall's information-theoretic consideration becomes salient, and in some situations the Bures measure may be more appropriate than u.
Życzkowski and Sommers computed [28] the volume of D in T_1 according to the Hilbert-Schmidt metric (and thus the normalization constant in the definition of u), and computed [27, Eq. (3.7)] the normalization constant in Proposition 4 to be N = (d² − 1)! / ∏_{k=1}^{d} [k! (k − 1)!]. They also showed [27] that for a uniformly random unit vector in H ⊗ H, the reduced density matrix in H is u-distributed, and that, for a random d × d matrix A from the Ginibre ensemble (i.e., for which each entry is an independent complex Gaussian with mean 0 and variance 1), ρ := A A* / tr(A A*) has distribution u; asymptotics for large d are studied in [29]. Tucci [23] also considered u, called it the "uniform ensemble of density matrices," and computed all moments of the entries of a u-distributed ρ.
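The first construction above (the reduced density matrix of a uniformly random bipartite unit vector) can be sketched as follows; since normalizing a complex Gaussian vector yields a uniform unit vector, this is in effect the same computation as the Ginibre construction. The function name is ours:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 3

def reduced_state(rng, d):
    # a uniformly random unit vector in H (x) H, written as a d x d coefficient matrix C
    c = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    c /= np.linalg.norm(c)        # normalizing a complex Gaussian vector gives uniformity
    return c @ c.conj().T         # partial trace over the second factor: rho = C C*

rho = reduced_state(rng, d)
print(np.isclose(np.trace(rho).real, 1.0))       # trace 1
print(np.all(np.linalg.eigvalsh(rho) >= -1e-12)) # positive: a valid density matrix
```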