Uniform Probability Distribution Over All Density Matrices

Let $\mathscr{H}$ be a finite-dimensional complex Hilbert space and $\mathscr{D}$ the set of density matrices on $\mathscr{H}$, i.e., the positive operators with trace 1. Our goal in this note is to identify a probability measure $u$ on $\mathscr{D}$ that can be regarded as the uniform distribution over $\mathscr{D}$. We propose a measure on $\mathscr{D}$, argue that it can be so regarded, discuss its properties, and compute the joint distribution of the eigenvalues of a random density matrix distributed according to this measure.


Introduction
With every probability distribution $\mu$ over wave functions, i.e., over the unit sphere $S(\mathscr{H})$ in a complex Hilbert space $\mathscr{H}$, there is associated a density matrix
$\rho = \int_{S(\mathscr{H})} \mu(d\psi)\, |\psi\rangle\langle\psi|$ . (1)
In this note, in contrast, we consider a probability distribution over density matrices, and we ask whether there exists a distribution that should be regarded as the uniform distribution $u$ over all density matrices. Our considerations involve certain applications of random matrix theory.
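As a concrete illustration of (1), the following is a minimal numerical sketch (with a hypothetical discrete distribution $\mu$ over three wave functions in $\mathbb{C}^2$; all names are ours): the resulting $\rho$ is self-adjoint, positive, and has trace 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# A discrete distribution mu over three unit vectors in C^2 (hypothetical example).
psis = [rng.normal(size=2) + 1j * rng.normal(size=2) for _ in range(3)]
psis = [psi / np.linalg.norm(psi) for psi in psis]
weights = np.array([0.5, 0.3, 0.2])  # mu({psi_k})

# Discrete analog of Eq. (1): rho = sum_k mu_k |psi_k><psi_k|.
rho = sum(w * np.outer(psi, psi.conj()) for w, psi in zip(weights, psis))

# rho is a density matrix: self-adjoint, positive, trace 1.
assert np.allclose(rho, rho.conj().T)
assert np.linalg.eigvalsh(rho).min() >= -1e-12
assert np.isclose(np.trace(rho).real, 1.0)
```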
Here is our motivation for the question. Density matrices can arise not only as encodings of random $\psi$'s, but also as partial traces of states of larger systems; moreover, it is conceivable that even the fundamental state, as known to nature, is a density matrix $\rho$. For example, it is easy to set up a version of Bohmian mechanics in which the particles are guided, not by a wave function $\psi$, but by a density matrix $\rho$ [6]. The density matrix, in such a theory, is not an expression of our ignorance of the actual pure state even if we continue to call it a "mixed state," nor is it an expression of entanglement with another system, but it is a fundamental object: a physical variable on which the motion of the Bohmian particles depends. Likewise, a density matrix can be a fundamental object in collapse theories or many-worlds theories [2]. But if a density matrix is a fundamental object in nature, then it makes sense to consider a random density matrix. For example, when considering the initial state of the universe, it is common to consider a random state in a particular subspace $\mathscr{H}_{PH} \subseteq \mathscr{H}$ of the Hilbert space of the universe associated with very low entropy; the statement that the initial state of the universe lies in the subspace $\mathscr{H}_{PH}$ is often called the past hypothesis (PH) [1]. Here, one usually has in mind a random pure state $\psi$ in $\mathscr{H}_{PH}$, but if states $\rho$ that are fundamentally mixed are possible, as illustrated by the above-mentioned versions of Bohmian mechanics, collapse theories, and many-worlds theories, we can also consider a random $\rho$ in $\mathscr{H}_{PH}$ [4]. Since one considers for $\psi$ the uniform distribution over $S(\mathscr{H}_{PH})$, the analog would involve the uniform distribution $u$ over all $\rho$ concentrated in $\mathscr{H}_{PH}$, which brings us to the question whether such a distribution $u$ exists, whether it is uniquely defined, and what it looks like.
Here, we propose a natural definition of $u$ on any Hilbert space $\mathscr{H}$ of finite dimension $d \in \mathbb{N}$. It will be clear from the definition that $u$ exists and is unique. For infinite-dimensional Hilbert spaces $\mathscr{H}$, it does not seem that a uniform distribution exists over the density matrices on $\mathscr{H}$, which is not surprising, as there is no uniform distribution either over $\mathscr{H}$ itself or over $S(\mathscr{H})$. Of course, our reasoning also yields, for any subspace $\mathscr{H}$ of a bigger Hilbert space $\mathscr{K}$ with $\dim \mathscr{H} < \infty$, a uniform probability distribution over the density matrices concentrated on $\mathscr{H}$, regardless of whether $\dim \mathscr{K}$ is finite or infinite. We show that $u$ is invariant under unitary operators on $\mathscr{H}$, so that a $u$-distributed $\rho$ has an eigenbasis that is uniformly distributed in the set of all orthonormal bases of $\mathscr{H}$. Furthermore, we compute the joint distribution of the eigenvalues of $\rho$. The expectation value of $\rho$ is $d^{-1} I$, where $I$ is the identity operator on $\mathscr{H}$.
In applications, the normalized measure $u$ may often play the role of a typicality measure (see, e.g., [8, Sec. 6] and [9, Sec. 7.1]) rather than that of a direct probability. That is, it may serve for defining what is true of most density matrices (that are, say, concentrated in a certain subspace such as $\mathscr{H}_{PH}$). For example, the properties of $u$ will entail that for a bipartite system, most density matrices are entangled, just as most pure states are [7].
Concerning the past hypothesis, another approach proposes to take the initial density matrix of the universe to be the normalized projection onto $\mathscr{H}_{PH}$ [3,5]. So, one could consider different kinds of initial conditions: a random $\psi$ with uniform distribution over $S(\mathscr{H}_{PH})$, a fixed density matrix proportional to the projection onto $\mathscr{H}_{PH}$, or a random density matrix with distribution $u$ over the density matrices concentrated in $\mathscr{H}_{PH}$. It seems reasonable to expect that all three theories are empirically equivalent, according to the appropriate sense of typicality. We leave that issue to another paper.

Definition of the Measure
Let $S$ be the space of self-adjoint operators on $\mathscr{H}$ (a real vector space of dimension $d^2$), $P \subset S$ the set of positive operators on $\mathscr{H}$, and $T_c$ the set of self-adjoint operators with trace $c$ (an affine subspace of $S$ of dimension $d^2 - 1$); the set $D$ of all density matrices is $D = P \cap T_1$. Since for $d = 1$, $D$ has only one element, we assume $d \geq 2$. Let $P^\circ$ denote the interior of $P$, which is the set of positive definite operators on $\mathscr{H}$, and $D^\circ = P^\circ \cap T_1$ the interior of $D$ in $T_1$ (the set of density matrices for which 0 is not an eigenvalue).
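The defining conditions of $D$ are easy to check numerically. The following sketch (function names are ours, not the paper's) tests membership in $D = P \cap T_1$ and counts the real dimension $d^2$ of $S$: a self-adjoint matrix has $d$ real diagonal entries and $d(d-1)/2$ complex off-diagonal entries.

```python
import numpy as np

def is_density_matrix(m, tol=1e-10):
    """Membership in D = P ∩ T_1: self-adjoint, positive, trace 1."""
    return (np.allclose(m, m.conj().T, atol=tol)
            and np.linalg.eigvalsh(m).min() >= -tol
            and np.isclose(np.trace(m).real, 1.0, atol=tol))

d = 3
# Real dimension of S: d real diagonal entries plus d(d-1)/2 complex
# off-diagonal entries (2 real parameters each).
dim_S = d + 2 * (d * (d - 1) // 2)
assert dim_S == d**2                    # so T_1 has dimension d^2 - 1

assert is_density_matrix(np.eye(d) / d)                  # maximally mixed state
assert not is_density_matrix(np.eye(d))                  # trace d, not 1
assert not is_density_matrix(np.diag([2.0, -1.0, 0.0]))  # not positive
```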
As in every affine space of finite dimension, there is a natural notion of volume in $T_1$: a nonzero translation-invariant measure on the Borel $\sigma$-algebra of $T_1$. It is well known that this measure is unique up to a global positive factor.

Proposition 1. For every such measure, the volume of D is neither zero nor infinite.
Proof. It is not zero because the interior $D^\circ$ is open in $T_1$ and non-empty. That it is finite will follow once we show that $D$ is compact and therefore bounded in $T_1$. The compactness of $D$ will follow from the fact that the continuous image of any compact set is compact. Here, the relevant mapping is $\varphi : \Lambda \times U(d) \to D$, $\varphi(\lambda, U) = U \,\mathrm{diag}(\lambda_1, \ldots, \lambda_d)\, U^{-1}$, where $U(d)$ is the unitary group of $\mathscr{H}$ and $\Lambda := \{\lambda \in \mathbb{R}^d : \lambda_1 \geq \ldots \geq \lambda_d \geq 0, \sum_i \lambda_i = 1\}$ is the set of possible eigenvalue vectors. $\varphi$ is clearly continuous, $U(d)$ is known to be compact, and $\Lambda$ (being closed and bounded) is clearly compact, so also $\Lambda \times U(d)$ is compact, which gets mapped onto $D$.
Thus, one can restrict the volume measure in $T_1$ to $D$ and normalize, which removes the arbitrary constant. The resulting measure is the desired measure $u$.

Proposition 2. $u$ is invariant under every unitary operator $U$ on $\mathscr{H}$, acting on density matrices as $\rho \mapsto U \rho U^{-1}$.

Proof. $U$ maps $S$ to itself in a linear way and maps $T_1$ to itself in an affine-linear way. Thus, any translation-invariant measure on $T_1$ will be mapped by $U$ to a multiple of itself. Since $U$ also maps $P$ to itself, it also maps $D$ to itself; since the volume of $D$ is neither zero nor infinite, the multiple must be 1. As a consequence, $U$ preserves volumes when acting on $T_1$, and so it preserves $u$.

Alternative proof. Equip $S$ with the Hilbert-Schmidt inner product
$\langle A, B \rangle := \mathrm{tr}(A^* B) = \mathrm{tr}(AB)$ , (4)
(the second equality because the operators in $S$ are self-adjoint), which is invariant under $U$. Using the inner product, one has a notion of area on every surface, in particular on $T_1$. Since the inner product is invariant under $U$, so is the surface area; $u$ is just the normalized surface area restricted to $D$, and it follows that $u$ is invariant under $U$.
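The invariance of the Hilbert-Schmidt inner product under conjugation by a unitary, on which the alternative proof rests, can be checked numerically. A small sketch (helper names are ours); it also works for a Haar-random unitary, generated here by the standard QR trick:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4

def haar_unitary(d, rng):
    # QR decomposition of a Ginibre matrix, with the usual phase fix,
    # yields a Haar-distributed unitary.
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diagonal(r) / np.abs(np.diagonal(r)))

def rand_self_adjoint(d, rng):
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (z + z.conj().T) / 2

A, B = rand_self_adjoint(d, rng), rand_self_adjoint(d, rng)
U = haar_unitary(d, rng)

hs = lambda X, Y: np.trace(X.conj().T @ Y)   # Hilbert-Schmidt inner product (4)

# Conjugation preserves the inner product: <U A U*, U B U*> = <A, B>.
assert np.isclose(hs(U @ A @ U.conj().T, U @ B @ U.conj().T), hs(A, B))
```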
Note that unitary invariance does not uniquely select the measure u. Unitary invariance means that the joint distribution of the eigenvectors of ρ is uniform while saying nothing about the joint distribution of the eigenvalues. The property that selects u as the natural normalized measure on D is that u is, when looked at in the right way, just volume.

Expectation and Covariance
The covariance of a random vector $V$ in a real vector space $\mathscr{V}$ with inner product $\langle \cdot, \cdot \rangle$ is defined to be the operator $C : \mathscr{V} \to \mathscr{V}$ determined by
$\langle v, C w \rangle = \mathbb{E}\big[ \langle v, V - \mathbb{E}V \rangle \, \langle V - \mathbb{E}V, w \rangle \big]$ for all $v, w \in \mathscr{V}$.

Proposition 3. A $u$-distributed $\rho$ has expectation
$\mathbb{E}\rho = d^{-1} I$
and covariance (in $\mathscr{V} = S$ with the Hilbert-Schmidt inner product (4))
$C = c(d) \, P_{T_0}$ ,
where $P_{T_0}$ denotes the orthogonal projection in $S$ onto $T_0$, the space of traceless self-adjoint operators, and $c(d) > 0$ is some constant.

Proof. Since $u$ is unitarily invariant, $\mathbb{E}\rho$ is invariant under $\rho \mapsto U \rho U^{-1}$ for every unitary $U$, and the only density matrix with this property is $d^{-1} I$; likewise, $C$ commutes with the action of $U(d)$ on $S$. To determine all $U(d)$-invariant operators on $S$, we first show that the representation of $U(d)$ on $S$ is the direct sum of two irreducible representation spaces, $\mathbb{R}I$ (the multiples of the identity) and $T_0$.
Clearly, $\mathbb{R}I$ and $T_0$ are $U(d)$-invariant (as $\mathrm{tr}(UAU^{-1}) = \mathrm{tr}(A)$), they are orthogonal in the Hilbert-Schmidt inner product, their sum is $S$, and $\mathbb{R}I$ is irreducible because it is 1-dimensional. In order to show that $T_0$ is irreducible, we show that $\{0\}$ and $T_0$ are its only invariant subspaces. To this end, let $\mathscr{U} \neq \{0\}$ be an invariant subspace of $T_0$; we show that $\mathscr{U} + \mathbb{R}I = S$, which implies that $\mathscr{U} = T_0$. Note that $\mathscr{U} + \mathbb{R}I$ is invariant. Let $0 \neq A \in \mathscr{U}$. Then $A$ has at least two different eigenvalues (being traceless and nonzero); choose an orthonormal basis of $\mathscr{H}$ that diagonalizes $A$. We show that all $B \in S$ that are diagonal in the same basis also lie in $\mathscr{U} + \mathbb{R}I$; it then follows by applying unitaries that $\mathscr{U} + \mathbb{R}I = S$. For this, it suffices to show that for $d \geq 2$ the only subspace of $\mathbb{R}^d$ that is invariant under permutation of components and contains $c := (1, 1, \ldots, 1)$ and some vector not proportional to $c$ is $\mathbb{R}^d$ itself. Indeed, if $W$ is such a subspace and $w \in W \setminus \mathbb{R}c$, then $w_i \neq w_j$ for some $i \neq j$. Let $w'$ be the vector obtained from $w$ by permuting $w_i$ and $w_j$; then $w - w' = (w_i - w_j)(e_i - e_j) \in W$, so $e_i - e_j \in W$. By permutation, all $e_i - e_j \in W$, and together with $c = \sum_k e_k$ this yields that all $(0, \ldots, 0, 1, 0, \ldots, 0) \in W$, so $W = \mathbb{R}^d$. Now, since $T_0$ is irreducible, we can apply Schur's lemma [12]. Since the irreducible representations $\mathbb{R}I$ (which has dimension 1) and $T_0$ (which has dimension $d^2 - 1$) are inequivalent, every $U(d)$-invariant operator on $S$ is of the form $a P_{\mathbb{R}I} + b P_{T_0}$. For the covariance operator $C$, since always $\rho - \mathbb{E}\rho \in T_0$, we have that $a = 0$. We can characterize the value of $c = c(d)$ as follows. Fix $\psi \in S(\mathscr{H})$ and set $X := \langle \psi | \rho | \psi \rangle = \langle P_\psi, \rho \rangle$ with $P_\psi := |\psi\rangle\langle\psi|$. On one hand,
$\langle P_\psi, C P_\psi \rangle = \mathbb{E}\big[ \langle P_\psi, \rho - \mathbb{E}\rho \rangle^2 \big] = \mathrm{Var}(X)$ .
On the other hand,
$\langle P_\psi, C P_\psi \rangle = c \, \langle P_\psi, P_{T_0} P_\psi \rangle = c \, (1 - d^{-1})$ ,
since $P_{T_0} P_\psi = P_\psi - d^{-1} I$ and $\langle P_\psi, P_\psi - d^{-1} I \rangle = 1 - d^{-1}$. Thus,
$c(d) = \frac{\mathbb{E}[X^2] - d^{-2}}{1 - d^{-1}}$ .
We did not succeed in evaluating the expectation value $\mathbb{E}[X^2]$.
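Proposition 3 can be probed by Monte Carlo. The sketch below (helper names are ours) samples $u$-distributed density matrices via the construction $\rho = AA^*/\mathrm{tr}(AA^*)$ with $A$ a Ginibre matrix, which is recalled in the Note added at the end of this paper; it checks that the sample mean of $\rho$ approaches $d^{-1}I$ and that $\mathrm{Var}\,\langle\psi|\rho|\psi\rangle$ is the same for different unit vectors $\psi$, as unitary invariance requires.

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 2, 50_000

def sample_rho(d, rng):
    # rho = A A* / tr(A A*), A Ginibre: known to be u-distributed
    # (see the Note added at the end of this paper).
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    M = A @ A.conj().T
    return M / np.trace(M).real

rhos = np.stack([sample_rho(d, rng) for _ in range(n)])

# Proposition 3: E[rho] = I/d.
assert np.allclose(rhos.mean(axis=0), np.eye(d) / d, atol=0.01)

# Unitary invariance forces Var <psi|rho|psi> to be independent of psi.
psi1 = np.array([1.0, 0.0])
psi2 = np.array([1.0, 1.0]) / np.sqrt(2)
var1 = np.var(np.real(np.einsum('i,nij,j->n', psi1.conj(), rhos, psi1)))
var2 = np.var(np.real(np.einsum('i,nij,j->n', psi2.conj(), rhos, psi2)))
assert abs(var1 - var2) < 0.005
```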

Distribution of Eigenvalues
Proposition 4. Let $\tilde{T}_1 := \{\lambda \in \mathbb{R}^d : \sum_i \lambda_i = 1\}$ be the hyperplane of eigenvalue vectors with sum 1. The joint distribution of the eigenvalues $\lambda_1 \geq \ldots \geq \lambda_d$ of a $u$-distributed $\rho$ has density
$g(\lambda_1, \ldots, \lambda_d) = N \prod_{1 \leq i < j \leq d} (\lambda_i - \lambda_j)^2$ (21)
on $\Lambda$, relative to the volume measure in $\tilde{T}_1$, with normalization constant $N > 0$.
Proof. The strategy of proof is to use, instead of volume on $S$, a Gaussian unitary ensemble, for which the distribution of the eigenvalues is known, and then let its variance tend to infinity, so that the distribution becomes flat on every compact set. The Gaussian unitary ensemble [11] is the probability distribution over self-adjoint $d \times d$ matrices $X_{ij} = A_{ij} + i B_{ij}$ with real part $A_{ij} = A_{ji}$ and imaginary part $B_{ij} = -B_{ji}$ such that all $A_{ij}$ ($i \leq j$) and all $B_{ij}$ ($i < j$) are independent random variables, where $A_{ij}$ with $i < j$ and $B_{ij}$ are Gaussian with mean 0 and variance $1/(2d)$, while the $A_{ii}$ are Gaussian with mean 0 and variance $1/d$. Thus, the joint distribution of all $X_{ij}$ has density (with lower-case symbols the possible values of random variables)
$f_X(x_{11}, x_{12}, \ldots, x_{dd}) \propto e^{-d \, \mathrm{tr}(x^2)/2}$ .
It is known [11] that the eigenvalues $\mu_1 \geq \ldots \geq \mu_d$ of $X$ have joint distribution with density
$g_X(\mu) \propto \prod_{i<j} (\mu_i - \mu_j)^2 \, e^{-d \sum_i \mu_i^2/2}$ .
That is, $\varphi^{-1}$ maps the distribution $f_X(x)\, dx$ to the product of $g_X(\mu)\, d\mu$ (with $\mu = (\mu_1, \ldots, \mu_d)$) and the uniform distribution on $U(d)$. Now consider $Y := \sigma X$ with arbitrary $\sigma > 0$ that we will ultimately let tend to infinity. $Y$ has density
$f_Y(y_{11}, y_{12}, \ldots, y_{dd}) \propto e^{-d \, \mathrm{tr}(y^2)/2\sigma^2}$ , (26)
and its eigenvalues $\nu_1 = \sigma \mu_1, \ldots, \nu_d = \sigma \mu_d$ have joint density
$g_Y(\nu) \propto \prod_{i<j} \frac{(\nu_i - \nu_j)^2}{\sigma^2} \, e^{-d \sum_i \nu_i^2/2\sigma^2}$ . (27)
Again, $\varphi^{-1}$ maps the distribution $f_Y(y)\, dy$ to the product of $g_Y(\nu)\, d\nu$ (with $\nu = (\nu_1, \ldots, \nu_d)$) and the uniform distribution on $U(d)$.
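The GUE normalization used here can be checked numerically: with the stated variances, each of the $d^2$ entries contributes $\mathbb{E}|X_{ij}|^2 = 1/d$, so $\mathbb{E}[\mathrm{tr}\,X^2] = d$. A sketch (helper names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 4, 20_000

def sample_gue(d, rng):
    # Off-diagonal A_ij, B_ij (i < j): mean 0, variance 1/(2d);
    # diagonal A_ii: mean 0, variance 1/d.
    A = rng.normal(scale=np.sqrt(1 / (2 * d)), size=(d, d))
    B = rng.normal(scale=np.sqrt(1 / (2 * d)), size=(d, d))
    X = np.triu(A, 1) + 1j * np.triu(B, 1)
    X = X + X.conj().T + np.diag(rng.normal(scale=np.sqrt(1 / d), size=d))
    return X

xs = [sample_gue(d, rng) for _ in range(n)]
assert all(np.allclose(x, x.conj().T) for x in xs[:10])   # self-adjoint

# E[tr X^2] = sum_ij E|X_ij|^2 = d^2 * (1/d) = d.
mean_tr2 = np.mean([np.trace(x @ x).real for x in xs])
assert abs(mean_tr2 - d) < 0.1
```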
Since $\varphi$ maps $\tilde{T}_1 \times U(d)$ onto $T_1$, where $\tilde{T}_1$ denotes the hyperplane of eigenvalue vectors $\nu \in \mathbb{R}^d$ with $\sum_i \nu_i = 1$, it maps the conditional distribution of $\nu$ on $\tilde{T}_1$, times the uniform distribution on $U(d)$, to the conditional distribution of $Y$ on $T_1$. Likewise, it maps the conditional distribution of $\nu$ on $\Lambda$, times the uniform distribution on $U(d)$, to the conditional distribution of $Y$ on $D$. Note that the conditional distribution of $Y$ on $T_1$ has density, up to a normalizing factor, given by $f_Y$ restricted to $T_1$, and the conditional distribution of $\nu$ on $\tilde{T}_1$ has density $g_Y$ on $\tilde{T}_1$ up to a factor. In the limit $\sigma \to \infty$, the right-hand side of (26) converges to 1, in fact uniformly on the compact set $D$; thus, also $f_Y$ (including the appropriate normalizing factor) converges uniformly to 1 on $D$. On the other hand, in the same way, the right-hand side of (27), after dropping the factors of $\sigma$ in the denominator, converges to $\prod_{i<j} |\nu_i - \nu_j|^2$, in fact uniformly on the compact set $\Lambda$. We want to draw the conclusion that $\varphi$ maps the limit of $g_Y$-conditional-on-$\Lambda$ (times the uniform distribution on $U(d)$) to the limit of $f_Y$-conditional-on-$D$ (i.e., to $u$).
To justify this conclusion, we note the following. The interior of $\Lambda$ in $\tilde{T}_1$ is
$\Lambda^\circ = \{\lambda \in \mathbb{R}^d : \lambda_1 > \ldots > \lambda_d > 0, \sum_i \lambda_i = 1\}$ .
Since $\Lambda$ is a convex set, its boundary has measure zero in $\tilde{T}_1$; thus, it does not matter whether we consider continuous measures on $\Lambda$ or $\Lambda^\circ$. For eigenvalues in $\Lambda^\circ$, the orthonormal basis of eigenvectors is unique up to phases; that is, $\varphi$ maps $\Lambda^\circ \times U(d)$, after quotienting out the phases, bijectively to the set of non-degenerate positive definite density matrices, a dense set of full $u$-measure in $D$. Since $\varphi$ is smooth (in particular) on $\tilde{T}_1 \times U(d)$, so is its Jacobian determinant; since $\Lambda \times U(d)$ is compact, the Jacobian is bounded on $\Lambda \times U(d)$. According to the transformation formula for integrals, the density of the pre-image is the Jacobian times the density of the image; as a consequence, if the Jacobian is bounded and the density of the image converges uniformly, then so does the density of the pre-image.
That is, we can pull the limit through ϕ, as we claimed.
The upshot is that $\varphi^{-1}$ maps $u$ to the product of the distribution on $\Lambda$ with density $N \prod_{i<j} (\lambda_i - \lambda_j)^2$ and the uniform distribution on $U(d)$, which proves (21) (and, by the way, proves again the unitary invariance of $u$).
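For $d = 2$ the density (21) can be made fully explicit: writing $\lambda_1 = (1+g)/2$, $\lambda_2 = (1-g)/2$ with gap $g = \lambda_1 - \lambda_2 \in [0, 1]$, (21) is proportional to $g^2$, so for instance $\mathbb{E}[g^2] = \int_0^1 g^4\,dg / \int_0^1 g^2\,dg = 3/5$. A quick check of this consequence by midpoint quadrature (a sketch, names ours):

```python
import numpy as np

# Midpoint grid for the eigenvalue gap g = lambda_1 - lambda_2 in [0, 1] (d = 2).
n = 200_000
g = (np.arange(n) + 0.5) / n
w = g**2                                   # density (21) for d = 2, unnormalized

mean_gap_sq = np.sum(g**2 * w) / np.sum(w)  # E[g^2] under (21)
assert np.isclose(mean_gap_sq, 3 / 5, atol=1e-6)
```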
Note added. After completion of this paper we have learned of prior works [10,14,13,15,16] that considered the measure we denote by $u$. Hall [10] asked which distribution over the density matrices "corresponds to minimal prior knowledge" or is "most random." That is perhaps the same as asking which distribution is uniform, or perhaps it is subtly different. He came up with three proposed answers, one of which is $u$, and regarded another one as "most random." Hall also arrived at the formula (21) for the distribution of the eigenvalues, but in a different way than we did. Życzkowski and Sommers computed [15] the volume of $D$ in $T_1$ according to the Hilbert-Schmidt metric (and thus the normalization constant in the definition of $u$), and computed [14, Eq. (3.7)] the normalization constant in Proposition 4 to be $N = (d^2 - 1)! / \prod_{k=1}^{d} [k!\,(k-1)!]$. They also showed [14] that for a uniformly random unit vector in $\mathscr{H} \otimes \mathscr{H}$, the reduced density matrix in $\mathscr{H}$ is $u$-distributed, and that, for a random $d \times d$ matrix $A$ from the Ginibre ensemble (i.e., one whose entries are independent complex Gaussians with mean 0 and variance 1), $\rho := AA^* / \mathrm{tr}(AA^*)$ has distribution $u$; asymptotics for large $d$ are studied in [16]. Tucci [13] also considered $u$, called it the "uniform ensemble of density matrices," and computed all moments of the entries of a $u$-distributed $\rho$.
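The two constructions of $u$ just recalled fit together neatly: a uniformly random unit vector in $\mathscr{H} \otimes \mathscr{H}$ is a normalized Gaussian vector, i.e., a normalized Ginibre matrix $\Psi$ when viewed as a $d \times d$ coefficient matrix, and its reduced density matrix is $\Psi\Psi^* = AA^*/\mathrm{tr}(AA^*)$. The following sketch (helper names are ours) compares the mean purity $\mathrm{tr}\,\rho^2$ of the two samplers:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 2, 40_000

def rho_from_pure(d, rng):
    # Reduced density matrix (partial trace over the second factor) of a
    # uniformly random unit vector in H (x) H, stored as a d x d matrix Psi.
    psi = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    psi /= np.linalg.norm(psi)      # Frobenius norm = vector norm in H (x) H
    return psi @ psi.conj().T

def rho_from_ginibre(d, rng):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    M = A @ A.conj().T
    return M / np.trace(M).real

purity = lambda r: np.trace(r @ r).real
p1 = np.mean([purity(rho_from_pure(d, rng)) for _ in range(n)])
p2 = np.mean([purity(rho_from_ginibre(d, rng)) for _ in range(n)])

# Both samplers target u, so their mean purities should agree.
assert abs(p1 - p2) < 0.01
```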