
Uniform probability distribution over all density matrices

Abstract

Let \(\mathscr {H}\) be a finite-dimensional complex Hilbert space and \(\mathscr {D}\) the set of density matrices on \(\mathscr {H}\), i.e., the positive operators with trace 1. Our goal in this note is to identify a probability measure u on \(\mathscr {D}\) that can be regarded as the uniform distribution over \(\mathscr {D}\). We propose a measure on \(\mathscr {D}\), argue that it can be so regarded, discuss its properties, and compute the joint distribution of the eigenvalues of a random density matrix distributed according to this measure.

Introduction

With every probability distribution \(\mu \) over wave functions, i.e., over the unit sphere \(\mathbb {S}(\mathscr {H})\) in a complex Hilbert space \(\mathscr {H}\), there is associated a density matrix

$$\begin{aligned} \rho = \int _{\mathbb {S}(\mathscr {H})}\mu (d\psi ) \, |\psi \rangle \langle \psi |\,. \end{aligned}$$
(1)

In this note, in contrast, we consider a probability distribution over density matrices, and we ask whether there exists a distribution that should be regarded as the uniform distribution u over all density matrices. Our considerations involve certain applications of random matrix theory.
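As a simple numerical illustration of (1) (our own sketch, not part of the text's argument): if \(\mu \) is the uniform distribution over \(\mathbb {S}(\mathscr {H})\), the associated density matrix is \(d^{-1}I\) by unitary invariance. The following Python snippet approximates the integral in (1) by averaging projections onto random unit vectors:

```python
import numpy as np

rng = np.random.default_rng(5)
d, n = 3, 5000

# Draw uniformly distributed unit vectors psi in C^d (normalized complex
# Gaussians) and average the projections |psi><psi|, approximating the
# integral (1) for mu = uniform distribution on the unit sphere.
mean_proj = np.zeros((d, d), dtype=complex)
for _ in range(n):
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    psi /= np.linalg.norm(psi)
    mean_proj += np.outer(psi, psi.conj())
mean_proj /= n
# By symmetry, the exact density matrix (1) for uniform mu is I/d.
```

Each term has trace 1 exactly, and the off-diagonal entries average out to zero.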

Here is our motivation for the question. Density matrices can arise not only as encoding random \(\psi \)s, but also as partial traces of states of larger systems; moreover, it is conceivable that even the fundamental quantum state of the universe is represented by a density matrix \(\rho \). For example, it is easy to set up a version of Bohmian mechanics in which the particles are guided, not by a wave function \(\psi \), but by a density matrix \(\rho \) [4, 12]. The density matrix, in such a theory, is not an expression of our ignorance of the actual pure state even if we continue to call it a “mixed state,” nor is it an expression of entanglement with another system, but it is a fundamental object—a physical variable on which the motion of the Bohmian particles depends. Likewise, a density matrix can be a fundamental object in collapse theories or many-worlds theories [2, 24]. But if a density matrix is a fundamental object in nature, then it makes sense to consider a random density matrix. For example, when considering the initial state of the universe, it is common to consider a random state in a particular subspace \(\mathscr {H}_\mathrm{PH}\) of the Hilbert space of the universe associated with very low entropy; the statement that the initial state of the universe lies in the subspace \(\mathscr {H}_\mathrm{PH}\) is often called the past hypothesis (PH) [1, 7]. Here, one usually has in mind a random pure state \(\psi \) in \(\mathscr {H}_\mathrm{PH}\), but if states \(\rho \) that are fundamentally mixed are possible, as illustrated by the above-mentioned versions of Bohmian mechanics, collapse theories, and many-worlds theories, we can also consider a random \(\rho \) in \(\mathscr {H}_\mathrm{PH}\) [9]. 
Since one considers for \(\psi \) the uniform distribution over \(\mathbb {S}(\mathscr {H}_\mathrm{PH})\), the analog would involve the uniform distribution u over all \(\rho \) concentrated in \(\mathscr {H}_\mathrm{PH}\), which brings us to the question whether such a distribution u exists, whether it is uniquely defined, and what it looks like.

Here, we propose a natural definition of u on any Hilbert space \(\mathscr {H}\) of finite dimension \(d\in \mathbb {N}\). It will be clear from the definition that u exists and is unique. For infinite-dimensional Hilbert spaces \(\mathscr {H}\), it does not seem that a uniform distribution exists over the density matrices on \(\mathscr {H}\), which is not surprising as there is no uniform distribution either over \(\mathscr {H}\) itself or \(\mathbb {S}(\mathscr {H})\). Of course, our reasoning also yields, for any subspace \(\mathscr {H}\) of a bigger Hilbert space \(\mathscr {K}\) with \(\dim \mathscr {H}<\infty \), a uniform probability distribution over the density matrices concentrated on \(\mathscr {H}\), regardless of whether \(\dim \mathscr {K}\) is finite or infinite. We show that u is invariant under unitary transformations of \(\mathscr {H}\), so that a u-distributed \(\rho \) has an eigenbasis that is uniformly distributed in the set of all orthonormal bases of \(\mathscr {H}\). Furthermore, we compute the joint distribution of the eigenvalues of \(\rho \). The expectation value of \(\rho \) is \(d^{-1}I\), where I is the identity operator on \(\mathscr {H}\).

In applications, the normalized measure u may often play the role of a typicality measure (see, e.g., [15, Sect. 6], [16, Sect. 7.1], [18, 22, 26]) rather than that of direct probability. That is, it may serve for defining what is true of most density matrices (that are, say, concentrated in a certain subspace such as \(\mathscr {H}_\mathrm{PH}\)). For example, the properties of u will entail that for a bipartite system, most density matrices are entangled, just as most pure states are [14].

Concerning the past hypothesis, another approach proposes to take the initial density matrix of the universe to be the normalized projection onto \(\mathscr {H}_\mathrm{PH}\) [8,9,10]. So, one could consider different kinds of initial conditions: a random \(\psi \) with uniform distribution over \(\mathbb {S}(\mathscr {H}_\mathrm{PH})\), a fixed density matrix proportional to the projection to \(\mathscr {H}_\mathrm{PH}\), or a random density matrix with distribution u over the density matrices in \(\mathscr {H}_\mathrm{PH}\). It seems reasonable to expect that all three theories are empirically equivalent, according to the appropriate sense of typicality. We leave that issue to another paper.

Definition of the measure

Let \(\mathscr {S}\) be the space of self-adjoint operators on \(\mathscr {H}\) (a real vector space of dimension \(d^2\)), \(\mathscr {P}\subset \mathscr {S}\) the set of positive operators on \(\mathscr {H}\), and \(\mathscr {T}_c\) the set of self-adjoint operators with trace c (an affine subspace of \(\mathscr {S}\) of dimension \(d^2-1\)); the set \(\mathscr {D}\) of all density matrices is \(\mathscr {D}=\mathscr {P}\cap \mathscr {T}_1\). Since for \(d=1\), \(\mathscr {D}\) has only one element, we assume \(d\ge 2\). Let \(\mathscr {P}^\circ \) denote the interior of \(\mathscr {P}\), which is the set of positive definite operators on \(\mathscr {H}\), and \(\mathscr {D}^\circ =\mathscr {P}^\circ \cap \mathscr {T}_1\) the interior of \(\mathscr {D}\) in \(\mathscr {T}_1\) (the set of density matrices for which 0 is not an eigenvalue).

As in every affine space of finite dimension, there is a natural notion of volume in \(\mathscr {T}_1\): a nonzero translation-invariant measure on the Borel \(\sigma \)-algebra of \(\mathscr {T}_1\). It is well known that this measure is unique up to a global positive factor.

Proposition 1

For every such measure, the volume of \(\mathscr {D}\) is neither zero nor infinite.

Proof

It is not zero because the interior \(\mathscr {D}^\circ \) is open and non-empty. That it is finite will follow once we show that \(\mathscr {D}\) is compact and therefore bounded in \(\mathscr {T}_1\). The compactness of \(\mathscr {D}\) will follow from the fact that the continuous image of any compact set is compact. Here, the relevant mapping is \(\varphi : \mathbb {R}^d\times U(d) \rightarrow \mathscr {S}\) (where U(d) denotes the unitary group of \(d\times d\) matrices, here regarded as orthonormal bases of \(\mathscr {H}\)) defined by

$$\begin{aligned} \varphi (\lambda _1,\ldots ,\lambda _d,\psi _1,\ldots ,\psi _d) = \sum _{i=1}^d \lambda _i | \psi _i \rangle \langle \psi _i|. \end{aligned}$$
(2)

\(\varphi \) is clearly continuous, and since U(d) is known to be compact and

$$\begin{aligned} \Lambda :=\Bigl \{(\lambda _1,\ldots ,\lambda _d)\in [0,1]^d: \lambda _1\ge \ldots \ge \lambda _d\,,~\sum _{i=1}^d \lambda _i=1 \Bigr \} \end{aligned}$$
(3)

is clearly compact (being closed and bounded in \(\mathbb {R}^d\)), the product \(\Lambda \times U(d)\) is compact as well, and \(\varphi \) maps it onto \(\mathscr {D}\). \(\square \)
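The map \(\varphi \) of (2) is easy to realize numerically. The following sketch (our illustration; the variable names are ours) builds \(\rho =\varphi (\lambda ,\psi _1,\ldots ,\psi _d)\) from a point of \(\Lambda \) and a Haar-random orthonormal basis, and confirms that the image lies in \(\mathscr {D}\):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# A point of Lambda: nonincreasing, nonnegative weights summing to 1.
lam = np.array([0.4, 0.3, 0.2, 0.1])

# A Haar-random orthonormal basis (the columns of a unitary), obtained by
# QR-decomposing a complex Gaussian matrix and correcting the phases.
Z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
Q, R = np.linalg.qr(Z)
Q = Q * (np.diagonal(R) / np.abs(np.diagonal(R)))  # phase fix for Haar measure

# phi(lambda_1,...,lambda_d, psi_1,...,psi_d) = sum_i lambda_i |psi_i><psi_i|
rho = sum(lam[i] * np.outer(Q[:, i], Q[:, i].conj()) for i in range(d))
# rho is self-adjoint, positive, and has trace 1, i.e., rho lies in D,
# with eigenvalues exactly lam.
```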

Thus, one can restrict the volume measure in \(\mathscr {T}_1\) to \(\mathscr {D}\) and normalize, which removes the arbitrary constant. The resulting measure is the desired measure u (Footnote 1).

Properties of the measure

Unitary invariance

Proposition 2

u is invariant under unitary transformations U of \(\mathscr {H}\).

Proof

U maps \(\mathscr {S}\) to itself linearly and \(\mathscr {T}_1\) to itself affine-linearly. Thus, any translation-invariant measure on \(\mathscr {T}_1\) is mapped by U to a multiple of itself. Since U also maps \(\mathscr {P}\) to itself, it maps \(\mathscr {D}\) to itself; as \(\mathscr {D}\) has finite, nonzero volume by Proposition 1, the multiple must be 1. Hence U preserves volume on \(\mathscr {T}_1\), and so it preserves u. \(\square \)

Proof

(Alternative proof.) Equip \(\mathscr {S}\) with the Hilbert–Schmidt inner product

$$\begin{aligned} \langle A,B \rangle = {{\,\mathrm{tr}\,}}(AB), \end{aligned}$$
(4)

which is invariant under the action of U (i.e., under \(A\mapsto UAU^{-1}\)). The inner product yields a notion of area on every surface, in particular on \(\mathscr {T}_1\), and this area is invariant under U because the inner product is. Since u is just the normalized surface area (Footnote 2) restricted to \(\mathscr {D}\), it, too, is invariant. \(\square \)

Note that unitary invariance does not uniquely select the measure u. Unitary invariance means that the joint distribution of the eigenvectors of \(\rho \) is uniform while saying nothing about the joint distribution of the eigenvalues. The property that selects u as the natural normalized measure on \(\mathscr {D}\) is that u is, up to a normalizing factor, “just volume” (Footnote 3). (After all, it is the volume measure in \(\mathscr {T}_1\) applied to subsets of \(\mathscr {D}\subset \mathscr {T}_1\).)

Expectation and covariance

The covariance of a random vector V in a real vector space \(\mathscr {V}\) with inner product \(\langle ~,\,\rangle \) is defined to be the operator \(C:\mathscr {V}\rightarrow \mathscr {V}\) such that

$$\begin{aligned} \langle v,Cv'\rangle = \mathbb {E}\Bigl [\bigl \langle v,(V-\mathbb {E}V)\bigr \rangle \bigl \langle (V-\mathbb {E}V),v'\bigr \rangle \Bigr ] \end{aligned}$$
(5)

for all \(v,v'\in \mathscr {V}\).

Proposition 3

A u-distributed \(\rho \) has expectation

$$\begin{aligned} \mathbb {E}\rho = \tfrac{1}{d}I \end{aligned}$$
(6)

and covariance (in \(\mathscr {V}=\mathscr {S}\) with Hilbert–Schmidt inner product (4))

$$\begin{aligned} C= c(d)\,P_{\mathscr {T}_0}\,, \end{aligned}$$
(7)

with \(c(d)>0\) some constant (Footnote 4) and \(P_{\mathscr {T}_0}\) the projection onto the set \(\mathscr {T}_0\) of traceless operators in \(\mathscr {S}\).

Proof

As a consequence of Proposition 2, \(\mathbb {E}\rho \) must be invariant under every unitary U, and since the only operators on \(\mathscr {H}\) invariant under all unitaries are the multiples of the identity I, (6) follows.

Likewise, C must be invariant under U(d). To determine all U(d)-invariant operators on \(\mathscr {S}\), we first show that the representation of U(d) on \(\mathscr {S}\) is the direct sum of two irreducible representation spaces, \(\mathbb {R}I\) (the multiples of the identity) and \(\mathscr {T}_0\).

Clearly, \(\mathbb {R}I\) and \(\mathscr {T}_0\) are U(d)-invariant (as \({{\,\mathrm{tr}\,}}(UAU^{-1})={{\,\mathrm{tr}\,}}(A)\)), they are orthogonal in the Hilbert–Schmidt inner product, their sum is \(\mathscr {S}\), and \(\mathbb {R}I\) is irreducible because it is 1-dimensional. To show that \(\mathscr {T}_0\) is irreducible, we show that \(\{0\}\) and \(\mathscr {T}_0\) are its only invariant subspaces. To this end, let \(\mathscr {U}\ne \{0\}\) be an invariant subspace of \(\mathscr {T}_0\); we show that \(\mathscr {U}+\mathbb {R}I=\mathscr {S}\), which implies that \(\mathscr {U}=\mathscr {T}_0\). Note that \(\mathscr {U}+\mathbb {R}I\) is invariant. Let \(0\ne A\in \mathscr {U}\). Then A has at least two different eigenvalues; choose an orthonormal basis of \(\mathscr {H}\) that diagonalizes A. We show that all \(B\in \mathscr {S}\) that are diagonal in the same basis also lie in \(\mathscr {U}+\mathbb {R}I\); it then follows by applying unitaries that \(\mathscr {U}+\mathbb {R}I=\mathscr {S}\). For this, it suffices to show that for \(d\ge 2\) the only subspace of \(\mathbb {R}^d\) that is invariant under permutation of components and contains \(\varvec{c}:=(1,1,\ldots ,1)\) and some vector not proportional to \(\varvec{c}\) is \(\mathbb {R}^d\) itself. Indeed, if \(\mathscr {W}\) is such a subspace and \(\varvec{w}\in \mathscr {W}\setminus \mathbb {R}\varvec{c}\), then \(w_i\ne w_j\) for some \(i\ne j\). Let \(\varvec{w}'\) be the vector obtained from \(\varvec{w}\) by permuting \(w_i\) and \(w_j\), then \(\varvec{w}'':=\varvec{w}-\varvec{w}' \in \mathscr {W}\) has \(w''_i= w_i-w_j\), \(w''_j=w_j-w_i\), while all other components of \(\varvec{w}''\) vanish. Thus, using permutations again, \((1,-1,0,\ldots ,0)\in \mathscr {W}\) and

$$\begin{aligned}&(1,0,\ldots ,0) =\nonumber \\&\quad \tfrac{1}{d}\Bigl [\varvec{c}+ (1,-1,0,0,\ldots ,0) + (1,0,-1,0,\ldots ,0) + \ldots + (1,0,\ldots ,0,-1)\Bigr ] \in \mathscr {W}\,. \end{aligned}$$
(8)

By permutation, all \((0,\ldots ,0,1,0,\ldots ,0)\in \mathscr {W}\), so \(\mathscr {W}=\mathbb {R}^d\).
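The combination (8) can be checked mechanically; here is a small sketch (our illustration) verifying it for \(d=5\):

```python
import numpy as np

d = 5
c = np.ones(d)    # the vector (1, 1, ..., 1)
e = np.eye(d)     # standard basis vectors as rows
# Right-hand side of (8): (1/d) [ c + (1,-1,0,...,0) + ... + (1,0,...,0,-1) ]
rhs = (c + sum(e[0] - e[j] for j in range(1, d))) / d
# Component 0 of the bracket is 1 + (d-1) = d; every other component is 1 - 1 = 0.
```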

Now, since \(\mathscr {T}_0\) is irreducible, we can apply Schur’s lemma [20]. Since the irreducible representations \(\mathbb {R}I\) (which has dimension 1) and \(\mathscr {T}_0\) (which has dimension \(d^2-1\ge 3\)) are inequivalent, Schur’s lemma yields that every U(d)-invariant operator \(C:\mathscr {S}\rightarrow \mathscr {S}\) is of the form

$$\begin{aligned} C= {\tilde{c}} P_{\mathbb {R}I} + c P_{\mathscr {T}_0}\,. \end{aligned}$$
(9)

For the covariance operator C, since always \(\rho -\mathbb {E}\rho \in \mathscr {T}_0\), we have that \({\tilde{c}}=0\). \(\square \)

We can characterize the value of \(c=c(d)\) as follows. Fix \(\psi \in \mathbb {S}(\mathscr {H})\) and set \(v=v'=|\psi \rangle \langle \psi |\). Then

$$\begin{aligned} \langle v,Cv\rangle&= \mathbb {E}\Bigl [\bigl ({{\,\mathrm{tr}\,}}[v(\rho -\mathbb {E}\rho )]\bigr )^2\Bigr ] \end{aligned}$$
(10)
$$\begin{aligned}&= \mathbb {E}\Bigl [\bigl (\langle \psi |\rho |\psi \rangle -d^{-1}\bigr )^2\Bigr ] \end{aligned}$$
(11)
$$\begin{aligned}&= \mathbb {E}\Bigl [\langle \psi |\rho |\psi \rangle ^2\Bigr ] -2d^{-1}\mathbb {E}\langle \psi |\rho |\psi \rangle +d^{-2} \end{aligned}$$
(12)
$$\begin{aligned}&= \mathbb {E}\Bigl [\langle \psi |\rho |\psi \rangle ^2\Bigr ] -d^{-2}\,. \end{aligned}$$
(13)

On the other hand,

$$\begin{aligned} \langle v,Cv\rangle&= c(d) \langle v,P_{\mathscr {T}_0}v\rangle \end{aligned}$$
(14)
$$\begin{aligned}&= c(d) \Bigl (\langle v,v\rangle - \langle v,P_{\mathbb {R}I}v\rangle \Bigr ) \end{aligned}$$
(15)
$$\begin{aligned}&= c(d) \Bigl (1 - \langle v,d^{-1/2}I\rangle \langle d^{-1/2}I, v\rangle \Bigr ) \end{aligned}$$
(16)
$$\begin{aligned}&= c(d) \bigl (1 - d^{-1}({{\,\mathrm{tr}\,}}v)^2 \bigr ) \end{aligned}$$
(17)
$$\begin{aligned}&= c(d) (1-d^{-1})\,. \end{aligned}$$
(18)

Thus,

$$\begin{aligned} c(d) = \tfrac{d}{d-1} \mathbb {E}\Bigl [\langle \psi |\rho |\psi \rangle ^2\Bigr ] - \tfrac{1}{d(d-1)}\,. \end{aligned}$$
(19)

We did not succeed in evaluating the expectation value (Footnote 5).
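Numerically, the expectation is easy to estimate: by the Ginibre construction recalled in the note on prior works below, \(\rho =AA^*/{{\,\mathrm{tr}\,}}(AA^*)\) with A a Ginibre matrix is u-distributed, so one can sample u directly. This sketch (our illustration) estimates \(\mathbb {E}[\langle \psi |\rho |\psi \rangle ^2]\) for \(d=2\) and the resulting \(c(d)\) via (19); the values agree with Tucci's closed forms quoted in Footnotes 4 and 5.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 2, 20000
psi = np.eye(d)[0].astype(complex)   # any fixed unit vector; u is unitarily invariant

# Sample u-distributed rho via the Ginibre construction rho = A A* / tr(A A*)
# and accumulate <psi|rho|psi>^2.
vals = np.empty(n)
for k in range(n):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    W = A @ A.conj().T
    rho = W / np.trace(W).real
    vals[k] = (psi.conj() @ rho @ psi).real ** 2

est = vals.mean()                              # ~ (d+1)/(d*(d**2+1)) = 0.3 for d = 2
c_d = d / (d - 1) * est - 1 / (d * (d - 1))    # Eq. (19); ~ 1/(d*(d**2+1)) = 0.1
```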

Distribution of eigenvalues

Let \(T_1\) be the plane

$$\begin{aligned} T_1:= \Bigl \{(\lambda _1,\ldots ,\lambda _d)\in \mathbb {R}^d: \sum _{i=1}^d \lambda _i=1 \Bigr \}\,. \end{aligned}$$
(20)

Proposition 4

Under u, the eigenvalues \(\lambda _1\ge \lambda _2 \ge \ldots \ge \lambda _d\) of \(\rho \) have joint distribution in \(\Lambda \subset T_1\) with density (Footnote 6)

$$\begin{aligned} f(\lambda _1,\ldots ,\lambda _d) = {\mathcal {N}} \prod _{1\le i<j \le d} |\lambda _i-\lambda _j|^2 \end{aligned}$$
(21)

relative to the volume measure in \(T_1\) with normalization constant \({\mathcal {N}}>0\).

Proof

The strategy of proof is to use, instead of volume on \(\mathscr {S}\), a Gaussian unitary ensemble, for which the distribution of the eigenvalues is known, and then let its variance tend to infinity, so that the distribution becomes flat on every compact set.

The Gaussian unitary ensemble (e.g., [13, 19, 21]) is the probability distribution over self-adjoint \(d\times d\) matrices \(X_{ij}= A_{ij}+iB_{ij}\) with real part \(A_{ij}=A_{ji}\) and imaginary part \(B_{ij}=-B_{ji}\) such that all \(A_{ij}\) (\(i\le j\)) and all \(B_{ij}\) (\(i<j\)) are independent random variables, where \(A_{ij}\) with \(i<j\) and \(B_{ij}\) are Gaussian with mean 0 and variance 1/(2d), while the \(A_{ii}\) are Gaussian with mean 0 and variance 1/d. Thus, the joint distribution of all \(X_{ij}\) has density (with lower case symbols the possible values of random variables)

$$\begin{aligned} f_X(x_{11},x_{12},\ldots ,x_{dd})&\propto \prod _{i<j} e^{-da_{ij}^2}e^{-db_{ij}^2} \prod _i e^{-da_{ii}^2/2} \end{aligned}$$
(22)
$$\begin{aligned}&= \prod _{i,j=1}^d e^{-d|x_{ij}|^2/2} \end{aligned}$$
(23)
$$\begin{aligned}&= e^{-d {{\,\mathrm{tr}\,}}x^2/2}\,. \end{aligned}$$
(24)
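The variance convention in (22)–(24) can be checked empirically. The sketch below (our illustration) assembles GUE matrices with exactly the stated variances and verifies self-adjointness and the entry variances:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n = 3, 40000

def gue(rng, d):
    # Off-diagonal entries X_ij = A_ij + i B_ij with Var(A_ij) = Var(B_ij) = 1/(2d);
    # real diagonal entries A_ii with Var(A_ii) = 1/d.
    s = np.sqrt(1.0 / (2 * d))
    A = rng.normal(scale=s, size=(d, d))
    B = rng.normal(scale=s, size=(d, d))
    X = np.triu(A + 1j * B, 1)
    X = X + X.conj().T
    X += np.diag(rng.normal(scale=np.sqrt(1.0 / d), size=d))
    return X

samples = np.array([gue(rng, d) for _ in range(n)])
var_offdiag = samples[:, 0, 1].real.var()   # expect about 1/(2d) = 1/6
var_diag = samples[:, 0, 0].real.var()      # expect about 1/d = 1/3
```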

It is known (e.g., [13, 19, 21]) that the eigenvalues \(\mu _1\ge \cdots \ge \mu _d\) of X have joint distribution with density

$$\begin{aligned} g_X(\mu _1,\ldots ,\mu _d) \propto \prod _{k=1}^d e^{-\frac{d}{2} \mu _k^2} \prod _{1\le i<j \le d} |\mu _i-\mu _j|^2\,. \end{aligned}$$
(25)

That is, \(\varphi ^{-1}\) maps the distribution \(f_X(x) \, dx\) to the product of \(g_X(\varvec{\mu })\,d\varvec{\mu }\) (with \(\varvec{\mu }=(\mu _1,\ldots ,\mu _d)\)) and the uniform distribution on U(d).

Now consider \(Y:=\sigma X\) with arbitrary \(\sigma >0\) that we will ultimately let tend to infinity. Y has density

$$\begin{aligned} f_Y(y_{11},y_{12},\ldots ,y_{dd}) \propto e^{-d {{\,\mathrm{tr}\,}}y^2/2\sigma ^2}\,, \end{aligned}$$
(26)

and its eigenvalues \(\nu _1=\sigma \mu _1,\ldots ,\nu _d=\sigma \mu _d\) have joint density

$$\begin{aligned} g_Y(\nu _1,\ldots ,\nu _d) \propto \prod _{k=1}^d e^{-d \nu _k^2/2\sigma ^2} \prod _{1\le i<j \le d} \frac{|\nu _i-\nu _j|^2}{\sigma ^2}\,. \end{aligned}$$
(27)

Again, \(\varphi ^{-1}\) maps the distribution \(f_Y(y) \, dy\) to the product of \(g_Y(\varvec{\nu })\, d\varvec{\nu }\) (with \(\varvec{\nu }=(\nu _1,\ldots ,\nu _d)\)) and the uniform distribution on U(d).

Since \(\varphi \) maps \(T_1\times U(d)\) to \(\mathscr {T}_1\), it maps the conditional distribution of \(\varvec{\nu }\) on \(T_1\), times the uniform distribution on U(d), to the conditional distribution of Y on \(\mathscr {T}_1\). Likewise, it maps the conditional distribution of \(\varvec{\nu }\) on \(\Lambda \), times the uniform distribution on U(d), to the conditional distribution of Y on \(\mathscr {D}\). Note that the conditional distribution of Y on \(\mathscr {T}_1\) has density, up to a normalizing factor, given by \(f_Y\) restricted to \(T_1\), and the conditional distribution of \(\varvec{\nu }\) on \(T_1\) has density \(g_Y\) on \(T_1\) up to a factor. In the limit \(\sigma \rightarrow \infty \), the right-hand side of (26) converges to 1, in fact uniformly on the compact set \(\mathscr {D}\); thus, also \(f_Y\) (including the appropriate normalizing factor) converges uniformly to 1 on \(\mathscr {D}\). On the other hand, in the same way, the right-hand side of (27), after dropping the factors of \(\sigma \) in the denominator, converges to \(\prod |\nu _i-\nu _j|^2\), in fact uniformly on the compact set \(\Lambda \). We want to draw the conclusion that \(\varphi \) maps the limit of \(g_Y\)-conditional-on-\(\Lambda \) (times the uniform distribution on U(d)) to the limit of \(f_Y\)-conditional-on-\(\mathscr {D}\) (i.e., to u).

To justify this conclusion, we note the following. The interior of \(\Lambda \) is

$$\begin{aligned} \Lambda ^\circ = \Bigl \{(\lambda _1,\ldots ,\lambda _d)\in T_1: \lambda _1>\cdots> \lambda _d>0 \Bigr \}\,. \end{aligned}$$
(28)

Since \(\Lambda \) is a convex set, its boundary has measure zero in \(T_1\); thus, it does not matter whether we consider continuous measures on \(\Lambda \) or \(\Lambda ^\circ \). For eigenvalues in \(\Lambda ^\circ \), the orthonormal basis of eigenvectors is unique up to phases; that is, \(\varphi \) maps \(\Lambda ^\circ \times [U(d)/U(1)^d]\) bijectively to the set of non-degenerate positive definite density matrices, a dense set of full u-measure in \(\mathscr {D}\). Since \(\varphi \) is smooth (in particular) on \(T_1\times U(d)\), so is its Jacobian determinant; since \(\Lambda \times U(d)\) is compact, the Jacobian is bounded on \(\Lambda \times U(d)\). According to the transformation formula for integrals, the density of the pre-image is the Jacobian times the density of the image; as a consequence, if the Jacobian is bounded and the density of the image converges uniformly, then so does the density of the pre-image. That is, we can pull the limit through \(\varphi \), as we claimed.

The upshot is that \(\varphi ^{-1}\) maps u to

$$\begin{aligned}&\lim _{\sigma \rightarrow \infty } g_Y(\varvec{\nu }) \, d\varvec{\nu }\times \mathrm {uniform}_{U(d)} \end{aligned}$$
(29)
$$\begin{aligned}&= {\mathcal {N}} \biggl (\prod _{1\le i<j \le d} |\nu _i-\nu _j|^2\biggr ) d\varvec{\nu }\times \mathrm {uniform}_{U(d)}\,, \end{aligned}$$
(30)

which proves (21) (and by the way again the unitary invariance of u). \(\square \)
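For \(d=2\), (21) becomes explicit: writing \(\lambda _1\ge \lambda _2=1-\lambda _1\), the density of \(\lambda _1\) on [1/2, 1] is proportional to \((2\lambda _1-1)^2\), which after normalization is \(6(2\lambda _1-1)^2\) and gives \(\mathbb {E}[\lambda _1]=7/8\). This value is easy to check against samples of u obtained by the Ginibre construction recalled in the note on prior works (the check is our illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 2, 20000

# Sample u via the Ginibre construction rho = A A* / tr(A A*) and record
# the larger eigenvalue lam1.  Under (21) with d = 2, lam1 has density
# 6*(2*lam1 - 1)**2 on [1/2, 1], so E[lam1] = 7/8.
lam1 = np.empty(n)
for k in range(n):
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    W = A @ A.conj().T
    lam1[k] = np.linalg.eigvalsh(W).max() / np.trace(W).real

mean_lam1 = lam1.mean()   # expect about 7/8 = 0.875
```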

Note added, concerning prior works

After completion of this paper, we have learned of prior works [17, 23, 27,28,29] that considered the measure we denote by u.

Hall [17] asked which distribution over the density matrices “corresponds to minimal prior knowledge” or is “most random.” That is perhaps the same as asking which distribution is uniform, or perhaps it is subtly different. He came up with three proposed answers, one of which is u, and regarded another one as “most random,” in fact the measure associated with the Bures metric [5, 6], another unitarily invariant metric that quantifies how easy it is to distinguish two density matrices through measurements. Hall also arrived at the formula (21) for the distribution of the eigenvalues, but in a different way than we did.

While the Bures measure is not unreasonable, u seems more convincing to us (see Footnote 3). Presumably, the reasons why Hall prefers the Bures measure while we prefer u have to do with the subtle differences in motivation: Hall talked about “minimal knowledge” and “extracting information,” whereas we are interested in a notion of typicality in nature, as reflected in the word “uniform”; one could say that we ask which measure would make “every \(\rho \) equally probable.” After all, the distribution over a set of initial fundamental density matrices of the universe has a law-like character that is independent of anyone’s knowledge or information, as it plays a role in grounding the objective arrows of time in such a universe. Now, if the target is an objective notion of “most,” it is natural to consider the volume measure on that set, even though it does not perform all the roles of an information-theoretic notion. For example, there is no need for a typicality measure to reflect how easily measurements could distinguish between two density matrices. That is to be expected, as our inference about the initial state of the universe is largely theoretical. Any measurement scheme about the universe will ultimately be about measurements of its subsystems, whose resultant probability distribution may be different from the one we select for the universal state. At the subsystem level, Hall’s information-theoretic considerations become salient, and in some situations the Bures measure may be more appropriate than u.

Życzkowski and Sommers computed [28] the volume of \(\mathscr {D}\) in \(\mathscr {T}_1\) according to the Hilbert–Schmidt metric (and thus the normalization constant in the definition of u), and computed [27, Eq. (3.7)] the normalization constant in Proposition 4 to be \({\mathcal {N}}= (d^2-1)!/\prod _{k=1}^d [k!(k-1)!]\). They also showed [27] that for a uniformly random unit vector in \(\mathscr {H}\otimes \mathscr {H}\), the reduced density matrix in \(\mathscr {H}\) is u-distributed, and that, for a random \(d\times d\) matrix A from the Ginibre ensemble (i.e., for which each entry is independent complex Gaussian with mean 0 and variance 1), \(\rho :=AA^*/{{\,\mathrm{tr}\,}}(AA^*)\) has distribution u; asymptotics for large d are studied in [29].
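The Ginibre construction just described is straightforward to implement. The following sketch (our illustration) samples \(\rho =AA^*/{{\,\mathrm{tr}\,}}(AA^*)\) and checks that the samples are density matrices whose empirical mean is close to \(d^{-1}I\), consistent with Proposition 3:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n = 3, 5000

def sample_u(rng, d):
    # Ginibre matrix A (iid complex Gaussian entries); rho = A A* / tr(A A*)
    # is u-distributed (Zyczkowski-Sommers).
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    W = A @ A.conj().T
    return W / np.trace(W).real

rho = sample_u(rng, d)
mean_rho = sum(sample_u(rng, d) for _ in range(n)) / n
# Each sample has trace 1 and nonnegative eigenvalues; the mean approaches I/d.
```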

Tucci [23] also considered u, called it the “uniform ensemble of density matrices,” and computed all moments of the entries of a u-distributed \(\rho \).

Availability of data and material

Not applicable.

Notes

  1. See also Footnote 3 below for reasons motivating this choice.

  2. We mention in passing another consequence of this fact. Balian [3] suggested as a general recipe for identifying the “least biased” measure over matrices with given properties to maximize, under the constraints provided by the given properties, the Gibbs entropy functional relative to the volume measure over all matrices associated with the Hilbert–Schmidt inner product (4). Arguably, the constraints of being self-adjoint and having trace 1 correspond to using the Hilbert–Schmidt metric on \(\mathscr {T}_1\), and the Gibbs entropy functional relative to that is maximized among measures on \(\mathscr {D}\) by u. (Of course, the Gibbs entropy does not play much of a role here, as it is maximized, in the absence of constraints on moments of the measure, by the measure that is put in as uniform; that is, u comes out because the Hilbert–Schmidt metric was put in.)

  3. One may think of several reasons making this choice natural. One is invariance under symmetries of \(\mathscr {T}_1\): In fact, the volume measures on \(\mathscr {T}_1\) are the only measures invariant under translations within \(\mathscr {T}_1\). However, it is not clear that translations of \(\mathscr {T}_1\) should be relevant to \(\mathscr {D}\), as \(\mathscr {D}\) is not left invariant by these translations. Another reason is the metric: The Hilbert–Schmidt metric, defined by the inner product (4), is a Riemann metric on \(\mathscr {T}_1\), and with every Riemann metric is associated a measure, and the measure associated with the Hilbert–Schmidt metric is one of the volume measures on \(\mathscr {T}_1\). To the extent that the Hilbert–Schmidt metric is natural, so is u. For comparison, Hall [17] advocated a different measure on \(\mathscr {D}\), associated with the Bures metric [5, 6], which quantifies the distinguishability between two density matrices (see also Sect. 4). A third, perhaps stronger while more vague, reason is simplicity: The volume measures on \(\mathscr {T}_1\) are the first measures to consider and the simplest choice. A final reason is intuition: the volume measures on \(\mathscr {T}_1\) are the measures that seem most obviously “uniform.”

  4. After completion of this paper, we have become aware of results of Tucci [23] that imply that \(c(d)=\frac{1}{d(d^2+1)}\).

  5. Tucci [23] showed that \(\mathbb {E}\bigl [\langle \psi |\rho |\psi \rangle ^2\bigr ] = \frac{d+1}{d(d^2+1)}\), which, via (19), leads to the formula of Footnote 4.

  6. This density formula is strikingly similar to the result of Weyl [25, Thm. 7.4.B] [11, Eq. (2.2)] that for a uniformly distributed random unitary \(d\times d\) matrix, the joint distribution of the eigenvalues \(\lambda _j\) has density proportional to \(\prod _{1\le i<j \le d} |\lambda _i-\lambda _j|^2\) relative to \(d\theta _1\cdots d\theta _d\), where \(\lambda _j=e^{i\theta _j}\). Of course, in the unitary case, \(\lambda _j\) lies on the complex unit circle, whereas in our case \(\lambda _j\) is real.

References

  1. Albert, D.Z.: Time and Chance. Harvard University Press, Cambridge (2000)


  2. Allori, V., Goldstein, S., Tumulka, R., Zanghì, N.: Predictions and primitive ontology in quantum foundations: a study of examples. Br. J. Philos. Sci. 65: 323–352 (2013). arXiv:1206.0019

  3. Balian, R.: Random matrices and information theory. Il Nuovo Cimento B 57, 183–193 (1968)


  4. Bell, J.S.: De Broglie–Bohm, delayed-choice double-slit experiment, and density matrix. Int. J. Quant. Chem. 14: 155–159 (1980). Reprinted on pp. 111–116 in J.S. Bell: Speakable and Unspeakable in Quantum Mechanics. Cambridge University Press (1987)

  5. Bengtsson, I., Życzkowski, K.: Geometry of Quantum States, 2nd edn. Cambridge University Press, Cambridge (2017)


  6. Bures, D.: An extension of Kakutani’s theorem on infinite product measures to the tensor product of semifinite \(w^*\)-algebras. Trans. Am. Math. Soc. 135, 199–212 (1969)


  7. Callender, C.: Thermodynamic asymmetry in time. In E.N. Zalta (ed.): Stanford encyclopedia of philosophy (Winter 2016 Edition) (2016). http://plato.stanford.edu/entries/time-thermo

  8. Chen, E.K.: Quantum mechanics in a time-asymmetric universe: on the nature of the initial quantum state. Br. J. Philos. Sci. forthcoming (2018). arXiv:1712.01666

  9. Chen, E.K.: Quantum states of a time-asymmetric universe: wave function, density matrix, and empirical equivalence (2019). arXiv:1901.08053

  10. Chen, E.K.: Time’s arrow in a quantum universe: on the status of statistical mechanical probabilities. In: Allori, V. (ed) Statistical mechanics and scientific explanation: determinism, indeterminism and laws of nature. World Scientific, pp. 479–515 (2020). arXiv:1902.04564

  11. Diaconis, P., Shahshahani, M.: On the eigenvalues of random matrices. J. Appl. Prob. 31, 49–62 (1994)


  12. Dürr, D., Goldstein, S., Tumulka, R., Zanghì, N.: On the role of density matrices in Bohmian mechanics. Found. Phys. 35: 449–467, (2005). arXiv: quant-ph/0311127

  13. Forrester, P.J.: Log-gases and random matrices. Princeton University Press, Princeton (2010)


  14. Goldstein, S., Lebowitz, J.L., Tumulka, R., Zanghì, N.: Canonical Typicality. Phys. Rev. Lett. 96(5): 050403 (2006). arXiv: cond-mat/0511091

  15. Goldstein, S., Lebowitz, J.L., Tumulka, R., Zanghì, N.: Long-time behavior of macroscopic quantum systems. Eur. Phys. J. H 35: 173–200, (2010). arXiv:1003.2129

  16. Goldstein, S., Lebowitz, J.L., Tumulka, R., Zanghì, N.: Gibbs and Boltzmann entropy in classical and quantum mechanics. In: Allori, V. (ed) Statistical mechanics and scientific explanation: determinism, indeterminism and laws of nature. World Scientific, pp. 519–581 (2020). arXiv:1903.11870

  17. Hall, M.J.W.: Random quantum correlations and density operator distributions. Phys. Lett. A 242(3): 123–129 (1998). arXiv: quant-ph/9802052

  18. Lazarovici, D., Reichert, P.: Typicality, irreversibility and the status of macroscopic laws. Erkenntnis 80(4): 689-716 (2015). http://philsci-archive.pitt.edu/10895/

  19. Mehta, M.L.: Random matrices. Elsevier, Amsterdam (2004)


  20. Schur’s lemma. In Wikipedia, the free encyclopedia. http://en.wikipedia.org/wiki/Schur%27s_lemma (Accessed 3/11/2020)

  21. Tao, T.: Topics in Random Matrix Theory. American Mathematical Society, Providence (2012)


  22. Tasaki, H.: Typicality of thermal equilibrium and thermalization in isolated macroscopic quantum systems. J. Stat. Phys. 163: 937–997, (2016). arXiv:1507.06479

  23. Tucci, R.R.: All Moments of the uniform ensemble of quantum density matrices (2002). arXiv: quant-ph/0206193

  24. Wallace, D.: The emergent multiverse. Oxford University Press, Oxford (2012)


  25. Weyl, H.: The classical groups, 2nd edn. Princeton University Press, Princeton (1946)


  26. Wilhelm, I.: Typical: A theory of typicality and typicality explanation. Br. J. Philos. Sci. forthcoming (2019). http://philsci-archive.pitt.edu/15973/

  27. Życzkowski, K., Sommers, H.-J.: Induced measures in the space of mixed quantum states. J. Phys. A: Math. Gen. 34(35): 7111–7125 (2001). arXiv: quant-ph/0012101

  28. Życzkowski, K., Sommers, H.-J.: Hilbert–Schmidt volume of the set of mixed quantum states. J. Phys. A: Math. Gen. 36: 10115–10130 (2003). arXiv: quant-ph/0302197

  29. Życzkowski, K., Sommers, H.-J.: Statistical properties of random density matrices. J. Phys. A: Math. Gen. 37: 8457–8466, (2004). arXiv: quant-ph/0405031


Acknowledgements

We thank Stefan Keppeler for helpful discussion and Michael Hall, Christian Majenz, Ion Nechita, Michael Walter, and Karol Życzkowski for pointing to relevant literature.

Open Access

This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Funding

Open Access funding enabled and organized by Projekt DEAL.

Author information

Corresponding author

Correspondence to Roderich Tumulka.

Ethics declarations

Conflict of interest

Not applicable.



About this article


Cite this article

Chen, E.K., Tumulka, R. Uniform probability distribution over all density matrices. Quantum Stud.: Math. Found. 9, 225–233 (2022). https://doi.org/10.1007/s40509-021-00267-5


Keywords

  • Random matrix
  • Finite-dimensional Hilbert space
  • Past hypothesis
  • Typicality