1 Introduction

In the 21st century, a modern perspective on quantum statistical mechanics is to consider an individual closed system in a pure state and investigate its and its subsystems’ thermodynamic behavior; see, e.g., [1, 2, 7–9, 11, 12, 15, 16, 20, 31, 34, 36–39, 41, 42, 45, 48] after pioneering work in [4, 40, 44, 47, 51].

Roughly speaking, “canonical typicality” is the statement that the reduced density matrix of a subsystem obtained from a pure state of the total system is nearly deterministic if the pure state is randomly drawn from a sufficiently large subspace and the subsystem is not too large. More precisely, the original statement of canonical typicality [7, 18, 26, 31] asserts that for most pure states \(\psi \) from a high-dimensional (e.g., micro-canonical) subspace \({\mathcal {H}}_R\) of the Hilbert space \({\mathcal {H}}_S\) of a macroscopic quantum system S and for a subsystem a of \(S=a\cup b\) so that \({\mathcal {H}}_S={\mathcal {H}}_a \otimes {\mathcal {H}}_b\), the reduced density matrix

$$\begin{aligned} \rho _a^\psi := {{\,\textrm{tr}\,}}_b |\psi \rangle \langle \psi | \end{aligned}$$
(1)

is close to the partial trace of \(\rho _R:=P_R/d_R\) (the normalized projection to \({\mathcal {H}}_R\)) and thus deterministic, provided that \(d_R:=\dim {\mathcal {H}}_R\) is sufficiently large:

$$\begin{aligned} \rho _a^\psi \approx {{\,\textrm{tr}\,}}_b \rho _R\,. \end{aligned}$$
(2)

Here, the words “most \(\psi \)” refer to the uniform distribution \(u_R\) (normalized surface area measure) over the unit sphere

$$\begin{aligned} {\mathbb {S}}({\mathcal {H}}_R) := \{\psi \in {\mathcal {H}}_R: \Vert \psi \Vert =1\} \end{aligned}$$
(3)

in \({\mathcal {H}}_R\). The name “canonical typicality” comes from the fact that if \({\mathcal {H}}_R={\mathcal {H}}_\textrm{mc}\) is a micro-canonical subspace and thus \(\rho _R=\rho _\textrm{mc}\) a micro-canonical density matrix, then \({{\,\textrm{tr}\,}}_b \rho _\textrm{mc}\) is close to the canonical density matrix

$$\begin{aligned} \rho _{a,\textrm{can}} = \frac{1}{Z_a}e^{-\beta H_a} \end{aligned}$$
(4)

for a with suitable \(\beta \), provided b is large and the interaction between a and b is weak; see, e.g., [18] for a summary of the standard derivation of this fact.
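The statement (2) is easy to probe numerically. The following sketch (Python/NumPy, not part of the original analysis) takes the special case \({\mathcal {H}}_R={\mathcal {H}}_S\), so that \(\rho _R=I/D\) and \({{\,\textrm{tr}\,}}_b\rho _R=I_a/d_a\), draws \(\psi \) uniformly from the unit sphere, and measures the trace-norm distance in (2):

```python
import numpy as np

rng = np.random.default_rng(0)
d_a, d_b = 2, 500  # small subsystem a, large environment b

# Draw psi uniformly from the unit sphere of H_a (x) H_b: a complex
# standard Gaussian vector, normalized, is uniformly distributed there.
psi = rng.standard_normal((d_a * d_b, 2)) @ np.array([1, 1j])
psi /= np.linalg.norm(psi)

# Reduced density matrix rho_a^psi = tr_b |psi><psi| via the coefficient
# matrix C with psi = sum_{i,j} C[i, j] |i>_a |j>_b.
C = psi.reshape(d_a, d_b)
rho_a = C @ C.conj().T

# Here H_R is all of H_S, so tr_b rho_R = I_a / d_a.
diff = rho_a - np.eye(d_a) / d_a
trace_dist = np.abs(np.linalg.eigvalsh(diff)).sum()
print(trace_dist)  # small when d_b is large
```

Repeating this for many draws of \(\psi \) shows that the trace distance is small not just on average but for the overwhelming majority of draws, which is the content of "most \(\psi \)."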

In this paper, we replace the uniform distribution by other, much more general distributions, so-called GAP measures, and show that for them a generalized canonical typicality remains valid. For any density matrix \(\rho \) replacing \(\rho _R\) in \({\mathcal {H}}_S\), \({\textrm{GAP}}(\rho )\) is the most spread-out distribution over \({\mathbb {S}}({\mathcal {H}}_S)\) with density matrix \(\rho \); the acronym stands for Gaussian adjusted projected measure [19, 23]. For \(\rho =\rho _{\textrm{can}}\), it arises as the distribution of wave functions in thermal equilibrium [17, 19]. If a system is initially in thermal equilibrium for the Hamiltonian \(H_0\) but then driven out of equilibrium by means of a time-dependent \(H_t\), its wave function will still be \({\textrm{GAP}}(\rho )\)-distributed for suitable \(\rho \). For general \(\rho \), we think of \({\textrm{GAP}}(\rho )\) as the natural ensemble of wave functions with density matrix \(\rho \); for a more detailed description, see Sect. 2.2.

We prove quantitative bounds asserting that for any \(\rho \) with small eigenvalues (so \(\rho \) is far from pure) and \({\textrm{GAP}}(\rho )\)-most \(\psi \in {\mathbb {S}}({\mathcal {H}}_S)\),

$$\begin{aligned} \rho _a^\psi \approx {{\,\textrm{tr}\,}}_b \rho \,. \end{aligned}$$
(5)

Some reasons for seeking this generalization are as follows: first, that it is mathematically natural; second, that in situations in which we can ask what the actual distribution of \(\psi \) is (more detail later), this distribution might not be uniform; third, that it shows that the sharp cut-off of energies involved in the definition of \({\mathcal {H}}_\textrm{mc}\) actually plays no role; and finally, that it informs and extends our picture of the equivalence of ensembles. A more detailed discussion of these reasons is given in Sect. 2.1.

As a direct consequence of generalized canonical typicality let us mention that, just as canonical typicality implies that for most pure states \(\psi \in {\mathbb {S}}({\mathcal {H}}_S)\) the entanglement entropy \(-{{\,\textrm{tr}\,}}(\rho _a^\psi \log \rho _a^\psi )\) has nearly the maximal value \(\log d_a\) with \(d_a=\dim {\mathcal {H}}_a\) [22] (because \(\rho _a^\psi \approx {{\,\textrm{tr}\,}}_b I_S/D = I_a/d_a\) with I the identity operator and \(D=d_ad_b=\dim {\mathcal {H}}_S\)), generalized canonical typicality implies that \({\textrm{GAP}}(\rho )\)-typical \(\psi \) have entanglement entropy \(-{{\,\textrm{tr}\,}}(\rho _a^\psi \log \rho _a^\psi )\approx -{{\,\textrm{tr}\,}}(\rho _a\log \rho _a)\) with \(\rho _a={{\,\textrm{tr}\,}}_b\rho \).
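A numerical illustration of this entropy statement in the special case \(\rho =I_S/D\) (uniform \(\psi \); an illustrative sketch, not from the original text): the entanglement entropy of a typical pure state is nearly \(\log d_a\).

```python
import numpy as np

rng = np.random.default_rng(1)
d_a, d_b = 4, 400

# Uniform random pure state of the composite system.
psi = rng.standard_normal((d_a * d_b, 2)) @ np.array([1, 1j])
psi /= np.linalg.norm(psi)
C = psi.reshape(d_a, d_b)
rho_a = C @ C.conj().T  # reduced density matrix of subsystem a

# Entanglement entropy -tr(rho_a log rho_a) from the eigenvalues.
p = np.linalg.eigvalsh(rho_a)
p = p[p > 1e-12]
S = -(p * np.log(p)).sum()
print(S, np.log(d_a))  # S is just below the maximal value log(d_a)
```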

Since different probability distributions over the unit sphere in a Hilbert space \({\mathcal {H}}\) can have the same density matrix, and since the outcome statistics of any experiment depend only on the density matrix, it may seem at first irrelevant to even consider distributions over \({\mathbb {S}}({\mathcal {H}})\). However, for example, an ensemble of spins prepared so that (about) half are in state \(\bigl | \uparrow \bigr \rangle \) and the others in \(\bigl | \downarrow \bigr \rangle \) is physically different from a uniform ensemble over \({\mathbb {S}}({\mathbb {C}}^2)\), even though both ensembles have density matrix \(\tfrac{1}{2}I\). Likewise, for an ensemble of particles prepared by taking them from a system in thermal equilibrium, the wave function is GAP-distributed (see Sect. 2.2). More basically, probability distributions play a key role in any typicality statement, i.e., one saying that some condition is satisfied by most wave functions—“most” relative to a certain distribution; such a statement cannot be formulated in terms of density matrices.

We note that the generalization of canonical typicality from uniform measures to GAP measures is not straightforward. First, not every measure \(\mu \) over \({\mathbb {S}}({\mathcal {H}}_S)\) with a given density matrix \(\rho \) with small eigenvalues makes it true that for \(\mu \)-most \(\psi \), \(\rho _a^\psi \approx {{\,\textrm{tr}\,}}_b \rho \). We give a counter-example in Remark 15 in Sect. 3. Second, if \(\rho \) is not close to a multiple of a projection, then \({\textrm{GAP}}(\rho )\) is far from uniform; specifically, its density will at some points be larger than at others by a factor like \(\exp (D)\) (see Remark 13). And third, even measures close to uniform (for example the von Mises–Fisher distribution, see again Remark 13) can fail to satisfy generalized canonical typicality.

In this paper, we prove generalized canonical typicality in rigorous form by providing error bounds for (5) at any desired confidence level (the level implicit in the word “most”); see Theorems 1 and 3. Compared to the known error bounds based on \(u_R\), we prove essentially the same bounds with \(d_R\) replaced by the reciprocal of the largest eigenvalue of \(\rho \),

$$\begin{aligned} \frac{1}{p_{\max }}:= \frac{1}{\Vert \rho \Vert } \end{aligned}$$
(6)

with \(\Vert \cdot \Vert \) the operator norm. Thus, the approximation is good as soon as no single direction contributes too much to \(\rho \). In particular, for \(\rho =\rho _R\), our results essentially reproduce the known error bounds. As a central part of our proof, we also establish a variant of Lévy’s lemma [24, 25, 27] (a statement about the concentration of measure on a high-dimensional sphere, see below) for GAP measures instead of the uniform measure (Theorem 2). In particular, our version of Lévy’s lemma also holds on infinite-dimensional spheres, where the uniform measure does not exist.

Furthermore, we provide several corollaries. The first one shows that for any observable and \({\textrm{GAP}}(\rho )\)-most \(\psi \), the coarse-grained Born distribution is near a \(\psi \)-independent one (see Remark 4 in Sect. 3.1 for discussion). The second arises from evolving the observable with time and provides a form of dynamical typicality [2], which means that for typical initial wave functions, the time evolution “looks” the same; here, “typical” refers to the \({\textrm{GAP}}(\rho )\) distribution, and “look” (which in [48] meant the macroscopic appearance) refers to the Born distribution for the observable considered. In fact, Corollary 2 even shows that the relevant kind of closeness (to a t-dependent but \(\psi \)-independent distribution) holds jointly for most \(t\in [0,T]\). As a further variant (Corollary 3), dynamical typicality also holds when “look” refers to \(\rho _a^\psi \). Put differently, the statement here is that for \({\textrm{GAP}}(\rho )\)-most \(\psi \) and most \(t\in [0,T]\),

$$\begin{aligned} \rho _a^{\psi _t} \approx {{\,\textrm{tr}\,}}_b \rho _t\,, \end{aligned}$$
(7)

where \(\psi _t=U_t \,\psi \) and \(\rho _t=U_{t} \, \rho \, U_{t}^*\) for an arbitrary unitary time evolution \(U_{t}\) (allowing for time-dependent \(H_t\)). In the original version of canonical typicality, one particularly considers for \(\rho _R\) the micro-canonical density matrix \(\rho _\textrm{mc}\) for a fixed Hamiltonian H, for which the time evolution yields nothing interesting because it is invariant anyway; but if we consider arbitrary \(\rho \), then \(\rho \) can evolve in a non-trivial way even for fixed H.

Another corollary (Corollary 4) concerns the conditional wave function \(\psi _a\) of a (which is the natural notion of the subsystem wave function for a, see Sect. 2.2 for the definition): It is known that if \(d_R\) is large, then for \(u_R\)-most \(\psi \) and most bases of \({\mathcal {H}}_b\), the Born distribution of \(\psi _a\) is approximately \({\textrm{GAP}}({{\,\textrm{tr}\,}}_b \rho _R)\). We generalize this statement as follows: if \(d_b\) is large and \(\rho \) has small eigenvalues, then for \({\textrm{GAP}}(\rho )\)-most \(\psi \) and most bases of \({\mathcal {H}}_b\), the Born distribution of \(\psi _a\) is approximately \({\textrm{GAP}}({{\,\textrm{tr}\,}}_b \rho )\).

The results of this paper can also be regarded as a variant of equivalence-of-ensembles in quantum statistical mechanics, i.e., as a new instance of the well-known phenomenon in statistical mechanics that it does not make a big difference whether we use the micro-canonical ensemble or the canonical one (for suitable \(\beta \)) or another equilibrium ensemble. Indeed, the uniform distribution over the unit sphere in a micro-canonical subspace can be regarded as a quantum analog of the micro-canonical distribution in classical statistical mechanics, and the GAP measure associated with a canonical density matrix as a quantum analog of the canonical distribution; see also Remark 11 in Sect. 3.2.

Our results on generalized canonical typicality (5) provide two kinds of error bounds based on two strategies of proof. They are roughly analogous to the following two bounds on the probability that a random variable X deviates from its expectation \({\mathbb {E}}X\) by more than n standard deviations \(\sqrt{{{\,\textrm{Var}\,}}(X)}\): First, the Chebyshev inequality yields the bound \(1/n^2\), which is valid for any distribution of X. Second, the Gaussian distribution has very light tails, so if X is Gaussian distributed, then the aforementioned probability is actually smaller than \(e^{-n}\) (a type of bound known as a Chernoff bound), so the Chebyshev bound would be very coarse. Likewise, the two kinds of bound we provide are based, respectively, on the Chebyshev inequality and the Chernoff bound (in the form of Lévy’s lemma). The former is polynomial in \(p_{\max }\), the latter exponential as in \(e^{-1/p_{\max }}\). For the original statement of canonical typicality (using \(u_R\)), the Chebyshev-type bounds were first given by Sugita [46], the Chernoff-type bounds by Popescu et al. [30]. Our proof of the Chebyshev-type bounds makes heavy use of results of Reimann [35].
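The gap between the two kinds of tail bounds is easy to see numerically (an illustrative sketch, not part of the proofs): for a standard Gaussian X, the empirical probability of deviating by more than n standard deviations lies far below the distribution-independent Chebyshev bound \(1/n^2\), and also below \(e^{-n}\).

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal(10**6)  # samples of a standard Gaussian (mean 0, std 1)

for n in (2, 3, 4):
    empirical = np.mean(np.abs(X) > n)  # P(|X - EX| > n * std), empirically
    chebyshev = 1 / n**2                # distribution-independent bound
    chernoff = np.exp(-n)               # the coarse exponential bound from the text
    print(f"n={n}: empirical={empirical:.2e}, "
          f"Chebyshev={chebyshev:.2e}, e^-n={chernoff:.2e}")
```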

A version of Lévy’s lemma was also established for the mean-value ensemble on a finite-dimensional Hilbert space \({\mathcal {H}}\) [28]. This is the uniform distribution on \({\mathbb {S}}({\mathcal {H}})\) restricted to the set \(\{\psi \in {\mathbb {S}}({\mathcal {H}}):\langle \psi |A|\psi \rangle =a\}\) for a given observable A and a value a satisfying further conditions. However, as the authors of [28] themselves point out, the physical relevance of this ensemble remains unclear. Dynamical typicality has also been established for the mean-value ensemble; see [39] for an overview.

The remainder of this paper is organized as follows: In Sect. 2, we elucidate the motivation and background. In Sect. 3, we formulate and discuss our results. In Sect. 4, we provide the proofs. In Sect. 5, we conclude.

2 Motivation and Background

2.1 Motivation

Canonical typicality is often (rightly) used as a justification and derivation of the canonical density matrix \(\rho _\textrm{can}\) from something simpler, viz., from the uniform distribution over the unit sphere in an appropriate subspace \({\mathcal {H}}_\textrm{mc}\). So it may appear surprising that here we consider other distributions instead of the uniform one. That is why we give some elucidation in this section.

The uniform distribution for \(\psi \) can appear in either of two roles: as a measure of probability or as a measure of typicality. What is the difference? The concept of probability, in the narrower sense used here, refers to a physical situation that occurs many times or can be made to occur many times, so that one can meaningfully speak of the empirical distribution of part of the physical state, such as \(\psi \), over the ensemble of trials. In contrast, the concept of typicality, in the sense used here, refers to a hypothetical ensemble and applies also in situations that occur at most a few times or do not occur repeatedly at all, such as the universe as a whole; it defines what a typical solution of an equation or theory looks like, or the meaning of “most.” Typicality is used in defining what counts as thermal equilibrium (e.g., [10] and references therein), but also in certain laws of nature such as the past hypothesis (a proposed law about the initial micro-state of the universe serving as the basis of the arrow of time; see [21, Sec. 5.7] for a formulation in terms of typicality). Moreover, it plays a key role in the explanation of certain phenomena by showing that they occur in “most” cases.

The mathematical statements apply regardless of whether we think of the measure as probability or typicality. If we use \(u_\textrm{mc}\) as probability, then the question naturally arises whether the actual distribution of \(\psi \) is uniform, and generalizations to other measures are called for. The GAP measures are then particularly relevant, not just as a natural choice of measures, but also because they arise as the thermal equilibrium distribution of wave functions.

But the generalization is relevant also for \(u_\textrm{mc}\) as a measure of typicality, which is perhaps the more important or more widely used case. The way we practically think of canonical typicality is that if \(\psi \) is just “any old” wave function of S, then \(\rho _a^\psi \) will be approximately canonical. But the original canonical typicality theorem (based on \(u_\textrm{mc}\)) would require that the coefficients of \(\psi \) relative to energy levels of S outside of the micro-canonical energy interval \([E-\Delta E, E]\) are exactly zero, which of course goes against the idea of \(\psi \) being “any old” \(\psi \). Of course, we would expect that the canonicality of \(\rho _a^\psi \) does not depend much on whether those other coefficients are exactly zero or not. And the theorems in this paper show that this is correct! They show that if the \(\rho \) we start from is not \(\rho _\textrm{mc}\), then the crucial part of the reasoning (the typical-\(\psi \) part) still goes through, just with corrections reflected in the deviation of \({{\,\textrm{tr}\,}}_b \rho \) from \({{\,\textrm{tr}\,}}_b \rho _\textrm{mc}\) (which, by the way, will be minor for \(\rho =\rho _\textrm{can}\) with appropriate inverse temperature \(\beta \)). More generally, the theorems in this paper establish the robustness of canonical typicality under changes in the underlying measure.

The results of this paper also show that when computing the typical reduced state \(\rho _a^\psi \) for “any old” \(\psi \), we can start from various choices of \(\rho \) of the whole, as long as they yield approximately the same \({{\,\textrm{tr}\,}}_b \rho \). The results thus provide researchers with a new angle of looking at canonical typicality: it is OK to imagine “any old” \(\psi \), and not crucial to start from \(u_\textrm{mc}\).

More generally, our results are a kind of equivalence-of-ensembles statement in the quantum case, and thus add to the picture consisting of various senses in which different thermal equilibrium ensembles are practically equivalent, in this case with “ensemble” meaning ensemble of wave functions (i.e., measures over the unit sphere). Again, it plays a role that the GAP measures arise as the thermal equilibrium distribution of wave functions, and thus as an analog of the canonical ensemble in classical statistical mechanics. This means also that if \(\psi \) is itself a conditional wave function, a case in which we know [17, 19] that (for high dimension and most orthonormal bases) \(\psi \) is approximately GAP distributed, then canonical typicality applies. A special application concerns the thermodynamic limit, for which it is desirable to think of the conditional wave function \(\psi _A\) of a region A in 3-space as obtained from \(\psi _{A'}\) for a larger \(A'\supset A\), which in turn is obtained from \(\psi _{A''}\) for an even larger \(A'' \supset A'\), and so on. Then for each step, \(\psi _{A'}\) (etc.) is GAP-distributed.

By the way, the results here also have the converse implication of supporting the naturalness of the GAP measures. One might even consider a version of the past hypothesis that uses, as the measure of typicality, a GAP measure instead of the uniform distribution over the unit sphere in some subspace of the Hilbert space of the universe.

2.2 Mathematical Setup and Some Background

One often considers the uniform distribution over the unit sphere in a subspace \({\mathcal {H}}'\) of a system’s Hilbert space \({\mathcal {H}}\). While this distribution is associated with a density matrix given by the normalized projection to \({\mathcal {H}}'\), the measure \({\textrm{GAP}}(\rho )\) forms an analog of it for an arbitrary density matrix. We now give its definition and that of some other mathematical concepts we use.

Throughout this paper, all Hilbert spaces \({\mathcal {H}}\) are assumed to be separable, i.e., to have either a finite or a countably infinite orthonormal basis (ONB). The unit sphere \({\mathbb {S}}({\mathcal {H}})\) is always equipped with the Borel \(\sigma \)-algebra.

Density matrix. To any probability measure \(\mu \) on \({\mathbb {S}}({\mathcal {H}})\) we can associate a density matrix \(\rho _\mu \) by

$$\begin{aligned} \rho _\mu := \int _{{\mathbb {S}}({\mathcal {H}})} \mu (\textrm{d}\psi ) |\psi \rangle \langle \psi | \end{aligned}$$
(8)

(which always exists [49, Lemma 1]). Note that if \(\mu \) has mean zero then \(\rho _\mu \) is the covariance matrix of \(\mu \). It will turn out for \(\mu = {\textrm{GAP}}(\rho )\) that \(\rho _{\mu } = \rho \).

GAP measure. The measure \({\textrm{GAP}}(\rho )\) was first introduced for finite-dimensional \({\mathcal {H}}\) by Jozsa, Robb, and Wootters [23], who named it the Scrooge measure. Among several equivalent definitions [17], we use the following one based on Gaussian measures. Let \({\mathcal {H}}\) be separable and \(\rho \) a density matrix on \({\mathcal {H}}\) with eigenvalues \(p_n\) and eigen-ONB \((|n\rangle )_{n=1\ldots \dim {\mathcal {H}}}\), i.e.,

$$\begin{aligned} \rho = \sum _n p_n |n\rangle \langle n|. \end{aligned}$$
(9)

A complex-valued random variable Z will be said to be Gaussian with mean \(z\in {\mathbb {C}}\) and variance \(\sigma ^2>0\) if and only if \({\textrm{Re}}\,Z\) and \(\textrm{Im}\,Z\) are independent real Gaussian random variables with means \({\textrm{Re}}\,z\) and \(\textrm{Im}\,z\), respectively, and each with variance \(\sigma ^2/2\). Let \((Z_n)_{n=1\ldots \dim {\mathcal {H}}}\) be a sequence of independent \({\mathbb {C}}\)-valued Gaussian random variables with mean 0 and variances

$$\begin{aligned} {\mathbb {E}}|Z_n|^2 = p_n. \end{aligned}$$
(10)

Then, we define \({\textrm{G}}(\rho )\) to be the distribution of the random vector

$$\begin{aligned} \Psi ^{{\textrm{G}}} := \sum _n Z_n |n\rangle , \end{aligned}$$
(11)

i.e., the Gaussian measure on \({\mathcal {H}}\) with mean 0 and covariance operator \(\rho \). (It is known [32] in general that for every \(\phi \in {\mathcal {H}}\) and every positive trace-class operator \(\rho \) there exists a unique Gaussian measure on \({\mathcal {H}}\) with mean \(\phi \) and covariance operator \(\rho \).) Note that

$$\begin{aligned} {\mathbb {E}}\Vert \Psi ^{{\textrm{G}}}\Vert ^2 = \sum _n {\mathbb {E}}|Z_n|^2 = \sum _n p_n =1, \end{aligned}$$
(12)

which also shows that \(\Vert \Psi ^{\textrm{G}}\Vert <\infty \) almost surely, but in general \(\Vert \Psi ^{{\textrm{G}}}\Vert \ne 1\), i.e., \({\textrm{G}}(\rho )\) is not a distribution on the sphere \({\mathbb {S}}({\mathcal {H}})\). Projecting the measure \({\textrm{G}}(\rho )\) to the sphere \({\mathbb {S}}({\mathcal {H}})\) would not result in a measure with density matrix \(\rho \); therefore, we first adjust the density of \({\textrm{G}}(\rho )\) and define the adjusted Gaussian measure \({\text {GA}}(\rho )\) on \({\mathcal {H}}\) as the measure that has density \(\Vert \psi \Vert ^2\) relative to \({\textrm{G}}(\rho )\), i.e.,

$$\begin{aligned} {\text {GA}}(\rho )(\textrm{d}\psi ) := \Vert \psi \Vert ^2 \, {\textrm{G}}(\rho )(\textrm{d}\psi ), \end{aligned}$$
(13)

which is a probability measure by virtue of (12). It will turn out below that \(\Vert \psi \Vert ^2\) is the right factor to ensure that \(\rho _{{\textrm{GAP}}(\rho )}=\rho \).

Let \(\Psi ^{{\text {GA}}}\) be a \({\text {GA}}(\rho )\)-distributed random vector. We define \({\textrm{GAP}}(\rho )\) to be the distribution of

$$\begin{aligned} \Psi ^{{\textrm{GAP}}} := \frac{\Psi ^{{\text {GA}}}}{\Vert \Psi ^{{\text {GA}}}\Vert }. \end{aligned}$$
(14)

Note that the denominator is almost surely nonzero: every one-element subset of \({\mathcal {H}}\) has \({\textrm{G}}(\rho )\)-measure 0 since every \(Z_n\) has a continuous distribution. With this, we find that indeed

$$\begin{aligned} \rho _{{\textrm{GAP}}(\rho )}&= \int _{{\mathbb {S}}({\mathcal {H}})} {\textrm{GAP}}(\rho )(\textrm{d}\psi ) \,|\psi \rangle \langle \psi | \end{aligned}$$
(15a)
$$\begin{aligned}&= \int _{{\mathcal {H}}} {\text {GA}}(\rho )(\textrm{d}\psi ) \frac{1}{\Vert \psi \Vert ^2} |\psi \rangle \langle \psi | \end{aligned}$$
(15b)
$$\begin{aligned}&= \int _{{\mathcal {H}}} {\textrm{G}}(\rho )(\textrm{d}\psi ) ~ |\psi \rangle \langle \psi | = \rho . \end{aligned}$$
(15c)

See [49] for a complete proof of existence and uniqueness of \({\textrm{GAP}}(\rho )\) for every density matrix \(\rho \).
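The three-step construction \({\textrm{G}}(\rho )\to {\text {GA}}(\rho )\to {\textrm{GAP}}(\rho )\) can be turned into a sampling sketch (Python/NumPy; the rejection envelope constant M is an assumption of this illustration, not part of the definition), which also lets one check empirically that \(\rho _{{\textrm{GAP}}(\rho )}=\rho \):

```python
import numpy as np

def sample_gap(p, n_samples, rng):
    """Sample GAP(rho) for rho = diag(p) in its eigenbasis, following the
    three steps of the text: G(rho) -> GA(rho) -> GAP(rho).

    The density adjustment ||psi||^2 is implemented by rejection sampling;
    the envelope constant M = 10 is an assumption of this sketch
    (||Psi^G||^2 has mean 1 and exceeds 10 only with negligible probability).
    """
    D, M, out = len(p), 10.0, []
    while sum(len(b) for b in out) < n_samples:
        # Step 1: G(rho) -- independent complex Gaussians with E|Z_n|^2 = p_n.
        z = rng.standard_normal((n_samples, D)) + 1j * rng.standard_normal((n_samples, D))
        z *= np.sqrt(np.asarray(p) / 2)
        norms2 = np.sum(np.abs(z)**2, axis=1)
        # Step 2: GA(rho) -- reweight by the density ||psi||^2 (rejection).
        keep = rng.uniform(0, M, n_samples) < norms2
        # Step 3: GAP(rho) -- project the kept vectors to the unit sphere.
        out.append(z[keep] / np.sqrt(norms2[keep])[:, None])
    return np.concatenate(out)[:n_samples]

rng = np.random.default_rng(3)
p = [0.5, 0.3, 0.2]
psi = sample_gap(p, 20000, rng)

# Empirical density matrix (8): should reproduce rho, i.e. rho_GAP(rho) = rho.
rho_emp = psi.T @ psi.conj() / len(psi)
print(np.round(np.diag(rho_emp).real, 2))
```

An exact alternative to rejection would be importance weighting: average \(|\psi \rangle \langle \psi |\) over \({\textrm{G}}(\rho )\)-samples with weights \(\Vert \psi \Vert ^2\); the rejection version is used here because it mirrors the definition of \({\text {GA}}(\rho )\) step by step.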

\({\textrm{GAP}}(\rho )\) can also be characterized as the minimizer of the “accessible information” functional under the constraint that its density matrix is \(\rho \) [23]. If all eigenvalues of \(\rho \) are positive and \(D:=\dim {\mathcal {H}}<\infty \), then \({\textrm{GAP}}(\rho )\) possesses a density relative to the uniform distribution u on \({\mathbb {S}}({\mathcal {H}})\) [17, 19],

$$\begin{aligned} {\textrm{GAP}}(\rho )(\textrm{d}\psi ) = \frac{D}{\det \rho }\langle \psi |\rho ^{-1}|\psi \rangle ^{-D-1}\, u(\textrm{d}\psi )\,. \end{aligned}$$
(16)

It was argued in [19] and mathematically justified in [17] that GAP measures describe the thermal equilibrium distribution of the (conditional) wave function of the system if \(\rho \) is a canonical density matrix.

It was also shown in [19] that \({\textrm{GAP}}\) is equivariant under unitary transformations, i.e., for all density matrices \(\rho \), all unitary operators U on \({\mathcal {H}}\), and all measurable sets \(M\subset {\mathbb {S}}({\mathcal {H}})\) one has

$$\begin{aligned} {\textrm{GAP}}(U\rho U^*) (M) = {\textrm{GAP}}(\rho )(UM)\,. \end{aligned}$$
(17)

In particular, \({\textrm{GAP}}\) is equivariant under unitary time evolution, and, as a consequence, \({\textrm{GAP}}(\rho _t)\) is the relevant distribution on \({\mathbb {S}}({\mathcal {H}})\) whenever the system starts in thermal equilibrium with respect to some Hamiltonian \(H_0\) and evolves according to any Hamiltonian \(H_t\) at later times. More generally, the results of [17] (and their extension in Corollary 4) show that if a system has density matrix \(\rho \) arising from entanglement, then its (conditional) wave function (relative to a typical basis, see below) is asymptotically GAP-distributed. Thus, GAP is the correct distribution in many practically relevant cases. On top of that, when we have no further restriction than that the density matrix is \(\rho \), then the natural concept of a “typical \(\psi \)” should refer to the most spread-out distribution compatible with \(\rho \), which is \({\textrm{GAP}}(\rho )\).

Finally, let us remark that \({\textrm{GAP}}(\rho )\) is also invariant under global phase changes, i.e., \({\textrm{GAP}}(\rho )(M) ={\textrm{GAP}}(\rho )(e^{i\varphi }M)\) for all measurable \(M\subset {\mathbb {S}}({\mathcal {H}})\) and \(\varphi \in {\mathbb {R}}\). Hence, \({\textrm{GAP}}(\rho )\) naturally also defines a probability distribution on the projective space of complex rays in \({\mathcal {H}}\) and all results presented in the following can be equivalently formulated for rays instead of vectors.

Remark 1

In terms of \(\rho _\mu \), we can easily formulate and prove a weaker version of our main result (5); this version is related to (5) in much the same way as the statement that the average height in a certain population is 170 cm is related to the stronger statement that most people in that population are about 170 cm tall. The weaker version asserts that the average of \(\rho ^\psi _a\) over \(\psi \) using the \({\textrm{GAP}}(\rho )\) distribution is equal to \({{\,\textrm{tr}\,}}_b \rho \), whereas the statement about (5) was that most \(\psi \) relative to \({\textrm{GAP}}(\rho )\) have \(\rho ^\psi _a\) (approximately) equal to \({{\,\textrm{tr}\,}}_b \rho \). On the other hand, the statement about the average is stronger in that it asserts exact rather than approximate equality. On top of that, the average statement is not limited to the GAP measure but holds for any probability measure \(\mu \). Here is the full statement: for separable \({\mathcal {H}}={\mathcal {H}}_a \otimes {\mathcal {H}}_b\), any probability measure \(\mu \) on \({\mathbb {S}}({\mathcal {H}})\), and a random vector \(\psi \) with distribution \(\mu \),

$$\begin{aligned} {\mathbb {E}}_\mu \rho _a^\psi = {{\,\textrm{tr}\,}}_b \rho _\mu \,. \end{aligned}$$
(18)

Indeed, \({{\,\textrm{tr}\,}}_b\) commutes with \(\mu \)-integration, so

$$\begin{aligned} {\mathbb {E}}_\mu \rho _a^\psi&= \int _{{\mathbb {S}}({\mathcal {H}})} \mu (\textrm{d}\psi ) \, {{\,\textrm{tr}\,}}_b |\psi \rangle \langle \psi | \end{aligned}$$
(19a)
$$\begin{aligned}&= {{\,\textrm{tr}\,}}_b\int _{{\mathbb {S}}({\mathcal {H}})} \mu (\textrm{d}\psi ) \, |\psi \rangle \langle \psi | \end{aligned}$$
(19b)
$$\begin{aligned}&= {{\,\textrm{tr}\,}}_b \rho _\mu \,. \end{aligned}$$
(19c)

\(\diamond \)
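A minimal check of the identity (18), here for a discrete measure \(\mu \) concentrated on two unit vectors with weights 0.7 and 0.3 (an arbitrary illustrative choice; Python/NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(4)
d_a, d_b = 2, 3

def reduced(psi):
    """tr_b |psi><psi| via the coefficient matrix of psi."""
    C = psi.reshape(d_a, d_b)
    return C @ C.conj().T

# A discrete measure mu on the unit sphere: two vectors, weights 0.7 / 0.3.
weights = [0.7, 0.3]
psis = []
for _ in weights:
    v = rng.standard_normal((d_a * d_b, 2)) @ np.array([1, 1j])
    psis.append(v / np.linalg.norm(v))

# Left-hand side of (18): the mu-average of rho_a^psi.
lhs = sum(w * reduced(v) for w, v in zip(weights, psis))

# Right-hand side: tr_b of rho_mu = sum_k w_k |psi_k><psi_k|, cf. (8).
rho_mu = sum(w * np.outer(v, v.conj()) for w, v in zip(weights, psis))
rhs = rho_mu.reshape(d_a, d_b, d_a, d_b).trace(axis1=1, axis2=3)

print(np.allclose(lhs, rhs))  # True: (18) holds exactly
```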

Norms. The distance between two density matrices will be measured in the trace norm

$$\begin{aligned} \Vert M\Vert _{{{\,\textrm{tr}\,}}} := {{\,\textrm{tr}\,}}|M| = {{\,\textrm{tr}\,}}\sqrt{M^*M}, \end{aligned}$$
(20)

where \(M^*\) denotes the adjoint operator of M. If M can be diagonalized through an orthonormal basis (ONB), then \(\Vert M\Vert _{{{\,\textrm{tr}\,}}}\) is the sum of the absolute values of its eigenvalues. We will also sometimes use the operator norm

$$\begin{aligned} \Vert M\Vert := \sup _{\Vert \psi \Vert =1} \Vert M\psi \Vert \,, \end{aligned}$$
(21)

which, if M can be diagonalized through an ONB, equals the largest absolute value of an eigenvalue.

Purity. For a density matrix \(\rho \), its purity is defined as \({{\,\textrm{tr}\,}}\rho ^2\). In terms of the spectral decomposition \(\rho =\sum _n p_n |n\rangle \langle n|\), the purity is \({{\,\textrm{tr}\,}}\rho ^2=\sum _n p_n^2\), which can be thought of as the average size of \(p_n\). In particular, the purity is positive and \(\le 1\); it is \(=1\) if and only if \(\rho \) is pure, i.e., a 1d projection; for a normalized projection \(\rho _R=P_R/d_R\), the purity is \(1/d_R\); conversely, 1/purity can be thought of as the effective number of dimensions over which \(\rho \) is spread out. It also easily follows that

$$\begin{aligned} {{\,\textrm{tr}\,}}\rho ^2 \le \Vert \rho \Vert \le \sqrt{{{\,\textrm{tr}\,}}\rho ^2} \le \sqrt{\Vert \rho \Vert } \end{aligned}$$
(22)

because \(p_n^2 \le p_n \Vert \rho \Vert \), and if \(p_{n_0}\) is the largest eigenvalue, then \(p_{n_0}^2 \le \sum _n p_n^2\) because all other terms are \(\ge 0\). In words, the average \(p_n\) is no greater than the maximal \(p_n\), which is bounded by the square root of the average \(p_n\) (and the square root of the maximal \(p_n\)).
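A quick numerical check of the chain (22) for a randomly generated, generically non-pure density matrix (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(5)

# Random density matrix: rho = A A* / tr(A A*) for a random complex A,
# which is positive with unit trace and generically non-pure.
A = rng.standard_normal((5, 5, 2)) @ np.array([1, 1j])
rho = A @ A.conj().T
rho /= np.trace(rho).real

purity = np.trace(rho @ rho).real        # tr rho^2 = sum_n p_n^2
op_norm = np.linalg.eigvalsh(rho).max()  # ||rho|| = largest eigenvalue p_max

# The chain (22): tr rho^2 <= ||rho|| <= sqrt(tr rho^2) <= sqrt(||rho||).
print(purity <= op_norm <= np.sqrt(purity) <= np.sqrt(op_norm))  # True
```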

Conditional wave function. For \({\mathcal {H}}={\mathcal {H}}_a\otimes {\mathcal {H}}_b\), an ONB \(B=(|m\rangle _b)_{m=1\ldots \dim {\mathcal {H}}_b}\) of \({\mathcal {H}}_b\), and \(\psi \in {\mathbb {S}}({\mathcal {H}})\), the conditional wave function \(\psi _a\) [5, 6, 19] of system a is a random vector in \({\mathcal {H}}_a\) that can be constructed by choosing a random one of the basis vectors \(|m\rangle _b\), let us call it \(|M\rangle _b\), with the Born distribution

$$\begin{aligned} {\mathbb {P}}(M=m) = \bigl \Vert {}_b\langle m|\psi \rangle \bigr \Vert ^2_a \,, \end{aligned}$$
(23)

taking the partial inner product of \(|M\rangle _b\) and \(\psi \), and normalizing:

$$\begin{aligned} \psi _a := \frac{{}_b\langle M|\psi \rangle }{\Vert {}_b\langle M|\psi \rangle \Vert _a}\,. \end{aligned}$$
(24)

(Note that the event that \(\Vert {}_b\langle M|\psi \rangle \Vert _a=0\) has probability 0 by (23). In the context of Bohmian mechanics, the expression “conditional wave function” refers to the position basis and the Bohmian configuration of b [5]; but for our purposes, we can leave it general.)

We can also think of \(\psi _a\) as arising from \(\psi \) through a quantum measurement with eigenbasis B on system b, which leads to the collapsed quantum state \(\psi _a \otimes |M\rangle _b\). Correspondingly, we call the distribution of \(\psi _a\) in \({\mathbb {S}}({\mathcal {H}}_a)\) the Born distribution of \(\psi _a\) and denote it by \({\textrm{Born}}_a^{\psi ,B}\). However, when considering \(\psi _a\), we will not assume that any observer actually, physically carries out such a quantum measurement; rather, we use \(\psi _a\) as a theoretical concept of a wave function associated with the subsystem a. It is related to the reduced density matrix \(\rho ^\psi _a\) in a way similar to how a conditional probability distribution is to a marginal distribution,

$$\begin{aligned} {\mathbb {E}}|\psi _a\rangle \langle \psi _a| = \rho _a^\psi \,. \end{aligned}$$
(25)

\(\psi _a\) is also related to the GAP measure, in fact in two ways. First, when we average \({\textrm{Born}}^{\psi ,B}_a\) over all ONBs B (using the uniform distribution corresponding to the Haar measure), then we obtain \({\textrm{GAP}}(\rho _a^\psi )\) [17, Lemma 1]. Put differently, if we think of both M and B as random and \(\psi _a\) thus as doubly random, then its (marginal) distribution is \({\textrm{GAP}}(\rho _a^\psi )\); put more briefly, \({\textrm{GAP}}(\rho _a^\psi )\) is the distribution of the collapsed pure state in a after a purely random quantum measurement in b on \(\psi \). Second, if \(d_b\) is large, then even conditionally on a single given B, the distribution of \(\psi _a\) is close to a GAP measure for most B and most \(\psi \) according to a GAP measure on \({\mathcal {H}}_a\otimes {\mathcal {H}}_b\); this is the content of Corollary 4 below.
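The construction (23)–(24) of \(\psi _a\) and the identity (25) can be sketched numerically (Python/NumPy; B is taken to be the computational basis of \({\mathcal {H}}_b\), and the sample size is an arbitrary choice of this illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
d_a, d_b = 2, 4

# A fixed pure state psi of the composite system, with coefficient matrix C:
# the m-th column C[:, m] is the partial inner product <m|psi>_b.
psi = rng.standard_normal((d_a * d_b, 2)) @ np.array([1, 1j])
psi /= np.linalg.norm(psi)
C = psi.reshape(d_a, d_b)
rho_a = C @ C.conj().T  # reduced density matrix tr_b |psi><psi|

# (23): choose M with the Born probabilities || <m|psi>_b ||^2.
probs = np.sum(np.abs(C)**2, axis=0)
probs /= probs.sum()
N = 100000
M = rng.choice(d_b, size=N, p=probs)

# (24): normalize the partial inner products to get samples of psi_a.
phis = C[:, M].T
phis /= np.linalg.norm(phis, axis=1)[:, None]

# (25): E |psi_a><psi_a| = rho_a^psi, checked empirically.
emp = phis.T @ phis.conj() / N
print(np.max(np.abs(emp - rho_a)))  # small, shrinking like 1 / sqrt(N)
```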

3 Main Results

In this section, we present and discuss our main results about generalized canonical typicality. In the following, we use the notation \(\mu (f)\) for the average of the function f under the measure \(\mu \),

$$\begin{aligned} \mu (f) := \int \mu (\textrm{d}\psi ) \, f(\psi ) \,. \end{aligned}$$
(26)

Note that, by (18),

$$\begin{aligned} {\textrm{GAP}}(\rho )(\rho _a^\psi )={{\,\textrm{tr}\,}}_b \rho \,. \end{aligned}$$
(27)

The statement of our generalized canonical typicality differs from (27) in that it concerns approximate equality and holds for the individual \(\rho _a^\psi \), not only for its average.

3.1 Statements

We first formulate our main theorem on canonical typicality for GAP measures and the variant of Lévy’s lemma for GAP measures that underlies it. We then give a list of further consequences of this generalized version of Lévy’s lemma, including results on dynamical typicality and the fact that the typical Born distribution of conditional wave functions is itself a GAP measure. At the end of this section, we also state a slightly weaker version of our main theorem that is not based on Lévy’s lemma but instead allows for a rather elementary proof based on the Chebyshev inequality. Finally, the known bounds for uniformly distributed \(\psi \) will be stated in Remark 12 in Sect. 3.2 for comparison.

Theorem 1

(Generalized canonical typicality, exponential bounds). Let \({\mathcal {H}}_a\) and \({\mathcal {H}}_b\) be Hilbert spaces with \({\mathcal {H}}_a\) having finite dimension \(d_a\) and \({\mathcal {H}}_b\) being separable, and let \(\rho \) be a density matrix on \({\mathcal {H}}= {\mathcal {H}}_a \otimes {\mathcal {H}}_b\). Then for every \(\delta >0\),

$$\begin{aligned} {{\textrm{GAP}}}(\rho )\Biggl \{\psi \in {\mathbb {S}}({\mathcal {H}}): \bigl \Vert \rho _a^\psi -{{\,\textrm{tr}\,}}_b\rho \bigr \Vert _{{{\,\textrm{tr}\,}}}\le c d_a \sqrt{\ln \left( \frac{12d_a^2}{\delta }\right) \Vert \rho \Vert }\Biggr \} \ge 1-\delta , \end{aligned}$$
(28)

where \(c=48\pi \).
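Theorem 1 can be probed numerically. The sampler below is our own sketch, based on the Gaussian-adjusted-projected construction of \({\textrm{GAP}}(\rho )\): the adjustment factor \(\Vert \psi \Vert ^2\) turns the Gaussian measure \(G(\rho )\) into a mixture, over eigenvector indices n with weights \(p_n\), of Gaussians whose n-th coordinate is size-biased; normalizing the resulting vector projects it to the unit sphere. Here we take \(\rho =I/D\), for which \({\textrm{GAP}}(\rho )\) is the uniform distribution and \(\Vert \rho \Vert =1/D\) is small:

```python
import numpy as np

def sample_gap(p, rng):
    """One draw from GAP(rho) in the eigenbasis of rho = diag(p).

    Sketch of the Gaussian-adjusted-projected construction: the adjusted
    measure ||psi||^2 G(rho)(dpsi) is the mixture, over n with weight p_n,
    of G(rho) with the n-th coordinate size-biased; normalizing projects
    the vector to the unit sphere.
    """
    p = np.asarray(p, dtype=float)
    D = len(p)
    # complex Gaussian with E|psi_n|^2 = p_n
    psi = (rng.normal(size=D) + 1j * rng.normal(size=D)) * np.sqrt(p / 2)
    n = rng.choice(D, p=p)                     # size-bias coordinate n:
    psi[n] = np.sqrt(p[n] * rng.gamma(2.0)) * np.exp(2j * np.pi * rng.uniform())
    return psi / np.linalg.norm(psi)

rng = np.random.default_rng(1)
d_a, d_b = 2, 40
p = np.full(d_a * d_b, 1.0 / (d_a * d_b))      # rho = I/D

devs = []
for _ in range(200):
    Psi = sample_gap(p, rng).reshape(d_a, d_b)     # psi as a d_a x d_b matrix
    delta = Psi @ Psi.conj().T - np.eye(d_a) / d_a  # rho_a^psi - tr_b(rho)
    devs.append(np.sum(np.abs(np.linalg.eigvalsh(delta))))  # trace norm
print(round(float(np.mean(devs)), 3))          # small, as (28) predicts
```

The observed deviations shrink further as \(d_b\) grows with \(d_a\) fixed, in line with the \(\sqrt{\Vert \rho \Vert }\) scaling of the bound.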

Remark 2

The relation (28) can equivalently be formulated as a bound on the confidence level, given the allowed deviation: For every \(\varepsilon \ge 0\),

$$\begin{aligned} {\textrm{GAP}}(\rho )\Bigl \{\psi \in {\mathbb {S}}({\mathcal {H}}): \bigl \Vert \rho _a^\psi -{{\,\textrm{tr}\,}}_b\rho \bigr \Vert _{{{\,\textrm{tr}\,}}} > \varepsilon \Bigr \} \le 12d_a^2 \exp \left( -\frac{{\tilde{C}}\varepsilon ^2}{d_a^2 \Vert \rho \Vert }\right) , \end{aligned}$$
(29)

where \({\tilde{C}}=\frac{1}{2304\pi ^2}\). This form makes apparent why we call Theorem 1 an “exponential bound”: the bound on the probability of too large a deviation is exponentially small in \(1/\Vert \rho \Vert \). In contrast, the bound (37) is only polynomially small in \({{\,\textrm{tr}\,}}\rho ^2\).\(\diamond \)

A key tool for proving Theorem 1 is Theorem 2 below, a variant of Lévy’s lemma for GAP measures. Recall the notation (26).

Theorem 2

(Lévy’s lemma for GAP measures). Let \({\mathcal {H}}\) be a separable Hilbert space, let \(f:{\mathbb {S}}({\mathcal {H}})\rightarrow {\mathbb {R}}\) be a Lipschitz continuous function with Lipschitz constant \(\eta \), let \(\rho \) be a density matrix on \({\mathcal {H}}\), and let \(\varepsilon \ge 0\). Then,

$$\begin{aligned} {\textrm{GAP}}(\rho )\Bigl \{ \psi \in {\mathbb {S}}({\mathcal {H}}): \bigl |f(\psi )-{\textrm{GAP}}(\rho )(f) \bigr |>\varepsilon \Bigr \} \le 6 \exp \left( -\frac{C\varepsilon ^2}{\eta ^2 \Vert \rho \Vert } \right) , \end{aligned}$$
(30)

where \(C=\frac{1}{288\pi ^2}\).

Remark 3

The statement remains true for complex-valued f if we replace the constant factor 6 in (30) by 12 and C by C/2, as follows from considering the real and imaginary parts of f separately.\(\diamond \)

As an immediate consequence of Theorem 2 (extended to complex-valued f as in Remark 3) for \(f(\psi ) = \langle \psi |B|\psi \rangle \), which has Lipschitz constant \(\eta \le 2\Vert B\Vert \) [30, Lemma 5], we obtain:

Corollary 1

Let \(\rho \) be a density matrix and B a bounded operator on the separable Hilbert space \({\mathcal {H}}\). For every \(\varepsilon \ge 0\),

$$\begin{aligned} {\textrm{GAP}}(\rho )\left\{ \psi \in {\mathbb {S}}({\mathcal {H}}): \bigl |\langle \psi |B|\psi \rangle -{{\,\textrm{tr}\,}}(\rho B) \bigr |>\varepsilon \right\} \le 12 \exp \left( -\frac{{\tilde{C}}\varepsilon ^2}{\Vert B\Vert ^2 \Vert \rho \Vert } \right) \end{aligned}$$
(31)

with \({\tilde{C}} = \frac{1}{2304\pi ^2}\).

Remark 4

Corollary 1 provides an extension to GAP measures of the known fact [33] that \(\langle \psi |B|\psi \rangle \) has nearly the same value for most \(\psi \) relative to the uniform distribution. This kind of near-constancy is different from the near-constancy property of a macroscopic observable, viz., that most of its eigenvalues (counted with multiplicity) in the micro-canonical energy shell are nearly equal. Here, in contrast, nothing (except boundedness) is assumed about the distribution of eigenvalues of B. In particular, if B is a self-adjoint observable, then a typical \(\psi \) may well define a non-trivial probability distribution over the spectrum of B, not necessarily a sharply peaked one. The near-constancy property asserted here is that the average of this probability distribution is the same for most \(\psi \). In fact, it also follows that the probability distribution itself is the same for most \(\psi \) (“distribution typicality”), at least on a coarse-grained level (by covering the spectrum of B with not-too-many intervals) and provided that many dimensions participate in \(\rho \). This follows from inserting spectral projections of the observable for B in (31).\(\diamond \)
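The near-constancy asserted in Corollary 1, and the distinction drawn in Remark 4, can be illustrated with \(\rho =I/D\), for which \({\textrm{GAP}}(\rho )\) coincides with the uniform distribution on the sphere and is therefore sampled exactly by a normalized complex Gaussian vector. The observable B below (diagonal with spectrum spread over \([-1,1]\)) is our own choice:

```python
import numpy as np

rng = np.random.default_rng(4)
D = 400

def uniform_state(D, rng):
    # GAP(I/D) = uniform distribution on the sphere: normalized complex Gaussian
    z = rng.normal(size=D) + 1j * rng.normal(size=D)
    return z / np.linalg.norm(z)

# self-adjoint B with a broad spectrum; tr(rho B) = mean of the spectrum = 0
B_diag = np.linspace(-1.0, 1.0, D)

means = []
for _ in range(100):
    psi = uniform_state(D, rng)
    born = np.abs(psi) ** 2          # Born distribution over the spectrum of B:
    # broad (it charges the whole interval [-1, 1]) ...
    means.append(float(born @ B_diag))
    # ... yet its mean <psi|B|psi> is nearly the same for every sample
print(round(max(abs(m) for m in means), 3))   # close to tr(rho B) = 0
```

Each sampled \(\psi \) spreads its Born weight over the entire spectrum of B, but the mean \(\langle \psi |B|\psi \rangle \) concentrates near \({{\,\textrm{tr}\,}}(\rho B)\), exactly as the corollary asserts.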

In contrast to the uniform distribution on the sphere in the micro-canonical subspace, which is invariant under the unitary time evolution, \({\textrm{GAP}}(\rho _0)\) will in general evolve, in fact to \({\textrm{GAP}}(\rho _t)\) by (17). This leads to questions about what the history \(t\mapsto \psi _t\) looks like. Inserting \(U_t^*BU_t\) for B in (31) leads us to the first equation in the following variant of “dynamical typicality” for GAP measures.

Corollary 2

Let \({\mathcal {H}}\) be a separable Hilbert space, B a bounded operator and \(\rho \) a density matrix on \({\mathcal {H}}\), and \(t\mapsto U_{t}\) a measurable family of unitary operators. Then for every \(\varepsilon ,t\ge 0\),

$$\begin{aligned} {\textrm{GAP}}(\rho )\Bigl \{\psi \in {\mathbb {S}}({\mathcal {H}}):\left| \langle \psi _t|B|\psi _t\rangle - {{\,\textrm{tr}\,}}(\rho _t B)\right| >\varepsilon \Bigr \} \le 12 \exp \left( -\frac{{\tilde{C}}\varepsilon ^2}{\Vert B\Vert ^2 \Vert \rho \Vert }\right) , \end{aligned}$$
(32)

where \(\rho _t = U_{t}\,\rho \, U_{t}^*\), \(\psi _t = U_{t}\psi \) and \({\tilde{C}} = \frac{1}{2304\pi ^2}\). Moreover, for every \(\varepsilon , T>0\),

$$\begin{aligned}&{\textrm{GAP}}(\rho )\left\{ \psi \in {\mathbb {S}}({\mathcal {H}}): \frac{1}{T}\int _0^T \left| \langle \psi _t|B|\psi _t\rangle - {{\,\textrm{tr}\,}}(\rho _t B)\right| \, dt > \varepsilon \right\} \nonumber \\&\quad \le 9\exp \left( -\frac{{\tilde{C}}\varepsilon ^2}{36\Vert B\Vert ^2 \Vert \rho \Vert }\right) . \end{aligned}$$
(33)

Clearly, for \(U_t\) we have in mind either a unitary group \(U_t= \exp (-iHt)\) generated by a time-independent Hamiltonian H, or a unitary evolution family \(U_{t}\) satisfying \(i\frac{d}{dt} U_{t} = H_t U_{t}\) and \(U_{0}=I\) generated by a time-dependent Hamiltonian \(H_t\). However, neither the group nor the cocycle structure plays any role in the proof. (In [48], a similar result for the uniform distribution over the sphere in a large subspace was formulated only for time-independent Hamiltonians, but the proof given there actually applies equally to time-dependent ones.)

The last two corollaries were applications of Lévy’s lemma that did not involve reduced density matrices. We now turn to bi-partite systems again and present two further corollaries. We first ask whether, for \({\textrm{GAP}}(\rho _0)\)-typical \(\psi _0\), the reduced density matrix \(\rho _a^{\psi _t}\) remains close to \({{\,\textrm{tr}\,}}_b \rho _t\) over a whole time interval [0, T]. The following corollary answers this question affirmatively for most times in this interval.

Corollary 3

Let \({\mathcal {H}}_a\) and \({\mathcal {H}}_b\) be Hilbert spaces with \({\mathcal {H}}_a\) having finite dimension \(d_a\) and \({\mathcal {H}}_b\) being separable, \(\rho \) a density matrix on \({\mathcal {H}}={\mathcal {H}}_a\otimes {\mathcal {H}}_b\), and \(t\mapsto U_{t}\) a measurable family of unitary operators on \({\mathcal {H}}\). Then for every \(\varepsilon ,T>0\),

$$\begin{aligned} {\textrm{GAP}}(\rho )\left\{ \psi \in {\mathbb {S}}({\mathcal {H}}):\frac{1}{T}\int _0^T \bigl \Vert \rho _a^{\psi _t} - {{\,\textrm{tr}\,}}_b \rho _t\bigr \Vert _{{{\,\textrm{tr}\,}}}\, dt > \varepsilon \right\} \le 9 d_a^2 \exp \left( -\frac{{\tilde{C}}\varepsilon ^2}{36 d_a^2 \Vert \rho \Vert }\right) , \end{aligned}$$
(34)

where \(\rho _t = U_{t}\,\rho \, U_{t}^*\), \(\psi _t = U_{t}\psi \) and \({\tilde{C}} = \frac{1}{2304\pi ^2}\).

The next corollary expresses that for \({\textrm{GAP}}(\rho )\)-typical \(\psi \), large \(d_b\), and small \({{\,\textrm{tr}\,}}\rho ^2\), the conditional wave function \(\psi _a\) (relative to a typical basis) has Born distribution close to \({\textrm{GAP}}({{\,\textrm{tr}\,}}_b\rho )\). (Note that we are considering the distribution of \(\psi _a\) conditionally on a given \(\psi \), rather than the marginal distribution of \(\psi _a\) for random \(\psi \), which would be \(\int _{{\mathbb {S}}({\mathcal {H}})} {\textrm{GAP}}(\rho )(\textrm{d}\psi ) \, {\textrm{Born}}^{\psi ,B}_a(\cdot )\).) Recall the notation (26).

Corollary 4

Let \(\varepsilon ,\delta \in (0,1)\), let \({\mathcal {H}}_a\) be a Hilbert space of dimension \(d_a\in {\mathbb {N}}\), let \(f:{\mathbb {S}}({\mathcal {H}}_a)\rightarrow {\mathbb {R}}\) be any continuous (test) function, and let \({\mathcal {H}}_b\) be a Hilbert space of finite dimension \(d_b\ge \max \{4,d_a,32\Vert f\Vert ^2_\infty /\varepsilon ^2\delta \}\). Then, there is \(p>0\) such that for every density matrix \(\rho \) on \({\mathcal {H}}={\mathcal {H}}_a \otimes {\mathcal {H}}_b\) with \(\Vert \rho \Vert \le p\),

$$\begin{aligned}{} & {} {\textrm{GAP}}(\rho ) \times u_\textrm{ONB} \Bigl \{(\psi ,B) \in {\mathbb {S}}({\mathcal {H}}) \times \textrm{ONB}({\mathcal {H}}_b): \nonumber \\{} & {} \quad \bigl |{\textrm{Born}}_a^{\psi ,B}(f) - {\textrm{GAP}}({{\,\textrm{tr}\,}}_b\rho )(f)\bigr |< \varepsilon \Bigr \} \ge 1-\delta , \end{aligned}$$
(35)

where \({\textrm{Born}}_a^{\psi ,B}\) is the distribution of the conditional wave function, \(\textrm{ONB}({\mathcal {H}}_b)\) is the set of all orthonormal bases on \({\mathcal {H}}_b\), and \(u_{\textrm{ONB}}\) the uniform distribution over this set.

Remark 5

We conjecture that the closeness between \({\textrm{Born}}_a^{\psi ,B}\) and \({\textrm{GAP}}({{\,\textrm{tr}\,}}_b\rho )\) is even better than stated in Corollary 4, at least when 0 is not an eigenvalue of \({{\,\textrm{tr}\,}}_b \rho \), in the sense that (35) holds not only for continuous f but even for bounded measurable f, and in fact uniformly in f with given \(\Vert f\Vert _\infty \). This conjecture is suggested by using Lemma 6 of [17] instead of Lemma 5, or rather a variant of it with more explicit bounds. \(\diamond \)

Whereas Theorem 1 is based on the rather technical concentration-of-measure result Theorem 2, a slightly weaker statement can be obtained using only the Chebyshev inequality and a bound on the variance of random variables of the form \(\psi \mapsto \langle \psi |A|\psi \rangle \) with respect to \({\textrm{GAP}}(\rho )\) given in Proposition 1 in Sect. 4.2. The latter bound is also of interest in its own right and has already been established for self-adjoint A by Reimann in [35].

Theorem 3

(Generalized canonical typicality, polynomial bounds). Let \({\mathcal {H}}_a\) and \({\mathcal {H}}_b\) be Hilbert spaces with \({\mathcal {H}}_a\) having finite dimension \(d_a\) and \({\mathcal {H}}_b\) being separable. Let \(\rho \) be a density matrix on \({\mathcal {H}}= {\mathcal {H}}_a \otimes {\mathcal {H}}_b\) with \(\Vert \rho \Vert <1/4\). Then for every \(\delta >0\),

$$\begin{aligned} {\textrm{GAP}}(\rho )\Biggl \{\psi \in {\mathbb {S}}({\mathcal {H}}): \bigl \Vert \rho _a^{\psi } - {{\,\textrm{tr}\,}}_b\rho \bigr \Vert _{\text {tr}} \le \sqrt{\frac{28d_a^5 {{\,\textrm{tr}\,}}\rho ^2}{\delta }} \Biggr \} \ge 1-\delta . \end{aligned}$$
(36)

Remark 6

Again, we can equivalently express Theorem 3 as a bound on the confidence level \(1-\delta \) for any given allowed deviation of \(\rho _a^\psi \) from \({{\,\textrm{tr}\,}}_b\rho \): For every \(\rho \) with \(\Vert \rho \Vert <1/4\) and every \(\varepsilon >0\),

$$\begin{aligned} {\textrm{GAP}}(\rho )\Bigl \{\psi \in {\mathbb {S}}({\mathcal {H}}): \bigl \Vert \rho _a^{\psi } - {{\,\textrm{tr}\,}}_b\rho \bigr \Vert _{\text {tr}} >\varepsilon \Bigr \} \le \frac{28d_a^5{{\,\textrm{tr}\,}}\rho ^2}{\varepsilon ^2}\,. \end{aligned}$$
(37)

\(\diamond \)

Remark 7

While our main motivation for developing Theorem 3 is the different strategy of proof, and while the exponential bound of Theorem 1 will usually be tighter than the polynomial bound of Theorem 3, this is not always the case: the bound of Theorem 3 is actually sometimes better, as the following example shows. Suppose that \(\Vert \rho \Vert =\frac{1}{\sqrt{D}} = p_1\) and that all other \(p_j\) are equal, i.e.,

$$\begin{aligned} p_j = \frac{1-\frac{1}{\sqrt{D}}}{D-1} \end{aligned}$$
(38)

for all \(j>1\). Then,

$$\begin{aligned} {{\,\textrm{tr}\,}}\rho ^2 = \frac{1}{D}+\frac{1}{D-1}\left( 1-\frac{1}{\sqrt{D}}\right) ^2 \approx \frac{2}{D}, \end{aligned}$$
(39)

and for, e.g., \(d_a=1000\) and \(\varepsilon =0.01\) we find that

$$\begin{aligned} \frac{28d_a^5}{\varepsilon ^2} \frac{2}{D} < 12d_a^2\exp \left( -\frac{{\tilde{C}}\varepsilon ^2\sqrt{D}}{d_a^2}\right) \end{aligned}$$
(40)

for \(4.67\cdot 10^{13}< D < 9.17\cdot 10^{31}\), i.e., in this example there is a regime in which D is already very large but still the polynomial bound is smaller than the exponential one. \(\diamond \)
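The window claimed in (40) can be checked directly; here is a minimal numerical sketch (variable names are ours) comparing the two bounds at sample values of D inside and outside the stated range:

```python
import numpy as np

C_tilde = 1 / (2304 * np.pi ** 2)
d_a, eps = 1000.0, 0.01

def poly_bound(D):
    # left-hand side of (40): (28 d_a^5 / eps^2) * tr(rho^2), with tr(rho^2) ~ 2/D
    return 28 * d_a ** 5 / eps ** 2 * 2 / D

def exp_bound(D):
    # right-hand side of (40)
    return 12 * d_a ** 2 * np.exp(-C_tilde * eps ** 2 * np.sqrt(D) / d_a ** 2)

# inside the stated window the polynomial bound is smaller ...
assert poly_bound(1e20) < exp_bound(1e20)
# ... while below and above it the exponential bound wins
assert poly_bound(1e12) > exp_bound(1e12)
assert poly_bound(1e33) > exp_bound(1e33)
```

Below the window the polynomial bound has not yet decayed enough; above it, \(\exp (-{\tilde{C}}\varepsilon ^2\sqrt{D}/d_a^2)\) finally overtakes the \(1/D\) decay.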

3.2 Discussion

Remark 8

System size. Theorem 3 shows, roughly speaking, that as soon as

$$\begin{aligned} {{\,\textrm{tr}\,}}\rho ^2 \ll d_a^{-5}, \end{aligned}$$
(41)

\({\textrm{GAP}}(\rho )\)-most wave functions \(\psi \) have \(\rho _a^{\psi }\) close to \({{\,\textrm{tr}\,}}_b\rho \). If we think of \(1/{{\,\textrm{tr}\,}}\rho ^2\) as the effective number of dimensions participating in \(\rho \), and if this number of dimensions is comparable to the full number \(D=\dim {\mathcal {H}}=d_a d_b\) of dimensions, then (41) reduces to

$$\begin{aligned} d_a^{5} \ll D. \end{aligned}$$
(42)

Since the dimension is exponential in the number of degrees of freedom, this condition roughly means that the subsystem a comprises fewer than \(20\%\) of the degrees of freedom of the full system. (The same consideration was carried out in [13, 14] for the original statement of canonical typicality.) The stronger exponential bound yields that a can even comprise up to \(50\%\) of the degrees of freedom [13, 14].\(\diamond \)
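The 20% figure is simple bookkeeping. Assuming (our idealization) that each degree of freedom contributes a fixed factor \(k\ge 2\) to the dimension, so that \(D=k^{N}\) for N degrees of freedom and \(d_a=k^{N_a}\) for the \(N_a\) degrees of freedom of a:

```latex
d_a^5 \ll D
\;\Longleftrightarrow\;
5\,N_a \ln k \ll N \ln k
\;\Longleftrightarrow\;
\frac{N_a}{N} \lesssim \frac{1}{5}\,.
```

The analogous computation with \(d_a^2\ll D\) in place of \(d_a^5\ll D\) gives \(N_a/N\lesssim 1/2\), the 50% figure quoted for the exponential bound.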

Remark 9

Canonical density matrix. A \(\rho \) of particular interest is the canonical density matrix

$$\begin{aligned} \rho _\textrm{can}= \frac{1}{Z(\beta )} e^{-\beta H}\,. \end{aligned}$$
(43)

The relevant condition for generalized canonical typicality to apply to \(\rho =\rho _\textrm{can}\) is that it has small purity \({{\,\textrm{tr}\,}}\rho ^2\) and small largest eigenvalue \(\Vert \rho \Vert \). We argue that indeed it does.

One heuristic reason is equivalence of ensembles: since \(\rho _\textrm{mc}\) has purity \(1/d_\textrm{mc}\) and largest eigenvalue \(1/d_\textrm{mc}\), which is small, the values for \(\rho _\textrm{can}\) should be similarly small. Another heuristic argument is based on the idealization that the system consists of many non-interacting constituents, so that \({\mathcal {H}}={\mathcal {H}}_1^{\otimes N}\) and \(H=\sum _{j=1}^N I^{\otimes (j-1)}\otimes H_1 \otimes I^{\otimes (N-j)}\), so \(\rho _\textrm{can}=\rho _{1\textrm{can}}^{\otimes N}\). It is a general fact that for tensor products \(\rho _1\otimes \rho _2\) of density matrices, the purities multiply, \({{\,\textrm{tr}\,}}(\rho _1\otimes \rho _2)^2=({{\,\textrm{tr}\,}}\rho _1^2)({{\,\textrm{tr}\,}}\rho _2^2)\), and the largest eigenvalues multiply, \(\Vert \rho _1\otimes \rho _2\Vert =\Vert \rho _1\Vert \, \Vert \rho _2\Vert \). Thus, the purity of \(\rho _\textrm{can}\) is the N-th power of that of \(\rho _{1\textrm{can}}\), and likewise the largest eigenvalue. Since \(N\gg 1\) and the purity and largest eigenvalue of \(\rho _{1\textrm{can}}\) lie between 0 and 1 without being particularly close to 1, those of \(\rho _\textrm{can}\) are close to 0, as claimed. We expect that mild interaction does not change this picture very much.\(\diamond \)
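The two multiplicativity facts used in the heuristic argument are easy to confirm numerically; a minimal sketch (the helper `random_density_matrix` is ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_density_matrix(d, rng):
    # a generic full-rank density matrix: normalized A A^dagger
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = A @ A.conj().T
    return rho / np.trace(rho).real

rho1 = random_density_matrix(3, rng)
rho2 = random_density_matrix(4, rng)
rho12 = np.kron(rho1, rho2)          # tensor product rho1 (x) rho2

purity = lambda r: np.trace(r @ r).real
op_norm = lambda r: float(np.linalg.eigvalsh(r)[-1])   # largest eigenvalue

# purities multiply, and largest eigenvalues multiply
assert np.isclose(purity(rho12), purity(rho1) * purity(rho2))
assert np.isclose(op_norm(rho12), op_norm(rho1) * op_norm(rho2))
```

Both identities follow from the fact that the spectrum of \(\rho _1\otimes \rho _2\) consists of all products of an eigenvalue of \(\rho _1\) with one of \(\rho _2\).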

Remark 10

Classical vs. quantum. Classically, a typical phase point from a canonical ensemble is also a typical phase point from some micro-canonical ensemble. In contrast, a typical wave function from \({\textrm{GAP}}(\rho _\beta )\) does not lie in any micro-canonical subspace \({\mathcal {H}}_{\textrm{mc}}\) (if \({\mathcal {H}}\ne {\mathcal {H}}_{\textrm{mc}}\)), and even if it did lie in some \({\mathcal {H}}_{\textrm{mc}}\), it would not be typical relative to that subspace; that is because typical wave functions are superpositions of many energy eigenstates, and the different weights that \(\rho _{\textrm{mc}}\) and \(\rho _\textrm{can}\) give to these eigenstates are reflected in the weights they receive in the superposition. Therefore, already in the case that \(\rho \) is a canonical density matrix, Theorems 3 and 1 are not just simple consequences of canonical typicality but independent results.\(\diamond \)

Remark 11

Equivalence of ensembles. We can now state more precisely the sense in which our results provide a version of equivalence of ensembles. It is well known that if a and b interact weakly and b is large enough, then both \(\rho _{\textrm{mc}}\) and \(\rho _\textrm{can}\) in \({\mathcal {H}}_S={\mathcal {H}}_a\otimes {\mathcal {H}}_b\) lead to reduced density matrices close to the canonical density matrix (4) for a, \({{\,\textrm{tr}\,}}_b\rho _{\textrm{mc}}\approx \rho _{a,\textrm{can}}\approx {{\,\textrm{tr}\,}}_b \rho _\textrm{can}\), provided the parameter \(\beta \) of \(\rho _\textrm{can}\) and \(\rho _{a,\textrm{can}}\) is suitable for the energy E of \(\rho _{\textrm{mc}}\). Hence, Theorems 3 and 1 yield that we can start from either \(u_{\textrm{mc}}\) or \({\textrm{GAP}}(\rho _\textrm{can})\) and obtain for both ensembles of \(\psi \) that \(\rho _a^\psi \) is nearly constant and nearly canonical.

\(\diamond \)

Remark 12

Comparison to original theorems. The original, known theorems about canonical typicality, which refer to the uniform distribution over a suitable sphere instead of a GAP measure, are still contained in our theorems as special cases, except for worse constants and in some places additional factors of \(d_a\) (which we usually think of as constant as well). For more detail, let us begin with the known theorem analogous to Theorem 3 (formulated this way in [14, Eq. (32)], based on arguments from [46]):

Theorem 4

(Canonical typicality, polynomial bounds). Let \({\mathcal {H}}_a\) and \({\mathcal {H}}_b\) be Hilbert spaces of respective dimensions \(d_a, d_b \in {\mathbb {N}}\), \({\mathcal {H}}= {\mathcal {H}}_a \otimes {\mathcal {H}}_b\), \({\mathcal {H}}_R\) be any subspace of \({\mathcal {H}}\) of dimension \(d_R\), \(\rho _R\) be \(1/d_R\) times the projection to \({\mathcal {H}}_R\), and \(u_R\) the uniform distribution over \({\mathbb {S}}({\mathcal {H}}_R)\). Then for every \(\delta >0\),

$$\begin{aligned} u_R\Biggl \{\psi \in {\mathbb {S}}({\mathcal {H}}_R): \bigl \Vert \rho _a^{\psi } - {{\,\textrm{tr}\,}}_b\rho _R \bigr \Vert _{\textrm{tr}} \le \frac{d_a^2}{\sqrt{\delta d_R}} \Biggr \} \ge 1-\delta . \end{aligned}$$
(44)

When we apply our Theorem 3 to \(\rho =\rho _R\) (and assume \(d_R\ge 4\)), we obtain that \({\textrm{GAP}}(\rho )=u_R\), \({{\,\textrm{tr}\,}}\rho ^2=1/d_R\), and almost exactly the bound (44) except for a (rather irrelevant) factor \(\sqrt{28}\) and \(d_a^{2.5}\) instead of \(d_a^2\). Further explanation of how this different exponent comes about can be found in Sect. 4.6.

Theorem 5

(Canonical typicality, exponential bounds [30, 31]). With the notation and hypotheses as in Theorem 4, for every \(\delta >0\) such that

$$\begin{aligned}{} & {} \delta < 4\exp \left( -d_a^2/(18\pi ^3)\right) , \end{aligned}$$
(45)
$$\begin{aligned}{} & {} u_R \Biggl \{ \psi \in {\mathbb {S}}({\mathcal {H}}_R): \bigl \Vert \rho _a^\psi - {{\,\textrm{tr}\,}}_b \rho _R \bigr \Vert _{{{\,\textrm{tr}\,}}} \le 2\sqrt{\frac{18\pi ^3}{d_R}\ln (4/\delta )} \Biggr \} \ge 1-\delta . \end{aligned}$$
(46)

This theorem was stated slightly differently in [30, 31]; we give the derivation of this form in Sect. 4.6. Again, the bound agrees with the bound (28) provided by Theorem 1 for \(\rho =\rho _R\) (so \(\Vert \rho \Vert =1/d_R\)), up to worse constants and additional factors of \(d_a\).

Next, here is the standard statement of Lévy’s lemma:

Theorem 6

(Lévy’s Lemma [27]). Let \({\mathcal {H}}\) be a Hilbert space of finite dimension \(D:=\dim {\mathcal {H}}\in {\mathbb {N}}\), let \(f:{\mathbb {S}}({\mathcal {H}})\rightarrow {\mathbb {R}}\) be a function with Lipschitz constant \(\eta \), let u be the uniform distribution over \({\mathbb {S}}({\mathcal {H}})\), and let \(\varepsilon >0\). Then,

$$\begin{aligned} u\Bigl \{\psi \in {\mathbb {S}}({\mathcal {H}}): \bigl |f(\psi )-u(f)\bigr | > \varepsilon \Bigr \} \le 4\exp \left( -\frac{{\hat{C}} D\varepsilon ^2}{\eta ^2}\right) \,, \end{aligned}$$
(47)

where \({\hat{C}}=\frac{2}{9\pi ^3}\).

When we apply our Theorem 2 to \(\rho =I/D\), we obtain that \({\textrm{GAP}}(\rho )=u\), \(\Vert \rho \Vert =1/D\), and exactly the bound (47) except for worse constants. Note that Theorem 2 holds also for infinite-dimensional separable \({\mathcal {H}}\).

We turn to previous results for dynamical typicality. In [48], an inequality analogous to the bound (32) of Corollary 2 was proven for the uniform distribution over the sphere in a subspace. In [28], variants of Lévy’s lemma and dynamical typicality were established for the mean-value ensemble of an observable A for a value \(a\in {\mathbb {R}}\), defined by restricting the uniform distribution on \({\mathbb {S}}({\mathbb {C}}^D)\) to the set \(\{\psi \in {\mathbb {S}}({\mathbb {C}}^D):\langle \psi | A|\psi \rangle =a\}\) and normalizing afterward. However, the physical relevance of this ensemble is unclear: in general, the mean value of an observable is not itself an observable, so it is unclear how this ensemble could be prepared or arise in an experiment.\(\diamond \)

Remark 13

Lévy’s lemma for other distributions. Although Lévy’s lemma applies to the uniform measure and to GAP measures, it does not apply to every rather spread-out distribution on the sphere; it thus expresses a non-trivial property of the family of GAP measures.

This can be illustrated by means of the von Mises-Fisher (VMF) distribution, a well known and natural probability distribution on the unit sphere \({\mathbb {S}}({\mathbb {R}}^D)\) in \({\mathbb {R}}^D\) that is different from the GAP measure. It has parameters \(\kappa \in {\mathbb {R}}_+\) and \(\mu \in {\mathbb {S}}({\mathbb {R}}^D)\) and can be obtained from a Gaussian distribution in \({\mathbb {R}}^D\) with mean \(\mu \) and covariance \(\kappa ^{-1}I\) by conditioning on \({\mathbb {S}}({\mathbb {R}}^D)\). The analog of Lévy’s lemma for the von Mises-Fisher distribution is false; this can be seen as follows. Its density

$$\begin{aligned} g(x)= C(D,\kappa ) \,\exp \bigl ( \kappa \,\langle \mu ,x\rangle _{{\mathbb {R}}^D} \bigr ) \end{aligned}$$
(48)

with respect to the uniform distribution u on \({\mathbb {S}}({\mathbb {R}}^D)\) varies at most by a factor of \(e^{2\kappa }\) when varying x (while keeping D and \(\kappa \) fixed). For a given Lipschitz function F on the sphere, insertion of \(F(x)\,g(x)\) for f(x) in a real variant of Lévy’s lemma for the uniform distribution (Theorem 6 above) yields that \(F(x)\, g(x)\) for u-most x is close to the u-average of Fg, which equals the VMF-average of F (where the Lipschitz constant of \(f=Fg\) could be a bit worse than that of F). The set of exceptional x has small u-measure, and since \(C(D,\kappa )\in [e^{-\kappa },e^\kappa ]\) and thus \(g(x)\in [e^{-2\kappa },e^{2\kappa }]\), it also has small VMF-measure (larger at most by a factor of \(e^{2\kappa }\)). Thus, for VMF-most x, F(x) is close to VMF(F)/g(x), and thus not constant at all. The same argument shows that Lévy’s lemma is violated for any sequence of measures \((\mu _D)_{D\in {\mathbb {N}}}\) on \({\mathbb {S}}({\mathbb {R}}^D)\) whose density \(g_D\) relative to u is bounded uniformly in D, has Lipschitz constant bounded uniformly in D, but deviates significantly from 1 on a non-negligible set in \({\mathbb {S}}({\mathbb {R}}^D)\).

For GAP measures, the situation is very different. From (16) one can see, for example, that if the eigenvalue \(p_{n_2}\) of \(\rho =\sum _n p_n |n\rangle \langle n|\) is twice as large as another eigenvalue \(p_{n_1}\), then the density (16) at \(\psi =|n_2\rangle \) is \(2^{D+1}\) times as large as that at \(\psi =|n_1\rangle \). Thus, the density and its Lipschitz constant are not (for relevant choices of \(\rho \)) bounded uniformly in D; rather, non-uniform GAP measures become more and more singular with respect to the uniform distribution for large D.\(\diamond \)

Remark 14

Generalized canonical typicality from conditional wave function? One might imagine a different strategy of deriving generalized canonical typicality, based on regarding \(\psi \) itself as a conditional wave function and using the known fact [17, 19] that conditional wave functions are typically \({\textrm{GAP}}\) distributed. We could introduce a further big system c, choose a high-dimensional subspace \({\mathcal {H}}_{Rabc}\) in \({\mathcal {H}}_{abc}={\mathcal {H}}_a \otimes {\mathcal {H}}_b \otimes {\mathcal {H}}_c\) so that \({{\,\textrm{tr}\,}}_c P_{Rabc}/d_{Rabc}\) coincides with the given \(\rho \) on \({\mathcal {H}}_a\otimes {\mathcal {H}}_b\), and start from a random wave function from \({\mathbb {S}}({\mathcal {H}}_{Rabc})\). However, we do not see how to make such a derivation work. \(\diamond \)

Remark 15

Not every measure does what \({\textrm{GAP}}(\rho )\) does. Generalized canonical typicality as expressed in Theorems 3 and 1 is not true in general if we replace \({\textrm{GAP}}(\rho )\) by a different measure: if \(\rho \) is a density matrix on \({\mathcal {H}}\) and \(\mu \) a probability distribution over \({\mathbb {S}}({\mathcal {H}})\) with density matrix \(\rho _\mu =\rho \), then it need not be true for \(\mu \)-most \(\psi \) that \(\rho ^\psi _a\approx {{\,\textrm{tr}\,}}_b \rho \).

Here is a counter-example. Let \(\rho =\sum _{n=1}^D p_n |n\rangle \langle n|\) have eigenvalues \(p_n\) and eigen-ONB \((|n\rangle )_{n\in \{1,\ldots ,D\}}\), and let

$$\begin{aligned} \mu =\sum _{n=1}^D p_n \, \delta _{|n\rangle } \end{aligned}$$
(49)

be the measure that is concentrated on the finite set \(\{|n\rangle :1\le n\le D\}\) and gives weight \(p_n\) to each \(|n\rangle \). This measure is the narrowest, most concentrated measure with density matrix \(\rho \), and thus a kind of opposite of \({\textrm{GAP}}(\rho )\), the most spread-out measure with density matrix \(\rho \). A random vector \(\psi \) with distribution \(\mu \) is a random eigenvector \(|n\rangle \). What the reduced density matrix \(\rho _a^{|n\rangle }\) looks like depends on the vectors \(|n\rangle \in {\mathcal {H}}={\mathcal {H}}_a\otimes {\mathcal {H}}_b\). Suppose that the eigenbasis of \(\rho \) is the product of ONBs of \({\mathcal {H}}_a\) and \({\mathcal {H}}_b\), \(|n\rangle =|\ell \rangle _a \otimes |m\rangle _b\); then \(\rho _a^{|n\rangle }={{\,\textrm{tr}\,}}_b |n\rangle \langle n| = |\ell \rangle _a\langle \ell |\) (in an obvious notation), so \(\rho _a^{|n\rangle }\) is always a pure state and thus far away from \({{\,\textrm{tr}\,}}_b \rho = \sum _{\ell ,m}p_{\ell m} |\ell \rangle _a\langle \ell |\) if that is highly mixed. Note, however, that if instead of a product basis, we had taken \((|n\rangle )_{n=1\ldots D}\) to be a purely random ONB of \({\mathcal {H}}\), then (with overwhelming probability if \(d_b\gg 1\)) \(\rho _a^{|n\rangle }\approx d_a^{-1} I_a\) and thus also \({{\,\textrm{tr}\,}}_b \rho \) (which by (18) is the \(\mu \)-average of \(\rho _a^\psi \)) is close to \(d_a^{-1} I_a\), so \(\rho _a^{\psi }\approx {{\,\textrm{tr}\,}}_b \rho \) for \(\mu \)-most \(\psi \), despite the narrowness of \(\mu \).\(\diamond \)

Remark 16

Canonical typicality with respect to \({\textrm{GAP}}(\rho )\) does not hold for every \(\rho \) . Let us consider the special case in which \(\rho \) has one eigenvalue that is large (e.g., \(10^{-1}\)), while all others are very small (e.g., \(10^{-1000}\)). Such a situation occurs for example for N-body quantum systems with a gapped ground state \(|0\rangle \) at very low temperature, T of order \((\log N)^{-1}\). So call the large eigenvalue p and suppose for definiteness that all other eigenvalues are equal,

$$\begin{aligned} \rho =p|0\rangle \langle 0|+\frac{1-p}{D-1}(I-|0\rangle \langle 0|) = p|0\rangle \langle 0|+(1-p)\frac{I}{D} + O\Bigl (\frac{1}{D}\Bigr ) \end{aligned}$$
(50)

with O(1/D) referring to the trace norm and the limit \(D\rightarrow \infty \). In that case, \({{\,\textrm{tr}\,}}\rho ^2\approx p^2\) (e.g., \(10^{-2}\), while \(d_a\) may be \(10^{100}\)), so the smallness condition (41) for generalized canonical typicality is strongly violated. To investigate \(\rho _a^\psi \), note that any vector \(\psi \in {\mathbb {S}}({\mathcal {H}})\) can be written as \(\psi =\cos \theta e^{i\alpha } |0\rangle + \sin \theta |\phi \rangle \) with \(\theta \in [0,\pi /2]\), \(\alpha \in [0,2\pi )\), and \(|\phi \rangle \perp |0\rangle \). If \(\psi \) has distribution \({\textrm{GAP}}(\rho )\), then \(\phi \) has distribution \(u_{{\mathbb {S}}(|0\rangle ^\perp )}\) and is independent of \(\theta \) and \(\alpha \), \(\alpha \) is independent of \(\theta \) and uniformly distributed, and a lengthy computation shows that the distribution of \(\theta \) has density

$$\begin{aligned} \frac{2(1-p)^2}{p} \frac{\cos \theta }{\sin ^5\theta } \exp \Bigl ((1-\tfrac{1}{p})\cot ^2\theta \Bigr ) \end{aligned}$$
(51)

as \(D\rightarrow \infty \). By an error of order \(1/\sqrt{D}\), we can replace \(\phi \) by a \(u_{{\mathbb {S}}({\mathcal {H}})}\)-distributed vector. If \(|0\rangle \) factorizes as in \(|0\rangle =|0\rangle _a |0\rangle _b\), then \({{\,\textrm{tr}\,}}_b\rho = p|0\rangle _a\langle 0| + (1-p)(I_a/d_a)+O(1/d_b)\) and \(\rho _a^\psi =\cos ^2\theta |0\rangle _a \langle 0| + \sin ^2\theta (I_a/d_a) + O(1/\sqrt{d_b})\). Since the latter depends on \(\theta \) (and thus is not deterministic but has a non-trivial distribution), it follows that \(\rho _a^\psi \not \approx {{\,\textrm{tr}\,}}_b \rho \) with high probability.\(\diamond \)

Remark 17

Comparison to large deviation theory. In large deviation theory [50], one studies another version of concentration of measures: one considers a sequence of probability distributions \(({\mathbb {P}}_N)_{N\in {\mathbb {N}}}\) on (say) the real line and studies whether (and at which rate) \({\mathbb {P}}_N\bigl ([x,\infty )\bigr )\) tends to 0 exponentially fast as \(N\rightarrow \infty \) for fixed \(x\in {\mathbb {R}}\). Our situation is a bit similar, with the role of x played by \(\varepsilon \) in (29), and that of \({\mathbb {P}}_N\) by the distribution of \(\Vert \rho _a^\psi -{{\,\textrm{tr}\,}}_b \rho \Vert _{{{\,\textrm{tr}\,}}}\) in \({\mathbb {R}}\) for \({\textrm{GAP}}(\rho )\)-distributed \(\psi \). However, our situation does not quite fit the standard framework of large deviations because we do not necessarily consider a sequence \(\rho _N\) of density matrices, but rather a fixed \(\rho \) with small \(\Vert \rho \Vert \). That is why we have provided error bounds in terms of the given \(\rho \).\(\diamond \)

4 Proofs

4.1 Proof of Remark 1

What needs proof here is that also in infinite dimension, the partial trace commutes with the expectation,

$$\begin{aligned} {\mathbb {E}}_\mu {{\,\textrm{tr}\,}}_b|\psi \rangle \langle \psi |={{\,\textrm{tr}\,}}_b{\mathbb {E}}_\mu |\psi \rangle \langle \psi |\,. \end{aligned}$$
(52)

(For \(\dim {\mathcal {H}}_b<\infty \), \({{\,\textrm{tr}\,}}_b\) is a finite sum and thus trivially commutes with \({\mathbb {E}}_\mu \).) So suppose that \({\mathcal {H}}_b\) has a countable ONB \((|l\rangle _b)_{ l\in {\mathbb {N}}}\), and let \(|\phi \rangle _a\in {\mathcal {H}}_a\). Then

$$\begin{aligned} {}_a\langle \phi |{\mathbb {E}}_\mu {{\,\textrm{tr}\,}}_b\bigl (|\psi \rangle \langle \psi |\bigr )|\phi \rangle _a&= \int _{{\mathbb {S}}({\mathcal {H}})} {}_a\langle \phi |{{\,\textrm{tr}\,}}_b\bigl (|\psi \rangle \langle \psi |\bigr )|\phi \rangle _a\, \mu (\textrm{d}\psi ) \end{aligned}$$
(53a)
$$\begin{aligned}&=\int _{{\mathbb {S}}({\mathcal {H}})} \sum _l \bigl |\langle \phi ,l|\psi \rangle \bigr |^2\, \mu (\textrm{d}\psi ) \end{aligned}$$
(53b)
$$\begin{aligned}&= \sum _l \int _{{\mathbb {S}}({\mathcal {H}})} \langle \phi ,l|\psi \rangle \langle \psi |l,\phi \rangle \, \mu (\textrm{d}\psi ) \end{aligned}$$
(53c)
$$\begin{aligned}&=\sum _l \langle \phi ,l|\rho _\mu |l,\phi \rangle \end{aligned}$$
(53d)
$$\begin{aligned}&= {}_a\langle \phi |{{\,\textrm{tr}\,}}_b\rho _\mu |\phi \rangle _a, \end{aligned}$$
(53e)

where we used Fubini’s theorem in the third and the definition of \(\rho _\mu \) in the fourth line. Since a bounded operator A is uniquely determined by the quadratic form \(\phi \mapsto \langle \phi |A|\phi \rangle \), it follows that \({\mathbb {E}}_\mu (\rho _a^\psi )={{\,\textrm{tr}\,}}_b\rho _\mu \).

4.2 Proof of Theorem 3

We start with the proof of the polynomial version of generalized canonical typicality and thereby introduce approximation techniques for infinite-dimensional Hilbert spaces, which will also be used in the proof of the exponential bounds of Theorem 1 later on. For the proof of Theorem 3, we make use of a result from Reimann [35]. Let \((|n\rangle )_{n=1\ldots D}\) be an orthonormal basis of eigenvectors of \(\rho \) and \(p_1,\dots ,p_D\) the corresponding (positive) eigenvalues. Reimann used the density of the GAP measure \({\textrm{GAP}}(\rho )\) to compute expressions of the form

$$\begin{aligned} {\mathbb {E}}(c_j^* c_k c_m^* c_n), \end{aligned}$$
(54)

where the expectation is taken with respect to \({\textrm{GAP}}(\rho )\) and \(c_j = \langle j|\psi \rangle \) are the coordinates of \(\psi \in {\mathbb {S}}({\mathcal {H}})\) with respect to the orthonormal basis \((|j\rangle )_{j=1\ldots D}\). With the help of these expressions, he derived an upper bound for the variance \({{\,\textrm{Var}\,}}\langle \psi |A|\psi \rangle \) (also taken with respect to \({\textrm{GAP}}(\rho )\)) for self-adjoint operators \(A:{\mathcal {H}}\rightarrow {\mathcal {H}}\). We show that Reimann’s upper bound for \({{\,\textrm{Var}\,}}\langle \psi |A|\psi \rangle \) remains essentially valid also for non-self-adjoint A, and this bound will be a main ingredient in our proof of Theorem 3.

We start by computing the expectation \({\mathbb {E}}\langle \psi |A|\psi \rangle \) and an upper bound for the variance \({{\,\textrm{Var}\,}}\langle \psi |A|\psi \rangle \) for an arbitrary operator \(A:{\mathcal {H}}\rightarrow {\mathcal {H}}\), where the expectation and variance are with respect to the measure \({\textrm{GAP}}(\rho )\). We closely follow Reimann [35], who did these computations in the case that A is self-adjoint. We arrive at the same bound for the variance (with the distance between the largest and smallest eigenvalue of A replaced by its operator norm); however, one step in the proof needs to be modified to account for A not necessarily being self-adjoint. Moreover, we show that the expression for \({\mathbb {E}}\langle \psi |A|\psi \rangle \) and the upper bound for \({{\,\textrm{Var}\,}}\langle \psi |A|\psi \rangle \) remain valid if \({\mathcal {H}}\) has countably infinite dimension, i.e., if it is separable.

Proposition 1

Let \(\rho \) be a density matrix on a separable Hilbert space \({\mathcal {H}}\) with positive eigenvalues \(p_n\) such that \(p_{\max }=\Vert \rho \Vert <1/4\) and let \(\dim {\mathcal {H}}\ge 4\). For \({\textrm{GAP}}(\rho )\)-distributed \(\psi \) and any bounded operator \(A:{\mathcal {H}}\rightarrow {\mathcal {H}}\),

$$\begin{aligned} {\mathbb {E}}\langle \psi |A|\psi \rangle = {{\,\textrm{tr}\,}}(A\rho ) \end{aligned}$$
(55)

and

$$\begin{aligned} {{\,\textrm{Var}\,}}\langle \psi |A|\psi \rangle \le \frac{\Vert A\Vert ^2 {{\,\textrm{tr}\,}}\rho ^2}{1-p_{\max }} \left( 1+\frac{4\sqrt{{{\,\textrm{tr}\,}}\rho ^2} + 2{{\,\textrm{tr}\,}}\rho ^2}{(1-2p_{\max })(1-3p_{\max })}\right) . \end{aligned}$$
(56)

Proof

We first assume that \(D:=\dim {\mathcal {H}}<\infty \). The formula for the expectation follows immediately from the fact that the density matrix of \({\textrm{GAP}}(\rho )\) is \(\rho \):

$$\begin{aligned} {\mathbb {E}}\langle \psi |A|\psi \rangle = {\mathbb {E}}{{\,\textrm{tr}\,}}(|\psi \rangle \langle \psi |A) = {{\,\textrm{tr}\,}}({\mathbb {E}}|\psi \rangle \langle \psi |A) = {{\,\textrm{tr}\,}}(A\rho ). \end{aligned}$$
(57)
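The identity (57) can also be illustrated by simulation: since \({\text {GA}}(\rho )\) has density \(\Vert \psi \Vert ^2\) relative to \({\textrm{G}}(\rho )\) (Sect. 2.2) and \({\textrm{GAP}}(\rho )\) is its projection to the sphere, a GAP expectation can be estimated by reweighting Gaussian samples with \(\Vert Z\Vert ^2\). A Monte Carlo sketch (not part of the proof; dimension, spectrum, and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4
p = np.array([0.4, 0.3, 0.2, 0.1])                  # spectrum of rho (in its eigenbasis)
A = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))  # generic, non-self-adjoint

N = 400_000
# Z ~ G(rho): independent complex Gaussians with E|Z_n|^2 = p_n
Z = np.sqrt(p / 2) * (rng.standard_normal((N, D)) + 1j * rng.standard_normal((N, D)))
w = np.sum(np.abs(Z) ** 2, axis=1)                  # ||Z||^2: GA(rho)-density w.r.t. G(rho)
psi = Z / np.sqrt(w)[:, None]                       # normalized vectors on the sphere
vals = np.einsum('ni,ij,nj->n', psi.conj(), A, psi) # <psi|A|psi>
estimate = np.mean(w * vals)                        # GAP(rho)-expectation via reweighting
exact = np.trace(A @ np.diag(p))
print(abs(estimate - exact))                        # Monte Carlo error, ~ N^{-1/2}
```

The reweighting avoids having to sample \({\textrm{GAP}}(\rho )\) directly.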

For a complex-valued random variable X, the variance can be computed by

$$\begin{aligned} {{\,\textrm{Var}\,}}X = {\mathbb {E}}\left[ (X-{\mathbb {E}}X)^*(X-{\mathbb {E}}X)\right] = {\mathbb {E}}(X^*X) - {\mathbb {E}}(X^*){\mathbb {E}}(X). \end{aligned}$$
(58)

Since the variance of a random variable does not change when a constant is added, we can assume for its computation without loss of generality that \({\mathbb {E}}\langle \psi |A|\psi \rangle = 0\). Let \((|n\rangle )_{n=1,\dots ,D}\) be an orthonormal basis of \({\mathcal {H}}\) consisting of eigenvectors of \(\rho \). For \(\psi \in {\mathbb {S}}({\mathcal {H}})\), we write

$$\begin{aligned} \langle \psi |A|\psi \rangle = \sum _{l,m} \langle \psi |m\rangle \langle m|A|l\rangle \langle l|\psi \rangle =: \sum _{l,m} c_m^* A_{ml} c_l \end{aligned}$$
(59)

with \(c_l = \langle l|\psi \rangle \) and \(A_{ml} = \langle m|A|l\rangle \). Then, for \(X=\langle \psi |A|\psi \rangle \), we find that

$$\begin{aligned} {{\,\textrm{Var}\,}}X&= \sum _{l,m,l',m'} A^*_{ml} A_{m'l'} {\mathbb {E}}(c_l^* c_m c_{m'}^* c_{l'}). \end{aligned}$$
(60)

Reimann [35] showed that the fourth moments \({\mathbb {E}}(c_l^* c_m c_{m'}^* c_{l'})\) all vanish except for the two cases \(l=m, m'=l'\) and \(l=l', m'=m\) and that

$$\begin{aligned} {\mathbb {E}}(|c_m|^2 |c_l|^2) = p_m p_l (1+\delta _{ml}) K_{ml}, \end{aligned}$$
(61)

where

$$\begin{aligned} K_{ml} = \int _0^{\infty } (1+xp_m)^{-1} (1+xp_l)^{-1} \prod _{n=1}^D (1+xp_n)^{-1}\, \textrm{d}x. \end{aligned}$$
(62)

This implies

$$\begin{aligned} {{\,\textrm{Var}\,}}X&= \sum _{m,l} |A_{ml}|^2 p_m p_l (1+\delta _{ml}) K_{ml} + \sum _{m,m'} A^*_{mm} A_{m'm'} p_m p_{m'}(1+\delta _{mm'}) K_{mm'}\nonumber \\&\quad - 2 \sum _m |A_{mm}|^2 p_m^2 K_{mm} \end{aligned}$$
(63)
$$\begin{aligned}&= \sum _{m,l} \left[ |A_{ml}|^2 + A_{mm}^* A_{ll} \right] p_m p_l K_{ml}. \end{aligned}$$
(64)

Because of \(|A_{mm}| \le \Vert A\Vert \) it follows from the computation in [35] that

$$\begin{aligned} \sum _{m,l} A^*_{mm} A_{ll} p_m p_l K_{ml} \le \frac{2\Vert A\Vert ^2 {{\,\textrm{tr}\,}}\rho ^2}{(1-p_{\max })(1-2p_{\max })(1-3p_{\max })} \left( 2({{\,\textrm{tr}\,}}\rho ^2)^{1/2} + {{\,\textrm{tr}\,}}\rho ^2\right) \end{aligned}$$
(65)

Moreover, as was shown in [35], \(K_{ml} \le \frac{1}{1-p_{\max }}\) for all l and m, and therefore

$$\begin{aligned} \sum _{m,l} |A_{ml}|^2 p_m p_l K_{ml}\le \frac{1}{1-p_{\max }} {{\,\textrm{tr}\,}}(A^*\rho A \rho ). \end{aligned}$$
(66)
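Formula (62) and the bound \(K_{ml}\le 1/(1-p_{\max })\) can be probed numerically. In addition, since \(\sum _m |c_m|^2=1\), the moments (61) must satisfy \(\sum _{m,l} p_m p_l (1+\delta _{ml})K_{ml}=1\), which provides an independent consistency check. A sketch (our own verification; the spectrum and quadrature grid are arbitrary choices):

```python
import numpy as np

p = np.array([0.4, 0.3, 0.2, 0.1])   # eigenvalues of rho (small test example)

def trapz(y, x):
    # trapezoidal rule on a non-uniform grid
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

# log-spaced grid approximating (0, infinity); integrand decays like x^{-(D+2)}
x = np.exp(np.linspace(np.log(1e-8), np.log(1e7), 60_000))
prod_all = np.prod(1.0 / (1.0 + np.outer(p, x)), axis=0)   # prod_n (1+x p_n)^{-1}
K = np.array([[trapz(prod_all / ((1 + x * pm) * (1 + x * pl)), x)
               for pl in p] for pm in p])                  # kernel (62)

# normalization identity from sum_m |c_m|^2 = 1
total = sum(pm * pl * (1 + (i == j)) * K[i, j]
            for i, pm in enumerate(p) for j, pl in enumerate(p))
print(total)                        # should be close to 1
print(K.max(), 1 / (1 - p.max()))   # K_ml <= 1/(1-p_max)
```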

Since A is not necessarily self-adjoint, we have to proceed differently from Reimann [35] to bound this term. To this end, we make use of the Cauchy-Schwarz inequality for the trace, i.e., \(|{{\,\textrm{tr}\,}}(B^*C)| \le \sqrt{{{\,\textrm{tr}\,}}(B^*B){{\,\textrm{tr}\,}}(C^*C)}\), and the inequality \(|{{\,\textrm{tr}\,}}(BC)|\le \Vert B\Vert {{\,\textrm{tr}\,}}(|C|)\) for any operators B, C [43, Thm. 3.7.6]. With these inequalities, we have that

$$\begin{aligned} {{\,\textrm{tr}\,}}(A^*\rho A \rho )&\le \sqrt{{{\,\textrm{tr}\,}}(A^*\rho ^2 A){{\,\textrm{tr}\,}}(\rho A^*A\rho )} \end{aligned}$$
(67a)
$$\begin{aligned}&= \sqrt{{{\,\textrm{tr}\,}}(AA^*\rho ^2) {{\,\textrm{tr}\,}}(A^*A\rho ^2) } \end{aligned}$$
(67b)
$$\begin{aligned}&\le \Vert A\Vert ^2 {{\,\textrm{tr}\,}}\rho ^2. \end{aligned}$$
(67c)

Combining (64), (65), (66) and (67c) proves the bound for the variance and thus finishes the proof in the finite-dimensional case.
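The inequality chain (67a)-(67c) is easy to confirm numerically for random matrices (our own sanity check; the test matrices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
D = 6
A = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))  # non-self-adjoint
M = rng.standard_normal((D, D)) + 1j * rng.standard_normal((D, D))
rho = M @ M.conj().T
rho /= np.trace(rho).real                                           # random density matrix

lhs = np.trace(A.conj().T @ rho @ A @ rho).real                     # tr(A* rho A rho), real and >= 0
mid = np.sqrt(np.trace(A @ A.conj().T @ rho @ rho).real
              * np.trace(A.conj().T @ A @ rho @ rho).real)          # after Cauchy-Schwarz, (67b)
rhs = np.linalg.norm(A, 2) ** 2 * np.trace(rho @ rho).real          # ||A||^2 tr(rho^2), (67c)
print(lhs <= mid + 1e-9, mid <= rhs + 1e-9)
```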

Now suppose that \({\mathcal {H}}\) has a countably infinite ONB. The expectation can be computed as before since \({\textrm{GAP}}(\rho )(|\psi \rangle \langle \psi |)=\rho \) remains true in the infinite-dimensional setting [49]. For the variance, we approximate \(\rho \) by density matrices \(\rho _n\), \(n\in {\mathbb {N}}\), of finite rank defined by

$$\begin{aligned} \rho _n := \sum _{m=1}^{n-1} p_m |m\rangle \langle m| +\Biggl ( \sum _{m=n}^{\infty } p_m\Biggr )|n\rangle \langle n|. \end{aligned}$$
(68)

Then, \(\Vert \rho _n-\rho \Vert _{{{\,\textrm{tr}\,}}} \rightarrow 0\) as \(n\rightarrow \infty \), and therefore Theorem 3 in [49] implies that \({\textrm{GAP}}(\rho _n) \Rightarrow {\textrm{GAP}}(\rho )\) (weak convergence). Note also that from some \(n_0\) onwards, \(\sum _{m=n}^\infty p_m \le p_1\) and thus \(\Vert \rho _n\Vert =p_1=\Vert \rho \Vert \). Let \(f(\psi ):=|\langle \psi |A|\psi \rangle -{{\,\textrm{tr}\,}}(A\rho )|^2\) and \(f_n(\psi ):= |\langle \psi |A|\psi \rangle -{{\,\textrm{tr}\,}}(A\rho _n)|^2\). Since \({{\,\textrm{tr}\,}}(A\rho _n)\rightarrow {{\,\textrm{tr}\,}}(A\rho )\) and therefore \(f_n\rightarrow f\) uniformly in \(\psi \), it follows that \({\textrm{GAP}}(\rho _n)(f_n)-{\textrm{GAP}}(\rho _n)(f) \rightarrow 0\). Since f is continuous, it follows from the weak convergence of the measures \({\textrm{GAP}}(\rho _n)\) that \({\textrm{GAP}}(\rho _n)(f) \rightarrow {\textrm{GAP}}(\rho )(f)\), and therefore altogether \({\textrm{GAP}}(\rho _n)(f_n)\rightarrow {\textrm{GAP}}(\rho )(f)\). Since, as one easily verifies, \({{\,\textrm{tr}\,}}\rho _n^2 \rightarrow {{\,\textrm{tr}\,}}\rho ^2\), the bound for the variance in the finite-dimensional case remains valid in the infinite-dimensional setting.Footnote 5\(\square \)

Proof of Theorem 3

Without loss of generality assume that all eigenvalues of \(\rho \) are positive. Proposition 1 together with Chebyshev’s inequality implies for any operator A and any \(\varepsilon >0\) that

$$\begin{aligned}&\!\!\!\!\!\!\!\!{\textrm{GAP}}(\rho )\left\{ \psi \in {\mathbb {S}}({\mathcal {H}}): \bigl |\langle \psi |A|\psi \rangle - {{\,\textrm{tr}\,}}(A\rho ) \bigr | > \varepsilon \right\} \nonumber \\&~~~~~\le \frac{\Vert A\Vert ^2 {{\,\textrm{tr}\,}}\rho ^2}{\varepsilon ^2(1-p_{\textrm{max}})} \left( 1 + \frac{4\sqrt{{{\,\textrm{tr}\,}}\rho ^2}+2{{\,\textrm{tr}\,}}\rho ^2}{(1-2p_{\textrm{max}})(1-3p_{\textrm{max}})}\right) \end{aligned}$$
(69a)
$$\begin{aligned}&~~~~~\le \frac{4\Vert A\Vert ^2 {{\,\textrm{tr}\,}}\rho ^2}{3\varepsilon ^2}\left( 1+8\left( 4\sqrt{p_{\max }}+2p_{\max }\right) \right) \end{aligned}$$
(69b)
$$\begin{aligned}&~~~~~\le \frac{28 \Vert A\Vert ^2 {{\,\textrm{tr}\,}}\rho ^2}{\varepsilon ^2}. \end{aligned}$$
(69c)
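The step from (69a) to (69c) uses \(1/(1-p_{\max })\le 4/3\), \({{\,\textrm{tr}\,}}\rho ^2\le p_{\max }\), and \((1-2p_{\max })(1-3p_{\max })\ge 1/8\) for \(p_{\max }<1/4\); the resulting factor is increasing in \(p_{\max }\) and equals exactly 28 at \(p_{\max }=1/4\). A quick grid check (our own verification):

```python
import numpy as np

p = np.linspace(1e-6, 0.25, 2001)    # p_max ranging over (0, 1/4]
t = p                                # worst case tr(rho^2) = p_max
# the factor multiplying ||A||^2 tr(rho^2) / eps^2 in (69a)
bound = (1.0 / (1 - p)) * (1 + (4 * np.sqrt(t) + 2 * t) / ((1 - 2 * p) * (1 - 3 * p)))
print(bound.max())                   # <= 28, attained at p_max = 1/4
```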

Let \((|l\rangle _a)_{l=1\dots d_a}\) and \((|n\rangle _b)_{n=1\dots d_b}\), where \(d_a:= \dim {\mathcal {H}}_a \in {\mathbb {N}}\) and \(d_b:=\dim {\mathcal {H}}_b \in {\mathbb {N}}\cup \{\infty \}\), be orthonormal bases of \({\mathcal {H}}_a\) and \({\mathcal {H}}_b\), respectively. For

$$\begin{aligned} A^{lm} = \left[ |l\rangle _a \langle m|\right] \otimes I_b, \end{aligned}$$
(70)

where \(I_b\) is the identity on \({\mathcal {H}}_b\), we find \(\Vert A^{lm}\Vert =1\),

$$\begin{aligned} \langle \psi |A^{lm}|\psi \rangle&= \sum _n\langle \psi | \left( |l\rangle _a\langle m| \otimes |n\rangle _b\langle n|\right) |\psi \rangle \end{aligned}$$
(71a)
$$\begin{aligned}&= {}_a\langle m|\left( \sum _n {}_b\langle n|\psi \rangle \langle \psi |n\rangle _b\right) |l\rangle _a \end{aligned}$$
(71b)
$$\begin{aligned}&= {}_a\langle m|\rho _a^\psi |l\rangle _a \end{aligned}$$
(71c)

and similarly

$$\begin{aligned} {{\,\textrm{tr}\,}}(A^{lm}\rho )&= \sum _{k,n} {}_a\langle k| {}_b\langle n| \left[ \left( [|l\rangle _a \langle m|] \otimes I_b \right) \rho \right] |k\rangle _a |n\rangle _b \end{aligned}$$
(72a)
$$\begin{aligned}&= {}_a\langle m|\left( \sum _n {}_b\langle n| \rho |n\rangle _b\right) |l\rangle _a \end{aligned}$$
(72b)
$$\begin{aligned}&= {}_a\langle m|{{\,\textrm{tr}\,}}_b\rho |l\rangle _a. \end{aligned}$$
(72c)

For any \(d_a\times d_a\) matrix \(M = (M_{ij})\), it holds that \(\Vert M\Vert _{{{\,\textrm{tr}\,}}} \le \sqrt{d_a} \Vert M\Vert _2\), where \(\Vert M\Vert _2\) denotes the Hilbert-Schmidt norm of M, which is defined by

$$\begin{aligned} \Vert M\Vert _2 = \sqrt{{{\,\textrm{tr}\,}}(M^*M)} = \sqrt{\sum _{i,j=1}^{d_a} |M_{ij}|^2}, \end{aligned}$$
(73)

see, e.g., Lemma 6 in [30]. Therefore, we have that

$$\begin{aligned} \Vert \rho _a^\psi - {{\,\textrm{tr}\,}}_b \rho \Vert ^2_{{{\,\textrm{tr}\,}}} \le d_a \sum _{l,m=1}^{d_a} \bigl |{}_a\langle m|\rho _a^\psi -{{\,\textrm{tr}\,}}_b\rho |l\rangle _a \bigr |^2 \end{aligned}$$
(74)

and thus

$$\begin{aligned} {\textrm{GAP}}(\rho )&\left\{ \psi \in {\mathbb {S}}({\mathcal {H}}): \left\| \rho _a^\psi - {{\,\textrm{tr}\,}}_b\rho \right\| _{{{\,\textrm{tr}\,}}} > d_a^{3/2} \varepsilon \right\} \nonumber \\&\le {\textrm{GAP}}(\rho )\left\{ \psi \in {\mathbb {S}}({\mathcal {H}}): \sum _{l,m=1}^{d_a}\bigl |{}_a\langle m|\rho _a^\psi -{{\,\textrm{tr}\,}}_b\rho |l\rangle _a \bigr |^2 \ge d_a^2 \varepsilon ^2\right\} \end{aligned}$$
(75a)
$$\begin{aligned}&\le {\textrm{GAP}}(\rho )\left\{ \psi \in {\mathbb {S}}({\mathcal {H}}): \exists \; l,m : \bigl |{}_a\langle m|\rho _a^\psi -{{\,\textrm{tr}\,}}_b\rho |l\rangle _a \bigr | \ge \varepsilon \right\} \end{aligned}$$
(75b)
$$\begin{aligned}&\le \frac{28d_a^2 {{\,\textrm{tr}\,}}\rho ^2}{\varepsilon ^2}, \end{aligned}$$
(75c)

where we used (69c), (71c), (72c) and \(\Vert A^{lm}\Vert =1\) in the last step. By replacing \(\varepsilon \rightarrow d_a^{-3/2}\varepsilon \), we finally obtain

$$\begin{aligned} {\textrm{GAP}}(\rho )\left\{ \psi \in {\mathbb {S}}({\mathcal {H}}): \Vert \rho _a^\psi -{{\,\textrm{tr}\,}}_b\rho \Vert _{{{\,\textrm{tr}\,}}} > \varepsilon \right\} \le \frac{28 d_a^5 {{\,\textrm{tr}\,}}\rho ^2}{\varepsilon ^2}. \end{aligned}$$
(76)

Setting

$$\begin{aligned} \delta = \frac{28 d_a^5 {{\,\textrm{tr}\,}}\rho ^2}{\varepsilon ^2} \end{aligned}$$
(77)

and solving for \(\varepsilon \) gives (36) and thus finishes the proof. \(\square \)
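The norm inequality used in (74) and the final step can be checked numerically: solving (77) for \(\varepsilon \) gives \(\varepsilon = d_a^{5/2}\sqrt{28\,{{\,\textrm{tr}\,}}\rho ^2/\delta }\). A sketch (test matrix and parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8
M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))

tr_norm = np.linalg.svd(M, compute_uv=False).sum()   # trace norm = sum of singular values
hs_norm = np.linalg.norm(M)                          # Hilbert-Schmidt (Frobenius) norm
print(tr_norm <= np.sqrt(d) * hs_norm + 1e-9)        # ||M||_tr <= sqrt(d) ||M||_2

# solving (77) for epsilon
delta, tr_rho2 = 1e-3, 1e-12
eps = d ** 2.5 * np.sqrt(28 * tr_rho2 / delta)
print(np.isclose(28 * d ** 5 * tr_rho2 / eps ** 2, delta))
```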

4.3 Proof of Theorem 1

The proof of Theorem 1 largely follows the proof of canonical typicality given in [30]; some crucial differences concern our generalization of the Lévy lemma and the steps needed for covering infinite dimension.

Let \(U_a\) be a unitary operator on \({\mathcal {H}}_a\). Then, the function \(f:{\mathbb {S}}({\mathcal {H}})\rightarrow {\mathbb {C}}\), \(f(\psi ) = {{\,\textrm{tr}\,}}_a(U_a\rho _a^\psi )=\langle \psi |U_a\otimes I_b|\psi \rangle \) is Lipschitz continuous with Lipschitz constant \(\eta \le 2\Vert U_a\Vert = 2\) (see, e.g., Lemma 5 in [30]). By Theorem 2 and Remark 3,

$$\begin{aligned} {\textrm{GAP}}(\rho )&\left\{ \psi \in {\mathbb {S}}({\mathcal {H}}):\bigl |{{\,\textrm{tr}\,}}_a(U_a\rho _a^\psi )-{\textrm{GAP}}(\rho )({{\,\textrm{tr}\,}}_a(U_a \rho _a^\psi )) \bigr | >\varepsilon \right\} \nonumber \\&\le 12\exp \left( -\frac{C\varepsilon ^2}{8\Vert \rho \Vert }\right) . \end{aligned}$$
(78)

By (27),

$$\begin{aligned} {\textrm{GAP}}(\rho )({{\,\textrm{tr}\,}}_a(U_a\rho _a^\psi )) = {{\,\textrm{tr}\,}}_a\left( U_a {\textrm{GAP}}(\rho )(\rho _a^\psi )\right) = {{\,\textrm{tr}\,}}_a\left( U_a {{\,\textrm{tr}\,}}_b\rho \right) . \end{aligned}$$
(79)

Let \((U_a^j)_{j=0}^{d_a^2-1}\) be unitary operators that form a basis for the space of operators on \({\mathcal {H}}_a\) such thatFootnote 6

$$\begin{aligned} {{\,\textrm{tr}\,}}_a(U_a^{j*} U_a^k) = d_a \delta _{jk}. \end{aligned}$$
(80)

Then,

$$\begin{aligned}&{\textrm{GAP}}(\rho )\left\{ \psi \in {\mathbb {S}}({\mathcal {H}}):\exists j: \bigl |{{\,\textrm{tr}\,}}_a(U_a^{j}\rho _a^\psi )-{{\,\textrm{tr}\,}}_a(U_a^{j}{{\,\textrm{tr}\,}}_b\rho ) \bigr | >\varepsilon \right\} \nonumber \\&\quad \le 12d_a^2 \exp \left( -\frac{C\varepsilon ^2}{8\Vert \rho \Vert }\right) . \end{aligned}$$
(81)

As in [30], the density matrix \(\rho _a^\psi \) can be expanded as

$$\begin{aligned} \rho _a^\psi = \frac{1}{d_a}\sum _{j} C_{j}(\rho _a^\psi ) U_a^{j}, \end{aligned}$$
(82)

where \(C_j(\rho _a^\psi ) = {{\,\textrm{tr}\,}}_a(U_a^{j*}\rho _a^\psi )\) and (81) becomes

$$\begin{aligned} {\textrm{GAP}}(\rho )\left\{ \psi \in {\mathbb {S}}({\mathcal {H}}):\exists j: \bigl | C_{j}(\rho _a^\psi )-C_{j}({{\,\textrm{tr}\,}}_b\rho ) \bigr | >\varepsilon \right\} \le 12d_a^2 \exp \left( -\frac{C\varepsilon ^2}{8\Vert \rho \Vert }\right) . \end{aligned}$$
(83)

If \(|C_{j}(\rho _a^\psi )-C_{j}({{\,\textrm{tr}\,}}_b\rho )|\le \varepsilon \) for all j, then

$$\begin{aligned} \Vert \rho _a^\psi -{{\,\textrm{tr}\,}}_b\rho \Vert _{{{\,\textrm{tr}\,}}}^2&\le d_a\Vert \rho _a^\psi -{{\,\textrm{tr}\,}}_b\rho \Vert ^2_2 \end{aligned}$$
(84a)
$$\begin{aligned}&=d_a\left\| \frac{1}{d_a}\sum _{j}\left( C_{j}(\rho _a^\psi )-C_{j}({{\,\textrm{tr}\,}}_b\rho )\right) U_a^{j}\right\| _{2}^2 \end{aligned}$$
(84b)
$$\begin{aligned}&= \frac{1}{d_a} {{\,\textrm{tr}\,}}_a\left| \sum _{j} \left( C_{j}(\rho _a^\psi )-C_{j}({{\,\textrm{tr}\,}}_b\rho )\right) U_a^{j}\right| ^2 \end{aligned}$$
(84c)
$$\begin{aligned}&=\sum _{j} \left| C_{j}(\rho _a^\psi )-C_{j}({{\,\textrm{tr}\,}}_b\rho )\right| ^2 \end{aligned}$$
(84d)
$$\begin{aligned}&\le d_a^2\varepsilon ^2. \end{aligned}$$
(84e)

This implies that

$$\begin{aligned} {\textrm{GAP}}(\rho )\left\{ \psi \in {\mathbb {S}}({\mathcal {H}}):\Vert \rho _a^\psi -{{\,\textrm{tr}\,}}_b\rho \Vert _{{{\,\textrm{tr}\,}}} > d_a\varepsilon \right\} \le 12d_a^2 \exp \left( -\frac{C\varepsilon ^2}{8\Vert \rho \Vert }\right) \end{aligned}$$
(85)

and, after replacing \(\varepsilon \) by \(\varepsilon d_a^{-1}\),

$$\begin{aligned} {\textrm{GAP}}(\rho )\left\{ \psi \in {\mathbb {S}}({\mathcal {H}}):\Vert \rho _a^\psi -{{\,\textrm{tr}\,}}_b\rho \Vert _{{{\,\textrm{tr}\,}}} > \varepsilon \right\} \le 12d_a^2 \exp \left( -\frac{C\varepsilon ^2}{8d_a^2\Vert \rho \Vert }\right) . \end{aligned}$$
(86)

Setting

$$\begin{aligned} \delta = 12d_a^2 \exp \left( -\frac{C\varepsilon ^2}{8d_a^2\Vert \rho \Vert }\right) \end{aligned}$$
(87)

and solving for \(\varepsilon \) finishes the proof.
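A concrete realization of the unitary basis (80) is the clock-and-shift (Weyl) family \(X^j Z^k\); the following sketch (with a small, hypothetical \(d_a\)) verifies the orthogonality relation (80), the expansion (82), and the Parseval-type identity behind (84d):

```python
import numpy as np

d = 3                                               # small example dimension for H_a
omega = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)                   # cyclic shift operator
Zc = np.diag(omega ** np.arange(d))                 # clock (phase) operator
U = [np.linalg.matrix_power(X, j) @ np.linalg.matrix_power(Zc, k)
     for j in range(d) for k in range(d)]           # d^2 unitaries

# orthogonality: tr(U_j^* U_k) = d delta_{jk}
G = np.array([[np.trace(Uj.conj().T @ Uk) for Uk in U] for Uj in U])
print(np.allclose(G, d * np.eye(d * d)))

# expansion M = (1/d) sum_j C_j U_j with C_j = tr(U_j^* M),
# and the Parseval-type identity ||M||_2^2 = (1/d) sum_j |C_j|^2
rng = np.random.default_rng(5)
M = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
C = np.array([np.trace(Uj.conj().T @ M) for Uj in U])
print(np.allclose(sum(c * Uj for c, Uj in zip(C, U)) / d, M))
print(np.isclose(np.linalg.norm(M) ** 2, np.sum(np.abs(C) ** 2) / d))
```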

4.4 Proof of Theorem 2

The proof begins with an auxiliary theorem formulated as Theorem 8 below. For better orientation, we also state the analogous fact about Gaussian distributions as Theorem 7 and start by quoting its real version:Footnote 7

Lemma 1

([27]). Let \(F:{\mathbb {R}}^D \rightarrow {\mathbb {R}}\) be a Lipschitz function with constant \(\eta \). Let \(X=(X_1,\dots ,X_D)\) be a vector of independent (real) standard Gaussian random variables. Then for every \(\varepsilon >0\),

$$\begin{aligned} {\mathbb {P}}\bigl \{|F(X)-{\mathbb {E}}F(X)|>\varepsilon \bigr \} \le 2 \exp \left( -\frac{2\varepsilon ^2}{\pi ^2\eta ^2}\right) . \end{aligned}$$
(88)

Now let \(\rho =\sum _{n=1}^D p_n |n\rangle \langle n|\) be a density matrix on the D-dimensional Hilbert space \({\mathcal {H}}\), and let Z be a random vector in \({\mathcal {H}}\) whose distribution is \({\textrm{G}}(\rho )\), the Gaussian measure with mean 0 and covariance \(\rho \) as defined in Sect. 2.2; equivalently, \(Z=\sum _{n=1}^D Z_n |n\rangle \), where the \(Z_n\) are independent complex mean-zero Gaussian random variables with variances

$$\begin{aligned} {\mathbb {E}}|Z_n|^2 = p_n \,. \end{aligned}$$
(89)

Then we can write \(Z=\sqrt{\rho /2}\, {\tilde{Z}}\), where the components \({\tilde{Z}}_n\) of \({\tilde{Z}}=\sum _{n=1}^D {\tilde{Z}}_n |n\rangle \) are D independent complex mean-zero Gaussian random variables with variances \({\mathbb {E}}|{\tilde{Z}}_n|^2=2\), which can be identified in a natural way with a vector of 2D independent real standard Gaussian variables.

If \(F:{\mathcal {H}}\rightarrow {\mathbb {R}}\) is Lipschitz with constant \(\eta \), then \(F\circ \sqrt{\rho /2}: {\mathcal {H}}\rightarrow {\mathbb {R}}\) is also Lipschitz with constant \(\eta \sqrt{\Vert \rho \Vert /2}\). This function can also naturally be considered as a function on \({\mathbb {R}}^{2D}\) and then an application of Lemma 1 immediately proves the following theorem:

Theorem 7

Let \(\dim {\mathcal {H}}<\infty \), let \(\rho \) be a density matrix on \({\mathcal {H}}\), let Z be a random vector with distribution \({\textrm{G}}(\rho )\), and let \(F:{\mathcal {H}}\rightarrow {\mathbb {R}}\) be a Lipschitz function with Lipschitz constant \(\eta \). Then for every \(\varepsilon >0\),

$$\begin{aligned} {\mathbb {P}}\bigl \{|F(Z)-{\mathbb {E}}F(Z)|>\varepsilon \bigr \} \le 2\exp \left( -\frac{4\varepsilon ^2}{\pi ^2\eta ^2 \Vert \rho \Vert }\right) . \end{aligned}$$
(90)

However, instead of using Theorem 7, we will use Theorem 8 below, a similar result for the Gaussian adjusted measure \({\text {GA}}(\rho )\) defined in Sect. 2.2, which has density \(\Vert \psi \Vert ^2\) relative to \({\textrm{G}}(\rho )\). Its proof closely follows the proof of Lévy’s Lemma in [27]; for the convenience of the reader, we provide all the details.

Theorem 8

Let \(\dim {\mathcal {H}}<\infty \), let \(\rho \) be a density matrix on \({\mathcal {H}}\), let Z be a random vector with distribution \({\textrm{GAP}}(\rho )\), and let \(F:{\mathcal {H}}\rightarrow {\mathbb {R}}\) be a Lipschitz function with Lipschitz constant \(\eta \). Then for every \(\varepsilon >0\),

$$\begin{aligned} {\textrm{GAP}}(\rho )\Bigl \{\psi \in {\mathbb {S}}({\mathcal {H}}):\bigl |F(\psi )-{\text {GA}}(\rho )(F) \bigr |>\varepsilon \Bigr \} \le 4\exp \left( -\frac{2\varepsilon ^2}{\pi ^2\eta ^2\Vert \rho \Vert }\right) . \end{aligned}$$
(91)

Proof

We identify \({\mathcal {H}}\) with \({\mathbb {C}}^D\) by means of the ONB \((|n\rangle )_{n=1\ldots D}\). Let \(\varphi : {\mathbb {R}}\rightarrow {\mathbb {R}}\) be a convex function and let \({\tilde{Z}}=(\tilde{Z_1},\dots ,{\tilde{Z}}_D)\) be a vector with the same distribution as Z but independent of it. With the help of Jensen’s inequality and Hölder’s inequality, we find that

$$\begin{aligned}&{\text {GA}}(\rho )_{\psi }\left[ \varphi (F(\psi )-{\text {GA}}(\rho )_\phi (F))\right] \nonumber \\&\qquad \le {\text {GA}}(\rho )_\psi {\text {GA}}(\rho )_\phi \left[ \varphi (F(\psi )-F(\phi ))\right] \end{aligned}$$
(92a)
$$\begin{aligned}&\qquad = \int _{{\mathcal {H}}} \int _{{\mathcal {H}}} \varphi (F(\psi )-F(\phi )) \Vert \psi \Vert ^2 \Vert \phi \Vert ^2 \, {\mathbb {P}}(\textrm{d}\psi ) {\mathbb {P}}(\textrm{d}\phi ) \end{aligned}$$
(92b)
$$\begin{aligned}&\qquad = \sum _{n,m} \int _{{\mathbb {C}}^D} \int _{{\mathbb {C}}^D}\varphi (F(Z)-F({\tilde{Z}})) |Z_n|^2 |{\tilde{Z}}_m|^2 {\mathbb {P}}(dZ){\mathbb {P}}(d{\tilde{Z}}) \end{aligned}$$
(92c)
$$\begin{aligned}&\qquad \le \sum _{n,m}\left( {\mathbb {E}}_{(Z,{\tilde{Z}})}(|Z_n|^4 |{\tilde{Z}}_m|^4){\mathbb {E}}_{(Z,{\tilde{Z}})}\left( \varphi (F(Z)-F({\tilde{Z}}))^2\right) \right) ^{1/2}, \end{aligned}$$
(92d)

where we use the notation F(Z) and \(F(\psi )\) interchangeably. We can write \(Z_n = \text {Re\,}Z_n + i \text {Im\,}Z_n\) where \(\text {Re\,}Z_n\) and \(\text {Im\,}Z_n\) are independent real-valued Gaussian random variables with mean 0 and variance \(p_n/2\). Since \({\mathbb {E}}|\text {Re\,}Z_n|^2 = p_n/2\) and \({\mathbb {E}}|\text {Re\,}Z_n|^4 = 3p_n^2/4\), we obtain

$$\begin{aligned} {\mathbb {E}}|Z_n|^4 = {\mathbb {E}}|\text {Re\,}Z_n|^4 + 2 {\mathbb {E}}|\text {Re\,}Z_n|^2{\mathbb {E}}|\text {Im\,}Z_n|^2 + {\mathbb {E}}|\text {Im\,}Z_n|^4 = 2p_n^2 \end{aligned}$$
(93)

and therefore

$$\begin{aligned} \sum _{n,m}\left( {\mathbb {E}}_{(Z,{\tilde{Z}})}(|Z_n|^4 |{\tilde{Z}}_m|^4)\right) ^{1/2} = \sum _{n,m} 2 p_n p_m = 2. \end{aligned}$$
(94)

We identify Z with the vector \(X:=(\text {Re\,}Z_1,\text {Im\,}Z_1,\text {Re\,}Z_2,\dots ,\text {Re\,}Z_D, \text {Im\,}Z_D)\) of real Gaussian random variables and similarly \({\tilde{Z}}\) with \(Y:= (\text {Re\,}{\tilde{Z}}_1,\text {Im\,}{\tilde{Z}}_1,\text {Re\,} {\tilde{Z}}_2,\dots ,\text {Re\,}{\tilde{Z}}_D, \text {Im\,}{\tilde{Z}}_D)\). For each \(0\le \theta \le \frac{\pi }{2}\) set \(X_\theta = X \sin \theta + Y \cos \theta \). One easily sees that the joint distribution of X and Y, which is the multivariate normal distribution with mean vector 0 and covariance matrix \(\textrm{diag}(p_1,p_1,\dots ,p_D,p_D,p_1,p_1,\dots ,p_D,p_D)/2\), is the same as the joint distribution of \(X_\theta \) and \(\frac{\textrm{d}}{\textrm{d}\theta } X_\theta = X \cos \theta - Y \sin \theta \): linear combinations of independent Gaussian random variables are again Gaussian, and the entries of the mean vector and covariance matrix are easily computed.

Since F can be approximated uniformly by continuously differentiable functions, we can without loss of generality assume that F is continuously differentiable.

Let us now assume that \(\varphi \) is non-negative; then \(\varphi ^2\) is also convex. With the help of Jensen’s inequality, we find that

$$\begin{aligned} {\mathbb {E}} \varphi (F(Z)-F({\tilde{Z}}))^2&= {\mathbb {E}}\varphi (F(X)-F(Y))^2 \end{aligned}$$
(95a)
$$\begin{aligned}&={\mathbb {E}}\left[ \varphi \left( \int _0^{\pi /2} \frac{\textrm{d}}{\textrm{d}\theta } F(X_{\theta })\, \textrm{d}\theta \right) ^2\right] \end{aligned}$$
(95b)
$$\begin{aligned}&= {\mathbb {E}}\left[ \varphi \left( \int _0^{\pi /2}\left( \nabla F(X_\theta ), \frac{\textrm{d}}{\textrm{d}\theta } X_\theta \right) \, \textrm{d}\theta \right) ^2 \right] \end{aligned}$$
(95c)
$$\begin{aligned}&\le \frac{2}{\pi } {\mathbb {E}}\left[ \int _0^{\pi /2}\varphi \left( \frac{\pi }{2}\left( \nabla F(X_\theta ),\frac{\textrm{d}}{\textrm{d}\theta } X_\theta \right) \right) ^2 \textrm{d}\theta \right] \end{aligned}$$
(95d)
$$\begin{aligned}&= {\mathbb {E}}\varphi \left( \frac{\pi }{2}\left( \nabla F(X),Y\right) \right) ^2, \end{aligned}$$
(95e)

where in the last step we used Fubini’s theorem and the fact that the joint distribution of \(X_\theta \) and \(\frac{\textrm{d}}{\textrm{d}\theta }X_\theta \) is the same as the joint distribution of X and Y.

Let \(\lambda \in {\mathbb {R}}\) and set \(\varphi (x)=\exp (\lambda x)\). Then, we get

$$\begin{aligned} {\mathbb {E}}\exp \left[ 2\lambda (F(X)-F(Y))\right]&\le {\mathbb {E}}\exp \left( \lambda \pi \sum _{i=1}^{2D}\frac{\partial F}{\partial x_i}(X) Y_i\right) \end{aligned}$$
(96a)
$$\begin{aligned}&= {\mathbb {E}}_X \prod _{i=1}^{2D} {\mathbb {E}}_Y \exp \left( \lambda \pi \frac{\partial F}{\partial x_i}(X) Y_i\right) \end{aligned}$$
(96b)
$$\begin{aligned}&={\mathbb {E}} \exp \left( \frac{\lambda ^2 \pi ^2}{4}\sum _{i=1}^{2D}\left( \frac{\partial F}{\partial x_i}(X)\right) ^2p_i\right) \end{aligned}$$
(96c)
$$\begin{aligned}&\le {\mathbb {E}}\exp \left( \frac{\lambda ^2\pi ^2 \Vert \rho \Vert \Vert \nabla F(X)\Vert ^2}{4}\right) \end{aligned}$$
(96d)
$$\begin{aligned}&\le \exp \left( \frac{\lambda ^2\pi ^2 \Vert \rho \Vert \eta ^2}{4}\right) . \end{aligned}$$
(96e)

Altogether we obtain

$$\begin{aligned} {\text {GA}}(\rho )\left[ \exp (\lambda (F(\psi )-{\text {GA}}(\rho )(F))) \right] \le 2\exp \left( \frac{\lambda ^2\pi ^2 \Vert \rho \Vert \eta ^2}{8}\right) . \end{aligned}$$
(97)

By Markov’s inequality, we find that

$$\begin{aligned}&{\text {GA}}(\rho )\left\{ |F(Z)-{\text {GA}}(\rho )(F)|>\varepsilon \right\} \nonumber \\&\qquad = {\text {GA}}(\rho )\left\{ F(Z)- {\text {GA}}(\rho )(F)> \varepsilon \right\} \nonumber \\&\qquad \qquad + {\text {GA}}(\rho )\left\{ {\text {GA}}(\rho )(F)-F(Z)>\varepsilon \right\} \end{aligned}$$
(98a)
$$\begin{aligned}&\qquad = {\text {GA}}(\rho )\left\{ \exp (\lambda (F(Z)-{\text {GA}}(\rho )(F)))> e^{\lambda \varepsilon }\right\} \nonumber \\&\qquad \qquad + {\text {GA}}(\rho )\left\{ \exp (-\lambda (F(Z)-{\text {GA}}(\rho )(F))) > e^{\lambda \varepsilon }\right\} \end{aligned}$$
(98b)
$$\begin{aligned}&\qquad \le 4\exp \left( -\lambda \varepsilon + \frac{\lambda ^2\pi ^2\Vert \rho \Vert \eta ^2}{8}\right) . \end{aligned}$$
(98c)

Since \(\lambda \in {\mathbb {R}}\) was arbitrary, we can minimize the right-hand side over \(\lambda \). The minimum is attained at \(\lambda _{\min } = 4\varepsilon /(\pi ^2 \Vert \rho \Vert \eta ^2)\) and inserting this value in (98c) finally yields (91). \(\square \)
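The minimization at the end is elementary: the exponent \(-\lambda \varepsilon + \lambda ^2\pi ^2\Vert \rho \Vert \eta ^2/8\) is a parabola in \(\lambda \) with minimal value \(-2\varepsilon ^2/(\pi ^2\Vert \rho \Vert \eta ^2)\) at \(\lambda _{\min }\), which yields the exponent in (91). A quick numeric check (parameter values are arbitrary):

```python
import numpy as np

eps, eta, rho_norm = 0.3, 1.7, 0.2                  # arbitrary test values
g = lambda lam: -lam * eps + lam ** 2 * np.pi ** 2 * rho_norm * eta ** 2 / 8
lam_min = 4 * eps / (np.pi ** 2 * rho_norm * eta ** 2)
print(np.isclose(g(lam_min), -2 * eps ** 2 / (np.pi ** 2 * rho_norm * eta ** 2)))
grid = np.linspace(-10, 10, 100_001)
print(g(lam_min) <= g(grid).min() + 1e-12)          # lam_min is indeed the minimizer
```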

The last ingredient we need for the proof of Theorem 2 is the following lemma:

Lemma 2

For all \(r>0\) it holds that

$$\begin{aligned} {\textrm{GAP}}(\rho )\left\{ \Vert \psi \Vert <r\right\} \le \sqrt{2} \exp \left( -\frac{1/2-r^2}{2\Vert \rho \Vert }\right) . \end{aligned}$$
(99)

Proof

With the help of Hölder’s inequality, we find that

$$\begin{aligned} {\text {GA}}(\rho )\left\{ \Vert \psi \Vert< r\right\}&= \sum _n \int _{{\mathcal {H}}} |Z_n|^2 \mathbbm {1}_{\{\Vert \psi \Vert < r\}} \, {\mathbb {P}}(\textrm{d}\psi ) \end{aligned}$$
(100a)
$$\begin{aligned}&\le \sum _n \left( {\mathbb {E}} |Z_n|^4 {\mathbb {P}}\left( \Vert \psi \Vert <r\right) \right) ^{1/2} \end{aligned}$$
(100b)
$$\begin{aligned}&= \sqrt{2} \left( {\mathbb {P}}\left( \Vert \psi \Vert < r\right) \right) ^{1/2} \end{aligned}$$
(100c)

Note that in the third line we used (93) and that \(\sum _n p_n =1\). We can write

$$\begin{aligned} \Vert \psi \Vert ^2 = \sum _n |Z_n|^2 = \sum _{n} p_n |{\tilde{Z}}_n|^2, \end{aligned}$$
(101)

where the \({\tilde{Z}}_n\) are independent complex standard Gaussian random variables. For a random variable Y, let \(M_Y(t) = {\mathbb {E}}(e^{tY})\) denote its moment generating function. The Chernoff bound states that for any \(a\in {\mathbb {R}}\),

$$\begin{aligned} {\mathbb {P}}\{Y\le a\} \le \inf _{t<0} M_Y(t) e^{-ta}. \end{aligned}$$
(102)

Here, we thus obtain

$$\begin{aligned} {\mathbb {P}}\left\{ \Vert \psi \Vert< r\right\} ={\mathbb {P}}\left\{ \Vert \psi \Vert ^2< r^2\right\} \le \inf _{t<0} M_{\Vert \psi \Vert ^2}(t) e^{-tr^2}. \end{aligned}$$
(103)

We compute

$$\begin{aligned} M_{\Vert \psi \Vert ^2}(t) = \prod _n M_{|{\tilde{Z}}_n|^2}(p_n t) = \prod _n M_{2(\text {Re\,}{\tilde{Z}}_n)^2}\left( \frac{p_n t}{2}\right) M_{2(\text {Im\,}{\tilde{Z}}_n)^2}\left( \frac{p_n t}{2}\right) . \end{aligned}$$
(104)

Next note that \(2(\text {Re\,}{\tilde{Z}}_n)^2\) and \(2(\text {Im\,}{\tilde{Z}}_n)^2\) are chi-squared distributed random variables with one degree of freedom and that the moment generating function of a random variable Y with distribution \(\chi ^2_1\) is given by

$$\begin{aligned} M_Y(t) = (1-2t)^{-1/2} \quad \text{ for }\quad t<1/2. \end{aligned}$$
(105)

Therefore,

$$\begin{aligned} M_{\Vert \psi \Vert ^2}(t)&= \prod _n (1-p_n t)^{-1} \end{aligned}$$
(106)

and this implies

$$\begin{aligned} {\mathbb {P}}\left\{ \Vert \psi \Vert<r\right\}&\le \inf _{t<0} e^{-tr^2} \prod _n (1-p_n t)^{-1} \end{aligned}$$
(107a)
$$\begin{aligned}&= \inf _{t<0} \exp \left( -tr^2 - \sum _n \ln (1-p_n t)\right) \end{aligned}$$
(107b)
$$\begin{aligned}&= \inf _{s>0} \exp \left( sr^2-\sum _n \ln (1+p_n s)\right) \end{aligned}$$
(107c)
$$\begin{aligned}&\le \exp \left( \frac{r^2}{\Vert \rho \Vert } - \sum _n \ln \left( 1+\frac{p_n}{\Vert \rho \Vert }\right) \right) , \end{aligned}$$
(107d)

where we chose \(s=\Vert \rho \Vert ^{-1}\) in the last line. Because of

$$\begin{aligned} \ln (1+x)\ge \frac{x}{x+1} \ge \frac{x}{2}\quad \text{ for }\quad 0<x\le 1 \end{aligned}$$
(108)

we find that

$$\begin{aligned} {\mathbb {P}}\left\{ \Vert \psi \Vert <r\right\}&\le \exp \left( \frac{r^2}{\Vert \rho \Vert } - \sum _n \frac{p_n}{2\Vert \rho \Vert }\right) = \exp \left( -\frac{1/2-r^2}{\Vert \rho \Vert }\right) . \end{aligned}$$
(109)

Inserting this into (100c) finishes the proof. \(\square \)
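Both ingredients of this argument can be checked numerically (as a sanity check, not as part of the proof). The Python sketch below tests inequality (108) on a grid and evaluates the tail probability exactly in the illustrative special case where \(\rho\) has d equal eigenvalues \(p_n=1/d\): then \(\Vert \rho \Vert =1/d\) and \(\Vert \psi \Vert ^2\) is \(1/d\) times a sum of d i.i.d. Exp(1) variables, i.e., an Erlang variable, so the left-hand side of (109) has a closed form.

```python
import math

# inequality (108) on a grid in (0, 1]
for k in range(1, 1001):
    x = k / 1000.0
    assert math.log(1.0 + x) >= x / (x + 1.0) >= x / 2.0

def erlang_cdf(d, x):
    # P(Gamma(d,1) < x) for integer shape d: 1 - e^{-x} * sum_{j<d} x^j / j!
    return 1.0 - math.exp(-x) * sum(x ** j / math.factorial(j) for j in range(d))

# If rho has d equal eigenvalues p_n = 1/d, then ||rho|| = 1/d and
# ||psi||^2 = (1/d) * Gamma(d,1), so P{||psi|| < r} = P{Gamma(d,1) < d r^2};
# compare with the bound exp(-(1/2 - r^2)/||rho||) from (109).
for d in (2, 4, 16, 64):
    for r in (0.3, 0.5, 0.7):
        assert erlang_cdf(d, d * r * r) <= math.exp(-(0.5 - r * r) * d)
```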

Proof of Theorem 2

We first assume that \(D=\dim {\mathcal {H}}<\infty \). Without loss of generality we can assume that \({\textrm{GAP}}(\rho )(f)=0\). Since f is continuous with mean zero on the connected set \({\mathbb {S}}({\mathcal {H}})\), there exists a \(\varphi \in {\mathbb {S}}({\mathcal {H}})\) such that \(f(\varphi )=0\). This implies for all \({\tilde{\varphi }} \in {\mathbb {S}}({\mathcal {H}})\) that

$$\begin{aligned} |f({\tilde{\varphi }})| = |f({\tilde{\varphi }}) - f(\varphi )| \le \eta \Vert {\tilde{\varphi }}-\varphi \Vert \le \pi \eta , \end{aligned}$$
(110)

where we used in the last step that the Euclidean distance between two points on the unit sphere is bounded by their spherical (geodesic) distance, which is at most \(\pi \). Thus \(|f|\) is bounded by \(\pi \eta \).

Let \(0<r<1\) and define \({\tilde{f}}:{\mathcal {H}}\rightarrow {\mathbb {R}}\) by

$$\begin{aligned} {\tilde{f}}(\psi ) = {\left\{ \begin{array}{ll} f\left( \frac{\psi }{\Vert \psi \Vert }\right) \quad &{}{\textrm{if}}\; \Vert \psi \Vert \ge r,\\ r^{-1} \Vert \psi \Vert f\left( \frac{\psi }{\Vert \psi \Vert }\right) &{}{\textrm{if}}\; \Vert \psi \Vert \le r. \end{array}\right. } \end{aligned}$$
(111)

For every \(\psi ,\varphi \in {\mathcal {H}}\) such that \(\Vert \psi \Vert ,\Vert \varphi \Vert \ge r\) we find that

$$\begin{aligned} \left| {\tilde{f}}(\psi )-{\tilde{f}}(\varphi )\right|&= \left| f\left( \frac{\psi }{\Vert \psi \Vert }\right) - f\left( \frac{\varphi }{\Vert \varphi \Vert }\right) \right| \end{aligned}$$
(112a)
$$\begin{aligned}&\le \eta \left\| \frac{\psi }{\Vert \psi \Vert }-\frac{\varphi }{\Vert \varphi \Vert }\right\| \end{aligned}$$
(112b)
$$\begin{aligned}&\le \frac{\eta }{r} \Vert \psi -\varphi \Vert , \end{aligned}$$
(112c)

where the last inequality follows from

$$\begin{aligned} \left\| \frac{\psi }{\Vert \psi \Vert } - \frac{\varphi }{\Vert \varphi \Vert }\right\| ^2&= 2-\frac{2}{\Vert \psi \Vert \Vert \varphi \Vert } \text {Re\,}\langle \psi ,\varphi \rangle \end{aligned}$$
(113a)
$$\begin{aligned}&= 2 + 2\text {Re\,}\langle \psi ,\varphi \rangle \left( r^{-2}-\frac{1}{\Vert \psi \Vert \Vert \varphi \Vert }\right) - 2r^{-2}\text {Re\,}\langle \psi ,\varphi \rangle \end{aligned}$$
(113b)
$$\begin{aligned}&\le r^{-2}\left( 2 \Vert \psi \Vert \Vert \varphi \Vert - 2 \text {Re\,}\langle \psi ,\varphi \rangle \right) \end{aligned}$$
(113c)
$$\begin{aligned}&\le r^{-2} \left( \Vert \psi \Vert ^2+\Vert \varphi \Vert ^2 - 2\text {Re\,}\langle \psi ,\varphi \rangle \right) \end{aligned}$$
(113d)
$$\begin{aligned}&= r^{-2} \Vert \psi -\varphi \Vert ^2. \end{aligned}$$
(113e)

Thus, \({\tilde{f}}\) is Lipschitz continuous with constant \(\eta /r\) on \(\{\psi \in {\mathcal {H}}: \Vert \psi \Vert \ge r\}\).

Now let \(\psi ,\varphi \in {\mathcal {H}}\) such that \(\Vert \psi \Vert ,\Vert \varphi \Vert \le r\) and \(\Vert \varphi \Vert \le \Vert \psi \Vert \). Then, we obtain

$$\begin{aligned} \left| {\tilde{f}}(\psi ) - {\tilde{f}}(\varphi ) \right|&= r^{-1} \left| \Vert \psi \Vert f\left( \frac{\psi }{\Vert \psi \Vert }\right) - \Vert \varphi \Vert f\left( \frac{\varphi }{\Vert \varphi \Vert }\right) \right| \end{aligned}$$
(114a)
$$\begin{aligned}&\le r^{-1} \left| \Vert \psi \Vert f\left( \frac{\psi }{\Vert \psi \Vert }\right) - \Vert \varphi \Vert f\left( \frac{\psi }{\Vert \psi \Vert }\right) \right| \nonumber \\&\quad + r^{-1} \left| \Vert \varphi \Vert f\left( \frac{\psi }{\Vert \psi \Vert }\right) - \Vert \varphi \Vert f\left( \frac{\varphi }{\Vert \varphi \Vert }\right) \right| \end{aligned}$$
(114b)
$$\begin{aligned}&\le \frac{\pi \eta }{r} \bigl |\Vert \psi \Vert -\Vert \varphi \Vert \bigr | + \frac{\eta }{r} \Vert \varphi \Vert \left\| \frac{\psi }{\Vert \psi \Vert }-\frac{\varphi }{\Vert \varphi \Vert } \right\| \end{aligned}$$
(114c)
$$\begin{aligned}&\le \frac{5\eta }{r} \Vert \psi -\varphi \Vert , \end{aligned}$$
(114d)

where the last inequality follows from

$$\begin{aligned} \Vert \varphi \Vert ^2 \left\| \frac{\psi }{\Vert \psi \Vert } - \frac{\varphi }{\Vert \varphi \Vert } \right\| ^2&= 2\Vert \varphi \Vert ^2 + 2\text {Re\,}\langle \psi ,\varphi \rangle \left( 1-\frac{\Vert \varphi \Vert }{\Vert \psi \Vert }\right) - 2 \text {Re\,}\langle \psi ,\varphi \rangle \end{aligned}$$
(115a)
$$\begin{aligned}&\le 2 \Vert \psi \Vert \Vert \varphi \Vert - 2 \text {Re\,}\langle \psi ,\varphi \rangle \end{aligned}$$
(115b)
$$\begin{aligned}&\le \Vert \psi -\varphi \Vert ^2. \end{aligned}$$
(115c)

Due to the symmetry of the argument in \(\psi \) and \(\varphi \), one finds the same estimate in the case that \(\Vert \psi \Vert \le \Vert \varphi \Vert \) and we conclude that \({\tilde{f}}\) is Lipschitz continuous with constant \(5\eta /r\) on \(\{\psi \in {\mathcal {H}}: \Vert \psi \Vert \le r\}\).

Finally, let \(\psi ,\varphi \in {\mathcal {H}}\) such that \(\Vert \psi \Vert \le r\) and \(\Vert \varphi \Vert \ge r\), and define \(\gamma : [0,1] \rightarrow {\mathcal {H}}\), \(\gamma (t)=(1-t)\psi + t\varphi \). Then, by the intermediate value theorem applied to \(t\mapsto \Vert \gamma (t)\Vert \), there exists a \(t_0\in [0,1]\) such that \(\Vert \gamma (t_0)\Vert =r\) and

$$\begin{aligned} \Vert \psi -\gamma (t_0)\Vert&= t_0 \Vert \psi -\varphi \Vert \le \Vert \psi -\varphi \Vert , \end{aligned}$$
(116)
$$\begin{aligned} \Vert \gamma (t_0) - \varphi \Vert&= (1-t_0) \Vert \psi -\varphi \Vert \le \Vert \psi -\varphi \Vert . \end{aligned}$$
(117)

Therefore, we find that

$$\begin{aligned} \left| {\tilde{f}}(\psi ) - {\tilde{f}}(\varphi )\right|&= \left| r^{-1}\Vert \psi \Vert f\left( \frac{\psi }{\Vert \psi \Vert }\right) - f\left( \frac{\varphi }{\Vert \varphi \Vert }\right) \right| \end{aligned}$$
(118a)
$$\begin{aligned}&\le r^{-1} \left| \Vert \psi \Vert f\left( \frac{\psi }{\Vert \psi \Vert }\right) - \Vert \gamma (t_0)\Vert f\left( \frac{\gamma (t_0)}{\Vert \gamma (t_0)\Vert }\right) \right| \nonumber \\&\quad + \left| f\left( \frac{\gamma (t_0)}{\Vert \gamma (t_0)\Vert }\right) - f\left( \frac{\varphi }{\Vert \varphi \Vert }\right) \right| \end{aligned}$$
(118b)
$$\begin{aligned}&\le \frac{5\eta }{r} \Vert \psi -\gamma (t_0)\Vert + \frac{\eta }{r} \Vert \gamma (t_0)-\varphi \Vert \end{aligned}$$
(118c)
$$\begin{aligned}&\le \frac{6\eta }{r} \Vert \psi -\varphi \Vert . \end{aligned}$$
(118d)

We conclude that \({\tilde{f}}\) is Lipschitz continuous with Lipschitz constant \(6\eta /r\).
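The global Lipschitz bound \(6\eta /r\) for \({\tilde{f}}\) can be probed numerically. The following Python sketch uses the illustrative choice \(f(\psi )=\text {Re}\langle \varphi _0,\psi \rangle \) (for which \(\eta =1\)) with \(r=1/2\) and seeded random pairs of vectors of various norms; the observed difference quotients indeed stay below \(6\eta /r\):

```python
import math
import random

random.seed(0)
r, eta, d = 0.5, 1.0, 4
phi0 = [1.0] + [0.0] * (d - 1)  # f(psi) = Re<phi0, psi> has Lipschitz constant eta = 1

def norm(v):
    return math.sqrt(sum(abs(z) ** 2 for z in v))

def f(psi_unit):  # f, defined on the unit sphere
    return sum((a * b).real for a, b in zip(phi0, psi_unit))

def f_tilde(psi):  # the cutoff extension (111)
    n = norm(psi)
    val = f([z / n for z in psi])
    return val if n >= r else (n / r) * val

def rand_vec():
    # random direction (complex Gaussian), then rescale to a norm in [0.05, 2]
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(d)]
    s = random.uniform(0.05, 2.0) / norm(v)
    return [z * s for z in v]

max_ratio = 0.0
for _ in range(20000):
    psi, phi = rand_vec(), rand_vec()
    dist = norm([a - b for a, b in zip(psi, phi)])
    if dist > 1e-9:
        max_ratio = max(max_ratio, abs(f_tilde(psi) - f_tilde(phi)) / dist)

assert max_ratio <= 6 * eta / r  # the proven global Lipschitz bound, here 12
```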

Using the definition of \({\tilde{f}}\), we find that

$$\begin{aligned} {\textrm{GAP}}(\rho )\left\{ |f(\psi )|>\varepsilon \right\}&= {\text {GA}}(\rho )\left\{ \left| f\left( \frac{\psi }{\Vert \psi \Vert }\right) \right| >\varepsilon \right\} \end{aligned}$$
(119a)
$$\begin{aligned}&\le {\text {GA}}(\rho )\left\{ \left| f\left( \frac{\psi }{\Vert \psi \Vert }\right) \right| > \varepsilon \text{ and } \Vert \psi \Vert \ge r\right\} \nonumber \\&\quad + {\text {GA}}(\rho )\left\{ \Vert \psi \Vert < r\right\} \end{aligned}$$
(119b)
$$\begin{aligned}&= {\text {GA}}(\rho )\left\{ \left| {\tilde{f}}(\psi ) \right| >\varepsilon \text{ and } \Vert \psi \Vert \ge r\right\} + {\text {GA}}(\rho )\left\{ \Vert \psi \Vert < r\right\} \end{aligned}$$
(119c)
$$\begin{aligned}&\le {\text {GA}}(\rho )\left\{ \left| {\tilde{f}}(\psi ) \right| >\varepsilon \right\} + {\text {GA}}(\rho )\left\{ \Vert \psi \Vert <r\right\} \end{aligned}$$
(119d)
$$\begin{aligned}&\le {\text {GA}}(\rho )\left\{ \left| {\tilde{f}}(\psi )- {\text {GA}}(\rho )({\tilde{f}})\right| > \varepsilon - |{\text {GA}}(\rho )({\tilde{f}})|\right\} \nonumber \\&\quad + {\text {GA}}(\rho )\left\{ \Vert \psi \Vert <r\right\} . \end{aligned}$$
(119e)

By Lemma 2, the second term can be bounded by \(\sqrt{2}\exp (-(1/2-r^2)/2\Vert \rho \Vert )\). In order to estimate the first term in (119e), we first derive an upper bound for \(|{\text {GA}}(\rho )({\tilde{f}})|\). We compute

$$\begin{aligned} {\text {GA}}(\rho )({\tilde{f}})&= \int _{\{\Vert \psi \Vert <r\}} r^{-1} \Vert \psi \Vert f\left( \frac{\psi }{\Vert \psi \Vert }\right) \, {\text {GA}}(\rho )(\textrm{d}\psi )\nonumber \\&\quad + \int _{\{\Vert \psi \Vert \ge r\}} f\left( \frac{\psi }{\Vert \psi \Vert }\right) \, {\text {GA}}(\rho )(\textrm{d}\psi ) \end{aligned}$$
(120)
$$\begin{aligned}&= \underbrace{\int _{\mathcal {H}}f\left( \frac{\psi }{\Vert \psi \Vert }\right) \, {\text {GA}}(\rho )(\textrm{d}\psi )}_{={\textrm{GAP}}(\rho )(f)=0} + \int _{\{\Vert \psi \Vert <r\}} \left( r^{-1} \Vert \psi \Vert - 1\right) f\left( \frac{\psi }{\Vert \psi \Vert }\right) \, {\text {GA}}(\rho )(\textrm{d}\psi ) \end{aligned}$$
(121)

and so we obtain, again by Lemma 2,

$$\begin{aligned} |{\text {GA}}(\rho )({\tilde{f}})| \le \pi \eta \,{\text {GA}}(\rho )\left\{ \Vert \psi \Vert < r\right\} \le 5\eta \exp \left( -\frac{1/2-r^2}{2\Vert \rho \Vert }\right) . \end{aligned}$$
(122)

This implies with the help of Theorem 8 that

$$\begin{aligned} {\text {GA}}(\rho )&\left\{ \left| {\tilde{f}}(\psi )- {\text {GA}}(\rho )({\tilde{f}})\right| > \varepsilon - |{\text {GA}}(\rho )({\tilde{f}})|\right\} \end{aligned}$$
(123a)
$$\begin{aligned}&\le {\text {GA}}(\rho )\left\{ \left| {\tilde{f}}(\psi ) - {\text {GA}}(\rho )({\tilde{f}})\right| > \varepsilon - 5\eta \exp \left( -\frac{1/2-r^2}{2\Vert \rho \Vert }\right) \right\} \end{aligned}$$
(123b)
$$\begin{aligned}&\le 4\exp \left( -\frac{r^2(\varepsilon -5\eta \exp (-(1/2-r^2)/2\Vert \rho \Vert ))^2}{18\pi ^2\eta ^2 \Vert \rho \Vert }\right) , \end{aligned}$$
(123c)

provided that \(\varepsilon > 5\eta \exp (-(1/2-r^2)/2\Vert \rho \Vert )\). Altogether we arrive at

$$\begin{aligned} {\textrm{GAP}}(\rho )\left\{ |f(\psi )|>\varepsilon \right\}&\le 4\exp \left( -\frac{r^2(\varepsilon -5\eta \exp (-(1/2-r^2)/2\Vert \rho \Vert ))^2}{18\pi ^2\eta ^2 \Vert \rho \Vert }\right) \nonumber \\&\quad + \sqrt{2}\exp \left( -\frac{1/2-r^2}{2\Vert \rho \Vert }\right) . \end{aligned}$$
(124)

Choosing \(r=1/2\) we obtain

$$\begin{aligned} {\textrm{GAP}}(\rho )\left\{ |f(\psi )|>\varepsilon \right\} \le 4 \exp \left( -\frac{(\varepsilon -5\eta \exp (-1/8\Vert \rho \Vert ))^2}{72\pi ^2\eta ^2 \Vert \rho \Vert }\right) + \sqrt{2} \exp \left( -\frac{1}{8\Vert \rho \Vert }\right) . \end{aligned}$$
(125)

We can assume without loss of generality that

$$\begin{aligned} \varepsilon <\pi \eta \end{aligned}$$
(126)

because otherwise the left-hand side of (30) vanishes: indeed, the distance between any two points on the sphere is at most \(\pi \), so their f values can differ by at most \(\pi \eta \), and for the same reason \(f(\psi )\) can differ from its average with respect to any measure by at most \(\pi \eta \).

Likewise, we can assume without loss of generality that

$$\begin{aligned} \varepsilon \ge 10\eta \exp (-1/8\Vert \rho \Vert ) \end{aligned}$$
(127)

because otherwise the right-hand side of (30) is greater than 1: indeed, for \(\varepsilon < 10\eta \exp (-1/8\Vert \rho \Vert )\),

$$\begin{aligned} 6 \exp \left( -\frac{\varepsilon ^2}{288\pi ^2\eta ^2 \Vert \rho \Vert }\right) \ge 6\exp \left( -\frac{25 \exp (-1/4\Vert \rho \Vert )}{72\pi ^2\Vert \rho \Vert }\right) >1. \end{aligned}$$
(128)
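The claim (128) can be confirmed by a scan over \(\Vert \rho \Vert \); a short Python check (the exponent \(25\exp (-1/4\Vert \rho \Vert )/72\pi ^2\Vert \rho \Vert \) peaks near \(\Vert \rho \Vert =1/4\) at about 0.052, so the expression never drops below roughly 5.7):

```python
import math

def rhs(x):
    # right-hand side of (128) as a function of x = ||rho||
    return 6.0 * math.exp(-25.0 * math.exp(-1.0 / (4.0 * x)) / (72.0 * math.pi ** 2 * x))

# scan x over roughly [1e-5, 1e3] on a logarithmic grid
m = min(rhs(10.0 ** (k / 100.0)) for k in range(-500, 301))
assert m > 1.0
# the exponent peaks at x = 1/4 with value 100/(72 pi^2 e) ~ 0.052,
# so the minimum of rhs is about 6*exp(-0.052) ~ 5.7
assert m > 5.5
```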

As a consequence of (126) and (127), the first exponent in (125) is greater than the second, so

$$\begin{aligned} {\textrm{GAP}}(\rho )\left\{ |f(\psi )|>\varepsilon \right\}&\le 6 \exp \left( -\frac{(\varepsilon -5\eta \exp (-1/8\Vert \rho \Vert ))^2}{72\pi ^2\eta ^2 \Vert \rho \Vert }\right) \end{aligned}$$
(129)
$$\begin{aligned}&\le 6 \exp \left( -\frac{\varepsilon ^2}{288\pi ^2\eta ^2 \Vert \rho \Vert }\right) \end{aligned}$$
(130)

by (127). This finishes the proof in the finite-dimensional case.

Now suppose that \({\mathcal {H}}\) has a countably infinite ONB. Consider the density matrices \(\rho _n\) defined as in (68). Let \(\varepsilon '>0\). Because the set

$$\begin{aligned} A_\varepsilon := \{\psi \in {\mathbb {S}}({\mathcal {H}}): |f(\psi )|>\varepsilon \} \end{aligned}$$
(131)

is open in \({\mathbb {S}}({\mathcal {H}})\), it follows from the weak convergence of the measures \({\textrm{GAP}}(\rho _n)\) to \({\textrm{GAP}}(\rho )\) by the “portmanteau theorem” [3, Thm. 2.1] that

$$\begin{aligned} {\textrm{GAP}}(\rho )(A_\varepsilon ) \le \liminf _{n\rightarrow \infty } {\textrm{GAP}}(\rho _n)(A_\varepsilon ) \le {\textrm{GAP}}(\rho _N)(A_\varepsilon ) + \varepsilon ' \end{aligned}$$
(132)

for some large enough \(N\in {\mathbb {N}}\) with \(N\ge n_0\). Recall that \(n_0\in {\mathbb {N}}\) is chosen such that \(\Vert \rho _n\Vert =\Vert \rho \Vert \) for all \(n\ge n_0\). Let \({\mathcal {H}}_N:= \textrm{span}\{|n\rangle : n=1,\dots ,N\}\). Then, since \(\rho _N\) is a density matrix on \({\mathcal {H}}_N\) and \({\textrm{GAP}}(\rho _N)\) is concentrated on \({\mathcal {H}}_N\), it follows with what we have already proven in the finite-dimensional case that

$$\begin{aligned} {\textrm{GAP}}(\rho _N)\bigl \{\psi \in {\mathbb {S}}({\mathcal {H}}): |f(\psi )|>\varepsilon \bigr \}&= {\textrm{GAP}}(\rho _N)\bigl \{\psi \in {\mathbb {S}}({\mathcal {H}}_N): |f(\psi )|>\varepsilon \bigr \} \end{aligned}$$
(133a)
$$\begin{aligned}&\le 6 \exp \left( -\frac{C\varepsilon ^2}{\eta ^2 \Vert \rho _N\Vert }\right) , \end{aligned}$$
(133b)

where \(C=\frac{1}{288\pi ^2}\). Noting that \(\Vert \rho _N\Vert =\Vert \rho \Vert \) and that \(\varepsilon '>0\) was arbitrary, we can altogether conclude that

$$\begin{aligned} {\textrm{GAP}}(\rho )\Bigl \{\psi \in {\mathbb {S}}({\mathcal {H}}): \bigl |f(\psi )-{\textrm{GAP}}(\rho )(f)\bigr |>\varepsilon \Bigr \} \le 6\exp \left( -\frac{C\varepsilon ^2}{\eta ^2 \Vert \rho \Vert }\right) , \end{aligned}$$
(134)

i.e., the bound (130) remains true in the infinite-dimensional setting. \(\square \)

4.5 Proofs of Corollaries 2, 3, and 4

Proof of Corollary 2

As already noted before Corollary 2, the first inequality follows immediately from Corollary 1 by inserting \(U_t^* B U_t\) for B.

For the proof of the second inequality, we define

$$\begin{aligned} Y_t := \left| \langle \psi _t|B|\psi _t\rangle - {{\,\textrm{tr}\,}}(\rho _t B) \right| . \end{aligned}$$
(135)

Then, for every \(s> 0\) we find that

$$\begin{aligned} {\textrm{GAP}}(\rho )\Bigl \{\psi \in {\mathbb {S}}({\mathcal {H}}): e^{sY_t}>e^{s\varepsilon }\Bigr \} \le 12 \exp \left( -\frac{{\tilde{C}}\varepsilon ^2}{\Vert B\Vert ^2 \Vert \rho \Vert }\right) , \end{aligned}$$
(136)

i.e., with \(\delta := e^{s\varepsilon }\),

$$\begin{aligned} {\textrm{GAP}}(\rho )\Bigl \{\psi \in {\mathbb {S}}({\mathcal {H}}): e^{sY_t} > \delta \Bigr \} \le 12\exp \left( -\frac{{\tilde{C}}}{\Vert B\Vert ^2\Vert \rho \Vert } \frac{\ln (\delta )^2}{s^2}\right) . \end{aligned}$$
(137)

This implies

$$\begin{aligned} {\textrm{GAP}}(\rho )\left( e^{sY_t}\right)&\le \sum _{n=0}^\infty (n+1)\; {\textrm{GAP}}(\rho )\Bigl \{\psi \in {\mathbb {S}}({\mathcal {H}}): e^{s Y_t} \in (n,n+1]\Bigr \} \end{aligned}$$
(138a)
$$\begin{aligned}&\le 1+12 \sum _{n=1}^\infty (n+1) \exp \left( -\frac{{\tilde{C}}\ln (n)^2}{\Vert B\Vert ^2 \Vert \rho \Vert s^2}\right) \end{aligned}$$
(138b)
$$\begin{aligned}&= 1+ 12\sum _{n=1}^\infty (n+1) n^{-\frac{{\tilde{C}}\ln (n)}{\Vert B\Vert ^2\Vert \rho \Vert s^2}}. \end{aligned}$$
(138c)

With \(a:=\frac{{\tilde{C}}}{\Vert B\Vert ^2\Vert \rho \Vert s^2}\) and assuming that \(a\le 1\), we obtain

$$\begin{aligned} {\textrm{GAP}}(\rho )\left( e^{sY_t}\right)&\le 1+12 \sum _{n=1}^{\lfloor e^{5/2a}\rfloor } (n+1) + 12 \sum _{n=\lceil e^{5/2a}\rceil }^{\infty } (n+1) \frac{1}{n^{5/2}} \end{aligned}$$
(139a)
$$\begin{aligned}&\le 1+ 6 e^{\frac{5}{2a}} \left( e^{\frac{5}{2a}}+3\right) + 12 \end{aligned}$$
(139b)
$$\begin{aligned}&= 13 + 18e^{\frac{5}{2a}} + 6e^{\frac{5}{a}} \end{aligned}$$
(139c)
$$\begin{aligned}&\le 9 e^{\frac{5}{a}}. \end{aligned}$$
(139d)
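The final estimate (139d) amounts to the elementary inequality \(13+18e^{5/2a}+6e^{5/a}\le 9e^{5/a}\) for \(0<a\le 1\), which can be checked on a grid after dividing through by \(e^{5/a}\):

```python
import math

# inequality (139d): 13 + 18 e^{5/(2a)} + 6 e^{5/a} <= 9 e^{5/a} for 0 < a <= 1,
# rewritten as 13 e^{-5/a} + 18 e^{-5/(2a)} + 6 <= 9 to avoid overflow for small a
for k in range(1, 1001):
    a = k / 1000.0
    assert 13.0 * math.exp(-5.0 / a) + 18.0 * math.exp(-5.0 / (2.0 * a)) + 6.0 <= 9.0
```

The left-hand side is increasing in a, so the tightest case is \(a=1\), where it evaluates to about 7.57.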

An application of Jensen’s inequality and Fubini’s theorem shows that

$$\begin{aligned} {\textrm{GAP}}(\rho )\left( \exp \left( \frac{1}{T}\int _0^T Y_t\, dt\; s\right) \right)&\le {\textrm{GAP}}(\rho )\left( \frac{1}{T}\int _0^T e^{Y_t s}\, dt\right) \end{aligned}$$
(140a)
$$\begin{aligned}&= \frac{1}{T}\int _0^T {\textrm{GAP}}(\rho )\left( e^{Y_t s}\right) \, dt \end{aligned}$$
(140b)
$$\begin{aligned}&\le 9e^{5/a}. \end{aligned}$$
(140c)

With the help of Markov’s inequality, we find that

$$\begin{aligned} {\textrm{GAP}}(\rho )\Biggl \{\psi \in {\mathbb {S}}({\mathcal {H}}): \frac{1}{T}\int _0^T Y_t\, dt > \varepsilon \Biggr \} \le 9e^{5/a} e^{-\varepsilon s}. \end{aligned}$$
(141)

Choosing \(s:=\frac{\varepsilon {\tilde{C}}}{6\Vert B\Vert ^2 \Vert \rho \Vert }\) then yields the desired bound, provided that \(\varepsilon >0\) and \(a\le 1\), i.e., \(\Vert \rho \Vert \le \frac{{\tilde{C}}\varepsilon ^2}{36\Vert B\Vert ^2}\). However, since the bound becomes trivial for \(\Vert \rho \Vert >\frac{{\tilde{C}}\varepsilon ^2}{36\Vert B\Vert ^2}\), this assumption on \(\Vert \rho \Vert \) can be dropped. Moreover, note that the bound is also trivial if \(\varepsilon =0\). \(\square \)
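The algebra behind the choice of s can be verified by direct substitution; a Python sketch with illustrative stand-in values for \({\tilde{C}}\), \(\Vert B\Vert \), and \(\Vert \rho \Vert \) (not values from the text):

```python
import math

# illustrative stand-ins for C-tilde, ||B|| and ||rho||
C, B, rho_norm = 1.0, 1.0, 1e-5
for eps in (0.05, 0.1, 0.3):
    s = eps * C / (6.0 * B ** 2 * rho_norm)       # the choice of s in the proof
    a = C / (B ** 2 * rho_norm * s ** 2)
    assert a <= 1.0                                # regime in which (139) applies
    lhs = 9.0 * math.exp(5.0 / a - eps * s)        # 9 e^{5/a} e^{-eps s} from (141)
    rhs = 9.0 * math.exp(-C * eps ** 2 / (36.0 * B ** 2 * rho_norm))
    assert math.isclose(lhs, rhs, rel_tol=1e-9)    # exponents agree: (5-6) a^{-1}... = -C eps^2/(36 B^2 ||rho||)
```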

Proof of Corollary 3

Let

$$\begin{aligned} Z_t := \Vert \rho _a^{\psi _t}-{{\,\textrm{tr}\,}}_b\rho _t\Vert _{{{\,\textrm{tr}\,}}}. \end{aligned}$$
(142)

It follows from the equivariance of \(\rho \mapsto {\textrm{GAP}}(\rho )\) and Remark 2 that

$$\begin{aligned} {\textrm{GAP}}(\rho )\Bigl \{\psi \in {\mathbb {S}}({\mathcal {H}}): Z_t> \varepsilon \Bigr \}&= {\textrm{GAP}}(\rho _t)\Bigl \{\psi _t\in {\mathbb {S}}({\mathcal {H}}): Z_t > \varepsilon \Bigr \} \end{aligned}$$
(143a)
$$\begin{aligned}&\le 12 d_a^2 \exp \left( -\frac{{\tilde{C}}\varepsilon ^2}{d_a^2\Vert \rho \Vert }\right) . \end{aligned}$$
(143b)

The rest of the proof now follows along the same lines as the proof of Corollary 2. \(\square \)

Proof of Corollary 4

Choose \(\psi \) and B independently with the distributions mentioned. By Theorem 2 of [17] (which requires that \(d_b\ge d_a\) and \(d_b\ge 4\)), we have that

$$\begin{aligned} \bigl |{\textrm{Born}}_a^{\psi ,B}(f) - {\textrm{GAP}}(\rho _a^\psi )(f)\bigr |< \varepsilon /2 \end{aligned}$$
(144)

with probability \(\ge 1 - 16\Vert f\Vert ^2_{\infty }/\varepsilon ^2 d_b \ge 1-\delta /2\) for \(d_b\ge 32\Vert f\Vert ^2_\infty /\varepsilon ^2 \delta \). By Lemma 5 of [17], there is \(r=r(\varepsilon ,d_a,f)>0\) such that

$$\begin{aligned} \bigl |{\textrm{GAP}}(\rho _a^\psi )(f) - {\textrm{GAP}}({{\,\textrm{tr}\,}}_b \rho )(f) \bigr |<\varepsilon /2 \end{aligned}$$
(145)

whenever \(\Vert \rho _a^\psi -{{\,\textrm{tr}\,}}_b\rho \Vert _{{{\,\textrm{tr}\,}}} < r\). By Theorem 1 in the form (29), the latter condition is fulfilled with probability \(\ge 1-12d_a^2\exp (-{\tilde{C}}r^2/d_a^2\Vert \rho \Vert )\ge 1-\delta /2\) for \(\Vert \rho \Vert \le p:= {\tilde{C}}r^2/d_a^2\ln (24d_a^2/\delta )\). Now (35) follows. \(\square \)

4.6 Further Explanations to Remark 12

As discussed after Theorem 4, applying Theorem 3 to \(\rho =\rho _R\) yields the worse factor \(d_a^{2.5}\) instead of \(d_a^2\). Here we explain in some detail why, in this special case of Theorem 3, slightly better bounds can be obtained.

First suppose that \({\mathcal {H}}_R = {\mathcal {H}}\). Similarly to the proof of Theorem 3 one finds that

$$\begin{aligned} u\bigl \{\psi \in {\mathbb {S}}({\mathcal {H}}): \Vert \rho _a^\psi -{{\,\textrm{tr}\,}}_b\rho \Vert _{{{\,\textrm{tr}\,}}}>\varepsilon \bigr \} \le \frac{d_a^3}{\varepsilon ^2} \sum _{l,m}{{\,\textrm{Var}\,}}\langle \psi |A^{lm}|\psi \rangle , \end{aligned}$$
(146)

where \(A^{lm} = |l\rangle _a\langle m|\otimes I_b\) and \((|l\rangle _a)_{l=1\dots d_a}\) is an orthonormal basis of \({\mathcal {H}}_a\). Instead of bounding the sum by \(d_a^2\) times a uniform bound on the variances \({{\,\textrm{Var}\,}}\langle \psi |A^{lm}|\psi \rangle \), one can now make use of the fact that for uniformly distributed \(\psi \in {\mathbb {S}}({\mathcal {H}})\), the second and fourth moments of the coefficients \(c_l\) of \(\psi \) in an orthonormal basis \((|n\rangle )_{n=1\dots D}\) of eigenvectors of \(\rho \) can be computed explicitly. More precisely, they satisfy

$$\begin{aligned} {\mathbb {E}}(|c_n|^2) = \frac{1}{D}, \quad {\mathbb {E}}(|c_n|^2|c_k|^2) = \frac{1+\delta _{nk}}{D(D+1)}, \end{aligned}$$
(147)

and all other second and fourth moments vanish, see e.g. [8, App. A.2 and C.1]. With this, we find that

$$\begin{aligned} {{\,\textrm{Var}\,}}\langle \psi |A^{lm}|\psi \rangle&= \sum _{k,n} |A_{kn}^{lm}|^2 \frac{1+\delta _{kn}}{D(D+1)} + \sum _{k,n} A^{lm*}_{kk} A_{nn}^{lm} \frac{1+\delta _{kn}}{D(D+1)}\nonumber \\&\qquad - \sum _n |A_{nn}^{lm}|^2 \frac{2}{D(D+1)} - \left| {{\,\textrm{tr}\,}}(A^{lm}\rho )\right| ^2 \end{aligned}$$
(148a)
$$\begin{aligned}&= \frac{{{\,\textrm{tr}\,}}(A^{lm*}A^{lm})}{D(D+1)} - \frac{\left| {{\,\textrm{tr}\,}}(A^{lm}\rho )\right| ^2}{D+1} \end{aligned}$$
(148b)
$$\begin{aligned}&\le \frac{{{\,\textrm{tr}\,}}(A^{lm*}A^{lm})}{D(D+1)}. \end{aligned}$$
(148c)

Next note that

$$\begin{aligned} \sum _{l,m} {{\,\textrm{tr}\,}}(A^{lm*}A^{lm}) = d_a \sum _{l} {{\,\textrm{tr}\,}}(|l\rangle _a\langle l|\otimes I_b) = d_a D \end{aligned}$$
(149)

and therefore

$$\begin{aligned} u\left\{ \psi \in {\mathbb {S}}({\mathcal {H}}): \Vert \rho _a^\psi -{{\,\textrm{tr}\,}}_b\rho \Vert _{{{\,\textrm{tr}\,}}}>\varepsilon \right\} \le \frac{d_a^4}{\varepsilon ^2 D}. \end{aligned}$$
(150)

If \({\mathcal {H}}_R \ne {\mathcal {H}}\) is a subspace of \({\mathcal {H}}\), then this bound remains valid after replacing \(\rho \) by \(P_R/d_R\), u by \(u_R\), \({\mathcal {H}}\) by \({\mathcal {H}}_R\) and D by \(d_R\). This follows immediately from the previous computations after noting that

$$\begin{aligned} \sum _{l,m}{{\,\textrm{tr}\,}}(A^{lm*}P_RA^{lm}P_R)&\le \sum _{l,m} {{\,\textrm{tr}\,}}(A^{lm*}P_R A^{lm})\nonumber \\&= d_a \sum _l {{\,\textrm{tr}\,}}((|l\rangle _a\langle l|\otimes I_b) P_R) = d_a d_R. \end{aligned}$$
(151)

Setting \(\delta := d_a^4/(\varepsilon ^2 d_R)\) and solving for \(\varepsilon \) finally gives Theorem 4.
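The moment identities (147) that drive this argument can be sanity-checked by a seeded Monte Carlo simulation of uniformly distributed \(\psi \in {\mathbb {S}}({\mathbb {C}}^D)\), obtained by normalizing a complex Gaussian vector; a minimal Python sketch with the illustrative choice \(D=3\) and generous tolerances:

```python
import random

random.seed(1)
D, N = 3, 100_000
s2 = [0.0] * D
s22 = 0.0  # accumulates |c_1|^2 |c_2|^2
s4 = 0.0   # accumulates |c_1|^4
for _ in range(N):
    # uniform psi on the sphere: normalize a standard complex Gaussian vector
    v = [complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(D)]
    n2 = sum(abs(z) ** 2 for z in v)
    p = [abs(z) ** 2 / n2 for z in v]   # p[i] = |c_i|^2
    for i in range(D):
        s2[i] += p[i]
    s22 += p[0] * p[1]
    s4 += p[0] ** 2

assert all(abs(s / N - 1.0 / D) < 5e-3 for s in s2)   # E|c_n|^2 = 1/D
assert abs(s22 / N - 1.0 / (D * (D + 1))) < 2e-3      # (1+delta_nk)/(D(D+1)), n != k
assert abs(s4 / N - 2.0 / (D * (D + 1))) < 5e-3       # (1+delta_nk)/(D(D+1)), n == k
```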

In [30, 31], Theorem 5 was stated in a slightly different form; more precisely, there it was shown that for every \(\varepsilon >0\),

$$\begin{aligned} u_R \Biggl \{ \psi \in {\mathbb {S}}({\mathcal {H}}_R): \bigl \Vert \rho _a^\psi - {{\,\textrm{tr}\,}}_b \rho _R \bigr \Vert _{{{\,\textrm{tr}\,}}} > \varepsilon + \sqrt{d_a {{\,\textrm{tr}\,}}({{\,\textrm{tr}\,}}_a \rho _R)^2} \Biggr \} \le 4 \exp \Bigl (-\frac{d_R\varepsilon ^2}{18\pi ^3}\Bigr ). \end{aligned}$$
(152)

We now show how this implies the bound in Theorem 5. By setting \(\delta := 4 \exp (-d_R\varepsilon ^2/(18\pi ^3))\) and solving for \(\varepsilon \), we obtain

$$\begin{aligned} \varepsilon = \sqrt{\frac{18\pi ^3}{d_R}\ln (4/\delta )}. \end{aligned}$$
(153)

With this and \({{\,\textrm{tr}\,}}({{\,\textrm{tr}\,}}_a\rho _R)^2 \le d_a/d_R\) we obtain

$$\begin{aligned} u_R \Biggl \{ \psi \in {\mathbb {S}}({\mathcal {H}}_R): \bigl \Vert \rho _a^\psi - {{\,\textrm{tr}\,}}_b \rho _R \bigr \Vert _{{{\,\textrm{tr}\,}}} \le \sqrt{\frac{18\pi ^3}{d_R}\ln (4/\delta )}+ \sqrt{d_a^2/d_R} \Biggr \} \ge 1-\delta \,. \end{aligned}$$
(154)

The first square root dominates as soon as

$$\begin{aligned} \delta < 4\exp \left( -d_a^2/(18\pi ^3)\right) , \end{aligned}$$
(155)

which we can, of course, assume without loss of generality since otherwise we would have \(\delta >1\) and then the lower bound on the probability would be trivial. This immediately implies (46).

5 Summary and Conclusions

Typicality theorems assert that, for large systems, some condition holds for most points, or, as here, for most wave functions. The word “most” usually refers to a uniform distribution u (say, over the unit sphere \({\mathbb {S}}({\mathcal {H}}_R)\) in some Hilbert subspace \({\mathcal {H}}_R\)), but here we use the GAP measure as the natural analog of the uniform distribution in cases with a given density matrix \(\rho \). Since the GAP measure for \(\rho =\rho _\textrm{can}\) is the thermal equilibrium distribution of wave functions, our typicality theorems can be understood as expressing a kind of equivalence of ensembles between a micro-canonical ensemble of wave functions (\(u_{{\mathbb {S}}({\mathcal {H}}_{\textrm{mc}})}\)) and a canonical ensemble of wave functions (\({\textrm{GAP}}(\rho _\textrm{can})\)). Yet, our results apply to arbitrary \(\rho \).

The key mathematical step is the generalization of Lévy’s lemma to GAP measures, that is, of the concentration of measure on high-dimensional spheres. The fact that the pure states of a quantum system are always the points on a sphere then allows us to deduce very general typicality theorems from this kind of concentration of measure. In particular, these typicality statements are largely independent of the properties of the Hamiltonian and require only that many dimensions participate in \(\rho \).

Specifically, some of these statements concern a bi-partite quantum system \(a\cup b\), where b is macroscopically large. We have shown that for most \(\psi \) from the \({\textrm{GAP}}(\rho )\) ensemble, the reduced density matrix \(\rho _a^\psi \) is close to its average \({{\,\textrm{tr}\,}}_b \rho \), assuming that the largest eigenvalue (Theorem 1) or at least the average eigenvalue (Theorem 3) of \(\rho \) is small. That is, we have established an extension of canonical typicality to GAP measures. This family of measures is particularly natural here because it also arises, for bi-partite systems, as the typical asymptotic distribution of the conditional wave function [17, 19], a fact extended further in Corollary 4.

Another important application of concentration of measure for GAP measures yields (Corollary 1) that for any observable B, most \(\psi \) from the \({\textrm{GAP}}(\rho )\) ensemble have nearly the same Born distribution (when suitably coarse grained). Moreover (Corollaries 2 and 3), if the initial wave function \(\psi _0\) is \({\textrm{GAP}}(\rho )\)-distributed, then for any unitary time evolution the whole curves \(t\mapsto \langle \psi _t|B|\psi _t\rangle \) and \(t\mapsto \rho _a^{\psi _t}\) are nearly deterministic (and given by \({{\,\textrm{tr}\,}}(B \rho _t)\) and \({{\,\textrm{tr}\,}}_b \rho _t\), respectively).

All these results contribute different aspects to the picture of how an individual, closed quantum system in a pure state can display thermodynamic behavior [1, 2, 4, 7,8,9, 11, 12, 15, 16, 20, 31, 34, 36,37,38,39,40,41,42, 44, 45, 47, 48, 51], and thus help clarify the role of ensembles: they define a concept of typicality, while thermal density matrices arise from partial traces.

In sum, our results describe simple relations between the following concepts: reduced density matrix, many participating dimensions, and GAP measures. That is, if many dimensions participate in \(\rho \), then for \({\textrm{GAP}}(\rho )\)-most \(\psi \), the reduced density matrix \(\rho _a^\psi \) is nearly independent of \(\psi \).