Measurements of Quantum Hamiltonians with Locally-Biased Classical Shadows

Obtaining precise estimates of quantum observables is a crucial step of variational quantum algorithms. We consider the problem of estimating expectation values of molecular Hamiltonians, obtained on states prepared on a quantum computer. We propose a novel estimator for this task, which is locally optimised with knowledge of the Hamiltonian and a classical approximation to the underlying quantum state. Our estimator is based on the concept of classical shadows of a quantum state, and has the important property of not adding to the circuit depth for the state preparation. We test its performance numerically for molecular Hamiltonians of increasing size, finding a sizable reduction in variance with respect to current measurement protocols that do not increase circuit depths.


Introduction
Estimating observables of interest for quantum states prepared on a quantum processor is a central subroutine in a variety of quantum algorithms. Improving the precision of the measurement process is a pressing need, considering the fast-paced increase in size of current quantum devices. One key application is the energy estimation of complex molecular Hamiltonians, a staple of variational quantum eigensolvers (VQE) [1,2,3,4]. Readout of quantum information on quantum processors is available only through single-qubit projective measurements. The outcomes of these single-qubit measurements are combined to estimate quantum observables described by linear combinations of Pauli operators. Naively, each Pauli operator can be estimated independently by appending a quantum circuit composed of one layer of single-qubit gates at the end of state preparation, before readout.
A series of recent efforts [5,6,7,8,9,10,11,12] has shown that savings in the number of measurements can be obtained for the estimation of complex observables, at the expense of increasing circuit depths. This increase in circuit depth can defeat the purpose of variational quantum algorithms, which aim to keep gate counts low [13].
Other strategies, more amenable to execution on near-term devices, have considered reducing the number of measurements while simultaneously not increasing circuit depth. These strategies are based on grouping together Pauli operators that can be measured in the same single-qubit basis. This Pauli grouping approach was introduced in [3] and explored thoroughly in [14] for chemistry systems. Machine learning techniques have also recently been used to tackle the measurement problem [15], with no increase in circuit depth. The machine learning approach is based upon the assumption that fermionic neural-network states can capture quantum correlations in ground states of molecular systems [16].
The measurement problem has been considered in the context of predicting collections of generic observables on reduced density matrices [17,18,19]. The best asymptotic scalings up to polylogarithmic factors are obtained in [17], where it is proposed to characterise a quantum state through random measurements, later used to retrieve arbitrary observables.
In this article we introduce an estimator that recovers, in expectation, mean values of observables on quantum states prepared on quantum computers. The protocol is based on the random-Pauli-measurement scheme introduced in [17], referred to as classical shadows in the present article. We show how sampling from random measurement bases in the original protocol can be locally biased towards certain bases on each individual qubit. We name this technique locally-biased classical shadows. We show how to optimise the estimator's local bias on each qubit based on knowledge of a target observable and a classical approximation of the quantum state, called the reference state. We also prove that this optimisation has a convex cost function in certain regimes. We benchmark our optimisation procedure in the setting of quantum chemistry Hamiltonians, where reference states can be obtained from the Hartree-Fock solution, or from multi-reference states obtained with perturbation theory. We finally compare the variance of our estimator to previous methods for estimating average values that do not increase circuit depth, obtaining consistent improvements.
Outline of the paper. Section 2 reviews classical shadows using random Pauli measurements in a notation convenient to this current article. Section 3 provides the construction of the locally-biased classical shadows and calculates the expectation and variance associated with the estimator introduced. Section 4 shows how to optimise the estimator. Section 5 benchmarks our estimator for molecular energies on molecules of increasing sizes. Section 6 finishes with closing remarks. Appendix A reviews the methods for molecular energy estimation to which we compare our estimator.
Acknowledgements. We thank Giacomo Nannicini for useful discussions regarding the convexity of the cost functions introduced here. SB acknowledges the support of the IBM Research Frontiers Institute.

Classical Shadows Using Random Pauli Measurements
Classical shadows using random Pauli measurements were introduced in [17]. This section reproduces the procedure in a different style. Since we are only concerned with estimating one specific observable, we do not mention the notion of a snapshot, the efficient description using the symplectic representation, or the notion of median of means.
The problem that we want to address is the estimation of tr(ρO) for a given n-qubit state ρ and an observable O decomposed as a linear combination of Pauli terms,

    O = Σ_{Q ∈ {I,X,Y,Z}^⊗n} α_Q Q,    (1)

where α_Q ∈ ℝ. Notationally, for a Pauli operator Q as above and for a given qubit i ∈ {1, 2, ..., n} we shall write Q_i for the i-th single-qubit Pauli operator, so that Q = ⊗_i Q_i. We denote the support of such an operator by supp(Q) = {i : Q_i ≠ I} and its weight by wt(Q) = |supp(Q)|. An n-qubit Pauli operator Q is said to be full-weight if wt(Q) = n. This task of estimating tr(ρO) is accomplished with the classical shadows of [17] as described in Algorithm 1. Briefly, one randomly selects a Pauli basis for each of the n qubits in which to measure the quantum state; this is irrespective of the operator O. Then, after measurement, non-zero estimates can be provided for all Pauli operators which qubit-wise commute with the measurement bases. All other Pauli operators are implicitly provided with the zero estimator for their expectation values.
We introduce the function f from [17, Eq. E28]. For two n-qubit Pauli operators P, Q define, for each qubit i,

    f_i(P, Q) = 1 if Q_i = I,  3 if Q_i = P_i,  0 otherwise,    (2)

and extend this to the multi-qubit setting by declaring f(P, Q) = ∏_{i=1}^n f_i(P, Q). Also, given a full-weight Pauli operator P, we let µ(P, i) ∈ {±1} denote the eigenvalue measured when qubit i is measured in the P_i basis. For a subset A ⊆ {1, 2, ..., n} declare

    µ(P, A) = ∏_{i∈A} µ(P, i),    (3)

with the convention that µ(P, ∅) = 1.
As shown in [17], the output of this algorithm is an unbiased estimator of the desired expectation value, that is, E(ν) = tr (ρO).
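As a concrete illustration (not part of the original protocol), the unbiasedness of this estimator can be checked numerically for a small system by enumerating every measurement basis and every outcome exactly, instead of sampling; all helper names below (PAULI, kron, projector, exact_mean) are ours.

```python
import itertools
import numpy as np

# Single-qubit Pauli matrices, indexed by label.
PAULI = {"I": np.eye(2, dtype=complex),
         "X": np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def kron(labels):
    """Tensor product of single-qubit Paulis, e.g. kron("XZ") = X (x) Z."""
    m = np.eye(1, dtype=complex)
    for s in labels:
        m = np.kron(m, PAULI[s])
    return m

def f(P, Q):
    """Eq. (2): per-qubit factor 1 if Q_i = I, 3 if Q_i = P_i, else 0."""
    out = 1
    for p, q in zip(P, Q):
        out *= 1 if q == "I" else (3 if q == p else 0)
    return out

def projector(P, bits):
    """Projector onto outcome string `bits` when qubit i is measured in basis P_i."""
    m = np.eye(1, dtype=complex)
    for p, b in zip(P, bits):
        m = np.kron(m, 0.5 * (PAULI["I"] + (-1) ** b * PAULI[p]))
    return m

def exact_mean(rho, alphas, n):
    """E(nu): enumerate every basis P (each with uniform weight 3^-n) and every
    outcome string, weighting the single-shot estimate by its Born probability."""
    total = 0.0
    for P in itertools.product("XYZ", repeat=n):
        for bits in itertools.product((0, 1), repeat=n):
            prob = np.trace(rho @ projector(P, bits)).real
            nu = sum(a * f(P, Q)
                     * np.prod([(-1) ** b for q, b in zip(Q, bits) if q != "I"])
                     for Q, a in alphas.items())
            total += prob * nu / 3 ** n
    return total
```

Because the enumeration replaces random sampling, the check is deterministic: for a Bell state and O = 0.5 XX + 0.2 ZZ + 0.3 ZI, exact_mean reproduces tr(ρO) to numerical precision.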

Locally-Biased Classical Shadows
In this section we generalise classical shadows by observing that the randomisation procedure of the Pauli measurements can be biased in the measurement basis for each qubit. We build an estimator based on biased measurements, which in expectation recovers tr(ρO). We then proceed to calculate its variance.
As in Section 2, we wish to estimate tr(ρO) for a given state ρ and an observable O = Σ_Q α_Q Q. For each qubit i ∈ {1, 2, ..., n}, consider a probability distribution β_i over {X, Y, Z} and denote by β_i(P_i) the probability associated with each Pauli P_i ∈ {X, Y, Z}. We write β for the collection {β_i}_{i=1}^n and note that β may be considered a probability distribution on full-weight Pauli operators by associating with P ∈ {X, Y, Z}^⊗n the probability β(P) = ∏_i β_i(P_i).
We generalise the function introduced in Eq. (2). For two n-qubit Pauli operators P, Q and a product probability distribution β, define, for each qubit i,

    f_i(P, Q, β) = 1 if Q_i = I,  (β_i(P_i))^{-1} if Q_i = P_i,  0 otherwise,    (4)

where (β_i(P_i))^{-1} ought to be interpreted as 0 if β_i(P_i) vanishes. We extend this to the multi-qubit setting by declaring

    f(P, Q, β) = ∏_{i=1}^n f_i(P, Q, β).    (5)

Algorithm 2 describes an estimator via locally-biased classical shadows. Note that the (uniform) classical shadows case is retrieved when β_i(P_i) = 1/3 for every qubit i ∈ {1, 2, ..., n} and every Pauli term P_i ∈ {X, Y, Z}.
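A minimal sketch of the two ingredients the biased protocol adds on top of the uniform one: the reweighting function f(P, Q, β) of Eqs. (4)-(5) and sampling a full-weight basis from the product distribution β. Function names are ours; β is represented as a list of per-qubit dicts.

```python
import numpy as np

def f_biased(P, Q, beta):
    """Eqs. (4)-(5): per-qubit factor 1 if Q_i = I, 1/beta_i(P_i) if Q_i = P_i,
    and 0 otherwise (with 1/0 interpreted as 0)."""
    out = 1.0
    for b_i, p, q in zip(beta, P, Q):
        if q == "I":
            continue
        if q != p or b_i[p] == 0.0:
            return 0.0
        out /= b_i[p]
    return out

def sample_basis(beta, rng):
    """Draw a full-weight basis P with probability beta(P) = prod_i beta_i(P_i)."""
    return tuple(rng.choice(("X", "Y", "Z"), p=[b["X"], b["Y"], b["Z"]])
                 for b in beta)
```

With the uniform β_i = (1/3, 1/3, 1/3) this reduces to the function f of Eq. (2), giving a factor of 3 per supported qubit.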

Algorithm 2 Estimation of observable via locally-biased classical shadows. For each sample s = 1, ..., S: draw a full-weight Pauli basis P^(s) ∈ {X, Y, Z}^⊗n with probability β(P^(s)); measure each qubit i of ρ in the P_i^(s) basis, recording the eigenvalues µ^(s)(P^(s), i); output the estimate ν = S^{-1} Σ_{s=1}^S Σ_Q α_Q f(P^(s), Q, β) µ^(s)(P^(s), supp(Q)).
Algorithm 2 recovers the expectation tr(ρO), as shown in the following lemma.

Lemma 1. The estimator ν from Algorithm 2 with a single sample (S = 1) satisfies

    E(ν) = tr(ρO)    (6)

and

    E(ν²) = Σ_{Q,R} α_Q α_R E_P[f(P, Q, β) f(P, R, β)] tr(ρQR).

Proof. Let E_P denote the expected value over the distribution β(P). Let E_{µ(P)} denote the expected value over the measurement outcomes for a fixed Pauli basis P. Using the fact that β(P) is a product distribution, one can easily check that

    E_P f(P, Q, β) = 1    (7)

and

    E_P[f(P, Q, β) f(P, R, β)] = ∏_{i ∈ supp(Q)∩supp(R)} (β_i(Q_i))^{-1} if Q_i = R_i for all i ∈ supp(Q) ∩ supp(R), and 0 otherwise,    (8)

for any Q, R ∈ {I, X, Y, Z}^⊗n whose supports carry non-vanishing probabilities. Let us say that an n-qubit Pauli operator Q agrees with a basis P ∈ {X, Y, Z}^⊗n iff Q_i ∈ {I, P_i} for every qubit i. Note that f(P, Q, β) = 0 unless Q agrees with P. For any n-qubit Pauli operators Q, R that agree with a basis P one has

    E_{µ(P)} µ(P, supp(Q)) = tr(ρQ)    (9)

and

    E_{µ(P)} µ(P, supp(Q)) µ(P, supp(R)) = tr(ρQR).    (10)
To get the last equality, observe that µ(P, A) µ(P, A′) = µ(P, A ⊕ A′) for any subsets of qubits A, A′, where A ⊕ A′ is the symmetric difference of A and A′. The assumption that both Q and R agree with the same basis P implies that supp(Q) ⊕ supp(R) = supp(QR). Now Eq. (10) follows from Eq. (9).
By definition, the expected value in Eq. (6) is a composition of the expected values over a Pauli basis P and over the measurement outcomes µ(P), that is, E = E_P E_{µ(P)}. Using the above identities one gets

    E(ν) = E_P Σ_Q α_Q f(P, Q, β) E_{µ(P)} µ(P, supp(Q)) = E_P Σ_Q α_Q f(P, Q, β) tr(ρQ) = Σ_Q α_Q tr(ρQ) = tr(ρO).

Here the second equality is obtained using Eq. (9) and the linearity of expected values. The third equality follows from Eq. (7). Likewise,

    E(ν²) = E_P Σ_{Q,R} α_Q α_R f(P, Q, β) f(P, R, β) E_{µ(P)} µ(P, supp(Q)) µ(P, supp(R)) = E_P Σ_{Q,R} α_Q α_R f(P, Q, β) f(P, R, β) tr(ρQR) = Σ_{Q,R} α_Q α_R E_P[f(P, Q, β) f(P, R, β)] tr(ρQR).

Here the second equality is obtained using Eq. (10) and observing that f(P, Q, β) f(P, R, β) = 0 unless both Q and R agree with P. The third equality follows from the linearity of expected values, and the remaining expectation is evaluated by Eq. (8).
Recall that in the context of using a quantum processor, we aim to use the random variable ν to estimate tr(ρO) to some (additive) precision ε. This dictates the number of samples S required. Specifically, for fixed ρ, O, we require S = O(ε^{-2} Var(ν^{(s)})), where Var(ν^{(s)}) is independent of the specific sample s. For future reference, we record explicitly the variance of ν for a single sample (S = 1). Lemma 1 establishes

    Var(ν) = Σ_{Q,R} α_Q α_R E_P[f(P, Q, β) f(P, R, β)] tr(ρQR) − tr(ρO)².    (11)

Remark 1. In [17, Proposition 3], the authors aim to upper-bound this variance independently of the state ρ. In the uniform setting, this is achieved with an application of Cauchy-Schwarz and leads to a bound of 4^k ‖O‖²_∞, where k is the weight of the operator.
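For small systems the single-shot variance of Eq. (11) can be evaluated exactly: since β is a product distribution, E_P[f(P,Q,β) f(P,R,β)] vanishes unless Q and R agree on their common support, and otherwise contributes a factor 1/β_i(Q_i) per overlapping qubit. The sketch below (helper names ours) uses this closed form and illustrates that putting all weight on Z makes the variance of a ZZ measurement on |00⟩ vanish.

```python
import itertools
import numpy as np

PAULI = {"I": np.eye(2, dtype=complex),
         "X": np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def kron(labels):
    m = np.eye(1, dtype=complex)
    for s in labels:
        m = np.kron(m, PAULI[s])
    return m

def pair_weight(Q, R, beta):
    """Closed form of E_P[f(P,Q,beta) f(P,R,beta)] for a product distribution:
    zero unless Q and R agree on their common support, else a factor
    1/beta_i(Q_i) for each overlapping qubit."""
    w = 1.0
    for b_i, q, r in zip(beta, Q, R):
        if q == "I" or r == "I":
            continue
        if q != r or b_i[q] == 0.0:
            return 0.0  # vanishing probability: the estimator is identically zero
        w /= b_i[q]
    return w

def variance(rho, alphas, beta):
    """Single-shot variance of Eq. (11) for the locally-biased estimator."""
    mean = sum(a * np.trace(rho @ kron(Q)).real for Q, a in alphas.items())
    second = sum(aQ * aR * pair_weight(Q, R, beta)
                 * np.trace(rho @ kron(Q) @ kron(R)).real
                 for (Q, aQ), (R, aR)
                 in itertools.product(alphas.items(), repeat=2))
    return second - mean ** 2
```

For O = ZZ on |00⟩ the uniform distribution gives variance 9 − 1 = 8, while β_i(Z) = 1 gives variance 0, the extreme case of a useful bias.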

Optimised Locally-Biased Classical Shadows
In this section we show how the locally-biased classical shadows introduced in Section 3 can be optimised when one has partial knowledge about the underlying quantum state. This partial information is obtained efficiently with a classical computation. This is the case of VQE for molecular Hamiltonians if one initialises the variational procedure from a reference state, which can be, for example, the Hartree-Fock solution, a generic fermionic Gaussian state [20], or perturbative Møller-Plesset solutions. Our method can be also extended to generic many-body Hamiltonians, considering for example the generation of reference states with semidefinite programming [21]. On a more general note, the existence of a good reference state is the assumption of all algorithms that target ground state properties of interacting many-body problems, including quantum phase estimation.
In this setting, we optimise the probability distributions β = {β i } n i=1 to obtain the smallest variance on a given reference state. To do this, we consider the variance calculated in Eq. (11) and extract from it the component which, associated with the reference state, explicitly depends on the distributions β. We proceed to optimise this cost function, thereby minimising the variance, noting that a negligible restriction of the cost function that we use leads to a convex optimisation problem. Finally, we use the optimised distributions β * to build molecular energy estimators as defined in Algorithm 2.
To set notation, we introduce a molecular Hamiltonian H acting on n qubits. We write

    H = Σ_{P ∈ {I,X,Y,Z}^⊗n} α_P P    (12)

and denote by H_0 the traceless part of H.

4.1.
Single-reference optimisation. We first consider the case in which the reference state is a single logical basis state. This is the case if a VQE targeting a molecular Hamiltonian in the molecular basis is initialised with the Hartree-Fock state. Motivated by this, we use the label "HF" to indicate the single logical basis state. However, we remark that the results here can be generalised outside the quantum chemistry domain. We are given a reference product state, represented by a logical basis element {m_i}_{i=1}^n with m_i ∈ {0, 1},

    ρ_HF = ⊗_{i=1}^n ½ (I + (−1)^{m_i} Z).

The variance of the estimator ν is independent of the constant term H − H_0. Writing the variance from Eq. (11) for the state ρ_HF, upon explicit removal of the constant term, reads

    Var(ν|ρ_HF) = Σ_{Q,R ≠ I^⊗n} α_Q α_R E_P[f(P, Q, β) f(P, R, β)] tr(ρ_HF QR) − tr(ρ_HF H_0)².    (13)

Our objective is to find probability distributions β so as to minimise Eq. (13). The following proposition makes explicit the cost function appropriate to this task.
Proposition 1. Given a reference product state ρ_HF, represented by the logical basis element {m_i}_{i=1}^n, the variance associated with Algorithm 2 is minimised upon choosing β so as to minimise

    cost(β) = Σ_{(Q,R) ∈ I} α_Q α_R (−1)^{Σ_{i∈supp(QR)} m_i} ∏_{i ∈ supp(Q)∩supp(R)} (β_i(Q_i))^{-1},    (14)

subject to β_{i,P} ≥ 0 and β_{i,X} + β_{i,Y} + β_{i,Z} = 1 for all i. In the above, the sum is taken over "influential pairs":

    I = { (Q, R) : Q, R ≠ I^⊗n,  supp(Q) ∩ supp(R) ≠ ∅,  Q_i = R_i for all i ∈ supp(Q) ∩ supp(R),  QR ∈ {I, Z}^⊗n }.    (15)

Proof. We must pay attention only to the β-dependent terms in Var(ν|ρ_HF). The simple structure of ρ_HF implies, for n-qubit non-identity Pauli operators Q, R,

    E_P[f(P, Q, β) f(P, R, β)] tr(ρ_HF QR) = (−1)^{Σ_{i∈supp(QR)} m_i} ∏_{i ∈ supp(Q)∩supp(R)} (β_i(Q_i))^{-1}

whenever Q and R agree on their common support and QR ∈ {I, Z}^⊗n, and the left-hand side vanishes otherwise. The preceding display is independent of β whenever (Q, R) ∉ I, since the product over an empty common support equals 1. Hence the cost function captures precisely the component of the variance (when estimating on the reference product state) which depends on the probability distributions β.
Some remarks are in order.
Remark 2. The cost function of Eq. (14) is not convex. If we however restrict to the diagonal terms from the set of influential pairs, then we obtain the following alternative cost function, which we refer to as the diagonal cost function:

    cost_diag(β) = Σ_{Q ≠ I^⊗n} α_Q² ∏_{i ∈ supp(Q)} (β_i(Q_i))^{-1}.    (16)

This diagonal cost function is convex: for fixed Q, the function −log β_i(Q_i) is convex, hence so too is Σ_{i∈supp(Q)} (−log β_i(Q_i)). Exponentiating this result implies ∏_{i∈supp(Q)} (β_i(Q_i))^{-1} is convex. The positive linear combination over Pauli operators Q preserves convexity. In this convex case we are assured that the minimising collection of distributions provides a global minimum (of the diagonal cost function). The diagonal cost function makes no reference to the specific single-reference state, and can therefore be used to find optimal β_i which are independent of the underlying quantum state ρ_HF. In fact, the diagonal cost function can be derived from Eq. (11) when ρ is the maximally mixed state. We nevertheless find that the diagonal cost function gives numerically very satisfying results. We explain this in the following paragraph by relating the two cost functions.
Set Γ(P, Q) = |α_P| |α_Q| ∏_{i : P_i = Q_i ≠ I} (β_i(P_i))^{-1}. The diagonal cost function is Σ_Q Γ(Q, Q), while the original cost function is at most Σ_{P,Q} Γ(P, Q) (the sum being over only influential pairs in the original cost function). Since β_i(P_i) ≤ 1 and the set {i : P_i = Q_i ≠ I} is contained in both supports, the following inequality holds:

    Γ(P, Q) ≤ √(Γ(P, P) Γ(Q, Q)) ≤ ½ (Γ(P, P) + Γ(Q, Q)).

Summation over all pairs (P, Q) on both sides leads to

    Σ_{P,Q} Γ(P, Q) ≤ |H_0| Σ_Q Γ(Q, Q),

where |H_0| is the number of traceless terms in the Hamiltonian. The right-hand side upper-bounds the original cost function, therefore minimising the diagonal cost function minimises an upper bound on the original cost function per Pauli term in the Hamiltonian.
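The bound Σ_{P,Q} Γ(P,Q) ≤ |H₀| Σ_Q Γ(Q,Q) is easy to check numerically; the sketch below does so for an arbitrary (hypothetical) set of 3-qubit Pauli terms with random coefficients and a random product distribution β.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Arbitrary traceless 3-qubit Pauli terms with random coefficients, and a random
# product distribution beta (Dirichlet draws, so each beta_i is normalised).
terms = ["XYI", "XZZ", "IYY", "ZZI", "ZIZ", "XII"]
alpha = dict(zip(terms, rng.normal(size=len(terms))))
beta = [dict(zip("XYZ", rng.dirichlet((1.0, 1.0, 1.0)))) for _ in range(3)]

def gamma(P, Q):
    """Gamma(P, Q) = |alpha_P| |alpha_Q| prod_{i : P_i = Q_i != I} 1/beta_i(P_i)."""
    w = abs(alpha[P]) * abs(alpha[Q])
    for b_i, p, q in zip(beta, P, Q):
        if p == q != "I":
            w /= b_i[p]
    return w

# Left side: sum of Gamma over all ordered pairs; right side: |H_0| times the
# diagonal sum. The inequality holds for any coefficients and any product beta.
lhs = sum(gamma(P, Q) for P, Q in itertools.product(terms, repeat=2))
rhs = len(terms) * sum(gamma(Q, Q) for Q in terms)
```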
Remark 3. The diagonal cost function can be formulated in the language of geometric programming [22], while the original cost function is an example of signomial geometric programming.
Remark 4. We solve these optimisation problems using the method of Lagrange multipliers. Specifically, given current values β^(t) and an update step-size ∆ ∈ (0, 1), we may update iteratively

    β_i^(t+1)(P) = (1 − ∆) β_i^(t)(P) + ∆ β̃_i(P),

where β̃_i solves the closed-form Lagrange equations, calculated using the values β^(t), which must hold at optimality. For the diagonal cost function of Eq. (16), stationarity under the normalisation constraint Σ_{P∈{X,Y,Z}} β_i(P) = 1 gives

    β̃_i(P) ∝ ( Σ_{Q : Q_i = P} α_Q² ∏_{j ∈ supp(Q)\{i}} (β_j^(t)(Q_j))^{-1} )^{1/2},

while for the original cost function of Eq. (14) the sum under the square root runs instead over influential pairs (Q, R) with Q_i = R_i = P, weighted with the corresponding signed coefficients of Eq. (14). The iterative updates find optimal probability distributions for the diagonal cost function because at every iteration the constraints on β remain satisfied whenever initialisation occurs with a random collection of probability distributions.
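A sketch of the damped fixed-point iteration for the diagonal cost function. The update rule used here, β_i(P) proportional to the square root of the total weight multiplying (β_i(P))^{-1} in Eq. (16), is our own derivation of the stationarity condition and may differ in detail from the authors' implementation; the toy Hamiltonian coefficients are hypothetical.

```python
import math

def diag_cost(alphas, beta):
    """Diagonal cost of Eq. (16): sum_Q alpha_Q^2 prod_{i in supp(Q)} 1/beta_i(Q_i)."""
    total = 0.0
    for Q, a in alphas.items():
        w = a * a
        for b_i, q in zip(beta, Q):
            if q != "I":
                w /= b_i[q]
        total += w
    return total

def lagrange_step(alphas, beta, delta=0.5):
    """One damped update: at stationarity, beta_i(P) is proportional to the square
    root of the coefficient multiplying 1/beta_i(P) in the diagonal cost."""
    new = []
    for i, b_i in enumerate(beta):
        coeff = {}
        for pauli in "XYZ":
            c = 0.0
            for Q, a in alphas.items():
                if Q[i] != pauli:
                    continue
                w = a * a
                for j, q in enumerate(Q):
                    if j != i and q != "I":
                        w /= beta[j][q]
                c += w
            coeff[pauli] = c
        norm = sum(math.sqrt(c) for c in coeff.values())
        if norm == 0.0:  # qubit i untouched by any Pauli term: keep its distribution
            new.append(dict(b_i))
            continue
        new.append({p: (1 - delta) * b_i[p] + delta * math.sqrt(coeff[p]) / norm
                    for p in "XYZ"})
    return new

# Toy two-qubit Hamiltonian dominated by a ZZ term: the iteration should push
# weight onto Z and sharply reduce the diagonal cost from its uniform value.
alphas = {"ZZ": 1.0, "XI": 0.1}
beta = [{"X": 1/3, "Y": 1/3, "Z": 1/3} for _ in range(2)]
start = diag_cost(alphas, beta)
for _ in range(200):
    beta = lagrange_step(alphas, beta)
```

Because the update is a convex combination of two normalised distributions, the constraints β_i ≥ 0 and Σ_P β_i(P) = 1 are preserved at every step.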

4.2.
Multi-reference optimisation. We finish this section by observing that the technique of optimising the probability distributions also works for multi-reference states such as fermionic Gaussian states, or perturbative solutions. Specifically, consider a multi-reference state written in the logical basis,

    |ψ_MR⟩ = Σ_k λ_k |b^(k)⟩,    b^(k) ∈ {0, 1}^n,

where the amplitudes λ_k ∈ ℂ are such that |ψ_MR⟩ is normalised. The associated density matrix now reads

    ρ_MR = Σ_{k,ℓ} λ_k λ̄_ℓ |b^(k)⟩⟨b^(ℓ)|.

In the following paragraphs, we calculate an appropriate cost function for this case.
Let us restrict ourselves briefly to the single-qubit setting: ρ^(k,ℓ) = |b^(k)⟩⟨b^(ℓ)| with b^(k), b^(ℓ) ∈ {0, 1}. There are two cases for ρ^(k,ℓ), depending on whether b^(k) and b^(ℓ) agree or not. If they agree, then ρ^(k,ℓ) = ½ (I + (−1)^{b^(k)} Z). If they disagree, then ρ^(k,ℓ) = ½ (X + (−1)^{b^(k)} iY). In a similar way to the single-reference setting, we need to calculate f(Q, R, β) tr(ρ^(k,ℓ) QR). This is best done by considering the two cases separately: when b^(k) = b^(ℓ) we collect the resulting single-qubit factor into a function g, and when b^(k) ≠ b^(ℓ) into a function h. Returning to the multi-qubit setting, the cost function which ought to be minimised is obtained by summing λ_k λ̄_ℓ α_Q α_R over the pairs (k, ℓ) and over pairs of Pauli terms (Q, R), each summand weighted by the product over qubits of the corresponding factors g or h.

Numerical experiments on molecular Hamiltonians
In this section we test numerically the locally-biased classical shadows (LBCS) estimator defined in Algorithm 2 for molecular Hamiltonians. We consider six Hamiltonians corresponding to different molecules, represented in a minimal STO-3G basis, ranging from 4 to 16 spin orbitals. (The 8-qubit H₂ example uses a 6-31G basis.) We map the molecular Hamiltonians to qubit Hamiltonians, using three encodings detailed in [23]. The result is qubit Hamiltonians defined on up to 16 qubits. The molecular Hamiltonians are defined in the molecular basis. In this basis, the Hartree-Fock state is a computational basis state. We choose the Hartree-Fock state as our single-reference state, and optimise the distributions β according to Eq. (14) and Eq. (16) separately. We call the optimisation procedure of the β according to Eq. (16) diagonal. We then use the optimised β* to compute the variance of Eq. (11) on the ground states of the molecular Hamiltonians; the ground state and the ground energy are obtained by the Lanczos method for sparse matrices. We report the results in Table 1. In this table, we compare variances obtained with our LBCS estimator against other previously known observable estimators that do not increase circuit depth:
• An estimator based on ℓ1 sampling of the Hamiltonian, detailed in [24,25].
• An estimator which measures together collections of qubit-wise commuting Pauli operators.
To find the collections of Pauli operators, we use a largest degree first (LDF) heuristic [26]. The collections are then sampled according to their Hamiltonian ℓ1 weights.
• Classical shadows as given in [17], which correspond to the case β_i(P_i) = 1/3 for every qubit i and Pauli term P_i ∈ {X, Y, Z}.
Details of the first two estimators may be found in Appendix A. For all the estimators, we report variances exactly computed on the ground states of the Hamiltonians considered.
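A compact sketch of an LDF-style grouping, in our simplified reading of the heuristic of [26]: build the graph whose edges join Pauli terms that fail to commute qubit-wise, then greedily colour vertices in order of decreasing degree; the colour classes are the measurable collections.

```python
def qwc(Q, R):
    """Qubit-wise commutation: on every qubit the factors are equal or one is I."""
    return all(q == r or "I" in (q, r) for q, r in zip(Q, R))

def ldf_groups(paulis):
    """Greedy colouring of the non-commutation graph, visiting vertices in order
    of decreasing degree (largest degree first)."""
    adj = {Q: [R for R in paulis if R != Q and not qwc(Q, R)] for Q in paulis}
    colour = {}
    for Q in sorted(paulis, key=lambda q: len(adj[q]), reverse=True):
        used = {colour[R] for R in adj[Q] if R in colour}
        c = 0
        while c in used:
            c += 1
        colour[Q] = c
    groups = {}
    for Q, c in colour.items():
        groups.setdefault(c, []).append(Q)
    return list(groups.values())
```

For the terms ["ZI", "IZ", "ZZ", "XX", "YY"] this yields three collections, with the three diagonal terms grouped together since they are pairwise qubit-wise commuting.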
In all but one experiment of Table 1, we observe that the LBCS estimator outperforms the other estimators. The one case where the LDF decomposition provides a lower variance -H 2 on a minimal basis -should be considered a curiosity due to the small qubit count.
We also plot in Figure 5 an optimised distribution β*. Specifically, we take the example of H₂O on 14 qubits in the Jordan-Wigner encoding. Due to the symmetry [23] whereby the first 7 qubits correspond to spin-up orbitals and the last 7 qubits correspond to spin-down orbitals, we observe that β*_i = β*_{i+7} for i ∈ {1, 2, ..., 7}. Note also that the probabilities are symmetric in X and Y (which is not the case for the Bravyi-Kitaev encoding).
We next analyse the role played by the specific fermionic encoding used. For a restricted set of Hamiltonians, Table 2 reports variances for the three estimators (LDF grouping, classical shadows, and LBCS) with three different fermion-to-qubit encodings: Jordan-Wigner, parity, and Bravyi-Kitaev. Note that the variances for the parity and Bravyi-Kitaev mappings are higher because those mappings generate Pauli distributions that tend to have more X and Y operators, as opposed to the linear tail of Z operators of the Jordan-Wigner encoding, towards which the distributions β can be easily biased. We do not report ℓ1 sampling in Table 2, as it is invariant under the choice of encoding. Irrespective of the encoding, locally-biased classical shadows show a reduction in variance over the LDF grouping whose collections are sampled according to their ℓ1-norm.
The two different cost functions used to optimise the β-distributions provide very similar variances. This is remarkable considering that only the diagonal cost function defined in Eq. (16) is convex. For any given molecule, our numerical analysis indicated that the non-convex cost function of Eq. (14) always converged to a single collection of distributions, irrespective of the initialised values for the distributions.

Conclusion
This article has considered the measurement problem associated with molecular energy estimation on quantum computers and has proposed a new algorithm for that problem. Investigating the principal subroutine present in classical shadows using random Pauli measurements, we are able to produce a non-uniform version of these shadows, termed locally-biased classical shadows. These locally-biased classical shadows require probability distributions for each qubit. By solving a convex optimisation problem for a given molecular Hamiltonian, we find appropriate probability distributions for measuring states which are close to the true ground state of the molecular Hamiltonian. We benchmark the proposed algorithm on systems of up to 16 qubits and observe significant and consistent improvement over Pauli grouping heuristic algorithms. To support this claim, we have compared with the LDF heuristic, noting that Ref. [14] finds that other heuristics produce numbers of qubit-wise commuting sets that differ by only 10%. We are able to obtain this improvement without solving computationally intensive problems, unlike Pauli grouping methods which use node colouring and minimum clique covering, whose running times are quadratic in the number of Pauli terms of the Hamiltonian.
Finally, the introduction of such a domain-specific cost function is, to the authors' knowledge, novel. It is sufficiently general that applications of this idea will also be relevant in fields unrelated to quantum chemistry.

Appendix A. Comparative Algorithms for Estimating Molecular Hamiltonians
This appendix provides details for the two algorithms against which we benchmark locally-biased classical shadows. Recall that we assume the molecular Hamiltonian H acts on n qubits and that we are given a state ρ whose energy we aim to estimate. We write

    H = Σ_{P ∈ {I,X,Y,Z}^⊗n} α_P P    (25)

and denote by H_0 the traceless part of H. Denote by ‖α‖₁ the ℓ1-norm of the traceless coefficients, and associate with this norm the following ℓ1-distribution γ over the traceless Pauli operators:

    γ(P) = |α_P| / ‖α‖₁,    P ≠ I^⊗n.    (26)

We expose the dependence of the algorithms on the identity coefficient α_{I^⊗n}. This is because, in the practical setting of the molecular Hamiltonians considered in this text, the identity coefficient can be on the order of 10% of ‖α‖₁. For the ℓ1 sampling, it would be unwise to prepare ρ only to subsequently measure no qubits. For the largest degree first setting, it would be unwise to arbitrarily associate the identity operator with one of the collections of qubit-wise commuting Pauli operators, thereby adding the identity operator's weight |α_{I^⊗n}| to the corresponding collection's weight and overly favouring the sampling of said collection.
Recall the notation from Section 2. Given a Pauli operator P, we let µ(P, i) ∈ {±1} denote the eigenvalue measured when qubit i is measured in the P_i basis. For a subset A ⊆ {1, 2, ..., n} we write µ(P, A) = ∏_{i∈A} µ(P, i).
A.1. ℓ1 algorithm. This algorithm was the first proposed for estimating energies in the context of variational quantum algorithms [24]. The ℓ1-norm of the traceless coefficients provides the probability distribution γ. We may use this probability distribution to select a Pauli operator P, which dictates the Pauli basis in which to measure the state ρ, thereby providing an estimate for tr(ρP). Algorithm 3 describes this procedure precisely.
For completeness, we record the calculation of the expectation of this estimator. Consider a single shot giving ν. Let E_P denote the expected value over the distribution γ(P) and let E_{µ(P)} denote the expected value over the measurement outcomes for a fixed Pauli operator P. Without loss of generality, we may assume α_{I^⊗n} = 0. Now E_{µ(P)} µ(P, supp(P)) = tr(ρP), whence

    E(ν) = E_P E_{µ(P)} ν = E_P [ ‖α‖₁ sgn(α_P) tr(ρP) ] = Σ_P α_P tr(ρP) = tr(ρH).
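For small systems this expectation can be checked by exact enumeration rather than sampling; the sketch below (helper names ours) draws each traceless term with its ℓ1 probability and averages the single-shot estimate ‖α‖₁ sgn(α_Q) µ over Born-rule outcomes.

```python
import itertools
import numpy as np

PAULI = {"I": np.eye(2, dtype=complex),
         "X": np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def projector(basis, bits):
    """Projector onto outcome `bits` when qubit i is measured in basis basis_i."""
    m = np.eye(1, dtype=complex)
    for p, b in zip(basis, bits):
        m = np.kron(m, 0.5 * (PAULI["I"] + (-1) ** b * PAULI[p]))
    return m

def ell1_mean(rho, alphas, n):
    """Exact E(nu) for l1 sampling: a traceless term Q is drawn with probability
    |alpha_Q| / ||alpha||_1, measured in its own bases, and the single-shot
    estimate ||alpha||_1 sgn(alpha_Q) mu(Q) is averaged over all outcomes."""
    norm = sum(abs(a) for a in alphas.values())
    total = 0.0
    for Q, a in alphas.items():
        basis = tuple(q if q != "I" else "Z" for q in Q)  # idle qubits: any basis
        for bits in itertools.product((0, 1), repeat=n):
            prob = np.trace(rho @ projector(basis, bits)).real
            mu = np.prod([(-1) ** b for q, b in zip(Q, bits) if q != "I"])
            total += (abs(a) / norm) * prob * norm * np.sign(a) * mu
    return total
```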
A.2. Largest degree first. Consider a Hamiltonian decomposed into K collections {C^(k)}_{k=1}^K excluding the identity term:

    H = α_{I^⊗n} I^⊗n + Σ_{k=1}^K H_k,    where H_k = Σ_{Q ∈ C^(k)} α_Q Q.

(Recalling the notation H_0 for the traceless part of the Hamiltonian, we note that H_0 = Σ_k H_k.) Suppose that for each collection C^(k) the Pauli terms commute qubit-wise: for all Q, R ∈ C^(k) and all qubits i, we have [Q_i, R_i] = 0. In this case, there exists a Pauli operator P^(k) of weight n which commutes qubit-wise with each Pauli in C^(k).
Consider also a probability distribution κ over the collections {C (k) } K k=1 . Sampling from this distribution provides Algorithm 4.
Consider a single sample giving an estimator ν. Similarly to the ℓ1 algorithm, we observe that ν recovers tr(ρH) in expectation. Specifically, for a fixed collection C^(k), and hence a fixed full-weight Pauli operator P^(k), let E_{µ(P^(k))} denote the expected value over the measurement outcomes associated with P^(k). Now E_{µ(P^(k))} µ(P^(k), supp(Q)) = tr(ρQ) whenever Q ∈ C^(k), and if we let E_{C^(k)} denote the expected value over the distribution κ(C^(k)) we conclude

    E(ν) = E_{C^(k)} E_{µ(P^(k))} ν = E_{C^(k)} [ κ(C^(k))^{-1} Σ_{Q ∈ C^(k)} α_Q tr(ρQ) ] = Σ_{k=1}^K Σ_{Q ∈ C^(k)} α_Q tr(ρQ) = tr(ρH).

Again, we have assumed without loss of generality that H is traceless.
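The same exact-enumeration check applies to the grouped estimator. The two-qubit Hamiltonian and its qubit-wise commuting grouping below are hypothetical, and the groups are sampled, as in the main text, proportionally to the ℓ1 weight of their coefficients.

```python
import itertools
import numpy as np

PAULI = {"I": np.eye(2, dtype=complex),
         "X": np.array([[0, 1], [1, 0]], dtype=complex),
         "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
         "Z": np.array([[1, 0], [0, -1]], dtype=complex)}

def projector(basis, bits):
    m = np.eye(1, dtype=complex)
    for p, b in zip(basis, bits):
        m = np.kron(m, 0.5 * (PAULI["I"] + (-1) ** b * PAULI[p]))
    return m

# Hypothetical two-qubit Hamiltonian and a qubit-wise commuting grouping of it.
alphas = {"XX": 0.5, "ZZ": 0.2, "ZI": 0.3}
groups = [["XX"], ["ZZ", "ZI"]]

def full_weight_basis(group):
    """A weight-n Pauli that commutes qubit-wise with every member of the group."""
    basis = []
    for i in range(len(group[0])):
        letters = {Q[i] for Q in group} - {"I"}
        basis.append(letters.pop() if letters else "Z")
    return tuple(basis)

def ldf_mean(rho, alphas, groups):
    """Exact E(nu): group k is drawn with kappa(k) ~ sum_{Q in C^(k)} |alpha_Q|,
    the state is measured in the basis P^(k), and the single-shot estimate is
    kappa(k)^-1 sum_{Q in C^(k)} alpha_Q mu(P^(k), supp(Q))."""
    weights = [sum(abs(alphas[Q]) for Q in g) for g in groups]
    kappa = [w / sum(weights) for w in weights]
    n = len(groups[0][0])
    total = 0.0
    for g, k in zip(groups, kappa):
        P = full_weight_basis(g)
        for bits in itertools.product((0, 1), repeat=n):
            prob = np.trace(rho @ projector(P, bits)).real
            est = sum(alphas[Q] * np.prod([(-1) ** b
                      for q, b in zip(Q, bits) if q != "I"]) for Q in g) / k
            total += k * prob * est
    return total
```

Note that the sampling probability κ(k) cancels in expectation, which is exactly why the estimator is unbiased for any valid choice of κ.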