Optimizing measurement sequences for quantum state verification

We consider the problem of deciding whether a given state preparation, i.e., a source of quantum states, is accurate, namely whether it produces states close to a target one within a prescribed threshold. While most results in the literature consider the case in which the measurement operators can be arbitrarily chosen depending on the target state, obtaining favorable (Heisenberg) scaling, we focus on the case in which the measurements can only be chosen from a given set. We show that, in this case, the order of the measurements is critical for quickly assessing accuracy. We propose and compare different strategies to compute optimal or suboptimal measurement sequences, either relying solely on a priori information, i.e., the target state of the preparation, or actively adapting the sequence to the previously obtained measurements. Numerical simulations show that the proposed algorithms significantly reduce the number of measurements needed for verification, and indicate an advantage for the adaptive protocol, especially in assessing faulty preparations.


Introduction
Due to unavoidable errors, noise and decoherence, realistic quantum devices do not always behave as expected. Various metrics can be used to characterize and benchmark a quantum device [1]. In this work, we focus on devices expected to reliably produce some target state. Given an unknown quantum state in a d-dimensional Hilbert space H_d, d² − 1 measurements are in general necessary for a full tomographic reconstruction of the corresponding density matrix [2].
However, in many situations, such as quantum communication, quantum state preparation, or quantum computation, we are more concerned with whether some experimentally accessible quantum state ρ_exp is accurate enough with respect to a target state ρ_0, representing the intended result of the preparation, processing or communication task, than with fully reconstructing it. This problem is referred to as quantum state certification [3].
Of course, one way to tackle the problem would be to perform a full tomography of ρ_exp, and then decide on accuracy accordingly. This is, however, generally inefficient, as it requires obtaining averages of at least d² − 1 independent observables, and it does not leverage prior information about the target state ρ_0. For example, if the target state is known to be pure, a smaller number of measurements suffices via compressed-sensing techniques [4].
If the measurements can be designed and optimized for the state to be certified, more efficient techniques can be devised, requiring fewer measurements and fewer repetitions to obtain reliable certification with a specified probability [3,5]. The basic intuition is that if the state is pure, the state itself is an optimal measurement, and repeating that measurement leads to a quadratic advantage in the number of tests needed to achieve a desired accuracy. The strategy can then be extended to include locality constraints, specific classes of target states, adversarial choices of the states to be tested, classical communication and more [5–12]. A common assumption of these algorithms, needed to avoid false negatives and to ensure the quadratic advantage, is that all measurements leave the target state invariant.
In this work we reconsider this task, which we shall call the verification problem, in a different scenario: we assume that only a finite set of measurements is available and given. Under this assumption, the previously recalled optimal verification strategies are typically not effective, as it is possible that no measurement leaves the target invariant.
We thus construct procedures that decide whether the state ρ_exp is accurate within a prescribed tolerance, without necessarily performing a full tomography and thus still reducing the number of required observables. The central idea is to order the measurement sequence using the a priori information, so that the first measurements are the most informative when the state to be measured is indeed ρ_0. The procedure can also be seen as a way to optimize the order of the measured observables in a tomography, depending on the best available estimate of the state at hand, in the spirit of [13].
The procedures we propose are of two types. The first computes the whole measurement sequence off-line, and then uses it to choose which measurements to actually perform, stopping as soon as verification can be decided. A crucial aspect, in practice, is the computation of the optimal sequence. The latter is a nontrivial optimization problem that has to be solved in a number of instances that scales combinatorially with the number of measurements, which in turn grows at least quadratically in the dimension of the space. For this reason, even the off-line calculation of optimal sequences rapidly becomes impractical. To address this problem, we propose iterative algorithms, which determine the best next measurement given the previously chosen ones. Two versions are provided, the second of which relies on a relaxation of the constraints that allows for an analytic treatment. These ways of constructing the sequence, albeit suboptimal, are computationally tractable and offer another advantage: they lend themselves to be used as adaptive strategies, which rely on the actually obtained measurements rather than just the target state. Indeed, the second type of verification method we propose is an adaptive strategy, where the next measurement is chosen based on the best available estimate given the measurements performed up to that point. The different methods are tested on a paradigmatic example: a two-qubit state where only local Pauli measurements are available. The results highlight the flexibility of the adaptive method, which performs well even in the case of inaccurate priors.

Problem setting and verification criteria
We denote by B(H_d) the set of all linear operators on a finite-dimensional Hilbert space H_d, and by S(H_d) the set of all physical density matrices on H_d.
In order to precisely specify the verification task, we introduce the following definition, which depends on the choice of a relevant distance-like function.

Definition 1 ((ε, D, ρ_0)-accurate). Given a target state ρ_0 ∈ S(H_d), a (pseudo-)distance D on S(H_d) and a threshold ε > 0, a state ρ_exp ∈ S(H_d) is (ε, D, ρ_0)-accurate if D(ρ_exp, ρ_0) ≤ ε.

Consider a set of observables, represented by Hermitian matrices {A_i}_{i=1}^R, where R is a positive integer. This set of observables is called information-complete if {A_i}_{i=1}^R generates the set of all d-dimensional traceless Hermitian matrices; a necessary condition is thus R ≥ d² − 1. If {A_i}_{i=1}^R is information-complete and the measurement statistics {ŷ_i}_{i=1}^R are known exactly, i.e., ŷ_i = y_i := Tr(ρ_exp A_i) for i ∈ {1, ..., R}, then there is a unique state compatible with the constraints, namely the generated state ρ_exp. Throughout this paper, we suppose that the set of observables is finite, information-complete, and fixed. The problem we will be concerned with is the following.

Problem 1. Based on the a priori state ρ_0 and the available data {ŷ_i}_{i=1}^K with K ≤ R, determine the order of the A_k that verifies whether the generated state ρ_exp is (ε, D, ρ_0)-accurate with as few measurements as possible.
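As a small numerical aside (not part of the paper's protocol, and with a hypothetical helper name), information-completeness can be checked by stacking the vectorized traceless parts of the observables and testing whether they span the (d² − 1)-dimensional space of traceless Hermitian matrices:

```python
import numpy as np

# Hypothetical helper: a set {A_i} is information-complete iff the traceless
# parts of the A_i span the (d^2 - 1)-dimensional space of traceless
# Hermitian matrices, i.e. the stacked vectorizations have rank d^2 - 1.
def is_information_complete(observables):
    d = observables[0].shape[0]
    rows = []
    for A in observables:
        T = A - np.trace(A) / d * np.eye(d)   # traceless part of A
        rows.append(T.reshape(-1))
    return np.linalg.matrix_rank(np.array(rows)) == d * d - 1

# Single-qubit Pauli observables: information-complete (R = 3 = d^2 - 1).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1.0, -1.0]).astype(complex)
print(is_information_complete([sx, sy, sz]))   # True
print(is_information_complete([sx, sy]))       # False: the sigma_z direction is missing
```

Dropping any one Pauli leaves a one-parameter family of states compatible with all the data, which is exactly the situation the verification criteria below must handle.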
In order to introduce the central idea of the work, let us assume for now that a certain sequence of the available observables has been decided. There are two cases in which the verification process can be terminated, establishing whether the generated state is (ε, D, ρ_0)-accurate or not with a minimum of measurements. Suppose that the measurements are perfect, namely the available data y_i satisfy y_i = Tr(A_i ρ_exp). Denote by S̃_i := {ρ ∈ S(H_d) | Tr(ρA_i) = y_i} the set of states compatible with the measurement datum y_i. Based on {y_i}_{i=1}^K, two criteria can be used at each step to verify whether the generated state ρ_exp is (ε, D, ρ_0)-accurate:

C1: if min_{ρ ∈ ∩_{i=1}^K S̃_i} D(ρ, ρ_0) > ε, then ρ_exp is not (ε, D, ρ_0)-accurate;

C2: if max_{ρ ∈ ∩_{i=1}^K S̃_i} D(ρ, ρ_0) ≤ ε, then ρ_exp is (ε, D, ρ_0)-accurate.
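A toy illustration of the two criteria (ours, not from the paper): for a single-qubit target |0⟩⟨0| (Bloch vector (0, 0, 1)) and one measurement of σ_z with outcome average y, the compatible set is the disc {r : r_z = y, |r| ≤ 1}, and in Hilbert-Schmidt distance d_HS(ρ, ρ_0) = |r − r_0|/√2 the nearest compatible state is (0, 0, y) while the farthest lies on the rim of the disc:

```python
import numpy as np

# Closed-form C1/C2 check for target |0><0| and a single sigma_z datum y.
# dmin: distance to the nearest compatible state, dmax: to the farthest.
def crit(y, eps):
    dmin = abs(1 - y) / np.sqrt(2)
    dmax = np.sqrt((1 - y) ** 2 + (1 - y ** 2)) / np.sqrt(2)
    if dmin > eps:
        return "not accurate (C1)"     # every compatible state is far
    if dmax <= eps:
        return "accurate (C2)"         # every compatible state is close
    return "undecided: measure more observables"

print(crit(0.2, 0.3))     # compatible states all far from the target -> C1
print(crit(0.999, 0.3))   # compatible states all close -> C2
print(crit(0.9, 0.1))     # neither criterion fires yet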
Depictions of the situations corresponding to the two criteria C1 and C2 are shown in Figure 1. C1 guarantees that all states compatible with the measurement data are outside the ball of radius ε around the target state ρ_0, while C2 ensures that the same states are all inside.
In the following sections, we shall leverage the criteria above in order to devise optimal measurement sequences, or suboptimal ones that present computational advantages and can be adapted to the actual measurement outcomes.
Verification of the quantum state based on the a priori state

In this section, we first introduce a strategy for determining the measurement sequence M off-line, based only on the a priori target state ρ_0, i.e., without using the measurement data. We then use the sequence M to verify whether the generated state ρ_exp is (ε, D, ρ_0)-accurate according to the criteria C1 and C2. The objective is to perform as few measurements as possible to achieve verification.

Off-line construction of the optimal measurement sequence
From an experimental point of view, it is arguably easier to determine the whole sequence of measurements before performing them. We shall start by exploring this approach; the adaptive approach, in which the next measurement is chosen depending on the outcomes of the previous ones, will be treated in Section 4. Denote by S_i(ρ_0) := {ρ ∈ S(H_d) | Tr(ρA_i) = Tr(ρ_0 A_i)} the set of density matrices compatible with the measurement of A_i that we would obtain if the state were actually ρ_0 ∈ S(H_d). Relying only on prior information, with no true measurement data available, we use S_i(ρ_0) to replace the constraints S̃_i in the criteria C1 and C2. Note that S_i(ρ_0) = S̃_i if the state is perfectly generated, i.e., Tr(ρ_exp A_i) = Tr(ρ_0 A_i). Obviously, since ρ_0 ∈ S_i(ρ_0) for all i ∈ {1, ..., R} by construction, C1 can never be satisfied in this scenario. Thus, we only exploit C2 to determine the order of the measurements. Suppose that the distance function D is continuous on S(H_d), e.g., any matrix norm, the quantum relative entropy, etc. (see [14, Chapters 9, 11] for standard options); then, due to the compactness of ∩_i S_i(ρ_0), max_{ρ ∈ ∩_i S_i(ρ_0)} D(ρ, ρ_0) exists.
If the state were actually ρ_0, the minimal number of measurements allowing one to determine that the preparation was indeed accurate would correspond, according to C2, to the minimum n for which there exists a set of measurement indexes M_n ⊂ {1, ..., R} such that max_{ρ ∈ ∩_{i∈M_n} S_i(ρ_0)} D(ρ, ρ_0) ≤ ε, and the optimal sequence would be any permutation of M_n.
Algorithm OS could be used to generate one such optimal sequence.
Note that each step of the above algorithm is independent of the others; thus, it may happen that M_i ⊄ M_j for some i < j. At the end of the process, we obtain a set of measurements M_n containing n ≤ R elements, whose corresponding observables are the optimal choice for the verification of the (ε, D, ρ_0)-accuracy of ρ_exp = ρ_0. The order of the elements belonging to M_n is not important. However, the computational complexity of the above algorithm is too large: in order to determine M_n, it needs to solve ∑_{k=1}^{n} (n choose k) = 2^n − 1 optimization problems. Moreover, in practice the generated state ρ_exp is usually different from the target state ρ_0, and thus the measurement sequence M_n generated by Algorithm OS may not suffice to verify the accuracy of ρ_exp. To obtain a tomographically complete sequence, one needs to add d² − n linearly independent measurement operators from the available set.
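To make the combinatorial cost concrete, the following sketch (ours, under assumed simplifications: a single qubit in Bloch coordinates, Hilbert-Schmidt distance, so that the worst-case distance over each compatible set has a closed form) enumerates subsets by increasing size, exactly as Algorithm OS would:

```python
import numpy as np
from itertools import combinations

# Worst-case (and best-case) d_HS from the target Bloch vector r0 to the set
# {r : |r| <= 1, dirs @ r = ys}. Closed form: the feasible set is a section of
# the unit ball, parametrized via the null space of the constraint matrix.
def dist_range(dirs, ys, r0):
    A = np.atleast_2d(np.array(dirs, dtype=float))
    p, *_ = np.linalg.lstsq(A, np.array(ys, dtype=float), rcond=None)
    _, s, Vt = np.linalg.svd(A)
    N = Vt[int(np.sum(s > 1e-12)):].T            # free (null-space) directions
    w = p - r0
    wN = N.T @ w                                  # movable part of the offset
    c2 = float(w @ w - wN @ wN)                   # fixed part of the offset
    rad = np.sqrt(max(0.0, 1.0 - p @ p)) if N.shape[1] else 0.0
    lo = np.sqrt(c2 + max(0.0, np.linalg.norm(wN) - rad) ** 2) / np.sqrt(2)
    hi = np.sqrt(c2 + (np.linalg.norm(wN) + rad) ** 2) / np.sqrt(2)
    return lo, hi

# Brute-force Algorithm OS: smallest index set M_n such that every state
# compatible with the rho0-predicted data stays within eps of rho0.
def algorithm_os(dirs_all, r0, eps):
    ys_all = [float(np.dot(a, r0)) for a in dirs_all]
    for n in range(1, len(dirs_all) + 1):         # 2^R - 1 subsets in the worst case
        for M in combinations(range(len(dirs_all)), n):
            _, hi = dist_range([dirs_all[i] for i in M], [ys_all[i] for i in M], r0)
            if hi <= eps:
                return M
    return None

axes = list(np.eye(3))                            # sigma_x, sigma_y, sigma_z
print(algorithm_os(axes, np.array([0.0, 0.0, 1.0]), 0.1))              # (2,)
print(algorithm_os(axes, np.array([1.0, 1.0, 1.0]) / np.sqrt(3), 0.1)) # (0, 1, 2)
```

The output illustrates why ordering matters: for the target |0⟩ a single σ_z measurement already pins the state, while a target along the diagonal of the Bloch ball needs all three observables at the same tolerance.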

Iterative construction of verification sequences
In order to address the above issues, we propose to construct the sequence of measurements iteratively, based on the previously determined measurement indexes; this greatly reduces the computational complexity and allows us to extend the procedure to the full observable set. The resulting sequence is in general suboptimal with respect to ρ_0, but still yields an advantage over a random sequence of observables, as shown in Section 5.

Optimization-based approach
The general algorithm we propose works as follows. It starts by evaluating, for each measurement A_i, the maximal distance α^1_i from ρ_0 of the states ρ belonging to S_i(ρ_0), the set of states compatible with the measurement outcome Tr(ρA_i) = Tr(ρ_0 A_i). The measurement giving the minimum value of α^1_i is selected as the first measurement A_{m_1}, and the corresponding maximum distance is α^1_{m_1}. The next measurement A_{m_{k+1}} is then chosen so that it is linearly independent of the previously chosen ones and, at the same time, minimizes the maximum distance from ρ_0 of the set compatible with the measurements of all the previously selected A_{m_1}, ..., A_{m_k} together with the new one. The minimum worst-case distance among compatible states, α^k_i, with k indicating the iteration and i the candidate measurement, is used as an indicator of how likely it is that checking C2 will allow us to determine whether the actual state is (ε, D, ρ_0)-accurate.
A more formal form of the above algorithm is summarized as Algorithm IOS.
• Step 1: Define S̃ as the set of all i ∈ S such that A_i is linearly independent of the previously selected observables, and compute α^k_i for all i ∈ S̃. If min_{i∈S̃} α^k_i = 0, set M = M ∪ S̃ and stop the process: in this case ρ_0 must belong to the span of the selected measurements. Otherwise, compute arg min_{i∈S̃} α^k_i. If the arg min outputs a single integer, set m_k = arg min_{i∈S̃} α^k_i. If the arg min outputs multiple integers, designate a unique m_k in that set according to some deterministic rule or at random: in this case the criteria we consider do not lead to a preferred choice.
At the end of the procedure, M is an ordered sequence of measurements, from the most to the least informative based on the a priori state. Note that, at the end of Step 2, we obtain a sequence containing n linearly independent observables, from which the target state ρ_0 can be reconstructed via tomography. By construction, the α^k_{m_k} form a decreasing sequence of maximum distances from ρ_0 of the states compatible with the measurements. However, in practice ρ_exp ≠ ρ_0, and in the case n < d² the n observables may not be sufficient to verify the accuracy of ρ_exp. Thus, we need to complete the sequence with d² − n additional linearly independent observables, which we can choose at random or according to other criteria.
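The greedy construction can be sketched as follows (our illustration, under assumed simplifications: a single qubit in Bloch coordinates with Hilbert-Schmidt distance, where the worst-case distance over each compatible set has a closed form; the linear-independence check is trivially satisfied here since the three measurement directions are orthogonal):

```python
import numpy as np

# Worst-case and best-case d_HS from the target Bloch vector r0 to the set
# {r : |r| <= 1, dirs @ r = ys} (see the Algorithm OS sketch for the geometry).
def dist_range(dirs, ys, r0):
    A = np.atleast_2d(np.array(dirs, dtype=float))
    p, *_ = np.linalg.lstsq(A, np.array(ys, dtype=float), rcond=None)
    _, s, Vt = np.linalg.svd(A)
    N = Vt[int(np.sum(s > 1e-12)):].T
    w = p - r0
    wN = N.T @ w
    c2 = float(w @ w - wN @ wN)
    rad = np.sqrt(max(0.0, 1.0 - p @ p)) if N.shape[1] else 0.0
    lo = np.sqrt(c2 + max(0.0, np.linalg.norm(wN) - rad) ** 2) / np.sqrt(2)
    hi = np.sqrt(c2 + (np.linalg.norm(wN) + rad) ** 2) / np.sqrt(2)
    return lo, hi

# Iterative (suboptimal) ordering in the spirit of Algorithm IOS: each step
# appends the direction minimizing the worst-case distance of the set
# compatible with the rho0-predicted data of all selected measurements.
def greedy_ios(dirs_all, r0):
    order, chosen, remaining = [], [], list(range(len(dirs_all)))
    while remaining:
        def worst(i):
            ds = [dirs_all[j] for j in chosen] + [dirs_all[i]]
            return dist_range(ds, [float(np.dot(a, r0)) for a in ds], r0)[1]
        m = min(remaining, key=worst)
        order.append(m); chosen.append(m); remaining.remove(m)
    return order

r0 = np.array([0.28, 0.0, 0.96])          # pure target tilted toward +z
print(greedy_ios(list(np.eye(3)), r0))    # [2, 0, 1]: dominant component first
```

The resulting order measures the dominant Bloch component first, matching the intuition that the most informative observables for a state close to ρ_0 should come first.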

Analytic approach based on distance bound
The computational complexity of Algorithm IOS still depends heavily on the number of optimization problems to be solved which, albeit reduced with respect to the optimal a priori sequence, still increases quadratically with the dimension of the Hilbert space. To address this issue, we provide an approximation of Algorithm IOS for the case in which the Hilbert-Schmidt distance is chosen as the distance function. In this case, we do not order the measurements by evaluating the exact maximal distance of the set of states compatible with the measurements (i.e., the α^k_i values), but instead by evaluating an upper bound on this distance that can be expressed analytically.
The Hilbert-Schmidt distance is defined as d_HS(ρ, σ) := ‖ρ − σ‖_HS = √(Tr((ρ − σ)²)), induced by the Hilbert-Schmidt inner product ⟨ρ, σ⟩_HS = Tr(ρ*σ). In the following proposition, we provide an upper bound on the distance from the target state ρ_0 for states σ that are compatible with ρ_0 according to a set of observables {A_i}_{i=1}^K, where Π_K is the orthogonal projection of ρ_0 onto the subspace spanned by the operators {A_i}_{i=1}^K.

Proposition 1. For every σ ∈ S(H_d) such that Tr(σA_i) = Tr(ρ_0 A_i) for all i ∈ {1, ..., K},

d_HS(ρ_0, σ) ≤ √(Tr(ρ_0²) − Tr(Π_K²)) + √(1 − Tr(Π_K²)).   (1)

Proof. Since Tr(σA_i) = Tr(ρ_0 A_i) for all i ∈ {1, ..., K}, the orthogonal projections of ρ_0 and σ onto the space spanned by the operators {A_i}_{i=1}^K coincide: we denote this common projection by Π_K. We can thus write ρ_0 = Π_K + ϵ⊥_K and σ = Π_K + ς⊥_K, with ϵ⊥_K and ς⊥_K orthogonal to Π_K according to the Hilbert-Schmidt inner product. Therefore d_HS(ρ_0, σ)² = ‖ϵ⊥_K − ς⊥_K‖²_HS = ‖ϵ⊥_K‖²_HS + ‖ς⊥_K‖²_HS − 2Tr(ϵ⊥_K ς⊥_K). From the Cauchy-Schwarz inequality we have that Tr(ϵ⊥_K ς⊥_K) ≥ −‖ϵ⊥_K‖_HS ‖ς⊥_K‖_HS, so that d_HS(ρ_0, σ) ≤ ‖ϵ⊥_K‖_HS + ‖ς⊥_K‖_HS. By orthogonality, ‖ϵ⊥_K‖²_HS = Tr(ρ_0²) − Tr(Π_K²) and, since Tr(σ²) ≤ 1, ‖ς⊥_K‖²_HS = Tr(σ²) − Tr(Π_K²) ≤ 1 − Tr(Π_K²), and the main proposition follows.

Remark 1. We would like to point out that if the target state ρ_0 is pure (i.e., Tr(ρ_0²) = 1) the upper bound given in (1) simplifies to d_HS(ρ_0, σ) ≤ 2√(1 − Tr(Π_K²)). Moreover, a similar bound also holds when the Bures metric is employed and the target state ρ_0 is pure. Indeed, when ρ_0 is pure, the Bures distance is written as d_B(ρ_0, σ) = √(2(1 − Tr(ρ_0 σ))). Therefore, by following similar steps it is possible to show that, for pure ρ_0 with Tr(Π_K²) ≥ 1/2, we have d_B(ρ_0, σ) ≤ 2√(1 − Tr(Π_K²)). Lastly, the bound (1) can be interpreted geometrically. The states σ are written as σ = Π_K + ς⊥_K with fixed Π_K; therefore the states σ are contained within a ball centered at Π_K of radius R_K = √(1 − Tr(Π_K²)). The state ρ_0 = Π_K + ϵ⊥_K also belongs to such a ball, but its distance from the center is given by d_K = √(Tr(ρ_0²) − Tr(Π_K²)). Therefore, by the triangle inequality, the maximum distance between ρ_0 and σ is indeed bounded by R_K + d_K, as in (1). Notice that, starting from a set of linearly independent observables {A_i}, adding an extra observable A_j can only improve the bound.

Proposition 2. Assume we have fixed the first {A_i}_{i=1}^K and we add a further measurement operator A_{K+1}. Let {Γ_i} be an orthonormal basis of the space spanned by the {A_i}_{i=1}^K, let A⊥_{K+1} := A_{K+1} − ∑_i Tr(A_{K+1} Γ_i) Γ_i be the component of A_{K+1} orthogonal to that space, and set Γ_{K+1} := A⊥_{K+1}/‖A⊥_{K+1}‖_HS. Then the projection of ρ_0 onto the subspace spanned by the {A_i} and A_{K+1} is given by Π_{K+1} = Π_K + Tr(ρ_0 Γ_{K+1}) Γ_{K+1}. The latter also implies Tr(Π_{K+1}²) = Tr(Π_K²) + Tr²(ρ_0 Γ_{K+1}) ≥ Tr(Π_K²).
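The bound (1) can be checked numerically; the following sketch (ours, for a single qubit with one observable, with the compatible states sampled directly in Bloch coordinates) compares the bound against sampled compatible states:

```python
import numpy as np

rng = np.random.default_rng(0)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def rho_from_bloch(r):
    return (I2 + r[0] * X + r[1] * Y + r[2] * Z) / 2

r0 = np.array([0.6, 0.0, 0.8])            # pure target state
rho0 = rho_from_bloch(r0)

# Projection Pi_K of rho0 onto span{sigma_z}, as in Proposition 1
G = Z / np.sqrt(2)                         # HS-normalized observable
Pi = np.trace(G @ rho0).real * G
tPi2 = np.trace(Pi @ Pi).real
bound = np.sqrt(np.trace(rho0 @ rho0).real - tPi2) + np.sqrt(1 - tPi2)

# Sample states sigma compatible with rho0 on sigma_z (same z Bloch component)
worst = 0.0
for _ in range(2000):
    v = rng.normal(size=2)
    v *= rng.uniform() ** 0.5 / np.linalg.norm(v)   # uniform in the unit disc
    v *= np.sqrt(1 - r0[2] ** 2)                    # stay inside the Bloch ball
    sig = rho_from_bloch([v[0], v[1], r0[2]])
    worst = max(worst, np.linalg.norm(rho0 - sig))  # Frobenius = HS norm
print(worst <= bound)   # True: the bound holds for every compatible state
```

In this configuration the bound evaluates to roughly 1.65 while the true worst case is about 0.85, consistent with (1) being an upper bound rather than the exact α^k_i.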
Notice that the right-hand side of (1) is an upper bound on the parameter α^k_i defined in Algorithm IOS. Since ‖Π_K‖²_HS = Tr(Π_K²), according to Proposition 1 the norm ‖Π_K‖_HS of the projection Π_K of ρ_0 onto the subspace spanned by a subset of observables {A_i} is a useful parameter for optimizing the sequence of measurements: the larger ‖Π_K‖_HS, the lower the upper bound on d_HS(ρ_0, σ). Therefore, the measurement sequence should be chosen so as to maximize the norm of this projection at each step, since the upper bound (1) is monotonically non-increasing with respect to the norm of the projection. To this aim, it is sufficient to select an observable A_{K+1} which maximizes the value of Tr²(ρ_0 A⊥_{K+1})/‖A⊥_{K+1}‖²_HS. A more formal form of the above algorithm is summarized as Algorithm IAS.
Compute ω^(k)_j := Tr²(ρ_0 A⊥_j)/‖A⊥_j‖²_HS for all j ∈ S. Then define the index m_k ∈ arg max_{j∈S} ω^(k)_j, and the matrix Γ_{m_k} := A⊥_{m_k}/‖A⊥_{m_k}‖_HS. Note that if ω^(k)_j = 0 for all j ∈ S, then ρ_0 ∈ span{Γ_{m_1}, ..., Γ_{m_{k−1}}}. If the arg max in the algorithm above produces more than a single index, one is chosen at random in that set. The sequence is generated by increasing as much as possible, at each cycle, the value of ‖Π_k‖_HS. At the end of the procedure, M corresponds to an ordered sequence of d² linearly independent measurement operators, based on the upper bound on the distance from ρ_0 provided above.
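Because this ordering rule is purely linear-algebraic, it can be sketched in a few lines (our illustration, for a two-qubit target |00⟩ and the 15 non-identity Pauli products; ties are broken by scan order rather than at random, an assumed simplification):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)
paulis = {"I": I2, "X": X, "Y": Y, "Z": Z}

names = [a + b for a in "IXYZ" for b in "IXYZ"][1:]   # 15 products, identity excluded
obs = {n: np.kron(paulis[n[0]], paulis[n[1]]) for n in names}

rho0 = np.zeros((4, 4), dtype=complex); rho0[0, 0] = 1.0   # target |00><00|

def hs(a, b):                          # Hilbert-Schmidt inner product
    return np.trace(a.conj().T @ b).real

# Greedy IAS-style ordering: maximize omega_j = Tr(rho0 Aperp_j)^2 / ||Aperp_j||^2,
# where Aperp_j is the component of A_j orthogonal to the span chosen so far.
order, gammas, remaining = [], [], list(names)
for _ in range(len(names)):
    best, best_w = None, -1.0
    for n in remaining:
        Ap = obs[n] - sum(hs(g, obs[n]) * g for g in gammas)
        nrm2 = hs(Ap, Ap)
        if nrm2 < 1e-12:               # linearly dependent on chosen observables
            continue
        w = hs(rho0, Ap) ** 2 / nrm2
        if w > best_w:
            best, best_w, best_Ap, best_n2 = n, w, Ap, nrm2
    order.append(best)
    gammas.append(best_Ap / np.sqrt(best_n2))
    remaining.remove(best)

print(order[:3])   # the Z-type observables carry all the weight for |00>
```

Since |00⟩⟨00| = (II + ZI + IZ + ZZ)/4, the three Z-type products are picked first, after which every remaining ω vanishes: exactly the "project onto ρ_0 as fast as possible" behavior the bound suggests.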

Verification algorithm based on the measurement sequence
Once the measurement sequence M is obtained using one of the algorithms above, we can run Algorithm VM to verify whether the generated state ρ_exp is (ε, D, ρ_0)-accurate according to C1 and C2. At step k, let γ_k := min_{ρ ∈ ∩_{i≤k} S̃_{m_i}} D(ρ, ρ_0) and Γ_k := max_{ρ ∈ ∩_{i≤k} S̃_{m_i}} D(ρ, ρ_0); then:
- if γ_k > ε, then ρ_exp is not (ε, D, ρ_0)-accurate, and the procedure stops;
- if Γ_k ≤ ε, then ρ_exp is (ε, D, ρ_0)-accurate, and the procedure stops;
- otherwise, update k = k + 1 and N = N ∪ {m_k}.
Remark 2. If the above procedure ends with k = d², we can reconstruct the generated state as ρ_exp = ∑_{i∈N} c_i A_i, where the coefficients {c_i}_{i∈N} can be computed, for example, by solving the linear system y_j = ∑_{i∈N} c_i Tr(A_j A_i), j ∈ N.
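The verification loop can be sketched as follows (ours, again for a single qubit with Hilbert-Schmidt distance and perfect data, where γ_k and Γ_k have closed forms in Bloch coordinates; the measurement order is an assumed input, e.g. from one of the off-line orderings):

```python
import numpy as np

# gamma_k (min) and Gamma_k (max) d_HS from r0 over {r : |r| <= 1, dirs @ r = ys}
def dist_range(dirs, ys, r0):
    A = np.atleast_2d(np.array(dirs, dtype=float))
    p, *_ = np.linalg.lstsq(A, np.array(ys, dtype=float), rcond=None)
    _, s, Vt = np.linalg.svd(A)
    N = Vt[int(np.sum(s > 1e-12)):].T
    w = p - r0
    wN = N.T @ w
    c2 = float(w @ w - wN @ wN)
    rad = np.sqrt(max(0.0, 1.0 - p @ p)) if N.shape[1] else 0.0
    lo = np.sqrt(c2 + max(0.0, np.linalg.norm(wN) - rad) ** 2) / np.sqrt(2)
    hi = np.sqrt(c2 + (np.linalg.norm(wN) + rad) ** 2) / np.sqrt(2)
    return lo, hi

# Algorithm VM sketch: walk the given order, stop at the first criterion that fires.
def verify(order_dirs, r_true, r0, eps):
    dirs, ys = [], []
    for k, a in enumerate(order_dirs, start=1):
        dirs.append(a); ys.append(float(np.dot(a, r_true)))   # perfect data
        lo, hi = dist_range(dirs, ys, r0)
        if lo > eps:
            return False, k        # C1: every compatible state is far
        if hi <= eps:
            return True, k         # C2: every compatible state is close
    return None, len(order_dirs)   # undecided (cannot happen with perfect data here)

ex, ey, ez = np.eye(3)
r0 = np.array([0.0, 0.0, 1.0])                                  # target |0><0|
print(verify([ez, ex, ey], np.array([0.0, 0.0, 0.9]), r0, 0.2)) # (True, 3)
print(verify([ez, ex, ey], np.array([0.0, 0.0, 0.1]), r0, 0.2)) # (False, 1)
```

Note that the faulty preparation is rejected after a single measurement, while certifying the accurate one requires pinning down all remaining degrees of freedom, mirroring the asymmetry observed in the simulations of Section 5.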

Adaptive quantum state verification
In the previously proposed algorithms, the measurement sequence was determined off-line (i.e., without performing any measurement) by leveraging only the information on the a priori state ρ_0. Here, we improve on the procedures of Algorithm IOS and Algorithm IAS by also exploiting the measurement data collected at each step, in addition to the a priori state, to determine the next measurement and then verify the state. We call such a protocol adaptive verification.
For now, suppose that the measurements are perfect: namely, the sampled output averages correspond to the true expected values for the actual state. We initialize the algorithm as in Algorithm IOS since, before performing any measurement, the a priori state is the only accessible information. Compute α^1_i := max_{ρ∈S_i(ρ_0)} D(ρ, ρ_0) for all i ∈ {1, ..., R} and m_1 ∈ arg min_{i∈{1,...,R}} α^1_i. If the arg min cannot assign a unique m_1, then we adopt the following rule: select an observable at random among those indicated by the criterion of Algorithm IAS, namely those maximizing Tr²(ρ_0 A⊥_j)/‖A⊥_j‖²_HS. Then, we perform the measurement A_{m_1} and obtain an empirical estimate of y_{m_1} = Tr(ρ_exp A_{m_1}). For simplicity in presenting the algorithm, we shall assume here that we obtain the exact value y_{m_1}; the case of imperfect estimates can be treated along the same lines. In order to test both criteria C1 and C2, we compute ω_1 := min_{ρ∈S̃_{m_1}} D(ρ, ρ_0) and Ω_1 := max_{ρ∈S̃_{m_1}} D(ρ, ρ_0). If ω_1 > ε, then ρ_exp is not (ε, D, ρ_0)-accurate; and if Ω_1 ≤ ε, then ρ_exp is (ε, D, ρ_0)-accurate. Otherwise, we determine an estimate of ρ_exp based on the measurement datum y_{m_1} by ρ_1 = arg min_{ρ∈S̃_{m_1}} f_{ρ_0}(ρ), where f_{ρ_0}(ρ) is a continuous function such that ρ_0 = arg min_{ρ∈S(H_d)} f_{ρ_0}(ρ), quantifying an informational distance between ρ ∈ S(H_d) and ρ_0 ∈ S(H_d). Common choices for f are the quantum relative entropy [13] or any distance function on S(H_d) [14, Chapter 9]; strictly convex functions guarantee the uniqueness of the minimum. For all i ∈ {1, ..., R} \ {m_1}, according to the criteria C1 and C2, we compute δ^1_i := min_{ρ∈S_i(ρ_1)∩S̃_{m_1}} D(ρ, ρ_0) and ∆^1_i := max_{ρ∈S_i(ρ_1)∩S̃_{m_1}} D(ρ, ρ_0), where S_i(ρ_1) := {ρ ∈ S(H_d) | Tr(ρA_i) = Tr(ρ_1 A_i)}. Notice that the constraint set is now computed for ρ_1, which depends on the actual measurement outcomes. Intuitively, the smaller ε − δ^1_i (resp. ∆^1_i − ε) is, the more likely it is that C1 (resp. C2) will be verified (see Figure 2).
If for some i the quantity ε − δ^1_i or ∆^1_i − ε is small, choosing the corresponding measurement is expected to bring the compatible set closer to verifying criterion C1 or C2, respectively. However, if there exists i such that δ^1_i = 0, then ε − δ^1_i = ε and ρ_0 ∈ S_i(ρ_1) ∩ S̃_{m_1}, which means that C1 cannot yield a conclusion. Thus, if δ^1_i = 0 for all i, only the ∆^1_i provide information for the selection of the next measurement. Therefore, in order to maximize the chances of a successful verification, we set m_2 ∈ arg min_i min{ε − δ^1_i, ∆^1_i − ε}. If the arg min cannot assign a unique m_2, then we select one by employing the idea of Algorithm IAS, that is, we select an observable at random among those which maximize Tr²(ρ_1 A⊥_j)/‖A⊥_j‖²_HS. The whole verification procedure can then be defined recursively.

Remark 3. Note that, at each step, determining the estimate ρ_k of ρ_exp amounts to solving a quantum state tomography problem [2] based on partial information; the compatible sets always contain the generated state, since ρ_exp ∈ ∩_i S̃_{m_i} and the measurements are supposed to be perfect.
We summarize the adaptive verification algorithm with perfect measurements as Algorithm AV. Thanks to the perfect measurements, we always obtain a verification result when the algorithm ends. In Step 2, we specifically consider the case δ^k_i = 0 for all i ∈ S, in which ρ_0 belongs to all the compatible sets, i.e., C1 can never be verified; thus, we can only apply C2 to determine the next measurement. If ρ_exp = ρ_0, in Step 1 of Algorithm AV we have ρ_k ≡ ρ_0 for every k ∈ {1, ..., R}, since ρ_0 = arg min_{ρ∈S(H_d)} f_{ρ_0}(ρ), which implies δ^k_i ≡ 0. Thus, in this case, Algorithm AV is equivalent to the combination of Algorithm IOS and Algorithm IAS.
Note that Algorithm AV can also be applied in the case of imperfect measurements. However, if the sample size is not large enough, or if there are errors and biases, one may obtain ∩_{i=1}^K S̃_{m_i} = ∅. In this case, we need to stop the verification process and measure ρ_exp again.
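The adaptive loop can be sketched end-to-end (ours, under the usual assumed simplifications: single qubit, Hilbert-Schmidt distance, perfect data, ρ_k taken as the compatible state nearest to ρ_0, a simple stand-in for the f_{ρ_0} minimization, and ties broken by scan order):

```python
import numpy as np

def dist_range(dirs, ys, r0):
    # min/max d_HS from r0 over {r : |r| <= 1, dirs @ r = ys}
    A = np.atleast_2d(np.array(dirs, dtype=float))
    p, *_ = np.linalg.lstsq(A, np.array(ys, dtype=float), rcond=None)
    _, s, Vt = np.linalg.svd(A)
    N = Vt[int(np.sum(s > 1e-12)):].T
    w = p - r0
    wN = N.T @ w
    c2 = float(w @ w - wN @ wN)
    rad = np.sqrt(max(0.0, 1.0 - p @ p)) if N.shape[1] else 0.0
    lo = np.sqrt(c2 + max(0.0, np.linalg.norm(wN) - rad) ** 2) / np.sqrt(2)
    hi = np.sqrt(c2 + (np.linalg.norm(wN) + rad) ** 2) / np.sqrt(2)
    return lo, hi

def nearest_feasible(dirs, ys, r0):
    # Estimate r_k: compatible Bloch vector closest to r0 (stand-in for f_rho0)
    A = np.atleast_2d(np.array(dirs, dtype=float))
    p, *_ = np.linalg.lstsq(A, np.array(ys, dtype=float), rcond=None)
    _, s, Vt = np.linalg.svd(A)
    N = Vt[int(np.sum(s > 1e-12)):].T
    if N.shape[1] == 0:
        return p
    wN = N.T @ (p - r0)
    rad = np.sqrt(max(0.0, 1.0 - p @ p))
    nw = np.linalg.norm(wN)
    z = -wN if nw <= rad else -wN * rad / nw
    return p + N @ z

def adaptive_verify(all_dirs, r_true, r0, eps):
    dirs, ys, remaining = [], [], list(range(len(all_dirs)))
    # first measurement: a-priori IOS-like choice with data predicted by rho0
    m = min(remaining, key=lambda i: dist_range([all_dirs[i]],
                                                [float(all_dirs[i] @ r0)], r0)[1])
    step = 0
    while True:
        step += 1
        dirs.append(all_dirs[m]); ys.append(float(all_dirs[m] @ r_true))
        remaining.remove(m)
        lo, hi = dist_range(dirs, ys, r0)
        if lo > eps:
            return False, step       # C1
        if hi <= eps:
            return True, step        # C2
        rk = nearest_feasible(dirs, ys, r0)
        # score candidates by min{eps - delta, Delta - eps}, with hypothetical
        # data generated by the current estimate rk
        def score(i):
            d, D = dist_range(dirs + [all_dirs[i]],
                              ys + [float(all_dirs[i] @ rk)], r0)
            return min(eps - d, D - eps)
        m = min(remaining, key=score)

axes = list(np.eye(3))
r0 = np.array([0.0, 0.0, 1.0])
print(adaptive_verify(axes, np.array([0.1, 0.0, 0.95]), r0, 0.2))  # (True, 3)
print(adaptive_verify(axes, np.array([0.0, 0.0, 0.2]), r0, 0.2))   # (False, 1)
```

As in the paper's simulations, the faulty preparation is caught immediately, while the accurate one is certified once the remaining uncertainty about the compatible set shrinks below the threshold.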

Application: Two-qubit systems
In the following, we test the proposed algorithms by simulating measurements to verify the accuracy of the preparation of randomized pure states in a two-qubit system. We summarize the key elements of the numerical experiments we ran.
Target states: We pick 100 sets of 4 independent complex random numbers, whose real and imaginary parts are drawn according to a normal distribution and belong to [−100, 100], i.e., |ψ_i⟩ ∈ C⁴ with i = 1, ..., 100. Then, we generate 100 pure target states as ρ_0,i = |ψ_i⟩⟨ψ_i| / Tr(|ψ_i⟩⟨ψ_i|).
Bures distance: The distance we employ is the Bures distance, which reduces to d_B(ρ, ρ_0) = √(2(1 − Tr(ρρ_0))) when ρ_0 is a pure state. Obviously, d_B(ρ, ρ_0) is strictly monotonically decreasing with respect to Tr(ρρ_0). Since Tr(ρρ_0) is linear in ρ, we can apply convex optimization (CVX-SDP [15]) in the simulations to search for the minimum and maximum values of Tr(ρρ_0) under the constraints.
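A minimal sketch of the target-state generation and of the pure-state Bures formula (ours; the truncation of the normal samples to [−100, 100] is immaterial after normalization and is omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Random pure two-qubit target: normalize a complex Gaussian vector
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
rho0 = np.outer(psi, psi.conj()) / np.vdot(psi, psi).real

# For a pure target, d_B(rho, rho0) = sqrt(2 (1 - Tr(rho rho0))),
# a strictly decreasing function of the overlap Tr(rho rho0), which is
# linear in rho and hence amenable to SDP-based min/max searches.
def bures_pure(rho, rho0):
    return np.sqrt(max(0.0, 2.0 * (1.0 - np.trace(rho @ rho0).real)))

print(bures_pure(rho0, rho0) < 1e-7)                       # distance to itself: ~0
mixed = np.eye(4) / 4
print(np.isclose(bures_pure(mixed, rho0), np.sqrt(1.5)))   # Tr = 1/4 -> sqrt(3/2)
```

The monotone relation between d_B and the linear functional Tr(ρρ_0) is exactly what allows the compatible-set extrema to be computed as semidefinite programs.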
• Step 2 (of Algorithm AV): Collect in S̃ all i ∈ S such that A_i ∉ span{A_j}_{j∈M}. For all i ∈ S̃, compute δ^k_i and ∆^k_i.
– If δ^k_i = 0 for all i ∈ S̃: if the arg min outputs a single integer, set m_{k+1} = arg min_{i∈S̃} ∆^k_i; if it outputs multiple integers, compute A⊥_j = A_j − ∑_{i∈M} Tr(A_j Γ_i)Γ_i for all j ∈ S̃ and choose at random m_{k+1} ∈ arg max_{j∈S̃} Tr²(ρ_k A⊥_j)/‖A⊥_j‖²_HS.
– Otherwise: if the arg min outputs a single integer, set m_{k+1} = arg min_{i∈S̃} min{ε − δ^k_i, ∆^k_i − ε}; if it outputs multiple integers, compute A⊥_j as above and choose m_{k+1} at random in arg max_{j∈S̃} Tr²(ρ_k A⊥_j)/‖A⊥_j‖²_HS.
Measurements: We apply projective measurements onto Pauli eigenstates. Let Π_1, ..., Π_6 be the eigenprojectors of the Pauli matrices corresponding to the eigenvalues 1 and −1 respectively, i.e., σ_x Π_1 = Π_1, σ_x Π_2 = −Π_2, ..., σ_z Π_6 = −Π_6. We denote by A_{6(i−1)+j} = Π_i ⊗ Π_j, with i, j ∈ {1, ..., 6}, the 36 observables for the two-qubit system. The set of observables {A_i}_{i=1}^{36} is information-complete.
Generated states: We generate 100 full-rank (ε, d_B, ρ_0,k)-accurate states ρ^a_exp,k and 100 full-rank non-(ε, d_B, ρ_0,k)-accurate states ρ^n_exp,k by perturbing the target state ρ_0,k, with k ∈ {1, ..., 100}, via a perturbation depending on parameters λ ∈ (0, 1), η > 0 and a random Hermitian matrix H_k. We generate the random H_k ∈ B*(C⁴) in the following way: express H_k = ∑_{j=0}^{15} h_{j,k} Γ_j, where Γ_0 = 1_4 and {Γ_j}_{j=1}^{15} are generators of the Lie algebra su(4) satisfying Tr(Γ_j) = 0 and Tr(Γ_m Γ_j) = 2δ_{jm} for j, m ∈ {1, ..., 15}, and the coefficients {h_{j,k}}_{j=0}^{15} are random scalars drawn from the uniform distribution on (−1, 1). We set η = 0.1, with λ = 0.0001 for the accurate case and λ = 0.1 for the non-accurate case.
For the 100 target states ρ_0,k, the mean values and standard deviations of the number of measurements required by Algorithm IOS and Algorithm IAS for the reconstruction are (5.69, 0.5449) and (6.47, 0.6884), respectively. It is worth noting that, in a few cases, more measurements are required by Algorithm IOS than by Algorithm IAS, since Algorithm IOS does not always provide the optimal measurement sequence, being itself an approximation of Algorithm OS.
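The measurement set just described can be constructed in a few lines, and its information-(over)completeness verified directly (our sketch: the 36 vectorized projector products span the full 16-dimensional operator space):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

# Pi_1..Pi_6: the +1/-1 eigenprojectors (I +/- sigma)/2 of sigma_x, sigma_y, sigma_z
pis = []
for P in (X, Y, Z):
    pis += [(I2 + P) / 2, (I2 - P) / 2]

# The 36 two-qubit observables A_{6(i-1)+j} = Pi_i (x) Pi_j
obs = [np.kron(a, b) for a in pis for b in pis]
M = np.array([A.reshape(-1) for A in obs])
print(len(obs), np.linalg.matrix_rank(M))   # 36 16: information-overcomplete
```

Since 36 observables span only a 16-dimensional space, many sequences of 16 linearly independent observables exist, which is what makes the choice (and order) of the sequence, as well as the randomized control groups below, meaningful.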

Accurate ρ exp : Algorithm IOS vs Algorithm IAS vs Algorithm AV vs Control groups
Control groups: Since the set of measurements considered here is information-overcomplete, we generate 5 random measurement sequences for each accurate generated state ρ^a_exp,k; every sequence contains 16 linearly independent observables.
Numerical test: We apply the verification protocol (Algorithm VM) to the measurement sequences generated off-line by Algorithm IOS, Algorithm IAS and the randomized control groups, and run the adaptive protocol (Algorithm AV) with f_{ρ_0}(ρ) = d_B(ρ, ρ_0).
Remark 4. In the case of multiple measurements with the same index of merit, Algorithm IAS selects one measurement at random, while Algorithm IOS and Algorithm AV use the following rule, inspired by Algorithm IAS: select an observable at random among those which maximize Tr²(ρ_1 A⊥_j)/‖A⊥_j‖²_HS. This further optimization step is based on analytic formulas, so it is not computationally intensive. The same rule will be used in the next set of simulations as well.
The main results are summarized in Figure 4 and Table 1. The first diagram of Figure 4 shows the histogram of the number of measurements required for the verification of accuracy by Algorithm IOS, Algorithm IAS, Algorithm AV and the control groups. This diagram and Table 1 confirm the efficiency of our algorithms in the verification of accuracy. The remaining diagrams show histograms of the differences between the numbers of measurements required by the different algorithms. In this situation, Algorithm IOS exhibits an advantage with respect to Algorithm IAS, and the performance of Algorithm AV is almost equal to that of Algorithm IOS. These results are not surprising: when the state to be verified is indeed close to the target one, Algorithm IOS is expected to provide the best iteratively-built sequence. Nonetheless, the performance of Algorithm IAS is fairly close (one extra measurement operator is needed on average), and it has the advantage of avoiding iterated optimization procedures, as it relies only on analytic formulas.
Remark 5. It is worth noticing that the performance of Algorithm AV strongly depends on the choice of the function f_{ρ_0}. Here, we only consider the basic choice f_{ρ_0}(ρ) = d_B(ρ, ρ_0); the optimization of f_{ρ_0} will be the focus of future work.

Non-accurate ρ exp : Algorithm IOS vs Algorithm IAS vs Algorithm AV vs Control groups
Control groups: We generate 5 random measurement sequences for each non-accurate generated state ρ^n_exp,k; every sequence contains 16 linearly independent observables.
Numerical test: We apply the verification protocol (Algorithm VM) to the measurement sequences generated off-line by Algorithm IOS, Algorithm IAS and the randomized control groups, and also run the adaptive protocol (Algorithm AV) with f_{ρ_0}(ρ) = d_B(ρ, ρ_0). The main results are summarized in Figure 5 and Table 2. The first diagram of Figure 5 shows the histogram of the number of measurements required for the verification of non-accuracy by Algorithm IOS, Algorithm IAS, Algorithm AV and the control groups. This diagram and Table 2 confirm the efficiency of our algorithms in the verification of non-accuracy with respect to random sequences. The remaining diagrams show histograms of the differences between the numbers of measurements required by the different algorithms. We can observe that the performances are similar, with a slight advantage for the adaptive protocol, Algorithm AV. Other numerical experiments indicate that the difference in performance becomes more relevant as the needed number of measurements grows.

Conclusions
In this work we define and study quantum state verification, a key task to test the effectiveness of quantum state preparation procedures, quantum communication channels, quantum memories, and a variety of quantum control algorithms.
Assuming that i.i.d. copies of the system can be produced, the resulting state can be identified by tomographic techniques: sampled averages of a basis of observables are sufficient to determine an estimate of the state, and thus to decide whether it is compatible with given accuracy requirements. We propose improved strategies for selecting the observables to be measured, so that a decision on the accuracy of the preparation can be reached well before the full set of measurements is completed. The protocols rely on prior information about the target state, and either provide a full ordered list of observables to be measured, or adaptively decide the next observable based on the previously obtained ones. While our approach scales as a linear function of 1/ε² (by applying Proposition 10 of [3] to each measurement of the sequence in order to obtain an appropriate accuracy), all strategies obtaining 1/ε scaling rely on the ability to tune the measurements for the specific target. Here, on the other hand, we are limited to a fixed, finite set of general measurements, a situation motivated by typical experimentally available setups.
Numerical tests indicate that, for example, a fidelity of 0.95 can be tested on a two-qubit system with just 5 measurements of joint Pauli operators, whereas using randomized sequences requires at least 8. While the exact solution of the problem requires solving and comparing multiple optimization problems, we also propose an iterative, suboptimal algorithm whose solution can be computed analytically, based on a geometric approximation of the set of states compatible with given measurement outcomes. The adaptive strategy holds an advantage, especially when the needed number of measurements grows, albeit it requires a more involved implementation. Further work will address the use of optimized measurement sequences for fast tomography, the use of different distance functions in the adaptive strategies, and the application to real data from experimental systems of interest.

Fig. 1 :
Fig. 1: Diagrams corresponding to the quantum state verification criteria C1 and C2. The grey area represents S̃_i ∩ S̃_j ∩ S̃_k, i.e., the states compatible with the measurement data y_i, y_j and y_k.

Fig. 2 :
Fig. 2: Diagrams corresponding to the adaptive version of the quantum state verification criteria C1 and C2. The grey area represents S̃_i ∩ S̃_j ∩ S̃_k, i.e., the states compatible with the measurement data y_i, y_j and y_k.

Before measurements: Algorithm IOS vs Algorithm IAS

Algorithm IOS: We use the CVX-SDP mode to apply semidefinite programming, and obtain 100 measurement sequences M_k = [m_{k,j}]_{j≤16} for k ∈ {1, ..., 100}.
Algorithm IAS: We obtain 100 measurement sequences R_k = [r_{k,j}]_{j≤16} for k ∈ {1, ..., 100}.
Comparison: Based on the measurement sequences R_k generated by Algorithm IAS, we apply semidefinite programming (CVX-SDP mode) to compute β_{k,l} = max_{ρ ∈ ∩_{j∈[R_k]_l} S_j(ρ_0,k)} d_B(ρ, ρ_0,k), where [R_k]_l denotes the first l elements of R_k. The values β_{k,l} can be considered as an indicator of how well Algorithm IAS approximates Algorithm IOS. The upper diagram of Figure 3 shows error bars (mean value and standard deviation) of β_{k,l} − α_{k,l}, where the α_{k,l} are the corresponding quantities defined in Algorithm IOS; the lower diagram shows the number of measurements required by Algorithm IAS minus the number required by Algorithm IOS for reconstructing ρ_0,k. Taking machine precision into account, reconstruction of ρ_0,k means d_B(ρ, ρ_0,k) ≤ 10⁻⁶ for all ρ belonging to the compatible set.

Fig. 4 :
Fig. 4: The first histogram displays the distribution of the number of measurements required to verify the accuracy of ρ^a_exp,k for k = 1, ..., 100. The other three show the distribution of the difference between the lengths of the sequences of two algorithms for the same set of generated measurements: for example, if the displayed N(Alg. X) − N(Alg. Y) is negative, it indicates an advantage for Alg. X.

Fig. 5 :
Fig. 5: The first histogram displays the distribution of the number of measurements required to verify the non-accuracy of ρ^n_exp,k for k = 1, ..., 100. The other three show the distribution of the difference between the lengths of the sequences of two algorithms for the same set of generated measurements.