Verification of Quantum Computation: An Overview of Existing Approaches
Abstract
Quantum computers promise to efficiently solve not only problems believed to be intractable for classical computers, but also problems for which verifying the solution is also considered intractable. This raises the question of how one can check whether quantum computers are indeed producing correct results. This task, known as quantum verification, has been highlighted as a significant challenge on the road to scalable quantum computing technology. We review the most significant approaches to quantum verification and compare them in terms of structure, complexity and required resources. We also comment on the use of cryptographic techniques which, for many of the presented protocols, has proven extremely useful in performing verification. Finally, we discuss issues related to fault tolerance, experimental implementations and the outlook for future protocols.
Keywords
Verification of quantum computation · Delegated quantum computation · Quantum cryptography · Blind quantum computing

1 Introduction
The first researcher who formalised the above “paradox” as a complexity-theoretic question was Gottesman, in a 2004 conference [10]. It was then promoted, in 2007, as a complexity challenge by Aaronson, who asked: “If a quantum computer can efficiently solve a problem, can it also efficiently convince an observer that the solution is correct? More formally, does every language in the class of quantumly tractable problems (\(\mathsf {BQP}\)) admit an interactive proof where the prover is in \(\mathsf {BQP}\) and the verifier is in the class of classically tractable problems (\(\mathsf {BPP}\))?” [10]. Vazirani then emphasized the importance of this question, not only from the perspective of complexity theory, but also from a philosophical point of view [11]. In 2007, he raised the question of whether quantum mechanics is a falsifiable theory, and suggested that a computational approach could answer it. This perspective was explored in depth by Aharonov and Vazirani in [12]. They argued that although many of the predictions of quantum mechanics have been experimentally verified to remarkable precision, all of them involved systems of low complexity; in other words, they involved few particles or few degrees of freedom for the quantum mechanical system. But the same technique of “predict and verify” would quickly become infeasible for systems of even a few hundred interacting particles, due to the exponential overhead of classically simulating quantum systems. And so what if, they ask, the predictions of quantum mechanics start to differ significantly from the real world in the high-complexity regime? How would we be able to check this? Thus, the fundamental question is whether there exists a verification procedure for quantum mechanical predictions which is efficient for arbitrarily large systems. If a quantum experiment solves a problem which is proven to be intractable for classical computers, how can one verify the outcome of the experiment?
In trying to answer this question we return to complexity theory. The primary complexity class that we are interested in is \(\mathsf {BQP}\), which, as mentioned above, is the class of problems that can be solved efficiently by a quantum computer. The analogous class for classical computers with randomness is denoted \(\mathsf {BPP}\). Finally, concerning verification, we have the class \(\mathsf {MA}\), which stands for Merlin-Arthur. This consists of problems whose solutions can be verified by a \(\mathsf {BPP}\) machine when given a proof string, called a witness.^{1} \(\mathsf {BPP}\) is contained in \(\mathsf {BQP}\), since any problem which can be solved efficiently on a classical computer can also be solved efficiently on a quantum computer. Additionally, \(\mathsf {BPP}\) is contained in \(\mathsf {MA}\), since any \(\mathsf {BPP}\) problem admits a trivial empty witness. Both of these containments are believed to be strict, though this is still unproven.
What this tells us is that, very likely, there do not exist witnesses certifying the outcomes of general quantum experiments.^{2} We therefore turn to a generalization of \(\mathsf {MA}\) known as an interactive-proof system. This consists of two entities: a verifier and a prover. The verifier is a \(\mathsf {BPP}\) machine, whereas the prover has unbounded computational power. Given a problem for which the verifier wants to check a reported solution, the verifier and the prover interact for a number of rounds which is polynomial in the size of the input to the problem. At the end of this interaction, the verifier should accept a valid solution with high probability and reject, with high probability, otherwise. The class of problems which admit such a protocol is denoted \(\mathsf {IP}\).^{3} In contrast to \(\mathsf {MA}\), instead of having a single proof string for each problem, one has a transcript of back-and-forth communication between the verifier and the prover.
If we are willing to allow our notion of verification to include such interactive protocols, then one would like to know whether \(\mathsf {BQP}\) is contained in \(\mathsf {IP}\). Unlike the relation between \(\mathsf {BQP}\) and \(\mathsf {MA}\), it is, in fact, the case that \(\mathsf {BQP} \subseteq \textsf {IP}\), which means that every problem which can be efficiently solved by a quantum computer admits an interactive-proof system. One would be tempted to think that this solves the question of verification; however, the situation is more subtle. Recall that in \(\mathsf {IP}\), the prover is computationally unbounded, whereas for our purposes we would require the prover to be restricted to \(\mathsf {BQP}\) computations. Hence, the question that we would like answered and, arguably, the main open problem concerning quantum verification is the following:
Problem 1 (Verifiability of \(\mathsf {BQP}\) computations) Does every problem in \(\mathsf {BQP}\) admit an interactive-proof system in which the prover is restricted to \(\mathsf {BQP}\) computations?
The primary technique that has been employed in most, though not all, of these settings to achieve verification is known as blindness. This entails delegating a computation to the provers in such a way that they cannot distinguish this computation from any other of the same size, unconditionally.^{4} Intuitively, verification then follows by having most of these computations be tests or traps which the verifier can check. If the provers attempt to deviate, they will have a high chance of triggering these traps, prompting the verifier to reject.
 1.
Single-prover prepare-and-send. These are protocols in which the verifier has the ability to prepare quantum states and send them to the prover. They are covered in Section 2.
 2.
Single-prover receive-and-measure. In this case, the verifier receives quantum states from the prover and has the ability to measure them. These protocols are presented in Section 3.
 3.
Multi-prover entanglement-based. In this case, the verifier is fully classical; however, it interacts with more than one prover. The provers are not allowed to communicate during the protocol. Section 4 is devoted to these protocols.
After reviewing the major approaches to verification, in Section 5, we address a number of related topics. In particular, while all of the protocols from Sections 2–4 are concerned with the verification of general \(\mathsf {BQP}\) computations, in Section 5.1 we mention sub-universal protocols, designed to verify only a particular subclass of quantum computations. Next, in Section 5.2 we discuss an important practical aspect concerning verification, which is fault tolerance. We comment on the possibility of making protocols resistant to noise which could affect any of the involved quantum devices. This is an important consideration for any realistic implementation of a verification protocol. Finally, in Section 5.3 we outline some of the existing experimental implementations of these protocols.
Throughout the review, we assume familiarity with the basics of quantum information theory and some elements of complexity theory. However, we provide a brief overview of these topics, as well as other notions used in this review (such as measurement-based quantum computing), in the appendix, Section A. Note also that we will be referencing complexity classes such as \(\mathsf {BQP}\), \(\mathsf {QMA}\), \(\mathsf {QPIP}\) and \(\mathsf {MIP^{*}}\). Definitions for all of these are provided in Section A of the appendix. We begin with a short overview of blind quantum computing.
1.1 Blind Quantum Computing
The concept of blind computing is highly relevant to quantum verification. Here, we simply give a succinct outline of the subject. For more details, see the review of blind quantum computing protocols by Fitzsimons [34], as well as [35, 36, 37, 38, 39]. Note that, while the review of Fitzsimons covers all of the material presented in this section (and more), we restate the main ideas so that our review is self-consistent, and also in order to establish some of the notation used throughout the rest of the paper.
Blindness is related to the idea of computing on encrypted data [40]. Suppose a client has some input x and would like to compute a function f of that input; however, evaluating the function directly is computationally infeasible for the client. Luckily, the client has access to a server with the ability to evaluate \(f(x)\). The problem is that the client does not trust the server with the input x, since it might involve private or secret information (e.g. medical records, military secrets, proprietary information etc.). The client does, however, have the ability to encrypt x, using some encryption procedure \(\mathcal {E}\), to a ciphertext \(y \leftarrow \mathcal {E}(x)\). As long as this encryption procedure hides x sufficiently well, the client can send y to the server and receive in return (potentially after some interaction with the server) a string z which decrypts to \(f(x)\). In other words, \(f(x) \leftarrow \mathcal {D}(z)\), where \(\mathcal {D}\) is a decryption procedure that can be performed efficiently by the client.^{6} The encryption procedure can, roughly, provide two types of security: computational or information-theoretic. Computational security means that the protocol is secure as long as certain computational assumptions are true (for instance, that the server is unable to invert one-way functions). Information-theoretic security (sometimes referred to as unconditional security), on the other hand, guarantees that the protocol is secure even against a server of unbounded computational power. See [45] for more details on these topics.
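As a toy illustration of computing on encrypted data (our own sketch, not a protocol from the literature), consider a client who one-time pads a bit string and asks the server to XOR a public constant into the ciphertext. Since XOR commutes with the pad, the client's decryption recovers \(f(x)\), and the pad gives information-theoretic security. The function names are, of course, illustrative:

```python
import secrets

def encrypt(x: int, key: int) -> int:
    # One-time pad: information-theoretically hides x from the server
    return x ^ key

def server_evaluate(y: int, c: int) -> int:
    # The server evaluates f(y) = y XOR c on the ciphertext, learning nothing about x
    return y ^ c

def decrypt(z: int, key: int) -> int:
    # XOR is its own inverse, so removing the pad recovers f(x)
    return z ^ key

x, c = 0b1011, 0b0110          # client's secret input and the public constant
key = secrets.randbits(4)      # fresh uniformly random pad
z = server_evaluate(encrypt(x, key), c)
assert decrypt(z, key) == x ^ c  # client obtains f(x) = x XOR c
```

This only works because f is linear over the pad; the protocols below face the much harder task of delegating arbitrary quantum circuits.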
In the quantum setting, the situation is similar to that of \(\mathsf {QPIP}\) protocols: the client is restricted to \(\mathsf {BPP}\) computations, but has some limited quantum capabilities, whereas the server is a \(\mathsf {BQP}\) machine. Thus, the client would like to delegate \(\mathsf {BQP}\) functions to the server, while keeping the input and the output hidden. The first solution to this problem was provided by Childs [35]. His protocol achieves information-theoretic security but also requires the client and the server to exchange quantum messages for a number of rounds that is proportional to the size of the computation. This was later improved in a protocol by Broadbent et al. [36], known as universal blind quantum computing (UBQC), which maintained information-theoretic security but reduced the quantum communication to a single message from the client to the server. UBQC still requires the client and the server to have a total communication which is proportional to the size of the computation; however, apart from the first quantum message, the interaction is purely classical. Let us now state the definition of perfect, or information-theoretic, blindness from [36]:
Definition 1 (Blindness) Let P be a delegated quantum computation protocol with input X, and let \(L(X)\) be any function of the input (for instance, its length). We say that P is blind, leaking at most \(L(X)\), if the following two conditions hold:
 1.
The distribution of the classical information obtained by the server in P is independent of X.
 2.
Given the distribution of classical information described in 1, the state of the quantum system obtained by the server in P is fixed and independent of X.
The definition is essentially saying that the server’s “view” of the protocol should be independent of the input, when given the length of the input. This view consists, on the one hand, of the classical information he receives, which is independent of X, given \(L(X)\). On the other hand, for any fixed choice of this classical information, his quantum state should also be independent of X, given \(L(X)\). Note that the definition can be extended to the case of multiple servers as well. To provide intuition for how a protocol can achieve blindness, we will briefly recap the main ideas from [35, 36]. We start by considering the quantum one-time pad.
Quantum One-Time Pad
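The quantum one-time pad encrypts a qubit \(\rho \) as \(\mathsf {X}^{a}\mathsf {Z}^{b}\rho \mathsf {Z}^{b}\mathsf {X}^{a}\), for uniformly random key bits \((a, b)\); averaged over the four keys, the ciphertext is the maximally mixed state, so it reveals nothing about \(\rho \). A minimal numerical sketch of both facts (our own illustration):

```python
import numpy as np

# Single-qubit Pauli operators
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def qotp_encrypt(rho: np.ndarray, a: int, b: int) -> np.ndarray:
    # Apply X^a Z^b; with (a, b) uniformly random this is the quantum one-time pad
    P = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
    return P @ rho @ P.conj().T

def qotp_decrypt(rho: np.ndarray, a: int, b: int) -> np.ndarray:
    # Conjugate by the inverse of the same keyed Pauli
    P = np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
    return P.conj().T @ rho @ P

plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|

# Decryption with the correct key recovers the state exactly
assert np.allclose(qotp_decrypt(qotp_encrypt(plus, 1, 1), 1, 1), plus)

# Averaged over all four keys, the ciphertext is maximally mixed: no leakage
avg = sum(qotp_encrypt(plus, a, b) for a in (0, 1) for b in (0, 1)) / 4
assert np.allclose(avg, I / 2)
```

The same construction works qubit-by-qubit for n-qubit states, which is what Childs' protocol exploits.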
Childs’ Protocol for Blind Computation
Now suppose Alice has some n-qubit state \(\rho \) and wants a quantum circuit \(\mathcal {C}\) to be applied to this state and the output to be measured in the computational basis. However, she only has the ability to store n qubits, prepare qubits in the \({\left \vert {0}\right \rangle }\) state, swap any two qubits, or apply a Pauli \(\mathsf {X}\) or \(\mathsf {Z}\) to any of the n qubits. So, in general, she will not be able to apply a general quantum circuit \(\mathcal {C}\), or perform measurements. Bob, on the other hand, does not have these limitations, as he is a \(\mathsf {BQP}\) machine and thus able to perform universal quantum computations. How can Alice delegate the application of \(\mathcal {C}\) to her state without revealing any information about it, apart from its size, to Bob? The answer is provided by Childs’ protocol [35]. Before presenting the protocol, recall that any quantum circuit, \(\mathcal {C}\), can be expressed as a combination of Clifford operations and \(\mathsf {T}\) gates. Additionally, Clifford operations normalise Pauli gates. All of these notions are defined in the appendix, Section A.
While Childs’ protocol provides an elegant solution to the problem of quantum computing on encrypted data, it has significant requirements in terms of Alice’s quantum capabilities. If Alice’s input is fully classical, i.e. some state \({\left \vert {x}\right \rangle }\), where \(x \in \{0,1\}^{n}\), then Alice would only require a constant-size quantum memory. Even so, the protocol requires Alice and Bob to exchange multiple quantum messages. This, however, is not the case with UBQC, which limits the quantum communication to one quantum message sent from Alice to Bob at the beginning of the protocol. Let us now briefly state the main ideas of that protocol.
Universal Blind Quantum Computation (UBQC)
In UBQC the objective is to not only hide the input (and output) from Bob, but also the circuit which will act on that input^{9} [36]. As in the previous case, Alice would like to delegate to Bob the application of some circuit \(\mathcal {C}\) on her input (which, for simplicity, we will assume is classical). This time, however, we view \(\mathcal {C}\) as an MBQC computation.^{10} By considering some universal graph state, \({\left \vert {G}\right \rangle }\), such as the brickwork state (see Fig. 17), Alice can convert \(\mathcal {C}\) into a description of \({\left \vert {G}\right \rangle }\) (the graph G) along with the appropriate measurement angles for the qubits in the graph state. By the property of the universal graph state, the graph G would be the same for all circuits \(\mathcal {C^{\prime }}\) having the same number of gates as \(\mathcal {C}\). Hence, if she were to send this description to Bob, it would not reveal to him the circuit \(\mathcal {C}\), merely an upper bound on its size. It is, in fact, the measurement angles and the ordering of the measurements (known as flow) that uniquely characterise \(\mathcal {C}\) [46]. But the measurement angles are chosen assuming all qubits in the graph state were initially prepared in the \({\left \vert {+}\right \rangle }\) state. Since these are \(\mathsf {XY}\)-plane measurements, as explained in Section A, the probabilities for the two possible outcomes depend only on the difference between the measurement angle and the preparation angle of the state, which is 0 in this case.^{11} Suppose instead that each qubit, indexed i, in the graph state were prepared in the state \(\left \vert {+_{\theta _{i}}}\right \rangle \). Then, if the original measurement angle for qubit i was \(\phi _{i}\), to preserve the relative angles, the new value would be \(\phi _{i} + \theta _{i}\).
If the values of \(\theta _{i}\) are chosen at random, then they effectively act as a one-time pad for the original measurement angles \(\phi _{i}\). This means that if Bob does not know the preparation angles of the qubits and is instructed to measure them at the updated angles \(\phi _{i} + \theta _{i}\), these angles will be indistinguishable from random to him, irrespective of the values of \(\phi _{i}\). He would, however, learn the measurement outcomes of the MBQC computation. But there is a simple way to hide this information as well. One can flip the probabilities of the measurement outcomes for a particular state by performing a \(\pi \) rotation around the \(\mathsf {Z}\) axis. In other words, the updated measurement angles will be \(\delta _{i} = \phi _{i} + \theta _{i} + r_{i}\pi \), where \(r_{i}\) is sampled randomly from \(\{0, 1\}\).
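This one-time padding of the angles can be checked directly. In the sketch below (our own illustration), angles are represented as integer multiples of \(\pi /4\), i.e. elements of \(\mathbb {Z}_{8}\); for any computation angle \(\phi \), the reported angle \(\delta \) is uniformly distributed over the eight possible values when \(\theta \) and r are chosen uniformly at random:

```python
from collections import Counter

# Angles as integer multiples of π/4, i.e. elements of Z_8
def padded_angle(phi: int, theta: int, r: int) -> int:
    # δ = φ + θ + rπ, with all arithmetic mod 2π (mod 8 in units of π/4)
    return (phi + theta + 4 * r) % 8

def delta_distribution(phi: int) -> Counter:
    # Distribution of δ over Alice's uniformly random secrets (θ, r)
    return Counter(padded_angle(phi, theta, r)
                   for theta in range(8) for r in (0, 1))

# For any computation angle φ, the 16 equally likely (θ, r) pairs cover the
# 8 angles uniformly, so δ carries no information about φ.
assert delta_distribution(0) == delta_distribution(3) == Counter({d: 2 for d in range(8)})
```

Since the distribution of \(\delta \) is the same for every \(\phi \), the classical messages Bob sees satisfy condition 1 of Definition 1.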

(1) Alice chooses an input x and a quantum computation \(\mathcal {C}\) that she would like Bob to perform on \({\left \vert {x}\right \rangle }\).

(2) She converts x and \(\mathcal {C}\) into a pair \((G, \{\phi _{i}\}_{i})\), where \({\left \vert {G}\right \rangle }\) is an N-qubit universal graph state (with an established ordering for measuring the qubits), \(N = O(|\mathcal {C}|)\) and \(\{\phi _{i}\}_{i}\) is the set of computation angles allowing for the MBQC computation of \(\mathcal {C}\left \vert {x}\right \rangle \).

(3) She picks, uniformly at random, values \(\theta _{i}\), with i going from 1 to N, from the set \(\{0, \pi /4, 2\pi /4, ..., 7\pi /4\}\) as well as values \(r_{i}\) from the set \(\{0, 1\}\).

(4) She then prepares the states \({\left \vert {+_{\theta _{i}}}\right \rangle }\) and sends them to Bob, who is instructed to entangle them, using \(\mathsf {CZ}\) operations, according to the graph structure G.

(5) Alice then asks Bob to measure the qubits at the angles \(\delta _{i} = \phi ^{\prime }_{i} + \theta _{i} + r_{i} \pi \) and return the measurement outcomes to her. Here, \(\phi ^{\prime }_{i}\) is an updated version of \(\phi _{i}\) that incorporates corrections resulting from previous measurements, as in the description of MBQC given in Section A.

(6) After all the measurements have been performed, Alice undoes the \(r_{i}\) one-time padding of the measurement outcomes, thus recovering the true outcome of the computation.
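As a minimal numerical illustration of steps (3)–(6) (our own sketch, restricted to a single unentangled qubit with no flow corrections), one can check that Alice's unpadding in step (6) recovers the deterministic MBQC outcome regardless of her random choices of \(\theta \) and r:

```python
import numpy as np

def plus_state(theta):
    # |+_θ> = (|0> + e^{iθ}|1>)/√2
    return np.array([1, np.exp(1j * theta)]) / np.sqrt(2)

def measure_xy(state, delta, rng):
    # Measure in the {|+_δ>, |-_δ>} basis; return the outcome bit
    p0 = abs(np.vdot(plus_state(delta), state)) ** 2
    return 0 if rng.random() < p0 else 1

rng = np.random.default_rng(0)
phi = 0.0  # computation angle: measuring |+> at angle 0 yields outcome 0
for _ in range(100):
    theta = rng.integers(8) * np.pi / 4  # Alice's secret preparation angle
    r = int(rng.integers(2))             # Alice's secret outcome flip
    delta = phi + theta + r * np.pi      # the angle Alice asks Bob to use
    b = measure_xy(plus_state(theta), delta, rng)
    assert b ^ r == 0  # undoing the r_i pad recovers the true outcome
```

When \(r = 1\), Bob's raw outcome is deterministically flipped, but since only Alice knows r, the reported bit looks uniformly random to him.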
We can see that, as long as Bob does not know the values of the \(\theta _{i}\) and \(r_{i}\) variables, the measurements he is asked to perform, as well as their outcomes, will appear totally random to him. The reason why Bob cannot learn the values of \(\theta _{i}\) and \(r_{i}\) from the qubits prepared by Alice is the limitation, in quantum mechanics, that one cannot distinguish between non-orthogonal states. In fact, a subsequent paper by Dunjko and Kashefi shows that Alice can utilize any two non-overlapping, non-orthogonal states in order to perform UBQC [48].
2 Prepare-and-Send Protocols
We start by reviewing \(\mathsf {QPIP}\) protocols in which the only quantum capability of the verifier is to prepare and send constant-size quantum states to the prover (no measurement). The verifier must use this capability in order to delegate the application of some \(\mathsf {BQP}\) circuit, \(\mathcal {C}\), on an input \({\left \vert {\psi }\right \rangle }\).^{12} Through interaction with the prover, the verifier will attempt to certify that the correct circuit was indeed applied on her input, with high probability, aborting the protocol otherwise.
In the context of prepare-and-send protocols, it is useful to provide more refined notions of completeness and soundness than the ones in the definition of a \(\mathsf {QPIP}\) protocol. This is because, apart from knowing that the verifier wishes to delegate a \(\mathsf {BQP}\) computation to the prover, we also know that it prepares a particular quantum state and sends it to the prover to act on it with some unitary operation (corresponding to the quantum circuit associated with the \(\mathsf {BQP}\) computation). This extra information allows us to define \(\delta \)-correctness and \(\epsilon \)-verifiability. We start with the latter:
Definition 2 (\(\epsilon \)-verifiability)
We now define \(\delta \)-correctness:
Definition 3 (\(\delta \)-correctness)
This definition says that when the prover behaves honestly, the verifier obtains the correct outcome, with high probability, for any possible choice of its secret parameters.
If a prepare-and-send protocol has both \(\delta \)-correctness and \(\epsilon \)-verifiability, for some \(\delta > 0\), \(\epsilon < 1\), it will also have completeness \(\delta (1/2 + 1/poly(n))\) and soundness \(\epsilon \) as a \(\mathsf {QPIP}\) protocol, where n is the size of the input. The reason for the asymmetry in completeness and soundness is that in the definition of \(\delta \)-correctness we require that the output quantum state of the protocol is \(\delta \)-close to the output quantum state of the desired computation. But the computation outcome is dictated by a measurement of this state, which succeeds with probability at least \(1/2 + 1/poly(n)\), from the definition of \(\mathsf {BQP}\). Combining these facts leads to \(\delta (1/2 + 1/poly(n))\) completeness. It follows that for this to be a valid \(\mathsf {QPIP}\) protocol it must be that \(\delta (1/2 + 1/poly(n)) - \epsilon \geq 1/poly(n)\), for all inputs. For simplicity, we will instead require \(\delta /2 - \epsilon \geq 1/poly(n)\), which implies the previous inequality. As we will see, for all prepare-and-send protocols \(\delta = 1\). This condition is easy to achieve by simply designing the protocol so that the honest behaviour of the prover leads to the correct unitary being applied to the verifier’s quantum state. Therefore, the main challenge with these protocols will be to show that \(\epsilon \leq 1/2 - 1/poly(n)\).
2.1 Quantum Authentication-Based Verification
 1. \(\delta \)-correctness. Intuitively, this says that if the state sent through the channel was not tampered with, then the receiver should accept with high probability (at least \(\delta \)), irrespective of the keys used. More formally, for \(0 \leq \delta \leq 1\), let:
$$P_{correct} = {\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert} \otimes {\left\vert{acc}\right\rangle}{\left\langle{acc}\right\vert} $$
be the projector onto the correct state \({\left \vert {\psi }\right \rangle }\) and onto acceptance for the flag state. Then, it must be the case that, for all keys k:
$$Tr \left( P_{correct} \, Dec_{k}(Enc_{k}({\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert} \otimes {\left\vert{acc}\right\rangle}{\left\langle{acc}\right\vert} )) \right) \geq \delta $$
 2. \(\epsilon \)-security. This property states that, for any deviation the eavesdropper applies to the sent state, the probability that the resulting state is far from ideal and the receiver accepts is small. Formally, for \(0 \leq \epsilon \leq 1\), let:
$$P_{incorrect} = (I - {\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert}) \otimes {\left\vert{acc}\right\rangle}{\left\langle{acc}\right\vert} $$
be the projector onto the orthogonal complement of the correct state \({\left \vert {\psi }\right \rangle }\), and onto acceptance, for the flag state. Then, it must be the case that, for any CPTP action \(\mathcal {E}\) of the eavesdropper:
$$Tr \left( P_{incorrect} {\sum}_{k} p(k) Dec_{k} (\mathcal{E}(Enc_{k}({\left\vert{\psi}\right\rangle}{\left\langle{\psi}\right\vert} \otimes {\left\vert{acc}\right\rangle}{\left\langle{acc}\right\vert} ))) \right) \leq \epsilon $$
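To make these two properties concrete, here is a toy numerical check (our own illustration) using a bare Pauli one-time pad on a data qubit and a one-qubit flag. It verifies \(\delta \)-correctness for every key and shows that a bit flip on the flag is always detected. Note that a bare pad is not a full QAS: an attack on the data qubit would go unnoticed, which is precisely why the schemes below combine the pad with an encoding:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [I2, X, Z, X @ Z]  # X @ Z equals Y up to a phase

def pad(key):
    # Enc_k: one keyed Pauli on the data qubit, one on the flag qubit
    return np.kron(PAULIS[key[0]], PAULIS[key[1]])

psi = np.array([np.cos(0.3), np.sin(0.3)], dtype=complex)  # data |ψ>
acc = np.array([1, 0], dtype=complex)                      # flag |acc> = |0>
ideal = np.kron(psi, acc)
rho = np.outer(ideal, ideal.conj())
P_correct = np.outer(ideal, ideal.conj())  # |ψ><ψ| ⊗ |acc><acc|

keys = [(i, j) for i in range(4) for j in range(4)]

# δ-correctness: with no tampering, Dec_k(Enc_k(ρ)) = ρ for every key, so δ = 1
for k in keys:
    P = pad(k)
    out = P.conj().T @ (P @ rho @ P.conj().T) @ P
    assert np.isclose(np.trace(P_correct @ out).real, 1.0)

# A bit flip on the flag is always detected: averaged over uniformly random
# keys, the decoded flag is orthogonal to |acc>, so the receiver rejects
attack = np.kron(I2, X)
avg = sum(pad(k).conj().T @ attack @ (pad(k) @ rho @ pad(k).conj().T)
          @ attack.conj().T @ pad(k) for k in keys) / len(keys)
acc_proj = np.kron(I2, np.outer(acc, acc.conj()))
assert np.isclose(np.trace(acc_proj @ avg).real, 0.0)
```

The Clifford and polynomial-code schemes reviewed next achieve the stronger guarantee that any deviation on any qubit is detected with high probability.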
Clifford-QAS VQC
The first protocol, named Clifford-QAS-based Verifiable Quantum Computing (Clifford-QAS VQC), is based on a QAS which uses Clifford operations in order to perform the encoding procedure. Strictly speaking, this protocol is not a prepare-and-send protocol, since, as we will see, it involves the verifier performing measurements as well. However, it is a precursor to the second protocol from [25, 26], which is a prepare-and-send protocol; hence why we review the Clifford-QAS VQC protocol here.

(1) The sender performs the encoding procedure \(Enc_{k}\). This consists of applying the Clifford operation \(C_{k}\) to the state \({\left \vert {\psi }\right \rangle }{\left \vert {acc}\right \rangle }\).

(2) The state is sent through the quantum channel.

(3) The receiver applies the decoding procedure \(Dec_{k}\) which consists of applying \(C_{k}^{\dagger }\) to the received state.

(4) The receiver measures the flag subsystem and accepts if it is in the \({\left \vert {acc}\right \rangle }\) state.
We can see that this protocol has correctness \(\delta = 1\), since the sender and receiver’s operations are exact inverses of each other and, when there is no intervention from the eavesdropper, they will perfectly cancel out. It is also not too difficult to show that the protocol achieves security \(\epsilon = 2^{-m}\). We will include a sketch proof of this result, as all other proofs of security for prepare-and-send protocols rely on similar ideas. Aharonov et al. start by using the following lemma:
Lemma 1 (Clifford twirl)
As mentioned, in all prepare-and-send protocols we assume that the verifier will prepare some state \({\left \vert {\psi }\right \rangle }\) on which it wants to apply a quantum circuit denoted \(\mathcal {C}\). Since we are assuming that the verifier has a constant-size quantum device, the state \({\left \vert {\psi }\right \rangle }\) will be a product state, i.e. \({\left \vert {\psi }\right \rangle } = {\left \vert {\psi _{1}}\right \rangle } \otimes {\left \vert {\psi _{2}}\right \rangle } \otimes ... \otimes {\left \vert {\psi _{n}}\right \rangle }\). For simplicity, assume each \({\left \vert {\psi _{i}}\right \rangle }\) is one qubit, though any constant number of qubits is allowed. In Clifford-QAS VQC the verifier will use the prover as an untrusted quantum storage device. Specifically, each \({\left \vert {\psi _{i}}\right \rangle }\), from \({\left \vert {\psi }\right \rangle }\), will be paired with a constant-size flag system in the accept state, \({\left \vert {acc}\right \rangle }\), resulting in a block of the form \({\left \vert {block_{i}}\right \rangle } = {\left \vert {\psi _{i}}\right \rangle }{\left \vert {acc}\right \rangle }\). Each block will be encoded by having a random Clifford operation applied on top of it. The verifier prepares these blocks, one at a time, for all \(i \in \{1,... n\}\), and sends them to the prover. The prover is then asked to return pairs of blocks to the verifier so that she may apply gates from \(\mathcal {C}\) on them (after undoing the Clifford operations). The verifier then applies new random Clifford operations on the blocks and sends them back to the prover. The process continues until all gates in \(\mathcal {C}\) have been applied.

(1) Suppose the input state that the verifier intends to prepare is \({\left \vert {\psi }\right \rangle } = {\left \vert {\psi _{1}}\right \rangle } \otimes {\left \vert {\psi _{2}}\right \rangle } \otimes ... \otimes {\left \vert {\psi _{n}}\right \rangle }\), where each \(\left \vert {\psi _{i}}\right \rangle \) is a one-qubit state.^{19} Also let \(\mathcal {C}\) be the quantum circuit that the verifier wishes to apply on \(\left \vert {\psi }\right \rangle \). The verifier prepares (one block at a time) the state \(\left \vert {\psi }\right \rangle \left \vert {flag}\right \rangle = \left \vert {block_{1}}\right \rangle \otimes \left \vert {block_{2}}\right \rangle \otimes ... \otimes \left \vert {block_{n}}\right \rangle \), where \(\left \vert {block_{i}}\right \rangle = \left \vert {\psi _{i}}\right \rangle \left \vert {acc}\right \rangle \) and each \(\left \vert {acc}\right \rangle \) state consists of a constant number m of qubits. Additionally, let the size of each block be \(t = m + 1\).

(2) The verifier applies a random Clifford operation, from the set \(\mathfrak {C}_{t}\) on each block and sends it to the prover.

(3) The verifier requests a pair of blocks, \(({\left \vert {block_{i}}\right \rangle }, {\left \vert {block_{j}}\right \rangle })\), from the prover, in order to apply a gate from \(\mathcal {C}\) on the corresponding qubits, \((\left \vert {\psi _{i}}\right \rangle , \left \vert {\psi _{j}}\right \rangle )\). Once the blocks have been received, the verifier undoes the random Clifford operations and measures the flag registers, aborting if these are not in the \(\left \vert {acc}\right \rangle \) state. Otherwise, the verifier performs the gate from \(\mathcal {C}\), applies new random Clifford operations on each block and sends them back to the prover. This step repeats until all gates in \(\mathcal {C}\) have been performed.

(4) Once all gates have been performed, the verifier requests all the blocks (one by one) in order to measure the output. As in the previous step, the verifier will undo the Clifford operations first and measure the flag registers, aborting if any of them are not in the \(\left \vert {acc}\right \rangle \) state.
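A toy simulation of the encode/decode round trip in steps (2)–(4) (our own sketch; it samples a haphazard, not uniformly random, Clifford and checks only the honest case) illustrates why correctness is \(\delta = 1\):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def embed(gate, first_wire, n):
    # Tensor a gate on adjacent wires (starting at first_wire) into n qubits
    k = int(np.log2(gate.shape[0]))
    return np.kron(np.kron(np.eye(2 ** first_wire), gate),
                   np.eye(2 ** (n - first_wire - k)))

def random_clifford(n, rng, depth=20):
    # Sample an n-qubit Clifford as a random circuit over the
    # generators {H, S, CZ} of the Clifford group (not uniform, but Clifford)
    U = np.eye(2 ** n, dtype=complex)
    for _ in range(depth):
        g = int(rng.integers(3))
        if g == 2 and n > 1:
            U = embed(CZ, int(rng.integers(n - 1)), n) @ U
        else:
            U = embed(H if g == 0 else S, int(rng.integers(n)), n) @ U
    return U

rng = np.random.default_rng(1)
m = 2                                      # flag qubits per block
psi = np.array([0.6, 0.8], dtype=complex)  # one-qubit data state
acc = np.eye(2 ** m)[0]                    # |acc> = |0...0>
block = np.kron(psi, acc)                  # |block> = |ψ>|acc>

C = random_clifford(m + 1, rng)
decoded = C.conj().T @ (C @ block)  # honest prover: the Cliffords cancel

# The flag register is found in |acc> with certainty, so correctness δ = 1
acc_proj = np.kron(np.eye(2), np.outer(acc, acc))
assert np.isclose(np.vdot(decoded, acc_proj @ decoded).real, 1.0)
```

The security argument, by contrast, requires the Clifford twirl lemma: averaging over uniformly random Cliffords turns any deviation into a random Pauli, which disturbs the flag with probability \(1 - 2^{-m}\).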
We can see that the security of this protocol reduces to the security of the Clifford QAS. Moreover, it is also clear that if the prover behaves honestly, then the verifier will obtain the correct output state exactly. Hence:
Theorem 1
For a fixed constant \(m > 0\), Clifford-QAS VQC is a prepare-and-send \(\mathsf {QPIP}\) protocol having correctness \(\delta = 1\) and verifiability \(\epsilon = 2^{-m}\).
Poly-QAS VQC
The signed polynomial CSS code can be used to create a simple authentication scheme having security \(\epsilon = 2^{-d}\). This works by having the sender encode the state \({\left \vert {{\Psi }_{in}}\right \rangle } = {\left \vert {\psi }\right \rangle }{\left \vert {0}\right \rangle }^{\otimes t - 1}\), where \({\left \vert {\psi }\right \rangle }\) is a qudit to be authenticated, in the signed code and then one-time padding the encoded state. Note that the \({\left \vert {0}\right \rangle }^{\otimes t - 1}\) part of the state is acting as a flag system. We are assuming that the sender and the receiver share both the sign key of the code and the key for the one-time padding. The one-time-padded state is then sent over the insecure channel. The receiver undoes the pad and applies the inverse of the encoding operation. It then measures the last \(t - 1\) qudits, accepting if and only if they are all in the \({\left \vert {0}\right \rangle }\) state. Proving security is similar to the Clifford QAS and relies on two results:
Lemma 2 (Pauli twirl)
This result is identical to the Clifford twirl lemma, except the Clifford operations are replaced with Pauli operators.^{20} The result is also valid for qubits.
Lemma 3 (Signed polynomial code security)
The second aspect is that, as mentioned, the signed polynomial code is transversal for Clifford operations. However, in order to apply non-Clifford operations it is necessary to measure encoded states together with so-called magic states (which will also be encoded). This manner of performing gates is known as gate teleportation [54]. The target state, on which we want to apply a non-Clifford operation, and the magic state are first entangled using a Clifford operation, and then the magic state is measured in the computational basis. The effect of the measurement is to have a non-Clifford operation applied on the target state, along with Pauli errors which depend on the measurement outcome. For the non-Clifford operations, Aharonov et al. use Toffoli gates.^{21}
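Gate teleportation is easiest to see in the single-qubit case. The sketch below (our own illustration, using the standard T-gate gadget on qubits rather than the qudit Toffoli gadget of the protocol) entangles the data qubit with a magic state \(\mathsf {T}{\left \vert {+}\right \rangle }\) via a CNOT, measures the magic qubit, and applies an outcome-dependent Clifford correction:

```python
import numpy as np

w = np.exp(1j * np.pi / 4)
T = np.diag([1, w]).astype(complex)
S = np.diag([1, 1j]).astype(complex)
magic = np.array([1, w], dtype=complex) / np.sqrt(2)  # |A> = T|+>

def teleport_T(psi, outcome):
    # Entangle data (qubit 0, control) with the magic state (qubit 1, target)
    state = np.kron(psi, magic)
    cnot = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                     [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
    state = cnot @ state
    # Post-select the magic qubit on the given computational-basis outcome
    e = np.eye(2)[outcome]
    proj = np.kron(np.eye(2), np.outer(e, e))
    state = proj @ state
    state /= np.linalg.norm(state)
    data = state.reshape(2, 2)[:, outcome]  # remaining data-qubit state
    # Outcome 1 leaves T†|ψ> on the data qubit; the Clifford S fixes it
    return S @ data if outcome == 1 else data

psi = np.array([0.6, 0.8j], dtype=complex)
for outcome in (0, 1):
    out = teleport_T(psi, outcome)
    # In both branches the output equals T|ψ> up to a global phase
    assert np.isclose(abs(np.vdot(T @ psi, out)), 1.0)
```

Each outcome occurs with probability 1/2, and the correction is Clifford, which is what makes the gadget compatible with codes that are transversal only for Cliffords.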

(1) Suppose the input state that the verifier intends to prepare is \({\left \vert {\psi }\right \rangle } = {\left \vert {\psi _{1}}\right \rangle } \otimes {\left \vert {\psi _{2}}\right \rangle } \otimes ... \otimes {\left \vert {\psi _{n}}\right \rangle }\), where each \(\left \vert {\psi _{i}}\right \rangle \) is a qudit. Also suppose that the verifier wishes to apply the quantum circuit \(\mathcal {C}\) on \(\left \vert {\psi }\right \rangle \), which contains L Toffoli gates. The verifier prepares the state \(\left \vert {{\Psi }_{in}}\right \rangle = \left \vert {\psi _{1}}\right \rangle \left \vert {0}\right \rangle ^{\otimes t-1} \otimes \left \vert {\psi _{2}}\right \rangle \left \vert {0}\right \rangle ^{\otimes t-1} \otimes ... \otimes \left \vert {\psi _{n}}\right \rangle \left \vert {0}\right \rangle ^{\otimes t-1} \otimes \left \vert {M_{1}}\right \rangle \left \vert {0}\right \rangle ^{\otimes 3t-3} \otimes ... \otimes \left \vert {M_{L}}\right \rangle \left \vert {0}\right \rangle ^{\otimes 3t-3}\), where \(t = 2d + 1\) and each \(\left \vert {M_{i}}\right \rangle \) is a 3-qudit magic state, used for performing Toffoli gates. Groups of t qudits will comprise a block, as follows. The first n blocks are simply \(\left \vert {block_{i}}\right \rangle =\left \vert {\psi _{i}}\right \rangle \left \vert {0}\right \rangle ^{\otimes t-1}\), with \(i \in \{1, ..., n\}\). Next, we have the states of the form \(\left \vert {M_{i}}\right \rangle \left \vert {0}\right \rangle ^{\otimes 3t-3}\), which consist of 3 blocks each. Each such block will comprise one qudit from \(\left \vert {M_{i}}\right \rangle \) and a \(\left \vert {0}\right \rangle ^{\otimes t-1}\) state. Note that we can no longer represent these blocks as pure states, since the 3 qudits of a \(\left \vert {M_{i}}\right \rangle \) state are entangled.
So, to summarize, each block contains one qudit from either the state \(\left \vert {\psi }\right \rangle \) or a magic state \(\left \vert {M_{i}}\right \rangle \), together with a flag system, \(\left \vert {0}\right \rangle ^{\otimes t-1}\).

(2) The verifier encodes each block in a signed polynomial code with a randomly chosen key \(k \in \{-1, + 1\}^{t}\) (the same key for each block) and then quantum one-time pads each block (using different keys for the padding of each block). The blocks are prepared and encoded in sequence (the verifier has the ability to process 3 blocks, or \(3t\) qudits, at a time) and then sent to the prover.

(3) When applying Clifford operations, the verifier simply asks the prover to apply the gates in a transversal fashion. Since Clifford operations normalise Pauli operators, the verifier then updates the one-time pad keys in a manner similar to Childs’ protocol (see Section 1.1).

(4) When applying a Toffoli gate, the verifier asks the prover to measure 3 blocks, comprising a magic state, in the computational basis and report the measurement outcomes. It is assumed that the magic state was entangled, using a Clifford operation, with 3 target blocks on which the Toffoli gate is to be applied. The verifier undoes the (classical) one-time padding of the measurement outcomes and expects each of the 3 groups of measurement outcomes (associated with each of the 3 blocks) to be of the form \([ k_{1} p(\alpha _{1}), ..., k_{t} p(\alpha _{t})]\). The verifier then takes these classical strings and turns them into states of the form \(\left \vert {\phi }\right \rangle = \left \vert {k_{1} p(\alpha _{1})}\right \rangle ... \left \vert {k_{t} p(\alpha _{t})}\right \rangle \) (using her constant-sized quantum computer).^{22} She then applies \(D_{k}^{\dagger }\) on each of these \(\left \vert {\phi }\right \rangle \) states and checks that the last d qudits of each state are \(\left \vert {0}\right \rangle \), aborting otherwise. Assuming no abort, the verifier instructs the prover to perform the appropriate Pauli corrections resulting from the gate teleportation.

(5) Once all gates have been performed, the verifier instructs the prover to measure all blocks in the computational basis. As in step 4, the verifier will then de-one-time-pad the outcomes, apply \(D_{k}^{\dagger }\) to each state of the form \(\left \vert {\phi }\right \rangle \) (prepared from these outcomes), and check that the last d qudits are \(\left \vert {0}\right \rangle \), aborting otherwise.
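To make the consistency condition in step 4 concrete, here is a purely classical sketch of checking that sign-corrected outcomes lie on a polynomial of degree at most d. The prime field size, the choice of evaluation points \(\alpha_i = i\) and the function names are illustrative assumptions, not part of the protocol's specification.

```python
# Check that reported outcomes have the form [k_1 p(a_1), ..., k_t p(a_t)]
# for some polynomial p of degree <= d over F_q, with sign key k in {-1,+1}^t.
def interpolate_eval(points, x, q):
    """Lagrange-interpolate (x_i, y_i) over the prime field F_q, evaluate at x."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * ((x - xj) % q) % q
                den = den * ((xi - xj) % q) % q
        total = (total + yi * num * pow(den, q - 2, q)) % q  # Fermat inverse
    return total

def check_block(outcomes, k, d, q):
    """Accept iff the sign-corrected outcomes lie on a degree-<=d polynomial."""
    xs = list(range(1, len(outcomes) + 1))               # evaluation points a_i
    ys = [(ki * oi) % q for ki, oi in zip(k, outcomes)]  # undo the signs k_i
    pts = list(zip(xs, ys))[: d + 1]                     # enough points to fix p
    return all(interpolate_eval(pts, x, q) == y for x, y in zip(xs, ys))
```

Since \(t = 2d+1\) points overdetermine a degree-d polynomial, any single corrupted coordinate makes the block inconsistent and the check fails.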
As with the previous protocol, the security is based on the security of the authentication scheme. However, there is a significant difference. In the Clifford-QAS VQC protocol, one could always assume that the state received by the verifier was the correctly encoded state with a deviation on top that was independent of this encoding. However, in the Poly-QAS VQC protocol, the quantum state is never returned to the verifier and, moreover, the prover’s instructed actions on this state are adaptive, based on the responses of the verifier. Since the prover is free to deviate at any point throughout the protocol, if we try to commute all of his deviations to the end (i.e. view the output state as the correct state resulting from an honest run of the protocol, with a deviation on top that is independent of the secret parameters), we find that the output state will have a deviation on top which depends on the verifier’s responses. Since the verifier’s responses depend on the secret keys, we cannot directly use the security of the authentication scheme to prove that the protocol is \(2^{-d}\)-verifiable.
The solution, as explained in [26], is to consider the state of the entire protocol, comprising the prover’s system, the verifier’s system and the transcript of all classical messages exchanged during the protocol. For a fixed interaction transcript, the prover’s attacks can be commuted to the end of the protocol. This is because, if the transcript is fixed, there is no dependency of the prover’s operations on the verifier’s messages. We simply view all of his operations as unitaries acting on the joint system of his private memory, the input quantum state and the transcript. One can then use Lemma 2 and Lemma 3 to bound the projection of this state onto the incorrect subspace with acceptance. The whole state, however, will be a mixture of all possible interaction transcripts, but since each term is bounded and the probabilities of the terms in the mixture must add up to one, it follows that the protocol is \(2^{-d}\)-verifiable:
Theorem 2
For a fixed constant \(d > 0\), Poly-QAS VQC is a prepare-and-send \(\mathsf {QPIP}\) protocol having correctness \(\delta = 1\) and verifiability \(\epsilon = 2^{-d}\).
Let us briefly summarize the two protocols in terms of the verifier’s resources. In both protocols, if one fixes the security parameter, \(\epsilon \), the verifier must have an \(O(\log(1/\epsilon))\)-size quantum computer. Additionally, both protocols are interactive, with the total amount of communication (number of messages times the size of each message) being upper bounded by \(O(\mathcal {C} \cdot \log(1/\epsilon ))\), where \(\mathcal {C}\) is the quantum circuit to be performed.^{23} However, in Clifford-QAS VQC this communication is quantum, whereas in Poly-QAS VQC only one quantum message is sent at the beginning of the protocol and the rest of the interaction is classical.
Before ending this subsection, we also mention the result of Broadbent et al. from [55]. This result generalises the use of quantum authentication codes for achieving verification of delegated quantum computation (not limited to decision problems). Moreover, the authors prove the security of these schemes in the universal composability framework, which allows for secure composition of cryptographic protocols and primitives [56].
2.2 Trap-Based Verification
In this subsection we discuss Verifiable Universal Blind Quantum Computing (VUBQC), which was developed by Fitzsimons and Kashefi in [27]. The protocol is written in the language of MBQC and relies on two essential ideas. The first is that an MBQC computation can be performed blindly, using UBQC, as described in Section 1.1. The second is the idea of embedding checks, or traps, in a computation in order to verify that it was performed correctly. Blindness will ensure that these checks remain hidden, so that any deviation by the prover has a high chance of triggering a trap. Notice that this is similar to the QAS-based approaches, where the input state has a flag subsystem appended to it in order to detect deviations and the whole state is encoded in some way so as to hide the input and the flag subsystem. This will lead to a similar proof of security. However, as we will see, the differences arising from using MBQC and UBQC lead to a reduction in the quantum resources of the verifier. In particular, in VUBQC the verifier requires only the ability to prepare single qubit states, which will be sent to the prover, in contrast to the QAS-based protocols, which required the verifier to have a constant-size quantum computer.
Recall the main steps for performing UBQC. The client, Alice, sends qubits of the form \({\left \vert {+_{\theta _{i}}}\right \rangle }\) to Bob, the server, and instructs him to entangle them according to a graph structure, G, corresponding to some universal graph state. She then asks him to measure qubits in this graph state at angles \(\delta _{i} = \phi ^{\prime }_{i} + \theta _{i} + r_{i} \pi \), where \(\phi ^{\prime }_{i}\) is the corrected computation angle and \(r_{i} \pi \) acts as a random \(\mathsf {Z}\) operation which flips the measurement outcome. Alice will use the measurement outcomes, denoted \(b_{i}\), provided by Bob to update the computation angles for future measurements. Throughout the protocol, Bob’s perspective is that the states, measurements and measurement outcomes are indistinguishable from random. Once all measurements have been performed, Alice will undo the \(r_{i}\) padding of the final outcomes and recover her output. Of course, UBQC does not provide any guarantee that the output she gets is the correct one, since Bob could have deviated from her instructions.
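Alice's side of this interaction is entirely classical bookkeeping. A minimal sketch, under the representational assumption that angles are stored as integer multiples of \(\pi/4\) (i.e. integers mod 8):

```python
# Alice's classical bookkeeping in UBQC, with angles as integers mod 8
# (one unit = pi/4), so r*pi corresponds to adding 4.
def measurement_angle(phi_corrected, theta, r):
    """delta_i = phi'_i + theta_i + r_i*pi, reduced mod 2*pi."""
    return (phi_corrected + theta + 4 * r) % 8

def unpad_outcome(b, r):
    """Undo the r_i flip on a reported measurement outcome b_i."""
    return b ^ r
```

Bob only ever sees \(\delta_i\) and reports \(b_i\); since \(\theta_i\) and \(r_i\) are uniformly random and known only to Alice, \(\delta_i\) and \(b_i\) reveal nothing about the computation.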
Transitioning to VUBQC, we will identify Alice as the verifier and Bob as the prover. To augment UBQC with the ability to detect malicious behaviour on the prover’s part, the verifier will introduce traps in the computation. How will she do this? Recall that the qubits which will comprise \({\left \vert {G}\right \rangle }\) need to be entangled with the \(\mathsf {CZ}\) operation. Of course, for \(\mathsf{XY}\)-plane states \(\mathsf {CZ}\) does indeed entangle the states. However, if either qubit, on which \(\mathsf {CZ}\) acts, is \({\left \vert {0}\right \rangle }\) or \({\left \vert {1}\right \rangle }\), then no entanglement is created. So suppose that we have a \({\left \vert {+_{\theta }}\right \rangle }\) qubit whose neighbours, according to G, are computational basis states. Then, this qubit will remain disentangled from the rest of the qubits in \({\left \vert {G}\right \rangle }\). This means that if the qubit is measured at its preparation angle, the outcome will be deterministic. The verifier can exploit this fact to certify that the prover is performing the correct measurements. Such states are referred to as trap qubits, whereas the \({\left \vert {0}\right \rangle }\), \({\left \vert {1}\right \rangle }\) neighbours are referred to as dummy qubits. Importantly, as long as G’s structure remains that of a universal graph state^{24} and as long as the dummy qubits and the traps are chosen at random, adding these extra states as part of the UBQC computation will not affect the blindness of the protocol. The implication of this is that the prover will be completely unaware of the positions of the traps and dummies. The traps effectively play a role that is similar to that of the flag subsystem in the authentication-based protocols. The dummies, on the other hand, are there to ensure that the traps do not get entangled with the rest of the qubits in the graph state. They also serve another purpose.
When a dummy is in a \({\left \vert {1}\right \rangle }\) state, and a \(\mathsf {CZ}\) acts on it and a trap qubit in the state \({\left \vert {+_{\theta }}\right \rangle }\), the effect is to “flip” the trap to \({\left \vert {-_{\theta }}\right \rangle }\) (alternatively, \({\left \vert {-_{\theta }}\right \rangle }\) would have been flipped to \({\left \vert {+_{\theta }}\right \rangle }\)). This means that if the trap is measured at its preparation angle, \(\theta \), the measurement outcome will also be flipped, with respect to the initial preparation. Conversely, if the dummy was initially in the state \({\left \vert {0}\right \rangle }\), then no flip occurs. Traps and dummies, therefore, serve to also certify that the prover is performing the \(\mathsf {CZ}\) operations correctly. Thus, by using the traps (and the dummies), the verifier can check both the prover’s measurements and his entangling operations and hence verify his MBQC computation.
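This trap-and-dummy behaviour can be checked directly on a two-qubit example; the simulation below is purely illustrative.

```python
import numpy as np

# CZ between a trap |+_theta> and a computational-basis dummy keeps the
# pair in a product state; the trap is flipped to |-_theta> iff the dummy
# is |1>, so measuring the trap in its preparation basis is deterministic.
CZ = np.diag([1, 1, 1, -1])

def trap_outcome(theta, dummy):
    trap = np.array([1, np.exp(1j * theta)]) / np.sqrt(2)  # |+_theta>
    state = CZ @ np.kron(trap, np.eye(2)[dummy])           # order: (trap, dummy)
    trap_after = state.reshape(2, 2)[:, dummy]             # still a product state
    p_plus = abs(np.vdot(trap, trap_after)) ** 2           # prob. of |+_theta> outcome
    return 0 if np.isclose(p_plus, 1) else 1
```

The outcome equals the dummy's value with certainty, which is exactly what lets the verifier predict trap measurement results.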

(1) The verifier chooses an input x and a quantum computation \(\mathcal {C}\) that she would like the prover to perform on \({\left \vert {x}\right \rangle }\).^{25}

(2) She converts x and \(\mathcal {C}\) into a pair \((G, \{\phi _{i}\}_{i})\), where \({\left \vert {G}\right \rangle }\) is an Nqubit universal graph state (with an established ordering for measuring the qubits), which admits an embedding of T traps and D dummies. We therefore have that \(N = T + D + Q\), where \(Q = O(\mathcal {C})\) is the number of computation qubits used for performing \(\mathcal {C}\) and \(\{\phi _{i}\}_{i \leq Q}\) is the associated set of computation angles.^{26}

(3) Alice picks, uniformly at random, values \(\theta _{i}\), with i going from 1 to \(T+Q\), from the set \(\{0, \pi /4, 2\pi /4, ... 7\pi /4\}\) as well as values \(r_{i}\) from the set \(\{0, 1\}\) for the trap and computation qubits.

(4) She then prepares the \(T+Q\) states \({\left \vert {+_{\theta _{i}}}\right \rangle }\), as well as D dummy qubits which are states chosen at random from \(\{ {\left \vert {0}\right \rangle }, {\left \vert {1}\right \rangle } \}\). All these states are sent to Bob, who is instructed to entangle them, using \(\mathsf {CZ}\) operations, according to the graph structure G.

(5) Alice then asks Bob to measure the qubits as follows: computation qubits will be measured at \(\delta _{i} = \phi ^{\prime }_{i} + \theta _{i} + r_{i} \pi \), where \(\phi ^{\prime }_{i}\) is an updated version of \(\phi _{i}\) that incorporates corrections resulting from previous measurements; trap qubits will be measured at \(\delta _{i} = \theta _{i} + r_{i} \pi \); dummy qubits are measured at randomly chosen angles from \(\{0, \pi /4, 2\pi /4, ... 7\pi /4\}\). This step is interactive as Alice needs to update the angles of future measurements based on past outcomes. The number of rounds of interaction is proportional to the depth of \(\mathcal {C}\). If any of the trap measurements produce incorrect outcomes, Alice will abort upon completion of the protocol.

(6) Assuming all trap measurements succeeded, after all the measurements have been performed, Alice undoes the \(r_{i}\) one-time padding of the measurement outcomes, thus recovering the outcome of the computation.
If, however, there are multiple trap states, the bound improves. Specifically, for a type of resource state called the dotted-triple graph, the number of traps can be a constant fraction of the total number of qubits, yielding \(\epsilon = 8/9\). If the protocol is then repeated a constant number of times, d, with the verifier aborting if any of these runs gives incorrect trap outcomes, it can be shown that \(\epsilon = (8/9)^{d}\) [57]. Alternatively, if the input state and computation are encoded in an error correcting code of distance d, then one again obtains \(\epsilon = (8/9)^{d}\). This is useful if one is interested in a quantum output, or a classical bit string output. If, instead, one would only like a single bit output (i.e. the outcome of the decision problem), then sequential repetition and taking the majority outcome is sufficient. The fault tolerant encoding need not be done by the verifier. Instead, the prover will simply be instructed to prepare a larger resource state which also offers topological error-correction. See [27, 58, 59] for more details. An important observation, however, is that the fault tolerant encoding, just like in the Poly-QAS VQC protocol, is used only to boost security and not for correcting deviations arising from faulty devices. This latter case is discussed in Section 5.2. To sum up:
Theorem 3
For a fixed constant \(d > 0\), VUBQC is a prepare-and-send \(\mathsf {QPIP}\) protocol having correctness \(\delta = 1\) and verifiability \(\epsilon = (8/9)^{d}\).
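The bound translates directly into the number of repetitions (or the code distance) needed to reach a desired security level; the helper below is our own, for illustration.

```python
import math

# Smallest d with (8/9)^d <= eps_target: the number of repetitions, or
# the error-correcting-code distance, needed for a target verifiability.
def repetitions_needed(eps_target):
    return math.ceil(math.log(eps_target) / math.log(8 / 9))
```

For example, driving the verifiability bound below \(10^{-6}\) requires d on the order of a hundred repetitions, which is why the linear-overhead constructions discussed next matter in practice.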
It should be noted that in the original construction of the protocol, the fault tolerant encoding, used for boosting security, required the use of a resource state having \(O(\mathcal {C}^{2})\) qubits. The importance of the dotted-triple graph construction is that it achieves the same level of security while keeping the number of qubits linear in \(\mathcal {C}\). The same effect is achieved by a composite protocol which combines the Poly-QAS VQC scheme, from the previous section, with VUBQC [51]. This works by having the verifier run small instances of VUBQC in order to prepare the encoded blocks used in the Poly-QAS VQC protocol. Because of the blindness property, the prover does not learn the secret keys used in the encoded blocks. The verifier can then run the Poly-QAS VQC protocol with the prover, using those blocks. This hybrid approach illustrates how composition can lead to more efficient protocols. In this case, the composite protocol maintains a single-qubit preparation device for the verifier (as opposed to an \(O(\log(1/\epsilon))\)-size quantum computer) while also achieving linear communication complexity. We will encounter other composite protocols when reviewing entanglement-based protocols in Section 4.
Lastly, let us explicitly state the resources and overhead of the verifier throughout the VUBQC protocol. As mentioned, the verifier requires only a single-qubit preparation device, capable of preparing states of the form \(\left \vert {+_{\theta }}\right \rangle \), with \(\theta \in \{0, \pi /4, 2\pi /4, ... 7\pi /4 \}\), and \(\left \vert {0}\right \rangle \), \(\left \vert {1}\right \rangle \). The number of qubits needed is \(O(\mathcal {C})\). After the qubits have been sent to the prover, the two parties interact classically, and the size of this communication is also \(O(\mathcal {C})\).
2.3 Verification Based on Repeated Runs

Computation run. The verifier delegates \(\mathcal {C} {\left \vert {0}\right \rangle }^{\otimes n}\) to the prover.

X-test run. The verifier delegates the identity computation on the \({\left \vert {0}\right \rangle }^{\otimes n}\) state to the prover.

Z-test run. The verifier delegates the identity computation on the \({\left \vert {+}\right \rangle }^{\otimes n}\) state to the prover.
The gadget works in a gate teleportation fashion. For each qubit, labelled j, on which the prover should apply a \(\mathsf {T}\) gate, the verifier sends a qubit of the form \(\mathsf {X}^{d} \textsf {Z}^{c} \textsf {S}^{y} \textsf {T} {\left \vert {+}\right \rangle }\), as well as the classical bit \(x = a \oplus c \oplus y\), where a is the \(\mathsf {X}\) padding of qubit j and c, d and y are chosen at random. The verifier then instructs the prover to apply a \(\mathsf {CNOT}\) between the sent qubit and qubit j, effectively entangling them, and then measure qubit j in the computational basis. Lastly, the verifier instructs the prover to apply an \(\mathsf {S}^{x}\) gate to the sent qubit. The end result is that this qubit will be the same as the de-one-time-padded qubit j, but with a \(\mathsf {T}\) and a new one-time pad acting on it. Importantly, the new pad is kept secret from the prover.
Note that the measurement bit, c, provided by the prover to the verifier should be the XOR of the original \(\mathsf {X}\) padding of the input and the updated \(\mathsf {X}\) padding of the input. Checking the value of this bit allows the verifier to test that the gadget was applied correctly.
What about the \(\mathsf {Z}\)-test run? In that case, the output should be the \({\left \vert {+}\right \rangle }^{\otimes n}\) state which, upon measurement, should collapse with equal probability onto any of the \(2^{n}\) possible binary strings. The verifier does not test for this. Instead, each \(\mathsf {H}\) gate that should be applied by the prover has the effect of switching between the \(\mathsf {Z}\)-test run and the \(\mathsf {X}\)-test run. Thus, whenever a Hadamard is applied to a qubit during a \(\mathsf {Z}\)-test run, the verifier switches to an \(\mathsf {X}\)-test run until the next Hadamard operation. In the \(\mathsf {X}\)-test runs, the verifier will use the \(\mathsf {T}\) gate gadget from Fig. 8. These are the only checks that are performed in the \(\mathsf {Z}\)-test run.
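The switching rule amounts to a two-state machine over the gate sequence; the function and labels below are our own illustrative shorthand, not notation from the protocol.

```python
# Track which kind of run the verifier is effectively performing while the
# prover works through the circuit: in a Z-test run each H toggles into an
# X-test run and back, and T gadgets are checked only during X-test runs.
def gadget_schedule(gates, run="Z-test"):
    checked = []  # one entry per T gate: was that gadget checked?
    for g in gates:
        if g == "H":
            run = "X-test" if run == "Z-test" else "Z-test"
        elif g == "T":
            checked.append(run == "X-test")
    return run, checked
```

An even number of Hadamards returns the verifier to the original test run, matching the description above.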

Computation run. The verifier one-time pads the \({\left \vert {0}\right \rangle }^{\otimes n}\) state and sends it to the prover. The prover is then instructed to apply \(\mathcal {C}\) on this state, such that for each \(\mathsf {T}\) gate in the circuit the prover and the verifier interact in order to perform the \(\mathsf {T}\) gadget. Additionally, any \(\mathsf {H}\) in \(\mathcal {C}\) is performed as in (52). For Clifford operations, the verifier updates the one-time pad of the state accordingly. The prover is instructed to measure the output state of the circuit in the computational basis and return the outcome to the verifier. The verifier undoes the padding of this outcome and accepts if the output of the circuit indicates acceptance.

X-test run. The verifier one-time pads the \({\left \vert {0}\right \rangle }^{\otimes n}\) state and sends it to the prover. As in the computation run, for each \(\mathsf {T}\), the verifier and the prover will interact to run the \(\mathsf {T}\) gate gadget. In this case, however, the verifier will use the \(\mathsf {T}\) gate gadget from Fig. 8, making the circuit effectively act as identity and checking that the prover is performing these gadgets correctly (rejecting otherwise). Additionally, the \(\mathsf {H}\) gates in \(\mathcal {C}\) will also act as identity, from (53), as described previously. The verifier updates the one-time padding of the state accordingly for all gates in the circuit. Once the circuit is finished, the prover is instructed to measure the output in the computational basis and report the outcome to the verifier. The verifier accepts if the de-one-time-padded output is \(\left \vert {0}\right \rangle ^{\otimes n}\).

Z-test run. The verifier one-time pads the \({\left \vert {+}\right \rangle }^{\otimes n}\) state and sends it to the prover. As in the \(\mathsf {X}\)-test run, the \(\mathsf {T}\) gate gadgets will act as identity. The \(\mathsf {H}\) operations that the prover performs will temporarily switch the \(\mathsf {Z}\)-test run into an \(\mathsf {X}\)-test run, in which the verifier uses the gadget from Fig. 8 to check that the prover implemented it correctly. Any subsequent \(\mathsf {H}\) will switch back to a \(\mathsf {Z}\)-test run. Additionally, the verifier updates the one-time padding of the state accordingly for all gates in the circuit. The prover is instructed to measure the output in the computational basis and report the outcome to the verifier; in this case, however, the verifier discards the output.
The asymmetry between the \(\mathsf {X}\)-test run and the \(\mathsf {Z}\)-test run stems from the fact that the output is always measured in the computational basis. This means that an incorrect output is one which has been bit-flipped. In turn, this implies that only \(\mathsf {X}\) and \(\mathsf {Y}\) operations on the output will act as deviations, since \(\mathsf {Z}\) effectively acts as identity on computational basis states. If the circuit \(\mathcal {C}\) does not contain any Hadamard gates, and hence the computation takes place entirely in the computational basis, then the \(\mathsf {X}\)-test is sufficient for detecting such deviations. However, when Hadamard gates are present, this is no longer the case, since deviations can occur in the conjugate basis, \(({\left \vert {+}\right \rangle }, {\left \vert {-}\right \rangle })\), as well. This is why the \(\mathsf {Z}\)-test is necessary. Its purpose is to check that the prover’s operations are performed correctly when switching to the conjugate basis. For this reason, a Hadamard gate will switch a \(\mathsf {Z}\)-test run into an \(\mathsf {X}\)-test run, which provides verification using the \(\mathsf {T}\) gate gadget.
Note that when discussing the correctness and verifiability of the Test-or-Compute protocol, we have slightly abused the terminology, since this protocol does not rigorously match the established definitions for correctness and verifiability that we have used for the previous protocols. The reason for this is the fact that in the Test-or-Compute protocol there is no additional flag or trap subsystem to indicate failure. Rather, the verifier detects malicious behaviour by alternating between the different runs. It is therefore more appropriate to view the Test-or-Compute protocol simply as a \(\mathsf {QPIP}\) protocol having a constant gap between completeness and soundness:
Theorem 4
Test-or-Compute is a prepare-and-send \(\mathsf {QPIP}\) protocol having completeness \(8/9\) and soundness \(7/9\).
In terms of the verifier’s quantum resources, we notice that, as with the VUBQC protocol, the only requirement is the preparation of single qubit states. All of these states are sent in the first round of the protocol, the rest of the interaction being completely classical.
2.4 Summary of Prepare-and-Send Protocols
The protocols, while different, have the common feature that they all use blindness or have the potential to be blind protocols. Out of the five presented protocols, only the Poly-QAS VQC and the Test-or-Compute protocols are not explicitly blind since, in both cases, the computation is revealed to the server. However, it is relatively easy to make the protocols blind by encoding the circuit into the input (which is one-time padded). Hence, one can say that all protocols achieve blindness.
This feature is essential in the proof of security for these protocols. Blindness, combined with either the Pauli twirl Lemma 2 or the Clifford twirl Lemma 1, has the effect of reducing any deviation of the prover to a convex combination of Pauli attacks. Each protocol then has a specific way of detecting such an attack. In the Clifford-QAS VQC protocol, the convex combination is turned into a uniform combination and the attack is detected by a flag subsystem associated with a quantum authentication scheme. A similar approach is employed in the Poly-QAS VQC protocol, using a quantum authentication scheme based on a special type of quantum error correcting code. The VUBQC protocol utilizes trap qubits and either sequential repetition or encoding in an error correcting code to detect Pauli attacks. Finally, the Test-or-Compute protocol uses a hidden identity computation acting on either the \({\left \vert {0}\right \rangle }^{\otimes n}\) or \({\left \vert {+}\right \rangle }^{\otimes n}\) states, in order to detect the malicious behavior of the prover.
Comparison of prepare-and-send protocols

Protocol | Verifier resources | Communication | 2-way quantum comm.
Clifford-QAS VQC | \(O(\log(1/\epsilon))\) | \(O(N \cdot \log(1/\epsilon))\) | Y
Poly-QAS VQC | \(O(\log(1/\epsilon))\) | \(O((n + L) \cdot \log(1/\epsilon))\) | N
VUBQC | \(O(1)\) | \(O(N \cdot \log(1/\epsilon))\) | N
Test-or-Compute | \(O(1)\) | \(O((n + T) \cdot \log(1/\epsilon))\) | N
As mentioned, if we want to make the Poly-QAS VQC and Test-or-Compute protocols blind, the verifier will hide her circuit by incorporating it into the input. The input would then consist of an encoding of \(\mathcal {C}\) and an encoding of x. The prover would be asked to perform controlled operations from the part of the input containing the description of \(\mathcal {C}\), to the part containing x, effectively acting with \(\mathcal {C}\) on x. We stress that in this case, the protocols would have a communication complexity of \(O(\mathcal {C} \cdot \log(1/\epsilon ))\), just like VUBQC and Clifford-QAS VQC.^{31}
3 Receive-and-Measure Protocols
For prepare-and-send protocols we saw that blindness was an essential feature for achieving verifiability. While most of the receive-and-measure protocols are blind as well, we will see that it is possible to perform verification without hiding any information about the input or computation from the prover. Additionally, while in prepare-and-send protocols the verifier was sending an encoded or encrypted quantum state to the prover, in receive-and-measure protocols, the quantum state received by the verifier is not necessarily encoded or encrypted. Moreover, this state need not contain a flag or a trap subsystem. For this reason, we can no longer consistently define \(\epsilon \)-verifiability and \(\delta \)-correctness, as we did for prepare-and-send protocols. Instead, we will simply view receive-and-measure protocols as \(\mathsf {QPIP}\) protocols.
There is an additional receive-and-measure protocol by Gheorghiu et al. [33] which we refer to as Steering-based VUBQC. That protocol, however, is similar to the entanglement-based GKW protocol from Section 4.1. We will therefore review Steering-based VUBQC in that subsection by comparing it to the entanglement-based protocol.
3.1 Measurement-Only Verification
In this section we discuss the measurement-only protocol from [31], which we shall simply refer to as the measurement-only protocol. This protocol uses MBQC to perform the quantum computation, like the VUBQC protocol from Section 2.2, however the manner in which verification is performed is more akin to Broadbent’s Test-or-Compute protocol, from Section 2.3. This is because, just like in the Test-or-Compute protocol, the measurement-only approach has the verifier alternate between performing the computation and testing the prover’s operations.
When viewed as observables, stabilizers allow one to test that an unknown quantum state is in fact a particular graph state \({\left \vert {G}\right \rangle }\), with high probability. This is done by measuring random stabilizers of \(\left \vert {G}\right \rangle \) on multiple copies of the unknown state. If all measurements return the \(+ 1\) outcome, then the unknown state is close in trace distance to \(\left \vert {G}\right \rangle \). This is related to a concept known as self-testing, which is the idea of determining whether an unknown quantum state and an unknown set of observables are close to a target state and target observables, based on observed statistics. We postpone a further discussion of this topic to the next section, since self-testing is ubiquitous in entanglement-based protocols.
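For the smallest nontrivial example, the two-vertex graph state \(\mathsf{CZ}{\left \vert {+}\right \rangle }{\left \vert {+}\right \rangle }\) has stabilizers \(\mathsf{X}\otimes\mathsf{Z}\) and \(\mathsf{Z}\otimes\mathsf{X}\); the following sketch (illustrative only) computes their expectation values.

```python
import numpy as np

# The 2-vertex graph state |G> = CZ|+>|+> is stabilized by X(x)Z and Z(x)X:
# both observables have expectation value +1 on |G>, while states far from
# |G> score lower on at least one of them.
X = np.array([[0, 1], [1, 0]])
Z = np.diag([1, -1])
CZ = np.diag([1, 1, 1, -1])

plus = np.ones(2) / np.sqrt(2)
G = CZ @ np.kron(plus, plus)

def stabilizer_expectations(state):
    return [np.real(np.vdot(state, np.kron(X, Z) @ state)),
            np.real(np.vdot(state, np.kron(Z, X) @ state))]
```

A state that passes random stabilizer measurements with outcome \(+1\) must therefore have high overlap with \({\left \vert {G}\right \rangle }\), which is the content of the hypothesis test below.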

(1) The verifier chooses an input x and a quantum computation \(\mathcal {C}\).

(2) She instructs the prover to prepare \(2k + 1\) copies of a 2D cluster state, \({\left \vert {G}\right \rangle }\), for some constant k, and send all of the qubits, one at a time, to the verifier.

(3) The verifier randomly picks one copy on which to run the computation of \(\mathcal {C}\) on x in an MBQC fashion. The remaining \(2k\) copies are randomly divided into the \(\mathsf {X}\mathsf {Z}\) group and the \(\mathsf {Z}\mathsf {X}\) group and measured, as described above, so as to check the stabilizers of \(\left \vert {G}\right \rangle \).

(4) If all stabilizer measurement outcomes are successful (i.e. produced the outcome \(+ 1\)), then the verifier accepts the outcome of the computation, otherwise she rejects.
As with all protocols, completeness follows immediately, since if the prover behaves honestly, the verifier will accept the outcome of the computation. In the case of soundness, Hayashi and Morimae treat the problem as a hypothesis test. In other words, in the testing phase of the protocol the verifier is checking the hypothesis that the prover prepared \(2k + 1\) copies of the state \({\left \vert {G}\right \rangle }\). Hayashi and Morimae then prove the following theorem:
Theorem 5
This theorem essentially shows that, as the number of copies of \({\left \vert {G}\right \rangle }\) requested by the verifier increases, if the verifier accepts in the testing phase then the state \(\rho \), used by the verifier for the computation, is close in trace distance to the ideal state, \({\left \vert {G}\right \rangle }\). The confidence level, \(\alpha \), represents the maximum acceptance probability for the verifier, such that the computation state, \(\rho \), does not satisfy (60). Essentially this represents the probability for the verifier to accept a computation state that is far from ideal. Hayashi and Morimae argue that the lower bound, \(\alpha \geq 1/(2k + 1)\), is tight, because if the prover corrupts one of the \(2k + 1\) states sent to the verifier, there is a \(1/(2k + 1)\) chance that the corrupted state will not be tested and the verifier accepts.
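The tightness argument amounts to simple counting: a single corrupted copy evades detection exactly when it is the (uniformly chosen) computation copy. A minimal sketch of this counting, with exact rational arithmetic:

```python
from fractions import Fraction

def escape_probability(k, corrupted=0):
    """Probability that one corrupted copy among 2k+1 escapes testing,
    assuming the verifier picks the computation copy uniformly at random
    and stabilizer-tests all remaining 2k copies."""
    n = 2 * k + 1
    # The corrupted copy evades every test iff it is the chosen copy.
    hits = sum(1 for chosen in range(n) if chosen == corrupted)
    return Fraction(hits, n)

assert escape_probability(1) == Fraction(1, 3)    # 2k+1 = 3 copies
assert escape_probability(5) == Fraction(1, 11)   # matches 1/(2k+1)
```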
In terms of the quantum capabilities of the verifier, she only requires a single-qubit measurement device capable of measuring the observables: \(\mathsf {X}, \textsf {Y}, \textsf {Z}, (\textsf {X} + \textsf {Y})/\sqrt {2}, (\textsf {X} - \textsf {Y})/\sqrt {2}\). Recently, however, Morimae, Takeuchi and Hayashi have proposed a similar protocol which uses hypergraph states [32]. These states have the property that one can perform universal quantum computations by measuring only the Pauli observables (\(\mathsf {X}\), \(\mathsf {Y}\) and \(\mathsf {Z}\)). Hypergraph states are generalizations of graph states in which the vertices of the graph are linked by hyperedges, which can connect more than two vertices. Hence, the entangling of qubits is done with a generalized \(\mathsf {CZ}\) operation involving multiple qubits. The protocol itself is similar to the one from [31], as the prover is required to prepare many copies of a hypergraph state and send them to the verifier. The verifier will then test all but one of these states using stabilizer measurements and use the remaining one to perform the MBQC computation. For a computation, \(\mathcal {C}\), the protocol has completeness lower bounded by \(1 - \mathcal {C} e^{-\mathcal {C}}\) and soundness upper bounded by \(1/\sqrt {\mathcal {C}}\). The communication complexity is higher than the previous measurement-only protocol, as the prover needs to send \(O(\mathcal {C}^{21})\) copies of the \(O(\mathcal {C})\)-qubit hypergraph state, leading to a total communication cost of \(O(\mathcal {C}^{22})\). We end with the following result:
Theorem 6
The measurement-only protocols are receive-and-measure \(\mathsf {QPIP}\) protocols having an inverse polynomial gap between completeness and soundness.
3.2 Post Hoc Verification
The protocols we have reviewed so far have all been based on cryptographic primitives. There were reasons to believe, in fact, that any quantum verification protocol would have to use some form of encryption or hiding. This is due to the parallels between verification and authentication, which were outlined in Section 2. However, it was shown that this is not the case when Morimae and Fitzsimons, and independently Hangleiter et al., proposed a protocol for post hoc quantum verification [29, 30]. The name “post hoc” refers to the fact that the protocol is minimally interactive, requiring only a single round of back-and-forth communication between the prover and the verifier. Moreover, verification is performed after the computation has been carried out. It should be mentioned that the first post hoc protocol was proposed in [22], by Fitzsimons and Hajdušek; however, that protocol utilizes multiple quantum provers, and we review it in Section 4.3.
In this section, we will present the post hoc verification approach, referred to as 1S-Post-hoc, from the perspective of the Morimae and Fitzsimons paper [29]. The reason for choosing their approach, over that of Hangleiter et al., is that the entanglement-based post hoc protocols, from Section 4.3, are also described using similar terminology to the Morimae and Fitzsimons paper. The protocol of Hangleiter et al. is essentially identical to that of Morimae and Fitzsimons, except it is presented from the perspective of certifying the ground state of a gapped, local Hamiltonian. Their certification procedure is then used to devise a verification protocol for a class of quantum simulation experiments, with the purpose of demonstrating a quantum computational advantage [30].
The starting point is the complexity class \(\mathsf {QMA}\), for which we have stated the definition in Section A. Recall that one can think of \(\mathsf {QMA}\) as the class of problems for which the solution can be checked by a \(\mathsf {BQP}\) verifier receiving a quantum state \({\left \vert {\psi }\right \rangle }\), known as a witness, from a prover. We also stated the definition of the k-local Hamiltonian problem, a complete problem for the class \(\mathsf {QMA}\), in Definition 9. We mentioned that for \(k = 2\) the problem is \(\mathsf {QMA}\)-complete [64]. For the post hoc protocol, Morimae and Fitzsimons consider a particular type of 2-local Hamiltonian known as an \(\mathsf {X}\textsf {Z}\)-Hamiltonian.
To define an \(\mathsf {X}\textsf {Z}\)-Hamiltonian we introduce some helpful notation. Consider an n-qubit operator S, which we shall refer to as an \(\mathsf {X}\textsf {Z}\)-term, such that \(S = \bigotimes _{j = 1}^{n} P_{j}\), with \(P_{j} \in \{I, \textsf {X}, \textsf {Z}\}\). Denote \(w_{X}(S)\) as the \(\mathsf {X}\)-weight of S, representing the total number of j’s for which \(P_{j} = \textsf {X}\). Similarly denote \(w_{Z}(S)\) as the \(\mathsf {Z}\)-weight of S. An \(\mathsf {X}\textsf {Z}\)-Hamiltonian is then a 2-local Hamiltonian of the form \(H = {\sum }_{i} a_{i} S_{i}\), where the \(a_{i}\)’s are real numbers and the \(S_{i}\)’s are \(\mathsf {X}\textsf {Z}\)-terms having \(w_{X}(S_{i}) + w_{Z}(S_{i}) \leq 2\).
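As a quick illustration of this notation, the following sketch (with hypothetical toy terms of our choosing) computes the \(\mathsf{X}\)- and \(\mathsf{Z}\)-weights of Pauli strings and assembles a small \(\mathsf{X}\textsf{Z}\)-Hamiltonian:

```python
import numpy as np
from functools import reduce

PAULI = {
    'I': np.eye(2),
    'X': np.array([[0., 1.], [1., 0.]]),
    'Z': np.array([[1., 0.], [0., -1.]]),
}

def xz_term(s):
    """Matrix of an XZ-term S = P_1 (x) ... (x) P_n, with P_j in {I, X, Z}."""
    return reduce(np.kron, (PAULI[p] for p in s))

def w_X(s): return s.count('X')   # X-weight of the term
def w_Z(s): return s.count('Z')   # Z-weight of the term

def is_2local_xz_term(s):
    return w_X(s) + w_Z(s) <= 2

# A toy 4-qubit XZ-Hamiltonian H = sum_i a_i S_i (coefficients are made up)
terms = {'XIIZ': 0.5, 'IZZI': -1.2, 'XIII': 0.3}
assert all(is_2local_xz_term(s) for s in terms)
H = sum(a * xz_term(s) for s, a in terms.items())
assert np.allclose(H, H.conj().T)   # Hermitian, since the a_i are real
```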
The 1S-Post-hoc protocol starts with the observation that \(\mathsf {BQP} \subseteq \mathsf {QMA}\). This means that any problem in \(\mathsf {BQP}\) can be viewed as an instance of the 2-local Hamiltonian problem. Therefore, for any language \(L \in \mathsf {BQP}\) and input x, there exists an \(\mathsf {X}\textsf {Z}\)-Hamiltonian, H, such that the smallest eigenvalue of H is less than a when \(x \in L\), or larger than b when \(x \not \in L\), where a and b are a pair of numbers satisfying \(b - a \geq 1/poly(|x|)\). Hence, the lowest energy eigenstate of H (also referred to as the ground state), denoted \({\left \vert {\psi }\right \rangle }\), is a quantum witness for \(x \in L\). In a \(\mathsf {QMA}\) protocol, the prover would be instructed to send this state to the verifier. The verifier then performs a measurement on \({\left \vert {\psi }\right \rangle }\) to estimate its energy, accepting if the estimate is below a and rejecting otherwise. However, we are interested in a verification protocol for \(\mathsf {BQP}\) problems where the verifier has minimal quantum capabilities. This means that there will be two requirements: the verifier can only perform single-qubit measurements; the prover is restricted to \(\mathsf {BQP}\) computations. The 1S-Post-hoc protocol satisfies both of these constraints.
The first requirement is satisfied because estimating the energy of a quantum state, \({\left \vert {\psi }\right \rangle }\), with respect to an \(\mathsf {X}\textsf {Z}\)-Hamiltonian H, can be done by measuring one of the observables \(S_{i}\) on the state \({\left \vert {\psi }\right \rangle }\). Specifically, it is shown in [65] that if one chooses the local term \(S_{i}\) according to a probability distribution given by the normalized terms \(a_{i}\), and measures \({\left \vert {\psi }\right \rangle }\) with the \(S_{i}\) observable, this provides an estimate for the energy of \({\left \vert {\psi }\right \rangle }\). Since H is an \(\mathsf {X}\textsf {Z}\)-Hamiltonian, this entails performing at most two measurements, each of which can be either an \(\mathsf {X}\) measurement or a \(\mathsf {Z}\) measurement. This implies that the verifier need only perform single-qubit measurements.
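The single-term estimator can be checked numerically. The sketch below (a toy 2-qubit \(\mathsf{X}\textsf{Z}\)-Hamiltonian of our choosing, not from the protocol) illustrates the idea: pick \(S_i\) with probability \(|a_i|/\sum_j |a_j|\) and rescale the \(\pm 1\) outcome by \(sign(a_i)\sum_j |a_j|\); the expectation of this estimator, computed here exactly rather than by sampling, equals the energy:

```python
import numpy as np
from functools import reduce

PAULI = {'I': np.eye(2),
         'X': np.array([[0., 1.], [1., 0.]]),
         'Z': np.array([[1., 0.], [0., -1.]])}

def term(s):
    return reduce(np.kron, (PAULI[p] for p in s))

# Toy XZ-Hamiltonian H = sum_i a_i S_i on 2 qubits (made-up coefficients)
terms = {'ZZ': -1.0, 'XI': 0.4, 'IX': 0.4}
H = sum(a * term(s) for s, a in terms.items())

# Ground state |psi> of H
evals, evecs = np.linalg.eigh(H)
psi = evecs[:, 0]

# Estimator: pick S_i with probability |a_i| / sum_j |a_j|, measure S_i,
# and report sign(a_i) * (sum_j |a_j|) * outcome.  Its expectation is
# <psi|H|psi>, which we verify by exact averaging over the choices:
norm = sum(abs(a) for a in terms.values())
estimate = sum((abs(a) / norm)            # probability of picking S_i
               * np.sign(a) * norm        # rescaling factor
               * (psi @ term(s) @ psi)    # expectation <S_i> on |psi>
               for s, a in terms.items())
assert np.isclose(estimate, psi @ H @ psi)   # unbiased energy estimate
```

In the protocol the verifier does this with one sampled term per copy of the witness; repetition then sharpens the estimate to the required inverse-polynomial precision.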

(1) The verifier chooses a quantum circuit, \(\mathcal {C}\), and an input x to delegate to the prover.

(2) The verifier computes the terms \(a_{i}\) of the \(\mathsf {X}\textsf {Z}\)-Hamiltonian, \(H = {\sum }_{i} a_{i} S_{i} \), having as a ground state the Feynman-Kitaev state associated with \(\mathcal {C}\) and x, denoted \(\left \vert {\psi }\right \rangle \).

(3) The verifier instructs the prover to send her \({\left \vert {\psi }\right \rangle }\), qubit by qubit.

(4) The verifier chooses one of the \(\mathsf {X}\textsf {Z}\)-terms \(S_{i}\), according to the normalized distribution \(\{a_{i}\}_{i}\), and measures it on \({\left \vert {\psi }\right \rangle }\). She accepts if the measurement indicates the energy of \(\left \vert {\psi }\right \rangle \) is below a.
As mentioned, the essential properties that any \(\mathsf {QPIP}\) protocol should satisfy are completeness and soundness. For the post hoc protocol, these follow immediately from the local Hamiltonian problem. Specifically, we know that there exist a and b such that \(b - a \geq 1/poly(|x|)\). When \(\mathcal {C}\) accepts x with high probability, the state \(\left \vert {\psi }\right \rangle \) will be an eigenstate of H having eigenvalue smaller than a. Otherwise, any state, when measured under the H observable, will have an energy greater than b. Of course, the verifier is not computing the exact energy of \(\left \vert {\psi }\right \rangle \) under H, merely an estimate. This is because she is measuring only one local term from H. However, it is shown in [29] that the precision of her estimate is also inverse polynomial in \(|x|\). Therefore:
Theorem 7
1S-Post-hoc is a receive-and-measure \(\mathsf {QPIP}\) protocol having an inverse polynomial gap between completeness and soundness.
The only quantum capability of the verifier is the ability to measure single qubits in the computational and Hadamard bases (i.e. measuring the \(\mathsf {Z}\) and \(\mathsf {X}\) observables). The protocol, as described, suggests that it is sufficient for the verifier to measure only two qubits. However, since the completeness-soundness gap decreases with the size of the input, in practice one would perform a sequential repetition of this protocol in order to boost this gap. It is easy to see that, for a protocol with a completeness-soundness gap of \(1/p(|x|)\), for some polynomial p, in order to achieve a constant gap of at least \(1 - \epsilon \), where \(\epsilon > 0\), the protocol needs to be repeated \(O(p(|x|) \cdot log(1/\epsilon ))\) times. It is shown in [30, 68] that \(p(|x|)\) is \(O(\mathcal {C}^{2})\), hence the protocol should be repeated \(O(\mathcal {C}^{2} \cdot log(1/\epsilon ))\) times and this also gives us the total number of measurements for the verifier.^{33} Note, however, that this assumes that each run of the protocol is independent of the previous one (in other words, that the states sent by the prover to the verifier in each run are uncorrelated). Therefore, the \(O(\mathcal {C}^{2} \cdot log(1/\epsilon ))\) overhead should be taken as an i.i.d. (independent and identically distributed states) estimate. This is, in fact, mentioned explicitly in the Hangleiter et al. result, where they explain that the prover should prepare “a number of independent and identical copies of a quantum state” [30]. Thus, when considering the most general case of a malicious prover that does not obey the i.i.d. constraint, one requires a more thorough analysis involving non-independent runs, as is done in the measurement-only protocol [31] or the steering-based VUBQC protocol [33].
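The repetition count can be sketched as follows (assuming, as stated, i.i.d. runs, perfect completeness, and a verifier who rejects if any single run rejects; the constant in the O-notation is then simply 1):

```python
import math

def repetitions(p, eps):
    """Number of sequential runs so that a cheating prover, who fails each
    independent run with probability at least 1/p, is caught except with
    probability at most eps.  Uses (1 - 1/p)^n <= exp(-n/p) <= eps."""
    n = math.ceil(p * math.log(1 / eps))
    assert (1 - 1 / p) ** n <= eps
    return n

# A 1/p completeness-soundness gap needs O(p * log(1/eps)) repetitions
assert repetitions(100, 0.01) == math.ceil(100 * math.log(100))
```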
3.3 Summary of ReceiveandMeasure Protocols
Comparison of receive-and-measure protocols
Protocol  Measurements  Observables  Blind

Measurement-only  O(N ⋅ 1/α ⋅ 1/𝜖^{2})  5  Y
Hypergraph measurement-only  O(max(N, 1/𝜖^{2})^{22})  3  Y
1S-Post-hoc  O(N^{2} ⋅ log(1/𝜖))  2  N
Steering-based VUBQC  O(N^{13}log(N) ⋅ log(1/𝜖))  5  Y
Of course, the number of measurements is not the only metric we use in comparing the protocols. Another important aspect is how many observables the verifier should be able to measure. The 1S-Post-hoc protocol is optimal in that sense, since the verifier need only measure the \(\mathsf {X}\) and \(\mathsf {Z}\) observables. Next is the hypergraph state measurement-only protocol which requires all three Pauli observables. Lastly, the other two protocols require the verifier to be able to measure the \(\mathsf {X}\textsf {Y}\)-plane observables \(\mathsf {X}\), \(\mathsf {Y}\), \((\textsf {X}+\textsf {Y})/\sqrt {2}\) and \((\textsf {X}-\textsf {Y})/\sqrt {2}\), plus the \(\mathsf {Z}\) observable.
Finally, we compare the protocols in terms of blindness, which we have seen plays an important role in prepare-and-send protocols. For receive-and-measure protocols, the 1S-Post-hoc protocol is the only one that is not blind. While this is our first example of a verification protocol that does not hide the computation and input from the prover, it is not the only one. In the next section, we review two other post hoc protocols that are also not blind.
4 Entanglement-Based Protocols
The protocols discussed in the previous sections have been either prepare-and-send or receive-and-measure protocols. Both types employ a verifier with some minimal quantum capabilities interacting with a single \(\mathsf {BQP}\) prover. In this section we explore protocols which utilize multiple non-communicating provers that share entanglement and a fully classical verifier. The main idea will be for the verifier to distribute a quantum computation among many provers and verify its correct execution from correlations among the responses of the provers.
 1. Section 4.1: three protocols which make use of the CHSH game, the first one developed by Reichardt et al. [18], the second by Gheorghiu et al. [19] and the third by Hajdušek, Pérez-Delgado and Fitzsimons.
 2.
 3. Section 4.3: two post hoc protocols, one developed by Fitzsimons and Hajdušek [22] and another by Natarajan and Vidick [23].
Unlike the previous sections where, for the most part, each protocol was based on a different underlying idea for performing verification, entanglement-based protocols are either based on some form of rigid self-testing or on testing local Hamiltonians via the post hoc approach. In fact, as we will see, even the post hoc approaches employ self-testing. Of course, there are distinguishing features within each of these broad categories, but due to their technical specificity, we choose to label the protocols in this section by the initials of the authors.
Since self-testing plays such a crucial role in entanglement-based protocols, let us provide a brief description of the concept. The idea of self-testing was introduced by Mayers and Yao in [69], and is concerned with characterising the shared quantum state and observables of n non-communicating players in a nonlocal game. A nonlocal game is one in which a referee (which we will later identify with the verifier) will ask questions to the n players (which we will identify with the provers) and, based on their responses, decide whether they win the game or not. Importantly, we are interested in games where there is a quantum strategy that outperforms a classical strategy. By a classical strategy, we mean that the players can only produce local correlations.^{34} Conversely, in a quantum strategy, the players are allowed to share entanglement in order to produce nonlocal correlations and achieve a higher win rate. Even so, there is a limit to how well the players can perform in the game. In other words, the optimal quantum strategy has a certain probability of winning the game, which may be less than 1. Self-testing results are concerned with nonlocal games in which the optimal quantum strategy is unique, up to local isometries on the players’ systems. This means that if the referee observes a near maximal win rate for the players, in the game, she can conclude that they are using the optimal strategy and can therefore characterise their shared state and their observables, up to local isometries. More formally, we give the definition of self-testing, adapted from [70] and using notation similar to that of [23]:
Definition 4 (Self-testing)
Let G denote a game involving n non-communicating players denoted \(\{ P_{i} \}_{i = 1}^{n}\). Each player will receive a question from a set Q and reply with an answer from a set A. Thus, each \(P_{i}\) can be viewed as a mapping from Q to A. There exists some condition establishing which combinations of answers to the questions constitute a win for the game. Let \(\omega ^{*}(G)\) denote the maximum winning probability of the game for players obeying quantum mechanics.
Note that TD denotes trace distance, and is defined in Section 1.
4.1 Verification Based on CHSH Rigidity
RUV Protocol
In [71], Tsirelson gave an upper bound for the total amount of nonlocal correlations shared between two non-communicating parties, as predicted by quantum mechanics. In particular, consider a two-player game consisting of Alice and Bob. Alice is given a binary input, labelled a, and Bob is given a binary input, labelled b. They each must produce a binary output and we label Alice’s output as x and Bob’s output as y. Alice and Bob win the game iff \(a \cdot b = x \oplus y\). The two are not allowed to communicate during the game, however they are allowed to share classical or quantum correlations (in the form of entangled states). This defines a nonlocal game known as the CHSH game [72]. The optimal classical strategy for winning the game achieves a success probability of \(75\%\), whereas Tsirelson proved that any quantum strategy achieves a success probability of at most \(cos^{2}(\pi /8) \approx 85.3\%\). This maximal winning probability, in the quantum case, can in fact be achieved by having Alice and Bob do the following. First, they will share the state \(\left \vert {{\Phi }_{+}}\right \rangle = (\left \vert {00}\right \rangle + \left \vert {11}\right \rangle ) / \sqrt {2}\). If Alice receives input \(a = 0\), then she will measure the Pauli \(\mathsf {X}\) observable on her half of the \(\left \vert {{\Phi }_{+}}\right \rangle \) state, otherwise (when \(a = 1\)) she measures the Pauli \(\mathsf {Z}\) observable. Bob, on input \(b = 0\), measures \((\textsf {X}+\textsf {Z})/\sqrt {2}\), on his half of the Bell pair, and on input \(b = 1\), he measures \((\textsf {X}-\textsf {Z})/\sqrt {2}\). We refer to this strategy as the optimal quantum strategy for the CHSH game.
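The optimal quantum strategy just described can be verified directly. The following sketch (illustrative, using numpy) computes the winning probability by summing Born-rule probabilities over uniformly random inputs, recovering Tsirelson's bound:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def projector(obs, outcome):
    """Projector onto the +1 (outcome=0) or -1 (outcome=1) eigenspace."""
    evals, evecs = np.linalg.eigh(obs)
    V = evecs[:, np.isclose(evals, 1 if outcome == 0 else -1)]
    return V @ V.conj().T

# Observables of the optimal quantum strategy
alice = {0: X, 1: Z}
bob = {0: (X + Z) / np.sqrt(2), 1: (X - Z) / np.sqrt(2)}

win = 0.0
for a in (0, 1):
    for b in (0, 1):                  # each input pair has probability 1/4
        for x in (0, 1):
            for y in (0, 1):
                if a * b == x ^ y:    # winning condition of the CHSH game
                    P = np.kron(projector(alice[a], x), projector(bob[b], y))
                    win += 0.25 * np.vdot(phi_plus, P @ phi_plus).real

assert np.isclose(win, np.cos(np.pi / 8) ** 2)   # Tsirelson's bound
```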
McKague, Yang and Scarani proved a converse of Tsirelson’s result, by showing that if one observes two players winning the CHSH game with near \(cos^{2}(\pi /8)\) probability, then it can be concluded that the players’ shared state is close to a Bell pair and their observables are close to the ideal observables of the optimal strategy (Pauli \(\mathsf {X}\) and \(\mathsf {Z}\), for Alice, and \((\textsf {X} + \textsf {Z})/\sqrt {2}\) and \((\textsf {X} - \textsf {Z})/\sqrt {2}\), for Bob) [73]. This is effectively a self-test for a Bell pair. Reichardt, Unger and Vazirani then proved a more general result for self-testing a tensor product of multiple Bell states as well as the observables acting on these states [18].^{35} It is this latter result that is relevant for the RUV protocol so we give a more formal statement for it:
Theorem 8
Suppose two players, Alice and Bob, are instructed to play n sequential CHSH games. Let the inputs, for Alice and Bob, be given by the n-bit strings \(\mathbf {a}, \mathbf {b} \in \{0,1\}^{n}\). Additionally, let \(S = ({\left \vert {\tilde {\psi }}\right \rangle }, \tilde {A}(\mathbf {a}), \tilde {B}(\mathbf {b}))\) be the strategy employed by Alice and Bob in playing the n CHSH games, where \({\left \vert {\tilde {\psi }}\right \rangle }\) is their shared state and \(\tilde {A}(\mathbf {a})\) and \(\tilde {B}(\mathbf {b})\) are their respective observables, for inputs \(\mathbf {a}, \mathbf {b}\).
What this means is that, up to a local isometry, the players share a state which is close in trace distance to a tensor product of Bell pairs and their measurements are close to the ideal measurements. This result, known as CHSH game rigidity, is the key idea for performing multi-prover verification using a classical verifier. We will refer to the protocol in this section as the RUV protocol.

CHSH games. In this subprotocol, the verifier will simply play CHSH games with Alice and Bob. To be precise, the verifier will repeatedly instruct Alice and Bob to perform the ideal measurements of the CHSH game. She will collect the answers of the two provers (which we shall refer to as CHSH statistics) and after a certain number of games, will compute the win rate of the two provers. The verifier is interested in the case when Alice and Bob win close to the maximum number of games as predicted by quantum mechanics. Thus, at the start of the protocol she takes \(\epsilon = poly(1/\mathcal {C})\) and accepts the statistics produced by Alice and Bob if and only if they win at least a fraction \((1 - \epsilon )cos^{2}(\pi /8)\) of the total number of games. Using the rigidity result, this implies that Alice and Bob share a state which is close to a tensor product of perfect Bell states (up to a local isometry). This step is schematically illustrated in Fig. 11.

State tomography. This time the verifier will instruct Alice to perform the ideal CHSH game measurements, as in the previous case. However, she instructs Bob to measure his halves of the entangled states so that they collapse to a set of resource states which will be used to perform gate teleportation. The resource states are chosen so that they are universal for quantum computation. Specifically, in the RUV protocol, the following resource states are used: \(\{ \mathsf {P}\left \vert {0}\right \rangle , (\mathsf {HP})_{2} \left \vert {{\Phi }_{+}}\right \rangle , (\mathsf {GY})_{2} \left \vert {{\Phi }_{+}}\right \rangle , \textsf {CNOT}_{2,4}\mathsf {P}_{2} \mathsf {Q}_{4} (\left \vert {{\Phi }_{+}}\right \rangle \otimes \left \vert {{\Phi }_{+}}\right \rangle ) : \mathsf {P}, \mathsf {Q} \in \{\textsf {X}, \textsf {Y}, \textsf {Z}, I \} \}\), where \(\mathsf {G} = exp \left (i \frac {\pi }{8} \textsf {Y}\right )\) and the subscripts indicate on which qubits the operators act. Assuming Alice and Bob do indeed share Bell states, Bob’s measurements will collapse Alice’s states to the same resource states (up to a one-time padding known to the verifier). Alice’s measurements on these states are used to check Bob’s preparation, effectively performing state tomography on the resource states.

Process tomography. This subprotocol is similar to the state tomography one, except the roles of Alice and Bob are reversed. The verifier instructs Bob to perform the ideal CHSH game measurements. Alice, on the other hand, is instructed to perform Bell basis measurements on pairs of qubits. As in the previous subprotocol, Bob’s measurement outcomes are used to tomographically check that Alice is indeed performing the correct measurements.

Computation. The final subprotocol combines the previous two. Bob is asked to perform the resource preparation measurements, while Alice is asked to perform Bell basis measurements. This effectively makes Alice perform the desired computation through repeated gate teleportation.
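The computation-by-teleportation step can be illustrated in miniature. The following sketch (our own toy single-qubit example, not the full RUV construction) teleports the gate \(\mathsf{G} = exp(i\frac{\pi}{8}\textsf{Y})\) from the resource-state set above through a Bell measurement; for the outcome corresponding to \(\left\vert\Phi_+\right\rangle\), the receiving qubit ends up in \(\mathsf{G}\left\vert\psi\right\rangle\), while other outcomes require a known Pauli correction:

```python
import numpy as np

I = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

# G = exp(i*pi/8*Y), the single-qubit gate from the RUV resource states
G = np.cos(np.pi / 8) * I + 1j * np.sin(np.pi / 8) * Y

# Input |psi> on qubit 1; resource state (I (x) G)|Phi+> on qubits 2,3
psi = np.array([0.6, 0.8], dtype=complex)
total = np.kron(psi, np.kron(I, G) @ phi_plus)

# Bell measurement on qubits 1,2 with outcome |Phi+> (one of 4 outcomes,
# each of probability 1/4; the other three need a Pauli correction known
# to the verifier).  Project and renormalise:
out = (phi_plus.conj() @ total.reshape(4, 2)) * 2

assert np.allclose(out, G @ psi)   # qubit 3 now holds G|psi>
```

Chaining such teleportations, with Bob preparing resource states and Alice performing Bell measurements, implements the delegated circuit gate by gate.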
An important aspect, in proving the correctness of the protocol, is the local similarity of pairs of subprotocols. For instance, Alice cannot distinguish between the CHSH subprotocol and the state tomography one, or between the process tomography one and computation. This is because, in those situations, she is asked to perform the same operations on her side, while being unaware of what Bob is doing. Moreover, since the verifier can test all but the computation part, if Alice deviates there will be a high probability of her deviation being detected. The same is true for Bob. In this way, the verifier can, essentially, enforce that the two players behave honestly and thus perform the correct quantum computation. Note that this is not the same as the blindness property, discussed in relation to the previous protocols. The RUV protocol does, however, possess that property as well. This follows from a more involved argument regarding the way in which the computation by teleportation is performed.
It should be noted that there are only two constraints imposed on the provers: that they cannot communicate once the protocol has commenced and that they produce win rates for the CHSH games that are close to the quantum optimal value. Importantly, there are no constraints on the quantum systems possessed by the provers, which can be arbitrarily large. Similarly, there are no constraints on what measurements they perform or what strategy they use in order to respond to the verifier. In spite of this, the rigidity result shows that for the provers to produce statistics that are accepted by the verifier, they must behave according to the ideal strategy (up to local isometry). Having the ability to fully characterise the provers’ shared state and their strategies in this way is what allows the verifier to check the correctness of the delegated quantum computation. This approach, of giving a full characterisation of the states and observables of the provers, is a powerful technique which is employed by all the other entanglement-based protocols, as we will see.
In terms of practically implementing such a protocol, there are two main considerations: the amount of communication required between the verifier and the provers and the required quantum capabilities of the provers. For the latter, it is easy to see that the RUV protocol requires both provers to be universal quantum computers (i.e. \(\mathsf {BQP}\) machines), having the ability to store multiple quantum states and perform quantum circuits on these states. In terms of the communication complexity, since the verifier is restricted to \(\mathsf {BPP}\), the amount of communication must scale polynomially with the size of the delegated computation. It was computed in [19] that this communication complexity is of the order \(O(\mathcal {C}^{c})\), with \(c > 8192\). Without even considering the constant factors involved, this scaling is far too large for any sort of practical implementation in the near future.^{36}
There are essentially two reasons for the large exponent in the scaling of the communication complexity. The first, as mentioned by the authors, is that the bounds derived in the rigidity result are not tight and could possibly be improved. The second and, arguably, more important reason stems from the rigidity result itself. In Theorem 8, notice that \(\epsilon = poly(\delta , 1/n)\) and \(\epsilon \rightarrow 0\) as \(n \rightarrow \infty \). We also know that the provers need to win a fraction \((1-\epsilon )cos^{2}(\pi /8)\) of CHSH games, in order to pass the verifier’s checks. Thus, the completeness-soundness gap of the protocol will be determined by \(\epsilon \). But since, for fixed \(\delta \), \(\epsilon \) is essentially inverse polynomial in n, the completeness-soundness gap will also be inverse polynomial in n. Hence, one requires polynomially many repetitions in order to boost the gap to constant.
We conclude with:
Theorem 9
The RUV protocol is an \(\mathsf {MIP^{*}}\) protocol achieving an inverse polynomial gap between completeness and soundness.
GKW Protocol
As mentioned, in the RUV protocol the two quantum provers must be universal quantum computers. One could ask whether this is a necessity or whether one of the provers could be reduced to a non-universal machine. In a paper by Gheorghiu, Kashefi and Wallden it was shown that the latter option is indeed possible. This leads to a protocol which we shall refer to as the GKW protocol. The protocol is based on the observation that one could use the state tomography subprotocol of RUV in such a way that one prover remotely prepares single-qubit states for the other prover. The preparing prover would then only be required to perform single-qubit measurements and, hence, would not need the full capabilities of a universal quantum computer. The specific single-qubit states that are chosen can be the ones used in the VUBQC protocol of Section 2.2. This latter prover can then be instructed to perform the VUBQC protocol with these states. Importantly, because the provers are not allowed to communicate, this preserves the blindness requirement of VUBQC. We will refer to the preparing prover as the sender and the remaining prover as the receiver. Once again, we assume the verifier wishes to delegate to the provers the evaluation of some quantum circuit \(\mathcal {C}\).

(1) Verified preparation. This part is akin to the state tomography subprotocol of RUV. The verifier is trying to certify the correct preparation of states \(\{ {\left \vert {+_{\theta }}\right \rangle } \}_{\theta }\) and \({\left \vert {0}\right \rangle }\), \({\left \vert {1}\right \rangle }\), where \(\theta \in \{0, \pi /4, ..., 7\pi /4 \}\). Recall that these are the states used in VUBQC. We shall refer to them as the resource states. This is done by self-testing a tensor product of Bell pairs and the observables of the two provers using CHSH games and the rigidity result of Theorem 8.^{37} As in the RUV protocol, the verifier will play multiple CHSH games with the provers. This time, however, each game will be an extended CHSH game (as defined in [18]) in which the verifier will ask each prover to measure an observable from the set \(\{ \mathsf {X}, \mathsf {Y}, \mathsf {Z}, (\mathsf {X} \pm \mathsf {Z})/\sqrt {2}, (\mathsf {Y} \pm \mathsf {Z})/\sqrt {2}, (\mathsf {X} \pm \mathsf {Y})/\sqrt {2} \}\). Alternatively, this can be viewed as the verifier choosing to play one of 6 possible CHSH games defined by the observables in that set.^{38} These observables are sufficient for obtaining the desired resource states. In particular, measuring the \(\mathsf {X}\), \(\mathsf {Y}\), and \((\textsf {X} \pm \textsf {Y}) / \sqrt {2}\) observables on the Bell pairs will collapse the entangled qubits to states of the form \(\{ {\left \vert {+_{\theta }}\right \rangle } \}_{\theta }\), while measuring \(\mathsf {Z}\) will collapse them to \({\left \vert {0}\right \rangle }\), \({\left \vert {1}\right \rangle }\). 
The verifier accepts if the provers win a fraction \((1-\epsilon )cos^{2}(\pi /8)\) of the CHSH games, where \(\epsilon = poly(\delta , 1/\mathcal {C})\), and \(\delta > 0\) is the desired trace distance between the reduced state on the receiver’s side and the ideal state consisting of the required resource states in tensor product, up to a local isometry (\(\epsilon \rightarrow 0\) as \(\delta \rightarrow 0\) or \(\mathcal {C} \rightarrow \infty \)). The verifier will also instruct the sender prover to perform additional measurements so as to carry out the remote preparation on the receiver’s side. This verified preparation is illustrated in Fig. 12.

(2) Verified computation. This part involves verifying the actual quantum computation, \(\mathcal {C}\). Once the resource states have been prepared on the receiver’s side, the verifier will perform the VUBQC protocol with that prover as if she had sent him the resource states. She accepts the outcome of the computation if all trap measurements succeed, as in VUBQC.
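The remote-preparation effect used in step (1) can be illustrated on a single ideal Bell pair (a toy sketch of the preparation idea only, not the self-tested procedure): projecting the sender's half of \(\left\vert\Phi_+\right\rangle\) onto \(\left\vert +_\theta\right\rangle\) leaves the receiver's half in \(\left\vert +_{-\theta}\right\rangle\), an angle known to the verifier.

```python
import numpy as np

phi_plus = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

def plus_theta(theta):
    """|+_theta> = (|0> + e^{i theta}|1>) / sqrt(2)."""
    return np.array([1, np.exp(1j * theta)], dtype=complex) / np.sqrt(2)

# Sender projects his half of |Phi+> onto |+_theta> (one of the two
# outcomes of measuring cos(theta) X + sin(theta) Y); the receiver's
# qubit collapses to |+_{-theta}>.
theta = 3 * np.pi / 4
unnorm = plus_theta(theta).conj() @ phi_plus.reshape(2, 2)
received = unnorm / np.linalg.norm(unnorm)

# Receiver's state matches the VUBQC resource state |+_{-theta}>
overlap = abs(np.vdot(plus_theta(-theta), received))
assert np.isclose(overlap, 1.0)
```

In the actual protocol the verifier cannot assume an ideal Bell pair; that is exactly what the CHSH-game self-testing of step (1) certifies.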
Note the essential difference, in terms of the provers’ requirements, between this protocol and the RUV protocol. In the RUV protocol, both provers had to perform entangling measurements on their side. However, in the GKW protocol, the sender prover is required to perform only single-qubit measurements. This means that the sender prover can essentially be viewed as an untrusted measurement device, whereas the receiver is the only universal quantum computer. For this reason, the GKW protocol is also described as a device-independent [75, 76] verification protocol. This stems from comparing it to VUBQC or the receive-and-measure protocols, of Section 3, where the verifier had a trusted preparation or measurement device. In this case, the verifier essentially has a measurement device (the sender prover) which is untrusted.
Of course, performing the verified preparation subprotocol and combining it with VUBQC raises some questions. For starters, in the VUBQC protocol, the state sent to the prover is assumed to be an ideal state (i.e. an exact tensor product of states of the form \(\left \vert {+_{\theta }}\right \rangle \) or \(\left \vert {0}\right \rangle \), \(\left \vert {1}\right \rangle \)). However, in this case the preparation stage is probabilistic in nature and therefore the state of the receiver will be \(\delta \)-close to the ideal tensor product state, for some \(\delta > 0\). How is the completeness-soundness gap of the VUBQC protocol affected by this? Stated differently, is VUBQC robust to deviations in the input state? A second aspect is that, since the resource state is prepared by the untrusted sender, even though it is \(\delta \)-close to ideal, it can, in principle, be correlated with the receiving prover’s system. Do these initial correlations affect the security of the protocol?
Both of these issues are addressed in the proofs of the GKW protocol. Firstly, assume that in the VUBQC protocol the prover receives a state which is \(\delta \)-close to ideal and uncorrelated with his private system. Any action of the prover can, in the most general sense, be modelled as a CPTP map. This CPTP map is contractive with respect to the trace distance (it cannot increase it) and so the output of this action will be \(\delta \)-close to the output in the ideal case. It follows from this that the probabilities of the verifier accepting a correct or incorrect result change by at most \(O(\delta )\). As long as \(\delta < 1/poly(\mathcal {C})\) (for a suitably chosen polynomial), the protocol remains a valid \(\mathsf {QPIP}\) protocol.
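The contractivity argument can be illustrated numerically: applying the same CPTP map to an ideal resource state and to a \(\delta \)-close preparation of it cannot increase their trace distance. A minimal numpy sketch (the depolarizing channel and the particular perturbed state are arbitrary illustrative choices, not taken from the protocol):

```python
import numpy as np

def trace_distance(r, s):
    # D(r, s) = (1/2) * trace norm of (r - s) = half the sum of singular values
    return 0.5 * np.sum(np.linalg.svd(r - s, compute_uv=False))

# Ideal resource state |+> and a slightly perturbed (delta-close) preparation of it
plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho_ideal = np.outer(plus, plus.conj())
tilted = np.array([np.cos(0.1), np.sin(0.1)], dtype=complex)  # slightly rotated |+>
rho_real = 0.9 * rho_ideal + 0.1 * np.outer(tilted, tilted.conj())

def depolarize(rho, p=0.3):
    """A CPTP map (depolarizing channel), standing in for an arbitrary prover deviation."""
    return (1 - p) * rho + p * np.eye(2) / 2

d_before = trace_distance(rho_ideal, rho_real)
d_after = trace_distance(depolarize(rho_ideal), depolarize(rho_real))
assert d_after <= d_before + 1e-12  # CPTP maps cannot increase trace distance
```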
These two facts essentially show that the GKW protocol is a valid entanglement-based protocol, as long as sufficient tests are performed in the verified preparation stage so that the system of resource states is close to the ideal resource states. As with the RUV protocol, this implies a large communication overhead, with the communication complexity being of the order \(O(\mathcal {C}^{c})\), where \(c > 2048\). One therefore has:
Theorem 10
The GKW protocol is an \(\mathsf {MIP^{*}}\) protocol achieving an inverse polynomial gap between completeness and soundness.
Before concluding this section, we describe the steering-based VUBQC protocol that we referenced in Section 3. As mentioned, the GKW protocol can be viewed as a protocol involving a verifier with an untrusted measurement device interacting with a quantum prover. In a subsequent paper, Gheorghiu, Wallden and Kashefi addressed the setting in which the verifier’s device becomes trusted [33]. They showed that one can define a self-testing game for Bell states which involves steering correlations [77] as opposed to nonlocal correlations. Steering correlations arise in a two-player setting in which one of the players is trusted to measure certain observables. This extra piece of information allows for the characterisation of Bell states with comparatively fewer statistics than in the nonlocal case. The steering-based VUBQC protocol, therefore, has exactly the same structure as the GKW protocol. First, the verifier uses this steering-based game, between her measurement device and the prover, to certify that the prover prepared a tensor product of Bell pairs. She then measures some of the Bell pairs so as to remotely prepare the resource states of VUBQC on the prover’s side and then performs the trap-based verification. As mentioned in Section 3, the protocol has a communication complexity of \(O(\mathcal {C}^{13} \log (\mathcal {C}))\), which is clearly an improvement over \(O(\mathcal {C}^{2048})\). This improvement stems from the trust added to the measurement device. However, the overhead is still too great for any practical implementation.
HPDF Protocol
Independently from the GKW approach, Hajdušek, Pérez-Delgado and Fitzsimons developed a protocol which also combines the CHSH rigidity result with the VUBQC protocol. This protocol, which we refer to as the HPDF protocol, has the same structure as GKW in the sense that it is divided into a verified preparation stage and a verified computation stage. The major difference is that the number of non-communicating provers is of the order \(O(poly(\mathcal {C}))\), where \(\mathcal {C}\) is the computation that the verifier wishes to delegate. Essentially, there is one prover for each Bell pair that is used in the verified preparation stage. This differs from the previous two approaches in that the verifier knows, a priori, that there is a tensor product structure of states. She then needs to certify that these states are close, in trace distance, to Bell pairs. The advantage of assuming the existence of the tensor product structure, instead of deriving it through the RUV rigidity result, is that the overhead of the protocol is drastically reduced. Specifically, the total number of provers, and hence the total communication complexity of the protocol, is of the order \(O(\mathcal {C}^{4} \log (\mathcal {C} ))\).

(1) Verified preparation. The verifier is trying to certify the correct preparation of the resource states \(\{ {\left \vert {+_{\theta }}\right \rangle } \}_{\theta }\) and \({\left \vert {0}\right \rangle }\), \(\left \vert {1}\right \rangle \), where \(\theta \in \{0, \pi /4, ..., 7\pi /4 \}\). The verifier instructs each prover to prepare a Bell pair and send one half to her untrusted measurement device. For each received state, she will randomly measure one of the following observables: \(\{ \mathsf {X}, \mathsf {Y}, \mathsf {Z}, (\mathsf {X} + \mathsf {Z})/\sqrt {2}, (\mathsf {Y} + \mathsf {Z})/\sqrt {2}, (\mathsf {X} + \mathsf {Y})/\sqrt {2}, (\mathsf {X} - \mathsf {Y})/\sqrt {2} \}\). Each prover is either instructed to randomly measure an observable from the set \(\{ \mathsf {X}, \mathsf {Y}, \mathsf {Z} \}\) or to not perform any measurement at all. The latter case corresponds to the qubits which are prepared for the computation stage. The verifier will compute correlations between the measurement outcomes of her device and the provers and accept if these correlations are above some threshold parametrized by \(\epsilon = poly(\delta , 1/\mathcal {C})\) (\(\epsilon \rightarrow 0\) as \(\delta \rightarrow 0\) or \(\mathcal {C} \rightarrow \infty \)), where \(\delta > 0\) is the desired trace distance between the reduced state on the receiving provers’ sides and the ideal state consisting of the required resource states in tensor product, up to a local isometry.
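The remote preparation underlying both this stage and the GKW one can be sketched on a single Bell pair: projecting one half onto \(\left \vert {+_{\theta }}\right \rangle \) leaves the other half in a state of the same resource family (here \(\left \vert {+_{-\theta }}\right \rangle \); in the protocols the residual rotation and the \(\pm \) outcome are byproducts the verifier can track). A minimal numpy sketch, with \(\theta = \pi /4\) as an arbitrary example:

```python
import numpy as np

def plus_theta(theta):
    """|+_theta> = (|0> + e^{i theta}|1>) / sqrt(2)."""
    return np.array([1, np.exp(1j * theta)], dtype=complex) / np.sqrt(2)

# Bell pair |Phi+> shared between the measuring party (qubit A) and the receiver (qubit B)
phi = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)

theta = np.pi / 4
ket = plus_theta(theta)

# Project qubit A onto |+_theta> (the +1 outcome of the (X+Y)/sqrt(2) observable)
M = phi.reshape(2, 2)          # M[a, b] = amplitude of |a>_A |b>_B
receiver = ket.conj() @ M      # <+_theta|_A applied to the Bell pair
receiver = receiver / np.linalg.norm(receiver)

# The receiver's qubit collapses to |+_{-theta}>, still a resource state
overlap = abs(np.vdot(plus_theta(-theta), receiver))
print(round(overlap, 6))  # 1.0
```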

(2) Verified computation. Assuming the verifier accepted in the previous stage, she instructs the provers that have received the resource states to act as a single prover. The verifier then performs the VUBQC protocol with that prover as if she had sent him the resource states. She accepts the outcome of the computation if all trap measurements succeed, as in VUBQC.
In their paper, Hajdušek et al. have proved that the procedure in the verified preparation stage of their protocol constitutes a self-testing procedure for Bell states. This procedure self-tests individual Bell pairs, as opposed to the CHSH rigidity theorem which self-tests a tensor product of Bell pairs. In this case, however, the tensor product structure is already given by having the \(O(\mathcal {C}^{4} \log (\mathcal {C} ))\) non-communicating provers. The correctness of the verified computation stage follows from the robustness of the VUBQC protocol, as mentioned in the previous section. One therefore has the following:
Theorem 11
The HPDF protocol is an \(\mathsf {MIP^{*}[poly]}\) protocol achieving an inverse polynomial gap between completeness and soundness.
4.2 Verification Based on SelfTesting Graph States
We saw, in the HPDF protocol, that having multiple non-communicating provers presents a certain advantage in characterising the shared state of these provers, due to the tensor product structure of the provers’ Hilbert spaces. This approach not only leads to simplified proofs, but also to a reduced overhead in characterising this state, when compared to the CHSH rigidity Theorem 8, from [18].
Another approach which takes advantage of this tensor product structure is the one of McKague from [21]. In his protocol, as in HPDF, the verifier will interact with \(O(poly(\mathcal {C}))\) provers. Specifically, there are multiple groups of \(O(\mathcal {C})\) provers, each group jointly sharing a graph state \({\left \vert {G}\right \rangle }\). In particular, each prover should hold only one qubit from \({\left \vert {G}\right \rangle }\). The central idea is for the verifier to instruct the provers to measure their qubits to either test that the provers are sharing the correct graph state or to perform an MBQC computation of \(\mathcal {C}\). This approach is similar to the stabilizer measurement-only protocol of Section 3.1 and, just like in that protocol or the Test-or-Compute or RUV protocols, the verifier will randomly alternate between tests and computation.
The verifier will choose one of the n groups of provers at random to perform the computation \(\mathcal {C}\). The computation is performed in an MBQC fashion. In other words, the verifier will pick appropriate measurement angles \(\{\theta _{v}\}_{v \in V(G)}\), for all vertices in G, as well as a partial order for the vertices. To perform the computation \(\mathcal {C}\), the verifier instructs the provers to measure the qubits of \({\left \vert {G}\right \rangle }\) with the observables \(\mathsf {R}(\theta _{v})\), defined above. The partial order establishes the temporal ordering of these measurements. Additionally, the \(\theta _{v}\) angles, for the \(\mathsf {R}(\theta _{v})\) measurements, should be updated so as to account for corrections arising from previous measurement outcomes. In other words, the angles \(\{\theta _{v}\}_{v \in V(G)}\), which we shall refer to as computation angles, are the ideal angles assuming no corrections. See Section A for more details about measurement-based quantum computations.

Vertex test, chosen with probability \(\frac {n}{N_{G}}\). The verifier randomly picks a vertex v of the graph. She instructs the prover having the qubit corresponding to v to measure it in the \(\mathsf {X}\) basis. At the same time, she asks the provers having the neighboring qubits, \(N(v)\), to measure them in the \(\mathsf {Z}\) basis. The \(\pm 1\) outcomes of the provers are multiplied by the verifier and she checks to see whether this product is \(+1\) (since she is essentially measuring the \(S_{v}\) stabilizer). If this is the case, she accepts, otherwise she rejects.

Triangle test, chosen with probability \(\frac {T}{N_{G}}\). Similar to the previous test, the verifier picks a random triangle in G, consisting of the vertices \(\tau \). She instructs the provers having these vertices to measure them in the \(\mathsf {X}\) basis, while their neighbors (\(N(\tau )\)) are instructed to measure in the \(\mathsf {Z}\) basis. The verifier checks if the product of their outcomes is \(-1\) and if so, she accepts, otherwise she rejects.

\(\boldsymbol {R(\theta )}\) test, chosen with probability \(\frac {2n}{N_{G}}\). In this case the verifier first randomly picks a vertex v of the graph, a neighbor u of v (so \(u \in N(v)\)) and \(t \in \{-1, +1\}\). She then randomly picks \(\mathsf {X}\) with probability \(p = \frac {\cos (\theta _{v})}{\cos (\theta _{v}) + \sin (\theta _{v})}\) or \(\mathsf {Z}\) with probability \(1-p\), where \(\theta _{v}\) is the computation angle associated with v. If she chose \(\mathsf {X}\), then she queries the prover holding v to measure \(\mathsf {R}(t \theta _{v})\), and his neighbors (\(N(v)\)) to measure \(\mathsf {Z}\). She accepts if the product of their replies is \(+ 1\). If the verifier instead chose \(\mathsf {Z}\), then she instructs the prover holding v to measure \(t \mathsf {R}(t \theta _{v})\), the prover holding u to measure \(\mathsf {X}\) and the neighbors of u and v to measure \(\mathsf {Z}\). She accepts if the product of their outcomes is \(+ 1\).
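The vertex and triangle tests can be checked directly on a small example. The sketch below (an illustrative construction, not part of the protocol) builds the 3-qubit triangle graph state and verifies that the vertex-test product has expectation \(+1\) while the triangle-test product has expectation \(-1\), the sign flip coming from the anticommutation picked up when multiplying the three vertex stabilizers:

```python
import numpy as np

I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.])

def kron(*ops):
    out = np.array([[1.0]])
    for o in ops:
        out = np.kron(out, o)
    return out

def cz(i, j, n=3):
    """Controlled-Z between qubits i and j (qubit 0 is the most significant)."""
    d = np.ones(2 ** n)
    for b in range(2 ** n):
        if (b >> (n - 1 - i)) & 1 and (b >> (n - 1 - j)) & 1:
            d[b] = -1
    return np.diag(d)

# Triangle graph state: CZ on every edge of |+++>
plus3 = np.ones(8) / np.sqrt(8)
G = cz(0, 1) @ cz(1, 2) @ cz(0, 2) @ plus3

# Vertex test at v = 0: X on v, Z on its neighbours -> expectation +1
vertex = G @ kron(X, Z, Z) @ G
# Triangle test: X on all three vertices -> expectation -1
triangle = G @ kron(X, X, X) @ G
print(round(vertex, 6), round(triangle, 6))  # 1.0 -1.0
```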
Together, these three tests are effectively performing a self-test of the graph state \({\left \vert {G}\right \rangle }\) and the provers’ observables. Specifically, McKague showed the following:
Theorem 12

Computation. In this case, the verifier instructs the provers to perform the MBQC computation of \(\mathcal {C}\) on the graph state \({\left \vert {G}\right \rangle }\), as described above.

Testing \({\left \vert {G}\right \rangle }\). In this case, the verifier will randomly choose between one of the three tests described above, accepting if and only if the test succeeds.
It is therefore the case that:
Theorem 13
McKague’s protocol is an \(\mathsf {MIP^{*}}[poly]\) protocol having an inverse polynomial gap between completeness and soundness.
As with the previous approaches, the reason for the inverse polynomial gap between completeness and soundness is the use of a self-test with robustness \(\epsilon = poly(1/n)\) (and \(\epsilon \rightarrow 0\) as \(n \rightarrow \infty \)). In turn, this leads to a polynomial overhead for the protocol as a whole. Specifically, McKague showed that the total number of required provers and communication complexity, for a quantum computation \(\mathcal {C}\), is of the order \(O(\mathcal {C}^{22})\). Note, however, that each of the provers must only perform a single-qubit measurement. Hence, apart from the initial preparation of the graph state \({\left \vert {G}\right \rangle }\), the individual provers are not universal quantum computers, merely single-qubit measurement devices.
4.3 Post Hoc Verification
In Section 3.2 we reviewed a protocol by Morimae and Fitzsimons for post hoc verification of quantum computation. Of course, that protocol involved a single quantum prover and a verifier with a measurement device. In this section, we review two post hoc protocols for the multi-prover setting having a classical verifier. We start with the first post hoc protocol, by Fitzsimons and Hajdušek.
FH Protocol
Similar to the 1S-Posthoc protocol from Section 3.2, the protocol of Fitzsimons and Hajdušek, which we shall refer to as the FH protocol, also makes use of the local Hamiltonian problem stated in Definition 9. As mentioned, this problem is complete for the class \(\mathsf {QMA}\), which consists of problems that can be decided by a \(\mathsf {BQP}\) verifier receiving a witness state from a prover. Importantly, the size of the witness state is polynomial in the size of the input to the problem. However, Fitzsimons and Vidick proposed a protocol for the k-local Hamiltonian problem (and hence any \(\mathsf {QMA}\) problem), involving 5 provers, in which the quantum state received by the verifier is of constant size [79]. That protocol is the basis for the FH protocol and so we start with a description of it.

Energy measurement. In this case, the verifier will pick a random term \(H_{i}\), from H, and ask each prover for k qubits corresponding to the logical states on which \(H_{i}\) acts. The verifier will then perform a two-outcome measurement, defined by the operators \(\{H_{i}, I - H_{i} \}\), on the received qubits. As in the 1S-Posthoc protocol, this provides an estimate for the energy of \(\left \vert {\psi }\right \rangle \). The verifier accepts if the measurement outcome indicates the state has energy below a.

Encoding measurement. In this case the verifier will choose at random between two subtests. In the first subtest, she will choose j at random from 1 to n and ask each prover to return the physical qubits comprising the j-th logical qubit. She then measures these qubits to check whether their joint state lies within the code space, accepting if it does and rejecting otherwise. In the second subtest, the verifier chooses a random set, S, of 3 values between 1 and n. She also picks one of the values at random, labelled j. The verifier then asks a randomly chosen prover for the physical qubits of the logical states indexed by the values in S, while asking the remaining provers for their shares of logical qubit j. As an example, if the set contains the values \(\{1, 5, 8 \}\), then the verifier picks one of the 5 provers at random and asks him for his shares (physical qubits) of logical qubits 1, 5 and 8 from \(\left \vert {\psi }\right \rangle \). Assuming that the verifier also picked the random value 8 from the set, then she will ask the remaining provers for their shares of logical qubit 8. The verifier then measures logical qubit j (or 8, in our example) and checks if it is in the code subspace, accepting if it is and rejecting otherwise. The purpose of this second subtest is to guarantee that the provers respond with different qubits when queried.
One can see that when the witness state exists and the provers follow the protocol, the verifier will indeed accept with high probability. On the other hand, Fitzsimons and Vidick show that when there is no witness state, the provers will fail at convincing the verifier to accept with high probability. This is because they cannot simultaneously provide qubits yielding the correct energy measurements and also have their joint state be in the correct code space. This also illustrates why their protocol required testing both of these conditions. If one wanted to simplify the protocol, so as to have a single prover providing the qubits for the verifier’s \(\{H_{i}, I - H_{i} \}\) measurement, then it is no longer possible to prove soundness. The reason is that even if there does not exist a \({\left \vert {\psi }\right \rangle }\) having energy less than a for H, the prover could still find a group of k qubits which minimize the energy constraint for the specific \(H_{i}\) that the verifier wishes to measure. The second subtest prevents this from happening, with high probability, since it forces the provers to consistently provide the requested indexed qubits from the state \({\left \vert {\psi }\right \rangle }\).
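To make the energy measurement concrete, here is a toy numpy example of the two-outcome measurement \(\{H_{i}, I - H_{i} \}\) for a single 2-local term (the term \(H_{i} = |11\rangle \langle 11|\) and the test states are invented for illustration, not taken from the protocol): the probability of the \(H_{i}\) outcome is exactly the local energy \(tr(H_{i} \rho )\), which is why repeating this measurement estimates the energy of \(\left \vert {\psi }\right \rangle \).

```python
import numpy as np

# A 2-local term penalizing |11>: H_i = |11><11| (0 <= H_i <= I, so {H_i, I - H_i} is a POVM)
H_i = np.zeros((4, 4)); H_i[3, 3] = 1.0

def energy_outcome_prob(rho):
    """Probability of the 'H_i' outcome of the POVM {H_i, I - H_i}:
    an unbiased estimator of the local energy tr(H_i rho)."""
    return np.real(np.trace(H_i @ rho))

ground = np.zeros(4); ground[0] = 1.0    # |00>, a zero-energy state for this term
excited = np.zeros(4); excited[3] = 1.0  # |11>, energy 1 for this term
print(energy_outcome_prob(np.outer(ground, ground)),    # 0.0
      energy_outcome_prob(np.outer(excited, excited)))  # 1.0
```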
Generators for the 5-qubit code:

\(g_{1} = I\mathsf {X}\mathsf {Z}\mathsf {Z}\mathsf {X}\)
\(g_{2} = \mathsf {X}I\mathsf {X}\mathsf {Z}\mathsf {Z}\)
\(g_{3} = \mathsf {Z}\mathsf {X}I\mathsf {X}\mathsf {Z}\)
\(g_{4} = \mathsf {Z}\mathsf {Z}\mathsf {X}I\mathsf {X}\)
Generators with the fifth operator rotated:

\(g^{\prime }_{1} = I\mathsf {X}\mathsf {Z}\mathsf {Z}\mathsf {X}^{\prime }\)
\(g^{\prime }_{2} = \mathsf {X}I\mathsf {X}\mathsf {Z}\mathsf {Z}^{\prime }\)
\(g^{\prime }_{3} = \mathsf {Z}\mathsf {X}I\mathsf {X}\mathsf {Z}^{\prime }\)
\(g^{\prime }_{4} = \mathsf {Z}\mathsf {Z}\mathsf {X}I\mathsf {X}^{\prime }\)
We have discussed how a classical verifier can test that the 5 provers share a state encoded in the logical space of the 5-qubit code. But to achieve the functionality of the Fitzsimons and Vidick protocol, one needs to also delegate to the provers the measurement of a local term \(H_{i}\) from the Hamiltonian. This is again possible using the 5-player nonlocal game. Firstly, it can be shown, without loss of generality, that each \(H_{i}\), in the k-local Hamiltonian, can be expressed as a linear combination of terms comprised entirely of I, \(\mathsf {X}\) and \(\mathsf {Z}\). This means that the Hamiltonian itself is a linear combination of such terms, \(H = {\sum }_{i} a_{i} S_{i}\), where \(a_{i}\) are real coefficients and \(S_{i}\) are k-local \(\mathsf {X}\mathsf {Z}\)-terms. This is akin to the \(\mathsf {X}\mathsf {Z}\)-Hamiltonian from the 1S-Posthoc protocol.
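As a small illustration of the decomposition \(H = {\sum }_{i} a_{i} S_{i}\), the sketch below builds a 2-qubit Hamiltonian from \(\mathsf {X}\mathsf {Z}\)-terms and reads the coefficients back using the trace-orthogonality of distinct Pauli strings (the particular Hamiltonian is an invented example; this does not illustrate the non-trivial reduction of a general \(H_{i}\) to \(\mathsf {X}\mathsf {Z}\) form, only the linear-combination structure):

```python
import numpy as np

I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.])

# A 2-qubit XZ-Hamiltonian, H = sum_i a_i S_i with S_i tensor products of I, X, Z
terms = {'ZZ': np.kron(Z, Z), 'XI': np.kron(X, I2), 'IX': np.kron(I2, X)}
coeffs = {'ZZ': -1.0, 'XI': 0.5, 'IX': 0.5}
H = sum(coeffs[k] * terms[k] for k in terms)

# Distinct Pauli strings are trace-orthogonal, tr(S_i S_j) = 4 delta_ij on 2 qubits,
# so each coefficient can be read back as a_i = tr(S_i H) / 4
for k in terms:
    a = float(np.trace(terms[k] @ H).real) / 4
    assert abs(a - coeffs[k]) < 1e-12
```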

(1) The verifier instructs the provers to share the Feynman-Kitaev state, associated with her circuit \(\mathcal {C}\), encoded in the 5-qubit error correcting code, as described above. We denote this state as \(\left \vert {\psi }_{L}\right \rangle \). The provers are then split up and not allowed to communicate. The verifier then considers a k-local Hamiltonian having \(\left \vert {\psi }_{L}\right \rangle \) as a ground state as well as the threshold values a and b, with \(b - a > 1/poly(\mathcal {C})\).

(2) The verifier chooses to either perform the energy measurement or the encoding measurement as described above. For the energy measurement she asks the provers to measure a randomly chosen \(\mathsf {X}\mathsf {Z}\)-term from the local Hamiltonian. The verifier accepts if the outcome indicates that the energy of \(\left \vert {\psi }_{L}\right \rangle \) is below a. For the encoding measurement the verifier instructs the provers to perform the measurements of the 5-player nonlocal game. She accepts if the provers win the game, indicating that their shared state is correctly encoded.
One therefore has:
Theorem 14
The FH protocol is an \(\mathsf {MIP^{*}}\) protocol achieving an inverse polynomial gap between completeness and soundness.
There are two significant differences between this protocol and the previous entanglement-based approaches. The first is that the protocol does not use self-testing to enforce that the provers are performing the correct operations in order to implement the computation \(\mathcal {C}\). Instead, the computation is checked indirectly by using the self-testing result to estimate the ground-state energy of the k-local Hamiltonian. This then provides an answer to the considered \(\mathsf {BQP}\) computation viewed as a decision problem.^{42} The second difference is that the protocol is not blind. In all the previous approaches, the provers had to share an entangled state which was independent of the computation, up to a certain size. However, in the FH protocol, the state that the provers need to share depends on which quantum computation the verifier wishes to perform.
In terms of communication complexity, the protocol, as described, would involve only 2 rounds of interaction between the verifier and the provers. However, since the completeness-soundness gap is inverse polynomial, and therefore decreases with the size of the computation, it becomes necessary to repeat the protocol multiple times to properly differentiate between the accepting and rejecting cases. On the one hand, the local Hamiltonian itself has an inverse polynomial gap between the two cases of acceptance and rejection. As shown in [30, 68], for the Hamiltonian resulting from a quantum circuit, \(\mathcal {C}\), that gap is \(1/\mathcal {C}^{2}\). To boost this gap to constant, the provers must share \(O(\mathcal {C}^{2})\) copies of the Feynman-Kitaev state.
On the other hand, the self-testing result has an inverse polynomial robustness. This means that estimating the energy of the ground state is done with a precision which scales inverse polynomially in the number of qubits of the state. More precisely, according to Ji’s result, the scaling should be \(1/O(N^{16})\), where N is the number of qubits on which the Hamiltonian acts [81]. This means that the protocol should be repeated on the order of \(O(N^{16})\) times, in order to boost the completeness-soundness gap to constant.
NV Protocol
The second entanglement-based post hoc protocol was developed by Natarajan and Vidick [23] and we therefore refer to it as the NV protocol. The main ideas of the protocol are similar to those of the FH protocol. However, Natarajan and Vidick prove a self-testing result having constant robustness and use it in order to perform the energy estimation of the ground state for the local Hamiltonian.
The statement of their general self-testing result is too involved to state here, so instead we reproduce a corollary to their result (also from [23]) that is used for the NV protocol. This corollary involves self-testing a tensor product of Bell pairs:
Theorem 15
For any integer n there exists a two-player nonlocal game, known as the Pauli braiding test (PBT), with \(O(n)\)-bit questions and \(O(1)\)-bit answers satisfying the following:
This theorem is essentially a self-testing result for a tensor product of Bell states, and Pauli \(\mathsf {X}\) and \(\mathsf {Z}\) observables, achieving a constant robustness. The Pauli braiding test is used in the NV protocol in a similar fashion to Ji’s result, from the previous subsection, in order to certify that a set of provers are sharing a state that is encoded in a quantum error correcting code. Again, this relies on a bipartition of the provers into two sets, such that an encoded state shared across the bipartition is equivalent to a Bell pair.

Linearity test. In this test, the referee will randomly pick a basis setting, W, from the set \(\{ \textsf {X}, \textsf {Z} \}\). She then randomly chooses two strings \(\mathbf {a_{1}}, \mathbf {a_{2}} \in \{ 0, 1 \}^{n}\) and sends them to Alice. With equal probability, the referee takes \(\mathbf {b_{1}}\) to be either \(\mathbf {a_{1}}\), \(\mathbf {a_{2}}\) or \(\mathbf {a_{1}} \oplus \mathbf {a_{2}}\). She also randomly chooses a string \(\mathbf {b_{2}} \in \{ 0, 1 \}^{n}\) and sends the pair \((\mathbf {b_{1}}, \mathbf {b_{2}})\) to Bob.^{43} Alice and Bob are then asked to measure the observables \(W(\mathbf {a_{1}})\), \(W(\mathbf {a_{2}})\) and \(W(\mathbf {b_{1}})\), \(W(\mathbf {b_{2}})\), respectively, on their shared state. We denote Alice’s outcomes as \(a_{1}\), \(a_{2}\) and Bob’s outcomes as \(b_{1}\), \(b_{2}\). If \(\mathbf {b_{1}} = \mathbf {a_{1}}\) (or \(\mathbf {b_{1}}=\mathbf {a_{2}}\), respectively), the referee checks that \(b_{1} = a_{1}\) (or \(b_{1} = a_{2}\), respectively). If \(\mathbf {b_{1}} = \mathbf {a_{1}} \oplus \mathbf {a_{2}}\), she checks that \(b_{1} = a_{1} a_{2}\). This test is checking, on the one hand, that when Alice and Bob measure the same observables, they should get the same outcome (which is what should happen if they share Bell states). On the other hand, and more importantly, it is checking the commutation and linearity of their operators, i.e. that \(W(\mathbf {a_{1}})W(\mathbf {a_{2}}) = W(\mathbf {a_{2}})W(\mathbf {a_{1}}) = W(\mathbf {a_{1}} \oplus \mathbf {a_{2}})\) (and similarly for Bob’s operators).
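Here the operator \(W(\mathbf {a})\) acts as W on the positions where \(\mathbf {a}\) has a 1 and as identity elsewhere, so the commutation and linearity relations being tested can be checked directly on small instances. A short numpy sketch (the strings are arbitrary examples):

```python
import numpy as np
from functools import reduce

X = np.array([[0., 1.], [1., 0.]]); I2 = np.eye(2)

def W(a, P=X):
    """W(a): tensor product with P on positions where a_i = 1, identity elsewhere."""
    return reduce(np.kron, [P if bit else I2 for bit in a])

a1 = [1, 0, 1, 0]
a2 = [1, 1, 0, 0]
a12 = [b1 ^ b2 for b1, b2 in zip(a1, a2)]  # a1 XOR a2

# Linearity and commutation: W(a1)W(a2) = W(a2)W(a1) = W(a1 XOR a2)
assert np.allclose(W(a1) @ W(a2), W(a12))
assert np.allclose(W(a1) @ W(a2), W(a2) @ W(a1))
```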

Anticommutation test. The referee randomly chooses two strings \(\mathbf {x}, \mathbf {z} \in \{ 0, 1 \}^{n}\), such that \(\mathbf {x} \cdot \mathbf {z} = 1 \bmod 2\), and sends them to both players. These strings define the observables \(\mathsf {X}(\mathbf {x})\) and \(\mathsf {Z}(\mathbf {z})\), which are anticommuting because of the imposed condition on \(\mathbf {x}\) and \(\mathbf {z}\). The referee then engages in a nonlocal game with Alice and Bob designed to test the anticommutation of these observables for both of their systems. This can be any game that tests this property, such as the CHSH game or the magic square game, described in [82, 83]. As an example, if the referee chooses to play the CHSH game, then Alice will be instructed to measure either \(\mathsf {X}(\mathbf {x})\) or \(\mathsf {Z}(\mathbf {z})\) on her half of the shared state, while Bob would be instructed to measure either \((\mathsf {X}(\mathbf {x}) + \mathsf {Z}(\mathbf {z}))/\sqrt {2}\) or \((\mathsf {X}(\mathbf {x}) - \mathsf {Z}(\mathbf {z}))/\sqrt {2}\). The test is passed if the players achieve the win condition of the chosen anticommutation game. Note that for the case of the magic square game, the condition can be achieved with probability 1 when the players implement the optimal quantum strategy. For this reason, if the chosen game is the magic square game, then \(\omega ^{*}(PBT) = 1\).
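The anticommutation condition itself is easy to verify numerically: \(\mathsf {X}(\mathbf {x})\) and \(\mathsf {Z}(\mathbf {z})\) anticommute precisely when \(\mathbf {x} \cdot \mathbf {z} = 1 \bmod 2\), since each position where both strings have a 1 contributes a sign flip when the operators are swapped. A small sketch with arbitrary example strings:

```python
import numpy as np
from functools import reduce

X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.]); I2 = np.eye(2)

def pauli_string(P, bits):
    """Tensor product with P where the bit is 1, identity elsewhere."""
    return reduce(np.kron, [P if b else I2 for b in bits])

x = [1, 1, 0]
z = [1, 0, 1]  # x . z = 1 mod 2: the strings overlap on exactly one position
Xx = pauli_string(X, x)
Zz = pauli_string(Z, z)

# An odd number of overlapping positions makes X(x) and Z(z) anticommute
assert np.allclose(Xx @ Zz, -(Zz @ Xx))
```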

Consistency test. This test combines the previous two. The referee randomly chooses a basis setting, \(W \in \{ \textsf {X}, \textsf {Z} \}\) and two strings \(\mathbf {x}, \mathbf {z} \in \{ 0, 1 \}^{n}\). Additionally, let \(\mathbf {w} = \mathbf {x}\), if \(W = \mathsf {X}\) and \(\mathbf {w} = \mathbf {z}\) if \(W = \mathsf {Z}\). The referee sends W, \(\mathbf {x}\) and \(\mathbf {z}\) to Alice. With equal probability the referee will then choose to perform one of two subtests. In the first subtest, the referee sends \(\mathbf {x}, \mathbf {z}\) to Bob as well and plays the anticommutation game with both, such that Alice’s observable is \(W(\mathbf {w})\). As an example, if \(W = \mathsf {X}\) and the game is the CHSH game, then Alice would be instructed to measure \(\mathsf {X}(\mathbf {x})\), while Bob is instructed to measure either \((\mathsf {X}(\mathbf {x}) + \mathsf {Z}(\mathbf {z}))/\sqrt {2}\) or \((\mathsf {X}(\mathbf {x}) - \mathsf {Z}(\mathbf {z}))/\sqrt {2}\). This subtest essentially mimics the anticommutation test and is passed if the players achieve the win condition of the game. In the second subtest, which mimics the linearity test, the referee sends W, \(\mathbf {w}\) and a random string \(\mathbf {y} \in \{ 0, 1 \}^{n}\) to Bob, instructing him to measure \(W(\mathbf {w})\) and \(W(\mathbf {y})\). Alice is instructed to measure \(W(\mathbf {x})\) and \(W(\mathbf {z})\). The test is passed if Alice and Bob obtain the same result for the \(W(\mathbf {w})\) observable. For instance, if \(W = \mathsf {X}\), then both Alice and Bob will measure \(\mathsf {X}(\mathbf {x})\) and their outcomes for that measurement must agree.
Generators for Steane's 7-qubit code:

\(g_{1} = III\mathsf {X}\mathsf {X}\mathsf {X}\mathsf {X}\)
\(g_{2} = I\mathsf {X}\mathsf {X}II\mathsf {X}\mathsf {X}\)
\(g_{3} = \mathsf {X}I\mathsf {X}I\mathsf {X}I\mathsf {X}\)
\(g_{4} = III\mathsf {Z}\mathsf {Z}\mathsf {Z}\mathsf {Z}\)
\(g_{5} = I\mathsf {Z}\mathsf {Z}II\mathsf {Z}\mathsf {Z}\)
\(g_{6} = \mathsf {Z}I\mathsf {Z}I\mathsf {Z}I\mathsf {Z}\)
The reason Natarajan and Vidick use this specific error correcting code is because it has two properties that are necessary for the application of their self-testing result. The first property is that each stabilizer generator is a tensor product of only the I, \(\mathsf {X}\) and \(\mathsf {Z}\) operators. This, of course, is true for the 5-qubit code as well. The second property is a symmetry condition: for each index \(i \in \{1, ..., 6\}\), there exists a pair of stabilizer generators, \(S_{\mathsf {X}}\) and \(S_{\mathsf {Z}}\), such that \(S_{\mathsf {X}}\) consists exclusively of I and \(\mathsf {X}\) operators and has an \(\mathsf {X}\) on position i, whereas \(S_{\mathsf {Z}}\) is identical to \(S_{\mathsf {X}}\) but with all \(\mathsf {X}\) operators replaced with \(\mathsf {Z}\). This property is not satisfied by the 5-qubit code and will allow the verifier to delegate to the provers measurements of the form \(\mathsf {X}(\mathbf {x})\) and \(\mathsf {Z}(\mathbf {z})\), where \(\mathbf {x}\) and \(\mathbf {z}\) are binary strings, as in the Pauli braiding test.
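The symmetry condition can be checked mechanically from the generator table above. The following sketch verifies that every X-type generator of the Steane code has a Z-type partner with identical support:

```python
# The six Steane-code stabilizer generators as strings over {I, X, Z}
gens = ['IIIXXXX', 'IXXIIXX', 'XIXIXIX',   # X-type: g1, g2, g3
        'IIIZZZZ', 'IZZIIZZ', 'ZIZIZIZ']   # Z-type: g4, g5, g6

x_type = [g for g in gens if set(g) <= {'I', 'X'}]
z_type = [g for g in gens if set(g) <= {'I', 'Z'}]

# Symmetry property: each X-type generator, with X -> Z, is itself a generator
for gx in x_type:
    assert gx.replace('X', 'Z') in z_type
```

By contrast, running the same check on the 5-qubit code generators listed earlier fails, since each of them mixes \(\mathsf {X}\) and \(\mathsf {Z}\) operators.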

Pauli braiding test. The verifier chooses one of the 7 provers at random to be Alice, while the remaining provers will take on the role of Bob. The verifier then performs the Pauli braiding test with Alice and Bob in order to self-test the logical qubits in \(\left \vert {\psi }_{L}\right \rangle \). As mentioned, each logical qubit, encoded in the 7-qubit code, is equivalent to a Bell pair under the chosen bipartition. The Pauli braiding test is essentially checking that the provers have correctly encoded each of the qubits in \(\left \vert {\psi }\right \rangle \) and that they are correctly measuring \(\mathsf {X}\) and \(\mathsf {Z}\) observables. The verifier rejects if the provers do not pass the test.

Energy test. In this case, the verifier will pick an \(\mathsf {X}\textsf {Z}\)-term, S, from H, at random, and instruct the provers to measure this term on their shared state. Note that S consists of logical \(\mathsf {X}\) and \(\mathsf {Z}\) operators. This means that each prover will need to perform local measurements so that their joint measurement acts as either \(\mathsf {X}_{L}\) or \(\mathsf {Z}_{L}\), respectively. Additionally, \(\mathsf {X}_{L}\) and \(\mathsf {Z}_{L}\), for the 7-qubit code, are expressed as tensor products of physical \(\mathsf {X}\) and \(\mathsf {Z}\) operations. This means that each prover will be instructed to measure operators of the form \(\mathsf {X}(\mathbf {x})\) and \(\mathsf {Z}(\mathbf {z})\), on its physical qubits, where \(\mathbf {x}, \mathbf {z} \in \{0, 1\}^{N}\), and N is the total number of logical qubits on which H acts. The product of the prover’s \(\mathsf {X}(\mathbf {x})\) and \(\mathsf {Z}(\mathbf {z})\) outcomes is the outcome for that prover’s share of S. The verifier then takes all of these \(\pm 1\) outcomes and multiplies them together, thus obtaining the outcome of measuring S itself. She accepts if the outcome of the measurement indicates that the estimated energy of \(\left \vert {\psi }\right \rangle \) is below a and rejects otherwise.
Energy consistency test. This test is a combination of the previous two. As in the Pauli braiding test, the provers are bipartitioned into two sets: one consisting of a single prover, denoted Alice, and the other consisting of the remaining 6 provers, jointly denoted as Bob. The verifier now performs a test akin to the linearity test from Pauli braiding. She randomly chooses \(W \in \{ \mathsf {X}, \mathsf {Z} \}\), and lets \(\mathbf {w} = \mathbf {x}\) if \(W = \mathsf {X}\), and \(\mathbf {w} = \mathbf {z}\) if \(W = \mathsf {Z}\). She also chooses \(\mathbf {x}, \mathbf {z} \in \{0, 1\}^{N}\) according to the same distribution as in the energy test (i.e. as if she were instructing the provers to measure a random \(\mathsf {X}\mathsf {Z}\)-term from H). The verifier then does one of the following:

With probability \(1/2\), instructs Alice to measure the observables \(\mathsf {X}(\mathbf {x})\) and \(\mathsf {Z}(\mathbf {z})\). Additionally, the verifier chooses \(\mathbf {y} \in \{0, 1\}^{N}\) at random and instructs Bob to measure \(W(\mathbf {y})\) and \(W(\mathbf {y} \oplus \mathbf {w})\). If \(W = \mathsf {X}\), the verifier accepts if the product of Bob’s answers agrees with Alice’s answer for the \(\mathsf {X}(\mathbf {x})\) observable. If \(W = \mathsf {Z}\), the verifier accepts if the product of Bob’s answers agrees with Alice’s answer for the \(\mathsf {Z}(\mathbf {z})\) observable. Note that this is the correct check since, if Bob is behaving honestly, the product of his two outcomes is the outcome of measuring \(W(\mathbf {w})\).

With probability \(1/4\), instructs Alice to measure \(W(\mathbf {y})\) and \(W(\mathbf {v})\), where \(\mathbf {y}, \mathbf {v} \in \{0, 1\}^{N}\) are chosen at random. Bob is instructed to measure \(W(\mathbf {y})\) and \(W(\mathbf {y} \oplus \mathbf {w})\). The verifier accepts if the outcomes of Alice and Bob for \(W(\mathbf {y})\) agree.

With probability \(1/4\), instructs Alice to measure \(W(\mathbf {y} \oplus \mathbf {w})\) and \(W(\mathbf {v})\), where \(\mathbf {y}, \mathbf {v} \in \{0, 1\}^{N}\) are chosen at random. Bob is instructed to measure \(W(\mathbf {y})\) and \(W(\mathbf {y} \oplus \mathbf {w})\). The verifier accepts if the outcomes of Alice and Bob for \(W(\mathbf {y} \oplus \mathbf {w})\) agree.
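The verifier's side of this branching can be sketched as follows; all names, and the representation of observables as (Pauli, bit-string) pairs, are our own illustrative choices:

```python
import random


def consistency_round(W, x, z):
    """One round of the energy consistency test, verifier's side.

    Returns Alice's and Bob's instructions as (Pauli, bit-string)
    pairs, plus a predicate deciding acceptance from their reported
    +/-1 outcomes (given in the same order as the instructions).
    """
    N = len(x)
    w = x if W == 'X' else z
    y = [random.randint(0, 1) for _ in range(N)]
    y_xor_w = [a ^ b for a, b in zip(y, w)]
    r = random.random()
    if r < 0.5:
        alice = [('X', x), ('Z', z)]
        bob = [(W, y), (W, y_xor_w)]
        # Honest Bob's product of outcomes is the outcome of W(w).
        idx = 0 if W == 'X' else 1
        accept = lambda a_out, b_out: b_out[0] * b_out[1] == a_out[idx]
    elif r < 0.75:
        v = [random.randint(0, 1) for _ in range(N)]
        alice = [(W, y), (W, v)]
        bob = [(W, y), (W, y_xor_w)]
        accept = lambda a_out, b_out: a_out[0] == b_out[0]  # agree on W(y)
    else:
        v = [random.randint(0, 1) for _ in range(N)]
        alice = [(W, y_xor_w), (W, v)]
        bob = [(W, y), (W, y_xor_w)]
        accept = lambda a_out, b_out: a_out[0] == b_out[1]  # agree on W(y XOR w)
    return alice, bob, accept
```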

The self-testing result guarantees that if these tests succeed, the verifier obtains an estimate for the energy of the ground state. Importantly, unlike the FH protocol, her estimate has constant precision. However, the protocol, as described up to this point, will still have an inverse polynomial completeness-soundness gap, given by the local Hamiltonian. Recall that this is because the Feynman-Kitaev state will have energy below a when \(\mathcal {C}\) accepts x with high probability, and energy above b otherwise, where \(b - a > 1/|\mathcal {C}|^{2}\). But one can easily boost the protocol to a constant gap between completeness and soundness by simply requiring the provers to share \(M = O(|\mathcal {C}|^{2})\) copies of the ground state. This new state, \({\left \vert {\psi }\right \rangle }^{\otimes M}\), would then be the ground state of a new Hamiltonian \(H^{\prime }\).^{44} One then runs the NV protocol for this Hamiltonian. It should be mentioned that this Hamiltonian is no longer 2-local; however, all of the tests in the NV protocol apply to such more general Hamiltonians as well (as long as each term is composed of I, \(\mathsf {X}\) and \(\mathsf {Z}\) operators, which is the case for \(H^{\prime }\)). Additionally, the new Hamiltonian has a constant gap. The protocol therefore requires only a constant number of rounds of interaction with the provers (two rounds), and we have that:
Theorem 16
The NV protocol is an \(\mathsf {MIP^{*}}\) protocol achieving a constant gap between completeness and soundness.
To then boost the completeness-soundness gap to \(1-\epsilon \), for some \(\epsilon > 0\), one can perform a parallel repetition of the protocol \(O(\log (1/\epsilon ))\) times.
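Concretely, if a single run accepts honest provers with probability at least c and dishonest ones with probability at most s, where c − s is constant, then majority voting over O(log(1/ε)) independent runs suffices. A small sketch (the constant 8 in the Chernoff-style bound is illustrative, not taken from the protocol's analysis):

```python
import math


def repetitions_needed(c, s, eps):
    """Repetitions to push the error below eps, given a constant gap c - s."""
    gap = c - s
    assert gap > 0
    return math.ceil(8 * math.log(1 / eps) / gap ** 2)


def majority_accept(outcomes, c, s):
    """Accept when the fraction of accepting runs clears the midpoint threshold."""
    return sum(outcomes) / len(outcomes) > (c + s) / 2
```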
4.4 Summary of Entanglement-Based Protocols
We have seen that having non-communicating provers sharing entangled states allows for verification protocols with a classical client. What all of these protocols have in common is that they make use of self-testing results. These essentially state that if a number of non-communicating players achieve a near-optimal win rate in a non-local game, the strategy they employ in the game is essentially fixed, up to a local isometry. The strategy of the players consists of their shared quantum state as well as their local observables. Hence, self-testing results provide a precise characterisation of both.
Table 6 Comparison of entanglement-based protocols
Protocol  Provers  Qmem provers  Rounds  Communication  Blind 

RUV  2  2  O(N^{8192} ⋅ log(1/𝜖))  O(N^{8192} ⋅ log(1/𝜖))  Y 
McKague  O(N^{22} ⋅ log(1/𝜖))  0  O(N^{22} ⋅ log(1/𝜖))  O(N^{22} ⋅ log(1/𝜖))  Y 
GKW  2  1  O(N^{2048} ⋅ log(1/𝜖))  O(N^{2048} ⋅ log(1/𝜖))  Y 
HPDF  O(N^{4}log(N) ⋅ log(1/𝜖))  O(log(1/𝜖))  O(N^{4}log(N) ⋅ log(1/𝜖))  O(N^{4}log(N) ⋅ log(1/𝜖))  Y 
FH  5  5  O(N^{16} ⋅ log(1/𝜖))  O(N^{19} ⋅ log(1/𝜖))  N 
NV  7  7  O(1)  O(N^{3} ⋅ log(1/𝜖))  N 
We have noticed that, depending on the approach that is used, there will be different requirements for the quantum operations of the provers. Of course, all protocols require that, collectively, the provers can perform \(\mathsf {BQP}\) computations; however, individually, some provers need not be universal quantum computers. Related to this is the issue of blindness. Again, depending on which approach is used, some protocols utilize blindness and some do not. In particular, the post hoc protocols are not blind, since the computation and the input are revealed to the provers so that they can prepare the Feynman-Kitaev state.
We have also seen that the robustness of the self-testing game impacts the communication complexity of the protocol. Specifically, having robustness which is inverse polynomial in the number of qubits of the self-tested state leads to an inverse polynomial gap between completeness and soundness. In order to make this gap constant, the communication complexity of the protocol has to be made polynomial. This means that most protocols will have a relatively large overhead when compared to prepare-and-send or receive-and-measure protocols. Out of the surveyed protocols, the NV protocol is the only one which utilizes a self-testing result with constant robustness and therefore has a constant completeness-soundness gap. We summarize all of these facts in Table 6.^{45}
5 Outlook
5.1 Sub-Universal Protocols
So far we have presented protocols for the verification of universal quantum computations, i.e. protocols in which the provers are assumed to be \(\mathsf {BQP}\) machines. In the near future, however, quantum computers might be more limited in terms of the type of computations that they can perform. Examples of this include the class of so-called instantaneous quantum computations, denoted \(\mathsf {IQP}\), boson sampling and the one-pure-qubit model of quantum computation [1, 2, 84]. While not universal, these examples are still highly relevant since, assuming some plausible complexity-theoretic conjectures hold, they could solve certain problems or sample from certain distributions that are intractable for classical computers. One is therefore faced with the question of how to verify the correctness of outcomes resulting from these models. In particular, when considering an interactive protocol, the prover should be restricted to the corresponding sub-universal class of problems and yet still be able to prove statements to a computationally limited verifier. We will see that many of the considered approaches are adapted versions of the VUBQC protocol from Section 2.2. It should be noted, however, that the protocols themselves are not direct applications of VUBQC. In each instance, the protocol was constructed so as to adhere to the constraints of the model.
The first sub-universal verification protocol is for the one-pure (or one-clean) qubit model. A machine of this type takes as input a state of limited purity (for instance, a system comprising the totally mixed state and a small number of single-qubit pure states), and is able to coherently apply quantum gates. The model was considered in order to reflect the state of a quantum computer with noisy storage. In [85], Kapourniotis, Kashefi and Datta introduced a verification protocol for this model by adapting VUBQC to the one-pure-qubit setting. The verifier still prepares individual pure qubits, as in the original VUBQC protocol; however, the prover holds a mixed state of limited purity at all times.^{46} Additionally, the prover can inject or remove pure qubits from his state during the computation, as long as this does not increase the total purity of the state. The resulting protocol has an inverse polynomial completeness-soundness gap. However, unlike the universal protocols we have reviewed, the constraints on the prover’s state do not allow for the protocol to be repeated. This means that the completeness-soundness gap cannot be boosted through repetition.
Another model, for which verification protocols have been proposed, is that of instantaneous quantum computations, or \(\mathsf {IQP}\) [2, 86]. An \(\mathsf {IQP}\) machine is one which can only perform unitary operations that are diagonal in the \(\mathsf {X}\) basis and therefore commute with each other. The name “instantaneous quantum computation” illustrates that there is no temporal structure to the quantum dynamics [2]. Additionally, the machine is restricted to measurements in the computational basis. It is important to mention that IQP does not represent a decision class, like \(\mathsf {BQP}\), but rather a sampling class. The input to a sampling problem is a specification of a certain probability distribution and the output is a sample from that distribution. The class \(\mathsf {IQP}\), therefore, contains all distributions which can be sampled efficiently (in polynomial time) by a machine operating as described above. Under plausible complexity theoretic assumptions, it was shown that this class is not contained in the set of distributions which can be efficiently sampled by a classical computer [86].
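To make the model concrete, the following sketch computes the output distribution of a small IQP circuit by brute force, using the fact that a unitary diagonal in the \(\mathsf {X}\) basis can be written as \(H^{\otimes n} D H^{\otimes n}\) with D diagonal in the computational basis. The encoding of the circuit as a list of \(2^{n}\) phase angles is our own illustrative choice:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # single-qubit Hadamard


def iqp_distribution(phases):
    """Output distribution of an n-qubit IQP circuit.

    `phases` lists the 2^n diagonal phase angles of D, indexed by
    computational-basis states; the circuit applied to |0...0> is
    H^n D H^n, followed by a computational-basis measurement.
    """
    dim = len(phases)
    n = int(np.log2(dim))

    def hadamard_all(v):
        v = v.reshape([2] * n)
        for q in range(n):  # apply H to each qubit in turn
            v = np.moveaxis(np.tensordot(H, v, axes=([1], [q])), 0, q)
        return v.reshape(dim)

    state = np.zeros(dim, dtype=complex)
    state[0] = 1.0                                   # |0...0>
    state = hadamard_all(state)                      # H^n
    state = state * np.exp(1j * np.asarray(phases))  # D
    state = hadamard_all(state)                      # H^n
    return np.abs(state) ** 2                        # Born-rule probabilities
```

For instance, with all phases zero the circuit is the identity and the distribution is concentrated on \(0\ldots 0\); with the diagonal of \(\mathsf {Z}\otimes \mathsf {Z}\) (phases 0, π, π, 0) the two-qubit distribution is concentrated on 11.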
In [2], Shepherd and Bremner proposed a hypothesis test in which a classical verifier is able to check that the prover is sampling from an \(\mathsf {IQP}\) distribution. The verifier cannot, however, check that the prover sampled from the correct distribution. Nevertheless, the protocol serves as a practical tool for demonstrating a quantum computational advantage. The test itself involves an encoding, or obfuscation, scheme which relies on a computational assumption (i.e. it assumes that a particular problem is intractable for IQP machines).
Another test of \(\mathsf {IQP}\) problems is provided by the approach of Hangleiter et al., from Section 3.2 [30]. Recall that this was essentially the 1S-Post-hoc protocol for certifying the ground state of a local Hamiltonian. Hangleiter et al. have the prover prepare multiple copies of a state which is the Feynman-Kitaev state of an \(\mathsf {IQP}\) circuit. They then use the post hoc protocol to certify that the prover prepared the correct state (measuring local terms from the Hamiltonian associated with that state) and then use one copy to sample from the output of the \(\mathsf {IQP}\) circuit. This is akin to the measurement-only approach of Section 3.1. In a subsequent paper, Bermejo-Vega et al. consider a subclass of sampling problems that are contained in \(\mathsf {IQP}\) and prove that this class is also hard to classically simulate (subject to standard complexity theory assumptions). The problems can be viewed as preparing a certain entangled state and then measuring all qubits in a fixed basis. The authors provide a way to certify that the state prepared is close to the ideal one, by giving an upper bound on the trace distance. Moreover, the measurements required for this state certification can be made using local stabilizer measurements, for the considered architectures and settings [5].
Recently, another scheme has been proposed, by Mills et al. [87], which again adapts the VUBQC protocol to the \(\mathsf {IQP}\) setting. This eliminates the need for computational assumptions; however, it also requires the verifier to have a single-qubit preparation device. In contrast to VUBQC, the verifier need only prepare eigenstates of the \(\mathsf {Y}\) and \(\mathsf {Z}\) operators.
Yet another scheme derived from VUBQC was introduced in [88] for a model known as the Ising spin sampler. This is based on the Ising model, which describes a lattice of interacting spins in the presence of a magnetic field [89]. The Ising spin sampler is a translation invariant Ising model in which one measures the spins thus obtaining samples from the partition function of the model. Just like with \(\mathsf {IQP}\), it was shown in [90] that, based on complexity theoretic assumptions, sampling from the partition function is intractable for classical computers.
Lastly, Disilvestro and Markham proposed a verification protocol [91] for Spekkens’ toy model [92]. This is a local hidden variable theory which is phenomenologically very similar to quantum mechanics, though it cannot produce nonlocal correlations. The existence of the protocol, again inspired by VUBQC, suggests that Bell nonlocality is not a necessary feature for verification protocols, at least in the setting in which the verifier has a trusted quantum device.
5.2 Fault Tolerance
The protocols reviewed in this paper have all been described in an ideal setting in which all quantum devices work perfectly and any deviation from the ideal behaviour is the result of malicious provers. This is not, however, the case in the real world. The primary obstacle in the development of scalable quantum computers is noise, which affects quantum operations and quantum storage devices. As a solution to this problem, a number of fault-tolerant techniques, utilizing quantum error detection and correction, have been proposed. Their purpose is to reduce the likelihood of the quantum computation being corrupted by imperfect gate operations. But while these techniques have proven successful in minimizing errors in quantum computations, it is not trivial to achieve the same effect for verification protocols. To clarify, while we have seen the use of quantum error-correcting codes in verification protocols, their purpose was either to boost the completeness-soundness gap (in the case of prepare-and-send protocols), or to ensure honest behaviour from the provers (in the case of entanglement-based post hoc protocols). The question we ask, therefore, is: how can one design a fault-tolerant verification protocol? Note that this question pertains primarily to protocols in which the verifier is not entirely classical (such as the prepare-and-send or receive-and-measure approaches) or in which one or more provers are assumed to be single-qubit devices (such as the GKW and HPDF protocols). For the remaining entanglement-based protocols, one can simply assume that the provers are performing all of their operations on top of a quantum error-correcting code.
Let us consider what happens if, in the prepare-and-send and receive-and-measure protocols, the devices of the verifier and the prover are subject to noise.^{47} If, for simplicity, we assume that the errors on these devices imply that each qubit will have a probability, p, of producing the same outcome as in the ideal setting, when measured, we immediately notice that the probability of n qubits producing the same outcomes scales as \(O(p^{n})\). This means that, even if the prover behaves honestly, the computation is very unlikely to result in the correct outcome [19].
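This decay is easy to quantify; the following sketch (our own illustration) shows how few qubits are needed before an honest run is more likely to fail than succeed:

```python
import math


def all_correct_probability(p, n):
    """Probability that every one of n independent qubits gives the
    ideal outcome, when each does so with probability p."""
    return p ** n


def qubits_until_below(p, target):
    """Smallest n at which the all-correct probability drops below target."""
    return math.ceil(math.log(target) / math.log(p))
```

Even at 99% per-qubit fidelity, fewer than a hundred qubits suffice for the honest outcome to become no better than a coin flip.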
Ideally, one would like the prover to perform his operations in a fault-tolerant manner. In other words, the prover’s state should be encoded in a quantum error-correcting code, the gates he performs should result in logical operations being applied on his state and he should, additionally, perform error-detection (syndrome) measurements and corrections. But we can see that this is problematic to achieve. Firstly, in prepare-and-send protocols, the computation state of the prover is provided by the verifier. Who should then encode this state in the error-correcting code, the verifier or the prover? It is known that in order to suppress errors in a quantum circuit, \(\mathcal {C}\), each qubit should be encoded in a logical state having \(O(polylog(|\mathcal {C}|))\) many qubits [93]. This means that if the encoding is performed by the verifier, she must have a quantum computer whose size scales polylogarithmically with the size of the circuit that she would like to delegate. It is preferable, however, that the verifier have a constant-size quantum computer. Conversely, even if the prover performs the encoding, there is another complication. Since the verifier needs to encrypt the states she sends to the prover, and since her operations are susceptible to noise, the errors acting on these states will have a dependency on her secret parameters. This means that when the prover performs error-detection procedures he could learn information about these secret parameters and compromise the protocol.
For receiveandmeasure protocols, one encounters a different obstacle. While the verifier’s measurement device is not actively malicious, if the errors occurring in this device are correlated with the prover’s operations in preparing the state, this can compromise the correctness of the protocol.
A number of fault-tolerant verification protocols have been proposed; however, they all overcome these limitations by making additional assumptions. For instance, one proposal, by Kapourniotis and Datta [88], for making VUBQC fault tolerant, uses a topological error-correcting code described in [58, 59]. The error-correcting code is specifically designed for performing fault-tolerant MBQC computations, which is why it is suitable for the VUBQC protocol. In the proposed scheme, the verifier still prepares single-qubit states; however, there is an implicit assumption that the errors on these states are independent of the verifier’s secret parameters. The prover is then instructed to perform a blind MBQC computation in the topological code. The protocol described in [88] is used for a specific type of MBQC computation designed to demonstrate a quantum computational advantage. However, the authors argue that the techniques are general and could be applied to universal quantum computations.
A fault-tolerant version of the measurement-only protocol from Section 3.1 has also been proposed in [95]. The graph state prepared by the prover is encoded in an error-correcting code, such as the topological lattice used by the previous approaches. As in the ‘non-fault-tolerant’ version of the protocol, the prover is instructed to send many copies of this state, which the verifier will test using stabilizer measurements. The verifier also uses one copy in order to perform her computation in an MBQC fashion. The protocol assumes that the errors occurring on the verifier’s measurement device are independent of the errors occurring on the prover’s devices.
More details, regarding the difficulties with achieving fault tolerance in \(\mathsf {QPIP}\) protocols, can be found in [26].
5.3 Experiments and Implementations
Protocols for verification will clearly be useful for benchmarking experiments implementing quantum computations. Experiments on a small number of qubits can be verified with brute-force simulation on a classical computer. However, as we have pointed out, this is not scalable, so in the long term it is worthwhile to try and implement verification protocols on these devices. As a result, there have been proof-of-concept experiments that demonstrate the components necessary for verifiable quantum computing.
Inspired by the prepare-and-send VUBQC protocol, Barz et al. implemented a four-photon linear optical experiment, where the four-qubit linear cluster state was constructed from entangled pairs of photons produced through parametric down-conversion [96]. Within this cluster state, in different runs of the experiment, a trap qubit was placed in one of two possible locations, thus demonstrating some of the elements of the VUBQC protocol. However, it should be noted that the trap qubits are placed in the system through measurements on non-trap qubits within the cluster state, i.e. through measurements made on the other three qubits. Because of this, the analysis of the VUBQC protocol cannot be directly translated over to this setting, and bespoke analysis of possible deviations is required. In addition, the presence of entanglement between the photons was demonstrated through Bell tests that were performed blindly. This work also builds on a previous experimental implementation of blind quantum computation by Barz et al. [97].
With regards to receive-and-measure protocols, and in particular the measurement-only protocol of Section 3.1, Greganti et al. implemented [98] some of the elements of these protocols with a four-photon experiment, similar to the experiment of Barz et al. mentioned above [96]. This demonstration builds on previous work in the experimental characterisation of stabiliser states [99]. In this case, two four-qubit cluster states were generated: the linear cluster state and the star graph state, in which the only entanglement is pairwise, between one central qubit and each of the other qubits. In order to demonstrate the elements for measurement-only verification, traps can be placed in the state through suitable measurements made by the client. Furthermore, the linear cluster state and the star graph state can be used as computational resources for implementing single-qubit unitaries and an entangling gate, respectively.
Finally, preliminary steps have been taken towards an experimental implementation of the RUV protocol, from Section 4.1. Huang et al. implemented a simplified version of this protocol using sources of pairs of entangled photons [74]. Repeated CHSH tests were performed on thousands of pairs of photons, demonstrating a large violation of the CHSH inequality; a vital ingredient in the protocol of RUV. In between the many rounds of CHSH tests, state tomography, process tomography and a computation were performed, the latter being the factorisation of the number 15. Again, all of these elements are ingredients in the protocol; however, the entangled photons are created ‘on the fly’. In other words, in RUV, two non-communicating provers share a large number of maximally entangled states prior to the full protocol, whereas in this experiment these states are generated throughout.
6 Conclusions
The realization of the first quantum computers capable of outperforming classical computers at non-trivial tasks is fast approaching. All signs indicate that their development will follow a similar trajectory to that of classical computers. In other words, the first generation of quantum computers will comprise large servers that are maintained and operated by specialists working either in academia, industry or a combination of both. However, unlike with the first supercomputers, the Internet opens up the possibility for users all around the world to interface with these devices and delegate problems to them. This has already been the case with the 5-qubit IBM machine [100], and more powerful machines are soon to follow [101, 102]. But how will these computationally restricted users be able to verify the results produced by the quantum servers? That is what the field of quantum verification aims to answer. Moreover, as mentioned before and as is outlined in [12], the field also aims to answer the more foundational question: how do we verify the predictions of quantum mechanics in the large complexity regime?
In this paper, we have reviewed a number of protocols that address these questions. While none of them achieve the ultimate goal of the field, which is to have a classical client verify the computation performed by a single quantum server, each protocol provides a unique approach for performing verification and has its own advantages and disadvantages. We have seen that these protocols combine elements from a multitude of areas, including cryptography, complexity theory, error correction and the theory of quantum correlations. We have also seen that proof-of-concept experiments, for some of these protocols, have already been realized.
What all of the surveyed approaches have in common is that none of them is based on computational assumptions. In other words, they all perform verification unconditionally. However, recently, there have been attempts to reduce the verifier’s requirements by incorporating computational assumptions as well. What this means is that the protocols operate under the assumption that certain problems are intractable for quantum computers. We have already mentioned an example: a protocol for verifying the sub-universal sampling class of \(\mathsf {IQP}\) computations, in which the verifier is entirely classical. Other examples include protocols for quantum fully homomorphic encryption [103, 104]. In these protocols, a client delegates a quantum computation to a server while trying to keep the input to the computation hidden. The use of computational assumptions allows these protocols to achieve this functionality using only one round of back-and-forth communication. However, in the referenced schemes, the client does require some minimal quantum capabilities. A recent modification of these schemes has been proposed in order to make the protocols verifiable as well [105]. Additionally, an even more recent paper introduces a protocol for quantum fully homomorphic encryption with an entirely classical client (again, based on computational assumptions) [106]. We can therefore see a new direction emerging in the field of delegated quantum computation. This recent success in developing protocols based on computational assumptions could very well lead to the first single-prover verification protocol with a classical client.
Another new direction, especially pertaining to entanglement-based protocols, is given by the development of self-testing results achieving constant robustness. This started with the work of Natarajan and Vidick, which was the basis of their protocol from Section 4.3 [23]. We saw, in Section 4, that all entanglement-based protocols rely, one way or another, on self-testing results. Consequently, the robustness of these results greatly impacts the communication complexity and overhead of these protocols. Since most protocols were based on results having inverse polynomial robustness, this led to prohibitively large requirements in terms of quantum resources (see Table 6). However, subsequent work by Coladangelo et al., following up on the Natarajan and Vidick result, has led to two entanglement-based protocols which achieve near-linear overhead [24].^{48} This is a direct consequence of using a self-testing result with constant robustness and combining it with the Test-or-Compute protocol of Broadbent from Section 2.3. Of course, of the two protocols proposed by Coladangelo et al., only one is blind, and so an open problem of their result is whether the second protocol can also be made blind. Another question is whether the protocols can be further optimized so that only one prover is required to perform universal quantum computations, in the spirit of the GKW protocol from Section 4.1.

While the problem of a classical verifier delegating computations to a single prover is the main open problem of the field, we emphasize a more particular instance of this problem: can the proof that any problem in \(\mathsf {PSPACE}\)^{49} admits an interactive proof system be adapted to show that any problem in \(\mathsf {BQP}\) admits an interactive proof system with a \(\mathsf {BQP}\) prover? The proof that \(\mathsf {PSPACE} = \mathsf {IP}\) (in particular the \(\mathsf {PSPACE} \subseteq \mathsf {IP}\) direction) uses error-correcting properties of low-degree polynomials to give a verification protocol for a \(\mathsf {PSPACE}\)-complete problem [107]. We have seen that the Poly-QAS VQC scheme, presented in Section 2.1, also makes use of error-correcting properties of low-degree polynomials in order to perform quantum verification (albeit with a quantum error-correcting code and a quantum verifier). Can these ideas lead to a classical verifier protocol for \(\mathsf {BQP}\) problems with a \(\mathsf {BQP}\) prover?

In all existing entanglement-based protocols, one assumes that the provers are not allowed to communicate during the protocol. However, this assumption is not enforced by physical constraints. Is it, therefore, possible to have an entanglement-based verification protocol in which the provers are space-like separated?^{50} Note that, since all existing protocols require the verifier to query the two (or more) provers adaptively, it is not directly possible to make the provers space-like separated.

What is the optimal overhead (in terms of either communication complexity or the resources of the verifier) in verification protocols? For all types of verification protocols, we have seen that, for a fixed completeness-soundness gap, the best achieved communication complexity is linear. For the prepare-and-send case, is it possible to have a protocol in which the verifier need only prepare a polylogarithmic number of single qubits (in the size of the computation)? For the entanglement-based case, can the classical verifier send only polylogarithmic-sized questions to the provers? This latter question is related to the quantum \(\mathsf {PCP}\) conjecture [108].

Are there other models of quantum computation that are suitable for developing verification protocols? We have seen that the way in which we view quantum computations has a large impact on how we design verification protocols and what characteristics those protocols will have. Specifically, the separation between classical control and quantum resources in MBQC led to VUBQC, while the \(\mathsf {QMA}\)-completeness of the local Hamiltonian problem led to the post hoc approaches. Of course, all universal models are equivalent in terms of the computations which can be performed; however, each model provides a particular insight into quantum computation which can prove useful when devising new protocols. Can other models of quantum computation, such as the adiabatic model, the anyon model, etc., provide new insights?

We have seen that while certain verification protocols employ error-correcting codes, these are primarily used for boosting the completeness-soundness gap. Alternatively, for the protocols that do in fact incorporate fault tolerance, in order to cope with noisy operations, there are additional assumptions, such as the noise in the verifier’s device being uncorrelated with the noise in the prover’s devices. Therefore, the question is: can one have a fault-tolerant verification protocol, with a minimal quantum verifier, in the most general setting possible? By this we mean that there are no restrictions on the noise affecting the quantum devices in the protocol, other than those resulting from the standard assumptions of fault-tolerant quantum computation (constant noise rate, local errors, etc.). This question is addressed in more detail in [26]. Note that the question refers in particular to prepare-and-send and receive-and-measure protocols, since entanglement-based approaches are implicitly fault tolerant (one can assume that the provers are performing the computations on top of error-correcting codes).
Footnotes
 1.
\(\mathsf {BPP}\) and \(\mathsf {MA}\) are simply the probabilistic versions of the more familiar classes \(\mathsf {P}\) and \(\mathsf {NP}\). Under plausible derandomization assumptions, \(\mathsf {BPP} = \textsf {P}\) and \(\mathsf {MA} = \textsf {NP}\) [13].
 2.
Even if this were the case, i.e. \(\mathsf {BQP} \subseteq \textsf {MA}\), for this to be useful in practice one would require that computing the witness can also be done in \(\mathsf {BQP}\). In fact, there are candidate problems known to be in both \(\mathsf {BQP}\) and MA, for which computing the witness is believed to not be in \(\mathsf {BQP}\) (a conjectured example is [17]).
 3.
\(\mathsf {MA}\) can be viewed as an interactive-proof system where only one message is sent from the prover (Merlin) to the verifier (Arthur).
 4.
In other words, the provers would not be able to differentiate among the different computations even if they had unbounded computational power.
 5.
The definitions of these classes can be found in Section A.
 6.
In the classical setting, computing on encrypted data culminated with the development of fully homomorphic encryption (FHE), which is considered the “holy grail” of the field [41, 42, 43, 44]. Using FHE, a client can delegate the evaluation of any polynomial-size classical circuit to a server, such that the input and output of the circuit are kept hidden from the server, based on reasonable computational assumptions. Moreover, the protocol involves only one round of back-and-forth interaction between client and server.
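As a toy illustration of the homomorphic idea (not of the FHE schemes of [41, 42, 43, 44]), note that the classical one-time pad is already homomorphic with respect to XOR: the server can XOR two ciphertexts without ever seeing the plaintexts, and the client decrypts the result with the XOR of the keys. A minimal Python sketch, with function names of our own choosing:

```python
import secrets

def encrypt(bit: int, key: int) -> int:
    """One-time pad: ciphertext = plaintext XOR key."""
    return bit ^ key

def decrypt(cipher: int, key: int) -> int:
    return cipher ^ key

# Client encrypts two bits under independent random keys.
a, b = 1, 0
ka, kb = secrets.randbelow(2), secrets.randbelow(2)
ca, cb = encrypt(a, ka), encrypt(b, kb)

# Server evaluates XOR directly on the ciphertexts, learning nothing about a or b.
c_result = ca ^ cb

# Client decrypts with the combined key: the result equals a XOR b.
assert decrypt(c_result, ka ^ kb) == a ^ b
```

Unlike FHE, the one-time pad is homomorphic only for XOR; the difficulty solved by [41, 42, 43, 44] is supporting multiplication (AND) as well, which suffices for arbitrary circuits.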
 7.
In other words, for all Pauli operators P and all Clifford operators C, there exists a Pauli operator Q such that \(CP = QC\).
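This normalizer property is easy to check numerically for single-qubit Cliffords such as \(\mathsf {H}\) and \(\mathsf {S}\). The following Python sketch (helper names are ours) computes \(Q = CPC^{\dagger}\) and searches for the Pauli matching it up to global phase:

```python
import numpy as np

# Single-qubit Paulis and two Clifford generators.
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
S = np.array([[1, 0], [0, 1j]])                               # phase gate

paulis = {"I": I, "X": X, "Y": Y, "Z": Z}

def conjugate_to_pauli(C, P):
    """Return the Pauli Q (and phase) such that C P = Q C, i.e. Q = C P C^dag."""
    Q = C @ P @ C.conj().T
    for name, R in paulis.items():
        for phase in (1, -1, 1j, -1j):
            if np.allclose(Q, phase * R):
                return name, phase
    raise ValueError("C is not Clifford: conjugation left the Pauli group")

print(conjugate_to_pauli(H, X))  # ('Z', 1): H maps X to Z
print(conjugate_to_pauli(H, Z))  # ('X', 1): and Z back to X
print(conjugate_to_pauli(S, X))  # ('Y', 1): S maps X to Y
```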
 8.
For instance, her initial state \(\rho \) could contain a number of \({\left \vert {0}\right \rangle }\) qubits that is equal to the number of \(\mathsf {T}\) gates in the circuit.
 9.
This is also possible in Childs’ protocol by simply encoding the description of the circuit \(\mathcal {C}\) in the input and asking Bob to run a universal quantum circuit. The one-time padded input that is sent to Bob would then comprise both the description of \(\mathcal {C}\) and x, the input for \(\mathcal {C}\).
 10.
For a brief overview of MBQC see Section A.
 11.
This remains true even if the qubits have been entangled with the \(\mathsf {CZ}\) operation.
 12.
This input can be a classical bit string \({\left \vert {x}\right \rangle }\), though it can also be more general.
 13.
An alternative to (9) is: \(TD(\rho _{out}, p {\left \vert {{\Psi }^{s}_{out}}\right \rangle }{\left \langle {{\Psi }^{s}_{out}}\right \vert } \otimes {\left \vert {acc^{s}}\right \rangle }{\left \langle {acc^{s}}\right \vert } + (1 - p) \rho \otimes {\left \vert {rej^{s}}\right \rangle }{\left \langle {rej^{s}}\right \vert }) \leq \epsilon \), for some \(0 \leq p \leq 1\) and some density matrix \(\rho \), where TD denotes trace distance. In other words, the output state of the protocol, \(\rho _{out}\), is close to a state which is a mixture of the correct output state with acceptance and an arbitrary state with rejection. This definition can be more useful when one is interested in a quantum output for the protocol (i.e. the prover returns a quantum state to the verifier). Such a situation is particularly useful when composing verification protocols [19, 49, 50, 51].
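For concreteness, the trace distance equals half the trace norm of the difference, \(TD(\rho , \sigma ) = \frac{1}{2}\|\rho - \sigma \|_{1}\), i.e. half the sum of the absolute eigenvalues of \(\rho - \sigma \). A short numerical sketch (assuming only numpy; variable names are ours):

```python
import numpy as np

def trace_distance(rho, sigma):
    """TD(rho, sigma) = (1/2) * sum of |eigenvalues| of rho - sigma."""
    eigvals = np.linalg.eigvalsh(rho - sigma)  # difference is Hermitian
    return 0.5 * np.sum(np.abs(eigvals))

ket0 = np.array([[1], [0]], dtype=complex)
ket1 = np.array([[0], [1]], dtype=complex)
rho0 = ket0 @ ket0.conj().T   # |0><0|
rho1 = ket1 @ ket1.conj().T   # |1><1|
mixed = np.eye(2) / 2         # maximally mixed state

print(trace_distance(rho0, rho1))   # 1.0 : perfectly distinguishable states
print(trace_distance(rho0, mixed))  # 0.5
print(trace_distance(rho0, rho0))   # 0.0 : identical states
```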
 14.
One could imagine this happening if, for instance, the prover provides random responses to the verifier instead of performing the desired computation \(\mathcal {C}\).
 15.
The projectors for the measurement are assumed to be \(P_{acc} = {\left \vert {acc}\right \rangle }{\left \langle {acc}\right \vert }\), for acceptance, and \(P_{rej} = I - {\left \vert {acc}\right \rangle }{\left \langle {acc}\right \vert }\), for rejection.
 16.
 17.
Hence \(k = O(log(\mathfrak {C}_{t}))\).
 18.
Technically, what is required here is that \(P_{1} \neq P_{2}\), since global phases are ignored.
 19.
This can simply be the state \(\left \vert {x}\right \rangle \), if the verifier wishes to apply \(\mathcal {C}\) on the classical input x. However, the state can be more general which is why we are not restricting it to be \(\left \vert {x}\right \rangle \).
 20.
Note that by abuse of notation we use the Pauli-group symbol to refer to the group of generalized Pauli operations over qudits, whereas, typically, one uses this notation to refer to the Pauli group of qubits.
 21.
See Section 1 for the definition of the Toffoli gate.
 22.
Note that no actual quantum state was returned to the verifier by the prover. Instead, she locally prepared a quantum state from the classical outcomes reported by the prover.
 23.
To be precise, the communication in the PolyQAS VQC scheme is \(O((n + L) \cdot log(1/\epsilon ))\), where n is the size of the input and L is the number of Toffoli gates in \(\mathcal {C}\).
 24.
Note that adding dummy qubits into the graph will have the effect of disconnecting qubits that would otherwise have been connected. It is therefore important that the chosen graph state allows for the embedding of traps and dummies so that the desired computation can still be performed. For instance, the brickwork state from Section A allows for only one trap qubit to be embedded, whereas other graph states allow for multiple traps. See [27, 57] for more details.
 25.
As in the previous protocols, this need not be a classical input and the verifier could prepare an input of the form \(\left \vert {\psi }\right \rangle = \left \vert {\psi _{1}}\right \rangle \otimes ... \otimes \left \vert {\psi _{n}}\right \rangle \).
 26.
Note that the number of traps, T, and the number of dummies, D, are related, since each trap should have only dummy neighbours in \(\left \vert {G}\right \rangle \).
 27.
Since the prover is unbounded and is free to choose any of the uncountably many CPTP strategies, j should be thought of more as a symbolic parameter indicating that there is a dependence on the prover’s strategy and on whether or not this strategy is the ideal one.
 28.
Note that the security proof for PolyQAS VQC was in fact inspired from that of the VUBQC protocol, as mentioned in [26].
 29.
The preparation of a specific input \({\left \vert {x}\right \rangle }\) can be done as part of the circuit \(\mathcal {C}\).
 30.
However, note that if the verifier chooses a test run, in the case where the prover is honest, this will lead to acceptance irrespective of the outcome of the decision problem. This is in contrast to the previous protocols in which the testing is performed at the same time as the computation and, when the test succeeds, the verifier outputs the result of the computation.
 31.
Technically, the complexity should be \(O((|x| + |\mathcal {C}|) \cdot log(1/\epsilon ))\); however, we are assuming that \(\mathcal {C}\) acts non-trivially on x (i.e. there are at least \(|x|\) gates in \(\mathcal {C}\)).
 32.
 33.
As a side note, the total number of measurements is not the same as the communication complexity for this protocol, since the prover would have to send \(O(|\mathcal {C}|^{3} \cdot log(1/\epsilon ))\) qubits in total. This is because, for each repetition, the prover sends a state of \(O(|\mathcal {C}|)\) qubits, but the verifier only measures 2 qubits from each such state.
 34.
To define local correlations, consider a setting with two players, Alice and Bob. Each player receives an input, x for Alice and y for Bob, and produces an output, denoted a for Alice and b for Bob. We say that the players’ responses are locally correlated if: \(Pr(a,b|x,y) = {\sum }_{\lambda } Pr(a | x, \lambda ) Pr(b | y, \lambda ) Pr(\lambda )\), where \(\lambda \) is known as a hidden variable. In other words, given this hidden variable, the players’ responses depend only on their local inputs.
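Local correlations of this form satisfy the CHSH bound \(|S| \leq 2\), whereas measurements on a singlet reach \(2\sqrt {2}\) (Tsirelson’s bound). A numerical sketch of this standard calculation (the measurement angles and helper names are ours, not taken from any particular protocol in the text):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def obs(theta):
    """Spin observable cos(theta) Z + sin(theta) X, with eigenvalues +/-1."""
    return np.cos(theta) * Z + np.sin(theta) * X

# Singlet state (|01> - |10>)/sqrt(2)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)

def E(a, b):
    """Correlator <psi| A(a) (x) B(b) |psi>."""
    return np.real(psi.conj() @ np.kron(obs(a), obs(b)) @ psi)

# Angles achieving the maximal quantum violation.
a0, a1 = 0.0, np.pi / 2
b0, b1 = np.pi / 4, -np.pi / 4

S = E(a0, b0) + E(a0, b1) + E(a1, b0) - E(a1, b1)
print(abs(S))  # ~2.828 = 2*sqrt(2), exceeding the local bound of 2
```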
 35.
Note that the McKague, Yang and Scarani result could also be used to certify a tensor product of Bell pairs, by repeating the self-test of a single Bell pair multiple times. However, this would require each repetition to be independent of the previous one. In other words, the states shared by Alice and Bob, as well as their measurement outcomes, should be independent and identically distributed (i.i.d.) in each repetition. The Reichardt, Unger and Vazirani result makes no such assumption.
 36.
However, with added assumptions (such as i.i.d. states and measurement statistics for the two provers), the scaling can become small enough that experimental testing is possible. A proof of concept experiment of this is realized in [74].
 37.
In fact, what is used here is a more general version of Theorem 8 involving an extended CHSH game. See the appendix section of [18].
 38.
For instance, one game would involve Alice measuring either \(\mathsf {X}\) or \(\mathsf {Y}\), whereas Bob should measure \((\mathsf {X} + \mathsf {Y})/\sqrt {2}\) or \((\mathsf {X}  \mathsf {Y})/\sqrt {2}\). Similar games can be defined by suitably choosing observables from the given set.
 39.
In other words, N(τ) consists of those vertices that are connected to an odd number of vertices from \(\tau \).
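As a small sketch of this definition (the adjacency-map representation and function name are ours), one can compute \(N(\tau )\) directly from the parity of each vertex’s overlap with \(\tau \):

```python
def odd_neighbourhood(adj, tau):
    """N(tau): vertices adjacent to an odd number of vertices in tau.
    adj maps each vertex to the set of its neighbours."""
    return {v for v in adj if len(adj[v] & tau) % 2 == 1}

# 4-cycle graph: 0-1-2-3-0
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(odd_neighbourhood(adj, {0}))     # {1, 3}: each has exactly one neighbour in tau
print(odd_neighbourhood(adj, {0, 2}))  # set(): 1 and 3 each have two neighbours in tau
```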
 40.
The measurement angles need not be restricted to this set, however, as in VUBQC, this set of angles is sufficient for performing universal MBQC computations.
 41.
The 5-qubit code is the smallest error-correcting code capable of correcting arbitrary single-qubit errors [80].
 42.
In their paper, Fitzsimons and Hajdušek also explain how their protocol can be used to sample from a quantum circuit, rather than solve a decision problem [22].
 43.
Note that the pair can be either \((\mathbf {b_{1}}, \mathbf {b_{2}})\) or \((\mathbf {b_{2}}, \mathbf {b_{1}})\), so that Bob does not know which string is the one related to Alice’s inputs.
 44.
Note that the state still needs to be encoded in the 7-qubit code.
 45.
Note that for the HPDF protocol we assumed that there is one prover with quantum memory, comprised of the individual provers that come together in order to perform the MBQC computation at the end of the protocol. Since, to achieve a completeness-soundness gap of \(1 - \epsilon \), the protocol is repeated \(O(log(1/\epsilon ))\) times, this means there will be \(O(log(1/\epsilon ))\) provers with quantum memory in total.
 46.
The purity of a d-qubit state, \(\rho \), is quantified by the purity parameter defined in [85] as: \(\pi (\rho ) = log(Tr(\rho ^{2})) + d\).
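For instance, taking the logarithm to be base 2 (our assumption about the convention in [85]), a pure d-qubit state has \(\pi (\rho ) = d\) while the maximally mixed state has \(\pi (\rho ) = 0\). A short numerical check:

```python
import numpy as np

def purity_parameter(rho, d):
    """pi(rho) = log2(Tr(rho^2)) + d for a d-qubit state rho
    (assuming the logarithm in [85] is base 2)."""
    return np.log2(np.real(np.trace(rho @ rho))) + d

d = 3
dim = 2 ** d
pure = np.zeros((dim, dim), dtype=complex)
pure[0, 0] = 1                 # |0...0><0...0|, a pure state
mixed = np.eye(dim) / dim      # maximally mixed state

print(purity_parameter(pure, d))   # 3.0 : maximal purity, equal to d
print(purity_parameter(mixed, d))  # 0.0 : no purity
```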
 47.
Different noise models have been examined when designing fault-tolerant protocols; however, a very common model, and one which can be considered in our case, is depolarizing noise [93, 94]. This can be single-qubit depolarizing noise, which acts as \(\mathcal {E}(\rho ) = (1 - p) [I] + p/3 ([\textsf {X}] + [\textsf {Y}] + [\textsf {Z}])\), or two-qubit depolarizing noise, which acts as \(\mathcal {E}(\rho ) = (1 - p) [I \otimes I] + p/15 ([I \otimes \textsf {X}] + ... + [\textsf {Z} \otimes \textsf {Z}])\), for some probability \(p > 0\). The square bracket notation indicates the action of the operator.
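A minimal sketch of the single-qubit depolarizing channel acting on a density matrix (numpy only; variable names are ours):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def depolarize(rho, p):
    """Single-qubit depolarizing channel:
    E(rho) = (1 - p) rho + (p/3)(X rho X + Y rho Y + Z rho Z)."""
    return ((1 - p) * rho
            + (p / 3) * (X @ rho @ X + Y @ rho @ Y + Z @ rho @ Z))

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
out = depolarize(rho, 0.3)
print(np.real(np.diag(out)))                 # population leaks from |0> to |1>
assert np.isclose(np.trace(out).real, 1.0)   # the channel is trace-preserving
```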
 48.
The result from [24] appeared on the arXiv close to the completion of this work, which is why we did not review it.
 49.
PSPACE is the class of problems which can be solved in polynomial space by a classical computer.
 50.
In an experiment, two regions are spacelike separated if the time it takes light to travel from one region to the other is longer than the duration of the experiment. Essentially, according to relativity, this means that there is no causal ordering between events occurring in one region and events occurring in the other.
 51.
Note that if the operator is degenerate (i.e. has repeating eigenvalues) then the projectors for degenerate eigenvalues will correspond to projectors on the subspaces spanned by the associated eigenvectors.
 52.
It should be noted that this is the case provided that quantum mechanics is a complete theory in terms of its characterisation of physical systems. See [111] for more details.
 53.
One could allow for purifications in larger systems, but we restrict attention to same dimensions.
 54.
These are known as stabilizer operators for the states in the code spaces. We also encounter these operators in Section A. The operators form a group under multiplication and so, when specifying the code space, it is sufficient to provide the generators of the group.
 55.
A measurement pattern is simply a tuple consisting of the measurement angles, for the qubits in \(\left \vert {G_{N}}\right \rangle \), and the partial ordering of these measurements.
 56.
The notation \(M(x)\) means running the Turing machine M on input x.
 57.
A problem, P, is complete for the complexity class \(\mathsf {QMA}\) if \(P \in \textsf {QMA}\) and all problems in \(\mathsf {QMA}\) can be reduced in quantum polynomial time to P.
Notes
Acknowledgements
The authors would like to thank Petros Wallden, Alex Cojocaru and Thomas Vidick for very useful comments and suggestions for improving this work, and Dan Mills for TeX support. AG would also like to especially thank Matty Hoban for many helpful remarks and comments and Vivian Uhlir for useful advice in improving the figures in the paper. EK acknowledges funding through EPSRC grants EP/N003829/1 and EP/M013243/1. TK acknowledges funding through EPSRC grant EP/K04057X/2.
References
 1. Aaronson, S., Arkhipov, A.: The computational complexity of linear optics. In: Proceedings of the Forty-third Annual ACM Symposium on Theory of Computing. STOC ’11, pp 333–342. ACM, New York (2011)
 2. Shepherd, D., Bremner, M.J.: Temporally unstructured quantum computation. In: Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences, vol. 465, pp 1413–1439. The Royal Society (2009)
 3. Boixo, S., Isakov, S.V., Smelyanskiy, V.N., Babbush, R., Ding, N., Jiang, Z., Martinis, J.M., Neven, H.: Characterizing quantum supremacy in near-term devices. arXiv:1608.00263 (2016)
 4. Aaronson, S., Chen, L.: Complexity-theoretic foundations of quantum supremacy experiments. arXiv:1612.05903 (2016)
 5. Bermejo-Vega, J., Hangleiter, D., Schwarz, M., Raussendorf, R., Eisert, J.: Architectures for quantum simulation showing a quantum speedup (2017)
 6. Tillmann, M., Dakić, B., Heilmann, R., Nolte, S., Szameit, A., Walther, P.: Experimental boson sampling. Nat. Photon. 7(7), 540–544 (2013)
 7. Spagnolo, N., Vitelli, C., Bentivegna, M., Brod, D.J., Crespi, A., Flamini, F., Giacomini, S., Milani, G., Ramponi, R., Mataloni, P., et al.: Experimental validation of photonic boson sampling. Nat. Photon. 8(8), 615–620 (2014)
 8. Bentivegna, M., Spagnolo, N., Vitelli, C., Flamini, F., Viggianiello, N., Latmiral, L., Mataloni, P., Brod, D.J., Galvão, E.F., Crespi, A., et al.: Experimental scattershot boson sampling. Sci. Adv. 1(3), e1400255 (2015)
 9. Lanyon, B., Barbieri, M., Almeida, M., White, A.: Experimental quantum computing without entanglement. Phys. Rev. Lett. 101(20), 200501 (2008)
 10. Aaronson, S.: The Aaronson $25.00 prize. http://www.scottaaronson.com/blog/?p=284
 11. Vazirani, U.: Workshop on the computational worldview and the sciences. http://users.cms.caltech.edu/schulman/Workshops/CSLens2/reportcompworldview.pdf (2007)
 12. Aharonov, D., Vazirani, U.: Is Quantum Mechanics Falsifiable? A Computational Perspective on the Foundations of Quantum Mechanics. Computability: Turing, Gödel, Church, and Beyond. MIT Press (2013)
 13. Impagliazzo, R., Wigderson, A.: P = BPP if E requires exponential circuits: derandomizing the XOR lemma. In: Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, pp. 220–229. ACM (1997)
 14. Shor, P.W.: Polynomial-time algorithms for prime factorization and discrete logarithms on a quantum computer. SIAM Rev. 41(2), 303–332 (1999)
 15. Bernstein, E., Vazirani, U.: Quantum complexity theory. SIAM J. Comput. 26(5), 1411–1473 (1997)
 16. Watrous, J.: Succinct quantum proofs for properties of finite groups. In: Proceedings of the 41st Annual Symposium on Foundations of Computer Science. FOCS ’00, pp 537–. IEEE Computer Society, Washington, DC (2000)
 17. Childs, A.M., Cleve, R., Deotto, E., Farhi, E., Gutmann, S., Spielman, D.A.: Exponential algorithmic speedup by a quantum walk. In: Proceedings of the Thirty-Fifth Annual ACM Symposium on Theory of Computing, pp 59–68. ACM (2003)
 18. Reichardt, B.W., Unger, F., Vazirani, U.: Classical command of quantum systems. Nature 496(7446), 456 (2013)
 19. Gheorghiu, A., Kashefi, E., Wallden, P.: Robustness and device independence of verifiable blind quantum computing. J. Phys. 17(8), 083040 (2015)
 20. Hajdušek, M., Pérez-Delgado, C.A., Fitzsimons, J.F.: Device-independent verifiable blind quantum computation. arXiv:1502.02563 (2015)
 21. McKague, M.: Interactive proofs for \(\mathsf {BQP}\) via self-tested graph states. Theory Comput. 12(3), 1–42 (2016)
 22. Fitzsimons, J.F., Hajdušek, M.: Post hoc verification of quantum computation. arXiv:1512.04375 (2015)
 23. Natarajan, A., Vidick, T.: Robust self-testing of many-qubit states. arXiv:1610.03574 (2016)
 24. Coladangelo, A., Grilo, A., Jeffery, S., Vidick, T.: Verifier-on-a-leash: new schemes for verifiable delegated quantum computation, with quasilinear resources. arXiv:1708.07359 (2017)
 25. Aharonov, D., Ben-Or, M., Eban, E.: Interactive proofs for quantum computations. In: Innovations in Computer Science - ICS 2010, Tsinghua University, Beijing, China, January 5-7, 2010. Proceedings, pp. 453–469 (2010)
 26. Aharonov, D., Ben-Or, M., Eban, E., Mahadev, U.: Interactive proofs for quantum computations. arXiv:1704.04487 (2017)
 27. Fitzsimons, J.F., Kashefi, E.: Unconditionally verifiable blind quantum computation. Phys. Rev. A 96, 012303 (2017)
 28. Broadbent, A.: How to verify a quantum computation. Theory of Computing. arXiv:1509.09180 (2018)
 29. Morimae, T., Fitzsimons, J.F.: Post hoc verification with a single prover. arXiv:1603.06046 (2016)
 30. Hangleiter, D., Kliesch, M., Schwarz, M., Eisert, J.: Direct certification of a class of quantum simulations. Quant. Sci. Technol. 2(1), 015004 (2017)
 31. Hayashi, M., Morimae, T.: Verifiable measurement-only blind quantum computing with stabilizer testing. Phys. Rev. Lett. 115(22), 220502 (2015)
 32. Morimae, T., Takeuchi, Y., Hayashi, M.: Verification of hypergraph states. Phys. Rev. A 96, 062321 (2017)
 33. Gheorghiu, A., Wallden, P., Kashefi, E.: Rigidity of quantum steering and one-sided device-independent verifiable quantum computation. J. Phys. 19(2), 023043 (2017)
 34. Fitzsimons, J.F.: Private quantum computation: an introduction to blind quantum computing and related protocols. npj Quant. Inf. 3(1), 23 (2017)
 35. Childs, A.M.: Secure assisted quantum computation. Quant. Info. Comput. 5(6), 456–466 (2005)
 36. Broadbent, A., Fitzsimons, J., Kashefi, E.: Universal blind quantum computation. In: Proceedings of the 50th Annual Symposium on Foundations of Computer Science. FOCS ’09, pp 517–526. IEEE Computer Society (2009)
 37. Arrighi, P., Salvail, L.: Blind quantum computation. Int. J. Quant. Inf. 04(05), 883–898 (2006)
 38. Giovannetti, V., Maccone, L., Morimae, T., Rudolph, T.G.: Efficient universal blind quantum computation. Phys. Rev. Lett. 111, 230501 (2013)
 39. Mantri, A., Pérez-Delgado, C.A., Fitzsimons, J.F.: Optimal blind quantum computation. Phys. Rev. Lett. 111, 230502 (2013)
 40. Rivest, R.L., Adleman, L., Dertouzos, M.L.: On data banks and privacy homomorphisms. Found. Sec. Comput. 4(11), 169–180 (1978)
 41. Gentry, C.: Fully homomorphic encryption using ideal lattices. In: Proceedings of the Forty-first Annual ACM Symposium on Theory of Computing. STOC ’09, pp 169–178. ACM, New York (2009)
 42. Brakerski, Z., Vaikuntanathan, V.: Efficient fully homomorphic encryption from (standard) LWE. In: Proceedings of the 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science. FOCS ’11, pp 97–106. IEEE Computer Society, Washington, DC (2011)
 43. Brakerski, Z., Gentry, C., Vaikuntanathan, V.: (Leveled) fully homomorphic encryption without bootstrapping. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference. ITCS ’12, pp 309–325. ACM, New York (2012)
 44. van Dijk, M., Gentry, C., Halevi, S., Vaikuntanathan, V.: Fully homomorphic encryption over the integers. In: Proceedings of the 29th Annual International Conference on Theory and Applications of Cryptographic Techniques. EUROCRYPT ’10, pp 24–43. Springer, Berlin (2010)
 45. Katz, J., Lindell, Y.: Introduction to Modern Cryptography. CRC Press (2014)
 46. Danos, V., Kashefi, E.: Determinism in the one-way model. Phys. Rev. A 74(5), 052310 (2006)
 47. Aaronson, S., Cojocaru, A., Gheorghiu, A., Kashefi, E.: On the implausibility of classical client blind quantum computing. arXiv:1704.08482 (2017)
 48. Dunjko, V., Kashefi, E.: Blind quantum computing with two almost identical states. arXiv:1604.01586 (2016)
 49. Dunjko, V., Fitzsimons, J.F., Portmann, C., Renner, R.: Composable security of delegated quantum computation. In: International Conference on the Theory and Application of Cryptology and Information Security, pp 406–425. Springer (2014)
 50. Kashefi, E., Wallden, P.: Garbled quantum computation. Cryptography 1(1), 6 (2017)
 51. Kapourniotis, T., Dunjko, V., Kashefi, E.: On optimising quantum communication in verifiable quantum computing. arXiv:1506.06943 (2015)
 52. Barnum, H., Crépeau, C., Gottesman, D., Smith, A.D., Tapp, A.: Authentication of quantum messages. In: 43rd Symposium on Foundations of Computer Science (FOCS 2002), 16–19 November 2002, Vancouver, BC, Canada, Proceedings, pp. 449–458 (2002)
 53. Aharonov, D., Ben-Or, M.: Fault-tolerant quantum computation with constant error rate. SIAM J. Comput. 38(4), 1207–1282 (2008)
 54. Gottesman, D., Chuang, I.L.: Demonstrating the viability of universal quantum computation using teleportation and single-qubit operations. Nature 402(6760), 390–393 (1999)
 55. Broadbent, A., Gutoski, G., Stebila, D.: Quantum one-time programs. In: Advances in Cryptology - CRYPTO 2013, pp. 344–360. Springer (2013)
 56. Canetti, R.: Universally composable security: a new paradigm for cryptographic protocols. In: 42nd IEEE Symposium on Foundations of Computer Science, 2001. Proceedings, pp. 136–145. IEEE (2001)
 57. Kashefi, E., Wallden, P.: Optimised resource construction for verifiable quantum computation. J. Phys. A: Math. Theor. 50(14), 145306 (2017)
 58. Raussendorf, R., Harrington, J., Goyal, K.: A fault-tolerant one-way quantum computer. Ann. Phys. 321(9), 2242–2270 (2006)
 59. Raussendorf, R., Harrington, J., Goyal, K.: Topological fault-tolerance in cluster state quantum computation. J. Phys. 9(6), 199 (2007)
 60. Fisher, K., Broadbent, A., Shalm, L., Yan, Z., Lavoie, J., Prevedel, R., Jennewein, T., Resch, K.: Quantum computing on encrypted data. Nat. Commun. 5, 3074 (2014)
 61. Fitzsimons, J.F., Hajdušek, M., Morimae, T.: Post hoc verification of quantum computation. Phys. Rev. Lett. 120(4), 040501 (2018)
 62. Crépeau, C.: Cut-and-choose protocol. In: Encyclopedia of Cryptography and Security, pp. 290–291. Springer (2011)
 63. Kashefi, E., Music, L., Wallden, P.: The quantum cut-and-choose technique and quantum two-party computation. arXiv:1703.03754 (2017)
 64. Kempe, J., Kitaev, A., Regev, O.: The complexity of the local Hamiltonian problem. SIAM J. Comput. 35(5), 1070–1097 (2006)
 65. Morimae, T., Nagaj, D., Schuch, N.: Quantum proofs can be verified using only single-qubit measurements. Phys. Rev. A 93(2), 022326 (2016)
 66. Kitaev, A.Y., Shen, A., Vyalyi, M.N.: Classical and Quantum Computation, vol. 47. American Mathematical Society, Providence (2002)
 67. Biamonte, J.D., Love, P.J.: Realizable Hamiltonians for universal adiabatic quantum computers. Phys. Rev. A 78, 012352 (2008)
 68. Bausch, J., Crosson, E.: Increasing the quantum UNSAT penalty of the circuit-to-Hamiltonian construction. arXiv:1609.08571 (2016)
 69. Mayers, D., Yao, A.: Self testing quantum apparatus. Quant. Info. Comput. 4(4), 273–286 (2004)
 70. Coladangelo, A., Stark, J.: Separation of finite and infinite-dimensional quantum correlations, with infinite question or answer sets. arXiv:1708.06522 (2017)
 71. Cirel’son, B.: Quantum generalizations of Bell’s inequality. Lett. Math. Phys. 4(2), 93–100 (1980)
 72. Clauser, J.F., Horne, M.A., Shimony, A., Holt, R.A.: Proposed experiment to test local hidden-variable theories. Phys. Rev. Lett. 23, 880–884 (1969)
 73. McKague, M., Yang, T.H., Scarani, V.: Robust self-testing of the singlet. J. Phys. A: Math. Theor. 45(45), 455304 (2012)
 74. Huang, H.-L., Zhao, Q., Ma, X., Liu, C., Su, Z.-E., Wang, X.-L., Li, L., Liu, N.-L., Sanders, B.C., Lu, C.-Y., et al.: Experimental blind quantum computing for a classical client. Phys. Rev. Lett. 119(5), 050503 (2017)
 75. Barrett, J., Hardy, L., Kent, A.: No signaling and quantum key distribution. Phys. Rev. Lett. 95(1), 010503 (2005)
 76. Acín, A., Brunner, N., Gisin, N., Massar, S., Pironio, S., Scarani, V.: Device-independent security of quantum cryptography against collective attacks. Phys. Rev. Lett. 98(23), 230501 (2007)
 77. Schrödinger, E.: Probability relations between separated systems. Math. Proc. Cambridge Philos. Soc. 32(10), 446–452 (1936)
 78. Mhalla, M., Perdrix, S.: Graph states, pivot minor, and universality of (X, Z)-measurements. Int. J. Unconv. Comput. 9 (2013)
 79. Fitzsimons, J., Vidick, T.: A multiprover interactive proof system for the local Hamiltonian problem. In: Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, pp 103–112. ACM (2015)
 80. Laflamme, R., Miquel, C., Paz, J.P., Zurek, W.H.: Perfect quantum error correcting code. Phys. Rev. Lett. 77(1), 198 (1996)
 81. Ji, Z.: Classical verification of quantum proofs. In: Proceedings of the Forty-Eighth Annual ACM Symposium on Theory of Computing, pp. 885–898. ACM (2016)
 82. Mermin, N.D.: Simple unified form for the major no-hidden-variables theorems. Phys. Rev. Lett. 65(27), 3373 (1990)
 83. Peres, A.: Incompatible results of quantum measurements. Phys. Lett. A 151(3-4), 107–108 (1990)
 84. Knill, E., Laflamme, R.: Power of one bit of quantum information. Phys. Rev. Lett. 81(25), 5672 (1998)
 85. Kapourniotis, T., Kashefi, E., Datta, A.: Verified delegated quantum computing with one pure qubit. arXiv:1403.1438 (2014)
 86. Bremner, M.J., Jozsa, R., Shepherd, D.J.: Classical simulation of commuting quantum computations implies collapse of the polynomial hierarchy. In: Proceedings of the Royal Society of London A: Mathematical, Physical and Engineering Sciences. The Royal Society, rspa20100301 (2010)
 87. Mills, D., Pappa, A., Kapourniotis, T., Kashefi, E.: Information theoretically secure hypothesis test for temporally unstructured quantum computation. arXiv:1704.01998 (2017)
 88. Kapourniotis, T., Datta, A.: Nonadaptive fault-tolerant verification of quantum supremacy with noise. arXiv:1703.09568 (2017)
 89. Ising, E.: Beitrag zur Theorie des Ferromagnetismus. Zeitschrift für Physik A Hadrons and Nuclei 31(1), 253–258 (1925)
 90. Gao, X., Wang, S.-T., Duan, L.-M.: Quantum supremacy for simulating a translation-invariant Ising spin model. Phys. Rev. Lett. 118(4), 040502 (2017)
 91. Disilvestro, L., Markham, D.: Quantum protocols within Spekkens’ toy model. Phys. Rev. A 95(5), 052324 (2017)
 92. Spekkens, R.W.: Evidence for the epistemic view of quantum states: a toy theory. Phys. Rev. A 75(3), 032110 (2007)
 93. Nielsen, M.A., Chuang, I.L.: Quantum Computation and Quantum Information: 10th Anniversary Edition, 10th edn. Cambridge University Press, New York (2011)
 94. Buhrman, H., Cleve, R., Laurent, M., Linden, N., Schrijver, A., Unger, F.: New limits on fault-tolerant quantum computation. In: 47th Annual IEEE Symposium on Foundations of Computer Science, 2006. FOCS ’06, pp. 411–419. IEEE (2006)
 95. Fujii, K., Hayashi, M.: Verifiable fault-tolerance in measurement-based quantum computation. arXiv:1610.05216 (2016)
 96. Barz, S., Fitzsimons, J.F., Kashefi, E., Walther, P.: Experimental verification of quantum computation. Nat. Phys. 9(11), 727–731 (2013)
 97. Barz, S., Kashefi, E., Broadbent, A., Fitzsimons, J.F., Zeilinger, A., Walther, P.: Demonstration of blind quantum computing. Science 335(6066), 303–308 (2012)
 98. Greganti, C., Roehsner, M.-C., Barz, S., Morimae, T., Walther, P.: Demonstration of measurement-only blind quantum computing. J. Phys. 18(1), 013020 (2016)
 99. Greganti, C., Roehsner, M.-C., Barz, S., Waegell, M., Walther, P.: Practical and efficient experimental characterization of multiqubit stabilizer states. Phys. Rev. A 91(2), 022325 (2015)
 100. IBM Quantum Experience. http://research.ibm.com/ibmq/
 101.
 102.
 103. Broadbent, A., Jeffery, S.: Quantum homomorphic encryption for circuits of low T-gate complexity. In: Advances in Cryptology - CRYPTO 2015 - 35th Annual Cryptology Conference, Santa Barbara, CA, USA, August 16-20, 2015. Proceedings, Part II, pp. 609–629 (2015)
 104. Dulek, Y., Schaffner, C., Speelman, F.: Quantum Homomorphic Encryption for Polynomial-Sized Circuits, pp 3–32. Springer, Berlin (2016)
 105. Alagic, G., Dulek, Y., Schaffner, C., Speelman, F.: Quantum fully homomorphic encryption with verification. arXiv:1708.09156 (2017)
 106. Mahadev, U.: Classical homomorphic encryption for quantum circuits. arXiv:1708.02130 (2017)
 107. Shamir, A.: IP = PSPACE. J. ACM 39(4), 869–877 (1992)
 108. Aharonov, D., Arad, I., Vidick, T.: Guest column: the quantum PCP conjecture. ACM SIGACT News 44(2), 47–79 (2013)
 109. Watrous, J.: Guest column: an introduction to quantum information and quantum circuits 1. SIGACT News 42(2), 52–67 (2011)
 110. Watrous, J.: Quantum computational complexity. In: Encyclopedia of Complexity and Systems Science, pp 7174–7201. Springer (2009)
 111. Harrigan, N., Spekkens, R.W.: Einstein, incompleteness, and the epistemic view of quantum states. Found. Phys. 40(2), 125–157 (2010)
 112. Gottesman, D.: An introduction to quantum error correction and fault-tolerant quantum computation. In: Quantum Information Science and its Contributions to Mathematics, Proceedings of Symposia in Applied Mathematics, vol. 68, pp 13–58 (2009)
 113. Raussendorf, R., Briegel, H.J.: A one-way quantum computer. Phys. Rev. Lett. 86, 5188–5191 (2001)
 114. Briegel, H.J., Browne, D.E., Dür, W., Raussendorf, R., Van den Nest, M.: Measurement-based quantum computation. Nat. Phys., 19–26 (2009)
 115. Raussendorf, R., Browne, D.E., Briegel, H.J.: Measurement-based quantum computation on cluster states. Phys. Rev. A 68(2), 022312 (2003)
 116. Complexity Zoo. https://complexityzoo.uwaterloo.ca/Complexity_Zoo
 117. Arora, S., Barak, B.: Computational Complexity: A Modern Approach, 1st edn. Cambridge University Press, New York (2009)
 118. Ben-Or, M., Goldwasser, S., Kilian, J., Wigderson, A.: Multi-prover interactive proofs: how to remove intractability assumptions. In: Proceedings of the Twentieth Annual ACM Symposium on Theory of Computing, pp 113–131. ACM (1988)
 119. Cleve, R., Hoyer, P., Toner, B., Watrous, J.: Consequences and limits of nonlocal strategies. In: 19th IEEE Annual Conference on Computational Complexity, 2004. Proceedings, pp. 236–249. IEEE (2004)
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.