Finding eigenvectors with a quantum variational algorithm

This paper presents a hybrid variational quantum algorithm that finds a random eigenvector of a unitary matrix with a known quantum circuit. The algorithm is based on the SWAP test on trial states generated by a parametrized quantum circuit. The eigenvector is described by a compact set of classical parameters that can be used to reproduce the found approximation to the eigenstate on demand. This variational eigenvector finder can be adapted to solve the generalized eigenvalue problem, to find the eigenvectors of normal matrices and to perform quantum principal component analysis (QPCA) on unknown input mixed states. These algorithms can all be run with low depth quantum circuits, suitable for an efficient implementation on noisy intermediate-scale quantum computers (NISQC) and, with some restrictions, on linear optical systems. Limitations and potential applications are discussed.

1 Introduction: Noisy intermediate scale quantum computers and hybrid algorithms.
Scalable general purpose quantum computers could run algorithms that are more efficient than any classical alternative [1,2]. However, at the present moment, the available technology is restricted to computers with a moderate number of qubits and a varying degree of noise. These computers are usually dubbed Noisy Intermediate Scale Quantum Computers, NISQC [3,4]. In this scenario, there has been a growing interest in hybrid quantum-classical algorithms [5,6,7] where part of the work is shifted to a classical computer. In most cases there is a continuous feedback between the classical and the quantum computer: the quantum circuits are a function of a few parameters which are updated in the classical part of the algorithm according to the results of the quantum measurements from previous stages.
These algorithms are particularly useful in problems dealing with large state spaces which can be sampled quickly on a quantum computer without the need to explicitly write out the whole state, which has a size that grows exponentially with the size of the problem.
A good example is quantum simulation, one of the most promising applications of quantum computing. For instance, in molecular simulation, the variational quantum eigensolver can efficiently search for a particular state in the Hilbert space of n qubits, which grows exponentially with the size of the problem, and still achieve reasonable results even in the presence of noise [8,9,10]. This approach, based on quantum circuits with a classical control, also plays a key role in quantum machine learning [11,12,13,14,15].
In this paper, I present an application of these concepts to the search for an eigenvector of a known unitary, which can be generalized to different kinds of matrices with some restrictions. The algorithm is based on a modified version of the SWAP test for state comparison and can be realized with circuits of a low depth (with few consecutive elementary gates). We only require a state preparation phase, a circuit for U, and one CNOT gate and one Hadamard gate for each qubit on which U acts. At the end of the process, we obtain a list of classical parameters that can produce a good approximation to a random eigenvector of the desired matrix using a known classically controlled quantum circuit.
The same method can be adapted to perform Quantum Principal Component Analysis of an unknown mixed state, which can be cast as a problem of finding the eigenvectors of a Hermitian matrix.
The paper starts with a brief reminder of the SWAP test for quantum state comparison and some hardware-efficient optimizations in Section 2, followed by an explanation of the notation and methods of hybrid algorithms in Section 3.
The core quantum variational eigenvector finder algorithm is presented in Section 4, including modifications to solve the generalized eigenvalue problem (Section 4.1) and to find the eigenstates of general normal matrices instead of just unitaries (Section 4.2). Section 5 shows how to adapt the variational eigenvector finder for Quantum Principal Component Analysis.
Section 6 proposes an alternative implementation in optical systems instead of the usual quantum circuit model. Finally, Section 7 discusses the strong and weak points of the variational approach for eigenvector determination and comments on some potential applications in the short and long term.

2 The SWAP test
While it is impossible to tell apart with certainty two quantum states which are not orthogonal, there are methods that can tell us whether they are equal or not with a certain probability. The SWAP test [16,17] permits a simple comparison. We start with two arbitrary n qubit states |ψ⟩ and |ϕ⟩ and an ancillary qubit initially in the state |0⟩. The quantum circuit corresponding to the test is shown in Figure 1. The evolution in that circuit is

|Φ_out⟩ = 1/2 |0⟩ (|ψ⟩|ϕ⟩ + |ϕ⟩|ψ⟩) + 1/2 |1⟩ (|ψ⟩|ϕ⟩ − |ϕ⟩|ψ⟩). (1)

If we measure the ancillary qubit after the last Hadamard gate, the probability of finding the state |0⟩ is associated with the eigenvalue 1 of the observable Z = P_0 − P_1, for the projectors P_0 and P_1 onto the |0⟩ and |1⟩ states. For the output state |Φ_out⟩ the probability of measuring the |0⟩ state in the ancillary qubit is

P(0) = 1/2 + |⟨ψ|ϕ⟩|²/2

and the average value of the Z observable for the ancillary qubit becomes

⟨Z⟩ = P(0) − P(1) = |⟨ψ|ϕ⟩|².

For two identical states, the SWAP operation does not change the state and the value of the control qubit is irrelevant. The two Hadamard gates on the ancillary qubit cancel and the whole circuit is equivalent to the identity. We can see from Eq. (1) that when both states are the same, |ψ⟩ = |ϕ⟩, there is a destructive interference between the terms entangled to the ancillary qubit state |1⟩.
The result of the measurement is always |0⟩, and whenever we get that result we say the SWAP test has been passed. However, if the states are different, each part of the uniform superposition of the ancillary qubit becomes entangled to a different state. In that case there is always a nonzero probability of measuring the state |1⟩. In a noiseless system, failing the test shows with certainty that the input states are different. The probability of finding a |1⟩ in the ancillary qubit on measurement grows as the input states become more different. If we have k copies of the states and repeat the test k times, even for very close states with an overlap |⟨ψ|ϕ⟩|² ≈ 1 − ϵ, the probability of passing every test is P(0)^k ≈ 1 − kϵ. With enough repetitions, if we never fail the test, we can be confident that the two input states are the same or, at least, very close.
As expected, global phases do not change the result of the test. Two inputs |ψ⟩ and e^{iΦ}|ψ⟩, for an arbitrary phase Φ, behave exactly the same way under the test. This is the desired operation. While a larger circuit with a reference could distinguish between these two states, if these inputs were isolated states there is no physical measurement that could tell them apart.
The comparison in the SWAP test is also valid for general inputs with mixed states. We consider an ensemble given by the density matrix

ρ = Σ_i p_i |ψ_i⟩⟨ψ_i|,

where we have a statistical mixture with a probability p_i of finding each pure state |ψ_i⟩. For two mixed states with density matrices ρ and σ the probability of passing the test becomes

P(0) = 1/2 + tr(ρσ)/2,

where tr(ρσ) replaces |⟨ψ|ϕ⟩|² [18]. When both states are pure states with ρ = |ψ⟩⟨ψ| and σ = |ϕ⟩⟨ϕ| we recover the original overlap.

Figure 2: Equivalent circuits for the SWAP test on single qubit states.
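These statistics are easy to check numerically. The following sketch (our own helper code, not from the paper) evaluates the pass probability P(0) = 1/2 + tr(ρσ)/2, which reduces to 1/2 + |⟨ψ|ϕ⟩|²/2 for pure inputs:

```python
import numpy as np

def dm(psi):
    """Density matrix |psi><psi| of a (normalized) state vector."""
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def swap_test_pass_prob(rho, sigma):
    """P(0) = 1/2 + tr(rho sigma)/2 for the SWAP test on rho and sigma."""
    return 0.5 + 0.5 * float(np.real(np.trace(rho @ sigma)))

zero = dm([1, 0])              # |0>
plus = dm([1, 1])              # |+>
mixed = np.eye(2) / 2          # maximally mixed qubit

print(round(swap_test_pass_prob(zero, plus), 6))   # 0.75: overlap 1/2
print(round(swap_test_pass_prob(zero, zero), 6))   # 1.0:  identical states
print(round(swap_test_pass_prob(mixed, plus), 6))  # 0.75: tr(rho sigma) = 1/2
```

Identical pure states always pass, while the maximally mixed state gives tr(ρσ) = 1/2 against any qubit pure state.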

2.1 The destructive SWAP test
The depth of the SWAP test circuit can be reduced if we notice that, after the test, the original inputs become entangled and are discarded. The total number of elementary gates needed to perform a SWAP test can be reduced to 2n for n qubits. Most importantly, we only need two stages. This is the destructive SWAP test explained in [19], which was later rediscovered by an AI driven search [20]. We can check the test is valid starting from the SWAP circuit for two qubits shown in Figure 2a. The SWAP operation is performed with three CNOT gates. A sequence of 3 CNOT gates can perform classical XOR swapping [21] for two bits in the computational basis x, y ∈ F_2 so that

CNOT(1,2) CNOT(2,1) CNOT(1,2) |x⟩|y⟩ = |y⟩|x⟩,

where CNOT(i, j) is controlled by the ith qubit and has the jth qubit as target. The result follows from two properties of the XOR operation ⊕ (equivalent to modulo 2 addition): x ⊕ x = 0 and x ⊕ 0 = x.
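The three-CNOT identity can be verified directly with matrices; a small sketch with our own conventions (the first qubit is the left Kronecker factor):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])   # |1><1|

# CNOT(1,2): control on qubit 1, target on qubit 2, and the reverse.
CNOT12 = np.kron(P0, I2) + np.kron(P1, X)
CNOT21 = np.kron(I2, P0) + np.kron(X, P1)

SWAP = CNOT12 @ CNOT21 @ CNOT12

# SWAP exchanges the middle basis states |01> <-> |10>.
expected = np.array([[1.0, 0, 0, 0],
                     [0, 0, 1, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1]])
print(np.array_equal(SWAP, expected))  # True
```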
If the middle CNOT is replaced by a Toffoli gate controlled by the ancillary qubit in the SWAP test, we have a controlled SWAP operation. When the ancilla is in the state |1⟩, the middle Toffoli gate is equivalent to a CNOT gate on the inputs and we recover the 3 CNOTs of the SWAP circuit. When it is |0⟩, the two consecutive CNOT gates cancel each other and we have the identity.
The NOT operation can be decomposed as X = HZH, so we can convert CNOT gates into CZ gates surrounded by two Hadamard gates. The Toffoli gate becomes a doubly controlled Z gate with operation CCZ |x⟩|y⟩|z⟩ = (−1)^{x·y·z} |x⟩|y⟩|z⟩, where the sign change only happens if all the qubits are 1. In that sense, any of the qubits can be considered to be the target. Putting it all together, we can find the series of equivalent circuits shown in Figs. 2b-2d. Now, starting from Figure 2d, we can notice that changes on the input qubits after the Toffoli gate do not affect the state of the ancillary qubit. We might just as well measure both qubits right after the Toffoli. Finally, there is a further simplification if we notice that the ancillary qubit will remain in |0⟩ unless both control qubits from the inputs are in the state |1⟩. The result of measuring the two input qubits at this point perfectly determines the state of the ancillary qubit, which can be removed. If we measure in the computational basis (observable Z), we can associate a measurement finding the |0⟩ state to a bit 0 (Z's eigenvalue 1) and finding the state |1⟩ to a bit 1 (Z's eigenvalue −1). The logical AND of the measurement results reproduces the result of measuring the ancillary qubit. A measurement on the input qubits and classical computation (the AND operation) return the same statistics as measuring the Z observable for the ancilla.
The same reasoning can be applied to multiple qubits. In that case, the SWAP operation has the same decomposition for each pair of qubits of the multiqubit state. We group the first qubit of |ϕ⟩, q^1_ϕ, with the first qubit of |ψ⟩, q^1_ψ, and continue forming all the n (q^i_ϕ, q^i_ψ) pairs for 1 ≤ i ≤ n. For n qubit states, the simplified circuit in Figure 2d would have the same CNOT and Hadamard gates in the corresponding qubit pairs and a Toffoli gate controlled by them and acting on the ancillary qubit. There is a total of n Toffolis that change the state of the ancilla from |0⟩ to |1⟩ or from |1⟩ to |0⟩ whenever both controls are |1⟩. The final value depends on the parity of the bitwise AND, ∧, of the strings coming from measuring the n qubits of each input state. An even number of ones means the output state in the ancillary qubit would have been |0⟩ (a pass). An odd number of ones corresponds to failing the test.
In terms of the Z observable of the ancillary qubit, the measurement result is exactly the same as the product of the measurement results for the CZ observable on each pair. For two qubits, the CZ operation has the eigenvectors |00⟩, |01⟩ and |10⟩ associated to the eigenvalue 1 and the eigenvector |11⟩ associated to the eigenvalue −1. The product of these CZ observables has the same value as a measurement of the observable Z in the SWAP test. Considering all the qubits, the observable giving the result of the destructive SWAP test is CZ^⊗n if we write our inputs in the interleaved order |q^1_ϕ q^1_ψ q^2_ϕ q^2_ψ · · · q^n_ϕ q^n_ψ⟩. Finding a 1 eigenvalue corresponds to passing the test. Measuring the eigenvalue −1 indicates a failure.
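The statistics of the destructive test can be reproduced with a small statevector calculation. The sketch below (our own helper code, single-qubit inputs, CNOT from the |ψ⟩ qubit onto the |ϕ⟩ qubit followed by a Hadamard on the control) checks that the probability of the outcome with bitwise AND equal to 1 matches the expected failure probability 1/2 − |⟨ψ|ϕ⟩|²/2:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1.0, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]])   # control: first qubit

def fail_prob(psi, phi):
    """P(fail) of the destructive test on single-qubit inputs |psi>, |phi>."""
    state = np.kron(psi, phi)                    # two-qubit register
    state = np.kron(H, np.eye(2)) @ (CNOT @ state)
    return abs(state[3]) ** 2                    # P(outcome 11): AND = 1

zero = np.array([1.0, 0.0])
one = np.array([0.0, 1.0])
plus = np.array([1.0, 1.0]) / np.sqrt(2)

print(round(fail_prob(zero, zero), 6))  # 0.0:  identical states never fail
print(round(fail_prob(zero, one), 6))   # 0.5:  orthogonal states
print(round(fail_prob(zero, plus), 6))  # 0.25: 1/2 - |<0|+>|^2 / 2
```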

3 Hybrid algorithms: Variational methods
Physical quantum computers are difficult to build. They have a high cost of operation when compared to classical computers and, in their current implementations, are still too prone to decoherence to be able to carry out long calculations. For all these reasons, whenever possible, we would like to replace the quantum parts of the algorithm by classical computation. For instance, in Shor's algorithm for integer factorization [1], the only part that needs a quantum computer is the order finding subroutine. As shown by Ekerå [22], a classical computer can perform a longer preprocessing stage and the quantum computer can be reserved for a single use of the non-classical routine for order finding. Even this simplified form of factoring still requires a number of qubits and consecutive stages beyond the capabilities of current noisy quantum computers.
The first promising results of real-world quantum computers come from hybrid algorithms where the quantum and the classical stages feed each other. One particularly successful family of algorithms is that of variational methods, which take advantage of the native capacity of a quantum computer to describe states in a large space without needing to store an exponential amount of data [8,10].
For instance, if we want to compute the ground state energy E_g of a particular Hamiltonian H, classical variational methods propose a series of trial states. For any state |ψ⟩, the expected energy value ⟨ψ| H |ψ⟩ ≥ E_g gives an upper bound to the minimal energy of the ground state. If we choose the trial states with care, we can approximate E_g with good precision. Quantum methods offer a compact way to perform variational algorithms. We no longer need to write down the whole state vector of the ground state, which can have a number of complex elements growing exponentially with the size of the problem. For instance, a simple system with n electrons in separate sites can have 2^n spin configurations.
Most variational hybrid algorithms can be described using four main blocks. We will describe them in parallel with an example application: ground state estimation for a Hamiltonian made of a combination of Pauli operators (Figure 3). The four blocks are:
• A parametrized quantum circuit: The gates in the circuit depend on a list of classical parameters ⃗θ = (θ_1, . . ., θ_k) to prepare a trial state, or ansatz, |ψ(⃗θ)⟩. We assume a fixed initial state, usually with the n qubits initialized to |0⟩, so that U(⃗θ) |0⟩^⊗n = |ψ(⃗θ)⟩.
In the example of Figure 3, the classical parameters appear in a series of rotation gates RX and RY, which rotate a single qubit with respect to the X and Y axes of the Bloch sphere, respectively, by an angle determined by a classical parameter.
There are also two CNOT gates that introduce entanglement in the final state. This circuit follows the approach of what is called a Hardware Efficient Ansatz [23], where the gates are selected to give the shallowest possible circuit that can still approximate the desired state.
• A processing circuit: Depending on the desired target, we need to act on the ansatz in different ways. The measurement stage must return values related to an objective function scoring how close we are to our goal.
The example in Figure 3 shows a typical variational simulation scenario where the gates perform a change of basis so that the final measurements correspond to one term of the desired Hamiltonian in a particular encoding. In this case, the H gate on the middle qubit and the identity on the rest make the final measurement in the computational basis give the exact same results as measuring the Z⊗X⊗Z observable for the generated ansatz |ψ(⃗θ)⟩.
• Measurement: We usually consider measurement in the computational basis {|0⟩ , |1⟩} for each qubit. As a single measurement is not enough to estimate the desired averages, we need to prepare multiple copies of the same trial state using the exact same classical parameters and processing circuit so that we can get meaningful statistics. This stage has been considered together with the processing circuit in the example.
• Classical processing and feedback: After a few repetitions, the statistics from the measurements are used to compute an objective function f(⃗θ) which takes a value that depends on the chosen classical parameters and is related to the problem we want to solve. This dependence is, in general, quite complex. Using a quantum circuit, we reduce an explicit description of the problem to measuring a small experimental setup.
The global process is a loop that takes the estimated value of the objective function f(⃗θ) at each iteration as a guide to tweak the classical parameters ⃗θ. In most cases, the problem is posed as a minimization problem where we use classical optimization methods (such as nonlinear optimization or gradient descent) to change the input parameters at stage k to new values ⃗θ_k, until we find a stable value f(⃗θ_s) at iteration s which is assumed to be the minimum.
In the example, the expected value ⟨ψ(⃗θ)|Z ⊗ X ⊗ Z|ψ(⃗θ)⟩, which approximates the energy of |ψ(⃗θ)⟩ for the simulated Hamiltonian, is the objective function to be minimized. The classical part starts by choosing a random initial list of parameters ⃗θ_1 that gives a first value f(⃗θ_1). The chosen optimization method then generates a new set of classical parameters ⃗θ_2 that are fed into the parametrized quantum circuit. After we complete the second round of measurements, the new value of the objective function f(⃗θ_2) is compared to the previous one. The changes in the parameters that lead to a smaller value are kept and the ones that increase the objective function are modified, until we can no longer find a better solution. This part of the hybrid algorithm draws heavily on classical optimization and there are multiple available software libraries, like Python's scikit-learn [24].
Notice that, in many cases, we have no guarantee that the algorithm will succeed. Still, we can provide educated guesses for the solution, similar to what happens in many classical machine learning methods.
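A parametrized circuit of this kind can be simulated directly on a classical computer for small sizes. The following sketch (our own illustrative code, not from the paper) builds a three-qubit hardware-efficient ansatz with an RY layer, CNOT entanglers and an RX layer:

```python
import numpy as np

def rx(t):
    """Single-qubit rotation about X by angle t."""
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -1j * s], [-1j * s, c]])

def ry(t):
    """Single-qubit rotation about Y by angle t."""
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def cnot(n, control, target):
    """CNOT on an n-qubit register; qubit 0 is the leftmost tensor factor."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]
        if bits[control]:
            bits[target] ^= 1
        U[int("".join(map(str, bits)), 2), i] = 1.0
    return U

def ansatz(theta):
    """|psi(theta)>: RY layer, CNOT entanglers, RX layer acting on |000>."""
    state = np.zeros(8, dtype=complex)
    state[0] = 1.0                                 # |000>
    state = np.kron(np.kron(ry(theta[0]), ry(theta[1])), ry(theta[2])) @ state
    state = cnot(3, 1, 2) @ (cnot(3, 0, 1) @ state)
    return np.kron(np.kron(rx(theta[3]), rx(theta[4])), rx(theta[5])) @ state

theta = np.random.default_rng(1).uniform(0, 2 * np.pi, 6)
print(round(np.linalg.norm(ansatz(theta)), 6))  # 1.0: the circuit is unitary
```

With all angles set to zero the circuit reduces to the identity on |000⟩, which is a convenient sanity check.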
The main advantage of this family of hybrid algorithms is the possibility to probe a large state space with a quantum circuit that grows linearly instead of exponentially with the size of the system. This is particularly interesting in the simulation of quantum systems, where the states of the simulated system can be easily mapped into the quantum computer. For instance, chains of n spin-1/2 fermions with spin up or down can be directly represented by n qubits in the state |0⟩ or |1⟩, and we can directly explore the Hilbert space of dimension 2^n with a compact system and study, among others, phase transitions in the Ising model [25]. A good review of existing applications can be found in [10].
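The full hybrid loop for the Figure 3 example can be sketched classically, with exact expectation values standing in for the measurement statistics a real device would accumulate (our own illustrative code; the product RY ansatz is a simplification chosen so the example stays short):

```python
import numpy as np
from scipy.optimize import minimize

Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
ZXZ = np.kron(np.kron(Z, X), Z)          # the measured observable

def ansatz(theta):
    """Product of RY rotations on three qubits (real amplitudes)."""
    def ry(t):
        return np.array([np.cos(t / 2), np.sin(t / 2)])
    return np.kron(np.kron(ry(theta[0]), ry(theta[1])), ry(theta[2]))

def objective(theta):
    psi = ansatz(theta)
    return float(psi @ ZXZ @ psi)        # <psi(theta)|ZXZ|psi(theta)>

# Classical feedback loop: restart the optimizer a few times, keep the best.
rng = np.random.default_rng(0)
best = min((minimize(objective, rng.uniform(0, 2 * np.pi, 3), method="BFGS")
            for _ in range(5)), key=lambda r: r.fun)
print(round(best.fun, 4))  # -1.0: the minimal eigenvalue of Z(x)X(x)Z
```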

4 A variational method for the eigenvectors of unitary matrices
The SWAP test gives a suitable objective function for a hybrid variational algorithm that finds the eigenstates |e_i⟩ of a unitary evolution U such that U|e_i⟩ = λ_i|e_i⟩. The method can be generalized to multiple scenarios.
For an N × N unitary U, we can find N eigenvectors |e_i⟩ which form an orthonormal basis, and the corresponding eigenvalues λ_i = e^{iϕ_i} are complex numbers of modulus 1.
The circuit for the quantum part of the algorithm is shown in Figure 4. We use the same parametrized circuit U ( ⃗ θ) with the same classical parameters twice to generate a trial state |ψ( ⃗ θ)⟩|ψ( ⃗ θ)⟩, which is transformed by the unitary into |ψ( ⃗ θ)⟩U |ψ( ⃗ θ)⟩ and then proceeds to a destructive SWAP test circuit.
The objective function in the algorithm comes from the statistics of the SWAP test and can be written as

f(⃗θ) = |⟨ψ(⃗θ)|U|ψ(⃗θ)⟩|².

We approximate the probabilities of passing and failing the SWAP test, P(0) and P(1) respectively, by counting how many times we get each outcome and dividing by the total number of times we run the test. The 0 and 1 results correspond to the parity of the bitwise AND of the measurement results in the computational basis (assigning 0 to |0⟩ and 1 to |1⟩) and they have the same statistics that would appear if we had measured the ancillary qubit in the complete SWAP test. The evaluation of the objective function is thus reduced to computing the expected value of the Z observable for the ancillary qubit, albeit in a roundabout way that reduces the final number of gates. The objective function is the squared overlap of normalized states, with f(⃗θ) ≤ 1, and the maximum f(⃗θ) = 1 is obtained only if the trial state is an eigenvector of U. Most optimization methods and software libraries work by minimizing a function. For these we can consider

g(⃗θ) = 1 − f(⃗θ) = 1 − |⟨ψ(⃗θ)|U|ψ(⃗θ)⟩|².

With these functions, which can be computed from measurements on the circuit, we can use a classical optimization method to update the parameters ⃗θ until the parametrized circuit giving U(⃗θ) produces a good approximation to an eigenstate of the target unitary U.
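As a classical sanity check of this procedure, the sketch below (our own code, single-qubit toy problem) replaces the SWAP-test sampling with exact inner products and minimizes g(⃗θ) = 1 − |⟨ψ(⃗θ)|U|ψ(⃗θ)⟩|² for a Hadamard target:

```python
import numpy as np
from scipy.optimize import minimize

U = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # target unitary: a Hadamard

def trial(theta):
    """RY rotation plus a relative phase: covers all qubit pure states."""
    a, b = theta
    return np.array([np.cos(a / 2), np.exp(1j * b) * np.sin(a / 2)])

def cost(theta):
    psi = trial(theta)
    return 1.0 - abs(psi.conj() @ U @ psi) ** 2   # g(theta) = 1 - f(theta)

rng = np.random.default_rng(2)
best = min((minimize(cost, rng.uniform(0, np.pi, 2), method="Nelder-Mead")
            for _ in range(5)), key=lambda r: r.fun)

psi = trial(best.x)
eigval = psi.conj() @ U @ psi       # modulus 1 for a converged eigenvector
print(round(abs(best.fun), 6))      # 0.0 within numerical precision
print(round(abs(eigval), 4))        # 1.0
```

At convergence the trial state satisfies U|ψ⟩ ≈ λ|ψ⟩ and the classical parameters give a compact, reproducible description of the eigenvector.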

4.1 Solving the generalized eigenvalue problem
The circuit can be modified to find generalized eigenvectors in what is usually called the generalized eigenvalue problem: finding |ψ⟩ and λ such that

U|ψ⟩ = λ V|ψ⟩

for two given matrices U and V (see Figure 5). The hybrid algorithm is exactly the same as before, now maximizing the objective function

f(⃗θ) = |⟨ψ(⃗θ)|V†U|ψ(⃗θ)⟩|²,

which comes from a SWAP test comparing U|ψ(⃗θ)⟩ and V|ψ(⃗θ)⟩.

4.2 Generalization for normal matrices
The variational eigenvector finder can be modified to work on normal matrices, for both the usual and the generalized eigenvalue problems. Consider a normal matrix N such that N N† = N† N. For an M × M normal matrix the spectral theorem guarantees we have a decomposition

N = Σ_{i=1}^{M} λ_i |e_i⟩⟨e_i| (14)

for M orthogonal eigenvectors |e_i⟩. This spectral decomposition might not be unique if there are degenerate eigenvalues.
For any square M × M complex matrix A, the matrix H_A = A + A† is Hermitian. If A admits a spectral decomposition like in Eq. (14), this Hermitian matrix shares the eigenvectors of A with

H_A = Σ_{i=1}^{M} (λ_i + λ_i*) |e_i⟩⟨e_i| = Σ_{i=1}^{M} 2 Re(λ_i) |e_i⟩⟨e_i|. (15)

If we consider the unitary U_A = e^{iH_A}, it will have a decomposition

U_A = Σ_{i=1}^{M} e^{i 2 Re(λ_i)} |e_i⟩⟨e_i|.

The matrices A, H_A and U_A have the same eigenvectors, with eigenvector |e_i⟩ associated to eigenvalues λ_i, 2 Re(λ_i) and e^{i 2 Re(λ_i)}, respectively.
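These relations can be verified numerically; a short check (our own code) with a random normal matrix built from a random unitary and a complex spectrum:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(3)
# Random normal matrix: unitary eigenvectors with a complex spectrum.
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
lam = rng.normal(size=4) + 1j * rng.normal(size=4)
A = Q @ np.diag(lam) @ Q.conj().T

H_A = A + A.conj().T          # Hermitian, eigenvalues 2 Re(lambda_i)
U_A = expm(1j * H_A)          # unitary, eigenvalues e^{i 2 Re(lambda_i)}

v = Q[:, 0]                   # eigenvector shared by A, H_A and U_A
print(np.allclose(A @ v, lam[0] * v))                      # True
print(np.allclose(H_A @ v, 2 * lam[0].real * v))           # True
print(np.allclose(U_A @ v, np.exp(2j * lam[0].real) * v))  # True
```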
If we can construct the unitary evolution U_A, we can obtain the eigenvectors of any A admitting a spectral decomposition, including all normal matrices. Finding an efficient quantum gate sequence that realizes the exponential e^{iH_A} might be difficult. This is a well-studied problem that appears in Hamiltonian simulation [26,27] and in the HHL quantum algorithm for linear systems of equations [28]. All the tricks used there can be recycled in this variational method, including efficient approximations for sparse matrices [29] or designing evolutions directly for the particular quantum computer architecture on which the variational circuit is run [30].

5 Principal Component Analysis of unknown mixed states
The proposed variational method can also be extended to statistical mixtures, which can appear, among other situations, after a pure state is subject to decoherence. Any mixed state can be described by a density matrix with a variable number of terms m,

ρ = Σ_{j=1}^{m} p_j |ψ_j⟩⟨ψ_j|,

where the p_j are the probabilities of finding each pure state |ψ_j⟩ and sum to one. This decomposition is not unique and the states are not necessarily orthogonal. The density matrix ρ is still Hermitian and it will have a spectral decomposition

ρ = Σ_{i=1}^{N} λ_i |ψ_i⟩⟨ψ_i|,

where N is the dimension of the state space and the |ψ_i⟩ are now orthogonal states. For non-degenerate eigenvalues, this decomposition is unique.
One important problem for mixed states is Quantum Principal Component Analysis (QPCA): determining the leading terms in the spectral decomposition. The eigenvalues of the Hermitian ρ are real and can be ordered from the largest to the smallest as λ_1 ≥ λ_2 ≥ . . . ≥ λ_N. The principal component is the eigenstate associated to the largest eigenvalue which, in certain scenarios, gives a good approximation to the full state. This mirrors classical Principal Component Analysis [31], which has applications in multiple fields, like machine learning, where it is generally used as a way to reduce the dimensionality of the data [32,33].
The general QPCA problem is thought to be hard for classical computers. Lloyd, Mohseni and Rebentrost showed that there is an efficient quantum algorithm for QPCA as long as we can prepare multiple copies of a state with a density matrix AA† for any given matrix A [34]. The proposed quantum method was later dequantized by Tang, who showed that, if we assume there is an equivalent black box giving this state preparation, there are classical methods which are only quadratically worse than the quantum proposal (as opposed to the exponential quantum advantage in the case of low rank matrices) [35].
The variational quantum algorithm proposed in the previous sections can also be used to perform principal component analysis, with the same caveats regarding state preparation. We assume we are given multiple copies of a mixed state with density matrix ρ. While for general tasks it might be hard to map a classical matrix to a state, the method can still be useful, for instance, to approximate the input of a noisy channel within certain precision when the principal component does indeed give a good estimation of the original input [36]. When compared to the QPCA algorithm of [34], we just need the raw input states instead of using them to build the evolution e^{iρ}, which makes the variational algorithm more suitable for current noisy quantum computers. However, we are subject to the usual problems of variational methods: they are heuristic and we have no guarantee of finding a solution, among other challenges.
We will use the circuit in Figure 6, which also admits a gate-efficient realization with a destructive SWAP test like the circuits in Figures 4 and 5. For an unknown input mixed state ρ, we perform a series of SWAP tests comparing the input to the pure state σ(⃗θ) = |ψ(⃗θ)⟩⟨ψ(⃗θ)| generated from a fixed input and a parametrized quantum circuit controlled by the classical parameters ⃗θ. In the optimization phase, we will tweak these classical parameters until the ansatz σ(⃗θ) maximizes the value of the measured observable.
From Equations (7) and (18), we see the expected value we approximate is

tr(ρ σ(⃗θ)) = Σ_{i=1}^{N} λ_i |⟨ψ_i|ψ(⃗θ)⟩|².

The global maximum is achieved when the ansatz is the principal component (the eigenstate |ψ_1⟩ associated to the largest eigenvalue λ_1). In that case the expected value becomes λ_1.
For any other trial state, with at least some part of the state orthogonal to |ψ_1⟩, we have tr(ρ σ(⃗θ)) < λ_1. In general, the final state found at the end of the algorithm will depend on the optimization method and the input state. If we start close to a principal component, the search will find a local maximum with a trial state close to the desired state. However, the search algorithm can also converge to local maxima that do not correspond to the principal component. The global maximum will be more prominent for mixed states with a strong principal component.
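A classical toy version of this optimization (our own code, with exact traces replacing the SWAP-test statistics) shows the maximum of tr(ρ σ(⃗θ)) recovering both the principal component and its eigenvalue:

```python
import numpy as np
from scipy.optimize import minimize

# A qubit mixed state with a dominant component (our toy example).
rho = 0.8 * np.array([[1.0, 0.0], [0.0, 0.0]]) \
    + 0.2 * np.array([[0.5, 0.5], [0.5, 0.5]])

def trial(theta):
    """Pure trial state sigma(theta) = |psi(theta)><psi(theta)|."""
    a, b = theta
    return np.array([np.cos(a / 2), np.exp(1j * b) * np.sin(a / 2)])

def cost(theta):
    psi = trial(theta)
    return -float(np.real(psi.conj() @ rho @ psi))   # -tr(rho sigma(theta))

rng = np.random.default_rng(4)
best = min((minimize(cost, rng.uniform(0, np.pi, 2), method="Nelder-Mead")
            for _ in range(5)), key=lambda r: r.fun)

lam1 = np.linalg.eigvalsh(rho)[-1]          # largest eigenvalue of rho
print(round(-best.fun, 4), round(lam1, 4))  # the two values agree (~0.9123)
```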
There are two important differences with respect to eigenvector estimation for unitary and normal matrices. In those cases, we could know for sure that the algorithm had converged to a true eigenvector by checking the expected value, which should converge to 1. Here we cannot be sure we have found the principal component and not converged to some local maximum (likely a second or third component, or a superposition of the first components). However, when we do find it, we get additional information. In QPCA, it is useful to find both the state and the eigenvalue, which can be learnt from the expected value. The value will tell us how important the principal component is with respect to the other terms and how well it can be used as a compact description of the whole state. Larger values correspond to principal components that represent the whole mixed state better.
Using multiobjective optimization, we can also search for successive components. When the algorithm converges after k iterations, we obtain a set of parameters ⃗θ^1_k that give the estimated principal component σ(⃗θ^1_k) = |ψ(⃗θ^1_k)⟩⟨ψ(⃗θ^1_k)|. We can now alternate two series of rounds for each iteration j in order to evaluate the parameters ⃗θ^2_j defining our trial state for the second component. First, we perform the usual comparison of the ansatz with the input state ρ to estimate tr(ρ σ(⃗θ^2_j)). Then, we generate as input states σ(⃗θ^1_k) and σ(⃗θ^2_j) and use a SWAP test to estimate the overlap tr(σ(⃗θ^1_k) σ(⃗θ^2_j)). We want two orthogonal states such that tr(σ(⃗θ^1_k) σ(⃗θ^2_j)) = 0. Using these two estimated values, the classical parameters must be optimized according to a new objective function that the classical algorithm will use to propose the vector ⃗θ^2_{j+1} for the next iteration. Assuming we minimize f(⃗θ) during optimization, we can, for instance, define the objective function

f(⃗θ^2_j) = −tr(ρ σ(⃗θ^2_j)) + C tr(σ(⃗θ^1_k) σ(⃗θ^2_j))

with a weight constant C that penalizes the previously found principal component and directs the search towards the next largest eigenvalue in the decomposition of ρ. This procedure can be repeated as many times as needed, with additional terms to search for the third, fourth and successive components, much like the search for excited energy levels above the ground state in molecules [37].
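The deflation step can be sketched classically (our own code, exact traces instead of SWAP tests, and a real two-angle qutrit parametrization chosen for brevity):

```python
import numpy as np
from scipy.optimize import minimize

rho = np.diag([0.6, 0.3, 0.1])          # toy mixed state, already diagonal
sigma1 = np.diag([1.0, 0.0, 0.0])       # previously found first component
C = 10.0                                # penalty weight

def trial(theta):
    """Real two-angle parametrization of a pure qutrit state."""
    a, b = theta
    return np.array([np.cos(a), np.sin(a) * np.cos(b), np.sin(a) * np.sin(b)])

def cost(theta):
    sigma = np.outer(trial(theta), trial(theta))
    # f = -tr(rho sigma) + C tr(sigma1 sigma): penalized second-component search.
    return float(-np.trace(rho @ sigma) + C * np.trace(sigma1 @ sigma))

rng = np.random.default_rng(5)
best = min((minimize(cost, rng.uniform(0.1, np.pi - 0.1, 2),
                     method="Nelder-Mead") for _ in range(5)),
           key=lambda r: r.fun)

psi2 = trial(best.x)
print(round(abs(psi2[1]) ** 2, 3))  # ~1.0: the search lands on the second component
```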

6 Optical implementation
The variational algorithms for eigenvectors and quantum principal component analysis we have seen are not restricted to near-term universal quantum computers. They can also be carried out in linear optical setups with certain restrictions. We consider passive linear optical systems with input states that are a superposition of terms |n_1 n_2 · · · n_m⟩, where we have n = Σ_{i=1}^{m} n_i photons distributed into m orthogonal modes (like different paths or orthogonal polarizations). A linear optics system acting on m modes can be described classically by its scattering matrix S, which is an m × m unitary matrix [38,39]. The corresponding quantum evolution acting on n photons can be computed from S as U = φ_{m,n}(S) using different known methods [40,41,42,43]. The unitary U has a size M × M, with M = (n+m−1 choose n). For one photon the quantum evolution is exactly the scattering matrix, U = S [44].
There are multiple constructive methods to map any desired unitary S into a physical setup using only linear optical devices such as beamsplitters and phase shifters [45,46,47,48,49,50]. These implementations have a circuit size that grows quadratically with the number of modes in the worst case and can generate any desired unitary S. There exist many successful experimental realizations of configurable optical circuits with integrated optics [51,52,53,54,55,56,57] that offer a linear optical system that can be controlled electronically. These hybrid systems have configurable phase shifters and beamsplitters and give quantum evolutions that depend on a few classical parameters. They can be designed to provide any scattering matrix of the given size, but there are also simpler configurable circuits that can produce good enough ansätze in variational quantum algorithms [58].
These systems give an efficient way to realize the first half of the proposed quantum circuits, which contains the parametrized unitaries that generate the trial states for eigenvector estimation and principal component analysis. For the second half, which performs the state comparison, we need to restrict our setups.
The SWAP test admits a simple experimental realization for two photons. We start from two separate single photons in the same large Hilbert space. We just need to direct the two photons encoding the states to the two inputs of a balanced beamsplitter. At the output of the beamsplitter we place two binary photodetectors (that click for one or more photons and do nothing for the vacuum). In such a setup, we call a coincidence the simultaneous detection of one photon at each output port of the beamsplitter. For two input single photon states described by density matrices ρ and σ, the probability of finding a coincidence at the output is [59]

P_c = 1/2 − tr(ρσ)/2,

which is exactly the probability of failing a SWAP test. That way, optical measurements at a beamsplitter can be used for state comparison [19]. Notice we need to encode the whole state from a large Hilbert space into a single photon. There are many alternatives, like taking advantage of the high dimensionality provided by orbital angular momentum [60,61,62]. A good option compatible with current optical network technology is using a time-bin encoding for the orthogonal modes. Figure 7 shows different states of a qubit in time-bin encoding. The photon wavefunction can be confined to two different time bins representing the |0⟩ and |1⟩ qubits. If we use a common phase reference that is assigned to the pulses in the |0⟩ time bin, we can use amplitude modulators to distribute the probability amplitude in the two time bins and phase modulators to introduce a relative phase with respect to the reference. The result is an optical qubit in any state of choice. This approach has been used, for instance, in quantum key distribution protocols over optical fiber [63,64].
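This Hong-Ou-Mandel-type comparison reproduces the SWAP-test failure statistics exactly; a minimal numerical sketch (our own helper code, single-photon polarization examples):

```python
import numpy as np

def dm(psi):
    """Density matrix of a (normalized) single-photon state vector."""
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return np.outer(psi, psi.conj())

def coincidence_prob(rho, sigma):
    """P_c = 1/2 - tr(rho sigma)/2: coincidences at a balanced beamsplitter."""
    return 0.5 - 0.5 * float(np.real(np.trace(rho @ sigma)))

h = dm([1, 0])      # e.g. horizontal polarization
v = dm([0, 1])      # vertical polarization
d = dm([1, 1])      # diagonal polarization

print(round(coincidence_prob(h, h), 6))  # 0.0:  identical photons bunch
print(round(coincidence_prob(h, v), 6))  # 0.5:  distinguishable photons
print(round(coincidence_prob(h, d), 6))  # 0.25
```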
In the proposed variational algorithms, we need d-dimensional systems, or qudits, which can be generated by dividing the photon into d time bins. With the current technology, we can expect a reasonable coherence time of microseconds and amplitude and phase modulators in the GHz range, which could produce systems with 1024 bins, equivalent to 10 qubits. Existing modulators, beamsplitters and detectors could give results comparable to what can be done in a quantum computer with 20 qubits. Each of the two photons could also be distributed into multiple separate spatial modes, possibly with added time-bin encoding. The measurement would need a balanced beamsplitter for each pair of modes corresponding to the same position for each photon, and one detector at each of their outputs.
In all the variational algorithms proposed, both for finding eigenvectors and for quantum principal component analysis, one of the subsystems would be one photon directed to a parametrized linear optics multiport. The second system can either come from a general optical quantum channel, which could produce a mixed state we want to characterize, or from a known optical transformation if we want to map a known matrix S into the optical system in order to find a compact description of its eigenvectors, since, in this single-photon case, U = S.
While the scalability of this approach is limited, it can be an interesting alternative to full quantum computers in the near term, particularly in the cases where we need a low-noise implementation. Photons have long coherence times and relatively low noise, and do not need the advanced cooling systems required by many implementations of quantum computers, especially those based on superconducting qubits. Optical systems have their own problems, like losses, synchronization or producing good approximations to single-photon states, but, for most of these problems, there are acceptable technical solutions that have been used in quantum key distribution systems [65,66].

Strong and weak points, applications and outlook
Classically, finding the eigenvalues of a matrix is an efficient task. For unitary matrices, the QR decomposition [67,68,69] gives a robust algorithm with efficient and stable numerical software implementations in reference suites like the LAPACK library [70]. For an N × N matrix, the number of operations grows as O(N³), with alternative algorithms whose complexities lie between quadratic and cubic in special cases [71,72,73]. This includes bisection methods that can search for only one eigenvector, or a limited subset of them, instead of the whole spectrum, at a smaller cost in terms of the number of operations.
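For reference, this classical baseline is a one-liner in standard numerical software. A quick numpy check (the random unitary is built here from the QR decomposition of a complex Gaussian matrix):

```python
import numpy as np

# Classical baseline: full eigendecomposition of a random N x N unitary.
# np.linalg.eig calls LAPACK's QR-based routines; the cost grows as O(N^3).
rng = np.random.default_rng(0)
N = 64
# A random unitary from the QR decomposition of a complex Gaussian matrix.
Q, _ = np.linalg.qr(rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N)))
eigvals, eigvecs = np.linalg.eig(Q)

# Eigenvalues of a unitary lie on the unit circle ...
assert np.allclose(np.abs(eigvals), 1.0)
# ... and each column of eigvecs satisfies Q v = lambda v.
v = eigvecs[:, 0]
assert np.allclose(Q @ v, eigvals[0] * v)
```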
However, in a quantum setting, the size of the state space grows exponentially with the number of qubits. For n qubits, this means that any algorithm that needs to explicitly write down the N = 2ⁿ complex entries of a general eigenvector will suffer from this growth. The variational quantum algorithms introduced in this paper return a compressed version of the state so that we can produce the desired eigenstate on demand. The classical parameters in θ⃗ form a short list that contains all the information we need to recreate the desired eigenvector using the parametrized circuit we chose for the search. This native encoding into a quantum state allows for an efficient preparation for future uses.
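As an illustration of this compression, a hypothetical single-qubit parametrized circuit regenerates the full state vector from just two real parameters (the function and the parameter values are ours, not a circuit from the paper; real ansätze use multi-qubit parametrized gates):

```python
import numpy as np

def trial_state(theta):
    """Hypothetical minimal single-qubit parametrized circuit: the state's
    two complex amplitudes are recreated from two real parameters."""
    t, phi = theta
    return np.array([np.cos(t / 2), np.exp(1j * phi) * np.sin(t / 2)])

theta = [1.2, 0.7]         # the compact classical description of the state
psi = trial_state(theta)   # regenerate the (approximate) eigenstate on demand
assert np.isclose(np.linalg.norm(psi), 1.0)
```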
The algorithm for variational quantum principal component analysis can also be useful for channel characterization, for instance in the analysis of mixed states coming from a quantum optical channel with decoherence, or in the study of the decay of a quantum state with time in a noisy quantum computer.
The main potential application for these methods is finding eigenvectors as a stepping stone for more complex routines. Take, for instance, the Quantum Phase Estimation algorithm of Kitaev [74], where a quantum computer can give efficient approximations to an eigenvalue of a given unitary provided one of the inputs is the corresponding eigenvector. The variational quantum eigenvector finder can be used in conjunction with Quantum Phase Estimation to obtain the full spectrum of any unitary U, including all the eigenvalues.
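For a small classical example of the role the eigenvector plays, note that once an eigenvector |e⟩ of U is available, the eigenvalue phase follows from ⟨e|U|e⟩; Quantum Phase Estimation extracts the same phase on quantum hardware. A toy numpy sketch with a unitary of our choosing:

```python
import numpy as np

# Once an eigenvector |e> of U is known, its eigenvalue e^{i 2 pi phase}
# follows from the expectation value <e|U|e>.
U = np.array([[1, 0], [0, np.exp(1j * np.pi / 4)]])  # example diagonal unitary
e = np.array([0.0, 1.0])                             # one of its eigenvectors
phase = np.angle(e.conj() @ U @ e) / (2 * np.pi)
print(phase)  # 0.125
```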
The variational eigenvector finder also combines well with Abrams and Lloyd's algorithm, which gives an efficient approximation to the eigenvectors of a Hamiltonian with arbitrary precision as long as it has a "good enough" initial guess state [75]. If we can produce a state |ẽᵢ⟩ that approximates the desired exact eigenstate |eᵢ⟩ and has a non-negligible overlap |⟨ẽᵢ|eᵢ⟩|² with it, the eigenstate can be refined and taken as close to the exact value as desired. This fits particularly well with the output of the variational eigenvector finder. Even if the optimization does not converge to an exact eigenvector, we can determine from P(0) − P(1) (the Z observable from the SWAP test) whether or not the trial state is close to an eigenstate. Larger expected values correspond to a greater overlap. With the correction from the Abrams-Lloyd algorithm, even poor ansätze can give a clean eigenstate.
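This quality check can be sketched numerically: the SWAP-test observable P(0) − P(1) between the trial state and its image under U equals |⟨ψ|U|ψ⟩|², which is one exactly on an eigenvector (the example unitary and states below are ours):

```python
import numpy as np

def eigen_quality(U, psi):
    """SWAP-test observable P(0) - P(1) = |<psi|U|psi>|^2 between the trial
    state and its image under U; it equals 1 exactly on an eigenvector."""
    return np.abs(psi.conj() @ (U @ psi)) ** 2

U = np.diag([1.0, 1j])                           # eigenvectors: basis states
exact = np.array([1.0, 0.0])                     # true eigenvector
trial = np.array([0.99, np.sqrt(1 - 0.99**2)])   # close approximation
print(eigen_quality(U, exact))   # 1.0
print(eigen_quality(U, trial))   # ≈ 0.961, slightly below 1
```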
While these are interesting applications, both require long circuits and use the Quantum Fourier Transform, which, for large systems, is still impractical due to accumulated noise. A more realistic application in the short term would be optimizing algorithms for noisy intermediate scale quantum computers.
Take, for instance, the Hadamard test of Figure 8, which appears in many proposals for quantum machine learning [12,76] and in the quantum algorithm for Jones polynomials [77,78]. This circuit is a simpler version of the Quantum Phase Estimation algorithm that can still give results beyond the reach of any classical system.
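For a small instance, the Hadamard-test statistics can be computed directly: the probability of measuring 0 on the control qubit is P(0) = (1 + Re⟨ψ|U|ψ⟩)/2, so the measured bias estimates Re⟨ψ|U|ψ⟩. A minimal sketch with an example unitary of our choosing:

```python
import numpy as np

# Hadamard-test outcome statistics, simulated classically for a small U.
U = np.array([[0, 1], [1, 0]])        # example unitary (Pauli X)
psi = np.array([1, 1]) / np.sqrt(2)   # input state |+>
p0 = 0.5 * (1 + (psi.conj() @ U @ psi).real)
print(p0)  # 1.0, since |+> is an X eigenstate with eigenvalue +1
```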
A key element in that circuit is the controlled unitary cU, which applies a unitary U to the target when the control qubit is |1⟩ and acts as the identity for a control qubit |0⟩. If we are given U as a black box, this task is impossible in the usual quantum circuit model without describing the complete cU operator and decomposing it [79]. The total number of gates in those decompositions can be large, introducing too much noise, and there have been various attempts to give concise descriptions for particular cases in order to reduce the depth of these circuits [80,81,82,83]. In particular, if we can produce an eigenstate of the evolution, there are restricted efficient circuits for cU [74]. The variational quantum eigenvector finder can be part of general methods to compute a Hadamard test with a smaller total number of gates. A preliminary simulation of the proposed quantum variational eigenvector finder [87] shows that, for moderate sizes of U, the algorithm converges to valid eigenvectors both for random input unitaries and for matrices with highly degenerate spectra like the Quantum Fourier Transform. As the size of the problem grows, the found state has a smaller overlap with a true eigenstate. This is a common problem in variational quantum algorithms. For an ideal algorithm, we can just use more complex parametrized circuits to cover a larger part of the state space, up to a point. However, in noisy execution, this approach can be counterproductive. In the simulations, even for the smallest problems, noise introduces an appreciable deviation from true eigenstates.
On this account, the proposed algorithms share the drawbacks of most hybrid variational methods. First of all, we need a good ansatz. There are different proposals for parametrized circuits that could be used. In any variational method there is a tension between expressivity and depth. Expressivity describes how much of the whole state space can be reached from the outputs of the parametrized circuit. Depth is given by the number of consecutive elementary gates we need to build the circuit. We would like circuits expressive enough to generate an ansatz close to our target state (the exact eigenvector we are searching for). However, highly expressive parametrized circuits tend to have larger depths, and the noise can build up to levels that make the output unusable. Apart from that, in order to sample more of the space, they need more classical parameters. Optimizing a large set of parameters is less efficient, and the classical part of the algorithm can converge to suboptimal solutions. Among other challenges, if the circuit can represent very small changes, barren plateaus can appear (the optimization algorithm gets stuck in a region of the state space with small changes in the objective function but far from the real minimum) [84]. While this can limit the usefulness of the method as the state space grows, there are methods to optimize the parametric circuits so that the number of parameters is small but we are still able to produce states close to the target [85,86]. The search for better parametrized circuits and optimization methods is a vibrant area of research, and most of the results for other variational quantum algorithms are likely to be useful in the eigenvector algorithms.
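The overall variational loop can be sketched in a few lines. The following uses a hypothetical single-qubit ansatz and an off-the-shelf classical optimizer; a real run would estimate the cost from SWAP-test measurement statistics rather than exact linear algebra:

```python
import numpy as np
from scipy.optimize import minimize

U = np.array([[0, 1], [1, 0]])  # Pauli X; its eigenvectors are |+> and |->

def ansatz(theta):
    """Hypothetical single-qubit parametrized circuit."""
    t, phi = theta
    return np.array([np.cos(t / 2), np.exp(1j * phi) * np.sin(t / 2)])

def cost(theta):
    psi = ansatz(theta)
    # 1 - |<psi|U|psi>|^2 vanishes exactly on an eigenvector of U.
    return 1.0 - np.abs(psi.conj() @ (U @ psi)) ** 2

# Classical outer loop: a gradient-free optimizer over the circuit parameters.
result = minimize(cost, x0=[0.1, 0.1], method="Nelder-Mead")
print(result.fun)  # close to 0: an eigenvector has been found
```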
One advantage with respect to other problems, like searching for the ground state in molecular simulation, is that we do have confirmation of whether the algorithm has succeeded or not. We might not converge to a true solution, but, when we do, the expected value will be one. Despite these limitations, the presented variational eigenvector finder and quantum principal component analysis algorithms can become a useful addition to the toolbox of quantum computation in the near future, and they can be incorporated into practical, more complex hybrid quantum-classical algorithms.

Figure 1: Quantum circuit realizing the SWAP test. The crossed lines represent qubit registers with n qubits.

Figure 6: Quantum circuit for variational quantum principal component analysis.

Figure 7: Time-bin encoding for optical qubits. The time location of an optical pulse can serve as a basis for a qubit. With different amplitude combinations and relative phases between the optical pulses, we can produce any desired qubit.
Figure 4: Variational quantum circuit for the eigenvectors of U .
Figure 5: Variational quantum circuit for the generalized eigenvector problem U|eᵢ⟩ = λV|eᵢ⟩.