Hypercube quantum search: exact computation of the probability of success in polynomial time

In the emerging domain of quantum algorithms, Grover’s quantum search is certainly one of the most significant. It is relatively simple, performs a useful task and more importantly, does it in an optimal way. However, due to the success of quantum walks in the field, it is logical to study quantum search variants over several kinds of walks. In this paper, we propose an in-depth study of the quantum search over a hypercube layout. First, through the analysis of elementary walk operators restricted to suitable eigenspaces, we show that the acting component of the search algorithm takes place in a small subspace of the Hilbert workspace that grows linearly with the problem size. Subsequently, we exploit this property to predict the exact evolution of the probability of success of the quantum search in polynomial time.


Introduction
Many computational problems can be reduced to the search for one or more items that meet a predefined criterion in a set. For instance, in a digital communications context with channel coding using block codes, the receiver has to find the binary word which best explains the received data. If there are 50 bits per data word, this means finding the best solution among $2^{50}$ (approximately $10^{15}$) possibilities. This is the kind of problem Grover's algorithm, introduced in [4], can solve with high probability in $O(\sqrt{N})$ steps, $N$ being the number of possibilities. This implies that quantum computing could significantly speed up the resolution of problems that are too complex for a classical computer.
Quantum walks over graphs are a powerful tool in the design of quantum algorithms. It has been shown in [1] that quantum walks can be faster than their classical counterparts by up to a polynomial factor. Further reading about quantum walks can be found in [9].
The choice of the hypercube structure over other walks is motivated by its direct connection with information science, where it is often used to represent binary values. Indeed, every n-bit value can be placed on the n-dimensional hypercube, where the number of edges separating two values is equal to the number of bits that differ from one value to the other. Additionally, a displacement along the d-th dimension of the hypercube is equivalent to a flip (0 becomes 1 and conversely) of the value of the d-th bit, as illustrated in Fig. 1. It has been shown in [5] that quantum walks on the hypercube mix faster than a classical random walk.

Fig. 1 The 4-dimensional hypercube structure, with vertices labeled and directions arrowed

While Grover's algorithm is simple to build, one is also required to determine the optimal number of iterations to reach the maximal probability of success, since with too many iterations the probability decreases. The optimal number of iterations $R$ for the standard Grover's algorithm is bounded by $R \leq \left\lceil \frac{\pi}{4}\sqrt{N/M} \right\rceil$, where $M$ is the number of solutions. Note that even if the number of solutions is unknown, it is possible to use an incremental procedure as shown in [8], or to compute an approximation of this number using the quantum counting technique. However, this formula is invalid for its quantum walk counterparts. Asymptotic results have been obtained by Shenvi et al. in [7] for a search by quantum walk on the hypercube with a unique solution: they prove that when the hypercube dimension $n$ becomes large, after $\frac{\pi}{2}\sqrt{2^{n}-1}$ iterations, the probability of success is $\frac{1}{2} - O(\frac{1}{n})$. In this paper, we describe a method that computes the exact probability of success of a walk on the hypercube for any number of solutions, in order to find the optimal number of iterations required to maximize this probability.
The main idea behind quantum walks is to split the Hilbert space $H$ into two distinct parts: the position space $H_S$, which corresponds to the vertices of the graph (in our case, a hypercube), and the coin space $H_C$, which corresponds to the possible directions of displacement from the vertices. Thus, $H = H_S \otimes H_C$, where $\otimes$ is the Kronecker tensor product. On an n-dimensional hypercube, there are $N = 2^n$ vertices and $n$ possible directions. In this representation, we choose as basis states the distinct positions in $H_S$ and the distinct directions in $H_C$. Therefore, a basis state in $H$ can be written $|\psi\rangle = |p\rangle|d\rangle$, where $|p\rangle$ is the position state and $|d\rangle$ the direction of movement, often referred to as the "quantum coin". In the n-dimensional hypercube, $p$ and $d$ are integers with $p \in [0, N-1]$ and $d \in [1, n]$. On a hypercube, it is often easier to use binary words to index positions. For example, the state located on vertex 4 and displaced along direction 1 is $|\psi\rangle = |4\rangle|1\rangle$. For each value of $p$, we associate a binary word $\rho$; our last example can then be written $|\psi\rangle = |0010\rangle|1\rangle$. Note that the most significant bit of $\rho$ is on the right and that moving along the i-th dimension will therefore flip the i-th bit of $\rho$.
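To make this indexing concrete, here is a small Python sketch (an illustration only; the helper names are ours, not the paper's): it maps a position p to its binary word ρ with the most significant bit on the right, and lists the neighbours reached by flipping each bit.

```python
N_DIM = 4                      # hypercube dimension n
N = 2 ** N_DIM                 # number of vertices

def to_word(p, n=N_DIM):
    """Binary word rho of position p, most significant bit on the right."""
    return format(p, f"0{n}b")[::-1]

def neighbor(p, d):
    """Position reached from p by moving along dimension d (1-based):
    this flips the d-th bit of rho."""
    return p ^ (1 << (d - 1))

p = 4
print(p, "->", to_word(p))            # 4 -> 0010, matching the |0010> example
for d in range(1, N_DIM + 1):
    q = neighbor(p, d)
    print(f"direction {d}: {to_word(p)} -> {to_word(q)} (vertex {q})")
```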
The discrete-time quantum walk search algorithm consists in the repeated application of two unitary operators, the oracle $O$ followed by the uniform walk operator $U$. The global iteration operator is $Q = UO$. The oracle's role is to mark the solutions, while the walk operator disperses the states over the hypercube. The uniform walk operator $U$ is itself the product of two operators, the shift operator $S$ and the coin operator $C$, so that $U = SC$ and $Q = SCO$. These operators are defined and studied in Sect. 2.
Let $|\psi_0\rangle$ be the initial state and $t$ the number of iterations until the system is measured. At each iteration, the operator $Q$ is applied. This operator is constant along the process and is defined once according to the problem. Therefore, the final state will be $|\psi_t\rangle = Q^t |\psi_0\rangle$. The algorithm is a success if the measurement of $|\psi_t\rangle$ returns one of the $M$ solutions.
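For small $n$, this iteration can be checked by brute force. The sketch below is a direct simulation under our reading of the operators defined in Sect. 2 (Grover coin $G = 2|u\rangle\langle u| - I_n$, block-diagonal oracle with $-G$ on the solution vertices, bit-flip shift); it is exponential in $n$ and only illustrates the definitions, not the polynomial-time method developed in this paper. The chosen dimension and solution positions are arbitrary examples.

```python
import numpy as np

n = 4                         # hypercube dimension
N = 2 ** n                    # vertices
Ne = N * n                    # dimension of H = H_S (x) H_C
solutions = [3, 6]            # marked vertices (example choice)

# Grover coin on H_C and the coin operator C = I_N (x) G
u = np.full((n, 1), 1 / np.sqrt(n))
G = 2 * (u @ u.T) - np.eye(n)
C = np.kron(np.eye(N), G)

# Oracle: -G on solution vertices, identity elsewhere (block diagonal)
O = np.eye(Ne)
for p in solutions:
    O[p * n:(p + 1) * n, p * n:(p + 1) * n] = -G

# Shift: |p>|d> -> |p xor 2^(d-1)>|d>, i.e. flip bit d of the position
S = np.zeros((Ne, Ne))
for p in range(N):
    for d in range(n):
        S[(p ^ (1 << d)) * n + d, p * n + d] = 1.0

Q = S @ C @ O                 # one iteration of the search
psi = np.full(Ne, 1 / np.sqrt(Ne))   # uniform initial state

for t in range(1, 16):
    psi = Q @ psi
    # probability of measuring a solution vertex (any coin value)
    p_success = sum(np.sum(np.abs(psi[p * n:(p + 1) * n]) ** 2) for p in solutions)
    print(t, round(p_success, 4))
```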
As for all quantum algorithms, the outcome is probabilistic. Let $|s\rangle$ be the uniform superposition of all solutions. The probability of success after $t$ iterations is $p_t = |\langle s|\psi_t\rangle|^2$. Note that this expression, as well as many other fundamental results about quantum information, can be found in [6]. The goal of this paper is to maximize $p_t$. As $Q$ depends only on the size of the problem and its solutions, the probability of success depends only on $t$. In order to maximize $p_t$, we study a special subspace of $H$, which is the complement of the joint eigenspace of the operators $U$ and $O$. Let us denote this joint eigenspace $\bar{E}$ and its complement $E$. Let $U_{\bar{E}}$, $O_{\bar{E}}$ and $Q_{\bar{E}}$ be the components of $U$, $O$ and $Q$ inside $\bar{E}$. In their joint eigenspace, operators always commute, and as we will see in Sect. 2.3, $O$ is always self-inverse. Therefore, we have $Q_{\bar{E}}^2 = U_{\bar{E}} O_{\bar{E}} U_{\bar{E}} O_{\bar{E}} = U_{\bar{E}}^2 O_{\bar{E}}^2 = U_{\bar{E}}^2$. This means that in $\bar{E}$, applying the global operator twice amounts to applying the uniform walk twice, without the search component brought by the oracle. Therefore, the walk restricted to $\bar{E}$ has no reason to converge to a solution, and the algorithm effectively takes place in $E$.
We proceed as follows:
• We show that the dimension of $E$ is approximately $2Mn$; its exact value will be given in Sect. 6. This means that $E$ grows linearly with $n$, while $H$ grows exponentially. We also prove that the uniform state $|u\rangle$ and the superposition of solutions $|s\rangle$ are both in $E$.
• We determine the eigenvalues of $Q$ associated with $E$ and their multiplicities. The number of eigenvalues to determine is less than or equal to $2Mn$. We have also developed a method, based on a minima search over a specifically designed criterion, which can easily be programmed and executed on a classical computer.
• Finally, we propose a method to compute the components in $E$ of the initial state and of the uniform superposition of the solutions $|s\rangle$. With these components and the eigenvalues, it is then possible to compute the probability of success with respect to the number of iterations in polynomial time.
1 Notations and mathematical background

Notations
Throughout this paper, some notations will frequently be used in different contexts: • n is the dimension of the hypercube and the dimension of the coin space H C .
• $N = 2^n$ is the number of positions on the hypercube and the dimension of the position space $H_S$.
• $N_e = nN$ is the dimension of the global space $H = H_S \otimes H_C$.
• M is the number of solutions for a given problem.
• $A^T$ is the transpose of the matrix $A$ and $A^\dagger$ is its conjugate transpose.
• $|u_k\rangle$ is the $k \times 1$ uniform column vector, $|u_k\rangle = \frac{1}{\sqrt{k}}(1, 1, \dots, 1)^T$. When there is no ambiguity, we may omit the index and simply write $|u\rangle$. We also define an $n \times (n-1)$ matrix whose columns are orthonormal and orthogonal to $|u_n\rangle$ (it spans the kernel of $\langle u|$).
• $|1_p\rangle$ is a length-$N$ vector which contains 1 at position $p$ and 0 elsewhere.
• I is the 2 × 2 identity matrix, while I n is the n × n identity matrix.
• $X$ is the Pauli-X matrix, also known as the quantum NOT: $X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$.
• $H$ is the normalized Hadamard matrix: $H = \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}$. We denote the tensor power $H^{\otimes n}$ by $H_N$, as it is frequently used. We will also use the unnormalized Hadamard matrix $\tilde{H}_N = \sqrt{N}\, H_N$.
• $E^A_\alpha$ is the eigenspace of the operator $A$ associated with the eigenvalue $\alpha$. We denote the intersection of two eigenspaces by $E^{A,B}_{\alpha,\beta} = E^A_\alpha \cap E^B_\beta$. As almost every eigenvalue we will encounter is $\pm 1$, we will simply denote them by their signs, as in $E^{A,B}_{+,-}$.
• $P_{a,b}$ is a permutation matrix obtained by reading column by column elements organized row by row. For example, if $a = 2$ and $b = 3$, we write all integers from 1 to $a \times b = 6$ row by row in an $a \times b = 2 \times 3$ matrix: $\begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}$. (1.5) Reading column by column, we get the order 1, 4, 2, 5, 3, 6. The associated permutation matrix $P_{2,3}$ is obtained by moving the rows of a $6 \times 6$ identity matrix according to this order: the first row remains the first, the second row is moved to the fourth, etc.
• The default space we are using is $H = H_S \otimes H_C$, but we will also need the reversed ordering $H_C \otimes H_S$.

The Kronecker product
The Kronecker product is a bilinear operation on two matrices. Let $A$ and $B$ be two matrices of dimensions $m_A \times n_A$ and $m_B \times n_B$, and let $a_{jk}$ be the element of $A$ on the $j$-th row and $k$-th column. The tensor product of $A$ and $B$ is the $m_A m_B \times n_A n_B$ block matrix whose $(j,k)$ block is $a_{jk} B$. We will also use the Kronecker matrix power, the Kronecker product of a matrix with itself a given number of times; the $k$-th Kronecker power of a matrix $A$ is denoted $A^{\otimes k}$.
The inverse of a tensor product is $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$. The interaction between the matrix product and the Kronecker product follows the mixed-product property: $(A \otimes B)(C \otimes D) = (AC) \otimes (BD)$. (1.9) The Kronecker product is associative but not commutative. However, the structure of the permutation matrices defined in Sect. 1.1, inspired from [2], leads to the following useful property: $A \otimes B = P_{a,b}\,(B \otimes A)\,P_{b,a}$. (1.10) Note that $P^T_{a,b} = P_{b,a}$, so $P_{b,a} = P_{a,b}^{-1}$. The permutation matrices allow us to switch between the workspaces $H_S \otimes H_C$ and $H_C \otimes H_S$.
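A quick numerical check of the mixed-product rule (1.9) and of a permutation identity of the form (1.10); the explicit index convention used below to build P is our own and may differ from the row-reading construction of Sect. 1.1, but it satisfies the same relation.

```python
import numpy as np

a, b = 3, 4
A = np.random.rand(a, a)
B = np.random.rand(b, b)
C = np.random.rand(a, a)
D = np.random.rand(b, b)

# Mixed product: (A (x) B)(C (x) D) = AC (x) BD
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))

# Permutation P such that A (x) B = P (B (x) A) P^T
P = np.zeros((a * b, a * b))
for i in range(a):
    for j in range(b):
        P[i * b + j, j * a + i] = 1.0

assert np.allclose(np.kron(A, B), P @ np.kron(B, A) @ P.T)
print("Kronecker identities verified")
```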

The discrete Fourier transform
One important use of the discrete Fourier transform (DFT) is to diagonalize certain operators. In its general form, the coefficients of the $N \times N$ DFT operator are given by $\mathrm{DFT}_{j,k} = \frac{1}{\sqrt{N}}\,\omega^{jk}$, where $\omega = e^{-2\pi i/N}$ and $\mathrm{DFT}_{j,k}$ is the value on the $j$-th row and $k$-th column.
In our case, we will use the DFT in $H_S$, where each of the $n$ binary coordinates belongs to $\{0, 1\}$. When $N = 2$, the DFT matrix is equal to the Hadamard matrix $H$. Therefore, applying the DFT over each of the $n$ dimensions of the hypercube in $H_S$ is equivalent to applying $H_N$ to the state vector. We also define $F$ as the $N_e \times N_e$ matrix that applies the DFT in $H_S$ while leaving $H_C$ unchanged: $F = H_N \otimes I_n$. (1.13) In this paper, we say that we switch between the Fourier and the original domains whenever we multiply by $F$. Note that since $H$ is self-inverse, $F$ is too, and we will therefore invert a Fourier transform with another one. Note also that the term "Fourier transform" covers different computations in this paper: in $H$, it is a multiplication by $F$, while in $H_S$ it is the transform produced by $H_N$. Furthermore, the Fourier transform of an operator $A$ in $H$ is $FAF$, while that of a vector $|v\rangle$ (or of a matrix which is a collection of vectors) is $F|v\rangle$. The same goes for $H_N$ in $H_S$. We denote the Fourier transform of a matrix $A$ by $\hat{A}$.
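A small check of our reading of Equation (1.13), $F = H_N \otimes I_n$, and of the fact that $F$ is self-inverse:

```python
import numpy as np
from functools import reduce

n = 3
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H_N = reduce(np.kron, [H] * n)          # H^{(x) n}
F = np.kron(H_N, np.eye(n))             # acts on H_S, leaves H_C unchanged

assert np.allclose(F @ F, np.eye(F.shape[0]))   # F is self-inverse
print("F is self-inverse, size", F.shape)
```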

The singular value decomposition
The singular value decomposition, or SVD, is a matrix factorization that generalizes the eigendecomposition. The SVD of a given complex $M \times N$ matrix $A$ is of the form $A = U S V^\dagger$, where $U$ is an $M \times M$ unitary matrix, $S$ an $M \times N$ rectangular diagonal matrix and $V$ an $N \times N$ unitary matrix. The diagonal elements of $S$ are non-negative real values known as the singular values.
The SVD can always be computed in polynomial time. According to Golub and Van Loan in [3], the complexity of the R-SVD algorithm is $O(N^3 + NM^2)$. It is even possible to compute only the singular values in $O(MN^2)$. We will not discuss the complexity of the SVD further, as it can already be considered efficient.
In this paper, we only use a reduced version of this decomposition, namely the thin SVD. Let us define $K = \min(M, N)$. In this variant, we keep only the first $K$ columns of $U$, the $K \times K$ upper-left block of $S$ and the first $K$ rows of $V^\dagger$. This allows for a significant saving in computation time and memory, while the relation $A = USV^\dagger$ remains true.
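For reference, the thin SVD is directly available in numpy; a minimal sketch:

```python
import numpy as np

M, N = 6, 3
A = np.random.rand(M, N)

# full_matrices=False gives the thin SVD: U is M x K, s has K entries, Vh is K x N
U, s, Vh = np.linalg.svd(A, full_matrices=False)
K = min(M, N)
assert U.shape == (M, K) and s.shape == (K,) and Vh.shape == (K, N)
assert np.allclose(A, U @ np.diag(s) @ Vh)       # A = U S V^dagger still holds
print("singular values:", s)
```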

Notes on unitary matrices
Unitary matrices are omnipresent in the quantum domain. As a reminder, an $n \times n$ matrix $A$ is said to be unitary when $A^\dagger A = A A^\dagger = I_n$. (1.15) If a matrix is unitary, all its eigenvalues lie on the unit circle and all its eigenspaces are orthogonal. Its eigenvalues and eigenvectors are either real or come in complex-conjugate pairs.
If $A$ is a unitary matrix such that $A^2 = I_n$, then all its eigenvalues are $\pm 1$. The dimensions of its eigenspaces are $\dim E^A_{\pm} = \frac{n \pm \operatorname{tr} A}{2}$, (1.16) where $E^A_+$ is the eigenspace associated with $+1$ and $E^A_-$ the one associated with $-1$. Furthermore, if $A^2 = I_n$, we can express the projector onto $E^A_{\pm}$ as $\frac{I_n \pm A}{2}$. (1.17)
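A numerical illustration of Equations (1.16) and (1.17) as reconstructed above, using a Householder reflection (which is unitary and self-inverse):

```python
import numpy as np

n = 5
v = np.random.rand(n, 1)
v /= np.linalg.norm(v)
A = 2 * (v @ v.T) - np.eye(n)          # reflection: unitary and A^2 = I

assert np.allclose(A @ A, np.eye(n))
# dim E_+ = (n + tr A)/2 matches the number of +1 eigenvalues
vals = np.linalg.eigvalsh(A)
dim_plus = int(round((n + np.trace(A)) / 2))
assert dim_plus == np.sum(np.isclose(vals, 1.0))
# projectors onto the +1 / -1 eigenspaces
P_plus, P_minus = (np.eye(n) + A) / 2, (np.eye(n) - A) / 2
assert np.allclose(P_plus @ P_plus, P_plus) and np.allclose(P_plus @ A, P_plus)
print("dim E_+ =", dim_plus, " dim E_- =", n - dim_plus)
```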

Indexation and signatures
In this paper, we will often use the binary word associated with an index. We always use a Latin character to denote the index and a Greek one to denote its binary equivalent. For instance, if $p = 3$, then $\rho = 110$. Sometimes, we will use those binary indices as vectors; in such cases, we will use a ket to denote them, such as $|\rho\rangle$.
When working with $N_e \times N_e$ matrices, we will attribute a $\pm 1$ signature $\sigma$ to each row or column. We split the $N_e$ elements into $N$ intervals of size $n$. The $n$ signatures in the $p$-th interval are defined from the $n$-bit binary representation of $p$, where bits equal to 0 are mapped to $\sigma = +1$ and bits equal to 1 to $\sigma = -1$. For example, with $n = 3$, an $N_e \times N_e$ matrix has $N = 8$ blocks of 3 elements. In the case of the first block, $p = 0$ and its binary representation is 000; therefore, the first three rows or columns have a signature $\sigma = +1$. In the second block, the binary representation of the position is 100, so the fourth element has a signature $\sigma = -1$, while the fifth and the sixth have a signature $\sigma = +1$. This apparently arbitrary definition will emerge naturally in Sect. 2.1.

Useful submatrices
Throughout this paper, we will use submatrices of $H_N$ and $I_N$ obtained by keeping only some of their rows or columns; in particular, $I^s_N$ and $I^{\bar{s}}_N$ denote the submatrices of $I_N$ made of the columns indexed by the solution and non-solution positions, and $H^s_N$ the corresponding submatrix of $H_N$.

2 Operators and their eigenspaces

The shift operator

Moving along the $d$-th dimension of the hypercube flips the $d$-th bit of the position, so the corresponding shift operator in $H_S$ is $S_d = I_{2^{n-d}} \otimes X \otimes I_{2^{d-1}}$. From this, we can find the shift operator in $H_C \otimes H_S$: it is a block diagonal matrix we denote $S_{CS}$, where each of its blocks is an $S_d$. We can identify the $n$ $S_d$ blocks in Fig. 2.
As a consequence, the diagonalized operator in $H_C \otimes H_S$, denoted $\hat{S}_{CS}$, is composed of $n$ diagonal submatrices $\hat{S}_d$. We note that each $\hat{S}_d$ matrix is made up of $2^{n-d}$ repetitions of a block made of $2^{d-1}$ '+1' followed by $2^{d-1}$ '−1', as seen in Fig. 3. The relation between the shift operator and its diagonalized version in $H_C \otimes H_S$ is $\hat{S}_{CS} = (I_n \otimes H_N)\, S_{CS}\, (I_n \otimes H_N)$. The shift operator has a simple block diagonal structure in $H_C \otimes H_S$, but we need its representation in $H_S \otimes H_C$, which we denote $S$. We will use the permutation matrices seen in Sect. 1.2 to obtain $S$. Let us note $P = P_{N,n}$. We have $S = P\, S_{CS}\, P^T$. (2.7)

Fig. 4 The shift operator $S$ in $H_S \otimes H_C$ for $n = 3$

As we can see in Fig. 4, the shift operator is more complex in $H_S \otimes H_C$. However, it is the opposite for the coin operator, which is block diagonal in $H_S \otimes H_C$ but not in $H_C \otimes H_S$. We arbitrarily choose to study the operators in $H_S \otimes H_C$, as this space is the most commonly used in quantum walk studies.
In order to study the eigendecomposition of $S$, we need to diagonalize it. Note that $P^T P = I_{N_e}$. We have $\hat{S} = FSF = P\, \hat{S}_{CS}\, P^T$. As $\hat{S}_{CS}$ is diagonal, $P \hat{S}_{CS} P^T$ is diagonal too (this is true for any permutation matrix). Therefore, $F$ diagonalizes $S$ and we denote $P \hat{S}_{CS} P^T$ by $\hat{S}$.
The structure of $\hat{S}$ is remarkable: if we map the $+1$ to 0 and the $-1$ to 1, we find along its diagonal all the possible $n$-bit words in ascending order, as seen in Fig. 5. We also observe that those values are exactly the signatures $\sigma$ defined in Sect. 1.6. We deduce that the eigenvectors of $S$ are the columns of $F$ and that they are associated with eigenvalues $\lambda_S = \pm 1$. Using Equation (1.16), we can determine that the eigenspaces associated with both eigenvalues have the same dimension: $\dim E^S_+ = \dim E^S_- = N_e/2$. (2.13) We let $\hat{S}(p)$ denote the $n \times n$ diagonal block of $\hat{S}$ indexed by $p$. A quick way to find $\hat{S}(p)$ is to map the binary elements of $\rho$, the binary representation of $p$, to $+1/-1$ as noted above, and to place those elements on the diagonal.
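The diagonal structure of $\hat{S}$ can be verified numerically. The sketch below builds $S$ directly from the bit-flip rule, applies $F$ on both sides and checks the $\pm 1$ signature pattern; we use 0-based direction indices and the natural binary order, so the bookkeeping differs slightly from the paper's conventions, but the pattern is the same.

```python
import numpy as np
from functools import reduce

n = 3
N = 2 ** n
Ne = N * n

# Shift in H_S (x) H_C: |p>|d> -> |p xor 2^d>|d>
S = np.zeros((Ne, Ne))
for p in range(N):
    for d in range(n):
        S[(p ^ (1 << d)) * n + d, p * n + d] = 1.0

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
H_N = reduce(np.kron, [H] * n)
F = np.kron(H_N, np.eye(n))

S_hat = F @ S @ F
# expected diagonal: sigma = +1 if bit d of p is 0, -1 if it is 1
sig = np.array([1.0 if ((p >> d) & 1) == 0 else -1.0
                for p in range(N) for d in range(n)])
assert np.allclose(S_hat, np.diag(sig))
print("F S F is diagonal with the signature pattern")
```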

The coin operator
The coin operator $C$ is a uniform diffusion operator in $H_C$. Its structure is based on the $n \times n$ Grover diffusion operator $G$ defined by $G = 2|u_n\rangle\langle u_n| - I_n$. (2.14) The coin operator only affects $H_C$; therefore, its representation in $H = H_S \otimes H_C$ is $C = I_N \otimes G$, which gives a block diagonal matrix as seen in Fig. 6.
The eigendecompositions of $G$ and $C$ are directly linked: every eigenvector of $G$ with eigenvalue $\alpha$ yields, for each position, an eigenvector of $C$ with the same eigenvalue. Observe that $G|u\rangle = |u\rangle$; therefore $|u\rangle$ is an eigenvector of $G$ associated with the eigenvalue $+1$. Also, $G$ maps every vector orthogonal to $|u\rangle$ to its opposite; therefore, the $n-1$ columns of the $n \times (n-1)$ matrix defined in Sect. 1.1 (the kernel of $\langle u|$) are all eigenvectors associated with the eigenvalue $-1$.

A useful observation is that $C$ is invariant under the Fourier transform: $\hat{C} = FCF = (H_N \otimes I_n)(I_N \otimes G)(H_N \otimes I_n) = I_N \otimes G = C$. (2.23)

The oracle
The oracle $O$ is the core of the quantum search algorithm. Its structure varies according to the number of solutions and their positions on the hypercube. The easiest way to design the oracle operator would be to create a diagonal matrix with $-1$ at the solution positions and $+1$ elsewhere, but we use another structure which gives a smaller eigenspace $E^O_-$. As the problem is to find the correct positions on the hypercube, all directions associated with a same vertex are equally valid. Therefore, the solutions always come in groups of $n$ in $H_S \otimes H_C$ if we work with a block diagonal oracle. If an $n \times n$ block corresponds to a group of solutions, it is set to $-G$, where $G$ is the Grover diffusion operator defined in Sect. 2.2; otherwise, the block is set to $I_n$. An example of oracle for $n = 3$ with solutions at positions 3 and 6 is shown in Fig. 7. We can see that the $M = 2$ blocks corresponding to the solutions at positions $p = 3$ and $p = 6$ are set to $-G$, and that the $N - M = 6$ others are set to $I_n$. As we already know the eigendecomposition of $G$, we can immediately deduce the eigenvalues of the oracle and the dimensions of its eigenspaces: $\dim E^O_- = M$ and $\dim E^O_+ = N_e - M$. The eigenvectors associated with $-1$ are the $|1_p\rangle \otimes |u_n\rangle$, where $p$ is the position of a solution. It is interesting to note that $E^O_-$ is the subspace of the solution space $E_s$ corresponding to the solutions uniformly spread out over all directions, and that $E^O_+$ is, therefore, the union of the non-solution space $E_{\bar{s}}$ and the non-uniformly spread out solutions. Note that $O$ and $C$ commute, as they are both block diagonal with only $I_n$ and $\pm G$ as possible blocks. Also, $G^2 = I_n$, so $O^2 = I_{N_e}$.

Generators
In this section, we build "generators", matrices which will be useful in what follows. We define the first three generators as $L_1 = I^s_N \otimes |u_n\rangle$, $L_2 = I^{\bar{s}}_N \otimes |u_n\rangle$ and $L_3$, the Kronecker product of $I_N$ with the $n \times (n-1)$ matrix of Sect. 1.1, where $I^s_N$ and $I^{\bar{s}}_N$ are the submatrices defined in Sect. 1.7. These matrices have orthonormal columns and, taken globally, they constitute an orthonormal basis of $H$. We will see in the next section that they generate subspaces which are closely related to the eigenspaces of the quantum walk operators. Their sizes are, respectively, $N_e \times M$, $N_e \times (N - M)$ and $N_e \times (N_e - N)$. Their horizontal concatenation forms a unitary $N_e \times N_e$ matrix $L_{1,2,3}$ shown in Fig. 8. We will also use $L_{1,2}$, the horizontal concatenation of $L_1$ and $L_2$.
We can show that $L_{1,2}$ and $L_3$ span the same subspaces in the Fourier domain as in the original domain: $F L_3$ equals $L_3$ with its $I_N$ factor replaced by $H_N$. Since we can multiply on the right by any non-singular matrix without changing the spanned subspace, we can multiply by $H_N \otimes I_{n-1}$ and recover $L_3$ once more. Therefore, $\operatorname{span}\{F L_3\} = \operatorname{span}\{L_3\}$. The proof for $L_{1,2}$ is done the same way, replacing the $n \times (n-1)$ matrix by $|u\rangle$.
It is possible to create a variation of $L_3$ in which most of the columns are eigenvectors of $\hat{S}$; this variation will be denoted $L'_3$. To do so, we replace each $n \times (n-1)$ block individually by a custom block associated with the position $p$. Those blocks all span the same subspace as the original one, as all their columns are orthogonal to $|u_n\rangle$; that means $L'_3$ spans the same subspace as $L_3$. Each block may also contain one particular column which is not an eigenvector of $\hat{S}$, which is to be expected considering all the other columns are eigenvectors associated with $-1$ or $+1$. The resulting matrix $L'_3$ is shown in Fig. 9.
We define $L^-_3$, $L^+_3$ and $L^{\bullet}_3$ as the submatrices of $L'_3$ made up of the columns corresponding, respectively, to the $-$, $+$ and $\bullet$ parts of its blocks. We also define the two generators $F^+$ and $F^-$ as the submatrices of $F$ corresponding, respectively, to the columns with signature $\sigma = +1$ and $\sigma = -1$. Both have size $N_e \times N_e/2$.
Finally, we define $|l^-\rangle$ as the Fourier transform of the column of $L_{1,2}$ whose nonzero elements all have signature $\sigma = -1$, that is $|l^-\rangle = F\,(|1_{N-1}\rangle \otimes |u_n\rangle)$, where $|1_{N-1}\rangle$ is a vector having a 1 at position $N - 1$ and 0 elsewhere, as defined in Sect. 1.1.

Eigenspaces junctions
In this section, we propose an exhaustive analysis of the joint eigenspaces of the quantum walk operators $S$, $C$ and $O$, in order to study the eigendecompositions of $U$ and $Q$. All these results can be found in Table 1. While some proofs are trivial, others are quite tedious, and the reader may skip them and proceed without difficulty from the results in the table.
We have already determined the dimensions of the eigenspaces of the three elementary operators, and we can easily check that each of them is spanned by one of the generators defined in Sect. 3. In the study of the joint eigenspaces, some cases are easily resolved; for instance, $E^{CO}_- = \operatorname{span}\{L_{1,3}\}$. (4.10) Some of the joint eigenspaces do not appear in the table because they are empty. We begin the analysis of the joint eigenspaces of $S$ and $C$ with the observation that $C$ is invariant under the Fourier transform, as seen in Sect. 2.2, which means we can search for $E^{\hat{S},C}_{\pm,\pm}$ and then switch back to the original domain. We have also seen that a vector is in $E^{\hat{S}}_{\pm}$ if it has nonzero elements only at positions with signature $\pm 1$. From this observation, the dimension of $E^{S,C}_{\pm,-}$ follows. The next step is to note that $E^C_+ = \operatorname{span}\{L_{1,2}\}$ and that $\operatorname{span}\{L_{1,2}\} = F\,\operatorname{span}\{L_{1,2}\}$. There is only one column of $L_{1,2}$ in $E^{\hat{S}}_-$, the one whose nonzero elements all have signature $-1$; we defined its Fourier transform as $|l^-\rangle$ above. Similarly, the only column of $L_{1,2}$ in $E^{\hat{S}}_+$ is the one that has nonzero elements only at positions with signature $+1$; the Fourier transform of such a column is the uniform vector $|u\rangle$.
The only eigenspace shared by $C$ and $O$ that intersects $E^S_{\pm}$ is $\operatorname{span}\{L_3\}$. Therefore, $E^{S,C,O}_{\pm,-,+}$ is equal to $E^{S,C}_{\pm,-}$, while all other joint eigenspaces of $S$, $C$ and $O$ are empty.
We note that $E^{S,CO}_{+,-}$ includes $E^{S,C,O}_{+,-,+} = E^{S,C}_{+,-}$, which is spanned by $F L^+_3$. We will build eigenvectors that span the complement of $E^{S,C}_{+,-}$ in $E^{S,CO}_{+,-}$. In the Fourier domain, $L^+_3$ spans $E^{\hat{S},C}_{+,-}$ and $L^-_3$ is orthogonal to $E^{\hat{S}}_+$. We therefore build those eigenvectors $|\varepsilon\rangle$ as linear combinations of the columns of $F L_1$ and $F L^{\bullet}_3$, with coefficient vectors $|\varepsilon_1\rangle$ and $|\varepsilon_3\rangle$ of lengths $M$ and $N - 2$. We also observe that the first and last rows of $F L^{\bullet}_3$ are zero, because there is no $\bullet$ column for $p = 0$ and $p = N - 1$. In order to obtain $|\varepsilon\rangle$ in $E^{\hat{S}}_+$, we must cancel the elements with signature $-1$. All possible $|\varepsilon\rangle$ have to cancel the last block, because all its elements always have signature $-1$. Since, as already noted, $L^{\bullet}_3$ is already zero in this block, we only have to cancel the $|\varepsilon_1\rangle$ component. Let us define $\langle h|$ as the last row of $H^s_N$; its $k$-th entry is proportional to $(-1)^{w_k}$, where $w_k$ is the weight of the binary index of the $k$-th solution. We must have $\langle h|\varepsilon_1\rangle = 0$. Since $\langle h|$ is an $M$-length vector, we can find $M - 1$ orthogonal $|\varepsilon_1\rangle$. We can also show that once $|\varepsilon_1\rangle$ is known, it determines a unique $|\varepsilon_3\rangle$. We can deduce that the dimension of the complement of $E^{S,C}_{+,-}$ in $E^{S,CO}_{+,-}$ is $M - 1$. The proof is similar for $E^{S,CO}_{-,-}$; in that case, we define $\langle h|$ as the first row of $H^s_N$, that is $\langle u|$.
We can also see that

Eigendecomposition of the uniform walk operator
In this section, we determine the complete eigendecomposition of the uniform walk operator $U$. As in the previous section, the reader can refer to Table 2 for a summary of the results.
The uniform walk operator is $U = SC$; therefore, its eigendecomposition follows from the joint eigenspaces of $S$ and $C$ studied above. As illustrated in Fig. 10, the structure of $U$ is complicated in the original domain, but it turns out to be much simpler in the Fourier domain: $\hat{U} = FUF = FSF\,FCF = \hat{S}C$. (5.7) We can see that $\hat{U}$ is a block diagonal matrix containing $N$ blocks $\hat{U}_p = \hat{S}(p)\,G$. (5.11) For any matrix, the sum of its eigenvalues is equal to its trace, and we already know that when $p \neq 0$ and $p \neq N - 1$, there are $(n - w - 1)$ eigenvalues equal to $-1$ and $(w - 1)$ eigenvalues equal to $+1$, where $w$ is the Hamming weight of the binary representation of $p$. The sum of these $n - 2$ eigenvalues is therefore $-n + 2w$, and there are still two eigenvalues left to determine, which we denote $\lambda_w$ and $\lambda_w^*$. Their sum must be $2(1 - 2w/n)$: $\lambda_w + \lambda_w^* = 2\left(1 - \frac{2w}{n}\right)$. (5.12) Since $U$ is unitary, so are the $\hat{U}_p$, and $|\lambda_w| = 1$. Thus, $\lambda_w = 1 - \frac{2w}{n} + \frac{2i}{n}\sqrt{w(n - w)}$. (5.13) Note that $\lambda_w$ and $\lambda_w^*$ are complex conjugate eigenvalues; the choice of which one is $\lambda_w$ is arbitrary.
We define the vector $|v_w\rangle$ as a combination of $|\rho\rangle$ and $|\bar{\rho}\rangle$, the vectors containing the binary representation of the block index and its negation, and one can prove that $|v_w\rangle$ is an eigenvector of $\hat{U}_p$ associated with the eigenvalue $\lambda_w$. We can also determine the size of each $E^U_{\lambda_w}$: as there are $\binom{n}{w}$ positions whose binary index has weight $w$, we have $\dim E^U_{\lambda_w} = \binom{n}{w}$. (5.29) To summarize, we can list the eigenvectors of $U$ in the Fourier domain, i.e., the eigenvectors of $\hat{U}$; if needed, the eigenvectors of $U$ can easily be deduced from those of $\hat{U}$ using a multiplication by $F$ to switch back to the original domain. We define two new conjugate generators $V_w$ and $V^*_w$ that span, respectively, $E^{\hat{U}}_{\lambda_w}$ and $E^{\hat{U}}_{\lambda^*_w}$; their columns are the vectors $|1_p\rangle \otimes |v_w\rangle$ and $|1_p\rangle \otimes |v_w\rangle^*$ (multiplying by $F$ gives the corresponding eigenvectors of $U$).
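The reconstructed closed form (5.13) can be checked numerically against the spectrum of a block of the form $\hat{S}(p)G$ (our reading of Equation (5.11)); since $G$ is invariant under permutations of the coordinates, only the Hamming weight $w$ of $p$ matters.

```python
import numpy as np
from math import sqrt

n, w = 6, 2
# S_hat(p): diagonal of +/-1 with w entries equal to -1; the placement of the
# -1s does not change the spectrum because G is permutation-invariant
S_p = np.diag([-1.0] * w + [1.0] * (n - w))
u = np.full((n, 1), 1 / np.sqrt(n))
G = 2 * (u @ u.T) - np.eye(n)

eig = np.sort_complex(np.linalg.eigvals(S_p @ G))

lam = 1 - 2 * w / n + 2j * sqrt(w * (n - w)) / n
expected = np.sort_complex(np.array(
    [1.0] * (w - 1) + [-1.0] * (n - w - 1) + [lam, lam.conjugate()]))
assert np.allclose(eig, expected)
print("block eigenvalues match the closed form, lambda_w =", lam)
```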

Dimension of the complement space
As explained in the introduction, the purpose of the eigenspace study is to determine the dimension of $E$ and the eigenvalues of the effective quantum walk operator $Q$ associated with it. Since $Q = UO$, it has a complicated structure in both the original and the Fourier domains, and we will need the results of the previous sections. Its structure in the original domain is shown in Fig. 11. Once again, the results are summarized in Table 3 at the end of this section. Let us start by showing that $E^{U,O}_{\lambda,-} = \emptyset$ for any $\lambda \neq \pm 1$. Any vector of $E^O_-$ lies in $\operatorname{span}\{L_1\} \subset E^C_+$, on which $U = SC$ acts as $S$; an eigenvector of $U$ in $E^O_-$ must therefore be an eigenvector of $S$, which means $\lambda = \pm 1$, since $S$ has no other eigenvalues. Let us now consider $E^{U,O}_{\pm,+}$; here we use the submatrix $H^s_N$ defined in Sect. 1.7. In the Fourier domain, $E^{\hat{U}}_{\lambda_w} = \operatorname{span}\{V_w\}$, where $V_w$ is the generator defined in Sect. 5. Since $V_w$ is an $N_e \times \binom{n}{w}$ matrix, each unit vector of $\operatorname{span}\{V_w\}$ can be expressed as $V_w |\nu\rangle$, where $|\nu\rangle$ is a unit vector of length $\binom{n}{w}$. If in addition such a vector also belongs to $E^{\hat{O}}_+$, then $|\nu\rangle$ must be orthogonal to all the columns of $(H^{s\,T}_N \otimes \langle u_n|)\, V_w$. We identify $H^{s\,T}_N |1_p\rangle$ as the column of $H^{s\,T}_N$ indexed by $p$, and $\langle u_n|v_w\rangle$ as a scalar value that we denote $\alpha_w$. We now have everything we need to determine the dimensions of $E$ and $\bar{E}$. We have shown that even if the dimension of the total space $H$ increases exponentially with $n$, the dimension of $E$ grows only linearly. For instance, for $n = 50$ and $M = 4$, $\dim H \approx 5.6 \times 10^{16}$ while $\dim E \leq 394$. This result does not imply that the quantum search algorithm can be run on a classical computer, but it allows us to compute efficiently the evolution of the probability of success.

Eigenvalues of the eigenspace of interest
Recall that $|u\rangle$ is the uniform state ($N_e$-dimensional in this case), $|s\rangle$ is the uniform superposition of all solutions and $|\bar{s}\rangle$ is the uniform superposition of all non-solutions. The uniform state is a linear combination of $|s\rangle$ and $|\bar{s}\rangle$, so $|\bar{s}\rangle$ is a linear combination of $|u\rangle$ and $|s\rangle$ and is in $E$ too. We also noted that if the initial state is in $E$, then it remains in $E$ throughout the algorithm; we therefore initialize with $|u\rangle$. The goal of this section is to determine the eigenvalues of $Q$ associated with $E$, as well as the components of $|u\rangle$ and $|s\rangle$ in this subspace. We will then see that it is easy to compute the probability of success for any number of iterations $t$.
If we use an orthonormal basis of eigenvectors of $Q$, we can represent the component of $Q$ in $E$ as a $\dim E \times \dim E$ diagonal operator $Q_E$, whose diagonal elements are its eigenvalues. If we denote those eigenvalues by $e^{i\varphi_k}$, the components of the state after $t$ iterations are $\psi_t(k, l) = e^{i\varphi_k t}\, u(k, l)$, and the probability of success is $p_t = \left|\sum_{k,l} s(k, l)^*\, e^{i\varphi_k t}\, u(k, l)\right|^2$, where $u(k, l)$ and $s(k, l)$ are the components of $|u\rangle$ and $|s\rangle$ in the eigenspace associated with $e^{i\varphi_k}$, and the index $l$ distinguishes the components within an eigenspace of dimension greater than 1.
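Evaluating $p_t$ for every $t$ is then immediate. The sketch below uses our reading of the formula above (it assumes the initial state is $|u\rangle$ and flattens the $(k, l)$ indices into a single one); the values used in the call are purely illustrative toy numbers.

```python
import numpy as np

def success_probability(phis, u_comp, s_comp, t_max):
    """phis[j]: eigenphase of the j-th (k, l) component; u_comp[j], s_comp[j]:
    components of |u> and |s> on the corresponding eigenvector."""
    phis = np.asarray(phis, dtype=float)
    u_comp = np.asarray(u_comp, dtype=complex)
    s_comp = np.asarray(s_comp, dtype=complex)
    t = np.arange(t_max + 1)[:, None]
    amps = np.sum(np.conj(s_comp) * np.exp(1j * phis * t) * u_comp, axis=1)
    return np.abs(amps) ** 2          # p_t for t = 0 .. t_max

p = success_probability([0.3, -0.3], [0.5, 0.5], [0.6, 0.6], t_max=10)
print(p.round(3))
```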

Suppose $\lambda \neq \pm 1$; the particular cases will be treated later. Since $\lambda = e^{i\varphi}$, we can work with the phase $\varphi$ directly. Let us denote by $\hat{P}^S_+$ the Fourier transform of $P^S_+$, the projector onto $E^S_+$. From the study of the eigenspaces of $S$ in Sect. 2.1, we know that $\hat{P}^S_+$ is a diagonal matrix which contains 1 at the positions with signature $+1$ and 0 elsewhere. In the same way, we define $\hat{P}^S_-$, the diagonal matrix which contains 1 at the positions with signature $-1$ and 0 elsewhere. We write the sought eigenvector as a combination of two parts $|\varepsilon_+\rangle$ and $|\varepsilon_-\rangle$, whose Fourier transforms are $|\hat{\varepsilon}_+\rangle$ and $|\hat{\varepsilon}_-\rangle$. Since $F L^{\pm}_3$ span eigenspaces orthogonal to $E$, we can restrict ourselves to $F L^{\bullet}_3$ instead of $L_3$. Note that $F L^{\bullet}_3 |\varepsilon_3\rangle$ is a vector whose per-block average value is zero. The relevant diagonal operator can then be decomposed into the sum of a diagonal matrix $\hat{D}^{-1}_{\varphi}$, whose diagonal elements are the per-block average values (they depend only on $w_p$, the Hamming weight of the binary representation of $p$), and another diagonal matrix $\Upsilon_{\varphi}$ whose per-block average is zero. Let $|\hat{e}_{\pm}\rangle$ be the Fourier transform of $|e_{\pm}\rangle$. If the diagonal of $\hat{D}^{-1}_{\varphi}$ has no zero element (this case will be treated later), it can be inverted into a diagonal matrix $\hat{D}_{\varphi}$. Since they only depend on $w_p$, the elements of $\hat{D}_{\varphi}$ and its inverse can be indexed by the Hamming weight of their binary indices only; we denote those elements $\hat{D}_{\varphi}(w_p)$. Back in the original domain, the existence of an eigenvector with phase $\varphi$ reduces to a rank condition on a small matrix $D^s_{\varphi}$, which can easily be tested using its singular value decomposition: at least one of its singular values must be zero. Eigenvalues $e^{i\varphi_k}$ come in conjugate pairs where one of each pair has its argument in $[0, \pi[$. As a consequence, we can decompose $D^s_{\theta}$ for multiple values of $\theta$ between 0 and $\pi$ until we find all the eigenvalues of $Q$ associated with eigenvectors in $E$, provided that the computation of $D^s_{\theta}$ is not too complex. The dimension of the kernel gives us the multiplicity of the eigenvalue $e^{i\varphi_k}$. It is possible to halve the duration of the eigenvalue search by noticing an additional symmetry of the criterion together with the fact that $\ker D^s_{\varphi} = \ker D^s_{-\varphi}$; therefore, it is possible to restrict the search to the $[0, \pi/2]$ span.
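The search procedure itself is easy to express in code: scan θ, follow the smallest singular value of the criterion matrix and keep the angles where it numerically vanishes, the kernel dimension giving the multiplicity. In the sketch below, criterion(theta) is a hypothetical user-supplied function returning the matrix $D^s_\theta$ (its efficient construction is the subject of the next subsection); only the scanning logic is illustrated.

```python
import numpy as np

def find_eigenphases(criterion, steps=10_000, tol=1e-8):
    """Scan theta in [0, pi/2]; return (theta, multiplicity) pairs where the
    matrix criterion(theta) has at least one numerically zero singular value."""
    thetas = np.linspace(0.0, np.pi / 2, steps)
    smallest = np.array([np.linalg.svd(criterion(t), compute_uv=False)[-1]
                         for t in thetas])
    found = []
    # local minima of the smallest singular value are the candidates
    for i in range(1, steps - 1):
        if smallest[i] <= smallest[i - 1] and smallest[i] <= smallest[i + 1]:
            s = np.linalg.svd(criterion(thetas[i]), compute_uv=False)
            multiplicity = int(np.sum(s < tol))
            if multiplicity > 0:
                found.append((thetas[i], multiplicity))
    return found
```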

Efficient computation of the criterion
The direct computation of $D^s_{\theta}$ becomes impossible on a classical computer when $n$ is large, as it involves a $2^n \times 2^n$ matrix. However, its entries can be obtained from quantities that are efficiently computable. We will see that the matrix involved, whose entries are indexed by pairs of solutions, can be computed without the need of $H^s_N$. Let $a$ be the position of a given solution, $\alpha$ its binary representation, and $|h_a\rangle$ the corresponding column of $\tilde{H}^{w_p}_N$, the unnormalized Hadamard matrix restricted to the rows whose indices have a Hamming weight of $w_p$, defined in Sect. 1.7. The unnormalized Hadamard matrix has an interesting property: its element on row $a$ and column $b$ is $(-1)^{\langle\alpha|\beta\rangle}$, where $\alpha$ and $\beta$ are the binary representations of $a$ and $b$. Therefore, the product of two such elements on the same row is the element indexed by $a \oplus b$, the position whose binary index is $\alpha \oplus \beta$. Denote by $\eta_m(w_p)$ the sum of the elements of the column indexed by $m$ over the rows whose indices have weight $w_p$.
The elements of the sum are $+1$ when $\langle\rho|\mu\rangle$ is even, which happens when $\rho$ contains an even number of 1s inside the $w_m$ positions corresponding to the 1s of $\mu$ (the Hamming weight of $\mu$ is $w_m$). Denote this number by $2l$. There are $\binom{w_m}{2l}$ possible placements for these 1s, and $\binom{n - w_m}{w_p - 2l}$ for the $w_p - 2l$ remaining ones. The total number of $+1$ terms in the sum is then $\sum_l \binom{w_m}{2l}\binom{n - w_m}{w_p - 2l}$, and the number of $-1$ terms follows in the same way with $2l + 1$ in place of $2l$. If $m = 0$, we have $\binom{0}{0} = 1$ and $\eta_0(w_p) = \binom{n}{w_p}$. Note that $\eta_m(w_p)$ does not depend on $m$ but only on $w_m$, which allows us to compute it for several solutions at the same time; we therefore denote it by $\eta(w_p, w_m)$. Thanks to this result, it is possible to compute $D^s_{\theta}$, and therefore the criterion, in polynomial time on a classical computer.
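The counting argument can be checked directly: below, η is computed both from its definition (summing the ±1 entries of a Hadamard column over the indices of a given weight) and from an equivalent signed-binomial expression that depends only on the weights, as stated above.

```python
from math import comb
from itertools import combinations

def eta_bruteforce(n, w_p, m):
    """Sum of (-1)^<rho|mu> over all n-bit words rho of Hamming weight w_p,
    where mu is the binary word of m."""
    total = 0
    for ones in combinations(range(n), w_p):
        rho = sum(1 << i for i in ones)
        total += (-1) ** bin(rho & m).count("1")
    return total

def eta_closed_form(n, w_p, w_m):
    """Depends only on the weights: placements with an even number of
    overlapping 1s count +1, those with an odd number count -1."""
    return sum((-1) ** l * comb(w_m, l) * comb(n - w_m, w_p - l)
               for l in range(0, min(w_m, w_p) + 1))

n = 8
for m in [0b00000011, 0b00010110]:
    w_m = bin(m).count("1")
    for w_p in range(n + 1):
        assert eta_bruteforce(n, w_p, m) == eta_closed_form(n, w_p, w_m)
print("eta(w_p, w_m) matches the combinatorial formula")
```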

Regular cases
In order to compute the probability of success, we need the components of $|u\rangle$ and $|s\rangle$ in $E$. Once the eigenvalues $\lambda_k = e^{i\varphi_k}$ are determined, we can compute the vectors $|\varepsilon_1\rangle$ in the eigenspace associated with each $\lambda_k$ using Equation (7.42). Note that we use the notations $\tilde{s}(k, l)$ and $\tilde{u}(k, l)$ instead of $s(k, l)$ and $u(k, l)$: the vectors $|\varepsilon\rangle$ are not necessarily orthogonal nor unit vectors, so these quantities do not directly provide the projections of $|u\rangle$ and $|s\rangle$ onto the eigenspace associated with $e^{i\varphi_k}$, and a correction is required. First, we define the transformation $f$, identical to the one seen in Sect. 7.2, through a diagonal matrix $T_{\varphi}$ such that $T_{\varphi}(p)$ contains the average value over block $p$ of the diagonal of the squared inverse operator used above. Let $E_1$ be a matrix whose columns are the $|\varepsilon_1\rangle$ associated with $\lambda_k$, and let $E$ be the matrix whose columns are the corresponding eigenvectors $|\varepsilon\rangle$; $E^{\dagger}E$ is a correlation matrix from which we will deduce the correction. We can diagonalize it as $E^{\dagger}E = V_E\, S_E^2\, V_E^{\dagger}$, where $S_E^2$ is a diagonal matrix containing the eigenvalues of $E^{\dagger}E$ and $V_E$ a matrix whose columns are its eigenvectors. We can then deduce the singular value decomposition of $E$. Note that $U_E$ has $N_e$ rows and as many columns as there are eigenvectors associated with $\lambda_k$, so it may be too big for a classical computer to handle; however, its computation can be avoided, since $S_E$ and $V_E$ are square matrices whose size is $\dim \ker D^s_{\varphi}$. Finally, let $W$ be the diagonal matrix whose elements are $W(p) = w_p/(n - w_p)$; the corresponding correlation matrix can be computed from it, and from this point the correction method is the same as in the previous case.
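The correction itself is ordinary linear algebra. Given a matrix E whose columns span one eigenspace but are not orthonormal, diagonalizing the Gram matrix E†E gives the SVD factors of E without ever forming the large U_E, and the corrected components of a state follow; a generic sketch with our own variable names:

```python
import numpy as np

def corrected_components(E, state):
    """E: Ne x m matrix whose columns span one eigenspace (not necessarily
    orthonormal). Return the components of `state` on an orthonormal basis
    of that eigenspace, using only the small m x m Gram matrix."""
    gram = E.conj().T @ E                     # E^dagger E
    evals, V = np.linalg.eigh(gram)           # gram = V diag(evals) V^dagger
    S = np.sqrt(np.clip(evals, 0, None))      # singular values of E
    keep = S > 1e-12                          # drop numerically null directions
    # components on U_E = E V S^{-1}, without forming U_E explicitly:
    #   U_E^dagger |state> = S^{-1} V^dagger (E^dagger |state>)
    return (V[:, keep] / S[keep]).conj().T @ (E.conj().T @ state)

# toy usage with random data
E = np.random.rand(50, 3) + 1j * np.random.rand(50, 3)
state = np.random.rand(50) + 1j * np.random.rand(50)
print(np.round(np.abs(corrected_components(E, state)), 4))
```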

Singular $\hat{D}^{-1}_{\varphi}$ case
The last case to cover is the one where $\hat{D}^{-1}_{\varphi}$ is not invertible because it contains at least one zero value on its diagonal. According to Equation (7.33), this happens only at positions whose Hamming weight takes a particular value. If we denote by $X$ the matrix whose columns are the vectors $|x\rangle$ associated with those positions, the correlation matrix $E^{\dagger}E$ can still be computed, and the computation of the components of $|u\rangle$ and $|s\rangle$ in $E$ is done in the same way as in the previous cases. The computation of $\langle\varepsilon|s\rangle$ only involves $L_1$, which is orthogonal to $|x\rangle$, and the computation of $\langle\varepsilon|u\rangle$ involves $\langle\varepsilon_+|u\rangle$, which depends on $L_2$; this could lead to the appearance of $|x\rangle$. However, we see that $\langle\varepsilon_+|u\rangle$ only involves $|\hat{e}_+(0)\rangle$, which is never affected by $|x\rangle$: indeed, $|x\rangle$ only affects the positions whose weight is the critical one, which lies in $[1, n-1]$ in the case we are currently studying. Therefore, $|x\rangle$ does not appear in the computation of $\langle\varepsilon|u\rangle$.

Numerical example

As an illustration, we consider the search with $n = 6$ and $M = 2$ solutions at positions 3 and 6 (the configuration of Fig. 12 and Table 4). First, we compute $\dim E$. It is not required for the simulation itself, but it allows us to check that we find enough eigenvalues. In this example, we find $\dim E = 22$.
We chose a step of $\theta = \pi/10\,000$, but $\pi/500$ would be precise enough to obtain a good approximation. Over the 10 000 angle values tested, we find 15 local minima in the $[0, \pi]$ span. After computing the SVD of the corresponding $D^s_{\theta}$ matrices, we find that 4 of them do not have any singular value equal to zero; these correspond to the 4 highest local minima seen in Fig. 12. Additionally, the $D^s_{\theta}$ matrix for $\theta = \pi/2$ is undefined, as $\hat{D}^{-1}_{\pi/2}$ is singular; in this case, we apply Sect. 7.3.3. Then, we add the symmetric eigenvalues over $]\pi, 2\pi[$ and the eigenvalue $-1$ with a multiplicity of $M - 1 = 1$.
As in any numerical computation, deciding under which threshold a very small singular value is considered to be zero is always somewhat arbitrary. In some cases, depending on this threshold, the procedure can produce more eigenvalues than expected. These excess values are not an issue, as they correspond to vectors outside $E$; they therefore give zero components for $|s\rangle$ and $|u\rangle$ and do not affect the computation.
We can then compute the corresponding components of $|s\rangle$ and $|u\rangle$ in $E$. Note that in this example, among the 22 components of $|s\rangle$ and $|u\rangle$, only 12 are nonzero; these are displayed in Table 4.

Upper bound for the probability of success
From the values of $\varphi_k$ and $s(k, l)$, it is possible to compute an upper bound on the probability of success: bounding each oscillating term in the expression of $p_t$ by its modulus yields a bound independent of $t$. In the example shown in Sect. 8.2, we obtain $p_t \leq 0.5509$. We checked the validity of this upper bound: after 10 000 iterations, the maximum value reached by the probability of success is 0.4279, which is, as expected, under the upper bound (here it is 28% under the bound). While the upper bound is not tight in this example, such a theoretical bound is always useful for a fast estimate of the best possible performance that the quantum walk search could reach.

Conclusion
Quantum random walk search is a promising algorithm. However, its theoretical study can be extremely complex and possibly intractable, especially when there is more than one solution. In this paper, we have proposed a method which allows us to compute the probability of success of the algorithm as a function of the number of iterations, without simulating the search itself. It is therefore possible to compute that probability on a classical computer, even for searches whose state space dimension is very large.
Knowledge of the probability of success allows us to determine the optimal time of measurement (the time which maximizes this probability), provided that we know the number of solutions and their relative positions. For instance, we may know that the acceptable solutions lie in the interior of a hypersphere of a given radius whose absolute position is unknown.
It is also a powerful tool for in-depth study and better understanding of quantum random walk search properties.For instance, in our future work we plan to study how the number of solutions and their relative positions impact the probability of success.This may highlight a phenomenon of interference between solutions which does not exist in a classical search.
The study of the subspace $E$ could also be transposed to different walk patterns, such as planar quantum walks, possibly leading to interesting results. A further study could be considered in order to determine the exact complexity of the method, or the required precision on $\theta$ during the eigenvalue search.

Fig. 2 The shift operator $S_{CS}$ in $H_C \otimes H_S$ for $n = 3$. The $S_d$ blocks are delimited by the dashed lines

Fig. 3 The diagonalized shift operator $\hat{S}_{CS}$ in $H_C \otimes H_S$ for $n = 3$. The $\hat{S}_d$ blocks are delimited by the dashed lines

Fig. 5 The diagonalized shift operator $\hat{S}$ in $H_S \otimes H_C$ for $n = 3$. The $\hat{S}(p)$ blocks are delimited by the dashed lines

Fig. 6 The coin operator in $H_S \otimes H_C$ for $n = 3$

Fig. 7 The oracle for $n = 3$ with solutions at positions 3 and 6. The blocks corresponding to the $N = 8$ positions are delimited by the dashed lines

Fig. 8 The generators concatenation $L_{1,2,3}$ for $n = 3$ with solutions at positions 2 and 5. The generators are delimited by the dashed lines

Each block of $L'_3$ is itself the concatenation of three matrices, denoted $+$, $-$ and $\bullet$, whose sizes depend on the Hamming weight $w$ of the binary representation of the block index $p$:
• the $-$ part is an $n \times (w - 1)$ matrix (or an empty matrix if $w \leq 1$). It has nonzero elements only at the $w$ rows with signature $\sigma = -1$; therefore, its columns are eigenvectors of $\hat{S}$ associated with the eigenvalue $-1$. Those elements form $w - 1$ vectors of length $w$, all orthogonal to $|u_w\rangle$ and to each other.
• the $+$ part is an $n \times (n - w - 1)$ matrix (or an empty matrix if $n - w \leq 1$). It has nonzero elements only at the $n - w$ rows with signature $\sigma = +1$; therefore, its columns are eigenvectors of $\hat{S}$ associated with the eigenvalue $+1$. Those elements form $n - w - 1$ vectors of length $n - w$, all orthogonal to $|u_{n-w}\rangle$ and to each other.
• the $\bullet$ part is a single column when $w \neq 0$ and $w \neq n$, and is empty otherwise. Its elements are $\sqrt{(n - w)/(nw)}$ at the positions where $\sigma = -1$ (3.5) and $-\sqrt{w/(n(n - w))}$ at the positions where $\sigma = +1$. (3.6)
A convenient choice for what follows is to work with the operator $CO$. This is straightforward because $C$ and $O$ commute, which means they share the same eigenspaces.

Fig. 10 The uniform walk operator for $n = 3$

Thus, the $|v_w\rangle$ and $|v_w\rangle^*$ constitute $2(N - 2)$ eigenvectors. Taking into consideration the $2(N_e/2 - N + 2)$ already known from $E^{S,C}_{\pm,\pm}$, this accounts for all of them. None of the $2(N - 2)$ new eigenvectors is associated with an eigenvalue $\pm 1$.

Fig. 11 The effective walk operator for $n = 3$ with solutions at positions 3 and 6

Since both $|l^-\rangle$ and $|u\rangle$ contain only nonzero elements, they are not in $\operatorname{span}\{L_1\} = E^O_-$.

Finally, the matrix used in the criterion has, at row $a$ and column $b$ (both indexing solutions), the entry $\frac{1}{N}\,\eta(w_p, w_{a \oplus b})$. (7.62) In the decomposition of $E$, $U_E$ is an $N \times M$ matrix and $S_E$ and $V_E$ are $M \times M$ matrices; therefore $\tilde{s}(k, l) = \langle E(l)|s\rangle$, (7.101) where $|E(l)\rangle$ is the $l$-th column of $E$. Because the columns of $U_E$ are orthogonal unit vectors, we can use them to compute the correct projection of $|s\rangle$ in $E$, that is $|s(k)\rangle = U_E^{\dagger}|s\rangle$. (7.102)

Real eigenvalues case

We will now consider the case $\lambda = +1$. This eigenvalue implies that $P^S_+|\varepsilon_-\rangle = 0$ and $P^S_-|\varepsilon_+\rangle = 0$. Then, $|\varepsilon_-\rangle \in E^{S,CO}_{-,-}$ and $|\varepsilon_+\rangle \in E^{S,CO}_{+,+}$, but we have shown that $\dim E^{S,CO}_{+,+} = 0$, so $|\varepsilon\rangle \in E^{S,CO}_{-,-}$. The intersection of $E^{S,CO}_{-,-}$ with $E$ has a dimension of $M - 1$. Similarly, if $\lambda = -1$, we have $P^S_-|\varepsilon_-\rangle = 0$ and $P^S_+|\varepsilon_+\rangle = 0$. Then, $|\varepsilon_-\rangle \in E^{S,CO}_{+,-}$ and $|\varepsilon_+\rangle \in E^{S,CO}_{-,+}$, but $\dim E^{S,CO}_{-,+} = 0$, so $|\varepsilon\rangle \in E^{S,CO}_{+,-}$, whose dimension in the complement is also $M - 1$. In the following, we will only consider $\lambda = -1$, as the discussion related to $\lambda = +1$ is similar but of less interest, since both $|u\rangle$ and $|s\rangle$ have null projections over this eigenspace. We have shown in Sect. 4 that the subspace of $E^{S,CO}_{-,-}$ in the complement of $E^{S,C}_{-,-}$, that is in $E$, has a dimension of $M - 1$; furthermore, we established a parametric form of the corresponding eigenvectors.

Fig. 12 Search for the eigenvalue phases with $n = 6$, $M = 2$ solutions at positions 3 and 6 and a step $\theta = \pi/10\,000$. Found eigenvalues are circled

Table 1
$L^-_3$, $L^+_3$ and $L^{\bullet}_3$ are the submatrices of $L'_3$ made up of the columns corresponding, respectively, to the $-$, $+$ and $\bullet$ parts of its blocks. Inside each block $p$ of $L^{\bullet}_3$, the positions with signature $-1$ contain $\sqrt{(n - w)/(nw)}$, and $F L_1 |\varepsilon_1\rangle$ contains, in block $p$, the value indexed by $p$ of $H^s_N |\varepsilon_1\rangle$ up to a normalization factor.

Table 2
Uniform walk operator eigendecomposition (columns: Eigenspace, Generator, Dimension). In Equation (5.8), $\hat{S}(p)$ is the block of $\hat{S}$ associated with $p$ defined in Sect. 2.1, and $G$ is the Grover diffusion operator defined in Sect. 2.2. The diagonal of each $\hat{S}(p)$ contains $n - w$ times '+1' and $w$ times '−1'. The only coefficient on the diagonal of $G$ is $-1 + 2/n$; therefore, the trace of each $\hat{U}_p$ is $(n - 2w)\left(\frac{2}{n} - 1\right)$.

Table 3
The corresponding eigenspaces are in $E$, as they have no intersection with any eigenspace of $O$. That means $|u_{N_e}\rangle$ is in $E$, and any quantum walk initialized in this state will remain in $E$.
Furthermore, $\operatorname{span}\{F L^+_3\} \subset \operatorname{span}\{L_3\}$, and since $E^O_+ = \operatorname{span}\{L_{2,3}\}$, these vectors belong to $E^O_+$. The index $l$ distinguishes the components of a given eigenspace whose dimension is greater than 1; we also define $|u(k)\rangle$ and $|s(k)\rangle$ as the vectors made of all the $u(k, l)$ and $s(k, l)$ for a given $k$, whose length is the multiplicity of the eigenvalue $e^{i\varphi_k}$. Since $E^{CO}_- \cup E^{CO}_+ = H$, any eigenvector $|\varepsilon\rangle$ in $E$ can be represented as the sum of two vectors belonging to $E^{CO}_-$ and $E^{CO}_+$. In Equation (7.3), the terms $\psi_t(k, l)$, $u(k, l)$ and $s(k, l)$ are the respective components of $|\psi_t\rangle$, $|u\rangle$ and $|s\rangle$ in the eigenspace associated with $e^{i\varphi_k}$. Writing $|\varepsilon\rangle = |\varepsilon_-\rangle + |\varepsilon_+\rangle$, the eigenvalue equation gives $S|\varepsilon_-\rangle + S|\varepsilon_+\rangle = \lambda|\varepsilon_-\rangle + \lambda|\varepsilon_+\rangle$; (7.8) since $S^2 = I$, applying $S$ again yields $|\varepsilon_-\rangle + |\varepsilon_+\rangle = \lambda S|\varepsilon_-\rangle + \lambda S|\varepsilon_+\rangle$.
Recall that $\langle e_+|u_N\rangle$ is equal to the average value of $\langle e_+|$ and that the first term of a Fourier transform is the average value of the transformed vector. We then have the components of $|s\rangle$ and $|u\rangle$ in $E$ for each $|\varepsilon_1\rangle$ associated with each eigenvalue $\lambda_k$. In Equation (7.83), $A(w_p)$ is a diagonal matrix whose elements are functions of $w_p$ only. As in the previous case, the generated $|\varepsilon\rangle$ are not orthogonal and must be corrected in a similar way. Due to the orthogonality of $|\varepsilon_-\rangle$ and $|\varepsilon_+\rangle$ and of $L_1$ and $L^{\bullet}_3$, we have, for two given vectors $|\varepsilon\rangle$ and $|\varepsilon'\rangle$, $\langle\varepsilon|\varepsilon'\rangle = \langle\varepsilon_1|\varepsilon'_1\rangle + \langle\varepsilon_3|\varepsilon'_3\rangle$.
We showed in Sect. 7.2 that these quantities can be computed easily. Equation (7.37) is only valid over the positions $p$ whose binary representation has a Hamming weight different from the critical one; for the others, we have $|\hat{e}_+(p)\rangle = i\,\hat{D}_{\varphi}(p)\,|\hat{e}_-(p)\rangle$. (7.124) We define $|\hat{e}^p_+\rangle$ as the vector whose elements at the critical positions are zero and whose other elements are determined by the equation above; it can be obtained by using a modified $\hat{D}_{\varphi}$ whose elements at those positions are zero, which we denote $\hat{D}^p_{\varphi}$. Let us note $|x\rangle$ the length-$\binom{n}{w_p}$ vector containing the elements of $|\hat{e}_+\rangle$ at the critical positions. The component of $|b\rangle$ orthogonal to $\operatorname{span}\{U_{w_p}\}$, denoted $|b_\perp\rangle$, has no impact on $|a\rangle$ and can be considered null; then $|x\rangle$ belongs to $\operatorname{span}\{V_{w_p} S_{w_p}\}$. (7.138)

Table 4
Estimated nonzero components of $|s\rangle$ and $|u\rangle$ in $E$ with $n = 6$, $M = 2$ solutions at positions 3 and 6