
Quantum Walks Can Find a Marked Element on Any Graph

Algorithmica

Abstract

We solve an open problem by constructing quantum walks that not only detect but also find marked vertices in a graph. In the case when the marked set \(M\) consists of a single vertex, the number of steps of the quantum walk is quadratically smaller than the classical hitting time \({{\mathrm{HT}}}(P,M)\) of any reversible random walk \(P\) on the graph. In the case of multiple marked elements, the number of steps is given in terms of a related quantity \({\hbox {HT}}^{+}(P,M)\) which we call extended hitting time. Our approach is new, simpler and more general than previous ones. We introduce a notion of interpolation between the random walk \(P\) and the absorbing walk \(P'\), whose marked states are absorbing. Then our quantum walk is simply the quantum analogue of this interpolation. Contrary to previous approaches, our results remain valid when the random walk \(P\) is not state-transitive. We also provide algorithms in the cases when only approximations or bounds on parameters \(p_M\) (the probability of picking a marked vertex from the stationary distribution) and \({\hbox {HT}}^{+}(P,M)\) are known.


Notes

  1. An automorphism of \(P\) is a permutation matrix \(Q\) such that \(Q P Q^{\mathsf {T}}= P\).

  2. Note that in the preliminary version of this work [16], a subtle error led to the wrong conclusion that \({\hbox {HT}}^{+}(P,M)={{\mathrm{HT}}}(P,M)\) for all \(M\) and reversible \(P\). In general this only holds when \(|M |=1\).

  3. Reversibility of Markov chains (see Appendix A.1.2) is not related to thermodynamical reversibility. Actually, even a “reversible” Markov chain is thermodynamically irreversible.

  4. We will use the terms “random walk”, “Markov chain”, and “stochastic matrix” interchangeably. The same holds for “state”, “vertex”, and “element”.

  5. Note that Szegedy [10] uses a different convention and defines the quantum walk operator corresponding to \(P\) as \(\bigl ( V(P) \, W(P) \, V(P)^\dagger \bigr )^2\) where \(W(P)\) is given in Eq. (5).

  6. Note that in the case of multiple marked elements this expression cannot be used for \(s = 1\), since the numerator and denominator vanish for terms with \(k > n - |M |\). We analyze the \(s \rightarrow 1\) limit in Appendix C.

  7. Strictly speaking, the definition of reversibility also includes ergodicity for the stationary distribution to be uniquely defined. However, we will relax this requirement for \(P'\) since, by continuity, \(\pi '\) is the natural choice of the “unique” stationary distribution.

  8. There is no need to use bra-ket notation at this point; nevertheless we adopt it since the vectors \(|v_i(s)\rangle \) will later be used as quantum states.

References

  1. Ambainis, A.: Quantum walk algorithm for element distinctness. SIAM J. Comput. 37(1), 210–239 (2007)


  2. Magniez, F., Santha, M., Szegedy, M.: Quantum algorithms for the triangle problem. SIAM J. Comput. 37(2), 413–424 (2007)


  3. Buhrman, H., Špalek, R.: Quantum verification of matrix products. In: Proceedings of the 17th ACM-SIAM symposium on discrete algorithms (SODA’06), pp. 880–889. ACM (2006)

  4. Magniez, F., Nayak, A.: Quantum complexity of testing group commutativity. Algorithmica 48(3), 221–232 (2007)


  5. Aaronson, S., Ambainis, A.: Quantum search of spatial regions. Theory Comput. 1(4), 47–79 (2005)


  6. Shenvi, N., Kempe, J., Whaley, B.K.: Quantum random-walk search algorithm. Phys. Rev. A 67(5), 052307 (2003)


  7. Childs, A.M., Goldstone, J.: Spatial search and the Dirac equation. Phys. Rev. A 70(4), 042312 (2004)


  8. Ambainis, A., Kempe, J., Rivosh, A.: Coins make quantum walks faster. In: Proceedings of the 16th ACM-SIAM symposium on discrete algorithms (SODA’05), pp. 1099–1108. SIAM (2005)

  9. Kempe, J.: Discrete quantum walks hit exponentially faster. Probab. Theory Relat. Fields 133(2), 215–235 (2005)


  10. Szegedy, M.: Quantum speed-up of Markov chain based algorithms. In: Proceedings of the 45th IEEE symposium on foundations of computer science (FOCS’04), pp. 32–41. IEEE Computer Society Press (2004)

  11. Krovi, H., Brun, T.A.: Hitting time for quantum walks on the hypercube. Phys. Rev. A 73(3), 032341 (2006)


  12. Magniez, F., Nayak, A., Roland, J., Santha, M.: Search via quantum walk. In: Proceedings of the 39th ACM symposium on theory of computing (STOC’07), pp. 575–584. ACM Press (2007)

  13. Magniez, F., Nayak, A., Richter, P., Santha, M.: On the hitting times of quantum versus random walks. Algorithmica 63(1), 91–116 (2012)


  14. Varbanov, M., Krovi, H., Brun, T.A.: Hitting time for the continuous quantum walk. Phys. Rev. A 78(2), 022324 (2008)


  15. Tulsi, A.: Faster quantum-walk algorithm for the two-dimensional spatial search. Phys. Rev. A 78(1), 012310 (2008)


  16. Krovi, H., Magniez, F., Ozols, M., Roland, J.: Finding is as easy as detecting for quantum walks. In: Automata, Languages and Programming, Lecture Notes in Computer Science, vol. 6198, pp. 540–551. Springer, Berlin-Heidelberg (2010)

  17. Childs, A.M., Goldstone, J.: Spatial search by quantum walk. Phys. Rev. A 70(2), 022314 (2004)


  18. Ambainis, A., Bačkurs, A., Nahimovs, N., Ozols, R., Rivosh, A.: Lecture Notes in Computer Science, vol. 7582. Springer, Berlin (2013)


  19. Krovi, H., Ozols, M., Roland, J.: Adiabatic condition and the quantum hitting time of Markov chains. Phys. Rev. A 82(2), 022333 (2010)


  20. Grinstead, C.M., Snell, J.L.: Introduction to Probability, 2nd edn. American Mathematical Society, Providence (1997)


  21. Kemeny, J.G., Snell, J.L.: Finite Markov Chains. Undergraduate Texts in Mathematics. Springer, Berlin (1960)


  22. Koralov, L.B., Sinai, Y.G.: Theory of Probability and Random Processes. Springer, Berlin (2007)


  23. Levin, D.A., Peres, Y., Wilmer, E.L.: Markov Chains and Mixing Times. American Mathematical Society, Providence (2009)


  24. Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (1990)


  25. Meyer, C.D.: Matrix Analysis and Applied Linear Algebra, vol. 1. SIAM (Society for Industrial and Applied Mathematics), Philadelphia (2000)

  26. Cleve, R., Ekert, A., Macchiavello, C., Mosca, M.: Quantum algorithms revisited. Proc. R. Soc. Lond. 454(1969), 339–354 (1998)


  27. Høyer, P., Mosca, M., de Wolf, R.: Quantum search on bounded-error inputs. In: Proceedings of the 30th international colloquium on automata, languages and programming (ICALP’03), volume 2719 of lecture notes in computer science, pp. 291–299. Springer (2003)

  28. Feige, U., Raghavan, P., Peleg, D., Upfal, E.: Computing with noisy information. SIAM J. Comput. 23(5), 1001–1018 (1994)


  29. Farhi, E., Goldstone, J., Gutmann, S., Lapan, J., Lundgren, A., Preda, D.: A quantum adiabatic evolution algorithm applied to random instances of an NP-complete problem. Science 292(5516), 472–475 (2001)



Acknowledgments

MO would like to acknowledge Andrew Childs for many helpful discussions. The authors would also like to thank Andris Ambainis for useful comments. Part of this work was done while HK, MO, and JR were at NEC Laboratories America in Princeton. MO also was affiliated with University of Waterloo and Institute for Quantum Computing (supported by QuantumWorks) and IBM TJ Watson Research Center (supported by DARPA QUEST program under Contract No. HR0011-09-C-0047) during this project. Presently FM, MO and JR are supported by the European Union Seventh Framework Programme (FP7/2007-2013) under Grant Agreement No. 600700 (QALGO). FM is also supported by the French ANR Blanc project ANR-12-BS02-005 (RDAM). Last, JR acknowledges support from the Belgian ARC project COPHYMA.

Author information


Corresponding author

Correspondence to Hari Krovi.

Additional information

A preliminary version of this work appeared in the Proceedings of the 37th International Colloquium on Automata, Languages and Programming, volume 6198 of Lecture Notes in Computer Science, pages 540–551, Springer, 2010.

Appendices

Appendix A: Semi-Absorbing Markov Chains

In this appendix we study a special type of Markov chains described by a one-parameter family \(P(s)\) corresponding to convex combinations of \(P\) and the associated absorbing chain \(P'\). Intuitively, some states of \(P(s)\) are hard to escape and the interpolation parameter \(s\) controls how absorbing they are. For this reason we call such chains semi-absorbing. In this appendix we consider various properties of semi-absorbing Markov chains as a function of the interpolation parameter \(s\). The main result of this appendix is Theorem 4 which is of central importance in Sect. 3.

We discussed some preliminaries on Markov chains and defined basic concepts such as ergodicity in Sect. 2.1. Here we begin by defining the interpolated Markov chain \(P(s)\) and considering its properties, such as the stationary distribution and reversibility (Appendix A.1). We proceed by applying these concepts to define and study the discriminant matrix of \(P(s)\) which encodes all relevant properties of \(P(s)\), such as eigenvalues and the principal eigenvector, but has a much more convenient form (Appendix A.2). Finally, we define the hitting time \({{\mathrm{HT}}}\) and the interpolated hitting time \({{\mathrm{HT}}}(s)\) and relate the two in the case of a single marked element via Theorem 4, which is our main result regarding semi-absorbing Markov chains (Appendix A.3).

Results from this appendix are used in Sect. 3 to construct quantum search algorithms based on discrete-time quantum walks.

1.1 Appendix A.1: Basic Properties of Semi-Absorbing Markov Chains

Assume that a subset \(M \subset X\) of size \(m := |M |\) of the states is marked (we assume that \(M\) is not empty), and let \(P'\) be the absorbing Markov chain obtained from \(P\) by turning all outgoing transitions from marked states into self-loops (see [21, Chapter III] and [20, Sect. 11.2]). Note that \(P'\) differs from \(P\) only in the rows corresponding to the marked states (where it contains all zeros on non-diagonal elements, and ones on the diagonal). If we arrange the states of \(X\) so that the unmarked states \(U := X \setminus M\) come first, matrices \(P\) and \(P'\) have the following block structure:

$$\begin{aligned} P = \begin{pmatrix}P_{UU} &{} P_{UM} \\ P_{MU} &{} P_{MM}\end{pmatrix}, \qquad P' = \begin{pmatrix}P_{UU} &{} P_{UM} \\ 0 &{} I\end{pmatrix}, \end{aligned}$$
(97)

where \(P_{UU}\) and \(P_{MM}\) are square matrices of size \((n-m) \times (n-m)\) and \(m\times m\), respectively, while \(P_{UM}\) and \(P_{MU}\) are matrices of size \((n-m) \times m\) and \(m\times (n-m)\), respectively (Fig. 4).

Fig. 4

Directed graphs underlying Markov chain \(P\) (left) and the corresponding absorbing chain \(P'\) (right). Outgoing arcs from vertices in the marked set \(M\) have been turned into self-loops in \(P'\)

Recall that we have defined an interpolated Markov chain that interpolates between \(P\) and \(P'\):

$$\begin{aligned} P(s) := (1-s) P + s P', \quad 0 \le s \le 1. \end{aligned}$$
(98)

This expression has some resemblance with adiabatic quantum computation where similar interpolations are usually defined for quantum Hamiltonians [29]. Indeed, the interpolated Markov chain \(P(s)\) was used in [19] to construct an adiabatic quantum search algorithm. Note that \(P(0) = P\), \(P(1) = P'\), and \(P(s)\) has the following block structure:

$$\begin{aligned} P(s) = \begin{pmatrix}P_{UU} &{} P_{UM} \\ (1-s)P_{MU} &{} (1-s)P_{MM} + s I\end{pmatrix}. \end{aligned}$$
(99)
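
To make the interpolation concrete, here is a minimal NumPy sketch (our own illustration, not part of the original construction) that builds \(P'\) and \(P(s)\) for a small chain; the example matrix is the one that reappears in Eq. (164) below, while the choice of marked set here is ours.

```python
import numpy as np

# Example chain (the one used later in Eq. (164)); any ergodic reversible P works.
P = np.array([[3, 1, 0],
              [1, 2, 1],
              [0, 1, 3]]) / 4
M = [2]                                  # marked states (0-indexed); our own choice here

# Absorbing walk P': outgoing transitions from marked states become self-loops.
P_abs = P.copy()
P_abs[M, :] = 0.0
P_abs[M, M] = 1.0

def P_interp(s):
    """Interpolated chain P(s) = (1-s) P + s P' of Eq. (98)."""
    return (1 - s) * P + s * P_abs

for s in (0.0, 0.3, 1.0):
    assert np.allclose(P_interp(s).sum(axis=1), 1.0)   # P(s) stays row-stochastic
```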

Proposition 7

If \(P\) is ergodic then so is \(P(s)\) for \(s \in [0,1)\). \(P(1)\) is not ergodic.

Proof

Recall from Definition 1 that ergodicity of a Markov chain can be established just by looking at its underlying graph. A non-zero transition probability in \(P\) remains non-zero also in \(P(s)\) for \(s \in [0,1)\). Thus the ergodicity of \(P\) implies that \(P(s)\) is also ergodic for \(s \in [0,1)\). However, \(P(1)\) is not irreducible, since states in \(U\) are not reachable from \(M\). Thus \(P(1)\) is not ergodic. \(\square \)

Proposition 8

\((P'^{\,t})_{UU} = P_{UU}^t\).

Proof

Let us derive an expression for \(P'^{\,t}\), the matrix of transition probabilities corresponding to \(t\) applications of \(P'\). Notice that \(\bigl (\begin{array}{cc}a &{} b \\ 0 &{} 1\end{array}\bigr ) \bigl (\begin{array}{cc}c &{} d \\ 0 &{} 1\end{array}\bigr ) = \bigl (\begin{array}{cc}ac &{} ad + b \\ 0 &{} 1\end{array}\bigr )\). By induction,

$$\begin{aligned} P'^{\,t} = \begin{pmatrix}P_{UU}^t &{} \sum \nolimits _{k=0}^{t-1} P_{UU}^k P_{UM} \\ 0 &{} I\end{pmatrix}. \end{aligned}$$
(100)

When restricted to \(U\), it acts as \(P_{UU}^t\). \(\square \)

Proposition 9

([20, Theorem 11.3, p. 417]) If \(P\) is irreducible then \(\lim _{k \rightarrow \infty } P_{UU}^k = 0\).

Intuitively this means that the sub-stochastic process defined by \(P_{UU}\) eventually dies out or, equivalently, that the unmarked states of \(P'\) eventually get absorbed (by Prop. 8).

Proof

Let us fix an unmarked initial state \(x\). Since \(P\) is irreducible, we can reach a marked state from \(x\) in a finite number of steps. Note that this also holds true for \(P'\). Let us denote the smallest number of steps by \(l_x\) and the corresponding probability by \(p_x > 0\). Thus in \(l := \max _x l_x\) steps of \(P'\) we are guaranteed to reach a marked state with probability at least \(p := \min _x p_x > 0\), independently of the initial state \(x \in U\). Notice that the probability to still be in an unmarked state after \(kl\) steps is at most \((1-p)^k\) which approaches zero as we increase \(k\). \(\square \)

Proposition 10

([21, Theorem 3.2.1, p. 46]) If \(P\) is irreducible then \(I - P_{UU}\) is invertible.

Proof

Notice that

$$\begin{aligned} (I - P_{UU}) \cdot (I + P_{UU} + P_{UU}^2 + \cdots + P_{UU}^{k-1}) = I - P_{UU}^k \end{aligned}$$
(101)

and take the determinant of both sides. From Prop. 9 we see that \(\lim _{k \rightarrow \infty } \det (I - P_{UU}^k) = 1\). By continuity, there exists \(k_0\) such that \(\det (I - P_{UU}^{k_0}) > 0\), so the determinant of the left-hand side is non-zero as well. Using multiplicativity of the determinant, we conclude that \(\det (I - P_{UU}) \ne 0\) and thus \(I - P_{UU}\) is invertible. \(\square \)

In the Markov chain literature \((I - P_{UU})^{-1}\) is called the fundamental matrix of \(P\).
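
As a quick numerical illustration (a sketch with the same example chain and our own choice of marked set), the fundamental matrix can be computed directly; by the standard theory of absorbing chains [21], its row sums are the expected numbers of steps until absorption from each unmarked state.

```python
import numpy as np

P = np.array([[3, 1, 0],
              [1, 2, 1],
              [0, 1, 3]]) / 4
U, M = [0, 1], [2]                        # unmarked / marked states (our choice)

P_UU = P[np.ix_(U, U)]
N = np.linalg.inv(np.eye(len(U)) - P_UU)  # fundamental matrix (I - P_UU)^{-1}

# Row sums of N = expected absorption times of P' from each unmarked state [21].
print(N.sum(axis=1))                      # -> [12., 8.] for this chain
```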

1.1.1 Appendix A.1.1: Stationary Distribution

From now on let us demand that \(P\) is ergodic. Then according to the Perron–Frobenius Theorem it has a unique stationary distribution \(\pi \) that is non-zero everywhere. Let \(\pi _U\) and \(\pi _M\) be row vectors of length \(n-m\) and \(m\) that are obtained by restricting \(\pi \) to sets \(U\) and \(M\), respectively. Then

$$\begin{aligned} \pi = \begin{pmatrix}\pi _U&\pi _M\end{pmatrix}, \qquad \pi ' := \begin{pmatrix}0_U&\pi _M\end{pmatrix}, \end{aligned}$$
(102)

where \(0_U\) is the all-zeroes row vector indexed by elements of \(U\) and \(\pi '\) satisfies \(\pi ' P' = \pi '\).

Let \(p_M := \sum _{x \in M} \pi _x\) be the probability to pick a marked element from the stationary distribution. In analogy to the definition of \(P(s)\) in Eq. (98), let \(\pi (s)\) be a convex combination of \(\pi \) and \(\pi '\), appropriately normalized:

$$\begin{aligned} \pi (s) := \frac{(1-s) \pi + s \pi '}{(1-s) + s p_M} = \frac{1}{1 - s (1-p_M)} \begin{pmatrix}(1-s) \pi _U&\pi _M\end{pmatrix}. \end{aligned}$$
(103)

Proposition 11

\(\pi (s)\) is the unique stationary distribution of \(P(s)\) for \(s \in [0,1)\). At \(s = 1\) any distribution with support only on marked states is stationary, including \(\pi (1)\).

Proof

Notice that

$$\begin{aligned} (\pi - \pi ') (P - P') = \begin{pmatrix}\pi _U&0\end{pmatrix} \begin{pmatrix}0 &{} 0 \\ P_{MU} &{} P_{MM} - I\end{pmatrix} = 0 \end{aligned}$$
(104)

which is equivalent to

$$\begin{aligned} \pi P' + \pi ' P = \pi P + \pi ' P'. \end{aligned}$$
(105)

Using this equation we can check that \(\pi (s) P(s) = \pi (s)\) for any \(s \in [0,1]\):

$$\begin{aligned}&\bigl ( (1-s) \pi + s \pi ' \bigr ) \bigl ( (1-s) P + s P' \bigr ) \end{aligned}$$
(106)
$$\begin{aligned}&= (1-s)^2 \pi P + (1-s)s (\pi P' + \pi ' P) + s^2 \pi ' P' \end{aligned}$$
(107)
$$\begin{aligned}&= (1-s)^2 \pi + (1-s)s (\pi + \pi ') + s^2 \pi ' \end{aligned}$$
(108)
$$\begin{aligned}&= \bigl ( (1-s) \pi + s \pi ' \bigr ) \bigl ( (1 - s) + s \bigr ) \end{aligned}$$
(109)
$$\begin{aligned}&= (1-s) \pi + s \pi '. \end{aligned}$$
(110)

Recall from Prop. 7 that \(P(s)\) is ergodic for \(s \in [0,1)\) so \(\pi (s)\) is the unique stationary distribution by Perron–Frobenius Theorem. Since \(P'\) acts trivially on marked states, any distribution with support only on marked states is stationary for \(P(1)\). \(\square \)
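
A short numerical check of Prop. 11 (a sketch; chain and marked set as in the earlier snippets): \(\pi (s)\) from Eq. (103) is indeed a fixed point of \(P(s)\).

```python
import numpy as np

P = np.array([[3, 1, 0],
              [1, 2, 1],
              [0, 1, 3]]) / 4
M = [2]
pi = np.full(3, 1 / 3)                     # stationary distribution of this P (uniform)
p_M = pi[M].sum()

P_abs = P.copy(); P_abs[M, :] = 0.0; P_abs[M, M] = 1.0
pi_abs = np.zeros_like(pi); pi_abs[M] = pi[M]          # pi' from Eq. (102)

for s in np.linspace(0.0, 0.99, 5):
    P_s = (1 - s) * P + s * P_abs
    pi_s = ((1 - s) * pi + s * pi_abs) / (1 - s * (1 - p_M))   # Eq. (103)
    assert np.allclose(pi_s @ P_s, pi_s)                       # pi(s) P(s) = pi(s), Prop. 11
    assert np.isclose(pi_s.sum(), 1.0)
```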

1.1.2 Appendix A.1.2: Reversibility

Definition 10

Markov chain \(P\) is called reversible if it is ergodic and satisfies the so-called detailed balance condition

$$\begin{aligned} \forall x,\,y \in X{:}\,\pi _x P_{xy} = \pi _y P_{yx} \end{aligned}$$
(111)

where \(\pi \) is the unique stationary distribution of \(P\).

Intuitively this means that the net flow of probability in the stationary distribution between every pair of states is zero. Note that Eq. (111) is equivalent to

$$\begin{aligned} {{\mathrm{diag}}}(\pi ) \, P = P^{\mathsf {T}}{{\mathrm{diag}}}(\pi ) = \bigl ( {{\mathrm{diag}}}(\pi ) P \bigr )^{\mathsf {T}}\end{aligned}$$
(112)

where \({{\mathrm{diag}}}(\pi )\) is a diagonal matrix whose diagonal is given by vector \(\pi \). Thus Eq. (111) is equivalent to saying that matrix \({{\mathrm{diag}}}(\pi ) P\) is symmetric.

Proposition 12

If \(P\) is reversible then so is \(P(s)\) for any \(s \in [0,1]\). Hence, \(P(s)\) satisfies the interpolated detailed balance equation

$$\begin{aligned} \forall s \in [0,1], \, \forall x,y \in X{:}\,\pi _x(s) P_{xy}(s) = \pi _y(s) P_{yx}(s). \end{aligned}$$
(113)

Proof

First, notice that the absorbing walk \(P'\) is reversible (see footnote 7) since \({{\mathrm{diag}}}(\pi ') P'\) is a symmetric matrix:

$$\begin{aligned} {{\mathrm{diag}}}(\pi ') P' = \begin{pmatrix}0 &{} 0 \\ 0 &{} {{\mathrm{diag}}}(\pi _M)\end{pmatrix} \begin{pmatrix}P_{UU} &{} P_{UM}\\ 0 &{} I\end{pmatrix} = \begin{pmatrix}0 &{} 0 \\ 0 &{} {{\mathrm{diag}}}(\pi _M)\end{pmatrix} = {{\mathrm{diag}}}(\pi '). \end{aligned}$$
(114)

Next, notice that

$$\begin{aligned} {{\mathrm{diag}}}(\pi - \pi ') (P - P') = \begin{pmatrix}{{\mathrm{diag}}}(\pi _U) &{} 0 \\ 0 &{} 0\end{pmatrix} \begin{pmatrix}0 &{} 0 \\ P_{MU} &{} P_{MM} - I\end{pmatrix} = 0 \end{aligned}$$
(115)

which gives us an analogue of Eq. (105):

$$\begin{aligned} {{\mathrm{diag}}}(\pi ') P + {{\mathrm{diag}}}(\pi ) P' = {{\mathrm{diag}}}(\pi ) P + {{\mathrm{diag}}}(\pi ') P'. \end{aligned}$$
(116)

Here the right-hand side is symmetric due to reversibility of \(P\) and \(P'\), thus so is the left-hand side. Using this we can check that \(P(s)\) is reversible:

$$\begin{aligned}&{{\mathrm{diag}}}\bigl ( (1-s) \pi + s \pi ' \bigr ) \bigl ( (1-s) P + s P' \bigr ) \end{aligned}$$
(117)
$$\begin{aligned}&= (1-s)^2 {{\mathrm{diag}}}(\pi ) P + (1-s)s \bigl ( {{\mathrm{diag}}}(\pi ) P' + {{\mathrm{diag}}}(\pi ') P \bigr ) + s^2 {{\mathrm{diag}}}(\pi ') P' \end{aligned}$$
(118)

where the first and last terms are symmetric since \(P\) and \(P'\) are reversible, but the middle term is symmetric due to Eq. (116). \(\square \)
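
The interpolated detailed balance condition of Eq. (113) can be checked numerically in the same way (a sketch with the same example chain): \({{\mathrm{diag}}}(\pi (s)) P(s)\) should be symmetric for every \(s\).

```python
import numpy as np

P = np.array([[3, 1, 0],
              [1, 2, 1],
              [0, 1, 3]]) / 4
M = [2]
pi = np.full(3, 1 / 3); p_M = pi[M].sum()
P_abs = P.copy(); P_abs[M, :] = 0.0; P_abs[M, M] = 1.0
pi_abs = np.zeros_like(pi); pi_abs[M] = pi[M]

for s in np.linspace(0.0, 1.0, 6):
    P_s = (1 - s) * P + s * P_abs
    pi_s = ((1 - s) * pi + s * pi_abs) / (1 - s * (1 - p_M))
    A = np.diag(pi_s) @ P_s
    assert np.allclose(A, A.T)             # detailed balance, Eq. (113)
```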

1.2 Appendix A.2: Discriminant Matrix

Recall from Definition 6 that the discriminant matrix of a Markov chain \(P(s)\) is

$$\begin{aligned} D(s) := \sqrt{P(s) \circ P(s)^{\mathsf {T}}}, \end{aligned}$$
(119)

where the Hadamard product “\(\circ \)” and the square root are computed entry-wise. This matrix was introduced by Szegedy in [10]. We prefer to work with \(D(s)\) rather than \(P(s)\) since the matrix of transition probabilities is not necessarily symmetric while its discriminant matrix is.

Proposition 13

If \(P\) is reversible then

$$\begin{aligned} D(s)&= {{\mathrm{diag}}}\bigl ( \! \sqrt{\pi (s)} \, \bigr ) \, P(s) \, {{\mathrm{diag}}}\bigl ( \! \sqrt{\pi (s)} \, \bigr )^{-1}, \quad \quad \forall s \in [0,1); \end{aligned}$$
(120)
$$\begin{aligned} D(1)&= \begin{pmatrix} {{\mathrm{diag}}}\bigl ( \! \sqrt{\pi _U} \, \bigr ) \, P_{UU} \, {{\mathrm{diag}}}\bigl ( \! \sqrt{\pi _U} \, \bigr )^{-1} &{} 0 \\ 0 &{} I\end{pmatrix}. \end{aligned}$$
(121)

Here the square roots are also computed entry-wise and \(M^{-1}\) denotes the matrix inverse of \(M\). Notice that for \(s \in [0,1)\) the right-hand side of Eq. (120) is well-defined, since \(P(s)\) is ergodic by Prop. 7 and thus according to the Perron–Frobenius Theorem has a unique and non-vanishing stationary distribution. However, recall from Prop. 11 that \(\pi (1)\) vanishes on \(U\), so the right-hand side of Eq. (120) is no longer well-defined at \(s = 1\). For this reason we have an alternative expression for \(D(1)\).

Proof

(of Prop. 13 ) For a reversible Markov chain \(P\) the interpolated detailed balance condition in Eq. (113) implies that \(D_{xy}(s) = \sqrt{P_{xy}(s) P_{yx}(s)} = P_{xy}(s) \sqrt{\pi _x(s) / \pi _y(s)}\). This is equivalent to Eq. (120).

At \(s=1\) from Eq. (119) we have:

$$\begin{aligned} D(1) = \sqrt{P(1) \circ P(1)^{\mathsf {T}}} = \sqrt{\begin{pmatrix}P_{UU} \circ P_{UU}^{\mathsf {T}}&{} 0 \\ 0 &{} I\end{pmatrix}} = \begin{pmatrix}\sqrt{P_{UU} \circ P_{UU}^{\mathsf {T}}} &{} 0 \\ 0 &{} I\end{pmatrix}. \end{aligned}$$
(122)

It remains to verify that the upper left block of \(D(1)\) agrees with Eq. (121). Using Eq. (119) we compute that

$$\begin{aligned} D_{UU}(s) = \sqrt{P_{UU} \circ P_{UU}^{\mathsf {T}}} = D_{UU}(0) = {{\mathrm{diag}}}\bigl ( \! \sqrt{\pi _U} \, \bigr ) \, P_{UU} \, {{\mathrm{diag}}}\bigl ( \! \sqrt{\pi _U} \, \bigr )^{-1} \end{aligned}$$
(123)

where the last equality follows from Eq. (120) at \(s = 0\). Together with Eq. (122) this gives us the desired expression in Eq. (121). \(\square \)
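
The following sketch (same example chain; the value of \(s\) is arbitrary) checks Eq. (120) and, anticipating Prop. 14 below, that \(D(s)\) and \(P(s)\) have the same eigenvalues.

```python
import numpy as np

P = np.array([[3, 1, 0],
              [1, 2, 1],
              [0, 1, 3]]) / 4
M = [2]
pi = np.full(3, 1 / 3); p_M = pi[M].sum()
P_abs = P.copy(); P_abs[M, :] = 0.0; P_abs[M, M] = 1.0
pi_abs = np.zeros_like(pi); pi_abs[M] = pi[M]

s = 0.7                                                  # any s in [0, 1)
P_s = (1 - s) * P + s * P_abs
pi_s = ((1 - s) * pi + s * pi_abs) / (1 - s * (1 - p_M))

D_s = np.sqrt(P_s * P_s.T)                               # Eq. (119), entry-wise
d = np.sqrt(pi_s)
assert np.allclose(D_s, np.diag(d) @ P_s @ np.diag(1 / d))          # Eq. (120)
assert np.allclose(np.sort(np.linalg.eigvalsh(D_s)),
                   np.sort(np.linalg.eigvals(P_s).real))            # same spectrum (Prop. 14)
```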

1.2.1 Appendix A.2.1: Spectral Decomposition

Recall from Eq. (119) that \(D(s)\) is real and symmetric. Therefore, its eigenvalues are real and it has an orthonormal set of real eigenvectors. Let

$$\begin{aligned} D(s) = \sum _{i=1}^n \lambda _i(s) |v_i(s)\rangle \langle v_i(s)| \end{aligned}$$
(124)

be the spectral decomposition of \(D(s)\) with eigenvalues \(\lambda _i(s)\) and eigenvectors \(|v_i(s)\rangle \) (see footnote 8). Moreover, let us arrange the eigenvalues so that

$$\begin{aligned} \lambda _1(s) \le \lambda _2(s) \le \dots \le \lambda _n(s). \end{aligned}$$
(125)

From now on we will assume that \(P\) is reversible (and hence ergodic) without explicitly mentioning it. Under this assumption the matrices \(P(s)\) and \(D(s)\) are similar (see Prop. 14 below). This means that \(D(s)\) essentially has the same properties as \(P(s)\), but in addition it also admits a spectral decomposition with orthogonal eigenvectors. This will be very useful in Appendix B.1, where we find the spectral decomposition of the quantum walk operator \(W(s)\) in terms of that of \(D(s)\), and use it to relate properties of \(W(s)\) and \(P(s)\).

Proposition 14

Assume \(P\) is reversible. The matrices \(P(s)\) and \(D(s)\) are similar for any \(s \in [0,1]\) and therefore have the same eigenvalues. In particular, the eigenvalues of \(P(s)\) are real.

Proof

From Eq. (120) we see that the matrices \(D(s)\) and \(P(s)\) are similar for \(s \in [0,1)\). From Eq. (121) we see that \(D(1)\) is similar to \(\tilde{P} := \bigl (\begin{array}{cc}P_{UU}&{}0\\ 0&{}I\end{array}\bigr )\). To verify that \(\tilde{P}\) and \(P(1) = \bigl (\begin{array}{cc}P_{UU} &{} P_{UM} \\ 0 &{} I\end{array}\bigr )\) are similar, let \(M := \bigl (\begin{array}{cc}P_{UU}-I &{} P_{UM} \\ 0 &{} I\end{array}\bigr )\). One can check that \(M P(1) M^{-1} = \tilde{P}\) where \(M^{-1} = \bigl (\begin{array}{cc}(P_{UU}-I)^{-1} &{} -(P_{UU}-I)^{-1} P_{UM} \\ 0 &{} I\end{array}\bigr )\) exists, since \(P_{UU} - I\) is invertible according to Prop. 10. By transitivity, \(D(1)\) is also similar to \(P(1)\). \(\square \)

Proposition 15

The largest eigenvalue of \(D(s)\) is \(1\). It has multiplicity \(1\) when \(s \in [0,1)\) and multiplicity \(m\) when \(s = 1\). In other words,

$$\begin{aligned} \lambda _{n-1}(s) < \lambda _n(s) = 1&, \quad \forall s \in [0,1), \end{aligned}$$
(126)
$$\begin{aligned} \lambda _{n-m}(1) < \lambda _{n-m+1}(1) = \dots = \lambda _n(1) = 1&. \end{aligned}$$
(127)

Proof

Let us argue about \(P(s)\), since it has the same eigenvalues as \(D(s)\) by Prop. 14. From the Perron–Frobenius Theorem we have that \(\forall i{:}\,\lambda _i(s) \le 1\) and \(\lambda _n(s) = 1\). In addition, by Prop. 7 the Markov chain \(P(s)\) is ergodic for any \(s \in [0,1)\), so \(\forall i \ne n{:}\,\lambda _i(s) < 1\). Finally, note by Eq. (121) that for \(s = 1\) eigenvalue \(1\) has multiplicity at least \(m\). Recall from Eq. (123) that \(D_{UU}(1)\) and \(P_{UU}\) are similar. From Prop. 10 we conclude that all eigenvalues of \(P_{UU}\) are strictly less than \(1\). Thus the multiplicity of eigenvalue \(1\) of \(D(1)\) is exactly \(m\). \(\square \)

1.2.2 Appendix A.2.2: Principal Eigenvector

Let us prove an analogue of Prop. 11 for the matrix \(D(s)\).

Proposition 16

\(\sqrt{\pi (s)^{\mathsf {T}}}\) is the unique \((+1)\)-eigenvector of \(D(s)\) for \(s \in [0,1)\). At \(s = 1\) any vector with support only on marked states is a \((+1)\)-eigenvector, including \(\sqrt{\pi (1)^{\mathsf {T}}}\).

Proof

Since \(P(s)\) is row-stochastic, \(P(s) \, 1_X^{\mathsf {T}}= 1_X^{\mathsf {T}}\) where \(1_X\) is the all-ones row vector. Thus we can check that for \(s \in [0,1)\),

$$\begin{aligned} D(s) \sqrt{\pi (s)^{\mathsf {T}}}&= {{\mathrm{diag}}}\Bigl ( \! \sqrt{\pi (s)} \, \Bigr ) \, P(s) \, {{\mathrm{diag}}}\Bigl ( \! \sqrt{\pi (s)} \, \Bigr )^{-1} \sqrt{\pi (s)^{\mathsf {T}}} \end{aligned}$$
(128)
$$\begin{aligned}&= {{\mathrm{diag}}}\Bigl ( \! \sqrt{\pi (s)} \, \Bigr ) \, P(s) \, 1_X^{\mathsf {T}}\end{aligned}$$
(129)
$$\begin{aligned}&= {{\mathrm{diag}}}\Bigl ( \! \sqrt{\pi (s)} \, \Bigr ) \, 1_X^{\mathsf {T}}\end{aligned}$$
(130)
$$\begin{aligned}&= \sqrt{\pi (s)^{\mathsf {T}}}. \end{aligned}$$
(131)

Uniqueness for \(s \in [0,1)\) follows by the uniqueness of \(\pi (s)\) and Prop. 14. For the \(s = 1\) case, notice from Eq. (121) that \(D(1)\) acts trivially on marked elements and recall from Eq. (103) that \(\pi (1) = (0_U\;\,\pi _M) / p_M\). \(\square \)

According to the above Proposition, for any \(s \in [0,1]\) we can choose the principal eigenvector \(|v_n(s)\rangle \) in the spectral decomposition of \(D(s)\) in Eq. (124) to be

$$\begin{aligned} |v_n(s)\rangle := \sqrt{\pi (s)^{\mathsf {T}}}. \end{aligned}$$
(132)

We would like to have an intuitive understanding of how \(|v_n(s)\rangle \) evolves as a function of \(s\). Let us introduce some useful notation that we will also need later.

Let \(0_U\) and \(1_U\) (respectively, \(0_M\) and \(1_M\)) be the all-zeros and all-ones row vectors of dimension \(n-m\) (respectively, \(m\)) whose entries are indexed by elements of \(U\) (respectively, \(M\)). Furthermore, let

$$\begin{aligned} \tilde{\pi }_U&:= \pi _U/(1-p_M),&\tilde{\pi }_M&:= \pi _M/p_M \end{aligned}$$
(133)

be the normalized row vectors describing the stationary distribution \(\pi \) restricted to unmarked and marked states. Let us also define the following unit vectors in \(\mathbb {R}^n\):

$$\begin{aligned} |U\rangle&:= \sqrt{(\tilde{\pi }_U\;\,0_M)^{\mathsf {T}}} = \frac{1}{\sqrt{1-p_M}}\sum _{x \in U} \sqrt{\pi _x} |x\rangle , \end{aligned}$$
(134)
$$\begin{aligned} |M\rangle&:= \sqrt{(0_U\;\,\tilde{\pi }_M)^{\mathsf {T}}} = \frac{1}{\sqrt{p_M}}\sum _{x \in M} \sqrt{\pi _x} |x\rangle . \end{aligned}$$
(135)

Then we can express \(|v_n(s)\rangle \) as a linear combination of \(|U\rangle \) and \(|M\rangle \).

Now we prove Prop. 4, which states that \(|v_n(s)\rangle = \cos \theta (s) |U\rangle + \sin \theta (s) |M\rangle \) for the angle \(\theta (s)\) given by Eq. (21).

Proof

By substituting \(\pi (s)\) from Eq. (103) into Eq. (132) we get

$$\begin{aligned} |v_n(s)\rangle = \sqrt{\pi (s)^{\mathsf {T}}} = \sqrt{\frac{\bigl ((1-s)\pi _U\;\,\pi _M\bigr )^{\mathsf {T}}}{1 - s (1-p_M)}} = \sqrt{\frac{\bigl ((1-s)(1-p_M)\tilde{\pi }_U\;\,p_M\tilde{\pi }_M\bigr )^{\mathsf {T}}}{1 - s (1-p_M)}} \end{aligned}$$
(136)

which is the desired expression. \(\square \)

Thus \(|v_n(s)\rangle \) lies in the two-dimensional subspace \({{\mathrm{span}}}\lbrace |U\rangle , |M\rangle \rbrace \) and is subject to a rotation as we change the parameter \(s\) (see Fig. 5). In particular,

$$\begin{aligned} |v_n(0)\rangle&= \sqrt{1-p_M} |U\rangle + \sqrt{p_M} |M\rangle ,&|v_n(1)\rangle&= |M\rangle . \end{aligned}$$
(137)
Fig. 5

As \(s\) changes from zero to one, the evolution of the principal eigenvector \(|v_n(s)\rangle \) corresponds to a rotation in the two-dimensional subspace \({{\mathrm{span}}}\lbrace |U\rangle , |M\rangle \rbrace \)
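
Numerically, this rotation picture can be verified as follows (a sketch; same chain, marked set our choice): \(\sqrt{\pi (s)^{\mathsf {T}}}\) is fixed by \(D(s)\) and decomposes as \(\cos \theta (s) |U\rangle + \sin \theta (s) |M\rangle \) with \(\sin ^2\theta (s) = p_M/(1-s(1-p_M))\) as in Eq. (21).

```python
import numpy as np

P = np.array([[3, 1, 0],
              [1, 2, 1],
              [0, 1, 3]]) / 4
Mset, Uset = [2], [0, 1]
pi = np.full(3, 1 / 3); p_M = pi[Mset].sum()
P_abs = P.copy(); P_abs[Mset, :] = 0.0; P_abs[Mset, Mset] = 1.0
pi_abs = np.zeros_like(pi); pi_abs[Mset] = pi[Mset]

ket_U = np.zeros(3); ket_U[Uset] = np.sqrt(pi[Uset] / (1 - p_M))   # Eq. (134)
ket_M = np.zeros(3); ket_M[Mset] = np.sqrt(pi[Mset] / p_M)         # Eq. (135)

for s in (0.0, 0.5, 0.9):
    P_s = (1 - s) * P + s * P_abs
    pi_s = ((1 - s) * pi + s * pi_abs) / (1 - s * (1 - p_M))
    v_n = np.sqrt(pi_s)                                            # Eq. (132)
    assert np.allclose(np.sqrt(P_s * P_s.T) @ v_n, v_n)            # Prop. 16
    sin_t = np.sqrt(p_M / (1 - s * (1 - p_M)))                     # Eq. (21)
    cos_t = np.sqrt((1 - s) * (1 - p_M) / (1 - s * (1 - p_M)))
    assert np.allclose(v_n, cos_t * ket_U + sin_t * ket_M)         # Prop. 4 / Eq. (136)
```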

Proposition 17

\(\theta (s)\) and its derivative \(\dot{\theta }(s) := \frac{d}{ds} \theta (s)\) are related as follows:

$$\begin{aligned} 2 \dot{\theta }(s) = \frac{\sin \theta (s) \cos \theta (s)}{1-s}. \end{aligned}$$
(138)

Proof

Notice that

$$\begin{aligned} \frac{d}{ds} \bigl ( \sin ^2 \theta (s) \bigr ) = 2 \dot{\theta }(s) \sin \theta (s) \cos \theta (s). \end{aligned}$$
(139)

On the other hand, according to Eq. (21) we have

$$\begin{aligned} \frac{d}{ds} \bigl ( \sin ^2 \theta (s) \bigr ) = \frac{d}{ds} \biggl ( \frac{p_M}{1-s(1-p_M)} \biggr ) = \frac{p_M (1-p_M)}{(1-s(1-p_M))^2} = \frac{\sin ^2 \theta (s) \cos ^2 \theta (s)}{1-s}. \end{aligned}$$
(140)

By comparing both equations we get the desired result. \(\square \)

1.2.3 Appendix A.2.3: Derivative

Proposition 18

\(D(s)\) and its derivative \(\dot{D}(s) := \frac{d}{ds} D(s)\) are related as follows:

$$\begin{aligned} \dot{D}(s) = \frac{1}{2(1-s)} \bigl \{ \varPi _M, I-D(s) \bigr \} \end{aligned}$$
(141)

where \(\{X,Y\} := XY + YX\) is the anticommutator of \(X\) and \(Y\), and \(\varPi _M := \sum _{x \in M} |x\rangle \langle x|\) is the projector onto the \(m\)-dimensional subspace spanned by marked states \(M\).

Proof

Recall from Eq. (119) that \(D(s) = \sqrt{P(s) \circ P(s)^{\mathsf {T}}}\). The block structure of \(P(s)\) is given in Eq. (99). First, let us derive an expression for \(D_{MM}(s)\), the lower right block of \(D(s)\):

$$\begin{aligned} D_{MM}(s)&= \sqrt{P_{MM}(s) \circ P_{MM}(s)^{\mathsf {T}}} \end{aligned}$$
(142)
$$\begin{aligned}&= \sqrt{\bigl ( (1-s) P_{MM} + s I \bigr ) \circ \bigl ( (1-s) P_{MM}^{\mathsf {T}}+ s I \bigr )}. \end{aligned}$$
(143)

Let us separately consider the diagonal and off-diagonal entries of \(D_{MM}(s)\). For \(x, y \in M\) we have

$$\begin{aligned} D_{xy}(s) = {\left\{ \begin{array}{ll} (1-s) \sqrt{P_{xy} P_{yx}} &{} \text {if } x \ne y, \\ (1-s) P_{xx} + s &{} \text {if } x = y. \end{array}\right. } \end{aligned}$$
(144)

Thus we can write \(D_{MM}(s)\) as

$$\begin{aligned} D_{MM}(s) = (1-s) \sqrt{P_{MM}\circ P^{\mathsf {T}}_{MM}} + s I. \end{aligned}$$
(145)

Expressions for the remaining blocks of \(D(s)\) can be derived in a straightforward way. By putting all blocks together we get

$$\begin{aligned} D(s) = \begin{pmatrix}\sqrt{P_{UU} \circ P^{\mathsf {T}}_{UU}} &{} \sqrt{(1-s) (P_{UM} \circ P^{\mathsf {T}}_{MU})} \\ \sqrt{(1-s) (P_{MU} \circ P^{\mathsf {T}}_{UM})} &{} (1-s) \sqrt{P_{MM}\circ P^{\mathsf {T}}_{MM}} + s I \end{pmatrix}. \end{aligned}$$
(146)

When we take the derivative with respect to \(s\) we find

$$\begin{aligned} \dot{D}(s) = \begin{pmatrix}0 &{} -\frac{1}{2\sqrt{1-s}} \sqrt{P_{UM} \circ P^{\mathsf {T}}_{MU}} \\ -\frac{1}{2\sqrt{1-s}} \sqrt{P_{MU} \circ P^{\mathsf {T}}_{UM}} &{} I - \sqrt{P_{MM}\circ P^{\mathsf {T}}_{MM}}\end{pmatrix}. \end{aligned}$$
(147)

To relate \(\dot{D}(s)\) and the original matrix \(D(s)\), observe that

$$\begin{aligned} \varPi _M D(s) + D(s) \varPi _M = \begin{pmatrix} 0 &{} \sqrt{(1-s) (P_{UM} \circ P^{\mathsf {T}}_{MU})} \\ \sqrt{(1-s) (P_{MU} \circ P^{\mathsf {T}}_{UM})} &{} 2 (1-s) \sqrt{P_{MM}\circ P^{\mathsf {T}}_{MM}} + 2 s I \end{pmatrix} \end{aligned}$$
(148)

which can be seen by overlaying the second column and row of \(D(s)\) given in Eq. (146). When we rescale this by an appropriate constant, we get

$$\begin{aligned} - \frac{1}{2(1-s)} \{\varPi _M, D(s)\} = \begin{pmatrix} 0 &{} -\frac{1}{2\sqrt{1-s}} \sqrt{P_{UM} \circ P^{\mathsf {T}}_{MU}} \\ -\frac{1}{2\sqrt{1-s}} \sqrt{P_{MU} \circ P^{\mathsf {T}}_{UM}} &{} -\sqrt{P_{MM}\circ P^{\mathsf {T}}_{MM}} - \frac{s}{1-s} I\end{pmatrix}. \end{aligned}$$
(149)

This is very similar to the expression for \(\dot{D}(s)\) in Eq. (147), except for a slightly different coefficient for the identity matrix in the lower right corner. We can correct this by adding \(\varPi _M\) with an appropriate constant: \(-\frac{1}{2(1-s)} \{\varPi _M,D(s)\} + \frac{1}{1-s} \varPi _M = \dot{D}(s)\). \(\square \)
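
Eq. (141) can be sanity-checked with a finite-difference derivative (a sketch; same example chain as before, and the step size is an arbitrary choice).

```python
import numpy as np

P = np.array([[3, 1, 0],
              [1, 2, 1],
              [0, 1, 3]]) / 4
Mset = [2]
P_abs = P.copy(); P_abs[Mset, :] = 0.0; P_abs[Mset, Mset] = 1.0

def D(s):
    P_s = (1 - s) * P + s * P_abs
    return np.sqrt(P_s * P_s.T)                           # Eq. (119)

Pi_M = np.zeros((3, 3)); Pi_M[Mset, Mset] = 1.0           # projector onto marked states

s, eps = 0.4, 1e-6
D_dot = (D(s + eps) - D(s - eps)) / (2 * eps)             # central difference
anti = Pi_M @ (np.eye(3) - D(s)) + (np.eye(3) - D(s)) @ Pi_M
assert np.allclose(D_dot, anti / (2 * (1 - s)), atol=1e-6)   # Eq. (141)
```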

1.3 Appendix A.3: Hitting Time

From now on we assume that \(P\) is ergodic and reversible. Recall from Definition 4 that \({{\mathrm{HT}}}(P,M)\) is the expected number of steps it takes for the Random Walk Algorithm to find a marked vertex, starting from the stationary distribution of \(P\) restricted to unmarked vertices. We now prove Prop. 2 which expresses the hitting time of \(P\) in terms of the spectral properties of the discriminant matrix of the absorbing walk \(P'.\)

Proposition 2

The hitting time of Markov chain \(P\) with respect to marked set \(M\) is given by

$$\begin{aligned} {{\mathrm{HT}}}(P,M) =\sum _{k=1}^{n-|M |}\frac{|\langle v_k'|U\rangle |^2}{1-\lambda '_k}, \end{aligned}$$
(9)

where \(\lambda '_k\) are the eigenvalues of the discriminant matrix \(D'=D(P')\) in nondecreasing order, \(|v_k'\rangle \) are the corresponding eigenvectors, and \(|U\rangle \) is the unit vector

$$\begin{aligned} |U\rangle :=\frac{1}{\sqrt{1-p_M}}\sum _{x\notin M}\sqrt{\pi _x}|x\rangle , \end{aligned}$$
(10)

\(p_M\) being the probability to draw a marked vertex from the stationary distribution \(\pi \) of \(P\).


Proof

The expected number of iterations in the Random Walk Algorithm is

$$\begin{aligned} {{\mathrm{HT}}}(P,M)&:= \sum _{l=1}^\infty l \cdot \Pr [\text {need exactly } l \text { steps}] \end{aligned}$$
(152)
$$\begin{aligned}&= \sum _{l=1}^\infty \sum _{t=1}^l \Pr [\text {need exactly } l \text { steps}] \end{aligned}$$
(153)
$$\begin{aligned}&= \sum _{t=1}^\infty \sum _{l=t}^\infty \Pr [\text {need exactly } l \text { steps}] \end{aligned}$$
(154)
$$\begin{aligned}&= \sum _{t=1}^\infty \Pr [\text {need at least } t \text { steps}] \end{aligned}$$
(155)
$$\begin{aligned}&= \sum _{t=0}^\infty \Pr [\text {need more than } t \text { steps}]. \end{aligned}$$
(156)

The region corresponding to the double sums in Eqs. (153) and (154) is shown in Fig. 6.

Fig. 6

Range of variables \(l\) and \(t\) in the double sums of Eqs. (153) and (154)

It remains to determine the probability that no marked vertex is found after \(t\) steps, starting from an unmarked vertex distributed according to \(\tilde{\pi }_U = \pi _U/(1-p_M)\). The distribution of vertices at the first execution of step 3 of the Random Walk Algorithm is \((\tilde{\pi }_U\;\,0_M)\), hence

$$\begin{aligned} \Pr [\text {need more than } t \text { steps}] = (\tilde{\pi }_U\;\,0_M) P'^{\,t} (1_U\;\,0_M)^{\mathsf {T}}. \end{aligned}$$
(157)

Recall from Prop. 8 that \((P'^{\,t})_{UU} = P_{UU}^t\) so we can simplify Eq. (157) as follows:

$$\begin{aligned} \Pr [\text {need more than } t \text { steps}]&= (\tilde{\pi }_U\;\,0_M) P'^{\,t} (1_U\;\,0_M)^{\mathsf {T}}\end{aligned}$$
(158)
$$\begin{aligned}&= \frac{\pi _U}{1-p_M} P_{UU}^t 1_U^{\mathsf {T}}\end{aligned}$$
(159)
$$\begin{aligned}&= \sqrt{\tfrac{\pi _U}{1-p_M}} {{\mathrm{diag}}}\bigl ( \! \sqrt{\pi _U} \, \bigr ) P_{UU}^t {{\mathrm{diag}}}\bigl ( \! \sqrt{\pi _U} \, \bigr )^{-1} \sqrt{\tfrac{\pi _U^{\mathsf {T}}}{1-p_M}} \end{aligned}$$
(160)
$$\begin{aligned}&= \langle U| D'^t |U\rangle , \end{aligned}$$
(161)

where the last equality follows from the expression for the discriminant matrix \(D'=D(1)\) in Eq. (121). By plugging this back in Eq. (156) we get

$$\begin{aligned} {{\mathrm{HT}}}(P,M) = \sum _{t=0}^\infty \langle U| D'^t |U\rangle . \end{aligned}$$
(162)

From the spectral decomposition \(D'=\sum _{k=1}^n\lambda _k'|v_k'\rangle \langle v_k'|\), this may be rewritten as

$$\begin{aligned} {{\mathrm{HT}}}(P,M) = \sum _{t=0}^\infty \sum _{k=1}^n \lambda _k'^t |\langle v_k'|U\rangle |^2. \end{aligned}$$
(163)

Let \(m := |M |\) be the number of marked elements. Recall from Eq. (121) that \(D'=D(1)\) is block-diagonal and acts as identity matrix in the \(m\)-dimensional marked subspace. Furthermore, all \(1\)-eigenvectors of \(D'\) lie in the marked subspace, since eigenvalue \(1\) has multiplicity \(m\) (recall from Prop. 15 that \(\lambda _k' = 1\) when \(k > n - m\)). Therefore, the terms in Eq. (163) with \(k > n - m\) disappear since \(\langle v'_k|U\rangle = 0\), and we get the desired expression by exchanging the two sums in Eq. (163) and using the expansion \((1-x)^{-1} = \sum _{t=0}^\infty x^t\) where \(|x | < 1\). \(\square \)

Note that the two sums in Eq. (163) may not be exchanged before removing the terms with \(k> n - m\): they do not commute in the presence of these extra terms since \(\lambda _k'=1\) for \(k> n - m\) and therefore \(\sum _{t=0}^\infty |\lambda _k' |^t\) diverges. This subtlety had unfortunately been overlooked in [16, 19], and is at the source of the distinction between the hitting time \({{\mathrm{HT}}}(P,M)\) and the extended hitting time \({\hbox {HT}}^{+}(P,M)\) (see Appendix C).
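
The spectral formula of Prop. 2 is easy to test numerically (a sketch; the chain is the one from Eq. (164) with a single marked vertex of our own choosing): Eq. (150) should agree with the direct computation \({{\mathrm{HT}}}(P,M)=\tilde{\pi }_U (I-P_{UU})^{-1} 1_U^{\mathsf {T}}\) obtained from Eqs. (156)–(159).

```python
import numpy as np

P = np.array([[3, 1, 0],
              [1, 2, 1],
              [0, 1, 3]]) / 4
Mset, Uset = [2], [0, 1]
pi = np.full(3, 1 / 3); p_M = pi[Mset].sum()

# Direct computation: HT = tilde_pi_U (I - P_UU)^{-1} 1, cf. Eqs. (156)-(159).
P_UU = P[np.ix_(Uset, Uset)]
tilde_pi_U = pi[Uset] / (1 - p_M)
HT_direct = tilde_pi_U @ np.linalg.inv(np.eye(len(Uset)) - P_UU) @ np.ones(len(Uset))

# Spectral formula, Eq. (150), using D' = D(1).
P_abs = P.copy(); P_abs[Mset, :] = 0.0; P_abs[Mset, Mset] = 1.0
D1 = np.sqrt(P_abs * P_abs.T)
lam, V = np.linalg.eigh(D1)                                # eigenvalues in nondecreasing order
ket_U = np.zeros(3); ket_U[Uset] = np.sqrt(pi[Uset] / (1 - p_M))
n, m = 3, len(Mset)
HT_spectral = sum(abs(V[:, k] @ ket_U) ** 2 / (1 - lam[k]) for k in range(n - m))

assert np.isclose(HT_direct, HT_spectral)                  # both equal 10 for this chain
```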

1.3.1 Appendix A.3.1: Extended Hitting Time

We now prove Prop. 3, which states that the extended hitting time reduces to the usual hitting time in the case of a single marked element, even though they may differ in general.

Proof

The fact that \({\hbox {HT}}^{+}(P,M)={{\mathrm{HT}}}(P,M)\) when \(|M|=1\) follows immediately from the expression for \({{\mathrm{HT}}}(P,M)\) in Prop. 2 and Definition 9.

For the second part, choose

$$\begin{aligned} P = \frac{1}{4} \begin{pmatrix}3 &{} 1 &{} 0 \\ 1 &{} 2 &{} 1 \\ 0 &{} 1 &{}3\end{pmatrix} \end{aligned}$$
(164)

and let the last two elements be marked. If we explicitly compute the eigenvalues and eigenvectors of \(D(s)\), then from Definition 9 we get that \({{\mathrm{HT}}}(s) = \frac{20}{(3-s)^2}\) for \(s \in [0,1)\) and thus \({\hbox {HT}}^{+}(P,M) = 5\). However, \({{\mathrm{HT}}}(P,M) = 4\). One can also use the formulas from Lemma 4 in Appendix C to verify this. \(\square \)
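
The counterexample can be reproduced numerically (a sketch; the interpolated hitting time is computed directly from Definition 9): \({{\mathrm{HT}}}(s)\) matches \(20/(3-s)^2\), its \(s \rightarrow 1\) limit is \(5\), while the ordinary hitting time is \(4\).

```python
import numpy as np

P = np.array([[3, 1, 0],
              [1, 2, 1],
              [0, 1, 3]]) / 4                    # Eq. (164)
Mset, Uset = [1, 2], [0]                         # last two elements marked
pi = np.full(3, 1 / 3); p_M = pi[Mset].sum()
P_abs = P.copy(); P_abs[Mset, :] = 0.0; P_abs[Mset, Mset] = 1.0
ket_U = np.zeros(3); ket_U[Uset] = np.sqrt(pi[Uset] / (1 - p_M))

def HT_interp(s):
    """Interpolated hitting time HT(s) of Definition 9 (sum over k < n)."""
    P_s = (1 - s) * P + s * P_abs
    lam, V = np.linalg.eigh(np.sqrt(P_s * P_s.T))          # ascending; lam[-1] = 1
    return sum(abs(V[:, k] @ ket_U) ** 2 / (1 - lam[k]) for k in range(2))

for s in (0.0, 0.5, 0.999):
    assert np.isclose(HT_interp(s), 20 / (3 - s) ** 2, rtol=1e-3)  # HT(s) = 20/(3-s)^2

# HT^+(P,M) = lim_{s->1} HT(s) = 5, whereas the ordinary hitting time from the
# single unmarked state is HT(P,M) = 1/(1 - P_00) = 4.
HT = float(1.0 / (1.0 - P[0, 0]))
assert np.isclose(HT, 4.0)
```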

This proposition implies that in the case of a single marked element, the quantum search algorithms in Sect. 3 provide a quadratic speedup over the classical hitting time. In the general case of multiple marked elements, these quantum algorithms still solve the search problems but their cost is given in terms of the extended hitting time rather than the standard one.

1.3.2 Appendix A.3.2: Lazy Walk

For technical reasons, in Sect. 3 it is important that all eigenvalues of \(P(s)\) are non-negative. We can guarantee this using a standard trick—replacing the original Markov chain \(P\) with a “lazy” walk \((P+I)/2\) where \(I\) is the \(n \times n\) identity matrix. In fact, we can assume without loss of generality that the original Markov chain already is “lazy”, since this affects the hitting time only by a constant factor, as shown below.

Proposition 20

Let \(P\) be an ergodic and reversible Markov chain. Then for any \(s \in [0,1]\) the eigenvalues of \((P(s)+I)/2\) are between \(0\) and \(1\). Moreover, if the interpolated hitting time of \(P\) is \({{\mathrm{HT}}}(s)\), then the interpolated hitting time of \((P+I)/2\) is \(2 {{\mathrm{HT}}}(s)\).

Proof

Since \(P\) is reversible, so is \(P(s)\) by Prop. 12. Thus the eigenvalues of \(P(s)\) are real by Prop. 14. If \(\lambda _k(s)\) is an eigenvalue of \(P(s)\) then \(\lambda _k(s) \in [-1,1]\) according to Perron–Frobenius Theorem. Thus, the eigenvalues of \((P(s)+I)/2\) satisfy \((\lambda _k(s)+1)/2 \in [0,1]\).

Recall from Prop. 14 that \(P(s)\) and \(D(s)\) are similar. Thus, the discriminant matrix of \((P(s)+I)/2\) is \((D(s)+I)/2\), which has the same eigenvectors as \(D(s)\). By Definition 9, the interpolated hitting time of \((P(s)+I)/2\) is

$$\begin{aligned} \sum _{k=1}^{n-1} \frac{|\langle v_k(s)|U\rangle |^2}{1 - \frac{\lambda _k(s)+1}{2}}. \end{aligned}$$
(165)

Since \(1 - \frac{\lambda _k(s)+1}{2} = \frac{1-\lambda _k(s)}{2}\), the above expression is equal to \(2 {{\mathrm{HT}}}(s)\) as claimed. \(\square \)

1.3.3 Appendix A.3.3: Relationship Between \({{\mathrm{HT}}}(s)\) and \({\hbox {HT}}^{+}(P,M)\)

In this section we express \({{\mathrm{HT}}}(s)\) as a function of \(s\) and \({\hbox {HT}}^{+}(P,M)\), which is the main result of this appendix. The main idea is to relate \(\frac{d}{ds} {{\mathrm{HT}}}(s)\) to \({{\mathrm{HT}}}(s)\). When we solve the resulting differential equation, the boundary condition at \(s = 1\) gives the desired result.

First, note that by Definition 9, \({{\mathrm{HT}}}(s)\) may be written as \({{\mathrm{HT}}}(s) = \langle U| A(s)|U\rangle \), where

$$\begin{aligned} A(s) := \sum _{k = 1}^{n-1} \frac{|v_k(s)\rangle \langle v_k(s)|}{1 - \lambda _k(s)}. \end{aligned}$$
(166)

The following property of \(A(s)\) will be useful on several occasions.

Proposition 21

\(A(s) |M\rangle = - \frac{\cos \theta (s)}{\sin \theta (s)} A(s) |U\rangle \).

Proof

Recall from Prop. 15 that \(|v_n(s)\rangle \) is orthogonal to \(|v_k(s)\rangle \) for all \(k\ne n\). So, we have \(A(s) |v_n(s)\rangle = 0\) by the definition of \(A(s)\). If we substitute \(|v_n(s)\rangle = \cos \theta (s) |U\rangle + \sin \theta (s) |M\rangle \) from Prop. 4 in this equation, we get the desired formula. \(\square \)

Lemma 1

For \(s < 1\), the derivative of \({{\mathrm{HT}}}(s)\) is related to \({{\mathrm{HT}}}(s)\) as

$$\begin{aligned} \frac{d}{ds} {{\mathrm{HT}}}(s) = \frac{2(1-p_M)}{1-s(1-p_M)} {{\mathrm{HT}}}(s) \end{aligned}$$
(167)

where \(p_M\) is the probability to pick a marked state from the stationary distribution \(\pi \) of \(P\).

Proof

Recall that \({{\mathrm{HT}}}(s) = \langle U| A(s) |U\rangle \) where \(A(s)\) may be written as

$$\begin{aligned} A(s) = B(s)^{-1} - \varPi _n(s) \text { where } B(s) := I - D(s) + \varPi _n(s), \, \varPi _n(s) := |v_n(s)\rangle \langle v_n(s)|. \end{aligned}$$
(168)

Recall from Appendix A.2.1 that \(|v_n(s)\rangle \) is the unique \((+1)\)-eigenvector of \(D(s)\) for \(s \in [0,1)\), thus \(B(s)\) is indeed invertible when \(s\) is in this range.

From now on we will not write the dependence on \(s\) explicitly. We will also often use \(\dot{f}(s)\) as a shorthand form of \(\frac{d}{ds} f(s)\). Let us start with

$$\begin{aligned} \frac{d}{ds} {{\mathrm{HT}}}= \langle U| \dot{A}|U\rangle \end{aligned}$$
(169)

and expand \(\dot{A}\) using Eq. (168). To find \(\frac{d}{ds} (B^{-1})\), take the derivative of both sides of \(B^{-1}B= I\) and get \(\frac{d}{ds} (B^{-1}) \cdot B+ B^{-1} \cdot \frac{d}{ds} B= 0\). Thus \(\frac{d}{ds} (B^{-1}) = -B^{-1} \dot{B} B^{-1}\) and

$$\begin{aligned} \dot{A}= - B^{-1} \dot{B}B^{-1} - \dot{\varPi }_n. \end{aligned}$$
(170)

Notice from Eq. (168) that \(\dot{B}= -\dot{D}+ \dot{\varPi }_n\), thus \(\dot{A}= - B^{-1} ( - \dot{D}+ \dot{\varPi }_n) B^{-1} - \dot{\varPi }_n\) and \(\frac{d}{ds} {{\mathrm{HT}}}= h_1 + h_2 + h_3\) where

$$\begin{aligned} h_1&:= \langle U| B^{-1} \dot{D}B^{-1} |U\rangle , \end{aligned}$$
(171)
$$\begin{aligned} h_2&:=-\langle U| B^{-1} \dot{\varPi }_nB^{-1} |U\rangle , \end{aligned}$$
(172)
$$\begin{aligned} h_3&:=-\langle U| \dot{\varPi }_n|U\rangle . \end{aligned}$$
(173)

Let us evaluate each of these terms separately.

To evaluate the first term \(h_1\), we substitute \(\dot{D}= \frac{1}{2(1-s)} \bigl \{ \varPi _M, I - D\bigr \}\) from Prop. 18 and replace \(I - D\) by \(B- \varPi _n\) according to Eq. (168):

$$\begin{aligned} 2(1-s) h_1&= \langle U| B^{-1} \{\varPi _M, B-\varPi _n\} B^{-1} |U\rangle \end{aligned}$$
(174)
$$\begin{aligned}&= \langle U| B^{-1} \bigl (\{\varPi _M,B\}-\{\varPi _M,\varPi _n\}\bigr ) B^{-1} |U\rangle \end{aligned}$$
(175)
$$\begin{aligned}&= \langle U| \{B^{-1},\varPi _M\} |U\rangle - \langle U| B^{-1} \{\varPi _M,\varPi _n\} B^{-1} |U\rangle . \end{aligned}$$
(176)

Recall that \(\varPi _M = \sum _{x \in M} |x\rangle \langle x|\) is the projector onto the marked states. Thus \(\varPi _M |U\rangle = 0\) and the first term vanishes. Note that \(B\) has the same eigenvectors as \(D\). In particular, \(B^{-1} |v_n\rangle = |v_n\rangle \) and thus \(B^{-1} \varPi _n= \varPi _n= \varPi _nB^{-1}\). Using this we can expand the anti-commutator in the second term: \(B^{-1} \{\varPi _M,\varPi _n\} B^{-1} = B^{-1} \varPi _M \varPi _n+ \varPi _n\varPi _M B^{-1}\). Since all three matrices in this expression are real and symmetric and \(|U\rangle \) is also real, both terms of the anti-commutator have the same contribution, so we get

$$\begin{aligned} 2(1-s) h_1 = -2 \langle U| B^{-1} \varPi _M \varPi _n|U\rangle . \end{aligned}$$
(177)

Recall from Prop. 4 that \(|v_n\rangle = \cos \theta |U\rangle + \sin \theta |M\rangle \), so we see that \(\varPi _M \varPi _n|U\rangle = \varPi _M |v_n\rangle \cdot \langle v_n|U\rangle = \sin \theta |M\rangle \cdot \cos \theta \). Moreover, \(B^{-1} = A+ \varPi _n\) according to Eq. (168), so

$$\begin{aligned} 2(1-s) h_1 = -2 \sin \theta \cos \theta \langle U| (A+ \varPi _n) |M\rangle . \end{aligned}$$
(178)

Recall from Prop. 21 that \(\sin \theta \langle U| A |M\rangle = \cos \theta \langle U| A |U\rangle \). To simplify the second term, notice that \(\langle U| \varPi _n|M\rangle = \langle U|v_n\rangle \cdot \langle v_n|M\rangle = \cos \theta \cdot \sin \theta \). When we put this together, we get

$$\begin{aligned} 2(1-s) h_1 = 2 \cos ^2\theta \langle U| A|U\rangle - 2 \sin ^2\theta \cos ^2 \theta \end{aligned}$$
(179)

or simply

$$\begin{aligned} h_1 = \frac{\cos ^2\theta }{1-s} \bigl ( \langle U| A|U\rangle - \sin ^2\theta \bigr ). \end{aligned}$$
(180)

Let us now consider the second term \(h_2 = -\langle U| B^{-1} \dot{\varPi }_nB^{-1} |U\rangle \). First, we compute \(\dot{\varPi }_n= |\dot{v}_n\rangle \langle v_n| + |v_n\rangle \langle \dot{v}_n|\). Using \(B^{-1} |v_n\rangle = |v_n\rangle \) we get \(B^{-1} \dot{\varPi }_nB^{-1} = B^{-1} |\dot{v}_n\rangle \langle v_n| + |v_n\rangle \langle \dot{v}_n| B^{-1}\). Since \(\langle v_n|U\rangle = \cos \theta \) we have

$$\begin{aligned} h_2 = -2 \langle U| B^{-1} |\dot{v}_n\rangle \cos \theta \end{aligned}$$
(181)

where the factor two comes from the fact that all vectors involved are real and matrix \(B^{-1}\) is real and symmetric. Let us compute

$$\begin{aligned} |\dot{v}_n\rangle = \dot{\theta }\bigl ( -\sin \theta |U\rangle + \cos \theta |M\rangle \bigr ). \end{aligned}$$
(182)

Notice that \(\langle v_n|\dot{v}_n\rangle = 0\) and thus \(\varPi _n|\dot{v}_n\rangle = 0\). By substituting \(B^{-1} = A+ \varPi _n\) from Eq. (168) we get

$$\begin{aligned} h_2 = -2 \langle U| A|\dot{v}_n\rangle \cos \theta . \end{aligned}$$
(183)

Next, we substitute \(|\dot{v}_n\rangle \) and get

$$\begin{aligned} h_2 = -2 \dot{\theta }\bigl ( - \sin \theta \langle U| A|U\rangle + \cos \theta \langle U| A|M\rangle \bigr ) \cos \theta . \end{aligned}$$
(184)

Now we use Prop. 21 to substitute \(A|M\rangle \) by \(A|U\rangle \):

$$\begin{aligned} h_2 = -2 \dot{\theta }\biggl (- \sin \theta - \frac{\cos ^2\theta }{\sin \theta } \biggr ) \langle U| A|U\rangle \cos \theta = 2 \dot{\theta }\frac{\cos \theta }{\sin \theta } \langle U| A|U\rangle . \end{aligned}$$
(185)

Finally, we substitute \(2 \dot{\theta }= \frac{\sin \theta \cos \theta }{1-s}\) from Eq. (138) and get

$$\begin{aligned} h_2 = \frac{\cos ^2\theta }{1-s} \langle U| A|U\rangle . \end{aligned}$$
(186)

For the last term \(h_3 = -\langle U| \dot{\varPi }_n|U\rangle \) we observe that \(\langle U|\dot{v}_n\rangle \langle v_n|U\rangle = - \dot{\theta }\sin \theta \cdot \cos \theta \) thus \(h_3 = 2 \dot{\theta }\sin \theta \cos \theta \) where the factor two comes from symmetry. After substituting \(2\dot{\theta }\) from Eq. (138) we get

$$\begin{aligned} h_3 = \frac{\cos ^2\theta }{1-s} \sin ^2\theta . \end{aligned}$$
(187)

When we compare Eqs. (180), (186), and (187) we notice that \(h_2 = h_1 + h_3\). Thus the derivative of the hitting time is \(\frac{d}{ds} {{\mathrm{HT}}}= h_1 + h_2 + h_3 = 2 h_2\). Recall from Definition 9 that \({{\mathrm{HT}}}= \langle U| A|U\rangle \). Thus

$$\begin{aligned} \frac{d}{ds} {{\mathrm{HT}}}(s) = 2 \frac{\cos ^2\theta (s)}{1-s} {{\mathrm{HT}}}(s). \end{aligned}$$
(188)

By substituting \(\cos \theta (s)\) from Eq. (21) we get the desired result. \(\square \)

We now prove Theorem 4, which relates \({{\mathrm{HT}}}(s)\) to \({\hbox {HT}}^{+}(P,M)\).

Proof

When the marked element is unique, \({\hbox {HT}}^{+}(P,M) = {{\mathrm{HT}}}(P,M)\) by Prop. 3. This gives the second part.

We will prove the first part by solving the differential equation obtained in Lemma 1. Consider Eq. (188) and recall from Eq. (138) that \(2 \dot{\theta }= \frac{\sin \theta \cos \theta }{1-s}\). We can rewrite the coefficient in Eq. (188) as

$$\begin{aligned} 2 \frac{\cos ^2\theta }{1-s} = 2 \cdot \frac{\sin \theta \cos \theta }{1-s} \cdot \frac{\cos \theta }{\sin \theta } = 4 \dot{\theta }\frac{\cos \theta }{\sin \theta } = 4 \frac{\frac{d}{ds} (\sin \theta )}{\sin \theta }. \end{aligned}$$
(189)

Then the differential equation becomes

$$\begin{aligned} \frac{\frac{d}{ds}{{\mathrm{HT}}}(s)}{{{\mathrm{HT}}}(s)} = 4 \frac{\frac{d}{ds}(\sin \theta (s))}{\sin \theta (s)}. \end{aligned}$$
(190)

By integrating both sides we get

$$\begin{aligned} \ln \, |{{\mathrm{HT}}}(s) | = 4 \ln \, |\sin \theta (s) | + C \end{aligned}$$
(191)

for some constant \(C\). Recall from Eq. (21) that \(\sin \theta (1) = 1\), so the boundary condition at \(s=1\) gives us \(C = \ln \, |{\hbox {HT}}^{+}(P,M) |\). Since all quantities are non-negative, we can omit the absolute value signs. After exponentiating both sides we get

$$\begin{aligned} {{\mathrm{HT}}}(s) = \sin ^4 \theta (s) \cdot {\hbox {HT}}^{+}(P,M). \end{aligned}$$
(192)

We get the desired expression when we substitute \(\sin \theta (s)\) from Eq. (21). \(\square \)

In Sect. 3 we consider several quantum search algorithms whose running time depends on \({{\mathrm{HT}}}(s)\) for some values of \(s\). Theorem 4 is a crucial ingredient in the analysis of these algorithms: when the marked element is unique, it expresses \({{\mathrm{HT}}}(s)\) as a function of \(s\) and the usual hitting time \({{\mathrm{HT}}}(P,M)\). In particular, we see that \({{\mathrm{HT}}}(s)\) is monotonically increasing as a function of \(s\) and reaches its maximum value at \(s = 1\) (some example plots of \({{\mathrm{HT}}}(s)\) are shown in Fig. 7). This observation is crucial, for example, in the proof of Theorem 7.
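
For a single marked element, Theorem 4 (Eq. (192)) can be tested directly (a sketch; same chain, marked vertex our choice): \({{\mathrm{HT}}}(s)\) computed from Definition 9 should equal \(\sin ^4\theta (s) \, {{\mathrm{HT}}}(P,M)\).

```python
import numpy as np

P = np.array([[3, 1, 0],
              [1, 2, 1],
              [0, 1, 3]]) / 4
Mset, Uset = [2], [0, 1]                         # a single marked element
pi = np.full(3, 1 / 3); p_M = pi[Mset].sum()
P_abs = P.copy(); P_abs[Mset, :] = 0.0; P_abs[Mset, Mset] = 1.0
ket_U = np.zeros(3); ket_U[Uset] = np.sqrt(pi[Uset] / (1 - p_M))

def HT_interp(s):
    P_s = (1 - s) * P + s * P_abs
    lam, V = np.linalg.eigh(np.sqrt(P_s * P_s.T))
    return sum(abs(V[:, k] @ ket_U) ** 2 / (1 - lam[k]) for k in range(2))

# With |M| = 1 we have HT^+(P,M) = HT(P,M) (Prop. 3); compute it classically.
P_UU = P[np.ix_(Uset, Uset)]
HT_plus = (pi[Uset] / (1 - p_M)) @ np.linalg.inv(np.eye(2) - P_UU) @ np.ones(2)

for s in (0.0, 0.3, 0.7, 0.95):
    sin2 = p_M / (1 - s * (1 - p_M))             # sin^2 theta(s), Eq. (21)
    assert np.isclose(HT_interp(s), sin2 ** 2 * HT_plus)    # Eq. (192)
```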

Fig. 7

The interpolated hitting time \({{\mathrm{HT}}}(s)\) as a function of \(s\) for several values of \(p_M\) according to Theorem 4

Appendix B: Spectrum and Implementation of \(W(s)\)

Szegedy [10] proposed a general method to map a random walk to a unitary operator that defines a quantum walk. The first step of Szegedy’s construction is to map the rows of \(P(s)\) to quantum states. Let \(X\) be the state space of \(P(s)\) and \(\mathcal {H}:= {{\mathrm{span}}}\lbrace |x\rangle {:}\,x \in X \rbrace \) be a complex Euclidean space of dimension \(n := |X |\) with basis states labelled by elements of \(X\). For every \(x \in X\) we define the following state in \(\mathcal {H}\):

$$\begin{aligned} |p_x(s)\rangle := \sum _{y \in X} \sqrt{P_{xy}(s)} |y\rangle . \end{aligned}$$
(193)

Notice that these states are correctly normalized, since \(P(s)\) is row-stochastic. Following the approach of Szegedy [10], we define a unitary operator \(V(s)\) acting on \(\mathcal {H}\otimes \mathcal {H}\) as

$$\begin{aligned} V(s) |x, \bar{0}\rangle := |x\rangle |p_x(s)\rangle = \sum _{y \in X} \sqrt{P_{xy}(s)} |x,y\rangle , \end{aligned}$$
(194)

when the second register is in some reference state \(|\bar{0}\rangle \in \mathcal {H}\), and arbitrarily otherwise. It will not be relevant to us how \(V(s)\) is extended from \(\mathcal {H}\otimes |\bar{0}\rangle \) to \(\mathcal {H}\otimes \mathcal {H}\). The only constraint we impose is that \(V(s)\) is continuous as a function of \(s\), which is a reasonable assumption from a physical point of view.

Let Shift be the operation defined in Eq. (2). Let \(\varPi _0 := I \otimes |\bar{0}\rangle \langle \bar{0}|\) be the projector that keeps only the component containing the reference state \(|\bar{0}\rangle \) in the second register and let \({\mathrm {ref}}_{\mathcal {X}} := 2 \varPi _0 - I \otimes I\). The goal of this section is to find the spectral decomposition of the quantum walk operator corresponding to \(P(s)\):

$$\begin{aligned} W(s) := V(s)^{\dagger }\cdot {\textsc {Shift}}\cdot V(s) \cdot {\mathrm {ref}}_{\mathcal {X}} \end{aligned}$$
(195)

where \(V(s) := V(P(s))\). Recall from Appendix A.2.1 that \(\lambda _k(s)\) and \(|v_k(s)\rangle \) are the eigenvalues and eigenvectors of the discriminant matrix \(D(s)\) of \(P(s)\).

1.1 Appendix B.1: Spectral Decomposition of \(W(s)\)

In this section we determine the invariant subspaces of \(W(s)\) and find its eigenvectors and eigenvalues. First, observe that on certain states \({\textsc {Shift}}\) acts as the swap gate.

Proposition 22

If \(P\) is a Markov chain on graph \(G\) then \({\textsc {Shift}}\, |x, p_x(s)\rangle = |p_x(s), x\rangle \), i.e., Shift always succeeds on states of the form \(|x, p_x(s)\rangle \) for any \(x \in X\).

Proof

From Eq. (194) we get

$$\begin{aligned} {\textsc {Shift}}\, |x, p_x(s)\rangle&= {\textsc {Shift}}\, \sum _{y \in X} \sqrt{P_{xy}(s)} |x, y\rangle \end{aligned}$$
(196)
$$\begin{aligned}&= \sum _{y \in X} \sqrt{P_{xy}(s)} |y, x\rangle \end{aligned}$$
(197)
$$\begin{aligned}&= |p_x(s), x\rangle , \end{aligned}$$
(198)

where the second equality holds since \(P(s)\) is a Markov chain on \(G\) and thus \(P_{xy}(s) = 0\) when \(xy\) is not an edge of \(G\). \(\square \)

It follows from Prop. 22 that \({\textsc {Shift}}\) always succeeds when \(V^{\dagger }(s) \, {\textsc {Shift}}\, V(s)\) acts on any state that has \(|\bar{0}\rangle \) in the second register. In fact, we can say even more.

Proposition 23

If \(P\) is a Markov chain on graph \(G\) then the operator \(V^{\dagger }(s) \, {\textsc {Shift}}\, V(s)\) acts as the discriminant matrix \(D(s)\) (see Appendix A.2) when restricted to \(|\bar{0}\rangle \) in the second register, i.e.,

$$\begin{aligned} \varPi _0 V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) \varPi _0 = D(s) \otimes |\bar{0}\rangle \langle \bar{0}|. \end{aligned}$$
(199)

Proof

From Eq. (194) and Prop. 22 we get

$$\begin{aligned} \langle x,\bar{0}| V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) |y,\bar{0}\rangle&= \langle x, p_x(s)| \, {\textsc {Shift}}\, |y, p_y(s)\rangle \end{aligned}$$
(200)
$$\begin{aligned}&= \langle x, p_x(s)|p_y(s), y\rangle \end{aligned}$$
(201)
$$\begin{aligned}&= \langle p_x(s)|y\rangle \langle x|p_y(s)\rangle \end{aligned}$$
(202)
$$\begin{aligned}&= \sqrt{P_{xy}(s) P_{yx}(s)} \end{aligned}$$
(203)
$$\begin{aligned}&= D_{xy}(s) \end{aligned}$$
(204)

where the last equality follows from Eq. (119). \(\square \)

This suggests a close relationship between the operators \(D(s)\) and \(V^{\dagger }(s) \, {\textsc {Shift}}\, V(s)\). We want to extend this and relate the spectral decompositions of \(D(s)\) and \(W(s)\) from Eq. (195). Recall from Eq. (124) the spectral decomposition \(D(s) = \sum _{k=1}^n \lambda _k(s) |v_k(s)\rangle \langle v_k(s)|\).

Definition 11

We define the following subspaces of \(\mathcal {H}\otimes \mathcal {H}\) in terms of the eigenvectors of \(D(s)\) and the operator \(V^{\dagger }(s) \, {\textsc {Shift}}\, V(s)\):

$$\begin{aligned} \mathcal {B}_k(s)&:= {{\mathrm{span}}}\lbrace |v_k(s),\bar{0}\rangle , V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) |v_k(s),\bar{0}\rangle \rbrace , \quad k \in \lbrace 1, \cdots , n-1 \rbrace , \end{aligned}$$
(205)
$$\begin{aligned} \mathcal {B}_n(s)&:= {{\mathrm{span}}}\lbrace |v_n(s),\bar{0}\rangle \rbrace , \end{aligned}$$
(206)
$$\begin{aligned} \mathcal {B}^\perp (s)&:= \textstyle \bigl ( \bigoplus _{k=1}^n \mathcal {B}_k(s) \bigr )^\perp . \end{aligned}$$
(207)

Let us first understand how \(V^{\dagger }(s) \, {\textsc {Shift}}\, V(s)\) acts on the vectors defining the subspaces in Definition 11. Consider \(s < 1\) and \(k < n\). Then \(\lambda _k(s) \ne 1\) by Prop. 15. By unitarity of \(V^{\dagger }(s) \, {\textsc {Shift}}\, V(s)\) and Prop. 23,

$$\begin{aligned} V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) |v_k(s), \bar{0}\rangle = \lambda _k(s) |v_k(s), \bar{0}\rangle + \sqrt{1-\lambda _k(s)^2} |v_k(s), \bar{0}\rangle ^\perp \end{aligned}$$
(208)

for some unit vector \(|v_k(s), \bar{0}\rangle ^\perp \) orthogonal to \(|v_k(s), \bar{0}\rangle \) and lying in the subspace \(\mathcal {B}_k(s)\). In particular, \(\mathcal {B}_k(s)\) is two-dimensional. Note that \(|v_k(s), \bar{0}\rangle ^\perp \) depends on how the operator \(V(s)\), defined in Eq. (194), is extended to the rest of the space \(\mathcal {H}\otimes \mathcal {H}\).

Let us also find how \(V^{\dagger }(s) \, {\textsc {Shift}}\, V(s)\) acts on \(|v_k(s), \bar{0}\rangle ^\perp \). If we apply \(V^{\dagger }(s) {\textsc {Shift}}\, V(s)\) to both sides of Eq. (208), we get

$$\begin{aligned} |v_k(s), \bar{0}\rangle= & {} \lambda _k(s) V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) |v_k(s), \bar{0}\rangle \nonumber \\&+ \sqrt{1-\lambda _k(s)^2} V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) |v_k(s), \bar{0}\rangle ^\perp . \end{aligned}$$
(209)

We regroup the terms and substitute Eq. (208):

$$\begin{aligned}&\sqrt{1-\lambda _k(s)^2} V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) |v_k(s), \bar{0}\rangle ^\perp \end{aligned}$$
(210)
$$\begin{aligned}&= |v_k(s), \bar{0}\rangle - \lambda _k(s) V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) |v_k(s), \bar{0}\rangle \end{aligned}$$
(211)
$$\begin{aligned}&= |v_k(s), \bar{0}\rangle - \lambda _k(s) \Bigl ( \lambda _k(s) |v_k(s), \bar{0}\rangle + \sqrt{1-\lambda _k(s)^2} |v_k(s), \bar{0}\rangle ^\perp \Bigr ). \end{aligned}$$
(212)

After cancellation we get

$$\begin{aligned} V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) |v_k(s), \bar{0}\rangle ^\perp = \sqrt{1-\lambda _k(s)^2} |v_k(s), \bar{0}\rangle - \lambda _k(s) |v_k(s), \bar{0}\rangle ^\perp . \end{aligned}$$
(213)
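In other words, Eqs. (208) and (213) say that, in the basis \(\lbrace |v_k(s), \bar{0}\rangle , |v_k(s), \bar{0}\rangle ^\perp \rbrace \) of \(\mathcal {B}_k(s)\), the operator \(V^{\dagger }(s) \, {\textsc {Shift}}\, V(s)\) acts as the reflection

$$\begin{aligned} \begin{pmatrix}\lambda _k(s) &{} \sqrt{1-\lambda _k(s)^2} \\ \sqrt{1-\lambda _k(s)^2} &{} -\lambda _k(s)\end{pmatrix}, \end{aligned}$$

whose square is the identity, consistent with \({\textsc {Shift}}^2 = I\). Composing it with \({\mathrm {ref}}_{\mathcal {X}}\), which acts as \({{\mathrm{diag}}}(1,-1)\) in this basis (see the proof of Lemma 2 below), turns this reflection into the rotation of Eq. (225).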

Proposition 24

Subspaces \(\mathcal {B}_1(s), \cdots , \mathcal {B}_n(s)\), and \(\mathcal {B}^\perp (s)\) are mutually orthogonal and invariant under \(W(s)\) for all \(s \in [0,1]\).

Proof

Clearly, \(\mathcal {B}^\perp (s)\) is orthogonal to the other subspaces. Vectors \(|v_k(s),\bar{0}\rangle \) are also mutually orthogonal for \(k \in \lbrace 1,\cdots ,n \rbrace \), since they form an orthonormal basis of \(\mathcal {H}\otimes |\bar{0}\rangle \). Finally, note from Prop. 23 that

$$\begin{aligned} \langle v_j(s), \bar{0}| \cdot V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) |v_k(s), \bar{0}\rangle = \langle v_j(s)| D(s) |v_k(s)\rangle = \delta _{jk} \lambda _k(s), \end{aligned}$$
(214)

so \(V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) |v_k(s), \bar{0}\rangle \) is orthogonal to \(|v_j(s), \bar{0}\rangle \) for any \(j \ne k\). Thus all of the above subspaces are mutually orthogonal.

Let us show that these subspaces are invariant under \(W(s)\). From the definition of \(W(s)\) in Eq. (195) we see that it suffices to check the invariance of each subspace under \(V^{\dagger }(s) \, {\textsc {Shift}}\, V(s)\) and \(\varPi _0\) separately.

First, let us argue the invariance under \(V^{\dagger }(s) \, {\textsc {Shift}}\, V(s)\). Since \({\textsc {Shift}}^2\) acts as the identity according to Eq. (2), so does \(\bigl ( V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) \bigr )^2\), and hence \(\mathcal {B}_k(s)\), which is spanned by \(|v_k(s),\bar{0}\rangle \) and its image, is invariant under \(V^{\dagger }(s) \, {\textsc {Shift}}\, V(s)\) for any \(k < n\). Next, \(\mathcal {B}_n(s)\) is invariant, since \(V^{\dagger }(s) \, {\textsc {Shift}}\, V(s)\) acts trivially on \(|v_n(s), \bar{0}\rangle \) by Prop. 23 and unitarity. Finally, \(\mathcal {B}^\perp (s)\) is invariant, since it is the orthogonal complement of invariant subspaces.

Let us now show the invariance under \(\varPi _0\). First, let us argue that

$$\begin{aligned} \langle v_j(s), \bar{0}|v_k(s), \bar{0}\rangle ^\perp = 0, \quad \forall j \in \lbrace 1, \cdots , n \rbrace . \end{aligned}$$
(215)

These vectors lie in subspaces \(\mathcal {B}_j(s)\) and \(\mathcal {B}_k(s)\) that are mutually orthogonal when \(j \ne k\). For \(j = k\) this holds by definition of \(|v_k(s), \bar{0}\rangle ^\perp \). Since \({{\mathrm{span}}}\lbrace |v_k(s), \bar{0}\rangle \rbrace _{k=1}^n = \mathcal {H}\otimes |\bar{0}\rangle \), we conclude that

$$\begin{aligned} \varPi _0 |v_k(s), \bar{0}\rangle ^\perp = 0. \end{aligned}$$
(216)

From Eq. (208) we get

$$\begin{aligned} \varPi _0 V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) |v_k(s), \bar{0}\rangle = \lambda _k(s) |v_k(s), \bar{0}\rangle , \end{aligned}$$
(217)

hence \(\mathcal {B}_k(s)\) is invariant under \(\varPi _0\) for \(k < n\). Next, \(\mathcal {B}_n(s)\) is invariant since \(\varPi _0 |v_n(s), \bar{0}\rangle = |v_n(s), \bar{0}\rangle \). Finally, \(\mathcal {B}^\perp (s)\) is invariant by being the orthogonal complement of invariant subspaces. \(\square \)

We now prove Lemma 2 by Szegedy [10], which provides the spectral decomposition of \(W(s)\) in terms of that of \(D(s)\). Note that we can guarantee that all eigenvalues of \(D(s)\) are in \([0,1]\) via Prop. 20.

Lemma 2

(Szegedy [10]) Let \(\mathcal {B}_k(s)\) for \(k = 1, \cdots , n\) be the subspaces from Definition 11. Assume that all eigenvalues \(\lambda _k(s)\) of \(D(s)\) are between \(0\) and \(1\), and let \(\varphi _k(s) \in [0,\pi ]\) be such that

$$\begin{aligned} \lambda _k(s) = \cos \varphi _k(s). \end{aligned}$$
(218)

Then \(W(s)\) has the following eigenvalues and eigenvectors.

$$\begin{aligned}&\text {On }\mathcal {B}_k(s):&e^{\pm i \varphi _k(s)},&|\varPsi ^\pm _k(s)\rangle&:= \frac{|v_k(s), \bar{0}\rangle \pm i |v_k(s), \bar{0}\rangle ^\perp }{\sqrt{2}}. \end{aligned}$$
(219)
$$\begin{aligned}&\text {On }\mathcal {B}_n(s):&1,&|\varPsi _n(s)\rangle&:= |v_n(s), \bar{0}\rangle . \end{aligned}$$
(220)

In particular, \(\bigoplus _{k=1}^n \mathcal {B}_k(s)\) is the walk space of \(W(s)\) and the remaining eigenvectors of \(W(s)\) lie in the orthogonal complement \(\mathcal {B}^\perp (s)\).

Proof

Recall Eqs. (208) and (213):

$$\begin{aligned} V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) \cdot |v_k(s), \bar{0}\rangle= & {} \lambda _k(s) |v_k(s), \bar{0}\rangle + \sqrt{1-\lambda _k(s)^2} |v_k(s), \bar{0}\rangle ^\perp , \quad \quad \quad \end{aligned}$$
(221)
$$\begin{aligned} V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) \cdot |v_k(s), \bar{0}\rangle ^\perp= & {} \sqrt{1-\lambda _k(s)^2} |v_k(s), \bar{0}\rangle - \lambda _k(s) |v_k(s), \bar{0}\rangle ^\perp .\quad \quad \quad \end{aligned}$$
(222)

Clearly, \({\mathrm {ref}}_{\mathcal {X}} |v_k(s), \bar{0}\rangle = |v_k(s), \bar{0}\rangle \) from Eq. (4). Recall from Eq. (216) that \(\varPi _0 |v_k(s), \bar{0}\rangle ^\perp = 0\), so \({\mathrm {ref}}_{\mathcal {X}} |v_k(s), \bar{0}\rangle ^\perp = - |v_k(s), \bar{0}\rangle ^\perp \). Thus, Eqs. (221) and (222) give us

$$\begin{aligned} W(s) \cdot |v_k(s), \bar{0}\rangle= & {} \lambda _k(s) |v_k(s), \bar{0}\rangle + \sqrt{1-\lambda _k(s)^2} |v_k(s), \bar{0}\rangle ^\perp , \end{aligned}$$
(223)
$$\begin{aligned} W(s) \cdot |v_k(s), \bar{0}\rangle ^\perp= & {} - \sqrt{1-\lambda _k(s)^2} |v_k(s), \bar{0}\rangle + \lambda _k(s) |v_k(s), \bar{0}\rangle ^\perp . \end{aligned}$$
(224)

Recall from Prop. 24 that subspaces \(\mathcal {B}_k(s)\) are mutually orthogonal and invariant under \(W(s)\). In fact, \(W(s)\) acts in the basis \(\lbrace |v_k(s), \bar{0}\rangle , |v_k(s), \bar{0}\rangle ^\perp \rbrace \) of \(\mathcal {B}_k(s)\) as

$$\begin{aligned} \begin{pmatrix}\lambda _k(s) &{} -\sqrt{1-\lambda _k(s)^2} \\ \sqrt{1-\lambda _k(s)^2} &{} \lambda _k(s)\end{pmatrix} = \lambda _k(s) I - i \sqrt{1-\lambda _k(s)^2} \, \sigma _y \end{aligned}$$
(225)

where \(\sigma _y := \bigl (\begin{array}{cc}0 &{} -i \\ i &{} 0\end{array}\bigr )\) is the Pauli \(y\) matrix. The matrix in Eq. (225) has the same eigenvectors as \(\sigma _y\) and its eigenvalues are given by

$$\begin{aligned} \lambda _k(s) \pm i \sqrt{1-\lambda _k(s)^2} = e^{\pm i \varphi _k(s)}. \end{aligned}$$
(226)

This shows Eq. (219). To obtain Eq. (220), we use Prop. 23:

$$\begin{aligned} \langle v_n(s), \bar{0}| \cdot V^{\dagger }(s) \, {\textsc {Shift}}\, V(s) \cdot |v_n(s), \bar{0}\rangle = 1, \end{aligned}$$
(227)

so \(|v_n(s), \bar{0}\rangle \) is an eigenvector of \(W(s)\) with eigenvalue \(1\). \(\square \)
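The correspondence between the spectra of \(D(s)\) and \(W(s)\) can also be checked numerically. The sketch below is not part of the paper; it verifies Eqs. (199) and (219) for a small lazy chain under the assumptions that \(|\bar{0}\rangle = |0\rangle \), that Shift is realized as the full swap of the two registers, and that \(V(s)\) is completed row by row via a QR factorization, as in the earlier sketch.

```python
import numpy as np

# Numerical check of Prop. 23 and Lemma 2 on a small example -- a sketch under
# the stated assumptions, not the paper's implementation. We take |0bar> = |0>,
# realize Shift as the full SWAP of the two registers, and complete V(s) row by
# row via a QR factorization.

rng = np.random.default_rng(0)
n = 4
A = rng.random((n, n)); A = A + A.T                  # symmetric edge weights
P = A / A.sum(axis=1, keepdims=True)                 # reversible random walk
P = 0.5 * (np.eye(n) + P)                            # lazy walk, so D(s) >= 0
M, s = [0], 0.7

Pp = P.copy(); Pp[M, :] = 0.0; Pp[M, M] = 1.0        # absorbing walk P'
Ps = (1 - s) * P + s * Pp                            # interpolation P(s)
Ds = np.sqrt(Ps * Ps.T)                              # discriminant D(s)

def completion(col):                                 # unitary with first column col
    Q, _ = np.linalg.qr(np.column_stack([col, np.eye(len(col))[:, 1:]]))
    return Q * np.sign(Q[:, 0] @ col)

V = np.zeros((n * n, n * n))                         # V(s)|x, 0bar> = |x, p_x(s)>
for x in range(n):
    V[x * n:(x + 1) * n, x * n:(x + 1) * n] = completion(np.sqrt(Ps[x]))

SWAP = np.zeros((n * n, n * n))                      # Shift|x, y> = |y, x>
for x in range(n):
    for y in range(n):
        SWAP[y * n + x, x * n + y] = 1.0

Pi0 = np.kron(np.eye(n), np.diag([1.0] + [0.0] * (n - 1)))
W = V.T @ SWAP @ V @ (2 * Pi0 - np.eye(n * n))       # W(s) as in Eq. (195)

# Prop. 23 / Eq. (199): the |0bar>-block of V(s)^dag Shift V(s) equals D(s).
assert np.allclose((V.T @ SWAP @ V)[::n, ::n], Ds)

# Lemma 2 / Eq. (219): every e^{+-i phi_k} with cos(phi_k) = lambda_k(s) is an
# eigenvalue of W(s).
lam = np.linalg.eigvalsh(Ds)
wev = np.linalg.eigvals(W)
for phi in np.arccos(np.clip(lam, -1.0, 1.0)):
    assert np.min(np.abs(wev - np.exp(1j * phi))) < 1e-8
    assert np.min(np.abs(wev - np.exp(-1j * phi))) < 1e-8
```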

Appendix B.2: Quantum Circuit for \(W(s)\)

Recall that \({\mathtt {Update}}(P)\) can be used to implement the quantum walk operator \(W(P)\). However, we would also like to be able to implement the quantum analogue of \(P(s)\) for any \(s \in [0,1]\). Recall from Eq. (195) that it is given by

$$\begin{aligned} W(s) = V(s)^{\dagger }\, {\textsc {Shift}}\, V(s) \cdot {\mathrm {ref}}_{\mathcal {X}}. \end{aligned}$$
(228)

We know how to implement \({\textsc {Shift}}\) and \({\mathrm {ref}}_{\mathcal {X}}\), so we only need to understand how to implement \(V(s)\) using \(V(P)\). Recall from Eq. (3) that

$$\begin{aligned} V(s) |x\rangle |\bar{0}\rangle = |x\rangle |p_x(s)\rangle = |x\rangle \sum _{y \in X} \sqrt{P_{xy}(s)} |y\rangle . \end{aligned}$$
(229)

In the following lemma, we assume that we know \(P_{xx}\) for every \(x\). This is reasonable since in practice the self-loop probabilities are known; in many cases they are even independent of \(x\). For the rest of this section, we assume that this is not an obstacle (we can assume that one call to \({\mathtt {Update}}(P)\) allows us to learn \(P_{xx}\) for any \(x\)).

Lemma 3

Assuming that \(P_{xx}\) is known for every \(x\), Interpolation \((P,M,s)\) implements \(V(s)\) with quantum complexity \(2 \mathsf {C}+ \mathsf {U}\). Thus, \({\mathtt {Update}}(P(s))\) has quantum complexity of order \(\mathsf {C}+ \mathsf {U}\).

Proof

We explain only how to implement \(V(s)\) using one call to \(V(P)\) and two calls to \({\mathtt {Check}}(M)\). The algorithm for \(V(s)^{\dagger }\) is obtained by running this algorithm in reverse.

Our algorithm uses four registers: \(\mathsf {R}_1\), \(\mathsf {R}_2\), \(\mathsf {R}_3\), \(\mathsf {R}_4\). The first two registers each have underlying state space \(\mathcal {H}\), while the last two each store a qubit in \(\mathbb {C}^2\). Register \(\mathsf {R}_3\) records whether the current vertex \(x\) is marked, while \(\mathsf {R}_4\) is used for performing rotations. Let

$$\begin{aligned} R_\alpha := \begin{pmatrix}\cos \alpha &{} -\sin \alpha \\ \sin \alpha &{} \cos \alpha \end{pmatrix} \end{aligned}$$
(230)

denote the rotation by angle \(\alpha \). An algorithm for implementing the transformation \(|x\rangle |\bar{0}\rangle \mapsto |x\rangle |p_x(s)\rangle \) is given below.

[Algorithm (shown as a figure in the original): implementation of the map \(|x\rangle |\bar{0}\rangle \mapsto |x\rangle |p_x(s)\rangle \); its steps 4 and 5 are referred to in the analysis below.]

Recall from Eq. (98) that \(P(s)\) has the following block structure:

$$\begin{aligned} P(s) = \begin{pmatrix}P_{UU} &{} P_{UM} \\ (1-s)P_{MU} &{} (1-s)P_{MM} + s I\end{pmatrix}. \end{aligned}$$
(231)

We will analyze the cases \(x \in M\) and \(x \in U\) separately. Then the general case will hold by linearity.

If \(x \in U\) then the corresponding row of \(P(s)\) does not depend on \(s\), so \(|p_x(s)\rangle = |p_x\rangle \). In this case step 4 of the above algorithm is never executed and the remaining steps effectively apply \(V(P)\) to produce the correct state.

When \(x \in M\) the algorithm is more involved. Let us analyze only step 4 where most of the work is done. During this step the state gets transformed as follows:

$$\begin{aligned} |x\rangle |\bar{0}\rangle |1\rangle |0\rangle&\mapsto |x\rangle |\bar{0}\rangle |1\rangle (\sqrt{1-s} |0\rangle + \sqrt{s} |1\rangle ) \end{aligned}$$
(232)
$$\begin{aligned}&\mapsto |x\rangle \bigl ( \sqrt{1-s} |p_x\rangle |1\rangle |0\rangle + \sqrt{s} |x\rangle |1\rangle |1\rangle \bigr ) \end{aligned}$$
(233)
$$\begin{aligned}&\mapsto |x\rangle |p_x(s)\rangle |1\rangle |0\rangle . \end{aligned}$$
(234)

The first two transformations are straightforward, so let us focus only on the last one which corresponds to step 4d. The state at the beginning of this step is

$$\begin{aligned}&|x\rangle \bigl ( \sqrt{1-s} |p_x\rangle |1\rangle |0\rangle + \sqrt{s} |x\rangle |1\rangle |1\rangle \bigr ) \end{aligned}$$
(235)
$$\begin{aligned}&= |x\rangle \Biggl [\sqrt{1-s} \sum _{y \in X \setminus \lbrace x \rbrace } \sqrt{P_{xy}} |y\rangle |1\rangle |0\rangle + |x\rangle |1\rangle \Bigl ( \sqrt{(1-s) P_{xx}} |0\rangle + \sqrt{s} |1\rangle \Bigr ) \Biggr ]. \end{aligned}$$
(236)

Note from the second (marked) block row of \(P(s)\) in Eq. (231) that all of its entries have acquired a factor of \(1-s\), except the diagonal ones. Thus in step 4d we perform a rotation only when \(\mathsf {R}_1 = \mathsf {R}_2\). This rotation affects only the second half of the state in Eq. (236) and transfers all amplitude to \(|0\rangle \) in the last register:

$$\begin{aligned} |x\rangle \Biggl [ \sqrt{1-s} \sum _{y \in X \setminus \lbrace x \rbrace } \sqrt{P_{xy}} |y\rangle + \sqrt{(1-s) P_{xx} + s} |x\rangle \Biggr ] |1\rangle |0\rangle = |x\rangle |p_x(s)\rangle |1\rangle |0\rangle . \end{aligned}$$
(237)

Finally, step 5 uncomputes \(\mathsf {R}_3\) to \(|0\rangle \) and the final state is \(|x\rangle |p_x(s)\rangle |0\rangle |0\rangle \) as desired.

\(\square \)
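The rotation performed in step 4d is determined by the self-loop probability alone: it must map the two-dimensional vector \((\sqrt{(1-s) P_{xx}}, \sqrt{s})\) held in register \(\mathsf {R}_4\) onto \((\sqrt{(1-s) P_{xx} + s}, 0)\), which is why Lemma 3 assumes that \(P_{xx}\) is known. A small numerical sketch (the values of \(s\) and \(P_{xx}\) are arbitrary and only for illustration):

```python
import numpy as np

# Sketch of the step-4d bookkeeping for a marked vertex x; the values of s and
# Pxx below are arbitrary. The rotation acts on R4 only when R1 = R2 = x and
# must send sqrt((1-s)*Pxx)|0> + sqrt(s)|1> to sqrt((1-s)*Pxx + s)|0>, which
# reproduces the amplitude of |x> in |p_x(s)> from Eq. (237).
s, Pxx = 0.3, 0.2
v = np.array([np.sqrt((1 - s) * Pxx), np.sqrt(s)])
alpha = -np.arctan2(v[1], v[0])                   # angle that rotates v onto |0>
R = np.array([[np.cos(alpha), -np.sin(alpha)],    # R_alpha as defined in Eq. (230)
              [np.sin(alpha),  np.cos(alpha)]])
assert np.allclose(R @ v, [np.sqrt((1 - s) * Pxx + s), 0.0])
```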

Appendix C: An Explicit Formula for \({\hbox {HT}}^{+}(P,M)\)

Recall from Definition 9 that \({\hbox {HT}}^{+}(P,M)\) is defined as the \(s \rightarrow 1\) limit of \({{\mathrm{HT}}}(s)\). In this appendix we derive an alternative expression for \({\hbox {HT}}^{+}(P,M)\). This formula explicitly expresses \({\hbox {HT}}^{+}(P,M)\) in terms of the Markov chain \(P\) and its stationary distribution \(\pi \), and makes it easier to evaluate this quantity and compare it to the regular hitting time \({{\mathrm{HT}}}(P,M)\).

Let us define unit vectors \(|\tilde{U}\rangle \in \mathbb {R}^{|U |}\) and \(|\tilde{M}\rangle \in \mathbb {R}^{|M |}\) as follows:

$$\begin{aligned} |\tilde{U}\rangle&:= \sqrt{\tilde{\pi }_U^{\mathsf {T}}},&|\tilde{M}\rangle&:= \sqrt{\tilde{\pi }_M^{\mathsf {T}}}, \end{aligned}$$
(238)

where \(\tilde{\pi }_U\) and \(\tilde{\pi }_M\) are defined in Eq. (133) in terms of the stationary distribution \(\pi = (\pi _U \; \pi _M)\) of \(P\). Note from Eq. (134) that \(|\tilde{U}\rangle \) and \(|\tilde{M}\rangle \) are the restrictions of \(|U\rangle \) and \(|M\rangle \) to the unmarked and marked subspaces. Furthermore, let

$$\begin{aligned} \begin{pmatrix}D_{UU} &{} D_{UM} \\ D_{MU} &{} D_{MM}\end{pmatrix} := \begin{pmatrix}\sqrt{P_{UU} \circ P_{UU}^{\mathsf {T}}} &{} \sqrt{P_{UM} \circ P_{MU}^{\mathsf {T}}} \\ \sqrt{P_{MU} \circ P_{UM}^{\mathsf {T}}} &{} \sqrt{P_{MM} \circ P_{MM}^{\mathsf {T}}} \end{pmatrix} \end{aligned}$$
(239)

be the blocks of the discriminant matrix \(D(P)\) of \(P\) (see Definition 6).

Lemma 4

If \({{\mathrm{HT}}}(P,M)\) is the hitting time of \(P\) (see Definition 4) and \({\hbox {HT}}^{+}(P,M)\) is the extended hitting time (see Definition 9) then

$$\begin{aligned} {{\mathrm{HT}}}(P,M)&= \langle \tilde{U}| (I - D_{UU})^{-1} |\tilde{U}\rangle , \end{aligned}$$
(240)
$$\begin{aligned} {\hbox {HT}}^{+}(P,M)&= \langle \tilde{U}| (I - D_{UU} - S)^{-1} |\tilde{U}\rangle , \end{aligned}$$
(241)

where

$$\begin{aligned} S\! :=\! D_{UM} \Biggl [ (I - D_{MM})^{-1} - \frac{(I \!-\! D_{MM})^{-1} |\tilde{M}\rangle \langle \tilde{M}| (I - D_{MM})^{-1}}{\langle \tilde{M}| (I \!-\! D_{MM})^{-1} |\tilde{M}\rangle } \Biggr ] D_{MU}. \end{aligned}$$
(242)

Vectors \(|\tilde{U}\rangle \) and \(|\tilde{M}\rangle \) are defined in Eq. (238) and matrices \(D_{UU}, D_{UM}, D_{MU}, D_{MM}\) in Eq. (239).

Proof

Let us first derive Eq. (240). Recall that \({{\mathrm{HT}}}(P,M)\) can be written as

$$\begin{aligned} {{\mathrm{HT}}}(P,M) = \sum _{t=0}^\infty \langle U| D(1)^t |U\rangle , \end{aligned}$$
(243)

where \(D(1)\) is the discriminant matrix of \(P(1) = P'\). Recall from Eq. (122) that

$$\begin{aligned} D(1) = \begin{pmatrix}\sqrt{P_{UU} \circ P_{UU}^{\mathsf {T}}} &{} 0 \\ 0 &{} I\end{pmatrix}. \end{aligned}$$
(244)

Since \(D(1)\) is block diagonal and \(|U\rangle \) is supported only on the unmarked states \(U\), we can restrict each term in Eq. (243) to the unmarked subspace and bring the summation inside:

$$\begin{aligned} {{\mathrm{HT}}}(P,M) = \langle \tilde{U}| \sum _{t=0}^\infty D(1)_{UU}^t |\tilde{U}\rangle . \end{aligned}$$
(245)

Recall from Eq. (146) that the \(UU\) block of \(D(s)\) is independent of \(s\), hence \(D(1)_{UU} = D_{UU}\), the \(UU\) block of \(D(0)\) given in Eq. (239). Recall from Prop. 10 that \(I - P_{UU}\) is invertible. Furthermore, due to Prop. 9 we can write \((I - P_{UU})^{-1} = \sum _{t=0}^\infty P_{UU}^t\). As \(D_{UU}\) and \(P_{UU}\) are similar according to Eq. (123), \(I - D_{UU}\) is also invertible and \((I - D_{UU})^{-1} = \sum _{t=0}^\infty D_{UU}^t\). If we substitute this in Eq. (245), we get Eq. (240) and thus prove the first half of the lemma.

For the second half, recall from Eq. (16) that for \(s \in [0,1)\),

$$\begin{aligned} {{\mathrm{HT}}}(s) = \sum _{k=1}^{n-1} \frac{|\langle v_k(s)|U\rangle |^2}{1-\lambda _k(s)}, \end{aligned}$$
(246)

where \(\lambda _k(s)\) and \(|v_k(s)\rangle \) are the eigenvalues and eigenvectors of the discriminant matrix \(D(s)\). By Prop. 15, for any \(s \in [0,1)\), \(\lambda _n(s) = 1\) and \(\lambda _k(s) < 1\) for all \(k \ne n\). Let \(\varPi _n(s) := |v_n(s)\rangle \langle v_n(s)|\), where \(|v_n(s)\rangle \) is given by Prop. 4:

$$\begin{aligned} |v_n(s)\rangle = \cos \theta (s) |U\rangle + \sin \theta (s) |M\rangle . \end{aligned}$$
(247)

With this in mind, we can rewrite Eq. (246) as follows:

$$\begin{aligned} {{\mathrm{HT}}}(s)&= \langle U| \Biggl [ \sum _{k=1}^{n-1} \sum _{t=0}^{\infty } \lambda _k^t(s) |v_k(s)\rangle \langle v_k(s)| \Biggr ] |U\rangle \end{aligned}$$
(248)
$$\begin{aligned}&= \langle U| \sum _{t=0}^\infty \bigl ( D^t(s) - \varPi _n(s) \bigr ) |U\rangle \end{aligned}$$
(249)
$$\begin{aligned}&= \langle U| \Biggl [ I + \sum _{t=1}^\infty \bigl ( D(s) - \varPi _n(s) \bigr )^t - \varPi _n(s) \Biggr ] |U\rangle \end{aligned}$$
(250)
$$\begin{aligned}&= \langle U| \Bigl [ \bigl ( I - D(s) + \varPi _n(s) \bigr )^{-1} - \varPi _n(s) \Bigr ] |U\rangle \end{aligned}$$
(251)
$$\begin{aligned}&= \langle U| \bigl ( I - D(s) + \varPi _n(s) \bigr )^{-1} |U\rangle - \cos ^2 \theta (s), \end{aligned}$$
(252)

where the last equality follows from Eq. (247).

Our goal is to compute \(\lim _{s \rightarrow 1} {{\mathrm{HT}}}(s)\). Recall from Prop. 15 that \(D(1)\) has eigenvalue \(1\) with multiplicity \(|M |\). Thus, if \(|M | > 1\), the matrix \(I - D(s) + \varPi _n(s)\) in Eq. (252) is not invertible at \(s = 1\), hence we cannot compute the limit by simply substituting \(s = 1\). Let us rewrite this expression before we take the limit.

Note that the discriminant matrix \(D(s)\) at \(s = 0\) agrees with \(D(P)\). Using Eq. (146) that relates \(D(s)\) and \(D(P)\), we can write

$$\begin{aligned} I - D(s) = \begin{pmatrix} I - D_{UU} &{} -\sqrt{1-s} D_{UM} \\ - \sqrt{1-s} D_{MU} &{} (1-s) (I - D_{MM}) \end{pmatrix}, \end{aligned}$$
(253)

where \(\bigl (\begin{array}{cc}D_{UU} &{} D_{UM} \\ D_{MU} &{} D_{MM}\end{array}\bigr )\) are the blocks of \(D(P)\) given in Eq. (239). Next, note that

$$\begin{aligned} |v_n(s)\rangle = \begin{pmatrix}\cos \theta (s) |\tilde{U}\rangle \\ \sin \theta (s) |\tilde{M}\rangle \end{pmatrix}, \end{aligned}$$
(254)

so we can write

$$\begin{aligned} \varPi _n(s)= \begin{pmatrix} \cos ^2 \theta (s) |\tilde{U}\rangle \langle \tilde{U}| &{}\cos \theta (s) \sin \theta (s) |\tilde{U}\rangle \langle \tilde{M}| \\ \cos \theta (s) \sin \theta (s) |\tilde{M}\rangle \langle \tilde{U}| &{} \sin ^2 \theta (s) |\tilde{M}\rangle \langle \tilde{M}| \end{pmatrix}. \end{aligned}$$
(255)

Putting the two equations together, we can write \(I - D(s) + \varPi _n(s)\) as

$$\begin{aligned} \begin{pmatrix} I \!-\! D_{UU} + \cos ^2 \theta (s) |\tilde{U}\rangle \langle \tilde{U}| &{} \!-\! \sqrt{1\!-\!s} D_{UM} + \cos \theta (s) \sin \theta (s) |\tilde{U}\rangle \langle \tilde{M}| \\ \!-\! \sqrt{1\!-\!s} D_{MU} + \cos \theta (s) \sin \theta (s) |\tilde{M}\rangle \langle \tilde{U}| &{} (1\!-\!s)(I \!-\! D_{MM}) + \sin ^2 \theta (s) |\tilde{M}\rangle \langle \tilde{M}|\end{pmatrix}. \end{aligned}$$
(256)

In Eq. (252) we need only the upper left block of the inverse of the above matrix, since \(|U\rangle \) is non-zero only on the \(U\) block. According to the block-wise inversion formula,

$$\begin{aligned} \begin{pmatrix}A &{} B \\ B^{\mathsf {T}}&{} C\end{pmatrix}^{-1} = \begin{pmatrix}(A - B C^{-1} B^{\mathsf {T}})^{-1} &{} \ldots \quad \\ \ldots \quad &{} \ldots \quad \end{pmatrix}. \end{aligned}$$
(257)

Thus, Eq. (252) becomes

$$\begin{aligned} {{\mathrm{HT}}}(s) = \langle \tilde{U}| \bigl ( A(s) - B(s) C(s)^{-1} B(s)^{\mathsf {T}}\bigr )^{-1} |\tilde{U}\rangle - \cos ^2 \theta (s), \end{aligned}$$
(258)

where \(A(s)\), \(B(s)\), and \(C(s)\) are the blocks in Eq. (256). We can further rewrite this as follows:

$$\begin{aligned} {{\mathrm{HT}}}(s) = \langle \tilde{U}| \biggl [ A(s) - \frac{B(s)}{\sqrt{1-s}} \biggl (\frac{C(s)}{1-s}\biggr )^{-1} \frac{B(s)^{\mathsf {T}}}{\sqrt{1-s}} \biggr ]^{-1} |\tilde{U}\rangle - \cos ^2 \theta (s), \end{aligned}$$
(259)

where the extra factors will allow us to deal with the fact that \(C(1)\) is singular.

Now we can compute \(\lim _{s \rightarrow 1} {{\mathrm{HT}}}(s)\) for each piece of Eq. (259) separately. Note from Eq. (21) that \(\cos ^2 \theta (s)\) vanishes as \(s \rightarrow 1\). Similarly, we also get that

$$\begin{aligned} A'&:= \lim _{s \rightarrow 1} A(s) = I - D_{UU}, \end{aligned}$$
(260)
$$\begin{aligned} B'&:= \lim _{s \rightarrow 1} \frac{B(s)}{\sqrt{1-s}} = -D_{UM} + \sqrt{\frac{1-p_M}{p_M}} |\tilde{U}\rangle \langle \tilde{M}|. \end{aligned}$$
(261)

Finally, notice that \(\lim _{s \rightarrow 1} C(s)/(1-s)\) does not exist. Nevertheless, the limit of the inverse exists (and is in fact a singular matrix), and we can compute it using the Sherman–Morrison formula:

$$\begin{aligned} \bigl ( X + |\psi \rangle \langle \psi | \bigr )^{-1} = X^{-1} - \frac{X^{-1} |\psi \rangle \langle \psi | X^{-1}}{1 + \langle \psi | X^{-1} |\psi \rangle }. \end{aligned}$$
(262)

For \(s < 1\), we get

$$\begin{aligned} \biggl (\frac{C(s)}{1-s}\biggr )^{-1}&= \biggl ( I - D_{MM} + \frac{\sin ^2 \theta (s)}{1-s} |\tilde{M}\rangle \langle \tilde{M}| \biggr )^{-1} \end{aligned}$$
(263)
$$\begin{aligned}&= (I - D_{MM})^{-1} - \frac{(I - D_{MM})^{-1} |\tilde{M}\rangle \langle \tilde{M}| (I - D_{MM})^{-1}}{\frac{1-s}{\sin ^2 \theta (s)} + \langle \tilde{M}| (I - D_{MM})^{-1} |\tilde{M}\rangle }, \end{aligned}$$
(264)

so the limit is

$$\begin{aligned} C':= \lim _{s \rightarrow 1} \biggl (\frac{C(s)}{1-s}\biggr )^{-1} = (I - D_{MM})^{-1} - \frac{(I - D_{MM})^{-1} |\tilde{M}\rangle \langle \tilde{M}| (I - D_{MM})^{-1}}{\langle \tilde{M}| (I - D_{MM})^{-1} |\tilde{M}\rangle }. \end{aligned}$$
(265)

Let \(S(s) := B(s) C(s)^{-1} B(s)^{\mathsf {T}}\) be the matrix that appears in Eq. (258). Since it also appears in Eq. (259), we find that

$$\begin{aligned} S' := \lim _{s \rightarrow 1} S(s) = B' C' {B'}^{\mathsf {T}} \end{aligned}$$
(266)

by substituting \(B'\) and \(C'\) from Eqs. (261) and (265), respectively. Note from Eq. (265) that \(C' |\tilde{M}\rangle = 0\), so Eq. (266) simplifies to

$$\begin{aligned} S' = D_{UM} C' D_{MU} \end{aligned}$$
(267)

after we substitute \(B'\) from Eq. (261). Note that \(S'\) agrees with Eq. (242) and that

$$\begin{aligned} {\hbox {HT}}^{+}(P,M) = \lim _{s \rightarrow 1} {{\mathrm{HT}}}(s) = \langle \tilde{U}| (A' - S')^{-1} |\tilde{U}\rangle , \end{aligned}$$
(268)

where \(A'\) and \(S'\) are given in Eqs. (260) and (267), respectively. \(\square \)
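Lemma 4 makes \({\hbox {HT}}^{+}(P,M)\) straightforward to evaluate. The sketch below is not from the paper (the reversible test chain is arbitrary): it computes \({{\mathrm{HT}}}(P,M)\) and \({\hbox {HT}}^{+}(P,M)\) from Eqs. (240)–(242), checks that the two coincide when \(|M |=1\) (cf. footnote 2), and compares \({\hbox {HT}}^{+}(P,M)\) with \({{\mathrm{HT}}}(s)\) from Eq. (246) evaluated close to \(s=1\).

```python
import numpy as np

# A numerical sketch of Lemma 4 (not from the paper; the test chain is an
# arbitrary small reversible chain). HT and HT+ are evaluated directly from
# Eqs. (240)-(242), and HT+ is compared with HT(s) from Eq. (246) near s = 1.

rng = np.random.default_rng(1)
n = 6
A = rng.random((n, n)); A = A + A.T                  # symmetric edge weights
P = A / A.sum(axis=1, keepdims=True)                 # reversible random walk
pi = A.sum(axis=1) / A.sum()                         # its stationary distribution

def blocks(M):
    U = [x for x in range(n) if x not in M]
    D = np.sqrt(P * P.T)                             # discriminant of P, Eq. (239)
    pM = pi[M].sum()
    u = np.sqrt(pi[U] / (1 - pM))                    # |U~>, Eq. (238)
    m = np.sqrt(pi[M] / pM)                          # |M~>, Eq. (238)
    ix = np.ix_
    return D[ix(U, U)], D[ix(U, M)], D[ix(M, U)], D[ix(M, M)], u, m

def HT(M):                                           # Eq. (240)
    Duu, _, _, _, u, _ = blocks(M)
    return u @ np.linalg.solve(np.eye(len(u)) - Duu, u)

def HTplus(M):                                       # Eqs. (241) and (242)
    Duu, Dum, Dmu, Dmm, u, m = blocks(M)
    G = np.linalg.inv(np.eye(len(m)) - Dmm)
    S = Dum @ (G - np.outer(G @ m, m @ G) / (m @ G @ m)) @ Dmu
    return u @ np.linalg.solve(np.eye(len(u)) - Duu - S, u)

def HT_of_s(M, s):                                   # Eq. (246)
    Pp = P.copy(); Pp[M, :] = 0.0; Pp[M, M] = 1.0    # absorbing walk P'
    Ps = (1 - s) * P + s * Pp                        # interpolation P(s)
    lam, v = np.linalg.eigh(np.sqrt(Ps * Ps.T))      # spectrum of D(s), ascending
    U = [x for x in range(n) if x not in M]
    Uvec = np.zeros(n); Uvec[U] = np.sqrt(pi[U] / (1 - pi[M].sum()))
    return sum(abs(v[:, k] @ Uvec) ** 2 / (1 - lam[k]) for k in range(n - 1))

assert np.isclose(HT([0]), HTplus([0]))              # for |M| = 1, HT+ = HT
print(HTplus([0, 1]), HT_of_s([0, 1], s=1 - 1e-6))   # should nearly coincide
```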

Cite this article

Krovi, H., Magniez, F., Ozols, M. et al. Quantum Walks Can Find a Marked Element on Any Graph. Algorithmica 74, 851–907 (2016). https://doi.org/10.1007/s00453-015-9979-8
