Quantum-accessible reinforcement learning beyond strictly epochal environments

In recent years, quantum-enhanced machine learning has emerged as a particularly fruitful application of quantum algorithms, covering aspects of supervised, unsupervised and reinforcement learning. Reinforcement learning offers numerous options for applying quantum theory and is, from a quantum perspective, arguably the least explored of the three. Here, an agent explores an environment and tries to find a behavior optimizing some figure of merit. Some of the first approaches investigated settings where this exploration can be sped up by considering quantum analogs of classical environments, which can then be queried in superposition. If the environments have a strict periodic structure in time (i.e. are strictly episodic), such environments can be effectively converted to conventional oracles encountered in quantum information. However, in general environments, we obtain scenarios that generalize standard oracle tasks. In this work, we consider one such generalization, where the environment is not strictly episodic, which is mapped to an oracle identification setting with a changing oracle. We analyze this case and show that standard amplitude-amplification techniques can, with minor modifications, still be applied to achieve quadratic speed-ups. In addition, we prove that an algorithm based on Grover iterations is optimal for oracle identification even if the oracle changes over time in a way that the "rewarded space" is monotonically increasing. This result constitutes one of the first generalizations of quantum-accessible reinforcement learning.


Introduction
In the last few years, there has been much interest in combining quantum computing and machine learning algorithms. In the domain of quantum-enhanced machine learning, the objective is to utilize quantum effects to speed up or otherwise enhance the learning performance. The possibilities for this are numerous. E.g. variational circuits can be used as a type of "quantum neural network" (more precisely, as function approximators which cannot be evaluated efficiently on a conventional computer), which can be trained as a supervised learning (classification) (Havlícek et al. 2019; Farhi and Neven 2018) or unsupervised learning model (generative models) (Aimeur et al. 2013). There also exist various approaches where algorithmic bottlenecks of classical algorithms are sped up, via annealing methods (Farhi and Neven 2018), quantum linear-algebraic methods (Harrow et al. 2009), or via sampling enhancements (Dunjko et al. 2016). If the data is assumed to be accessible in a quantum form ("quantum database"), then anything from polynomial to exponential speed-ups of classical algorithms may be possible (Biamonte et al. 2017; Dunjko and Briegel 2018; Chia et al. 2019; Gyurik et al. 2020).
Modern reinforcement learning (RL), an interactive mode of learning, combines aspects of supervised and unsupervised learning, and consequently allows a broad spectrum of possibilities how quantum effects could help.
In RL (Sutton and Barto 1998; Russell and Norvig 2003; Briegel and De las Cuevas 2012), we talk about a learning agent which interacts with an environment by performing actions and perceiving the environmental states, and which has to learn a "correct behavior"-the optimal policy-by means of a rewarding feedback signal. Unlike a stationary database, the environment has its own internal memory (a state), which the agent alters with its actions.
In quantum-enhanced RL, we can identify two basic scenarios: (i) where quantum effects can be used to speed up the internal processing (Paparo et al. 2014; Jerbi et al. 2019), and the interaction with the environment is classical, and (ii) where the interaction with the environment (and the environment itself) is quantum. The first framework for such "quantum-accessible" reinforcement learning modeled the environment as a sequence of quantum channels acting on a communication register and the internal environmental memory-this constitutes a direct generalization of an unknown environment as a map-with-memory (other options are discussed shortly). In this case, the action of the environment cannot be described as a unitary mapping without considering the entire memory of the environment. In general, this memory is inaccessible to the agent. However, as discussed in Dunjko et al. (2016), under the assumption that the environmental memory can be purged or uncomputed in pre-defined periods, such blocks of interaction do become a (time-independent) unitary and amenable to oracle computation techniques. For instance, in Dunjko et al. (2016), it was shown that the task of identifying a sequence of actions which leads to a first reward (a necessary step before any true learning can commence) can be sped up using quantum search techniques, and in Dunjko et al. (2017), it was shown how certain environments encode more complex oracles-e.g. Simon's oracle and Recursive Fourier Sampling oracles-leading to exponential speed-ups over classical methods.
For the above techniques to work, however, the purging of all environmental memory is necessary to achieve time-independent unitary mappings. Real task environments are typically not (strictly) episodic, motivating the question of what can be achieved in these more general cases. Here, we perform a first step towards generalization by considering environments where the length of an episode can change, but this change is signaled and an estimate of the episode lengths is known. This RL scenario is well-motivated and, fortunately, maps to an oracle identification problem where the oracles change. While this generalizes standard oracular settings, it is still sufficiently simple that we can employ standard techniques (essentially amplitude amplification) and prove the optimality of our strategies for oracle identification problems with changing oracles and increasing rewarded space.
The paper is organized as follows. We first summarize the basic scenario of quantum-accessible reinforcement learning in Section 2 and discuss the mappings from constrained (episodic) RL scenarios to oracle identification. We show how this must be generalized for more involved environments, prompting our definition of the "changing oracle" problem stemming from certain classes of RL environments. In Section 3, we focus on the changing oracle problem, analyze the main regimes, and provide an upper bound for the average success probability for the case of a monotonically increasing rewarded space in Section 3.1. We prove in Section 3.2 that performing consecutive Grover iterations saturates this bound. We then discuss the more general case of only overlapping rewarded spaces in Section 3.3. In Section 3.4, we provide a numerical example demonstrating the possible advantages of consecutive Grover iterations with changing oracles. We conclude by summarizing our results, discussing possible extensions, and noting the implications of our results on the changing oracle problem for QRL in Section 4.
Quantum-accessible reinforcement learning

RL can be described as an interaction of a learning agent A with a task environment E via the exchange of messages from a discrete set, which we call actions A = {a_j} (performed by the agent) and percepts S = {s_j} (issued by the environment). In addition, the environment also issues a scalar reward r ∈ R = {r_j}, which informs the agent about the quality of its previous actions and can be defined as part of the percepts. The goal of the agent is to receive as much reward as possible in the long term.
In the theory of RL, the most studied environments are those exactly describable by a Markov decision process (MDP). An MDP is specified by a transition mapping T and a reward function R. The transition mapping T specifies the probability of the environment transiting from state s to s′, provided the agent performed the action a, whereas the reward function assigns a reward value to a given action of the agent in a given environmental state.
Note that in standard RL, the agent does not have direct access to the mapping T; rather, to learn it, it must explore, i.e. act in the environment which is governed by T. On the other hand, in dynamic programming problems (intimately related to RL), one often assumes direct access to the functions T and R. This distinction leads to two different approaches to how the agent-environment interaction can be quantized.
In recent works (Cornelissen 2018; Neukart et al. 2018; Levit et al. 2017), coherent access to the transition mapping T is assumed; in this case, quantum lower bounds for finding the optimal policy have been found (Ronagh 2019).
In this paper, we consider the other class of generalization, proposed first in Dunjko et al. (2016). Here, the agent-environment interaction is modeled as a communication between an agent (A) and the environment (E) over a joint communication channel (C), thus in a tripartite Hilbert space H_E ⊗ H_C ⊗ H_A, denoting the memory of the environment, the communication channel, and the memory of the agent. The two parties A and E interact with each other by alternately performing completely positive trace preserving (CPTP) maps on their own memory and the communication channel. Different AE combinations are defined as classically equivalent if their interactions are equivalent under constant measurements of C in the computational basis. For classical luck-favoring AE settings with a deterministic strictly epochal environment E, it is possible to create a classically equivalent quantum version A_qE_q which outperforms AE in terms of a given figure of merit, as shown in Dunjko et al. (2016).

Strictly epochal environments
This can be achieved by slightly modifying the maps so as to purge the environmental memory, which couples to the overall interaction and prevents a unitary time evolution of the agent's memory. A detailed discussion of this procedure and the necessary conditions on the setting is given in Dunjko et al. (2016). However, for our setting, it is sufficient that the interaction of the agent with the environment can be effectively described as oracle queries. Specifically, if environments are strictly episodic, meaning that after some fixed number of steps the setting is reset to an initial condition, then the environmental memory can be uncomputed or released to the agent at the end of an epoch. With this modification (called memory scavenging and hijacking in earlier works), blocks of interactions effectively act as one time-independent unitary O, which can be queried using standard quantum techniques to obtain an advantage. We encode the different actions the agent can perform, e.g. a ∈ {0, 1}, into orthogonal quantum states, i.e. as {|0⟩, |1⟩}. As a result, the complete sequence of actions a = a_1, ···, a_M the agent executes during one epoch of length M is encoded in the product state |a⟩ = |a_1⟩ ⊗ |a_2⟩ ⊗ ··· ⊗ |a_M⟩. For strictly epochal environments, it is possible to re-express the effect of the environment by a unitary oracle

O|a⟩ = (−1)^{f(a)} |a⟩, with f(a) = 1 iff a ∈ W.

Here, W denotes the rewarded space containing all sequences of actions of length M which obtain a reward r(a) larger than a predefined limit. A learning agent can use such an oraculized environment to find rewarded action sequences faster. For this purpose, the agent prepares an equal superposition state |ψ⟩ = Σ_a |a⟩/√N of all possible action sequences, with typically N = |A|^M. Then, it interacts with the environment and thus effectively queries the oracle O. Afterwards, it performs a reflection over the initial state |ψ⟩. In this way, it can perform amplitude amplification by applying consecutive Grover iterations (Grover 1997; 1998; Brassard et al. 2000)

G_ψ |ψ⟩, with G_ψ = (2|ψ⟩⟨ψ| − 1) O.

The agent can increase the probability to find a first rewarded sequence by performing several rounds of amplitude amplification. The first rewarded action sequence found is in general not the optimal one. However, it generally provides information which can help to find an optimal strategy. A quantum-enhanced learning agent can use the found rewarded sequence of actions to learn via classical policy updates. Thus, quantum-enhanced reinforcement learning combines quantum search and classical reinforcement learning, as demonstrated experimentally in Saggio et al. (2021). An agent with access to such an oracle thus finds the first rewarded sequence faster on average, which in so-called luck-favoring settings (Dunjko et al. 2016) also increases the probability to be rewarded in the future. This approach, leading to a quadratic speed-up in exploration, can be applied to many settings, and even super-polynomial or exponential improvements can be generated for special RL settings (Dunjko et al. 2017).
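The amplitude-amplification loop described above can be illustrated with a small state-vector simulation (a minimal sketch; the epoch length and the rewarded indices are hypothetical):

```python
import numpy as np

def grover_iteration(state, rewarded, psi0):
    """One Grover iteration G_psi: phase-flip oracle O, then reflection about |psi0>."""
    state = state.copy()
    state[rewarded] *= -1                       # oracle: flip phase of rewarded sequences
    return 2 * psi0 * (psi0 @ state) - state    # reflection 2|psi0><psi0| - 1

M = 10                                          # epoch length with binary actions
N = 2 ** M                                      # number of action sequences
rewarded = [3, 17, 42]                          # hypothetical rewarded sequences (indices)
psi0 = np.full(N, 1 / np.sqrt(N))               # equal superposition over all sequences

state = psi0.copy()
for _ in range(12):                             # close to the optimal ~(pi/4)sqrt(N/|W|) rounds
    state = grover_iteration(state, rewarded, psi0)
p_reward = float(np.sum(state[rewarded] ** 2))  # probability to measure a rewarded sequence
```

Measuring the final state then yields a rewarded action sequence with probability p_reward = sin²[(2k+1)ν], i.e. after quadratically fewer environment interactions than the ~N/|W| classical trials.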

Beyond strictly epochal environments
The simplest scenarios of task environments which cannot be reformulated as an oracular problem are arguably those which involve two oracles. We will consider this slight generalization in this work, as it still allows for a relatively simple treatment. This setting includes environments which simply change as a function of time, such as reinforcement learning for managing power consumption or channel allocation in cellular telephone systems (Han 2018; Tesauro et al. 2008; da Silva et al. 2006; Singh and Bertsekas 1996). If the instances of change are known, the blocking is again possible, in which case we obtain a setting where we can realize access to an oracle which changes as a function of time. Closely related to this is the more specific case of a variable episode length. This setting, although more special, is of particular interest in RL. Episodic environments are usually constructed by taking an arbitrary environment and establishing a cut-off after a certain number of steps. The resulting object is again an environment, derived from the initial setting. This construction is special in the following sense: given any sequence of actions a which is rewarded in a derived environment with cut-off after m steps, any sequence of actions which has a as a prefix is rewarded in the derived environment with a larger cut-off M > m. An example of such an environment is the grid-world problem, where the task is to navigate a maze and find a specific rewarded location (Russell and Norvig 2003; Sutton and Barto 1998).
The classical scenarios described above, under oraculization techniques, map onto the changing oracle problem (described in detail in the following section), where at a given time an oracle Õ is exchanged for a different oracle O. This generalization especially captures the scenario of a single increment of the epoch length from m to M > m for search in QRL. In this special case, the rewarded space W̃ of Õ is a subspace of the rewarded space W of O. We will prove that the optimal algorithm in this case is given by a Grover search with a continuous coherent time evolution using both oracles consecutively. However, continuing the coherent time evolution of a Grover search can be suboptimal when W̃ ⊄ W. The arguments in the next section can be applied iteratively to describe multiple changes/increments of the rewarded space.

The changing oracle problem
The situation above can be abstracted as a "changing oracle" problem, which we specify here. As suggested, we consider an "oracle" to be a standard phase-flip oracle, such that O|x⟩ = (−1)^{f(x)}|x⟩, where f: X → {0, 1} is a characteristic function on a set of elements X with |X| = N; in our case, X denotes sequences of actions of some prescribed length. The rewarded set is denoted by W = {x ∈ X | f(x) = 1}, and the states |x⟩ form a (known) orthonormal basis.
In the changing oracle problem, we consider two oracles Õ and O, with respective rewarded sets W̃ and W. The problem specifies two time intervals (phases) in which only one of the two oracles is available: time-steps 1 ≤ k ≤ K, during which only access to Õ is available, and time-steps K + 1 ≤ k ≤ K + J, during which only access to the second oracle O is available.
For simplicity, we assume that the values of K, J, N as well as the sizes of the rewarded sets |W̃| = ñ and |W| = n are known in advance. In general, the objective is to either output an x ∈ W̃ before time-step K, or to output an x ∈ W in the remainder of the time; we will refer to both as a solution. However, the exact time when the oracle changes, and thus K and J, is not important and can even be unknown, as we show later. Unless K ∈ Ω(√(N/ñ)), attempts to find a solution in the first phase will in general have a very low success probability no matter what we do, due to the optimality of Grover's search. However, even in this case, having access to Õ in the first phase may improve our chances to succeed in the second. This is the setting we consider.
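In state-vector form, an instance of the changing oracle problem is fully specified by N and the two rewarded sets; each phase-flip oracle is then a diagonal ±1 operator, and a query is an elementwise multiplication (a minimal sketch with hypothetical sets W̃ ⊆ W):

```python
import numpy as np

def phase_oracle(rewarded, N):
    """Diagonal of the phase-flip oracle O|x> = (-1)^f(x) |x>."""
    diag = np.ones(N)
    diag[list(rewarded)] = -1.0
    return diag

N = 256
W_tilde = [5, 9]            # rewarded set of the first oracle (available for K steps)
W = [5, 9, 40, 41]          # rewarded set of the second oracle (available for J steps)
O_tilde = phase_oracle(W_tilde, N)
O = phase_oracle(W, N)

# one query to the first oracle, applied to the equal superposition:
state = np.full(N, 1 / np.sqrt(N))
state = O_tilde * state
```

Here W̃ ⊆ W models the increasing rewarded space; the disjoint and merely overlapping cases discussed below correspond to other choices of the two index sets.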
The optimal strategies vitally depend on the known relationship between W and W̃. We will first briefly discuss all possible settings before focusing on the most interesting cases. Note that in this paper we are not looking for a strategy which uses a minimal number of queries until a solution is found, but rather a strategy which maximizes the success probability for a fixed number of queries. However, it is also known that Grover's search achieves the fastest increase of success probability (Zalka 1999). The algorithms described here can also be used to optimize the number of queries, but then the corresponding figure of merit has to be defined precisely.
In the worst case, there may be no known correlation between W and W̃. In this case, we gain no advantage from having access to Õ, and the optimal strategy is a simple Grover search in the second phase.
Another case of limited interest is when W and W̃ are known to be disjoint. In this case, the first oracle might be used to constrain the search space to the complement W̃^c, which contains W. The lower bounds for this setting are easy to find: we can assume that at time-step K the set W̃ is made known (any state we could have generated using Õ can be generated with this information). However, in this case, the optimal strategy is still to simply apply quantum search over the restricted space W̃^c if it can be fully specified. But since we most often encounter cases where ñ = |W̃| is (very) small compared to N, the improvement that can be obtained is also minor.
Similar reasoning applies when the sets are not disjoint but the intersection is small, not just compared to N, but also to |W| and |W̃|. In this case, we can again find lower bounds by assuming that the non-overlapping complement becomes known. In addition, we assume that we can prepare any quantum state with a bounded overlap with the states corresponding to the intersection, x ∈ W ∩ W̃. Then, the optimal strategy is again governed by the optimality of Grover-based amplitude amplification. This brings us to the more interesting situations, specifically when the overlap W_a = W ∩ W̃ is large (see Appendix A for the exact definition).
Due to our motivation stemming from the aforementioned RL settings, we are particularly interested in the case where W̃ ⊆ W, for which we give the optimal strategy; it turns out to be essentially Grover amplification where we "pretend" that the oracle hadn't changed.
The other cases, W ⊆ W̃, and the more generic case where the overlap is large but no containment holds, are less interesting for our purpose, so we only briefly discuss the possible strategies without proofs of optimality.

Increasing rewarded spaces: upper bound on the average final success probability
In the following, we consider the above-described changing oracle problem with monotonically increasing rewarded spaces W̃ ⊆ W and derive upper bounds for the maximal average success probability p_{K+J} of finding an element x ∈ W at the end of the second phase. The changing oracle problem lies outside the standard settings for which various lower-bounding techniques have been developed (Arunachalam et al. 2019; Ambainis 2002; 2006), but it is simple enough to be treated by modifying and extending techniques introduced to lower bound unstructured search problems (Zalka 1999).
To find upper bounds on the success probability, we first prove that we can restrict our search for optimal strategies to averaged strategies as defined in Appendix B. This induces certain symmetries which reduce the optimization to an optimization over two angles, α and Δ, one for each phase. Finally, we derive bounds α(K) and Δ(J) for these angles depending on K and J, which in turn bound the optimal success probability p_{K+J}.
The search for an optimal strategy can be limited to strategies based on pure states and unitary time evolutions, since it is possible to purify any search strategy by going from the Hilbert space H_A spanned by {|x⟩} to a larger Hilbert space H_AB = H_A ⊗ H_B. As a consequence, every search strategy T = ({U_k}, |ψ(0)⟩) based on K + J oracle queries can be described by a set of K + J unitaries U_k and an initial state |ψ(0)⟩. Our knowledge about possible rewarded items after k oracle queries is then encoded in the quantum state

|ψ(k)⟩ = U_k O_k U_{k−1} O_{k−1} ··· U_1 O_1 |ψ(0)⟩,

where O_k = Õ for k ≤ K and O_k = O for k > K. The success probability at the end of the second phase is then given by

p_{K+J} = ⟨ψ(K + J)| P_W |ψ(K + J)⟩ with P_W = Σ_{x∈W} |x⟩_A⟨x| ⊗ 1_B. (6)

Our goal is to maximize the success probability p_{K+J} averaged over all possible functions f̃(x) and f(x) with fixed sizes of the rewarded spaces |W̃| = ñ and |W| = n ≥ ñ. Different realizations of f̃(x) and f(x) can be generated by substituting all oracle queries O_k by σO_kσ† and the projector P_W by σP_Wσ†, where σ denotes a permutation operator acting on H_A. As a consequence, an optimal strategy is a strategy T which maximizes the average success probability

p̄_T = (1/|Σ_A|) Σ_{σ∈Σ_A} p_T(σ)

at the end of the second phase, k = K + J. Here, Σ_A denotes the set of all possible permutations in H_A, and p_T(σ) the success probability of strategy T for the realization generated by σ. We can further limit the search for optimal strategies to averaged strategies T̄ as defined in Appendix B because of the following lemma:

Lemma 1 The success probability p_T̄(σ) of the averaged strategy T̄ is equal to the average success probability p̄_T of the strategy T for every permutation σ ∈ Σ_A,
as proven in Appendix B. In the following, we consider only averaged strategies, such that p = p̄, and therefore omit the bar denoting an average value.
In addition, these strategies lead to symmetry properties of the unitaries U_k and the resulting states |ψ(k)⟩ under permutations σ, as outlined in detail in Appendix B. Therefore, we can restrict the initial states |ψ(0)⟩ to states with equal probability q(x) = Tr[|ψ⟩_AB⟨ψ| · (|x⟩_A⟨x| ⊗ 1_B)] for all elements x. An example of such a symmetric state is the initial state of the Grover search algorithm, the equal superposition Σ_x |x⟩_A/√N. Yet many other symmetric initial states are possible due to the additional degrees of freedom provided by the Hilbert space H_B. The unitaries U_k cannot break the symmetry between the elements |x⟩; only the oracles Õ and O can break the symmetry between rewarded and non-rewarded elements.
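The permutation symmetry underlying averaged strategies can be illustrated numerically: for a Grover-type strategy starting from the symmetric initial state, the success probability depends only on the size of the rewarded set, not on which concrete elements are rewarded (a minimal sketch; the parameters are hypothetical):

```python
import numpy as np

def grover_success(marked, N, k):
    """Success probability after k Grover iterations with the given marked set."""
    psi0 = np.full(N, 1 / np.sqrt(N))
    state = psi0.copy()
    for _ in range(k):
        state[marked] *= -1                        # phase-flip oracle
        state = 2 * psi0 * (psi0 @ state) - state  # reflection about |psi(0)>
    return float(np.sum(state[marked] ** 2))

rng = np.random.default_rng(1)
N, n, k = 1024, 3, 8
W_a = rng.choice(N, size=n, replace=False)   # one realization of f with |W| = n
W_b = rng.choice(N, size=n, replace=False)   # a permuted realization of the same size
# the success probability is invariant under the permutation mapping W_a to W_b:
assert np.isclose(grover_success(W_a, N, k), grover_success(W_b, N, k))
```

This invariance is exactly what allows the optimization below to be reduced to a few angles instead of full unitaries.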
These symmetry properties reduce the optimization over all strategies to an optimization over a few parameters, or angles, as we outline below. These parameters are in turn upper bounded by the optimality of Grover's search.
We can decompose the state |ψ(K)⟩ at the end of the first phase into a rewarded and a non-rewarded component with respect to the eigenstates of the second oracle O. The non-rewarded or losing component |ℓ⟩ = |ℓ_s⟩ is symmetric with respect to the elements x ∈ L. However, the rewarded component |w⟩ is not completely symmetric, because Õ breaks the symmetry between elements x ∈ W̃ ∩ W and x ∈ W \ W̃. Thus, we can further decompose the rewarded component into a symmetric component |w_s⟩ and a component |w_⊥⟩ orthogonal to it (see Appendix C). As a result, the state |ψ(K)⟩ is given by

|ψ(K)⟩ = cos ε |φ_s⟩ + sin ε |w_⊥⟩ (10)

with the symmetric component

|φ_s⟩ = sin φ |w_s⟩ + cos φ |ℓ_s⟩

and the orthogonal rewarded component |w_⊥⟩. The angles ε and φ are parameters depending on the strategy performed during the first phase. Their values are bounded by the success probability at the end of the first phase, given by

p_K = cos²ε sin²φ + sin²ε = 1 − cos²ε cos²φ. (13)

The time evolution during the second phase, described by V = U_{K+J} O ··· U_{K+1} O, is also symmetric and thus transforms the symmetric component |φ_s⟩ into a symmetric component, and |w_⊥⟩ into a component orthogonal to V|φ_s⟩. As a consequence, the final success probability p_{K+J} can be divided into

p_{K+J} = cos²ε p_s + sin²ε p_⊥,

with the reward probabilities p_s of the symmetric and p_⊥ of the orthogonal component (see Appendix C). The reward probability p_⊥ of the orthogonal part is maximal if p_⊥ = 1, which can be achieved if, e.g., V acts on |w_⊥⟩ as the identity. We parametrize the reward probability of the symmetric part via p_s = sin²(φ + Δ), where the parameter Δ quantifies the increase of p_s during the second phase; thus, Δ depends on the strategy performed during the second phase. We can quantify the final success probability via

p_{K+J} = cos²ε sin²(φ + Δ) + sin²ε p_⊥.

With the help of Eq. 13, we can rewrite cos²ε as cos²ε = (1 − p_K)/cos²φ, leading to

p_{K+J} = (1 − p_K) sin²(φ + Δ)/cos²φ + [1 − (1 − p_K)/cos²φ] p_⊥.

As a consequence, p_{K+J} is monotonically increasing with p_K, φ, and Δ, provided 0 ≤ φ ≤ π/2 and 0 ≤ φ + Δ ≤ π/2. Thus, an optimal strategy optimizes p_K and φ during the first phase and Δ during the second phase.
If we denote by sin²α the reward probability at the end of the first phase according to the first oracle Õ, then the success probability according to the second oracle O at this point is given by

p_K = sin²α + cos²α · n_+/(n_+ + n_ℓ),

following Eqs. 79 and 80 in Appendix C. Here, n_+ = |W_+| with W_+ = L̃ ∩ W denotes the number of items marked as rewarded only by the second oracle O, and n_ℓ = |L| the number of losing items according to O. Thus, p_K increases monotonically with α for 0 ≤ α ≤ π/2. The angle φ is also upper bounded by α via

tan φ ≤ tan α √( ñ(n_+ + n_ℓ) / ((ñ + n_+) n_ℓ) ) + n_+/√((ñ + n_+) n_ℓ)

(see Appendix C, Eq. 93). This bound also increases monotonically with α for 0 ≤ α ≤ π/2. As a result, the final success probability is upper bounded by the maximal achievable angles α (determined by the strategy during the first phase) and Δ (during the second phase) within the range 0 ≤ α ≤ π/2 and 0 ≤ φ(α) + Δ ≤ π/2. The angles α and Δ can be upper bounded with the help of a generalization of the optimality proof of Grover's algorithm from Zalka (1999), which can be stated in the following way:

Lemma 2 Given an oracle O which marks exactly n out of N items as rewarded, performing Grover's quantum search algorithm gives the maximal possible average success probability p_K = sin²[(2K + 1)ν] for 0 < K < π/(4ν) − 1/2 with sin²ν = n/N.
The proof of this lemma follows the optimality proof from Zalka for n = 1 given in Zalka (1999); we outline the differences in the proof for n > 1 in Appendix E. In general, the angle 2Kν does not only limit the maximal success probability via p ≤ sin²[(2K + 1)ν] when starting from a random guess, with p_0 = sin²ν = n/N, but also via p ≤ sin²[2Kν + φ] when starting from any fixed initial success probability p_0 = sin²φ, as we also outline in Appendix E.
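Lemma 2 can be checked directly against a state-vector simulation: for a static oracle, Grover iterations reproduce p_K = sin²[(2K + 1)ν] exactly (a minimal sketch; N and n are hypothetical):

```python
import numpy as np

N, n = 4096, 7
nu = np.arcsin(np.sqrt(n / N))
marked = np.arange(n)                  # which items are marked is irrelevant by symmetry
psi0 = np.full(N, 1 / np.sqrt(N))

state = psi0.copy()
for K in range(1, 19):                 # K < pi/(4 nu) - 1/2, i.e. before over-rotation
    state[marked] *= -1                # oracle query
    state = 2 * psi0 * (psi0 @ state) - state
    p_K = float(np.sum(state[marked] ** 2))
    assert np.isclose(p_K, np.sin((2 * K + 1) * nu) ** 2)
```

By the lemma, no other strategy with the same number of queries achieves a higher average success probability in this range of K.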
As a consequence, the maximal angle α is bounded by α ≤ (2K + 1)ν̃ with sin²ν̃ = ñ/N, which follows directly from Lemma 2 provided (2K + 1)ν̃ ≤ π/2. Similarly, the reward probability p_s of the symmetric part is limited by sin²(φ + 2Jν), bounding Δ ≤ 2Jν.

Grover search is optimal for monotonically increasing rewarded spaces
In this section, we determine the (average) success probability p_{K+J} obtained via a generalized Grover algorithm for the changing oracle problem defined here, and show that it saturates the bound derived in Section 3.1. Grover's algorithm starts in the equal superposition state

|ψ(0)⟩ = (1/√N) Σ_x |x⟩.

All unitaries U_k for 1 ≤ k ≤ K + J are given by

U_k = 2|ψ(0)⟩⟨ψ(0)| − 1. (27)

The time evolution during the first phase with oracle Õ leads to a rotation of |ψ(0)⟩ by an angle 2Kν̃ in the plane spanned by |w̃⟩ and |ψ(0)⟩, as depicted in Fig. 1.

Fig. 1 The rewarded space of O is spanned by {|w_s⟩, |w_⊥⟩}. The equal superposition state |ψ(0)⟩ is rotated along the blue circle by an angle 2Kν̃ during the first phase, leading to the state |ψ(K)⟩. Subsequently, this state is rotated along the green circle, changing only its component |φ_s⟩ but not |w_⊥⟩.

The state at the end of the first phase is given by

|ψ(K)⟩ = sin[(2K + 1)ν̃] |w̃_s⟩ + cos[(2K + 1)ν̃] |ℓ̃_s⟩

and thus saturates the upper limit α = (2K + 1)ν̃, leading to maximal p_K and α. To describe the time evolution during the second phase, we perform a basis transformation into the new basis {|w_s⟩, |ℓ_s⟩, |w_⊥⟩}, in which the angle φ satisfies

tan φ = tan α √( ñ(n_+ + n_ℓ) / ((ñ + n_+) n_ℓ) ) + n_+/√((ñ + n_+) n_ℓ),

saturating (93). The angle ε is given by

sin ε = √( n_+/(n_+ + ñ) ) sin α − √( ñ/(n_+ + ñ) ) √( n_+/(n_+ + n_ℓ) ) cos α.
The time evolution during the second phase, given by the oracle O and the unitaries U_k of Eq. 27, leads to a rotation of |ψ(K)⟩ by an angle 2Jν in a plane parallel to the one spanned by |ψ(0)⟩ and |w⟩, as depicted in Fig. 1. As a consequence, the final state is given by

|ψ(K + J)⟩ = cos ε [sin(φ + 2Jν)|w_s⟩ + cos(φ + 2Jν)|ℓ_s⟩] + sin ε |w_⊥⟩,

leading to the maximal possible angle Δ = 2Jν and maximal p_⊥, and thus to the maximal possible (average) success probability p_{K+J}. As a result, performing consecutive Grover iterations in the first and second phase, with in total K + J oracle queries, leads to the maximal possible average success probability p_{K+J}, provided α = (2K + 1)ν̃ ≤ π/2 and φ(α) + 2Jν ≤ π/2. If more queries are available, such that (2K + 1)ν̃ > π/2 or φ + 2Jν > π/2, then it is possible to over-rotate the state |ψ⟩, and applying Õ or O less often, or performing another algorithm such as fixed-point search (Yoder et al. 2014), leads to a higher success probability.
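This optimality can be probed numerically for W̃ ⊆ W: continuing the Grover iterations coherently across the oracle change outperforms discarding the first phase and searching with the second oracle alone (a minimal sketch; all set sizes are hypothetical):

```python
import numpy as np

def grover_step(state, marked, psi0):
    state = state.copy()
    state[marked] *= -1                       # phase-flip oracle
    return 2 * psi0 * (psi0 @ state) - state  # reflection about |psi(0)>

N, K, J = 4096, 10, 10
W_tilde = list(range(4))                      # rewarded set of the first oracle
W = list(range(10))                           # enlarged rewarded set, W_tilde ⊆ W
psi0 = np.full(N, 1 / np.sqrt(N))

# continue coherently across the oracle change (K queries to O_tilde, then J to O)
state = psi0.copy()
for _ in range(K):
    state = grover_step(state, W_tilde, psi0)
for _ in range(J):
    state = grover_step(state, W, psi0)
p_coherent = float(np.sum(state[W] ** 2))

# baseline: ignore the first phase and use only the J queries to the second oracle
state = psi0.copy()
for _ in range(J):
    state = grover_step(state, W, psi0)
p_fresh = float(np.sum(state[W] ** 2))
```

With these parameters, the first-phase queries are not wasted: p_coherent exceeds p_fresh although both strategies query the second oracle equally often.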
In general, the change of |ψ(k)⟩ which can be created with a single query of O (Õ) is limited by |⟨ψ(k + 1)|ψ(k)⟩| ≥ cos 2ν (cos 2ν̃). Achieving the maximal possible difference between |ψ(0)⟩ and |ψ(K + J)⟩ under these constraints would require that all states |ψ(k)⟩ lie within a single plane (see the discussion in Zalka (1999)). However, changing the oracle in Grover's algorithm leads to a tilt of the rotation plane/axis, as visualized in Fig. 1. Nevertheless, performing Grover iterations is the optimal strategy, as we have proven. In addition, changing the oracle creates a component |w_⊥⟩ which stays invariant under consecutive Grover iterations with the new oracle. Luckily, this component contains only rewarded items, so it does not prevent us from further increasing the success probability with Grover iterations if W̃ ⊆ W. As a consequence, the optimality of Grover's algorithm in the case of a changing oracle may not be surprising, but it is also not obvious, especially because performing Grover's algorithm with the maximal number of available oracle queries is not necessarily optimal if W̃ and W only share a large overlap but W̃ ⊄ W.

Grover iterations for W̃ ⊄ W
In the following, we investigate the performance of Grover's algorithm if W̃ and W share a large overlap (see Appendix A) but W̃ ⊄ W. We will show that performing the maximal number K of oracle queries during the first phase is not always optimal, depending on the number of available queries J in the second phase.
If W̃ ⊄ W, then the perpendicular component |φ_⊥⟩ of Eq. 10 also includes a losing component |ℓ_⊥⟩, such that the state |ψ(K)⟩ can be written as

|ψ(K)⟩ = cos ε |φ_s⟩ + sin ε |φ_⊥⟩ with |φ_s⟩ = sin φ |w_s⟩ + cos φ |ℓ_s⟩. (38)

Applying Grover iterations with the unitaries U_k defined in Eq. 27 does not change the success probability of the component |φ_⊥⟩; it only changes the success probability of the component |φ_s⟩, leading to

p_{K+J} = cos²ε sin²(φ + Δ) + sin²ε sin²χ (40)

with Δ = 2Jν. As a consequence, the success probability at the end of the second phase is limited by 1 − |⟨ℓ_⊥|ψ(K)⟩|², and thus by the weight of the orthogonal losing component created during the first phase. The contribution of this component increases with K for K < π/(4ν̃) − 1/2, as shown in Fig. 2.

Fig. 2 Comparison of the success probabilities p_{K'+J} for different numbers K', J of Grover iterations during the first and second phase, with K' = 0 (blue), K' = 5 (green) and K' = 10 (red) for N = 5000, ñ = 15, n = 10, n_+ = 5 and thus n_− = 10 = |W̃ ∩ L|.

In this case, the success probability p_{K+J} is still monotonically increasing with Δ. Therefore, performing the maximal possible number J of Grover iterations during the second phase is still a good idea, provided φ + 2Jν ≤ π/2. However, performing the maximal number K of Grover iterations during the first phase is not optimal if it leads to angles φ = φ(K) and χ = χ(K) for which the weight and limited success probability sin²χ of the orthogonal component outweigh the gain in φ. In this situation, performing fewer Grover iterations K' < K during the first phase can lead to a higher final success probability p_{K'+J} > p_{K+J}, as shown in Fig. 2. Here, performing only K' = 5 instead of K = 10 Grover iterations leads to higher final success probabilities for 6 ≤ J ≤ 11. In general, it is optimal to perform the maximal number K of Grover iterations during the first phase if J = 0 (provided (2K + 1)ν̃ < π/2). However, the more queries to the second oracle are available, the fewer queries to the first oracle Õ should be used, as demonstrated in Fig. 2.
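This trade-off can be reproduced with the set sizes of Fig. 2 (N = 5000, ñ = 15, n = 10, n_+ = 5): for a fixed budget J of second-phase queries, fewer first-phase iterations can yield a higher final success probability (a minimal sketch; the concrete index sets are hypothetical):

```python
import numpy as np

def grover_step(state, marked, psi0):
    state = state.copy()
    state[marked] *= -1                       # phase-flip oracle
    return 2 * psi0 * (psi0 @ state) - state  # reflection about |psi(0)>

N = 5000
W_tilde = list(range(0, 15))     # ñ = 15, only 5 of which remain rewarded under O
W = list(range(10, 20))          # n = 10, overlap |W ∩ W_tilde| = 5, n_+ = 5
psi0 = np.full(N, 1 / np.sqrt(N))

def p_final(K, J):
    """Success probability after K queries to O_tilde followed by J queries to O."""
    state = psi0.copy()
    for _ in range(K):
        state = grover_step(state, W_tilde, psi0)
    for _ in range(J):
        state = grover_step(state, W, psi0)
    return float(np.sum(state[W] ** 2))
```

For J = 0 the longer first phase wins (p_final(10, 0) > p_final(5, 0)), but for, e.g., J = 9 the orthogonal losing component built up during the longer first phase dominates and p_final(5, 9) > p_final(10, 9).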

Minimizing the cost to find a rewarded item
In the previous section, we have proven that it is possible to pursue a Grover-based search strategy and maintain optimality even when the oracle changes, provided the rewarded space is monotonically increasing. We note that changing-oracle settings have not been considered in the literature before from the present perspective. Hence, it was unclear whether continuing Grover iterations in settings with changing oracles would lead to suboptimal results (compare Section 3.3). Note that the standard results for Grover search (Grover 1997; 1998; Brassard et al. 2000) can only be applied to the changing oracle problem if the two phases are treated separately. That is, a Grover search is performed during the first phase. At the end of this phase, a measurement is performed. If no rewarded item is found, another Grover search is started with queries to the second oracle.
In the following, we discuss a numerical example demonstrating the possible improvement gained by proceeding with a coherent Grover-based strategy when the oracle changes, rather than stopping it and starting a new search.
In our example, we assume that K Grover iterations G̃_ψ with Õ have been performed during the first phase. Then, the oracle is changed to O with W̃ ⊆ W, and an arbitrary number of queries to the second oracle is allowed. We compare the following two procedures: (a) a measurement is performed immediately after the change of the oracle, and (b) the coherent time evolution is continued with the new oracle before a measurement is performed. In both cases, we continue with a standard Grover search based solely on the second oracle O if the first measurement did not reveal a rewarded item.
In many scenarios, the goal is to minimize a given cost function, such as the number of oracle queries, rather than to optimize the reward probability. Therefore, we compare in this section the cost to find a rewarded item if we either proceed with or interrupt the coherent search when the oracle changes. Here, we use C(j) = 2j + 1 as a typical cost function for a Grover algorithm with j steps (see e.g. Dunjko et al. (2016) and Sriarunothai et al. (2019)).
In the following, we neglect the cost produced by queries to the first oracle Õ, because this cost is identical for both cases, and optimize the number of Grover iterations during the second phase (see Appendix F). In general, it is advantageous to continue the coherent time evolution if

4ν ≤ π − arcsin[1/(1.38 cos²ε_K)] − 2φ_K

with sin²ν = |W|/N (= 1/N in the example of Fig. 3). Here, φ_K quantifies the reward probability of the symmetric part after the first phase (compare Eq. 35) and ε_K the ratio between the symmetric component and the orthogonal component (compare Eq. 33). The cost functions for both procedures scale with √N, just like in typical amplitude-amplification algorithms. The cost difference between stopping and continuing Grover's algorithm also scales with √N for fixed angles φ_K and ε_K, as we evaluate in Appendix F and visualize in Fig. 3.

Conclusion
Research in quantum-enhanced reinforcement learning has motivated quantum computation scenarios involving two systems, the agent and its environment, with restricted access to each other. In special cases, the interaction of the agent with its environment can be reduced to unitary oracle queries. However, general settings do not allow such a treatment due to memory effects induced by the environment.

Fig. 3 Minimal expected average cost C̄ of a continuous Grover algorithm with changing oracle and optimized j (red) for sin ν = √(|W|/N) = 1/√N and fixed φ_K = 0.6 and ε_K ≈ 0.32. The expected average cost of an interrupted Grover algorithm with fixed oracles is shown as a blue dashed line for comparison
In this paper, we generalized the basic case, where the environment acts effectively as a single fixed oracle, to settings where the oracle changes in time. This was motivated by standard grid-world type problems, where the number of consecutive actions within a single epoch can grow or shrink. We have demonstrated that the search for a rewarded action sequence of increasing length can be described as a search in a database with fixed sequence length (equal to the maximal sequence length) but a changing oracle, leading to an increase of the rewarded space. We analyzed this setting and identified Grover-type amplitude amplification as the optimal strategy for monotonically increasing rewarded spaces.
However, continuing coherent Grover iterations when the target space decreases will partially trap the resulting state within the losing subspace. As a consequence, if we continue with Grover iterations after the oracle has changed, the reward probability is bounded strictly below unity.
It is easy to conceive a cascade of ever more general problems. For example, in slightly more general settings the agent might be allowed to choose if and when to change the effective oracle. In this way, the agent might combine breadth-first and depth-first search in a single coherent search for RL. Often, shorter rewarded action sequences are preferred, but longer rewarded action sequences are more likely. Increasing the sequence length during a coherent quantum search will amplify the probability for shorter rewarded sequences more than for longer sequences. Combining different oracles, corresponding to different sequence lengths, within a single Grover search might therefore help to balance the trade-off between the desire for short rewarded sequences on the one side and high reward probabilities on the other. Further research investigating these possible benefits is needed. Quantum-enhanced reinforcement learning agents using different oracles might be developed for many different scenarios such as grid world, navigation, or routing. In addition, many problems where we search for the shortest correct sequence of actions to achieve a goal, like finding the optimal elements to create entangled photons, are equivalent to the grid-world problem.
We envision that the results presented here may become even more important in the future when large-scale quantum networks (Kimble 2008; Cacciapuoti et al. 2020) become available, where the interaction is naturally quantum.
The goal in RL is in general to minimize a given cost function C rather than to solely maximize the success probability. Performing consecutive Grover iterations can also be used to minimize the average number of oracle queries necessary until a rewarded item is found. In general, we expect for our algorithm a quadratic improvement of the cost, C_g ∝ √C_class, compared to the cost C_class of a classical algorithm. In Section 3.4, we discussed a cost function depending solely on the number of oracle queries. We compared quantum algorithms based on a single oracle or multiple oracles, leading to the costs C_s and C_g, respectively. Here, we found that the difference C_s − C_g ∝ C_s scales with the cost itself.
An optimal algorithm will depend on the exact cost function we want to minimize. For example, the search algorithm described in Boyer et al. (1998) is only optimal in terms of oracle queries. However, the number of elementary qubit gates necessary to perform a Grover search can be reduced by using a recursive Grover search (Arunachalam and de Wolf 2015), which separates the database into several subgroups. In RL, queries to different oracles might be associated with different costs. In such a setting, an optimal algorithm might use different oracles in a recursive way for a quantum search. In this way, improvements in terms of cost might go beyond the quadratic improvements achievable in quantum exploration.
Finally, possibly the most interesting extensions would avoid reductions of environments to unitary oracles, and identify new schemes to obtain improvements in settings which may be more applicable in real-world RL settings. We leave these more general considerations for follow-up investigations.

Appendix A: Large overlap of W̃ and W
We say that the rewarded spaces W̃ and W have a large overlap if increasing the probability p̃ to find x ∈ W̃ uniformly also increases the probability p to find x ∈ W.
In general, optimal search strategies can always be constructed in such a way that the probabilities p(x|f(x) = 1) for all rewarded states are equal, as outlined in Appendix B. The same holds for losing states |x⟩ with f(x) = 0. Let n_a = |W̃ ∩ W| (n_ℓ) be the number of states which are marked as rewarded (not rewarded/losing) by both oracles, and n_− = |W̃ ∩ L| (n_+) the number of states which are rewarded only according to the first (second) oracle. Thus, the total number of items is given by N = n_a + n_ℓ + n_− + n_+. We denote the probabilities to find any state which is always rewarded, always not rewarded, rewarded only during the first phase, and rewarded only during the second phase by p_a, p_ℓ, p_−, and p_+, respectively. Increasing the initial probability p̃ = p_a + p_− = (n_a + n_−)/N during the first phase in a symmetric way, as outlined in Appendix B, by a factor α decreases the probability of the remaining states due to normalization. This leads to a change of p given by

p = p_a + p_+ → n_+/(n_ℓ + n_+) + α (n_a n_ℓ − n_− n_+)/(N(n_ℓ + n_+)).
As a result, we can increase p by increasing p̃ in a symmetric way whenever

n_a n_ℓ > n_+ n_−.  (47)
Consequently, we say that W̃ and W share a large overlap if they fulfill Eq. 47.
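The overlap criterion of Eq. 47 can be checked directly by redistributing probability weight numerically. This is a sketch with made-up counts; the function name is ours, and the renormalization of the first-phase losing states follows the symmetric construction of Appendix B.

```python
def p_after_scaling(n_a, n_l, n_minus, n_plus, alpha):
    """Start from the uniform distribution, scale the weight of every
    first-phase rewarded state (the n_a + n_minus states in W-tilde) by
    alpha, renormalize the first-phase losing states uniformly, and
    return p = p_a + p_plus, the second-phase success probability."""
    N = n_a + n_l + n_minus + n_plus
    weight_w1 = alpha * (n_a + n_minus) / N          # scaled weight on W-tilde
    beta = (1.0 - weight_w1) / ((n_l + n_plus) / N)  # factor for L-tilde states
    return alpha * n_a / N + beta * n_plus / N

# n_a * n_l > n_plus * n_minus: increasing p-tilde also increases p
assert p_after_scaling(5, 20, 2, 3, alpha=1.5) > p_after_scaling(5, 20, 2, 3, alpha=1.0)
# n_a * n_l < n_plus * n_minus: increasing p-tilde decreases p
assert p_after_scaling(1, 2, 10, 10, alpha=1.5) < p_after_scaling(1, 2, 10, 10, alpha=1.0)
```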

Appendix B: Averaged search strategies
In the following, we consider search problems defined via some set of N orthonormal states {|n⟩_A} forming a basis of the Hilbert space H_A, which can be separated into two subsets H_A = W ∪ L: the set of rewarded states W and the set of losing states L, with W ∩ L = ∅. Information about rewarded states can be obtained by querying phase-flip oracles O_k = P_{L_k} − P_{W_k}, where P_{W_k} and P_{L_k} denote projectors onto subspaces W_k and L_k which together again form the complete Hilbert space. For standard search problems, we have W_k = W ∀k and L_k = L ∀k. However, for more general search problems, such as the changing oracle problem considered here, the subspaces W_k and L_k might differ from query to query. Our goal is to find any state |n⟩_A ∈ W with the help of maximally K oracle queries. All possible search strategies can be represented via unitary operations and pure initial states, since it is possible to purify any search strategy by going to a larger Hilbert space H_AB = H_A ⊗ H_B and defining the generalized operators of Eq. 49. To avoid overloading the notation with indices, we skip the labels indicating the different subspaces the operators/unitaries act on whenever they are not crucial. Operators with a subspace index, such as σ_A, acting on a state from a larger Hilbert space, e.g. |ψ⟩_AB, are meant as short forms of the generalized operators defined analogously to Eq. 49.
Any search strategy T to find a state |n⟩ ∈ W can be described via T = ({U_k}, |ψ(0)⟩_AB), with a pure initial state |ψ(0)⟩_AB and unitaries {U_k} acting on the combined Hilbert space H_AB, leading after K oracle queries to the final state

|ψ(K)⟩_AB = U_K O_K ··· U_1 O_1 |ψ(0)⟩_AB

and a consecutive projective measurement. Without loss of generality, we apply an oracle query first, since any unitary U_0 applied beforehand can be subsumed into the initial state. The probability p_T to identify a rewarded state correctly for a given strategy T and set of oracles {O_k} is then given by p_T = ⟨ψ(K)| P_W |ψ(K)⟩. Let σ denote a permutation operator acting on H_A and 𝒜 the group of all such permutation operators. The average reward probability p̄_T of the strategy T is defined via the uniform average p̄_T = (1/N!) Σ_{σ∈𝒜} p_T(σ), with correspondingly averaged initial states and average unitaries Ū_k. Here, the states {|γ⟩_C} are given by an arbitrary orthonormal basis of a Hilbert space H_C with dimension d_C = N!, acting as labels for the applied permutation operator σ_γ acting on H_A.
The averaged strategy T̄ has the following properties:

Lemma 3 The success probability p_T̄(σ) of the averaged strategy T̄ is equal to the average success probability p̄_T of the strategy T for every permutation σ ∈ 𝒜.
Proof The state σ†|ψ(K, σ)⟩_ABC for σ ∈ 𝒜 can be evaluated by using that 𝒜 is a symmetric group. As a consequence, the application of the permutation σ† on |ψ(K)⟩ is equivalent to a relabeling of the permutations σ_γ, such that we now apply the permutation σ̃_γ† = σ†σ_γ† instead of σ_γ† if subsystem C is in state |γ⟩_C. However, these labels were arbitrary, and therefore we find for the success probability

p_T̄(σ) = Tr_ABC [P_{W,ABC} σ† |ψ(K, σ)⟩⟨ψ(K, σ)| σ].  (60)

The relabeling can be formalized in the following way. We define the index γ̄ via σ σ_γ = σ_γ̄. Then, we can define the permutation π(σ) acting on H_C via π(σ)|γ⟩_C = |γ̄⟩_C, which leads to the following lemma:

Lemma 4 The averaged strategy T̄ is permutation invariant under joint permutations σ ⊗ π(σ), σ ∈ 𝒜.
Proof For the symmetric initial state |ψ(0)⟩, we find (σ ⊗ π(σ))|ψ(0)⟩ = |ψ(0)⟩. For the symmetric unitaries Ū_k, the commutation relation [Ū_k, σ ⊗ π(σ)] = 0 follows immediately, since permutation operators are unitary.
As a consequence of Lemmas 3 and 4, we can limit the search for the best strategy T, optimizing p̄_T, to averaged strategies T̄, which also optimize the worst-case probability min_σ p_T(σ) and lead to certain symmetries, as outlined in Appendix C.

Appendix C: Symmetry investigations for the changing oracle problem
In the following, we consider a search problem where the oracle O_k changes at a certain time step. Thus, we can separate the search into two phases. The first phase contains K oracle queries to the oracle Õ = O_k for 1 ≤ k ≤ K, with rewarded space W̃ and losing space L̃. Then, the oracle changes to O = O_k for K < k ≤ K + J, with new rewarded space W and losing space L, and the search is continued for another J queries to O. In addition, we restrict the problem to monotonically increasing rewarded spaces; that is, the rewarded space W̃ of the first phase is a subset W̃ ⊆ W of the rewarded space W of the second oracle O. This automatically leads to L̃ ⊇ L.
In the following, we investigate the symmetries occurring during the first and second phase when applying averaged search strategies T̄ to this problem. Since we only consider averaged strategies, and thus averaged unitaries Ū_k and states |ψ(k)⟩, we omit the bar on all states and unitaries in this section to simplify the notation.
In the following, we investigate the symmetry properties of the states at the end of the first and the second phase. This will allow us to determine an upper bound for the average success probability p.
We define 𝒜_Õ as the complete set of permutation operators which leave both the rewarded space W̃ and the losing space L̃ invariant. As a consequence, we find [Õ, σ] = 0 ∀σ ∈ 𝒜_Õ. The initial state |ψ(0)⟩ and all unitaries U_k and Õ_k during the first phase are permutation invariant under σ ⊗ π(σ) ∀σ ∈ 𝒜_Õ, since 𝒜_Õ ⊆ 𝒜. Thus, the state |ψ(K)⟩ at the end of the first phase is also permutation invariant under σ ⊗ π(σ) ∀σ ∈ 𝒜_Õ.
To determine the symmetry properties of |ψ(K + J ) , we need to investigate how the rewarded and not-rewarded components of |ψ(K) changes when we change the oracle. We define the normalized rewarded |w and not-rewarded with cos α = |PL|ψ(K) |. As a consequence, |ψ(K) can be decomposed via |ψ(K) = cos α|˜ + sin α|w .
The components |w̃⟩ and |ℓ̃⟩ are permutation invariant under σ ⊗ π(σ) ∀σ ∈ 𝒜_Õ, because the projectors P_W̃ and P_L̃ as well as |ψ(K)⟩ are permutation invariant. Let us now investigate the rewarded and not-rewarded components at the beginning of the second phase. The initial state of the second phase is given by |ψ(K)⟩. Its component |w̃⟩ is also a rewarded component according to the second oracle O, such that P_W |w̃⟩ = |w̃⟩. However, |ℓ̃⟩ contains both rewarded and not-rewarded components, with cos β = ‖P_L |ℓ̃⟩‖. Note that |w_+⟩ ∈ W_+ = L̃ ∩ W and thus |w_+⟩ ⊥ |w̃⟩. Therefore, we can divide the state |ψ(K)⟩ into three orthogonal components via

|ψ(K)⟩ = sin α |w̃⟩ + cos α (sin β |w_+⟩ + cos β |ℓ⟩).  (79)

The angle β is given by sin²β = n_+/(n_+ + n_ℓ) (Eq. 80), where n_+ denotes the dimension of W_+ and n_ℓ the dimension of L (see Appendix D).
Let us now investigate the symmetries of |ψ(K)⟩ with respect to the permutations σ ⊗ π(σ) ∀σ ∈ 𝒜_O which leave the second oracle O invariant. Let P_S be the projector onto the symmetric subspace, which can be written as P_S = Σ_s |s⟩⟨s|, where {|s⟩} forms an orthonormal basis of the symmetric subspace. Then, we can define the symmetric component |φ_S⟩ and its complement |φ_⊥⟩ (Eq. 83), with cos ε = ‖P_S |ψ(K)⟩‖. The state |ℓ⟩ is permutation invariant under σ ⊗ π(σ) ∀σ ∈ 𝒜_O since L ⊆ L̃, such that P_S |ℓ⟩ = |ℓ⟩. However, the (not normalized) rewarded component sin α |w̃⟩ + cos α sin β |w_+⟩ is not necessarily permutation invariant under σ ⊗ π(σ) ∀σ ∈ 𝒜_O. As a consequence, there might exist a non-vanishing component |φ_⊥⟩; however, this component lies within the rewarded space W, such that P_W |φ_⊥⟩ = |φ_⊥⟩ (84). The symmetric component |φ_S⟩ can be decomposed into a rewarded and a not-rewarded component via

cos ε sin φ |w_s⟩ = P_W P_S |ψ(K)⟩,
cos ε cos φ |ℓ_s⟩ = P_L P_S |ψ(K)⟩ = cos ε cos φ |ℓ⟩,

with cos ε sin φ = ‖P_W P_S |ψ(K)⟩‖. Thus, the state |ψ(K)⟩ can be separated into the following three orthogonal components:

|ψ(K)⟩ = cos ε (sin φ |w_s⟩ + cos φ |ℓ⟩) + sin ε |w_⊥⟩.  (87)

A comparison with Eq. 79 leads to the following identities:

cos ε cos φ = ⟨ℓ|ψ(K)⟩ = cos α cos β,
cos ε sin φ = ⟨w_s|ψ(K)⟩ = sin α ⟨w_s|w̃⟩ + cos α sin β ⟨w_s|w_+⟩,
sin ε = ⟨w_⊥|ψ(K)⟩ = sin α ⟨w_⊥|w̃⟩ + cos α sin β ⟨w_⊥|w_+⟩.  (88–90)

Note that all appearing scalar products are real due to the definition of |w_s⟩ and |w_⊥⟩, and they are upper bounded via

|⟨w_s|w̃⟩| ≤ √(ñ/(ñ + n_+)),  |⟨w_s|w_+⟩| ≤ √(n_+/(ñ + n_+)).
As a consequence, the angle φ is upper bounded by the angle α via

tan φ ≤ tan α √[ñ(n_+ + n_ℓ)/((ñ + n_+) n_ℓ)] + √[n_+²/((ñ + n_+) n_ℓ)].

Let us now investigate the time evolution during the second phase. We denote by V (Eq. 94) the unitary which describes the complete time evolution during the second phase. The unitary V commutes with the projector P_S, as the following considerations prove. There exists a joint eigenbasis of V and σ ⊗ π(σ) ∀σ ∈ 𝒜_O, since [V, σ ⊗ π(σ)] = 0. Let {|v_x⟩} be such an eigenbasis of V, and without loss of generality we assume that the first f states of this basis span the symmetric subspace, such that P_S = Σ_{x=1}^{f} |v_x⟩⟨v_x|. As a consequence, we find V P_S = Σ_{x=1}^{f} λ_x |v_x⟩⟨v_x| = P_S V, where λ_x denote the eigenvalues of V. Thus, the time evolution V |φ_S⟩ of the symmetric component stays a symmetric state, whereas V |φ_⊥⟩ stays orthogonal to this subspace, and thus the symmetric part and the orthogonal part do not mix.
The reward probability of |ψ(K + J)⟩ can be decomposed into a symmetric part and a part orthogonal to it via

p_{K+J} = ‖P_W P_S |ψ(K + J)⟩‖² + ‖P_W (1 − P_S) |ψ(K + J)⟩‖²,

where we used [P_W, P_S] = 0, which follows directly from [P_W, σ ⊗ π(σ)] = 0 ∀σ ∈ 𝒜_O and P_S = P_S².

Appendix D: Determining the angle β
In the following, we give a more detailed derivation of Eq. 80 for determining β, defined via cos β = ‖P_L |ℓ̃⟩‖. Let, without loss of generality, {|j⟩_A} with 1 ≤ j ≤ n_+ + n_ℓ be a basis of the losing space L̃_A. The state |ℓ̃⟩_ABC can then be written as

|ℓ̃⟩_ABC = Σ_j √(p_j) |j⟩_A |γ_j⟩_BC

with some arbitrary normalized states |γ_j⟩_BC, where p_j denotes the probability for each state |j⟩_A (Eq. 103). However, the state |ℓ̃⟩ is permutation invariant ∀σ ∈ 𝒜_L̃, such that the probabilities satisfy p_j = p_{σ(j)} (105), and due to normalization p_j = 1/(n_+ + n_ℓ). Since there exist n_+ orthonormal states within the subspace W_+ = W ∩ L̃, we find

sin²β = n_+/(n_+ + n_ℓ).
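The counting argument behind Eq. 80 can be illustrated with a short numerical check (the sizes are illustrative, and the variable names are ours): a state that is uniform over the n_+ + n_ℓ states of L̃ has weight n_+/(n_+ + n_ℓ) in W_+ = W ∩ L̃.

```python
import numpy as np

n_plus, n_l = 5, 10  # illustrative dimensions of W_+ and L

# uniform amplitudes over the n_plus + n_l states of the losing space L-tilde
amp = np.full(n_plus + n_l, 1.0 / np.sqrt(n_plus + n_l))

# the first n_plus basis states span W_+ = W ∩ L-tilde
sin2_beta = float(np.sum(amp[:n_plus] ** 2))
assert abs(sin2_beta - n_plus / (n_plus + n_l)) < 1e-12  # sin^2(beta) = n_+/(n_+ + n_l)
```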

Appendix E: Optimality proof of Grover's algorithm for multiple rewarded items
The optimality proof of Grover's algorithm for oracles with a single rewarded item by Zalka (1999) consists of two parts, given by the inequalities of Eq. 107. Here, N is the number of items, p the success probability to identify the single rewarded item y correctly, J the maximal number of oracle queries, and the angle ψ is defined via sin²ψ = 1/N. The two quantum states |φ_j⟩ and |φ_j^y⟩ are defined via |φ_j⟩ = V_j |φ⟩ and |φ_j^y⟩ = V_j^y |φ⟩, where |φ⟩ is some arbitrary state, V_j^y is a unitary of the form of Eq. 94 based on j queries to the oracle O_y, and V_j is a unitary based on j queries to an empty oracle. The optimality of Grover's algorithm follows from the proof of both inequalities and the fact that Grover's algorithm saturates both.
We generalize the results of Zalka by going to oracles O_y which mark exactly n out of N items as rewarded. In this case, y is now a label for the rewarded space W_y, and there exist D = N!/(n!(N − n)!) different oracles. The success probability p now denotes the probability to identify any rewarded item |z⟩ ∈ W_y correctly. For a random guess, this probability is given by sin²ν = n/N. As a consequence, Eq. 107 can be generalized to the corresponding bound with ν in place of ψ, which we will prove in the following; it is equal to Eq. 107 for n = 1. Again, Grover's algorithm saturates these bounds.
We start with the right inequality and prove the following lemma.

Lemma 5 The maximal difference between |φ_J⟩ and |φ_J^y⟩ achievable with J oracle queries, averaged over all possible oracles with n rewarded items, is given by Eq. 110 with sin²ν = n/N.
Proof This lemma follows directly from the optimality proof of Grover's algorithm given in Zalka (1999) by generalizing the sum over all possible oracles which mark only one item y to all possible oracles which mark n items. In the following, we do not reproduce every step from Zalka (1999) but concentrate only on the steps where the generalization from one rewarded item to several rewarded items makes a difference. Following Zalka (1999, Eq. 22), we obtain a bound in terms of the function f(x) defined there, with an argument x determined by the projectors P_{W_y} onto the rewarded spaces of the oracles y. Every state |z⟩ ∈ H_A is part of the rewarded space W_y for exactly d = (N − 1)!/((n − 1)!(N − n)!) different oracles. As a consequence, the argument x of the function f in Eq. 112 follows by this counting. The right side of Eq. 110 is governed by the following lemma:

Lemma 6
The average success probability p to identify any item z ∈ W_y out of the rewarded space W_y of the oracle O_y with 1 ≤ y ≤ D, given the states |φ_J^y⟩ averaged over all oracles, is upper bounded by the generalization of Zalka's bound obtained by replacing sin²ψ = 1/N with sin²ν = n/N.

Proof Again, in order to prove this lemma, we follow the proof in Zalka (1999) and only point out the generalizations we have to make when going from n = 1 rewarded state to n > 1 rewarded states. Similar to Zalka (1999), we write the states via some orthonormal basis {|x⟩} of a Hilbert space with dimension X. The optimal procedure to identify a rewarded item |z⟩ is to perform projective measurements (see Zalka (1999)). Let {|x⟩} be the measurement basis, and denote by X_z the subspace containing all states |x⟩ which correctly indicate that |z⟩ is a rewarded item. As a consequence, the success probability p_y, if the unknown oracle is given by O_y, is determined by the weight of |φ_J^y⟩ in the subspaces X_z with z ∈ W_y. Similarly, we can define a success probability a_y for the state |φ_J⟩ (compare Eq. A7 in Zalka (1999)). In order to prove the lemma, Zalka determines the minimal distance an arbitrary state |φ_y⟩ with success probability p_y needs to have from a given state |ζ_y⟩ with success probability a_y; this minimal distance is given in Eq. A8 of Zalka (1999). The minimum over all possible states |ζ_y⟩ and success probabilities p_y is reached if all p_y = p and a_y = a (see Zalka (1999)). Due to normalization, we find Σ_y a_y ≤ d, where we have used that each item |z⟩ belongs to the rewarded space of d = (N − 1)!/((n − 1)!(N − n)!) different oracles. As a consequence, the minimum is achieved for a_y = d/D = n/N (see the discussion before Eq. A10 in Zalka (1999)), leading finally to the corresponding modification of Eq. A10 in Zalka (1999), which directly gives us Lemma 6. Also this bound is saturated by Grover's algorithm.
The above-stated optimality proof of Grover's algorithm can easily be generalized to the situation where we start in a state |ζ_y⟩ with success probability a_y = a = sin²φ and try to optimize the success probability p_y of V_y^J |ζ_y⟩ with the help of maximally J oracle queries. Lemma 5 is independent of the initial state and can therefore be applied directly. From Eq. 127, we find

(1/D) Σ_{y=1}^{D} ‖V_y^J |ζ_y⟩ − |ζ_y⟩‖² ≥ 2 − (2/D) Σ_y [√(p_y sin²φ) + √((1 − p_y) cos²φ)],

which is minimal if p_y = p ∀y. Thus, we find

(1/D) Σ_{y=1}^{D} ‖V_y^J |ζ_y⟩ − |ζ_y⟩‖² ≥ 2 − 2 [√(p sin²φ) + √((1 − p) cos²φ)].  (132)
Lemma 5 and Eq. 132 can be simultaneously saturated by starting in the state

|ζ_s⟩ = sin φ (1/√|W_y|) Σ_{z∈W_y} |z⟩ + cos φ (1/√|L_y|) Σ_{z∈L_y} |z⟩  (133)

and performing Grover iterations via the unitary of Eq. 134. Applying V^J with an empty oracle to |ζ_s⟩ does not change the success probability a_y, leading to a maximal success probability p = sin²(φ + 2Jν) with sin²ν = n/N.
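A direct simulation confirms that, starting from a state of the form of Eq. 133 with initial success probability sin²φ, each Grover iteration (phase-flip oracle plus reflection about the uniform state) advances the angle by 2ν. This is a sketch with illustrative sizes; the variable names are ours.

```python
import numpy as np

N, n, phi, J = 256, 4, 0.2, 3
nu = np.arcsin(np.sqrt(n / N))           # sin^2(nu) = n/N

state = np.empty(N)
state[:n] = np.sin(phi) / np.sqrt(n)     # uniform over the n rewarded items
state[n:] = np.cos(phi) / np.sqrt(N - n) # uniform over the N - n losing items

psi = np.full(N, 1.0 / np.sqrt(N))       # uniform state used for the reflection
for _ in range(J):
    state[:n] *= -1.0                            # phase-flip oracle on W_y
    state = 2.0 * psi * (psi @ state) - state    # reflection about |psi>

p = float(np.sum(state[:n] ** 2))
assert abs(p - np.sin(phi + 2 * J * nu) ** 2) < 1e-9  # p = sin^2(phi + 2*J*nu)
```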

Appendix F: Minimal cost
In the following, we consider a database search where the oracle Õ is exchanged at time t_0 for a new oracle O with rewarded space W ⊇ W̃. Before t_0, several Grover iterations G̃_ψ have been applied, creating the state |ψ(K)⟩ as given in Eq. 32. Then, the oracle changes. The probability to project |ψ(K)⟩ onto a target state directly after t_0 is given by p_K = ‖P_W |ψ(K)⟩‖² = sin²ε_K + cos²ε_K sin²φ_K.
In the following, we want to minimize the cost to find a target state. A typical cost for Grover's algorithm with j steps is given by C(j) = 2j + 1 (see e.g. Dunjko et al. (2016) and Sriarunothai et al. (2019)). The standard Grover algorithm considers only static oracles; hence, the coherent evolution has to be terminated when the oracle changes. Thus, in the standard case, we perform a measurement to finish the coherent time evolution based on G̃_ψ. If no target state has been found, we continue by performing Grover iterations with the new oracle, described by G_ψ. The average cost C̄_s for the standard Grover algorithm is then given by

C̄_s(j) = (2j + 1)/sin²[(2j + 1)ν]

for a fixed number of j steps and sin²ν = |W|/N. Here, we neglected all costs accumulated before t_0. The minimum cost is achieved for tan[(2j + 1)ν] ≈ 2(2j + 1)ν, which can be solved numerically. The numerical solution corresponds to applying Grover's algorithm until the probability to find a target is approximately sin²[(2j + 1)ν] ≈ 0.844, and leads to an average cost of C̄_s ≈ 1.38/ν. We generalize Grover's algorithm by continuing the coherent time evolution after the oracle has changed. The cost for continuing the Grover search with j applications of G_ψ is given by 2j + 1. A consecutive measurement will project the resulting state |ψ(K + j)⟩ onto a target state with probability (see Eq. 36)

p_{K+j} = sin²ε_K + cos²ε_K sin²(2jν + φ_K).
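The fixed-oracle minimization can be checked with a direct scan over j, assuming the standard repeat-until-success form of the average cost, C̄_s(j) = (2j + 1)/sin²[(2j + 1)ν]. The stopping probability ≈ 0.844 and the minimal cost ≈ 1.38/ν quoted above come out of the scan; ν is an illustrative small angle.

```python
import numpy as np

nu = 1e-3                                      # illustrative angle, sin^2(nu) = |W|/N
j = np.arange(0, 2000)
theta = (2 * j + 1) * nu
avg_cost = (2 * j + 1) / np.sin(theta) ** 2    # expected cost until success
best = int(np.argmin(avg_cost))

p_stop = np.sin((2 * best + 1) * nu) ** 2
assert abs(p_stop - 0.844) < 0.01              # stop at ~84.4% success probability
assert abs(avg_cost[best] * nu - 1.38) < 0.01  # minimal average cost ~1.38/nu
```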
If we do not find a target, we continue with a standard Grover search based on the new oracle, leading to the same expected average cost of 1.38/ν. Thus, we expect in total an average cost of

C̄_g(j) = 2j + 1 + (1 − p_{K+j}) · 1.38/ν

if we continue the coherent time evolution by another j Grover iterations G_ψ after the oracle has changed, before we perform a measurement. By promoting j to a real-valued variable, we can optimize C̄_g over j. Extremal points of C̄_g are found for values of j satisfying

sin[4jν + 2φ_K] = 1/(1.38 cos²ε_K).
This minimum corresponds to a probability to find a target state at the first measurement given by

p_opti(ε_K) = ½ [1 + sin²ε_K + √(cos⁴ε_K − 1/1.38²)],

which is independent of ν. As a consequence, the average cost of a continuous Grover algorithm optimized over j can be written as C̄_g ≈ f(ε_K, φ_K)/ν with the function

f = ½ [π − arcsin(1/(1.38 cos²ε_K)) − 2φ_K] + 1.38 · (1 − p_opti(ε_K)).
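The optimum of the continued search can be checked against the closed-form p_opti. This sketch uses the cost expression as reconstructed above, C̄_g(j) = 2j + 1 + (1 − p_{K+j}) · 1.38/ν, together with the example angles φ_K = 0.6 and ε_K ≈ 0.32 of Fig. 3; ν is an illustrative small angle.

```python
import numpy as np

nu, eps_K, phi_K = 1e-3, 0.32, 0.6          # example angles of Fig. 3

j = np.arange(0, 2000)
p_j = np.sin(eps_K) ** 2 + np.cos(eps_K) ** 2 * np.sin(2 * j * nu + phi_K) ** 2
cost = (2 * j + 1) + (1 - p_j) * 1.38 / nu  # continue j steps, then restart if needed
best = int(np.argmin(cost))

# closed-form optimum of the first-measurement probability
p_opti = 0.5 * (1 + np.sin(eps_K) ** 2
                + np.sqrt(np.cos(eps_K) ** 4 - 1 / 1.38 ** 2))
assert abs(p_j[best] - p_opti) < 2e-3       # independent of nu
assert best > 0                             # continuing beats j = 0, as 1.38 cos^2(eps_K) > 1
```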
Finally, we can determine the difference between the optimal costs of the continuous and the interrupted Grover algorithm. There exists a j such that C̄_g(j) ≤ C̄_s if 1.38 cos²ε_K > 1. For 1.38 cos²ε_K < 1, the minimal cost is achieved at j = 0, such that we do not obtain any advantage from continuing the Grover algorithm. In general, the maximal possible difference scales as C̄_s(ν) − C̄_g(ν) ∼ 1/ν ∼ √N if we assume that the classical probability to find a target state with the second oracle is given by sin²ν = |W|/N = 1/N (see Fig. 3).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.