1 Introduction

In the last few years, there has been much interest in combining quantum computing and machine learning algorithms. In the domain of quantum-enhanced machine learning, the objective is to utilize quantum effects to speed up or otherwise enhance the learning performance. The possibilities for this are numerous (Dunjko and Briegel 2018). For example, variational circuits can be used as a type of “quantum neural network” (more precisely, as function approximators which cannot be evaluated efficiently on a conventional computer), which can be trained as a supervised learning model for classification (Havlícek et al. 2019; Farhi and Neven 2018) or as an unsupervised learning (generative) model (Aimeur et al. 2013). There also exist various approaches where algorithmic bottlenecks of classical algorithms are sped up, via annealing methods (Farhi and Neven 2018), quantum linear-algebraic methods (Harrow et al. 2009), or via sampling enhancements (Dunjko et al. 2016). If the data is assumed to be accessible in a quantum form (a “quantum database”), then anything from polynomial to exponential speed-ups over classical algorithms may be possible (Biamonte et al. 2017; Dunjko and Briegel 2018; Chia et al. 2019; Gyurik et al. 2020).

Modern reinforcement learning (RL), an interactive mode of learning, combines aspects of supervised and unsupervised learning, and consequently allows for a broad spectrum of possibilities for how quantum effects could help.

In RL (Sutton and Barto 1998; Russell and Norvig 2003; Briegel and De las Cuevas 2012), we talk about a learning agent which interacts with an environment by performing actions and perceiving the environmental states, and which has to learn a “correct behavior”—the optimal policy—by means of a rewarding feedback signal. Unlike a stationary database, the environment has its own internal memory (a state), which the agent alters with its actions.

In quantum-enhanced RL, we can identify two basic scenarios: (i) where quantum effects can be used to speed up the internal processing (Paparo et al. 2014; Jerbi et al. 2019), and the interaction with the environment is classical, and (ii) where the interaction with the environment (and the environment itself) is quantum. The first framework for such “quantum-accessible” reinforcement learning modeled the environment as a sequence of quantum channels, acting on a communication register and the internal environmental memory—this constitutes a direct generalization of an unknown environment as a map-with-memory (other options are discussed shortly). In this case, the action of the environment cannot be described as a unitary mapping without considering the entire memory of the environment. In general, this memory is inaccessible to the agent. However, as discussed in Dunjko et al. (2016), under the assumption that the environmental memory can be purged or uncomputed in pre-defined periods, such blocks of interaction do become a (time-independent) unitary and are amenable to oracle computation techniques. For instance, in Dunjko et al. (2016), it was shown that the task of identifying a sequence of actions which leads to a first reward (a necessary step before any true learning can commence) can be sped up using quantum search techniques, and in Dunjko et al. (2017), it was shown how certain environments encode more complex oracles—e.g. Simon’s oracle and Recursive Fourier Sampling oracles—leading to exponential speed-ups over classical methods.

For the above techniques to work, however, the purging of all environmental memory is necessary to achieve time-independent unitary mappings. Real task environments, however, are typically not (strictly) episodic, motivating the question of what can be achieved in these more general cases. Here, we perform a first step towards a generalization by considering environments where the length of the episode can change, but where this change is signaled and an estimate of the episode lengths is known. This RL scenario is well-motivated and, fortunately, maps to an oracle identification problem in which the oracles change. While this generalizes standard oracular settings, it is still sufficiently simple that we can employ standard techniques (essentially amplitude amplification) and prove the optimality of our strategies for oracle identification problems with changing oracles and increasing rewarded space.

The paper is organized as follows. We first summarize the basic scenario of quantum-accessible reinforcement learning in Section 2 and discuss the mappings from constrained (episodic) RL scenarios to oracle identification. We show how this must be generalized for more involved environments, prompting our definition of the “changing oracle” problem stemming from certain classes of RL environments. In Section 3, we focus on the changing oracle problem, analyze the main regimes, and provide an upper bound for the average success probability for the case of a monotonically increasing rewarded space in Section 3.1. We prove in Section 3.2 that performing consecutive Grover iterations saturates this bound. We then discuss the more general case of only overlapping rewarded spaces in Section 3.3. In Section 3.4, we provide a numerical example demonstrating the possible advantages of consecutive Grover iterations with changing oracles. We conclude by summarizing our results, discussing possible extensions, and noting the implications of our results on the changing oracle problem for QRL in Section 4.

2 Quantum-accessible reinforcement learning

RL can be described as an interaction of a learning agent A with a task environment E via the exchange of messages out of a discrete set, which we call actions \(\mathcal {A}=\lbrace a_{j}\rbrace \) (performed by the agent) and percepts \(\mathcal {S}=\lbrace s_{j}\rbrace \) (issued by the environment). In addition, the environment also issues a scalar reward \(\mathcal {R}=\lbrace r_{j}\rbrace \), which informs the agent about the quality of its previous actions and can be defined as part of the percepts. The goal of the agent is to receive as much reward as possible in the long term.

In the theory of RL, the most studied environments are exactly describable by a Markov decision process (MDP). An MDP is specified by a transition mapping \(T: \mathcal{S}\times \mathcal{A}\times \mathcal{S}\rightarrow [0,1]\) and a reward function \(R: \mathcal{S}\times \mathcal{A}\rightarrow \mathbb{R}\). The transition mapping T specifies the probability of the environment transiting from state s to \(s^{\prime }\), provided the agent performed the action a, whereas the reward function R assigns a reward value to a given action of an agent in a given environmental state.
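As a minimal illustration (our own sketch, not part of the paper; the state and action names are hypothetical), an MDP can be represented directly by its two mappings:

```python
from dataclasses import dataclass, field
from typing import Dict, Tuple

State, Action = str, str

@dataclass
class MDP:
    # T[(s, a)] is a probability distribution over successor states s'
    T: Dict[Tuple[State, Action], Dict[State, float]] = field(default_factory=dict)
    # R[(s, a)] is the scalar reward for taking action a in state s
    R: Dict[Tuple[State, Action], float] = field(default_factory=dict)

# Toy two-state example: action "go" moves s0 -> s1 with probability 0.9
mdp = MDP(
    T={("s0", "go"): {"s1": 0.9, "s0": 0.1}, ("s1", "go"): {"s1": 1.0}},
    R={("s0", "go"): 0.0, ("s1", "go"): 1.0},
)
print(mdp.T[("s0", "go")]["s1"], mdp.R[("s1", "go")])
```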

Note that in standard RL, the agent does not have direct access to the mapping T; rather, to learn it, it must explore, i.e. act in the environment which is governed by T. On the other hand, in dynamic programming problems (intimately related to RL), one often assumes access to the functions T and R directly. This distinction leads to two different ways in which the agent-environment interaction can be quantized.

In recent works (Cornelissen 2018; Neukart et al. 2018; Levit et al. 2017), coherent access to the transition mapping T is assumed; in this case, quantum lower bounds for finding the optimal policy have been established (Ronagh 2019).

In this paper, we consider the other class of generalization, proposed first in Dunjko et al. (2016). Here, the agent-environment interaction is modeled as communication between an agent (A) and the environment (E) over a joint communication channel (C), thus in a three-partite Hilbert space \({\mathscr{H}}_{E}\otimes {\mathscr{H}}_{C} \otimes {\mathscr{H}}_{A} \), whose factors denote the memory of the environment, the communication channel, and the memory of the agent. The two parties A and E interact with each other by alternately performing completely positive trace-preserving (CPTP) maps on their own memory and the communication channel. Different AE combinations are defined as classically equivalent if their interactions are equivalent under constant measurements of C in the computational basis. For classical luck-favoring AE settings with a deterministic, strictly epochal environment E, it is possible to construct a classically equivalent quantum version AqEq which outperforms AE in terms of a given figure of merit, as shown in Dunjko et al. (2016).

2.1 Strictly epochal environments

This can be achieved by slightly modifying the maps so as to purge the environmental memory, which otherwise couples to the overall interaction and prevents a unitary time evolution of the agent’s memory. A detailed discussion of this procedure and of the necessary conditions on the setting is given in Dunjko et al. (2016). For our purposes, it is sufficient that the interaction of the agent with the environment can be effectively described as oracle queries. Specifically, if environments are strictly episodic, meaning that after some fixed number of steps the setting is reset to an initial condition, then the environmental memory can be uncomputed or released to the agent at the end of an epoch. With this modification (called memory scavenging and hijacking in earlier works), blocks of interactions effectively act as one time-independent unitary O, which can be queried using standard quantum techniques to obtain an advantage. We encode the different actions the agent can perform, like a ∈{0,1}, into orthogonal quantum states, i.e. as {|0〉, |1〉}. As a result, the complete sequence of actions a = a1,⋯ ,aM the agent executes during one epoch of length M is encoded in the product state |a〉 = |a1〉⊗|a2〉⊗⋯ ⊗|aM〉. For strictly epochal environments, it is possible to re-express the effect of the environment by a unitary oracle

$$ O {|\mathbf{a}\rangle} = \left\lbrace \begin{array}{cc} -{|\mathbf{a}\rangle}& \text{if } \mathbf{a} \in W \\ {|\mathbf{a}\rangle}&\text{else} \end{array}\right. $$
(1)

Here, W denotes the rewarded space containing all sequences of actions of length M which obtained a reward r(a) larger than a predefined limit.

A learning agent can use such an oraculized environment to find rewarded action sequences faster. For this purpose, the agent prepares an equal superposition state of all possible action sequences

$$ {|\psi\rangle}=\frac{1}{\sqrt{N}}\sum\limits_{i=1}^{N}{|\mathbf{a}_{i}\rangle} $$
(2)

with typically \(N=|\mathcal {A}|^{M}\). Then, it interacts with the environment and thus effectively queries the oracle O. Afterwards, it performs a reflection over the initial state |ψ〉. In this way, it can perform amplitude amplification by applying consecutive Grover iterations (Grover 1997; 1998; Brassard et al. 2000) Gψ|ψ〉 with

$$ G_{\psi}=\left( 2{|\psi\rangle}{\langle\psi|}-\mathbf{1}\right)O. $$
(3)

The agent can increase the probability to find a first rewarded sequence by performing several rounds of amplitude amplification.
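As a concrete (and purely illustrative) example, the following Python sketch simulates this procedure with a plain state vector; the binary action set, epoch length, and reward criterion are hypothetical choices, not taken from the paper:

```python
import numpy as np
from itertools import product

A = [0, 1]          # toy action set
M = 4               # epoch length
sequences = list(product(A, repeat=M))
N = len(sequences)  # N = |A|**M

# Hypothetical rewarded set W: sequences starting with the actions (1, 0)
rewarded = np.array([s[:2] == (1, 0) for s in sequences])

oracle = np.where(rewarded, -1.0, 1.0)   # phase-flip oracle O of Eq. 1
psi = np.full(N, 1 / np.sqrt(N))         # equal superposition of Eq. 2
state = psi.copy()

def grover_iteration(state):
    """One Grover iteration: oracle query followed by a reflection about |psi>."""
    state = oracle * state
    return 2 * psi * (psi @ state) - state

for _ in range(int(np.pi / 4 * np.sqrt(N / rewarded.sum()))):
    state = grover_iteration(state)

print("probability of measuring a rewarded sequence:",
      np.sum(np.abs(state[rewarded]) ** 2))
```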

The first rewarded action sequence found is in general not the optimal one. However, it provides information which can help to find an optimal strategy. A quantum-enhanced learning agent can use the found rewarded sequence of actions to learn by using classical policy updates. Thus, quantum-enhanced reinforcement learning combines quantum search and classical reinforcement learning, as, e.g., demonstrated experimentally in Saggio et al. (2021). An agent with access to such an oracle thus finds the first rewarded sequence faster on average, which, in so-called luck-favoring settings (Dunjko et al. 2016), also increases the probability of being rewarded in the future. This approach, which leads to a quadratic speed-up in exploration, can be applied to many settings, and even super-polynomial or exponential improvements can be obtained for special RL settings (Dunjko et al. 2017).

2.2 Beyond strictly epochal environments

The simplest scenarios of task environments which cannot be reformulated as a (single) oracular problem are arguably those which involve two oracles. We will consider this slight generalization in this work, as it still allows for a relatively simple treatment. This setting includes environments which simply change as a function of time, such as reinforcement learning for managing power consumption or channel allocation in cellular telephone systems (Han 2018; Tesauro et al. 2008; da Silva et al. 2006; Singh and Bertsekas 1996). If the instances of change are known, the blocking is again possible, in which case we obtain a setting where we can realize access to an oracle which, however, changes as a function of time. Closely related to this is the more specific case of a variable episode length. This setting, although more special, is of particular interest in RL. Episodic environments are usually constructed by taking an arbitrary environment and establishing a cut-off after a certain number of steps. The resulting object is again an environment derived from the initial setting. This construction is special in that, given any sequence of actions a which is rewarded in a derived environment with cut-off after m steps, any sequence of actions which has a as a prefix is rewarded in the derived environment with a larger cut-off M > m. An example of such an environment is the grid-world problem, which consists in navigating a maze where the task is to find a specific rewarded location (Russell and Norvig 2003; Sutton and Barto 1998; Melnikov et al. 2018).

The classical scenarios described above, under oraculization techniques, map onto the changing oracle problem (described in detail in the following section) where at a given time an oracle \(\tilde {O}\) is exchanged for a different oracle O. This generalization in particular captures the scenario of a single increment of the epoch length from m to M > m for search in QRL. In this special case, the rewarded space \(\tilde {W}\) of \(\tilde {O}\) is a subset of the rewarded space W of O, as illustrated by the sketch below. We will prove that the optimal algorithm in this case is given by a Grover search with a continuous coherent time evolution using both oracles consecutively. However, continuing the coherent time evolution of a Grover search can be suboptimal when \(\tilde {W}\not \subset W\). The arguments in the next section can be applied iteratively to describe multiple changes/increments of the rewarded space.
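The following sketch illustrates this containment for a padded search space of fixed length M; the binary action set and the reward criterion are hypothetical and serve only to demonstrate the prefix argument:

```python
from itertools import product

A = [0, 1]
m, M = 2, 4          # shorter and longer episode cut-offs

def reaches_goal(seq):
    """Hypothetical reward criterion: two consecutive actions '1' have been issued."""
    return any(seq[i:i + 2] == (1, 1) for i in range(len(seq) - 1))

X = list(product(A, repeat=M))                    # fixed search space of length-M sequences
W_tilde = {x for x in X if reaches_goal(x[:m])}   # rewarded within the first m steps
W = {x for x in X if reaches_goal(x[:M])}         # rewarded within M steps

print(W_tilde <= W, len(W_tilde), len(W))         # True: the rewarded space only grows
```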

3 The changing oracle problem

The situation above can be abstracted as a “changing oracle” problem which we specify here. As suggested, we consider an “oracle” to be a standard phase-flip oracle, such that O|x〉 = (− 1)f(x)|x〉, where \(f: X \rightarrow \{0,1\}\) is a characteristic function on a set of elements X, with |X| = N; in our case X denotes sequences of actions of some prescribed length. The rewarded set is denoted by W = {x ∈ X | f(x) = 1}, and the states |x〉 denote a (known) orthonormal basis.

In the changing oracle problem, we consider two oracles \(\tilde {O}\) and O, with respective rewarded sets \(\tilde {W}\) and W. The problem specifies two time intervals, or phases, in which only one of the two oracles is available: time-steps 1 ≤ k ≤ K, during which only access to \(\tilde {O}\) is available, and time-steps K + 1 ≤ k ≤ K + J, during which only access to the second oracle O is available.

For simplicity, we assume that the values of K, J, N as well as the sizes of the rewarded sets \(|\tilde {W}|=\tilde {n}\) and |W| = n are known in advance. In general, the objective is to either output an \(x \in \tilde {W}\) before time-step K, or to output an x ∈ W in the remainder of the time. We will refer to both such x as solutions. However, the exact time when the oracle changes, and thus the values of K and J, is not important and can be unknown, as we show later. Unless K is in \({\Omega }(\sqrt {N/\tilde {n}})\), attempts to find a solution in the first phase will in general have a very low success probability no matter what we do, due to the optimality of Grover’s search. However, even in this case, having access to \(\tilde {O}\) in the first phase may improve our chances to succeed in the second. This is the setting we consider.

The optimal strategies vitally depend on the known relationship between W and \(\tilde {W}\). We will first briefly discuss all possible settings before focusing on the most interesting cases. Note that in this paper we are not looking for a strategy which uses a minimal number of queries until a solution is found, but rather for a strategy which maximizes the success probability for a fixed number of queries. It is, however, also known that Grover’s search achieves the fastest increase of the success probability (Zalka 1999). The algorithms described here can also be used to optimize the number of queries; however, the corresponding figure of merit to be optimized has to be defined precisely for such tasks.

In the worst case, there may be no known correlation between W and \(\tilde {W}\). In this case, we have no advantage from having access to \(\tilde {O}\), and the optimal strategy is a simple Grover’s search in the second phase.

Another case of limited interest is when W and \(\tilde {W}\) are known to be disjoint. In this case, the first oracle might be used to constrain the search space to the complement \(\tilde {W}^{c},\) which contains W. The lower bounds for this setting are easy to find: we can assume that at time-step K the set \(\tilde {W}\) is made known (any state we could have generated using \(\tilde {O}\) can be generated with this information). However, in this case, the optimal strategy is still to simply apply quantum search over the restricted space \(\tilde {W}^{c}\) if it can be fully specified. But since we most often encounter cases where \(\tilde {n}=|\tilde {W}|\) is (very) small compared to N, the improvement that could be obtained is also minor.

Similar reasoning also applies when the sets are not disjoint but the intersection is small compared not just to N, but to |W| and \(|\tilde {W}|\). In this case, we can again find lower bounds by assuming that the non-overlapping complement becomes known. In addition, we assume that we can prepare any quantum state which has an upper bound on the overlap with any state corresponding to the intersection, \(x \in W \cap \tilde {W}\). Then, the optimal strategy is again governed by the optimality of Grover-based amplitude amplification.

This brings us to the more interesting situations, specifically, when the overlap \(W_{a}=W \cap \tilde {W}\) is large (see Appendix A for the exact definition).

Due to our motivation stemming from the aforementioned RL settings, we are particularly interested in the case \(\tilde {W} \subseteq {W},\) for which we give the optimal strategy, which turns out to be essentially Grover amplification where we “pretend” that the oracle had not changed.

The other case, \({W} \subseteq \tilde {W}\), and the more generic case where the overlap is large but no containment holds, are less interesting for our purpose, so we briefly discuss the possible strategies without proofs of optimality.

3.1 Increasing rewarded spaces: upper bound on average final success probabilities

In the following, we consider the above-described changing oracle problem with monotonically increasing rewarded spaces \(\tilde {W} \subseteq {W}\) and derive upper bounds for the maximal average success probability pK+J of finding an element x ∈ W at the end of the second phase. The changing oracle problem is outside the standard settings for which various lower bounding techniques have been developed (Arunachalam et al. 2019; Ambainis 2002; 2006), but the setting is simple enough to be treatable by modifying and extending techniques introduced to lower bound unstructured search problems (Zalka 1999).

To find upper bounds of the success probability, we first prove that we can restrict our search for optimal strategies to averaged strategies as defined in Appendix B. This induces certain symmetries which restrict the optimization to an optimization of two angles α and Δ, one for each phase. Finally, we derive bounds α(K) and Δ(J) for these angles depending on K, J which in turn restrict the optimal success probability pK+J.

The search for an optimal strategy can be limited to strategies based on pure states and unitary time evolutions since it is possible to purify any search strategy by going from a Hilbert space \({\mathscr{H}}_{A}\) spanned by {|x〉} into a larger Hilbert space \({\mathscr{H}}_{AB}={\mathscr{H}}_{A}\otimes {\mathscr{H}}_{B}\). As a consequence, every search strategy T = ({Uk},|ψ(0)〉) based on K + J oracle queries can be described by a set of K + J unitaries Uk and an initial state |ψ(0)〉. Our knowledge about possible rewarded items after k oracle queries is then encoded in the quantum state

$$ {|\psi(k)\rangle}=U_{k}O_{k} {\cdots} U_{1}O_{1}{|\psi(0)\rangle} $$
(4)

with \(O_{k}=\tilde {O}\) for 1 ≤ k ≤ K and Ok = O for K + 1 ≤ k ≤ K + J. The success probability at the end of the second phase is then given by

$$ p_{K+J}=\text{Tr }[P_{\mathcal{W}}{|\psi(K+J)\rangle}{\langle\psi(K+J)|}] $$
(5)

with

$$ P_{\mathcal{W}}=\sum\limits_{x\in W} {|x\rangle}_{A}{\langle x|}\otimes \mathbf{1}_{B}. $$
(6)

Our goal is to maximize the success probability pK+J averaged over all possible functions \(\tilde {f}(x)\) and f(x) with fixed sizes of the rewarded spaces \(|\tilde {W}|=\tilde {n}\) and \(|{W}|={n}\geq \tilde {n}\). Different realizations of \(\tilde {f}(x)\) and f(x) can be generated by substituting all oracle queries Ok by σOkσ and the projector \(P_{\mathcal {W}}\) by \(\sigma P_{\mathcal {W}}\sigma ^{\dagger }\), where σ denotes a permutation operator acting on \({\mathscr{H}}_{A}\). As a consequence, an optimal strategy is a strategy T which maximizes

$$ \bar{p}_{T}=\frac{1}{N!}\sum\limits_{\sigma \in {\Sigma}_{A}}p_{T}(\sigma) $$
(7)

with

$$ \begin{array}{@{}rcl@{}} p_{T}(\sigma)&=&\text{Tr }\left[\sigma P_{\mathcal{W}}\sigma^{\dagger} {|\psi(k,\sigma)\rangle}{\langle\psi(k,\sigma)|}\right] \end{array} $$
(8)
$$ \begin{array}{@{}rcl@{}} {|\psi(k,\sigma)\rangle}_{AB}&=&U_{k}\sigma O_{k}\sigma^{\dagger}{\cdots} U_{1} \sigma O_{1}\sigma^{\dagger} {|\psi(0)\rangle}_{AB}\end{array} $$
(9)

at the end of the second phase such that k = K + J. Here, ΣA denotes the set of all possible permutations in \({\mathscr{H}}_{A}\).

We can further limit the search for optimal strategies to averaged strategies \(\bar {T}\) as defined in Appendix B because

Lemma 1

The success probability \(p_{\bar {T}}(\sigma )\) of the averaged strategy \(\bar {T}\) is equal to the average success probability \(\bar {p}_{T}\) of the strategy T for every permutation σ ∈ΣA.

as proven in Appendix B. In the following, we consider only average strategies such that \(p=\bar {p}\) and therefore omit the “bar” denoting an average value.

In addition, these strategies lead to symmetry properties of the unitaries Uk and resulting states |ψ(k)〉 under permutations σ as outlined in detail in Appendix B. Therefore, we can restrict the initial states |ψ(0)〉 to states with equal probability \(q(x)=\text {Tr }[({|\psi \rangle }_{AB}{\langle \psi |})\cdot \left ({|x\rangle }_{A}{\langle x|}\otimes \mathbf {1}_{B}\right )]\) for all elements x. An example for such a symmetric state is the initial state of the Grover search algorithm given by the equal superposition \({\sum }_{x}{|x\rangle }_{A}/\sqrt {N}\). Yet, many other symmetric initial states are possible due to the additional degrees of freedom resulting from the additional Hilbert space \({\mathscr{H}}_{B}\). The unitaries Uk cannot break the symmetry between elements |x〉. Only the oracles \(\tilde {O}\) and O can break the symmetry between rewarded and not-rewarded elements.

These symmetry properties limit the optimization over all strategies to an optimization of a few parameters, or angles, as we will outline below. These parameters are then again upper bounded by the optimality of Grover search.

We can decompose the state |ψ(K)〉 at the end of the first phase into a rewarded and a not-rewarded component with respect to the eigenstates of the second oracle O. The not-rewarded or losing component |ℓ〉 = |ℓs〉 is symmetric with respect to elements \(x\in {\mathscr{L}}\). However, the rewarded component |w〉 is not completely symmetric because \(\tilde {O}\) breaks the symmetry between elements \(x\in \tilde {W}\cap W\) and \(x\in W\setminus \tilde {W}\). Thus, we can further decompose the rewarded component into a symmetric component |ws〉 and a component |w⊥〉 orthogonal to it (see Appendix C). As a result, the state |ψ(K)〉 is given by

$$ {|\psi(K)\rangle}= \cos \varepsilon {|\phi_{s}\rangle}+ \sin \varepsilon {|\phi_{\perp} \rangle} $$
(10)

with the symmetric component

$$ {|\phi_{s}\rangle}=\sin \phi {|w_{s}\rangle}+\cos \phi {|\ell_{s}\rangle}\\ $$
(11)

and the orthogonal rewarded component

$$ {|\phi_{\perp} \rangle}={|w_{\perp} \rangle}. $$
(12)

The angles ε and ϕ are parameters depending on the strategy performed during the first phase. Their values are bounded by the success probability at the end of the first phase given by

$$ p_{K}=\cos^{2}\varepsilon\sin^{2}\phi+\sin^{2}\varepsilon. $$
(13)

The time evolution during the second phase, described by \(V=U_{K+J}O{\cdots} U_{K+1}O\), is also symmetric and thus transforms the symmetric component |ϕs〉 into a symmetric component and |w⊥〉 into a component orthogonal to V |ϕs〉. As a consequence, the final success probability pK+J can be divided into

$$ p_{K+J}=\cos^{2} (\varepsilon) p_{s}+ \sin^{2}(\varepsilon) p_{\perp} $$
(14)

with (see Appendix C)

$$ \begin{array}{@{}rcl@{}} p_{s}&=&\text{Tr }\left[P_{\mathcal{W}}V{|\phi_{s}\rangle}{\langle\phi_{s}|}V^{\dagger} \right] \end{array} $$
(15)
$$ \begin{array}{@{}rcl@{}} p_{\perp}&=&\text{Tr }\left[P_{\mathcal{W}}V{|w_{\perp}\rangle}{\langle w_{\perp}|}V^{\dagger} \right]. \end{array} $$
(16)

The reward probability p⊥ of the orthogonal part is maximal, p⊥ = 1, which can be achieved if, e.g., V acts on |w⊥〉 as the identity. We parametrize the reward probability of the symmetric part via \( p_{s}=\sin \limits ^{2}(\phi +{{{\varDelta }}}), \) where the parameter Δ quantifies the increase of ps during the second phase. Thus, Δ depends on the strategy performed during the second phase. We can bound the final success probability via

$$ \begin{array}{@{}rcl@{}} p_{K+J}&\leq&\cos^{2} (\varepsilon) \sin^{2}(\phi+{{{\varDelta}}})+\sin^{2}(\varepsilon) \end{array} $$
(17)
$$ \begin{array}{@{}rcl@{}} &\leq& 1-\cos^{2}(\varepsilon)\cos^{2}\left( \phi+{{{\varDelta}}}\right). \end{array} $$
(18)

With the help of Eq. 13, we can rewrite \(\cos \limits ^{2} \varepsilon \) via \( \cos \limits ^{2}\varepsilon =(1-p_{K})/\cos \limits ^{2} \phi \) leading to

$$ p_{K+J}\leq 1-(1-p_{K})\frac{\cos^{2}(\phi+{{{\varDelta}}})}{\cos^{2} \phi}. $$
(19)

As a consequence, pK+J is monotonically increasing with pK, ϕ, and Δ provided 0 ≤ ϕ ≤ π/2 and 0 ≤ ϕ + Δ ≤ π/2. Thus, an optimal strategy optimizes pK and ϕ during the first phase and Δ during the second phase.

If we denote by

$$ \sin^{2}\alpha = \text{Tr }[P_{\tilde{W}}{|\psi(K)\rangle}{\langle\psi(K)|}] $$
(20)

the reward probability at the end of the first phase according to the first oracle \(\tilde {O}\), then the success probability according to the second oracle O at this point is given by

$$ p_{K}=\sin^{2}\alpha +\cos^{2}\alpha \frac{n_+}{n_++n_{\ell}} $$
(21)

following Eqs. 79 and 80 in Appendix C. Here, \(n_{+}=|\mathcal {W}_{+}|\) with \(\mathcal {W}_{+}=\tilde {{\mathscr{L}}}\cap W\) denotes the number of items x marked as rewarded only by the second oracle O, and \(n_{\ell }=|\mathcal {L}|\) the number of losing items according to O. Thus, pK increases monotonically with α for 0 ≤ α ≤ π/2.

The angle ϕ is also upper bounded by α via (see Appendix C, Eq. 93)

$$ \tan \phi \leq \tan \alpha \sqrt{\frac{\tilde{n}(n_++n_{\ell})}{(\tilde{n}+n_+)n_{\ell}}}+\sqrt{\frac{n_+^{2}}{(\tilde{n}+n_+)n_{\ell}}}. $$
(22)

This bound also increases monotonically with α for 0 ≤ α ≤ π/2. As a result, the final success probability is upper bounded by the maximal achievable angles α (defined via the strategy during the first phase) and Δ (during the second phase) within the range 0 ≤ α ≤ π/2 and 0 ≤ ϕ(α) + Δ ≤ π/2.

The angles α and Δ can be upper bounded with the help of a generalization of the optimality proof of Grover’s algorithm from Zalka (1999), which can be stated in the following way:

Lemma 2

Given an oracle O which marks exactly n out of N items as rewarded, performing Grover’s quantum search algorithm gives the maximal possible average success probability \(p_{K}=\sin \limits ^{2}[(2K+1)\nu ]\) for 0 < K < π/(4ν) − 1/2, with \(\sin \limits ^{2} \nu =n/N\).

The proof of this lemma follows the optimality proof from Zalka for n = 1 given in Zalka (1999). We outline the differences in the proof for n > 1 in Appendix E. In general, performing k queries not only limits the maximal success probability to \(p\leq \sin \limits ^{2}[(2k+1)\nu ]\) when starting from a random guess, equal to \(p_{0}=\sin \limits ^{2} \nu =n/N\), but also to \(p\leq \sin \limits ^{2}[2k\nu +\phi ]\) when starting from any fixed initial success probability \(p_{0}=\sin \limits ^{2}\phi \), as we also outline in Appendix E.

As a consequence, the maximal angle α is bounded by \(\alpha \leq (2K+1)\tilde {\nu }\) with \(\sin \limits ^{2}\tilde {\nu }=\tilde {n}/N\), which follows directly from Lemma 2 provided \((2K+1)\tilde {\nu }\leq \pi /2\). Similarly, the reward probability ps is limited by \(\sin \limits ^{2}(\phi +{{{\varDelta }}})\) with Δ ≤ 2Jν, provided 2Jν + ϕ ≤ π/2.
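For concreteness, the bound can be evaluated numerically by combining Eqs. 19, 21, and 22 with the limits \(\alpha \leq (2K+1)\tilde {\nu }\) and \({{{\varDelta}}}\leq 2J\nu \); the parameter values in the sketch below are illustrative and not taken from the paper:

```python
import numpy as np

def upper_bound_p(N, n_tilde, n, K, J):
    """Upper bound on the final success probability p_{K+J} for W_tilde ⊆ W."""
    n_plus = n - n_tilde                 # items rewarded only by the second oracle
    n_l = N - n                          # losing items according to O
    nu_t = np.arcsin(np.sqrt(n_tilde / N))
    nu = np.arcsin(np.sqrt(n / N))

    alpha = min((2 * K + 1) * nu_t, np.pi / 2)                               # Lemma 2
    p_K = np.sin(alpha)**2 + np.cos(alpha)**2 * n_plus / (n_plus + n_l)      # Eq. 21
    phi = np.arctan(np.tan(alpha)
                    * np.sqrt(n_tilde * (n_plus + n_l) / ((n_tilde + n_plus) * n_l))
                    + np.sqrt(n_plus**2 / ((n_tilde + n_plus) * n_l)))       # Eq. 22
    delta = 2 * J * nu
    if phi + delta >= np.pi / 2:
        return 1.0
    return 1 - (1 - p_K) * np.cos(phi + delta)**2 / np.cos(phi)**2           # Eq. 19

print(upper_bound_p(N=4096, n_tilde=4, n=16, K=10, J=7))
```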

3.2 Grover search is optimal for monotonically increasing rewarded spaces

In this section, we determine the (average) success probability pK+J obtained for the changing oracle problem defined here via a generalized Grover algorithm and show that it saturates the bound derived in Section 3.1. Grover’s algorithm starts in an equal superposition state given by

$$ \begin{array}{@{}rcl@{}} {|\psi(0)\rangle}&=&\frac{1}{\sqrt{N}}\sum\limits_{x=1}^{N} {|x\rangle} \end{array} $$
(23)
$$ \begin{array}{@{}rcl@{}} &=&\sin\tilde{\nu} {|\tilde{w}\rangle}+\cos\tilde{\nu} {|\tilde{\ell}\rangle} \end{array} $$
(24)

with

$$ \begin{array}{@{}rcl@{}} {|\tilde{w}\rangle}&=&\frac{1}{\sqrt{\tilde{n}}}\sum\limits_{x\in \tilde{\mathcal{W}}} {|x\rangle} \end{array} $$
(25)
$$ \begin{array}{@{}rcl@{}} {|\tilde{\ell}\rangle}&=&\frac{1}{\sqrt{N-\tilde{n}}}\sum\limits_{x\in \tilde{\mathcal{L}}} {|x\rangle}. \end{array} $$
(26)

All unitaries Uk for 1 ≤ k ≤ K + J are given by

$$ U_{k}=2{|\psi(0)\rangle}{\langle\psi(0)|}-\mathbf{1}. $$
(27)

The time evolution during the first phase with oracle \(\tilde {O}\) leads to a rotation of |ψ(0)〉 by an angle \(2K\tilde {\nu }\) in the plane spanned by \({|\tilde {w}\rangle }\) and |ψ(0)〉 as depicted in Fig. 1. The state at the end of the first phase is given by

$$ {|\psi(K)\rangle}=\sin[(2K+1)\tilde{\nu}]{|\tilde{w}\rangle}+\cos[(2K+1)\tilde{\nu}]{|\tilde{\ell}\rangle} $$
(28)

and thus saturates the upper limit \(\alpha =(2K+1)\tilde {\nu }\) leading to a maximal pK and α. To describe the time evolution during the second phase, we perform a basis transformation into the new basis

$$ \begin{array}{@{}rcl@{}} {|\ell_{s}\rangle}&=&\frac{1}{\sqrt{n_{\ell}}}\sum\limits_{x\in \mathcal{L}}{|x\rangle} \end{array} $$
(29)
$$ \begin{array}{@{}rcl@{}} {|w_{s}\rangle}&=&\frac{1}{\sqrt{n}}\sum\limits_{x\in \mathcal{W}}{|x\rangle} \end{array} $$
(30)
$$ \begin{array}{@{}rcl@{}} {|w_{\perp}\rangle}&=&\sqrt{\frac{n_{+}}{\tilde{n}(\tilde{n}+n_{+})}}\sum\limits_{x\in \tilde{\mathcal{W}}} {|x\rangle}-\sqrt{\frac{\tilde{n}}{n_{+}(\tilde{n}+n_{+})}}\sum\limits_{x\in \mathcal{W}_{+}} {|x\rangle} \end{array} $$
(31)

with \(\mathcal {W}_{+}=\tilde {{\mathscr{L}}}\cap \mathcal {W}\) and n+ = |W+|. The states |ws〉 and |ℓs〉 are symmetric under permutations which exchange only rewarded states with rewarded states and losing states with losing states, similar to the symmetry properties of averaged strategies discussed in Appendix C. The state |ψ(K)〉 is given in this new basis by

$$ {|\psi(K)\rangle}=\cos \varepsilon \left( \sin \phi {|w_{s}\rangle}+\cos \phi {|\ell_{s}\rangle}\right)+\sin \varepsilon {|w_{\perp} \rangle} $$
(32)

with the angle ϕ defined via

$$ \begin{array}{@{}rcl@{}} \tan\phi &=& \frac{{\langle w_{s}|\psi(K)\rangle}}{{\langle \ell_{s}|\psi(K)\rangle}} \end{array} $$
(33)
$$ \begin{array}{@{}rcl@{}} &=&\tan \alpha \sqrt{\frac{\tilde{n}(n_{+}+n_{\ell})}{(\tilde{n}+n_{+})n_{\ell}}}+\sqrt{\frac{n_{+}^{2}}{(\tilde{n}+n_{+})n_{\ell}}} \end{array} $$
(34)

saturating the bound of Eq. 22 (Eq. 93 in Appendix C). The angle ε is given by

$$ \sin \varepsilon =\sqrt{\frac{n_+}{n_++\tilde{n}}}\left[\sin(\alpha)-\sqrt{\frac{\tilde{n}}{n_++n_{\ell}}}\cos(\alpha)\right]. $$
(35)
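As a numerical consistency check of the decomposition in Eqs. 29–35 (with toy sizes chosen purely for illustration, not taken from the paper), the following sketch runs the first phase with \(\tilde{O}\) and compares the resulting angles with Eqs. 34 and 35:

```python
import numpy as np

N, n_tilde, n_plus, K = 2048, 3, 5, 10       # toy sizes (illustrative only)
n, n_l = n_tilde + n_plus, N - (n_tilde + n_plus)
W_tilde, W_plus, W = np.arange(n_tilde), np.arange(n_tilde, n), np.arange(n)

psi0 = np.full(N, 1 / np.sqrt(N))
o_tilde = np.ones(N); o_tilde[W_tilde] = -1  # first-phase oracle

state = psi0.copy()
for _ in range(K):                           # K Grover iterations with O_tilde
    state = 2 * psi0 * (psi0 @ (o_tilde * state)) - o_tilde * state

# basis states of Eqs. 29-31
w_s = np.zeros(N); w_s[W] = 1 / np.sqrt(n)
l_s = np.zeros(N); l_s[n:] = 1 / np.sqrt(n_l)
w_perp = np.zeros(N)
w_perp[W_tilde] = np.sqrt(n_plus / (n_tilde * (n_tilde + n_plus)))
w_perp[W_plus] = -np.sqrt(n_tilde / (n_plus * (n_tilde + n_plus)))

alpha = (2 * K + 1) * np.arcsin(np.sqrt(n_tilde / N))
phi_num, eps_num = np.arctan((w_s @ state) / (l_s @ state)), np.arcsin(w_perp @ state)
phi_ana = np.arctan(np.tan(alpha) * np.sqrt(n_tilde * (n_plus + n_l) / (n * n_l))
                    + np.sqrt(n_plus**2 / (n * n_l)))                         # Eq. 34
eps_ana = np.arcsin(np.sqrt(n_plus / (n_plus + n_tilde))
                    * (np.sin(alpha) - np.sqrt(n_tilde / (n_plus + n_l)) * np.cos(alpha)))  # Eq. 35
print(np.allclose([phi_num, eps_num], [phi_ana, eps_ana]))                    # True
```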

Fig. 1 Visualization of the time evolution of |ψ(0)〉 under Grover iterations with changing oracles \(O_{k}=\tilde {O}\) for 1 ≤ k ≤ K and Ok = O for K + 1 ≤ k ≤ K + J and \(\tilde {\mathcal {W}}\subseteq \mathcal {W}\). The rewarded space (red plane) of O is spanned by {|ws〉,|w⊥〉}. The equal superposition state |ψ(0)〉 is rotated along the blue circle by an angle \(2K\tilde {\nu }\) during the first phase, leading to the state |ψ(K)〉. Consecutively, this state is rotated along the green circle, changing only its component |ϕs〉 but not |w⊥〉

The time evolution during the second phase, given by the oracle O and Uk as defined in Eq. 27, leads to a rotation of |ψ(K)〉 by an angle 2Jν in a plane parallel to the one spanned by |ψ(0)〉 and |ws〉, as depicted in Fig. 1. As a consequence, the final state is given by

$$ \begin{array}{@{}rcl@{}} {|\psi(K+J)\rangle}&=&\cos \varepsilon\left[\sin(\phi+2J\nu){|w_{s}\rangle}+\cos(\phi+2J\nu){|\ell_{s}\rangle}\right]\\ &+&(-1)^J\sin \varepsilon {|w_{\perp} \rangle} \end{array} $$
(36)

leading to the maximal possible angle Δ = 2Jν and the maximal \(p_{\perp }=1\) (contributing \(\sin \limits ^{2}\varepsilon \) to the success probability), and thus to the maximal possible (average) success probability pK+J.

As a result, performing consecutive Grover iterations in the first and second phase with K + J oracle queries in total leads to the maximal possible average success probability pK+J, provided \(\alpha =(2K+1)\tilde {\nu }\leq \pi /2\) and ϕ(α) + 2Jν ≤ π/2.
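To illustrate this result, the following state-vector sketch (toy sizes and hypothetical rewarded sets, not taken from the paper) compares the two-phase changing-oracle Grover search with a search that uses only the second oracle for the same number of second-phase queries:

```python
import numpy as np

N, n_tilde, n = 1024, 2, 8                       # toy sizes (illustrative only)
W_tilde, W = np.arange(n_tilde), np.arange(n)    # W_tilde ⊆ W

psi0 = np.full(N, 1 / np.sqrt(N))
o_tilde = np.ones(N); o_tilde[W_tilde] = -1      # first-phase oracle
o = np.ones(N); o[W] = -1                        # second-phase oracle

def grover_step(state, oracle):
    state = oracle * state
    return 2 * psi0 * (psi0 @ state) - state     # reflection about |psi(0)>

K, J = 8, 6
state = psi0.copy()
for _ in range(K):
    state = grover_step(state, o_tilde)          # phase 1
for _ in range(J):
    state = grover_step(state, o)                # phase 2
print("changing-oracle search, p_(K+J):", round(np.sum(state[W] ** 2), 3))

state = psi0.copy()
for _ in range(J):
    state = grover_step(state, o)                # J queries to O alone
print("second oracle alone, p_J:       ", round(np.sum(state[W] ** 2), 3))
```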

If more queries are available, such that \((2K+1)\tilde {\nu }> \pi /2\) or ϕ + 2Jν > π/2, then it is possible to over-rotate the state |ψ〉; in this case, applying \(\tilde {O}\) or O less often, or performing another algorithm such as fixed-point search (Yoder et al. 2014), leads to a higher success probability.

In general, the change of |ψ(k)〉 which can be created with a single oracle query O (\(\tilde {O}\)) is limited by \(|{\langle \psi (k+1)|\psi (k)\rangle }|\geq \cos \limits 2\nu \) (\({\cos \limits } 2\tilde {\nu }\)). The maximal possible difference between |ψ(0)〉 and |ψ(K + J)〉 achievable under this constraint would require that all states |ψ(k)〉 lie within a single plane (see the discussion in Zalka (1999)). However, changing the oracle in Grover’s algorithm leads to a change or tilt of the rotation plane/axis, as visualized in Fig. 1. Nevertheless, performing Grover iterations is the optimal strategy, as we have proven. In addition, changing the oracle creates a component |ϕ⊥〉 which stays invariant under consecutive Grover iterations with the new oracle. Luckily, this component contains only rewarded items, such that it does not prevent us from further increasing the success probability with Grover iterations if \(\mathcal {\tilde {W}}\subseteq \mathcal {W}\). As a consequence, the optimality of Grover’s algorithm in the case of a changing oracle might not be surprising, but it is also not obvious, especially because performing Grover’s algorithm with the maximal number of available oracle queries is not necessarily optimal if \(\tilde {\mathcal {W}}\) and \(\mathcal {W}\) only share a large overlap but \(\tilde {\mathcal {W}}\not \subseteq \mathcal {W}\).

3.3 Grover iterations for \(\tilde {\mathcal {W}}\not \subseteq \mathcal {W}\)

In the following, we investigate the performance of Grover’s algorithm if \(\tilde {\mathcal {W}}\) and \(\mathcal {W} \) share a large overlap (see Appendix C) but \(\tilde {\mathcal {W}}\not \subseteq \mathcal {W}\). We will show that performing the maximal number K of oracle queries during the first phase is not always optimal depending on the number of available queries J in the second phase.

If \(\tilde {\mathcal {W}}\not \subset \mathcal {W}\), then the perpendicular component |ϕ⊥〉 of Eq. 10 also includes a losing component |ℓ⊥〉, such that the state |ψ(K)〉 can be written as

$$ \begin{array}{@{}rcl@{}} {|\psi(K)\rangle}&=& \cos \varepsilon {|\phi_{s}\rangle}+ \sin \varepsilon {|\phi_{\perp}\rangle} \end{array} $$
(37)
$$ \begin{array}{@{}rcl@{}} {|\phi_{s}\rangle}&=&\sin \phi {|w_{s}\rangle}+\cos \phi {|\ell_{s}\rangle} \end{array} $$
(38)
$$ \begin{array}{@{}rcl@{}} {|\phi_{\perp}\rangle}&=&\sin \chi {|w_{\perp}\rangle}+\cos \chi {|\ell_{\perp}\rangle}. \end{array} $$
(39)

Applying Grover iterations with unitaries Uk as defined in Eq. 27 does not change the success probability of the component |ϕ⊥〉. It only changes the success probability of the component |ϕs〉, leading to

$$ \begin{array}{@{}rcl@{}} p_{K+J}&=&\cos^{2}\varepsilon \sin^{2}(\phi+{{{\varDelta}}})+\sin^{2}\varepsilon \sin^{2}\chi \end{array} $$
(40)
$$ \begin{array}{@{}rcl@{}} &\leq&1-\sin^{2}\varepsilon\cos^{2}\chi \end{array} $$
(41)

with Δ = 2Jν. As a consequence, the success probability at the end of the second phase is limited by 1 −|〈ℓ⊥|ψ(K)〉|2 and thus by the weight of the orthogonal losing component created during the first phase. The contribution of this component increases with K for \(K<\frac {\pi }{4\tilde {\nu }}-1/2\), as shown in Fig. 2.

Fig. 2 Comparison of the success probabilities \(p_{K^{\prime }+J}\) for different numbers \(K^{\prime },J\) of Grover iterations during the first and second phase, with \(K^{\prime }=0\) (blue), \(K^{\prime }=5\) (green), and \(K^{\prime }=10\) (red), for \(n_{\ell }=5000, \tilde {n}=15, n=10,n_{+}=5\) and thus \(n_{-}=10=|\mathcal {\tilde {W}}\cap {\mathscr{L}}|\)

In this case, the success probability pK+J is still monotonically increasing with Δ. Therefore, performing the maximal possible number (J) of Grover iterations during the second phase is still a good idea provided ϕ + 2Jν ≤ π/2. However, performing the maximal number (K) of Grover iterations during the first phase is not optimal if it leads to angles ϕ = ϕ(K) and χ = χ(K) such that

$$ \sin^{2} \chi(K)< \sin^{2}[2J\nu+\phi(K)]. $$
(42)

In this situation, performing fewer Grover iterations \(K^{\prime }<K\) during the first phase can lead to a higher final success probability \(p_{K^{\prime }+J}>p_{K+J}\), as shown in Fig. 2. Here, performing only \(K^{\prime }=5\) instead of \(K^{\prime }=10\) Grover iterations leads to higher final success probabilities for 6 ≤ J ≤ 11. In general, it is optimal to perform the maximal number K of Grover iterations during the first phase if J = 0 (provided \((2K+1)\tilde {\nu }<\pi /2\)). However, the more queries to the second oracle are available, the fewer effective queries to the first oracle \(\tilde {O}\) should be used, as demonstrated in Fig. 2.
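The behavior of Fig. 2 can be re-computed with a simple state-vector simulation; the sketch below uses the parameters from the figure caption and prints \(p_{K^{\prime }+J}\) for \(K^{\prime }\in \{0,5,10\}\) as a function of J:

```python
import numpy as np

n_l, n_tilde, n, n_plus = 5000, 15, 10, 5        # parameters of Fig. 2
n_minus = n_tilde - (n - n_plus)                 # rewarded only by the first oracle
N = n + n_l

both = np.arange(0, n - n_plus)                  # in W_tilde and in W
only_tilde = np.arange(n - n_plus, n - n_plus + n_minus)
only_W = np.arange(n - n_plus + n_minus, n + n_minus)
W = np.concatenate([both, only_W])

o_tilde = np.ones(N); o_tilde[np.concatenate([both, only_tilde])] = -1
o = np.ones(N); o[W] = -1
psi0 = np.full(N, 1 / np.sqrt(N))

def success(K_prime, J):
    state = psi0.copy()
    for _ in range(K_prime):
        state = 2 * psi0 * (psi0 @ (o_tilde * state)) - o_tilde * state
    for _ in range(J):
        state = 2 * psi0 * (psi0 @ (o * state)) - o * state
    return np.sum(state[W] ** 2)

for J in range(16):
    print(J, [round(success(Kp, J), 3) for Kp in (0, 5, 10)])
```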

3.4 Minimizing the cost to find a rewarded item

In the previous sections, we have proven that it is possible to pursue a Grover-based search strategy and maintain optimality even when the oracle changes, provided the rewarded space is monotonically increasing. We note that changing oracle settings have not been considered in the literature before from the present perspective. Hence, it was unclear whether continuing Grover iterations in settings with changing oracles would lead to suboptimal results (compare Section 3.3). Note that the standard results for Grover search (Grover 1997; 1998; Brassard et al. 2000) can only be applied to the changing oracle problem if the two phases are treated separately. That is, a Grover search is performed during the first phase. At the end of this phase, a measurement is performed. If no rewarded item is found, another Grover search is started with queries to the second oracle.

In the following, we discuss a numerical example demonstrating the possible improvements we can gain by proceeding with a coherent Grover-based strategy in a setting where the oracle changes rather than stopping it and starting a new one.

In our example, we assume that K Grover iterations \(\tilde {G}_{\psi }\) with \(\tilde {O}\) have been performed during a first phase. Then, the oracle is changed to O with \(\tilde {W}\subseteq W\), and an arbitrary number of queries to the second oracle is allowed. We compare the following two procedures: (a) a measurement is performed immediately after the change of the oracle, and (b) the coherent time evolution is continued with the new oracle before performing a measurement. In both cases, we continue with a standard Grover search based solely on the second oracle O if the first measurement did not reveal a rewarded item.

In many scenarios, the goal is to minimize a given cost function, such as the number of oracle queries, rather than to optimize the reward probability. Therefore, we compare in this section the cost to find a rewarded item if we either proceed with or interrupt the coherent search when the oracle changes. Here, we use C(j) = 2j + 1 as a typical cost function for a Grover algorithm with j steps (see e.g. Dunjko et al. (2016) and Sriarunothai et al. (2019)).

In the following, we neglect the cost produced by queries to the first oracle \(\tilde {O}\), because this cost is identical in both cases, and optimize the number of Grover iterations during the second phase (see Appendix F). In general, it is advantageous to continue the coherent time evolution if

$$ 4\nu\leq \pi-\arcsin\left( \frac{1}{1.38(1-\sin^{2}\varepsilon_{K})}-2\phi_{K}\right) $$
(43)

with \(\sin \limits ^{2}\nu =|W|/N=1/N^{\prime }\). Here, ϕK quantifies the reward probability of the symmetric part after the first phase (compare Eq. 34) and εK the ratio between the symmetric component and the orthogonal component (compare Eq. 35). The cost functions for both procedures scale with \(\sqrt {N^{\prime }}\), just like in typical amplitude amplification algorithms. The cost difference between stopping and continuing Grover’s algorithm also scales with \(\sqrt {N^{\prime }}\) for fixed angles ϕK and εK, as we evaluate in Appendix F and visualize in Fig. 3.
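The comparison can also be sketched with a simplified geometric-repetition cost model (our own simplification for illustration, not the derivation of Appendix F); the values of ϕK, εK, and N′ below are illustrative, and a cost C(0) = 1 is assigned to the immediate measurement of procedure (a):

```python
import numpy as np

phi_K, eps_K = 0.6, 0.32            # illustrative angles, as in Fig. 3
N_prime = 10_000                    # N' = N / |W|
nu = np.arcsin(np.sqrt(1 / N_prime))

# Fresh Grover runs with O alone, used after any failed measurement
j_fresh = int((np.pi / 2 - nu) / (2 * nu))
p_fresh = np.sin((2 * j_fresh + 1) * nu) ** 2
cost_fresh = (2 * j_fresh + 1) / p_fresh          # expected cost of the restart loop

# Procedure (a): measure immediately after the oracle change
p_now = np.cos(eps_K)**2 * np.sin(phi_K)**2 + np.sin(eps_K)**2
cost_interrupt = 1 + (1 - p_now) * cost_fresh

# Procedure (b): continue coherently for j steps with O, then measure
def cost_continue(j):
    p = np.cos(eps_K)**2 * np.sin(phi_K + 2 * j * nu)**2 + np.sin(eps_K)**2
    return (2 * j + 1) + (1 - p) * cost_fresh

best = min(cost_continue(j) for j in range(2 * j_fresh))
print("interrupt:", round(cost_interrupt, 1), " continue (optimal j):", round(best, 1))
```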

Fig. 3 Minimal expected average cost C of a continuous Grover algorithm with changing oracle with optimized j (red) for \(\sin \nu =\sqrt {|W|/N}=1/\sqrt {N^{\prime }}\) and fixed ϕK = 0.6 and εK ≈ 0.32. The expected average cost for an interrupted Grover algorithm with fixed oracles is denoted by a blue dashed line for comparison

4 Conclusion

Research in quantum enhanced reinforcement learning has motivated quantum computation scenarios involving two systems, the agent and its environment, with restricted access to each other. In special cases, the interaction of the agent with its environment can be reduced to unitary oracle queries. However, general settings do not allow such a treatment due to memory effects induced by the environment.

In this paper, we generalized the basic case, where the environment acts effectively as a single fixed oracle, to settings where the oracle changes in time. This was motivated by standard grid-world type problems, where the number of consecutive actions within a single epoch can grow or shrink. We have demonstrated that the search for a rewarded action sequence of increasing length can be described as a search in a database with fixed sequence length (equal to the maximal sequence length) but with a changing oracle, leading to an increase of the rewarded space. We analyzed this setting and identified Grover-type amplitude amplification as the optimal strategy for monotonically increasing rewarded spaces.

However, continuing coherent Grover iterations when the target space decreases will partially trap the resulting state within the losing subspace. As a consequence, the reward probability will be limited to a value clearly below unity if we continue with Grover iterations after the oracle has changed.

It is easy to conceive a cascade of ever more general problems. For example, in slightly more general settings the agent might be allowed to choose if and when to change the effective oracle. In this way, the agent might combine breadth-first and depth-first search in a single coherent search for RL. Often, shorter rewarded action sequences are preferred, but longer rewarded action sequences are more likely. Increasing the sequence length during a coherent quantum search will amplify the probability for shorter rewarded sequences more than for longer sequences. Combining different oracles, corresponding to different sequence lengths, within a single Grover search might therefore help to balance the trade-off between the desire for short rewarded sequences on the one side and high reward probabilities on the other. Further research investigating these possible benefits is needed. Quantum-enhanced reinforcement learning agents using different oracles might be developed for many different scenarios, such as grid world, navigation, or routing. In addition, many problems where we are searching for the shortest correct sequence of actions to achieve a goal, like finding the optimal elements to create entangled photons (Melnikov et al. 2018), are equivalent to the grid-world problem.

We envision that the results presented here may become even more important in the future when large-scale quantum networks (Kimble 2008; Cacciapuoti et al. 2020) become available, where the interactions are naturally quantum.

The goal in RL is in general to minimize a given cost function C instead of solely maximizing the success probability. Performing consecutive Grover iterations can also be used to minimize the average number of oracle queries necessary until a rewarded item is found. In general, we expect for our algorithm a quadratic improvement of the cost, \(C_{g}\propto \sqrt {C_{\text {class}}}\), compared to the cost Cclass of a classical algorithm. In Section 3.4, we discussed a cost function depending solely on the number of oracle queries. We compared quantum algorithms based on a single oracle or on multiple oracles, leading to the costs Cs and Cg, respectively. Here, we found that the difference Cs − Cg scales with the cost itself.

An optimal algorithm will depend on the exact cost function we want to minimize. For example, the search algorithm described in Boyer et al. (1998) is only optimal in terms of oracle queries. However, the number of elementary qubit gates necessary to perform a Grover search can be reduced by using a recursive Grover search (Arunachalam and de Wolf 2015), which separates the database into several subgroups. In RL, queries to different oracles might be associated with different costs. In such a setting, an optimal algorithm might use different oracles in a recursive way for a quantum search. In this way, improvements in terms of cost might go beyond the quadratic improvements achievable in quantum exploration.

Finally, possibly the most interesting extensions would avoid reductions of environments to unitary oracles and identify new schemes to obtain improvements that may be more applicable to real-world RL settings. We leave these more general considerations for follow-up investigations.