Introduction

One of the main consequences of the revolution in computer science started by Alan Turing, Konrad Zuse, and John von Neumann, among others [1,2], is that computers are capable of substituting for us, and improving on our performance, in an increasing number of tasks. This is due to advances in the development of complex algorithms and to the technological refinement allowing for faster processing and larger storage. One of the goals in this area, within the frame of bio-inspired technologies, is the design of algorithms that provide computers with human-like capacities such as image and speech recognition, as well as preliminary steps in some aspects related to creativity. These achievements would enable us to interact with computers in a more efficient manner. This research, together with other similar projects, is carried out in the field of artificial intelligence [3]. In particular, researchers in the area of machine learning (ML), inside artificial intelligence, are devoted to the design of algorithms responsible for training the machine with data, such that it is able to find a given optimal relation according to specified criteria [4]. More precisely, ML is divided into three main lines depending on the nature of the protocol. In supervised learning, the goal is to teach the machine a known function without explicitly introducing it in its code. In unsupervised learning, the goal is that the machine develops the ability to classify data by grouping it into different subsets depending on its characteristics. In reinforcement learning, the goal is that the machine selects a sequence of actions, depending on its interaction with an environment, for an optimal transition from the initial to the final state.

The previous ML techniques have also been studied in the quantum regime, in a field called quantum machine learning [5–12], with two main motivations. The first one is to exploit the promised speedup of quantum protocols to improve already existing classical ones. The second one is to develop unique quantum machine learning protocols that can be combined with other quantum computational tasks. Apart from quantum machine learning, fields like quantum neural networks, or the more general quantum artificial intelligence, have also addressed similar problems [13–17].

Here, we introduce a quantum machine learning algorithm for finding the optimal control state of a multitask controlled unitary operation. It is based on a sequentially applied time-delayed equation that allows one to implement feedback-driven dynamics without the need for intermediate measurements. The purely quantum encoding permits a speedup of the training process by evaluating all possible choices in parallel. Finally, we analyze the performance of the algorithm by comparing the ideal solution with the one obtained by the algorithm.

Results

Quantum Machine Learning Algorithm

The first step in the description of the algorithm is the definition of the concept of multitask controlled unitary operations. In essence, these do not differ from ordinary controlled operations, but the multitask label is selected to emphasize that more than two operations are in principle possible. The operation \(U\) acts on \(|\psi \rangle \in {{\mathbb{C}}}^{d}\otimes {{\mathbb{C}}}^{d}\), a quantum state belonging to the tensor product of the control, \({ {\mathcal H} }_{c}\subset {{\mathbb{C}}}^{d}\), and target, \({ {\mathcal H} }_{t}\subset {{\mathbb{C}}}^{d}\), Hilbert spaces. The dimension of both subspaces is the same, \(d\), and depends on the particular problem to address. Mathematically, we define \(U\) as

$$U=\sum _{i=1}^{d}|{c}_{i}\rangle \langle {c}_{i}|\otimes {s}_{i},$$
(1)

where \(|{c}_{i}\rangle \) denotes the control state, and \({s}_{i}\) is the reduced or effective unitary operation that \(U\) performs on the target subspace when the control is in \(|{c}_{i}\rangle \).
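To make the construction concrete, the following minimal sketch builds a \(d=2\) instance of Eq. (1). It is our own illustration, not part of the original protocol: the control basis and the particular target operations \({s}_{i}\) are illustrative assumptions.

```python
# Sketch of Eq. (1) for d = 2. The control basis and the target
# operations s_i are illustrative choices, not fixed by the protocol.
import numpy as np

d = 2
# Control basis {|c_i>}: here the computational basis of C^d.
c = [np.eye(d, dtype=complex)[:, [i]] for i in range(d)]
# Target operations {s_i}: identity and Pauli-X, as an example.
s = [np.eye(d, dtype=complex), np.array([[0, 1], [1, 0]], dtype=complex)]

# U = sum_i |c_i><c_i| (x) s_i
U = sum(np.kron(ci @ ci.conj().T, si) for ci, si in zip(c, s))

assert np.allclose(U @ U.conj().T, np.eye(d * d))  # U is unitary
```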

The goal of our algorithm is to explore the control subspace \({ {\mathcal H} }_{c}\) and find the control state for which \(U\) implements a known operation \(s:{ {\mathcal H} }_{t}\to { {\mathcal H} }_{t}\), which is given in terms of the \(|{\rm{in}}\rangle \) and \(|{\rm{out}}\rangle \) states as \(s\,|{\rm{in}}\rangle =|{\rm{out}}\rangle \). Therefore, our algorithm is appropriate when \(U\) is experimentally implementable but its internal structure, the relation between \(|{c}_{i}\rangle \) and \({s}_{i}\), is unknown. In other words, our algorithm enables the training of the control subspace \({ {\mathcal H} }_{c}\) by providing data about the target subspace \({ {\mathcal H} }_{t}\), so that the complete system implements the desired operation \(s\) in the target subspace. Our inspiration for the model of controlled unitary operations comes from supervised learning protocols, in which the goal is that the system learns a given known function. Here, the control subspace plays the role of the memory of the system. This control, or memory, is the mechanism by which the system stores the information about the operation it has to implement. The idea of our algorithm is that the user transmits to the system the information about the operation it has to perform. Therefore, the goal is not to perform a given gate, but to store this information in the system.

The protocol consists in sequentially reapplying the same dynamics in such a way that the initial state in the target subspace is always \(|{\rm{in}}\rangle \), while the initial state in the control subspace is the output of the previous cycle. The equation modeling the dynamics is

$$\frac{d}{dt}|\psi (t)\rangle =-i\,[\theta (t-{t}_{i})\theta ({t}_{f}-t){\kappa }_{1}{H}_{1}|\psi (t)\rangle +{\kappa }_{2}{H}_{2}(|\psi (t)\rangle -|\psi (t-\delta )\rangle )].$$
(2)

In this equation, \(\theta \) is the Heaviside function, \({H}_{1}\) is the Hamiltonian giving rise to \(U\) via \(U={e}^{-i{\kappa }_{1}{H}_{1}({t}_{f}-{t}_{i})}\), and \({H}_{2}\) is the Hamiltonian connecting the input and output states, with \({\kappa }_{1}\) and \({\kappa }_{2}\) the coupling constants of each Hamiltonian. We point out that this evolution cannot be realized with ordinary unitary or dissipative techniques. Nevertheless, recent studies of time-delayed equations provide all the ingredients for the implementation of this kind of process [18–21]. Pending future experimental analyses of the scalability of the presented examples, the inclusion of time-delayed terms in the evolution equation is a realistic approach within the technological framework provided by current quantum platforms. Another important feature of Eq. (2), related to the delayed term, is that its output only acquires physical meaning once it is normalized.
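As a numerical illustration of how Eq. (2) can be integrated, the following sketch uses a first-order Euler scheme with a history buffer for the delayed term. The default values follow the parameters used later in the text (\({\kappa }_{1}=100\), \({\kappa }_{2}=10\), \(\delta =1\), \(T=2\), with \({t}_{f}-{t}_{i}=\pi /200\) so that \({\kappa }_{1}({t}_{f}-{t}_{i})=\pi /2\)); the step size and the treatment of times earlier than the initial one (we reuse the initial state) are our own assumptions.

```python
# Euler integration sketch of Eq. (2). H1, H2 are Hermitian matrices of
# equal dimension; the delayed term uses a history buffer, and the state
# is renormalized at the end, since the evolution is not norm-preserving.
import numpy as np

def evolve_delayed(psi0, H1, H2, k1=100.0, k2=10.0, ti=0.0,
                   tf=np.pi / 200, delta=1.0, T=2.0, dt=1e-3):
    steps = int(T / dt)
    lag = int(delta / dt)
    hist = [psi0.astype(complex)]             # psi at every past time step
    psi = hist[0]
    for n in range(steps):
        t = n * dt
        window = 1.0 if ti <= t <= tf else 0.0            # theta(t-ti)theta(tf-t)
        psi_del = hist[n - lag] if n >= lag else hist[0]  # psi(t - delta)
        dpsi = -1j * (window * k1 * (H1 @ psi)
                      + k2 * (H2 @ (psi - psi_del)))
        psi = psi + dt * dpsi
        hist.append(psi)
    return psi / np.linalg.norm(psi)          # physical only once normalized
```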

Regarding the behavior of the equation, each term has a specific role in the learning algorithm. The mechanism is inspired by the most intuitive classical technique for solving this problem, which is the comparison between the input and output states together with the corresponding modification of the control state. Here, the first Hamiltonian produces \(U\), while the second Hamiltonian produces the reward by populating the control states responsible for the desired modification of the target subspace. The structure of \({H}_{2}\) guarantees that only the population in the control \(|{c}_{i}\rangle \) associated with the optimal \({s}_{i}\) is increased,

$${H}_{2}=1\otimes (-i|{\rm{in}}\rangle \langle {\rm{out}}|+i|{\rm{out}}\rangle \langle {\rm{in}}|).$$
(3)

Notice that while this Hamiltonian does not contain explicit information about \(|{c}_{i}\rangle \), the solution of the problem, its multiplication with the feedback term, \(|\psi (t)\rangle -|\psi (t-\delta )\rangle \), is responsible for introducing the reward as an intrinsic part of the dynamics. This is a convenient approach because it eliminates the measurements required during the training phase. In this case, where we employ a single pair of target states \(\{|{\rm{in}}\rangle ,|{\rm{out}}\rangle \}\), \({H}_{2}\) is fixed and time independent. However, this could change in a more complex situation of \(p\) pairs of \(\{|{\rm{in}}\rangle ,|{\rm{out}}\rangle \}\) target states, such that \(s={\sum }_{j=1}^{p}{|{\rm{out}}\rangle }_{j}{\langle {\rm{in}}|}_{j}\), where \({H}_{2}\) would still be time independent within each episode, but different from episode to episode. Even if this generalization is not included in this article, it points out a promising direction for enhancing the protocol.
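For concreteness, a short sketch of Eq. (3) for a two-dimensional target subspace follows; the dimensions and the choice \(|{\rm{in}}\rangle =|0\rangle \), \(|{\rm{out}}\rangle =|1\rangle \) are illustrative assumptions.

```python
# Sketch of Eq. (3): H2 = 1 (x) (-i|in><out| + i|out><in|).
import numpy as np

d_c, d_t = 2, 2                                  # illustrative dimensions
ket_in = np.array([[1], [0]], dtype=complex)     # |in>  = |0>
ket_out = np.array([[0], [1]], dtype=complex)    # |out> = |1>

h_t = -1j * ket_in @ ket_out.conj().T + 1j * ket_out @ ket_in.conj().T
H2 = np.kron(np.eye(d_c), h_t)

assert np.allclose(H2, H2.conj().T)              # H2 is Hermitian
```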

We would also like to remark on the similarity between the effect of the delay term in our quantum evolution and gradient ascent techniques in algorithms for optimization problems [3]. A possible strategy to perform the learning protocol would be to feed the system with random control states, measure each result, and combine them to obtain the final solution. However, we have discovered that it suffices to initialize the control subspace in a superposition of the elements of the basis. We would like to remark that this purely quantum feature significantly reduces the required resources, because a single initial state replaces a set of random states large enough to cover all possible solutions.

Numerical Simulations

We have numerically tested our proposed algorithm on a selection of examples covering the cases of unique and multiple solutions, as well as higher-dimensional systems. We consider as a figure of merit the fidelity given by the trace of the product between the control state obtained by the algorithm and the ideal control state. In order to recover the solution of the problem, we need to trace out the target degrees of freedom, obtaining a density matrix. Therefore, iterating the protocol would require solving Eq. (2) written for density matrices. This turns out to be a nontrivial task, given the non-local cross terms of the generalized master equation, which reads

$$\begin{array}{rcl}\frac{d}{dt}|\psi (t)\rangle \langle \psi (t)| & = & -i\,[\theta (t-{t}_{i})\theta ({t}_{f}-t){\kappa }_{1}{H}_{1}+{\kappa }_{2}{H}_{2},|\psi (t)\rangle \langle \psi (t)|]\\ & & +i{\kappa }_{2}({H}_{2}|\psi (t-\delta )\rangle \langle \psi (t)|-|\psi (t)\rangle \langle \psi (t-\delta )|{H}_{2}).\end{array}$$
(4)

To achieve the solution in the most efficient way, we have decomposed each density matrix into a convex sum of pure states and solved the vector equation, Eq. (2), for each of them separately, retrieving the total solution as a convex combination of the individual ones. This method is consistent due to the linearity of Eq. (4).
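This strategy can be sketched as follows (our reading of the procedure, not the authors' code): diagonalize the density matrix, propagate each eigenvector with the vector equation, and recombine the outputs with the original weights. The function `evolve_delayed` is the integrator sketched after Eq. (2).

```python
# Sketch: propagate a density matrix by convex decomposition into pure
# states, using the pure-state integrator `evolve_delayed` sketched above.
import numpy as np

def evolve_rho(rho, H1, H2, **kwargs):
    w, V = np.linalg.eigh(rho)                # rho = sum_k w_k |v_k><v_k|
    rho_out = np.zeros_like(rho, dtype=complex)
    for wk, vk in zip(w, V.T):
        if wk < 1e-12:                        # skip negligible weights
            continue
        psi = evolve_delayed(vk.reshape(-1, 1), H1, H2, **kwargs)
        rho_out += wk * (psi @ psi.conj().T)
    return rho_out / np.trace(rho_out).real   # renormalize the convex sum
```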

Definition of the SWAP gate problem

A first specific example we address in this manuscript is given by the excitation transport produced by the controlled SWAP gate. In this scenario, the complete system is an \(n\)-node network, where each node is composed of a control and a target qubit. Therefore, the control and target subspaces are defined as \({ {\mathcal H} }_{c}\subset {({{\mathbb{C}}}^{2})}^{\otimes n}\) and \({ {\mathcal H} }_{t}\subset {({{\mathbb{C}}}^{2})}^{\otimes n}\). The excitations in this system belong to the target subspace and are exchanged between two nodes when both nodes are in a particular state of the control subspace. The control states are in a superposition of open and closed, \(|o\rangle \) and \(|c\rangle \), while the target qubits are written in the standard \(\{|0\rangle ,|1\rangle \}\) basis denoting the absence or presence of excitations. We define \(U\), the multitask controlled unitary operation, to implement the SWAP gate between connected nodes only if all the controls of the corresponding nodes are in the open state, \(|o\rangle \). See Fig. 1 for a graphical representation of the simplest cases, the two- and three-node line networks. The explicit formula for \({U}_{2}\) is given by

$${U}_{2}=(|cc\rangle \langle cc|+|co\rangle \langle co|+|oc\rangle \langle oc|)\otimes 1+|oo\rangle \langle oo|\otimes {s}_{12}$$
(5)

where \({s}_{ij}\) represents the SWAP gate between qubits \(i\) and \(j\). Here, the first two qubits represent the control subspace and the last two represent the target subspace. Although we have employed unitary operations for illustration purposes, the equation requires translating them into Hamiltonians. In order to do so, we first select \({\kappa }_{1}({t}_{f}-{t}_{i})\) to be \(\pi /2\) and calculate the matrix logarithm, which yields the result for \({H}_{1}\) in Eq. (2), \({H}_{1}=|oo\rangle \langle oo|\otimes {h}_{12}\). Denoting by \({\sigma }_{k}\) the Pauli matrices, \({h}_{ij}\) for \(i < j\) reads

$${h}_{ij}=\frac{1}{2}(\sum _{k=1}^{3}{1}^{\otimes i-1}\otimes {\sigma }_{k}\otimes {1}^{\otimes j-i-1}\otimes {\sigma }_{k}\otimes {1}^{\otimes n-j}-{1}^{\otimes n}).$$
(6)
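The Hamiltonian of Eq. (6) can be checked numerically; for \(n=2\), the following sketch verifies that \({e}^{-i(\pi /2){h}_{12}}\) reproduces the SWAP gate.

```python
# Sketch verifying Eq. (6) for n = 2: exp(-i (pi/2) h_12) equals SWAP.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# h_12 = (XX + YY + ZZ - 1)/2, which equals SWAP - 1
h12 = 0.5 * (sum(np.kron(s, s) for s in (sx, sy, sz)) - np.eye(4))

SWAP = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

assert np.allclose(expm(-1j * (np.pi / 2) * h12), SWAP)
```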
Figure 1

Node line networks. We plot the graphical representation of every control state in the two- (a) and three- (b) node line networks. The circles around the nodes denote the control being in the open state. The effective operation that the control performs on the target subspace is the \({s}_{ij}\) SWAP gate between nodes \(i\) and \(j\).

Unique solution of the quantum machine learning algorithm

The first family of problems we address is \(n\)-node line networks, in which the nodes are located in a one-dimensional array and are only connected with their closest neighbors. The goal is to find the control state that allows transmitting an excitation from the first to the last node of the network, which in this case requires that all intermediate connections are active. The pair \(\{|{\rm{in}}\rangle ,|{\rm{out}}\rangle \}\) is determined by these constraints as \(|{\rm{in}}\rangle =|1\rangle {|0\rangle }^{\otimes n-1}\) and \(|{\rm{out}}\rangle ={|0\rangle }^{\otimes n-1}|1\rangle \). Accordingly, the problem has a unique solution, given by the control state with all the nodes open, \({|o\rangle }^{\otimes n}\). The parameters we have selected are \(\delta =1\), \({\kappa }_{1}=100\), \({\kappa }_{2}=10\) and \(T=2\), where \(T\) represents the total duration of each episode. In Fig. 2 we plot the results together with the required resources. These examples show that the algorithm works properly for this family of problems independently of the natural basis of \(U\). The \({H}_{1}\) Hamiltonians employed in the simulations for \(n=2,3,4\) are given by

$$\begin{array}{rcl}{H}_{1}^{2} & = & |oo\rangle \langle oo|\otimes {h}_{12},\\ {H}_{1}^{3} & = & |ooo\rangle \langle ooo|\otimes {h}_{13}+|ooc\rangle \langle ooc|\otimes {h}_{12}+|coo\rangle \langle coo|\otimes {h}_{23},\\ {H}_{1}^{4} & = & |oooo\rangle \langle oooo|\otimes {h}_{14}+|oooc\rangle \langle oooc|\otimes {h}_{13}\\ & & +(|oocc\rangle \langle oocc|+|ooco\rangle \langle ooco|)\otimes {h}_{12}\\ & & +(|ccoo\rangle \langle ccoo|+|ocoo\rangle \langle ocoo|)\otimes {h}_{34}\\ & & +|cooc\rangle \langle cooc|\otimes {h}_{23}+|cooo\rangle \langle cooo|\otimes {h}_{24}.\end{array}$$
(7)
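As an end-to-end illustration, the following sketch iterates the protocol for the two-node line with the parameters quoted above. It reuses `evolve_delayed`, `evolve_rho` and `h12` from the previous sketches; the partial-trace bookkeeping and the number of episodes are our own assumptions, so the printed learning curve is only indicative of the procedure, not a reproduction of Fig. 2.

```python
# Sketch: learning loop for the two-node line network. Reuses
# `evolve_delayed`, `evolve_rho` and `h12` from the sketches above.
import numpy as np

def ptrace_target(rho, dc, dt_):
    """Trace out the target subspace of a (dc*dt_)-dim density matrix."""
    return np.trace(rho.reshape(dc, dt_, dc, dt_), axis1=1, axis2=3)

k0 = np.array([[1], [0]], dtype=complex)      # |c> (closed) / target |0>
k1 = np.array([[0], [1]], dtype=complex)      # |o> (open)   / target |1>

oo = np.kron(k1, k1)                          # ideal control |oo>
H1 = np.kron(oo @ oo.conj().T, h12)           # Eq. (7), n = 2
ket_in, ket_out = np.kron(k1, k0), np.kron(k0, k1)
H2 = np.kron(np.eye(4), -1j * ket_in @ ket_out.conj().T
                        + 1j * ket_out @ ket_in.conj().T)

rho_c = np.full((4, 4), 0.25, dtype=complex)  # |++><++| initial control
for episode in range(50):
    rho = np.kron(rho_c, ket_in @ ket_in.conj().T)    # target reset to |in>
    rho = evolve_rho(rho, H1, H2)
    rho_c = ptrace_target(rho, 4, 4)
    rho_c /= np.trace(rho_c).real
    fidelity = (oo.conj().T @ rho_c @ oo)[0, 0].real  # Tr(rho_c |oo><oo|)
    print(episode, round(fidelity, 4))
```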
Figure 2

Learning curves for single solutions. (a) We plot the fidelity of the learning process as a function of the number of episodes for the first examples of \(n\)-node line networks. We have selected the open state to be \(|o\rangle =|1\rangle \) of the \(\{|0\rangle ,|1\rangle \}\) basis. (b) We plot the fidelity for a different selection of \(|o\rangle \) in the \(n=3\) case. Here, we have rotated the control states with the goal of testing the algorithm for an arbitrary basis. The solution, \(|ooo\rangle \), is given by \(\frac{1}{\sqrt{2}}[|0\rangle +|1\rangle ]\otimes |1\rangle \otimes [\cos (\pi /3)|0\rangle +\sin (\pi /3)|1\rangle ]\).

Multiple solutions of the quantum machine learning algorithm

We now address a set of more complicated networks which allow us to clarify how the algorithm performs when solving problems with multiple solutions. These are the A network for three nodes and the B and C networks for four nodes, depicted in Fig. 3. The goal of the algorithm is the same as in the previous case, i.e., to find the control state capable of sending an excitation from the first to the last node. The difference is that these networks accept two pure states, and their superpositions, as solutions, a feature that is reflected in the result obtained with the algorithm. The asymptotic state achieved under the feedback-induced quantum learning equation is a quantum superposition of both solutions; see Fig. 4a for the numerical simulations. In this case, the previous definition of the fidelity is not valid. Therefore, we provide a new one in terms of the \(|{\rm{in}}\rangle \) and \(|{\rm{out}}\rangle \) states of the target space and the Hamiltonian \({H}_{1}\). The new fidelity corresponds to the trace of the product between the ideal output, \(|{\rm{out}}\rangle \), and the output obtained when the control state achieved by the algorithm acts on \(|{\rm{in}}\rangle \). Both ideal and real outputs belong to the target subspace. While the \(\{|{\rm{in}}\rangle ,|{\rm{out}}\rangle \}\) pair is the same as in the previous case, the \({H}_{1}\) Hamiltonians change their definition to

$$\begin{array}{c}{H}_{1}^{A}=(|ooo\rangle \langle ooo|+|oco\rangle \langle oco|)\otimes {h}_{13}+|ooc\rangle \langle ooc|\otimes {h}_{12}+|coo\rangle \langle coo|\otimes {h}_{23},\\ {H}_{1}^{B}=(|oooo\rangle \langle oooo|+|ooco\rangle \langle ooco|)\otimes {h}_{14}+|cooo\rangle \langle cooo|\otimes {h}_{34}+|oooc\rangle \langle oooc|\otimes {h}_{13}\\ \,\,\,\,\,\,\,\,\,\,\,+|oocc\rangle \langle oocc|\otimes {h}_{12}+|cooc\rangle \langle cooc|\otimes {h}_{23}+|coco\rangle \langle coco|\otimes {h}_{24},\\ {H}_{1}^{C}=|oocc\rangle \langle oocc|\otimes {h}_{12}+|cooc\rangle \langle cooc|\otimes {h}_{23}+(|coco\rangle \langle coco|+|cooo\rangle \langle cooo|)\otimes {h}_{24}\\ \,\,\,\,\,\,\,\,\,\,\,+|oooc\rangle \langle oooc|\otimes {h}_{13}+(|ocoo\rangle \langle ocoo|+|ccoo\rangle \langle ccoo|)\otimes {h}_{34}+(|ooco\rangle \langle ooco|+|oooo\rangle \langle oooo|)\otimes {h}_{14}.\end{array}$$
(8)
Figure 3

Networks with two solutions. We schematize the A, B, and C networks in (a), (b) and (c), respectively. In each of them, we write the pair of solution control states that corresponds to the control performing the \({s}_{13}\) (a) and \({s}_{14}\) (b,c) gates in the target subspace.

Figure 4

Learning curves for two solutions and qutrit problems. (a) We depict the learning curve for the A, B, and C networks as a function of the number of episodes. Notice that the curves for the B and C networks are identical. (b) We depict the learning curve for the multitask controlled unitary operation acting on two qutrits as a function of the number of episodes. Here, \(|{\rm{in}}\rangle =|0\rangle \), \(|{\rm{out}}\rangle =|2\rangle \), and the solution is given by \(|{c}_{2}\rangle =|1\rangle \), where the control states coincide with the basis of the qutrit space.

For the cases studied, the complete set of solutions is obtained, encoded in the outcome of the algorithm. This is convenient because it allows one to design a protocol to select a specific optimal solution according to given criteria. In the networks we are analyzing, one might want to obtain the most efficient solution, defining efficiency as achieving the transmission of the excitation while minimizing the number of open nodes. To accomplish this task, a dissipative term has to be included in the evolution equation in order to filter out the undesired solutions. We point out that a control-dependent dissipation affects the target subspace, modifying the protocol in the required manner. We explicitly write the Lindblad operators \({\sigma }_{i}\) and dissipation constants \({\gamma }_{i}\) for a two-node case as follows,

$$\begin{array}{l}{\sigma }_{1}=|co01\rangle \langle co11|+|co00\rangle \langle co10|,\quad {\sigma }_{2}=|co00\rangle \langle co01|+|co10\rangle \langle co11|,\\ {\sigma }_{3}=|oc00\rangle \langle oc10|+|oc01\rangle \langle oc11|,\quad {\sigma }_{4}=|oc10\rangle \langle oc11|+|oc00\rangle \langle oc01|,\\ {\sigma }_{5}=|oo00\rangle \langle oo10|+|oo01\rangle \langle oo11|,\quad {\sigma }_{6}=|oo00\rangle \langle oo01|+|oo10\rangle \langle oo11|,\\ {\gamma }_{1}={\gamma }_{2}={\gamma }_{3}={\gamma }_{4},\quad {\gamma }_{5}={\gamma }_{6}=2{\gamma }_{1}.\end{array}$$
(9)

Instead of solving the master equation, we have employed the quantum jump formalism, which allows one to work with Eq. (2) instead of Eq. (4), with a consequent simplification. In the absence of a decay event, the dissipation can be modeled with an additional term \({H}_{D}=\frac{-i}{2}\sum {\gamma }_{i}{\sigma }_{i}^{\dagger }{\sigma }_{i}\) in the first part of the time-delayed equation. Therefore, in order to ensure that the non-Hermitian Hamiltonian accounts for the real evolution of the system, one has to properly balance the relation between \({\kappa }_{1}\) and \({\gamma }_{i}\). The modified equation reads

$$\frac{d}{dt}|\psi (t)\rangle =-i[\theta (t-{t}_{i})\theta ({t}_{f}-t)({\kappa }_{1}{H}_{1}+{H}_{D})|\psi (t)\rangle +{\kappa }_{2}{H}_{2}(|\psi (t)\rangle -|\psi (t-\delta )\rangle )].$$
(10)
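A sketch of the non-Hermitian term \({H}_{D}\) for the two-node example of Eq. (9) follows; the base rate \(\gamma \) is an illustrative value, to be balanced against \({\kappa }_{1}\) as discussed above.

```python
# Sketch of H_D = -(i/2) sum_i gamma_i sigma_i^dag sigma_i from Eq. (9),
# on the 4-qubit register (2 control + 2 target) of the two-node network.
import numpy as np

def ket(label):
    """Basis ket from a label such as 'co01' (c,0 -> 0 and o,1 -> 1)."""
    idx = {'c': 0, 'o': 1, '0': 0, '1': 1}
    v = np.ones((1, 1), dtype=complex)
    for ch in label:
        e = np.zeros((2, 1), dtype=complex)
        e[idx[ch], 0] = 1.0
        v = np.kron(v, e)
    return v

def op(a, b):
    return ket(a) @ ket(b).conj().T            # |a><b|

sigmas = [op('co01', 'co11') + op('co00', 'co10'),
          op('co00', 'co01') + op('co10', 'co11'),
          op('oc00', 'oc10') + op('oc01', 'oc11'),
          op('oc10', 'oc11') + op('oc00', 'oc01'),
          op('oo00', 'oo10') + op('oo01', 'oo11'),
          op('oo00', 'oo01') + op('oo10', 'oo11')]
gamma = 0.1                                    # illustrative base rate
gammas = [gamma] * 4 + [2 * gamma] * 2         # gamma_5 = gamma_6 = 2 gamma_1

HD = -0.5j * sum(g * s.conj().T @ s for g, s in zip(gammas, sigmas))
```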

A non-dissipative alternative consists in modifying the coupling constant associated with each of the control-target pairs in the unitary operation. These two techniques allow us to find the shortest path between two nodes in a network once the natural basis of the unitary is known.

Extension to qudits

Another aspect to study is the extension of the algorithm to higher-dimensional building blocks. We provide an example in which the optimal control state for a multitask controlled unitary operation acting on qutrits is obtained. This operation \(U\) is defined in terms of the control states \(|{c}_{i}\rangle \) as

$$U=|{c}_{1}\rangle \langle {c}_{1}|\otimes 1+|{c}_{2}\rangle \langle {c}_{2}|\otimes (\begin{array}{ccc}0 & 1 & 0\\ 0 & 0 & 1\\ 1 & 0 & 0\end{array})+|{c}_{3}\rangle \langle {c}_{3}|\otimes (\begin{array}{ccc}0 & 0 & 1\\ 1 & 0 & 0\\ 0 & 1 & 0\end{array}),$$
(11)

where the first qutrit belongs to the control subspace and the second one belongs to the target subspace. Although no network is defined in this case, the goal of the algorithm is to find the control state that realizes the \(|{\rm{in}}\rangle \to |{\rm{out}}\rangle \) transition in the target subspace. In this problem, the system consists of a single control qutrit and a single target qutrit. See Fig. 4b for a numerical simulation of the learning process in this particular case.
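A minimal sketch of Eq. (11) follows, with the control basis taken as the computational qutrit basis (as in the simulation of Fig. 4b); it checks that the control \(|{c}_{2}\rangle =|1\rangle \) maps \(|{\rm{in}}\rangle =|0\rangle \) to \(|{\rm{out}}\rangle =|2\rangle \).

```python
# Sketch of Eq. (11): two-qutrit multitask controlled unitary.
import numpy as np

I3 = np.eye(3, dtype=complex)
X3 = np.array([[0, 1, 0],
               [0, 0, 1],
               [1, 0, 0]], dtype=complex)      # s_2 in Eq. (11)
s_ops = [I3, X3, X3 @ X3]                      # X3 @ X3 is s_3 in Eq. (11)

U = sum(np.kron(I3[:, [i]] @ I3[:, [i]].conj().T, s)
        for i, s in enumerate(s_ops))

k0, k1, k2 = (I3[:, [i]] for i in range(3))
# Control |1> sends the target |0> to |2>, i.e. realizes |in> -> |out>.
assert np.allclose(U @ np.kron(k1, k0), np.kron(k1, k2))
```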

Extension to phase gates

All examples discussed up to this point consisted of \(s\) gates whose effect can be understood as a permutation of the basis elements. Let us consider now a different scenario in which the operations in the target subspace are phase gates, \({s}_{i}=1-2|i\rangle \langle i|\); therefore, the complete unitary operation reads \(U=\sum |i\rangle \langle i|\otimes (1-2|i\rangle \langle i|)\) with \(i\in [1,n]\). If we choose the reference target states to be

$$|{\rm{in}}\rangle =\frac{1}{\sqrt{2}}(|1\rangle +|n\rangle ),\quad |{\rm{out}}\rangle =\frac{1}{\sqrt{2}}(|1\rangle -|n\rangle ),$$
(12)

we know a priori that the only solution is given by \(s=1-2|n\rangle \langle n|\), associated with the control state \(|c\rangle =|n\rangle \). We perform a numerical experiment to analyze how the initial equally weighted control state, \(\frac{1}{\sqrt{n}}\sum |i\rangle \), converges to the solution under the action of \({H}_{1}=-2\sum |i\rangle \langle i|\otimes |i\rangle \langle i|\), depending on the dimension of the system. See Fig. 5 for the simulations. The results show that our algorithm is particularly efficient for this selection of Hamiltonians, given that the solution is reached in \(O(\sqrt{n})\) episodes for all the cases studied.
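The ingredients of this numerical experiment can be sketched as follows; the dimension \(n=4\) is an illustrative choice, and the objects below can be fed to the `evolve_delayed` integrator sketched after Eq. (2). Note that the code indexes the basis from 0 to \(n-1\), while the text uses 1 to \(n\).

```python
# Sketch of the phase-gate setting: H1 from the text, |in> and |out>
# from Eq. (12), and the equal-superposition initial control state.
import numpy as np

n = 4                                           # illustrative dimension
basis = [np.eye(n, dtype=complex)[:, [i]] for i in range(n)]
proj = [b @ b.conj().T for b in basis]

H1 = -2 * sum(np.kron(p, p) for p in proj)      # -2 sum_i |i><i| (x) |i><i|
ket_in = (basis[0] + basis[-1]) / np.sqrt(2)    # (|1> + |n>)/sqrt(2)
ket_out = (basis[0] - basis[-1]) / np.sqrt(2)   # (|1> - |n>)/sqrt(2)
H2 = np.kron(np.eye(n), -1j * ket_in @ ket_out.conj().T
                        + 1j * ket_out @ ket_in.conj().T)
psi0 = np.kron(np.full((n, 1), 1 / np.sqrt(n)), ket_in)  # equal superposition
```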

Figure 5

Learning curves for phase gates. (a) We plot the fidelity of the learning process as a function of the number of episodes for the problem of finding the appropriate phase gate.

Efficiency of the Quantum Machine Learning Algorithm

It is important to mention that the simulations and techniques we provide here constitute an analysis of our quantum machine learning algorithm; our aim in this work is not to demonstrate scalability or quantum speedup. It would be convenient to solve Eq. (2) analytically in order to rigorously analyze the scope of the algorithm and obtain information about its scalability for general problems. Since we have not solved the dynamics analytically, we evaluate the performance by comparing our results with the ones obtained via different methods. In particular, we follow two different strategies to determine the structure of the controlled unitary operation: measuring it, and analyzing it using machine learning techniques. Here, the resources are quantified by the number of times the unitary operation has to be applied, and its output measured, in order to determine its structure.

Machine Learning

Here, we employ state-of-the-art classical machine learning algorithms to compare with our quantum protocol. We show the results achieved for three different networks, the two-node line and two different instances of the three-node line, all of them previously studied with our algorithm in Fig. 2. The numerical experiment is designed to determine the optimal control state by evaluating the action of \(U\) on the tensor product of a random control state and the fixed \(|{\rm{in}}\rangle \). The data consist of a collection of random control states, which cover the whole control subspace, with their corresponding fidelity for a fixed \(\{|{\rm{in}}\rangle ,|{\rm{out}}\rangle \}\) pair.

For each network, three data sets were used (small, medium, large) with a different number of instances. It must be emphasized that all results refer to test sets, i.e., they were obtained with data not used to train the models. Therefore, they must be taken as a good estimation of the prediction capability of the models for new, unseen data. Cross-validation was implemented by means of a \(k\)-fold approach [4], where \(k=10\) for all data sets, except for the small data set of the two-node line network, whose value was \(k=5\) due to the very limited number of instances.

All results were achieved by using Support Vector Regressors (SVRs) [22], whose characteristics make them especially suitable when dealing with sparse data sets (few instances and high dimension). SVRs work by creating a transformed data space in which the problem is more easily solvable (ideally, the problem is transformed into a linear one). That transformation between spaces is carried out by so-called kernels (Gaussian and polynomial kernels have been used in these experiments). The data used for training the models were randomly selected from a set of multiple pairs of control state and fidelity. Although other ML approaches, such as Reinforcement Learning (RL), might seem appropriate to solve this problem, note that the goal of the problem is actually the prediction of the efficiency of a solution, rather than the optimal sequence of steps that links the input state with the output state, thus not matching the RL paradigm.
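A sketch of this classical baseline follows; it is not the authors' exact pipeline. The synthetic data generation below is an illustrative assumption that only shows the interface: Gaussian-kernel SVR trained on (control state, fidelity) pairs, evaluated with \(k\)-fold cross-validation on held-out data.

```python
# Sketch: SVR baseline with RBF kernel and 10-fold cross-validation.
# The data below are a synthetic stand-in for (control state, fidelity)
# pairs; the toy fidelity is the overlap with the ideal control |oo>.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import KFold, cross_val_score

rng = np.random.default_rng(0)

X = rng.normal(size=(200, 4))                  # random 2-qubit control states
X /= np.linalg.norm(X, axis=1, keepdims=True)  # (real amplitudes, normalized)
y = X[:, 3] ** 2                               # toy fidelity: |<oo|psi>|^2

model = SVR(kernel='rbf', C=10.0, gamma='scale')
cv = KFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv,
                         scoring='neg_root_mean_squared_error')
print('RMSE per fold:', -scores)
```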

Tables 1, 2 and 3 report the results achieved by the SVR in the three analyzed networks. These correspond to the two- and three-node lines analyzed in Fig. 2. In the case of \(n=3\), the topology of networks A and B is the same, namely the one depicted in Fig. 1, but they are defined in a different control basis. For each case, the state with the best fidelity is shown, together with the Mean Error (ME) and the Root Mean Square Error (RMSE). ME is a measure of bias that represents the difference between the real and the predicted efficiencies, i.e., it indicates whether the model tends to make overestimations (negative values) or underestimations (positive values). On the other hand, RMSE is a well-known robust measure of accuracy.

Table 1 Two-node line.
Table 2 Three-node line A.
Table 3 Three-node line B.

Measurement of the Unitary Operation

An alternative method for solving the learning task would be to measure the input-output relation of the controlled unitary operation when strategically, and not randomly, exploring the control subspace. Let us denote by \(|{c}_{i}\rangle \) the natural basis of the control subspace in \(U\), and by \(|{b}_{i}\rangle \) our guess for this basis in a Hilbert space of dimension \(n\). The measurement protocol consists in applying the unitary operation to \(|{b}_{i}\rangle \otimes |{\rm{in}}\rangle \), projecting the result onto \(|{\rm{out}}\rangle \langle {\rm{out}}|\), and tracing out the target subspace, obtaining \({\rho }_{i}\) for each \(|{b}_{i}\rangle \). In the worst case, this operation has to be repeated for all \(|{b}_{i}\rangle \) to guarantee that the populations of the solutions, and not the internal phases, are found. Afterwards, one has to find the appropriate basis \(|{c}_{i}\rangle \) as a linear combination of the proposed one, \(|{b}_{i}\rangle \). Another approach is to determine each component of the unitary operation and change to a basis in which the unitary is expressed as a direct sum of the \({s}_{i}\) operations. This particular strategy highlights the relation between our algorithm and the field of quantum process tomography.
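One step of this measurement-based strategy can be sketched as follows; \(U\), the guess basis \(\{|{b}_{i}\rangle \}\) and the \(\{|{\rm{in}}\rangle ,|{\rm{out}}\rangle \}\) pair are assumed given, and the dimensions are illustrative.

```python
# Sketch: apply U to |b_i> (x) |in>, project the target on |out><out|,
# and trace out the target subspace to obtain rho_i for that guess state.
import numpy as np

def ptrace_target(rho, dc, dt_):
    """Trace out the target subspace of a (dc*dt_)-dim density matrix."""
    return np.trace(rho.reshape(dc, dt_, dc, dt_), axis1=1, axis2=3)

def control_outcome(U, b_i, ket_in, ket_out, dc, dt_):
    psi = U @ np.kron(b_i, ket_in)                   # apply the unknown U
    P_out = np.kron(np.eye(dc), ket_out @ ket_out.conj().T)
    rho = P_out @ (psi @ psi.conj().T) @ P_out       # project on |out><out|
    rho_i = ptrace_target(rho, dc, dt_)
    return rho_i / max(np.trace(rho_i).real, 1e-12)  # success-conditioned
```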

Comparison

In summary, the purely random approach analyzed with ML techniques requires in principle more resources than the quantum feedback algorithm with the delayed equation. Nevertheless, the fact that ML techniques are independent of the basis guarantees their success in any possible situation. The comparison is made between the episodes, the number of times that the time-delayed equation has to be repeated, and the instances, the amount of data employed in the ML algorithm. Even if both methods are based on different training mechanisms, the information fed to both of them is the same: a figure of merit for each control state. In the SVR, the system is provided with pairs of a control state and its corresponding fidelity, which requires the implicit knowledge of \(\{|{\rm{in}}\rangle ,|{\rm{out}}\rangle \}\) and the ideal \(U\) operation. The connection with the quantum algorithm is that the delay term in Eq. (2) provides a distance that works in an analogous way to the fidelity in the SVR. Notice that in the quantum algorithm each episode only requires a single pair of \(\{|{\rm{in}}\rangle ,|{\rm{out}}\rangle \}\) states; therefore, the number of episodes equals the number of instances. A more realistic analysis would take into account the duration of each process, but for the moment we cannot make a precise estimation of the time required to implement a time-delayed equation.

With respect to the complete measurement approach, recent studies bound its scalability to the order of \({n}^{2}\), or even \(n\), with \(n\) the dimension of the Hilbert space [23–25]. On the other hand, the measurement protocol does not provide the solution in a physical register; rather, it is the analysis of the unitary operation that provides the knowledge of it. Moreover, each implementation of the controlled unitary operation is associated with a measurement, while in the quantum machine learning algorithm intermediate measurements are not required, because they are included as an intrinsic part of the dynamics, in contrast to the tomography approach. Additionally, when measuring, one needs to perform a search over the Hilbert space for the convenient basis in order to retrieve the correct structure of \(U\).

Regarding the scalability of our algorithm, we have observed that the number of episodes needed to reach the solution depends on the distance between the initial control state and the solution. A direct consequence is that the protocol will not work properly when the initial control state is orthogonal to the solution. This is important to consider, because the way to notice the failure is to validate the result by measuring the outcome of the unitary operation. In the simulations carried out here, we have employed \({|+\rangle }^{\otimes n}\) as the initial control state, but this choice is not unique. In some sense, our protocol can also be understood as a search algorithm. Therefore, a comparison with Grover's result [26] may be in order. Regarding the similarities, the conditional phase rotation in Grover's search algorithm requires the use of an oracle, whose role is played in our formalism by the combination of a controlled unitary operation and the time-delayed terms. On the other hand, the main difference between both protocols is that in Grover's algorithm the basis in which the states to optimize are described is known, while in ours the search is performed without previous knowledge of the basis, in a similar spirit to the analog algorithm by Farhi and Gutmann [27]. A positive property of our protocol, in contrast with the previously mentioned quantum search algorithms, is that the solution is reached asymptotically, i.e., the fidelity always increases with the number of episodes.

Discussion

In conclusion, we have proposed a quantum machine learning algorithm in which the implementation of time-delayed dynamics allows one to avoid intermediate measurements, and which therefore provides a complementary strategy to conventional quantum machine learning algorithms [28–31]. Moreover, we have shown that the framework of multitask controlled unitary operations is flexible enough to address different problems, such as efficient excitation transport in networks. This kind of protocol may be straightforwardly adapted to different quantum architectures, although such adaptations are beyond the scope of this article. We believe our study represents the first proposal for exploiting the feedback-induced effects of delayed-equation dynamics, without intermediate measurements, in quantum machine learning algorithms.