Quantum walk neural networks with feature dependent coins
Abstract
Recent neural networks designed to operate on graph-structured data have proven effective in many domains. These graph neural networks often diffuse information using the spatial structure of the graph. We propose a quantum walk neural network that learns a diffusion operation that is not only dependent on the geometry of the graph but also on the features of the nodes and the learning task. A quantum walk neural network is based on learning the coin operators that determine the behavior of quantum random walks, the quantum parallel to classical random walks. We demonstrate the effectiveness of our method on multiple classification and regression tasks at both node and graph levels.
Keywords
Graph neural networks; Random walks; Quantum random walks
Abbreviations
- CNN
Convolutional neural networks
- DCNN
Diffusion convolutional neural network
- GAT
Graph attention network
- GCN
Graph convolutional neural network
- GPU
Graphics processing unit
- MAE
Mean absolute error
- QWNN
Quantum walk neural networks
- RMSE
Root mean squared prediction error
- RNN
Recursive neural network
- SP
Shortest path
- STD
Standard deviation
- WL
Weisfeiler-Lehman
Introduction
While classical neural network approaches for structured data have been well investigated, there is growing interest in extending neural network architectures beyond grid structured data in the form of images or ordered sequences (Krizhevsky et al. 2012) to the domain of graph-structured data (Atwood and Towsley 2016; Bruna et al. 2014; Gori et al. 2005; Kipf and Welling 2016; Scarselli et al. 2009; Velickovic et al. 2017). Following the success of quantum kernels on graph-structured data Bai et al. (2013, 2017, 2015), a primary motivation of this work is to explore the application of quantum techniques and the potential advantages they might offer over classical algorithms. In this work, we propose a novel quantum walk based neural network structure that can be applied to graph data. Quantum random walks differ from classical random walks through additional operators (called coins) that can be tuned to affect the outcome of the walk.
In (Dernbach et al. 2018) we introduced a quantum walk neural network (QWNN) for the purpose of learning a task-specific random walk on a graph. When dealing with learning problems involving multiple graphs, the original QWNN formulation suffered from a requirement that all nodes across all graphs share the same coin matrix. This paper improves upon our original network architecture by replacing the single coin matrix with a bank that learns a function to produce different coin matrices at each node in every graph. This function allows the behavior of the quantum walk to vary spatially across the graph even when dealing with multi-graph problems. Additionally, this function produces the coins based on neighboring node features so that even for structurally identical graphs, a different walk is produced if the node features change. We also improve the neural network architecture in this work. In the new architecture, each step of the quantum walk produces its own set of diffused features. The aggregated set of features, spanning the length of the walk, are passed to successive layers in the neural network. Finally, the previous work produced results that were dependent upon the ordering of the nodes. This work provides a QWNN architecture that is invariant to node ordering.
The rest of this paper is organized as follows. “Related work” section describes the background literature on graph neural network techniques in further detail. The setting of quantum walks on graphs is described in “Graph quantum walks” section, followed by a formal description of the proposed quantum walk based neural network implementation in “Quantum walk neural networks” section. Experimental results on node and graph regression, and graph classification tasks are presented in “Experiments” section, followed by a discussion of the techniques’ limitations in “Limitations” section and concluding remarks in “Concluding remarks” section.
Related work
Gupta and Zia (2001) and Altaisky (2001), among other researchers, proposed quantum versions of artificial neural networks; see Biamonte et al. (2017) and Dunjko et al. (2018) for an overview of the emerging field of quantum machine learning. While not much work exists on quantum machine learning techniques for graph-structured data, in recent years, new neural network techniques that operate on graph-structured data have become prominent. Gori et al. (2005), followed by Scarselli et al. (2009), proposed recursive neural network architectures to deal with graph-structured data, instead of the then prevalent approach of transforming the graph data into a domain that could be handled by conventional machine learning algorithms. Bruna et al. (2014) studied the generalization of convolutional neural networks (CNNs) to graph signals through two approaches, one based upon hierarchical clustering of the domain, and another based on the spectrum of the graph Laplacian. Subsequently, Defferrard et al. (2016) proposed to approximate the convolutional filters on graphs through their fast localized versions.
Our proposed quantum walk neural network is a graph neural network architecture based on discrete quantum walks. Various researchers have worked on quantum walks on graphs: Ambainis et al. (2001) studied quantum variants of random walks on one-dimensional lattices; Farhi and Gutmann (1998) reformulated interesting computational problems in terms of decision trees, and devised quantum walk algorithms that could solve problem instances in polynomial time compared to classical random walk algorithms that require exponential time. Aharonov et al. (2001) generalized quantum walks to arbitrary graphs. Subsequently, Rohde et al. (2011) studied the generalization of discrete time quantum walks to the case of an arbitrary number of walkers acting on arbitrary graph structures, and their physical implementation in the context of linear optics. Quantum walks have recently become the focus of many graph-analytics studies because of their non-classical interference properties. Bai et al. (2013, 2017, 2015) introduced novel graph kernels based on the evolution of quantum walks on graphs, defining the similarity between two graphs in terms of the similarities between the evolution of quantum walks on them. These quantum kernel based techniques were shown to outperform classical kernel techniques in effectiveness and accuracy. Rossi et al. (2013, 2015) studied the evolution of quantum walks on the union of two graphs to define the kernel between two graphs. These closely related works on quantum walks and the success of quantum kernel techniques motivated our approach in developing a quantum neural network architecture.
Graph quantum walks
Motivated by classical random walks, quantum walks were introduced by Aharonov et al. (1993). Unlike the stochastic evolution of a classical random walk, a quantum walk evolves according to a unitary process. The behavior of a quantum walk is fundamentally different from that of a classical walk because different trajectories of the walk interfere with one another. Two kinds of quantum walks have been introduced in the literature: continuous time quantum walks (Farhi and Gutmann 1998; Rossi et al. 2017) and discrete time quantum walks (Lovett et al. 2010). Quantum walks have recently received much attention because they have been shown to be a universal model for quantum computation (Childs 2009). In addition, they have numerous applications in quantum information science such as database search (Shenvi et al. 2003), graph isomorphism (Qiang et al. 2012), network analysis and navigation, and quantum simulation.
In shorthand notation, the unitary evolution of the walk is governed by the operator U=S(I⊗C), where the coin operator C acts on the spin state at each node and the shift operator S swaps amplitudes along the edges of the graph. Applying U successively evolves the state of the quantum walk through time.
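As a concrete, self-contained illustration (not from the paper), the following NumPy sketch applies U = S(I⊗C) for a few steps on an N-node cycle with a two-dimensional spin space; the cycle topology and Hadamard coin are illustrative choices only.

```python
import numpy as np

# One step of a discrete-time quantum walk, U = S(I ⊗ C): the coin C acts
# on the 2-dimensional spin space at every node, then the shift S moves
# amplitudes along edges. Cycle graph and Hadamard coin are assumptions.

N = 8
C = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard coin

psi = np.zeros((N, 2), dtype=complex)            # psi[j, k]: amplitude at node j, spin k
psi[0] = [1 / np.sqrt(2), 1j / np.sqrt(2)]       # walker localized at node 0

def step(psi):
    psi = psi @ C.T                              # (I ⊗ C): coin acts on the spin index
    out = np.empty_like(psi)
    out[:, 0] = np.roll(psi[:, 0], -1)           # S: spin-0 amplitudes move left
    out[:, 1] = np.roll(psi[:, 1], +1)           # S: spin-1 amplitudes move right
    return out

for _ in range(3):
    psi = step(psi)

print(np.sum(np.abs(psi) ** 2))                  # total probability stays 1 (unitarity)
```

Because both the coin and the shift are unitary, the total probability is preserved at every step.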
Physical implementation of discrete quantum walks
Over the past few years, there have been several proposals for the physical implementation of quantum walks. Quantum walks are unitary processes that are naturally implementable in a quantum system by manipulating its internal structure. The internal structure of the quantum system should be engineered to manifest the position and coin Hilbert spaces of the quantum walk. Such quantum simulation based methods have been proposed using classical and quantum optics (Zhang et al. 2007), nuclear magnetic resonance (Ryan et al. 2005), ion traps (Travaglione and Milburn 2002), cavity QED (Agarwal and Pathak 2005), optical lattices (Joo et al. 2007), and Bose-Einstein condensates (Manouchehri and Wang 2009), as well as quantum dots (Manouchehri and Wang 2008), to implement the quantum walk.
Circuit implementation of quantum walks has also been proposed. While most of these implementations focus on graphs that have a very high degree of symmetry (Loke and Wang 2011) or very sparse graphs (Jordan and Wocjan 2009; Chiang et al. 2010), there is some recent work on circuit implementations on non-degree regular graphs (Loke and Wang 2012).
A central question in implementing quantum walks on graphs is how to scale the physical system to achieve the complexity required for simulating large graphs. Rohde et al. (2013) showed that exponentially larger graphs can be constructed using quantum entanglement as a resource for creating very large Hilbert spaces. They use multiple entangled walkers to simulate a quantum walk on a virtual graph of chosen dimensions. However, this approach has its own limitations, and arbitrary graphs cannot be built with this method.
Quantum walk neural networks
Many graph neural networks pass information between two nodes based on the distance between the nodes in the graph. This is true for both graph convolution networks and diffusion convolution networks. However, quantum walk neural networks are similar to graph attention networks in that the amount of information passed between two nodes also depends on the features of the nodes. In graph attention networks this is achieved by calculating an attention coefficient for each of a node's neighbors. In quantum walk neural networks, the coin operator alters the spin states of the quantum walk to prioritize specific neighbors.
In (Dernbach et al. 2018), the quantum walk neural network evolves a walk using a single coin matrix C: at each step the spin state of the walker Φ is modified according to Φ^{(t+1)}=Φ^{(t)}C, and states are then swapped along the edges of the graph. Features are then diffused across the graph by converting the states of the walker into a probability matrix, P, and using it to diffuse the feature matrix: Y=PX. The coin matrix is learned by backpropagating the gradient of a loss function. In this paper we replace the coin matrix with a node- and time-dependent function we call a bank. The bank forms the first of the three primary parts of a QWNN, followed by the walk and the diffusion: the bank produces the coin matrices used to direct the quantum walk, the walk layers determine the evolution of the quantum walk at each step, and the diffusion layer uses these states to spread information throughout the graph.
Bank
Coin operators modify the spin state of the walk and are thus the primary levers by which a quantum walk is controlled. The coin operator can vary spatially across nodes in the graph, temporally along steps of the walk, or remain constant in either or both dimensions. In the QWNN, the bank produces these coins for the quantum walk layers.
When the learning environment is restricted to a single static graph, the bank stores the coin operators as individual coin matrices distributed across the nodes of the graph. However, for dynamic or multi-graph settings, the bank learns a function that produces coin operators from node features, \(f:X\rightarrow \mathbb {C}^{d\times d}\), where d is the maximum degree of the graph. In general, f can be any function that produces a matrix, followed by a unitary projection to produce a coin C. This projection step is expensive, as it requires a singular value decomposition of a d×d matrix.
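The projection step can be sketched as follows (a minimal illustration, not the paper's implementation): the closest unitary matrix to an arbitrary matrix M, in Frobenius norm, is obtained by replacing the singular values of M with ones.

```python
import numpy as np

# Unitary projection via SVD: given an arbitrary d x d matrix M produced by
# the bank function f, the nearest unitary matrix is U = A @ Vh, where
# M = A S Vh is the singular value decomposition. This is the O(d^3)
# operation the text describes as expensive.

def project_unitary(M):
    A, _, Vh = np.linalg.svd(M)
    return A @ Vh

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
C = project_unitary(M)

print(np.allclose(C.conj().T @ C, np.eye(4)))   # C is (numerically) unitary
```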
In recurrent neural networks (RNNs), unitary matrices are employed to address exploding or vanishing gradients, because backpropagating through a unitary matrix does not change the norm of the gradient. To avoid expensive unitary projections, several recurrent neural network architectures use functions f whose ranges are subsets of unitary matrices. A common practice is to use combinations of low dimensional rotation matrices (Arjovsky et al. 2016; Jing et al. 2017). This was the model used for the coin operators in previous QWNNs (Dernbach et al. 2018).
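A minimal sketch of this rotation-based construction: composing 2×2 Givens rotations embedded in d dimensions always yields a unitary (here real orthogonal) coin, so the angles can be learned directly with no SVD projection. The pairing of adjacent indices below is an illustrative choice, not the parameterization of the cited works.

```python
import numpy as np

# Product of low-dimensional rotation matrices: each factor rotates one
# pair of spin coordinates, and a product of rotations is always unitary.

def givens(d, i, j, theta):
    G = np.eye(d)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = G[j, j] = c
    G[i, j], G[j, i] = -s, s
    return G

def coin_from_angles(d, angles):
    C = np.eye(d)
    for i, theta in enumerate(angles):      # one rotation per pair (i, i+1)
        C = C @ givens(d, i, i + 1, theta)
    return C

C = coin_from_angles(4, [0.3, 1.1, -0.7])
print(np.allclose(C.T @ C, np.eye(4)))      # unitary by construction
```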
We propose two different functions f(v_{i}).
Walk
For a graph with N vertices, the QWNN processes N separate, non-interacting walks in parallel, one originating from each node in the graph. The walks share the same bank functions. A T-step walk produces a sequence of superpositions {Φ^{(0)},Φ^{(1)},...,Φ^{(T)}}. For a graph with maximum degree d, the initial superposition tensor \(\pmb {\Phi }^{(0)}\in \mathbb {C}^{N\times N\times d}\) gives each walker equal spin amplitude along all edges incident to its starting node, such that \(\left (\pmb {\Phi }^{(0)}_{ii\cdot }\right)^{H}\pmb {\Phi }^{(0)}_{ii\cdot }=1\) and \(\forall i{\neq }j:\pmb {\Phi }^{(0)}_{ijk}=0\). The value \(\pmb {\Phi }^{(t)}_{ijk}\) denotes the amplitude of the i-th walker at node v_{j} with spin k after t steps of the walk.
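The initialization above can be sketched directly from its definition (a minimal illustration under the paper's conventions; the equal-amplitude assignment over incident-edge spin slots is the stated construction):

```python
import numpy as np

# Initial superposition tensor Phi0: walker i starts at node i with equal
# amplitude 1/sqrt(deg(i)) along each of its incident edges (spin slots),
# zero elsewhere, so each walker's state has unit norm.

def initial_superposition(A):
    N = A.shape[0]
    d = int(A.sum(axis=1).max())                 # max degree = spin dimension
    Phi0 = np.zeros((N, N, d), dtype=complex)
    for i in range(N):
        deg = int(A[i].sum())
        Phi0[i, i, :deg] = 1.0 / np.sqrt(deg)    # equal spin along incident edges
    return Phi0

# 4-node path graph
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
Phi0 = initial_superposition(A)
# each walker i satisfies (Phi0[i,i])^H Phi0[i,i] = 1
```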
The output Φ^{(t+1)} is fed into the next quantum step layer (if there is one) and the diffusion layer.
Diffusion
The superpositions at each step of the walk are used to diffuse the signal X across the graph. Given a superposition Φ, the diffusion matrix is constructed by summing the squared magnitudes of the spin states: \(\pmb {P}=\sum _{k}\pmb {\Phi }_{\cdot \cdot k}\odot \overline {\pmb {\Phi }}_{\cdot \cdot k}\). The value P_{ij} gives the probability that the walker beginning at v_{i} ends at v_{j}, analogous to a classical random walk matrix. Diffused features can then be computed as a function of P and X by Y=h(PX+b), where h is an optional nonlinearity (e.g. ReLU). The complete calculation for a forward pass of the QWNN is given in Algorithm 1.
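A minimal NumPy sketch of this diffusion step, with randomly generated and then normalized superpositions standing in for the output of the walk layers (the random Φ is an assumption purely for illustration):

```python
import numpy as np

# Diffusion layer: collapse the superposition tensor Phi (walkers x nodes
# x spin) into a probability matrix P by summing squared amplitude
# magnitudes over the spin index, then diffuse node features X.

def diffuse(Phi, X, b=0.0):
    P = np.sum(np.abs(Phi) ** 2, axis=2)    # P[i, j] = Pr(walker i measured at v_j)
    return np.maximum(P @ X + b, 0.0)       # ReLU as the optional nonlinearity h

rng = np.random.default_rng(1)
N, d, F = 3, 2, 2
Phi = rng.standard_normal((N, N, d)) + 1j * rng.standard_normal((N, N, d))
Phi /= np.sqrt(np.sum(np.abs(Phi) ** 2, axis=(1, 2)))[:, None, None]  # unit-norm walkers
X = rng.standard_normal((N, F))
Y = diffuse(Phi, X)

P = np.sum(np.abs(Phi) ** 2, axis=2)
print(np.allclose(P.sum(axis=1), 1.0))      # rows of P are probability distributions
```

Because each walker's state has unit norm, every row of P sums to one, which is what makes P behave like a classical random walk matrix.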
Node and neighborhood ordering
Node ordering, and by extension the neighborhood ordering of each node, can affect a quantum walk if the coin is not equivariant to the ordering. Given a non-equivariant set of coins, permuting the order of the nodes in the graph may change the result of the walk.
Experiments
We demonstrate the effectiveness of QWNNs across three different types of tasks: node level regression, graph classification and graph regression. Our experiments focus on comparisons with three other graph neural network architectures: diffusion convolution neural networks (DCNN) (Atwood and Towsley 2016), graph convolution networks (GCN) (Kipf and Welling 2016), and graph attention networks (GAT) (Velickovic et al. 2017).
For graph level experiments, we employ a set2vec layer (Vinyals et al. 2016) as an intermediary between the graph layers and standard neural network feed forward layers. Set2vec has proved effective in other graph neural networks (Gilmer et al. 2017) as it is a permutation invariant function that converts a set of node features into a fixed length vector.
Node regression
Temperature prediction results
RMSE ± STD by walk length:

| Walk Length | 1 | 2 | 3 | 4 | 5 |
|---|---|---|---|---|---|
| GCN | 8.56±0.02 | 8.14±0.41 | 7.82±0.13 | 8.55±0.52 | 8.88±0.73 |
| DCNN | 8.07±0.21 | 7.40±0.13 | 7.46±0.06 | 7.44±0.10 | 10.19±0.18 |
| GAT | 7.84±0.16 | 8.43±0.42 | 8.47±1.02 | 8.23±0.69 | 7.93±0.15 |
| QWNN | 6.11±0.14 | 5.54±0.16 | 5.38±0.07 | 5.28±0.08 | 5.65±0.02 |
We use this experiment to provide a visualization of the learned quantum walk. Figures 3b and c show the evolution of a classical random walk and of the learned quantum walk originating from the highlighted node, respectively. At each step, warmer colors correspond to nodes with higher superposition amplitudes. Initially, the quantum walk diffuses outward symmetrically, much like a classical random walk, but in the third and fourth steps the learned quantum walk focuses information flow toward the southeast. The ability to direct the walk in this way proves beneficial in the prediction task.
Graph classification
Graph classification datasets summary and results
| | Enzymes | Mutag | NCI1 |
|---|---|---|---|
| Graphs | 600 | 188 | 4110 |
| Average Nodes | 33 | 18 | 30 |
| Max Nodes | 126 | 28 | 111 |
| Max Degree | 9 | 4 | 4 |
| Node Classes | 3 | 7 | 37 |
| Graph Classes | 6 | 2 | 2 |
| **Classification Accuracy ± STD** | | | |
| GCN | 0.31±0.06 | 0.87±0.10 | 0.69±0.02 |
| DCNN | 0.27±0.08 | 0.89±0.10 | 0.69±0.01 |
| GAT | 0.32±0.04 | 0.89±0.06 | 0.66±0.03 |
| QWNN (cen) | 0.26±0.03 | 0.90±0.09 | 0.76±0.01 |
| QWNN (inv) | 0.33±0.04 | 0.88±0.04 | 0.73±0.02 |
| WL | 0.59±0.01 | 0.84±0.01 | 0.85±0.00 |
| SP | 0.41±0.02 | 0.87±0.01 | 0.73±0.00 |
For the Enzymes and NCI1 experiments, the quantum walk neural networks are composed of a length-6 walk, followed by a set2vec layer, a hidden layer of size 64, and a final softmax layer. For Mutag, the walk length is reduced to 4 and the hidden layer to 16; the reduced size helps alleviate overfitting on such a small training set. We report the best results using the centrality-based node ordering version of the network with the linear bank function, QWNN (cen), as well as the invariant QWNN using the equivariant bank function, QWNN (inv). We also report results from the three other graph networks. GCN, DCNN, and GAT are each used as the initial layer of a similar neural network, followed by a set2vec layer, a hidden layer of size 64 (16 for Mutag), and a softmax output layer. DCNN uses a walk length of 2, while GCN and GAT use feature sizes of 32. Additionally, we compare with two graph kernel methods, Weisfeiler-Lehman (WL) kernels (Shervashidze et al. 2011) and shortest path (SP) kernels (Borgwardt and Kriegel 2005), using the results given in (Shervashidze et al. 2011).
Classification accuracies are reported in Table 2, with the best neural network accuracies and the best overall accuracies in bold. QWNNs are competitive with the other neural network approaches: QWNN achieves the best average accuracy on Mutag and Enzymes, though the other neural networks are within the margin of error, and on NCI1 QWNN shows a measurable improvement over the other neural networks. The WL kernels outperform all the neural network approaches on both Enzymes and NCI1.
Graph regression
For this task, we form an approximation of the molecular graph from the Coulomb matrix by normalizing out the atomic charges and separating all atom-atom pairs into two sets based on their physical distances: one set contains the atom pairs with larger distances between them, the other those with smaller distances. We create an adjacency matrix from all pairs of atoms in the smaller-distance set. There is generally a significant gap between the distances of bonded and unbonded atoms in a molecule, but this approach leaves 19 disconnected graphs. For these molecules, edges are added between the least distant pairs of atoms until the graph becomes connected. We use the element of each atom, encoded as a one-hot vector, as the input features for each node.
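The construction above can be sketched as follows. Two simplifying assumptions, not from the paper: the bonded/unbonded split is done with a plain distance threshold rather than the gap between the two sets, and disconnected graphs are repaired by repeatedly adding the least-distant edge joining two components.

```python
import numpy as np

# Build an adjacency matrix from a pairwise-distance matrix D, then
# connect any remaining components via their least-distant atom pairs.

def build_adjacency(D, threshold):
    N = D.shape[0]
    A = ((D < threshold) & ~np.eye(N, dtype=bool)).astype(int)

    def components(adj):                      # connected components by DFS
        seen, comps = set(), []
        for s in range(N):
            if s in seen:
                continue
            stack, comp = [s], set()
            while stack:
                u = stack.pop()
                if u in comp:
                    continue
                comp.add(int(u))
                stack.extend(np.flatnonzero(adj[u]))
            seen |= comp
            comps.append(comp)
        return comps

    comps = components(A)
    while len(comps) > 1:
        best = None
        for i in range(N):
            ci = next(c for c in comps if i in c)
            for j in range(i + 1, N):
                if A[i, j] or j in ci:        # skip existing edges / same component
                    continue
                if best is None or D[i, j] < D[best]:
                    best = (i, j)
        i, j = best                           # least-distant cross-component pair
        A[i, j] = A[j, i] = 1
        comps = components(A)
    return A

# Two "molecules" of two atoms each, far apart: the threshold yields two
# components, and one repair edge reconnects them.
D = np.array([[0., 1., 9., 9.],
              [1., 0., 9., 9.],
              [9., 9., 0., 1.],
              [9., 9., 1., 0.]])
A = build_adjacency(D, threshold=2.0)
```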
Atomization energy prediction results
| | RMSE | MAE |
|---|---|---|
| GCN | 16.51±0.38 | 12.39±0.29 |
| DCNN | 11.90±0.59 | 8.53±0.42 |
| GAT | 18.75±0.51 | 14.52±1.12 |
| QWNN (cen) | 9.70±0.77 | 6.74±0.24 |
| QWNN (inv) | 10.91±0.56 | 8.28±0.47 |
Limitations
Storing the superposition of a single walker requires O(Nd) space, where N is the number of nodes in the graph and d is the maximum degree. Computing a complete diffusion matrix requires a separate walker beginning at every node, increasing the space requirement to O(N^{2}d), which becomes intractable for very large graphs, especially when learning on a graphics processing unit (GPU). Some of this cost can be alleviated using sparse tensors. At time t=0 the superpositions are localized to single nodes, so only O(Nd) space is used by nonzero amplitudes. The first step increases this to O(Nd^{2}) as the amplitudes at neighboring nodes become nonzero. Given a function s(G,t) that determines the number of nodes in a graph reachable by a t-length random walk, the space complexity for a t-length walk is O(Nds(G,t)).
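The growth of the nonzero support can be illustrated with a breadth-first count of s(G,t) for one walker (an illustrative helper, not from the paper):

```python
import numpy as np

# s(G, t) for a single source: the number of nodes reachable within t hops,
# which bounds the nonzero node entries of that walker's superposition.

def reachable_within(A, source, t):
    frontier, seen = {source}, {source}
    for _ in range(t):
        frontier = {int(v) for u in frontier for v in np.flatnonzero(A[u])} - seen
        seen |= frontier
    return len(seen)

# 4-node path graph: reachability grows by one node per step, then saturates
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
print([reachable_within(A, 0, t) for t in range(5)])   # [1, 2, 3, 4, 4]
```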
The majority of graph neural networks, including GCN, DCNN, and GAT, are invariant to the ordering of the nodes in the graph. We provide one formulation of QWNN that is also invariant; however, the second formulation is not. Although we have greatly reduced the effect, node ordering can still affect the walk produced by a QWNN, and thus the overall output of the network. This can occur when two otherwise distinguishable nodes have the same betweenness centrality.
Concluding remarks
Quantum walk neural networks provide a unique neural network approach to graph classification and regression problems. Unlike prior graph neural networks, QWNNs fully integrate the graph structure and the graph signal into the learning process. This allows a QWNN to learn task-dependent walks on complex graphs. The benefit of using the distributions produced by these walks as diffusion operators is especially clear in regression problems, where QWNNs demonstrate considerable improvement over other graph neural network approaches, at both the node and the graph level.
An added benefit of QWNNs is that the learned walks provide a human-understandable glimpse of where the network determines information originating from each node is most beneficial in the graph. In the current work, each walker on the graph operates independently. A future research direction is to investigate learning multi-walker quantum walks on graphs; reducing the number of independent walkers and allowing interactions can reduce the space complexity of the quantum walk layers.
Acknowledgements
Not applicable.
Authors’ contributions
SD worked on conceptualization, methodology, software writing, experiments, writing, and review and editing of the paper. AMK worked on conceptualization, writing, and review and editing of the paper. SP worked on conceptualization, writing, review and editing of the paper, and acquisition of funding for the research. MG helped with the methodology and worked on software. DT worked on conceptualization, review and editing, supervision of the research, acquisition of funding, and methodology. All authors read and approved the final manuscript.
Funding
Research was sponsored by the Army Research Laboratory and was accomplished under Cooperative Agreement Number W911NF-09-2-0053 (the ARL Network Science CTA). The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the Army Research Laboratory or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. This document does not contain technology or technical data controlled under either the U.S. International Traffic in Arms Regulations or the U.S. Export Administration Regulations.
Competing interests
The authors declare that they have no competing interests.
References
- Agarwal, GS, Pathak PK (2005) Quantum random walk of the field in an externally driven cavity. Phys Rev A 72(3):033815.
- Aharonov, Y, Davidovich L, Zagury N (1993) Quantum random walks. Phys Rev A 48(2):1687.
- Aharonov, D, Ambainis A, Kempe J, Vazirani U (2001) Quantum walks on graphs In: Proceedings of the Thirty-third Annual ACM Symposium on Theory of Computing, 50–59. ACM, New York.
- Ahmad, R, Sajjad U, Sajid M (2019) One-dimensional quantum walks with a position-dependent coin. arXiv preprint arXiv:1902.10988.
- Altaisky, M (2001) Quantum neural network. arXiv preprint quant-ph/0107012.
- Ambainis, A (2003) Quantum walks and their algorithmic applications. Int J Quantum Inf 1(04):507–518.
- Ambainis, A, Bach E, Nayak A, Vishwanath A, Watrous J (2001) One-dimensional quantum walks In: Proceedings of the Thirty-third Annual ACM Symposium on Theory of Computing, 37–49. ACM, New York.
- Arjovsky, M, Shah A, Bengio Y (2016) Unitary evolution recurrent neural networks In: International Conference on Machine Learning, 1120–1128.
- Atwood, J, Towsley D (2016) Diffusion-convolutional neural networks In: Advances in Neural Information Processing Systems 29, 1993–2001. Curran Associates, Inc., Red Hook.
- Bai, L, Hancock ER, Torsello A, Rossi L (2013) A quantum Jensen-Shannon graph kernel using the continuous-time quantum walk In: International Workshop on Graph-Based Representations in Pattern Recognition, 121–131. Springer, Berlin.
- Bai, L, Rossi L, Cui L, Zhang Z, Ren P, Bai X, Hancock E (2017) Quantum kernels for unattributed graphs using discrete-time quantum walks. Pattern Recogn Lett 87:96–103.
- Bai, L, Rossi L, Torsello A, Hancock ER (2015) A quantum Jensen-Shannon graph kernel for unattributed graphs. Pattern Recogn 48(2):344–355.
- Biamonte, J, Wittek P, Pancotti N, Rebentrost P, Wiebe N, Lloyd S (2017) Quantum machine learning. Nature 549(7671):195.
- Blum, LC, Reymond J-L (2009) 970 million druglike small molecules for virtual screening in the chemical universe database GDB-13. J Am Chem Soc 131:8732.
- Borgwardt, KM, Kriegel H-P (2005) Shortest-path kernels on graphs In: Fifth IEEE International Conference on Data Mining (ICDM'05), 8. IEEE, Houston.
- Borgwardt, KM, Ong CS, Schönauer S, Vishwanathan S, Smola AJ, Kriegel H-P (2005) Protein function prediction via graph kernels. Bioinformatics 21(suppl_1):47–56.
- Brandes, U (2001) A faster algorithm for betweenness centrality. J Math Sociol 25(2):163–177.
- Bruna, J, Zaremba W, Szlam A, LeCun Y (2014) Spectral networks and locally connected networks on graphs In: International Conference on Learning Representations (ICLR). OpenReview.net, Amherst.
- Chiang, C-F, Nagaj D, Wocjan P (2010) Efficient circuits for quantum walks. Quantum Inf Comput 10(5):420–434.
- Childs, AM (2009) Universal computation by quantum walk. Phys Rev Lett 102(18):180501.
- Debnath, AK, Lopez de Compadre RL, Debnath G, Shusterman AJ, Hansch C (1991) Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity. J Med Chem 34(2):786–797.
- Defferrard, M, Bresson X, Vandergheynst P (2016) Convolutional neural networks on graphs with fast localized spectral filtering. In: Lee DD, Sugiyama M, Luxburg UV, Guyon I, Garnett R (eds) Advances in Neural Information Processing Systems 29, 3844–3852. Curran Associates, Inc., Red Hook.
- Dernbach, S, Mohseni-Kabir A, Pal S, Towsley D (2018) Quantum walk neural networks for graph-structured data. In: Aiello LM, Cherifi C, Cherifi H, Lambiotte R, Lió P, Rocha LM (eds) Complex Networks and Their Applications VII, 182–193. Springer, Cham.
- Dunjko, V, Briegel HJ (2018) Machine learning & artificial intelligence in the quantum domain: a review of recent progress. Rep Prog Phys 81(7):074001.
- Farhi, E, Gutmann S (1998) Quantum computation and decision trees. Phys Rev A 58(2):915.
- Gilmer, J, Schoenholz SS, Riley PF, Vinyals O, Dahl GE (2017) Neural message passing for quantum chemistry. In: Precup D, Teh YW (eds) Proceedings of the 34th International Conference on Machine Learning, 1263–1272. PMLR, Sydney.
- Gori, M, Monfardini G, Scarselli F (2005) A new model for learning in graph domains In: Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, 729–734. IEEE, Montreal.
- Gupta, S, Zia R (2001) Quantum neural networks. J Comput Syst Sci 63(3):355–383.
- Jing, L, Shen Y, Dubček T, Peurifoy J, Skirlo S, LeCun Y, Tegmark M, Soljačić M (2017) Tunable efficient unitary neural networks (EUNN) and their application to RNNs In: Proceedings of the 34th International Conference on Machine Learning, 1733–1741. JMLR.org, Sydney.
- Joo, J, Knight PL, Pachos JK (2007) Single atom quantum walk with 1d optical superlattices. J Mod Opt 54(11):1627–1638.
- Jordan, SP, Wocjan P (2009) Efficient quantum circuits for arbitrary sparse unitaries. Phys Rev A 80(6):062301.
- Kendon, V (2006) Quantum walks on general graphs. Int J Quantum Inf 4(05):791–805.
- Kipf, TN, Welling M (2016) Semi-supervised classification with graph convolutional networks In: 5th International Conference on Learning Representations, ICLR 2017. OpenReview.net, Amherst.
- Krizhevsky, A, Sutskever I, Hinton GE (2012) ImageNet classification with deep convolutional neural networks. In: Pereira F, Burges CJC, Bottou L, Weinberger KQ (eds) Advances in Neural Information Processing Systems 25, 1097–1105. Curran Associates, Inc., Red Hook.
- Loke, T, Wang J (2011) An efficient quantum circuit analyser on qubits and qudits. Comput Phys Commun 182(10):2285–2294.
- Loke, T, Wang J (2012) Efficient circuit implementation of quantum walks on non-degree-regular graphs. Phys Rev A 86(4):042338.
- Lovett, NB, Cooper S, Everitt M, Trevers M, Kendon V (2010) Universal quantum computation using the discrete-time quantum walk. Phys Rev A 81(4):042330.
- Manouchehri, K, Wang J (2008) Quantum walks in an array of quantum dots. J Phys A Math Theor 41(6):065304.
- Manouchehri, K, Wang J (2009) Quantum random walks without walking. Phys Rev A 80(6):060304.
- Nayak, A, Vishwanath A (2000) Quantum walk on the line. arXiv preprint quant-ph/0010117.
- Qiang, X, Yang X, Wu J, Zhu X (2012) An enhanced classical approach to graph isomorphism using continuous-time quantum walk. J Phys A Math Theor 45(4):045305.
- Rohde, PP, Schreiber A, Štefaňák M, Jex I, Silberhorn C (2011) Multi-walker discrete time quantum walks on arbitrary graphs, their properties and their photonic implementation. New J Phys 13(1):013001.
- Rohde, PP, Schreiber A, Štefaňák M, Jex I, Gilchrist A, Silberhorn C (2013) Increasing the dimensionality of quantum walks using multiple walkers. J Comput Theor Nanosci 10(7):1644–1652.
- Rossi, MA, Benedetti C, Borrelli M, Maniscalco S, Paris MG (2017) Continuous-time quantum walks on spatially correlated noisy lattices. Phys Rev A 96(4):040301.
- Rossi, L, Torsello A, Hancock ER (2013) A continuous-time quantum walk kernel for unattributed graphs. In: Kropatsch WG, Artner NM, Haxhimusa Y, Jiang X (eds) Graph-Based Representations in Pattern Recognition, 101–110. Springer, Berlin.
- Rossi, L, Torsello A, Hancock ER (2015) Measuring graph similarity through continuous-time quantum walks and the quantum Jensen-Shannon divergence. Phys Rev E 91(2):022815.
- Rupp, M, Tkatchenko A, Müller K-R, von Lilienfeld OA (2012) Fast and accurate modeling of molecular atomization energies with machine learning. Phys Rev Lett 108:058301.
- Ryan, CA, Laforest M, Boileau J-C, Laflamme R (2005) Experimental implementation of a discrete-time quantum random walk on an NMR quantum-information processor. Phys Rev A 72(6):062317.
- Scarselli, F, Gori M, Tsoi AC, Hagenbuchner M, Monfardini G (2009) The graph neural network model. IEEE Trans Neural Netw 20(1):61–80.
- Schomburg, I, Chang A, Ebeling C, Gremse M, Heldt C, Huhn G, Schomburg D (2004) BRENDA, the enzyme database: updates and major new developments. Nucleic Acids Res 32(suppl_1):431–433.
- Shenvi, N, Kempe J, Whaley KB (2003) Quantum random-walk search algorithm. Phys Rev A 67(5):052307.
- Shervashidze, N, Schweitzer P, van Leeuwen EJ, Mehlhorn K, Borgwardt KM (2011) Weisfeiler-Lehman graph kernels. J Mach Learn Res 12:2539–2561.
- Travaglione, BC, Milburn GJ (2002) Implementing the quantum random walk. Phys Rev A 65(3):032310.
- Velickovic, P, Cucurull G, Casanova A, Romero A, Lio P, Bengio Y (2017) Graph attention networks In: Proceedings of the International Conference on Learning Representations (ICLR). ICLR, Amherst.
- Vinyals, O, Bengio S, Kudlur M (2016) Order matters: Sequence to sequence for sets In: 4th International Conference on Learning Representations, ICLR 2016. OpenReview.net, Amherst.
- Wale, N, Watson IA, Karypis G (2008) Comparison of descriptor spaces for chemical compound retrieval and classification. Knowl Inf Syst 14(3):347–375.
- Williams, C, Vose R, Easterling D, Menne M (2006) United States Historical Climatology Network daily temperature, precipitation, and snow data. ORNL/CDIAC-118, NDP-070. Available online: http://cdiac.ornl.gov/epubs/ndp/ushcn/usa, from the Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, USA.
- Zhang, P, Ren X-F, Zou X-B, Liu B-H, Huang Y-F, Guo G-C (2007) Demonstration of one-dimensional quantum random walks using orbital angular momentum of photons. Phys Rev A 75(5):052310.
Copyright information
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.