A Quantum Convolutional Neural Network on NISQ Devices

Quantum machine learning is one of the most promising applications of quantum computing in the Noisy Intermediate-Scale Quantum (NISQ) era. Here we propose a quantum convolutional neural network (QCNN) inspired by convolutional neural networks (CNNs), which greatly reduces the computing complexity compared with its classical counterpart, requiring $O((\log_{2}M)^6)$ basic gates and $O(m^2+e)$ variational parameters, where $M$ is the input data size, $m$ is the filter mask size, and $e$ is the number of parameters in a Hamiltonian. Our model is robust to certain noise in image recognition tasks, and the number of parameters is independent of the input size, making it friendly to near-term quantum devices. We demonstrate the QCNN with two explicit examples. First, the QCNN is applied to image processing, and numerical simulations of three types of spatial filtering, namely image smoothing, sharpening, and edge detection, are performed. Second, we demonstrate the QCNN on an image recognition task, the recognition of handwritten digits. Compared with previous work, this machine learning model provides implementable quantum circuits that accurately correspond to a specific classical convolutional kernel. It offers an efficient avenue to transform a CNN into a QCNN directly and opens up the prospect of exploiting quantum power to process information in the era of big data.


I. INTRODUCTION
Machine learning has fundamentally transformed the way people think and behave. The convolutional neural network (CNN) is an important machine learning model that has the advantage of exploiting the correlation information of data, with many interesting applications ranging from image recognition to precision medicine.
A CNN generally consists of three kinds of layers: convolution layers, pooling layers, and fully connected layers. A convolution layer calculates the new pixel values $x^{(\ell)}_{i,j}$ from a linear combination of the neighborhood pixels in the preceding map with specific weights, $x^{(\ell)}_{i,j} = \sum_{a,b=1}^{m} w_{a,b}\, x^{(\ell-1)}_{i+a-2,\, j+b-2}$, where the weights $w_{a,b}$ form an $m \times m$ matrix called a convolution kernel or filter mask. A pooling layer reduces the feature map size, e.g. by taking the average value of four contiguous pixels, and is often followed by the application of a nonlinear (activation) function. The fully connected layer computes the final output as a linear combination of all remaining pixels with specific weights determined by the parameters of the layer. The weights in the filter mask and the fully connected layer are optimized by training on large datasets.
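As a concrete classical reference for these two layer types, the convolution rule and average pooling above can be sketched as follows (a minimal NumPy sketch; the function names are ours, and the convolution is computed over the valid region only):

```python
import numpy as np

def convolve(x, w):
    """Apply an m x m filter mask w to image x, following
    x'_{i,j} = sum_{a,b} w_{a,b} x_{i+a-2, j+b-2} (valid region only)."""
    m = w.shape[0]
    M, L = x.shape
    out = np.zeros((M - m + 1, L - m + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(w * x[i:i + m, j:j + m])
    return out

def average_pool(x):
    """2x2 average pooling with a stride of 2 pixels."""
    M, L = x.shape
    return x.reshape(M // 2, 2, L // 2, 2).mean(axis=(1, 3))
```

A 3 × 3 all-ones mask applied to a constant image, for instance, returns 9 times the pixel value at every interior position, and pooling quarters the image size.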
In this article, we demonstrate the basic framework of a quantum convolutional neural network (QCNN) by sequentially realizing convolution layers, pooling layers, and fully connected layers. First, we implement convolution layers based on a linear combination of unitary operators (LCU) [22][23][24]. Second, we discard some qubits in the quantum circuit to simulate the effect of the classical pooling layer. Finally, the fully connected layer is realized by measuring the expectation value of a parametrized Hamiltonian and applying a nonlinear (activation) function to post-process the expectation value. We perform numerical demonstrations with two examples to show the validity of our algorithm. Finally, the computing complexity of our algorithm is discussed, followed by a summary.

Figure 1: Comparison of classical and quantum convolution processing. F and G are the input and output image data, respectively. On a classical computer, an M × M image can be represented as a matrix and encoded with at least $2^n$ bits [$n = \log_2(M^2)$]; the classical image transformation through the convolution layer is performed by the matrix computation F * W. The same image can be represented as a quantum state and encoded in at least n qubits on a quantum computer; the quantum image transformation is realized by a unitary evolution U on a specific quantum state.

A. Quantum Convolution Layer
The first step in performing the quantum convolution layer is to encode the image data into a quantum system. In this work, we encode the pixel positions in the computational basis states and the pixel values in the probability amplitudes, forming a pure quantum state. Given a 2D image $F = (F_{i,j})_{M\times L}$, where $F_{i,j}$ represents the pixel value at position $(i,j)$ with $i = 1, \dots, M$ and $j = 1, \dots, L$, $F$ is rearranged as a vector $f$ with $ML$ elements by putting the first column of $F$ into the first $M$ elements of $f$, the second column into the next $M$ elements, and so on. The image data $f$ can then be mapped onto a pure quantum state $|f\rangle = \sum_{k=0}^{2^n-1} c_k |k\rangle$ with $n = \log_2(ML)$ qubits, where the computational basis state $|k\rangle$ encodes the position $(i,j)$ of each pixel and the coefficient $c_k$ encodes the pixel value, i.e., $c_k = F_{i,j}/(\sum F_{i,j}^2)^{1/2}$ for $k < ML$ and $c_k = 0$ for $k \geq ML$. Here $(\sum F_{i,j}^2)^{1/2}$ is a constant factor that normalizes the quantum state. Without loss of generality, we focus on square input images with $M = L$ a power of two, so that the work register contains $\log_2(M^2)$ qubits. The convolution layer transforms an input image $F = (F_{i,j})_{M\times M}$ into an output image $G = (G_{i,j})_{M\times M}$ by a specific filter mask $W$. In the quantum context, this linear transformation, corresponding to a specific spatial filtering operation, can be represented as $|g\rangle = U|f\rangle$ with the input image state $|f\rangle$ and the output image state $|g\rangle$. For simplicity, we take a $3 \times 3$ filter mask as an example; the generalization to an arbitrary $m \times m$ filter mask is straightforward. The quantum evolution $U|f\rangle$ implementing the convolution can be performed as follows. We represent the input image $F = (F_{i,j})_{M\times M}$ as the initial state $|f\rangle = \sum_k c_k |k\rangle$ with $c_k = F_{i,j}/(\sum F_{i,j}^2)^{1/2}$. The $M^2 \times M^2$ linear filtering operator $U$ can be defined in block form from the $M$-dimensional identity matrix $E$ and three $M \times M$ banded matrices $V_1$, $V_2$, and $V_3$, each built from one row of the filter mask $W$ [25]. Generally speaking, the linear filtering operator $U$ is non-unitary and cannot be performed directly.
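The amplitude-encoding step can be sketched numerically (a minimal sketch; `encode_image` is our name, and the column-major stacking and zero-padding follow the construction above):

```python
import numpy as np

def encode_image(F):
    """Map an M x L image to the amplitude vector of an n-qubit state,
    n = ceil(log2(M*L)): the columns of F are stacked first, then the
    vector is l2-normalized and zero-padded to length 2^n."""
    f = F.flatten(order="F").astype(float)   # column-major stacking
    n = int(np.ceil(np.log2(f.size)))
    state = np.zeros(2 ** n)
    state[: f.size] = f / np.linalg.norm(f)
    return state
```

For example, the 2 × 2 image with pixel values 3 and 4 on the diagonal maps to the normalized amplitudes (0.6, 0, 0, 0.8).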
In principle, we can embed $U$ in a bigger system with an ancillary register and decompose it into a linear combination of four unitary operators [26]. However, the number of basic gates needed to perform these unitaries scales exponentially with the dimension of the quantum system, which would eliminate the quantum advantage. Therefore, we present a new approach to constructing the filter operator that reduces the gate complexity. For convenience, we modify the elements of the first row, the last row, the first column, and the last column of the matrices $V_1$, $V_2$, and $V_3$, which is permissible in image processing since it only affects boundary pixels, and obtain an adjusted linear filtering operator $U'$. Next, we decompose each adjusted $V_\mu$ ($\mu = 1, 2, 3$) into three unitary matrices weighted by the corresponding filter-mask entries. Thus the linear filtering operator $U'$ can be expressed as the weighted sum $U' = \sum_{k=1}^{9} \beta_k Q_k$, where each $Q_k$ is a unitary tensor product of two of these matrices normalized by the corresponding weights, and $\beta_k$ is a relabelling of the weights. Now we can perform $U'$ through the linear combination of the unitary operators $Q_k$; the number of unitary operators equals the size of the filter mask. The quantum circuit realizing $U'$ is shown in Fig. 2. The work register $|f\rangle$ and four ancillary qubits $|0000\rangle_a$ are entangled to form a larger system.
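The key idea, a non-unitary filtering operator written as a weighted sum of unitaries, can be illustrated in one dimension (a hedged sketch, not the paper's exact $V$-matrices: here the unitaries are cyclic-shift permutations rather than the boundary-adjusted operators of the text):

```python
import numpy as np

def cyclic_shift(M):
    """Permutation matrix sending basis vector e_i to e_{(i+1) mod M};
    being a permutation, it is unitary."""
    S = np.zeros((M, M))
    for i in range(M):
        S[(i + 1) % M, i] = 1.0
    return S

# A 1-D convolution with kernel (w[-1], w[0], w[1]) is the linear
# combination  A = w[-1]*S.T + w[0]*I + w[1]*S : A itself is non-unitary,
# but each term is unitary, mirroring U' = sum_k beta_k Q_k.
M = 8
S = cyclic_shift(M)
w = {-1: 0.25, 0: 0.5, 1: 0.25}                 # a smoothing kernel
A = w[-1] * S.T + w[0] * np.eye(M) + w[1] * S
```

Applied to a delta image, `A` spreads the single bright pixel over its two neighbours, exactly the smoothing action of the kernel.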
First, we prepare the initial state $|f\rangle$ using the amplitude encoding method or a quantum random access memory (qRAM). Then we perform a unitary matrix $S$ on the ancillary register to transform $|0000\rangle_a$ into the specific superposition state $S|0000\rangle_a = |\psi\rangle_a = \frac{1}{\sqrt{N_c}}\sum_{k=1}^{9} \beta_k |k\rangle$ (11), where $N_c = \sum_{k=1}^{9} \beta_k^2$. $S$ is a parameter matrix corresponding to a specific filter mask that realizes a specific task. Then we implement the series of ancilla-controlled operations $Q_k \otimes |k\rangle\langle k|$ on the work system $|f\rangle$ to realize the LCU. Next, Hadamard gates $H_T = H^{\otimes 4}$ are applied to uncompute the ancillary register $|\psi\rangle_a$. In the resulting state, the component with ancilla state $|i\rangle$ carries the work register filtered by a mask determined by the $i$-th row of the matrix $H_T$ and the first column of the matrix $S$: the first term corresponds exactly to the filter mask $W$, and the $i$-th term corresponds to a filter mask $W_i$ ($i = 2, 3, \dots, 16$). In total, 16 filter masks are realized, corresponding to the 16 different ancilla states $|i\rangle$ ($i = 1, 2, \dots, 16$). Therefore, the whole effect of the evolution on the state $|f\rangle$, without considering the ancilla qubits, is a linear combination of the effects of 16 filter masks.
If we need only one filter mask $W$, we measure the ancillary register and condition on seeing $|0000\rangle$. We then have the state $\frac{1}{N_c}|0000\rangle U'|f\rangle$, which is proportional to the expected result state $|g'\rangle$; the probability of detecting the ancillary state $|0000\rangle$ is determined by the norm of this post-selected component. After obtaining the final result $\frac{1}{N_c}U'|f\rangle$, we can multiply by the constant factor $N_c$ to compute $|g'\rangle = U'|f\rangle$. In conclusion, the filter operator $U'$ can be decomposed into a linear combination of nine unitary operators for a general $3 \times 3$ filter mask $W$. Only four qubits, or a nine-level ancillary system, are consumed to realize the general filter operator $U'$, independent of the image size.

Figure 2: Quantum circuit realizing the QCNN. $|f\rangle$ denotes the initial state of the work system after encoding the image data, and the ancillary system is a four-qubit system in the state $|0000\rangle_a$. The squares represent unitary operations and the circles represent the state of the controlling system. The unitary operations $Q_1, Q_2, \dots, Q_9$ are activated only when the ancillary system is in the states $|0000\rangle, |0001\rangle, \dots, |1000\rangle$, respectively.
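The post-selection step can be checked with a small state-vector simulation (a hedged sketch under our reading of the circuit: ancilla prepared with amplitudes $\beta_k/\sqrt{N_c}$, controlled $Q_k$, then $H^{\otimes 4}$ and projection onto $|0000\rangle_a$; `lcu_apply` is our name):

```python
import numpy as np

def lcu_apply(betas, Qs, f):
    """Simulate the LCU post-selection and return the unnormalized
    work-register component on ancilla |0000> and its probability."""
    d = len(f)
    K = 16                                       # 4 ancilla qubits
    Nc = np.sum(np.asarray(betas, dtype=float) ** 2)
    anc = np.zeros(K)
    anc[: len(betas)] = np.asarray(betas) / np.sqrt(Nc)
    state = np.kron(anc, np.asarray(f, dtype=float))   # ancilla = slow index
    for k, Q in enumerate(Qs):                   # controlled-Q_k on block k
        state[k * d:(k + 1) * d] = Q @ state[k * d:(k + 1) * d]
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    H4 = np.kron(np.kron(H, H), np.kron(H, H))
    state = np.kron(H4, np.eye(d)) @ state
    unnorm = state[:d]                           # <0000|_a component
    return unnorm, np.linalg.norm(unnorm) ** 2
```

With a single identity "filter" ($\beta_1 = 1$, $Q_1 = I$) the post-selected component is $f/4$ and the success probability is $1/16$, the Hadamard-uncompute overhead for four ancilla qubits in this sketch.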
The final stage of our method is to extract useful information from the processed result $|g'\rangle$. Clearly, the image state $|g'\rangle$ differs from $|g\rangle$. However, not all elements of $|f\rangle$ are affected by the adjustment: only the elements corresponding to the four edges of the original image differ, and one is only interested in the pixel values that are evaluated by $W$. These pixel values in $|g'\rangle$ are the same as those in $|g\rangle$ (see the proof in the supplementary materials). So we can obtain the information of $G = (G_{i,j})_{M\times M}$ ($2 \leq i, j \leq M-1$) by evolving $|f\rangle$ under the operator $U'$ instead of $U$.

B. Quantum Pooling Layer
The function of the pooling layer after the convolution layer is to reduce the spatial size of the representation and thereby the number of parameters. We adopt average pooling, which computes the average value of each patch on the feature map, as the pooling layer in our model. Consider a $2 \times 2$ pooling operation applied with a stride of 2 pixels. In the quantum context, it can be realized directly by ignoring the last qubit and the $m$-th qubit of the work register. The input image $|g\rangle = (g_1, g_2, g_3, g_4, \dots, g_{M^2})^T$ is thereby mapped to a pooled output image of one quarter the size.
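In the amplitude picture, with column-major ordering $k = jM + i$, the discarded qubits index the row and column parities, so pooling merges the four amplitudes of each $2 \times 2$ block. A classical sketch of this effect (up to normalization; `pool_state` is our name):

```python
import numpy as np

def pool_state(g, M):
    """2x2 average pooling on an amplitude vector g encoding an M x M
    image in column-major order: average each 2x2 block and re-flatten."""
    G = np.asarray(g, dtype=float).reshape(M, M, order="F")
    P = (G[0::2, 0::2] + G[1::2, 0::2] + G[0::2, 1::2] + G[1::2, 1::2]) / 4
    return P.flatten(order="F")
```

For a 2 × 2 image with amplitudes (1, 2, 3, 4) the pooled result is the single value 2.5.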

C. Quantum Fully Connected Layer
Fully connected layers compile the data extracted by the previous layers to form the final output; they usually appear at the end of a convolutional neural network. We define a parametrized Hamiltonian as the quantum fully connected layer. This Hamiltonian consists of identity operators $I$ and Pauli operators $\sigma_z$, $H = h_0 I + \sum_i h_i \sigma_z^i + \sum_{i,j} h_{ij} \sigma_z^i \sigma_z^j + \cdots$, where $h_0, h_i, h_{ij}, \dots$ are the parameters, and the Roman indices $i, j$ denote the qubits on which the operators act, i.e., $\sigma_z^i$ means the Pauli matrix $\sigma_z$ acting on the qubit at site $i$. We measure the expectation value of the parametrized Hamiltonian, $f(p) = \langle p|H|p\rangle$; $f(p)$ is the final output of the whole quantum neural network. Then we apply an activation function to map $f(p)$ nonlinearly to $R(f(p))$. The parameters of the Hamiltonian $H$ are updated by the gradient descent method, i.e., from the gradients $\partial f(p)/\partial h$. The parameters in the $S$ matrix can be trained through classical backpropagation. This completes the framework of our quantum neural network; we demonstrate its performance on image processing and handwritten digit recognition in the next section.
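The expectation-value readout can be sketched numerically, truncating the Hamiltonian at two-body terms (a minimal sketch; the sigmoid activation and the function names are our assumptions, not the paper's stated choice):

```python
import numpy as np

def z_on(n, i):
    """sigma_z acting on qubit i of an n-qubit register (qubit 0 = MSB)."""
    ops = [np.eye(2)] * n
    ops[i] = np.diag([1.0, -1.0])
    out = ops[0]
    for op in ops[1:]:
        out = np.kron(out, op)
    return out

def fully_connected(p, h0, h1, h2):
    """f(p) = <p|H|p> with H = h0*I + sum_i h1[i] Z_i
    + sum_{i<j} h2[i,j] Z_i Z_j, followed by a sigmoid activation."""
    n = int(np.log2(len(p)))
    H = h0 * np.eye(2 ** n)
    for i in range(n):
        H += h1[i] * z_on(n, i)
        for j in range(i + 1, n):
            H += h2[i, j] * z_on(n, i) @ z_on(n, j)
    f = np.real(np.vdot(p, H @ p))
    return 1.0 / (1.0 + np.exp(-f))              # activation R(f(p))
```

For $|p\rangle = |0\rangle$ and $H = \sigma_z$, the raw expectation is $+1$ before the activation is applied.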

A. Image Processing: Edge Detection, Image Smoothing and Sharpening
In addition to serving as a building block of the QCNN, the quantum convolution layer can also be used for spatial filtering, a technique for image processing [25,27-29] that includes image smoothing, sharpening, edge detection, and edge enhancement. To show that the quantum convolution layer can handle various image processing tasks, we demonstrate three types of image processing, edge detection, image smoothing, and sharpening, with fixed filter masks $W_{de}$, $W_{sm}$, and $W_{sh}$, respectively. In a spatial image processing task, we need only one specific filter mask. Therefore, after performing the quantum convolution layer described above, we measure the ancillary register; if we obtain $|0\rangle$, the algorithm succeeds and the spatial filtering task is completed. The numerical simulation shows that the output images transformed by the classical and quantum convolution layers are exactly the same, as shown in Fig. 3.
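The explicit masks do not survive in this excerpt; the textbook kernels below (Laplacian edge detector, box blur, unsharp kernel) are assumptions standing in for $W_{de}$, $W_{sm}$, $W_{sh}$. The classical reference filter leaves boundary pixels unchanged, matching the action of $U'$:

```python
import numpy as np

# Assumed textbook kernels standing in for W_de, W_sm, W_sh.
W_de = np.array([[-1, -1, -1], [-1, 8, -1], [-1, -1, -1]], dtype=float)
W_sm = np.ones((3, 3)) / 9.0
W_sh = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)

def spatial_filter(F, W):
    """Classical reference for the quantum convolution layer: interior
    pixels get the filtered value, edge pixels are left unchanged."""
    G = F.astype(float).copy()
    M, L = F.shape
    for i in range(1, M - 1):
        for j in range(1, L - 1):
            G[i, j] = np.sum(W * F[i - 1:i + 2, j - 1:j + 2])
    return G
```

On a flat image the edge detector returns zero in the interior (no edges) while the smoothing mask leaves the image unchanged, as expected.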

B. Handwritten Number Recognition
Here we demonstrate an image recognition task on a real-world dataset, MNIST, a handwritten digit dataset. In this case, we simulate a complete quantum convolutional neural network model, including a convolution layer, a pooling layer, and a fully connected layer, as shown in Fig. 2. We consider the two-class image recognition task (recognizing the handwritten digits 1 and 8) and the ten-class task (recognizing the handwritten digits 0-9). Meanwhile, considering the noise in NISQ quantum systems, we simulate two circumstances: the quantum gate $Q_k$ is either a perfect gate or a gate subject to noise. The noise is simulated by randomly applying a single-qubit Pauli gate from $\{I, X, Y, Z\}$ with probability 0.01 after each operation in the quantum circuit. In detail, a handwritten digit image in MNIST has $28 \times 28$ pixels. For convenience, we pad the edges of the initial image with zeros to $32 \times 32$ pixels. Thus the work register of the QCNN consists of 10 qubits and the ancillary register needs 4 qubits. The convolution layer is characterized by 9 learnable parameters in the matrix $W$, the same for the QCNN and the CNN. In the QCNN, we perform the pooling layer on the quantum circuit by abandoning the 4-th and 9-th qubits of the work register; in the CNN, we perform the average pooling layer directly. By measuring the expectation values of different Hamiltonians on the remaining work qubits, we obtain the measurement values; after feeding them into an activation function, we get the final classification result. In the CNN, we use a two-layer fully connected neural network and an activation function. In the two-class problem, the QCNN's parametrized Hamiltonian has 37 learnable parameters, while the CNN's fully connected layer has 256 learnable parameters. Classification results close to 0 are classified as the handwritten digit 1, and those close to 1 as the handwritten digit 8.
In the ten-class problem, the parametrized Hamiltonian has 10 × 37 learnable parameters and the CNN's fully connected layer has 10 × 256 learnable parameters. The result is a 10-dimensional vector, and the predicted class is the index of the maximum element of the vector. Details of the parameters, accuracy, and gate complexity are listed in Table I.
For the two-class problem, the training set and test set contain 5000 and 2100 images, respectively. For the ten-class problem, they contain 60000 and 10000 images, respectively. Because 100 images are randomly chosen per epoch during training, with 50 epochs in total, the accuracy on the training and test sets fluctuates. We therefore repeatedly execute the noisy QCNN, the noise-free QCNN, and the CNN 100 times each, under the same construction, and obtain the average accuracy and its fluctuation range, as shown in Fig. 4. From the numerical simulation results we conclude that the QCNN and the CNN provide similar performance, while the QCNN involves fewer parameters and has a smaller fluctuation range.

IV. ALGORITHM COMPLEXITY
We analyze the computing resources in terms of gate complexity and qubit consumption. (1) Gate complexity. At the convolution layer stage, we can prepare the initial state in $O(\mathrm{poly}(\log_2(M^2)))$ steps. To prepare a particular input $|f\rangle$, we employ the amplitude encoding method of Refs. [30-32]: it was shown that if the amplitudes $c_k$ and the partial sums $P_k = \sum_{k' \leq k} |c_{k'}|^2$ can be efficiently calculated by a classical algorithm, constructing the $\log_2(M^2)$-qubit state takes $O(\mathrm{poly}(\log_2(M^2)))$ steps. Alternatively, we can resort to quantum random access memory (qRAM) [33-35], an efficient method for state preparation whose complexity is $O(\log_2(M^2))$ once the quantum memory cells are established. Moreover, the controlled operations $Q_k$ can be decomposed into $O((\log_2 M)^6)$ basic gates (see details in Appendix A). In summary, our algorithm uses $O((\log_2 M)^6)$ basic steps.

V. CONCLUSION
In summary, we have designed a quantum neural network that provides an exponential speed-up over its classical counterpart in gate complexity. With fewer parameters, our model achieves performance similar to the classical algorithm on handwritten digit recognition tasks. Therefore, this algorithm has significant advantages over classical algorithms for large data. We present two interesting and practical applications, image processing and handwritten digit recognition, to demonstrate the validity of our method. We give the mapping relation between a specific classical convolutional kernel and a quantum circuit, which provides a bridge between CNNs and QCNNs. It is a general algorithm and can be implemented on any programmable quantum computer, such as superconducting, trapped-ion, and photonic quantum computers. In the big data era, this algorithm has great potential to outperform its classical counterpart and to serve as an efficient solution.

Appendix A: Equivalence of the outputs of U and U' on interior pixels

After performing $U$ and $U'$ on the quantum state $|f\rangle$ respectively, the results differ only in the elements $k = 1, 2, \dots, M,\; sM+1,\; (s+1)M,\; M^2-M+1, \dots, M^2$, where $1 \leq s \leq M-2$, i.e., the entries corresponding to the boundary of the image. Since $|g'\rangle$ can be remapped to $G'$, $U'$ gives the output image $G' = (G'_{i,j})_{M\times M}$. The elements of $U'$ that differ from those of $U$ only affect the pixels with $i$ or $j$ outside $\{2, \dots, M-1\}$. Thus $G'_{i,j} \neq G_{i,j}$ can occur only for such boundary pixels; namely, the output images satisfy $G'_{i,j} = G_{i,j}$ for $2 \leq i, j \leq M-1$.

Appendix B: Decomposing operator Q into basic gates
Consider the nine operators $Q_1, Q_2, \dots, Q_9$ that constitute the filter operator $U'$. Each $Q_k$ is a tensor product of two of the following three operators: $E_1$, $E_2$, and $E_3$. $E_2$ is an $M \times M$ identity matrix and does not need to be further decomposed. For convenience, consider the $n$-qubit operator $E_1$ of dimension $M \times M$, where $n = \log_2(M^2)$. It can be expressed as a combination of $O(n^3)$ CNOT gates and Pauli $X$ gates, as shown in Fig. 5. Consequently, $E_3$ can be decomposed into the inverse of this combination of basic gates, owing to the fact that $E_3 = E_1^{\dagger}$. Thus each $Q_k$ can be implemented with no more than $O(n^6)$ basic gates, and in total the controlled $Q_k$ operations can be implemented with $O((\log_2 M)^6)$ basic gates.
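Under the common reading that $E_1$ acts as a cyclic shift $|k\rangle \to |k+1 \bmod 2^n\rangle$ (an assumption on our part), its circuit is the standard binary-increment cascade of (multi-)controlled-$X$ gates, each of which further decomposes into elementary CNOT and $X$ gates. A matrix-level sketch of that cascade:

```python
import numpy as np

def cx_gate(n, target, controls):
    """Permutation matrix of an X on `target` controlled on `controls`
    all being 1, for an n-qubit register (qubit 0 = least significant)."""
    d = 2 ** n
    P = np.zeros((d, d))
    for k in range(d):
        if all((k >> c) & 1 for c in controls):
            P[k ^ (1 << target), k] = 1.0
        else:
            P[k, k] = 1.0
    return P

def increment(n):
    """|k> -> |k+1 mod 2^n>: flip bit t iff all lower bits are 1,
    applying the highest-target gate first so controls read original bits."""
    U = np.eye(2 ** n)
    for t in range(n - 1, 0, -1):
        U = cx_gate(n, t, list(range(t))) @ U
    U = cx_gate(n, 0, []) @ U                # final uncontrolled X on bit 0
    return U
```

The cascade is manifestly unitary (a product of permutations), and its inverse, read right to left, gives the decrement, matching $E_3 = E_1^{\dagger}$.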