1 Introduction

A modern approach to Pattern Recognition is the use of Machine Learning (ML) and, in particular, of supervised learning algorithms for pattern classification. This task essentially consists in assigning an input value to a class of a given partition of a dataset, on the basis of a set of training data whose classes are known. As witnessed by the emergence of the field of Quantum Machine Learning (QML) (Wittek 2016; Schuld et al. 2015; Biamonte et al. 2017), Quantum Computation (Nielsen and Chuang 2011) offers a number of algorithmic techniques that can be advantageously applied to reduce the complexity of classical learning algorithms. This possibility has been variously explored for pattern classification; see, e.g., Schuld et al. (2014) and the references therein, as well as the more recent paper by Park et al. (2020).

One of the most important applications of pattern classification is Facial Expression Recognition (Piatokowska and Martyna 2012). In this field, graph theory (Bondy 1976) provides a suitable mathematical model of a human face. In this paper we address the problem of graph classification for implementing the final stage of a facial expression recognition system, namely the stage where an expression, described by a set of features, is assigned to one of several classes representing basic emotions such as anger, happiness, sadness, joy, etc.

Starting from a set of features retrieved from the face region in a previous stage, we associate a graph representation with each facial expression by following two alternative strategies: one generates complete graphs, while the other uses a triangulation algorithm to produce meshed graphs. We then define a quantum supervised learning algorithm that recognizes input images by assigning a specific class label to each of them.

A crucial step of our method is the representation of graphs as quantum states, for which we make use of the amplitude encoding technique, i.e., the encoding of the input features into the amplitudes of a quantum state and their manipulation through quantum gates. Since a state of n qubits is described by \(2^{n}\) complex amplitudes, such an encoding automatically produces an exponential compression of the data. Combined with an appropriate use of quantum interference, this technique is at the basis of the computational speed-up of many quantum algorithms; for example, it is responsible for the exponential speed-up of all quantum algorithms based on the Quantum Fourier Transform (Nielsen and Chuang 2011). However, in our context, an exponential speed-up of the overall algorithm is not to be taken for granted, as the nature of the data may require a computationally expensive initialization of the quantum state.

The quantum circuit we construct is inspired by the work in Schuld et al. (2017). This circuit performs a classification similar to the k-nearest neighbors algorithm used in classical ML, but exploits quantum operations with no classical counterpart, such as those that realize quantum interference: Hadamard gates are used to interfere the new input with the training inputs in such a way that a final measurement of a class qubit identifies the class of the input. As already pointed out in Schuld et al. (2017), this approach uses quantum techniques for implementing ML tasks rather than simply translating ML algorithms so that they run on a quantum computer.

We show the results of experimental tests of our algorithm performed with the IBM open-source quantum computing software development framework (Aleksandrowicz et al. 2019). We compare the results obtained on this quantum simulator with those obtained by a classical algorithm that also classifies the data on the basis of distances. Our experiments show that the accuracy of the quantum classification follows very closely the accuracy of the classical classification if we use the meshed strategy for representing the input data. With the complete graph approach we observe, instead, a much better performance of the classical algorithm. We will argue that this can be explained in terms of the preliminary encoding of the data into quantum states and the higher error rate in the implementation of the complete graph approach.

This paper is structured as follows. In Section 2, we introduce the dataset employed for facial expression recognition as well as the preprocessing methodology that extracts a meaningful graph representation of the data. In Section 3 we explain how to define an encoding of face graphs into quantum states. Sections 4 and 5 are devoted to the construction of the quantum classification circuit, the explanation of the algorithms implementing it, and a discussion of our experimental results. Finally, in Section 6 we draw conclusions and give directions for possible improvements.

2 Dataset and preprocessing

For our experiments, we use the freely available Extended Cohn-Kanade (CK+) database (Lucey et al. 2010). This database collects multiple photos of people labeled by their facial expression, as in the examples shown on the left-hand side of Figs. 1 and 2. Each photo is identified with a point cloud of 68 points, \((x,y) \in \mathbb {R}^{2}\), as shown on the right-hand side of Figs. 1 and 2.

Fig. 1 An element of the CK+ dataset: happy face (left) and its point cloud (right)

Fig. 2 An element of the CK+ dataset: sad face (left) and its point cloud (right)

From this point cloud, we select only the 20 points associated with the mouth, as shown in Fig. 3 for the happy expression.

Fig. 3 Landmark points associated to the happy mouth. Points are represented in the (x,y) plane

With the objective of using this dataset as input to a quantum circuit implementing a classifier of expressions, we first associate a graph with each object of the dataset by following two alternative strategies.

The first strategy considers the weighted complete graph whose vertices are the n landmark points of the mouth and whose edge weights \(w_{ij}\) correspond to the Euclidean distance between vertex i and vertex j. Since a complete graph with n vertices has n(n − 1)/2 edges, its construction requires \(O(n^{2})\) steps.
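As an illustration, the following is a minimal sketch of the first strategy, assuming the landmark points are available as an (n, 2) NumPy array; the function name and the coordinates used in the example are ours and purely illustrative.

```python
import numpy as np

def complete_graph_adjacency(points):
    """Weighted adjacency matrix of the complete graph on the landmark points:
    w_ij is the Euclidean distance between point i and point j."""
    points = np.asarray(points, dtype=float)            # shape (n, 2)
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))            # zeros on the diagonal

# Example with 4 illustrative landmark coordinates in the (x, y) plane.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 2.0]])
A = complete_graph_adjacency(pts)                       # 4x4, n(n-1)/2 = 6 weights
```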

The second strategy is based on the Delaunay triangulation of a set S of n points in \(\mathbb {R}^{2}\) (de Berg et al. 2000). This is a technique for subdividing a planar object into triangles (and a general geometric object in \(\mathbb {R}^{d}\) into simplices). It can be described through the dual Voronoi partition of \(\mathbb {R}^{2}\) into convex cells (polyhedra in \(\mathbb {R}^{d}\)): for each point p in the set S, consider the cell of points that are closer to p than to any other point of S, with respect to the Euclidean distance; the Delaunay triangulation is then obtained by connecting two points of S whenever their cells share a face. The most straightforward algorithm finds the Delaunay triangulation of a set of n points in \(O(n^{2})\) time by randomly adding one vertex at a time and re-triangulating the affected parts of the graph. However, it is possible to improve this algorithm and reduce the runtime to \(O(n \log n)\), as shown in de Berg et al. (2008). By applying the Delaunay triangulation to the points of the mouth of the facial expressions, we obtain meshed weighted graphs, where the weights are the same as those used for the complete graphs.
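The second strategy can be sketched, for instance, with SciPy's Delaunay routine: the helper below keeps an edge (with its Euclidean weight) whenever it belongs to some Delaunay triangle. Function and variable names are ours.

```python
import numpy as np
from scipy.spatial import Delaunay

def meshed_graph_adjacency(points):
    """Weighted adjacency matrix of the Delaunay mesh: only edges belonging to
    some Delaunay triangle are kept, again with Euclidean weights."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    adjacency = np.zeros((n, n))
    for simplex in Delaunay(points).simplices:          # each simplex = (i, j, k)
        for a in range(3):
            for b in range(a + 1, 3):
                i, j = simplex[a], simplex[b]
                w = np.linalg.norm(points[i] - points[j])
                adjacency[i, j] = adjacency[j, i] = w
    return adjacency
```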

Given a mouth landmark point cloud as in Fig. 3, the outputs of these two strategies are shown in Fig. 4.

Fig. 4 Complete and meshed graphs obtained from the mouth landmark points of the happy face, using all the 20 nodes that identify the mouth

Clearly, one can expect a classification based on the complete graph strategy to achieve higher accuracy than one based on the meshed strategy, as it can exploit a richer description of the data. As we will see later, although this is true in the classical case, the quantum algorithm we present may achieve better accuracy with the meshed strategy, due to the lower error rate occurring in this case (the matrices involved are sparser than those of the complete graphs).

3 Encoding graphs into quantum states

Consider an undirected simple graph \(G = (V_{G}, E_{G})\), where \(V_{G}\) is the set of vertices and \(E_{G}\) the set of edges of G. By fixing an ordering \(\{v_{i}\}_{i=1,\dots,n}\) of the vertices, the graph G is uniquely identified by its adjacency matrix \(A_{G}\) with generic element:

$$ a_{ij}= \begin{cases} 1 & \text{if } e_{ij}\in E_{G}, \\ 0 & \text{if } e_{ij}\notin E_{G}. \end{cases} $$

Therefore, if the cardinality of \(V_{G}\) is n, i.e., the graph G has n vertices, then \(A_{G}\) is an n×n square matrix with zeros on the diagonal. Since we are dealing with undirected simple graphs, \(A_{G}\) is also symmetric, which means that the meaningful information about the graph G is contained in the \(d = (n^{2}-n)/2\) elements of the upper triangular part of \(A_{G}\). We can now vectorize those elements and rename them as follows:

$$ \begin{array}{@{}rcl@{}} \mathbf{a}_{G}&=&\left( a_{12}, a_{13}, .., a_{1n}, a_{23},.., a_{2n},.., a_{(n-1)n} \right)^{T}\\&=&\left( g^{(G)}_{1}, g^{(G)}_{2},.., g^{(G)}_{d} \right)^{T} \end{array} $$
(1)

where:

$$ g^{(G)}_{k} \equiv a_{i,j} \text{ for } k= i\times n-\frac{i(i+1)}{2} - n + j. $$
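A small NumPy sketch of this vectorization and of the index map above (with 1-based i and j) is the following; the function names are ours.

```python
import numpy as np

def adjacency_vector(A):
    """Upper-triangular part of A in row-major order, as in Eq. (1)."""
    n = A.shape[0]
    return A[np.triu_indices(n, k=1)]          # a_12, a_13, ..., a_(n-1)n

def index_k(i, j, n):
    """1-based position of a_ij (i < j) inside the adjacency vector."""
    return i * n - i * (i + 1) // 2 - n + j

# Sanity check on a small symmetric matrix with zero diagonal.
n = 4
A = np.triu(np.arange(n * n).reshape(n, n), 1)
A = A + A.T
a = adjacency_vector(A)
assert all(a[index_k(i, j, n) - 1] == A[i - 1, j - 1]
           for i in range(1, n) for j in range(i + 1, n + 1))
```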

Following the approach in Rebentrost et al. (2014), from the vector \(\mathbf{a}_{G}\) we construct a quantum state |G〉 associated with the graph G by encoding the elements of the adjacency vector into the amplitudes of the quantum state:

$$ |G\rangle = \frac{1}{\gamma}\sum\limits_{k=1}^{d} g_{k} |k\rangle, $$
(2)

where γ is a normalization constant given by:

$$ \gamma=\sqrt{\sum\limits_{\{k\ \mathrm{s.t}.\ g_{k}\neq 0\}} |g_{k}|^{2}}. $$
(3)

This encoding can be extended to the case of weighted graphs \(G = (V_{G}, E_{G}, w_{ij})\), where \( w_{ij}\in \mathbb {R}_{\geq 0} \) is the weight associated with the edge \(e_{ij}\in E_{G}\). In this case, the adjacency matrix is defined as:

$$ a_{ij}= \begin{cases} w_{ij} & \text{if } e_{ij}\in E_{G}, \\ 0 & \text{if } e_{ij}\notin E_{G}. \end{cases} $$

Quantum state encoding is then performed as in Eqs. (1), (2), and (3). This allows us to represent a classical vector of d elements as a quantum state of \(N=\lceil \log_{2} (d)\rceil \) qubits, i.e., in our case, with a number of qubits that grows only logarithmically with the number n of the graph vertices.

On the IBM Qiskit framework (Aleksandrowicz et al. 2019), states of the form of Eq. (2) are realized using the method proposed by Shende et al. (2006). This method is based on an asymptotically optimal algorithm for the initialization of a quantum register, which exploits the fact that an arbitrary n-qubit state can be transformed into a separable (i.e., unentangled) state by applying the two controlled rotations \(R_{z}\) and \(R_{y}\). By recursively applying this transformation to the n-qubit register holding the desired target state (i.e., |G〉 in our case), we can construct a circuit that takes it to the n-qubit state |00...0〉. This can be done using a quantum multiplexor circuit U, which is finally reversed in order to obtain the desired initialization circuit. In our case, we identify such an initialization circuit with the unitary G that manipulates a register of N qubits, initially in the state |00...0〉, as follows:

$$ \mathbf{G}|00...0\rangle=|G\rangle, $$

where G represents the following circuit with elementary components CX, \(R_{y}(\theta)\), and \(R_{z}(\phi)\):

$$ \mathbf{G} = \begin{bmatrix} R_{y}(-\theta_{0}) R_{z}(-\phi_{0}) & & & \\ & R_{y}(-\theta_{1}) R_{z}(-\phi_{1}) & & \\ & & {\ddots} & \\ & & & R_{y}(-\theta_{2^{N-1}-1}) R_{z}(-\phi_{2^{N-1}-1}) \end{bmatrix}^{{\dagger} } $$

Unfortunately, the construction of G in this way requires \(O(2^{N+1})\) gates, which represents a bottleneck for our algorithm. It will be the subject of future work to try different state preparation schemes by investigating other approaches such as those in Park et al. (2019), Mottonen et al. (2005), Arunachalam et al. (2015), Zhao et al. (2018), and Ciliberto et al. (2018).
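The following sketch shows how such an initialization can be obtained in practice and how the compiled gate count grows with the register size. It assumes a recent Qiskit version exposing qiskit.circuit.library.StatePreparation (which implements a Shende-style decomposition); the padding and normalization helper is ours.

```python
import numpy as np
from qiskit import QuantumCircuit, transpile
from qiskit.circuit.library import StatePreparation

def graph_state_circuit(adj_vector):
    """Amplitude encoding of the adjacency vector a_G on N = ceil(log2 d) qubits."""
    d = len(adj_vector)
    num_qubits = int(np.ceil(np.log2(d)))
    amplitudes = np.zeros(2 ** num_qubits)
    amplitudes[:d] = adj_vector
    amplitudes /= np.linalg.norm(amplitudes)        # the 1/gamma factor of Eq. (3)
    qc = QuantumCircuit(num_qubits)
    qc.append(StatePreparation(amplitudes), range(num_qubits))
    return qc

# The gate count after compilation grows exponentially with the register size.
for d in (6, 28, 190):                              # n = 4, 8, 20 vertices
    qc = graph_state_circuit(np.random.rand(d))
    compiled = transpile(qc, basis_gates=["cx", "u"])
    print(f"d = {d:3d}  qubits = {qc.num_qubits}  ops = {dict(compiled.count_ops())}")
```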

3.1 An example

Let us select n = 4 random vertices among those belonging to the mouth landmark points. The complete graph constructed on these points looks like the one in Fig. 5.

Fig. 5 Complete graph with four randomly selected mouth landmark points

Such a graph is encoded into the quantum state |G4〉 defined by:

$$ \begin{array}{@{}rcl@{}} &&|G_{4}\rangle = \frac{1}{\gamma}\left( g_{1} |000\rangle+g_{2} |001\rangle+g_{3} |010\rangle\right.\\ &&\left. +g_{4} |011\rangle+g_{5} |100\rangle+g_{6} |101\rangle\right), \end{array} $$

where γ is a normalization constant. The quantum circuit G4 that realizes |G4〉, i.e., such that G4|000〉 = |G4〉, via the method proposed by Shende et al. in (2006) is shown in Fig. 6, where the gate U(𝜃) is defined by:

$$ \mathbf{U(\theta)} = \begin{bmatrix} \cos(\theta/2) & \ \ \ -\sin(\theta/2) \\ \sin(\theta/2) & \ \ \ \cos(\theta/2) \end{bmatrix} $$
Fig. 6 Quantum circuit constructing the state |G4〉. Three qubits are employed in the circuit, denoted by a, b, and c
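Note that the gate U(𝜃) of Fig. 6 coincides with the standard \(R_{y}(\theta )\) rotation, as the following quick check (using Qiskit's RYGate, assumed available) confirms:

```python
import numpy as np
from qiskit.circuit.library import RYGate

theta = 0.7
U = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
              [np.sin(theta / 2),  np.cos(theta / 2)]])
assert np.allclose(U, RYGate(theta).to_matrix())   # U(theta) is exactly R_y(theta)
```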

As claimed before, this algorithm is exponential in the number of input qubits and represents a bottleneck for the speed of the whole algorithm. Thus, finding a more efficient encoding of the data into quantum states is crucial for achieving a better performance of the overall algorithm we define in the next section. Performance would also benefit from a richer connectivity between the physical qubits of the hardware, which would allow us to reduce the number of SWAP gates needed to adapt the quantum circuit to the topology of the quantum computer.

4 Graph quantum classifier

The first implementation of a quantum classifier on the IBM quantum computer appears in Schuld et al. (2017), where data are encoded in a single-qubit state. We extend that work by constructing a circuit that is able to deal with multi-qubit state representations of a dataset.

Given a dataset \( \mathcal {D} \), consider the training set:

$$ \mathcal{D}_{\text{train}} = \{(G^{\left\lbrace 1 \right\rbrace } ,y^{\left\lbrace 1 \right\rbrace }),...,(G^{\left\lbrace M \right\rbrace } ,y^{\left\lbrace M \right\rbrace })\}, $$

of M pairs \( (G^{\left \lbrace m \right \rbrace } ,y^{\left \lbrace m \right \rbrace }) \), where the \(G^{\left \lbrace m \right \rbrace } \) are graphs (e.g., representing the face of an individual), while \(y^{\left \lbrace m \right \rbrace }\in \{c_{l}\}_{l=1}^{L}\) identifies which of the L possible class labels is associated with each graph. Classes partition the graphs according to some features, which in our case correspond to sad vs. happy facial expressions.

Given a new unlabeled input Gtest, the task is to assign a label ytest to Gtest using the distance:

$$ d(G_{1},G_{2})=\sqrt{|\mathbf{a}_{G_{1} }- \mathbf{a}_{G_{2} }|^{2}}= \sqrt{\sum\limits_{i=1}^{d} \left| g_{i}^{(G_{1}) }- g_{i}^{(G_{2}) }\right|^{2}}, $$

and then classifying according to:

$$ \min_{c_{l}} \left[\sum\limits_{m\ s.t.\ y^{\left\lbrace m \right\rbrace }=c_{l} } d\left( G_{\text{test}} ,G^{\left\lbrace m \right\rbrace }\right) \right]. $$
(4)

In the case of binary classification, where there are only two classes, i.e., \(y^{\left \lbrace m \right \rbrace }\in \{+1,-1\}\), the classifier of Eq. (4) can be expressed as:

$$ y_{\text{test}}= -\text{sgn} \left[\sum\limits_{m=1}^{M} y^{\left\lbrace m \right\rbrace } d\left( G_{\text{test}} ,G^{\left\lbrace m \right\rbrace }\right) \right]. $$
(5)
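For reference, a minimal NumPy sketch of the classical classifier of Eq. (5), operating directly on adjacency vectors, could look as follows; the function name and the sample data are ours and purely illustrative.

```python
import numpy as np

def classify_binary(a_test, training):
    """Distance-based binary classifier of Eq. (5).

    `training` is a list of (adjacency_vector, label) pairs, labels in {+1, -1}.
    """
    score = sum(label * np.linalg.norm(np.asarray(a_test) - np.asarray(a_m))
                for a_m, label in training)
    return -1 if score > 0 else +1

# The class whose training graphs are, on the whole, closer to the test graph wins.
training = [(np.array([0.9, 0.1, 0.8]), +1), (np.array([0.1, 0.9, 0.2]), -1)]
print(classify_binary(np.array([1.0, 0.0, 0.9]), training))   # -> +1
```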

The quantum circuit that implements such a classification requires four multi-qubit quantum registers. In the initial configuration,

  • |0〉a is an ancillary qubit that is entangled with the qubits encoding both the test graph and the training graphs;

  • |00..0〉m is a register of \( \lceil \log (M)\rceil \) qubits that stores the index m of the training graphs \( G^{\left \lbrace m\right \rbrace } \);

  • |00..0〉g is a register of \( \lceil \log (d)\rceil \) qubits used to encode both test and training graphs;

  • |00..0〉c is a register of \( \lceil \log (L)\rceil \) qubits that stores the classes \( y^{\left \lbrace m \right \rbrace }\in \{c_{l}\}_{l=1}^{L}\) of the \( G^{\left \lbrace m\right \rbrace } \).

In the case of a training set of two graphs, one per class, the circuit is implemented as shown in Fig. 7.

Fig. 7 Quantum binary classifier circuit

After the first two Hadamard gates, the circuit is in state

$$ \frac{1}{2}\left( |0\rangle_{a} + |1\rangle_{a}\right) \left( |0\rangle_{m} + |1\rangle_{m}\right) |00..0\rangle_{g} |0\rangle_{c}. $$

The Control-Gtest gate, followed by an X operator on the ancilla produces the state:

$$ |G_{\text{test}}\rangle = \frac{1}{\gamma_{\text{test}}}\sum\limits_{k=1}^{d} g_{k}^{\text{test}} |k\rangle, $$

and entangles it with the |0〉a state of the ancilla. Then a \(\mathbf {G^{\left \lbrace 1 \right \rbrace } }\) gate is controlled by both the first qubit a and by the second one m, which is also subjected to an X gate afterwards. This has the effect of creating the state associated to the training graph \( G^{\left \lbrace 1 \right \rbrace }\), namely:

$$ |G^{\left\lbrace 1 \right\rbrace }\rangle = \frac{1}{\gamma^{\left\lbrace 1 \right\rbrace }}\sum\limits_{k=1}^{d} g_{k}^{\left\lbrace 1 \right\rbrace } |k\rangle, $$

and entangling it with both |1〉a and |0〉m. Then, the doubly controlled \( \mathbf {G^{\left \lbrace 2\right \rbrace }} \) gate has a similar effect, entangling the second training graph state \( |G^{\left \lbrace 2 \right \rbrace }\rangle \) with |1〉a and |1〉m. At this stage, the Control-X gate entangles the index m of the two training graphs with the correct class state, i.e., either |0〉c or |1〉c. The state of the circuit at this point can be written as:

$$ \frac{1}{\sqrt{2M}} \sum\limits_{m=0}^{M-1} \left( |0\rangle_{a} |G_{\text{test}}\rangle_{g} + |1\rangle_{a} |G^{\left\lbrace m \right\rbrace}\rangle_{g} \right) |m\rangle_{m}\ |y^{\left\lbrace m \right\rbrace}\rangle_{c}. $$

Finally, the Hadamard gate applied to the ancilla generates the state:

$$ \begin{array}{@{}rcl@{}} &&\frac{1}{2\sqrt{M}} \sum\limits_{m=0}^{M-1} \left[|0\rangle_{a} \left( |G_{\text{test}}\rangle_{g} + |G^{\left\lbrace m \right\rbrace}\rangle_{g} \right) \right.\\&&\left.+ |1\rangle_{a} \left( |G_{\text{test}}\rangle_{g} - |G^{\left\lbrace m \right\rbrace}\rangle_{g} \right) \right] |m\rangle_{m}\ |y^{\left\lbrace m \right\rbrace}\rangle_{c}. \end{array} $$

Measuring the ancilla in the state |0〉a produces the state:

$$ \begin{array}{@{}rcl@{}} &&\frac{1}{\sqrt{ {\sum}_{m} {\sum}_{i} |g_{i}^{(G_{\text{test}}) }+ g_{i}^{(G^{\left\lbrace m \right\rbrace }) }|^{2}}} \sum\limits_{m=0}^{M-1} \sum\limits_{i=1}^{d} \\&&\times\left( g_{i}^{(G_{\text{test}}) }+ g_{i}^{(G^{\left\lbrace m \right\rbrace }) }\right) |i\rangle_{g}\ |m\rangle_{m}\ |y^{\left\lbrace m \right\rbrace}\rangle_{c}, \end{array} $$

and a subsequent measurement of the class qubit \( |y^{\left \lbrace m\right \rbrace }\rangle \) gives the outcome corresponding to state |0〉c with probability:

$$ \begin{array}{@{}rcl@{}} \mathrm{p}\left( y^{\left\lbrace m\right\rbrace} = 0\right) &=& \frac{1}{{\sum}_{m} {\sum}_{i} \left| g_{i}^{(G_{\text{test}})} + g_{i}^{(G^{\left\lbrace m \right\rbrace})} \right|^{2}} \sum\limits_{m\ \mathrm{s.t.}\ y^{\left\lbrace m\right\rbrace} = 0}\ \sum\limits_{i=1}^{d} \left| g_{i}^{(G_{\text{test}})} + g_{i}^{(G^{\left\lbrace m \right\rbrace})} \right|^{2} \\ &=& 1 - \frac{1}{{\sum}_{m} {\sum}_{i} \left| g_{i}^{(G_{\text{test}})} + g_{i}^{(G^{\left\lbrace m \right\rbrace})} \right|^{2}} \sum\limits_{m\ \mathrm{s.t.}\ y^{\left\lbrace m\right\rbrace} = 0} d\left( G_{\text{test}}, G^{\left\lbrace m \right\rbrace}\right)^{2}. \end{array} $$

This probability depends on the distance between the test graph Gtest and all the training graphs belonging to the class c = 0. Therefore, if its value is lower than 0.5, the test graph is classified as ytest = − 1, while if it is higher, it is classified as ytest = + 1. The quantum circuit hence realizes the classification expressed by Eq. (5).
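For concreteness, the following is a hedged Qiskit sketch of the circuit of Fig. 7 for M = 2 training graphs (one per class), followed by the post-selection on the ancilla described above. It uses controlled StatePreparation gates in place of the Control-G blocks and assumes that the installed Qiskit version (together with qiskit-aer) supports controlling them via .control(); register layout, helper functions, and the random input vectors are ours and purely illustrative.

```python
import numpy as np
from qiskit import QuantumCircuit, QuantumRegister, ClassicalRegister, transpile
from qiskit.circuit.library import StatePreparation
from qiskit_aer import AerSimulator

def pad_and_normalize(vec, num_qubits):
    amp = np.zeros(2 ** num_qubits)
    amp[: len(vec)] = vec
    return amp / np.linalg.norm(amp)

def binary_classifier_circuit(a_test, a_class0, a_class1):
    """Interference classifier of Fig. 7 for M = 2 training graphs (one per class);
    a_class0 / a_class1 are the adjacency vectors with class labels 0 and 1."""
    n_g = int(np.ceil(np.log2(len(a_test))))
    anc, idx = QuantumRegister(1, "a"), QuantumRegister(1, "m")
    data, cls = QuantumRegister(n_g, "g"), QuantumRegister(1, "c")
    meas = ClassicalRegister(2, "meas")                 # bit 0: ancilla, bit 1: class
    qc = QuantumCircuit(anc, idx, data, cls, meas)

    qc.h(anc[0]); qc.h(idx[0])
    # Control-G_test followed by X on the ancilla: |G_test> ends up tied to |0>_a.
    qc.append(StatePreparation(pad_and_normalize(a_test, n_g)).control(1),
              [anc[0], *data])
    qc.x(anc[0])
    # Doubly controlled Control-G^{1} and Control-G^{2} on the data register.
    qc.append(StatePreparation(pad_and_normalize(a_class0, n_g)).control(2),
              [anc[0], idx[0], *data])
    qc.x(idx[0])
    qc.append(StatePreparation(pad_and_normalize(a_class1, n_g)).control(2),
              [anc[0], idx[0], *data])
    # Control-X writes the class label of each training graph into the class qubit.
    qc.cx(idx[0], cls[0])
    # Final Hadamard interferes test and training amplitudes.
    qc.h(anc[0])
    qc.measure(anc[0], meas[0])
    qc.measure(cls[0], meas[1])
    return qc

# Post-selected estimate over 1024 shots (random vectors only for illustration).
qc = binary_classifier_circuit(np.random.rand(6), np.random.rand(6), np.random.rand(6))
backend = AerSimulator()
counts = backend.run(transpile(qc, backend), shots=1024).result().get_counts()
kept = {k: v for k, v in counts.items() if k[-1] == "0"}        # ancilla measured 0
p_class0 = sum(v for k, v in kept.items() if k[0] == "0") / sum(kept.values())
print("estimated p(class 0 | ancilla 0) =", p_class0)
```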

5 Classification algorithm and experimental results

The number of qubits used by the quantum classifier scales with the number n of vertices in the graphs, which are randomly selected among the mouth landmark points. As shown in Table 1, the encoding allows for an exponential compression of resources, since only \( O(\log _{2}(n^{2})) \) qubits are needed to represent a complete graph with n nodes. In the table, the number of elements of the adjacency matrix that identify a graph is denoted by d, and its value is n(n − 1)/2.

Table 1 Input size of the classification circuit depending on the number of vertices
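The register sizes referred to in Table 1 can be recomputed with a few lines of Python; the values of n below are illustrative, since the exact rows of the table are not reproduced here.

```python
import math

print(" n    d = n(n-1)/2    data qubits = ceil(log2 d)")
for n in (4, 8, 12, 16, 20):
    d = n * (n - 1) // 2
    print(f"{n:2d}    {d:12d}    {math.ceil(math.log2(d)):10d}")
```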

We evaluated our quantum classifier on both the complete and the meshed graphs using the qasm_simulator available through the IBM cloud platform. We compared the results with those of a classical binary classifier based on the Frobenius norm between graph adjacency matrices, defined by:

$$ ||A|| = \sqrt{\sum\limits_{i, j} | a_{ij} |^{2}}, $$

so that the distance used in Eq. (5) becomes:

$$ d\left( G_{\text{test}}, G^{\left\lbrace m \right\rbrace }\right) =\sqrt{\sum\limits_{i=1}^{d} {\left| g_{i}^{\text{test}} - g_{i}^{\{m\}}\right|}^{2}}. $$
(6)

In order to perform our experiments, we proceeded as follows.

  • By randomly selecting 10 different test graphs Gtest and 25 different pairs of labeled graphs \(\left \lbrace G_{\text {sad}}, G_{\text {happy}} \right \rbrace \), we constructed 250 items of the form \( \left \lbrace G_{\text {test}}, G_{\text {sad}}, G_{\text {happy}} \right \rbrace \) for the classical classifier and the IBM quantum simulator.

Vertices were selected randomly among the 20 landmark points of the mouth, the number n of vertices given as input to the algorithm was gradually increased, and the same randomly drawn sample of vertices was shared by all methods.

Based on this, we have devised a classification algorithm that is divided into two main steps. The first step consists of a procedure, named ClassifyWrtSingleFace, that considers each item \(\left \lbrace G_{\text {test}}, G_{\text {sad}}, G_{\text {happy}} \right \rbrace \) individually and classifies Gtest as happy or sad based only on the distances d(Gtest, Gsad) and d(Gtest, Ghappy). The procedure AccuracyWrtSingleFace evaluates the accuracy of this classification by counting the number of correct answers over all the test graphs. In the second step, another procedure, named ClassifyWrtWholeSet, performs a different classification in which each face Gtest is labeled depending on the outputs of ClassifyWrtSingleFace computed over the whole set {Ghappy, Gsad}.

We have compared the accuracy obtained with different numbers n of mouth landmark points, ranging from 4 to 20. The case n = 20 considers all the points of the mouth. The cases n < 20 consider subsets of the mouth landmark points, chosen uniformly at random. For each n, we have picked 20 randomly chosen subsets and computed the accuracy of the classifier as the mean of the accuracies obtained over these subsets.

5.1 Algorithm step 1

(Algorithm listing a: ClassifyWrtSingleFace)

The evaluation of the distance is obtained classically via the calculation of Eq. (6), and quantumly via the application of the circuit of Section 4 to the item {Gtest, Gsad, Ghappy}. The circuit is executed 1024 times.

Given the set of all test graphs \(\mathcal {G} = \{ (G_{i}, \ell _{i}) \mid i = 1, ..., n \}\), the following procedure describes the calculation of the accuracy of the classification with respect to a single face.

(Algorithm listing b: AccuracyWrtSingleFace)
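Since the original listings are not reproduced here, the following is a plain-Python sketch of the two Step-1 procedures as described in the text, using the classical distance; the quantum variant would estimate the same distances with the circuit of Section 4. Function signatures and data layout are ours.

```python
import numpy as np

def classify_wrt_single_face(a_test, a_sad, a_happy):
    """Step 1: label a test graph from a single {G_sad, G_happy} pair,
    using only the distances d(G_test, G_sad) and d(G_test, G_happy)."""
    d_sad = np.linalg.norm(a_test - a_sad)
    d_happy = np.linalg.norm(a_test - a_happy)
    return "happy" if d_happy < d_sad else "sad"

def accuracy_wrt_single_face(items):
    """Fraction of correct answers over all {G_test, G_sad, G_happy} items;
    each item carries the true label of its test graph."""
    hits = [classify_wrt_single_face(a_test, a_sad, a_happy) == true_label
            for a_test, a_sad, a_happy, true_label in items]
    return float(np.mean(hits))
```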

The values of the accuracy of procedure ClassifyWrtSingleFace that we obtained with the qasm_simulator are reported in Table 2 and described by the plot in Fig. 8.

Table 2 Average accuracy of the ClassifyWrtSingleFace procedure obtained with the qasm_simulator (QC), and the classical classifier (CC)
Fig. 8 Results of the procedure AccuracyWrtSingleFace on graphs with number of vertices n ∈ [4,20], obtained with the qasm_simulator compared with the classical classifier. Plots show the average accuracy of the classifiers, with the shaded areas indicating the max and min accuracies

Figure 8 and Table 2 show that the highest value of the AccuracyWrtSingleFace procedure is obtained with the classical classifier using the complete graph strategy. The classical classifier with meshed graphs shows a worse performance. This is due to the fact that meshed graphs, on average, carry less information about the shape of the mouth. The performance gap between the complete and the meshed approach is less evident in the quantum simulation. This may be due to the fact that the encoding of a complete graph into a quantum state is much more complex than that of a meshed graph, since for complete graphs all n(n − 1)/2 amplitudes are different from zero. In the meshed case, the graphs are sparse and far fewer nonzero amplitudes are needed in the quantum states that encode them. Clearly, the complexity of the quantum states negatively affects the robustness of the circuit. Moreover, it is interesting to notice how the accuracy of the quantum simulation with meshed graphs closely follows the trend of the accuracy obtained by its classical counterpart.

5.2 Algorithm step 2

We now refine the classification by comparing Gtest with the entire set of graphs {Ghappy, Gsad}, so that a face is classified as happy if running procedure ClassifyWrtSingleFace returns happy more often than sad, and as sad otherwise.

The ClassifyWrtWholeSet procedure is described below.

(Algorithm listing c: ClassifyWrtWholeSet)

The accuracy of the above procedure is calculated by the following algorithm as the frequency of the correct answers returned by the ClassifyWrtWholeSet procedure.

(Algorithm listing d: CalculateAccuracyWholeSet)
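Again in place of the original listings, a plain-Python sketch of the Step-2 procedures, reusing classify_wrt_single_face from the previous sketch; names and data layout are ours.

```python
def classify_wrt_whole_set(a_test, training_pairs):
    """Step 2: majority vote of the Step-1 answers over all {G_sad, G_happy} pairs."""
    votes = [classify_wrt_single_face(a_test, a_sad, a_happy)
             for a_sad, a_happy in training_pairs]
    return "happy" if votes.count("happy") > votes.count("sad") else "sad"

def calculate_accuracy_whole_set(test_items, training_pairs):
    """Frequency of correct answers returned by ClassifyWrtWholeSet."""
    hits = [classify_wrt_whole_set(a_test, training_pairs) == true_label
            for a_test, true_label in test_items]
    return sum(hits) / len(hits)
```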

The values returned by the CalculateAccuracyWholeSet procedure for the qasm_simulator and the classical binary classifier are shown in Fig. 9 and Table 3.

Fig. 9 Results of the procedure AccuracyWholeSet on graphs with number of vertices n ∈ [4,20], obtained with the qasm_simulator compared with the classical classifier. Plots show the average accuracy of the classifiers, with the shaded areas indicating the max and min accuracies

Table 3 Results of the average AccuracyWholeSet procedure on graphs with number of vertices n ∈ [4,20], obtained with the qasm_simulator (QC) and the classical classifier (CC)

As in step 1, the highest accuracy is obtained with the classical classifier using the complete graph strategy: the fact that all distances between landmark points are considered makes the classifier very faithful. However, it is worth noticing that we only considered small graphs (up to 20 nodes), which limits the amount of resources needed to deal with complete graphs. In applications where graph dimensions are several orders of magnitude larger, a complete graph classification approach may not be a viable option.

In the classical case, a trade-off between graph complexity and classification accuracy is reached by employing the meshed graph encoding: our AccuracyWholeSet algorithm obtains good levels of accuracy, which never drop below 0.86.

In the quantum simulation, the complete graph strategy performs clearly worse than its classical counterpart, while the quantum meshed approach shows an average accuracy which is very close to the classical one.

Comparing Figs. 9 and 8, we can also observe that in the latter the quantum complete graph strategy has a smaller deviation (blue shaded areas in Fig. 8) than the quantum meshed graph approach; this is not the case for AccuracyWholeSet, where the two strategies achieve comparable deviations.

6 Conclusion

In this work, we have addressed the problem of classifying graphs representing human facial expressions. Graphs are obtained by selecting some of the face landmark points and connecting them either via a triangulation of the points or via a complete graph construction. We have shown how to construct quantum states whose amplitudes encode the graphs, and devised an interference-based quantum circuit that performs a distance-based classification of the encoded graphs. We have implemented this quantum circuit by means of the IBM open-source quantum computing framework Qiskit (Aleksandrowicz et al. 2019), and tested it on a collection of images taken from the Extended Cohn-Kanade (CK+) database, one of the most widely used test-beds for algorithm development and evaluation in the field of automatic facial expression recognition.

The tests have been performed on the qasm_simulator, and the results have been compared with those of the classical classifier. The calculation of the accuracy revealed that the classical classifier achieves better results when the complete graph approach is employed, while the simulated quantum classifier achieves comparable results with both the meshed and the complete strategies.

Future work should investigate the possibilities of different and more efficient graph encodings into quantum states. Moreover, interesting extensions of the results we have presented could be obtained by exploiting other quantum classification schemes, such as those based on the SWAP test along the lines of Park et al. (2020) or on Variational Quantum Circuits (McClean et al. 2016).