Quantum-inspired learning vector quantizers for prototype-based classification

Prototype-based models like the generalized learning vector quantization (GLVQ) belong to the class of interpretable classifiers. Moreover, quantum-inspired methods are attracting growing attention in machine learning due to their potential for efficient computing. Further, their interesting mathematical perspectives offer new ideas for alternative learning scenarios. This paper proposes a quantum computing-inspired variant of the prototype-based GLVQ for classification learning. We start by considering kernelized GLVQ with real- and complex-valued kernels and their respective feature mappings. Thereafter, we explain how quantum space ideas can be integrated into a GLVQ using quantum bit vectors in the quantum state space $\mathcal{H}^{n}$ and show the relations to kernelized GLVQ. In particular, we explain the related feature mapping of data into the quantum state space $\mathcal{H}^{n}$. A key feature of this approach is that $\mathcal{H}^{n}$ is a Hilbert space with particular inner product properties, which ultimately restrict the prototype adaptations to be unitary transformations. The resulting approach is denoted as Qu-GLVQ. We provide the mathematical framework and give exemplary numerical results.


Introduction
Classification learning still belongs to the main tasks in machine learning [5]. Although powerful methods are available, there is still a need for improvements and a search for alternatives to existing strategies. Huge progress was achieved by the realization of deep networks [15,28]. These networks overcame the hitherto dominating support vector machines (SVM) in classification learning [11,51]. However, deep architectures have the disadvantage that their interpretability is at least difficult. Therefore, great effort is currently spent to explain deep models, see [40] and references therein. However, due to the complexity of deep networks, this is quite often impossible [70]. Thus, alternatives are required for many applications, e.g., in medicine [39]. A promising alternative is the concept of distance-based methods like learning vector quantization [57], where prototypes serve as references for data [25,34,37]. Further, prototype layers in deep networks also seem to stabilize the behavior of deep models [29].
Learning vector quantizers (LVQ) are sparse models for prototype-based classification learning, which were heuristically introduced by T. Kohonen [23,24]. LVQ is a competitive learning algorithm relying on an attraction and repulsion scheme (ARS) for prototype adaptation, which can be described geometrically in the Euclidean setting. Keeping the idea of the ARS but taking LVQ to the next level, the generalized LVQ (GLVQ) optimizes a loss function approximating the classification error [42]. Standard training is stochastic gradient descent learning (SGDL) based on the local losses.
Of particular interest for potential users is the advantage of easy interpretability of LVQ networks according to the prototype principle [63]. Further, LVQ networks belong to the class of classification margin optimizers like SVM [10] and are proven to be robust against adversarial attacks [41].
In standard LVQ networks, the distance measure is set to be the squared Euclidean one, yet other proximities can be applied [4]. For example, in the case of spectral data or histograms, divergences are beneficial [21,31,58], whereas functional data may benefit from similarities including aspects of the slope, like Sobolev distances [60]. Even the kernel trick successfully applied in SVM can be adopted for GLVQ, denoted as kernelized GLVQ (KGLVQ) [59], i.e., the data and prototypes are implicitly mapped into a potentially infinite-dimensional Hilbert space but compared by means of the kernel-based distance calculated in the low-dimensional data space [45]. For reproducing kernel Hilbert spaces, the respective mapping is unique [52,53]. This so-called feature mapping is frequently nonlinear, depending on the kernel in use [17].
However, SGDL training for KGLVQ requires differentiability of the kernel. Otherwise, median variants of GLVQ have to be considered, i.e., the prototypes are restricted to be data points and the optimization goes back to likelihood optimization [32]. Replacing the standard Euclidean distance by nonlinear kernel distances can improve the performance of the GLVQ model, as is known for SVM. Yet, the possibly improved performance of KGLVQ comes with weaker interpretability, because the implicit kernel mapping does not allow the mapped data to be observed directly in the feature space.
Another way to accelerate the usually time-consuming training process of machine learning models is to make use of efficient quantum computing algorithms [9,12,33]. For supervised and unsupervised problems, several approaches have been developed [3,46,49,50,64]. A recent overview of quantum-inspired approaches can be found in [26].
Quantum support vector machines are studied in [36], and Hopfield networks are investigated in [35]. A quantum perceptron algorithm is proposed in [66]. Particularly, nearest neighbor approaches are studied in [19,65]. Related to vector quantization, the c-means algorithm is considered in [22,69], which can also be seen in connection with quantum classification algorithms based on competitive learning [71].
In this contribution, we propose an alternative nonlinear data processing scheme, somewhat related to kernel GLVQ, keeping the idea of mapping the data nonlinearly into a particular Hilbert space. In particular, we transform the data vectors nonlinearly into quantum state vectors and require at the same time that the prototypes are always quantum state vectors, too [48,61]. The respective quantum space is also a Hilbert space, and hence, we can take the data mapping as a kind of feature mapping like for kernels [16,56]. The adaptation has to ensure the attraction and repulsion strategy known to be the essential ingredient of LVQ algorithms, but has to be adapted here to the quantum state space properties. The latter restriction requires the adaptations to be unitary transformations. The resulting algorithm is denoted as quantum-inspired GLVQ (Qu-GLVQ).
In fact, quantum-inspired machine learning algorithms seem to be a promising approach to improve the effectiveness of algorithms with respect to time complexity [55]. Yet, here the benefit would be twofold, because we also emphasize the mathematical similarity to kernel-based learning, which is still one of the most successful strategies in machine learning [53,59]. However, a realization on a real quantum system is not in the focus of this paper and, hence, is left for future work. The focus of this paper is clearly to show the mathematical similarities between kernel and quantum approaches.
The paper is structured as follows: Starting with a short introduction of standard GLVQ for real and complex data [14], we briefly explain the use of real and complex kernels in the context of GLVQ. Again, we consider both the real and the complex case. After these preliminaries, we give the basic mathematical properties of quantum state spaces and incorporate these concepts into GLVQ. We show that this approach is mathematically consistent with kernel GLVQ. Numerical simulations exemplarily show the successful application of the introduced Qu-GLVQ.
Further, we emphasize that the aim of the paper is not to show any improvement of quantum-inspired GLVQ over kernel approaches or standard GLVQ. Instead, one goal is to show that the quantum and the kernel approach are mathematically more or less equivalent, while kernel approaches apply an implicit mapping and the quantum approach performs the mapping explicitly.

Standard GLVQ as introduced in [42] assumes data vectors $\mathbf{v} \in V \subseteq \mathbb{R}^{n}$ equipped with class labels $c(\mathbf{v})$. It is a cost-function-based variant of standard LVQ as introduced by T. Kohonen [23,24], keeping the ARS. Further, a set $P = \{\mathbf{p}_k\}$ of labeled prototypes is supposed together with a differentiable dissimilarity measure $d(\mathbf{v}, \mathbf{p}_k)$, frequently chosen as the (squared) Euclidean distance. Classification of an unknown data sample takes place as a nearest prototype decision with the class label of the winning prototype as response. GLVQ considers the cost function
$$E_{\mathrm{GLVQ}}(V, P) = \sum_{\mathbf{v} \in V} f(\mu(\mathbf{v}))$$
for the optimization of the prototypes. Here, $f(\mu(\mathbf{v}))$ is the local loss and $f$ is the transfer function, frequently chosen as a sigmoid. The classifier function
$$\mu(\mathbf{v}) = \frac{d^{+}(\mathbf{v}) - d^{-}(\mathbf{v})}{d^{+}(\mathbf{v}) + d^{-}(\mathbf{v})}$$
relates the dissimilarities $d^{\pm}(\mathbf{v}) = d(\mathbf{v}, \mathbf{p}^{\pm})$ of a given input to the best matching correct/incorrect prototypes $\mathbf{p}^{\pm}$ regarding the class labels. The classifier function $\mu(\mathbf{v})$ delivers negative values for correct classification. Thus, $E_{\mathrm{GLVQ}}(V, P)$ approximates the classification error and is optimized by stochastic gradient descent learning (SGDL, [38]) regarding the prototypes according to
$$\Delta \mathbf{p}^{\pm} \propto -\frac{\partial f}{\partial \mu} \cdot \frac{\partial \mu}{\partial d^{\pm}} \cdot \frac{\partial d^{\pm}}{\partial \mathbf{p}^{\pm}}$$
as local derivatives. In fact, this learning rule realizes the ARS in case of the squared Euclidean distance, because of $\frac{\partial d(\mathbf{v}, \mathbf{p})}{\partial \mathbf{p}} = -2(\mathbf{v} - \mathbf{p})$. As mentioned before, this standard GLVQ constitutes a margin classifier and is robust against adversarial attacks [10,41].
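The local SGDL step of GLVQ can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation; the sigmoid transfer function, the learning rate, and the helper name `glvq_update` are choices made here for concreteness.

```python
import numpy as np

def glvq_update(v, c_v, protos, labels, lr=0.05):
    """One SGDL step of GLVQ with squared Euclidean distance (sketch).
    protos: (K, n) prototype matrix, labels: (K,) class labels, modified in place."""
    d = np.sum((protos - v) ** 2, axis=1)        # squared distances to all prototypes
    correct = labels == c_v
    j = np.argmin(np.where(correct, d, np.inf))   # best matching correct prototype
    k = np.argmin(np.where(~correct, d, np.inf))  # best matching incorrect prototype
    dp, dm = d[j], d[k]
    mu = (dp - dm) / (dp + dm)                    # classifier function, < 0 if correct
    f_prime = 0.25 * (1 - np.tanh(mu / 2) ** 2)   # derivative of sigmoid transfer f
    gp = 2 * dm / (dp + dm) ** 2                  # d mu / d d+  (> 0)
    gm = -2 * dp / (dp + dm) ** 2                 # d mu / d d-  (< 0)
    # ARS: since dd/dp = -2(v - p), p+ is attracted to v, p- is repelled
    protos[j] += lr * f_prime * gp * 2 * (v - protos[j])
    protos[k] += lr * f_prime * gm * 2 * (v - protos[k])
    return mu
```

A single step attracts the best matching correct prototype toward the sample and repels the best matching incorrect one, exactly the ARS described above.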
If the dissimilarity is the quadratic form $d_{\Omega}(\mathbf{v}, \mathbf{p}) = \left(\Omega(\mathbf{v} - \mathbf{p})\right)^{2}$ with $\Omega \in \mathbb{R}^{m \times n}$, we can calculate the derivative accordingly; the derivative with respect to $\Omega$ yields an adaptation $\Delta\Omega$ of the matrix $\Omega$ [8,43]. The adaptation of the $\Omega$-matrix realizes a classification-task-specific metric adaptation for optimum class separation [44]. This matrix learning variant of GLVQ is denoted as GMLVQ.
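The quadratic-form dissimilarity and its two gradients can be written down directly; the following sketch (with the hypothetical helper name `gmlvq_dist_and_grads`) illustrates the matrix-distance computation and the derivatives used for prototype and metric adaptation.

```python
import numpy as np

def gmlvq_dist_and_grads(v, p, omega):
    """Matrix dissimilarity d_Omega(v, p) = ||Omega (v - p)||^2 and its
    gradients w.r.t. the prototype p and the matrix Omega (sketch)."""
    diff = v - p
    proj = omega @ diff                        # Omega (v - p)
    d = float(proj @ proj)                     # squared norm
    grad_p = -2.0 * (omega.T @ omega) @ diff   # d d / d p
    grad_omega = 2.0 * np.outer(proj, diff)    # d d / d Omega (entrywise)
    return d, grad_p, grad_omega
```

Both gradients enter the SGDL scheme the same way as the Euclidean derivative does in standard GLVQ.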
For the (real-valued) kernel GLVQ (KGLVQ) discussed in [59], the dissimilarity measure $d(\mathbf{v}, \mathbf{p})$ is set to be the (squared) kernel distance
$$d_{\kappa}(\mathbf{v}, \mathbf{p}) = \kappa(\mathbf{v}, \mathbf{v}) - 2\kappa(\mathbf{v}, \mathbf{p}) + \kappa(\mathbf{p}, \mathbf{p})$$
with the differentiable kernel $\kappa$ and the respective (implicit) real kernel feature mapping $\Phi_{\kappa}: \mathbb{R}^{n} \rightarrow \mathcal{H}$ into the (real) reproducing kernel Hilbert space $\mathcal{H}$ [53]. As mentioned in the introduction, the implicitly mapped data form the image $D_{\Phi}(V)$ in the feature mapping space $\mathcal{H}$, whereas the prototypes $\Phi_{\kappa}(\mathbf{p})$ are allowed to move freely in $\mathcal{H}$ and, hence, may leave $D_{\Phi}(V)$. In this case, the prototypes recognize the data from outside, which could be a disadvantage for particular applications.
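The kernel distance is computed entirely in data space although it measures a feature-space distance. A minimal sketch, using the rbf-kernel (as in the experiments later) and hypothetical helper names:

```python
import numpy as np

def rbf(x, y, sigma=1.0):
    """Gaussian (rbf) kernel kappa(x, y)."""
    return float(np.exp(-np.sum((np.asarray(x) - np.asarray(y)) ** 2) / (2 * sigma ** 2)))

def kernel_dist(v, p, kernel=rbf):
    """Squared kernel distance ||Phi(v) - Phi(p)||^2 in the RKHS,
    evaluated via kernel values in the low-dimensional data space."""
    return kernel(v, v) - 2 * kernel(v, p) + kernel(p, p)
```

For the rbf-kernel, $\kappa(\mathbf{v}, \mathbf{v}) = 1$, so the distance simplifies to $2 - 2\kappa(\mathbf{v}, \mathbf{p})$ and is bounded by 2; this boundedness is one reason the kernel width matters so much.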

Complex variants of GLVQ
So far, we assumed $\mathbf{v} \in V \subseteq \mathbb{R}^{n}$ and $P = \{\mathbf{p}_k\} \subset \mathbb{R}^{n}$ for data and prototypes as well as $\Omega \in \mathbb{R}^{m \times n}$. In complex GLVQ, all these quantities are assumed to be complex-valued. The squared distance reads as
$$d(\mathbf{v}, \mathbf{p}) = (\mathbf{v} - \mathbf{p})^{*}(\mathbf{v} - \mathbf{p})$$
where $^{*}$ denotes the Hermitian transpose. Following [14,54], the derivatives are obtained by the Wirtinger calculus (see the Appendix) applying the conjugate derivative
$$\frac{\partial d(\mathbf{v}, \mathbf{p})}{\partial \bar{\mathbf{p}}} = -(\mathbf{v} - \mathbf{p})$$
which has to be plugged into the prototype update. For complex kernels $\kappa: \mathbb{C}^{n} \times \mathbb{C}^{n} \rightarrow \mathbb{C}$, provided that the kernel $\kappa$ is differentiable in the sense of Wirtinger [6,7], the derivative of the kernel distance is obtained analogously and has to be taken into account to calculate the prototype update.
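The complex squared distance and its conjugate Wirtinger derivative can be checked numerically; a descent step along the negative conjugate derivative must decrease the distance. A minimal sketch with hypothetical helper names:

```python
import numpy as np

def complex_sq_dist(v, p):
    """Squared distance d(v, p) = (v - p)^* (v - p); always real-valued."""
    diff = v - p
    return float(np.real(np.vdot(diff, diff)))  # vdot conjugates its first argument

def conj_grad_p(v, p):
    """Wirtinger conjugate derivative of d with respect to conj(p): -(v - p)."""
    return -(v - p)
```

Gradient descent on a real function of complex arguments uses exactly this conjugate derivative, which is why it appears in the complex GLVQ update.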

Quantum-inspired GLVQ
In the following, we explain our idea of a quantum-inspired GLVQ. For this purpose, we first briefly introduce quantum state vectors and consider their transformations. Subsequently, we describe the proposed network architecture for both the real and the complex case. Finally, we explain how to map real or complex data into a quantum state space by application of nonlinear mappings. These nonlinear mappings play the role of kernel feature maps as known from kernel methods.
Quantum bits, quantum state vectors and transformations

The real case
Quantum-inspired machine learning gains more and more attention [9,33,49]. Following the usual notations, the data are required to be quantum bits (qubits)
$$|x\rangle = \alpha|0\rangle + \beta|1\rangle$$
with the normalization condition $\alpha^{2} + \beta^{2} = 1$ defining the Bloch sphere [68]. In this real case, we suppose both amplitudes $\alpha, \beta \in \mathbb{R}$. The normalization condition is equivalent to
$$\alpha = \cos(\xi), \quad \beta = \sin(\xi)$$
for an angle $\xi \in [0, 2\pi)$, paying attention to the periodicity of the cosine function.
Consequently, we get the inner product for the quantum states $|x\rangle$ and $|w\rangle$ as
$$\langle x|w\rangle = \cos(\xi)\cos(\omega) + \sin(\xi)\sin(\omega) = \cos(\xi - \omega)$$
i.e., as the Euclidean inner product of the amplitude vectors. Hence, we have $\langle x|x\rangle = 1$ such that the (squared) qubit distance can be calculated as
$$d(|x\rangle, |w\rangle) = 2 - 2\langle x|w\rangle = 2\left(1 - \cos(\xi - \omega)\right).$$
Transformations of real qubits are realized by orthonormal matrices $U \in \mathbb{R}^{2 \times 2}$, i.e., $U \cdot U^{T} = E$ where $U^{T}$ is the transpose. Note that the application of orthonormal matrices leaves the inner product invariant, i.e., $\langle Ux|Uy\rangle = \langle x|y\rangle$; these matrices form the orthonormal group with matrix multiplication as group operation [13].
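The qubit amplitudes, the angle-based distance, and the invariance under rotations can be verified directly. A minimal sketch with hypothetical helper names:

```python
import numpy as np

def qubit(xi):
    """Real qubit |x> = cos(xi)|0> + sin(xi)|1> as its amplitude vector."""
    return np.array([np.cos(xi), np.sin(xi)])

def qubit_dist(xi, omega):
    """Squared qubit distance d = 2 - 2<x|w> = 2(1 - cos(xi - omega))."""
    return 2.0 - 2.0 * float(qubit(xi) @ qubit(omega))
```

A planar rotation matrix is an orthonormal transformation of the amplitude pair and leaves the inner product, and hence the distance, unchanged.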
Now, we define qubit data vectors as $|x\rangle = (|x_1\rangle, \ldots, |x_n\rangle)^{T}$ with qubits $|x_k\rangle$ and the vector $|w\rangle = (|w_1\rangle, \ldots, |w_n\rangle)^{T}$ for prototypes. For the inner product, we get
$$\langle x|w\rangle_{\mathcal{H}^{n}} = \sum_{k=1}^{n} \langle x_k|w_k\rangle$$
and
$$d(|x\rangle, |w\rangle) = 2n - 2\langle x|w\rangle_{\mathcal{H}^{n}} = \sum_{k=1}^{n} 2\left(1 - \cos(\xi_k - \omega_k)\right)$$
as the squared distance between qubit vectors in terms of the amplitude values. Orthogonal transformations of qubit vectors are realized by block-diagonal matrices according to $U(n)|x\rangle = \mathrm{diag}(U_1, \ldots, U_n) \cdot |x\rangle$. Obviously, the quantum space $\mathcal{H}^{n}$ of $n$-dimensional qubit vectors is a Hilbert space with the inner product $\langle x|w\rangle_{\mathcal{H}^{n}}$, as also recognized in [46,48].
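The componentwise structure of the qubit-vector distance makes it cheap to compute from the angle vectors alone. A minimal sketch with hypothetical helper names:

```python
import numpy as np

def qubit_vec(angles):
    """Qubit vector |x> as an (n, 2) array of amplitude pairs (cos, sin)."""
    angles = np.asarray(angles, dtype=float)
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)

def qubit_vec_dist(xi, omega):
    """d(|x>, |w>) = 2n - 2<x|w>_{H^n} = sum_k 2(1 - cos(xi_k - omega_k))."""
    inner = float(np.sum(qubit_vec(xi) * qubit_vec(omega)))
    return 2.0 * len(np.asarray(xi)) - 2.0 * inner
```

Note that only the angle differences enter the distance, which is exactly what the Qu-GLVQ update below exploits.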

The complex case
In the complex-valued case, the data are required to be quantum bits (qubits), too, but now taken as
$$|x\rangle = \alpha|0\rangle + \beta e^{i\varphi}|1\rangle$$
with coefficients $\alpha, \beta \in \mathbb{R}$ and the complex phase information $e^{i\varphi}$. The normalization condition for the Bloch sphere becomes $\alpha^{2} + \beta^{2} = 1$ and, hence, the amplitudes are again $\alpha = \cos(\xi)$ and $\beta = \sin(\xi)$ as in the real case. The complex-valued inner product for $|w\rangle = \cos(\omega)|0\rangle + \sin(\omega)e^{i\psi}|1\rangle$ becomes
$$\langle x|w\rangle = \cos(\xi)\cos(\omega) + \sin(\xi)\sin(\omega)\, e^{i(\psi - \varphi)}$$
now depending on both phase informations $\psi$ and $\varphi$.
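The phase-dependent inner product can be confirmed numerically, paying attention to the conjugation of the left argument. A minimal sketch with hypothetical helper names:

```python
import numpy as np

def cqubit(xi, phi):
    """Complex qubit |x> = cos(xi)|0> + e^{i phi} sin(xi)|1> as amplitudes."""
    return np.array([np.cos(xi), np.exp(1j * phi) * np.sin(xi)])

def cinner(x, w):
    """Complex inner product <x|w> = sum_i conj(x_i) w_i."""
    return np.vdot(x, w)  # np.vdot conjugates its first argument
```

Note how only the phase difference $\psi - \varphi$ survives in the cross term, mirroring the formula above.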
Accordingly, the qubit distance is calculated as
$$d(|x\rangle, |w\rangle) = 2 - 2\,\mathrm{Re}\langle x|w\rangle$$
paying attention to the properties of the complex inner product. Particularly, we have
$$\mathrm{Re}\langle x|y\rangle = \frac{1}{2}\left(\langle x|y\rangle + \langle y|x\rangle\right)$$
for the real part of $\langle x|y\rangle$.
Transformations of complex qubits are realized by unitary matrices $U \in \mathbb{C}^{2 \times 2}$, i.e., $U \cdot U^{*} = E$ where $U^{*}$ is the Hermitian transpose. Unitary matrices leave the inner product invariant, i.e., $\langle Ux|Uy\rangle = \langle x|y\rangle$. As for orthonormal matrices, the unitary matrices form a group with matrix multiplication as group operation [13].
Using again the normalization condition, we obtain
$$d(|x\rangle, |w\rangle) = 2n - 2\,\mathrm{Re}\langle x|w\rangle_{\mathcal{H}^{n}}$$
as the squared quantum distance between the qubit vectors, which can be calculated componentwise using the complex inner product above. Analogously to the real case, unitary transformations of qubit vectors are realized by block-diagonal matrices according to $U(n)|x\rangle = \mathrm{diag}(U_1, \ldots, U_n) \cdot |x\rangle$, and the quantum space $\mathcal{H}^{n}$ of $n$-dimensional complex qubit vectors is a Hilbert space with the inner product $\langle x|w\rangle_{\mathcal{H}^{n}}$.

The real case
For the real case, we assume vanishing complex phase information [30], i.e., we require $\varphi = 0$ yielding $e^{i\varphi} = 1$. Thus, the Qu-GLVQ approach takes real-valued qubit vectors $|x\rangle$ and $|w\rangle$ as elements of the data and the prototype sets $X$ and $W$, respectively. The dissimilarity measure for GLVQ has to be replaced by the squared qubit vector distance $d(|x\rangle, |w\rangle)$ from above. All angles $\omega_l$ from $|w\rangle$ form the angle vector $\boldsymbol{\omega} = (\omega_1, \ldots, \omega_n)^{T}$. The prototype update can be realized by adapting the angle vectors in complete analogy to the GLVQ update according to
$$\Delta\boldsymbol{\omega}^{\pm} \propto -\frac{\partial f}{\partial \mu} \cdot \frac{\partial \mu}{\partial d^{\pm}} \cdot \frac{\partial d^{\pm}}{\partial \boldsymbol{\omega}^{\pm}}$$
where $d^{\pm}$ is the squared distance to the best matching correct and incorrect prototype. Using the angle representation of the distance, we easily obtain
$$\frac{\partial d(|x\rangle, |w\rangle)}{\partial \omega_k} = -2\sin(\xi_k - \omega_k)$$
delivering the gradient vector, such that $\Delta\boldsymbol{\omega}^{\pm}$ is the angle change at time step $t$. Each angle change corresponds to an orthogonal transformation of the respective qubit. Due to the group property of orthogonal transformations, the accumulated matrix $U_k^{R}$ is also orthogonal and allows the re-calculation of the initial state $|w_k\rangle$ from the final one.
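One ARS step on the angle vectors can be sketched directly from the gradient above. This is an illustrative sketch, not the authors' implementation; for brevity it omits the transfer-function factor (i.e., it assumes the identity transfer), and the helper names are hypothetical.

```python
import numpy as np

def qu_glvq_grad(xi, omega):
    """Gradient of d(|x>,|w>) = sum_k 2(1 - cos(xi_k - omega_k)) w.r.t. omega."""
    return -2.0 * np.sin(xi - omega)

def qu_glvq_step(xi, omega_p, omega_m, lr=0.1):
    """One ARS step on the angle vectors of the best matching correct
    (omega_p) and incorrect (omega_m) prototypes (sketch)."""
    dp = float(np.sum(2 * (1 - np.cos(xi - omega_p))))
    dm = float(np.sum(2 * (1 - np.cos(xi - omega_m))))
    gp = 2 * dm / (dp + dm) ** 2     # d mu / d d+  (> 0)
    gm = -2 * dp / (dp + dm) ** 2    # d mu / d d-  (< 0)
    omega_p_new = omega_p - lr * gp * qu_glvq_grad(xi, omega_p)  # attraction
    omega_m_new = omega_m - lr * gm * qu_glvq_grad(xi, omega_m)  # repulsion
    return omega_p_new, omega_m_new
```

Each angle change $\Delta\omega_k$ amounts to a planar rotation of the amplitude pair, so the updated prototype is again a valid quantum state vector.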

The complex case
The complex variant of Qu-GLVQ depends on both the angle vectors $\boldsymbol{\omega}^{\pm}$ and the phase vectors $\boldsymbol{\psi}^{\pm}$ and, therefore, the angle vector update for $\boldsymbol{\omega}^{\pm}$ is accompanied by a respective update for the phase vectors. Again, we use the same convention as before but taking the Wirtinger derivative, see the Appendix. However, we can avoid the explicit application of the Wirtinger calculus: Expressing the real part of the inner product in terms of the angles and phases, we obtain the gradients with respect to $\boldsymbol{\omega}^{\pm}$ determining $\Delta\boldsymbol{\omega}^{\pm}$ as well as the gradients with respect to $\boldsymbol{\psi}^{\pm}$ determining $\Delta\boldsymbol{\psi}^{\pm}$ directly. Again, the prototype update ensures the quantum state property for the adapted prototypes and, hence, can be seen as unitary transformations $U_k \cdot |w_k\rangle$.

Data transformations and the relation of Qu-GLVQ to kernel GLVQ
In the next step, we explain the mapping of the data into the quantum state space $\mathcal{H}^{n}$. For the prototypes, we always assume that these are given in $\mathcal{H}^{n}$. Starting from usual real data vectors, we apply a nonlinear mapping $\Phi$ to obtain qubit vectors. In the context of quantum machine learning, $\Phi$ is also denoted as a quantum feature map, playing the role of a real kernel feature map [46,48]. This mapping can be realized by taking the angles $\xi_k = P(v_k)$, keeping in mind the normalization condition and applying an appropriate squashing function $P: \mathbb{R} \rightarrow [0, 2\pi)$ as suggested in [18]. Finally, we have $|x_k\rangle = \cos(P(v_k))|0\rangle + \sin(P(v_k))|1\rangle$. In the case of complex data vectors, we realize the mapping onto the Riemann sphere $\mathcal{R}$ fulfilling the constraint $\|\mathbf{r}\| = 1$ for all $\mathbf{r} \in \mathcal{R}$, see Fig. 1.
The subsequent mapping $\Psi(\mathbf{r})$ delivers the spherical coordinates. Note that the stereographic projection $P(z)$ is unique, realizing a squashing effect. This effect becomes particularly apparent if $|z| \rightarrow \infty$ and, hence, is comparable to the squashing effect realized by $P(v_l)$ in the real case.
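The (inverse) stereographic projection onto the unit sphere can be written in closed form; the following sketch uses the standard projection from the north pole, with a hypothetical helper name.

```python
import numpy as np

def riemann_project(z):
    """Map a complex number z onto the unit (Riemann) sphere via the inverse
    stereographic projection from the north pole. The result has norm 1, and
    |z| -> infinity approaches the north pole (0, 0, 1): the squashing effect."""
    s = abs(z) ** 2
    return np.array([2 * z.real, 2 * z.imag, s - 1]) / (s + 1)
```

The spherical coordinates of the projected point then deliver the angle and phase of the corresponding qubit.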

Numerical experiments
We tested the proposed Qu-GLVQ algorithm for several datasets in comparison with established LVQ approaches as well as SVM. Particularly, we compare Qu-GLVQ with standard GLVQ, KGLVQ, and SVM. For both SVM and KGLVQ, the rbf-kernel was used with the same kernel width obtained by a grid search. For all LVQ variants including Qu-GLVQ, we used only one prototype per class for all experiments.
The datasets are: WHISKY, a spectral dataset to classify Scottish whisky described in [1,2]; WBCD, the UCI Wisconsin Breast Cancer Dataset; HEART, the UCI Heart Disease dataset; PIMA, the UCI diabetes dataset; and FLC1, a satellite remote sensing LANDSAT TM dataset [27]. The averaged results reported are obtained by 10-fold cross-validation. Only test accuracies are given.
We observe an overall good performance of Qu-GLVQ comparable to the other methods, see Table 1. Kernel methods seem to be beneficial, as indicated by KGLVQ and SVM for HEART. Yet, Qu-GLVQ delivers similar results. Where SVM yields significantly better performance than KGLVQ and Qu-GLVQ, we have to take into account that the SVM complexity (number of support vectors) is much higher than in LVQ networks, where the number of prototypes was always chosen to be only one per class.
Further, for WHISKY, KGLVQ was not able to achieve a classification accuracy comparable to the other approaches, whereas Qu-GLVQ performed well. This might be attributed to the crucial dependency of the rbf-kernel on the kernel width. Further, Qu-GLVQ seems to be more stable than KGLVQ and SVM considering the averaged deviations.

Conclusion
In this contribution, we introduced an approach to incorporate quantum machine learning strategies into the GLVQ framework. Usual data and prototype vectors are replaced by respective quantum bit vectors. This replacement can be seen as an explicit nonlinear mapping of the data into the quantum Hilbert space, which makes the difference to the implicit feature mapping in case of kernel methods like kernelized GLVQ or SVM. Thus, one can visualize and analyze the data as well as the prototypes in this space, such that Qu-GLVQ becomes better interpretable than KGLVQ or SVM, which lack this possibility.
Moreover, the Qu-GLVQ approach shows mathematical equivalence to the kernel approaches in a topological sense: the distance calculations are carried out in the mapping Hilbert space, implicitly for usual kernels and explicitly for Qu-GLVQ. The resulting adaptation dynamic in Qu-GLVQ is consistent with the unitary transformations required for quantum state changes, because the prototypes remain quantum state vectors.
Further investigations should include several modifications and extensions of the proposed Qu-GLVQ. First, matrix learning as in GMLVQ can easily be integrated. Further, the influence of different transfer functions as proposed in [62] for standard GLVQ has to be considered to improve learning speed and performance. A much more challenging task will be to consider entanglements for qubits and complex amplitudes $\beta \in \mathbb{C}$ for qubits [20] in the context of Qu-GLVQ. This research should also continue the approach in [71].
Finally, adaptation to a real quantum system is the final goal as recently realized for other classifier systems [47].
Overall, Qu-GLVQ performs equivalently to KGLVQ and SVM. However, due to its better interpretability, it could be an interesting alternative if interpretable models are demanded.
Fig. 1 Illustration of the stereographic projection $P(z)$ of a complex number $z$ to a vector $\mathbf{x} = (x_1, x_2, x_3)^{T}$ on the (Riemann) sphere, i.e., $\|\mathbf{x}\| = 1$

Compliance with ethical standards

Appendix: Wirtinger calculus
In the following, we give a short introduction to the Wirtinger calculus [67]. We do not follow the complicated and old-fashioned notations of the original paper but prefer a more modern style. Thus, we follow [14]. We consider a real-valued function $f$ of a complex argument $z = x + iy$, i.e., $f(z) = u(x, y)$ with real-valued $u$. Here, minimization of $f$ is to be considered in dependence on the two real arguments $x$ and $y$. For a complex-valued function $g$, we analogously have $g(z) = u(x, y) + i \cdot v(x, y)$, where $v(x, y)$ is also real-valued. The differential operators of Wirtinger type are defined as
$$\frac{\partial}{\partial z} = \frac{1}{2}\left(\frac{\partial}{\partial x} - i\frac{\partial}{\partial y}\right) \quad \text{and} \quad \frac{\partial}{\partial \bar{z}} = \frac{1}{2}\left(\frac{\partial}{\partial x} + i\frac{\partial}{\partial y}\right)$$
where $\frac{\partial}{\partial \bar{z}}$ is denoted as the conjugate differential operator [67]. The respective calculus assumes differentiability of the functions $f$ and $g$ in the real-valued sense, i.e., the functions $u(x, y)$ and $v(x, y)$ are assumed to be differentiable with respect to $x$ and $y$. This Wirtinger differentiability differs from the usual differentiability of complex functions, which requires the validity of the Cauchy-Riemann equations, reading
$$\frac{\partial g}{\partial \bar{z}} = 0$$
in terms of the Wirtinger calculus, i.e., the complex function is differentiable in $z$ iff the partial derivative $\frac{\partial g}{\partial \bar{z}}$ vanishes.
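The operator definitions above can be checked numerically by replacing the partial derivatives with central differences. The helper name `wirtinger` below is hypothetical; the sketch verifies, e.g., that an analytic function has a vanishing conjugate derivative.

```python
import numpy as np

def wirtinger(g, z, h=1e-6):
    """Numeric Wirtinger derivatives of g at z via central differences:
    d/dz = (d/dx - i d/dy)/2 and d/dzbar = (d/dx + i d/dy)/2."""
    gx = (g(z + h) - g(z - h)) / (2 * h)            # partial w.r.t. x
    gy = (g(z + 1j * h) - g(z - 1j * h)) / (2 * h)  # partial w.r.t. y
    return 0.5 * (gx - 1j * gy), 0.5 * (gx + 1j * gy)
```

For $g(z) = z^2$ this yields $\partial g/\partial z = 2z$ and $\partial g/\partial \bar{z} = 0$ (Cauchy-Riemann), while for $f(z) = |z|^2$ both Wirtinger derivatives are nonzero.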
Both the product and the sum rule apply, whereas the chain rule for a function $h: \mathbb{R} \rightarrow \mathbb{R}$ applied to a real-valued $f$ becomes $\frac{\partial (h \circ f)}{\partial z} = h'(f) \cdot \frac{\partial f}{\partial z}$.

Fig. 2
Fig. 2 Illustration of the transformation of Bloch sphere coordinates into respective angle values applying spherical coordinates

Conflict of interest The authors declare that they have no conflict of interest.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
We abbreviate the Wirtinger derivative of $g$ as $\frac{\partial_W g}{\partial z}$ and likewise for the conjugate derivative with respect to $\bar{z}$. If a function $f$ is given in the form $f(z, \bar{z})$, then in $\frac{\partial_W f}{\partial z}$ the variable $\bar{z}$ is treated as constant and vice versa. For example, for $f(z) = |z|^2$ we have the derivatives $\frac{\partial_W f}{\partial z} = \bar{z}$ and $\frac{\partial_W f}{\partial \bar{z}} = z$ because of $|z|^2 = z \cdot \bar{z}$. Yet, $\frac{\partial_W z^2}{\partial z} = 2z$ is equivalent to the real case. Next, we consider complex vectors $\mathbf{z} \in \mathbb{C}^n$. The squared Euclidean norm is $\|\mathbf{z}\|_2^2 = \mathbf{z}^{*}\mathbf{z}$, where $\mathbf{z}^{*}$ denotes the Hermitian transpose. We immediately obtain $\frac{\partial_W \|\mathbf{z}\|_2^2}{\partial \bar{\mathbf{z}}} = \mathbf{z}$.

Table 1 Classification accuracies in % and standard deviations for GLVQ, Qu-GLVQ, KGLVQ, and SVM for the considered datasets (10-fold cross-validation). For all LVQ algorithms, only one prototype per class was used. The number of support vectors for SVM is given by |SV|. N: number of data samples, d: data dimensionality, #C: number of classes