1 Introduction

Classification learning remains one of the main tasks in machine learning [5]. Although powerful methods are available, there is still a need for improvements and for alternatives to existing strategies. Major progress was achieved by the introduction of deep networks [15, 28], which surpassed the previously dominating support vector machines (SVM) in classification learning [11, 51]. However, deep architectures have the disadvantage that their interpretability is difficult at best. Therefore, great effort is currently spent on explaining deep models, see [40] and references therein. Due to the complexity of deep networks, however, this is quite often impossible [70]. Thus, alternatives are required for many applications, for example in medicine [39]. A promising alternative is the concept of distance-based methods such as learning vector quantization [57], where prototypes serve as references for the data [25, 34, 37]. Further, prototype layers in deep networks also seem to stabilize the behavior of deep models [29].

Learning vector quantizers (LVQ) are sparse models for prototype-based classification learning, which were heuristically introduced by T. Kohonen [23, 24]. LVQ is a competitive learning algorithm relying on an attraction and repulsion scheme (ARS) for prototype adaptation, which can be described geometrically in the Euclidean setting. Generalized LVQ (GLVQ) keeps the idea of the ARS but takes LVQ to the next level by optimizing a loss function approximating the classification error [42]. Standard training is stochastic gradient descent learning (SGDL) based on the local losses.

Of particular interest for potential users is the advantage of easy interpretability of LVQ networks according to the prototype principle [63]. Further, LVQ networks belong to the class of classification margin optimizers like SVM [10] and are proven to be robust against adversarial attacks [41].

In standard LVQ networks, the distance measure is set to be the squared Euclidean one, yet other proximities can be applied [4]. For example, in the case of spectral data or histograms, divergences are beneficial [21, 31, 58], whereas functional data may benefit from similarities including aspects of the slope such as Sobolev distances [60]. Even the kernel trick successfully applied in SVM can be adopted for GLVQ, denoted as kernelized GLVQ (KGLVQ) [59], i.e., the data and prototypes are implicitly mapped into a potentially infinite-dimensional Hilbert space but compared by means of the kernel-based distance calculated in the low-dimensional data space [45]. For reproducing kernel Hilbert spaces, the respective mapping is unique [52, 53]. This so-called feature mapping is frequently nonlinear, depending on the kernel in use [17].

However, SGDL training for KGLVQ requires differentiability of the kernel. Otherwise, median variants of GLVQ have to be considered, i.e., the prototypes are restricted to be data points and the optimization reduces to a likelihood optimization [32]. Replacing the standard Euclidean distance by nonlinear kernel distances can improve the performance of the GLVQ model, as is known for the SVM. Yet, the possibly improved performance of KGLVQ comes with weaker interpretability, because the implicit kernel mapping does not allow one to observe the mapped data directly in the feature space.

Another way to accelerate the usually time-consuming training process of machine learning models is to make use of efficient quantum computing algorithms [9, 12, 33]. For supervised and unsupervised problems, several approaches have been developed [3, 46, 49, 50, 64]. An overview of recently developed quantum-inspired approaches can be found in [26].

Quantum support vector machines are studied in [36], and Hopfield networks are investigated in [35]. A quantum perceptron algorithm is proposed in [66]. In particular, nearest neighbor approaches are studied in [19, 65]. Related to vector quantization, the c-means algorithm is considered in [22, 69], which can also be seen in connection with quantum classification algorithms based on competitive learning [71].

In this contribution, we propose an alternative nonlinear data processing scheme that is related to kernel GLVQ in that it keeps the idea of mapping the data nonlinearly into a particular Hilbert space. In particular, we transform the data vectors nonlinearly into quantum state vectors and require at the same time that the prototypes are always quantum state vectors, too [48, 61]. The respective quantum space is also a Hilbert space, and hence, we can take the data mapping as a kind of feature mapping as for kernels [16, 56]. The adaptation has to ensure the attraction and repulsion strategy known to be the essential ingredient of LVQ algorithms, but it has to be adapted here to the properties of quantum state spaces. The latter restriction requires the adaptations to be unitary transformations. The resulting algorithm is denoted as Quantum-inspired GLVQ (Qu-GLVQ).

In fact, quantum-inspired machine learning algorithms seem to be a promising approach to improve the effectiveness of algorithms with respect to time complexity [55]. Yet, here the benefit would be twofold, because we also emphasize the mathematical similarity to kernel-based learning, which is still one of the most successful strategies in machine learning [53, 59]. However, a realization on a real quantum system is not in the focus of this paper and, hence, is left for future work. The focus of this paper is clearly to show the mathematical similarities between kernel and quantum approaches.

The paper is structured as follows: Starting with a short introduction of standard GLVQ for real and complex data [14], we briefly explain the use of real and complex kernels in the context of GLVQ. Again, we consider both the real and the complex case. After these preliminaries, we give the basic mathematical properties of quantum state spaces and incorporate these concepts into GLVQ. We show that this approach is mathematically consistent with kernel GLVQ. Numerical simulations exemplarily show the successful application of the introduced Qu-GLVQ.

Further, we emphasize that the aim of the paper is not to show any improvement of quantum-inspired GLVQ over kernel approaches or standard GLVQ. Instead, one goal is to show that the quantum and the kernel approach are mathematically more or less equivalent, while kernel approaches apply the mapping implicitly and the quantum approach does the mapping explicitly.

2 The general GLVQ approach

2.1 GLVQ for real valued data

Standard GLVQ as introduced in [42] assumes data vectors \(\mathbf {v}\in V\subseteq \mathbb {R}^{n}\) with class labels \(c\left( \mathbf {v}\right) \in \mathcal {C}=\left\{ 1,\ldots ,C\right\} \) for training. It is a cost function-based variant of standard LVQ as introduced by T. Kohonen [23, 24] keeping the ARS. Further, a set \(P=\left\{ \mathbf {p}_{k}\right\} _{k=1}^{K}\subset \mathbb {R}^{n}\) of prototypes with class labels \(c_{k}=c\left( \mathbf {p}_{k}\right) \) is supposed together with a differentiable dissimilarity measure \(d\left( \mathbf {v},\mathbf {p}_{k}\right) \) frequently chosen as the (squared) Euclidean distance. Classification of an unknown data sample takes place as a nearest prototype decision with the class label of the winning prototype as response. GLVQ considers the cost function

$$\begin{aligned} E_{GLVQ}\left( V,P\right) =\sum _{\mathbf {v}\in V}L\left( \mathbf {v},P\right) \end{aligned}$$

for optimization of the prototypes. Here

$$\begin{aligned} L\left( \mathbf {v},P\right) =f\left( \mu \left( \mathbf {v}\right) \right) \end{aligned}$$
(1)

is the local loss and f is the transfer function, frequently chosen as a sigmoid. The classifier function

$$\begin{aligned} \mu \left( \mathbf {v}\right) =\frac{d^{+}\left( \mathbf {v}\right) -d^{-}\left( \mathbf {v}\right) }{d^{+}\left( \mathbf {v}\right) +d^{-}\left( \mathbf {v}\right) } \end{aligned}$$
(2)

takes values in the interval \(\left[ -1,1\right] \), where \(d^{\pm }\left( \mathbf {v}\right) =d\left( \mathbf {v},\mathbf {p}^{\pm }\right) \) is the dissimilarity of a given input to the best matching correct/incorrect prototype \(\mathbf {p}^{\pm }\) regarding the class labels. The classifier function \(\mu \left( \mathbf {v}\right) \) delivers negative values for correct classification. Thus, \(E_{GLVQ}\left( V,P\right) \) approximates the classification error and is optimized by stochastic gradient descent learning (SGDL, [38]) regarding the prototypes according to

$$\begin{aligned} \varDelta \mathbf {p}^{\pm }\propto -\frac{\partial L\left( \mathbf {v},P\right) }{\partial \mathbf {p}^{\pm }}=-\frac{\partial f\left( \mu \left( \mathbf {v}\right) \right) }{\partial \mu }\frac{\partial \mu \left( \mathbf {v}\right) }{\partial d^{\pm }}\frac{\partial d^{\pm }\left( \mathbf {v}\right) }{\partial \mathbf {p}^{\pm }} \end{aligned}$$
(3)

as local derivatives. In fact, this learning rule realizes the ARS in case of the squared Euclidean distance, because of \(\frac{\partial d^{\pm }\left( \mathbf {v}\right) }{\partial \mathbf {p}^{\pm }}=-2\left( \mathbf {v}-\mathbf {p}^{\pm }\right) \). As mentioned before, this standard GLVQ constitutes a margin classifier and is robust against adversarial attacks [10, 41].
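
To make the interplay of (1)–(3) and the ARS concrete, the following minimal sketch (Python/NumPy) performs one SGDL step for a single training sample. It is an illustration under our own naming conventions, not the reference implementation of [42]; a sigmoid transfer function and the squared Euclidean distance are assumed.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def glvq_step(v, c_v, prototypes, labels, lr=0.01):
    """One SGDL step of GLVQ for a single labeled sample (v, c_v).

    prototypes: array of shape (K, n); labels: array of shape (K,).
    Returns the updated prototypes (illustrative sketch only).
    """
    d = np.sum((prototypes - v) ** 2, axis=1)            # squared Euclidean distances
    correct = labels == c_v
    k_plus = np.argmin(np.where(correct, d, np.inf))     # best matching correct prototype
    k_minus = np.argmin(np.where(~correct, d, np.inf))   # best matching incorrect prototype
    d_plus, d_minus = d[k_plus], d[k_minus]

    mu = (d_plus - d_minus) / (d_plus + d_minus)         # classifier function (2)
    f_prime = sigmoid(mu) * (1.0 - sigmoid(mu))          # derivative of the sigmoid transfer function
    denom = (d_plus + d_minus) ** 2
    dmu_dplus = 2.0 * d_minus / denom                    # d mu / d d^+  (positive)
    dmu_dminus = -2.0 * d_plus / denom                   # d mu / d d^-  (negative)

    # ARS: the correct prototype is attracted, the incorrect one is repelled
    prototypes[k_plus] -= lr * f_prime * dmu_dplus * (-2.0) * (v - prototypes[k_plus])
    prototypes[k_minus] -= lr * f_prime * dmu_dminus * (-2.0) * (v - prototypes[k_minus])
    return prototypes
```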

If the expression

$$\begin{aligned} d_{\varvec{\varOmega }}\left( \mathbf {v},\mathbf {p}\right) =\left( \varvec{\varOmega }\left( \mathbf {v}-\mathbf {p}\right) \right) ^{2} \end{aligned}$$
(4)

describes a standard quadratic form with \(\varvec{\varOmega }\in \mathbb {R}^{m\times n}\), we can calculate the derivative accordingly by

$$\begin{aligned} \frac{\partial d_{\varvec{\varOmega }}\left( \mathbf {v},\mathbf {p}\right) }{\partial \mathbf {p}}=-2\varvec{\varOmega }^{T}\varvec{\varOmega }\left( \mathbf {v}-\mathbf {p}\right) \end{aligned}$$
(5)

whereas

$$\begin{aligned} \frac{\partial d_{\varvec{\varOmega }}\left( \mathbf {v},\mathbf {p}\right) }{\partial \varvec{\varOmega }}=\varvec{\varOmega }\left( \mathbf {v}-\mathbf {p}\right) \left( \mathbf {v}-\mathbf {p}\right) ^{T} \end{aligned}$$
(6)

yields an adaptation \(\varDelta \varvec{\varOmega }\) of the matrix \(\varvec{\varOmega }\) [8, 43]. The adaptation of the \(\varvec{\varOmega }\)-matrix realizes a classification-task-specific metric adaptation for optimum class separation [44]. This matrix learning variant of GLVQ is denoted as GMLVQ.
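
For illustration, the quadratic-form distance (4) and its gradients (5) and (6) can be stated compactly as follows (a sketch with our own function names; constant factors can be absorbed into the learning rate):

```python
import numpy as np

def omega_distance(v, p, omega):
    """Squared quadratic-form distance (4): d_Omega(v, p) = (Omega (v - p))^2."""
    diff = omega @ (v - p)
    return float(diff @ diff)

def omega_gradients(v, p, omega):
    """Gradients of d_Omega w.r.t. the prototype p, eq. (5), and the matrix Omega, eq. (6)."""
    diff = v - p
    grad_p = -2.0 * omega.T @ omega @ diff        # eq. (5)
    grad_omega = np.outer(omega @ diff, diff)     # eq. (6): Omega (v - p)(v - p)^T
    return grad_p, grad_omega
```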

For the (real-valued) kernel GLVQ (KGLVQ) discussed in [59], the dissimilarity measure \(d\left( \mathbf {v},\mathbf {p}\right) \) is set to be the (squared) kernel distance

$$\begin{aligned} d_{\kappa }\left( \mathbf {v},\mathbf {p}\right) =\kappa \left( \mathbf {v},\mathbf {v}\right) -2\kappa \left( \mathbf {v},\mathbf {p}\right) +\kappa \left( \mathbf {p},\mathbf {p}\right) \end{aligned}$$
(7)

with the differentiable kernel \(\kappa \) and the respective (implicit) real kernel feature mapping \(\varPhi _{\kappa }:\mathbb {R}^{n}\rightarrow H\) into the (real) reproducing kernel Hilbert space H [53]. As mentioned in the introduction, the implicitly mapped data \(\varPhi _{\kappa }\left( \mathbf {v}\right) \) form a low-dimensional manifold \(D_{\varPhi }\left( V\right) \) in the feature mapping space H, whereas the prototypes \(\varPhi _{\kappa }\left( \mathbf {p}\right) \) are allowed to move freely in H and, hence, may leave \(D_{\varPhi }\left( V\right) \). In this case, the prototypes recognize the data from outside the manifold, which could be a disadvantage for particular applications.
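
As an illustration, the kernel distance (7) can be evaluated entirely in the data space; the following sketch uses the Gaussian (rbf) kernel as one possible differentiable kernel (function names are ours):

```python
import numpy as np

def rbf_kernel(x, y, sigma=1.0):
    """Gaussian (rbf) kernel as one example of a differentiable kernel."""
    return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

def kernel_distance(v, p, kernel=rbf_kernel):
    """Squared kernel distance (7): d_kappa(v, p) = kappa(v, v) - 2 kappa(v, p) + kappa(p, p)."""
    return kernel(v, v) - 2.0 * kernel(v, p) + kernel(p, p)
```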

2.2 Complex variants of GLVQ

So far we assumed both \(\mathbf {v}\in V\subseteq \mathbb {R}^{n}\) and \(P=\left\{ \mathbf {p}_{k}\right\} \subset \mathbb {R}^{n}\) for data and prototypes as well as \(\varvec{\varOmega }\in \mathbb {R}^{m\times n}\). In complex GLVQ, all these quantities are assumed to be complex valued. The squared distance (4) reads as

$$\begin{aligned} d_{\varvec{\varOmega }}\left( \mathbf {v},\mathbf {p}\right) =\left( \mathbf {v}-\mathbf {p}\right) ^{*}\varvec{\varOmega }^{*}\varvec{\varOmega }\left( \mathbf {v}-\mathbf {p}\right) \end{aligned}$$

where \(\left( \mathbf {v}-\mathbf {p}\right) ^{*}\varvec{\varOmega }^{*}=\left( \varvec{\varOmega }\left( \mathbf {v}-\mathbf {p}\right) \right) ^{*}\) is the Hermitian conjugate of \(\varvec{\varOmega }\left( \mathbf {v}-\mathbf {p}\right) \). Following [14, 54], the derivatives according to (5) and (6) are obtained by the Wirtinger calculus (see “Appendix 6”) applying the conjugate derivatives as

$$\begin{aligned} \frac{\partial _{\mathfrak {W}}d_{\varvec{\varOmega }}\left( \mathbf {v},\mathbf {p}\right) }{\partial \mathbf {p}^{*}}=-2\varvec{\varOmega }^{*}\varvec{\varOmega }\left( \mathbf {v}-\mathbf {p}\right) \end{aligned}$$
(8)

whereas

$$\begin{aligned} \frac{\partial _{\mathfrak {W}}d_{\varvec{\varOmega }}\left( \mathbf {v},\mathbf {p}\right) }{\partial \varvec{\varOmega }^{*}}=\varvec{\varOmega }\left( \mathbf {v}-\mathbf {p}\right) \left( \mathbf {v}-\mathbf {p}\right) ^{*} \end{aligned}$$
(9)

respectively, which have to be plugged into (3).
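
A minimal numerical sketch of the complex quadratic-form distance and the conjugate (Wirtinger) gradients (8) and (9), assuming NumPy complex arrays and our own function names:

```python
import numpy as np

def complex_omega_distance(v, p, omega):
    """Squared distance (v - p)^* Omega^* Omega (v - p) for complex v, p and Omega."""
    diff = omega @ (v - p)
    return float(np.real(np.vdot(diff, diff)))    # real-valued by construction

def complex_omega_gradients(v, p, omega):
    """Conjugate Wirtinger gradients (8) and (9)."""
    diff = v - p
    grad_p_conj = -2.0 * omega.conj().T @ omega @ diff        # eq. (8)
    grad_omega_conj = np.outer(omega @ diff, diff.conj())     # eq. (9): Omega (v - p)(v - p)^*
    return grad_p_conj, grad_omega_conj
```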

For complex kernels \(\kappa :\mathbb {C}^{n}\times \mathbb {C}^{n}\rightarrow \mathbb {C}\) we get

$$\begin{aligned} d_{\kappa }\left( \mathbf {v},\mathbf {p}\right) =\kappa \left( \mathbf {v},\mathbf {v}\right) -2\mathfrak {Re}\left( \kappa \left( \mathbf {v},\mathbf {p}\right) \right) +\kappa \left( \mathbf {p},\mathbf {p}\right) \end{aligned}$$
(10)

instead of (7), using the identity \(2\,\mathfrak {Re}\left( \kappa \left( \mathbf {v},\mathbf {p}\right) \right) =\kappa \left( \mathbf {v},\mathbf {p}\right) +\kappa \left( \mathbf {p},\mathbf {v}\right) \) valid for Hermitian kernels. Provided that the kernel \(\kappa \) is differentiable in the sense of Wirtinger [6, 7], the derivative of the kernel distance becomes

$$\begin{aligned} \frac{\partial _{\mathfrak {W}}d_{\kappa }\left( \mathbf {v},\mathbf {p}\right) }{\partial \mathbf {p}^{*}}= & {} -2\frac{\partial _{\mathfrak {W}}\mathfrak {Re}\left( \kappa \left( \mathbf {v},\mathbf {p}\right) \right) }{\partial \mathbf {p}^{*}}\\ \\= & {} -\frac{\partial _{\mathfrak {W}}\kappa \left( \mathbf {v},\mathbf {p}\right) }{\partial \mathbf {p}^{*}}-\frac{\partial _{\mathfrak {W}}\kappa \left( \mathbf {p},\mathbf {v}\right) }{\partial \mathbf {p}^{*}} \end{aligned}$$

paying attention to \(\frac{\partial _{\mathfrak {W}}\kappa \left( \mathbf {v},\mathbf {v}\right) }{\partial \mathbf {p}^{*}}=\frac{\partial _{\mathfrak {W}}\kappa \left( \mathbf {p},\mathbf {p}\right) }{\partial \mathbf {p}^{*}}=0\). This derivative has to be taken into account to calculate the prototype update (3).
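
For illustration only, the complex kernel distance (10) may be computed as follows; the Hermitian kernel \(\kappa \left( \mathbf {v},\mathbf {p}\right) =\exp \left( \mathbf {p}^{*}\mathbf {v}/\sigma ^{2}\right) \) is our own choice for the sketch, not a kernel prescribed in this paper:

```python
import numpy as np

def hermitian_exp_kernel(v, p, sigma=1.0):
    """Illustrative Hermitian kernel kappa(v, p) = exp(p^* v / sigma^2),
    which satisfies kappa(p, v) = conj(kappa(v, p))."""
    return np.exp(np.vdot(p, v) / sigma ** 2)

def complex_kernel_distance(v, p, kernel=hermitian_exp_kernel):
    """Squared kernel distance (10): kappa(v, v) - 2 Re(kappa(v, p)) + kappa(p, p)."""
    return np.real(kernel(v, v)) - 2.0 * np.real(kernel(v, p)) + np.real(kernel(p, p))
```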

3 Quantum-inspired GLVQ

In the following, we explain our idea of a quantum-inspired GLVQ. For this purpose, we first briefly introduce quantum state vectors and consider their transformations. Subsequently, we describe the proposed network architecture for both the real and the complex case. Finally, we explain how to map real or complex data into a quantum state space by applying nonlinear mappings. These nonlinear mappings play the role of kernel feature maps as known from kernel methods.

3.1 Quantum bits, quantum state vectors and transformations

3.1.1 The real case

Quantum-inspired machine learning gains more and more attention [9, 33, 49]. Following the usual notations, the data are required to be quantum bits (qubits)

$$\begin{aligned} \left| x\right\rangle= \alpha \left( \left| x\right\rangle \right) \cdot \left| 0\right\rangle +\beta \left( \left| x\right\rangle \right) \cdot \left| 1\right\rangle \end{aligned}$$
(11)
$$\begin{aligned}= \left[ \begin{array}{c} \alpha \left( \left| x\right\rangle \right) \\ \beta \left( \left| x\right\rangle \right) \end{array}\right] \end{aligned}$$
(12)

with the normalization condition

$$\begin{aligned} \left| \alpha \left( \left| x\right\rangle \right) \right| ^{2}+\left| \beta \left( \left| x\right\rangle \right) \right| ^{2}=1 \end{aligned}$$
(13)

defining the Bloch-sphere [68]. In this real case, we suppose both \(\alpha \left( \left| x\right\rangle \right) ,\beta \left( \left| x\right\rangle \right) \in \mathbb {R}\). The normalization condition is equivalent to

$$\begin{aligned} \alpha \left( \left| x\right\rangle \right) =\cos \left( \xi \right) \text { and }\beta \left( \left| x\right\rangle \right) =\sin \left( \xi \right) \end{aligned}$$
(14)

for an angle \(\xi \in \left[ 0,2\pi \right] \), paying attention to the periodicity of the cosine function.

Consequently, we get the inner product for the quantum states \(\left| x\right\rangle \) and \(\left| w\right\rangle \) as

$$\begin{aligned} \left\langle x|w\right\rangle =\alpha \left( \left| x\right\rangle \right) \alpha \left( \left| w\right\rangle \right) +\beta \left( \left| x\right\rangle \right) \beta \left( \left| w\right\rangle \right) \end{aligned}$$
(15)

as the Euclidean inner product of the amplitude vectors \(\left[ \begin{array}{c} \alpha \left( \left| x\right\rangle \right) \\ \beta \left( \left| x\right\rangle \right) \end{array}\right] = \left[ \begin{array}{c} \cos \left( \xi \right) \\ \sin \left( \xi \right) \end{array}\right]\) and \(\left[ \begin{array}{c} \alpha \left( \left| w\right\rangle \right) \\ \beta \left( \left| w\right\rangle \right) \end{array}\right] =\left[ \begin{array}{c} \cos \left( \omega \right) \\ \sin \left( \omega \right) \end{array}\right] \). Hence, we have \(\left\langle x|x\right\rangle =1\) such that the (squared) qubit distance \(\delta \) can be calculated as

$$\begin{aligned} \delta \left( \left| x\right\rangle ,\left| w\right\rangle \right) =2\cdot \left( 1-\left\langle x|w\right\rangle \right) \,. \end{aligned}$$
(16)

Transformations \(\mathbf {U}\left[ \begin{array}{c} \alpha \left( \left| x\right\rangle \right) \\ \beta \left( \left| x\right\rangle \right) \end{array}\right] =\mathbf {U}\left| x\right\rangle \) of qubits are realized by orthogonal matrices \(\mathbf {U}\in \mathbb {R}^{2\times 2}\), i.e., \(\mathbf {U}\cdot \mathbf {U}^{T}=\mathbf {E}\), where \(\mathbf {U}^{T}\) is the transpose. Note that orthogonal matrices leave the inner product invariant, i.e., \(\left\langle \mathbf {U}x|\mathbf {U}y\right\rangle =\left\langle x|y\right\rangle \), and they form the orthogonal group with matrix multiplication as group operation [13].

Now, we define qubit data vectors as \(\left| \mathbf {x}\right\rangle =\left( \left| x_{1}\right\rangle ,\ldots ,\left| x_{n}\right\rangle \right) ^{T}\) with qubits \(\left| x_{k}\right\rangle \) and the vector \(\left| \mathbf {w}\right\rangle =\left( \left| w_{1}\right\rangle ,\ldots ,\left| w_{n}\right\rangle \right) ^{T}\) for prototypes. For the inner product, we get \(\left\langle \mathbf {x}|\mathbf {w}\right\rangle _{\mathcal {H}^{n}}=\sum _{k=1}^{n}\left\langle x_{k}|w_{k}\right\rangle \) with

$$\begin{aligned} \left| w_{k}\right\rangle =\cos \left( \omega _{k}\right) \cdot \left| 0\right\rangle +\sin \left( \omega _{k}\right) \cdot \left| 1\right\rangle \end{aligned}$$

and \(\left\langle \mathbf {x}|\mathbf {x}\right\rangle =\left\langle \mathbf {w}|\mathbf {w}\right\rangle =n\). Thus, we get

$$\begin{aligned} \delta \left( \left| \mathbf {x}\right\rangle ,\left| \mathbf {w}\right\rangle \right) =2\left( n-\left\langle \mathbf {x}|\mathbf {w}\right\rangle _{\mathcal {H}^{n}}\right) \end{aligned}$$
(17)

as the squared distance between qubit vectors. Using (15), we obtain

$$\begin{aligned} \delta \left( \left| \mathbf {x}\right\rangle ,\left| \mathbf {w}\right\rangle \right) =2\left( n-\sum _{k=1}^{n}\left( \cos \left( \xi _{k}\right) \cos \left( \omega _{k}\right) +\sin \left( \xi _{k}\right) \sin \left( \omega _{k}\right) \right) \right) \end{aligned}$$
(18)

in terms of the amplitude values.
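
Since \(\cos \left( \xi _{k}\right) \cos \left( \omega _{k}\right) +\sin \left( \xi _{k}\right) \sin \left( \omega _{k}\right) =\cos \left( \xi _{k}-\omega _{k}\right) \), the distance (18) depends only on the angle differences. A minimal sketch (our own function name) reads:

```python
import numpy as np

def qubit_vector_distance(xi, omega):
    """Squared distance (17)/(18) between real qubit vectors given by their
    angle vectors xi and omega (each of length n)."""
    inner = np.sum(np.cos(xi - omega))   # <x|w>_{H^n}, using cos(xi)cos(omega) + sin(xi)sin(omega) = cos(xi - omega)
    return 2.0 * (len(xi) - inner)
```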

Orthogonal transformations of qubit vectors are realized by block-diagonal matrices according to \(\mathbf {U}^{\left( n\right) }\left| \mathbf {x}\right\rangle =\text {diag}\left( \mathbf {U}_{1},\ldots ,\mathbf {U}_{n}\right) \cdot \left| \mathbf {x}\right\rangle \). Obviously, the quantum space \(\mathcal {H}^{n}\) of n-dimensional qubit vectors is a Hilbert space with the inner product \(\left\langle \mathbf {x}|\mathbf {w}\right\rangle _{\mathcal {H}^{n}}\), as also recognized in [46, 48].

3.1.2 The complex case

In the complex-valued case, the data are required to be quantum bits (qubits), too, but now taken as

$$\begin{aligned} \left| x\right\rangle= \alpha \left( \left| x\right\rangle \right) \cdot \left| 0\right\rangle +\beta \left( \left| x\right\rangle \right) \cdot e^{i\phi }\cdot \left| 1\right\rangle \end{aligned}$$
(19)
$$\begin{aligned}= \left[ \begin{array}{c} \alpha \left( \left| x\right\rangle \right) \\ \beta \left( \left| x\right\rangle \right) \cdot e^{i\phi } \end{array}\right] \end{aligned}$$
(20)

with coefficients \(\alpha \left( \left| x\right\rangle \right) ,\beta \left( \left| x\right\rangle \right) \in \mathbb {R}\) and the complex phase information \(e^{i\phi }\). The normalization condition for the Bloch sphere becomes

$$\begin{aligned} 1= & {} \,\left| \alpha \left( \left| x\right\rangle \right) \right| ^{2}+\left| \beta \left( \left| x\right\rangle \right) \cdot e^{i\phi }\right| ^{2}\nonumber \\ \nonumber \\= & {} \,\left| \alpha \left( \left| x\right\rangle \right) \right| ^{2}+\left| \beta \left( \left| x\right\rangle \right) \right| ^{2}\cdot \underset{=1}{\underbrace{\left| e^{i\phi }\right| ^{2}}} \end{aligned}$$
(21)

and, hence, the amplitudes are again as in (14). The complex-valued inner product \(\left\langle x|w\right\rangle \) for \(\left| w\right\rangle =\cos \left( \omega \right) \cdot \left| 0\right\rangle +\sin \left( \omega \right) \cdot e^{i\psi }\cdot \left| 1\right\rangle \) is calculated as

$$\begin{aligned} \left\langle x|w\right\rangle= & {} \alpha \left( \left| x\right\rangle \right) \cdot \overline{\alpha \left( \left| w\right\rangle \right) }+\beta \left( \left| x\right\rangle \right) \cdot e^{i\phi }\cdot \overline{\beta \left( \left| w\right\rangle \right) \cdot e^{i\psi }}\nonumber \\ \nonumber \\= & {} \cos \left( \xi \right) \cos \left( \omega \right) +\sin \left( \xi \right) \sin \left( \omega \right) \cdot e^{i\left( \phi -\psi \right) }\nonumber \\ \nonumber \\= & {} \cos \left( \xi \right) \cos \left( \omega \right) +\sin \left( \xi \right) \sin \left( \omega \right) \cdot \left( \cos \left( \phi -\psi \right) \right. \nonumber \\&\left. +\,i\cdot \sin \left( \phi -\psi \right) \right) \end{aligned}$$
(22)

now depending on both phases \(\psi \) and \(\phi \). Accordingly, the qubit distance \(\delta \left( \left| x\right\rangle ,\left| w\right\rangle \right) \) is calculated as

$$\begin{aligned} \delta \left( \left| x\right\rangle ,\left| w\right\rangle \right)= & {} \sqrt{\left\langle x|x\right\rangle -\left\langle x|w\right\rangle -\left\langle w|x\right\rangle +\left\langle w|w\right\rangle }\nonumber \\ \nonumber \\= & {} \sqrt{2-\left\langle x|w\right\rangle -\overline{\left\langle x|w\right\rangle }}\nonumber \\ \nonumber \\= & {} \sqrt{2}\cdot \sqrt{1-\mathfrak {Re}\left( \left\langle x|w\right\rangle \right) } \end{aligned}$$
(23)

paying attention to the properties of the complex inner product. Particularly, we have

$$\begin{aligned} \mathfrak {Re}\left( \left\langle x|w\right\rangle \right) =\cos \left( \xi \right) \cos \left( \omega \right) +\sin \left( \xi \right) \sin \left( \omega \right) \cdot \cos \left( \phi -\psi \right) \end{aligned}$$
(24)

for the real part of \(\left\langle x|w\right\rangle \).

Transformations \(\mathbf {U}\left[ \begin{array}{c} \alpha \left( \left| x\right\rangle \right) \\ \beta \left( \left| x\right\rangle \right) \end{array}\right] =\mathbf {U}\left| x\right\rangle \) of complex qubits are realized by unitary matrices \(\mathbf {U}\in \mathbb {C}^{2\times 2}\), i.e., \(\mathbf {U}\cdot \mathbf {U}^{*}=\mathbf {E}\), where \(\mathbf {U}^{*}\) is the Hermitian transpose. Unitary matrices leave the inner product invariant, i.e., \(\left\langle \mathbf {U}x|\mathbf {U}y\right\rangle =\left\langle x|y\right\rangle \). As for orthogonal matrices, the unitary matrices form a group with matrix multiplication as group operation [13].

Using again the normalization condition (13), we obtain

$$\begin{aligned} \delta \left( \left| \mathbf {x}\right\rangle ,\left| \mathbf {w}\right\rangle \right) =2\left( n-\mathfrak {Re}\left( \left\langle \mathbf {x}|\mathbf {w}\right\rangle _{\mathcal {H}^{n}}\right) \right) \end{aligned}$$
(25)

as the squared quantum distance between the qubit vectors. This can be calculated as

$$\begin{aligned} \delta \left( \left| \mathbf {x}\right\rangle ,\left| \mathbf {w}\right\rangle \right) =2\left( n-\sum _{k=1}^{n}\left( \cos \left( \xi _{k}\right) \cos \left( \omega _{k}\right) +\sin \left( \xi _{k}\right) \sin \left( \omega _{k}\right) \cdot \cos \left( \phi _{k}-\psi _{k}\right) \right) \right) \end{aligned}$$
(26)

using (24).

Analogously to the real case, unitary transformations of qubit vectors are realized by block-diagonal matrices according to \(\mathbf {U}^{\left( n\right) }\left| \mathbf {x}\right\rangle =\text {diag}\left( \mathbf {U}_{1},\ldots ,\mathbf {U}_{n}\right) \cdot \left| \mathbf {x}\right\rangle \), and the quantum space \(\mathcal {H}^{n}\) of n-dimensional complex qubit vectors is a Hilbert space with the inner product \(\left\langle \mathbf {x}|\mathbf {w}\right\rangle _{\mathcal {H}^{n}}\).
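
A corresponding sketch for the complex case evaluates (26) directly from the amplitude angles and phases (illustrative function name, assuming equal-length NumPy arrays):

```python
import numpy as np

def complex_qubit_vector_distance(xi, phi, omega, psi):
    """Squared quantum distance (25)/(26) between complex qubit vectors parameterized by
    amplitude angles (xi, omega) and phases (phi, psi)."""
    re_inner = np.sum(np.cos(xi) * np.cos(omega)
                      + np.sin(xi) * np.sin(omega) * np.cos(phi - psi))   # Re<x|w>_{H^n}, eq. (24)
    return 2.0 * (len(xi) - re_inner)
```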

3.2 The quantum-inspired GLVQ algorithm

3.2.1 The real case

For the real case, we assume vanishing complex phase information [30], i.e., we require \(\phi =0\), yielding \(e^{i\phi }=1\). Thus, the Qu-GLVQ approach takes real-valued qubit vectors \(\left| \mathbf {x}\right\rangle \) and \(\left| \mathbf {w}\right\rangle \) as elements of the data and the prototype sets X and W, respectively. The dissimilarity measure \(d^{\pm }\left( \mathbf {v}\right) =d\left( \mathbf {v},\mathbf {p}^{\pm }\right) \) in (2) for GLVQ then has to be replaced by the squared qubit vector distance \(\delta \left( \left| \mathbf {x}\right\rangle ,\left| \mathbf {w}\right\rangle \right) \) from (17). All angles \(\omega _{l}\) from \(\left| \mathbf {w}\right\rangle \) form the angle vector \(\varvec{\omega }=\left( \omega _{1},\ldots ,\omega _{n}\right) ^{T}\). The prototype update can be realized by adapting the angle vectors in complete analogy to (3) according to

$$\begin{aligned} \varDelta \varvec{\omega }^{\pm }\propto -\frac{\partial L\left( \left| \mathbf {x}\right\rangle ,W\right) }{\partial \varvec{\omega }^{\pm }}=-\frac{\partial f\left( \mu \left( \left| \mathbf {x}\right\rangle \right) \right) }{\partial \mu }\frac{\partial \mu \left( \left| \mathbf {x}\right\rangle \right) }{\partial \delta ^{\pm }}\frac{\partial \delta ^{\pm }\left( \left| \mathbf {x}\right\rangle \right) }{\partial \varvec{\omega }^{\pm }} \end{aligned}$$
(27)

where

$$\begin{aligned} \delta ^{\pm }\left( \left| \mathbf {x}\right\rangle \right) =\delta \left( \left| \mathbf {x}\right\rangle ,\left| \mathbf {w}^{\pm }\right\rangle \right) \end{aligned}$$
(28)

is the squared distance to the best matching correct and incorrect prototype. Using the relation (18), we easily obtain

$$\begin{aligned} \frac{\partial \delta ^{\pm }\left( \left| \mathbf {x}\right\rangle \right) }{\partial \omega _{k}^{\pm }}=2\left( \cos \left( \xi _{k}\right) \cdot \sin \left( \omega _{k}^{\pm }\right) -\sin \left( \xi _{k}\right) \cdot \cos \left( \omega _{k}^{\pm }\right) \right) \end{aligned}$$
(29)

delivering the gradient vector \(\frac{\partial \delta ^{\pm }\left( \left| \mathbf {x}\right\rangle \right) }{\partial \varvec{\omega }^{\pm }}\) where

$$\begin{aligned} \frac{\partial \delta ^{\pm }\left( \left| \mathbf {x}\right\rangle \right) }{\partial \omega _{k}^{\pm }}=\frac{\partial \delta ^{\pm }\left( \left| \mathbf {x}\right\rangle \right) }{\partial \left| \mathbf {w}^{\pm }\right\rangle }\frac{\partial \left| \mathbf {w}^{\pm }\right\rangle }{\partial \varvec{\omega }^{\pm }} \end{aligned}$$
(30)

is used. We further remark that (27) together with (29) ensures that the prototypes remain quantum state vectors. Thus, this update corresponds to an orthogonal (unitary) transformation \(\mathbf {U}_{k}\left( \varDelta \omega _{k}\right) \cdot \left| w_{k}\right\rangle \), realizing the update \(\varDelta \left| \mathbf {w}^{\pm }\right\rangle \) directly in the quantum space \(\mathcal {H}^{n}\). Further, we can collect all transformations by \(\mathbf {U}_{k}^{\varSigma }=\prod _{t=1}^{N}\mathbf {U}_{k}\left( \varDelta \omega _{k}\left( t\right) \right) \), where \(\varDelta \omega _{k}\left( t\right) \) is the angle change at time step t. Due to the group property of orthogonal transformations, the matrix \(\mathbf {U}_{k}^{\varSigma }\) is also orthogonal and allows to re-calculate the initial state \(\left| w_{k}\right\rangle \) from the final one.
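
For illustration, one SGDL step of real-valued Qu-GLVQ according to (27)–(29) may be sketched as follows; the sigmoid transfer function, our own function names, and adaptation of only the two best matching prototypes are assumed:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def qu_glvq_step(xi, c_x, prototype_angles, labels, lr=0.01):
    """One SGDL step of real-valued Qu-GLVQ for a single mapped sample.

    xi: angle vector of the data qubit vector |x>, shape (n,),
    prototype_angles: array (K, n) of prototype angle vectors omega_k,
    labels: array (K,) of prototype class labels.
    Adapting the angle vectors keeps the prototypes quantum state vectors by construction.
    """
    n = len(xi)
    # squared qubit vector distances (18) to all prototypes
    delta = 2.0 * (n - np.sum(np.cos(xi) * np.cos(prototype_angles)
                              + np.sin(xi) * np.sin(prototype_angles), axis=1))
    correct = labels == c_x
    k_plus = np.argmin(np.where(correct, delta, np.inf))
    k_minus = np.argmin(np.where(~correct, delta, np.inf))
    d_plus, d_minus = delta[k_plus], delta[k_minus]

    mu = (d_plus - d_minus) / (d_plus + d_minus)          # classifier function (2)
    f_prime = sigmoid(mu) * (1.0 - sigmoid(mu))           # derivative of the sigmoid transfer function
    denom = (d_plus + d_minus) ** 2
    dmu = {k_plus: 2.0 * d_minus / denom, k_minus: -2.0 * d_plus / denom}

    for k in (k_plus, k_minus):
        omega = prototype_angles[k]
        grad = 2.0 * (np.cos(xi) * np.sin(omega) - np.sin(xi) * np.cos(omega))   # cf. (29)
        prototype_angles[k] = omega - lr * f_prime * dmu[k] * grad               # update (27)
    return prototype_angles
```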

3.2.2 The complex case

The complex variant of Qu-GLVQ depends on both \(\varvec{\omega }^{\pm }\) and \(\varvec{\psi }^{\pm }\) according to (26) and, therefore, the angle vector update (27) for \(\varvec{\omega }^{\pm }\) is accompanied by the respective update

$$\begin{aligned} \varDelta \varvec{\psi }^{\pm }\propto -\frac{\partial L\left( \left| \mathbf {x}\right\rangle ,W\right) }{\partial \varvec{\psi }^{\pm }}=-\frac{\partial f\left( \mu \left( \left| \mathbf {x}\right\rangle \right) \right) }{\partial \mu }\frac{\partial \mu \left( \left| \mathbf {x}\right\rangle \right) }{\partial \delta ^{\pm }}\frac{\partial \delta ^{\pm }\left( \left| \mathbf {x}\right\rangle \right) }{\partial \varvec{\psi }^{\pm }}\,, \end{aligned}$$
(31)

for the phase vectors. Again we used the convention

$$\begin{aligned} \frac{\partial \delta ^{\pm }\left( \left| \mathbf {x}\right\rangle \right) }{\partial \varvec{\psi }^{\pm }}=\frac{\partial \delta _{\mathfrak {W}}^{\pm }\left( \left| \mathbf {x}\right\rangle \right) }{\partial \left| \mathbf {w}^{\pm }\right\rangle }\frac{\partial \left| \mathbf {w}^{\pm }\right\rangle }{\partial \varvec{\psi }^{\pm }} \end{aligned}$$

similarly to before, but taking the Wirtinger derivative, see “Appendix 6”. However, we can avoid the explicit application of the Wirtinger calculus: Using (24), we now obtain for (30)

$$\begin{aligned} \frac{\partial \delta ^{\pm }\left( \left| \mathbf {x}\right\rangle \right) }{\partial \omega _{k}^{\pm }}= & {} -2\frac{\partial }{\partial \omega _{k}^{\pm }}\left( \cos \left( \xi _{k}\right) \cdot \cos \left( \omega _{k}^{\pm }\right) +\sin \left( \xi _{k}\right) \cdot \sin \left( \omega _{k}^{\pm }\right) \cdot \cos \left( \phi _{k}-\psi _{k}^{\pm }\right) \right) \\= & {} 2\left( \cos \left( \xi _{k}\right) \cdot \sin \left( \omega _{k}^{\pm }\right) -\sin \left( \xi _{k}\right) \cdot \cos \left( \omega _{k}^{\pm }\right) \cdot \cos \left( \phi _{k}-\psi _{k}^{\pm }\right) \right) \end{aligned}$$

determining \(\varDelta \varvec{\omega }^{\pm }\) via (27) and again avoiding the explicit application of the Wirtinger calculus. Further, we get

$$\begin{aligned} \frac{\partial \delta ^{\pm }\left( \left| \mathbf {x}\right\rangle \right) }{\partial \psi _{k}^{\pm }}= & {} -2\frac{\partial }{\partial \psi _{k}^{\pm }}\left( \cos \left( \xi _{k}\right) \cdot \cos \left( \omega _{k}^{\pm }\right) +\sin \left( \xi _{k}\right) \cdot \sin \left( \omega _{k}^{\pm }\right) \cdot \cos \left( \phi _{k}-\psi _{k}^{\pm }\right) \right) \\= & {} -2\sin \left( \xi _{k}\right) \cdot \sin \left( \omega _{k}^{\pm }\right) \cdot \sin \left( \phi _{k}-\psi _{k}^{\pm }\right) \end{aligned}$$

determining \(\varDelta \varvec{\psi }^{\pm }\) via (31).

Again, the prototype update ensures the quantum state property of the adapted prototypes and, hence, can be seen as unitary transformations \(\mathbf {U}_{k}\left( \varDelta \omega _{k},\varDelta \psi _{k}\right) \cdot \left| w_{k}\right\rangle \).

3.3 Data transformations and the relation of Qu-GLVQ to kernel GLVQ

In the next step, we explain the mapping of the data into the quantum state space \(\mathcal {H}^{n}\). For the prototypes, we always assume that these are given in \(\mathcal {H}^{n}\).

Starting from usual real data vectors, we apply a nonlinear mapping

$$\begin{aligned} \varPhi :\mathbb {R}^{n}\ni \mathbf {v}\rightarrow \left| \mathbf {x}\right\rangle \in \mathcal {H}^{n} \end{aligned}$$

to obtain qubit vectors. In context of quantum machine learning, \(\varPhi \) is also denoted as quantum feature map playing the role of a real kernel feature map [46, 48]. This mapping can be realized taking

$$\begin{aligned} \alpha \left( \left| x_{l}\right\rangle \right) =\cos \left( \xi _{l}\right) \text { and }\beta \left( \left| x_{l}\right\rangle \right) =\sin \left( \xi _{l}\right) \end{aligned}$$

keeping in mind the normalization (13) and applying an appropriate squashing function \(\varPi :\mathbb {R}\rightarrow \left[ 0,2\pi \right] \) such that \(\xi _{l}=\varPi \left( v_{l}\right) \) is valid. Possible choices are

$$\begin{aligned} \varPi \left( v_{l}\right) =\frac{2\pi }{1+\exp \left( -v_{l}\right) }\text { or }\varPi \left( v_{l}\right) =\pi \cdot \left( \tanh \left( v_{l}\right) +1\right) \end{aligned}$$

as suggested in [18]. Finally, we have

$$\begin{aligned} \varphi \left( v_{l}\right) =\varPi \left( v_{l}\right) =\xi _{l}\text { with }\left| x_{l}\right\rangle =\left[ \begin{array}{c} \cos \left( \xi _{l}\right) \\ \sin \left( \xi _{l}\right) \end{array}\right] \end{aligned}$$
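
A minimal sketch of this quantum feature map for real data (using the sigmoid-type squashing function above; function names are ours):

```python
import numpy as np

def real_to_qubit_angles(v):
    """Componentwise squashing Pi(v_l) = 2*pi / (1 + exp(-v_l)) to angles xi_l in [0, 2*pi]."""
    return 2.0 * np.pi / (1.0 + np.exp(-np.asarray(v, dtype=float)))

def angles_to_amplitudes(xi):
    """Amplitude representation |x_l> = (cos xi_l, sin xi_l)^T of the real qubit vector."""
    return np.stack([np.cos(xi), np.sin(xi)], axis=-1)

# example: map a real data vector to its qubit (amplitude) representation
amplitudes = angles_to_amplitudes(real_to_qubit_angles([0.3, -1.2, 2.5]))
```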

In the case of complex data vectors, we realize the mapping \(\varPhi :\mathbb {C}^{n}\ni \mathbf {v}\rightarrow \left| \mathbf {x}\right\rangle \in \mathcal {H}^{n}\) by the nonlinear mapping \(\varphi \left( v_{l}\right) =\varPsi \circ \varPi \left( v_{l}\right) \), where \(\varPi \left( z\right) \) is the stereographic projection of the complex number z

$$\begin{aligned} z\overset{\varPi }{\longmapsto }\mathbf {r}= & {} \left( r_{1},r_{2},r_{3}\right) \nonumber \\= & {} \left( \frac{z+\overline{z}}{1+\left| z\right| ^{2}},\frac{z-\overline{z}}{i\cdot \left( 1+\left| z\right| ^{2}\right) },\frac{\left| z\right| ^{2}-1}{1+\left| z\right| ^{2}}\right) \in \mathscr {R}\subseteq \mathbb {R}^{3} \end{aligned}$$
(32)

onto the Riemann sphere \(\mathscr {R}\) fulfilling the constraint \(\left\| \mathbf {r}\right\| =1\) for all \(\mathbf {r}\in \mathscr {R}\), see Fig. 1.

Fig. 1

Illustration of the stereographic projection \(\varPi \left( z\right) \) of a complex number z to a vector \(x=\left( x_{1},x_{2},x_{3}\right) ^{T}\) on the (Riemann) sphere, i.e., \(x_{1}^{2}+x_{2}^{2}+x_{3}^{2}=1\) is valid

The subsequent mapping \(\varPsi \left( \mathbf {r}\right) \) delivers the spherical coordinates

$$\begin{aligned} \xi= & {} \arccos \left( r_{3}\right) \\ \phi= & {} \arctan _{2}\left( r_{2},r_{1}\right) \end{aligned}$$

of \(\mathbf {r}\) such that

$$\begin{aligned} \varphi \left( v_{l}\right) =\varPsi \circ \varPi \left( v_{l}\right) =\left( \begin{array}{c} \xi \\ \phi \end{array}\right) \text { with }\left| x_{l}\right\rangle =\left[ \begin{array}{c} \cos \left( \xi \right) \\ \sin \left( \xi \right) \cdot \exp \left( i\cdot \phi \right) \end{array}\right] \end{aligned}$$

is obtained, see Fig. 2.

Fig. 2

Illustration of the transformation of Bloch-sphere coordinates into the respective angle values using spherical coordinates

Note that the stereographic projection \(\varPi \left( z\right) \) is unique and realizes a squashing effect. This effect becomes particularly apparent for \(\left| z\right| \rightarrow \infty \) and, hence, is comparable to the squashing effect realized by \(\varPi \left( v_{l}\right) \) in the real case.
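
A sketch of the complex quantum feature map, combining the stereographic projection (32) with the spherical-coordinate mapping \(\varPsi \) (function names are ours; applied componentwise to a complex data vector):

```python
import numpy as np

def complex_to_bloch_angles(z):
    """Map a complex number z to Bloch-sphere angles (xi, phi) via the stereographic
    projection (32) onto the Riemann sphere and subsequent spherical coordinates."""
    denom = 1.0 + np.abs(z) ** 2
    r1 = 2.0 * np.real(z) / denom        # (z + conj(z)) / (1 + |z|^2)
    r2 = 2.0 * np.imag(z) / denom        # (z - conj(z)) / (i (1 + |z|^2))
    r3 = (np.abs(z) ** 2 - 1.0) / denom
    xi = np.arccos(r3)                   # polar angle
    phi = np.arctan2(r2, r1)             # azimuthal angle (two-argument arctangent)
    return xi, phi

def complex_qubit(z):
    """Complex qubit |x_l> = (cos xi, sin xi * exp(i phi))^T for a complex data component z."""
    xi, phi = complex_to_bloch_angles(z)
    return np.array([np.cos(xi), np.sin(xi) * np.exp(1j * phi)])
```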

4 Numerical experiments

We tested the proposed Qu-GLVQ algorithm on several datasets in comparison with established LVQ approaches as well as SVM. Particularly, we compare Qu-GLVQ with standard GLVQ, KGLVQ, and SVM. For both SVM and KGLVQ, the rbf kernel was used with the same kernel width, obtained by a grid search. For all LVQ variants including Qu-GLVQ, we used only one prototype per class in all experiments.

The datasets are: WHISKY, a spectral dataset for classifying Scottish whisky described in [1, 2]; WBCD, the UCI Wisconsin Breast Cancer Dataset; HEART, the UCI heart disease dataset; PIMA, the UCI diabetes dataset; and FLC1, a satellite remote sensing LANDSAT TM dataset [27]. The averaged results reported were obtained by 10-fold cross-validation. Only test accuracies are given.

Table 1 Classification accuracies in % and standard deviations for GLVQ, Qu-GLVQ, KGLVQ, and SVM for the considered datasets (10-fold cross-validation)

We observe an overall good performance of Qu-GLVQ, comparable to the other methods, see Table 1. Kernel methods seem to be beneficial, as indicated by KGLVQ and SVM for HEART. Yet, Qu-GLVQ delivers similar results. Where SVM yields significantly better performance than KGLVQ and Qu-GLVQ, one has to take into account that the SVM complexity (number of support vectors) is much higher than in the LVQ networks, where the number of prototypes was always chosen to be only one per class. Further, for WHISKY the KGLVQ was not able to achieve a classification accuracy comparable to the other approaches, whereas Qu-GLVQ performed well. This might be attributed to the crucial dependency of the rbf kernel on the kernel width. Further, Qu-GLVQ seems to be more stable than KGLVQ and SVM considering the standard deviations.

5 Conclusion

In this contribution, we introduced an approach to incorporate quantum machine learning strategies into the GLVQ framework. Usual data and prototype vectors are replaced by respective quantum bit vectors. This replacement can be seen as an explicit nonlinear mapping of the data into the quantum Hilbert space, which makes the difference to the implicit feature mapping in the case of kernel methods like kernelized GLVQ or SVM. Thus, one can visualize and analyze the data as well as the prototypes in this space, such that Qu-GLVQ becomes better interpretable than KGLVQ or SVM, which lack this possibility.

Otherwise, the Qu-GLVQ approach shows mathematical equivalence to the kernel approaches in a topological sense: the distance calculations are carried out in the mapping Hilbert space, implicitly for usual kernels and explicitly for Qu-GLVQ. The resulting adaptation dynamics in Qu-GLVQ is consistent with the unitary transformations required for quantum state changes, because the prototypes remain quantum state vectors.

Further investigations should include several modifications and extensions of the proposed Qu-GLVQ. First, matrix learning as in GMLVQ can easily be integrated. Further, the influence of different transfer functions as proposed in [62] for standard GLVQ has to be considered to improve learning speed and performance. A much more challenging task will be to consider entanglement of qubits and complex amplitudes \(\beta \in \mathbb {C}\) for qubits [20] in the context of Qu-GLVQ. This research should also continue the approach in [71].

Finally, the adaptation to a real quantum system remains the ultimate goal, as recently realized for other classifier systems [47].

Overall, Qu-GLVQ performs equivalently to KGLVQ and SVM. However, due to its better interpretability it could be an interesting alternative if interpretable models are demanded.