Gradient-based Competitive Learning: Theory

Deep learning has been widely used for supervised learning and classification/regression problems. Recently, a novel area of research has applied this paradigm to unsupervised tasks; indeed, a gradient-based approach extracts, efficiently and autonomously, the relevant features for handling input data. However, state-of-the-art techniques focus mostly on algorithmic efficiency and accuracy rather than on mimicking the input manifold. On the contrary, competitive learning is a powerful tool for replicating the topology of the input distribution. This paper introduces a novel perspective in this area by combining these two techniques: unsupervised gradient-based learning and competitive learning. The theory is based on the intuition that neural networks are able to learn topological structures by working directly on the transpose of the input matrix. To this end, the vanilla competitive layer and its dual are presented. The former is just an adaptation of a standard competitive layer for deep clustering, while the latter is trained on the transposed matrix. Their equivalence is extensively proven both theoretically and experimentally. However, the dual layer is better suited for handling very high-dimensional datasets. The proposed approach has great potential, as it can be generalized to a vast selection of topological learning tasks, such as non-stationary and hierarchical clustering; furthermore, it can also be integrated within more complex architectures such as autoencoders and generative adversarial networks.

famous techniques such as Gaussian Mixture Models (GMM) [2] and Neural Gas (NG) [3]. To overcome this limitation, several incremental algorithms have been proposed in the literature, where the number of neurons is not fixed but changes over time w.r.t. the complexity of the problem at hand. This approach adds a novel unit whenever certain conditions are met, e.g. the quantization error is too high or data is too far from the existing neurons; in this sense, the new unit should yield a better quantization of the input distribution. Some examples are the adaptive k-means [4] and Density-Based Spatial Clustering (DBSCAN) [5]. Furthermore, unsupervised learning is generally capable of finding groups of samples that are similar under a specific metric, e.g. the Euclidean distance. However, it cannot infer the underlying data topology. To this end, to define a local topology, the Competitive Hebbian Learning (CHL) paradigm [6,7,8] is employed by some algorithms such as the Self-Organizing Map (SOM) by Kohonen [9], the Growing Neural Gas (GNG) [10] and its variants [11,12,13,14]. Indeed, given an input sample, the two closest neurons, called the first and second winners, are linked by an edge, which locally models the input shape.
All the previously cited techniques suffer from the curse of dimensionality [15]. Distance-based similarity measures are not effective when dealing with highly dimensional data (e.g. images or gene expression) [16]. Therefore, many methods to reduce input dimensionality and to select the most important features have been employed, such as Principal Component Analysis (PCA) [17] and kernel functions [18]. To better preserve local topology in the reduced space, the Curvilinear Component Analysis (CCA) [19] and its online incremental version, the GCCA [20,21], proposed a non-linear projection algorithm. This approach is quite useful for noise removal and when input features are highly correlated, because the projection reduces the problem complexity; on the contrary, when features are statistically independent, a smaller space implies worse clustering performance due to the information loss. An alternative way of dealing with high-dimensional data is the use of Deep Neural Networks (DNN). Indeed, Convolutional Neural Networks (CNN) [22] have proven to be a valid tool for handling high-dimensional input distributions in the case of supervised learning. The strength of CNNs lies in their convolutional filters, whose output is linearly separable. In this sense, CNN filters can also be exploited for clustering. Also, DNNs can be trained by optimizing a clustering loss function [23], [24], [25]. A straightforward approach, however, may lead to overfitting, where data are mapped to compact clusters that do not correspond to the data topology. To overcome this problem, weight regularization, data augmentation and supervised network pre-training have been proposed [26]. The latter technique exploits a pre-trained CNN (e.g. AlexNet on ImageNet [27]) as a feature extractor in a transfer learning fashion [28]. Otherwise, clustering learning procedures may be integrated with a network learning process, which allows employing more complex architectures such as Autoencoders (AE) [29], Variational Autoencoders
(VAE) [30] or Generative Adversarial Networks (GAN) [31]. Such techniques usually employ a two-step learning process: first, a good representation of the input space is learnt through a network loss function and, later, the quantization is fine-tuned by optimizing a clustering-specific loss. The network loss can be either the reconstruction loss of an AE, the variational loss of a VAE or the adversarial loss of a GAN.
To our knowledge, no previous work has suggested joining the feature transformation skills of DNNs with the higher representation capabilities of competitive learning approaches. In this paper, we propose two variants of a neural architecture where competitive learning is embedded in the training loss function. These networks can also be placed on top of more complex models, such as AEs, CNNs, VAEs or GANs.
2 Gradient-based competitive learning

Dual neural networks
Multi-layer feedforward neural networks are universal function approximators [32]. Given an input matrix X ∈ R^{d×n} containing a collection of n observations and a set of k supervisions Y ∈ R^{k×n}, a neural network with d input and k output units can be used to approximate the target features Y. The relationship between X and Y can be arbitrarily complex; nonetheless, deep neural networks can optimize their parameters in such a way that their predictions Ŷ will match the target Y. In supervised settings, neural networks are used to combine the information of different features (rows of X) in order to provide a prediction Ŷ, which corresponds to a nonlinear projection of the observations (columns of X) optimized to match the target Y. Hence, in such scenarios, the neural network will provide one prediction for each observation i = 1, ..., n.
The objective of competitive learning consists in studying the underlying structure of a manifold by means of prototypes, i.e. a set of positions in the feature space representative of the input observations. Each prototype p_j is a vector in R^d, as it lies in the same feature space as the observations. Hence, competitive learning algorithms can be described as functions mapping an input matrix X ∈ R^{d×n} to an output matrix P ∈ R^{d×k}, where the j-th column represents the prototype p_j. Vanilla competitive neural networks [33,34,35] are composed of a set of competing neurons described by a vector of weights p_j, representing the position of the neurons (a.k.a. prototypes) in the input space. The inverse of the Euclidean distance between the input data x_i and the weight vector p_j represents the similarity between the input and the prototype. For every input vector x_i, the prototypes compete with each other to determine which one is the most similar to that particular input vector. By following the Competitive Hebbian Learning (CHL) rule [6,7], the two prototypes closest to x_i are connected by an edge, representing their mutual activation. Depending on the approach, the prototypes closest to the input sample move towards it, reducing the distance between the prototype and the input. As a result, the positions of the competing neurons in the input space will tend to the cluster centroids of the input data. The most natural way of using a feedforward neural network for this kind of task is the transposition of the input matrix X while optimizing a prototype-based loss function. This idea leads to the dual competitive layer (DCL, see Sections 2.2 and 2.3), i.e.
a fully connected layer trained on X^T, thus having n input units corresponding to observations and k output units corresponding to prototypes. Instead of combining different features to generate the feature subspace R^k where samples are projected, as in classification or regression tasks, in this case the neural network combines different samples to generate a synthetic summary of the observations, represented by a set of prototypes. Compared with the architecture of a vanilla competitive layer (VCL) [33], where the prototypes correspond to the set of weight vectors of the fully connected layer, the dual approach naturally fits in a deep learning framework, as the fully connected layer is actually used to apply a transformation of the input. This is not the case in a VCL, where the output is not even computed.
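The contrast between the two layers can be sketched in a few lines of NumPy (illustrative code, not the paper's implementation; shapes follow the notation above):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 2, 100, 3          # features, observations, prototypes

X = rng.normal(size=(d, n))  # input matrix: one observation per column

# Vanilla competitive layer (VCL): the prototypes ARE the weights.
W1 = rng.normal(size=(k, d))
P_vcl = W1                   # k prototypes, one per row, living in R^d

# Dual competitive layer (DCL): the prototypes are the OUTPUTS of a
# linear layer fed with the transposed input matrix X^T.
W2 = rng.normal(size=(k, n))
P_dcl = W2 @ X.T             # (k, d): each row is a prototype in R^d
```

In VCL the prototypes are parameters read off directly, whereas in DCL they are produced by a forward pass, which is why the dual layer can naturally sit on top of deeper feature extractors.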
The DCL outputs the prototypes after one batch (epoch) by means of a linear transformation represented by its weights. To this end, a gradient-based minimization of a loss function is used, employing the whole batch. This is reminiscent of the centroid estimation of the generalized Lloyd algorithm (k-means, [36,37]), which, instead, uses only the Voronoi sets. This is an important difference, because the error information can be backpropagated to the previous layer, if any, by exploiting all the observations, thus providing a relaxation of the Voronoi constraint. The underlying DCL analysis can be found in Section 3.
In order to estimate the parameters of DCL through a loss function representing the quantization error of the prototypes, the Voronoi sets are estimated by means of the Euclidean distance matrix (edm). This is the same requirement as in the second iteration of the generalized Lloyd algorithm. However, the latter uses this information for directly computing the centroids; the former, instead, only yields the error to be backpropagated. The analysis and choice of the loss function are illustrated in Section 2.3.
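A minimal sketch of this estimation step, assuming plain NumPy and squared Euclidean distances (variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n, k = 2, 8, 3
X = rng.normal(size=(d, n))        # data, one column per observation
P = rng.normal(size=(k, d))        # current prototypes, one per row

# Squared Euclidean distance matrix between observations and prototypes.
edm = ((X.T[:, None, :] - P[None, :, :]) ** 2).sum(-1)   # shape (n, k)

# Voronoi set of each prototype: the samples whose nearest prototype it is.
winner = edm.argmin(axis=1)                               # shape (n,)
voronoi = [np.flatnonzero(winner == j) for j in range(k)]

# Quantization error: mean squared distance to the winning prototype.
Q = edm[np.arange(n), winner].mean()
```

Only the scalar Q needs to be differentiated; the Voronoi assignment itself is a piecewise-constant selection computed from the same edm.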
By training on the transposed input, DCL looks at observations as features and vice versa. As a consequence, increasing the number of observations n (rows of X^T) enhances the capacity of the network, as the number of input units corresponds to n. Providing a higher number of features, instead, stabilizes the learning process, as it expands the set of input vectors of DCL.
After training, once the prototype positions have been estimated, the dual network is no longer needed. Indeed, test observations can be evaluated by finding the closest prototype for each sample. This means that the amount of information required to employ this approach in production environments corresponds just to the prototype matrix P.

Duality theory for single-layer networks
The intuitions outlined in the previous section can be formalized in a general theory which considers the duality properties between a linear single-layer neural network and its dual, defined as a network which learns on the transpose of the input matrix and has the same number of output neurons. Consider a single-layer neural network whose outputs have linear activation functions. There are d input units and k output units, which represent a continuous signal in the case of regression, or class memberships (posterior probabilities in the case of a cross-entropy error function) in the case of classification. A batch of n samples, say X, is fed to the network. The weight matrix is W_1, where the element w_ij represents the weight from the input unit j to the neuron i. The single-layer neural network with linear activation functions in the lower scheme is here called the dual network of the former one. It has the same number of outputs and n inputs. It is trained on the transpose of the original database X. Its weight matrix is W_2 and the output batch is Y_2. The following theorems state the duality conditions of the two architectures. Figure 2 represents the two networks and their duality.

Theorem 2.1 (Network duality in competitive learning). Given a loss function for competitive learning based on prototypes, a single linear network (base) whose output neuron weights are the prototypes is equivalent to another (dual) whose outputs are the prototypes, under the following assumptions:

1. the input matrix of the dual network is the transpose of the input matrix of the base network;
2. the samples of the input matrix X are uncorrelated with unit variance.

Proof. Consider a loss function based on prototypes, whose minimization is required for competitive learning. From the assumption on the inputs (rows of the matrix X), it results XX^T = I_d. A single-layer linear network is represented by the matrix formula:

Y_1 = W_1 X

By multiplying on the right by X^T, it holds:

Y_1 X^T = W_1 X X^T

Under the second assumption:

Y_1 X^T = W_1

This equation represents a (dual) linear network whose outputs are the prototypes W_1. Considering that the same loss function is used in both cases, the two networks are equivalent.
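The duality can be checked numerically; the sketch below (NumPy, illustrative) builds an X satisfying XX^T = I_d via a QR factorization and verifies that feeding X^T through the weights Y_1 recovers W_1:

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, k = 3, 10, 4

# Build X in R^{d x n} with orthonormal rows, so that X X^T = I_d
# (assumption 2 of the theorem).
Q, _ = np.linalg.qr(rng.normal(size=(n, d)))   # Q: (n, d), Q^T Q = I_d
X = Q.T

W1 = rng.normal(size=(k, d))                   # base network weights
Y1 = W1 @ X                                    # base network outputs

# Dual reading: a network with input X^T and weights Y1 outputs W1.
recovered = Y1 @ X.T                           # = W1 X X^T = W1
```

The recovered matrix equals W_1 up to floating-point precision, which is exactly the statement Y_1 X^T = W_1 of the proof.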
This theorem directly applies to the VCL (base) and DCL (dual) neural networks if assumption 2 holds for the training set. If not, a preprocessing step, e.g. batch normalization, can be performed.

Theorem 2.2 (Impossible complete duality). Two dual networks cannot share weights as W_1 = Y_2 and W_2 = Y_1 (complete dual constraint), except if the samples of the input matrix X are uncorrelated with unit variance.
Proof. From the duality of the networks and their linearity, for an entire batch it follows:

Y_1 = W_1 X = Y_2 X = W_2 X^T X = Y_1 X^T X  ⟹  X^T X = I_n
Y_2 = W_2 X^T = Y_1 X^T = W_1 X X^T = Y_2 X X^T  ⟹  X X^T = I_d

where I_d and I_n are the identity matrices of size d and n, respectively. These two final conditions are only possible if the samples of the input matrix X are uncorrelated with unit variance, which is not the case in (almost all) machine learning applications.
Theorem 2.3 (Half duality I). Given two dual networks, if the samples of the input matrix X are uncorrelated with unit variance and if W_1 = Y_2 (first dual constraint), then W_2 = Y_1.

Proof. From the first dual constraint (see Figure 3, top), for the second network it stems:

W_1 = Y_2 = W_2 X^T

Hence:

Y_1 = W_1 X = W_2 X^T X = W_2

under the second assumption on X^T from Theorem 2.1, which implies X^T X = I_n; the result follows (see Figure 3, bottom).
Theorem 2.4 (Half duality II). Given two dual networks, if the samples of the input matrix X are uncorrelated with unit variance and if W_2 = Y_1 (second dual constraint), then W_1 = Y_2.

Proof. From the second dual constraint (see Figure 3, left), for the second network it stems:

Y_2 = W_2 X^T = Y_1 X^T

From the assumption on the inputs (rows of the matrix X), it results XX^T = I_d. The first neural architecture yields (see Figure 3, right):

Y_2 = Y_1 X^T = W_1 X X^T = W_1

Theorem 2.4 justifies the use of the first single-layer neural network as a competitive layer.

Corollary 2.4.1 (Self-supervised learning). The assumption of Theorem 2.4 implies the construction of labels for the base network.
Proof. As sketched in Figure 3, under the assumption of the equivalence between the training of the dual network (building of prototypes) and the architecture of the base network (output neurons as prototypes), the previous theorem implies the second dual constraint, which means the construction of a self-organized label.
Thanks to this corollary, the base network can work in a self-supervised way, by using the results of the dual self-organization to infer information on the dataset. This results in a new approach to self-supervised learning.

Clustering as a loss minimization
The theoretical framework developed in Section 2.2 can be easily adapted to accommodate a variety of unsupervised learning tasks by designing a suitable loss function. One of the most common prototype-based loss functions employed for clustering aims at minimizing the expected squared quantization error [38]. Depending on the feature subspace, some clusters may have complex shapes; therefore, using only one prototype per cluster may result in a poor representation. To overcome this limitation, each cluster can be represented by a graph composed of a collection of connected prototypes. The corresponding loss function can be written as:

L = Q + λ ||E||^2

where Q is the quantization error and E is the adjacency matrix describing the connections between prototypes. The Q term is estimated from the Voronoi sets of the prototypes, which require the evaluation of the edm between X and Y. The E term uses the CHL rule, which implies the estimation of the first and second winner w.r.t. each sample by means of the same edm. By using the Lagrangian term λ||E||^2, the complexity of the graph representing the connections among prototypes can be minimized, in order to learn the minimal topological structure. Lonely prototypes (i.e. prototypes without connections) may represent outliers and can be easily pruned or examined individually.
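A minimal sketch of how the E term can be accumulated with the CHL rule (NumPy, illustrative; the combination of the quantization and edge terms into a single scalar loss is assumed here):

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, k = 2, 50, 4
X = rng.normal(size=(d, n))        # data, one column per observation
P = rng.normal(size=(k, d))        # prototypes, one per row

# edm between all samples and all prototypes
edm = ((X.T[:, None, :] - P[None, :, :]) ** 2).sum(-1)   # (n, k)

# CHL rule: for every sample, connect the first and the second winner.
E = np.zeros((k, k))
order = edm.argsort(axis=1)        # prototypes sorted by distance per sample
for first, second in order[:, :2]:
    E[first, second] += 1.0
    E[second, first] += 1.0

# Scalar loss combining quantization error and graph complexity.
lam = 0.01
Q = edm.min(axis=1).mean()
loss = Q + lam * (E ** 2).sum()
```

Prototypes whose row of E is all zeros never win nor come second for any sample, which is the "lonely prototype" condition used for pruning.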
The minimization of Eq. 10 can be exploited for analysing the topological properties of the input manifolds; however, this is beyond the scope of this paper. Indeed, it allows both the detection of clusters by means of the connectedness of the graphs and the selection of the best number of prototypes (pruning from a user-defined number of output units). This technique addresses the problem of the choice of the number of prototypes in k-means.
Section 2.2 established a set of conditions for the duality of two single-layer feedforward neural networks only in terms of their architecture. Instead, the choice of the learning process determines their application. In the case of clustering, they correspond to VCL and DCL respectively, if they are both trained by the minimization of Eq. 10. However, as will be shown in Section 3, the equivalence in the architecture does not imply an equivalence in the training process, even if the loss function and the optimization algorithm are the same. Indeed, in a vanilla competitive layer there is no forward pass, as Y_1 is not computed nor considered, and the prototype matrix is just the weight matrix W_1:

P_1 = W_1

where each prototype p_i ∈ R^{d×1}. In a dual competitive layer, instead, the prototype matrix corresponds to the output Y_2; hence, the forward pass is a linear transformation of the input X^T through the weight matrix W_2:

P_2 = Y_2 = W_2 X^T = [W_2 f_1, ..., W_2 f_d]

where w_i is the weight vector of the i-th output neuron of the dual network and f_i is the i-th feature over all samples of the input matrix X (the i-th row of X, i.e. the i-th column of X^T). The components of each prototype are computed using a constant weight w_i, because P_2 is an outer product, which has rank 1. Besides, each component is computed as if it were a one-dimensional learning problem.
For instance, the first column of the prototype matrix is W_2 f_1, which means that the first component of all the prototypes is computed by considering just the first feature f_1. Hence, each component is independent from all the other features of the input matrix, allowing the forward pass to act just like a collection of d one-dimensional problems.
Such differences in the forward pass have an impact on the backward pass as well, even if the form of the loss function is the same for both systems. However, the parameters of the optimization are not the same. For the base network:

min_{W_1} L(W_1)

while for the dual network:

min_{W_2} L(Y_2),  Y_2 = W_2 X^T

where Y_2 is a linear transformation (filter) of X^T represented by W_2. In the base competitive layer, the gradient of the loss function with respect to the weights W_1 is computed directly as:

∂L/∂W_1

On the other hand, in the dual competitive layer, the chain rule is required to compute the gradient with respect to the weights W_2, as the loss function depends on the prototypes Y_2:

∂L/∂W_2 = (∂L/∂Y_2)(∂Y_2/∂W_2) = (∂L/∂Y_2) X

As a result, although the architectures of the two layers are equivalent, the learning process is quite different.
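The chain rule step can be verified numerically; the following sketch (NumPy, illustrative) compares the analytic gradient (∂L/∂Y_2)X of the quantization error with a central finite difference on one entry of W_2:

```python
import numpy as np

rng = np.random.default_rng(4)
d, n, k = 3, 12, 2
X = rng.normal(size=(d, n))        # data, one column per observation
W2 = rng.normal(size=(k, n))       # dual weights

def loss(P):
    # mean squared quantization error of the prototypes P (k, d)
    edm = ((X.T[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    return edm.min(axis=1).mean()

def grad_P(P):
    # analytic gradient of the loss w.r.t. the prototypes
    edm = ((X.T[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    win = edm.argmin(axis=1)
    g = np.zeros_like(P)
    for i, j in enumerate(win):
        g[j] += 2.0 * (P[j] - X[:, i]) / n
    return g

# DCL forward pass: P = W2 X^T; chain rule: dL/dW2 = (dL/dP) X
g_analytic = grad_P(W2 @ X.T) @ X

# central finite difference on the (0, 0) entry of W2
eps = 1e-5
Wp, Wm = W2.copy(), W2.copy()
Wp[0, 0] += eps
Wm[0, 0] -= eps
g_fd = (loss(Wp @ X.T) - loss(Wm @ X.T)) / (2 * eps)
```

The two numbers agree up to the finite-difference error, confirming that the dual weights receive the prototype gradient filtered through the data matrix X.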

Simulations
In order to rigorously assess the main characteristics of the learning process, several metrics are evaluated while training the two networks on three synthetic datasets (see Appendix A.1). Figure 4 shows, for each dataset, the dynamics of three key metrics for both VCL and DCL: the quantization error, the topological complexity of the solution (i.e. ||E||), and the number of valid prototypes (i.e. the ones with a non-empty Voronoi set). Looking at the quantization error, both networks tend to converge to similar local minima in all scenarios, thus validating their theoretical equivalence. Nonetheless, the single-layer dual network exhibits a much faster rate of convergence compared to the vanilla competitive layer. The most significant differences are outlined (i) by the number of valid prototypes, as DCL tends to employ more resources, and (ii) by the topological complexity, as VCL favors more complex solutions. Fig. 5 shows the topological clustering results after 800 epochs (hyper-parameter settings are described in Appendix A.1). As expected, both neural networks yield an adequate estimation of the prototype positions, even though the topology learned by DCL is far more accurate in following the underlying manifolds w.r.t. VCL.
Fig. 6 shows the trajectories of the prototypes during the training of both networks. The parameters of both networks have been initialized by means of the Glorot initializer [39], which draws small values from a normal distribution centered on zero. For VCL, these parameters are the prototypes, and they are initially clustered around the origin, as expected. For DCL, instead, the initial prototypes are an affine transformation of the inputs parameterized by the weight matrix. This implies the initial prototypes are close to a random choice of the input data. The VCL prototypes first reach the cluster closest to the origin and, then, some of them spread towards the furthest manifolds. The DCL trajectories are much shorter because of the closeness of the initial prototypes to the input clusters. These considerations reveal the better suitability of DCL to traditional deep learning initializations.
Section 3 yields a theoretical explanation for the observed results.
3 Theoretical analysis

Stochastic approximation theory of the gradient flows
In the following, the gradient flows of the vanilla and the dual single-layer neural networks are formally examined when trained using the quantization error, one of the most common loss functions used for training unsupervised neural networks in clustering contexts. The following theory is based on the assumption λ = 0 in Eq. 10. Taking the edge error into account would only relax the analysis, and the results would remain valid. Under the stochastic approximation theory, the asymptotic properties of the gradient flows of the two networks can be estimated.

Base layer gradient flow
For each prototype j, represented in the base layer by the weight vector W_1^j ∈ R^d of the j-th neuron (the j-th row of the matrix W_1), the contribution of its Voronoi set V_j to the quantization error is given by:

L_j = (1/n_j) Σ_{x_i ∈ V_j} ||x_i − W_1^j||^2

where n_j is the cardinality of the j-th Voronoi set. The corresponding gradient flow of the base network is the following:

dW_1^j/dt = −α ∂L_j/∂W_1^j

with α being the learning rate. The averaging ODE holds:

dW_1^j/dt = μ_j − W_1^j

where μ_j = E[x_i] is the expectation in the limit of infinite samples of the j-th Voronoi set, and corresponds to the centroid of the Voronoi region. The unique critical point of the ODE is given by:

W_1,crit^j = μ_j

and the ODE can be rewritten as:

dw_1^j/dt = −w_1^j

under the transformation w_1^j = W_1^j − W_1,crit^j, in order to study the origin as the critical point. The associated matrix is −I_d, whose eigenvalues are all equal to −1 and whose eigenvectors are the vectors of the standard basis. Hence, the gradient flow is stable and decreases in the same exponential way, as e^{−t}, in all directions. The gradient flow of one epoch corresponds to an approximation of the second step of the generalized Lloyd iteration, as stated before.
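A quick numerical illustration of the averaged ODE (NumPy, illustrative): an Euler discretization of dW/dt = μ_j − W converges exponentially to the centroid of the Voronoi set, in the same way in all directions:

```python
import numpy as np

rng = np.random.default_rng(5)
d, n = 2, 200
Xj = rng.normal(loc=[3.0, -1.0], size=(n, d))  # samples of one Voronoi set
mu = Xj.mean(axis=0)                           # centroid mu_j

# Euler discretization of the averaged ODE  dW/dt = mu_j - W
W = np.zeros(d)                                # prototype starts at the origin
eta = 0.1
for _ in range(500):
    W = W + eta * (mu - W)
```

The error |W − μ_j| shrinks by the same factor (1 − η) at every step along every coordinate, the discrete counterpart of the uniform e^{−t} decay.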

Dual layer gradient flow
In the dual layer, the prototypes are estimated by the outputs, in such a way that they are represented by the rows of the Y_2 matrix. Indeed, the j-th prototype is now represented by the row vector Y_2^j, whose transpose is from now on called y_j for the sake of simplicity. It is computed by the linear transformation:

y_j = X Ω_j

where W_2^j ∈ R^n is the weight vector of the j-th neuron (the j-th row of the matrix W_2), here named Ω_j (as a column vector) for simplicity. Hence, the j-th prototype is computed as a linear combination of the columns of X, and its squared (Euclidean) 2-norm is:

||y_j||^2 = Ω_j^T X^T X Ω_j

For the j-th prototype, the contribution of its Voronoi set to the quantization error is given by:

L_j = (1/n_j) Σ_{x_i ∈ V_j} ||x_i − X Ω_j||^2

with the same notation as previously. The gradient flow of the dual network is computed as:

dΩ_j/dt = −α ∂L_j/∂Ω_j

with α being the learning rate. The gradient is given by:

∂L_j/∂Ω_j ∝ X^T X Ω_j − X^T μ̂_j

The averaging ODE is estimated as:

dΩ_j/dt = X^T μ_j − X^T X Ω_j    (a)

The unique critical point of the ODE is the solution of the normal equations:

X^T X Ω_j = X^T μ_j    (b)

The linear system can be solved only if X^T X ∈ R^{n×n} is full rank. This is true only if n ≤ d (the case n = d is trivial and, so, from now on the analysis deals with n < d) and all columns of X are linearly independent. In this case, the solution is given by:

Ω_j,crit = (X^T X)^{−1} X^T μ_j = X^+ μ_j

where X^+ is the pseudoinverse of X. The result corresponds to the least squares solution of the overdetermined linear system:

X Ω_j = μ_j

which is equivalent to:

y_j = μ_j

This last system shows that the dual layer asymptotically tends to output the centroids as prototypes. The ODE can be rewritten as:

dw_j/dt = −X^T X w_j

under the transformation w_j = Ω_j − Ω_j,crit, in order to study the origin as the critical point. The associated matrix is −X^T X. Consider the singular value decomposition (SVD) X = U Σ V^T, where U ∈ R^{d×d} and V ∈ R^{n×n} are orthogonal and Σ ∈ R^{d×n} is diagonal (its nonzero diagonal elements, named singular values and called σ_i, are indexed in decreasing order). The i-th column of V (associated with σ_i) is written as v_i and is named a right singular vector. Then:

X^T X = V Σ^T Σ V^T = Σ_i σ_i^2 v_i v_i^T

is the eigenvalue decomposition of the sample autocorrelation matrix of the inputs of the dual network. It follows that the algorithm is stable and the ODE solution is given by:

w_j(t) = Σ_i c_i e^{−σ_i^2 t} v_i

where the constants c_i depend on the initial conditions. The same dynamical law is valid for all the other weight neurons.

If n > d and all columns of X are linearly independent, it follows:

rank(X^T X) = rank(X) = d < n

and the system X Ω_j = μ_j is underdetermined. This set of equations has a nontrivial nullspace, and so the least squares solution is not unique. However, the least squares solution of minimum norm is unique. This corresponds to the minimization problem:

min ||Ω_j||^2  subject to  X Ω_j = μ_j

The unique solution is given by the normal equations of the second kind:

X X^T z_j = μ_j,  Ω_j = X^T z_j

that is, by considering that X X^T has an inverse:

Ω_j,crit = X^T (X X^T)^{−1} μ_j = X^+ μ_j

Multiplying on the left by X^T X yields:

X^T X Ω_j,crit = X^T μ_j

that is Eq. (b), the system whose solution is the unique critical point of the ODE (setting the derivative in Eq. (a) to zero). Summing up, both cases give the same solution. However, in the case n > d and rank(X) = d, the output neuron weight vectors have minimum norm and are orthogonal to the nullspace of X, which is spanned by v_{d+1}, v_{d+2}, ..., v_n. Indeed, X^T X has n − d zero eigenvalues, which correspond to centers. Therefore, the ODE solution is given by:

w_j(t) = Σ_{i=1}^{d} c_i e^{−σ_i^2 t} v_i + Σ_{i=d+1}^{n} c_i v_i

This theory proves the following theorem.

Theorem 3.1 (Dual flow and PCA). The dual network evolves in the directions of the principal axes of its autocorrelation matrix (see Eq. 34), with time constants given by the inverses of the associated data variances.
This statement asserts that the dual gradient flow moves faster in the more relevant directions, i.e. where the data vary more. Indeed, the trajectories start at the initial position of the prototypes (the constants in Eq. 41 are the associated coordinates in the standard framework rotated by V) and evolve along the right singular vectors, faster in the directions of more variance in the data. This implies a faster rate of convergence, because it is dictated by the data content, as already observed in the numerical experiments (see Fig. 4).
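This behavior can be reproduced with a small simulation (NumPy, illustrative): discretizing the averaged dual ODE and projecting the error onto the right singular vectors shows a faster decay along the direction of larger variance:

```python
import numpy as np

rng = np.random.default_rng(6)
d, n = 50, 2                       # high-dimensional data, n < d
X = rng.normal(size=(d, n))        # columns generically independent
U, s, Vt = np.linalg.svd(X, full_matrices=False)   # s[0] >= s[1]

mu = rng.normal(size=d)            # centroid of some Voronoi set
crit = np.linalg.pinv(X) @ mu      # critical point  X^+ mu

# Euler discretization of the averaged dual ODE:
#   dOmega/dt = X^T mu - X^T X Omega
Omega = np.zeros(n)
eta = 0.5 / s[0] ** 2              # step size ensuring stability
for _ in range(200):
    Omega = Omega + eta * (X.T @ mu - X.T @ X @ Omega)

# error components along the right singular vectors decay as exp(-sigma_i^2 t),
# so the component along v_1 (largest variance) vanishes first
proj = np.abs(Vt @ (Omega - crit))
```

After a few hundred steps the residual along v_1 is orders of magnitude smaller than the one along v_2, matching the σ_i^2-driven time constants of Theorem 3.1.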

Dynamics of the dual layers
For the basic layer it holds:

W_1^j(t) = μ_j + l e^{−t}

where l ∈ R^d is a vector of constants. Therefore, W_1^j tends asymptotically to μ_j, by moving in R^d. However, μ_j being a linear combination of the columns of X, it can be deduced that, after a transient period, the neuron weight vectors tend to the range (column space) of X, say R(X), i.e.:

W_1^j(t) ∈ R(X)  for t > t_0

where t_0 is a certain instant of time and r = rank(X) = min{d, n} under the assumption of samples independently drawn from the same distribution, which prevents the presence of collinearities in the data. It follows:

W_1^j(t) = μ_j + l' e^{−t},  l' ∈ R(X)

where l' ∈ R^d is another vector of constants. Then W_1^j can be considered as the output of a linear transformation represented by the matrix X, i.e. W_1^j = X p, p ∈ R^n being its preimage. Hence, (W_1^j)^T = p^T X^T, which shows the duality. Indeed, it represents a network whose input is X^T, and whose output (W_1^j)^T and parameter weight vector p^T are the interchange of the corresponding ones in the base network. Notice, however, that the weight vector in the dual network corresponds only through a linear transformation, that is, by means of the preimage. Under the second duality assumption XX^T = I_d, it holds:

Σ Σ^T = I_d

where 0_{r,s} denotes the zero matrix with r rows and s columns. Therefore, this assumption implies there are d singular values all equal to 1 or −1; the remaining diagonal entries of Σ, if any, are all null. For the dual layer, under the second duality assumption, in the case of singular values all equal to −1 or 0, it follows:

Ω_j(t) = Ω_j,crit + q e^{−t}

where q ∈ R^n. Therefore, Ω_j tends asymptotically to Ω_j,crit, by moving in R^n. Hence, it can be deduced that, after a transient period, the neuron weight vectors tend to the range (column space) of X^T, say R(X^T), i.e.:

Ω_j(t) ∈ R(X^T)  for t > t_0

where t_0 is a certain instant of time and r = rank(X) = min{d, n} under the same assumption of non-collinear data.
Summing up, under the two duality assumptions and except for the presence of centers, the base and dual gradient flows are given by:

W_1^j(t) = μ_j + U c e^{−t},  Ω_j(t) = Ω_j,crit + V q e^{−t}

where X = U Σ V^T is the SVD of X and c = Σ q for the arbitrariness of the constants. This result shows that the base flow directly estimates the prototype, while the dual flow estimates its preimage. This confirms the duality of the two layers from the dynamical point of view and proves the following theorem.

Theorem 3.2 (Dynamical duality). Under the two assumptions of Theorem 2.1, the two networks are dynamically equivalent. In particular, the base gradient flow evolves in R(X) and the dual gradient flow evolves in R(X^T).
More in general, the fact that the prototypes are directly computed in the base network implies a more rigid dynamics of its gradient flow. On the contrary, the presence of the singular values in the exponentials of the dual gradient flow originates from the fixed transformation (the matrix X) used for the prototype estimation (see Eq. 48). They are exploited for a better dynamics, because they are suited to the statistical characteristics of the training set, as discussed before. Both flows estimate the centroids of the Voronoi sets, like the centroid estimation step of the Lloyd algorithm, but the linear layers allow the use of gradient flows and do not require the a priori knowledge of the number of prototypes (see the discussion on pruning in Section 2.3). However, the dual flow is an iterative least squares solution, while the base flow does the same only implicitly. In the case d > n, rank(X) = rank(X^T) = n, and the base gradient flow stays in R^d, but tends to lie on the n-dimensional subspace R(X). Instead, the dual gradient flow is n-dimensional and always evolves in the n-dimensional subspace R(X^T).

Theorem 3.3 (Subspace correspondence). The matrix X maps R(X^T) onto R(X).

Proof. The two subspaces are the ranges (column spaces) of the two matrices X and X^T:

R(X) = {z : z = X u for a certain u},  R(X^T) = {y : y = X^T x for a certain x}    (49)

Then:

X R(X^T) = {u = X y : y = X^T x for a certain x} = R(X)    (50)

More in general, multiplying X by a vector yields a vector in R(X).
All vectors in R(X^T) are transformed by X into the corresponding quantities in R(X). In particular:

X Ω_j,crit = X X^+ μ_j = μ_j

This analysis proves the following theorem.

Theorem 3.4 (Fundamental on gradient flows, part I). In the case d > n, the base gradient flow represents the temporal law of a d-dimensional vector tending to an n-dimensional subspace containing the solution. Instead, the dual gradient flow always remains in an n-dimensional subspace containing the solution. Then, the least squares transformation X^+ yields a new approach, the dual one, which is not influenced by d, i.e. the dimensionality of the input data.
This assertion is the basis of the claim that the dual network is a novel and very promising technique for high-dimensional clustering. However, it must be considered that the underlying theory is only approximate and gives an average behavior. Fig. 8 shows a simulation comparing the performances of VCL, DCL, and a deep variant of the DCL model in tackling high-dimensional problems with an increasing number of features (see Appendix A.2 for simulation details). The simulations show how the dual methods are more capable of dealing with high-dimensional data, as their accuracy remains near 100% up to 2000-3000 features. Obviously, the deep version of DCL (deep-DCL) yields the best accuracy, because it exploits the nonlinear transformation of the additional layers.
In the case n ≥ d, instead, the two subspaces have dimension equal to d. Then, they coincide with the feature space, eliminating any difference between the two gradient flows. In reality, for the dual flow, there are n − d remaining modes with zero eigenvalue (centers), which are meaningless: they only add n − d constant vectors (the right singular vectors of X), which can be eliminated by adding a bias to each output neuron of the dual layer.

Theorem 3.5 (Fundamental on gradient flows, part II). In the case d ≤ n, both gradient flows lie in the same (feature) space, the only difference being the fact that the dual gradient flow temporal law is driven by the variances of the input data.
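The n − d zero-eigenvalue modes can be observed numerically on the Gram matrix X^T X, whose rank is at most d. A short illustrative NumPy sketch (dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 5, 20                        # n > d: more samples than features
X = rng.standard_normal((d, n))

# the Gram matrix X^T X is n x n but has rank d, so its spectrum
# contains exactly n - d (numerically) zero eigenvalues: the
# "meaningless" modes of the n-dimensional dual flow
G = X.T @ X
eigs = np.linalg.eigvalsh(G)
num_zero = int(np.sum(np.abs(eigs) < 1e-8))
print("zero modes:", num_zero)      # expected: n - d
```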

The Voronoi set estimation
Consider the matrix X^T Y^T ∈ R^{n×j}, which contains all the inner products between data and prototypes. From the architecture and notation of the dual layer, it follows Y = ΩX^T, which yields:

X^T Y^T = X^T X Ω^T = G Ω^T

where G = X^T X, the sample autocorrelation matrix of the data, is the Gram matrix. The Euclidean distance matrix edm(X, Y) ∈ R^{n×j}, which contains the squared distances between the columns of X and the rows of Y, i.e. between data and prototypes, is given by [40]:

edm(X, Y) = diag(G) 1_j^T − 2 X^T Y^T + 1_n diag(Y Y^T)^T

where diag(A) is a column vector containing the diagonal entries of A and 1_r is the r-dimensional column vector of all ones. It follows:

Y Y^T = Ω X^T X Ω^T = Ω G Ω^T

and, considering that X^T Y^T = G Ω^T:

edm(X, Y) = diag(G) 1_j^T − 2 G Ω^T + 1_n diag(Ω G Ω^T)^T

as a quadratic function of the dual weights. This function allows the direct computation of the edm from the estimated weights, which is necessary in order to evaluate the Voronoi sets of the prototypes for the quantization loss.
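The quadratic form for the edm can be verified numerically against directly computed squared distances. An illustrative NumPy sketch (`Omega` stands for the dual weights Ω; dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
d, n, j = 3, 8, 4
X = rng.standard_normal((d, n))         # data, samples as columns
Omega = rng.standard_normal((j, n))     # dual-layer weights

G = X.T @ X                             # Gram matrix of the data
Y = Omega @ X.T                         # prototypes, one per row of Y

# edm as a quadratic function of the dual weights:
# diag(G) 1_j^T - 2 G Omega^T + 1_n diag(Omega G Omega^T)^T
edm = (np.diag(G)[:, None]
       - 2 * G @ Omega.T
       + np.diag(Omega @ G @ Omega.T)[None, :])

# cross-check against squared distances computed from X and Y directly
edm_direct = ((X.T[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
print("max deviation:", np.abs(edm - edm_direct).max())
```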

Conclusion
This work opens a novel field in neural network research, where unsupervised gradient-based learning joins competitive learning. Two novel layers (VCL and DCL) suitable for unsupervised deep learning applications are introduced. While VCL is just an adaptation of a standard competitive layer for deep neural architectures, DCL represents a completely novel approach. The relationship between the two layers has been extensively analyzed and their equivalence in terms of architecture has been proven. Nonetheless, the advantages of the dual approach justify its employment. Unlike all other clustering techniques, the parameters of DCL evolve in an n-dimensional submanifold which does not depend on the number of features d, as the layer is trained on the transposed input matrix. As a result, the dual approach is natively suitable for tackling high-dimensional problems. The flexibility and power of the approach pave the way towards more advanced and challenging learning tasks; an upcoming paper will compare DCL against state-of-the-art clustering algorithms on renowned benchmarks. Further extensions of this approach may include topological nonstationary clustering [41], hierarchical clustering [12,14], core set discovery [42,43], incremental and attention-based approaches, or the integration within more complex architectures such as VAEs and GANs; these will be studied in future work.

A Appendix
A.1 Topological and dynamical simulations

In order to validate the theory and to analyze the differences between the two learning approaches, VCL and DCL are compared on three synthetic datasets containing clusters of different shapes and sizes. Table 1 summarizes the main characteristics of each experiment. The first dataset is composed of samples drawn from a two-dimensional Archimedean spiral (Spiral). The second consists of samples drawn from two half semicircles (Moons). The last is composed of two concentric circles (Circles). Each dataset is normalized by removing the mean and scaling to unit variance before fitting the neural models. For all the experiments, the number of output units k of the dual network is set to 30. A grid search is conducted for tuning the hyperparameters. The learning rate is set to 0.008 for VCL and to 0.0008 for DCL. Besides, for both networks, the number of epochs is set to 400 and the Lagrangian multiplier to λ = 0.01. For each dataset, both networks are trained 10 times using different initialization seeds, in order to statistically compare their performance.

A.2 High-dimensional simulations

The performance of the vanilla competitive layer and its dual network in tackling high-dimensional problems is assessed through numerical experiments. Indeed, standard distance-based algorithms generally suffer from the well-known curse of dimensionality when dealing with high-dimensional data. The MADELON algorithm proposed in [44] is used to generate high-dimensional datasets with an increasing number of features and a fixed number of samples. This algorithm creates clusters of points normally distributed about the vertices of an n-dimensional hypercube. An equal number of clusters and data is assigned to two different classes. Both the number of samples (n_s) and the dimensionality of the space (n_f) in which they are placed can be set programmatically. More precisely, the number of samples is set to n_s = 100, while the number of features ranges in n_f ∈ {1000, 2000, 3000, 5000, 10000}. The number of required centroids is fixed to one tenth of the number of input samples. Three different networks are compared: VCL, DCL, and a deep variant of DCL with two hidden layers of 10 neurons each (deep-DCL). Results are averaged over 10 repetitions on each dataset. Accuracy for each cluster is computed by considering as true positives the samples belonging to the most represented class and as false positives the remaining data.
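This setup can be sketched as follows. scikit-learn's `make_classification`, which is adapted from the MADELON generator, produces the data; k-means is used here merely as a simple stand-in for the competitive layers, and the generator parameters beyond n_s and n_f are illustrative assumptions, so the numbers will differ from those in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans

# MADELON-like generator: clusters of normally distributed points
# placed about the vertices of a hypercube
n_s, n_f = 100, 1000
X, y = make_classification(n_samples=n_s, n_features=n_f,
                           n_informative=5, n_redundant=0,
                           n_clusters_per_class=1, class_sep=2.0,
                           random_state=0)

# one tenth as many centroids as samples, as in the experiments
k = n_s // 10
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)

# per-cluster accuracy: samples of the majority class in each cluster
# count as true positives, the remaining data as false positives
correct = sum(np.bincount(y[labels == c]).max()
              for c in range(k) if np.any(labels == c))
accuracy = correct / n_s
print(f"purity-style accuracy: {accuracy:.2f}")
```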

Figure 1 :
Figure 1: Representation of a deep architecture where a dual competitive network is used to estimate cluster centroids. The first network performs feature extraction and maps training observations into a d_1-dimensional feature subspace. This output is transposed and used to feed the dual network, which estimates the prototype positions.

Figure 2 :
Figure 2: Base and dual single-layer neural networks.

Figure 3: Half dualities.

Theorem 2.4 (Half duality II). Given two dual networks, if the samples of the input matrix X are uncorrelated with unit variance and if W_2 = Y_1 (second dual constraint), then W_1 = Y_2 (first dual constraint).

Figure 4 :
Figure 4: Comparison of three key metrics between the vanilla single-layer network and its dual over 10 runs. The metrics are: the quantization error (top row), the norm of the matrix of the edges (middle row), and the number of valid prototypes (bottom row). The metrics are computed on three different datasets: Spiral (left column), Moons (middle column), and Circles (right column). Error bands represent the standard error of the mean.

Figure 5: Figure 6:
Fig. 6 shows the trajectories of the prototypes during training for both networks. The parameters of both networks have been initialized by means of the Glorot initializer [39], which draws small values from a normal distribution centered on zero. For VCL, these parameters are the prototypes themselves, and they are initially clustered around the origin, as expected. For DCL, instead, the initial prototypes are an affine transformation of the inputs, parameterized by the weight matrix. This implies that the initial prototypes are close to a random choice of the input data. The VCL trajectories tend to
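The difference between the two initializations can be illustrated with a small NumPy sketch (the Glorot-style standard deviations and the dimensions here are illustrative assumptions, not the paper's exact settings): VCL prototypes start near the origin, while DCL prototypes, being linear combinations of the inputs, start spread across the support of the data.

```python
import numpy as np

rng = np.random.default_rng(3)
d, n, j = 2, 200, 30
X = rng.standard_normal((d, n))            # normalized data, columns

# Glorot-style initialization: small zero-mean normal values
Omega = rng.normal(0.0, np.sqrt(2.0 / (n + j)), size=(j, n))

# VCL: the weights ARE the prototypes, so they cluster near the origin
vcl_init = rng.normal(0.0, np.sqrt(2.0 / (d + j)), size=(j, d))

# DCL: prototypes are linear combinations of the inputs, so they land
# near the scale of the data rather than at the origin
dcl_init = Omega @ X.T

print("mean VCL norm:", np.linalg.norm(vcl_init, axis=1).mean())
print("mean DCL norm:", np.linalg.norm(dcl_init, axis=1).mean())
```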

Figure 8 :
Figure 8: High-dimensional simulations: accuracy as a function of the dimensionality of the problem. Error bands correspond to the standard error of the mean.

Table 1 :
Synthetic datasets used for the simulations (s.v. stands for singular value).

Table 2 :
Hyperparameters for the high-dimensional simulations using MADELON (s.v. stands for singular value). All the code for the experiments has been implemented in Python 3, relying upon open-source libraries [45,46]. All the experiments have been run on the same machine: Intel® Core™ i7-8750H 6-core processor at 2.20 GHz, equipped with 8 GiB of RAM.