Self-organization of Probabilistic PCA Models
We present a new neural model that extends Kohonen's self-organizing map (SOM) by performing Probabilistic Principal Components Analysis (PPCA) at each neuron. Several self-organizing maps have been proposed in the literature to capture local principal subspaces, but our approach maintains a probabilistic model at each neuron while retaining linear complexity in the dimensionality of the input space. This makes it possible to process very high-dimensional data and to obtain reliable estimates of the local probability densities, which are based on the PPCA framework. Experimental results show the map formation capabilities of the proposal on high-dimensional data.
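To illustrate the kind of per-neuron model the abstract refers to, the sketch below fits a PPCA model in closed form (the standard Tipping and Bishop maximum-likelihood solution) and evaluates its Gaussian log-density using the Woodbury identity, so that evaluation costs grow only linearly with the input dimensionality d. The function names and the data used are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fit_ppca(X, q):
    """Closed-form PPCA fit: returns mean, loading matrix W (d x q),
    and isotropic noise variance sigma2 (Tipping & Bishop solution)."""
    n, d = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu
    # Eigendecomposition of the sample covariance, eigenvalues descending
    evals, evecs = np.linalg.eigh(Xc.T @ Xc / n)
    evals, evecs = evals[::-1], evecs[:, ::-1]
    sigma2 = evals[q:].mean()  # average discarded variance
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return mu, W, sigma2

def ppca_log_density(x, mu, W, sigma2):
    """Log-density of N(mu, C) with C = W W^T + sigma2 * I, computed in
    O(d q^2) via the Woodbury identity instead of inverting the d x d C."""
    d, q = W.shape
    diff = x - mu
    M = W.T @ W + sigma2 * np.eye(q)  # small q x q matrix
    Minv = np.linalg.inv(M)
    # Woodbury: C^{-1} = (I - W M^{-1} W^T) / sigma2
    mahal = (diff @ diff - diff @ W @ Minv @ (W.T @ diff)) / sigma2
    # Matrix determinant lemma: log|C| = (d - q) log sigma2 + log|M|
    logdet = (d - q) * np.log(sigma2) + np.linalg.slogdet(M)[1]
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet + mahal)
```

In a SOM-style scheme consistent with the competitive learning mentioned in the keywords, each neuron would hold one such (mu, W, sigma2) triple, and the winning neuron for an input could be chosen as the one with the highest local log-density; the paper's exact update rule is not reproduced here.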
Keywords: Probabilistic Principal Components Analysis (PPCA), competitive learning, unsupervised learning, dimensionality reduction, face recognition, handwritten digit recognition