Dimension Reduction Techniques for Clustering
High-dimensional datasets are frequently encountered in data mining and statistical learning. Dimension reduction eliminates noisy data dimensions and thus improves accuracy in classification and clustering, in addition to reducing computational cost. Here the focus is on unsupervised dimension reduction. The most widely used technique is principal component analysis, which is closely related to K-means clustering. Another popular method is Laplacian embedding, which is closely related to spectral clustering.
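A minimal NumPy sketch of both pairings, assuming toy data and helper names that are illustrative rather than from the original: PCA via SVD feeding Lloyd's K-means, and an unnormalized graph-Laplacian embedding feeding the same K-means step, which is the core of spectral clustering.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative): two well-separated Gaussian blobs in 10 dimensions.
X = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 10)),
    rng.normal(loc=5.0, scale=0.5, size=(50, 10)),
])


def kmeans(Z, n_clusters, n_iter=50, seed=0):
    """Lloyd's algorithm: alternate nearest-center assignment and mean update."""
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(len(Z), n_clusters, replace=False)]
    for _ in range(n_iter):
        dists = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for c in range(n_clusters):
            if (labels == c).any():          # keep old center if a cluster empties
                centers[c] = Z[labels == c].mean(axis=0)
    return labels


# --- PCA: project centered data onto the top-2 right singular vectors ---
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z_pca = Xc @ Vt[:2].T
labels_pca = kmeans(Z_pca, n_clusters=2)

# --- Laplacian embedding: eigenvectors of L = D - W for a Gaussian affinity ---
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
W = np.exp(-sq / 2.0)                        # Gaussian affinity matrix (sigma = 1)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W               # unnormalized graph Laplacian
_, vecs = np.linalg.eigh(L)                  # eigenvalues in ascending order
Z_lap = vecs[:, :2]                          # two smallest-eigenvalue eigenvectors
labels_spec = kmeans(Z_lap, n_clusters=2)    # K-means here = spectral clustering
```

On well-separated data both embeddings recover the same two-cluster structure; the PCA–K-means and Laplacian–spectral-clustering connections are treated formally in the references below.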
Principal component analysis (PCA) was introduced by Pearson in 1901 and formalized by Hotelling in 1933. PCA is the foundation of modern dimension reduction, and a large number of linear dimension reduction techniques were developed during the 1950s–1970s.
- 2. Belkin M, Niyogi P. Laplacian eigenmaps and spectral techniques for embedding and clustering. In: Advances in Neural Information Processing Systems 14, 2001.
- 4. Ding C, He X. K-means clustering and principal component analysis. In: Proc. 21st Int. Conf. on Machine Learning, 2004.
- 5. Ding C, He X, Zha H, Simon H. Unsupervised learning: self-aggregation in scaled principal component space. In: Principles of Data Mining and Knowledge Discovery, 6th European Conf., 2002, pp. 112–24.
- 9. Ng AY, Jordan MI, Weiss Y. On spectral clustering: analysis and an algorithm. In: Advances in Neural Information Processing Systems 14, 2001.