Abstract
Exponential principal component analysis (e-PCA) provides a framework for dealing appropriately with data types, such as binary and integer data, for which the Gaussian assumption on the data distribution is inappropriate. In this paper, we develop a technique for simultaneous dimensionality reduction and clustering based on a latent variable model for e-PCA. Assuming a discrete distribution on the latent variable leads to mixture models with constraints on their parameters. We derive a learning algorithm for these mixture models based on the variational Bayes method. Although implementing the algorithm requires an intractable integration, an approximation based on Laplace's method allows us to carry out clustering on an arbitrary subspace. Numerical experiments on handwritten digit data demonstrate the method's effectiveness for extracting the structure of data as a visualization technique, as well as its high generalization ability as a density estimation model.
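As a rough illustration of the ingredients described above, the following is a minimal NumPy sketch: a Bernoulli (logistic) e-PCA model for binary data fitted by gradient ascent, followed by clustering of the latent coordinates. The toy data, dimensions, learning rate, and the two-stage fit-then-cluster procedure are assumptions made for this sketch; the paper's method instead infers the clusters and the subspace jointly with a variational Bayes algorithm and a Laplace approximation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary data matrix: N samples, D dimensions.
N, D, K = 200, 20, 2                      # K = latent dimension (assumed)
X = (rng.random((N, D)) < 0.3).astype(float)

def sigmoid(t):
    # Numerically stable logistic function.
    return 0.5 * (1.0 + np.tanh(0.5 * t))

# Bernoulli e-PCA (logistic PCA): model the natural parameters as
# Theta = A @ B + b and maximize the Bernoulli log-likelihood by
# gradient ascent; X - sigmoid(Theta) is the gradient w.r.t. Theta.
A = 0.01 * rng.standard_normal((N, K))    # latent coordinates
B = 0.01 * rng.standard_normal((K, D))    # basis of the subspace
b = np.zeros(D)                           # bias (mean) term
lr = 0.05
for _ in range(500):
    P = sigmoid(A @ B + b)                # model means
    G = X - P                             # gradient w.r.t. Theta
    A += lr * (G @ B.T) / D
    B += lr * (A.T @ G) / N
    b += lr * G.mean(axis=0)

# Cluster the latent coordinates with a quick k-means (the paper instead
# infers cluster assignments and the subspace jointly by variational Bayes).
M = 3                                     # number of clusters (assumed)
C = A[rng.choice(N, size=M, replace=False)]
for _ in range(20):
    labels = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
    C = np.array([A[labels == m].mean(0) if np.any(labels == m) else C[m]
                  for m in range(M)])

print("cluster sizes:", np.bincount(labels, minlength=M))
```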
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Watanabe, K., Akaho, S., Omachi, S., Okada, M. (2010). Simultaneous Clustering and Dimensionality Reduction Using Variational Bayesian Mixture Model. In: Locarek-Junge, H., Weihs, C. (eds) Classification as a Tool for Research. Studies in Classification, Data Analysis, and Knowledge Organization. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-10745-0_8
DOI: https://doi.org/10.1007/978-3-642-10745-0_8
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-10744-3
Online ISBN: 978-3-642-10745-0
eBook Packages: Mathematics and Statistics (R0)