
Dimension Reduction for Mixtures of Exponential Families

  • Shotaro Akaho
Part of the Lecture Notes in Computer Science book series (LNCS, volume 5163)

Abstract

Dimension reduction for a set of distribution parameters is important in many data-mining applications. Exponential family PCA has been proposed for this purpose, but it cannot be applied directly to mixture models, which do not belong to an exponential family. This paper proposes a method for applying exponential family PCA to mixture models. The key idea is to embed the mixtures into the space of an exponential family. The difficulty is that the embedding is not unique, and the dimensionality of the parameter space changes when the numbers of mixture components differ. The proposed method finds a sub-optimal solution via a linear programming formulation.
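For intuition only, since the preview does not reproduce the paper's algorithm: the sketch below illustrates the flavor of the embedding idea in the easiest possible setting. A finite mixture of histograms on a discrete domain is itself a point in the multinomial exponential family, so it can be mapped to natural-parameter coordinates and reduced there. The helper `mixture_to_natural`, the toy data, and the use of plain linear PCA as a stand-in for exponential family PCA are all assumptions of this sketch, not the paper's method.

```python
# Minimal illustrative sketch (not the paper's algorithm): embed finite
# mixtures over a discrete domain into the multinomial exponential family,
# then reduce dimension with ordinary PCA in natural-parameter space.
import numpy as np

def mixture_to_natural(weights, components):
    # Collapse the mixture into a single distribution on the K-point domain,
    # then map it to multinomial natural parameters theta_i = log(p_i / p_K).
    p = weights @ components                 # (K,) mixture probabilities
    p = np.clip(p, 1e-12, None)              # guard against log(0)
    p = p / p.sum()
    return np.log(p[:-1]) - np.log(p[-1])    # natural parameters, dim K-1

rng = np.random.default_rng(0)
comps = rng.dirichlet(np.ones(5), size=3)    # 3 fixed component histograms
thetas = np.array([mixture_to_natural(rng.dirichlet(np.ones(3)), comps)
                   for _ in range(20)])      # 20 mixtures -> natural coords

# Plain linear PCA in natural-parameter space, a crude stand-in for e-PCA.
centered = thetas - thetas.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
reduced = centered @ vt[:2].T                # 2-D reduced coordinates
print(reduced.shape)                         # (20, 2)
```

The paper's setting is harder: for general mixture models the embedding into an exponential family is not unique, and the parameter dimensionality changes with the number of components, which is why the abstract resorts to a linear programming formulation rather than the direct map used above.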

Keywords

Mixture Model, Dimension Reduction, Exponential Family, Latent Variable Model, Linear Programming Formulation



Copyright information

© Springer-Verlag Berlin Heidelberg 2008

Authors and Affiliations

  • Shotaro Akaho
    1. Neuroscience Research Institute, AIST, Tsukuba, Japan
