Asymmetric Gaussian and Its Application to Pattern Recognition

  • Tsuyoshi Kato
  • Shinichiro Omachi
  • Hirotomo Aso
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 2396)

Abstract

In this paper, we propose a new probability model, the 'asymmetric Gaussian (AG),' which can capture spatially asymmetric distributions. We also extend it to a mixture of AGs. The model parameters can be estimated with the Expectation-Conditional Maximization (ECM) algorithm. We apply AGs to a pattern classification problem and show that they outperform Gaussian models.
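The abstract does not spell the model out, but as a rough, hedged illustration of the idea: in one dimension an asymmetric ("two-piece") Gaussian uses different standard deviations on either side of the mode, with a shared normalizer keeping the density continuous and integrating to one. The Python/NumPy sketch below shows only that simplest case; the paper's actual model is multivariate, with per-axis asymmetry defined through an orthonormal matrix of principal directions, and is fit by ECM, neither of which is shown here. All names in the sketch are ours, not the authors'.

    import numpy as np

    def asymmetric_gaussian_pdf(x, mu, sigma_minus, sigma_plus):
        # Two-piece asymmetric Gaussian (illustrative, not the paper's
        # multivariate AG): sigma_minus applies below the mode mu,
        # sigma_plus above it.  The shared constant
        # sqrt(2/pi) / (sigma_minus + sigma_plus) makes the two halves
        # join continuously at mu and integrate to one overall.
        x = np.asarray(x, dtype=float)
        sigma = np.where(x < mu, sigma_minus, sigma_plus)
        norm = np.sqrt(2.0 / np.pi) / (sigma_minus + sigma_plus)
        return norm * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

    # Example: a right-skewed density (heavier tail above the mode).
    xs = np.linspace(-4.0, 8.0, 9)
    print(asymmetric_gaussian_pdf(xs, mu=0.0, sigma_minus=1.0, sigma_plus=2.0))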

Keywords

Mixture Model · Character Recognition · Latent Variable Model · Orthonormal Matrix · Handwritten Character Recognition


Copyright information

© Springer-Verlag Berlin Heidelberg 2002

Authors and Affiliations

  • Tsuyoshi Kato (1)
  • Shinichiro Omachi (1)
  • Hirotomo Aso (1)

  1. Graduate School of Engineering, Tohoku University, Sendai-shi, Japan
