Journal of Signal Processing Systems, Volume 63, Issue 1, pp 107–116

Fast Algorithm and Efficient Implementation of GMM-Based Pattern Classifiers

Abstract

This paper proposes a fast decision algorithm for pattern classifiers based on Gaussian mixture models (GMMs). In statistical pattern classification, the comparison between class probabilities is often obvious, so evaluating the probabilities exactly involves redundant computation. When a GMM is adopted as the probability model, the exponential function must be evaluated. This work first replaces the exponential computations with simple, coarse interval calculations: the exponential function is bounded by scaling and multiplication with powers of two, so that most decisions can be made efficiently. For the remaining ambiguous cases, a refinement process is also proposed. To verify the effectiveness of the approach, experimental results on a TI DM6437 EVM board and a TED TB-3S-3400DSP-IMG board are presented for a color extraction application. The experiments confirm that classification is almost always completed without any refinement, and that the refinement process resolves the residual decisions.
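The interval idea described above can be sketched as follows. This is not the paper's implementation (which targets fixed-point DSP/FPGA hardware); it is a minimal floating-point illustration, assuming the classifier compares per-class log-domain scores s_i and must pick the largest e^{s_i}. Since e^x = 2^{x/ln 2}, each exponential is bracketed between two powers of two, and a class wins outright when its lower bound exceeds every competitor's upper bound; only ties fall through to the exact (refined) computation. The function names `exp_interval` and `classify` are hypothetical.

```python
import math

LOG2 = math.log(2.0)

def exp_interval(x):
    """Bracket e^x by powers of two: 2^floor(x/ln 2) <= e^x <= 2^ceil(x/ln 2)."""
    t = x / LOG2
    return 2.0 ** math.floor(t), 2.0 ** math.ceil(t)

def classify(log_scores):
    """Return (winning index, refined?) for the largest e^{s_i}.

    First tries the coarse power-of-two intervals; falls back to exact
    exponentials (the "refinement process") only when intervals overlap.
    """
    intervals = [exp_interval(s) for s in log_scores]
    # A class is decided without refinement if its lower bound beats
    # every other class's upper bound.
    for i, (lo, _) in enumerate(intervals):
        if all(lo > hi for j, (_, hi) in enumerate(intervals) if j != i):
            return i, False
    # Refinement: evaluate the exponentials exactly for the ambiguous case.
    exact = [math.exp(s) for s in log_scores]
    return max(range(len(exact)), key=exact.__getitem__), True
```

For well-separated scores the intervals alone settle the decision, which matches the paper's observation that classification is almost always completed without refinement; only near-ties pay for a true exponential evaluation.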

Keywords

Pattern classification · Gaussian mixture model · Bayesian decision · Efficient implementation · Color extraction


Copyright information

© Springer Science+Business Media, LLC 2009

Authors and Affiliations

  1. Department of Electrical and Electronic Engineering, Faculty of Engineering, Niigata University, Niigata, Japan
