Fast Algorithm and Efficient Implementation of GMM-Based Pattern Classifiers

Abstract

This paper proposes a fast decision algorithm for pattern classification based on Gaussian mixture models (GMMs). Statistical pattern classification often encounters cases in which the comparison between class probabilities is obvious, so that exact evaluation involves redundant computation. When a GMM is adopted as the probability model, the exponential function must be evaluated for every decision. This work first reduces the exponential computations to simple, coarse interval calculations: the exponential function is bounded by scaling and multiplication with powers of two, so that most decisions can be made efficiently. For the remaining cases, a refinement process is also proposed. To verify the significance of the proposed method, experimental results on a TI DM6437 EVM board and a TED TB-3S-3400DSP-IMG board are shown for a color extraction application. The experiments confirm that most classifications are completed without any refinement and that the refinement process resolves the residual decisions.
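As a rough illustration of the coarse decision idea summarized above (a sketch of the general principle only, not the authors' exact algorithm), the C fragment below bounds exp(x) between neighboring powers of two via k = floor(x·log2 e) and settles a two-class comparison whenever the resulting intervals do not overlap; the function names and the expf() fallback are illustrative choices.

```c
#include <math.h>
#include <stdio.h>

/* Coarse power-of-two bounds on exp(x):
 * exp(x) = 2^(x*log2(e)), so with k = floor(x*log2(e)) we get
 * 2^k <= exp(x) < 2^(k+1).  Comparing two scores by their integer
 * exponents k settles the decision whenever the bounds do not overlap. */

#define LOG2E 1.4426950408889634f

static int exp2_floor(float x)          /* k with 2^k <= exp(x) < 2^(k+1) */
{
    return (int)floorf(x * LOG2E);
}

/* 1: exp(a) > exp(b) is certain, 0: exp(b) > exp(a) is certain,
 * -1: the bounds overlap and refinement is needed. */
static int coarse_compare(float a, float b)
{
    int ka = exp2_floor(a), kb = exp2_floor(b);
    if (ka > kb) return 1;              /* 2^ka >= 2^(kb+1) > exp(b) */
    if (kb > ka) return 0;
    return -1;
}

int main(void)
{
    float a = -3.2f, b = -7.9f;         /* e.g. two log-domain class scores */
    int d = coarse_compare(a, b);
    if (d < 0)                          /* refinement: here simply an exact evaluation */
        d = expf(a) > expf(b);
    printf("exp(%.1f) %s exp(%.1f)\n", a, d ? ">" : "<=", b);
    return 0;
}
```

In this sketch the coarse test costs one multiplication and a floor per score; an exact evaluation is reached only when the power-of-two bounds overlap, mirroring the coarse-then-refine structure described in the abstract.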

Notes

  1. The EM algorithm was used [7, 11]; an illustrative sketch of EM for a GMM is given below.
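The following is a minimal sketch of EM for a one-dimensional, two-component GMM; all numerical values are arbitrary demonstration choices and are unrelated to the paper's experiments.

```c
#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Illustrative EM fit of a 1-D, two-component GMM (cf. [7, 11]). */

#define N 8
#define K 2
#define ITERS 50

int main(void)
{
    const double x[N] = {0.1, 0.3, 0.2, 0.4, 5.0, 5.2, 4.9, 5.1};
    double w[K] = {0.5, 0.5}, mu[K] = {0.0, 1.0}, var[K] = {1.0, 1.0};
    double r[N][K];                                      /* responsibilities */

    for (int it = 0; it < ITERS; ++it) {
        /* E-step: posterior responsibility of each component for each sample */
        for (int n = 0; n < N; ++n) {
            double s = 0.0;
            for (int k = 0; k < K; ++k) {
                double d = x[n] - mu[k];
                r[n][k] = w[k] * exp(-0.5 * d * d / var[k])
                                / sqrt(2.0 * M_PI * var[k]);
                s += r[n][k];
            }
            for (int k = 0; k < K; ++k) r[n][k] /= s;
        }
        /* M-step: re-estimate mixture weights, means, and variances */
        for (int k = 0; k < K; ++k) {
            double nk = 0.0, m = 0.0, v = 0.0;
            for (int n = 0; n < N; ++n) { nk += r[n][k]; m += r[n][k] * x[n]; }
            m /= nk;
            for (int n = 0; n < N; ++n) { double d = x[n] - m; v += r[n][k] * d * d; }
            w[k] = nk / N;  mu[k] = m;  var[k] = v / nk + 1e-6;  /* variance floor */
        }
    }
    for (int k = 0; k < K; ++k)
        printf("component %d: w=%.3f mu=%.3f var=%.3f\n", k, w[k], mu[k], var[k]);
    return 0;
}
```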

References

  1. Jain, A. K., Duin, R. P. W., & Mao, J. (2000). Statistical pattern recognition: A review. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(1), 4–37.

  2. McKenna, S. J., Gong, S., & Raja, Y. (1998). Modeling facial colour and identity with Gaussian mixtures. Pattern Recognition, 31(12), 1883–1892.

  3. Phung, S. L., Bouzerdoum, A., & Chai, D. (2005). Skin segmentation using color pixel classification: Analysis and comparison. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(1), 148–154.

  4. Cui, X., & Gong, Y. (2007). A study of variable-parameter Gaussian mixture hidden Markov modeling for noisy speech recognition. IEEE Transactions on Audio, Speech, and Language Processing, 15(4), 1366–1376.

  5. Ueda, N., & Ghahramani, Z. (2002). Bayesian model search for mixture models based on optimizing variational bounds. Neural Networks, 15, 1223–1241.

  6. Nasios, N., & Bors, A. G. (2006). Variational learning for Gaussian mixture models. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 36(4), 849–862.

  7. Bishop, C. M. (2006). Pattern recognition and machine learning. New York: Springer.

  8. Shi, M., Bermak, A., Chandrasekaran, S., & Amira, A. (2006). An efficient FPGA implementation of Gaussian mixture models-based classifier using distributed arithmetic. In Proceedings of the IEEE International Conference on Electronics, Circuits and Systems (pp. 1276–1279).

  9. Shi, M., & Bermak, A. (2006). An efficient digital VLSI implementation of Gaussian mixture models-based classifier. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 14(9), 962–974.

  10. Meyer-Baese, U. (2007). Digital signal processing with field programmable gate arrays (3rd edn.). New York: Springer.

  11. Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. (2007). Numerical recipes: The art of scientific computing (3rd edn.). Cambridge: Cambridge University Press.

Acknowledgement

The authors would like to acknowledge the support from the Texas Instruments University program.

Author information

Corresponding author

Correspondence to Hidenori Watanabe.

Appendix: Derivation of Eq. 15

The following relation is used for deriving Eq. 15, where $\beta=\sum_{i=1}^{L}\beta^{[i]}2^{-i}$ denotes the $L$-bit binary expansion of $\beta$ with bits $\beta^{[i]}\in\{0,1\}$ and $\bar{\beta}^{[i]}=1-\beta^{[i]}$:

$$\begin{array}{lll} 1-\beta &=& \left(\sum\limits_{i=1}^{L}2^{-i}+2^{-L}\right)- {\beta}\\ &=& \left(\sum\limits_{i=1}^{L}2^{-i}- {\beta}\right)+2^{-L} \\ &=& \sum\limits_{i=1}^{L}\left(1-\beta^{[i]}\right)2^{-i}+2^{-L} \\ &=& \sum\limits_{i=1}^{L}\bar{\beta}^{[i]}2^{-i}+2^{-L} \end{array} $$
(26)
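As a quick numerical check of Eq. 26, the following C snippet evaluates both sides of the identity for an L-bit binary fraction; the bit width L and the bit pattern are arbitrary illustrative choices.

```c
#include <stdio.h>

/* Numerical check of Eq. 26: for an L-bit binary fraction
 * beta = sum_{i=1}^{L} beta_i 2^{-i}, the complement satisfies
 * 1 - beta = sum_{i=1}^{L} (1 - beta_i) 2^{-i} + 2^{-L}. */

#define L 4

int main(void)
{
    const int bits[L] = {1, 0, 1, 1};      /* beta = 0.1011b = 0.6875 */
    double beta = 0.0, rhs = 0.0, p = 1.0;

    for (int i = 0; i < L; ++i) {
        p *= 0.5;                          /* p = 2^{-(i+1)} */
        beta += bits[i] * p;
        rhs  += (1 - bits[i]) * p;         /* complemented bits */
    }
    rhs += p;                              /* trailing 2^{-L} term */

    printf("1 - beta = %.6f, rhs = %.6f\n", 1.0 - beta, rhs);
    return 0;
}
```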

Cite this article

Watanabe, H., Muramatsu, S. Fast Algorithm and Efficient Implementation of GMM-Based Pattern Classifiers. J Sign Process Syst 63, 107–116 (2011). https://doi.org/10.1007/s11265-009-0439-z

Keywords

  • Pattern classification
  • Gaussian mixture model
  • Bayesian decision
  • Efficient implementation
  • Color extraction