BYY Harmony Learning on Finite Mixture: Adaptive Gradient Implementation and A Floating RPCL Mechanism

Abstract

In tackling the learning problem on a set of finite samples, Bayesian Ying-Yang (BYY) harmony learning provides a new learning mechanism in which model selection is carried out either automatically during parameter learning or with the help of a new class of model selection criteria. In this paper, parameter learning with automated model selection is studied for the finite mixture model via an adaptive gradient learning algorithm for BYY harmony learning on a specific bidirectional architecture (BI-architecture). Theoretical analysis shows that the adaptive gradient learning implements a mechanism of floating rival penalized competitive learning (RPCL) among the components of the mixture. Simulation results further demonstrate that the adaptive gradient algorithm performs well on sample data sets drawn from Gaussian mixtures with a certain degree of overlap. Moreover, the adaptive gradient algorithm is applied to classification of the Iris data and to unsupervised color image segmentation.
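To make the idea concrete, the sketch below is a minimal, hypothetical NumPy illustration (not the update rule derived in the paper) of adaptive, per-sample gradient ascent on a harmony-style objective H = (1/N) Σ_t Σ_j p(j|x_t) ln[α_j q(x_t|θ_j)] for a Gaussian mixture, adapting only the means and the mixing proportions with fixed covariances. The function name `harmony_step`, the softmax weights `log_w`, the learning rate `eta`, and the toy data are all illustrative assumptions. The per-component coefficient h_j turns negative for rival components, which gives the RPCL-like de-learning behaviour described in the abstract.

```python
# Minimal, hypothetical sketch: adaptive gradient ascent on a harmony-style
# objective for a Gaussian mixture,
#   H = (1/N) * sum_t sum_j p(j|x_t) * ln( alpha_j * q(x_t | m_j, S_j) ),
# with p(j|x_t) the posterior (softmax of the joint log-densities).
# Only the means and the mixing proportions (via softmax weights `log_w`)
# are adapted; covariances stay fixed.  Not the paper's exact rule.
import numpy as np


def harmony_step(x, means, covs, log_w, eta=0.02):
    """One per-sample gradient step on the harmony objective."""
    k, d = means.shape
    alpha = np.exp(log_w - log_w.max())
    alpha /= alpha.sum()
    U = np.empty(k)                      # U_j = ln(alpha_j q(x | m_j, S_j))
    grad_m = np.empty((k, d))            # dU_j / dm_j
    for j in range(k):
        diff = x - means[j]
        inv = np.linalg.inv(covs[j])
        _, logdet = np.linalg.slogdet(covs[j])
        U[j] = np.log(alpha[j]) - 0.5 * (d * np.log(2 * np.pi)
                                         + logdet + diff @ inv @ diff)
        grad_m[j] = inv @ diff
    p = np.exp(U - U.max())
    p /= p.sum()                         # posterior p(j | x)
    # h_j < 0 for rival components produces RPCL-like de-learning.
    h = p * (1.0 + U - p @ U)
    means += eta * h[:, None] * grad_m                 # update means
    log_w += eta * (h - alpha * h.sum())               # update softmax weights
    return means, log_w


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two well-separated clusters, deliberately over-fitted with k = 4.
    data = np.vstack([rng.normal(-3.0, 1.0, (200, 2)),
                      rng.normal(3.0, 1.0, (200, 2))])
    k = 4
    means = data[rng.choice(len(data), k, replace=False)].copy()
    covs = np.stack([np.eye(2)] * k)
    log_w = np.zeros(k)
    for _ in range(50):
        for x in rng.permutation(data):
            means, log_w = harmony_step(x, means, covs, log_w)
    alpha = np.exp(log_w - log_w.max())
    # Redundant components tend toward small mixing proportions.
    print("mixing proportions:", np.round(alpha / alpha.sum(), 3))
```

Discarding components whose mixing proportion falls below a small threshold after convergence would then give the automated model selection behaviour described in the abstract.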

Author information

Corresponding author

Correspondence to Jinwen Ma.

About this article

Cite this article

Ma, J., Wang, L. BYY Harmony Learning on Finite Mixture: Adaptive Gradient Implementation and A Floating RPCL Mechanism. Neural Process Lett 24, 19–40 (2006). https://doi.org/10.1007/s11063-006-9008-7
