Dynamic Competitive Learning

  • Seongwon Cho
  • Jaemin Kim
  • Sun-Tae Chung
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 3971)


In this paper, a new competitive learning algorithm called Dynamic Competitive Learning (DCL) is presented. DCL is a supervised learning method that dynamically generates output neurons and automatically initializes their weight vectors from training patterns. It introduces a new parameter called LOG (Limit of Grade) to decide whether an output neuron should be created. If the class of at least one of the LOG nearest output neurons matches the class of the current training pattern, DCL adjusts the weight vector of that output neuron to learn the pattern. If the classes of all LOG nearest output neurons differ from the class of the training pattern, a new output neuron is created and the training pattern is used to initialize its weight vector. The proposed method differs significantly from previous competitive learning algorithms in that the neuron selected for learning is not limited to the winner and output neurons are generated dynamically during learning. In addition, the algorithm has few parameters, which are easy to determine and apply to real-world problems. Experimental results demonstrate the superiority of DCL over conventional competitive learning methods.
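To make the decision rule concrete, the following is a minimal sketch of the training procedure described above, not the authors' implementation: it assumes Euclidean distance for ranking output neurons and the standard competitive update w := w + lr * (x - w). The class name DynamicCompetitiveLearning and the parameters log_limit (the LOG value) and lr are illustrative choices.

import numpy as np

class DynamicCompetitiveLearning:
    """Sketch of DCL as described in the abstract. Euclidean distance
    and the update w += lr * (x - w) are assumptions, not details
    fixed by the abstract."""

    def __init__(self, log_limit=3, lr=0.05):
        self.log_limit = log_limit  # LOG: number of nearest neurons to inspect
        self.lr = lr                # learning rate for the weight update
        self.weights = []           # one weight vector per output neuron
        self.labels = []            # class label of each output neuron

    def _nearest(self, x):
        # Indices of the LOG nearest output neurons to pattern x.
        dists = [np.linalg.norm(x - w) for w in self.weights]
        return np.argsort(dists)[:self.log_limit]

    def partial_fit(self, x, y):
        x = np.asarray(x, dtype=float)
        for i in (self._nearest(x) if self.weights else []):
            if self.labels[i] == y:
                # A same-class neuron is among the LOG nearest:
                # move its weight vector toward the pattern.
                self.weights[i] += self.lr * (x - self.weights[i])
                return
        # No same-class neuron among the LOG nearest (or no neurons yet):
        # create a new output neuron initialized to the pattern itself.
        self.weights.append(x.copy())
        self.labels.append(y)

    def predict(self, x):
        # Classify by the label of the single nearest output neuron.
        x = np.asarray(x, dtype=float)
        dists = [np.linalg.norm(x - w) for w in self.weights]
        return self.labels[int(np.argmin(dists))]

Training then amounts to one partial_fit call per pattern. Starting from an empty network, the first pattern of each class necessarily creates a new neuron, which reflects the automatic weight initialization from training patterns described in the abstract.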


Keywords: Weight Vector · Recognition Rate · Output Neuron · Training Pattern · Learning Vector Quantization



Copyright information

© Springer-Verlag Berlin Heidelberg 2006

Authors and Affiliations

  • Seongwon Cho¹
  • Jaemin Kim¹
  • Sun-Tae Chung²
  1. School of Electronic and Electrical Engineering, Hongik University, Seoul, Korea
  2. School of Electronic Engineering, Soongsil University, Seoul, Korea
