
Natural Computing, Volume 4, Issue 1, pp 39–51

Vector quantization using information theoretic concepts

  • Tue Lehn-Schiøler
  • Anant Hegde
  • Deniz Erdogmus
  • Jose C. Principe
Article

Abstract

The process of representing a large data set with a smaller number of vectors in the best possible way, also known as vector quantization, has been intensively studied in recent years. Very efficient algorithms such as the Kohonen self-organizing map (SOM) and the Linde Buzo Gray (LBG) algorithm have been devised. In this paper a physical approach to the problem is taken, and it is shown that, by considering the processing elements as points moving in a potential field, an algorithm as efficient as the ones mentioned above can be derived. Unlike SOM and LBG, this algorithm has a clear physical interpretation and relies on the minimization of a well-defined cost function. It is also shown how the potential-field approach can be linked to information theory through the Parzen density estimator. In the light of information theory it becomes clear that minimizing the free energy of the system is in fact equivalent to minimizing a divergence measure between the distribution of the data and the distribution of the processing elements; hence, the algorithm can be seen as a density-matching method.
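The density-matching idea can be sketched numerically: place a Parzen (Gaussian-kernel) density estimate on both the data and the processing elements, then move the PEs by gradient steps in which a data–PE cross term attracts the PEs toward the data density while a PE–PE term repels them from one another. The following Python sketch is illustrative only, not the authors' implementation; the function name `vqit_step`, the normalization by the kernel sums, and the parameter values are all assumptions.

```python
import numpy as np

def gauss(d2, sigma):
    # Gaussian kernel evaluated on squared distances
    return np.exp(-d2 / (2 * sigma ** 2))

def vqit_step(x, w, sigma, lr):
    """One gradient step moving the PEs `w` toward the Parzen density
    of the data `x` (hypothetical names; a sketch of the idea only)."""
    dxw = x[:, None, :] - w[None, :, :]      # (N, M, D) data-to-PE offsets
    dww = w[:, None, :] - w[None, :, :]      # (M, M, D) PE-to-PE offsets
    kxw = gauss((dxw ** 2).sum(-1), sigma)   # attraction kernel values
    kww = gauss((dww ** 2).sum(-1), sigma)   # repulsion kernel values
    # gradient of the data-PE "cross information potential": pulls PEs
    # toward nearby data; normalized by the total potential
    attract = (kxw[..., None] * dxw).sum(0) / kxw.sum()
    # gradient of the PE-PE potential: subtracting it pushes PEs apart
    repel = (kww[..., None] * dww).sum(0) / kww.sum()
    return w + lr * (attract - repel)

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 2))                # toy data set
w = rng.normal(scale=0.1, size=(10, 2))      # 10 processing elements
for _ in range(100):
    w = vqit_step(x, w, sigma=0.5, lr=0.5)
```

After the loop the PEs have spread out from their initial cluster to cover the data cloud, which is the density-matching behaviour the abstract describes.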

Keywords

information particles; information theoretic learning; Parzen density estimate; self-organizing map; vector quantization

Abbreviations

C-S

Cauchy–Schwarz

K-L

Kullback–Leibler

LBG

Linde Buzo Gray

PE

Processing element

SOM

Self-organizing map

QE

Quantization error

VQIT

Vector quantization using information theoretic concepts


References

  1. Bishop CM, Svensén M and Williams CKI (1996) GTM: a principled alternative to the self-organizing map. In: Artificial Neural Networks – ICANN 96, 1996 International Conference Proceedings, pp. 165–170.
  2. Durbin R and Willshaw D (1987) An analogue approach to the travelling salesman problem using an elastic net method. Nature 326: 689–691.
  3. Erdogmus D and Principe JC (2002) Generalized information potential criterion for adaptive system training. IEEE Transactions on Neural Networks 13(5).
  4. Erdogmus D, Principe JC and Hild K (2002) Beyond second-order statistics for learning. Natural Computing 1: 85–108.
  5. Erwin E, Obermayer K and Schulten K (1992) Self-organizing maps: ordering, convergence properties and energy functions. Biological Cybernetics 67: 47–55.
  6. Graepel T, Burger M and Obermayer K (1995) Phase transitions in stochastic self-organizing maps. Physical Review E 56: 3876–3890.
  7. Heskes T (1999) Energy functions for self-organizing maps. In: Oja E and Kaski S (eds) Kohonen Maps, pp. 303–316. Elsevier, Amsterdam.
  8. Heskes T and Kappen B (1993) Error potentials for self-organization. In: Proceedings IJCNN 93, vol. 3, pp. 1219–1223.
  9. Kohonen T (1982) Self-organized formation of topologically correct feature maps. Biological Cybernetics 43: 59–69.
  10. Kullback S and Leibler RA (1951) On information and sufficiency. The Annals of Mathematical Statistics 22: 79–86.
  11. Lampinen J and Kostiainen T (2002) Generative probability density model in the self organizing map.
  12. Linde Y, Buzo A and Gray RM (1980) An algorithm for vector quantizer design. IEEE Transactions on Communications COM-28: 84–95.
  13. MacQueen J (1967) Some methods for classification and analysis of multivariate observations. In: Proceedings of the Fifth Berkeley Symposium on Mathematical Statistics and Probability, vol. 1, pp. 281–297.
  14. Mercer J (1909) Functions of positive and negative type and their connection with the theory of integral equations. Philosophical Transactions of the Royal Society of London A 209: 415–446.
  15. Parzen E (1962) On estimation of a probability density function and mode. Annals of Mathematical Statistics 33: 1065–1076.
  16. Principe JC, Xu D, Zhao Q and Fisher J (2000) Learning from examples with information theoretic criteria. Journal of VLSI Signal Processing Systems 26: 61–77.
  17. Rényi A (1970) Probability Theory. North-Holland Publishing Company, Amsterdam.
  18. Scofield CL (1988) Unsupervised learning in the N-dimensional Coulomb network. Neural Networks 1: 129.
  19. Sum J, Leung C-S, Chan L-W and Xu L (1997) Yet another algorithm which can generate topography map. IEEE Transactions on Neural Networks 8: 1204–1207.
  20. Van Hulle MM (2002) Kernel-based topographic map formation achieved with an information-theoretic approach. Neural Networks 15: 1029–1039.
  21. Vapnik VN (1995) The Nature of Statistical Learning Theory. Springer-Verlag, New York.
  22. Yin H and Allinson N (2001) Self-organizing mixture networks for probability density estimation. IEEE Transactions on Neural Networks 12: 405–411.

Copyright information

© Springer 2005

Authors and Affiliations

  • Tue Lehn-Schiøler
    • 1
  • Anant Hegde
    • 2
  • Deniz Erdogmus
    • 2
  • Jose C. Principe
    • 2
  1. Intelligent Signal Processing, Informatics and Mathematical Modelling, Technical University of Denmark, Lyngby, Denmark
  2. Computational NeuroEngineering Laboratory, Electrical and Computer Engineering Department, University of Florida, Gainesville, USA
