
Unsupervised coding with lococode

  • Sepp Hochreiter
  • Jürgen Schmidhuber
Part IV: Signal Processing: Blind Source Separation, Vector Quantization, and Self-Organization
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1327)

Abstract

Traditional approaches to sensory coding use code component-oriented objective functions (COCOFs) to evaluate code quality. Previous COCOFs do not take into account the information-theoretic complexity of the code-generating mapping itself. We do: “low-complexity coding and decoding” (LOCOCODE) generates so-called lococodes that (1) convey information about the input data, (2) can be computed from the data by a low-complexity mapping (LCM), and (3) can be decoded by an LCM. We implement LOCOCODE by training autoassociators with Flat Minimum Search (FMS), a general method for finding low-complexity neural nets. LOCOCODE extracts optimal codes for difficult versions of the “bars” benchmark problem. As a preprocessor for a vowel recognition benchmark problem, it sets the stage for excellent classification performance.
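The chapter itself implements LOCOCODE via Flat Minimum Search; the sketch below only illustrates the basic autoassociator setup on “bars”-style data, with plain L2 weight decay standing in for the FMS complexity penalty. All hyperparameters and helper names (bars_batch, HIDDEN, DECAY, and so on) are illustrative assumptions, not the authors' code.

    import numpy as np

    # Minimal sketch of an autoassociator on a "bars"-style benchmark.
    # NOTE: real LOCOCODE uses Flat Minimum Search (Hochreiter &
    # Schmidhuber, 1997) as its complexity penalty; the L2 weight
    # decay below is only an illustrative stand-in, not FMS itself.

    rng = np.random.default_rng(0)
    SIZE, HIDDEN, LR, DECAY = 5, 8, 0.1, 1e-3  # assumed hyperparameters

    def bars_batch(n):
        """Each input is a SIZE x SIZE grid; every horizontal and
        vertical bar appears independently with probability 1/SIZE
        (a common 'bars' setup; exact probabilities are assumed)."""
        x = np.zeros((n, SIZE, SIZE))
        for img in x:
            for i in range(SIZE):
                if rng.random() < 1.0 / SIZE:
                    img[i, :] = 1.0  # horizontal bar
                if rng.random() < 1.0 / SIZE:
                    img[:, i] = 1.0  # vertical bar
        return x.reshape(n, SIZE * SIZE)

    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))

    W1 = rng.normal(0, 0.1, (SIZE * SIZE, HIDDEN))  # encoder weights
    W2 = rng.normal(0, 0.1, (HIDDEN, SIZE * SIZE))  # decoder weights

    for step in range(2000):
        x = bars_batch(32)
        h = sigmoid(x @ W1)  # code layer (the "lococode" analogue)
        y = sigmoid(h @ W2)  # reconstruction of the input
        # backprop of squared reconstruction error, plus weight decay
        dy = (y - x) * y * (1 - y)
        dh = (dy @ W2.T) * h * (1 - h)
        W2 -= LR * (h.T @ dy / len(x) + DECAY * W2)
        W1 -= LR * (x.T @ dh / len(x) + DECAY * W1)

    print("final reconstruction error:", float(np.mean((y - x) ** 2)))

With a proper flatness-based penalty in place of the weight decay, the trained code units would be expected to align with the independent bar generators, which is the behavior the abstract describes on this benchmark.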

Keywords

Bias Weight, Minimal Redundancy, Sensory Code, Factorial Code, Standard Backprop

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Sepp Hochreiter (1, 2)
  • Jürgen Schmidhuber (1, 2)
  1. Technische Universität München, München, Germany
  2. IDSIA, Lugano, Switzerland
