Objective functions for neural map formation

  • Laurenz Wiskott
  • Terrence Sejnowski
Part II: Cortical Maps and Receptive Fields
Part of the Lecture Notes in Computer Science book series (LNCS, volume 1327)


A unifying framework for analyzing models of neural map formation is presented, based on growth rules derived from objective functions and normalization rules derived from constraint functions. Coordinate transformations play an important role in deriving different rules from the same function. Ten models from the literature are classified within this objective-function framework. Models that look different may nevertheless be equivalent in terms of their stable solutions. The techniques used in this analysis may also be useful for investigating other types of neural dynamics.
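As a minimal sketch of the framework's core idea (the quadratic objective, constraint, and all names below are illustrative assumptions, not the authors' exact formulation): a growth rule can be obtained as the gradient of an objective function H(W), and a normalization rule as a rescaling that keeps the weights on the surface defined by a constraint function g(W).

```python
import numpy as np

rng = np.random.default_rng(0)

n_out, n_in = 4, 6                  # illustrative sizes
C = rng.random((n_in, n_in))
C = (C + C.T) / 2                   # symmetric input-correlation matrix (assumed)
W = rng.random((n_out, n_in))       # weights from n_in inputs to n_out output neurons

def objective(W):
    # Assumed quadratic Hebbian objective: H(W) = 1/2 tr(W C W^T)
    return 0.5 * np.trace(W @ C @ W.T)

def growth_rule(W):
    # Growth rule derived from the objective: dW/dt = dH/dW = W C
    return W @ C

def normalize(W, total=1.0):
    # Constraint function g(W): the afferent weights of each output neuron
    # sum to a fixed value. Multiplicative normalization rescales each row
    # back onto the constraint surface after a growth step.
    return total * W / W.sum(axis=1, keepdims=True)

eta = 0.01
for _ in range(100):
    W = normalize(W + eta * growth_rule(W))

# each row still satisfies the constraint
print(np.allclose(W.sum(axis=1), 1.0))
```

Under this reading, different models correspond to different choices of H and g, and a coordinate transformation of W can turn one published rule into the gradient of another model's objective.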


Keywords: Objective Function · Coordinate Transformation · Output Neuron · Constraint Function · Normalization Rule




References

  1. Amari, S. (1980). Topographic organization of nerve fields. Bulletin of Mathematical Biology, 42:339–364.
  2. Bienenstock, E. and von der Malsburg, C. (1987). A neural network for invariant pattern recognition. Europhysics Letters, 4(1):121–126.
  3. Goodhill, G. J. (1993). Topography and ocular dominance: A model exploring positive correlations. Biological Cybernetics, 69:109–118.
  4. Goodhill, G. J., Finch, S., and Sejnowski, T. J. (1996). Optimizing cortical mappings. In Touretzky, D., Mozer, M., and Hasselmo, M., editors, Advances in Neural Information Processing Systems, volume 8, pages 330–336, Cambridge, MA. MIT Press.
  5. Häussler, A. F. and von der Malsburg, C. (1983). Development of retinotopic projections — An analytical treatment. Journal of Theoretical Neurobiology, 2:47–73.
  6. Konen, W. and von der Malsburg, C. (1993). Learning to generalize from single examples in the dynamic link architecture. Neural Computation, 5(5):719–735.
  7. Linsker, R. (1986). From basic network principles to neural architecture: Emergence of orientation columns. Proceedings of the National Academy of Sciences USA, 83:8779–8783.
  8. Miller, K. D., Keller, J. B., and Stryker, M. P. (1989). Ocular dominance column development: Analysis and simulation. Science, 245:605–615.
  9. Miller, K. D. and MacKay, D. J. C. (1994). The role of constraints in Hebbian learning. Neural Computation, 6:100–126.
  10. Obermayer, K., Ritter, H., and Schulten, K. (1990). Large-scale simulations of self-organizing neural networks on parallel computers: Application to biological modelling. Parallel Computing, 14:381–404.
  11. Tanaka, S. (1990). Theory of self-organization of cortical maps: Mathematical framework. Neural Networks, 3:625–640.
  12. von der Malsburg, C. (1973). Self-organization of orientation sensitive cells in the striate cortex. Kybernetik, 14:85–100.
  13. Whitelaw, D. J. and Cowan, J. D. (1981). Specificity and plasticity of retinotectal connections: A computational model. Journal of Neuroscience, 1(12):1369–1387.

Copyright information

© Springer-Verlag Berlin Heidelberg 1997

Authors and Affiliations

  • Laurenz Wiskott (1)
  • Terrence Sejnowski (1, 2, 3)
  1. Computational Neurobiology Laboratory, The Salk Institute for Biological Studies, San Diego
  2. Howard Hughes Medical Institute, The Salk Institute for Biological Studies, San Diego
  3. Department of Biology, University of California, San Diego, La Jolla
