Biological Cybernetics, Volume 104, Issue 4, pp 251–262

Normalization for probabilistic inference with neurons

Original Paper

DOI: 10.1007/s00422-011-0433-y

Cite this article as:
Eliasmith, C. & Martens, J. Biol Cybern (2011) 104: 251. doi:10.1007/s00422-011-0433-y

Abstract

Recently, there have been a number of proposals regarding how biologically plausible neural networks might perform probabilistic inference (Rao, Neural Computation, 16(1):1–38, 2004; Eliasmith and Anderson, Neural engineering: computation, representation and dynamics in neurobiological systems, 2003; Ma et al., Nature Neuroscience, 9(11):1432–1438, 2006; Sahani and Dayan, Neural Computation, 15(10):2255–2279, 2003). To be able to repeatedly perform such inference, it is essential that the represented distributions be appropriately normalized. Past approaches have considered normalization mechanisms independently of inference, often leaving them unexplored, or appealing to a notion of divisive normalization that requires pooling across many neurons. Here, we demonstrate how normalization and inference can be combined into an appropriate connection matrix, eliminating the need for pooling or a division-like operation. We algebraically demonstrate that such a solution is available regardless of the inference being performed. We show that such a solution is relevant to neural computation by implementing it in a recurrent spiking neural network.
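To illustrate the core algebraic idea only (this is a minimal sketch, not the authors' NEF spiking implementation), the code below shows how a linear inference step can absorb normalization into its connection matrix: rescaling the columns of the inference matrix so that each sums to one makes the matrix preserve the total probability mass, so repeated application keeps the represented distribution normalized with no pooling or division-like operation at run time. The grid, Gaussian transition kernel, and variable names are illustrative assumptions.

```python
import numpy as np

# Discretize a distribution p(x) on a grid of n points (illustrative setup,
# not the basis-function encoding used in the paper).
n = 64
x = np.linspace(-1.0, 1.0, n)

# An example linear inference step: a Gaussian transition kernel, as in the
# prediction step of a discretized Bayesian filter (assumed for illustration).
A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 0.1) ** 2)

# Fold normalization into the connection matrix itself: rescale each column
# to sum to one (column-stochastic). Then sum(A_norm @ p) == sum(p) for any
# nonnegative p, so repeated inference never needs a divisive normalization.
A_norm = A / A.sum(axis=0, keepdims=True)

# Initial distribution: a normalized bump.
p = np.exp(-0.5 * ((x - 0.3) / 0.05) ** 2)
p /= p.sum()

# Apply the combined inference-plus-normalization matrix repeatedly.
for _ in range(100):
    p = A_norm @ p

print(p.sum())  # stays 1.0 (up to floating point), with no explicit division
```

In the paper's setting the analogous move is made for the recurrent connection weights of a spiking network: normalization is built into the same matrix that performs the inference, rather than being computed by a separate pooling or division stage.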

Keywords

Neural computation · Probabilistic inference · Spiking network · Attractor network · Normalization · NEF

Supplementary material

ESM 1: 422_2011_433_MOESM1_ESM.zip (ZIP, 1.7 MB)

Copyright information

© Springer-Verlag 2011

Authors and Affiliations

  1. Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Canada
  2. Department of Computer Science, University of Toronto, Toronto, Canada
