Applications of Information Theory to Analysis of Neural Data
Information theory is a practical and theoretical framework developed for the study of communication over noisy channels. Its probabilistic basis and its capacity to relate statistical structure to function make it ideally suited for studying information flow in the nervous system. It has a number of useful properties: it is a general measure sensitive to any relationship, not only linear effects; it has meaningful units, which in many cases allow direct comparison between different experiments; and it can be used to study how much information can be gained by observing neural responses in single trials, rather than in averages over multiple trials. A variety of information-theoretic quantities are commonly used in neuroscience (see entry “Definitions of Information-Theoretic Quantities”). In this entry we review some applications of information theory in neuroscience to the study of how information is encoded in both single neurons and neuronal populations.
Keywords: Mutual information; Neural activity; Spike train; BOLD signal; Local field potential
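To make the quantities discussed above concrete, the following is a minimal illustrative sketch (not from the entry itself) of the plug-in, or "direct", estimate of the mutual information between a discrete stimulus and a discrete neural response (e.g. a spike count), computed in bits from paired single-trial observations; the function and variable names are our own.

```python
import numpy as np

def mutual_information(stim, resp):
    """Plug-in estimate of I(S;R) in bits from paired discrete
    observations of a stimulus and a response (e.g. spike counts)."""
    stim = np.asarray(stim)
    resp = np.asarray(resp)
    n = len(stim)
    s_vals, s_idx = np.unique(stim, return_inverse=True)
    r_vals, r_idx = np.unique(resp, return_inverse=True)
    # Empirical joint probability table P(s, r).
    joint = np.zeros((len(s_vals), len(r_vals)))
    np.add.at(joint, (s_idx, r_idx), 1.0)
    joint /= n
    ps = joint.sum(axis=1, keepdims=True)   # marginal P(s)
    pr = joint.sum(axis=0, keepdims=True)   # marginal P(r)
    nz = joint > 0                          # skip zero cells (0 log 0 = 0)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (ps @ pr)[nz])))

# Hypothetical example: a response that perfectly tracks a binary
# stimulus carries exactly 1 bit about it.
stim = [0, 0, 1, 1] * 25
resp = [2, 2, 7, 7] * 25
print(mutual_information(stim, resp))  # ~1.0 bit
```

Note that this naive estimator is biased upward for small trial counts; in practice, neuroscience applications combine it with bias-correction or shuffling procedures.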
Research is supported by the SI-CODE (FET-Open, FP7-284533) project and by the ABC and NETT (People Programme Marie Curie Actions PITN-GA-2011-290011 and PITN-GA-2011-289146) projects of the European Union’s Seventh Framework Programme FP7 2007–2013.