Synergy, redundancy, and multivariate information measures: an experimentalist’s perspective
Information theory has long been used to quantify interactions between two variables. With the rise of complex systems research, multivariate information measures have been increasingly used to investigate interactions among groups of three or more variables, often with an emphasis on so-called synergistic and redundant interactions. While bivariate information measures are commonly agreed upon, the multivariate information measures in use today were developed by many different groups and differ in subtle yet significant ways. Here, we review these multivariate information measures, with special emphasis on their relationship to synergy and redundancy, and we examine the differences between these measures by applying them to several simple model systems. Beyond these model systems, we illustrate the usefulness of the information measures by analyzing neural spiking data from a dissociated culture through early stages of its development. Our aim is that this work will aid other researchers as they seek the best multivariate information measure for their specific research goals and system. Finally, we have made software available online that allows the user to calculate all of the information measures discussed in this paper.
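To make the starting point concrete, the commonly agreed-upon bivariate measure referred to above is Shannon's mutual information, I(X;Y), which the multivariate measures generalize. The following is a minimal sketch of computing it from a discrete joint probability table; the function name and interface are illustrative, not those of the software mentioned in the text.

```python
import numpy as np

def mutual_information(joint):
    """Bivariate mutual information I(X;Y) in bits, computed from a
    joint probability table p(x, y) given as a 2-D array.

    I(X;Y) = sum_{x,y} p(x,y) * log2( p(x,y) / (p(x) p(y)) )
    """
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal p(x), shape (nx, 1)
    py = joint.sum(axis=0, keepdims=True)   # marginal p(y), shape (1, ny)
    nz = joint > 0                          # skip zero-probability cells
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())
```

For example, two perfectly correlated fair binary variables (joint table `[[0.5, 0], [0, 0.5]]`) share 1 bit, while two independent uniform binary variables share 0 bits.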
Keywords: Information theory; Multivariate information measures; Complex systems; Neural coding; Dissociated neuronal cultures; Multielectrode array
We would like to thank Paul Williams, Randy Beer, Alexander Murphy-Nakhnikian, Shinya Ito, Ben Nicholson, Emily Miller, Virgil Griffith, and Elizabeth Timme for providing useful comments. We would also like to thank the anonymous reviewers for their helpful comments on this paper. Their input during the revision process was invaluable.