
Neural Engine Hypothesis

Chapter in: Dynamic Neuroscience

Abstract

This chapter presents the hypothesis that an animal’s brain acts analogously to a heat engine when it actively modulates incoming sensory information to enhance its perceptual capacity. To articulate this hypothesis, we describe the stimulus-evoked activity of a neural population based on the maximum entropy principle with constraints on two types of overlapping activities: one that is controlled by stimulus conditions and another, termed internal activity, that is regulated internally by the organism. We demonstrate that modulation of the internal activity realizes gain control of the stimulus response and thereby controls stimulus information. The statistical structure the model shares with thermodynamics allows us to construct a first law for neural dynamics, an equation of state, and a fluctuation-response relation. A cycle of neural dynamics is then introduced to model information processing by the neurons, during which the stimulus information is dynamically enhanced by the internal gain-modulation mechanism. Based on the conservation law of entropy, we demonstrate that the cycle generates entropy ascribed to the stimulus-related activity using entropy supplied by the internal mechanism, analogously to a heat engine that produces work from heat. We provide an efficient cycle that achieves the highest entropic efficiency in retaining the stimulus information. The theory allows us to quantify the efficiency of the internal computation and its theoretical limit, which can be used to test the hypothesis.


Notes

  1.

    The activity rates of Neurons 4 and 5 do not depend on α because b_0 does not contain interactions that couple Neurons 1–3 with Neurons 4 and 5. If b_0 contained non-zero interactions between any pair formed by one of Neurons 1–3 and one of Neurons 4 and 5, the activity rates of Neurons 4 and 5 would increase along with the increased rates of Neurons 1–3.
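
    As a minimal numerical sketch of this dependence (assuming a standard pairwise maximum entropy parameterization; the exact b_0 of Fig. 11.1 is not reproduced here), the following Python snippet shows that the rates of Neurons 4 and 5 are unaffected by α as long as b_0 contains no couplings between the two groups:

    import itertools
    import numpy as np

    N = 5

    def rates(h, J, alpha, stim_idx=(0, 1, 2)):
        """Marginal activity rates of a pairwise maximum entropy model.

        h and J play the role of the baseline parameters b_0 (fields and
        couplings); alpha is added to the fields of the stimulated neurons.
        """
        h_eff = h.copy()
        h_eff[list(stim_idx)] += alpha
        states = np.array(list(itertools.product([0, 1], repeat=N)))
        # log p(x) is proportional to h_eff . x + x^T J x / 2
        logp = states @ h_eff + 0.5 * np.einsum('ki,ij,kj->k', states, J, states)
        p = np.exp(logp - logp.max())
        p /= p.sum()
        return p @ states                          # expected activity of each neuron

    h = np.full(N, -2.0)                           # baseline fields
    J = np.zeros((N, N))
    J[0, 1] = J[1, 0] = J[1, 2] = J[2, 1] = 0.5    # couplings within Neurons 1-3 only

    for alpha in (0.0, 1.0, 2.0):
        print(alpha, np.round(rates(h, J, alpha), 3))
    # The last two entries (Neurons 4 and 5) stay constant; adding, e.g.,
    # J[2, 3] = J[3, 2] = 0.5 makes them increase with alpha, as stated above.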

  2.

    We obtain dU = TdS − fdX by using β ≡ 1/T and α ≡ βf in Eq. (11.11). In this form, the expectation parameter U is a function of (S, X). Following the conventions of thermodynamics, we may call U the internal energy, T the temperature of the system, and f the force applied to the neurons by a stimulus. It is thus possible to describe the evoked activity of a neural population in these standard thermodynamic terms. However, doing so introduces the concepts of work and heat, which may not be relevant quantities for neurons to exchange with their environment.
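
    For readers without access to Eq. (11.11), the step can be reconstructed under the assumption (consistent with this note) that Eq. (11.11) has the entropy-differential form dS = β dU + α dX:

    $$\displaystyle \begin{aligned} dS = \beta \, dU + \alpha \, dX \;\;\Longrightarrow\;\; dU = \frac{1}{\beta}\, dS - \frac{\alpha}{\beta}\, dX = T\, dS - f\, dX, \qquad T \equiv 1/\beta, \quad f \equiv \alpha/\beta. \end{aligned} $$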

  3.

    Importantly, −ψ is the logarithm of the simultaneous-silence probability predicted by the model, Eq. (11.4). The observed probability of simultaneous silence can differ from this prediction if the model is inaccurate. For example, an Ising model may be inaccurate in this respect: it has been shown that higher-order neural interactions can contribute significantly to increasing the silence probability (Ohiorhenuan et al. 2010; Shimazaki et al. 2015).
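
    The identity −ψ = log P(silence) follows directly if Eq. (11.4) has the usual exponential-family form of a maximum entropy model (an assumption of this sketch), with a feature vector F(x) that vanishes for the all-silent pattern:

    $$\displaystyle \begin{aligned} P(\mathbf{x}) = \exp\!\left( \boldsymbol{\theta}^\top \mathbf{F}(\mathbf{x}) - \psi \right), \qquad \mathbf{F}(\mathbf{0}) = \mathbf{0} \;\;\Longrightarrow\;\; P(\mathbf{x} = \mathbf{0}) = e^{-\psi}. \end{aligned} $$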

  4.

    Under the assumption that the rates of synchronous spike events scale as \(\mathcal {O}(\varDelta ^k)\), where Δ is the bin size of the discretization and k is the number of synchronously active neurons, Kass et al. (2011) proved that a continuous-time limit (Δ → 0) of the maximum entropy model that takes the synchronous events into account can be constructed. Here we follow their result and consider the continuous-time representation.

  5.

    When α and β are both dependent on the stimulus, the Fisher information about s is given as

    $$\displaystyle \begin{aligned} J(s) = \frac{ \partial \boldsymbol{\theta}(s)^\top } {\partial s} \mathbf{J} \frac{ \partial \boldsymbol{\theta}(s) } {\partial s}, \end{aligned} $$
    (11.19)
    where θ(s) ≡ [−β, α] and J is the Fisher information matrix given by Eq. (11.24), which will be discussed in a later section. We computed Eq. (11.19) using the analytical solutions of the dynamical equations, \(\alpha (t) = \frac {s t}{\tau _{\alpha }} e^{-t/ \tau _{\alpha }}\) and \(\beta (t) = 1 - \frac {s \gamma }{\tau _{\beta }-\tau _{\alpha }} \left \{ \frac {\tau _{\alpha } \tau _{\beta }}{\tau _{\beta }-\tau _{\alpha }} ( e^{-t/ \tau _{\beta }}-e^{-t/\tau _{\alpha }} ) - t e^{- t/\tau _{\alpha }} \right \}\).

    Fig. 11.2 The delayed gain-modulation by internal activity. The parameters of the maximum entropy model (N = 5) follow those in Fig. 11.1. (a) An illustration of the delayed gain-modulation described in Eqs. (11.17) and (11.18). The stimulus increases the stimulus component α, which activates Neurons 1, 2, and 3. Subsequently, the internal component β is increased, which raises the background activity of all five neurons. We assume a slower time constant for the gain-modulation than for the stimulus activation (τ_β = 0.1 and τ_α = 0.05). (b) Top: Dynamics of the stimulus and internal components (solid lines, γ = 0.5). The internal component β without the delayed gain-modulation (γ = 0) is shown by a dashed black line. Middle: Activity rates [a.u.] of Neurons 1–3 with (solid red) and without (dashed black) the delayed gain-modulation. Bottom: The Fisher information about the stimulus component α (Eq. (11.16)). (c) The X–α (left) and U–β (right) phase diagrams. The solid red cycle represents the dynamics when the delayed gain-modulation is applied (γ = 0.5). The dashed line is the trajectory when the delayed gain-modulation is not applied to the population (γ = 0). (d) Left: The U–β phase diagrams of neural dynamics with different combinations of τ_β and γ that achieve the same level of maximum modulation (the minimum value of β = 0.9). Right: The Fisher information about the stimulus component α for the different cycles. The color code is the same as in the left panel. The inset shows the Fisher information about the stimulus intensity s (Eq. (11.19))
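
    The quantities quoted in this note can be reproduced with a short script; the following is a minimal sketch. The 2 × 2 matrix J of Eq. (11.24) is not reproduced in this preview, so a placeholder value is used purely for illustration:

    import numpy as np

    tau_a, tau_b, gamma = 0.05, 0.1, 0.5           # parameters as in Fig. 11.2

    def alpha(t, s):
        """Stimulus component, analytical form quoted in this note."""
        return s * t / tau_a * np.exp(-t / tau_a)

    def beta(t, s):
        """Internal component with delayed gain-modulation, analytical form quoted above."""
        c = s * gamma / (tau_b - tau_a)
        return 1.0 - c * (tau_a * tau_b / (tau_b - tau_a)
                          * (np.exp(-t / tau_b) - np.exp(-t / tau_a))
                          - t * np.exp(-t / tau_a))

    def fisher_about_s(t, s, J_theta):
        """Eq. (11.19): J(s) = (d theta/ds)^T J (d theta/ds) with theta = [-beta, alpha].

        Both alpha and (beta - 1) are proportional to s, so for s > 0 the
        derivatives with respect to s are obtained by dividing by s.
        """
        dtheta_ds = np.array([(1.0 - beta(t, s)) / s, alpha(t, s) / s])
        return dtheta_ds @ J_theta @ dtheta_ds

    J_theta = np.array([[0.5, 0.1], [0.1, 0.8]])   # placeholder for Eq. (11.24)
    for t in (0.05, 0.1, 0.2, 0.4):
        print(t, round(fisher_about_s(t, s=1.0, J_theta=J_theta), 4))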

  6.

    Here we use entropy synonymously with heat in thermodynamics to facilitate the comparison with a heat engine. However, this is not an accurate description, because entropy is a state variable whereas heat is not.

  7.

    This is synonymous with the statement that the first law prohibits a perpetual motion machine of the first kind, a machine that can work indefinitely without receiving heat.

  8.

    Let us consider the efficiency η achieved by an arbitrary cycle \(\mathcal {C}\) during which the internal component β satisfies β_L ≤ β ≤ β_H. Let the minimum and maximum internal activity in the cycle be U_min and U_max. We decompose \(\mathcal {C}\) into the path \(\mathcal {C}_1\) from U_min to U_max and the path \(\mathcal {C}_2\) from U_max to U_min, during which the internal component is given as β_1(U) and β_2(U), respectively. Because the cycle acts as an engine, we expect β_1(U) > β_2(U). The entropy changes produced by the internal activity during the paths \(\mathcal {C}_i\) (i = 1, 2) are computed as \(\varDelta S^{\mathrm {int}}_{\mathcal {C}_1} = \int _{U_{\mathrm {min}}}^{U_{\mathrm {max}}} \beta _1(U) \, dU \leq \beta _H \int _{U_{\mathrm {min}}}^{U_{\mathrm {max}}} \, dU = \beta _H (U_{\mathrm {max}}-U_{\mathrm {min}})\) and \(| \varDelta S^{\mathrm {int}}_{\mathcal {C}_2} |= |\int _{U_{\mathrm {max}}}^{U_{\mathrm {min}}} \beta _2(U) \, dU| \geq |\beta _L \int _{U_{\mathrm {max}}}^{U_{\mathrm {min}}} \, dU| = \beta _L (U_{\mathrm {max}}-U_{\mathrm {min}})\). Hence we obtain \(| \varDelta S^{\mathrm {int}}_{\mathcal {C}_2} | / \varDelta S^{\mathrm {int}}_{\mathcal {C}_1} \geq \beta _L / \beta _H \), or η ≤ η_e.
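
    Assuming the efficiency is defined, as in the heat-engine analogy, by the ratio of the two entropy changes above (the exact definition appears in the main text of the chapter, which is not part of this preview), the inequality yields the bound

    $$\displaystyle \begin{aligned} \eta = 1 - \frac{ | \varDelta S^{\mathrm{int}}_{\mathcal{C}_2} | }{ \varDelta S^{\mathrm{int}}_{\mathcal{C}_1} } \;\leq\; 1 - \frac{\beta_L}{\beta_H} \;\equiv\; \eta_e. \end{aligned} $$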

References

  • Abbott, L. F., Varela, J. A., Sen, K., & Nelson, S. B. (1997). Synaptic depression and cortical gain control. Science, 275(5297), 220–224.


  • Amari, S.-I., & Nagaoka, H. (2000). Methods of information geometry. Providence: The American Mathematical Society.


  • Berkes, P., Orbán, G., Lengyel, M., & Fiser, J. (2011). Spontaneous cortical activity reveals hallmarks of an optimal internal model of the environment. Science, 331(6013), 83–87.


  • Brown, E. N., Frank, L. M., Tang, D., Quirk, M. C., & Wilson, M. A. (1998). A statistical paradigm for neural spike train decoding applied to position prediction from ensemble firing patterns of rat hippocampal place cells. Journal of Neuroscience, 18(18), 7411–7425.


  • Burkitt, A. N., Meffin, H., & Grayden, D. B. (2003). Study of neuronal gain in a conductance-based leaky integrate-and-fire neuron model with balanced excitatory and inhibitory synaptic input. Biological Cybernetics, 89(2), 119–125.


  • Carandini, M., & Heeger, D. J. (2012). Normalization as a canonical neural computation. Nature Reviews Neuroscience, 13(1), 51–62.


  • Carnot, S. (1824). Réflexions sur la puissance motrice du feu et sur les machines propres à développer cette puissance. Paris: Bachelier.


  • Cauller, L. J., & Kulics, A. T. (1991). The neural basis of the behaviorally relevant N1 component of the somatosensory-evoked potential in SI cortex of awake monkeys: Evidence that backward cortical projections signal conscious touch sensation. Experimental Brain Research, 84(3), 607–619.


  • Chance, F. S., Abbott, L. F., & Reyes, A. D. (2002). Gain modulation from background synaptic input. Neuron, 35(4), 773–782.


  • Doiron, B., Longtin, A., Berman, N., & Maler, L. (2001). Subtractive and divisive inhibition: Effect of voltage-dependent inhibitory conductances and noise. Neural Computation, 13(1), 227–248.


  • Donner, C., Obermayer, K., & Shimazaki, H. (2017). Approximate inference for time-varying interactions and macroscopic dynamics of neural populations. PLoS Computational Biology, 13(1), e1005309.


  • Ghose, G. M., & Maunsell, J. H. R. (2002). Attentional modulation in visual cortex depends on task timing. Nature, 419(6907), 616–620.


  • Granot-Atedgi, E., Tkačik, G., Segev, R., & Schneidman, E. (2013). Stimulus-dependent maximum entropy models of neural population codes. PLoS Computational Biology, 9(3), e1002922.


  • Ito, S., & Sagawa, T. (2013). Information thermodynamics on causal networks. Physical Review Letters, 111(18), 180603.


  • Ito, S., & Sagawa, T. (2015). Maxwell’s demon in biochemical signal transduction with feedback loop. Nature Communications, 6, 7498.


  • Jaynes, E. T. (1957). Information theory and statistical mechanics. Physical Review, 106(4), 620–630.


  • Kass, R. E., Kelly, R. C., & Loh, W.-L. (2011). Assessment of synchrony in multiple neural spike trains using loglinear point process models. Annals of Applied Statistics, 5, 1262–1292.


  • Kelly, R. C., & Kass, R. E. (2012). A framework for evaluating pairwise and multiway synchrony among stimulus-driven neurons. Neural Computation, 24(8), 2007–2032.


  • Kenet, T., Bibitchkov, D., Tsodyks, M., Grinvald, A., & Arieli, A. (2003). Spontaneously emerging cortical representations of visual attributes. Nature, 425(6961), 954–956.


  • Laughlin, S. B. (1989). The role of sensory adaptation in the retina. Journal of Experimental Biology, 146, 39–62.


  • Lee, B. B., Dacey, D. M., Smith, V. C., & Pokorny, J. (2003). Dynamics of sensitivity regulation in primate outer retina: The horizontal cell network. Journal of Vision, 3(7), 513–526.


  • Luck, S. J., Chelazzi, L., Hillyard, S. A., & Desimone, R. (1997). Neural mechanisms of spatial selective attention in areas V1, V2, and V4 of macaque visual cortex. Journal of Neurophysiology, 77(1), 24–42.


  • Manita, S., Suzuki, T., Homma, C., Matsumoto, T., Odagawa, M., Yamada, K., et al. (2015). A top-down cortical circuit for accurate sensory perception. Neuron, 86(5), 1304–1316.


  • Martínez-Trujillo, J., & Treue, S. (2002). Attentional modulation strength in cortical area MT depends on stimulus contrast. Neuron, 35(2), 365–370.


  • McAdams, C. J., & Maunsell, J. H. (1999). Effects of attention on orientation-tuning functions of single neurons in macaque cortical area V4. Journal of Neuroscience, 19(1), 431–441.


  • Mitchell, S. J., & Silver, R. A. (2003). Shunting inhibition modulates neuronal gain during synaptic excitation. Neuron, 38(3), 433–445.


  • Moran, J., & Desimone, R. (1985). Selective attention gates visual processing in the extrastriate cortex. Science, 229(4715), 782–784.


  • Motter, B. C. (1993). Focal attention produces spatially selective processing in visual cortical areas V1, V2, and V4 in the presence of competing stimuli. Journal of Neurophysiology, 70(3), 909–919.


  • Nasser, H., Marre, O., & Cessac, B. (2013). Spatio-temporal spike train analysis for large scale networks using the maximum entropy principle and monte carlo method. Journal of Statistical Mechanics, 2013(03), P03006.


  • Ohiorhenuan, I. E., Mechler, F., Purpura, K. P., Schmid, A. M., Hu, Q., & Victor, J. D. (2010). Sparse coding and high-order correlations in fine-scale cortical networks. Nature, 466(7306), 617–621.


  • Ohzawa, I., Sclar, G., & Freeman, R. D. (1985). Contrast gain control in the cat’s visual system. Journal of Neurophysiology, 54(3), 651–667.


  • Prescott, S. A., & De Koninck, Y. (2003). Gain control of firing rate by shunting inhibition: Roles of synaptic noise and dendritic saturation. Proceedings of the National Academy of Sciences USA, 100(4), 2076–2081.


  • Reynolds, J. H., Chelazzi, L., & Desimone, R. (1999). Competitive mechanisms subserve attention in macaque areas V2 and V4. Journal of Neuroscience, 19(5), 1736–1753.


  • Reynolds, J. H., Pasternak, T., & Desimone, R. (2000). Attention increases sensitivity of V4 neurons. Neuron, 26(3), 703–714.


  • Rothman, J. S., Cathala, L., Steuber, V., & Silver, R. A. (2009). Synaptic depression enables neuronal gain control. Nature, 457(7232), 1015–1018.


  • Sachidhanandam, S., Sreenivasan, V., Kyriakatos, A., Kremer, Y., & Petersen, C. C. (2013). Membrane potential correlates of sensory perception in mouse barrel cortex. Nature Neuroscience, 16(11), 1671–1677.


  • Sagawa, T., & Ueda, M. (2010). Generalized Jarzynski equality under nonequilibrium feedback control. Physical Review Letters, 104(9), 090602.


  • Sagawa, T., & Ueda, M. (2012). Fluctuation theorem with information exchange: Role of correlations in stochastic thermodynamics. Physical Review Letters, 109(18), 180602.


  • Sakmann, B., & Creutzfeldt, O. D. (1969). Scotopic and mesopic light adaptation in the cat’s retina. Pflügers Archiv: European Journal of Physiology, 313(2), 168–185.


  • Salinas, E., & Abbott, L. F. (1996). A model of multiplicative neural responses in parietal cortex. Proceedings of the National Academy of Sciences USA, 93(21), 11956–11961.


  • Salinas, E., & Sejnowski, T. J. (2001). Gain modulation in the central nervous system: Where behavior, neurophysiology, and computation meet. Neuroscientist, 7(5), 430–440.


  • Schneidman, E., Berry, M. J., Segev, R., & Bialek, W. (2006). Weak pairwise correlations imply strongly correlated network states in a neural population. Nature, 440(7087), 1007–1012.


  • Schultz, W. (2016). Dopamine reward prediction-error signalling: A two-component response. Nature Reviews Neuroscience, 17(3), 183–195.


  • Seidemann, E., & Newsome, W. T. (1999). Effect of spatial attention on the responses of area MT neurons. Journal of Neurophysiology, 81(4), 1783–1794.


  • Shimazaki, H. (2013). Single-trial estimation of stimulus and spike-history effects on time-varying ensemble spiking activity of multiple neurons: a simulation study. Journal of Physics: Conference Series, 473, 012009.


  • Shimazaki, H., Amari, S.-I., Brown, E. N., & Grün, S. (2009). State-space analysis on time-varying correlations in parallel spike sequences. In Proceedings of IEEE ICASSP, pp. 3501–3504.


  • Shimazaki, H., Amari, S.-I., Brown, E. N., & Grün, S. (2012). State-space analysis of time-varying higher-order spike correlation for multiple neural spike train data. PLoS Computational Biology, 8(3), e1002385.


  • Shimazaki, H., Sadeghi, K., Ishikawa, T., Ikegaya, Y., & Toyoizumi, T. (2015). Simultaneous silence organizes structured higher-order interactions in neural populations. Scientific Reports, 5, 9821.


  • Shlens, J., Field, G. D., Gauthier, J. L., Grivich, M. I., Petrusca, D., Sher, A., et al. (2006). The structure of multi-neuron firing patterns in primate retina. Journal of Neuroscience, 26(32), 8254–8266.


  • Silver, R. A. (2010). Neuronal arithmetic. Nature Reviews Neuroscience, 11(7), 474–489.


  • Smith, A. C., & Brown, E. N. (2003). Estimating a state-space model from point process observations. Neural Computation, 15(5), 965–991.


  • Spratling, M. W., & Johnson, M. H. (2004). A feedback model of visual attention. Journal of Cognitive Neuroscience, 16(2), 219–237.


  • Supèr, H., Spekreijse, H., & Lamme, V. A. (2001). A neural correlate of working memory in the monkey primary visual cortex. Science, 293(5527), 120–124.


  • Sutherland, C., Doiron, B., & Longtin, A. (2009). Feedback-induced gain control in stochastic spiking networks. Biological Cybernetics, 100(6), 475–489.


  • Tang, A., Jackson, D., Hobbs, J., Chen, W., Smith, J. L., Patel, H., et al. (2008). A maximum entropy model applied to spatial and temporal correlations from cortical networks in vitro. Journal of Neuroscience, 28(2), 505–518.


  • Tkačik, G., Marre, O., Amodei, D., Schneidman, E., Bialek, W., & Berry, M. J. (2014). Searching for collective behavior in a large network of sensory neurons. PLoS Computational Biology, 10(1), e1003408.


  • Tkačik, G., Mora, T., Marre, O., Amodei, D., Palmer, S. E., Berry, M. J., et al. (2015). Thermodynamics and signatures of criticality in a network of neurons. Proceedings of the National Academy of Sciences USA, 112(37), 11508–11513.


  • Yu, S., Huang, D., Singer, W., & Nikolic, D. (2008). A small world of neuronal synchrony. Cerebral Cortex, 18(12), 2891–2901.



Acknowledgements

This chapter is an extended version of a manuscript posted on arXiv (Shimazaki, H., Neurons as an information-theoretic engine. arXiv:1512.07855, 2015). The author thanks J. Gaudreault, C. Donner, D. Hirashima, S. Koyama, S. Amari, and S. Ito for critically reading the original manuscript.

Author information


Corresponding author

Correspondence to Hideaki Shimazaki.



Copyright information

© 2018 Springer International Publishing AG

About this chapter

Cite this chapter

Shimazaki, H. (2018). Neural Engine Hypothesis. In: Chen, Z., Sarma, S.V. (eds) Dynamic Neuroscience. Springer, Cham. https://doi.org/10.1007/978-3-319-71976-4_11


  • DOI: https://doi.org/10.1007/978-3-319-71976-4_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-71975-7

  • Online ISBN: 978-3-319-71976-4

  • eBook Packages: Engineering
