Energetics of stochastic BCM type synaptic plasticity and storing of accurate information

Excitatory synaptic signaling in cortical circuits is thought to be metabolically expensive. Two fundamental brain functions, learning and memory, are associated with long-term synaptic plasticity, but we know very little about the energetics of these slow biophysical processes. This study investigates the energy requirements of information storage in plastic synapses for an extended version of BCM plasticity with a decay term, stochastic noise, and a nonlinear dependence of the neuron's firing rate on synaptic current (adaptation). It is shown that synaptic weights in this model exhibit bistability. In order to analyze the system analytically, it is reduced to a simple dynamic mean-field model for the population-averaged plastic synaptic current. Next, using the concepts of nonequilibrium thermodynamics, we derive the energy rate (entropy production rate) for plastic synapses and a corresponding Fisher information for coding presynaptic input. That energy, which is of chemical origin, is primarily used for combating fluctuations in the synaptic weights and presynaptic firing rates; it increases steeply with synaptic weight, and more uniformly, though nonlinearly, with presynaptic firing. At the onset of synaptic bistability, the Fisher information and the memory lifetime both increase sharply, by a few orders of magnitude, while the plasticity energy rate changes only mildly. This implies that a huge gain in the precision of stored information need not cost large amounts of metabolic energy, which suggests that synaptic information is not directly limited by energy consumption. Interestingly, for very weak synaptic noise, such a limit on synaptic coding accuracy is instead imposed by the derivative of the plasticity energy rate with respect to the mean presynaptic firing, and this relationship has a general character, independent of the plasticity type. An estimate for primate neocortex reveals that the relative metabolic cost of BCM type synaptic plasticity, as a fraction of the neuronal cost related to fast synaptic transmission and spiking, can vary from negligible to substantial, depending on the synaptic noise level and presynaptic firing.

Supplementary Information: The online version contains supplementary material available at 10.1007/s10827-020-00775-0.


1 Determination of the fixed points of Eq. (3).
Fixed points of Eq. (3) in the main text in the limit N → ∞, i.e., without noise, are determined from the condition dv/dt = 0. Since the variables v and r are monotonically related by Eq. (6) in the main text, we can invert this formula and express r equivalently as a function of v (Eq. (1) here). This step enables us to solve the equation dv/dt = 0 not in the variable v but in the variable r, which is much easier. Thus, the fixed points of Eq. (3) in the main text are determined from the fixed-point condition rewritten in terms of r, which can be rearranged into a polynomial of the third degree in r. We solve this cubic iteratively, using an ε expansion, i.e., we look for a solution of the form r = r_0 + r_1 ε + r_2 ε² + O(ε³), where ε ≪ 1 and r_0, r_1, r_2 are unknowns to be determined.
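For reference, the generic structure of the collection step described below can be written compactly in LaTeX. Here Q(r; ε) is a placeholder name for the cubic (an illustrative notation, not the paper's), the ansatz is substituted, and all partial derivatives are evaluated at (r_0, 0):

% Generic Taylor collection for the ansatz r = r_0 + r_1 eps + r_2 eps^2;
% Q(r; eps) is a placeholder for the cubic, subscripts denote partial
% derivatives evaluated at (r_0, 0).
\begin{align*}
0 = Q(r;\varepsilon)
  &= \underbrace{Q}_{R_0}
   + \varepsilon\,\underbrace{\bigl(r_1 Q_r + Q_\varepsilon\bigr)}_{R_1} \\
  &\quad + \varepsilon^2\,\underbrace{\bigl(r_2 Q_r + \tfrac{1}{2} r_1^2 Q_{rr}
   + r_1 Q_{r\varepsilon} + \tfrac{1}{2} Q_{\varepsilon\varepsilon}\bigr)}_{R_2}
   + O(\varepsilon^3).
\end{align*}

Setting R_0 = R_1 = R_2 = 0 order by order is exactly the procedure carried out next.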
This expansion corresponds to the case in which the basal conductance of the synapse/spine is very small (thin spines conduct electric current only weakly). After the above substitution for r, Eq. (3) becomes a series in powers of ε, and the coefficients R_0, R_1, R_2 multiplying the successive powers of ε must each vanish; this requirement yields Eqs. (5)-(7). Solving the system (5)-(7) for r_0, r_1, r_2 leads to three different solutions for r (three states: down (d), up (u), and intermediate (max)), denoted r_d, r_u, and r_max and expressed through two auxiliary roots r_+ and r_−. The condition for the existence of r_u and r_max is Δ ≥ 0.
Using Eq. (1) again, we obtain v_d, v_u, and v_max to leading order in ε; these expressions appear in the main text.
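To make the order-by-order procedure concrete, here is a minimal sympy sketch for a hypothetical cubic with a small ε-dependent term; the coefficients a, b, c, d and the placement of ε are illustrative placeholders, not those of the paper's cubic:

import sympy as sp

# Epsilon-expansion on a HYPOTHETICAL cubic a*r^3 + b*r^2 + c*r + eps*d = 0,
# chosen only to illustrate the method (it is not the paper's cubic).
eps = sp.symbols('epsilon', positive=True)
a, b, c, d = sp.symbols('a b c d', nonzero=True)
r0, r1, r2 = sp.symbols('r_0 r_1 r_2')

# Ansatz r = r_0 + r_1*eps + r_2*eps**2, as in the text.
r = r0 + r1*eps + r2*eps**2
poly = sp.expand(a*r**3 + b*r**2 + c*r + eps*d)

# Collect the coefficients R_0, R_1, R_2 of eps**0, eps**1, eps**2.
R = [poly.coeff(eps, n) for n in range(3)]

# Leading order: R_0 = r_0*(a*r_0**2 + b*r_0 + c) = 0 gives three roots,
# the analogue of the three states (d, u, max).
print(sp.solve(sp.Eq(R[0], 0), r0))

# First-order correction for the trivial leading root r_0 = 0:
print(sp.solve(sp.Eq(R[1].subs(r0, 0), 0), r1))  # -> [-d/c]

The same pattern, collecting the coefficient of each power of ε and solving order by order, applies to the paper's cubic, whose ε-dependence enters through the small basal conductance.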
2 Entropy production and entropy flux for nonequilibrium systems.
In this section we derive the expressions for the entropy production and the entropy flux, the latter given by Eq. (52) in the main text.
Eq. (34) in the main text can be written as a continuity equation (Eq. (12) here), where J is the probability flux defined in Eq. (13). The temporal derivative of the entropy S, defined as S = −∫ dv P(v|f_o) ln P(v|f_o), can then be evaluated with the help of Eq. (12) [1]. The resulting integral can be done by parts (using the boundary condition that P and its derivative vanish at the boundaries), which leads to Eq. (15). From Eq. (13) above, the quantity ∂P/(P ∂v) can be expressed through F − J/P; substituting this expression into Eq. (15) and rearranging, we obtain dS/dt = Π − Γ, where Π is the entropy production and Γ is the entropy flux. Entropy production is a measure of how fast energy is dissipated in the system in the form of heat (in units of s⁻¹). At a nonequilibrium steady state (the case considered in the paper), both Π and Γ are nonzero and equal to each other. Thus, to obtain the entropy production rate at the steady state, it is enough to determine Γ, which is easier to compute.
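For concreteness, here is the standard form of this entropy balance for a one-dimensional Fokker-Planck equation with drift F(v) and constant diffusion coefficient D; this is a sketch matching the steps above, and the identification of D with the paper's noise amplitude (e.g., D = σ²/2) is an assumption:

% Standard 1D Fokker-Planck entropy balance (cf. [1]); D is a constant
% diffusion coefficient, and boundary terms are dropped as in the text.
\begin{align*}
\frac{\partial P}{\partial t} = -\frac{\partial J}{\partial v},
\qquad
J = F(v)\,P - D\,\frac{\partial P}{\partial v},
\end{align*}
\begin{align*}
\frac{dS}{dt} = \Pi - \Gamma,
\qquad
\Pi = \frac{1}{D}\int dv\,\frac{J^2}{P} \;\ge\; 0,
\qquad
\Gamma = \frac{1}{D}\int dv\,F\,J .
\end{align*}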
Let us find a more useful form of the entropy flux. To this end, we insert Eq. (13) for J into Eq. (17) and decompose the resulting integral into two integrals, one containing P and the other containing ∂P/∂v. The second integral is done by parts, and we obtain an equivalent expression for the entropy flux.
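Under the same assumptions as in the sketch above (drift F, constant diffusion D, vanishing boundary terms), this calculation gives:

% Equivalent form of the entropy flux; <.> denotes an average over P(v|f_o).
\begin{align*}
\Gamma
= \frac{1}{D}\int dv\,F^2 P \;-\; \int dv\,F\,\frac{\partial P}{\partial v}
= \frac{1}{D}\,\bigl\langle F^2 \bigr\rangle
  \;+\; \Bigl\langle \frac{\partial F}{\partial v} \Bigr\rangle .
\end{align*}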
3 Saddle point approximation.
In this section it is shown how the saddle point approximation works. Let us consider an integral I of an arbitrary smooth function G(v) against a weighting factor that is sharply peaked at v = v_o. The idea is that for small σ the biggest contribution to the integral comes from values of v close to v_o. Specifically, we change the variable of integration from v to a new variable x that measures the deviation (v − v_o) in units of σ. In the new variable, the integral I takes a Gaussian form with a finite lower limit of integration x_0. The next step is to expand the function G in powers of x and to compute the emerging integrals, which can be expressed through Γ(x_1) and Γ(x_1, x_2), the standard Gamma functions of one and two arguments [2]. In the limit σ → 0, the two-argument (incomplete) Gamma function approaches zero exponentially (∼ exp(−x_0²)), since x_0 → −∞, and hence that contribution can be neglected. Taking this into account, we can write the result for the integral I as a series in powers of σ. Because of the factor [1 + (−1)^n], the terms with odd n in this series vanish.
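As a numerical sanity check of this small-σ expansion, the following sketch compares direct quadrature with the first two nonvanishing terms of the series, assuming a Gaussian weight exp(−(v − v_o)²/(2σ²)) on [0, ∞) with v_o > 0; the kernel, its normalization, and the test function G are assumptions made for illustration, not necessarily the paper's choices:

import numpy as np
from scipy.integrate import quad

# Saddle-point check: normalized average of G(v) under a Gaussian weight
# exp(-(v - v0)**2 / (2*sigma**2)) restricted to v in [0, inf).
def I_exact(G, v0, sigma):
    w = lambda v: np.exp(-(v - v0)**2 / (2.0 * sigma**2))
    # Integrate over [max(0, v0 - 12*sigma), v0 + 12*sigma]; the weight is
    # negligible outside this window, and the finite window keeps quad
    # accurate even for very small sigma.
    lo, hi = max(0.0, v0 - 12.0 * sigma), v0 + 12.0 * sigma
    num, _ = quad(lambda v: G(v) * w(v), lo, hi)
    den, _ = quad(w, lo, hi)
    return num / den

def I_saddle(G, d2G, v0, sigma):
    # First two nonvanishing terms: the odd orders drop out, as in the text.
    return G(v0) + 0.5 * sigma**2 * d2G(v0)

G   = lambda v: np.log(1.0 + v**2)             # arbitrary smooth test function
d2G = lambda v: 2.0 * (1.0 - v**2) / (1.0 + v**2)**2

for sigma in (0.5, 0.1, 0.02):
    print(sigma, I_exact(G, 2.0, sigma), I_saddle(G, d2G, 2.0, sigma))

As σ decreases, the two values should agree increasingly well: with the odd terms absent, the first neglected correction is O(σ⁴), apart from boundary contributions that vanish exponentially, in line with the Gamma-function argument above.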