# Combined mechanisms of neural firing rate homeostasis


## Abstract

Spikes in the membrane potential of neurons comprise the currency of information processing in the brain. The ability of neurons to convert any information present across their multiple inputs into a significant modification to the pattern of their emitted spikes depends on the rate at which they emit spikes. If the mean rate is near the neuron’s maximum, or if the rate is near zero, then changes in the inputs have minimal impact on the neuron’s firing rate. Therefore, a neuron needs to control its mean rate. Protocols that either dramatically increase or decrease a neuron’s firing rate lead to multiple compensatory changes that return the neuron’s mean rate toward its prior value. In this primer, first as a summary of our previous work (Cannon and Miller in J Neurophysiol 116(5):2004–2022, 2016; Cannon and Miller in J Math Neurosci 7(1):1, 2017), we describe the advantages and disadvantages of having more than one such control mechanism responding to the neuron’s firing rate. We suggest how problems of two, coexisting, potentially competing mechanisms can be overcome. Key requirements are: (1) the control be of a distribution of values, which the controlled variable achieves over a fast timescale compared to the timescale of the control system; (2) at least one of the control mechanisms be nonlinear; and (3) the two control systems are satisfied by a stable distribution or range of values that can be achieved by the variable. We show examples of functional control systems, including the previously studied integral feedback controller and new simulations of a “bang–bang” controller, that allow for compensation when inputs to the system change. Finally, we present new results describing how the underlying signal processing pathways would produce mechanisms of dual control, as opposed to a single mechanism with two outputs, and compare the responses of these systems to changes of input statistics.

## Keywords

Homeostasis · Integral feedback control · Stochastic · Integrator · Feedback · Control theory · Mutual information

## 1 Introduction

Homeostasis is a common requirement for a diverse range of biological systems. From the biochemical to the organismal scale, biological processes operate near-optimally under some conditions, but under other conditions, they fail. These conditions—such as size, temperature, acidity—are typically under tight control. The control of these conditions is homeostatic, meaning that following perturbations that cause a deviation from a target value or range, biological pathways are engaged to counter that change and return the condition to equilibrium.

### 1.1 Homeostasis of neural firing rates

Neural firing rates are homeostatic, but in an unusual sense. Since variations in neural activity are necessary to convey information—and such conveyance of information is the raison d’être of neural activity—a homeostatic controller that enforced a fixed firing rate of a neuron would render the neuron useless. Rather, a homeostatic controller for neural activity might need to ensure the average rate, when sampled over a reasonable period of time, is maintained in a suitable range. Or, even more preferably, the controller might need to ensure the neuron’s firing rate varies over a particular range in order to transmit useful information.

Such behavior is in contrast to typical homeostatic controllers, for which any deviation from the mean is undesirable. For example, strong variability in body temperature would be disastrous for most warm-blooded animals, even if the mean temperature were maintained. Therefore, we might expect the control system for neural firing rate to have different properties from that of many other controllers—in particular, the timescale of feedback can (and should) be a lot slower than the timescale of variation in firing rates. See the work of O’Leary and Wyllie (2011) for a more thorough introduction to the use of control theory for homeostasis of cellular processes.

Equation 4 indicates how the neuron’s firing rate depends on time-varying excitatory inputs, \( S_{\text{E}} \left( t \right) \), and inhibitory inputs, \( S_{\text{I}} \left( t \right) \), via excitatory and inhibitory synaptic conductance, \( g_{\text{E}} \) and \( g_{\text{I}} \), respectively. The variable, \( T \), represents a threshold—the amount of input the neuron requires before significant activity. In response to changes in input, a neuron’s firing rate changes very rapidly, over a timescale on the order of 10 ms, so the time constant, \( \tau_{r} \), is short and of this order. On the other hand, the homeostatic response described by Eq. 5 is much slower, with changes in synaptic conductance due to a neuron’s firing rate alone requiring at least tens of minutes and probably several hours to reach measurable levels. Therefore, the time constant, \( \tau_{g} \), for these homeostatic changes is much larger than \( \tau_{r} \). Equation 5 then represents an integration of the error signal, where the target rate, \( r_{\text{tgt}} \), represents a fixed point of the system. If the synaptic inputs vary only slowly compared to \( \tau_{g} \), then the system will reach the fixed point with \( r = r_{\text{tgt}} \). When the synaptic inputs are more rapidly varying—as they are in vivo—then the changes in excitatory conductance, \( g_{\text{E}} \), cannot counteract any rapid, transient change in firing rate, \( r \). However, the conductance will, for example, increase more than it will decrease over time if the rate fluctuates more often, or to a greater extent, below rather than above the target rate. In this manner, Eq. 5 leads to a control of the mean of the firing rate, \( r \).
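The interplay of timescales in Eqs. 4 and 5 can be sketched numerically. The following Python toy model is a sketch under illustrative assumptions (a threshold-linear rate function, Gaussian input drive, made-up parameter values); it is not the authors' published MATLAB code:

```python
import numpy as np

def simulate(r_tgt=5.0, tau_r=0.01, tau_g=5.0, dt=0.001, t_end=100.0, seed=0):
    """Fast rate dynamics (cf. Eq. 4) under slow integral control of g_E (cf. Eq. 5).

    The rate r relaxes quickly (tau_r ~ 10 ms) toward a threshold-linear
    function of the synaptic drive, while g_E integrates the error
    (r_tgt - r) on a much slower timescale tau_g.  All parameter values
    here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    n = int(t_end / dt)
    r, g_E, T = 0.0, 0.5, 1.0
    rates = np.empty(n)
    for i in range(n):
        S_E = 10.0 + rng.normal(0.0, 2.0)      # noisy excitatory drive
        drive = max(g_E * S_E - T, 0.0)        # threshold-linear f-I curve
        r += dt * (-r + drive) / tau_r         # fast rate dynamics
        g_E += dt * (r_tgt - r) / tau_g        # slow integral feedback
        rates[i] = r
    return rates, g_E
```

Because the controller integrates the error, the time-averaged rate is pinned at \( r_{\text{tgt}} \) even though individual fluctuations are far too fast for \( g_{\text{E}} \) to track.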

## 2 The benefit of multiple control mechanisms

A single controller can control a single variable or, more precisely, a single function (such as the mean) of that variable when it fluctuates. Control of a neuron’s mean firing rate is important, in simplest terms because each action potential produced by a neuron costs energy, so excessive activity is wasteful. On the other hand, a neuron that rarely or never produces action potentials is taking up resources to function as a cell but is of minimal value for information processing.

When we consider the number of successive feedforward connections and, even more importantly, the feedback connections, the need for a control system becomes paramount (Turrigiano and Nelson 2004). For example, an imbalance between excitatory and inhibitory feedback connections in the hippocampus can lead to hyperexcitability (Bateup et al. 2013) and possibly contributes to epilepsy (Badawy et al. 2012).

Yet, as well as maintaining a satisfactory mean level of activity, neurons must also vary their firing rates to perform any useful information processing. One can think of neurons mapping a range of inputs into a range of outputs according to their firing rate response curves. These curves have particular regions over which changes in input cause significant changes in output. Control of the mean firing rate of a neuron could ensure the mean input falls in the neuron’s responsive range. However, it may be better to also match the range of inputs to the entire responsive range of the neuron, so that any change in input leads to a change in output. To achieve such responsiveness, a neuron would need to control the variance of its firing rate, which—as we will see in Sect. 3—requires cooperation of two controllers (Triesch 2007). Here we focus on sensory systems, in which the input statistics may not be known a priori and a neuron's spikes should contain some useful information about the current input.

### 2.1 Mutual information

In order to quantify the above argument, we chose to measure the mutual information between a simulated neuron’s input and its output when additional noise is included within the neuron. We selected ten equally spaced levels of input current, switching between these levels every 500 ms. By calculating the mean rate of the neuron and comparing that to the input current during each period, we assessed how well a simulated neuron distinguished between the different levels of input (for details see Cannon and Miller 2016).
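The comparison between input level and mean rate can be cast as a mutual-information estimate. Below is a plug-in estimator over the joint histogram of (input level, binned response), written as an illustrative Python sketch rather than the authors' exact method:

```python
import numpy as np

def mutual_information(inputs, responses, n_bins=10):
    """Plug-in estimate of I(input; response) in bits from paired samples.

    inputs are discrete level indices; responses are binned into n_bins
    equal-width bins.  This is an illustrative estimator, not the exact
    procedure of Cannon and Miller (2016).
    """
    edges = np.histogram_bin_edges(responses, bins=n_bins)
    resp_bins = np.clip(np.digitize(responses, edges) - 1, 0, n_bins - 1)
    joint = np.zeros((int(inputs.max()) + 1, n_bins))
    for x, y in zip(inputs, resp_bins):
        joint[x, y] += 1
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)        # marginal over inputs
    py = p.sum(axis=0, keepdims=True)        # marginal over responses
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (px * py))   # zero cells give NaN, dropped below
    return np.nansum(terms)
```

Applied to, say, ten equally spaced input levels with small response noise, the estimate approaches \( \log_{2} 10 \approx 3.3 \) bits, and drops toward zero when responses are shuffled.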

While the results presented here are produced with a firing rate model neuron (Eq. 4), similar behavior arises from a voltage-based spiking-neuron model (Cannon and Miller 2016).

### 2.2 Tuning an integrator in a neural circuit

Mutual information simply indicates how much information about an input can be obtained from an output, so is really a measure of information transfer. In the brain, however, neural circuits must do more than transfer information in their sensory inputs. Rather, brains must allow us to combine separate sources of information over multiple timescales and calculate a desired response. Of the many computations required for such neural processing, one that is particularly amenable to study in the context of fine-tuning is the integration of inputs across time.

Neural integrators have been studied in the context of gaze control (Goldman et al. 2002, 2003; Seung 1996; Seung et al. 2000a), short-term memory of a continuous quantity (called parametric working memory) (Romo et al. 1999; Miller et al. 2003; Machens et al. 2005; Miller and Wang 2006), and decision making (Huk and Shadlen 2005; Miller and Katz 2010, 2013; Wong et al. 2007; Wong and Wang 2006; Wang 2002, 2008). An important property of an integrator is that transient inputs shift the activity of the integrator (up or down depending on whether the input is positive or negative) but the history of inputs accumulates and remains when the input is removed.

The effective time constant in Eq. 7 is \( \tau_{r} /\left( {1 - \alpha g_{\text{E}} } \right) \), which extends to infinity in the limit \( \alpha g_{\text{E}} = 1 \). When \( \alpha g_{\text{E}} < 1 \), the firing rate converges to a fixed point, and when \( \alpha g_{\text{E}} > 1 \), the firing rate increases without bound. In the limit of \( \alpha g_{\text{E}} = 1 \), the firing rate will represent a perfect integration of the input, \( S_{\text{in}} \), so long as the threshold is also controlled, such that \( T = 0 \). That is, in the space of possible values of \( g_{\text{E}} \) and \( T \), only a single point corresponds to a perfect integrator, so a dual-control system would be essential for this point to be reached and integration produced in such a feedback circuit.
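The knife-edge nature of this tuning is easy to demonstrate. In the sketch below (illustrative Python with assumed parameter values), the rate unit holds a transient input pulse only when \( \alpha g_{\text{E}} = 1 \) and \( T = 0 \); with \( \alpha g_{\text{E}} < 1 \) the accumulated activity leaks away with time constant \( \tau_{r}/(1 - \alpha g_{\text{E}}) \):

```python
import numpy as np

def run(alpha_gE, T=0.0, tau_r=0.01, dt=0.001, n=1000):
    """Rate unit with self-excitation strength alpha_gE (cf. Eq. 7).

    A brief input pulse is applied, then removed.  A perfect integrator
    (alpha_gE = 1, T = 0) holds the accumulated activity afterwards,
    while alpha_gE < 1 lets it decay with time constant
    tau_r / (1 - alpha_gE).  Parameters are illustrative.
    """
    r = 0.0
    trace = np.empty(n)
    for i in range(n):
        S_in = 1.0 if i < 100 else 0.0               # transient input pulse
        r += dt * (-r + alpha_gE * r + S_in - T) / tau_r
        trace[i] = r
    return trace
```

With \( \alpha g_{\text{E}} = 1 \) the pulse is integrated to a plateau that persists indefinitely; with \( \alpha g_{\text{E}} = 0.9 \) the same pulse decays within a second.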

## 3 The problem of competing controllers

However, the problem of wind-up can be avoided if certain conditions hold. Two necessary conditions are: (1) The signal fluctuates on a rapid timescale compared to the timescale of feedback control, and (2) feedback from the controllers is based on different functions of the signal, at most one of which is linear. The first condition allows the controllers to act upon the distribution of values attained by the signal, rather than a single value. The second condition ensures the controllers are fixing different moments or different combinations of moments of that distribution (see also Triesch 2007). We will show later that two additional conditions must be satisfied when there are two controllers, one to ensure that the target distribution has nonnegative moments and the other to ensure the targets correspond to stable rather than unstable fixed points (Cannon and Miller 2016, 2017).

### 3.1 A simple dual-control model

The first controller (Eq. 8) adjusts the neuron’s threshold to ensure its mean firing rate satisfies \( \langle r \rangle = r_{\text{goal}(T)} \). In practice, such control of intrinsic excitability (Desai et al. 1999) could be based on adjustment of the number of sodium and/or potassium channels inserted in the membrane of the soma near the axon hillock, where spikes are generated. The second controller (Eq. 9) adjusts synaptic conductance to ensure that the neuron’s mean-squared firing rate satisfies \( \langle r^{2} \rangle = r_{\text{goal}(g)}^{2} \). In practice, such control could be based on adjustment of the number or type of glutamate receptors in the postsynaptic density.

By combining Eqs. 8 and 9, we see that both the mean and the variance of the firing rate are set by such a dual-control system, with \( \text{Var}(r) = \langle r^{2} \rangle - \langle r \rangle^{2} = r_{\text{goal}(g)}^{2} - r_{\text{goal}(T)}^{2} \). Since the variance of any distribution must be nonnegative (and positive for a fluctuating rate), we immediately obtain the condition on the two target rates, \( r_{\text{goal}(g)}^{2} > r_{\text{goal}(T)}^{2} \), or (since any rate must be positive) \( r_{\text{goal}(g)} > r_{\text{goal}(T)} \) (see Fig. 3).

At first sight, an alternative control system seems possible, with \( \frac{{{\text{d}}g_{\text{E}} }}{{{\text{d}}t}} \) depending linearly on the rate and \( \frac{{{\text{d}}T}}{{{\text{d}}t}} \) following a quadratic dependence, so long as the opposite condition is satisfied, \( r_{\text{goal}(T)} > r_{\text{goal}(g)} \). Indeed, such a control system, with the synaptic conductance and threshold controllers flipped, does possess a fixed point with allowable firing rate variance greater than zero. However, the flipped control system turns out to be unstable. The instability arises because synaptic conductance scales the variance of firing rates while the threshold shifts only the mean (this can be seen from Eq. 6, where the inputs are multiplied by \( g_{\text{E}} \), and a multiplicative factor changes the variance, while the threshold is subtracted so has no impact on the variance). In a situation in which the firing rate variance is too high but the mean is too low, the neuron should decrease the synaptic conductance and decrease the threshold to approach the fixed point determined by the target rates. However, in the flipped system, if the mean rate is too low then synaptic conductance would increase, while the variance being too high might cause a concomitant increase in threshold. Therefore, the flipped controller adjusts the system’s parameters in the direction opposite to that needed to reach the fixed point. Simulations and mathematical analysis of the stability of fixed points verify this intuition (Cannon and Miller 2017).
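The stable assignment, with the threshold integrating the linear error and the synaptic gain integrating the quadratic error, can be simulated directly. The following Python sketch uses an instantaneous threshold-linear rate and illustrative input statistics, targets, and learning rates, not the paper's parameters:

```python
import numpy as np

def dual_control(r_mean_goal=5.0, r_sq_goal=30.0, eta=1e-3, n=200000, seed=0):
    """Dual homeostatic control of the mean and mean-square firing rate.

    In the spirit of Eqs. 8-9: the threshold T integrates the linear
    error (r - r_mean_goal) and the synaptic gain g integrates the
    quadratic error (r**2 - r_sq_goal).  The input fluctuates fast
    relative to both controllers, so they act on the distribution of
    rates rather than on a single value.  All numbers are illustrative.
    """
    rng = np.random.default_rng(seed)
    g, T = 1.0, 0.0
    rates = np.empty(n)
    for i in range(n):
        s = rng.normal(10.0, 3.0)                  # fast-fluctuating input
        r = max(g * s - T, 0.0)                    # instantaneous rate (cf. Eq. 6)
        T += eta * (r - r_mean_goal)               # linear controller on T
        g += eta * 0.01 * (r_sq_goal - r * r)      # quadratic controller on g
        rates[i] = r
    return rates, g, T
```

At equilibrium both \( \langle r \rangle \) and \( \langle r^{2} \rangle \) settle at their targets, so the variance \( r_{\text{goal}(g)}^{2} - r_{\text{goal}(T)}^{2} \) is set as well.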

To avoid the need for fine-tuning, the two control functions, \( f_{T} \) and \( f_{g} \), should have different curvatures at their target rates, in which case the function with greater curvature (strictly its second derivative divided by its first derivative) should be \( f_{g} \), the one for the synaptic input controller (Cannon and Miller 2017).

Indeed, we were able to show that a control system with these constraints and appropriate target/goal rates could, from arbitrary initial conditions, adapt its threshold and synaptic input conductance so as to maximize mutual information between a neuron’s activity and its inputs, and to optimize performance of a neural integrator (Cannon and Miller 2016). More importantly, following any change in input statistics, or in the case of the integrator circuit, loss of 20% of the neurons providing feedback, the control system led to recovery of optimal function.

### 3.2 A “bang–bang” controller

Such controllers with binary outputs are known as “bang–bang” controllers. Each controller ensures that the neuron spends a certain fraction of its time, \( p_{T} \) or \( p_{g} \), with its firing rate above that controller’s step-point, respectively \( r_{\text{step}(T)} \) and \( r_{\text{step}(g)} \).

For two bang–bang controllers to cooperate rather than compete, the target fraction of time above the higher step-point must be less than the fraction above the lower step-point (see Fig. 4). Also, as in the previous example, in order for a system with step-controllers to approach a stable equilibrium, the controller with the higher step-point should be the one that scales the synaptic conductance rather than the threshold. That is, \( r_{{{\text{step}}\left( g \right)}} > r_{{{\text{step}}\left( T \right)}} \) and \( p_{T} > p_{g} \). Indeed, if we simulate the system with \( r_{{{\text{step}}\left( g \right)}} < r_{{{\text{step}}\left( T \right)}} \), the synaptic input conductance rapidly decreases to zero and the neuron becomes unresponsive.

While a system of dual bang–bang controllers can be more robust, it has the disadvantage of lacking a stable fixed point. For example, from Eq. 13, the excitatory synaptic conductance is either increasing at a constant rate of \( p_{g} /\tau_{g} \) when \( r < r_{{{\text{step}}\left( g \right)}} \) or decreasing at a constant rate of \( \left( {1 - p_{g} } \right)/\tau_{g} \) when \( r > r_{{{\text{step}}\left( g \right)}} \). For many control systems, the consequent alternation of switching on then switching off of different systems is undesirable. However, a living cell such as a neuron is a dynamic system whose ion channels and receptors—which determine its synaptic conductance—are mobile, being replaced and manufactured unceasingly. Therefore, a bang–bang controller, by simply switching between two distinct rates either of insertion or removal of membrane proteins, can be feasibly implemented with no additional “operational costs” to the cell (O’Leary et al. 2013, 2014).
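The switching behavior is simple to emulate. In this Python sketch (illustrative input statistics and step-points, chosen so that \( r_{\text{step}(g)} > r_{\text{step}(T)} \) and \( p_{T} > p_{g} \) as required), each parameter ratchets up or down at one of two constant rates, and at equilibrium the rate spends the target fraction of time above each step-point:

```python
import numpy as np

def bang_bang(r_step_T=4.0, r_step_g=8.0, p_T=0.5, p_g=0.2,
              eta=2e-3, n=400000, seed=0):
    """Dual bang-bang control of threshold T and synaptic gain g.

    Each controller switches between two constant rates of change
    according to whether the instantaneous rate is above or below its
    step-point (cf. Eq. 13), so that at equilibrium the rate lies above
    r_step_T for a fraction p_T of the time and above r_step_g for a
    fraction p_g.  All numbers are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    g, T = 1.0, 0.0
    rates = np.empty(n)
    for i in range(n):
        s = rng.normal(10.0, 3.0)                          # fast-fluctuating input
        r = max(g * s - T, 0.0)                            # instantaneous rate
        T += eta * ((1.0 - p_T) if r > r_step_T else -p_T)
        g += eta * (-(1.0 - p_g) if r > r_step_g else p_g)
        rates[i] = r
    return rates, g, T
```

Note that neither \( g \) nor \( T \) ever settles at a point; each performs a bounded random walk whose up and down steps balance exactly when the time-above-step-point condition is met, which is the behavior described above.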

## 4 Underlying processes

### 4.1 The need for a biochemical integrator

Engineered controllers include, in addition to integral controllers, proportional controllers and derivative controllers, in which the feedback signal is either proportional to the system’s output or to its derivative (O’Leary and Wyllie 2011). More commonly, combinations of the three are used. However, to be at all useful, the proportional and derivative controllers should respond rapidly compared to the timescale of signal variations—as does, for example, the rapid feedback from interneurons in many cortical circuits. The observed slower homeostatic control is more compatible with integral feedback control, which at heart requires a biochemical process within the neuron to integrate any error signal (Somvanshi et al. 2015).

The rate of loss of \( P \) could be independent of \( P \) if the amount of \( P \) saturates an enzyme or any other degradation system that can cause \( P \) to decrease. The Michaelis–Menten reaction scheme is an example of such saturating enzymatic kinetics, whereby the rate of loss would be proportional to \( \frac{P}{{K_{\text{M}} + P}} \), which is approximately independent of \( P \) so long as \( P \gg K_{\text{M}} \) (where \( K_{\text{M}} \) is the Michaelis constant for the reaction, dependent on the enzymatic binding and dissociation rate constants). In general, such zeroth-order kinetics arises for many biochemical reactions whose rates are fit as Hill functions, with rate proportional to \( \frac{{P^{n} }}{{K_{\text{H}}^{n} + P^{n} }} \) for Hill coefficient \( n \), so long as \( P \gg K_{\text{H}} \).
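The integrating effect of such saturating degradation can be seen in a two-line model. Here is a sketch (with illustrative rate constants) in which \( P \) is produced at an activity-dependent rate \( k_{\text{in}} \) and degraded with Michaelis–Menten kinetics; when \( P \gg K_{\text{M}} \), the loss rate is essentially the constant \( v_{\max} \), so \( P \) accumulates the integral of the error \( k_{\text{in}} - v_{\max} \):

```python
import numpy as np

def integrate_error(k_in, v_max=1.0, K_M=0.01, P0=5.0, dt=0.01):
    """Accumulate P with dP/dt = k_in(t) - v_max * P / (K_M + P).

    With P >> K_M the degradation term is ~v_max, independent of P, so
    P integrates the error (k_in - v_max).  Constants are illustrative.
    """
    P = P0
    for k in k_in:
        P += dt * (k - v_max * P / (K_M + P))
        P = max(P, 0.0)                  # concentrations stay nonnegative
    return P
```

A constant production rate above \( v_{\max} \) makes \( P \) climb linearly, and one below \( v_{\max} \) makes it fall linearly, exactly the behavior of an error integrator rather than a filter.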

Finally, the only accumulated quantity in a neuronal feedback system could be the number of receptors or channels in the membrane if the rate of removal of channels is saturated. For example, if proteins in the ubiquitination pathway are required to internalize and remove channels, and those proteins are in relatively short supply, then the rate of loss of channels could be independent of the total number of channels. In that case, the integrator in the feedback system would be at the final stage. In particular, when considering homeostasis of synaptic strength and homeostasis of intrinsic excitability, the two processes would unavoidably have different integrators and should be thought of as different controllers. The set-point of each type of channel/receptor would be established by its fixed rate of loss from the membrane (likely to be different for different channels/receptors in different parts of the cell) being matched by an activity-dependent rate of insertion.

### 4.2 The problem of filtering

As we have seen, a homeostatic control system requires components that can be stable at many different levels. For example, if a neuron is to raise its excitability (i.e., reduce its threshold) in response to a reduction in excitatory input it would need to adjust and then stably maintain a new density of sodium and/or potassium channels in its soma. Key to the production of such multi-stable components is an underlying process whose reverse rate does not depend on the accumulated state. In a biochemical system, this corresponds to a zeroth-order reaction rate, such as \( D_{g} \left( g \right) \) in Eq. 21 and \( D_{T} \left( T \right) \) in Eq. 22 being independent of \( g \) and \( T \) respectively.

Indeed, in a homeostatic control system, one might be able to identify the zeroth-order reaction as the point of integration and thus the controller in the system. To assess whether the system has a single homeostatic controller or two such controllers becomes a question of where in the feedback system—i.e., before or after a single feedback signal branches into two distinct signaling pathways—the point of integration arises. For example, at the extreme end, if integration occurs only at the point of channel insertion and removal—i.e., if \( D_{g} \left( g \right) \) and \( D_{T} \left( T \right) \) were constants—then the control of synaptic strength must be distinct from the control of intrinsic excitability.

However, the stability and utility of dual controllers are compromised if the integration step is too far removed from the initial signal. This is because at each step in the signaling pathway, the initially rapidly fluctuating firing-rate-dependent calcium signal is filtered, damping the downstream fluctuations. Essentially, if rapid calcium fluctuations, varying on a timescale of 100 ms, are damped by processes which allow for variation on a timescale of 10 s, then the 100-fold increase in timescale causes a 10-fold reduction in the standard deviation (and thus a 100-fold reduction in the variance) of the downstream signal. Yet, as we saw in Sect. 2, both the benefit and the stability of a system with dual controllers rely upon setting a nonzero variance of the signal being controlled. If that signal is constrained to have low variance—by being a filtered version of a noisy signal—then the constraints on the controller’s set-points become too tight for wind-up to be avoided and/or the ability of the two controllers to control the firing rate variance (which has diminished impact on the signal they can control) is lost. Therefore, for dual control to be feasible, a method of maintaining a large variance in the input to the integrator is necessary. Such variance could be maintained by faithful transmission (without filtering) of the somatic calcium signal, which fluctuates rapidly according to the spike train. Alternatively, as we shall investigate in the next section, when filtering is present, nonlinearities in the signal processing pathway could boost the variance in the downstream signal.
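The square-root scaling of the damping can be checked numerically. The sketch below (illustrative Python: an Ornstein–Uhlenbeck process standing in for the fast calcium signal with a 100 ms correlation time, low-pass filtered with a 10 s time constant) shows the downstream standard deviation dropping by roughly the predicted factor of ten:

```python
import numpy as np

def ou(n, tau, dt, sigma, rng):
    """Ornstein-Uhlenbeck samples with stationary std sigma, correlation time tau."""
    x = np.empty(n)
    x[0] = 0.0
    a = np.exp(-dt / tau)
    b = sigma * np.sqrt(1.0 - a * a)
    for i in range(1, n):
        x[i] = a * x[i - 1] + b * rng.normal()
    return x

rng = np.random.default_rng(0)
dt = 0.01
fast = ou(400000, tau=0.1, dt=dt, sigma=1.0, rng=rng)  # fast "calcium" signal
slow = np.empty_like(fast)                             # low-pass filtered version
slow[0] = 0.0
tau_filter = 10.0                                      # 100-fold slower timescale
for i in range(1, len(fast)):
    slow[i] = slow[i - 1] + dt * (fast[i - 1] - slow[i - 1]) / tau_filter
# Predicted std ratio ~ sqrt(tau_fast / tau_filter) = 0.1
ratio = slow[len(slow) // 2:].std() / fast[len(fast) // 2:].std()
```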

### 4.3 The benefit of nonlinearities

The reduction in variability by temporal filtering and the resulting problems for a dual-control system can be ameliorated if the signaling pathways produce supralinear downstream responses. For example, introduction of a cubic or quartic nonlinearity can provide sufficient supralinearity of the response to allow a downstream signal to fluctuate substantially in spite of much temporal filtering of the upstream fluctuations in firing rate. Such a strong supralinearity is not unreasonable, since, for example, the very first step in most calcium signaling pathways is the binding of calcium by calmodulin, which is fully activated when bound to four calcium ions (so its activation can depend on up to the fourth power of calcium concentration).
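The variance-boosting effect of such a supralinearity is easy to verify: for small fractional fluctuations, passing a signal through a fourth power multiplies its coefficient of variation by about four. A sketch with an assumed, weakly fluctuating downstream signal:

```python
import numpy as np

rng = np.random.default_rng(0)
# Heavily filtered downstream signal: small fluctuations about a positive mean.
# The 5% fluctuation level is an illustrative assumption.
x = 1.0 + 0.05 * rng.standard_normal(200000)
y = x ** 4                     # calmodulin-like fourth-power activation

cv_x = x.std() / x.mean()      # coefficient of variation upstream
cv_y = y.std() / y.mean()      # ~4x larger downstream
```

This follows from the first-order expansion \( \delta(x^{4})/x^{4} \approx 4\,\delta x/x \): each power in the nonlinearity multiplies the relative fluctuation amplitude of the downstream signal.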

## 5 Summary and conclusions

In this article, we have considered how two control processes can cooperate rather than compete when they respond to the same signal, namely a single neuron’s firing rate. The presence of separate controllers for excitatory synaptic conductance and intrinsic excitability in neurons allows a neuron to produce outputs that vary across a desired range and to compensate for changes in both the variance and the mean of its inputs. However, when two controllers respond to the same variable, namely a neuron’s spike train, additional constraints are necessary to ensure the two controllers do not compete and produce “wind-up.” We have shown that the constraints include two essential features—the controlled variable should change rapidly compared to the timescale of the control process, and the controllers should respond nonlinearly to the controlled variable. Moreover, the controller that has a relatively greater impact on the gain of the system compared to the firing threshold should have a greater supralinearity in its response and a higher effective set-point.

If, instead, different controllers respond to distinct enough signals—for example, if one were to respond to synaptic input while the other responds to firing rate—then the system of distinct controllers would be more robust and competition between set-points avoided. An intriguing case arises if one control system is based on a single neuron’s firing rate while another monitors neighborhood activity. In such a situation, it is again possible for the controllers to compete since each neuron contributing to neighborhood activity also has its individual firing rate being controlled. Therefore, for the system to function, it would be important that neurons’ individual set-points are not incompatible with the set-point desired for the mean neighborhood activity. In such a situation, the issues addressed and techniques formulated in this article are relevant.

Whether a biochemical feedback system acts more as a single controller or a dual controller depends on whether the integration step occurs before or after two control pathways branch from each other. The point of integration can be determined by whether a variable in the system simply filters the output firing rate—so eventually returns to its prior level following compensation to a change in inputs—or integrates up the error and so remains at a distinct new level following a compensatory process. For example, somatic calcium concentration acts as a filter of spikes and indicates spike rate, so returns to its prior level following a period of disrupted activity which is then compensated for. On the other hand, conductance of excitatory synapses would remain elevated following such compensation. It will be of interest to assess whether activation of CaMKIV follows more the trajectory of internal calcium, or of synaptic conductance.

Therefore, to experimentally distinguish a dual-control system from a single control system with two outputs, it would be valuable to analyze measurements of the activation of CaMKIV both during and after homeostatic compensation from perturbations that change firing rates of neurons. If the change in firing rates causes a change in CaMKIV levels, but as firing rates return via cell-internal homeostatic mechanisms, so CaMKIV activation returns to baseline, then a dual-control mechanism is supported. However, if CaMKIV activation only shifts monotonically and remains shifted once the neuron returns to its prior activity patterns, then CaMKIV activation has the properties of an integrator for a single control system.

A second line of experiments that could weigh against a dual-control system would be observation of correlations between the abundance of mRNA for synaptic proteins, such as AMPA receptor subunits, which undergo homeostatic regulation, and the abundance of mRNA for the proteins of homeostatically regulated intrinsic sodium or potassium channels. Such correlation would be expected in a system with a single control system in which transcription of these distinct mRNAs is co-regulated. For example, low firing rates would increase mRNA abundance for proteins associated with AMPA receptors and sodium channels while decreasing mRNA abundance for proteins associated with potassium channels. Observation of such correlation between mRNA abundances (Schulz et al. 2007; Tobin et al. 2009) as well as conductance values for different intrinsic channels (Goaillard et al. 2009) has previously provided strong evidence for a single control mechanism underlying these diverse intrinsic properties (O’Leary et al. 2013).

Intriguingly, in a dual-control system, it is possible to observe synaptic and intrinsic properties shifting in opposing directions; for example, the threshold could increase at the same time as the excitatory synaptic conductance increases (Fig. 6A5, A6). Observation of such behavior would be evidence for a dual-control system if other plasticity mechanisms are absent or already accounted for.

Finally, data arising from protocols in which mean neural firing rate is maintained constant, but the statistical properties of the activity are altered, would provide valuable fodder to our efforts to understand the underlying control processes. For example, observations of how the neuron responds to alternating periods of quiescence and higher frequency firing while keeping mean rate fixed, as the durations of periods or the rate of high-frequency firing are altered, would be illuminating. Homeostasis arising from such protocols would not only support a theory of dual mechanisms but would constrain many of the underlying components of such a dual-control model.

## 6 Note

MATLAB codes used to produce indicated figures are available at https://github.com/primon23/combined_homeostasis.

## References

- Abraham ST, Benscoter H, Schworer CM, Singer HA (1996) In situ Ca^{2+} dependence for activation of Ca^{2+}/calmodulin-dependent protein kinase II in vascular smooth muscle cells. J Biol Chem 271(5):2506–2513
- Badawy RA, Freestone DR, Lai A, Cook MJ (2012) Epilepsy: ever-changing states of cortical excitability. Neuroscience 222:89–99
- Bateup HS, Johnson CA, Denefrio CL, Saulnier JL, Kornacker K, Sabatini BL (2013) Excitatory/inhibitory synaptic imbalance leads to hippocampal hyperexcitability in mouse models of tuberous sclerosis. Neuron 78(3):510–522
- Cannon J, Miller P (2016) Synaptic and intrinsic homeostasis cooperate to optimize single neuron response properties and tune integrator circuits. J Neurophysiol 116(5):2004–2022
- Cannon J, Miller P (2017) Stable control of firing rate mean and variance by dual homeostatic mechanisms. J Math Neurosci 7(1):1
- Desai NS, Rutherford LC, Turrigiano GG (1999) Plasticity in the intrinsic excitability of cortical pyramidal neurons. Nat Neurosci 2(6):515–520
- Goaillard JM, Taylor AL, Schulz DJ, Marder E (2009) Functional consequences of animal-to-animal variation in circuit parameters. Nat Neurosci 12(11):1424–1430
- Goldman MS, Kaneko CR, Major G, Aksay E, Tank DW, Seung HS (2002) Linear regression of eye velocity on eye position and head velocity suggests a common oculomotor neural integrator. J Neurophysiol 88(2):659–665
- Goldman MS, Levine JH, Major G, Tank DW, Seung HS (2003) Robust persistent neural activity in a model integrator with multiple hysteretic dendrites per neuron. Cereb Cortex 13:1185–1195
- Goold CP, Nicoll RA (2010) Single-cell optogenetic excitation drives homeostatic synaptic depression. Neuron 68(3):512–528
- Huk AC, Shadlen MN (2005) Neural activity in macaque parietal cortex reflects temporal integration of visual motion signals during perceptual decision making. J Neurosci 25(45):10420–10436
- Ibata K, Sun Q, Turrigiano GG (2008) Rapid synaptic scaling induced by changes in postsynaptic firing. Neuron 57(6):819–826
- Joseph A, Turrigiano GG (2017) All for one but not one for all: excitatory synaptic scaling and intrinsic excitability are coregulated by CaMKIV, whereas inhibitory synaptic scaling is under independent control. J Neurosci 37(28):6778–6785
- Machens CK, Romo R, Brody CD (2005) Flexible control of mutual inhibition: a neural model of two-interval discrimination. Science 307(5712):1121–1124
- Maffei A, Turrigiano GG (2008) Multiple modes of network homeostasis in visual cortical layer 2/3. J Neurosci 28(17):4377–4384
- Miller P, Katz DB (2010) Stochastic transitions between neural states in taste processing and decision-making. J Neurosci 30(7):2559–2570
- Miller P, Katz DB (2013) Accuracy and response-time distributions for decision-making: linear perfect integrators versus nonlinear attractor-based neural circuits. J Comput Neurosci 35(3):261–294
- Miller P, Wang XJ (2006) Inhibitory control by an integral feedback signal in prefrontal cortex: a model of discrimination between sequential stimuli. Proc Natl Acad Sci USA 103(1):201–206
- Miller P, Brody CD, Romo R, Wang XJ (2003) A recurrent network model of somatosensory parametric working memory in the prefrontal cortex. Cereb Cortex 13(11):1208–1218
- O’Leary T, Wyllie DJ (2011) Neuronal homeostasis: time for a change? J Physiol 589(Pt 20):4811–4826
- O’Leary T, van Rossum MC, Wyllie DJ (2010) Homeostasis of intrinsic excitability in hippocampal neurones: dynamics and mechanism of the response to chronic depolarization. J Physiol 588(Pt 1):157–170
- O’Leary T, Williams AH, Caplan JS, Marder E (2013) Correlations in ion channel expression emerge from homeostatic tuning rules. Proc Natl Acad Sci USA 110(28):E2645–E2654
- O’Leary T, Williams AH, Franci A, Marder E (2014) Cell types, network homeostasis, and pathological compensation from a biologically plausible ion channel expression model. Neuron 82(4):809–821
- Ransdell JL, Nair SS, Schulz DJ (2012) Rapid homeostatic plasticity of intrinsic excitability in a central pattern generator network stabilizes functional neural network output. J Neurosci 32(28):9649–9658
- Romo R, Brody CD, Hernandez A, Lemus L (1999) Neuronal correlates of parametric working memory in the prefrontal cortex. Nature 399(6735):470–473
- Schulz DJ, Goaillard JM, Marder E (2007) Quantitative expression profiling of identified neurons reveals cell-specific constraints on highly variable levels of gene expression. Proc Natl Acad Sci USA 104(32):13187–13191
- Seung HS (1996) How the brain keeps the eyes still. Proc Natl Acad Sci USA 93(23):13339–13344
- Seung HS, Lee DD, Reis BY, Tank DW (2000a) Stability of the memory of eye position in a recurrent network of conductance-based model neurons. Neuron 26(1):259–271
- Seung HS, Lee DD, Reis BY, Tank DW (2000b) The autapse: a simple illustration of short-term analog memory storage by tuned synaptic feedback. J Comput Neurosci 9(2):171–185
- Somvanshi PR, Patel AK, Bhartiya S, Venkatesh KV (2015) Implementation of integral feedback control in biological systems. Wiley Interdiscip Rev Syst Biol Med 7(5):301–316
- Tobin AE, Cruz-Bermudez ND, Marder E, Schulz DJ (2009) Correlations in ion channel mRNA in rhythmically active neurons. PLoS ONE 4(8):e6742
- Triesch J (2007) Synergies between intrinsic and synaptic plasticity mechanisms. Neural Comput 19(4):885–909
- Turrigiano GG (2008) The self-tuning neuron: synaptic scaling of excitatory synapses. Cell 135(3):422–435
- Turrigiano GG, Nelson SB (2004) Homeostatic plasticity in the developing nervous system. Nat Rev Neurosci 5(2):97–107
- Turrigiano GG, Leslie KR, Desai NS, Rutherford LC, Nelson SB (1998) Activity-dependent scaling of quantal amplitude in neocortical neurons. Nature 391(6670):892–896
- Wang XJ (2002) Probabilistic decision making by slow reverberation in cortical circuits. Neuron 36:955–968
- Wang XJ (2008) Decision making in recurrent neuronal circuits. Neuron 60(2):215–234
- Wong KF, Wang XJ (2006) A recurrent network mechanism of time integration in perceptual decisions. J Neurosci 26(4):1314–1328
- Wong KF, Huk AC, Shadlen MN, Wang XJ (2007) Neural circuit dynamics underlying accumulation of time-varying evidence during perceptual decision making. Front Comput Neurosci 1:6

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.