# Supervised learning with decision margins in pools of spiking neurons


## Abstract

Learning to categorise sensory inputs by generalising from a few examples whose category is precisely known is a crucial step for the brain to produce appropriate behavioural responses. At the neuronal level, this may be performed by adaptation of synaptic weights under the influence of a training signal, in order to group spiking patterns impinging on the neuron. Here we describe a framework that allows spiking neurons to perform such “supervised learning”, using principles similar to the Support Vector Machine, a well-established and robust classifier. Using a hinge-loss error function, we show that requesting a margin similar to that of the SVM improves performance on linearly non-separable problems. Moreover, we show that using pools of neurons to discriminate categories can also increase the performance by sharing the load among neurons.

## 1 Introduction

To make sense of the world, animals must distinguish the sensory input patterns that characterize different objects or situations. In some cases, specific sensory patterns have innate behavioural associations, such as the species-typical meanings of animal vocalizations, for example growls and whines (Altenmüller et al., 2013). In other cases however, these associations must be learned. In the laboratory, pairing an initially-neutral conditioned stimulus such as a tone, light, or odour with an aversive unconditioned stimulus such as a foot shock leads an animal to respond similarly to the conditioned as to the unconditioned stimulus; such learning is believed to depend on synaptic plasticity in the amygdala (Pape & Pare, 2010). Repeated performance of an action in a given circumstance leads to the formation of stimulus–response associations or habits, which are believed to develop through synaptic plasticity in the dorsal striatum (Yin & Knowlton, 2006). Importantly, learning of stimulus categories does not require any explicit behaviour, reward or punishment. For example, newborn female Belding’s ground squirrels learn the odours of their siblings simply by their presence in the nest during early life; this association allows later identification of kin during adulthood (Holmes, 1986).

In statistics and machine learning, association of input patterns with desired categories, as specified by a training signal, is referred to as supervised learning. This form of learning should be distinguished from reinforcement learning, in which learning is governed by a reward rather than an explicit training signal; and unsupervised learning, in which representations are found based on structure in the input, without any explicit training signal. A classical algorithm for supervised learning is the Perceptron learning rule (Rosenblatt, 1958), which trains a single artificial neuron to linearly weight its inputs such that category is predicted by whether the weighted sum exceeds a fixed threshold. The Support Vector Machine (SVM) improves on perceptron performance by using a *margin* (a gap between the training boundaries for different classes), as well as through other innovations such as the introduction of nonlinearities through a kernel function (Cortes & Vapnik, 1995).
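As a point of reference, the classic Perceptron rule mentioned above can be sketched in a few lines (a minimal illustration, not the paper's code; variable names are ours):

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=0.1):
    """Classic perceptron rule (Rosenblatt, 1958): nudge the weights towards
    each misclassified example until the weighted sum predicts the category."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for x, t in zip(X, y):            # t in {-1, +1}
            if t * (x @ w + b) <= 0:      # misclassified (or on the boundary)
                w += lr * t * x
                b += lr * t
    return w, b
```

On linearly separable data this converges to a separating hyperplane, but unlike the SVM it makes no attempt to maximise the margin around it.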

A number of learning rules have been suggested by which spiking neurons might perform tasks analogous to supervised learning (Bohte et al., 2002; Florian, 2007; Pfister et al., 2006; Ponulak & Kasiński, 2010; Xu et al., 2013; Legenstein et al., 2005). Recently, concepts of the Perceptron were extended to spiking neurons in a framework called the “Tempotron”, in which an error signal is used to adjust synapses that were strongly active when the neuron was close to its threshold, training the neuron to produce 1 or 0 spikes according to the desired category (Florian, 2012; Gütig & Sompolinsky, 2006; Gütig & Sompolinsky, 2009). In the present work, we describe an adaptation of the SVM to spiking neurons, whose margin allows for the training of more general firing rate modulations than 0/1 spike. We found that a moderate training margin increases the learning speed of single neurons in linearly separable tasks, and increases their performance in linearly non-separable tasks. To further improve learning of linearly non-separable problems, we considered an extension in which neurons work in pools trained simultaneously (Urbanczik & Senn, 2009), whose combined activity forms the network’s response to a pattern. We found that this indeed improved performance, as the training signal, although global, nevertheless allowed different neurons to learn different receptive fields.

## 2 Material and methods

### Neuron model

In all simulations, we used a conductance-based integrate-and-fire neuron model with a membrane time constant τ_{m} = 20ms, a leak conductance g_{L} = 10nS, and a resting membrane potential V_{rest} = − 70mV. Spikes were generated when the membrane potential V_{m} reached the threshold V_{thresh} = − 50mV. To model the shape of the action potential, the voltage was set to 20 mV after threshold crossing, and then decayed linearly during a refractory period of duration τ_{width} = 5ms to the reset value V_{reset} = − 55mV, following which an exponentially decaying depolarizing current of initial magnitude 50pA and time constant τ_{dep} = 40ms was applied (similar to Clopath et al., 2010; Yger & Harris, 2013). We used this scheme with a high reset voltage and afterdepolarization (ADP), rather than the more common low reset value, as it provides a better match to intracellular recordings *in vitro* and *in vivo*. Synaptic connections were modelled as transient conductance changes with instantaneous rise followed by exponential decay. Synaptic connections were excitatory only (synaptic weights were clipped when they attempted to cross zero), with a time constant τ_{exc} = 5ms and a reversal potential E_{exc} = 0mV.
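For illustration, the membrane dynamics described above can be sketched with simple forward-Euler integration (a rough sketch, not the NEST implementation used in the paper; the capacitance c_m = g_L·τ_m is inferred from the stated parameters, and synaptic input is ignored during the spike shape):

```python
import numpy as np

def simulate_lif(g_exc_in, dt=1e-4):
    """Forward-Euler sketch of the conductance-based integrate-and-fire neuron
    with a high reset voltage and an afterdepolarizing (ADP) current.
    g_exc_in: array of total excitatory conductance (S), one value per time step.
    Returns the array of spike times (s)."""
    tau_m, g_L = 20e-3, 10e-9            # membrane time constant, leak conductance
    c_m = g_L * tau_m                    # capacitance implied by tau_m = c_m / g_L
    V_rest, V_thresh = -70e-3, -50e-3
    V_spike, V_reset = 20e-3, -55e-3     # spike peak and (high) reset value
    tau_width = 5e-3                     # spike width / refractory period
    I_adp0, tau_adp = 50e-12, 40e-3      # ADP current amplitude and decay
    E_exc = 0.0

    V, I_adp, refrac_left = V_rest, 0.0, 0.0
    spikes = []
    for i, g_exc in enumerate(g_exc_in):
        if refrac_left > 0:
            # linear decay from the spike peak down to the reset value
            V = V_spike + (V_reset - V_spike) * (1 - refrac_left / tau_width)
            refrac_left -= dt
            if refrac_left <= 0:
                V = V_reset
                I_adp = I_adp0           # ADP current starts after the spike shape
        else:
            I_syn = g_exc * (E_exc - V)
            V += dt * (g_L * (V_rest - V) + I_syn + I_adp) / c_m
            if V >= V_thresh:
                spikes.append(i * dt)
                refrac_left = tau_width
        I_adp *= np.exp(-dt / tau_adp)
    return np.array(spikes)
```

A constant excitatory conductance of a few nS is enough to drive this neuron above threshold, while zero input leaves it silently at rest.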

### Input patterns

To reduce the time of the simulations, we used only 10 input neurons. For each input pattern, the firing rate of each input neuron is independently drawn from a uniform distribution between 0 and 1Hz. The rate pattern is then normalised such that the total input rate is 10,000Hz, comparable to the physiological regime in which neurons operate (assuming an average of 10,000 incoming synapses at 1Hz). Every time a pattern is presented, the rate pattern is transformed into a novel 100 ms spiking pattern via the realization of ten independent and homogeneous Poisson processes.
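The pattern-generation procedure can be sketched as follows (a sketch; the function names and random seeds are ours):

```python
import numpy as np

def make_patterns(n_patterns=12, n_inputs=10, total_rate=10_000.0, rng=None):
    """Draw random rate patterns and normalise each so its rates sum to total_rate (Hz)."""
    if rng is None:
        rng = np.random.default_rng(0)
    rates = rng.uniform(0.0, 1.0, size=(n_patterns, n_inputs))
    return rates / rates.sum(axis=1, keepdims=True) * total_rate

def poisson_realisation(rates, duration=0.1, rng=None):
    """Turn one rate pattern (Hz) into a fresh set of spike trains by drawing
    an independent homogeneous Poisson process per input neuron."""
    if rng is None:
        rng = np.random.default_rng(1)
    trains = []
    for r in rates:
        n = rng.poisson(r * duration)                 # spike count over the trial
        trains.append(np.sort(rng.uniform(0.0, duration, size=n)))
    return trains
```

Because the realisation is drawn anew on every presentation, the same rate pattern never produces exactly the same spike train twice, which is what makes the classification task noisy.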

### Network structure

### Learning rule

We derived the learning rule from approximate gradient descent on the Support Vector Machine cost function (see Fig. 1 panel C). This cost function *E* for a neuronal pool on a given trial is a function of the summed number of spikes N_{pool} emitted by all the neurons within the pool during that trial, and of the category of the input pattern presented during that trial. This function depends on two parameters, the learning thresholds θ_{+} and θ_{−}. If the input pattern belongs to the same category as the pool, then the pool should respond to it by emitting at least θ_{+} spikes. If this is the case then the cost for the pool is 0, otherwise it is equal to the number of missing spikes. The cost function is thus a rectified linear function of the pool’s number of spikes, with parameter θ_{+}. If the input pattern is not of the same category as the pool, then the pool should respond to it by emitting less than θ_{−} spikes. If this is the case then the cost for the pool is 0, otherwise it is equal to the number of superfluous spikes. The cost function is thus a rectified linear function of the pool’s number of spikes, with parameter θ_{−}.
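Under these definitions, the per-trial cost and the implied error signal can be written down directly (a sketch; the sign convention of +1 for "fire more" follows the description of the rule in the Results):

```python
def pool_cost(n_pool, is_plus, theta_plus, theta_minus):
    """Hinge-loss cost on one trial: number of missing spikes for a + pattern,
    number of superfluous spikes for a - pattern, zero otherwise."""
    if is_plus:
        return max(0, theta_plus - n_pool)
    return max(0, n_pool - theta_minus)

def error_signal(n_pool, is_plus, theta_plus, theta_minus):
    """Sign of the required change: +1 fire more, -1 fire less, 0 no error."""
    if is_plus and n_pool < theta_plus:
        return +1
    if not is_plus and n_pool > theta_minus:
        return -1
    return 0
```

For example, with (θ_{−}, θ_{+}) = (1, 4), a pool emitting 2 spikes to a + pattern incurs a cost of 2 missing spikes and an error signal of +1.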

To minimise this cost, the change in the synaptic weight w_{ij} from the input neuron *j* to the neuron *i* of the pool must be proportional to the opposite of the derivative of the cost function with respect to w_{ij}. Due to Poisson noise in the inputs, for a given set of weights, the response to a given pattern may vary from trial to trial. We therefore consider the derivative of the expected cost function. With the chain rule, this is equal to the product of the derivative of the expected cost *E* with respect to the expected summed number of spikes of the pool < N_{pool}> (because of the hinge-loss function, this derivative takes the values −1, 0 or 1); of the derivative of < N_{pool}> with respect to the expected number of spikes < n_{i}> of neuron *i* (this derivative takes the value 1 except at < n_{i}> = 0, where it is set to 0 or 1 so as to incorporate the constraint that neurons which do not spike are not allowed to reduce their incoming synaptic weights but can increase them); and of the derivative of < n_{i}> with respect to w_{ij}. This leads to the weight update equation.

To evaluate the last of these terms, we approximate the expected number of spikes < n_{i}> by a function of the square of the membrane voltage V_{i} of neuron *i*, with *T* being the length of a trial and *F* a rectified linear function of the voltage. This approximation captures well the relationship between firing rate and voltage in a typical trial generated with the input statistics of the classification task (Supplementary Figure 1).

Here, G_{exc}(t)_{i} denotes the total synaptic excitatory conductance and I_{syn}(t)_{i} the synaptic current of neuron *i*. Specifically, if \( {t}_1^j,\dots, {t}_{N_j}^j \) are the *N*_{j} times at which a particular synapse *j* of weight w_{ij} is active, if \( \mathrm{g}\left(\mathrm{t}\right)={\mathrm{e}}^{-\mathrm{t}/{\uptau}_{\mathrm{syn}}} \) (for t > 0) is the kernel function representing the conductance time course, and if *N* is the number of input synapses (here 10), we have:

\( {G}_{exc}{(t)}_i=\sum_{j=1}^{N}{w}_{ij}\sum_{k=1}^{N_j}g\left(t-{t}_k^j\right) \)

The membrane potential V_{i} integrates I_{syn}(t)_{i} with an effective time constant τ_{eff} = c_{m}/(g_{L} + G_{exc}(t)_{i}). Approximating τ_{eff} by a constant equal to c_{m}/(g_{L} + < G_{exc}(t)_{i}>), where < G_{exc}(t)_{i}> denotes a running average of the synaptic conductance during the presentation of one pattern (Gütig & Sompolinsky, 2009), we can approximate V_{i}(t) as a sum of normalised EPSP kernels *K*, one added for each time at which a synapse *j* spiked:

\( {V}_i(t)\approx {V}_{rest}+\sum_{j=1}^{N}{w}_{ij}\sum_{k=1}^{N_j}K\left(t-{t}_k^j\right) \)

Ignoring the reset mechanism and the non-linearity due to the spike, this would be exact for a current-based neuron, but it is only an approximation for the conductance-based neuron which we implement, estimating the average membrane time constant with the average conductance received during the presentation of a single pattern (Gütig & Sompolinsky, 2009).
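Under these approximations, the depolarisation contributed by each presynaptic spike can be modelled as a peak-normalised difference of exponentials; the sketch below assumes this form for the kernel *K* and a 0.5 mV somatic EPSP per nS of unit weight (the value quoted in the Results for typical trial conditions):

```python
import numpy as np

def depolarisation_kernel(t, tau_eff, tau_syn=5e-3):
    """Peak-normalised EPSP time course, assumed here to be a difference of
    exponentials with the effective membrane and synaptic time constants."""
    t_peak = tau_eff * tau_syn / (tau_eff - tau_syn) * np.log(tau_eff / tau_syn)
    norm = np.exp(-t_peak / tau_eff) - np.exp(-t_peak / tau_syn)
    return np.where(t > 0, (np.exp(-t / tau_eff) - np.exp(-t / tau_syn)) / norm, 0.0)

def approx_voltage(t_grid, spike_times, weights, tau_eff,
                   V_rest=-70e-3, unit_epsp=0.5e-3):
    """V_i(t) ~ V_rest plus a weighted sum of kernels, one per presynaptic spike.
    spike_times: one array of spike times (s) per synapse; weights: one weight
    (in nS) per synapse; unit_epsp: somatic EPSP amplitude per unit weight."""
    V = np.full_like(t_grid, V_rest)
    for w, times in zip(weights, spike_times):
        for tk in times:
            V = V + w * unit_epsp * depolarisation_kernel(t_grid - tk, tau_eff)
    return V
```

A single spike through a 2 nS synapse then produces a transient depolarisation peaking about 1 mV above rest.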

We impose the constraint that weights that attempt to become negative are clipped to zero, since we are using only excitatory synapses. In addition, we add a constraint that a neuron that doesn’t fire cannot reduce its incoming synaptic weights.

### Neural simulator

Simulations of the spiking neurons were performed using a customised version of the NEST simulator (Diesmann & Gewaltig, 2007) and the PyNN interface (Davison et al., 2009), with a fixed time step of 0.1 ms.

### Support vector machine

For Figs. 5 and 10, the linear Support Vector Machine of the scikit-learn Python toolkit (Pedregosa & Varoquaux, 2011) was trained on Poisson spike counts drawn from the same patterns that were used to train the neuronal pools. For each pattern number, the cost parameter (termed *c*) was chosen so as to optimise the SVM performance. This yielded the same optimal cost parameter c = 10^{− 6} for all pattern numbers. In Fig. 5, performance for a lower and a higher value of the cost parameter are also shown.
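The SVM baseline can be reproduced along the following lines (a sketch with synthetic stand-in data, not the paper's exact inputs; scikit-learn's `LinearSVC` exposes the cost parameter as `C`):

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Hypothetical stand-in for the paper's inputs: Poisson spike counts drawn
# from 12 normalised rate patterns (expected counts over a 100 ms trial),
# randomly labelled with two categories.
rates = rng.uniform(0.0, 1.0, size=(12, 10))
rates = rates / rates.sum(axis=1, keepdims=True) * 10_000 * 0.1
labels = np.arange(12) % 2

X = np.vstack([rng.poisson(rates) for _ in range(50)])  # 50 noisy trials per pattern
y = np.tile(labels, 50)

# A low C prioritises a wide margin over fitting every training point,
# analogous to requesting a large spike-count margin in the neuronal rule.
scores = {}
for C in (1e-6, 1e-3, 1.0):
    clf = LinearSVC(C=C, max_iter=20_000).fit(X, y)
    scores[C] = clf.score(X, y)
```

Sweeping `C` in this way and keeping the best-scoring value mirrors the per-pattern-number optimisation described above.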

## 3 Results

We studied a learning algorithm for spiking neurons to perform supervised learning, based on the support vector machine (SVM) cost function. The network that was used for the task is shown in Fig. 1a. Working in a rate-based framework, we defined each input pattern by a set of mean rates of each of the input neurons during that pattern, which is transformed into a 100 ms spiking pattern via a homogeneous Poisson process generated anew each time a pattern is presented (see Fig. 1a left, and Material and Methods). The input patterns are normalised random rate vectors (randomly assigned to the two categories A and B, see Materials and Methods). Fig. 1a is a schematic illustration of the learning task addressed by the spiking neurons, and of the generation process of the input spike trains from the input patterns of the two categories. All neurons in the two readout pools received connections from all input neurons. There are no lateral connections between the readout neurons (Fig. 1b). Each pool is assigned one category of inputs to which it must respond, its positive (+) patterns; the other patterns become the pool’s negative (−) patterns. Pool A’s + patterns are thus the A patterns, while its − patterns are the B patterns. Classification is assessed as correct if in response to an A pattern, the summed number of spikes from pool A, N_{A}, is greater than the summed number of spikes from pool B, N_{B} (and vice versa).

Each pool is trained to emit at least θ_{+} spikes to its + patterns and less than θ_{−} spikes to its − patterns, using a cost function which counts the number of missing or superfluous spikes (illustrated in Fig. 1c). The “hinge” shape of this cost function is directly inspired by SVM techniques (Cortes & Vapnik, 1995). On the trials in which a pool emits an incorrect number of spikes (less than θ_{+} in response to a + pattern, or more than θ_{−} in response to a − pattern), it receives an error signal indicating whether it has fired too many or too few spikes, allowing it to perform approximate gradient descent on this cost function, ensuring that after each update the cost is decreased (for the complete derivation see the Material and Methods section). The rule obtained has the form:

where *K* can be seen as a normalized EPSP at the soma, *F* is a rectified linear function, *N*_{pool} is the total number of spikes emitted by one pool, and n_{i} is the number of spikes emitted by neuron *i*. The supervision or error signal takes the value +1 when the pool fires fewer than θ_{+} spikes to a + pattern, −1 when it fires more than θ_{−} spikes to a − pattern, and 0 otherwise.

In our learning rule, each synapse thus accumulates an eligibility trace over the course of a trial. At the end of each trial, if the neuron receives an error signal, the eligibility trace is transformed into a synaptic change, the sign of which is dictated by the error signal. This defines a 3-factor learning rule: if there is no error signal, no plasticity occurs; if the error signal is positive, the rule is Hebbian (inputs that make the neuron fire are potentiated); but if the error signal is negative, the rule is anti-Hebbian (inputs that make the neuron fire are depressed). Therefore, unlike purely Hebbian or STDP rules that require homeostasis to ensure stability (Abbott & Nelson, 2000; Clopath et al., 2010; Yger & Harris, 2013), this rule is intrinsically stable. Note that such a notion of eligibility traces has already been proposed in the case of reinforcement learning with a delayed error signal (Izhikevich, 2007; Legenstein et al., 2008).
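The three-factor structure of the rule can be made concrete in a toy sketch (numbers are hypothetical; in the full rule the eligibility trace is built from the presynaptic EPSP kernels and the postsynaptic voltage over the trial):

```python
import numpy as np

def trial_update(eligibility, error, lr=0.01):
    """Three-factor consolidation at the end of a trial: the per-synapse
    eligibility trace accumulated during the trial is turned into a weight
    change only when a (global) error signal arrives, with its sign.
    error is -1, 0 or +1."""
    return lr * error * eligibility

# Toy illustration: two synapses, one strongly co-active with the
# postsynaptic neuron during the trial, one only weakly co-active.
elig = np.array([3.0, 0.2])
dw_hebbian = trial_update(elig, +1)   # too few spikes -> potentiate active inputs
dw_anti = trial_update(elig, -1)      # too many spikes -> depress active inputs
dw_none = trial_update(elig, 0)       # correct trial -> no plasticity
```

Because the error signal multiplies the trace rather than adding to it, a correct trial leaves the weights untouched, which is what makes the rule intrinsically stable.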

We first trained the network with the learning thresholds θ_{+} = 4 and θ_{−} = 1. Synaptic weights evolved throughout learning (see Fig. 2a; a 1 nS conductance gives rise to a 0.5 mV EPSP in the fluctuation conditions of a typical trial), and after training, each neuron fired at least 3 spikes to each of the patterns of its category and at most 2 spikes to each of the other patterns (Fig. 2b), leading to an almost perfect classification (Fig. 2c). Fig. 2d illustrates the responses after learning to the six “A” patterns immediately followed by the six “B” patterns. The neuron from pool A was strongly active during the first 6 patterns, while the one from pool B was active during the latter 6. The readout neurons thus reliably spiked to their categories. We verified that learning behaviour was not affected by the number of input synapses (Supplementary Figure 2). For the remainder of the text, we therefore used 10 input synapses.

We next asked how performance depends on the margin M = θ_{+} − θ_{−} between the learning thresholds. To answer this question, we performed the same task, i.e. the classification of 12 random rate patterns, but with different values of the learning thresholds (θ_{−}, θ_{+}) ranging from 0 to 12. The classification performance after learning is plotted as a function of the learning thresholds in Fig. 3a. As one can see, performance on the nearly separable 12-pattern task was better with a smaller margin, decreasing when M exceeds 4 spikes. However, when the task was made more complex using 24 input patterns (which is a highly linearly non-separable task in 10 dimensions), a clear benefit of the margin was seen (see the poor performance on the diagonal where M = 0 in Fig. 3b).

The fact that optimal performance could be obtained in the 12 pattern case with a margin of 0 was at first surprising. For example, when trained with equal thresholds (θ_{−}, θ_{+}) = (4, 4), if each pool emitted exactly 4 spikes to every pattern, they would receive no error signal during training, yet their classification performance would be 0 %. To investigate how good performance could be obtained without a margin in a close-to-linearly separable situation, we plotted a histogram of spike count outputs (see Fig. 3c). Note that the spike count distribution of each category is broad and bell-shaped, even after learning; this reflects the random distribution of the multiple patterns in each class. Good classification without a margin occurred because the centres of the distributions are widely separated. This separation can be characterised by the difference between the mean spike count in response to target patterns and the mean spike count in response to null patterns, which we will refer to as spike count modulation. We suggest that modulation occurs because the hinge cost function causes plasticity whenever the response exceeds the learning threshold, and the broadness of the spike count distributions for each class causes the centres of the spike count histograms to move apart, resulting in spike count modulation even when no margin was requested. For 24 patterns, however, this separation did not occur, suggesting that in a highly linearly non-separable problem, spike count modulation only occurs when a margin is explicitly requested.
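The spike count modulation defined here is straightforward to compute; the sketch below uses hypothetical, broad, overlapping count distributions of the kind described in the text:

```python
import numpy as np

def spike_count_modulation(plus_counts, minus_counts):
    """Difference between the mean spike count in response to + (target)
    patterns and the mean count in response to - (null) patterns."""
    return np.mean(plus_counts) - np.mean(minus_counts)

# Hypothetical broad, bell-shaped count distributions whose centres are
# nonetheless well separated, as in the 12-pattern case with no margin.
rng = np.random.default_rng(0)
plus = rng.normal(6.0, 1.5, size=500).round()    # counts to + patterns
minus = rng.normal(2.0, 1.5, size=500).round()   # counts to - patterns
modulation = spike_count_modulation(plus, minus)
```

Even though the two distributions overlap, a modulation of several spikes is enough to make the pool-versus-pool comparison (N_{A} > N_{B}) reliable.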

We next asked how requesting a margin affected performance in the two cases. Figure 3d and e show histograms, for various margins, of the spike counts emitted by each neuron in response to its + patterns (full lines) and in response to its − patterns (dashed lines). As the margin is increased, the spike distributions move further apart, allowing better separation in the case of 24 patterns (green). For 12 patterns however (red), because separation already occurred without a margin, little gain was derived from the margin, and indeed performance actually decreased in the case of an 8-spike margin, likely due to the broadening of the response distribution for − patterns. We speculate this may occur because in order to respond very strongly to + patterns, the neurons cannot avoid also producing strong responses to at least some − patterns.

We plotted the spike count modulation obtained after training as a function of the requested margin (θ_{+} − θ_{−}) for 12 patterns (red) and 24 patterns (green) to classify (Fig. 4a). The spike count modulation does not track the margin, as seen by the shallow slope of this curve; in addition, it is systematically bigger when the task is easier (12 versus 24 patterns to classify). This confirms the intuitive explanation for how a margin of zero can give a variety of performances (Fig. 4b), whereas the relationship between spike count modulation and performance (Fig. 4c) is much tighter and much more constrained.

We then compared the performance of single neurons with that of a linear SVM, which has a cost parameter *c* weighting the cost of misclassifying a pattern relative to the importance of providing a large margin. Figure 5 shows that the performance of single neurons trained with an optimal margin (*M* = 4, full red curve) closely tracks the performance of an SVM trained on the same inputs with an optimal *c* parameter (*c* = 10^{− 6}, full black curve). Demanding too large a margin for single neurons (*M* = 12, dotted red curve), or setting the SVM *c* parameter too low (*c* = 10^{− 13}, dotted black curve), leads to poor performance specifically on easy tasks with low pattern numbers. Conversely, demanding too small a margin for single neurons (*M* = 0, dashed red curve), or setting *c* too high (*c* = 1, dashed black curve), leads to a drop in performance for difficult tasks with large pattern numbers. We conclude that a margin of 4 provides good performance, close to that of a linear SVM, for a wide range of pattern numbers. Performance for the thresholds (0,1), which define an algorithm similar to the voltage convolution implementation of the Tempotron rule (Gütig & Sompolinsky, 2006), is consistently worse for all numbers of patterns.

We also examined the speed of learning (Fig. 6a). More mistakes occur at first if θ_{+} is high than if it is low. Likewise, more mistakes occur at first if θ_{−} is low than if it is high. Learning is therefore fastest when the margin is large. Of the margin values that provide optimal performance in the classification of 12 patterns, a margin of 4 (which is also the optimal margin over a wide range of pattern numbers) thus provides the highest learning speed. For the classification of 24 patterns by single neurons, the influence of the margin on the convergence time is less evident, but the margin has a stronger effect on asymptotic performance (Fig. 6b).

We then trained multineuron pools on the same tasks. The thresholds (θ_{−}, θ_{+}) = (4, 8), corresponding to a margin of 4 spikes, again provided good performance over a wide range of linearly non-separable pattern numbers. In all cases, multineuron pools (full lines) provided improved performance over a linear SVM (dotted line). As in the single neuron case (dashed lines), we found that larger margins perform worse for easier problems, whereas small margins provide poorer performance for more challenging tasks with high pattern numbers. Also similarly to the single neuron case, we found that performance depends more closely on the actual spike count modulation generated than on the margin requested (see Fig. 11).

## 4 Discussion

In this study, we presented a learning rule which allows multineuron pools to learn in a supervised way to increase their firing rate in response to a certain set of inputs but not to another set. We combined an approach similar to the Tempotron (Gütig & Sompolinsky, 2006; Gütig & Sompolinsky, 2009) for the synaptic update with concepts from the Support Vector Machine literature (Cortes & Vapnik, 1995). We found that a moderate training margin increases the learning speed of single neurons in linearly separable tasks, and increases their performance in linearly non-separable tasks. Although we did not assess the performance of the original Tempotron rule on our task, we found that with (0,1) thresholds, a rule similar to the “voltage convolution” implementation of the Tempotron (Gütig & Sompolinsky, 2006) produced worse performance than rules with larger margins. We note however that the learning task originally used to test the Tempotron consisted of detecting reliable spatiotemporal patterns, whereas our task consists of discriminating Poisson spike trains that can vary from one repeat to the next. This may explain why the (0,1) rule performed relatively poorly on our task compared with some of the original applications in the Tempotron paper.

The performance of single neurons was bounded by the linear SVM performance, but performance could be increased by training neurons in pools with a single, global training signal. Although the neurons in a given pool received the same error signal derived from the pool’s number of spikes, they were nevertheless able to spontaneously select different features, thus classifying linearly non-separable inputs.

In models of unsupervised learning, lateral or recurrent inhibition is often used to force neurons to develop different receptive fields (Clopath et al., 2010; Masquelier et al., 2009; Yger & Harris, 2013). In the present case, recurrent inhibition was not necessary for neurons to evolve different receptive fields. Since our model has no feed forward inhibition, we normalised the rate patterns such that each pattern had the same global rate (otherwise, a pool would not be able to simultaneously respond with a high number of spikes to patterns of low input rate and with a low number of spikes to patterns of high input rate, and would therefore misclassify many patterns.) Adding divisive feedforward inhibition to the model might allow it to extend to the classification of non-normalised rate patterns. In the present model, synaptic weights were not allowed to become negative. Such a constraint typically reduces the capacity of perceptrons to learn rate-based inputs (see for example Amit et al., 1989; Gardner, 1988; Legenstein & Maass, 2007). This loss in capacity could be compensated for in part by adding subtractive feedforward inhibition to our model.

Could an analogous rule be implemented in the brain? The rule requires two steps: first, an eligibility trace is constructed based on pre-synaptic input occurring shortly prior to or during postsynaptic depolarization; and second, this is consolidated into a change in synaptic strength by a later-arriving training signal. Molecular mechanisms that could underlie the eligibility trace are well described, such as the multiple phosphorylation cascades that occur downstream of calcium influx via the NMDA receptor (Sweatt, 2009). But how might a training signal be conveyed? In the case of reinforcement learning, dopamine has been suggested as a training signal, and dopamine has indeed been implicated in the consolidation of eligibility traces (Kentros et al., 2004). A role for eligibility traces in reinforcement learning has been modelled previously (Izhikevich, 2007; Legenstein et al., 2008; El Boustani et al., 2012). A global reinforcement signal, however, cannot instruct different neuronal populations with different target signals. A more flexible, higher-dimensional training signal might instead be conveyed by glutamatergic inputs. In the cerebellum, for example, climbing fibre inputs provide strong inputs that generate complex-spike bursts which are believed to constitute a training signal (Eccles et al., 1967; Marr, 1969; Raymond et al., 1996). A second example consists of auditory fear conditioning, in which a conditioned reflex is established by the coincidence of signals conveying a conditioned stimulus (a tone) with a stronger unconditioned stimulus (a shock), by potentially glutamatergic inputs onto the amygdala (Pape & Pare, 2010). Understanding how spiking neurons may perform supervised learning at a computational level may lead to better understanding of such neuronal circuits.

## Notes

### Acknowledgments

This work was supported by the Wellcome Trust (095668) and EPSRC (I005102, K015141).

### Conflict of interest

None.

## Supplementary material

## References

- Abbott, L. F., & Nelson, S. B. (2000). Synaptic plasticity: taming the beast. *Nature Neuroscience, 3*(Suppl), 1178–1183.
- Altenmüller, E., Schmidt, S., & Zimmermann, E. (Eds.) (2013). *Evolution of emotional communication: from sounds in nonhuman mammals to speech and music in man*. Oxford: Oxford University Press.
- Amit, D. J., Campbell, C., & Wong, K. Y. M. (1989). The interaction space of neural networks with sign-constrained synapses. *Journal of Physics A: Mathematical and General, 22*(21), 4687.
- Bohte, S. M., Kok, J. N., & La Poutré, H. (2002). Error-backpropagation in temporally encoded networks of spiking neurons. *Neurocomputing, 48*, 17–37.
- Clopath, C., Büsing, L., Vasilaki, E., & Gerstner, W. (2010). Connectivity reflects coding: a model of voltage-based STDP with homeostasis. *Nature Neuroscience, 13*(3), 344–352.
- Cortes, C., & Vapnik, V. (1995). Support-vector networks. *Machine Learning, 20*(3), 273–297.
- Davison, A. P., Brüderle, D., Eppler, J., Kremkow, J., Muller, E., … Yger, P. (2009). PyNN: a common interface for neuronal network simulators. *Frontiers in Neuroinformatics, 2*, 11. doi:10.3389/neuro.11.011.2008
- Diesmann, M., & Gewaltig, M. O. (2007). NEST (NEural Simulation Tool). *Scholarpedia, 2*(4), 1430. doi:10.4249/scholarpedia.1430
- Eccles, J. C., Itō, M., & Szentágothai, J. (1967). *The cerebellum as a neuronal machine*. Berlin: Springer.
- El Boustani, S., Yger, P., Frégnac, Y., & Destexhe, A. (2012). Stable learning in stochastic network states. *The Journal of Neuroscience, 32*(1), 194–214.
- Florian, R. V. (2007). Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. *Neural Computation, 19*(6), 1468–1502.
- Florian, R. V. (2012). The chronotron: a neuron that learns to fire temporally precise spike patterns. *PLoS ONE, 7*(8), e40233.
- Gardner, E. (1988). The space of interactions in neural network models. *Journal of Physics A: Mathematical and General, 21*(1), 257–270.
- Gütig, R., & Sompolinsky, H. (2006). The tempotron: a neuron that learns spike timing-based decisions. *Nature Neuroscience, 9*(3), 420–428.
- Gütig, R., & Sompolinsky, H. (2009). Time-warp-invariant neuronal processing. *PLoS Biology, 7*(7), e1000141.
- Holmes, W. G. (1986). Kin recognition by phenotype matching in female Belding’s ground squirrels. *Animal Behaviour, 34*, 38–47.
- Izhikevich, E. M. (2007). Solving the distal reward problem through linkage of STDP and dopamine signaling. *Cerebral Cortex, 17*(10), 2443–2452.
- Kentros, C. G., Agnihotri, N. T., Streater, S., Hawkins, R. D., & Kandel, E. R. (2004). Increased attention to spatial context increases both place field stability and spatial memory. *Neuron, 42*(2), 283–295.
- Legenstein, R., & Maass, W. (2007). On the classification capability of sign-constrained perceptrons. *Neural Computation, 20*(1), 288–309.
- Legenstein, R., Naeger, C., & Maass, W. (2005). What can a neuron learn with spike-timing-dependent plasticity? *Neural Computation, 17*(11), 2337–2382.
- Legenstein, R., Pecevski, D., & Maass, W. (2008). A learning theory for reward-modulated spike-timing-dependent plasticity with application to biofeedback. *PLoS Computational Biology, 4*(10), e1000180.
- Marr, D. (1969). A theory of cerebellar cortex. *The Journal of Physiology, 202*(2), 437–470.
- Masquelier, T., Guyonneau, R., & Thorpe, S. J. (2009). Competitive STDP-based spike pattern learning. *Neural Computation, 21*(5), 1259–1276.
- Pape, H.-C., & Pare, D. (2010). Plastic synaptic networks of the amygdala for the acquisition, expression, and extinction of conditioned fear. *Physiological Reviews, 90*(2), 419–463.
- Pedregosa, F., & Varoquaux, G. (2011). Scikit-learn: machine learning in Python. *Journal of Machine Learning Research, 12*, 2825–2830.
- Pfister, J.-P., Toyoizumi, T., Barber, D., & Gerstner, W. (2006). Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. *Neural Computation, 18*(6), 1318–1348.
- Ponulak, F., & Kasiński, A. (2010). Supervised learning in spiking neural networks with ReSuMe: sequence learning, classification, and spike shifting. *Neural Computation, 22*(2), 467–510.
- Raymond, J. L., Lisberger, S. G., & Mauk, M. D. (1996). The cerebellum: a neuronal learning machine? *Science, 272*, 1126–1131.
- Rosenblatt, F. (1958). The perceptron: a probabilistic model for information storage and organization in the brain. *Psychological Review, 65*(6), 386–408.
- Sweatt, J. D. (2009). *Mechanisms of memory* (2nd ed.). Academic Press.
- Urbanczik, R., & Senn, W. (2009). Reinforcement learning in populations of spiking neurons. *Nature Neuroscience, 12*(3), 250–252.
- Xu, Y., Zeng, X., & Zhong, S. (2013). A new supervised learning algorithm for spiking neurons. *Neural Computation, 25*(6), 1472–1511.
- Yger, P., & Harris, K. D. (2013). The Convallis rule for unsupervised learning in cortical networks. *PLoS Computational Biology, 9*(10), e1003272.
- Yin, H. H., & Knowlton, B. J. (2006). The role of the basal ganglia in habit formation. *Nature Reviews Neuroscience, 7*(6), 464–476.

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.