Synaptic Plasticity with Memristive Nanodevices


Abstract

This chapter provides a comprehensive overview of current research on nanoscale memory devices suitable for implementing aspects of synaptic plasticity. Without being exhaustive about the different forms of plasticity that could be realized, we propose an overall classification and analysis of a few of them, which can serve as a basis for entering the field of neuromorphic computing. More precisely, we present how nanoscale memory devices, operated in a spike-based context, can be used to realize synaptic plasticity functions such as spike rate-dependent plasticity, spike timing-dependent plasticity, short-term plasticity, and long-term plasticity.

1 Introduction

There is nowadays an increasing interest in neuromorphic computing as a promising candidate to provide enhanced performance and new functionalities to efficient, low-power biomimetic hardware systems. On the one hand, seminal work from the 1950s on the concept of the perceptron has evolved continuously via software approaches. Starting from the simplest circuit structure (the perceptron), which corresponds to a formal neural network representation, artificial neural networks (ANNs) have grown into very complex systems with impressive performance in recognition tasks, for example. Along these lines, deep neural networks (DNNs) are today the most promising candidates for new computing systems. Even if the concepts of neurons and synapses are widely used in this field, a direct equivalence with their biological counterparts is not straightforward and sometimes impossible (or not biorealistic). On the other hand, recent progress in neuroscience and biology has highlighted some basic mechanisms present in biological neural networks (BNNs). While a global understanding of the computing principles of such networks is out of reach, many key elements for computing have been identified. For example, spike timing-dependent plasticity (STDP), initially observed in BNNs, has attracted strong attention from the computer science community since it opens the way to unsupervised learning systems, which are expected to provide another breakthrough in the future of computing. In between these two main directions, i.e., ANNs and BNNs, neuromorphic computing and engineering emerge as an intermediate solution: The objective is still oriented toward the development of computing systems, but with a stronger analogy to biology than ANNs. This classification should be handled carefully since the frontier between these different fields is far from clear.

This chapter focuses on a crucial aspect addressed by neuromorphic computing: synaptic plasticity. More precisely, starting from biological evidence, we will present some aspects of synaptic plasticity that can be efficiently implemented with various emerging nanoscale memories for future biomimetic hardware systems.

2 Neuromorphic Systems: Basic Processing and Data Representation

By analogy with biological systems, information in neuromorphic systems is carried by voltage spikes with a typical duration in the range of milliseconds. Starting from this simple observation, a first approach would be to consider neuromorphic networks as digital systems (the spike being an all-or-nothing event). This direction was explored with the concept of the neuron as a logical unit performing logic operations in a digital way [32]. This shortcut of course hides very important features observed in biological systems, which present many analog properties of fundamental importance for computing. The first footprint of the analog character of biological systems is the analog nature of the synaptic connections bridging neurons. An analog synapse can be described, to a first approximation, as a tunable linear conductance defining the synaptic weight between two neurons (this description is widely used in ANNs). A more biorealistic description, however, should consider the analog synapse as a complex device transmitting the signal in a nonlinear (i.e., frequency-dependent) manner. The second footprint of analog behavior is embedded in the time-coding strategy used in BNNs: As the neuron performs time integration of the digital spikes, the signal used for computing (the integrated value of the overall spiking activity) becomes an analog value regulating the spiking activity of the neuron. This second aspect is of particular relevance if we consider dynamical computing (i.e., natural data processing, such as vision or sound, that presents a strong dynamical component). The temporal organization of spikes (or their time of occurrence with respect to other spikes in the network) carries an analog component of the signal in biological networks. By combining analog synapses with integrating neurons, the level of nonlinearity used by the network for computing the analog signal can be strongly modified. Simple linear filters can be realized with linear synaptic conductances associated with simple integrate-and-fire (\( I \& F\)) neurons, or strongly nonlinear systems can be built based on nonlinear synaptic conductances with complex integration at the neuron level, such as leaky integrate-and-fire (\({ LIF}\)) or sigmoid neurons.

2.1 Data Encoding in Neuromorphic Systems

Starting from the statement that neuromorphic systems are analog systems, we have to define the appropriate data representation that will match the function to be realized. It should be stressed that data representation in biological systems is still under debate, and a detailed understanding remains a major challenge that should open new avenues from both basic-understanding and practical-computing points of view.

Based on these general considerations, we can now try to present a simplified vision of data coding in biological systems that could be the basic ingredient for neuromorphic computing (i.e., hardware system implementation).

2.1.1 Rate-Coding Scheme

The simplest data representation corresponds to a rate-coding scheme, i.e., the analog value of the signal carrying information (or the strength of a stimulus) is associated with the average frequency of a train of pulses. The neuron can then transmit an analog signal through its mean firing rate. Rate-coding data representation is often used for static input stimuli but appears less suited to time-varying stimuli. Indeed, the sampling time interval \(\bigtriangleup _{sampling}\) used for estimating the mean firing rate implies that events with fast temporal variation (typically variation on a timescale smaller than \(\bigtriangleup _{sampling}\)) cannot be described accurately. For example, the brain's time response to visual stimuli is around 100 ms, which cannot be accurately described in rate-coding systems operating at frequencies typically in the range of 1–100 Hz. A simple example of static data representation is the mapping of a static image from an \(N \times M\) array of black and white pixels into an \(N \times M\) vector \(X=(x_{1},\ldots , x_{i},\ldots , x_{n})\) where \(x_{i}\) can be either 0 or 1 (i.e., the minimum and maximum frequencies). This concept can then be simply extended to analog data (such as pictures with different gray levels) by choosing the average firing rate appropriately.
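As an illustration, the following minimal Python sketch (our own example, not from the original work; all names and parameter values are ours) converts normalized gray levels into Poisson spike trains whose mean rate interpolates between a minimum and a maximum firing frequency:

```python
import numpy as np

def rate_encode(pixels, f_min=1.0, f_max=100.0, duration=1.0, dt=1e-3, rng=None):
    """Hypothetical rate-coding front end: encode analog pixel values in
    [0, 1] as Poisson spike trains whose mean firing rate interpolates
    between f_min and f_max (Hz)."""
    rng = np.random.default_rng() if rng is None else rng
    rates = f_min + (f_max - f_min) * np.asarray(pixels, dtype=float).ravel()
    n_steps = int(duration / dt)
    # A spike occurs in a time bin with probability rate * dt
    # (a valid Poisson approximation as long as rate * dt << 1).
    return rng.random((rates.size, n_steps)) < rates[:, None] * dt

# A 2x2 'image' with black, dark gray, light gray, and white pixels:
spikes = rate_encode([[0.0, 0.3], [0.7, 1.0]])
print(spikes.sum(axis=1))  # spike counts reflect the encoded gray levels
```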

2.1.2 Temporal-Coding Scheme

A second coding scheme is known as temporal coding, in which each individual voltage pulse carries a logical \(+1\) and a time signature. This time stamp, associated with a given spike, can carry an analog value if we consider its timing with respect to the other spikes emitted in the network [26]. The difficulty in this coding scheme is to precisely define the origin of time for a given spiking event, which should depend on the event to be computed. A simple example is a white point passing at a given speed in front of a detector with a black background and producing a voltage pulse in each pixel of the detector when it is in front of it. By tracking both the position of each activated pixel and the time stamp attached to it, the dynamics of the event can be encoded.
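The toy sketch below (again our own illustration, with hypothetical names) encodes the moving white point described above as a list of (time stamp, pixel address) events, in the spirit of event-based sensors:

```python
def encode_moving_point(n_pixels=8, speed=2.0):
    """Hypothetical time-coding example: each pixel of a 1-D detector emits
    one spike (an event) when the white point passes in front of it.
    speed is expressed in pixels per second."""
    events = []
    for pixel in range(n_pixels):
        t_spike = pixel / speed  # time at which the point reaches this pixel
        events.append((t_spike, pixel))
    return events

# The (time stamp, pixel address) pairs encode the dynamics: the timing
# difference between neighboring pixels directly gives the point's speed.
for t, px in encode_moving_point():
    print(f"pixel {px} fired at t = {t:.3f} s")
```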
Fig. 1

Schematic illustration of data encoding schemes. A natural stimulus (such as a visual or auditory cue) is encoded by an input neuron population that transmits the information a in a time-coding scheme and b in a rate-coding scheme

Figure 1 shows how the rate- and time-coding schemes can be used to encode an analog signal \(x_{i}\).

2.2 Spike Computing for Neuromorphic Systems

In this chapter, we will use only these two simplified data encoding concepts, but it should be stressed that other strategies, such as stochastic coding (i.e., associating the analog value of the signal with the probability of a spike), are potential directions that deserve attention. We should also be aware that rate coding and temporal coding have been shown to coexist in biological systems, and both coding strategies can be used for powerful computing implementations. In fact, spike computing has attracted large attention since the low-power performance of biological systems seems to be strongly linked to the spike coding used in such networks. It should be emphasized, however, that translating conventional representations (i.e., digital sequences, as in video) into spiking signals would most probably miss the roots of low-power computing in biological systems. The discretization of time and the use of a synchronous clock are in opposition to the continuous-time, asynchronous character of biological networks. Spike computing needs to be considered globally, i.e., by considering the full functional network and data encoding principle, from sensors to high-level computing elements. In this sense, the recent development of bioinspired sensors, such as artificial cochleas (sound detection) or artificial retinas (visual detection) with event-based representation, opens many possibilities for fully spike-based computing where the dynamical aspect of spikes is naturally reproduced.

3 Synaptic Plasticity for Information Computing

Remaining in a spike-based computational context, we now focus on how a bioinspired network, composed to a first approximation of neurons and synapses, can process information (other functional units, such as proteins and glial cells, would have to be considered for a precise description of biological networks). We can roughly categorize spike processing into (i) how spikes are transmitted between neurons, (ii) how spikes propagate along neurons, and (iii) how spikes are generated. The last two points can be attributed to 'neuron processing' and, more precisely, to the response of a biological membrane (the neuron membrane) to electrical or chemical signals. Many associated features, such as signal integration, signal restoration, or spike generation, are of prime importance for spike computing, but these aspects are beyond the scope of this chapter. Signal transmission will be the focus of this chapter, and the different processes involved at the synaptic connection between two neurons will be described. We will concentrate on the dynamical responses observed in chemical synapses that are of interest for spike processing. Such synaptic mechanisms are broadly described as synaptic plasticity: the modification of the synaptic conductance as a function of neuron activity. The specific synaptic weight values stored in the network are a key ingredient for neuromorphic computing. This synaptic weight distribution is reached through synaptic learning and adaptation and can be described by the different plasticity rules present in the network. Furthermore, it should be noted that the study of the processes observed in biological synapses and their consequences for information processing is still ongoing, and final conclusions are still out of reach. Most probably, the efficiency of biological computing systems lies in a combination of many different features (restricted to the synapse level in this chapter), and our aim is to expose a few of them that have been successfully implemented and to discuss their potential interest for computing.

In biology, synaptic plasticity can be attributed to various mechanisms involved in the transmission of the signal between pre- and post-synaptic neurons, such as modification of neurotransmitter release, neurotransmitter recovery in the pre-synaptic connection, modification of receptor sensitivity, or even structural modification of the synaptic connection (see [6] for a description of the different mechanisms involved in synaptic plasticity).

It seems important at this stage to make a clear distinction between the different approaches used to describe synaptic plasticity. The first can be identified as a 'causal description,' based on the origin of the synaptic conductance modification. The second is a 'phenomenological description,' in which the temporal evolution (i.e., the dynamics) of the synaptic change is the key element.

3.1 Causal Approach: Synaptic Learning Versus Synaptic Adaptation

Following the seminal idea of Hebb [19], a first form of plasticity is the so-called synaptic learning (Hebbian-type learning), which can be simply defined as an increase of the synaptic weight when the activity of its pre- and post-neurons increases. Many learning rules have been derived from this simple idea of 'neurons that fire together, wire together.' Hebbian-type plasticity implies that the synaptic weight evolution \(dw_{ij}/dt\) depends on the product of the activity of the pre-neuron (\(a_{i}\)) and post-neuron (\(a_{j}\)) as follows:
$$\begin{aligned} \frac{dw_{ij}}{dt} \propto a_{i} \cdot a_{j} \end{aligned}$$
(1)
This type of plasticity is defined in biology as homosynaptic plasticity [37]. Depending on the signal representation, i.e., rate coding or temporal coding, refinements (or particular cases) of Hebb's rule can be formulated, such as spike rate-dependent plasticity (SRDP) or spike timing-dependent plasticity (STDP), with the neuron activity defined as the mean firing rate or the spike timing, respectively.
A second form of synaptic plasticity can be referred to as synaptic adaptation (where adaptation stands in opposition to the notion of learning). In this case, the synaptic weight modification depends on the activity of the pre- or post-neuron only, or on the contribution of both, but through an additive process:
$$\begin{aligned} \frac{dw_{ij}}{dt} \propto a_{i} + a_{j} \end{aligned}$$
(2)
In particular, if the synaptic plasticity depends only on the post-neuron activity, the mechanism is defined as heterosynaptic plasticity; if it depends only on the pre-neuron activity, it is named transmitter-induced plasticity.
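The contrast between Eqs. (1) and (2) can be made concrete with a small Python sketch (our own illustration; the learning rates are arbitrary):

```python
def hebbian_update(w, a_pre, a_post, eta=1e-3):
    # Eq. (1): multiplicative (homosynaptic) rule, dw/dt ∝ a_i * a_j;
    # the weight only moves when pre- AND post-activity coincide.
    return w + eta * a_pre * a_post

def adaptation_update(w, a_pre, a_post, eta=1e-3):
    # Eq. (2): additive rule, dw/dt ∝ a_i + a_j. Setting a_post = 0 gives
    # transmitter-induced plasticity (pre only); setting a_pre = 0 gives
    # heterosynaptic plasticity (post only).
    return w + eta * (a_pre + a_post)

w = 0.5
print(hebbian_update(w, a_pre=10.0, a_post=0.0))     # unchanged: 0.5
print(adaptation_update(w, a_pre=10.0, a_post=0.0))  # pre-activity alone acts
```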

Practically, this distinction is very useful for classifying the different synaptic processes to be implemented and for evaluating their efficiency and contribution to the performance of the computing network. One major difficulty is that synaptic learning and synaptic adaptation can manifest simultaneously, and in practical cases it becomes much more complicated to make a clear distinction between them. In fact, learning in its broad sense (i.e., how a network becomes functional based on its past experiences) may involve both processes. Activity-independent weight modification can also be included in the description of synaptic plasticity (e.g., to describe the slow conductance decay of inactive synapses, as will be presented in the following paragraphs).

3.2 Phenomenological Approach: Short-Term Plasticity Versus Long-Term Plasticity

Another important aspect of synaptic plasticity that has to be considered is the timescale involved in the synaptic weight modification. Focusing on the synaptic plasticity dynamics observed in biological systems, the synaptic weight modification can be either permanent (i.e., lasting for months to years) or temporary (i.e., relaxing to its initial state with a characteristic time constant in the milliseconds-to-hours range). This observation leads to the definitions of long-term plasticity (LTP) and short-term plasticity (STP), respectively. We note that the boundary between long-term (LT) and short-term (ST) effects is not well defined and should be considered with respect to the task to be realized. Both STP and LTP can correspond to an increase or a decrease of the synaptic efficiency, leading to the definitions of facilitation (or potentiation) and depression, respectively. It is important to note that there is no one-to-one equivalence between the concepts of STP and LTP and the notions of short-term memory (STM) and long-term memory (LTM), which correspond to a higher abstraction level (i.e., memory in the sense of psychology). In the latter case, the information can be recalled from the network (i.e., information that has been memorized) and cannot be directly associated with a specific set of synaptic weights with a given lifetime and plasticity rule. In fact, how synaptic plasticity relates to the memorization of information, as well as how it is involved in the different timescales of memory (from milliseconds to years), remains debated.

4 Synaptic Plasticity Implementation in Neuromorphic Nanodevices

Many propositions for implementing synaptic plasticity with nanoscale memory devices have emerged in recent years. Referring to the classification proposed above, two main streams can be identified: the causal description and the phenomenological one. The first relies on implementing the origin of the synaptic plasticity, without necessarily replicating the details of the spike transmission observed in biology. The second strategy, on the contrary, aims to accurately reproduce the spike transmission properties observed in BNNs, omitting the origin of the synaptic response and highlighting instead its temporal evolution.

In this section, we will present examples of practical device implementations along these two lines. Of course, a global approach based on a combination of both descriptions (the causal and the phenomenological one) would be the ideal solution for describing the synaptic weight distribution in ANNs for the future development of neuromorphic computing.

4.1 Causal Implementation of Synaptic Plasticity

In this first part, following the causal description, we take into account the origin of the synaptic plasticity, without necessarily replicating the details of the spike transmission observed in biology.

4.1.1 Generality: Hebbian Learning

Hebbian learning has been at the basis of most of the learning strategies explored in neuromorphic computing. Hebbian-type algorithms define how a synaptic weight evolves during the learning experience and thus set the final weight distribution after learning. Starting from its simplest form, i.e., 'neurons that fire together, wire together,' a first limitation of Hebbian learning can be evidenced. Indeed, if all synapses of the network are subject to Hebbian learning (Fig. 2), all synaptic connections should converge to their maximum conductivity after some time of activity, since only potentiation is included in this rule, thus destroying the functionality of the network. A first addition to Hebb's postulate is then to introduce anti-Hebbian plasticity, which allows the synaptic conductance to decrease (i.e., depression) when both pre- and post-neuron activity is present (Fig. 2, green curve). One important consequence of this simple formulation (Hebbian and anti-Hebbian) is that the final synaptic weight distribution after learning should become bimodal (or binary), i.e., some weights become saturated at their maximum conductance (i.e., fully potentiated) while all the others saturate at their lowest conductance state (i.e., fully depressed).
Fig. 2

Representation of the Hebbian rule (purple) and Hebbian/anti-Hebbian rule (green) for a constant post-neuron activity when the pre-neuron activity (stimulation rate) is increased. The addition of anti-Hebbian learning is a prerequisite to prevent all the synaptic weights from reaching their maximal conductance

4.1.2 Time-Based Computing: Spike Timing-Dependent Plasticity

Without reviewing all the different propositions for STDP implementation in nanoscale memory devices, we want to highlight some general ideas that are at the origin of this plasticity mechanism. STDP was introduced in [2, 34] as a refinement of Hebb's rule. In this form of plasticity (synaptic learning), the precise timing of pre- and post-synaptic spikes is taken into account as a key parameter for updating the synaptic weight. In particular, the pre-synaptic spike is required to shortly precede the post-synaptic one to induce potentiation, whereas the reverse timing of pre- and post-synaptic spikes elicits depression. To understand how synaptic weights change according to this learning rule, we can focus on the process of synaptic transmission, depicted in Fig. 3.
Fig. 3

Pair-based STDP learning rules: Long-term potentiation (LTP) is achieved thanks to a constructive pulse overlap respecting the causality principle (pre-before-post). On the contrary, if there is no causal correlation between pre- and post-synaptic spikes, long-term depression (LTD) is induced

Whenever a pre-synaptic spike arrives (\(t_{pre}\)) at an excitatory synapse, a certain quantity (\(r_1\)), for example glutamate, is released into the synaptic cleft and binds to glutamate receptors. This detector variable of pre-synaptic events, \(r_1\), increases whenever there is a pre-synaptic spike and otherwise decays back to zero with a time constant \(\tau _{+}\):
$$\begin{aligned} \frac{dr_{1}}{dt} = -\frac{r_1(t)}{\tau _+} \end{aligned}$$
(3)
We emphasize that \(r_1\) is an abstract variable (i.e., a state variable). Instead of glutamate binding, it could equally well describe some other quantity that increases after pre-synaptic spike arrival. If a post-synaptic spike occurs at \(t_{post}\) at the same synapse, and its temporal difference with respect to the pre-synaptic one is not much larger than \(\tau _{+}\), the interaction between these two spikes will induce potentiation (LTP). As a consequence, the synaptic weight w(t) is updated as follows:
$$\begin{aligned} w(t) = w(t) + r_1 \cdot A_2^{+} \end{aligned}$$
(4)
If a pre-synaptic spike arrives after the post-synaptic one, another detector variable, relative to post-synaptic events (\(o_1\)), is taken into account, as shown in Fig. 3. Similarly, the dynamics of \(o_1\) can be described by a time constant \(\tau _{-}\), with \(o_1\) decaying between post-synaptic spikes as follows:
$$\begin{aligned} \frac{do_{1}}{dt} = -\frac{o_1(t)}{\tau _-} \end{aligned}$$
(5)
If the temporal difference is not much larger than \(\tau _{-}\), the spike interaction will induce depression (LTD). As a consequence, the synaptic weight w(t) is updated as follows:
$$\begin{aligned} w(t) = w(t) - o_1 \cdot A_2^{-} \end{aligned}$$
(6)
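Gathering Eqs. (3)-(6), a minimal discrete-time integration of this pair-based trace model can be sketched as follows (our own illustration; the time constants and amplitudes are placeholder values, not fitted to biological data):

```python
import numpy as np

def pair_stdp(pre_spikes, post_spikes, dt=1e-4, tau_p=17e-3, tau_m=34e-3,
              A2_plus=1e-2, A2_minus=1e-2, w0=0.5):
    """Pair-based STDP with one pre-synaptic trace r1 and one
    post-synaptic trace o1; spike arrays share the same time grid."""
    r1, o1, w = 0.0, 0.0, w0
    for pre, post in zip(pre_spikes, post_spikes):
        r1 -= dt * r1 / tau_p        # Eq. (3): decay of the pre-detector
        o1 -= dt * o1 / tau_m        # Eq. (5): decay of the post-detector
        if post:
            w += r1 * A2_plus        # Eq. (4): LTP for pre-before-post
        if pre:
            w -= o1 * A2_minus       # Eq. (6): LTD for post-before-pre
            r1 += 1.0                # the pre-spike increments its detector
        if post:
            o1 += 1.0                # the post-spike increments its detector
    return w

steps = 1000                          # 100 ms at dt = 1e-4 s
pre, post = np.zeros(steps, bool), np.zeros(steps, bool)
pre[200], post[300] = True, True      # pre leads post by 10 ms
print(pair_stdp(pre, post))           # > 0.5: the pairing potentiates
```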
One important aspect of STDP is that it exhibits both Hebbian and anti-Hebbian learning. Replicating the exact biological STDP window (Fig. 4a) is not a mandatory condition for implementing interesting learning strategies (other shapes have been reported in biology), while balancing the Hebbian/anti-Hebbian contributions remains a challenge for keeping STDP learning stable. It should be noted that the synaptic weight distribution becomes bimodal after some time of network activity if this simple STDP window is implemented [40].
Fig. 4

a Biological STDP window from [4]. In all three cases (b–d), the particular shape of the signal applied at the input (pre-neuron) and output (post-neuron) of the memory element produces an effective voltage that induces potentiation (increase of conductance) or depression (decrease of conductance), reproducing the STDP window of (a). b First proposition of STDP implementation in nanoscale bipolar memory devices, where a time-multiplexing approach was considered. In this case, the STDP window can be reproduced with high fidelity, although the spike signal is far from biorealistic. c Implementation of STDP in unipolar PCM devices. Again, the STDP window can be reproduced precisely although the signal is not biorealistic. d Proposition of STDP implementation with a bipolar memristor. Both the STDP window and the pulse shape are mapped to biorealistic observations

The memristor concept [38] provides an interesting framework for the implementation of synaptic weights (i.e., the analog property of the memory) and of STDP in particular. Nanoscale memories or 'memristive devices,' as previously introduced, are electrical resistance switches that retain a state of internal resistance based on the history of applied voltage, together with the associated memristive formalism. Such nanoscale devices provide a straightforward implementation of this bioinspired learning rule. In particular, the modulation of the memristive weight (i.e., the conductance change \(\varDelta G(W,V)\)) is controlled by an internal parameter W that depends on the physics involved in the memory effect. In most of the memory technologies used for such bioinspired computational purposes, the internal state variable W (and consequently the conductance) is controlled through the applied voltage or current (and implicitly by its duration). Mathematically, this behavior corresponds to a first-order memristor model:
$$\begin{aligned} \frac{dW}{dt} = f(W,V,t) \end{aligned}$$
(7)
with \(I =V \cdot G(W,V)\). Practically, when memristive devices are used as synapses, most STDP implementations rely on specific engineering of the spike shape that converts the time correlation (or anti-correlation) between pre- and post-spikes into a particular voltage that modifies the conductance of the memory element. The time lag induced by pre-synaptic events, analogous to the \(r_1\) variable in Fig. 3, is converted into a particular voltage across the memristor that induces an increase of conductance when a post-synaptic spike interacts with it. Similarly, the time lag induced by post-synaptic events, in analogy with the \(o_1\) variable in Fig. 3, induces depression in the form of a voltage across the memristor when it interacts with a pre-synaptic spike.
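As a toy illustration of Eq. (7), the sketch below integrates a hypothetical first-order memristor in which the state W only moves when the applied voltage exceeds a threshold, as typically assumed in spike-shape-engineered STDP schemes (all coefficients are illustrative, not taken from a real device):

```python
import numpy as np

def simulate_memristor(v_of_t, dt=1e-6, w0=0.5, mu=1e4, v_th=0.8,
                       g_min=1e-6, g_max=1e-4):
    """Forward-Euler integration of dW/dt = f(W, V), Eq. (7), with a
    threshold-type f and a window term W(1-W) keeping W in [0, 1]."""
    w = w0
    g_trace = []
    for v in v_of_t:
        if abs(v) > v_th:  # below threshold the state is frozen
            w += dt * mu * (v - np.sign(v) * v_th) * w * (1.0 - w)
            w = min(max(w, 0.0), 1.0)
        g_trace.append(g_min + w * (g_max - g_min))  # I = V * G(W)
    return np.array(g_trace)

# Two overlapping half-amplitude spikes exceed the 0.8 V threshold only
# where they add up, mimicking the pre/post voltage correlation.
v = np.concatenate([np.full(100, 0.5), np.full(100, 1.0), np.full(100, 0.5)])
g = simulate_memristor(v)
print(g[0], g[-1])  # the conductance increased only during the overlap
```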

The first implementation was proposed by Snider [36] with a time-multiplexing approach (Fig. 4b) in which, although the spike signal is far from biorealistic, the STDP window can be reproduced with high fidelity. Figure 4c shows another successful STDP implementation with a non-biorealistic signal in a phase-change memory device [22]. Depending on the particular memory device considered, different encoding strategies have been proposed with the same principle of input/output voltage correlation, in which the STDP window is mapped to biorealistic observations. Recently, by going deeper into the memristive switching behavior (i.e., by considering a higher-order memristive model), STDP was obtained with an even more biorealistic pulse shape [21], as will be explained in Sect. 4.1.4.

4.1.3 Rate-Based Computing: The BCM Learning Rule

While the STDP learning rule has been largely investigated in recent years, another refinement of Hebb's rule can be formulated in the case of rate-coding approaches. Bienenstock et al. [5] proposed in the 1980s the BCM learning rule with the concept of a 'sliding threshold' that keeps the weight distribution bounded, thus avoiding the unlimited depression and potentiation resulting from a simple Hebbian learning implementation. The BCM learning rule can be formalized as follows:
$$\begin{aligned} \frac{dw_{ij}}{dt} = \varphi (a_{j}(t)) \cdot a_{i}(t) - \varepsilon w_{ij} \end{aligned}$$
(8)
where \(w_{ij}\) is the conductance of the synapse bridging pre-neuron i and post-neuron j, \(a_{i}\) and \(a_{j}\) are the pre- and post-neuron activities, respectively, \(\varepsilon \) is a constant related to a slow decay of all the synaptic weights (this term becomes important in special cases, see [5], but is not mandatory), and \(\varphi \) is a scalar function parametrized as follows:
$$ \begin{aligned} \varphi (a_{j})<0 \,\, for \,\, a_{j}< \theta _{m} \quad \& \quad \varphi (a_{j})>0 \,\, for \,\, a_{j}> \theta _{m} \end{aligned}$$
where \(\theta _{m}\) is a threshold that depends on the mean activity of the post-neuron. A first-order analysis of this simple learning rule can be made. (i) Both Hebbian-type learning (product of \(a_{i}\) and \(a_{j}\)) and adaptation (through the small decay term, which is not related to pre- and post-neuron activities) are present in this rule. (ii) The threshold ensures that both Hebbian and anti-Hebbian plasticity can be obtained through the scalar function \(\varphi \), which can take positive and negative values (potentiation and depression). (iii) The 'sliding threshold effect' corresponds to the displacement of the threshold as a function of the post-neuron activity and is a key ingredient for preventing the synaptic weight distribution from becoming bimodal. Indeed, if the mean post-neuron activity is high, any pre-neuron activity will most probably induce potentiation. If \(\theta _{m}\) is now increased when the mean post-neuron activity increases, the probability of depression will increase (or at least the magnitude of potentiation will be reduced), consequently limiting the potentiation of the weight (Fig. 5).
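A minimal numerical sketch of Eq. (8) with a sliding threshold is shown below (our own illustration: the choice \(\varphi (a_{j})=a_{j}(a_{j}-\theta _{m})\) and a running average of \(a_{j}^{2}\) for \(\theta _{m}\) follow common practice, and all coefficients are arbitrary):

```python
def bcm_step(w, a_pre, a_post, theta_m, dt=1e-3, eta=1e-3, eps=1e-4,
             tau_theta=10.0):
    """One Euler step of the BCM rule, Eq. (8). phi is negative below the
    threshold theta_m (depression) and positive above it (potentiation)."""
    phi = a_post * (a_post - theta_m)
    w += dt * (eta * phi * a_pre - eps * w)          # Eq. (8)
    # Sliding threshold: theta_m tracks a running average of a_post**2,
    # so sustained high post-activity raises the bar for potentiation.
    theta_m += dt * (a_post ** 2 - theta_m) / tau_theta
    return w, theta_m

w, theta_m = 0.5, 4.0
for _ in range(1000):  # sustained joint activity: theta_m slides upward,
    w, theta_m = bcm_step(w, a_pre=10.0, a_post=8.0, theta_m=theta_m)
print(w, theta_m)      # crossing a_post and turning LTP into LTD
```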
Fig. 5

BCM learning rule representation. The synaptic weight modification is represented as a function of pre-neuron activity for a fixed post-neuron activity. The sliding threshold depends on the mean post-neuron activity, i.e., \(\theta _{m}\) is increased if \(a_{j}\) increases while \(\theta _{m}\) is decreased if \(a_{j}\) decreases, thus preventing unlimited synaptic weight modification

The BCM learning rule was initially proposed for rate-coding approaches and was measured in BNNs in the long-term regime of synaptic plasticity. It has been shown to maximize the selectivity of the post-neuron [5]. Only a few works have partially demonstrated the BCM rule in nanoscale memory devices, each with some limitations. Lim et al. [25] proposed to describe the weight saturation in \(TiO_{2}\) electrochemical cells subject to rate-based input. This work demonstrated the sliding threshold effect describing the saturation of the weight during potentiation and depression, but did not reproduce the Hebbian/anti-Hebbian transition. Ziegler et al. [47] demonstrated the sliding threshold effect in the long-term regime but without explicitly considering a rate-coding approach, i.e., the neuron activity was simply associated with the pre- and post-neuron voltages. Kim et al. [21] proposed an adaptation of the BCM rule in a second-order memristor, as will be presented in the next section, but in a transmitter-induced plasticity context, thus missing the Hebbian-type plasticity initially proposed in the BCM framework. Future works are expected to provide a stronger analogy with the BCM rule, both from a phenomenological point of view (i.e., biorealistic rate-coding implementation) and from a causal point of view (i.e., reproducing all the aspects of the BCM rule).

4.1.4 Reconciliation of BCM with STDP

On the one hand, the importance of individual spikes and their respective timing can only be described in the context of STDP. The time response in the visual cortex being on the order of 100 ms, rate-coding approaches are unlikely to offer a convenient description of such processes, while time coding could. On the other hand, the simple STDP function misses the rate-coding property observed in BNNs and conveniently described in the BCM framework. More precisely, in the case of pair-based STDP, both potentiation and depression are expected to decrease as the mean activity frequency of the network increases, while BNNs show the opposite trend. Izhikevich et al. [20] proposed that classical pair-based STDP, implemented with nearest-neighbor spike interactions, can be mapped to the BCM rule. However, this model fails to capture the frequency dependence [35] when pairs of spikes are presented at different frequencies [14].

From a neurocomputational point of view, Gjorgjieva et al. [18] proposed a triplet STDP model based on the interactions of three consecutive spikes as a generalization of the BCM theory. This model is able to describe plasticity experiments that the classical pair-based STDP rule fails to capture, and it is sensitive to higher-order spatio-temporal correlations, which exist in natural stimuli and have been measured in the brain. As in the pair-based case, to understand how synaptic weights change according to this learning rule, we can focus on the process of synaptic transmission, depicted in Fig. 6.
Fig. 6

Triplet-based STDP learning rules. a Synaptic weight potentiation (LTP) is achieved thanks to (post-pre-post) spike interactions, as a result of the relative time lag of the detector-variable dynamics. Similarly, synaptic weight depression (LTD) is induced by (pre-post-pre) spike interactions. b Synaptic weight evolution as a function of the time correlation of pre- and post-synaptic spikes

Instead of having only one process triggered by a pre-synaptic spike, it is possible to consider several different quantities that increase in the presence of a pre-synaptic spike. We can thus consider \(r_1\) and \(r_2\), two different detector variables of pre-synaptic events, whose dynamics are described by two time constants \(\tau _+\) and \(\tau _x\) (\(\tau _x >\tau _+\)). Between pre-synaptic spikes, the detectors decay as follows:
$$ \begin{aligned} \frac{dr_{1}}{dt} = -\frac{r_1(t)}{\tau _+} \quad \& \quad \frac{dr_{2}}{dt} = -\frac{r_2(t)}{\tau _x} \end{aligned}$$
(9)
Similarly, we can consider \(o_1\) and \(o_2\), two different detector variables of post-synaptic events, whose dynamics are described by two time constants \(\tau _-\) and \(\tau _y\) (\(\tau _y >\tau _-\)). Between post-synaptic spikes, the detectors decay as follows:
$$ \begin{aligned} \frac{do_{1}}{dt} = -\frac{o_1(t)}{\tau _-} \quad \& \quad \frac{do_{2}}{dt} = -\frac{o_2(t)}{\tau _y} \end{aligned}$$
(10)
We assume that the weight increases after post-synaptic spike arrival by an amount that is proportional to the value of the pre-synaptic variable \(r_1\) but depends also on the value of the second post-synaptic detector \(o_2\). Hence, post-synaptic spike arrival at time \(t_{post}\) triggers a change given by the following:
$$\begin{aligned} w(t) = w(t) + r_1(t) \cdot (A_2^{+} + A_3^{+}o_2(t)) \end{aligned}$$
(11)
Similarly, a pre-synaptic spike at time \(t_{pre}\) triggers a change that depends on the post-synaptic variable \(o_1\) and the second pre-synaptic variable \(r_2\) as follows:
$$\begin{aligned} w(t) = w(t) - o_1(t) \cdot (A_2^{-} + A_3^{-}r_2(t) ) \end{aligned}$$
(12)
As done previously, we emphasize that \(r_1\), \(r_2\), \(o_1\), and \(o_2\) are abstract variables that do not identify with specific biophysical quantities. Biological candidates for detectors of pre-synaptic events are, for example, the amount of glutamate bound [9] or the number of NMDA receptors in an activated state [34]. The post-synaptic detectors \(o_1\) and \(o_2\) could represent the influx of calcium through voltage-gated \(Ca^{2+}\) channels and NMDA channels [9] or the number of secondary messengers in a deactivated state of the NMDA receptor [34].
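Putting Eqs. (9)-(12) together, the following sketch extends the pair-based loop above to the triplet rule (again our own illustration; all amplitudes \(A_2^{\pm}\), \(A_3^{\pm}\) and time constants are placeholders):

```python
def triplet_stdp(pre_spikes, post_spikes, dt=1e-4,
                 tau_p=17e-3, tau_x=100e-3, tau_m=34e-3, tau_y=120e-3,
                 A2p=5e-3, A3p=6e-3, A2m=7e-3, A3m=2e-3, w0=0.5):
    """Triplet STDP with two pre-detectors (r1, r2) and two
    post-detectors (o1, o2); weight updates read the detector values
    just before the current spike increments them."""
    r1 = r2 = o1 = o2 = 0.0
    w = w0
    for pre, post in zip(pre_spikes, post_spikes):
        # Eqs. (9)-(10): exponential decay of the four detector variables
        r1 -= dt * r1 / tau_p; r2 -= dt * r2 / tau_x
        o1 -= dt * o1 / tau_m; o2 -= dt * o2 / tau_y
        if post:
            w += r1 * (A2p + A3p * o2)   # Eq. (11): pair + triplet LTP
        if pre:
            w -= o1 * (A2m + A3m * r2)   # Eq. (12): pair + triplet LTD
        if pre:
            r1 += 1.0; r2 += 1.0
        if post:
            o1 += 1.0; o2 += 1.0
    return w
```

With a post-pre-post triplet, the second potentiation event is amplified by the residue of \(o_2\) left by the first post-synaptic spike, which is how the rule recovers the frequency dependence missing from the pair-based model.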

A possible solution for implementing this generalized rule, which embraces both BCM theory and STDP, was first proposed by Mayr et al. [31] in \(BiFeO_{3}\) memristive devices. They succeeded in implementing triplet STDP through more complex spike-shape engineering that encodes the time interaction between more than two pulses into a particular voltage able to modify the conductance of the memory element. The triplet STDP rule has also been realized by Williamson et al. [43] in asymmetric \(TiO_{2}\) memristors in a hybrid neuron/memristor system. Subramaniam et al. [39] have used the triplet STDP rule in a compact electronic circuit in which the neuron consists of a spiking soma circuit fabricated with nanocrystalline-silicon thin-film transistors (ns-Si TFTs), with a nanoparticle TFT-based short-term memory device and an \({ HfO_{2}}\) memristor as the synapse.

Another generalized description, in which time- and rate-coding approaches are taken into account at the same time and implemented in an amorphous InGaZnO memristor, has been proposed by Wang et al. [42]. In addition to the conventional ion migration induced by the application of voltage pulses, another physical mechanism is at play in the device operation: the ion concentration gradient leads to ion diffusion, resulting in an additional state variable. Kim et al. [21] recently proposed a second-order memristor that offers an interesting solution toward this goal of reconciling various learning mechanisms in a single memory device.

Mathematically, in analogy to the previous definition, a second-order memristor model can be described as follows:
$$ \begin{aligned} \frac{dW_{1}}{dt} = f_1(W_1,W_2,V,t) \quad \& \quad \frac{dW_{2}}{dt} = f_2(W_1,W_2,V,t) \end{aligned}$$
(13)
with \(I= V \cdot G(W_1,W_2,V,t)\), implemented with a simple nonoverlapping pulse protocol for the synaptic weight modulation.
The interest of this higher-order memristor description is to provide additional parameters that enable higher-order interactions between pulses (i.e., more than two) while preserving the pair-based interaction. More precisely, as shown in Fig. 7a, the temperature has been proposed as a second-order state variable that exhibits short-term dynamics and naturally encodes information on the relative timing of synapse activity. By exploiting these two state variables (i.e., the conductance and the temperature), STDP has been implemented, as shown in Fig. 7a. Specifically, the first 'heating' spike elicits an increase of the device temperature by Joule effect, regardless of the pulse polarity; the temperature then tends to relax naturally after the removal of the stimulation. Temporal summation of the thermal effect can occur and induce an additional increment of the device temperature if the second 'programming' spike is applied before T has decayed to its resting value.
Fig. 7

Second-order memristor model. a Right: the modulated second-order state variable exhibits short-term dynamics and naturally encodes information on the relative timing of synapse activity. Left: STDP implementation, showing the memristor conductance change as a function of only two spikes (i.e., each spike consists of a programming pulse and a heating pulse) [21]. b Right: simulation results illustrating how the short-term behavior affects the long-term weight change. The difference in long-term weight is caused by the different residual values of the second state variable at the moment when the second pulse is applied. The first and second state variables are shown under two conditions (interval between two pulses \( \varDelta t = 20, 90\) ms). Left: memristor weight change as a function of the relative timing between the pre- and post-synaptic pulses without pulse overlap (STDP implementation) [17]

A longer time interval induces a smaller conductance change because heat dissipation leads to a lower residual T when the second spike is applied. Thus, the amount of conductance change (long-term dynamics) can be tuned by the relative timing of the pulses, encoded in the short-term dynamics of the second state variable (i.e., the temperature T).
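The temperature-mediated interaction can be sketched with a two-variable update in the spirit of Eq. (13) (our own toy model, loosely inspired by [21]; none of the coefficients correspond to a real device):

```python
def second_order_step(w, T, v, dt=1e-6, tau_T=5e-3, c_heat=100.0, k0=1e2):
    """One Euler step of a hypothetical second-order memristor:
    W1 = conductance-controlling state w, W2 = temperature T
    (expressed relative to the resting temperature, hence resting T = 0)."""
    # W2 dynamics: Joule heating (polarity independent, hence v**2)
    # plus relaxation toward the resting temperature.
    T += dt * (c_heat * v ** 2 - T / tau_T)
    # W1 dynamics: the switching rate grows with the residual T, so a
    # second pulse arriving before T has relaxed changes w more strongly.
    w += dt * k0 * v * (1.0 + T) * w * (1.0 - w)
    return min(max(w, 0.0), 1.0), T
```

Applying two short nonoverlapping pulses with this update yields a larger net conductance change when the second pulse arrives before T has decayed, qualitatively reproducing the timing sensitivity of Fig. 7.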

Du et al. [17] have proposed another second-order memristor model. In this case too, two state variables are used to describe an oxide-based memristor. The first one, as in the previous example, directly determines the device conductance (i.e., the synaptic weight); specifically, it represents the area of the conducting channel region in the oxide memristor. The second state variable represents the oxygen vacancy mobility in the film, which directly affects the dynamics of the first state variable but only indirectly modulates the device conductance (Fig. 7a). Like the temperature T above, this second state variable is increased by the application of a pulse, then tends to relax to its initial value, and affects the first state variable by increasing the amount of conductance change on a short timescale. By exploiting this second-order memristor model, Du et al. [17] have demonstrated that STDP can be implemented in an oxide-based memristor with simple nonoverlapping pre- and post-synaptic spike pairs, rather than through engineering of the pulse shape (Fig. 7b).

In neurobiology, the timing information is intrinsically embedded in the internal synaptic mechanisms. Malenka and Bear [27] have shown that, together with the neurotransmitter dynamics in the pre-synaptic connection, secondary internal state variables, such as the natural decay of the post-synaptic calcium ion (\(Ca^{2+}\)) concentration, are involved in the synaptic weight modulation. Synaptic plasticity can thus be achieved with simple nonoverlapping spikes and tuned by the synaptic activity (i.e., rate- and timing-dependent spikes), which brings an interesting analogy between these biological processes and the material implementations described above [18].

The hypothesis that several synaptic functions manifest simultaneously and are interrelated at the synaptic level seems accepted across different scientific communities. Recent biological studies indicate that multiple plasticity mechanisms contribute to cerebellum-dependent learning [8]. Multiple plasticity mechanisms may provide the flexibility required to store memories over different timescales, encoding the dynamics involved. From a computational point of view, Zenke et al. [46] have recently proposed using multiple plasticity mechanisms at different timescales. Instead of focusing on particular and local learning schemes, their strategy aims to create memory and learning functions through the interplay of multiple plasticity mechanisms. Following this trend of multi-scale plasticity mechanisms, Mayr et al. [30] have realized a VLSI implementation in which short-term, long-term, and meta-plasticity interact with each other at different timescales to tune the overall synaptic weight distribution.

4.2 Phenomenological Implementation of Synaptic Plasticity

In this section, we follow the second approach to describing synapses: the phenomenological one. The spike transmission properties observed in BNNs are presented through the temporal evolution of the synaptic weight.

4.2.1 STP in a Single Memristive Nanodevice

As previously mentioned, transmitter-induced plasticity is a particular form of synaptic adaptation that depends only on pre-neuron activity. From a phenomenological point of view, such plasticity is most often observed on short timescales, thus belonging to the class of STP. As shown in Fig. 8b, this STP regime is frequency dependent and can be used to modulate the synaptic weight distribution as a function of network activity. From a biological viewpoint, a phenomenological model of frequency-dependent synaptic transmission was used to describe the synaptic response in the STP regime [28]. The primary synaptic parameters are the absolute synaptic efficacy (A), the utilization of synaptic efficacy (U), the recovery from depression (\(\tau _{rec}\)), and the recovery from facilitation (\(\tau _{facil}\)) (Fig. 8a). In this model, the synaptic response depends on the finite amount of neurotransmitter resources in the pre-synaptic neuron and their respective dynamics (utilization and recovery), and on the absolute efficacy of the synaptic connection, which could depend, for example, on the sensitivity of the post-synaptic neuron receptors. The most likely biophysical mechanisms underlying changes in the values of these synaptic parameters were considered in [28].
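The per-spike iteration of this model can be sketched as follows (one common discrete formulation; our own code, with arbitrary parameter values):

```python
import numpy as np

def tsodyks_markram(spike_times, A=1.0, U=0.3, tau_rec=0.5, tau_facil=0.05):
    """Return the synaptic response amplitude for each pre-synaptic spike.
    R is the fraction of recovered resources, u the running utilization."""
    R, u = 1.0, 0.0
    t_prev = None
    responses = []
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            R = 1.0 + (R - 1.0) * np.exp(-dt / tau_rec)   # recovery toward 1
            u = u * np.exp(-dt / tau_facil)               # facilitation decay
        u = u + U * (1.0 - u)        # utilization jump at spike arrival
        responses.append(A * R * u)  # amplitude of this synaptic event
        R = R * (1.0 - u)            # depletion of the released resources
        t_prev = t
    return responses

# A 20 Hz train: with these parameters depression dominates and the
# response amplitudes decay toward a steady state (low-pass behavior).
print(np.round(tsodyks_markram(np.arange(0.0, 0.5, 0.05)), 3))
```

Increasing \(\tau _{facil}\) relative to \(\tau _{rec}\) turns the same iteration into a facilitating (high-pass) synapse.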
Fig. 8

Phenomenological model of frequency-dependent synaptic transmission. a Each action potential (AP) utilizes a fraction U of the available/recovered synaptic efficacy R. When an AP arrives, U is increased by an amplitude u and becomes a variable, \(U_1\). b Phenomenology of changing the absolute synaptic efficacy parameter A. Left: synaptic responses of depressing synapses when A is increased 1.7-fold. Right: synaptic responses of facilitating synapses when A is increased 1.7-fold. Adapted from [29]

If we consider a temporal-coding approach in which pulses are treated as discrete events, STP can be evidenced through the notion of paired-pulse facilitation (PPF), corresponding to the enhancement of a pulse transmission when the pulse closely follows a prior impulse. The counter-effect (i.e., corresponding to depression) is referred to as paired-pulse depression (PPD). If we now focus on rate-coding approaches, facilitation and depression can be simply described as high-pass and low-pass filters, respectively. Depending on the mean firing rate of the synapse, the signal can be enhanced or depressed when the pre-neuron frequency is increased. A simple material implementation of such a mechanism can be realized with passive RC circuits. However, RC circuits with time constants in the milliseconds-to-seconds range require very large capacitances occupying a large area (even at low-current operation), which is a severe limitation for the hardware implementation of STP. Alternative approaches can realize such dynamical effects more efficiently by taking advantage of physical mechanisms present in nanoscale memory devices.

The first proposition of STP with nanodevices was realized in a nanoparticle/organic memory field-effect transistor (NOMFET) [3]. The basic principle of this device is equivalent to that of a floating-gate transistor. Charges stored in the nanoparticles modify the channel conductivity via Coulomb repulsion between the carriers (holes) and the charged nanoparticles. The particularity of this device lies in its leaky memory behavior: Charges stored in the nanoparticles tend to relax with a characteristic time constant in the 100–200 ms range [16]. When the NOMFET is connected in a diode-like configuration (Fig. 9a), each input spike (with a negative voltage value) charges the nanoparticles and decreases the NOMFET conductivity. Between pulses, charges escape from the nanoparticles and the conductivity relaxes toward its resting value. By analogy with biology, this device mimics the STP observed in depressing synapses (Fig. 9), as described in [1]. As a matter of comparison, this synaptic functionality is realized with a single memory transistor, whereas its implementation in Si-based technologies (i.e., CMOS) requires 7 transistors [7].
Fig. 9

STP implementation in a NOMFET. a Schematic representation of the NOMFET and the pseudo-two-terminal connection of the device. b Comparison between the frequency-dependent post-synaptic potential response of a depressing synapse (lines) and the iterative model of Varela et al. (dots), adapted from [41], as a function of the frequency of the pre-synaptic input signal. c Response (drain current) of a NOMFET with an L/W ratio of \(12\,\upmu \)m/113 \(\upmu \)m and a NP size of 5 nm to sequences of spikes at different frequencies (pulse voltage \(Vp=-30\) V)

STP has also been demonstrated in two-terminal devices, which would ensure higher device density when integrated into complex systems. In two-terminal devices, STP is likewise implemented by taking advantage of the volatility of the different memory technologies (i.e., the low retention of the state, often a drawback in pure memory applications). Redox systems based on electrochemical memory cells (ECM) [33] or valence change memory (VCM) [12, 44] have demonstrated STP with a facilitating behavior. In such devices, short-term plasticity is ensured by the low stability of the conducting filaments, which tend to dissolve, thus relaxing the device toward the insulating state. \({ TiO_{2}}\) VCM cells have been reported with both facilitating and depressing behavior [25], with relaxation related to an oxidation-reduction counter-reaction. Protonic devices have demonstrated STP with a depressing functionality due to the latency of proton recovery from the atmosphere required to restore the proton concentration and conductivity [15].

In terms of functionality, Abbott et al. [1] have demonstrated that depressing synapses with STP act as a gain control device (at high frequency, i.e., high synaptic activity, the synaptic weight is decreased, thus lowering the signal when the activity becomes too high). More generally, STP (both depressing and facilitating) provides a very important frequency-coding property (as depicted in Fig. 8) that could play a major role in the processing of spike rate-coded information. Indeed, if a simple integrate-and-fire (\( I \& F\)) neuron is associated with static weights (with no dependence on spike frequency), the computing node (i.e., neuron and synapses) is only a linear filter (a linear combination of the different inputs), while STP turns the node into a nonlinear one. This property (i.e., locally induced nonlinearity in spike signal transmission) has been used to implement reservoir-computing approaches, as proposed by Buonomano and Maass [10] with the liquid-state machine, and could be an important computational property of biological systems.

4.2.2 Coexistence of STP and LTP in the Same Memristive Nanodevice

While the contribution of short-term and long-term processes to computing is not completely understood in biological systems, both STP and LTP effects have been evidenced in synaptic connections and should play a crucial role. A first approach is to consider that the repetition of short-term effects should lead to a long-term modification of the synaptic connection. This behavior would explain the important hypothesis of memory consolidation in the sense of psychology [24]. Ohno et al. [33] reported for the first time the transition from short-term to long-term potentiation in an atomic-bridge technology (Fig. 10). Considering again transmitter-induced plasticity dependent on the pre-synaptic activity (associated with the spike rate in this case), the synaptic conductivity is increased by the formation of a silver (Ag) filament across the insulating gap. At low frequency the bridge tends to relax between pulses, while higher frequencies lead to a strong filament that maintains the device in the ON state. These results suggest a critical size of the bridging filament above which the conductive state remains stable (i.e., providing LTP of the synaptic connection).
Fig. 10

STP and LTP implementation in an ECM cell depending on the input pulse repetition time. a Schematic representation of the \(Ag_{2}S\) ECM cell and of the signal transmission at a biological synapse. The application of input pulses causes the precipitation of Ag atoms from the \(Ag_{2}S\) electrode, resulting in the formation of an Ag atomic bridge between the \(Ag_{2}S\) electrode and a counter metal electrode. When the precipitated Ag atoms do not form a bridge, the ECM cell works in the STP regime; after an atomic bridge is formed, it works in the LTP regime. b Frequent stimulation (\(T=2\) s) causes a long-term enhancement of the strength of the synaptic connection, while a short-term enhancement is induced at lower frequency (\(T=20\) s) [33]

Similar results have been obtained in a variety of memory devices in which filamentary switching displays two regimes of volatility. Wang et al. [42] have shown that the STP-to-LTP transition can occur through repeated 'stimulation' training. When an oxide-based memristive device is stimulated sequentially with 100 positive pulses, the synaptic weight gradually increases with the number of pulses. Once the applied voltage is removed and no external input is present, the synaptic weight spontaneously decays; it does not relax to the initial state, however, but stabilizes at an intermediate state, which means that the change of synaptic weight consists of two parts: STP and LTP.
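This decomposition can be captured by a simple double-component relaxation model (our own illustrative fit form, not the model of [42]):

```python
import numpy as np

def weight_relaxation(t, w_peak, w_ltp, tau_stp=1.0):
    """Weight relaxation after stimulation: a volatile STP component
    decaying with time constant tau_stp rides on top of a nonvolatile
    LTP component w_ltp at which the weight finally stabilizes."""
    return w_ltp + (w_peak - w_ltp) * np.exp(-t / tau_stp)

t = np.linspace(0.0, 5.0, 6)
print(weight_relaxation(t, w_peak=1.0, w_ltp=0.4))
# decays toward 0.4 (the consolidated mid-state), not toward the initial state
```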

Chang et al. [13] have evidenced, in \({ WO_{3}}\) oxide cells, a continuous evolution of the volatility as a function of the conductivity level of the device, attributed to the competition between oxygen vacancy drift (creation of a conductive path across the device) and lateral diffusion (disruption of the conducting filament). Another description of these two regimes of volatility invokes a competition between surface and volume energies in the conductive filament [45].

4.2.3 Conflict Between Causal and Phenomenological Description

While this concept of an ST-to-LT transition has been well demonstrated in a variety of nanoscale memory devices, we have to emphasize that it has always been reported in the context of transmitter-induced plasticity (more precisely, synaptic adaptation, a non-Hebbian form of plasticity). In biology, the facilitating processes observed on short timescales (i.e., transmitter-induced STP), associated with an increase of the neurotransmitter release probability during a burst of spikes (i.e., corresponding to an increase of synaptic efficiency at high spiking rates), are additive with LTP [6], which could be associated with a Hebbian-type plasticity involving both pre- and post-neuron activities. In other words, a causal description makes a clear distinction between the origins of ST and LT plasticity while a phenomenological description (Fig. 10) does not. Indeed, during a high-frequency burst of spikes associated with transmitter-induced plasticity, the firing of the post-neuron is favored, which should lead to both pre- and post-activity and thus to Hebbian-type LTP. In the neuromorphic implementations described above, the transition between STP and LTP is associated with a single parameter (such as the mean firing rate of the pre-neuron), and the ST and LT regimes cannot be uncorrelated (i.e., ST will lead to the LT regime). The device state moves sequentially from one regime to the other via transmitter-induced plasticity only. It should be noted that this effect induces some restrictions in terms of (i) network configurability, since non-Hebbian and Hebbian-type learning cannot be dissociated, and (ii) network functionality, since the synaptic connection moves from a nonlinear conductance in the ST regime (i.e., frequency dependent) to a linear conductance in the LT regime. Alternative approaches are still needed, as proposed by Cantley et al. [11], where short-term and long-term processes are realized by two different devices (a leaky floating-gate transistor and a nonvolatile two-terminal device) in order to match the complexity of biological synapses.

Another approach [23] relies on the fact that ECM cells are multi-filamentary systems providing one additional parameter for the modulation of the device conductance: Either the number of filaments or the size of a single filament can produce an increase of conductivity, while these two situations lead to different volatility properties (Fig. 11). The independent control of these two parameters, both leading to potentiation, offers the possibility of dissociating different forms of plasticity and of reproducing synaptic plasticity in a more biorealistic way. In particular, in the case of multi-filamentary ECM cells, an independent control of the number of filaments and of the width of each individual filament was proposed in order to reproduce different potentiations with both ST and LT regimes.

5 Conclusions

We have presented various plasticity mechanisms that have been implemented in nanoscale memory devices, promising candidates for future biomimetic hardware systems. The different examples described above are of course far from exhaustive, but they offer a tentative classification and formalization of synaptic plasticity in nanoscale devices. Future works should provide more complex device systems with richer features embedded in nanoscale components, paving the way to complex neuromorphic computing systems. Notably, while we have focused only on the synaptic aspect, important efforts are still needed to implement the neurons and synaptic interconnections that will determine the applicability of the different concepts exposed in this chapter. Indeed, since synaptic elements need to be implemented in a high-density architecture, major challenges in terms of practical operating conditions and interconnection strategies have to be taken into account.

Finally, since neuromorphic computing is an emerging field evolving between ANNs and BNNs, strong interdisciplinary approaches will be valuable for its future development.

Acknowledgements

The authors thank Dr. Dominique Vuillaume for careful reading of the manuscript and Dr. Damien Querlioz for fruitful discussions. This work was supported by ANR-12-PDOC-0027-01 (Grant DINAMO).

References

1. Abbott, L., Varela, J., Sen, K., Nelson, S.: Synaptic depression and cortical gain control. Science 275(5297), 221–224 (1997)
2. Abbott, L.F., Nelson, S.B.: Synaptic plasticity: taming the beast. Nat. Neurosci. 3, 1178–1183 (2000)
3. Alibart, F., Pleutin, S., Guérin, D., Novembre, C., Lenfant, S., Lmimouni, K., Gamrat, C., Vuillaume, D.: An organic nanoparticle transistor behaving as a biological spiking synapse. Adv. Funct. Mater. 20(2), 330–337 (2010)
4. Bi, G.-Q., Poo, M.-M.: Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength, and postsynaptic cell type. J. Neurosci. 18(24), 10464–10472 (1998)
5. Bienenstock, E.L., Cooper, L.N., Munro, P.W.: Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci. 2(1), 32–48 (1982)
6. Bliss, T.V., Collingridge, G.L., et al.: A synaptic model of memory: long-term potentiation in the hippocampus. Nature 361(6407), 31–39 (1993)
7. Boegerhausen, M., Suter, P., Liu, S.-C.: Modeling short-term synaptic depression in silicon. Neural Comput. 15(2), 331–348 (2003)
8. Boyden, E.S., Katoh, A., Raymond, J.L.: Cerebellum-dependent learning: the role of multiple plasticity mechanisms. Annu. Rev. Neurosci. 27 (2004)
9. Buonomano, D.V., Karmarkar, U.R.: How do we tell time? Neuroscientist 8(1), 42–51 (2002)
10. Buonomano, D.V., Maass, W.: State-dependent computations: spatiotemporal processing in cortical networks. Nat. Rev. Neurosci. 10(2), 113–125 (2009)
11. Cantley, K.D., Subramaniam, A., Stiegler, H.J., Chapman, R., Vogel, E.M., et al.: Hebbian learning in spiking neural networks with nanocrystalline silicon TFTs and memristive synapses. IEEE Trans. Nanotechnol. 10(5), 1066–1073 (2011)
12. Chang, S.H., Lee, S.B., Jeon, D.Y., Park, S.J., Kim, G.T., Yang, S.M., Chae, S.C., Yoo, H.K., Kang, B.S., Lee, M.-J., et al.: Oxide double-layer nanocrossbar for ultrahigh-density bipolar resistive memory. Adv. Mater. 23(35), 4063–4067 (2011)
13. Chang, T., Jo, S.-H., Kim, K.-H., Sheridan, P., Gaba, S., Lu, W.: Synaptic behaviors and modeling of a metal oxide memristive device. Appl. Phys. A 102(4), 857–863 (2011)
14. Clopath, C., Büsing, L., Vasilaki, E., Gerstner, W.: Connectivity reflects coding: a model of voltage-based STDP with homeostasis. Nat. Neurosci. 13(3), 344–352 (2010)
15. Deng, Y., Josberger, E., Jin, J., Roudsari, A.F., Helms, B.A., Zhong, C., Anantram, M., Rolandi, M.: H+-type and OH−-type biological protonic semiconductors and complementary devices. Sci. Rep. 3 (2013)
16. Desbief, S., Kyndiah, A., Guerin, D., Gentili, D., Murgia, M., Lenfant, S., Alibart, F., Cramer, T., Biscarini, F., Vuillaume, D.: Low voltage and time constant organic synapse-transistor. Org. Electron. 21, 47–53 (2015)
17. Du, C., Ma, W., Chang, T., Sheridan, P., Lu, W.D.: Biorealistic implementation of synaptic functions with oxide memristors through internal ionic dynamics. Adv. Funct. Mater. 25(27), 4290–4299 (2015)
18. Gjorgjieva, J., Clopath, C., Audet, J., Pfister, J.-P.: A triplet spike-timing-dependent plasticity model generalizes the Bienenstock–Cooper–Munro rule to higher-order spatiotemporal correlations. Proc. Natl. Acad. Sci. 108(48), 19383–19388 (2011)
19. Hebb, D.O.: The first stage of perception: growth of the assembly. In: The Organization of Behavior, pp. 60–78. Wiley, New York (1949)
20. Izhikevich, E.M.: Simple model of spiking neurons. IEEE Trans. Neural Netw. 14(6), 1569–1572 (2003)
21. Kim, S., Du, C., Sheridan, P., Ma, W., Choi, S., Lu, W.D.: Experimental demonstration of a second-order memristor and its ability to biorealistically implement synaptic plasticity. Nano Lett. 15(3), 2203–2211 (2015)
22. Kuzum, D., Jeyasingh, R.G., Lee, B., Wong, H.-S.P.: Nanoelectronic programmable synapses based on phase change materials for brain-inspired computing. Nano Lett. 12(5), 2179–2186 (2011)
23. La Barbera, S., Vuillaume, D., Alibart, F.: Filamentary switching: synaptic plasticity through device volatility. ACS Nano 9(1), 941–949 (2015)
24. Lamprecht, R., LeDoux, J.: Structural plasticity and memory. Nat. Rev. Neurosci. 5(1), 45–54 (2004)
25. Lim, J., Ryu, S.Y., Kim, J., Jun, Y.: A study of TiO2/carbon black composition as counter electrode materials for dye-sensitized solar cells. Nanoscale Res. Lett. 8(1), 1–5 (2013)
26. Maass, W., Natschläger, T.: Networks of spiking neurons can emulate arbitrary Hopfield nets in temporal coding. Netw. Comput. Neural Syst. 8(4), 355–371 (1997)
27. Malenka, R.C., Bear, M.F.: LTP and LTD: an embarrassment of riches. Neuron 44(1), 5–21 (2004)
28. Markram, H., Lübke, J., Frotscher, M., Sakmann, B.: Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 275(5297), 213–215 (1997)
29. Markram, H., Pikus, D., Gupta, A., Tsodyks, M.: Potential for multiple mechanisms, phenomena and algorithms for synaptic plasticity at single synapses. Neuropharmacology 37(4), 489–500 (1998)
30. Mayr, C., Partzsch, J., Noack, M., Schüffny, R.: Live demonstration: multiple-timescale plasticity in a neuromorphic system. In: ISCAS, pp. 666–670 (2013)
31. Mayr, C., Stärke, P., Partzsch, J., Cederstroem, L., Schüffny, R., Shuai, Y., Du, N., Schmidt, H.: Waveform driven plasticity in BiFeO3 memristive devices: model and implementation. In: Advances in Neural Information Processing Systems, pp. 1700–1708 (2012)
32. McCulloch, W.S., Pitts, W.: A logical calculus of the ideas immanent in nervous activity. Bull. Math. Biophys. 5(4), 115–133 (1943)
33. Ohno, T., Hasegawa, T., Tsuruoka, T., Terabe, K., Gimzewski, J.K., Aono, M.: Short-term plasticity and long-term potentiation mimicked in single inorganic synapses. Nat. Mater. 10(8), 591–595 (2011)
34. Senn, W., Markram, H., Tsodyks, M.: An algorithm for modifying neurotransmitter release probability based on pre- and postsynaptic spike timing. Neural Comput. 13(1), 35–67 (2001)
35. Sjöström, P.J., Turrigiano, G.G., Nelson, S.B.: Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron 32(6), 1149–1164 (2001)
36. Snider, G.S.: Spike-timing-dependent learning in memristive nanodevices. In: IEEE International Symposium on Nanoscale Architectures (NANOARCH 2008), pp. 85–92. IEEE (2008)
37. Sourdet, V., Debanne, D.: The role of dendritic filtering in associative long-term synaptic plasticity. Learn. Mem. 6(5), 422–447 (1999)
38. Strukov, D.B., Snider, G.S., Stewart, D.R., Williams, R.S.: The missing memristor found. Nature 453(7191), 80–83 (2008)
39. Subramaniam, A., Cantley, K.D., Bersuker, G., Gilmer, D., Vogel, E.M.: Spike-timing-dependent plasticity using biologically realistic action potentials and low-temperature materials. IEEE Trans. Nanotechnol. 12(3), 450–459 (2013)
40. Van Rossum, M.C., Bi, G.Q., Turrigiano, G.G.: Stable Hebbian learning from spike timing-dependent plasticity. J. Neurosci. 20(23), 8812–8821 (2000)
41. Varela, J.A., Sen, K., Gibson, J., Fost, J., Abbott, L., Nelson, S.B.: A quantitative description of short-term plasticity at excitatory synapses in layer 2/3 of rat primary visual cortex. J. Neurosci. 17(20), 7926–7940 (1997)
42. Wang, Z.Q., Xu, H.Y., Li, X.H., Yu, H., Liu, Y.C., Zhu, X.J.: Synaptic learning and memory functions achieved using oxygen ion migration/diffusion in an amorphous InGaZnO memristor. Adv. Funct. Mater. 22(13), 2759–2765 (2012)
43. Williamson, A., Schumann, L., Hiller, L., Klefenz, F., Hoerselmann, I., Husar, P., Schober, A.: Synaptic behavior and STDP of asymmetric nanoscale memristors in biohybrid systems. Nanoscale 5(16), 7297–7303 (2013)
44. Yang, Y., Choi, S., Lu, W.: Oxide heterostructure resistive memory. Nano Lett. 13(6), 2908–2915 (2013)
45. Yuan, P., Leonetti, M.D., Pico, A.R., Hsiung, Y., MacKinnon, R.: Structure of the human BK channel Ca2+-activation apparatus at 3.0 Å resolution. Science 329(5988), 182–186 (2010)
46. Zenke, F., Agnes, E.J., Gerstner, W.: Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nat. Commun. 6 (2015)
47. Ziegler, L., Zenke, F., Kastner, D.B., Gerstner, W.: Synaptic consolidation: from synapses to behavioral modeling. J. Neurosci. 35(3), 1319–1334 (2015)

Copyright information

© Springer (India) Pvt. Ltd. 2017

Authors and Affiliations

1. Institut d'Electronique, Microélectronique et Nanotechnologies, Villeneuve-d'Ascq, France
