1 Introduction

A digital emulator differs from a hardware emulator and naturally comes with some benefits, although it is generally harder to construct. Hardware emulators use electrical devices to emulate the behavior of a desired electrical system. Because voltage and current in an electrical circuit are processed with intrinsic parallelism in a signal processing sense, such an emulation circuit is a powerful instrument. However, such emulators are often hardwired, and a change of parameters in the system of interest requires a redesign of the electrical hardware [1, 2]. This makes hardware emulators somewhat inflexible: a physical exchange of electrical devices in the emulation circuit is required, and sometimes the circuit even needs to be partially or completely redesigned. Software emulation, on the contrary, can change parameter values digitally. This enables not only uncomplicated and time-saving parameter changes, but also in-operando parameter manipulation, meaning that parameters can be changed even during run-time. Consequently, procedures such as parameter optimization and sensitivity analyses become possible, as demonstrated in [3]. It should however be mentioned that a digital emulation comes at a cost. First, appropriate interfaces based on analogue-to-digital (A/D) and digital-to-analogue (D/A) converters need to be implemented [4]. Second, digital emulators are typically more challenging to design than hardware emulators, because the stability of the overall system must not be altered by the digital emulator, which runs under finite-precision arithmetic.

2 Digital Emulation Technique

The wave digital concept [5] takes all of the above into account and is exploited here as a digital emulation technique, as it is known to preserve energetic properties such as passivity in a digital signal processing sense [6, 7]. It leads to a massively parallel algorithm, which can be implemented on DSPs, FPGAs, or ASICs for real-time applications in integrated circuits [8, 9]. A short overview of the wave digital concept is given below.

Fig. 1

Definition of a port with electrical quantities u, i (left) and wave quantities a, b with port resistance R (right)

Voltage u and current i of Fig. 1 are related to the incident wave a and the reflected wave b via an arbitrary positive constant R, called the port resistance, and the bijective transformation

$$\begin{aligned} \begin{bmatrix} a\\ b \end{bmatrix}= \begin{bmatrix} 1 &{} \phantom {-}R\\ 1 &{} -R \end{bmatrix} \begin{bmatrix} u\\ i \end{bmatrix}, \quad R>0. \end{aligned}$$
(1)
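For illustration, the mapping (1) and its inverse can be sketched in a few lines of Python (a minimal sketch; the function names are illustrative, not part of any library):

```python
def to_waves(u, i, R):
    """Map voltage u and current i to wave quantities (a, b), cf. (1)."""
    assert R > 0, "the port resistance must be positive"
    a = u + R * i  # incident wave
    b = u - R * i  # reflected wave
    return a, b

def to_electrical(a, b, R):
    """Inverse map: recover (u, i) from the wave quantities."""
    u = (a + b) / 2
    i = (a - b) / (2 * R)
    return u, i

# Round trip: the transformation is bijective for any R > 0
a, b = to_waves(1.5, 0.2, 50.0)
u, i = to_electrical(a, b, 50.0)
```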

Consequently, to obtain the wave digital model of a reference circuit, one translates all electrical devices port-wise and their Kirchhoff interconnection network separately into the wave digital domain, and subsequently reconnects everything port-wise. For example, resistive voltage and current sources are described by

$$\begin{aligned} u = e-R_0i\quad \text {and}\quad i=j-G_0u, \end{aligned}$$
(2)

with source resistance \(R_0\) or source conductance \(G_0\) and inputs e, j, respectively. Using the bijective transformation (1), their wave digital flow graphs are described by

$$\begin{aligned} a=[1-\varrho ]e+\varrho b\quad \text {and}\quad a=[1+\varrho ]Rj+\varrho b, \end{aligned}$$
(3)

respectively, with \(\varrho =[R_0-R]/[R_0+R]\), cf. Table 1. In the case of ideal sources, the inner resistances vanish and hence

$$\begin{aligned} R_0=0\Leftrightarrow \varrho =-1\quad \text {and}\quad G_0=0\Leftrightarrow \varrho =1, \end{aligned}$$
(4)

resulting in the simplified expression

$$\begin{aligned} a=2e-b\quad \text {and}\quad a=2Rj+b. \end{aligned}$$
(5)
Table 1 Wave digital sources

Other selected nonreactive elements include the resistor, which is characterized by

$$\begin{aligned} u=Zi\quad \Leftrightarrow \quad b=\varrho a, \end{aligned}$$
(6)

with \(\varrho =[Z-R]/[Z+R]\). After discretization with the trapezoidal integration rule, chosen due to its beneficial properties regarding passivity and stability [6], the capacitor and the inductor become simple delay elements in the wave digital domain. By defining the step size \(T=t_k-t_{k-1}\), one obtains

$$\begin{aligned} b(t_k) = a(t_{k-1})\quad \text {and}\quad b(t_k)=-a(t_{k-1}), \end{aligned}$$
(7)

for a capacitor and an inductor, respectively, after their port resistances have been chosen as \(R=T/[2C]\) and \(R=[2L]/T\), respectively. Their corresponding wave digital flow graphs are shown in Table 2.

Table 2 Selected wave digital reactive elements
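The delay realization (7) can be checked numerically: with the port resistance \(R=T/[2C]\), the update \(b(t_k)=a(t_{k-1})\) reproduces trapezoidal integration of \(i=C\,\textrm{d}u/\textrm{d}t\). Below is a minimal sketch with illustrative component values:

```python
import numpy as np

C, T = 1e-6, 1e-4                 # illustrative capacitance and step size
R = T / (2 * C)                   # capacitor port resistance

t = np.arange(100) * T
i = 1e-3 * np.sin(2 * np.pi * 50 * t)   # arbitrary current excitation

# Reference: trapezoidal rule applied to i = C du/dt
u_ref = np.zeros_like(t)
for k in range(1, len(t)):
    u_ref[k] = u_ref[k - 1] + T / (2 * C) * (i[k] + i[k - 1])

# Wave digital realization: the capacitor is a pure delay, b(t_k) = a(t_{k-1})
u_wd = np.zeros_like(t)
a_prev = u_wd[0] + R * i[0]       # a = u + R i
for k in range(1, len(t)):
    b = a_prev                    # delayed incident wave
    u_wd[k] = b + R * i[k]        # u = b + R i, since b = u - R i
    a_prev = u_wd[k] + R * i[k]
```

Both recursions produce the same voltage sequence, confirming the equivalence term by term.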

Now that selected electrical devices and their corresponding wave flow graphs have been presented, their port-wise connection is discussed. Since all wave quantities are always related to their respective port resistance, only elements with the same port resistances can be connected immediately. While certain elements allow for an arbitrary (but positive) choice of the port resistance, such as the resistor, others do not offer this degree of freedom, such as the capacitor or inductor. To still be able to connect elements with different port resistances, parallel and series adaptors representing Kirchhoff parallel and series interconnections are utilized. The underlying equations describing a Kirchhoff series interconnection are

$$\begin{aligned} \mathbb {1}^T\boldsymbol{u}=0\quad \text {and}\quad \boldsymbol{i}=\mathbb {1}i_0, \end{aligned}$$
(8)

with \(\mathbb {1}=[1\dots 1]^T\) being the all one vector of appropriate dimension. When the electrical quantities are replaced by wave quantities, cf. (1), the relation

$$\begin{aligned} \boldsymbol{b}=[\boldsymbol{1}-\boldsymbol{\gamma }\mathbb {1}^T]\boldsymbol{a}\quad \text {with}\quad \boldsymbol{\gamma }=\frac{2\boldsymbol{R}\mathbb {1}}{\mathbb {1}^T\boldsymbol{R}\mathbb {1}} \end{aligned}$$
(9)

emerges, where \(\boldsymbol{1}\) is the identity matrix of suitable dimensions and \(\boldsymbol{R}=\textrm{diag}_{\mu =1}^nR_\mu \), with \(R_\mu \) being the port resistance of the \(\mu \)-th port. Likewise, a Kirchhoff parallel interconnection is described by

$$\begin{aligned} \mathbb {1}^T\boldsymbol{i}=0\quad \text {and}\quad \boldsymbol{u}=\mathbb {1}u_0 \end{aligned}$$
(10)

in the electrical domain, which yields the wave relationship

$$\begin{aligned} \boldsymbol{b}=[\mathbb {1}\boldsymbol{\gamma }^T-\boldsymbol{1}]\boldsymbol{a}\quad \text {with}\quad \boldsymbol{\gamma }=\frac{2\boldsymbol{G}\mathbb {1}}{\mathbb {1}^T\boldsymbol{G}\mathbb {1}}, \end{aligned}$$
(11)

with \(\boldsymbol{G}=\boldsymbol{R}^{-1}\). The respective wave digital symbols of Kirchhoff series and parallel adaptors are shown in Table 3.

Table 3 Wave digital adaptors
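As a sanity check, the adaptor relations (9) and (11) can be implemented directly; transforming the resulting waves back to electrical quantities must recover the Kirchhoff constraints (8) and (10). A minimal sketch with illustrative port resistances:

```python
import numpy as np

def series_adaptor(a, R):
    """Series adaptor, cf. (9): b = [I - gamma 1^T] a, gamma = 2 R 1 / (1^T R 1)."""
    gamma = 2 * R / np.sum(R)
    return a - gamma * np.sum(a)

def parallel_adaptor(a, R):
    """Parallel adaptor, cf. (11): b = [1 gamma^T - I] a, gamma = 2 G 1 / (1^T G 1)."""
    G = 1 / R
    gamma = 2 * G / np.sum(G)
    return np.sum(gamma * a) - a

R = np.array([10.0, 22.0, 47.0])      # illustrative port resistances
a = np.array([0.3, -1.2, 0.7])        # arbitrary incident waves

b = series_adaptor(a, R)
u, i = (a + b) / 2, (a - b) / (2 * R)
# Series constraint (8): port voltages sum to zero, all port currents are equal
```

The analogous check for the parallel adaptor yields equal port voltages and currents that sum to zero, matching (10).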

3 Memristive Neuronal Oscillator

The well-known Morris-Lecar model is a second-order model describing a neuron’s membrane potential behavior [10]. The structure of its electrical circuit is shown in Fig. 2 (top left) and the underlying equations are written as

$$\begin{aligned} C\frac{\text {d}u}{\text {d}t}=i_\textrm{App}-g_\textrm{Ca}(u)[u-E_\textrm{Ca}]-z\textrm{G}_\textrm{K1}[u-E_\textrm{K}]-\textrm{G}_\textrm{L}[u-E_\textrm{L}], \end{aligned}$$
(12)

with the nonlinear resistor and memristor described by

$$\begin{aligned} g_\textrm{Ca}(u)&=G_\textrm{Ca1}\sigma \left( \frac{u-\textrm{U}_\textrm{Ca1}}{\textrm{U}_\textrm{Ca2}}\right) ,\end{aligned}$$
(13)
$$\begin{aligned} \frac{\textrm{d}z}{\textrm{d}t}&=[z_\infty (u)-z]\textrm{F}_\textrm{K}\cosh \left( \frac{u-\textrm{U}_\textrm{K1}}{2\textrm{U}_\textrm{K2}}\right) ,\quad \text {with}\quad z_\infty (u)=\sigma \left( \frac{u-\textrm{U}_\textrm{K1}}{\textrm{U}_\textrm{K2}}\right) , \end{aligned}$$
(14)

respectively, where \(\sigma (\cdot )\) is the sigmoid function. The capacitor models the changing membrane potential, while \(E_\textrm{L}\) and \(G_\textrm{L}\) model the leakage current behavior. \(E_\textrm{Ca}\) is the voltage offset to achieve biologically meaningful calcium (Ca) concentration values, and the nonlinear resistor \(g_\textrm{Ca}(u)\) described by (13) models the opening and closing behavior of the Ca-channels. Likewise, the voltage offset \(E_\textrm{K}\) accomplishes biologically meaningful potassium (K) concentration values, and the memristor \(W_\textrm{K}(z,u)\) described by (14) models the opening and closing behavior of the K-channels [11].
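To make the model concrete, the system (12)–(14) can be integrated numerically. The following sketch uses an explicit Euler step and illustrative parameter values (these are neither the reference parameters of [11] nor the optimized set of [3], and the wave digital emulation instead relies on the trapezoidal rule):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative parameters only; units suppressed for brevity
C, i_app = 20.0, 69.0
G_Ca1, U_Ca1, U_Ca2, E_Ca = 4.4, -1.2, 18.0, 120.0
G_K1, U_K1, U_K2, E_K, F_K = 8.0, 2.0, 30.0, -84.0, 0.04
G_L, E_L = 2.0, -60.0

def rhs(u, z):
    g_Ca = G_Ca1 * sigmoid((u - U_Ca1) / U_Ca2)                # nonlinear resistor (13)
    du = (i_app - g_Ca * (u - E_Ca)
          - z * G_K1 * (u - E_K) - G_L * (u - E_L)) / C        # membrane equation (12)
    z_inf = sigmoid((u - U_K1) / U_K2)
    dz = (z_inf - z) * F_K * np.cosh((u - U_K1) / (2 * U_K2))  # memristor state (14)
    return du, dz

u, z, T = -60.0, 0.0, 0.05
trace = []
for _ in range(20000):
    du, dz = rhs(u, z)
    u, z = u + T * du, z + T * dz
    trace.append(u)
```

Note that the membrane potential stays bounded and the memristor state remains in \([0,1]\), as the state update always moves z towards \(z_\infty (u)\).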

Fig. 2

Wave digital flow graph of the Morris-Lecar neuron model. Its structural, port-wise correspondence to the electrical circuit (top left) becomes apparent

A fundamentally important phase of an action potential is the hyperpolarization phase, in which the membrane potential temporarily falls below the resting potential. This is relevant for several learning mechanisms, e.g. spike-timing dependent plasticity [12]. The problem with the parameter set given in [11] is that for several values of the applied current \(i_\textrm{App}\) the hyperpolarization phase is missing. In order to find a parameter set in which the hyperpolarization phase is present, a parameter optimization on the basis of a wave digital emulation is conducted [3]. The wave digital model of the circuit is shown in Fig. 2 (bottom right) and is obtained from the electrical circuit by a port-wise translation of its electrical components and Kirchhoff interconnection structures. For this reason, the structural similarities between the electrical circuit and the wave digital flow graph are immediately observable. There, implicit relationships between certain wave quantities are highlighted in orange. To deal with these, fixed-point iterations are utilized [13]. Because the wave digital model is real-time capable, in-operando parameter manipulation enables a parameter optimization during run-time.

For the emulation results, an ideal current source is placed at the left-hand port to mimic the applied current \(i_\textrm{App}\), and the resulting membrane potential is observed at the right-hand port. The emergence of action potentials as a reaction to the constant input \(i_\textrm{App} = 69~\mathrm {\mu A}\), with the state-of-the-art parameters of [11] as the reference and in comparison with the parameters optimized in [3], is shown in Fig. 3.

Fig. 3

Constant current of the Morris-Lecar model with optimized parameters from [3] in comparison to parameters as in [11]

The following is gathered. The membrane potential of the reference parameter set never falls below its resting potential \(\Phi _\textrm{ref}=-60.86~\textrm{mV}\). This highlights that the important hyperpolarization phase is indeed missing. On the contrary, the optimized parameters cause the membrane potential to fall deeper, while the resting potential \(\Phi _\textrm{opt}=-49.35~\textrm{mV}\) is increased. This results in a clear hyperpolarization phase, indicated by the green areas in Fig. 3. In summary, a sophisticated in-operando parameter optimization has been conducted via the wave digital model, which resulted in biologically more meaningful behavior.

4 Stimulus-Driven Topology Formation

4.1 Supervised Topology Formation

Memristors are a suitable electrical device for supervised topology formation, and first preliminary investigations in this regard have been done in [14]. For an extension of this concept, descriptive sketches of neurons and their coupling strengths as in Fig. 4(i) are considered. They are used especially in biology to visualize neuronal setups [15]. In this context, the nodes symbolize the neurons and the arrows represent the (unidirectional) synapses. The numbers indicate the coupling strengths between 0 and 1, where the \(+\) and − signs stand for excitatory and inhibitory couplings, respectively. The synthesis of an electrical circuit based on these descriptions is challenging for multiple reasons. Probably the biggest questions arise when aiming to realize the unidirectional synaptic coupling, as effects like the propagation of voltages and currents in an electrical circuit are generally not unidirectional and hence not free from feedback.

Fig. 4

(i): Conceptual neuronal network with coupling strengths and inhibitory/excitatory coupling represented by negative/positive signs. (ii): Synthesized wave flow diagram

In the following, it is explained how the wave digital model of Fig. 4(ii) achieves these tasks. To do so, the wave digital concept is exploited here as a modeling tool, since this process is more intuitive in the wave domain than in the electrical domain, as will become apparent subsequently [16]. Lastly, the appropriate electrical circuit is obtained due to its direct correspondence with the final wave digital model. The model is subsequently verified in the context of a dog’s gait patterns. Figure 5 shows the proposed synapse model to synthesize the scenario of Fig. 4(i). It consists of a four-port circulator with two attached resistances \(R_1\) and \(R_2\) that account for the coupling strength and coupling type (excitatory for positive reflection coefficients, inhibitory for negative ones). To explain its functioning, the reflection coefficients are expressed by

$$\begin{aligned} \varrho _\mu = \frac{R_\mu - R}{R_\mu + R},~\mu \in \{1,2\}. \end{aligned}$$
(15)

The special cases of these reflection coefficients are of interest in this context. Specifically, it holds that

$$\begin{aligned} R_\mu \rightarrow \infty \quad&\Leftrightarrow \quad \varrho _\mu =1,\\ R_\mu \rightarrow 0\quad&\Leftrightarrow \quad \varrho _\mu =-1,\\ R_\mu =R\quad&\Leftrightarrow \quad \varrho _\mu =0. \end{aligned}$$

It can be seen that the whole spectrum \(\varrho _\mu \in [-1,1]\) is covered and that the sign of the reflection coefficient is determined by whether the resistance value \(R_\mu \) is smaller or bigger than the circulation resistance R. Of special interest is the case \(R_\mu =R\), as this results in \(\varrho _\mu =0\). This case is special because it allows for unidirectional coupling. This is best seen when the power flow is calculated as

$$\begin{aligned} \begin{bmatrix} p_{\textrm{pre,in}}\\ p_{\textrm{post,in}} \end{bmatrix} = \begin{bmatrix} 0 &{} \varrho _{2}^2 \\ \varrho _{1}^2 &{} 0 \end{bmatrix} \begin{bmatrix} p_{\textrm{pre,out}}\\ p_{\textrm{post,out}} \end{bmatrix}, \end{aligned}$$
(16)

where

$$\begin{aligned} p_{\textrm{pre,in}}=\frac{b_\textrm{pre}^2}{4R},~p_{\textrm{pre,out}}=\frac{a_\textrm{pre}^2}{4R},\qquad p_{\textrm{post,in}}=\frac{b_\textrm{post}^2}{4R},~p_{\textrm{post,out}}=\frac{a_\textrm{post}^2}{4R} \end{aligned}$$

and the total power consumptions of the pre- and postsynaptic ports are

$$\begin{aligned} p_\textrm{pre}=p_\textrm{pre,in}-p_\textrm{pre,out},\qquad p_\textrm{post}=p_\textrm{post,in}-p_\textrm{post,out}. \end{aligned}$$

It becomes evident that \(\varrho _1=0\) prevents any power from flowing from the pre- towards the postsynaptic neuron, and vice versa for \(\varrho _2=0\). Consequently, this enables unidirectional coupling if either one of the reflection coefficients is zero, or complete decoupling if both are zero. With these insights it is possible to synthesize an electrical circuit corresponding to the scenario of Fig. 4(i), as shown in Fig. 4(ii). The scattering parameter depiction is much more similar in nature to the conceptual graph than a depiction in electrical quantities. Therefore, depending on the context, the wave digital concept is not only a powerful emulation technique but also a versatile modeling tool, enabling perspectives that are quite unintuitive in the electrical domain but easily accessible in the wave domain.
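The unidirectional-coupling argument can be replayed numerically from (15) and (16). In this sketch (resistance values are illustrative), matching the first attached resistance, \(R_1=R\), sets \(\varrho _1=0\) and thereby blocks all power flow from the pre- towards the postsynaptic port:

```python
import numpy as np

def rho(R_mu, R):
    """Reflection coefficient of an attached resistance, cf. (15)."""
    return (R_mu - R) / (R_mu + R)

def power_flow(rho1, rho2, p_pre_out, p_post_out):
    """Power routing of the circulator-based synapse, cf. (16)."""
    S = np.array([[0.0, rho2 ** 2],
                  [rho1 ** 2, 0.0]])
    p_pre_in, p_post_in = S @ np.array([p_pre_out, p_post_out])
    return p_pre_in, p_post_in

R = 50.0                          # circulation resistance (illustrative)
# R_1 = R  =>  rho_1 = 0: power flows only from post towards pre
p_pre_in, p_post_in = power_flow(rho(R, R), rho(150.0, R), 1.0, 1.0)
```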

Fig. 5

Left: Proposed synapse model designed as a wave digital model. Right: Its corresponding electrical circuit

To verify the proper functioning of the concept presented above, a simple application example in the context of a dog’s gait patterns is examined. More specifically, the gait patterns running, pacing, and trotting are investigated. To analyze these patterns, two subgroups \(\mathcal {S}_1\) and \(\mathcal {S}_2\) with the following features are created:

  1. Every component within a subgroup is synchronized to every other component within the same subgroup (intra-group synchronization),

  2. Every component of subgroup \(\mathcal {S}_1\) is asynchronous to every component of subgroup \(\mathcal {S}_2\) (inter-group asynchronization).

By these categorizations, the three above mentioned gait patterns can be established by assigning the four legs of a dog to the subgroups \(\mathcal {S}_1,\mathcal {S}_2\).

Fig. 6

Output signals of four interconnected, optimized Morris-Lecar neuronal oscillators mimicking the gait of a dog changing from running (\(t\le T_1\)) to pacing (\(T_1<t\le T_2\)) to trotting (\(T_2<t\le T_3\)) to standstill (\(t>T_3\)). Each color represents the motion of a dog’s leg as indicated at the top

For the emulation scenario, four optimized wave digital neuronal oscillators, cf. Sect. 3 and [3], are each associated with one leg of a dog, as indicated by the top sketches of Fig. 6. They represent a neuron population that accounts for the movement of a leg. First results in this context have been published in [16]. The neuronal oscillator model of choice is the optimized Morris-Lecar neuron model of Fig. 2 with its parameters as mentioned in [3]. The sketches on top of Fig. 6 indicate intra-group synchronization (solid arrows) and inter-group asynchronization (dashed arrows). The intra-group synchronization is here realized by a bidirectional coupling with edge weights \(+1\), while inter-group asynchronization is achieved by bidirectional couplings with edge weights \(-1\). Although arbitrary but different initial conditions are chosen for the four neurons’ membrane potentials, the desired pattern is achieved after just two oscillations and indeed represents the gait pattern of a running dog, cf. Fig. 6. At \(t=T_1\), the edge weights are abruptly changed to the pacing gait pattern. Note that although the topology changes are abrupt, the transition behavior is actually smooth and steady. In fact, almost no changes with respect to the phases are visible during the first four oscillations after the switch; only the amplitude is temporarily reduced. Note that in the transition phase the underlying synaptic interconnection of the neurons does not match their relative behavior. For this reason, they interact destructively until the transition phase is completed. This is the case approximately 10 oscillations after \(t=T_1=250\,\textrm{ms}\). A similar behavior is observable when the synaptic coupling weights are changed once again to mimic a trotting gait pattern at \(t=T_2=1000\,\textrm{ms}\). During the transition phase, where the constellation of neuronal activity and the underlying synaptic interconnection is inconsistent, the amplitudes are damped.
Then, after around 6 oscillations, the relative positions of the membrane potentials start to change until completion after around 14 oscillations. Note that the change in synaptic coupling weights is again abrupt, but the transition is still smooth. This is highly desirable, because it translates to the dog being able to change gaits without tumbling. At \(t=T_3=1625\,\textrm{ms}\), the current exciting the Morris-Lecar neurons is switched off, and consequently a standstill is achieved after just one oscillation, again in a smooth fashion. To summarize, a slow-down process from a running to a still-standing dog was successfully emulated, with multiple gait patterns mimicked in the process. An important detail worth mentioning is the fact that in this application example the hyperpolarization phase is present at all times, observable by the action potential minima always falling below the resting potential \(\Phi _\textrm{opt}=-49.35\,\textrm{mV}\). This once again highlights the improved Morris-Lecar neuron model [3].

It is also apparent that the gait pattern generator presented here is closely related to the synchronization of neuronal activity to form biologically meaningful behavior. More general investigations on the synchronization of linear(ized) circuits have been conducted in [17, 18], and also in the context of an Ising machine [19]. Furthermore, the first results in this context have sparked more sophisticated results incorporating axon growth in the case of gait pattern generation [20] and gait pattern classification [21].

Fig. 7

Setup of the maze to mimic two-neuron communication, which consists of nodes \(\varphi _{\mu }\) and their interconnections \(\mathcal {C}_\mu \)

4.2 Self-organized Topology Formation

4.2.1 Optimal Two-Neuron Communications

It is desirable to investigate how information from an origin neuron can be transferred to a destination neuron optimally, i.e., with the least amount of delay. This problem can be formulated via graph theory, where nodes represent neurons and edges mimic synapses. The task is then comparable to solving a maze, where the entry of the maze accounts for the origin neuron and the exit for the destination neuron. The analog parallelism exploited here is different from computational parallelism and enables the solution to outperform all known graph-theoretical algorithms [22]. The structure of the maze represents the structure of the neural network, where it is assumed that only nearest-neighbor synaptic couplings exist and that an obstacle, e.g. a wall, means that no synaptic coupling exists between the specific neighboring neurons. First preliminary investigations have been done in [23]. The concept can also be used to determine the longest rather than the shortest path, which is also a computationally complex problem [24].

Stacking all neuron potentials \(\varphi _{\mu \nu }\) and all synaptic currents \(j_\kappa \) in vectors \( \boldsymbol{\varphi }\) and \(\boldsymbol{j}\), respectively, allows for a condensed notation of the maze topology, cf. Fig. 7. The synapse voltages \(\boldsymbol{v}\) are obtained by the respective neuron potential differences and can be expressed as

$$\begin{aligned} \boldsymbol{v}=\boldsymbol{N}^T\boldsymbol{\varphi }(u), \end{aligned}$$
(17)

where \(\boldsymbol{N}\) is the incidence matrix and voltage u is the potential difference between origin neuron potential \(\varphi _{\mu } = u\) and destination neuron potential \(\varphi _{\nu }=0\). Consequently, by Tellegen’s theorem, the relationship between synaptic currents \(\boldsymbol{j}\) and neuronal currents \(\boldsymbol{i}\) is

$$\begin{aligned} \boldsymbol{i}=\boldsymbol{Nj}, \end{aligned}$$
(18)

where \(i=-\boldsymbol{e}_\mu ^T\boldsymbol{i}=\boldsymbol{e}_\nu ^T\boldsymbol{i}\).

Fig. 8

Synapse model for two-neuron communication

The synapse model, depicted in Fig. 8, mimics a bidirectional connection (both switches closed), a unidirectional connection (only one switch closed), or no connection at all (both switches open). This is because the switches decide whether a current can flow through the memductors \(W_\kappa ', W_\kappa ''\) or not. Here, a wave digital model of an RRAM cell with physically meaningful parameters, developed in [25], has been utilized. This device distinguishes itself by its rapid switching behavior, which is exploited here to reach the final state of the maze as quickly as possible. Consequently, synaptic currents \(\boldsymbol{j}\) and voltages \(\boldsymbol{v}\) are related by \(\boldsymbol{j}=\boldsymbol{W}(\boldsymbol{z},\boldsymbol{v})\,\boldsymbol{v}\), which in combination with the synaptic voltages and the neuronal currents, cf. (17) and (18), yields a nonlinear relationship between i and u

$$\begin{aligned} i=\boldsymbol{e}_\nu ^T\boldsymbol{NW}(\boldsymbol{z},\boldsymbol{N}^T\boldsymbol{\varphi }(u))\,\boldsymbol{N}^T\boldsymbol{\varphi }(u), \end{aligned}$$
(19)

which is numerically solvable in the wave digital domain by using fixed-point iterations for the implicit relationships between u, i, and \(\boldsymbol{z}\) [23]. By Chua’s closure theorem, the memristive network can be summarized by a single memristor, see [26]. With the emulation parameters as shown in Table 4, two emulation scenarios are investigated in the following.

Table 4 Maze memristor emulation parameters
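To illustrate how (17)–(19) interact, the following sketch evaluates the current i for a toy network with frozen memductance states (a constant W replaces the state-dependent \(\boldsymbol{W}(\boldsymbol{z},\boldsymbol{v})\); the graph and all values are illustrative). The internal node potential follows from Kirchhoff’s current law, and the resulting i/u ratio is the single equivalent memductance predicted by Chua’s closure theorem:

```python
import numpy as np

# Toy network: 3 neurons (0 = origin, 2 = destination), synapses 0-1, 1-2, 0-2.
# Incidence matrix N: one row per neuron, one column per synapse.
N = np.array([[ 1.0,  0.0,  1.0],
              [-1.0,  1.0,  0.0],
              [ 0.0, -1.0, -1.0]])

W = np.diag([2.0, 2.0, 0.5])      # frozen memductances, stand-in for W(z, v)
L = N @ W @ N.T                   # i = N W N^T phi, combining (17) and (18)

u = 1.0                           # origin potential; destination clamped to 0
phi1 = -L[1, 0] * u / L[1, 1]     # KCL at the only internal node: L[1, :] @ phi = 0
phi = np.array([u, phi1, 0.0])

i_node = L @ phi                  # neuronal currents: the current entering at the
i = i_node[0]                     # origin equals the current leaving the destination
```

Here i/u = 1.5, the series combination of the two 2 S synapses (1 S) in parallel with the direct 0.5 S synapse, i.e. the network collapses to one equivalent element.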

Scenario 1 involves a maze of size \(20\times 20\), as shown in Fig. 9 (top left). There are only three possible communication paths for the origin neuron (entry) to transmit information to the destination neuron (exit). The course of the current as a response to the input voltage ramp e is displayed to the right. Every notch represents the switching of all RRAM cells along a certain path, which can be exploited as a detection mechanism. Since the current is highest along the path with the least total memristance, the possible communication paths are found in order of ascending path length at \(t=13\,\textrm{ms}\), \(t=32\,\textrm{ms}\), and \(t=39\,\textrm{ms}\), respectively. In scenario 2, a bigger maze of size \(50\times 50\), cf. Fig. 9 (bottom left), is investigated, with the course of the current again illustrated to its right. A single notch indicates that only a single communication path is possible. Note that the point in time at which the communication path is discovered by the setup is also \(t=13\,\textrm{ms}\). This is the same amount of time the first scenario with the significantly smaller maze required to find the shortest communication path. Remember that the first scenario requires 361 connection elements and consequently 722 RRAM cells, while the second scenario requires more than six times that many (2401 connection elements, 4802 RRAM cells).

Fig. 9

Comparison of a \(20\times 20\) maze with three possible paths from entry to exit, highlighted in orange, green and blue (top left) and a \(50\times 50\) maze with only a single path (bottom left). The course of the currents for the \(20\times 20\) and \(50\times 50\) maze are shown at the top right and bottom right, respectively

This leads to the counterintuitive conclusion that the size of the maze has a negligible influence on the convergence time. This is due to the inherent massive analog parallelism and is the reason why computationally complex problems such as the one investigated in this work can be solved efficiently by memristive circuits. While the first scenario used an input voltage of \(e=180\,\textrm{V}\) and the second scenario required roughly three times as much (\(e=500\,\textrm{V}\)) to compensate for the larger maze, it is still remarkable that, independent of the maze size, the presented setup requires an identical convergence time to find the shortest communication path. Furthermore, due to the immense number of memristive devices present in the setup, a run-time comparison between the wave digital emulation presented here and the circuit simulation software LTspice was investigated in [23]. Although the scenarios were different, a run-time advantage by a factor of 300 in favor of the wave digital emulation was discovered on an identical computational device.

4.2.2 Mimicking Neuroplasticity

Here, one neuron of interest aims to find the optimal, e.g. shortest, communication pathways to all other neurons of the neural network. The basic concept was presented in [27] and extended in [24] to solve the longest path problem and in [28] to mimic neuroplasticity. A fundamental scenario of optimal communication pathways with regard to multiple neurons is shown in Fig. 10.

Fig. 10

A fundamental scenario illustrating what adaptation and optimization mean: the setup determines the paths of least delay in a self-organized manner and lets all other paths die off to increase energy efficiency

There, three interconnected neurons are depicted with different time delays, which can be associated, for example, with their axon lengths. For instance, an action potential can travel from neuron \(\mathcal {N}_\mathcal {A}\) to neuron \(\mathcal {N}_\mathcal {B}\) with a time delay of \(\tau \). The same holds true for a communication between neurons \(\mathcal {N}_\mathcal {B}\) and \(\mathcal {N}_\mathcal {C}\). However, a communication between neurons \(\mathcal {N}_\mathcal {A}\) and \(\mathcal {N}_\mathcal {C}\) requires triple the time, namely \(3\tau \). From the perspective of \(\mathcal {N}_\mathcal {A}\), there are two possibilities to transmit information towards \(\mathcal {N}_\mathcal {C}\): first, the direct path, which requires a time delay of \(3\tau \); second, the path over \(\mathcal {N}_\mathcal {B}\), which requires only \(2\tau \) in total. The second choice is obviously better than the first with respect to time delay and should hence be used. This makes the synaptic connection between \(\mathcal {N}_\mathcal {A}\) and \(\mathcal {N}_\mathcal {C}\) obsolete. Because keeping this path (axon, axon terminals, synapse) alive requires energy, it makes sense from an energy perspective to let this path shut down, as indicated in Fig. 11 (right).

Fig. 11

Left: Initial neuronal network with dense synaptic connection. Right: Only synaptic connections enabling the least delay communication between neuron \(\mathcal {A}\) and all other neurons in the network remain, all others have died off. The synapse between neurons \(\mathcal {E}\) and \(\mathcal {F}\) is formed after the synapse between neurons \(\mathcal {C}\) and \(\mathcal {F}\) was damaged

The ambition of the following steps is to synthesize an electrical circuit that inherently incorporates the above mechanisms. First, the problem is abstractly represented as a graph, where a neuron is mimicked by a node and a synaptic connection by a directed edge. The edge weights then represent the corresponding time delay of the path. This is visualized by way of example as a nine-neuron setup \(\mathcal {N}_\mathcal {A},\dots ,\mathcal {N}_\mathcal {I}\) in Fig. 11 (left). By doing so, the problem of finding the minimal communication paths from neuron \(\mathcal {N}_\mathcal {A}\) to all other neurons \(\mathcal {N}_\mathcal {B},\dots ,\mathcal {N}_\mathcal {I}\) becomes the problem of finding the minimum spanning tree with \(\mathcal {N}_\mathcal {A}\) as the root. Non-circuit approaches to solve this problem exist, such as the Dijkstra, A*, Bellman-Ford, or Floyd-Warshall algorithms [29]. Among other features, these algorithms differ in their access to information on the topology and costs in the graph. Apart from the inherently favorable convergence time [30], a self-organizing circuit solution to this problem is also desirable in the context of unsupervised learning, as the circuit itself discovers optimal communication structures for information transport. Unsupervised learning is also of interest in other domains, such as pattern recognition, where circuit-based solutions are applied [31]. The circuit to inherently find this tree is designed by synthesizing the neurons (nodes) and synapses (edges) as proposed by Fig. 12.
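For reference, one of the non-circuit algorithms named above can be sketched in a few lines. Applied to the three-neuron scenario of Fig. 10 (edge weights in units of \(\tau \), here assumed \(\tau =1\)), Dijkstra’s algorithm confirms that the indirect path over \(\mathcal {N}_\mathcal {B}\) beats the direct edge:

```python
import heapq

def dijkstra(adj, root):
    """Delays of the least-delay paths from `root`, plus the resulting tree."""
    dist, parent = {root: 0.0}, {root: None}
    pq = [(0.0, root)]
    while pq:
        d, n = heapq.heappop(pq)
        if d > dist.get(n, float("inf")):
            continue                        # stale queue entry
        for m, w in adj.get(n, []):
            if d + w < dist.get(m, float("inf")):
                dist[m], parent[m] = d + w, n
                heapq.heappush(pq, (d + w, m))
    return dist, parent

# Three-neuron scenario of Fig. 10 with tau = 1
adj = {"A": [("B", 1.0), ("C", 3.0)],
       "B": [("A", 1.0), ("C", 1.0)],
       "C": [("A", 3.0), ("B", 1.0)]}
dist, parent = dijkstra(adj, "A")
# The path A -> B -> C (delay 2) is preferred over the direct edge (delay 3)
```

Unlike the circuit solution, such an algorithm needs global access to the topology and all edge costs.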

Fig. 12

Circuit concept to mimic neuroplasticity in the neuronal network of Fig. 11

There, two different types of memductors are utilized. The neurons are represented by the \(\mu \)-indexed memductor, which is described by the equations

$$\begin{aligned} i_\mu&= W_\mu (z_\mu ) u_\mu ,\end{aligned}$$
(20a)
$$\begin{aligned} W_\mu (z_\mu )&= \frac{W_0^\textrm{N}\, W_1^\textrm{N}}{W_1^\textrm{N} + z_\mu \left[ W_0^\textrm{N} - W_1^\textrm{N}\right] } \end{aligned}$$
(20b)

where \(W_{\mu }(z_{\mu })\) is the memductance and \(z_{\mu }\) is the inner state. Here, \(z_{\mu }=0\) indicates that the neuron memductor is in the high-conductance state \(W_0^\textrm{N}\), and \(z_{\mu }=1\) indicates the low-conductance state \(W_1^\textrm{N}\). The state equation is described by

$$\begin{aligned} \begin{aligned} \dot{z}_\mu&= g(u_\mu )\left[ \sigma (u_\mu )\sigma (z_\mu ) + \sigma (-u_\mu )\sigma (1-z_\mu ) \right] ,\\ g(u_\mu )&= S_\textrm{N} \left[ u_\mu -u_p\right] \sigma (u_\mu -u_p)+ S_\textrm{N}\left[ -u_\mu +u_n\right] \sigma (-u_\mu +u_n), \end{aligned} \end{aligned}$$
(21)

where \(\sigma (\cdot )\) is the Heaviside function, \(S_\textrm{N}\) is the steepness of the change from \(W_0^\textrm{N}\) to \(W_1^\textrm{N}\) and vice versa, \(u_\textrm{p}\) is the set voltage and \(u_\textrm{n}\) is the reset voltage. The synapse between neuron \(\mu \) and neuron \(\nu \) is described by the double-indexed memductor model

$$\begin{aligned} j_{\mu \nu }&= W_{\mu \nu }(z_{\mu \nu }) v_{\mu \nu },\end{aligned}$$
(22a)
$$\begin{aligned} W_{\mu \nu }(z_{\mu \nu })&= \frac{W_0^\textrm{S}\, W_1^\textrm{S}}{W_1^\textrm{S} + z_{\mu \nu }\left[ W_0^\textrm{S} - W_1^\textrm{S}\right] } \end{aligned}$$
(22b)

and its state equation

$$\begin{aligned} \begin{aligned} \dot{z}_{\mu \nu }&= g(v_{\mu \nu })\left[ \sigma (v_{\mu \nu })\sigma (z_{\mu \nu }) + \sigma (-v_{\mu \nu })\sigma (1-z_{\mu \nu }) \right] + S_\textrm{R}\sigma (z_{\mu \nu }-0.1),\\ g(v_{\mu \nu })&= S_\textrm{S} \left[ v_{\mu \nu }-v_\textrm{p}\right] \sigma (v_{\mu \nu }-v_\textrm{p})+ S_\textrm{S}\left[ -v_{\mu \nu }+v_\textrm{n}\right] \sigma (-v_{\mu \nu }+v_\textrm{n}), \end{aligned} \end{aligned}$$
(23)

where \(S_\textrm{R}\) is the amplitude of the retention characteristic, \(S_\textrm{S}\) is the steepness of the change from \(W_0^\textrm{S}\) to \(W_1^\textrm{S}\) and vice versa, \(v_\textrm{p}\) is the set voltage and \(v_\textrm{n}\) is the reset voltage. Additionally, in order for the circuit to function properly, it was explained in detail in [27] that

$$\begin{aligned} W_0^\textrm{S} \gg W_0^\textrm{N} \gg W_1^\textrm{S} \gg W_1^\textrm{N} \end{aligned}$$
(24)

must hold. With the above specifics in mind, the setup of Fig. 11 (left) is synthesized by the electrical circuit presented in Fig. 12.
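For intuition, the models above can be transcribed numerically. The sketch below implements the shared memductance law (20b)/(22b) and the neuron state dynamics (21) exactly as printed; the conductance values and switching parameters are illustrative assumptions, not the values of Table 5.

```python
def memductance(z, w_hi, w_lo):
    """Eqs. (20b)/(22b): interpolates between the high conductance
    w_hi (state z = 0) and the low conductance w_lo (state z = 1)."""
    return (w_hi * w_lo) / (w_lo + z * (w_hi - w_lo))

def heaviside(x):
    """sigma(.): 1 for strictly positive arguments, 0 otherwise."""
    return 1.0 if x > 0 else 0.0

def g(u, S_N=1.0, u_p=1.0, u_n=-1.0):
    """Switching-rate function of Eq. (21); zero inside [u_n, u_p].
    S_N, u_p, u_n are assumed illustrative values."""
    return (S_N * (u - u_p) * heaviside(u - u_p)
            + S_N * (u_n - u) * heaviside(u_n - u))

def z_dot(u, z):
    """Neuron state equation (21)."""
    return g(u) * (heaviside(u) * heaviside(z)
                   + heaviside(-u) * heaviside(1.0 - z))

# Assumed example conductances (siemens) respecting the ordering (24).
W0_S, W0_N, W1_S, W1_N = 1e-2, 1e-4, 1e-6, 1e-8
```

At \(z=0\) the memductance evaluates to the high conductance value and at \(z=1\) to the low one, while a strongly negative voltage yields \(\dot{z}_\mu > 0\) for \(z_\mu < 1\), i.e. the transition towards the low conductance state during hyperpolarization described below.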

The mechanisms work as follows. Assuming all neuron memductors are initially in the high conductance state and all synapse memductors in the low conductance state, the root neuron \(\mathcal {N}_\mathcal {A}\) is attached to an ideal voltage source e mimicking its membrane potential activity. Once the synapse memductors directly attached to the root memductor have experienced enough magnetic flux, they transition to the high conductance state. At this point, both the root memductor and the newly reached neuron memductor are at approximately the same voltage potential. Therefore, this neuron memductor experiences almost all of the voltage from e. Since the neuron memductors are oriented such that they transition into the low conductance state when experiencing a negative voltage, i.e. during the hyperpolarization phase, this is exploited as a mechanism to signal that the path of minimal delay towards this neuron has been found. To further illustrate this procedure, the initial neuron population of Fig. 11 (left) is considered in a state where it has not yet adapted to its environment. The corresponding memristive circuit of Fig. 12 is simulated for this purpose with its parameters given in Table 5. The initial states of the synapse and neuron memductors are chosen as \(z_0^\textrm{S} = 0.7\) and \(z_0^\textrm{N} = 0\), respectively, and the input signal is given by a sequence of action potentials with \(U_1 = -0.75\, \textrm{V}\), \(U_2 = 4\,\textrm{V}\) and period \(T_\textrm{e}= 1\,\textrm{s}\).

Table 5 Simulation parameters

The behavior of all synapse memductance states is shown in Fig. 13, where the states of all synapses that are part of the optimal synaptic structure are visualized at the top and all others at the bottom.

Fig. 13
figure 13

Simulation results of all synaptic memristors which are part of the optimal communication structure (top) and are not part of the optimal communication structure (bottom)

In particular, path \(\mathcal {N}_\mathcal {A}\) to \(\mathcal {N}_\mathcal {B}\) is found first at \(t_1=0.45\,\textrm{s}\), followed by path \(\mathcal {N}_\mathcal {B}\) to \(\mathcal {N}_\mathcal {C}\) at \(t_2=1.35\,\textrm{s}\), path \(\mathcal {N}_\mathcal {B}\) to \(\mathcal {N}_\mathcal {E}\) at \(t_3=1.55\,\textrm{s}\), path \(\mathcal {N}_\mathcal {C}\) to \(\mathcal {N}_\mathcal {F}\) at \(t_4=2.45\,\textrm{s}\), path \(\mathcal {N}_\mathcal {E}\) to \(\mathcal {N}_\mathcal {D}\) at \(t_5=4.25\,\textrm{s}\), path \(\mathcal {N}_\mathcal {E}\) to \(\mathcal {N}_\mathcal {H}\) as well as path \(\mathcal {N}_\mathcal {F}\) to \(\mathcal {N}_\mathcal {I}\) at \(t_6=5.45\,\textrm{s}\), and finally path \(\mathcal {N}_\mathcal {D}\) to \(\mathcal {N}_\mathcal {G}\) at \(t_7=6.35\,\textrm{s}\). At this point, all neurons \(\mathcal {N}_\mathcal {B},\dots ,\mathcal {N}_\mathcal {I}\) in the setup can be reached from the root neuron \(\mathcal {N}_\mathcal {A}\). Three things are worth highlighting. First, notice that the paths \(\mathcal {N}_\mathcal {E}\) to \(\mathcal {N}_\mathcal {H}\) and \(\mathcal {N}_\mathcal {F}\) to \(\mathcal {N}_\mathcal {I}\) are found simultaneously, which shows that no overwriting of previous states is required. Second, note that the communication paths are found in order of ascending total delay time from the perspective of \(\mathcal {N}_\mathcal {A}\). Lastly, all synapses associated with the memductance states shown in Fig. 13 (bottom) are not part of the optimal communication structure and hence must die off from the perspective of energy efficiency. The circuit identifies those synapses as the ones whose memductance states are not in the high conductance state. The proper interpretation is visualized by the blue communication paths in Fig. 11 (right).

At \(t_8=9.76\,\textrm{s}\), however, the memductor \(W_\mathcal{C}\mathcal{F}\) is removed from the electrical circuit to account for damage to the neural network, as may be induced by an accident or a disease. Because the path from \(\mathcal {N}_\mathcal {C}\) to \(\mathcal {N}_\mathcal {F}\) was part of the previously optimal communication structure, \(\mathcal {N}_\mathcal {A}\) cannot communicate with all neurons of the neural network any more. Specifically, neurons \(\mathcal {N}_\mathcal {F}\) and \(\mathcal {N}_\mathcal {I}\) are affected. However, the circuit is able to self-heal this damage. After a short period of time, at \(t_9=12.21\,\textrm{s}\), the memductor \(W_\mathcal{E}\mathcal{F}\) transitions to the high conductance state, indicating that the synapse between \(\mathcal {N}_\mathcal {E}\) and \(\mathcal {N}_\mathcal {F}\), which had previously died off, is now created anew. Now the root neuron \(\mathcal {N}_\mathcal {A}\) can communicate again with neurons \(\mathcal {N}_\mathcal {F}\) and \(\mathcal {N}_\mathcal {I}\), highlighted by the green arrow in Fig. 11 (right). Note that the circuit always inherently finds a way to self-heal the communication structure with the minimal amount of changes compared to the already established, but incomplete structure. This circuit synthesis and the subsequent simulation results show that a structure can be simultaneously energy-efficient and fault-tolerant. Both features are frequently associated with the human brain and typically stand in a trade-off with one another, where more energy needs to be invested in order to achieve a redundancy, which in turn guarantees the fault tolerance.

5 An Elementary Decision Problem via Optical Illusions

Optical illusions are closely linked to associative learning. They also showcase that we permanently make unconscious decisions; it is, for example, when we encounter an optical illusion that we become aware of this, because something does not seem to be right. This is where the limits of our decision making are revealed. Associative learning, or Hebbian learning, was developed by Hebb and is often summarized as “Cells that fire together, wire together” [32]. It describes a rule for the change in synaptic coupling strength between mutually active pre- and postsynaptic neurons. In general, it attempts to explain associative learning, but is a very simplistic explanation that lacks details. For example, time delays caused by the axon are not considered. One famous example of associative learning is Pavlov’s dog [33]. There, it was observed that a dog produces saliva whenever it is given food. After a time, Pavlov discovered that the saliva was already produced when the lab’s assistant entered the room with the food. He developed the theory that the dog had learned to associate the assistant’s footsteps with the food and hence the footsteps caused the saliva production. He then tested the theory by ringing a bell shortly before the dog was given food. And sure enough, after repeating the process several times, a ring of the bell produced saliva in the dog’s mouth, even if no food was given after the bell’s sound. He discovered that the two stimuli (sound of the bell and presentation of food) both have to occur within a short period of time in order for the dog to associate the two events with one another. Since then, attempts have been made to replicate associative learning in electrical circuits as one branch of bio-inspired behavior. In this context, a simple form of associative learning is synthesized by a simple memristive network with neuronal oscillators [34]. Such oscillators are generally nonlinear models, cf. Sect. 3.
A well-known, well-investigated nonlinear oscillator model is the Kuramoto model, which describes a set of multiple interconnected systems [35]. It is here utilized to model such a neuronal oscillator. The overall set \(\Sigma _{\textrm{K}}\) is described by

$$\begin{aligned} \Sigma _{\textrm{K}}: {\left\{ \begin{array}{ll} \dot{\varphi }_\mu &{}= \omega +\frac{1}{m}\sum _{\nu =1}^m \kappa _{\mu \nu }\sin (\varphi _\nu -\varphi _\mu ) \\ z_\mu &{}= \varphi _\mu \\ y_\mu &{}= \sin (\varphi _\mu ) \end{array}\right. }, \end{aligned}$$
(25)

where \(\mu =1,\dots ,m\), \(\varphi _\mu \) is the angle of the \(\mu \)-th oscillator, \(\omega \) is the intrinsic radian frequency, \(\kappa _{\mu \nu }\) is a nonlinear coupling factor, \(z_\mu \) is an output providing the neighboring subsystems with information about \(\varphi _\mu \) and \(y_\mu \) is the output from which the oscillations can be observed. Note that the phases \(\varphi _\mu \) themselves are only integrated and the oscillation of the system is solely due to the nonlinear output function \(y_\mu \). To incorporate the Hebbian learning rule, the coupling strength \(\kappa _{\mu \nu }\) between oscillators \(\mu \) and \(\nu \) has a memory

$$\begin{aligned} \dot{\kappa }_{\mu \nu } = \varepsilon [\alpha \cos (\varphi _{\nu } - \varphi _{\mu }) - \kappa _{\mu \nu }], \end{aligned}$$
(26)

where \(\varepsilon \) is the learning rate and \(\alpha \) determines the weighting of new information. Here, the cosine function seeks to model the associative learning rule, as it is maximized when the phases \(\varphi _\nu \) and \(\varphi _\mu \) are similar and minimized when the phase shift is \(\pi \) [36]. Details on the circuit synthesis process are elaborated in [37]. The two circuits of Fig. 14 synthesize the Kuramoto model as in (25). The underlying equation of the left circuit is

$$\begin{aligned} \dot{u}_\mu =\frac{T_0I_0}{C_0}\omega -i_\mu , \end{aligned}$$
(27)

where \(T_0,I_0,C_0\) are normalization constants with the units of a time, a current and a capacitance, respectively. A detailed analysis on a Kuramoto model with adaptive coupling is provided in [38]. It can already be seen that (27) synthesizes the first equation of (25) under the condition that

$$\begin{aligned} i_\mu =\frac{1}{m}\sum _{\nu =1}^m \kappa _{\mu \nu }\sin \left( \frac{u_\mu -u_\nu }{U_0}\right) , \end{aligned}$$
(28)

where \(U_0\) is a normalization constant with the unit of a voltage and \(u_\mu \) is associated with \(\varphi _\mu \). Note that the negative sign has been incorporated in the argument of the sine function.

Fig. 14
figure 14

Integrator circuit representing the Kuramoto model (left) and a memristive coupling element to describe the input signal of integrator \(\mu \) from integrator \(\nu \) (right)

The right circuit of Fig. 14 possesses the underlying equation

$$\begin{aligned} j_{\mu \nu } = W_{\mu \nu }(z_{\mu \nu },v_{\mu \nu })v_{\mu \nu },\qquad v_{\mu \nu }=u_\mu -u_\nu , \end{aligned}$$
(29)

with \(W_{\mu \nu }=1/M_{\mu \nu }\). The goal of the memductive interconnection is to model the \(\nu \)-th element of the sum in (25). This is achieved when the coupling strength is associated with the state of the memductor and the phase difference with the voltage over the memductor, such that \(\kappa _{\mu \nu }=z_{\mu \nu }\) and \(\varphi _{\mu }-\varphi _{\nu }=v_{\mu \nu }\). For the memductance it follows that

$$\begin{aligned} W_{\mu \nu }=\frac{j_{\mu \nu }}{v_{\mu \nu }}=\frac{1}{m}z_{\mu \nu }\,\textrm{si}\left( \frac{v_{\mu \nu }}{U_0}\right) G_0, \end{aligned}$$
(30)

where \(G_0\) is a normalization constant with the unit of a conductance. It must be mentioned however that (30) allows for \(W_{\mu \nu }\le 0\) and hence is not passive in general. Such memristors are called active memristors [26]. The Hebbian learning rule (26) is then synthesized by the proper modeling of the memductor’s state equation

$$\begin{aligned} \dot{z}_{\mu \nu } = \varepsilon \left[ \alpha \cos \left( \frac{v_{\mu \nu }}{U_0}\right) - z_{\mu \nu }\right] . \end{aligned}$$
(31)
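The coupling law (30) and the learning rule (31) can be transcribed directly. A minimal sketch, assuming unit normalization constants \(U_0\) and \(G_0\) and \(m=4\) oscillators:

```python
import numpy as np

def coupling_memductance(z, v, m=4, U0=1.0, G0=1.0):
    """Eq. (30): W = (1/m) * z * si(v/U0) * G0 with si(x) = sin(x)/x.
    Note np.sinc(t) = sin(pi*t)/(pi*t), hence the rescaling by pi."""
    return z * np.sinc(v / (U0 * np.pi)) * G0 / m

def state_dot(z, v, eps=0.1, alpha=0.1, U0=1.0):
    """Eq. (31): Hebbian adaptation of the memductor state."""
    return eps * (alpha * np.cos(v / U0) - z)
```

Evaluating the memductance at, e.g., \(v = 4.5\,U_0\) yields a negative value, illustrating the remark above that (30) describes an active, non-passive memristor.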

Figure 15 shows a setup synthesizing four Kuramoto oscillators with a fully coupled interconnection network [37].

Fig. 15
figure 15

Setup of four synthesized and fully coupled Kuramoto oscillators with the memductances incorporating the Hebbian learning rule

In the following, a condensed description of the memductive interconnection network is given before the wave digital model is established. To obtain a general description of an arbitrarily large number m of connected Kuramoto oscillators, the voltages and currents are summarized by the vectors \(\boldsymbol{u}=[u_1,\dots ,u_m]^T\) and \(\boldsymbol{i}=[i_1,\dots ,i_m]^T\), respectively. Notice in Fig. 15 that the orientation of the currents \(j_{\mu \nu }\) is chosen such that they originate at the lower level subsystem and are directed towards the higher level subsystem. This allows for a representation of the interconnection structure by the incidence matrix \(\boldsymbol{N}\). Kirchhoff's laws are then given by

$$\begin{aligned} \boldsymbol{i}=\boldsymbol{Nj},\qquad \boldsymbol{u}=\boldsymbol{N}^T\boldsymbol{v}, \end{aligned}$$
(32)

where \(\boldsymbol{j}\) is the vector of all currents \(j_{\mu \nu }\) and \(\boldsymbol{v}\) is the vector of all voltages \(v_{\mu \nu }\) of the interconnections with \(\mu ,\nu = 1,\dots ,m\), \(\mu <\nu \). Utilizing this notation, (29) is rewritten as

$$\begin{aligned} \boldsymbol{j}=\boldsymbol{W}(\boldsymbol{z},\boldsymbol{v})\boldsymbol{v}, \end{aligned}$$
(33)

where \(\boldsymbol{W}(\boldsymbol{z},\boldsymbol{v})\) is a diagonal matrix containing the memductances \(W_{\mu \nu }(z_{\mu \nu },v_{\mu \nu })\).

With the help of (32), the relationship between the oscillator voltages and currents \(\boldsymbol{u}\), \(\boldsymbol{i}\) and the memductors of the interconnection network is described by

$$\begin{aligned} \boldsymbol{i}=\hat{\boldsymbol{W}}\boldsymbol{u},\quad \text {with }\quad \hat{\boldsymbol{W}}=\boldsymbol{NW}(\boldsymbol{z},\boldsymbol{v})\boldsymbol{N}^T, \end{aligned}$$
(34)

where \(\hat{\boldsymbol{W}}=\hat{\boldsymbol{W}}(\boldsymbol{z},\boldsymbol{v})\) contains both the information about the memductances and the topology. With these insights it is possible not only to form a vector-based representation of the synthesized Kuramoto model, but also to obtain the corresponding wave digital model, cf. [37]. There, \(\boldsymbol{R}_C=\frac{T}{2C_0}\boldsymbol{1},\boldsymbol{R}_W=R_W\boldsymbol{1}\) and consequently \(\boldsymbol{R}_S=\left[ \boldsymbol{R}_C^{-1}+\boldsymbol{R}_W^{-1}\right] ^{-1}\). The value of \(R_W\) can essentially be chosen arbitrarily, as long as \(R_W>0\), but it should lie close to the middle of the range of \(W_{\mu \nu }\). Doing so typically results in a well-conditioned scattering matrix \(\boldsymbol{S}_W\), where

$$\begin{aligned} \boldsymbol{i}=\hat{\boldsymbol{W}}\boldsymbol{u}\quad \Leftrightarrow \quad \boldsymbol{a}_W=\boldsymbol{S}_W\boldsymbol{b}_W,\quad \text {with}\quad \boldsymbol{S}_W=[\boldsymbol{R}_W^{-1}+\hat{\boldsymbol{W}}]^{-1}[\boldsymbol{R}_W^{-1}-\hat{\boldsymbol{W}}]. \end{aligned}$$
(35)
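To make (32)–(35) concrete, the sketch below assembles the incidence matrix \(\boldsymbol{N}\) for \(m=4\) fully coupled oscillators, forms \(\hat{\boldsymbol{W}}=\boldsymbol{N}\boldsymbol{W}\boldsymbol{N}^T\) with an assumed diagonal snapshot of the memductances, and evaluates the scattering matrix \(\boldsymbol{S}_W\). The edge sign convention and the numerical values are illustrative assumptions.

```python
import numpy as np

m = 4
# One edge per oscillator pair (mu < nu), cf. Fig. 15.
pairs = [(mu, nu) for mu in range(m) for nu in range(mu + 1, m)]

# Incidence matrix: j_{mu nu} leaves the lower-indexed oscillator
# and enters the higher-indexed one (assumed sign convention).
N = np.zeros((m, len(pairs)))
for e, (mu, nu) in enumerate(pairs):
    N[mu, e] = 1.0
    N[nu, e] = -1.0

# Assumed diagonal snapshot of the edge memductances W_{mu nu}.
W = np.diag([0.10, 0.15, 0.20, 0.25, 0.30, 0.35])

# Eq. (34): memductances and topology combined.
W_hat = N @ W @ N.T

# Eq. (35): scattering matrix, here for port resistance R_W = 1.
R_W_inv = np.eye(m)
S_W = np.linalg.solve(R_W_inv + W_hat, R_W_inv - W_hat)
```

Since every column of \(\boldsymbol{N}\) sums to zero, \(\hat{\boldsymbol{W}}\) is symmetric with zero row sums, and for positive memductances all eigenvalues of \(\boldsymbol{S}_W\) lie within the unit circle, reflecting passivity of this snapshot.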
Fig. 16
figure 16

An optical illusion being modeled by the setup of Fig. 15

In the following, an application example of the synthesized Kuramoto model in the context of an optical illusion is presented. For this, the optical illusion depicted in Fig. 16 is considered. Here, the setup of Fig. 15 models the four relevant aspects of the optical illusion, where \(\mathcal {N}_1,\mathcal {N}_3\) account for the water and the cloud (background), respectively. \(\mathcal {N}_2\) accounts for the bottom part of the middle pillar of the object and \(\mathcal {N}_4\) accounts for both the top part of the middle pillar and the cloud background. It becomes apparent that the state of \(\mathcal {N}_4\), and whether it is associated with the cloud or the middle pillar, is the key factor of why the whole setup is an optical illusion. As the bottom part of the middle pillar and the ocean are spatially close, it appears natural to initialize the phases of the corresponding Kuramoto oscillators close to each other as well. The same holds true for the top part of the middle pillar and the cloud. Since the ocean and the cloud are spatially far away from each other, their initial phase difference is comparatively large. Hence, \(\varphi _1 = 0\), \(\varphi _2 = -0.1\pi \), \(\varphi _3 = \pi \) and \(\varphi _4 = 0.9\pi \) were chosen. Moreover, the learning rate was chosen as \(\varepsilon = 0.1\), the weighting of new information as \(\alpha = 0.1\) and the intrinsic radian frequency as \(\omega = 1\,\textrm{s}^{-1}\). The according emulation results are shown in Fig. 17. It can be observed that due to the initial conditions, the oscillators \(\mathcal {N}_1\) and \(\mathcal {N}_2\), representing the ocean background and the lower part of the middle pillar, fire together. As expected, due to “cells that fire together, wire together” [32], these two oscillators synchronize. This is interpreted as an association of the bottom part of the middle pillar with the ocean.
The same association takes place with the top part of the pillar and the cloud, represented by oscillators \(\mathcal {N}_3\) and \(\mathcal {N}_4\). This example might indicate that the optical illusion works due to the spatial correlation of the contradictory association between object and background. This contradiction is represented by a large phase shift between the two synchronized subgroups in Fig. 17. Of course, a human is able to detect the contradiction after a short period of time and to determine the origin of the optical illusion.

Fig. 17
figure 17

Four oscillators form two synchronized groups with the latter being in an anti-phase configuration, explaining the contradiction of the optical illusion
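The reported grouping can be reproduced with a plain forward-Euler integration of (25) and (26) using the parameters and initial phases stated above. A sketch, assuming zero initial coupling strengths and a step size of \(10\,\textrm{ms}\) (both assumptions, as the chapter does not state them):

```python
import numpy as np

def simulate(T=300.0, dt=0.01, omega=1.0, eps=0.1, alpha=0.1,
             phi0=(0.0, -0.1 * np.pi, np.pi, 0.9 * np.pi)):
    """Forward-Euler integration of Eqs. (25) and (26)."""
    phi = np.array(phi0, dtype=float)
    m = len(phi)
    kappa = np.zeros((m, m))  # assumed initial coupling strengths
    for _ in range(int(T / dt)):
        diff = phi[None, :] - phi[:, None]  # diff[mu, nu] = phi_nu - phi_mu
        dphi = omega + np.sum(kappa * np.sin(diff), axis=1) / m
        dkappa = eps * (alpha * np.cos(diff) - kappa)
        phi += dt * dphi
        kappa += dt * dkappa
    return phi, kappa

def wrap(x):
    """Map a phase difference into (-pi, pi]."""
    return (x + np.pi) % (2.0 * np.pi) - np.pi

phi, kappa = simulate()
```

In this run, oscillators 1 and 2 lock in phase, as do 3 and 4, the learned couplings within each group are positive and those across groups negative, and the two groups settle into the anti-phase configuration of Fig. 17.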

6 Conclusion

Due to their inherent massive parallelism in a signal processing sense, electrical circuits in general are candidates to replicate the computing power as well as the energy and spatial efficiency of biological systems. In order to do so, the underlying biological systems must be understood and then abstractly transferred to hardware circuits, making bio-inspired computing a highly interdisciplinary field. Before circuits can be manufactured, theoretical circuits must be designed to abstractly mimic biological behavior and serve as a proof of concept. This purpose has been fulfilled in a variety of applications presented in this chapter. The wave digital concept has been used as an emulation technique to digitally replicate these theoretical circuits. Memristors are the centerpieces of all these circuit concepts, yet they are hard to fabricate with the current state of technology. Hence, software-based preinvestigations are useful to shorten development cycles, especially since the wave digital concept is real-time capable and offers in-operando parameter manipulation. For example, this has been exploited to optimize parameters of the well-known Morris-Lecar model in order to show a more biologically meaningful behavior. Further applications included a gait pattern generator, for which a unidirectional synapse has been developed. Also, two- and multi-neuron communication has been investigated on an abstract level, where the circuit concept for the latter also exhibited self-healing to repair synaptic damage and hence showed a rich behavior in the context of neuroplasticity. Lastly, an optical illusion was investigated to mimic an elementary decision problem.

In summary, the field of bio-inspired computing has the potential to tackle problems which are too complex for current computing concepts. The memristor is a novel electrical circuit element that enables computations with and in memory and could be the key component to enable bio-inspired circuits. The bottleneck which needs to be overcome to manufacture high performance circuits that mimic biological behavior is twofold: First, the understanding of the way biological systems function must be increased to identify their general and abstract mechanisms, which then need to be modeled in an electrical circuit. Second, the manufacturing process of memristors must become more reliable and precise.