1 Introduction

Is it truly possible to implement higher brain functions, such as perception or consciousness, in engineered systems? This question has been frequently raised over the last few decades and has led to distinct views over time, as both neurobiological understanding and available computational capabilities advanced [1,2,3,4,5,6,7]. The essence of this question goes back to the fundamental relation between matter and mind, which was addressed as early as ancient Greece, and emerged in the principle of “Dualism”, most famously defended by the philosopher René Descartes in the seventeenth century [8]. Descartes postulated that the body (matter) and the mind are distinct and separate units in human beings because he could not imagine that mental phenomena could be explained by natural mechanisms [9]. However, the invention of electroencephalography (EEG) and imaging techniques, such as functional magnetic resonance imaging (fMRI), enabled the study of inner information processing in the human brain and individuals’ states of consciousness [10,11,12,13]. As a result, the strict distinction between matter and mind has become blurry [14, 15]. Strong evidence has been found that the inner representation of the human brain (the mind) is related to its neurochemistry (the matter), e.g. the amount and type of neurotransmitters and/or drugs within the nervous system [16, 17]. It is therefore worthwhile to reconsider the relationship between mind and matter when engineering artificial systems to exhibit higher brain functions by considering recent progress in nanoelectronics and neurobiology.

This perspective on future computing is motivated by three key aspects. First, by the recent, growing movement to reboot the entire field of computing, i.e. how data are processed. Second, by state-of-the-art, fundamental progress in the neurosciences, including the fields of complex networks and dynamic brain states. Third, by advances in materials science and nanoelectronics that have led to, e.g., memristive devices, nanoparticle/nanowire networks, and fluidic memristors, providing new functionality in electronics, such as synaptic-like plasticity or spatio-temporal networks [18,19,20,21,22]. With the foreseen restrictions on current digital computing, the question “What comes next?” finds its answer in merging novel discoveries made on the nervous system’s information pathways with the development of novel electronic devices, paving the way to an entirely new kind of computing.

In this perspective, we present a concept of an artificial spatio-temporal network which uses temporal and structural mechanisms in nervous systems as guidelines. It addresses the important, interwoven spatiotemporal aspects of information pathways and processing in nervous systems [23,24,25,26,27,28]. The state of nervous system criticality combined with the blooming and pruning of nerve cells during growth might be an interesting guideline to develop new computing principles [23, 25, 29,30,31,32,33,34]. Components essential for artificial spatio-temporal networks, and a pathway to realize them, are presented, including biological fundamentals such as phylogenies, ontogenesis, and homeostasis [35, 36]. However, these basal biological mechanisms alone might not be sufficient to establish mental functions in artificial systems; we therefore include the temporal binding hypothesis developed in neuroscience as a further essential guideline [10, 37,38,39,40]. The synchronized firing of neural ensembles across different brain regions is treated as a fundamental neural mechanism that defines how a hierarchical network structure, such as the brain, can integrate several sensory inputs to determine the unity of an object (for example linking form, color, size, motion, etc., together) [41, 42]. Opportunities and possible limitations of this approach towards implementing higher brain functions in artificial systems, such as perception and consciousness, will be discussed. The paper is arranged as follows:

In Sect. 2, the current status of computing architectures is summarized. In Sect. 3, we present a condensed overview of advanced device components, with a focus on memristive switching devices. We subsequently address spatiotemporal information processing in nervous systems, including network structure, network dynamics, and homeostasis, in Sect. 4. An artificial spatio-temporal network concept is introduced in Sect. 5, where we hypothesize on which information pathways might lead to higher brain functions in engineered systems based on hardware-oriented electrochemical electronics but also discuss current limitations of this approach. In Sect. 6, a possible benchmark for bio-inspired systems is discussed. Section 7 provides a discussion on the practical implementation of an artificial spatio-temporal network to mimic basal biological information pathways.

2 The Current State of Information Technology

The sixties marked the beginning of a glorious time in information technology as the tremendous opportunities of silicon technology merged with the concept of Boolean computing, resulting in the first digital revolution [43].

This development followed the exponential increase over time in the integration of electronic components on a chip predicted by Gordon E. Moore, combined with a sequential data processing architecture comprising a central processing unit and memory for data storage, for which Alan Turing and John von Neumann laid the foundation years before [44,45,46]. The tremendous technological and economical success of the digital revolution is still going strong today with seemingly no end in sight. Billions of transistors on a single processor chip, displaying features as small as about 10 nm in size and clock frequencies of a few GHz, are the current standard in CMOS (Complementary Metal Oxide Semiconductor) technology, representing the backbone of today’s semiconductor industry [47,48,49]. However, during the last couple of years, dark clouds have appeared on the horizon for the semiconductor industry. The envisioned goal of downscaling devices with every new circuit generation to the nm level has created an ongoing need to develop ever more sophisticated and expensive fabrication tools for e.g. lithography, dry-etching, and layer deposition [50,51,52]. As a result, each new circuit generation entails an increasing economic risk for semiconductor companies. Moreover, over the last few decades, progress in processor core clock rates has outpaced memory access times, leading to a cumbersome situation where data transmission between the arithmetic logic unit (ALU) and the memories dominates rather than the arithmetic information processing itself. This system level-related challenge is called memory latency (or memory gap) and is a consequence of the von Neumann bottleneck, where data is processed sequentially [47, 53, 54]. Two major obstacles restrict the further development of information technology, namely limitations in downscaling at the device level, and memory latency at the architecture level. Although society is experiencing a second digital revolution via the resurgence of artificial intelligence (AI) and the Internet of Things (IoT), Moore’s law, which has been driving the computer industry for decades, is becoming outdated as the limits of device integration and/or economical boundaries have now been reached. The incredible advances made by the first digital revolution based on binary “0” and “1” computation, combined with the latest achievements in the field of machine learning, have led to great progress in speech and pattern recognition, while rendering autonomous driving tangible. Yet, additional challenges are growing increasingly problematic behind the scenes. Huge, power-consuming hardware systems in the form of cloud servers are now mandatory to support recent advancements in AI and the IoT. This is why global digital players, such as Google, Amazon, and Facebook, as well as bitcoin trading platforms, need energy-hungry server farms [55].

On the system level, and in particular since the advent of the internet and the renewed interest in AI, the power consumption of the digital world is growing without limits, in increasing conflict with sustainable and climate-neutral resource management. Moreover, future autonomous electric vehicles require both high recognition capability and low power consumption. It is therefore hardly surprising that the semiconductor world is currently in an era of upheaval, turning a new page on information processing based on novel computing architectures and advanced hardware components.

3 Advanced Computing Architectures and Novel Electronic Devices

The aim of this section is to give a short survey on novel computing architectures and advanced electronic devices. We do not intend to present a comprehensive overview but instead to give a taste of the developments currently being pursued to overcome the limitations of digital computing and to establish new computing primitives. To simplify access to the different research areas for interested readers, we discuss seminal and overview papers and present recently published pioneering research. Nonetheless, we are aware that the given reference list is far from exhaustive. In addition, this section is critical to understanding the similarities, and most importantly the distinctions, between artificial spatio-temporal networks and standard neuromorphic computation presented in Sect. 5. While traditional von Neumann computing continues to dominate the ICT scene, recent groundbreaking innovations in alternative computing architectures and advanced electronic devices have become hard to ignore [22, 56,57,58,59,60].

These developments are threefold. Firstly, somewhat older concepts, such as artificial neural networks (ANN) leading to Deep Learning (DL) systems, have received an impressive performance boost through novel and efficient algorithms paired with more powerful electronics hardware [61]. Secondly, new technologies, such as Quantum Computing and Reservoir Computing (RC), have appeared, leading to remarkable results [62,63,64]. Thirdly, in the field of nanoelectronics, a plethora of advanced device structures and novel functional components has led to a rethinking of traditional computing architectures, paving the way to in-memory computing that circumvents the von Neumann bottleneck [58, 60, 65, 66]. In Fig. 1, a shamrock-like illustration highlights these three research areas.

Fig. 1
figure 1

A shamrock-like illustration of the three development areas, which characterize the currently expansive development in the field of Artificial Intelligence (AI)

The first leaf, representing Machine Learning, encompasses Artificial Neural Networks (ANN), Spiking Neural Networks (SNN), Reservoir Computing (RC), Long Short-Term Memory (LSTM), and Deep Learning (DL) systems [67]. The foundations of Neural Networks were laid by McCulloch and Pitts [68], by Rosenblatt’s Perceptron [69] for ANNs, and by von Neumann’s postulate of SNNs in 1956. More recent inventions from Jäger (Reservoir Computing) [70], Hochreiter and Schmidhuber (Long Short-Term Memory) [71], and Hinton (DL) [67] have advanced the field a huge step forward and comprise the backbone of today’s AI.

In the second leaf, the field of neuromorphic engineering, initiated by Carver Mead, Misha Mahowald, and Rodney Douglas, seeks to mimic the basal mechanisms of information processing in nervous systems via an essentially hardware-oriented approach [72,73,74]. In recent years, great progress has been made in the development of bio-inspired processors. Here, event-based spiking neural networks (SNNs), implemented as either mixed-signal (analog and digital) or strictly digital circuits, provide novel opportunities for low-power data processing [59, 60, 75,76,77]. Interestingly, some of the spiking neuromorphic circuits work at biologically relevant frequencies, exhibiting low energy consumption. One figure of merit for neuromorphic processors is the energy per synaptic operation (SOP), which is in the pJ to nJ range [57, 60, 77].

Hence, the incorporation of relatively few basal mechanisms of biological information processing, such as leaky integrate-and-fire dynamics, axon delays, and local learning rules, can lead to significant improvements in resource management.

Recent advances in the field of nanoelectronics devices, such as memristive devices, nanoparticle networks, nanowire networks, or memristive fluids, compose the third leaf of advanced computer architecture. Research in silicon nanoelectronics is dominated by the development of new field effect transistors (FET) [52, 78] for the next generation of CMOS circuits, as well as entirely novel devices and materials exhibiting advanced functionalities [66, 79,80,81,82,83,84]. In particular, the memristor (originating from memory and resistor, also called memristive device) is a two-terminal device that exhibits attractive features for various applications in the post-Moore era, generating considerable interest. Memristive devices were intensively studied in the sixties and seventies [85,86,87]. The field was further propelled forward by the establishment of the theoretical background of memristors by Leon Chua (1971), with the corresponding experimental realization and interpretation by Hewlett–Packard (HP)-Labs (2008) [88, 89]. Over the years, numerous books and reviews have covered fundamental and practical properties of memristive devices and their related circuits [18, 84, 90,91,92].

So far we have described nanoelectronic devices fabricated using top-down methods, where the layers are deposited on an entire wafer and the devices are patterned by lithography and dry-etching [93,94,95]. In bottom-up approaches, functional materials are deposited or synthesized to obtain networks, such as irregular nanowires and/or 3D textures. Often the self-assembly capabilities of materials are exploited to create complex structures. Top-down and bottom-up approaches are typically combined to create the electrical connections necessary to characterize the structures [96]. In the context of bio-inspired computing, we would like to highlight here the work done on nanowire networks [20, 97,98,99,100,101,102,103]. The structure of such networks, and in particular their dynamic properties, reflect basal functionalities as observed in nervous systems, such as small-world connectivity and self-organized criticality (SOC) [23, 104]. We would like to emphasize that the three ICT research areas shown in Fig. 1 are not independent from one another: there is considerable overlap between them, which has proven to be mutually beneficial.

3.1 Advanced Computing Architectures

Here we present a few concepts of novel and reconsidered computing architectures. We would like to emphasize that the following four examples were chosen to demonstrate the diversity of the field but are not intended to give a comprehensive overview. The icons in Fig. 2 represent different computing principles.

Fig. 2
figure 2

The illustration shows snapshots of different computing strategies. a Digital computing, b in-memory computing, c matrix multiplication, d reservoir computing, e oscillatory computing, f bio-inspired computing, g probabilistic computing, h cellular automata, i quantum computing

For the concepts shown in Fig. 2g–i, namely probabilistic computing, cellular automata, and quantum computing, we refer to the literature [62, 80, 105,106,107]. We focus on comparing today’s digital computing to in-memory computing, vector matrix multiplication, reservoir computing, oscillatory computing, and bio-inspired computing (see icons in Fig. 2a–f).

In order to overcome the von Neumann bottleneck of digital computing (Fig. 2a), near-memory computing was developed in 1990 [108]. Here, the strict separation of an arithmetic logic unit communicating with several distinct memories was eliminated. Part of the computational tasks was performed within the memory itself, leading to more efficient computing. This development has recently shifted to a higher gear, leading to in-memory computing (Fig. 2b) following the invention of memristive crossbar-arrays [58, 109]. Vector matrix multiplication (Fig. 2c) is considered a key hardware booster in Deep Learning. The time- and energy-consuming task of vector matrix multiplication is performed in a memristive crossbar-array in which the input and output layers are interconnected by an array of weighted, checkerboard-arranged memristive devices [58, 92, 110]. Vector matrix multiplication is an example of how Deep Learning may benefit from the development of new electronic devices, e.g. memristors. Reservoir Computing (Fig. 2d) was independently invented by Herbert Jäger and Wolfgang Maass and belongs to the general framework of Recurrent Neural Networks (RNN) [70, 111]. In RNNs, a backpropagation-through-time procedure is typically applied to adjust (train) the weights of the network to desired target functions. Here, a significant amount of time is required, with no certainty that the optimal weights will be set after learning. In RC, by contrast, the reservoir consists of an ensemble of nonlinear elements coupled to one another. The reservoir projects incoming data and time series to a higher dimension that can be easily read out by conventional classifiers, in which the training is executed by means of a linear regression, for example. The reservoir can be either virtual or physical. Virtual reservoirs are designed like neural networks in which the connections are randomized but remain fixed during computation, while physical reservoirs rely on natural systems exhibiting nonlinearity [64, 112, 113].
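To make the in-memory principle concrete, the following minimal sketch (with purely illustrative conductance and voltage values, not tied to any specific device technology) shows how a memristive crossbar performs a vector matrix multiplication in place: the programmed conductances act as the weight matrix, and Ohm’s and Kirchhoff’s laws carry out the multiply-accumulate operation.

```python
import numpy as np

# Minimal sketch of analog vector-matrix multiplication in a memristive
# crossbar. Each crosspoint stores a conductance G[i, j]; applying the input
# vector as row voltages V[i] yields column currents
# I[j] = sum_i V[i] * G[i, j] (Ohm's law + Kirchhoff's current law),
# i.e. the multiplication happens inside the memory array itself.

rng = np.random.default_rng(0)

n_in, n_out = 4, 3
G = rng.uniform(1e-6, 1e-4, size=(n_in, n_out))   # programmed conductances (S)
V = np.array([0.1, 0.0, 0.2, 0.05])                # input voltages (V)

I = V @ G                                          # column currents (A)
print("output currents:", I)
```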

The goal of analog computing is to mimic complex technical systems by means of electronic circuits which represent key system parameters as a set of voltage levels at nodes. Oscillatory computing (Fig. 2e) refers to a subset of analog computing in which the oscillator frequencies and phases enrich the representation of information. Oscillatory systems are omnipresent in nature and engineering [114,115,116]. Technically, oscillators can be realized in numerous ways, such as in discrete or integrated semiconductor electronics, spin-torque devices, Josephson junctions, optical devices, or micro-electro-mechanical systems [82, 117,118,119,120,121,122]. In general, dynamical systems of coupled oscillators may offer elegant solutions to solve NP-hard problems. Coupled oscillator networks have been successfully exploited in the field of pattern recognition [123, 124]. However, larger systems have not yet been successfully developed due to noise-stability problems and device constraints in the new class of compact oscillators based on VO\(_2\) or NbO\(_x\), for example [125, 126].
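As a toy illustration of phase-based oscillatory computing, the following sketch simulates a small Kuramoto-type ensemble of coupled phase oscillators; the coupling strength, natural frequencies, and ensemble size are illustrative assumptions, not parameters of any of the cited hardware implementations.

```python
import numpy as np

# Toy Kuramoto model of N coupled phase oscillators:
# d(theta_i)/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i).
# The order parameter r in [0, 1] measures phase synchrony of the ensemble.

rng = np.random.default_rng(1)
N, K, dt, steps = 50, 1.5, 0.01, 5000

omega = rng.normal(0.0, 0.5, N)       # natural frequencies
theta = rng.uniform(0, 2 * np.pi, N)  # initial phases

for _ in range(steps):
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + (K / N) * coupling)

r = np.abs(np.exp(1j * theta).mean())
print(f"order parameter r = {r:.2f}  (r -> 1 means synchrony)")
```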

The term bio-inspired computing (Fig. 2f) is only loosely defined. To a large extent, the computing primitives described above (see Figs. 1 and 2b–e) are more or less biologically motivated. The Perceptron is a crude blueprint of a neuron and is still today at the heart of Deep Learning systems [69]. In-memory computing is a strategy to abrogate the strict separation of the ALU and memory in digital computing, and is derived from biological information processing, where logic and memory are blended. Neuromorphic processors contain circuits that can execute the Leaky Integrate-and-Fire dynamics of neurons, including the biologically motivated winner-take-all (WTA) principle, and can introduce axon delays [57, 127]. Coupled oscillators imitate the orchestra of neural ensembles, i.e. the communication of separate brain regions, which is considered to be the fundamental mechanism that explains perception [10, 37, 41]. Cellular automata, for example, were introduced by John von Neumann to describe self-reproduction in biology [128]. Probabilistic computing is based on Bayesian inference, which is closely related to the way humans make decisions [129, 130]. Therefore, it is essential to declare precisely to what extent an artificially built system is bio-inspired and which biological pathways have been applied as design principles [131].

3.2 Novel Electronic Devices

There is an ongoing effort to shrink silicon FETs to feature sizes below 5 nm. The FinFET structure, first proposed in 1989, now dominates CMOS technology [132]. Novel designs, such as GAAFET (Gate-All-Around) and MBCFET (Multi-Bridge-Channel), are serious candidates for next-generation CMOS chips (see Fig. 3a) [52]. Aside from this ongoing improvement of conventional FETs, devices with novel functionalities and materials have been attracting considerable interest to implement novel computing architectures. Magnetic Josephson Junctions, photonic synapses, and bio-organic memories represent only a fraction of current development strategies [21, 80, 117, 121]. In Fig. 3b–d, unconventional nanoelectronics device structures are illustrated. In Fig. 3b, a memristive device structure is illustrated, comprising two electrodes separated by a memristive layer. In the same figure, a qualitative I-V curve of a memristive device is shown alongside a sketch of a biological synapse (see also Fig. 4), highlighting that memristive devices are promising artificial synaptic counterparts due to their capability of presenting variable resistive weights in engineered neural networks [19].

Fig. 3
figure 3

Schematics of four advanced device components. a 3D view graph of a GAAFET as applied in today’s latest digital processors [52], b sketch of a memristive device including a qualitative I-V curve and illustration of a synapse [18, 91, 133], c cartoon of a 3D nanowire network [20, 22, 81, 97, 99,100,101,102], d 3D cross-sectional graph of a fluidic memristive device adapted from [21] with permission

One universal property of the memristive device concept is that the memristive state depends on previously induced charge flows, applied currents, or applied electric fields, thus storing a historically-determined resistance state. For details concerning resistive switching and the underlying physical-chemical mechanisms, we refer the reader to the references given in the figure caption (Fig. 3) and the overwhelming literature on the subject [19, 91, 134, 135].
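As an illustration of this history dependence, the sketch below implements the linear ionic drift model often used as a first-order description of the memristive device reported by HP Labs (cf. [89]); all parameter values are illustrative assumptions, and real devices generally require more elaborate, nonlinear models.

```python
import numpy as np

# Linear-drift memristor model (a first approximation): the device is split
# into a doped (low-resistance) and an undoped (high-resistance) region; the
# normalized state variable w/D moves with the charge that has flowed, so the
# resistance encodes the device history.

R_on, R_off, D, mu_v = 100.0, 16e3, 10e-9, 1e-14   # illustrative parameters
w = 0.1 * D                                        # initial doped-region width
dt, T = 1e-4, 2.0
t = np.arange(0, T, dt)
v = 1.0 * np.sin(2 * np.pi * 1.0 * t)              # sinusoidal drive (1 Hz, 1 V)

i_hist = []
for vk in v:
    x = w / D
    R = R_on * x + R_off * (1 - x)                 # series combination
    i = vk / R
    w += mu_v * (R_on / D) * i * dt                # state update ~ charge flow
    w = min(max(w, 0.0), D)                        # hard bounds on the state
    i_hist.append(i)

# Plotting v against i_hist would trace the pinched hysteresis loop that is
# the fingerprint of memristive behavior.
print(f"peak current {max(np.abs(i_hist))*1e3:.3f} mA, "
      f"final resistance {R:.0f} ohms")
```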

It is this concurrently complex and simple device concept, together with the tremendous predicted potential for breakthrough technologies in areas such as universal memories and novel non-Boolean computing schemes for cognitive electronic systems, that propels the research and development of memristors and memristor-based circuits worldwide. It is important to mention that, in contrast to the theoretically simplistic memristor concept, in practice the realization of memristive devices by modern thin film technology is a task littered with obstacles. Up until now, a huge number of experimental findings on memristor devices consisting of a broad variety of metal/insulator material combinations have been published, all of which show memristive I-V curves [91, 133, 135]. At first glance, it seems that the toolbox of resistive switching devices is ready for nearly any circuit application: simply pick a device concept and follow the extensive materials and methods laid out in the literature. However, a closer look at the fine details casts a dark shadow on this bright research field, leading to a harsh awakening based on hard facts. These “hard facts” are the requirements and boundary conditions set by the envisaged circuit applications, in which memristors must fit technologically, electronically, and economically. Currently, two main development avenues can be explored for memristive devices. The first focuses on resistive random access memories (RRAMs). It is believed that today’s zoo of diverse memory types can be replaced by a single (universal) memory concept. RRAMs are considered attractive candidates for universal memories because they: (i) show non-volatile data storage, (ii) can be densely integrated, (iii) are fast, and (iv) are cheap to produce. In particular, such a universal memory might attenuate the problem known as memory latency in modern digital computers [54, 136]. Besides the RRAM goal, which may be categorized under the label “More Than Moore”, novel and very appealing computer architectures have been proposed in which memristors might play a vital role. Another main focus of possible memristive device applications may be associated with such catchphrases as: non-Boolean computing, bio-inspired information processing, neuromorphic engineering, or cognitive electronics [66, 137,138,139,140]. On the local, synaptic level, learning in nervous systems is explained by Hebb’s learning rule and spike-timing-dependent plasticity (STDP), amongst others [141]. STDP and other memory-related mechanisms observed in nervous systems, such as long-term potentiation (LTP) and long-term depression (LTD) [142], have been successfully mimicked by memristive devices [143, 144]. Moreover, traditional studies known from behaviorism, such as classical conditioning (e.g. Pavlov’s dog), anticipation, and optical illusions, were successfully realized experimentally with both single memristive devices and pairs of them [119, 145,146,147,148]. The extent to which larger networks of memristive devices are able to mimic higher brain functions is still unknown.
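The pair-based STDP rule mentioned above can be summarized in a few lines; the amplitudes and time constants below are illustrative textbook-style values rather than parameters of any specific memristive implementation.

```python
import numpy as np

# Pair-based STDP: the weight change depends on the relative spike timing
# delta_t = t_post - t_pre. Pre-before-post (delta_t > 0) potentiates,
# post-before-pre (delta_t < 0) depresses; both effects decay exponentially
# with |delta_t|. A_plus, A_minus and the time constants are illustrative.

A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20e-3, 20e-3   # seconds

def stdp_dw(delta_t):
    """Weight update for a single pre/post spike pair."""
    if delta_t >= 0:
        return A_plus * np.exp(-delta_t / tau_plus)
    return -A_minus * np.exp(delta_t / tau_minus)

for dt_pair in (5e-3, 20e-3, -5e-3, -20e-3):
    print(f"delta_t = {dt_pair*1e3:+5.1f} ms  ->  dw = {stdp_dw(dt_pair):+.4f}")
```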

In Fig. 3c, a sketch of a nanowire network (NWN) is shown. NWNs have been successfully synthesized for various materials, such as metals, oxides, and semiconductors [113, 139]. Nanowires show appealing features with respect to bio-inspired computing from the point of view of structure, topology, and inherent dynamics [99,100,101,102, 149]. In recent comprehensive reviews by Zhu et al. and Kuncic and Nakayama, hallmarks known from biological systems, such as small-world connectivity (topology) and self-organized criticality (dynamics), were addressed [22, 101]. Interestingly, brain-like avalanche effects have been observed in NWNs that exhibit dynamic features found in nervous systems [23, 25, 100, 103]. Finally, we would like to emphasize that emergent neuromorphic materials and devices are not restricted to the solid state phase. In Fig. 3d, a sketch related to a nanofluidic device is shown. Bocquet and co-workers demonstrated by analysis and molecular dynamic simulations that ion transport across quasi-two-dimensional slits under an electric field displays memristive I-V curves, as well as spiking voltage patterns in accordance with the Hodgkin–Huxley model of biological neurons [21, 150]. We would like to emphasize that while these examples of NWNs and nanofluidics clearly demonstrate that the material “tool box” offers novel opportunities to implement higher brain function, its full potential has yet to be fully explored.

4 Information Processing in Nervous Systems

This perspective explores the role of information processing observed in nervous systems as a basis for the development of energy-efficient technological computing systems, and even the possibility of implementing higher brain functions in engineered systems. Nervous systems offer paradigms to improve energy-efficient artificial information processing units. The exploration of signal pathways in nervous systems shows us how evolution led to extremely energy-efficient signal processing units. For example, the human brain dissipates a power of only roughly 20–25 W. This, in addition to the amazing capabilities of human vision and hearing, reveals fascinating opportunities for autonomous vehicles or speech recognition. Moreover, processing sensitive data in server clouds may lead to severe security concerns: if the data of millions of cars in motion, including their controllability, were to fall into the wrong hands, fatal attacks could result. Local data processing in an autonomous car with low power consumption is therefore preferable.

Creatures are very well adapted to their specific ecological niche, a result of hundreds of millions of years of ongoing evolution and the associated interaction between creatures and their environment throughout their life span [151,152,153]. In particular, information pathways in nervous systems are prototypes for engineers to perform cognitive tasks in quasi-real time with extremely low power consumption [154]. These features alone, and the information processing behind them, represent attractive models for entirely new computing architectures. In Sects. 4.1 and 4.2, local and global aspects of information processing mechanisms are presented, respectively. Distinct differences between digital computing and information pathways in biological systems are highlighted in the framework of topology and dynamics to motivate the concept of artificial spatio-temporal networks, as a subset of the field of bio-inspired information processing. In Sects. 4.3 (Phylogenies and Ontogenesis) and 4.4 (Homeostasis), we underline important hallmarks of information processing in biological systems which have so far only been partly considered for artificial systems. Note that in the present section (Sect. 4), we do not address how such mechanisms can be established in electronics: this is the subject of Sect. 5, where several approaches are proposed to implement an artificial spatio-temporal network. It is not our goal to develop another pattern recognition system but to address the fundamental question: “To what extent can higher brain functions be reproduced in artificial systems?”. We believe that essential information pathways in biology have been to a large extent overlooked, as detailed in this perspective. One important difference between artificial spatio-temporal networks and contemporary AI and neuromorphic engineering is that essential growth mechanisms observed in nervous systems are exploited as a guideline in the former.

4.1 Local Aspects of Information Processing in Nervous Systems

In contrast to current clock-driven Boolean Turing machines, information processing in biological nervous systems is characterized by a highly parallel, energy-efficient, and adaptive architecture [53, 155, 156]. When it comes to pattern recognition, failure tolerance, and cognitive tasks, even simple creatures outperform supercomputers, in particular regarding power dissipation. Fundamental building blocks leading to such remarkable properties exploit neurons as central processing units, which are interconnected by synapses to form a complex dynamical three-dimensional network, the connectome [157]. In Fig. 4, the structure of a neuron is sketched, including the soma, dendrites, the axon, and connections to other neurons by synapses.

Fig. 4
figure 4

Blueprint of a neuron including an enlarged sketch of a synapse and the illustration of a single action potential, a spike

An action potential (spike) is defined as a sudden transitory and propagating change in the resting potential across a membrane. Action potentials sent from presynaptic neurons are received via the dendrites and synapses of the postsynaptic neurons. Those signals are integrated within the cell body of the postsynaptic neuron. When a threshold potential is reached, the neuron generates a new spike or a sequence of new spikes at the axon hillock that are transmitted via the axon to a postsynaptic neuron. This entire process is called Leaky Integrate-and-Fire (LIF). The term leaky reflects the fact that the cell membrane is not a perfect electrical insulator. Numerous spiking neuron models, such as the FitzHugh–Nagumo, Morris–Lecar, or Hindmarsh–Rose models, have been developed to address different aspects of the biological substrate [158,159,160,161]. Depending on the electrical activity of two connected neurons, the connection strengths (the weights) can become weaker or stronger. This is at the heart of the learning rule of Donald O. Hebb, who first recognized that “Neurons which fire together wire together” [162]. On the biochemical level, the variable strength is explained by the amount of neurotransmitter released from vesicles into the synaptic cleft.
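A minimal leaky integrate-and-fire simulation, assuming generic (not physiologically fitted) membrane parameters, illustrates the integration, threshold, and reset behavior described above.

```python
# Minimal leaky integrate-and-fire (LIF) neuron: the membrane potential
# integrates input current, leaks back toward rest, and emits a spike
# (followed by a reset) when it crosses the threshold. All constants are
# illustrative, not fitted to a particular cell type.

tau_m, v_rest, v_reset, v_th = 20e-3, -70e-3, -75e-3, -54e-3  # s, V
R_m, dt, T = 10e6, 1e-4, 0.5                                  # ohm, s, s

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    t = step * dt
    i_in = 2.0e-9 if 0.1 < t < 0.4 else 0.0       # 2 nA current step
    dv = (-(v - v_rest) + R_m * i_in) / tau_m      # leaky integration
    v += dv * dt
    if v >= v_th:                                  # threshold crossed: fire
        spike_times.append(t)
        v = v_reset                                # reset after the spike

print(f"{len(spike_times)} spikes, first at t = {spike_times[0]*1e3:.1f} ms"
      if spike_times else "no spikes")
```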

From an engineering point of view, nervous systems process information in such a way that silicon technology, the holy grail of modern digital computing strategies, seems to be outmatched. For example, electronic components and circuits, such as transistors, memories and processors, are optimized for small parameter spreads to run at GHz clock frequencies under a precise pulse timing [47, 54]. In particular, they exploit nanosecond signal pulses that travel at nearly the speed of light along well-ordered transmission lines that connect different system parts in an essentially two-dimensional topology. In contrast, information pathways in nervous systems are characterized by highly irregular tissue consisting of neurons, synapses, and axons. Low conduction velocities on the order of several m/s lead to pronounced signal retardation, i.e. delays. In Fig. 5, characteristic timescales of CMOS processors and nervous systems are compared. In digital computing, the pulse duration is below a ns, and the signal transmission velocity is at nearly the speed of light. In nervous systems, the corresponding values are 3.5 ms for the pulse duration of an action potential or spike, and a few tens of m/s for the transmission velocity of a spike along myelinated axons [163]. Whereas the clock frequency of a modern Si processor is about 5 GHz, human EEG brain waves range from below 1 Hz to a few 100 Hz [41, 164, 165]. This represents a six orders of magnitude discrepancy between technical and biological parameters. These facts alone point towards fundamental differences between information processing in digital computing and that in natural nervous systems.

Fig. 5
figure 5

Comparison between pulse transmission speed, pulse duration, and voltage amplitude in nervous systems and digital computing. The sequence of action potentials was adapted with permission from Fig. 1 of Ref. [160]

4.2 Global Aspects of Information Processing in Nervous Systems

Nervous systems are considered to be time-varying networks in which spike-dynamics and cellular morphology are intricately linked and reciprocally interwoven [27, 151, 166, 167].

Information processing in nervous systems applies a broad range of structurally and temporally related phenomena [163, 168, 169]. At the level of individual synapses, neurons, and axons, the formation and transmission of action potentials (“spikes”) are reasonably well understood. However, a look at the mesoscopic and macroscopic level of the three-dimensional neuronal network leads to an entirely different assessment. Although groundbreaking progress has been reported on in vivo and in vitro techniques over the last decades, the nervous system’s spatiotemporal information processing is still not well understood [12, 170,171,172]. The biochemical mechanisms that explain higher brain functions at the cellular level, such as awareness, perception, and in particular consciousness, remain elusive [15, 24, 173,174,175,176]. Nonetheless, neuroscientists were able to identify basal mechanisms that define the fundamental platform of the unique and marvelous nervous system’s information processing. Characteristic features, such as STDP [141, 159, 177], stochastic firing and bursting of neurons in the hundred Hz range, recurrent network structures, and aspects of oscillatory synchrony in larger neuronal ensembles [39, 41, 104, 114, 178,179,180,181,182,183] are essential ingredients in biologically-based information processing. Moreover, factors related to the close interaction of a nervous system with its environment, i.e. external stimuli, are of crucial importance [184]. Therefore, neuronal design principles provide a model for bio-inspired computing systems, one that is diametrically opposed to the development strategies of present binary IT, with their GHz clock frequencies, near-light-speed signal transmission, and strict separation of logic and memory [47, 51].

Beyond that, we would like to emphasize that information-related aspects of nervous systems during evolution (phylogenies), along with their individual development throughout their lifetime (ontogenesis), provide a promising model from which novel electronic architectures may be designed. In the animal kingdom, the intricacy of nervous systems varies tremendously, from single- and multi-cellular organisms to the human brain with its billions of interconnected neurons [185,186,187,188,189,190,191,192,193]. For the sake of completeness, we would like to specify that the existence of cognitive functionalities in entities without a nervous system, such as plants or the acellular slime mold Physarum polycephalum, is currently heavily debated. For interested readers, more detailed information can be found in the following Refs. [194,195,196,197].

Despite their different cognitive capabilities, neurons and nervous systems present many common features in all creatures, such as synapses, signal transmission lines (axons), and action potentials (spikes), that act as basic information building blocks. While the term morphology defines the real structure of a nerve net, the topology of a net is more abstract, related to important graph-theoretical parameters that define the connectome of a nervous system [24, 27, 198, 199]. The connectome is considered to be the canonical state describing the cellular wiring diagram of a nerve net. Edges, nodes, cluster coefficients, characteristic path lengths, hubs, and motifs determine the topological quality of a net, for example. An unraveling of the micro- and macro-connectome and nervous system dynamics offers a suitable model for the next generation of bioinspired hardware electronics [169].
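As an illustration of these graph-theoretical measures, the following sketch computes the clustering coefficient, characteristic path length, and degree hubs of a synthetic small-world graph using the networkx library; the graph is a stand-in with assumed parameters, not a measured connectome.

```python
import networkx as nx

# Toy illustration of the graph measures named above, computed on a
# Watts-Strogatz small-world graph (a common stand-in for cortical-like
# topologies). The graph itself is synthetic.

G = nx.connected_watts_strogatz_graph(n=100, k=6, p=0.1, seed=42)

clustering = nx.average_clustering(G)              # cluster coefficient
path_length = nx.average_shortest_path_length(G)   # characteristic path length
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:3]

print(f"average clustering coefficient: {clustering:.3f}")
print(f"characteristic path length:     {path_length:.3f}")
print(f"highest-degree nodes (hubs):    {hubs}")
```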

Fig. 6
figure 6

Network cube and complexity: a classification of various networks. The dashed blue line illustrates a fictive, guided “walk” through the cube, starting from “S” and ending at the goal “G”. In this way, it will be possible to push the network properties of neuromorphic circuits towards those of cortical maps (see lower right-hand corner in the cube). This approach is part of the artificial spatio-temporal network concept. b Qualitative illustration of the complexity term. From a spatial (topological) and temporal (dynamical) point-of-view, a complex system is neither completely random nor entirely ordered, but exhibits a state in between. a and b are adapted with permission from Solé and Valverde [200] and Huberman and Hogg [201], respectively

The network cube (Fig. 6a) classifies a number of different nets according to theoretical attributes, including randomness, modularity, and heterogeneity [26, 200]. Interestingly, in this framework, cortical maps (lower right corner of the cube) extracted from the structural properties of nervous systems are somewhat isolated from all other nets, which are located in the upper left corner of the cube. In Fig. 6b, dynamical complexity (y-axis) is described as a state between complete asynchrony, with independent, random firing of the individual oscillators, and complete synchrony, with all oscillators firing in phase [201]. Between these two extremes, system dynamics can be characterized by a complex and time-varying interaction of the oscillatory ensemble. This regime exhibits features of self-organized criticality (SOC) typically observed at and near phase transitions and might be identified by avalanches of firing neuron ensembles [20, 23, 202,203,204,205]. Avalanche behavior is common in many physical phenomena, such as magnetic systems, earthquakes, and brain dynamics at the critical region of phase transitions, and was first described by Bak et al. [202]. The common feature of all these systems is slow external driving, causing an intermittent, widely distributed response. Avalanches appear in very different sizes, often distributed in the form of power laws. As known from statistical physics, power laws imply the absence of a characteristic scale, a property observed close to a critical point. When describing the dynamics in a nervous system using the SOC and brain-like avalanche models, the type of phase transition associated with each term must be clearly defined. For example, SOC and brain-like avalanches in NWNs (see Sect. 3.2 and Fig. 3c) are related to inactivity–activity phase transitions. In the context of firing neuron ensembles in a brain, SOC and avalanches may describe a temporal phase transition between the asynchronous and synchronous states [203]. In other words, a system could be in the supercritical state (above the critical point) with respect to an inactive–active phase transition, while remaining subcritical (below the critical point) with respect to the asynchronous–synchronous phase transition. However, such phase transitions are not necessarily exclusive and might appear simultaneously in the brain, or the mechanisms could even be interwoven. So far, while nanoparticle networks and NWNs have been studied in the context of their activity patterns, neuron-like oscillatory components have yet to be considered. The orchestra of firing neuron ensembles is considered a key underlying mechanism in understanding the binding problem, i.e. the capability of the brain to integrate (bind) different sensory inputs. For example, such a process can occur in the visual system when forming a unified perception of the environment [37,38,39,40, 42, 206, 207]. Suggestions on ways to include relaxation-type oscillators to mimic the LIF features of biological neurons and the state of SOC are presented in Sect. 5. Finally, the topological and temporal dynamics of the regime are extremely sensitive to external distortions (stimuli) at the critical point, allowing the system to respond in numerous ways to external stimuli [23, 25, 30, 208, 209]. In biological terms, this means that the system can easily adapt to risky environmentally-driven situations.
The manifold brain states available near the point of criticality offer a wide repertoire of means to react in a reasonable way to external tasks imposed by the environment. In extreme situations, this improves the chances of survival and is of evolutionary importance.
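A toy branching-process model, with an assumed Poisson offspring distribution and purely illustrative parameters, captures the qualitative picture sketched above: avalanche sizes remain small in the subcritical regime, diverge in the supercritical regime, and become broadly (roughly power-law) distributed near the critical point.

```python
import numpy as np

# Toy branching process as a cartoon of neuronal avalanches: each active
# unit triggers a Poisson(sigma) number of units in the next time step.
# sigma < 1: activity dies quickly (subcritical); sigma > 1: activity blows
# up (supercritical); sigma ~ 1: avalanche sizes are broadly, roughly
# power-law, distributed (critical). Purely illustrative, not a brain model.

rng = np.random.default_rng(3)

def avalanche_size(sigma, max_size=10_000):
    active, size = 1, 1
    while active > 0 and size < max_size:
        active = rng.poisson(sigma * active)
        size += active
    return size

for sigma in (0.8, 1.0, 1.2):
    sizes = np.array([avalanche_size(sigma) for _ in range(2000)])
    print(f"sigma = {sigma:.1f}: mean size = {sizes.mean():8.1f}, "
          f"max size = {sizes.max()}")
```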

4.3 Phylogenies and Ontogenesis

The origin of bio-inspired computing can be best drawn from the two following neuroscience quotations:

(1) Gilles Laurent pointed out the common evolutionary heritage of living organisms. His contribution “Shall We Even Understand the Fly’s Brain?” (see: 23 Problems in Systems Neuroscience, edited by J. L. van Hemmen and T. J. Sejnowski, Chap. 1, p. 3, [210]) states: “When it comes to computation, integrative principles, or “cognitive” issues such as perception, however, most neuroscientists act as if King Cortex appeared one bright morning out of nowhere, leaving in the mud a zoo of robotic critters, prisoners of their flawed designs and obviously incapable of perception, feeling, pain, sleep, or emotions, to name but a few of their deficiencies.” [211].

(2) Martijn P. van den Heuvel et al. made the following statement in “The Neonatal Connectome During Preterm Brain Development”: “The adult cerebral brain network is the result of a complex developmental trajectory. From the prenatal formation of the first neurons, throughout the first years of life and all the way into late adolescents, the brain undergoes an elaborate developmental trajectory.” [212].

Why are these statements so important for the design of novel bio-inspired computing primitives? The general idea behind these two quotations is the concept of development. Quotation (1) by Gilles Laurent highlights evolutionary development, the phylogenies of species, and their relevance to the emergence of the human cortex. This bottom-up approach favors the study of less complex creatures that appeared early during evolution, laying the foundation for much more complex nervous systems in vertebrates [190]. In particular, information processing strategies throughout evolution and in completely different species are astonishingly similar, if not exactly the same. For example, the basic ingredients of information processing (neurons, synapses, and action potentials, as described in Sect. 4.1) in the nervous systems of squids and macaques are hardly distinguishable from one another. Although François Jacob addressed the random and playful character of evolution by the phrase “Nature is a tinkerer, not an inventor” [153], evolution can be somewhat conservative in the sense that similar structural and dynamical features appear in very different species throughout phylogenies. This justifies the investigation of information pathways in simpler, easier-to-understand organisms in order to comprehend higher brain functions in more complex vertebrates. A famous example is the research of Eric Kandel on the snail Aplysia, relating physiological signaling with behavior [185]. Studying the neural design of biological species with only a few hundred or thousand neurons is a fruitful ansatz to develop novel computing primitives [169, 191,192,193, 213,214,215]. We will come back to this issue in Sect. 5.

While phylogenetics addresses the development and evolution of groups of similar species, ontogenesis is the study of how an individual member of a species develops as it ages. In quotation (2), Martijn P. van den Heuvel and coworkers underline the intriguing mechanisms of nervous system development in humans, from conception to late adolescence. We propose that ontogenesis and its functional ingredients could serve as an essential guideline for novel computing primitives. To support this argument, we describe here the fundamentals of ontogenesis in the human nervous system, including the importance of external stimuli during development. Physiology, neurobiology, and behavioral science provide overwhelming experimental evidence showing that the conditions during the growth and regeneration of neuronal nervous systems under external stimuli are of central importance [64, 216,217,218,219,220,221]. Both the formation and elimination of nerve cells, synapses, and axonal connections occur frequently during the first stages of brain development, belonging to a very creative process that shapes the nervous system to be well-adapted for future environment-related tasks. In addition to the creation of neurons and axons, pruning (programmed cell death or apoptosis) and axonal rewiring are both essential and expedient mechanisms. Finally, myelination of axons is an essential step to improve the nervous system’s performance by shaping and optimizing the signal transfer time between neurons and distributed brain areas. Gerald Edelman coined the expression “Neural Darwinism” to highlight the striking parallels between evolution and brain development [222,223,224,225]. Neurons, synapses, and axonal connections grow lavishly at first, a growth that is controlled by the genome, epi-genome, and stochastic factors. Subsequent structural shaping and elimination, often called blooming and pruning, are largely influenced by the interaction of the entire nervous system with environmental stimuli, and the nervous system’s subsequent reaction [184, 226, 227]. There have been attempts in the past to design materials and systems that mimic biological information processing, dubbed “evolvable hardware” and “evolution-in-materio” [97, 196, 197, 228, 229]. This work has been recently extended to novel, transistor-based devices by Baek et al. [230]. Although the findings are very promising, basal spatiotemporal and topologically-relevant mechanisms have so far not been reproduced in electronics hardware. In both biological and artificial systems, the connection between these mechanisms should be worked out with regard to the required complexity and functionality (see Fig. 6).

Neural network growth in nervous systems has been studied in-depth both theoretically and experimentally [29, 31, 32, 212, 219, 231,232,233,234]. In particular, the early stages of nervous system growth under external stimuli are of critical importance for the healthy development of mature creatures [157, 220, 235]. It is known that both external stimuli and genetic factors have tremendous impact on the emergence of functional neural circuits that determine behavior during critical periods of cortical region growth [226, 227, 234, 236]. Cell overproduction and subsequent attrition are likewise important for nervous system development [226, 237, 238]. Morphological aspects, connectivity, growth, regeneration, and the impact of neuronal activity-related spike-based synchronization mechanisms in neuronal network ensembles serve as models for novel electronics [39, 41, 114, 165, 175, 239, 240]. Clear evidence of structural dendritic spine plasticity is shown in a series of photographs taken over a few days in Fig. 7, demonstrating that spines grow and shrink depending on external, touch-related stimuli in mice [241].

Fig. 7
figure 7

Structural plasticity. In vivo time snapshots of the appearance and disappearance of dendritic spines in the mouse barrel cortex. The top row shows the growth of a persistent spine between days 12 and 28 (orange arrows). The bottom row shows examples of spine retraction (yellow arrows between days 0 and 16, and green arrows between days 0 and 12). Figure from [241] with permission. These structural changes were correlated to external stimuli applied by whisker trimming in mice

A look at a few growth parameters underlines the importance of understanding biological networks during development. A two-year-old human toddler exhibits the maximum number of neurons and synapses of our species, roughly a factor of two more than a fully grown adult. If we estimate 170 billion neurons [187, 226] with 10\(^3\) synapses per neuron, a two-year-old human carries 170 \(\times \) 10\(^{12}\) synapses. We assume the total axon length of a toddler to be about 850,000 km (https://aiimpacts.org/transmitting-fibers-in-the-brain-total-length-and-distribution-of-lengths/). The time from egg fertilization to the age of two is 1000 d or 8.64 \(\times \) 10\(^7\) s. This leads to an average net growth of roughly 2000 neurons/s, 2 million synaptic interconnections/s, and an axon growth rate of about 10 m/s! These measures alone unambiguously demonstrate the overwhelming significance of network growth in humans, in particular during childhood [221, 242]. Moreover, we believe that such a tremendous development is an interesting template for novel computing architectures. It might be an essential building block to achieve higher brain functionalities in artificial systems, and constitutes a key aspect of artificial spatio-temporal networks.
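For transparency, the back-of-envelope rates quoted above can be reproduced directly from the numbers given in the text:

```python
# Back-of-envelope reproduction of the growth rates quoted in the text.
neurons       = 170e9             # neurons in a two-year-old (from the text)
synapses      = neurons * 1e3     # ~10^3 synapses per neuron
axon_length_m = 850_000 * 1e3     # ~850,000 km of axons, in meters
seconds       = 1000 * 24 * 3600  # ~1000 days from fertilization to age two

print(f"neuron growth rate:  {neurons / seconds:,.0f} neurons/s")    # ~2000
print(f"synapse growth rate: {synapses / seconds:,.2e} synapses/s")  # ~2e6
print(f"axon growth rate:    {axon_length_m / seconds:,.1f} m/s")    # ~10
```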

Figure 8 shows several snapshots taken during human development, where the exuberant growth of neurons between the ages of one month and two years is clearly visible. Interestingly, between the ages of two and four years, neuron pruning leads to reduced neuron density. While the net neuron density during adulthood is rather constant, blooming and pruning still continue to occur, albeit at a much lower rate [225].

Fig. 8
figure 8

Adapted from [157]

Blooming and pruning of nerve cells in young humans [157] and J. Conel, “The Post-Natal Development of the Human Cerebral Cortex,” Harvard University Press, Cambridge, 1939–1967.

From the postnatal phase up to the age of around two years, our central nervous system is characterized by enormous development and permanent remodeling, while being simultaneously subject to an exuberant amount of external stimuli via our senses [184, 212, 227, 232]. Genetics, stochastics, and external stimuli (in other words nature and nurture) define who we are and strongly influence higher brain function during adulthood, including perception, awareness, and consciousness.

In Fig. 9, windows of plasticity in human brain development are sketched [227, 243]. Even in much simpler creatures (e.g. the worm C. elegans), external stimuli play an essential role in the healthy development of the nervous system [235].

Fig. 9
figure 9

Adapted from [227, 243] with permission

Illustration of critical or sensitive periods during the first years after birth for humans. The three periods present (from left to right) the development of sensory pathways, motor skills, and higher cognitive functions.

These windows for sensing, motor skills, and higher cognition are also called critical periods. They reflect the tremendous rearrangement of the human brain during early childhood, accompanied by enormous learning capabilities. It is interesting to relate the growth parameters estimated above and the appearance of critical periods to human altriciality. Altriciality refers to the way creatures are born completely incapable of caring for themselves (Dunsworth et al. 2012). Hence, in the case of humans, at the moment of birth (eye opening) a sudden rush of external stimuli, in particular vision, meets a premature nervous system still under heavy construction, reconstruction, and growth. The concomitant occurrence of environmental input, nervous system growth, and close interaction between the nervous system and its environment may explain the huge plasticity and learning capabilities during these first years. This development seems to be essential to form higher brain functions [212, 218]. Although it might be incredibly difficult to mimic such basal neurobiological mechanisms in engineered systems, nervous system development and growth cannot be neglected in establishing higher brain functions in artificial systems. Attempts to achieve this goal are proposed in Sect. 5.

4.4 Homeostasis

As in Sect. 4.3, we begin with the following sequentially-labelled neuroscience quotation: (3) Arjen van Ooyen and Markus Butz-Ostendorf emphasized the role of homeostasis on p. 133 of their contribution (see: The Functional Role of Critical Dynamics, edited by Nergis Tomen, J. Michael Herrmann, and Udo Ernst [244]): “In conclusion, during development, homeostatic structural plasticity can guide the formation of synaptic connections to create a critical network that has optimal functional properties for information processing in adulthood.” [245].

Roughly speaking, homeostasis is a kind of counteracting mechanism to network plasticity, and thus an important factor to ensure network robustness and stability. As will be discussed below in more detail, homeostasis comprises dynamical and morphological components, and is thought to explain how a nervous system stabilizes (itself) near the point of criticality [246]. In other words, homeostasis addresses the term “self” in SOC. The role of homeostasis as a stabilizing factor in neural networks is amply described in a huge number of publications, with only a few mentioned here [225, 239, 247,248,249,250,251]. In homeostatic structural plasticity, all incoming synapses of a cell are modified to stabilize the neuronal activity around a particular level (set point), reflecting a negative feedback loop between neuronal activity and connectivity [225, 248, 252, 253]. The fundamental principle of homeostasis is sketched in Fig. 10.

Fig. 10
figure 10

Adapted from [253, 254] with permission

Illustration of homeostasis in a nervous system at the neuron level.

Higher firing (dynamic component) of a neuron results in spine deletion (morphological component), whereas reduced firing supports spine formation, keeping the average electrical activity at a set-point, potentially stabilizing the global activity of the entire neural ensemble near the desired critical state, i.e. the state with the largest dynamic range for information processing [30, 208, 209, 254]. While this model appears attractive at first glance, it raises a fundamental question in neuroscience: “How can an individual, local neuron in a huge nervous system access the global network state in order to orientate its own activity accordingly?” [27, 203], or in other words, what defines the activity set-point? This is an example of the poorly understood relation between local, mesoscopic, and global mechanisms in nervous systems.
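A schematic way to express this negative feedback loop is a synaptic-scaling rule that multiplicatively adjusts all incoming weights according to the deviation of the firing rate from its set point; the toy activity model and constants below are illustrative assumptions and are not derived from the cited studies.

```python
import numpy as np

# Schematic homeostatic (synaptic-scaling) loop: if the neuron's firing rate
# exceeds the set point, all incoming weights are scaled down; if it falls
# below, they are scaled up. The "rate" here is a toy linear function of the
# summed input drive; constants are illustrative.

rng = np.random.default_rng(7)

w = rng.uniform(0.5, 1.5, size=20)       # incoming synaptic weights
inputs = rng.uniform(0.0, 1.0, size=20)  # mean presynaptic drive
target_rate, eta = 5.0, 0.01             # set point (Hz) and feedback gain

for step in range(500):
    rate = 2.0 * w @ inputs              # toy activity model (Hz)
    w *= 1.0 + eta * (target_rate - rate) / target_rate  # negative feedback

print(f"final rate: {2.0 * w @ inputs:.2f} Hz (set point {target_rate} Hz)")
```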

In chapters “Redox-Based Bi-Layer Metal Oxide Memristive Devices” and “MemFlash—Floating Gate Transistors as Memristors”, we presented various basal local and global information pathways in nervous systems. In the following chapter, we will suggest a number of strategies with the goal of implementing higher brain functions in artificial systems [229].

5 Artificial Spatio-temporal Networks

At this point, an obvious and understandable question might be: Is the goal of achieving higher brain functions in artificial systems attainable at all, or, more precisely, to what extent can the intriguing and complex biological mechanisms described in chapters “Redox-Based Bi-Layer Metal Oxide Memristive Devices” and “MemFlash—Floating Gate Transistors as Memristors” be merged into a novel computing primitive? How close is neuroscience to understanding higher brain functions, and to what extent can the plethora of phenomena offered by materials and engineering design strategies enable mental functions in artificial systems?

Here we discuss possible ways and limitations of using artificial systems to mimic biological fundamentals, including topological and dynamical aspects such as phylogenies, ontogenies, homeostasis, SOC, memory, the oscillatory orchestra (synchrony), and so on. Nonetheless, we are aware that there may be fundamental limits which impede consciousness in engineered systems. It would be interesting, however, to identify and define those limits.

In Fig. 11, considerations set by materials science and design strategies are illustrated.

Fig. 11 Artificial spatio-temporal networks: Materials science considerations and design strategies to generate higher brain function in artificial systems. The proposed system takes into account basal functionalities of bio-inspired information pathways discussed in chapters “Redox-Based Bi-Layer Metal Oxide Memristive Devices” and “MemFlash—Floating Gate Transistors as Memristors”. a 2D or 3D spatio-temporal materials network. Wires within the network are connected via memristive components. The memristive functionality at the crossing points of the network implements memory and local plasticity. The faded area represents a growing network. In the case of a 2D network structure formed on a planar substrate, network growth might be modified, for example, by pre-patterned substrates, a functionalized surface, additional electrical potentials, optical stimuli, and deposition-related growth parameters (materials, deposition rate, reactive gases, substrate temperature, and so on). A 3D network offers further freedom of design and allows for nervous system-like connectivity. The 3D network could be a solid-phase or even a multiphase network, the latter combining materials in the solid, liquid, and gas phases. b Representation of a pulse-oscillator ensemble to mimic neural spiking activity. The individual oscillators of the ensemble are electrically connected to the network, leading to modifications of the network connectivity by oscillator pulses. Conversely, the network weights influence the dynamic state of the oscillator ensemble via pulse coupling. The oscillatory ensemble allows the input of external stimuli (e.g., touch, vision, and hearing) via firing rate coding. In addition, analyzing the interspike interval (ISI) distributions of the ensemble in quasi-real-time enables permanent monitoring of the dynamic state of individual oscillators, as well as of the entire ensemble. c Stage to monitor the structure and extract the topology of the network in real time by, for example, optical microscopy, electron microscopy, thermal imaging, or magnetic field distribution detection (similar to MEG, magnetoencephalography). d By monitoring the oscillatory ensemble dynamics (see b) and the structural connectivity (see c), the spatio-temporal state and its evolution can be analyzed in real time

Before describing the interplay between the components sketched in Fig. 11, we should first consider a few aspects of biological information pathways which are obviously implementable by materials science and electronics, and might simplify the execution of the proposed artificial spatio-temporal network. In a human brain, the ability to access, and thus measure, the structural, topological, and dynamical states is hindered by both technological and ethical constraints [255, 256]. In contrast, artificial systems should theoretically permit access to all local and global parameters in any conceivable experimental setup. This offers a high degree of freedom in designing artificial systems. In particular, for a system growing in complexity, a designer might decide which segments should be externally controlled and which should develop via self-assembly and self-organization.

Furthermore, the time scales involved in biological information processing may actually facilitate their artificial engineering. In phylogenetic and ontogenetic development, slow time scales dominate the scene: species vary from one generation to the next, and networks grow over days to years. Additionally, nervous system dynamics lie in the 100 Hz range, and slow transmission velocities on the order of m/s, i.e. the speed of spikes along axons, are common. As such, there is no need to build ultrafast artificial systems in order to imitate basal biological information pathways. Indeed, the deposition or synthesis of a material, e.g. of nanoparticles or NWNs, constitutes a growing materials network (Fig. 11a) and can be adjusted to slow time scales. In addition, slow time scales match well with materials transport parameters, including ionic drift, diffusion currents, and mass transport in general. Biological time scales are easily accessible by electronics, facilitating circuit design and permitting real-time observation of spatio-temporal system development (Fig. 11b, c, d). For example, the leaky integrate-and-fire behavior of a biological neuron can be technically realized by van der Pol (vdP) oscillators [115, 119, 257], by compact devices based on VO\(_2\) or NbO\(_x\) which exhibit a negative differential resistance (NDR) I-V curve [126, 258,259,260], or by integrated, mixed-signal circuits [120]. In general, the slow time scales known from biological information pathways, including the external stimuli that affect them, offer an exploration space attainable by materials-related phenomena, electronics, and parameter monitoring.
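
As an illustration of how comfortably biological time scales sit within electronic reach, the following Python sketch simulates a leaky integrate-and-fire unit with a 10 ms time constant, producing spikes at a few tens of Hz. The parameters are arbitrary placeholders, and the unit merely stands in for, rather than models, the vdP- or NDR-based hardware oscillators mentioned above.

```python
import numpy as np

# Illustrative leaky integrate-and-fire unit on biological time scales
# (0.1 ms steps, ~10 ms membrane time constant, firing at a few tens of Hz).
# All parameters are arbitrary placeholders; this is not a device model.

dt = 0.1e-3            # time step [s]
tau = 10e-3            # membrane time constant [s]
v_th, v_reset = 1.0, 0.0
i_in = 1.2             # constant suprathreshold input (normalized)

v, spike_times = 0.0, []
t_sim = 1.0            # simulate 1 s
for k in range(int(t_sim / dt)):
    v += dt / tau * (-v + i_in)      # leaky integration
    if v >= v_th:                    # threshold crossing: fire and reset
        spike_times.append(k * dt)
        v = v_reset

isi = np.diff(spike_times) * 1e3
print(f"firing rate: {len(spike_times)} Hz")   # tens of Hz, i.e. the biological range
print(f"mean ISI: {isi.mean():.1f} ms")
```

Even this naive integration loop runs in a fraction of a second on standard hardware, underlining that ms-scale biological dynamics pose no speed challenge for electronic emulation or real-time monitoring.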

How can an artificial spatio-temporal computing system, as sketched in Fig. 11, be practically realized? The goal in a bio-inspired artificial spatio-temporal network is to reach the desired topological and dynamical states simultaneously, in order to mimic the previously discussed characteristic hallmarks of the nervous system. This is illustrated in Fig. 6a, where the topological cortical map region is labeled “G” (Goal) in the network cube, and in Fig. 6b, where the state of SOC is highlighted as the envisaged dynamical state. The main challenge here is to define the appropriate material network properties and dynamical setting for the entire system that will enable a spatio-temporal state similar to that of a nervous system. This global system state is often said to be structurally complex while being temporally close to the edge of chaos [24, 25, 171, 173, 183, 261, 262]. To achieve this goal, we describe the components presented schematically in Fig. 11 and their interactions in accordance with the biological information pathways described in chapters “Redox-Based Bi-Layer Metal Oxide Memristive Devices” and “MemFlash—Floating Gate Transistors as Memristors”. The material network template offers manifold opportunities on either a 2D or 3D platform (Fig. 11a). Network growth mimicking ontogenesis can be realized by continuous film deposition or ongoing material synthesis of, for example, nanoparticle or nanowire networks. Network growth can be influenced in at least three ways. First, via the oscillatory ensemble that is electrically connected to the network: here, external stimuli, e.g., hearing, vision, and touch, are imprinted into the material network growth process via firing rate coding (Fig. 11b), and network formation and structure evolution are modified by the additional potential differences between the oscillator contacts within the network. Second, by integrating additional conductive pads (islands) on a 2D substrate platform, the formation of filaments between the oscillator electrodes can be controlled by the islands’ shape, number, size, and/or an additionally applied bias potential (Fig. 11a). The formation of conductive filaments during network growth could also be manipulated via structurally modulated or functionalized surfaces. In this way, some network pathways are suppressed while others are assisted [263]. Biologically, this corresponds to axon growth and guidance [233]. This approach can also be applied to 3D structures, which provide an increased degree of freedom and in principle allow nervous system-like connectivity. The materials network, whether 2D or 3D, does not necessarily have to be in the solid state: electrolytes may be appropriate fluids that satisfy the aforementioned requirements, including the state of criticality [21, 99, 264,265,266,267,268,269]. Third, additional stimuli (Fig. 11a) in the form of, e.g., light or temperature can also modify the spatio-temporal evolution of the functional material network. Common to all three methods is that information is imprinted during network growth. This distinguishes the artificial spatio-temporal network approach from common AI systems, in which the training or learning sequence is applied after system manufacture. By applying the three methods described above, it might be possible to imprint information in a way similar to a human nervous system during ontogenesis (method 1), as well as a kind of a priori knowledge, i.e. phylogenetic factors (methods 2 and 3). A toy illustration of the first method is sketched below.
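
The following Python sketch is a deliberately abstract toy of the first growth-shaping route: oscillator nodes fire pulses at stimulus-dependent rates (firing rate coding), coactive pulses strengthen the memristive crossings between the corresponding nodes, and unused crossings decay, so that information is imprinted during “growth” rather than in a separate training phase. All rates, constants, and names are hypothetical and do not model any specific device physics.

```python
import numpy as np

# Abstract toy of stimulus imprinting during network growth (not a device model):
# nodes emit pulses at stimulus-dependent rates (firing rate coding); pulses of
# coactive nodes strengthen the "memristive" crossing between them, while unused
# crossings slowly decay (pruning). Rates and constants are hypothetical.

rng = np.random.default_rng(1)
n_nodes = 8
stim_rate = np.array([80, 80, 20, 20, 20, 20, 5, 5])  # Hz, hypothetical stimuli
dt, t_grow = 1e-3, 5.0                                 # 1 ms steps, 5 s of "growth"
g = np.zeros((n_nodes, n_nodes))                       # crossing conductances
dg_up, decay = 0.02, 0.999                             # potentiation step, decay factor

for _ in range(int(t_grow / dt)):
    fired = rng.random(n_nodes) < stim_rate * dt       # pulses emitted in this step
    co = np.outer(fired, fired)                        # coactive node pairs
    g = g * decay + dg_up * co                         # strengthen used crossings
np.fill_diagonal(g, 0.0)

# strong conductances emerge between the strongly co-stimulated nodes (0 and 1)
print(np.round(g, 3))
```

The point of the toy is only that the final connectivity pattern reflects the stimulus statistics present while the network was forming, which is the conceptual difference to post-fabrication training in conventional AI systems.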

For the entire system, simultaneous, in-depth monitoring of the network structure during its development and temporal evolution is intended, in accordance with neuroscience approaches to extracting the structure and dynamics of complex brain networks [24, 270]. To this end, the spatio-temporal development of time-varying connectivity within the functional materials network (see Fig. 11a) will be monitored, for example, by means of optical microscopy, electron microscopy, or the magnetic field distribution in accordance with magnetoencephalography (MEG) (see Fig. 11c). This will allow visualization of the time-evolving correlation matrix of the oscillatory ensemble and the extraction of more theoretical metrics, such as clustering coefficients, characteristic path lengths, motifs, modularity, and hubs [24, 26, 27, 171]. In Fig. 6a, the pale blue dashed line in the cube represents a fictional pathway through the network cube. At first, we assume that the materials network is in a topological state labeled “S” (Start). The position “S” within the cube is chosen as an example, but could just as well be any other topological position within the network cube. By constantly monitoring the topology of the system during network growth and intervening via a set of parameters (e.g. added materials, extra potentials, and external stimuli to the oscillatory ensemble), it might be possible to steer the system towards “G”, defined by a set of characteristic theoretical parameters (hubs, motifs, modularity, clustering coefficient, path length, etc.) [27].
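
A minimal example of the intended topology-extraction step is given below: a (here randomly generated) thresholded connectivity matrix is converted into clustering coefficient, characteristic path length, modularity, and hub candidates using the NetworkX library. The matrix and threshold are placeholders standing in for real monitoring data.

```python
import numpy as np
import networkx as nx
from networkx.algorithms import community

# Sketch of the topology-extraction step: a thresholded connectivity matrix is
# turned into clustering coefficient, characteristic path length, modularity,
# and hub candidates. The random matrix and threshold are placeholders for
# real monitoring data (microscopy, MEG-like field maps, etc.).

rng = np.random.default_rng(2)
w = rng.random((30, 30))
w = (w + w.T) / 2                                   # symmetric "measured" weights
adj = (w > 0.8).astype(int)
np.fill_diagonal(adj, 0)                            # no self-connections

G = nx.from_numpy_array(adj)
print("clustering coefficient:", round(nx.average_clustering(G), 2))
if nx.is_connected(G):
    print("characteristic path length:", round(nx.average_shortest_path_length(G), 2))

parts = community.greedy_modularity_communities(G)
print("modularity:", round(community.modularity(G, parts), 2))

degree = dict(G.degree())
hubs = sorted(degree, key=degree.get, reverse=True)[:3]
print("hub candidates (highest degree):", hubs)
```

Tracking such metrics over successive snapshots of the growing network would trace out the pathway from “S” towards “G” in the network cube of Fig. 6a.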

Simultaneously, the ISI distributions and time series of the oscillatory ensemble will be recorded (see Fig. 11b) [271]. Spike train distances provide a means of quantifying neuronal variability and the degree of synchrony within and between oscillatory ensembles, and may indicate the rise of oscillatory avalanche firing as one indicator of SOC [23, 272,273,274,275,276,277]. SOC is described as a state located somewhere between the random, independent firing of individual oscillators and complete synchrony, in which all oscillators fire in phase at the same frequency [265, 278]. Between these two extremes, a system’s dynamics can be characterized by a complex and time-varying interaction of the oscillatory ensemble (see Fig. 6b). This regime exhibits features of criticality typically observed close to phase transitions [20, 25, 103, 209, 279,280,281]. In particular, the topology and temporal dynamics of a system in such a state are extremely sensitive to external perturbations (stimuli) and may respond to them in numerous ways.
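
The sketch below illustrates the kind of ensemble analysis meant here: per-oscillator ISI statistics and a simple avalanche-size extraction from a spike raster. The surrogate raster consists of independent (hence sub-critical) spiking, so it only demonstrates the analysis pipeline; for real data, the avalanche-size distribution would be compared against a power law as one indicator of SOC.

```python
import numpy as np

# Illustration of the ensemble analysis on a surrogate spike raster:
# (i) per-oscillator ISI statistics and (ii) avalanche sizes, defined here as the
# total number of spikes in runs of consecutive active 1 ms bins. The surrogate
# raster is independent (sub-critical) spiking and only demonstrates the pipeline.

rng = np.random.default_rng(3)
n_osc, n_bins, p_spike = 32, 20000, 0.02            # oscillators, 1 ms bins, spike prob.
spikes = rng.random((n_osc, n_bins)) < p_spike      # surrogate raster

isi = np.diff(np.flatnonzero(spikes[0]))            # ISIs of one oscillator [ms]
print("mean ISI [ms]:", round(float(isi.mean()), 1),
      "| CV:", round(float(isi.std() / isi.mean()), 2))

active = spikes.sum(axis=0)                         # population activity per bin
sizes, current = [], 0
for a in active:
    if a > 0:
        current += int(a)                           # extend the running avalanche
    elif current:
        sizes.append(current)                       # avalanche ends at a silent bin
        current = 0
sizes = np.array(sizes)
print("avalanches:", sizes.size, "| mean size:", round(float(sizes.mean()), 1),
      "| max size:", int(sizes.max()))
```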

Practically speaking, we will begin by analyzing raster plots, phase portraits, phase response curves, bifurcation diagrams, spike-distance measurements, and cross-correlation-type time-series analyses of the coupled nonlinear oscillator network. Information from these analyses will subsequently be applied to quantify the phase and frequency relationships between network oscillators and their development over time [282, 283]; a minimal sketch of such an analysis is given after this paragraph. Finally, we would like to discuss obvious obstacles and challenges. In Sect. 4.4, the role of homeostasis was highlighted. The concept of homeostasis is of essential importance for stabilizing nervous system dynamics and morphology around a set-point. For the system presented in Fig. 11, homeostasis is not illustrated. It might be possible to reconstruct a feedback parameter from the structural and functional matrices to reduce or enhance, if necessary, the oscillatory activity, or to modify the material growth process. Another challenge might be the implementation of appropriate delay lines to mimic the important signal retardation known from nervous systems [173, 182]. Ionic conductors with slow ionic motion in the form of drift or diffusion currents could be a possible solution.
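
As a starting point for such analyses, the following Python sketch estimates the relative lag and phase between two oscillator signals via cross-correlation. The synthetic 100 Hz signals and the 2 ms lag are assumptions standing in for measured node waveforms.

```python
import numpy as np

# Sketch of a cross-correlation-type analysis between two oscillator signals.
# The 100 Hz sinusoids and the 2 ms lag are synthetic stand-ins for measured
# node waveforms; with this convention a positive lag means y lags behind x.

fs, f0 = 10_000.0, 100.0                       # sample rate [Hz], oscillation [Hz]
t = np.arange(0.0, 1.0, 1.0 / fs)
lag_true = 2e-3                                # y is delayed by 2 ms relative to x
rng = np.random.default_rng(4)
x = np.sin(2 * np.pi * f0 * t) + 0.1 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * f0 * (t - lag_true)) + 0.1 * rng.standard_normal(t.size)

xc = np.correlate(y - y.mean(), x - x.mean(), mode="full")
lags = np.arange(-(t.size - 1), t.size) / fs
mask = np.abs(lags) < 0.5 / f0                 # restrict to half a period (cycle ambiguity)
lag_est = lags[mask][np.argmax(xc[mask])]

print(f"estimated lag: {lag_est * 1e3:.2f} ms (true: {lag_true * 1e3:.1f} ms)")
print(f"relative phase at {f0:.0f} Hz: {np.degrees(2 * np.pi * f0 * lag_est):.0f} deg")
```

Repeating this pairwise estimate over sliding windows would yield the time-evolving phase and frequency relationships mentioned above.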

One important issue remains: picture a fabricated artificial spatio-temporal system, as depicted in Fig. 11, which exhibits all previously discussed biological information pathways. How can we benchmark the system and determine how it solves tasks set by external stimuli? Certainly, the functional and structural network states reflect the overall system state. As such, one viable approach is to read out the system state and to activate a set of artificial motor neurons to react to an input task. However, this does not accurately represent the process in the human brain, where there is no internal, global observer deciding on the next step [9]. At this point, we are confronted with a difficult challenge: how can we lead matter to imitate the mind? While the authors can suggest an example system as shown in Fig. 11, this question remains open.

6 Benchmarking for Bio-inspired Computing

Benchmarking in AI is an important approach to measure performance and subsequently enable comparisons between different systems. In pattern recognition, for example, the MNIST data set is used, with the recognition rate defining a clear benchmark. While contemporary AI systems show extraordinary capability in performing a single, specific task, their success at task variability is highly limited compared to the nervous system. Nonetheless, a new generation of AI has demonstrated extraordinary capabilities in the field of gaming (chess and Go), including an aptitude for self-learning [284]. Yet, it remains unclear how to define fair and comprehensible benchmarking for neuromorphic systems and bio-inspired computing [285]. Computational tasks must be carefully designed in order to assess the overall system’s performance in comparison with human mental capabilities, as previously proposed by Alan Turing in his seminal work “Computing Machinery and Intelligence” [155]. Bloom’s learning taxonomy, which was developed to hierarchically categorize learning in the classroom, can be helpful in assessing how successfully artificial systems mimic higher brain functions [286]. This taxonomy contains six categories of cognitive skills and presents a hierarchy with increasing cognitive functionality from bottom (factual knowledge) to top (creation) (see Fig. 12), or in other words, from lower-order skills that require less cognitive processing to higher-order skills that require deeper learning and a greater degree of cognitive processing [287].

Fig. 12 Suggested benchmark for bio-inspired systems based on Bloom’s taxonomy. The pyramid represents increasing cognitive human skills from bottom to top. Figure 2 from Ref. [287]

This strategy may serve as a basis for benchmarking in bio-inspired computing systems. However, due to the nervous system’s task variability for each of the six cognitive categories, transparent benchmarks must be developed. This goal is extremely important for future comparisons of bio-inspired systems, which are currently developed on different platforms. In addition, resource-related parameters, such as energy consumption, system weight, and failure tolerance, need to be included.

7 Discussion

This perspective introduces the concept of artificial spatio-temporal networks, which draws on basal hallmarks of nervous systems, such as their morphological and dynamical characteristics, to reproduce higher brain functions in artificial systems. In particular, the basal mechanisms known from the growth of nervous systems might play a significant role in their function. This concept will undoubtedly be a way to include biologically relevant features in future artificial systems. Yet, only the tip of the iceberg has thus far been addressed: to fully realize an artificial spatio-temporal network, several challenges remain unresolved.

In more general terms, artificial spatio-temporal networks again raise the fundamental question: “To what extent can higher brain functions be reproduced in artificial systems?” Seminal books and papers [1, 3, 5, 7, 176, 234, 288], and many more, address this topic in one way or another. According to the authors, higher brain function can be described on the basis of the natural sciences and mathematics, permitting us to view this challenge in another light. On an atomistic level, we find in living nature, and therefore in any nervous system, old and well-known friends from the periodic table of the elements, including but not limited to carbon (C), sodium (Na), potassium (K), chlorine (Cl), oxygen (O), and hydrogen (H). Any effort to establish higher brain function in an artificial system, whether in silico (software oriented) or in a material-based substrate, as in the case of artificial spatio-temporal networks, would apply another toolbox of elements to establish awareness, perception, or consciousness, e.g. silicon (Si), gold (Au), silver (Ag), tungsten (W), O, and so on. There is no obvious reason why this strategy should not work, but if it cannot, what are the fundamental limits, and how are they defined? A look at the biochemical substrates of living species highlights the weaknesses of this simplistic, atomistic viewpoint. There is still unknown genetic information that strongly controls nervous system behavior and function, especially during development, and which therefore cannot currently be considered in any artificially constructed system. Whether there are shortcuts to bypass the role genes play in neural behavior and development is completely unknown, and this might act as a showstopper [234]. On the one hand, it is truly challenging to introduce basal biological functionalities, such as homeostasis, signal delay, growth, and the appropriate states of criticality and topology, into an artificial system. On the other hand, the materials toolbox may offer a plethora of phenomena which have not yet been explored for novel computing architectures [66, 266,267,268,269]. Hence, these simple questions and views point towards an even more fundamental aspect: in living systems, the separation between matter and information becomes blurred, making it risky to apply these terms without investigating living and artificial systems equally, or without precisely clarifying the respective context [289].

8 Conclusion

In this perspective, we addressed fundamental limits of current ICT and briefly summarized the state of the art. Today’s digital electronics work with clock rates in the GHz range, utilizing ns pulses and signal transmission at nearly the speed of light in vacuum. Meanwhile, nervous systems exhibit numerous remarkable and fascinating features, including anticipation, awareness, perception, and consciousness. The associated action potential spikes are six orders of magnitude longer, and travel with a velocity six orders of magnitude lower, than their electronic analogs, while the nervous system as a whole dissipates only a couple of watts of power. We touched on the fundamentals of information processing in biological (nervous) and engineered systems. Specifically, we highlighted the dynamical and morphological properties exhibited by nervous systems, using the human brain as an example. The exceptional topology of the human cortex in comparison to other biological and technical networks, in addition to the state of SOC, served as guidelines to develop artificial spatio-temporal systems. A pathway to realize artificial spatio-temporal networks in a hardware-oriented system was presented, aiming to emulate higher brain functions in an artificial system. The role of ontogenesis was discussed, revealing that the mechanisms of neural network growth, which have yet to be addressed in great detail, provide crucial information for designing novel artificial computing systems.

Neural network growth illustrates how important the ongoing interaction between the internal and external world is when artificially creating the basic structures that provide the ability to learn specific functions. In our opinion, this emphasizes the importance of basal properties which, while beginning to be applied in artificial systems, have yet to be fully implemented. These properties include individual autonomous dynamic units, time-variable coupling between them, and both positive and negative connection growth. With respect to time-variability, the research field has shown enormous progress in recent years with the development of memristive systems. Although memristive devices can already replicate the phenomena associated with learning to a certain degree, the question remains whether these devices can suitably reproduce both the necessary processes in their entirety and the global dynamics shaped by an overwhelmingly complex network. The last point in particular presents immense challenges for a conservative implementation of memristive devices in large-scale systems. Finally, we discussed possible limitations in implementing higher brain functions in artificial systems. We concluded that genetic information plays a key role in the development of nervous systems, knowledge that we still lack if we want to fully implement this behavior in artificial systems, specifically with regard to awareness, perception, and consciousness. The exploration space for implementation is certainly extraordinarily large for artificial spatio-temporal systems. This huge parameter space is both a curse and a blessing: while such a large number of variables must be monitored and controlled, it also allows for greater flexibility and opportunities. One thing is certain in this context: no matter which engineered solution ultimately prevails, humanity will be confronted with a multitude of ambivalent questions and challenges, in which certainly “Matter and Mind Matter”.