Engineered systems, including our current most advanced artificial intelligences (AIs) driven by deep learning (DL), are typically characterised by: (1) their suitability for only the specific purposes for which they were designed and trained (fundamentally, even large language models are doing nothing more than predicting the next word in a long sequence); (2) their inability to operate autonomously in unconstrained real-world environments; and (3) the increasing engineering effort required as systems become ever more complex and purpose-built. Autonomous vehicles, reliable medical diagnosis, detection of offensive online content, and even robust facial recognition have all, at some time or other, been claimed to be essentially solved, yet it is now generally accepted that many of these capabilities remain years or even decades away.

DL pioneers LeCun and Hinton have proposed that supervised learning will ultimately need to be abandoned and that unsupervised learning, as occurs in the brain, is the way forward to more capable AI [1]. “Obviously we’re missing something… The next revolution of AI will not be supervised” (LeCun) [2]. An understanding is emerging that [3]: “The brain seamlessly merges bottom-up discriminative and top-down generative computations in perceptual inference, and model-free and model-based control… [We must] explain task performance on the basis of neuronal dynamics and provide a mechanistic account of how the brain gives rise to the mind.” Buzsaki states more succinctly [4]: “Brains do not process information: they create it.”

This perspective advocates taking inspiration for AI explicitly and faithfully (at an appropriate level) from the brain:

Part 1: Presents the most recent research on neural mechanisms that underlie computation.

Part 2: Explains how emergent computation could arise from these mechanisms.

Part 3: Suggests how AI can be improved by incorporating these principles.

Principles of Neural Function and Plasticity

The brain’s flexibility, and indeed its entire computational capacity, is rooted in the activity dynamics of its components [4, 5]—that is, in neurons connected by synapses into networks. Significant differences exist between spiking neural networks (SNNs) and the superficially similar artificial neural networks (ANNs) used for DL:

  • Real neurons communicate with spikes, and spike timing forms an integral part of the neural representation of information [6]. The computational and efficiency benefits of sparse spike coding are substantial [7, 8]. SNNs are also provably more powerful than their real-valued ANN counterparts [9].

  • Networks self-organise to represent feedforward input structure [10, 11]. The mechanisms the brain uses to accomplish this have been established, and this is where SNNs have achieved their most outstanding successes to date. Spike timing-dependent plasticity (STDP) [12,13,14,15,16,17] identifies causal interactions between neurons, strengthening the synapse between two neurons when one reliably drives the other. Several homeostatic mechanisms [14,15,16, 18] regulate neuronal firing rates and total synaptic strengths and, combined with local decorrelating inhibition [13, 15,16,17, 19], implement sparse non-negative matrix factorisation (NMF) [16, 20, 21] that extracts the underlying latent causes of the inputs. Since these extracted features are sparse and additive, they are parts-based and therefore have the significant advantage of being readily explainable.

  • Abundant feedback connections build predictive models [5, 22,23,24,25] by learning to invert the self-organised feedforward representations (also using STDP [24, 25]). Prediction is so important to, and so ubiquitous in, neural computation that feedback connections in the brain almost always outnumber the feedforward ones.

  • Oscillations in populations of neurons [26,27,28,29] and quasi-chaotic dynamic state transitions [30,31,32,33] continuously and dynamically reconfigure neural circuits based on the computational needs of the task at hand. Overall activity levels, oscillation frequencies, and proximity to critical regions of dynamical phase space are controlled through widespread recurrent thalamic and brainstem projections [34].

  • Patterns of neural activity form representations of perceptions, actions, and internal brain states. Multi-scale inhibitory mechanisms [17, 27, 34,35,36] ensure that only best-matching circuits are activated for any given representation or task, implementing powerful k-winner-take-all computations [37].

  • Spike conduction delays [27, 28, 38, 39], oscillations [27, 28, 38], and short-term plasticity (STP) [38,39,40] innately represent time in the brain [39,40,41]. In recurrent neural circuits, STP can maintain memory states (working memory) for indefinite lengths of time [42, 43].

  • Dopamine directly modulates the gain of STDP for model-free reinforcement learning (RL) [44, 45]. Other neuromodulators have arguably equally important effects—acetylcholine increases the efficacy of feedforward connections and attention to inputs, and noradrenaline responds to novel and salient inputs and serotonin to risks and threats [46].
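As a concrete illustration of how two of the mechanisms above interact, the following minimal sketch combines pair-based STDP with homeostatic synaptic scaling. All parameter values (amplitudes, time constant, target total weight) are illustrative assumptions, not taken from the cited studies:

```python
import math

# Minimal sketch: pair-based STDP plus homeostatic synaptic scaling.
# A_PLUS/A_MINUS, TAU, and the target total weight are illustrative.

A_PLUS, A_MINUS = 0.05, 0.055    # potentiation / depression amplitudes
TAU = 20.0                        # STDP time constant (ms)

def stdp_dw(dt_ms):
    """Weight change for a pre->post spike pair; dt = t_post - t_pre."""
    if dt_ms > 0:     # pre before post: causal -> potentiate
        return A_PLUS * math.exp(-dt_ms / TAU)
    else:             # post before (or with) pre: acausal -> depress
        return -A_MINUS * math.exp(dt_ms / TAU)

def scale_weights(weights, target_total=1.0):
    """Homeostatic normalisation: keep summed input weight constant."""
    total = sum(weights)
    return [w * target_total / total for w in weights]

weights = [0.2, 0.2, 0.2, 0.2, 0.2]
# Synapse 0 repeatedly fires just before the postsynaptic spike (causal);
# synapse 4 fires just after (acausal).
for _ in range(50):
    weights[0] += stdp_dw(+5.0)
    weights[4] += stdp_dw(-5.0)
    weights = [max(w, 0.0) for w in weights]   # weights stay non-negative
    weights = scale_weights(weights)

print(weights)   # causal synapse strengthened, acausal suppressed
```

The non-negativity and fixed total weight are what give the resulting representation its sparse, additive, parts-based character.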

The above principles are well-established and offer potential clues to the next AI revolution beyond ANNs. However, while many of these principles have been simulated in models of neuronal dynamics, most still await effective integration into functional models of neural operation. That is, while the links from spiking network structure to dynamics are reasonably well-understood, the known links from neural dynamics to function are considerably more tenuous. Significant progress in this regard will be required to harness the richness and complexity of SNN dynamics for computation in AI.

Other principles of biological neural function are more speculative, or the specific underlying mechanisms have not yet been fully determined:

  • Brains combine predictive models of the world with oscillations and dynamic circuit reconfiguration to create internalised simulations of “what if” scenarios and future plans [47,48,49]. These models are also used to maintain consistent brain states through time and to sanity check inputs to flag unexpected and out-of-distribution events [50]. Much of the brain’s computational power arises from the interactions between its internal world models and its innate dynamics.

  • Closely related to the above is the idea of stochastic sampling [25, 51]—the brain represents probabilities by sampling over time from the distribution of possible interpretations of its inputs. I conjecture that the same process applies to outputs (actions), including sampling from possible sequences of actions through time, which is clearly equivalent to internalised simulations of ‘what if’ scenarios.

  • Dopamine works with oscillations, dynamic circuit reconfiguration, and the internal world models to implement model-based RL. How this occurs is currently unclear, but prediction of forthcoming reward and temporal difference (TD)-style reinforcement learning will be essential components [52].

  • Explicit actions are just the final steps in a series of neural events learned through reinforcement—i.e. actions are preceded by sequences of internal neural operations that, from the perspective of neural activity patterns and TD learning, are essentially indistinguishable from those patterns that directly cause movement of the body in the world. High-level cognitive functions are therefore simply “internal actions” [49, 53] (“the brain is embodied, and the body and brain are embedded in the world” [54] – Edelman). Related thinking has led to the suggestion of an “embodied Turing test” by many eminent AI researchers [55].

  • Subcortical circuits, particularly the basal ganglia, cerebellum, brainstem, and even the spinal cord, fully control stereotypical and well-trained movements and are critical for serialising and timing all other motor outputs. These regions phylogenetically pre-date the cortex and may use partially different mechanisms, although they are still tightly integrated with it (e.g. the cerebellum contributes vitally to cognition [56]).

  • These computational principles apply similarly across all of cortex, evidenced by the structural uniformity of not just sensory but also association and motor regions, and the ubiquity of STDP and homeostatic mechanisms. Differentiation of function occurs predominantly through structural connectivity, including the abundant subcortical connections. The role of the cortex is to predict, not just incoming perceptions but also its own upcoming intentions and actions. It does this by becoming a model of the world, not in an abstract sense but in an actual physical sense; through self-organising plasticity, cortical dynamics are tuned to replicate, mimic, and couple with the dynamics of the world. Complex neural dynamics mirror the complexity evident in the real world—brains are complex precisely because the world is complex [57] (“complex” is used here in the complex systems sense and does not just mean “complicated”). Indeed neural dynamics continuously and task-dependently couple both with the body and with sensory events [58]. Therefore, what we think of as cortical representations of perceptions, actions, and objects in the world should be more correctly understood as transient neurodynamics that are simply replicating the perceived world’s (and the body’s) transient dynamical states. This is a subtle but profound distinction. How to exploit this insight for the benefit of AI is covered in subsequent sections.
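The dopamine-as-prediction-error idea above can be made concrete with a minimal temporal-difference sketch. The three-state chain, learning rate, and discount factor are illustrative assumptions; the point is only that a TD-style error signal propagates reward prediction backwards from the reward to the cue:

```python
# Minimal TD(0) sketch: a dopamine-like error signal (delta) updates value
# estimates along a toy chain of states ending in reward. The states and
# constants are illustrative, not drawn from the cited works.

STATES = ["cue", "wait", "reward"]
values = {s: 0.0 for s in STATES}
ALPHA, GAMMA = 0.1, 0.9           # learning rate and discount factor

def reward(state):
    return 1.0 if state == "reward" else 0.0

for _ in range(200):              # repeated trials of the same sequence
    for s, s_next in zip(STATES, STATES[1:]):
        # TD error: observed (reward + discounted next value) - predicted
        delta = reward(s_next) + GAMMA * values[s_next] - values[s]
        values[s] += ALPHA * delta

# Prediction propagates backwards: the cue comes to predict future reward.
print(values["cue"], values["wait"])
```

After training, the early "cue" state carries a discounted prediction of the reward that follows, mirroring the shift of dopamine responses from reward delivery to reward-predicting stimuli.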

Despite their great potential for advancing both our understanding of neural computation and our artificial intelligence technologies, spiking networks also present significant challenges in practical use. Perhaps the most dire of these is “spike propagation failure”: SNNs constructed of predominantly excitatory connections tend to over-synchronise and enter a seizure-like dynamical state, destroying all the information they contain. It has been analytically shown that this is a fundamental problem with all SNNs that use thresholded neurons (which in practice is all of them) [59]. Fortunately, the problem can be solved by incorporating appropriate balancing inhibitory connections to maintain the excitation/inhibition (E/I) balance, along with several biologically plausible normalisation (homeostatic) mechanisms [59, 60]. Unfortunately, the problem is not widely recognised, and these solutions have only recently been proposed and are not yet well-known or in widespread use within the machine learning community. As a result, this problem may have had a significant detrimental impact on SNN research to date.
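The runaway failure mode, and the stabilising effect of balancing inhibition, can be illustrated with a toy feedforward chain. This is a deliberately simplified numerical sketch, not the analytical model of [59]; the population sizes, thresholds, background drive, and inhibitory gain are all assumptions chosen for illustration:

```python
import random

random.seed(7)

# Toy feedforward chain: each layer has N neurons with heterogeneous
# thresholds, each receiving K random inputs from the previous layer plus
# constant background drive E0. Without inhibition the active fraction
# runs away to the whole population (seizure-like synchrony); subtractive
# inhibition tracking population activity keeps it intermediate.

N, K, E0 = 200, 20, 8.0
thresholds = [random.uniform(5.0, 15.0) for _ in range(N)]

def next_layer(n_active, w_inh):
    p = n_active / N                  # fraction of presynaptic spikes
    inh = w_inh * p * K               # inhibition tracks excitation
    fired = 0
    for theta in thresholds:
        exc = sum(1.0 for _ in range(K) if random.random() < p)
        if exc - inh + E0 >= theta:
            fired += 1
    return fired

def run(w_inh, layers=20, start=40):
    n = start
    for _ in range(layers):
        n = next_layer(n, w_inh)
    return n

runaway = run(w_inh=0.0)    # no inhibition: nearly all N neurons fire
balanced = run(w_inh=0.8)   # balanced: activity settles mid-range
print(runaway, balanced)
```

With no inhibition the active fraction grows layer by layer until the whole population fires in lockstep; with inhibition proportional to population activity, the chain settles near a stable intermediate rate.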

Neural Assemblies, Dynamics, Cognition, and Creativity

To compute means to control the flow of information and to store, recall, organise, integrate, and transform information in pursuit of a defined outcome or ongoing effect. In the case of the brain, it also means to flexibly adapt to unforeseen or changing conditions in ways that no computers can currently achieve. The brain accomplishes this by flexibly and adaptively activating neural assemblies (groups of neurons) in combinatorial patterns that best represent the confluence of sensory input and current internal state. But what controls which assemblies should be active? Neural assemblies respond when they recognise (i.e. are keyed by) particular afferent spike patterns arriving either from the senses or from other parts of the brain. The vital insight is that, since the cortex is dynamically tuned to be a physical analog of the world, neurons and assemblies respond when they are required and without centralised control. This rather fortuitous outcome is the result of multiple plasticity mechanisms covered in the previous section (STP, RL, self-organisation, and homeostasis) that integrate to bias the innate dynamics towards activity patterns that both recapitulate the world and that are ultimately rewarded [61], while also extracting features that help predict the reward [16, 20, 21], and maintaining the overall activity within useful dynamical bounds [34].

Each active neural assembly outputs a transformation of its inputs, and in so doing, it is performing a computation on its inputs that meets either innate (self-organising) or external (reward-bearing) criteria. Once the particular input that activated an assembly disappears, or the brain state changes, input to the assembly no longer matches, and the assembly naturally shuts down until it next receives keying input. This mechanism has the effect of always finding a part of the brain to process any given input or brain state—when the key match is good, the brain responds quickly, driving lateral inhibition and pre-emptively shutting down other neurons and regions which might otherwise have responded. If the match is poor, the brain responds more slowly since a poor match needs longer to drive neurons to threshold.
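The match-quality/latency relationship described above can be sketched with a simple integrator: the closer the input pattern is to the weights an assembly "knows", the faster the assembly reaches threshold, while a non-matching input never fires it at all. The weights, input rates, and threshold are illustrative:

```python
# Sketch: an assembly keyed by an input pattern reaches threshold faster
# the better the input matches its weights; a poor match integrates
# slowly, and no overlap never fires. All constants are illustrative.

def time_to_threshold(weights, input_rates, theta=5.0, dt=0.1, t_max=100.0):
    """Integrate weighted input until the membrane crosses threshold."""
    v, t = 0.0, 0.0
    while t < t_max:
        v += dt * sum(w * r for w, r in zip(weights, input_rates))
        t += dt
        if v >= theta:
            return t
    return None   # never fires: no match at all

stored = [1.0, 1.0, 0.0, 0.0]        # the pattern this assembly "knows"
good = time_to_threshold(stored, [5.0, 5.0, 0.0, 0.0])    # close match
poor = time_to_threshold(stored, [2.0, 0.0, 5.0, 5.0])    # partial match
no_match = time_to_threshold(stored, [0.0, 0.0, 5.0, 5.0])  # no overlap
print(good, poor, no_match)
```

The slow response to a poor match is what gives lateral inhibition from well-matched assemblies time to pre-emptively shut the poorly matched ones down.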

STDP, RL, and homeostatic mechanisms cause the recruitment of more neurons to represent common inputs and well-trained tasks through the following mechanism: commonly-occurring inputs cause excessive firing of the associated assemblies, which then homeostatically raise their thresholds to reduce their firing rates. This gives other neurons that were previously inhibited (by lateral inhibition from those assemblies) a chance to respond instead, and when they do, STDP solidifies their new roles in representing the input. This recruitment process increases the fidelity and discriminability of representations of common inputs by increasing the number of neurons involved, and also increases processing speed due to closer key-matches with finer discriminability. Of course, the recruited neurons will likely already be involved in existing representations similar to the new one (otherwise they would not be responding to it at all); such overlap is central to the brain’s ability to generalise and is a recognised functional advantage of sparse parts-based coding [20]. Thanks to sparse coding, no conflict is introduced by recruiting existing neurons into new representations, since what matters is the overall dynamic pattern of activity, not the responses of single neurons (which can be involved in any number of representations). Thanks also to the combinatorial explosion of possible activity patterns across the brain, these potential representations are practically infinite in number [27].
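A minimal sketch of this recruitment mechanism, assuming a k-winner-take-all competition and integer "spike count" drives chosen purely for illustration: the neurons that initially win for a common input homeostatically raise their thresholds until previously inhibited neighbours start winning instead.

```python
# Sketch of recruitment: lateral-inhibition k-winner-take-all competition
# plus homeostatically rising thresholds lets previously inhibited
# neurons take over representation of a common input. The drives and
# thresholds are integer "spike counts", purely illustrative.

drives = [10, 9, 5, 4]      # how well each of 4 neurons matches one input
thresholds = [0, 0, 0, 0]
K = 2                        # winners allowed per presentation

winner_history = []
for _ in range(10):          # repeated presentations of the same input
    margin = [d - t for d, t in zip(drives, thresholds)]
    winners = sorted(range(4), key=lambda i: margin[i], reverse=True)[:K]
    winner_history.append(sorted(winners))
    for i in winners:
        thresholds[i] += 1   # homeostasis: winning raises the threshold

print(winner_history[0], winner_history[-1])   # initial vs final winners
```

The initially dominant neurons (0 and 1) win the first presentations; as their thresholds rise, neurons 2 and 3 are recruited into the representation.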

Neural connections are highly convergent, divergent, modular, hierarchical, and re-entrant, with large overlaps between modules and strong cross-connections between hierarchies at all levels. Such anatomical structure causes complex patterns of competition and coupling that interact through and across multiple hierarchical levels simultaneously, resulting in spatiotemporal activity patterns that are exquisitely intricate and interdependent. Active assembly boundaries are fluid, ranging in size from a handful of neurons up to large regions, and no single module is ever at the top, or in control, from the perspective of either static connectivity or dynamic activity. Assemblies couple in novel patterns in response to novel inputs, and indeed can exploit the quasi-chaotic nature of the transitions between states to form novel patterns at any time, in a manner that may be related to binding of representations, fluid intelligence, and creativity [62,63,64]. Neural computation therefore manifests as a continuous superposition of transient dynamic states. These states are, however, far from random; constrained by neural architecture [65] and shaped by the forces of plasticity, they are finely honed to be task- and context-specific. These patterns underlie the combinatorial computational power of the brain as well as its extreme flexibility.

The brain does not follow a programme. Brain regions do not encode packets of information which are then transmitted to receiving regions for decoding and processing, and brains do not work “despite the noise”. Due to efficient coding and stochastic sampling, what we are tempted to think of as noise is in fact the computation in its entirety [25]. Engineering-style reductionist simplifications, such as describing information transfer through propagating pulse packets, or models that give the impression of neuroinspiration but that in reality use reductionist constructs such as latched registers, dedicated data buses and serially executed programmes [66], therefore yield few insights into real neural function. Says Friston [5]: “By studying the dynamics and self-organisation of functional networks, we may gain insight into the true nature of the brain as the embodiment of the mind.” The brain is the ultimate bootstrapped physical dynamical system. A neuron simply sits and listens [67]. When it hears an incoming pattern of spikes that matches a pattern it knows, it responds with a spike of its own. That’s it! When this process is repeated recursively tens to trillions of times, what emerges is a brain controlling a body in the world or doing something else equally clever. Our challenge is to understand how this occurs. My contention is that the above principles are sufficient to meet this challenge: to better understand the brain and to construct better brain-inspired AI.
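The "sit and listen" neuron can be written down in a few lines as a leaky integrate-and-fire unit: it spikes only when the incoming spikes arrive closely enough together to out-race the membrane leak. The time constant, threshold, and weights below are illustrative assumptions:

```python
import math

# The "sit and listen" neuron as a minimal leaky integrate-and-fire unit:
# it spikes only when incoming spikes are coincident enough to out-race
# the membrane leak. All constants are illustrative.

TAU, THETA, DT = 10.0, 1.0, 1.0   # membrane time constant (ms), threshold

def respond(weights, spike_times, t_end=50):
    """Return the times at which the neuron spikes, given input spikes."""
    v, out = 0.0, []
    for t in range(t_end):
        v *= math.exp(-DT / TAU)            # membrane leak
        for syn, times in enumerate(spike_times):
            if t in times:
                v += weights[syn]           # integrate an input spike
        if v >= THETA:
            out.append(t)
            v = 0.0                         # reset after spiking
    return out

w = [0.4, 0.4, 0.4]
synchronous = [{10}, {10}, {10}]   # the pattern it "knows": spikes together
dispersed = [{10}, {20}, {30}]     # the same spikes spread out in time
print(respond(w, synchronous))     # fires
print(respond(w, dispersed))       # leak wins: stays silent
```

The same three input spikes either elicit an output spike or nothing at all, depending solely on their relative timing, which is the essence of spike-time coding.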

Brains to AI

The differences between brains and ANNs lead to significant concrete differences in capabilities:

  • AI is difficult to train and typically requires huge amounts of data. Due to its ability to self-discover parts-based representations, and to its modular hierarchical structure that allows it to combine dynamic assemblies into novel patterns, the brain implements transfer learning by chunking, often requiring just a handful of training samples.

  • AI systems have enormous energy requirements for training and operation. Standard computer CPU architectures and even GPGPUs (general-purpose graphics processing units) cannot simulate neural networks, particularly SNNs, efficiently. The latest generation of neuromorphic hardware co-locates memory with the processing units, as required for efficient implementation, and supports on-chip STDP and homeostasis to incorporate the most recent advances in spiking neural algorithms [68].

  • AI systems need to be pre-trained, and any new information typically requires complete re-training from scratch. The brain learns online continuously using transfer learning and specialised structures for single-shot memories coupled with dynamical processes (activity patterns) that are activated during down-time (sleep) for integrating that knowledge into long-term networks.

  • AI is terribly brittle and can be easily fooled by adversarial input that needs to be shifted only slightly outside the training distribution. Brains generalise exceptionally well due to modular self-organisation, sparse coding, predictive feedback, and transient k-winner-take-all combinatorial dynamics.

  • AI systems can perform only the task for which they are trained. Due to its ability to dynamically reconfigure through oscillations and internal actions, the brain can perform multiple tasks and switch between them as required and can rapidly learn new tasks by transfer learning through chunking.

Furthermore, brains generate explanatory causal models using STDP, predictive feedback, and working memory. What we currently call AI is fundamentally still big data and correlation analysis, predominantly used to generate classifications and predictions. There are arguably exceptions to this rule [69, 70]—for example, generative networks, transformer (attentional) networks, and networks that attempt to simulate the working and episodic memory systems. Interestingly these exceptions tend to draw inspiration directly from the brain to improve on the capabilities of DL. While improvements are often achieved, which is testimony to the astuteness of brain inspiration, the insights are applied in piecemeal fashion, and many of the compelling advantages of neural processing remain ignored and unharnessed.

While recurrent ANNs are theoretically Turing complete, we know from experience with DL that choice of architecture and how information is represented make a difference, and that just because a task can theoretically be performed does not mean that it can be done efficiently, or even that it can be learned at all. SNNs are more powerful than ANNs of equal size and are dynamically and architecturally ideal for representing spatiotemporal patterns and for building causal models of the world. It is reasonable to expect that there will be classes of problem, relevant for our usage of AI, for which ANNs will fail in practice but that can be learned and performed efficiently by SNNs.

The true computational power of the brain lies in the simultaneous integration of all of the principles of neural computation. To the author’s knowledge, such an integration has never been attempted at scale. I am advocating neither a biophysically detailed bottom-up approach nor a top-down cognitive model. This is “sideways-in”, where relevant biophysical principles are abstracted and combined in such a way as to bring about the emergence of function as occurs in the brain. The primary modelling level (the level that should be modelled explicitly) is the level of neurons, synapses, and spikes. At the level below are ion channels, neurotransmitters, synaptic currents, and membrane dynamics—these are abstracted and modelled as mathematical functions rather than explicitly. At the level above are populations of neurons, oscillations, and network dynamics—these emerge from interactions of the lower-level components, giving rise to the complex functional properties of the brain.

Studies have already shown how deep networks implemented with spiking neurons outperform standard DL in several respects. On simple problems, they require orders of magnitude fewer training samples, they can use unlabelled data for most of the training, and they generate sparse, efficient, parts-based representations [20, 68, 71]. These early results are significant, but they reveal only a small subset of the full capabilities of brains and SNNs. DL requires large quantities of training data because the full range of input space typically needs to be explicitly covered during training, unlike brains, which are able to dynamically generate novel information [4] and actively test world models and hypotheses through “internal actions”. Energy requirements for training state-of-the-art deep networks are already measured in megawatt-hours and have increased by nearly an order of magnitude every year since 2012, and the curse of dimensionality is causing an exponential increase as ever-larger problems are tackled. Even with the accelerated DL hardware currently being developed, this is clearly an unsustainable trajectory. A paradigm shift, as offered by SNNs, is being called for.

While parts-based decompositions and generative models can also be implemented using DL, these functions are parsimoniously implemented in SNNs by STDP with very low power requirements and using relatively small unlabelled datasets. The fundamental nature of spikes as momentary events leads to powerful temporal representations, rapid processing, and intrinsic dynamics that allow for stochastic sampling and dynamic reconfiguration of neural circuits to match ongoing computational needs. DL offers few of these capabilities. The clear implication is that brains and SNNs are ideally suited for embodied interactions with the real world, and the full advantages of SNNs will become evident when these neural computational principles are integrated and applied to real-world problems. Notably, these are exactly the kinds of problems for which DL is having trouble scaling.

The computational principles—sparse spike-time coding, self-organisation, short-term plasticity, reward learning, homeostasis, feedback predictive circuits, conduction delays, oscillations, innate dynamics, stochastic sampling, multi-scale inhibition, k-winner-take-all, and embodied coupling—have each been separately investigated as research topics, some quite extensively. However, a rich understanding of neural function can only be obtained by understanding how these principles synergistically combine [18, 25, 61]. Integrating some of these principles clearly presents significant challenges, but others should be relatively straightforward. Oscillations and spike-time coding have rarely, if ever, been combined with RL to flexibly route information through a neural network, for example. Further combining such a network with self-organising plasticity could then create a network that can generalise and respond flexibly to new inputs; feedback could allow for attention to unexpected inputs; and so on. While some of these principles could be independently implemented without SNNs, the space of potential implementations collapses dramatically when many principles are integrated simultaneously. I contend that SNNs may be one of the few viable implementation substrates for efficiently achieving everything of which the brain is capable, since the complex dynamic activity patterns that actually perform the computations are so closely tied to SNN structure.
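As one hedged illustration of combining oscillations with spike-time coding to route information (a speculative sketch, not drawn from the cited works): if a source's spikes are phase-locked to a rhythm, a target oscillating in phase with the source receives them all, while an anti-phase target receives none. The frequency, phases, and gating threshold below are illustrative assumptions:

```python
import math

# Speculative sketch of oscillation-gated routing: targets accept input
# only near the peak of their own oscillation, so a phase-locked source
# selectively drives the in-phase target. All values are illustrative.

FREQ = 40.0                                   # gamma-band rhythm (Hz)

def receptive(t_ms, phase):
    """Target accepts input only near the peak of its oscillation."""
    return math.cos(2 * math.pi * FREQ * t_ms / 1000.0 + phase) > 0.7

def delivered(spike_times, phase):
    return [t for t in spike_times if receptive(t, phase)]

source_spikes = [0, 25, 50, 75]               # locked to the 25 ms period
in_phase = delivered(source_spikes, phase=0.0)
anti_phase = delivered(source_spikes, phase=math.pi)
print(len(in_phase), len(anti_phase))         # all spikes vs none
```

Shifting the relative phase of the two target oscillations would re-route the same spike train to the other population without changing a single synaptic weight, which is the attraction of dynamic circuit reconfiguration.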

There is perhaps a feeling in the ANN community that the dynamics of spiking networks are difficult to conceptualise and control, and that the addition of a range of neural mechanisms can only further complicate the problem. This view has possibly even stymied SNN research. Contrary to this view, it is quite possible, even likely, that the synergistic combination of principles will lead to intuitive dynamics and a deeper understanding of both the underlying mechanisms and the emergent computational properties. Most of the principles discussed here are mechanistic and are understood from both procedural and functional perspectives. They are evidence-based, concrete, and realistically implementable in functioning neural circuit models and AI prototypes. Next-generation AI built on these principles will inherit the many advantages of directly brain-inspired neural processing. If these SNN mechanisms are given attention and resources similar to those devoted to ANNs over the last 10 years, it seems reasonable to expect that revolutionary computational systems can be realised, or at the very least that extensive progress in this direction can be made.