1 Introduction

More than 20 years ago, Professor Walter Freeman, a neurobiologist at the University of California, Berkeley, put forward the concept of neurodynamics (Freeman 2000). Studying cognition and the activities of the nervous system with the theory and methods of dynamics has since become a new research field (Pouget and Latham 2002; Basar 1998; Çelik et al. 2021; Brydges et al. 2020; Yu et al. 2020; Wouapi et al. 2021; Wang et al. 2019a; Iribarren and Moro 2009, 2008; Memmesheimer and Timme 2006; Navarro-López et al. 2021; Buxton 2012; Churchland et al. 2002; Hipp et al. 2011a; Ermentrout et al. 2007; Lakatos et al. 2008; Rabinovich and Huerta 2006; Sandrini et al. 2015; Hopfield 2010; Hu et al. 2021), and many scientific research achievements have been reported in this area. Neurodynamics is more commonly known as computational neuroscience in European and American countries, while it is known as neuromechanics in Japan (Takeda 1999). In particular, the Dynamic Brain Group, originally organized by Japanese scientists, developed various kinds of collaborative research, such as the organization of the Dynamic Brain Forum (DBF), which gathered researchers from around the world who have been engaged in neuroscience from the aspect of the “dynamic brain” (Tsuda et al. 1987; Tsuda 1991, 1992). However, in the field of experimental neuroscience, scientists prefer the term neuroinformatics when describing the basic laws of neural information processing qualitatively or quantitatively. Whatever name is used, the fact remains that neuroscientists and artificial intelligence scientists have come to realize that the development of cognitive neuroscience depends not only on advances in experimental techniques and rigorous experimental data, but also on understanding, with quantitative methods and from a theoretical height, the principles of brain network signal processing and transmission, and on insight into the internal mechanisms of neural coding and its distribution modes, so as to discover the laws and nature behind the vast experimental data. To better understand and master the operation of the brain and to deal with various brain diseases, dynamic theory is also used to accurately predict potential patients with degenerative brain diseases at an early stage (Navarro-López et al. 2021; Ebrahimzadeh et al. 2021; Yang et al. 2021a; Jiang et al. 2020; Sharma and Acharya 2021).

For a long time, cognitive neuroscience, which takes experiments as its basic research method, has focused on the accumulation of experimental phenomena, experimental data and experimental techniques while ignoring the importance of theory. As a result, brain science, despite hundreds of years of history, still lacks a systematic and complete theoretical system of its own, and this abnormal situation has left it an immature discipline to this day. Although theoretical neuroscience was born 20 years ago, no single theory has been widely accepted by the academic community. Although theoretical neuroscientists have produced a series of excellent research achievements (Wouapi et al. 2021; Wang et al. 2006, 2021a; Clancy et al. 2017; Videbech 2010; Zhang et al. 2019, 2020; Yuan et al. 2021, 2022; Yao and Wang 2019; Maltba et al. 2022; Zhou et al. 2020; Li et al. 2020; Kim and Lim 2020; Yang et al. 2021b), it is still difficult for them to cooperate widely and effectively with experimental neuroscientists so that the two communities promote each other and develop in an integrated way. One effort toward such integration has been the publication of comprehensive neuroscience textbooks such as “Neuroscience in the 21st Century”, 3rd Edition (Pfaff and Volkow 2022); among its chapters, one is devoted to the dynamics in neural systems: Ichiro Tsuda, “Dynamics in neural systems: a dynamical systems viewpoint”. From a historical point of view, dynamic theories of neural information processing have been proposed in several respects. Minoru Tsukada first treated neural information processing as dynamic Markov channels from the standpoint of the dynamic brain (Tsukada et al. 1975). Ichiro Tsuda proposed several dynamic aspects of the brain as a typical complex system (Kaneko and Tsuda 2001). Tsuda proposed a hermeneutics of the brain, from the viewpoint that the brain interprets the external world to form internal images of external signals via action and sensation (Tsuda 1984). Tsuda also first proposed a dynamic associative memory model in nonequilibrium neural networks (Tsuda et al. 1987), in contrast to typical static models of memory capacity such as the Hopfield model. Kazuyuki Aihara followed Tsuda’s model and confirmed the presence of dynamic associative memory using his chaotic neural networks (Adachi and Aihara 1997; Aihara et al. 1990). In this line of study, Tsuda found complex dynamic transitions in his nonequilibrium neural networks, which he proposed to call “chaotic itinerancy” (Tsuda 1991, 1992, 2001, 2013, 2015; Nara and Davis 1992). In the early 1990s, the Japanese Dynamic Brain Group (J-DBG) was organized by Minoru Tsukada, Hiroshi Fujii, Shigetoshi Nara, Ichiro Tsuda, and Kazuyuki Aihara, and J-DBG developed various kinds of collaborative research, such as the organization of the DBF, where researchers from around the world who have been engaged in neuroscience from the aspect of the “dynamic brain” gathered together. This forum activity led to the later organization of the International Conference on Cognitive Neurodynamics (ICCN) and the publication of Cognitive Neurodynamics by Springer. M. Tsukada invited Masamichi Sakagami of Tamagawa University to collaborate on the dynamic mechanism of thought; an important question was whether or not the thought process can be discriminated from associative memories. M. Sakagami succeeded in building an experimental system that can discriminate these two cognitive phases. X. Pan enthusiastically performed such experiments and succeeded in finding specific neurons, and they published important papers (Pan et al. 2008, 2014). Other important works of the DBG concerned Cantor coding (Tsuda and Kuroda 2001; Fukushima et al. 2007; Kuroda et al. 2009; Yamaguti et al. 2011; Ryeu et al. 2001), gap junction-coupled neural network models (Fujii and Tsuda 2004; Tsuda et al. 2004; Tadokoro et al. 2011), and complex visual hallucinations (Collerton et al. 2016; Tsukada et al. 2015).

Nevertheless, there still seems to be an invisible and unbridgeable chasm between the achievements of neuroscience at different levels. This prevents research findings from being mutually used, mutually influential, and widely diffused, and blocks major breakthroughs in cognitive neuroscience. As a result, the research field of cognitive neuroscience has been unable to escape the dilemma of “blind men touching the elephant”. Progress has been very slow, and in some cases absent, on consciousness, thinking, the mechanism generating creativity, emotion, the nature of intelligence, prediction, the mechanism generating visual perception, memory storage and retrieval, global brain function, and many other topics. Meanwhile, a growing number of scientists in other fields are intrigued by the multitude of unanswered scientific questions in brain research and by their complexity.

The basic ideas of neurodynamics have increasingly infiltrated and become embodied in many fields, such as neuroscience, artificial intelligence, brain-like computing, bioinformatics, medical diagnosis, image processing, control science, complex networks and engineering applications (Bullmore and Sporns 2009; Ullman 2019; Roy et al. 2019; Zeng et al. 2019; Wang and Zhu 2016; Deco et al. 2015; Kanwisher 2010a). Brain science is a large-scale science that not only involves the three-dimensional (3D) intersection of many disciplines, but also poses many unprecedented challenges to many mature disciplines. For example, does the weak magnetic field inside our brains contribute to the transmission of nerve signals? If so, where is the experimental evidence? If there is no contribution, how can we explain the negative power component of neurons (Wang et al. 2015a; Wang and Wang 2018a), and how can we explain the equivalence between the Wang–Zhang (W–Z) neuron model and the Hodgkin–Huxley (H–H) model (Wang and Wang 2018b)? A further question is whether neurons, or even brain regions without synaptic connections and nerve fiber connections between them, can still transmit neural signals through electromagnetic field coupling to achieve communication between neurons and between brain regions (Yang et al. 2022; Ma and Tang 2017).

That the brain is an unstable dynamic system is not controversial in the academic world. A large number of experimental data and results reveal that our brain has specific functional characteristics at every level, and that its activities are highly nonlinear and complex. This highly nonlinear, complex brain dynamics and its various functional expressions are related not only to genes and functional genomics, biology and biochemistry, but also to solid mechanics (McIntyre et al. 2001), fluid mechanics (Moore and Cao 2008), and dynamics and control (Lu et al. 2008a, b). Our research shows that some experimental phenomena in cognitive neuroscience can be reproduced and repeated by mechanical models (Wang et al. 2015a; Li et al. 2022a), and that some experimental data that cannot be explained by neuroscience can be scientifically explained by our mechanical models (Peng and Wang 2021). We can also use mechanical models to predict new experimental phenomena and new neural mechanisms not yet found in neuroscience (Wang and Wang 2018b). These results fully demonstrate the power of mechanical science in the field of brain science research.

A comprehensive review article on “Neurodynamics and Mechanics” profoundly elaborated the internal connection between neurodynamics and mechanics (Lu 2020), explaining the transformation from classical mechanics to generalized mechanics and the one-to-one correspondence between generalized mechanics and neurodynamics. The review pointed out that since the twentieth century, dynamical-system theory and methods have been further developed and successfully applied to various mechanical systems, and that nonlinear differential equations describing general systems have universal theoretical significance and important application value. This indicates that modern mechanics research has broken through the traditional category of the classical mechanical system and opened up a new category of “generalized” mechanical systems. The research object of mechanics has expanded from “particle or particle system” to the general “dynamic system”; the concept of “force” has expanded from “mechanical force” to general “interaction”; and the concept of “motion” has expanded from “configuration change” in geometric space to “state evolution” in state space. These ideas are important and instructive for the modeling and calculation of mechanical science in cognitive neuroscience and for the construction of a theoretical system of brain-like intelligence.

At present, there is no widely accepted theory in the field of neuroscience; thus, theoretical neuroscience and experimental neuroscience cannot effectively integrate and promote each other. This has seriously hindered the development of the various fields of cognitive neuroscience, so that we cannot effectively interpret experimental data or reveal the nature and regularities behind the data that form the basis of scientific predictions and explanations. In order to make a great breakthrough in neuroscience and establish a systematic and complete theoretical system of cognitive neuroscience, it is necessary to carry out research on brain theory. The core scientific question in the study of brain theory is whether human intelligent behaviors depend on the activity of a single neuron or a few neurons, or whether they are realized by interactions from the molecular to the systemic level. The answer to this question is no longer in dispute among neuroscientists; however, the academic community has not yet come up with an effective way to address this core scientific challenge. To this end, we proposed a definition of large-scale neuroscience. The cornerstone of this definition is that a large-scale neuroscience model is built on the neural energy model (Wang and Zhu 2016; Wang et al. 2015a, 2008; Wang and Wang 2018b), and the neural energy model arises from the theory and methods of analytical dynamics (Wang and Pan 2021). The present study aimed to quantitatively obtain the global information of neural activity by finding the relationship between neural energy and the membrane potential, field potential and firing rate of the network. As the global information of neural activity can be transformed into energy for research and analysis, neural energy coding constitutes the cornerstone of large-scale neuroscience models. The main contents include the following items: (a) a new research method that can theoretically unify reductionism and holism. The new method can reproduce not only electrophysiological recordings, but also the global information of functional neural activity obtained by functional magnetic resonance imaging (fMRI) (Yuan et al. 2021; Wang et al. 2015a; Peng and Wang 2021; Cheng et al. 2020); (b) a global functional model of the brain established computationally, which can be used to construct, analyze, and describe experimental models of neuroscience at various levels (Wang and Zhu 2016; Wang et al. 2020), so that the computational results at different levels are no longer mutually inapplicable, contradictory and irrelevant; (c) if a global functional model of the brain cannot explain the function and energy consumption of the default mode network (DMN) and the resting-state network, or how the brain transitions from the default mode to a cognitive network with the corresponding energy transformation under task-induced conditions, then such a model is not a global neural model (Yuan et al. 2021). The adjustable parameters of a global functional model of the brain are necessarily limited and simple; (d) this type of model can be used to analyze the nature and regularity hidden behind experimental data. As our neural energy model meets all of the above-mentioned requirements of a global functional model of the brain, and as it has already produced a series of original and innovative published results (Wang et al. 2008, 2015a, 2017a, 2018a, b, 2019b, 2021a; Yuan et al. 2021; Wang and Zhu 2016; Wang and Wang 2018a, b; Li et al. 2022a; Peng and Wang 2021; Lu 2020; Cheng et al. 2020), it has laid a firm foundation for the creation of a new brain theory that can stand the test of experimental data. The main research directions of brain theory are reflected in the following four aspects.

1.1 Brain theory: exploration of working mechanism of the brain

One of the first scientific questions that needs to be answered in discussing how the brain works is why neuroscience cannot explain the mechanism of about 95% of the brain's energy consumption (Fox and Raichle 2007a; Balasubramanian 2021; Raichle 2010). Although the brain accounts for only about 2% of body weight, it consumes about 20% of the body's energy, and the neural energy expenditure caused by task stimuli typically accounts for only 5% of resting brain energy consumption; yet much of our understanding of the brain comes from studying that 5% of brain activity (Raichle and Mintun 2006). From these data, it is clear that the energy loss in the brain is almost independent of task stimuli. The following questions should be answered:

(a) What is the nature of the persistent intrinsic activity that causes the great depletion of brain energy?

(b) What is the relationship between the huge depletion of brain energy and cognitive function?

(c) Is it possible that the current mainstream view in cognitive neuroscience misleads scholars into ignoring the possibility that neuroscience experiments and cognitive psychology experiments reveal only some parts of brain activity?
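
Before turning to these questions, the scale of the mismatch can be made explicit from the figures quoted above (a back-of-envelope restatement of the cited data, not a new result):

$$\frac{\text{brain share of body energy}}{\text{brain share of body mass}} \approx \frac{20\% }{2\% } = 10, \qquad \frac{\text{task-evoked energy}}{\text{resting brain energy}} \lesssim 5\% .$$

Per unit mass, the brain thus consumes roughly ten times the body average, and at least about 95% of this budget is devoted to intrinsic activity rather than to task-evoked responses.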

An important question is this: if the brain is reflexive in nature, why is the energy consumption of the brain under stimulated conditions almost the same as under resting conditions? If the brain were reflexive, task-induced brain energy consumption should increase significantly; why is the increase less than 5%? In this research area, we have published a series of results (Wang et al. 2021a, 2018a; Yuan et al. 2021). Using large-scale neural energy theory to reveal the neural mechanism of the hemodynamic phenomena of the brain, we showed that neural energy is an important marker of nervous-system activity, and that energy features contain information about external stimuli and neural responses (Yuan et al. 2021; Peng and Wang 2021). Using neural energy theory to obtain the biophysical mechanism of the mutual coupling and antagonism between the brain's DMN and the working memory network, we demonstrated that neural energy can effectively fuse DMNs and cognitive networks to interpret and analyze the spatial information and encoded content contained in complex neural activities (Yuan et al. 2021). The computational simulation results are in complete agreement with the experimental data (Piccoli et al. 2015; Compte 2000; Wei et al. 2012; Hsieh and Ranganath 2014; Karlsgodt et al. 2005). The neural energy theory can also efficiently express the neural coding of the cognitive system in a 3D space (Hsieh and Ranganath 2014); the nervous system can maximize information coding under energy constraints (Wang et al. 2018a, b, 2019b), and neural energy coding can maximize the efficiency of intellectual exploration (Wang et al. 2017a). Our study also demonstrated a stimulus-related increase in energy expenditure of less than 5% compared with spontaneous activity, which is consistent with the brain imaging results reported by Raichle et al. (Wang et al. 2021a), and spontaneous activity consumes most of the energy compared with the task state (Fox and Raichle 2007a; Balasubramanian 2021; Raichle 2010; Raichle and Mintun 2006). Therefore, neural energy can be used to express the neural activity of the cerebral cortical network (Wang and Zhu 2016).

Another view, supported by experimental evidence, is that the neural activity of the brain exhibits quasi-critical characteristics (Williams-García et al. 2014): the biological cerebral cortex generally operates near a quasi-critical point. An article published in Physical Review Letters revealed that external input forces the neural network of the brain away from a tipping point, so that it operates in a non-equilibrium state; under different conditions, the brain is in a “quasi-critical state” that satisfies scaling relationships (Fosque et al. 2021). In particular, several recent articles have concentrated on how the brain works and have obtained experimental data for verification. They demonstrated that diverse cognitive processes place different demands on locally segregated and globally integrated brain activity. Emphasizing the multilevel, hierarchical modular structure of the brain's functional connectivity to derive eigenmode-based measures, Wang et al. (2021b) showed that in healthy adults (aged 22–36 years), the healthy brain is characterized by a balance between functional segregation and integration. Crucially, stronger integration is associated with better cognitive ability, stronger segregation fosters crystallized intelligence and processing speed, and an individual's tendency toward balance supports better memory. Thus, the segregation–integration balance empowers the brain to support diverse cognitive abilities. This association between balance and cognitive abilities is not only consistent with the recently proposed Network Neuroscience Theory (NNT) of human intelligence (Barbey 2018a), but also adds content to the NNT. In fact, the balance between segregation and integration requires diversity from weak to strong functional connectivity in dynamic patterns. Using eigenmode analysis, Wang et al. (Wang et al. xxxx) also found that diverse functional interactions are generated by hierarchically activating and recruiting structural modes, which are inherent to the hierarchical modular organization of the structural connectome. The critical state can best explore the hierarchical modular organization, optimize the combination of intrinsic structural modes, and maximize functional diversity.
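
To make the eigenmode-based balance idea tangible, the following minimal sketch computes an illustrative integration/segregation index from a toy functional connectivity matrix; the specific index is our own heuristic for illustration, not the measure derived by Wang et al. (2021b).

```python
# A minimal sketch (not the authors' method): quantify a segregation-
# integration balance from the eigenmodes of a functional connectivity (FC)
# matrix. The toy FC matrix and the balance index are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy FC matrix: two modules of 10 nodes with stronger within-module coupling.
n = 20
fc = rng.uniform(0.0, 0.2, size=(n, n))
fc[:10, :10] += 0.5
fc[10:, 10:] += 0.5
fc = (fc + fc.T) / 2
np.fill_diagonal(fc, 0.0)

# Eigenmode decomposition of the symmetric FC matrix.
eigvals, eigvecs = np.linalg.eigh(fc)
weights = np.abs(eigvals) / np.abs(eigvals).sum()

# Heuristic: the dominant (global) mode indexes integration; the weight
# spread over the remaining (local) modes indexes segregation.
integration = weights[-1]
segregation = 1.0 - integration
balance = 1.0 - abs(integration - segregation)  # 1 = perfectly balanced

print(f"integration={integration:.3f} segregation={segregation:.3f} "
      f"balance={balance:.3f}")
```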

We can conclude that, whether for the hypothesis that the brain operates quasi-critically or for the hypothesis that the brain's cognitive ability conforms to the principle of segregation–integration balance, experimental evidence is still lacking, and further analysis from the perspective of energy is needed: we must examine whether the energy in quasi-critical states, and the energy of state-space switching, is consistent with the known features of complex, adaptive brain network dynamics in the presence of quasi-critical states. As for the segregation–integration balance principle of the brain, it is necessary to quantitatively describe, from the perspective of neural energy, how the DMN transitions to cognitive neural networks. The establishment of a global neural model of the brain must conform to two basic principles supported by experimental data: (1) cost-effectiveness, i.e., the activity of neural networks in the resting state and in cognitive activity conforms to the principle of energy minimization; (2) high efficiency, i.e., the transmission of neural network signals in the cerebral cortex conforms to the principle of maximizing energy utilization (Laughlin and Sejnowski 2003a; Zheng et al. 2022).

1.2 Modeling of cerebral neural network and dynamic analysis of cognitive function

In a recent review on the relationship between dendrites and cerebral function, Panayiota Poirazi and Athanasia Papoutsi presented modeling methods at three levels of abstraction, from the single neuron to the microcircuit and then to the large-scale network model. Their study systematically summarizes the important role of dendrites in computational modeling and, by enumerating cases of successful complementarity and interaction between modeling and experimental neuroscience, expounds the great contribution of neurodynamics theory and computational neuroscience to important progress in neuroscience (Poirazi and Papoutsi 2020). In particular, a recent review of the structure, function and control of cerebral networks published in Nature Reviews Physics comprehensively explains the possible complex operational mechanisms of the brain from the perspective of statistical physics, and analyzes the processes of cognition, creativity and consciousness from the perspective of complex network dynamics; it also summarizes, from the perspective of computational network biodynamics, the changes of functional networks during the progression and improvement of brain diseases (Lynn and Bassett 2019).

Our findings showed that large-scale neuroscience models can analyze and explain not only the local neural activities of the brain, but also its global neural activities (Yuan et al. 2021; Peng and Wang 2021). This large-scale model of brain function is also a robust method for solving the conversion relationship between electroencephalography (EEG) and electrocorticography (ECoG) (Hipp et al. 2011b). It can be used to describe the interaction between various cerebral regions (Yuan et al. 2021; Peng and Wang 2021); to explain, through insight into the nature and laws behind the experimental data, the dependence between blood oxygen signals and states of consciousness (Raichle et al. 2018; Stender and Mortensen 2016); and to address the essence of intelligence (Barbey 2018b; Kruegera et al. 2009), the source of creativity (Kanwisher 2010b), the laws of encoding and decoding of the perceptual nervous system (Stelnmetz et al. 2019; Esterman et al. 2009), the neurophysiological mechanism of the brain's hemodynamic phenomena, and the description and content of brain waves (Cohen 2017). Hence, neural energy models are the only option for large-scale neuroscience models until more efficient methods are found (Wang and Zhu 2016).

1.3 Research of dynamic coding based on classification

1.3.1 Receptor coding

The essence of cognition is information processing, and the essence of information processing is expressed through neural coding. Therefore, the selection of a neural coding mode plays an extremely important role in determining cerebral functions. To date, research on neural coding has mainly concentrated on the firing of membrane potentials to measure and characterize various stimulus properties, including light intensity, sound intensity, temperature, pressure, and motion. As the neural activity and scope of application revealed by any single neural coding pattern are very limited, we need to explore coding characteristics at different levels, which is of great importance for revealing the cognitive activity corresponding to each type of coding pattern. In particular, combining coding patterns at different levels to explore the cognitive properties of different functions greatly increases the complexity of neural coding research and makes it more difficult to explore the working mechanism of the brain (Wang and Zhu 2016).

In the field of neuroinformatics, several neural coding models have been proposed (Breakspear 2017). They can mainly be summarized as receptor coding, including the spike count code, spike timing code, and temporal correlation code, corresponding to different assumptions about the unit of information (Johnson and Ray 2004; Nirenberg and Latham 2003; Victor 1999). Receptor encoding is the premise of perceptual behavior; thus, in realizing vision, hearing, and smell, the dynamic encoding patterns of photoreceptors, auditory receptors, and olfactory receptors are neurophysiological responses to various kinds of stimulus information. The characteristics of these responses strongly depend on the neural dynamics of each functional circuit. It has been found that photoreceptors and auditory receptors primarily encode the firing rate and firing time of spikes on neurons (Victor 1999; Wang and Wang 2018c, 2020; Jacobs et al. 2009; Malnic et al. 1999), while olfactory receptors encode combinations of stimulus-sensitive neurons (Miyamichi and Luo 2009; Xu et al. 2022a). In the past decade, the firing rate code has become the standard for describing the properties of various types of perceptual and cortical neurons. Humans detect the surfaces of objects through fine movements of their fingers and thereby recognize surface textures; the simplest such action is a fingertip rubbing across a surface. During tactile detection, material properties are converted into neural signals by the somatosensory system. Addressing the tactile sensing and coding problems involved in softness, Wang et al. simulated the touch evaluation process and compared it with the psychophysical response to the softness of fabric materials. The study found that the average firing rate of action potentials evoked by all tactile receptors in the contact area between the fingertip and the object reflects the bending stiffness of fabric-like flexible materials, and that there is a linear relationship between average firing rate and softness (Yao and Wang 2019; Hu and Wang 2013; Hu et al. 2012). Therefore, the encoding form in receptors is closely associated with stimulus characteristics. These findings may enable us to further study the physiological mechanisms on which receptors rely to process stimulus information.
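
The rate-coding result just described can be mimicked in a few lines. In the sketch below, spike generation is a hypothetical Poisson stand-in for the receptor model, and the linear fit plays the role of the reported rate–softness relationship; it is illustrative only, not the published tactile model.

```python
# A minimal sketch: relate the mean firing rate of receptor spike trains to
# a softness (bending-stiffness) parameter and fit a linear relationship,
# in the spirit of Yao and Wang (2019). All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)

def mean_firing_rate(spike_times, duration):
    """Average firing rate (Hz) of one trial: spike count / duration."""
    return len(spike_times) / duration

duration = 2.0                        # seconds of fingertip rubbing per trial
stiffness = np.linspace(0.1, 1.0, 8)  # arbitrary bending-stiffness units

rates = []
for k in stiffness:
    # Assumed encoding: firing intensity grows with stiffness (illustrative).
    lam = 20.0 + 60.0 * k             # Poisson rate in Hz
    n_spikes = rng.poisson(lam * duration)
    spikes = np.sort(rng.uniform(0, duration, n_spikes))
    rates.append(mean_firing_rate(spikes, duration))

# Linear fit: rate ~ a * stiffness + b
a, b = np.polyfit(stiffness, rates, 1)
print(f"fit: rate = {a:.1f} * stiffness + {b:.1f} Hz")
```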

1.3.2 Coding of navigational information

In the coding of navigational information, a recent article on how odor cues are recognized as location information, based on recordings of hippocampal CA1 neuron activity, is of great significance. The authors found that using smell as a landmark, with route integration and odor landmarks iteratively interacting in turn, can form a long-distance cognitive spatial map. The location information of place cells represented by odor cues greatly improves the spatial cognition and navigation abilities of rats (Fischler-Ruiz et al. 2021).

An important premise in the coding of navigational information is how the nervous system ranks various kinds of information so as to encode the most important information first, thereby seeking advantage, avoiding harm, and navigating with the highest efficiency. Researchers found that the paraventricular nucleus of the thalamus (PVT) plays an important role in tracking the salience of external stimuli. The brain uses this information to learn how to evaluate external stimuli and to ignore or avoid certain stimuli. The results showed that the role of the PVT in selecting external stimuli is determined not only by the physical characteristics of the stimulus itself, but also by the internal physiological state of the animal and the external environment (Zhu et al. 2018a). This provides a very important physiological basis for the coding of navigational information.

It is essential to emphasize that we use the concept of neural energy to construct a biologically grounded computational model for the exploration of intelligence. The theory of neural energy coding is used to solve the path-search problem: the model constructs a neural energy field from the power of the place-cell cluster and calculates the gradient of this field, using the gradient vector to study intellectual exploration. The findings demonstrate that our proposed neural-energy-based model of intellectual exploration not only finds optimal paths more efficiently, but also exhibits a biophysically meaningful learning process. This new idea verifies the importance of hippocampal place cells and synapses for spatial memory and the effectiveness of energy encoding, and provides an important theoretical basis for understanding the neural dynamics of spatial memory (Wang et al. 2017b).
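
The following minimal sketch illustrates the gradient idea with made-up place-field parameters; it is an illustration of energy-field gradient search, not the published model of Wang et al. (2017b).

```python
# A minimal sketch: place cells tile the arena with Gaussian firing fields,
# their powers form a goal-weighted "neural energy field", and the route
# follows the gradient of that field. All parameters are invented, and the
# toy field may have local maxima; this only illustrates the mechanism.
import numpy as np

rng = np.random.default_rng(2)

centers = rng.uniform(0, 10, size=(60, 2))  # place-field centers, 10x10 arena
powers = rng.uniform(0.5, 1.5, size=60)     # power of each place cell
goal = np.array([9.0, 9.0])
sigma = 3.0                                 # place-field width (broad overlap)

def energy(pos):
    """Energy field: place-cell powers weighted by proximity to the goal."""
    act = np.exp(-np.sum((centers - pos) ** 2, axis=1) / (2 * sigma**2))
    w_goal = 1.0 / (1.0 + np.linalg.norm(centers - goal, axis=1))
    return float(np.sum(powers * w_goal * act))

def grad(pos, eps=1e-4):
    """Central-difference gradient of the energy field."""
    g = np.zeros(2)
    for i in range(2):
        step = np.zeros(2)
        step[i] = eps
        g[i] = (energy(pos + step) - energy(pos - step)) / (2 * eps)
    return g

pos = np.array([0.5, 0.5])
for n_steps in range(1, 301):               # fixed-step gradient ascent
    g = grad(pos)
    if np.linalg.norm(g) < 1e-9:            # flat spot: stop
        break
    pos += 0.25 * g / np.linalg.norm(g)
    if np.linalg.norm(pos - goal) < 0.5:    # close enough to the goal
        break
print(f"stopped after {n_steps} steps at {np.round(pos, 2)}")
```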

1.3.3 Coding of cortical information

The coding of cortical information is a very significant research direction in the field of neural information processing. It mainly involves temporal coding, first-spike latency coding, population coding, and phase coding. Compared with temporal coding, firing rate coding has certain advantages in signal acquisition and energy efficiency; however, the firing of a single neuron in temporal coding can carry more information (Optican and Richmond 1987; Thorpe et al. 2001). Through the rank order code of spikes and the first-spike latency of the neuron, temporal coding can express characteristics of spike activity that cannot be expressed by firing rate coding. Experimental data from studies of the visual, auditory, and somatosensory cortex showed that neural signals can be encoded more efficiently and reliably on the basis of the rank order code of spikes and first-spike latency coding (Heil 2004; Chase and Young 2007; Zhong and Wang 2021a, b, c; Xin et al. 2019).
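
The two temporal codes named above can be illustrated directly. The sketch below computes first-spike latencies and the resulting rank order from toy spike trains; the data are invented for illustration.

```python
# A minimal sketch of two temporal codes: first-spike latency coding and
# rank-order coding, computed from toy spike trains.
import numpy as np

# Toy data: spike times (s) of 4 neurons after stimulus onset at t = 0.
spike_trains = {
    "n0": np.array([0.012, 0.030, 0.051]),
    "n1": np.array([0.007, 0.022]),
    "n2": np.array([0.019, 0.040, 0.066]),
    "n3": np.array([0.004, 0.015, 0.028]),
}

# First-spike latency code: the information unit is each neuron's delay
# from stimulus onset to its first spike.
latency = {name: s[0] for name, s in spike_trains.items()}

# Rank-order code: the information unit is the *order* in which neurons
# fire their first spikes, not the precise times.
rank_order = sorted(latency, key=latency.get)

print("latencies:", {k: f"{v * 1e3:.0f} ms" for k, v in latency.items()})
print("rank order:", " -> ".join(rank_order))
```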

Studies revealed that the encoding of receptors, the encoding of navigational information, and the encoding of various kinds of cognitive information are determined not by the activity of a single neuron or a few neurons, but by the joint neural response of large-scale neural populations (Insel et al. 2004; Hipp et al. 2011c). Population coding was introduced to express this type of joint neural response. In particular, synchronized oscillations of neural populations in the cortex can be analyzed as units of information in population encoding (Hipp et al. 2011c), thereby facilitating the study of the intrinsic relationships between coupled networks in the cerebral cortex. Research on synchronous oscillation is mainly based on the theory of phase synchronization and the theory of binding. Based on the binding theory (Feldman 2012), we can not only study how codes in different cerebral regions are related to produce perception, decision-making, planning, and behavior (Churchland et al. 2012), but also express the neural activity of cortical networks with different rhythms through phase synchronization theory (Wang et al. 2008, 2009; Wang and Zhang 2011; Rubin et al. 2012). Phase synchronization theory is therefore widely recognized as an important mechanism supporting the study of binding phenomena (Panzeri et al. 2015), and the study of phase encoding of synchronized oscillations greatly enriches population encoding. The most important disadvantage of population encoding is that it cannot handle the problem of high-dimensional nonlinear coupling (Wang and Zhu 2016). Further open questions are how the macroscopic properties of population encoding can be effectively integrated with encoding at the microscale, and how important microscale mechanisms, such as interactions between neurons, the roles of different functional neurons in the macroscopic expression of population codes, and the sparsity of coding and combinatorial selection of local codes, give rise to intrinsic connections in the encoding of large-scale neuronal populations. These problems severely limit the application of population coding in research on complex brain dynamics (Panzeri et al. 2015). To address the limitations of the various coding theories, we proposed the theory of neural energy coding (Wang and Zhu 2016).
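
As one concrete handle on phase synchronization, the phase-locking value (PLV) between two signals can be computed from their instantaneous phases. The following minimal sketch uses synthetic gamma-band signals; it is a standard analysis technique, not our energy method.

```python
# A minimal sketch of phase synchronization analysis: the phase-locking
# value (PLV) between two oscillatory signals, via the Hilbert transform.
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(3)

# Two gamma-band (40 Hz) signals with a fixed phase lag plus noise.
x = np.sin(2 * np.pi * 40 * t) + 0.5 * rng.standard_normal(t.size)
y = np.sin(2 * np.pi * 40 * t - np.pi / 4) + 0.5 * rng.standard_normal(t.size)

# Instantaneous phases from the analytic signals.
phx = np.angle(hilbert(x))
phy = np.angle(hilbert(y))

# PLV: modulus of the mean phase-difference vector; 1 = perfect locking.
plv = np.abs(np.mean(np.exp(1j * (phx - phy))))
print(f"PLV = {plv:.2f}")
```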

1.3.4 Behavioral coding

How do the different neuronal populations that are widely distributed in the brain communicate with each other to accomplish complex tasks? Are there working principles that these widely distributed neuronal populations follow when performing such tasks? For example, to complete a perceptual decision, the brain needs to process sensory information and select behaviors that lead to reward. Neurons in multiple cerebral regions mediate aspects of these processes, and it is not clear which cerebral region processes which information, or whether the processing of this information depends on similar or different neural circuits. The study of behavioral coding has therefore attracted noticeable attention from neuroscientists in recent years.

In an article entitled “Spontaneous behaviors drive multidimensional, brainwide activity”, Stringer et al. used two-photon calcium imaging combined with Neuropixels electrode recording to monitor the activity of about 10,000 neurons in the visual cortex of awake mice, and found that the primary visual cortex encodes visual and motor information related to facial movements (Stringer et al. 2019). In another article, “Thirst regulates motivated behavior through modulation of brainwide neural population dynamics”, published in Science by Allen et al. from Stanford University, the same Neuropixels electrodes were used to record nearly 24,000 neurons in 34 cerebral regions of mice in a thirsty state; it was found that the state of drinking motivation determines the activity of neuronal populations throughout the brain that convert sensory information into behavioral effects (Allen et al. 2019). In a third article, “Amygdala ensembles encode behavioral states”, from the research group of Andreas Lüthi at the University of Basel, Switzerland (Gründemann et al. 2019), a GRIN prism was used to study how the basal amygdala encodes behavioral states under different behavioral paradigms; the authors found that, across exploratory and non-exploratory phases, two non-overlapping functional neuronal populations encode opposite behavioral states. These results reveal, from a large-scale perspective, the working states of neuronal populations in different cerebral regions under complex tasks, allowing a better understanding of the working principles of the brain. Such unprecedented large-scale recording of neuronal activity is due to the latest breakthroughs in calcium imaging and Neuropixels electrode recording in awake, behaving animals, enabling us to explore scientific problems more boldly and freely than ever imagined.

1.4 Neural energy coding

Neural energy coding is a novel coding theory based on the correspondence between neural energy consumption and neural activity.

1.4.1 The scientific significance of studying neural energy encoding

(a) According to fMRI findings based on blood oxygenation level dependent (BOLD) imaging, when a part of the body is stimulated, blood flow in the contralateral cerebral hemisphere increases by 31%, while oxygen consumption increases by only 6% (consistent with the roughly 5:1 ratio obtained by spectroscopy). Why is the increase in blood flow not accompanied by a comparable increase in oxygen consumption? Only a proportional increase in the oxygen consumption rate would be consistent with the increase in energy supply delivered continuously by the bloodstream. More importantly, is the large remaining neural energy loss used only for the brain's physiological metabolism, or is it related to cognitive activity? If these remaining energies are relevant to cognition, in what form are they involved? At present, neuroscience cannot explain these physiological phenomena, while neural energy coding has strong potential to scientifically explain and probe them (Yuan et al. 2021; Peng and Wang 2021; Cheng et al. 2020).

(b) For the large number of unsolved neuroscience problems mentioned above, it is not enough to rely solely on neuroscience experiments and new technologies, because experiments can only observe phenomena and new techniques can only uncover previously unobserved phenomena. For instance, experimental data cannot reveal the working mechanism and distribution pattern of the dark energy of the brain's DMN, nor can they reveal the coupling relationship between the DMN and functional neural networks and their quantitative relationship with the biological energy provided by cerebral blood flow.

(c) Completion of cognitive tasks is always associated with synchronized oscillation and synaptic transmission of neural activity in the network, yet the corresponding neural energy consumption is generally considered to be only a small fraction (around 5%) of the overall energy expenditure. Is there, then, any other mode of neural information transmission, besides synaptic transmission, that is necessary for the completion of cognitive tasks? Professor Dominique Durand, an American scientist, recently published an article in the Journal of Neuroscience (Qiu et al. 2015) pointing out that neural information can be transmitted effectively by the brain's electric field alone, without synaptic transmission, indicating that such transmission may be involved in spontaneous neural activity. Therefore, studies based on spontaneous neural activity are likely to reveal how the major energy consumption of the brain arises and the internal relationship between the neural mechanism of energy consumption and cognitive function. Neural energy coding can unify spontaneous neural activity and task-induced neural activity within one family of neural models (Yuan et al. 2021; Peng and Wang 2021).

1.4.2 Key features of neural energy coding

As the neuronal membrane potential has a unique correspondence with neural energy (Wang et al. 2019a), the time-varying energy flow can reflect the conduction of information flow in the network; that is to say, neural information coding can be expressed by neural energy coding. It is certain that, no matter how kaleidoscopic the firing patterns of neurons, how rich and varied the synchronized oscillations of different frequencies in the networks of the nervous system, or how the local field potentials change, the patterns of energy change that constrain all these neural activities always accompany the ever-changing membrane potential or the oscillation mode of the network (Wang et al. 2015b; Wang and Wang 2014).

1.4.3 Advantages of neural energy coding compared with the existing coding theories

(a) Global functional models of the brain based on energy can be used to analyze and describe experimental phenomena of neuroscience at various levels, making the computational results at different levels no longer mutually inapplicable, contradictory, and irrelevant. That is to say, neural information can be expressed as energy at the levels of molecules, neurons, networks, cognition, and behavior, as well as at combinations of these levels, and a neural energy model can be used to unify the interactions between the various levels (Yuan et al. 2021; Wang and Zhu 2016), which is impossible with any traditional neural coding theory.

(b) It is difficult to record from multiple cerebral regions simultaneously. Although EEG and magnetoencephalography (MEG) can capture neuronal activity from various cerebral regions, estimating cortical interactions from these extracranial signals remains a challenge. The main obstacle is the lack of a theoretical tool capable of efficiently analyzing the interactions among cortical areas in high-dimensional spaces; in addition, a conversion relationship between scalp EEG and cortical potential has not been found. Neural energy provides an effective solution to these problems.

(c) As energy is a scalar quantity, the dynamic responses of single neurons and neuronal populations, of networks and behavior, and of linear and nonlinear neural models can all be used to describe the mode of neural coding by the method of neural energy superposition (Yuan et al. 2021; Peng and Wang 2021; Wang et al. 2015b; Wang and Wang 2014), as illustrated in the sketch after this list. This provides global information about functional neural activity in the brain, which is impossible to achieve with other traditional coding theories.

(d) The modes of coupled network oscillation are ever-changing, and the coupled oscillation of a neural network has a unique relationship with the oscillation of network energy. When large-scale neural network modeling and numerical analysis become intractable owing to the extreme complexity of high-dimensional nonlinear coupling, neural energy coding can be used to study neural information processing, so that complex neural processing can be performed simply and easily without losing information (Zheng et al. 2022).
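
A minimal sketch of the scalar-superposition property in point (c) follows; all power traces are made up for illustration.

```python
# A minimal sketch: power is a scalar, so hypothetical power traces from
# heterogeneous levels (one neuron, a population, a region) can be
# superposed into a single global energy signal.
import numpy as np

rng = np.random.default_rng(4)
dt = 1e-3
t = np.arange(0, 1.0, dt)

p_neuron = 0.1 * (1 + np.sin(2 * np.pi * 8 * t))              # single neuron
p_population = 2.0 * (1 + 0.3 * rng.standard_normal(t.size))  # local circuit
p_region = 15.0 * (1 + 0.1 * np.sin(2 * np.pi * 1 * t))       # brain region

p_global = p_neuron + p_population + p_region  # scalar superposition
energy_total = np.sum(p_global) * dt           # energy = integral of power
print(f"total energy over 1 s: {energy_total:.1f} (a.u.)")
```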

Our proposed neural energy model also accurately predicted the presence of a then-unknown magnetic substance in the brain (Wang and Zhang 2006). Ten years later, an important academic article published in Nature Materials experimentally demonstrated the existence of a magnetic protein, MagR, in the brain, which can be used for orientation and navigation in path exploration (Qin and Xie 2016).

As neural energy can describe interactions at various levels from the global perspective of cerebral activity, neural energy coding can also express the various existing neural coding patterns. That is to say, all previous coding patterns based on membrane potential can be expressed by neural energy coding patterns and are special cases of neural energy coding.

2 Application of analytical dynamics in neuron modeling

2.1 Statement of the problem

Viewed from the perspective of mechanics, the growth of connections between neurons in the brain, the formation of functional circuits, the capacity of nerve tissue to regenerate after degeneration, the structure of the growth cone and its motion, and changes in the growth cone's state of motion and movement trend all depend on interaction forces. Although the mechanics of neuronal activity is well understood at the molecular level, all functional neural activity in the brain is based on the activity of some 100 billion neurons. During the development of the nervous system, once neurons find their positions in the brain, they serve as both the basic structural unit and the functional unit of the whole nervous system. To understand the complex multi-level interaction system of the brain and to explore the laws and essence behind its various functions, we must consider the role and contribution of the various forces acting on neuronal axons and dendrites in the signal transmission process. On the basis of previous studies in neurobiology and cell biology, we need to further investigate the effect of neuronal electromagnetic induction on membrane potential changes and the corresponding energy metabolism. Some pioneering work has been done by Ma Jun of Lanzhou University of Technology. He pointed out that the electromagnetic field effects generated by ion transport inside and outside nerve cells should be considered during electrophysiological activity, to further explain the dynamic mechanism of synaptic plasticity from a physical perspective (Wu et al. 2016). Based on the principle of electromagnetic induction, magnetic flux was introduced into the neuron model, and induced current was used to express the electromagnetic field effect generated by intracellular ion transport and the effect of external electromagnetic radiation, which can explain the multi-mode oscillation of neuronal electrical activity and the coupling synchronization process of the field inside a neuronal network.
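
To make the flux-coupling idea concrete, the sketch below follows the general form of the magnetic-flux neuron models of Ma's group (Lv et al. 2016): Hindmarsh–Rose-type equations augmented with a flux variable whose memductance feeds an induced current back onto the membrane. The parameter values and the linear flux equation here are illustrative assumptions, not the published fit.

```python
# A minimal sketch of a flux-coupled Hindmarsh-Rose-type neuron: a magnetic
# flux variable phi, with memductance rho(phi), feeds an induced current
# back onto the membrane variable x. Parameters are illustrative.
import numpy as np

a, b, c, d, r, s, x0 = 1.0, 3.0, 1.0, 5.0, 0.006, 4.0, -1.6  # classic HR
k, alpha, beta = 0.9, 0.4, 0.02   # flux feedback gain, memductance (assumed)
kin, kleak = 0.1, 0.5             # flux accumulation / leakage (assumed)
i_ext = 3.0

def rho(phi):
    """Memductance of the flux-controlled memristor."""
    return alpha + 3.0 * beta * phi**2

def deriv(state):
    x, y, z, phi = state
    dx = y - a * x**3 + b * x**2 - z + i_ext - k * rho(phi) * x
    dy = c - d * x**2 - y
    dz = r * (s * (x - x0) - z)
    dphi = kin * x - kleak * phi  # flux driven by membrane activity, with leak
    return np.array([dx, dy, dz, dphi])

# RK4 integration of the four-variable system.
dt, steps = 0.01, 50000
state = np.array([-1.0, 0.0, 0.0, 0.0])
xs = np.empty(steps)
for i in range(steps):
    k1 = deriv(state)
    k2 = deriv(state + 0.5 * dt * k1)
    k3 = deriv(state + 0.5 * dt * k2)
    k4 = deriv(state + dt * k3)
    state += dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    xs[i] = state[0]
print(f"membrane variable range: [{xs.min():.2f}, {xs.max():.2f}]")
```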

In 1952, Alan Hodgkin and Andrew Huxley gave the first quantitative description of the neuronal membrane potential (the H–H equations) and proposed the concept of the ion channel, thus lifting the veil on neuronal excitability. For this research they won the Nobel Prize in Physiology or Medicine in 1963, and they remain the only scientists to have won a Nobel Prize for constructing a mathematical model. However, the H–H neuron model is a high-dimensional, complex nonlinear system with many parameters: although it reflects many nonlinear properties of nerve cells and is suitable for studies at the subcellular level, networks of large numbers of such neurons cause considerable computational difficulty, or are even computationally intractable. It is therefore unsuitable for research on large-scale neural networks and large-scale neural computation. To overcome these difficulties, scholars proposed simplified neuron models both before and after the publication of the exact H–H model. A typical example is the Hindmarsh–Rose (HR) nonlinear model, which, requiring far less computation and far fewer parameters than the H–H model, can be used both to study the action potential firing characteristics of single neurons and as the basic unit of large-scale networks. The main disadvantage of the HR model is that the rich nonlinear dynamic properties obtained for a single neuron or a few neurons cannot be directly extended to high-dimensional nonlinear systems and neural network systems, which limits its practical application. The integrate-and-fire (IF) model can also be used for dynamic studies of large neuron clusters because of its simple calculation; however, it is defective in that it cannot record the complete time course of the neuronal membrane potential. Compared with the H–H model, the FitzHugh–Nagumo (FHN) nonlinear model requires much less computation but can only be used to study the action potential properties of single neurons. The Chay model is a simplified version of the H–H model with a relatively reduced computational workload; its main shortcoming is that it, too, is unsuitable for large-scale neural networks and large-scale neural computation. These situations not only seriously hinder the complete description of the nervous system in neurodynamics and computational neuroscience, but also seriously hinder the construction of a theoretical system of brain science, and they make it impossible for us to understand the laws and nature behind big data in neuroscience from a theoretical height. As a result, experimental and theoretical neuroscience will continue to lack a common language, leaving a hidden and unbridgeable gap in brain research that is supposed to understand, collaborate, and develop together.
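
For concreteness, the following minimal sketch shows the leaky integrate-and-fire scheme mentioned above. The parameter values are generic textbook choices, and the comments mark the defect noted in the text: the reset discards the spike waveform.

```python
# A minimal leaky integrate-and-fire (IF) simulation, illustrating why the
# model is cheap enough for large populations, and its defect: the spike
# waveform is replaced by an instantaneous reset, so the complete time
# course of the membrane potential is lost.
import numpy as np

tau_m, v_rest, v_th, v_reset = 20.0, -65.0, -50.0, -70.0  # ms, mV
dt, t_end, i_ext = 0.1, 200.0, 1.8                        # ms, ms, a.u.

v = v_rest
spikes = []
for step in range(int(t_end / dt)):
    # Euler step of tau_m * dV/dt = -(V - v_rest) + R*I (R folded into i_ext).
    v += dt / tau_m * (-(v - v_rest) + 10.0 * i_ext)
    if v >= v_th:                 # threshold crossing: emit a spike...
        spikes.append(step * dt)
        v = v_reset               # ...and reset, discarding the waveform
print(f"{len(spikes)} spikes in {t_end:.0f} ms "
      f"(rate = {1000 * len(spikes) / t_end:.1f} Hz)")
```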

2.2 Biophysical mechanism of W–Z neuron model

In view of the above, it is necessary to find a simple and reliable neuron computational model that can support large-scale combined computation at all levels without losing the main information. The characteristics of such a new neuron model are as follows: (1) it must be equivalent to the H–H model; (2) it must be applicable not only to the calculation of interactions between brain regions, but also to calculations at all levels of the brain and at combinations of those levels; (3) at the levels of neural nuclei, the mesoscopic scale, complex networks, and macroscopic cognition and behavior, the new model may ignore secondary information but must not lose the main information; (4) its calculation must be simple and reliable.

After more than 10 years of exploration, we have found such a new neuron model, called the W–Z model, that basically meets the above requirements (Wang et al. 2015a; Wang and Wang 2018a, b). When we used analytical mechanics to construct this original neuron model at the cellular level, we unexpectedly discovered a new working mechanism of neurons. Although the H–H and W–Z neuron models are constructed at completely different levels, they are in fact equivalent (Wang and Wang 2018b). The biggest difference between the two models is that an inductive element is introduced into the W–Z model. The biological basis for this inductance is a long-ignored class of phenomena in neuroscience experiments: intracellular recordings from isolated brain and spinal cord sections of different mammals indicate that central neurons can generate various complex action potential firing patterns, and can spontaneously and continuously generate action potentials through internal ionic mechanisms in the absence of synaptic connections. Even without stimulus input, the neuronal discharge mode is strongly and intrinsically related to the activity of ionic currents (Byrne and Roberts 2009). Although Ma Jun and his co-authors did not observe the above experimental phenomena, starting from the perspective of physics, and based on the W–Z neuron model (Wang and Wang 2018b) and the working principle of the memristor (Wu et al. 2016; Lv et al. 2016), they keenly perceived that the electromagnetic field generated by induced currents during intracellular ion transport, together with external electromagnetic radiation, contributes to the conduction of neural signals (Lv et al. 2016). They applied this method to study the electrical activity of cardiac tissue under electromagnetic radiation and predicted mechanisms of sudden cardiac death and shock induced by electromagnetic radiation (Lv et al. 2016; Ma et al. 2017). In this way, the physical mechanism of neuronal synaptic plasticity is scientifically explained: the triggering of synaptic currents is accompanied by the generation of electromagnetic fields, and therefore a combination of capacitors and induction coils can be used to simulate the function of hybrid synapses (Ma et al. 2019).

Alan Hodgkin and Andrew Huxley showed that, under constant membrane permeability, an ionic current can be described as the product of a conductance and a driving force (the difference between the membrane voltage and the ionic Nernst potential). Conductance reflects the permeability of the cell membrane to ions, while the driving force reflects the tendency of charged particles in the fluids inside and outside the cell to move under the dual effects of the electric field gradient and the concentration gradient. A large body of experimental data (Byrne and Roberts 2009) shows that, since the resting potential of the neuron does not sit at the equilibrium potential of any particular ion, the various ions continue to diffuse along their concentration gradients, which is evident in the generation of action potentials and synaptic potentials. Cells therefore need to recover through active transport by the sodium–potassium pump, which is thought to accomplish transport through allosteric effects of protein phosphorylation and dephosphorylation. When the membrane potential exceeds the threshold for an action potential, it initiates a large, transient inward current (the sodium current) followed by a persistent outward current (the potassium current). Hodgkin and Huxley showed that when the membrane is depolarized, the sodium current (sodium conductance) is rapidly activated and then inactivated, while the potassium current (potassium conductance) is activated after a delay and remains highly activated as long as the depolarization is maintained. As the inactivation of the sodium channel is slower than its activation, a large amount of sodium flows in while the channel is activated but not yet inactivated, increasing the sodium current and forming a positive feedback. The membrane then switches to repolarization, owing to sodium channel inactivation and potassium channel activation, in preparation for the next action potential. All the ionic currents in the H–H model, whether sodium or potassium, are very weak in any single ion channel, so it is difficult to construct a self-induction effect at the level of the individual ion channel. Although the magnetic field intensity in the nervous system is very weak (Liu 2002), the electromagnetic induction effect exists objectively. Given this fact, our neuron model is built directly at the level of the nerve cell, where the total self-induction effect of all types of ion channel currents can be expressed by a single inductance. This inductance can in fact express the total effect of the magnetic field on the movement of all the ionic currents represented in the H–H model. This is the biological explanation for why an inductance is introduced in the W–Z model, and the underlying reason why the H–H and W–Z models are equivalent neuron models.
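
The gating kinetics just described can be made concrete numerically. The following is a minimal sketch of the standard H–H squid-axon equations with textbook parameter values and Euler integration for brevity; it is the classical model, not the W–Z formulation.

```python
# A minimal Hodgkin-Huxley integration showing the kinetics described above:
# fast activation (m) then inactivation (h) of the sodium conductance, and
# delayed activation (n) of the potassium conductance.
import numpy as np

c_m = 1.0                                  # membrane capacitance, uF/cm^2
g_na, g_k, g_l = 120.0, 36.0, 0.3          # maximal conductances, mS/cm^2
e_na, e_k, e_l = 50.0, -77.0, -54.4        # reversal potentials, mV

def alpha_m(v): return 0.1 * (v + 40) / (1 - np.exp(-(v + 40) / 10))
def beta_m(v):  return 4.0 * np.exp(-(v + 65) / 18)
def alpha_h(v): return 0.07 * np.exp(-(v + 65) / 20)
def beta_h(v):  return 1.0 / (1 + np.exp(-(v + 35) / 10))
def alpha_n(v): return 0.01 * (v + 55) / (1 - np.exp(-(v + 55) / 10))
def beta_n(v):  return 0.125 * np.exp(-(v + 65) / 80)

dt, t_end, i_ext = 0.01, 50.0, 10.0        # ms, ms, uA/cm^2
v, m, h, n = -65.0, 0.05, 0.6, 0.32        # resting initial conditions
v_trace = []
for _ in range(int(t_end / dt)):
    i_na = g_na * m**3 * h * (v - e_na)    # transient inward current
    i_k = g_k * n**4 * (v - e_k)           # delayed outward current
    i_l = g_l * (v - e_l)
    v += dt / c_m * (i_ext - i_na - i_k - i_l)
    m += dt * (alpha_m(v) * (1 - m) - beta_m(v) * m)
    h += dt * (alpha_h(v) * (1 - h) - beta_h(v) * h)
    n += dt * (alpha_n(v) * (1 - n) - beta_n(v) * n)
    v_trace.append(v)
print(f"peak membrane potential: {max(v_trace):.1f} mV")
```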

2.3 Analytical dynamics reveals new working principles of neuronal activity

Based on the experimental data provided by electrophysiology and the working characteristics of neurons, we present the biophysical model of coupled neuronal activity shown in Fig. 1. This model reflects the interaction of a single neuron with all the other neurons connected to it; the mutual coupling between neurons is realized through the current formed by the inputs of the other neurons to the mth neuron in the cerebral cortex.

Fig. 1 Schematic description of the W–Z model (Wang et al. 2015a)

According to Fig. 1, we get the following circuit equations:

$$P_{m} = U_{m} I_{0m} + U_{im} I_{m} ,$$
(1)
$$U_{im} = C_{m} r_{3m} \dot{U}_{0m} + U_{0m} ,$$
(2)
$$U_{m} = r_{0m} I_{0m} + r_{1m} I_{1m} + L_{m} \dot{I}_{1m} ,$$
(3)
$$I_{0m} = I_{1m} - I_{m} + \frac{{U_{im} }}{{r_{m} }} + C_{m} \dot{U}_{0m} .$$
(4)

Among them, the membrane capacitance \(C_{m}\) represents the accumulation of positive and negative ions inside and outside the cell membrane, and \(r_{3m}\) is the heat loss caused by the collision of ions with the inner and outer membranes during ion exchange. As mentioned above, in this new neuron model we have introduced an inductance element \(L_{m}\) that is absent from the H–H model, representing the self-induction effect produced by the loop currents of the various ions flowing through the ion channels of the cell membrane.

The model encompasses the transport of large numbers of charged ions, such as sodium, potassium and calcium, through the ion channels of nerve cells, which can generate uniform or non-uniform electromagnetic fields inside and outside the cell that in turn affect the transport of the charged ions. That is to say, changes in the charge distribution density in the cell and the electromagnetic field produced during charge transport will cause an induced current in the cell, which is exactly what traditional neuron models do not consider (Ma et al. 2019). \(r_{1m}\) represents the heat consumed by ions colliding with each other when a current flows, which can be treated as an equivalent resistance. \(U_{m}\) can be interpreted as the chemical gradient inside the neuron, which drives the flow of ions and can be simulated by a voltage source in the electrical model; \(r_{0m}\) is the loss caused by the non-ideal voltage source. \(I_{m}\) expresses the fact that, in addition to the chemical gradient, the neuron also receives the action of other neurons, while the resting potential is maintained at rest. To express this function, the input current is set as

$$I_{m} = i_{1m} + \sum\limits_{j = 1}^{n} {\left[ {i_{0m(j - 1)} \sin \left( {\omega_{m(j - 1)} (t_{j} - t_{j - 1} )} \right)} \right]} + i_{0m(n)} \sin \left( {\omega_{m(n)} (t - t_{n} )} \right).$$
(5)

Here \(i_{1m}\) maintains the resting potential, and the remaining terms represent the inputs of the surrounding \(n\) neurons to the mth neuron; \(\omega_{m}\) is the frequency of the action potential. The resistance \(r_m\) accounts for the loss caused by the imperfect current source. The neuron is stimulated by peripheral neurons when it is subthreshold; once an action potential is issued, it is no longer affected by external influences and is governed by the internal mechanism of the neuron. The effect is that the voltage source and the current source do not act at the same site: the voltage source mainly provides the small loop current of the ion channels, while the current source mainly accepts the stimulation of peripheral neurons. They are almost isolated from each other, but there is an internal connection between them, which can be represented by the resistance \(r_m\). The observable physical quantities in this physical model are the membrane potential and the membrane current.
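For concreteness, Eq. (5) can be evaluated piecewise. The following Python sketch is our own illustration of that bookkeeping; the function and variable names (input_current, i1m, i0m, omega, t_knots) are chosen for readability and are not taken from the cited papers.

```python
import numpy as np

def input_current(t, i1m, i0m, omega, t_knots):
    """Evaluate the piecewise-sinusoidal input current I_m(t) of Eq. (5).

    i1m     : constant component maintaining the resting potential
    i0m     : amplitudes i_{0m(0)}, ..., i_{0m(n)} of the surrounding inputs
    omega   : action-potential frequencies omega_{m(0)}, ..., omega_{m(n)}
    t_knots : switching times t_0 < t_1 < ... < t_n, with t_0 = 0
    """
    # frozen contributions of the completed intervals j = 1, ..., n
    past = sum(i0m[j - 1] * np.sin(omega[j - 1] * (t_knots[j] - t_knots[j - 1]))
               for j in range(1, len(t_knots)))
    # oscillating contribution of the ongoing interval t > t_n
    now = i0m[-1] * np.sin(omega[-1] * (t - t_knots[-1]))
    return i1m + past + now

# Example: a neuron with two completed input intervals and one ongoing input.
print(input_current(t=2.5, i1m=0.1, i0m=[1.0, 0.8, 1.2],
                    omega=[2.0, 3.0, 2.5], t_knots=[0.0, 1.0, 2.0]))
```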

The total power of N neurons can be expressed as

$$P = \sum\limits_{m = 1}^{N} {P_{m} } .$$
(6)

According to the circuit diagram, the power of the neuron is given by the following formula:

$$P_{m} = d_{1m} \dot{U}_{0m}^{2} + d_{2m} \dot{U}_{0m} + d_{3m} \dot{U}_{0m} U_{0m} + d_{4m} U_{0m}^{2} + d_{5m} U_{0m} + d_{6m} .$$
(7)

The parameter \(d_{im} \;(i = 1,\;2, \ldots ,6)\) can be found in literature (Wang et al. 2015a).

Since the expression of the voltage source Um cannot be obtained from the circuit model in Fig. 1, the membrane potential generally cannot be determined directly. However, the interaction between coupled neurons in the cerebral cortex is orderly and follows the law of self-organization (Gu and Liang 2007; Haken 1996), and convincing evidence is that data provided by neuroscientists at Yale University confirm that the activity of neurons in the cerebral cortex consumes energy (Raichle and Gusnard 2002; Maandag et al. 2007; Lin et al. 2010). Their work shows that, compared with the resting state, the energy consumed by the brain under stimulation is mainly used for the propagation of action potentials and for the restoration of post-synaptic ion currents following neurotransmitter stimulation of receptors (Lin et al. 2010).

The neuron cluster model we established, with m neurons under coupling conditions, can describe the basic characteristics of the subthreshold and suprathreshold electrical activity of a neuron cluster through its current coupling with surrounding neurons (Wang et al. 2008, 2009). However, the membrane potential cannot be obtained directly from the model. If we can find the functional form of the neuron's energy consumption, and the constraint that this energy function imposes on the neuron's equation of motion, we can obtain the solution for the membrane potential. Since the spontaneous electrical activity of neuron clusters obeys the law of self-organization (Haken 1996), and in view of the Yale University neuroscientists' experimental finding that neural signal transmission is tightly coupled with energy dissipation, we conjectured that the constraint condition for the new neuron model is likely to be the energy function of the circuit system. In mechanical analysis, for a known dynamic system we can write down the kinetic and potential energy of the system and thus obtain its Lagrangian. In our electrical neuron model, however, we assume that the potential energy is a constant (and that the power is the average energy), so the power consumed in the circuit model can be regarded as the energy function of the dynamic system. This leads to a Lagrangian that serves as the constraint condition of the circuit model and plays a key role in describing the neuron model completely. Whether such an idea is reasonable depends on whether its results agree with neuroelectrophysiological experiments. The value of this idea is that it extends the modeling of dynamical systems from classical mechanical systems to nervous systems.

According to the above ideas, suppose that the Lagrange function in the model is related to the total power of the circuit model; its dynamic equation is then given by the following equation

$$\frac{d}{dt}\left( {\frac{{\partial P_{m} }}{{\partial \dot{U}_{0m} }}} \right) - \frac{{\partial P_{m} }}{{\partial U_{0m} }} = 0\quad (m = 1,\;2, \ldots ,N).$$
(8)
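Because \(P_m\) in Eq. (7) is quadratic in \(U_{0m}\) and \(\dot{U}_{0m}\), Eq. (8) reduces to a linear second-order differential equation. A minimal sympy sketch (ours; the symbols d1..d6 stand for the coefficients \(d_{im}\), whose actual values are given in Wang et al. 2015a) makes the reduction explicit:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

t = sp.symbols('t')
d1, d2, d3, d4, d5, d6 = sp.symbols('d1:7')   # coefficients d_im of Eq. (7)
U = sp.Function('U')                           # membrane potential U_0m(t)

# Power function of Eq. (7), treated here as the Lagrangian of the circuit.
P = (d1 * U(t).diff(t)**2 + d2 * U(t).diff(t) + d3 * U(t).diff(t) * U(t)
     + d4 * U(t)**2 + d5 * U(t) + d6)

# Euler-Lagrange equation corresponding to Eq. (8).
eq, = euler_equations(P, [U(t)], [t])
print(sp.simplify(eq))   # -> Eq(-2*d1*U''(t) + 2*d4*U(t) + d5, 0)
```

With the time-dependent stimulus terms of Eq. (5) entering through the coefficients, solutions of this linear equation take exactly the exponential-plus-sinusoid form of Eq. (9) below.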

The solution to the above equation is

$$\begin{aligned} U_{0m} & = - \frac{{\hat{g}_{1} }}{{\lambda_{m}^{2} }} - \frac{{\hat{g}_{2} e^{{ - a(t - t_{n} )}} }}{{\lambda_{m}^{2} - a^{2} }} - \frac{1}{{\lambda_{m}^{2} + \omega_{m(n)}^{2} }}\left( {\hat{g}_{3} \sin \omega_{m(n)} (t - t_{n} ) + \hat{g}_{4} \cos \omega_{m(n)} (t - t_{n} )} \right) \\ & \quad + \left( {U_{0m} (t_{n} ) + \frac{{\hat{g}_{1} }}{{\lambda_{m}^{2} }} + \frac{{\hat{g}_{2} }}{{\lambda_{m}^{2} - a^{2} }} + \frac{{\hat{g}_{4} }}{{\lambda_{m}^{2} + \omega_{m(n)}^{2} }}} \right)e^{{ - \lambda_{m} (t - t_{n} )}} ,\quad t_{n} < t < t_{n + 1} ,\quad n = 0,\;1,\;2, \ldots ,\;t_{0} = 0. \\ \end{aligned}$$
(9)

When the stimulus i0m(n) is strong and the membrane potential reaches the threshold level, we obtain the neuron's membrane potential U0m and the corresponding energy function Pm, as shown in Fig. 2.

Fig. 2

Action potential (a) and corresponding energy function (b) (Wang et al. 2015a)

According to the calculated results, the action potential waveform is completely consistent with the experimental data, confirming that our earlier judgment was correct. It is important to note that the analytical mechanics calculations reveal, for the first time, a previously undiscovered phenomenon: when a neuron fires an action potential, the corresponding energy expenditure does not follow the conventional view in neuroscience that neurons only consume energy; rather, the neuron first absorbs energy and then expends it. In fact, during the production of an action potential the energy change of the neuron is composed of two parts. One part is the negative energy associated with the oxygenated hemoglobin obtained from the bloodstream, which corresponds to energy storage; the other part, associated with deoxygenated hemoglobin, is positive energy corresponding to energy consumption (Wang et al. 2015a; Wang and Wang 2018a, b). For this novel energy calculation result, Zheng et al. (2014, 2016), combining molecular biology with existing experimental data, provided a qualitative explanation of how neurons and the associated glial cells regulate ion channel opening and closing, glutamate circulation and glucose during action potential generation. They pointed out that the negative energy in the action potential is a process of energy storage, that is, the amount of glucose and oxygen absorbed from the bloodstream is greater than the amount consumed. In other words, stimulation of neurons leads to increased cerebral blood flow, but during depolarization there is no immediate demand for oxygen consumption (no extra oxygen is consumed at that moment), so the activity mainly takes the form of energy absorption. In the repolarization stage the stored energy has been used up, and the oxygen consumption of the neuron then increases significantly, which is manifested as energy consumption. In short, the neuron is not only an energy-consuming device but also an energy-storing device. From the perspective of a single action potential, a neuron absorbs energy from the bloodstream and then uses it up, over and over again, reaching a dynamic equilibrium; this suggests that the energy storage capacity of a single neuron is limited. From the perspective of glucose and oxygen supply in the bloodstream, when the supply is sufficient and the energy storage of the neuron has not reached its upper limit, the neuron spontaneously stores energy at the initial stage of action-potential depolarization. This may also explain why the cerebral metabolic rate of oxygen (CMRO2) changes more rapidly than cerebral blood flow (CBF), reflecting the fact that neurons first release stored oxygen in response to stimuli and then consume oxygen and glucose from the bloodstream. We have thus explained the neural mechanism of cerebral hemodynamic phenomena from a quantitative perspective (Peng and Wang 2021).

It should be emphasized that the existence of a negative power component in the energy function corresponding to the firing of an action potential is an extremely important new discovery about the working mechanism of neurons (Wang et al. 2015a). This new mechanism reveals two previously undiscovered patterns of neuronal activity. The first is that there is a corresponding relationship between the membrane-potential discharge of a neuron and its neural energy. The second is that neurons only consume energy when they are active below the threshold, whereas they first absorb and then consume energy when active above the threshold. The first law reveals a unique correspondence between the membrane potential of the neuron and its energy function, which has been strongly confirmed by the H–H model (Wang and Wang 2018b). The second law accounts for an experimental phenomenon so far unexplained by neuroscience, namely that activation of brain regions increases blood flow by 31% while oxygen consumption increases by only 6% (Zheng et al. 2016), a ratio of approximately 5:1. Our calculation shows that the ratio of the areas of the positive and negative intervals of the energy curve in Fig. 2 is also approximately 5:1 (Adachi and Aihara 1997). These positive and negative areas have profound neurobiological significance: they correspond well to the experimental result that blood flow increases by about 31% during stimulus-induced neuronal activity, while the associated oxygen consumption increases by only about 6% (Zheng et al. 2016). The negative power component can also explain the hemodynamic phenomenon of the brain, namely that the significant increase in blood flow after activation of a given cortical area is delayed by 7–8 s from the activation moment, and it explains why tactile perception is synchronized with the emergence of consciousness, among other phenomena. The new neuron model based on experimental data enabled us to propose the concept, theory and method of energy coding in an original way (Wang et al. 2008, 2009, 2015a, b, 2017a, b, 2018a, b, 2019b, 2021a; Yuan et al. 2021; Wang and Zhu 2016; Wang and Wang 2014, 2018b, 2020; Li et al. 2022a; Peng and Wang 2021; Zheng et al. 2014, 2016, 2022; Wang and Zhang 2006). This new concept and coding theory can not only explain experimental phenomena that neuroscience has so far been unable to explain and quantitatively reveal the laws behind the experimental data, but also predict phenomena that experimental neuroscience has not yet discovered. When we fully understand and master the above nature of neuronal activity, we will have a new understanding of the rules of neural information processing and the principles of neural coding in the cerebral cortex. This fully reflects the influence and role of mechanics in promoting the progress of neuroscience and the life sciences.

According to this newly discovered working mechanism of neurons, the neuron model we provide can also be used to prove quantitatively that the brain operates in accordance with the following criteria (Zheng et al. 2022; Laughlin and Sejnowski 2003b): (1) economy: neural network activity conforms to the principle of energy minimization both when the brain is at rest and when it participates in cognitive activities (Zheng et al. 2022); (2) high efficiency: the transmission efficiency of cortical neural network signals conforms to the principle of maximum energy utilization (Zheng et al. 2022); (3) self-organizing neural computation: the relationship between membrane potential and energy reflects the coupling between neural information and cerebral blood flow (Moore and Cao 2008; Fox and Raichle 2007a; Raichle and Gusnard 2002; Lin et al. 2010). In addition, with this new neuron model not only can the action potential and the corresponding energy consumption of neurons be simulated, but also the waveforms of excitatory postsynaptic potentials (EPSP) and inhibitory postsynaptic potentials (IPSP) and their corresponding energies, and the simulation results are in complete agreement with experimental records (Wang et al. 2015a). This original neuron model has been supported by a large number of neuroelectrophysiological experiments. The W–Z neuron model and the theory of neural energy coding make it possible to transform a variety of complex, coupled and highly nonlinear membrane-potential firing modes into energy firing modes for coding studies (Wang et al. 2008, 2009, 2015a, b, 2017a, b, 2018a, b, 2019b, 2021a; Yuan et al. 2021; Wang and Zhu 2016; Wang and Wang 2014, 2018b, 2020; Li et al. 2022a; Peng and Wang 2021; Zheng et al. 2014, 2016, 2022; Wang and Zhang 2006). The theory holds that the encoding of neural information is closely related to the metabolism of neural energy, so the mechanism of neural information encoding can be understood and revealed by energy methods. What is particularly interesting is that the brain power calculated from the W–Z neuron model is about 45 W, while the brain power given by experimental data is about 20 W (Wang et al. 2015a). This was also the first time that the power consumption of our own brains had been quantified by calculation.

Thus, the discovery of this new and important working mechanism of neurons not only rests on the creative application of analytical mechanics in neuroscience, but also binds neural information and neural energy together, laying a solid foundation for a research framework of global neural coding in the brain.

Experimental data show that the brain consumes only about 5% more energy in the task state than in the resting state (Fox and Raichle 2007a). The largest energy consumption in the resting state comes from the DMN, which accounts for more than 95% of the brain's resting-state energy consumption (Peppiatt and Attwell 2004). Yet much of our understanding of the brain used to come from studying the activity behind this 5%. Because different brain regions differ in structure and in their individual patterns of neural activity, neuroscientists often use the dynamic BOLD signals measured by fMRI to look at brain activity as a whole in order to obtain global information about brain activity. However, the averaged distribution of blood flow when the brain is activated, and the nonlinear coupling between blood flow and oxygen consumption, make it difficult to obtain a quantitatively accurate understanding of the brain's neural activity in its various states, and the interactions between neurons in the activated brain cannot be resolved with this technology (Clancy et al. 2017). At present there is no experimental technique in neuroscience that can perfectly combine the accuracy of neuroelectrophysiological recording with the global coverage of brain functional activity offered by fMRI; optogenetics, for example, is only locally observable. If neuroscience cannot achieve such a technique for a long time to come, can we theoretically propose a new research method that combines the reductionism of single-neuron activity with the holism of the brain's macroscopic effects, and take it as the main basis for studying the global neural activity of brain function? Such a method should not only accurately reproduce neuroelectrophysiological records, but also reproduce the global information of brain functional activity provided by large amounts of fMRI data, and should in addition theoretically predict phenomena of neural activity not yet discovered, such as the negative power component found in neuronal energy consumption. Doing so requires understanding the nature of neuronal activity in the brain, and to that end we need to compare the H–H neuron model with the W–Z neuron model. Through the analysis of these two different types of neuron models, we can explain what factors dominate and control the various firing modes of the complex membrane potential of neurons, so as to understand and master the nature and laws of neural information processing and signal transduction.

2.4 Equivalence between W–Z neuron model and H–H model and its molecular biological basis

In order to verify the validity of the W–Z neuron model established by the analytical mechanics method, we used the H–H model to calculate the energy characteristics of the action potential and membrane potential (Wang and Wang 2018b). This is because almost all previous work has focused on how neural activity causes changes in neural energy. Under the same stimulation conditions, whether a neuron can transition from subthreshold to suprathreshold activity depends on whether it is adequately supplied with energy. The two states of energy supply (subthreshold and suprathreshold) determine whether the ion pump can provide a stable Nernst potential for sodium ions; that is, from the perspective of energy, the sodium Nernst potential closely connects subthreshold and suprathreshold activity. The inverse question is whether changes in neural energy cause changes in neuronal activity: under insufficient energy supply, a neuron will exhibit only subthreshold activity when stimulated and cannot fire action potentials. In general, we are concerned with the imbalanced distribution of neural energy caused by stimulated neural activity; however, neural activity is also modulated and constrained by neural energy. The mechanism is that an ion pump lacking energy supply cannot provide a constant Nernst potential for sodium ions and, in the early stage of the change in membrane permeability, cannot provide a continuous inflow of sodium ions sufficient to reach the threshold potential (Hopfield 2010; Hu et al. 2021). Thus the electrophysiological activity of neurons is strictly constrained by energy levels in the brain. When the energy supply of the ion pump of the sodium channels is insufficient, the neuron responds with subthreshold activity to stimulation of any form and intensity; only when the maximum power of the sodium channels is unconstrained will the neuron fire suprathreshold action potentials.

It is therefore necessary to understand, through a computational model, how energy regulates neuronal activity when the ion pump cannot guarantee a constant Nernst potential, that is, when the neuronal system is short of energy. The scientific significance is that, compared with the energy consumption of the resting-state network, the increase in energy consumption of the task-related part is small (no more than 5%). Most of what we know and understand about brain function so far comes from this tiny fraction of brain activity; if we want a complete picture of how the brain works, we must also consider the part that consumes most of the energy, namely the innate spontaneous neural activity. Therefore, we need to further investigate the H–H neuron model established at the level of ion channels, and explore the nature of neuronal activity through a comparative study of these two types of neuron models.

The circuit model of H–H equation is shown in Fig. 3.

Fig. 3

Schematic description of H–H model (Wang and Wang 2018b)

Its differential equation is described as

$$C_{m} \frac{{dV_{m} }}{dt} = g_{l} (E_{l} - V_{m} ) + g_{{{\text{Na}}}} m^{3} h(E_{{{\text{Na}}}} - V_{m} ) + g_{{\text{K}}} n^{4} (E_{{\text{K}}} - V_{m} ) + I,$$
(10)

where Cm is the membrane capacitance of the neuron, Vm is the membrane potential, ENa and EK are the Nernst potentials of sodium and potassium ions respectively, and El is the potential at which the leakage current is zero. \(\hat{g}_{{{\text{Na}}}}\) and \(\hat{g}_{{\text{K}}}\) are the variable conductances of the sodium and potassium channels, with \(\hat{g}_{{{\text{Na}}}} = g_{{{\text{Na}}}} m^{3} h\) and \(\hat{g}_{{\text{K}}} = g_{{\text{K}}} n^{4}\), and \(g_{l}\) is the leakage conductance. The variable conductances of the sodium and potassium channels are described by the following set of nonlinear differential equations:

$$\left\{ \begin{gathered} \frac{dn}{{dt}} = \alpha_{n} (1 - n) - \beta_{n} n, \hfill \\ \frac{dm}{{dt}} = \alpha_{m} (1 - m) - \beta_{m} m, \hfill \\ \frac{dh}{{dt}} = \alpha_{h} (1 - h) - \beta_{h} h. \hfill \\ \end{gathered} \right.$$
(11)

Each parameter in the above equation can be found in Tsuda (1991).
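As a numerical illustration of Eqs. (10)–(11), the following Python sketch integrates the H–H equations with a simple forward-Euler scheme. The rate functions and parameter values are the standard textbook set (an assumption on our part; the exact values used in the paper should be taken from the cited literature):

```python
import numpy as np

# Standard H-H rate functions (modern sign convention, V in mV).
alpha_n = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
beta_n  = lambda V: 0.125 * np.exp(-(V + 65) / 80)
alpha_m = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
beta_m  = lambda V: 4.0 * np.exp(-(V + 65) / 18)
alpha_h = lambda V: 0.07 * np.exp(-(V + 65) / 20)
beta_h  = lambda V: 1.0 / (1 + np.exp(-(V + 35) / 10))

C, g_Na, g_K, g_l = 1.0, 120.0, 36.0, 0.3     # uF/cm^2 and mS/cm^2
E_Na, E_K, E_l = 50.0, -77.0, -54.4           # mV

def simulate(I=10.0, T=50.0, dt=0.01):
    """Forward-Euler integration of Eqs. (10)-(11).
    Returns one row (V, i_Na, i_K, i_l) per time step."""
    steps = int(T / dt)
    V, n, m, h = -65.0, 0.317, 0.053, 0.596   # resting state
    out = np.zeros((steps, 4))
    for k in range(steps):
        i_Na = g_Na * m**3 * h * (V - E_Na)   # inward sodium current
        i_K  = g_K * n**4 * (V - E_K)         # delayed outward potassium current
        i_l  = g_l * (V - E_l)                # leakage current
        out[k] = V, i_Na, i_K, i_l
        V += dt / C * (I - i_Na - i_K - i_l)  # Eq. (13)
        n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
        m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
        h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    return out
```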

In the circuit model of the H–H equation, the total energy can be written as follows:

$$W_{all} = C\frac{{dV_{m} }}{dt}V_{m} + i_{{{\text{Na}}}} E_{{{\text{Na}}}} + i_{{\text{K}}} E_{{\text{K}}} + i_{l} E_{l} ,$$
(12)
$$C\frac{{dV_{m} }}{dt} = I - i_{{{\text{Na}}}} - i_{{\text{K}}} - i_{l} ,$$
(13)
$$W_{all} = IV_{m} + i_{{{\text{Na}}}} (E_{{{\text{Na}}}} - V_{m} ) + i_{{\text{K}}} (E_{{\text{K}}} - V_{m} ) + i_{l} (E_{l} - V_{m} ),$$
(14)
$$W_{all} = IV_{m} + (i_{{{\text{Na}}}} E_{{{\text{Na}}}} + i_{{\text{K}}} E_{{\text{K}}} + i_{l} E_{l} ) - V_{m} (i_{{{\text{Na}}}} + i_{{\text{K}}} + i_{l} ).$$
(15)

Here \(IV_m\) is the energy provided by the outside world to the circuit system, \((i_{{{\text{Na}}}} E_{{{\text{Na}}}} + i_{{\text{K}}} E_{{\text{K}}} + i_{l} E_{l} )\) represents the energy provided by the voltage sources, i.e. the Nernst potentials, and \(V_{m} (i_{{{\text{Na}}}} + i_{{\text{K}}} + i_{l} )\) is the energy in the membrane potential difference between the inside and outside of the cell. In the process of a neuron releasing an action potential, if the energy consumed by the change of membrane permeability is not taken into account, the energies involved are, respectively, the energy provided to the neuron by the oxygen and glucose carried in the blood flow (the energy provided by the outside world to the circuit system), the energy in the membrane potential difference, and the biological energy consumed by the ion pump. The increase in glucose consumption due to brain stimulation is mainly due to activation of the sodium–potassium ATP pump (Zheng et al. 2014, 2016; Sokoloff 2008; Maandag 2007). The first two terms describe the relationship between subthreshold neurons and biological energy, and the assisted diffusion of ions through ion channels along the ion concentration difference does not consume energy. From a dynamic perspective, however, during the transformation of subthreshold neurons into functional neurons the sum of these three types of energy is equal to the total energy in the circuit system of the H–H model. The first two types of energy (the energy provided by the outside world and the energy in the membrane potential difference) correspond to \(IV_m\) and \(V_{m} (i_{{{\text{Na}}}} + i_{{\text{K}}} + i_{l} )\) in the circuit, so the energy provided by the Nernst potentials, \((i_{{{\text{Na}}}} E_{{{\text{Na}}}} + i_{{\text{K}}} E_{{\text{K}}} + i_{l} E_{l} )\), equals the biological energy consumed by the ion pump. In fact, in this process the sodium–potassium pump continuously transports ions against their concentration gradients, which directly consumes biological energy: one ATP pumps out three sodium ions and pumps in two potassium ions. This also confirms that, owing to the ion pump, a stable Nernst potential is maintained by the continuous transport of ions, thereby providing energy for neural activity. Therefore, we can calculate the power consumed by the ion pump from the power of the voltage sources represented by the Nernst potentials; that is, the neural energy consumed by neuronal activity is:

$$P = |i_{{\text{K}}} E_{{\text{K}}} | + |i_{l} E_{l} | - |i_{{{\text{Na}}}} E_{{{\text{Na}}}} |.$$
(16)

The negative sign of the third term in Eq. (16) reflects the fact that, in the circuit shown in Fig. 3, the voltage source ENa and the current iNa are directed oppositely to EK, El and iK, il (the sodium current is inward, while the potassium and leakage currents flow outward). For an action potential, we can use the above equation to calculate the neural energy expended.
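The sign bookkeeping of Eq. (16) is then a one-line computation. The sketch below is ours, reusing the textbook Nernst potentials assumed in the simulation sketch above, and can be applied per time step to the currents that simulation returns:

```python
import numpy as np

def ion_pump_power(i_Na, i_K, i_l, E_Na=50.0, E_K=-77.0, E_l=-54.4):
    """Neural energy consumed per unit time, Eq. (16):
    P = |i_K * E_K| + |i_l * E_l| - |i_Na * E_Na|.
    Currents may be scalars or numpy arrays (one value per time step);
    negative values of P mark the energy-absorbing (negative power) phase."""
    return np.abs(i_K * E_K) + np.abs(i_l * E_l) - np.abs(i_Na * E_Na)
```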

The parameter values used in the calculation of Fig. 4 can be found in Wang and Wang (2018b). It can be seen that, although the action potentials of the H–H and W–Z models differ somewhat in waveform (mainly because the W–Z model is constructed at a different level from the H–H model), the neural energy of the H–H model also has a negative power component and has almost the same dynamic characteristics as that of the W–Z neuron model. This result shows that the neuron energy model we proposed has a deep internal connection with the H–H model. From a computational perspective, the H–H neuron model requires calculating the conductances and currents of multiple ion channels, so using H–H neurons to construct networks with large numbers of neurons incurs a high computational cost. If W–Z neurons are used instead to construct hierarchical network models with large numbers of neurons, the greatly reduced computational complexity offers a substantial advantage (see Fig. 4).

Fig. 4

Neuron action potential and corresponding power consumption based on H–H model (Wang and Wang 2018b)

A biological explanation for the negative power component of neuronal energy at the initial stage of firing an action potential can be given as follows.

This is mainly due to local hyperemia caused by neural activity. As blood vessels dilate and blood flow increases, arterial inflow increases, leading to an increase of oxygenated hemoglobin in the vessels (Peppiatt and Attwell 2004). Neurons take up oxygen mainly through oxygenated hemoglobin, but the consumption of O2 does not increase in proportion to the increase in blood flow and O2 supply (Peppiatt and Attwell 2004). Fox et al. observed with PET that the event-induced oxygen extraction fraction (OEF) decreased from 40% at rest to 20%, meaning that 80% of the oxygen delivered during the event was not metabolized physiologically. This suggests that the energy requirements associated with neural activation (compared with resting-state requirements) are small, and that the hyperemic response of cerebral blood flow is influenced by products of non-oxidative metabolism such as lactic acid (Lin et al. 2010). Functional hyperemia plays a direct role in neuronal information processing. Local blood vessels dilate owing to increased blood flow, increased blood volume and increased local vascular pressure. Anatomy shows that neurons and glial cells are located near blood vessels, so the dilation of vessels deforms cell membranes. Membrane deformation caused by mechanical signals such as blood flow, blood volume, pressure and the local dilation and contraction of vessels can regulate mechanosensitive ion channels, thus altering neural activity (Moore and Cao 2008; Lin et al. 2010; Peppiatt and Attwell 2004). For example, in the somatosensory cortex sensory stimulation induced an increase in arteriole diameter with a mean net increase of \(10\sim 15\,\mu m\), and some experimental data suggest that the dilation can exceed \(15\,\mu m\) (Moore and Cao 2008). According to Poiseuille's law, a 23% reduction in vessel diameter results in roughly a threefold decrease in blood flow, while an increase in vessel diameter can produce a four- to fivefold increase in blood flow. In addition, blood flow plays a dominant role in regulating the brain's temperature: local hyperemia lowers brain temperature and reduces the heating effect of neural activity (see Fig. 5) (Moore and Cao 2008).
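The Poiseuille estimates quoted above follow from the fourth-power dependence of volumetric flow on vessel diameter; a two-line check (ours, with illustrative diameter changes) reproduces the quoted magnitudes:

```python
# Poiseuille's law: volumetric flow scales with the fourth power of vessel
# diameter (at fixed pressure gradient, vessel length and blood viscosity).
def flow_ratio(fractional_diameter_change):
    return (1.0 + fractional_diameter_change) ** 4

print(flow_ratio(-0.23))  # ~0.35, i.e. roughly a threefold decrease in flow
print(flow_ratio(+0.45))  # ~4.4, i.e. a four- to fivefold increase in flow
```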

Fig. 5

Cellular/molecular and haemodynamic changes caused by lactate products and calcium waves in astrocyte end feet (Zheng et al. 2014)

At the molecular level, glial cells, the most abundant cell type in the brain, were long thought to serve only to support and nourish neurons, but they have recently been shown to play a crucial role in neural activity. They not only affect the growth and development of neurons, but may also participate directly in the transmission of nerve signals. Astrocytes are the most abundant glial cells, and brain glycogen is mainly stored in astrocytes. Magistretti et al. proposed the astrocyte–neuron lactate shuttle hypothesis (ANLSH) (Pellerin and Magistretti 1994), indicating that astrocytes play a crucial role in neural energy metabolism and hemodynamics. The role of brain glycogen is not yet completely clear, but a large number of studies (Pellerin and Magistretti 1994; Brown 2004a, b; DiNuzzo et al. 2012) have shown that brain glycogen is a very important energy reserve and a material basis of brain activity.

The increased activity of Na+, K+ and Ca2+ channels increases ATP consumption and stimulates ATP production. As shown in Fig. 5, during glycolysis in glial cells one molecule of glucose produces two molecules of lactate and two ATPs, which are used precisely for glutamate uptake and metabolism. A molecule of glutamate, along with three Na+, is taken into the astrocyte via the cotransporter, which activates the Na+/K+ pump to restore the osmotic gradient. Glutamate entering the astrocyte is converted into substances such as glutamine, which is sent back to neighboring neurons. While Na+ uptake is a passive process, activation of the Na+/K+ pump and the conversion of glutamate to glutamine are energy-consuming processes (each consuming one ATP) (Figley and Stroman 2011). Lactate, the product of glycolysis, is transported out of the cell by the lactate shuttle protein on the cell membrane and is then absorbed by neighboring neurons, producing 36 ATP after oxidative metabolism (Sokoloff 2008). It can be seen that, although the increase in blood flow is mainly due to the increased concentration of lactic acid, a product of non-oxidative metabolism, most (98%) of the energy requirement is met by oxidative metabolic pathways (Eikenberry and Marmarelis 2015). Thus when the activity of brain tissue increases, the corresponding energy demand rises rapidly, but cerebral blood flow does not change enough, resulting in a shortage of blood glucose; at this point brain glycogen is rapidly broken down to meet the energy needs of brain tissue activity (Zheng et al. 2014).

The above are the molecular mechanisms of the negative power component caused by local hyperemia in the brain and the mechanical mechanisms of increased blood flow caused by vascular dilation. The process by which relaxation of vascular smooth muscle, triggered by various neurochemical reactions, increases cerebral blood flow and in turn modulates neural activity in the brain is therefore a multidisciplinary problem spanning neurochemistry, neural signal transduction, the nonlinear viscoelastic rheology of cells, the non-Newtonian fluid mechanics of blood and blood vessels, and damage mechanics. Many mechanics problems here remain to be explored and solved by researchers in mechanics (Zheng et al. 2016).

From the perspective of the circuit, the negative power emitted by the voltage sources represented by the Nernst potentials indicates that work is done on the voltage sources by other components of the circuit, mainly by the surrounding glial cells and neurons through various mechanical forces. The discharge stage of the capacitor releases the energy stored in it (corresponding to brain glycogen). The capacitance in the H–H model corresponds to the cell membrane of the neuron, and sodium ions enter the neuron through ion channels under the potential gradient caused by the potential difference across the membrane. This can be regarded as the membrane's stored charge providing energy for the inward flow of sodium ions, which corresponds exactly to the discharge of the capacitor. From the above discussion, the movement of ions corresponds completely to the circuit model (Wang and Wang 2018b).

In Fig. 6, the graphs on the right are enlargements of the corresponding upper and lower graphs. It can be seen from Fig. 6 that when the sodium ion pump cannot provide a stable Nernst potential, subthreshold membrane-potential activity consists mainly of energy consumption. Given that the DMN is coupled with the resting-state networks, this explains why about 95% of the brain's energy expenditure is devoted to intrinsic, spontaneous activity, whereas the neural energy expenditure caused by task stimulation usually accounts for only about 5% of the resting-state energy expenditure.

Fig. 6

Neuron activity under different energy supply states (Wang and Wang 2018b)

In short, there are many theories in the field of neural coding, such as frequency coding, rhythm coding, temporal coding and phase coding. However, they can only be applied to local systems of single or few neurons and to isolated or closed neural networks, whereas actual neural coding must be global, spanning a large range, multiple levels of coupling and the interaction of the relevant brain regions. Energy coding studies neural coding through the energy characteristics of neuronal activity, and therefore has the advantages of being global, economical and efficient (Yuan et al. 2021; Wang and Zhu 2016; Peng and Wang 2021; Wang et al. 2009, 2015b, 2017a, 2018a, b, 2019b; Wang and Wang 2014, 2020; Wang and Zhang 2011; Zhu et al. 2018b). Objectively, the inductance element in the W–Z model not only theoretically demonstrates the contribution of electromagnetic field effects to signal conduction and information encoding, but also provides a theoretical basis for predicting the existence of a previously unknown magnetic substance in the brain. In the ten years after we first proposed the embryonic neural energy model in 2006 (Wang and Zhang 2006), an important paper published in Nature Materials experimentally demonstrated the existence of a magnetic protein, MagR, in the brain, which serves direction finding and orientation in path exploration (Qin and Xie 2016).

In short, complex changes in the action potential and subthreshold membrane potential can reveal the rich dynamic properties of neuronal firing activity. Our study not only reveals the equivalence of two different neuron models, but also uncovers the nature and laws of neuronal firing activity behind these rich dynamic properties and a large amount of experimental data. From a reductionist point of view, this is an important contribution to neuroscience.

2.5 Neural energy mechanism of burst firing and the W–Z neuron model

Burst firing is also one of the common firing patterns of neurons, but does bursting have the properties described above? So far, the cellular mechanism of burst firing and its biological significance remain unclear. Therefore, from the perspective of energy, we proposed a neural energy calculation method based on the Chay bursting model, and analyzed the ionic currents and the energy consumption (power) per unit time with and without stimulation. We found that the power becomes negative during the depolarization phase of a burst, which is consistent with the results of the W–Z neural energy model (Zhu et al. 2019). Furthermore, the energy consumption of neurons in burst-firing mode was minimal, especially in the spontaneous state without stimulation: the total energy consumed by 30 s of burst firing was equivalent to the biological energy consumed by a single action potential. These results suggest that low-energy burst firing is an energy-efficient mode of neural information transmission that follows the brain's strategy of energy minimization. The energy efficiency of neural information transmission is considered an important constraint on neural information processing, and is usually measured by the energy consumed per unit of information. Most previous studies focused on the energy efficiency of individual action potentials; however, neural information is more likely encoded by a spike sequence rather than a single spike, and it has so far been unclear how energy efficiency depends on the firing pattern of a spike sequence. We simulated high-, medium- and low-frequency firing patterns based on the Chay neuron model and examined their energy efficiency. The results show that the medium-frequency mode is more efficient than the high- and low-frequency modes. The sparse burst firing (SBF) mode is the most efficient, because it consumes the least energy while transmitting the same amount of neural information as the high-frequency mode, which consumes much more energy. The SBF mode minimizes energy consumption by limiting the depletion of the potential energy stored in the ion concentration gradients. In addition, combining SBF with single spikes maximizes the neural information carried by the SBF mode, further improving energy efficiency. In conclusion, the nervous system may prioritize limiting energy costs over maximizing information in order to achieve higher energy efficiency (Zhu et al. 2020).
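Energy efficiency in this sense can be operationalized as information transmitted per unit energy. The following sketch is ours; it uses the Shannon entropy of a discretized spike pattern as a simple stand-in for the information measure of Zhu et al. (2020), whose exact estimator we do not reproduce, and all numbers in the example are illustrative only:

```python
import numpy as np

def pattern_entropy_bits(spike_counts):
    """Shannon entropy (bits) of a histogram of spike-pattern symbols."""
    p = np.asarray(spike_counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def energy_efficiency(spike_counts, total_energy):
    """Information per unit energy: larger values mean a cheaper code."""
    return pattern_entropy_bits(spike_counts) / total_energy

# Two firing modes carrying the same information at different energy costs.
print(energy_efficiency([30, 10, 5, 5], total_energy=1.0))  # sparse bursting
print(energy_efficiency([30, 10, 5, 5], total_energy=4.0))  # high-frequency
```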

3 Equivalence between H–H model and W–Z model in structural network

It is known that the neural activity of the brain and its operation are subject to the principles of energy minimization and maximization of signal transmission efficiency (Laughlin and Sejnowski 2003b). This rigorous working mode of the brain has been confirmed by a large body of experimental data. The principle governs the activity of the entire brain, but its role in and contribution to cognition need to be understood further. In order to find the intrinsic correlation and essential connection between cognitive behavior and energy information, it is necessary to construct a series of structural and functional neural networks and to determine under what conditions a structural network transforms into a functional network, which requires large-scale neuroscience modeling and analysis.

3.1 Definition of large-scale neuroscience models based on analytical dynamics

Large-scale neuroscience models are based on the neural energy model, whose construction derives from the theory and methods of analytical dynamics. Their purpose is to obtain quantitatively the global information of the brain's neural activity through the correspondence between neural energy and membrane potential, field potential and network firing rate. Since the global information of the brain can be converted into energy for study and analysis, neural energy coding forms the cornerstone of large-scale neuroscience models. The definition is as follows:

(1) It can analyze and interpret both local and global neural activities of the brain. At the same time, it can be used to construct, analyze and describe the experimental phenomena of neuroscience at all levels from molecules to behavior, and can establish a global brain function model that combines these levels, so that the calculation results at the various levels are no longer incompatible, contradictory or unrelated. (2) The global brain function model can be used to solve the conversion relation between scalp EEG and cortical potentials. It is still difficult to describe large-scale neuronal interactions throughout the brain, and it is currently difficult to record from multiple brain regions simultaneously. Although EEG and MEG can sample neuronal activity from various regions of the brain, it is very difficult to estimate cortical interactions on the basis of these extracranial signals; the main obstacle is the lack of a theoretical tool that can effectively analyze cortico-cortical interactions in high-dimensional space, together with the absence of an established conversion relationship between scalp EEG and cortical potentials. One promising approach to these extremely difficult problems is neural energy theory. (3) It can be used to gain insight into the essence and regularity behind experimental data (such as the dependence between oxygen signals and the state of consciousness, or the meaning and content of EEG brain waves). (4) If a global brain activity model can, under resting conditions, explain the function and energy consumption of the DMN and the resting-state networks, and, under task-induced conditions, explain the transition from the default mode to cognitive networks and the corresponding energy conversion, then it is a global neural model. Moreover, the tunable parameters of a global brain function model must be few and simple.

3.2 Comparison of network calculation results based on H–H-type neuron models

At the level of individual neurons we have proved the equivalence of the H–H and W–Z neuron models, but we still need to establish whether they are also equivalent at the network level. If they are, the W–Z neuron model can be used to study cognition and behavior. As described above, the W–Z model is much simpler than the H–H model in terms of calculation and has the advantage of not losing the main information. Therefore, when studying macroscopic behavioral models related to cognition, details such as ion concentrations and the ionic currents of synaptic connections between neurons need not be considered; the dynamic characteristics of interest are the macroscopic expression of cognitive and behavioral coding patterns.

A simple structural neural network is constructed according to the connection mode of neurons in functional columns of cerebral cortex (Fig. 7).

Fig. 7

Schematic diagram of a fully connected structured network (Wang and Wang 2014)

In the fully connected neural network structure shown in Fig. 7, each neuron is described by the H–H model, and the coding modes under various parameter conditions and the behavioral responses of the network are simulated using two different index systems (Wang et al. 2015b). The research objective is to explore the relationship between the synchronous firing activity of the structural neural network and the network parameters, and to investigate the equivalence of the H–H and W–Z models under the same network structure shown in Fig. 7.

According to the equivalent circuit of the H–H model in Fig. 3, its differential equation and the variable conductances of the ion channels are expressed by Eqs. (10) and (11).

In the circuit model of the H–H equation, the total power of the fully connected network in Fig. 7 can be obtained from the total energy in Eqs. (12)–(15), where IVm is the external energy provided to the circuit system, (iNaENa + iKEK + ilEl) is the energy supplied by the voltage sources represented by the Nernst potentials, and Vm(iNa + iK + il) is the energy in the potential difference between the inner and outer membrane. In the process of these neurons firing action potentials, if the energy consumed by changes in cell membrane permeability is not considered, the energies involved are, respectively, the energy provided to the neurons by the oxygen and glucose carried in the blood, the energy in the membrane potential difference, and the biological energy (ATP) consumed by the ion pump in transporting ions against their concentration gradients; the increased glucose consumption due to brain stimulation is mainly caused by activation of the sodium–potassium ATP pump (Churchland et al. 2002; Rabinovich and Huerta 2006; Jiang et al. 2020). The first two describe the relationship between subthreshold neurons and biological energy, and the assisted diffusion of ions through ion channels along the difference in ion concentration does not consume energy. However, from a dynamic perspective, the sum of these three types of energy is equal to the total energy in the circuit system of the H–H model during the transformation of subthreshold neurons into functional neurons. The first two types of energy correspond to IVm and Vm(iNa + iK + il) in the circuit respectively, so the energy provided by the Nernst potentials (iNaENa + iKEK + ilEl) is equal to the biological energy consumed by the ion pump. In fact, in this process the sodium–potassium pump constantly transports ions against their concentration gradients, directly consuming biological energy: one ATP pumps out three sodium ions and pumps in two potassium ions. This also confirms that the ion pump provides a stable Nernst potential through the continuous transport of ions, thereby providing energy for neural activity. Thus we can calculate the power consumed by the ion pump from the power of the voltage sources represented by the Nernst potentials in the circuit of Fig. 7; that is, the neural energy consumed by neuronal activity is given by Eq. (16).

The negative sign of the third term again reflects the fact that, in the circuit of Fig. 7, the voltage source ENa and the current iNa are directed oppositely to EK, El and iK, il (the sodium current flows into the cell, while the potassium and leakage currents flow outward). For an action potential, we can use the above equation to calculate the amount of neural energy expended. The parameter values used in the calculations are those of Zhu et al. (2018b). In the fully connected neural network, the dynamic properties of each neuron derive from the above H–H model, so the network structure is strictly defined on a neurobiological basis. The anatomy of neuronal connections in the cerebral cortex indicates that, if functional connections are not considered, the neural network inside any brain region, such as a cortical functional column, is a fully connected structural neural network (Gazzaniga et al. 2002). If the cortical functional column is regarded as a closed system and, for simplicity, a local region within it is taken, the network structure of this region can be expressed by the structural neural network of 20 neurons shown in Fig. 7. To understand the energy coding modes of the cortical neural network under different parameters, the connectivity of the neural network is simplified to some extent. The connections between neurons in the figure indicate that they are coupled to each other, but the coupling strength between any two neurons is not identical, nor is it symmetric. According to the principle of synaptic plasticity, statistical data from experiments show that the synaptic coupling strengths between neurons follow a uniform distribution (Rubinov et al. 2011); that is, they satisfy the following matrix:

$$W = \left[ {\begin{array}{*{20}c} {w_{1,1} } & {w_{1,2} } & {...} & {w_{1,n} } \\ {w_{2,1} } & \ddots & {} & {w_{2,n} } \\ \vdots & {} & \ddots & \vdots \\ {w_{n,1} } & {w_{n,2} } & \ldots & {w_{n,n} } \\ \end{array} } \right].$$
(17)

\(w_{i,j}\) represents the coupling strength from the \(j\)th neuron to the \(i\)th neuron, and \(n\) represents the number of neurons.

$$\begin{aligned} Iin(t) & = W \times Q(t - \tau )^{\prime}, \\ I(t) & = Iin(t) + Iext(t). \\ \end{aligned}$$
(18)

Substituting \(I(t)\) into Eq. (10) yields the membrane potential \(V_{im} (t)\), and the power \(P_{i} (t)\) consumed by each neuron is calculated through Eq. (16). Here \(I(t)\) represents the sum of the current stimuli received by a neuron at any time, \(I_{in} (t)\) represents the interaction between neurons, and \(I_{ext} (t)\) represents the influence of external stimuli on the neuron.

$$Q(t - \tau ) = [Q_{1} (t - \tau ),\;Q_{2} (t - \tau ), \ldots ,Q_{j} (t - \tau ), \ldots ,Q_{n} (t - \tau )].$$
(19)

\(Q_{j} (t - \tau )\) represents the firing state of the \(j\)th neuron: for simplicity it is reduced to a 0/1 pulse, equal to 0 at the resting potential and 1 when an action potential is fired. \(\tau\) denotes the time interval between one neuron issuing an action potential and the stimulation arriving at another neuron, i.e., the excitatory transfer delay, whose possible values follow a uniform distribution.
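Eqs. (17)–(19) translate directly into a few lines of numpy. In the sketch below (ours), the uniform coupling matrix, the 0/1 firing-state vector and the external stimulus are illustrative placeholders; the zeroed diagonal (no self-coupling) is our assumption:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
n = 20                                   # neurons in the functional column of Fig. 7
W = rng.uniform(0.0, 1.0, size=(n, n))   # asymmetric coupling strengths, Eq. (17)
np.fill_diagonal(W, 0.0)                 # no self-coupling (our assumption)

def total_input_current(Q_delayed, I_ext):
    """Eq. (18): I(t) = W * Q(t - tau)' + I_ext(t).

    Q_delayed : length-n 0/1 firing-state vector Q(t - tau) of Eq. (19)
    I_ext     : length-n vector of external stimulus currents
    """
    return W @ Q_delayed + I_ext

# Example: some neurons fired tau earlier; constant external stimulus.
Q = rng.integers(0, 2, size=n).astype(float)
print(total_input_current(Q, I_ext=np.full(n, 0.5)))
```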

We use the traditional maximum correlation coefficient of synchronization index and the novel negative energy ratio to measure the synchronization activity of the network (Zhu et al. 2018b).

The average maximum correlation coefficient is defined as follows:

$$\rho_{mean} = \frac{{\sum\nolimits_{i = 1}^{N} {\max (C_{i,1} ,\;C_{i,2} , \ldots ,C_{i,j} , \ldots ,C_{i,n} )} }}{N}\quad (i \ne j),$$
(20)

where \(C_{i,j}\) is the Pearson correlation coefficient between the membrane potentials of the \(i\)th and \(j\)th neurons. The closer the Pearson correlation coefficient between any two neurons is to 1, the stronger the synchronization between them. Previous studies have found that, if the network achieves synchronization under transient stimuli, two or more oscillating groups appear in the steady state. Thus when the maximum-correlation-coefficient index is adopted, the closer its value is to 1, the stronger the synchronization of neurons within an oscillating group, that is, the closer the network state is to common synchronization of multiple groups; the closer the value is to 0, the weaker the synchronization within the oscillating group, that is, only a few neurons are synchronized.

The negative energy ratio is defined as the ratio of the absolute value of the negative energy consumed by the whole neural network from time 0 to time \(t\) to the sum of the absolute values of the positive and negative energies.

$$\alpha (t) = \frac{{\left| {E_{negative} } \right|}}{{E_{positive} + \left| {E_{negative} } \right|}} \times 100\% ,$$
(21)
$$E_{negative} = \sum\limits_{i = 1}^{n} {\int_{0}^{t} {P_{i} (t) \cdot {\text{sgn}} ( - P_{i} (t))dt} } ,$$
(22)
$$E_{positive} = \sum\limits_{i = 1}^{n} {\int_{0}^{t} {P_{i} (t) \cdot {\text{sgn}} (P_{i} (t))dt} } ,$$
(23)

where \(P_{i} (t)\) represents the power consumed by the \(i\)th neuron at time \(t\), and the integral of \(P_{i} (t)\) over [0, \(t\)] represents the energy consumed by that neuron during [0, \(t\)]. \({\text{sgn}} (x) = \left\{ {\begin{array}{*{20}c} {1,} & {x > 0} \\ {0,} & {x \le 0} \\ \end{array} } \right.\) is the indicator-type sign function, and \(E_{negative}\) and \(E_{positive}\) respectively represent the negative and positive energy consumed by the whole neural network in [0, \(t\)].
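Both synchronization indexes are straightforward to compute from simulated traces. A minimal sketch (ours; V and P are arrays of membrane potentials and instantaneous powers, one row per neuron, sampled at interval dt):

```python
import numpy as np

def mean_max_correlation(V):
    """Mean maximum Pearson correlation of Eq. (20).
    V : array of shape (N, T), one membrane-potential trace per neuron."""
    C = np.corrcoef(V)              # N x N Pearson correlation matrix
    np.fill_diagonal(C, -np.inf)    # exclude the i == j terms
    return C.max(axis=1).mean()

def negative_energy_ratio(P, dt):
    """Negative energy ratio of Eqs. (21)-(23), in percent.
    P : array of shape (N, T), instantaneous power of each neuron."""
    E_pos = np.sum(P[P > 0]) * dt
    E_neg = np.abs(np.sum(P[P < 0])) * dt
    return 100.0 * E_neg / (E_pos + E_neg)
```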

The synchrony of network activity is measured by these two indexes, the mean maximum correlation coefficient and the negative energy ratio; the larger the two indexes, the stronger the synchrony of network activity. Figure 8 compares the equivalence of the H–H and W–Z models through the relationship between the number of neurons and neural energy:

Fig. 8

Curves of the negative energy ratio and the maximum correlation coefficient versus the number of neurons, based on the H–H model (a) and the W–Z model (b) (Zhu et al. 2018b)

The figure shows that the greater the number of neurons in the network, the greater the energy demand for synchronous oscillation of the network, that is, the more energy reserve is required; the negative energy ratio precisely reflects the energy stored in network activity. Like the traditional correlation-coefficient method, the negative energy ratio can also reveal the synchronization state of the network, and both models show a positive correlation between the number of neurons and both network synchronization and the negative energy ratio. Moreover, the negative energy ratio does not saturate quickly as the number of neurons increases, so it distinguishes network sizes more effectively, which is one of the advantages of energy coding. It should be especially emphasized that, comparing the left and right panels, for an identical fully connected neuron network the maximum correlation coefficient and negative energy ratio of the H–H and W–Z models are almost the same as the number of neurons increases.

Figure 9 compares the equivalence of the H–H and W–Z models through the relationship between neuronal coupling strength and neural energy.

Fig. 9

Curves of the negative energy ratio and the maximum correlation coefficient versus coupling strength, based on the H–H model (a) and the W–Z model (b) (Zhu et al. 2018b)

Because the coupling strength between neurons affects their information interaction, and this process requires energy to complete, the larger the coupling strength, the more intense the synchronous activity of the network and the stronger the information interaction between neurons; the demand for energy is therefore higher, requiring a larger energy reserve, which is reflected in a higher negative energy ratio. Both models show that the coupling strength is positively correlated with network synchronization and the negative energy ratio. Comparing the left and right panels of Fig. 9, for the same fully connected neural network, as the coupling strength between neurons increases the negative energy ratios of the W–Z and H–H models, though subject to some error, increase with nearly the same trend, and their maximum correlation coefficients are exactly the same.

Figure 10 compares the equivalence of the H–H and W–Z models through the relationship between the excitatory transfer delay and neural energy.

Fig. 10

Curves of the negative energy ratio and the maximum correlation coefficient versus the excitatory transfer delay, based on the H–H model (a) and the W–Z model (b) (Zhu et al. 2018b)

The longer the delay with which presynaptic neurons deliver excitatory neurotransmitter to postsynaptic neurons, the weaker the correlation between presynaptic and postsynaptic activity, the weaker the synchronous activity of the whole network, and the lower the demand for energy; hence less energy is stored in network activity and the corresponding negative energy ratio is lower. The results based on the two different models show similar decreasing curves as the excitatory transfer delay increases. Comparing the left and right panels, it can be clearly seen that, for an identical fully connected neural network, the maximum correlation coefficient and negative energy ratio of the H–H and W–Z models are almost the same as the signal delay time increases.

In addition, in two published papers (Cohen 2017; Zhu et al. 2018b) we carried out extensive studies of network neural coding in the above three cases using both the H–H and W–Z models, and found that the coding modes are almost the same under different parameters. Together with the two different synchronization indexes above, the dynamic characteristics of synchronization are identical under the three conditions of increasing number of neurons, coupling strength and signal delay time. This proves that the H–H and W–Z models are equivalent at the level of structural neural networks.

The results show that the H–H model is suitable for the modeling, analysis and calculation of simple, local neural networks with small numbers of neurons, while the W–Z model is suitable for complex networks with large numbers of neurons. In particular, owing to the correspondence between neural information and neural energy, and the advantages of simple calculation without loss of the main information, neural energy theory and the W–Z neuron model are highly promising research methods for constructing large-scale neuroscience models (Wang and Zhu 2016).

4 Application of W–Z model and neural energy method in functional neural networks

In fact, using the neural energy method to study functional neural networks is not only very effective, but can also give quantitative, scientific explanations for experimental phenomena that neuroscience has so far been unable to clarify.

4.1 Neural mechanisms of cerebral hemodynamic phenomena

Neuroscientists have long been puzzled by the phenomenon of cerebral hemodynamics. The so-called hemodynamic phenomenon is that blood flow to the nervous system always increases significantly about 7–8 s after the cerebral cortex is stimulated (Fox and Raichle 2007a; Peppiatt and Attwell 2004). According to the literature, the neuroscience community has not yet given a scientifically sound theoretical explanation of the neural mechanism of this phenomenon (Moore and Cao 2008; Maandag 2007), nor have there been reports studying the hemodynamic phenomenon through neural modeling and computer simulation of the experimental observations. In order to simulate the blood-flow delay seen in fMRI, we constructed a multi-level neural network based on the W–Z neuron model and used energy coding to obtain the changes in neural energy that produce hemodynamic phenomena. The hemodynamic phenomenon in fMRI, in which cerebral blood flow increases greatly with a lag of 7–8 s behind activation of the neuronal area, was reconstructed quantitatively. Since this study is based on the negative energy mechanism of neuronal activity that we revealed (Wang et al. 2015a), we predict that the essence of the cerebral hemodynamic phenomenon is the existence of a negative energy mechanism in neural activity (Peng and Wang 2021). Recently, based on the anatomical structure of the visual nervous system, we used the neuron energy model to construct a large-scale neural network model consisting of the various visual areas involved in visual information processing, with which we successfully simulated the hemodynamic phenomenon of the visual system in fMRI (Peng and Wang 2021).

The significance of the above study is that it provides a new perspective for exploring the dynamic mechanism of hemodynamic phenomena, and thus important scientific support for establishing a framework for theoretical brain research in the future.

4.2 Application of neural energy coding in brain navigation

Spatial cognition and representation are critical to an animal's survival, for example when navigating to find hidden food or to avoid danger. It is believed that animals form a cognitive map in the brain to solve spatial tasks. The concept of the cognitive map was hypothesized by Tolman (Tolman 1948). Not until 1971 did O'Keefe and Dostrovsky discover the first neural basis of the cognitive map (O’Keefe and Dostrovsky 1971). They reported a type of neuron in the rodent hippocampus that emitted spikes whenever the animal ran through a specific set of spatial locations. This type of neuron is now termed a “place cell,” and the particular subset of locations in the arena to which a place cell responds is called its “place field.” Different place cells correspond to different place fields, which vary in size, shape and center. The environment is thus represented by the population of place cells in the hippocampus, supporting the concept of a cognitive map (Wilson and McNaughton 1993). Animals typically recruit different place cell populations when navigating different environments, suggesting a remapping of spatial representations in the hippocampus across environments (Alme et al. 2014). Beyond these spatial properties, place cells also show a striking temporal feature called phase precession: spike timing progressively advances relative to the phase of the local hippocampal theta oscillation as the rat passes through a typical place field (O’Keefe and Recce 1993). This phenomenon extends the place cell code from a purely spatial domain to a spatial–temporal domain. The hippocampus thus plays an important role not only in spatial representation but also in spatial memory. Place cells are thought to be important for spatial tasks such as path finding, and can also act as route planners.

However, the place cell is only the first piece of the brain's spatial computation system. It is one component of a more general circuit that represents spatial information dynamically (Moser et al. 2008). Since the place cell has particular spatial selectivity, it is natural to ask where the spatial information it receives comes from. The medial entorhinal cortex (MEC), an important upstream region of the hippocampus, has attracted much attention in recent years. A similar type of neuron found in MEC, the grid cell, also responds to the animal's location like the place cell, but its multiple firing fields appear in a periodic fashion, forming a triangular grid pattern covering the entire arena (Sargolini et al. 2006; Hafting et al. 2005). It is hypothesized that projections from grid cells to hippocampal place cells support the generation of place fields. From dorsal to ventral MEC, the spacing between the vertices of the grid pattern increases in a modular manner (Sargolini et al. 2006; Hafting et al. 2005; Fyhn et al. 2004); the positions of the grid vertices vary randomly, but each grid maintains a stable spatial phase (location on the horizontal plane). These spatial features of the grid cell population are believed to form a global spatial coordinate system in the brain, which may be informatively redundant and robust, suggesting that grid cells play a role in path integration (Hafting et al. 2005; Barry et al. 2007), that is, in tracking the locomotion of the animal. Furthermore, there is evidence that the firing fields of grid cells persist in the absence of sensory input, suggesting that the perception of self-movement is the main driver of grid cell activity and implying that animals can continuously track and update their self-location in the environment through the grid coordinate system (Hafting et al. 2005; McNaughton et al. 2006). Grid cells together with place cells constitute a quantitative spatial–temporal representation system for locations, paths, distances and the associated behavioral and episodic memories.

Most of our understanding of the brain's navigation system comes from experiments or theoretical models in two-dimensional (2D) space. Studies are usually conducted on flat, horizontal planes, whereas the real world is three-dimensional (3D), and all animals, to a greater or lesser extent, need to navigate in 3D space. Little is known, however, about how 3D space is encoded in the brain. Do place fields and grid fields have 3D structure? The regularly distributed firing fields of grid cells on the 2D plane constitute a metric system for navigation, and how this hexagonal pattern generalizes to volumetric space is a difficult question. Similarly, no conclusion has been reached about the 3D counterpart of the circular or elliptical place field on the 2D plane. Evidence suggests that grid cell firing patterns on a one-dimensional linear track can be treated as slices through 2D grid lattices (Yoon et al. 2016), implying that grid cell representations may be global and of higher dimensionality than the experimental setup (Finkelstein et al. 2016). However, recordings from rodents do not support this hypothesis (Hayman et al. 2011, 2015). In these experiments, grid cells were recorded while rats navigated in 3D space, or at least a section of volumetric space, such as helical stairs (Hayman et al. 2011) and tilting or even vertical walls (Hayman et al. 2015; Casali et al. 2019). The results show that the grid field maintains its horizontal character but is vertically elongated on the multi-layered helix, and that the grid field on a tilting plane is almost indistinguishable from that on the horizontal plane. Place cell recordings from free-flying bats (Yartsev and Ulanovsky 2013) and grid cell recordings from crawling bats (Yartsev et al. 2011) have also been published, indicating the existence of volumetric place fields in 3D space and hexagonal lattice patterns in the horizontal plane in these mammals. One theoretical analysis suggests that a face-centered cubic lattice is optimal for maximizing 3D spatial resolution (Mathis et al. 2015). Another consideration is that rodents and bats have different natural movement behaviors, so their spatial encoding strategies are not necessarily identical, and conclusions about the spatial representations of different species should be drawn with caution.

The complexity of the spatial computation problem forces us to consider an alternative to electrophysiological experiments, namely neurodynamical modeling, since neural systems exhibit abundant dynamical properties such as oscillation and attractor dynamics. Two main categories of models have been developed to explain the mechanisms that shape grid cell activity, such as periodic firing fields, phase precession and invariance to velocity changes: the oscillatory interference (OI) model and the attractor network model (Giocomo et al. 2011). The first consists of several oscillator pairs. Each pair contains a baseline oscillator and a velocity-modulated oscillator, and the frequency difference between them is governed by the speed and direction (velocity) of the animal's movement; different pairs correspond to different allocentric directions. Each oscillator pair thus continuously tracks, through the phase difference of its two oscillators, the distance the animal travels along a fixed direction, and multiple oscillator pairs are integrated to form a grid pattern (Burgess et al. 2007). The attractor network model arranges neurons on a sheet in which each neuron excites its near neighbors and inhibits neurons farther away. Such a network generates attractor states that represent position. By combining specific input cues with structured recurrent connections, the activity bumps of the grid cell layer move in response to the animal's motion, which guarantees that a periodic pattern is formed for each neuron (Burak and Fiete 2009).

The OI model is often used to simulate grid cell activity in 2D space. The classic OI model consists of several pairs of oscillators, each pair comprising a somatic oscillator and a dendritic oscillator. The somatic frequency is determined by the background theta rhythm, while the dendritic frequency is modulated by velocity input on top of theta, with an increment proportional to the projection of the velocity onto the preferred direction assigned to that oscillator pair. When the preferred directions of the oscillator pairs are separated by 60°, the hexagonal firing pattern of the grid cell is generated. Based on this idea, a gravity-modulated OI model has been proposed to generate grid cell activity in 3D space (Wang et al. 2021c). The preferred directions, fixed in the classic 2D model, become changeable in the new model and can be rotated onto the local plane of movement; this rotation is presumably achieved by receiving the head-direction signal referenced to gravity, with the rotation axis being the intersection line of the horizontal plane and the body plane. In this way the OI model can be modified to simulate grid cell activity in 3D space for crawling animals such as rodents. Simple as it is, the model accounts for the known experimental phenomena in rats, and it also simulates and makes testable predictions of grid cell activity on novel surfaces in 3D space.
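To make the interference mechanism concrete, the following is a minimal sketch of the classic 2D OI model under illustrative parameter choices; the theta frequency, oscillator gain and random-walk statistics below are our assumptions, not values from the cited papers.

```python
import numpy as np

# Classic 2D oscillatory-interference grid model (sketch, after the idea
# of Burgess et al. 2007). Three dendritic oscillators, with preferred
# directions 60 deg apart, lead the somatic theta phase in proportion to
# displacement along their directions; grid firing is the thresholded
# product of the three interference patterns.
rng = np.random.default_rng(0)
dt, T = 0.02, 600.0                        # s
f0, beta = 8.0, 5.0                        # theta (Hz); gain (cycles per m)
dirs = np.deg2rad([0.0, 60.0, 120.0])
d_vecs = np.stack([np.cos(dirs), np.sin(dirs)], axis=1)   # (3, 2)

steps = int(T / dt)
pos = np.zeros((steps, 2))                 # random walk in a 1 m x 1 m box
heading = 0.0
for t in range(1, steps):
    heading += rng.normal(0.0, 0.3)
    step = 0.15 * dt * np.array([np.cos(heading), np.sin(heading)])
    pos[t] = np.clip(pos[t - 1] + step, 0.0, 1.0)

vel = np.gradient(pos, dt, axis=0)         # (steps, 2)
phase_soma = 2 * np.pi * f0 * dt * np.arange(steps)
# dendritic phase = somatic phase + 2*pi*beta * displacement along d_k
proj = np.cumsum(vel @ d_vecs.T, axis=0) * dt             # (steps, 3)
phase_dend = phase_soma[:, None] + 2 * np.pi * beta * proj
interference = np.cos(phase_soma)[:, None] + np.cos(phase_dend)
activity = np.prod(np.maximum(interference, 0.0), axis=1)
# occupancy-normalized binning of `activity` over `pos` yields a
# hexagonal grid map whose spacing is set by beta
```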

Figure 11 shows the grid pattern on the multi-layer helical stairs used to record spatial neurons in experiment (Hayman et al. 2011). In this figure, the three “stripe patterns” (a, Stripe patterns 1–3) are the direct results of the three preferred directions, and the grid pattern is formed by the thresholded product of the stripe activities (a, bottom right). The projection of the grid patterns onto the ground (top view) suggests that the grid cell is insensitive to vertical location, the pattern being similar on each layer of the helical track (b). The histogram of grid firing locations on every layer (from bottom to top: lowest to highest coil of the helix) in c, with firing locations represented by angle on each layer, further confirms this observation. Note that this phenomenon is a key experimental finding (Hayman et al. 2011). The model can also generate grid patterns with different orientations and spatial periodicities (d).

Fig. 11
figure 11

Grid firing analysis on helix track

Grid patterns on other complex surfaces in 3D space were also simulated, and it turns out that grid cell activity can be trajectory-dependent, i.e., different trajectories can produce different grid patterns (Wang et al. 2021d). As shown in Fig. 12, the navigated terrains are a multimodal surface, a saddle surface and a unimodal surface. The two upper rows of Fig. 12 illustrate the stripe and grid (lower right) patterns under random movement, while the two lower rows show the same patterns generated by regular trajectories: zigzag paths for the multimodal and saddle surfaces and a spiral path from top to bottom for the unimodal surface. Under random movement the stripes are very fuzzy and the grids nearly vanish, whereas the patterns generated by regular movement are quite regular. Mathematical analysis indicates that the condition for a trajectory-independent grid pattern is quite stringent: the rotations of the preferred-direction vectors defined at every location on the surface must form a conservative field, and only the horizontal and tilting planes satisfy this condition.

Fig. 12
figure 12

The trajectory-dependency of grid fields on smooth surfaces in 3D space

In 2D space, it is generally believed that grid cells are involved in the formation of the place field, because the hippocampus receives its primary input from MEC; place cells likely integrate inputs from multiple grid cells. Studies have shown that hippocampal pyramidal neurons perform linear summation of synaptic inputs (Cash and Yuste 1999). An elegant mathematical model inspired by the Fourier transform has been proposed that generates a Gaussian-type place field by linearly summing dendritic inputs from several grid cells with designed synaptic weights (Solstad et al. 2006). However, this model represented grid fields by sinusoidal functions defined directly on the 2D plane, did not consider the actual trajectory of the animal, and is not suitable for 3D navigation. Following the gravity-modulated rotation scheme for the preferred directions in the grid cell model, place cell activity can be modeled analogously: grid patterns with different orientations and wavelengths are first generated on the surfaces in 3D space, and these patterns are then summed with similarly designed synaptic weights (Xu et al. 2022b).
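To illustrate the summation scheme in 2D, the sketch below builds a place field as a thresholded, weighted sum of idealized grid fields whose spatial phases are aligned at the intended field center; the module spacings, weight rule and threshold are illustrative assumptions in the spirit of Solstad et al. (2006), not the published parameters.

```python
import numpy as np

def grid_field(x, y, spacing, orientation=0.0, phase=(0.0, 0.0)):
    """Idealized grid field: sum of three plane waves 60 deg apart,
    rescaled to [0, 1], peaking at the given spatial phase."""
    g = 0.0
    for i in range(3):
        ang = orientation + i * np.pi / 3
        k = 4 * np.pi / (np.sqrt(3) * spacing)     # wavevector magnitude
        g = g + np.cos(k * (np.cos(ang) * (x - phase[0])
                            + np.sin(ang) * (y - phase[1])))
    return (g + 1.5) / 4.5

xx, yy = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
center = (0.5, 0.5)                                # intended field center
spacings = 0.3 * 1.42 ** np.arange(5)              # geometric module ladder
weights = 1.0 / spacings                           # smaller modules weigh more
place = sum(w * grid_field(xx, yy, s, phase=center)
            for w, s in zip(weights, spacings))
place = np.maximum(place - 0.8 * place.max(), 0.0) # threshold nonlinearity
# `place` is a single localized bump at `center`; side lobes cancel because
# the differently spaced modules align only at the shared phase
```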

Examples of the simulation results are shown in Fig. 13 (a: helix, b: sphere, c: multimodal surface, d: saddle surface). Trajectories were again generated by random movement. Each sub-figure contains four panels. The first column shows the place field on the manifold; firing rates are color-coded, with warmer colors indicating higher rates. In the upper left panel the rate is calculated for each position along the animal's trajectory, while the lower left panel shows only the rates above threshold (half of the maximum value) for clearer illustration. The upper right panel of each subplot shows the color-coded weight of each grid cell input, and the lower right panel shows the projection of the place field onto the horizontal plane. The feature that the place cell fires at almost the same location in each layer as the rat navigates the helix is consistent with experimental recordings (Hayman et al. 2011). Future experiments could set up similar apparatus to verify the place field patterns predicted by the model.

Fig. 13
figure 13

Place fields on complex 2D manifolds in 3D space

The aforementioned neurodynamical models focused on rodents, which do not move volumetrically; the 3D movement of rodents depends on the environmental apparatus. The bat, by contrast, is a volumetrically navigating mammal with a hippocampal formation. A network model of 3D place cells based on neural energy has therefore been proposed (Wang et al. 2018b); it is a concrete example of the application of neural energy theory in brain science. Neural energy was used to define the place field and its center, and the locating performance and energy consumption characteristics of the place cell system were analyzed. Figure 14a displays the activity patterns of 16 randomly selected cells. The scatter plots in 3D space represent the different locations of the flying bat along a random search trajectory, with the firing power (in nW) of each place cell at the corresponding position coded by color. This energy-based model generates concentrated 3D place fields. The distribution and size of the place fields, as well as the firing powers, vary among place cells; the maximum power is about 3000 nW among these 16 cells, and larger place fields usually have higher maximal power. The model was then used to perform a locating function in 3D space. Figure 14b shows the average locating error with respect to place field size. The locating error does not simply increase monotonically as the place field enlarges: there is always a minimum localization error when the place field is of medium size, so a place field of a reasonable, optimal size performs the localization function most accurately. Notably, a larger place field usually corresponds to greater energy consumption, so a moderate field size (moderate energy consumption) is the optimal solution for locating by the place cell network. This implies a trade-off between the energy consumption and the spatial coverage of the place cell. This energy-coding study validates the brain's principle of energy economy in encoding 3D spatial information.
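The trade-off can be illustrated with a toy decoder; the Gaussian power fields, the noise level and the power-weighted centroid readout below are our illustrative assumptions, not the published network.

```python
import numpy as np

# Toy 3D place-cell population with energy-coded activity: each cell's
# firing power is a 3D Gaussian of position, and location is decoded as
# the power-weighted average of place-field centers.
rng = np.random.default_rng(2)
n_cells = 200
centers = rng.uniform(0, 1, size=(n_cells, 3))     # field centers in a unit box

def decode_error(sigma, n_trials=500, peak_nW=3000.0):
    """Mean decoding error for a given place-field width sigma."""
    err = 0.0
    for _ in range(n_trials):
        x = rng.uniform(0, 1, 3)                   # true position
        power = peak_nW * np.exp(-np.sum((centers - x) ** 2, axis=1)
                                 / (2 * sigma ** 2))
        power += rng.normal(0.0, 30.0, n_cells)    # noisy power readout
        power = np.maximum(power, 0.0)
        x_hat = power @ centers / power.sum()      # weighted centroid decode
        err += np.linalg.norm(x_hat - x)
    return err / n_trials

# very small fields are noise-dominated, very large fields blur the
# estimate; intermediate widths tend to give the smallest error
for sigma in (0.02, 0.05, 0.1, 0.2, 0.4):
    print(sigma, decode_error(sigma))
```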

Fig. 14
figure 14

Place cell network model encoding 3D spatial information constructed by neural energy method (Wang et al. 2018b)

It has been shown that spatial computation can vary among species, especially across dimensionalities. Is there a universal principle behind these differences? Answering this question is another successful application of the neural energy method: neural energy together with information theory provides a new perspective on place cell activity in spaces of different dimensionality. Place cells fire spikes to transmit information about location, but neural activity such as spiking is energetically expensive, so the neural system ought to make full use of every spike to represent the largest possible amount of spatial information. Inspired by this design principle, a theoretical study addressed the spike allocation problem for the place cell: at what locations should spikes be fired (thereby forming the place field) so that a finite amount of total energy achieves the most efficient representation of spatial information (Wang et al. 2019b)? This is a functional optimization problem with constraints, which can be solved by the calculus of variations. The place field of a given species can be treated as a function defined in a space of the corresponding dimensionality, the amount of information is a functional taking the place field as its input function, and the finite amount of neural energy is the major constraint on the place cell. The constrained functional optimization problem is constructed from neural energy in these steps, and the variational method gives the optimal shape of the place field. When the moving trajectory is uniformly distributed in 1D, 2D or 3D space, the optimal spike allocation is a Gaussian-shaped place field in every dimensionality. These results are shown in Fig. 15a (1D space), b (2D space) and c (3D space). Figure 15d exemplifies the maximum information per spike (vertical axis) in 2D space as a function of the spatial variance and the area of the 2D space. When the trajectory distribution changes, the resulting place field is also affected, implying that the animal's natural habitat and movement statistics play important roles in determining the distribution of the place field. This indicates that a bat flying in 3D space and a rat climbing in 3D space may develop differently shaped 3D place fields, which can reconcile the inconsistent place field symmetries found in animal experiments. It is potential evidence that the brain follows certain design principles, such as energy economy and information efficiency, and it is a representative application of neural energy theory.
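Schematically, the problem has the structure of maximizing information per spike under an energy budget; the exact functional used in Wang et al. (2019b) may differ in detail, so the following is a sketch of the setup rather than the published formulation:

\[
\max_{f \ge 0}\; I[f] \;=\; \int_{\Omega} p(\mathbf{x})\,\frac{f(\mathbf{x})}{\bar f}\,\log_2\!\frac{f(\mathbf{x})}{\bar f}\,\mathrm{d}\mathbf{x}
\qquad \text{subject to} \qquad
\varepsilon \int_{\Omega} p(\mathbf{x})\, f(\mathbf{x})\,\mathrm{d}\mathbf{x} \;\le\; E_0 ,
\]

where \(\Omega \subset \mathbb{R}^d\) (d = 1, 2, 3) is the arena, \(p\) the occupancy density of the trajectory, \(f\) the firing-rate (spike-allocation) function, \(\bar f = \int_\Omega p f\,\mathrm{d}\mathbf{x}\) the mean rate, and \(\varepsilon\) the energy cost per spike. Setting the first variation of the Lagrangian \(I[f] - \lambda E[f]\) to zero yields the optimal field; for uniform \(p\), the solution is Gaussian in each dimensionality, as in Fig. 15.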

Fig. 15
figure 15

Optimal place field in different dimensional space constrained by neural energy (Wang et al. 2019b)

Neural energy exhibits very rich behavior at both the single-neuron and network levels. After separating and defining neural energy supply and consumption, calculations suggest that the energy properties of supra-threshold and sub-threshold activity, such as the power synchronization of ion channels and the energy utilization ratio, differ significantly. In particular, the energy utilization ratio can rise above 100% during sub-threshold activity, revealing an overdraft property of energy use (Wang et al. 2017b). The neural energy method has also been used to study the transformation between different types of memory in a neurodynamical model (Wang et al. 2019c). A method was developed to measure the energy input of different stimuli and the corresponding energy consumption of the memory system. The results provide a comprehensive understanding of memory transformation through an energy coding approach and also reveal the energy-efficiency principle of the neural system.

Neural energy is also a promising perspective from which to study cognitive functions such as path finding. Based on the activity patterns of hippocampal place cells, a novel model of the neural energy field gradient has been proposed, in which a mapping among discrete spatial locations, the place cell population and neural energy defines the neural energy field (Wang et al. 2017a). The distribution of firing power across the neuron cluster is used to encode the metric and topological information of space, and the energy field gradient can then serve as a navigational vector. Through the coupling of gradient and noise vectors, the model performs efficient and biologically plausible mental exploration. It is an important example of neural energy serving as an effective tool for studying the cognitive functions of the brain.
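A minimal sketch of the gradient-plus-noise idea follows; the field shape, step sizes and noise level are illustrative assumptions, not the published equations.

```python
import numpy as np

# Gradient-plus-noise navigation on a "neural energy field" (sketch).
rng = np.random.default_rng(1)

def energy_field(p, goal=np.array([0.8, 0.7]), scale=0.15):
    """Firing-power landscape peaking at the goal location."""
    return np.exp(-np.sum((p - goal) ** 2) / (2 * scale ** 2))

def grad(f, p, h=1e-4):
    """Central-difference estimate of the field gradient at p."""
    g = np.zeros(2)
    for i in range(2):
        dp = np.zeros(2)
        dp[i] = h
        g[i] = (f(p + dp) - f(p - dp)) / (2 * h)
    return g

p = np.array([0.1, 0.1])
path = [p.copy()]
for _ in range(500):
    step = 0.05 * grad(energy_field, p) + 0.01 * rng.normal(size=2)
    p = np.clip(p + step, 0.0, 1.0)        # stay inside the unit arena
    path.append(p.copy())
# `path` drifts up the energy gradient toward the goal; the noise term
# provides exploration where the gradient is weak
```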

In conclusion, energy-based large-scale neuroscience models can profoundly reveal the relationship between energy, information and spatial position in the nervous system. Their advantages are as follows:

(1) neural energy can efficiently express the neural coding of cognitive systems in 3D space (Wang et al. 2019b);

(2) energy constraints can be used to maximize the efficiency of information coding (Wang et al. 2018a, 2018b);

(3) energy-based neural coding can improve the efficiency of mental exploration (Wang et al. 2017a).

4.3 Neural energy characteristics of memory switching

Neuroscience provides a qualitative explanation, based on neuroanatomical experiments, of how short-term memory transitions to long-term memory. However, there appears to be no quantitative study of how short-term memory is transferred to long-term memory under different stimulus conditions. We explored the interaction between working memory and long-term memory from the perspective of energy coding, based on a bistable working memory model. Long-term memory was induced in this model using theta burst stimulation (TBS) and high-frequency stimulation (HFS), the protocols that induce LTP in experiments (Zhu et al. 2016a, b). Starting from the physical nature of electrically stimulating the nervous system, we developed a quantitative method to determine the energy input that a stimulus delivers to the nervous system and the corresponding energy consumption, studied the minimum energy consumption of the two long-term memory-inducing stimulation protocols, and defined energy ratios to quantitatively describe stimulus energy efficiency. The results show that both of these commonly used LTP-inducing stimuli can successfully induce long-term memory in the bistable dynamic model. However, analysis of the minimum energy consumption and the energy ratio shows TBS to be a more energy-efficient stimulation mode than HFS, consistent with experimental results (Wang et al. 2019c). The reason may be that TBS pushes up the system response rhythmically, gradually raising it to the high steady state. By combining neural energy and dynamics, this study identified the energy characteristics of dynamic switching in the memory model through the response of the memory system to the two experimentally common stimulation modes (Wang et al. 2019c). This provides strong dynamical evidence for understanding how working memory is transformed into long-term memory, and reflects the high energy efficiency of the nervous system during the formation of long-term memory. This example is another successful application of neural energy coding theory.
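The flavor of such a comparison can be conveyed with a toy bistable unit; everything below (the double-well dynamics, pulse amplitudes and timing, and the integral-of-I-squared energy proxy) is an illustrative assumption, and the published model and its energy definitions are more detailed.

```python
import numpy as np

# Toy bistable memory unit driven by two stimulation protocols.
# Dynamics: dx/dt = x - x^3 + I(t), a double-well potential
# V(x) = x^4/4 - x^2/2 with stable states near x = -1 and x = +1.

def simulate(I, dt=1e-4, T=3.0):
    x = -1.0                       # start in the low (working-memory) state
    for t in np.arange(0.0, T, dt):
        x += dt * (x - x**3 + I(t))
    return x

def hfs(t):
    # continuous 100 Hz train of 2 ms pulses
    return 15.0 if (t % 0.01) < 0.002 else 0.0

def tbs(t):
    # 4-pulse 100 Hz bursts repeated at 5 Hz (theta rhythm)
    return 15.0 if (t % 0.2) < 0.04 and (t % 0.01) < 0.002 else 0.0

dt = 1e-4
for name, proto in (("HFS", hfs), ("TBS", tbs)):
    final = simulate(proto)
    # crude input-energy proxy: integral of I(t)^2 over the protocol
    energy = sum(proto(t) ** 2 for t in np.arange(0.0, 3.0, dt)) * dt
    print(f"{name}: final state {final:+.2f}, input energy {energy:.1f}")
```

With these illustrative parameters both protocols can drive the unit toward the high state, while the burst protocol delivers far less total input energy because of its low duty cycle, which is the qualitative point of the comparison.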

4.4 Dynamics and energy characteristics in spontaneous brain networks

Most current research on the brain, both experimental and theoretical, concentrates on task-related activity. However, spontaneous activity must also be taken into account when considering how the brain works, since spontaneous brain activity consumes most of the brain's energy. The up–down oscillation of membrane potentials, usually characterized by a bistable, bimodal distribution and accompanied by spontaneous spikes during up states, is considered one of the most significant forms of spontaneous activity. How do these spontaneous phenomena occur? How much energy does this kind of activity consume, and what does it imply? Our work on the spontaneous up–down network attempts to answer these questions and to provide a theoretical complement to the study of spontaneous brain networks.

In this work, a network model of spontaneous up–down oscillation was designed; on this basis, the causes of and key factors influencing spontaneous spikes were revealed, along with the energy characteristics of spontaneous bistable networks (Wang et al. 2019c, 2021e). The results of our theoretical study of up–down oscillations, shown in Fig. 16, focus on intrinsic ion channel kinetics and the synaptic transmission process. Figure 16a, b shows that the fast sodium current is critical for the generation of spontaneous spikes, while the persistent sodium current shapes the overall spontaneous fluctuation, with or without external noise. Both influence the spontaneous firing rate and synchronous up–down activity, as illustrated by their combined effect in Fig. 16c. In terms of synaptic transmission, blocking excitatory connections reduces neural spiking yet still leaves spontaneous firing, as recorded experimentally (Sanchez-Vives and McCormick 2000; Compte et al. 2003) (see also Fig. 16d), indicating that some neurons produce spikes spontaneously through intrinsic membrane mechanisms. Furthermore, the energy consumption of the spontaneous up–down network and its characteristics were examined, with results shown in Fig. 17. The energy consumption of neurons in the spontaneous up–down network was calculated; its bistable character and bimodal distribution, shown in Fig. 17a, match the features of the membrane potentials. The temporal and spatial characteristics of energy consumption, which occurs mostly during up states and is concentrated within the neuron rather than in synaptic transmission, are reflected in Fig. 17c. We also compared the energy indicator with other commonly used indicators, such as firing rate and synchronization rate, and demonstrated its effectiveness and robustness as a global indicator (see Fig. 17b). In Fig. 17d, the energy consumption related to stimuli is shown to be much smaller than that of spontaneous activity, indicating that energy consumption is driven by internal spontaneous activity rather than external stimuli, consistent with brain imaging evidence and the view put forward by Raichle (Raichle and Mintun 2006; Fox and Raichle 2007b).

Fig. 16
figure 16

Influence of intrinsic ion channel and synaptic transmission on up and down oscillation. (Color figure online)

Fig. 17
figure 17

Energy characteristics in spontaneous up and down oscillation networks. (Color figure online)

Figure 16a. Spontaneous membrane potential oscillations of two sample neurons, an excitatory neuron (EN) and an inhibitory neuron (IN), with or without fast sodium current. Sample neurons with fast sodium current exhibit spontaneous spiking during up and down oscillation. Figure 16b. Membrane potential oscillations of sample neurons with or without noise input. (A) Membrane potential oscillation of a sample neuron without noise. (B) Membrane potential oscillation of a sample neuron with noise input. Figure 16c. Mean firing rate of neurons in the network controlled by persistent sodium conductance together with fast sodium conductance. (Blue) Mean firing rate controlled by persistent sodium conductance. (Red) Mean firing rate controlled by fast sodium conductance. Figure 16d. Experimental and simulated results show that blocking excitatory synaptic transmission decreases neural firing and reveals spontaneous firing. (A) Extracellular recordings from layer V pyramidal cells: (A1)–(A3) recordings from three example neurons (Sanchez-Vives and McCormick 2000; Compte et al. 2003). (B) Simulated results from our network model before and after blocking excitatory synaptic transmission: (B1)–(B2) results from two example neurons.

Figure 17a. Power and membrane potential are always stable at two states and both show bimodal distributions, whether during spontaneous activity or during continuous external stimulation (10–12 s). (A) Membrane potential distribution of all neurons in the network. (B) Power distribution of all neurons in the network. Figure 17b. Network size-dependent changes of three indicators. (A) Mean synchronization rate for excitatory and inhibitory neurons. (B) Mean firing rate for excitatory and inhibitory neurons. (C) Mean energy consumption for excitatory and inhibitory neurons. (D) Mean energy consumption for all neurons (green dotted line) in the network. Figure 17c. Temporal and spatial characteristics of energy consumption. (A) Membrane potential versus energy consumption plane of a single neuron. (B) Mean ratio of synaptic to total energy consumption across all neurons in the network. Figure 17d. Spontaneous and stimulation-related energy consumption of neurons in the network. (A) Mean energy consumption during spontaneous and stimulated periods. (B) Stimulation-related increases in energy consumption.

We believe these results shed light on the roles of intrinsic sodium currents and synaptic transmission in spontaneous firing and up–down transitions, lay a foundation for further work on spontaneous cortical activity, and will promote the use of energy theory in the study of spontaneous brain activity.

4.5 Biophysical mechanism of interaction between default mode network and working memory network

A question of particular interest to neuroscientists is what causes the enormous, persistent expenditure of brain energy. Is it possible that the current mainstream view of cognitive neuroscience has misled researchers into overlooking the possibility that experiments in neuroscience and cognitive psychology reveal only part of brain activity (Fox and Raichle 2007a)? The answer may lie in the brain's DMN and the resting-state networks. To this end, we first explored the neural mechanism of the antagonism between the DMN and the task-positive network (TPN). The results show that synaptic connection strength has opposite effects on the task-positive network (TPN) and the task-negative network (TNN), leading to the conclusion that the neural mechanism of the antagonism between the DMN and the task-positive network is mutual inhibition at the synaptic level (Cheng et al. 2020).

As shown in Figs. 18, 19 and 20, three different parameters (the NMDA conductance parameter K5 and the Gaussian parameters σ and J+ of the internal preference weights of the pyramidal cell population) strongly control TPN and TNN firing. In particular, the synaptic connection strength J+ plays opposite roles in TPN and TNN; thus the antagonism between them arises from mutual inhibition at the synaptic level.

Fig. 18
figure 18

Firing rate curves of TPN (left) and TNN (right) with different synaptic conductance (Cheng et al. 2020)

Fig. 19
figure 19

Firing rate curves of TPN (left) and TNN (right) with different \(J^{+}\) (Yuan et al. 2021)

Fig. 20
figure 20

Firing rate curves of task-positive (left) and task-negative network (right) with different σ (Yuan et al. 2021)

We further investigated the relationship between DMN activity and working memory load. We found that as the number of stimuli in working memory increased (Fig. 21), the neural activity of the default-mode network decreased more rapidly (Fig. 22), indicating a more difficult task. This suggests that the default-mode network is indispensable to the working memory process.

Fig. 21
figure 21

Average firing rate of excitatory neurons in the working memory network in model 1 (Yuan et al. 2021)

Fig. 22
figure 22

Average firing rate of excitatory neurons in the default-mode network in model 2 under various stimuli during the whole process (Cheng et al. 2020)

The calculated results are in good agreement with the experimental data provided in Hu et al. (2013).

The energy expression of the DMN and the working memory network under coupling is consistent with the conclusion of mutual synaptic inhibition, as shown in Fig. 23.

Fig. 23
figure 23

Firing results of TPN–TNN network after the introduction of AMPA with the same order of magnitude as NMDA. a Firing rate curves of excitatory population in TPN and TNN. Gaussian weight parameters: preferred direction: 180°. Red dotted line is the baseline of TNN firing rate after stimulus withdrawal. The baseline value is 32.21 Hz. Bright blue solid line is the baseline of TPN firing rate after stimulus withdrawal. The baseline value is 21.32 Hz. b Scatter plot of TPN with density color temperature. c Scatter plot of TNN with density color temperature. d Contained energy in TPN and TNN. Red curve is the contained energy of TPN, and the blue one is the contained energy of TNN (Yuan et al. 2021). (Color figure online)

We also studied the control exerted by the NMDA neurotransmitter on the interaction between TPN and TNN, and its energy expression. As shown in Fig. 24, the energies of TPN and TNN under an NMDA-mediated switch in the TPN–TNN model were obtained. Furthermore, the NMDA conductance between TPN and TNN can act as a switch between the different stages of working memory, with good robustness.

Fig. 24
figure 24

Contained energy of TPN and TNN with NMDA switch I. The whole network received stimulation during 750–1000 ms. NMDA channels between TPN and TNN were switched off during 3000–7000 ms (lower right) and switched on for the rest of the simulation time. Preferred direction: 180°, Gaussian weight parameters: \(\sigma = 13.25,\;J^{ + } = 3.62,\;k5 = 95\) (Yuan et al. 2021). (Color figure online)

Particularly interesting is that we coupled TNN1, the DMN of the posterior cingulate cortex (PCC), and TNN2, the DMN of the inferior parietal lobule (IPL), with the working memory network to study and reconstruct the three stages of working memory: encoding, storage and retrieval. It should be emphasized that in the retrieval stage the information of the encoding stage is fully reflected. We found that the antagonism between the DMN and the working memory network is not merely the negative correlation of the traditional view; rather, complex negative and positive correlations appear successively in different brain regions.

Figure 25a shows the scatter color-temperature diagram of the firing rates of the three networks over the whole 9000 ms. Figure 25b shows the energy curves of the three networks in the encoding, maintenance and retrieval phases, respectively. Task-negative network 2 is the bottom blue dotted line, task-negative network 1 the middle magenta dotted line, and the task-positive network the top green solid line.

Fig. 25
figure 25

Simulation results of the whole process of working memory (Yuan et al. 2021). (Color figure online)

TPN is the task-positive network, TNN1 the default network of the posterior cingulate cortex (PCC), and TNN2 the default network of the IPL. Figure 25a: scatter color-temperature diagram of the TPN firing rate at the top, with the firing rates of TNN1 and TNN2 in the presence of working memory in the middle and at the bottom (strong negative activation at the left and right ends and inhibition in the middle part). Figure 25b (energy curves): the top panel is the encoding stage and the middle panel the maintenance stage; note that the green line in the maintenance phase results from processing the encoded stimulus together with past information. The bottom panel is the retrieval (recall) stage. The information of the encoding stage is fully reflected in the recall stage.

4.6 Neural energy as a new view to explain the mechanisms of neuropsychiatric disorders

Some evidence suggests that neuropsychiatric disorders are related to energy metabolism. At the molecular level, many upstream genes associated with energy metabolism are significantly altered in animal models and in human cerebrospinal fluid in major depressive disorder (Abdallah et al. 2014; Ågren and Niklasson 1988; Głombik et al. 2020; Gu et al. 2021; Zuccoli et al. 2017), schizophrenia (Zuccoli et al. 2017; Chase et al. 2015; Duarte and Xin 2019; Martins-de-Souza et al. 2011; Pruett and Meador-Woodruff 2020; Konradi et al. 2004) and bipolar disorder (Zuccoli et al. 2017; Konradi et al. 2004). At the cellular level, studies show that abnormal glial cell activity is also a potential pathological cause of neuropsychiatric disorders (Cui et al. 2018; Dietz et al. 2020), glial cells being responsible for the energy supply of neurons. In addition, many fMRI studies have shown that blood flow velocities and BOLD effects differ in brains with neuropsychiatric disorders (Chen et al. 2011; Forbes et al. 2006; Gur et al. 2002; Jaworska et al. 2015; Zhou et al. 2007), which leads to different neural energy consumption. However, these experiments alone cannot tell how changes at the molecular, cellular and whole-brain levels lead to neuropsychiatric disorders, because the brain is highly coupled across these interdependent components. Since neural energy theory plays a significant role in encoding cognitive activities, could it also offer a new explanation for neuropsychiatric disorders?

This is possible and valuable, but few researchers have worked on it. Although neuropsychiatric disorders are usually accompanied by far more complex intracellular activity (e.g., over- and underexpression of key proteins, interactions of different neurotransmitters and ion channels) than simple cognitive activities, these changes ultimately act on membrane potentials and neural population activity. With biophysical models such as the H–H model (Hodgkin and Huxley 1952), Rall's compartment model (Rall 1962) and neurotransmitter receptor binding models (Destexhe et al. 1995), researchers can simulate membrane potentials using the ion current and neurotransmitter data available from electrophysiological experiments on neuropsychiatric disorders. From there, it is straightforward to calculate the neuronal energy consumption of ion channels, synaptic activity and neuronal activity, as well as the behavior of neuronal populations.
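As an illustration of this pipeline, the sketch below runs a minimal H–H simulation and records a simple per-channel power bookkeeping, namely the Joule dissipation \(I_i(V - E_i)\); the fuller electrochemical accounting of the authors' energy model (Zhu et al. 2018b), including supply terms that can turn net power negative, is not reproduced here.

```python
import numpy as np

# Minimal Hodgkin-Huxley neuron with per-channel power bookkeeping.
# Joule power dissipated in each channel population:
#   P_i = I_i * (V - E_i) = g_i * (V - E_i)^2  >= 0
C = 1.0                                   # uF/cm^2
gNa, gK, gL = 120.0, 36.0, 0.3            # mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4          # mV

am = lambda V: 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
bm = lambda V: 4.0 * np.exp(-(V + 65) / 18)
ah = lambda V: 0.07 * np.exp(-(V + 65) / 20)
bh = lambda V: 1.0 / (1 + np.exp(-(V + 35) / 10))
an = lambda V: 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
bn = lambda V: 0.125 * np.exp(-(V + 65) / 80)

dt, T, Iext = 0.01, 50.0, 10.0            # ms, ms, uA/cm^2
steps = int(T / dt)
V, m, h, n = -65.0, 0.05, 0.6, 0.32
P = np.zeros((steps, 3))                  # per-channel power (nW/cm^2 scale)
for t in range(steps):
    INa = gNa * m**3 * h * (V - ENa)
    IK = gK * n**4 * (V - EK)
    IL = gL * (V - EL)
    P[t] = [INa * (V - ENa), IK * (V - EK), IL * (V - EL)]
    V += dt * (Iext - INa - IK - IL) / C  # forward-Euler membrane update
    m += dt * (am(V) * (1 - m) - bm(V) * m)
    h += dt * (ah(V) * (1 - h) - bh(V) * h)
    n += dt * (an(V) * (1 - n) - bn(V) * n)

E_channel = P.sum(axis=0) * dt            # time-integrated dissipation
print("Na / K / leak energy:", E_channel)
```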

Li's study of major depressive disorder explored the application of neural energy theory to neuropsychiatric disorders (Li et al. 2022b). Li et al. chose the medium spiny neuron (MSN) of the nucleus accumbens (NAc), a key neuronal type in the dopaminergic pathway that is closely related to major depressive disorder, as the research object. Using the H–H model, they first established a computational model of the membrane potential of a single MSN in the depression and normal groups by adjusting several ion channel properties. Then, using the energy model (Zhu et al. 2018b), they calculated the neuronal power and energy consumption (Fig. 26).

Fig. 26
figure 26

Membrane potential, neuronal power and energy in the single-MSN model. b and c show the ‘positive’ and ‘negative’ components (Li et al. 2022a)

Further analyses showed differences in energy coding patterns between the depression and control groups (Fig. 27): (1) the energy cost of the MSN in the MDD group was lower than in the control group; (2) the negative-to-total energy ratio of the MSN in the MDD group was higher than in the control group; and (3) the delay between the power peak and the potential peak in the MDD group was shorter than in the control group. These results are consistent with certain behaviors and can easily be calculated from theoretical models, whereas they are hard to obtain from biological experiments alone. In brief, this demonstrates that neural energy should be considered an important part of decoding the mechanism of major depressive disorder, and it offers a new direction for research on other neuropsychiatric disorders.
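The three statistics can be computed directly from simulated traces; the definitions below (using |P| for the total and the negative part of P for the ratio) are one plausible formalization, not necessarily the exact definitions of Li et al. (2022a).

```python
import numpy as np

def energy_metrics(t, V, P):
    """Three energy statistics from a membrane-potential trace V(t)
    and a power trace P(t) sampled on a uniform time grid t."""
    dt = t[1] - t[0]
    total = np.sum(np.abs(P)) * dt                    # total energy cost
    negative = np.sum(np.abs(np.minimum(P, 0))) * dt  # negative component
    ratio = negative / total                          # negative-to-total ratio
    lag = t[np.argmax(P)] - t[np.argmax(V)]           # power peak vs V peak
    return total, ratio, lag
```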

Fig. 27
figure 27

The abnormalities of MSN under energy model in MDD group (Li et al. 2022a). a The total energy results. b The Negative-to-Positive energy ratio results. c The lag time (between power peak and the potential peak) results (Zhu et al. 2018b)

5 Concluding remarks

This review article has systematically summarized how neural energy links the levels of molecules, cells, networks and behavior, thereby arguing that so-called large-scale neuroscience theory is in fact neural energy theory. Only neural energy theory makes it possible to construct systematic models of global neural activity in the brain and to unify, within one research framework, the respective advantages of reductionism and holism in neuroscience. Only neural energy theory makes it possible to study the interactions among microscopic, mesoscopic and macroscopic neural activity within one theoretical system, from the perspective of global neural coding. It is therefore able to draw out global information about how the brain works through comprehensive comparison of the experimental data obtained at the various levels (Wouapi et al. 2021; Navarro-López et al. 2021; Churchland et al. 2002, 2012; Tsuda et al. 1987, 2004; Tsuda 1991, 1992, 1984, 2001, 2013, 2015; Ebrahimzadeh et al. 2021; Yang et al. 2021a, b, 2022; Jiang et al. 2020; Sharma and Acharya 2021; Wang et al. 2006, 2008, 2009, 2015a, b, 2017a, b, 2018a, b, 2019b, c, 2020, 2021a, b, c, d, e; Clancy et al. 2017; Videbech 2010; Zhang et al. 2019, 2020; Yuan et al. 2022, 2021; Yao and Wang 2019; Maltba et al. 2022; Zhou et al. 2020, 2007; Li et al. 2020, 2022a, b; Kim and Lim 2020; Pfaff and Volkow 2022; Tsukada et al. 1975, 2015; Kaneko and Tsuda 2001; Adachi and Aihara 1997; Aihara et al. 1990; Nara and Davis 1992; Pan et al. 2008, 2014; Tsuda and Kuroda 2001; Fukushima et al. 2007; Kuroda et al. 2009; Yamaguti et al. 2011; Ryeu et al. 2001; Fujii and Tsuda 2004; Tadokoro et al. 2011; Collerton et al. 2016; Bullmore and Sporns 2009; Ullman 2019; Roy et al. 2019; Zeng et al. 2019; Wang and Zhu 2016; Deco et al. 2015; Kanwisher 2010a, b; Wang and Wang 2018a, b, c, 2020, 2014; Ma and Tang 2017; McIntyre et al. 2001; Moore and Cao 2008; Lu et al. 2008a, b; Peng and Wang 2021; Lu 2020; Wang and Pan 2021; Cheng et al. 2020; Fox and Raichle 2007a, b; Balasubramanian 2021; Raichle 2010; Raichle and Mintun 2006; Piccoli et al. 2015; Compte 2000; Wei et al. 2012; Hsieh and Ranganath 2014; Karlsgodt et al. 2005; Williams-García et al. 2014; Fosque et al. 2021; Barbey 2018a, b; Wang et al. xxxx; Laughlin and Sejnowski 2003a, b; Zheng et al. 2022, 2014, 2016; Poirazi and Papoutsi 2020; Lynn and Bassett 2019; Hipp et al. 2011b, c; Raichle et al. 2018; Stender and Mortensen 2016; Kruegera et al. 2009; Stelnmetz et al. 2019; Esterman et al. 2009; Cohen 2017; Breakspear 2017; Johnson and Ray 2004; Nirenberg and Latham 2003; Victor 1999; Jacobs et al. 2009; Malnic et al. 1999; Miyamichi and Luo 2009; Xu et al. 2022a, b; Hu and Wang 2013; Hu et al. 2012, 2013; Fischler-Ruiz et al. 2021; Zhu et al. 2016a, b, 2018a, b, 2019, 2020; Optican and Richmond 1987; Thorpe et al. 2001; Heil 2004; Chase and Young 2007; Zhong and Wang 2021a, b, c; Xin et al. 2019; Insel et al. 2004; Feldman 2012; Wang and Zhang 2011, 2006; Rubin et al. 2012; Panzeri et al. 2015; Stringer et al. 2019; Allen et al. 2019; Gründemann et al. 2019; Qiu et al. 2015; Qin and Xie 2016; Wu et al. 2016; Byrne and Roberts 2009; Lv et al. 2016; Ma et al. 2017, 2019; Liu 2002; Gu and Liang 2007; Haken 1996; Raichle and Gusnard 2002; Maandag et al. 2007; Lin et al. 2010; Peppiatt and Attwell 2004; Eikenberry and Marmarelis 2015; Sokoloff 2008; Maandag 2007; Figley and Stroman 2011; Pellerin and Magistretti 1994; Brown 2004a, b; DiNuzzo et al. 2012; Rong et al. 2020; Gazzaniga et al. 2002; Rubinov et al. 2011; Tolman 1948; O’Keefe and Dostrovsky 1971; Wilson and McNaughton 1993; Alme et al. 2014; O’Keefe and Recce 1993; Moser et al. 2008; Sargolini et al. 2006; Hafting et al. 2005; Fyhn et al. 2004; Barry et al. 2007; McNaughton et al. 2006; Yoon et al. 2016; Finkelstein et al. 2016; Hayman et al. 2011, 2015; Casali et al. 2019; Yartsev and Ulanovsky 2013; Yartsev et al. 2011; Mathis et al. 2015; Giocomo et al. 2011; Burgess et al. 2007; Burak and Fiete 2009; Cash and Yuste 1999; Solstad et al. 2006; Sanchez-Vives and McCormick 2000; Compte et al. 2003; Abdallah et al. 2014; Ågren and Niklasson 1988; Głombik et al. 2020; Gu et al. 2021; Zuccoli et al. 2017; Chase et al. 2015; Duarte and Xin 2019; Martins-de-Souza et al. 2011; Pruett and Meador-Woodruff 2020; Konradi et al. 2004; Cui et al. 2018; Dietz et al. 2020; Chen et al. 2011, 2021; Forbes et al. 2006; Gur et al. 2002; Jaworska et al. 2015; Hodgkin and Huxley 1952; Rall 1962; Destexhe et al. 1995; Déli and Kisvárday 2020). This is the only way out of the dilemma of blind men touching an elephant in neuroscience research.

Finally, we emphasize that the modeling and analysis methods of neural energy theory are based on neuron energy models. All other factors coupled to neuronal and network activity, such as the regulation of networks by blood flow and the regulation of neurons by glial cells, are treated as reflected in the dynamic changes of the neurons or the coupled network.

From the above introduction, we know that the neural energy method can encode not only different stimulus information but also the firing of individual neurons and neural oscillations at different frequencies at the network level, for the following reasons:

(1) As a model of global brain function, the neural energy model can analyze and describe experimental findings at every level of neuroscience, so that results at different levels are no longer incompatible, contradictory or mutually irrelevant. That is to say, neural information can be expressed with energy at the levels of molecules, neurons, networks, cognition and behavior, and at all of these levels combined, so energy can be used to unify neural models across levels (Yuan et al. 2021; Wang and Zhu 2016; Wang et al. 2008).

(2) Neural energy can be used together with the firing patterns of membrane potentials to interpret neural information processing (Chen et al. 2021).

(3) Neural energy can describe the interactions of large-scale neuronal populations throughout the brain (at the combined molecular, neuronal and network levels) (Déli and Kisvárday xxxx), which no traditional neural coding theory can achieve.

(4) It is currently difficult to record damage in multiple brain regions simultaneously. Although EEG and MEG can sample neuronal activity from various regions of the brain, estimating cortical interactions from these extracranial signals is very difficult; the main obstacle is the lack of a theoretical tool that can effectively analyze cortico-cortical interactions in a high-dimensional space, and there is no conversion relationship between scalp EEG and cortical potentials. Neural energy provides an effective solution to these problems.

(5) Since energy is a scalar, the dynamic responses of single neurons and neuronal clusters, of networks and behavior, and of linear and nonlinear neural models can all be described by the superposition of neural energy. Global information about functional neural activity in the brain can thus be obtained, which other traditional coding theories cannot achieve.

(6) The coupled oscillation modes of networks are ever-changing, and there is a correspondence between the coupled oscillations of a neural network and the oscillation of network energy. When large-scale neural network modeling and numerical analysis become intractable because of high-dimensional nonlinear coupling, neural energy coding can be used instead to study neural information processing, making complex neuroinformatics research simple and tractable without loss of information.

Electrophysiological experiments in neuroscience have revealed the relationship between spontaneous brain activity and behavior, but it is difficult to establish the quantitative relationship between behavior and the brain's energy consumption through experiments. The importance of this quantitative relationship lies in the future computation of brain-like agents (Yuan et al. 2021; Peng and Wang 2021).
However, the proposed theory has two potential limitations: (1) techniques for directly measuring brain energy supply and consumption at different temporal and spatial scales need further development to advance the neural energy method; and (2) an energy field model of the whole brain, with its dynamics and its interactions with different stimuli (electric, magnetic or photic), remains to be developed. These limitations are also promising directions for future research. When new measurement techniques can determine the exact amount of energy consumed by a single neuron, a neural circuit or a brain area, a deeper understanding of how neural energy encodes stimuli, behavior and neural activity will be achieved, and a unified energy field model of the whole brain could provide a first-principles perspective on how the human brain interacts with the physical world. Moreover, if one can master the relationship between an agent's behavior and its energy consumption, one can design neural chips for intelligent agents through energy constraints and optimize the relationship between the agent's behavior and the network parameters. Neural energy theory and its computational methods can therefore not only help us to understand deeply the dependence between macroscopic behavior and brain activity, but also, through analysis of the brain's neural energy consumption, capture the global dynamics of information in the brain and provide a parametric basis for the design of agent behavior.