1 Introduction

Several approaches for multimodal information fusion have been proposed and categorized (see Valdes-Sosa et al. 2009; Rosa et al. 2010; Huster et al. 2012; Jorge et al. 2014 for reviews). Brain simulation can be called, as the name suggests, a model-based integration approach. Model-based approaches rely on increasingly realistic, biophysically oriented, neuronal models. In contrast, data-driven approaches rely on signal processing, thereby avoiding the need for computationally expensive simulation of neuronal and neurovascular coupling dynamics. Model-based approaches are typically recognized as symmetrical, meaning that the underlying assumption is that EEG and fMRI measure distinct, only partially overlapping aspects of neuronal activity. Asymmetrical approaches, on the other hand, use information from one modality to guide the analysis of the other, for example, fMRI-informed EEG analysis and EEG-informed fMRI analysis. For example, fMRI was used to guide EEG source imaging by using statistical maps of the fMRI result to confine the putative source space (Dale et al. 2000; Ou et al. 2010). The basic idea underlying such methods is to use fMRI to estimate the location of a neural event, while EEG is used to retrieve the time course of that event. The opposite direction is also possible—using EEG to inform fMRI analyses. For example, BOLD responses were predicted by convolving EEG measures—like the amplitude of an ERP component or the power of a frequency band—with a hemodynamic response function, yielding so-called EEG regressors (e.g., Debener et al. 2006). Asymmetrical approaches are often criticized for relying on the assumption that the fMRI signal at a certain location contains information about electric activity at that location. Or, vice versa, that the neural generators of scalp potentials generate a hemodynamic response that can be detected in the fMRI signal. 
While there is clearly a biological motivation for these assumptions, one can also easily construct scenarios where they might not hold, for example, regions that do not contribute to scalp EEG because of their geometry and orientation, or neural processes that come at no additional metabolic cost, which might be the case when the mean firing rate of a population does not change although important computations are still performed by means of other neural codes. In summary, multimodal data fusion is a considerable challenge that has been the source of much debate, but also of promising avenues toward understanding brain function better than either method allows alone. Clearly, the spatiotemporal co-occurrence of, say, a statistically significant activation in fMRI and an EEG ERP after stimulus presentation is not sufficient for concluding that both are different manifestations of the same neural event. More generally, correlations in EEG and fMRI data that occur time-locked at the same location are not necessarily caused by the same neural event. We simply do not know enough about the underlying neurophysiological processes to draw far-reaching conclusions from the mere observation of correlated signal patterns.
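The EEG-regressor construction mentioned above can be sketched in a few lines: an EEG-derived time course (here, a simulated alpha-band power burst) is convolved with a hemodynamic response function to yield a predictor of BOLD fluctuations. The double-gamma shape below is a common canonical approximation of the HRF; its parameter values are illustrative assumptions, not taken from a specific study.

```python
import numpy as np
from math import gamma

def canonical_hrf(t, peak=6.0, under=16.0, ratio=1.0 / 6.0):
    """Double-gamma HRF approximation (illustrative parameter values)."""
    def gamma_pdf(t, shape):
        t = np.maximum(t, 1e-12)                  # avoid 0 ** negative
        return t ** (shape - 1) * np.exp(-t) / gamma(shape)
    return gamma_pdf(t, peak) - ratio * gamma_pdf(t, under)

dt = 0.1                                          # seconds per sample
hrf = canonical_hrf(np.arange(0.0, 30.0, dt))

alpha_power = np.zeros(600)                       # 60 s of "EEG alpha power"
alpha_power[100:150] = 1.0                        # a 5 s burst starting at t = 10 s
regressor = np.convolve(alpha_power, hrf)[: len(alpha_power)] * dt
```

The resulting regressor peaks several seconds after the alpha burst and shows the characteristic undershoot; in an EEG-informed fMRI analysis it would be entered into a general linear model against voxel-wise fMRI time series.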

Brain simulation-based approaches to the integration of multimodal data are based on a generative computational model of the underlying physiological activity that is thought to give rise to the observed signals in the measured modalities. In the case of EEG–fMRI, such brain models can be divided into two parts. The first part simulates the activity and interaction of neurons or neural populations, like the fluctuation of firing rates, neurotransmitter concentrations, or postsynaptic potentials. The second part provides a mapping between neural activity (e.g., fluctuating postsynaptic potentials) and the representation of this activity as a signal in a given modality (e.g., fluctuating voltage in an EEG channel). Here, we call this second part of the model the forward model to designate its direction from source to data, in contrast to inverse models, which seek a mapping from data to source (note that often the entirety of the two parts is called the “forward model”). For model-based EEG–fMRI integration we need two forward models: one for EEG, to describe the propagation of electrical activity from neural sources to EEG sensors, and one for fMRI, to describe the coupling between neural activity and changes in cerebral blood flow and blood oxygenation. As generative models specify a forward relationship, from model to data, multimodal data integration often seeks model inversion, that is, finding a model that adequately explains the multimodal dataset, which for an existing model corresponds to finding parameter distributions that lead to optimal predictions of the data. In practice, model inversion is rendered difficult by the complexity of realistic neuronal-metabolic-hemodynamic cascades (Rosa et al. 2010). 
Clearly, the performance of brain simulation-based approaches depends not only on the assumptions of the neuronal model, but also on the assumed mapping between simulated neural state variables and measured electrophysiological or hemodynamic signals, which is far from being established (e.g., neurovascular coupling; Logothetis et al. 2001; Raichle and Mintun 2006).

In this chapter we discuss EEG–fMRI integration work that is based on the simulation of the evolution of neuronal states in the entire brain, or at least the entire neocortex. That is, we consider it a brain simulation if there exists an anatomical interpretation, or one-to-one mapping, between the nodes of a brain network model (BNM) and a regional brain parcellation that covers large parts of the brain. Furthermore, we require the state variables of the model to have a biophysical interpretation (like firing rates, electric potentials, or neurotransmitter concentrations) and to be part of a dynamical system, that is, coupled differential equations that model their biophysical interaction. Importantly, with BNMs we require the system to explicitly simulate the coupling and interaction between different neural state variables, like the effects that excitatory and inhibitory current flows, or the average rate of spikes emitted by a population of neurons, have on other populations. This approach is motivated by the idea of using such models to learn something about the unobservable (hidden) neural activity and information processing that underlies observable signals. Because full-brain simulation on a microscopic scale is infeasible in practice (it is possible in principle (Izhikevich and Edelman 2008), but impractical because it consumes enormous resources and is only weakly constrained by empirical data), brain simulation abstracts from detailed simulations of neurons on a microscopic (e.g., multicompartment neuron) level, or from a mesoscopic scale (local connectivity), to a macroscopic scale, in which average ensemble activity is simulated on the basis of a mean-field theory, the basic idea of which is that individual neuron models can be lumped together and represented by an approximation of their mean activity.

2 Brain Network Models

Brain network models rest on the basic assumption that, for understanding brain activity, it is crucial to take into account the dynamic interaction of neurons, or neural populations, along their anatomical connectivity (Fig. 30.1). Generating a mathematical model involves the application of dynamical systems theory to describe coupled neural activity using coupled differential equations (see Breakspear 2017 for a recent review). For example, a system of first-order ordinary differential equations has the form

$$ \frac{\textrm{d}x}{\textrm{d}t}=f(x) $$

where x(t) ∈ ℝd is a vector of dependent variables, f : ℝd → ℝd is a vector field, and dx/dt is the time derivative, which we may regard as describing the evolution of the d-dimensional state variable x(t).
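Such a system can be solved numerically by stepping the state forward along the vector field. A minimal sketch using the forward Euler method, applied to the toy system dx/dt = −x (chosen here because its exact solution, x(t) = x₀e^(−t), allows checking the approximation):

```python
import numpy as np

def euler_integrate(f, x0, dt, n_steps):
    """Integrate dx/dt = f(x) with forward Euler; returns the full trajectory."""
    traj = np.empty((n_steps + 1,) + np.shape(x0))
    traj[0] = x0
    for k in range(n_steps):
        traj[k + 1] = traj[k] + dt * f(traj[k])   # one Euler step
    return traj

# dx/dt = -x from x(0) = 1; after t = 1 the state should be near exp(-1).
traj = euler_integrate(lambda x: -x, np.array([1.0]), dt=0.001, n_steps=1000)
```

Brain simulators use the same principle, though typically with stochastic or delay-aware integration schemes rather than plain Euler.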

Fig. 30.1

From spiking networks to neural mass models to brain network models. (a) Brain network model construction starts by considering networks of spiking neurons that interact via recurrent and feedforward excitatory (AMPA, NMDA) and inhibitory (GABA) connections. (b) Neurons can be organized into populations defined by shared characteristics like similar inputs, outputs, and connectivity. To form neural mass models, average population dynamics, like the evolution of the population’s mean membrane potential or mean firing rate, are described by simplified models that only capture the main modes of these dynamics, while ignoring the details of individual neuron dynamics. (c) Brain network models are constructed by coupling several neural mass models to form a global network (red arrows) of local networks (black arrows). The global network is structured by structural connectivity that specifies the coupling of large-scale brain areas by white-matter fiber bundles; for human BNMs, it is often obtained by diffusion-weighted MRI tractography. (Adapted with permission from Deco et al. 2011)

The underlying idea is to use such a mathematical system of interacting (coupled) differential equations in order to express the temporal dynamics and interaction of interesting quantities of the physical system (state variables). In the brain, quantities of interest are usually the dynamics of physiological processes or entities like membrane potentials, firing rates, sequences of spike times, or synaptic currents. The goal of such modeling approaches is often to express the way these quantities are connected in the real system in terms of physical laws. However, attempts to fully reduce neural activity to more fundamental physical laws within a single model would lead to impractical complexity of the model and might miss the level of emergence at which the relevant explanatory mechanisms actually take place. Therefore, one often seeks a parsimonious model that accomplishes a desired level of explanation or prediction with as few predictor variables as possible.

In order to determine the state of the system at a future time t, the system needs to be solved analytically or numerically. Solving the system yields a trajectory, or orbit, in the phase space of the system that starts from the chosen initial conditions. The phase space can be interpreted as a geometric equivalent of the algebraic form of the system, as its differential equations describe a flow in this space, that is, the temporal change of the system for each possible point in its phase space. Nonlinear evolution equations are, except for rare special cases, not explicitly solvable, and even apparently simple equations can produce complicated behavior such as chaos, which is why the focus is often set on understanding the qualitative behavior of the solutions. Trajectories in phase space often converge toward so-called attractors, which include fixed points (states at which the system does not change) and limit cycles (closed trajectories). For example, the regular, periodic spiking of a single neuron, or the oscillation of brain rhythms like the alpha rhythm (8–12 Hz), can be described by limit cycle attractors.
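A limit cycle attractor can be demonstrated with a two-line vector field. The system below is the supercritical Hopf normal form (an illustrative textbook example, not a model from this chapter): every trajectory, wherever it starts, converges to a closed orbit of radius 1, which is the kind of attractor used to model self-sustained rhythms such as alpha oscillations.

```python
import numpy as np

def hopf_step(state, dt):
    """One Euler step of the Hopf normal form: rotation plus radial pull to r = 1."""
    x, y = state
    r2 = x * x + y * y
    dx = x - y - x * r2
    dy = x + y - y * r2
    return np.array([x + dt * dx, y + dt * dy])

state = np.array([0.1, 0.0])         # start close to the unstable fixed point
for _ in range(20000):               # integrate for 20 time units
    state = hopf_step(state, 0.001)
radius = np.hypot(*state)            # the orbit settles near radius 1
```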

Often, terms of the ordinary differential equations that make up the model are replaced by stochastic processes, which corresponds to adding small random perturbations to the orbits at each time step. Noise lends interesting properties to such stochastic differential equations, like the ability to spontaneously switch between different attractors (e.g., see the study by Freyer et al. 2011, discussed below), which may be identified as a principle underlying neuronal computation. Representing parts of the system by stochastic processes can be used to simplify the model. With these neural ensemble approaches, we assume that for a description of the collective behavior of a large number of neurons, the state of individual neurons is irrelevant, and, furthermore, that the evolution of neuronal states is uncorrelated. By the central limit theorem, this allows us to express the state of a large number of neurons by a Gaussian probability distribution, regardless of the individual distributions of states. Hence, the uncorrelated spiking of a large number of neurons can be reduced to the mean and variance of the average population firing rate, which reflects the response of a population to its total synaptic input. The dynamics of our assumed ensemble can then be described by a so-called Fokker–Planck equation that can be analytically derived from dynamic neuron models and captures the drift of the ensemble's mean and its diffusion, that is, its change in variance. In summary, such a diffusion approximation provides a parsimonious way to reduce the enormous degrees of freedom of direct simulation, by capturing the mean and variance of ensemble activity, based on the assumption that the statistics of local ensembles are Gaussian and uncorrelated. Assuming strong coherence between neurons, we can further drop the diffusion (variance) term from the model and describe collective neural behavior by its mean alone. 
The resulting models are called neural mass models (NMMs), and they form the basis for many brain simulation endeavors (Breakspear 2017; Deco et al. 2008, 2011). Problematically, converging evidence from neuronal recordings suggests that neural population activity is heavy-tailed (the tail corresponding to bursts of activity) rather than Gaussian (Roberts et al. 2015), and that synaptic currents have nonzero time-lagged autocorrelation (Haider et al. 2016; Okun et al. 2010). The accommodation of non-Gaussian distributions within Fokker–Planck equations is an active area of research that can be expected to further drive the development of improved ensemble models.
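Noise-induced switching between attractors, mentioned above, can be illustrated with a toy double-well system (an illustrative example, not a model from this chapter) integrated with the Euler–Maruyama scheme, the standard way of simulating such stochastic differential equations:

```python
import numpy as np

def euler_maruyama(x0, sigma, dt, n_steps, seed=0):
    """Integrate dx = (x - x^3) dt + sigma dW with the Euler-Maruyama scheme.

    The deterministic part has two fixed-point attractors at x = -1 and
    x = +1; additive noise lets the orbit spontaneously switch between them.
    """
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        drift = x[k] - x[k] ** 3                          # deterministic flow
        noise = sigma * np.sqrt(dt) * rng.standard_normal()
        x[k + 1] = x[k] + dt * drift + noise
    return x

# With sufficient noise the trajectory visits the basins of both attractors.
traj = euler_maruyama(x0=1.0, sigma=0.8, dt=0.01, n_steps=100_000)
```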

NMMs are used to simulate the mean activity of local groups of neurons that show coherent behavior, such as pyramidal neurons or inhibitory interneurons. That is, with NMMs, cortical areas are understood as ensembles of strongly interacting excitatory and inhibitory neurons that are clustered into respective subpopulations. Examples of popular neural mass models are the model developed by Jansen and Rit (1995) and the model developed by Wong and Wang (2006). The evolution of population dynamics is then often decomposed into two parts. One part simulates the steady-state response of a neural population to an input in terms of average postsynaptic membrane potential or firing rate. A second part then transforms the model state into an output, like a firing rate, and is often assumed to be instantaneous and to follow a sigmoid function. While biological neurons interact via many individual synapses, in the neural ensemble approach synaptic interaction is, similarly to population activity, understood as the combined effect of many individual synaptic connections. To bridge the scale from a small patch of cortex to the entire brain, ensembles of NMMs are coupled to form mesoscopic circuits (e.g., cortical columns), macroscopic large-scale brain networks, or both. One node of a large-scale brain network is often formed by one excitatory NMM and one inhibitory NMM that are mutually and recurrently coupled. These nodes are then globally coupled to form brain network models consisting of one global network that connects several local excitatory/inhibitory networks. In contrast to BNMs, which treat the cortex as a discrete network of nodes coupled by the connectome, neural field models treat the cortex as a continuous sheet, which additionally involves modeling the change of neural activity along space.
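The instantaneous sigmoid transform mentioned above can be written in one line. The sketch below uses the potential-to-rate sigmoid of Jansen and Rit (1995); the parameter values (e0 = 2.5 s⁻¹, v0 = 6 mV, r = 0.56 mV⁻¹) are the commonly cited ones and should be treated as illustrative defaults.

```python
import numpy as np

def sigmoid_rate(v, e0=2.5, v0=6.0, r=0.56):
    """Map average membrane potential v (mV) to average firing rate (1/s).

    Saturates at 2*e0; fires at half-maximum (e0) when v equals the
    threshold potential v0; r controls the steepness of the transition.
    """
    return 2.0 * e0 / (1.0 + np.exp(r * (v0 - v)))

# At v = v0 = 6 mV the population fires at half its maximum rate (2.5/s).
```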

Problematically, even small network models involve a large number of free parameters to specify their long-range connectivity, that is, the mutual coupling strengths between each pair of nodes. As direct measurements of so-called effective connectivity, the causal influence between ensembles, are scarce for humans, these parameters are often informed either by generic considerations about the potential distribution of such connections or by estimations based on analyses of diffusion-weighted magnetic resonance imaging (dwMRI) data. dwMRI tractography is a model-based technique to reconstruct the trajectories of the long-range white-matter axon bundles connecting brain regions, called structural connectivity. Metrics like the relative volume or the relative diameter of such structural connections are then used as a proxy for effective connectivity. While tractography is widely used, it can be shown that tractography-based connectivity strength estimation assigns higher weights to short and straight connections, while underestimating the impact of long-range and especially interhemispheric connections (Maier-Hein et al. 2017). Further problems are associated with this technique; for example, it does not allow one to determine the direction of the reconstructed axon bundles or the relative proportion of excitatory and inhibitory neurons that are targeted by long-range axons. As a remedy, structural connectivity obtained from dwMRI is often further constrained by aggregated results from invasive animal tract-tracing studies, like the CoCoMac database (Stephan et al. 2001) for monkey data or the Allen Mouse Brain Connectivity Atlas (Oh et al. 2014) for rodents. Having constrained connectivity with relative weights obtained from tractography, these relative weights are scaled by a free parameter that is often obtained by fitting the model to empirical data. 
Likewise, a second free parameter, the conduction velocity, is often used to compute conduction delays from connection lengths.
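The two free parameters just described can be sketched as a small preprocessing step (an assumed generic workflow, not a specific published pipeline): relative streamline counts are scaled by a global coupling factor G, and tract lengths are converted to interareal delays via the conduction velocity.

```python
import numpy as np

def coupling_and_delays(streamline_counts, tract_lengths_mm, G, velocity_mm_per_ms):
    """Turn tractography outputs into BNM coupling weights and delays.

    G and velocity_mm_per_ms are the free parameters that are typically
    fitted to empirical functional data.
    """
    weights = G * streamline_counts / streamline_counts.max()  # scaled relative weights
    delays_ms = tract_lengths_mm / velocity_mm_per_ms          # delay = length / velocity
    return weights, delays_ms

counts = np.array([[0.0, 120.0], [120.0, 0.0]])     # streamline counts (toy values)
lengths = np.array([[0.0, 80.0], [80.0, 0.0]])      # tract lengths in mm
weights, delays = coupling_and_delays(counts, lengths, G=0.5, velocity_mm_per_ms=4.0)
# An 80 mm tract at 4 mm/ms yields a 20 ms interareal conduction delay.
```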

Applying BNMs to the analysis of brain data is an active area of research that has yielded important insights into the neurophysiological mechanisms underlying healthy and pathological brain signals (Bansal et al. 2018; Breakspear 2017). The basic brain simulation approach is didactically illustrated in Bojak et al. (2011) and Ritter et al. (2013). Open-source, freely available software packages like the Python neuroinformatics platform The Virtual Brain (Sanz-Leon et al. 2013) and the MATLAB/C++ software package NFTsim (Sanz-Leon et al. 2018) enable simulation, post-processing, and analysis of BNMs. (Semi-)automatic preprocessing pipelines allow convenient construction of individualized BNMs on the basis of multimodal data like dwMRI, structural MRI, fMRI, MEG, and EEG (Proix et al. 2016; Schirner et al. 2015). Given empirical multimodal data, a brain network model, and EEG/fMRI forward models, underlying neural activity is then estimated by so-called inverse modeling, which means that the parameters of the involved models are optimized until a good fit between simulated and empirical activity is found (Triebkorn et al. 2018). Straightforwardly, this can be realized by brute-force simulation, that is, testing parameter combinations and estimating the fit. While model inversion is an appealing concept, its application involves considerable difficulties in practice, because the inverse problem is ill-posed for complex models like brain models, which involve cascades of dynamical systems with large numbers of free parameters. First, there may be no unique solution, as can be seen from the underdetermination of the source reconstruction problem, which involves a large number of unknowns (current sources) but only a small number of observations (EEG channels). Second, the solution may be unstable with respect to small perturbations in the data, which can be expected from the convoluted and highly nonstationary nature of brain dynamics. 
In practice, this means that heuristic approaches are often the only option for full-brain model inversion.
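Brute-force parameter fitting, mentioned above, can be illustrated with a toy model (a damped oscillation standing in for a costly BNM simulation; the model and fit metric here are illustrative choices): sweep a parameter grid, simulate, and keep the value whose output best matches the "empirical" data.

```python
import numpy as np

def simulate(freq_hz, t):
    """Toy forward model: a damped oscillation at the given frequency."""
    return np.exp(-t) * np.sin(2 * np.pi * freq_hz * t)

t = np.linspace(0.0, 2.0, 500)
empirical = simulate(10.0, t)                 # pretend this is measured data

best_freq, best_fit = None, -np.inf
for freq in np.arange(5.0, 15.1, 0.5):        # brute-force parameter grid
    fit = np.corrcoef(simulate(freq, t), empirical)[0, 1]  # Pearson fit metric
    if fit > best_fit:
        best_freq, best_fit = freq, fit
# best_freq recovers the generating parameter (10 Hz) on this grid.
```

Real BNM sweeps follow the same pattern, but each "simulate" call is an expensive whole-brain simulation, which is why grids are kept coarse or replaced by smarter (e.g., Bayesian) search strategies.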

Despite the involved challenges, brain simulation has found broad success in modeling epilepsy (Proix et al. 2018; Taylor et al. 2013), Alzheimer’s disease (Stefanovski et al. 2018; Zimmermann et al. 2018), brain tumors (Aerts et al. 2018), stroke (Adhikari et al. 2015; Falcon et al. 2016), structural disconnection (Cabral et al. 2012), lesions (Alstott et al. 2009), and plasticity effects (Roy et al. 2014), to cite some examples. Examples of brain simulation to model evoked potentials, resting-state activity, and the alpha rhythm in the context of EEG–fMRI integration are discussed in more detail below.

3 EEG and fMRI Forward Models

EEG forward models

simulate the propagation of electromagnetic fields from sources to EEG sensors (Hallez et al. 2007). To estimate this propagation, usually a multicompartment volume conductor head model is constructed using tessellations of the cortical, skull, and scalp surfaces from structural MRI data; T1-weighted MRI protocols provide adequate contrast to differentiate between the involved tissue types. After specifying a forward model on the basis of a realistic model of head geometry and the spatial arrangement of neural populations, an aggregated mapping can be computed that provides a linear transformation from source to sensor space, called the lead field. Source activity is assumed to be well approximated by the fluctuation of equivalent current dipoles generated by excitatory neurons located in the cortical sheet, which in turn is assumed to be proportional to membrane potential or input current fluctuations. For further simplification, the dipolar orientation of electric sources at each location is often constrained to be normal to the cortical surface, reasoning that pyramidal neurons in cortex are mainly organized in columns oriented roughly perpendicular to the cortical surface; source activity is consequently represented as a dipole layer parallel to the cortical surface. Inhibitory neurons are sparse (~15%), and their dendrites fan out spherically; hence, their contributions to the EEG signal are thought to be negligible. As the main contributor to the EEG signal, the ensemble of postsynaptic potentials of a neural population at a given location is then used as input to the lead field mapping.
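Once the volume conduction problem has been solved, applying the lead field reduces to a single linear map from source amplitudes to channel potentials. A minimal sketch (matrix sizes and values are arbitrary illustrative placeholders, not a real head model):

```python
import numpy as np

rng = np.random.default_rng(42)
n_channels, n_sources, n_times = 64, 1000, 250

# Lead field: one column per cortical source (orientation fixed normal to
# the surface), one row per EEG channel. In practice this comes from a
# boundary- or finite-element solution of the head model.
L = rng.standard_normal((n_channels, n_sources))
s = rng.standard_normal((n_sources, n_times))   # source time courses
y = L @ s                                       # predicted EEG (channels x times)

# Linearity: scaling all source amplitudes scales the EEG prediction identically.
```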

BOLD–fMRI forward models

simulate neurovascular coupling, that is, the relationship between neural activity and associated changes in cerebral blood flow, blood volume, and deoxyhemoglobin content. The mechanistic details of this relationship remain unclear (Iadecola 2017). Empirical data related to neurovascular coupling come from invasive animal studies that combine metabolic/vascular measurements (e.g., using fMRI or optical imaging) with multiunit recordings; such studies revealed significant correlations between hemodynamic and electrophysiological signals, identifying input or subthreshold synaptic activity, rather than output spiking or energy demand, as the primary driver of the local vascular response (see Rosa et al. 2010 for an overview and discussion). Neurovascular coupling is conceptualized by the idea of a “neurovascular unit”: a tightly interacting assembly of neurons, astrocytes, and several other cell types and signaling molecules that detects and responds to the supply needs of neurons (see Iadecola (2017) for a recent and comprehensive review). This conceptual understanding is currently undergoing a shift from a unidimensional process that involves only neuronal–astrocytic signaling to a multidimensional one in which chemical signals engage multiple pathways and effector systems in a highly orchestrated manner. The basic principle underlying neurovascular coupling in this model is that increasing neuronal activity uses increasing amounts of energy, in the form of oxygen and glucose, and that cerebral blood flow (CBF) varies in proportion to the amount of energy utilized in a given brain region. These considerations indicate a feedback model, in which already existing metabolic needs increase blood flow post hoc. Measurements, however, show that the increase in CBF is larger than the need for oxygen, resulting in excess delivery of O2 (Raichle and Mintun 2006), and that increases in CBF occur even under conditions of excess oxygen and glucose (Attwell and Iadecola 2002). 
In light of this evidence, a feedforward model was proposed, which suggests that CBF delivery is regulated by signaling pathways that are initiated by the activation of postsynaptic glutamate receptors and drive the release of vasoactive by-products like K+, nitric oxide, and prostanoids. Both views may be reconciled by a model proposing that an initial (potentially exaggerated) feedforward flow response to neural activity is accompanied by a secondary feedback component, driven by reduced tissue O2, that adjusts CBF to better match the actual metabolic needs of the tissue (Iadecola 2017). Another far-reaching assumption underlying functional brain imaging is that the spatiotemporal correspondence between neural activity and hemodynamic response is sufficiently precise. However, in some regions (e.g., auditory, visual, and cerebellar cortices), the CBF response exceeds the area of activation, which is likely due to nonoverlapping neural and vascular topologies and retrograde vasodilation (see Iadecola 2017 for an overview of related studies); such effects become increasingly relevant as fMRI resolution increases. Mirroring our lack of understanding of the exact nature of neurovascular coupling, different studies often use different forward models. They differ in the inputs to the BOLD forward model, with state variables as diverse as the number of incoming spikes to a neuronal population, the postsynaptic membrane potential, or the synaptic gating amplitude, and in whether input signals are used exclusively or in combination with output signals; output is often not taken into account, following empirical observations (Logothetis et al. 2001).

A straightforward approach to simulate local BOLD signals, often used in the literature, is based on the (linear) convolution of cortical current density with a kernel function that approximates the canonical shape of a hemodynamic response: a brief, time-delayed, and intense positive signal change that peaks approximately 4–6 s after the neural event, followed by an undershoot that peaks after 6–10 s and a slow recovery to baseline. More elaborate approaches, like the Balloon–Windkessel model (Buxton et al. 1998; Friston et al. 2000), explicitly simulate the relationship between blood flow and the measured BOLD signal using a dynamical system description that couples blood flow and blood volume dynamics and relates them to BOLD contrast:

$$ {\displaystyle \begin{array}{ll}{\dot{s}}_i& ={z}_i-{\kappa}_i{s}_i-{\gamma}_i\left({f}_i-1\right)\\ {}{\dot{f}}_i& ={s}_i\\ {}{\tau}_i{\dot{v}}_i& ={f}_i-{v}_i^{1/\alpha}\\ {}{\tau}_i{\dot{q}}_i& =\frac{f_i}{\rho_i}\left[1-{\left(1-{\rho}_i\right)}^{1/{f}_i}\right]-{q}_i{v}_i^{1/\alpha -1}\\ {}{B}_i& ={V}_0\left[{k}_1\left(1-{q}_i\right)+{k}_2\left(1-{q}_i/{v}_i\right)+{k}_3\left(1-{v}_i\right)\right]\\ {}{k}_1& =7{\rho}_i\\ {}{k}_2& =2\\ {}{k}_3& =2{\rho}_i-0.2\end{array}}. $$

This part of the metabolic–hemodynamic cascade is based on the long-standing idea that an increase in neuronal activity zi causes an increase in a vasodilatory signal si, which increases the inflow of blood fi and leads to concomitant changes in blood volume vi, deoxyhemoglobin content qi, and ultimately the BOLD signal Bi. A list of parameters is provided in Table 30.1. A consequence of this forward model is that its parameters can be varied without affecting the temporal dynamics of electric scalp potentials, which means that the fusion of fMRI with EEG for model inversion cannot inform these hemodynamic parameters better than fMRI alone. The reverse is, however, not true: changing the parameters of the neural mass model will affect both EEG and fMRI predictions. Sotero and Trujillo-Barreto (2008) propose a variation of the model from Friston et al. (2000) that postulates, in addition to the direct effect that excitatory activity has on CBF (and consequently cerebral blood volume, CBV), a second effect, namely, that excitatory as well as inhibitory neuronal activity modulates glucose and oxygen consumption, which is supposed to have further downstream effects on CBV (in addition to the glutamate–CBF cascade). The authors note that this model has the interesting property of reconciling observations on the effect of insulin: while BOLD responses were significantly lower under insulin, insulin had no effect on visual evoked potentials (Seaquist et al. 2007); this can be explained by an effect of insulin on glucose- and CBF-related parameters, which, under this model, leaves the EEG signal unaffected (Sotero and Trujillo-Barreto 2008).
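The equations above can be integrated numerically in a few lines. The sketch below uses forward Euler for a single region (subscripts dropped); the parameter values are defaults in the range commonly used in the literature (approximately those of Friston et al. 2000) and should be treated as illustrative.

```python
import numpy as np

def balloon_windkessel(z, dt, kappa=0.65, gamma=0.41, tau=0.98,
                       alpha=0.32, rho=0.34, V0=0.02):
    """Euler integration of the Balloon-Windkessel model for one region.

    z: neural activity time series; returns the BOLD time series B(t).
    """
    s, f, v, q = 0.0, 1.0, 1.0, 1.0            # resting-state initial conditions
    k1, k2, k3 = 7.0 * rho, 2.0, 2.0 * rho - 0.2
    bold = np.empty(len(z))
    for i, zi in enumerate(z):
        ds = zi - kappa * s - gamma * (f - 1.0)            # vasodilatory signal
        df = s                                             # blood inflow
        dv = (f - v ** (1.0 / alpha)) / tau                # blood volume
        dq = (f / rho * (1.0 - (1.0 - rho) ** (1.0 / f))
              - q * v ** (1.0 / alpha - 1.0)) / tau        # deoxyhemoglobin
        s, f, v, q = s + dt * ds, f + dt * df, v + dt * dv, q + dt * dq
        bold[i] = V0 * (k1 * (1 - q) + k2 * (1 - q / v) + k3 * (1 - v))
    return bold

dt = 0.01
z = np.zeros(4000)            # 40 s of neural activity z(t)
z[100:200] = 1.0              # a 1 s burst starting at t = 1 s
bold = balloon_windkessel(z, dt)
# The BOLD response peaks several seconds after the neural burst.
```

Note that the resting state (s = 0, f = v = q = 1) is an exact equilibrium of these equations, so with z = 0 the predicted BOLD signal stays at baseline.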

Table 30.1 Balloon-Windkessel model parameters

It should be stressed, again, that many aspects of the coupling between neurophysiology and EEG, respectively fMRI, are not yet fully understood, and forward models are based on simplifying assumptions, for example, regarding equivalent current dipoles, the exact nature of neurovascular coupling, or the dependence of the hemodynamic response on brain state and anatomical location. Improving forward models is an active area of research. For example, recent approaches aim to retrieve the onset and shape of hemodynamic responses by fitting models to voxel-wise fMRI time series and then use the shape parameters as pathophysiological indicators (Wu and Marinazzo 2016). To summarize, the combination of BNMs with forward models attempts to provide mechanistic interpretations of large-scale neural dynamics over a wide range of temporal scales and for different modalities, on the basis of neural dynamics models that can be derived from simplified spiking-network models. In the following we review some work that applied brain simulation to analyze EEG–fMRI data.

4 Evoked Potentials

Sotero and Trujillo-Barreto (2008) were among the first to perform full-brain simulations for integrating EEG–fMRI and also positron emission tomography. Their model is based on an extension of the neural mass model developed by Jansen and Rit (1995) and Zetterberg et al. (1978) that includes recurrent excitatory connections, to better simulate pyramidal-to-pyramidal connections within a cortical column. The NMM was coupled with a metabolic–hemodynamic forward model that relates excitatory/inhibitory neuronal activity to glucose consumption, subsequent oxygen consumption, and cerebral blood flow (CBF); the output is fed into a Balloon model (Buxton et al. 1998, 2004) to simulate a BOLD signal. In addition, a lead field matrix was used to project the average membrane potentials of pyramidal cells into EEG channel space. This local model, intended to characterize the dynamics in one cortical voxel, was then coupled with other local models belonging to the same cortical area via excitatory and inhibitory short-range connections, in order to simulate one cortical region. Short-range connectivity was modeled by exponential functions that reduce coupling weight with coupling distance. The thalamus was simulated by two populations, one modeling excitatory thalamocortical relay neurons and one modeling thalamic inhibitory reticular neurons. Furthermore, excitatory long-range connectivity, constrained by dwMRI tractography results, was implemented to globally connect regions to form a large-scale brain network. In total, the model consisted of 12 random differential equations (RDEs) for the thalamus and a system of 16 RDEs for each simulated cortical voxel. Sotero and Trujillo-Barreto (2008) compared their model with empirical activity in several ways. First, they showed that their model reproduced realistic visual evoked potentials (VEPs) at the O1 and O2 electrodes and corresponding BOLD signal changes. 
Visual stimulation was simulated by adding a brief transient on top of the Gaussian random input that drives thalamocortical excitatory relay neurons in ten VEP-related regions. Interestingly, the authors observed that the excitation–inhibition ratio of a local population has a modulating effect on the initial dip, the peak of the BOLD signal, and its undershoot. This is important because it indicates that the shape of the hemodynamic response can be determined by factors that are independent of metabolic–hemodynamic coupling and can be explained by neuronal activity alone; studies that aim to infer the shape of the local hemodynamic response should therefore consider such neuronal factors in addition to purported changes of glucose- and CBF-related parameters in current models. In addition to VEPs, Sotero and Trujillo-Barreto (2008) reproduced resting-state alpha activity with the model, which is further described below.

5 Resting-State

Resting-state is characterized by the absence of a task. In fMRI, resting-state typically goes along with strong, coherent fluctuations of BOLD in a frequency range below 0.1 Hz. When brain activity at different locations is correlated, the regions are said to form resting-state networks (RSNs), with the default mode network as, arguably, the most prominent example (Raichle et al. 2001). The brain regions partaking in a given network are said to entertain functional connectivity, which involves coherent activity in widespread cortical and subcortical areas covering the entire brain. In terms of electric activity, like EEG, resting-state networks are often linked with a specific composition of oscillatory band powers, most prominently the alpha rhythm. For example, the default mode network is characterized by increased alpha and beta activity, while prefrontal networks can be associated with high gamma (~30 to 80 Hz) power (Mantini et al. 2007). Importantly, RSNs strongly overlap with the sensory–motor, visual, auditory, attention, language, and default networks that appear during active behavioral tasks; atypical resting-state activity has been linked with atypical brain function (e.g., Rombouts et al. 2005; Seeley et al. 2009).

To better understand the functional role of resting-state activity, it would be helpful to characterize the underlying neurophysiological processes. It may therefore come as a surprise that, almost a century after its discovery and despite a considerable amount of findings on its behavioral correlates, the origin of alpha rhythms is still incompletely understood. The hypotheses that have been proposed typically fall into two categories: (1) rhythms are endogenously produced by thalamic or cortical “pacemaker” cells that function like clocks entraining other cells; and (2) rhythms arise as an emergent property of the interaction of networks of cells, where no single neuron or population is responsible for the rhythm, for example, by means of limit cycle attractors in the network's state space, or through filtering properties of the network that shape white-noise input (Buzsaki 2006).

The works of Robinson et al. (2002), Honey et al. (2007), Ghosh et al. (2008), Deco et al. (2009), Valdes-Sosa et al. (2009), and Freyer et al. (2011) are examples of the pioneering application of BNMs to study the network mechanisms that give rise to the emergence of resting-state-like activity in fMRI, electric activity, or both (see Deco et al. 2011 for a review). The general takeaway from such studies is that coupling parameters that replicate realistic conduction enable neural masses to spontaneously engage in complex activity patterns characteristic of neural time series, including intermittency, phase synchrony, multistability, and spontaneous switching between synchronized cell-assembly formation and desynchronizing bursts. The notion of multistability refers to the property of many dynamical systems of having multiple stable attractors in their phase space, each surrounded by a basin of attraction and separated from the others by unstable equilibria, which, in neural models, often separate a high-activity from a low-activity state. BNM dynamics suggest a resting-state mechanism in which noise or bifurcations enable the system to switch between multiple stable states associated with fixed-point or limit-cycle attractors. In such a multistable phase space, the system can show different kinds of behavior, like steady-state equilibrium or chaotic oscillations, depending on its initial state and the geometry of the phase space. The state can switch between different attractors, driven by perturbations that push it over the boundaries between basins of attraction. That is, noise or stimuli enable the exploration of the state space in the vicinity of multiple stable equilibria, which offers a geometric explanation for seemingly chaotic time series. 
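This switching behavior can be illustrated with a toy model that is much simpler than any BNM: the stochastic double-well system dx = (x − x³)dt + σdW, whose two stable equilibria stand in for a low- and a high-activity state:

```python
import numpy as np

# Stochastic double-well system: stable equilibria at x = -1 and x = +1,
# separated by an unstable equilibrium at x = 0.
rng = np.random.default_rng(0)
dt, sigma, n = 0.01, 0.6, 200_000
x = np.empty(n)
x[0] = 1.0
for i in range(1, n):
    drift = x[i - 1] - x[i - 1] ** 3
    x[i] = x[i - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

# Without noise the state would remain in the basin of x = +1 forever;
# noise drives it across the basin boundary, so it dwells near both
# attractors and switches back and forth at irregular times.
frac_high = np.mean(x > 0.5)
frac_low = np.mean(x < -0.5)
```

The resulting trajectory looks erratic, yet the phase-space picture is simple: long dwell times near each attractor, interrupted by rare noise-induced crossings of the barrier.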
In addition to the intrinsic dynamics of NMMs, structural coupling may give rise to the oscillatory entrainment of the populations’ state variables, which can lead to the emergence of coherent oscillations over a wide range of frequencies, from <0.1 Hz up to 100 Hz, that in turn shape functional network topology. The aforementioned modeling studies show that, with BNMs, realistic functional connectivity in the slow band of resting-state activity can readily be produced on the basis of coupled oscillating population models. Importantly, it was shown that the electric population activity, which serves as input for the BOLD and EEG forward models, likewise resembles empirical data. For example, Valdes-Sosa et al. (2009) simulated a large-scale model of 16,138 regions covering a full-cortex and thalamus parcellation to produce resting-state EEG–fMRI. Nodes were simulated by the classical Jansen and Rit neural mass model (Jansen and Rit 1995) and coupled by structural connectivity estimates computed by dwMRI tractography. Gaussian white noise with mean 20 and variance 2 pulses/s was used to drive excitatory relay populations of the posterior right thalamus. Changing levels of thalamic stimulation were simulated by increasing the input to a mean of 100 pulses/s for a duration of 2 s. In addition to fMRI time series, the ongoing alpha band power fluctuation of simulated local field potentials was computed and convolved with a hemodynamic response function to yield an alpha-regressor for BOLD, that is, a prediction of BOLD based on alpha-band power fluctuation, which is a common technique in empirical EEG–fMRI. For each node of the network, the alpha-regressor was correlated with the node's simulated fMRI time series, mirroring early empirical EEG–fMRI studies (Goldman et al. 2002; Laufs et al. 2003a, b; Moosmann et al. 2003). 
The resulting correlation coefficients were spatially mapped to cortex and thalamus surfaces and the resulting pattern was compared with the pattern obtained by performing the same analysis with empirical EEG–fMRI data. Strikingly, Valdes-Sosa and colleagues found the same general pattern of electric–hemodynamic correlation—positive correlations in frontal cortices and thalamus and negative correlations in occipital regions—for simulated data as for their empirical data.
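The alpha-regressor construction used in such analyses can be sketched as follows; the synthetic data, window length, band limits, and single-gamma HRF shape are illustrative choices, not those of any particular study:

```python
import numpy as np
from math import gamma

fs, T = 200, 300                       # EEG sampling rate (Hz), duration (s)
t = np.arange(T * fs) / fs
rng = np.random.default_rng(1)

# Synthetic EEG: a 10-Hz rhythm whose amplitude is slowly modulated (0.05 Hz).
envelope = 1 + 0.5 * np.sin(2 * np.pi * 0.05 * t)
eeg = envelope * np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal(len(t))

# Alpha power per 1-s window, via the FFT bins between 8 and 12 Hz.
win = fs
freqs = np.fft.rfftfreq(win, 1 / fs)
band = (freqs >= 8) & (freqs <= 12)
power = np.array([np.sum(np.abs(np.fft.rfft(eeg[i:i + win]))[band] ** 2)
                  for i in range(0, len(eeg), win)])

# Simplified single-gamma HRF sampled at 1 Hz (peak around 5 s).
ht = np.arange(0, 25, dtype=float)
hrf = ht ** 5 * np.exp(-ht) / gamma(6)

# EEG regressor: demeaned alpha power convolved with the HRF; in empirical
# work this regressor would enter a GLM to predict voxel-wise BOLD.
regressor = np.convolve(power - power.mean(), hrf)[:len(power)]
```

Correlating such a regressor with the BOLD time series of each voxel or node yields the electric–hemodynamic correlation maps described above.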

To fit BNMs to empirical data, one often varies free parameters that represent, for example, global transmission speed, the global scaling of connection weights, or the level of noise at each region. Such global parameters control the qualitative behavior of the system, that is, the geometry of its phase space, and determine the relative location and stability of equilibria and their attraction domains. Global parameter exploration revealed that the optimal fit between empirical and simulated data lies near the brink of a bifurcation, where the system’s behavior undergoes a qualitative change (Ghosh et al. 2008). At the edge of such a critical instability, the spatial correlations of the noisy excursions are mainly shaped by SC, while the system retains maximal sensitivity to external stimulation and is able to respond efficiently and quickly even to weak inputs that push it into a different state (Deco et al. 2013). In addition to global parameters, the specific topology of the model's structural connectivity, that is, the heterogeneous region-to-region coupling weights and time delays, affects the specific network dynamics and the topology of emerging functional connectivity. Important in this regard is that empirical functional connectivity is not static: functional connections form and dissolve in an ongoing manner, which is called functional connectivity dynamics (FCD) or “switching” between functional networks (Allen et al. 2014). Honey et al. (2007) showed with BNM simulations that, in the electrical activity underlying simulated BOLD, network switching was reflected by alternating periods of elevated or decreased information flow (as measured by transfer entropy) on multiple time scales. 
At the slowest time scale (several minutes), functional coupling was relatively static and a good indicator of the presence of a structural link, while at intermediate time scales (~0.1 Hz), functional couplings fluctuated and gave rise to different functional networks. At the fast time scale of alpha oscillations (~10 Hz), this network switching was then linked to intermittent synchronization and desynchronization between brain regions. That is, resting-state fMRI oscillations and FCD were reflected by the time-varying fast synchronization and desynchronization of population dynamics, which emerged despite the absence of time-varying inputs or variation of connection strengths. Importantly, SC topology had a decisive effect on the emergence of FCD: corrupting SC topology by degree-preserving randomization considerably reduced the fluctuation of inter-regional information flow and consequently FCD. This simulation outcome thereby addresses important open questions regarding the relationships between resting-state activity on different time scales, because not only slow BOLD fluctuations but also fast electric rhythms, as well as their ongoing power modulation, can be explained by noise-driven multistable switching between different attractors.
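Sliding-window functional connectivity, the standard way to quantify FCD empirically, can be sketched on synthetic data with an abrupt connectivity switch (window length and network size are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
n_t, n_roi, win = 400, 4, 50

# Synthetic regional time series with a connectivity switch at t = 200:
# in the first half, regions 0 and 1 share a signal; in the second half,
# regions 0 and 2 do.
ts = rng.standard_normal((n_t, n_roi))
shared1, shared2 = rng.standard_normal(200), rng.standard_normal(200)
ts[:200, 0] += shared1
ts[:200, 1] += shared1
ts[200:, 0] += shared2
ts[200:, 2] += shared2

# Sliding-window FC: one vectorized correlation matrix per window.
iu = np.triu_indices(n_roi, k=1)
fc = np.array([np.corrcoef(ts[i:i + win].T)[iu]
               for i in range(0, n_t - win + 1, win)])

# FCD matrix: similarity (correlation) between the FC patterns of all
# window pairs; a block structure reveals the switch between networks.
fcd = np.corrcoef(fc)
```

In the FCD matrix, windows belonging to the same connectivity regime form a block of high similarity, while window pairs that straddle the switch are dissimilar.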

The ongoing, seemingly chaotic, “waxing and waning” of alpha rhythm power has attracted the curiosity of researchers for many decades, but a biophysical explanation, that is, a theory based on the activity of neurons or neuronal populations, was not established until recently. In contrast to earlier accounts that interpreted neural activity as chaotic, robust analysis of resting-state EEG suggests that cortical activity operates in a multistable regime that can be described as jumping between a high-amplitude 10-Hz oscillation and low-amplitude filtered “noise.” Accordingly, the histogram of empirical alpha power fluctuations is not unimodal but composed of two distinct modes (Freyer et al. 2009). Here, again, dynamical systems theory gave rise to novel impulses for the interpretation of the alpha rhythm’s “erratic” switching between different states. Using a corticothalamic neural field model with realistic parameters, Freyer et al. (2011) were able to reproduce the bimodal distribution of alpha rhythm power fluctuation with a strikingly close match. Specifically, variation of the excitatory corticothalamic coupling parameter controlled a subcritical Hopf bifurcation that enables the coexistence of damped equilibrium behavior and large-amplitude periodic oscillations, separated by an unstable periodic orbit, over a range of physiologically plausible values. In the vicinity of this bifurcation, spontaneous switching between low- and high-amplitude activity was triggered by noise that drove the specific thalamic nucleus.
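The qualitative mechanism, noise-driven switching between a fixed point and a large-amplitude oscillation near a subcritical Hopf bifurcation, can be illustrated with the generic radial normal form dr = (λr + r³ − r⁵)dt + σdW; this is a sketch of the bifurcation scenario only, not of Freyer et al.'s corticothalamic model:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, sigma, dt, n = -0.15, 0.15, 0.01, 400_000

# For -0.25 < lam < 0 the deterministic system is bistable: a stable fixed
# point at r = 0 (low-amplitude "noise" mode) coexists with a stable limit
# cycle at large r (high-amplitude oscillation mode), and the two are
# separated by an unstable periodic orbit.
r = np.empty(n)
r[0] = 0.1
for i in range(1, n):
    drift = lam * r[i - 1] + r[i - 1] ** 3 - r[i - 1] ** 5
    step = r[i - 1] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    r[i] = abs(step)          # the oscillation amplitude r is nonnegative

# The amplitude trace jumps between the two modes at noise-triggered,
# irregular times, producing a bimodal amplitude distribution.
```

The histogram of `r` has one mode near zero and one near the limit-cycle amplitude, mirroring the bimodal alpha power histograms reported by Freyer et al. (2009).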

6 EEG–fMRI (Anti)Correlation

Depending on the settings of excitatory and inhibitory kinetic parameters and the strength of input currents, the Jansen–Rit neural mass model is able to produce 10-Hz oscillations that resemble cortical alpha rhythms. The underlying mechanism is a Hopf bifurcation: when input exceeds a certain threshold, the system’s stable equilibrium point becomes unstable and a stable limit cycle with a frequency close to 10 Hz emerges.
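A minimal sketch of the Jansen–Rit model with its standard parameter values illustrates this alpha-like limit cycle; the constant input value is an illustrative choice within the oscillatory range:

```python
import numpy as np

# Standard Jansen-Rit parameter values (Jansen and Rit 1995).
A, B = 3.25, 22.0            # excitatory/inhibitory synaptic gains (mV)
a, b = 100.0, 50.0           # inverse synaptic time constants (1/s)
C = 135.0
C1, C2, C3, C4 = C, 0.8 * C, 0.25 * C, 0.25 * C
e0, v0, r0 = 2.5, 6.0, 0.56

def S(v):
    """Sigmoid converting average membrane potential (mV) into firing rate (1/s)."""
    return 2 * e0 / (1 + np.exp(r0 * (v0 - v)))

dt, T = 1e-4, 10.0
p = 220.0                    # constant input, inside the oscillatory range
y = np.zeros(6)              # states y0, y1, y2 and their derivatives y3, y4, y5
out = np.empty(int(T / dt))
for i in range(len(out)):
    y0, y1, y2, y3, y4, y5 = y
    dy3 = A * a * S(y1 - y2) - 2 * a * y3 - a * a * y0
    dy4 = A * a * (p + C2 * S(C1 * y0)) - 2 * a * y4 - a * a * y1
    dy5 = B * b * C4 * S(C3 * y0) - 2 * b * y5 - b * b * y2
    y = y + dt * np.array([y3, y4, y5, dy3, dy4, dy5])
    out[i] = y1 - y2         # pyramidal membrane potential, the "EEG" proxy

# Dominant frequency after discarding a 2-s transient: close to 10 Hz.
sig = out[int(2 / dt):] - out[int(2 / dt):].mean()
freqs = np.fft.rfftfreq(len(sig), dt)
fpeak = freqs[np.argmax(np.abs(np.fft.rfft(sig)))]
```

Lowering the input below the bifurcation threshold leaves the system at a stable fixed point; the alpha-like oscillation only exists for sufficiently strong drive.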

With other settings, oscillatory activity in the delta, theta, beta, and gamma bands can also be produced (Fig. 30.2). Sotero and Trujillo-Barreto (2008) simulated alpha activity, computed ongoing alpha band power fluctuation time courses, and convolved them with a hemodynamic response function. In line with similar work, the resulting negative and positive correlation patterns were in agreement with experimental results (Goldman et al. 2002; Laufs et al. 2003a, b; Moosmann et al. 2003) in occipital, temporal, and frontal regions; interestingly, the simulation additionally predicted positive EEG–fMRI correlations in several cortical areas that had not been found by earlier experimental work but were only discovered later (Gonçalves et al. 2006). Sotero and Trujillo-Barreto (2008) studied EEG–fMRI correlations for power-modulated alpha by stimulating the thalamus with 2-s pulse trains of 10⁴ action potentials per second and 20-s intervals between pulse trains. This time, positive BOLD–alpha correlations were found in the thalamus and the cuneus, whereas negative correlations were obtained for all other areas. Next, the authors studied alpha desynchronization by delivering an excitatory 2-s pulse of 10⁶ action potentials per second to the thalamus to represent visual stimulation. The effect of this stimulation was a shift of the EEG power spectrum from the alpha band toward higher frequencies, accompanied by an increase of CBF and BOLD amplitudes, which the authors interpret as consistent with the finding that eye-opening increases CBF in the visual cortex (Raichle et al. 2001). To summarize, stimulation with a lower frequency (10⁴ AP/s) caused a mere shift of the alpha peak, while stimulation with a higher frequency (10⁶ AP/s) led to alpha desynchronization, indicating that increased neural activity is associated with decreased alpha power. 
The idea that neural activation causes a shift in the EEG spectrum toward higher frequencies accompanied by an increased BOLD amplitude is consistent with a hypothesis formulated by Kilner et al. (2005) that is based on a simple heuristic starting from dimensional analysis of electric and hemodynamic data. To test this hypothesis with explicit simulation, Sotero and Trujillo-Barreto (2008) increased input frequencies in steps from 10⁵ to 10⁶ AP/s and found an almost linear relationship between EEG spectral mass and the maximum BOLD amplitude. Having found that increased neural activity increased BOLD but decreased alpha, the authors went on to study two scenarios for creating a dip in the BOLD signal. In their first experiment, they delivered a 2-s pulse to thalamocortical excitatory relay neurons and doubled the value of the connection strength from these neurons onto cortical inhibitory neurons. As may be expected, this coupling parameter change resulted in high levels of inhibitory activity, while excitatory activity was consequently reduced, which caused a decrease in the BOLD signal. While a changing level of excitation versus inhibition surely is a candidate explanation for BOLD signal fluctuation, synaptic scaling by 100% over a short time scale of a few seconds seems unlikely (Zenke et al. 2017). In their second experiment, thalamocortical excitatory relay cells were inhibited and cortical inhibitory interneurons were excited, and the result was studied at the left and right occipital poles. As a result of this stimulation, inhibitory activity increased and excitatory activity decreased below baseline levels, accompanied by a decrease of BOLD amplitude. 
While these results are in agreement with experimental data, indicating that a substantial component of the negative BOLD response originates from decreased neuronal activity, it remains unclear what exactly causes this increase in inhibition relative to excitation (apart from implausible connection weight changes or selective stimulation) and how it is related to the alpha rhythm. That is, how can the decrease of fMRI signals be reconciled with increased alpha rhythm power? Or more precisely: how would a large alpha rhythm inhibit cortical activity?

Fig. 30.2

Depending on its excitatory and inhibitory synaptic kinetic parameters τe and τi, the Jansen–Rit model produces delta (1–4 Hz), theta (4–8 Hz), alpha (8–12 Hz), beta (12–30 Hz), or gamma (>30 Hz) oscillations (left). Examples of simulated signals are shown on the right-hand side. (Reproduced with permission from David and Friston 2003)

7 From EEG–fMRI to Neural Activity

The observation that increasing EEG alpha power is often spatially co-localized with decreasing fMRI amplitudes (Goldman et al. 2002; Laufs et al. 2003a, b; Moosmann et al. 2003) seems counterintuitive. Why should stronger or more synchronous fluctuation of input currents and postsynaptic potentials be less metabolically demanding? Adding to the puzzle are multiunit recordings in monkeys showing that not only alpha power but also alpha phase was negatively correlated with neural firing and task performance (Haegens et al. 2011). That is, firing of neurons was at its maximum even though input currents and membrane potentials in their vicinity were at their minimum. Of course, these findings are not entirely surprising when viewed from the “historical” perspective of alpha as an “idling rhythm” or from the perspective of more recent hypotheses termed “gating by inhibition” and “pulsed inhibition” (Jensen and Mazaheri 2010; Klimesch et al. 2007). These theories ascribe to alpha important functional roles related to information processing, attention, perceptual awareness, and cognitive performance, because all of these cognitive phenomena appear to be rhythmically modulated by it (Busch et al. 2009; Klimesch 1999; Mathewson et al. 2009).

Two findings seem to prevail: first, alpha power decreases, or shifts to higher frequencies, during task performance in regions related to task execution; second, stronger alpha decreases at the moment of task execution correlate with better task performance. In other words, alpha desynchronization correlates with brain activation, while alpha synchronization correlates with inhibition of brain areas that are not relevant for the task at hand at that particular time (Jensen and Mazaheri 2010; Klimesch 1999). Regardless of its functional interpretation, the anticorrelation between neural firing and alpha, as well as the anticorrelation between BOLD and alpha, invite the conclusion that alpha rhythms have the ability to decrease neural activity. The question is: how would this process be neurophysiologically implemented? To better understand the involved neurophysiological processes, Becker et al. (2015) used a BNM consisting of Stefanescu–Jirsa 3D NMM populations (Stefanescu and Jirsa 2008) to simulate neural firing, LFP, and BOLD of cortex, reticular nucleus, and thalamus. Tuning global coupling strength yielded band-limited oscillations in the alpha range, and the band-power time course showed alpha-typical fluctuations (“waxing and waning”). Comparisons showed that both neural firing and fMRI had an inverse relationship with the alpha power time course, reproducing the aforementioned empirical observations. Furthermore, and in line with invasive recordings (Haegens et al. 2011), firing was inversely related to alpha phase. Importantly, this result from Becker et al. (2015) shows that firing and fMRI are negatively correlated with alpha even without any change of coupling strengths, which was the proposed mechanism in Sotero and Trujillo-Barreto (2008). 
Nevertheless, variation of network coupling strengths clearly has an effect on alpha: the negative correlation between firing, fMRI, and alpha vanished for uncoupled neural masses, which indicates that the presumed inhibitory effect of alpha is brought about as a network effect by several connected populations. Also in line with the previously mentioned work, an increase of excitatory long-range coupling strengths led to decreased alpha power, accompanied by a shift of oscillatory power toward higher frequencies, which is compatible with the idea that neural activation is indexed by a loss of power in lower frequencies in favor of higher frequencies (Kilner et al. 2005).

In summary, the reviewed studies support the ideas that (1) the inverse relationship between alpha and firing may be caused by alpha; (2) alpha’s inhibitory effect on neural activity depends on network interaction; and (3) a change of coupling strengths is not required to bring about the inhibitory effect. Firing inhibition without the necessity of changing connection strengths allows for a functional interpretation of alpha as a mechanism for temporarily inhibiting or dynamically uncoupling regions from neural processing, as formulated in “gating by inhibition” hypotheses. It is, however, still unclear how such an inhibitory effect can be brought about.

8 From EEG–fMRI to Neural Mechanisms

A possible explanation for the mutual (anti)correlations between neural firing, fMRI, and the alpha rhythm was recently proposed in a study that uses a novel form of brain simulation based on injecting BNMs with empirical EEG source activity to predict simultaneously measured fMRI (Schirner et al. 2018). The advantage of this “hybrid” modeling approach is that not only model structure but also model dynamics are constrained by empirical data.

Upon injecting the BNMs of 15 subjects with their resting-state EEG source activity, the hybrid model was able to predict each subject’s individual whole-brain, large-scale fMRI time series (Fig. 30.3). By fitting BNMs to fMRI time series, that is, to dynamical instead of static features of neural activity (like FC or power spectra), this approach allows the study of neural population dynamics, like firing rate or synaptic activity, that are usually hidden from direct observation. That is, because the forward modeling of BNM activity produced signals that closely correlate with the subjects’ moment-to-moment EEG–fMRI activity, model activity may enable the study of the neurocomputational processes underlying the measured EEG–fMRI signals. The approach therefore provides a natural way to integrate EEG–fMRI, with the benefit of explaining the observed EEG–fMRI dynamics in terms of neurophysiological activity.

Fig. 30.3

Example time series of hybrid BNM simulation results. Upper trace: Hybrid BNMs predict resting-state fMRI time series on the basis of injected EEG source activity. The fMRI prediction is related to the anticorrelation of ongoing alpha power fluctuation of the injected EEG with fMRI (second trace from above), but hybrid model simulation yielded significantly better fits with observed fMRI than alpha-based regressors alone. Importantly, the intrinsic activity of fitted models, like firing rates or synaptic gating variables, may reveal neurodynamic processes that are hidden from direct observation with noninvasive techniques like EEG and fMRI. Models that are injected with randomly permuted activity (third trace from above) or regular, noise-driven BNMs (lowest trace) cannot recapture the specific ongoing dynamics of fMRI activity. (Reproduced from Schirner et al. 2018)

In contrast to “regular” BNMs, which are typically driven by noise, in hybrid models noise terms are replaced by EEG source activity. While this additional component corrupts the autonomy of the model—it is no longer an independent description of brain activity, but requires the supply of empirical data—hybrid modeling has important advantages. Specifically, the approach maintains a low model complexity while enabling the simulation of realistic dynamics with exceptional temporal detail. Injected currents are intended to serve as an approximation of those aspects of the brain that are not captured by the model, for example, due to the simplification inherent in mean-field techniques. Thereby, the approach allows the systematic testing of neural theory, as captured by BNMs, in light of biologically plausible network activity. In the following, we review the findings obtained from the very first application of this novel approach.

Using resting-state EEG–fMRI data, the model predicted several independent empirical phenomena from different modalities and temporal scales (Fig. 30.4): (1) the spatial topologies and temporal dynamics of fMRI RSNs, (2) the excitation–inhibition balance of synaptic input currents, (3) the inverse relationship between alpha-oscillation phase and spike firing on short time scales, (4) the inverse relationship between alpha-band power and fMRI, respectively firing, on long time scales, and (5) fMRI power-law scaling. Importantly, subsequent analyses of the produced activity revealed neurophysiological mechanisms that could explain how brain networks produce all of these signal patterns and how they are interrelated in terms of neural population activity.

Fig. 30.4

Hybrid model simulations reproduced several empirical phenomena measured by different modalities and on different time scales. From fast to slow time scale: the inverse relationship between neural firing and alpha cycle (a), the ongoing balancing of excitatory and inhibitory postsynaptic currents (EPSC and IPSC, respectively) and their relation to local field potentials (LFP), respectively injected EEG (b), the inverse relationship between alpha band-power fluctuation and firing rates (c), the inverse relationship between alpha band-power fluctuation and fMRI (d), scale-free fMRI power spectra (e). (Reproduced from Schirner et al. 2018)

Upon finding that the hybrid BNMs predict subject-specific ongoing fMRI time series, Schirner et al. (2018) analyzed population activity and found that the ongoing alpha power fluctuation of the injected EEG was the main driver behind the emergence of the predicted fMRI oscillations. As may be expected from the work reviewed in this chapter, excitatory population firing rates and synaptic gating decreased, when alpha power increased, and vice versa. Strikingly, the underlying mechanism seemed to be related to the activity of the inhibitory populations, because these showed the exact opposite effect: when alpha power increased, their activity likewise increased, leading to increased inhibition, and vice versa. Indeed, when looking at the excitatory and inhibitory postsynaptic currents (EPSCs and IPSCs, respectively), the authors found that the modulation of the input currents relevant for producing slow fMRI oscillations was largely due to the effect of inhibitory populations and long-range network input (Fig. 30.5). Due to the interaction of excitatory and inhibitory populations, the power modulation of injected alpha activity was transformed into an amplitude fluctuation of firing rates, synaptic activity, and consequently fMRI activity on the slow time scale of fMRI resting-state oscillations. To identify how alpha increased inhibition on slow time scales (<0.1 Hz), the authors looked at the activity at the fast time scale of individual alpha cycles (Fig. 30.6). Therein, it was found that for increasing alpha power the positive half-cycle of an alpha wave led to increasingly higher levels of inhibitory population firing rates, which dampened excitatory population activity, while the negative deflections of the alpha cycle simply silenced the inhibitory population. As alpha power was increasing, the increasingly large positive half-cycles of the alpha wave led to an increasingly large inhibitory effect. 
By contrast, the increasingly large negative half-cycles of the alpha wave had no compensatory effect for larger amplitudes, because firing rates cannot be smaller than 0 Hz. As a consequence of the declining capability to balance oscillatory input current peaks and the resulting inhibition bursts, excitatory populations’ firing rates decreased progressively with increasing alpha power (see Fig. 30.6 for a visual explanation).
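The arithmetic core of this rectification argument, that the mean of a half-rectified zero-mean oscillation grows with its amplitude, can be checked in a few lines (a schematic illustration, not the model's actual transfer function):

```python
import numpy as np

t = np.linspace(0, 1, 10_000, endpoint=False)
alpha_wave = np.sin(2 * np.pi * 10 * t)   # zero-mean 10-Hz oscillation

results = {}
for amp in (0.5, 1.0, 2.0):
    drive = amp * alpha_wave              # oscillatory input to the population
    firing = np.maximum(drive, 0.0)       # firing rates cannot be negative
    results[amp] = (drive.mean(), firing.mean())

# The raw drive always averages to ~0, but the rectified "firing" averages
# to amp/pi: the larger the alpha amplitude, the larger the mean firing of
# the inhibitory population, hence the stronger the net inhibition.
```

The troughs of the oscillation cannot compensate its peaks once firing saturates at zero, so a purely oscillatory, zero-mean input produces a net inhibitory drive that scales with amplitude.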

Fig. 30.5

Alpha activity (I, black trace: alpha amplitude) with ongoing power fluctuation (I, orange trace: power envelope) is injected into BNM nodes, which leads to the emergence of anticorrelated fMRI amplitude fluctuations (VII, red trace). Alpha activity enters the model as an excitatory postsynaptic current (EPSC, II, red trace) and is added to the sum of all input currents (III, red and blue trace) that enter the excitatory, respectively inhibitory, populations (IV) at each node. Note that the alpha-modulated EPSCs are centered at zero (II, EPSC, black trace); that is, injected alpha on average neither increases nor decreases the constant overall effective EPSC of the model (0.382 nA; not shown). The modulation of total synaptic input (III, black trace), which gives rise to fMRI oscillations, is due to inhibitory postsynaptic currents (II, blue trace) and long-range network input (II, green trace). (Reproduced from Schirner et al. 2018)

Fig. 30.6

On the fast time scale of individual alpha cycles, the inhibitory effect of increasing alpha power (II, black trace: alpha amplitude; orange trace: power envelope) can be visually inferred. By modulating the synaptic input of inhibitory populations (III, blue trace), alpha modulated the firing rates of inhibitory populations (IV, blue trace). Importantly, for increasing alpha power, inhibitory firing became increasingly stronger during positive deflections of the alpha wave. This is because increasingly positive amplitude peaks of alpha progressively increased firing of inhibitory populations, while the balancing effect of troughs of alpha was bounded by zero firing, as there are no “negative” firing rates in nature (V, blue trace). Consequently, due to the decreasing capability to balance inhibitory populations’ excitation, net firing of excitatory populations decreased for increased alpha power. (Reproduced from Schirner et al. 2018)

Upon identifying a mechanism that explains the inhibitory effect of alpha activity, the study took a closer look at the role of the long-range network, because parameter space exploration showed that fMRI prediction quality decreased considerably when global coupling was turned off. Interestingly, along with the decreased predictability of empirical fMRI, analyses showed a strong reduction of the “scale-freeness” of the time series produced by simulations with disabled long-range coupling. Compared to empirical fMRI, disabled long-range coupling led to a decreased power-law exponent; that is, the power spectrum of fMRI was less steep. Power-law scaling means that the power of frequencies in a signal’s power spectrum decreases according to a power law P ∝ f^(−β), with power P, frequency f, and power-law exponent β. Power-law scaling has been found to be ubiquitous in nature, occurring, for example, in sandpiles, earthquakes, foraging patterns of various species, frequencies of family names, sizes of craters on the moon, and neural activity. Importantly, power laws were found to emerge near the critical point of phase transitions and can be associated with the spontaneous acquisition of structure and complex behavior; this makes them an appealing concept, as they could point toward a general theory of self-organization in biological systems. Schirner et al. (2018) therefore asked how the previous findings on the emergence of fMRI oscillations from alpha power fluctuation could be integrated with long-range network interaction and the emergence of power-law scaling.
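Estimating the power-law exponent of a time series amounts to fitting a line to its log–log power spectrum; a sketch on synthetic 1/f noise (segment length and fit range are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)

def one_over_f_noise(n, beta, rng):
    """Synthesize noise whose power spectrum follows P(f) ~ f**(-beta)."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2)         # amplitude = sqrt(power)
    spec = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, len(freqs)))
    return np.fft.irfft(spec, n)

x = one_over_f_noise(2 ** 16, beta=1.0, rng=rng)

# Average the periodograms of demeaned segments, then fit the log-log slope.
seg = 4096
psd = np.mean([np.abs(np.fft.rfft(x[i:i + seg] - x[i:i + seg].mean())) ** 2
               for i in range(0, len(x), seg)], axis=0)
f = np.fft.rfftfreq(seg)
keep = (f > 0.005) & (f < 0.1)                 # avoid DC and the spectral edges
slope = np.polyfit(np.log(f[keep]), np.log(psd[keep]), 1)[0]
# slope estimates -beta, so it should be near -1 for this signal
```

Applied to simulated fMRI, a less negative fitted slope corresponds to the flatter, less "scale-free" spectra observed when long-range coupling was disabled.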

To explain the underlying cause, it helps to paraphrase power-law scaling as “slower oscillations go along with stronger signal modulation than faster oscillations.” This effect can be nicely observed when comparing Figs. 30.5 and 30.7. When alpha power oscillation was very slow, the resulting fMRI amplitude peaks were large, while fast alpha power oscillations produced smaller fMRI amplitude peaks (Fig. 30.5). Intriguingly, this effect vanished when long-range coupling was turned off: then all amplitude peaks had roughly the same height (Fig. 30.7). In other words, network coupling translated the duration of an alpha power oscillation into the height of the fMRI amplitude peak during that oscillation. This effect can be explained by (1) recurrent excitation through the large-scale network, which further amplified the effect of alpha-band power oscillations on neural firing, and (2) the relatively slow time scale of NMDA excitatory synaptic gating (100 ms), which enabled recurrent long-range excitation to accumulate for the duration during which high alpha power inhibited neural firing. During the troughs of slow alpha power oscillations, recurrent excitation had more time to build up, which resulted in larger amplitudes compared to faster alpha power oscillations. Accordingly, the power of slower oscillations increased in fMRI relative to faster oscillations, leading to steeper power-law slopes. This observation therefore addresses open questions in neuroscience on the origin of power-law behavior and on whether power laws in neural networks originate from cellular-level or global network-level processes (Beggs and Timme 2012). Furthermore, these results explicitly account for structured input activity, whereas in vitro and in silico studies have so far focused on systems with no or considerably reduced input (Hesse and Gross 2014). 
Lastly, the co-emergence of spatial long-range correlations and power-law scaling may point to a unifying explanation of resting-state activity within the framework of self-organized criticality, which offers a general mechanism for the emergence of correlations and complex dynamics in stochastic multiunit systems (Linkenkaer-Hansen et al. 2001).

Fig. 30.7
Panels show, for simulations with long-range coupling turned off: artificial alpha, long-range input, synaptic input currents, E/I firing rates, synaptic gating, and fMRI.

When long-range coupling is turned off, power-law scaling of fMRI decreases. In Fig. 30.5, one can see that slow alpha power oscillations produce large peaks of fMRI amplitude, while faster oscillations produce smaller peaks. Intriguingly, when long-range coupling was turned off, all amplitude peaks were equally high, regardless of alpha power frequency. Long-range coupling thus seemed to “translate” the alpha power oscillation into the height of amplitude peaks in fMRI. (Reproduced from Schirner et al. 2018)

A recent study challenged the widely held view that alpha–BOLD anticorrelation originates from alpha power fluctuation, instead proposing that both originate from high- and low-frequency components of the same underlying cortical activity, and that the inverse correlation arises from variations in the strengths of corticothalamic and intrathalamic feedback (Pang and Robinson 2018). The argument is that the high-pass filtering of EEG, commonly done in empirical studies to improve signal quality, discards slow-frequency fluctuations that may drive BOLD. The study used a corticothalamic neural field model that was successively fitted to power spectra of 4-s epochs of EEG data in 1-s steps, with the goal of tracking assumed temporal changes in gain parameters. The resulting model activity then reproduced the evolution of empirical EEG power spectra on the basis of second-by-second fluctuations of six fitted corticothalamic gain parameters that correspond to the responses of the simulated populations to input. This theory should be readily testable with empirical EEG–fMRI data. Instead, the authors analyzed EEG-only data, using the low-frequency component of EEG as a proxy for BOLD, arguing that both are slow signals and that correlations between them have been reported in the literature. Upon finding negative correlations between low-frequency EEG power and alpha EEG power, the authors concluded that this analysis reproduced the BOLD–alpha anticorrelation widely reported in the literature. Problematically, using slow EEG as a proxy for BOLD in order to show that slow EEG and BOLD are correlated (respectively, the implied idea that BOLD originates from slow electric fluctuations) is circular reasoning. In short, the result from Pang and Robinson (2018) indicates that slow EEG oscillations, slow power modulations of alpha activity, and fMRI oscillations all originate from moment-to-moment oscillatory gain modulation of corticothalamic interaction.
The results from Schirner et al. (2018), on the other hand, indicate that the low-frequency oscillation assumed to underlie BOLD oscillations emerges from local and global interaction of excitatory and inhibitory neural populations, which transforms slow alpha-power oscillation into a low-frequency amplitude oscillation of neural activity. Importantly, the latter mechanism does not require any change of parameters like the former, but explains the emergence of slow-frequency oscillations purely by means of a neurophysiological mechanism that is captured by the model itself. That is, even if the reasoning in Pang and Robinson (2018) were not circular, the theory does not explain the data in terms of a neurophysiological mechanism that emerges from model dynamics; it merely shifts the correlational reasoning from the EEG–fMRI data comparison to a proposed ongoing fluctuation of cortical gains obtained from moment-to-moment fits of the model to the data. This is in itself problematic, as by the very same approach presumably every kind of moment-to-moment signal change could be “explained.” In any case, the two theoretical results are instructive examples of how brain simulation-based integration helps formulate novel theories and testable predictions that may explain the underlying processes that lead to the observed EEG–fMRI signals.

In summary, the reviewed results may help explain several observations from EEG–fMRI research in terms of neurophysiological mechanisms. Because respiration, cardiac pulsation, subject movement, and other processes are correlated with resting-state fMRI oscillations, it is hard to differentiate which part of the signal is of neural origin and which part is noise. Importantly, it is unclear how exactly hemodynamic oscillations and networks relate to neural activity (e.g., Murphy et al. 2013; Yuan et al. 2013). Problematically, fluctuations of cardiac and respiratory rates are often used to form regressors that are then used to “clean” the fMRI data, although it is known that neural activity may well be temporally correlated with these physiological signals (de Munck et al. 2008; Yuan et al. 2013) and may therefore be erroneously removed (Birn 2012). Brain simulation-based EEG–fMRI integration may therefore be an approach to differentiate between neural and nonneural origins of fMRI variance. For example, the prediction of fMRI from EEG via hybrid BNMs suggests an explicit mechanism that relates fMRI to electric activity via a chain of neural-metabolic-hemodynamic interactions, which can then be used to distinguish neural and nonneural signal components. It is important to note that although most of the reviewed results revolved around the alpha rhythm, this rhythm, though prominent, is certainly not superior to other rhythms with respect to neural computation and can, at best, be thought of as one of several modes of brain operation. None of the mentioned simulation approaches is confined to the alpha rhythm; conventional as well as hybrid BNMs accommodate other rhythms as well.
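
The cleaning problem noted above can be made concrete with a toy nuisance regression (a hypothetical illustration, not an analysis from the cited studies): when the neural signal is itself correlated with a physiological regressor, regressing that regressor out of the fMRI time series also removes part of the neural contribution.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600

# Slow fluctuation of, e.g., cardiac rate
physio = np.sin(2.0 * np.pi * 0.05 * np.arange(n))
# Neural signal that is partly correlated with the physiological fluctuation
neural = 0.6 * physio + rng.normal(0.0, 1.0, n)
# Simulated fMRI: neural contribution plus a direct physiological artifact
bold = neural + 0.8 * physio + rng.normal(0.0, 0.5, n)

# "Clean" the fMRI data with a least-squares nuisance regression
X = np.column_stack([np.ones(n), physio])
coef, *_ = np.linalg.lstsq(X, bold, rcond=None)
cleaned = bold - X @ coef

# The regression removes the artifact, but also the correlated neural part
corr_before = np.corrcoef(neural, bold)[0, 1]
corr_after = np.corrcoef(neural, cleaned)[0, 1]
```

The correlation between the neural signal and the cleaned fMRI is lower than with the raw fMRI: the regression cannot distinguish the physiological artifact from neural activity that happens to covary with it, which is exactly the caveat raised by Birn (2012).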

9 Outlook: Diagnosis and Therapy

Brain simulation was able to reproduce and explain a variety of phenomena observed with EEG–fMRI. Models that make conflicting predictions directly point to the limits of our current understanding and help us design new experiments that may test the theories associated with the models. Despite their success, models are, by definition, abstractions from reality, and trying to make them more detailed, or realistic, often makes them overly complex, which may then interfere with their ability to provide concise explanations of phenomena. Increasing model complexity may come with an increased ability to explain a specific observation, but also a potentially decreased ability to generalize to a class of observations (e.g., due to overfitting). For example, the so-called “vascular steal” phenomenon describes an effect where an increase of local CBF leads to a decrease in neighboring regions, due to the need to divert blood flow toward the active tissue (Raichle 1998). In many of the reviewed models, negative BOLD signals were obtained by increasing inhibition, be it due to fluctuations in coupling or due to an emerging network mechanism, but “vascular steal” again provides a viable alternative explanation. Does this mean we must resign ourselves to the possibility that there might always be an alternative explanation? We would argue no: to decide between alternative explanations, their consequences and predictions must be developed and compared, and this is exactly what brain simulation is supposed to do. Future brain models will simultaneously incorporate multiple scales: where necessary, individual neurons and synaptic connections will be simulated with high precision, while other regions are described by simplified NMMs. This will enable us to study single-neuron and neuron-population activity in light of realistic full-brain network activity, which will lead to a better understanding of the whole and its parts.
Brain simulation-based EEG–fMRI integration provides promising avenues for a better understanding of pathologies. For example, Sotero and Trujillo-Barreto (2008) related EEG–fMRI simulations to Alzheimer’s disease (AD). Based on the observation that AD patients show a progressive cortical disconnection syndrome due to the loss of large pyramidal neurons in cortical layers III and V, the authors simulated AD by reducing all long-range connection weights by 50% and recurrent excitation by 67%. The resulting EEG showed a marked reduction of alpha activity accompanied by an increase in the delta and theta bands; the resulting BOLD showed reduced amplitudes. In a second experiment, the effect of external stimulation was studied by delivering a 2-s pulse of 10⁶ action potentials to excitatory thalamocortical relay neurons, which had also been done previously to simulate alpha rhythm desynchronization (but there without reducing connectivity parameters). Compared to their simulation with standard parameter values, BOLD signal fluctuation decreased, which the authors found to be in agreement with empirical studies, concluding that cortical disconnection may contribute to the shift of the EEG spectrum toward lower frequencies and to decreased BOLD amplitudes. In another study, Stefanovski et al. (2019) used PET data to heterogeneously set the excitation–inhibition ratios of NMM parameters in order to simulate individual amyloid beta burdens in models of Alzheimer’s patients, which yielded insights that may lead to an explanation for the slowing of EEG in Alzheimer’s disease and points to its potential reversibility mediated by NMDA receptor antagonists.

An emerging picture in the clinically relevant FC literature is that it is often not the whole-brain network pattern that differentiates patients and healthy controls, but rather the form and frequency of dynamical transitions between brain states (Cohen 2017). For example, schizophrenia patients spent significantly more time in a disconnected brain state and transitioned less often to integrated brain states than controls (Damaraju et al. 2014). Similar examples of atypical temporal FC dynamics come from attention deficit hyperactivity disorder (Tomasi and Volkow 2012) and autism (Falahpour et al. 2016; Rashid et al. 2018). To exploit such observations and bring them into clinical use, a critical next step would be to investigate the neural mechanisms underlying the formation and dissolution of functional connections, in order to move beyond merely descriptive measures. So far, however, experimental approaches to probe how topological features of brain networks relate to pathophysiological processing have had limited success. By contrast, brain simulation-based EEG–fMRI integration has shown its potential to uncover the dynamic principles underlying observable signals. Not much work has so far been devoted to relating FCD signatures in EEG and fMRI to brain network model dynamics, but precisely this avenue may help us gain insight into the processes that lead to aberrant network dynamics by enabling the direct probing of the effects of structural and dynamic perturbations in silico; for example, atypical FCD might be reproduced and the underlying mechanisms studied by using EEG data of patients as a constraint for electric activity in a hybrid brain simulation approach to simulate fMRI (Schirner et al. 2018). We do not know what the future of brain simulation-based EEG–fMRI integration holds, but given the recent advances in brain simulation and brain stimulation (Berényi et al. 2012; Ngo et al. 2013), it seems likely that joining forces between the two separate research streams (e.g., model-based closed-loop stimulation) will improve the quality of life of many patients suffering from various disorders.