1 Introduction

With 86 billion neurons [1] and hundreds of trillions of synapses [2], the brain is one of the most complex systems in the known universe. Part of this complexity is due to the intricate pattern of connections among brain cells. Arguably, the computations performed by the brain depend heavily on the connections among its cells, but how? In other words, how is the structure of the brain related to its function? Many argue that since the brain is a complex system its functions are more than the sum of its parts [3,4,5,6]. Thus, knowledge of the individual behavior of the brain's components is not enough to explain its emergent properties, e.g. cognition and consciousness. Therefore, modeling brain connectivity with a reasonable degree of accuracy constitutes an essential step toward understanding these emergent phenomena.

Building-block strategies in which spatial and temporal scales are individually modeled are the modus operandi of computational neuroscience. Ionic currents are modeled and studied separately, neurons are studied separately, populations are studied separately, and the behavior emerging from the interaction of these “blocks” is then studied via mathematical and computational models. However, the step from single to multiple interconnected cells is not trivial, neither from the point of view of behavior nor from the point of view of data extraction and coding.

But why is coding the structure of neuronal connections, and not only the individual cells, so important? We may look for hints to this question in different phenomena in nature. Starting with inanimate matter, we find that the crystalline structure of materials directly influences their thermal, electric, magnetic and optical properties [7, 8]. In addition, the dimensionality of the lattice, as expressed by the number of neighboring sites of each node in the network, as well as the reach of interactions, is known to significantly alter the behavior of physical observables in the vicinity of a phase transition [9]. The flow of current and, consequently, the expected behavior of electric circuits, are highly dependent on the spatial configuration of resistors, sources, capacitors and inductors, and on how their branches intertwine between some input and output of electric signal. When conducting nanoparticles are arranged into a percolating structure, complex activity arises in the circuit via bursts of switching conductivity [10]. Due to its intricate cortical-like activity, this condensed-matter device could serve as a prototype for neuromorphic hardware.

Although the brain is not an ideal and isolated electric circuit, the most accepted model for the neuronal membrane is based on an equivalent circuit made of a capacitor coupled in parallel to resistors and batteries [11]. Hence, the structure of the brain, reflected in its immense electrical circuitry, works as a complex web that shapes neuronal activity and ultimately determines brain function as a kind of process that lies in a continuous feedback loop with the embedding environment. Neuroscience is then tasked with relating a set of inputs to the brain (e.g., via sensory systems) to the corresponding generated outputs, also known as functions. Although the intended function is not always clear, we can look at the activity of specific brain regions to get clues on whether a particular structure of a computational model makes sense.

Examples of this structure-function interdependence come from experimental and theoretical studies alike [12,13,14]: spontaneous cortical activity is sometimes observed in the form of bursts of action potentials, known as neuronal avalanches [15, 16]. Nevertheless, when a group of researchers built a lattice-like network of neurons from scratch in vitro, they did not observe these complex patterns [17]. However, adding modular structure to the lattice may do the job [18]. These modules could then be built on top of each other, generating a layered structure that mimics the cortex. Such layered networks are known for generating propagating waves [19, 20], long-range correlations [21, 22], and neuronal avalanches [23]. Rich-club-like modularity destroys bursting synchrony [24] but allows for neuronal avalanches [25], whereas hierarchical networks of this type optimize the diversity of spiking patterns [26]. Some types of plasticity lead networks of neurons into a modular topology [27]. In fact, evolving techniques of manipulating neuronal activity may give birth to synthetic biological brain structures, also known as connectomes [28].

Having pointed to the importance of including neuronal structure in a model, in this tutorial review we provide the reader with a concise guide on how to access connectivity maps based on experimental recordings and, from there, how to model brain-inspired networks. We review recent models using data-driven connectivity maps at the neuronal level from different brain areas (e.g., cortex, hippocampus) and discuss the advantages of choosing each of those models.

Our paper is organized as follows. The next section is divided into subsections where we present a brief discussion on how networks are usually constructed. We start in Sect. 2.1 by discussing how structural data is usually recorded in multiple experiments until it reaches the so-called connectivity map, passing through a discussion of basic neuron models in Sect. 2.2 as well as synaptic models in Sect. 2.3. Then, in Sect. 2.4, we guide the reader on how to code connectivity maps into network models and describe the simulation tools that are commonly used. Finally, in Sect. 3 we summarize recent models based on data-driven structural connectivity maps, mainly those where connections are described at the neuronal level, as in microcircuit models.

2 How network models are usually constructed

With respect to connectivity, neural network models can be classified along two different axes. The first one refers to the nature of the graph used to implement the network, and the second to the granularity or scale of the connections.

Regarding the nature of the graph, it can be of two basic types:

  • Artificial graph. The connections among neurons are generated according to some predefined rules with the specific aim of creating a network with desired properties, e.g. random or small-world topologies [12, 29,30,31].

  • Data-driven graph. The connections among neurons are based on experimental data obtained with different techniques with the aim of replicating as faithfully as possible the circuitry of a particular brain region [32,33,34].

The granularity of the connectivity is tied to the scale at which neural structures are described. There are three basic granularity levels:

  • Connections linking morphologically detailed neurons. Synaptic contacts happen at specific positions along cell bodies, e.g. distal/proximal dendrites or somata. Models that want to take into account the positions of synaptic contacts must be based on morphologically detailed neuron models. These models are composed of several interconnected compartments emulating the branched structure of the neuronal dendritic trees [35,36,37]. Each compartment can have its own set of ionic channels and maximal ionic conductance densities. With this type of neuron model, connections among neurons can be set in a compartment-wise fashion, including compartment-specific values of the synaptic parameters.

  • Connections linking point neurons. At a coarser grain level, neurons can be described as points without spatial structure. In such cases, to set the connections among neurons one needs only to specify which cells are connected to which (usually according to some probabilistic rules) together with the parameters (type and strength) of the cell-to-cell synapses [38,39,40].

  • Connections linking neuronal populations. At an even coarser spatial granularity level, individual cells are no longer recognized as such and groups of neurons are lumped together into single “average” neurons [41,42,43]. In such neural population models the connections represent axonal links among neuronal groups or brain regions and the synaptic parameters correspond to effective properties of the existing synapses.

The artificial graph approach has been used to study cortical activity states [38, 44,45,46], especially transitions between up and down states [45, 47]; how basic information processing computations are performed by a population of neurons [48]; and to understand low-level operations performed by cell assemblies [49, 50].

Noticeably, it is known that the primate cortex is organized in a structured manner [51, 52]. Indeed, the connections among cortical regions resemble the structure of small-world networks [53], with clusters sparsely connected among themselves but strongly connected internally. There is also evidence that the cortex has a hierarchical structure [54,55,56], meaning that clusters have smaller clusters nested within them. This topology allows different regions to process information relatively independently and to be specialized in distinct functions [13, 26, 57]. However, the existence of pathways connecting the clusters also allows information to be integrated among different regions.

The data-driven approach is supported by the massive amount of data that is routinely gathered using different techniques on neuronal microcircuits of different brain regions [58]. These data demonstrate how microcircuits in the brain are highly specific in terms of their connectivity, and the impact of this specificity on function. For example, there is a high specificity in the vertical pattern of connections among neurons in distinct cortical layers [59, 60]. The pattern of cortical microcircuitry endows the recurrent cortical network with specific computational properties [61, 62].

In this tutorial review, we will focus on large-scale network models built using the data-driven approach. In the following sections we will explore the specificity of brain microcircuits while discussing modeling approaches based on maps that contain such information.

2.1 Obtaining structural data

In recent years, efforts have been made to characterize the connectivity maps (connectomics) representing the cortical structure [63, 64]. These connectivity maps can span several scales, from the intra- and interlayer connections in a cortical microcircuit to connections linking cortical regions [51, 65,66,67], depending on the techniques used to obtain them. Since these maps track synaptic connections at the neuronal level, and white matter pathways connecting cortical areas at the meso/macroscale level, they are referred to as structural connectivity maps. Usually, the structural connections can be mapped using magnetic resonance imaging (MRI) based techniques such as diffusion tensor (or weighted) imaging (DTI/DWI) and tractography [68]. The connectivity of local cortical microcircuits can be obtained by means of electrophysiological techniques [69], axonal tracing [70, 71], and electron [72] and light [73] microscopy. More details about different ways to extract anatomical information and use it to build neuronal network models are given elsewhere [74].

Similar approaches are employed for other brain areas. Anatomical explorations of the hippocampus date back to the works of Ramon y Cajal with Golgi staining techniques. The highly specific pattern of connections within the limbic system has been revealed through advanced MRI or neuroanatomical tract-tracing techniques [75,76,77,78,79].

With the increasing amount of data collected, structural and functional connectivity maps started to be used in modeling studies. Consequently, several projects aiming at the construction of realistic brain models emerged, such as the Human Brain Project [80], the Blue Brain Project [81], and the Allen Brain Explorer [82]; these are examples of projects being funded around the world [83, 84]. Faithful models built upon the available connectivity data were used to understand how the brain's connectivity structure impacts the dynamics of the system, and for which phenomena the information contained in these maps is crucial [40, 85,86,87]. A major advantage of such models is the possibility of constructing canonical models for similar brain areas: all over the neocortex there is a similar six-layered organization, whereas the archicortex and paleocortex have a three- or four-layered structure [88]. Once a specific microcircuit of such areas is built, it can be used as a building block to enlarge a given network model or to connect it to other areas.

2.2 The neuron model

Although not the focus of this tutorial review, the choice of the neuron and synaptic models has an impact on the modeling results, so they will be briefly reviewed in this and the next subsection. In terms of the neuron, the phenomenology of spike-train generation can be implemented without specific modeling of the underlying biophysical mechanisms, simply by modeling the lipid bilayer of the neuron as an equivalent passive RC circuit. In this simplified case the membrane voltage (V) is described by \(\tau \mathrm{d}V/\mathrm{d}t = -V + RI = f(V)\), where \(\tau \) is the membrane time constant, R is the membrane resistance, and I is the injected current. Since this model cannot generate an action potential by means of its own dynamics, an artificial fire-and-reset mechanism is usually included: spikes are counted whenever V crosses a certain threshold value \(V_{\mathrm{th}}\), with a subsequent reset. The function of V on the right-hand side is not restricted to be linear and can be nonlinear as well. These models are referred to as integrate-and-fire type models [39].
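As a concrete illustration, the fire-and-reset dynamics can be coded in a few lines. The following Python sketch uses forward-Euler integration with illustrative (dimensionless) parameter values, not those of any particular study:

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) sketch with Euler integration.
# All parameter values below are illustrative.
tau, R = 10.0, 1.0          # membrane time constant (ms) and resistance
V_th, V_reset = 1.0, 0.0    # firing threshold and reset value
dt, T = 0.1, 100.0          # time step and total simulated time (ms)

V, I = 0.0, 1.2             # initial voltage and constant injected current
spike_times = []
for step in range(int(T / dt)):
    V += dt / tau * (-V + R * I)   # tau dV/dt = -V + RI
    if V >= V_th:                  # artificial fire-and-reset rule
        spike_times.append(step * dt)
        V = V_reset

print(f"{len(spike_times)} spikes, first at t = {spike_times[0]:.1f} ms")
```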

The integrate-and-fire model can be extended in a variety of ways with the inclusion of elements that capture features of neuronal firing behavior. Two-dimensional integrate-and-fire models include equations to tackle feedback effects from ionic currents. Generally, such models can be expressed by the coupled ODEs: \(\mathrm{d}V/\mathrm{d}t = f(V) - w\), \(\mathrm{d}w/\mathrm{d}t = G(V,w)\). Due to their dynamical properties, two-dimensional integrate-and-fire models possess a richer repertoire of behaviors and can be fitted to experimental in vivo and in vitro data to reproduce characteristic firing patterns of neurons [89, 90]. Effects such as oscillations or subthreshold resonance can be described as well [91]. Another class of neuron models is that of discrete-time models, the so-called map-based neurons [92]. This class is particularly interesting because it has a rich dynamical repertoire of spiking activity and allows for relatively easy analytical tractability and efficient simulations [93].
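A well-known instance of this two-variable scheme is the quadratic integrate-and-fire model of Izhikevich [89]. Below is a minimal Python sketch with the standard regular-spiking parameter set; the input current value is illustrative:

```python
# Sketch of a two-dimensional integrate-and-fire neuron: the Izhikevich
# model [89], with its regular-spiking parameter set (a, b, c, d).
a, b, c, d = 0.02, 0.2, -65.0, 8.0
dt, T, I = 0.1, 200.0, 10.0          # time step (ms), duration (ms), input

v, u = -65.0, b * -65.0              # v plays the role of V, u the role of w
spikes = []
for step in range(int(T / dt)):
    v += dt * (0.04 * v**2 + 5 * v + 140 - u + I)   # dV/dt = f(V) - w + I
    u += dt * a * (b * v - u)                        # dw/dt = G(V, w)
    if v >= 30.0:                                    # spike cut-off and reset
        spikes.append(step * dt)
        v, u = c, u + d

print(f"{len(spikes)} spikes")
```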

Another class of neuron models comprises the so-called conductance-based models, which explicitly describe the activation and inactivation dynamics of the gated ionic channels present in the neuronal membrane. This formalism originates from the seminal work of Hodgkin and Huxley in 1952 [94]. The generic equations of a conductance-based neuron model are: \(C \mathrm{d}V/\mathrm{d}t = - \sum _i I_i + I_{\mathrm{inj}}\), where \(I_i = {\bar{G}}_i m_i^{r} h_i^{s} (V-E_i)\), and \(\mathrm{d}x_i/\mathrm{d}t = (x_{i,\infty }(V) - x_i)/\tau _{x_{i}}(V)\) (\(x = m\) or h). In these equations, C is the membrane capacitance, V is the membrane voltage, \(I_i\) is the ionic current of the ith ion, \({\bar{G}}_i\) is the maximal membrane conductance to ion i, \(m_i\) and \(h_i\) are, respectively, the activation and inactivation variables of the membrane conductance to the ith ion, r and s are small integers, \(E_i\) is the reversal membrane potential of ion i, and \(\tau _{x_{i}}(V)\) and \(x_{i,\infty }(V)\) are, respectively, the voltage-dependent activation/inactivation time constant and steady-state value [37]. In Fig. 1 we show a schematic diagram of the integrate-and-fire and conductance-based modeling approaches. All these neuron models can be adapted to describe neurons with a given morphology by defining discrete compartments (each of which obeys its own membrane potential equation) that are electrically coupled to each other.
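The sketch below integrates the classic Hodgkin-Huxley equations [94] following the generic scheme above, with the standard squid-axon parameters. Note that the \((x_{\infty} - x)/\tau_x\) form of the gating equation is equivalent to the \(\alpha/\beta\) rate form used here, with \(x_{\infty} = \alpha/(\alpha+\beta)\) and \(\tau_x = 1/(\alpha+\beta)\):

```python
import numpy as np

# Sketch of a conductance-based (Hodgkin-Huxley) neuron. Units: mV, ms,
# uF/cm^2, mS/cm^2, uA/cm^2. Parameters are the classic squid-axon values.
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def rates(V):
    """Voltage-dependent opening (alpha) and closing (beta) rates."""
    a_m = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
    b_m = 4.0 * np.exp(-(V + 65) / 18)
    a_h = 0.07 * np.exp(-(V + 65) / 20)
    b_h = 1.0 / (1 + np.exp(-(V + 35) / 10))
    a_n = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
    b_n = 0.125 * np.exp(-(V + 65) / 80)
    return a_m, b_m, a_h, b_h, a_n, b_n

dt, T, I_inj = 0.01, 50.0, 10.0
V, m, h, n = -65.0, 0.05, 0.6, 0.32
V_trace = []
for step in range(int(T / dt)):
    a_m, b_m, a_h, b_h, a_n, b_n = rates(V)
    # dx/dt = alpha(V)(1 - x) - beta(V) x for each gating variable x
    m += dt * (a_m * (1 - m) - b_m * m)
    h += dt * (a_h * (1 - h) - b_h * h)
    n += dt * (a_n * (1 - n) - b_n * n)
    I_Na = g_Na * m**3 * h * (V - E_Na)   # I_i = G_i m^r h^s (V - E_i)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    V += dt / C * (-(I_Na + I_K + I_L) + I_inj)
    V_trace.append(V)

print(f"peak V = {max(V_trace):.1f} mV")   # spikes arise from the dynamics
```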

Fig. 1
figure 1

Different types of neuron model. Left: The simplified approach taken by integrate-and-fire models uses a fire-and-reset rule set “by hand” to model a spike. Right: The conductance-based approach models ionic currents using the Hodgkin-Huxley formalism

2.3 The synapse model

Synapses can also be modeled at different levels of biological plausibility, and the choice of synaptic model is crucial for the simulation of large-scale networks, since the number of synapses is much higher than the number of neurons. To model a synapse, one could simply define an event-driven model by incrementing the synaptic conductance or the synaptic current by a certain amount; or, on the other hand, consider more complex processes such as those of the conductance-based approach, where the synaptic conductance depends on the membrane voltage of the postsynaptic neuron [39, 95]. Either way, the parameters of the model can be chosen to reproduce the behavior of excitatory synapses mediated by the glutamate receptors AMPA and NMDA, and inhibitory synapses mediated by the GABAergic receptors GABA\(_{\mathrm{A}}\) (ionotropic) and GABA\(_{\mathrm{B}}\) (metabotropic) [95]. Additionally, the synaptic strengths can be static or dynamic, and the latter case has been the focus of intense research aimed at modeling both short- and long-term synaptic plasticity [96,97,98,99].

The electric current \(I_{ij}^{\mathrm{syn}}(t)\) generated by a single synapse from a neuron j (presynaptic) to a neuron i (postsynaptic) has the form [95],

$$\begin{aligned} I_{ij}^{\mathrm{syn}}(t) = G_{ij}(t) \left[ V_i(t)-E_{ij}^{\mathrm{syn}}\right] , \end{aligned}$$
(1)

where \(G_{ij}(t)\) is a time-dependent conductance, \(V_i(t)\) is the postsynaptic membrane potential, and \(E_{ij}^{\mathrm{syn}}\) is the reversal potential of the synaptic current. The value of \(E_{ij}^{\mathrm{syn}}\) determines whether the synapse \(j\rightarrow i\) is inhibitory or excitatory (typical values of \(E_{ij}^{\mathrm{syn}}\) are 0 mV for excitatory synapses and \(-75\) mV for inhibitory ones). For integrate-and-fire type models, a simplification that is often made is to fix \(V_i\) (e.g. at the resting voltage) and incorporate the battery term into \(G_{ij}(t)\) so that Eq. (1) reads \(I_{ij}^{\mathrm{syn}}(t) = J_{ij}(t)\). The term \(J_{ij}(t)\) is the amplitude of the postsynaptic current (called synaptic strength or efficacy) and its sign determines whether the synapse is inhibitory (negative sign) or excitatory (positive sign).
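In practice, \(G_{ij}(t)\) is often implemented as a conductance that jumps by a fixed amount at each presynaptic spike and decays exponentially in between. A minimal Python sketch, with illustrative values and presynaptic spike times prescribed by hand:

```python
# Sketch of the synaptic current of Eq. (1) with an exponentially
# decaying conductance. All values are illustrative.
tau_syn = 5.0      # synaptic decay time constant (ms)
E_syn = 0.0        # excitatory reversal potential (mV)
dG = 0.5           # conductance jump per presynaptic spike (arbitrary units)
dt = 0.1           # integration step (ms)
V_post = -65.0     # postsynaptic voltage, held fixed for clarity

G, I_trace = 0.0, []
pre_spike_steps = {100, 250, 270}          # hypothetical presynaptic spikes
for step in range(500):
    if step in pre_spike_steps:
        G += dG                            # event-driven conductance jump
    G -= dt * G / tau_syn                  # exponential decay between events
    I_trace.append(G * (V_post - E_syn))   # Eq. (1): I = G(t)[V_i(t) - E_syn]
```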

An example of short-term synaptic plasticity [96] is the facilitation-depression dynamics given by Eqs. (2)–(4). This model contains two variables representing the fraction of presynaptic channels that are open, \(u_{ij}\), and the fraction of neurotransmitters available to be released, \(x_{ij}\). Upon a spike of the presynaptic neuron, the synaptic conductance \(G_{ij}\) is increased by a factor \(u_{ij}^{+} x_{ij} J_{ij} \), where \(J_{ij}\) is the synaptic strength (a parameter) and the superscript \(^+\) (\(^-\)) indicates the moment after (before) the spike. The interplay of the time constants \(\tau _{\mathrm{fac}}\) and \(\tau _{\mathrm{dep}}\) determines whether a synapse undergoes temporal depression or facilitation:

$$\begin{aligned} \frac{\mathrm{d} u_{ij}}{\mathrm{d} t}=&-\frac{u_{ij}}{\tau _{\mathrm{fac}}}+U\left( 1-u_{ij}^{-}\right) \delta \left( t-t^{f}\right) , \end{aligned}$$
(2)
$$\begin{aligned} \frac{\mathrm{d} x_{ij}}{\mathrm{d} t}=&\frac{1-x_{ij}}{\tau _{\mathrm{dep}}}-u_{ij}^{+} x_{ij} \delta \left( t-t^{f}\right) , \end{aligned}$$
(3)
$$\begin{aligned} G_{ij}(t) =&u_{ij}^{+} x_{ij} J_{ij} \delta \left( t-t^{f}\right) , \end{aligned}$$
(4)

where \(t^f\) is the time at which the presynaptic neuron fires a spike (the upper index f should not be confused with a power), and U is the proportion of new calcium channels opened upon a presynaptic event. Examples of both short-term depression and facilitation can be seen in Fig. 2. The facilitation-depression model can be further simplified to a case without short-term plasticity by making u and x constant.
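Because \(u_{ij}\) and \(x_{ij}\) evolve exponentially between presynaptic spikes, Eqs. (2)–(4) can be integrated event by event. A Python sketch with illustrative parameters (here \(\tau_{\mathrm{fac}} > \tau_{\mathrm{dep}}\), producing initial facilitation):

```python
import numpy as np

# Event-driven sketch of the facilitation-depression model, Eqs. (2)-(4).
# Between presynaptic spikes u decays to 0 and x recovers to 1; at each
# spike the synapse delivers a conductance increment u+ * x * J.
U, tau_fac, tau_dep, J = 0.2, 500.0, 200.0, 1.0   # illustrative values

def stp_increments(spike_times):
    u, x, t_last = 0.0, 1.0, None
    increments = []
    for t in spike_times:
        if t_last is not None:
            dt = t - t_last
            u *= np.exp(-dt / tau_fac)                   # Eq. (2), decay
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_dep)  # Eq. (3), recovery
        u += U * (1.0 - u)            # Eq. (2), spike update: u+ from u-
        increments.append(u * x * J)  # Eq. (4): conductance jump u+ x J
        x -= u * x                    # Eq. (3), spike update (depletion)
        t_last = t
    return increments

# Facilitates over the first spikes, then depression takes over:
print(stp_increments([0.0, 50.0, 100.0, 150.0]))
```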

Alternatively, the synaptic strength can be determined by a spike-timing-dependent plasticity (STDP) rule, i.e., the increment or decrement of the synaptic efficacy is calculated from the relation between the times of pre- and postsynaptic spikes [100,101,102]. Such rules are based on experimental evidence [103, 104] and are applicable to both excitatory and inhibitory synapses [105, 106]. Let us assume that the synaptic efficacy of a \(j\rightarrow i\) synapse can be described by a single variable \(J_{ij}(t)\). The STDP rule can be implemented by defining two auxiliary variables, \(x_j(t)\) and \(y_i(t)\), which must be integrated over time. These variables are used to model, respectively, the strengthening of the synapse when the presynaptic spike precedes the postsynaptic spike, and the weakening of the synapse when the presynaptic spike follows the postsynaptic spike. Then, a simplified STDP rule for excitatory synapses (\(J_{ij} > 0\)) is defined as [39, 102]

$$\begin{aligned} \dfrac{\mathrm{d}x_j}{\mathrm{d}t}&= -\frac{x_j}{\tau _+} + \sum _f\delta (t-t^f_j)\,,\end{aligned}$$
(5)
$$\begin{aligned} \dfrac{\mathrm{d}y_i}{\mathrm{d}t}&= -\frac{y_i}{\tau _-} + \sum _f\delta (t-t^f_i)\,,\end{aligned}$$
(6)
$$\begin{aligned} \dfrac{dJ_{ij}}{dt}&= x_j(t)\,A_+\,{\Theta }\!\left( J^{\mathrm{max}}-J_{ij}\right) \,\sum _{f,\mathrm post}\delta (t-t^f_i) \nonumber \\&\quad -y_i(t)\,A_-\,{\Theta }\!\left( J_{ij}\right) \,\sum _{f,\mathrm pre}\delta (t-t^f_j)\,, \end{aligned}$$
(7)

where \(t_j^f\) is the instant of the f-th spike of the presynaptic neuron j, \(t_i^f\) is the instant of the f-th spike of the postsynaptic neuron i, \(\tau _{+,-}\) are the decay time constants of \(x_j\) and \(y_i\), respectively, \({\Theta }(u)\) is the Heaviside function, \(A_{+,-}>0\) are the synaptic strength increment and decrement parameters, and \(J^{\mathrm{max}}\) is the maximum value of synaptic efficacy.

Eqs. (5) to (7) work as follows: every time a presynaptic spike occurs, two things happen: first, \(x_j\) is instantaneously increased by a unitary amount and then decays exponentially with time constant \(\tau _+\); second, while \(J_{ij}>0\), \(J_{ij}\) is instantaneously decreased by \(y_iA_-\). On the other hand, every time a postsynaptic spike occurs, another two things happen: first, \(y_i\) is instantaneously increased by a unitary amount and then decays exponentially with time constant \(\tau _-\); second, while \(J_{ij}<J^{\mathrm{max}}\), \(J_{ij}\) is instantaneously increased by \(x_jA_+\). This is captured by the scheme in Fig. 2c: the variable \(x_j\) is responsible for the exponential branch on the left-hand side (green part) of the curve, since it only adds to \(J_{ij}\) when \(T_{\mathrm{post}}>T_{\mathrm{pre}}\) (a postsynaptic spike happens after a presynaptic spike); conversely, \(y_i\) is responsible for the exponential branch on the right-hand side (red part) of the curve, since it is only subtracted from \(J_{ij}\) when \(T_{\mathrm{post}}<T_{\mathrm{pre}}\) (a postsynaptic spike happens before a presynaptic spike). Note that \(T_{\mathrm{post}}=t^f_i\) and \(T_{\mathrm{pre}}=t^f_j\) in the figure.
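The following Python sketch applies Eqs. (5)–(7) event by event, decaying the traces analytically between spikes; the Heaviside terms of Eq. (7) appear as clipping checks on \(J_{ij}\). Parameter values are illustrative:

```python
import numpy as np

# Event-driven sketch of the trace-based STDP rule of Eqs. (5)-(7).
tau_p, tau_m = 20.0, 20.0      # decay constants of x_j and y_i (ms)
A_p, A_m = 0.01, 0.012         # potentiation/depression amplitudes
J_max = 1.0                    # upper bound of the synaptic efficacy

def stdp(pre_spikes, post_spikes, J=0.5):
    x = y = 0.0                # presynaptic trace x_j, postsynaptic trace y_i
    t_last = 0.0
    events = sorted([(t, "pre") for t in pre_spikes] +
                    [(t, "post") for t in post_spikes])
    for t, kind in events:
        x *= np.exp(-(t - t_last) / tau_p)   # Eq. (5), decay of x_j
        y *= np.exp(-(t - t_last) / tau_m)   # Eq. (6), decay of y_i
        t_last = t
        if kind == "pre":
            if J > 0.0:
                J -= y * A_m   # Eq. (7), depression (post came before pre)
            x += 1.0           # unitary increment of the presynaptic trace
        else:
            if J < J_max:
                J += x * A_p   # Eq. (7), potentiation (pre came before post)
            y += 1.0           # unitary increment of the postsynaptic trace
    return J

# Pre-before-post pairings (post 5 ms after each pre) should potentiate:
print(stdp(pre_spikes=[0.0, 50.0, 100.0], post_spikes=[5.0, 55.0, 105.0]))
```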

Fig. 2
figure 2

Different synaptic models. a Upon a presynaptic spike, an excitatory postsynaptic potential (EPSP) is created in the postsynaptic neuron after a synaptic time delay (a similar scheme can be implemented for an inhibitory synapse). The amplitude of the postsynaptic potential can change over time depending on some plasticity rule. b A short-term plasticity rule decreases or increases the EPSP amplitude over time depending on depression (STD) or facilitation (STF), respectively. c A spike-timing-dependent plasticity (STDP) rule changes the synaptic strength depending on the temporal difference between pre- and postsynaptic spikes

2.4 How to implement connectivity maps in network models

2.4.1 Connectivity maps for networks of point neurons

Once the data are in hand, translating them into a computational network model is not an easy task. Different levels of complexity require distinct ways to organize the extracted information into a connectivity map. Such a map may represent inter- or intra-areal connections, and in both cases the rules to implement the connections are similar. Our focus here is mainly on models at the microcircuit level, where a small piece of the brain, as shown in Fig. 3a, is zoomed in on and its structure is represented by a connectivity matrix (Fig. 3b) or a ring of connections (Fig. 3c). The same connectivity map can be used to generate networks with different levels of biological structural detail, as in Fig. 3d, where each node can represent a population of neurons or an individual neuron. In the latter case, individual neurons can also be modeled by complex structures instead of just points. Moreover, the information from the connectivity maps can be expanded to add other features such as spatiality (Fig. 3e). In this section we discuss in a guided manner how to approach the above tasks.

Fig. 3
figure 3

From structural data to the neuronal network model. Data for this example were taken from a multi-scale model of the macaque visual cortex [107, 108]. In a we show an illustrative depiction of the human cortex. The zoom indicates a cortical slice and the black squares represent cortical microcircuits. In b we show the structural connectivity matrix. Neurons are of excitatory or inhibitory types (E, I) and belong to four different cortical layers (2/3, 4, 5, and 6) in four different cortical areas (V1, V2, V3, and V4). Presynaptic neurons are placed on the x-axis and postsynaptic neurons on the y-axis; notice that the structural connectivity matrix is asymmetric. In c we show a network representation of the structural matrix in (b); neurons are placed along a ring and connections between pairs of them are indicated by lines. In (d) we show the different levels of spatial granularity at which a cortical microcircuit could be simulated: all excitatory/inhibitory neurons in a given layer can be described by a neural population model (left column); each individual cell can be represented by a point neuron model (middle column); or each individual cell can be described by a morphologically detailed neuron model (right column). As one goes from the left to the right column, the number of equations and parameters of the full model increases dramatically, and so does the computational cost involved in its implementation and simulation. This makes the trade-off between the kind of phenomenon studied and the spatial granularity level of the model a critical choice. In (e) we show how one could connect pairs of neurons using a distance-based rule given by a probability density function such as the 2D Gaussian function defined in Eq. (11). The colored circles indicate the contour lines of the distance-based function and the presynaptic neuron is placed at the center of the inner contour line

Usually, connectivity maps are reported as matrices of connection probabilities where the rows/columns refer to the source/target elements (single neurons or neural populations) of the network. In a network of point neurons with no spatial structure, the matrix gives all the information needed to build the network. The implementation is similar to the simplest case of a random network of the Erdős-Rényi type [109], where there is only one connection probability for all neuron pairs: one can draw connections by testing each neuron pair against this connection probability.

The way in which the testing mentioned above is implemented is of utmost importance. Choices such as sampling pair combinations with or without replacement can create markedly different network graphs. Usually, the synaptic connectivity is established in a network model by attributing to each pair of neurons (or populations) a random number between zero and one drawn from a uniform distribution, and then testing it against the predefined connection probability for the pair. This choice avoids multiple synapses between the same neuron pair, which may or may not be desirable. Moreover, data-driven connectivity maps at the microcircuit level are usually directed graphs, i.e. pairs of neurons are not necessarily connected in a reciprocal way.

Alternatively, a distinct testing scheme is to calculate the total number of synapses (\(N_{\mathrm{syn}}\)) among neurons based on the connection probability, and randomly draw lists containing \(N_{\mathrm{syn}}\) pre- and postsynaptic indexes. In this method, multiple synapses between the same pair are possible. An important detail is that these two schemes can deliver equivalent results depending on how \(N_{\mathrm{syn}}\) is calculated.
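The two schemes can be sketched in Python as follows (population sizes and probability are illustrative). The first draws one Bernoulli trial per pair; the second fixes the total number of synapses and samples pre/post indexes with replacement:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
N_A, N_B, p = 400, 400, 0.1           # population sizes, connection probability

# Scheme 1: pairwise Bernoulli test, one trial per (pre, post) pair.
# Multiple synapses between the same pair cannot occur.
mask = rng.random((N_B, N_A)) < p      # rows: targets, columns: sources
pairs_bernoulli = np.argwhere(mask)

# Scheme 2: fix the total number of synapses and draw pre/post indexes
# at random with replacement; repeated (pre, post) pairs are allowed.
N_syn = int(round(p * N_A * N_B))
pre = rng.integers(0, N_A, size=N_syn)
post = rng.integers(0, N_B, size=N_syn)

print(len(pairs_bernoulli), N_syn)     # similar totals, different multiplicity
```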

An example of such differences can be found in our recent replication [110] of the Potjans-Diesmann cortical microcircuit model [40] (reviewed in Sect. 3.1). Depending on the way the connection probability \(C_{B,A}\) between a neuron j in a source population A of size \(N_A\) and a neuron i in a target population B of size \(N_B\) is calculated, there are notable differences in the average spiking behavior of layer 5 neurons. The exact value of \(C_{B,A}\) is given by

$$\begin{aligned} C_{B,A} = 1 - \left( 1-\frac{1}{N_A N_B}\right) ^{N_{\mathrm{syn}}}, \end{aligned}$$
(8)

where \(N_{\mathrm{syn}}\) denotes the total number of synapses between populations A and B. For small \(N_{\mathrm{syn}}/(N_A N_B)\), the first-order Taylor expansion of Eq. 8 results in the approximate expression for \(C_{B,A}\) given by

$$\begin{aligned} C_{B,A} = \frac{N_{\mathrm{syn}}}{N_A \cdot N_B}. \end{aligned}$$
(9)
Fig. 4
figure 4

Connection probability calculated by the exact expression and its first-order approximation. Connection probability between two neuronal populations, A and B, as a function of the number of synapses \(N_{\mathrm{syn}}\) between them when calculated by the exact formula (Eq. 8) (blue) and its first-order approximation (Eq. 9) (orange). Inset: zoom over small values of \(N_{\mathrm{syn}}\) to highlight the beginning of the difference between the two curves

Figure 4 shows a comparison of the curves of \(C_{B,A}\) versus \(N_{\mathrm{syn}}\) calculated by Eqs. 8 and 9 for fixed numbers of neurons in populations A and B. For a small number of synapses \(N_{\mathrm{syn}}\) the two equations give equivalent connection probabilities, but as \(N_{\mathrm{syn}}\) becomes larger the exact expression and its approximation diverge significantly. This is reflected in the structure of the network and, consequently, in its activity (see [110] for a more detailed discussion).
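A quick numerical check of this divergence (population sizes are illustrative):

```python
import numpy as np

# Comparison of the exact connection probability (Eq. 8) with its
# first-order approximation (Eq. 9), as in Fig. 4.
N_A = N_B = 1000
N_syn = np.array([1e4, 1e5, 1e6, 5e6])

C_exact = 1.0 - (1.0 - 1.0 / (N_A * N_B)) ** N_syn   # Eq. (8)
C_approx = N_syn / (N_A * N_B)                        # Eq. (9)
for n, ce, ca in zip(N_syn, C_exact, C_approx):
    print(f"N_syn = {n:9.0f}: exact = {ce:.4f}, approx = {ca:.4f}")
# For large N_syn the approximation even exceeds 1, while Eq. 8 saturates.
```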

Moreover, one has to determine whether connectivity maps indicate incoming or outgoing synapses. In a similar manner, instead of defining the total number of synapses between two populations, a distinction between in-degree and out-degree may be necessary.

Another key point concerns maps in which connectivity depends on morphology. For these more complex structures, more elaborate methods involving multiple steps before deciding on the creation of connections may be necessary. Notice that the methods described above take into account neither the neuronal morphology nor any spatial structure when placing neurons on a grid. To incorporate this information, the connectivity matrix alone may not be sufficient.

From the point of view of spatial organization, there are limitations with respect to the maximum distance over which a neuron can send projections and to its target preferences. For the first case, a distance-dependent connection probability may be required or, similarly, a fixed connection probability coupled to a distance-dependent rule. Either way, neurons have to be placed on a spatial grid of a chosen dimension so that distance-dependent synapses can be created. As an example, consider the case of a two-dimensional (2D) grid where space is discretized in (x,y) positional variables and each neuron assumes a position on this (x,y)-grid. One can then define the absolute distance \(r_{ij}\) between a presynaptic neuron i and a postsynaptic neuron j,

$$\begin{aligned} r_{ij} = \sqrt{\varDelta x_{ij}^2 + \varDelta y_{ij}^2}, \end{aligned}$$
(10)

where \(\varDelta x_{ij} = \mid x_i - x_j\mid \) and \(\varDelta y_{ij} = \mid y_i - y_j\mid \). Notice that these distances may have limitations determined by boundary conditions. For example, in a 2D square grid of size \(L \times L\) with periodic boundary conditions, \(\varDelta x_{ij}\) as defined above is only valid for \(\mid x_i - x_j\mid \le L/2\), otherwise \(\varDelta x_{ij} = L - \mid x_i - x_j\mid \). The same is valid for \(\varDelta y_{ij}\). Other boundary conditions could also be applied.
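As a sketch, the wrapped distance on an \(L \times L\) periodic grid can be computed as follows (positions are illustrative):

```python
import numpy as np

# Sketch of the distance of Eq. (10) on an L x L grid with periodic
# boundary conditions: each coordinate difference is wrapped to at most L/2.
def torus_distance(p_i, p_j, L):
    d = np.abs(np.asarray(p_i) - np.asarray(p_j))
    d = np.where(d > L / 2, L - d, d)      # wrap-around correction
    return np.hypot(d[0], d[1])            # Eq. (10)

print(torus_distance((0.5, 0.5), (9.5, 9.5), L=10.0))  # 1.414..., not 12.7
```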

Once neuronal distances are handled, network connections can be set up by using the previously stated connectivity matrix as the zero-distance connection probability, and then testing pairs of neurons against a distance-dependent connectivity rule. An example is given by a Gaussian probability density distribution,

$$\begin{aligned} c(r_{ij}) = C_{B,A} e^{-r_{ij}^2/2\sigma _A^2}, \end{aligned}$$
(11)

where \(\sigma _A\) is the standard deviation. This equation is valid for \(r_{ij} \le R\), with R the maximal distance a neuronal projection can reach. A similar approach can be applied to a network with a 3D spatial structure. Other distributions, such as the exponential, are also often used.
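A Python sketch of this procedure, assuming neurons scattered on a 2D grid with open (non-periodic) boundaries; for periodic boundaries, the wrapped distance of the previous sketch would be used instead. All parameter values are illustrative:

```python
import numpy as np

# Sketch of distance-dependent connectivity following Eq. (11): the
# zero-distance probability C_BA is modulated by a Gaussian of width
# sigma_A and truncated at a maximal reach R.
rng = np.random.default_rng(seed=7)
L, N = 1.0, 500                         # grid side and number of neurons
C_BA, sigma_A, R = 0.3, 0.15, 0.45      # Eq. (11) parameters

xy = rng.random((N, 2)) * L             # random (x, y) positions on the grid
edges = []
for i in range(N):                      # presynaptic neuron i
    d = np.hypot(*(xy - xy[i]).T)       # distances to all candidate targets
    prob = C_BA * np.exp(-d**2 / (2 * sigma_A**2))   # Eq. (11)
    prob[(d > R) | (np.arange(N) == i)] = 0.0        # reach limit, no autapse
    targets = np.where(rng.random(N) < prob)[0]
    edges.extend((i, j) for j in targets)

print(len(edges), "connections")
```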

2.4.2 Connectivity maps for networks of neurons with morphology

Other biological features can be implemented together with the distance dependence when creating the connections; an example is the direction-tuning dependency in visual systems [111]. Nevertheless, in a more general manner, the next step towards adding structural complexity to the network is the implementation of neuronal morphology. For this, as an approximation, it is possible to use the same connection probability rules described above and define which pairs of neurons are connected using their somata positions as reference. Then, an additional procedure is required when creating the connections: the distribution of synapses along the neuronal dendritic tree. Different neurons may receive synaptic inputs with distinct distribution patterns in a cell-type- and brain-region-dependent manner.

Consideration of cell morphology is important when details related to specific locations of synaptic contacts are potentially relevant. An example is the organization of synaptic contacts involving inhibitory interneurons in the cortex. While parvalbumin-expressing (PV) interneurons target preferentially the somata of pyramidal cells, somatostatin-expressing (SOM) cells target their distal dendrites [112, 113]. This feature makes the response of a cortical pyramidal neuron to inhibitory input dependent on the type of cell that provides the inhibition. For example, the degree of attenuation of inhibitory postsynaptic potentials depends on where in the soma-dendritic domain of the pyramidal neuron they occur [114]. Similarly, in the hippocampus the modulatory effects on the spiking activity of pyramidal cells due to inhibitory inputs from oriens lacunosum moleculare (OLM) interneurons depend on the locations of these synaptic inputs [115]. Brain networks are more than sets of nodes connected together and modellers must be aware that anatomical and morphological information could be key to a fuller understanding of brain functions.

Usually, the data-driven information needed to create the neuronal morphologies and the specific connection rules are given by the authors of the model or can be found in database repositories such as http://www.neuromorpho.org/. Morphology files can be acquired by many different techniques and software packages, and it is common for files containing morphological information to come in different formats such as Neurolucida [116], SWC (named after the researchers who worked on a system to reconstruct the three-dimensional morphology of neurons [117]), and MorphML [118]. The morphological data available in the neuromorpho repository go through a process of standardization before being published in order to make them uniform and easily readable across platforms. In Fig. 5a we show the content of an SWC file corresponding to a mouse Purkinje cell. Each row of the file displays the information of a given neuronal segment, and the columns contain the following information, from left to right: segment index, segment type, the coordinates x, y, z, the radius of the segment (in \(\upmu \)m), and the parent segment. While the interpretation of most of these properties is straightforward, the segment type and parent deserve further explanation. The segment type encodes which neuronal structure the segment represents, as follows: soma = 1, axon = 2, (basal) dendrite = 3, apical dendrite = 4, and custom = \(5+\). The first segment of an SWC file is always a soma. Notice that a given structure can be composed of many segments, as is the case of the soma in Fig. 5a. The parent field indicates the index i of the segment to which segment j connects; the first segment of the file must have a parent value equal to \(-1\), and for the subsequent segments the parent index must always be smaller than that of the “child” segment.
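Because the SWC format is a plain text table with the seven columns just described, it can be read with a few lines of Python. The sketch below parses the file shown in Fig. 5a (comment lines in SWC start with “#”):

```python
# Minimal SWC reader following the column layout described above:
# index, type, x, y, z, radius, parent.
segments = []
with open("AZ_Adult1_10CELLS-4.CNG.swc") as f:
    for line in f:
        line = line.strip()
        if not line or line.startswith("#"):   # skip comments and blank lines
            continue
        idx, seg_type, x, y, z, radius, parent = line.split()
        segments.append({
            "index": int(idx),
            "type": int(seg_type),     # 1 soma, 2 axon, 3 dendrite, 4 apical
            "xyz": (float(x), float(y), float(z)),
            "radius": float(radius),   # in micrometers
            "parent": int(parent),     # -1 for the root segment
        })

soma = [s for s in segments if s["type"] == 1]
print(f"{len(segments)} segments, {len(soma)} of them somatic")
```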

Fig. 5
figure 5

Morphology using SWC files. In a we show the content of the file AZ_Adult1_10CELLS-4.CNG.swc, obtained from neuromorpho, corresponding to a mouse Purkinje cell. In b we depict the geometrical representation of the first segment (soma) from the same SWC file. In c we show a plot of the cell morphology using the NEURON simulator

Once the cell morphologies are implemented and the neurons are positioned on the spatial grid, one possibility for connecting them is to create connections between pairs of neurons according to the spatial intersections between presynaptic axons and postsynaptic dendrites. Additionally, the same distance-dependent connectivity rules discussed for point-neuron networks can be applied using the somata as reference points. It is important to note that processes (axons and dendrites) emanating from different neurons reach distinct distances. Some neurons make mostly local connections while others make both local and long-range contacts, and this can be taken into account in a distance-dependent way by setting distinct zones with predefined radii centered on reference points, e.g. somata, and allowing specific synaptic types to be created exclusively within such zones. After defining the existence of connections between pairs of neurons, the next step is to define the number of synapses and how they are distributed in their allocated zones. When there is not much information about synaptic positions, one strategy is to use a known distribution (e.g. uniform, Gaussian, etc.) and randomly place the synapses within the predefined zone of the postsynaptic dendritic tree until the maximum number of connections is attained. It is also known that specific neuron types tend to concentrate their received synapses on specific regions of their dendritic trees, and this information should be used to create a more biologically faithful connectivity pattern.

Since implementing these characteristics from scratch in a code is a laborious task, these models are commonly implemented using neurosimulators. For morphologically detailed neuron models, the most used neurosimulator is NEURON [119]. For instance, SWC files can be easily loaded into NEURON. Once imported, each segment is represented as a cylinder, as depicted in Fig. 5b. The morphology can be visualized by means of the NEURON graphical user interface (GUI) or by calling the PlotShape function in a Python (or HOC, the original NEURON programming language) script. In Fig. 5c we show a plot of the whole morphology of the Purkinje cell loaded from the SWC file. Besides NEURON, there are other tools to load cell morphologies. An example is NeuroConstruct [120], in which it is possible not only to load different types of morphology files but also to generate scripts compatible with different simulators such as NEURON, GENESIS [121, 122], MOOSE [123, 124], PSICS [125] and PyNN [126]. NEURON and other simulation environments will be discussed further in the next subsection.
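For reference, a commonly used NEURON pattern for importing an SWC file and plotting it from Python looks like the sketch below (assuming a NEURON installation that ships the Import3d tools; the exact calls may vary between NEURON versions):

```python
from neuron import h

# Load NEURON's morphology import tools and read the SWC file from Fig. 5.
h.load_file("import3d.hoc")
swc = h.Import3d_SWC_read()
swc.input("AZ_Adult1_10CELLS-4.CNG.swc")

# Instantiate the morphology: each branch becomes a NEURON section whose
# segments correspond to the cylinders depicted in Fig. 5b.
importer = h.Import3d_GUI(swc, 0)
importer.instantiate(None)

# Visualize the cell, as in Fig. 5c (requires NEURON's graphics support).
shape = h.PlotShape(False)   # False: build the plot without the GUI window
shape.show(0)
```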

Surely, there is no unique method for building complex neuronal networks, as different biological details may be modeled in distinct ways. In this subsection, we gave examples of general approaches to implement a connectivity map in a neuronal network code. It is important to realize that extracting synapses from a connectivity map derived from data is not a trivial task and may deliver different results depending on the approach. Many of the models discussed in the next section were built following the methods discussed in this section.

2.4.3 Choosing a neurosimulator

Coding a model is a task that can be approached with low- or high-level programming languages. Many computational neuroscientists prefer to develop their own codes in low-level languages such as C/C++ or Fortran. In higher-level languages the syntax is more human-readable and simpler to debug, as in the example of MATLAB (The MathWorks Inc., Natick, USA), which can be used as-is or coupled to packages for neuronal simulation [127]. While low-level languages offer the advantage that the commands are closer to the processor instructions, dealing with complex data through their intricate syntax is not an easy task. Higher-level languages ultimately make the models more accessible to the scientific community by using a more unified syntax, which facilitates information sharing and reproducibility [128, 129]. In this section, we present some of the most popular neurosimulators, which are high-level packages developed with the sole purpose of neural modeling. We also discuss differences among them that should be taken into account when choosing which one to use.

In recent years, Python has become a standard programming language in many research areas due to its high productivity and interpretability. As a consequence, packages for many research fields such as astronomy [130], network analysis [131], machine learning [132, 133], and neuroscience [134] are available to the scientific community. For computational neuroscience in particular, many Python packages—henceforth referred to as neurosimulators—can be used for the in silico implementation of network models such as the ones reviewed in Sect. 3.

We discuss three neurosimulators available in Python: Brian 2 [135], NEST [136], and NEURON [137]. As discussed in Sect. 2.2, a single neuron model can be classified as simplified or conductance-based, and can have a morphology or not. When choosing a neurosimulator, one should first define into which of these categories the adopted model fits.

The neurosimulator Brian 2 was developed to be used in Python [135]. Although its main focus is on point neurons, Brian 2 also offers the possibility of defining cells with morphologies. A useful feature of Brian 2 is that it allows the user to define the ordinary differential equations (ODEs) of the model, so it is possible to describe both simplified and conductance-based models in the package. However, since the number of ODEs increases rapidly with the number of ionic channels modeled, Brian 2 is not very practical for detailed conductance-based models, whose implementation effort and computational cost quickly become burdensome.
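A minimal Brian 2 sketch of a leaky integrate-and-fire population, with the ODE written as a string; parameter values and names such as v_in are illustrative, not taken from any particular model:

```python
from brian2 import NeuronGroup, run, ms, mV

# The user defines the model ODEs directly as strings; v_rest, v_in and
# tau are declared as per-neuron parameters.
eqs = """
dv/dt = (v_rest - v + v_in) / tau : volt
v_rest : volt
v_in : volt
tau : second
"""
group = NeuronGroup(10, eqs, threshold="v > -50*mV", reset="v = -65*mV",
                    method="euler")
group.v = -65 * mV
group.v_rest = -65 * mV
group.v_in = 20 * mV      # constant depolarizing drive (illustrative)
group.tau = 10 * ms

run(100 * ms)             # integrate the network for 100 ms
```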

NEST [136] constitutes another option for modeling large-scale networks of point neurons. Instead of allowing the user to define the ODEs of the model as in Brian 2, NEST has pre-implemented models available and the user only imports them. This is convenient for its practicality, but can be a problem if the model requires some mechanism that is not pre-implemented. In that case, the user is required either to work with the NEST source code in C/C++ or to use a tool called NESTML which implements new models for NEST [138]. Both Brian 2 and NEST offer support to run models in parallel, although only NEST offers message passing interface (MPI) support. Because of that, NEST is more appropriate for large-scale models and is compatible with high-performance computing, allowing a simulation to run across many compute nodes [139, 140].
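A minimal NEST sketch (assuming NEST 3.x syntax; population sizes, rates, and weights are illustrative):

```python
import nest

# Pre-implemented models are simply created by name; here, two populations
# of leaky integrate-and-fire neurons ("iaf_psc_alpha") connected with a
# fixed pairwise connection probability.
nest.ResetKernel()
exc = nest.Create("iaf_psc_alpha", 800)
inh = nest.Create("iaf_psc_alpha", 200)

conn_rule = {"rule": "pairwise_bernoulli", "p": 0.1}
nest.Connect(exc, inh, conn_rule, syn_spec={"weight": 20.0})   # excitatory (pA)
nest.Connect(inh, exc, conn_rule, syn_spec={"weight": -80.0})  # inhibitory

noise = nest.Create("poisson_generator", params={"rate": 8000.0})  # spikes/s
nest.Connect(noise, exc, syn_spec={"weight": 20.0})

rec = nest.Create("spike_recorder")
nest.Connect(exc, rec)
nest.Simulate(1000.0)   # simulation time in ms
print(rec.n_events, "spikes recorded")
```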

Finally, the NEURON simulator [137] is the alternative of choice for modeling neurons with morphology and networks made of them. In NEURON, a morphology is represented by a series of cylindrical compartments connected to each other, and even a single-compartment neuron with only a soma has a geometric representation with surface area and length. Even though one can adopt a simplified approach to model the compartment dynamics, the NEURON simulator works optimally with biophysical models, where several ionic currents can be easily added to a neuron model. The simulator already has many ionic mechanisms implemented, such as the classic fast sodium and delayed rectifier potassium channels of the Hodgkin-Huxley model [94] and many others, which can be found in databases such as ModelDB [141]. If a specific ionic mechanism is needed, it can be programmed in a “.mod” file; however, this is a lower-level language and can be a nuisance for less experienced users. Despite the practicality of implementing morphological models in NEURON, building neuronal networks with this neurosimulator can be very challenging. In this regard, it is possible to use packages that work on top of NEURON, such as NetPyne [142], the Brain Modeling Toolkit (BMTK) [143], or BioNet [144], to construct large-scale networks that can be efficiently run in parallel using NEURON.

Many other packages for computational neuroscience are available, some of them, such as PyNN [126], even try to integrate Brian 2, NEST, and NEURON. A complete characterization of the different neurosimulators available would be out of the scope of the present review; for this, we recommend the reader to see [140, 145]. Several efforts have also been undertaken in recent years to promote code sharing and reproducibility in computational neuroscience [128, 129, 146,147,148]. We emphasize the existence of repositories such as ModelDB [149] and journals dedicated to the replication of computational work such as the ReScience Journal [150], which are important steps towards transparency in science.

3 How modeling from connectivity maps can be done

As an example, one of the first challenging endeavors following an approach similar to the one described above was put forward by Izhikevich and Edelman [151]. They proposed a hybrid model built by mixing diffusion tensor imaging (DTI) data with local circuitry information. The idea was basically to create a brain model from scratch, check what type of collective behavior it exhibits, and learn on the fly what it takes to simulate a brain (as stated by one of the authors on his personal website [152]). There are many details to this model, and the authors give a comprehensive description of it in the supplementary material of their article. Here, we will describe only its main features.

The entire cortex is reduced to fit into a sphere 40 mm in diameter. This is done to keep the density of neurons appropriate. Neurons are randomly placed on the surface of the cortex, and they establish synaptic contacts according to a detailed map derived from two sources. First, the cortical surface map (which provides the coordinates of cortical neurons) was measured via anatomical MRI, whereas DTI provided the white matter tracts through which distant cortical regions are connected; this corresponds to the macroscopic structure of the model. Second, the microscopic structure (i.e., the connectivity of the cortical layers) came from a major study of cortical area 17 of the cat [60], which yielded a long table of connection probabilities. It contained cortical layers 1 to 6 plus the thalamus and the reticular thalamic nucleus (RTN). The model also comprises many different neuron types: pyramidal cells, spiny stellate neurons, basket and non-basket interneurons, thalamocortical relay cells, RTN cells and thalamic interneurons are some examples. These neurons have their morphologies partially described in terms of compartments representing the ramifications of their dendritic trees and axonal projections.

Regarding the dynamics, the model was originally built with 10\(^6\) neurons and about 5\(\times 10^8\) synapses, although the precise number varied throughout the study. Dendritic compartments and somata are described by the two-dimensional quadratic integrate-and-fire model proposed by Izhikevich [89]. The input current to each soma compartment is given by the sum of the dendritic current and the synaptic current. The synaptic currents are composed of AMPA, NMDA, GABA\(_{\mathrm{A}}\), GABA\(_{\mathrm{B}}\) and gap junction contributions (the latter are only present inside layers and between specific sets of cells). The neurotransmitter-modulated synapses have time scale parameters and reversal potentials taken from the experimental literature, and their conductances obey first-order linear kinetics. The dendritic currents and the gap junctions are given by simple Ohmic currents. Short-term synaptic plasticity is modeled by a depressing or facilitating factor multiplying the synaptic conductances. Long-term plasticity is implemented via dendritic STDP rules using dopaminergic parameters, and the GABAergic synapses do not undergo these adaptations.

In terms of efficiency, it took the authors 1 minute to simulate approximately 1 second of network dynamics with a sub-millisecond time step. In order to keep the network activity going, the authors had to do a pre-run of the whole network subjected to spontaneous synaptic release (i.e., synaptic noise). Then, the synaptic plasticity present in the model was sufficient to generate a state with self-sustained activity. This state was used to detect signatures of chaos (sensitivity to initial conditions), brain rhythms, and traveling waves composed of up and down clusters of neurons with velocities consistent with experiments. Gamma rhythms were mostly present at the finer spatial scales, since averaging over a small piece of cortex hid these waves. Delta, beta and alpha rhythms also emerged. Interestingly, even though the cortical microcircuitry was homogeneous, different regions of the cortex developed different rhythms—this diversity might have come from the heterogeneity introduced by the long-range white matter tract connections. The authors only suggested a way to calculate functional connectivity from their model, but did not do a thorough investigation to check what kinds of functional networks would appear.

The model by Izhikevich and Edelman [151] is an exemplary case of the modeling approaches and strategies described in the previous section. Decisions about connectivity, neuron and synapse models had to be made during its construction, and they certainly influenced the final result. Note that the connectivity was primarily extracted from a single work [60] following rules like those described in Sect. 2.4.

3.1 Further examples of recent network models based on connectivity maps

Following the guidelines presented above, below we summarize different recent models which are entirely based on data-driven connectivity maps. The list is far from complete but illustrates to the reader how important areas of the brain can be approached in a similar manner. It can also be taken as a starting point by newcomers who want to pursue further studies in the field.

The models are listed in Tables 1 and 2. Table 1 indicates the brain area modeled; the single neuron model used (simplified, which could be the one-dimensional leaky integrate-and-fire (LIF) [153], the two-dimensional Izhikevich [154] or adaptive exponential integrate-and-fire (AdEx) [90], or the multivariable and multiparametric generalized integrate-and-fire (GLIF) [155] models, or conductance-based (CB) [37]); whether the neurons have morphology or are pointwise; the network size; whether the model implements synaptic plasticity or not; and whether the network has spatiality or not. Table 2 indicates whether the code of the model is publicly available or not and on which platform (see the web addresses indicated in the text), and the neurosimulator/programming language used. We want to point out that the code availability was based on the direct links presented explicitly in the references and, as some of the references are preprint papers, the code may become available after acceptance. Also, cases where the codes are available only by contacting the authors were not included.

Table 1 Summary of recent data-driven models with structural connectivity at microcircuit level
Table 2 Code availability and programming languages used in the reference works

3.1.1 Models of the hippocampus

The hippocampus is one of the most studied parts of the brain, and there are two basic reasons for this: first, it has a prominent laminar organization of neurons and their afferents, making it a popular model system for studies of cortical function; and second, it is involved in important functions such as episodic memory formation, synaptic plasticity and spatial coding.

Interneuronal mechanisms of hippocampal theta oscillations in a full-scale model of the rodent CA1 circuit [32]: This is a full-scale (1:1) model of the rodent hippocampal CA1 region. The model has four layers with 338,740 neurons divided into 311,500 pyramidal cells and 27,240 interneurons. The pyramidal cells are all described by the same multicompartmental conductance-based model with realistic morphology, and the interneurons are of eight different conductance-based types with simplified morphologies. Details of the neuronal dynamics, including descriptions of the ionic channels and their equations, are given in the appendix of the reference paper. The connectivity was extracted from experimental data and the synapses were modeled by a double exponential function without plasticity. To connect the neurons, the distance between each pair of cells was taken into account together with the cell-type-dependent number of connections. Then the total number of incoming synapses to a postsynaptic neuron was divided into radial bins and distributed among the bins according to a Gaussian axonal bouton distribution of the presynaptic cell. The goal of the authors was to investigate theta oscillations and the importance of interneurons in their different phases. They found that parvalbumin-expressing interneurons and neurogliaform cells, as well as the interneuronal diversity itself, are very important for the theta rhythm, although the gamma rhythm is also observed and discussed in the work. The code is available in ModelDB at https://senselab.med.yale.edu/ModelDB/showModel.cshtml?model=187604, and also in the Open Source Brain (OSB) database at https://www.opensourcebrain.org/projects/nc_ca1.

Data-driven integration of hippocampal CA1 synaptic physiology in silico [156]: This is a detailed model of the rat hippocampal CA1 area following the same principles used in the construction of the model of the rat somatosensory cortex [69] (see below). The model consists of about 400,000 cells (\(\sim \)90% pyramidal cells and \(\sim \)10% interneurons), represented by 11 morphologically reconstructed cell types described as multicompartmental conductance-based models [168]. The model describes postsynaptic conductances with biexponential kinetics and has synaptic plasticity, with short-term plasticity (STP) dynamics described by the Tsodyks-Markram model [169]. The model provides a complementary resource for the quantification of network structure in the rodent hippocampal CA1 region, and helps in the identification of gaps in existing knowledge.

Towards a large-scale biologically realistic model of the hippocampus [157]: The model describes the pathway from layer II of the entorhinal cortex (EC) to the dentate gyrus (DG) (the EC-to-DG pathway) of the rat hippocampus. This pathway is part of a larger circuit that also comprises the pathways from EC to the CA3 region, from DG to CA3, and from CA3 to CA1. The model describes morphological structures such as dendritic trees based on experimentally measured parameters [170]. It is composed of 100,000 granule cells, 11,200 EC cells and 1000 basket cells. The connectivity map was extracted from experimental work [171], which defines the presynaptic connections based on anatomical distances and then randomly distributes them until the spine pool is depleted. Synapses implement STDP rules for both long-term potentiation (LTP) and long-term depression (LTD), as well as STD and STP. Although the authors simulated only a reduced-scale (1/10th) version of the full EC-to-DG pathway, their cluster simulations showed good performance. Improvements of this model were made to study the topography dependency of spatio-temporal correlations in the entorhinal-dentate-CA3 circuit [172]. A comparison between the 1/10th and full-scale versions of the model was carried out [173]. The code was developed in NEURON [119, 174] and the data analysis was done in Python.

3.1.2 Models of the olfactory bulb

The olfactory system has a remarkable sensitivity and dynamic range, enabling the discrimination of thousands of volatile molecules (odorants) that occur in variable concentrations and move chaotically in the turbulent environment around us. This makes the olfactory system one of the favorite systems for the study of the spatiotemporal representation and processing of stimuli in the brain. The olfactory bulb is the main structure in the olfactory processing pathway from nose to cortex, and because of this it has received considerable attention. A wealth of data is available on the olfactory bulb and its afferent and efferent connections, allowing the development of computational models with a high level of biological detail and connectivity information, as in the two examples given below.

Distributed organization of a brain microcircuit analyzed by three-dimensional modeling: the olfactory bulb [158]: This model, based on a previous one-dimensional model of the olfactory bulb of the rat [175], is a three-dimensional model of the same structure composed of cells with realistic morphologies and input patterns. The main goal of the authors was to use the model to study the encoding of monomolecular odor stimuli by the mitral-granule cell circuitry. More specifically, they analyzed this odor representation by inspecting the clusters formed between neurons due to synaptic plasticity after odor presentation. To constrain the peak conductances of the model to biological ranges, the authors first analyzed the activity levels of 127 glomeruli via optical imaging and used the normalized values in the model. The input is considered realistic because the peak conductance of the odor input is randomly drawn from a normal distribution derived from the optical imaging data. To model the morphology of mitral cells the authors used 8 experimentally obtained 3D cell reconstructions. For each mitral cell the soma compartment was randomly drawn from one of these 8 recorded morphologies, while the lateral dendrites were generated by an algorithm developed by the authors, which grows the dendritic tree while preserving the following parameters from the experimental reconstructions: growth direction of the dendrites, path lengths, branch lengths, and the probability of each branch order. The authors generated a total of 635 morphologies for mitral cells, which in turn were connected to a variable number of granule cells ranging from 13,260 to 69,013 depending on the connectivity rule adopted. The background input to the granule cells, which was not explicitly modeled, was provided by Poisson generators. Synaptic plasticity (LTP and LTD rules) was implemented to model odor learning. Using the model, the authors showed that the glomerular pattern of activation depends on the odor input (in this study a set of 20 natural odorant stimuli was used, e.g. coffee, kiwi and mint), suggesting that the glomerular activity patterns could be used by the system to encode the stimulus. The code is available in ModelDB at: https://senselab.med.yale.edu/ModelDB/ShowModel?model=151681.

Synaptic clusters function as odor operators in the olfactory bulb [159]: The authors sought a better understanding of how odor stimuli are processed and interpreted by the mitral-granule circuit of the olfactory bulb by analyzing the configuration of the synaptic weights, referred to as a synaptic cluster, after stimulus presentations. The cluster formed by a single glomerular unit was defined as a column. The model is the 3D model described above [158], which enables a realistic representation of the overlapping neuronal dendritic trees. It consists of 635 mitral cells organized into 127 glomeruli (5 per glomerulus), and 97,017 granule cells. The network has synaptic plasticity (LTP and LTD), and the weights are modified independently in each dendrodendritic synapse between mitral and granule cells, depending on the local spiking activity, after the presentation of an odorant input. The model was validated by comparing the formation of synaptic clusters after stimulus presentation with experimental data representing a cluster configuration, obtained from pseudorabies virus staining. Among the main simulation results reported by the authors, we can highlight: (i) the model reproduces the spread of backpropagating action potentials along the dendrites of mitral cells within a glomerulus, as observed experimentally; (ii) an analysis of the final configuration of synaptic weights after exposure to two odorant stimuli in different orders showed that odor exposure is noncommutative, suggesting that the olfactory bulb represents odors learned in the past depending on the order in which they were learned. The authors also developed a mathematical framework describing how the circuit represents its outputs as a series of matrix operations. The code is available in ModelDB at: https://senselab.med.yale.edu/ModelDB/ShowModel?model=168591.

3.1.3 Models of frontal lobe regions

The frontal lobes comprise about one-third of the cortical tissue. This large area is involved in a wide range of motor and executive functions, including simple and complex motor skills, attention, judgement, reasoning, problem solving and emotional regulation. The frontal lobes are also extensively connected with other brain regions. The complexity of the structures and functions of the frontal lobes has hindered our understanding of how these structures and functions are related, and modeling is likely to play a key role in future advances on this theme. We present below two examples of models of frontal lobe regions: one of the primary motor cortex and one of the prefrontal cortex.

Multiscale dynamics and information flow in a data-driven model of the primary motor cortex microcircuit [33]: This is a biophysically detailed model of the mouse primary motor cortex (M1) with over 10,000 neurons and 35 million synapses. The modeled M1 microcircuit is represented by a cylindrical volume of 300 \(\upmu \)m diameter divided into six layers (L1, L2/3, L4, L5A, L5B and L6). Seven different types of pyramidal cells and two types of inhibitory interneurons were implemented, all using a conductance-based formalism. Among the different types of neurons, the authors focused on, and gave more biological detail to, two layer 5 cell types: the intratelencephalic (IT) and the pyramidal-tract (PT) neurons. The IT and PT neurons are modeled with full dendritic morphologies and have 700+ compartments; for the other cells, simpler models with around 3 to 6 compartments were used. The synapses are conductance-based and non-plastic, representing AMPA and NMDA receptors for excitatory connections, and GABA\(_{\mathrm{A}}\) and GABA\(_{\mathrm{B}}\) receptors for inhibitory connections. The model exhibits spontaneous neural activity patterns and oscillations consistent with M1 data, which depend on cell class and on layer and sublaminar location. The simulated local field potential (LFP) displays delta and beta/gamma oscillations, with the gamma amplitude modulated by the delta phase. The flow of information, measured by spectral Granger causality, indicates routes at frequencies in the high beta/low gamma band. Furthermore, brief activation of specific long-range inputs or different neuromodulatory conditions changes the output dynamics. The results of the model simulations suggest an association between the cell-type-specific circuits of M1 and dynamic aspects of activity, such as oscillations, neuromodulation and information flow. The model was developed using NEURON [119, 174] and NetPyNE [142].

A detailed data-driven network model of prefrontal cortex reproduces key features of in vivo activity [160]: The model is a detailed neuronal network constructed from in vitro electrophysiological and anatomical data from the prefrontal cortex (PFC) of the rat and mouse. The 1,000 point neurons in the model are described by a simplified version of the AdEx model (simpAdEx [176]). Population heterogeneity is taken into account via the four different neuron types used in the network: pyramidal cell, fast-spiking interneuron, bitufted interneuron, and Martinotti cell. These neurons are distributed into two cortical layers, representing the superficial layers 2/3 and the deep layer 5. The connections are randomly created following different connection probabilities for each pair of cell types. Missing experimental information about rodents was replaced by data from monkeys, ferrets and cats. Synapses (AMPA, NMDA and GABA\(_{\mathrm{A}}\)) are conductance-based and modeled by double-exponential functions, and the model also includes short-term plasticity dynamics. Characteristics such as spike train statistics, membrane potential fluctuations, local field potentials, and the transmission of transient stimulus information across layers were validated by the authors against experimental recordings, and the validation appears robust even under moderate changes in the parameters. The code, written in C and MATLAB (The MathWorks Inc., Natick, USA), is available at https://senselab.med.yale.edu/ModelDB/showmodel.cshtml?model=189160.
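The double-exponential conductance waveform used here (and in several of the other models above) is easy to state explicitly. Below is a minimal sketch with hypothetical AMPA-like time constants, normalized so that the peak equals g_max.

```python
import numpy as np

def biexp_conductance(t, g_max, tau_rise, tau_decay):
    """Double-exponential synaptic conductance for a spike arriving at
    t = 0, normalized so that its peak equals g_max (illustrative
    parameters, not those of the PFC model)."""
    t_peak = (tau_rise * tau_decay / (tau_decay - tau_rise)
              * np.log(tau_decay / tau_rise))   # time of the conductance peak
    norm = np.exp(-t_peak / tau_decay) - np.exp(-t_peak / tau_rise)
    g = (g_max / norm) * (np.exp(-t / tau_decay) - np.exp(-t / tau_rise))
    return np.where(t >= 0.0, g, 0.0)           # zero before spike arrival

t = np.arange(-5.0, 50.0, 0.1)                                     # ms
g = biexp_conductance(t, g_max=1.0, tau_rise=0.5, tau_decay=5.0)   # nS
```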

3.1.4 Models of sensory cortices

The sensory cortices are responsible for receiving and interpreting sensory information from different modalities. The similarities experimentally observed among the local patterns of connections across sensory cortices of different animal species led to the concept of the “canonical neocortical microcircuit” [177]. This idea is implemented in the different computational models presented in this subsection, which focus on the somatosensory and visual systems. The examples show implementations with different levels of biological detail, from networks of point neurons to complex networks of neurons with morphology and synaptic plasticity. They illustrate how, once a connectivity map is available, the level of biological detail put into the model may vary depending on the scientific question. Some of the examples below use the same connectivity map to generate networks with different mechanisms implemented at the neuronal and/or synaptic level.

The cell-type specific cortical microcircuit: Relating structure and activity in a full-scale spiking network model [40]: This is a multilayered model of the microcircuit of the early sensory cortex based on data from primary sensory cortices of the cat, rat, and mouse. The model combines anatomical and physiological data in a detailed implementation of the canonical neocortical microcircuit [177]. Missing experimental data were replaced by estimates and approximations involving target specificity. The model reproduces at full scale the neuronal network under a surface area of 1 mm\(^2\) of early sensory cortex, and contains 77,169 excitatory and inhibitory neurons divided into four layers, comprising eight cell populations: L2/3e, L2/3i, L4e, L4i, L5e, L5i, L6e, and L6i. All neurons are point neurons described by the same leaky integrate-and-fire model. The network is built from a given connectivity matrix with probabilities of connection between each pair of populations. After short delays following presynaptic spikes, synaptic currents undergo step increases and then decay exponentially; the step magnitudes and decay time constants are fixed and depend on the synapse type (excitatory or inhibitory). Synaptic delays are randomly drawn from a uniform distribution. Inputs coming from other brain regions are modeled by Poisson spike trains. The model generates spontaneous asynchronous irregular activity and cell-type specific firing rates in good agreement with in vivo recordings. The code is available on the OSB web site at http://opensourcebrain.org/projects/potjansdiesmann2014 (versions in NEST, PyNEST, and PyNN). A replication of the model in Brian 2 [110] is available at https://github.com/ReScience-Archives/ShimouraR-KamijiNL-PenaRFO-CordeiroVL-CeballosCC-RomaroC-RoqueAC-2017/.
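To make the construction concrete, the sketch below instantiates a network from a population-level connection probability matrix by independent Bernoulli draws for every (presynaptic, postsynaptic) pair. The population names echo the model above, but the sizes and probabilities are placeholders, not the published values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder population sizes and pairwise connection probabilities
pop_sizes = {"L2/3e": 400, "L2/3i": 100, "L4e": 400, "L4i": 100}
pops = list(pop_sizes)
conn_prob = rng.uniform(0.02, 0.15, size=(len(pops), len(pops)))

offsets = dict(zip(pops, np.cumsum([0] + [pop_sizes[p] for p in pops[:-1]])))
edges = []
for i, pre in enumerate(pops):
    for j, post in enumerate(pops):
        # One Bernoulli draw per (pre, post) neuron pair
        mask = rng.random((pop_sizes[pre], pop_sizes[post])) < conn_prob[i, j]
        pre_idx, post_idx = np.nonzero(mask)
        edges.append(np.column_stack((pre_idx + offsets[pre],
                                      post_idx + offsets[post])))
edges = np.concatenate(edges)
print(f"{len(edges)} synapses created among {sum(pop_sizes.values())} neurons")
```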

Hybrid scheme for modeling local field potentials from point-neuron networks [161]: This work proposes a hybrid modeling scheme combining point-neuron network models with the biophysical principles underlying LFP generation by neurons. To demonstrate the hybrid scheme the authors use two models: the first is a modified version of the point-neuron network of Potjans and Diesmann described above [40]; this network is simulated and its spiking activity is recorded. The second is a network of multicompartmental neurons which receive cell-type- and layer-specific synaptic inputs given by the spikes generated by the first model. The multicompartmental neurons belong to 16 different cell classes reconstructed from several published sources, mainly from the cat visual cortex. Similarly to the model by Izhikevich and Edelman [151], the fractions of neuron types per layer were extracted and adapted to create the multicompartmental version, which was used to estimate the LFP.

Reconciliation of weak pairwise spike-train correlations and highly coherent local field potentials across space [162]: This model is an extension of the Potjans and Diesmann model [40] that includes spatial structure and upscales the surface area delimiting the cortical column from 1 mm\(^2\) to \(4 \times 4\) mm\(^2\). The network size was chosen to match the size of the Utah multi-electrode array and to reproduce LFP measurements related to the spatial propagation and distance-dependent correlation of evoked neural activity. The number of neurons, totaling 1,249,136, was adjusted to keep the same neuron density per layer as in the Potjans and Diesmann model, and the connectivity matrix was modified, resulting in a matrix of zero-distance connection probabilities. The point neurons are modeled by the LIF equations and are randomly placed on a 2D square grid of the same size as the surface area. The synaptic connections are randomly created by a distance-based connection probability rule following a Gaussian profile. The network activity preserves the main characteristics of the Potjans and Diesmann model, including the asynchronous irregular spiking across populations. Moreover, in agreement with what is experimentally observed in the sensory cortex, the model displays weak pairwise spike-train correlations while also showing strong and distance-dependent LFP correlations. The network is implemented in NEST [136] and the LFP is measured using NEURON, LFPy and hybridLFPy. The data analysis in the article was done using Python.
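A minimal sketch of such a distance-based Gaussian connection rule is given below, with neurons scattered on a 2D sheet. The zero-distance probability and spatial spread are hypothetical, not the published values.

```python
import numpy as np

rng = np.random.default_rng(7)

n, L = 2000, 4000.0                         # number of neurons; sheet side (um)
pos = rng.uniform(0.0, L, size=(n, 2))      # random 2D positions

p0, sigma = 0.15, 300.0                     # zero-distance probability; spread (um)
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
p = p0 * np.exp(-d**2 / (2.0 * sigma**2))   # Gaussian decay with distance
np.fill_diagonal(p, 0.0)                    # no self-connections
adj = rng.random((n, n)) < p                # adjacency: adj[i, j] means i -> j
print(f"mean out-degree: {adj.sum(axis=1).mean():.1f}")
```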

Multi-scale account of the network structure of macaque visual cortex [107]: This model is an expansion of the Potjans and Diesmann model [40] from a single microcircuit to a multi-area network representing the 32 vision-related areas of the macaque cortex. Each of the 32 areas is constituted by a 1 mm\(^2\) microcircuit containing four layers (except the parahippocampal area, which has only three) with populations of excitatory and inhibitory neurons as in the Potjans and Diesmann model. The number of neurons per area varies from 197,936 (area V1) to 73,251 (area TH). The neuron and synapse models are the same as in the Potjans and Diesmann model. Neuron densities and layer thicknesses are adjusted based on the region where each area is located. The long-range connections (among areas) reproduce layer-specific inter-area projections taken from experimental data. The authors describe in the article all the steps used to obtain the structural map for local and long-range connections, including approximations and references. Although the connectivity matrix is not explicitly shown, the supplementary data contain tables with all values required to build the model. The code is available on GitHub at https://github.com/INM-6/multi-area-model, and from it one can extract all the information necessary to reimplement the model.

A multi-scale layer-resolved spiking network model of resting-state dynamics in macaque visual cortical areas [108]: In this work the authors studied the dynamics of the multi-area model described above [107]. The network reproduces spiking statistics from electrophysiological recordings and cortico-cortical functional connectivity patterns observed in functional magnetic resonance imaging (fMRI) studies under resting-state conditions.

Griffiths phase and long-range correlations in a biologically motivated visual cortex model [21]: This is a detailed network model representing the form recognition pathway of the visual system of mammals. It comprises five layers: the retina photoreceptors, the thalamic lateral geniculate nucleus (LGN), and the cortical layers 4C\(\beta \), 2/3, and 6. Neurons are modeled by discrete-time compartmental integrate-and-fire dynamics with dissipative dendritic compartments and conservative axonal and somatic compartments. The number of dendritic and axonal compartments is adjusted to keep the velocity of propagation of the action potential comparable to experiments. The neurons have a simplified morphology, in which dendritic, somatic, and axonal compartments are organized in a linear structure. The structural matrix for interlayer excitatory connectivity is given by the authors, and was drawn from experiments with the macaque monkey. Although lateral inhibition is not taken into account, the network excitation is balanced by dissipation in the dendrites. The retina layer contains \(10^6\) photoreceptors, and each of the other layers contains around \(100^2\) neurons, summing up to 40,000 neurons in the network. A total of up to \(32.5\times 10^6\) synapses are formed by linking axonal to dendritic compartments according to predefined distributions of synaptic boutons over the bodies of the neurons. The network has recurrent connections between layers 2/3 and 4C\(\beta \), and interlayer connections are made according to a Gaussian distribution around the presynaptic neuron position, creating a columnar structure. The authors use the size of the postsynaptic potential (PSP), obtained as an average over macaque brain experiments, as a control parameter, and the network starts to display activity in a complex feedback pattern as the PSP is increased past 1 mV [19]. The phase transition happens via an unusual type of nonequilibrium percolation over a Griffiths region [23], in contrast to the directed percolation frequently found in neuronal networks [16, 178,179,180]. The network also presents power-law avalanches and long-range correlations (with an approximately 1/f power spectrum of avalanche sizes) [21, 23].

Systematic integration of structural and functional data into multi-scale models of mouse primary visual cortex [111]: This is a data-driven model of the mouse primary visual cortex (V1), containing 230,924 neurons, that can simulate physiological experiments with arbitrary visual stimuli. The model contains 17 cell classes divided into five cortical layers (1, 2/3, 4, 5 and 6) and represents a cortical column with a radius of 845 \(\upmu \)m. The model contains a connectivity matrix giving the connection probabilities between all neuron types in each layer. Synapses are modeled by bi-exponential functions and are not plastic. Two variants of the model were developed: a version with biophysically detailed compartmental neuron models [181], and a version with generalized leaky integrate-and-fire (GLIF) point-neuron models [155]. Both versions are constrained by experimental measurements and reproduce many observations from electrical recordings in vivo [182]. Also, in both models the firing rate distributions in response to the inputs used are similar. According to the authors, the tuning adjustments made in the networks to reproduce experimental data were useful to identify rules governing cell-class-specific connectivity and synaptic strengths. The simulations were developed with the Brain Modeling ToolKit (BMTK) (https://alleninstitute.github.io/bmtk, [143, 183]), which facilitates simulations with both NEURON [119] and NEST [184] and supports Python 2.7 and 3.6. Model and simulation outputs are saved in the SONATA format (https://github.com/AllenInstitute/sonata [185]). All models, code, and metadata are publicly available via the Allen Brain Map web portal at https://portal.brain-map.org/explore/models/mv1-all-layers.

Visual physiology of the layer 4 cortical circuit in silico [163]: This is a data-driven, biophysically and anatomically detailed circuit model of layer 4 of the primary visual cortex (V1) of the mouse. The detailed network has 10,000 neurons divided into five cell types, three excitatory and two inhibitory, which are modeled using individual neuron parameters from the Allen Cell Types Database (http://celltypes.brain-map.org/). The connectivity map is based on data from V1 of the cat. To prevent boundary artefacts, the detailed model was surrounded by a simplified network of 35,000 leaky integrate-and-fire neurons divided into excitatory and inhibitory groups. The cells were spatially distributed and the connections were randomly created according to a probability rule implementing a linear decay with intersomatic distance and a dependence on the difference between the assigned preferred orientation angles. Once a connection was established, the number of synapses was randomly selected from a uniform distribution between 3 and 7. Additionally, a functional synaptic connectivity rule known as “like-to-like connectivity”, commonly observed among excitatory neurons in mouse V1 L2/3, was applied. The positions of the synapses along a postsynaptic dendritic tree are randomly assigned with distance constraints based on experimental data. The adopted synaptic models are the bi-exponential function for the biophysical neurons, and an instantaneous rise followed by exponential decay for the LIF cells. The inputs coming from the lateral geniculate nucleus (LGN) of the thalamus are simulated as a set of filters representing the image processing done at pre-cortical stages. The same or similar sets of visual stimuli presented to the model were also presented to mice in in vivo experiments for comparison. The model reproduces a large set of experimental results, including effects of optogenetic perturbations. Simulations showed that the tuning properties of neurons are affected by the functional connectivity rules. A simplified version of the network was also simulated, with qualitatively similar results that lacked quantitative agreement. The simulations were done using Python 2.7 together with NEURON 7.4 [174] through the BioNet package [183], which is an interface for modeling large-scale networks. The codes are available at: https://github.com/AllenInstitute/arkhipov2018_layer4.
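A sketch of a connection probability combining a linear decay with intersomatic distance and a “like-to-like” orientation-tuning bias is shown below; the functional form of the tuning term and all parameter values are hypothetical, chosen only to illustrate the idea.

```python
import numpy as np

def connection_prob(dist, d_ori, p_max=0.3, d_cut=300.0, kappa=1.0):
    """Hypothetical 'like-to-like' rule: the probability decays linearly
    with intersomatic distance (um) and is biased toward pairs with
    similar preferred orientations (d_ori in radians, in [0, pi/2])."""
    p_dist = p_max * np.clip(1.0 - dist / d_cut, 0.0, None)  # linear decay
    p_ori = np.exp(kappa * (np.cos(2.0 * d_ori) - 1.0))      # tuning bias <= 1
    return p_dist * p_ori

# Nearby pair with similar (10 deg) vs. orthogonal (90 deg) tuning
print(connection_prob(100.0, np.deg2rad(10.0)))
print(connection_prob(100.0, np.deg2rad(90.0)))
```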

A biophysically detailed model of neocortical local field potentials predicts the critical role of active membrane currents [164]: This is a model of cortical layers 4 and 5 populated by over 12,000 multicompartmental pyramidal and basket cells. The data used come from the rat primary somatosensory barrel cortex (S1). There are five million dendritic and somatic compartments with voltage- and ion-dependent currents, realistic connectivity, and probabilistic AMPA, NMDA, and GABA synapses. The neuronal somata were randomly placed, according to layer depth, in a 3D hexagonal volume with a radius of 320 \(\upmu \)m, and synaptic connections were created for \(5\%\) of the appositions closer than 3 \(\upmu \)m. Simulation results show that the LFP is dominated by active currents rather than synaptic currents. This is a convenient model for reproducing the LFP and studying how it is affected by interactions within and between layers in cortical circuits. The model is part of a collaborative effort between several institutions, including the Blue Brain Project and the Allen Institute. The code, which unfortunately is not available, was written in NEURON [119].
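This apposition rule can be caricatured with random points standing in for axonal and dendritic segment midpoints: find all pairs closer than 3 \(\upmu \)m and keep 5% of them as synapses. Everything below is a toy stand-in, not the original touch-detection algorithm.

```python
import numpy as np

rng = np.random.default_rng(11)

# Random stand-ins for axonal and dendritic segment midpoints (um)
axon_pts = rng.uniform(0.0, 100.0, size=(1000, 3))
dend_pts = rng.uniform(0.0, 100.0, size=(1000, 3))

# Appositions: (axon, dendrite) segment pairs closer than 3 um
d = np.linalg.norm(axon_pts[:, None, :] - dend_pts[None, :, :], axis=-1)
appositions = np.argwhere(d < 3.0)

# Keep a random 5% of the appositions as functional synapses
keep = rng.random(len(appositions)) < 0.05
synapses = appositions[keep]
print(f"{len(appositions)} appositions -> {len(synapses)} synapses")
```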

Reconstruction and simulation of neocortical microcircuitry [69]: This model is a reconstruction of the microcircuitry of the somatosensory cortex of the juvenile rat. It is based on an algorithm developed by the authors to reconstruct detailed anatomy and physiology from sparse experimental data [186]. The model represents a neocortical volume of 0.29 ± 0.01 mm\(^3\) containing \(\sim \)31,000 neurons, with 55 layer-specific morphological and 207 morphoelectrical neuron subtypes distributed in the 6 cortical layers. Neurons are described by multicompartmental conductance-based models using up to 13 active ion channel types and intracellular Ca\(^{2+}\) dynamics. The network connections were created by positioning the neurons in the volume and setting synaptic contacts among the overlapping arbors according to biological constraints, which resulted in about 37 million synapses. Excitatory synapses are modeled using both AMPA and NMDA receptor kinetics, and inhibitory synapses using a combination of GABA\(_\text {A}\) and GABA\(_\text {B}\) receptor kinetics. The model reproduces several in vitro and in vivo experiments without parameter tuning. Simulations also show a transition from synchronous to asynchronous activity modulated by physiological mechanisms. The reconstructed model and experimental data are available at: https://bbp.epfl.ch/nmc-portal [187].

Data-driven modeling of cholinergic modulation of neural microcircuits: bridging neurons, synapses and network activity [165]: This model incorporates cholinergic modulation into the microcircuit model described above [69, 186, 187]. The modulation by acetylcholine (ACh) is implemented, at the level of cellular excitability, by a depolarizing step current injection and, at the level of synaptic transmission, by changing the utilization-of-synaptic-efficacy parameter in the stochastic synapse model used. In the untuned version of the model, ACh desynchronizes spontaneous network activity. Simulations show that at low ACh levels the network exhibits slow oscillations and network synchrony, as observed in non-rapid eye movement (nREM) sleep, while at high ACh concentrations it exhibits fast oscillations and network asynchrony, as during wakefulness and REM sleep. Data analyses such as cross-correlograms were computed using MATLAB (The MathWorks Inc., Natick, USA).

Virtual Electrode Recording Tool for EXtracellular potentials (VERTEX): comparing multi-electrode recordings from simulated and biological mammalian cortical tissue [167]: This model contains 175,421 excitatory and inhibitory neurons subdivided into 15 different cell types distributed in five layers (1, 2/3, 4, 5 and 6), with several characteristics based on cat V1 data [60, 151]. The neurons are described by reduced compartmental models [188, 189]. The neuronal somata are randomly placed in 3D space according to a rule based on cortical depth. Synaptic connections are created based on distance, following a Gaussian kernel distribution with a fixed radius, and the numbers of synapses between groups of neurons are adapted from experimental data [60]. The synapses are AMPA for excitatory and GABA for inhibitory connections, and are modeled as conductance-based with exponential decay. The authors developed a tool, implemented in MATLAB (The MathWorks Inc., Natick, USA), to obtain the LFP from the model: the Virtual Electrode Recording Tool for EXtracellular potentials (VERTEX), which preserves the spatial and frequency-scaling features of the LFP. Codes are available in the resources section at https://www.dynamic-connectome.org/.

Impact of higher order network structure on emergent cortical activity [166]: This model is based on the detailed neocortical microcircuit model of Markram et al. described above [69]. From this detailed model, the authors generated another model with a similar first-order structure but unconstrained higher-order characteristics, resulting in two models with different synaptic connectivity. The model by Markram et al. includes higher-order structures, e.g. an abundance of cliques of all-to-all connected neurons, arising from the morphological diversity of neuronal types, whereas the model generated by the authors lacks such structures. By analyzing the two models, the authors demonstrated that differences in higher-order network structure have an impact on emergent cortical activity: the presence of higher-order structure leads to more meaningful neuronal firing patterns, such as increased pairwise correlations. The main conclusion is that the higher-order structure created by morphological diversity within neuronal types shapes emergent cortical activity.

4 Conclusion

In this tutorial review, we covered details of how to build large-scale neuronal network models, with special emphasis on the connectivity maps. Recent models for which connectivity is an essential aspect were reviewed and commented on. Modeling neurons, synapses, and networks is a problem of increasing complexity, much of which derives from neurobiological recordings [6]. Among the different choices a modeler has to make, there are several guides to help with these decisions [41, 190, 191]. Our focus, however, is on networks composed of spiking neurons in which the connectivity is determined from a map based on experiments (a data-driven connectivity map). To the best of our knowledge, there is no review exploring this important step in a tutorial manner.

Before addressing the connectivity, we discussed concisely how neuron and synaptic models can be tackled. Depending on the intended level of biological detail, neurons can be modeled with or without ionic channels, and likewise with or without morphology. Moving to synapses, the choice of synaptic model, and whether it includes plasticity or not, adds another dimension of complexity to the problem. Synaptic models may introduce time-dependent properties such as short-term plasticity or even spike-timing-dependent plasticity, where the difference between pre- and postsynaptic spike times determines the strengthening or weakening of a synapse. Regardless of the single-neuron or synaptic properties, every neuron and synapse model can be used in the construction of a network, which, depending on the modeling compromises assumed, results in an enormous repertoire of possible degrees of model complexity.

Proceeding from neurons and synapses to the network connectivity, the researcher in possession of the experimental data, i.e. the connectivity map, will need to interpret it and translate its information into synaptic links among neurons. As we have shown in the comparison between results obtained with Eqs. 8 and 9, care must be taken when using approximate expressions for the connection probabilities. The differences between networks generated with the exact and the approximate connection probability expressions may go unnoticed when the network size is small, but as the network size grows they become evident and may have a great impact on the neuronal activity.
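As a generic illustration of this point, the deviation between an exact connection probability expression and its first-order approximation can be checked numerically. The sketch below uses the pair \(p = 1 - (1 - 1/(N_{pre} N_{post}))^K\) and \(p \approx K/(N_{pre} N_{post})\), relating the total number of synapses \(K\) to the pairwise connection probability; this choice is our assumption here, and the reader should consult Sect. 2 for the actual Eqs. 8 and 9.

```python
def p_exact(K, n_pre, n_post):
    """Probability that a given (pre, post) pair is connected after K
    synapses are placed uniformly at random among all pairs."""
    return 1.0 - (1.0 - 1.0 / (n_pre * n_post)) ** K

def p_approx(K, n_pre, n_post):
    """First-order approximation, accurate only when K << n_pre * n_post."""
    return K / (n_pre * n_post)

n = 1000
for target in (0.05, 0.1, 0.3):
    K = int(target * n * n)   # synapse count implied by the approximation
    print(f"approx: {p_approx(K, n, n):.3f}   exact: {p_exact(K, n, n):.3f}")
```

The discrepancy grows with the connection density: for dense populations the approximation overestimates the realized pairwise probability, since it ignores the possibility of multiple synapses landing on the same pair.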

Similarly, the spatial and morphological organization may be critical in determining how neurons are connected. First, placing neurons in a spatially structured manner implies that neurons are more likely to be connected to closer neighbours than to farther ones. In light of that, several works approach the problem by drawing connections with distance-dependent probabilities that decay with distance. Second, the neuronal morphology is highly important, especially when considered together with the electrophysiological class of the neuron. For instance, basket cells in the cortex make synaptic contacts with the somata of pyramidal cells, whereas bitufted and Martinotti cells usually make their synaptic contacts onto the dendrites of pyramidal cells [192]. Hence specificity is an essential aspect of connectivity maps.

It is important to emphasize the restrictions imposed by brain anatomy on network models (e.g. geometric constraints due to the limited size of the cranium and the finite size of connections, differential effects of distal vs. proximal dendrites, etc.). Although it is usual and reasonable to make approximations to deal with these anatomical features when building a model, frequently they are ignored and not even mentioned, omitting information that may be relevant to neuroscience. Modelers must be aware of the possible consequences of not taking anatomical elements into consideration and should keep them in mind when drawing conclusions from their network models.

Regarding the tool(s) used to implement the model, a modeler either approaches the problem with a low-level language such as C/C++ or uses a neurosimulator, which provides high-level functions that are easy to interpret and favors code sharing with a larger community. Among the multitude of available neurosimulators, we discussed here three examples which are currently popular: Brian 2, NEST, and NEURON. Searches on Google Scholar for the keywords “NEURON simulator”, “NEST simulator” and “Brian 2 simulator” within the period 2015-2020 returned, respectively, 409, 377 and 35 results; for comparison, a similar search for “spiking network model” returned 812 hits. All searches were made on December 2nd, 2020. Our discussion of these neurosimulators highlighted their advantages and disadvantages with respect to specific modeling situations.

Finally, we provided a list of recent cutting-edge works in which the data-driven approach was used to construct large-scale networks of spiking neurons. The list contains brief summaries of the characteristics of the models and of how they were used. We hope the information given in Sect. 2 is sufficient to help the reader, especially a newcomer to the field of computational neuroscience, to start reading and understanding these works. We also hope this information is useful as a guide for choosing and using one of the publicly available models in a research project.