1 Introduction

We have truly entered the Age of the Connectome, thanks to a confluence of advanced imaging tools, methods such as the various flavors of functional connectivity analysis and interspecies connectivity comparisons, and the computational power to simulate neural circuitry [1,2,3,4,5,6,7,8]. The interest in connectomes is reflected in the exponentially rising number of articles on the subject (Fig. 1). What are our goals? What are the “functional requirements” of connectome modelers? We offer a perspective on these questions from our group, whose focus is modeling neurological disorders, such as neuropathic back pain, epilepsy, Parkinson’s disease, and age-related cognitive decline, and treating them with neuromodulation.

Fig. 1

Number of publications on network neuroscience per year between 1990 and 2018. Reprinted from Douw et al. [1]

2 Goals and Means

2.1 Electroceuticals and Neuromodulation

The ultimate goal of electroceuticals and neuromodulation is to use electromagnetic fields to modify any component of the central and peripheral nervous system in a predictable way to restore or enhance its normal functionality. Neuromodulation focuses more on restoring functionality from a diseased state, while electroceuticals emphasize using electromagnetic fields to replicate pharmaceutical effects, thereby offering an alternative medical device route that can cut the cost and time of the drug development and regulatory path by as much as 90%.

In the past decade, as numerical modeling and simulation have become more sophisticated, their role in reducing research and development time and expense has become increasingly clear and valuable [9, 10]. The Food and Drug Administration and the American Society of Mechanical Engineers have led the way in formalizing the simulation paradigm so that its results, and their level of validity, are satisfactorily transparent [11, 12].

2.2 Benefits of Numerical Modeling

The benefits of numerical modeling begin with the ability to predict electromagnetic field distributions and the resulting forces and energy imparted by the field to targeted and non-targeted structures. Modeling provides significant advantages over empirical studies particularly under the following conditions, which apply pervasively to the nervous system:

  1. Where it is difficult or impossible to measure values empirically

  2. Where preclinical studies are expensive or impossible

  3. In inhomogeneous, anisotropic materials

  4. Where material parameters are imprecisely established

Further, modelers can perform extensive “what if” explorations in large parameter spaces either interactively or in an automated batch mode. Using standards such as those mentioned above, modelers can benefit by exchanging their models to compare results or to use another’s model as a starting point for new explorations.

2.3 The Role of Simple Versus Complex Models

Freddie Hansen of Abbott Laboratories built over 300 simulation models in 6 years – how did he do it? For most of his models, Hansen uses the COMSOL finite element modeling (FEM) software as a “FEM pocket calculator” with which he builds quick-and-dirty FEMs in a few hours to a few days [13]. These models are designed to answer a single, simple question. Roughly defined parameters and large error bars can be addressed with parameter sweeps across the relevant parameters to outline the possible range of responses and determine sensitivities, as in the sketch below.
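By way of illustration (this is not Hansen’s actual workflow), the following minimal sketch plays the role of such a “pocket calculator”: it sweeps two imprecisely known parameters – tissue conductivity and electrode-fiber distance – through an idealized point-source electrode model standing in for a quick FEM, outlining the range of possible potentials and showing which parameter dominates the sensitivity. All values are hypothetical.

```python
import numpy as np

def point_source_potential(current_A, sigma_S_per_m, r_m):
    """Extracellular potential of a monopolar point source in an infinite
    homogeneous medium: V = I / (4 * pi * sigma * r)."""
    return current_A / (4.0 * np.pi * sigma_S_per_m * r_m)

# Imprecisely known parameters, swept across their plausible ranges
sigmas = np.linspace(0.1, 0.4, 7)        # tissue conductivity, S/m
distances = np.linspace(1e-3, 5e-3, 9)   # electrode-fiber distance, m
I_stim = 1e-3                            # 1 mA stimulus

# Automated "what if" sweep over the full parameter grid
V = np.array([[point_source_potential(I_stim, s, r) for r in distances]
              for s in sigmas])

print(f"potential range: {V.min() * 1e3:.0f} to {V.max() * 1e3:.0f} mV")
# Crude sensitivity estimates: outcome spread attributable to each parameter
print(f"spread across sigma (mid distance): "
      f"{(V[:, 4].max() - V[:, 4].min()) * 1e3:.0f} mV")
print(f"spread across distance (mid sigma): "
      f"{(V[3, :].max() - V[3, :].min()) * 1e3:.0f} mV")
```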

In the field, such simple models are often called “sub-models,” i.e., studies of a component of a larger model. The nervous system – even in model organisms simpler than humans – is so complex that sub-modeling will be an important technique for calibration, validation, and dissecting the functionality of connectomes.

More complex and sophisticated models, such as Hansen’s principal model of a heart pump, take weeks to months to build, calibrate, and validate. Generally, much greater emphasis is placed on validation in complex vs. simple models, and far greater time is required to beat the desired calibration and validation behavior out of the model. Hence, calibration of sub-models within connectomes can be an important route to efficiency and model control.

An example of a connectome sub-model is the H-reflex, the relatively simple spinal cord circuit triggered when the doctor taps your knee with a reflex hammer. The connectivity of the monosynaptic H-reflex is well-known (Fig. 2), which is not generally the case with more complex circuits – hence the need to start with what is known and get that calibrated before venturing into unknown territory. Calibration of the H-reflex involves balancing the connection strengths among the Ia excitatory sensory fiber, the Renshaw cell (RC) inhibitory feedback loop, and the alpha motor neuron circuit such that this mini-circuit replicates its reported ~50 ms refractory time (Fig. 3).

Fig. 2

Schematic of the Hoffmann reflex (H-reflex), a monosynaptic relay from afferent sensory fiber Ia to an alpha motor neuron (αMN) and back to the muscle via a large A-α efferent fiber. The refractory time of the circuit is principally mediated by a Renshaw cell (RC) clock

Fig. 3

Recorded transmembrane potential in a neural circuitry model calibration of the H-reflex spinal motor circuit to replicate its ~50 ms circuit refractory time. x-axis: time in ms. y-axis: transmembrane potential in mV. Action potentials (APs) are initiated at 12, 47, and 82 ms (red arrows) in Ia sensory fibers, triggering the reflex. The timing of the alpha motor neuron response is modulated by a double-Renshaw cell (RC) inhibitory “clock” (Fig. 2). With proper calibration of axonal delays and connection strengths, the alpha motor neuron fires in response to the Ia APs at 12 ms and 82 ms but, inhibited by the RC clock, does not fire at 47 ms, evidencing the refractory period

While one or two such mini-calibrations may suffice for a simple model, many may be necessary in a complex model in order to impose sufficient constraints to validate the model. The growing sophistication of new top-down modeling methods is greatly improving the calibration/validation process (see Sect. 2.8).
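As a concrete illustration of such a mini-calibration, the sketch below implements a deliberately simplified, quasi-static integrate-and-fire version of the circuit of Figs. 2 and 3: a single Ia input, an alpha motor neuron, and a slowly decaying inhibitory trace standing in for the RC clock. All weights, time constants, and delays are hypothetical stand-ins for the calibrated values, chosen only so that the circuit reproduces the fire/suppressed/fire pattern of Fig. 3.

```python
import math

# Hypothetical, illustrative parameters -- not the authors' calibrated values
DT = 0.25                      # ms, timestep
T_END = 120.0                  # ms
V_REST, V_TH = -70.0, -50.0    # mV, resting and threshold potentials
TAU_E, TAU_I = 5.0, 25.0       # ms, excitatory/inhibitory synaptic decay
W_E, W_I = 25.0, 30.0          # mV, Ia EPSP kick and RC inhibition strength
AXON_DELAY = 2.0               # ms, Ia->alphaMN and alphaMN->RC->alphaMN

ia_spikes = [12.0, 47.0, 82.0]   # Ia afferent APs, as in Fig. 3
g_e = g_i = 0.0
rc_arrivals = []                 # pending Renshaw-loop feedback times
mn_spikes = []

for step in range(int(T_END / DT)):
    t = step * DT
    g_e *= math.exp(-DT / TAU_E)   # exponential synaptic decay
    g_i *= math.exp(-DT / TAU_I)
    if any(abs(t - (s + AXON_DELAY)) < DT / 2 for s in ia_spikes):
        g_e += W_E                 # delayed Ia EPSP arrives
    if any(abs(t - a) < DT / 2 for a in rc_arrivals):
        g_i += W_I                 # delayed RC inhibition arrives
    v = V_REST + g_e - g_i         # quasi-static membrane potential
    if v >= V_TH:
        mn_spikes.append(t)
        g_e = 0.0                  # reset excitation after the MN fires
        rc_arrivals.append(t + AXON_DELAY)   # spike recruits the RC clock

print("alpha MN spike times (ms):", mn_spikes)
# Expected: spikes near 14 and 84 ms but none after the 47 ms Ia input,
# because the RC inhibitory trace has not yet decayed -- an effective
# refractory period on the order of the reported ~50 ms.
```

Calibration then amounts to adjusting W_I and TAU_I until the simulated refractory time matches the empirical ~50 ms.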

2.4 Ockham’s Razor Drives All Modeling

“What can be accounted for by fewer assumptions is explained in vain by more” was a principle frequently invoked by the influential medieval Scholastic thinker, William of Ockham (1285–1349) [14]. Similarly, in modeling, a guiding principle is to make the model no more complex than is required to capture the desired phenomena. In modeling the human brain and spinal cord, one has little choice but to invoke this principle frequently, because the systems are so complex and intertwined. For example, one cannot model the entire brain to capture a given disorder, such as the movement disorder caused by loss of dopaminergic neurons in the substantia nigra pars compacta in Parkinson’s disease, in which the central basal ganglia loop sends signals to and receives signals from external centers. In such a model, vast regions of, e.g., cortex and thalamus may be represented as single groups, or as single excitatory-inhibitory pairs [15].

2.5 Capturing the Required Level of Detail

Similar to the principle of Ockham’s razor, modelers must decide what level of detail they wish to capture and weigh the development and computational cost required to capture that target level [16, 17].

Biology is fundamentally a multilevel systems discipline, and accordingly, modelers must decide on which systems level to focus [18,19,20]. Concomitantly, though, the systems-level approach gives modelers some flexibility to “axiomatize” or “black-box” elements of the systems level underlying the one at hand, thereby simplifying the model and rendering its behavior more understandable. This approach goes back as far as von Neumann’s earliest thoughts on how to model the nervous system [21].

2.6 Which Neural Circuitry Software?

Once a decision is at least tentatively made on the systems level and extent of detail, a modeling tool is selected that is designed to capture the target level of detail [22].

By way of example, our connectome models have used the following tools, each designed to model a different neural systems level and to be computationally efficient for the required tasks:

  1. UNCuS (Universal Neural Circuitry Simulator), written in C++ and Java, incorporating electrotonic dendritic compartments and neuron-type calibration based on 12 parameters [6, 16, 23, 24]

  2. Active nerve fiber cable models in C++ and Java incorporating ion channel gating at 0.25 ms timesteps [25]

  3. Simpler passive fiber models based on relative threshold or an absolute version of the Weiss equation, using the second finite difference of the electric field potential along the fiber as predicted in a finite element model (Fig. 4; a sketch follows the figure) [26, 27]

  4. Finite element models in COMSOL Multiphysics™ of electromagnetic neuromodulation devices and their generation of electric potential, current density, etc., in heterogeneous biological tissue [26, 27]

  5. A special-purpose numerical model written in Mathematica (WRI, Champaign, IL, USA) of fiber threshold changes due to cathodic and anodic stimulation phases, built to elucidate a hypothesis on traditional low-frequency, “burst,” and high-frequency spinal cord stimulation [28, 29]

Fig. 4

Rheobase, the minimum threshold current of a nerve fiber under sustained stimulation, and chronaxie, the characteristic time constant of the fiber, defined as the pulse duration at which threshold is twice the rheobase, together determine the fiber threshold under stimuli of any length in passive fiber modeling. Rheobase may be relative to a given empirical model, or absolute, measured by replicating a model in finite element software and measuring the electric field gradient along a virtual fiber [26]
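A minimal sketch of the passive-fiber approach of item 3 and Fig. 4 follows: extracellular potentials from an idealized point source (standing in for the finite element solution) are sampled along a straight virtual fiber, the second finite difference predicts the site of depolarization, and the Weiss relation scales threshold with pulse width. The conductivity, geometry, rheobase, and chronaxie values are all hypothetical.

```python
import numpy as np

I_STIM = -1e-3        # A, cathodic point-source stimulus (hypothetical)
SIGMA = 0.3           # S/m, assumed homogeneous tissue conductivity
DEPTH = 2e-3          # m, electrode-to-fiber distance
INTERNODE = 1e-3      # m, node-to-node spacing along the fiber
RHEOBASE = 1.0        # threshold current under sustained stimulation (a.u.)
CHRONAXIE = 0.3       # ms, pulse width at which threshold = 2 x rheobase

# Extracellular potential sampled at the nodes of a straight virtual fiber
x_nodes = np.arange(-10, 11) * INTERNODE
r = np.sqrt(x_nodes**2 + DEPTH**2)
v_ext = I_STIM / (4.0 * np.pi * SIGMA * r)

# Activating function: second finite difference of v_ext along the fiber;
# depolarization is predicted where it is positive
af = v_ext[:-2] - 2.0 * v_ext[1:-1] + v_ext[2:]

def weiss_threshold(pulse_width_ms):
    """Weiss strength-duration relation: I_th = rheobase * (1 + chronaxie/PW).
    At PW == chronaxie, threshold is exactly twice rheobase (cf. Fig. 4)."""
    return RHEOBASE * (1.0 + CHRONAXIE / pulse_width_ms)

peak_node = int(np.argmax(af)) + 1   # +1 maps af index back to node index
print("peak depolarization at node", peak_node, "(under the electrode)")
for pw in (0.1, 0.3, 1.0):
    print(f"PW = {pw} ms -> threshold = {weiss_threshold(pw):.2f} x rheobase")
```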

In recent years, neural circuitry software has often been categorized into three different model paradigms, all using coupled differential equations. The kinds of considerations that go into selecting which paradigm is appropriate for modeling the basal ganglia in Parkinson’s disease, for instance, are described by Rubin [30] (a minimal sketch of the first paradigm follows the list):

  1. Activity-based or firing-rate models

  2. Integrate-and-fire models

  3. Conductance-based models
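To make the first paradigm concrete, here is a minimal firing-rate sketch of a single excitatory/inhibitory pair of the Wilson-Cowan type – the kind of unit with which “vast regions” can be represented (Sect. 2.4). The coupling strengths and input are illustrative, not taken from the cited basal ganglia models.

```python
import math

# Illustrative coupling strengths and input -- not from the cited models
W_EE, W_EI, W_IE, W_II = 12.0, 13.0, 11.0, 2.0
TAU = 10.0                 # ms, population time constant
DT, T_END = 0.1, 200.0     # ms
DRIVE = 1.5                # constant external input to the E population

def f(x):
    """Sigmoid rate function mapping net input to normalized firing rate."""
    return 1.0 / (1.0 + math.exp(-x))

E, I = 0.1, 0.1
e_trace = []
for _ in range(int(T_END / DT)):
    dE = (-E + f(W_EE * E - W_EI * I + DRIVE)) / TAU
    dI = (-I + f(W_IE * E - W_II * I)) / TAU
    E += DT * dE
    I += DT * dI
    e_trace.append(E)

print(f"final rates: E = {E:.3f}, I = {I:.3f}")
print(f"E range over last 50 ms: {min(e_trace[-500:]):.3f} "
      f"to {max(e_trace[-500:]):.3f}")
```

Each population is a single differential equation in a mean firing rate, which is what makes this paradigm computationally cheap enough for large connectomes.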

An older tool, NEURON, has often been used for small neural circuits as well as individual neuron and axon behavior and offers the advantage of transparency – for those willing to march up the learning curve of its interface [31].

However, as the need to model larger connectomes and longer time series grows rapidly, newer methods are being developed [32,33,34,35,36]. Other driving forces behind high-efficiency modeling paradigms are “biomimetic” or “bio-inspired” systems and the prospect of interfacing neural prostheses with biological neural circuits [37,38,39,40]. Such paradigms seek to marry software and hardware requirements (“algorithm-hardware co-design”) to produce the greatest computational efficiency [41, 42]. These newer approaches are likely to render the current methods obsolete within a decade.

2.7 Initial Conditions

There are no firm guidelines for setting up the initial conditions of a neural circuit, which remains more of an art than a science. One can start with randomized connectivity and connection strengths – for instance, in the absence of any knowledge of specific connectivity and connection strengths, i.e., a high-entropy configuration – and impose calibrations that constrain the circuit, reducing its entropy, until it sends its “message,” i.e., behaves according to a desired specification.

An example of a “middle ground” initial conditions approach would be specifying coarse-grained connectivity that is known from the literature and using heuristics to ensure reasonable performance or, more likely, to avoid unreasonable performance. One such heuristic is setting connection strengths from inhibitory to excitatory groups and from excitatory to inhibitory groups incrementally stronger than those from excitatory to excitatory groups and from inhibitory to inhibitory groups, such that the circuit does not initially slip into hyperactivity (a toy demonstration follows this paragraph). A similar heuristic is lowering the ratio of excitatory to inhibitory connections so that the circuit is initially stable. A third technique is imposing a hypothesized exogenous “black box” inhibitory input to stabilize the circuit. Any highly recurrent circuit topology requires stabilizing, negative-feedback heuristics of this genre [6]. We used the first and third techniques in our human spinal cord connectome, in which the exogenous black box represented known but unquantified descending inhibitory control from higher brain centers.
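The sketch below is a toy test of the first heuristic, under stated assumptions: in a random recurrent rate network, the cross-type (E-to-I and I-to-E) weights are scaled incrementally stronger than the within-type ones, and the mean excitatory rate is checked to see whether the circuit saturates into hyperactivity. The network size, gains, and scale factors are hypothetical.

```python
import numpy as np

N_E, N_I = 80, 20   # hypothetical excitatory/inhibitory population sizes

def simulate(cross_scale, steps=2000, dt=0.1, tau=10.0):
    """Mean excitatory rate of a random recurrent rate network in which
    cross-type (E->I, I->E) weights are scaled by cross_scale."""
    rng = np.random.default_rng(1)       # same base circuit for every scale
    W = rng.uniform(0.0, 0.08, size=(N_E + N_I, N_E + N_I))
    W[:N_E, N_E:] *= -cross_scale        # I -> E: negative, strengthened
    W[N_E:, :N_E] *= cross_scale         # E -> I: strengthened
    W[N_E:, N_E:] *= -1.0                # I -> I: negative, baseline strength
    x = rng.uniform(0.0, 0.1, N_E + N_I)
    for _ in range(steps):
        rates = np.tanh(np.clip(W @ x + 0.1, 0.0, None))  # rectified rates
        x += dt / tau * (-x + rates)
    return x[:N_E].mean()

for scale in (1.0, 2.0, 4.0):
    print(f"cross-type scale {scale}: mean E rate = {simulate(scale):.2f}")
# Rates pinned near 1.0 indicate saturated hyperactivity; strengthening the
# cross-type coupling should pull the equilibrium down to moderate values.
```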

Our human spinal cord connectome model was an attempt at full bottom-up calibration, culling from the literature all known topology, including source peripheral fibers or neural groups, their targets (specific Rexed laminae, i.e., layers of the gray matter of the spinal cord), connection strengths, and neurotransmitters [6]. The end result was that the overall model was under-constrained due to the dearth of complete data, in particular on connection strengths. Our conclusion was twofold. First, the model was valuable as a starting template for calibrating the innumerable local circuitry models that embody the spinal cord connectome. Second, while advances in imaging techniques foreshadowed the possibility of complete bottom-up calibration at some point in the future, for practical purposes, top-down calibration, e.g., by replicating circuit behavior (input-output specifications), would be required in connectome modeling.

2.8 Calibration and Validation

The general calibration/validation procedure is to cull a set of calibrations for a given circuit from the literature and add calibrations one by one until the model predicts one or more of the remaining calibrations with reasonable accuracy, i.e., is validated against the remaining empirical criteria. Calibration can be done iteratively by hand, which can be painstakingly slow and tedious, but quite educational as to how neural circuitry behaves in general, how the specific software implementing the circuitry behaves, and how the specific circuit and its components affect each other.

On the other hand, calibration of a neural circuit can be performed automatically by an optimization program using one or more objective functions, with error tolerances on the objective criteria as terminating conditions of a “while” loop (sketched below). Such automated batch processing may be the only expeditious way to program even a moderately sized circuit.
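A minimal sketch of such an automated loop follows. For self-containment, the full circuit simulation is replaced by a closed-form stand-in relating one free parameter (the Renshaw inhibitory weight of the H-reflex example in Sect. 2.3) to the measured refractory period; in practice this function would re-run the circuit model at each iteration. All constants are hypothetical.

```python
import math

TAU_I = 25.0        # ms, inhibitory decay constant (hypothetical)
EPSP_MARGIN = 5.0   # mV by which the EPSP exceeds threshold from rest
TARGET = 50.0       # ms, reported H-reflex refractory time
TOL = 0.1           # ms, acceptable calibration error

def measure_refractory(w_inh):
    """Stand-in objective function: time for RC inhibition
    w_inh * exp(-t / TAU_I) to decay below the margin at which a fresh
    EPSP can again reach threshold. In practice this would re-run the
    full circuit simulation with the candidate weight."""
    return TAU_I * math.log(w_inh / EPSP_MARGIN)

lo, hi = 6.0, 200.0                # bracketing values for the weight
w = 0.5 * (lo + hi)
while abs(measure_refractory(w) - TARGET) > TOL:   # terminating condition
    if measure_refractory(w) > TARGET:
        hi = w                     # too much inhibition: refractory too long
    else:
        lo = w
    w = 0.5 * (lo + hi)

print(f"calibrated inhibitory weight: {w:.1f} "
      f"(refractory = {measure_refractory(w):.2f} ms)")
```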

As stated, rather than using bottom-up calibration, the field is moving toward top-down calibration utilizing a variety of empirical techniques (e.g., imaging, electro- and magnetoencephalography), which, in conjunction, constrain models more than has been possible to date, notably in diseased-state modeling [43, 44]. The lower-level connectivity, connection strengths, variety of individual neural processing, and axonal delays are reflected in the emergent high-level behavior [45, 46].

For example, to uncover connectivity and connection strength, fiber-tracing studies and identification of where on the dendritic tree an axon synapses have been replaced by diffusion-weighted magnetic resonance imaging (MRI) and blood-oxygen-level-dependent (BOLD) data, from which the “functional” connectivity of the network is inferred (a minimal sketch follows) [47, 48]. In a rapidly evolving paradigm, variations on this technique are utilized alongside older methods to offer the modeler a menu of possible calibration/validation datasets (Tables 1, 2, 3, and 4). Disease “signatures” (biomarkers) in the new paradigms can be used to calibrate models of diseased vs. healthy states and to measure the effects of neuromodulation, electroceutical, or pharmaceutical techniques in restoring the healthy state [24, 49,50,51,52].
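As a minimal sketch of the inference step, functional connectivity can be estimated as the matrix of pairwise Pearson correlations between regional BOLD time series; the synthetic signals below merely stand in for preprocessed, parcellated resting-state data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_regions, n_volumes = 6, 200   # hypothetical parcellation and scan length

# Synthetic BOLD: regions 0 and 1 share a slow common driver; others are noise
driver = np.cumsum(rng.standard_normal(n_volumes))
bold = 0.5 * rng.standard_normal((n_regions, n_volumes))
bold[0] += driver / driver.std()
bold[1] += driver / driver.std()

fc = np.corrcoef(bold)          # functional connectivity matrix
print(f"FC(0,1) = {fc[0, 1]:.2f} (coupled pair)")
print(f"FC(0,2) = {fc[0, 2]:.2f} (uncoupled pair)")
# Thresholded |FC| entries define network edges that can serve as top-down
# calibration/validation targets for a simulated circuit's activity.
```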

Table 1 Evolving methods of top-down calibration [43]
Table 2 Type of modeling software used for the relevant neural systems level and time scale
Table 3 Modeling software functional requirements for efficient circuit assembly
Table 4 Idealized requirements for connectome models designed to inform medical device and drug development

3 The Functional Requirements

At a high level, the following are required to efficiently make valuable connectome models:

  1. User-friendly modeling software for the correct systems level

  2. A standardized parts list (axon and neuron types)

  3. Assembly instructions (topology and weights)

  4. Methods of calibration and validation, and the datasets used in those processes

4 Conclusion

The “connectome” has come of age and is now at a stage similar to that of the genome 15 years ago. The great advances and benefits still lie ahead, yet the functional requirements for building connectomes are now known. Connectome simulation is fast and cheap compared with hardware prototyping or with in vitro and in vivo investigation of how the nervous system works. Connectome modeling of neuromodulation and electroceutical action on the nervous system will lead to an explosion of targets and means for greater control over disease and disorder treatment and for enhanced neural behavior.