Abstract
An isotropic dynamical system is one that looks the same in every direction, i.e., if we imagine standing somewhere within an isotropic system, we would not be able to differentiate between different lines of sight. Conversely, anisotropy is a measure of the extent to which a system deviates from perfect isotropy, with larger values indicating greater discrepancies between the structure of the system along its axes. Here, we derive the form of a generalised scalable (mechanically similar) discretized field theoretic Lagrangian that allows for levels of anisotropy to be directly estimated via timeseries of arbitrary dimensionality. We generate synthetic data for both isotropic and anisotropic systems and, by using Bayesian model inversion and reduction, show that we can discriminate between the two datasets – thereby demonstrating proof of principle. We then apply this methodology to murine calcium imaging data collected in rest and task states, showing that anisotropy can be estimated directly from different brain states and cortical regions in an empirical in vivo biological setting. We hope that this theoretical foundation, together with the methodology and publicly available MATLAB code, will provide an accessible way for researchers to obtain new insight into the structural organization of neural systems in terms of how scalable neural regions grow – both ontogenetically during the development of an individual organism, as well as phylogenetically across species.
Introduction
Two of the main concepts upon which computational neuroscience models are based are those of the ‘particle’ (Sears, 1964) and the ‘field’ (McMullin, 2002) – both terms that are inherited from theoretical physics.
Particle theoretic models
In the particle theoretic approach we treat every node within a neural system as a zero-dimensional (point-like) element – a so-called ‘particle’ that evolves in time. The way in which each of these neural particles evolves influences the rest of the connected system, such that collectively, the particles form nodes of a dynamically evolving graph (Deco et al., 2008). Particle theoretic frameworks yield experimental advantages for neuroimaging modalities such as electroencephalography (EEG), in which there are usually very few measurement locations. Furthermore, particle theoretic frameworks have computational and statistical advantages for neuroimaging analyses due to the associated dimensionality reduction – an attribute that becomes increasingly important for large-scale recordings of neural systems (Izhikevich & Edelman, 2008). However, this computational expediency comes at the cost of losing the spatial information contained in a continuum description.
Field theoretic models
The field theoretic approach treats a neural system as a continuous structure called a ‘field’ that is a function of position, with position treated as a continuous variable. A neural field can exist in 2D or 3D space: it is natural to work in a two-dimensional space when modelling a single cortical sheet or a three-dimensional space for a cross-cortical volume (Breakspear, 2017). In this paper, we use a model that is in essence a compromise between the particle and field models, by taking a continuous field and discretizing it such that we only consider certain points in space – which we henceforth refer to as a discretized field theoretic model.
Isotropic vs. anisotropic systems
A system is said to be isotropic if it looks identical in every direction. This means that if we imagine ourselves standing somewhere within an isotropic structure, then we would see precisely the same structure along all lines of sight. Conversely, anisotropy is a measure of the extent to which a system deviates from perfect isotropy. For example, a sheet of wood is anisotropic due to the preferential directionality of the grain – which we can see by the fact that it is easier to break the wood along the grain than it is to break it against the grain. We present a discretized field theoretic model that allows for the estimation of anisotropy in connected dynamical systems of arbitrary dimensionality. We provide accompanying MATLAB code in a public repository that can be readily used to measure levels of anisotropy on a node-wise basis via timeseries measurements.
Here, we focus on isotropy as the main parameter of interest, as this quantity is usually studied in neuroscience in the context of diffusion tensor imaging (DTI). The latter allows for structural integrity measures of axons, by quantifying the extent to which water molecules diffuse along the axons – i.e., anisotropically. Damage to white matter caused by, e.g., traumatic brain injury (TBI) can cause axonal tissue to rupture, resulting in water molecules leaking more isotropically than in an intact axon. The measure of isotropy we propose here, as opposed to DTI, can be estimated directly from any region-specific neuroimaging timeseries. This allows us to apply the mathematical framework of Lagrangian field theory in the study of a dynamically (as opposed to structurally) derived measure of anisotropy. Furthermore, as we are embedding the isotropy measure into a scalable mathematical framework, we allow for an estimation of how similar (isotropic) or dissimilar (anisotropic) the signals of neighbouring regions are during the growth of neural systems.
Inference of anisotropy
We estimate anisotropy in arbitrary neuronal timeseries, together with hyperparameters that describe the variance of states and parameters, by using Dynamic Expectation Maximisation (DEM) (Friston et al., 2008) within the Statistical Parametric Mapping (SPM) software. This inference method uses generalised coordinates of motion within a Laplace approximation routine of states and parameters (multivariate Gaussians). In contrast with other inference methods, DEM allows us to use four embedding dimensions, which encompass smooth noise processes, unlike e.g., traditional Kalman filters that employ martingale assumptions (Roebroeck et al., 2011).
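To make generalised coordinates of motion concrete, the sketch below (in Python, purely for illustration – the accompanying toolbox and SPM are in MATLAB) represents a state and its first few temporal derivatives as a single vector and applies the block-shift derivative operator used in such schemes; the four-element embedding mirrors the four embedding dimensions mentioned above.

```python
def derivative_operator(n):
    """Shift matrix D for n embedding dimensions: applying D to the
    generalised state (x, x', x'', ...) yields (x', x'', ..., 0)."""
    return [[1.0 if j == i + 1 else 0.0 for j in range(n)] for i in range(n)]

def apply(D, x_tilde):
    # plain matrix-vector product, written out for a list-of-lists matrix
    return [sum(D[i][j] * x_tilde[j] for j in range(len(x_tilde)))
            for i in range(len(D))]

# generalised coordinates of the signal x(t) = t**3 evaluated at t = 1:
# x = 1, x' = 3t^2 = 3, x'' = 6t = 6, x''' = 6
x_tilde = [1.0, 3.0, 6.0, 6.0]
print(apply(derivative_operator(4), x_tilde))  # -> [3.0, 6.0, 6.0, 0.0]
```

The last derivative is mapped to zero because the embedding is truncated – in DEM this truncation is justified by the smoothness of the assumed noise processes.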
Specifically, DEM approximates the posterior of a parameter quantifying anisotropy using three steps:

1) The D step uses variational Bayesian filtering as an instantaneous gradient descent in a moving frame of reference for state estimation in generalised coordinates;

2) The E step estimates the model parameter quantifying anisotropy by using gradient ascent on the negative variational free energy. The variational free energy (F) combines both accuracy and complexity when scoring models: \(F=\underset{accuracy}{\underbrace{\langle \log p(y\vert \theta ,m)\rangle }}-\underset{complexity}{\underbrace{KL\left[q\left(\theta \right)\Vert p\left(\theta \vert m\right)\right]}}\), where \(\log p\left(y\vert \theta ,m\right)\) is the log likelihood of the data \(y\) conditioned upon model states, parameters and hyperparameters \(\theta\), and model structure \(m\). We seek a model that provides an accurate and maximally simple explanation for the data;

3) The M step repeats this process for the hyperparameters, given by the precision components of random fluctuations on the states and observation noise (Friston et al., 2008).
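The accuracy/complexity decomposition of the variational free energy in the E step can be illustrated with a deliberately simple conjugate-Gaussian example – a Python toy sketch, not the SPM/DEM implementation, with all numerical values our own:

```python
import math

def expected_log_lik(y, m, s, sigma):
    # accuracy term <log p(y|theta)>_q for y|theta ~ N(theta, sigma^2),
    # under an approximate posterior q(theta) = N(m, s^2)
    return -0.5 * math.log(2 * math.pi * sigma**2) - ((y - m)**2 + s**2) / (2 * sigma**2)

def kl_gauss(m, s, m0, s0):
    # complexity term KL[ N(m, s^2) || N(m0, s0^2) ]
    return math.log(s0 / s) + (s**2 + (m - m0)**2) / (2 * s0**2) - 0.5

def free_energy(y, m, s, m0, s0, sigma):
    # F = accuracy - complexity
    return expected_log_lik(y, m, s, sigma) - kl_gauss(m, s, m0, s0)

# datum y = 1, prior N(0, 1), likelihood variance 1: the free energy is
# highest at the Bayesian compromise between prior and data (m = 0.5),
# not at the purely accurate (m = 1) or purely simple (m = 0) extremes
for m in (0.0, 0.5, 1.0):
    print(m, free_energy(1.0, m, 0.5, 0.0, 1.0, 1.0))
```

This is the sense in which DEM scores models: moving the posterior toward the data buys accuracy but pays a complexity penalty against the prior.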
Overview
This paper comprises three sections.
In the first, we outline the theoretical foundations of Lagrangian field theory and the form of a generalised scalable discretized equation of motion that can be used both for forward generative models and for model inversion via neural timeseries.
In the second section, we generate in silico data via forward models of an isotropic and an anisotropic system. We then use Bayesian model inversion and subsequent Bayesian model reduction to show that we can correctly discriminate between the isotropic and anisotropic systems – thereby providing construct validation.
In the third section, we use murine data collected in both rest and task states to map levels of anisotropy across different cortical regions directly via the in vivo timeseries. These wide-field calcium imaging data were collected across the left hemisphere of mouse cortex expressing GCaMP6f in layer 2/3 excitatory neurons (Fagerholm et al., 2021; Gallero-Salas et al., 2021; Gilad et al., 2018), with cortical areas aligned to the Allen Mouse Common Coordinate Framework (Allen Institute for Brain Science, 2016).
We suggest that the presented methodology could be valuable in future large-scale studies of neural systems, in which the quantification of region-wise anisotropy may shed light on how neural systems grow both ontogenetically within the lifespan of an individual animal, as well as phylogenetically across species (Buzsaki et al., 2013).
Methods
We will now cover the technical foundations of the approach, starting with Lagrangian field theory and the principle of stationary action. We then derive a generalised, scalable, discretized field theoretic Lagrangian and consider the estimation of anisotropy under this formulation using empirical (timeseries) data and Bayesian estimators.
Lagrangian field theory
We remind the reader of the basic concepts underlying Lagrangian field theory and the principle of stationary action in Appendix I. In brief: we represent the state of a system by a field \(\varphi\) which is a function of the 4D space–time position \(r\equiv \left(t,x,y,z\right)\). The equations of motion that describe how this field evolves in time are obtained by requiring that the field \(\varphi \left(r\right)\) renders the value of a quantity known as the action \(S\) stationary:

\(S\left[\varphi \left(r\right)\right]=\int_{\Omega }\mathcal{L}\left(r,\varphi ,\partial \varphi \right)dt\,dx\,dy\,dz\)  (1)
The integral in the definition of the action is over the space–time domain \(\Omega\) encompassing all space from the initial time \({t}_{i}\) to the final time \({t}_{f}\). The integrand, which is known as the Lagrangian density \(\mathcal{L}\left(r,\varphi ,\partial \varphi \right)\), defines the system of interest as a function of \(r\), the values of the fields \(\varphi\) at \(r\), and their spatiotemporal derivatives \(\partial \varphi\) at \(r\).
Scale transformations
We define a scale transformation as a mapping from arbitrary points in the 5D space with axes labelled \(\left(\varphi ,r\right)=(\varphi ,t,x,y,z)\) to scaled points \(\left({\varphi }_{s},{r}_{s}\right)=\left(\lambda \varphi ,{\lambda }^{\alpha }r\right)\), where \(\lambda\) is an arbitrary scale factor and \(\alpha\) is a constant. A field configuration \(\varphi =\varphi \left(r\right)\) is a 4D surface in this 5D space, and the scale transformation takes points on that surface to points on a new 4D surface, defining a new field configuration. The value of the new field \({\varphi }_{s}\) at the scaled space–time point \({r}_{s}={\lambda }^{\alpha }r\) is related to the value of the original field \(\varphi\) at the unscaled point \(r\) via

\({\varphi }_{s}\left({\lambda }^{\alpha }r\right)=\lambda \varphi \left(r\right)\)  (2)
It is convenient to allow different scaling exponents in different space–time directions, so from now on \({\lambda }^{\alpha }r\) is to be understood as shorthand for the vector \(\left({\lambda }^{{\alpha }_{t}}t,{\lambda }^{{\alpha }_{x}}x,{\lambda }^{{\alpha }_{y}}y,{\lambda }^{{\alpha }_{z}}z\right).\)
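As a concrete illustration of this per-axis shorthand, the following Python snippet (illustrative only; the function name is ours) applies a scale transformation with a different exponent along each space–time axis:

```python
def scale_point(phi, r, lam, alpha):
    """Map a point (phi, t, x, y, z) to (lam*phi, lam**alpha_mu * r_mu),
    with one scaling exponent alpha_mu per space-time axis -- the
    component-wise meaning of the shorthand lam**alpha * r."""
    phi_s = lam * phi
    r_s = tuple(lam ** a * c for a, c in zip(alpha, r))
    return phi_s, r_s

# doubling the scale (lam = 2) with anisotropic exponents: the y and z
# axes stretch faster than t and x
phi_s, r_s = scale_point(1.5, (1.0, 1.0, 1.0, 1.0), 2.0, (1.0, 1.0, 2.0, 2.0))
print(phi_s, r_s)  # -> 3.0 (2.0, 2.0, 4.0, 4.0)
```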
Taking partial derivatives of Eq. (2), we obtain:

\({\partial }_{\mu }{\varphi }_{s}\left({\lambda }^{\alpha }r\right)={\lambda }^{1-{\alpha }_{\mu }}{\varphi }_{\mu }\left(r\right)\)  (3)

where \({\lambda }^{1-{\alpha }_{\mu }}\) depends only on the \({\mu }^{{th}}\) component of the vector of exponents \(\alpha =\left({\alpha }_{t},{\alpha }_{x},{\alpha }_{y},{\alpha }_{z}\right)\). From now on, we denote the vector with components \({\lambda }^{1-{\alpha }_{\mu }}{\varphi }_{\mu }\left({\lambda }^{\alpha }r\right)\) as \({\lambda }^{1-\alpha }\partial \varphi ({\lambda }^{\alpha }r)\).
Scaling the action
Using Eqs. (1), (2) and (3), we see that the scaled action is given by:

\(S\left[{\varphi }_{s}\right]=\int_{{\Omega }_{s}}\mathcal{L}\left(\lambda \varphi \left({\lambda }^{-\alpha }r{^{\prime}}\right),{\lambda }^{1-\alpha }\partial \varphi \left({\lambda }^{-\alpha }r{^{\prime}}\right)\right)dt{^{\prime}}\,dx{^{\prime}}\,dy{^{\prime}}\,dz{^{\prime}}\)  (4)

where \({\Omega }_{s}\) is the scaled space–time domain.
We then change variables in Eq. (4), setting \(r{^{\prime}}={\lambda }^{\alpha }r\) such that:

\(S\left[{\varphi }_{s}\right]={\lambda }^{{\sum }_{\nu }{\alpha }_{\nu }}\int_{\Omega }\mathcal{L}\left(\lambda \varphi \left(r\right),{\lambda }^{1-\alpha }\partial \varphi \left(r\right)\right)dt\,dx\,dy\,dz\)  (5)
where \({\lambda }^{{\sum }_{\nu }{\alpha }_{\nu }}\) is the Jacobian that accounts for the change in integration variables and \(\sum_{\nu }{\alpha }_{\nu }={\alpha }_{t}+{\alpha }_{x}+{\alpha }_{y}+{\alpha }_{z}\). The integrals are now over the same space–time region \(\Omega\) as in the original unscaled action in Eq. (1), which means that we can rewrite Eq. (5) using the same simple notation:

\(S\left[{\varphi }_{s}\right]={\lambda }^{{\sum }_{\nu }{\alpha }_{\nu }}\int_{\Omega }\mathcal{L}\left(\lambda \varphi ,{\lambda }^{1-\alpha }\partial \varphi \right)dt\,dx\,dy\,dz\)  (6)
Scalable systems
The action \(S\left[\varphi \left(r\right)\right]\) is said to be scalable, or equivalently to possess ‘mechanical similarity’ (Landau & Lifshitz, 1976) if, for any choice of \(\varphi \left(r\right)\), not just choices that make the action stationary and therefore satisfy the Euler–Lagrange equation of motion (see Appendix I), the following relationship holds:

\(S\left[{\varphi }_{s}\right]={\lambda }^{n}S\left[\varphi \right]\)  (7)
where \(n\) is a constant. Note that a ‘scalable’ system should not be confused with a ‘scale-free’ system, which is one that lacks a characteristic length scale, such as those studied in the physics of phase transitions.
More explicitly, we can use Eqs. (6) and (7) to express the scalability condition as follows:

\({\lambda }^{{\sum }_{\nu }{\alpha }_{\nu }}\mathcal{L}\left(\lambda \varphi ,{\lambda }^{1-\alpha }\partial \varphi \right)={\lambda }^{n}\mathcal{L}\left(\varphi ,\partial \varphi \right)\)  (8)
Generalised scalable Lagrangians
We can expand any analytic Lagrangian density \(\mathcal{L}\left(\varphi ,\partial \varphi \right)\) as a power series:

\(\mathcal{L}\left(\varphi ,\partial \varphi \right)=\sum_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}{C}_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}\,{\varphi }^{a}{\varphi }_{t}^{{b}_{t}}{\varphi }_{x}^{{b}_{x}}{\varphi }_{y}^{{b}_{y}}{\varphi }_{z}^{{b}_{z}}\)  (9)
where \({C}_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}\) is an expansion coefficient and the summations over the integers \(a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}\) range from \(0\) to \(\infty\). We have assumed for the sake of simplicity that the Lagrangian density has no explicit dependence on \(r\). This is normally the case when the system of interest is not driven by external forces or other influences. We next use Eq. (9) to obtain \(\mathcal{L}\left(\lambda \varphi ,{\lambda }^{1-\alpha }\partial \varphi \right)\):

\(\mathcal{L}\left(\lambda \varphi ,{\lambda }^{1-\alpha }\partial \varphi \right)=\sum_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}{C}_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}\,{\lambda }^{a+{\sum }_{\nu }\left(1-{\alpha }_{\nu }\right){b}_{\nu }}\,{\varphi }^{a}{\varphi }_{t}^{{b}_{t}}{\varphi }_{x}^{{b}_{x}}{\varphi }_{y}^{{b}_{y}}{\varphi }_{z}^{{b}_{z}}\)  (10)
For the action to be scalable, Eq. (8) tells us that Eq. (10) must equal:

\({\lambda }^{n-{\sum }_{\nu }{\alpha }_{\nu }}\mathcal{L}\left(\varphi ,\partial \varphi \right)={\lambda }^{n-{\sum }_{\nu }{\alpha }_{\nu }}\sum_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}{C}_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}\,{\varphi }^{a}{\varphi }_{t}^{{b}_{t}}{\varphi }_{x}^{{b}_{x}}{\varphi }_{y}^{{b}_{y}}{\varphi }_{z}^{{b}_{z}}\)  (11)
We conclude that the action is scalable if and only if

\(a+{\sum }_{\nu }\left(1-{\alpha }_{\nu }\right){b}_{\nu }+{\sum }_{\nu }{\alpha }_{\nu }=n\)  (12)
for all choices of the integers \(a\) and \({b}_{\nu }\) at which \({C}_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}\) is nonzero. If, for example, we consider possible contributions to the Lagrangian with specific values of \({b}_{t}\), \({b}_{x}\), \({b}_{y}\), and \({b}_{z}\), Eq. (12) tells us that \({C}_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}\) must be zero unless \(a=n-\sum_{\nu }{\alpha }_{\nu }-\sum_{\nu }\left(1-{\alpha }_{\nu }\right){b}_{\nu }\). The value of \(a\) is therefore determined by the values of \({b}_{t}\), \({b}_{x}\), \({b}_{y}\), \({b}_{z}\), and the summation over \(a\) is no longer required. The generalised scalable discretized field theoretic Lagrangian may therefore be written as follows:

\(\mathcal{L}\left(\varphi ,\partial \varphi \right)=\sum_{{b}_{t},{b}_{x},{b}_{y},{b}_{z}}{C}_{{b}_{t},{b}_{x},{b}_{y},{b}_{z}}\,{\varphi }^{\,n-{\sum }_{\nu }{\alpha }_{\nu }-{\sum }_{\nu }\left(1-{\alpha }_{\nu }\right){b}_{\nu }}\,{\varphi }_{t}^{{b}_{t}}{\varphi }_{x}^{{b}_{x}}{\varphi }_{y}^{{b}_{y}}{\varphi }_{z}^{{b}_{z}}\)  (13)
The anisotropic wave equation
Let us now design a special case of Eq. (13) in two spatial dimensions. Having chosen to set \({\alpha }_{t}={\alpha }_{x}\) and \(n={\alpha }_{y}+2\), we construct a Lagrangian density with three nonzero terms. In the first term, \({C}_{{b}_{t},{b}_{x},{b}_{y}}=1\), \({b}_{t}=2\), and \({b}_{x}={b}_{y}={b}_{z}=0\); in the second term, \({C}_{{b}_{t},{b}_{x},{b}_{y}}=-1\), \({b}_{x}=2\), and \({b}_{t}={b}_{y}={b}_{z}=0\); and finally, in the third term, \({C}_{{b}_{t},{b}_{x},{b}_{y}}=-1\), \({b}_{y}=2\), and \({b}_{t}={b}_{x}={b}_{z}=0\).
This yields the Lagrangian density:

\(\mathcal{L}={\varphi }_{t}^{2}-{\varphi }_{x}^{2}-{\varphi }^{2\beta }{\varphi }_{y}^{2}\)  (14)
where the exponent \(\beta\), which is defined by \(\beta ={\alpha }_{y}-{\alpha }_{x}\), quantifies the degree of anisotropy, such that the system is perfectly isotropic when \(\beta =0\Rightarrow {\alpha }_{y}={\alpha }_{x}\). To provide intuition for the way in which the \(\beta\) parameter affects the system’s dynamics, we run a series of forward models for a range of \(\beta\) values in Supplementary Fig. 1.
The corresponding equation of motion – the two-dimensional Euler–Lagrange equation (see Appendix I) – is:

\({\varphi }_{tt}={\varphi }_{xx}+{\varphi }^{2\beta }{\varphi }_{yy}+\beta {\varphi }^{2\beta -1}{\varphi }_{y}^{2}\)  (15)
We can verify that if \(\varphi \left(t,x,y\right)\) is a solution of this equation, then so is the scaled field \({\varphi }_{s}\left(t,x,y\right)=\lambda \varphi \left({\lambda }^{{\alpha }_{t}}t,{\lambda }^{{\alpha }_{x}}x,{\lambda }^{{\alpha }_{y}}y\right)=\lambda \varphi \left({\lambda }^{{\alpha }_{x}}t,{\lambda }^{{\alpha }_{x}}x,{\lambda }^{\left({\alpha }_{x}+\beta \right)}y\right)\) (see Appendix II).
In the case of an isotropic system, when \(\beta =0\), the Euler–Lagrange equation becomes:

\({\varphi }_{tt}={\varphi }_{xx}+{\varphi }_{yy}\)  (16)
We see that the Lagrangian density of Eq. (14) leads to an equation of motion (15) that reduces to the wave Eq. (16) in the case of an isotropic system. This makes it an intuitive test case.
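The exponent bookkeeping behind this construction can be checked numerically. The short Python sketch below (illustrative only; the helper name and the numerical value of \(\beta\) are ours) rearranges the scalability condition of Eq. (12) as \(a=n-\sum_{\nu }{\alpha }_{\nu }-\sum_{\nu }(1-{\alpha }_{\nu }){b}_{\nu }\) and evaluates it for the three terms of the anisotropic wave Lagrangian:

```python
def exponent_a(n, alpha, b):
    # a = n - sum(alpha) - sum((1 - alpha_nu) * b_nu), from the
    # scalability condition rearranged for the power of phi
    return n - sum(alpha) - sum((1 - a_) * b_ for a_, b_ in zip(alpha, b))

beta = 0.5                         # an arbitrary illustrative value
alpha_t = alpha_x = 1.0
alpha_y = alpha_x + beta           # since beta = alpha_y - alpha_x
alpha = (alpha_t, alpha_x, alpha_y)
n = alpha_y + 2                    # the choice made in the text

print(exponent_a(n, alpha, (2, 0, 0)))  # phi_t^2 term -> 0.0
print(exponent_a(n, alpha, (0, 2, 0)))  # phi_x^2 term -> 0.0
print(exponent_a(n, alpha, (0, 0, 2)))  # phi_y^2 term -> 1.0, i.e. 2*beta
```

The first two terms carry no power of \(\varphi\), while the \({\varphi }_{y}^{2}\) term acquires the factor \({\varphi }^{2\beta }\) that encodes the anisotropy.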
Spatial discretisation
For Eq. (15) to be used in the modelling of neural timeseries we must first discretize the partial spatial derivatives. This is necessary because we do not deal with spatially continuous data in neuroimaging, but rather with data collected at a discrete set of points. We therefore make the following standard transformations:

\({\varphi }_{xx}\to \varphi \left(x+1,y\right)-2\varphi \left(x,y\right)+\varphi \left(x-1,y\right)\)

\({\varphi }_{yy}\to \varphi \left(x,y+1\right)-2\varphi \left(x,y\right)+\varphi \left(x,y-1\right)\)

\({\varphi }_{y}\to \frac{1}{2}\left[\varphi \left(x,y+1\right)-\varphi \left(x,y-1\right)\right]\)  (17)
where \(\varphi \left(x,y\right)\) is the value of the field at the point \(\left(x,y\right)\) and, e.g., \(\varphi \left(x+1,y\right)\) is the value of the field one ‘step’ in the positive \(x\) direction in the graph from the point \(\left(x,y\right)\).
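These stencils are the standard unit-spacing central differences; the following Python sketch (ours, for illustration) shows the second-difference form and its defining property of being exact on quadratics:

```python
def second_diff_x(phi, x, y):
    # phi_xx -> phi(x+1, y) - 2*phi(x, y) + phi(x-1, y), unit grid spacing
    return phi(x + 1, y) - 2 * phi(x, y) + phi(x - 1, y)

# for phi = x^2, the true second derivative is 2 everywhere, and the
# discrete second difference reproduces it exactly at every grid point
quad = lambda x, y: x**2
print(second_diff_x(quad, 5, 0))  # -> 2
```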
Applying the transformations to Eq. (15), we obtain:

\({\varphi }_{tt}=\varphi \left(x+1,y\right)-2\varphi \left(x,y\right)+\varphi \left(x-1,y\right)+{\varphi }^{2\beta }\left[\varphi \left(x,y+1\right)-2\varphi \left(x,y\right)+\varphi \left(x,y-1\right)\right]+\frac{\beta }{4}{\varphi }^{2\beta -1}{\left[\varphi \left(x,y+1\right)-\varphi \left(x,y-1\right)\right]}^{2}\)  (18)
which we split into two first-order differential equations by defining a new variable \(\psi \equiv \dot{\varphi }\), to obtain the final form of the equations of motion used in all subsequent forward models and Bayesian model inversions presented in this paper:

\(\dot{\varphi }=\psi\)

\(\dot{\psi }=\varphi \left(x+1,y\right)-2\varphi \left(x,y\right)+\varphi \left(x-1,y\right)+{\varphi }^{2\beta }\left[\varphi \left(x,y+1\right)-2\varphi \left(x,y\right)+\varphi \left(x,y-1\right)\right]+\frac{\beta }{4}{\varphi }^{2\beta -1}{\left[\varphi \left(x,y+1\right)-\varphi \left(x,y-1\right)\right]}^{2}\)  (19)
We then use Eq. (19) as the equations of motion for subsequent model inversion with the Statistical Parametric Mapping (SPM) software. The associated observer equation comprises the \(\varphi\) variables – i.e., we assume that the strength of the field \(\varphi \left(x,y\right)\) is what is being measured in the neural timeseries at the \(\left(x,y\right)\) coordinate.
Equation (19) is the basis of the MATLAB code made available for use with forward generative models, as well as with Bayesian model inversion of timeseries of arbitrary dimensionality from any neuroimaging modality.
Synthetic data
We consider a 2D grid of size 3 × 3 pixels, where each of the nine pixels is given different initial conditions and subsequently allowed to evolve according to the equation of motion in Eq. (19). We run two forward models: a) one isotropic case in which \(\beta =0\); and b) one anisotropic case in which \(\beta \ne 0\). Having created these synthetic data, we then perform Bayesian model inversion using Dynamic Expectation Maximisation (DEM) (Friston et al., 2008), with a prior of \(\beta =0\), to infer the latent states and to estimate both the \(\beta\) parameter and the hyperparameters – the components of precision of random fluctuations on the observation noise and states. Model inversion for any discrete system can be performed on a node-by-node basis, by considering the ways in which the dynamics evolve in the immediate neighbourhood of the node under consideration. When this model is equipped with fluctuations, one can use standard (Variational Laplace) Bayesian model inversion procedures to estimate the exponents for any given timeseries; in this way we obtain a posterior estimate for \(\beta\) from both the synthetic isotropic and anisotropic data. Following model inversion, we use Bayesian model reduction (Friston et al., 2016; Rosa et al., 2012) to test the evidence for a perfectly isotropic system in which \(\beta =0\) by setting the prior variances of \(\beta\) to zero. We are therefore able to test whether we can correctly identify the ground truth isotropic data (created with \(\beta =0\)) via a higher evidence for the reduced model and, conversely, whether we can correctly identify the ground truth anisotropic data (created with \(\beta \ne 0\)) via a higher evidence for the full model.
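As a minimal illustration of such a forward model, the Python sketch below (illustrative only – the published forward models are in MATLAB) integrates the isotropic (\(\beta =0\)) special case of the discretized dynamics on a 3 × 3 grid using forward-Euler steps; the zero boundary condition, step size, and initial values here are our own assumptions, not those used in the accompanying code:

```python
def step(phi, psi, dt):
    """One forward-Euler step of the isotropic (beta = 0) discretized wave
    equation: phi_dot = psi, psi_dot = discrete Laplacian of phi.
    Points outside the grid are clamped to zero (an assumed boundary)."""
    n = len(phi)
    get = lambda i, j: phi[i][j] if 0 <= i < n and 0 <= j < n else 0.0
    lap = [[get(i + 1, j) + get(i - 1, j) + get(i, j + 1) + get(i, j - 1)
            - 4 * get(i, j) for j in range(n)] for i in range(n)]
    phi_new = [[phi[i][j] + dt * psi[i][j] for j in range(n)] for i in range(n)]
    psi_new = [[psi[i][j] + dt * lap[i][j] for j in range(n)] for i in range(n)]
    return phi_new, psi_new

# nine pixels with distinct initial values and zero initial rate of change
phi = [[0.1 * (3 * i + j) for j in range(3)] for i in range(3)]
psi = [[0.0] * 3 for _ in range(3)]
for _ in range(100):
    phi, psi = step(phi, psi, 0.01)
```

Each pixel only ever interacts with its immediate neighbours, which is why inversion can proceed node by node, as described above.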
Empirical data
All animal experiments were carried out according to the guidelines of the Veterinary Office of Switzerland following approval by the Cantonal Veterinary Office in Zürich. All murine calcium imaging data were collected as previously reported (Fagerholm et al., 2021; Gallero-Salas et al., 2021; Gilad et al., 2018). As with the synthetic data, we perform Bayesian model inversion to obtain posterior estimates for the \(\beta\) parameter, quantifying the extent to which the timeseries for each pixel deviate from isotropy at \(\beta =0\). We perform this model inversion once for every second pixel \(\left(n=6651\right)\) within each trial \(\left(n=10\right)\), mouse \(\left(n=3\right)\), and condition (\(n=2\), task and rest). Following model inversion, we average the posteriors for the \(\beta\) parameter across trials to obtain results per mouse and condition, and then average these posteriors once more across mice. We then filter these averaged images with 2D Gaussian moving average smoothing kernels with a semi-width window of 7 pixels.
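The final smoothing step can be illustrated as follows – a Python sketch of a normalised 2D Gaussian kernel with a semi-width of 7 pixels; the kernel width \(\sigma\) here is our own assumption, as the exact smoothing parameters are specified in the analysis code:

```python
import math

def gaussian_kernel(semi_width, sigma):
    """Build a (2*semi_width + 1)^2 grid of Gaussian weights, normalised to
    sum to one, as used for moving-average smoothing of the beta maps."""
    size = 2 * semi_width + 1
    k = [[math.exp(-((i - semi_width)**2 + (j - semi_width)**2) / (2 * sigma**2))
          for j in range(size)] for i in range(size)]
    total = sum(v for row in k for v in row)
    return [[v / total for v in row] for row in k]

k = gaussian_kernel(7, 3.0)       # sigma = 3.0 is an illustrative choice
print(len(k), len(k[0]))          # -> 15 15
```

Because the weights sum to one, convolving with this kernel averages neighbouring posterior estimates without changing their overall scale.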
Results
We show the ways in which the synthetic timeseries evolve for the isotropic (Fig. 1A) and anisotropic (Fig. 1B) cases. Each of the regions in the \(3\times 3\) synthetic data is given a different initial value and rate of change ranging between \(\sim -1\) and \(2\), to initialize dynamics in the absence of external inputs. Exact values of model parameters, boundary conditions, and initial values are listed in the publicly available code. Following model inversion and reduction, we then demonstrate proof of principle by showing that there is higher evidence for the ground truth isotropic data having been created with the isotropic model (Fig. 1C) and conversely for the ground truth anisotropic data having been created with the anisotropic model (Fig. 1D). The results in Figs. 1C and D remain when initial conditions and observation noise are varied by \(\sim 10\%\), as commented in the publicly available code. Finally, we show the different degrees of anisotropy in the murine calcium imaging data in rest and task states (Fig. 1E), together with averaged signals (Fig. 1F). Note that the negative posterior \(\beta\) values in Fig. 1E are a result of the specific choice of model parameters, observation noise, and initial conditions used.
Overall, there is a marked variability in the degrees of anisotropy across mice and states. The secondary motor cortices, however, show consistently high degrees of anisotropy across mice and states. The generation (Fig. 1A and B) and inversion (Fig. 1C and D) of the synthetic data can be fully reproduced with the accompanying MATLAB code, and the murine calcium imaging data in Fig. 1E and F are made available in a public repository.
Discussion
We present a theoretical framework, together with a practical numerical analysis, designed for the estimation of anisotropy in arbitrary timeseries from any connected dynamical system. The basis for this framework rests upon classical Lagrangian field theory applied to scalable (mechanically similar) dynamical systems. Scalability entails a situation whereby a system continues to obey the same equation of motion as it changes size. In other words, a dynamical system that grows or shrinks will in general begin producing states that differ from those of the original (unscaled) system; in a scalable system, however, these new states are still described by the same equation of motion that describes the original (unscaled) system.
It stands to reason that the dynamical evolution of the signals propagating in neural systems possesses some form of scalability, given that evolutionary processes add new neuroanatomy to existing structures – i.e., the same basic architecture extends itself whilst maintaining information processing capabilities (Douglas & Martin, 1991; Hilgetag et al., 2000; Markov et al., 2013). Similarly, a neural structure changes scale during development, whilst allowing information processing to remain intact. It is therefore of interest to develop tools that allow for the analysis of scalable systems, and we pose the following question: given that neural systems possess some form of scalability, what are the consequences thereof and what further questions present themselves? It is in this spirit that we present a formalism that applies to any scalable dynamical system and that is sufficiently general to accommodate the evolution of any (driven or non-driven) system in any number of spatial dimensions.
With reference to the murine calcium imaging results, we note the following three main results: a) there is a high variability in anisotropy across mice and states, which may be due to the low number of trials and/or to the fact that anisotropy values may vary from trial to trial depending on neural activity; b) there is no clear difference in anisotropy across rest/task states; and c) there are consistently high levels of anisotropy in the secondary motor cortex (MOs) across mice and states, which may reflect the non-homogeneous nature of local networks.
We construct a methodology that is set within the Bayesian model inversion scheme of Dynamic Causal Modelling (DCM). This means that, by using the timeseries measured from any (e.g., neuroimaging) modality, we obtain posterior estimates of spatial stretch factors – one for each spatial dimension. The discrepancy between these stretch factors then directly provides an estimate of the anisotropy at every voxel in the neuroimaging data. We can obtain an intuitive understanding of what these measures mean by imagining ourselves standing at a certain node in a neural system. If the system is isotropic then, as we look in every direction – up–down, left–right, and back–forward – we will see no difference in the ways in which the signals evolve in time along these different directions. On the other hand, if the system is anisotropic then we will see a difference in our lines of sight along the different axes – and the greater this difference becomes, the greater the degree of anisotropy. We demonstrate proof of principle by generating synthetic data using a special case of the generalised Lagrangian that reduces to the wave equation in the limit of the perfectly isotropic case. Using Bayesian model inversion and reduction, we show that we can correctly identify which of the two models (anisotropic or isotropic) was used to generate each dataset, thus showing the discriminatory ability of the proposed methodology.
It should be noted that, in addition to the Lagrangian framework presented here, the Martin–Siggia–Rose–De Dominicis–Janssen (MSRDJ) formalism is an alternative that allows stochastic differential equations (e.g., the Langevin equation) to be solved probabilistically (Martin et al., 1973). In contrast to our methodology, in which we examine a single solution of a stochastic differential equation, the MSRDJ framework encompasses a probability distribution of possible solutions and the ways in which they evolve in time (Chow & Buice, 2015). Furthermore, it should be noted that whereas resting-state brain dynamics are usually modelled by equilibrium fluctuations about a steady state (i.e., a stable point attractor), here we employ a limit-cycle model. The choice of model type (e.g., fixed point vs. limit cycle) will naturally depend on the nature of the system under consideration.
An empirically determined estimation of anisotropy could be informative in imaging neuroscience, as it facilitates a direct empirical measurement of how substructures within the brain grow (under the assumption of scalability). This provides a new way of assessing the ways in which anatomy changes across both an evolutionary timeline, as well as across the lifespan of an individual organism. It is our hope that the theory, methodology, and accompanying tools will allow for these kinds of questions to be addressed by researchers and that these will lead to a clearer understanding of the spatial dependencies, growth, and development of neural systems.
Data availability
The MATLAB code used to produce Fig. 1A–D is made available at the following public repository: github.com/allavailablepubliccode/anisotropy.
All preprocessed murine calcium imaging data used to produce Fig. 1E and F are made available in the following public repository: figshare.com/articles/Murine_calcium_imaging_data/12012852.
References
Breakspear, M. (2017). Dynamic models of large-scale brain activity. Nature Neuroscience, 20(3):340–52. Epub 2017/02/24. https://doi.org/10.1038/nn.4497. PubMed PMID: 28230845.
Buzsaki, G., Logothetis, N., Singer, W. (2013). Scaling brain size, keeping timing: evolutionary preservation of brain rhythms. Neuron, 80(3):751–64. Epub 2013/11/05. https://doi.org/10.1016/j.neuron.2013.10.002 PubMed PMID: 24183025; PubMed Central PMCID: PMCPMC4009705.
Chow, C. C., & Buice, M. A. (2015). Path integral methods for stochastic differential equations. The Journal of Mathematical Neuroscience, 5(1), 1–35.
Deco, G., Jirsa, V. K., Robinson, P. A., Breakspear, M., Friston, K. (2008). The dynamic brain: from spiking neurons to neural masses and cortical fields. PLoS Computational Biology, 4(8):e1000092.
Douglas, R.J., Martin, K.A. (1991). A functional microcircuit for cat visual cortex. The Journal of physiology, 440:735–69. PubMed PMID: 1666655; PubMed Central PMCID: PMC1180177.
Fagerholm, E. D., Foulkes, W. M. C., Gallero-Salas, Y., Helmchen, F., Friston, K. J., Leech, R., et al. (2021). Neural Systems Under Change of Scale. Frontiers in Computational Neuroscience, 15, 643148. https://doi.org/10.3389/fncom.2021.643148.
Friston, K.J., Litvak, V., Oswal, A., Razi, A., Stephan, K.E., van Wijk, B.C.M., et al. (2016). Bayesian model reduction and empirical Bayes for group (DCM) studies. Neuroimage, 128:413–31. Epub 2015/11/17. https://doi.org/10.1016/j.neuroimage.2015.11.015. PubMed PMID: 26569570; PubMed Central PMCID: PMCPMC4767224.
Friston, K. J., Trujillo-Barreto, N., Daunizeau, J. (2008). DEM: a variational treatment of dynamic systems. Neuroimage, 41(3):849–85. Epub 2008/04/25. https://doi.org/10.1016/j.neuroimage.2008.02.054. PubMed PMID: 18434205.
Gallero-Salas, Y., Han, S., Sych, Y., Voigt, F. F., Laurenczy, B., Gilad, A., et al. (2021). Sensory and behavioral components of neocortical signal flow in discrimination tasks with short-term memory. Neuron, 109(1):135–148.e6.
Gilad, A., Gallero-Salas, Y., Groos, D., Helmchen, F. (2018). Behavioral strategy determines frontal or posterior location of short-term memory in neocortex. Neuron, 99(4):814–828.e7.
Hilgetag, C.C., O'Neill, M.A., Young, M.P. (2000). Hierarchical organization of macaque and cat cortical sensory systems explored with a novel network processor. Philosophical Transactions of the Royal Society B: Biological Sciences, 355(1393):71–89. PubMed PMID: PMC1692720.
Izhikevich, E. M., & Edelman, G. M. (2008). Large-scale model of mammalian thalamocortical systems. Proceedings of the National Academy of Sciences, 105(9), 3593–3598.
Landau, L. D., Lifshitz, E. M. (1976). Mechanics (third edition), Vol. 1 of Course of Theoretical Physics. Oxford: Pergamon Press.
Markov, N., Ercsey-Ravasz, M., Van Essen, D., Knoblauch, K., Toroczkai, Z., & Kennedy, H. (2013). Cortical high-density counterstream architectures. Science, 342(6158), 1238406.
Martin, P. C., Siggia, E., & Rose, H. (1973). Statistical dynamics of classical systems. Physical Review A, 8(1), 423.
McMullin, E. (2002). The origins of the field concept in physics. Physics in Perspective, 4(1), 13–39.
Allen Institute for Brain Science. (2016). Allen Mouse Common Coordinate Framework. http://help.brain-map.org/download.
Roebroeck, A., Formisano, E., Goebel, R. (2011). The identification of interacting networks in the brain using fMRI: Model selection, causality and deconvolution. Neuroimage, 58(2):296–302. Epub 2009/09/30. https://doi.org/10.1016/j.neuroimage.2009.09.036. PubMed PMID: 19786106.
Rosa, M.J., Friston, K., Penny, W. (2012). Post-hoc selection of dynamic causal models. Journal of Neuroscience Methods, 208(1):66–78. Epub 2012/05/09. https://doi.org/10.1016/j.jneumeth.2012.04.013. PubMed PMID: 22561579; PubMed Central PMCID: PMCPMC3401996.
Sears, F.W. (1964). University Physics, Complete.
Acknowledgements
E.D.F. was supported by a King’s College London Prize Fellowship; K.J.F. was funded by a Wellcome Principal Research Fellowship (Ref: 088130/Z/09/Z); R.J.M. was funded by the Wellcome/EPSRC Centre for Medical Engineering (Ref: WT 203148/Z/16/Z); R.L. was funded by the MRC (Ref: MR/R005370/1). The authors would also like to acknowledge support from the Data to Early Diagnosis and Precision Medicine Industrial Strategy Challenge Fund, UK Research and Innovation (UKRI), the National Institute for Health Research (NIHR) Biomedical Research Centre at South London and Maudsley NHS Foundation Trust, and King’s College London.
Funding
Medical Research Council.
Author information
Contributions
All authors conceived of the analysis and wrote the paper.
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Additional information
Action Editor: Abraham Zvi Snyder
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Electronic supplementary material
Appendices
Appendix I
The classical real scalar field of interest in this work depends on position and time, and it is convenient to treat it as a function of the four-dimensional Cartesian vector
$$r=\left(t,x,y,z\right)$$
where \(t\) is time, and \(x,y,z\) are spatial length, width, and height coordinates, respectively. Individual components of the vector \(r\) are written \({r}_{\mu }\), with \(\mu\) any element of the set \(\left\{t, x, y, z\right\}\). The field \(\varphi\) is a function of \(r\):
$$\varphi =\varphi \left(r\right)=\varphi \left(t,x,y,z\right)$$
The vector \(\partial \varphi\) of partial derivatives of \(\varphi\) at \(r\) is given by
$$\partial \varphi =\left(\frac{\partial \varphi }{\partial t},\frac{\partial \varphi }{\partial x},\frac{\partial \varphi }{\partial y},\frac{\partial \varphi }{\partial z}\right)$$
and its components are written \({\partial }_{\mu }\varphi\) or, more simply, \({\varphi }_{\mu }\).
The central quantity in Lagrangian field theory is the Lagrangian density, \(\mathcal{L}\), which is a function of \(r,\varphi ,\partial \varphi\):
$$\mathcal{L}=\mathcal{L}\left(r,\varphi ,\partial \varphi \right)$$
Note that we have not yet assumed any relationship between the values of \(r\), \(\varphi\) and \(\partial \varphi\); the Lagrangian density can be evaluated for any choices of the 9 real numbers required to specify the scalar field \(\varphi\) and the two fourcomponent vectors \({r}_{\mu }\) and \({\partial }_{\mu }\varphi\).
Given a particular choice of field ‘trajectory’ \(\varphi \left(r\right)\), the standard definition of the action as a functional of \(\varphi \left(r\right)\) is:
$$S\left[\varphi \right]=\int_{\Omega }\mathcal{L}\left(r,\varphi ,\partial \varphi \right)\,{d}^{4}r$$
A trajectory in this context consists of the values of the field \(\varphi \left(r\right)= \varphi (t, x,y,z)\) at all spatial points \((x,y,z)\) and all times \(t\) between a chosen initial time \({t}_{i}\) and a chosen final time \({t}_{f}\). The four-dimensional integration volume \(\Omega\) coincides with the region in which the trajectory is defined. Note that we are now assuming that the field \(\varphi\) and its derivatives \({\varphi }_{\mu }\) are functions of \(r\), so \(\varphi\) and \(\partial \varphi\) are now related to each other. We are also assuming that \(\varphi\) and \({\varphi }_{\mu }\) tend to zero as the spatial distance \(d= \sqrt{{x}^{2}+ {y}^{2}+ {z}^{2}}\) from the origin tends to infinity.
The principle of stationary action tells us that the evolution of \(\varphi \left(r\right)\) between the initial and final times, \({t}_{i}\) and \({t}_{f}\), renders the action stationary with respect to all variations
$$\varphi \left(r\right)\to \varphi \left(r\right)+\delta \varphi \left(r\right)$$
that vanish when \(t={t}_{i}\) and \(t={t}_{f}\).
Using Eq. (24), we evaluate the variation of the action as follows:
$$\begin{aligned}\delta S&=\int_{\Omega }\left(\frac{\partial \mathcal{L}}{\partial \varphi }\delta \varphi +\frac{\partial \mathcal{L}}{\partial {\varphi }_{\mu }}{\partial }_{\mu }\delta \varphi \right){d}^{4}r\\&=\int_{\Omega }\left(\frac{\partial \mathcal{L}}{\partial \varphi }\delta \varphi +{\partial }_{\mu }\left(\frac{\partial \mathcal{L}}{\partial {\varphi }_{\mu }}\delta \varphi \right)-{\partial }_{\mu }\left(\frac{\partial \mathcal{L}}{\partial {\varphi }_{\mu }}\right)\delta \varphi \right){d}^{4}r\end{aligned}$$
where we are using the Einstein summation convention, according to which terms in which the same suffix appears twice are automatically summed over all four values of that suffix. We next convert the middle term on the second line to a surface integral by using the 4D version of the divergence theorem to obtain:
$$\delta S=\oint_{\partial \Omega }\frac{\partial \mathcal{L}}{\partial {\varphi }_{\mu }}\delta \varphi \,d{S}_{\mu }+\int_{\Omega }\left(\frac{\partial \mathcal{L}}{\partial \varphi }-{\partial }_{\mu }\frac{\partial \mathcal{L}}{\partial {\varphi }_{\mu }}\right)\delta \varphi \,{d}^{4}r$$
where \(\partial\Omega\) is the surface of the 4D volume \(\Omega\) and \(d{S}_{\mu }\) is an element of the 3D surface of \(\Omega\). If the field decays to zero rapidly enough as the spatial distance from the origin tends to infinity, and remembering that \(\delta \varphi =0\) when \(t={t}_{i}\) and \(t={t}_{f}\), the surface integral vanishes, and we obtain:
$$\delta S=\int_{\Omega }\left(\frac{\partial \mathcal{L}}{\partial \varphi }-{\partial }_{\mu }\frac{\partial \mathcal{L}}{\partial {\varphi }_{\mu }}\right)\delta \varphi \,{d}^{4}r$$
Since \(\delta \varphi\) is arbitrary except for the constraint that it vanishes at the surface \(\partial\Omega\), the principle of stationary action \(\left(\delta S=0\right)\) implies that the fields evolve according to the Euler–Lagrange equation:
$$\frac{\partial \mathcal{L}}{\partial \varphi }-{\partial }_{\mu }\frac{\partial \mathcal{L}}{\partial {\varphi }_{\mu }}=0$$
or more explicitly:
$$\frac{\partial \mathcal{L}}{\partial \varphi }=\frac{\partial }{\partial t}\frac{\partial \mathcal{L}}{\partial {\varphi }_{t}}+\frac{\partial }{\partial x}\frac{\partial \mathcal{L}}{\partial {\varphi }_{x}}+\frac{\partial }{\partial y}\frac{\partial \mathcal{L}}{\partial {\varphi }_{y}}+\frac{\partial }{\partial z}\frac{\partial \mathcal{L}}{\partial {\varphi }_{z}}$$
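The variational argument above can be checked symbolically. The following is a minimal sketch in Python using SymPy (separate from the paper's MATLAB code); the free scalar Lagrangian density \(\mathcal{L}=\frac{1}{2}{\varphi }_{t}^{2}-\frac{1}{2}{\varphi }_{x}^{2}\) in 1+1 dimensions is an assumed toy example, not the anisotropic Lagrangian of the main text:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

# 1+1-dimensional coordinates and a generic scalar field phi(t, x)
t, x = sp.symbols('t x')
phi = sp.Function('phi')(t, x)

# Toy Lagrangian density of a free scalar field (an assumed example,
# not the anisotropic Lagrangian of the main text):
# L = (1/2) phi_t^2 - (1/2) phi_x^2
L = sp.Rational(1, 2) * sp.Derivative(phi, t)**2 \
    - sp.Rational(1, 2) * sp.Derivative(phi, x)**2

# SymPy applies the principle of stationary action and returns the
# Euler-Lagrange equation, here the 1+1 D wave equation phi_tt = phi_xx
eom = euler_equations(L, [phi], [t, x])[0]

# Any right-travelling wave phi = f(x - t) should satisfy the equation
trial = sp.sin(x - t)
residual = eom.lhs.subs(phi, trial).doit()
assert sp.simplify(residual) == 0
```

Replacing the toy density with any other choice of \(\mathcal{L}\left(r,\varphi ,\partial \varphi \right)\) yields the corresponding equation of motion in the same way.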
Appendix II
To show that the scaled field \({\varphi }_{s}=\lambda \varphi \left({\lambda }^{{\alpha }_{x}}t, {\lambda }^{{\alpha }_{x}}x, {\lambda }^{\left({\alpha }_{x}+\beta \right)}y\right)\) is also a solution of Eq. (15), we differentiate \({\varphi }_{s}\) twice with respect to time:
$$\frac{{\partial }^{2}{\varphi }_{s}}{\partial {t}^{2}}={\lambda }^{1+2{\alpha }_{x}}\frac{{\partial }^{2}\varphi }{\partial {t}^{2}}\left({\lambda }^{{\alpha }_{x}}t,{\lambda }^{{\alpha }_{x}}x,{\lambda }^{\left({\alpha }_{x}+\beta \right)}y\right)$$
as well as twice with respect to the \(x\) coordinate:
$$\frac{{\partial }^{2}{\varphi }_{s}}{\partial {x}^{2}}={\lambda }^{1+2{\alpha }_{x}}\frac{{\partial }^{2}\varphi }{\partial {x}^{2}}\left({\lambda }^{{\alpha }_{x}}t,{\lambda }^{{\alpha }_{x}}x,{\lambda }^{\left({\alpha }_{x}+\beta \right)}y\right)$$
and twice with respect to the \(y\) coordinate:
$$\frac{{\partial }^{2}{\varphi }_{s}}{\partial {y}^{2}}={\lambda }^{1+2\left({\alpha }_{x}+\beta \right)}\frac{{\partial }^{2}\varphi }{\partial {y}^{2}}\left({\lambda }^{{\alpha }_{x}}t,{\lambda }^{{\alpha }_{x}}x,{\lambda }^{\left({\alpha }_{x}+\beta \right)}y\right)$$
We then raise \({\varphi }_{s}\) to the power of \(2\beta\):
$${\varphi }_{s}^{2\beta }={\lambda }^{2\beta }{\varphi }^{2\beta }\left({\lambda }^{{\alpha }_{x}}t,{\lambda }^{{\alpha }_{x}}x,{\lambda }^{\left({\alpha }_{x}+\beta \right)}y\right)$$
which, together with Eq. (33), means that:
We then differentiate \({\varphi }_{s}\) once with respect to the \(y\) coordinate and take the square:
$${\left(\frac{\partial {\varphi }_{s}}{\partial y}\right)}^{2}={\lambda }^{2\left(1+{\alpha }_{x}+\beta \right)}{\left(\frac{\partial \varphi }{\partial y}\right)}^{2}\left({\lambda }^{{\alpha }_{x}}t,{\lambda }^{{\alpha }_{x}}x,{\lambda }^{\left({\alpha }_{x}+\beta \right)}y\right)$$
which, together with Eq. (34), means that:
Therefore, by using Eqs. (31), (35) and (37) with the original equation of motion in Eq. (15):
where the \({\lambda }^{1+2{\alpha }_{x}}\) factor common to every term cancels, leaving the form of the original equation of motion in Eq. (15):
We have therefore shown that if \(\varphi \left(t,x,y\right)\) is a solution of Eq. (15), so is the scaled field
$${\varphi }_{s}\left(t,x,y\right)=\lambda \varphi \left({\lambda }^{{\alpha }_{x}}t,{\lambda }^{{\alpha }_{x}}x,{\lambda }^{\left({\alpha }_{x}+\beta \right)}y\right)$$
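The chain-rule bookkeeping above can be verified symbolically. The following is a minimal SymPy sketch (again separate from the paper's MATLAB code); the concrete test field and the exponent values \({\alpha }_{x}=1/2\), \(\beta =1/3\) are arbitrary choices for illustration:

```python
import sympy as sp

# Coordinates and the scale factor lambda (taken > 0)
t, x, y, lam = sp.symbols('t x y lambda', positive=True)

# Arbitrary illustrative values of the scaling exponents alpha_x and beta
ax, beta = sp.Rational(1, 2), sp.Rational(1, 3)

# Any smooth test field phi(t, x, y) will do for a chain-rule check
def phi(a, b, c):
    return sp.sin(a) * sp.cos(b) * sp.exp(-c**2)

# Scaled field phi_s = lambda * phi(lambda^ax t, lambda^ax x, lambda^(ax+beta) y)
a, b, c = lam**ax * t, lam**ax * x, lam**(ax + beta) * y
phi_s = lam * phi(a, b, c)

# Second derivatives of the *original* field, evaluated at the scaled arguments
u, v, w = sp.symbols('u v w')
phi_tt = sp.diff(phi(u, v, w), u, 2).subs({u: a, v: b, w: c})
phi_xx = sp.diff(phi(u, v, w), v, 2).subs({u: a, v: b, w: c})
phi_yy = sp.diff(phi(u, v, w), w, 2).subs({u: a, v: b, w: c})

# Each derivative of phi_s picks up exactly the power of lambda derived above
assert sp.simplify(sp.diff(phi_s, t, 2) - lam**(1 + 2*ax) * phi_tt) == 0
assert sp.simplify(sp.diff(phi_s, x, 2) - lam**(1 + 2*ax) * phi_xx) == 0
assert sp.simplify(sp.diff(phi_s, y, 2) - lam**(1 + 2*(ax + beta)) * phi_yy) == 0
```

Because the check only uses the chain rule, it holds for any smooth \(\varphi\) and any exponent values, independently of the particular equation of motion in Eq. (15).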
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Fagerholm, E. D., Foulkes, W. M. C., Gallero-Salas, Y. et al. Estimating anisotropy directly via neural timeseries. J Comput Neurosci 50, 241–249 (2022). https://doi.org/10.1007/s10827-021-00810-8
Keywords
 Anisotropy
 Neuroimaging
 DCM
 Data fitting
 Lagrangian
 Field theory