1 Introduction

Two of the main concepts upon which computational neuroscience models are based are those of the ‘particle’ (Sears, 1964) and the ‘field’ (McMullin, 2002) – both terms that are inherited from theoretical physics.

1.1 Particle theoretic models

In the particle theoretic approach we treat every node within a neural system as a zero-dimensional (point-like) element – a so-called ‘particle’ that evolves in time. The way in which each of these neural particles evolves influences the rest of the connected system, such that collectively the particles form the nodes of a dynamically evolving graph (Deco et al., 2008). Particle theoretic frameworks offer experimental advantages for neuroimaging modalities such as electroencephalography (EEG), in which there are usually very few measurement locations. Furthermore, particle theoretic frameworks have computational and statistical advantages for neuroimaging analyses owing to the associated dimensionality reduction – an attribute that becomes increasingly important for large-scale recordings of neural systems (Izhikevich & Edelman, 2008). However, this computational expediency comes at the cost of losing the spatial information contained in a continuum description.

1.2 Field theoretic models

The field theoretic approach treats a neural system as a continuous structure called a ‘field’, which is a function of position, with position treated as a continuous variable. A neural field can exist in 2D or 3D space: it is natural to work in a two-dimensional space when modelling a single cortical sheet, or in a three-dimensional space for a cross-cortical volume (Breakspear, 2017). In this paper, we use a model that is in essence a compromise between the particle and field approaches: we take a continuous field and discretize it such that we only consider certain points in space – a model we henceforth refer to as a discretized field theoretic model.

1.3 Isotropic vs. anisotropic systems

A system is said to be isotropic if it looks identical in every direction. This means that if we imagine ourselves standing somewhere within an isotropic structure, then we would see precisely the same structure along all lines of sight. Conversely, anisotropy is a measure of the extent to which a system deviates from perfect isotropy. For example, a sheet of wood is anisotropic due to the preferential directionality of the grain – which we can see by the fact that it is easier to break the wood along the grain than it is to break it against the grain. We present a discretized field theoretic model that allows for the estimation of anisotropy in connected dynamical systems of arbitrary dimensionality. We provide accompanying MATLAB code in a public repository that can be readily used to measure levels of anisotropy on a node-wise basis via timeseries measurements.

Here, we focus on isotropy as the main parameter of interest, as this quantity is usually studied in neuroscience in the context of diffusion tensor imaging (DTI). DTI provides measures of the structural integrity of axons by quantifying the extent to which water molecules diffuse preferentially along them – i.e., anisotropically. Damage to white matter caused by, e.g., traumatic brain injury (TBI) can rupture axonal tissue, causing water molecules to leak more isotropically than in an intact axon. In contrast to DTI, the measure of isotropy we propose here can be estimated directly from any region-specific neuroimaging timeseries. This allows us to apply the mathematical framework of Lagrangian field theory to a dynamically (as opposed to structurally) derived measure of anisotropy. Furthermore, because we embed the isotropy measure within a scalable mathematical framework, we can estimate how similar (isotropic) or dissimilar (anisotropic) the signals of neighbouring regions are during the growth of neural systems.

1.4 Inference of anisotropy

We estimate anisotropy in arbitrary neuronal timeseries, together with hyperparameters that describe the variance of states and parameters, by using Dynamic Expectation Maximisation (DEM) (Friston et al., 2008) within the Statistical Parametric Mapping (SPM) software. This inference method uses generalised coordinates of motion within a Laplace approximation of the posterior over states and parameters (multivariate Gaussians). In contrast with other inference methods, DEM allows us to use four embedding dimensions, which accommodate smooth noise processes – unlike, e.g., traditional Kalman filters, which employ martingale assumptions (Roebroeck et al., 2009).

Specifically, DEM approximates the posterior of a parameter quantifying anisotropy using three steps:

1) The D step uses variational Bayesian filtering as an instantaneous gradient descent in a moving frame of reference for state estimation in generalised coordinates;

2) The E step estimates the model parameter quantifying anisotropy via gradient ascent on the variational free energy. The variational free energy \(F\) combines both accuracy and complexity when scoring models, \(F=\underbrace{\langle \log p\left(y\vert\theta,m\right)\rangle}_{accuracy}-\underbrace{KL\left[q\left(\theta\right),p\left(\theta\vert m\right)\right]}_{complexity}\), where \(\log p\left(y|\theta ,m\right)\) is the log likelihood of the data \(y\) conditioned upon model states, parameters and hyperparameters \(\theta\), and model structure \(m\). We seek a model that provides an accurate and maximally simple explanation for the data;

3) The M step repeats this process for the hyperparameters, given by the precision components of random fluctuations on the states and observation noise (Friston et al., 2008).

1.5 Overview

This paper comprises three sections.

In the first, we outline the theoretical foundations of Lagrangian field theory and the form of a generalised scalable discretized equation of motion that can be used both for forward generative models and for model inversion via neural timeseries.

In the second section, we generate in silico data via forward models of an isotropic and an anisotropic system. We then use Bayesian model inversion and subsequent Bayesian model reduction to show that we can correctly discriminate between the isotropic and anisotropic systems – thereby providing construct validation.

In the third section, we use murine data collected in both rest and task states to map levels of anisotropy across different cortical regions directly from the in vivo timeseries. These wide-field calcium imaging data were collected across the left hemisphere of mouse cortex expressing GCaMP6f in layer 2/3 excitatory neurons (Fagerholm et al., 2021; Gallero-Salas et al., 2021; Gilad et al., 2018), with cortical areas aligned to the Allen Mouse Common Coordinate Framework (Mouse & Coordinate, 2016).

We suggest that the presented methodology could be valuable in future large-scale studies of neural systems, in which the quantification of region-wise anisotropy may shed light on how neural systems grow both ontogenetically within the lifespan of an individual animal, as well as phylogenetically across species (Buzsaki et al., 2013).

2 Methods

We will now cover the technical foundations of the approach, starting with Lagrangian field theory and the principle of stationary action. We then derive a generalised, scalable, discretized field theoretic Lagrangian and consider the estimation of anisotropy under this formulation from empirical (timeseries) data using Bayesian estimators.

2.1 Lagrangian field theory

We remind the reader of the basic concepts underlying Lagrangian field theory and the principle of stationary action in Appendix I. In brief: we represent the state of a system by a field which is a function of the 4D space–time position \(r\equiv \left(t,x,y,z\right)\). The equations of motion that describe how this field evolves in time are obtained by requiring that the field \(\varphi \left(r\right)\) renders the value of a quantity known as the action \({S}\) stationary:

$$S\left[\varphi \left(r\right)\right]={\int }_{\Omega }{d}^{4}r \mathcal{L}\left(r,\varphi ,\partial \varphi \right)$$
(1)

The integral in the definition of the action is over the space–time domain \(\Omega\) encompassing all space from the initial time \({t}_{i}\) to the final time \({t}_{f}\). The integrand, which is known as the Lagrangian density \(\mathcal{L}\left(r,\varphi ,\partial \varphi \right)\), defines the system of interest as a function of \(r\), the values of the fields \(\varphi\) at \(r\), and their spatiotemporal derivatives \(\partial \varphi\) at \(r\).
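As a familiar, standard illustration of this machinery (a textbook example, not specific to the model developed below), the Lagrangian density \(\mathcal{L}=\frac{1}{2}{\dot{\varphi }}^{2}-\frac{1}{2}{c}^{2}\left({{\varphi }_{x}}^{2}+{{\varphi }_{y}}^{2}+{{\varphi }_{z}}^{2}\right)\) yields, via the Euler–Lagrange equation of Appendix I, the isotropic wave equation:

$$\partial_t\left(\frac{\partial \mathcal{L}}{\partial \dot{\varphi }}\right)+\sum_{\mu \in \left\{x,y,z\right\}}\partial_\mu\left(\frac{\partial \mathcal{L}}{\partial {\varphi }_{\mu }}\right)-\frac{\partial \mathcal{L}}{\partial \varphi }=0\Rightarrow \ddot{\varphi }={c}^{2}\left({\varphi }_{xx}+{\varphi }_{yy}+{\varphi }_{zz}\right)$$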

2.2 Scale transformations

We define a scale transformation as a mapping from arbitrary points in the 5-D space with axes labelled \(\left(\varphi ,r\right)=(\varphi ,t,x,y,z)\) to scaled points \(\left({\varphi }_{s},{r}_{s}\right)=\left(\lambda \varphi ,{\lambda }^{\alpha }r\right)\), where \(\lambda\) is an arbitrary scale factor and \(\alpha\) is a constant. A field configuration \(\varphi =\varphi \left(r\right)\) is a 4-D surface in this 5-D space, and the scale transformation takes points on that surface to points on a new 4D surface, defining a new field configuration. The value of the new field \({\varphi }_{s}\) at the scaled space–time point \({r}_{s}={\lambda }^{\alpha }r\) is related to the value of the original field \(\varphi\) at the unscaled point \(r\) via

$$\varphi_s\left(r_s\right)=\lambda\varphi\left(r\right)\Rightarrow\varphi_s\left(\lambda^\alpha r\right)=\lambda\varphi\left(r\right)\Rightarrow\varphi_s\left(r\right)=\lambda\varphi\left(\lambda^{-\alpha}r\right)$$
(2)

It is convenient to allow different scaling exponents in different space–time directions, so from now on \({\lambda }^{\alpha }r\) is to be understood as shorthand for the vector \(\left({\lambda }^{{\alpha }_{t}}t,{\lambda }^{{\alpha }_{x}}x,{\lambda }^{{\alpha }_{y}}y,{\lambda }^{{\alpha }_{z}}z\right).\)

Taking partial derivatives of Eq. (2), we obtain:

$${\varphi }_{s\mu }\left(r\right)={\partial }_{\mu }\left({\varphi }_{s}\left(r\right)\right)={\lambda }^{1-{\alpha }_{\mu }}{\varphi }_{\mu }\left({\lambda }^{-\alpha }r\right),$$
(3)

where \({\lambda }^{1-{\alpha }_{\mu }}\) depends only on the \({\mu }^{{th}}\) component of the vector of exponents \(\alpha =\left({\alpha }_{t},{\alpha }_{x},{\alpha }_{y},{\alpha }_{z}\right)\). From now on, we denote the vector with components \({\lambda }^{1-{\alpha }_{\mu }}{\varphi }_{\mu }\left({\lambda }^{-\alpha }r\right)\) as \({\lambda }^{1-\alpha }\partial \varphi ({\lambda }^{-\alpha }r)\).
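For clarity, Eq. (3) follows from the last equality in Eq. (2) by the chain rule:

$${\varphi }_{s\mu }\left(r\right)={\partial }_{\mu }\left[\lambda \varphi \left({\lambda }^{-\alpha }r\right)\right]=\lambda \,{\lambda }^{-{\alpha }_{\mu }}{\varphi }_{\mu }\left({\lambda }^{-\alpha }r\right)={\lambda }^{1-{\alpha }_{\mu }}{\varphi }_{\mu }\left({\lambda }^{-\alpha }r\right)$$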

2.3 Scaling the action

Using Eqs. (1), (2) and (3), we see that the scaled action is given by:

$$S\left[{\varphi }_{s}\left(r\right)\right]={\int }_{{\lambda }^{{\alpha }_{t}}{t}_{i}}^{{\lambda }^{{\alpha }_{t}}{t}_{f}}dt\underset{\mathrm{all}\ x,y,z}{\iiint }dx\,dy\,dz\ \mathcal{L}\left(r, \lambda \varphi \left({\lambda }^{-\alpha }r\right),{\lambda }^{1-\alpha }\partial \varphi \left({\lambda }^{-\alpha }r\right)\right),$$
(4)

We then change variables in Eq. (4), setting \(r{^{\prime}}={\lambda }^{-\alpha }r\) such that:

$$S\left[{\varphi }_{s}\left(r\right)\right]={\lambda }^{{\sum }_{\nu }{\alpha }_{\nu }}{\int }_{{t}_{i}}^{{t}_{f}}dt'\underset{\mathrm{all}\ x,y,z}{\iiint }dx'\,dy'\,dz'\ \mathcal{L}\left({\lambda }^{\alpha }r', \lambda \varphi \left(r'\right), {\lambda }^{1-\alpha }\partial \varphi \left(r'\right)\right)$$
(5)

where \({\lambda }^{{\sum }_{\nu }{\alpha }_{\nu }}\) is the Jacobian that accounts for the change in integration variables and \(\sum_{\nu }{\alpha }_{\nu }={\alpha }_{t}+{\alpha }_{x}+{\alpha }_{y}+{\alpha }_{z}\). The integrals are now over the same space–time region \(\Omega\) as in the original un-scaled action in Eq. (1), which means that we can re-write Eq. (5) using the same simple notation:

$$S\left[{\varphi }_{s}\left(r\right)\right]={\lambda }^{\sum_{\nu }{\alpha }_{\nu }}{\int }_{\Omega }{d}^{4}r \mathcal{L}\left({\lambda }^{\alpha }r, \lambda \varphi \left(r\right), {\lambda }^{1-\alpha }\partial \varphi \left(r\right)\right)$$
(6)

2.4 Scalable systems

The action \(S\left[\varphi \left(r\right)\right]\) is said to be scalable, or equivalently to possess ‘mechanical similarity’ (Landau & Lifshitz, 1976), if, for any choice of \(\varphi \left(r\right)\) – not just choices that make the action stationary and therefore satisfy the Euler–Lagrange equation of motion (see Appendix I) – the following relationship holds:

$$S\left[{\varphi }_{s}\left(r\right)\right]={\lambda }^{n}S\left[\varphi \left(r\right)\right],$$
(7)

where \(n\) is a constant. Note that a ‘scalable’ system should not be confused with a ‘scale free’ system, which is one that lacks a characteristic length scale, such as those studied in the physics of phase transitions.

More explicitly, we can use Eqs. (6) and (7) to express the scalability condition as follows:

$${\int }_{\Omega }{d}^{4}r \mathcal{L}\left({\lambda }^{\alpha }r, \lambda \varphi (r),{\lambda }^{1-\alpha }\partial \varphi (r)\right)={\lambda }^{n-\sum_{\nu }{\alpha }_{\nu }}{\int }_{\Omega }{d}^{4}r \mathcal{L}\left(r,\varphi (r),\partial \varphi (r)\right)$$
(8)

2.5 Generalised scalable Lagrangians

We can expand any analytic Lagrangian density \(\mathcal{L}\left(\varphi ,\partial \varphi \right)\) as a power series:

$$\mathcal{L}=\sum_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}{{C}}_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}{\varphi }^{a}{\dot{\varphi }}^{{b}_{t}}{{\varphi }_{x}}^{{b}_{x}}{{\varphi }_{y}}^{{b}_{y}}{{\varphi }_{z}}^{{b}_{z}},$$
(9)

where \({{C}}_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}\) is an expansion coefficient and the summations over the integers \(a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}\) range from \(0\) to \(\infty\). We have assumed for the sake of simplicity that the Lagrangian density has no explicit dependence on \(r\). This is normally the case when the system of interest is not driven by external forces or other influences. We next use Eq. (9) to obtain \(\mathcal{L}\left(\lambda \varphi ,{\lambda }^{1-\alpha }\partial \varphi \right)\):

$$\mathcal{L}\left(\lambda \varphi ,{\lambda }^{1-\alpha }\partial \varphi \right)=\sum_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}{C}_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}\,{\lambda }^{a+{\sum }_{\nu }(1-{\alpha }_{\nu }){b}_{\nu }}\,{\varphi }^{a}{\dot{\varphi }}^{{b}_{t}}{{\varphi }_{x}}^{{b}_{x}}{{\varphi }_{y}}^{{b}_{y}}{{\varphi }_{z}}^{{b}_{z}}$$
(10)

For the action to be scalable, Eq. (8) tells us that Eq. (10) must equal:

$${\lambda }^{n-{\sum }_{\nu }{\alpha }_{\nu }}\sum_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}{{C}}_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}{\varphi }^{a}{\dot{\varphi }}^{{b}_{t}}{{\varphi }_{x}}^{{b}_{x}}{{\varphi }_{y}}^{{b}_{y}}{{\varphi }_{z}}^{{b}_{z}}$$
(11)

We conclude that the action is scalable if and only if

$$a+\sum_{\nu }(1- {\alpha }_{\nu }){b}_{\nu }=n-\sum_{\nu }{\alpha }_{\nu }$$
(12)

for all choices of the integers \(a\) and \({b}_{\nu }\) at which \({{C}}_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}\) is non-zero. If, for example, we consider possible contributions to the Lagrangian with specific values of \({b}_{t}\), \({b}_{x}\), \({b}_{y}\), and \({b}_{z}\), Eq. (12) tells us that \({{C}}_{a,{b}_{t},{b}_{x},{b}_{y},{b}_{z}}\) must be zero unless \(a= n-\sum_{\nu }{\alpha }_{\nu }- \sum_{\nu }(1- {\alpha }_{\nu }){b}_{\nu }\). The value of \(a\) is determined by the values of \({b}_{t}\), \({b}_{x}\), \({b}_{y}\), \({b}_{z}\) and the summation over \(a\) is no longer required. The generalised scalable discretized field theoretic Lagrangian may therefore be written as follows:

$$\mathcal{L}=\sum_{{b}_{t},{b}_{x},{b}_{y},{b}_{z}}{{C}}_{{b}_{t},{b}_{x},{b}_{y},{b}_{z}}{\varphi }^{n+\sum_{\nu }\left(({\alpha }_{\nu }-1){b}_{\nu }-{\alpha }_{\nu }\right)}{\dot{\varphi }}^{{b}_{t}}{{\varphi }_{x}}^{{b}_{x}}{{\varphi }_{y}}^{{b}_{y}}{{\varphi }_{z}}^{{b}_{z}}$$
(13)

2.6 The anisotropic wave equation

Let us now design a special case of Eq. (13) in two spatial dimensions. Having chosen to set \({\alpha }_{t}={\alpha }_{x}\) and \(n={\alpha }_{y}+2\), we construct a Lagrangian density with three non-zero terms. In the first term, \({C}_{{b}_{t},{b}_{x},{b}_{y}}=1\), \({b}_{t}=2\), and \({b}_{x}={b}_{y}=0\); in the second term, \({C}_{{b}_{t},{b}_{x},{b}_{y}}=-1\), \({b}_{x}=2\), and \({b}_{t}={b}_{y}=0\); and finally, in the third term, \({C}_{{b}_{t},{b}_{x},{b}_{y}}=-1\), \({b}_{y}=2\), and \({b}_{t}={b}_{x}=0\).
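As a consistency check, substituting each of these three choices into the exponent of \(\varphi\) in Eq. (13), with \(\nu \in \left\{t,x,y\right\}\), \({\alpha }_{t}={\alpha }_{x}\) and \(n={\alpha }_{y}+2\), fixes the power of \(\varphi\) that multiplies each term:

$$\begin{aligned}{b}_{t}=2:&\quad n+2\left({\alpha }_{t}-1\right)-\sum_{\nu }{\alpha }_{\nu }={\alpha }_{t}-{\alpha }_{x}=0\\{b}_{x}=2:&\quad n+2\left({\alpha }_{x}-1\right)-\sum_{\nu }{\alpha }_{\nu }={\alpha }_{x}-{\alpha }_{t}=0\\{b}_{y}=2:&\quad n+2\left({\alpha }_{y}-1\right)-\sum_{\nu }{\alpha }_{\nu }=2\left({\alpha }_{y}-{\alpha }_{x}\right)\end{aligned}$$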

This yields the Lagrangian density:

$$\mathcal{L}={\dot{\varphi }}^{2}-{{\varphi }_{x}}^{2}-{\varphi }^{2\beta }{{\varphi }_{y}}^{2},$$
(14)

where the exponent \(\beta\), which is defined by \(\beta ={\alpha }_{y}-{\alpha }_{x}\), quantifies the degree of anisotropy, such that the system is perfectly isotropic when \(\beta=0\Rightarrow\alpha_y=\alpha_x\). To provide intuition for the way in which the \(\beta\) parameter affects the system’s dynamics, we run a series of forward models for a range of \(\beta\) values in Supplementary Fig. 1.

The corresponding equation of motion – the two-dimensional Euler–Lagrange equation (see Appendix I) – is:

$$\ddot{\varphi }={\varphi }_{xx}+{\varphi }^{2\beta }{\varphi }_{yy}+\beta {\varphi }^{2\beta -1}{{\varphi }_{y}}^{2}$$
(15)
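For completeness (a standard computation, using the field-theoretic Euler–Lagrange equation in the form \(\partial_t\left(\partial \mathcal{L}/\partial \dot{\varphi }\right)+\partial_x\left(\partial \mathcal{L}/\partial {\varphi }_{x}\right)+\partial_y\left(\partial \mathcal{L}/\partial {\varphi }_{y}\right)-\partial \mathcal{L}/\partial \varphi =0\); see Appendix I), the derivatives of the Lagrangian density in Eq. (14) are:

$$\frac{\partial \mathcal{L}}{\partial \dot{\varphi }}=2\dot{\varphi },\qquad \frac{\partial \mathcal{L}}{\partial {\varphi }_{x}}=-2{\varphi }_{x},\qquad \frac{\partial \mathcal{L}}{\partial {\varphi }_{y}}=-2{\varphi }^{2\beta }{\varphi }_{y},\qquad \frac{\partial \mathcal{L}}{\partial \varphi }=-2\beta {\varphi }^{2\beta -1}{{\varphi }_{y}}^{2}$$

so that \(2\ddot{\varphi }-2{\varphi }_{xx}-2\partial_y\left({\varphi }^{2\beta }{\varphi }_{y}\right)+2\beta {\varphi }^{2\beta -1}{{\varphi }_{y}}^{2}=0\), which, on expanding \(\partial_y\left({\varphi }^{2\beta }{\varphi }_{y}\right)={\varphi }^{2\beta }{\varphi }_{yy}+2\beta {\varphi }^{2\beta -1}{{\varphi }_{y}}^{2}\) and dividing by two, rearranges to Eq. (15).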

We can verify that if \(\varphi \left(t,x,y\right)\) is a solution of this equation, then so is the scaled field \({\varphi }_{s}\left(t,x,y\right)= \lambda \varphi \left({\lambda }^{-{\alpha }_{t}}t, {\lambda }^{-{\alpha }_{x}}x, {\lambda }^{-{\alpha }_{y}}y\right)= \lambda \varphi \left({\lambda }^{-{\alpha }_{x}}t, {\lambda }^{-{\alpha }_{x}}x, {\lambda }^{-\left({\alpha }_{x}+\beta \right)}y\right)\) (see Appendix II).

In the case of an isotropic system, when \(\beta =0\), the Euler–Lagrange equation becomes:

$$\ddot{\varphi }={\varphi }_{xx}+{\varphi }_{yy}$$
(16)

We see that the Lagrangian density of Eq. (14) leads to an equation of motion (15) that reduces to the wave Eq. (16) in the case of an isotropic system. This makes it an intuitive test case.

2.7 Spatial discretisation

For Eq. (15) to be used in the modelling of neural timeseries we must first discretize the partial spatial derivatives. This is necessary because we do not deal with spatially continuous data in neuroimaging, but rather with data collected at a discrete set of points. We therefore make the following standard transformations:

$$\begin{aligned}{\varphi }_{y}&\to \frac{1}{2}\left(\varphi \left(x,y+1\right)-\varphi \left(x,y-1\right)\right),\\{\varphi }_{xx}&\to \varphi \left(x+1,y\right)-2\varphi \left(x,y\right)+\varphi \left(x-1,y\right),\\{\varphi }_{yy}&\to \varphi \left(x,y+1\right)-2\varphi \left(x,y\right)+\varphi \left(x,y-1\right),\end{aligned}$$
(17)

where \(\varphi \left(x,y\right)\) is the value of the field at the point \(\left(x,y\right)\) and, e.g., \(\varphi \left(x+1,y\right)\) is the value of the field one ‘step’ in the positive \(x\) direction in the graph from the point \(\left(x,y\right)\).

Applying the transformations to Eq. (15), we obtain:

$$\begin{aligned}\ddot{\varphi }\left(x,y\right)=&\varphi \left(x+1,y\right)-2\varphi \left(x,y\right)+\varphi \left(x-1,y\right)\\&+{\varphi \left(x,y\right)}^{2\beta }\left(\varphi \left(x,y+1\right)-2\varphi \left(x,y\right)+\varphi \left(x,y-1\right)\right)\\&+\beta {\varphi \left(x,y\right)}^{2\beta -1}{\left(\frac{1}{2}\left(\varphi \left(x,y+1\right)-\varphi \left(x,y-1\right)\right)\right)}^{2},\end{aligned}$$
(18)

which we split into two first-order differential equations by defining a new variable to obtain the final form of the equations of motion used in all subsequent forward models and Bayesian model inversions presented in this paper:

$$\begin{array}{c} \dot{\varphi }=\theta \\ \dot{\theta }\left(x,y\right)=\varphi \left(x+1,y\right)-2\varphi \left(x,y\right)+\varphi \left(x-1,y\right)\\+{\varphi \left(x,y\right)}^{2\beta }\left(\varphi \left(x,y+1\right)-2\varphi \left(x,y\right)+\varphi \left(x,y-1\right)\right)\\+\beta {\varphi \left(x,y\right)}^{2\beta -1}{\left(\frac{1}{2}\left(\varphi \left(x,y+1\right)-\varphi \left(x,y-1\right)\right)\right)}^{2} \end{array}$$
(19)

We then use Eq. (19) as the equations of motion for subsequent model inversion with the Statistical Parametric Mapping (SPM) software. The associated observer equation comprises the \(\varphi\) variables – i.e., we assume that the strength of the field \(\varphi \left(x,y\right)\) is what is being measured in the neural timeseries at the \(\left(x,y\right)\) coordinate.

Equation (19) is the basis of the MATLAB code made available for use with forward generative models, as well as with Bayesian model inversion of timeseries of arbitrary dimensionality from any neuroimaging modality.
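To make the forward model concrete, the following minimal MATLAB sketch integrates Eq. (19) on a small grid using a simple Euler scheme. It is illustrative only: the grid size, initial conditions, step size, and boundary handling are assumptions made here for exposition, and the accompanying repository specifies the exact settings used in this paper.

```matlab
% Minimal illustrative sketch: Euler integration of the discretized
% equations of motion in Eq. (19) on a small 2-D grid. All numerical
% settings below are assumptions for exposition only.
nx = 3; ny = 3;                 % grid size (3 x 3, as in the synthetic data)
beta = -3;                      % anisotropy exponent (beta = 0 -> isotropic)
dt = 1e-3;                      % integration time step
nT = 5000;                      % number of integration steps

phi   = 1 + rand(nx, ny);       % initial field values
theta = 1 + rand(nx, ny);       % initial rates of change (phi-dot)

for t = 1:nT
    ddphi = zeros(nx, ny);
    for x = 1:nx
        for y = 1:ny
            % nearest neighbours with mirrored boundaries (an assumption;
            % the repository defines its own boundary conditions)
            xp = min(x + 1, nx); xm = max(x - 1, 1);
            yp = min(y + 1, ny); ym = max(y - 1, 1);

            phixx = phi(xp, y) - 2*phi(x, y) + phi(xm, y);   % discrete phi_xx
            phiyy = phi(x, yp) - 2*phi(x, y) + phi(x, ym);   % discrete phi_yy
            phiy  = (phi(x, yp) - phi(x, ym)) / 2;           % discrete phi_y

            ddphi(x, y) = phixx ...
                + phi(x, y)^(2*beta) * phiyy ...
                + beta * phi(x, y)^(2*beta - 1) * phiy^2;    % Eq. (19)
        end
    end
    theta = theta + dt * ddphi;   % theta-dot = right-hand side of Eq. (19)
    phi   = phi + dt * theta;     % phi-dot   = theta
end
```

Setting beta to zero in this sketch recovers the discrete isotropic wave equation of Eq. (16), which provides a convenient sanity check on any implementation.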

2.8 Synthetic data

We consider a 2D grid of size 3 × 3 pixels, where each of the nine pixels is given different initial conditions and subsequently allowed to evolve according to the equation of motion in Eq. (19). We run two forward models: a) an isotropic case in which \(\beta =0\); and b) an anisotropic case in which \(\beta \ne 0\). Having created these synthetic data, we perform Bayesian model inversion using DEM (Friston et al., 2008), with the prior expectation of the free parameter \(\beta\) set to zero, to infer the latent states and to estimate both the \(\beta\) parameter and the hyperparameters – the components of precision of the random fluctuations on the observation noise and states. Model inversion for any discrete system can be performed on a node-by-node basis, by considering the ways in which the dynamics evolve in the immediate neighbourhood of the node under consideration. When this model is equipped with random fluctuations, one can use standard (variational Laplace) Bayesian model inversion procedures to estimate the exponents for any given timeseries. Following model inversion, we use Bayesian model reduction (Friston et al., 2015; Rosa et al., 2012) to test the evidence for a perfectly isotropic system in which \(\beta =0\), by setting the prior variance of \(\beta\) to zero. We are therefore able to test whether we can correctly identify the ground truth isotropic data (created with \(\beta =0\)) by a higher evidence for the reduced model and, conversely, whether we can correctly identify the ground truth anisotropic data (created with \(\beta \ne 0\)) by a higher evidence for the full model.
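As a schematic of how such an inversion might be specified with SPM’s DEM routines, the sketch below follows the conventions of the spm_DEM demonstration scripts. The helper function aniso_f (implementing Eq. (19)), the data variable y, the precision values, and the prior variance are hypothetical placeholders introduced here for illustration; the exact specification is given in the accompanying repository.

```matlab
% Schematic DEM specification (assumptions throughout; see the
% accompanying repository for the exact settings used in the paper).
n       = 9;                           % 3 x 3 grid -> 9 field values
M(1).x  = [ones(n,1); zeros(n,1)];     % hidden states: [phi; theta] (illustrative)
M(1).f  = @(x,v,P) aniso_f(x,v,P);     % equations of motion, Eq. (19) (hypothetical helper)
M(1).g  = @(x,v,P) x(1:n);             % observer equation: the measured field phi
M(1).pE = 0;                           % prior expectation of beta (isotropy)
M(1).pC = 1;                           % prior variance of beta (illustrative)
M(1).V  = exp(8);                      % observation noise precision (illustrative)
M(1).W  = exp(8);                      % state noise precision (illustrative)
M(2).v  = 0;                           % no exogenous input at the second level
M(2).V  = exp(16);

DEM.M = M;                             % generative model
DEM.Y = y;                             % timeseries (synthetic or empirical; hypothetical variable)
DEM   = spm_DEM(DEM);                  % D, E and M steps (Friston et al., 2008)

% DEM.qP then holds the posterior over beta; Bayesian model reduction
% compares this full model against a reduced model whose prior variance
% on beta is shrunk towards zero (Friston et al., 2015).
```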

2.9 Empirical data

All animal experiments were carried out according to the guidelines of the Veterinary Office of Switzerland following approval by the Cantonal Veterinary Office in Zürich. All murine calcium imaging data were collected as previously reported (Fagerholm et al., 2021; Gallero-Salas et al., 2021; Gilad et al., 2018). As with the synthetic data, we perform Bayesian model inversion to obtain posterior estimates for the \(\beta\) parameter, quantifying the extent to which the timeseries for each pixel deviates from isotropy at \(\beta =0\). We perform this model inversion once for every second pixel \(\left(n=6651\right)\) within each trial \(\left(n=10\right)\), mouse \(\left(n=3\right)\) and condition (\(n=2,\) task and rest). Following model inversion, we average the posteriors for the \(\beta\) parameter across trials to obtain results per mouse and condition, and then average these posteriors once more across mice. Finally, we smooth these averaged images with a 2-D Gaussian kernel with a semi-width of 7 pixels.
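For completeness, a minimal sketch of this post-processing step is given below, assuming the Image Processing Toolbox. The array beta_trials (per-trial posterior maps) and the Gaussian standard deviation are assumptions introduced here; only the semi-width of 7 pixels is fixed by the description above.

```matlab
% Minimal sketch of the trial averaging and smoothing described above
% (beta_trials and the kernel standard deviation are assumptions; only
% the semi-width of 7 pixels is specified in the text).
beta_mouse  = mean(beta_trials, 3);            % average posterior beta over trials
K           = fspecial('gaussian', 2*7 + 1, 3.5);  % 15 x 15 Gaussian kernel, semi-width 7 px
beta_smooth = conv2(beta_mouse, K, 'same');    % smoothed anisotropy map
```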

3 Results

We show the ways in which the synthetic timeseries evolve for the isotropic (Fig. 1A) and anisotropic (Fig. 1B) cases. Each of the regions in the \(3\times 3\) synthetic data is given a different initial value and rate of change, ranging between \(\sim 1\) and \(2\), to initialize dynamics in the absence of external inputs. Exact values of model parameters, boundary conditions, and initial values are listed in the publicly available code. Following model inversion and reduction, we demonstrate proof of principle by showing that there is higher evidence for the ground truth isotropic data having been created with the isotropic model (Fig. 1C) and, conversely, for the ground truth anisotropic data having been created with the anisotropic model (Fig. 1D). The results in Fig. 1C and D remain unchanged when initial conditions and observation noise are varied by \(\sim 10\%\), as commented in the publicly available code. Finally, we show the different degrees of anisotropy in the murine calcium imaging data in rest and task states (Fig. 1E), together with averaged signals (Fig. 1F). Note that the negative posterior \(\beta\) values in Fig. 1E result from the specific choice of model parameters, observation noise, and initial conditions used.

Fig. 1

Synthetic and experimental data. A Synthetic data generated using the isotropic model in Eq. (19) with \(\upbeta =0\). The colours of the wavefronts correspond to pixels in the grid inset top right. The x and y axes show the amplitudes of the wavefronts multiplied by cos(time) and sin(time), respectively. B Synthetic data generated using the anisotropic model in Eq. (19) with \(\upbeta =-3\). The colours of the wavefronts correspond to pixels in the grid inset top right. The x and y axes show the amplitudes of the wavefronts multiplied by cos(time) and sin(time), respectively. C Approximate lower bound on the log model evidence given by the free energy F following Bayesian model reduction for isotropic i and anisotropic a models using the isotropic ground-truth data. Corresponding probabilities p derived from the log evidence are shown in the inset on the right. D Approximate lower bound on the log model evidence given by the free energy F following Bayesian model reduction for isotropic i and anisotropic a models using the anisotropic ground-truth data. Corresponding probabilities p derived from the log evidence are shown in the inset on the left. E Left hemisphere of calcium imaging data collected in three mice (first three rows) in rest (left column) and task (right column) states. The final fourth row shows average values across the three mice. The colour bars indicate the value of the \(\upbeta\) exponent, ranging from isotropic i to increasingly anisotropic a pixels. F Timecourses of normalized signal intensity z averaged across all pixels, with the layout corresponding to that in E, i.e., for each of the three mice (first three rows) across the two states (columns), together with signals averaged across mice (last row)

Overall, there is marked variability in the degree of anisotropy across mice and states. However, the secondary motor cortices show consistently high degrees of anisotropy across mice and states. The generation (Fig. 1A and B) and inversion (Fig. 1C and D) of the synthetic data can be fully reproduced with the accompanying MATLAB code, and the murine calcium imaging data in Fig. 1E and F are made available in a public repository.

4 Discussion

We present a theoretical framework, together with a practical numerical analysis, designed for the estimation of anisotropy in arbitrary timeseries from any connected dynamical system. The basis for this framework rests upon classical Lagrangian field theory applied to scalable (mechanically similar) dynamical systems. Scalability describes a situation in which a system continues to obey the same equation of motion as it changes size: a dynamical system that grows or shrinks will in general produce states that differ from those of the original (unscaled) system, but in a scalable system these new states are still described by the same equation of motion as the original.

It stands to reason that the dynamical evolution of the signals propagating in neural systems possesses some form of scalability, given that evolutionary processes add new neuroanatomy to existing structures, i.e., the same basic architecture extends itself whilst maintaining its information processing capabilities (Douglas & Martin, 1991; Hilgetag et al., 2000; Markov et al., 2013). Similarly, a neural structure changes scale during development, whilst allowing information processing to remain intact. It is therefore of interest to develop tools that allow for the analysis of scalable systems. We therefore pose the following question: given that neural systems possess some form of scalability, what are the consequences thereof, and what further questions present themselves? It is in this spirit that we present a formalism that applies to any scalable dynamical system and that is sufficiently general to accommodate the evolution of any (driven or non-driven) system in any number of spatial dimensions.

With reference to the murine calcium imaging results, we note the following three main findings: a) there is high variability in anisotropy across mice and states, which may be due to the low number of trials and/or to the fact that anisotropy values may vary from trial to trial depending on neural activity; b) there is no clear difference in anisotropy across rest/task states; and c) there are consistently high levels of anisotropy in the secondary motor cortex (MOs) across mice and states, which may reflect the non-homogeneous nature of local networks.

We construct a methodology that is set within the Bayesian model inversion scheme of Dynamic Causal Modelling (DCM). This means that, using the timeseries measured with any (e.g., neuroimaging) modality, we obtain posterior estimates of spatial stretch factors – one for each spatial dimension. The discrepancy between these stretch factors then directly provides an estimate of the anisotropy at every voxel in the neuroimaging data. We can gain an intuitive understanding of what these measures mean by imagining ourselves standing at a given node in a neural system. If the system is isotropic then, as we look in every direction – up-down, left-right, and back-forward – we will see no difference in the ways in which the signals evolve in time along these different directions. On the other hand, if the system is anisotropic then we will see a difference along our lines of sight along the different axes – and the greater this difference, the greater the degree of anisotropy. We demonstrate proof of principle by generating synthetic data using a special case of the generalised Lagrangian that reduces to the wave equation in the limit of the perfectly isotropic case. Using Bayesian model inversion and reduction, we show that we can correctly identify which of the two models (anisotropic and isotropic) was used to generate each dataset, thus demonstrating the discriminatory ability of the proposed methodology.

It should be noted that, in addition to the Lagrangian framework presented here, the Martin–Siggia–Rose–DeDominicis–Janssen (MSRDJ) formalism offers an alternative route by which stochastic differential equations (e.g., the Langevin equation) can be solved probabilistically (Martin et al., 1973). In contrast to our methodology, in which we examine a single solution of a stochastic differential equation, the MSRDJ framework encompasses a probability distribution over possible solutions and the ways in which they evolve in time (Chow & Buice, 2015). Furthermore, whereas resting-state brain dynamics are usually modelled as equilibrium fluctuations about a steady state (i.e., a stable point attractor), here we employ a limit-cycle model. The choice of model type (e.g., fixed point vs. limit cycle) naturally depends on the nature of the system under consideration.

An empirically determined estimate of anisotropy could be informative in imaging neuroscience, as it facilitates a direct empirical measurement of how sub-structures within the brain grow (under the assumption of scalability). This provides a new way of assessing how anatomy changes both across an evolutionary timeline and across the lifespan of an individual organism. It is our hope that the theory, methodology, and accompanying tools will allow these kinds of questions to be addressed by researchers, and that this will lead to a clearer understanding of the spatial dependencies, growth, and development of neural systems.