# Stochastic neural field equations: a rigorous footing

## Abstract

We here consider a stochastic version of the classical neural field equation that is currently actively studied in the mathematical neuroscience community. Our goal is to present a well-known rigorous probabilistic framework in which to study these equations in a way that is accessible to practitioners currently working in the area, and thus to bridge some of the cultural/scientific gaps between probability theory and mathematical biology. In this way, the paper is intended to act as a reference that collects together relevant rigorous results about notions of solutions and well-posedness, which, although they may be straightforward to experts in SPDEs, are largely unknown in the neuroscientific community and difficult to find in a very large body of literature. Moreover, in the course of our study we provide some new specific conditions on the parameters appearing in the equation (in particular on the neural field kernel) that guarantee the existence of a solution.

### Keywords

Stochastic neural field equations · Spatially correlated noise · Multiplicative noise · Stochastic integro-differential equation · Existence and uniqueness

### Mathematics Subject Classification

60H20 · 60H30 · 92C20

## 1 Introduction

Neural field equations have been widely used to study spatiotemporal dynamics of cortical regions. Arising as continuous spatial limits of discrete models, they provide a step towards an understanding of the relationship between the macroscopic spatially structured activity of densely populated regions of the brain, and the underlying microscopic neural circuitry. The discrete models themselves describe the activity of a large number of individual neurons with no spatial dimensions. Such neural mass models have been proposed by Lopes da Silva et al. (1974, 1976) to account for oscillatory phenomena observed in the brain, and were later put on a stronger mathematical footing in the study of epileptic-like seizures in Jansen and Rit (1995). When taking the spatial limit of such discrete models, one typically arrives at a nonlinear integro-differential equation, in which the integral term can be seen as a nonlocal interaction term describing the spatial distribution of synapses in a cortical region. Neural field models build on the original work of Wilson and Cowan (1972, 1973) and Amari (1977), and are known to exhibit a rich variety of phenomena including stationary states, traveling wave fronts, pulses and spiral waves. For a comprehensive review of neural field equations, including a description of their derivation, we refer to Bressloff (2012).

There are in fact two distinct approaches to defining and interpreting the quantity (1.2), both of which allow one to build up a theory of stochastic *partial* differential equations (SPDEs). Although (1.1) is not strictly an SPDE (since there is no derivative with respect to the spatial variable), both approaches provide a rigorous underlying theory upon which to base a study of such equations.

The first approach generalizes the theory of stochastic processes in order to give sense to solutions of SPDEs as random processes that take their values in a Hilbert space of functions [as presented by Da Prato and Zabczyk in (1992) and more recently by Prévôt and Röckner in (2007)]. With this approach, the quantity (1.2) is interpreted as a *Hilbert space-valued* integral i.e. “\(\int \mathbf {B}(Y(t))dW(t)\)”, where \((Y(t))_{t\ge 0}\) and \((W(t))_{t\ge 0}\) take their values in a Hilbert space of functions, and \(\mathbf {B}(Y(t))\) is an operator between Hilbert spaces (depending on \(\sigma \)). The second approach is that of Walsh [as described in Walsh (1986)], which, in contrast, takes as its starting point a PDE with a random and highly irregular “white-noise” term. This approach develops integration theory with respect to a class of random measures, so that (1.2) can be interpreted as a random field in both \(t\) and \(x\).

In the theory of SPDEs, both approaches have advantages and disadvantages. This is also the case with regard to the stochastic neural field Eq. (1.1), as described in the conclusion below (Sect. 5), and it is for this reason that we here review both approaches. Taking the functional approach of Da Prato and Zabczyk is perhaps more straightforward for those with knowledge of stochastic processes, and the existing general results can be applied more directly in order to obtain, for example, existence and uniqueness. This was the path taken in Kuehn and Riedler (2014) where the emphasis was on large deviations, though in a much less general setup than we consider here (see Remark 2.3). However, it can certainly be argued that solutions constructed in this way may be “non-physical”, since the functional theory tends to ignore any spatial regularity properties (solutions are typically \(L^2\)-valued in the spatial direction). We argue that the approach of Walsh is more suited to looking for “physical” solutions that are at least continuous in the spatial dimension. A comparison of the two approaches in a general setting is presented in Dalang and Quer-Sardanyons (2011) or Jetschke (1982, 1986), and in our setting in Sect. 4 below. Our main conclusion is that in typical cases of interest for practitioners, the approaches are equivalent (see Example 4.2), but one or the other may be more suited to a particular need.

To reiterate, the main aim of this article is to present a review of an existing theory that puts the study of stochastic neural field equations on a rigorous mathematical footing, in a way that is accessible to readers unfamiliar with stochastic partial differential equations. As a by-product we will be able to give general conditions on the functions \(G\), \(\sigma \) and \(w\) that, as far as we know, do not appear anywhere else in the literature, and that guarantee the existence of a solution to (1.1) in some sense. Moreover, these conditions are weak enough to be satisfied for all typical choices of functions made by practitioners (see Sects. 2.6, 2.7 and 2.8). By collecting all these results in a single place, we hope this will provide a reference for practitioners in future works.

The layout of the article is as follows. We first present in Sect. 2 the necessary material in order to consider the stochastic neural field Eq. (1.1) as an evolution equation in a Hilbert space. This involves introducing the notion of a \(Q\)-Wiener process taking values in a Hilbert space and stochastic integration with respect to \(Q\)-Wiener processes. A general existence result from Da Prato and Zabczyk (1992) is then applied in Sect. 2.5 to yield a unique solution to (1.1) interpreted as a Hilbert space valued process. The second part of the paper switches track, and describes Walsh’s theory of stochastic integration (Sect. 3.1), with a view to giving sense to a solution to (1.1) as a random field in both time and space. To avoid dealing with distribution-valued solutions, we in fact consider a Gaussian noise that is smoothed in the spatial direction (Sect. 3.2), and show that, under some weak conditions, the neural field equation driven by such a smoothed noise has a unique solution in the sense of Walsh that is continuous in both time and space (Sect. 3.3). We finish with a comparison of the two approaches in Sect. 4, and summarize our findings in a conclusion (Sect. 5).

**Notation:** Throughout the article \((\varOmega , {\mathcal {F}} , {\mathbb P} )\) will be a probability space, and \(L^2(\varOmega , \mathcal {F}, \mathbb {P})\) will be the space of square-integrable random variables on \((\varOmega , \mathcal {F}, \mathbb {P})\). We will use the standard notation \(\mathcal {B}(\mathcal {T})\) to denote the Borel \(\sigma \)-algebra on \(\mathcal {T}\) for any topological space \(\mathcal {T}\). The Lebesgue space of \(p\)-integrable (with respect to the Lebesgue measure) functions over \({\mathbb {R}} ^N\) for \(N\in \mathbb {N} = \{1, 2, \dots \}\) will be denoted by \(L^p({\mathbb {R}} ^N)\), \(p\ge 1\), as usual, while \(L^p({\mathbb {R}} ^N, \rho )\), \(p\ge 1\), will be the Lebesgue space weighted by a measurable function \(\rho :{\mathbb {R}} ^N\rightarrow {\mathbb {R}} ^+\).

## 2 Stochastic neural field equations as evolution equations in Hilbert spaces

**Notation:** In this section we will also need the following basic notions from functional analysis. Let \(U\) and \(H\) be two separable Hilbert spaces. We will write \(L_0(U, H)\) to denote the space of all bounded linear operators from \(U\) to \(H\) with the usual norm

(with the shorthand \(L_0(H)\) when \(U=H\)), and \(L_2(U, H)\) for the space of all Hilbert-Schmidt operators from \(U\) to \(H\), i.e. those bounded linear operators \(B:U \rightarrow H\) such that

### 2.1 Hilbert space valued \(Q\)-Wiener processes

The purpose of this section is to provide a basic understanding of how we can generalize the idea of an \({\mathbb {R}} ^d\)-valued Wiener process to one that takes its values in an infinite dimensional Hilbert space, which for concreteness we fix to be \(U=L^2({\mathbb {R}} ^N)\).

In the finite dimensional case, it is well-known that \({\mathbb {R}} ^d\)-valued Wiener processes are characterized by their \(d\times d\) covariance matrices, which are symmetric and non-negative. The basic idea is that in the infinite dimensional setup the covariance matrices are replaced by covariance *operators*, which are linear, non-negative, symmetric and bounded.
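As an illustration (not taken from the paper), a \(Q\)-Wiener process with trace-class covariance \(Q\) can be sampled from its Karhunen–Loève-type expansion \(W(t) = \sum _k \sqrt{\lambda _k}\, \beta _k(t)\, e_k\), where \((e_k)\) are eigenfunctions of \(Q\), \((\lambda _k)\) the corresponding eigenvalues, and \((\beta _k)\) independent real Brownian motions. The sketch below truncates this expansion on the stand-in space \(L^2([0,1])\) with a sine basis and \(\lambda _k = k^{-2}\); all numerical choices are ours, purely for illustration.

```python
import numpy as np

def sample_q_wiener(t_grid, x_grid, n_modes=50, seed=0):
    """Sample a truncated Q-Wiener process W(t) = sum_k sqrt(lam_k) beta_k(t) e_k.

    Stand-in setting: U = L^2([0,1]) with sine basis e_k(x) = sqrt(2) sin(k pi x)
    and eigenvalues lam_k = k^{-2}, so that Tr(Q) = sum_k lam_k < infinity.
    """
    rng = np.random.default_rng(seed)
    dt = np.diff(t_grid, prepend=t_grid[0])
    # Independent scalar Brownian motions beta_k, one per mode.
    increments = rng.normal(size=(len(t_grid), n_modes)) * np.sqrt(dt)[:, None]
    betas = np.cumsum(increments, axis=0)
    betas[0] = 0.0  # W(0) = 0
    ks = np.arange(1, n_modes + 1)
    lam = ks ** -2.0
    basis = np.sqrt(2.0) * np.sin(np.pi * np.outer(ks, x_grid))  # (n_modes, n_x)
    return (betas * np.sqrt(lam)) @ basis  # (n_t, n_x)

t_grid = np.linspace(0.0, 1.0, 201)
x_grid = np.linspace(0.0, 1.0, 101)
W = sample_q_wiener(t_grid, x_grid)
print(W.shape)  # (201, 101)
```

Because \(\mathrm {Tr}(Q) = \sum _k \lambda _k < \infty \), truncating the sum gives a good approximation of the full process in mean square.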

Let us compute the covariance operator of \(W\). An easy computation based on (2.2) and the elementary properties of the standard real-valued Brownian motion shows that

### 2.2 Stochastic integration with respect to \(Q\)-Wiener processes

The second point is that we would like to be able to define stochastic integration with respect to these Hilbert space valued Wiener processes. In particular we must determine for which integrands this can be done [exactly as in Da Prato and Zabczyk (1992)].

As above, let \(U = L^2({\mathbb {R}} ^N)\), \(Q:U \rightarrow U\) a non-negative, symmetric bounded linear operator on \(U\) such that \(\mathrm {Tr}(Q) <\infty \), and \(W = (W(t))_{t\ge 0}\) be a \(Q\)-Wiener process on \(U\) [given by (2.2)].

*Example 2.1*

at time \(s\), and if

### 2.3 The stochastic neural field equation: interpretation in language of Hilbert space valued processes

- \(\mathbf {B}: H \rightarrow L_0(U, H)\) is such that$$\begin{aligned} \Vert \mathbf {B}(g) - \mathbf {B}(h)\Vert _{L_0(U, H)} \le C_\sigma \Vert g-h\Vert _{U}, \quad g, h \in L^2({\mathbb {R}} ^N, \rho ), \end{aligned}$$where \(U= L^2({\mathbb {R}} ^N)\) and \(H=L^2({\mathbb {R}} ^N, \rho )\) for notational simplicity;
- \(G:{\mathbb {R}} \rightarrow {\mathbb {R}} \) is bounded and globally Lipschitz, i.e. such that there exists a constant \(C_G\) with \(\sup _{a\in {\mathbb {R}} }|G(a)| \le C_G\) and$$\begin{aligned} |G(a) - G(b)| \le C_G|a- b|, \qquad \forall a, b \in {\mathbb {R}} . \end{aligned}$$Typically the nonlinear gain function \(G\) is taken to be a sigmoid function, for example \(G(a) = (1+e^{-a})^{-1}\), \(a\in {\mathbb {R}} \), which certainly satisfies this assumption.
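For the logistic sigmoid the Lipschitz bound can be seen directly: \(G'(a) = G(a)(1-G(a)) \le 1/4\), so \(G\) is bounded by \(1\) and globally Lipschitz with constant \(1/4\). A quick numerical sanity check (ours, purely illustrative):

```python
import numpy as np

def G(a):
    """Logistic sigmoid gain function G(a) = 1 / (1 + exp(-a))."""
    return 1.0 / (1.0 + np.exp(-a))

# G is bounded by 1, and since G'(a) = G(a)(1 - G(a)) <= 1/4, it is globally
# Lipschitz with constant 1/4 (any C_G >= 1 then serves for both bounds in
# the assumption above).
a = np.linspace(-50.0, 50.0, 100001)
slopes = np.abs(np.diff(G(a)) / np.diff(a))  # secant slopes, bounded by sup |G'|
print(G(a).max() <= 1.0, slopes.max() <= 0.25 + 1e-9)
```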

### 2.4 Discussion of conditions on the neural field kernel \(w\) and \(\rho \)

Of particular interest to us are the conditions on the neural field kernel \(w\) which will allow us to prove existence and uniqueness of a solution to (2.5) by quoting a standard result from Da Prato and Zabczyk (1992).

**C1**) that the operator \(\mathbf {F}\) is stable on the space \(L^2({\mathbb {R}} ^N)\). For instance, suppose that in fact \(G \equiv 1\) (so that \(G\) is trivially globally Lipschitz). Then for \(h\in L^2({\mathbb {R}} ^N)\) (and assuming \(w\ge 0\)) we have that

**C1**) holds, while (2.8) is not finite. For example in the case \(N=1\) we could take \(w(x, y) = (1+|x|)^{-1}(1+|y|)^{-1}\) for \(x, y \in {\mathbb {R}} \). In such a case the Eq. (2.5) is ill-posed: if \(Y(t)\in L^2({\mathbb {R}} )\) then \(F(t, Y(t))\) is not guaranteed to be in \(L^2({\mathbb {R}} )\), which in turn implies that \(Y(t)\not \in L^2({\mathbb {R}} )\)!

**C1**) and (**C2**) hold, then we have to work instead in a weighted space \(L^2({{\mathbb {R}} ^N}, \rho )\), in order to ensure that \(\mathbf {F}\) is stable. In this case, we will see that if

Condition \((\mathbf C1' )\) is in fact a non-trivial eigenvalue problem, and it is not straightforward to see whether it is satisfied for a given function \(w\). However, we chose to state the theorem below in a general way, and then provide some important examples of when it can be applied.

We will discuss these abstract conditions from a modeling point of view below. However, we first present the existence and uniqueness result.

### 2.5 Existence and uniqueness

**Theorem 2.2**

- (i) satisfies conditions (**C1**) and (**C2**); or
- (ii) satisfies conditions (**C1’**) and (**C2’**).

**C1’**).

*Proof*

We simply check the hypotheses of Da Prato and Zabczyk (1992, Theorem 7.4) (a standard reference in the theory) in both cases (i) and (ii). This involves showing that (a) \(\mathbf {F}: L^2({{\mathbb {R}} ^N},\rho _w) \rightarrow L^2({{\mathbb {R}} ^N},\rho _w)\); (b) the operator \(\mathbf {B}(h)\in L_2(Q^\frac{1}{2}(U), H)\), for all \(h\in H\) [recalling that \(U=L^2({\mathbb {R}} ^N)\) and \(H=L^2({\mathbb {R}} ^N, \rho )\)]; and (c) \(\mathbf {F}\) and \(\mathbf {B}\) are globally Lipschitz.

**C2**). Similarly in case (ii) for any \(h\in L^2({{\mathbb {R}} ^N}, \rho _w)\)

(b): To show (b) in both cases, we know by Example 2.1 that for \(h\in H\), \(\mathbf {B}(h)\in L_2(Q^\frac{1}{2}(U), H)\) whenever \(\mathbf {B}(h)\in L_0(U, H)\), which is true by assumption.

**C1**), \(\mathbf {F}\) is indeed Lipschitz.

**C1’**), we see that

*Remark 2.3*

(Large Deviation Principle) The main focus of Kuehn and Riedler (2014) was a large deviation principle for the stochastic neural field Eq. (2.5) with small noise, but in a less general situation than we consider here. In particular, the authors only considered the neural field equation driven by a simple additive noise, white in both space and time.

We would therefore like to remark that in our more general case, and under much weaker conditions than those imposed in Kuehn and Riedler (2014) (our conditions are for example satisfied for a connectivity function \(w\) that is homogeneous, as we will see in Example 2 below), an LDP result for the solution identified by the above theorem still holds and can be quoted from the literature. Indeed, such a result is presented in Peszat (1994, Theorem 7.1). The main conditions required for the application of this result have essentially already been checked above (global Lipschitz properties of \(\mathbf {F}\) and \(\mathbf {B}\)), and it thus remains to check conditions (E.1)–(E.4) as they appear in Peszat (1994). In fact these are trivialities, since the strongly continuous contraction semigroup \(S(t)\) is generated by the identity in our case.

### 2.6 Discussion of conditions on \(w\) and \(\rho \) in practice

**C1**) nor (**C2**) of the above theorem are satisfied, and so we instead must try to show that (**C1’**) is satisfied [(**C2’**) trivially holds], and look for solutions in a weighted \(L^2\) space. This is done in the second example below.

Depending on the species, the long range connections feature an anisotropy, meaning that they tend to align themselves with the preferred orientation at \(x\). One way to take this into account is to introduce the function \(A(\chi ,x)=\exp [-((1-\chi )^2x_1^2+x_2^2)/2\beta _{lr}^2]\), where \(x=(x_1,x_2)\), \(\chi \in [0,1)\), and \(\beta _{lr}\) is the extent of the long range connectivity. When \(\chi =0\) there is no anisotropy (as for the macaque monkey for example) and when \(\chi \in (0,1)\) there is some anisotropy (as for the tree shrew, for example). Let \(R_\alpha \) represent the rotation by angle \(\alpha \) around the origin. The long range neural field kernel is then defined by (Baker and Cowan 2009; Bressloff 2003)
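The anisotropy factor and the rotation \(R_\alpha \) are simple to compute. The sketch below is ours, with an illustrative value of \(\beta _{lr}\), and does not reproduce the full long range kernel of Baker and Cowan (2009) or Bressloff (2003); it only evaluates \(A(\chi ,x)\) and the planar rotation:

```python
import numpy as np

def A(chi, x, beta_lr=1.0):
    """Anisotropy factor A(chi, x) = exp(-((1-chi)^2 x1^2 + x2^2) / (2 beta_lr^2))."""
    x1, x2 = x[..., 0], x[..., 1]
    return np.exp(-(((1.0 - chi) ** 2) * x1 ** 2 + x2 ** 2) / (2.0 * beta_lr ** 2))

def rotate(alpha, x):
    """R_alpha: rotation of the plane by angle alpha about the origin."""
    c, s = np.cos(alpha), np.sin(alpha)
    R = np.array([[c, -s], [s, c]])
    return x @ R.T

x = np.array([1.0, 0.5])
print(A(0.0, x))                      # chi = 0: both coordinates weighted equally
print(A(0.5, rotate(np.pi / 4, x)))  # anisotropic factor along a rotated direction
```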

It is also important to mention the role of \(\rho _w\) from a modeling perspective. The first point is that in the case where \(w\) is homogeneous, it is very natural to look for solutions that live in \(L^2({\mathbb {R}} ^N, \rho )\) for some \(\rho \in L^1({{\mathbb {R}} ^N})\), rather than in \(L^2({\mathbb {R}} ^N)\). This is because in the deterministic case (see Ermentrout and McLeod 1993), solutions of interest are of the form of traveling waves, which are constant at \(\infty \), and thus are not integrable.

Moreover, we emphasize that in Theorem 2.2 and the examples in the next section we identify a single \(\rho _w\in L^1({{\mathbb {R}} ^N})\) so that the standard existence result of Da Prato and Zabczyk (1992) can be directly applied through Theorem 2.2. We do not claim that this is the only weight \(\rho \) for which the solution can be shown to exist in \(L^2({\mathbb {R}} ^N, \rho )\) (see also Example 2 below).

*Remark 2.4*

If we replace the spatial coordinate space \({{\mathbb {R}} ^N}\) by a bounded domain \(\mathcal {D}\subset {{\mathbb {R}} ^N}\), so that the neural field Eq. (2.5) describes the activity of a neuron found at position \(x\in \mathcal {D}\), then checking the conditions as done in Theorem 2.2 becomes rather trivial (under appropriate boundary conditions). Indeed, by doing this one can see that there exists a unique \(L^2(\mathcal {D})\)-valued solution to (2.5) under the condition (**C2’**) only (with \({{\mathbb {R}} ^N}\) replaced by \(\mathcal {D}\)). Although working in a bounded domain seems more physical (since any physical section of cortex is clearly bounded), the unbounded case is still often used, see Bressloff and Webber (2012) or the review Bressloff (2012), and is mathematically more interesting. The problem in passing to the unbounded case stems from the fact that the nonlocal term in (2.5) naturally ‘lives’ in the space of bounded functions, while according to the theory the noise naturally lives in an \(L^2\) space. These are not compatible when the underlying space is unbounded.

### 2.7 Discussion of the noise term in (2.5)

An obvious question is then for which choices of \(\sigma \) and \(\varphi \) can we apply the above results? In particular we need to check that \(\mathbf {B}(h)\) is a bounded linear operator from \(L^2({\mathbb {R}} ^N)\) to \(L^2({\mathbb {R}} ^N, \rho )\) for all \(h\in L^2({\mathbb {R}} ^N, \rho )\), and that \(\mathbf {B}\) is Lipschitz (assuming as usual that \(\rho \in L^1({\mathbb {R}} ^N)\)).

of Da Prato and Zabczyk (1992), \((X(t))_{t\ge 0}\) is Gaussian with mean zero and

We conclude that (2.14) is exactly the rigorous interpretation of the noise described in Bressloff and Webber (2012), when interpreting a solution to the stochastic neural field equation as a process taking values in \(L^2({{\mathbb {R}} ^N}, \rho _w)\).

*Remark 2.5*

### 2.8 Examples

As mentioned we now present two important cases where the conditions (**C1’**) and (**C2’**) are satisfied. For convenience, in both cases we in fact show that \( (\mathbf C1' ) \) is satisfied for some \(\rho _w\in L^1({\mathbb {R}} ^N)\) that is also bounded.

*Example 1*: \(|w|\) defines a compact integral operator. Suppose that

- given \(\varepsilon > 0\), there exist \(\delta >0\) and \(R>0\) such that for all \(\theta \in {{\mathbb {R}} ^N}\) with \(|\theta |<\delta \):
  - (i) for almost all \(x\in {{\mathbb {R}} ^N}\),$$\begin{aligned} \int _{{{\mathbb {R}} ^N}\backslash B(0, R)} |w(x, y)| dy < \varepsilon , \quad \int _{{{\mathbb {R}} ^N}}|w(x, y+\theta )-w(x, y)|dy < \varepsilon ; \end{aligned}$$
  - (ii) for almost all \(y\in {{\mathbb {R}} ^N}\),$$\begin{aligned} \int _{{{\mathbb {R}} ^N}\backslash B(0, R)} |w(x, y)| dx < \varepsilon , \quad \int _{{{\mathbb {R}} ^N}}|w(x+\theta , y)-w(x, y)|dx < \varepsilon , \end{aligned}$$

  where \(B(0, R)\) denotes the ball of radius \(R\) in \({{\mathbb {R}} ^N}\) centered at the origin;

- There exists a bounded subset \(\varOmega \subset {{\mathbb {R}} ^N}\) of positive measure such that$$\begin{aligned} \inf _{y \in \varOmega }\int _\varOmega |w(x, y)|dx >0, \,\, \mathrm {or} \,\, \inf _{x \in \varOmega }\int _\varOmega |w(x, y)|dy >0; \end{aligned}$$
- \(w\) satisfies (**C2’**) and moreover$$\begin{aligned} \forall y\in {{\mathbb {R}} ^N}\ (x\mapsto w(x, y))\in L^1({{\mathbb {R}} ^N}), \,\, \mathrm {and}\,\, \sup _{y\in {{\mathbb {R}} ^N}}\Vert w(\cdot , y)\Vert _{L^1({{\mathbb {R}} ^N})} < \infty . \end{aligned}$$

**C1’**) so that we can apply Theorem 2.2 in this case. Indeed, let \(\mathbb {X}\) be the Banach space of functions in \(L^1({{\mathbb {R}} ^N})\cap L^\infty ({{\mathbb {R}} ^N})\) equipped with the norm \(\Vert \cdot \Vert _\mathbb {X} = \max \{\Vert \cdot \Vert _{L^1({{\mathbb {R}} ^N})}, \Vert \cdot \Vert _{L^\infty ({{\mathbb {R}} ^N})}\}\). Thanks to the last point above, we can well-define the map \(J:\mathbb {X} \rightarrow \mathbb {X}\) by

Note now that the space \(\mathbb {K}\) of positive functions in \(\mathbb {X}\) is a cone in \(\mathbb {X}\) such that \(J(\mathbb {K}) \subset \mathbb {K}\), and that the cone is *reproducing* (i.e. \(\mathbb {X} = \{f - g: f, g \in \mathbb {K}\}\)). If we can show that \(r(J)\) is strictly positive, we can thus finally apply the Krein-Rutman Theorem [see for example (Du (2006), Theorem 1.1)] to see that \(r(J)\) is an eigenvalue with corresponding non-zero eigenvector \(\rho \in \mathbb {K}\).

**C1’**) is satisfied.
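To see the Krein-Rutman mechanism at work numerically, one can discretize the positivity-preserving operator \(J\) on a truncated domain and run a power iteration, which converges to the principal eigenvalue \(r(J)\) together with a positive eigenvector. The sketch below is ours (with an illustrative Gaussian kernel standing in for \(|w|\)) and is not part of the proof:

```python
import numpy as np

def principal_eigenpair(K, n_iter=500):
    """Power iteration for a matrix with non-negative entries.

    For such matrices the iteration converges to the Perron (principal)
    eigenvalue and a componentwise positive eigenvector, the discrete
    analogue of the Krein-Rutman eigenpair (r(J), rho).
    """
    v = np.ones(K.shape[0])
    lam = 0.0
    for _ in range(n_iter):
        v_new = K @ v
        lam = np.linalg.norm(v_new)
        v = v_new / lam
    return lam, v

x = np.linspace(-5.0, 5.0, 201)
dx = x[1] - x[0]
# Illustrative kernel |w(x, y)| = exp(-(x - y)^2 / 2) on the truncated grid;
# multiplying by dx discretizes the integral operator J.
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2) * dx
lam, rho = principal_eigenpair(K)
print(lam > 0.0, np.all(rho > 0.0))
```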

*Example 2*: Homogeneous case. Suppose that

\(w\) is homogeneous, i.e. \(w(x,y) = w(x-y)\) for all \(x, y\in {{\mathbb {R}} ^N}\);

\(w\in L^1({{\mathbb {R}} ^N})\) and is continuous;

\(\int _{{\mathbb {R}} ^N}|x|^{2N}|w(x)|dx <\infty \).

**C1’**) is satisfied in this case so that [since (**C2’**) is trivially satisfied] Theorem 2.2 yields the existence of a unique \(L^2({{\mathbb {R}} ^N}, \rho _w)\)-valued solution to (2.5).

Moreover, Eq. (2.19) shows that \(\hat{\rho }(\xi )\) is in Schwartz space, hence so is \(\rho \), implying that it is bounded. Note that Eq. (2.19) provides a way of explicitly computing one possible function \(\rho _w\) appearing in condition (**C1’**) in the cases where the neural field kernel is homogeneous [for example given by (2.11) and (2.13)]. That particular function can be varied for example by changing the function \(z\) and/or the constant \(\Lambda _w\).

## 3 Stochastic neural fields as Gaussian random fields

In this section we take an alternative approach, and try to give sense to a solution to the stochastic neural field Eq. (1.1) as a random field, using Walsh’s theory of integration.

**C2’**) above and \(L^1\)-Lipschitz continuity), this equation has a unique solution \((t, x)\mapsto Y(t, x)\) that is bounded and continuous in \(x\) and continuously differentiable in \(t\), whenever \(x\mapsto Y(0, x)\) is bounded and continuous (Potthast 2010).

*distribution* which, when integrated against a test function \(h\in L^2({\mathbb {R}} ^+\times {{\mathbb {R}} ^N})\)

*distribution*-valued in the spatial direction, which is rather unsatisfactory. Indeed, consider the extremely simple linear case when \(G\equiv 0\) and \(\sigma \equiv 1\), so that (3.2) reads

*stochastic heat equation*). In such a case, the semigroup generated by the second order differential operator can be enough to smooth the space-time white noise in the spatial direction, leading to solutions that are continuous in both space and time [at least when the spatial dimension is \(1\)—see for example Pardoux (2007, Chapter 3) or Walsh (1986, Chapter 3)].

However, we argue that it is not worth developing this theory here, since distribution-valued solutions are of little interest physically. It is for this reason that we instead look for other types of random noise to add to the deterministic Eq. (3.1), in particular noise that is correlated in space, producing solutions that are real-valued random fields and at least Hölder continuous in both space and time. In the theory of SPDEs, when the spatial dimension is \(2\) or more, the problem of an equation driven by space-time white noise having no real-valued solution is well known and much studied [again see for example Pardoux (2007, Chapter 3) or Walsh (1986, Chapter 3) for a discussion of this]. To get around the problem, a common approach (Dalang and Frangos 1998; Ferrante and Sanz-Solé 2006; Sanz-Solé and Sarrà 2002) is to consider random noises that are smoother than white noise, namely a Gaussian noise that is white in time but has a smooth spatial covariance. Such random noise is known as either spatially colored or spatially homogeneous white noise. One can then formulate conditions on the covariance function to ensure that real-valued Hölder continuous solutions to the specific SPDE exist.

It should also be mentioned, as remarked in Dalang and Frangos (1998), that in trying to model physical situations, there is some evidence that white-noise smoothed in the spatial direction is more natural, since spatial correlations are typically of a much larger order of magnitude than time correlations.

In the stochastic neural field case, since we have no second order differential operator, our solution will only ever be as smooth as the noise itself. We therefore look to add a noise term to (3.1) that is at least Hölder continuous in the spatial direction instead of pure white noise, and then proceed to look for solutions to the resulting equation in the sense of Walsh.

The section is structured as follows. First we briefly introduce Walsh’s theory of stochastic integration, for which the classical reference is Walsh (1986). This theory will be needed to well-define the stochastic integral in our definition of a solution to the neural field equation. We then introduce the spatially smoothed space-time white noise that we will consider, before finally applying the theory to analyze solutions of the neural field equation driven by this spatially smoothed noise under certain conditions.
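To fix ideas, spatially smoothed space-time white noise can be simulated on a truncated grid by convolving discretized white-noise increments with the smoothing function \(\varphi \). We assume here the standard construction \(W^\varphi (t, x) = \int \varphi (x-y)\, W_t(dy)\) (made precise in Sect. 3.2); all discretization choices in the sketch are ours, for illustration:

```python
import numpy as np

# Sketch (on a truncated, discretized domain) of spatially smoothed space-time
# white noise W^phi(t, x) = int phi(x - y) W_t(dy), with a Gaussian bump phi
# chosen purely for illustration.
rng = np.random.default_rng(1)
x = np.linspace(-10.0, 10.0, 401)
dx = x[1] - x[0]
n_steps, dt = 200, 0.01

phi = np.exp(-x ** 2)  # smoothing kernel phi in L^2(R)

# Discretized white-noise increment density: independent N(0, dt/dx) per cell,
# so that the convolution sum (times dx) approximates the measure integral.
xi = rng.normal(scale=np.sqrt(dt / dx), size=(n_steps, x.size))
increments = np.array([np.convolve(row, phi, mode="same") * dx for row in xi])
W_phi = np.cumsum(increments, axis=0)  # W^phi(t_k, x_j): rough in t, smooth in x
print(W_phi.shape)  # (200, 401)
```

Each time slice of `W_phi` is as smooth as \(\varphi \) itself, while in time the paths behave like Brownian motion, which is exactly the regularity established in Lemmas 3.3 and 3.4.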

### 3.1 Walsh’s stochastic integral

We will not go into the details of the construction of Walsh’s stochastic integral, since a very nice description is given by D. Khoshnevisan in Dalang et al. (2009) [see also Walsh (1986)]. Instead we present the bare essentials needed in the following sections.


*white noise* on \({\mathbb {R}} ^+\times {{\mathbb {R}} ^N}\). We then define the *white noise process* \(W{:=}(W_t(A))_{t\ge 0, A\in \mathcal {B}({{\mathbb {R}} ^N})}\) by

at time \(t\) given \((W_s(A))_{s\le t, A\in {\mathcal {B}}({{\mathbb {R}} ^N})}\). Then let \(\mathfrak {P}_W\) be the set of all such functions \(f\) for which \(\Vert f\Vert _{W} <\infty \). The point is that this space forms the set of integrands that can be integrated against the white noise process according to Walsh’s theory.

Indeed, we then have the following theorem (Walsh 1986, Theorem 2.5).

**Theorem 3.1**

The following inequality will also be fundamental:

**Theorem 3.2**

### 3.2 Spatially smoothed space-time white noise

The regularity in time of this process is the same as that of a Brownian path:

**Lemma 3.3**

For any \(x\in {{\mathbb {R}} ^N}\), the path \(t\mapsto W^\varphi (t, x)\) has an \(\eta \)-Hölder continuous modification for any \(\eta \in (0, 1/2)\).

*Proof*

More importantly, if we impose some (very weak) regularity on \(\varphi \) then \(W^\varphi \) inherits some spatial regularity:

**Lemma 3.4**

*Proof*

*Remark 3.5*

The condition (3.8) with \(\alpha =1\) is true if and only if the function \(\varphi \) is in the Sobolev space \(W^{1, 2}({{\mathbb {R}} ^N})\) (Brezis 2010, Proposition 9.3).

When \(\alpha <1\) the set of functions \(\varphi \in L^2({{\mathbb {R}} ^N})\) which satisfy (3.8) defines a Banach space denoted by \(N^{\alpha , 2}({{\mathbb {R}} ^N})\), known as the Nikolskii space. This space is closely related to the more familiar fractional Sobolev space \(W^{\alpha , 2}({{\mathbb {R}} ^N})\), though they are not identical. We refer to Simon (1990) for a detailed study of such spaces and their relationships. An example of when (3.8) holds with \(\alpha =1/2\) is found by taking \(\varphi \) to be an indicator function; in this way we see that (3.8) is a rather weak condition.
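To make the indicator example concrete, here is the elementary computation (ours, for illustration) for \(N=1\) and \(\varphi = \mathbb {1}_{[0,1]}\): for \(0 < z < 1\),

$$\begin{aligned} \Vert \varphi - {\varvec{\tau }}_z(\varphi )\Vert ^2_{L^2({\mathbb {R}} )} = \int _{{\mathbb {R}} } |\mathbb {1}_{[0,1]}(y) - \mathbb {1}_{[z, 1+z]}(y)|^2 dy = 2z, \end{aligned}$$

so that \(\Vert \varphi - {\varvec{\tau }}_z(\varphi )\Vert _{L^2({\mathbb {R}} )} = \sqrt{2}\,|z|^{1/2}\) (and similarly for \(-1<z<0\)), while for \(|z|\ge 1\) the two indicators have disjoint supports and the left-hand side equals \(\sqrt{2} \le \sqrt{2}\,|z|^{1/2}\). Thus (3.8) holds with \(\alpha = 1/2\) and \(C_\varphi = \sqrt{2}\).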

### 3.3 The stochastic neural field equation driven by spatially smoothed space-time white noise

- \(\sigma :{\mathbb {R}} \rightarrow {\mathbb {R}} \) is globally Lipschitz [exactly as in (2.15)] i.e. there exists a constant \(C_\sigma \) such that$$\begin{aligned} |\sigma (a) - \sigma (b)| \le C_\sigma |a-b|, \ \mathrm {and} \ \ |\sigma (a)| \le C_\sigma (1 + |a|), \quad \forall a, b\in {\mathbb {R}} ; \end{aligned}$$
- \(G:{\mathbb {R}} \rightarrow {\mathbb {R}} \) is bounded and globally Lipschitz (exactly as above) i.e. such that there exists a constant \(C_G\) with \(\sup _{a\in {\mathbb {R}} }|G(a)| \le C_G\) and$$\begin{aligned} |G(a) - G(b)| \le C_G|a- b|, \quad \forall a, b \in {\mathbb {R}} . \end{aligned}$$

**Definition 3.6**

**C1**) and (**C2**) or (**C1’**) and (**C2’**) to be satisfied. The difficulty was to keep everything well-behaved in the Hilbert space \(L^2({{\mathbb {R}} ^N})\) (or \(L^2({{\mathbb {R}} ^N}, \rho )\)). However, when looking for solutions in the sense of random fields \((Y(t, x))_{t\ge 0, x\in {{\mathbb {R}} ^N}}\) such that (3.10) is satisfied, such restrictions are no longer needed, principally because we no longer have to concern ourselves with the behavior in space at infinity. Indeed, in this section we simply work with the condition (**C2’**), i.e. that

**Theorem 3.7**

**C2’**). Then there exists an almost surely unique predictable random field \((Y(t, x))_{t\ge 0, x\in {{\mathbb {R}} ^N}}\) which is a solution to (3.9) in the sense of Definition 3.6 such that

*Proof*

The proof proceeds in a classical way, but where we are careful to interpret all stochastic integrals as described in Sect. 3.1, and so we provide the details.

*Uniqueness*: Suppose that \((Y(t, x))_{t\ge 0, x\in {{\mathbb {R}} ^N}}\) and \((Z(t, x))_{t\ge 0, x\in {{\mathbb {R}} ^N}}\) are both solutions to (3.9) in the sense of Definition 3.6. Let \(D(t, x) = Y(t, x) - Z(t, x)\) for \(x\in {{\mathbb {R}} ^N}\) and \(t\ge 0\). Then we have

*Existence*: Let \(Y_0(t, x) = Y_0(x)\). Then define iteratively for \(n\in \mathbb {N}_0\), \(t\ge 0\), \(x\in {{\mathbb {R}} ^N}\),
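The Picard scheme is easy to visualize numerically for the deterministic part of the equation (the stochastic convolution term is handled by the Walsh integral of Sect. 3.1 and is omitted here). The discretization, kernel and gain below are our illustrative choices, not those of the paper:

```python
import numpy as np

# Picard iteration for the deterministic part of the neural field equation:
#   Y_{n+1}(t, x) = Y_0(x) + int_0^t [ -Y_n(s, x) + int w(x, y) G(Y_n(s, y)) dy ] ds.
x = np.linspace(-5.0, 5.0, 101)
dx = x[1] - x[0]
t = np.linspace(0.0, 1.0, 101)
dt = t[1] - t[0]

w = np.exp(-np.abs(x[:, None] - x[None, :]))  # illustrative homogeneous kernel
G = lambda a: 1.0 / (1.0 + np.exp(-a))        # sigmoid gain
Y0 = np.exp(-x ** 2)                          # initial condition

def picard_step(Y):
    """One Picard iterate, time-discretized with the left-point rule."""
    drift = -Y + G(Y) @ w.T * dx  # drift field on the (t, x) grid
    return Y0[None, :] + np.cumsum(np.vstack([np.zeros_like(x), drift[:-1]]), axis=0) * dt

Y = np.tile(Y0, (t.size, 1))  # zeroth iterate Y_0(t, x) = Y_0(x)
for n in range(20):
    Y_new = picard_step(Y)
    gap = np.abs(Y_new - Y).max()
    Y = Y_new
print(gap)  # sup-norm gap between successive iterates; contracts toward 0
```

The factorial decay of the gap between successive iterates mirrors the Gronwall-type estimates used in the proof.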

In a very similar way, one can also prove that the solution remains \(L^p\)-bounded whenever the initial condition is \(L^p\)-bounded for any \(p>2\). Moreover, this also allows us to conclude that the solution has time continuous paths for all \(x\in {{\mathbb {R}} ^N}\).

**Theorem 3.8**

If the initial condition has finite \(p\)-moments for all \(p>2\), then \(t\mapsto Y(t, x)\) has an \(\eta \)-Hölder continuous version, for any \(\eta \in (0, 1/2)\) and any \(x\in {{\mathbb {R}} ^N}\).

*Proof*

The proof of the first part of this result uses techniques similar to those in the proof of Theorem 3.7 in order to bound \(\mathbb {E}\left[ \, |Y(t, x)|^p \,\right] \) uniformly in \(t\in [0, T]\) and \(x\in {{\mathbb {R}} ^N}\). In particular, we use the form of \(Y(t, x)\) given by (3.10), Burkholder’s inequality (see Theorem 3.2), Hölder’s inequality and Gronwall’s lemma, as well as the conditions imposed on \(w\), \(\sigma \), \(G\) and \(\varphi \).

#### 3.3.1 Spatial regularity of solution

As mentioned in the introduction to this section, the spatial regularity of the solution \((Y(t, x))_{t\ge 0, x\in {{\mathbb {R}} ^N}}\) to (3.9) is of interest. In particular we would like to find conditions under which it is at least continuous in space. As we saw in Lemma 3.4, under the weak condition on \(\varphi \) given by (3.8), the spatially smoothed space-time white noise is continuous in space. We here show that under this assumption, together with a Hölder continuity type condition on the neural field kernel \(w\), the solution \((Y(t, x))_{t\ge 0, x\in {{\mathbb {R}} ^N}}\) inherits the spatial regularity of the driving noise.

It is worth mentioning that the neural field equation fits into the class of degenerate diffusion SPDEs (indeed there is no diffusion term), and that regularity theory for such equations is currently a very active area [see for example Hofmanová (2013) and references therein]. However, in our case we are not concerned with any kind of sharp regularity results [in contrast to those found in Dalang and Sanz-Solé (2009) for the stochastic wave equation], and simply want to assert that for most typical choices of neural field kernels \(w\) made by practitioners, the random field solution to the neural field equation is at least regular in space. The results of this section are simple applications of standard techniques to prove continuity in space of random field solutions to SPDEs, as is done for example in Walsh (1986, Corollary 3.4).

*Remark 3.9*

This condition is certainly satisfied for all typical choices of neural field kernel \(w\). In particular, any smooth rapidly decaying function will satisfy \((\mathbf{C3'})\).

**Theorem 3.10**

Suppose that:

- \(w\) satisfies (**C3’**);
- \(\varphi \) satisfies (3.8), i.e.$$\begin{aligned} \Vert \varphi - {\varvec{\tau }}_z(\varphi )\Vert _{L^2({{\mathbb {R}} ^N})} \le C_\varphi |z|^\alpha , \quad \forall z\in {{\mathbb {R}} ^N}, \end{aligned}$$where \({\varvec{\tau }}_z\) denotes the operator of shifting by \(z\in {{\mathbb {R}} ^N}\);
- \(x\mapsto Y_0(x)\) is \(\alpha \)-Hölder continuous.

Then the solution \((Y(t, x))_{t\ge 0, x\in {{\mathbb {R}} ^N}}\) to (3.9) has a version that is continuous in space.

*Proof*

Let \((Y(t, x))_{t\ge 0, x\in {{\mathbb {R}} ^N}}\) be the mild solution to (3.9), which exists and is unique by Theorem 3.7. The stated regularity in time is given in Theorem 3.8. It thus remains to prove the regularity in space.

**C3’**). Moreover, by Hölder’s and Burkholder’s inequalities once again, we see that
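Schematically, the bound one obtains on the spatial increments is of the following form (a sketch; \(C_{T,p}\) is a constant depending on \(T\), \(p\) and the data):

```latex
% Spatial increment estimate feeding into Kolmogorov's criterion (sketch)
\sup_{t\in[0,T]} \mathbb{E}\bigl[\, |Y(t,x) - Y(t,x')|^p \,\bigr]
  \le C_{T,p}\, |x - x'|^{\alpha p},
  \qquad x, x' \in \mathbb{R}^N,
```

from which Kolmogorov's continuity criterion yields a version of \(Y\) that is Hölder continuous in space of any order strictly less than \(\alpha\).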

## 4 Comparison of the two approaches

The purpose of this section is to compare the two different approaches taken in Sects. 2 and 3 above to give sense to the stochastic neural field equation. Such a comparison of the two approaches in a general setting has existed for a long time in the probability literature [see for example Jetschke (1982, 1986), or more recently Dalang and Quer-Sardanyons (2011)], but we provide a proof of the main result (Theorem 4.1) in the Appendix for completeness.

Suppose that the conditions of Theorem 3.7 are satisfied [in particular (**C2’**) and the given assumptions on the initial condition]. Then, by that result, there exists a unique random field \((Y(t, x))_{t\ge 0, x\in {\mathbb {R}} ^N}\) such that

It turns out that this random field solution is equivalent to the Hilbert space valued solution constructed in Sect. 2, in the following sense.

**Theorem 4.1**

Suppose moreover that (**C1’**) is satisfied for some \(\rho _w\in L^1({\mathbb {R}} ^N)\). Then the random field \((Y(t, x))_{t\ge 0}\) satisfying (4.1) and (4.2) is such that \((Y(t))_{t\ge 0} {:=} (Y(t, \cdot ))_{t\ge 0}\) is the unique \(L^2({\mathbb {R}} ^N, \rho _w)\)-valued solution to the stochastic evolution equation
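The evolution equation in question is of the standard Hilbert space form (a sketch; the notation \(\mathbf{F}\) for the nonlocal nonlinearity and \(B\) for the noise operator is assumed from Sect. 2):

```latex
% Hilbert-space form of the stochastic neural field equation (sketch)
\mathrm{d}Y(t) = \bigl( -Y(t) + \mathbf{F}(Y(t)) \bigr)\, \mathrm{d}t
  + B(Y(t))\, \mathrm{d}W(t),
\qquad
\mathbf{F}(Y)(x) := \int_{\mathbb{R}^N} w(x, y)\, G(Y(y))\, \mathrm{d}y.
```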

*Example 4.2*

In this example, (**C2’**) is satisfied (indeed \(\Vert w(x-\cdot )\Vert _{L^1({\mathbb {R}} ^N)}\) is constant) and \(\sigma \) is Lipschitz and of linear growth, so that (assuming the initial condition has finite moments) Theorems 3.7 and 3.8 can be applied to yield a unique random field solution \((Y(t, x))_{t\ge 0}\) to the stochastic neural field equation. Moreover, by Example 2 in Sect. 2.8, we also see that (**C1’**) is satisfied. Thus Theorem 2.2 can also be applied to construct a Hilbert space valued solution to the stochastic neural field equation (Eq. (4.3)). By Theorem 4.1, the solutions are equivalent.
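As a numerical illustration of this kind of example, the following is a minimal Euler–Maruyama sketch of the stochastic neural field equation on a periodic one-dimensional grid. The Gaussian kernel, sigmoid gain function \(G\) and linear \(\sigma\) are illustrative modelling choices (they are not prescribed by the text), and the spatial smoothing \(\varphi\) of the noise is omitted for simplicity.

```python
import numpy as np

def simulate_neural_field(n=64, length=10.0, t_end=1.0, dt=0.01,
                          sigma0=0.05, seed=0):
    """Euler-Maruyama sketch of
        dY(t,x) = (-Y(t,x) + (w * G(Y))(t,x)) dt + sigma(Y(t,x)) dW(t,x)
    on a periodic 1-D grid.  Kernel, gain and diffusion coefficient are
    illustrative assumptions, not taken from the paper."""
    rng = np.random.default_rng(seed)
    dx = length / n
    x = np.arange(n) * dx
    # periodic distance from the origin, used to build a circulant kernel
    d = np.minimum(x, length - x)
    w = np.exp(-d**2)                       # Gaussian connectivity (assumed)
    w_hat = np.fft.rfft(w) * dx             # convolution via FFT, weighted by dx
    G = lambda u: 1.0 / (1.0 + np.exp(-u))  # sigmoid firing-rate function
    sigma = lambda u: sigma0 * u            # Lipschitz, linear-growth coefficient
    y = np.zeros(n)                         # flat initial condition
    for _ in range(int(t_end / dt)):
        conv = np.fft.irfft(np.fft.rfft(G(y)) * w_hat, n)
        # discretised space-time white noise on the grid
        noise = rng.standard_normal(n) * np.sqrt(dt / dx)
        y = y + dt * (-y + conv) + sigma(y) * noise
    return y
```

Since \(G\) is bounded and \(\sigma\) is of linear growth, the discretised field stays bounded in moments over a finite horizon, mirroring the \(L^p\)-bounds of Theorems 3.7 and 3.8.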

## 5 Conclusion

We have here explored two rigorous frameworks in which stochastic neural field equations can be studied in a mathematically precise fashion. Both these frameworks are useful in the mathematical neuroscience literature: the approach using the theory of Hilbert space valued processes is adopted in Kuehn and Riedler (2014), while the random field framework is the more natural one for Bressloff, Ermentrout and their associates in Bressloff and Webber (2012), Bressloff and Wilkerson (2012), and Kilpatrick and Ermentrout (2013).

It turns out that the constructions are equivalent (see Sect. 4) when all the conditions are satisfied, which we emphasize is certainly the case for all usual modeling choices of the neural field kernel \(w\) and noise terms made in the literature (see Sects. 2.6, 2.7 and Example 4.2). However, there are still some advantages and disadvantages to taking one approach over the other, depending on the purpose. For example, an advantage of the construction of a solution as a stochastic process taking values in a Hilbert space, carried out in Sect. 2, is that it allows one to consider more general diffusion coefficients. Moreover, it is easy to apply results from the large body of literature taking this approach (for example LDP results; see Remark 2.3). A disadvantage is that we have to be careful to impose conditions which control the behavior of the solution in space at infinity and guarantee the integrability of the solution. In particular we require that the connectivity function \(w\) either satisfies the strong conditions (**C1**) and (**C2**), or the weaker but harder to check conditions (**C1’**) and (**C2’**).

On the other hand, the advantage of the random field approach developed in Sect. 3 is that one no longer needs to control what happens at infinity. We therefore require fewer conditions on the connectivity function \(w\) to ensure the existence of a solution [(**C2’**) is sufficient—see Theorem 3.7]. Moreover, with this approach, it is easier to write down conditions that guarantee the existence of a solution that is continuous in both space and time (as opposed to the Hilbert space approach, where spatial regularity is somewhat hidden). However, in order to avoid non-physical distribution valued solutions, we had to impose a priori some extra spatial regularity on the noise (see Sect. 3.2).

## Footnotes

- 1.
The norm of \(B \in L_0(U,H)\) is classically defined as \(\sup _{x \ne 0} \frac{\Vert Bx\Vert _H}{\Vert x\Vert _U}\).

- 2.
The covariance operator \(C:U\rightarrow U\) of \(W\) is defined as \({\mathbb {E}} [\langle W(s),g\rangle _U \langle W(t),h \rangle _U]=s\wedge t\langle C g,h\rangle _U\) for all \(g\), \(h \in U\).

- 3.
Technically this means that \(\varPhi (s)\) is measurable with respect to the \(\sigma \)-algebra generated by all left-continuous processes that are known at time \(s\) when \((W(u))_{u\le s}\) is known (these processes are said to be adapted to the filtration generated by \(W\)).

- 4.
This would be the case for a cortex of infinite size. The cortex is in fact of finite size, but the spatial extents of \(w_{loc}\) and \(w_{lr}\) are very small with respect to this size, and hence the model in which the cortex is \({\mathbb {R}} ^2\) is acceptable.

- 5.
This can also be obtained by applying the operator \(B\) to the representation (2.2) of \(W\).

- 6.
Recall that a collection of random variables \(X = \{X(\theta )\}_{\theta \in \Theta }\) indexed by a set \(\Theta \) is a Gaussian random field on \(\Theta \) if \((X(\theta _1), \dots , X(\theta _k))\) is a \(k\)-dimensional Gaussian random vector for every \(\theta _1, \dots , \theta _k\in \Theta \). It is characterized by its mean and covariance functions.

- 7.
Precisely we consider functions \(f\) such that \((t,x,\omega )\mapsto f(t,x,\omega )\) is measurable with respect to the \(\sigma \)-algebra generated by linear combinations of functions of the form \(X(\omega )\mathbf{1 }_{(a, b]}(t)\mathbf{1 }_A(x)\), where \(a, b \in {\mathbb {R}} ^+\), \(A\in {\mathcal {B}} ({{\mathbb {R}} ^N})\), and \(X:\varOmega \rightarrow {\mathbb {R}} \) is bounded and measurable with respect to the \(\sigma \)-algebra generated by \((W_s(A))_{s\le a, A\in {\mathcal {B}}({{\mathbb {R}} ^N})}\).

- 8.
This is a family of random variables such that for each \(u\in U\), \(({\mathcal {W}} _t(u))_{t\ge 0}\) is a Brownian motion with variance \(t\Vert u\Vert ^2_U\), and for all \(s, t\ge 0\), \(u_1,u_2\in U\), \({\mathbb {E}} [{\mathcal {W}} _t(u_1){\mathcal {W}} _s(u_2)] = (s\wedge t)\langle u_1, u_2\rangle _U\). See for example Dalang and Quer-Sardanyons (2011), Section 2.1.

## Notes

### Acknowledgments

The authors are grateful to James Maclaurin for suggesting the use of the Fourier transform in Example 2 on page 18, to Etienne Tanré for discussions, and to the referees for their useful suggestions and references.

### References

- Amari SI (1977) Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cybern 27(2):77–87
- Baker T, Cowan J (2009) Spontaneous pattern formation and pinning in the primary visual cortex. J Physiol Paris 103(1–2):52–68
- Bressloff P (2003) Spatially periodic modulation of cortical patterns by long-range horizontal connections. Phys D Nonlinear Phenom 185(3–4):131–157
- Bressloff P (2009) Stochastic neural field theory and the system-size expansion. SIAM J Appl Math 70:1488–1521
- Bressloff P (2010) Metastable states and quasicycles in a stochastic Wilson-Cowan model of neuronal population dynamics. Phys Rev E 82(5):051903
- Bressloff P (2012) Spatiotemporal dynamics of continuum neural fields. J Phys A Math Theor 45(3):033001
- Bressloff P, Cowan J, Golubitsky M, Thomas P, Wiener M (2001) Geometric visual hallucinations, Euclidean symmetry and the functional architecture of striate cortex. Philos Trans R Soc Lond B 306(1407):299–330
- Bressloff P, Webber M (2012) Front propagation in stochastic neural fields. SIAM J Appl Dyn Syst 11(2):708–740
- Bressloff PC, Folias SE (2004) Front bifurcations in an excitatory neural network. SIAM J Appl Math 65(1):131–151
- Bressloff PC, Wilkerson J (2012) Traveling pulses in a stochastic neural field model of direction selectivity. Front Comput Neurosci 6(90)
- Brezis H (2010) Functional analysis, Sobolev spaces and partial differential equations. Springer, Berlin
- Brzeźniak Z, Peszat S (1999) Space-time continuous solutions to SPDE’s driven by a homogeneous Wiener process. Studia Math 137(3):261–299
- Dalang R, Khoshnevisan D, Mueller C, Nualart D, Xiao Y (2009) A minicourse on stochastic partial differential equations. In: Khoshnevisan D, Rassoul-Agha F (eds) Lecture Notes in Mathematics, vol 1962. Springer, Berlin. Held at the University of Utah, Salt Lake City
- Dalang RC, Frangos NE (1998) The stochastic wave equation in two spatial dimensions. Ann Probab 26(1):187–212
- Dalang RC, Quer-Sardanyons L (2011) Stochastic integrals for SPDE’s: a comparison. Expo Math 29(1):67–109
- Dalang RC, Sanz-Solé M (2009) Hölder-Sobolev regularity of the solution to the stochastic wave equation in dimension three. Mem Am Math Soc 199(931):vi+70
- Du Y (2006) Order structure and topological methods in nonlinear partial differential equations, vol 1. Series in Partial Differential Equations and Applications. World Scientific Publishing Co., Pte. Ltd., Hackensack
- Ermentrout G, McLeod J (1993) Existence and uniqueness of travelling waves for a neural network. Proc R Soc Edinb 123:461–478
- Eveson SP (1995) Compactness criteria for integral operators in \(L^\infty \) and \(L^1\) spaces. Proc Am Math Soc 123(12):3709–3716
- Faye G, Chossat P, Faugeras O (2011) Analysis of a hyperbolic geometric model for visual texture perception. J Math Neurosci 1(4)
- Ferrante M, Sanz-Solé M (2006) SPDEs with coloured noise: analytic and stochastic approaches. ESAIM Probab Stat 10:380–405
- Folias SE, Bressloff PC (2004) Breathing pulses in an excitatory neural network. SIAM J Appl Dyn Syst 3(3):378–407
- Hofmanová M (2013) Degenerate parabolic stochastic partial differential equations. Stoch Process Appl 123(12):4294–4336
- Jansen BH, Rit VG (1995) Electroencephalogram and visual evoked potential generation in a mathematical model of coupled cortical columns. Biol Cybern 73:357–366
- Jetschke G (1982) Different approaches to stochastic parabolic differential equations. In: Proceedings of the 10th Winter School on Abstract Analysis, pp 161–169
- Jetschke G (1986) On the equivalence of different approaches to stochastic partial differential equations. Math Nachr 128(1):315–329
- Kilpatrick ZP, Ermentrout B (2013) Wandering bumps in stochastic neural fields. SIAM J Appl Dyn Syst 12(1):61–94
- Kuehn C, Riedler MG (2014) Large deviations for nonlocal stochastic neural fields. J Math Neurosci 4(1)
- Lopes da Silva F, Hoeks A, Zetterberg L (1974) Model of brain rhythmic activity. Kybernetik 15:27–37
- Lopes da Silva F, van Rotterdam A, Barts P, van Heusden E, Burr W (1976) Model of neuronal populations. The basic mechanism of rhythmicity. In: Corner MA, Swaab DF (eds) Progress in brain research. Elsevier, Amsterdam, pp 281–308
- Lund JS, Angelucci A, Bressloff PC (2003) Anatomical substrates for functional columns in macaque monkey primary visual cortex. Cereb Cortex 12:15–24
- Mariño J, Schummers J, Lyon D, Schwabe L, Beck O, Wiesing P, Obermayer K, Sur M (2005) Invariant computations in local cortical networks with balanced excitation and inhibition. Nat Neurosci 8(2):194–201
- Owen M, Laing C, Coombes S (2007) Bumps and rings in a two-dimensional neural field: splitting and rotational instabilities. New J Phys 9(10):378–401
- Pardoux E (2007) Stochastic partial differential equations. Lectures given at Fudan University, Shanghai
- Peszat S (1994) Large deviation principle for stochastic evolution equations. Probab Theory Relat Fields 98(1):113–136
- Potthast R, Beim Graben P (2010) Existence and properties of solutions for neural field equations. Math Methods Appl Sci 33(8):935–949
- Prato GD, Zabczyk J (1992) Stochastic equations in infinite dimensions. Cambridge University Press, Cambridge
- Prévôt C, Röckner M (2007) A concise course on stochastic partial differential equations. Lecture Notes in Mathematics. Springer, Berlin
- Sanz-Solé M, Sarrà M (2002) Hölder continuity for the stochastic heat equation with spatially correlated noise. In: Seminar on Stochastic Analysis, Random Fields and Applications, III (Ascona, 1999), Progr Probab, vol 52. Birkhäuser, Basel, pp 259–268
- Simon J (1990) Sobolev, Besov and Nikol’skiĭ fractional spaces: embeddings and comparisons for vector valued spaces on an interval. Ann Mat Pura Appl 4(157):117–148
- Veltz R, Faugeras O (2010) Local/global analysis of the stationary solutions of some neural field equations. SIAM J Appl Dyn Syst 9(3):954–998
- Walsh JB (1986) An introduction to stochastic partial differential equations. In: École d’été de probabilités de Saint-Flour, XIV–1984. Lecture Notes in Mathematics. Springer, Berlin, pp 265–439
- Wilson H, Cowan J (1972) Excitatory and inhibitory interactions in localized populations of model neurons. Biophys J 12:1–24
- Wilson H, Cowan J (1973) A mathematical theory of the functional dynamics of cortical and thalamic nervous tissue. Biol Cybern 13(2):55–80

## Copyright information

**Open Access** This article is distributed under the terms of the Creative Commons Attribution License, which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.