Stochastic neural field equations: A rigorous footing

We extend the theory of neural fields which has been developed in a deterministic framework by considering the influence of spatio-temporal noise. The outstanding problem that we here address is the development of a theory that gives rigorous meaning to stochastic neural field equations, and conditions ensuring that they are well-posed. Previous investigations in the field of computational and mathematical neuroscience have been numerical for the most part. Such questions have been considered for a long time in the theory of stochastic partial differential equations, where at least two different approaches have been developed, each having its advantages and disadvantages. It turns out that both approaches have also been used in computational and mathematical neuroscience, but with much less emphasis on the underlying theory. We present a review of two existing theories and show how they can be used to put the theory of stochastic neural fields on a rigorous footing. We also provide general conditions on the parameters of the stochastic neural field equations under which we guarantee that these equations are well-posed. In so doing we relate each approach to previous work in computational and mathematical neuroscience. We hope this will provide a reference that will pave the way for future studies (both theoretical and applied) of these equations, where basic questions of existence and uniqueness will no longer be a cause for concern.


Introduction
Neural field equations have been widely used to study spatiotemporal dynamics of cortical regions. Arising as continuous spatial limits of discrete models, they provide a step towards an understanding of the relationship between the macroscopic spatially structured activity of densely populated regions of the brain and the underlying microscopic neural circuitry. The discrete models themselves describe the activity of a large number of individual neurons with no spatial dimensions. Such neural mass models have been proposed by Lopes da Silva and colleagues [22,23] to account for oscillatory phenomena observed in the brain, and were later put on a stronger mathematical footing in the study of epileptic-like seizures in [20]. When taking the spatial limit of such discrete models, one typically arrives at a nonlinear integro-differential equation, in which the integral term can be seen as a nonlocal interaction term describing the spatial distribution of synapses in a cortical region. Neural field models build on the original work of Wilson and Cowan [34,35] and Amari [1], and are known to exhibit a rich variety of phenomena, including stationary states, traveling wave fronts, pulses and spiral waves. For a comprehensive review of neural field equations, including a description of their derivation, we refer to [5].
More recently several authors have become interested in stochastic versions of neural field equations (see for example [2,3,7,8,21]), in order to (amongst other things) model the effects of fluctuations on wave front propagation. In particular, in [7] a multiplicative stochastic term is added to the neural field equation, resulting in a stochastic nonlinear integro-differential equation of the form

dY(t, x) = [−Y(t, x) + ∫_R w(x, y)G(Y(t, y))dy] dt + σ(Y(t, x))dW(t, x),   (1.1)

for x ∈ R, t ≥ 0, and some functions G (referred to as the nonlinear gain function), σ (the diffusion coefficient) and w (the neural field kernel, sometimes also called the connectivity function). Here (W(t, x))_{x∈R, t≥0} is a stochastic process (notionally a "Gaussian random noise") that depends on both space and time, and which may possess some spatial correlation.
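Although the rest of the paper is devoted to giving (1.1) a rigorous meaning, it may help intuition to see how such an equation is typically simulated in numerical studies. The following sketch applies a naive Euler-Maruyama scheme on a truncated, discretized domain; the kernel, gain, noise amplitude and all grid parameters are illustrative assumptions made here, not choices taken from the text.

```python
import numpy as np

def simulate_neural_field(T=1.0, dt=0.01, L=10.0, nx=101, sigma_amp=0.1, seed=0):
    """Euler-Maruyama sketch of the stochastic neural field equation (1.1)
    on the truncated domain [-L/2, L/2]. All parameter choices are
    illustrative, not taken from the text."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-L / 2, L / 2, nx)
    dx = x[1] - x[0]
    # Homogeneous "Mexican hat" style kernel w(x, y) = w(x - y) (illustrative)
    diff = x[:, None] - x[None, :]
    w = np.exp(-diff**2) - 0.5 * np.exp(-diff**2 / 4)
    G = lambda a: 1.0 / (1.0 + np.exp(-a))      # sigmoid gain function
    sigma = lambda y: sigma_amp * y             # multiplicative diffusion coefficient
    Y = np.zeros(nx)
    for _ in range(int(T / dt)):
        drift = -Y + (w @ G(Y)) * dx            # nonlocal interaction by quadrature
        # Independent Gaussian increments at each grid point stand in for dW(t, x)
        dW = rng.normal(0.0, np.sqrt(dt), nx)
        Y = Y + drift * dt + sigma(Y) * dW
    return x, Y
```

Nothing in this sketch addresses the questions of existence and uniqueness that the paper resolves; it only indicates the kind of numerical experiment that motivates them.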
In [7], equation (1.1) is used in a slightly informal way to derive some interesting phenomena. However, from a more rigorous point of view one must be careful to understand what exactly is meant by the equation (1.1), and indeed what we understand by a solution. The main point is that any solution must involve an object of the form

"∫ σ(Y(t, x))dW(t, x)"   (1.2)

which must be precisely defined. Of course, in the case where there is no spatial dimension, the theory of such stochastic integrals is widely disseminated, but for integrals with respect to space-time white noise (for example) it is far less well-known. It is for this reason that we believe it to be extremely worthwhile making a detailed study of how to give sense to these objects, and moreover to solutions of (1.1) when they exist. There are in fact two distinct approaches to defining and interpreting the quantity (1.2), both of which allow one to build up a theory of stochastic partial differential equations (SPDEs). Although (1.1) is not strictly speaking an SPDE (since there is no derivative with respect to the spatial variable), both approaches provide a rigorous underlying theory upon which to base a study of such equations.
The first approach generalizes the theory of stochastic processes in order to give sense to solutions of SPDEs as random processes that take their values in a Hilbert space of functions (as presented by Da Prato and Zabczyk in [11] and more recently by Prévôt and Röckner in [28]). With this approach, the quantity (1.2) is interpreted as a Hilbert space-valued integral, i.e. "∫ σ(Y(t))dW(t)", where (Y(t))_{t≥0} and (W(t))_{t≥0} take their values in a Hilbert space of functions, and σ(Y(t)) is an operator between Hilbert spaces. The second approach is that of J. B. Walsh (as described in [33]), which, in contrast, takes as its starting point a PDE with a random and highly irregular "white-noise" term. This approach develops integration theory with respect to a class of random measures (called martingale measures), so that (1.2) can be interpreted as a random field in both t and x.
In the theory of SPDEs, both approaches have advantages and disadvantages. This is also the case with regard to the stochastic neural field equation (1.1), as described in the conclusion below (Section 5), and it is for this reason that we here develop both approaches, with the view that one or the other will suit a particular reader's needs. Taking the functional approach of Da Prato and Zabczyk is perhaps more straightforward for those with a knowledge of stochastic processes, and the existing general results can be applied more directly in order to obtain, for example, existence and uniqueness. This was the path taken in [29], where the emphasis was on large deviations, though in a much less general setup than we consider here (see Remark 2.7). However, it can certainly be argued that solutions constructed in this way are "non-physical", since the functional theory tends to ignore any spatial regularity properties (solutions are typically L²-valued in the spatial direction). We argue that the approach of Walsh is more suited to looking for "physical" solutions that are at least continuous in the spatial dimension, though we must slightly restrict the type of noise that is permitted. A comparison of the two approaches in a general setting is presented in [14], and in Section 4 in our specific setting.
The main aim of this article is thus twofold: firstly, to present a review of an existing theory, accessible to readers unfamiliar with stochastic partial differential equations, that puts the study of stochastic neural field equations on a rigorous mathematical footing. Secondly, we give general conditions on the functions G, σ and w that are certainly satisfied for most typical choices, and under which we guarantee that there exists a solution to the neural field equation (1.1) in some sense. We hope this will provide a reference that will pave the way for future studies (both theoretical and applied) of these equations, where basic questions of existence and uniqueness will no longer be a cause for concern.
The layout of the article is as follows. We first present in Section 2 the necessary material in order to consider the stochastic neural field equation (1.1) as an evolution equation in a Hilbert space. This involves introducing the notion of a Q-Wiener process taking values in a Hilbert space and stochastic integration with respect to Q-Wiener processes, before quoting a general existence and uniqueness result for solutions of stochastic evolution equations from [11]. This theorem is then applied in Section 2.4 to yield a unique solution to (1.1) interpreted as a Hilbert space-valued process, both in the case when the noise has a spatial correlation and when it does not. The second part of the paper switches tack, and describes Walsh's theory of stochastic integration with respect to martingale measures (Section 3.1), with a view to giving sense to a solution to (1.1) as a random field in both time and space. To avoid dealing with distribution-valued solutions, we in fact consider a Gaussian noise that is smoothed in the spatial direction (Section 3.2), and show that, under some weak conditions, the neural field equation driven by such a smoothed noise has a unique solution in the sense of Walsh that is continuous in both time and space (Section 3.3). We finish with a comparison of the two approaches in Section 4, and summarize our findings in a conclusion (Section 5).
Notation: Throughout the article (Ω, F, (F_t)_{t≥0}, P) will be a filtered probability space, where the filtration (F_t)_{t≥0} satisfies the usual conditions (i.e. it is complete and right-continuous), and L²(Ω, F, P) will be the space of square-integrable random variables on (Ω, F, P). We will use the standard notation B(T) to denote the Borel σ-algebra on T for any topological space T. The Lebesgue space of p-integrable (with respect to the Lebesgue measure) functions over R^N for N ∈ N = {1, 2, ...} will be denoted by L^p(R^N), p ≥ 1, as usual, while L^p(R^N, ρ), p ≥ 1, will be the Lebesgue space weighted by a measurable function ρ : R^N → R₊.

Stochastic neural field equations as evolution equations in Hilbert spaces
In this section we will need the following operator spaces. Let U and H be two separable Hilbert spaces. We will write L₀(U, H) to denote the space of all bounded linear operators from U to H with the usual norm (with the shorthand L₀(H) when U = H), and L₂(U, H) for the space of all Hilbert-Schmidt operators from U to H, i.e. those bounded linear operators B : U → H such that

‖B‖²_{L₂(U,H)} := Σ_{k≥1} ‖B(e_k)‖²_H < ∞,

for some (and hence all) complete orthonormal systems {e_k}_{k≥1} of U. Finally, a bounded linear operator Q : U → U will be said to be trace-class if

Tr(Q) := Σ_{k≥1} ⟨Q(e_k), e_k⟩_U < ∞,

again for some (and hence all) complete orthonormal systems {e_k}_{k≥1} of U.

Hilbert space valued Q-Wiener processes
Let U be a separable Hilbert space and Q : U → U a non-negative, symmetric bounded linear operator on U such that Tr(Q) < ∞.
Definition 2.1 (Q-Wiener process). A U-valued stochastic process W = (W(t))_{t≥0} is called a Q-Wiener process with respect to (F_t)_{t≥0} if: (i) W(0) = 0 almost surely; (ii) W has continuous trajectories; (iii) W(t) is F_t-measurable for all t ≥ 0; (iv) W has independent increments; and (v) for all 0 ≤ s ≤ t the law of W(t) − W(s) on U is Gaussian with mean 0 and covariance operator (t − s)Q.
Since Q is non-negative, symmetric and trace-class, there exists a complete orthonormal basis {e_k}_{k≥1} for U and a sequence of non-negative real numbers {λ_k}_{k≥1} such that Q(e_k) = λ_k e_k for all k ≥ 1. By [11, Proposition 4.1], for arbitrary t ≥ 0, W has the expansion

W(t) = Σ_{k≥1} √λ_k β_k(t) e_k,   (2.1)

where (β_k(t))_{t≥0}, k = 1, 2, ..., are mutually independent standard real-valued Brownian motions on (Ω, F, P), and the series is convergent in L²(Ω, F, P).
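The expansion (2.1) is also the basis of practical sampling schemes for Q-Wiener processes. The sketch below truncates the series for an illustrative choice of eigenpairs on U = L²(0, 1); the eigenvalues λ_k = k⁻² and sine eigenfunctions are assumptions made here for the example, not part of the text, chosen so that Tr(Q) = Σ k⁻² < ∞.

```python
import numpy as np

def sample_Q_wiener(t, n_modes=50, n_grid=200, seed=0):
    """Truncated Karhunen-Loeve sample of a Q-Wiener process on U = L^2(0,1):
    W(t) = sum_k sqrt(lambda_k) beta_k(t) e_k, as in (2.1).
    Eigenpairs are the illustrative choice lambda_k = k^{-2},
    e_k(x) = sqrt(2) sin(k pi x)."""
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n_grid)
    W = np.zeros(n_grid)
    for k in range(1, n_modes + 1):
        lam_k = k ** -2.0
        e_k = np.sqrt(2.0) * np.sin(k * np.pi * x)
        beta_k_t = rng.normal(0.0, np.sqrt(t))  # beta_k(t) ~ N(0, t), independent
        W += np.sqrt(lam_k) * beta_k_t * e_k
    return x, W
```

The truncation error is controlled precisely because the series (2.1) converges in L²(Ω, F, P) under the trace-class assumption on Q.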

Stochastic integration with respect to Q-Wiener processes
Again let U be a separable Hilbert space, Q : U → U a non-negative, symmetric bounded linear operator on U such that Tr(Q) < ∞, and let W = (W(t))_{t≥0} be a Q-Wiener process on U with respect to (F_t)_{t≥0} (given by (2.1)).
Let H be another separable Hilbert space, and let Q^{1/2}(U) be the image of U under the operator Q^{1/2}, which is a Hilbert space under the inner product

⟨u, v⟩_{Q^{1/2}(U)} := ⟨Q^{−1/2}u, Q^{−1/2}v⟩_U, u, v ∈ Q^{1/2}(U).

The space L₂(Q^{1/2}(U), H) of Hilbert-Schmidt operators from Q^{1/2}(U) into H plays an important role in the theory of stochastic evolution equations, and for this reason we detail the following trivial example.

Example 2.2. If B ∈ L₀(U, H), then the restriction of B to Q^{1/2}(U) belongs to L₂(Q^{1/2}(U), H), since

‖B‖²_{L₂(Q^{1/2}(U),H)} = Σ_{k≥1} λ_k ‖B(e_k)‖²_H ≤ ‖B‖²_{L₀(U,H)} Tr(Q) < ∞.

Let T > 0 be arbitrary. By following the construction detailed in Chapter 4 of [11], for a process (Φ(t))_{t∈[0,T]} taking values in L₂(Q^{1/2}(U), H), the integral

∫₀ᵗ Φ(s)dW(s), t ∈ [0, T],

has a sense as an element of H when (Φ(t))_{t∈[0,T]} is predictable (with respect to the filtration (F_t)_{t≥0}) and

P( ∫₀ᵀ ‖Φ(s)‖²_{L₂(Q^{1/2}(U),H)} ds < ∞ ) = 1.

When B : U → H is bounded, this certainly holds by the previous example.

Solutions to stochastic evolution equations
Let U and H be two separable Hilbert spaces. Consider the stochastic evolution equation

dY(t) = [AY(t) + F(Y(t))]dt + B(Y(t))dW(t), Y(0) = Y₀,   (2.3)

where W is a U-valued Q-Wiener process with respect to (F_t)_{t≥0}, with Q : U → U a non-negative, symmetric bounded linear operator on U such that Tr(Q) < ∞, as above.
We use G_t to denote the predictable σ-field on [0, t] × Ω, i.e. G_t is the σ-algebra generated by all left-continuous stochastic processes on [0, t] that are adapted to (F_s)_{s∈[0,t]}, for all t ≥ 0.
Fix an arbitrary finite horizon T > 0. We work with the following hypotheses.
(H1) A is the generator of a strongly continuous semigroup S(t) = e^{tA}, t ≥ 0, in H.
(H2) F : H → H is a well-defined map, i.e. F(h) ∈ H for all h ∈ H.
(H3) B(h) ∈ L₂(Q^{1/2}(U), H) for all h ∈ H, and h ↦ B(h) is measurable.
(H4) There exists a constant C such that

‖F(g) − F(h)‖_H + ‖B(g) − B(h)‖_{L₂(Q^{1/2}(U),H)} ≤ C ‖g − h‖_H, for all g, h ∈ H.

We now make precise what we mean by a mild solution to (2.3).

Definition 2.3 (Mild solution).
A predictable H-valued process (Y(t))_{t∈[0,T]} is said to be a mild solution of (2.3) on [0, T] if

sup_{t∈[0,T]} E[‖Y(t)‖²_H] < ∞,   (2.4)

and, for arbitrary t ∈ [0, T], we have

Y(t) = S(t)Y₀ + ∫₀ᵗ S(t − s)F(Y(s))ds + ∫₀ᵗ S(t − s)B(Y(s))dW(s), P-almost surely.

Note that under the conditions (H1)-(H4), (2.4) implies that the integrals in this expression are well-defined.
The following existence and uniqueness result is quoted from [11] (Theorem 7.4).
Theorem 2.4 (Da Prato - Zabczyk). Assume that conditions (H1)-(H4) are satisfied, and that Y₀ is an F₀-measurable H-valued random variable with finite p-moments for all p ≥ 2. Then there exists a unique (up to equivalence in the Hilbert space H) mild solution (Y(t))_{t∈[0,T]} of (2.3). Moreover, it has a continuous modification.
In addition, for all p ≥ 2 there exists a constant C_T(p) such that

sup_{t∈[0,T]} E[‖Y(t)‖^p_H] ≤ C_T(p)(1 + E[‖Y₀‖^p_H]),   (2.5)

and for all p > 2

E[ sup_{t∈[0,T]} ‖Y(t)‖^p_H ] ≤ C_T(p)(1 + E[‖Y₀‖^p_H]).   (2.6)

The stochastic neural field equation: existence and uniqueness of a Hilbert-space valued solution

In this section we describe our precise interpretation of the stochastic neural field equation (1.1) in the language of Hilbert space valued stochastic evolution equations (equation (2.7) below), and study existence and uniqueness properties of this equation. Note that, as opposed to (1.1), we here work in the more general setup where the underlying space is N-dimensional.
Let ρ : R^N → R₊ be a measurable function in L^∞(R^N). Consider the stochastic evolution equation

dY(t) = [−Y(t) + F(Y(t))]dt + σ(Y(t)) ∘ B dW(t), Y(0) = Y₀,   (2.7)

where ∘ indicates the composition of operators, and W is an L²(R^N)-valued Q-Wiener process with respect to (F_t)_{t≥0}, with Q a non-negative, symmetric bounded linear operator on L²(R^N) such that Tr(Q) < ∞, as usual. Here:
• B : L²(R^N) → L²(R^N, ρ) is the bounded linear operator defined by

B(u)(x) := ∫_{R^N} ϕ(x − y)u(y)dy, u ∈ L²(R^N),   (2.8)

for some ϕ ∈ L¹(R^N);
• F is an operator on L²(R^N, ρ) defined by

F(h)(x) := ∫_{R^N} w(x, y)G(h(y))dy, h ∈ L²(R^N, ρ),   (2.9)

where w : R^N × R^N → R is the neural field kernel, and G : R → R is the nonlinear gain function, assumed to be bounded and globally Lipschitz, i.e. such that there exists a constant C_G with sup_{a∈R} |G(a)| ≤ C_G and |G(a) − G(b)| ≤ C_G|a − b| for all a, b ∈ R;
• for h ∈ L²(R^N, ρ), σ(h) acts by pointwise multiplication, (σ(h)g)(x) := σ(h(x))g(x), where σ : R → R is globally Lipschitz (so that the noise term corresponds to the term σ(Y(t, x))dW(t, x) in (1.1)).
Typically the nonlinear gain function G is taken to be a sigmoid function, for example G(a) = (1 + e^{−a})^{−1}, a ∈ R.
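For intuition, the nonlocal operator F of (2.9) is straightforward to discretize by quadrature. The sketch below uses an illustrative Gaussian kernel together with the sigmoid gain mentioned above; since this G takes values in (0, 1) (so C_G = 1), the output is bounded by sup_x ∫ w(x, y)dy, which for the Gaussian kernel is at most √π.

```python
import numpy as np

def make_F(x, w_fn, G):
    """Quadrature discretization of the nonlocal operator
    F(h)(x) = int w(x, y) G(h(y)) dy from (2.9), on a uniform grid.
    w_fn and G are user-supplied; the choices below are illustrative."""
    dx = x[1] - x[0]
    W = w_fn(x[:, None], x[None, :])   # kernel matrix w(x_i, y_j)
    return lambda h: (W @ G(h)) * dx

x = np.linspace(-5.0, 5.0, 201)
G = lambda a: 1.0 / (1.0 + np.exp(-a))      # sigmoid gain, bounded by 1
w = lambda x, y: np.exp(-(x - y) ** 2)      # Gaussian kernel (illustrative)
F = make_F(x, w, G)
h = np.tanh(x)                              # an arbitrary input profile
Fh = F(h)
```

The boundedness of G is exactly what makes F(h) well-defined pointwise here, mirroring the role of the bound sup_a |G(a)| ≤ C_G in the analysis below.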
Of particular interest to us are the conditions on the neural field kernel w which will allow us to prove existence and uniqueness of a solution taking its values in the space L²(R^N, ρ) for some ρ, through Theorem 2.4.
In [29, footnote 1] it is suggested that the condition

∫_{R^N} ∫_{R^N} |w(x, y)|² dxdy < ∞,   (C1)

together with symmetry of w, is enough to ensure that there exists a unique L²(R^N)-valued solution to (2.7). However, the problem is that it does not follow from (C1) that the operator F is stable on the space L²(R^N). For instance, suppose that in fact G ≡ 1 (so that G is trivially globally Lipschitz). Then for h ∈ L²(R^N) (and assuming w ≥ 0) we have that

‖F(h)‖²_{L²(R^N)} = ∫_{R^N} ( ∫_{R^N} w(x, y)dy )² dx.   (2.10)

The point is that we can choose a positive w such that (C1) holds while (2.10) is not finite; for example, in the case N = 1 one can construct a positive w ∈ L²(R²) for which x ↦ ∫_R w(x, y)dy fails to be square-integrable. With this in mind we argue two points. Firstly, if we want a solution in L²(R^N), we must make the additional strong assumption that

the map x ↦ ∫_{R^N} |w(x, y)|dy belongs to L²(R^N).   (C2)

Indeed, below we will show that (C1) together with (C2) are enough to yield the existence of a unique L²(R^N)-valued solution to (2.7).
On the other hand, if we do not want to make the strong assumptions that (C1) and (C2) hold, then we have to work instead in a weighted space L²(R^N, ρ_w), in order to ensure that F is stable. In this case, we will see that if there exists a bounded, non-negative ρ_w ∈ L¹(R^N) such that

∫_{R^N} |w(x, y)| ρ_w(x)dx ≤ Λ_w ρ_w(y) for all y ∈ R^N,   (C1')

for some Λ_w > 0, and

∀x ∈ R^N, (y ↦ w(x, y)) ∈ L¹(R^N), and sup_{x∈R^N} ∫_{R^N} |w(x, y)|dy ≤ C_w,   (C2')

for some constant C_w, then we can prove the existence of a unique L²(R^N, ρ_w)-valued solution to (2.7). Condition (C1') is in fact a non-trivial eigenvalue problem, and it is not straightforward to see whether it is satisfied for a given function w. However, we chose to state the theorem below in a general way, and then provide below some important examples of when it can be applied.
In particular, one case of interest is when w is homogeneous, i.e. w(x, y) = w(x − y) for all x, y ∈ R^N, with w ∈ L¹(R^N). This is an especially important case, since the homogeneity of w is a very common assumption in the literature (see for example [6,7,8,19,21,24]). However, when w is homogeneous it is clear that neither (C1) nor (C2) is satisfied, and so we must instead try to show that (C1') is satisfied ((C2') trivially holds). This is done in the second example below.
Remark 2.5. If we replace the spatial coordinate space R^N by a bounded domain D ⊂ R^N, so that the neural field equation (2.7) describes the activity of a neuron found at position x ∈ D, then these kinds of issues do not come into play, and everything becomes rather trivial (under appropriate boundary conditions). Indeed, in this case one can check the conditions of Theorem 2.4 to see that there exists a unique L²(D)-valued solution to (2.7) under the condition (C2') only (with R^N replaced by D). Although working in a bounded domain seems more physical (since any physical section of cortex is clearly bounded), the unbounded case is still often used (see [7] or the review [5]), and is mathematically more interesting. The problem in passing to the unbounded case stems from the fact that the nonlocal term in (2.7) naturally 'lives' in the space of bounded functions, while the noise naturally lives in an L² space. These are not compatible when the underlying space is unbounded.

Theorem 2.6. Suppose that the neural field kernel w either (i) satisfies conditions (C1) and (C2); or (ii) satisfies conditions (C1') and (C2').
If (i) holds set ρ w ≡ 1, while if (ii) holds let ρ w be the function appearing in condition (C1').
Then, whenever Y₀ is an F₀-measurable L²(R^N, ρ_w)-valued random variable with finite p-moments for all p ≥ 2, the neural field equation (2.7) has a unique mild solution taking values in the space L²(R^N, ρ_w). To be precise, there exists a unique predictable L²(R^N, ρ_w)-valued process (Y(t))_{t∈[0,T]} such that sup_{t∈[0,T]} E[‖Y(t)‖²_{L²(R^N,ρ_w)}] < ∞ and, for all t ∈ [0, T],

Y(t) = e^{−t}Y₀ + ∫₀ᵗ e^{−(t−s)}F(Y(s))ds + ∫₀ᵗ e^{−(t−s)}σ(Y(s)) ∘ B dW(s), P-almost surely.

Moreover, (Y(t))_{t≥0} has a continuous modification, and satisfies the bounds (2.5) and (2.6) for every T > 0 (with H = L²(R^N, ρ_w)).

Proof. We check the hypotheses (H1)-(H4) in both cases (i) and (ii) in order to be able to apply Theorem 2.4, with U = L²(R^N) and H = L²(R^N, ρ_w).
(H1): In our case A = −Id, and so (H1) is trivially satisfied in both cases.
(H2): We check that F maps L²(R^N, ρ_w) into itself. In case (i) this holds since ρ_w ≡ 1 and, for any h ∈ L²(R^N),

‖F(h)‖²_{L²(R^N)} ≤ C_G² ∫_{R^N} ( ∫_{R^N} |w(x, y)|dy )² dx < ∞,

by assumption (C2). Similarly, in case (ii), for any h ∈ L²(R^N, ρ_w),

‖F(h)‖²_{L²(R^N,ρ_w)} ≤ C_G² C_w² ∫_{R^N} ρ_w(x)dx < ∞,

since ρ_w ∈ L¹(R^N) and w satisfies (C2').

(H3): To show (H3) in both cases it suffices to check that σ(h) ∘ B ∈ L₂(Q^{1/2}(L²(R^N)), L²(R^N, ρ_w)) for any h ∈ L²(R^N, ρ_w). We know by Example 2.2 that it suffices to prove that σ(h) ∘ B is a bounded operator from L²(R^N) into L²(R^N, ρ_w). To this end, for any u ∈ L²(R^N), we have by definition

σ(h) ∘ B(u)(x) = σ(h(x)) ∫_{R^N} ϕ(x − y)u(y)dy,

which defines an element of L²(R^N, ρ_w) since σ is of linear growth, ϕ ∈ L¹(R^N) and ρ_w is bounded and integrable.

(H4): To show (H4), we first check that F is globally Lipschitz. To this end, for any g, h ∈ L²(R^N, ρ_w), we see that in either case

‖F(g) − F(h)‖²_{L²(R^N,ρ_w)} ≤ C_G² ∫_{R^N} ρ_w(x) ( ∫_{R^N} |w(x, y)||g(y) − h(y)|dy )² dx,

where we have used the Lipschitz property of G. Now in case (i) it clearly follows from the Cauchy-Schwarz inequality that

‖F(g) − F(h)‖²_{L²(R^N)} ≤ C_G² ( ∫_{R^N} ∫_{R^N} |w(x, y)|² dxdy ) ‖g − h‖²_{L²(R^N)},

so that by condition (C1), F is indeed Lipschitz.

In case (ii), by Cauchy-Schwarz (with respect to the measure |w(x, y)|dy) and the specific property of ρ_w given by (C1'), we see that

‖F(g) − F(h)‖²_{L²(R^N,ρ_w)} ≤ C_G² C_w ∫_{R^N} |g(y) − h(y)|² ∫_{R^N} ρ_w(x)|w(x, y)|dx dy ≤ C_G² C_w Λ_w ‖g − h‖²_{L²(R^N,ρ_w)}.

A similar computation, using the Lipschitz continuity of σ together with the fact that ϕ ∈ L¹(R^N), shows that h ↦ σ(h) ∘ B is Lipschitz from L²(R^N, ρ_w) into L₀(L²(R^N), L²(R^N, ρ_w)), where ‖B‖_{L₀(L²(R^N),L²(R^N,ρ_w))} is finite since ρ_w is bounded (in either case).
As mentioned we now present two important cases where the conditions (C1') and (C2') are satisfied.
Example 1: |w| defines a compact integral operator. Suppose that:
• for all R > 0,

lim_{|z|→0} sup_{x∈R^N} ∫_{B(0,R)} |w(x + z, y) − w(x, y)|dy = 0 and lim_{R→∞} sup_{x∈R^N} ∫_{R^N \ B(0,R)} |w(x, y)|dy = 0,

where B(0, R) denotes the ball of radius R in R^N centered at the origin;
• there exists a bounded subset Ω ⊂ R^N of positive measure such that

inf_{y∈Ω} ∫_Ω |w(x, y)|dx > 0 or inf_{x∈Ω} ∫_Ω |w(x, y)|dy > 0;

• w satisfies (C2') and moreover sup_{y∈R^N} ∫_{R^N} |w(x, y)|dx ≤ C_w.

We claim that these assumptions are sufficient for (C1'), so that we can apply Theorem 2.6 in this case. Indeed, let X be the Banach space of functions in L¹(R^N) ∩ L^∞(R^N), equipped with the norm ‖f‖_X := max{‖f‖_{L¹(R^N)}, ‖f‖_{L^∞(R^N)}}. Thanks to the last point above, we can well-define the map J : X → X by

J(h)(y) := ∫_{R^N} |w(x, y)|h(x)dx, h ∈ X.

Moreover, it follows from [16, Corollary 5.1] that the first condition we have here imposed on w is in fact necessary and sufficient for both the operators J : L¹(R^N) → L¹(R^N) and J : L^∞(R^N) → L^∞(R^N) to be compact. We therefore clearly also have that the condition is necessary and sufficient for the operator J : X → X to be compact. Note now that the space K of non-negative functions in X is a cone in X such that J(K) ⊂ K, and that the cone is reproducing (i.e. X = {f − g : f, g ∈ K}). If we can show that the spectral radius r(J) is strictly positive, we can thus finally apply the Krein-Rutman Theorem (see for example [15, Theorem 1.1]) to see that r(J) is an eigenvalue with corresponding non-zero eigenvector ρ ∈ K.
To show that r(J) > 0, suppose first of all that there exists a bounded Ω ⊂ R^N of positive measure such that m := inf_{y∈Ω} ∫_Ω |w(x, y)|dx > 0. Define h = 1 on Ω and 0 elsewhere, so that ‖h‖_X = max{1, |Ω|}. Then, trivially, J(h)(y) ≥ m for all y ∈ Ω, by assumption. Replacing h by h̃ = h/max{1, |Ω|} yields ‖h̃‖_X = 1 and J(h̃) ≥ m/max{1, |Ω|} on Ω. Thus ‖J‖ ≥ m/max{1, |Ω|}. Similarly J²(h̃) ≥ m²/max{1, |Ω|} on Ω, so that ‖J²‖ ≥ m²/max{1, |Ω|}. In fact we have ‖J^k‖ ≥ m^k/max{1, |Ω|} for all k ≥ 1, so that, by the spectral radius formula, r(J) = lim_{k→∞} ‖J^k‖^{1/k} ≥ m > 0. The case where inf_{x∈Ω} ∫_Ω |w(x, y)|dy > 0 holds instead is proved similarly, by instead taking h = 1/|Ω| on Ω (0 elsewhere) and working with the L¹-norm. We have thus found a non-negative, non-zero function ρ_w ∈ X, bounded and integrable, satisfying (C1') with Λ_w = r(J) > 0.

Example 2: Homogeneous case. Suppose that:
• w is homogeneous, i.e. w(x, y) = w(x − y) for all x, y ∈ R^N;
• w ∈ L¹(R^N) and is continuous;
• ∫_{R^N} |x|^{2N} |w(x)|dx < ∞.

These conditions are satisfied for many typical choices of the neural field kernel in the literature (e.g. the "Mexican hat" kernel [4,17,24,32]). However, it is clear that we are not in the case of the previous example, since for any R > 0

sup_{x∈R^N} ∫_{R^N \ B(0,R)} |w(x − y)|dy = ‖w‖_{L¹(R^N)},

which is not uniformly small. We thus again show that (C1') is satisfied in this case, so that (since (C2') is trivially satisfied) Theorem 2.6 yields the existence of a unique L²(R^N, ρ_w)-valued solution to (2.7).
In order to do this, we use the Fourier transform. Let v = |w|, so that v is continuous and in L¹(R^N). Let Fv be the Fourier transform of v, i.e.

Fv(ξ) := ∫_{R^N} e^{−2πi⟨x,ξ⟩} v(x)dx, ξ ∈ R^N.
Therefore Fv is continuous and bounded by ‖v‖_{L¹(R^N)} = ‖w‖_{L¹(R^N)}. Now let Λ_w = ‖w‖_{L¹(R^N)} + 1, and z(x) := e^{−|x|²/2}, x ∈ R^N, so that z is in the Schwartz space of smooth rapidly decreasing functions, which we denote by S(R^N).
Then define

ρ(ξ) := Fz(ξ) / (Λ_w − Fv(ξ)), ξ ∈ R^N.
We note that the denominator is continuous and strictly bounded away from 0 (indeed, by construction, Λ_w − Fv(ξ) ≥ 1 for all ξ ∈ R^N, since |Fv(ξ)| ≤ ‖w‖_{L¹(R^N)}). Thus ρ is continuous, bounded and in L¹(R^N) (since Fz ∈ S(R^N), by the standard stability result for the Fourier transform on S(R^N)).
We now claim that F^{−1}ρ ∈ L¹(R^N), where the map F^{−1} is defined by

F^{−1}g(x) := ∫_{R^N} e^{2πi⟨x,ξ⟩} g(ξ)dξ, x ∈ R^N.

Indeed, we note that for any k ∈ {1, ..., N},

∂^{2N}_{ξ_k} Fv(ξ) = F( (−2πi x_k)^{2N} v(x) )(ξ),

which is well-defined and bounded thanks to our assumption on the integrability of x ↦ |x|^{2N} |w(x)|. Since Fz is rapidly decreasing, we can thus see that the function ρ(ξ) is 2N times differentiable with respect to every component, with ∂^{2N}_{ξ_k} ρ bounded and integrable for each k ∈ {1, ..., N}, and hence

|x_k|^{2N} |F^{−1}ρ(x)| = (2π)^{−2N} |F^{−1}( ∂^{2N}_{ξ_k} ρ )(x)| ≤ (2π)^{−2N} ‖∂^{2N}_{ξ_k} ρ‖_{L¹(R^N)},

for all x ∈ R^N. Thus there exists a constant K such that |F^{−1}ρ(x)| ≤ K/|x|^{2N}. Moreover, since we also have the trivial bound |F^{−1}ρ(x)| ≤ ‖ρ‖_{L¹(R^N)}, it holds that |F^{−1}ρ(x)| ≤ K(1 + |x|^{2N})^{−1}, by adjusting the constant K. Since this is integrable over R^N, the claim is proved. Now, by the classical Fourier Inversion Theorem (which is applicable since ρ and F^{−1}ρ are both in L¹(R^N)), we thus have that F(F^{−1}ρ) = ρ. By setting ρ_w(x) := F^{−1}ρ(x), we see that

F(ρ_w)(ξ)(Λ_w − Fv(ξ)) = Fz(ξ), ξ ∈ R^N.

We may finally again apply the inverse Fourier transform F^{−1} to both sides, so that by the Inversion Theorem again (along with the standard convolution formula) it holds that

Λ_w ρ_w(x) − ∫_{R^N} v(x − y)ρ_w(y)dy = z(x), x ∈ R^N.

It then follows that

∫_{R^N} |w(x − y)|ρ_w(y)dy = Λ_w ρ_w(x) − z(x) ≤ Λ_w ρ_w(x), x ∈ R^N,

so that ρ_w satisfies (C1').

Remark 2.7 (Large Deviation Principle). The main focus of [29] was a large deviation principle for the stochastic neural field equation (2.7) with small noise, but in a less general situation than we consider here. In particular, the authors only considered the neural field equation driven by a simple additive noise, white in both space and time. We would therefore like to remark that in our more general case, and under much weaker conditions than those imposed in [29], an LDP result still holds and can be quoted from the literature. Indeed, such a result is presented in [26, Theorem 7.1]. The main conditions required for the application of this result have essentially already been checked above (global Lipschitz properties of F and σ(·) ∘ B), and it thus remains to check conditions (E.1)-(E.4) as they appear in [26]. In fact these are trivialities, since the strongly continuous contraction semigroup S(t) is generated by −Id in our case.
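The Fourier construction of ρ_w in Example 2 can be checked numerically. The sketch below carries it out in dimension N = 1 for a Mexican hat kernel (an illustrative choice, as are the grid sizes), with the FFT standing in for F. By construction, the discrete analogue of Λ_w ρ_w − |w| ∗ ρ_w recovers the Gaussian z, which is non-negative, so the inequality in (C1') holds on the grid.

```python
import numpy as np

# Discretize [-L/2, L/2) with n points; all sizes and kernels are illustrative.
n, L = 2048, 80.0
dx = L / n
x = (np.arange(n) - n // 2) * dx
w = np.exp(-x**2) - 0.5 * np.exp(-x**2 / 4)   # Mexican hat kernel
v = np.abs(w)                                  # v = |w|
z = np.exp(-x**2 / 2)                          # z in S(R), non-negative
Lam = v.sum() * dx + 1.0                       # Lambda_w = ||w||_{L^1} + 1
# Scaled FFTs approximate the continuous Fourier transform on this grid.
Fv = np.fft.fft(np.fft.ifftshift(v)) * dx
Fz = np.fft.fft(np.fft.ifftshift(z)) * dx
rho_hat = Fz / (Lam - Fv)                      # rho(xi) = Fz / (Lambda_w - Fv)
rho = np.fft.fftshift(np.fft.ifft(rho_hat)).real / dx   # rho_w = F^{-1} rho
# Discrete (circular) convolution |w| * rho_w:
conv = np.fft.fftshift(np.fft.ifft(np.fft.fft(np.fft.ifftshift(v))
                                   * np.fft.fft(np.fft.ifftshift(rho)))).real * dx
# By construction: Lambda_w * rho_w - |w| * rho_w = z >= 0, i.e. (C1') holds.
residual = Lam * rho - conv
```

On the periodic grid the identity Λ_w ρ_w − |w| ∗ ρ_w = z is exact (it holds mode by mode in the discrete Fourier domain), so the check is limited only by floating-point error.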

Color of the noise in stochastic neural field equation (2.7)
It is important to understand the properties of the noise term in the neural field equation (2.7), which we now know has a solution in some sense, and in particular why we have chosen the particular form (2.8) for the 'coefficient' B (although it is really an operator). Recall that by definition

B(u)(x) = ∫_{R^N} ϕ(x − y)u(y)dy, u ∈ L²(R^N),

for some ϕ ∈ L¹(R^N). The first point is that we have deliberately made the definition for ϕ ∈ L¹(R^N) so that it is possible to (at least formally) take ϕ to be the Dirac delta function. More rigorously, we can take smooth approximations of the Dirac function in L¹(R^N) that integrate to 1. In that case, the operator B is simply the identity, and Theorem 2.6 still holds. However, when ϕ is not the Dirac function, we claim that the noise term BdW(t) in (2.7) is in fact spatially correlated. This is important in applications: for example, in [7] it is a spatially correlated noise that is added to the deterministic neural field equation.
To see the spatial correlation, consider B as a map from L²(R^N) to L²(R^N, ρ) for some bounded ρ ∈ L¹(R^N). As noted above, B is then bounded, and hence the process (U(t))_{t≥0} defined by

U(t) := B W(t), t ≥ 0,

is a well-defined L²(R^N, ρ)-valued process. Moreover, by Theorem 5.2 of [11], (U(t))_{t≥0} is Gaussian with mean zero and covariance (s ∧ t)BQB*. In other words, for all g, h ∈ L²(R^N, ρ) and s, t ≥ 0, we have that

E[ ⟨U(t), g⟩_{L²(R^N,ρ)} ⟨U(s), h⟩_{L²(R^N,ρ)} ] = (s ∧ t)⟨QB*g, B*h⟩_{L²(R^N)},   (2.11)

where B* is the adjoint of B. That is, for any g ∈ L²(R^N, ρ),

B*(g)(y) = ∫_{R^N} ϕ(x − y)g(x)ρ(x)dx, y ∈ R^N.

Using this in (2.11), together with the fact that Q is a linear operator and is self-adjoint, we see that for all g, h ∈ L²(R^N, ρ) the covariance is given by integrating g and h against a kernel c(x, y) built from ϕ and Q. Hence (U(t))_{t≥0} is white in time but colored in space with covariance function (s ∧ t)c(x, y); when Q is translation invariant, c(x, y) depends only on x − y, so the noise is moreover spatially stationary. This is exactly the rigorous interpretation of the noise described in [7], when interpreting a solution to the stochastic neural field equation as a process taking values in L²(R^N, ρ_w).
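The smoothing effect of B can be seen in a quick Monte Carlo experiment: convolving discrete white noise with a kernel ϕ produces a field whose values at nearby points are strongly correlated, whereas the unsmoothed noise is essentially uncorrelated. Everything below (the Gaussian ϕ, the grid, the sample sizes) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dx, n_samples = 128, 0.1, 5000
x = np.arange(n) * dx
# Gaussian smoothing kernel phi (illustrative), normalized to integrate to 1.
phi = np.exp(-(x - x.mean())**2 / (2 * 0.5**2))
phi /= phi.sum() * dx
# Discrete white noise: independent N(0, 1/dx) values at each grid point.
xi = rng.normal(0.0, 1.0 / np.sqrt(dx), (n_samples, n))
# U = B(xi): convolution with phi, the discrete analogue of (2.8).
U = np.array([np.convolve(s, phi, mode="same") * dx for s in xi])
# Empirical correlation between two interior points 0.4 apart:
corr_smooth = np.corrcoef(U[:, 60], U[:, 64])[0, 1]
corr_white = np.corrcoef(xi[:, 60], xi[:, 64])[0, 1]
```

For a Gaussian ϕ the smoothed field has a Gaussian spatial covariance, so the correlation at separation d stays close to 1 for d smaller than the kernel width, while the raw white noise shows only sampling-error correlation.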

Stochastic neural fields as Gaussian random fields
In this section we take an alternative approach, and try to give sense to a solution to the stochastic neural field equation (1.1) as a random field, using Walsh's theory of integration. This approach generally takes as its starting point a deterministic PDE, and then attempts to include a term which is random in both space and time. With this in mind, consider first the well-studied deterministic neural field equation

∂Y/∂t (t, x) = −Y(t, x) + ∫_{R^N} w(x, y)G(Y(t, y))dy, t ≥ 0, x ∈ R^N.   (3.1)

Under some conditions on the neural field kernel w (boundedness, condition (C2') above and L¹-Lipschitz continuity), this equation has a unique solution (t, x) ↦ Y(t, x) that is bounded and continuous in x and continuously differentiable in t, whenever x ↦ Y(0, x) is bounded and continuous ([27]). The idea then is to directly add a noise term to this equation, and try to give sense to all the necessary objects in order to be able to define what we mean by a solution. Indeed, consider the following stochastic version of (3.1),

∂Y/∂t (t, x) = −Y(t, x) + ∫_{R^N} w(x, y)G(Y(t, y))dy + σ(Y(t, x))Ẇ(t, x),   (3.2)

where Ẇ is a "space-time white noise". The definition of Ẇ will be made precise below, but informally we may think of the object Ẇ(t, x) as the random distribution which, when integrated against a test function h ∈ L²(R₊ × R^N), yields a centered Gaussian random variable Ẇ(h) with covariance E[Ẇ(g)Ẇ(h)] = ∫₀^∞ ∫_{R^N} g(t, x)h(t, x)dxdt. The point is that with this interpretation of space-time white noise, since equation (3.2) specifies no regularity in the spatial direction (the map x ↦ Y(t, x) is simply assumed to be Lebesgue measurable so that the integral makes sense), it is clear that any solution will be distribution-valued in the spatial direction, which is rather unsatisfactory. Indeed, consider the extremely simple linear case when G ≡ 0 and σ ≡ 1, so that (3.2) reads

∂Y/∂t (t, x) = −Y(t, x) + Ẇ(t, x).   (3.3)

Formally, the solution to this equation is given by

Y(t, x) = e^{−t}Y(0, x) + ∫₀ᵗ e^{−(t−s)}Ẇ(s, x)ds,

and since the integral is only over time it is clear (at least formally) that x ↦ Y(t, x) is a distribution for all t ≥ 0.
This differs significantly from the usual SPDE situation, where one would typically have an equation such as (3.3) but with a second-order differential operator in space applied to the first term on the right-hand side (leading to the much studied stochastic heat equation). In such a case, the semigroup generated by the second-order differential operator can be enough to smooth the space-time white noise in the spatial direction, leading to solutions that are continuous in both space and time (at least when the spatial dimension is 1; see for example [25, Chapter 3] or [33, Chapter 3]).
Of course one can develop a theory of distribution-valued processes (as is done in [33, Chapter 4]) to interpret solutions of (3.2) in the obvious way: one says that the random field (Y(t, x))_{t≥0, x∈R^N} is a solution if the equation, integrated against any test function φ in the spatial variable, holds for all t ≥ 0. Here all the integrals can be well-defined, which makes sense intuitively if we think of Ẇ(t, x) as a distribution. In fact it is more common to write ∫₀ᵗ ∫_{R^N} e^{−(t−s)}φ(x)W(dsdx) for the stochastic integral term, once it has been rigorously defined.
However, we argue that it is not worth developing this theory here, since distribution valued solutions are of little interest physically.It is for this reason that we instead look for other types of random noise to add to the deterministic equation (3.1) that will produce solutions that are real-valued random fields, and are at least Hölder continuous in both space and time.In the theory of SPDEs, when the spatial dimension is 2 or more, the problem of an equation driven by space-time white noise having no real-valued solution is a well-known and much studied one (again see for example [25,Chapter 3] or [33,Chapter 3] for a discussion of this).To get around the problem, a common approach ( [13,18,30]) is to consider random noises that are smoother than white noise, namely a Gaussian noise that is white in time but has a smooth spatial covariance.Such random noise is known as either spatially colored or spatially homogeneous white-noise.One can then formulate conditions on the covariance function to ensure that real-valued Hölder continuous solutions to the specific SPDE exist.
It should also be mentioned, as remarked in [13], that in trying to model physical situations, there is some evidence that white-noise smoothed in the spatial direction is more natural, since spatial correlations are typically of a much larger order of magnitude than time correlations.
In the stochastic neural field case, since we have no second order differential operator, our solution will only ever be as smooth as the noise itself.We therefore look to add a noise term to (3.1) that is at least Hölder continuous in the spatial direction, and then proceed to look for solutions to the resulting equation in the sense of Walsh.
The section is structured as follows. First we briefly introduce Walsh's theory of stochastic integration with respect to martingale measures, for which the classical reference is [33], although we instead follow closely the more recent exposition given by D. Khoshnevisan in [12]. This theory will be needed to give a rigorous meaning to the stochastic integral in our definition of a solution to the neural field equation. We then introduce the spatially smoothed space-time white noise that we will consider, before finally applying the theory to analyze solutions of the neural field equation driven by this spatially smoothed noise under certain conditions.

Integration with respect to the white noise process
Consider the centered Gaussian random field Ẇ = (Ẇ(A))_{A∈B(R₊×R^N)} with covariance E[Ẇ(A)Ẇ(B)] = |A ∩ B|, where |A ∩ B| denotes the Lebesgue measure of A ∩ B. We say that Ẇ is a white noise on R₊ × R^N. We then define the white noise process W_t(A) := Ẇ([0,t] × A) for all t ≥ 0 and A ∈ B(R^N), and we suppose that (W_t(A))_{t≥0} is adapted to the filtration (F_t)_{t≥0} for all A ∈ B(R^N).
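As a sanity check (our own numerical illustration, not part of the paper), the defining covariance can be verified on a discretization: partition [0,1] × [0,2] into cells, give each cell an independent centered Gaussian mass with variance equal to the cell area, and sum the masses lying in [0,t] × A to approximate W_t(A); its variance should then be close to t|A|. The grid sizes, the time t and the set A below are all arbitrary choices.

```python
import numpy as np

# Discretize [0, 1] x [0, 2]; white noise assigns each cell an independent
# N(0, dt * dx) mass, and W_t(A) is the total mass of the cells in [0, t] x A.
rng = np.random.default_rng(0)
nt, nx, n_samples = 50, 50, 10000
dt, dx = 1.0 / nt, 2.0 / nx

masses = rng.normal(0.0, np.sqrt(dt * dx), size=(n_samples, nt, nx))

t = 0.5                       # evaluate W_t(A) at this time
A = (0.25, 1.25)              # |A| = 1.0
it = round(t / dt)
jx = slice(round(A[0] / dx), round(A[1] / dx))
W_t_A = masses[:, :it, jx].sum(axis=(1, 2))

print(W_t_A.mean())           # should be close to 0 (centered)
print(W_t_A.var())            # should be close to t * |A| = 0.5
```

The empirical variance matches t|A| up to Monte Carlo error, consistent with the covariance E[W_t(A)W_t(B)] = t|A ∩ B| specialized to A = B.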
We would like to build up a theory of stochastic integration with respect to this process. With this in mind, one may hope that W_t is a signed measure on R^N for all t > 0. However, for all such t the total variation of W_t is almost surely infinite (see [12, Chapter 1, Exercise 3.16]), so that W_t fails to be a σ-finite signed measure with positive probability.
On the other hand, we can prove that W_t(∪_{n=1}^∞ A_n) = Σ_{n=1}^∞ W_t(A_n) almost surely for any sequence of disjoint sets (A_n)_{n≥1} ⊂ B(R^N) and all t ≥ 0, and that the sum is convergent in L²(Ω, F, P) (see [12, Chapter 1, Lemma 5.3]). In this sense W_t is instead an L²(Ω, F, P)-valued (random) measure, i.e. W_t : B(R^N) → L²(Ω, F, P). Moreover, it is straightforward to show that for all A ∈ B(R^N), (W_t(A))_{t≥0} is a centered martingale (with respect to (F_t)_{t≥0}). In summary, the white noise process W := (W_t(A))_{t≥0, A∈B(R^N)} is such that (i) W_0(A) = 0 almost surely, for all A ∈ B(R^N); (ii) for all t > 0, W_t is a σ-finite L²(Ω, F, P)-valued signed measure; (iii) for all A ∈ B(R^N), (W_t(A))_{t≥0} is a centered martingale with respect to the filtration (F_t)_{t≥0}.
In general, a family of random variables indexed by t ≥ 0 and A ∈ B(R^N) satisfying (i)-(iii) is defined to be a martingale measure (with respect to (F_t)_{t≥0}). One can in fact build stochastic integrals with respect to general martingale measures under a condition known as worthiness ([33, Chapter 2]). However, for our needs and to keep things simple, we concentrate on this construction for the white noise process (which turns out to be worthy).
Indeed, starting with elementary functions of the form f(s, x, ω) = X(ω)1_{(a,b]}(s)1_A(x), where a, b ∈ R₊, X : Ω → R is bounded and F_a-measurable, and A ∈ B(R^N), we first define the stochastic integral process of f with respect to W as (f • W)_t(B) := X [W_{t∧b}(A ∩ B) − W_{t∧a}(A ∩ B)], B ∈ B(R^N). It is important to note that f • W is itself a new martingale measure (exactly as Itô integrals with respect to martingales are martingales). As usual we then build up the definition to accommodate integrands that are linear combinations of elementary functions. We call such functions simple functions, and denote the set of all simple functions by S. We will also say that a function (t, x, ω) → f(t, x, ω) is predictable if it is measurable with respect to the σ-algebra generated by S, which we denote by P. In other words, P is the smallest σ-algebra on R₊ × R^N × Ω such that all simple functions are measurable.
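For such an elementary f the integral is explicit, so the resulting isometry E[((f • W)_t(B))²] = X²(t∧b − t∧a)|A ∩ B| (here with X deterministic, hence trivially F_a-measurable) can be checked numerically on the same kind of grid as before. All numerical values below are our own illustrative choices, not from the paper.

```python
import numpy as np

# f(s, y) = X * 1_{(a,b]}(s) * 1_A(y) with X a deterministic constant, so
# (f.W)_t(B) = X * (W_{t^b}(A n B) - W_{t^a}(A n B)); its variance should be
# X^2 * (t^b - t^a) * |A n B|.
rng = np.random.default_rng(1)
nt, nx, n_samples = 25, 100, 5000
dt, dx = 1.0 / nt, 2.0 / nx          # grid on [0, 1] x [0, 2]
masses = rng.normal(0.0, np.sqrt(dt * dx), size=(n_samples, nt, nx))

X, a, b, t = 2.0, 0.2, 0.6, 0.8
A, B = (0.0, 1.0), (0.5, 2.0)
lo, hi = round(min(t, a) / dt), round(min(t, b) / dt)
j = slice(round(max(A[0], B[0]) / dx), round(min(A[1], B[1]) / dx))  # A n B
fW = X * masses[:, lo:hi, j].sum(axis=(1, 2))

predicted = X**2 * (min(t, b) - min(t, a)) * (min(A[1], B[1]) - max(A[0], B[0]))
print(fW.var(), predicted)           # the two values should nearly agree
```

Here predicted = 4 × 0.4 × 0.5 = 0.8, and the sample variance of the discretized integral agrees with it up to Monte Carlo error.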
As with the construction of the Itô integral, to go beyond linear combinations of elementary functions the quadratic variation process plays a role. Indeed, we define Q_W([0,t] × A × B) := ⟨W(A), W(B)⟩_t for t ≥ 0 and A, B ∈ B(R^N), where ⟨·, ·⟩_t is the standard cross-variation process between two martingales. The point is that this process defines a σ-finite measure on R₊ × R^N × R^N, since by (3.4) Q_W([0,t] × A × B) = t|A ∩ B| = ∫_0^t ∫_A ∫_B δ_0(x − y) dx dy ds, where δ_0 is the Dirac delta function.
Remark 3.1. It should be noted that when building stochastic integrals with respect to a general martingale measure, one needs to impose extra conditions in order to ensure that the quadratic variation process can be associated with a σ-finite measure. It is at this point that the aforementioned worthiness property is needed.
To proceed, we now fix a finite horizon T > 0 and define the norm ‖f‖²_W := E[∫_0^T ∫_{R^N} f²(s, y) dy ds] for any predictable function f. Then let P_W be the set of all predictable functions f for which ‖f‖_W < ∞.
The following proposition and theorem complete the construction of the stochastic integral with respect to W. For proofs we refer to [33, Proposition 2.3 and Theorem 2.5] respectively (which are written for the general case of a worthy martingale measure).

Proposition 3.2. The space P_W equipped with the norm ‖·‖_W is a Banach space. Moreover, the space of simple functions S is dense in P_W.

Theorem 3.3. For all f ∈ P_W, f • W can be well-defined as the L²(Ω, F, P)-limit of martingale measures f_n • W, for an approximating sequence {f_n}_{n≥1} ⊂ S in the norm ‖·‖_W. Moreover, f • W is a martingale measure such that for all t ∈ (0, T] and A, B ∈ B(R^N), ⟨(f • W)(A), (f • W)(B)⟩_t = ∫_0^t ∫_{A∩B} f²(s, y) dy ds, and E[((f • W)_t(A))²] = E[∫_0^t ∫_A f²(s, y) dy ds]. (3.9)

The L^p-version of (3.9) is known as Burkholder's inequality and will be useful:

Theorem 3.4 (Burkholder's inequality). For all p ≥ 2 there exists a constant c_p such that for all f ∈ P_W, t ∈ (0, T] and A ∈ B(R^N), E[|(f • W)_t(A)|^p] ≤ c_p E[(∫_0^t ∫_A f²(s, y) dy ds)^{p/2}].
We now adopt the more standard notation and set ∫_0^t ∫_A f(s, y) W(ds, dy) := (f • W)_t(A) for all f ∈ P_W, t ≥ 0 and A ∈ B(R^N).

Spatially smoothed space-time white noise
Let W = (W_t(A))_{t≥0, A∈B(R^N)} be a white noise process adapted to (F_t)_{t≥0} as defined in the previous section. For ϕ ∈ L²(R^N), we can well-define the (Gaussian) random field (W^ϕ(t, x))_{t≥0, x∈R^N} by
W^ϕ(t, x) := ∫_0^t ∫_{R^N} ϕ(x − y) W(ds, dy). (3.11)
To see this, one just needs to check that ϕ(x − ·) ∈ P_W for every x, where, as above, P_W is the set of all predictable functions f for which ‖f‖_W < ∞. The function ϕ(x − ·) is clearly predictable for each x (since it is non-random), and for every T > 0, ‖ϕ(x − ·)‖²_W = T‖ϕ‖²_{L²(R^N)} < ∞, so that the integral in (3.11) is indeed well-defined in the sense of the above construction. Moreover, (W^ϕ(t, x))_{t≥0} is a centered martingale for each x ∈ R^N with respect to (F_t)_{t≥0} (by the properties of martingale measures), and has spatial covariance E[W^ϕ(t, x)W^ϕ(t, x′)], which by equation (3.10), Theorem 3.3 and equation (3.7) is equal to t(ϕ ⋆ ϕ̃)(x − x′), where ⋆ denotes the convolution operator as usual and ϕ̃(x) = ϕ(−x). In this sense the noise is again spatially correlated. The regularity in time of this process is the same as that of a Brownian path:

Lemma 3.5. For any x ∈ R^N, the path t → W^ϕ(t, x) has an η-Hölder continuous modification for any η ∈ (0, 1/2).
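The covariance formula t(ϕ ⋆ ϕ̃)(x − x′) can also be seen numerically (again our own illustration, with N = 1 and an arbitrary Gaussian bump for ϕ): discretize the white noise into independent cell masses, smooth them against ϕ, and compare the empirical spatial covariance with t ∫ ϕ(x − y)ϕ(x′ − y) dy.

```python
import numpy as np

# W^phi(t, x) = int_0^t int_R phi(x - y) W(ds, dy) on a grid; its spatial
# covariance should be E[W^phi(t,x) W^phi(t,x')] = t * (phi * phi~)(x - x'),
# i.e. t * int phi(x - y) phi(x' - y) dy.  phi here is an arbitrary bump.
rng = np.random.default_rng(2)
nt, ny, n_samples = 20, 100, 5000
dt, dy = 0.05, 0.1                        # time grid on [0,1], space on [-5,5)
y = np.linspace(-5.0, 5.0, ny, endpoint=False)
phi = lambda u: np.exp(-u**2)

dW = rng.normal(0.0, np.sqrt(dt * dy), size=(n_samples, nt, ny))
x1, x2, t = 0.0, 1.0, nt * dt
Wphi1 = (phi(x1 - y) * dW).sum(axis=(1, 2))   # samples of W^phi(t, x1)
Wphi2 = (phi(x2 - y) * dW).sum(axis=(1, 2))   # samples of W^phi(t, x2)

empirical = (Wphi1 * Wphi2).mean()
predicted = t * (phi(x1 - y) * phi(x2 - y)).sum() * dy
print(empirical, predicted)                    # should nearly agree
```

Note the smoothed field at nearby points is strongly correlated even though the driving noise is uncorrelated cell by cell; this is exactly the spatial regularization discussed above.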
Proof. For x ∈ R^N, s, t ≥ 0 with s ≤ t and any p ≥ 2 we have by Burkholder's inequality (Theorem 3.4 above) that E[|W^ϕ(t, x) − W^ϕ(s, x)|^p] ≤ c_p (t − s)^{p/2} ‖ϕ‖^p_{L²(R^N)}. The result follows from the standard Kolmogorov continuity theorem (see for example Theorem 4.3 of [12, Chapter 1]).
More importantly, if we impose some (very weak) regularity on ϕ then W^ϕ inherits some spatial regularity:

Lemma 3.6. Suppose that there exists a constant C_ϕ such that ‖τ_z(ϕ) − ϕ‖_{L²(R^N)} ≤ C_ϕ|z|^α for all z ∈ R^N (3.12) and some α ∈ (0, 1], where τ_z indicates the shift-by-z operator (so that τ_z(ϕ)(y) := ϕ(y + z) for all y, z ∈ R^N). Then for all t ≥ 0, the map x → W^ϕ(t, x) has an η-Hölder continuous modification, for any η ∈ (0, α).
Proof. For x, x′ ∈ R^N, t ≥ 0, and any p ≥ 2 we have (again by Burkholder's inequality) that E[|W^ϕ(t, x) − W^ϕ(t, x′)|^p] ≤ c_p t^{p/2} ‖τ_{x−x′}(ϕ) − ϕ‖^p_{L²(R^N)} ≤ c_p t^{p/2} C_ϕ^p |x − x′|^{αp}. The result follows by Kolmogorov's continuity theorem.
Remark 3.7. The condition (3.12) with α = 1 holds if and only if the function ϕ is in the Sobolev space W^{1,2}(R^N) ([9, Proposition 9.3]). When α < 1, the set of functions ϕ ∈ L²(R^N) which satisfy (3.12) forms a Banach space denoted by N^{α,2}(R^N), known as the Nikolskii space. This space is closely related to the more familiar fractional Sobolev space W^{α,2}(R^N), though the two are not identical. We refer to [31] for a detailed study of such spaces and their relationships. An example where (3.12) holds with α = 1/2 is obtained by taking ϕ to be an indicator function. In this way we see that (3.12) is a rather weak condition.
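The indicator example can be made concrete (our own illustration): for ϕ = 1_{[0,1)} one has ‖τ_z(ϕ) − ϕ‖²_{L²} = 2|z| for |z| ≤ 1, since the shifted and unshifted intervals differ on a set of measure 2|z|; hence (3.12) holds with α = 1/2 and C_ϕ = √2. A quick numerical confirmation:

```python
import numpy as np

# For phi = 1_{[0,1)}, ||tau_z(phi) - phi||_{L^2}^2 = 2|z| when |z| <= 1,
# so (3.12) holds with alpha = 1/2 and C_phi = sqrt(2).
y = np.linspace(-2.0, 3.0, 500_000, endpoint=False)
dy = y[1] - y[0]
phi = lambda u: ((u >= 0.0) & (u < 1.0)).astype(float)

for z in [0.01, 0.1, 0.5]:
    lhs = np.sqrt(((phi(y + z) - phi(y)) ** 2).sum() * dy)
    print(z, lhs, np.sqrt(2 * z))   # the last two columns should agree
```

Since √(2|z|) is not O(|z|^α) near 0 for any α > 1/2, the indicator genuinely has Nikolskii regularity 1/2 and no more.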

The stochastic neural field equation driven by spatially smoothed space-time white noise
We now have everything in place to define and study the solution to the stochastic neural field equation driven by a spatially smoothed space-time white noise. Indeed, consider the equation
∂_t Y(t, x) = −Y(t, x) + ∫_{R^N} w(x, y)G(Y(t, y)) dy + σ(Y(t, x)) ∂_t W^ϕ(t, x), (3.13)
with initial condition Y(0, x) = Y_0(x), for x ∈ R^N and t ≥ 0, where (W^ϕ(t, x))_{t≥0, x∈R^N} is the spatially smoothed space-time white noise defined by (3.11) for some ϕ ∈ L²(R^N), G : R → R is the nonlinear gain function, assumed bounded and globally Lipschitz, and σ : R → R is globally Lipschitz with linear growth. Although the above equation is not well-defined (∂_t W^ϕ(t, x) does not exist), we will interpret a solution to (3.13) in the following way.

Definition 3.8. By a solution to (3.13) we will mean a real-valued predictable (i.e. P-measurable) adapted random field (Y(t, x))_{t≥0, x∈R^N} such that almost surely, for all t ≥ 0 and x ∈ R^N,
Y(t, x) = e^{−t}Y_0(x) + ∫_0^t e^{−(t−s)} ∫_{R^N} w(x, y)G(Y(s, y)) dy ds + ∫_0^t ∫_{R^N} e^{−(t−s)} σ(Y(s, x)) ϕ(x − y) W(ds, dy), (3.14)
where the stochastic integral term is understood in the sense described in Section 3.1.
Once again we are interested in the conditions on the neural field kernel w that allow us to prove the existence of solutions in this new sense. Recall that in Section 2 we required either conditions (C1) and (C2) or (C1') and (C2') to be satisfied. The difficulty was to keep everything well-behaved in the Hilbert space L²(R^N) (or L²(R^N, ρ)). However, when looking for solutions in the sense of random fields (Y(t, x))_{t≥0, x∈R^N} such that (3.14) is satisfied, such restrictions are no longer needed, principally because we no longer have to concern ourselves with the behavior in space at infinity. Indeed, in this section we simply work with the condition (C2'), i.e. that (y → w(x, y)) ∈ L¹(R^N) for all x ∈ R^N, and sup_{x∈R^N} ‖w(x, ·)‖_{L¹(R^N)} ≤ C_w for some constant C_w.

Theorem 3.9. Suppose that the map x → Y_0(x) is Borel measurable almost surely, that Y_0(x) is F_0-measurable for all x ∈ R^N, and that sup_{x∈R^N} E[|Y_0(x)|²] < ∞. Suppose moreover that the neural field kernel w satisfies condition (C2'). Then there exists an almost surely unique predictable random field (Y(t, x))_{t≥0, x∈R^N} which is a solution to (3.13) in the sense of Definition 3.8 and satisfies
sup_{t∈[0,T], x∈R^N} E[|Y(t, x)|²] < ∞ (3.15)
for any T > 0.
Proof. The proof proceeds in a classical way, but where we are careful to interpret all stochastic integrals as described in Section 3.1.
Uniqueness: Suppose that (Y(t, x))_{t≥0, x∈R^N} and (Z(t, x))_{t≥0, x∈R^N} are both solutions to (3.13) in the sense of Definition 3.8. Let D(t, x) = Y(t, x) − Z(t, x) for x ∈ R^N and t ≥ 0. Then from (3.14) we have
E[|D(t, x)|²] ≤ 2E[(∫_0^t e^{−(t−s)} ∫_{R^N} w(x, y)[G(Y(s, y)) − G(Z(s, y))] dy ds)²] + 2‖ϕ‖²_{L²(R^N)} ∫_0^t e^{−2(t−s)} E[|σ(Y(s, x)) − σ(Z(s, x))|²] ds,
where we have used Cauchy-Schwarz and the L²-version of Burkholder's inequality (3.9). Thus, using the Lipschitz property of σ and G,
E[|D(t, x)|²] ≤ 2C_G² E[(∫_0^t ∫_{R^N} |w(x, y)||D(s, y)| dy ds)²] + 2C_σ²‖ϕ‖²_{L²(R^N)} ∫_0^t E[|D(s, x)|²] ds.
By the Cauchy-Schwarz inequality once again,
E[(∫_0^t ∫_{R^N} |w(x, y)||D(s, y)| dy ds)²] ≤ tC_w ∫_0^t ∫_{R^N} |w(x, y)| E[|D(s, y)|²] dy ds,
which is finite since we are assuming Y and Z satisfy (3.15). Writing H(s) := sup_{x∈R^N} E[|D(s, x)|²], the above estimates combine to give H(t) ≤ C(t) ∫_0^t H(s) ds with C(t) locally bounded. An application of Gronwall's lemma then yields sup_{s≤t} H(s) = 0 for all t ≥ 0. Hence Y(t, x) = Z(t, x) almost surely for all t ≥ 0, x ∈ R^N.
Existence: We construct a solution via the standard Picard iteration scheme: set Y⁰(t, x) := Y_0(x) and, for n ≥ 0, define
Y^{n+1}(t, x) = e^{−t}Y_0(x) + ∫_0^t e^{−(t−s)} ∫_{R^N} w(x, y)G(Y^n(s, y)) dy ds + ∫_0^t ∫_{R^N} e^{−(t−s)} σ(Y^n(s, x)) ϕ(x − y) W(ds, dy). (3.16)
We first check that the stochastic integral is well-defined, under the assumption that sup_{t∈[0,T], x∈R^N} E[|Y^n(t, x)|²] < ∞ for any T > 0, which we know is true for n = 0 by assumption, and which we show by induction below to be true for each integer n ≥ 1. To this end, for any T > 0,
E[∫_0^T ∫_{R^N} e^{−2(t−s)} σ²(Y^n(s, x)) ϕ²(x − y) dy ds] ≤ 2C_σ²‖ϕ‖²_{L²(R^N)} T (1 + sup_{s∈[0,T], x∈R^N} E[|Y^n(s, x)|²]) < ∞.
This shows that the integrand in the stochastic integral is in the space P_W (for all T > 0), which in turn implies that the stochastic integral in the sense of Walsh is indeed well-defined (by Theorem 3.3).
To be rigorous, we must moreover check that the deterministic integral in (3.16) is well-defined. When n = 0, this follows from the fact that x → Y_0(x) is Borel measurable almost surely. For n ≥ 1, we use the fact that the stochastic convolution is predictable, so that for fixed t the map x → Y^n(t, x) is Borel measurable almost surely. Now, similarly, we can find a constant C_t such that the analogous bound holds for any x ∈ R^N and s ∈ [0, t]; using this in (3.18), we see that the resulting estimate holds for all t ≥ 0. This is sufficient to see that (3.17) holds uniformly for all t ≥ 0 and x ∈ R^N. Thus taking the limit as n → ∞ in (3.16) (in the L²(Ω, F, P) sense) proves that (Y(t, x))_{t≥0, x∈R^N} does indeed satisfy (3.14) almost surely. The fact that it is predictable and adapted follows from the construction.
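To illustrate the object just constructed, the following is an explicit Euler-Maruyama discretization of (3.13) on a finite 1-D grid, a sketch under our own illustrative assumptions (not the paper's parameters): Gaussian kernel w, sigmoid gain G, constant σ, Gaussian ϕ and zero initial condition. The smoothed noise increment is obtained, as in (3.11), by convolving independent white-noise cell masses with ϕ.

```python
import numpy as np

# Explicit Euler-Maruyama sketch of (3.13) on a finite 1-D grid (boundary
# effects ignored).  All modeling choices below are illustrative.
rng = np.random.default_rng(3)
nx, L = 128, 20.0
dx = L / nx
x = np.arange(nx) * dx - L / 2
dt, n_steps = 0.01, 500

w_mat = np.exp(-np.subtract.outer(x, x) ** 2)     # kernel w(x, y)
phi_mat = np.exp(-np.subtract.outer(x, x) ** 2)   # smoothing kernel phi(x - y)
G = lambda u: 1.0 / (1.0 + np.exp(-u))            # bounded Lipschitz gain
sigma = 0.1                                       # constant diffusion coefficient

Y = np.zeros(nx)                                  # initial condition Y_0 = 0
for _ in range(n_steps):
    drift = -Y + (w_mat @ G(Y)) * dx              # -Y + int w(x,y) G(Y(t,y)) dy
    dW = rng.normal(0.0, np.sqrt(dt * dx), size=nx)   # white-noise cell masses
    dW_phi = phi_mat @ dW                         # increment of W^phi(t, x)
    Y = Y + drift * dt + sigma * dW_phi

print(Y.shape, bool(np.isfinite(Y).all()))
```

Under these choices the field relaxes toward a spatially smooth random perturbation of the deterministic steady state; refining the grid does not roughen the solution, in line with the fact that the smoothed noise (unlike pure space-time white noise) yields function-valued solutions.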
In a very similar way, one can also prove that the solution remains L^p-bounded whenever the initial condition is L^p-bounded for any p > 2. Moreover, this also allows us to conclude that the solution has time-continuous paths for all x ∈ R^N.

Theorem 3.10. Suppose that we are in the situation of Theorem 3.9, but in addition we have that sup_{x∈R^N} E[|Y_0(x)|^p] < ∞ for some p > 2. Then the solution (Y(t, x))_{t≥0, x∈R^N} is L^p-bounded on [0, T] × R^N for any T > 0, and the map t → Y(t, x) has a continuous version for all x ∈ R^N.

Proof. The proof of the first part of this result uses similar techniques as in the proof of Theorem 3.9 in order to bound E[|Y(t, x)|^p] uniformly in t ∈ [0, T] and x ∈ R^N. In particular, we use the form of Y(t, x) given by (3.14), Burkholder's inequality (see Theorem 3.4), Hölder's inequality and Gronwall's lemma, as well as the conditions imposed on w, σ, G and ϕ.
For the time continuity, we again use similar techniques to achieve the bound E[|Y(t, x) − Y(s, x)|^p] ≤ C_T^{(p)} |t − s|^{p/2} for all s, t ∈ [0, T] with s ≤ t and x ∈ R^N, for some constant C_T^{(p)}. The results then follow from Kolmogorov's continuity theorem once again. Now let p ≥ 2. The aim is to estimate E[|Y(t, x) − Y(t, x′)|^p] for x, x′ ∈ R^N and then to use Kolmogorov's theorem to get the stated spatial regularity. To this end, we first estimate the contribution of the deterministic integrals, where we have used (C3'). Moreover, by Hölder's and Burkholder's inequalities once again, we obtain the analogous estimate for the stochastic integrals, for all x, x′ ∈ R^N and p ≥ 2. Combining these bounds, we note that the right-hand side is finite thanks to the first part of the theorem.
where the last line follows from our assumptions on Y_0 and by adjusting the constant C_T^{(p)}. This bound holds for all t ≥ 0, x, x′ ∈ R^N and p ≥ 2. The proof is then completed using Gronwall's inequality, and Kolmogorov's continuity theorem once again.

Comparison of the two approaches
The purpose of this section is to compare the two different approaches taken in Sections 2 and 3 above to make sense of the stochastic neural field equation.
Our starting point is the random field solution, given by Theorem 3.9. Suppose that the conditions of Theorem 3.9 are satisfied (i.e. ϕ ∈ L²(R^N), σ : R → R Lipschitz, G : R → R Lipschitz and bounded, w satisfies (C2'), together with the given assumptions on the initial condition). Then, by that result, there exists a unique random field (Y(t, x))_{t≥0, x∈R^N} satisfying
Y(t, x) = e^{−t}Y_0(x) + ∫_0^t e^{−(t−s)} ∫_{R^N} w(x, y)G(Y(s, y)) dy ds + ∫_0^t ∫_{R^N} e^{−(t−s)} σ(Y(s, x)) ϕ(x − y) W(ds, dy) (4.1)
such that
sup_{t∈[0,T], x∈R^N} E[|Y(t, x)|²] < ∞ (4.2)
for all T > 0, and we say that (Y(t, x))_{t≥0, x∈R^N} is the random field solution to the stochastic neural field equation. The relationship between this random field solution to the stochastic neural field equation, and a solution constructed as a Hilbert space valued process according to Section 2, is given in the following theorem.

Theorem 4.1. Suppose the conditions of Theorem 3.9 and Theorem 3.10 are satisfied. Moreover suppose that (C1') is satisfied for some ρ_w ∈ L¹(R^N), and fix U = L²(R^N) and H = L²(R^N, ρ_w). Then the random field (Y(t, x))_{t≥0} satisfying (4.1) and (4.2) is such that (Y(t))_{t≥0} := (Y(t, ·))_{t≥0} is the unique mild H-valued solution to the stochastic evolution equation
dY(t) = [−Y(t) + F(Y(t))] dt + B(Y(t)) dW(t), (4.3)
where (W(t))_{t≥0} is a U-valued Q-Wiener process, B : H → L_0(U, H) is given by B(h)(u)(x) = σ(h(x)) ∫_{R^N} ϕ(x − y)u(y) dy, and F : H → H is given by F(h)(x) = ∫_{R^N} w(x, y)G(h(y)) dy.

As noted in Remark 4.2, the noise term in (4.3) is not generally of the form σ(Y(t)) • B dW(t) with B : U → H given by Bu(x) = ∫_{R^N} ϕ(x − y)u(y) dy for some ϕ ∈ L¹(R^N). Indeed, even if the ϕ we consider in (4.1) is also in L¹(R^N), in order for the noise term B(Y(t))dW(t) in (4.3) to have this structure, since B(h)(u)(x) = σ(h(x))B(u)(x) for all h ∈ H, u ∈ U and x ∈ R^N, we would require σ to be interpreted as the multiplication map σ(h)(v)(x) = σ(h(x))v(x). This is not a well-defined map H → L_0(H) unless σ is bounded. However, in the case when σ is bounded (for example σ ≡ 1), under the assumptions that ϕ ∈ L¹(R^N) ∩ L²(R^N) and w satisfies (C1'), the above result shows that the random field solution is a special case of the H-valued solution to (2.7) constructed in Section 2.4, where the noise term is given by σ(Y(t)) • BdW(t).
Proof of Theorem 4.1. We first check that (4.3) does indeed have a mild solution according to Theorem 2.4. This does not follow directly from Theorem 2.6 (see Remark 4.2). However, under the current assumptions, it is easy to check that conditions (H1)-(H4) of Theorem 2.4 are satisfied. Indeed, we can follow most of the proof of Theorem 2.6, inserting where necessary the facts that B : H → L_0(U, H) is well-defined (since B(h)(u) ∈ H for any h ∈ H and u ∈ U) and that it is Lipschitz, i.e. ‖B(h_1) − B(h_2)‖_{L_0(U,H)} ≤ C‖h_1 − h_2‖_H for all h_1, h_2 ∈ H. Finally, the assumptions on the initial condition allow us to apply Theorem 2.4.
The proof of the result involves some technical definition chasing, and in fact is contained, rather implicitly, in [14]. It is for this reason that we carry out the proof explicitly in our situation, closely following [14, Proposition 4.10]. The most important point is to relate the stochastic integrals that appear in the two different formulations of a solution. To this end, define
I(t, x) := ∫_0^t ∫_{R^N} e^{−(t−s)} σ(Y(s, x)) ϕ(x − y) W(ds, dy),
the Walsh integral that appears in the random field solution (4.1). Our aim is to show that I(t, ·) = ∫_0^t e^{−(t−s)} B(Y(s)) dW(s) as elements of H, where the integral on the right-hand side is the H-valued stochastic integral which appears in the mild formulation of a solution to (4.3), defined according to Definition 2.3.
Step 1: Adapting Proposition 2.6 of [14] very slightly, we have that the Walsh integral I(t, x) can be written as an integral with respect to the cylindrical Wiener process W = {W_t(u) : t ≥ 0, u ∈ U} with covariance Id_U. Precisely, we have for all t ≥ 0, x ∈ R^N,
I(t, x) = ∫_0^t g_s^{t,x} dW_s, where g_s^{t,x}(y) := e^{−(t−s)} σ(Y(s, x)) ϕ(x − y), y ∈ R^N,
which is in L²(Ω × [0, T]; U) for any T > 0 thanks to (4.2). By definition, the integral with respect to the cylindrical Wiener process W is given by
∫_0^t g_s^{t,x} dW_s = Σ_{k=1}^∞ ∫_0^t ⟨g_s^{t,x}, e_k⟩_U dβ_k(s),
where {e_k}_{k=1}^∞ is a complete orthonormal basis for U, and (β_k(t))_{t≥0} := (W_t(e_k))_{t≥0} are independent real-valued Brownian motions. This series is convergent in L²(Ω).
Step 2: Fix arbitrary T > 0. As in Section 3.5 of [14], we can consider the process {W(t), t ∈ [0, T]} defined by W(t) := Σ_{k=1}^∞ β_k(t) J(e_k), where J : U → U is a Hilbert-Schmidt operator. W(t) takes its values in U, where it is a Q(= JJ*)-Wiener process with Tr(Q) < ∞ (Proposition 3.6 of [14]). We then define Φ_s^{t,x}(u) := ⟨g_s^{t,x}, J^{−1}(u)⟩_U, which takes values in R. Proposition 3.10 of [14] tells us that the process {Φ_s^{t,x}, s ∈ [0, T]} defines a predictable process with values in L_2(U, R) and that its integral against W coincides with ∫_0^t g_s^{t,x} dW_s, where the integral on the left is defined according to Section 2.2, with values in R.
Step 3: We now note that the original Walsh integral I(t, ·) in fact takes values in H. Indeed, because of (3.9), E[‖I(t, ·)‖²_H] < ∞, again thanks to (4.2). Hence I(t, ·) takes values in H, and we can therefore expand it by (4.6) along a complete orthonormal basis {f_j}_{j=1}^∞ of H, and then conclude by using (4.5).

Step 4: To conclude, it suffices to note that the pathwise integrals in (4.1) and in the mild H-valued solution to (4.3) coincide as elements of H. Indeed, it is clear that, by definition of F,
∫_0^t e^{−(t−s)} ∫_{R^N} w(·, y)G(Y(s, y)) dy ds = ∫_0^t e^{−(t−s)} F(Y(s)) ds,
where the latter is an element of H.

Conclusion
We have here explored two alternative ways to give a mathematically precise meaning to the notion of a stochastic neural field. Both of these approaches have been used previously, without theoretical justification, by scientists in the field of theoretical neuroscience. Indeed, the approach using the theory of Hilbert space valued processes presented by Da Prato and Zabczyk (analyzed in Section 2) is adopted in [29], while we argue that the random field approach is the one implicitly used by Bressloff, Ermentrout and their associates in [7,8,21]. The difference between the two constructions is completely determined by the type of noise that one wishes to consider in the neural field equation, which may give rise to inherently different solutions. The advantage of the construction of a solution as a stochastic process taking values in a Hilbert space, carried out in Section 2, is that it allows one to consider more general diffusion coefficients (see Remark 4.2). Moreover, our construction using this approach can also handle a noise term that has no spatial correlation, i.e. a pure space-time white noise, by taking the correlation function ϕ to be a Dirac mass (see Section 2.5). A disadvantage is that we have to be careful to impose conditions which control the behavior of the solution in space at infinity and guarantee the integrability of the solution. In particular, we require that the connectivity function w either satisfies the strong conditions (C1) and (C2), or the weaker but harder to check conditions (C1') and (C2').
On the other hand, the advantage of the random field approach developed in Section 3 is that one no longer needs to control what happens at infinity. We therefore require fewer conditions on the connectivity function w to ensure the existence of a solution ((C2') is sufficient; see Theorem 3.9). Moreover, with this approach it is easier to write down conditions that guarantee the existence of a solution that is continuous in both space and time (as opposed to the Hilbert space approach, where spatial regularity is somewhat hidden). However, in order to avoid non-physical distribution-valued solutions, we had to impose a priori some extra spatial regularity on the noise (see Section 3.2).
The relationship between the two approaches is summarized in Section 4, where we showed that if we impose the extra condition (C1') to ensure integrability, it is possible to reinterpret the random field solution as a Hilbert space valued process that satisfies an infinite-dimensional stochastic evolution equation, though with a subtly different noise term from those of the equations originally considered. Nonetheless, we are able to see that when σ : R → R is bounded, the random field solution is in fact a special case of the Hilbert space valued solution constructed in Section 2.4 (under the additional condition that ϕ ∈ L¹(R^N) ∩ L²(R^N); see Remark 4.2).
Our main conclusion here is thus that the approach to take really does depend on the end goal. If one is interested in very general diffusion coefficients, and has some strong decay properties on w, the infinite-dimensional Hilbert space approach is well suited. On the other hand, if one is interested in spatially regular solutions, does not wish to impose such strong decay properties on w, and is content with the addition of less general and more regular noise terms, the random field approach should be taken.
We end with a word about the applicability of our results. Neural field equations are commonly encountered in neuroscience when modeling brain areas. In practice one is often interested in modeling two- or three-dimensional pieces of cortex whose size is large with respect to that of the support of the connectivity kernel w. It is often necessary to extend the physical space where the brain tissues are located with representation spaces that account for the computations performed by the neurons. For example, in visual perception, features such as disparity (related to depth perception), velocity (related to visual motion perception), or color can be represented by points in R² and R³. In sound perception, the local Fourier analysis performed by the cochlea is represented by a spatial distribution of points in C. Other examples can be found in the motor cortex, where the neurons preparing for an action of part of the body store representations for driving the effector muscles that are naturally represented by points in some R^N.
If the size of the connectivity kernel becomes comparable to that of the considered brain area, or if the feature space is naturally bounded (e.g. visual orientations, rotation angles for effectors), it becomes more natural to work with a bounded subset of R^N with periodic or zero boundary conditions. However, both our approaches still apply in this setup (and are in fact easier to justify).
• (W^ϕ(t, x))_{t≥0, x∈R^N} is the spatially smoothed space-time white noise (adapted to (F_t)_{t≥0}) defined by (3.11) for some ϕ ∈ L²(R^N);
• G : R → R is the nonlinear gain function, assumed to be bounded and globally Lipschitz, i.e. such that there exists a constant C_G with sup_{a∈R} |G(a)| ≤ C_G and |G(a) − G(b)| ≤ C_G|a − b| for all a, b ∈ R, as above;
• σ : R → R is globally Lipschitz, i.e. there exists a constant C_σ such that |σ(a) − σ(b)| ≤ C_σ|a − b| and |σ(a)| ≤ C_σ(1 + |a|) for all a, b ∈ R.
sup_{x∈R^N} E[|Y_0(x)|^p] < ∞ for some p > 2. Then the solution (Y(t, x))_{t≥0, x∈R^N} to (3.13) in the sense of Definition 3.8 is L^p-bounded on [0, T] × R^N for any T, i.e. sup_{t∈[0,T], x∈R^N} E[|Y(t, x)|^p] < ∞, and the map t → Y(t, x) has a continuous version for all x ∈ R^N. If the initial condition has finite p-moments for all p > 2, then t → Y(t, x) has an η-Hölder continuous version, for any η ∈ (0, 1/2) and any x ∈ R^N.
y)G(Y (s, y))dyds = t 0 e −(t−s) F(Y (s))ds, x) is predictable (this follows from the construction in Section 3.1 and is explicitly stated in [10, Section 2.1]).Hence, for fixed t the map x → Y n (t, x) is Borel measurable almost surely, which in turn allows us to well-define the deterministic integral in(3.16).Now define D n (t, x) := Y n+1 (t, x) − Y n (t, x) for n ∈ N 0 , t 0 and x ∈ R N .Then exactly as in the uniqueness calculation we have holds uniformly in n.By completeness, for each t 0 and x ∈ R N there exists Y (t, x) ∈ L 2 (Ω, F , P) such that Y (t, x) is the limit in L 2 (Ω, F , P) of the sequence of square-integrable random variables (Y n (t, x)) n 1 .Moreover, the convergence is uniform on [0, T ] × R N , i.e.
C_G, K_w, C_σ, C_ϕ and ‖ϕ‖_{L²(R^N)}, as well as sup_{s∈[0,T], y∈R^N} E[|Y(s, y)|^p]), such that the stated bound holds.

Remark 4.2. We note that the evolution equation (4.3) is subtly different from (2.7) considered in Section 2.4, in that the noise term added here is not generally of the form σ(Y(t)) • BdW(t) for some σ : H → L_0(H), with B : U → H given by Bu(x) = ∫_{R^N} ϕ(x − y)u(y) dy.