A field-theoretic approach to the Wiener Sausage

The Wiener Sausage, the volume traced out by a sphere attached to a Brownian particle, is a classical problem in statistics and mathematical physics. Initially motivated by a range of field-theoretic, technical questions, we present a one-loop renormalised perturbation theory of a stochastic process closely related to the Wiener Sausage, which, however, proves to be exact for the exponents and some amplitudes. The field-theoretic approach is particularly elegant and very enjoyable to see at work on such a classic problem. While we recover a number of known, classical results, the field-theoretic techniques deployed provide a particularly versatile framework, which allows easy calculation, with different boundary conditions, even of higher moments and more complicated correlation functions. At the same time, we provide a highly instructive, non-trivial example of some of the technical particularities of the field-theoretic description of stochastic processes, such as excluded volume, lack of translational invariance and immobile particles. The aim of the present work is not to improve upon the well-established results for the Wiener Sausage, but to provide a field-theoretic approach to it, in order to gain a better understanding of the field-theoretic obstacles to overcome.

Fig. 1: Example of the Wiener Sausage problem in two dimensions. The blue area has been traced out by the Brownian particle attached to a disc shown in red.
probabilistic point of view and has a very wide range of applications, such as medical physics [e.g. 5], chemical engineering [e.g. 10] or ecology [e.g. 23]. On the lattice, the volume of the Sausage translates to the number of distinct sites visited [22]. In this work, we present an alternative, field-theoretic approach which is particularly flexible with respect to boundary conditions and observables.
The approach has the additional appeal that, somewhat similar to percolation [19] where all non-trivial features are due to the imposed definition of clusters as being composed of occupied sites connected via open bonds between nearest neighbours, the "interaction" in the present case is one imposed in retrospect. After all, the Brownian particle studied is free and not affected by any form of interaction. Yet, the observable requires us to discount returns, i.e. loops in the trajectory of the particle, thereby inducing an interaction between the particle's past and present.
Before describing the process to be analysed in further detail, we want to point out that some of the questions pursued in the following are common to the field-theoretic re-formulation of stochastic processes [7,16,3,4,21,20]. Against the background of a field theory of the Manna Model [14,6] one of us recently developed, the features we wanted to understand were: 1) "Fermionic", "excluded volume" or "hard-core interaction" processes [e.g. 11], i.e. processes where lattice sites have a certain carrying capacity (unity in the present case) that cannot be exceeded. 2) Systems with boundaries, i.e. lack of momentum conservation in the vertices. 2') Related to that, how different modes couple on finite, but translationally invariant systems (periodic boundary conditions). 3) The special characteristics of the propagator of the immobile species. 4) Observables that are spatial or spatio-temporal integrals of densities.
The Wiener Sausage incorporates all of the above and, because it is exactly solvable or has been characterised by very different means [12,8,1], it also gives access to a better understanding of the renormalisation process itself. In the following, we will discuss most of the aspects mentioned above, leaving, however, some of them to future research.

Model
In the following, we will analyse the field theory of a particle (species "A", the active species) that diffuses freely with diffusion constant D, subject to an extinction rate (or "mass") r and possibly to boundary conditions. Ignoring those for the time being, the propagator of the Brownian particle (the "activity") takes the familiar form 1/(−ıω + Dk² + r), where ω and k parameterise frequency and momentum (wave number) coordinates. While diffusing, the particle can spawn offspring with rate γ which, however, belong to an immobile second species (species "B", the blue ink traces of A, below sometimes referred to as a "substrate particle"), the propagator being 1/(−ıω + ε), where the limit ε → 0⁺ is implied to establish causality. The spawning is suppressed (or inhibited) if the site is occupied by an offspring already. This condition induces the interaction. It is this condition that renders determining the number of distinct sites visited a mere counting exercise, i.e. the number of distinct sites is (proportional to) the number of offspring spawned.
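The lattice version of this process is straightforward to simulate. The following minimal sketch (plain Python; the function name and parameters are our own illustrative choices, not from the paper) counts the number of distinct sites visited by a simple random walker, i.e. the number of immobile B particles deposited in the limit where spawning is instantaneous and suppressed on occupied sites:

```python
import random

def wiener_sausage_lattice(steps, d=2, seed=0):
    """Number of distinct sites visited by a simple random walker on the
    d-dimensional hypercubic lattice: the lattice analogue of the Sausage
    volume, with one immobile B particle left on every newly visited site."""
    rng = random.Random(seed)
    pos = (0,) * d
    visited = {pos}                      # at most one B particle per site
    for _ in range(steps):
        axis = rng.randrange(d)
        step = rng.choice((-1, 1))
        pos = tuple(p + (step if i == axis else 0) for i, p in enumerate(pos))
        visited.add(pos)                 # spawning suppressed if occupied
    return len(visited)
```

On this level the "Sausage volume" is simply the size of the visited set; the fermionic suppression of repeated spawning is implemented by the set semantics.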
To fully appreciate the field-theoretical description, we differentiate three different perspectives on the Wiener Sausage in the following: 1) The original description in terms of a sphere dragged by a Brownian particle, 2) a random walker on a lattice, where the Sausage becomes the set of distinct sites visited, 3) a Brownian particle in the continuum that spawns, up to a finite carrying capacity, immobile offspring with a finite rate and 3') the field theoretical description of the latter.
In the third picture, the active particle deposits a certain number of offspring, which a priori have neither volume nor shape. This is what we aim to calculate in the following (to leading order) and this is the main difference to the original Wiener Sausage. This approach has its origin on the lattice. On the lattice, the original Wiener Sausage is recovered asymptotically as the number of distinct sites visited, i.e. the number of offspring, in the limit where the diffusion is slow compared to the spawning. When taking the continuum limit, the spawning rate γ diverges, as does the "hopping rate" H, in order to maintain a finite diffusion constant in continuous space. If that limit of the spawning rate is taken, the resulting trace may be expected to be a dense set of points.
However, in the field theoretic description, we will keep the spawning rate (see τ and σ below) finite. As illustrated in Figure 3, on large scales even a finite spawning rate produces a seemingly dense path.
In the field theory, the suppression is a separate process, which should better be called "production discount", as it makes up, in retrospect, for an over-counting of spawning events. The sphere's volume enters as an inverse density: The parameter κ, which encodes the (negatively counted) discount rate of the spawning in the presence of a substrate particle, has the unit of a rate per density, as it quantifies how much spawning should be discounted in the presence of a certain density of spawned particles along the path of the diffusing particle. The suppression in the field theory is not an avoided spawning event, but a negative spawning, see Eq. (14) with κ = γ/n_0.

Fig. 2: The volume of the Wiener Sausage in one dimension is the length covered by the Brownian particle (the set of all points actually visited) plus the volume V_0 of the sphere the Brownian particle is dragging (indicated by the two rounded bumpers).
Comparing the engineering dimension of the relevant coupling, a volume per time, with that of the diffusion constant, an area per time, reveals the upper critical dimension of 2. In dimensions greater than 2, return probabilities to individual sites on a lattice are finite [see Pólya's random walk constants, 15,24], but in the continuum the return probability vanishes and consequently the number of offspring spawned is linear in time, say γt.
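This dichotomy is easy to observe numerically: above the upper critical dimension of 2 a roughly constant fraction of steps discovers a new site, so the number of offspring grows linearly in time, whereas in d = 1 the fraction decays. A sketch under the lattice picture (plain Python; illustrative function name, averaging over a handful of walks):

```python
import random

def distinct_fraction(steps, d, walks=20, seed=1):
    """Average fraction of steps on which a simple random walker on the
    d-dimensional hypercubic lattice discovers a new site, i.e. the
    success rate of spawning an immobile B particle."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(walks):
        pos = (0,) * d
        visited = {pos}
        for _ in range(steps):
            axis = rng.randrange(d)
            pos = tuple(p + (rng.choice((-1, 1)) if i == axis else 0)
                        for i, p in enumerate(pos))
            visited.add(pos)
        total += (len(visited) - 1) / steps
    return total / walks
```

In d = 3 the fraction stays close to a constant (related to the Pólya return probability), while in d = 1 it shrinks with the number of steps, reflecting the recurrence of low-dimensional walks.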
Before introducing the field theory of the present model, we discuss briefly the intricacies of the fermionic nature of the B particles.

Finite carrying capacity
The condition of not allowing more than one B particle per lattice site may, in biological terms, be interpreted as a finite "carrying capacity" of a unit area. Successful spawning then drops to 0 once the occupation reaches the carrying capacity. A carrying capacity of unity, the present fermionic case, implies a rather drastic cutoff. One might be tempted to raise the cutoff and soften it by introducing a logistic term, so that spawning drops linearly in the occupation in an otherwise bosonic setup. While this may raise suspicion and invite the criticism of being a fudge, such a bosonic regularisation may, as demonstrated below, be interpreted as the fermionic case on a lattice with a particular connectivity, i.e. the attempted regularisation is the original, fermionic case in disguise, suggesting that no such regularisation is needed.
In the present process, on the lattice, a spawning attempt occurs with Poissonian frequency γ but is suppressed in the presence of an immobile individual. If n_B(x, t) is the number of such offspring on a lattice at position x and time t, one may express the effective spawning rate as

γ (1 − n_B(x, t)/n_0),   (1)

where n_0 is the carrying capacity. Setting n_0 = 1 recovers the fermionic constraint exactly, but looks rather brutal, not least because it remains somewhat unclear what is going to happen in the continuum limit. Some authors [e.g. 25, and references therein] avoid terms like Eq. (1) by expanding a suitable expression for δ_{1,n_B(x,t)}, a Kronecker δ-function. Eq. (1) is not a leading order term in that expansion. For n_0 = 1 and before taking any other approximation (e.g. continuous space and state, removing irrelevant terms in the field theory), a logistic term like Eq. (1) is a representation of the original process as exact as one involving the Kronecker δ-function. For n_0 > 1 a logistic term gives rise to a model that may be strictly different from one with a sharp carrying capacity implemented by, say, a Heaviside step-function, θ(n_0 − n_B(x, t)), but nonetheless one that may be of equal interest. Large n_0, on the other hand, softens the cutoff, because it will rarely be reached as spawning is (smoothly) more and more suppressed. One might therefore be inclined to study the problem in the limit of large n_0. At closer inspection, however, it turns out that such increased n_0 does not present a qualitative change of the problem: Having n_0 > 1 is as if each site was divided into n_0 spaces. When the Brownian particle jumps from site to site it arrives in one of those n_0 spaces, only n_0 − n_B(n, t) of which are empty, so that an offspring can be left behind. The process with carrying capacity n_0 > 1 therefore corresponds to the process with a carrying capacity of unity per site on a lattice where n_B(n, t) describes the number of immobile offspring in each "nest" or column, as illustrated in Figure 4. In effect, the carrying capacity n_0 > 1 is implemented per column, leaving the original fermionic constraint of at most one offspring per site in place. In other words, even when a carrying capacity n_0 ≫ 1 is introduced to smoothen the fermionic constraint, it is still nothing else but the original constraint n_0 = 1 on a different lattice. This led us to believe that there is no qualitative difference between n_0 = 1 and any other finite value of n_0. In the following, we retain n_0 because it is an interesting parameter (n_0 → ∞ switches the interaction off) and a "marker" of the interaction. It may be set to any positive value.

Fig. 4: A one dimensional lattice of size L and carrying capacity n_0 = 4 corresponds to the lattice shown above, where the carrying capacity of the former is implemented by expanding each site into a column of n_0 sites. The Brownian particle can jump from every site to all sites in the neighbouring columns. In the new lattice, the carrying capacity per site is unity, the carrying capacity per column is n_0.
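The equivalence argued here can be stated in one line: a walker arriving in a column with n_B of its n_0 unit-capacity slots occupied finds an empty slot with probability (n_0 − n_B)/n_0 = 1 − n_B/n_0, which is exactly the logistic factor of Eq. (1). A minimal check in exact rational arithmetic (function names are our own, for illustration):

```python
from fractions import Fraction

def logistic_rate(n_B, n0, gamma):
    """Effective spawning rate gamma*(1 - n_B/n0) of Eq. (1)."""
    return gamma * (1 - Fraction(n_B, n0))

def column_rate(n_B, n0, gamma):
    """Column picture of Fig. 4: the walker lands on one of n0 fermionic
    slots, chosen uniformly; spawning succeeds only if the slot is empty."""
    empty = n0 - n_B
    return gamma * Fraction(empty, n0)

# the two pictures agree for every occupation 0 <= n_B <= n0
assert all(logistic_rate(n, 4, 1) == column_rate(n, 4, 1) for n in range(5))
```

The agreement holds for any n_0, which is the content of the mapping in Figure 4: the logistic "softening" is the original fermionic constraint on a different lattice.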

Field theory
In order to cast the model introduced above in a field theoretic language, we take the Doi-Peliti [7,16] approach without going through too many technical details. There are a number of reviews and fantastically useful tutorials available [3,4].
In the following the mobile particle is of species "A", performing Brownian motion with (nearest neighbour) hopping rate H, which translates to diffusion constant D = H/(2d) on a d-dimensional hypercubic lattice. To regularise the infrared, we also introduce an extinction rate r. A's creation operator is a†(x), its annihilation operator is a(x). The immobile species is "B", spawned with rate γ by species A. Its creation operator is b†(x), its annihilation operator is b(x), both commuting with the creation and annihilation operators of species A. The immobile species goes extinct with rate ε, which allows us to restore causality (possible annihilation, i.e. existence, only after creation) when we take the limit ε → 0⁺.

Fourier Transform
After replacing the operators by real fields, the Gaussian (harmonic) part of the resulting path integral can be performed, once the fields have been Fourier transformed. We will use the sign and notational convention

φ(x, t) = ∫ (d^d k dω / (2π)^{d+1}) φ(k, ω) exp(ı(k·x − ωt)).   (2)

The field φ(k, ω) corresponds to the annihilator a(x) of the active particles, the field φ̃(k, ω) to the Doi-shifted creation operator ã(x) = a†(x) − 1. It is instructive to consider a second set of orthogonal functions at this stage. Placing the process on a finite lattice means that boundary conditions have to be met, which is more conveniently done in one eigensystem rather than another. Below, we will consider hypercubic d-dimensional lattices which are infinitely extended (using the orthogonal functions and transforms introduced above) in d̄ = d − 1 directions, while one direction is open, i.e. the particle density of species A vanishes at the boundaries and outside. This Dirichlet boundary condition is best met using eigenfunctions √(2/L) sin(q_n z) with q_n = πn/L and n = 1, 2, . . ., which form a complete, orthonormal set, because

(2/L) ∫_0^L dz sin(q_n z) sin(q_m z) = δ_{n,m}.

In passing, we have introduced the finite linear length of the lattice L. Purely for ease of notation and in order to keep expressions in finite systems dimensionally as similar as possible to those in infinite ones, Eq. (2), we will transform as follows:

φ(z) = (2/L) Σ_{n=1}^∞ sin(q_n z) φ_n  with  φ_n = ∫_0^L dz sin(q_n z) φ(z),   (4)

so that (2/L) Σ_n sin(q_n z) sin(q_n y) = δ(z − y), where δ(z − y) is the usual Dirac δ function for z − y ∈ (0, L), but to be replaced by the periodic Dirac comb Σ_{m=−∞}^∞ δ(z − y + mL) for arbitrary z − y. For ease of notation, we have omitted the time dependence of φ(x, t) as well as the d̄ components other than z. The other fields, φ̃, as well as ψ and ψ̃, transform correspondingly. The spatial transform of the latter is subject to some convenient choice, because the immobile species is not constrained by a boundary condition. It will turn out that, as expected in finite-size scaling, the lowest mode q_1 = π/L plays the rôle of a temperature-like variable, controlling the distance to the critical point.
We will also briefly study systems which are infinitely extended in d̄ directions and periodically closed in one. In the periodic direction, the spectrum of the conveniently chosen eigenfunctions √(1/L) exp(ık_n y) is also discrete, with k_n = 2πn/L and n ∈ Z,

(1/L) ∫_0^L dy e^{ık_n y} e^{ık_m y} = δ_{n+m,0}.   (6)
Again, we transform slightly asymmetrically (in L),

φ(y) = (1/L) Σ_{n∈Z} e^{ık_n y} φ_n  with  φ_n = ∫_0^L dy e^{−ık_n y} φ(y),   (7)

so that (1/L) Σ_n e^{ık_n (y − z)} = δ(y − z), where again δ(z − y) is a Dirac comb if considered for z − y ∉ (0, L). Again, time and the d̄ spatial coordinates other than y were omitted. Similar transforms apply to the other fields.
There is a crucial difference between the eigenfunctions exp(ık_n y) and sin(q_n z): the former conserve momentum in the vertices, whereas the latter do not,

(1/L) ∫_0^L dy e^{ık_n y} e^{ık_m y} e^{ık_ℓ y} = δ_{n+m+ℓ,0},   (9a)

whereas the corresponding overlap of three sine modes,

(2/L) ∫_0^L dz sin(q_n z) sin(q_m z) sin(q_ℓ z),   (9c)

does not reduce to a single Kronecker δ, with q_n = πn/L > 0 and k_n = 2πn/L (sign unconstrained) as introduced above.
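The (non-)conservation of momentum in the two eigensystems can be verified by direct quadrature. The sketch below (plain Python; a composite Simpson rule of our own, for illustration) checks that products of two modes are orthogonal in both bases, but that three sine modes can have a non-vanishing overlap even though no Kronecker δ is satisfied, e.g. ∫_0^L sin³(πz/L) dz = 4L/(3π):

```python
import cmath, math

L = 1.0

def simpson(f, a, b, n=2000):
    # composite Simpson rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def sin_overlap(*modes):
    # integral over (0,L) of a product of sin(q_n z), q_n = pi*n/L
    def f(z):
        p = 1.0
        for n in modes:
            p *= math.sin(math.pi * n * z / L)
        return p
    return simpson(f, 0.0, L)

def exp_overlap(*modes):
    # integral over (0,L) of a product of exp(i k_n y), k_n = 2*pi*n/L
    def f(y):
        p = 1.0 + 0.0j
        for n in modes:
            p *= cmath.exp(2j * math.pi * n * y / L)
        return p
    return simpson(f, 0.0, L)
```

Here `exp_overlap(1, 1, 1)` vanishes while `exp_overlap(1, 1, -2)` equals L, reflecting the δ_{n+m+ℓ,0} of Eq. (9a); by contrast, `sin_overlap(1, 1, 1)` is finite even though 1 + 1 + 1 ≠ 0.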
Having made convenient choices such as Eq. (4), we will carry on using the Fourier transforms of the bulk, Eq. (2), which is easily re-written for Dirichlet boundary conditions using Eq. (4), simply by replacing each integral over dk by (2/L) Σ_n, and similarly for periodic boundary conditions, Eq. (7). Only the non-linearity, Section 3.3, is expected to require further careful analysis, as the mode-coupling factor of Eq. (9c) is structurally far more demanding than the δ_{n+m+ℓ,0} of Eq. (9a).

Harmonic Part
Following the normal procedure [e.g. 2], the harmonic part of the Liouvillian L = L_0 + L_1 in the continuum reads

L_0 = φ̃ (∂_t − D∇² + r) φ + ψ̃ (∂_t + ε) ψ.   (10)

After Fourier transforming, and without further ado, the harmonic part of the path integral can be performed, producing the two bare propagators

⟨φ(k, ω)φ̃(k′, ω′)⟩_0 = δ̄(ω + ω′) δ̄(k + k′) / (−ıω + Dk² + r),
⟨ψ(k, ω)ψ̃(k′, ω′)⟩_0 = δ̄(ω + ω′) δ̄(k + k′) / (−ıω + ε),

where δ̄(ω + ω′) = δ(ω + ω′)/(2π) and δ̄(k + k′) = δ(k + k′)/(2π)^d. Regarding those δ-functions, we follow the usual conventions for the diagrammatic representation of the propagators (overall momentum conservation, with each term corresponding only to the amplitude). Below, we will refer to the propagator of the diffusive particles as the "activity propagator" and to the one for the immobile species as the "substrate propagator" (or "activity" and "substrate legs"). As the propagation of the active particles is unaffected by the deposition of immobile particles, the activity propagator does not renormalise, ⟨φφ̃⟩ = ⟨φφ̃⟩_0. The same is true for the immobile species, which might be spawned by active particles but, once deposited, remains inert, ⟨ψψ̃⟩ = ⟨ψψ̃⟩_0.
The Fourier transform Eq. (2) of the latter produces δ(x − x′)θ(t − t′) in the limit ε → 0, with θ(x) denoting the Heaviside θ-function, as one would expect (with x, t being the position and time of "probing" and x′, t′ the position and time of creation). At this stage, there is no interaction and no transmutation, ⟨ψ(k, ω)φ̃(k′, ω′)⟩ = 0. Diffusing particles A happily co-exist with immobile ones.

Non-Linearity
If spawning occurs unhindered with rate γ, the number of B particles deposited over time t has an expectation of exactly tγ. Demanding, however, that deposition is suppressed in the presence of a particle B, i.e. demanding that no more than one B particle can ever be deposited on any given site, introduces interaction between previously deposited particles and any new particle to be deposited.
As discussed in Section 2.1, spawning is moderated down in the presence of B particles to γ(1 − n_B(x, t)/n_0). At the level of a master equation, this conditional deposition gives a non-linear contribution of

∂_t P(. . . , n_A, n_B, . . .) = harmonic terms + Σ_n [ γ n_A (1 − (n_B − 1)/n_0) P(. . . , n_A, n_B − 1, . . .) − γ n_A (1 − n_B/n_0) P(. . . , n_A, n_B, . . .) ],   (12)

where, for convenience, the problem is considered for individual lattice sites n which contain n_A = n_A(n) particles of species A and n_B particles of species B. The contributions by harmonic terms, namely diffusion of A particles and spontaneous extinction of both, as discussed in the previous section, have been omitted. The first term in the sum describes the creation of a B particle in the presence of n_B − 1 of those, to make up n_B in total; the second term makes the B particle number exceed n_B, n_B → n_B + 1. If

|Ψ(t)⟩ = Σ P(. . . , n_A, n_B, . . .) Π_n (a†(n))^{n_A(n)} (b†(n))^{n_B(n)} |0⟩,   (13)

where the sum runs over all states of the entire lattice, then the conditional deposition produces the contribution

∂_t |Ψ(t)⟩ = bilinear terms + Σ_n [ γ b̃(n) a†(n)a(n) − (γ/n_0) b̃(n)b†(n)b(n) a†(n)a(n) ] |Ψ(t)⟩,   (14)

where we have used the commutator, (b†b − 1)b† = b†²b, the Doi-shifted creation operator, b† = b̃ + 1, as well as the particle number operator b†b. Although using Doi-shifted operators throughout gives rise to a rather confusing six non-linear vertices, the resulting field theory does not turn out as messy as one may expect. However, we need to allow for different renormalisation, therefore introducing six different couplings below.
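The conditional deposition can be checked on a single site: with one active particle permanently present (an assumption made only for this illustration; diffusion and extinction are switched off), the mean occupation obeys d⟨n_B⟩/dt = γ(1 − ⟨n_B⟩/n_0), so ⟨n_B(t)⟩ = n_0(1 − e^{−γt/n_0}). A Gillespie-style sketch (illustrative function name):

```python
import math, random

def gillespie_site(gamma, n0, t_max, rng):
    """One realisation of conditional spawning on a single site occupied
    by one active particle: next deposition after an exponential waiting
    time with the state-dependent rate gamma*(1 - n_B/n0), cf. Eq. (1)."""
    t, n_B = 0.0, 0
    while n_B < n0:
        rate = gamma * (1 - n_B / n0)
        t += rng.expovariate(rate)
        if t > t_max:
            break
        n_B += 1
    return n_B
```

With γ = 1, n_0 = 4 and t = 2, the empirical mean approaches 4(1 − e^{−1/2}) ≈ 1.57; equivalently, each of the n_0 "column slots" of Fig. 4 fills independently at rate γ/n_0.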
Replacing a† by 1 + ã in the first term of the sum generates the bilinearity b̃a, which we will parameterise in the following by τ, corresponding to a transmutation of an active particle to an immobile one. Transmutation is obviously spurious; it does not actually take place, but will allow us in the Doi-shifted setup (and thus with the corresponding left vacuum [3,4]) to probe for substrate particles (using b) after creating an active one (using a†) without having to probe for the latter (using a). There is no advantage in moving that term to the bilinear part L_0, because the determinant of the bilinear matrix M is unaffected by τ ≠ 0 and therefore none of the propagators mentioned above change. One may therefore treat all terms (including the bilinear transmutation) resulting from the interaction perturbatively, with a transmutation that is present regardless of the carrying capacity n_0. At this stage it is worth noting that the sign of τ (and of σ below) is positive, i.e. the perturbative expansion will generate terms with pre-factors τ (and σ below). The only other non-linearity independent of the carrying capacity n_0 is the vertex b̃ãa (or ψ̃φ̃φ), in the following parameterised by the coupling constant σ. Diagrammatically, it may be written as an amputated vertex and can be thought of as spawning, rather than transmutation as parameterised by τ. According to Eq. (14), there are four non-linearities with bare-level couplings of γ/n_0, generated by replacing the regular creation operators by their Doi-shifted counterparts, a†(n) = 1 + ã(n) and b†(n) = 1 + b̃(n), in (γ/n_0) b̃(n)b†(n)b(n) a†(n)a(n). Each spawns at least one substrate particle, but more importantly, it also annihilates at least one substrate particle as it "probes for" its presence. The two simplest and most important (amputated) vertices are the ones introduced above with a "tail added", where we have also indicated their coupling.
By mere inspection, it is clear that those two vertices can be strung together, renormalising the left one. In fact, κ is the one and only coupling that renormalises all non-linearities (σ, λ, κ, χ and ξ), including itself. Two more vertices are generated, which become important only for higher order correlation functions of the substrate particles, because there is no vertex annihilating more than one of them: correlations between substrate particles are present, but not relevant for the dynamics. Notably, there is no vertex that has more incoming than outgoing substrate legs. Finally, we note that the sign with which λ, κ, χ and ξ are generated in the perturbative expansion is negative. For completeness, the non-linear part L_1 of the Liouvillian (see Eq. (10)) comprises the six vertices ψ̃φ, ψ̃φ̃φ, ψ̃ψφ, ψ̃ψφ̃φ, ψ̃²ψφ and ψ̃²ψφ̃φ, with couplings τ = σ = γ and κ = λ = χ = ξ = γ/n_0 at bare level, the latter four entering with negative sign.

Dimensional analysis
Determining the engineering dimensions of the couplings introduced above is part of the "usual drill" and will allow us to determine the upper critical dimension and to remove irrelevant couplings. Without dwelling on details, analysis of the harmonic part, Eq. (10), fixes [D] = L²/T and [r] = 1/T, with L and T denoting the dimensions of length and time respectively. Performing the Doi-shift in Eq. (14) first and introducing couplings for the non-linearities as outlined above allows for two further independent dimensions, say [σ] = A and [τ] = B (both originally equal to the rate γ). As far as the field theory is concerned, the only constraint is to retain the diffusion constant on large scales, which implies T = L². As a result, the non-linear coupling κ (originally γ/n_0) becomes irrelevant in dimensions d > d_c, as expected with upper critical dimension d_c = 2. The two independent engineering dimensions A and B will be used in the analysis below in order to maintain the existence of the associated processes of transmutation and spawning, which are expected to govern the tree level. If we were to argue that they are to become irrelevant above a certain upper critical dimension, the density of offspring and its correlations would necessarily vanish everywhere. 2

Even though we may want to exploit the ambiguity in the engineering dimensions [13,21] in the scaling analysis (however, consistent with the results above), in the following section we will make explicit use of the Doi-shift when deriving observables, which means that both φ̃ and ψ̃ are dimensionless (in real space and time), [φ̃] = [ψ̃] = 1, which implies A = T⁻¹ and A = B.

2 Strictly, as we will demonstrate below, n-point correlation functions can be constructed with τ only, see Eq. (32). However, it is clear that the density of the active walker and its immobile offspring will remain correlated, which is mediated by σ, Eq. (17).
As expected, τ is then a rate (namely the transmutation rate) and so is σ, [τ ] = [σ] = 1/T. Also not unexpectedly, the remaining four couplings all end up having the same engineering dimension, as suggested by γ/n 0 , which is a rate per density (γ being the spawning rate and n 0 turning into a carrying capacity density as we take the continuum limit).
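The power counting can be mechanised by tracking exponents of length L and time T. The sketch below (our own bookkeeping, under the Doi-shifted choice of dimensionless φ̃, ψ̃ and [φ] = [ψ] = L^{-d}) reproduces that τ and σ are rates, that κ is a rate per density, and that κ measured in units of D is dimensionless precisely at d_c = 2:

```python
from fractions import Fraction as F

def dim(Lpow=0, Tpow=0):
    # engineering dimension as (power of length, power of time)
    return (F(Lpow), F(Tpow))

def mul(*dims):
    # dimensions multiply by adding exponents
    return tuple(sum(c) for c in zip(*dims))

def coupling_dim(n_annihilation_fields, d):
    """Dimension of the coupling of a vertex with the given number of
    annihilation fields (each of dimension L^-d), integrated over d^dx dt,
    with the Doi-shifted (tilde) fields chosen dimensionless."""
    measure = dim(d, 1)                          # d^dx dt
    fields = dim(-d * n_annihilation_fields, 0)  # each phi or psi is L^-d
    # the action term  coupling * fields * measure  must be dimensionless:
    return tuple(-x for x in mul(measure, fields))
```

For the vertex ψ̃φ (one annihilation field) this gives T⁻¹ in any d, while for ψ̃ψφ (two annihilation fields) it gives L^d/T, a rate per density, as stated above.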

Observables at tree level: Bulk
The aim of the present work is to characterise the volume of the Wiener Sausage. As discussed in Section 2, this is done not in terms of an actual spatial volume, but rather in terms of the number of spawned immobile offspring. In this section, we define the relevant observables in terms of the fields introduced above. This is best done at tree level, presented in the following, before considering loops and the subsequent renormalisation. While the tree level is the theory valid above the upper critical dimension, it is equivalently the theory valid in the absence of any physical interaction, i.e. the theory of n 0 → ∞. We introduce the observables first in the presence of a mass r, which amounts to removing the particle after a time of 1/r on average.
If v^(1)(x; x*) is the density of substrate particles at x in a particular realisation of the process at the end of the lifetime of the diffusive particle which started at x*, the volume of the Sausage is

V = ∫ d^d x ⟨v^(1)(x; x*)⟩,

where ⟨•⟩ denotes the ensemble average of • and the dependence on x* drops out in the bulk. Alternatively (as done below), one may consider a distribution d(x*) of initial starting points x*, over which an additional expectation, denoted by an overline, •̄, has to be taken.
Higher moments require higher order correlation functions,

V^(n) = ∫ d^d x_1 · · · d^d x_n ⟨v^(n)(x_1, . . . , x_n; x*)⟩,

where v^(n)(x_1, . . . , x_n; x*) denotes the n-point correlation function of the substrate particle density generated by a single diffusive particle started at x*. Given that b†(x)b(x) is the particle density operator, that correlation function is the expectation

v^(n)(x_1, . . . , x_n; x*) = lim_{t_1→∞} · · · lim_{t_n→∞} ⟨ Π_{i=1}^n (1 + ψ̃(x_i, t_i)) ψ(x_i, t_i) (1 + φ̃(x*, t_0)) ⟩

with only a single, initial, diffusive particle started at x*, t_0. The multiple limits on the right are needed so that we measure the deposition left behind by the active particle after its lifetime. As the present phenomenon is time-homogeneous, t_0 will not feature explicitly, but rather enter in the differences t_i − t_0, each of which diverges as the limits are taken. In principle, only a single limit is needed, t = t_1 = t_2 = . . . = t_n → ∞, but as discussed below, equal times leave some ambiguity that can be avoided.
For n = 1, the density reads

v^(1)(x_1; x*) = lim_{t_1→∞} ⟨ (1 + ψ̃(x_1, t_1)) ψ(x_1, t_1) (1 + φ̃(x*, t_0)) ⟩,   (26)

which leaves us with four terms after replacing the creation operators by their Doi-shifted counterparts. After the Doi shift, pure annihilation, ⟨ψ⟩, vanishes: it is the expected density of substrate particles in the vacuum, as no active particle has been created first. The expectation ⟨ψ̃(x_1, t_1)ψ(x_1, t_1)⟩ ∝ θ(t_1 − t_1) vanishes as well, for θ(0) = 0 (effectively the Itō interpretation of the time derivatives [20]) is needed in order to make the Doi-Peliti approach meaningful. The field ψ(x_1, t_1) in the density ψ̃(x_1, t_1)ψ(x_1, t_1) is meant to re-create the particle annihilated by the operator corresponding to ψ(x_1, t_1). For the same reason, ⟨ψ̃(x_1, t_1)ψ(x_1, t_1)φ̃(x*, 0)⟩ vanishes, even when a vertex is available. In fact, any occurrence of ψ̃(x_1, t_1) requires an occurrence of ψ(x_2, t_2) with t_2 > t_1. What remains of Eq. (26) is therefore only ⟨ψ(x_1, t_1)φ̃(x*, t_0)⟩. Taking the Fourier transform, Eq. (2), the limit t_1 → ∞ is taken via the general mechanism

lim_{t→∞} ∫ (dω/(2π)) e^{−ıωt} g(ω)/(−ıω + ε) = g(0) for ε → 0⁺,

by which the pole of the substrate propagator picks out the remaining integrand at ω = 0, provided g(ω) itself has no pole at the origin, as otherwise additional residues that survive the limit t → ∞ would have to be considered. For the one-point function this produces

v^(1)(k; k_0) = τ δ̄(k + k_0)/(D k_0² + r),   (28)

in which the starting point of the walker still enters via k_0. If that "driving" is done with a distribution of initial starting points with Fourier transform d(k_0), the resulting deposition is given by

v̄^(1)(k) = τ d(−k)/(D k² + r),   (31)

where, diagrammatically, the driving is indicated by a little circle which "supplies" a certain momentum distribution. In an infinite system, the position of the initial driving should not and will not enter; to calculate the volume of the Sausage, we will evaluate at k = 0. The same applies for the time at which the initial distribution of particles is created. In principle, it would give rise to an additional factor of exp(−ıωt*), but we will evaluate at ω = 0. Evaluating at k = 0 in the bulk produces the volume integral over the offspring distribution, i.e.

V = v̄^(1)(k = 0) = τ/r

for normalised driving, d(0) = 1: the expected volume V of the Sausage in the absence of a limiting carrying capacity, which corresponds to the naïve expectation of the (number) deposition rate τ multiplied by the survival time 1/r of the random walker. From this expression it is also clear that the "volume" calculated here is, as expected, dimensionless.
Following similar arguments for n = 2, the relevant diagrams are those of Eq. (32), where the convolution symbol represents ψ̃(x, t)ψ(x, t), a convolution in Fourier space, which in real space and time gives δ(x_1 − x_0)θ(t_1 − t_0): it corresponds to an immobile particle deposited at t_0 and x_0, found later at time t_1 > t_0 and x_1 = x_0, and left there to be found again at time t_2 > t_1 and x_2 = x_0. The effect of taking the limits t_i → ∞ is the same as for the first moment, namely it results in ω_i = 0, except that in diagrams containing the convolution, the result depends on the order in which the limits are taken. This can be seen in the factor θ(t_2 − t_1)θ(t_1 − t_0), as one naturally expects from this diagram: the first probing must occur after creation and the second one after the first. A diagram like the second in Eq. (32) does not carry a constraint like that.
Each of the diagrams on the right hand side of Eq. (32) appears twice, as the external fields can be attached in two different ways. When evaluating at k_1 = k_2 = 0 this would lead to the same (effective) combinatorial factor of 2 for both diagrams. However, taking the time limits in a particular order means that one labelling of the second diagram results in a vanishing contribution. The resulting combinatorial factors are therefore 1 for the diagram containing the convolution and 2 for the branching diagram, i.e.

V^(2) = τ/r + 2τσ/r²,

again dimensionless. Given that τ = σ = γ initially, Eq. (14), the above may be written γ/r + 2γ²/r². Unsurprisingly, the moments correspond to those expected for a Poisson process with rate γ taking place during the exponentially distributed lifetime of the particle, subject to a Poisson process with rate r. The resulting moment generating function is simply

M(z) = r / (r + γ(1 − eᶻ)),

reproducing all moments once τ = σ = γ. Carrying on with the diagrammatic expansion, higher order moments can be constructed correspondingly. At tree level (or, equivalently, n_0 → ∞), there are no further vertices contributing. Determining v^(n)(k_1, . . . , k_n; k_0) is therefore merely a matter of adding substrate legs, either by adding a convolution or by branching with coupling σ, as exemplified by the four diagrams of Eq. (36). Upon taking the limits, their effective combinatorial factors become 1, 3, 3 and 6 respectively, so that

V^(3) = τ/r + 6τσ/r² + 6τσ²/r³,

and similarly for higher moments. In general, the leading order behaviour in small r at tree level in the bulk is dominated by diagrams with the largest number of branches, i.e. the largest power of σ, like the right-most term in Eq. (36), so that

V^(n) ≃ n! τ σ^{n−1} / rⁿ,   (39)

which is essentially determined by the time the active particle survives.
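The identification with a Poisson process can be tested by brute force: sample an exponential lifetime with rate r and count depositions at rate γ; the first two moments should approach γ/r and γ/r + 2γ²/r². A Monte Carlo sketch (plain Python, illustrative function names):

```python
import random

def sausage_count(gamma, r, rng):
    """One realisation: offspring deposited at Poissonian rate gamma
    during a lifetime that is exponentially distributed with rate r."""
    lifetime = rng.expovariate(r)
    t, n = rng.expovariate(gamma), 0
    while t <= lifetime:
        n += 1
        t += rng.expovariate(gamma)
    return n

def moments(gamma, r, samples=50000, seed=0):
    """Empirical first and second moment of the deposited number."""
    rng = random.Random(seed)
    counts = [sausage_count(gamma, r, rng) for _ in range(samples)]
    m1 = sum(counts) / samples
    m2 = sum(c * c for c in counts) / samples
    return m1, m2
```

For γ = 2 and r = 1 the estimates should be close to γ/r = 2 and γ/r + 2γ²/r² = 10.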

Observables at tree level: open boundary conditions
Nothing changes diagrammatically when considering the observables introduced above in systems with open boundary conditions. As n_0 → ∞ does not pose a constraint, it makes no difference whether the system is periodically closed (in d = 2 a finite cylinder) or infinitely extended (semi-infinite strip) in the other directions; those directions simply do not matter for the observables studied, except when the diffusion constant enters. What makes the difference to the considerations in the bulk, Section 3.5, are open directions, in the following fixed to one, so that the number of infinite (or, at this stage equivalently, periodically closed) directions is d̄ = d − 1; in the following k, k′ ∈ R^d̄. While the diagrams obviously remain unchanged, their interpretation changes because of the orthogonality relations as stated in Eq. (5b) and Eq. (9c) or, equivalently, the lack of momentum conservation due to the absence of translational invariance. Replacing the propagators by their counterparts in the mixed momentum-mode representation,

⟨φ_n(k, ω)φ̃_m(k′, ω′)⟩_0 = δ̄(ω + ω′) δ̄(k + k′) δ_{n,m} / (−ıω + D(k² + q_n²) + r),

where a single open direction causes the appearance of the indices n and m, results in the one point function

v̄^(1)_n(k) = τ d_n(k)/(D(k² + q_n²) + r),

where the index n refers to the Fourier-sin component as discussed in Section 3.1. If the driving is uniform (homogeneous) in the open, finite direction, its Fourier transform is d_n(k) = δ̄(k) (1/L) ∫_0^L dz sin(q_n z) = 2δ̄(k)/(q_n L) for odd n, and vanishes otherwise. As for the periodic or infinite directions, the distribution of the driving does not enter into V_n, as momentum conservation implies that the only amplitudes of the driving that matter are those of the k = 0 or k_0 = 0 modes, Eq. (2) and Eq. (6).
In the limit of large L this result recovers Eq. (31), which would be less surprising if L → ∞ simply restored the bulk, which is, however, not the case, because as the driving is uniform, some of it always takes place "close to" the open boundaries. However, open boundaries matter only up to a distance of √(D/r) from the boundaries, i.e. the fraction of walkers affected by the open boundaries is of the order √(D/r)/L. The limit r → 0 gives V = τL²/(12D), matching results for the average residence time of a random walker on a finite lattice with cylindrical boundary conditions using D = 1/(2d) [17]. Sticking with r → 0, calculating higher order moments for uniform driving is straight-forward, although somewhat tedious. For example, the two diagrams contributing to v^(2) carry indices n, m, l ∈ {1, 3, 5, . . .}; performing the corresponding summation over the lattice at uniform driving then produces the second moment. This may be compared to the known expressions for the moments of the number of distinct sites visited by a random walker within n moves [22, e.g. Eq. (A.14)], which contain logarithms even in three dimensions, where the present tree level results are valid. This is, apparently, caused by constraining the length of the Sausage by limiting the number of moves, rather than by a Poissonian death rate. Performing the summations in Eq. (44) is straight-forward, but messy and tedious. The relevant sums converge rather quickly, producing (summing over 200 terms for each index) results that are, as in Eq. (39), essentially determined by the time the particle stays on the lattice. Similar to the bulk, the lack of interaction allows the volume moments of the Sausage to be determined on the basis of the underlying Poisson process.
In the case of homogeneous drive, the mth moment of the residence time t_r of a Brownian particle diffusing on an open interval of length L can be computed in closed form, and the moment generating function of the Poissonian deposition with rate γ is just M(z) = exp(−γ t_r (1 − exp(z))), so that ⟨V^m⟩ = d^m M(z)/dz^m |_{z=0}, reproducing the results above and confirming, in particular, the high accuracy of the leading order term in L, as 17/280 = 0.06071428571 . . ..
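The residence-time moments invoked here obey the textbook hierarchy D t_m″(x) = −m t_{m−1}(x) with t_m(0) = t_m(L) = 0, where t_m(x) is the mth moment for a walker started at x (a standard Feynman–Kac result; the normalisation L = D = 1 below and the second average 1/60 are our own illustration, not quoted from the text). A small exact-arithmetic sketch:

```python
from fractions import Fraction as F

# Residence-time moments t_m(x) on the unit interval with absorbing ends,
# from the recursion t_m'' = -m * t_{m-1}, t_m(0) = t_m(1) = 0 (L = D = 1).
# Polynomials are lists of Fraction coefficients, index = power of x.

def integrate(p):                       # antiderivative with constant 0
    return [F(0)] + [c / (i + 1) for i, c in enumerate(p)]

def poly_eval(p, x):
    return sum(c * x**i for i, c in enumerate(p))

def next_moment(prev, m):
    rhs = [-m * c for c in prev]        # t_m'' = -m * t_{m-1}
    t = integrate(integrate(rhs))       # two integrations; t_m(0) = 0 built in
    t[1] -= poly_eval(t, F(1))          # fix the linear term so t_m(1) = 0
    return t

t = [F(1)]                              # t_0 = 1
averages = []                           # uniform spatial averages E[t_r^m]
for m in (1, 2):
    t = next_moment(t, m)
    averages.append(poly_eval(integrate(t), F(1)))

# averages[0] = 1/12, i.e. L^2/(12 D); averages[1] = 1/60, i.e. L^4/(60 D^2)
```

The first average reproduces the V = τL²/(12D) result quoted for the r → 0 limit; higher moments follow by extending the loop.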

Beyond tree level
Below d_c = 2 the additional vertices parameterised by λ, κ, χ and ξ, Eq. (18) and Eq. (19) respectively, have to be taken into account. Because κ is the only vertex that has the same number of incoming and outgoing legs, it is immediately clear that its presence can, and, in fact, will contribute to the renormalisation of all other vertices, but in particular of itself. Combinations of other vertices give rise to "cross-production", say of χ by λξ, but none of these terms contains more than one loop without the involvement of κ. As for the generation of higher order vertices, it is clear that the number of outgoing substrate-legs (on the left) can never be decreased by combining vertices, because within every vertex the number of outgoing substrate legs is at least that of incoming substrate legs. In particular, a vertex with fewer outgoing than incoming substrate legs does not exist. A vertex like that, combined, say, with σ to form a bubble which renormalises the propagator, would suggest that the diffusive movement of active particles is affected by the presence of substrate particles. This is, by definition of the original problem, not the case. Because no active particles are generated solely by a combination of substrate particles, none of the vertices has more outgoing than incoming activity legs. Denoting the tree level coupling of the proper vertex (with amputated legs) by Γ[m n; a b], dimensional analysis can be performed. Because diffusion is to be maintained, it follows that T = L², yet, as indicated above, the dimensions of A and B are to some extent a matter of choice. Leaving them undetermined results in d(n + b − 1) + 2(a − b) ≤ 2 for Γ[m n; a b] to be relevant in d dimensions. Setting, on the other hand, A = B = T⁻¹ (see above) results in d(n + b − 1) ≤ 2. As n = 1, this implies (d − 2)b + 2a ≤ 2 and db ≤ 2, respectively. In both cases, the upper critical dimension for a vertex with b ≥ 1 and thus a ≥ 1 to be relevant is d_c = 2.
On the other hand, no loop can be formed if b = 0, so above d = 2 (where b = 1 is irrelevant) there are no one-particle irreducible diagrams contributing to any of the Γ[m n; a b] and so the set of couplings introduced above, τ, σ, λ, κ, χ and ξ, remains unchanged. As far as Sausage moments are concerned, λ, κ, χ and ξ do not enter, as there is no vertex available to pair up the incoming substrate leg on the right. The tree level results discussed in Section 3.5 are therefore the complete theory in d > d_c = 2.
Below d_c = 2, the dimensional analysis depends on the choice one makes for A and B. If they remain independent, then the only relevant vertices that are topologically possible are those with a ≤ 1, removing χ and ξ from the problem. However, it is entirely consistent (and, one may argue, even necessary) to assume A = B = T⁻¹, resulting in no constraint on a at all. Not only are vertices for all a therefore relevant, what is worse, they are all generated as one-particle irreducibles. For example, the reducible diagram contributing to v^(2) at tree level, Section 3.5, possesses, even at one loop, two one-particle irreducible counterparts in d < 2, contributing to the corresponding proper vertex. Such diagrams exist for all a, so, in principle, all these couplings have to be allowed for in the Liouvillian and all have to be renormalised in their own right. The good news is, however, that the Z-factor of κ (see below) contains all infinities of all couplings exactly once, i.e. the renormalisation of all couplings can be related to that of κ by a Ward-Takahashi identity, see Section 4.1.1.

Renormalisation
Without further ado, we will therefore carry on with renormalising κ only. As suggested in Eq. (49), this can be done to all orders, in a geometric sum. The one and only relevant integral is the κ-loop of Eq. (53) (written with explicit κ vertices, including the amputated legs; at this stage it is unimportant which coupling forms the loop, but this will change when we study semi-infinite systems in Section 4.3), where ε = 2 − d and we have indicated the total momentum k (i.e. the sum of the momenta delivered by the two incoming legs) and the total frequency ω going through it. This integral has the remarkable property that it is independent of k, because of the k-independence of the substrate propagator. While the latter conserves momentum in the bulk by virtue of δ̄(k + k′) in Eq. (11b), its amplitude does not depend on k. Even if there were renormalisation of the activity propagator, it would therefore not affect its k-dependence, i.e. η = 0, whereas its ω dependence may be affected, i.e. z ≠ 2 would be possible. The expression ((r + ϵ − ıω)/D)^{1/2}, with ϵ the spontaneous extinction rate of substrate particles, can be identified as an inverse length; it is the infrared regularisation (or more precisely the normalisation point, R = 1, Eq. (68a)) that can, in the present case, be implemented either by considering finite time (ω ≠ 0), spontaneous extinction of activity (r > 0) or, notably, spontaneous extinction (evaporation) of substrate particles (ϵ > 0). In order to extract exponents, it is replaced by the arbitrary inverse length scale µ. We will return to the case µ² = −ıω/D in Section 4.2, e.g. Eq. (78). For the time being, the normalisation point is taken with ϵ → 0, ω → 0. The renormalisation conditions are then as in Eq. (52), evaluated at {0, 0}, where {0, 0} indicates that the vertex is evaluated at vanishing momenta and frequencies. Defining Z = κ_R/κ allows all renormalisation to be expressed in terms of Z, as detailed in Section 4.1.1.
To one loop the renormalisation of κ, Eq. (49), is therefore κ_R = κZ with Z = 1 − κW. Introducing the dimensionless coupling g = κW/Γ(ε/2) with g_R = gZ gives Z = 1 − gΓ(ε/2), which may be approximated to one loop by Z = 1 − g_RΓ(ε/2); the latter is in fact exact if all terms in Eq. (49) are retained, so that Z becomes a geometric sum in g, Z = 1/(1 + gΓ(ε/2)). The resulting β-function is β_g(g) = dg_R/d ln µ|_g = −ε g_R − κW β_g and therefore β_g = −ε g_R (1 − g_R Γ(ε/2)). The last statement is exact to all orders; the non-trivial fixed point in ε > 0 is exactly g*_R = 1/Γ(ε/2) ≈ ε/2, which is when the Z-factor vanishes (as g diverges in small µ).
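The algebra behind the geometric sum is easily checked numerically. The following sketch is our own illustration (ε, the bare coupling g0 and the scale µ are arbitrary choices): it verifies that with Z = 1/(1 + gΓ(ε/2)) one has Z = 1 − g_RΓ(ε/2) exactly, that the numerical log-derivative of g_R matches β_g = −ε g_R(1 − g_RΓ(ε/2)), and that g_R → 1/Γ(ε/2) as µ → 0.

```python
import math

# Geometric-sum renormalisation: Z = 1/(1 + g*Gamma), g_R = g*Z, so that
# Z = 1 - g_R*Gamma exactly and beta_g = -eps*g_R*(1 - g_R*Gamma),
# with fixed point g_R* = 1/Gamma.  Gamma stands for Gamma(eps/2).

eps = 0.5
Gamma = math.gamma(eps / 2)

def g_of_mu(mu, g0=0.3):
    return g0 * mu**(-eps)          # bare dimensionless coupling runs as mu^-eps

def gR_of_mu(mu):
    g = g_of_mu(mu)
    return g / (1.0 + g * Gamma)    # g_R = g*Z with the geometric-sum Z

mu = 0.7
g = g_of_mu(mu)
Z = 1.0 / (1.0 + g * Gamma)
rhs = 1.0 - gR_of_mu(mu) * Gamma    # should equal Z exactly

# beta function by numerical log-derivative of g_R at fixed g0
h = 1e-6
beta = (gR_of_mu(mu * math.exp(h)) - gR_of_mu(mu * math.exp(-h))) / (2 * h)
gR = gR_of_mu(mu)
beta_closed = -eps * gR * (1.0 - gR * Gamma)

# as mu -> 0, g diverges and g_R approaches the fixed point 1/Gamma
gR_small_mu = gR_of_mu(1e-12)
```
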

Ward-Takahashi Identities
The observation about the renormalisation of other couplings can be formalised as Ward-Takahashi identities. Using the same notation as in Eq. (55), we note that κ_R = κZ implies σ_R = σZ and λ_R = λZ.
The renormalisation of the coupling τ breaks with that pattern, because the tree level contribution τ, Eq. (16), has higher order corrections which do not contain τ itself, but rather the combination λσ. However, at bare level, σ = τ and λ = κ, so that in the present case the renormalisation of τ follows the same pattern after all. A different issue affects the renormalisation of χ and ξ. For example, the latter acquires contributions from any of the diagrams shown in Eq. (49) by "growing an outgoing substrate leg" on any of the κ vertices, whereas the contributions generated by σ d/dr are UV finite and therefore dropped. Given that Eq. (62) contains the only contributions to the renormalisation of ξ, the corresponding Z-factor follows, and correspondingly for χ. Again, in the present case χ − ξλ/κ = 0 and therefore χ_R = χ dκ_R/dκ. From Section 4.1, it is straight-forward to show the resulting identities explicitly, and we can therefore summarise: In d < 2, the only proper vertices Γ[n m; a b] to consider are those with n = 1, b ≤ 1, m ≤ 1 and arbitrary a. The renormalisation for all of them can be traced back to that of Γ[1 1; 1 1]. It is a matter of straight-forward algebra to demonstrate this explicitly. As these couplings play no further rôle for the observables analysed henceforth, we spare the reader a detailed account.

Scaling
We are now in a position to determine the scaling of all couplings. For the time being, however, we will focus solely on calculating the first moment of the Sausage volume.
We have noted earlier (Section 4) that the governing non-linearity is κ and have already introduced the corresponding dimensionless, renormalised coupling g_R and found its fixed point value. Following the standard procedure [20], we define the finite, dimensionless, renormalised vertex functions Γ_R[m n; a b]({k, ω}; D, r, τ, σ, λ, κ, χ, ξ), where {k, ω} denotes the entire set of momenta and frequency arguments and µ is an arbitrary scale. In principle, there could be more bare couplings and there are certainly more generated, at least in principle, see Section 4.1.1. The vertex functions can immediately be related to their arguments via Eq. (55) and Eq. (52), where the normalisation point is R = 1. The asymptotic solution of the Callan-Symanzik equation can be combined with the dimensional analysis of the renormalised vertex function to give, using z = 2 and Eq. (67), Eqs. (74) and (75). As far as scaling (but not amplitudes) is concerned, the tree level results apply to the right hand side as its mass r is finite. If r⁻¹ is interpreted as the observation time t, the result in d < 2 (and V ∝ t in d > 2, Eq. (31)) recovers the earlier result by [1], including the logarithmic corrections expected at the upper critical dimension. Eqs. (74) and (75) are the first two key results for the field theory of the Wiener Sausage reported in the present work. We will now further explore the results and their implications. In d = 1, it is an exercise in complex analysis (albeit lengthy) to determine the amplitude of the first moment. To make contact with established results in the literature, we study the sausage in one dimension after finite time t. Following the tree level results Eqs. (27), (30) and (31), we now have Eq. (76), where the space integral is taken by setting k = 0 and the driving has been evaluated to d(0) = 1, see Eq. (30). The Z-factor is given by Eq.
(57), but µ² should be replaced by −ıω/D, as we will consider the double limit r, ϵ → 0, but at finite ω, which is the total frequency flowing through the diagram, Eq. (53). For small ω and therefore large t (which we are interested in), the Z-factor for d = 1 is dominated by 2√(−ıDω)/κ. Keeping only that term, the integral in Eq. (76) can now be performed and gives Eq. (78). On the lattice, i.e. before taking the continuum limit, sites have no volume and the ratio τ/κ is just the carrying capacity n_0. Setting that to unity recovers, up to the additive volume mentioned above, see Figure 2, the result by Berezhkovskii, Makhnovskii and Suris [1, Eq. (10)]. Given the difference in the process and the course a field-theoretic treatment takes, in particular the continuum limit, one might argue that this is a mere coincidence. In fact, attempting a similar calculation for the amplitude of the second moment does not suggest that it can be recovered in that case.
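That the d = 1 amplitude comes out right can be cross-checked against the lattice picture: the expected number of distinct sites visited by a simple random walk in n steps grows as √(8n/π). The following sketch is our own illustration; it evaluates this expectation exactly, using the classical identity that the probability that step k lands on a fresh site equals the no-return probability, which for k = 2m and 2m + 1 is binom(2m, m)/4^m.

```python
import math

# Expected number of distinct sites visited by a 1d simple random walk:
# E[S_n] = 1 + sum_{k=1}^{n} q_k, where q_k is the probability of not
# returning to the origin within k steps; q_{2m} = q_{2m+1} = binom(2m,m)/4^m.
# Asymptotically E[S_n] ~ sqrt(8n/pi).

def expected_distinct_sites(n):
    total = 1.0                          # the starting site
    q = 1.0                              # no-return probability, q_0 = 1
    for k in range(1, n + 1):
        if k % 2 == 0:
            m = k // 2
            q *= (2 * m - 1) / (2 * m)   # binom(2m,m)/4^m from its predecessor
        total += q
    return total

n = 20000
estimate = expected_distinct_sites(n)
asymptote = math.sqrt(8 * n / math.pi)
```

The exact sum sits within a fraction of a percent of the asymptote already at n = 20000, the correction being an O(1) constant.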
As for higher moments of the volume, in addition to the two diagrams mentioned in Eq. (32), there is now also the proper vertex Γ[0 1; 2 0] = χσ(Z − 1)/κ + ξZ². However, as above, the second moment is dominated by the second, tree-like term in Eq. (32), which gives the leading order scaling as Z ∝ r^{ε/2}. Higher order moments follow that pattern, ⟨V^m⟩ ∝ Z^m, and as dimensional consistency is maintained by the dimensionless product r D^{d/ε} κ^{−2/ε} entering Z, Eqs. (54), (56) and (57), the moments follow for d < 2 with r = 1/t. Compared to Eq. (39) the diffusion constant is present again, as the coverage depends not only on the survival time (determined by r), but also on the area explored during that time.

Semi-infinite strip
In the following, we study the renormalisation of the present field theory on a semi-infinite lattice, i.e. a lattice that is open (Dirichlet boundary conditions) in one direction and infinite in d − 1 orthogonal directions. The same setup was considered at tree level in Section 3.6. Again, there are no diagrammatic changes, yet the renormalisation procedure itself requires closer attention. Before carrying out the integration of the relevant loop, Eq. (53), we make a mild adjustment with respect to the set of orthogonal functions that we use for the substrate and the activity. While the latter is subject to Dirichlet boundary conditions in the present case, naturally leading to the set of sin(q_n z) eigenfunctions introduced above, the former is not afflicted with such a constraint, i.e. in principle one may choose whatever set is most convenient and suitable. As general as that statement is, there are, however, some subtle implications; to start with, whatever representation is used in the harmonic part of the Hamiltonian must result in the integrand factorising, so that the path integral over the Gaussian can be performed. In the presence of transmutation, that couples the choice of the set for one species to that for the other. With a suitable choice, all propagators fulfil orthogonality relations and therefore conserve momentum, i.e. they are proportional to δ_{n,m} (in the case of the basis sin(q_n z)), δ_{n,−m} (basis exp(ık_n z)) or δ(k + k′) (basis exp(ıkz)), which is obviously a welcome simplification of the diagrams and their corresponding integrals and sums.
The situation can be relaxed even further by considering transmutation only perturbatively, i.e. removing it from the harmonic part. However, if different eigenfunctions are chosen for different species, transmutation vertices are no longer momentum conserving; if we choose, as we will below, sin(q_n z) for the basis of the activity and exp(ık_m z) for that of the substrate, then the proper vertex of τ comes with a matrix ∆, where the index m ∈ Z refers to the (exponential) eigenfunction used for the substrate field and n ∈ N⁺ to the (sin) eigenfunction of the activity. The fact that ∆_{p,l} has off-diagonal elements indicates that momentum-conservation is broken. Obviously, in the presence of boundaries, translational invariance is always broken, but that does not necessarily result in a lack of momentum conservation in bare propagators, as it does here. However, it always results in a lack of momentum conservation in vertices with more than two legs, as only exponentials have the property that their products are eigenfunctions. If propagators renormalise through these vertices, they will eventually inherit the non-conservation, i.e. allowing them to have off-diagonal elements from the start will become a necessity in the process of renormalisation. While the transmutation vertex introduced above may appear unnecessarily messy, it does not renormalise and does not require much further attention. Rewriting the four-point vertex κ in terms of the two different sets of eigenfunctions, however, proves beneficial. Introducing the "tensor" U_{n,m,ℓ}, Eq. (84), means that the relevant loop is that of Eq. (85). Contrary to Eq. (53), it is now of great importance to know with which couplings (here two κ couplings) this loop was formed, because different couplings require different "tensors", like U_{n,m+k,ℓ} in the present case. For example, the coupling σ comes with ∫₀^L dz sin(q_n z) exp(ık_m z) sin(q_ℓ z). The actual technical difficulty to overcome, however, is the possible renormalisation of U_{n,m,ℓ} itself, as there is no guarantee that the right hand side of Eq.
(85) is proportional to U_{n,m,ℓ}. In other words, the sum Eq. (49) may be of the form κ(L U_{n,m+k,ℓ} + κW L U′_{n,m+k,ℓ} + κ²W² L U″_{n,m+k,ℓ} + . . .), with U′_{n,m+k,ℓ} ≠ U_{n,m+k,ℓ} etc., rather than L U_{n,m+k,ℓ} κ(1 + κW + κ²W² + . . .), which would spoil the renormalisation process.
Carrying on with that in mind, the integrals over ω and k are identical to the ones carried out in Eq. (53) and therefore straight-forward. The summation over m is equally simple, because that index features only in U_{n,m,ℓ} and Eq. (8a) implies (1/L) Σ_{m′} U_{n,m₂−m′,n′} U_{n′,m′+m₁,ℓ} = ∫₀^L dz sin(q_n z) e^{ık_{m₂}z} e^{ık_{m₁}z} sin(q_ℓ z) sin²(q_{n′} z). (86) Using that identity in Eq. (85) allows us to write the loop as the integral ∫ dz sin(q_n z) e^{ık_{m₂}z} e^{ık_{m₁}z} sin(q_ℓ z) weighted by a remaining sum over the internal index n′, Eq. (87). It is only that last sum that requires further investigation. In particular, if we were able to demonstrate that it is essentially independent of z, then the preceding integral becomes L U_{n,m₁+m₂,ℓ} and this contribution to the renormalisation of κ U_{n,m₁+m₂,ℓ} is proportional to U_{n,m₁+m₂,ℓ}. The remaining summation in Eq. (87) can be performed [26] to leading order in the small dimensionless quantity ρ = L²(r + ϵ − ıω)/(π²D). Approximating 2ζ(3 − d) ≈ Γ(ε/2), the Z-factor for the renormalisation of κ in a system with open boundaries in one direction is therefore unchanged, cf. Eqs. (53) and (90), provided µ = π/L. Of course, that result holds only as long as ρ ≪ 1, in particular r ≪ D/L², i.e. sudden death by extinction is rare compared to death by reaching the boundary. In the case of more frequent deaths by extinction, or, equivalently, taking the thermodynamic limit in the open direction, extinction is expected to take over eventually and the bulk results above apply, Section 4.2. Although there is an effective change of mechanism (bulk extinction versus reaching the edge), there is no dimensional crossover. The renormalisation of τ involves the κ-loops characterised above, as well as σ and λ, which, in principle, have to be considered separately; after all, the loop they form has a structure that deviates from the structure studied above, Eq. (90).
In principle, there is (again) no guarantee that the diagrams contributing to the renormalisation of τ all have the same dependence on the external indices, i.e. whether they are all proportional to ∆_{n,m}, Eq. (82). By definition, Eq. (84), however, one leg is removed by evaluating at m₁ = 0 (see the diagram in Eq. (85)) and one by performing the summation. Applying this operation to all diagrams appearing in Eq. (49) produces all diagrams renormalising τ and κ itself. Provided that σ = τ and λ = κ, the renormalisation of τ is therefore linear in that of κ and Eq. (61) remains valid, i.e. the renormalisation procedure outlined above for τ and κ remains intact. In principle, further attention is required for the renormalisation of higher order vertices, but as long as only (external) substrate legs are attached, their index can be absorbed into the sum of the indices of the substrate legs present: just like any external leg can take up momentum or frequency, such new legs shift the index used in the internal summation, such as the one in Eq. (85), but that does not affect the renormalisation provided that it is done at vanishing external momenta, so that the external momenta do not move the poles of the propagators involved.
We conclude that all Ward-Takahashi identities remain unchanged. As for the scaling of the Sausage volume, comparing Eq. (90) to Eq. (53) and identifying µ = π/L or r = π²D/L² means that Eq. (92) now holds for d < 2, compared to Eq. (81). Notably, compared to the tree level Eq. (47), the diffusion constant is absent: in dimensions d < 2 each point is visited infinitely often, regardless of the diffusion constant. Even though the deposition in the present setup is Poissonian, what determines the volume of the sausage is not the time it takes the active particles to drop off the lattice, ∝ L²/D, but the competition between deposition, parameterised by τ and σ, and its inhibition by κ. The scaling ⟨V^m⟩ ∝ L^{md} for d < 2 suggests that the Wiener Sausage is a "compact" d-dimensional object in dimensions d < 2, whereas ⟨V^m⟩ ∝ L^{2m} at tree level, d > 2, Section 3.6. The Wiener Sausage may therefore be seen as a two-dimensional object projected into a d-dimensional space.
The obvious interpretation of r = π²D/L² in Eq. (92) is that of π/L being the lowest mode in the denominator of the propagator Eq. (40a) in the presence of open boundaries, compared to (effectively) r/D at k = 0 in Eq. (11a).
It is interesting to determine the amplitude of the scaling in L with one open boundary, not least in order to determine whether the finding of Eq. (78) being identical to the result known in the literature is a mere coincidence. Technically, the route to take differs from Eq. (41), because in Section 3.6 both substrate and activity were represented in the sin eigensystem. However, integrating over L amounts to evaluating the matrix ∆_{m,n} in Eq. (83) at m = 0, and in that case L∆_{m,n} = 2/q_n for n odd and 0 otherwise, which reproduces Eq. (41) at r = 0. To determine τ_R = τZ we replace W in the expressions above,
which for d = 1 reproduces the exact result, easily confirmed from first principles. However, repeating the calculation for driving at the centre, x* = L/2, gives d_n = (−1)^{(n−1)/2} for n odd and 0 otherwise, so that in d = 1, after some algebra, the amplitude comes out as 3/4, somewhat off the exact value of ln(2) = 0.69314718 . . .. This is apparently due to the renormalisation of U_{n,m,ℓ} in Eq. (90) being correct only up to O(ε⁰), but that problem may require further investigation.

Infinite cylinder: crossover
At tree level, Section 3.6, it makes no technical difference whether we study the Sausage on a finite cylinder or a semi-infinite strip, because the relevant observables require integration in space, which amounts to evaluating at k_n = 0 or k = 0, resulting in the same expression, e.g. Eq. (31) in both cases. When including the interaction, however, it does matter whether the lattice studied is infinite in d − 1 directions or periodically closed. Clearly a periodically closed direction has a 0-mode and therefore does not impose an effective cutoff in k. In that respect, periodic closure is identical to infinite extent, while physically it is not (just like at tree level). One may therefore wonder how periodic closure differs from infinite extent mathematically: How does a finite cylinder differ from an infinite strip? As a first step to assess the effect, we replace the open direction by a periodically closed one. One may regard this as an unfortunate kludge; after all, what we are really interested in is a system that is finite in two directions, open in one and periodically closed in the other. However, if the aim is to study finite size scaling in 2 − ε dimensions, then two finite dimensions are already too many.
However, the setup of an infinitely long (in d − 1 dimensions) periodically closed tube with circumference L does address the problem in question, namely the difference between k = 0 in an infinitely extended direction and k_n = 0 in a finite but periodic direction. In addition, an infinite cylinder, compared to an infinite strip, has translational invariance restored in the periodic direction, and therefore the vertices are, even for a finite system, dramatically simplified.
The physics of a d-dimensional system with one direction periodically closed is quite clear: At early times, or, equivalently, large extinction rates r ≫ D/L², the periodic closure is invisible and so the scaling is that of a d-dimensional (infinite) bulk system as described in Section 4.2, ⟨V^m⟩ ∝ r^{−md/2}. But when the walker starts to re-encounter, due to the periodic closure, sites visited earlier, this direction will "saturate" and so for very small r, the system will display the scaling of an infinite (d − 1)-dimensional lattice.
Just like for the setup in Section 3.5, it is most convenient to study the system for small but finite extinction rate r. The integrals to be performed are identical to Eq. (85), but both sums have a pre-factor of 1/L, Eq. (7) (rather than one having 1/L and the other 2/L, Eq. (4)), and L Ũ_{n,m,ℓ} has the much simpler Kronecker form ∫₀^L dz e^{ık_n z} e^{ık_m z} e^{ık_k z} e^{ık_ℓ z} = L Ũ_{n,m+k,ℓ} = L δ_{n+m+k+ℓ,0}. Most importantly, the expression corresponding to Eq. (86) sees sin²(q_{n′} z) replaced by unity, because the bare propagator corresponding to Eq. (40a) carries a factor Lδ_{n+m,0}, Eq. (6), rather than Lδ_{n,m}/2, Eq. (3), which results in n′ of Ũ_{n,m₂−m′,n′} pairing up with −n′ in Ũ_{−n′,m′+m₁,ℓ}. For easier comparison, we will keep L Ũ_{n,m+k,ℓ} in the following. We thus have (see Eq. (87)) a loop proportional to κ² L Ũ_{n,m+k,ℓ}, Eq. (98). Comparing Eq. (98) to Eq. (87), Eq. (88) and Eq. (90) and re-arranging terms gives Eq. (99) for small ρ̃ = L²(r + ϵ − ıω)/(4π²D), and Eq. (100) for large ρ̃. The asymptotics above are responsible for all the interesting features to be discussed in the following. Firstly, intuition seems to play tricks: One may think that, in the sum on the left of Eq. (101), ρ̃ will always be large compared to the n = 0 term and always be small compared to the terms with n → ∞. In fact, one might think there is no difference at all between large or small ρ̃ and be tempted to approximate the sum immediately by an integral. That, however, produces only the second line, Eq. (101b). The crucial difference is that in a sum each summand actually contributes, whereas in an integral the integrand is weighted by the integration mesh. So, the summand (n² + ρ̃)^{(d−3)/2} has to be evaluated for n = 0, producing ρ̃^{(d−3)/2} in Eq. (101a), which dominates the sum for d < 2 (even d < 3, but the series does not converge for 2 < d and, in fact, is not needed as no IR divergences appear in d > 2) and ρ̃ → 0. The remaining terms can actually be evaluated for ρ̃ = 0, producing 2ζ(3 − d).
The integral, which the (Riemann) sum converges to for large ρ̃, on the other hand, is strictly proportional to ρ̃^{(d−2)/2} and therefore much less divergent than the sum for small ρ̃ → 0 and d < 2.
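The distinction between the sum and its Riemann-integral approximation is easily seen numerically. The sketch below is our own illustration, in d = 1: it evaluates S(ρ) = Σ_n (n² + ρ)^{(d−3)/2} over all integers n and confirms the two asymptotic regimes, S ≈ 1/ρ for small ρ (the n = 0 term) and S ≈ π/√ρ for large ρ (the integral).

```python
import math

# Mode sum S(rho) = sum_{n in Z} (n^2 + rho)^((d-3)/2) in d = 1:
#   small rho: dominated by the n = 0 term, S ~ 1/rho
#   large rho: the sum approaches its Riemann integral, S ~ pi/sqrt(rho)

def S(rho, N=200000, d=1):
    e = (d - 3) / 2.0
    return rho**e + 2.0 * sum((n * n + rho)**e for n in range(1, N + 1))

small, large = 1e-4, 100.0
S_small = S(small)      # ~ 1/rho regime
S_large = S(large)      # ~ pi/sqrt(rho) regime
```

The same mechanism produces the ρ̃^{(d−3)/2} term in Eq. (101a) and the crossover discussed below.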
Of the two regimes, ρ̃ ≫ 1 and ρ̃ ≪ 1, the former is more easily analysed. Setting ϵ and ω to 0 for the time being, we notice that ρ̃ ∝ L²r suggests, somewhat counter-intuitively, that large r, which shortens the lifetime of the walker, has the same effect as large L, which prolongs the time it takes the walker to explore the system. Both effects are, however, of the same nature: They prevent the walker from "feeling" the periodicity of the system. In that case, the walker displays bulk behaviour and in fact, Eq. (100) is the same as Eq. (53).
The other regime, ρ̃ ≪ 1, is richer. At d < 2 and fixed L, Eq. (99) displays a crossover between the two additive terms on the right hand side. Stretching the expansion (101a) beyond its stated limits, for intermediate values of r or L, ρ̃ ≈ 1, the first term on the right hand side of Eq. (99) dominates and the scaling behaviour is that of an open semi-infinite strip of linear extent L, Eq. (90). This is because at moderately large r (or, equally, short times t), the walker is not able to fully explore the infinite directions. But rather than "falling off" as in the system with open boundaries, it starts crossing its own path due to the periodic boundary conditions, at which point the scaling like a d-dimensional bulk lattice (ρ̃ ≫ 1) ceases and turns into that of a d-dimensional open one (ρ̃ ≈ 1). The crossover can also be seen in Eq. (101a), which for d < 2 is dominated by 2ζ(3 − d) for large ρ̃ and by ρ̃^{(d−3)/2} for small ρ̃.
As r gets even smaller (or t increases), ρ̃ → 0, the scaling is dominated by the infinite directions, of which there are d̃ = d − 1, i.e. the scaling is that of a bulk system with d̃ dimensions as discussed in Section 4.1, in particular Eq. (53). In this setting, the walker explores an infinitely long thin cylinder, which has effectively degenerated into an infinitely long line. While the (comparatively) small circumference of the cylinder remains accessible, it is fully explored very quickly compared to the progress in the infinite direction.
To emphasise the scaling of the last two regimes, one can re-write Eq. (99) accordingly. There, the first term displays the behaviour of the semi-infinite strip discussed above (Section 4.3, Eq. (90), ζ(3 − d) ∝ 1/ε, but L/π there and L/(2π) here) and the second term that of a bulk-system with d̃ dimensions, Eq. (53); the infrared singularity (r + ϵ − ıω)^{−ε̃/2} is in fact accompanied by the corresponding ultraviolet singularity Γ(ε̃/2), exactly as if the space dimension was reduced from d to d̃ = d − 1.
The second term also reveals an additional factor 1/L compared to (53). This expression determines the factor W, which enters the Z-factor inversely, Z ∝ L r^{ε̃/2}, Eq. (57), i.e. in the present setting, the Sausage volume scales like (τ/r) L r^{ε̃/2} = τ L r^{−d̃/2}. The scaling in t is found by replacing r by 1/t, or more precisely by ω and Fourier transforming according to Eq. (76), which results in the scaling V ∝ L t^{1−ε̃/2} = L t^{d̃/2}.

Discussion
Because the basic process analysed above is very well understood and has a long-standing history [12,8,1], this work might add little to the understanding of the process itself, were it not for a field-theoretic re-formulation, which is particularly flexible and elegant. The price is a process that ultimately differs from the original model. In hindsight, the agreement of the original Wiener Sausage problem and the process used here to formulate the problem field-theoretically deserves further scrutiny.
That agreement applies first and foremost to the exponents, say Eq. (81) and Eq. (92) and the corresponding results in the literature [1]. That amplitudes are reproduced even in one dimension, Eq. (78) for the bulk and Eq. (95) for the finite system, is not as much of a surprise in the case of the bulk as it is in the open system. In the case of the former, the renormalisation of κ and ultimately of τ is all there is to the effective, "observed" τ; there is no approximation taking place. In fact, when the amplitude was calculated, the Z-factor was exact, Eq. (77) or Eq. (57), rather than the usual approximation Z = 1 − κW.
In the open case, however, omitting terms of order ε⁰, such as in Eq. (89), is likely to cause deviations from the exact results. Unless one is prepared to allow for a z- (or, equivalently, q_n-) dependent κ (whose z dependence is in fact irrelevant in the field theoretic sense), as suggested in Eq. (87), one should not expect the resulting amplitudes to recover the exact results. That Eq. (95) does nevertheless may be explained by the "averaging effect" of the uniform driving. As demonstrated in the case of the first bulk moment, exact results may be recovered by avoiding all expansions and calculating moments immediately in d = 1, rather than expanding in small ε or κ.
As alluded to above, the field theoretic description of the Wiener Sausage is very elegant, not least because the diagrams have an immediate interpretation. For example, one diagram corresponds to a substrate particle deposited while the active particle is propagating. Correspondingly, another represents the suppression of a deposition as the active particle encounters an earlier deposition: the active particle returns to a place it has been before. All loops can therefore be contracted along the wavy line to produce a trajectory, illustrating that the loop integrals calculated above in fact capture the probability of a walker to return: W ∝ ω^{−ε/2}, Eq. (56), which in the time domain gives t^{−d/2}.
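The identification of the loop with the return probability can be made concrete on the lattice: for a one-dimensional simple random walk, the return probability at step 2m is binom(2m, m)/4^m ≈ (πm)^{−1/2}, i.e. the t^{−d/2} decay with d = 1. The following small sketch (our own illustration) confirms this.

```python
import math

# Return probability of a 1d simple random walk at step 2m:
#   P(2m) = binom(2m, m)/4^m, computed iteratively to avoid overflow.
# P(2m) * sqrt(pi*m) -> 1 as m grows, i.e. the t^{-d/2} decay with d = 1.

def return_prob(m):
    p = 1.0
    for k in range(1, m + 1):
        p *= (2 * k - 1) / (2 * k)   # binom(2k,k)/4^k from its predecessor
    return p

ratios = [return_prob(m) * math.sqrt(math.pi * m) for m in (10, 100, 1000)]
```

The ratio approaches 1 from below, the leading correction being −1/(8m).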

Original motivation
The present study was motivated by a number of "technicalities" which were encountered by one of us during the study of a more complicated field theory. The first issue, as mentioned in the introduction, was the "fermionic" or excluded-volume interaction. In a first step, that was generalised to an arbitrary carrying capacity n_0, whereby the deposition rate of immobile offspring varies smoothly with the occupation number until the carrying capacity is reached. It was argued above, Figure 4, that the constraint to a finite but large carrying capacity n_0, which may be conceived as less brutal than setting n_0 = 1, can be understood as precisely the latter constraint, but on a more complicated lattice. Even though the field theory was constructed in a straightforward fashion, the perturbative implementation of the constraint, namely by effectively discounting depositions that should not have happened in the first place, makes it look like a minor miracle that it produces the correct scaling (and even the correct amplitudes in some cases). We conclude that the present approach is perfectly suitable to implement excluded-volume constraints.
It is interesting to vary n_0 in the expressions obtained for the volume moments. At first it may not be obvious that, for example, the first volume moments in one dimension, Eq. (78) and Eq. (95), are linear in n_0, given that κ = τ/n_0, Eq. (21). Since κ enters the mth moment ⟨V^m⟩ as κ^{−m}, Eq. (81) and Eq. (92), the carrying capacity enters through κ = γ/n_0 as n_0^m. Even though the carrying capacity enters smoothly into the deposition rate (or, equivalently, the suppression of the deposition), in dimensions d < 2 each site is visited infinitely often and is therefore "filled up to the maximum" with offspring particles, as if the carrying capacity were a hard cutoff (with the deposition rate being constant until the occupation reaches the carrying capacity). The volume of each sausage therefore increases by a factor n_0 in dimensions d < 2 and is independent of it (as κ does not enter) in d > 2.
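The n_0-dependence can be summarised in one line, in the notation of the text:

⟨V^m⟩ ∝ κ^{−m} = (γ/n_0)^{−m} ∝ n_0^m   for d < 2,

so in particular the first moments, Eq. (78) and Eq. (95), are linear in n_0, while in d > 2 the moments are independent of n_0 as κ does not enter.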
The second issue to be investigated was the presence of open boundaries. This is, obviously, not a new problem as far as field theory is concerned in general, but in the present case being able to change boundary conditions exploits the flexibility of the field theoretical re-formulation of the Wiener Sausage and allows us to probe results in a very instructive way.
It is often said that translational invariance corresponds to momentum conservation in k-space, but the present study highlights some subtleties. As far as bare propagators are concerned, open, periodic or, in fact, reflecting boundary conditions all allow the propagator to be written with a Kronecker δ. In that sense, bare propagators do not lose momentum. Momentum, however, is generally not conserved at vertices, i.e. vertices with more than two legs do not come with a simple δ_{n+m+ℓ,0}, but rather in a form such as Eq. (9d) or Eq. (84).
These more complicated expressions are present even at tree level, Eq. (45). This touches on an interesting feature, namely that non-linearities are present even in dimensions above the upper critical dimension; they have to be, as otherwise the tree level lacks a mechanism by which immobile offspring are deposited.
Below the upper critical dimension, the lack of momentum conservation has three major consequences. Firstly, each vertex comes with a summation, and so a loop formed of two vertices, Eq. (85), requires not only one summation "around the loop" but a second one, accounting for another index which is no longer fixed by momentum conservation. This is a technicality, but one that requires additional and potentially serious computation. Secondly, and more seriously, the very structure of the vertex might change. For example, at bare level κ comes with a factor LU_{n,m+k,ℓ}, but that U_{n,m+k,ℓ} might change under renormalisation.
Finally, the third and probably most challenging consequence is the loss of momentum conservation in the propagator. While a lack of translational invariance may not be a problem at bare level, the presence of non-momentum-conserving vertices can render the propagators themselves non-momentum-conserving, provided the propagators renormalise at all (see the discussion after Eq. (83)), which they do not in the present case, as far as the two shown in Eq. (11a) are concerned. However, the full propagator parameterised by τ has every right to be called a propagator, and it does renormalise. Luckily, it never features within loops, so the complications arising from its new structure can be handled within observables and do not spoil the renormalisation process itself.
A consequence of the Dirichlet boundary conditions is the existence of a lowest, non-vanishing mode, q_1 = π/L, Eq. (92), which, in fact, turns out to play the rôle of the effective mass: just like the minimum of the inverse propagator, (−ıω + Dk^2 + r), the "gap", is r in the bulk, it is Dq_1^2 + r in the presence of Dirichlet boundary conditions, and thus does not vanish even when r = 0. This is a nice narrative, which is challenged, however, when periodic boundary conditions are applied. At tree level, when the interaction is switched off, periodic boundaries cannot be distinguished from an infinite system, and so we would evaluate an infinite and a periodic system at tree level at k = 0 and k_n = 0 respectively, producing exactly the same expectation (for exactly the right reason).
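The effective-mass picture may be condensed into a comparison of the two gaps (a sketch in the notation of the text):

bulk:  min_k (Dk^2 + r) = r,    Dirichlet:  min_{n≥1} (Dq_n^2 + r) = D(π/L)^2 + r,

so the Dirichlet gap remains finite even at r = 0, whereas periodic boundaries retain the 0-mode, k_n = 0, and with it a vanishing gap.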
The situation is different beyond tree level. Periodic or open, the system is finite. However, periodic boundaries do not drain active particles, so the lowest wave number vanishes, k_n = 0. To control the infrared (in the infinite directions), a finite extinction rate r is necessary, which effectively competes with the system size L via ρ̄ ∝ L^2 r/D, Eq. (99) and Eq. (100). If ρ̄ is large, bulk behaviour ∝ ρ̄^{−ε/2} is recovered, Eq. (100), as is the case in the open system (see footnote 9 before Eq. (88)). For moderately small values, ζ(3 − d) ∝ 1/ε dominates, Eq. (101a), a signature of a d-dimensional system with open boundaries, Eq. (90). In that case, scaling amplitudes are in fact ∝ L^ε, Eq. (102). However, the presence of the 0-mode allows for a different asymptote as ρ̄ is lowered further: the bulk-like term governing the d − 1 = d̃ infinite dimensions takes over, ∝ L^{−1} ((r + ε − ıω)/D)^{−ε̃/2}. It is the appearance of that term, and only that term, which distinguishes periodic from open boundary conditions. So the narrative of "lowest wave number corresponds to mass" is essentially correct. In open systems, it dominates for all small masses. In periodic systems, the scaling of the lowest non-zero mode competes with that of a (d − 1)-dimensional bulk system due to the presence of a 0-mode in the periodic direction, which asymptotically drops out.
The third point to be addressed in the present work was the special properties of a propagator of an immobile species. The fact that the propagator is, apart from the δ(k+k′), Eq. (11b), independent of the momentum is physically relevant, as the particles deposited stay where they have been deposited, and so the walker has to truly return to a previous spot in order to interact. Also, deposited particles are not themselves subject to any boundary conditions; this is the reason for the ambiguity of the eigenfunctions that can be used for the fields of the substrate particles. If deposited particles were to "fall off" the lattice, the volume of the sausage on a finite lattice could not be determined by taking the ω → 0 limit.
It is interesting to see what happens to the crucial integral, Eq. (53), when the immobile propagator is changed to (−ıω + νk^2 + ε)^{−1}: at external momentum k = 0 the result is Eq. (53) with D replaced by D + ν. The integral thus remains essentially unchanged, except that the effective diffusion constant is adjusted, D → D + ν.
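The mechanism behind D → D + ν is the usual convolution of two causal propagators in the frequency domain; closing the ω′-contour gives (a sketch in the notation of the text, with r and ε the respective masses):

∫ (dω′/2π) [−ı(ω−ω′) + Dk′^2 + r]^{−1} [−ıω′ + νk′^2 + ε]^{−1} = [−ıω + (D+ν)k′^2 + r + ε]^{−1},

which at external momentum k = 0 is the integrand of Eq. (53) with D replaced by D + ν (and the two masses added).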
A slightly bigger surprise is the fact that ε, the IR regulator of the substrate propagator, is just as good an IR regulator as r, the IR regulator of the activity propagator. The entire field theory, and thus all the physics discussed above, does not change when the "evaporation of walkers" is replaced by the "evaporation of substrate particles". The stationarity of both in infinite systems is obviously due to two completely different processes, which, however, have the same effect on the moments of the Sausage volume: if r is finite, a walker eventually disappears, leaving behind the trace of substrate particles, which stay indefinitely. If ε is finite, stationarity is maintained as substrate particles disappear while new ones are produced by an ever-wandering walker.
Finally, the fourth issue to be highlighted in the present work was that of observables which are spatial integrals of densities. These observables have a number of interesting features. As far as space is concerned, eigenfunctions with a 0-mode immediately give access to integrals over all space. However, open boundaries force us to perform a summation (and an awkward looking one, too, say Eq. (41)).

Future work
Two interesting extensions of the present work deserve brief highlighting. Firstly, the Wiener Sausage may be studied on networks: given a network, or an ensemble thereof, how many distinct sites are visited as a function of time? The key ingredient in the analysis is the lattice Laplacian, which provides the mathematical tool to describe the diffusive motion of the walker. The contributions k^2 and q_n^2 in the denominator of the propagator, Eq. (11a) and Eq. (40a), are the squared eigenvalues of the Laplacian operator in the continuum and, in fact, of the lattice Laplacian for, say, a square lattice. The integrals in k-space and, equivalently, sums like Eq. (4) and Eq. (41) should be seen as integrating over all eigenvalues k^2, whose density in d dimensions is proportional to |k|^{d−1}. It is that d which determines the scaling in, say, V ∝ t^{d/2} for d < 2. In other words, if |k|^{d_s−1} is the density of eigenvalues (the density of states) of the lattice Laplacian, then the Wiener Sausage volume scales like t^{d_s/2} (and the probability of return like t^{−d_s/2}). Provided the propagator does not acquire an anomalous dimension, which could depend on d_s in a complicated way, the difference between a field theory on a regular lattice with dimension d and one on a complicated graph with spectral dimension d_s is captured by replacing d by d_s [9, p. 23]. We confirmed this finite-size scaling of the Wiener Sausage on four different fractal lattices.
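The correspondence between k^2 and the spectrum of the lattice Laplacian is easy to verify explicitly. The sketch below (our own illustration) builds the Laplacian of a periodic chain of N sites and compares its smallest non-trivial eigenvalue with k_1^2 = (2π/N)^2:

```python
import numpy as np

def ring_laplacian(n):
    """Lattice Laplacian of a periodic chain of n sites (unit spacing)."""
    eye = np.eye(n)
    # 2 on the diagonal, -1 on the two (periodic) off-diagonals.
    return 2 * eye - np.roll(eye, 1, axis=0) - np.roll(eye, -1, axis=0)

n = 100
evals = np.sort(np.linalg.eigvalsh(ring_laplacian(n)))
k1 = 2 * np.pi / n
# Exact lattice eigenvalues are 4*sin(pi*m/n)**2, which approach k**2
# for small k = 2*pi*m/n; evals[0] is the 0-mode, evals[1] the lowest
# non-trivial (doubly degenerate) mode.
```

On a general graph the same construction with the graph Laplacian yields an eigenvalue density which, if it behaves like |k|^{d_s−1}, replaces d by the spectral dimension d_s in the scaling above.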
The second interesting extension is the addition of further processes, such as branching of the walkers themselves. In that case they interact not only with their own past trace, but also with the traces of ancestors and successors. This field theory is primarily dominated by the branching ratio, say s, and λ, whereas κ, χ and ξ are irrelevant. Preliminary results suggest that d_c = 4 in this case and again V ∝ L^{2−ε}, this time, however, with ε = 4 − d. Higher moments seem to follow ⟨V^m⟩ ∝ L^{(m−1)d+2−ε} = L^{md−2}. The latter result suggests that the dimension of the cluster formed of the sites visited is that of the underlying lattice.