A Field-Theoretic Approach to the Wiener Sausage

The Wiener Sausage, the volume traced out by a sphere attached to a Brownian particle, is a classical problem in statistics and mathematical physics. Initially motivated by a range of field-theoretic, technical questions, we present a single loop renormalised perturbation theory of a stochastic process closely related to the Wiener Sausage, which, however, proves to be exact for the exponents and some amplitudes. The field-theoretic approach is particularly elegant and very enjoyable to see at work on such a classic problem. While we recover a number of known, classical results, the field-theoretic techniques deployed provide a particularly versatile framework, which allows easy calculation with different boundary conditions even of higher momenta and more complicated correlation functions. At the same time, we provide a highly instructive, non-trivial example for some of the technical particularities of the field-theoretic description of stochastic processes, such as excluded volume, lack of translational invariance and immobile particles. The aim of the present work is not to improve upon the well-established results for the Wiener Sausage, but to provide a field-theoretic approach to it, in order to gain a better understanding of the field-theoretic obstacles to overcome.


Introduction
Fig. 1 Example of the Wiener Sausage problem in two dimensions. The blue area has been traced out by the Brownian particle attached to a disc shown in red (Color figure online)

The Wiener Sausage problem has a wide range of applications, such as medical physics [6, for example], chemical engineering [11, for example] or ecology [33, for example]. On the lattice, the volume of the Sausage translates to the number of distinct sites visited [29]. In this work, we present an alternative, field-theoretic approach which is particularly flexible with respect to boundary conditions and observables, with the aim to characterise and resolve the technical challenges in such an undertaking, not with the aim to improve upon the existing theory of the Wiener Sausage.
The approach has the additional appeal that, somewhat similar to percolation [25] where all non-trivial features are due to the imposed definition of clusters as being composed of occupied sites connected via open bonds between nearest neighbours, the "interaction" in the present case is one imposed in retrospect. After all, the Brownian particle studied is free and not affected by any form of interaction. Yet, the observable requires us to discount returns, i.e. loops in the trajectory of the particle, thereby inducing an interaction between the particle's past and present.
Before describing the process to be analysed in further detail, we want to point out that some of the questions pursued in the following are common to the field-theoretic reformulation of stochastic processes [4,5,8,21,27,28]. Against the background of a field theory of the Manna Model [7,19] one of us recently developed, the features we wanted to understand were: (1) "Fermionic", "excluded volume" or "hard-core interaction" processes [13, for example], i.e. processes where lattice sites have a certain carrying capacity (unity in the present case) that cannot be exceeded. (2) Systems with boundaries, i.e. lack of momentum conservation in the vertices. (2') Related to that, how different modes couple in finite, but translationally invariant systems (periodic boundary conditions). (3) The special characteristics of the propagator of the immobile species. (4) Observables that are spatial or spatio-temporal integrals of densities.
The Wiener Sausage incorporates all of the above and because it is exactly solvable or has been characterised by very different means [2,9,15,31], it also gives access to a better understanding of the renormalisation process itself. In the following section we will describe the process we are investigating and contrast it with the original Wiener Sausage. In Sect. 3 we will introduce the field-theoretic description up to tree level, which is complemented by Sect. 4, where we perform a one-loop renormalisation procedure. It will turn out that there are no further corrections beyond one loop and our perturbative results may thus be regarded as exhaustive. Sections 4.3 and 4.4 are dedicated to calculations in finite systems. Section 5 contains a discussion of the results mostly from a field-theoretic point of view, with Sect. 5.1 however focusing on a summary of this work with regard to the original Wiener Sausage problem.

Model
Originally, the Wiener Sausage is concerned with the moments or, more generally, the statistical properties as a function of time of the volume traced out by a sphere of fixed given (say, unit) radius, which is attached to a Brownian particle. This volume is thus the set of points within a certain distance of the particle's trajectory. Our field-theoretic approach will not recover that process, but one that can reasonably be assumed to reside in the same universality class. One may take the view that the field-theoretic description is merely a different view on the same phenomenon, namely the Wiener Sausage.
To motivate the field theory and link it to the original problem, we will distinguish three different models: (i) The original Wiener Sausage in terms of a sphere dragged by a Brownian particle [15], (ii) a discrete time random walker on a lattice, where the Sausage becomes the set of distinct sites visited [29], (iii) a Brownian particle in the continuum that spawns immobile offspring with a finite rate and subject to a finite carrying capacity. In the following, we will first describe how the phenomenon on the lattice, (ii), relates to the original Wiener Sausage, (i), and then how the field theory, (iii), relates to the lattice model, (ii).
The asymptote in long times t of the number of distinct sites visited by a discrete time random walker on a lattice ((ii) above) is expected to converge to that of the volume V(t) over the volume V_0 of the sphere in the original process ((i) above), provided the walker returns repeatedly, so that the shape of the sphere and the structure of the lattice respectively do not enter into the shape and size of the volume visited. Frequent returns are realised in the limit of long times and below d = 2 dimensions. In that case, the walker on the lattice becomes a discretised version of the original Wiener Sausage, as the particle drags a sphere that is small compared to the volume traced out. Indeed, in one dimension, d = 1, the expected volume of the Wiener Sausage in units of the volume of the sphere is dominated by V(t)/V_0 ∼ √(4tD/(πb²)), where t is the time, D is the diffusion constant and b the radius of the sphere, whereas the expected number of distinct sites visited by a random walker after n steps is dominated by √(8n/π) [29]. The two expressions are identical for t = n and D = 2b², the effective diffusion constant of a random walker taking one step of distance 2b in one of the two directions in each time step. Above d = 2 dimensions, the walker is free, i.e. self-intersection of the trace becomes irrelevant on larger time scales. The number of distinct sites visited and the Wiener Sausage volume therefore both scale linearly in t and n respectively. However, the (non-universal) proportionality factor, e.g. lim_{t→∞} V(t)/(V_0 t) for the original Wiener Sausage, is affected by the microscopic details such as the self-intersection of the sphere or the lattice structure of the random walker.
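The d = 1 comparison above is easy to check numerically. The following sketch is ours, not part of the original analysis, and all names and parameter values are illustrative; it estimates the mean number of distinct sites visited by a simple random walker and compares it to the leading asymptote √(8n/π):

```python
import math
import random

def distinct_sites_visited(n_steps, rng):
    """Number of distinct sites visited by a 1d nearest-neighbour random walk."""
    pos = 0
    visited = {0}
    for _ in range(n_steps):
        pos += rng.choice((-1, 1))
        visited.add(pos)
    return len(visited)

rng = random.Random(42)
n_steps, trials = 10_000, 200
mean = sum(distinct_sites_visited(n_steps, rng) for _ in range(trials)) / trials
prediction = math.sqrt(8 * n_steps / math.pi)  # leading-order asymptote [29]
print(mean, prediction)
```

For n = 10⁴ the two numbers agree to within a few per cent; the corrections to the leading asymptote are O(1) in n and thus subdominant.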
We proceed to relate the process on the lattice (ii) to a Brownian particle spawning immobile offspring (iii). To this end, we first describe (ii) in the language of reaction and diffusion. In (ii), an "active" particle (species "A", the active species) performs a random walk on a lattice. Simultaneously, the particle spawns immobile offspring particles (species "B", the blue ink traces of A shown in Fig. 1, below sometimes referred to as a "substrate particle") at every site visited, provided that the site is not already occupied by an immobile B particle. In other words, A spawns exactly one B at every newly visited site, so that the number of B particles deposited becomes a proxy for the number of distinct sites visited. In dimensions less than 2 the A particle will return to every site visited arbitrarily often in the limit of long times. A finite spawning probability will therefore change the number of B particles deposited only at the fringes of the set of sites visited, without, however, changing the asymptotics of the number of B particles in the system as a function of time. If n B (x, t) is the number of B particles at position x on the lattice and time t, the probability with which B particles are spawned by an A particle may be written as γ (1 − n B (x, t)), so that deposition occurs with probability γ if no B particle is present and not at all otherwise.
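The claim that a finite spawning probability only affects the fringes in d < 2 can be checked by simulating process (ii) directly. The sketch below is our own illustration (the choice of γ and the seed are arbitrary): a 1d walker deposits B particles with probability γ(1 − n_B) at the site it occupies, and the resulting B count is compared to the number of distinct sites visited.

```python
import random

def sausage_deposits(n_steps, gamma, rng):
    """1d walker depositing B particles with probability gamma*(1 - n_B),
    i.e. with probability gamma on B-free sites only (carrying capacity 1).
    Returns (number of B particles, number of distinct sites visited)."""
    pos, n_b, visited = 0, {}, {0}
    for _ in range(n_steps):
        if n_b.get(pos, 0) == 0 and rng.random() < gamma:
            n_b[pos] = 1  # deposition attempt at the current site
        pos += rng.choice((-1, 1))
        visited.add(pos)
    if n_b.get(pos, 0) == 0 and rng.random() < gamma:
        n_b[pos] = 1  # the final site gets a deposition attempt, too
    return sum(n_b.values()), len(visited)

rng = random.Random(1)
b_full, s_full = sausage_deposits(20_000, 1.0, rng)  # deposit at every new site
b_part, s_part = sausage_deposits(20_000, 0.3, rng)  # finite attempt probability
print(b_full, s_full, b_part, s_part)
```

With γ = 1 the B count equals the number of distinct sites exactly; with γ < 1 it falls short only by the rarely revisited sites at the fringes, leaving the asymptotics unchanged.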
At this stage, we may introduce a carrying capacity n_0, which determines the maximum number of B particles deposited on any site, by making the spawning probability drop from γ to 0 linearly in the particle number n_B(x, t), i.e. like

γ (n_0 − n_B(x, t))/n_0 . (1)

In the process (ii) discussed so far, n_0 is unity, but from what has been discussed above, n_0 > 1 will result in each (frequently) revisited site carrying n_0 immobile B particles. The meaning of the carrying capacity in relation to the field theory is further discussed in Sect. 2.2.
To see the relation between the third process, (iii), and the discrete time, discrete space process (ii), we first introduce continuous time in the latter. Random hopping, which used to occur once in every time step, now becomes a Poisson process with a certain rate, say H, as does the spawning of immobile offspring, which now takes place with rate γ(n_0 − n_B(x, t))/n_0. In the limit of γ ≫ H, all distinct sites visited will carry n_0 immobile offspring. However, in dimensions d < 2 sites are visited repeatedly, so that even a finite deposition (attempt) rate γ yields the same asymptotic occupation. In dimensions d > 2 the number of B particles deposited will, on the other hand, be proportional to the rate γ.
The expression γ (n 0 − n B (x, t))/n 0 may be written as γ − κn B (x, t) where κ = γ /n 0 is a discount rate. In this interpretation (the view adopted in the perturbation theory below), deposition takes place unhindered with rate γ while unlimited (and thus supposedly suppressed) deposition is discounted by κn B (x, t), i.e. with a rate proportional to the occupation and inversely proportional to the carrying capacity.
It remains to take the continuum (space) limit to arrive at process (iii) to be written as a field theory, where the occupation numbers of B and A particles turn into occupation densities, i.e. fields, namely n_B(x, t) and n_A(x, t). Moreover, the carrying capacity n_0 turns into a carrying density capacity, so that the discount mentioned above is now parameterised by κ = γ/n_0 (a rate per density). The deposition thus occurs with rate

γ − κ n_B(x, t) . (2)

The random movement of the A particle is now parameterised by the diffusion constant D, which may be obtained as the hopping rate H times the squared lattice spacing in the limit of the latter going to 0.

Intermediate Summary
The long-winded discussion above serves as a justification as to why we expect the field theory of (iii) to produce a phenomenon in the same universality class as the original Wiener Sausage. Starting from the original Wiener Sausage (i), we have motivated why the process on the lattice, (ii), can be regarded as a discretised version of (i) and introduced (iii) as its continuum approximation. In the course of the justification, we made use of some of the details of the processes involved, such as repeated returns to sites in process (ii). The field theoretic description of the universality class of the Wiener Sausage, however, may be derived without recourse to these details, simply by observing that the volume traced out by the sausage is proportional to the length of the trajectory with multiple visits discounted, corresponding to the number of immobile B particles deposited by a Brownian particle (of species A), if its spawning rate is moderated down in the presence of B particles. To summarise, process (iii), to be cast in a Liouvillian and thus a field theory below, is defined as follows: The Brownian particle A freely diffuses with diffusion constant D and possibly subject to boundary conditions. While diffusing, the particle can spawn offspring with Poissonian rate γ which, however, belong to an immobile second species B. If n B (x, t) is the density of these particles, the deposition is linearly regulated down in their presence, according to γ − κn B (x, t) with κ = γ /n 0 . Here, γ is the deposition rate and n 0 is the carrying (density) capacity.
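The definition of process (iii) just given can be sketched as a continuous-time (Gillespie-type) simulation on a 1d lattice. This is our own minimal illustration of the definition, not the field theory itself; the function name and all parameter values are arbitrary choices:

```python
import random

def gillespie_sausage(t_max, hop_rate, gamma, n0, rng):
    """A single A particle hops with Poissonian rate hop_rate and deposits
    immobile B particles with rate gamma*(n0 - n_B)/n0 = gamma - kappa*n_B.
    Returns the total number of B particles at time t_max (the 'volume')."""
    t, pos, n_b = 0.0, 0, {}
    while True:
        dep_rate = gamma * (n0 - n_b.get(pos, 0)) / n0
        total = hop_rate + dep_rate
        t += rng.expovariate(total)          # waiting time to the next event
        if t > t_max:
            return sum(n_b.values())
        if rng.random() < dep_rate / total:  # deposit a B particle ...
            n_b[pos] = n_b.get(pos, 0) + 1
        else:                                # ... or hop
            pos += rng.choice((-1, 1))

rng = random.Random(3)
volume = gillespie_sausage(t_max=500.0, hop_rate=1.0, gamma=2.0, n0=1, rng=rng)
print(volume)
```

With n_0 = 1 the deposition rate vanishes on occupied sites, so the B count is again a proxy for the number of distinct sites visited; the carrying capacity is enforced automatically because the deposition rate drops to zero at n_B = n_0.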
It is convenient in the field theory to allow for spontaneous extinction with rate (or "mass") r. Ignoring boundary conditions, the propagator of the Brownian particle (species A, the "activity") takes the familiar form 1/(−ıω + Dk² + r), where ω and k parameterise frequency and momentum (wave number) coordinates, respectively. The propagator of the immobile species takes the form 1/(−ıω + ε), where ε is the rate of spontaneous extinction of the B particles and the limit ε → 0⁺ is implied to establish causality, as often done in field theories. The key observable, corresponding to the volume of the sausage, is the total number of immobile particles in the system after a given time t, i.e. the spatial integral over n_B(x, t).
The engineering dimension of κ is a rate per density, which in comparison to the engineering dimension of the diffusion constant, an area per time, reveals the upper critical dimension of 2. Alternatively, this can be seen from the density of unhindered deposition as a function of time, (γt)/(Dt)^{d/2}, i.e. in the absence of discounts, κ = 0 or n_0 → ∞.
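Spelled out, the power counting behind this statement reads as follows (our notation; [•] denotes the engineering dimension, with [x] = L and [t] = T and the diffusive scaling T = L² held fixed):

```latex
[\kappa] = \frac{\mathsf{L}^d}{\mathsf{T}}
\;\overset{\mathsf{T}=\mathsf{L}^2}{=}\; \mathsf{L}^{d-2},
\qquad
\frac{\gamma t}{(Dt)^{d/2}} \propto t^{1-d/2}
\;\longrightarrow\;
\begin{cases}
\infty & \text{for } d<2,\\[2pt]
0 & \text{for } d>2,
\end{cases}
\qquad (t \to \infty)
```

so the discount κ is relevant below, irrelevant above, and marginal exactly at d_c = 2.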
In what follows, we will characterise process (iii) field theoretically. The Liouvillian of the process is split into a linear part, Eq. (11), discussed in Sect. 3.2, and a non-linear part Eq. (21), discussed in Sect. 3.3. The linear part of the Liouvillian can be constructed from the propagators mentioned above and vice versa. The Liouvillian will enter into a path-integral, which can be used to generate all correlation and vertex functions. The path integral itself is to be evaluated perturbatively in the non-linearity, which, for example, instantly indicates that the non-linearity has no bearing on the propagator of particle A.
Above two dimensions, the non-linearity causes an ultraviolet divergence, Eq. (56), which has its origin in the increasingly sharp divergence in t of the density of a random walker at the origin.1 However, in dimensions above 2 the non-linearity is infrared irrelevant and so all long-time, long-range observables are covered by the tree level. In dimensions below 2 no ultraviolet divergence occurs and the infrared can be regularised using finite masses r and ε. We will therefore work in dimensions 2 − ϵ, with ϵ > 0, known as dimensional regularisation (of the ultraviolet).
Initially, the density fields will be studied on an infinite domain without boundaries. However, in Sect. 4.3 we will also consider an infinite slab and in Sect. 4.4 an infinite cylinder. We will use Fourier transforms to write the fields in the infinite domain and suitably chosen Fourier series in the presence of open (Dirichlet) or cylindrical boundaries. These transforms and series are discussed in Sect. 3.1 and used later in the respective sections.
We will demonstrate in the following that the field theory recovers exact results of the original Wiener Sausage as far as universal exponents are concerned, but also with respect to some amplitudes (namely the leading order term of the volume of the sausage in one dimension as a function of time and the leading order of the volume as a function of the system size of the infinite slab). Firstly, the present results confirm that the logistic term Eq. (2) is capable of capturing the constraints due to the carrying capacity. At a more technical level, the calculations for (partially) finite systems (infinite slab and cylinder) involve different propagators, which under renormalisation can lead to new non-linearities. Similar to the classic case discussed in [18], this problem, however, will be avoided. The results for these more complicated boundary conditions show very interesting crossover behaviour. Finally, from a physical point of view, it is particularly interesting that the infrared regularisation of the immobile species, ε, a necessary ingredient to preserve causality in the absence of diffusion, can in principle be used to regularise the theory as a whole, i.e. without the need of a particle mass r. Before introducing the field theory of the present model in Sect. 3, we discuss in the following briefly the intricacies of the fermionic nature of the B particles.

1 The integral (56) is in fact the time-integrated density of a random walker at the origin, subject to extinction.

Finite Carrying Capacity
To fully understand the effect and consequences of the carrying capacity, it is best to reconsider the process on the lattice. A carrying capacity of n 0 = 1 in Eq. (1) switches off the deposition of B particles in their presence in a rather dramatic fashion, implementing a constraint that is normally referred to as fermionic, because there is never more than one B particle deposited on a site. Raising n 0 allows the spawning rate to drop linearly in the occupation in an otherwise bosonic setup. While this may raise suspicion and invite the criticism of a fudge, as demonstrated below, such a bosonic regularisation may be interpreted as the fermionic case on a lattice with a particular connectivity, i.e. the attempted regularisation is the original, fermionic case in disguise, suggesting that no such regularisation is needed.
Some authors [34, and references therein] avoid terms like Eqs. (1) or (2) by expanding a suitable expression for δ_{1,n_B(x,t)}, a Kronecker δ-function. Equations (1) and (2) are not leading order terms in an expansion. For n_0 = 1 and before taking any other approximation (e.g. continuous space and density or removing irrelevant terms in the field theory), a logistic term like (1) is a representation of the original process as exact as one involving the Kronecker δ-function. For n_0 > 1 a logistic term gives rise to a model that may be strictly different compared to one with a sharp carrying capacity implemented by, say, a Heaviside step function, θ(n_0 − n_B(x, t)), but nonetheless one that may be of equal interest.
Large n_0, on the other hand, softens the cutoff, because spawning does not drop suddenly from γ to 0 but is more and more suppressed. One might therefore be inclined to study the problem in the limit of large n_0. At closer inspection, however, it turns out that such increased n_0 does not present a qualitative change of the problem: Having n_0 > 1 is as if each site was divided into n_0 spaces. When the Brownian particle jumps from site to site it arrives in one of those n_0 spaces, only n_0 − n_B(n, t) of which are empty, so that an offspring can be left behind. The process with carrying capacity n_0 > 1 therefore corresponds to the process with a carrying capacity of unity per space on a lattice where n_B(n, t) describes the number of immobile offspring in each "nest" or column of such spaces, as illustrated in Fig. 2. In effect, the carrying capacity n_0 > 1 is implemented per column, leaving the original fermionic constraint of at most one offspring per space (or site) in place. In other words, even when a carrying capacity n_0 ≫ 1 is introduced to smoothen the fermionic constraint, it is still nothing else but the original constraint n_0 = 1 on a different lattice. This led us to believe that there is no qualitative difference between n_0 = 1 and any other finite value of n_0. In the following, the field theory will retain the carrying capacity n_0 because it is an interesting parameter (n_0 → ∞ switches the interaction off) and a "marker" of the interaction. It may be set to any positive value.

Fig. 2 A one dimensional lattice of size L and carrying capacity n_0 = 4 corresponds to the lattice shown above, where the carrying capacity of the former is implemented by expanding each site into a column of n_0 sites. The Brownian particle can jump from every site to all sites in the neighbouring columns. In the new lattice, the carrying capacity per site is unity, the carrying capacity per column is n_0
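The equivalence of the linear discount and the column lattice is elementary: a particle arriving in a column in which n_B of the n_0 spaces are occupied finds an empty space, and thus spawns, with probability (n_0 − n_B)/n_0. A quick numerical sketch (ours; the function names are illustrative) makes the point:

```python
import random

def spawn_prob_logistic(n_b, n0):
    """Spawning probability with the linear discount of Eq. (1), gamma = 1."""
    return (n0 - n_b) / n0

def spawn_prob_columns(n_b, n0, rng, trials=100_000):
    """Arriving particle lands uniformly in one of n0 spaces and spawns only
    if that space is empty (unit carrying capacity per space). Spaces
    0..n_b-1 are taken as occupied; the fraction of successes is returned."""
    hits = sum(1 for _ in range(trials) if rng.randrange(n0) >= n_b)
    return hits / trials

rng = random.Random(7)
n0 = 4
for n_b in range(n0 + 1):
    print(n_b, spawn_prob_logistic(n_b, n0),
          round(spawn_prob_columns(n_b, n0, rng), 3))
```

The empirical landing probability on the column lattice reproduces the logistic factor for every occupation n_B, confirming that the "bosonic regularisation" is the fermionic constraint in disguise.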

Field Theory
In order to cast the model introduced above in a field-theoretic language, we take the Doi-Peliti [8,21] approach without going through too many technical details. There are a number of reviews and extremely useful tutorials available [4,5].
In the following the mobile particle is of species "A", performing Brownian motion with (nearest neighbour) hopping rate H, which translates to diffusion constant D = H/(2d) on a d-dimensional hypercubic lattice. We expect universal scaling in the large time and space limit. To regularise the infrared, we also introduce an extinction rate r. A's creation operator is a†(x), its annihilation operator is a(x). The immobile species is "B", spawned with rate γ by species A. Its creation operator is b†(x), its annihilation operator is b(x), both commuting with the creation and annihilation operators of species A. The immobile species goes extinct with rate ε, which allows us to have a Fourier transform and to restore causality (possible annihilation, i.e. existence, only after creation) even without spontaneous extinction, once we take the limit ε → 0.

Fourier Transform
After replacing the operators by real fields, the Gaussian (harmonic) part of the resulting path integral can be performed, once the fields have been Fourier transformed. We will use the sign and notational convention

φ(x, t) = ∫ (d^d k)/(2π)^d (dω)/(2π) φ(k, ω) e^{ı(k·x − ωt)} . (3)

The field φ(k, ω) corresponds to the annihilator a(x) of the active particles, the field φ̃(k, ω) to the Doi-shifted creator ã(x). It is instructive to consider a second set of orthogonal functions at this stage. Placing the process in a space that has a finite extension along one axis means that boundary conditions have to be met, which is more conveniently done in one eigensystem rather than another. Below, we will consider an infinite slab with finite thickness L, i.e. d-dimensional spaces which are infinitely extended (using the orthogonal functions and transforms introduced above) in d̃ = d − 1 dimensions, while along one axis, the boundaries are open, i.e. the particle density of species A vanishes at the (two parallel, d̃-dimensional) boundaries and outside. This Dirichlet boundary condition is best met using eigenfunctions √(2/L) sin(q_n z) with q_n = πn/L and n = 1, 2, …, making it complete and orthonormal because (2/L) ∫_0^L dz sin(q_n z) sin(q_m z) = δ_{n,m}.
In passing, we have introduced the finite linear length of the space, L. Purely for ease of notation and in order to keep expressions in finite systems dimensionally as similar as possible to those in infinite ones, Eq. (3), we will transform as follows:

φ(z) = (2/L) Σ_{n=1}^∞ sin(q_n z) φ_n with φ_n = ∫_0^L dz sin(q_n z) φ(z) , (5)

so that (2/L) Σ_{n=1}^∞ sin(q_n z) sin(q_n y) = δ(z − y), where δ(z − y) is the usual Dirac δ function for z − y ∈ (0, L) but to be replaced by the periodic Dirac comb Σ_{m=−∞}^∞ δ(z − y + mL) for arbitrary z − y. For ease of notation, we have omitted the time dependence of φ(x, t) as well as the d̃ components other than z. The other fields, φ̃, as well as ψ and ψ̃, transform correspondingly. The spatial transform of the latter is subject to some convenient choice, because the immobile species is not constrained by a boundary condition.
It will turn out that, as expected in finite size scaling, the lowest mode q 1 = π/L plays the rôle of a temperature like variable, controlling the distance to the critical point.
We will also briefly study systems which are infinitely extended in d̃ dimensions and periodically closed in one. In the periodic dimension, the spectrum of conveniently chosen eigenfunctions √(1/L) exp(ık_n y) is discrete with k_n = 2πn/L and n ∈ Z, with (1/L) ∫_0^L dy e^{ık_n y} e^{ık_m y} = δ_{n+m,0}.
Again, we transform slightly asymmetrically (in L),

φ(z) = (1/L) Σ_{n=−∞}^∞ e^{ık_n z} φ_n with φ_n = ∫_0^L dz e^{−ık_n z} φ(z) , (8)

so that (1/L) Σ_{n=−∞}^∞ e^{ık_n (z−y)} = δ(z − y), where again δ(z − y) is to be replaced by a Dirac comb if considered for z − y ∉ (0, L). Again, time and the d̃ spatial coordinates were omitted. Similar transforms apply to the other fields.
There is a crucial difference between eigenfunctions exp(ık_n y) and sin(q_n z), as the former conserves momenta in vertices, whereas the latter does not:

(1/L) ∫_0^L dy e^{ık_n y} e^{ık_m y} e^{ık_ℓ y} = δ_{n+m+ℓ,0} , (10a)

(2/L) ∫_0^L dz sin(q_n z) sin(q_m z) sin(q_ℓ z) = S_{nmℓ} , (10b)

with q_n = πn/L > 0, n ∈ N⁺ and k_n = 2πn/L, n ∈ Z (sign unconstrained) as introduced above, and S_{nmℓ} a coupling matrix that is non-zero only when n + m + ℓ is odd.
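The (non-)conservation of momenta is easy to verify by quadrature. The sketch below (ours; a plain midpoint rule with illustrative parameters) checks the orthonormality of the Dirichlet modes and evaluates the three-mode overlaps that enter the vertices:

```python
import math

L = 1.0
N = 20_000                      # midpoint-rule quadrature points
dz = L / N
zs = [(i + 0.5) * dz for i in range(N)]

def s(n, z):
    """Dirichlet mode sin(q_n z) with q_n = pi n / L."""
    return math.sin(math.pi * n * z / L)

def overlap(n, m):
    """(2/L) * integral_0^L sin(q_n z) sin(q_m z) dz."""
    return (2 / L) * dz * sum(s(n, z) * s(m, z) for z in zs)

def vertex(n, m, l):
    """(2/L) * integral_0^L of three sine modes -- not a Kronecker delta."""
    return (2 / L) * dz * sum(s(n, z) * s(m, z) * s(l, z) for z in zs)

o22, o23 = overlap(2, 2), overlap(2, 3)      # orthonormality: ~1 and ~0
v111, v112 = vertex(1, 1, 1), vertex(1, 1, 2)
print(o22, o23, v111, v112)
```

vertex(1, 1, 1) comes out as 8/(3π) ≈ 0.85, non-zero even though 1 + 1 + 1 ≠ 0: the sine modes couple whenever n + m + l is odd, in contrast to the strict Kronecker delta of the periodic modes.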
Having made convenient choices such as Eq. (5), we will carry on using the Fourier transforms of the bulk, Eq. (3), which are easily re-written for Dirichlet boundary conditions using Eq. (5), simply by replacing each integral over dk/(2π) by (2/L) Σ_n, and similarly for periodic boundary conditions, Eq. (8). Only the non-linearity, Sect. 3.3, is expected to require further careful analysis, as S_{nmℓ} of Eq. (10b) is structurally far more demanding than δ_{n+m+ℓ,0} of Eq. (10a).

Harmonic Part
Following the normal procedure [3, for example], the harmonic part L_0 of the Liouvillian reads

L_0 = φ̃ (−∂_t + D∇² − r) φ + ψ̃ (−∂_t − ε) ψ . (11)

The non-linear part L_1, Eq. (21), is discussed in Sect. 3.3. The harmonic part, L_0, describes the diffusive evolution of the density field of A particles, represented by φ and φ̃, which diffuse with diffusion constant D and become extinct spontaneously with rate r, as well as the evolution of immobile particles B, represented by densities ψ and ψ̃, which do not diffuse but become extinct with rate ε. After Fourier transforming and without further ado, the harmonic part of the path integral can be performed, producing the two bare propagators 1/(−ıω + Dk² + r) and 1/(−ıω + ε) for the two species. Below, we will refer to the propagator of the diffusive particles as the "activity propagator" and to the one for the immobile species as the "substrate propagator" (or "activity" and "substrate legs", respectively). As the propagation of the active particles is unaffected by the deposition of immobile particles, the activity propagator does not renormalise, ⟨φφ̃⟩ = ⟨φφ̃⟩_0. The same is true for the immobile species, which might be spawned by active particles, but once deposited remains inert, ⟨ψψ̃⟩ = ⟨ψψ̃⟩_0.
The Fourier transform Eq. (3) of the latter produces δ(x − x′)θ(t − t′) in the limit ε → 0, with θ(x) denoting the Heaviside θ-function, as one would expect (with x, t being the position and time of "probing" and x′, t′ the position and time of creation). At this stage, there is no interaction and no transmutation, ⟨ψ(k, ω)φ̃(k′, ω′)⟩ = 0. Diffusing particles A happily co-exist with immobile ones.
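The causal structure can be made explicit by undoing the temporal Fourier transform; with the sign convention used here, the pole sits at ω = −ıε, so closing the ω-contour in the lower half-plane for t > t′ gives:

```latex
\int \frac{\mathrm{d}\omega}{2\pi}\,
\frac{e^{-\imath\omega (t-t')}}{-\imath\omega + \varepsilon}
= \theta(t-t')\, e^{-\varepsilon (t-t')}
\;\xrightarrow{\;\varepsilon\to 0^+\;}\;
\theta(t-t') ,
```

i.e. a B particle, once deposited at t′, is found with unit weight at all later times; the factor e^{−ε(t−t′)} is the spontaneous extinction that regularises the infrared.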

Non-linearity
The harmonic part of the Liouvillian, L 0 , discussed in the preceding section covers the diffusive motion and spontaneous extinction of A particles (fields φ andφ) and the spontaneous extinction of the resting B particles (fields ψ andψ). In the following, we will discuss the nonlinear (interacting) part of the Liouvillian, L 1 , which introduces the spawning of B particles by the A particle, subject to the constraint of the finite carrying capacity, which establishes an effective interaction between previously deposited particles and any new particle to be deposited.
As discussed in Sect. 2.2, spawning is moderated down in the presence of B particles to γ(1 − n_B(x, t)/n_0). At the level of a master equation, this conditional deposition gives a non-linear contribution of

∂_t P(…, n_B(n), …; t) = γ Σ_n n_A(n) [ (1 − (n_B(n) − 1)/n_0) P(…, n_B(n) − 1, …; t) − (1 − n_B(n)/n_0) P(…, n_B(n), …; t) ] + … ,

where, for convenience, the problem is considered for individual lattice sites n which contain n_A = n_A(n) particles of species A and n_B particles of species B. The contributions by harmonic terms, namely diffusion of A particles and spontaneous extinction of both, as discussed in the previous section, have been omitted. The first term in the sum describes the creation of a B particle in the presence of n_B − 1 of those to make up n_B in total, the second term makes the B particle number exceed n_B, n_B → n_B + 1. If

|P(t)⟩ = Σ_{{n_A, n_B}} P({n_A, n_B}; t) Π_n (a†(n))^{n_A(n)} (b†(n))^{n_B(n)} |0⟩ ,

where the sum runs over all states of the entire lattice, then the conditional deposition produces the contribution

γ Σ_n (b†(n) − 1) (1 − b†(n)b(n)/n_0) a†(n) a(n) (15)

to the evolution operator, where we have used the commutator [b, b†] = 1, and the Doi-shifted creation operator, b† = b̃ + 1, as well as the particle number operator b†b. Although using Doi-shifted operators throughout gives rise to a rather confusing six nonlinear vertices, the resulting field theory does not turn out as messy as one may expect. However, we need to allow for different renormalisation, therefore introducing six different couplings below.
Replacing a† by 1 + ã in the first term of the sum generates the bilinear term b̃a, which we will parameterise in the following by τ, corresponding to a transmutation of an active particle to an immobile one. Transmutation is obviously spurious; it does not actually take place but will allow us in the Doi-shifted setup (and thus with the corresponding left vacuum [4,5]) to probe for substrate particles (using b) after creating an active one (using a†) without having to probe for the latter (using a). There is no advantage in moving that term to the bilinear part L_0, because the determinant of the bilinear matrix M is unaffected by τ ≠ 0 and therefore none of the propagators mentioned above change.
One may therefore treat all terms (including the bilinear transmutation) resulting from the interaction perturbatively, with transmutation present regardless of the carrying capacity n_0. At this stage it is worth noting the sign of τ (and σ below) as positive, i.e. the perturbative expansion will generate terms with pre-factors τ (and σ below). The only other non-linearity independent of the carrying capacity n_0 is the vertex b̃ãa (or ψ̃φ̃φ), in the following parameterised by the coupling constant σ, Eq. (18), and it can be thought of as spawning, rather than transmutation parameterised by τ. According to Eq. (15), there are four non-linearities with bare-level couplings of γ/n_0, generated by replacing the regular creation operators by their Doi-shifted counterparts. Each spawns at least one substrate particle, but more importantly, it also annihilates at least one substrate particle as it "probes for" its presence. The two simplest and most important (amputated) vertices are the ones introduced above with a "wriggly tail" added, ψ̃ψφ and ψ̃ψφ̃φ with couplings λ and κ respectively, Eq. (19). By mere inspection, it is clear that those two vertices can be strung together, renormalising the left one. In fact, κ is the one and only coupling that renormalises all non-linearities (σ, λ, κ, χ and ξ), including itself. Two more vertices, ψ̃²ψφ and ψ̃²ψφ̃φ with couplings χ and ξ, are generated, which become important only for higher order correlation functions of the substrate particles, because there is no vertex annihilating more than one of them; correlations between substrate particles are present but not relevant for the dynamics. Notably, there is no vertex that has more incoming than outgoing substrate legs. Finally, we note that the sign with which λ, κ, χ and ξ are generated in the perturbative expansion is negative. For completeness, we state the interaction part of the Liouvillian (see Eq. (11)),

L_1 = τ ψ̃φ + σ ψ̃φ̃φ − λ ψ̃ψφ − κ ψ̃ψφ̃φ − χ ψ̃²ψφ − ξ ψ̃²ψφ̃φ , (21)

with τ = σ = γ and λ = κ = χ = ξ = γ/n_0 at bare level.
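The bookkeeping above can be traced in one line. Writing the conditional deposition of Eq. (15) with b† = 1 + b̃ and a† = 1 + ã (our labelling of the couplings follows the text; the assignment of χ and ξ to the two ψ̃²-vertices is the one consistent with the stated signs and substrate legs):

```latex
\gamma\,(b^\dagger - 1)\Bigl(1 - \tfrac{b^\dagger b}{n_0}\Bigr) a^\dagger a
= \gamma\,\tilde b\, a^\dagger a
  - \tfrac{\gamma}{n_0}\,\bigl(\tilde b\, b + \tilde b^2 b\bigr)\, a^\dagger a
= \underbrace{\gamma\,\tilde b a}_{\tau}
+ \underbrace{\gamma\,\tilde b \tilde a a}_{\sigma}
- \tfrac{\gamma}{n_0}\Bigl(
  \underbrace{\tilde b b a}_{\lambda}
+ \underbrace{\tilde b b \tilde a a}_{\kappa}
+ \underbrace{\tilde b^2 b a}_{\chi}
+ \underbrace{\tilde b^2 b \tilde a a}_{\xi}\Bigr),
```

which makes the six vertices, their bare couplings (τ = σ = γ, the remaining four γ/n_0) and the signs stated above explicit.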

Dimensional Analysis
Determining the engineering dimensions of the couplings introduced above is part of the "usual drill" and will allow us to determine the upper critical dimension and to remove irrelevant couplings. Without dwelling on details, analysis of the harmonic part, Eq. (11), reveals that [D] = L²/T (as expected for a diffusion constant) and [r] = [ε] = 1/T (as expected for all extinction rates), with [x] = L, a length, and [t] = T, a time. In real time and real space, [φ̃φ] = [ψ̃ψ] = 1/L^d. Performing the Doi-shift in Eq. (15) first and introducing couplings for the non-linearities as outlined above allows for two further independent dimensions, say spawning [σ] = A and transmutation [τ] = B (both originally equal to the rate γ), which implies in particular [κ] = L^d/T, a rate per density, independently of A and B. As far as the field theory is concerned, the only constraint is to retain the diffusion constant on large scales, which implies T = L². As a result, the non-linear coupling κ (originally γ/n_0) becomes irrelevant in dimensions d > d_c, as expected, with upper critical dimension d_c = 2. The two independent engineering dimensions A and B will be used in the analysis below in order to maintain the existence of the associated processes of transmutation and spawning, which are expected to govern the tree level. If we were to argue that they become irrelevant above a certain upper critical dimension, the density of offspring and its correlations would necessarily vanish everywhere.2 Even though we may want to exploit the ambiguity in the engineering dimensions [17,28] in the scaling analysis (however, consistent with the results above), in the following section we will make explicit use of the Doi-shift when deriving observables, which means that both φ̃ and ψ̃ are dimensionless (in real space and time), [φ̃] = [ψ̃] = 1, which implies A = B = 1/T. As expected, τ is then a rate (namely the transmutation rate) and so is σ, [τ] = [σ] = 1/T.
Also not unexpectedly, the remaining four couplings all end up having the same engineering dimension, which is a rate per density (γ being the spawning rate and n 0 turning into a carrying capacity density as we take the continuum limit).
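The exponent bookkeeping above can be checked mechanically. The following sketch (our own consistency check, not part of the original analysis) solves for the exponents a and b that render the product r D^a κ^b dimensionless, recovering a = d/ε and b = −2/ε with ε = 2 − d, so that the exponents diverge precisely at the upper critical dimension d_c = 2:

```python
import sympy as sp

d, a, b = sp.symbols('d a b')

# Engineering dimensions as powers of (L, T):
# [r] = T^-1, [D] = L^2 T^-1, [kappa] = L^d T^-1 (a rate per density).
# Require r * D**a * kappa**b to be dimensionless.
eq_L = sp.Eq(2*a + d*b, 0)        # total power of L must vanish
eq_T = sp.Eq(-1 - a - b, 0)       # total power of T must vanish
sol = sp.solve([eq_L, eq_T], [a, b])

eps = 2 - d
assert sp.simplify(sol[a] - d/eps) == 0    # a = d/epsilon
assert sp.simplify(sol[b] + 2/eps) == 0    # b = -2/epsilon
```

The solution is unique, which is why a single dimensionless combination of r, D and κ controls the scaling analysis below.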

Observables at Tree Level: Bulk
The aim of the present work is to characterise the volume of the Wiener Sausage field-theoretically. As discussed in Sect. 2, this is done not in terms of an actual spatial volume, but rather in terms of the number of spawned immobile offspring. In this section, we define the relevant observables in terms of the fields introduced above. This is best done at tree level, presented in the following, before considering loops and the subsequent renormalisation. While the tree level is the theory valid above the upper critical dimension, it is equivalently the theory valid in the absence of any physical interaction, i.e. the theory of n₀ → ∞. We introduce the observables first in the presence of a mass r, which amounts to removing the particle after a time of 1/r on average.
If v⁽¹⁾ is the density of substrate particles at x in a particular realisation of the process at the end of the lifetime of the diffusive particle which started at x*, the volume of the Sausage is its spatial integral in the ensemble average, where ⟨•⟩ denotes the ensemble average and the dependence on x* drops out in the bulk. Alternatively (as done below), one may consider a distribution d(x*) of initial starting points x*, over which an additional expectation, denoted by an overline, •̄, has to be taken.
Higher moments require higher order correlation functions, where v⁽ⁿ⁾(x₁, . . . , xₙ; x*) denotes the n-point correlation function of the substrate particle density generated by a single diffusive particle started at x*, t₀. Given that b†(x)b(x) is the particle density operator, that correlation function is the expectation with only a single initial diffusive particle started at x*, t₀. The multiple limits on the right are needed so that we measure the deposition left behind by the active particle after its lifetime. As the present phenomenon is time-homogeneous, t₀ will not feature explicitly, but rather enter in the differences tᵢ − t₀, each of which diverges as the limits are taken. In principle, only a single limit is needed, t = t₁ = t₂ = . . . = tₙ → ∞, but as discussed below, equal times leave some ambiguity that can be avoided.
Replacing the creation operators by their Doi-shifted counterparts leaves us with four terms. Pure annihilation, ⟨ψ⟩, vanishes: it is the expected density of substrate particles in the vacuum, as no active particle has been created first. The mixed expectation vanishes as well, for θ(0) = 0 (effectively the Itō interpretation of the time derivatives, [27]), which is needed in order to make the Doi-Peliti approach meaningful before the field that is meant to re-create the particle annihilated by the operator corresponding to ψ is available. In fact, to contribute, any occurrence of ψ̃(x₁, t₁) requires an occurrence of ψ(x₂, t₂) with t₂ > t₁. What remains of Eq. (26) is therefore only a single term. Taking the Fourier transform of Eq. (17) reveals the general mechanism, provided g(ω) itself has no pole at the origin, as otherwise additional residues that survive the limit t → ∞ would have to be considered. In Eq. (28) the starting point of the walker still enters via k₀. If that "driving" is done with a distribution of initial starting points d(k₀), the resulting deposition is given by v⁽¹⁾, where the little circle on the right indicates the "driving" which "supplies" a certain momentum distribution. More specifically, an initial distribution d(x*) results in a deposition distributed according to v⁽¹⁾. In an infinite system, the position of the initial driving should not and will not enter; to calculate the volume of the Sausage, we will evaluate at k = 0. The same applies to the time at which the initial distribution of particles is created. In principle it would give rise to an additional factor of exp(−ıωt*), but we will evaluate at ω = 0. Evaluating at k = 0 in the bulk produces the volume integral over the offspring distribution, i.e. the expected volume V of the Sausage in the absence of a limiting carrying capacity, which corresponds to the naïve expectation of the (number) deposition rate τ multiplied by the survival time of the random walker, 1/r.
From this expression it is also clear that the "volume" calculated here is, as expected, dimensionless. Following similar arguments for n = 2, the relevant diagrams are those of Eq. (32), where the symbol represents ψ(x, t)ψ(x, t), a convolution in Fourier space, which in real space and time gives a δ-function, corresponding to an immobile particle deposited at t₀ and x₀, found later at time t₁ > t₀ and x₁ = x₀, and left there to be found again at time t₂ > t₁ and x₂ = x₁. The effect of taking the limits tᵢ → ∞ is the same as for the first moment, namely ωᵢ = 0, except that in diagrams containing the convolution, the result depends on the order in which the limits are taken. This can be seen in the factor θ(t₂ − t₁)θ(t₁ − t₀), as one naturally expects from this diagram: the first probing must occur after creation and the second one after the first. A diagram like the second in Eq. (32) does not carry a constraint like that.
Each of the diagrams on the right hand side of Eq. (32) appears twice, as the external fields can be attached in two different ways. When evaluating at k 1 = k 2 = 0 this would lead to the same (effective) combinatorial factor of 2 for both diagrams. However, taking the time limits in a particular order means that one labelling of the first diagram results in a vanishing contribution. The resulting combinatorial factors are therefore 1 for and 2 for , i.e.
again dimensionless. Given that τ = σ = γ initially, Eq. (15), the above may be written γ/r + 2γ²/r². Unsurprisingly, the moments correspond to those expected for a Poisson process with rate γ taking place during the exponentially distributed lifetime of the particle, subject to a Poisson process with rate r. The resulting moment generating function is simply M(x), with Vₙ = dⁿM(x)/dxⁿ|ₓ₌₀ reproducing all moments once τ = σ = γ.
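The Poissonian interpretation can be verified symbolically. In the sketch below (our own check; it assumes, as stated above, deposition as a Poisson process with rate γ during a lifetime T that is exponentially distributed with rate r), the conditional moments E[N|T] = γT and E[N²|T] = γT + (γT)² are averaged over the lifetime, reproducing V₁ = γ/r and V₂ = γ/r + 2γ²/r²:

```python
import sympy as sp

t = sp.Symbol('t', positive=True)
gamma_, r = sp.symbols('gamma r', positive=True)

life = r*sp.exp(-r*t)    # lifetime density, T ~ Exp(r)

# Poisson conditional moments: E[N|T] = gamma*T, E[N^2|T] = gamma*T + (gamma*T)^2
EN  = sp.integrate(gamma_*t * life, (t, 0, sp.oo))
EN2 = sp.integrate((gamma_*t + (gamma_*t)**2) * life, (t, 0, sp.oo))

assert sp.simplify(EN - gamma_/r) == 0
assert sp.simplify(EN2 - (gamma_/r + 2*gamma_**2/r**2)) == 0
```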
Carrying on with the diagrammatic expansion, higher order moments can be constructed correspondingly. At tree level (or n 0 → ∞ equivalently), there are no further vertices contributing. Determining v (n) (k 1 , . . . , k n ; k 0 ) is therefore merely a matter of adding substrate legs, , either by adding a convolution, , or by branching with coupling σ . For example, Upon taking the limits, effective combinatorial factors become 1, 3, 3 and 6 respectively, so that and similarly In general, the leading order behaviour in small r at tree level in the bulk is dominated by diagrams with the largest number of branches, i.e. the largest power of σ , like the right-most term in Eq. (37), so that which is essentially determined by the time the active particle survives.

Observables at Tree Level: Open Boundary Conditions
Nothing changes diagrammatically when considering the observables introduced above in systems with open boundary conditions along one axis. As n₀ → ∞ does not pose a constraint, it makes no difference whether the system is periodically closed (in d = 2 a finite cylinder) or infinitely extended (infinite slab) along the other axes; these directions simply do not matter for the observables studied, except when the diffusion constant enters. What makes the difference to the considerations in the bulk, Sect. 3.5, are open dimensions, in the following fixed to one, so that the number of infinite (or, at this stage equivalently, periodically closed) directions is d̄ = d − 1; in the following k, k′ ∈ R^d̄. While the diagrams obviously remain unchanged, their interpretation changes because of the orthogonality relations as stated in Eqs. (6b) and (10b) or, equivalently, the lack of momentum conservation due to the absence of translational invariance, which requires the propagators to be replaced accordingly. In the limit of large L this result recovers Eq. (31), which would be less surprising if L → ∞ simply restored the bulk. That is, however, not the case, because as the driving is uniform, some of it always takes place "close to" the open boundaries. However, open boundaries matter only up to a distance of √(D/r) from the boundaries, i.e. the fraction of walkers affected by the open boundaries is of the order √(D/r)/L. The limit r → 0 gives V = τL²/(12D), matching results for the average residence time of a random walker on a finite lattice with cylindrical boundary conditions using D = 1/(2d) [22]. Sticking with r → 0, calculating higher order moments for uniform driving is straightforward, although somewhat tedious. For example, the two diagrams contributing to v⁽²⁾ are Eqs. (43) and (44); summing over n, m, l ∈ {1, 3, 5, . . .} (as the driving is uniform and the sausage volume is an integral over the entire system) then produces Eq. (45). This may be compared to the known expressions for the moments of the number of distinct sites visited by a random walker within n moves [29, in particular Eq. (A.14)], which contain logarithms even in three dimensions, where the present tree level results are valid. This is, apparently, caused by constraining the length of the Sausage by limiting the number of moves, rather than by a Poissonian death rate. Performing the summations in Eq. (45) is straightforward, but messy and tedious. The relevant sums converge rather quickly, for the third moment producing (by summing numerically over 200 terms for each index) the amplitude quoted below. Just like in the bulk for small r, Eq. (39), the diagrams dominating large L are the tree-branch-like diagrams such as Eq. (44), with the highest power of σ, rather than those involving convolutions, Eq. (43). Each new branch produces a factor L², so that in general, as in Eq. (40), the moments are essentially determined by the time the particle stays on the lattice. Similar to the bulk, the lack of interaction allows the volume moments of the Sausage to be determined on the basis of the underlying Poisson process. In the case of homogeneous drive, the mth moment of the residence time t_r of a Brownian particle diffusing on an open interval of length L enters, and the moment generating function of the Poissonian deposition with rate γ is just M(z) = exp(−γ t_r (1 − exp(z))), so that V_m = dᵐM(z)/dzᵐ|_{z=0}, reproducing the results above and confirming, in particular, the high accuracy of the leading order term in L, as 17/280 = 0.06071428571428571428 . . ..
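The residence-time route lends itself to a symbolic check. The sketch below (our own, assuming absorbing boundaries at 0 and L and a uniformly distributed starting point) generates the exit-time moments from the standard hierarchy D u_m″ = −m u_{m−1} with u_m(0) = u_m(L) = 0 and u₀ = 1, recovering L²/(12D) for the first moment and the amplitude 17/3360 = (17/280)/12 for the third:

```python
import sympy as sp

x, L, D, c1 = sp.symbols('x L D c1', positive=True)

# Exit-time moments u_m(x) = E[t_r^m] from (0, L), started at x:
# D u_m'' = -m u_{m-1}, with u_m(0) = u_m(L) = 0 and u_0 = 1.
def next_moment(u_prev, m):
    rhs = -m*u_prev/D
    up = sp.integrate(rhs, x) + c1      # first integration constant
    u = sp.integrate(up, x)             # u(0) = 0 holds automatically
    cval = sp.solve(sp.Eq(u.subs(x, L), 0), c1)[0]
    return sp.expand(u.subs(c1, cval))

u1 = next_moment(sp.Integer(1), 1)      # x*(L - x)/(2*D)
u2 = next_moment(u1, 2)
u3 = next_moment(u2, 3)

# Average over a uniformly distributed starting point
avg = [sp.simplify(sp.integrate(u, (x, 0, L))/L) for u in (u1, u2, u3)]
assert sp.simplify(avg[0] - L**2/(12*D)) == 0
assert sp.simplify(avg[1] - L**4/(60*D**2)) == 0
assert sp.simplify(avg[2] - sp.Rational(17, 3360)*L**6/D**3) == 0
```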

Beyond Tree Level
Below d_c = 2 the additional vertices parameterised by λ, κ, χ and ξ, Eqs. (19) and (20) respectively, have to be taken into account. Because κ is the only vertex that has the same number of incoming and outgoing legs, it is immediately clear that its presence can, and in fact will, contribute to the renormalisation of all other vertices, but in particular of itself. Among the vertices introduced in Sect. 3.3, namely τ, σ, λ, κ, χ and ξ, none has an outgoing activity leg if it does not have an incoming activity leg, and all have at least as many outgoing substrate legs as they have incoming substrate legs. Apart from κ, each vertex has either more outgoing substrate legs than incoming ones or fewer outgoing activity legs than incoming ones. Combining them in any form will thus never result in a diagram contributing to the renormalisation of κ, which has one leg of each kind.
Combinations of other vertices give rise to "cross-production", say of χ by λξ, but none of these terms contains more than one loop without the involvement of κ. As for the generation of higher order vertices, it is clear that the number of outgoing substrate legs (on the left) can never be decreased by combining vertices, because within every vertex the number of outgoing substrate legs is at least that of incoming substrate legs. In particular, a vertex with more incoming than outgoing substrate legs does not exist. A vertex like that, combined, say, with σ to form a bubble renormalising the propagator, would suggest that the diffusive movement of active particles is affected by the presence of substrate particles. This is, by definition of the original problem, not the case.
Because no active particles are generated solely by a combination of substrate particles, none of the vertices has more outgoing than incoming activity legs. Denoting the tree level coupling of the proper vertex (with amputated legs) as above, and because diffusion is to be maintained, it follows that T = L²; yet, as indicated above, the dimensions of A and B are to some extent a matter of choice. Below d_c = 2, the dimensional analysis depends on the choice one makes for A and B. If they remain independent, then the only relevant vertices that are topologically possible are those with a ≤ 1, removing χ and ξ from the problem. However, it is entirely consistent (and, one may argue, even necessary) to assume A = B = T⁻¹, resulting in no constraint on a at all. Not only are vertices for all a therefore relevant; what is worse, they are all generated as one-particle irreducibles. For example, the reducible diagram contributing to v⁽²⁾ at tree level, Sect. 3.5, possesses, even at one loop, two one-particle irreducible counterparts in d < 2, contributing to the corresponding proper vertex. Such diagrams exist for all a, so, in principle, all these couplings have to be allowed for in the Liouvillian and all have to be renormalised in their own right. The good news is, however, that the Z-factor of κ (see below) contains all infinities of all couplings exactly once, i.e. the renormalisation of all couplings can be related to that of κ by a diagrammatic vertex identity, see Sect. 4.1.1.

Renormalisation
Without further ado, we will therefore carry on with renormalising κ only. As suggested in Eq. (52), this can be done to all orders, in a geometric sum. The one and only relevant integral is Eq. (56),⁶ where ε = 2 − d and we have indicated the total momentum k (i.e. the sum of the momenta delivered by the two incoming legs) and the total frequency ω going through it.⁷ This integral has the remarkable property that it is independent of k, because of the k-independence of the substrate propagator. While the latter conserves momentum in the bulk by virtue of δ̄(k + k′) in Eq. (12b), its amplitude does not depend on k. Even if there were renormalisation of the activity propagator, it would therefore not affect its k-dependence, i.e. η = 0, whereas its ω-dependence may be affected, i.e. z ≠ 2 would be possible. The expression ((r + −ıω)/D)^{1/2} can be identified as an inverse length; it is the infrared regularisation (or, more precisely, the normalisation point, R = 1, Eq. (74a)) that can, in the present case, be implemented either by considering finite time (ω ≠ 0), spontaneous extinction of activity (r > 0) or, notably, spontaneous extinction (evaporation) of substrate particles. In order to extract exponents, it is replaced by the arbitrary inverse length scale μ. We will return to the case μ = √(−ıω/D) in Sect. 4.2, e.g. Eq. (84). For the time being, the normalisation point is taken in the limit of vanishing masses and ω → 0. The renormalisation conditions are then as stated with reference to Eq. (55). ⁶ We have written the κ vertices explicitly, including the amputated legs. At this stage it is unimportant which coupling forms the loop, but this will change when we study infinite slabs in Sect. 4.3. ⁷ Here and in the following we obviously choose {kᵢ, ωᵢ} = {0, 0} in the renormalisation condition, where {0, 0} indicates that the vertex is evaluated at vanishing momenta and frequencies.
Defining Z = κ R /κ allows all renormalisation to be expressed in terms of Z , as detailed in Sect. 4.1.1.
Starting with only one loop, the renormalisation of κ, Eq. (52), follows. Introducing the dimensionless coupling g = κW/Γ(ε/2) with g_R = gZ gives Z = 1 − gΓ(ε/2), which may be approximated to one loop by Z = 1 − g_RΓ(ε/2). Keeping, however, all loops in Eq. (52), this last expression is no longer an approximation: if all terms in Eq. (52) are retained, Z becomes a geometric sum in g, incorporating all parquet diagrams [12]. The resulting β-function is β_g(g_R) = dg_R/d ln μ|_g, which obeys β_g = −εg_R − κWβ_g and therefore β_g = −εg_R(1 − Γ(ε/2)g_R). The last statement is exact to all orders; the non-trivial fixed point in ε > 0 is exactly g*_R = 1/Γ(ε/2) ≈ ε/2, which is where the Z-factor vanishes (as g diverges for small μ).
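The leading-order form of the fixed point can be checked by expanding the reciprocal Gamma function (a sympy sketch of ours): 1/Γ(ε/2) = ε/2 + γ_E ε²/4 + O(ε³), so g*_R = 1/Γ(ε/2) is indeed ε/2 to leading order.

```python
import sympy as sp

eps = sp.Symbol('epsilon', positive=True)

# Expand 1/Gamma(eps/2) about eps = 0; the function is entire, so the
# series starts at order eps.
series = sp.series(1/sp.gamma(eps/2), eps, 0, 3).removeO()

assert series.coeff(eps, 0) == 0
assert series.coeff(eps, 1) == sp.Rational(1, 2)                  # eps/2
assert sp.simplify(series.coeff(eps, 2) - sp.EulerGamma/4) == 0   # gamma_E*eps^2/4
```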

Ward-Takahashi and Vertex Identities
Different vertices, and therefore the renormalisation of different couplings, can be related to each other by Ward-Takahashi identities. These are usually constructed by considering global symmetries [36], such as the invariance of the Liouvillian under a rescaling of the fields [1], to be considered for small δ, which produces an identity on couplings involving an odd number of fields. The identities derived in the following are certainly consistent with Eq. (64), but derived at diagrammatic level. To start with, we reiterate that Eq. (52) contains all contributions (and to all orders) to the renormalised vertex κ. Repeating for σ and λ the diagrammatic, topological argument presented for κ after Eq. (52), it turns out that the diagrams contributing to their renormalisation are essentially identical to those contributing to κ, as shown in Eq. (51). Using the same notation as in Eq. (58), we note that κ_R = κZ implies σ_R = σZ and λ_R = λZ, i.e.
The renormalisation of the coupling τ breaks with that pattern, because the tree level contribution τ, Eq. (17), has higher order corrections which do not contain τ itself, but rather the combination λσ. However, at bare level, σ = τ and λ = κ, so that the pattern is restored in the present case. A different issue affects the renormalisation of χ and ξ. For example, the latter acquires contributions from any of the diagrams shown in Eq. (52) by "growing an outgoing substrate leg" on any of the κ vertices, whereas contributions generated by σ d/dr are UV finite and therefore dropped. Given that the diagrams of Eq. (68) are the only contributions to the renormalisation of ξ, it reads as stated, and correspondingly for the one-particle irreducible contributions to χ_R, where we have used χ − ξλ/κ = 0. From Sect. 4.1, it is straightforward to show this, and we can therefore summarise: in d < 2, the only proper vertices (with indices n, m, a, b) to consider are those with n = 1, b ≤ 1, m ≤ 1 and arbitrary a. The renormalisation of all of them can be traced back to that of the κ vertex. It is a matter of straightforward algebra to demonstrate this explicitly. As these couplings play no further rôle for the observables analysed henceforth, we spare the reader a detailed account.

Scaling
We are now in a position to determine the scaling of all couplings. For the time being, however, we will focus solely on calculating the first moment of the Sausage volume.
We have noted earlier (Sect. 4) that the governing non-linearity is κ, and we have already introduced the corresponding dimensionless, renormalised coupling g_R and found its fixed point value. Following the standard procedure [27], we define the finite, dimensionless, renormalised vertex functions, where {k, ω} denotes the entire set of momenta and frequency arguments and μ is an arbitrary inverse scale. In principle, there could be more bare couplings, and more are certainly generated, at least in principle, see Sect. 4.1.1. The vertex functions can immediately be related to their arguments via Eqs. (58) and (55), where the normalisation point is R = 1. The asymptotic solution of the Callan-Symanzik equation can be combined with the dimensional analysis of the renormalised vertex function to give, using z = 2 and Eq. (73), the scaling of the bare vertex functions ({k, ω}; D, r, τ, σ, λ, κ, χ, ξ). As far as scaling (but not amplitudes) is concerned, the tree level results apply to the right hand side as its mass r is finite, i.e.
If r⁻¹ is interpreted as the observation time t, the result V ∝ t^{d/2} in d < 2 (and V ∝ t in d > 2, Eq. (31)) recovers the earlier result in [2], including the logarithmic corrections expected at the upper critical dimension. Eqs. (80) and (81) are the first two key results for the field theory of the Wiener Sausage reported in the present work.
Fig. 3 The volume of the Wiener Sausage in one dimension is the length covered by the Brownian particle (the set of all points actually visited) plus the volume V₀ of the sphere the Brownian particle is dragging (indicated by the two rounded bumpers)
We will now further explore the results and their implications.
In d = 1, it is an exercise in complex analysis (albeit lengthy) to determine the amplitude of the first moment. To make contact with established results in the literature, we study the Sausage in one dimension after finite time t. Following the tree level results, Eqs. (27), (30) and (31), we now have Eq. (82), where the space integral is taken by setting k = 0 and the driving has been evaluated to d(0) = 1, see Eq. (30). The Z-factor is given by Eq. (60), but μ should be replaced by √(−ıω/D), as we will consider the double limit of the masses tending to zero, but at finite ω, which is the total frequency flowing through the diagram, Eq. (56). For small ω and therefore large t (which we are interested in), the resulting expression for d = 1 is dominated by 2√(−ıDω)/κ. Keeping only that term, the integral in Eq. (82) can be performed. On the lattice, i.e. before taking the continuum limit, sites have no volume and the ratio τ/κ is just the carrying capacity n₀. Setting that to unity one recovers, up to the additive volume mentioned above, see Fig. 3, the result in the continuum by Berezhkovskii, Makhnovskii and Suris [2, Eq. (10)], which coincides with the asymptote on the lattice [20,29]. Given the difference in the process and the course a field-theoretic treatment takes, in particular the continuum limit, one might argue that this is a mere coincidence. In fact, attempting a similar calculation for the amplitude of the second moment does not suggest that it can be recovered. As for higher moments of the volume, in addition to the two diagrams mentioned in Eq. (32), there is now also the diagram of Eq. (85), evaluating to χσ(3Z − 2Z² − 1)/κ. However, as above, the second moment is dominated in small r by the second, tree-like term in Eq. (32), which gives the leading order, as Z ∝ r^{ε/2}. Higher order moments follow that pattern, V_m ∝ Z^m, and dimensional consistency is maintained by the dimensionless product rD^{d/ε}κ^{−2/ε} entering Z, Eqs. (57), (59) and (60), for d < 2 with r = 1/t. Compared to Eq. (40), the diffusion constant is present again, as the coverage depends not only on the survival time (determined by r), but also on the area explored during that time.
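The √(Dt) growth of the one-dimensional Sausage can be illustrated by direct simulation. The sketch below (our own numerical check, not part of the derivation) measures the mean span of discretised Brownian paths with ⟨x²⟩ = 2Dt and compares it to the classical expectation E[max − min] = 4√(Dt/π); the bumper volume V₀ of Fig. 3 would enter as an additive constant on top of this.

```python
import numpy as np

rng = np.random.default_rng(1)
D, t, n_steps, n_paths = 0.5, 1.0, 2000, 5000
dt = t / n_steps

# Discretised Brownian paths with variance 2*D*t at time t
steps = rng.normal(0.0, np.sqrt(2*D*dt), size=(n_paths, n_steps))
paths = np.cumsum(steps, axis=1)

span = paths.max(axis=1) - paths.min(axis=1)
measured = span.mean()
exact = 4*np.sqrt(D*t/np.pi)    # expected span of Brownian motion

# Discretisation makes the measured span slightly smaller; a few per cent
# agreement is all this sketch claims.
assert abs(measured - exact)/exact < 0.05
```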

Infinite Slab
In the following, we study the renormalisation of the present field theory on an infinite slab, i.e. a lattice that is finite and open (Dirichlet boundary conditions) along one axis and infinite in d̄ = d − 1 orthogonal dimensions. The same setup was considered at tree level in Sect. 3.6. Again, there are no diagrammatic changes, yet the renormalisation procedure itself requires closer attention. Before carrying out the integration of the relevant loop, Eq. (56), we make a mild adjustment with respect to the sets of orthogonal functions that we use for the substrate and the activity. While the latter is subject to Dirichlet boundary conditions in the present case, naturally leading to the set of sin(qₙz) eigenfunctions introduced above, the former is not afflicted with such a constraint, i.e. in principle one may choose whatever set is most convenient and suitable.⁸ As general as that statement is, there are, however, some subtle implications; to start with, whatever representation is used in the harmonic part of the Hamiltonian must result in the integrand factorising, so that the path integral over the Gaussian can be performed. In the presence of transmutation, that couples the choice of the set for one species to that for the other. With a suitable choice, all propagators fulfil orthogonality relations and therefore conserve momentum, i.e. they are proportional to δₙ,ₘ (in case of the basis sin(qₙz)), δₙ,₋ₘ (basis exp(ıkₙz)) and/or δ(k + k′) (basis exp(ıkz)), which is obviously a welcome simplification of the diagrams and their corresponding integrals and sums.
This constraint can be relaxed by considering transmutation only perturbatively, i.e. removing it from the harmonic part. However, if different eigenfunctions are chosen for different species, transmutation vertices are no longer momentum conserving; if we choose, as we will below, sin(qₙz) for the basis of the activity and exp(ıkₘz) for that of the substrate, then the proper vertex of τ comes with ∫₀^L dz e^{ıkₘz} sin(qₙz) and a summation over the indices connecting from the sides, Eq. (17), i.e. a matrix-valued vertex, where m ∈ Z refers to the index of the eigenfunction used for the substrate and n ∈ N⁺ to that of the activity field. (Footnote 8: The existence of a 0-mode, i.e. the space integral, is one feature to consider.) The fact that this matrix has off-diagonal elements indicates that momentum conservation is broken. Obviously, in the presence of boundaries, translational invariance is always broken, but that does not necessarily result in a lack of momentum conservation in bare propagators, as it does here. However, it always results in a lack of momentum conservation in vertices with more than two legs, as only exponential eigenfunctions have the property that their products are eigenfunctions as well. If propagators renormalise through these vertices, they will eventually inherit the non-conservation, i.e. allowing them to have off-diagonal elements from the start will become a necessity in the process of renormalisation. While the transmutation vertex introduced above may appear unnecessarily messy, it does not renormalise and does not require much further attention. Rewriting the four-point vertex κ in terms of the two different sets of eigenfunctions, however, proves beneficial. Introducing the tensor Uₙ,ₘ,ₗ means that the relevant loop is Eq. (91). Contrary to Eq. (56), it is now of great importance to know with which couplings (here two κ couplings) this loop was formed, because different couplings require different "tensors", like Uₙ,ₘ₊ₖ,ₗ in the present case.
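The broken momentum conservation can be made concrete by evaluating the mixed-basis overlaps (a sympy sketch of ours; the specific mode indices are illustrative): the overlap of a periodic exponential with a Dirichlet sine is generically non-zero off the diagonal, while evaluating the sine basis at zero momentum yields 2L/(lπ), i.e. 2/qₗ, for odd l and zero otherwise.

```python
import sympy as sp

z, L = sp.symbols('z L', positive=True)
k_1 = 2*sp.pi/L     # lowest non-zero periodic (exp) mode
q_1 = sp.pi/L       # lowest Dirichlet (sin) mode

# Mixed-basis overlap of the kind entering the transmutation vertex:
overlap = sp.integrate(sp.exp(sp.I*k_1*z) * sp.sin(q_1*z), (z, 0, L))
assert sp.simplify(overlap + 2*L/(3*sp.pi)) == 0    # non-zero off-diagonal entry

# The sin basis integrated over the slab (zero-momentum evaluation):
for l in (1, 2, 3, 4):
    col = sp.integrate(sp.sin(sp.pi*l*z/L), (z, 0, L))
    expected = 2*L/(sp.pi*l) if l % 2 else 0        # 2/q_l for odd l, else 0
    assert sp.simplify(col - expected) == 0
```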
For example, the coupling σ comes with ∫₀^L dz sin(qₙz) exp(ıkₘz) sin(qₗz). The actual technical difficulty to overcome, however, is the possible renormalisation of Uₙ,ₘ,ₗ itself, as there is no guarantee that the right hand side of Eq. (91) is proportional to Uₙ,ₘ,ₗ. In other words, the sum Eq. (52) may be of the form κ(LUₙ,ₘ₊ₖ,ₗ + κW LU′ₙ,ₘ₊ₖ,ₗ + κ²W² LU″ₙ,ₘ₊ₖ,ₗ + · · · ), with U′ₙ,ₘ₊ₖ,ₗ ≠ Uₙ,ₘ₊ₖ,ₗ etc., rather than LUₙ,ₘ₊ₖ,ₗ κ(1 + κW + κ²W² + · · · ), which would spoil the renormalisation process.
Carrying on with that in mind, the integrals over ω and k are identical to the ones carried out in Eq. (56) and therefore straightforward. The summation over m is equally simple, because that index features only in U, and Eq. (9a) implies (1/L) Σₘ L² Uₙ,ₘ₂₋ₘ,ₙ′ Uₙ′,ₘ₊ₘ₁,ₗ = ∫₀^L dz sin(qₙz) e^{ıkₘ₂z} e^{ıkₘ₁z} sin(qₗz) sin²(qₙ′z). (92) Using that identity in Eq. (91) allows us to write the loop as an integral over sin(qₙz) e^{ıkₘ₂z} e^{ıkₘ₁z} sin(qₗz) against a remaining mode sum, and it is only that last sum that requires further investigation. In particular, if we were able to demonstrate that it is essentially independent of z, then the preceding integral becomes LUₙ,ₘ₁₊ₘ₂,ₗ and this contribution to the renormalisation of κUₙ,ₘ₁₊ₘ₂,ₗ is proportional to Uₙ,ₘ₁₊ₘ₂,ₗ. The remaining summation in Eq. (93) can be performed [35] to leading order in the small⁹ dimensionless quantity ρ = L²(r + −ıω)/(π²D), so that the leading order behaviour of Eq. (93) follows, to leading order in ε, where we have used Γ((3 − d)/2) = √π + O(ε), anticipating no singularities around d = 3. Approximating 2ζ(3 − d) ≈ Γ(ε/2), the Z-factor for the renormalisation of κ in a system with open boundaries in one dimension is therefore unchanged, cf. Eqs. (56) and (96), provided μ = π/L. Of course, that result holds only as long as ρ ≪ 1, in particular r ≪ D/L², i.e. sudden death by extinction is rare compared to death by reaching the boundary. In the case of more frequent deaths by extinction or, equivalently, taking the thermodynamic limit in the finite, open dimension, extinction is expected to take over eventually and the bulk results above apply, Sect. 4.2. Although there is an effective change of mechanism (bulk extinction versus reaching the edge), there is no dimensional crossover. ⁹ For ρ large, Σₙ′ (n′² + ρ)⁻¹ ≈ π/(2√ρ), and the open system recovers the results in the bulk, Eq. (56).
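The approximation 2ζ(3 − d) ≈ Γ(ε/2) with ε = 2 − d can be checked numerically at the level of the ε-pole (an mpmath sketch of ours): both expressions share the simple pole 2/ε, while their finite parts differ by 3γ_E, a difference of the kind absorbed into the arbitrary scale μ.

```python
import mpmath as mp

mp.mp.dps = 30
eps = mp.mpf('1e-8')

zeta_side = 2*mp.zeta(1 + eps)    # 2*zeta(3 - d) with d = 2 - eps
gamma_side = mp.gamma(eps/2)

pole = 2/eps
# Subtracting the common pole 2/eps leaves O(1) remainders ...
assert abs(zeta_side - pole) < 10
assert abs(gamma_side - pole) < 10
# ... whose difference is 3*EulerGamma + O(eps).
assert abs((zeta_side - gamma_side) - 3*mp.euler) < 1e-6
```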
The renormalisation of τ involves the κ-loops characterised above, as well as σ and λ, which, in principle, have to be considered separately; after all, the loop they form has a structure, , that deviates from the structure studied above, , Eq. (96).
In principle, there is (again) no guarantee that the diagrams contributing to the renormalisation of τ all have the same dependence on the external indices, i.e. whether they are all proportional to the matrix in Eq. (88). By definition, however, Eq. (90) applies: one leg is removed by evaluating at m₁ = 0 (see the diagram in Eq. (91)) and one by performing the summation. Applying this operation to all diagrams appearing in Eq. (52) produces all diagrams renormalising τ. Provided that σ = τ and λ = κ, the renormalisation of τ is therefore linear in that of κ and Eq. (67) remains valid, i.e. the renormalisation procedure outlined above for τ and κ remains intact. In principle, further attention is required for the renormalisation of higher order vertices, but as long as only (external) substrate legs are attached, their index can be absorbed into the sum of the indices of the substrate legs present: just like any external leg can take up momentum or frequency, such new legs shift the index used in the internal summation, such as the one in Eq. (91). This does not affect the renormalisation provided that it is done at vanishing external momenta, so that the external momenta do not move the poles of the propagators involved.
We conclude that all diagrammatic vertex identities of Sect. 4.1.1 remain unchanged. As for the scaling of the Sausage volume, comparing Eq. (96) to Eq. (56) and identifying μ = π/L or r = π²D/L² means that now V_m ∝ L^{md} for d < 2, compared to Eq. (87). Noticeably, compared to the tree level, Eq. (48), the diffusion constant is absent: in dimensions d < 2 each point is visited infinitely often, regardless of the diffusion constant. Even though the deposition in the present setup is Poissonian, what determines the volume of the Sausage is not the time it takes the active particles to drop off the lattice, ∝ L²/D, but the competition between deposition, parameterised by τ and σ, and its inhibition by κ. The scaling V_m ∝ L^{md} for d < 2 suggests that the Wiener Sausage is a "compact" d-dimensional object in dimensions d < 2, whereas V_m ∝ L^{2m} at tree level, d > 2, Sect. 3.6. The Wiener Sausage may therefore be seen as a two-dimensional object projected into a d-dimensional space.
The obvious interpretation of r = π²D/L² in Eq. (98) is that of π/L being the lowest mode in the denominator of the propagator, Eq. (41a), in the presence of open boundaries, compared to (effectively) √(r/D) at k = 0 in Eq. (12a). It is interesting to determine the amplitude of the scaling in L with one open boundary, not least in order to determine whether the finding of Eq. (84) being identical to the result known in the literature is a mere coincidence. Technically, the route to take differs from Eq. (42), because in Sect. 3.6 both substrate and activity were represented in the sin eigensystem. However, integrating over L (for uniform driving and in order to determine the volume) amounts to evaluating the matrix in Eq. (89) at p = 0, which gives 2/qₗ for odd l and 0 otherwise, and which reproduces Eq. (42) at r = 0, with τ replaced by τ_R. To determine τ_R = τZ we replace W in Eqs. (56), (59) and (60) by its counterpart on the slab, which for d = 1 reproduces the exact result (for uniform driving), easily confirmed from first principles. However, repeating the calculation for driving at the centre, x* = L/2, gives dₙ = (−1)^{(n−1)/2} for n odd and 0 otherwise, so that in d = 1, after some algebra, one obtains the amplitude 3/4, which is somewhat off the exact amplitude of ln(2) = 0.69314718 . . .. This is apparently due to the renormalisation of Uₙ,ₘ,ₗ in Eq. (96) being correct only up to O(ε⁰), but that problem may require further investigation.

Infinite Cylinder: Crossover
At tree level, Sect. 3.6, it makes no technical difference whether we study the Sausage on a finite cylinder or an infinite slab, because the relevant observables require integration in space, which amounts to evaluating at k_n = 0 or k = 0, resulting in the same expression, e.g. Eq. (31) in both cases. When including the interaction, however, it does matter whether the lattice studied is infinite in d − 1 dimensions or periodically closed. Clearly, a periodically closed axis has a 0-mode and therefore does not impose an effective cutoff in k. In that respect, periodic closure is identical to infinite extent, while physically it is not (just like at tree level). One may therefore wonder how periodic closure differs from infinite extent mathematically: How does a finite cylinder differ from an infinite strip? As a first step to assess the effect, we replace the open dimension (axis) by a periodically closed one. One may regard this as an unfortunate kludge: after all, what we are really interested in is a system that is finite in two dimensions, namely open in one and periodically closed in the other. However, if the aim is to study finite-size scaling in 2 − ε dimensions, then two finite dimensions are already too many.
However, the setup of an infinitely long (in d − 1 dimensions) periodically closed tube with circumference L does address the problem in question, namely the difference between k = 0 in an infinitely extended axis and k_n = 0 in a finite but periodically closed dimension. In addition, an infinite cylinder, compared to an infinite strip, has translational invariance restored in the periodic dimension, and therefore the vertices are dramatically simplified even for a finite system.
The physics of a d-dimensional system with one axis periodically closed is quite clear: At early times, or, equivalently, large extinction rates r ≫ D/L², the periodic closure is invisible and so the scaling is that of a d-dimensional (infinite) bulk system as described in Sect. 4.2, V_m ∝ r^{−md/2}. But when the walker starts to re-encounter, due to the periodic closure, sites visited earlier, this "dimension will saturate" and so for very small r it will display the scaling of an infinite d − 1-dimensional lattice.
Just like for the setup in Sect. 3.5, it is most convenient to study the system at small but finite extinction rate r. The integrals to be performed are identical to Eq. (91), but both sums have a pre-factor of 1/L, Eq. (8) (rather than one having 1/L and the other 2/L, Eq. (5)), and LU_{n,m,ℓ} has the much simpler Kronecker form ∫₀^L dz e^{ık_n z} e^{ık_m z} e^{ık_k z} e^{ık_ℓ z} = LŨ_{n,m+k,ℓ} = Lδ_{n+m+k+ℓ,0}.
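That the vertex collapses to a Kronecker δ is just the orthogonality of the periodic eigenfunctions. A minimal numerical sketch (assuming modes k_n = 2πn/L; the grid size N is an arbitrary choice, and the midpoint sum is exact for these modes):

```python
import numpy as np

def u_tilde(n, m, k, l, N=4096):
    """(1/L) * int_0^L exp(i(k_n + k_m + k_k + k_l) z) dz for periodic modes
    k_n = 2*pi*n/L, evaluated by a midpoint sum (exact here for |n+m+k+l| < N)."""
    q = n + m + k + l                      # only the total mode index matters
    z = (np.arange(N) + 0.5) / N           # z/L on a uniform midpoint grid
    return np.exp(2j * np.pi * q * z).mean()

# Nonzero only when the mode indices sum to zero, i.e. delta_{n+m+k+l,0}:
print(abs(u_tilde(1, 2, -3, 0)))   # index sum 0
print(abs(u_tilde(1, 2, 3, 0)))    # index sum 6
```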
Most importantly, the expression corresponding to Eq. (92) sees sin²(q_n z) replaced by unity, because the bare propagator corresponding to Eq. (41a) carries a factor Lδ_{n+m,0}, Eq. (7), rather than Lδ_{n,m}/2, Eq. (4), which results in the index n of one vertex Ũ pairing up with −n in the other. For easier comparison, we will keep LŨ_{n,m+k,ℓ} in the following. We thus have Eq. (104) (see Eq. (93)). Comparing Eq. (104) to Eqs. (93), (94) and (96) and re-arranging terms gives Eq. (105), with expansions for small and for large ρ̃ ∝ L²(r − ıω)/D. In Eq. (107a) the summand (n² + ρ̃)^{(d−3)/2} has to be evaluated at n = 0, producing ρ̃^{(d−3)/2}, which dominates the sum for d < 2 (even d < 3, but the series does not converge for 2 < d and is, in fact, not needed, as no IR divergences appear in d > 2) and ρ̃ → 0. The remaining terms can actually be evaluated at ρ̃ = 0, producing 2ζ(3 − d). The integral, which the (Riemann) sum converges to for large ρ̃, on the other hand, is strictly proportional to ρ̃^{(d−2)/2} and therefore much less divergent than the sum for ρ̃ → 0 and d < 2.
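The competition between the n = 0 term and the rest of the mode sum can be made concrete in d = 1, where (d − 3)/2 = −1 and the sum over (n² + ρ̃)^{(d−3)/2} is elementary. The following numerical sketch (with an arbitrary truncation nmax) confirms both limits:

```python
import numpy as np

def mode_sum(rho, d=1, nmax=2_000_000):
    """Sum over integer modes n of (n^2 + rho)^((d-3)/2), truncated at nmax."""
    n = np.arange(1, nmax, dtype=float)
    return rho ** ((d - 3) / 2) + 2.0 * np.sum((n * n + rho) ** ((d - 3) / 2))

zeta_2 = np.pi ** 2 / 6          # zeta(3 - d) at d = 1

# Small rho: the n = 0 term rho^{(d-3)/2} dominates, offset by 2*zeta(3 - d).
small = mode_sum(1e-4)
# Large rho: the sum approaches the corresponding integral, here pi * rho^{(d-2)/2}.
large = mode_sum(1e4)
```

In d = 1 the sum is exactly π coth(π√ρ̃)/√ρ̃, so both asymptotes can also be read off in closed form.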
Of the two regimes, ρ̃ ≫ 1 and ρ̃ ≪ 1, the former is more easily analysed. Setting −ıω = 0 for the time being, we notice that ρ̃ ∝ L²r suggests, somewhat counterintuitively, that large r, which shortens the lifetime of the walker, has the same effect as large L, which prolongs the time it takes the walker to explore the system. Both effects are, however, of the same nature: They prevent the walker from "feeling" the periodicity of the system. In that case, the walker displays bulk behaviour and, in fact, Eq. (106) is the same as Eq. (56).
The other regime, ρ̃ ≪ 1, is richer. At d < 2 and fixed L, Eq. (105) displays a crossover between the two additive terms on the right hand side. Stretching the expansion (107a) beyond its stated limits, for intermediate values of r or L, ρ̃ ≈ 1, the first term on the right hand side of Eq. (105) dominates and the scaling behaviour is that of an open infinite slab of linear extent L, Eq. (96). This is because at moderately large r (or, equally, short times t), the walker is not able to fully explore the infinitely extended directions. But rather than "falling off" as in the system with open boundaries, it starts crossing its own path due to the periodic boundary conditions, at which point the scaling like a d-dimensional bulk lattice (ρ̃ ≫ 1) ceases and turns into that of a d-dimensional open one (ρ̃ ≈ 1). The crossover can also be seen in Eq. (107a), which for d < 2 is dominated by 2ζ(3 − d) for large ρ̃ and by ρ̃^{(d−3)/2} for small ρ̃.
As r gets even smaller (or t increases), ρ̃ → 0, the scaling is dominated by the infinite dimensions, of which there are d̃ = d − 1, i.e. the scaling is that of a bulk system with d̃ dimensions as discussed in Sect. 4.1, in particular Eq. (56). In this setting, the walker explores an infinitely long, thin cylinder, which has effectively degenerated into an infinitely long line. While the (comparatively) small circumference of the cylinder remains accessible, it is fully explored very quickly compared to the progress in the infinite directions.
To emphasise the scaling of the last two regimes, one can re-write Eq. (105) as Eq. (108). Here, the first term displays the behaviour of the infinite slab discussed above (Sect. 4.3, Eq. (96), ζ(3 − d) ∝ 1/ε, but L/π there and L/(2π) here) and the second term that of a bulk system with d̃ dimensions, Eq. (56); the infrared singularity (r − ıω)^{−ε̃/2} is in fact accompanied by the corresponding ultraviolet singularity Γ(ε̃/2), exactly as if the space dimension were reduced from d to d̃ = d − 1.
The second term also reveals an additional factor 1/L compared to Eq. (56) (footnote 10). This expression determines the factor W, which enters the Z-factor inversely, Z ∝ Lr^{ε̃/2}, Eq. (60), i.e. in the present setting the Sausage volume scales like (τ/r)Lr^{ε̃/2} = τLr^{−d̃/2}. The scaling in t is found by replacing r by 1/t, or more precisely by ω and Fourier transforming according to Eq. (82), which results in the scaling V ∝ Lt^{1−ε̃/2} = Lt^{d̃/2}.

Summary and Discussion
Because the basic process analysed above is very well understood and has a long-standing history [2, 9, 14–16, 23, 24, 26, 30, 32], this work may not add much to the understanding of the process itself; its contribution is rather the field-theoretic re-formulation, which is particularly flexible and elegant. The price is a process that ultimately differs from the original model. In hindsight, the agreement of the original Wiener Sausage problem with the process used here to formulate the problem field-theoretically deserves further scrutiny. In the following, we first summarise our findings above with respect to the original Wiener Sausage problem, before discussing in further detail the field-theoretic insights.

Summary of Results in Relation to the Original Wiener Sausage
The original Wiener Sausage problem is concerned with the volume traced out by a finite sphere attached to a Brownian particle. In the present analysis, this has been replaced by a Brownian particle attempting to spawn immobile offspring at Poissonian rate σ . The attempt fails if such immobile particles are present already. On the lattice, this process amounts to a variant of the number of distinct sites visited [29].
Above, the field-theoretic treatment has been carried out perturbatively to one loop for dimensions d < 2 = d c , but it turns out that there are no higher order loops to be considered. In any dimension, by construction and as a matter of universality, the large time and space asymptotes of the original Wiener Sausage, the process on the lattice and the field theory are expected to coincide at least as far as exponents are concerned.
The tree level of the field theory describes the phenomenon without interaction, i.e. ignoring returns. The resulting observables are the asymptotes of the Wiener Sausage volume in dimensions above d = 2. The moments found in the bulk, Eqs. (31), (35), (38), (39) and generally (40), V_m ∝ m!τσ^{m−1}r^{−m}, coincide with those from the exact moment generating function, Eq. (36), of the process ignoring returns, obtained by probabilistic considerations.
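The return-free moments can be checked against a caricature of the process (a Monte Carlo sketch, not the full model: the walker survives for an Exp(r)-distributed time and deposits at Poisson rate σ, with the τ-dependent prefactor set aside), for which ⟨V^m⟩ → m!(σ/r)^m at leading order in σ/r:

```python
import numpy as np

rng = np.random.default_rng(1)
r, sigma, samples = 1.0, 100.0, 200_000

# Return-free caricature: exponential lifetime, Poissonian depositions.
lifetime = rng.exponential(1.0 / r, size=samples)
V = rng.poisson(sigma * lifetime)

m1 = V.mean()                        # ~ sigma / r
m2 = np.mean(V.astype(float) ** 2)   # ~ 2 * (sigma / r)**2 at leading order
```

The sub-leading corrections (e.g. ⟨V²⟩ = σ/r + 2(σ/r)² exactly in this caricature) are suppressed by powers of r/σ.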
In the infinite slab, the field theory still produces exact results (for the process ignoring returns), such as Eqs. (42) and (46), although higher moments are tedious to calculate in closed form, Eq. (47). Again, they are easily verified using generating functions, such as Eq. (49), which also confirms the general form Eq. (48), V_m ∝ τσ^{m−1}L^{2m}D^{−m}, determined field-theoretically.
Below two dimensions, infrared divergences occur in the perturbation theory, which need to be controlled by a finite extinction rate r (or a finite frequency ω). It turns out that all orders can be dealt with at once, because "parquet diagrams" [12] can be summed in a geometric (Dyson) sum, such as Eq. (52). We can therefore expect exact universal exponents of the asymptotes, whereas amplitudes are generally non-universal and can be affected by field-theoretically irrelevant terms. In the bulk, the asymptotes Eqs. (80), (81), (86) and generally (87), V_m ∝ m!τσ^{m−1}r^{−md/2}(D^{d/2}/κ)^m, reproduce the (leading order) exponents as known in the literature [2]. In one dimension, the first moment of the volume, Eq. (84), reproduces the asymptote (in large t) in the continuum [2] and on the lattice [20, 29]. Even the amplitude is reproduced correctly.
The bulk calculations can be modified to apply to the infinite slab, producing Eq. (98), V_m ∝ m!τσ^{m−1}(L/π)^{md}κ^{−m}. However, the renormalisation in this case is correct only to leading order in ε, as terms of order ε⁰, such as Eq. (95), were omitted (whereas in the bulk, the Z-factor was exact, Eqs. (83) or (60)). In one dimension, i.e. when the walker can explore only a finite interval, the amplitude of the first moment for uniformly distributed initial starting points, Eq. (100) at d = 1, coincides with the exact result, Eq. (101). However, placing the particle initially at the centre results in an amplitude, Eq. (102), that differs from the exact result.
Unless one is prepared to allow for a space-dependent κ (whose space dependence is in fact irrelevant in the field-theoretic sense), as suggested in Eq. (93) for the infinite slab, one cannot expect the resulting amplitudes to recover the exact results. That Eq. (101) does so nevertheless may be explained by the "averaging effect" of the uniform driving.
As alluded to above, the field-theoretic description of the Wiener Sausage is very elegant, not least because the diagrams have an immediate interpretation. For example, one diagram corresponds to a substrate particle deposited while the active particle is propagating. Correspondingly, another is the suppression of a deposition as the active particle encounters an earlier deposition: the active particle returns to a place it has been before. All loops can therefore be contracted along the (substrate) line to produce a trajectory, illustrating that the loop integrals calculated above in fact capture the probability of a walker to return: W ∝ ω^{−ε/2}, Eq. (59), which in the time domain gives t^{−d/2}.
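The t^{−d/2} return probability can be made explicit in d = 1, where it is exactly known for the simple random walk; the asymptote (πt/2)^{−1/2} then follows from Stirling's formula. A small self-contained check:

```python
import math

def p_return(t):
    """Probability that a simple 1d random walk is back at the origin after t steps."""
    if t % 2:
        return 0.0
    n = t // 2
    return math.comb(2 * n, n) / 4 ** n    # exact; ~ (pi * t / 2)**-0.5 for large t

# W in the time domain decays as t^{-d/2}; here d = 1:
for t in (100, 1000, 10000):
    print(t, p_return(t) * math.sqrt(math.pi * t / 2))
```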

Original Motivation
The present study was motivated by a number of "technicalities" which were encountered by one of us during the study of a more complicated field theory. The first issue, as mentioned in the introduction, was the "fermionic" or excluded-volume interaction. In a first step, that was generalised to an arbitrary carrying capacity n₀, whereby the deposition rate of immobile offspring varies smoothly with the occupation number until the carrying capacity is reached. It was argued above, Fig. 2, that the constraint to a finite but large carrying capacity n₀, which may be conceived as less brutal than setting n₀ = 1, can be understood as precisely the latter constraint, but on a more complicated lattice.
Even though the field theory was constructed in a straightforward fashion, the perturbative implementation of the constraint, namely by effectively discounting depositions that should not have happened in the first place, makes it look like a minor miracle that it produces the correct scaling (and even the correct amplitudes in some cases). We conclude that the present approach is perfectly suitable to implement excluded-volume constraints.
It is interesting to vary n₀ in the expressions obtained for the volume moments. At first it may not be obvious that, for example, the first volume moments in one dimension, Eqs. (84) and (101), are linear in n₀, because κ = τ/n₀, Eq. (22). Given that κ enters the mth moment V_m as κ^{−m}, Eqs. (87) and (98), the carrying capacity enters through κ ∝ 1/n₀ as n₀^m. Even though the carrying capacity enters smoothly into the deposition rate (or, equivalently, the suppression of the deposition), in dimensions d < 2 each site is visited infinitely often and is therefore "filled up to the maximum" with offspring particles, as if the carrying capacity were a hard cutoff (i.e. as if the deposition rate were constant until the occupation reaches the carrying capacity). The volume of each Sausage therefore increases by a factor n₀ in dimensions d < 2 and is independent of it (as κ does not enter) in d > 2.
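The n₀-dependence in d = 1 can be illustrated directly: walk once, cap the per-site deposition at the carrying capacity, and compare. Since nearly every visited site is visited many times, capping at n₀ multiplies the volume by (almost exactly) n₀. A minimal sketch with arbitrary parameters:

```python
import numpy as np

rng = np.random.default_rng(2)

def capped_volumes(steps, caps):
    """Walk once in d = 1 and count depositions V(n0) = sum_x min(visits(x), n0),
    i.e. one offspring per visit until the local occupation reaches n0."""
    x = np.concatenate(([0], np.cumsum(rng.choice((-1, 1), size=steps))))
    _, visits = np.unique(x, return_counts=True)
    return {n0: int(np.minimum(visits, n0).sum()) for n0 in caps}

v = capped_volumes(50_000, (1, 4))
ratio = v[4] / v[1]    # close to 4: each site is "filled up to the maximum"
```

The small deficit below exactly 4 comes from the handful of sites near the running extremes that are visited fewer than n₀ times.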
The second issue to be investigated was the presence of open boundaries. This is, obviously, not a new problem as far as field theory is concerned in general, but in the present case being able to change boundary conditions exploits the flexibility of the field-theoretical reformulation of the Wiener Sausage and allows us to probe results in a very instructive way.
It is often said that translational invariance corresponds to momentum conservation in k-space, but the present study highlights some subtleties. As far as bare propagators are concerned, open, periodic or, in fact, reflecting boundary conditions all allow them to be written with a Kronecker δ. In that sense, bare propagators do not lose momentum. Momentum, however, is generally not conserved in vertices, i.e. vertices with more than two legs do not come with a simple δ_{n+m+ℓ,0}, but rather in a form such as Eqs. (10c) or (90).
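The contrast can be checked numerically: with Dirichlet modes q_n = πn/L, the three-leg overlap ∫₀^L sin(q_n z)sin(q_m z)sin(q_ℓ z) dz is no Kronecker δ; it vanishes for n + m + ℓ even but is generically nonzero otherwise. A sketch with L = 1 (the grid size is an arbitrary choice):

```python
import numpy as np

def U(n, m, l, N=200_000):
    """int_0^1 sin(pi n z) sin(pi m z) sin(pi l z) dz by a midpoint sum."""
    z = (np.arange(N) + 0.5) / N
    return (np.sin(np.pi * n * z) * np.sin(np.pi * m * z) * np.sin(np.pi * l * z)).mean()

# Vanishes when n + m + l is even, but no delta_{n+m+l,0} in sight:
print(U(1, 1, 2), U(1, 2, 3))   # even index sums: zero
print(U(1, 1, 1), U(1, 1, 3))   # odd index sums: nonzero
```

For instance, U(1, 1, 1) = 4/(3π) even though the indices do not sum to zero.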
These more complicated expressions are present even at tree level, Eq. (46). This touches on an interesting feature, namely that non-linearities are present even in dimensions above the upper critical dimension: they have to be, as otherwise the tree level lacks a mechanism by which immobile offspring are deposited.
Below the upper critical dimension, the lack of momentum conservation has three major consequences. Firstly, each vertex comes with a summation, and so a loop formed of two vertices, Eq. (91), requires not only one summation "around the loop" but a second one, accounting for another index which is no longer fixed by momentum conservation. This is a technicality, but one that requires more, and potentially serious, computation. Secondly, and more seriously, the very structure of the vertex might change. For example, at bare level κ comes with a factor LU_{n,m+k,ℓ}, but that U_{n,m+k,ℓ} might change under renormalisation.
Finally, the third and probably most challenging consequence is the loss of momentum conservation in the propagator. While a lack of translational invariance may not be a problem at bare level, the presence of non-momentum-conserving vertices can render the propagators themselves non-momentum-conserving, provided the propagators renormalise at all (see the discussion after Eq. (89)), which they do not in the present case, as far as the two shown in Eq. (12a) are concerned. However, the two-point function parameterised by τ has every right to be called a propagator, and it does renormalise. Luckily, it never features within loops, so the complications arising from its new structure can be handled within observables and do not spoil the renormalisation process itself.
A consequence of the Dirichlet boundary conditions is the existence of a lowest, non-vanishing mode, q₁ = π/L, Eq. (98), which, in fact, turns out to play the rôle of the effective mass: just like the minimum of the inverse propagator, (−ıω + Dk² + r), the "gap", is r in the bulk, it is Dq₁² + r in the presence of Dirichlet boundary conditions, and thus does not vanish even when r = 0. This is a nice narrative, which is challenged, however, when periodic boundary conditions are applied. At tree level, when the interaction is switched off, periodic boundaries cannot be distinguished from an infinite system, and so we would evaluate an infinite and a periodic system at k = 0 and k_n = 0 respectively, producing exactly the same expectation (for exactly the right reason).
The situation is different beyond tree level. Periodic or open, the system is finite. However, periodic boundaries do not drain active particles, so the lowest wave number vanishes, k_n = 0. To control the infrared (in the infinite directions), a finite extinction rate r is necessary, which effectively competes with the system size L via ρ̃ ∝ L²r/D, Eqs. (105) and (106). If ρ̃ is large, bulk behaviour ∝ ρ̃^{−ε/2} is recovered, Eq. (106), as is the case in the open system (see footnote 9 before Eq. (94)). For moderately small values, ζ(3 − d) ∝ 1/ε dominates, Eq. (107a), a signature of a d-dimensional system with open boundaries, Eq. (96). In that case, scaling amplitudes are in fact ∝ L^{ε̃}, Eq. (108). However, the presence of the 0-mode allows for a different asymptote: as ρ̃ is lowered further, the bulk-like term governing the d − 1 = d̃ infinite dimensions takes over, ∝ L^{−1}((r − ıω)/D)^{−ε̃/2}. It is the appearance of that term, and only that term, which distinguishes periodic from open boundary conditions. So the narrative of "lowest wave number corresponds to mass" is essentially correct. In open systems, it dominates for all small masses. In periodic systems, the scaling of the lowest non-zero mode competes with that of a d − 1-dimensional bulk system due to the presence of a 0-mode in the periodic dimension, which asymptotically drops out.
The third point that was to be addressed in the present work concerned the special properties of the propagator of an immobile species. The fact that the propagator is, apart from δ(k + k′), Eq. (12b), independent of the momentum is physically relevant, as the particles deposited stay where they have been deposited, and so the walker has to truly return to a previous spot in order to interact. Also, deposited particles are not themselves subject to any boundary conditions; this is the reason for the ambiguity in the eigenfunctions that can be used for the fields of the substrate particles. If deposited particles were to "fall off" the lattice, the volume of the Sausage on a finite lattice could not be determined by taking the ω → 0 limit.
It is interesting to see what happens to the crucial integral Eq. (56) when the immobile propagator is changed to (−ıω + νk² + r′)⁻¹, writing ν for a substrate diffusion constant and r′ for a substrate extinction rate: at external momentum k = 0 the result is Eq. (56) with D replaced by D + ν. The integral thus remains essentially unchanged, just that the effective diffusion constant is adjusted, D → D + ν. A slightly bigger surprise is the fact that r′, the IR regulator of the substrate propagator, is just as good an IR regulator as r, the IR regulator of the activity propagator. The entire field theory, and thus all the physics discussed above, does not change when the "evaporation of walkers" is replaced by "evaporation of substrate particles". The stationarity in infinite systems is in both cases due to two completely different processes, which, however, have the same effect on the moments of the Sausage volume: if r is finite, then a walker eventually disappears, leaving behind the trace of substrate particles, which stay indefinitely; if r′ is finite, then stationarity is maintained as substrate particles disappear while new ones are produced by an ever-wandering walker.
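The mechanism behind D → D + ν is the internal frequency integral of the loop: writing a = Dk² + r for the activity and b = νk² + r′ for the substrate (r′ denoting a substrate extinction rate, an assumption matching the discussion above), one has ∫ dω′/2π [(a − ıω′)(b − ı(ω − ω′))]⁻¹ = (a + b − ıω)⁻¹, so masses and diffusion constants simply add. A numerical check at arbitrary values, mapping the real line to (−π/2, π/2):

```python
import numpy as np

a, b, w = 2.0, 3.0, 1.5          # a = D k^2 + r, b = nu k^2 + r', w = external omega
N = 400_000

# Substitute w' = tan(theta) and integrate by a midpoint rule.
theta = -np.pi / 2 + (np.arange(N) + 0.5) * np.pi / N
wp = np.tan(theta)
integrand = 1.0 / ((a - 1j * wp) * (b - 1j * (w - wp))) / np.cos(theta) ** 2
bubble = integrand.sum() * (np.pi / N) / (2 * np.pi)

target = 1.0 / (a + b - 1j * w)   # single propagator with masses and D, nu added
```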
Finally, the fourth issue to be highlighted in the present work was that of observables which are spatial integrals of densities. These observables have a number of interesting features. As far as space is concerned, eigenfunctions with a 0-mode immediately give access to integrals over all space. However, open boundaries force us to perform a summation (and an awkward looking one, too, say Eq. (42)).

Future Work
Two interesting extensions of the present work deserve brief highlighting. Firstly, the Wiener Sausage may be studied on networks: Given a network or an ensemble thereof, how many distinct sites are visited as a function of time? The key ingredient in the analysis is the lattice Laplacian, which provides the mathematical tool to describe the diffusive motion of the walker. The contributions k² and q_n² in the denominator of the propagator, Eqs. (12a) and (41a), are the squared eigenvalues of the Laplacian operator in the continuum and, in fact, of the lattice Laplacian, for, say, a square lattice. The integrals in k-space and, equivalently, sums like Eqs. (5) and (42) should be seen as integrating over all eigenvalues k², whose density in d dimensions is proportional to |k|^{d−1}. It is that d which determines the scaling in, say, V ∝ t^{d/2} for d < 2. In other words, if |k|^{d_s−1} is the density of eigenvalues (the density of states) of the lattice Laplacian, then the Wiener Sausage volume scales like t^{d_s/2} (and the probability of return like t^{−d_s/2}). Provided the propagator does not acquire an anomalous dimension, which could depend on d_s in a complicated way, the difference between a field theory on a regular lattice with dimension d and one on a complicated graph with spectral dimension d_s is captured by replacing d by d_s [10, p. 23]. We confirmed this finite-size scaling of the Wiener Sausage on four different fractal lattices. The second interesting extension is the addition of processes, such as branching of the walkers themselves. In that case they interact not only with their own past trace, but also with the traces of ancestors and successors. This field theory is primarily dominated by the branching ratio, say s, and λ, whereas κ, χ and ξ are irrelevant. Preliminary results suggest that d_c = 4 [see also 31] in this case and again V ∝ L^{2−ε}, this time, however, with ε = 4 − d. Higher moments seem to follow V_m ∝ L^{(m−1)d+2−ε} = L^{md−2}.
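The t^{d_s/2} growth law is easy to probe by direct simulation. The sketch below (a Monte Carlo with arbitrary parameters) measures the mean number of distinct sites visited by a simple walk in d = 1, where d_s = 1, and extracts the exponent from two times:

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_distinct(t, trials=400):
    """Mean number of distinct sites a simple 1d walk visits in t steps."""
    total = 0
    for _ in range(trials):
        x = np.cumsum(rng.choice((-1, 1), size=t))
        # visited sites form the interval [min, max] including the origin
        total += int(max(x.max(), 0)) - int(min(x.min(), 0)) + 1
    return total / trials

v1, v2 = mean_distinct(2_000), mean_distinct(8_000)
exponent = np.log(v2 / v1) / np.log(4.0)   # should be close to d_s / 2 = 1/2
```

On a fractal lattice the same measurement, with the walk generated by the graph's adjacency structure, would probe d_s directly.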
The latter result suggests that the dimension of the cluster formed of sites visited is that of the underlying lattice.