Symmetrization of exterior parabolic problems and probabilistic interpretation

We prove a comparison theorem for the averages of the solutions of two exterior parabolic problems, the second being the "symmetrization" of the first one, by approximating the Schwarz symmetrization by polarizations, as introduced in [4]. This comparison provides an alternative proof, based on PDEs, of the isoperimetric inequality for the Wiener sausage, which was proved in [14].


Introduction
In the present article we prove a comparison theorem for the average in space, at any time t, of the solutions of two parabolic exterior problems, the second being the "symmetrization" of the first one. In order to do so, we show that the average of the solution decreases under polarization, and, since the Schwarz symmetrization is a limit of compositions of polarizations, we carry the comparison to the limit. This technique was introduced in [4].
Our result is motivated by a problem in probability theory, namely the isoperimetric inequality for the Wiener sausage, which was proved in [14]. The problem is the following. If (w t ) t≥0 is a Wiener process in R d , one wants to minimize the expected volume of the set ∪ t≤T (w t + A), for T ≥ 0, over "all" subsets A of R d of a given measure. It was proved in [14] that the minimizer is the ball (the result holds in a more general setting; see Section 2 below). This was proved by first obtaining a similar result for random walks, using rearrangement inequalities of Brascamp-Lieb-Luttinger type on the sphere proved in [6]; the result for the Wiener process then follows by Donsker's theorem. It is known that the expected volume of the Wiener sausage up to time t can be expressed as the average over x ∈ R d of the probability that a Wiener process starting from x hits the set A by time t. It is also known that this collection of probabilities, as a function of (t, x), satisfies a parabolic equation on (0, T ) × R d \ A. For properties of these hitting times and applications to the Wiener sausage we refer the reader to [3] and references therein, and for the case of Riemannian manifolds we refer to [10]. Therefore, we provide an alternative proof of the isoperimetric inequality for the Wiener sausage, based on PDE techniques.
Comparison results between solutions of partial differential equations and solutions of their symmetrized counterparts were first proved in [15]. Since then much work has been done in this area, for elliptic and parabolic equations, and we refer the reader to [13], [12], [2], [4] and references therein. The equations considered in these works are posed on a bounded domain, with Dirichlet or Neumann boundary conditions. Our approach is based on the techniques introduced in [4].
Let us now introduce some notation that will be used frequently throughout the paper. We denote by R d the Euclidean space of dimension 1 ≤ d < ∞. For subsets A, B of R d we write A + B := {a + b : a ∈ A, b ∈ B}, and for x ∈ R d we write x + A := {x} + A. The open ball of radius ρ > 0 in R d will be denoted by B ρ . If A ⊂ R d is measurable, |A| will stand for the Lebesgue measure of A. Let H be a closed half-space. We will write σ H (x) and A H for the reflections of x ∈ R d and A ⊂ R d respectively, with respect to the hyperplane ∂H. We will write Ā and A° for the closure and the interior of A respectively. We will use the notation P H A for the polarization of A with respect to H, that is, P H A := ((A ∪ A H ) ∩ H) ∪ (A ∩ A H ). For a non-negative function u on R d we will write P H u for the polarization of u with respect to H, that is, P H u(x) := max{u(x), u(σ H (x))} for x ∈ H and P H u(x) := min{u(x), u(σ H (x))} for x ∉ H. We will denote by H the set of all half-spaces H such that 0 ∈ H. For positive functions f and g on R d and for H ∈ H, we will write f ⊳ H g if f (x) + f (σ H (x)) ≤ g(x) + g(σ H (x)) for a.e. x ∈ H. For a bounded set V ⊂ R d , we will denote by V * the closed, centered ball of volume |V |. For a positive function u on R d such that |{u > r}| < ∞ for all r > 0, we denote by u * its symmetric decreasing rearrangement. For an open set D ⊂ R d we denote by H 1 (D) the space of all u ∈ L 2 (D) whose distributional derivatives ∂ i u := ∂u/∂x i , i = 1, ..., d, lie in L 2 (D), equipped with the norm ‖u‖ H 1 (D) := ( ‖u‖ 2 L 2 (D) + Σ d i=1 ‖∂ i u‖ 2 L 2 (D) ) 1/2 . We will write H 1 0 (D) for the closure of C ∞ c (D) (the space of smooth, compactly supported real functions on D) in H 1 (D). We will also write H 1 (D) and H 1 0 (D) for the parabolic spaces L 2 ((0, T ); H 1 (D)) and L 2 ((0, T ); H 1 0 (D)) respectively, and we define H 1 (D) := H 1 (D) ∩ C([0, T ]; L 2 (D)) and H 1 0 (D) := H 1 0 (D) ∩ C([0, T ]; L 2 (D)); the intended space will always be clear from the context. The notation (·, ·) will be used for the inner product in L 2 (R d ). Also, the summation convention with respect to integer-valued repeated indices will be in use.
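Since the polarization P H plays a central role in what follows, a small numerical sketch may help fix ideas. The snippet below is our own illustration, not part of the paper's argument: the one-dimensional grid, the half-space H = {x ≤ c} with c ≥ 0, and the test function are ad hoc choices. It polarizes a function on a grid and checks two elementary properties: polarization preserves the integral, and the pair sums u(x) + u(σ H (x)) are unchanged, so that u ⊳ H P H u holds with equality.

```python
import numpy as np

def reflect(x, c):
    """Reflection sigma_H(x) across the boundary point c of H = {x <= c}."""
    return 2.0 * c - x

def polarize(u, x, c):
    """Polarization P_H u on a grid x (symmetric about c): the larger of
    u and its reflected values on H = {x <= c}, the smaller outside H."""
    ur = np.interp(reflect(x, c), x, u)  # u evaluated at the reflected points
    return np.where(x <= c, np.maximum(u, ur), np.minimum(u, ur))

# a half-space H = {x <= c} with 0 in H (c >= 0), and a bump centered outside H
c = 0.5
x = c + np.linspace(-3.0, 3.0, 301)   # grid chosen symmetric about c
u = np.exp(-((x - 2.0) ** 2))
pu = polarize(u, x, c)
```

Because the grid is symmetric about c, the reflection maps grid points to grid points, so the pointwise max/min exchange is exact and the discrete integral is preserved.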
The rest of the article is organized as follows. In Section 2 we state our main results. In Section 3 we prove a version of a parabolic maximum principle, and some continuity properties of the solution map with respect to the set A. These tools are then used in Section 4 in order to prove the main theorems.

Main results
Let (Ω, F , P) be a probability space carrying a standard Wiener process (w t ) t≥0 with values in R d , and let A be a compact subset of R d . For T ≥ 0, let us consider the expected volume of the Wiener sausage generated by A, that is, the quantity E |∪ t≤T (w t + A)|. In [14], the following theorem is proved.
The result in [14] is stated for open sets A, and the set A is allowed to depend on time. As mentioned above, this was proved by first obtaining a similar inequality for random walks, using rearrangement inequalities of Brascamp-Lieb-Luttinger type on the sphere proved in [6]; the inequality for the Wiener process then follows by Donsker's theorem.
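For intuition, the discrete inequality can be probed numerically. The following Monte Carlo sketch is an ad hoc illustration with our own choice of sets; it is not the argument of [14] or [6]. It compares the expected sausage volume of a simple symmetric random walk on Z generated by the spread-out set {0, 5} with that generated by the contiguous set {0, 1} of the same cardinality.

```python
import numpy as np

def expected_sausage_size(A, T, n_paths, rng):
    """Monte Carlo estimate of E|union_{t<=T}(w_t + A)| for a simple
    symmetric random walk (w_t) on Z started at w_0 = 0."""
    A = np.asarray(A)
    total = 0
    for _ in range(n_paths):
        steps = rng.choice([-1, 1], size=T)
        path = np.concatenate(([0], np.cumsum(steps)))
        # the sausage is the set of sites covered by the translates path + A
        total += np.unique((path[:, None] + A[None, :]).ravel()).size
    return total / n_paths

rng = np.random.default_rng(0)
spread = expected_sausage_size([0, 5], T=50, n_paths=2000, rng=rng)  # two points far apart
tight = expected_sausage_size([0, 1], T=50, n_paths=2000, rng=rng)   # contiguous, same cardinality
```

In one dimension the comparison is in fact pathwise: the range of the walk is an interval, so the sausage generated by the contiguous set is never larger.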
Let us now move to our main results and their connection with Theorem 2.1. They read as follows.
Theorem 2.3. Let u, v be the solutions of the problems Π(A, ψ) and Π(P H A, P H ψ), extended to 1 on A and P H A respectively. Then for all t ∈ [0, T ], we have v t ⊳ H u t .
where u t and v t are extended to 1 on A and A * respectively.
It is easy to check that E |∪ t≤T (w t + A)| = ∫ R d P(τ x A ≤ T ) dx, where τ x A := inf{t ≥ 0 : x + w t ∈ A} is the hitting time of A by the Wiener process started from x. It is also known that the unique solution of the problem Π(A, 0), extended to 1 on A, is given by u t (x) = P(τ x A ≤ t). Consequently, Theorem 2.1 follows from Theorem 2.4 by choosing ψ = 0, if |A| > 0. If |A| = 0, then (2.1) holds trivially.
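The probabilistic representation u t (x) = P(τ x A ≤ t) can be checked numerically in one dimension, where for A = [−a, a] the hitting probability from x > a is given in closed form by the reflection principle. The sketch below is an illustration with ad hoc parameters; the time discretisation can only miss crossings, so the Monte Carlo value slightly underestimates the true hitting probability.

```python
import numpy as np
from math import erf, sqrt

def hit_prob_mc(x, a, t, n_steps, n_paths, rng):
    """Monte Carlo estimate of u_t(x) = P(tau <= t), the probability that
    x + w hits A = [-a, a] by time t, via an Euler scheme for w."""
    dt = t / n_steps
    w = np.zeros(n_paths)
    hit = np.abs(x + w) <= a
    for _ in range(n_steps):
        w += rng.normal(scale=sqrt(dt), size=n_paths)
        hit |= np.abs(x + w) <= a
    return hit.mean()

def hit_prob_exact(x, a, t):
    """Reflection principle, valid for x > a:
    P(min_{s<=t}(x + w_s) <= a) = 2(1 - Phi((x - a)/sqrt(t)))."""
    z = (x - a) / sqrt(t)
    return 1.0 - erf(z / sqrt(2.0))

rng = np.random.default_rng(1)
p_mc = hit_prob_mc(x=2.0, a=1.0, t=1.0, n_steps=2000, n_paths=20000, rng=rng)
p_ex = hit_prob_exact(2.0, 1.0, 1.0)
```

Averaging such probabilities over the starting point x is exactly the quantity compared in Theorem 2.4.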
Remark 2.1. All of the arguments in the next sections can be repeated in exactly the same way if the operator (1/2)∆ is replaced by an operator of the form L t u := ∂ i (a ij t ∂ j u), where for i, j ∈ {1, ..., d}, a ij ∈ L ∞ ((0, T )), and there exists a constant κ > 0 such that κ|z| 2 ≤ a ij t z i z j (2.5) for almost all t ∈ [0, T ] and all z = (z 1 , ..., z d ) ∈ R d . Consequently, one can replace w t in Theorem 2.1 by "non-degenerate" stochastic integrals of the form y t = ∫ t 0 σ s dB s , where B is an m-dimensional Wiener process and σ is a measurable function from [0, T ] to the set of d × m matrices such that (σ t σ ⊤ t ) d i,j=1 satisfies (2.5).
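For a concrete matrix σ, the non-degeneracy condition of the remark can be verified by computing the smallest eigenvalue of σσ ⊤ : the bound κ|z| 2 ≤ z ⊤ (σσ ⊤ )z holds exactly when that eigenvalue is at least κ. A minimal sketch, with ad hoc example matrices of our own choosing:

```python
import numpy as np

def ellipticity_constant(sigma):
    """Largest kappa with kappa*|z|^2 <= z^T (sigma sigma^T) z for all z,
    i.e. the smallest eigenvalue of the symmetric matrix sigma sigma^T."""
    s = np.asarray(sigma, dtype=float)
    return float(np.linalg.eigvalsh(s @ s.T).min())

# a non-degenerate 2x2 diffusion coefficient
kappa = ellipticity_constant([[1.0, 0.5], [0.0, 2.0]])
# a rank-deficient sigma is degenerate: kappa = 0
kappa_deg = ellipticity_constant([[1.0, 0.0], [1.0, 0.0]])
```

For a time-dependent σ, (2.5) asks that this quantity be bounded below by a single κ > 0 uniformly in t.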

Auxiliary Results
In this section we prove some auxiliary results that we will need for the proofs of our main theorems. Namely, we present a version of the parabolic maximum principle for functions that are not necessarily continuous up to the parabolic boundary. The maximum principle is the main tool used to compare the solution of the problem Π(A, ψ) with its polarized version. The reason we need this version of the maximum principle is that P H A is not guaranteed to have any "good" properties, even if ∂A is of class C ∞ , and therefore one cannot expect the solution of Π(P H A, P H ψ) to be continuous up to the boundary. We also present some continuity properties of the solution map with respect to the set A, so that we can iterate Theorem 2.3 along a sequence of half-spaces and pass to the limit, in order to obtain Theorem 2.4.
In this section we consider a ij ∈ L ∞ ((0, T ) × R d ) for i, j = 1, ..., d, and we assume that there exists a constant κ > 0 such that for any z = (z 1 , ..., z d ) ∈ R d we have κ|z| 2 ≤ a ij t (x)z i z j for a.e. (t, x) ∈ [0, T ] × R d . We will denote K := max i,j ‖a ij ‖ L ∞ . For an open set Q ⊂ R d , let Ψ(Q) be the set of functions u ∈ H 1 (Q) such that for any φ ∈ C ∞ c (Q) for all t ∈ [0, T ]. Notice that, by the De Giorgi-Moser-Nash theorem, if u ∈ Ψ(Q) then u ∈ C((0, T ) × Q).
Let us also introduce the functions α r (s), β r (s) and γ r (s) on R, for r > 0, that will be needed in the next lemma. For all s ∈ R we have γ r (s) → 2I s>0 , β r (s) → 2s + and α r (s) → (s + ) 2 as r → 0. Moreover, for all s ∈ R and r > 0 the following inequalities hold: |γ r (s)| ≤ 2, |β r (s)| ≤ 2|s|, |α r (s)| ≤ s 2 .
Proof. Let us fix t ′ ∈ (0, T ), and let ζ ∈ C ∞ c (B 1 ) be a positive function with unit integral. For ε > 0 and δ > 0, set ζ ε (x) := ε −d ζ(x/ε) and M δ : for all t ∈ [t ′ , T ], where u ε := u * ζ ε . Let also g n ∈ C ∞ c (Q) with 0 ≤ g n ≤ 1, g n = 1 on Q 1/n , g n = 0 on Q \ Q 1/2n , and choose ε < 1/2n. We can then multiply the equation by g n , and by the chain rule we have By standard arguments (see e.g. [8]), letting ε → 0 leads to Let us also introduce the notation
We claim that there exists ρ > 0 such that dist(U δ t , ∂Q) > ρ for any t ∈ [t ′ , T ]. First, U δ t ⊂ Q, which means that dist(U δ t , ∂Q) > 0 (the sets being compact). If inf t∈[t ′ ,T ] dist(U δ t , ∂Q) = 0, we can find (s, y) ∈ [t ′ , T ] × ∂Q and a sequence (t n , x n ) with t n ∈ [t ′ , T ], x n ∈ U δ tn , such that (t n , x n ) → (s, y) as n → ∞. Then, by the definition of U δ tn , lim sup while by assumption we have that lim sup which is a contradiction, and the claim follows.
Going back to (3.7), for any n > 1/ρ, we have for all s ∈ [t ′ , T ] since ∂ i g n = 0 on Q 1/n and U δ s ⊂ Q 1/n by (3.8). The last term on the right hand side of (3.7) is treated similarly.
Therefore, letting n → ∞ and r → 0 in (3.7) gives The above inequality holds for any t ′ ∈ (0, T ], and therefore, letting t ′ ↓ 0 and using the continuity of u in L 2 (Q), we obtain for any t ∈ [0, T ]. Since δ > 0 was arbitrary, the lemma is proved.
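The proof above uses the approximating functions α r , β r , γ r , whose explicit formulas are not reproduced in this excerpt. One concrete choice consistent with all the stated properties (γ r → 2I s>0 , β r → 2s + , α r → (s + ) 2 as r → 0, with |γ r | ≤ 2, |β r | ≤ 2|s|, |α r | ≤ s 2 , and each function an antiderivative of the next) is sketched below; this is our assumption for illustration, not necessarily the authors' choice.

```python
import numpy as np

def gamma_r(s, r):
    # ramp from 0 to 2 over [0, r]; gamma_r -> 2 * I_{s>0} as r -> 0
    return 2.0 * np.clip(np.asarray(s, dtype=float) / r, 0.0, 1.0)

def beta_r(s, r):
    # antiderivative of gamma_r with beta_r(0) = 0; beta_r -> 2 s^+
    sp = np.clip(np.asarray(s, dtype=float), 0.0, None)
    return np.where(sp <= r, sp ** 2 / r, 2.0 * sp - r)

def alpha_r(s, r):
    # antiderivative of beta_r with alpha_r(0) = 0; alpha_r -> (s^+)^2
    sp = np.clip(np.asarray(s, dtype=float), 0.0, None)
    return np.where(sp <= r, sp ** 3 / (3.0 * r), sp ** 2 - r * sp + r ** 2 / 3.0)
```

The point of such a regularisation is that (s + ) 2 itself is only C 1 , while the chain-rule step in the proof needs smoother truncations with uniformly controlled derivatives.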
We now continue with the continuity properties of the solution map. Let us fix ξ ∈ L 2 (R d ) and f ∈ L 2 ((0, T ) × R d ). We will say that u solves the problem Π 0 (A, ξ, f ) if the corresponding weak formulation holds for all t ∈ [0, T ]. For n ∈ N, let ξ n ∈ L 2 (R d ), f n ∈ L 2 ((0, T ) × R d ), and let A n ⊂ R d be compact sets.
Lemma 3.2. Suppose Assumption 3.1 holds, and let u n and u be the solutions of the problems Π 0 (A n , ξ n , f n ) and Π 0 (A, ξ, f ) respectively. Let us extend u n and u to zero on A n and A respectively, and set C n := R d \ A n and C := R d \ A. Then (i) u n → u weakly in H 1 0 (C), and (ii) for each t ∈ [0, T ], u n t → u t weakly in L 2 (C), as n → ∞.
Proof. Clearly, for (i) it suffices to show that there exists a subsequence (u n k ) such that u n k → u weakly in H 1 0 (C). By standard estimates, there exists a constant N depending only on d, K, κ and T such that for all n Since u n is zero on A n , we can replace C n by C in the above inequality, to obtain that there exist a subsequence (u n k ) ∞ k=1 ⊂ H 1 0 (C) and a function v ∈ H 1 0 (C) such that u n k → v weakly in H 1 0 (C).
For φ ∈ C ∞ c (C), we have that for all k large enough, supp(φ) ⊂ C n k . Also, u n k solves Π 0 (A n k , ξ n k , f n k ), and therefore (3.10) holds, which, letting k → ∞, gives (3.11), which also holds for any φ ∈ H 1 0 (C), since C ∞ c (C) is dense in the latter. Hence v belongs to the space H 1 0 (C) (by Theorem 2.16 in [11], for example) and is a solution of Π 0 (A, ξ, f ). By the uniqueness of the solution we get u = v (as elements of H 1 0 (C)), and this proves (i).
Let us fix t ∈ [0, T ]. It suffices to show that there exists a subsequence (u n k t ) such that u n k t → u t weakly in L 2 (C) as k → ∞. Notice that, by (3.9), there exists a subsequence (u n k t ) which converges weakly to some v ′ ∈ L 2 (C). Again, for φ ∈ C ∞ c (C) and k large enough, (3.10) holds. As k → ∞, the right hand side of (3.10) converges to the right hand side of (3.11) (for our fixed t ∈ [0, T ]), which is equal to (u t , φ), while the left hand side of (3.10) converges to (v ′ , φ). Hence v ′ = u t on C, and since u n k t converges weakly in L 2 (C) to v ′ , the lemma is proved.
Corollary 3.3. Suppose that (i) and (iii) from Assumption 3.1 hold, and let u n and u be the solutions of the problems Π(A n , ψ n ) and Π(A, ψ). Set u n = 1 and u = 1 on A n and A respectively. Then for each t, u n t → u t weakly in L 2 (R d ) as n → ∞.
Proof. Let g ∈ C ∞ c (R d ) with g = 1 on a compact set B such that A 0 ⊂ B. Then u n − g and u − g solve the problems Π 0 (A n , ψ n − g, −(1/2)∆g) and Π 0 (A, ψ − g, −(1/2)∆g), and the result follows by Lemma 3.2.
For two compact subsets A 1 and A 2 of R d , we denote by d(A 1 , A 2 ) their Hausdorff distance, that is, d(A 1 , A 2 ) := max{ sup x∈A 1 dist(x, A 2 ), sup x∈A 2 dist(x, A 1 ) }. In Lemma 3.4 below we will need the following remark.
Remark 3.1. If u ∈ H 1 (R d ) and u = 0 a.e. on A, then u ∈ H 1 0 (R d \ A). To see this, suppose first that supp(u) ⊂ B R , where R is large enough so that A ⊂ B R . It follows that B R \ A is a Carathéodory set, and by Theorem 7.3 (ii), page 436 in [9], if u ∈ H 1 0 (B R ) and u = 0 a.e. on A, then u ∈ H 1 0 (B R \ A), and therefore u ∈ H 1 0 (R d \ A). For general u we can take ζ ∈ C ∞ c (R d ) such that 0 ≤ ζ ≤ 1 and ζ(x) = 1 for |x| ≤ 1, and set ζ n (x) := ζ(x/n). Then by the previous discussion ζ n u ∈ H 1 0 (R d \ A), and since ζ n u → u in H 1 (R d ), the claim follows.
Lemma 3.4. Suppose Assumption 3.2 holds, and let u n and u be the solutions of the problems Π 0 (A n , ξ n , f n ) and Π 0 (A, ξ, f ). Let us extend u n and u to 0 on A n and A respectively. Then, as n → ∞, (i) u n → u weakly in H 1 0 (R d ), and (ii) for any t ∈ [0, T ], u n t → u t weakly in L 2 (R d ).
Proof. As in the proof of Lemma 3.2, it suffices to find subsequences along which the corresponding convergences take place. By standard estimates, there exists a constant N depending only on d, κ, T and K such that for all n ∈ N, sup t≤T ‖u n t ‖ 2 (3.12) Therefore there exist a subsequence (u n k ) ∞ k=1 ⊂ H 1 0 (R d ) and a function v ∈ H 1 0 (R d ) such that u n k → v weakly in H 1 0 (R d ). For φ ∈ C ∞ c (R d \ A), since d(A, A n ) → 0 as n → ∞, we have that for all k large enough, supp(φ) ⊂ R d \ A n k . Also, u n k solves Π 0 (A n k , ξ n k , f n k ), and therefore (3.13) holds, which, letting k → ∞ and using the assumption together with (3.12), gives (3.14). Consequently, for almost all t ∈ (0, T ), v t = 0 for a.e. x ∈ A.
By virtue of Remark 3.1, we have that v t ∈ H 1 0 (R d \ A) for almost all t ∈ (0, T ), which combined with (3.14) implies that v ∈ H 1 0 (R d \ A) and that v is the unique solution of the problem Π 0 (A, ξ, f ). This proves (i).
Let us fix t ∈ [0, T ]. By (3.12) there exists a subsequence (u n k t ) that converges weakly to some v ′ ∈ L 2 (R d ). Again, for φ ∈ C ∞ c (R d \ A) and k large enough, we have that (3.13) holds. As k → ∞, the right hand side of (3.13) converges to the right hand side of (3.14), which is equal to (u t , φ), while, for our fixed t, the left hand side of (3.13) converges to (v ′ , φ).
as k → ∞. Therefore v ′ = 0 = u t on A. This shows that v ′ = u t on R d and the lemma is proved.
As with Lemma 3.2, we have the following corollary, whose proof is similar to the one of Corollary 3.3.
Corollary 3.5. Suppose that (i) and (iii) from Assumption 3.2 hold and let u n and u be the solutions of the problems Π(A n , ψ n ) and Π(A, ψ). Set u n = 1 and u = 1 on A n and A respectively. Then for each t, u n t → u t weakly in L 2 (R d ) as n → ∞.
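The Hausdorff distance used in the continuity results above can be sketched numerically for finite point clouds (a minimal illustration of the definition; genuine compact sets would of course require a discretisation):

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between finite point clouds in R^d:
    max( sup_{a in A} dist(a, B), sup_{b in B} dist(b, A) )."""
    A = np.atleast_2d(np.asarray(A, dtype=float))
    B = np.atleast_2d(np.asarray(B, dtype=float))
    # pairwise distance matrix, shape (len(A), len(B))
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return float(max(D.min(axis=1).max(), D.min(axis=0).max()))
```

Note that both suprema are needed: the one-sided quantity sup_a dist(a, B) alone is not symmetric and vanishes whenever A ⊂ B.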

Proofs of Theorems 2.3 and 2.4
Proof of Theorem 2.3. Let us assume for now that R d \ A has smooth boundary and that ψ is smooth and compactly supported. Under these extra conditions, u ∈ C ∞ ([0, T ] × (R d \ A)). Also, by the De Giorgi-Moser-Nash theorem, v is continuous in (0, T ) × (R d \ P H A).
First notice that 0 ≤ u, v ≤ 1. Let us extend u = 1 and v = 1 on A and P H A respectively, so that they are defined on the whole of R d , and for a function f let us use the notation f̃(x) := f (σ H (x)). Clearly it suffices to show that for each t ∈ (0, T ], w t := v t + ṽ t − u t − ũ t ≤ 0 for a.e. x ∈ H c . Suppose that the opposite holds, that is, (notice that the boundaries of A and A H have measure zero, since they are smooth). On Γ 1 , by definition, w t = 0 for any t ∈ [0, T ], and therefore (4.15) holds for some i ∈ {2, 3, 4}. Suppose it holds for i = 2. Since the