Symmetrization of exterior parabolic problems and probabilistic interpretation

Abstract

We prove a comparison theorem for the spatial mass of the solutions of two exterior parabolic problems, one of them having symmetrized geometry, using approximation of the Schwarz symmetrization by polarizations, as introduced in Brock and Solynin (Trans Am Math Soc 352(4):1759–1796, 2000). This comparison provides an alternative proof, based on PDEs, of the isoperimetric inequality for the Wiener sausage, which was proved in Peres and Sousi (Geom Funct Anal 22(4):1000–1014, 2012).

Keywords

Schwarz symmetrization · Polarization · Parabolic equations · Wiener sausage

Mathematics Subject Classification

35K20 · 35B51

1 Introduction

In the present article we prove a comparison theorem for the spatial mass, at any time t, for the solutions of two parabolic exterior problems, the second being the “symmetrization” of the first one. In order to do so, we show that the spatial mass of the solution decreases under polarization, and since the Schwarz symmetrization is the limit of compositions of polarizations, we carry the comparison to the limit. This technique was introduced in [4].

Our result is motivated by a problem in probability theory, namely the isoperimetric inequality for the Wiener sausage, which was proved in [15]. The problem is the following. If \((w_t)_{t\ge 0}\) is a Wiener process in \(\mathbb {R} ^d\), one wants to minimize the expected volume of the set \(\cup _{t \le T} (w_t+A)\), for \(T\ge 0\), over “all” subsets A of \(\mathbb {R}^d\) of a given measure. It was proved in [15] that the minimizer is the ball (the result there is stated in a more general setting; see Sect. 2 below). This was proved by first obtaining a similar result for random walks, using rearrangement inequalities of Brascamp-Lieb-Luttinger type on the sphere proved in [6], and then passing to the Wiener process via Donsker’s theorem. It is known that the expected volume of the Wiener sausage up to time t can be expressed as the integral over \(x\in \mathbb {R}^d\) of the probability that a Wiener process starting from \(x \in \mathbb {R}^d\) hits the set A by time t. It is also known that this collection of probabilities, as a function of \((t,x)\), satisfies a parabolic equation on \((0,T) \times \mathbb {R}^d{\setminus } A\). For properties of these hitting times and applications to the Wiener sausage we refer the reader to [3] and references therein, and for the case of Riemannian manifolds we refer to [11]. We therefore provide an alternative proof of the isoperimetric inequality for the Wiener sausage, based on PDE techniques.

Comparison results between solutions of partial differential equations and solutions of their symmetrized counterparts were first proved in [16]. Since then, much work has been done in this area, for elliptic and parabolic equations, and we refer the reader to [2, 4, 13, 14] and references therein. The equations considered in these works are posed on bounded domains, with Dirichlet or Neumann boundary conditions. Our approach is based on the techniques introduced in [4].

Let us now introduce some notation that will be frequently used throughout the paper. We denote by \(\mathbb {R}^d\) the Euclidean space of dimension \(1 \le d < \infty \). For A, B subsets of \(\mathbb {R}^d\), we write
$$\begin{aligned} A+B :=\left\{ z \in \mathbb {R}^d \ | \ z= x+y, \ x \in A, \ y \in B \right\} , \end{aligned}$$
and for \(x \in \mathbb {R}^d\) we write \(x+A:= \{x\}+A\). The open centered ball of radius \(\rho >0\) in \(\mathbb {R}^d\) will be denoted by \(B_\rho \). Let \(x \in \mathbb {R}^d\), let \(A \subset \mathbb {R}^d\), and let H be a closed half-space. If A is measurable, |A| will stand for the Lebesgue measure of A. We will write \(\sigma _H(x)\) and \(A_H\) for the reflections of x and A respectively, with respect to the hyperplane \(\partial H\). We will write \(\overline{A}\) and \(\underline{A}\) for the closure and the interior of A respectively. We will use the notation \(P_HA\) for the polarization of A with respect to H, that is
$$\begin{aligned} P_HA:= \Big ( \left( A \cup A_H \right) \cap H \Big ) \cup \Big (A \cap A_H \Big ). \end{aligned}$$
For a non-negative function u on \(\mathbb {R}^d\) we will write \(P_H u\) for the polarization of u with respect to H, that is
$$\begin{aligned} P_H u (x) = {\left\{ \begin{array}{ll} \max \{u(x), u(\sigma _H(x))\} , &{} \text {if} \ x \in H\\ \min \{u(x), u(\sigma _H(x))\} , &{} \text {if} \ x \in H^c .\\ \end{array}\right. } \end{aligned}$$
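For concreteness, the two polarizations above can be sketched numerically. The snippet below is not from the paper: it fixes the half-space \(H=\{x : x_1 \ge 0\}\) and uses finite point sets as a discrete stand-in for subsets of \(\mathbb {R}^d\).

```python
import math

# Sketch (illustrative, not from the paper): polarization with respect to the
# fixed half-space H = {x : x[0] >= 0}, so sigma_H reflects in the hyperplane x[0] = 0.
def sigma_H(x):
    return (-x[0],) + tuple(x[1:])

def polarize_set(A):
    """Polarization P_H A of a finite point set: ((A ∪ A_H) ∩ H) ∪ (A ∩ A_H)."""
    A = set(A)
    A_H = {sigma_H(a) for a in A}          # the reflected set
    return {x for x in A | A_H if x[0] >= 0} | (A & A_H)

def polarize_fn(u, x):
    """Polarization P_H u at x: max of the two reflected values on H, min on H^c."""
    a, b = u(x), u(sigma_H(x))
    return max(a, b) if x[0] >= 0 else min(a, b)
```

In this discrete model \(P_HA\) has the same cardinality as A, mirroring the fact that polarization preserves Lebesgue measure.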
We will denote by \(\mathcal {H}\) the set of all half-spaces H such that \(0 \in H\). For positive functions f and g on \(\mathbb {R}^d\) and for \(H \in \mathcal {H}\), we will write \(f \lhd _H g\), if \(f(x)+f(\sigma _H(x) ) \le g(x)+g(\sigma _H(x) )\) for a.e. \(x\in H\). For a bounded set \(V \subset \mathbb {R}^d\), we will denote by \(V^*\) the closed, centered ball of volume |V|. For a positive function u on \(\mathbb {R}^d\) such that \(|\{u> r \}|< \infty \) for all \(r >0\), we denote by \(u^*\) its symmetric decreasing rearrangement. For an open set \(D \subset \mathbb {R}^d\) we denote by \(H^1(D)\) the space of all functions \(u \in L_2(D)\) whose distributional derivatives \(\partial _i u:=\frac{\partial }{\partial x_i}u\), \(i=1,\dots ,d\), lie in \(L_2(D)\), equipped with the norm
$$\begin{aligned} \Vert u\Vert ^2_{H^1} = \Vert u\Vert ^2_{L_2}+ \sum _{i=1}^d\Vert \partial _i u\Vert ^2_{L_2}. \end{aligned}$$
We will write \(H_0^1(D)\) for the closure of \(C^\infty _c(D)\) (the space of smooth, compactly supported real functions on D) in \(H^1(D)\). We will write \(\mathbb {H}^1(D)\) and \(\mathbb {H}^1_0(D)\) for \(L_2((0,T);H^1(D))\) and \(L_2((0,T);H^1_0(D))\) respectively. Also, we define \(\mathscr {H}^1(D):=\mathbb {H}^1(D) \cap C([0,T];L_2(D))\) and \(\mathscr {H}^1_0(D):=\mathbb {H}^1_0(D) \cap C([0,T];L_2(D))\). The notation \((\cdot ,\cdot )\) will be used for the inner product in \(L_2(\mathbb {R}^d)\). Also, the summation convention with respect to integer valued repeated indices will be in use.
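The symmetric decreasing rearrangement \(u^*\) admits a simple discrete stand-in on a one-dimensional grid: sort the values of u in decreasing order and redistribute them from the origin outwards. The sketch below is only illustrative; the grid and the tie-breaking rule are our own choices, not the paper's.

```python
def schwarz_rearrange_1d(x, u):
    """Discrete stand-in for the symmetric decreasing rearrangement u*:
    the largest value of u is moved to the grid point nearest the origin,
    the next largest to the next nearest, and so on (ties kept in grid order)."""
    order = sorted(range(len(x)), key=lambda i: abs(x[i]))  # points by distance to 0
    vals = sorted(u, reverse=True)                          # values, largest first
    ustar = [0.0] * len(u)
    for i, v in zip(order, vals):
        ustar[i] = v
    return ustar
```

By construction the rearranged values form the same multiset as the original ones (the discrete analogue of equimeasurability), and they decrease as one moves away from the origin.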

The rest of the article is organized as follows. In Sect. 2 we state our main results. In Sect. 3 we prove a version of the parabolic maximum principle, and some continuity properties of the solution map with respect to the set A. These tools are then used in Sect. 4 in order to prove the main theorems.

2 Main results

Let \((\varOmega , \mathscr {F}, \mathbb {P})\) be a probability space carrying a standard Wiener process \((w_t)_{t \ge 0}\) with values in \(\mathbb {R}^d\), and let A be a compact subset of \(\mathbb {R}^d\). For \(T \ge 0\), let us consider the expected volume of the Wiener sausage generated by A, that is, the quantity \(\mathbb {E}\left| \cup _{t \le T} \left( w_t+A\right) \right| \). In [15], the following theorem is proved.

Theorem 1

For any \(T \ge 0\) we have
$$\begin{aligned} \mathbb {E}\left| \cup _{t \le T} \left( w_t+A^*\right) \right| \le \mathbb {E}\left| \cup _{t \le T} \left( w_t+A\right) \right| . \end{aligned}$$
(1)

The result in [15] is stated for open sets A, and the set A is allowed to depend on time. As mentioned above, it was proved by first obtaining a similar inequality for random walks, using rearrangement inequalities of Brascamp-Lieb-Luttinger type on the sphere proved in [6], and then passing to the Wiener process via Donsker’s theorem.
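As an illustration of the quantity being minimized, the following Monte Carlo sketch (illustrative parameters; not from [15] or the present paper) estimates the area of the planar sausage \(\cup _{t \le T}(w_t+A)\) for a single sampled path, with A a disc of radius 1, by counting grid cells whose centre lies in the sausage.

```python
import random, math

# Illustrative sketch: area of the sausage generated by one discretised
# Brownian path in R^2, counted on a grid of cell side h inside [-L, L]^2.
rng = random.Random(0)

def sausage_area(path, radius=1.0, h=0.25, L=6.0):
    cells, n = 0, int(2 * L / h)
    for i in range(n):
        for j in range(n):
            cx, cy = -L + (i + 0.5) * h, -L + (j + 0.5) * h
            if any((cx - px) ** 2 + (cy - py) ** 2 <= radius ** 2 for px, py in path):
                cells += 1
    return cells * h * h

# one discretised Brownian path: independent N(0, dt) increments per coordinate
dt, steps, path, x, y = 0.01, 200, [], 0.0, 0.0
for _ in range(steps):
    x += rng.gauss(0.0, math.sqrt(dt)); y += rng.gauss(0.0, math.sqrt(dt))
    path.append((x, y))
vols = [sausage_area(path[:k]) for k in (1, 50, 200)]
```

Averaging `vols` over many independent paths would estimate the expected volume appearing in Theorem 1; for a single path the area is, by construction, nondecreasing in T.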

Let us now move to our main result, and see the connection with Theorem 1. For a compact set \(A \subset \mathbb {R}^d \), and for \(\psi \in L_2(\mathbb {R}^d{\setminus } A)\), let us denote by \(\varPi (A,\psi )\) the problem
$$\begin{aligned} \left\{ \begin{array}{ll} dv_t = \frac{1}{2}\Delta v_t \ dt &{} \text{ in } (0,T) \times \mathbb {R}^d{\setminus } A;\\ v_t(x)=1 &{} \text{ on } [0,T] \times \partial A; \\ v_0(x) = \psi (x) &{} \text{ in } \mathbb {R}^d{\setminus } A \end{array} \right. \end{aligned}$$
(2)
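For intuition about the exterior problem (2), here is an explicit finite-difference sketch of \(\varPi (A,0)\) in dimension one with \(A=[-1,1]\); the truncation of the exterior to a bounded interval and the step sizes are our own illustrative choices, not part of the paper's arguments. By symmetry it suffices to solve \(\partial _t u = \tfrac{1}{2}u_{xx}\) on (1, L), with \(u(t,1)=1\), \(u(0,x)=0\) and the artificial condition \(u(t,L)=0\).

```python
# Hypothetical discretisation of Π(A, 0) in d = 1 with A = [-1, 1]:
# explicit Euler in time, second differences in space, on the half-line x > 1.
def solve_exterior(L=8.0, dx=0.05, dt=0.001, T=0.5):
    m = int((L - 1.0) / dx) + 1
    u = [0.0] * m
    u[0] = 1.0                       # Dirichlet datum 1 on the boundary of A
    lam = dt / (2 * dx * dx)         # scheme is monotone since 1 - 2*lam >= 0
    snaps = []
    for _ in range(int(T / dt)):
        u = [1.0] + [u[i] + lam * (u[i + 1] - 2 * u[i] + u[i - 1])
                     for i in range(1, m - 1)] + [0.0]
        snaps.append(list(u))
    return snaps
```

Because the scheme is monotone and the data lie in [0, 1], the discrete solution obeys the maximum principle and increases in t, consistent with the probabilistic interpretation \(u_t(x)=\mathbb {P}(\tau ^x_A \le t)\) discussed below.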

Definition 1

We will say that u is a solution of the problem \(\varPi (A, \psi )\) if
  1. (i)

    \(u \in \mathscr {H}^1(\mathbb {R}^d {\setminus } A)\),

     
  2. (ii)
    for each \(\phi \in C^\infty _c(\mathbb {R}^d {\setminus } A)\),
    $$\begin{aligned} (u_t,\phi )=(\psi , \phi )-\int _0^t\frac{1}{2}(\partial _iu_s , \partial _i \phi ) \ ds, \end{aligned}$$
    for all \(t\in [0,T]\)
     
  3. (iii)

    \(u-\xi \in \mathbb {H}^1_0(\mathbb {R}^d{\setminus } A)\), for any \(\xi \in H^1_0(\mathbb {R}^d)\) with \(\xi =1\) on a compact set \(A'\), \(A \subset \underline{A'}\).

     

The following result is well known.

Theorem 2

There exists a unique solution of the problem \(\varPi (A,\psi )\).

If \(\psi \in L_2(\mathbb {R}^d)\), then by \(\varPi (A, \psi )\) we mean \(\varPi (A, \psi |_{\mathbb {R}^d {\setminus } A})\). Our two main results read as follows.

Theorem 3

Let \(\psi \in L_2(\mathbb {R}^d)\) with \(0\le \psi \le 1\), \(\psi =1\) on A. Let u, v be the solutions of the problems \(\varPi (A,\psi )\) and \(\varPi (P_HA,P_H\psi )\), extended to 1 on A and \(P_HA\) respectively. Then for all \(t \in [0,T]\), we have \( v_t \lhd _H u_t. \)

Theorem 4

Let \(\psi \in L_2(\mathbb {R}^d)\) with \(0 \le \psi \le 1\), and \(\psi =1\) on A. Suppose that \(|A|>0\). Let u, v be the solutions of the problems \(\varPi (A, \psi )\) and \(\varPi (A^*, \psi ^*)\) respectively. Then for any \(t \in [0,T]\) we have
$$\begin{aligned} \int _{\mathbb {R}^d} v_t \ dx \le \int _{\mathbb {R}^d} u_t \ dx , \end{aligned}$$
(3)
where \(u_t\) and \(v_t\) are extended to 1 on A and \(A^*\) respectively.
It is easy to check that
$$\begin{aligned} \mathbb {E}\left| \cup _{t \le T} \left( w_t+A\right) \right| = \int _{\mathbb {R}^d} \mathbb {P}( \tau _A^x \le T) \ dx, \end{aligned}$$
where
$$\begin{aligned} \tau _A^x:=\inf \{ t \ge 0 : x+ w_t \in A \}. \end{aligned}$$
It is also known that the unique solution of the problem \(\varPi (A,0)\) is given by
$$\begin{aligned} u_t(x)= \mathbb {P}(\tau ^x_A \le t). \end{aligned}$$
(4)
Consequently Theorem 1 follows from Theorem 4 by choosing \(\psi =0\) on \(\mathbb {R}^d {\setminus } A\), if \(|A|>0\). If \(|A|=0\) then (1) trivially holds.
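Formula (4) can be illustrated by a direct Monte Carlo sketch; the discretisation parameters below are our own illustrative choices.

```python
import random, math

# Illustrative sketch of u_t(x) = P(τ_A^x <= t) for A the closed unit disc in
# R^2, via an Euler discretisation of the Brownian paths started at x0.
rng = random.Random(1)

def hitting_probs(x0, ts, dt=0.01, n_paths=2000):
    t_max, hits = max(ts), []
    for _ in range(n_paths):
        px, py, tau, k = x0[0], x0[1], math.inf, 0
        while k * dt < t_max:
            k += 1
            px += rng.gauss(0.0, math.sqrt(dt)); py += rng.gauss(0.0, math.sqrt(dt))
            if px * px + py * py <= 1.0:
                tau = k * dt            # first entrance time of this path
                break
        hits.append(tau)
    return [sum(1 for tau in hits if tau <= t) / n_paths for t in ts]
```

Since each path records its first entrance time once, the resulting estimates of \(t \mapsto \mathbb {P}(\tau ^x_A \le t)\) are automatically nondecreasing in t and take values in [0, 1], matching the qualitative behaviour of the solution of \(\varPi (A,0)\).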

Remark 1

All of the arguments in the next sections can be repeated in exactly the same way if the operator \(\frac{1}{2}\Delta \) is replaced by an operator of the form \(L_t u:= \partial _i (a^{ij}_t \partial _ju)\), where for \( i,j \in \{1,\dots ,d\}\), \(a^{ij} \in L_\infty ((0,T))\) (independent of \(x \in \mathbb {R}^d\)), and there exists a constant \(\kappa >0\) such that for almost all \(t \in [0,T]\),
$$\begin{aligned} a^{ij}_t z_iz_j \ge \kappa |z|^2, \end{aligned}$$
(5)
for all \(z=(z_1,...,z_d) \in \mathbb {R}^d \). Consequently one can replace \(w_t\) in Theorem 1 by “non-degenerate” stochastic integrals of the form \( y_t =\int _0^t \sigma _s \ d B_s \) where \(B_t\) is an m-dimensional Wiener process and \(\sigma \) is a measurable function from [0, T] to the set of \(d\times m\) matrices such that \((\sigma _t \sigma ^\top _t)_{i,j=1}^d\) satisfies (5).
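For a given matrix \(\sigma \), the non-degeneracy condition (5) for \(\sigma _t\sigma ^\top _t\) amounts to its smallest eigenvalue being bounded below by \(\kappa \); a small numerical check, with an illustrative \(2\times 3\) matrix of our own choosing:

```python
import math

# Check of condition (5) for a = σ σ^T (illustrative σ, d = 2, m = 3):
# a z·z >= κ|z|² for all z exactly when the smallest eigenvalue of a is >= κ.
def smallest_eigenvalue_2x2(a):
    tr = a[0][0] + a[1][1]
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return tr / 2 - math.sqrt(tr * tr / 4 - det)

sigma = [[2.0, 0.0, 1.0],
         [0.0, 1.0, 1.0]]
a = [[sum(sigma[i][k] * sigma[j][k] for k in range(3)) for j in range(2)]
     for i in range(2)]                 # a = σ σ^T = [[5, 1], [1, 2]]
kappa = smallest_eigenvalue_2x2(a)      # the best constant in (5) for this a
```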

3 Auxiliary results

In this section we develop some tools that we will need in order to prove our main theorems. Namely, we present a version of the parabolic maximum principle for functions that are not necessarily continuous up to the parabolic boundary. This result (Lemma 1 below) is probably well known, but we provide a proof for the convenience of the reader. The maximum principle is the main tool used in order to show the comparison of the solution of the problem \(\varPi (A, \psi )\) and its polarized version. The reason that we need this version of the maximum principle is that \(P_H A\) is not guaranteed to have any “good” properties, even if \(\partial A\) is of class \(C^ \infty \), and therefore one cannot expect the solution of \(\varPi (P_HA, P_H \psi )\) to be continuous up to the boundary. We also present certain continuity properties of the solution map with respect to the set A, so that we can then iterate Theorem 3 in order to obtain Theorem 4.

In this section we consider \(a^{ij} \in L_\infty ((0,T) \times \mathbb {R}^d)\) for \(i,j=1,\dots ,d\), and we assume that there exists a constant \(\kappa >0\) such that for any \(z=(z_1,\dots ,z_d) \in \mathbb {R}^d\) we have
$$\begin{aligned} a^{ij}_t(x)z_i z_j \ge \kappa |z|^2, \end{aligned}$$
for a.e. \((t,x)\in [0,T] \times \mathbb {R}^d\). We set \(K:= \max _{i,j} \Vert a^{ij}\Vert _{L_\infty }\). For an open set \(Q \subset \mathbb {R}^d\), let \(\varPsi (Q)\) be the set of functions \(u \in \mathscr {H}^1(Q)\), such that for any \(\phi \in C^\infty _c(Q)\)
$$\begin{aligned} (u_t,\phi )=(u_0,\phi )-\int _0^t (a^{ij}_s\partial _ju_s, \partial _i \phi ) \ ds, \end{aligned}$$
(6)
for all \(t\in [0,T]\). Notice that by the De Giorgi-Moser-Nash theorem, if \(u \in \varPsi (Q)\), then \(u \in C((0,T)\times Q)\).
Let us also introduce the functions \(\alpha _r(s)\), \(\beta _r(s)\) and \(\gamma _r(s)\) on \(\mathbb {R}\), for \(r>0\), that will be needed in the next lemma, given by
$$\begin{aligned} \gamma _ r (s)= & {} \left\{ \begin{array}{rl} 2 &{} \text {if } s > r \\ \frac{2s}{r} &{} \text {if } 0 \le s \le r \\ 0 &{} \text {if } s < 0, \end{array} \right. \\ \beta _r (s)= & {} \int _0^s \gamma _r (t) \ dt, \qquad \alpha _r(s)=\int _0^s \beta _r(t) \ dt. \end{aligned}$$
For all \(s \in \mathbb {R}\) we have \(\gamma _r(s) \rightarrow 2I_{s >0}\), \( \beta _r(s) \rightarrow 2s_+\) and \(\alpha _r(s) \rightarrow (s_+)^2\) as \(r \rightarrow 0\). Also, for all \(s\in \mathbb {R}\) and \(r>0\), the following inequalities hold
$$\begin{aligned} |\gamma _r(s)|\le 2, \ |\beta _r(s)| \le 2|s|, \ |\alpha _r(s)| \le s^2. \end{aligned}$$
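Direct integration gives closed forms for \(\beta _r\) and \(\alpha _r\), from which the stated limits and bounds are immediate; the routine computation is sketched in code below (our own verification, not part of the proof).

```python
# Closed forms of γ_r, β_r = ∫₀^s γ_r, α_r = ∫₀^s β_r, obtained by
# integrating the piecewise definition of γ_r on the three regions.
def gamma_r(s, r):
    return 2.0 if s > r else (2 * s / r if s >= 0 else 0.0)

def beta_r(s, r):
    return 2 * s - r if s > r else (s * s / r if s >= 0 else 0.0)

def alpha_r(s, r):
    return s * s - r * s + r * r / 3 if s > r else (s ** 3 / (3 * r) if s >= 0 else 0.0)
```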

Lemma 1

Let Q be a bounded open set and let \(u\in \varPsi (Q)\). If there exists \(M\in \mathbb {R}\), such that \(u_0(x)\le M\) for a.e. \(x \in Q\) and \(\limsup _{(t,x) \rightarrow (t_0, x_0)} u_t(x) \le M\) for any \((t_0,x_0) \in (0,T] \times \partial Q\), then
$$\begin{aligned} \sup _{t \in [0,T]} \sup _{Q} u_t(x) \le M. \end{aligned}$$

Proof

Let us fix \(t' \in (0,T)\), and let \(\zeta \in C^\infty _c(B_1)\) be a positive function with unit integral. For \(\varepsilon >0\) and \(\delta >0\), set \(\zeta ^\varepsilon (x)= \varepsilon ^{-d} \zeta (x/ \varepsilon )\) and \(M^\delta :=M+\delta \). For \(x \in Q^\varepsilon := \{ x \in Q | \text {dist}(x, \partial Q) > \varepsilon \}\), we can plug \(\zeta ^\varepsilon (x-\cdot )\) in (6) in place of \(\phi \) to obtain
$$\begin{aligned} u^\varepsilon _t(x)-M^\delta = u^\varepsilon _ {t'}(x)-M^\delta +\int _{t'}^t (a^{ij}_s \partial _j u_s, \partial _i\zeta ^\varepsilon (x- \cdot )) \ ds, \end{aligned}$$
for all \(t \in [t',T]\), where \(u^\varepsilon =u * \zeta ^\varepsilon \). Let also \(g_n \in C^\infty _c(Q)\) with \(0 \le g_n \le 1\), \(g_n=1\) on \(Q^{1/n}\), \(g_n=0\) on \(Q{\setminus } Q^{1/2n}\) and choose \(\varepsilon < 1/2n\). We can then multiply the equation by \(g_n\), and by the chain rule we have
$$\begin{aligned} \int _Q \alpha _r((u^\varepsilon _t-M^\delta )g_n) \ dx =&\int _Q \alpha _r((u^\varepsilon _{t'}-M^\delta )g_n) \ dx \\ -&\int _{t'}^t \int _Q (a^{ij}_s\partial _j u_s)^\varepsilon \partial _i \big (g_n \beta _r((u^\varepsilon _s-M^\delta )g_n)\big ) \ dx ds. \end{aligned}$$
By standard arguments (see e.g. [8]), letting \( \varepsilon \rightarrow 0\), leads to
$$\begin{aligned} \int _Q \alpha _r((u_t-M^\delta )g_n) \ dx&= \int _Q \alpha _r((u_{t'}-M^\delta )g_n) \ dx \nonumber \\&\quad - \int _{t'}^t \int _Q g_n^2a^{ij}_s\partial _j u_s \gamma _r((u_s-M^\delta )g_n) \partial _i (u_s-M^\delta ) \ dx ds \nonumber \\&\quad -\int _{t'}^t \int _Q a^{ij}_s\partial _j u_s \partial _i g_n \beta _r((u_s-M^\delta )g_n) \ dx ds \nonumber \\&\quad - \int _{t'}^t \int _Q a^{ij}_s\partial _j u_s \gamma _r((u_s-M^\delta )g_n) (u_s-M^\delta )g_n\partial _i g_n \ dx ds. \end{aligned}$$
(7)
Let us also introduce the notation
$$\begin{aligned} U^\delta _t=\{x \in Q | \ u_t(x) > M+\delta \}. \end{aligned}$$
We claim that there exists \(\theta >0\) such that \(\text {dist}(\overline{U^\delta _t}, \partial Q) \ge \theta \) for any \(t \in [{t'},T]\). For each \(t\in [{t'},T]\), we have \(\overline{U^\delta _t} \subset Q \cup \partial Q\). If \(\inf _{t\in [{t'},T]}\text {dist}(\overline{U^\delta _t}, \partial Q)= 0\), then we can find \((s,y) \in [{t'},T] \times \partial Q\), and a sequence \((t_n,x_n) \in [{t'},T] \times U^\delta _{t_n}\) such that \((t_n,x_n) \rightarrow (s,y)\) as \(n \rightarrow \infty \). Then we have by the definition of \(U^\delta _{t_n}\),
$$\begin{aligned} \limsup _{ (x_n,t_n) \rightarrow (s,y)} u_{t_n}(x_n) \ge M+\delta , \end{aligned}$$
while by assumption we have that
$$\begin{aligned} \limsup _{ (x_n,t_n) \rightarrow (s,y)} u_{t_n}(x_n) \le M, \end{aligned}$$
which is a contradiction, and therefore
$$\begin{aligned} \inf _{t\in [{t'},T]}\text {dist}(\overline{U^\delta _t}, \partial Q)= \theta >0. \end{aligned}$$
(8)
Going back to (7), for any \(n > 1/\theta \), we have that for all \(s \in [{t'},T]\)
$$\begin{aligned} \int _Q a^{ij}_s \partial _j u_s \partial _i g_n \beta _r((u_s-M^\delta )g_n) \ dx =\int _{U^\delta _s} a^{ij}_s \partial _j u_s \partial _i g_n \beta _r((u_s-M^\delta )g_n) \ dx =0, \end{aligned}$$
since \(\partial _ig_n=0\) on \(Q^{1/n}\) and \(U^\delta _s \subset Q^{1/n}\) by (8). Similarly for the last term on the right hand side of (7). Therefore, letting \(n \rightarrow \infty \) and \(r\rightarrow 0\) in (7) gives
$$\begin{aligned} \Vert (u_t-M^\delta )_+ \Vert ^2_{L_2(Q)}&=\Vert (u_{t'}-M^\delta )_+ \Vert ^2_{L_2(Q)} -\int _{t'}^t \int _Q a^{ij}_s \partial _i u_s\partial _ju_s I_{u_s>M^\delta } \ dx ds \\&\quad \le \Vert (u_{t'}-M^\delta )_+ \Vert ^2_{L_2(Q)} . \end{aligned}$$
The above inequality holds for any \(t'\in (0,t)\), and therefore by letting \(t' \downarrow 0\) and using the continuity of u (in \(L_2(Q)\)) we have
$$\begin{aligned} \Vert (u_t-M^\delta )_+ \Vert ^2_{L_2(Q)} \le \Vert (u_0-M^\delta )_+ \Vert ^2_{L_2(Q)} \le 0, \end{aligned}$$
since \(u_0 \le M\). Consequently
$$\begin{aligned} \sup _{Q} u_t(x) \le M+\delta , \end{aligned}$$
for any \(t \in [0,T]\). Since \(\delta \) was arbitrary, the lemma is proved. \(\square \)
We now continue with the continuity properties of the solution map. Let us fix \(\xi \in L_2(\mathbb {R}^d)\) and \(f \in L_2((0,T) \times \mathbb {R}^d)\). We will say that u solves the problem \(\varPi _0(A, \xi , f)\) if
  1. (i)

    \(u \in \mathscr {H}_0^1(\mathbb {R}^d {\setminus } A)\), and

     
  2. (ii)
    for each \(\phi \in C^\infty _c(\mathbb {R}^d {\setminus } A)\),
    $$\begin{aligned} (u_t,\phi )=(\xi , \phi )+\int _0^t \left( (f_s, \phi )-(a^{ij}_s\partial _iu_s , \partial _j \phi ) \right) \ ds, \end{aligned}$$
    for all \(t\in [0,T]\).
     
For \(n \in \mathbb {N}\), let \(\xi ^n \in L_2(\mathbb {R}^d)\), \(f^n \in L_2((0,T) \times \mathbb {R}^d)\) and let \(A_n \subset \mathbb {R}^d\) be compact sets.

Assumption 1

  1. (i)

    \(\xi ^n \rightarrow \xi \) weakly in \(L_2(\mathbb {R}^d)\)

     
  2. (ii)

    \( f^n \rightarrow f \) weakly in \( L_2([0,T]; L_2(\mathbb {R}^d))\)

     
  3. (iii)

    \(A_{n+1} \subset A_n\) for each \(n \in \mathbb {N}\), and \( \cap _n A_n=A\).

     

Lemma 2

Suppose Assumption 1 holds, and let \(u^n\) and u be the solutions of the problems \(\varPi _0(A_n, \xi ^n,f^n)\) and \(\varPi _0(A, \xi ,f)\) respectively . Let us extend \(u^n\) and u to zero on \(A_n\) and A respectively. Then
  1. (i)

    \(u^n \rightarrow u\) weakly in \(\mathbb {H}^1_0(\mathbb {R}^d)\) as \(n \rightarrow \infty \),

     
  2. (ii)

    \(u^n_t \rightarrow u_t\), weakly in \(L_2(\mathbb {R}^d)\) as \(n \rightarrow \infty \), for any \(t \in [0,T]\).

     

Proof

Let us set \(C_n = \mathbb {R}^d {\setminus } A_n\) and \(C = \mathbb {R}^d {\setminus } A\). Clearly, for (i) it suffices to show that there exists a subsequence \((u^{n_k})_k\) such that \(u^{n_k} \rightarrow u\) weakly in \(\mathbb {H}^1_0(C)\). By standard estimates we have that there exists a constant N depending only on \(d, K, \kappa \), and T, such that for all n
$$\begin{aligned} \sup _{t\le T} \Vert u^n_t\Vert _{L_2(C_n)}^2+\int _0^T \Vert u^n_t\Vert ^2_{H^1_0(C_n)} \ dt \le N \left( \Vert \xi ^n\Vert ^2_{L_2(C_n)}+ \int _0^T \Vert f^n_t\Vert ^2_{L_2(C_n)} \ dt \right) . \end{aligned}$$
(9)
Since \(u^n\) are zero on \(A_n\), we can replace \(C_n\) by C in the above inequality, to obtain that there exists a subsequence \((u^{n_k})_{k=1}^\infty \subset \mathbb {H}^1_0(C)\), and a function \(v \in \mathbb {H}^1_0(C)\) such that \(u^{n_k} \rightarrow v\) weakly in \(\mathbb {H}^1_0(C)\).
For \(\phi \in C^\infty _c(\mathbb {R}^d{\setminus } A)\) we have that for all k large enough \(\text {supp} (\phi ) \subset C_{n_k}\). Also, \(u^{n_k}\) solves \(\varPi _0(A_{n_k}, \xi ^{n_k},f^{n_k})\), and therefore
$$\begin{aligned} (u^{n_k}_t,\phi )=(\xi ^{n_k}, \phi )+\int _0^t \left( (f^{n_k}_s, \phi )-(a^{ij}_s\partial _iu^{n_k}_s , \partial _j \phi ) \right) \ ds \ \ \text {for all }\, t \in [0,T].\nonumber \\ \end{aligned}$$
(10)
Letting \(k \rightarrow \infty \), this gives
$$\begin{aligned} (v_t,\phi )=(\xi , \phi )+\int _0^t\left( (f_s, \phi )-(a^{ij}_s\partial _iv_s , \partial _j \phi ) \right) \ ds \ \ \text {for a.e. }\, t \in [0,T], \end{aligned}$$
(11)
which also holds for any \(\phi \in H^1_0(C)\), since \(C^\infty _c(C)\) is dense in the latter. Hence v belongs to the space \( \mathscr {H}^1_0(C)\) (by Theorem 2.16 in [12] for example), and is a solution of \(\varPi _0(A, \xi ,f)\). By the uniqueness of the solution we get \(u=v\) (as elements of \( \mathscr {H}^1_0(C)\)), and this proves (i).

Let us fix \(t \in [0,T]\). It suffices to show that there exists a subsequence \(u^{n_k}_t\) such that \(u^{n_k}_t \rightarrow u_t\) weakly in \(L_2(C)\) as \(k \rightarrow \infty \). Notice that by (9), there exists a subsequence \(u^{n_k}_t\) which converges weakly to some \(v' \in L_2(C)\). Again, for \(\phi \in C_c^\infty (C)\) and k large enough, we have that (10) holds. As \(k \rightarrow \infty \), the right hand side of (10) converges to the right hand side of (11) (for our fixed \(t \in [0,T]\)), which is equal to \((u_t,\phi )\), while the left hand side of (10) converges to \((v', \phi )\). Hence, \(v'=u_t\) on C, and since \(u^{n_k}_t\) converges weakly in \(L_2(C)\) to \(v'\), the lemma is proved. \(\square \)

Corollary 1

Suppose that (i) and (iii) from Assumption 1 hold, and let \(u^n\) and u be the solutions of the problems \(\varPi (A_n,\psi ^n)\) and \(\varPi (A,\psi )\). Set \(u^n=1\) and \(u=1\) on \(A_n\) and A respectively. Then for each t, \(u^n_t \rightarrow u_t\) weakly in \(L_2(\mathbb {R}^d)\) as \(n \rightarrow \infty \).

Proof

Let \(g \in C^\infty _c (\mathbb {R}^d)\) with \(g=1\) on a compact set B such that \(A_0 \subset \underline{B}\). Then \(u^n-g\) and \(u-g\) solve the problems \(\varPi _0(A_n,\psi ^n-g,-\frac{1}{2}\Delta g)\) and \(\varPi _0(A,\psi -g,-\frac{1}{2}\Delta g)\) and the result follows by Lemma 2. \(\square \)

For two subsets of \(\mathbb {R}^d\), \(A_1\) and \(A_2\), we denote by \(d(A_1,A_2)\) the Hausdorff distance, that is
$$\begin{aligned} d(A_1,A_2)= \inf \left\{ \rho \ge 0 \ | \ A_1 \subset (A_2+ \overline{B}_\rho ), \ A_2 \subset (A_1+ \overline{B}_\rho ) \right\} . \end{aligned}$$
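For finite point sets the Hausdorff distance can be computed directly from the equivalent formula \(d(A_1,A_2)=\max \big (\sup _{p \in A_1}\text {dist}(p,A_2),\ \sup _{q \in A_2}\text {dist}(q,A_1)\big )\); the sketch below uses such sets as a discrete stand-in for the compact sets considered here.

```python
import math

# Hausdorff distance between two finite point sets in the plane.
def hausdorff(A1, A2):
    d = lambda p, q: math.dist(p, q)                       # Euclidean distance
    h = lambda X, Y: max(min(d(p, q) for q in Y) for p in X)  # one-sided distance
    return max(h(A1, A2), h(A2, A1))
```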
In Lemma 3 below we will need the following:

Remark 2

Let \(A \subset \mathbb {R}^d\) be compact such that \(\mathbb {R}^d {\setminus } A\) is a Carathéodory set (i.e. \(\partial (\mathbb {R}^d {\setminus } A) = \partial \overline{\mathbb {R}^d {\setminus } A}\)). If \(u \in H^1(\mathbb {R}^d)\) and \(u=0\) a.e. on A, then \(u \in H^1_0(\mathbb {R}^d {\setminus } A)\). To see this, suppose first that \(\text {supp}(u) \subset B_R\), where R is large enough, so that \(A \subset B_R\). It follows that \(B_R {\setminus } A\) is a Carathéodory set, and by Theorem 7.3(ii), p. 436 in [9], if \(u \in H^1_0(B_R)\), and \(u=0\) a.e. on A, then \(u \in H^1_0(B_R {\setminus } A)\), and therefore \(u \in H^1_0(\mathbb {R}^d {\setminus } A)\). For general u we can take \(\zeta \in C^\infty _c(\mathbb {R}^d)\), such that \(0\le \zeta \le 1\) and \(\zeta (x)=1\) for \(|x| \le 1\), and set \(\zeta ^n(x)=\zeta (x/n)\). Then by the previous discussion \(\zeta ^n u \in H^1_0(\mathbb {R}^d{\setminus } A)\) and since \(\zeta ^n u \rightarrow u\) in \(H^1(\mathbb {R}^d{\setminus } A)\) we get that \(u \in H^1_0(\mathbb {R}^d{\setminus } A)\).

Assumption 2

  1. (i)

    \(\xi ^n \rightarrow \xi \) weakly in \(L_2(\mathbb {R}^d)\)

     
  2. (ii)

    \( f^n \rightarrow f \) weakly in \( L_2([0,T]; L_2(\mathbb {R}^d))\)

     
  3. (iii)

    \(d(A,A_n) \rightarrow 0\), \( |A {\setminus } A_n|\rightarrow 0\), as \(n \rightarrow \infty \), and \(\mathbb {R}^d {\setminus } A\) is a Carathéodory set.

     

Lemma 3

Suppose Assumption 2 holds, and let \(u^n\) and u be the solutions of the problems \(\varPi _0(A_n, \xi ^n,f^n)\) and \(\varPi _0(A, \xi ,f)\). Let us extend \(u^n\) and u to 0 on \(A_n\) and A respectively. Then
  1. (i)

    \(u^n \rightarrow u\) weakly in \(\mathbb {H}^1_0(\mathbb {R}^d)\),

     
  2. (ii)

    \(u^n_t \rightarrow u_t\) weakly in \(L_2(\mathbb {R}^d)\), as \(n \rightarrow \infty \), for any \(t \in [0,T]\).

     

Proof

As in the proof of Lemma 2, it suffices to find subsequences along which the corresponding convergences take place. By standard estimates, there exists a constant N depending only on \(d, \kappa , T\) and K, such that for all \(n \in \mathbb {N}\)
$$\begin{aligned} \sup _{t\le T} \Vert u^n_t\Vert _{L_2(\mathbb {R}^d)}^2+\int _0^T \Vert u^n_t\Vert ^2_{H^1_0(\mathbb {R}^d)} \ dt \le N\left( \Vert \xi ^n\Vert ^2_{L_2(\mathbb {R}^d)}+ \int _0^T \Vert f^n_t\Vert ^2_{L_2(\mathbb {R}^d)} \ dt \right) .\nonumber \\ \end{aligned}$$
(12)
Therefore, there exists a subsequence \((u^{n_k})_{k=1}^\infty \subset \mathbb {H}^1_0(\mathbb {R}^d)\), and a function \(v \in \mathbb {H}^1_0(\mathbb {R}^d)\) such that \(u^{n_k} \rightarrow v\) weakly in \(\mathbb {H}^1_0(\mathbb {R}^d)\).
For \(\phi \in C^\infty _c(\mathbb {R}^d{\setminus } A)\), since \(d(A,A_n) \rightarrow 0\) as \(n \rightarrow \infty \), we have that for all k large enough \(\text {supp} (\phi ) \subset \mathbb {R}^d {\setminus } A_{n_k}\). Also, \(u^{n_k}\) solves \(\varPi _0(A_{n_k}, \xi ^{n_k},f^{n_k})\), and therefore
$$\begin{aligned} (u^{n_k}_t,\phi )=(\xi ^{n_k}, \phi )+\int _0^t \left( (f^{n_k}_s, \phi )-(a^{ij}_s\partial _iu^{n_k}_s , \partial _j \phi ) \right) \ ds, \ \ \text {for all}\;t \in [0,T].\nonumber \\ \end{aligned}$$
(13)
Letting \(k \rightarrow \infty \), this gives
$$\begin{aligned} (v_t,\phi )=(\xi , \phi )+\int _0^t\left( (f_s, \phi )-(a^{ij}_s \partial _iv_s , \partial _j \phi ) \right) \ ds \ \ \text {for a.e.}\; t \in [0,T]. \end{aligned}$$
(14)
Notice that for \(\phi \in L_\infty (A)\), \(\psi \in L_\infty ((0,T))\)
$$\begin{aligned}&\left| \int _0^T \int _A v_t \phi \psi _t \ dx dt \right| = \lim _{k \rightarrow \infty } \left| \int _0^T \int _{A{\setminus } A_{n_k}} u^{n_k}_t \phi \psi _t \ dx dt \right| \\&\quad \le T \Vert \phi \Vert _{L_\infty (A)} \Vert \psi \Vert _{L_\infty ((0,T))} \lim _{k\rightarrow \infty } \sup _{t \le T} \Vert u^{n_k}_t\Vert _{L_2(\mathbb {R}^d)} |A{\setminus } A_{n_k}|^{1/2}=0, \end{aligned}$$
by assumption and (12). Consequently for almost all \(t \in (0,T)\), \(v_t =0\) for a.e. \(x \in A\). By virtue of Remark 2, we have that \(v \in \mathbb {H}^1_0(\mathbb {R}^d {\setminus } A)\), which combined with (14) implies that \(v \in \mathscr {H}^1_0(\mathbb {R}^d {\setminus } A)\) and is the unique solution of the problem \(\varPi _0(A,\xi ,f)\). This proves (i).
Let us fix \(t\in [0,T]\). By (12) there exists a subsequence \(u^{n_k}_t\) that converges weakly to some \(v' \in L_2(\mathbb {R}^d)\). Again, for \(\phi \in C_c^\infty (\mathbb {R}^d {\setminus } A)\) and k large enough, we have that (13) holds. As \(k \rightarrow \infty \), the right hand side of (13) converges to the right hand side of (14), which is equal to \((u_t,\phi )\), while for our fixed t, the left hand side of (13) converges to \((v', \phi )\). Hence, \(v'=u_t\) on \(\mathbb {R}^d{\setminus } A\). Also if \(\phi \in L_\infty (A)\)
$$\begin{aligned} \int _A v' \phi \ dx = \lim _{k \rightarrow \infty } \int _A u^{n_k}_t \phi \ dx \le \Vert \phi \Vert _{L_\infty (A)} \lim _{k \rightarrow \infty } \Vert u^{n_k}_t \Vert _{L_2(\mathbb {R}^d)}|A{\setminus } A_{n_{k}}|^{1/2} = 0. \end{aligned}$$
Therefore \(v'=0=u_t\) on A. This shows that \(v' =u_t\) on \(\mathbb {R}^d\) and the lemma is proved. \(\square \)

As with Lemma 2, we have the following corollary, whose proof is similar to the one of Corollary 1.

Corollary 2

Suppose that (i) and (iii) from Assumption 2 hold and let \(u^n\) and u be the solutions of the problems \(\varPi (A_n,\psi ^n)\) and \(\varPi (A,\psi )\). Set \(u^n=1\) and \(u=1\) on \(A_n\) and A respectively. Then for each t, \(u^n_t \rightarrow u_t\) weakly in \(L_2(\mathbb {R}^d)\) as \(n \rightarrow \infty \).

4 Proofs of Theorems 3 and 4

Proof of Theorem 3

Let us assume for now that \(\mathbb {R}^d {\setminus } A\) has smooth boundary and that \(\psi \) is smooth and compactly supported. It follows under these extra conditions that \(u \in C^\infty ([0,T] \times \overline{\mathbb {R}^d {\setminus } A})\). Also, by the De Giorgi-Moser-Nash theorem, v is continuous in \((0,T) \times (\mathbb {R}^d {\setminus } P_HA)\).

First notice that \(0\le u,v \le 1\) (see the remark below). Let us extend \(u=1\) and \(v=1\) on A and \(P_HA\) respectively so that they are defined on the whole \(\mathbb {R}^d\), and for a function f let us use the notation \(\overline{f}(x):=f(\sigma _H(x))\). Clearly it suffices to show that for each \(t \in (0,T]\)
$$\begin{aligned} w_t:=v_t+\overline{v}_t - u_t-\overline{u}_t \le 0, \ \text {for a.e. }\, x\in H^c. \end{aligned}$$
Suppose that the opposite holds, that is,
$$\begin{aligned} \sup _{(0,T]}\sup _{H^c} w_t(x)= \sup _{(0,T]}\sup _{\mathbb {R}^d } w_t(x)=: \alpha >0. \end{aligned}$$
Then we have that
$$\begin{aligned} \sup _{(0,T]} \sup _{\varGamma _i} w_t(x)= \alpha , \end{aligned}$$
(15)
for some \(i \in \{1,2,3,4\}\), where
$$\begin{aligned} \varGamma _1:=&A\cap A_H\cap H^c , \ \ \ \varGamma _2:=( \underline{A} {\setminus } A_H)\cap H^c \\ \varGamma _3:=&(\underline{A_H} {\setminus } A)\cap H^c , \ \varGamma _4:= H^c{\setminus } (A\cup A_H) . \end{aligned}$$
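As a sanity check, \(\varGamma _1, \dots , \varGamma _4\) partition \(H^c\) up to the (measure-zero) boundaries; the sketch below verifies the corresponding set identity on a finite grid, ignoring the distinction between a set and its interior, with an A of our own choosing that straddles \(\partial H\).

```python
# Discrete stand-in: with H = {x : x_1 >= 0} in Z^2, the four sets
# G1 = A ∩ A_H ∩ H^c, G2 = (A \ A_H) ∩ H^c, G3 = (A_H \ A) ∩ H^c,
# G4 = H^c \ (A ∪ A_H) form a partition of H^c.
grid = {(i, j) for i in range(-3, 4) for j in range(-3, 4)}
A = {(i, j) for (i, j) in grid if i * i + j * j <= 2} | {(-2, 0)}
A_H = {(-i, j) for (i, j) in A}              # reflection of A in the hyperplane
Hc = {(i, j) for (i, j) in grid if i < 0}    # the half-space complement H^c

G1 = A & A_H & Hc
G2 = (A - A_H) & Hc
G3 = (A_H - A) & Hc
G4 = Hc - (A | A_H)
```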
(Notice that the boundaries of A and \(A_H\) are of measure zero, since they are smooth). On \(\varGamma _1\), by definition \(w_t =0\) for any \(t \in [0,T]\), and therefore (15) holds for some \(i \in \{2,3,4\}\). Suppose it holds for \(i=2\). Since the initial conditions are compactly supported, we can find an open rectangle R with \(A\cup A_H \subset R\), such that
$$\begin{aligned} \sup _{(0,T) \times \overline{R^c} } \max \{u_t(x) ,v_t(x) \} \le \alpha /10 . \end{aligned}$$
(16)
Since \(w_t=v_t-\overline{u}_t=: \hat{w}_t\) on \(\varGamma _2\) we have
$$\begin{aligned} \sup _{(0,T]} \sup _{\varTheta } \hat{w}_t \ge \alpha , \end{aligned}$$
(17)
where \(\varTheta = (H^c {\setminus } A_H)\cap R\). Since
  1. (i)

    \(\limsup _{(0,T)\times \varTheta \ni (t,x) \rightarrow (t_0,x_0)} \hat{w_t} \le 0, \, \) for any \((t_0,x_0) \in (0,T) \times \partial A_H\),

     
  2. (ii)

    \(P_H\psi - \overline{\psi } \le 0\) on \(H^c\),

     
  3. (iii)

    inequality (16) holds,

     
we obtain by virtue of Lemma 1 that for any \(\varepsilon >0\), there exists \((t_0, x_0) \in (0,T)\times \partial H\) (in fact \(x_0 \in \partial H {\setminus } A_H\) due to (i) above) such that
$$\begin{aligned} \limsup _{(0,T)\times \varTheta \ni (t,x) \rightarrow (t_0,x_0)} \hat{w_t} \ge \alpha - \varepsilon . \end{aligned}$$
Notice that \(\hat{w}\) is continuous at \((t_0,x_0)\) and therefore \(\hat{w}_{t_0}(x_0) \ge \alpha - \varepsilon \). This implies that
$$\begin{aligned} w_{t_0}(x_0) =2\hat{w}_{t_0}(x_0) \ge 2( \alpha - \varepsilon )=2 \sup _{(0,T]} \sup _{\mathbb {R}^d} w_t- 2 \varepsilon , \end{aligned}$$
which is a contradiction once \(\varepsilon < \alpha /2\). If (15) holds for \(i=3\), then in the same way we have that
$$\begin{aligned} \sup _{(0,T]} \sup _{\varTheta '} \tilde{w}_t \ge \alpha , \end{aligned}$$
(18)
where \(\tilde{w}_t=v_t-u_t\) and \(\varTheta '=(H^c {\setminus } A)\cap R\). This inequality leads to a similar contradiction.
Finally let us assume that (15) holds for \(i=4\). In particular then we have
$$\begin{aligned} \sup _{(0,T]} \sup _{G} w_t \ge \alpha , \ \text {where} \ G:=R{\setminus } (A\cup A_H). \end{aligned}$$
By virtue of (16), and since \(P_H \psi \lhd _H \psi \), Lemma 1 implies that for any \(\varepsilon >0\), there exists \((t_0,x_0) \in (0,T] \times \partial (A\cup A_H)\) such that
$$\begin{aligned} \limsup _{(0,T) \times G \ni (t,x) \rightarrow (t_0,x_0)} w_t(x) \ge \alpha - \varepsilon . \end{aligned}$$
Notice that \(x_0 \in \partial A \cap A_H^c\) or \(x_0 \in \partial A _H \cap A^c\), because if it belongs to \(\partial A \cap \partial A_H\) then the \(\limsup \) above is less than or equal to zero. Let us consider the first case. By symmetry, we can further assume that \(x_0 \in \partial A \cap A_H^c \cap H^c\). Let \((t_n,x_n) \in (0,T) \times G\) be a sequence converging to \((t_0,x_0)\) such that \(w_{t_n}(x_n) \rightarrow \alpha - \varepsilon \). For all \(n\in \mathbb {N}\) sufficiently large, we have \(w_{t_n}(x_n) \ge \alpha - 2\varepsilon \) and \(u_{t_n} (x_n) \ge 1- \varepsilon \), the latter by the continuity of u up to the parabolic boundary. Then we have for all n large
$$\begin{aligned} \alpha - 2\varepsilon \le w_{t_n}(x_n) \le v_{t_n}(x_n) +1-(1-\varepsilon )-\overline{u}_{t_n}(x_n). \end{aligned}$$
This now implies (17), which we showed leads to a contradiction. For the second case, we can assume again that \(x_0 \in \partial A_H \cap A^c \cap H^c\). This in the same manner leads to (18), which also leads to a contradiction.
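To spell out this routine step: the display above rearranges to
$$\begin{aligned} \hat{w}_{t_n}(x_n)=v_{t_n}(x_n)-\overline{u}_{t_n}(x_n) \ge \alpha -3\varepsilon , \end{aligned}$$
and since \(x_n \in \varTheta \) for all n large enough, taking the supremum over \(\varTheta \) and recalling that \(\varepsilon >0\) was arbitrary recovers (17); the step leading to (18) is analogous.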
For general A and \(\psi \), let \(A_n\) be a sequence of compact sets such that for each \(n \in \mathbb {N}\), \(\mathbb {R}^d{\setminus } A_n\) has smooth boundary, \(A \subset A_{n+1} \subset A_n\), and \(A = \cap _n A_n\) (see e.g. page 60 in [7]). Let \(0 \le \psi ^n \le 1\) be smooth with compact support such that \(\psi ^n=1\) on \(A_n\), and \(\Vert \psi ^n- \psi \Vert _{L_2(\mathbb {R}^d)} \rightarrow 0\) as \(n \rightarrow \infty \). Then we also have that \(\Vert P_H\psi ^n- P_H\psi \Vert _{L_2(\mathbb {R}^d)} \rightarrow 0\) as \(n \rightarrow \infty \), \(P_HA \subset P_HA_{n+1} \subset P_HA_n\) for any \(n \in \mathbb {N}\), and \(P_HA = \cap _n P_HA_n\) (see [4]). Let \(u^n\) and \(v^n\) be the solutions of the problems \(\varPi (A_n,\psi ^n)\) and \(\varPi (P_HA_n, P_H\psi ^n)\) respectively. By Lemma 2 we have that \(u^n_t\) and \(v^n_t\) converge to \(u_t\) and \(v_t\) weakly in \(L_2(\mathbb {R}^d)\). In particular \(z^n:=(v^n_t,u^n_t)\) converges weakly to \(z:=(v_t,u_t)\) in \(L_2(\mathbb {R}^d;\mathbb {R}^2) \). By Mazur's lemma there exists a sequence \((g_k=(g^1_k,g^2_k))_{k \in \mathbb {N}}\) of convex combinations of the \(z^n\) such that the convergence takes place strongly. Then we can find a subsequence \(g_{k(l)}\), \(l \in \mathbb {N}\), along which the convergence takes place for a.e. \(x\in \mathbb {R}^d\). For each l we have
$$\begin{aligned} g^1_{k(l)}+\overline{g^1_{k(l)}} = \sum _{i\in C}c_i (v^{i}_t+\overline{v^{i}}_t)\le \sum _{i\in C} c_i (u^{i}_t+\overline{u^{i}}_t)= g^2_{k(l)}+\overline{g^2_{k(l)}} \end{aligned}$$
where \(C \subset \mathbb {N}\) is a finite set and \(c_i \ge 0\), \(\sum _{i \in C} c_i=1\). Letting \(l \rightarrow \infty \) finishes the proof. \(\square \)

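As a purely illustrative aside (not part of the argument), the effect of a single polarization \(P_H\) on a function can be sketched numerically. The snippet below implements the standard definition of the polarization with respect to a halfspace \(H=\{x: n\cdot x \ge c\}\): within each pair \(\{x, \sigma _H x\}\) of points exchanged by the reflection \(\sigma _H\), the two values of the function are kept but rearranged so that the larger one sits on the H side. All names in the code are ours, chosen for the illustration only.

```python
import numpy as np

def reflect(x, n, c):
    """Reflect x across the hyperplane {y : n.y = c}, n a unit vector."""
    x = np.asarray(x, dtype=float)
    return x - 2.0 * (np.dot(n, x) - c) * n

def polarize(f, n, c):
    """Return P_H f for H = {x : n.x >= c}: on H keep the larger of the
    values f(x), f(sigma_H x); off H keep the smaller one."""
    def Pf(x):
        xr = reflect(x, n, c)
        if np.dot(n, np.asarray(x, dtype=float)) >= c:
            return max(f(x), f(xr))
        return min(f(x), f(xr))
    return Pf

# Example: a Gaussian bump centred outside H = {x_1 >= 0} has its larger
# values moved onto the H side, while values within each reflection pair
# are only swapped (so f and P_H f are equimeasurable).
n, c = np.array([1.0, 0.0]), 0.0
f = lambda x: float(np.exp(-np.sum((np.asarray(x) - np.array([-1.0, 0.0])) ** 2)))
Pf = polarize(f, n, c)
x = np.array([0.5, 0.3])
xr = reflect(x, n, c)
assert Pf(x) == max(f(x), f(xr))   # larger value on the H side
assert Pf(xr) == min(f(x), f(xr))  # smaller value on the reflected side
```

The fact that values are merely permuted within reflection pairs is the mechanism behind the equimeasurability of \(P_H\psi \) and \(\psi \) used throughout.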
Remark 3

With \(\psi \) and A as in Theorem 3, it is easily seen that if, for example, \(\mathbb {R}^d {\setminus } A\) has Lipschitz boundary and u solves \(\varPi (A, \psi )\), then \(0\le u \le 1\). For general A, we can take \(A_n\) compact such that for all n, \(\mathbb {R}^d {\setminus } A_n\) has smooth boundary, and \(A_n \downarrow A\). Then the corresponding solutions \(u^n\) satisfy \(0 \le u^n \le 1\), which by virtue of Corollary 1 implies that \(0\le u \le 1\).

Proof of Theorem 4

First, let us assume that
$$\begin{aligned} \int _{\mathbb {R}^d} u_t(x) \ dx < \infty , \end{aligned}$$
or else the conclusion of the theorem is obviously true. Since \(|A|>0\), it follows from [5] that there exist \(H_i \in \mathcal {H}\), \(i \in \mathbb {N}\), such that
$$\begin{aligned} \lim _{n \rightarrow \infty } \left( \Vert \psi ^* - \psi ^n \Vert _{L_2(\mathbb {R}^d)}+|A^*\Delta A_n|+d(A^*,A_n) \right) = 0, \end{aligned}$$
where
$$\begin{aligned} \psi ^n := P_{H_n}... P_{H_1} \psi , \ A_n :=P_{H_n}...P_{H_1}A. \end{aligned}$$
Let \(u^n\) be the solution of the problem \(\varPi (A_n,\psi ^n)\). For \(t \in [0,T]\), by virtue of Theorem 3, we have by induction
$$\begin{aligned} \int _{\mathbb {R}^d} u^n_t \ dx \le \int _{\mathbb {R}^d} u_t \ dx, \end{aligned}$$
(19)
for all \(n \ge 0\). By Lemma 3 (since \(|A|>0 \), \(\mathbb {R}^d{\setminus } A^*\) is obviously a Carathéodory set) we have that \(u^n_t \rightarrow v_t\) weakly in \(L_2(\mathbb {R}^d)\) as \(n \rightarrow \infty \). Hence we can find a sequence of convex combinations that converges strongly, and a subsequence of it, let us call it \((v^n)_{n=1}^\infty \), such that \(v^n \rightarrow v_t\) for a.e. \(x \in \mathbb {R}^d\). Since each \(v^n\) is a convex combination of elements from \((u^m_t)_{m=1}^\infty \), we have by (19)
$$\begin{aligned} \int _{\mathbb {R}^d} v^n \ dx \le \int _{\mathbb {R}^d} u_t \ dx, \end{aligned}$$
which combined with Fatou’s lemma brings the proof to an end. \(\square \)
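For clarity, we record the final Fatou step: since \(v^n \ge 0\) (being convex combinations of the nonnegative \(u^m_t\), cf. Remark 3) and \(v^n \rightarrow v_t\) a.e., Fatou's lemma gives
$$\begin{aligned} \int _{\mathbb {R}^d} v_t \ dx \le \liminf _{n \rightarrow \infty } \int _{\mathbb {R}^d} v^n \ dx \le \int _{\mathbb {R}^d} u_t \ dx . \end{aligned}$$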

Acknowledgments

The author would like to thank Takis Konstantopoulos and Tomas Juskevicius for the useful discussions.

References

  1. Alvino, A., Lions, P.-L., Trombetti, G.: Comparison results for elliptic and parabolic equations via symmetrization: a new approach. Differ. Integr. Equ. 4(1), 25–50 (1991)
  2. Bandle, C.: On symmetrizations in parabolic equations. J. Anal. Math. 30, 98–112 (1976)
  3. van den Berg, M.: On the expected volume of intersection of independent Wiener sausages and the asymptotic behaviour of some related integrals. J. Funct. Anal. 222(1), 114–128 (2005)
  4. Brock, F., Solynin, A.Y.: An approach to symmetrization via polarization. Trans. Am. Math. Soc. 352(4), 1759–1796 (2000)
  5. Burchard, A., Fortier, M.: Random polarizations. Adv. Math. 234, 550–573 (2013)
  6. Burchard, A., Schmuckenschläger, M.: Comparison theorems for exit times. Geom. Funct. Anal. 11(4), 651–692 (2001)
  7. Daners, D.: Domain perturbation for linear and semi-linear boundary value problems. In: Chipot, M. (ed.) Handbook of Differential Equations: Stationary Partial Differential Equations, vol. VI. Elsevier/North-Holland, Amsterdam (2008)
  8. Dareiotis, K., Gyöngy, I.: A comparison principle for stochastic integro-differential equations. Potential Anal. 41(4), 1203–1222 (2014)
  9. Delfour, M.C., Zolésio, J.-P.: Shapes and Geometries: Metrics, Analysis, Differential Calculus, and Optimization. Advances in Design and Control, vol. 22. Society for Industrial and Applied Mathematics (SIAM), Philadelphia (2011)
  10. Fraenkel, L.E.: On regularity of the boundary in the theory of Sobolev spaces. Proc. Lond. Math. Soc. 39(3), 385–427 (1979)
  11. Grigor’yan, A., Saloff-Coste, L.: Hitting probabilities for Brownian motion on Riemannian manifolds. J. Math. Pures Appl. 81(2), 115–142 (2002)
  12. Krylov, N.V., Rozovskii, B.L.: Stochastic evolution equations. In: Current Problems in Mathematics, vol. 14 (Russian), pp. 71–147, 256. Akad. Nauk SSSR, Vsesoyuz. Inst. Nauchn. i Tekhn. Informatsii, Moscow (1979)
  13. Kesavan, S.: Symmetrization and Applications. Series in Analysis, vol. 3. World Scientific, Hackensack (2006)
  14. Alvino, A., Fabes, E., Talenti, G. (eds.): Partial Differential Equations of Elliptic Type. Papers from the Conference Held in Cortona, October 12–16, 1992. Symposia Mathematica, vol. XXXV. Cambridge University Press, Cambridge (1994)
  15. Peres, Y., Sousi, P.: An isoperimetric inequality for the Wiener sausage. Geom. Funct. Anal. 22(4), 1000–1014 (2012)
  16. Talenti, G.: Elliptic equations and rearrangements. Ann. Scuola Norm. Sup. Pisa Cl. Sci. 3(4), 697–718 (1976)

Copyright information

© The Author(s) 2016

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Uppsala University, Uppsala, Sweden
