We prove the existence of solutions of Problem 1 with particular properties. In our previous work [17] this problem was considered under the assumption that the velocity is constant on the whole of \(\Omega _s\). In the situation considered here, the physical velocity \(\omega \) is constant only on every connected component of \(\Omega _s\), and the velocity of each solid particle is an unknown. Therefore, the candidate limiting profiles v over which we optimize (belonging to \(\mathrm {BV}_\diamond \)) also satisfy \(\nabla v = 0\) on \(\Omega _s\).
A Minimizer with Three Values
Theorem 5
There is a solution of Problem 1 that attains only two non-zero values.
The same result was proved in [17] in the simpler situation where the velocity was assumed to be uniformly constant on the whole of \(\Omega _s\). For the proof of Theorem 5, we proceed in two steps:
1.
We prove the existence of a minimizer for Problem 1 which attains only finitely many values. This is accomplished by convexity arguments reminiscent of slicing by the coarea (16) and layer cake (17) formulas, but more involved.
2.
When considered over functions with finitely many values, the minimization of the total variation with integral constraints is a simple finite-dimensional optimization problem, and standard linear programming arguments provide the result.
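The slicing identity underlying step 1 can be illustrated by a small discrete computation. The following sketch (numpy; a 1D piecewise-constant stand-in for the BV setting, with invented values) checks that the total variation equals the integral over levels s of the perimeter of \(\{u > s\}\), which is the discrete form of the coarea formula (16):

```python
import numpy as np

def tv_1d(u):
    """Discrete total variation of a piecewise-constant 1D profile."""
    return float(np.sum(np.abs(np.diff(u))))

def per_1d(mask):
    """Discrete perimeter of a set of cells: number of jumps of its indicator."""
    return float(np.sum(np.abs(np.diff(mask.astype(float)))))

def tv_by_coarea(u):
    """Integrate Per({u > s}) over s, exactly, level interval by level interval."""
    levels = np.unique(u)
    total = 0.0
    for lo, hi in zip(levels[:-1], levels[1:]):
        s = 0.5 * (lo + hi)  # Per({u > s}) is constant for s in (lo, hi)
        total += (hi - lo) * per_1d(u > s)
    return total

u = np.array([0.0, 2.0, 2.0, 1.0, 0.0, -1.0, 0.0])
assert tv_1d(u) == tv_by_coarea(u) == 6.0
```

Replacing the perimeter by the measure of the level set in the same loop gives, for nonnegative profiles, the corresponding discrete form of the layer cake formula (17).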
The core of the proof of Theorem 5 is the following lemma, which states that a simplified version of the minimization problem can be solved with finitely many values.
Lemma 2
Let \(\Omega _1 \subset \Omega _0\) be two bounded measurable sets, \(\nu \in \mathbb R\). Then, there exists a minimizer of \(\mathrm {TV}\) on the set
$$\begin{aligned} \mathcal {A}_\nu (\Omega _0, \Omega _1):= \left\{ v \in \mathrm {BV}(\mathbb R^2) \ \big | \ \left. v \right| _{\mathbb R^2 \setminus \Omega _0} \equiv 0, \ \left. v \right| _{\Omega _1} \equiv 1 , \ \int _{\mathbb R^2} v = \nu \right\} , \end{aligned}$$
whose range consists of at most five values, one of them being zero.
In turn, our proof of Lemma 2 is based on the following minimizing property of level sets, which we believe to be of interest in itself.
Lemma 3
Let \(\Omega _0, \Omega _1, \nu \) and \(\mathcal {A}_\nu (\Omega _0, \Omega _1)\) be as in Lemma 2, and let u be a minimizer of \(\mathrm {TV}\) in \(\mathcal {A}_\nu (\Omega _0, \Omega _1)\). Assume further that u takes values only in [0, 1], and denote \(E_s := \{u > s\}\). Let \(s_0\) be a Lebesgue point of \( s \mapsto {\text {Per}}(E_s)\) and \(s \mapsto |E_s|\) (these two functions are measurable, so almost every \(s \in [0,1]\) is a Lebesgue point for both). Then \(1_{E_{s_0}}\) minimizes \(\mathrm {TV}\) in \(\mathcal {A}_{|E_{s_0}|}(\Omega _0, \Omega _1)\).
The proofs of these two lemmas are located after the proof of Theorem 5.
Proof of Theorem 5
Step 1. A minimizer with finite range
To begin the proof, we assume that we are given a minimizer u of the total variation in \(\mathrm {BV}_\diamond \), that is, a solution of Problem 1. We write \(\Omega _s\) as the union of its connected components \(\Omega ^i_s\), \(i=1,\ldots ,N\),
$$\begin{aligned} \Omega _s = \bigcup _{i=1}^N \Omega ^i_s. \end{aligned}$$
Since u belongs to \(\mathrm {BV}_\diamond \), u is constant on every \(\Omega ^i_s\), and we introduce the constants \(\gamma _i\) such that
$$\begin{aligned} \left. u \right| _{\Omega ^i_s}=\gamma _i. \end{aligned}$$
We can assume that \(\gamma _i \leqslant \gamma _{i+1}\). Note that the constraint (18) reads
$$\begin{aligned} \frac{1}{\sum _{i=1}^N |\Omega ^i_s|} \sum _{i=1}^N \gamma _i |\Omega ^i_s| = 1. \end{aligned}$$
Defining
$$\begin{aligned} u_i := u \cdot 1_{\{\gamma _i< u < \gamma _{i+1}\}} + \gamma _i 1_{\{u \leqslant \gamma _i\}} + \gamma _{i+1} 1_{\{u \geqslant \gamma _{i+1}\}}, \end{aligned}$$
we have
$$\begin{aligned} u = \sum _{i=1}^N \big ( u_i - \gamma _i \big ). \end{aligned}$$
Notice that each \(u_i\) minimizes the total variation among functions with fixed integral \(\int _\Omega u_i\) and satisfying the boundary conditions \(u_i = \gamma _i\) on \(\{u \leqslant \gamma _i\}\) and \(u_i=\gamma _{i+1}\) on \(\{u \geqslant \gamma _{i+1}\}\).
As a result, the function \(v_i := \frac{u_i - \gamma _i}{\gamma _{i+1} - \gamma _i} \) minimizes the total variation with constraints \(\left. v_i \right| _{\mathbb R^2 \setminus \{u > \gamma _i\}} \equiv 0\), \(\left. v_i \right| _{\{u \geqslant \gamma _{i+1}\}} \equiv 1\) and prescribed integral. Lemma 2 (applied with \(\Omega _0 = \{u > \gamma _i\}\) and \(\Omega _1 = \{u \geqslant \gamma _{i+1} \}\)) shows that \(v_i\) can be replaced by a function \(\tilde{v}_i\) with at most five level sets whose total variation is smaller than or equal to \(\mathrm {TV}(v_i)\). Hence \(u_i\) can be replaced by the five-valued function \(\tilde{u}_i := \gamma _i + \tilde{v}_i (\gamma _{i+1} - \gamma _i)\) without increasing the total variation.
Therefore, the finitely-valued function
$$\begin{aligned} \tilde{u} := \sum _{i=1}^N \big ( \tilde{u}_i - \gamma _i \big ) \end{aligned}$$
is again a solution of Problem 1 (the functions u and \(\tilde{u}\) coincide on \(\Omega _s\), so the constraint
is satisfied).
Step 2. Construction of a three-valued minimizer
Step 1 provides a solution \(\tilde{u}\) of Problem 1 that attains only finitely many values, say \(p+1\) of them. We denote its range (listed in increasing order) by
$$\begin{aligned} \{ \gamma _{p^-} , \ldots , \gamma _{-1}, 0, \gamma _1, \ldots ,\gamma _{p^+} \} \end{aligned}$$
where \(p^- \leqslant 0 \leqslant p^+\), \(p^+ - p^- = p\) and \(\gamma _i < 0\) for \(i< 0\) and \(\gamma _i > 0\) for \(i >0.\)
Let us now define, for \(i <0\), \(E_i := \{\tilde{u} \leqslant \gamma _i\}\) and \(\alpha _i := \gamma _i - \gamma _{i+1}\), and for \(i > 0\), \(E_i := \{ \tilde{u} \geqslant \gamma _i\}\) and \(\alpha _i := \gamma _i - \gamma _{i-1}.\) The function \(\tilde{u}\) can then be written as
$$\begin{aligned} \tilde{u} = \sum _{\begin{array}{c} i = p^- \\ i \ne 0 \end{array}}^{p^+} \alpha _i 1_{E_i} \end{aligned}$$
(23)
where \(E_i \subset E_j\) whenever \(i<j<0\) or \(i>j>0.\)
We also have
$$\begin{aligned} \mathrm {TV}(\tilde{u})&= \sum _{\begin{array}{c} i = p^- \\ i \ne 0 \end{array}}^{p^+} |\alpha _i| {\text {Per}}(E_i), \nonumber \\ \int _{\Omega } \tilde{u}&=\sum _{\begin{array}{c} i = p^- \\ i \ne 0 \end{array}}^{p^+} \alpha _i |E_i|, \nonumber \\ \int _{\Omega _s} \tilde{u}&= \sum _{\begin{array}{c} i = p^- \\ i \ne 0 \end{array}}^{p^+} \alpha _i |E_i^s| \end{aligned}$$
(24)
where \(E_i^s = E_i \cap \Omega _s\).
Since \(\tilde{u}\) is a solution to Problem 1, the collection \((\alpha _i)\) minimizes \(\sum _i |\alpha _i| {\text {Per}}(E_i) \) with constraints
$$\begin{aligned} \sum _{\begin{array}{c} i = p^- \\ i \ne 0 \end{array}}^{p^+} \alpha _i |E_i| = 0 \quad \text {and} \quad \sum _{\begin{array}{c} i = p^- \\ i \ne 0 \end{array}}^{p^+} \alpha _i |E_i^s| = |\Omega _s| \end{aligned}$$
as well as \(\alpha _i < 0\) for \(i< 0\) and \(\alpha _i >0\) for \(i>0.\) The sign constraints on the \(\alpha _i\) ensure that formula (24) holds. Indeed, if the \(\alpha _i\) were allowed to change sign, the right-hand side of (24) would only be an upper bound for \(\mathrm {TV}(\tilde{u})\).
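The role of the sign constraints can be checked numerically. In the following sketch (numpy, with the discrete anisotropic total variation on a grid; the nested sets and the weights are invented for illustration), coefficients of matching sign give exact equality in the first line of (24), while a sign flip makes the weighted sum of perimeters a strict upper bound:

```python
import numpy as np

def tv(u):
    """Discrete anisotropic total variation on a grid."""
    return float(np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum())

def per(E):
    return tv(E.astype(float))

# Nested sets E2 ⊂ E1 sharing part of their boundary.
E1 = np.zeros((12, 12), dtype=bool); E1[2:8, 2:8] = True
E2 = np.zeros((12, 12), dtype=bool); E2[2:5, 2:8] = True

# Coefficients of matching sign: exact equality, as in (24).
u = 1.0 * E1 + 0.5 * E2
assert tv(u) == 1.0 * per(E1) + 0.5 * per(E2)  # both sides are 33.0

# A sign change: the weighted sum of perimeters is only an upper bound.
v = 1.0 * E1 - 0.5 * E2
assert tv(v) < 1.0 * per(E1) + 0.5 * per(E2)   # 21.0 < 33.0
```

The strict inequality appears precisely because the two sets share part of their boundary, so cancellations occur there when the signs differ.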
Introducing the vectors, all indexed by \(i \in \{p^-, \ldots ,p^+\}\) with \(i \ne 0\),
$$\begin{aligned} \begin{aligned} a&= ({\text {Per}}(E_{p^-}), \ldots , {\text {Per}}(E_{p^+})),\\ b&= (|E_{p^-}^s|,\ldots ,|E_{p^+}^s|),\\ c&= (|E_{p^-}|,\ldots ,|E_{p^+}|),\\ x&= (\alpha _{p^-}, \ldots ,\alpha _{p^+})\,, \end{aligned} \end{aligned}$$
minimizing (24) for \(\tilde{u}\) of the form (23) and with the constraints mentioned above amounts to finding a minimizer of
$$\begin{aligned} \min _{x} \ a^T |x| \quad \text {s.t. } b^T x =|\Omega _s| \ \text { and } \ c^T x = 0\;, \end{aligned}$$
where \(|x|\) denotes the componentwise absolute value.
Denoting by \(\sigma \in \{-1,1\}^{p}\) the vector indexed by \(i \in \{p^-, \ldots ,p^+\}\), \(i \ne 0\), with \(\sigma _i = -1\) for \(i< 0 \) and \(\sigma _i = 1\) for \(i>0\), this minimization problem can be rewritten as
$$\begin{aligned} \min _{ x \in \mathbb R^{p}} \left\{ a^T (\sigma :x) = (\sigma : a)^T x \ \big \vert \ b^T x = |\Omega _s|,\ c^T x = 0,\ \sigma :x \geqslant 0 \right\} , \end{aligned}$$
(25)
where \(\sigma :x\) denotes the componentwise product, \((\sigma :x)_i := \sigma _i x_i\). The space of constraints is then a (possibly empty) polyhedron given by the intersection of the quadrant \(\sigma :x \geqslant 0\) with the two hyperplanes \(c^T x = 0\) and \(b^T x =|\Omega _s|\). Now for a point of a polyhedron in \(\mathbb R^{p}\) to be a vertex, at least p constraints must be active at it. Since only two of these can be the hyperplane constraints, at least \(p-2\) of the active constraints must be among those defining the quadrant \(\sigma : x \geqslant 0\), meaning that at a vertex, at least \(p-2\) coefficients of x are zero.
This polyhedron could be unbounded, but since \(a \geqslant 0\) and \(\sigma :x \geqslant 0\) componentwise, the objective \(a^T (\sigma :x)\) is bounded below on it, so its minimization must have at least one solution. Moreover, since the polyhedron is contained in a quadrant (\(\sigma : x \geqslant 0\)), it clearly does not contain any line, so it must have at least one vertex [5, Theorem 2.6]. Since the function to minimize is linear in x, it attains its minimum at one such vertex [5, Theorem 2.7]. That proves the existence of a minimizer of (25) with at least \(p-2\) of the \((\alpha _i)\) being zero. This corresponds to a minimizer for Problem 1 which has only two level-sets with nonzero values, finishing the proof of Theorem 5.\(\square \)
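The vertex argument above can be played out numerically. The following sketch (numpy; the perimeters, masses, signs and target mass are hypothetical data for \(p=4\), not taken from the paper) enumerates the vertices of the constraint polyhedron, i.e. the feasible points with at most two nonzero coordinates, and checks by sampling that no other feasible point does better:

```python
import itertools
import numpy as np

# Hypothetical data for p = 4 (indices i = -2, -1, 1, 2): a = perimeters Per(E_i),
# b = solid masses |E_i^s|, c = total masses |E_i|, sigma = required signs.
a = np.array([3.0, 2.0, 2.5, 4.0])
b = np.array([0.5, 1.0, 1.2, 0.4])
c = np.array([2.0, 3.0, 3.5, 1.5])
sigma = np.array([-1.0, -1.0, 1.0, 1.0])
m = 1.0  # stands in for |Omega_s|

def objective(x):
    # equals a^T |x| whenever the sign constraints sigma : x >= 0 hold
    return float(a @ (sigma * x))

# Vertices: both equality constraints active and at most p - 2 = 2 nonzeros.
best = np.inf
for i, j in itertools.combinations(range(4), 2):
    A = np.array([[b[i], b[j]], [c[i], c[j]]])
    if abs(np.linalg.det(A)) < 1e-12:
        continue
    xi, xj = np.linalg.solve(A, np.array([m, 0.0]))
    x = np.zeros(4)
    x[i], x[j] = xi, xj
    if np.all(sigma * x >= -1e-12):  # keep only sign-feasible vertices
        best = min(best, objective(x))

# Sample the rest of the feasible set (a 2-parameter family once the two
# equalities are imposed) and check that no point beats the best vertex.
B = np.vstack([b, c])
x0 = np.linalg.lstsq(B, np.array([m, 0.0]), rcond=None)[0]
N = np.linalg.svd(B)[2][2:].T  # null-space basis of B
rng = np.random.default_rng(0)
for _ in range(5000):
    x = x0 + N @ rng.uniform(-5.0, 5.0, size=2)
    if np.all(sigma * x >= 0):
        assert objective(x) >= best - 1e-9
```

With these particular numbers the optimal vertex has exactly two nonzero coefficients, in line with the \(p-2\) count of the proof.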
Proof of Lemma 2
For conciseness, we denote the set \(\mathcal {A}_\nu (\Omega _0, \Omega _1)\) by \(\mathcal {A}\). Let w be an arbitrary minimizer of \(\mathrm {TV}\) in \(\mathcal {A}\). Splitting w at 0 and 1 we can write
$$\begin{aligned} w = (w^{1+}-1) + w^{(0,1)} - w^- \end{aligned}$$
(26)
with \(w^{1+} := w \cdot 1_{w \geqslant 1} + 1_{w < 1}\), \(w^{(0,1)} = w \cdot 1_{0 \leqslant w \leqslant 1} + 1_{w > 1}\), and \(w^-\) the usual negative part. We see from the coarea formula that
$$\begin{aligned} \mathrm {TV}(w)&= \int _{s \leqslant 0} {\text {Per}}(w \geqslant s) + \int _{0<s< 1} {\text {Per}}(w \geqslant s) + \int _{s \geqslant 1} {\text {Per}}(w \geqslant s) \\&= \mathrm {TV}(w^-) + \mathrm {TV}(w^{(0,1)}) + \mathrm {TV}(w^{1+}). \end{aligned}$$
With this splitting, \(w^-\) can be seen to be a minimizer of \(\mathrm {TV}\) over
$$\begin{aligned} \mathcal {A}^- := \left\{ v \in \mathrm {BV}(\mathbb R^2) \ \big \vert \ v=0 \text { on } \left\{ w > 0 \right\} \cup \mathbb R^2 \backslash \Omega _0, \ \int _\Omega v = \int _\Omega w^- \right\} . \end{aligned}$$
By Theorem 3, almost every level set of \(w^-\) is a Cheeger set of \(\Omega _0 \setminus \{w >0\}\), the complement of \(\left\{ w > 0 \right\} \cup \mathbb R^2 \backslash \Omega _0\). In particular, if we replace \(w^-\) by \(\frac{\int _\Omega w^-}{|\mathcal C_0|}1_{\mathcal C_0}\), where \(\mathcal C_0\) is one such Cheeger set, the total variation does not increase. Therefore, there exists a minimizer \(\tilde{w}^-\) of \(\mathrm {TV}\) on \(\mathcal {A}^-\) that attains only one non-zero value.
By an analogous argument we see that, because \(w^{1+}\) minimizes \(\mathrm {TV}\) on the set
$$\begin{aligned} \mathcal {A}^{1+} := \left\{ v \in \mathrm {BV}(\mathbb R^2) \ \big \vert \ v = 1 \text { on } \{w < 1\}, \ \int _{\Omega } v = \int _{\Omega } w^{1+} \right\} , \end{aligned}$$
there exists a minimizer \(\tilde{w}^{1+}\) of the form
$$\begin{aligned} \tilde{w}^{1+} = 1 + \zeta 1_{\mathcal C_1} \end{aligned}$$
where \(\mathcal C_1\) is a Cheeger set of \(\{w \geqslant 1\}\) and \(\zeta \geqslant 0\) is a constant.
Moreover, defining
$$\begin{aligned} \mu := \int _\Omega w^{(0,1)}, \end{aligned}$$
\(w^{(0,1)}\) minimizes \(\mathrm {TV}\) on the set
$$\begin{aligned} \mathcal {A}^{(0,1)}_\mu := \left\{ v \in \mathrm {BV}(\mathbb R^2) \ \big \vert \ v = 1 \text { on } \{w \geqslant 1\}, v = 0 \text { on } \{w \leqslant 0\} \text { and } \int _\Omega v = \mu \right\} . \end{aligned}$$
The remainder of the proof consists in showing that there exists a minimizer of \(\mathrm {TV}\) in \(\mathcal {A}^{(0,1)}_\mu \) that attains only three values. Since \(w^{(0,1)}\) is one of them, there exists some minimizer of \(\mathrm {TV}\) in \(\mathcal {A}^{(0,1)}_\mu \) with values in [0, 1]. We denote by u a generic one. In what follows, we denote by \(E_s := \{u > s\}\) the level-sets of u.
Noticing that \(\mathcal {A}^{(0,1)}_\mu =\mathcal {A}_\mu (\{w \leqslant 0\},\{w \geqslant 1\})\), we can use Lemma 3 to obtain that for almost every s, \(1_{E_{s}}\) minimizes \(\mathrm {TV}\) in \(\mathcal {A}^{(0,1)}_{|E_{s}|}\). That implies in particular that for a.e. s, \(E_s\) minimizes perimeter with fixed mass. We introduce \(E_s^{(1)}\) the set of points of density 1 for \(E_s\) and \(E_s^{(0)}\) the set of points of density 0 for \(E_s\), that is
$$\begin{aligned}&E_s^{(1)} := \left\{ x \in \Omega \, \big \vert \, \lim _{r \rightarrow 0} \frac{ |E_s \cap B_r(x)|}{|B_r(x)|} = 1 \right\} \qquad \text {and} \\&E_s^{(0)} := \left\{ x \in \Omega \, \big \vert \, \lim _{r \rightarrow 0} \frac{ |E_s \cap B_r(x)|}{|B_r(x)|} = 0 \right\} . \end{aligned}$$
The Lebesgue differentiation theorem implies that \(E_s^{(1)} = E_s\) and \(E_s^{(0)} = \Omega \setminus E_s\) up to null sets.
Now, since the level-sets are nested, the function \(s \mapsto |E_s|\) is nonincreasing. Therefore, there exists \(s_\mu \) such that
$$\begin{aligned} \text {for }s > s_\mu , \,|E_s| \leqslant \mu \text {, and for }s < s_\mu ,\, |E_s| \geqslant \mu . \end{aligned}$$
Let us now define
$$\begin{aligned} E^+ := \bigcup _{s > s_\mu } E^{(1)}_s \qquad \text {and} \qquad E^- := \bigcap _{s < s_\mu } \Omega \setminus E^{(0)}_s. \end{aligned}$$
We then have the following fact, to be proved below:
Claim
If \(E^\pm \) is not empty, \(1_{E^\pm }\) minimizes total variation in \(\mathcal {A}^{(0,1)}_{|E^\pm |}\), with \(|E^+| \leqslant \mu \leqslant |E^-|.\)
To finish the proof of Lemma 2, we distinguish two alternatives. Either \(E^+\) or \(E^-\) has mass \(\mu \), in which case the claim above implies Lemma 2, or \(E^\pm \) are both nonempty and
$$\begin{aligned} |E^+| < \mu \qquad \text {and} \qquad |E^-| > \mu . \end{aligned}$$
In the second case, let \(s< s_\mu \). Then \(|E^-| \in (|E^+|,|E_s|)\), so that with \(t := \frac{|E^-| - |E^+|}{|E_s|-|E^+|} \in (0,1)\) we have \(|E^-| = t |E_s| + (1-t) |E^+|.\) The function \(t 1_{E_s} + (1-t) 1_{E^+}\) therefore belongs to \(\mathcal {A}^{(0,1)}_{|E^-|}\). Since \(1_{E^-}\) is a minimizer of \(\mathrm {TV}\) in this set, one must have
$$\begin{aligned} {\text {Per}}(E^-) \leqslant \mathrm {TV}(t 1_{E_s} + (1-t) 1_{E^+}) \leqslant \frac{|E^-| - |E^+|}{|E_s|-|E^+|} {\text {Per}}(E_s) + \frac{|E_s| - |E^-|}{|E_s|-|E^+|} {\text {Per}}(E^+). \end{aligned}$$
This inequality can be rearranged as
$$\begin{aligned} {\text {Per}}(E_s) \geqslant \frac{|E_s| - |E^+|}{|E^-| - |E^+|} {\text {Per}}(E^-) + \frac{|E^-| - |E_s|}{|E^-| - |E^+|} {\text {Per}}(E^+). \end{aligned}$$
(27)
Similarly, if \(s > s_\mu \), one has \(|E_s| < |E^+|\) and \(|E^+|\) is a convex combination of \(\{|E^-|, |E_s|\}.\) The same steps again lead to (27). Finally, using (27) together with the coarea and layer-cake formulas, we write
$$\begin{aligned} \mathrm {TV}(u)&= \int _0^1 {\text {Per}}(E_s) \geqslant \int _0^1 \frac{\left( |E_s| - |E^+| \right) {\text {Per}}(E^-) + \left( |E^-| - |E_s| \right) {\text {Per}}(E^+) }{|E^-| - |E^+|} \\&\geqslant \int _0^1 \frac{{\text {Per}}(E^-) - {\text {Per}}(E^+)}{|E^-| - |E^+|} |E_s| + \frac{|E^-| {\text {Per}}(E^+) - |E^+|{\text {Per}}(E^-) }{|E^-| - |E^+|} \\&= \frac{{\text {Per}}(E^-) - {\text {Per}}(E^+)}{|E^-| - |E^+|} \mu + \frac{|E^-| {\text {Per}}(E^+) - |E^+|{\text {Per}}(E^-) }{|E^-| - |E^+|} \\&= \mathrm {TV}\left( \lambda 1_{E^-} + (1-\lambda ) 1_{E^+} \right) \end{aligned}$$
with \(\lambda = \frac{\mu - |E^+|}{|E^-| - |E^+|}\).
As a result, one can replace \(w^{(0,1)}\) in the decomposition (26) by a three-valued minimizer \(\tilde{w}^{(0,1)}\) of \(\mathrm {TV}\) in \(\mathcal {A}^{(0,1)}_\mu \). Therefore, combining the three modified parts we see that there exists a minimizer in \(\mathcal {A}\)
$$\begin{aligned} \tilde{w} := (\tilde{w}^{1+}-1) + \tilde{w}^{(0,1)} - \tilde{w}^- \end{aligned}$$
which attains at most five values. \(\square \)
Proof of claim
By Lemma 3, \(1_{E_s^{(1)}}\) minimizes total variation in \(\mathcal {A}^{(0,1)}_{|E_s|}\) for almost every s. Let us then select a decreasing sequence \(s_n \searrow s_\mu \) such that for each n, \(1_{E_{s_n}^{(1)}}\) minimizes total variation in \(\mathcal {A}^{(0,1)}_{|E_{s_n}|}\). Since \(E_{s_n}^{(1)} \rightarrow E^+\) in \(L^1\), one has \(|E^+| = \lim |E_{s_n}^{(1)}| = \lim |E_{s_n}|\), and the lower semicontinuity of the perimeter gives
$$\begin{aligned} {\text {Per}}(E^+) \leqslant \liminf {\text {Per}}(E_{s_n}^{(1)}). \end{aligned}$$
In fact, the sequence \({\text {Per}}(E_{s_n}^{(1)})\) is bounded. To see this, we fix a value \(\hat{s} < s_\mu \); since \(E_{s_1} \subset E_{s_n}^{(1)} \subset E_{\hat{s}}\), we can write, for some \(t_n \in (0,1)\),
$$\begin{aligned} |E_{s_n}^{(1)}|=t_n |E_{\hat{s}}| + (1-t_n) |E_{s_1}|. \end{aligned}$$
Therefore, applying Lemma 3 again we obtain
$$\begin{aligned} {\text {Per}}(E_{s_n}^{(1)}) \leqslant \mathrm {TV}\left( t_n 1_{E_{\hat{s}}} + (1-t_n) 1_{E_{s_1}}\right) \leqslant {\text {Per}}(E_{\hat{s}})+{\text {Per}}(E_{s_1}). \end{aligned}$$
Now, let us assume that there exists \(v \in \mathrm {BV}(\Omega )\) with \(\int v = |E^+|\) and \(\mathrm {TV}(v) < {\text {Per}}(E^+) - \varepsilon \). By the above, for every \(\delta >0\) we can find n such that \( |E^+| \geqslant |E_{s_n}| \geqslant |E^+| - \delta \) and
$$\begin{aligned} {\text {Per}}(E^+) \leqslant {\text {Per}}(E_{s_n}^{(1)}) + \delta . \end{aligned}$$
Now, if \(\delta < \varepsilon /10\) is small enough, we can find a ball \(B_n \subset \Omega \) such that \(\int _\Omega v \cdot 1_{\Omega \setminus B_n} = |E_{s_n}|\) and \(\Vert v \Vert _\infty {\text {Per}}(B_n) \leqslant \varepsilon /10\), so we get
$$\begin{aligned} \begin{aligned} \mathrm {TV}(v \cdot 1_{\Omega \setminus B_n})&\leqslant \mathrm {TV}(v) + \Vert v \Vert _\infty {\text {Per}}(B_n) \leqslant {\text {Per}}(E^+) -\varepsilon + \Vert v \Vert _\infty {\text {Per}}(B_n) \\&\leqslant {\text {Per}}(E_{s_n}^{(1)}) + \Vert v \Vert _\infty {\text {Per}}(B_n) + \delta - \varepsilon \leqslant {\text {Per}}(E_{s_n}^{(1)}) - \frac{\varepsilon }{2},\end{aligned} \end{aligned}$$
and therefore we get a contradiction with the \(\mathrm {TV}\)-minimality of \(E_{s_n}^{(1)}\).
Selecting an increasing sequence \(\tilde{s}_n \nearrow s_\mu \) such that \(1_{\Omega \setminus E_{\tilde{s}_n}^{(0)}}\) minimizes \(\mathrm {TV}\) in \(\mathcal {A}^{(0,1)}_{|E_{\tilde{s}_n}|}\), we obtain similarly that \(1_{E^-}\) minimizes \(\mathrm {TV}\) in \(\mathcal A^{(0,1)}_{|E^-|}.\)\(\square \)
Proof of Lemma 3
Since the arguments \(\Omega _0, \Omega _1\) are fixed throughout this proof, we abbreviate \(\mathcal {A}_\tau (\Omega _0, \Omega _1)\) to \(\mathcal {A}_\tau \) for each \(\tau \). First, note that for every \(s_1 < s_2\), the function
$$\begin{aligned} u_{[s_1,s_2]} := s_2 1_{E_{s_2}} + u \cdot 1_{\{s_1 \leqslant u \leqslant s_2\}} + s_1 1_{\{u<s_1\}} \end{aligned}$$
is such that \( v:= \frac{u_{[s_1,s_2]} - s_1}{s_2 - s_1}\) minimizes the total variation in \(\mathcal {A}_{\int v}\). Indeed, suppose that \(\hat{v} \in \mathcal {A}_{\int v}\) satisfies \(\mathrm {TV}(\hat{v}) < \mathrm {TV}(v)\), so that \(\mathrm {TV}(\hat{v}(s_2-s_1)+s_1) < \mathrm {TV}(u_{[s_1,s_2]})\). Since \(u = (u \cdot 1_{u< s_1} -s_1) + u_{[s_1,s_2]} + (u \cdot 1_{u> s_2} - s_2)\), we would then have
$$\begin{aligned} \mathrm {TV}(u)&= \mathrm {TV}((u \cdot 1_{u< s_1} -s_1)) + \mathrm {TV}(u_{[s_1,s_2]}) + \mathrm {TV}((u \cdot 1_{u> s_2} - s_2)) \\&> \mathrm {TV}((u \cdot 1_{u< s_1} -s_1)) + \mathrm {TV}(\hat{v}(s_2-s_1)+s_1) + \mathrm {TV}((u \cdot 1_{u> s_2} - s_2)) \\&\geqslant \mathrm {TV}\big ((u \cdot 1_{u< s_1} -s_1) + (\hat{v}(s_2-s_1)+s_1) + (u \cdot 1_{u> s_2} - s_2)\big ), \end{aligned}$$
where \((u \cdot 1_{u< s_1} -s_1) + (\hat{v}(s_2-s_1)+s_1) + (u \cdot 1_{u> s_2} - s_2) \in \mathcal {A}_{\nu }\), which contradicts the minimality of u.
With \(s_0\) as in the assumptions, we have just seen that for every \(h >0\), \(\frac{u_{[s_0 - h, s_0 + h]} - (s_0-h)}{2h}\) minimizes the total variation in \(\mathcal {A}_{\nu _h}\) with
$$\begin{aligned} \nu _h&:= \frac{ \int _{s_0-h \leqslant u \leqslant s_0 + h} (u - (s_0-h))}{2h} + |E_{s_0 + h}| \\&= \frac{1}{2h} \int _{s_0 - h}^{s_0 + h} | \{ u> t\} \cap \{u \leqslant s_0 + h\} | \, \mathrm {d}t +|E_{s_0 + h}| \\&= \frac{1}{2h} \int _{s_0 - h}^{s_0 + h} | \{ u > t\} | \, \mathrm {d}t = \frac{1}{2h} \int _{s_0 - h}^{s_0 + h} | E_s | \, \mathrm {d}s. \end{aligned}$$
On the other hand, by the coarea formula, the total variation of \(\frac{u_{[s_0 - h, s_0 + h]} - (s_0-h)}{2h}\) is
$$\begin{aligned} \frac{1}{2h} \int _{s_0 - h}^{s_0 + h} {\text {Per}}(E_s) \, \mathrm {d}s. \end{aligned}$$
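The chain of identities computing \(\nu _h\) can be verified discretely. The following sketch (numpy; a piecewise-constant u with one value per unit-area cell, values drawn at random for illustration) checks that the band integral plus the mass of the top level set equals the averaged masses \(\frac{1}{2h} \int _{s_0 - h}^{s_0 + h} |E_s| \, \mathrm {d}s\):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=200)   # one value per unit-area cell of a piecewise-constant u
s0, h = 0.3, 0.25
lo, hi = s0 - h, s0 + h

# nu_h as first written: band contribution plus |E_{s0+h}|.
band = (u >= lo) & (u <= hi)
lhs = (u[band] - lo).sum() / (2 * h) + np.sum(u > hi)

# nu_h after the computation: the average of |E_s| = #{u > s} over s in (lo, hi);
# per cell, the s-measure of {s in (lo, hi) : u > s} is clip(u - lo, 0, hi - lo).
rhs = np.clip(u - lo, 0.0, hi - lo).sum() / (2 * h)

assert abs(lhs - rhs) < 1e-10
```

The agreement is just Fubini's theorem applied cell by cell, which is exactly the manipulation in the displayed computation of \(\nu _h\).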
Finally, let us assume that \(1_{E_{s_0}}\) does not minimize total variation in \(\mathcal {A}_{|E_{s_0}|}\). Then, there would exist \(\varepsilon > 0\) and \(u_0 \in \mathcal {A}_{|E_{s_0}|}\) such that
$$\begin{aligned} {\text {Per}}(E_{s_0}) \geqslant \mathrm {TV}(u_0) + \varepsilon . \end{aligned}$$
Since \(s_0\) is a Lebesgue point, one can find \(\delta >0\) such that for every \(h \leqslant \delta \),
$$\begin{aligned} \left| \frac{1}{2h} \int _{s_0 - h}^{s_0 + h} {\text {Per}}(E_s) \, \mathrm {d}s - {\text {Per}}(E_{s_0}) \right| \leqslant \frac{\varepsilon }{10} \qquad \text {and} \qquad \left| \frac{1}{2h} \int _{s_0 - h}^{s_0 + h} | E_s | \, \mathrm {d}s - |E_{s_0}| \right| \leqslant \frac{\varepsilon }{10} . \end{aligned}$$
Let \(h\leqslant \delta \) and B be a ball such that \({\text {Per}}(B) \leqslant \frac{\varepsilon }{4 \Vert u_0 \Vert _\infty }\). There exists \(\alpha \) such that the function \(u_0 + \alpha 1_B\) satisfies
$$\begin{aligned} \int u_0 + \alpha 1_B = \frac{1}{2h} \int _{s_0 - h}^{s_0 + h} | E_s | \, \mathrm {d}s. \end{aligned}$$
Reducing h if needed, one can enforce that \(|\alpha | \leqslant 2 \Vert u_0 \Vert _\infty \).
Then,
$$\begin{aligned} \mathrm {TV}(u_0 + \alpha 1_B)\leqslant & {} \mathrm {TV}(u_0) + |\alpha | {\text {Per}}(B) \leqslant {\text {Per}}(E_{s_0}) - \varepsilon + |\alpha | {\text {Per}}(B) \\\leqslant & {} \frac{1}{2h} \int _{s_0 - h}^{s_0 + h} {\text {Per}}(E_s) \, \mathrm {d}s - \frac{4 \varepsilon }{10} , \end{aligned}$$
which contradicts the minimality of \(u_{[s_0 - h, s_0+h]}\) and finishes the proof of Lemma 3. \(\square \)
Minimizers with Connected Level-Sets
In this subsection, we refine our analysis slightly, and show the existence of three-valued minimizers for Problem 1 with additional properties. We start with the following definition:
Definition 3
A set of finite perimeter A is called indecomposable, if there are no two disjoint finite perimeter sets B, C such that \(|B|>0\), \(|C|>0\), \(A = B \cup C\) and \({\text {Per}}(A)={\text {Per}}(B)+{\text {Per}}(C)\).
This notion is in fact a natural measure-theoretic analogue of connectedness for sets of finite perimeter; for more information about it, see [2].
Remark 3
By computing the Fenchel dual of Problem 1, it can be seen that the non-zero level-sets of any solution are minimizers of the functional
$$\begin{aligned} E \mapsto {\text {Per}}(E) - \int _{E \setminus \Omega _s} k, \text { with } k \in L^2(\Omega \setminus \Omega _s). \end{aligned}$$
This optimality property in turn implies lower bounds, depending only on k, for the perimeter and mass of E, and in case E can be decomposed in the sense of Definition 3, the same lower bounds also hold for each set in such a decomposition. In consequence, E can only be decomposed into at most finitely many sets. The proof of these statements relies heavily on the results of [2], and is presented in [12] for the unconstrained case, and in [21] for the case with Dirichlet constraints, as used here.
Assuming these results, one can simplify the level sets of solutions further:
Theorem 6
There exists a minimizer for Problem 1 attaining exactly three values for which all non-zero level-sets are indecomposable.
Proof
First, we consider the positive level-set and assume that it is decomposable into two sets \(\Omega _1, \Omega _2\) as in Definition 3. Then the corresponding minimizer u can be written as
$$\begin{aligned} u = \alpha (1_{\Omega _1} + 1_{\Omega _2}) - \beta 1_{\Omega _-}, \end{aligned}$$
where \(\alpha , \beta >0\). Consider a perturbation of u of the form
$$\begin{aligned} u_h = (\alpha +h) 1_{\Omega _1} + (\alpha + k) 1_{\Omega _2} - (\beta + l) 1_{\Omega _-}, \end{aligned}$$
with \(|h| \leqslant \alpha , |k| \leqslant \alpha \), and \(|l| \leqslant \beta \). Then, since \(\Omega _1 \cap \Omega _2 = \emptyset \), \(u_h \in \mathrm {BV}_\diamond \) if and only if
$$\begin{aligned} h|\Omega _1| + k |\Omega _2| - l |\Omega _-| = 0 \quad \text {and} \quad h|\Omega _1^s| + k |\Omega _2^s| - l |\Omega _-^s| = 0, \end{aligned}$$
where \(\Omega _i^s := \Omega _i \cap \Omega _s.\) These two equations lead to
$$\begin{aligned} l = h \frac{|\Omega _1^s||\Omega _2| - |\Omega _1||\Omega _2^s|}{|\Omega _2||\Omega _-^s| - |\Omega _-||\Omega _2^s|} \quad \text {and} \quad k = h \frac{|\Omega _1^s||\Omega _-| - |\Omega _1||\Omega _-^s|}{|\Omega _2||\Omega _-^s| - |\Omega _-||\Omega _2^s|}. \end{aligned}$$
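These expressions for l and k can be checked symbolically. A minimal sketch with exact rational arithmetic (the areas below are invented for illustration) confirms that they satisfy both admissibility constraints:

```python
from fractions import Fraction as F

# Invented areas: totals and solid parts of Omega_1, Omega_2, Omega_-.
A1, A2, Am = F(3), F(2), F(4)       # |Omega_1|, |Omega_2|, |Omega_-|
S1, S2, Sm = F(1), F(1, 2), F(2)    # |Omega_1^s|, |Omega_2^s|, |Omega_-^s|

h = F(1, 10)
den = A2 * Sm - Am * S2             # assumed nonzero
l = h * (S1 * A2 - A1 * S2) / den
k = h * (S1 * Am - A1 * Sm) / den

# Both admissibility constraints hold exactly:
assert h * A1 + k * A2 - l * Am == 0
assert h * S1 + k * S2 - l * Sm == 0
```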
Under our assumptions on \(h, k, l, \Omega _1\) and \(\Omega _2\), and since \(1_{\Omega _1}+1_{\Omega _2}=1_{\Omega _1 \cup \Omega _2}\), the total variation of the perturbed function \(u_h\) can be written as
$$\begin{aligned} \mathrm {TV}(u_h)&= (\alpha +\min (h,k)) {\text {Per}}(\Omega _1 \cup \Omega _2) \\&\quad +(h-k)^+ {\text {Per}}(\Omega _1) + (k-h)^+ {\text {Per}}(\Omega _2)+ (\beta + l) {\text {Per}}(\Omega _-)\\&= (\alpha + h) {\text {Per}}(\Omega _1) + (\alpha + k) {\text {Per}}(\Omega _2) + (\beta + l) {\text {Per}}(\Omega _-). \end{aligned}$$
Then, because u is a minimizer of \(\mathrm {TV}\), it follows that
$$\begin{aligned} h {\text {Per}}(\Omega _1) + k {\text {Per}}(\Omega _2) + l {\text {Per}}(\Omega _-) \geqslant 0. \end{aligned}$$
Since k and l, and hence the left-hand side, are linear in h, one can replace h by \(-h\) and obtain
$$\begin{aligned} h {\text {Per}}(\Omega _1) + k {\text {Per}}(\Omega _2) + l {\text {Per}}(\Omega _-) = 0 \end{aligned}$$
which shows that \(u_h\) is also a minimizer. Now since we have
$$\begin{aligned} \beta = \alpha \,\frac{|\Omega _1|+|\Omega _2|}{|\Omega _-|} \text { and } l = \frac{h|\Omega _1|+k|\Omega _2|}{|\Omega _-|}, \end{aligned}$$
one can choose h such that \(h=-\alpha \) or \(k = -\alpha \) without violating \(|l|\leqslant \beta \), and therefore produce a minimizer whose positive part is either \(\Omega _2\) or \(\Omega _1\), respectively. We proceed similarly for the negative part and therefore obtain an indecomposable negative level-set. \(\square \)
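The expression for \(\mathrm {TV}(u_h)\) used in the proof can be checked on a grid. The following sketch (numpy, discrete anisotropic total variation; the three mutually non-adjacent rectangles standing in for \(\Omega _1, \Omega _2, \Omega _-\) and the coefficients are made up) verifies that, as long as all three coefficients remain positive, the total variation is the corresponding weighted sum of perimeters:

```python
import numpy as np

def tv(u):
    """Discrete anisotropic total variation on a grid."""
    return float(np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum())

def per(E):
    return tv(E.astype(float))

O1 = np.zeros((12, 12), dtype=bool); O1[1:4, 1:4] = True    # stands in for Omega_1
O2 = np.zeros((12, 12), dtype=bool); O2[1:4, 6:10] = True   # stands in for Omega_2
Om = np.zeros((12, 12), dtype=bool); Om[6:10, 2:9] = True   # stands in for Omega_-

alpha, beta = 1.0, 0.5
h, k, l = 0.2, -0.3, 0.1  # a perturbation keeping all three coefficients positive
u_h = (alpha + h) * O1 + (alpha + k) * O2 - (beta + l) * Om
expected = (alpha + h) * per(O1) + (alpha + k) * per(O2) + (beta + l) * per(Om)
assert abs(tv(u_h) - expected) < 1e-12
```

Since the three sets are disjoint and not adjacent, every interface separates one of them from the zero background, which is why the total variation is linear in the three (positive) coefficients.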
Remark 4
In the above proof, through an adequate choice of components for deletion, one can even obtain simply connected level sets. The measure-theoretic notion corresponding to simple connectedness is defined in [2] as boundedness of the connected components of the complement of the set, these connected components having been defined through indecomposability. For example, assuming that \(\Omega _2\) is fully enclosed in \(\Omega _-\) (that is, if \(\partial \Omega _2 \cap \partial \Omega _- = \partial \Omega _2\)), then the total variation of \(u_h\) can also be written
$$\begin{aligned} \mathrm {TV}(u_h) = (\alpha + h) {\text {Per}}(\Omega _1) + (\alpha + k + \beta + l) {\text {Per}}(\Omega _2) + (\beta + l) \big ({\text {Per}}(\Omega _-) - {\text {Per}}(\Omega _2)\big ), \end{aligned}$$
which is linear in h as long as \(k \geqslant -\alpha - \beta - l\). The equality case in this last constraint corresponds to joining \(\Omega _2\) to \(\Omega _-\), thereby avoiding the creation of a “hole” in \(\Omega _-\) by the procedure mentioned above (which replaces \( \alpha 1_{\Omega _2}\) by zero). Clearly, this procedure can also be performed for the positive level set, and in fact the “holes” to be deleted could also be connected components of the zero level set. Therefore, a solution in which both the positive and the negative level set are simply connected can be obtained.
Remark 5
The intuition behind these last results is that, as in the proof of Theorem 5, the constraints of the problem are linear with respect to the values, and the total variation is also linear as long as the signs of the differences of values at the interfaces do not change. In particular, the points at which the topology of the level sets changes are situations in which these signs change (that is, the values of two adjacent level sets are equal).