Planar least gradient problem: existence, regularity and anisotropic case

We show existence of solutions to the least gradient problem on the plane for boundary data in BV(∂Ω). We also provide an example of a function f ∈ L^1(∂Ω) \ (C(∂Ω) ∪ BV(∂Ω)) for which a solution exists. We also show non-uniqueness of solutions in the anisotropic case for a nonsmooth anisotropy, even for smooth boundary data. We additionally prove a regularity result valid also in higher dimensions.


Introduction
The least gradient problem is the minimization problem min { ∫_Ω |Du| : u ∈ BV(Ω), u|_∂Ω = f }, arising in the study of the convergence of solutions of the p-Laplace equation as p → 1, see [11], as a dimensional reduction in the free material design problem, see [7], or (in its anisotropic version) in the conductivity imaging problem, see [10].
Here, we may impose certain conditions on Ω and the function f and use different approaches to the boundary condition. In [18] f is assumed to be continuous and the boundary condition is in the sense of traces. The authors also impose a set of geometrical conditions on Ω, which are satisfied by strictly convex sets; in fact, in dimension two they are equivalent to strict convexity. The authors of [10] consider an anisotropic version of the problem and introduce another geometric condition on Ω called the barrier condition; see Definition 5.8. Another approach is presented in [13], where boundary datum belongs to L 1 (∂Ω), but the boundary condition is understood in a weaker sense.
Throughout this paper Ω ⊂ R^N shall be an open bounded set with Lipschitz boundary. When necessary, we will additionally assume convexity or strict convexity of Ω and bounds on the dimension N. We consider the following minimization problem, called the least gradient problem:

min { ∫_Ω |Du| : u ∈ BV(Ω), Tu = f },   (1)

where T denotes the trace operator T : BV(Ω) → L^1(∂Ω). Unless specified otherwise, the boundary datum f belongs to L^1(∂Ω). Even existence of solutions in this sense is not obvious, as the functional is not lower semicontinuous with respect to L^1 convergence. In fact, in [17] the authors have given an example of a function f for which the corresponding two-dimensional least gradient problem has no solution. It was the characteristic function of a certain fat Cantor set; note that it does not lie in BV(∂Ω).
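To build intuition for the minimization, here is a toy one-dimensional discretization (an illustrative sketch, not the paper's method): among discrete extensions of the boundary values u(0) = 0, u(1) = 1, the total variation Σ|u_{i+1} − u_i| is minimized exactly by the monotone ones, with value 1.

```python
import numpy as np

def tv(u):
    """Discrete total variation: sum of |u[i+1] - u[i]|."""
    return float(np.sum(np.abs(np.diff(u))))

x = np.linspace(0.0, 1.0, 101)
monotone = x                                      # a monotone extension
oscillating = x + 0.1 * np.sin(8 * np.pi * x)     # same boundary values 0 and 1

assert abs(tv(monotone) - 1.0) < 1e-12   # TV of any monotone extension is 1
assert tv(oscillating) > tv(monotone)    # oscillation only increases TV
```

Any non-monotone competitor with the same endpoints has strictly larger variation, which is the one-dimensional shadow of the minimality used throughout the paper.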
There are two possible ways to deal with Problem (1). The first is relaxation of the functional F; such a reformulation and its relationship with the original statement are considered in [13] and [12]. Another way is to ask when Problem (1) has a solution in the classical sense and what its regularity is. This paper uses the latter approach, and its main result, Theorem 1.1, gives a sufficient condition for existence of solutions of the least gradient problem on the plane: it suffices that the boundary datum belongs to BV(∂Ω). Obviously, this condition is not necessary; the construction given in [18] for continuous boundary data does not require the boundary data to have finite total variation. We also provide an example of a function f ∈ L^1(∂Ω) \ (C(∂Ω) ∪ BV(∂Ω)) for which the solution exists, see Example 4.8. Another result included in this article provides a certain regularity property: Theorem 1.2 asserts existence of a decomposition of a function of least gradient into a continuous and a locally constant function. This is not a property shared by all BV functions. Let us clarify the notation in that theorem. Firstly, u_c is a continuous function and u_j is a piecewise constant function; in particular, Du_c does not consist only of the Cantor part of the derivative, but also of the absolutely continuous part. Secondly, Du_j is the derivative of u_j, while (Du)_j is the jump part of the derivative of u; the same applies to Du_c and (Du)_c.
The final chapter takes on the subject of anisotropy. As was proved in [10], for a uniformly convex anisotropic norm φ on R^N which is smooth with respect to the Euclidean norm, there is a unique solution to the anisotropic least gradient problem. We consider l^p norms on the plane for p ∈ [1, ∞] to show that for p = 1, ∞, when the anisotropy is neither smooth nor uniformly convex, the solutions need not be unique even for smooth boundary data (see Examples 5.16 and 5.17), whereas for 1 < p < ∞, when the anisotropy is smooth and uniformly convex, Theorem 1.3 asserts that the only connected minimal surface with respect to the l^p norm is a line segment, as in the isotropic case.

Theorem 1.3 Let Ω ⊂ R^2 be an open bounded convex set. Let the anisotropy be defined by the function φ(x, Du) = |Du|_p, where 1 < p < ∞. Let E be a φ-minimal set with respect to Ω, i.e. χ_E is a function of φ-least gradient in Ω. Then every connected component of ∂E is a line segment.
Finally, we observe in Corollary 5.18 that if Ω is an open bounded set with Lipschitz boundary, then Theorem 1.3 implies existence of a unique solution to the anisotropic least gradient problem for continuous boundary data.
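The non-uniqueness mechanism for p = 1 can be illustrated numerically (a hypothetical sketch, not taken from the paper). Under the convention that the anisotropic length integrates the l^p norm of the unit normal in arc length, the φ-length of a polygonal curve reduces to the sum of l^p norms of its edge vectors, so a straight segment and any monotone staircase between the same endpoints have equal l^1-length, while for 1 < p < ∞ the segment is strictly shorter.

```python
import numpy as np

def phi_length(points, p):
    """l^p-anisotropic length of a polygonal curve: the l^p norm of the unit
    normal integrated in arc length reduces to the sum of l^p norms of the
    edge vectors (the l^p norm is invariant under the 90-degree rotation)."""
    edges = np.diff(np.asarray(points, dtype=float), axis=0)
    return float(np.sum(np.linalg.norm(edges, ord=p, axis=1)))

segment = [(0, 0), (1, 1)]
stairs = [(0, 0), (0.25, 0), (0.25, 0.5), (0.5, 0.5), (0.5, 1), (1, 1)]

# p = 1: equal anisotropic lengths -> many minimizers, hence non-uniqueness
assert abs(phi_length(segment, 1) - phi_length(stairs, 1)) < 1e-12
# 1 < p < infinity: the straight segment is strictly shorter
assert phi_length(segment, 2) < phi_length(stairs, 2)
```

This is exactly the dichotomy of the final chapter: for p = 1 many φ-minimal curves join the same endpoints, while strict convexity of the l^p ball for 1 < p < ∞ singles out the line segment.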

Least gradient functions
Now we shall briefly recall basic facts about least gradient functions. What we need most in this paper is the Miranda stability theorem, see [14], and the relationship between functions of least gradient and minimal surfaces. For more information see [4,6]. We say that u ∈ BV(Ω) is a function of least gradient if for every v ∈ BV(Ω) with compact support in Ω we have ∫_Ω |Du| ≤ ∫_Ω |D(u + v)| (Definition 2.1). If Ω is bounded with Lipschitz boundary, we may instead assume that v ∈ BV_0(Ω), i.e. v is a BV function with zero trace on ∂Ω. This equivalence follows from approximation of v by functions of the form v_n = vχ_{Ω_n} for properly chosen Ω_n; a full proof can be found in [19, Theorem 2.2].
Let Ω ⊂ R^N be an open bounded set with Lipschitz boundary. We say that u ∈ BV(Ω) is a solution of the least gradient problem in the sense of traces for given f ∈ L^1(∂Ω) if u is a function of least gradient and Tu = f. To underline the difference between the two notions, we recall a stability theorem by Miranda (Theorem 2.3): if u_n is a sequence of functions of least gradient in an open set Ω converging in L^1_loc(Ω) to u, then u is a function of least gradient in Ω.
An identical result for solutions of the least gradient problem is impossible, as the trace operator is not continuous in the L^1 topology; see Example 4.7. We need an additional assumption regarding traces. A correct formulation would be: let Ω ⊂ R^N be an open bounded set with Lipschitz boundary. Suppose that f_n → f in L^1(∂Ω). Let u_n be a solution of the least gradient problem for f_n, i.e. u_n is a function of least gradient such that Tu_n = f_n. If u_n → u in L^1(Ω) and Tu = f, then u is a solution of the least gradient problem for f.
To deal with regularity of solutions of the least gradient problem, it is convenient to consider boundaries of superlevel sets of u, i.e. sets of the form ∂{u > t} for t ∈ R, in view of the two subsequent results. A comprehensive description of the regularity theory for minimal sets can be found in [6]. As sets of finite perimeter are defined up to a set of measure zero, the regularity results depend on the representative; here, we employ the standard convention: x ∈ E if and only if

limsup_{r→0} |E ∩ B(x, r)| / |B(x, r)| > 0,

i.e. a set E of finite perimeter consists of points of positive density. Under this convention, [6, Theorem 10.11] implies that in dimensions 2 ≤ N ≤ 7 the boundary of a minimal set E is locally an analytic hypersurface. Now, if u ∈ BV(Ω) is a function of least gradient, then Theorem 2.6 implies that each set E_t = {u ≥ t} is minimal; we change the representative of each set E_t to Ẽ_t following the convention above. This way we define a function ũ(x) = sup{t : x ∈ Ẽ_t} satisfying Ẽ_t = {ũ ≥ t}; but Lemma 2.5 implies that u and ũ differ on a set of measure zero. Hence, after a change of representative, the boundary of each superlevel set of u is a union of analytic minimal surfaces; from now on we assume that we deal with such a representative.

Sternberg-Williams-Ziemer construction
In [18] the authors have shown existence and uniqueness of solutions to the least gradient problem for continuous boundary data and strictly convex Ω (or, to be more precise, the authors assume that ∂Ω has non-negative mean curvature and is not locally area-minimizing). The proof of existence is constructive and we shall briefly recall it. The main idea is to reverse Theorem 2.6 and construct almost all superlevel sets of the solution; by Lemma 2.5 this uniquely determines the solution. We fix the boundary data g ∈ C(∂Ω). By the Tietze extension theorem it has an extension G ∈ C(R^N \ Ω). We may also demand that G has support contained in a ball B(0, R) containing Ω and that G ∈ BV(R^N \ Ω). Let L_t = (R^N \ Ω) ∩ {G ≥ t}. Since G ∈ BV(R^N \ Ω), for a.e. t ∈ R we have P(L_t, R^N \ Ω) < ∞. For such t, let E_t be a set solving the pair of problems

min { P(E, R^N) : E \ Ω = L_t },   (2)

max { |E ∩ Ω| : E is a minimizer of (2) }.
Let us note that both of these problems have solutions. Let m ≥ 0 be the infimum in the first problem and let E_n be a sequence of sets such that P(E_n, R^N) → m. By compactness of the unit ball in BV(B(0, R)) and lower semicontinuity of the total variation, on some subsequence the characteristic functions converge in L^1(B(0, R)) to the characteristic function of a minimizer. Similarly, let M ≤ |Ω| be the supremum in the second problem and take a sequence of sets E_n such that |E_n ∩ Ω| → M; then on some subsequence χ_{E_{n_k}} → χ_E in L^1(B(0, R)), and E realizes the supremum. Then we can show existence of a set T ⊂ R of full measure such that for every t ∈ T we have ∂E_t ∩ ∂Ω ⊂ g^{-1}(t) and for every t, s ∈ T, s < t, the inclusion E_t ⊂ int E_s holds. This enables us to treat the E_t as superlevel sets of a certain function, which we define on Ω by the formula u(x) = sup{t : x ∈ E_t}. It turns out that u ∈ C(Ω) ∩ BV(Ω) and u is a solution to the least gradient problem for g. Moreover |{u ≥ t} △ (E_t ∩ Ω)| = 0 for a.e. t ∈ R. The uniqueness proof is based on the following comparison principle:

Theorem 2.7 ([18, Theorem 4.1]) Suppose that Ω ⊂ R N is an open bounded set with
Lipschitz boundary such that ∂Ω has non-negative mean curvature and is not locally area-minimizing. Let g_1, g_2 ∈ C(∂Ω) satisfy g_1 ≥ g_2 on ∂Ω. Then the corresponding minimizers of the least gradient problem satisfy u_1 ≥ u_2 in Ω.
In the existence proof in Sect. 4 we are going to use a particularly simple case of the construction. Suppose that Ω ⊂ R^2 and g ∈ C^1(∂Ω). Firstly, notice that we only have to construct the sets E_t for almost all t. Secondly, we recall that in dimension two the only connected minimal surface is a line segment; thus, to find the set E_t, let us fix t and look at the preimage g^{-1}(t). We connect its points using line segments with the sum of their lengths as small as possible. This can cause problems, for example if we take t to be a global maximum of the function; thus, let us take t to be a regular value (by Sard's theorem almost all values are regular), so that the preimage g^{-1}(t) contains finitely many points. As the derivative of g at every point p ∈ g^{-1}(t) is nonzero, there is at least one line segment L ⊂ ∂E_t ending at p. By minimality of ∂E_t there can be at most one (we prove this rigorously even for discontinuous g in Proposition 3.5), so there is exactly one line segment of ∂E_t ending at every p ∈ g^{-1}(t).
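The simple planar case can be prototyped as follows (a hypothetical sketch, under the assumption that the preimage is finite and the minimal pairing is found by brute force; the minimal pairing is automatically non-crossing, since crossing segments could be shortened).

```python
from itertools import permutations
from math import cos, sin, pi, dist

def min_pairing(points):
    """Brute-force the pairing of an even number of boundary points that
    minimizes the total length of the connecting line segments."""
    n = len(points)
    best, best_pairs = float("inf"), None
    for perm in permutations(range(n)):
        # canonical order within and between pairs, to skip duplicate pairings
        if any(perm[i] > perm[i + 1] for i in range(0, n, 2)):
            continue
        if any(perm[i] > perm[i + 2] for i in range(0, n - 2, 2)):
            continue
        pairs = [(perm[i], perm[i + 1]) for i in range(0, n, 2)]
        total = sum(dist(points[i], points[j]) for i, j in pairs)
        if total < best:
            best, best_pairs = total, pairs
    return best, best_pairs

# g(theta) = sin(2*theta) at the regular value t = 1/2: four preimage points
angles = [pi / 12, 5 * pi / 12, 13 * pi / 12, 17 * pi / 12]
points = [(cos(t), sin(t)) for t in angles]
best, pairs = min_pairing(points)
assert abs(best - 2.0) < 1e-9          # two chords of central angle pi/3
assert pairs == [(0, 1), (2, 3)]       # neighbouring points get connected
```

The selected pairing connects neighbouring preimage points, exactly as the segments bounding the sets E_t do in the construction.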
The proof of existence relies heavily on the continuity of the boundary data g on numerous occasions: firstly, via the Tietze extension theorem; secondly, to prove that ∂E_t ∩ ∂Ω ⊂ g^{-1}(t), where for discontinuous g we would have to add the set of discontinuity points to the right hand side; finally, in order for u to be well defined we need the fact that E_t ⊂ int E_s for s < t, which does not necessarily hold for discontinuous g. Thus we may not directly apply the above construction to discontinuous boundary data.

BV on a one-dimensional compact manifold
Using a partition of unity, one may define BV spaces on arbitrary compact manifolds; such an approach is presented in [3]. It is not necessary for us; it suffices to consider the one-dimensional case. Let Ω ⊂ R^2 be open and bounded with C^1 boundary. We consider functions on ∂Ω integrable with respect to the one-dimensional Hausdorff measure (which are approximately continuous H^1-a.e.). We recall (see [5, Chapter 5.10]) that the one-dimensional BV space on an interval (a, b) ⊂ R may be described in the following way: f ∈ BV(a, b) if and only if there is a constant M such that

Σ_{i=0}^{n-1} |f(x_{i+1}) − f(x_i)| ≤ M

for every a < x_0 < · · · < x_n < b, where the x_i are points of approximate continuity of f. The smallest such constant M turns out to be the usual total variation of f. We may extend this definition to a one-dimensional manifold diffeomorphic to an open interval, provided it is parametrized by arc length, i.e. all tangent vectors have length one. Repeating the proof from [5] we get that this definition coincides with the divergence definition. We then extend it to a one-dimensional compact connected manifold by removing a point p and computing the total variation of f on the resulting arc. This definition does not depend on the choice of p, as in dimension one the total variation on disjoint intervals is additive; thus for different points p_1, p_2 we get

|Df|(∂Ω \ {p_1}) = |Df|((p_1, p_2)) + |Df|((p_2, p_1)) = |Df|(∂Ω \ {p_2}),

where (p_1, p_2) is an oriented arc from p_1 to p_2. Thus all local properties of BV(∂Ω) hold. However, some global properties need not hold. For example, the decomposition theorem f = f_ac + f_j + f_s does not hold; consider Ω = B(0, 1) and f = arg(z). The main reason is that π_1(∂Ω) ≠ 0.
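The partition characterization of the total variation, and the arg(z) example, can be illustrated numerically (an illustrative sketch; the sampling and the tolerance are assumptions). Going once around the circle, arg(z) increases continuously by 2π and then jumps down by 2π, so its total variation on ∂B(0, 1) is 4π.

```python
import numpy as np

def circle_tv(values):
    """Total variation of samples along a closed loop (wrap-around included),
    i.e. the supremum in the partition definition, along one fine partition."""
    v = np.asarray(values, dtype=float)
    return float(np.sum(np.abs(np.diff(v))) + abs(v[0] - v[-1]))

theta = np.linspace(-np.pi, np.pi, 10001, endpoint=False)
f = theta            # f = arg(z) on the unit circle, with values in [-pi, pi)

# 2*pi of continuous increase plus a 2*pi jump at z = -1:
assert abs(circle_tv(f) - 4 * np.pi) < 1e-2
```

Note that the jump of f cannot be subtracted by a function that is piecewise constant on the whole circle, which is the failure of the global decomposition mentioned above.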

Regularity of least gradient functions
In this section we are going to prove several regularity results about functions of least gradient, valid up to dimension 7. We start with a weak form of the maximum principle and later prove a result on the decomposition of a least gradient function into a continuous and a jump-type part; this decomposition holds not only at the level of derivatives, but also at the level of functions. We will extensively use the regularity theory for area-minimizing sets, the monotonicity formula for minimal surfaces and a version of the maximum principle for minimal graphs. Moreover, as Theorem 2.6 works also for superlevel sets of the form {u ≥ t}, all results in this section also hold with {u ≥ t} in place of {u > t}.
A well-known consequence of the monotonicity formula for minimal surfaces, see [16, Theorem 17.6], is Proposition 3.2 below. The result is originally stated in greater generality; this version takes into account the regularity theory for area-minimizing sets as stated in Proposition 3.1.
Firstly, we provide a reformulation of Theorem 2.6 taking into account the regularity theory for area-minimizing sets in low dimensions. It is an analogue of the weak maximum principle for linear equations; geometrically speaking, the linear weak maximum principle states that every level set touches the boundary. As by Theorem 2.6 for every t ∈ R the set {u > t} is area-minimizing in Ω, in view of the above results we obtain Proposition 3.4. For bounded convex sets on the plane this can be strengthened (Proposition 3.5): the connected components of ∂{u > t} cannot intersect even on the boundary of Ω. We recall that bounded convex sets in R^N automatically have Lipschitz boundary, so the trace operator is well defined. Proof There is only one thing left to prove: that two connected components of ∂E_t = ∂{u > t} do not intersect on ∂Ω. Suppose we have at least two line segments in ∂E_t ending at the same boundary point x: xy and xz. Due to Proposition 3.3, there are at most finitely many line segments of the form xx′ ⊂ ∂E_t intersecting the triangle xyz enclosed by the line segments xy, xz, yz (as Ω is convex, we have xyz ⊂ Ω); without loss of generality we may assume that xy and xz are adjacent.
Consider the function χ_{E_t}. In the triangle xyz we have χ_{E_t} = 1 and χ_{E_t} = 0 on the opposite sides of xy, xz (or the opposite situation, which we handle similarly). Now, we see that the function χ_{E_t} − χ_{xyz} has strictly smaller total variation due to the triangle inequality. While the difference between χ_{E_t} and χ_{E_t} − χ_{xyz} is not compactly supported in Ω, these functions have the same trace, so in view of Definition 2.1 we deduce that χ_{E_t} is not a function of least gradient. Thus at every point of ∂Ω at most one line segment from ∂E_t ends. The next corollary is a direct consequence of Proposition 3.4.

Corollary 3.6 Let Ω ⊂ R N , where N ≤ 7, be an open set and take u ∈ BV (Ω) to be a least gradient function. Let E t = {u > t}. For each x ∈ Ω and t ∈ R there exists a ball B(x, r ) such that there is at most one connected component of ∂ E t intersecting this ball.
Remark 3.7 Even in two dimensions the above result may fail for x ∈ ∂Ω. Even though Proposition 3.5 excludes the possibility of intersection of two connected components of ∂E_t on ∂Ω, there still may be points in whose neighbourhood there are infinitely many connected components of ∂E_t; see Examples 4.7 and 4.8. Now, we recall the definitions of the jump set J_u and the approximate discontinuity set S_u (see [2, Chapter 3]): we say that x ∈ J_u, the jump set of u, if there exist a unit vector ν (called the normal vector) and real numbers a ≠ b such that the approximate limits of u at x from the two sides of the hyperplane orthogonal to ν equal a and b respectively. The triple (a, b, ν) is uniquely determined up to a permutation of a, b and the sign of ν.
We say that x ∈ S_u, the approximate discontinuity set of u, if the upper and lower approximate limits of u at x differ. As J_u = S_u, due to our choice of representative [9, Theorem 4.1] implies that u has only jump-type discontinuities, i.e. it is continuous on Ω \ J_u.
Proof The inclusion J_u ⊂ S_u is obvious from the definitions. Now, we will prove that S_u is a subset of the right hand side. Let x ∈ S_u. By the definition of S_u, for every t ∈ (u^∧(x), u^∨(x)) the density of {u > t} at x is neither 0 nor 1, so x ∈ ∂^M{u > t}, where ∂^M denotes the measure-theoretic boundary of a set. As for N ≤ 7 with our choice of representative the boundary of {u > t} coincides with its measure-theoretic boundary and its reduced boundary, we obtain the second inclusion: if x ∈ S_u, then x ∈ ∂*{u > t} for every t in some open interval.
Finally, we will prove that the right hand side is a subset of J_u. Fix x ∈ ∂*{u > s_0} ∩ ∂*{u > t_0}. Using Corollary 3.6 we choose a ball B = B(x, r_0) such that there is exactly one connected component S_{t_0} of ∂{u > t_0} and one connected component S_{s_0} of ∂{u > s_0} intersecting this ball. Due to Proposition 3.4 they are minimal surfaces and by Proposition 3.2 we have S_{t_0} = S_{s_0} = S. Now we define two auxiliary functions u^+ and u^− from S to R, recording the largest and the smallest level whose boundary passes through a given point. Using an argument as above, we notice that due to Proposition 3.2 both u^+ and u^− are constant along S, and for each t ∈ (u^−, u^+) the connected component S_t of ∂{u > t} passing through x coincides with S. In particular the normal vectors at x coincide, so the normal at x does not depend on t ∈ (u^−, u^+). We again write E_t = {u > t}. Notice that the set B \ S is a union of two disjoint connected open sets B^+ and B^−, where B^+ = (B \ S) ∩ E_{t_0} and B^− = (B \ S) \ E_{t_0}; as B is convex, this can either be seen directly or (as B is homeomorphic to R^N) as a simple case of the Alexander duality theorem, see [8, Theorem 27.10]. We will show that u^+ and u^− coincide with the trace values of u at x along S from B^+ and B^− respectively; let T_{B^±} u denote the trace of u on S from B^±. We will prove the equality for u^+ (the other case is analogous). Firstly, we use Corollary 3.6 to notice that for every t ∈ (u^−, u^+) there exists a smaller ball B(x, r) ⊂ B such that S is the only connected component of ∂E_t intersecting this ball. Then u is greater than or equal to t on B(x, r) ∩ B^+; thus the trace at x is greater than or equal to u^+. We have to prove the opposite bound.
Let us denote by s^+ the limit defining the trace of u at x from B^+; it is clearly greater than or equal to u^+. The other inequality follows from the definition of u^+: if s^+ > u^+, then x ∈ ∂{u > (u^+ + s^+)/2}, a contradiction with the maximality of u^+. Now take s > u^+ arbitrarily close to u^+. By Corollary 3.6 there is a smaller ball B(x, ρ) intersected by only one connected component of ∂E_s; let us denote it by S′. We have two possibilities: either the subset of B(x, ρ) bounded by S and S′ is a subset of B^+ ∩ E_s or it is a subset of B^+ \ E_s. We see that the first case is impossible by taking sufficiently small r in Eq. (3). In the second case we obtain, on an even smaller ball B(x, ρ′), that u^+ < u ≤ s, so again u^+ ≤ T_{B(x,ρ′)∩B^+} u ≤ s, and by letting s → u^+ we obtain that T_{B^+} u(x) ≤ u^+. Thus the traces from the two sides are well defined and distinct, so x ∈ J_u. This result is optimal, as it fails for least gradient functions in higher dimensions: let E ⊂ R^8 be an open set whose boundary is the Simons cone. Then u = χ_E is a function of least gradient, 0 ∉ J_u and 0 ∈ S_u; in particular χ_E is not continuous at 0. Proof Take x ∈ ∂E_t. Let S be the (unique) connected component of ∂E_t passing through x. Using Proposition 3.9 we notice that either S ∩ J_u = ∅ or S ⊂ J_u. In the former case u is continuous at every point of S, so by our choice of representative the trace of u is constant along S. Thus we only have to consider the latter case, i.e. S ⊂ J_u. Now, we define u^+ and u^− as in Proposition 3.9; they are constant along S and they coincide with the traces of u from both sides of S.
Now we have all the tools to prove Theorem 1.2. The notation in the theorem and its proof is described in the introductory chapter, after the statement of the theorem.
Proof of Theorem 1.2 1. From Corollary 3.11 we have J_u = ⋃_{k∈J} S_k, where the S_k are pairwise disjoint minimal surfaces without boundary in Ω and J is at most countable. Fix any x_0 ∈ Ω \ J_u. As Ω is convex, each of the surfaces S_k divides Ω into two open connected sets; while the two sets themselves do not depend on the choice of x_0, we denote them Ω_k(x_0) and Ω′_k(x_0), with the numbering chosen so that x_0 ∈ Ω_k(x_0). 2. To each such surface we ascribe a weight a_k(x_0): by Corollary 3.12 the trace of u is constant on each side of S_k, and a_k(x_0) is the jump of u across S_k, with the sign determined by which side contains x_0. For x ∈ Ω \ J_u, let P(x) denote the set of those k such that x_0 and x lie on the opposite sides of S_k. We define u_j(x, x_0) = Σ_{k∈P(x)} a_k(x_0). Heuristically, this is the function that counts all the jumps of u along a path going from x_0 to x with appropriate signs; see also Remark 3.15. 3. The function u_j(x) = u_j(x, x_0) does not depend on the choice of x_0 up to an additive constant: choose instead some x_1 ∈ Ω \ J_u and take any x ∈ Ω \ J_u. For each k, either S_k separates x_0 and x_1, in which case the traces in the definitions of a_k(x_0) and a_k(x_1) appear in the opposite order and a_k(x_1) = −a_k(x_0), or it does not, in which case a_k(x_1) = a_k(x_0). As P(x) computed with respect to x_0 and with respect to x_1 differ precisely by the set of those k for which S_k separates x_0 and x_1, if we chose x_1 in place of x_0, the function u_j would change by the summand u_j(x_1, x_0). We see that u_j ∈ L^1(Ω) and that it takes at most countably many values. 4. We notice that Du_j = (Du)_j: indeed, J_{u_j} = J_u and the jumps along connected components of J_u have the same magnitude. We define u_c = u − u_j and see that (Du_c)_j = 0. 5. The functions u_c, u_j defined above are functions of least gradient.
Suppose that u_j is not a function of least gradient, i.e. there exists v ∈ BV(Ω) with compact support such that ∫_Ω |D(u_j + v)| < ∫_Ω |Du_j|. Then we would get

∫_Ω |Du| ≤ ∫_Ω |D(u + v)| ≤ ∫_Ω |D(u_j + v)| + ∫_Ω |Du_c| < ∫_Ω |Du_j| + ∫_Ω |Du_c| = ∫_Ω |Du|,

where the first inequality follows from u being a function of least gradient, and the last equality from the measures Du_c and Du_j being mutually singular; this is a contradiction. The proof for u_c is analogous. 6. The function u_c is continuous. As u_c is of least gradient, if it were not continuous at some x ∈ Ω, then by Proposition 3.9 a certain set of the form ∂{u_c > t} would pass through x. Let S be a connected component of ∂{u_c > t} containing x. Again, as u_c is of least gradient, by Corollary 3.12 u_c has a constant nonzero jump along S, which is impossible as (Du_c)_j = 0. 7. What is left is to prove uniqueness of such a decomposition. Let u = u¹_c + u¹_j = u²_c + u²_j. Rearranging the summands we obtain u¹_c − u²_c = u²_j − u¹_j; the distributional derivative of the left hand side is a continuous measure, while the distributional derivative of the right hand side is supported on a set which is σ-finite with respect to H^{N−1}, so both of them are zero measures. But the condition Dv = 0 implies v = const, so the functions u¹_c, u²_c differ by an additive constant.
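A one-dimensional analogue of the decomposition u = u_c + u_j can be sketched numerically (hypothetical toy code with an ad hoc jump_threshold; the paper's decomposition is of course not threshold-based): collect the detected jumps into a cumulative, piecewise constant u_j and keep the jump-free remainder u_c.

```python
import numpy as np

def decompose_1d(u, jump_threshold):
    """Split samples of a 1-D function into a jump part u_j (a cumulative sum
    of the detected jumps, hence piecewise constant) and u_c = u - u_j."""
    du = np.diff(u)
    jumps = np.where(np.abs(du) > jump_threshold, du, 0.0)
    u_j = np.concatenate([[0.0], np.cumsum(jumps)])
    return u - u_j, u_j

x = np.linspace(0.0, 1.0, 1001)
u = x ** 2 + np.where(x > 0.5, 1.0, 0.0)   # continuous part plus a unit jump
u_c, u_j = decompose_1d(u, jump_threshold=0.5)

assert np.max(np.abs(np.diff(u_c))) < 0.01       # u_c has no jumps left
assert len(np.unique(u_j)) == 2                  # u_j takes two values
assert abs(np.max(u_j) - 1.0) < 0.01             # the jump height is ~1
```

Here u_c recovers the smooth part x² (up to a sampling artifact of order of the grid step), mirroring the statement that u_c is continuous and u_j locally constant.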

Example 3.13
In this decomposition u_c is not necessarily continuous up to the boundary. Let us use complex notation for the plane. We take Ω = B(1, 1). Let the boundary values be given by the formula f(z) = arg(z).

Remark 3.14
If J_u is closed relative to Ω, for example if H^{N−1}(J_u) < ∞ (this suffices, as each connected component of J_u is a smooth hypersurface, closed relative to Ω), then Ω \ J_u is open, each of its connected components U_i is open, and the function u_j is piecewise constant: from Eq. (4) we see that it is constant on each U_i.

Remark 3.15
In two dimensions, we may give a clear geometrical meaning to the function u_j. Take a path P which parametrizes a (finite) polygonal chain with vertices in Ω \ J_u such that P(0) = x_0, P(1) = x (for example, we may take P to parametrize the line segment x_0x). The path P intersects an at most countable family of the line segments S_k. If P passes through S_k from Ω_k(x_0) to the other side, then we add to u_j the summand a_k(x_0); if in the opposite direction, then we add the summand −a_k(x_0). As we see, some of the summands may cancel out. Then we may prove that this definition does not depend on the choice of P, and depends on the choice of x_0 only up to an additive constant. Using this line of reasoning in higher dimensions we run into geometric difficulties, such as the fact that even if P parametrizes a line segment, it may intersect S_k multiple times and these intersections might not be transversal, in which case we do not know how to assign the sign of a_k(x_0). The proof of Theorem 1.2 as presented above defines the same function u_j and avoids these geometric difficulties.
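The signed-crossing description of u_j can be sketched for chords of a convex planar domain (a hypothetical toy, in which each weight already encodes the signed jump relative to the side of the base point):

```python
def side(p, a, b):
    """True/False according to the side of the line through a and b on which
    the point p lies (sign of the cross product)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) > 0

def u_jump(x, x0, chords):
    """Sum the jump weights a_k over all chords S_k separating x from the
    base point x0; a chord of a convex domain splits it into two pieces."""
    return sum(w for a, b, w in chords if side(x, a, b) != side(x0, a, b))

# two vertical chords of the unit square, carrying jump weights 1 and 2
chords = [((0.3, 0.0), (0.3, 1.0), 1.0), ((0.7, 0.0), (0.7, 1.0), 2.0)]
x0 = (0.1, 0.5)
assert u_jump((0.5, 0.5), x0, chords) == 1.0   # crossed the first chord only
assert u_jump((0.9, 0.5), x0, chords) == 3.0   # crossed both chords
assert u_jump(x0, x0, chords) == 0.0           # nothing separates x0 from itself
```

Because convexity makes "separates" a matter of lying on opposite sides of a line, no path needs to be traced at all, which is exactly how the proof of Theorem 1.2 sidesteps the transversality issues mentioned above.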

Existence of solutions on the plane
We will prove existence of solutions on the plane for boundary data in BV(∂Ω). We are going to use approximations of the solution in the strict topology: firstly, in Proposition 4.1 we will ensure that existence of convergent sequences of approximations in the L^1(Ω) topology is not a problem; then we will upgrade this to strict convergence in Theorem 4.6 and use the Miranda stability theorem (Theorem 2.3) to end the proof. Later, we will see an example of a discontinuous function f of infinite total variation such that the solution to the corresponding least gradient problem exists. After possibly passing to a subsequence, there exist functions F_n, F ∈ BV(Ω) with TF_n = f_n and TF = f such that F_n → F in BV(Ω). Now we may proceed as in [9, Proposition 3.3]. We estimate from above the norm ‖u_n − F_n‖_BV, where the bounds follow from the Poincaré inequality (as u_n − F_n has trace zero), the triangle inequality and the fact that u_n is a solution of the least gradient problem for f_n. The common bound follows from convergence of F_n.
Thus, by compactness of the unit ball of BV(Ω) in L^1(Ω) we get a convergent subsequence u_{n_k} − F_{n_k} → v in L^1(Ω). But F_n → F in BV(Ω), hence also in L^1(Ω); thus u_{n_k} → v + F in L^1(Ω). We are going to need three lemmas. The first two are straightforward and their proofs can be found as steps in the proof of the co-area formula, see [5, Section 5.5]. The third one is a convenient version of the Fatou lemma.

Lemma 4.3 Let f, f n ∈ L ∞ (Ω) and suppose that f, f n form a bounded family in L ∞ (Ω).
If χ_{{f_n ≥ t}} → χ_{{f ≥ t}} in L^1(Ω) for a.e. t ∈ R, then f_n → f in L^1(Ω).

Lemma 4.4 Suppose that g, g_n : Ω → R are nonnegative and integrable. If additionally for a.e. x ∈ Ω we have g(x) ≤ lim inf_{n→∞} g_n(x) and lim_{n→∞} ∫_Ω g_n dx = ∫_Ω g dx < ∞, then g_n → g in L^1(Ω).
Proof Since ∫_Ω |g − g_n| dx = ∫_Ω (g − g_n)_+ dx + ∫_Ω (g_n − g)_+ dx and ∫_Ω (g_n − g) dx → 0, it suffices to prove that ∫_Ω (g − g_n)_+ dx → 0 to show that g_n → g in L^1(Ω). Now let us see what happens to the (well defined) upper limit of the sequence ∫_Ω (g − g_n)_+ dx:

lim sup_{n→∞} ∫_Ω (g − g_n)_+ dx ≤ ∫_Ω lim sup_{n→∞} (g − g_n)_+ dx = ∫_Ω (g − lim inf_{n→∞} g_n)_+ dx = 0,

where the inequality follows from the reverse Fatou lemma: by definition 0 ≤ (g − g_n)_+ ≤ g, and g is integrable, so the lemma applies. For the equalities we use the fact that lim sup_{n→∞} (−g_n) = −lim inf_{n→∞} g_n and the assumption that g ≤ lim inf_{n→∞} g_n a.e. Thus ∫_Ω (g − g_n)_+ dx → 0, so g_n → g in L^1(Ω).
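Lemma 4.4 (a Scheffé-type statement) can be illustrated numerically; in this assumed example g ≡ 1 and g_n = 1 + sin(πnx)/n on (0, 1), so the liminf condition and the convergence of integrals hold, and the L^1 distances indeed vanish.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200_000)
dx = x[1] - x[0]
g = np.ones_like(x)

def l1_dist(n):
    # g_n dips below and above g, but liminf g_n = g pointwise and the
    # integrals of g_n converge to the integral of g
    g_n = 1.0 + np.sin(np.pi * n * x) / n
    return float(np.sum(np.abs(g_n - g)) * dx)

# by Lemma 4.4 the L^1 distances must vanish; they behave like (2/pi)/n
assert l1_dist(10) > l1_dist(100) > l1_dist(1000)
assert l1_dist(1000) < 1e-3
```

The same mechanism is used in the proof of Theorem 4.6 with g_n(t) = P(E_t^n, ∂Ω): lower semicontinuity supplies the liminf bound, and the co-area formula supplies the convergence of integrals.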
We recall the definition of strict convergence in BV (Ω).

Definition 4.5
Let Ω ⊂ R^N be an open set. We say that a sequence u_n ∈ BV(Ω) converges strictly to a function u ∈ BV(Ω) if u_n → u in L^1(Ω) and the total variations of u_n converge to the total variation of u, i.e. ∫_Ω |Du_n| → ∫_Ω |Du|.
We are going to use two properties of strict convergence: firstly, for any function u ∈ BV (Ω) we can find a sequence u n ∈ C ∞ (Ω) ∩ BV (Ω) converging to u in strict topology. Secondly, the trace operator is continuous with respect to strict convergence.
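The gap between L^1 and strict convergence can be seen on oscillating sequences (an illustrative numerical sketch, not from the paper): u_n(x) = sin(nx)/n tends to 0 in L^1(0, 1), but its total variation stays near 2/π instead of tending to 0.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100_000)
dx = x[1] - x[0]

def tv(u):
    """Discrete total variation on (0, 1)."""
    return float(np.sum(np.abs(np.diff(u))))

def l1(u):
    return float(np.sum(np.abs(u)) * dx)

for n in (10, 100, 1000):
    u_n = np.sin(n * x) / n
    assert l1(u_n) < 1.0 / n      # u_n -> 0 in L^1(0, 1)
    assert tv(u_n) > 0.5          # but TV(u_n) stays near 2/pi, not 0
```

This is why Theorem 4.6 has to work for strict convergence: L^1 convergence alone controls neither the total variations nor, consequently, the traces.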
Lower semicontinuity of the total variation gives us P(E_t, ∂Ω) ≤ lim inf_{n→∞} P(E_t^n, ∂Ω) < ∞ for a.e. t. We observe that the conditions of Lemma 4.4 are fulfilled and we obtain convergence P(E_t^n, ∂Ω) → P(E_t, ∂Ω) in L^1(R), so after possibly passing to a subsequence we have pointwise convergence for a.e. t. Consequently χ_{{f_n ≥ t}} → χ_{{f ≥ t}} strictly in BV(∂Ω). 3. As ∂Ω ∈ C^1 and f_n ∈ C^1(∂Ω), by Sard's theorem the set T of those t which are regular values of all the f_n is of full measure. From now on it is essential that we are in dimension N = 2. Recalling the Sternberg-Williams-Ziemer construction, we get that for every t ∈ T every point of ∂E_t^n ∩ ∂Ω is an end of at least one line segment; by Proposition 3.5 it is an end of exactly one line segment. 4. Let t ∈ T. As ∂Ω is one-dimensional, P(E_t^n, ∂Ω) ∈ N and Dχ_{{f_n ≥ t}} is a finite sum Σ_{i=1}^M ±δ_{x_i}. Furthermore, by Proposition 2.9 and our convention of choosing the representative, the set E_t is a union of closed arcs between consecutive points x_i. Similarly E_t^n is a union of closed arcs between consecutive points x_i^n (for the smooth functions f_n this is obvious). 5. As χ_{{f_n ≥ t}} → χ_{{f ≥ t}} strictly, for sufficiently large n we have P(E_t^n, ∂Ω) = P(E_t, ∂Ω). What is more, their derivatives converge in the weak* topology; but we have an exact representation of those derivatives. This gives us convergence x_i^n → x_i for every i. 6. Now, take the sequence u_n of unique solutions to the least gradient problem for f_n given by the Sternberg-Williams-Ziemer construction. By Proposition 4.1 it has a subsequence (still denoted by u_n) convergent in L^1(Ω) to some u ∈ BV(Ω). This gives us convergence χ_{{u_n ≥ t}} → χ_{{u ≥ t}} in L^1(Ω) for a.e. t ∈ R. Let us fix t ∈ T such that this convergence holds. Let F_t = {u ≥ t} and F_t^n = {u_n ≥ t}. As by Theorem 2.6 the χ_{F_t^n} are functions of least gradient in Ω, by Theorem 2.3 χ_{F_t} is also a function of least gradient in Ω.
By Proposition 3.5 each of the sets ∂F_t^n (and ∂F_t) is a finite union of line segments, pairwise disjoint in Ω, connecting certain pairs of points among the x_i^n (the x_i). By the definition of T, every point of ∂F_t^n ∩ ∂Ω (and of ∂F_t ∩ ∂Ω) is an end of exactly one line segment. 7. Let A be the set of pairs (i, j), where i < j, such that the line segment x_i x_j is a subset of ∂F_t. We notice that as χ_{F_t^n} → χ_{F_t} in L^1(Ω), for sufficiently large n also the line segment x_i^n x_j^n is a subset of ∂F_t^n, and only such line segments are connected components of ∂F_t^n. Thus P(F_t^n, Ω) → P(F_t, Ω). 8. Let us see that P(F_t^n, Ω) ≤ P(Ω, R^N). Indeed, ∂F_t^n is a finite union of line segments, pairwise disjoint in Ω, connecting certain pairs of points among the x_i^n. If we choose a different connection between them, for example by drawing a full convex polygon with vertices in the x_i^n, by minimality of ∂F_t^n the polygon has a larger perimeter. If we use arcs on ∂Ω instead, the perimeter would be even larger, as line segments are minimal surfaces in R^2. 9. Since the functions χ_{{u_n ≥ t}} converge in L^1(Ω) for a.e. t to χ_{{u ≥ t}}, by Lemma 4.3 we have convergence u_n → u in L^1(Ω). Furthermore, in Step 7 we proved convergence P(F_t^n, Ω) → P(F_t, Ω) for a.e. t, so by the dominated convergence theorem (by Step 8 this sequence is bounded) we have convergence P(F_t^n, Ω) → P(F_t, Ω) in L^1(R). By the co-area formula ∫_Ω |Du_n| → ∫_Ω |Du|, which gives that u_n → u strictly in BV(Ω).
From the above theorem we immediately obtain Theorem 1.1.
Proof of Theorem 1.1 For each f ∈ BV(∂Ω) we can find a sequence f_n of class C^∞(∂Ω) strictly convergent to f. Let u_n be the solutions of the least gradient problem for f_n. After possibly passing to a subsequence, u_n → u strictly in BV(Ω); but the trace operator is continuous in the strict topology, so Tu = f. Moreover, by the Miranda stability theorem (Theorem 2.3) u is a function of least gradient, so it is a solution of the least gradient problem for f.

Example 4.7
Take Ω = B(0, 1) ⊂ R^2. As we know from [17], when f is the characteristic function of a certain fat Cantor set, the least gradient problem has no solution. Thus, if we approximate the boundary function and construct solutions of the least gradient problem for the approximations, we expect the trace of the limit to be incorrect. To see this, let f_n be the function from the n-th stage of the Cantor set construction and let u_n be the corresponding solution to the least gradient problem; it exists by Theorem 1.1. We will see that u_n → 0 in L^1(Ω).
Let us compute the lengths of the intervals appearing in the construction. Let a_n be the length of an interval at the n-th stage. Then a_n = a_{n−1}/2 − 1/2^{2n+1}. As a_0 = 1, we obtain the closed formula a_n = (2^n + 1)/2^{2n+1}. We map the interval [0, 1] to an arc of length 1 on the circle symmetric with respect to the y axis, i.e. g(x) = (cos((π+1)/2 − x), sin((π+1)/2 − x)). This way our Cantor set lies on the circle and we define f_n ∘ g^{−1} : im g → R, denoted again by f_n. We extend each of the functions f, f_n to the whole of ∂B(0, 1) by 0. The situation for n = 1 is presented in Fig. 1.
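The recurrence and the closed formula above can be sanity-checked numerically; the snippet below is a quick verification sketch (the helper name a_closed is ours). It also confirms that the total length of the stage-n intervals tends to 1/2, so the limiting Cantor set is indeed fat.

```python
import math

def a_closed(n):
    # claimed closed formula: a_n = (2^n + 1) / 2^(2n + 1)
    return (2 ** n + 1) / 2 ** (2 * n + 1)

# recurrence from the construction: a_n = a_{n-1}/2 - 1/2^(2n+1), a_0 = 1
a = 1.0
for n in range(1, 25):
    a = a / 2 - 1 / 2 ** (2 * n + 1)
    assert math.isclose(a, a_closed(n))

# total length of the 2^n intervals of stage n tends to 1/2,
# so the limiting Cantor set has positive measure ("fat")
assert abs(2 ** 24 * a_closed(24) - 0.5) < 1e-6
```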
Consider the function f_1. In order to check whether ∂{u ≥ t} consists of the two bases of the trapezoid or of its two sides, we compare the sums of lengths. We directly compute the sum of the lengths of the bases to be (cos(π/2 − 1/2) − cos(π/2 + 1/2)) + (cos(π/2 − 1/8) − cos(π/2 + 1/8)) ≈ 1.21, while the sum of the lengths of the sides equals 2((cos(π/2 − 1/2) − cos(π/2 − 1/8))^2 + (sin(π/2 − 1/2) − sin(π/2 − 1/8))^2)^{1/2} ≈ 0.75. Hence the sides of the trapezoid minimize P(E_t, B(0, 1)) for t ∈ (0, 1), and u_1, the solution of the least gradient problem for f_1, takes the value 0 on the trapezoid and the value 1 only on the flaps cut off by the sides of the trapezoid (shown in grey in Fig. 1). Now we repeat the above reasoning at every stage of the construction of the Cantor set. At every stage there appear 2^{n−1} identical trapezoids in the flaps on which the value of u_{n−1} was constant and equal to 1; we have to compare the sum of the lengths of the sides of these trapezoids with the sum of the lengths of their bases. By the law of cosines, for a circle of radius one the length of the chord corresponding to the angle α equals √(2 − 2 cos α); thus we have to check the inequality
√(1 − cos a_n) + √(1 − cos b_n) > 2 √(1 − cos a_{n+1}).   (5)
On the left-hand side is the (rescaled by 1/√2) sum of the lengths of the two bases and on the right-hand side is the (rescaled) sum of the lengths of the sides. Substitute x = 2^{−n}, recall that b_n = 2^{−2n} and use the closed formula for a_n; this turns (5) into an inequality in the single variable x. In fact we may prove a stronger inequality: omit the second summand on the left-hand side and square both sides of the resulting inequality. Writing the difference of the two sides of the resulting inequality as a function g(x), we see that g satisfies g(0) = 0 and its derivative is positive on (0, 1), so g > 0 on (0, 1); hence inequality (5) holds for all n. Thus at every stage of the construction the sides of the trapezoids are shorter than the bases.
Fig. 1 Solution of the least gradient problem for the first stage of construction

Now it is easy to determine the solution u_n. We proceed as in the proof of Theorem 4.6. Let us take a sequence of smooth functions f_1^k approximating f_1 in the strict topology on ∂Ω and another sequence f_2^k approximating f_2 in the same topology; we additionally require that f_1^k ≥ f_2^k for each k. We denote the solutions to the approximating problems by u_1^k and u_2^k. By Theorem 2.7, i.e. the comparison principle for continuous boundary data, we have u_1^k ≥ u_2^k; we pass to the limit with k → ∞ to obtain u_1 ≥ u_2. Thus, while determining u_2, we only have to take into account two possible configurations: in the two flaps in which u_1 = 1 we compare the sum of the lengths of the sides of the resulting trapezoid with the sum of the lengths of its bases. As the sides of the trapezoid are shorter, u_2 is nonzero only in four smaller flaps, each enclosed by an arc on which f_2 = 1 and a line segment.
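The first-stage comparison (≈ 1.21 versus ≈ 0.75) and the later-stage inequality can be verified numerically. Since the index convention for b_n is ambiguous in this excerpt, the sketch below computes the removed gap directly as a_n − 2a_{n+1}; the function names are ours.

```python
import math

def chord(alpha):
    # Euclidean length of a chord of the unit circle subtending the angle alpha
    return math.sqrt(2 - 2 * math.cos(alpha))

def a(n):
    # closed formula for the stage-n interval length
    return (2 ** n + 1) / 2 ** (2 * n + 1)

# first stage: bases are chords over arcs of lengths 1 and 1/4,
# sides are two chords over arcs of length a_1 = 3/8
bases = chord(1) + chord(1 / 4)
sides = 2 * chord(a(1))
assert abs(bases - 1.21) < 0.01
assert abs(sides - 0.75) < 0.01

# general stage: the gap removed from an interval of length a_n has length
# a_n - 2 a_{n+1}; the bases stay longer than the two sides, as in (5)
for n in range(12):
    gap = a(n) - 2 * a(n + 1)
    assert chord(a(n)) + chord(gap) > 2 * chord(a(n + 1))
```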
We repeat the above argument at every level of the construction. It follows that the solution u_n equals zero on each trapezoid in the construction and is nonzero only on 2^n flaps, each enclosed by an arc on which f_n equals 1 and a line segment. In particular, the sequence u_n is nonincreasing and for every point x inside the circle we have u_n(x) = 0 from a sufficiently large stage of the construction onwards. Thus u_n → 0 a.e.; but the sequence is bounded from above by 1, so the convergence holds also in L^1(Ω).

Example 4.8
Let us make a slight change to the previous example: consider another fat Cantor set on the circle. More precisely, take a sequence b_n converging to zero so rapidly that inequality (5) holds in the opposite direction; this is possible due to the triangle inequality. This results in a Cantor set C of almost full measure.
Let us look again at Fig. 1. As inequality (5) now holds in the opposite direction, at the first level of the construction it is more efficient to set u_1 = 1 on both the grey flaps and the trapezoid than to follow the construction from the previous example. Arguing as before, we see that at every stage of the construction it is more efficient to remove from the set {u_{n−1} = 1} the 2^{n−1} curvilinear flaps bounded by a line segment and an arc on ∂Ω corresponding to a connected component of {f_n = 0} than to repeat the construction from the previous example, i.e. to add trapezoids to the set {u_{n−1} = 0}. Thus the set F_n = {u_n = 1} is a union of the trapezoids arising in all the previous steps and of flaps bounded by a line segment and an arc on ∂Ω corresponding to a connected component of {f_n = 1}.
The sequence u_n = χ_{F_n} converges to u = χ_F a.e. and in L^1(Ω); by Theorem 2.3 u is a function of least gradient. We have to check that Tu = f. Firstly, let us notice that the trace of χ_A, the characteristic function of a set A ⊂ R^2 of bounded perimeter, is again a characteristic function of some set; see [5, Theorem 5]. Now, let us see that by the construction above, for every x ∈ C except for the two extreme points there is a circular sector centred at x which lies entirely inside each F_n, and hence entirely inside F: as the b_n are small, in every step of the construction (except for the first) the angle between ∂Ω and every line segment cutting off a flap on which u_n = 0 is bounded from above by some small α_0. Hence at every point of C (except for the two extreme points), for sufficiently small r > 0 there is a circular sector, bounded by two rays forming the angle α_0 with ∂Ω, on which u = 1.
However, this implies that the mean integral (1/|B(x, r) ∩ Ω|) ∫_{B(x,r) ∩ Ω} χ_F(y) dy is bounded from below, so Tu = Tχ_F cannot equal 0 on the Cantor set. By the previous paragraph Tu equals one H^1-a.e. on the Cantor set. We also see that for each x ∈ ∂Ω\C there is an open neighbourhood of x in which u_n = 0 for sufficiently large n, so Tu equals zero on ∂Ω\C. We have obtained that there exists a solution to the least gradient problem for a certain discontinuous f ∉ BV(∂Ω).

Anisotropic case
This section is devoted to the anisotropic least gradient problem. We discuss the l^p norms on the plane for p ∈ [1, ∞]. We prove a non-uniqueness result for p = 1, ∞ and discuss what the solutions look like for p ∈ (1, ∞). We will use the notation introduced in [12]. We call φ : Ω × R^N → [0, ∞) a metric integrand if:
1. φ is convex with respect to the second variable for a.e. x ∈ Ω;
2. φ is homogeneous with respect to the second variable, i.e. φ(x, tξ) = |t| φ(x, ξ) for every t ∈ R, ξ ∈ R^N and a.e. x ∈ Ω.
In order to introduce the anisotropic total variation and recover some properties of the classical total variation, we will additionally assume that
4. φ is elliptic in Ω, i.e. there exists a constant λ > 0 such that φ(x, ξ) ≥ λ|ξ| for a.e. x ∈ Ω and all ξ ∈ R^N.
Remark 5.2 These conditions are satisfied in most cases considered in the literature: for the classical least gradient problem, i.e. φ(x, ξ) = |ξ|, for the l^p norms with p ∈ [1, ∞], and for the weighted least gradient problem considered in [10], where φ(x, ξ) = a(x)|ξ| for a positive continuous weight a.

Definition 5.3 The polar function of a metric integrand φ is φ^0 : Ω × R^N → [0, ∞] defined by φ^0(x, ξ*) = sup { ⟨ξ*, ξ⟩ : ξ ∈ R^N, φ(x, ξ) ≤ 1 }.
While the definition of the anisotropic total variation introduced in [1] is slightly more general, we will only consider metric integrands φ which are elliptic and continuous in Ω. As proved in [1, Chapter 3], in that case we may use the following equivalent definition:

Definition 5.4 Let φ be a metric integrand continuous and elliptic in Ω. For a given function u ∈ L^1(Ω) we define its φ-total variation in Ω (another notation used in the literature is ∫_Ω φ(x, Du)) by the formula
∫_Ω |Du|_φ = sup { ∫_Ω u div z dx : φ^0(x, z(x)) ≤ 1 a.e., z ∈ C^1_c(Ω; R^N) }.

Remark 5.5
When φ is continuous and elliptic in Ω, then similarly to the classical case ([1, Chapter 4]) we recover lower semicontinuity of the φ-total variation and the co-area formula. We also recover the approximation by C^∞ functions in the strict topology, in the stronger form proved by Giusti in [6, Corollaries 1.17, 2.10]. The approximation by smooth functions in the strict topology entails that we may approximate sets of bounded φ-perimeter both in the Lebesgue measure and in φ-perimeter by open sets with smooth boundary (also with respect to some given boundary conditions); for the isotropic case see [2, Theorem 3.42]. The tools used there include the isoperimetric inequality, lower semicontinuity of the total variation and the co-area formula, all of which remain valid in the anisotropic case.
For explicit computations we shall need the following integral representation ([1,10]):

Proposition 5.6 Let φ : Ω × R^N → R be a metric integrand. Then we have the integral representation
∫_Ω |Du|_φ = ∫_Ω φ(x, ν_u) d|Du|,
where ν_u is the Radon–Nikodym derivative ν_u = dDu/d|Du|. In particular, if E ⊂ Ω and ∂E is sufficiently smooth (at least C^1), then
P_φ(E, Ω) = ∫_{∂E} φ(x, ν_E) dH^{N−1},
where ν_E is the external normal to E.
We are interested in the anisotropic version of the least gradient problem, stated similarly to the isotropic one: u ∈ BV_φ(Ω) is a function of φ-least gradient if ∫_Ω |Du|_φ ≤ ∫_Ω |D(u + v)|_φ for every v ∈ C_c^∞(Ω). Again, if φ is a metric integrand with a continuous extension to R^N, we may instead assume that v is a BV_φ function with zero trace on ∂Ω; see [12, Proposition 3.16]. Furthermore, we say that u is a solution to the anisotropic least gradient problem with boundary data f if u is a function of φ-least gradient and Tu = f.
In the anisotropic case, existence and uniqueness of minimizers in the least gradient problem for continuous boundary data depend not only on the geometry of Ω, but also on the regularity of φ. The uniqueness proof is based on a maximum principle and requires uniform convexity and a condition slightly weaker than W^{3,∞} regularity of an elliptic metric integrand away from {ξ = 0}; for the precise assumption, see [10, Theorem 1.2]. The existence proof requires ellipticity of the metric integrand φ and a barrier condition:

Definition 5.8 ([10, Definition 3]) Let Ω ⊂ R^N be an open bounded set with Lipschitz boundary. Suppose that φ is an elliptic metric integrand. We say that Ω satisfies the barrier condition if for every x_0 ∈ ∂Ω and sufficiently small ε > 0, whenever V minimizes P_φ(·; R^N) in the class {W ⊂ Ω : W \ B(x_0, ε) = Ω \ B(x_0, ε)}, we have ∂V ∩ ∂Ω ∩ B(x_0, ε) = ∅.

In the isotropic case φ(x, ξ) = ‖ξ‖_2 this is equivalent, at least for sets with C^2 boundary, to a condition slightly weaker than strict convexity introduced in [18].
Before we proceed, we need one additional result relating functions of φ-least gradient and φ-minimal sets, i.e. sets whose characteristic functions are functions of φ-least gradient. In one direction its proof follows the lines of the proof of Theorem 2.6; in the other it is a simple application of the co-area formula.

Proposition 5.9 ([12, Theorem 3.19]) Let Ω ⊂ R^N be an open bounded set with Lipschitz boundary. Assume that the metric integrand φ has a continuous extension to R^N. Then u ∈ BV_φ(Ω) is a function of φ-least gradient in Ω if and only if χ_{E_t}, where E_t = {u ≥ t}, is a function of φ-least gradient for almost all t ∈ R.
Definition 5.10 For p ∈ [1, ∞) we define the l^p norm of a vector on the plane by the formula ‖(x, y)‖_p = (|x|^p + |y|^p)^{1/p}. For p = ∞ it is defined as ‖(x, y)‖_∞ = max(|x|, |y|).
We will discuss functions of φ-least gradient with respect to the anisotropy generated by the l^p norms, i.e. φ(x, ξ) = ‖ξ‖_p. Firstly, let us note that for p = 1 or p = ∞ the norm does not satisfy the regularity assumptions of [10, Theorem 1.2], so if a solution to the anisotropic least gradient problem exists, it need not be unique; indeed, we provide an example of non-uniqueness in Example 5.16. Moreover, as Corollary 5.13 shows, the barrier condition is not satisfied and we have to prove existence of minimizers by other means.
We aim to prove that for a nonsmooth anisotropy, i.e. for p = 1 or p = ∞, solutions are in general not unique; to this end, we study what minimal surfaces with respect to the l^p norm look like. The next result is stated for sets with C^1 boundary for two reasons. Firstly, together with the approximation by smooth functions in the strict topology, it is enough to consider sets with C^1 boundary to construct an example of non-uniqueness of minimizers. Secondly, minimal sets with respect to the l^1 anisotropy may have singularities on the boundary and an analogue of the next Proposition would then be false; see the end of Example 5.16.

Proposition 5.11 Let E be a 1-minimal set in Ω ⊂ R^2 whose boundary is of class C^1. Then every connected component of ∂E ∩ Ω is the graph of a monotone function, possibly containing vertical line segments.

Proof 1. Firstly, we want to show that if two points p_1 = (x_0, y_1) and p_2 = (x_0, y_2) with the same first coordinate belong to ∂E, then they are connected by a vertical line segment p_1 p_2 ⊂ ∂E. Suppose otherwise. Denote by ∂E(p_1, p_2) the part of ∂E between p_1 and p_2; without loss of generality we may assume that it contains no other points with first coordinate x_0. As ∂E is C^1, at a point (x, y) ∈ ∂E the Radon–Nikodym derivative ν_{χ_E} is perpendicular to the level set, so it is a vector (−sin θ, cos θ). Thus φ(x, ν_{χ_E}) = |sin θ| + |cos θ|. We also recall that |Dχ_E| = H^1⌞∂E. Using the representation from Proposition 5.6 we calculate
P_1(E, Ω) − P_1(F, Ω) = ∫_{∂E(p_1, p_2)} (|sin θ| + |cos θ|) dH^1 − |y_1 − y_2| > 0,
where F is a set with C^1 boundary (possibly except for p_1 and p_2) such that E △ F is the set enclosed by ∂E(p_1, p_2) and p_1 p_2. Thus E was not a 1-minimal set, a contradiction.
2. Let (x_0, y_0) and (x_1, y_1) be two points on the same connected component of ∂E. Suppose additionally that ∂E contains no vertical line segments, so that we may represent the part of ∂E from (x_0, y_0) to (x_1, y_1) as the graph of a C^1 function g. As before, at the point (s, g(s)) the Radon–Nikodym derivative ν_{χ_E} is the vector (−sin θ, cos θ), where g′(s) = tan θ. We have to minimize the integral (we may assume that x_0 < x_1)
∫_{∂E} (|sin θ| + |cos θ|) dH^1 = ∫_{x_0}^{x_1} (1 + |g′(s)|) ds ≥ (x_1 − x_0) + |y_1 − y_0|.
As we assumed g to be C^1, the inequality becomes an equality if and only if g is monotone.
In particular there are multiple functions minimizing this integral.

3. Now we allow ∂E to contain vertical line segments. The difference is purely technical, as we have to divide the integral into two parts. Suppose that the (oriented) length of the i-th vertical line segment equals λ_i; then
∫_{x_0}^{x_1} (1 + |g′(s)|) ds + ∑_i |λ_i| ≥ (x_1 − x_0) + |y_1 − y_0|,
where the inequality becomes an equality if and only if g is monotone and all the vertical line segments are oriented in the same direction as g′. Again, there are multiple functions minimizing this integral.

The next Proposition establishes a criterion for existence and uniqueness of minimizers with respect to the l^1 norm; in Example 5.16 we then provide an example where minimizers exist and are not unique.

Proposition 5.14 Let u be the solution of the isotropic least gradient problem. If the boundaries of the superlevel sets of u are parallel to the axes of the coordinate system, then u is the unique solution to the anisotropic least gradient problem for φ(x, ξ) = ‖ξ‖_1.

Proof Let v be another admissible function. As ‖·‖_p is nonincreasing in p, we have the first inequality in the chain
∫_Ω |Dv|_1 ≥ ∫_Ω |Dv|_2 ≥ ∫_Ω |Du|_2 = ∫_Ω |Du|_1.
The second inequality follows from the definition of the Euclidean solution; by its uniqueness it is strict whenever u ≠ v. As the boundaries of the superlevel sets of u are parallel to the axes of the coordinate system, we have the final equality. It follows that u is the unique solution to the anisotropic least gradient problem.
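The graph-length computation in Proposition 5.11 above (the l^1 length of a graph is the integral of 1 + |g′|, minimized exactly by monotone graphs) can be illustrated with a quick polyline approximation; this is a sketch and the names are ours.

```python
import math

def l1_graph_length(g, x0, x1, n=20000):
    # l^1 length of the graph of g over [x0, x1]: polyline sums of
    # |dx| + |dy| approximate the integral of 1 + |g'(s)|
    total = 0.0
    for i in range(n):
        s0 = x0 + (x1 - x0) * i / n
        s1 = x0 + (x1 - x0) * (i + 1) / n
        total += abs(s1 - s0) + abs(g(s1) - g(s0))
    return total

# monotone graph from (0, 0) to (1, 1): l^1 length |dx| + |dy| = 2 exactly
mono = l1_graph_length(lambda s: s, 0.0, 1.0)

# non-monotone graph with the same endpoints: strictly longer in l^1
wiggle = l1_graph_length(lambda s: s + 0.3 * math.sin(2 * math.pi * s), 0.0, 1.0)

assert abs(mono - 2.0) < 1e-9
assert wiggle > 2.1
```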

Example 5.15
Let Ω = B(0, 1) ⊂ R^2. Take φ(x, Du) = ‖Du‖_1 and let f(θ) = cos(2θ). We construct the isotropic solution u_0 using the Sternberg–Williams–Ziemer construction. As shown in Fig. 2, the boundaries of the superlevel sets of u_0 are parallel to the axes of the coordinate system. By Proposition 5.14 the solution to the anisotropic least gradient problem is unique.
As the barrier condition fails for p = 1, we cannot expect existence of solutions for arbitrary continuous boundary data. However, we may exhibit non-uniqueness of solutions even on the unit ball and for smooth boundary data.

Example 5.16
Let Ω = B(0, 1) ⊂ R^2. Take φ(x, Du) = ‖Du‖_1 and let f ∈ C^∞(∂Ω) be given by the formula f = sin(2θ), i.e. the boundary datum is a rotation by π/4 of the boundary datum from Example 5.15. Then a solution to the anisotropic least gradient problem exists and is not unique.
According to Proposition 5.9, it suffices to construct superlevel sets E_t of v such that P_1(E_t, Ω) is minimal for almost all t ∈ R; then ∫_Ω |Dv|_1 is minimal as well. We observe that every t ∈ (0, 1) is a regular value and its preimage consists of the four points p_1 = (a, b), p_2 = (b, a), p_3 = (−a, −b), p_4 = (−b, −a). We want to construct a 1-minimal set E_t with boundary datum χ_{f > t}. Arguing as in Corollary 5.12, we see that we only need to consider competitors with smooth boundary. By Proposition 5.11 every connected component of ∂E_t is a graph of a monotone function (possibly including vertical line segments); hence there are two types of competitors: sets whose boundary is a union of two monotone Jordan curves p_1 p_2 and p_3 p_4, or a union of two monotone Jordan curves p_1 p_4 and p_2 p_3. Now we recall the calculation in Proposition 5.11: the l^1 length of every monotone Jordan curve p_1 p_2 equals 2|a − b| (and the same for p_3 p_4), while the l^1 length of every monotone Jordan curve p_1 p_4 equals 2|a + b| (and the same for p_2 p_3). Thus, if we choose p_1 p_2 and p_3 p_4 as the boundary of E_t, the set E_t is 1-minimal. An analogous calculation works for t ∈ (−1, 0).
We construct a function v ∈ BV_1(Ω) in the following way: let E_t = {v > t} be as above. We additionally require that E_t ⊂ int E_s for s < t and define v(x) = sup{t : x ∈ E_t}. Then v is well defined and by Proposition 5.9 it is a function of 1-least gradient.
Obviously this construction leads to multiple solutions. Firstly, we may take ∂E_t to be a union of two line segments; then v coincides with the isotropic solution. Secondly, in the first quadrant ({x, y > 0}) we may define ∂E_t to be an arc of the circle with centre (1, 1) passing through p_1 and p_2; we extend this definition analogously to the other quadrants. This minimizer is presented in Fig. 3.
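The first two minimizers described above can be compared numerically for the level t = 1/2 (so θ = π/12): both the straight segment and the circular arc centred at (1, 1) are monotone curves from p_1 to p_2, so both have l^1 length 2|a − b| = √2. This is an illustration only; the names and the discretization are ours.

```python
import math

# level t = 1/2 of f = sin(2*theta): theta = pi/12, p1 = (a, b), p2 = (b, a)
theta = math.pi / 12
p1 = (math.cos(theta), math.sin(theta))
p2 = (math.sin(theta), math.cos(theta))

def l1_polyline(points):
    # l^1 length of a polyline
    return sum(abs(q[0] - p[0]) + abs(q[1] - p[1])
               for p, q in zip(points, points[1:]))

# competitor 1: the straight line segment p1 p2
segment = [p1, p2]

# competitor 2: the arc of the circle centred at (1, 1) through p1 and p2
c = (1.0, 1.0)
r = math.hypot(p1[0] - c[0], p1[1] - c[1])
phi1 = math.atan2(p1[1] - c[1], p1[0] - c[0])
phi2 = math.atan2(p2[1] - c[1], p2[0] - c[0])
n = 2000
arc = [(c[0] + r * math.cos(phi1 + (phi2 - phi1) * i / n),
        c[1] + r * math.sin(phi1 + (phi2 - phi1) * i / n)) for i in range(n + 1)]

# both monotone curves have the same l^1 length 2|a - b| (= sqrt(2) here)
expected = 2 * abs(p1[0] - p2[0])
assert abs(l1_polyline(segment) - expected) < 1e-9
assert abs(l1_polyline(arc) - expected) < 1e-9
```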
Finally, we may take ∂E_t to be the union of a vertical and a horizontal line segment: in the first quadrant we set q = (a, a) and take ∂E_t = p_1 q ∪ q p_2, and proceed similarly in the other quadrants. While ∂E_t is not C^1, we argue as in Corollary 5.13 to see that E_t is a 1-minimal set. Then the zero level set of v has a singularity: ∂E_0 = {(x, y) ∈ Ω : x = 0 ∨ y = 0} is a union of two monotone Jordan curves intersecting at (0, 0).

Example 5.17
Now let p = ∞. A similar calculation shows that the l^∞ perimeter of a level set connecting the points (x_0, y_0) and (x_1, y_1) equals
∫_{∂E_t} max(|sin θ|, |cos θ|) dH^1 = ∫_{x_0}^{x_1} max(1, |g′(s)|) ds ≥ max(x_1 − x_0, |y_1 − y_0|),
where the inequality becomes an equality if and only if |g′| − 1 has constant sign; in other words, the angle between the level set and the x coordinate axis is everywhere not greater (or everywhere not smaller) than π/4. Nevertheless, it may still happen that the solution is unique: this is the case if we take f such that the Euclidean solution has all level sets at the angle π/4 to the coordinate axes, for example f(θ) = sin(2θ).

Now fix p ∈ (1, ∞). Let Ω ⊂ R^2 be an open, bounded, strictly convex set and take f ∈ C(∂Ω). The l^p norm is uniformly convex and smooth away from {ξ = 0}, so by [10, Theorem 1.2] there is at most one solution to the anisotropic least gradient problem with boundary data f. Regarding the barrier condition, if ∂Ω is additionally C^2, we may use a version of the barrier condition in local coordinates (see [10, Remark 3.2]) to check that it is satisfied, so in view of [10, Theorem 1.1] a solution to the anisotropic least gradient problem with boundary data f exists.
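The l^∞ computation above can likewise be illustrated numerically: any graph from (0, 0) to (1, 1/2) whose slope never exceeds 1 in absolute value realizes the same minimal l^∞ length, which exhibits the non-uniqueness. A sketch (the names are ours):

```python
import math

def linf_graph_length(g, x0, x1, n=20000):
    # l^infinity length of the graph of g: polyline sums of max(|dx|, |dy|)
    # approximate the integral of max(1, |g'(s)|)
    total = 0.0
    for i in range(n):
        s0 = x0 + (x1 - x0) * i / n
        s1 = x0 + (x1 - x0) * (i + 1) / n
        total += max(abs(s1 - s0), abs(g(s1) - g(s0)))
    return total

# two different graphs from (0, 0) to (1, 1/2), both with |g'| <= 1 throughout
straight = linf_graph_length(lambda s: 0.5 * s, 0.0, 1.0)
bent = linf_graph_length(lambda s: max(0.0, s - 0.5), 0.0, 1.0)

# a graph whose slope overshoots 1 on part of the interval is strictly longer
wiggly = linf_graph_length(lambda s: 0.5 * s + 0.2 * math.sin(2 * math.pi * s),
                           0.0, 1.0)

# both admissible competitors realize the minimum max(1, 1/2) = 1
assert abs(straight - 1.0) < 1e-9
assert abs(bent - 1.0) < 1e-9
assert wiggly > 1.05
```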
Our main goal is Theorem 1.3. Apart from its value as a structure theorem for functions of φ-least gradient, it shows that the barrier condition from Definition 5.8 is satisfied even without the additional assumption that ∂Ω is C^2: as connected minimal surfaces with respect to the l^p norms for p ∈ (1, ∞) are the same as in the isotropic case, the barrier condition for the l^p norm is satisfied for the same sets Ω as for p = 2, i.e. whenever Ω is an open, bounded, strictly convex set.
Proof of Theorem 1.3 Let (x_0, y_0) and (x_1, y_1) be two points on the same connected component of ∂E. We have to minimize an integral analogous to the previous ones (the notation stays the same):
L(g) = ∫_{x_0}^{x_1} (1 + |g′(s)|^p)^{1/p} ds.
As we are in dimension two, by the regularity results of [15, Theorem I.3.1] every connected component of the boundary of a φ-minimal set is a C^2 manifold, so we may use the Euler–Lagrange equation for the functional L, which takes the form
sgn(g′) |g′|^{p−1} (1 + |g′|^p)^{1/p − 1} = const.
Taking absolute values and raising both sides to the power p/(p−1), we obtain |g′|^p / (1 + |g′|^p) = const, and thus g′ = const. Hence the anisotropic minimal surface connecting the points (x_0, y_0) and (x_1, y_1) is a line segment.
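The Euler–Lagrange computation above can be sanity-checked numerically: the conserved quantity is the derivative of the length density (1 + |g′|^p)^{1/p} in the slope variable, and it is strictly increasing in the slope, so constant momentum forces constant slope. A sketch under these conventions (the names are ours):

```python
import math

def lp_integrand(c, p):
    # l^p length density of a graph with slope c: ||(1, c)||_p = (1 + |c|^p)^(1/p)
    return (1 + abs(c) ** p) ** (1.0 / p)

def momentum(c, p):
    # conserved quantity from the Euler-Lagrange equation:
    # sgn(g') |g'|^(p-1) (1 + |g'|^p)^(1/p - 1)
    return math.copysign(abs(c) ** (p - 1) * (1 + abs(c) ** p) ** (1.0 / p - 1.0), c)

p = 3.0
slopes = [0.25, 0.5, 1.0, 2.0, 4.0]
for c in slopes:
    # the momentum is d/dc of the length density (central difference check)
    h = 1e-6
    numeric = (lp_integrand(c + h, p) - lp_integrand(c - h, p)) / (2 * h)
    assert abs(numeric - momentum(c, p)) < 1e-6

# the momentum is strictly increasing in the slope, so constant momentum
# along the curve forces a constant slope, i.e. a line segment
values = [momentum(c, p) for c in slopes]
assert all(v1 < v2 for v1, v2 in zip(values, values[1:]))
```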
We conclude by stating a consequence of Theorem 1.3 for the existence theory of anisotropic least gradient functions. In view of [10, Theorems 1.1-1.2] we obtain

Corollary 5.18 If Ω ⊂ R^2 is an open, bounded, strictly convex set and f ∈ C(∂Ω), then the corresponding anisotropic least gradient problem has a unique solution.

W. Górny
We compare the sums of the lengths of these line segments with respect to the l^p norms. The sum of the lengths of p_1 p_2 and p_3 p_4 is independent of p and equals 20. The sum of the lengths of p_2 p_3 and p_4 p_1 equals (1^p + 1^p)^{1/p} + (11^p + 11^p)^{1/p} = 12 · 2^{1/p}. Let p_0 = log 2 / log(5/3). Then for p > p_0 this sum is smaller than 20, so the trapezoid p_1 p_2 p_3 p_4 ⊂ {u ≥ 0}; if p < p_0, it is greater than 20, so p_1 p_2 p_3 p_4 ⊂ {u < 0}. Finally, for p = p_0 the trapezoid p_1 p_2 p_3 p_4 is a zero level set of positive measure; this is the only p with this property. In particular, solutions to the anisotropic least gradient problem for the same boundary data can vary with p.
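The arithmetic above can be verified directly; the snippet below uses only the stated lengths (the coordinates of the points p_i are not available in this excerpt, so the side pair is modelled as two segments with direction (1, 1) and legs 1 and 11):

```python
import math

def lp_norm(v, p):
    # l^p norm of a planar vector
    return (abs(v[0]) ** p + abs(v[1]) ** p) ** (1.0 / p)

def sides(p):
    # total l^p length of the pair p2 p3, p4 p1: segments with direction (1, 1)
    # and legs of lengths 1 and 11, so the sum is 12 * 2^(1/p)
    return lp_norm((1, 1), p) + lp_norm((11, 11), p)

p0 = math.log(2) / math.log(5 / 3)

# the axis-parallel pair p1 p2, p3 p4 has total length 20 for every p;
# the side pair crosses that value exactly at p = p0
assert math.isclose(sides(p0), 20.0)
assert sides(p0 + 0.1) < 20.0 < sides(p0 - 0.1)
```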