Boundedness and Unboundedness in Total Variation Regularization

We consider whether minimizers for total variation regularization of linear inverse problems belong to L∞ even if the measured data does not. We present a simple proof of boundedness of the minimizer for fixed regularization parameter, and derive the existence of uniform bounds for sufficiently small noise under a source condition and adequate a priori parameter choices. To show that such a result cannot be expected for every fidelity term and dimension, we compute an explicit radial unbounded minimizer, which is accomplished by proving the equivalence of weighted one-dimensional denoising with a generalized taut string problem. Finally, we discuss the possibility of extending such results to related higher-order regularization functionals, obtaining a positive answer for the infimal convolution of first and second order total variation.


Introduction
For Ω ⊂ R d either a bounded Lipschitz domain or the whole R d with d ≥ 2, Σ ⊂ R m an arbitrary domain with m ⩾ 1, and given a linear bounded operator A : L d/(d−1) (Ω) → L q (Σ), we are interested in solutions u α,w of the TV-regularized inverse problem Au = f with noisy data f + w for w ∈ L q (Σ), that is, solutions of the minimization problem (1) over u ∈ L d/(d−1) (Ω), where 1 < q < ∞, σ = min(q, 2) and TV(u) denotes the total variation of u; see (3) below for its definition. This convex minimization leads, at any minimizer u α,w , to the optimality condition (2), where j(u) = |u| q−2 u is the duality mapping of L q (Σ) and ∂ L d/(d−1) TV(u α,w ) denotes the subgradient (see Definition 1 below). Our main goal is to present a proof of the following uniform boundedness result:

Theorem 1. Assume that for A, f there is a unique solution u † of Au = f which satisfies the source condition Ran(A * ) ∩ ∂ TV(u † ) ̸= ∅. There is some constant C(q, σ, d, Ω) such that if α n , w n are sequences of regularization parameters and perturbations for which the parameter choice condition (14) below holds with α n → 0 and w n → 0, then the corresponding sequence u αn,wn of minimizers is bounded in L ∞ (Ω), and (possibly up to a subsequence) u αn,wn → u † strongly in L p (Ω) for all p ∈ (1, ∞) as n → ∞.
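The duality mapping j(u) = |u| q−2 u above satisfies ⟨j(u), u⟩ = ∥u∥ q L q and ∥j(u)∥ L q′ = ∥u∥ q−1 L q . A minimal discrete sanity check of these two identities (sums stand in for integrals; the helper name `duality_map` is ours, not from the text):

```python
import numpy as np

def duality_map(u, q):
    """Pointwise duality mapping j(u) = |u|^{q-2} u of L^q (discretized)."""
    return np.abs(u) ** (q - 2) * u

q = 3.0
qp = q / (q - 1)                       # conjugate exponent q'
u = np.array([1.0, -2.0, 0.5, 3.0])    # toy "function" on four points
ju = duality_map(u, q)

norm_q = np.sum(np.abs(u) ** q) ** (1 / q)
# <j(u), u> = ||u||_q^q   and   ||j(u)||_{q'} = ||u||_q^{q-1}
assert np.isclose(np.dot(ju, u), norm_q ** q)
assert np.isclose(np.sum(np.abs(ju) ** qp) ** (1 / qp), norm_q ** (q - 1))
```

These identities are exactly what make j a norm-compatible pairing between L q and its dual L q′.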
This result is a combination of Proposition 1, Proposition 2 and Corollary 1 in Section 3, which in turn depend on the preliminaries reviewed in Section 2.
In the case of denoising A = Id, it is well known that TV regularization satisfies a maximum principle and ∥u α,w ∥ L ∞ ⩽ ∥f + w∥ L ∞ ; a proof can be found in [19, Lem. 2.1] for L 2 (R 2 ) and [37, Prop. 3.6] for the L d/(d−1) (R d ) case that most closely resembles the situation considered here. Another related result is the nonexpansiveness in L p norm for denoising with rectilinear anisotropy found in [41]. In comparison, our results work with unbounded measurements and linear operators, and although the analysis depends on the noise level and parameter choice, the bound on ∥u α,w ∥ L ∞ can be made uniform in the regime of small noise and regularization parameter. Our proof of Theorem 1 hinges on the L d summability and stability in L d norm of the subgradients v α,w appearing in (2) and their relation with the perimeter of the level sets of u α,w . To the best of our knowledge, the subgradient of TV was first characterized in [4, 3] (see also [6] for further context) relying on the results of [7]. In Section 2 we summarize the parts of these results that we will use in the sequel.
Moreover, we are also interested in finding cases outside the assumptions of the result above in which minimizers fail to be bounded. To this end, in Section 4 we consider a one-dimensional denoising problem (21), with weights present in both the fidelity term and the total variation which may vanish at the boundary of the interval. By proving its equivalence to a generalized taut string problem, one obtains that the minimizers can be constructed by gluing a few different types of behavior on finitely many subintervals. This in turn allows us to produce explicit unbounded minimizers for radial data in 3D and boundedness conditions for radial powers. Moreover, we believe that results on weighted taut-string formulations can be of independent interest.
Indeed, the taut string approach to one-dimensional total variation minimization has been studied in many works, either from a continuous [29] or discrete [25, 33] point of view. Versions for more general variants have also been considered, like [46] for higher-order total variation, [30] for 'nonuniform tubes' which can be seen as a weight imposed directly on the taut string formulation, and [40] where the graph setting is considered. Likewise, there is a large number of works considering weighted TV denoising, seen as having spatially-dependent regularization parameters, and approaches to their automatic selection. This literature is extensive and the choice of weights for particular tasks is beyond the scope of this work, so we only explicitly mention the analytical studies [34] for the one-dimensional case and [8] for higher dimensions and weights that degenerate at the boundary. In contrast, we have not been able to find material combining both weighted TV functionals and taut string characterizations, which motivates our investigations below.
In Section 5 we explore the possibility of extending Theorem 1, or parts thereof, to the setting of higher-order regularizers related to the total variation. We obtain a boundedness result for the case of infimal convolution of TV and TV 2 regularizers, for which the optimality conditions are closely related to subgradients of TV. Finally, we also present counterexamples suggesting that L ∞ bounds in the case of total generalized variation (TGV) likely need methods different from those considered in this work.

Preliminaries
In this section we collect definitions and preliminary results.

Definition 1. For a convex functional F : X → R ∪ {+∞}, where X is a Banach space, the subgradient or subdifferential of F at some element u ∈ X is the set defined as ∂F (u) := { v ∈ X ′ : F (w) ⩾ F (u) + ⟨v, w − u⟩ (X ′ ,X) for all w ∈ X }, where X ′ is the dual Banach space of X and ⟨•, •⟩ (X ′ ,X) denotes the corresponding duality product.

Subgradient of TV, pairings, and slicing
The total variation is defined as

TV(u) := sup { ∫ Ω u div z : z ∈ C 1 c (Ω; R d ), |z| ⩽ 1 }, (3)

which in turn motivates defining the perimeter in Ω of a Lebesgue measurable subset E ⊂ Ω as Per(E) := TV(1 E ), where 1 E is the indicatrix taking the value 1 on E and 0 on R d \ E. We notice that (3) is the Fenchel conjugate of χ K̄ with respect to the dual pair (L d/(d−1) (Ω), L d (Ω)), where χ K̄ is the convex characteristic function with value 0 on K̄ and +∞ elsewhere, and K̄ is the strong L d closure of K. This closure in turn satisfies (as stated in [20, Def. 2.2] and proved in [13, Prop. 7]) an identity in which the equality z • n ∂Ω = 0 is understood in the sense of the normal trace in W 1,d (div). Now, since TV is positively one-homogeneous, we have (see [38, Lem. A.1], for example) that v ∈ ∂ TV(u) if and only if v ∈ ∂ TV(0) and ∫ Ω vu = TV(u). It is natural to ask whether one can use that v = div z for some z to integrate by parts in the last equality, which would formally lead to z • Du = |Du|, or z = Du/|Du|. However, since z is only in L ∞ (Ω; R d ) and not guaranteed to be continuous, there is no immediate meaning to the action of the measure Du on z, and Du/|Du| can only be defined |Du|-a.e. as the polar decomposition of Du. This difficulty is mitigated by defining (as first done in [7]) the product between Du and z as a distribution (z, Du) acting on test functions. This definition makes sense whenever u ∈ BV(Ω) ∩ L d/(d−1) (Ω), and so it can be extended to a measure, which in fact is absolutely continuous with respect to |Du|, and whose corresponding Radon-Nikodým derivative we denote by θ(z, Du, •) ∈ L 1 (Ω, |Du|). Moreover, there exists a generalized normal trace of z on ∂Ω denoted by [z, n ∂Ω ] ∈ L ∞ (∂Ω) for which the Green's formula of [7, Thm. 1.9] holds. With these definitions in mind, (5) may be written as in [6, Prop. 1.10].

Since we will make extensive use of level sets, we now fix the notation we use for them:

Definition 2. For u ∈ L d/(d−1) (Ω) we denote by E s the upper level set {u > s} if s > 0, and the lower level set {u < s} for s < 0, so that |E s | < +∞ for all s ∈ R \ {0}. These two cases can be summarized as E s = {sign(s)u > |s|}.

Finally, the following characterization of the subgradient of TV in terms of perimeter of level sets is crucial for our results below:

Lemma 1. Let u and E s be as in Definition 2. The following assertions are equivalent:
• v ∈ ∂ TV(0) and ∫ Ω vu = TV(u).
• v ∈ ∂ TV(0) and for a.e. s, Per(E s ) = ∫ E s sign(s) v.
• For a.e. s, the level sets E s minimize E ↦ Per(E) − ∫ E sign(s) v among sets of finite perimeter.

Proof. The equivalence between the first two items follows from TV being positively one-homogeneous, and a proof can be found in [38, Lem. A.1], for example. A proof for the other statements in the L 2 (R 2 ) setting, but which generalizes without modification to the present case, can be found in [21, Prop. 3].
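Behind Lemma 1 lies the coarea formula TV(u) = ∫ R Per(E s ) ds, which links the total variation to the perimeters of the level sets. A small one-dimensional discrete check, where the perimeter of a level set is simply its number of jumps (the grid and quadrature are illustrative choices of ours):

```python
import numpy as np

u = np.array([0.0, 2.0, 1.0, 3.0, 0.0])       # piecewise constant 1D signal, u >= 0
tv = np.sum(np.abs(np.diff(u)))               # discrete total variation, here 8

# Coarea: TV(u) = \int Per({u > s}) ds, with Per = number of jumps in 1D.
s_grid = np.linspace(1e-9, u.max(), 6001)     # quadrature nodes in s
ds = s_grid[1] - s_grid[0]
per = [np.sum(np.abs(np.diff((u > s).astype(float)))) for s in s_grid]
coarea = np.sum(per) * ds
assert abs(coarea - tv) < 1e-2                # 8 = 2 + 4 + 2 over the three level bands
```

On (0, 1) the level set has two jumps, on (1, 2) four, and on (2, 3) two, so the integral reproduces TV(u) = 8.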

Dual solutions and their stability in one-homogeneous regularization
In [36, Prop. 3.3], the dual problem to (1) is computed to be a supremum problem (D α,w ), where q ′ = q/(q − 1) is the conjugate exponent of q, and analogously for σ ′ , and we notice that owing to the strict concavity of the objective, this problem has a unique maximizer. Also, in [36, Prop. 3.6] it is proved that the solutions of (D α,w ) satisfy (10), where ρ L q ,σ is defined through the inverse of a function involving the exponent σ ′ , that is, the largest function ψ satisfying the corresponding two-point inequality for all u, v ∈ L q′ (Σ) and λ ∈ (0, 1). Now, since we have chosen σ = min(q, 2), we have the comparison of [11, Thm. 5], where δ L q′ denotes the modulus of uniform convexity of L q′ (Σ) (which is independent of Σ) defined using points in the unit ball. In this setting, and defining v α,w := A * p α,w ∈ ∂ TV(u α,w ) as in (2), we can deduce from (10) the estimate (11), which tells us that the threshold to control the effect of the noise on the dual variables is a direct generalization of the linear parameter choice that would arise in the case σ = q = 2.
To know more about the asymptotic behavior of v α,w as both α and w vanish one needs an additional assumption. Whenever u † denotes a solution of Au = f of minimal TV among such solutions, we say that the source condition (12) is satisfied if there exists p ∈ L q′ (Σ) such that A * p ∈ ∂ TV(u † ). Now, since TV is not strictly convex, u † may not be unique, but arguing as in [38, Rem. 3] we have that if û † is another such solution and we have (12), then A * p ∈ ∂ TV(û † ) as well, for the same source element p. For our purposes, the significance of this source condition comes from the fact that it guarantees that the formal dual problem (D 0,0 ) obtained by setting α = 0 and w = 0 in (D α,w ) has at least one maximizer, namely the source element p. Under this source condition assumption and taking a sequence α n → 0, we have in the case where noise is not present (w = 0) the convergence v αn,0 = A * p αn,0 → A * p 0,0 =: v 0,0 strongly in L d (Ω), (13) where p 0,0 is the element of L q′ (Σ) of minimal norm among those satisfying (12), which is unique. This convergence is proved (see [36, Prop. 3.4] for a detailed argument) by testing the optimality of p α,0 and p 0,0 in (D α,0 ) and (D 0,0 ) with respect to each other to obtain both p α,0 ⇀ p 0,0 weakly in L q′ (Σ) and ∥p α,0 ∥ L q′ (Σ) ⩽ ∥p 0,0 ∥ L q′ (Σ) , which together imply strong convergence by using the Radon-Riesz property of L q′ (Σ). Finally, combining this convergence with (11), we also have that if w n ∈ L q (Σ) are chosen so that the parameter choice (14) holds and α n → 0, then also v αn,wn → A * p 0,0 strongly in L d (Ω).
Boundedness, uniform boundedness, and strong convergence

We start with a direct proof of boundedness of minimizers u α,w of (1) which, using the results cited in the previous section, can be made uniform for small noise and strong enough regularization. It relies on studying the level sets E s α,w := {sign(s) u α,w > |s|}. Moreover, under the source condition A * p 0 ∈ ∂ TV(u † ) and a parameter choice satisfying the condition (14), where C q,σ is the constant in (11) and η < Θ d with Θ d the constant of the isoperimetric inequality Per(E) ⩾ Θ d |E| (d−1)/d for E of finite perimeter, we have a bound for ∥u α,w ∥ L ∞ (R d ) which is uniform in α and w.
Proof. We first show the claim for fixed α, w. To do this, we make use of (D α,w ): the problem has a solution p α,w , we have strong duality, and the optimality condition holds. In what follows, we assume without loss of generality that s > 0, so that E s α,w := {u α,w > s}. From (9) and the Hölder inequality we can derive for a.e. s > 0 the estimate Per(E s α,w ) ⩽ ( ∫ E s α,w |v α,w | d ) 1/d |E s α,w | (d−1)/d . Now, v α,w ∈ L d (R d ), so for any ε > 0 there exists a δ > 0 such that ∫ E |v α,w | d ⩽ ε for sets E with |E| ⩽ δ, which combined with the isoperimetric inequality is not possible for ε too small if |E s α,w | > 0. Note that the bound on δ does not depend on s, only on v α,w , which means that we have a uniform positive lower bound on the mass of every nonempty level set E s α,w . From the layer-cake formula for L d/(d−1) (R d ) functions [42, Thm. 1.13] we conclude that there must exist s 0 > 0 such that |E s α,w | = 0 for s ⩾ s 0 , which means that u α,w ⩽ s 0 a.e. in R d . We prove similarly that u − α,w = max(0, −u α,w ) is bounded.

Finally, let us assume that (α n , w n ) is a sequence of regularization parameters and noises for which (14) holds. From v αn,0 → v 0,0 for any sequence α n → 0 (see (13)), one infers that the family {v α,0 } α>0 is equi-integrable in L d (R d ). We want, as before, to estimate ∫ E s α,w |v α,w | d . This can be done by writing the splitting (15). Now, using the equi-integrability of v α,0 , one can, for any ε, find δ so that as soon as |E s α,w | ⩽ δ, the first term of the right hand side is bounded by ε. On the other hand, the second term is, independently from δ, bounded by η < Θ d , as a consequence of (11) and (14). We conclude that as soon as |E s α,w | ⩽ δ, we obtain a bound which is still not possible for ε too small (independent of s and of α, w satisfying (14)).

Remark 1. By a similar argument on the E s α,w (see [38, Lem. 5]), we can actually show that u α,w has compact support, so that u + α,w ∈ L 1 (R d ) and we could use the (simpler) layer cake formula in L 1 , which would provide the same contradiction.
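The layer-cake formula invoked in the proof, ∫ u p = p ∫ 0 ∞ s p−1 |{u > s}| ds, is easy to verify numerically. A sketch on a finite sample with counting measure (the sample size and quadrature grid are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(1000)                   # nonnegative sample "function", counting measure
p = 2.0                                # e.g. p = d/(d-1) for d = 2

lhs = np.sum(u ** p)
# Layer cake: \int u^p = p \int_0^inf s^{p-1} |{u > s}| ds
s = np.linspace(0.0, 1.0, 20001)
ds = s[1] - s[0]
rhs = p * ds * np.sum(s ** (p - 1) * np.array([np.sum(u > t) for t in s]))
assert abs(lhs - rhs) / lhs < 1e-2
```

In the proof, the uniform positive lower bound on |{u > s}| for nonempty level sets makes the right-hand side infinite unless the level sets are empty for large s, which is the contradiction used.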
Remark 2. The parameter choice condition (14) does not necessarily imply convergence of the dual variables. This is the case when the noise vanishes fast enough with respect to the parameter choice, so that, as remarked in Section 2.2, in fact (13) and (11) imply convergence of v αn,wn to v 0,0 . In particular, this implies that the family (v αn,wn ) n is equi-integrable in L d (R d ), which means that in the reasoning coming after (15), the δ can be chosen independently of n, which would simplify the proof in this more restrictive case.
Remark 3. In the case of denoising in the plane with the Rudin-Osher-Fatemi model [48], for which d = m = 2, q = 2 and A = Id, Proposition 1 applies as soon as f + w ∈ L 2 (R 2 ).

Remark 4. The same proof shows that if the source condition (12) is satisfied, then u † ∈ L ∞ (Ω).

We consider now the case when Ω is a bounded Lipschitz domain, which leads to two cases for the functional in (1). First, we can consider the total variation TV(u; R d ) of the extension by zero of u from Ω to the whole R d , which we refer to as homogeneous Dirichlet boundary conditions. Second, we can consider the variation TV(u; Ω), which we refer to as homogeneous Neumann boundary conditions, since the test functions in (3) are compactly supported in Ω.

Proposition 2. In both of these cases, the conclusions of Proposition 1 remain valid.

Proof. In the Dirichlet case we work with the functional in (1) but consider TV(Eu; R d ), where E denotes extension by zero. By [2, Cor. 3.89] and using that Ω is a Lipschitz domain, we have that TV(Eu; R d ) = TV(u; Ω) + ∫ ∂Ω |u| dH d−1 , where we emphasize that the first total variation is computed in R d whereas the second one in Ω as in (3). The values of u at the boundary are understood in the sense of traces.
In this situation, since E is linear we can consider the composition TV • E as a convex positively one-homogeneous functional in its own right, which gives us that indeed v α,w = A * p α,w ∈ ∂(TV • E)(u α,w ), where p α,w is defined in (2). By general properties of one-homogeneous functionals we then have (as in the first two items of Lemma 1) that v α,w ∈ ∂(TV • E)(0) and ∫ Ω v α,w u α,w = (TV • E)(u α,w ). Moreover, we can argue exactly as in [21, Prop. 3] (which uses more properties of the subgradient than the equality above) to obtain that the perimeter characterization also holds in this case for the level sets E s α,w defined as in Proposition 1, from which we can follow the rest of the proof exactly, since E s α,w ⊂ Ω.

In the Neumann case we consider TV as TV(u; Ω). This parallels what is done in [36, Sec. 6] and with more details in the 2D case in [38, Sec. 4.3]. In this case, one uses the estimate (16) (see [38, Sec. 4.3] for a proof), where C Ω is the constant of the Poincaré-Sobolev inequality. To see that (16) can play the role of the isoperimetric inequality, notice that we only need to use this inequality for large values of |s| and sets of small measure; therefore we may assume that the level sets in question have small measure.

Remark 5. If A and A * preserve boundedness, then in the situation of Proposition 1 the optimality condition (2) implies that v α,w ∈ L ∞ (Ω) as well. One can then use strong regularity results (see [44, Thm. 3.1] or [28, Thm. 3.6(b)], for example) to obtain that ∂{u α,w > s} ∈ C 1,γ for all γ < 1/4, provided that d ⩽ 7. Moreover, since our estimates are uniform under the assumptions of Theorem 1, this regularity can also be made uniform along a sequence [53, Secs. 1.9, 1.10] as well.
The assumption of both A and A * preserving boundedness is easily seen to be satisfied for convolution operators with regular enough kernels, since in this case A * is of the same type. However, it also holds for other commonly used operators. As an example, let us consider the case of the Radon transform R of functions on a bounded domain Ω ⊂ R d . In this case one can consider Σ = S d−1 × (−R, R) with R > 0 large enough so that Ω ⊂ B(0, R), writing R as a composition with the last map being restriction, since R is continuous between the middle two spaces [45, Thm. 1]. Boundedness is preserved as well, since the integrals on planes in the definition of R are then on domains of uniformly bounded Hausdorff measure. Similarly, A * is then extension by zero composed with the backprojection integral operator, so boundedness is again preserved. Observe that for the case of Ω ⊂ R 2 and convolutions with L 2 kernels boundedness is immediate, since Young's inequality for convolutions can be used in (2).

Corollary 1. Under the assumptions of Proposition 1, for a sequence of minimizers {u αn,wn } n with α n , w n satisfying (14) and α n → 0 and w n → 0 as n → ∞, we have that, up to a subsequence, (17) holds, where u † is an exact solution of Au = f with minimal TV among such solutions. If there is only one exact solution, then the whole sequence converges to it in the same fashion.
Proof. We first notice that the parameter choice (14) is less restrictive than the one needed in [36, Prop. 3.1], which provides strong convergence to some u † in L p loc (Ω) for p ∈ (1, d/(d − 1)) and up to a subsequence by a basic compactness argument. Moreover, if Ω = R d , we can apply [36, Lem. 5.1] to obtain that all of the u αn,wn and u † are supported inside a common ball B(0, R) for some R > 0. If, in contrast, Ω is bounded, then Ω ⊂ B(0, R) and we may extend to the latter by zero.
For this subsequence (which we do not relabel) and p̃ ⩽ p < ∞ we have, interpolating between L p̃ and L ∞ , ∥u αn,wn − u † ∥ L p ⩽ ∥u αn,wn − u † ∥ p̃/p L p̃ ∥u αn,wn − u † ∥ 1−p̃/p L ∞ , (18) which using Proposition 1 immediately implies (17). Finally, if the minimal TV solution of Au = f is unique, any subsequence has a further subsequence converging to this unique solution, so the whole sequence must in turn converge to it.

Remark 6. In the plane, under the same source condition (12), a parameter choice equivalent to the one we use for d = 2, and additional assumptions on both A and u † , linear convergence rates are proved in the L 2 setting in [54, Thm. 4.12], that is, ∥u αn,wn − u † ∥ L 2 (Ω) = O(α n ) (observe that the parameter choice forces ∥w n ∥ = O(α n )). Therefore, this result can be combined with ours to obtain also a convergence rate of order 2/p in L p (Ω). To see this, we just notice that by Remark 4 we have u † ∈ L ∞ (Ω) in addition to the uniform bound on ∥u αn,wn ∥ L ∞ (Ω) , so that we may argue as in (18).

One might wonder if strong convergence in L ∞ (Ω) is possible. In fact, it is not:

Example 1. Consider (as in Remark 3) denoising in the plane R 2 with parameter α n = 1/n of f + w n , where for arbitrarily small c > 0 the perturbations w n are chosen accordingly. Notice that this situation does not change for a more aggressive parameter choice.

Results with density estimates
The combination of the source condition Ran(A * ) ∩ ∂ TV(u † ) ̸= ∅ and the parameter choice (14) leads to uniform weak regularity estimates for the level sets of u α,w , and boundedness may also be deduced from those. More precisely, one obtains estimates of the form (19) for x ∈ ∂E s α,w and r ⩽ r 0 , where r 0 and C are independent of x, n and s, and where we have taken a representative of E s α,w for which the topological boundary equals the support of the derivative of its indicatrix [43, Prop. 12.19], that is, ∂E s α,w = supp D1 E s α,w . In turn, this support can be characterized [43, Prop. 12.19] by the property that for every x in it, the volume fractions |E s α,w ∩ B(x, r)|/|B(x, r)| lie strictly between 0 and 1 for all r > 0, where we note that it could be that these quotients tend to 0 or 1 as r → 0. We refer to the inequalities in (19) as inner and outer density estimates respectively, and to the combination of both as E s α,w satisfying uniform density estimates. Such estimates are the central tool for the results of convergence of level set boundaries in Hausdorff distance in [21, 38, 36, 37].
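The density estimates (19) can be visualized on the simplest smooth example: for E a disk and x on its boundary, the volume fractions |E ∩ B(x, r)|/|B(x, r)| tend to 1/2 as r → 0, and in particular stay away from 0 and 1. A Monte Carlo sketch (the radii and sample sizes are arbitrary choices of ours):

```python
import numpy as np

rng = np.random.default_rng(2)
x0 = np.array([1.0, 0.0])                  # a boundary point of E = unit disk
for r in (0.5, 0.1, 0.02):
    n = 20000
    theta = 2 * np.pi * rng.random(n)
    rad = r * np.sqrt(rng.random(n))       # uniform samples in the ball B(x0, r)
    pts = x0 + np.stack([rad * np.cos(theta), rad * np.sin(theta)], axis=1)
    frac = np.mean(np.linalg.norm(pts, axis=1) < 1.0)
    assert 0.2 < frac < 0.8                # both inner and outer density estimates
```

For sets that are only of finite perimeter these fractions can degenerate as r → 0, which is exactly what the uniform estimates (19) rule out for the level sets of minimizers.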
The proof of (19) is more involved than those in the previous section, but once it is obtained, boundedness of minimizers follows promptly:

Proposition 3. Assume that the E s α,w satisfy uniform density estimates at scales r ⩽ r 0 with constant C, and that α is chosen in terms of ∥w∥ L q (Σ) so that the family {u α,w } is bounded in L d/(d−1) (Ω). Then, in fact {u α,w } is uniformly bounded in L ∞ (Ω). If additionally we have a sequence u αn,wn → u † strongly in L p̃ (Ω) for some p̃ ⩾ 1, then also u αn,wn → u † strongly in L p (Ω) for all p such that p̃ ⩽ p < ∞.

Proof. If a level set E s α,w is nonempty, the density estimates give |E s α,w | ⩾ C|B(0, r 0 )|, which for |s| large and since r 0 is fixed contradicts the fact that the family {u α,w } is bounded in L d/(d−1) (Ω). For the second statement, we can argue as in (18).

Remark 7. In the Dirichlet case of Proposition 2 we have assumed only that Ω is a Lipschitz domain without further restrictions. In this context, unless Ω is convex, it is not true that E s is a minimizer of Per(E) among sets which may extend beyond Ω, while the functional u → (TV • E)(u) − ∫ Ω v α,w u is only sensitive to variations supported in Ω. This kind of variational problem is used in [36] and [38] to obtain the density estimates (19), but for that one needs (see [38, Lem. 9]) to extend v α,w by a variational curvature of Ω minorized by a suitable integrable function. The existence of such a curvature is an additional restriction on Ω, and it is not satisfied for domains with inner corners in R 2 , for example.
One can also make the bound slightly more explicit in terms of the constant and scale of the density estimates:

Remark 8. Let u have level sets E s := {sign(s)u > |s|} satisfying density estimates with constant C at scale r 0 . Then, we have ∥u∥ L ∞ ⩽ ( 1 / (C|B(0, r 0 )|) ) 1/p ∥u∥ L p . This is a direct application of the Markov inequality. Indeed, since there is always a boundary point x 0 , the level sets E s which are nonempty must satisfy |E s | ⩾ C|B(x 0 , r 0 )| = C|B(0, r 0 )|, which by the Markov inequality implies |s| p ⩽ ∥u∥ p L p / ( C|B(0, r 0 )| ) whenever E s ̸= ∅. In particular, this implies the claimed bound with constant ( 1 / (C|B(0, r 0 )|) ) 1/p .
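Remark 8 is a direct consequence of the Markov inequality; a numerical sketch with a piecewise constant u whose nonempty level sets all have measure at least m, where m plays the role of C|B(0, r 0 )| (the values of p, m and the grid are illustrative assumptions of ours):

```python
import numpy as np

# If every nonempty level set has measure >= m, Markov's inequality
# |{|u| > s}| <= ||u||_p^p / s^p forces ||u||_inf <= (||u||_p^p / m)^(1/p).
p, m = 2.0, 0.1                            # illustrative exponent and mass bound
x = np.linspace(0.0, 1.0, 100001)
dx = x[1] - x[0]
u = np.where(x < 0.1, 3.0, 1.0)            # nonempty level sets have measure >= 0.1

norm_p_pow = np.sum(np.abs(u) ** p) * dx   # ||u||_p^p, here 0.1*9 + 0.9*1 = 1.8
bound = (norm_p_pow / m) ** (1 / p)        # = sqrt(18) > 3 = ||u||_inf
assert np.max(np.abs(u)) <= bound
```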

Taut string with weights and unbounded radial examples
In this section we consider denoising of one-dimensional data with a modified Rudin-Osher-Fatemi (ROF) functional with weights in both terms, obtaining a taut string characterization of the solutions which reduces the problem to finding finitely many parameters.The main difficulty is that we allow weights which may degenerate at the boundary, which forces the use of weighted function spaces in all the arguments.

One-dimensional weighted ROF problem and optimality condition
We start with two weights ϕ, ρ on the interval (0, 1) for which ϕ, ρ > 0 on (0, 1), ρ ∈ Lip(0, 1), ϕ ∈ C(0, 1) ∩ L ∞ (0, 1), (20) where we notice that it could be that ϕ(x) → 0 or ρ(x) → 0 as x → 0 or x → 1. Using these weights we consider the weighted denoising minimization problem (21), where we define the weighted total variation with weight ρ in the usual way (see [1, 9], for example) as a supremum over test functions bounded by ρ, and the problem is considered in the weighted Lebesgue space L 2 ϕ (0, 1), which is a Hilbert space when considered with the inner product ⟨u, v⟩ := ∫ 0 1 u v ϕ. The predual variables and optimality conditions for (21) will then naturally be formulated on a weighted Sobolev space, namely W 1,2 1,1/ϕ (0, 1). The first step is to see that under the assumptions (20), the denoising problem is still well posed:

Lemma 2. Let ϕ, ρ satisfy (20). Then, there is a unique minimizer u α of (21).
Proof. With the assumptions on ϕ, ρ, these weights stay away from zero on any compact subset of (0, 1), which means that we have, for an open subset A with A ⊂⊂ (0, 1), a constant C A such that ∥u∥ L 1 (A) ⩽ C A ∥u∥ L 2 ϕ (0,1) for every u ∈ L 2 ϕ (0, 1). This means that along a minimizing sequence {u n } n for (21), we have a uniform bound on the energies. Applying Cauchy-Schwarz on the first term and since α > 0, we obtain sup n ( ∫ A |u n | + α TV(u n ; A) ) < +∞, which allows applying [2, Thm. 3.23] to conclude that a not relabeled subsequence of {u n } n converges in L 1 loc (0, 1) to some u ∞ ∈ BV loc (0, 1). Now, arguing as in [17, Prop. 1.3.1] we have that TV ρ is lower semicontinuous with respect to strong L 1 loc convergence. To see this, let z ∈ C 1 c (0, 1) with |z| ⩽ ρ, so that we have ∫ u n z ′ → ∫ u ∞ z ′ . Since the left hand side is bounded by TV ρ (u n ), this implies in particular that for any such z, ∫ u ∞ z ′ ⩽ lim inf n TV ρ (u n ). Taking the supremum over z as in (22), we obtain the semicontinuity (23). Moreover, we may take a further subsequence of {u n } n converging weakly in L 2 ϕ (0, 1) to a limit which clearly must be again u ∞ , and the first term of (21) is lower semicontinuous with respect to this convergence since it involves only the squared norm of L 2 ϕ (0, 1). This and (23) show that u ∞ is actually a minimizer of (21).
We have that u realizes the infimum in (21) if and only if (24) holds, where the subgradient means that v ∈ ∂ L 2 ϕ TV ρ (u) if and only if TV ρ (w̃) ⩾ TV ρ (u) + ⟨v, w̃ − u⟩ L 2 ϕ for all w̃ ∈ L 2 ϕ (0, 1). This set is characterized (with assumptions on the weights covering ours) in [8, Lem. 2.4], which in one dimension yields the following generalization of (8): there exists ξ ∈ L ∞ (0, 1) with |ξ| ⩽ 1 and (ρξ) ′ ∈ L 2 1/ϕ (0, 1), for which (25) holds, where it is to be noted that in comparison to (8), we always consider the product ρξ together, and generally avoid differentiating ρ alone. Like for (6), the second line of (25) generalizes integration by parts when ũ ∈ C ∞ c (0, 1), so we have that vϕ = −(ρξ) ′ in the sense of distributions, and ρξ ∈ W 1,2 1,1/ϕ (0, 1). In analogy to (7), it is tempting to interpret this equality in terms of boundary values. However, as remarked in [8, below Lem. 2.4], to which extent this is possible depends on further properties of ϕ. In our one-dimensional case, because ϕ ∈ L ∞ (0, 1) ∩ C(0, 1), the inverse 1/ϕ remains bounded away from zero on the closed interval and we have the embedding (26); as we will prove in Theorem 2, in fact we have that ρ(0)ξ(0) = ρ(1)ξ(1) = 0, in particular for v = (f − u)/α in (24). Since TV ρ is positively one-homogeneous, we recognize in the above that the first statement of (25) characterizes ∂ L 2 ϕ TV ρ (0), and by Fenchel duality in L 2 ϕ (0, 1) (as in [27, Thm. III.4.2] with Λ = Id), the minimization (21) is equivalent to a maximization problem which, since its last term does not involve v, is the familiar formulation as projection on a convex set.
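This projection formulation can be sketched in a discrete, unweighted setting (ϕ = ρ = 1): with D the forward difference, the minimizer of the discrete ROF energy is u = f − P(f), where P projects onto {D ⊤ ξ : |ξ| ⩽ α}. The following projected-gradient sketch (function names, step size and iteration count are our choices, not from the text) recovers the known two-point solution:

```python
import numpy as np

def rof_1d(f, alpha, iters=20000, tau=0.2):
    """Discrete 1D ROF min_u 0.5*||u - f||^2 + alpha*||Du||_1 through its dual:
    u = f minus the projection of f onto {D^T xi : |xi| <= alpha}."""
    xi = np.zeros(len(f) - 1)                  # dual variable, one per edge
    def Dt(xi):                                # transpose of the forward difference D
        return np.concatenate(([-xi[0]], -np.diff(xi), [xi[-1]]))
    for _ in range(iters):                     # projected gradient on 0.5*||Dt(xi)-f||^2
        xi = np.clip(xi - tau * np.diff(Dt(xi) - f), -alpha, alpha)
    return f - Dt(xi)

# Two-point check: for f2 - f1 > 2*alpha the minimizer is (f1 + alpha, f2 - alpha).
u = rof_1d(np.array([0.0, 2.0]), alpha=0.5)
assert np.allclose(u, [0.5, 1.5], atol=1e-4)
```

The clipping step is exactly the pointwise constraint |ξ| ⩽ α; in the weighted problem it would read |ξ| ⩽ αρ.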
For explicitly characterizing the minimizers, we will need to give a pointwise meaning to (24), which turns out to be the system (27), whose first line reads ∫ 0 x ( f (s) − u(s) ) ϕ(s) ds = −U (x) for all x ∈ (0, 1), where U ∈ W 1,2 1,1/ϕ,0 (0, 1) satisfies |U | ⩽ αρ, as we prove in Theorem 2 below. Here, avoiding density questions and owing to the embedding (26), we have directly defined W 1,2 1,1/ϕ,0 (0, 1) := { U ∈ W 1,2 1,1/ϕ (0, 1) : U (0) = U (1) = 0 }. A first remark about this characterization is that the last equality in the second line of (27) is made possible by the one-dimensional setting, since then functions in W 1,2 1,1/ϕ,0 (0, 1) are continuous. Whether such a characterization is possible in higher dimensions, even without weights, is a much more delicate question, see [13, 22, 24]. A further remark is that (27) implies in particular the condition (28), which is formulated only in terms of the minimizer u and does not involve additional derivatives. However, equation (28) only provides information on supp |Du|, so we cannot immediately conclude that if u satisfies it, then it must be a minimizer of (21).
Theorem 2. The Fenchel dual of (30), for which strong duality holds, is equivalent to the weighted ROF problem (21). Specifically, if U 0 is the minimizer of (30) and V 0 is optimal in the dual problem, then u 0 = V 0 /ϕ = U ′ 0 /ϕ + f is the minimizer of (21). Moreover, for the pair (u 0 , U 0 ) the condition (27) is satisfied (replacing u, U by u 0 , U 0 , respectively), and this condition characterizes optimality of this duality pair.

Proof. Following for example [34, Sec. 3.1] and using the notation of [27, Thm. III.4.2], we introduce the corresponding functionals, where Λ : W 1,2 1,1/ϕ,0 (0, 1) → L 2 1/ϕ (0, 1) and χ L denotes the indicator function of L, so that (30) can be written as a minimization over U ∈ W 1,2 1,1/ϕ,0 (0, 1). In this situation, the dual problem writes as a minimization over V ∈ L 2 1/ϕ (0, 1), and taking into account W 1,2 1,1/ϕ,0 (0, 1) ⊂ L 2 (0, 1) ⊂ W 1,2 1,1/ϕ,0 (0, 1) * we can compute the conjugates, where for the last inequality we have used that { U ∈ C 1 c (0, 1) : |U | ⩽ αρ } ⊂ L. We would like to have equality in this last inequality, which holds in particular when (32) does. Next, we notice that since DU = U ′ for U ∈ W 1,2 1,1/ϕ,0 , the statement (32) is in fact equivalent to density of C 1 c (0, 1) ∩ L in L in the strong W 1,2 1,1/ϕ topology. Such a density property cannot be obtained by directly mollifying elements of L, because ρ being nonconstant implies that the mollified functions could violate the pointwise constraint; a modified mollifying procedure has been considered in [35], but only for continuous ρ with a positive lower bound, which does not cover our case. Instead, we can replace V /ϕ in (32) by a sequence of smooth approximations for which TV ρ converges (that is, in strict convergence), and then pass to the limit. This type of approximation of TV ρ has been proved in [17, Thm. 4.1.6] for Lipschitz ρ but without the lower bound, allowing ρ to vanish at the boundary and thus covering our situation.
Summarizing, the dual problem writes as a minimization over V ∈ L 2 1/ϕ (0, 1) that clearly has the same minimizers as an equivalent reformulation, which in turn becomes (21) if we define u := V /ϕ and f := F ′ /ϕ, both of which belong to L 2 ϕ (0, 1). The optimality conditions for the minimizers U 0 , V 0 of this pair of problems are then given by the two lines of (33). Note that we use the results from [27], which means that the subgradients of F * are elements of the primal space W 1,2 1,1/ϕ,0 (0, 1) and not of the bidual W 1,2 1,1/ϕ,0 (0, 1) * * . In this case, we have (see [13, Prop. 7]) a characterization in which the closure is taken in the strong topology of W 1,2 1,1/ϕ,0 (0, 1). Moreover, we can also extend F (we denote by F̄ this extension) to C 0 ([0, 1]), the space of continuous functions on [0, 1] which vanish on the boundary. Then, F̄ * is defined on Radon measures and we have an analogous characterization in which the closure is now taken with respect to uniform convergence in [0, 1]. In fact, these two closures coincide, as shown in Lemma 3 below. This allows us to relate ∂F * (0) to subgradients of the weighted total variation norm for measures, which (see for example [34, Lem. 3.1]) admit an explicit characterization. With this, in the two lines of (33) we have that U ′ 0 = (u 0 − f )ϕ a.e., so that U 0 (x) = ∫ 0 x ( u 0 (s) − f (s) ) ϕ(s) ds for all x ∈ (0, 1), and |U 0 (x)| ⩽ αρ(x) for all x ∈ (0, 1), with which we arrive at (27).
Lemma 3. The two closures above coincide.

Proof. Since W 1,2 1,1/ϕ,0 (0, 1) is continuously embedded in C 0 ([0, 1]), we immediately have one containment. For the opposite containment, we consider Pasch-Hausdorff regularizations U ± n , with respect to the metric induced by the weight ϕ, of the positive and negative parts U ± of U . Defining U n (s) by U + n (s) if U (s) > 0 and −U − n (s) otherwise, so that again U n = U + n − U − n , we obtain regularized functions which are Lipschitz in this metric (this result dates back to [32], see also [31, Thm. 2.1] for a proof), hence belonging to W 1,∞ 1,1/ϕ,0 (0, 1) ⊂ W 1,2 1,1/ϕ,0 (0, 1), and which are by definition pointwise bounded by U , so they remain in the constraint set. Moreover, the U n converge uniformly to U in [0, 1], even if ϕ can vanish at the boundary. To see this, we notice that the functions U ± n are pointwise increasing with respect to n and converging to U ± respectively, which allows us to use Dini's theorem on both sequences.
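The Pasch-Hausdorff regularization used above is the inf-convolution f n (x) = inf y ( f (y) + n d(x, y) ); its key properties (pointwise minorant, Lipschitz with constant n, monotone increasing in n, hence uniformly convergent by Dini) can be checked on a grid. A sketch with the Euclidean metric in place of the ϕ-induced one (the weighted case is analogous; grid and constants are our choices):

```python
import numpy as np

def pasch_hausdorff(f, x, n):
    """Lipschitz (Pasch-Hausdorff) regularization f_n(x) = min_y f(y) + n|x - y|."""
    return np.array([np.min(f + n * np.abs(x - xi)) for xi in x])

x = np.linspace(0.0, 1.0, 201)
f = (x > 0.5).astype(float)                     # a jump: not Lipschitz
for n in (1.0, 4.0, 16.0):
    fn = pasch_hausdorff(f, x, n)
    assert np.all(fn <= f + 1e-12)              # pointwise minorant
    assert np.max(np.abs(np.diff(fn))) <= n * (x[1] - x[0]) + 1e-9   # n-Lipschitz
# Monotone in n: Dini's theorem then upgrades pointwise to uniform convergence.
assert np.all(pasch_hausdorff(f, x, 16.0) >= pasch_hausdorff(f, x, 4.0) - 1e-12)
```

Being a minorant is what keeps the regularized functions inside the constraint set in the proof above.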

Switching behavior
Defining the bijective transformation Φ(r) := ∫ 0 r ϕ(s) ds and setting Ǔ := U ∘ Φ −1 for U ∈ W 1,2 1,1/ϕ,0 (0, 1), we have that Ǔ ′ ∈ L 2 (0, Φ(1)) and, using the Poincaré inequality in W 1,2 0 (0, Φ(1)), that Ǔ ∈ L 2 (0, Φ(1)) as well. Likewise, from Ǔ ∈ W 1,2 0 (0, Φ(1)) we can conclude U ∈ W 1,2 1,1/ϕ,0 (0, 1) using the corresponding inequality. This implies that (30) is equivalent to the transformed problem, where similarly F := F ∘ Φ −1. In turn, arguing as in [50, Thm. 4.46], the unique minimizer of (35) is the same for any strictly convex integrand, so in particular it can be obtained from (36) by setting Ǔ = Û − F. This is now a 'generalized' taut string formulation that fits into those considered in [30, Lem. 5.4]. One can then argue as in [10, Lem. 9] (which is directly based on the previously cited result) to conclude that for any δ > 0 the interval (δ, Φ(1) − δ) can be partitioned into finitely many subintervals on which one or neither of the constraints is active. The reasoning behind such a result is that in the form (36), one is minimizing the Euclidean length of the graph of a continuous function with constraints from above and below, so as long as these constraints are at some positive distance ϵ apart, switching from one constraint to the other being active must cost at least ϵ and enforce a subinterval in which neither is active. In our case we need to restrict the interval to avoid the endpoints because, in our setting, the weights are potentially degenerate and ρ ∘ Φ −1 may vanish at 0 or Φ(1), and with it the distance between the two constraints. For such degenerate weights, it is enough to assume that each of the two functions F ± α ρ ∘ Φ −1 is either convex or concave on a neighborhood of each endpoint to ensure only finitely many changes of behavior.
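The effect of this transformation on the quadratic part can be sketched as follows, assuming (consistently with the notation Φ(1) above) that Φ(r) := ∫ 0 r ϕ(s) ds and that the dual energy carries the derivative weight 1/ϕ indicated by the space W 1,2 1,1/ϕ,0 (0, 1); with Ǔ := U ∘ Φ −1, so that U ′(r) = Ǔ ′(Φ(r)) ϕ(r), the substitution t = Φ(r) gives

```latex
\int_0^1 \frac{U'(r)^2}{\phi(r)}\,dr
  = \int_0^1 \check U'\big(\Phi(r)\big)^2\,\phi(r)\,dr
  = \int_0^{\Phi(1)} \check U'(t)^2\,dt,
```

that is, an unweighted Dirichlet energy on (0, Φ(1)).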
With this property, and taking into account Theorem 2, we can go back to (27) to see that the minimizer of (21) alternates between the three behaviors finitely many times on (δ, 1 − δ) for any δ > 0 (and on all of (0, 1) for data f which is not oscillating near 0 and 1), and that a subinterval corresponding to either the first or the last case is always followed by one on which u ′ = 0.

Denoising of unbounded radial data
We can apply the results above to find minimizers in the case of radially symmetric data f ∈ L 2 (B(0, 1)) of min which, considered in R d , corresponds to (21) with ρ(r) = ϕ(r) = r d−1 , where we remark that (37) for this particular case was claimed without proof in [39]. Indeed, it is easy to guess the form of this term by formally differentiating (27), but since this differentiation cannot be rigorously justified, we prefer to justify the behavior of minimizers by the arguments of the previous subsection. The earlier work [52] also contains some results about nearly explicit minimizers for piecewise linear or piecewise constant radial data.
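This reduction can be made explicit. For radial u (writing u(x) = u(|x|), with f likewise radial, and denoting by ω d−1 the perimeter of the unit sphere in R d ), a standard computation in polar coordinates, sketched here for u smooth away from the origin, gives

```latex
\int_{B(0,1)} \tfrac12 \big(u - f\big)^2 \,dx + \alpha\,\mathrm{TV}(u)
  = \omega_{d-1}\left( \int_0^1 \tfrac12 \big(u(r)-f(r)\big)^2 r^{d-1}\,dr
      + \alpha \int_0^1 |u'(r)|\, r^{d-1}\,dr \right),
```

so that, after dividing by the constant ω d−1 , one recovers the weighted one-dimensional problem (21) with ρ(r) = ϕ(r) = r d−1 .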
In particular, let us compute explicitly the solution in the case d = 3 with data f(r) = 1/r. In this case, using (39) we conclude that F + α ρ ∘ Φ −1 is concave for all α > 0, while F − α ρ ∘ Φ −1 is strictly concave for 0 < α < 1/(d − 1) = 1/2 and convex for α ⩾ 1/(d − 1) = 1/2. Therefore, (37) tells us that either u(r) = (1 − 2α)/r or u ′ (r) = 0 on each of finitely many subintervals. We consider the simplest case of two such intervals and check that we can satisfy the optimality condition for it. First, the Neumann boundary condition forces u ′ (1) = 0, since the radial weight equals one there. Moreover, if f ⩾ 0, the minimizer u of (38) satisfies u ⩾ 0 as well, because in this case replacing u with u + cannot increase either term of the energy. Taking this into account, as well as the fact that u should be continuous [18], we consider for c ∈ (0, 1) the family of candidates u α,c (r) := (1 − 2α)/r for r ∈ (0, c) and u α,c (r) := (1 − 2α)/c for r ∈ (c, 1), (40) where α ⩽ 1/2 is required so that u ⩾ 0 is satisfied. Moreover, we have also excluded segments with value (1 + 2α)/r, since in that case the fidelity cost is the same as for (1 − 2α)/r, but with a higher variation. To certify optimality of (40), we attempt to satisfy the pointwise characterization (27). The condition U(1) = 0, for U defined by the first line of (27), forces (41), which for 0 < α < 1/2 can be solved to find a c ∈ (0, 1), as can easily be seen using the intermediate value theorem. Now, since for the family u α,c one has Du/|Du| = −1 on (0, c] = supp(|Du|), one can check that for r on this interval the required condition from (27) holds; using the definition of u α,c , it is satisfied in particular if a pointwise inequality holds which, as long as 1/4 ⩽ α < 1/2 and taking into account (41), is satisfied as soon as c ⩽ r ⩽ 1.
Equation (41) can be solved directly, for example for α = 1/4, and we obtain a solution c 1/4 ∈ (0, 1), which guarantees that (40) is indeed a minimizer of (38). Thus, we have obtained an explicit example of an unbounded minimizer of (38), whose graph is depicted in Figure 1.
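These computations can be cross-checked numerically. The sketch below is our own reconstruction and makes two assumptions not spelled out above: that the first line of (27) reads U(x) = ∫ 0 x (u α,c (s) − f(s)) ϕ(s) ds with ϕ(s) = s 2 , so that with f(r) = 1/r the condition U(1) = 0, i.e. (41), reduces after an elementary integration to αc 2 + (1 − c 2 )/2 − (1 − 2α)(1 − c 3 )/(3c) = 0 (at α = 1/4, equivalently c 3 − 6c + 2 = 0); and that Φ(r) = r 3 /3, so that F ± α ρ ∘ Φ −1 = (1/2 ± α)(3t) 2/3 , whose concavity or convexity we test via second differences:

```python
import numpy as np

def residual(c, alpha=0.25):
    """Our reconstruction of condition (41): U(1) = 0 for the candidate
    u_{alpha,c} with d = 3 and f(r) = 1/r is equivalent to the vanishing of
    alpha c^2 + (1 - c^2)/2 - (1 - 2 alpha)(1 - c^3)/(3 c)."""
    return alpha * c**2 + (1.0 - c**2) / 2.0 \
        - (1.0 - 2.0 * alpha) * (1.0 - c**3) / (3.0 * c)

def bisect(g, lo, hi, tol=1e-13):
    """Plain bisection; assumes g changes sign on [lo, hi]."""
    glo = g(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if glo * g(mid) <= 0.0:
            hi = mid
        else:
            lo, glo = mid, g(mid)
    return 0.5 * (lo + hi)

c_quarter = bisect(residual, 1e-6, 1.0)   # the c_{1/4} of the text

# Concavity/convexity of F +/- alpha * (rho o Phi^{-1}) = (1/2 +/- alpha)(3 t)^{2/3}
# on (0, Phi(1)) = (0, 1/3), tested through second differences.
t = np.linspace(1e-3, 1.0 / 3.0, 400)
def second_differences(alpha, sign):
    g = (0.5 + sign * alpha) * (3.0 * t) ** (2.0 / 3.0)
    return np.diff(g, 2)
```

Under these assumptions, bisection gives c 1/4 ≈ 0.340, and the second differences confirm the claimed switch of F − α ρ ∘ Φ −1 from concave (α < 1/2) to convex (α ⩾ 1/2).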
Remark 9. We note here (as kindly pointed out by an anonymous reviewer) that using the techniques of [5] it is also possible to find an explicit vector field z satisfying the characterization (8), guaranteeing that (40) is a solution of the denoising problem with data 1/r. Denoting e r = x/|x| and z(x) = z(r) e r , so that div z(x) = z ′ (r) + (2/r) z(r) for d = 3, one can define a suitable z(r) for which continuity as well as L ∞ boundedness by 1 hold for c solving (41). Moreover, the remaining condition of (8) can be verified, in which case u α,c indeed solves (38). Furthermore, a construction along these lines would also be possible for Lemma 4 below, for which our proof is based on the same weighted taut string technique.
For other cases such an explicit computation may be impractical or impossible, but we can still use the taut string characterization to derive some conditions for local boundedness and unboundedness of radial minimizers: Lemma 4. Let f : (0, 1) → R be such that f| (δ,1) ∈ L 2 (δ, 1) for all δ ∈ (0, 1), and such that for some δ 0 ∈ (0, 1) we have (45) or (46), and consider the corresponding minimizer u of (21) with ρ(r) = ϕ(r) = r d−1 . Then, if u is unbounded on every neighborhood of 0, necessarily β ⩾ 1, and if β > 1, then u is unbounded on every neighborhood of 0. In particular, for d = 2 and f of this form belonging to L 2 r d−1 (0, 1), we have that u is always bounded on a neighborhood of 0.
Proof. First, we notice that we must have u ∈ L ∞ (δ, 1) for all δ > 0, (47) because on the interval (δ, 1) we have ρ ⩾ δ d−1 , which ensures that the restriction of u to it belongs to BV(δ, 1), and in one dimension we have the embedding BV(δ, 1) ⊂ L ∞ (δ, 1). Given (47) and using the equivalence with (38), we can conclude by applying the pointwise comparison principle for minimizers of TV denoising in B(0, 1) ⊂ R d (a proof of which can be found for example in [37, Prop. 3.6]) against inputs of the form treated in Lemma 4 above. Assuming (45), we choose δ 0 such that f(r) ⩾ r −β for r ∈ (0, δ 0 ), while for the case (46) we take δ 0 such that f(r) ⩽ r −β for r ∈ (0, δ 0 ), and in both cases compare with f defined accordingly. We can also see this criterion in light of the results of Section 3, in particular Propositions 1 and 2. There, the main requirement for boundedness of u is that there should be v ∈ ∂ L d/(d−1) TV(u) ⊂ L d (Ω). In the case of radial data on Ω = B(0, 1) ⊂ R d , we have worked with the ROF denoising problem, for which, by (37), we know that u α − f switches between ±α(d − 1)/| · | and zero on annuli. Moreover, the power growth 1/| · | is precisely the threshold for a function on R d to belong to L d (B(0, 1)), assuming it is in L ∞ (B(0, 1) \ B(0, δ)) for all δ. That is, for slower power growth as in (46) we have v α ∈ L d (B(0, 1)) and the minimizer is bounded, which we could also have proved using the techniques of Section 3. In contrast, for faster power growth as in (45) we have v α ∉ L d (B(0, 1)), so the methods of Section 3 are not applicable, and by Proposition 5 TV denoising in this case produces an unbounded minimizer.
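The integrability threshold invoked here is elementary: for a function behaving like |x| −β near the origin and bounded away from it, membership in L d (B(0, 1)) is decided in polar coordinates by

```latex
\int_{B(0,1)} |x|^{-\beta d}\,dx
  = \omega_{d-1} \int_0^1 r^{-\beta d}\, r^{d-1}\,dr
  = \omega_{d-1} \int_0^1 r^{d(1-\beta)-1}\,dr < \infty
  \iff \beta < 1,
```

so that v α = α(d − 1)/| · | corresponds exactly to the borderline case β = 1, for which the integral diverges logarithmically.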

Higher-order regularization terms
Here we treat problems regularized with two popular approaches combining derivatives of first and higher orders, with the goal of extending our boundedness results from Section 3 to these more involved settings. In this section, we limit ourselves to the case of functions defined on a bounded Lipschitz domain Ω ⊂ R d .
The first approach, for which we are able to prove an analogue of Theorem 1 in dimension 2, is the infimal convolution of first and second order total variation, first introduced in [23] (see also [14]), that is min where TV 2 (g) for g ∈ BV 2 (Ω) ⊂ W 1,1 (Ω) is the norm of the distributional Hessian as a matrix-valued Radon measure. Here and in the rest of this section, A and the exponents σ, q are as in Sections 1 and 3, i.e. A : L d/(d−1) (Ω) → L q (Σ) is a bounded linear operator and σ = min(q, 2).
The second approach, widely used in applications but harder to treat analytically, is the total generalized variation of second order, introduced in [15] and for which we use the characterization of [16, Thm. 3.1] to write min for which we denote the second term as TGV(u), not making the regularization parameters α 1 , α 2 explicit in the notation. Here, TD(z) denotes the 'total deformation' of the vector field z, defined in terms of its distributional symmetric gradient. Since we are working on a bounded Lipschitz domain, the inner infima are attained in both cases and the denoising functionals (48) and (49) have unique minimizers. For a proof, see [14, Prop. 4.8, Prop. 4.10] for (48), and [16, Thm. 3.1, Thm. 4.2], [12] and [14, Thm. 5.9, Prop. 5.17] for the TGV case (49).
Lemma 5. Assuming that u is a minimizer of (48) and the inner infimum is attained by some g u , we have the necessary and sufficient optimality condition (50). If (49) is minimized by u and the inner infimum is attained for some z u , we also have the necessary condition (51), where the functional J zu is defined by J zu (ũ) = |Dũ − z u |(Ω), and v is as in (50).
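The intersection-of-subgradients structure behind Lemma 5 can be illustrated on a scalar toy example, which is our own and not tied to the TV functionals: for f(x) = |x| and g(x) = x 2 /2, the exact infimal convolution (f □ g)(x) = inf y f(x − y) + g(y) is the Huber function, and its derivative at each x is the unique element of ∂f(x − y*) ∩ ∂g(y*) at the minimizing pair, namely x on [−1, 1] and sign(x) outside:

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 601)   # uniform grid with step 0.01

def inf_conv(xi):
    """(f box g)(xi) = min_y |xi - y| + y**2 / 2, by brute force over the grid."""
    return np.min(np.abs(xi - x) + 0.5 * x**2)

fg = np.array([inf_conv(xi) for xi in x])

# Closed form: the infimal convolution of |.| and (.)**2/2 is the Huber function.
huber = np.where(np.abs(x) <= 1.0, 0.5 * x**2, np.abs(x) - 0.5)

# Its slope matches the intersection rule: the minimizing pair has
# y* = clip(x, -1, 1), and  d|.|(x - y*)  intersected with  dg(y*) = {y*}
# is the single point clip(x, -1, 1).
slope = np.gradient(fg, x)
```

Since both the minimizing y* and x itself lie on the grid, the brute-force values agree with the closed form to machine precision away from rounding.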
Proof. We assume for the sake of simplicity that α 1 = α 2 = 1, and first prove (51). We notice that the first term in (49) is continuous with respect to u ∈ L d/(d−1) (Ω), so by [27, Prop. I.5.6] we have for an optimal u that (52) holds, and similarly for (48). Now, one would like to use the characterization of the subgradient of exact infimal convolutions as intersections of subgradients at the minimizing pair (see [55, Cor. 2.4.7], for example). Since the complete result cannot be applied directly to (52), we make explicit the parts of its proof that are applicable. To this end, let u* ∈ L d (Ω) belong to the right hand side of (52). Noticing that TGV(u) < +∞ implies u ∈ BV(Ω), since we may just consider z = 0 in the inner minimization, we have for all ũ ∈ L d/(d−1) (Ω) that the corresponding difference is ⩾ 0, which combined with (52) implies (51). To prove that (50) is necessary for optimality, assuming that u now minimizes (48), the analogous computation leads to v ∈ ∂ TV(u − g u ). To see that we also have v ∈ ∂ TV 2 (g u ), we consider u* ∈ L d (Ω) in the right hand side of (53), so that the required inequality holds for all ũ ∈ L d/(d−1) (Ω). Notice that this last part is not a priori possible for (49), due to the lack of symmetry in the variables involved. That (50) is also sufficient can be seen from [55, Cor. 2.4.7], since in this case we do have an infimal convolution. More generally, one can prove the following restricted analogue of Theorem 1. Let d = 2 and Ω be a bounded Lipschitz domain. Assume that for A : L 2 (Ω) → L q (Σ) linear and bounded, f ∈ L q (Σ) and a fixed γ > 0 there is a unique solution u † for Au = f which satisfies the source condition Ran(A*) ∩ ∂(TV □ γTV 2 )(u † ) ≠ ∅, where we denote (TV □ γTV 2 )(u) := inf g∈BV 2 (Ω) TV(u − g) + γ TV 2 (g).
In this case, there is some constant C(γ, q, σ, Ω) such that if α n , w n are sequences of regularization parameters and perturbations for which the analogous conditions of Theorem 1 hold, then the corresponding minimizers are bounded in L ∞ (Ω) and converge to u † strongly in L p (Ω). Proof. Since the proof follows by using the methods of Section 3 essentially verbatim, we provide just an outline highlighting the steps that pose significant differences. The first ingredient is to repeat the results on convergence and stability of dual variables summarized in Section 2.2, which in fact only depend on the regularization term being positively one-homogeneous and lower semicontinuous. Once these are obtained, from the characterization (50) and Proposition 1 one gets uniform L ∞ bounds for u n − g un . To finish, one notices that by comparing with u † in the minimization problem (54), we get, using Au † = f, that γ TV 2 (g un ) ⩽ (1/σ) ∥w n ∥ σ L q (Σ) + α n inf g∈BV 2 (Ω) TV(u † − g) + γ TV 2 (g), in which the first term of the right hand side is bounded above in terms of α n by the parameter choice, and (TV □ γTV 2 )(u † ) < +∞ by the source condition, so the bound on g un obtained by the embedding BV 2 (Ω) ⊂ L ∞ (Ω) is not just uniform in n but in fact vanishes as n → ∞. Since we have assumed that Ω is bounded, the strong convergence in L p (Ω) follows directly along the lines of the proof of Corollary 1.
In comparison with Theorem 1, the above result is weaker in two respects. The first is that we need to impose that the two regularization parameters maintain a constant ratio, in order to formulate a source condition in terms of a subgradient of a fixed functional. Moreover, the result is limited to d = 2 and bounded domains. Without the latter assumption, the upgrade to strong convergence in L p (Ω) would in particular require a common compact support for all u n . However, even assuming attainment of the inner infimum, using Lemma 5 we would only get a common support for u n − g un , and in the absence of a coarea formula for TV 2 it is not clear how to control that of g un .
For the case of TGV regularization it is unclear if one can also use the boundedness results of Section 3, which are based on subgradients of TV. A first observation is that we cannot use the TGV subgradients directly, except in trivial cases: if v ∈ ∂ TGV(u) then TGV(u) = ∫ Ω vu, and similarly for TV, as stated in (5). But since we assumed that v ∈ ∂ TV(u) as well, we must then have TGV(u) = ∫ Ω vu = TV(u).
In Appendix A we explore whether it is possible to use a more refined approach by trying to find elements of ∂ TV(u) from those of ∂J zu (u) appearing in (51) for TGV regularization, with a negative conclusion (at least without using further properties of z u ) in the form of an explicit counterexample.

Proposition 2. Proposition 1 also holds for bounded Lipschitz domains Ω and either homogeneous Dirichlet or Neumann boundary conditions on u α,w .
Now, it remains to check that the inequality |U| ⩽ αρ holds on (c, 1) (it holds on (0, c) thanks to the previous argument). Since at c both functions involved are equal, it is enough to show that sign(U(r)) (f(r) − u α,c (r)) ϕ(r) = (d/dr)|U(r)| ⩽ αρ ′ (r) for r > c,