1 Introduction

In the variational setting for imaging, given image data \(f\in \mathrm {L}^{s}(\mathrm {\Omega })\), \(\mathrm {\Omega }\subseteq \mathbb {R}^{2}\), one aims to reconstruct an image u by minimising a functional of the type

$$\begin{aligned} \min _{u\in X} \frac{1}{s}\Vert f-Tu\Vert _{\mathrm {L}^{s}(\mathrm {\Omega })}^{s}+\mathrm {\Psi }(u), \end{aligned}$$
(1.1)

over a suitable function space X. Here T denotes a linear and bounded operator that encodes the transformation or degradation that the original image has gone through. Random noise is also usually present in the degraded image, the statistics of which determine the norm in the first term of (1.1), the fidelity term. The presence of \(\mathrm {\Psi }\), the regulariser, makes the minimisation (1.1) a well–posed problem and its choice is crucial for the overall quality of the reconstruction. A classical regulariser in imaging is the total variation functional weighted with a parameter \(\alpha >0\), \(\alpha \mathrm {TV}\) [10], where

$$\begin{aligned} \mathrm {TV}(u):=\sup \left\{ \int _{\mathrm {\Omega }}u\,\mathrm {div}\phi \,dx:\; \phi \in C_{c}^{1}(\mathrm {\Omega },\mathbb {R}^{2}), \; \Vert \phi \Vert _{\infty }\le 1 \right\} . \end{aligned}$$
(1.2)
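In a discrete one dimensional setting, (1.2) reduces to a sum of absolute forward differences. The following is a minimal pure-Python sketch; the sample signals are our own illustrative choices, not data from the paper:

```python
def tv(u):
    """Discrete 1D total variation: the sum of absolute forward differences."""
    return sum(abs(u[i + 1] - u[i]) for i in range(len(u) - 1))

# For a step function the value equals the jump height,
# and for a monotone ramp it equals the total rise.
step = [0.0] * 50 + [2.0] * 50        # single jump of height 2
ramp = [0.1 * i for i in range(101)]  # rises monotonically from 0 to 10
```

A piecewise constant signal and a smooth ramp with the same total rise have the same TV value, which is precisely why TV does not penalise the staircasing discussed next.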

While \(\mathrm {TV}\) is able to preserve edges in the reconstructed image, it also promotes piecewise constant structures leading to undesirable staircasing artefacts. Several regularisers that incorporate higher order derivatives have been introduced in order to resolve this issue. The most prominent one has been the second order total generalised variation (\(\mathrm {TGV}\)) [3] which can be interpreted as a special type of infimal convolution of first and second order derivatives. Its definition reads

$$\begin{aligned} \mathrm {TGV}_{\alpha ,\beta }^{2}(u):=\min _{w\in \mathrm {BD}(\mathrm {\Omega })} \alpha \Vert Du-w\Vert _{\mathcal {M}}+\beta \Vert \mathcal {E}w\Vert _{\mathcal {M}}. \end{aligned}$$
(1.3)

Here \(\alpha ,\beta \) are positive parameters, \(\mathrm {BD}(\mathrm {\Omega })\) is the space of functions of bounded deformation, \(\mathcal {E}w\) is the distributional symmetrised gradient and \(\Vert \cdot \Vert _{\mathcal {M}}\) denotes the Radon norm of a finite Radon measure, i.e.,

$$\begin{aligned} \Vert \mu \Vert _{\mathcal {M}}=\sup \left\{ \int _{\mathrm {\Omega }}\phi \,d\mu :\; \phi \in C_{c}^{\infty }(\mathrm {\Omega },\mathbb {R}^{2}), \; \Vert \phi \Vert _{\infty }\le 1 \right\} . \end{aligned}$$

\(\mathrm {TGV}\) regularisation typically produces piecewise smooth reconstructions, eliminating the staircasing effect. A natural question is whether results of similar quality can be achieved using simpler, first order regularisers. For instance, it is known that Huber \(\mathrm {TV}\) can reduce the staircasing to an extent [6].

In [5], a family of first order infimal convolution type regularisation functionals is introduced, which reads

$$\begin{aligned} \mathrm {TVL}_{\alpha ,\beta }^{p}(u):=\min _{w\in \mathrm {L}^{p}(\mathrm {\Omega })} \alpha \Vert Du-w\Vert _{\mathcal {M}}+\beta \Vert w\Vert _{\mathrm {L}^{p}(\mathrm {\Omega })}, \end{aligned}$$
(1.4)

where \(1<p\le \infty \). While in [5] basic properties of (1.4) are shown for the general case \(1<p\le \infty \), see Proposition 1, the main focus there is on the finite p case. The \(\mathrm {TVL}^{p}\) regulariser is successfully applied to image denoising and decomposition, significantly reducing the staircasing effect and producing piecewise smooth results very similar to the solutions obtained by \(\mathrm {TGV}\). Exact solutions of the \(\mathrm {L}^{2}\) fidelity denoising problem are also computed there for simple one dimensional data.

1.1 Contribution of the Present Work

The purpose of the present paper is to examine more thoroughly the case \(p=\infty \), i.e.,

$$\begin{aligned} \mathrm {TVL}_{\alpha ,\beta }^{\infty }(u):=\min _{w\in \mathrm {L}^{\infty }(\mathrm {\Omega })} \alpha \Vert Du-w\Vert _{\mathcal {M}}+\beta \Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}, \end{aligned}$$
(1.5)

and the use of the \(\mathrm {TVL}^{\infty }\) functional in \(\mathrm {L}^{2}\) fidelity denoising

$$\begin{aligned} \min _{u} \frac{1}{2}\Vert f-u\Vert _{\mathrm {L}^{2}(\mathrm {\Omega })}^{2}+\mathrm {TVL}_{\alpha ,\beta }^{\infty }(u). \end{aligned}$$
(1.6)

We study the one dimensional version of (1.6) thoroughly, computing exact solutions for data f given by a piecewise constant and a piecewise affine step function. We show that the solutions are piecewise affine and we point out some similarities and differences to \(\mathrm {TGV}\) solutions. The functional \(\mathrm {TVL}_{\alpha ,\beta }^{\infty }\) is further tested for Gaussian denoising. We show that \(\mathrm {TVL}_{\alpha ,\beta }^{\infty }\), unlike \(\mathrm {TGV}\), is able to recover hat–like structures, a property that is already present in the \(\mathrm {TVL}_{\alpha ,\beta }^{p}\) regulariser for large values of p, see [5], and is enhanced here. After discussing some limitations of our model, we propose an extension in which the parameter \(\beta \) is spatially varying, i.e., \(\beta =\beta (x)\), and discuss a rule for selecting its values. The resulting denoised images are comparable to the \(\mathrm {TGV}\) reconstructions, and indeed the model has the potential to produce even better results.

2 Properties of the \(\mathrm {TVL}_{\alpha ,\beta }^{\infty }\) Functional

The following properties of the \(\mathrm {TVL}_{\alpha ,\beta }^{\infty }\) functional are shown in [5]. We refer the reader to [5, 9] for the corresponding proofs and to [1] for an introduction to the space of functions of bounded variation \(\mathrm {BV}(\mathrm {\Omega })\).

Proposition 1

[5]. Let \(\alpha ,\beta >0\), \(d\ge 1\), let \(\mathrm {\Omega }\subseteq \mathbb {R}^{d}\) be an open, connected domain with Lipschitz boundary and define for \(u\in \mathrm {L}^{1}(\mathrm {\Omega })\)

$$\begin{aligned} \mathrm {TVL}_{\alpha ,\beta }^{\infty }(u):=\min _{w\in \mathrm {L}^{\infty }(\mathrm {\Omega })} \alpha \Vert Du-w\Vert _{\mathcal {M}}+\beta \Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}. \end{aligned}$$
(2.1)

Then we have the following:

  (i)

    \(\mathrm {TVL}_{\alpha ,\beta }^{\infty }(u)<\infty \) if and only if \(u\in \mathrm {BV}(\mathrm {\Omega })\).

  (ii)

    If \(u\in \mathrm {BV}(\mathrm {\Omega })\) then the minimum in (2.1) is attained.

  (iii)

    \(\mathrm {TVL}_{\alpha ,\beta }^{\infty }(u)\) can equivalently be defined as

    $$\begin{aligned} \mathrm {TVL}_{\alpha ,\beta }^{\infty }(u)=\sup \left\{ \int _{\mathrm {\Omega }}u\,\mathrm {div}\phi \,dx:\; \phi \in C_{c}^{1}(\mathrm {\Omega },\mathbb {R}^{d}), \; \Vert \phi \Vert _{\infty }\le \alpha ,\; \Vert \phi \Vert _{\mathrm {L}^{1}(\mathrm {\Omega })}\le \beta \right\} \!, \end{aligned}$$

    and \(\mathrm {TVL}_{\alpha ,\beta }^{\infty }\) is lower semicontinuous w.r.t. the strong \(\mathrm {L}^{1}\) convergence.

  (iv)

    There exist constants \(0<C_{1}<C_{2}<\infty \) such that

    $$\begin{aligned} C_{1}\mathrm {TV}(u)\le \mathrm {TVL}_{\alpha ,\beta }^{\infty }(u)\le C_{2}\mathrm {TV}(u),\quad \text {for all }u\in \mathrm {BV}(\mathrm {\Omega }). \end{aligned}$$
  (v)

    If \(f\in \mathrm {L}^{2}(\mathrm {\Omega })\), then the minimisation problem

    $$\begin{aligned} \min _{u\in \mathrm {BV}(\mathrm {\Omega })} \frac{1}{2}\Vert f-u\Vert _{\mathrm {L}^{2}(\mathrm {\Omega })}^{2}+\mathrm {TVL}_{\alpha ,\beta }^{\infty }(u), \end{aligned}$$

    has a unique solution.
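In a discrete one dimensional setting (grid spacing absorbed into the parameters), the value of \(\mathrm {TVL}_{\alpha ,\beta }^{\infty }\) can be computed by a scalar search: for a fixed budget \(t=\Vert w\Vert _{\infty }\) the best w clips each finite difference to \([-t,t]\), and the resulting cost is piecewise linear in t. The following pure-Python sketch is our own illustration, not code from [5]:

```python
def tvl_inf(u, alpha, beta):
    """Discrete 1D TVL-infinity: min over w of alpha*||Du - w||_1 + beta*||w||_inf.

    For a fixed budget t = ||w||_inf, the optimal w_i clips the difference d_i
    to [-t, t], so ||Du - w||_1 collapses to sum(max(|d_i| - t, 0)).  The
    resulting cost is piecewise linear in t, hence minimised at t = 0 or at
    one of the breakpoints t = |d_i|.
    """
    d = [u[i + 1] - u[i] for i in range(len(u) - 1)]

    def cost(t):
        return alpha * sum(max(abs(di) - t, 0.0) for di in d) + beta * t

    return min(cost(t) for t in [0.0] + [abs(di) for di in d])
```

For large \(\beta \) the minimum sits at \(t=0\) and the value reduces to \(\alpha \) times the discrete total variation, consistent with property (iv) and with the regime \(\mathrm {TVL}_{\alpha ,\beta }^{\infty }=\alpha \mathrm {TV}\) for large \(\beta /\alpha \) noted in [5].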

3 The One Dimensional \(\mathrm {L^{2}}\)\(\mathrm {TVL}_{\alpha ,\beta }^{\infty }\) Denoising Problem

In order to get an intuition about the underlying regularising mechanism of the \(\mathrm {TVL}_{\alpha ,\beta }^{\infty }\) regulariser, we study here the one dimensional \(\mathrm {L}^{2}\) denoising problem

$$\begin{aligned} \min _{\begin{array}{c} u\in \mathrm {BV}(\mathrm {\Omega }) \\ w\in \mathrm {L}^{\infty }(\mathrm {\Omega }) \end{array}} \frac{1}{2}\Vert f-u\Vert _{\mathrm {L}^{2}(\mathrm {\Omega })}^{2}+\alpha \Vert Du-w\Vert _{\mathcal {M}}+\beta \Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}. \end{aligned}$$
(3.1)

In particular, we present exact solutions for simple data functions. In order to do so, we use the following theorem:

Theorem 2

(Optimality conditions). Let \(f\in \mathrm {L}^{2}(\mathrm {\Omega })\). A pair \((u,w)\in \mathrm {BV}(\mathrm {\Omega })\times \mathrm {L}^{\infty }(\mathrm {\Omega })\) is a solution of (3.1) if and only if there exists a unique \(\phi \in \mathrm {H}_{0}^{1}(\mathrm {\Omega })\) such that

$$\begin{aligned} \phi '&= u-f, \end{aligned}$$
(3.2)
$$\begin{aligned} \phi&\in \alpha \mathrm {Sgn}(Du-w),\end{aligned}$$
(3.3)
$$\begin{aligned} \phi&\in {\left\{ \begin{array}{ll} \left\{ \psi \in \mathrm {L}^{1}(\mathrm {\Omega }):\; \Vert \psi \Vert _{\mathrm {L}^{1}(\mathrm {\Omega })}\le \beta \right\} , &{} \text { if } w=0,\\ \left\{ \psi \in \mathrm {L}^{1}(\mathrm {\Omega }):\; \langle \psi , w \rangle =\beta \Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })},\; \Vert \psi \Vert _{\mathrm {L}^{1}(\mathrm {\Omega })}\le \beta \right\} , &{} \text { if }w\ne 0. \end{array}\right. } \end{aligned}$$
(3.4)

Recall that for a finite Radon measure \(\mu \), \(\mathrm {Sgn}(\mu )\) is defined as

$$\begin{aligned} \mathrm {Sgn}(\mu )=\left\{ \phi \in \mathrm {L}^{\infty }(\mathrm {\Omega })\cap \mathrm {L}^{\infty }(\mathrm {\Omega },\mu ):\; \Vert \phi \Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}\le 1,\; \phi =\frac{d\mu }{d|\mu |}, \; |\mu |-\text {a.e.} \right\} . \end{aligned}$$

As shown in [4], \(\mathrm {Sgn}(\mu )\cap C_{0}(\mathrm {\Omega })=\partial \Vert \cdot \Vert _{\mathcal {M}}(\mu )\cap C_{0}(\mathrm {\Omega })\).

Proof

The proof of Theorem 2 is based on Fenchel–Rockafellar duality theory and follows closely the corresponding proof of the finite p case. We thus omit it and refer the reader to [5, 9] for further details; see also [4, 8] for the analogous optimality conditions for the one dimensional \(\mathrm {L}^{1}\)–\(\mathrm {TGV}\) and \(\mathrm {L}^{2}\)–\(\mathrm {TGV}\) problems.    \(\square \)

The following proposition states that the solution u of (3.1) is essentially piecewise affine.

Proposition 3

(Affine structures). Let \((u,w)\) be an optimal solution pair for (3.1) and let \(\phi \) be the corresponding dual function. Then \(|w|=\Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}\) a.e. in the set \(\{\phi \ne 0\}\). Moreover, \(|u'|=\Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}\) whenever \(u>f\) or \(u<f\).

Proof

Suppose that there exists a set \(U\subseteq \{\phi \ne 0\}\) of positive measure such that \(|w(x)|<\Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}\) for every \(x\in U\). Then

$$\begin{aligned} \int _{\mathrm {\Omega }}\phi w\,dx&\le \int _{\mathrm {\Omega }\setminus U} |\phi ||w|\,dx +\int _{U}|\phi ||w|\,dx<\Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}\left( \int _{\mathrm {\Omega }\setminus U}|\phi |\,dx + \int _{U}|\phi |\,dx\right) \\&=\Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}\Vert \phi \Vert _{\mathrm {L}^{1}(\mathrm {\Omega })}=\beta \Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}, \end{aligned}$$

where we used the fact that \(\Vert \phi \Vert _{\mathrm {L}^{1}(\mathrm {\Omega })}\le \beta \) from (3.4). However, this contradicts the fact that \(\langle \phi , w \rangle =\beta \Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}\), also from (3.4). Note also that from (3.2) we have \(\{u>f\}\cup \{u<f\}\subseteq \{\phi \ne 0\}\) up to null sets. Thus, the last statement of the proposition follows from the fact that whenever \(u>f\) or \(u<f\), then \(u'=w\) there. This last fact can be shown exactly as in the corresponding \(\mathrm {TGV}\) problems, see [4, Prop. 4.2].   \(\square \)

Piecewise affinity is typically a characteristic of higher order regularisation models, e.g. \(\mathrm {TGV}\). Indeed, as the next proposition shows, \(\mathrm {TGV}\) and \(\mathrm {TVL}^{\infty }\) regularisation coincide in some simple special cases.

Proposition 4

The one dimensional functionals \(\mathrm {TGV}_{\alpha ,\beta }^{2}\) and \(\mathrm {TVL}_{\alpha ,2\beta }^{\infty }\) coincide in the class of those \(\mathrm {BV}\) functions u, for which an optimal w in both definitions of \(\mathrm {TGV}\) and \(\mathrm {TVL}^{\infty }\) is odd and monotone.

Proof

Note first that every odd, monotone and bounded function w satisfies \(\Vert Dw\Vert _{\mathcal {M}}=2\Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}\); denote this set of functions by \(\mathcal {A}\subseteq \mathrm {BV}(\mathrm {\Omega })\). For a \(\mathrm {BV}\) function u as in the statement of the proposition we have

$$\begin{aligned} \mathrm {TGV}_{\alpha ,\beta }^{2}(u)&=\min _{w\in \mathcal {A}}\; \alpha \Vert Du-w\Vert _{\mathcal {M}}+\beta \Vert Dw\Vert _{\mathcal {M}}\\&=\min _{w\in \mathcal {A}}\; \alpha \Vert Du-w\Vert _{\mathcal {M}}+2\beta \Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })} = \mathrm {TVL}_{\alpha ,2\beta }^{\infty }(u). \end{aligned}$$

   \(\square \)
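The identity \(\Vert Dw\Vert _{\mathcal {M}}=2\Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}\) used in the proof is easy to confirm on a discretised odd, monotone sample; in the following pure-Python sketch the tanh profile and the grid are our own illustrative choices:

```python
import math

# Odd, monotone, bounded sample on a symmetric grid over (-L, L).
L, n = 2.0, 401
xs = [-L + 2 * L * i / (n - 1) for i in range(n)]
w = [math.tanh(x) for x in xs]

# Discrete total variation of w: for a monotone sequence the absolute
# differences telescope to w[-1] - w[0], which equals 2*max|w| by oddness.
tv_w = sum(abs(w[i + 1] - w[i]) for i in range(n - 1))
sup_w = max(abs(v) for v in w)
```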

Exact Solutions: We present exact solutions for the minimisation problem (3.1), for two simple functions \(f,g:(-L,L)\rightarrow \mathbb {R}\) as data, where \(f(x)=h\mathcal {X}_{(0,L)}(x)\) and \(g(x)=f(x)+\lambda x\), with \(\lambda ,h>0\). Here \(\mathcal {X}_{C}(x)=1\) for \(x\in C\) and 0 otherwise.

With the help of the optimality conditions (3.2)–(3.4) we are able to compute all possible solutions of (3.1) for data f and g and for all values of \(\alpha \) and \(\beta \). These solutions are depicted in Fig. 1. Every coloured region corresponds to a different type of solution. Note that there are regions where the solutions coincide with the corresponding solutions of \(\mathrm {TV}\) minimisation, see the blue and red regions in Fig. 1a and the blue, purple and red regions in Fig. 1b. This is not surprising since, as shown in [5] for all dimensions, \(\mathrm {TVL}_{\alpha ,\beta }^{\infty }=\alpha \mathrm {TV}\) whenever \(\beta /\alpha \ge |\mathrm {\Omega }|^{1/q}\) with \(1/p+1/q=1\) and \(p\in (1,\infty ]\). Notice also the presence of affine structures in all solutions, as predicted by Proposition 3. For demonstration purposes, we present the computation of the exact solution that corresponds to the yellow region of Fig. 1b and refer to [9] for the rest.

Fig. 1.

Exact solutions for the \(\mathrm {L}^{2}\)\(\mathrm {TVL}_{\alpha ,\beta }^{\infty }\) one dimensional denoising problem (Color figure online)

Since we require a piecewise affine solution, from symmetry and (3.2) we have that \(\phi (x)=(c_{1}-\lambda )\frac{x^{2}}{2}-c_{2}|x|+c_{3}\). Since we require u to have a discontinuity at 0, (3.3) implies \(\phi (0)=\alpha \), while from the fact that \(\phi \in \mathrm {H}_{0}^{1}(\mathrm {\Omega })\) and from (3.4) we must have \(\phi (-L)=0\) and \(\langle \phi ,w \rangle =\beta \Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}\). These conditions give

$$\begin{aligned} c_{1}=\frac{6(\alpha L-\beta )}{L^{3}}+\lambda ,\quad c_{2}=\frac{4\alpha L-3\beta }{L^{2}},\quad c_{3}=\alpha . \end{aligned}$$

We also have \(c_{1}=u'=w\) and thus we require \(c_{1}>0\). Since we have a jump at \(x=0\) we also require \(g(0)<u(0)<h\), i.e., \(0<c_{2}<\frac{h}{2}\), and \(u(-L)>g(-L)\), i.e., \(\phi '(-L)>0\). These inequalities translate to the conditions

$$\begin{aligned} \left\{ \beta<\alpha L+\frac{\lambda L^{3}}{6},\; \beta>\frac{4\alpha L}{3}-\frac{hL^{2}}{6},\; \beta >\frac{2\alpha L}{3},\; \beta <\frac{4\alpha L}{3} \right\} , \end{aligned}$$

which define the yellow region in Fig. 1b. We can now easily compute u:

$$\begin{aligned} u(x)= {\left\{ \begin{array}{ll} \big (\frac{6(\alpha L -\beta )}{L^{3}} +\lambda \big )x+h-\frac{4\alpha L-3\beta }{L^{2}},&{} \; x\in (0,L),\\ \big (\frac{6(\alpha L -\beta )}{L^{3}} +\lambda \big )x+\frac{4\alpha L-3\beta }{L^{2}},&{} \; x\in (-L,0). \end{array}\right. } \end{aligned}$$

Observe that when \(\beta =\alpha L\), apart from the discontinuity, we can also recover the slope of the data \(g'=\lambda \), something that neither \(\mathrm {TV}\) nor \(\mathrm {TGV}\) regularisation can give for this example, see [8, Sect. 5.2].
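As a sanity check, the constants above can be verified numerically against the conditions \(\phi (-L)=0\), \(\phi (0)=\alpha \) and \(\langle \phi ,w\rangle =\beta \Vert w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}\), which for the constant \(w=c_{1}\) reduces to \(\int _{-L}^{L}\phi \,dx=\beta \). The parameter values in this sketch are our own illustrative choices inside the yellow region:

```python
# Illustrative parameters satisfying the yellow-region conditions:
# beta < alpha*L + lam*L^3/6  and  2*alpha*L/3 < beta < 4*alpha*L/3.
L, alpha, beta, lam, h = 1.0, 1.0, 1.0, 1.0, 10.0

c1 = 6 * (alpha * L - beta) / L**3 + lam
c2 = (4 * alpha * L - 3 * beta) / L**2
c3 = alpha

def phi(x):
    # Dual variable: phi(x) = (c1 - lam) x^2 / 2 - c2 |x| + c3.
    return (c1 - lam) * x**2 / 2 - c2 * abs(x) + c3

# <phi, w> = beta*||w||_inf with constant w = c1 reduces to the integral
# of phi over (-L, L) equalling beta; approximate it by the trapezoid rule.
n = 20001
xs = [-L + 2 * L * i / (n - 1) for i in range(n)]
dx = 2 * L / (n - 1)
integral = sum((phi(xs[i]) + phi(xs[i + 1])) / 2 * dx for i in range(n - 1))
```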

4 Numerical Experiments

In this section we present our numerical experiments for the discretised version of \(\mathrm {L}^{2}\)\(\mathrm {TVL}_{\alpha ,\beta }^{\infty }\) denoising. We solve (3.1) using the split Bregman algorithm, see [9, Chap. 4] for more details.
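To convey the algorithmic structure, the following is a minimal split Bregman sketch. For brevity it is written for the simpler 1D TV denoising problem \(\min _{u}\frac{1}{2}\Vert f-u\Vert _{2}^{2}+\alpha \Vert Du\Vert _{1}\) rather than for the full problem (3.1), which additionally carries the w variable and an \(\mathrm {L}^{\infty }\) proximal step; all names and parameter values are our own illustrative choices, not the implementation of [9]:

```python
def shrink(x, t):
    """Soft thresholding: the proximal map of t*|.|."""
    return max(abs(x) - t, 0.0) * (1.0 if x >= 0 else -1.0)

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-, main and super-diagonals a, b, c."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def tv_denoise_split_bregman(f, alpha, mu=1.0, iters=200):
    """min_u 0.5*||f - u||^2 + alpha*||Du||_1 via split Bregman.

    d approximates the forward differences Du, b is the Bregman variable;
    the u-update solves (I + mu*D^T D) u = f + mu*D^T (d - b).
    """
    n = len(f)
    d = [0.0] * (n - 1)
    b = [0.0] * (n - 1)
    u = list(f)
    for _ in range(iters):
        # Right-hand side f + mu*D^T(d - b), D the forward difference matrix.
        r = list(f)
        for i in range(n - 1):
            v = mu * (d[i] - b[i])
            r[i] -= v
            r[i + 1] += v
        # Tridiagonal matrix I + mu*D^T D (diagonal 1+mu at the ends).
        main = [1.0 + 2.0 * mu] * n
        main[0] = main[-1] = 1.0 + mu
        u = thomas([-mu] * n, main, [-mu] * n, r)
        # Shrinkage and Bregman updates.
        for i in range(n - 1):
            du = u[i + 1] - u[i]
            d[i] = shrink(du + b[i], alpha / mu)
            b[i] += du - d[i]
    return u
```

The structure mirrors what is needed for (3.1): only the shrinkage step changes, since the \(\beta \Vert w\Vert _{\infty }\) term contributes a proximal map involving a projection onto an \(\mathrm {L}^{1}\) ball.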

Fig. 2.

One dimensional numerical examples

First we present some one dimensional examples that numerically verify our analytical results. Figure 2a shows the \(\mathrm {TVL}^{\infty }\) result for the function \(g(x)=h\mathcal {X}_{(0,L)}(x)+\lambda x\), where \(\alpha , \beta \) belong to the yellow region of Fig. 1b. Note that the numerical and the analytical results coincide. We have also computed the \(\mathrm {TGV}\) solution, with parameters selected so that \(\Vert f-u_{\mathrm {TGV}}\Vert _{2}=\Vert f-u_{\mathrm {TVL}^{\infty }}\Vert _{2}\). Figure 2b shows a numerical verification of Proposition 4. There, the \(\mathrm {TVL}^{\infty }\) parameters are \(\alpha =2\) and \(\beta =4\), while the \(\mathrm {TGV}\) ones are \(\alpha =2\) and \(\beta =2\). Both solutions coincide, since they satisfy the symmetry properties of the proposition.

Fig. 3.

\(\mathrm {TVL}^{\infty }\) based denoising and comparison with the corresponding \(\mathrm {TV}\) and \(\mathrm {TGV}\) results. All the parameters have been optimised for best SSIM.

We proceed now to two dimensional examples, starting with Fig. 3. There we used a synthetic image corrupted by additive Gaussian noise of \(\sigma =0.01\), cf. Figs. 3a and b. We observe that \(\mathrm {TVL}^{\infty }\) denoises the image in a staircasing–free way in Fig. 3c. In order to do so, however, one has to use large values of \(\alpha \) and \(\beta \), which leads to a loss of contrast. This can easily be treated by solving the Bregman iteration version of \(\mathrm {L}^{2}\)–\(\mathrm {TVL}_{\alpha ,\beta }^{\infty }\) minimisation, that is

$$\begin{aligned} \begin{aligned} u^{k+1}&=\underset{u,w}{{\text {argmin}}}\; \frac{1}{2} \Vert f-v^{k}-u\Vert _{2}^{2}+\alpha \Vert \nabla u -w\Vert _{1}+\beta \Vert w\Vert _{\infty },\\ v^{k+1}&=v^{k}+f-u^{k+1}. \end{aligned} \end{aligned}$$
(4.1)

Bregman iteration has been widely used to counteract the loss of contrast in this type of regularisation method, see [2, 7] among others. For a fair comparison we also employ the Bregman iteration versions of \(\mathrm {TV}\) and \(\mathrm {TGV}\) regularisation. The Bregman iteration version of \(\mathrm {TVL}^{\infty }\) regularisation produces a result visually very similar to the Bregman iteration version of \(\mathrm {TGV}\), even though it has a slightly smaller SSIM value, cf. Figs. 3g and h. However, \(\mathrm {TVL}^{\infty }\) is able to better reconstruct the sharp spike in the middle of the image, cf. Figs. 3d, e and j.
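The outer loop (4.1) only needs the variational denoiser as a black box: denoise, add the residual back to the data, and denoise again. The sketch below illustrates this contrast-restoring mechanism with a simple linear smoother standing in for the \(\mathrm {TVL}^{\infty }\) solve; the smoother and the iteration count are our own illustrative choices:

```python
def smooth(u):
    """Stand-in denoiser: one pass of a (1/4, 1/2, 1/4) moving average."""
    n = len(u)
    out = list(u)
    for i in range(1, n - 1):
        out[i] = 0.25 * u[i - 1] + 0.5 * u[i] + 0.25 * u[i + 1]
    return out

def bregman_iteration(f, denoise, iters=5):
    """Outer loop of (4.1): add the residual back and denoise again.

    u^{k+1} = denoise(f + v^k),  v^{k+1} = v^k + f - u^{k+1}.
    Successive iterates regain contrast lost by the denoiser.
    """
    v = [0.0] * len(f)
    history = []
    for _ in range(iters):
        u = denoise([fi + vi for fi, vi in zip(f, v)])
        v = [vi + fi - ui for fi, vi, ui in zip(f, v, u)]
        history.append(u)
    return history
```

With a linear smoother the residual after k steps is \((I-S)^{k}f\), so the iterates move monotonically back towards the data, which is exactly the contrast-restoring behaviour exploited above.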

The good reconstruction results in this particular example are due to the fact that the modulus of the gradient is essentially constant away from the jump points. This is favoured by \(\mathrm {TVL}^{\infty }\) regularisation, which promotes gradients of constant modulus, as proved rigorously in dimension one in Proposition 3. We expect that a similar analytic result holds in higher dimensions and leave that for future work. However, this behaviour is restrictive when gradients of different magnitudes are present, see Fig. 4. There we see that in order to obtain a staircasing–free result with \(\mathrm {TVL^{\infty }}\) we also suffer a loss of geometric information, Fig. 4b, as the model tries to fit an image with constant gradient, see the middle row profiles in Fig. 4c. While improved results can be achieved with the Bregman iteration version, Fig. 4d, the outcome is still not fully satisfactory, as an affine staircasing is now present in the image, Fig. 4e.

Fig. 4.

Illustration of the fact that \(\mathrm {TVL^{\infty }}\) regularisation favours gradients of fixed modulus

Fig. 5.

\(\mathrm {TVL}^{\infty }\) reconstructions for different values of \(\beta \). The best result is obtained by spatially varying \(\beta \), setting it inversely proportional to the gradient, where we obtain a similar result to the \(\mathrm {TGV}\) one

Spatially Adapted \(\varvec{\mathrm {TVL}^{\infty }}\): One way to allow for different gradient values in the reconstruction, or in other words allow the variable w to take different values, is to treat \(\beta \) as a spatially varying parameter, i.e., \(\beta =\beta (x)\). This leads to the spatially adapted version of \(\mathrm {TVL}^{\infty }\):

$$\begin{aligned} \mathrm {TVL}_{\mathrm {s.a.}}^{\infty }(u)=\min _{w\in \mathrm {L}^{\infty }(\mathrm {\Omega })} \alpha \Vert Du-w\Vert _{\mathcal {M}}+\Vert \beta w\Vert _{\mathrm {L}^{\infty }(\mathrm {\Omega })}. \end{aligned}$$
(4.2)

The idea is to choose \(\beta \) large in areas where the gradient is expected to be small and vice versa, see Fig. 5 for a simple illustration. In this example the slope inside the inner square is roughly twice the slope outside. We can achieve a perfect reconstruction inside the square by setting \(\beta =3500\), at the cost of artefacts outside, see Fig. 5a, and a perfect reconstruction outside by doubling that value, i.e., \(\beta =7000\), Fig. 5b; in that case, artefacts appear inside the square. By setting a spatially varying \(\beta \) with a ratio \(\beta _{out}/\beta _{in}\simeq 2\) and using the Bregman iteration version, we achieve an almost perfect result, visually very similar to the \(\mathrm {TGV}\) one, Figs. 5c and d. This example suggests that ideally \(\beta \) should be inversely proportional to the gradient of the ground truth. Since this information is not available in practice, we use a pre-filtered version of the noisy image and set

$$\begin{aligned} \beta (x)=\frac{c}{|\nabla f_{\sigma }(x)|+\epsilon }. \end{aligned}$$
(4.3)
Fig. 6.

Best reconstruction of the “Ladybug” in terms of SSIM using \(\mathrm {TV}\), \(\mathrm {TGV}\) and spatially adapted \(\mathrm {TVL^{\infty }}\) regularisation. The \(\beta \) for the latter is computed both from the filtered image (Fig. 6g) and from the ground truth image (Fig. 6h)

Here c is a positive constant to be tuned, \(\epsilon >0\) is a small parameter and \(f_{\sigma }\) denotes a smoothing of the data f with a Gaussian kernel. We have applied the spatially adapted \(\mathrm {TVL}^{\infty }\) (non-Bregman) with the rule (4.3) to the natural image “Ladybug” in Fig. 6. There we pre-smooth the noisy data with a discrete Gaussian kernel of \(\sigma =2\), Fig. 6d, and then apply \(\mathrm {TVL}_{\mathrm {s.a.}}^{\infty }\) with the rule (4.3), Fig. 6g. Compared to the best \(\mathrm {TGV}\) result in Fig. 6f, the SSIM value is slightly smaller but the image details (objective) are better recovered. Let us note here that we do not claim that our rule for choosing \(\beta \) is the optimal one. For demonstration purposes, we show a reconstruction where we have computed \(\beta \) using the gradient of the ground truth \(u_{\mathrm {g.t.}}\), Fig. 6c, as \(\beta (x)=c/(|\nabla u_{\mathrm {g.t.}}(x)|+\epsilon )\), with excellent results, Fig. 6h. This is of course impractical, since the gradient of the ground truth is typically not available, but it shows that there is plenty of room for improvement regarding the choice of \(\beta \). One could also think of reconstruction tasks where a good quality version of the gradient of the image is available along with a noisy version of the image itself. Since the purpose of the present paper is to demonstrate the capabilities of the \(\mathrm {TVL}^{\infty }\) regulariser, we leave that for future work.
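The rule (4.3) can be sketched in one dimension as Gaussian pre-smoothing followed by a finite-difference gradient magnitude; the kernel width, c, \(\epsilon \) and the test signal below are our own illustrative choices:

```python
import math

def gaussian_kernel(sigma, radius=None):
    """Sampled and normalised 1D Gaussian kernel."""
    r = radius if radius is not None else max(1, int(3 * sigma))
    k = [math.exp(-0.5 * (i / sigma) ** 2) for i in range(-r, r + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(f, k):
    """Convolution with edge replication at the boundary."""
    n, r = len(f), len(k) // 2
    out = []
    for i in range(n):
        acc = 0.0
        for j, kj in enumerate(k):
            idx = min(max(i + j - r, 0), n - 1)
            acc += kj * f[idx]
        out.append(acc)
    return out

def beta_rule(f, sigma=2.0, c=1.0, eps=1e-2):
    """Spatially varying beta(x) = c / (|grad f_sigma(x)| + eps), cf. (4.3)."""
    fs = convolve(f, gaussian_kernel(sigma))
    grad = [abs(fs[min(i + 1, len(fs) - 1)] - fs[i]) for i in range(len(fs))]
    return [c / (g + eps) for g in grad]
```

On a signal with two slopes this assigns a large \(\beta \) to the flatter part and a small \(\beta \) to the steeper part, which is precisely the behaviour exploited in Fig. 5.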