Abstract
We consider a gradient flow of the total variation in a negative Sobolev space \(H^{-s}\ (0\le s \le 1)\) under the periodic boundary condition. If \(s=0\), the flow is nothing but the classical total variation flow. If \(s=1\), this is the fourth order total variation flow. We consider a convex variational problem which gives an implicit time-discrete scheme for the flow. By a duality based method, we give a simple numerical scheme to solve this minimization problem and discuss convergence of a forward–backward splitting scheme. Results of several numerical experiments are given.
1 Introduction
During the last two decades, total variation models have become very popular in image processing and analysis. This interest has been initiated by the seminal paper of Rudin et al. [34], where the authors proposed to minimize the functional
to solve the problem of image denoising. Here, \(\Omega \subset \mathbb {R}^2\) denotes the image domain, the function \(f: \Omega \rightarrow \mathbb {R}\) represents a given grayscale image and \(\tau >0\) is a parameter. The first term in (1.1) is called the fidelity term and it enforces a minimizer with respect to u to be close to a given image f in the sense of \(L^2\)norm. The last term in (1.1) is the total variation of u and it plays the role of regularization.
The total variation model (1.1) was investigated in detail by Meyer [28] in the context of the image decomposition problem. In this problem, it is assumed that a given image f is a sum of two components, u and v, where u is a cartoon representation of f and v is an oscillatory component, composed of noise and texture. Meyer observed that when f is a characteristic function which is small with respect to a certain norm, the model (1.1) does not provide the expected decomposition of f, since it treats such a function as oscillations. To overcome this inaccuracy, he proposed a new model, replacing the \(L^2\)-norm in the fidelity term by a weaker norm, which is more appropriate for textures and oscillatory patterns (see [28] for details). This change, however, introduced a practical difficulty in the numerical computation of a minimizer, due to the impossibility of expressing the associated Euler–Lagrange equation. One of the first attempts to overcome this difficulty was made by Osher et al. [33]. They proposed to approximate Meyer's model by a simplified version, given by
Here \(H^{-1}(\Omega )\) is the dual space of \(H^1_{0}(\Omega )\). Later, this model was studied in the context of the decomposition problem by Elliott and Smitheman [14, 15]. It has also been considered for other image processing tasks, such as inpainting, by Schönlieb [36] and Burger et al. [8].
The total variation models (1.1) and (1.2), and their interesting and successful applications in image processing, have also motivated many authors to perform a rigorous analysis of the properties of solutions to the corresponding total variation flows. In the case of the model (1.1), the corresponding total variation flow is formally given by the second order partial differential equation
$$\begin{aligned} u_t=\mathrm {div\,}\left( \frac{\nabla u}{|\nabla u|}\right) . \end{aligned}$$(1.3)
The total variation flow corresponding to the model (1.2) is, in turn, given by the fourth order partial differential equation
$$\begin{aligned} u_t=-\Delta \,\mathrm {div\,}\left( \frac{\nabla u}{|\nabla u|}\right) . \end{aligned}$$(1.4)
We give a few references related to (1.3) and (1.4); the list is by no means exhaustive. Although the pointwise meaning of solutions to (1.3) and (1.4) is unclear, they can be understood in a naive way as gradient flows of a lower semicontinuous convex functional in a Hilbert space. The well-posedness of (1.3) and (1.4) has been established by the theory of maximal monotone operators initiated by Kōmura [24] and developed by Brezis [7]. The behavior of solutions to (1.3) is rigorously studied, for example, in [2, 6]. The book [2] contains several types of well-posedness results in function spaces other than the Hilbert space \(L^2\). An anisotropic version was studied by Moll [29], Mucha et al. [30], and Łasica et al. [27], with the intention of proposing a numerical scheme for image denoising. Meanwhile, a viscosity approach to (1.3) and its non-divergence generalization was established by Giga et al. [17]. There is less literature on (1.4) based on rigorous analysis. A characterization of the speed is given in Kashima [21] (see [22] for a general dimension) and the extinction time estimate is given in [18, 19]. It is also noted that a solution to (1.4) may become discontinuous even if it is initially Lipschitz continuous [16], which is different from (1.3). In addition to image processing applications, this type of equation is also very important for modeling relaxation phenomena in materials science below the roughening temperature; see a recent paper by Liu et al. [26] and references therein. A numerical analysis for (1.4) is given by Kohn and Versieux [23].
In this paper, we generalize equations (1.3) and (1.4) and consider the total variation flow in the space \(H^{-s}\) with \(s\in [0,1]\). More precisely, we consider the \((2s+2)\)-order parabolic differential equation
$$\begin{aligned} u_t=(-\Delta )^{s}\,\mathrm {div\,}\left( \frac{\nabla u}{|\nabla u|}\right) \end{aligned}$$(1.5)
with periodic boundary conditions and initial data in \(H^{-s}\). Our aim is to present a unified approach to the construction of the minimizing total variation sequence in the space \(H^{-s}\) and to discuss the problem of its convergence in an infinite dimensional space setting. An application of the derived scheme allows us to perform numerical experiments, observing the evolution of solutions to Eq. (1.5) and their characteristic features for different values of the exponent s. Such a close look at this problem may form the basis for further study of the considered evolution equation and its applications. For example, our numerical experiments suggest that the solution may instantaneously become discontinuous for \(s=1/2\) and \(s=1\), even for Lipschitz initial data. The case \(s=1\) has been rigorously proved in [16]. We conjecture that such phenomena occur for all \(s \in (0,1]\).
The well-posedness of (1.5) is established similarly to (1.3) and (1.4). The characterization of the speed of (1.5) does not seem to have been derived before, so we give it here. The proof is very close to the one for (1.3) given in [2]. We consider a semi-discretization of this equation with respect to the time variable, namely an implicit time-discrete scheme. Such a formulation leads to a recursive minimization of the functional
Although the problem is convex, the non-differentiability of the total variation term causes difficulties. One way to handle them is to use Bregman's splitting method presented by Oberman et al. [32]. Here, we consider a dual formulation of this problem, which has been rigorously derived for \(s \in (0,1]\). We generate a minimizing sequence for the dual problem by the well-known splitting scheme and prove its convergence in \(H^{-s}\). We use the characterization of the subdifferential of the total variation in \(H^{-s}\) to derive an explicit form of this sequence. It turns out that its form is very simple and convenient for computing an approximate minimizer. However, the proof of its convergence in the \(L^2\)-norm can be carried out only in a finite dimensional space. Due to this limitation, we investigate the problem of convergence in the ergodic sense.
This paper is organized as follows. We recall the \(H^{-s}\) space and the total variation in Sect. 2. We give a characterization of the subdifferential of the total variation in \(H^{-s}\) in Sect. 3. In Sect. 4, we describe a semi-discretization, which leads to a recursive minimization problem for a convex but non-differentiable functional. In Sect. 5, we formulate its dual problem. In Sect. 6, we discuss convergence of a forward–backward splitting scheme. In Sect. 7, we derive its explicit form. In Sect. 8, we discuss its ergodic convergence. In Sect. 9, we explain how to discretize the introduced scheme and provide an exact condition for a convergent solution in a finite dimensional space. In Sect. 10, we present results of numerical experiments illustrating the evolution of solutions to the considered total variation flows and their characteristic features for different values of the index \(s\in [0,1]\).
2 Preliminaries
To give a rigorous interpretation of Eq. (1.5) with the periodic boundary conditions, we first need to introduce preliminary definitions and notations.
Let \(\mathbb {T}^d\) be a \(d\)-dimensional torus defined by
For \(s\in (0,1]\), we denote by \(H_{\text {av}}^{-s}(\mathbb {T}^d)\) the space dual of
where \(H^s(\mathbb {T}^d)\) is the standard fractional Sobolev space
equipped with the norm
Here \({\hat{u}}\) denotes the Fourier transform of the function u, i.e.,
Since \({\hat{u}}(0)=0\) for \(u\in H_{\mathrm {av}}^s(\mathbb {T}^d)\), the norm
is equivalent to the norm of \(H^s_{\mathrm {av}}\) defined above. We shall use the latter for elements of \(H_{\mathrm {av}}^s\), because it is more convenient.
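For readers who wish to experiment, Fourier-side norms of this kind are easy to evaluate numerically. The following sketch is our own illustration, not code from the paper; it works in one dimension with integer frequencies and a simple DFT normalization, and computes \(\sum _{\xi \ne 0}|\xi |^{2\sigma }|{\hat{u}}(\xi )|^2\), which covers both \(H^s_{\mathrm {av}}\) (taking \(\sigma =s\)) and its dual (taking \(\sigma =-s\)):

```python
import numpy as np

def sobolev_fourier_norm_sq(u, sigma):
    """Squared Fourier-side Sobolev norm sum_{xi != 0} |xi|^(2*sigma) |u_hat(xi)|^2
    for a 1-D periodic sample u; the mean mode xi = 0 is dropped (average-free)."""
    N = u.size
    u_hat = np.fft.fft(u) / N            # Fourier coefficients of u
    xi = np.fft.fftfreq(N, d=1.0 / N)    # integer frequencies 0, 1, ..., -1
    mask = xi != 0                       # exclude the mean mode
    return float(np.sum(np.abs(xi[mask]) ** (2 * sigma) * np.abs(u_hat[mask]) ** 2))
```

For \(u_j=\cos (2\pi j/N)\) only the modes \(\xi =\pm 1\) contribute, each with coefficient 1/2, so the value is 0.5 for every \(\sigma \).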
We shall define the fractional Laplacian \((-\Delta _{\mathrm {av}})^s:H_{\mathrm {av}}^s(\mathbb {T}^d)\rightarrow H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\), \(s\in (0,1)\), in line with the definition in [20]. Since the Fourier transform diagonalizes \(-\Delta _{\mathrm {av}}\), our task simplifies greatly. Thus,
Then, the inner product in \(H_{\mathrm {av}}^s\) is defined by
for all \(u,v \in H_{\mathrm {av}}^s(\mathbb {T}^d)\). The last equality in the above definition follows from Parseval’s theorem. Note that
Since the operator \((-\Delta _{\mathrm {av}})^s\) is an isometry, the inner product in \(H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\) is given by
for all f, \(g\in H_{\text {av}}^{-s}(\mathbb {T}^d)\).
Now, we introduce the definition of the total variation of a function u. It is given by
where for a vector field \(z(x)=(z_1(x),z_2(x))\), the norm \(\Vert \cdot \Vert _{\infty }\) is defined by \(\Vert z \Vert _{\infty }:=\sup _x |z(x)|\) and \(|\cdot |\) denotes the standard Euclidean norm. Since the norms \(\Vert \cdot \Vert _\infty \) and \(\Vert \cdot \Vert _2\) are equivalent on \(\mathbb {R}^2\), our definition of BV is equivalent to the standard one. The subspace of periodic functions \(u\in L^1(\mathbb {T}^d)\) such that the total variation of u is finite is denoted by \(BV(\mathbb {T}^d)\).
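As a concrete illustration of this definition, on a uniform periodic grid the total variation is commonly approximated by summing the Euclidean length of forward difference quotients. The sketch below is our own discretization, not taken from the paper:

```python
import numpy as np

def tv_periodic(u, h=1.0):
    """Approximate the total variation of a grid function u with periodic
    boundary conditions: sum over grid points of |forward difference| * h^d."""
    # forward differences in each axis, wrapping around (periodicity)
    diffs = [(np.roll(u, -1, axis=a) - u) / h for a in range(u.ndim)]
    grad_norm = np.sqrt(np.sum([d**2 for d in diffs], axis=0))
    return float(np.sum(grad_norm) * h**u.ndim)
```

For instance, a one-dimensional periodic step function with one up jump and one down jump of height 1 has total variation 2, independently of the grid size.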
3 A characterization of the subdifferential
We assume that H is a Hilbert space endowed with the inner product \(( \cdot ,\cdot )_H\), and that \(H^*\) is its dual space. Let \(\Phi : H \rightarrow [0,\infty ]\). Then, we define the associated dual functional by
Further, we need to introduce some standard results:
Lemma 1
Let \(\Phi \), \(\Psi : H \rightarrow [0,\infty ]\). If \(\Phi \le \Psi \) on H, then \({\tilde{\Psi }}\le {\tilde{\Phi }}\) on \(H^*\).
Proof
See proof of [3, Lemma 1.5]. \(\square \)
Lemma 2
Suppose \(\Phi \) is convex, lower semicontinuous, and positively homogeneous of degree one. Then \(\tilde{{\tilde{\Phi }}}(u) =\Phi (u)\).
Proof
See proof of [3, Proposition 1.6]. \(\square \)
Definition 1
Let \(\Phi \) be a proper, lower semicontinuous, and convex functional on H. The subdifferential of \(\Phi \) at \(u\in H\) is the set
Lemma 3
Suppose \(\Phi \) is convex, lower semicontinuous, nonnegative, and positively homogeneous of degree one. Then, \(v\in \partial _{H} \Phi (u)\) if and only if \({\tilde{\Phi }}(v)\le 1\) and \(( u,v)_{H} =\Phi (u)\).
Proof
See proof of [3, Theorem 1.8]. \(\square \)
Now we define the functional \(\Phi \) on \(L^2(\mathbb {T}^d)\) by
In the next lemma, we will show that the associated dual functional satisfies \({\tilde{\Phi }}(u)=\Psi (u)\), where \(\Psi \) is given by (3.4). This will enable us to characterize the subdifferential of \(\Phi \) in \(H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\) in Theorem 1. In the proofs, we follow the results presented in [19, Lemma 8.5, Proposition 8.6] and [18], which are based on the theory developed in [3].
For further analysis we need to define the space
Lemma 4
Let \(\Psi \) be the functional defined by
Then, \(\Psi (w)={\tilde{\Phi }}(w)\) for \(w\in H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\), where \({\tilde{\Phi }}\) is the functional dual to \(\Phi \).
Proof
Let \(w\in H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\). First, we show \({\tilde{\Phi }}(w)\le \Psi (w)\). If \(\Psi (w)\) is infinite, the inequality is obvious. Thus, assume that \(\Psi (w)<\infty \); then there exists \(z\in X(\mathbb {T}^d)\) such that
Take \(z^{\varepsilon }=z*\rho ^{\varepsilon }\), where \(\rho \) is a mollifier. In [22], Kashima showed that \(\Vert z^{\varepsilon } \Vert _{L^{\infty }(\mathbb {T}^d)}\le \Vert z \Vert _{L^{\infty }(\mathbb {T}^d)}\) and \(\mathrm {div\,}z^{\varepsilon }\rightarrow \mathrm {div\,}z\) in \(H_{\mathrm {av}}^s(\mathbb {T}^d)\) as \(\varepsilon \rightarrow 0\). Using this result, we obtain
for all \(v\in H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\). Therefore, we get that
which is the desired inequality. Now we prove that \({\tilde{\Phi }}(w)\ge \Psi (w)\). To do so, we first note that if we prove \(\Phi (w)\le {\tilde{\Psi }}(w)\), then by Lemma 1 we get that \({\tilde{\Phi }}(w)\ge \tilde{{\tilde{\Psi }}}(w)\). Furthermore, since \(\Psi \) is convex, lower semicontinuous, and positively homogeneous of degree one, we get that \(\Psi =\tilde{{\tilde{\Psi }}}\). This will imply the desired inequality.
Therefore, we need to prove that \(\Phi (w)\le {\tilde{\Psi }}(w)\). By the definition of \(\Psi \), we get that
\(\square \)
Theorem 1
Assume that \(u\in H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\) is such that \(\Phi (u)<\infty \). We denote by \(\partial _{H_{\mathrm {av}}^{-s}}\Phi \) the subdifferential of \(\Phi \) with respect to the \(H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\) topology. Then, \(v\in \partial _{H_{\mathrm {av}}^{-s}} \Phi (u)\) if and only if there exists \(z\in X(\mathbb {T}^d)\) such that
Proof
By Lemma 3, we see that \(v\in \partial _{H_{\mathrm {av}}^{-s}} \Phi (u)\) if and only if \({\tilde{\Phi }}(v)\le 1\) and \((u,v)_{H_{\mathrm {av}}^{-s}(\mathbb {T}^d)}=\Phi (u)\). Moreover, \(\Psi ={\tilde{\Phi }}\) holds by Lemma 4. These conditions are equivalent to
\(\square \)
Theorem 1 gives a rigorous interpretation of Eq. (1.5) as
$$\begin{aligned} \frac{du}{dt}\in -\partial _{H_{\mathrm {av}}^{-s}}\Phi (u) \quad \text {for } t>0, \qquad u(0)=u_0. \end{aligned}$$(3.8)
The result concerning the existence and uniqueness of a solution to system (3.8) is given in the theorem below:
Theorem 2
Let \(\mathcal {A}(u):=\partial _{H_{\mathrm {av}}^{-s}}\Phi (u)\) and suppose that \(u_0\in D(\mathcal {A})\). Then, there exists a unique function \(u:[0,\infty )\rightarrow H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\) such that:
 \(\mathrm {(1)}\) for all \(t>0\) we have that \(u(t)\in D(\mathcal {A})\),
 \(\mathrm {(2)}\) \(\frac{du}{dt}\in {L^\infty (0,\infty ; H_{\mathrm {av}}^{-s}(\mathbb {T}^d))}\) and \(\Vert \frac{du}{dt} \Vert _{H_{\mathrm {av}}^{-s}(\mathbb {T}^d)} \le \Vert \mathcal {A}^0 (u_0) \Vert _{H_{\mathrm {av}}^{-s}(\mathbb {T}^d)}\),
 \(\mathrm {(3)}\) \(-\frac{du}{dt}\in \mathcal {A}(u)\) a.e. on \((0,\infty )\),
 \(\mathrm {(4)}\) \(u(0)=u_0\),
where \(\mathcal {A}^0\) denotes the minimal selection of the operator \(\mathcal {A}\).
Proof
The proof of this theorem can be found in Brezis [7]. \(\square \)
4 A semidiscretization
In order to construct a scheme to solve Eq. (3.8), we introduce its semi-discretization with respect to the time variable t. That is, we consider the finite set of \(n+1\) equidistant points \(\{t_i = i\tau \ : \ i=0,\ldots ,n \text { and }\tau = t/n\}\) in the interval [0, t]. For \(i=1,\ldots ,n\), we inductively define \(u_{\tau }(t_i)\) as a solution of
$$\begin{aligned} \frac{u_{\tau }(t_i)-u_{\tau }(t_{i-1})}{\tau }\in -\partial _{H_{\mathrm {av}}^{-s}}\Phi (u_{\tau }(t_i)), \qquad u_{\tau }(t_0)=u_0. \end{aligned}$$(4.1)
It is well known that if \(\mathcal {A} = \partial _{H_{\mathrm {av}}^{-s}} \Phi \) is maximal monotone, then the resolvent \(\mathcal {J}_\tau := (I+\tau \mathcal {A})^{-1}\) is nonexpansive, which implies that the implicit scheme (4.1) is stable and we have
This result has been justified in a very general setting by Crandall–Liggett [12]:
Theorem 3
For all \(u_0\in \overline{D(\mathcal {A})}\) and \(t>0\) we have that
$$\begin{aligned} \lim _{n\rightarrow \infty }\left( I+\frac{t}{n}\mathcal {A}\right) ^{-n}u_0 = S(t)u_0, \end{aligned}$$
where S(t) is the semigroup generated by \(-\mathcal {A}\). The convergence is uniform on any compact subinterval of \([0,\infty )\).
The original paper by Crandall–Liggett also contains an error estimate, which is not optimal. The optimal one is of order \(O(\tau ^2)\) and it has been derived by Rulla [35, Theorem 4].
Hence, to solve (3.8), we need to know how to find a solution to one iteration of the scheme (4.1). First, we observe that Eq. (4.1) for \(u_{\tau }(t_i)\) is the optimality condition for the minimization problem
$$\begin{aligned} \min _{u\in H_{\mathrm {av}}^{-s}(\mathbb {T}^d)}\left\{ \frac{1}{2\tau }\Vert u-u_{\tau }(t_{i-1})\Vert _{H_{\mathrm {av}}^{-s}(\mathbb {T}^d)}^2+\Phi (u)\right\} . \end{aligned}$$(4.2)
The main difficulty in the construction of a minimizing sequence for this kind of problem is caused by the lack of differentiability of the total variation term. A commonly used approach to overcome this difficulty consists in considering the dual formulation of (4.2). The first work related to image processing applications in which this approach was used, and in which a proof of convergence was provided, was the paper of Chambolle [9]. Nowadays, there is a variety of efficient schemes that one can apply to solve the problem dual to (4.2). Among others, we mention here the papers of Chambolle and Pock [10] and Beck and Teboulle [5]. We also refer to the papers of Aujol [1] and Weiss et al. [37], where the authors adapt the existing methods for the purpose of total variation minimization. In this paper, we rather focus on a rigorous derivation of the dual formulation of (4.1) in \(H_{\mathrm {av}}^{-s}\), the construction of the minimizing sequence, and a discussion of its convergence in an infinite dimensional space. To construct this sequence we use the well-known splitting scheme introduced by Lions and Mercier [25], which in fact is a variant of the classical Douglas–Rachford algorithm. This scheme has been investigated in a general setting by many authors. For instance, we refer to the work of Combettes and Wajs [11] or to the recent paper of Attouch and Peypouquet [4], where the authors provide more accurate convergence rates for its accelerated version developed by Nesterov [31].
5 A dual problem
Let \(C\subset H\) be a nonempty convex and closed set. We define the indicator function of a set C by
To derive the dual problem we will need the following result.
Lemma 5
We have:

(i)
The set
$$\begin{aligned} K := \{v\in \mathcal {D}'(\mathbb {T}^d) \ : \ v=(-\Delta _{\mathrm {av}})^s\mathrm {div\,}z, \ \mathrm {div\,}z \in H_{\mathrm {av}}^s(\mathbb {T}^d),\ \Vert z \Vert _{\infty }\le 1\} \end{aligned}$$(5.2)is closed in the strong \(H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\)-topology.

(ii)
If we consider \(\Phi \) given by (3.2) as a functional defined on \(H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\), then the convex conjugate of \(\Phi \) is given by \(\Phi ^*(v)=\chi _{K}(v)\).
Proof

(i)
Let \(\{v_j\}\subset K\) be a sequence converging to v in \(H_{\mathrm {av}}^{-s}\). This means that \(\mathrm {div\,}z_j\) converges to some w in \(H_{\mathrm {av}}^s\), where \(v_j = (-\Delta _{\mathrm {av}})^s\mathrm {div\,}z_j\). Since \(z_j\) is bounded in \(L^\infty \), after taking a subsequence it converges to some z in the weak\(^*\) topology of \(L^\infty \). This implies \(w = \mathrm {div\,}z\), so we get the desired result.

(ii)
Let \(u\in BV(\mathbb {T}^d)\cap H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\); then the convex conjugate of the function \(\chi _{K}\) is given by
$$\begin{aligned} \begin{aligned} \chi _{K}^*(u)&=\sup _{v}\{(v,u)_{H_{\mathrm {av}}^{-s}} \ : \ v\in K\}\\&=\sup _{z}\{\langle u,\mathrm {div\,}z \rangle \ : \ z\in \mathcal {D}(\mathbb {T}^d),\ \Vert z \Vert _{\infty }\le 1\}=\Phi (u). \end{aligned} \end{aligned}$$(5.3)Since the set K is convex, the function \(\chi _{K}\) is convex. Moreover, since \(\chi _{K}\) is lower semicontinuous, it follows from [13, Proposition I.4.1] that \(\Phi ^*(v)=\chi _{K}^{**}(v)=\chi _{K}(v)\).
\(\square \)
Proposition 1
Let \(f\in H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\) be a given function. Then the problem
is equivalent to
Moreover, the solution u of (5.4) is associated with the solution v of (5.5) by the relation
$$\begin{aligned} u=f-\tau v. \end{aligned}$$(5.6)
Proof
Using a standard result of convex analysis (see, e.g., [13, Corollary I.5.2]), we get that the Euler–Lagrange equation
associated with the problem (5.4) is equivalent to
where \(\Phi ^{*}\) is the convex conjugate functional of \(\Phi \) in \(H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\). Setting \(\tau v= f-u\), we rewrite equation (5.8) as
and we note that this is the Euler–Lagrange equation of the functional
Using Lemma 5, we conclude that the dual problem to (5.4) is given by (5.5) and the relation (5.6) holds. \(\square \)
Corollary 1
The solution of problem (5.4) satisfies \(u = f-\tau P_K^{H_{\mathrm {av}}^{-s}}(f/\tau )\), where \(P_K^{H_{\mathrm {av}}^{-s}}\) denotes the orthogonal projection onto the set K with respect to the inner product in \(H_{\mathrm {av}}^{-s}\).
Remark 1
Using the characterization of \(v\in \partial _{H_{\mathrm {av}}^{-s}} \Phi (u)\) provided in Theorem 1, we can rewrite the dual problem (5.5) as
where Z is the closure of the set
with respect to the strong \(L^2\)topology.
Note that by expanding the norm in (5.10), neglecting the term with the norm of f, and using that \(\Vert (-\Delta _{\mathrm {av}})^s \mathrm {div\,}z\Vert _{H_{\mathrm {av}}^{-s}}=\Vert \mathrm {div\,}z\Vert _{H_{\mathrm {av}}^s}\), we derive
The same alternative formulation of the dual problem (5.10) can also be obtained by applying the results in [13, Chapter III].
6 Convergence of a forward–backward splitting scheme
In this section, using the forward–backward splitting approach, we construct a minimizing sequence \(\{v^k\}\) for the dual problem (5.5) and prove its convergence in \(H_{\mathrm {av}}^{-s}\).
Let us define the functional J on \(H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\) by
Then, the dual problem (5.5) can be equivalently written as
In order to construct a minimizing sequence \(\{v^k\}\) for the above problem, we consider the forward–backward splitting scheme introduced by Lions and Mercier [25], given by
$$\begin{aligned} \left\{ \begin{array}{l} u^k=f-\tau v^k,\\ v^{k+1}=(I+\lambda \partial _{H_{\mathrm {av}}^{-s}}\Phi ^*)^{-1}(v^k+\lambda u^k). \end{array}\right. \end{aligned}$$(6.1)
Remark 2
Application of the scheme (6.1) requires that
for all \(v\in H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\). This is ensured by [13, Proposition I.5.6], because \(\hbox {int}\,(D(J))\cap D(\Phi ^*)\ne \emptyset \).
Remark 3
Let \(v\in H_{\mathrm {av}}^{-s}\); then, by Moreau's identity
we obtain that the scheme (6.1) is equivalent to
where \(H_{1/\lambda }\) denotes the Moreau–Yosida approximation of the operator \(\mathcal {A} = \partial _{H_{\mathrm {av}}^{-s}} \Phi \). It is well known that \(H_{1/\lambda }\) converges, as \(\lambda \rightarrow \infty \), to the minimal selection \(\mathcal {A}^0\) of \(\mathcal {A}\).
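In a finite dimensional model of the dual problem, say \(\min _v \frac{1}{2}\Vert f-\tau v\Vert ^2+\chi _K(v)\) with K a simple convex set, a forward–backward iteration alternates a gradient step on the smooth term with a projection onto K. The sketch below is our own illustration (the function names and the box-shaped stand-in for K are hypothetical, not the paper's implementation); it mirrors the step size restriction \(0<\lambda \tau <2\) of Proposition 2:

```python
import numpy as np

def forward_backward(f, tau, lam, proj_K, n_iter=200):
    """Forward-backward iteration for min_v 0.5*||f - tau*v||^2 + chi_K(v):
    forward (gradient) step on the smooth term, backward step = projection
    onto K. Converges for 0 < lam*tau < 2."""
    v = np.zeros_like(np.asarray(f, dtype=float))
    for _ in range(n_iter):
        u = f - tau * v              # the primal iterate u^k = f - tau*v^k
        v = proj_K(v + lam * u)      # v^{k+1} = P_K(v^k + lam*u^k)
    return v, f - tau * v
```

With \(K=[-1,1]^N\) and \(\tau =1\), the limit is \(v^*=\mathrm {clip}(f,-1,1)\) and \(u^*=f-v^*\), the shrinkage familiar from one-dimensional total variation denoising.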
In the proof of convergence of the sequence \(\{v^k\}\) we will need the lemma below:
Lemma 6
For \(v\in K\) we have that if \(u\in \partial _{H_{\mathrm {av}}^{-s}} \Phi ^*(v)\), then \((u,v-w)_{H_{\mathrm {av}}^{-s}}\ge 0\) for all \(w\in K\).
Proof
By the definition of the subdifferential \(\partial _{H_{\mathrm {av}}^{-s}} \Phi ^*(v)\), we have that \(u\in \partial _{H_{\mathrm {av}}^{-s}} \Phi ^*(v)\) if and only if \(\Phi ^*(w) \ge \Phi ^*(v) + (u,w-v)_{H_{\mathrm {av}}^{-s}}\) for all \(w\in H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\). Since \(\Phi ^*\) is the indicator function of the set K, we have \((u,v-w)_{H_{\mathrm {av}}^{-s}}\ge 0\) for all \(w\in K\). \(\square \)
Proposition 2
Assume that \(0<\lambda \tau <2\). Then \(v^k\rightharpoonup v^*\) and \(u^k\rightharpoonup u^*\) in \(H_{\mathrm {av}}^{-s}\), where \(v^*\in K\) is such that \(v^*\in \partial _{H_{\mathrm {av}}^{-s}} \Phi (u^*)\) and \(u^*= f-\tau v^*\).
Proof
Assume that \(v^{k}\), \(v^{k+1} \in K\). After simple calculations, we obtain
From Lemma 6, it follows that
for all \(w\in K\). In particular, taking \(w=v^k\) and recalling that \(u^k = f-\tau v^k\), we get
Taking this fact into account in (6.2), we obtain
Therefore, taking \(\lambda \) so that \(0<\lambda \tau <2\), we get \(J(v^{k+1})<J(v^k)\) as long as \(v^{k+1}\ne v^k\). This implies that the sequence \(\{J(v^k)\}\) is convergent.
Let \(m = \lim _{k\rightarrow \infty } J(v^k)\). Then, obviously \(\lim _{k\rightarrow \infty } J(v^{k+1}) = m\), and we have
Since \(\lambda \tau <2\) by the assumption, we have
Then, passing to the limit as \(k\rightarrow \infty \), we obtain
We notice that the boundedness of the sequence \(\{J(v^k)\}\) and the definition of J imply that the sequence \(\{v^k\}\) is bounded in \(H_{\mathrm {av}}^{-s}\). Moreover, since \(u^k = f-\tau v^k\), the sequence \(\{u^k\}\) is also bounded in \(H_{\mathrm {av}}^{-s}\). Then, denote by \(v^*\) the limit of a convergent subsequence \(\{v^{k_j}\}\) of \(\{v^{k}\}\) and by \(v^{**}\) the limit of the convergent subsequence \(\{v^{k_j+1}\}\). The fact (6.5) implies that \(v^*=v^{**}\). It remains to show that \(v^*\in \partial _{H_{\mathrm {av}}^{-s}}\Phi (u^*)\), where \(u^*=f-\tau v^*\) is the limit of the convergent subsequence \(\{u^{k_j}\}\) of \(\{u^k\}\).
Let \(\xi ^{k+1}\in \partial _{H_{\mathrm {av}}^{-s}} \Phi ^*(v^{k+1})\) and \(\zeta \in \partial _{H_{\mathrm {av}}^{-s}} \Phi ^*(w)\). Then, the monotonicity of the operator \(\partial _{H_{\mathrm {av}}^{-s}} \Phi ^*\) implies that
for all \(w\in H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\). Using the second line of the scheme (6.1), we can rewrite the above inequality as
The first term in (6.6) goes to zero as \(k\rightarrow \infty \), because its absolute value can be estimated by
The first factor above goes to zero, while the second one is bounded.
We rewrite the second term in (6.6) as follows:
The weak convergence of \(v^{k+1}\) implies
Moreover,
We notice that due to (6.5), the term \((v^{k+1}-v^k,v^{k+1}-w)_{H_{\mathrm {av}}^{-s}}\) goes to zero. In addition,
If we combine these pieces of information, then passing to the limit in (6.6) with \(k\rightarrow \infty \), we obtain
Since the functional \(\Phi ^*\) is convex, proper and lower semicontinuous, its subdifferential \(\partial _{H_{\mathrm {av}}^{-s}} \Phi ^*\) is a maximal monotone operator. This implies that \(u^*\in \partial _{H_{\mathrm {av}}^{-s}}\Phi ^*(v^*)\), which is equivalent to \(v^*\in \partial _{H_{\mathrm {av}}^{-s}}\Phi (u^*)\) due to [13, Corollary I.5.2]. \(\square \)
7 An explicit form of a scheme
Since the nonlinear projection given in Corollary 1 is difficult to compute in practice, we use the characterization of the subdifferential \(\partial _{H_{\mathrm {av}}^{-s}} \Phi \) provided in Theorem 1 to construct an explicit scheme generating a minimizing sequence for the dual problem (5.10). More precisely, we apply the same forward–backward scheme as in the previous section to generate the sequence \(\{z^k\}\) minimizing (5.10). This sequence is related to \(\{v^k\}\) by the formula \(v^k = \tau (-\Delta _{\mathrm {av}})^s\mathrm {div\,}z^k\).
To proceed, we need to introduce the definitions of the functionals F and G on \(L^2(\mathbb {T}^d,\mathbb {R}^d)\). They are given by
if \(z\in L^2(\mathbb {T}^d,\mathbb {R}^d)\) satisfies \(\mathrm {div\,}z \in H_{\mathrm {av}}^s(\mathbb {T}^d)\) and
From definitions (7.1) and (7.2) it follows that the dual problem (5.10) is equivalent to
In order to find \(z^*\) such that \(0\in \partial _{L^2} (F + G)(z^*)\), we consider the splitting scheme:
In the first line of the above scheme, \(\phi _{1/k}\) denotes a smoothing kernel whose size is inversely proportional to the iteration number. We apply this diminishing amount of smoothing to \(z^{k+1}\) to guarantee that \(z^{k+1}\) is in the domain of the subdifferential of F.
For a practical implementation of this scheme, it remains to find its closed-form expression.
Lemma 7
Assume that \(z\in L^2(\mathbb {T}^d,\mathbb {R}^d)\) is such that \(F(z)<\infty \). Then, we have \(D(\partial _{L^2} F) = \{z \in L^2(\mathbb {T}^d,\mathbb {R}^d)\, :\, F(z)<\infty ,\ \nabla (\tau (-\Delta _{\mathrm {av}})^s\mathrm {div\,}z + f)\in L^2(\mathbb {T}^d,\mathbb {R}^d)\}\) and \(\partial _{L^2} F(z) = \{\nabla w\}\), where \(w = \tau (-\Delta _{\mathrm {av}})^s\mathrm {div\,}z + f\).
Proof
By a direct calculation
for any \(\eta \in L^2(\mathbb {T}^d,\mathbb {R}^d)\) and \(\epsilon >0\). Let v be in \(\partial _{L^2}F(z)\). Then, by definition
Dividing both sides by \(\epsilon \), we see that
The lefthand side equals \(\langle w,\mathrm {div\,}\eta \rangle \). Thus we end up with
for all \(\eta \in L^2(\mathbb {T}^d,\mathbb {R}^d)\). This implies that \(v=\nabla w\) if and only if \(\nabla w\) is in \(L^2(\mathbb {T}^d,\mathbb {R}^d)\). \(\square \)
Lemma 8
Let G be the functional on \(L^2(\mathbb {T}^d,\mathbb {R}^d)\) defined by (7.2), and let
$$\begin{aligned} z=(I+\lambda \partial _{L^2}G)^{-1}(y) \end{aligned}$$(7.5)
for some \(y\in L^2(\mathbb {T}^d,\mathbb {R}^d)\) and \(\lambda >0\). Then, we have that \(z = P_Z^{L^2}(y)\), where \(P_Z^{L^2}(y)\) denotes the orthogonal projection of y on the set Z with respect to the inner product in \(L^2\).
Proof
The formula (7.5) implies that z is a solution to the equation
which is the optimality condition to the problem
This implies that \(z = P_Z^{L^2}(y)\). \(\square \)
Remark 4
Due to the form of the set Z, the projection operator \(P^{L^2}_Z\) is given by the explicit formula
$$\begin{aligned} P_Z^{L^2}(y)(x)=\frac{y(x)}{1\vee |y(x)|} \end{aligned}$$(7.6)
for \(y\in L^2(\mathbb {T}^d,\mathbb {R}^d)\), where \(|\cdot |\) denotes the Euclidean norm and \(a \vee b:=\max (a,b)\). Using this fact together with Lemmas 7 and 8, we obtain a closed-form expression of the scheme (7.4). It is given by
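The pointwise projection \(y\mapsto y/(1\vee |y|)\) translates directly into code. In the following sketch (our own naming, not from the paper) the first array axis holds the vector components of y:

```python
import numpy as np

def project_unit_ball(y):
    """Pointwise projection onto {z : |z(x)| <= 1}: z(x) = y(x) / (1 v |y(x)|).
    y has shape (d, ...) with the first axis indexing vector components."""
    mag = np.sqrt(np.sum(y**2, axis=0, keepdims=True))  # Euclidean norm |y(x)|
    return y / np.maximum(1.0, mag)
```

Points where \(|y(x)|\le 1\) are left untouched; the rest are radially rescaled onto the unit sphere.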
8 Ergodic convergence
In this section, we investigate the ergodic convergence of the sequences \(\{v^k\}\) and \(\{z^k\}\) defined by the schemes (6.1) and (7.7), respectively. First, we recall from Proposition 2 that if \(0<\lambda \tau <2\), then the sequence \(\{v^k\}\) converges weakly in \(H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\) to \(v^*\in K\), where \(v^*\) is the unique solution of the dual problem (5.5). Then, Mazur's lemma implies the existence of a sequence
where \(\{\alpha _k\}\) is such that \(\sum _{k=0}^n \alpha _k = 1\), which converges strongly in \(H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\) to \(v^*\) as \(n\rightarrow \infty \). Our aim in this section is to construct a sequence \(\{\alpha _k\}\) such that \({\bar{v}}^n\rightarrow v^*\) as \(n\rightarrow \infty \), and next to use this result to prove that the sequence
converges weakly in \(QL^2(\mathbb {T}^d)\) to \(z^*\in Z\) as \(n\rightarrow \infty \), where Q is the orthogonal projection onto the space of gradient fields; it is based on the Helmholtz–Weyl decomposition. Note that for this proof, we will not require weak convergence of \(z^k\) to \(z^*\in Z\) in \(L^2(\mathbb {T}^d)\). We are not able to prove this in an infinite dimensional space setting (see Lemma 9 and Proposition 4). All these results are given in the following proposition:
Proposition 3
Let \(\{v^k\}\) be a weakly convergent sequence in K generated by the scheme (6.1) and let \(\{\beta _k\}\) be a sequence of positive real numbers such that \(\{\beta _k\}\in l^2{\setminus } l^1\). Then, for \(\alpha _k = \beta _k/\sum _{j=0}^n \beta _j\), the sequence \(\{{\bar{v}}^n\}\) given by (8.1) converges strongly in \(H_{\mathrm {av}}^{-s}(\mathbb {T}^d)\) to \(v^*\in K\) as \(n\rightarrow \infty \). Moreover, the sequence \(\{{\bar{z}}^n\}\) given by (8.2), where \(\{z^k\}\) is generated by the scheme (7.7), converges weakly in \(QL^2(\mathbb {T}^d)\) to \(z^*\in Z\) as \(n\rightarrow \infty \).
Proof
Let \(\alpha _k = \beta _k/\sum _{j=0}^{n} \beta _j\), where the sequence \(\{\beta _k\}\in l^2{\setminus } l^1\). Then, by the triangle inequality and the Cauchy–Schwarz inequality, we have
Since \(\{ \beta _k\}\in l^2{\setminus } l^1\), and therefore \(\sum _{k=0}^{n} \beta _k\rightarrow \infty \) as \(n\rightarrow \infty \), it remains to show that \(\sum _{k=0}^n \Vert v^kv^*\Vert ^2_{H_{\mathrm {av}}^{s}}\) is bounded. Using the scheme (6.1) and the fact that the resolvent \((I+\lambda \partial _{H_{\mathrm {av}}^{s}} \Phi ^*)^{1}\) is nonexpansive, we have
Thus, we get,
for \(k = 0,1,\ldots ,n\). By the assumption \(0<\lambda \tau <2\), we have that \(|1-\lambda \tau |<1\). Therefore, we get the estimate
If we apply this fact to (8.3), then passing to the limit and using the monotone convergence theorem, we obtain \(\lim _{n\rightarrow \infty } \Vert {\bar{v}}^n-v^*\Vert _{H_{\mathrm {av}}^{-s}} = 0\).
Since the set K defined in (5.2) is closed in the \(H_{\mathrm {av}}^{s}\)-norm, there exists \(z^*\in Z\) such that \(v^*= (-\Delta _{\mathrm {av}})^s\mathrm {div\,}z^*\). We also have
Since
thus \(\mathrm {div\,}{\bar{z}}^n \rightarrow \mathrm {div\,}z^*\) in \(H_{\mathrm {av}}^s(\mathbb {T}^d)\). Moreover, since
for all \(\phi \in H_{\text {av}}^1(\mathbb {T}^d)\), we have \({\bar{z}}^n \rightharpoonup z^*\) in \(QL^2(\mathbb {T}^d)\). \(\square \)
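The averaging in (8.1)–(8.2) is straightforward to realize numerically. The following sketch (Python with NumPy; the function name and the particular choice \(\beta _k = 1/(k+1)\), which lies in \(l^2\setminus l^1\), are ours for illustration) forms the weighted average \({\bar{v}}^n = \sum _k \alpha _k v^k\):

```python
import numpy as np

def ergodic_average(vs):
    """Weighted average bar(v)^n = sum_k alpha_k v^k with
    alpha_k = beta_k / sum_j beta_j and beta_k = 1/(k+1),
    a sequence in l^2 \\ l^1 (illustrative choice, not the paper's)."""
    beta = 1.0 / (np.arange(len(vs)) + 1.0)   # beta_k = 1/(k+1)
    alpha = beta / beta.sum()                 # normalize so sum(alpha) = 1
    return sum(a * v for a, v in zip(alpha, vs))
```

For a constant sequence the average reproduces the constant, which is a quick sanity check of the normalization \(\sum _k \alpha _k = 1\).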
9 Implementation
In this section, we explain how to discretize the system (7.7) and how to solve numerically one iteration of the semi-implicit scheme (4.1). Recursive application of this procedure leads to a numerical solution of the nonlinear evolution equation (3.8). For simplicity of presentation, we construct the method for the one-dimensional problem, but it easily extends to higher dimensions. We also present results concerning the convergence of the introduced scheme in a finite dimensional space.
We denote by X the Euclidean space \(\mathbb {R}^N\). The scalar product of two elements u, \(v\in X\) is defined by \(\langle u, v\rangle := \sum _{i=1}^N u_i v_i\) and the norm \(\Vert u\Vert :=\sqrt{\langle u, u\rangle }\). The discrete gradient operator satisfying periodic boundary conditions is defined as
where \(h>0\) is the space discretization step. With this notation in hand, the discrete divergence and Laplace operators are given by \(\mathrm {div\,}:=-\nabla ^T\) and \(\Delta := \mathrm {div\,}\nabla \), respectively. Note that \(-\Delta \) is a symmetric, positive semidefinite, circulant matrix of size \(N\times N\). It is well known that in this case its eigenvectors are given by
for \(j = 0,\ldots ,N-1\), and the corresponding eigenvalues are
where \(c_k\), \(k=0,\ldots ,N-1\), are the subsequent elements of the first row of \(-\Delta \). Therefore, analogously to the continuous setting, we can define the operator \((-\Delta )^s\) using the discrete Fourier transform, i.e.,
where \(M^s\) is an \(N\times N\) diagonal matrix with entries \(\mu _j^s\), \(j=0,\ldots ,N-1\).
Here \({\hat{u}}\) denotes the discrete Fourier transform of \(u\in X\), i.e.,
for \(k=0,\ldots ,N-1\). The discrete inverse Fourier transform is given by
for \(j=0,\ldots ,N-1\).
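The construction above translates directly into a few lines of code. The sketch below (Python with NumPy; the helper name is ours) applies \((-\Delta )^s\) through the FFT, assuming the standard 3-point finite difference stencil, whose circulant matrix has eigenvalues \(\mu _j = (4/h^2)\sin ^2(\pi j/N)\):

```python
import numpy as np

def neg_laplacian_power(u, s, h):
    """Apply (-Delta)^s on a periodic 1-D grid via the DFT.

    Assumes the 3-point finite difference Laplacian; the circulant
    matrix is diagonalized by the discrete Fourier basis with
    eigenvalues mu_j = (4/h**2) * sin(pi*j/N)**2."""
    N = len(u)
    mu = (4.0 / h**2) * np.sin(np.pi * np.arange(N) / N) ** 2
    # multiply each Fourier coefficient by mu_j^s, then transform back
    return np.fft.ifft(mu**s * np.fft.fft(u)).real
```

For \(s=1\) this reproduces the stencil \((2u_i - u_{i-1} - u_{i+1})/h^2\), and for \(s=0\) it reduces to the identity, which gives two convenient sanity checks.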
For convenience, we also introduce the scalar product
for all u, \(v\in X\) and the norm \(\Vert u \Vert _{s}:=\sqrt{(u,u)_{s}}\).
Lemma 9
For every \(z\in X\), there exists a constant \(C>0\) such that
Moreover, \(C = \mu _{\text {max}}^{s+1}\), where \(\mu _{\text {max}}\) denotes the largest eigenvalue of the discrete Laplace operator.
Proof
From definitions (9.1), (9.2) and Parseval’s theorem, we have
Since
then using this fact and applying the Plancherel identity, we get
\(\square \)
We set \(Z:=\{ z\in X \ : \ \Vert z \Vert _\infty \le 1 \}\), where \(\Vert z \Vert _\infty := \max _j |z_j|\). For \(f\in X\), we define functionals F and G on X by
and
Then, the discrete version of the problem (7.3) is
and can be solved by the iterative scheme
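Although the displayed formulas of the scheme (9.7) are not reproduced here, one step of a forward–backward iteration of this type can be sketched as follows (Python with NumPy). All helper names are ours; the update \(z^{k+1} = P_Z(z^k + \lambda w^k)\) with \(w^{k} = \nabla (f+\tau (-\Delta )^s\mathrm {div\,}z^{k})\) is inferred from the expressions appearing in the proof of Proposition 4, and the projection \(P_Z\) onto Z is componentwise clipping to \([-1,1]\):

```python
import numpy as np

def neg_laplacian_power(u, s, h):
    # (-Delta)^s via the DFT; 3-point periodic stencil assumed
    N = len(u)
    mu = (4.0 / h**2) * np.sin(np.pi * np.arange(N) / N) ** 2
    return np.fft.ifft(mu**s * np.fft.fft(u)).real

def grad(u, h):
    # forward difference with periodic wrap-around
    return (np.roll(u, -1) - u) / h

def div(z, h):
    # discrete divergence, div = -grad^T (backward difference)
    return (z - np.roll(z, 1)) / h

def fb_step(z, f, tau, lam, s, h):
    """One forward-backward iteration for the discrete dual problem:
    a gradient step on the smooth part, then projection onto
    Z = { z : ||z||_inf <= 1 } by clipping (a sketch, not the
    paper's exact scheme (9.7))."""
    u = f + tau * neg_laplacian_power(div(z, h), s, h)  # primal variable
    w = grad(u, h)                                      # w^k in the text
    return np.clip(z + lam * w, -1.0, 1.0)              # P_Z projection
```

Iterating `fb_step` until \(z^{k+1}\approx z^k\) and then setting \(u = f+\tau (-\Delta )^s\mathrm {div\,}z\) yields one time step of the semi-implicit scheme (4.1).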
Proposition 4
Assume that \(C \lambda \tau <2\), where the constant \(C>0\) is as in Lemma 9. Then \(z^k\rightarrow z^*\) in X, where \(z^*\in Z\) is such that \(0\in \partial _X F(z^*) + \partial _X G(z^*)\).
Proof
In the proof, we essentially follow the steps of the proof of Proposition 2. We assume that \(z^{k}\), \(z^{k+1} \in Z\). After simple calculations, we obtain
From the second line of the scheme (9.7) we have that
Using the definition of subdifferential \(\partial _X G\) and the fact that G is an indicator function of the set Z, we get
for any \(p\in Z\). Then, in particular for \(p = z^k\), and using that \(w^{k} = \nabla (f+\tau (-\Delta )^s\mathrm {div\,}z^{k})\), we obtain
Taking into account this fact and Lemma 9 in (9.8), we obtain
Therefore, taking \(\lambda \) so that \(C\lambda \tau <2\), we get \(F(z^{k+1})<F(z^k)\) as long as \(z^{k+1}\ne z^k\), which implies that the sequence \(\{F(z^k)\}\) is convergent. To complete the proof, we proceed as in the proof of Proposition 2, using the fact that in a finite dimensional space linear operators are bounded and all norms are equivalent. \(\square \)
We note that from Proposition 4 and Lemma 9 it follows that the sequence \(\{z^k\}\) converges weakly to \(z^*\) in X if and only if
where the constant \(C = \mu _{\text {max}}^{s+1}\) and \(\mu _{\text {max}}:=\max _j \mu _j\) is the largest eigenvalue of the discrete Laplace operator. For the finite difference discretization considered here, we have
where h denotes the space discretization step. In other words, to achieve convergence one has to select the values of the parameters \(\tau \), \(\lambda \) and h so that \(\mu _{\text {max}}^{s+1}\tau \lambda <2\). Taking, for instance, \(\tau = O(h^{2s+3})\), \(\lambda = O(1/h)\) and passing to the limit \(h\rightarrow 0\), we get the desired result.
10 Numerical results
In this section, we present the results of numerical experiments obtained by applying the schemes introduced earlier to solve the evolution equation (3.8). These experiments have been performed on one-dimensional data in order to present more accurately the differences between solutions for different values of the index s and their characteristic features.
For the experiments, we considered initial data f, \(g:[-10,10]\rightarrow \mathbb {R}\), given by the explicit formulas
Graphs of functions f and g are presented in Fig. 1.
In all calculations, we have fixed the values of the parameters \(h = 0.1\) and \(\tau = 0.1\). In each case, the value of \(\lambda \) was selected so that the criterion for convergence given in Proposition 4 is satisfied, i.e., we set \(\lambda = 4\cdot 10^{-2}\), \(\lambda = 2\cdot 10^{-3}\) and \(\lambda = 10^{-4}\) for \(s = 0\), \(s=0.5\) and \(s=1\), respectively. In Fig. 2, we present the evolution of solutions to the \(H^{-s}\) total variation flow for the initial data f and g, and for different values of the index s (\(s=0, 0.5, 1\)).
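These parameter choices can be checked against the criterion of Proposition 4 directly. A small sketch (Python; here \(\mu _{\text {max}}\) is replaced by its sharp upper bound \(4/h^2\) for the 3-point periodic stencil, and the \(\lambda \) values are those listed above):

```python
def stability_product(s, h, tau, lam):
    """Evaluate mu_max^(s+1) * tau * lam with mu_max = 4/h**2,
    the upper bound on the largest eigenvalue of the 3-point
    periodic discrete Laplacian (a sketch of the check)."""
    mu_max = 4.0 / h**2
    return mu_max ** (s + 1) * tau * lam

# values used in the experiments: h = 0.1, tau = 0.1
for s, lam in [(0.0, 4e-2), (0.5, 2e-3), (1.0, 1e-4)]:
    # each product is approximately 1.6, safely below 2
    assert stability_product(s, h=0.1, tau=0.1, lam=lam) < 2
```

This confirms that all three choices of \(\lambda \) satisfy \(\mu _{\text {max}}^{s+1}\tau \lambda <2\) with the same margin.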
From our computations, we conjecture that a solution may become discontinuous instantaneously for \(s \in (0,1]\) even for Lipschitz initial data. This was rigorously proved for \(s=1\) in [16]. We also see from these computations that the motion becomes slower as s becomes larger.
References
Aujol, J.-F.: Some first-order algorithms for total variation based image restoration. J. Math. Imaging Vis. 34(3), 307–327 (2009)
Andreu, F., Ballester, C., Caselles, V., Mazón, J.M.: Minimizing total variation flow. Differ. Integr. Equ. 14, 321–360 (2001)
Andreu-Vaillo, F., Mazón, J.M., Caselles, V.: Parabolic Quasilinear Equations Minimizing Linear Growth Functionals. Progress in Mathematics, vol. 223. Birkhäuser, Basel (2004)
Attouch, H., Peypouquet, J.: The rate of convergence of Nesterov’s accelerated forward-backward method is actually faster than \(1/k^2\). SIAM J. Optim. 26(3), 1824–1834 (2016)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
Bellettini, G., Caselles, V., Novaga, M.: The total variation flow in \(\mathbb{R}^N\). J. Differ. Equ. 184(2), 475–525 (2002)
Brezis, H.: Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert. North-Holland, Amsterdam (1973)
Burger, M., He, L., Schoenlieb, C.B.: Cahn–Hilliard inpainting and a generalization for gray-value images. SIAM J. Imaging Sci. 2(4), 1129–1167 (2009)
Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20(1–2), 89–97 (2004)
Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 120–145 (2011)
Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward–backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005)
Crandall, M.G., Liggett, T.M.: Generation of semigroups of nonlinear transformations on general Banach spaces. Am. J. Math. 93, 265–298 (1971)
Ekeland, I., Témam, R.: Convex Analysis and Variational Problems. North-Holland, Amsterdam (1976)
Elliott, C.M., Smitheman, S.A.: Analysis of the TV regularization and \(H^{-1}\) fidelity model for decomposing an image into cartoon plus texture. Commun. Pure Appl. Anal. 6(4), 917–936 (2007)
Elliott, C.M., Smitheman, S.A.: Numerical analysis of the TV regularization and \(H^{-1}\) fidelity model for decomposing an image into cartoon plus texture. IMA J. Numer. Anal. 6, 1–39 (2008)
Giga, M.H., Giga, Y.: Very singular diffusion equations: second and fourth order problems. Jpn. J. Appl. Math. 27, 323–345 (2010)
Giga, M.H., Giga, Y., Požár, N.: Periodic total variation flow of non-divergence type in \(\mathbb{R}^n\). J. Math. Pures Appl. 102, 203–233 (2014)
Giga, Y., Kohn, R.V.: Scaleinvariant extinction time estimates for some singular diffusion equations. Discrete Contin. Dyn. Syst. 30(2), 509–535 (2011)
Giga, Y., Kuroda, H., Matsuoka, H.: Fourthorder total variation flow with Dirichlet condition: characterization of evolution and extinction time estimates. Adv. Math. Sci. Appl. 24, 499–534 (2014)
Henry, D.: Geometric theory of semilinear parabolic equations. In: Lecture Notes in Mathematics, vol. 840. Springer, Berlin (1981)
Kashima, Y.: A subdifferential formulation of fourth order singular diffusion equations. Adv. Math. Sci. Appl. 14, 49–74 (2004)
Kashima, Y.: Characterization of subdifferentials of a singular convex functional in Sobolev spaces of order minus one. J. Funct. Anal. 262(6), 2833–2860 (2012)
Kohn, R.V., Versieux, H.M.: Numerical analysis of a steepest-descent PDE model for surface relaxation below the roughening temperature. SIAM J. Numer. Anal. 48, 1781–1800 (2010)
Komura, Y.: Nonlinear semigroups in Hilbert space. J. Math. Soc. Jpn. 19, 493–507 (1967)
Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
Liu, J.G., Lu, J., Margetis, D., Marzuola, J.L.: Asymmetry in crystal facet dynamics of homoepitaxy by a continuum model. (preprint)
Łasica, M., Moll, S., Mucha, P.B.: Total variation denoising in \(l^1\) anisotropy. SIAM J. Imaging Sci. 10(4), 1691–1723 (2016)
Meyer, Y.: Oscillating Patterns in Image Processing and Nonlinear Evolution Equations: The Fifteenth Dean Jacqueline B. Lewis Memorial Lectures. American Mathematical Society, Boston (2001)
Moll, J.S.: The anisotropic total variation flow. Math. Ann. 332, 177–218 (2005)
Mucha, P.B., Muszkieta, M., Rybka, P.: Two cases of squares evolving by anisotropic diffusion. Adv. Differ. Equ. 20(7–8), 773–800 (2015)
Nesterov, Y.: A method of solving a convex programming problem with convergence rate \(O(1/k^2)\). Sov. Math. Dokl. 27, 372–376 (1983)
Oberman, A., Osher, S., Takei, R., Tsai, R.: Numerical methods for anisotropic mean curvature flow based on a discrete time variational formulation. Commun. Math. Sci. 9, 637–662 (2011)
Osher, S.J., Sole, A., Vese, L.A.: Image decomposition and restoration using total variation minimization and the \(H^{-1}\) norm. Multiscale Model. Simul. SIAM Interdiscip. J. 1(3), 349–370 (2003)
Rudin, L., Osher, S., Fatemi, E.: Nonlinear total variation based noise removal algorithms. Physica D 60, 259–268 (1992)
Rulla, J.: Error analysis for implicit approximations to solutions to Cauchy problems. SIAM J. Numer. Anal. 33(1), 68–87 (1996)
Schönlieb, C.-B.: Total variation minimization with an \(H^{-1}\) constraint. In: CRM Series 9, Singularities in Nonlinear Evolution Phenomena and Applications Proceedings, Scuola Normale Superiore, Pisa, pp. 201–232 (2009)
Weiss, P., Aubert, G., Blanc-Féraud, L.: Efficient schemes for total variation minimization under constraints in image processing. SIAM J. Sci. Comput. 31(3), 2047–2080 (2009)
Acknowledgements
This work was partially supported by the EU IRSES program “FLUX” and the Polish Ministry of Science and Higher Education Grant No. 2853/7.PR/2013/2. The work of Yoshikazu Giga was partially supported by the Japan Society for the Promotion of Science through Grant Nos. 26220702 (Kiban S) and 16H03948 (Kiban B). A part of the research for this paper was performed when Monika Muszkieta and Piotr Rybka visited the University of Tokyo, whose hospitality is gratefully acknowledged. The authors also thank the anonymous reviewer for the careful reading of the manuscript and for insightful comments.
Giga, Y., Muszkieta, M., Rybka, P.: A duality based approach to the minimizing total variation flow in the space \(H^{-s}\). Japan J. Indust. Appl. Math. 36, 261–286 (2019). https://doi.org/10.1007/s13160-018-0340-4