Abstract
Two key ideas have greatly improved techniques for image enhancement and denoising: the lifting of image data to multi-orientation distributions and the application of nonlinear PDEs such as total variation flow (TVF) and mean curvature flow (MCF). These two ideas were recently combined by Chambolle and Pock (for TVF) and Citti et al. (for MCF) for two-dimensional images. In this work, we extend their approach to enhance and denoise images of arbitrary dimension, creating a unified geometric and algorithmic PDE framework, relying on (sub-)Riemannian geometry. In particular, we follow a different numerical approach, for which we prove convergence in the case of TVF by an application of Brezis–Komura gradient flow theory. Our framework also allows for additional data adaptation through the use of locally adaptive frames and coherence enhancement techniques. We apply TVF and MCF to the enhancement and denoising of elongated structures in 2D images via orientation scores and compare the results to Perona–Malik diffusion and BM3D. We also demonstrate our techniques in 3D in the denoising and enhancement of crossing fiber bundles in DW-MRI. In comparison with data-driven diffusions, we see a better preservation of bundle boundaries and angular sharpness in fiber orientation densities at crossings.
1 Introduction
In the last decade, many PDE-based image-analysis techniques for tracking and enhancement of curvilinear structures in images took advantage of lifting image data, typically defined on \(\mathbb {R}^d\), to a multi-orientation distribution (e.g., an orientation score) defined on the homogeneous space \(\mathbb {M}_d\) of \(d\)-dimensional positions and orientations, see Fig. 1 and [5, 8, 11, 14, 20, 53]. After lifting the image to a multi-orientation distribution, the distribution is taken as an initial condition of a PDE flow. After solving a limited number of iterations of the PDE model, one obtains a regularized version of the original distribution, and by integration over all orientations, one obtains a regularized version of the original image.
The key advantage of lifting the images from \(\mathbb {R}^d\) to the homogeneous space \(\mathbb {M}_d\) is that the PDE flow can act differently on substructures with different orientations [5, 11, 24]. For instance, if the image contains two crossing lines, the PDE can regularize the two lines independently, rather than regularizing the whole crossing. Similarly, if the image contains a corner, the corner is preserved in the regularized image.
This idea of lifting images has been successfully implemented for image enhancement [13, 30], geodesic tracking [6, 11, 51], image denoising [31], and contrast perception and optical illusions [7]. For instance, Perona–Malik diffusion has been lifted to the homogeneous space \(\mathbb {M}_d\) [17], and coherence-enhancing diffusion (CED) [54] has been lifted to the method of coherence-enhancing diffusion on invertible orientation scores (CED-OS) [30] and to its 3D generalization [33].
PDE flows on orientation lifts of 3D images are relevant for applications such as fiber enhancement [17, 21, 46, 53] and fiber tracking [45] in diffusion-weighted magnetic resonance imaging (DW-MRI), and in enhancement [33] and tracking [15] of blood vessels in 3D images.
The general workflow is illustrated in Fig. 1. The original image is described by a function \(f:\Omega _f \rightarrow \mathbb {R}^{+}\), where \(\Omega _f \subset \mathbb {R}^d\) is its support. From \(f \in \mathbb {L}_2 \left( \Omega _f \right) \), one computes an orientation lift \(U: \mathbb {M}_{d} \rightarrow \mathbb {C} \), compactly supported within
There are various ways to construct such a lift: it can be (the real part of) an invertible orientation score [22] (cf. Fig. 1), a channel representation [28], a lift by Gabor wavelets [3], or a fiber orientation density [44]. In all of these approaches, the absolute value \(|U(\varvec{x},\varvec{n})|\) can be regarded (after normalization) as a probability density of finding a fiber structure at position \(\varvec{x} \in \mathbb {R}^d\) with local orientation \(\varvec{n} \in S^{d-1}\). We set the orientation lift U as an initial condition of a PDE flow \(U \mapsto \Phi _t(U)\) with evolution time \(t > 0\). Finally, the processed multi-orientation representation \(\Phi _t(U)\) is integrated over all orientations to obtain the enhanced image \(f_t\). In this article, we will work with the orientation score, with the main motivation being that this operation is invertible [20], so that when taking \(t \downarrow 0\), the output equals the input, i.e., \(\lim _{t \downarrow 0} f_t = f\) in \(\mathbb {L}_{2}\)-sense.
The enhanced image that one obtains after running a PDE flow (the bottom-right picture in Fig. 1) naturally depends on the type of flow used. One flow may be more suitable than another, depending on the requirements imposed on the resulting image. When it is important to preserve sharp transitions in the image while maintaining plateaus, nonlinear flows such as total variation flow (TVF) and mean curvature flow (MCF) [49] are typically better suited than nonlinear diffusion flows (such as Perona–Malik diffusion [42] and coherence-enhancing diffusion [54]).
For \(d=2\), TVF and MCF were recently generalized to lifted images by Chambolle and Pock [11] and Citti et al. [13], respectively.
Their promising results have motivated us to generalize TVF and MCF to lifted images for general dimension d and provide a general geometric and algorithmic framework that can accommodate features such as locally adaptive frames and coherence enhancement.
The benefits of our approach are that we obtain a single unifying geometric and algorithmic framework for arbitrary d, with efficient algorithms (for \(d=2,3\)) that preserve crossing lines, corners, plateaus, edges and bundle boundaries and can improve curvature adaptation via the optional inclusion of locally adaptive frames. Such frames account for curvature of lines and allow us to remove bias toward sampled orientations in orientation scores.
Our PDE methods on \(\mathbb {M}_{d}\) are computationally more expensive than their counterparts acting only on \(\mathbb {R}^d\), but they are still practical. Similar to crossing-preserving nonlinear diffusion on \(\hbox {SE}(2)\equiv \mathbb {M}_{2}\), locally adaptive frames allow us to remove orientation sampling bias in orientation scores [29, Fig. 6.11] and to use only 4 orientation samples [30]. For our crossing-preserving MCF and TVF on \(\mathbb {M}_{2}\) we sample our (processed) orientation scores on only 8 orientations. On \(\mathbb {M}_{3}\) we compute regularized orientation lifts on a grid with 162 orientations, where we rely on efficient numerical schemes for PDEs on \(\mathbb {M}_3\) based on the low-order PDE discretization schemes explained in [17, 37], instead of higher-order schemes via spherical harmonics [34, Ch. 3.4], in order to reduce computation time.
The structure of this article is as follows. We start by recapitulating orientation scores and explaining the homogeneous space \(\mathbb {M}_d\) as a Lie group quotient in the rigid body motion group \(\hbox {SE}(d)\) in Sect. 2 and explain the necessary geometric concepts. In Sect. 3, we introduce the PDEs for total variation and mean curvature flow on \(\mathbb {M}_d\) and explain our explicit discretization scheme. Our numerical scheme includes regularization for which we prove convergence in Sect. 4. In Sect. 5 we evaluate the potential of our methods with 2D and 3D experiments.
Remark 1
(Visualization of 3D orientation scores) In the 3rd row of Fig. 1, and henceforth, we visualize a lifted image \(U: \mathbb {R}^{3} \rtimes S^{2} \rightarrow \mathbb {R}^+\) by a grid of angular profiles \(\{\, \mu \, U(\mathbf {x},\mathbf {n})\, \mathbf {n} \mid \mathbf {x} \in \mathbb {Z}^{3}, \mathbf {n} \in S^{2}\,\}\), with fixed \(\mu >0\).
Remark 2
(Additional content in this version) This article is an extended version of the authors’ SSVM article by the same name [25]. The following content is new:

A coordinate-free formulation of gauge-frame fitting in Sect. 2.4 that generalizes our previous coordinate-dependent approach of [23].

An introduction of two distinct geometric setups (with or without locally adaptive frames), for \(\mathbb {M}_d\) of any dimension, that admit the formulation of TVF and MCF PDEs. A quick overview of the two distinct geometric approaches is provided in Table 1.

A general formulation of our “coherence enhancement technique” for TVF and MCF on \(\mathbb {M}_d\) in Sect. 2.5.

A proof of the theorem on strong convergence, stability, and accuracy of TV flows. This result was announced in [25] but not yet proven there.

Extensions of our 2D denoising/enhancing experiments, Sect. 5, Figs. 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, and Table 4. These experiments now include a full comparison of isotropic and anisotropic processing, and the effect of including coherence enhancement (via locally adaptive frames) in TVF and MCF. They also include additional comparisons to Perona–Malik diffusion [42] and to a well-established denoising method: BM3D [18, 35].

A more comprehensive treatment of the geometric tools used such as vector fields and metric tensors. We now clearly distinguish between the group \(\hbox {SE}(d)\) and the homogeneous space \(\mathbb {M}_d\). We also explain how to transfer geometric tools on these sets in Sect. 2.6 and Table 2.
2 Preliminary Theory
Before we can provide the generalized PDEs, which include TVF and MCF as special cases, we need to construct the necessary tools.
In this section, we review orientation scores, the rigid-body motion group \(\hbox {SE}(d)\), and the homogeneous space \(\mathbb {M}_d\) of positions and orientations. For further reading on engineering applications and harmonic analysis on the Lie group \(\hbox {SE}(d)\) we refer to [12, Ch. 6]. For theory on homogeneous spaces we refer to [36, Ch. 21]. For image processing on \(\hbox {SE}(2)\), see for example [5, 8, 11, 14]; for image processing on \(\hbox {SE}(3)\), see for example [39, 45, 47].
2.1 Orientation Scores: Lifting the Image Domain from \(\mathbb {R}^d\) to \(\mathbb {M}_d\)
In order to disentangle all local orientations in an image we lift the data from position space \(\mathbb {R}^d\) to the homogeneous space \(\mathbb {M}_{d}\) of positions and orientations. This means that we extend the domain of an image. See Fig. 1, where we lift the data from \(\mathbb {R}^d\) toward \(\mathbb {M}_{d}\) via invertible orientation scores.
Building an orientation score starts with selecting an orientation-sensitive filter (or wavelet) \(\psi \in \mathbb {L}_1 \cap \mathbb {L}_2\left( \mathbb {R}^d\right) \). We can then (under appropriate conditions [20, 33]) filter out a particular direction from an image \(f\in \mathbb {L}_2\left( \mathbb {R}^d\right) \) by convolving the image with this filter aligned to that direction. An orientation score \(\mathcal {W}_\psi f\) can then be constructed by applying this filtering for all directions \(\varvec{n}\in S^{d-1}\):
for all \(\varvec{x} \in \mathbb {R}^d\) and rotations \(\varvec{R}_{\varvec{n}}\) that map a reference axis \(\varvec{a}\in S^{d-1}\) to \(\varvec{n}\).
For this paper we will be using cake wavelets [20, 33] for our filter \(\psi \), illustrated in Fig. 2 for \(d=2\). These wavelets are directional filters that have the property that we can accurately reconstruct the original image from the orientation score (again under appropriate conditions) by integration over \(S^{d-1}\), i.e.,
where \(\sigma \) denotes the usual surface measure over \(S^{d-1}\). We always use standard cake wavelet parameter settings from [37] in our experiments.
The explicit formulas for these cake wavelets that allow invertible orientation scores are available in [5, 20] and specifically for \(d=3\) in [33]. An intuitive illustration of an orientation score is given in Fig. 2.
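As an illustration of this lifting step, the following Python sketch builds a (non-invertible) orientation score by convolving the image with a rotated filter for each sampled orientation. It is a simplified stand-in: the oriented anisotropic Gaussian in `rotated_filter` is a hypothetical substitute for a cake wavelet, and all parameter values are illustrative choices, not the settings of [37].

```python
import numpy as np

def rotated_filter(theta, size=9, sigma_long=3.0, sigma_short=1.0):
    """A simple oriented filter (anisotropic Gaussian) whose long axis points
    along angle theta. This is a stand-in for a cake wavelet; unlike cake
    wavelets, it does not yield an invertible orientation score."""
    r = (size - 1) // 2
    y, x = np.mgrid[-r:r + 1, -r:r + 1]
    # rotate coordinates so that u runs along the filter's long axis
    u = np.cos(theta) * x + np.sin(theta) * y
    v = -np.sin(theta) * x + np.cos(theta) * y
    psi = np.exp(-u**2 / (2 * sigma_long**2) - v**2 / (2 * sigma_short**2))
    return psi / psi.sum()

def orientation_score(f, n_theta=8):
    """Lift a 2D image f to a stack U[k] = (psi_{theta_k} * f), one slice per
    sampled orientation theta_k in [0, pi)."""
    thetas = np.arange(n_theta) * np.pi / n_theta
    U = np.empty((n_theta,) + f.shape)
    for k, th in enumerate(thetas):
        psi = rotated_filter(th)
        # circular 2D convolution via FFT (periodic boundary, for simplicity)
        U[k] = np.real(np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(psi, f.shape)))
    return U
```

On a synthetic line image, the slice whose orientation matches the line carries the strongest response, which is exactly the disentanglement of local orientations that the lift is meant to achieve.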
2.2 \(\hbox {SE}(d)\) and the Homogeneous Space of Positions and Orientations \(\mathbb {M}_d\)
Consider the rigid body motion group \(\hbox {SE}(d)= \mathbb {R}^d \rtimes SO(d)\), the semidirect product of the translation group \(\mathbb {R}^d\) and the rotation group \(\hbox {SO}(d)\) of orthogonal \(d \times d\) matrices with determinant 1. We call elements of \(\hbox {SE}(d)\) roto-translations. The product of two roto-translations \(g_i = (\varvec{x}_i, \varvec{R}_i)\) in \(\hbox {SE}(d)\) is given by
These roto-translations act transitively on the space \(\mathbb {R}^d \times S^{d-1}\) by
for all \((\varvec{y},\varvec{n}) \in \mathbb {R}^{d}\times S^{d-1}\) and all roto-translations \((\varvec{x},\varvec{R}) \in \hbox {SE}(d)\).
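The group product and the transitive action can be made concrete in a few lines. The sketch below stores rotations as \(d \times d\) NumPy matrices; `se_mul` and `se_act` are hypothetical helper names, and the assertions check the defining property of a group action, \(g_1 \cdot (g_2 \cdot p) = (g_1 g_2) \cdot p\).

```python
import numpy as np

def se_mul(g1, g2):
    """Product of roto-translations g_i = (x_i, R_i) in SE(d):
    (x1, R1)(x2, R2) = (x1 + R1 x2, R1 R2)."""
    (x1, R1), (x2, R2) = g1, g2
    return (x1 + R1 @ x2, R1 @ R2)

def se_act(g, p):
    """Action of g = (x, R) on p = (y, n) in R^d x S^{d-1}:
    g . (y, n) = (x + R y, R n)."""
    (x, R), (y, n) = g, p
    return (x + R @ y, R @ n)
```

Note that a rotation around the reference axis fixes the reference point \((\varvec{0},\varvec{a})\), which is exactly the stabilizer discussed next.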
We choose an a priori reference vector \(\varvec{a} \in S^{d-1}\), say \(\varvec{a}=(1,0)^T\) if \(d=2\) or \(\varvec{a}=(0,0,1)^T\) if \(d=3\). Then the stabilizer of the element \((\varvec{0},\varvec{a})\) is given by
which is isomorphic to \(\hbox {SO}(d-1)\).
The homogeneous space of positions and orientations is the partition of left cosets
The left cosets are equivalence classes in \(\hbox {SE}(d)\) with respect to the equivalence relation
For \(d=2\), the subgroup \(H_2=\{(\mathbf {0},\mathbf {I})\}\) consists only of the unit element, and therefore the manifold \(\mathbb {M}_2\) is diffeomorphic to \(\hbox {SE}(2)\). However, for \(d > 2\) the manifolds \(\mathbb {M}_d\) and \(\hbox {SE}(d)\) are not diffeomorphic.
For \(d=3\), the stabilizer can be described by
where \(\varvec{R}_{\varvec{a},\alpha }\) denotes a (counterclockwise) rotation over angle \(\alpha \) around the reference axis \(\varvec{a}\). This means that two rototranslations \(g_1 = (\varvec{x}_1,\varvec{R}_1)\) and \(g_2 = (\varvec{x}_2,\varvec{R}_2)\) are equivalent if and only if
The equivalence classes \(\left[ g \right] = \left\{ g' \in \hbox {SE}(3) \ \big \vert \ g' \sim g \right\} \) are usually just denoted by \(p=(\varvec{x},\varvec{n})\) as they consist of all rigid body motions \(g=\left( \varvec{x},\varvec{R}_{\varvec{n}} \right) \) that map the reference point \((\varvec{0},\varvec{a})\) onto \((\varvec{x},\varvec{n}) \in \mathbb {R}^3 \times S^2\):
Remark 3
(Distinguishing \(\hbox {SE}(d)\) from \(\mathbb {M}_d\)) As the distinction between the group \(\hbox {SE}(d)\) and the homogeneous space \(\mathbb {M}_d\) (which is not a group for \(d=3\) and above) is important, we will use g, h for elements of \(\hbox {SE}(d)\), and p, q for points in \(\mathbb {M}_d\).
To understand why the situation changes from \(d=2\) to \(d >2\), observe that in two dimensions we need one angle to specify an orientation and have one rotational degree of freedom, whereas in three dimensions we need two angles to specify an orientation but have three rotational degrees of freedom; i.e., \(\mathbb {M}_3\) has one dimension less than \(\hbox {SE}(3)\). See also Fig. 3 for an illustration of this difference.
For \(d > 3\) this situation persists as we have more rotational degrees of freedom that do not change the orientation.
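The dimension count behind this persistence can be verified directly: \(\dim \hbox {SE}(d) = \frac{1}{2}d(d+1)\), \(\dim \mathbb {M}_d = 2d-1\), and the difference is \(\dim \hbox {SO}(d-1) = \frac{1}{2}(d-1)(d-2)\), which vanishes only for \(d \le 2\). A minimal sketch:

```python
def dim_se(d):
    """dim SE(d) = translations (d) + rotations (d(d-1)/2)."""
    return d + d * (d - 1) // 2

def dim_m(d):
    """dim M_d = positions (d) + orientations (dim S^{d-1} = d - 1)."""
    return d + (d - 1)

def dim_stab(d):
    """dim H_d = dim SO(d-1): rotations that fix the reference axis a."""
    return (d - 1) * (d - 2) // 2
```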
Remark 4
(Domain of an orientation score) The orientation score is well defined on the domain \(\mathbb {M}_d\) if we assume \(\psi \) is not affected under the action of subgroup \(H_d\). For \(d=3\) this means we must impose axial symmetry on the wavelets, for details see [33].
2.3 Differential Structure on \(\hbox {SE}(d)\), \({\mathbf {\mathbb {M}}}_d\)
As a manifold, we view the group \(\hbox {SE}(d)\) in a standard way as a submanifold of \(\mathbb {R}^d \times \mathbb {R}^{d \times d}\). The Lie algebra is, as a vector space, the tangent space at the unit element (see [36, Ch. 7]). We view elements of tangent spaces (i.e., tangent vectors) as derivations acting on functions: If v is an ordinary vector in \(\mathbb {R}^d \times \mathbb {R}^{d\times d}\) tangent to \(\hbox {SE}(d)\), the corresponding derivation acting on a function \(f \in C^1 \left( \hbox {SE}(d) \right) \) is just the derivative of f in the direction of v.
The Lie algebra has dimension \(D=\frac{1}{2}d(d+1)\). We choose a basis \(\left( A_i \right) _{i=1}^{D}\) for the Lie algebra of \(\hbox {SE}(d)\) with the following properties. The basis is orthonormal with respect to the inner product belonging to the standard Euclidean metric on \(\mathbb {R}^d \times \mathbb {R}^{d \times d}\); the vectors \(\{A_1, \dots , A_d \}\) span the spatial part of the Lie algebra, which is isomorphic to \(\mathbb {R}^d\), with the vector \(A_d\) corresponding to the derivative in the direction of \(\varvec{a}\). Recall that for \(d=2\) the subgroup \(H_d\) is trivial. For \(d\ge 3\), the set \(\{A_{2d}, \dots , A_{D}\}\) forms a basis for the Lie algebra of the stabilizer subgroup \(H_d\). We take the convention that the Lie algebra vector \(A_{i+d}\) generates the in-plane rotation in the plane spanned by \(A_i\) and \(\varvec{a}\) for \(i=1,\ldots ,d-1\).
For the case \(d=2\) this gives us two spatial generators, \(A_1\) and \(A_2\), and one rotation generator \(A_3\). Moving to \(d=3\), we have three spatial degrees of freedom and three rotational degrees of freedom, but only two of those rotational degrees of freedom change the orientation; we denote the generator corresponding to the rotation that preserves the reference axis by \(A_6\). This gives us the following basis:
As is illustrated in Fig. 3, for \(d=3\) we have a rotational degree of freedom that does not change the orientation reference axis. As a result, \(\mathbb {M}_3\) is not diffeomorphic to \(\hbox {SE}(3)\); it is rather a 5-dimensional quotient of the 6-dimensional Lie group \(\hbox {SE}(3)\), see also Remark 3.
Remark 5
(Generalization for \(d>3\)) Generalizing this scheme for \(d > 3\) we would have the following basis for the Lie algebra:
We extend the vectors \(A_i\) to left-invariant vector fields \(\mathcal {A}_i\) as follows. The group acts on itself by left multiplication,
and the derivation \((\mathcal {A}_i)_g\), evaluated in a point g, is given by the pushforward
for all \(f\in C^\infty \left( \hbox {SE}(d), \mathbb {R} \right) \). We denote the corresponding covector fields by \(\omega ^i:g \mapsto \omega ^i \vert _g\). For each \(g \in \hbox {SE}(d)\), the covector \(\omega ^i\vert _g\) is an element of the dual to the tangent space of \(\hbox {SE}(d)\) at g. The covector fields are characterized by
where \(\delta ^i_j\) denotes the Kronecker delta.
Note that
and so for all \(g=\left( \varvec{x},\varvec{R}_{\varvec{n}}\right) \in \hbox {SE}(d)\):
from which we infer that the left-invariant frame is aligned with the direction \(\varvec{n} \in S^{d-1}\).
Remark 6
(Left-Invariant Basis in 2D) We can represent an element of \(\hbox {SE}(2)\) by its position and angle as \((\varvec{x}, \theta )\in \mathbb {R}^2 \times \left[ 0, 2\pi \right) \), which allows us to write the left-invariant vector fields \(\mathcal {A}_i\) as:
For an explicit form of the left-invariant vector fields \(\mathcal {A}_i\) in case \(d=3\), see “Appendix A.”
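Left-invariance of these fields can also be checked numerically. The sketch below assumes one common 2D convention, namely \(\mathcal {A}_1 = -\sin \theta \,\partial _x + \cos \theta \,\partial _y\) (lateral), \(\mathcal {A}_2 = \cos \theta \,\partial _x + \sin \theta \,\partial _y\) (forward, along \(\varvec{a}=(1,0)^T\)), and \(\mathcal {A}_3 = \partial _\theta \); conventions differ across papers, so treat this ordering as an assumption. It verifies \(\mathcal {A}_i(f \circ L_h)(g) = (\mathcal {A}_i f)(hg)\) by finite differences.

```python
import numpy as np

def lmul(h, g):
    """Left multiplication in SE(2): h.g, with h = (xh, yh, th), g = (xg, yg, tg)."""
    xh, yh, th = h
    xg, yg, tg = g
    return (xh + np.cos(th) * xg - np.sin(th) * yg,
            yh + np.sin(th) * xg + np.cos(th) * yg,
            th + tg)

def A(i, f, g, eps=1e-5):
    """Central-difference left-invariant derivative A_i f at g = (x, y, theta),
    assuming A_1 lateral, A_2 forward, A_3 angular."""
    x, y, t = g
    dirs = {1: (-np.sin(t), np.cos(t), 0.0),
            2: ( np.cos(t), np.sin(t), 0.0),
            3: (0.0, 0.0, 1.0)}
    dx, dy, dt = dirs[i]
    return (f(x + eps * dx, y + eps * dy, t + eps * dt)
            - f(x - eps * dx, y - eps * dy, t - eps * dt)) / (2 * eps)
```

Because left translation is affine in these coordinates, the finite differences on both sides of the invariance identity follow the same path, so the check holds up to roundoff.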
We introduce the following metric tensor field in terms of the left-invariant covector fields \(\left( \omega ^i \right) _{i=1}^{2d-1}\).
Definition 1
(Left-invariant metric tensor field) Given positive constants \(D_S > 0\) and \(D_A > 0\), and a nonnegative real number \(\mathfrak {e} \ge 0\), we define the left-invariant metric tensor field \(\mathcal {G}\) by
Remark 7
(Sub-Riemannian case) Henceforth we refer to \(\mathfrak {e}=0\) as the sub-Riemannian case, where tangent vectors are constrained to the span of \(\mathcal {A}_d,\ldots ,\mathcal {A}_{2d-1}\). Intuitively, when \(\mathfrak {e} \downarrow 0\), the other tangent directions get infinite cost and become prohibited. This means that we restrict ourselves to so-called horizontal tangent vectors:
Observe that this sub-Riemannian metric tensor is defined (and invertible) on a subbundle of the tangent bundle of the group, as it does not contain any of the covectors dual to the subbundle induced by the subgroup \(H_d\). Furthermore, it is spatially isotropic in the directions orthogonal to the primary spatial direction, and spherically we impose isotropy as well, as can be seen from the last term in the above definition.
This metric induces an associated norm: If \(\dot{g} \in T_g \left( \hbox {SE}(d)\right) \), then
where again in the sub-Riemannian case we only allow \(\dot{g}\) to be in the span of \(\mathcal {A}_d,\ldots ,\mathcal {A}_{2d-1}\).
Now that we have \(\hbox {SE}(d)\) equipped with a (sub-)Riemannian metric tensor, we can derive the basic tools that are required to formulate our geometric PDEs. These basic tools include the gradient, its norm, and the divergence of a vector field. Let us relabel our parameters as
Let \(\tilde{U}:\hbox {SE}(d) \rightarrow \mathbb {R}\) carry the axial symmetry:
for some \(U:\mathbb {R}^{d} \times S^{d1} \rightarrow \mathbb {R}\).
Then in the Riemannian setting the gradient of a differentiable function \(\tilde{U}:\hbox {SE}(d) \rightarrow \mathbb {R}\) on the group induced by this metric tensor becomes
where the sum only runs to \(2d-1\) and not to \(\text {dim}(\hbox {SE}(d))=D= \frac{1}{2}d(d+1)\) since (16) implies that
The gradient then has the following norm
The divergence of a vector field is given by
In the sub-Riemannian setting, where we restrict ourselves to vector fields spanned by \(\left( \mathcal {A}_i \right) _{i=d}^{2d-1}\), we have
2.4 Locally Adaptive Frames on \(\hbox {SE}(d)\) as SVD of the Hessian
As an alternative to the left-invariant frame, we can choose a frame (and subsequently a metric tensor field) that is adapted to the data (which we also refer to as gauge frames, in analogy with [23]). Specifically, instead of having the vector field \(\mathcal {A}_d = \varvec{n} \cdot \nabla \) as a static forward direction, we want to choose a vector field \(\mathcal {B}_d\) that locally aligns with the data [23]. In particular, \(\mathcal {B}_d\) can take on an angular component, meaning the local “straight forward” will follow the curve of the data; consequently, flows can better follow curved structures, see Fig. 4 for an example.
Remark 8
(Fitting a frame) We can induce an entire frame in \(\hbox {SE}(d)\) from a choice of main vector, see [23, Appendix A] for details. For an intuitive illustration see Fig. 4. In this article we will focus on the method by which the main gauge vector is obtained.
Next we will present a singular value decomposition of the Hessian; we will choose the eigenvector associated with the smallest eigenvalue as \(\mathcal {B}_d\). Geometrically, this can be seen as the direction in which the gradient changes the least. Before we can formulate this procedure we explain the concept of exponential curves (see Fig. 5).
Definition 2
(Exponential curve) Let \(\dot{g}\in T_g \left( \hbox {SE}(d)\right) \) then the exponential curve parameterized by t through g with tangent vector \(\dot{g}\) is written as \(e^{\dot{g}t}_g\) and is the curve for which \(e^{\dot{g}0}_g=g\) and which has the property that for all \(t\in \mathbb {R}\):
Or more explicitly in coordinates, if \(\dot{g}=\sum _{i=1}^{2d-1} \dot{g}^i \mathcal {A}_i \big \vert _g\) we have that:
Hence the exponential curves are those curves whose tangent vector components with respect to the left-invariant frame do not change. For an illustration of such curves for the case \(d=2\), see Fig. 5.
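For \(d=2\), this defining property can be verified numerically by representing \(\hbox {SE}(2)\) with \(3\times 3\) homogeneous matrices, so that the exponential curve through \(g_0\) is \(g(t) = g_0 \exp (tX)\). The sketch assumes the hypothetical component ordering \((c_1, c_2, c_3)\) = (lateral, forward, angular); the test recovers the velocity by finite differences and checks that its components in the left-invariant frame stay constant.

```python
import numpy as np

def expm(X, terms=30):
    """Matrix exponential via a truncated Taylor series (adequate for the
    small 3x3 matrices used here)."""
    out, term = np.eye(3), np.eye(3)
    for k in range(1, terms):
        term = term @ X / k
        out = out + term
    return out

def exp_curve(g0, c, t):
    """Exponential curve through g0 (3x3 homogeneous SE(2) matrix) with
    constant left-invariant components c = (c1, c2, c3):
    c1 lateral, c2 forward (along a), c3 angular."""
    c1, c2, c3 = c
    X = np.array([[0., -c3, c2],
                  [c3,  0., c1],
                  [0.,  0., 0.]])
    return g0 @ expm(t * X)
```

For \(c_3 \ne 0\) the spatial projection of such a curve is a circular arc, matching the spirals/circles typically drawn in illustrations like Fig. 5.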
In view of (16) and (18) we define
Now we want to select \(\mathcal {B}_d \big \vert _g\) (normalized with respect to the existing metric tensor \(\mathcal {G} \big \vert _g\)) so that the gradient of the data \(\tilde{U} \in C^1\left( \hbox {SE}(d), \mathbb {R} \right) \) changes as little as possible (recall Fig. 4) in the following manner.
Definition 3
(Main gauge vector) We define the main gauge vector as
where we assume that \(\tilde{U}\) is such that we have a unique minimizer. The Hessian in the previous equation is induced by a Cartan connection as outlined in [23, Appendix 4, (133)].
Writing the tangent vector in terms of the local left-invariant frame as \(\dot{g}=\sum _{i=1}^{2d-1} \dot{g}^i \mathcal {A}_i \big \vert _g \in T_g\left( \hbox {SE}(d)\right) \), we write out the Hessian as follows:
We can write this problem in terms of matrices by defining the following:
with i as row index and j as column index.
Using these the objective function in (22) becomes
which we want to minimize under the constraint
Taking the derivative of the Lagrangian of this convex optimization problem gives us optimality under the following condition (\(\lambda \in \mathbb {R}\)):
i.e., \(\dot{\varvec{g}}\) needs to be an eigenvector of the matrix \(M^2 K^T M^2 K\) with eigenvalue \(\lambda \) (serving as the Lagrange multiplier). If for a moment we rewrite (25) as
we see that \(\lambda \) is indeed real since \(\left( M K M \right) ^T \left( M K M \right) \) is symmetric. With this eigenvalue and eigenvector, objective function (23) evaluates to
This last equation incidentally proves that \(M^2 K^T M^2 K\) is positive semidefinite and, more importantly, that to minimize the change in gradient we need to look at the eigenvector belonging to the smallest eigenvalue.
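The selection of the smallest-eigenvalue eigenvector can be sketched as an ordinary symmetric eigenproblem after the substitution \(u = M^{-1}\dot{g}\). This is a sketch only: in the actual method, the entries of K and M come from the Hessian components and the metric parameters, whereas here they are arbitrary test matrices.

```python
import numpy as np

def main_gauge_vector(K, M):
    """Minimize ||M K gdot||^2 subject to ||M^{-1} gdot|| = 1.
    Substituting u = M^{-1} gdot turns this into an eigenproblem for the
    symmetric matrix (M K M)^T (M K M); we take the smallest eigenvalue."""
    MKM = M @ K @ M
    S = MKM.T @ MKM            # symmetric positive semi-definite
    w, V = np.linalg.eigh(S)   # eigenvalues in ascending order
    u = V[:, 0]                # eigenvector of the smallest eigenvalue
    return M @ u               # back-substitute gdot = M u
```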
In practice, we do not immediately calculate the eigenvectors and eigenvalues from the scheme we have just proposed, but for the purpose of stability we first apply a component-wise Gaussian smoothing on the matrix K as follows:
with the usual surface measure \(\sigma \) on \(S^{d-1}\) and with the smoothing kernel
where \(G^M_{\rho }\) is the heat kernel on the Riemannian manifold M with timescale \(\rho >0\), the spatial kernel is centered on \(\varvec{0}\), and the orientation kernel is centered on the reference direction \(\varvec{a}\).
Remark 9
(Diffusion on \(\mathbb {M}_d\)) It is important in the context of \(\mathbb {M}_d\) to choose diffusion that is isotropic spatially (with timescale \(\rho _s\)) and spherically (with timescale \(\rho _a\)) since this is the only diffusion that commutes with the left-invariant vector fields. Note that \(G^{\mathbb {R}^d}_{\rho _s}(\varvec{0},\varvec{y})\) depends only on \(\left\| \varvec{y} \right\| \) and \(G^{S^{d-1}}_{\rho _a }(\varvec{a},\varvec{m})\) depends only on \(\arccos {\left( \varvec{a} \cdot \varvec{m} \right) }\), making \(G(\varvec{y},\varvec{m})\) the heat kernel on the product manifold \(\mathbb {R}^d \times S^{d-1}\). This smoothing method is a variant on the one used in [23].
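For \(d=2\), where the orientation is a periodic angle, the component-wise smoothing of K can be sketched with periodic Gaussian convolutions. This is a simplification: periodic spatial boundaries stand in for \(\mathbb {R}^2\), and a wrapped Gaussian stands in for the exact heat kernel on \(S^1\); the helper names are hypothetical.

```python
import numpy as np

def gauss_kernel_1d(n, sigma):
    """Periodic 1D Gaussian kernel of length n, centered at index 0 and
    normalized to sum to 1 (so smoothing preserves constants)."""
    k = np.minimum(np.arange(n), n - np.arange(n))  # circular distance
    g = np.exp(-k**2 / (2.0 * sigma**2))
    return g / g.sum()

def smooth_matrix_field(K, sigma_s, sigma_a):
    """Component-wise Gaussian smoothing of a field of matrices
    K[theta, y, x, i, j] on M_2: periodic in theta (scale sigma_a) and, for
    simplicity, periodic in y and x as well (scale sigma_s)."""
    out = K.astype(float)
    for axis, sigma in ((0, sigma_a), (1, sigma_s), (2, sigma_s)):
        g = gauss_kernel_1d(K.shape[axis], sigma)
        Gf = np.fft.fft(g)
        F = np.fft.fft(out, axis=axis)
        shape = [1] * out.ndim
        shape[axis] = K.shape[axis]          # broadcast kernel spectrum
        out = np.real(np.fft.ifft(F * Gf.reshape(shape), axis=axis))
    return out
```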
The remaining basis vectors are determined by considering a rotation that maps \(\mathcal {A}_d \big \vert _g\) to \(\mathcal {B}_d \big \vert _g\) and then applying a specific rotation to the remaining \(\mathcal {A}_i \big \vert _g\) that keeps the remaining spatial generators spatial. For an illustration see Fig. 4. How this rotation is chosen and applied is detailed in [23, App.B].
Having determined a data-adaptive frame \(\left( \mathcal {B}_i\right) _{i=1}^{2d-1}\) (induced by \(\mathcal {B}_d\), recall Fig. 4), we equip it with the following straightforward metric, where again we rely on the corresponding dual frame \(\left( \beta ^i \right) _{i=1}^{2d-1}\) given by
Definition 4
(Gauge metric tensor field) We define the gauge metric tensor field \(g \mapsto \mathcal {J} \big \vert _g \left( \cdot , \cdot \right) \) as
which induces a norm on \(\dot{g} \in T_g \left( \hbox {SE}(d)\right) \):
a gradient on \(\tilde{U} \in C^1\left( \hbox {SE}(d)\right) \):
with norm
and finally gives the divergence of a vector field as:
which means that if we apply it to a vector field expressed in the gauge frame as \(\varvec{u}=\sum _{i=1}^{2d-1} u^i \mathcal {B}_i\) we have:
2.5 Coherence Enhancement Operator
Coherence-enhancing diffusion is a well-known technique for image enhancement [54]. It is intended for line amplification rather than strictly denoising. Crossing-preserving versions on \(\mathbb {M}_d\) have been developed [23] and evaluated for denoising. In that approach, crossing lines are well enhanced, but plateaus and boundaries of line structures are damaged. Therefore we propose to include the coherence enhancement technique into TV and MC flows.
Next we explain how this coherence enhancement operator is constructed from an orientation confidence.
In \(\mathbb {R}^3\), orientation confidence is calculated by the Laplacian in the subspace orthogonal to the line structure. We can take a similar approach in \(\mathbb {M}_d\) by taking the Laplacian in the space spanned by \(\left( \mathcal {A}_i\right) _{i=1, i\ne d}^{2d-1}\). Recall that \(\mathcal {A}_d\) is implicitly aligned with the local line structure along \(\varvec{n}\). In the gauge-frame setting, \(\mathcal {B}_d\) is explicitly aligned with the line structure (see Fig. 4), and therefore we take the Laplacian in the span of \(\left( \mathcal {B}_i \right) _{i=1,i\ne d}^{2d-1}\).
In the sub-Riemannian case (i.e., \(D_1=\cdots =D_{d-1}=0\)) this just reduces to the second derivatives in the \((d-1)\)-dimensional spaces spanned by \(\left( \mathcal {A}_i\right) _{i=d+1}^{2d-1}\) and \(\left( \mathcal {B}_i\right) _{i=d+1}^{2d-1}\), respectively. With that in mind we define orientation confidence in \(\hbox {SE}(d)\) as follows.
Let \(\tilde{U}:\hbox {SE}(d) \rightarrow \mathbb {R}\), then in the left-invariant case we define
in the gauge-frame case.
Note that the \(\mathcal {B}_i\)’s are normalized with respect to the old metric (12), and as such the parameters \(D_i\) are still included in (36). In the case that \(\mathcal {B}_d\) is aligned with \(\mathcal {A}_d\), we have \(\mathcal {B}_i = \sqrt{D_i}\mathcal {A}_i\) and (35) and (36) coincide.
Definition 5
(Isotropy factor) Let \(c>0\) be a chosen scaling constant, then the isotropy factor is defined as:
with \(C_{\tilde{U}}\) defined by (35), respectively (36).
What is convenient about this quantity is that it gives a number in the range \(\left( 0,1\right] \), with a number close to zero indicating a high degree of anisotropy and a 1 indicating perfect isotropy. This is the quantity that we can use to steer the flow.
The choice of c controls how steep the decline of the isotropy factor is. Its appropriate value depends on the application and on exactly how the data are represented numerically (normalized to [0, 1] in our case) and is best determined heuristically or by histogram. For our experiments we have used \(c=0.2\).
Using this scalar function \(\alpha _{\tilde{U}}\) on the group \(\hbox {SE}(d)\), we can locally modify vectors based on how certain we are that the data are locally aligned. We refer to this modification of vector fields as coherence enhancement (as in coherence-enhancing diffusion [30]). Tangent vectors (such as the gradient, as we will see) are modified as follows. Let \(\varvec{v}\) be a vector field on \(\hbox {SE}(d)\). Then the coherence-enhanced vector field is given as
for the left-invariant geometry and as
for the gauge geometry.
Intuitively, these linear operators \(E_\mathcal {G}, E_\mathcal {J}:T(G) \rightarrow T(G)\) preserve the magnitude of the vector in the main direction and weaken it orthogonal to the main direction if we are certain the data are locally aligned to the main direction.
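In components with respect to the chosen frame, this behavior can be sketched very compactly. This is a simplified component-wise form consistent with the description above, not the exact operator definition: the main component is kept, the others are scaled by the isotropy factor \(\alpha \in (0,1]\).

```python
import numpy as np

def coherence_enhance(v, alpha, d=2):
    """Coherence enhancement of a tangent vector given by its components
    v = (v^1, ..., v^{2d-1}) w.r.t. the (left-invariant or gauge) frame:
    the component along the main direction A_d / B_d (0-based index d-1)
    is preserved, all other components are scaled by alpha in (0, 1]."""
    v = np.asarray(v, dtype=float)
    out = alpha * v
    out[d - 1] = v[d - 1]  # keep the line-aligned component unchanged
    return out
```

For \(\alpha =1\) (perfect isotropy, no confident line direction) the vector is left untouched, so the operator only acts where the data are locally aligned.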
2.6 Descending to the Homogeneous Space
So far we have developed two distinct geometries on the group \(\hbox {SE}(d)\) that are summarized in Table 1. We can bring these geometries down to the homogeneous space \(\mathbb {M}_d\) by considering the natural extension of functions and vector fields on \(\mathbb {M}_d\) to \(\hbox {SE}(d)\). Consider a function \(U \in C^{\infty }(\mathbb {M}_d)\), then the function \(\tilde{U}\), given by
for all \(g=(\varvec{x},\varvec{R}) \in \hbox {SE}(d)\), is its natural extension to \(\hbox {SE}(d)\) and is clearly also smooth. Similarly, a tangent vector field (recall that we understand these as differential operators acting on scalar functions) \(\varvec{v}\) on \(\mathbb {M}_d\) can be extended as follows:
under the additional constraint that \(\tilde{\varvec{v}}\) vanishes in the directions induced by the subgroup \(H_d\) (i.e., for all \(i \ge 2d\) we have \(\left\langle \omega ^i,\, \tilde{\varvec{v}} \right\rangle =0\)), this extension is unique.
Having extended functions and vector fields upward to the group, we can apply the tools from Table 1 to them and subsequently project the results back to the homogeneous space by the mapping \((\varvec{x},\varvec{R}) \mapsto (\varvec{x},\varvec{R}\varvec{a})\). This mapping is not injective. Nevertheless, thanks to the metrics (12) and (30) being laterally and spherically isotropic, and the way we extend functions to the group by (39), (40), all the tools we list in Table 2 are well defined on \(\mathbb {M}_d\).
Remark 10
(Choice of \(\varvec{R_n}\)) While the choice of mapping \(\varvec{n} \mapsto \varvec{R_n}\) does not matter for the final result, a choice does have to be made for an implementation when \(d \ge 3\). The most straightforward manner is selecting that \(\varvec{R_n}\) which is an in-plane rotation, meaning the plane of rotation is spanned by \(\varvec{a}\) and \(\varvec{n}\). In the two cases where this is not possible (i.e., \(\varvec{n}=\pm \varvec{a}\)) we pick \(\varvec{R}_{\varvec{a}}=\varvec{I}\) and \(\varvec{R}_{-\varvec{a}} = \varvec{R}_{\varvec{e}_y,\pi }\), where \(\varvec{R}_{\varvec{e}_y,\pi }\) denotes the rotation around the axis \(\varvec{e}_y\) by an angle \(\pi \). Concretely, the in-plane rotation in 3D is given in terms of the ZYZ-Euler angles \(\alpha ,\beta ,\gamma \) by requiring that \(\alpha =-\gamma \), which gives the mapping
for the unique \(\alpha \in [0,2\pi )\) and \(\beta \in (0,\pi )\) such that the resulting rotation maps \(\varvec{a}\) to \(\varvec{n}\).
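As an illustration of this choice, the in-plane rotation \(\varvec{R_n}\) mapping the reference axis \(\varvec{a}=\varvec{e}_z\) to \(\varvec{n}\) can equivalently be computed without Euler angles via Rodrigues' rotation formula, rotating about the axis \(\varvec{a}\times \varvec{n}\). A minimal NumPy sketch (function name and exceptional-case conventions are ours):

```python
import numpy as np

def in_plane_rotation(n, a=np.array([0.0, 0.0, 1.0])):
    """Rotation matrix R_n with R_n @ a == n, rotating in the plane
    spanned by a and n (Rodrigues' formula). Assumes n is a unit vector."""
    c = float(np.dot(a, n))               # cos(beta)
    if np.isclose(c, 1.0):                # n == a: pick the identity
        return np.eye(3)
    if np.isclose(c, -1.0):               # n == -a: rotate by pi about e_y
        return np.diag([-1.0, 1.0, -1.0])
    k = np.cross(a, n)
    k /= np.linalg.norm(k)                # unit rotation axis a x n
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])    # cross-product matrix of k
    s = np.sqrt(1.0 - c * c)              # sin(beta)
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)
```

By construction the rotation axis is perpendicular to both \(\varvec{a}\) and \(\varvec{n}\), so the plane of rotation is spanned by \(\varvec{a}\) and \(\varvec{n}\), as required.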
3 Total Variation and Mean Curvature PDE Flows on \(\mathbb {M}_d\)
3.1 PDE System
On \(\mathbb {R}^n\) the formulation of total variation is built on the identity \(\text {div}(f\varvec{v}) = f\, \text {div}(\varvec{v}) + \nabla f \cdot \varvec{v}\). Similarly on \(\mathbb {M}_d\) we have:
From the last equation we deduce the following integration by parts formula:
for all \(U\in C^1(\Omega )\) and all smooth vector fields \(\varvec{v}\) vanishing at the boundary \(\partial \Omega \). This formula allows us to build a weak formulation of TVF on \(\mathbb {M}_d\) starting from functions of bounded variation (BV) [1].
Definition 6
(Weak formulation of TVF on \(\mathbb {M}_d\)) Let \(\mathfrak {e}\ge 0\). Let \(U \in \mathbb {L}_1(\Omega )\) (recall (1)). Let \(\chi _0(\Omega )\) denote the vector space of smooth vector fields that vanish at the boundary \(\partial \Omega \) and let \(\varepsilon \ge 0\). Then we define
If \(\hbox {TV}_0 (U) < \infty \) we say that \(U \in BV(\Omega )\).
For all \(U \in BV(\Omega )\) we have
Recall Remark 7 about the sub-Riemannian setting, and recall the notion of horizontal tangent vectors (13). So (44) also covers the sub-Riemannian setting (i.e., \(\mathfrak {e}=0\)) when setting
for tangent vector \(\mathbf {v}(p)=(\dot{\mathbf {x}}(p),\dot{\mathbf {n}}(p)) \in T_{p}(\mathbb {M}_d)\) at base point \(p=(\mathbf {x},\mathbf {n}) \in \mathbb {M}_d\).
Lemma 1
Let \(\varepsilon \ge 0\). For \(U \in C^{1}(\Omega ,\mathbb {R})\) we have
Furthermore for \(U \in C^{2}(\mathbb {M}_d,\mathbb {R})\) and \(\mathfrak {e}, \varepsilon >0\) we have that
Proof
First we substitute (43) into (44), then we apply the Gauss divergence theorem and use \(\left. U\varvec{v} \right|_{\partial \Omega }=0\). Then we apply the Cauchy–Schwarz inequality on \(V_p:=\mathbb {R} \times T_p \mathbb {M}_d\) for each \(p \in \mathbb {M}_d\), with inner product
which holds with equality iff the vectors are linearly dependent. Therefore, smoothly approximating
by \((\psi ,\varvec{v})\), one obtains (46).
For \(U \in C^{2}(\Omega ,\mathbb {R})\), \(\delta \in C_{c}^{\infty }(\Omega ,\mathbb {R})\) we obtain
and the result follows. \(\square \)
For vector fields \(\varvec{v}\) on \(\mathbb {M}_d\) define the regularized norm:
This is a common way to regularize denominators, and in Sect. 4 we will prove that this approach converges as \(\varepsilon \rightarrow 0\).
Now we propose the following roto-translation equivariant enhancement PDE on \(\Omega \subset \mathbb {M}_d\), recall (1).
Definition 7
(Equivariant enhancement PDE) Given \(U \in BV(\Omega )\) and \(\varepsilon > 0\), we call a smooth \(W^\varepsilon :\mathbb {M}_d \times \mathbb {R}_{\ge 0} \rightarrow \mathbb {R}\) obeying
the gradient flow started at U with evolution time \(t \ge 0\) and parameters \(a,b \in \{0,1\}\). Here we use Neumann boundary conditions with \(\mathbf {N}(\varvec{x})\) the normal to the spatial boundary at \(\varvec{x} \in \Omega _f\).
The coherence enhancement version of this PDE is given by replacing \(\text {div}\) by \(\text {div} \circ E\) (recall (37) and (38)):
Remark 11
(Two versions of the PDE) This PDE system on the quotient \(\mathbb {M}_d\) has two versions, depending on whether one chooses the left-invariant or the gauge geometry, as outlined in Tables 1 and 2.
We then have the following cases:

For \((a,b)=(1,1)\) we arrive at mean curvature flow (MCF), studied for \(d=2\) in [13].

For \((a,b)=(0,1)\) we arrive at total variation flow, studied for \(d=2\) in [11].

For \((a,b)=(0,0)\) we arrive at a linear diffusion for which exact smooth solutions exist for both \(d=2\) and \(d=3\) [43].
Remark 12
(PDE in terms of curvature) By product rule (42) the righthand side of (49) for \(\varepsilon \downarrow 0\) becomes
with the mean curvature \(\overline{\kappa }_{I}(p,t)\) of level set
akin to [49, ch. 3.2], and with (possibly hypoelliptic) Laplacian \(\Delta =\text {div} \circ \nabla \).
Remark 13
(Lack of regularity and weak solutions) For MCF and TVF smooth solutions to PDE (49) exist only under special circumstances. This lack of regularity is an advantage in image processing to preserve step edges and plateaus in images, yet it forces us to define a concept of weak solutions. Here, we distinguish between MCF and TVF.
For MCF one relies on viscosity solution theory developed by Evans–Spruck [26], see also [32, 50] for the case of MCF with Neumann boundary conditions. In [13, Thm 3.6] existence of \(C^{1}\)viscosity solutions is shown for \(d=2\).
For TVF we will rely on gradient flow theory by Brezis–Komura [2, 9].
Remark 14
In this article we do not address convergence of our PDE solutions toward the sub-Riemannian setting \(\mathfrak {e} \downarrow 0\); we only focus on convergence results for \(\varepsilon \downarrow 0\). In previous work (by others) convergence to the sub-Riemannian setting is addressed for special cases. For the special case \((a,b)=(0,0)\), convergence of the solutions with respect to \(\mathfrak {e} \downarrow 0\) is clear from the exact solutions, see [43, ch. 2.7]. For such convergence in the challenging case \((a,b)=(1,1)\) (MCF), see Citti et al. [4, 13]. For Eikonal PDEs, convergence of viscosity solutions toward the sub-Riemannian setting holds as well, see [24, Thm. 2]. It is therefore interesting to see whether convergence results toward the sub-Riemannian setting hold in the general case, including the TVF case, but this falls outside the scope of this article. In Sect. 4 we only focus on convergence results for \(\varepsilon \downarrow 0\) with \(\mathfrak {e}>0\) fixed.
3.2 Numerics
We implemented PDE system (49) by Euler-forward time discretization, relying on standard B-spline or linear interpolation techniques for the derivatives in the underlying tools on \(\mathbb {M}_d\) given in Table 2. For details see [17, 30]. Explicit upper bounds for stable choices of step sizes can be derived by the Gershgorin circle theorem [17, 30].
For \(d=2\) the discretization is straightforward [30]; for \(d=3\) we extended the discretization of [17], as implemented in the Lie Analysis package for Mathematica developed by Martin et al. [37], to our PDEs of interest (49) on \(\mathbb {M}_3\).
The Euler-forward discretizations are not unconditionally stable. For \(a=b=0\), the Gershgorin circle theorem [17, ch. 4.2] gives the stability bound
when using linear interpolation with spatial step size h and angular step size \(h_{a}\). In our experiments, for \(d=2\) we set \(h=1\), and for \(d=3\) we took \(h_{a}=\frac{\pi }{25}\) using an almost uniform spherical sampling from a tessellated icosahedron with \(N_A=162\) points. TVF requires smaller time steps as \(\varepsilon \) decreases. Keeping in mind (50), but now applying product rule (42) to the case \(0<\varepsilon \ll 1\), we concentrate on the first term as it is of order \(\varepsilon ^{-1}\) when the gradient vanishes. We then find \(\Delta t \le \varepsilon \cdot (\Delta t)_\mathrm{{crit}}\) for TVF. For MCF we do not have this limitation.
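To make the role of \(\varepsilon \) in the time-step bound concrete, here is a toy explicit Euler scheme for the \(\varepsilon \)-regularized TV flow of a one-dimensional signal (on \(\mathbb {R}\) rather than \(\mathbb {M}_d\), with illustrative step sizes of our choosing; a sketch of the discretization idea, not our \(\mathbb {M}_d\) implementation):

```python
import numpy as np

def tvf_euler_step(u, dt, eps, h=1.0):
    """One explicit Euler step of  du/dt = d/dx( u_x / sqrt(u_x^2 + eps^2) )
    with zero-flux (Neumann) boundary conditions on a 1D grid."""
    ux = np.diff(u) / h                          # forward differences at cell faces
    flux = ux / np.sqrt(ux**2 + eps**2)          # eps-regularized unit "gradient"
    flux = np.concatenate(([0.0], flux, [0.0]))  # zero flux at the boundary
    return u + dt * np.diff(flux) / h            # divergence of the flux

def tvf(u0, t, eps, dt_crit=0.25):
    """Run TV flow up to time t; the stable step size scales like eps * dt_crit,
    since the linearized diffusivity is of order 1/eps where the gradient vanishes."""
    dt = eps * dt_crit
    u = u0.astype(float).copy()
    for _ in range(int(np.ceil(t / dt))):
        u = tvf_euler_step(u, dt, eps)
    return u
```

The zero-flux boundary makes every step mass-conserving, and the flow flattens small oscillations toward plateaus while step edges (where the flux saturates at \(\pm 1\)) move only slowly.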
While both the quantitative and qualitative results of our proposed methods are encouraging, we have to end our numerics section by mentioning the computational cost. In Table 3 we summarize the relative computational time of our methods versus spatial Perona–Malik; this summary shows our methods to be several orders of magnitude slower. In the interest of fairness we add that benchmarking our prototype Mathematica implementation against the built-in Perona–Malik implementation is not an even comparison; we are confident that an optimized native implementation would fare much better.
4 Gradient Flow and Convergence
In this section we provide a gradient flow formulation that we will use to prove the convergence of our regularization scheme for TVF. The reader who is more interested in the experimental results than the technical convergence results can safely choose to skip this section and continue reading Sect. 5.
4.1 Preliminaries
The total variation flow can be seen as a gradient flow of a lower semicontinuous, convex functional in a Hilbert space, as we explain next.
If \(F: H \rightarrow [0,\infty ]\) is a proper (i.e., not everywhere equal to infinity), lower semicontinuous, convex functional on a Hilbert space H (not to be confused with the subgroup H above; as we will not need the subgroup anymore, we stick with convention and use H for the Hilbert space from now on), the subdifferential of F at a point u in the finiteness domain of F is defined as
The subdifferential is closed and convex, and thereby has an element of minimal norm, called “the gradient of F at u” and denoted by \(\mathrm {grad} F(u)\). Let \(u_0\) be in the closure of the finiteness domain of F. By Brezis–Komura theory [9], [2, Thm 2.4.15], there is a unique locally absolutely continuous curve \(u: [0,\infty ) \rightarrow H\) such that
We call \(u:[0,\infty ) \rightarrow H\) the gradient flow of F starting at \(u_0\).
Recall the definition of \(\Omega \) in (1); then the functional \(\hbox {TV}_\varepsilon : \mathbb {L}_2(\Omega ) \rightarrow [0,\infty ]\) is lower semicontinuous and convex for every \(\varepsilon \ge 0\). This allows us to generalize solutions to PDE (49) as follows:
Definition 8
Let \(U \in \Xi := BV(\Omega ) \cap \mathbb {L}_{2}(\Omega )\). We define \(t \mapsto W^\varepsilon (\cdot , t)\) as the gradient flow of \(\hbox {TV}_\varepsilon \) starting at U.
Remark 15
(Smooth solutions) A smooth solution \(W^\varepsilon \) to (49) with \((a,b)=(0,1)\) is a gradient flow.
A functional \(\Phi : H \rightarrow (-\infty , \infty ]\) is said to be \(\lambda \)-convex for some \(\lambda \in \mathbb {R}\) if
is convex. In that case, the functional
is convex as well, for arbitrary \(v \in H\), because the latter functional deviates from the first by an affine functional.
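Explicitly, the affine deviation follows from expanding the square (our sketch):

```latex
\frac{\lambda}{2}\Vert u - v \Vert^2
  = \frac{\lambda}{2}\Vert u \Vert^2
  - \lambda \langle u, v \rangle
  + \frac{\lambda}{2}\Vert v \Vert^2,
```

so replacing \(\Vert u \Vert^2\) by \(\Vert u - v \Vert^2\) in the definition changes the functional only by \(-\lambda \langle u, v\rangle + \frac{\lambda}{2}\Vert v\Vert^2\), which is affine in \(u\) and hence does not affect convexity.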
We first prove a stability estimate for the minimization of \(1/\tau \)-convex functionals.
Lemma 2
Let \(\tau > 0\). If a functional \(\Phi : H \rightarrow (-\infty , \infty ]\) on H is \(1/\tau \)-convex, and \(u^*\) is its unique minimizer, then for all \(u \in H\),
This lemma is an extension of a standard result regarding strongly convex functionals (see, e.g., [41, Thm. 2.1.7]) but with no assumptions on differentiability. We include the proof in “Appendix B.”
For a proper (i.e., not everywhere equal to \(\infty \)), lower semicontinuous, convex functional F, and \(\tau > 0\), define the so-called proximal operator [48] \(J_\tau ^F: H \rightarrow H\) by
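For intuition, the proximal operator of the scalar functional \(F(u)=|u|\) on \(H=\mathbb {R}\) is the classical soft-thresholding map, which can be checked against a brute-force minimization of the proximal objective (a toy sketch of ours, not the proximal operator of \(\hbox {TV}_\varepsilon \); function names are ours):

```python
import numpy as np

def prox_abs(u, tau):
    """J_tau^F(u) = argmin_w ( |w| + (w - u)**2 / (2 * tau) )
    for F(w) = |w|: soft-thresholding by tau."""
    return np.sign(u) * np.maximum(np.abs(u) - tau, 0.0)

def prox_numeric(u, tau):
    """Brute-force minimizer of the same proximal objective on a fine grid."""
    grid = np.linspace(-10.0, 10.0, 200001)
    obj = np.abs(grid) + (grid - u) ** 2 / (2.0 * tau)
    return grid[np.argmin(obj)]
```

For instance, `prox_abs(3.0, 1.0)` returns 2.0, agreeing with the brute-force minimizer up to grid resolution.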
Proposition 1
Let \(\delta >0\) and let \(F, G: H \rightarrow [0,\infty ]\) be two nonnegative, proper, lower semicontinuous, convex functionals on a Hilbert space H, such that for all \(u \in H\),
Let \(u_0, v_0 \in H\), such that
Then, we have the following estimate for the gradient flow \(u:[0,\infty ) \rightarrow H\) of F starting at \(u_0\) and the gradient flow \(v:[0,\infty ) \rightarrow H\) of G starting at \(v_0\):
We provide the proof in “Appendix C.” The idea is that the stability estimate in Lemma 2 will allow us to conclude that \(J_\tau ^F[u_0]\) and \(J_\tau ^G[v_0]\) are close when \(u_0\) and \(v_0\) are close. By iterating the operators \(J_\tau ^F\) and \(J_\tau ^G\), we approximate the gradient flows of F and G, respectively, and from slope estimate (51) we will derive that this approximation is uniform. This will allow us to derive bounds for the gradient flows from the bounds for \(J_\tau ^F\) and \(J_\tau ^G\).
We now know that the gradient flows of F and G are close when the slopes \(\partial F(u_0)\) and \(\partial G(v_0)\) are bounded. This assumption can be rather stringent. We will relax it and merely require that \(F(u_0)\) and \(G(v_0)\) are bounded by some constant \(E > 0\), in exchange for a bound between gradient flows that is slightly worse. Our approach will be to run the gradient flow for a small time s from \(u_0\) and \(v_0\) and use the regularizing property of the gradient flow to conclude a slope bound. On the other hand, if s is small, u(s) and v(s) will be close to \(u_0\) and \(v_0\). We will then choose s (almost) optimally to derive a bound between the gradient flows.
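The iteration of proximal operators mentioned above (the minimizing-movement scheme) can be checked on a case with a known gradient flow: for \(F(u)=\frac{1}{2}u^2\) on \(H=\mathbb {R}\) one has \(J_\tau ^F(u)=u/(1+\tau )\), while the gradient flow is \(u(t)=e^{-t}u_0\). A toy sketch (of ours, purely illustrative):

```python
def prox_quadratic(u, tau):
    """J_tau^F for F(u) = u**2 / 2: minimize  w**2/2 + (w - u)**2 / (2 * tau),
    which gives w = u / (1 + tau)."""
    return u / (1.0 + tau)

def minimizing_movements(u0, t, n):
    """Approximate the gradient flow at time t by n proximal steps of size t/n."""
    tau = t / n
    u = u0
    for _ in range(n):
        u = prox_quadratic(u, tau)
    return u
```

Since \((1+t/n)^{-n} \rightarrow e^{-t}\), the iterates converge to the exact flow as \(n \rightarrow \infty \), illustrating why iterating \(J_\tau ^F\) approximates the gradient flow of F.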
4.2 Strong \(\mathbb {L}_2\)-Convergence of TV Flows
We prove the convergence, stability, and accuracy of TV flows by considering them as the gradient flows of the family of functionals \(\hbox {TV}_{\varepsilon }\). The theory of contraction semigroups [2, Ch. 4] will allow us to show that as \(\varepsilon \rightarrow 0\) the gradient flow of \(\hbox {TV}_{\varepsilon }\) converges to the gradient flow of \(\hbox {TV}_0\) in the \(\mathbb {L}_2\) sense.
Just like Lemma 1 and Proposition 1, the following theorem was already announced in [25] but likewise lacked a proof, which we now include.
Theorem 1
Let \(F:H \rightarrow [0,\infty ]\) and \(G:H \rightarrow [0,\infty ]\) be two proper, lower semicontinuous, convex functionals on a Hilbert space H, such that
for all \(u \in H\). Let \(u_0, v_0 \in H\) be such that \(F(u_0) \le E\) and \(G(v_0) \le E\) and \(\Vert u_0  u^*\Vert \le M\) and \(\Vert v_0  v^*\Vert \le M\), for some constants \(E, M > 0\), where \(u^*\) and \(v^*\) minimize F and G, respectively.
Then the gradient flow \(u:[0,\infty ) \rightarrow H\) of F starting at \(u_0\) and the gradient flow \(v:[0,\infty ) \rightarrow H\) of G starting at \(v_0\) satisfy
for all \(0 \le t < E^6 M^6/\delta ^9\).
Proof
By the evolution variational inequality [2, Theorem 4.0.4, (iii)], we know that for all \(s > 0\)
and
By the regularizing property [2, Theorem 4.0.4, (ii)],
and
where \(u^*\) minimizes F and \(v^*\) minimizes G.
Because the gradient flow is a nonexpansive semigroup [2, Theorem 4.0.4, (iv)], we obtain
Now assume \(t < E^6 M^6/\delta ^9\). We will want to choose s (almost) optimally, depending on t. We choose
and note that with \(L := M/s\), we have
By the slope estimates (54) we can apply Proposition 1 to the gradient flows starting at u(s) and v(s), to obtain
\(\square \)
If, for the general result of Theorem 1, we take \(F=\hbox {TV}_0\), \(G=\hbox {TV}_\varepsilon \) and \(\delta =\varepsilon \left| \Omega \right| \) we obtain the following result.
Corollary 1
(Strong \(\mathbb {L}_2\)-convergence, stability and accuracy of TV flows) Let \(U \in \mathbb {L}_{2}(\Omega )\), let \(W^{\varepsilon }\) be the gradient flow of \(\hbox {TV}_{\varepsilon }\) starting at U, and let \(\varepsilon ,\mathfrak {e} \ge 0\). Let \(t\ge 0\). Let \(\delta =\varepsilon \left| \Omega \right| \). Then
More precisely, for all \(U \in BV(\Omega )\), \(E \ge \hbox {TV}_0(U) + \delta \), \(M \ge \Vert U\Vert \) and \(0 \le t < E^6 M^6 / \delta ^9\) we have that
5 Experiments
In our experiments, we aim to enhance contours and fiber trajectories in medical images and to remove noise. Lifting the image \(f:\mathbb {R}^{d} \rightarrow \mathbb {R}\) toward its orientation lift \(U:\mathbb {M}_d \rightarrow \mathbb {R}\) defined on the space of positions and orientations \(\mathbb {M}_d=\mathbb {R}^{d} \rtimes S^{d-1}\) preserves crossings [30] and avoids leakage of wavefronts [24].
For our experiments for \(d=3\) the initial condition \(U: \mathbb {M}_3 \rightarrow \mathbb {R}^+\) is a fiber orientation density function (FODF) obtained from DW-MRI data [44].
For our experiments for \(d=2\) the initial condition U is an invertible orientation score (OS) that we sampled on 8 equidistant orientations.
For both \(d=2\) (Sect. 5.1) and \(d=3\) (Sect. 5.4), we show advantages of TVF and MCF over crossing-preserving diffusion flows [17, 30] on \(\mathbb {M}_d\).
Finally, we include denoising experiments with qualitative and quantitative results, where comparison with the well-known denoising technique BM3D [18] shows the advantages of our methods.
5.1 Image Enhancement/Denoising
In accordance with the workflow in Fig. 1 we go through the following steps:
for \(t\ge 0\). With respect to the final step, we recall that we use cake wavelets that allow for sharp approximate reconstruction by integration over angles only. Here \(U \mapsto W(\cdot ,t)=\Phi _t(U)\) denotes the flow operator (49) on \(\mathbb {M}_2\). Hence the initial condition for our TVF/MCF PDE (49) is set by an orientation score of the image \(f:\mathbb {R}^2 \rightarrow \mathbb {R}\) given by (2).
By the invertibility of the orientation score one has \(f=f_0^a\), so all flows depart from the original image.
We refer to the different methods we experimented with by the following terms.

MCF: we set \((a,b)=(1,1)\).

TVF: we set \((a,b)=(0,1)\).

Left invariant: we use the left-invariant geometry per the first column of Table 1.

Gauge: we use the locally adaptive frames geometry per the second column of Table 1.

Isotropic: we set \(\mathfrak {e}=1\).

Anisotropic: we set \(\mathfrak {e}=0.25\).

With coherence enhancement: we use the PDE with the E operator per (49).
Thus, for example, “Isotropic Gauge TVF with coherence enhancement” equates to setting \((a,b)=(0,1)\) and \(\mathfrak {e}=1\), including the E operator in the PDE, and using the second column of Table 1 to define our geometric objects.
For quantitative comparison we will look at relative \(\mathbb {L}_1\) and \(\mathbb {L}_2\) errors: if we have a (clean) source image \(f_{\mathrm{source}}\) and a denoised image \(f_t\) that has been processed up to time t, we calculate the relative error as:
with the corresponding \(\mathbb {L}_1\) or \(\mathbb {L}_2\) norm.
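This error measure amounts to the following (a minimal NumPy sketch of ours; `ord=1` or `ord=2` selects the \(\mathbb {L}_1\) or \(\mathbb {L}_2\) norm):

```python
import numpy as np

def relative_error(f_t, f_source, ord=2):
    """Relative L1/L2 error  ||f_t - f_source|| / ||f_source||  of a
    denoised image f_t against the clean source, with images flattened."""
    diff = (f_t - f_source).ravel()
    return np.linalg.norm(diff, ord) / np.linalg.norm(f_source.ravel(), ord)
```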
We will test against two types of noise: Gaussian and correlated.
5.2 Gaussian Noise
We apply Gaussian noise with standard deviation 0.2 to our normalized (to [0, 1]) source image; the original and noisy images are shown in Fig. 9a and b, respectively.
In Fig. 6 we show how the errors progress with \(t\ge 0\) for the isotropic (\(\mathfrak {e}=1\)) case without coherence enhancement (i.e., without E). For comparison we plot the same error with spatial Perona–Malik. While Perona–Malik is clearly more stable and resilient to oversmoothing, both our proposed methods have much smaller minimal errors.
Remark 16
(Interpretation of timescales) The different methods work on different timescales; we scale these to be able to plot the results together, but no meaning should be attributed to one method attaining its minimum earlier than another. The error graphs just show:

how large the minimal error is and

how fast the image deteriorates after this minimum has been reached.
In our next experiment we increase anisotropy by setting \(\mathfrak {e}=0.25\); the resulting errors are plotted in Fig. 7. We gain no improvement in minimal error while requiring more computational cycles to reach the minimum, from which we conclude that for this application isotropic processing is preferable.
In Fig. 8 we show the errors for the isotropic setup with coherence enhancement included (for \(c=0.2\)). We get a very minor improvement in minimal errors and a decent improvement in oversmoothing stability, although still not on the level of Perona–Malik. Remarkably, with coherence enhancement included, the data-adaptive geometry is less stable than the left-invariant geometry; we observe that combining two different methods of adapting to the data is counterproductive in this instance.
For a qualitative comparison of the different isotropic methods with coherence enhancement, we oversmooth the collagen image past the time of its lowest \(\mathbb {L}_2\) error by a factor of two; the corresponding qualitative results are shown in Fig. 9.
5.3 Correlated Noise
For correlated noise we apply a Gaussian filter with \(\sigma =1.0\) to Gaussian noise with \(\sigma =0.2\). The error evolution for the isotropic methods is plotted in Fig. 10. We observe that MCF performs worse in this setting: against correlated noise it does not match spatial Perona–Malik in either minimal error or stability. TVF, on the other hand, has a better minimal error than Perona–Malik at the cost of stability. The stability improves somewhat if we turn on the locally adaptive frames.
The error evolution of the experiment including the use of coherence enhancement is displayed in Fig. 11. Overall this improves the results, but MCF still performs worst in this setting. TVF sees an improvement in both minimal error and stability. As with the Gaussian-noise experiment, we see that turning on both coherence enhancement and locally adaptive frames is counterproductive.
A qualitative comparison of the methods against correlated noise is shown in Fig. 12, where again we smooth for twice the optimal time. We observe the same general trend as in Fig. 9: all methods do a good job of preserving edges, but TVF stands out in clearing the plateaus.
As a final method to compare against we look at BM3D [18]. Both BM3D and our methods share a dependence on some prior knowledge for optimal performance: BM3D requires us to know the standard deviation of the noise, and our method requires us to know the optimal processing time.
We make a qualitative comparison of removing correlated noise between BM3D and left-invariant TVF by smoothing \(1.5 \times \) past the optimal \(\mathbb {L}_2\) error: \(1.5 \times \) the optimal standard deviation in the case of BM3D and \(1.5\times \) the processing time in the case of TVF. The resulting images are shown in Fig. 13.
For a broader comparison, we compute peak signaltonoise ratios for the collagen image we already saw and for two additional images of different styles shown in Fig. 14. Results are summarized in Table 4.
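For reference, the peak signal-to-noise ratio of a denoised image against the clean source on a [0, 1]-normalized image is computed as follows (the standard definition; this sketch is ours):

```python
import numpy as np

def psnr(f_denoised, f_source, peak=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak**2 / MSE),
    where MSE is the mean squared error between the two images."""
    mse = np.mean((f_denoised - f_source) ** 2)
    return 10.0 * np.log10(peak**2 / mse)
```

A higher PSNR corresponds to a smaller mean squared deviation from the clean source.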
As the gauge TVF method compares favorably against correlated noise according to the PSNR value in Table 4, we will look at its qualitative result and compare it against BM3D in Fig. 15.
5.4 Denoising and Fiber Enhancement on FODFs in DW-MRI
In DW-MRI image processing one obtains a field of angular diffusivity profiles (orientation density functions) of water molecules. A high diffusivity in a particular orientation correlates with biological fiber structure in brain white matter along that same direction. Crossing-preserving enhancement of FODF fields \(U:\mathbb {M}_3 \rightarrow \mathbb {R}^+\) helps to better identify structural pathways in brain white matter, which is relevant for surgery planning; see for example [38, 44].
For a quantitative comparison we applied TVF, MCF, and PM diffusion [17] to denoise a popular synthetic FODF \(U: \mathbb {M}_3 \rightarrow \mathbb {R}^+\) from the Fiberfox Tractometer challenge with realistic noise profiles [40]. In Fig. 16, we can observe the many crossing fibers in the dataset. Furthermore, we depict the absolute \(\mathbb {L}_{2}\)-error \(t \mapsto \Vert U - \Phi _t(U)\Vert _{\mathbb {L}_{2}(\mathbb {M}_3)}\) as a function of the evolution parameter t, where \(\Phi _t(U)=W_{\varepsilon }(\cdot ,t)\) with optimized \(\varepsilon =0.02\) in the case of TVF (in green) and MCF (in blue), and where \(\Phi _t\) is the PM diffusion evolution [17] on \(\mathbb {M}_3\) with optimized PM parameter \(K=0.2\) (in red). We also depict results for \(K=0.1, 0.4\) and \(\varepsilon =0.01, 0.04\) (with the dashed lines). We see that the other parameter settings give on average worse results, justifying our optimized settings. We set \(D_{S}=1.0\), \(D_A=0.001\), \(\Delta t=0.01\). We observe that:

TVF can reach lower error values than MCF with adequate \(\Delta t=0.01\),

MCF provides more stable errors for all \(t>0\) than TVF with respect to \(\varepsilon >0\),

TVF and MCF produce lower error values than PM diffusion,

PM diffusion provides the most variable results for all \(t>0\).
For a qualitative comparison we applied TVF, MCF, PM diffusion, and linear diffusion to a FODF field \(U: \mathbb {M}_3 \rightarrow \mathbb {R}^+\) obtained from a standard DW-MRI dataset (with b = 1000 s/mm\(^2\), 54 gradient directions) via constrained spherical deconvolution (CSD) [19, 52]. See Fig. 17, where for each method we used the parameter settings that were optimal on the artificial dataset. We see that

all methods perform well on the real datasets: contextual alignment of the angular profiles better reflects the anatomical fiber bundles,

MCF and TVF better preserve boundaries and angular sharpness,

MCF better preserves the amplitude at crossings at longer times.
6 Conclusion
We have proposed a PDE system on the homogeneous space \(\mathbb {M}_d=\mathbb {R}^{d} \rtimes S^{d-1}\) of positions and orientations, for crossing-preserving denoising and enhancement of (lifted) images containing both complex elongated structures and plateaus.
It includes TVF, MCF, and diffusion flows as special cases and incorporates (sub-)Riemannian geometry. Thereby we generalized recent related works by Citti et al. [13] and Chambolle and Pock [11] from 2D to 3D, using a different numerical scheme with new convergence results (Theorem 1) and stability bounds. We used the divergence and intrinsic gradient on a (sub-)Riemannian manifold above \(\mathbb {M}_d\) for a formal weak formulation of total variation flows, which simplifies if the lifted images are differentiable (Lemma 1).
For 2D image denoising and enhancement we have shown that in all cases TVF on \(\mathbb {M}_2\) has a better minimal error than Perona–Malik and MCF, at the cost of being more sensitive to oversmoothing; recall Figs. 6, 7, 8, 9, 10, 11, and 12. The \(\mathbb {L}_1\), \(\mathbb {L}_2\) and PSNR measures indicate the potential of our proposed methods for denoising, and on some images we improve upon the PSNR results of methods such as BM3D for correlated noise; recall Figs. 13 and 15. Qualitatively this is mainly reflected in better clearing of plateaus while still preserving hard edges and crossings.
In 3D we compared to previous nonlinear crossing-preserving diffusion methods on \(\mathbb {M}_3\): we showed improvements over Perona–Malik and over contextual fiber enhancement methods in DW-MRI processing [17, 21] on real medical image data. We observe that crossings and boundaries (of bundles and plateaus) are better preserved over time. We support this quantitatively by a denoising experiment on a benchmark DW-MRI dataset, where MCF performs better than TVF and both perform better than Perona–Malik diffusion, in view of error reduction and stability.
Altogether, we conclude that our TVF and MCF methods on \(\mathbb {M}_d\) work well for denoising and enhancement for both \(d=2\) and \(d=3\). In general we see clear benefits of the inclusion of locally adaptive frames and of limited inclusion of coherence enhancement. The code from our experiments is available as a Mathematica notebook at https://bmnsmets.com/files/tvf_mcf_denoising_jmiv.nb.
Future work While we have shown the potential of our PDE system on \(\mathbb {M}_d\) as a denoising/enhancement method, some challenges remain for future work:

Determining stopping time: our methods show good minimal errors but are prone to degrading the image if left running for too long. For general applications, a robust automatic stopping method would be helpful; spectral analysis of nonlinear operators [10, 16] may apply here.

Coherence enhancement [54] was not originally conceived for denoising. It is therefore interesting to see how edge-enhancing diffusion (EED) [27] performs when generalized to \(\mathbb {M}_d\); i.e., we would reformulate our enhancement operator E as:
$$\begin{aligned} E := \int _{S^d} \varvec{c} \otimes \varvec{c} \ e^{-\frac{ \left| \nabla U \cdot \varvec{c} \right|^2 }{2 \kappa ^2}} \hbox {d}\mu (\varvec{c}), \end{aligned}$$and test its performance.

In this article we obtained convergence results of our PDE solutions for \(\varepsilon \downarrow 0\) while keeping \(\mathfrak {e}>0\) fixed. It is interesting to study the full limiting case \((\varepsilon , \mathfrak {e}) \rightarrow (0,0)\) in the general setting covering total variation flow.
References
Ambrosio, L., Ghezzi, R., Magnani, V.: BV functions and sets of finite perimeter in sub-Riemannian manifolds. Ann. Inst. Henri Poincaré Non Linear Anal. 32(3), 489–517 (2015)
Ambrosio, L., Gigli, N., Savaré, G.: Gradient Flows in Metric Spaces and in the Space of Probability Measures. Birkhäuser, Basel (2005)
Baspinar, E., Citti, G., Sarti, A.: A geometric model of multiscale orientation preference maps via Gabor functions. JMIV 60(6), 900–912 (2018)
Baspinar, E.: Minimal Surfaces in Sub-Riemannian Structures and Functional Geometry of the Visual Cortex. Ph.D. thesis, University of Bologna (2018)
Bekkers, E.: Retinal Image Analysis using Sub-Riemannian Geometry in \(SE(2)\). Ph.D. thesis, TU/e Eindhoven (2017)
Bekkers, E., Duits, R., Mashatkov, A., Sanguinetti, G.: A PDE approach to data-driven sub-Riemannian geodesics in \(SE(2)\). SIIMS 8(4), 2740–2770 (2015)
Bertalmío, M., Calatroni, L., Franceschi, V., Franceschiello, B., Prandi, D.: A cortical-inspired model for orientation-dependent contrast perception: a link with Wilson–Cowan equations. In: International Conference on Scale Space and Variational Methods in Computer Vision, pp. 472–484. Springer, Cham (2019)
Boscain, U., Chertovskih, R., Gauthier, J.P., Prandi, D., Remizov, A.: Highly corrupted image inpainting by hypoelliptic diffusion. JMIV 60(8), 1231–1245 (2018)
Brézis, H.: Opérateurs maximaux monotones et semi-groupes de contractions dans les espaces de Hilbert, vol. 50. North-Holland Publishing Co., Amsterdam (1973)
Bungert, L., Burger, M., Tenbrinck, D.: Computing nonlinear eigenfunctions via gradient flow extinction. In: International Conference on Scale Space and Variational Methods in Computer Vision, pp. 291–302. Springer, Cham (2019)
Chambolle, A., Pock, T.: Total roto-translation variation. Numer. Math. 142, 611–666 (2019)
Chirikjian, G.S., Kyatkin, A.B., Buckingham, A.C.: Engineering applications of noncommutative harmonic analysis: with emphasis on rotation and motion groups. Appl. Mech. Rev. 54(6), B97–B98 (2001)
Citti, G., Franceschiello, B., Sanguinetti, G., Sarti, A.: SubRiemannian mean curvature flow for image processing. SIIMS 9(1), 212–237 (2016)
Citti, G., Sarti, A.: A cortical based model of perceptual completion in the roto-translation space. JMIV 24(3), 307–326 (2006)
Cohen, E., Deffieux, T., Demené, C., Cohen, L., Tanter, M.: 3D vessel extraction in the rat brain from ultrasensitive Doppler images. In: Computer Methods in Biomechanics and Biomedical Engineering. LNB, pp. 81–91 (2018)
Cohen, I., Falik, A., Gilboa, G.: Stable explicit pLaplacian flows based on nonlinear eigenvalue analysis. In: International Conference on Scale Space and Variational Methods in Computer Vision, pp. 315–327. Springer, Cham (2019)
Creusen, E.J., Duits, R., Florack, L., Vilanova, A.: Numerical schemes for linear and nonlinear enhancement of DW-MRI. NMTMA 6(3), 138–168 (2013)
Dabov, K., Foi, A., Katkovnik, V., Egiazarian, K.: Image denoising by sparse 3D transformdomain collaborative filtering. IEEE Trans. Image Process. 16(8), 2080–2095 (2007)
Descoteaux, M., Deriche, R., Knosche, T.R., Anwander, A.: Deterministic and probabilistic tractography based on complex fibre orientation distributions. IEEE Trans. Med. Imaging 28(2), 269–286 (2008)
Duits, R.: Perceptual organization in image analysis. Ph.D. thesis, TU/e (2005)
Duits, R., Creusen, E., Ghosh, A., Dela Haije, T.: Morphological and linear scale spaces for fiber enhancement in DW-MRI. JMIV 46(3), 326–368 (2013)
Duits, R., Franken, E.M.: Left-invariant parabolic evolution equations on \({SE}(2)\) and contour enhancement via invertible orientation scores, part I: linear left-invariant diffusion equations on \({SE}(2)\). QAM-AMS 68, 255–292 (2010)
Duits, R., Janssen, M., Hannink, J., Sanguinetti, G.: Locally adaptive frames in the roto-translation group and their applications in medical image processing. JMIV 56(3), 367–402 (2016)
Duits, R., Meesters, S., Mirebeau, J., Portegies, J.: Optimal paths for variants of the 2D and 3D Reeds-Shepp car with applications in image analysis. JMIV 60, 816–848 (2018)
Duits, R., StOnge, E., Portegies, J., Smets, B.: Total variation and mean curvature PDEs on the space of positions and orientations. In: International Conference on Scale Space and Variational Methods in Computer Vision, pp. 211–223 (2019)
Evans, L.C., Spruck, J.: Motion of level sets by mean curvature. J. Differ. Geom. 33(3), 635–681 (1991)
Fabbrini, L., et al.: Improved edge-enhancing diffusion filter for speckle-corrupted images. IEEE Geosci. Remote Sens. Lett. 11(1), 99–103 (2013)
Felsberg, M., Forssen, P.E., Scharr, H.: Channel smoothing: efficient robust smoothing of low-level signal features. In: IEEE PAMI, pp. 209–222 (2006)
Franken, E.: Enhancement of crossing elongated structures in images. Ph.D. thesis, Technical University Eindhoven (2008)
Franken, E.M., Duits, R.: Crossing-preserving coherence-enhancing diffusion on invertible orientation scores. IJCV 85(3), 253–278 (2009)
Ghimpeţeanu, G., Batard, T., Bertalmío, M., Levine, S.: A decomposition framework for image denoising algorithms. IEEE Trans. Image Process. 25(1), 388–399 (2015)
Giga, Y., Sato, M.H.: Generalized interface evolution with the Neumann boundary condition. Proc. Jpn. Acad. Ser. A Math. Sci. 67(8), 263–266 (1991)
Janssen, M.H.J., Janssen, A.J.E.M., Bekkers, E.J., Bescós, J.O., Duits, R.: Processing of invertible orientation scores of 3D images. JMIV 60(9), 1427–1458 (2018)
Janssen, M.H.J., Dela Haije, T.C.J., Martin, F.C., Bekkers, E.J., Duits, R.: The Hessian of axially symmetric functions on \(SE(3)\) and application in 3D image analysis. In: LNCS (Proceedings of SSVM), pp. 643–655 (2017)
Lebrun, M.: An analysis and implementation of the BM3D image denoising method. Image Process. On Line 2, 175–213 (2012)
Lee, J.M.: Smooth manifolds. In: Introduction to Smooth Manifolds, pp. 1–31. Springer, New York (2013)
Martin, F., Bekkers, E., Duits, R.: Lie analysis package. www.lieanalysis.nl/ (2017). Accessed 5 Oct 2019
Meesters, S., et al.: Stability metrics for optic radiation tractography: towards damage prediction after resective surgery. J. Neurosci. Methods 288, 34–44 (2017)
Momayyez-Siahkal, P., Siddiqi, K.: 3D stochastic completion fields for mapping connectivity in diffusion MRI. IEEE Trans. Pattern Anal. Mach. Intell. 35(4), 983–995 (2012)
Neher, P.F., Laun, F.B., Stieltjes, B., Maier-Hein, K.H.: Fiberfox: facilitating the creation of realistic white matter software phantoms. Magn. Reson. Med. 72(5), 1460–1470 (2014)
Nesterov, Y.: Introductory Lectures on Convex Optimization: A Basic Course. Springer, Berlin (2004)
Perona, P., Malik, J.: Scalespace and edge detection using anisotropic diffusion. IEEE Trans. Pattern Anal. Mach. Intell. 12(7), 629–639 (1990)
Portegies, J.M., Duits, R.: New exact and numerical solutions of the (convection) diffusion kernels on SE(3). DGA 53, 182–219 (2017)
Portegies, J.M., Fick, R., Sanguinetti, G.R., Meesters, S.P.L., Girard, G., Duits, R.: Improving fiber alignment in HARDI by combining contextual PDE flow with constrained spherical deconvolution. PLoS ONE 10(10), e0138122 (2015)
Portegies, J.: PDEs on the Lie Group SE(3) and their applications in diffusion-weighted MRI. Ph.D. thesis, Department of Mathematics, TU/e (2018)
Reisert, M., Burkhardt, H.: Efficient tensor voting with 3D tensorial harmonics. In: IEEE Conference, CVPRW ’08, pp. 1–7 (2008)
Reisert, M., Kiselev, V.G.: Fiber continuity: an anisotropic prior for ODF estimation. IEEE Trans. Med. Imaging 30(6), 1274–1283 (2011)
Rockafellar, R.T., Wets, R.J.B.: Variational Analysis, vol. 317. Springer, Berlin (2009)
Sapiro, G.: Geometric Partial Differential Equations and Image Analysis. Cambridge University Press, Cambridge (2006)
Sato, M.H.: Interface evolution with Neumann boundary condition. Adv. Math. Sci. Appl. 4(1), 249–264 (1994)
Schmidt, M., Weickert, J.: Morphological counterparts of linear shift-invariant scale-spaces. J. Math. Imaging Vis. 56(2), 352–366 (2016)
Tournier, J.D., Calamante, F., Connelly, A.: Robust determination of the fibre orientation distribution in diffusion MRI: non-negativity constrained super-resolved spherical deconvolution. Neuroimage 35(4), 1459–1472 (2007)
Vogt, T., Lellmann, J.: Measure-valued variational models with applications to diffusion-weighted imaging. JMIV 60(9), 1482–1502 (2018)
Weickert, J.A.: Coherenceenhancing diffusion filtering. Int. J. Comput. Vis. 31(2/3), 111–127 (1999)
Appendices
Appendix A Left-Invariant Vector Field Basis for \(\hbox {SE}(3)\)
Two charts are required to cover \(\hbox {SO}(3)\). When using ZYZ Euler angles:
where \(\beta \in \left( 0, \pi \right) \) and \(\alpha ,\gamma \in \left[ 0, 2\pi \right) \), the left-invariant vector field basis is given by:
Recall that in this article we take the convention that \(\mathcal {A}_{d+i}\) is the in-plane rotation from axis \(\mathbf {a}=(0,0,1)\) to \(\mathcal {A}_i\) for \(i=1,\ldots ,d-1\). Thereby, \(\mathcal {A}_{5}\) is the counterclockwise rotation about axis \(\mathcal {A}_1\), whereas \(\mathcal {A}_{4}\) is the counterclockwise rotation about axis \(\mathcal {A}_2\). Recall Fig. 3.
The above set of expressions is not valid for \(\beta = 0\) or \(\beta =\pi \); in that case, we can switch to XYZ Euler angles:
with \(\alpha ' \in \left[ 0,2\pi \right) \), \(\beta ' \in \left[ -\pi ,\pi \right) \) and \(\gamma ' \in \left( -\pi /2, \pi /2 \right) \). The basis vector fields are then given by:
for \(\left| \beta ' \right| \ne \pi /2\).
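As a numerical aside (our own sketch, not part of the article): under the common convention \(R = R_{z}(\alpha )\,R_{y}(\beta )\,R_{z}(\gamma )\), one can verify that the ZYZ chart produces proper rotations and degenerates at \(\beta = 0\) and \(\beta = \pi \), where only the combination \(\alpha + \gamma \) (respectively the difference \(\alpha - \gamma \)) is determined. This is exactly why the second chart above is needed. The function names below are ours.

```python
import numpy as np

def Rz(a):
    # Counterclockwise rotation about the z-axis by angle a.
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(b):
    # Counterclockwise rotation about the y-axis by angle b.
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def euler_zyz(alpha, beta, gamma):
    # ZYZ chart: R = Rz(alpha) @ Ry(beta) @ Rz(gamma).
    return Rz(alpha) @ Ry(beta) @ Rz(gamma)

R = euler_zyz(0.3, 1.2, 2.5)
assert np.allclose(R @ R.T, np.eye(3))      # R is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)    # and a proper rotation

# Chart degeneracy at beta = 0: only alpha + gamma is determined,
# so (alpha, 0, gamma) fails to be a coordinate chart there.
assert np.allclose(euler_zyz(0.4, 0.0, 0.6), euler_zyz(0.7, 0.0, 0.3))
```

The same check with \(\beta \) close to, but away from, \(0\) and \(\pi \) shows the chart is well defined on the interior of its domain.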
Appendix B Proof of Lemma 2
We recall the lemma and give a proof.
Lemma 2 Let \(\tau > 0\). If a functional \(\Phi : H \rightarrow (-\infty , \infty ]\) on \(H\) is \(1/\tau \)-convex, and \(u^*\) is its unique minimizer, then for all \(u \in H\),
\(\Phi (u) \ge \Phi (u^*) + \frac{1}{2\tau } \Vert u - u^* \Vert ^2.\)
Proof
The functional \(\Psi :H \rightarrow (-\infty , \infty ]\) given by
\(\Psi (u) := \Phi (u) - \Phi (u^*) - \frac{1}{2\tau } \Vert u - u^* \Vert ^2\)
is convex. It is sufficient to show that \(\Psi \) is nonnegative. If it were not, there would exist a \(v \in H\) such that \(\Psi (v) < 0\). We will show that then, for \(t\) small enough, \(\Phi (t v + (1-t) u^*) < \Phi (u^*)\), contradicting that \(u^*\) is a minimizer. We first have by definition that, for \(t \in (0,1)\),
By the convexity of \(\Psi \),
Combining the two inequalities, we find
so that indeed, for \(t\) small enough, \(\Phi (tv + (1-t)u^*) < \Phi (u^*)\), leading to the announced contradiction.
Therefore, \(\Psi \) is nonnegative, which means that \(\Phi (u) \ge \Phi (u^*) + \frac{1}{2\tau } \Vert u - u^* \Vert ^2\)
for all \(u \in H\). \(\square \)
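As a quick numerical sanity check of the lemma (our own illustration, not part of the article): for a \(1/\tau \)-convex \(\Phi \) with minimizer \(u^*\), the value \(\Phi (u)\) must dominate \(\Phi (u^*) + \frac{1}{2\tau }\Vert u - u^*\Vert ^2\). The functional below, the sum of the \(1/\tau \)-convex quadratic and a convex \(\ell ^1\) term on \(\mathbb {R}^2\), is a hand-picked example with \(u^* = 0\).

```python
import numpy as np

tau = 0.5

def phi(u):
    # Phi(u) = (1/(2*tau)) * ||u||^2 + ||u||_1 is (1/tau)-convex:
    # it is the (1/tau)-convex quadratic plus a convex term,
    # and its unique minimizer is u* = 0.
    return np.dot(u, u) / (2 * tau) + np.abs(u).sum()

u_star = np.zeros(2)
rng = np.random.default_rng(0)
for _ in range(1000):
    u = rng.normal(size=2)
    # Lemma 2: Phi(u) >= Phi(u*) + (1/(2*tau)) * ||u - u*||^2
    lower_bound = phi(u_star) + np.dot(u - u_star, u - u_star) / (2 * tau)
    assert phi(u) >= lower_bound - 1e-12
```

For the pure quadratic (dropping the \(\ell ^1\) term) the bound holds with equality, which shows the constant \(1/(2\tau )\) cannot be improved.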
Appendix C Proof of Proposition 1
We recall the proposition and give a proof.
Proposition 1 Let \(F, G: H \rightarrow [0,\infty ]\) be two nonnegative, proper, lower semicontinuous, convex functionals on a Hilbert space H, such that for all \(u \in H\),
Let \(u_0, v_0 \in H\), such that
Then, we have the following estimate for the gradient flow \(u:[0,\infty ) \rightarrow H\) of F starting at \(u_0\) and the gradient flow \(v:[0,\infty ) \rightarrow H\) of G starting at \(v_0\):
Proof
Let \(\tau >0\) and let \(u_1^F := J^F_\tau [u_0]\) and \(v_1^G := J^G_\tau [v_0]\). Set also \(v_1^F:=J^F_\tau [v_0]\) and \(u_1^G := J^G_\tau [u_0]\). Then, using the definition of \(v_1^F\) in the second inequality below, we find
Because the functional
is \(1/\tau \)-convex, it follows by Lemma 2 that
Now we use that \(J_\tau ^F\) is nonexpansive [2, Eq. (4.0.2)], so
We conclude that
By iterating this estimate, we derive
The a priori estimate [2, Theorem 4.0.4, (v)] yields that the gradient flows u and v of F and G, respectively, are approximated well by \((J_{t/n}^F)^n[u_0]\) and \((J_{t/n}^G)^n[v_0]\). More precisely, for \(t > 0\) and \(n > 0\), the a priori estimate gives
By these a priori estimates and the estimate for discrete flows (61), we see that
To derive the final estimates, we need to make good choices for n. If \(0 \le t \le \delta / L^2\), we take \(n=1\) and obtain
If \(t > \delta /L^2\), we choose \(n =\lceil L^{2/3} (t/\delta )^{1/3} \rceil \), which is larger than or equal to 2. In that case,
We then obtain
\(\square \)
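As an illustration of the resolvent machinery in this proof (our own one-dimensional sketch, not the TVF functionals of the article): for \(F(u) = \frac{1}{2}u^2\) on \(H = \mathbb {R}\), the resolvent has the closed form \(J^F_\tau [u] = u/(1+\tau )\), the gradient flow is \(u(t) = e^{-t}u_0\), and the iterated resolvents \((J^F_{t/n})^n[u_0]\) converge to the flow as \(n \rightarrow \infty \), consistent with the a priori estimate invoked above.

```python
import math

def resolvent(u, tau):
    # J_tau[u] = argmin_v ( F(v) + |v - u|^2 / (2*tau) ) for F(v) = v^2 / 2;
    # the optimality condition v + (v - u)/tau = 0 gives v = u / (1 + tau).
    return u / (1.0 + tau)

def iterated_resolvent(u0, t, n):
    # n implicit Euler steps of size t/n: (J_{t/n})^n [u0] = u0 / (1 + t/n)^n.
    u = u0
    for _ in range(n):
        u = resolvent(u, t / n)
    return u

u0, t = 1.0, 2.0
exact = math.exp(-t) * u0  # gradient flow u'(t) = -u(t) evaluated at time t
errors = [abs(iterated_resolvent(u0, t, n) - exact) for n in (1, 10, 100, 1000)]
assert all(e2 < e1 for e1, e2 in zip(errors, errors[1:]))  # errors decrease in n
assert errors[-1] < 1e-3
```

The same implicit (backward Euler) stepping is what makes resolvent schemes unconditionally stable, in contrast to explicit gradient descent steps.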
Smets, B.M.N., Portegies, J., StOnge, E. et al. Total Variation and Mean Curvature PDEs on the Homogeneous Space of Positions and Orientations. J Math Imaging Vis 63, 237–262 (2021). https://doi.org/10.1007/s10851020009914